From 3e10a15da94772b21c63145479ae0a56b698f5e9 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Fri, 3 May 2024 17:56:14 +0000 Subject: [PATCH] Update unreleased documentation (#764) * Update versions.json * Deployed 2b9d3ef60 to unreleased in versions with MkDocs 1.5.3 and mike 2.0.0 * Sort docs versions --------- Co-authored-by: GitHub Actions Bot --- versions/unreleased/guides/host/index.html | 186 +++++++++++-------- versions/unreleased/search/search_index.json | 2 +- versions/unreleased/sitemap.xml.gz | Bin 2435 -> 2435 bytes 3 files changed, 114 insertions(+), 74 deletions(-) diff --git a/versions/unreleased/guides/host/index.html b/versions/unreleased/guides/host/index.html index 3fdcfa140..10516bc03 100644 --- a/versions/unreleased/guides/host/index.html +++ b/versions/unreleased/guides/host/index.html @@ -192,7 +192,7 @@
- + Skip to content @@ -956,11 +956,11 @@
-

Start the Prefect server and it should begin to use your PostgreSQL database instance:

+

Start the Prefect server to use your PostgreSQL database instance:

prefect server start
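The connection string itself is typically set before starting the server. A minimal sketch (the database name, user, and password below are placeholders, not values taken from this guide), pointing Prefect at a local PostgreSQL instance:

prefect config set PREFECT_API_DATABASE_CONNECTION_URL="postgresql+asyncpg://postgres:yourTopSecretPassword@localhost:5432/prefect"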
 

In-memory database

-

One of the benefits of SQLite is in-memory database support.

To use an in-memory SQLite database, set the following environment variable:

prefect config set PREFECT_API_DATABASE_CONNECTION_URL="sqlite+aiosqlite:///file::memory:?cache=shared&uri=true&check_same_thread=false"
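To confirm which connection URL is active (assuming the default profile is in use), one option is to inspect the current configuration:

prefect config view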
@@ -13634,9 +13671,9 @@ 

Migrations

Prefect uses Alembic to manage database migrations. Alembic is a -database migration tool for usage with the SQLAlchemy Database Toolkit for Python. Alembic provides a framework for +database migration tool to use with the SQLAlchemy Database Toolkit for Python. Alembic provides a framework for generating and applying schema changes to a database.

-

To apply migrations to your database you can run the following commands:

+

Apply migrations to your database with the following commands:

To upgrade:

prefect server database upgrade -y
@@ -13649,8 +13686,8 @@ 

Migrations
prefect server database downgrade -y -r -1
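The -y flag skips the confirmation prompt and -r selects the Alembic revision target. As a hedged sketch (using a standard Alembic target rather than a value taken from this guide), downgrading all the way back to an empty schema might look like:

prefect server database downgrade -y -r base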
 
@@ -13663,18 +13700,21 @@

See the contributing docs for information on how to create new database migrations.

+

See the contributing docs to create new database migrations.

Notifications

-

When you use Prefect Cloud you gain access to a hosted platform with Workspace & User controls, Events, and Automations. Prefect Cloud has an option for automation notifications. The more limited Notifications option is provided for the self-hosted Prefect server.

+

Prefect Cloud gives you access to a hosted platform with Workspace & User controls, Events, and Automations. Prefect Cloud has an option for automation notifications. The more limited Notifications option is provided for the self-hosted Prefect server.

Notifications enable you to set up alerts that are sent when a flow enters any state you specify. When your flow and task runs change state, Prefect notes the state change and checks whether the new state matches any notification policies. If it does, a new notification is queued.

-

Prefect supports sending notifications via:

+

Prefect supports sending notifications through:

    -
  • Slack message to a channel
  • -
  • Microsoft Teams message to a channel
  • -
  • Opsgenie to alerts
  • -
  • PagerDuty to alerts
  • -
  • Twilio to phone numbers
  • -
  • Email (requires your own server)
  • +
  • Custom webhook
  • +
  • Discord webhook
  • +
  • Mattermost webhook
  • +
  • Microsoft Teams webhook
  • +
  • Opsgenie webhook
  • +
  • PagerDuty webhook
  • +
  • Sendgrid email
  • +
  • Slack webhook
  • +
  • Twilio SMS

Notifications in Prefect Cloud

@@ -13683,11 +13723,11 @@

Configure notifications

To configure a notification in a Prefect server, go to the Notifications page and select Create Notification or the + button.

Creating a notification in the Prefect UI

-

Notifications are structured just as you would describe them to someone. You can choose:

+

You can choose:

    -
  • Which run states should trigger a notification.
  • -
  • Tags to filter which flow runs are covered by the notification.
  • -
  • Whether to send an email, a Slack message, Microsoft Teams message, or other services.
  • +
  • Which run states should trigger a notification
  • +
  • Tags to filter which flow runs are covered by the notification
  • +
  • Whether to send an email, a Slack message, a Microsoft Teams message, or use another service

For email notifications (supported on Prefect Cloud only), the configuration requires email addresses to which the message is sent.

For Slack notifications, the configuration requires webhook credentials for your Slack and the channel to which the message is sent.

diff --git a/versions/unreleased/search/search_index.json b/versions/unreleased/search/search_index.json index 599261f17..57472d3d3 100644 --- a/versions/unreleased/search/search_index.json +++ b/versions/unreleased/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"],"fields":{"title":{"boost":1000.0},"text":{"boost":1.0},"tags":{"boost":1000000.0}}},"docs":[{"location":"","title":"Welcome to Prefect","text":"

Prefect is a workflow orchestration tool empowering developers to build, observe, and react to data pipelines.

It's the easiest way to transform any Python function into a unit of work that can be observed and orchestrated. Just bring your Python code, sprinkle in a few decorators, and go!

With Prefect you gain:

  • scheduling
  • retries
  • logging
  • convenient async functionality
  • caching
  • notifications
  • observability
  • event-based orchestration
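The "sprinkle in a few decorators" step mentioned above is small in practice. A minimal sketch (the function names and retry count are illustrative, not taken from this page):

from prefect import flow, task

@task(retries=2)
def say_hello(name: str) -> str:
    # a plain Python function turned into a Prefect task with retries
    return f"Hello, {name}!"

@flow(log_prints=True)
def hello_flow(name: str = "world"):
    # a plain Python function turned into an observable, orchestrated flow
    print(say_hello(name))

if __name__ == "__main__":
    hello_flow()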

","tags":["getting started","quick start","overview"],"boost":2},{"location":"#new-to-prefect","title":"New to Prefect?","text":"

Get up and running quickly with the quickstart guide.

Want more hands-on practice to productionize your workflows? Follow our tutorial.

For deeper dives into common use cases, explore our guides.

Take your understanding even further with Prefect's concepts and API reference.

Join Prefect's vibrant community of over 26,000 engineers to learn with others and share your knowledge!

Need help?

Get your questions answered by a Prefect Product Advocate! Book a Meeting

","tags":["getting started","quick start","overview"],"boost":2},{"location":"faq/","title":"Frequently Asked Questions","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#prefect","title":"Prefect","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#how-is-prefect-licensed","title":"How is Prefect licensed?","text":"

Prefect is licensed under the Apache 2.0 License, an OSI approved open-source license. If you have any questions about licensing, please contact us.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#is-the-prefect-v2-cloud-url-different-than-the-prefect-v1-cloud-url","title":"Is the Prefect v2 Cloud URL different than the Prefect v1 Cloud URL?","text":"

Yes. Prefect Cloud for v2 is at app.prefect.cloud/ while Prefect Cloud for v1 is at cloud.prefect.io.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#the-prefect-orchestration-engine","title":"The Prefect Orchestration Engine","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#why-was-the-prefect-orchestration-engine-created","title":"Why was the Prefect orchestration engine created?","text":"

The Prefect orchestration engine has three major objectives:

  • Embracing dynamic, DAG-free workflows
  • An extraordinary developer experience
  • Transparent and observable orchestration rules

As Prefect has matured, so has the modern data stack. The on-demand, dynamic, highly scalable workflows that used to exist principally in the domain of data science and analytics are now prevalent throughout all of data engineering. Few companies have workflows that don\u2019t deal with streaming data, uncertain timing, runtime logic, complex dependencies, versioning, or custom scheduling.

This means that the current generation of workflow managers is built around the wrong abstraction: the directed acyclic graph (DAG). DAGs are an increasingly arcane, constrained way of representing the dynamic, heterogeneous range of modern data and computation patterns.

Furthermore, as workflows have become more complex, it has become even more important to focus on the developer experience of building, testing, and monitoring them. Faced with an explosion of available tools, it is more important than ever for development teams to seek orchestration tools that will be compatible with any code, tools, or services they may require in the future.

And finally, this additional complexity means that providing clear and consistent insight into the behavior of the orchestration engine and any decisions it makes is critically important.

The Prefect orchestration engine represents a unified solution to these three problems.

The Prefect orchestration engine is capable of governing any code through a well-defined series of state transitions designed to maximize the user's understanding of what happened during execution. It's popular to describe \"workflows as code\" or \"orchestration as code,\" but the Prefect engine represents \"code as workflows\": rather than ask users to change how they work to meet the requirements of the orchestrator, we've defined an orchestrator that adapts to how our users work.

To achieve this, we've leveraged the familiar tools of native Python: first class functions, type annotations, and async support. Users are free to implement as much \u2014 or as little \u2014 of the Prefect engine as is useful for their objectives.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#if-im-using-prefect-cloud-2-do-i-still-need-to-run-a-prefect-server-locally","title":"If I\u2019m using Prefect Cloud 2, do I still need to run a Prefect server locally?","text":"

No, Prefect Cloud hosts an instance of the Prefect API for you. In fact, each workspace in Prefect Cloud corresponds directly to a single instance of the Prefect orchestration engine. See the Prefect Cloud Overview for more information.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#features","title":"Features","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#does-prefect-support-mapping","title":"Does Prefect support mapping?","text":"

Yes! For more information, see the Task.map API reference

@flow\ndef my_flow():\n\n    # map over a constant\n    for i in range(10):\n        my_mapped_task(i)\n\n    # map over a task's output\n    l = list_task()\n    for i in l.wait().result():\n        my_mapped_task_2(i)\n

Note that when tasks are called on constant values, they cannot detect their upstream edges automatically. In this example, my_mapped_task_2 does not know that it is downstream from list_task(). Prefect will have convenience functions for detecting these associations, and Prefect's .map() operator will automatically track them.
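A minimal sketch of the .map() operator mentioned above (the task and flow names are illustrative):

from prefect import flow, task

@task
def double(x: int) -> int:
    return x * 2

@flow
def map_flow() -> list:
    # .map() submits one task run per element and tracks the upstream edges
    futures = double.map(range(5))
    return [future.result() for future in futures]

if __name__ == "__main__":
    print(map_flow())  # [0, 2, 4, 6, 8]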

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-i-enforce-ordering-between-tasks-that-dont-share-data","title":"Can I enforce ordering between tasks that don't share data?","text":"

Yes! For more information, see the Tasks section.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#does-prefect-support-proxies","title":"Does Prefect support proxies?","text":"

Yes!

Prefect supports communicating via proxies through the use of environment variables. You can read more about this in the Installation documentation and the article Using Prefect Cloud with proxies.
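A hedged sketch of the standard proxy environment variables (the proxy address is a placeholder, not a value from this page):

export HTTPS_PROXY=http://proxy.example.com:3128
export HTTP_PROXY=http://proxy.example.com:3128
export NO_PROXY=localhost,127.0.0.1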

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-i-run-prefect-flows-on-linux","title":"Can I run Prefect flows on Linux?","text":"

Yes!

See the Installation documentation and Linux installation notes for details on getting started with Prefect on Linux.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-i-run-prefect-flows-on-windows","title":"Can I run Prefect flows on Windows?","text":"

Yes!

See the Installation documentation and Windows installation notes for details on getting started with Prefect on Windows.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#what-external-requirements-does-prefect-have","title":"What external requirements does Prefect have?","text":"

Prefect does not have any additional requirements besides those installed by pip install --pre prefect. The entire system, including the UI and services, can be run in a single process via prefect server start and does not require Docker.

Prefect Cloud users do not need to worry about the Prefect database. Prefect Cloud uses PostgreSQL on GCP behind the scenes. To use PostgreSQL with a self-hosted Prefect server, users must provide the connection string for a running database via the PREFECT_API_DATABASE_CONNECTION_URL environment variable.
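As a hedged sketch (the hostname, database name, and credentials are placeholders), providing that connection string might look like:

export PREFECT_API_DATABASE_CONNECTION_URL="postgresql+asyncpg://postgres:yourTopSecretPassword@db.example.com:5432/prefect"
prefect server start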

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#what-databases-does-prefect-support","title":"What databases does Prefect support?","text":"

A self-hosted Prefect server can work with SQLite and PostgreSQL. New Prefect installs default to a SQLite database hosted at ~/.prefect/prefect.db on Mac or Linux machines. SQLite and PostgreSQL are not installed by Prefect.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#how-do-i-choose-between-sqlite-and-postgres","title":"How do I choose between SQLite and Postgres?","text":"

SQLite generally works well for getting started and exploring Prefect. We have tested it with up to hundreds of thousands of task runs. Many users may be able to stay on SQLite for some time. However, for production uses, Prefect Cloud or self-hosted PostgreSQL is highly recommended. Under write-heavy workloads, SQLite performance can begin to suffer. Users running many flows with high degrees of parallelism or concurrency should use PostgreSQL.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#relationship-with-other-prefect-products","title":"Relationship with other Prefect products","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-a-flow-written-with-prefect-1-be-orchestrated-with-prefect-2-and-vice-versa","title":"Can a flow written with Prefect 1 be orchestrated with Prefect 2 and vice versa?","text":"

No. Flows written with the Prefect 1 client must be rewritten with the Prefect 2 client. For most flows, this should take just a few minutes.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-i-use-prefect-1-and-prefect-2-at-the-same-time-on-my-local-machine","title":"Can I use Prefect 1 and Prefect 2 at the same time on my local machine?","text":"

Yes. Just use different virtual environments.
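A minimal sketch (the paths and version pins are illustrative):

# one environment for Prefect 1.x
python -m venv ~/venvs/prefect1
~/venvs/prefect1/bin/pip install "prefect<2"

# a separate environment for Prefect 2.x
python -m venv ~/venvs/prefect2
~/venvs/prefect2/bin/pip install "prefect>=2,<3"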

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"api-ref/","title":"API Reference","text":"

Prefect auto-generates reference documentation for the following components:

  • Prefect Python SDK: used to build, test, and execute workflows.
  • Prefect REST API: used by both workflow clients and the Prefect UI for orchestration and data retrieval.
  • Prefect Cloud REST API documentation is available at https://app.prefect.cloud/api/docs.
  • The REST API documentation for a locally hosted open-source Prefect server is available in the Prefect REST API Reference.
  • Prefect Server SDK: used primarily by the server to work with workflow metadata and enforce orchestration logic. This is only used directly by Prefect developers and contributors.

Self-hosted docs

When self-hosting, you can access REST API documentation at the /docs endpoint of your PREFECT_API_URL - for example, if you ran prefect server start with no additional configuration you can find this reference at http://localhost:4200/docs.
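For example, after starting a local server, one way to confirm the API is reachable and then browse the reference (the health endpoint shown here is an assumption based on the default API prefix):

prefect server start
curl http://localhost:4200/api/health   # run from another shell
# then open http://localhost:4200/docs in a browser for the interactive REST API docs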

","tags":["API","Prefect API","Prefect SDK","Prefect Cloud","REST API","development","orchestration"]},{"location":"api-ref/rest-api-reference/","title":"Prefect server REST API reference","text":"","tags":["REST API","Prefect server"]},{"location":"api-ref/prefect/agent/","title":"prefect.agent","text":"","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent","title":"prefect.agent","text":"

DEPRECATION WARNING:

This module is deprecated as of March 2024 and will not be available after September 2024. Agents have been replaced by workers, which offer enhanced functionality and better performance.

For upgrade instructions, see https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.

","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent","title":"PrefectAgent","text":"Source code in prefect/agent.py
@deprecated_class(\n    start_date=\"Mar 2024\",\n    help=\"Use a worker instead. Refer to the upgrade guide for more information: https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\",\n)\nclass PrefectAgent:\n    def __init__(\n        self,\n        work_queues: List[str] = None,\n        work_queue_prefix: Union[str, List[str]] = None,\n        work_pool_name: str = None,\n        prefetch_seconds: int = None,\n        default_infrastructure: Infrastructure = None,\n        default_infrastructure_document_id: UUID = None,\n        limit: Optional[int] = None,\n    ) -> None:\n        if default_infrastructure and default_infrastructure_document_id:\n            raise ValueError(\n                \"Provide only one of 'default_infrastructure' and\"\n                \" 'default_infrastructure_document_id'.\"\n            )\n\n        self.work_queues: Set[str] = set(work_queues) if work_queues else set()\n        self.work_pool_name = work_pool_name\n        self.prefetch_seconds = prefetch_seconds\n        self.submitting_flow_run_ids = set()\n        self.cancelling_flow_run_ids = set()\n        self.scheduled_task_scopes = set()\n        self.started = False\n        self.logger = get_logger(\"agent\")\n        self.task_group: Optional[anyio.abc.TaskGroup] = None\n        self.limit: Optional[int] = limit\n        self.limiter: Optional[anyio.CapacityLimiter] = None\n        self.client: Optional[PrefectClient] = None\n\n        if isinstance(work_queue_prefix, str):\n            work_queue_prefix = [work_queue_prefix]\n        self.work_queue_prefix = work_queue_prefix\n\n        self._work_queue_cache_expiration: pendulum.DateTime = None\n        self._work_queue_cache: List[WorkQueue] = []\n\n        if default_infrastructure:\n            self.default_infrastructure_document_id = (\n                default_infrastructure._block_document_id\n            )\n            self.default_infrastructure = default_infrastructure\n        elif default_infrastructure_document_id:\n            self.default_infrastructure_document_id = default_infrastructure_document_id\n            self.default_infrastructure = None\n        else:\n            self.default_infrastructure = Process()\n            self.default_infrastructure_document_id = None\n\n    async def update_matched_agent_work_queues(self):\n        if self.work_queue_prefix:\n            if self.work_pool_name:\n                matched_queues = await self.client.read_work_queues(\n                    work_pool_name=self.work_pool_name,\n                    work_queue_filter=WorkQueueFilter(\n                        name=WorkQueueFilterName(startswith_=self.work_queue_prefix)\n                    ),\n                )\n            else:\n                matched_queues = await self.client.match_work_queues(\n                    self.work_queue_prefix, work_pool_name=DEFAULT_AGENT_WORK_POOL_NAME\n                )\n\n            matched_queues = set(q.name for q in matched_queues)\n            if matched_queues != self.work_queues:\n                new_queues = matched_queues - self.work_queues\n                removed_queues = self.work_queues - matched_queues\n                if new_queues:\n                    self.logger.info(\n                        f\"Matched new work queues: {', '.join(new_queues)}\"\n                    )\n                if removed_queues:\n                    self.logger.info(\n                        f\"Work queues no longer matched: {', '.join(removed_queues)}\"\n                    )\n  
          self.work_queues = matched_queues\n\n    async def get_work_queues(self) -> AsyncIterator[WorkQueue]:\n        \"\"\"\n        Loads the work queue objects corresponding to the agent's target work\n        queues. If any of them don't exist, they are created.\n        \"\"\"\n\n        # if the queue cache has not expired, yield queues from the cache\n        now = pendulum.now(\"UTC\")\n        if (self._work_queue_cache_expiration or now) > now:\n            for queue in self._work_queue_cache:\n                yield queue\n            return\n\n        # otherwise clear the cache, set the expiration for 30 seconds, and\n        # reload the work queues\n        self._work_queue_cache.clear()\n        self._work_queue_cache_expiration = now.add(seconds=30)\n\n        await self.update_matched_agent_work_queues()\n\n        for name in self.work_queues:\n            try:\n                work_queue = await self.client.read_work_queue_by_name(\n                    work_pool_name=self.work_pool_name, name=name\n                )\n            except (ObjectNotFound, Exception):\n                work_queue = None\n\n            # if the work queue wasn't found and the agent is NOT polling\n            # for queues using a regex, try to create it\n            if work_queue is None and not self.work_queue_prefix:\n                try:\n                    work_queue = await self.client.create_work_queue(\n                        work_pool_name=self.work_pool_name, name=name\n                    )\n                except Exception:\n                    # if creating it raises an exception, it was probably just\n                    # created by some other agent; rather than entering a re-read\n                    # loop with new error handling, we log the exception and\n                    # continue.\n                    self.logger.exception(f\"Failed to create work queue {name!r}.\")\n                    continue\n                else:\n                    log_str = f\"Created work queue {name!r}\"\n                    if self.work_pool_name:\n                        log_str = (\n                            f\"Created work queue {name!r} in work pool\"\n                            f\" {self.work_pool_name!r}.\"\n                        )\n                    else:\n                        log_str = f\"Created work queue '{name}'.\"\n                    self.logger.info(log_str)\n\n            if work_queue is None:\n                self.logger.error(\n                    f\"Work queue '{name!r}' with prefix {self.work_queue_prefix} wasn't\"\n                    \" found\"\n                )\n            else:\n                self._work_queue_cache.append(work_queue)\n                yield work_queue\n\n    async def get_and_submit_flow_runs(self) -> List[FlowRun]:\n        \"\"\"\n        The principle method on agents. Queries for scheduled flow runs and submits\n        them for execution in parallel.\n        \"\"\"\n        if not self.started:\n            raise RuntimeError(\n                \"Agent is not started. 
Use `async with PrefectAgent()...`\"\n            )\n\n        self.logger.debug(\"Checking for scheduled flow runs...\")\n\n        before = pendulum.now(\"utc\").add(\n            seconds=self.prefetch_seconds or PREFECT_AGENT_PREFETCH_SECONDS.value()\n        )\n\n        submittable_runs: List[FlowRun] = []\n\n        if self.work_pool_name:\n            responses = await self.client.get_scheduled_flow_runs_for_work_pool(\n                work_pool_name=self.work_pool_name,\n                work_queue_names=[wq.name async for wq in self.get_work_queues()],\n                scheduled_before=before,\n            )\n            submittable_runs.extend([response.flow_run for response in responses])\n\n        else:\n            # load runs from each work queue\n            async for work_queue in self.get_work_queues():\n                # print a nice message if the work queue is paused\n                if work_queue.is_paused:\n                    self.logger.info(\n                        f\"Work queue {work_queue.name!r} ({work_queue.id}) is paused.\"\n                    )\n\n                else:\n                    try:\n                        queue_runs = await self.client.get_runs_in_work_queue(\n                            id=work_queue.id, limit=10, scheduled_before=before\n                        )\n                        submittable_runs.extend(queue_runs)\n                    except ObjectNotFound:\n                        self.logger.error(\n                            f\"Work queue {work_queue.name!r} ({work_queue.id}) not\"\n                            \" found.\"\n                        )\n                    except Exception as exc:\n                        self.logger.exception(exc)\n\n            submittable_runs.sort(key=lambda run: run.next_scheduled_start_time)\n\n        for flow_run in submittable_runs:\n            # don't resubmit a run\n            if flow_run.id in self.submitting_flow_run_ids:\n                continue\n\n            try:\n                if self.limiter:\n                    self.limiter.acquire_on_behalf_of_nowait(flow_run.id)\n            except anyio.WouldBlock:\n                self.logger.info(\n                    f\"Flow run limit reached; {self.limiter.borrowed_tokens} flow runs\"\n                    \" in progress.\"\n                )\n                break\n            else:\n                self.logger.info(f\"Submitting flow run '{flow_run.id}'\")\n                self.submitting_flow_run_ids.add(flow_run.id)\n                self.task_group.start_soon(\n                    self.submit_run,\n                    flow_run,\n                )\n\n        return list(\n            filter(lambda run: run.id in self.submitting_flow_run_ids, submittable_runs)\n        )\n\n    async def check_for_cancelled_flow_runs(self):\n        if not self.started:\n            raise RuntimeError(\n                \"Agent is not started. 
Use `async with PrefectAgent()...`\"\n            )\n\n        self.logger.debug(\"Checking for cancelled flow runs...\")\n\n        work_queue_filter = (\n            WorkQueueFilter(name=WorkQueueFilterName(any_=list(self.work_queues)))\n            if self.work_queues\n            else None\n        )\n\n        work_pool_filter = (\n            WorkPoolFilter(name=WorkPoolFilterName(any_=[self.work_pool_name]))\n            if self.work_pool_name\n            else WorkPoolFilter(name=WorkPoolFilterName(any_=[\"default-agent-pool\"]))\n        )\n        named_cancelling_flow_runs = await self.client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=FlowRunFilterState(\n                    type=FlowRunFilterStateType(any_=[StateType.CANCELLED]),\n                    name=FlowRunFilterStateName(any_=[\"Cancelling\"]),\n                ),\n                # Avoid duplicate cancellation calls\n                id=FlowRunFilterId(not_any_=list(self.cancelling_flow_run_ids)),\n            ),\n            work_pool_filter=work_pool_filter,\n            work_queue_filter=work_queue_filter,\n        )\n\n        typed_cancelling_flow_runs = await self.client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=FlowRunFilterState(\n                    type=FlowRunFilterStateType(any_=[StateType.CANCELLING]),\n                ),\n                # Avoid duplicate cancellation calls\n                id=FlowRunFilterId(not_any_=list(self.cancelling_flow_run_ids)),\n            ),\n            work_pool_filter=work_pool_filter,\n            work_queue_filter=work_queue_filter,\n        )\n\n        cancelling_flow_runs = named_cancelling_flow_runs + typed_cancelling_flow_runs\n\n        if cancelling_flow_runs:\n            self.logger.info(\n                f\"Found {len(cancelling_flow_runs)} flow runs awaiting cancellation.\"\n            )\n\n        for flow_run in cancelling_flow_runs:\n            self.cancelling_flow_run_ids.add(flow_run.id)\n            self.task_group.start_soon(self.cancel_run, flow_run)\n\n        return cancelling_flow_runs\n\n    async def cancel_run(self, flow_run: FlowRun) -> None:\n        \"\"\"\n        Cancel a flow run by killing its infrastructure\n        \"\"\"\n        if not flow_run.infrastructure_pid:\n            self.logger.error(\n                f\"Flow run '{flow_run.id}' does not have an infrastructure pid\"\n                \" attached. Cancellation cannot be guaranteed.\"\n            )\n            await self._mark_flow_run_as_cancelled(\n                flow_run,\n                state_updates={\n                    \"message\": (\n                        \"This flow run is missing infrastructure tracking information\"\n                        \" and cancellation cannot be guaranteed.\"\n                    )\n                },\n            )\n            return\n\n        try:\n            infrastructure = await self.get_infrastructure(flow_run)\n            if infrastructure.is_using_a_runner:\n                self.logger.info(\n                    f\"Skipping cancellation because flow run {str(flow_run.id)!r} is\"\n                    \" using enhanced cancellation. A dedicated runner will handle\"\n                    \" cancellation.\"\n                )\n                return\n        except Exception:\n            self.logger.exception(\n                f\"Failed to get infrastructure for flow run '{flow_run.id}'. 
\"\n                \"Flow run cannot be cancelled.\"\n            )\n            # Note: We leave this flow run in the cancelling set because it cannot be\n            #       cancelled and this will prevent additional attempts.\n            return\n\n        if not hasattr(infrastructure, \"kill\"):\n            self.logger.error(\n                f\"Flow run '{flow_run.id}' infrastructure {infrastructure.type!r} \"\n                \"does not support killing created infrastructure. \"\n                \"Cancellation cannot be guaranteed.\"\n            )\n            return\n\n        self.logger.info(\n            f\"Killing {infrastructure.type} {flow_run.infrastructure_pid} for flow run \"\n            f\"'{flow_run.id}'...\"\n        )\n        try:\n            await infrastructure.kill(flow_run.infrastructure_pid)\n        except InfrastructureNotFound as exc:\n            self.logger.warning(f\"{exc} Marking flow run as cancelled.\")\n            await self._mark_flow_run_as_cancelled(flow_run)\n        except InfrastructureNotAvailable as exc:\n            self.logger.warning(f\"{exc} Flow run cannot be cancelled by this agent.\")\n        except Exception:\n            self.logger.exception(\n                \"Encountered exception while killing infrastructure for flow run \"\n                f\"'{flow_run.id}'. Flow run may not be cancelled.\"\n            )\n            # We will try again on generic exceptions\n            self.cancelling_flow_run_ids.remove(flow_run.id)\n            return\n        else:\n            await self._mark_flow_run_as_cancelled(flow_run)\n            self.logger.info(f\"Cancelled flow run '{flow_run.id}'!\")\n\n    async def _mark_flow_run_as_cancelled(\n        self, flow_run: FlowRun, state_updates: Optional[dict] = None\n    ) -> None:\n        state_updates = state_updates or {}\n        state_updates.setdefault(\"name\", \"Cancelled\")\n        state_updates.setdefault(\"type\", StateType.CANCELLED)\n        state = flow_run.state.copy(update=state_updates)\n\n        await self.client.set_flow_run_state(flow_run.id, state, force=True)\n\n        # Do not remove the flow run from the cancelling set immediately because\n        # the API caches responses for the `read_flow_runs` and we do not want to\n        # duplicate cancellations.\n        await self._schedule_task(\n            60 * 10, self.cancelling_flow_run_ids.remove, flow_run.id\n        )\n\n    async def get_infrastructure(self, flow_run: FlowRun) -> Infrastructure:\n        deployment = await self.client.read_deployment(flow_run.deployment_id)\n\n        flow = await self.client.read_flow(deployment.flow_id)\n\n        # overrides only apply when configuring known infra blocks\n        if not deployment.infrastructure_document_id:\n            if self.default_infrastructure:\n                infra_block = self.default_infrastructure\n            else:\n                infra_document = await self.client.read_block_document(\n                    self.default_infrastructure_document_id\n                )\n                infra_block = Block._from_block_document(infra_document)\n\n            # Add flow run metadata to the infrastructure\n            prepared_infrastructure = infra_block.prepare_for_flow_run(\n                flow_run, deployment=deployment, flow=flow\n            )\n            return prepared_infrastructure\n\n        ## get infra\n        infra_document = await self.client.read_block_document(\n            deployment.infrastructure_document_id\n        )\n\n     
   # this piece of logic applies any overrides that may have been set on the\n        # deployment; overrides are defined as dot.delimited paths on possibly nested\n        # attributes of the infrastructure block\n        doc_dict = infra_document.dict()\n        infra_dict = doc_dict.get(\"data\", {})\n        for override, value in (deployment.job_variables or {}).items():\n            nested_fields = override.split(\".\")\n            data = infra_dict\n            for field in nested_fields[:-1]:\n                data = data[field]\n\n            # once we reach the end, set the value\n            data[nested_fields[-1]] = value\n\n        # reconstruct the infra block\n        doc_dict[\"data\"] = infra_dict\n        infra_document = BlockDocument(**doc_dict)\n        infrastructure_block = Block._from_block_document(infra_document)\n\n        # TODO: Here the agent may update the infrastructure with agent-level settings\n\n        # Add flow run metadata to the infrastructure\n        prepared_infrastructure = infrastructure_block.prepare_for_flow_run(\n            flow_run, deployment=deployment, flow=flow\n        )\n\n        return prepared_infrastructure\n\n    async def submit_run(self, flow_run: FlowRun) -> None:\n        \"\"\"\n        Submit a flow run to the infrastructure\n        \"\"\"\n        ready_to_submit = await self._propose_pending_state(flow_run)\n\n        if ready_to_submit:\n            try:\n                infrastructure = await self.get_infrastructure(flow_run)\n            except Exception as exc:\n                self.logger.exception(\n                    f\"Failed to get infrastructure for flow run '{flow_run.id}'.\"\n                )\n                await self._propose_failed_state(flow_run, exc)\n                if self.limiter:\n                    self.limiter.release_on_behalf_of(flow_run.id)\n            else:\n                # Wait for submission to be completed. Note that the submission function\n                # may continue to run in the background after this exits.\n                readiness_result = await self.task_group.start(\n                    self._submit_run_and_capture_errors, flow_run, infrastructure\n                )\n\n                if readiness_result and not isinstance(readiness_result, Exception):\n                    try:\n                        await self.client.update_flow_run(\n                            flow_run_id=flow_run.id,\n                            infrastructure_pid=str(readiness_result),\n                        )\n                    except Exception:\n                        self.logger.exception(\n                            \"An error occurred while setting the `infrastructure_pid`\"\n                            f\" on flow run {flow_run.id!r}. 
The flow run will not be\"\n                            \" cancellable.\"\n                        )\n\n                self.logger.info(f\"Completed submission of flow run '{flow_run.id}'\")\n\n        else:\n            # If the run is not ready to submit, release the concurrency slot\n            if self.limiter:\n                self.limiter.release_on_behalf_of(flow_run.id)\n\n        self.submitting_flow_run_ids.remove(flow_run.id)\n\n    async def _submit_run_and_capture_errors(\n        self,\n        flow_run: FlowRun,\n        infrastructure: Infrastructure,\n        task_status: anyio.abc.TaskStatus = None,\n    ) -> Union[InfrastructureResult, Exception]:\n        # Note: There is not a clear way to determine if task_status.started() has been\n        #       called without peeking at the internal `_future`. Ideally we could just\n        #       check if the flow run id has been removed from `submitting_flow_run_ids`\n        #       but it is not so simple to guarantee that this coroutine yields back\n        #       to `submit_run` to execute that line when exceptions are raised during\n        #       submission.\n        try:\n            result = await infrastructure.run(task_status=task_status)\n        except Exception as exc:\n            if not task_status._future.done():\n                # This flow run was being submitted and did not start successfully\n                self.logger.exception(\n                    f\"Failed to submit flow run '{flow_run.id}' to infrastructure.\"\n                )\n                # Mark the task as started to prevent agent crash\n                task_status.started(exc)\n                await self._propose_crashed_state(\n                    flow_run, \"Flow run could not be submitted to infrastructure\"\n                )\n            else:\n                self.logger.exception(\n                    f\"An error occurred while monitoring flow run '{flow_run.id}'. \"\n                    \"The flow run will not be marked as failed, but an issue may have \"\n                    \"occurred.\"\n                )\n            return exc\n        finally:\n            if self.limiter:\n                self.limiter.release_on_behalf_of(flow_run.id)\n\n        if not task_status._future.done():\n            self.logger.error(\n                f\"Infrastructure returned without reporting flow run '{flow_run.id}' \"\n                \"as started or raising an error. This behavior is not expected and \"\n                \"generally indicates improper implementation of infrastructure. The \"\n                \"flow run will not be marked as failed, but an issue may have occurred.\"\n            )\n            # Mark the task as started to prevent agent crash\n            task_status.started()\n\n        if result.status_code != 0:\n            await self._propose_crashed_state(\n                flow_run,\n                (\n                    \"Flow run infrastructure exited with non-zero status code\"\n                    f\" {result.status_code}.\"\n                ),\n            )\n\n        return result\n\n    async def _propose_pending_state(self, flow_run: FlowRun) -> bool:\n        state = flow_run.state\n        try:\n            state = await propose_state(self.client, Pending(), flow_run_id=flow_run.id)\n        except Abort as exc:\n            self.logger.info(\n                (\n                    f\"Aborted submission of flow run '{flow_run.id}'. 
\"\n                    f\"Server sent an abort signal: {exc}\"\n                ),\n            )\n            return False\n        except Exception:\n            self.logger.error(\n                f\"Failed to update state of flow run '{flow_run.id}'\",\n                exc_info=True,\n            )\n            return False\n\n        if not state.is_pending():\n            self.logger.info(\n                (\n                    f\"Aborted submission of flow run '{flow_run.id}': \"\n                    f\"Server returned a non-pending state {state.type.value!r}\"\n                ),\n            )\n            return False\n\n        return True\n\n    async def _propose_failed_state(self, flow_run: FlowRun, exc: Exception) -> None:\n        try:\n            await propose_state(\n                self.client,\n                await exception_to_failed_state(message=\"Submission failed.\", exc=exc),\n                flow_run_id=flow_run.id,\n            )\n        except Abort:\n            # We've already failed, no need to note the abort but we don't want it to\n            # raise in the agent process\n            pass\n        except Exception:\n            self.logger.error(\n                f\"Failed to update state of flow run '{flow_run.id}'\",\n                exc_info=True,\n            )\n\n    async def _propose_crashed_state(self, flow_run: FlowRun, message: str) -> None:\n        try:\n            state = await propose_state(\n                self.client,\n                Crashed(message=message),\n                flow_run_id=flow_run.id,\n            )\n        except Abort:\n            # Flow run already marked as failed\n            pass\n        except Exception:\n            self.logger.exception(f\"Failed to update state of flow run '{flow_run.id}'\")\n        else:\n            if state.is_crashed():\n                self.logger.info(\n                    f\"Reported flow run '{flow_run.id}' as crashed: {message}\"\n                )\n\n    async def _schedule_task(self, __in_seconds: int, fn, *args, **kwargs):\n        \"\"\"\n        Schedule a background task to start after some time.\n\n        These tasks will be run immediately when the agent exits instead of waiting.\n\n        The function may be async or sync. 
Async functions will be awaited.\n        \"\"\"\n\n        async def wrapper(task_status):\n            # If we are shutting down, do not sleep; otherwise sleep until the scheduled\n            # time or shutdown\n            if self.started:\n                with anyio.CancelScope() as scope:\n                    self.scheduled_task_scopes.add(scope)\n                    task_status.started()\n                    await anyio.sleep(__in_seconds)\n\n                self.scheduled_task_scopes.remove(scope)\n            else:\n                task_status.started()\n\n            result = fn(*args, **kwargs)\n            if inspect.iscoroutine(result):\n                await result\n\n        await self.task_group.start(wrapper)\n\n    # Context management ---------------------------------------------------------------\n\n    async def start(self):\n        self.started = True\n        self.task_group = anyio.create_task_group()\n        self.limiter = (\n            anyio.CapacityLimiter(self.limit) if self.limit is not None else None\n        )\n        self.client = get_client()\n        await self.client.__aenter__()\n        await self.task_group.__aenter__()\n\n    async def shutdown(self, *exc_info):\n        self.started = False\n        # We must cancel scheduled task scopes before closing the task group\n        for scope in self.scheduled_task_scopes:\n            scope.cancel()\n        await self.task_group.__aexit__(*exc_info)\n        await self.client.__aexit__(*exc_info)\n        self.task_group = None\n        self.client = None\n        self.submitting_flow_run_ids.clear()\n        self.cancelling_flow_run_ids.clear()\n        self.scheduled_task_scopes.clear()\n        self._work_queue_cache_expiration = None\n        self._work_queue_cache = []\n\n    async def __aenter__(self):\n        await self.start()\n        return self\n\n    async def __aexit__(self, *exc_info):\n        await self.shutdown(*exc_info)\n
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.cancel_run","title":"cancel_run async","text":"

Cancel a flow run by killing its infrastructure

Source code in prefect/agent.py
async def cancel_run(self, flow_run: FlowRun) -> None:\n    \"\"\"\n    Cancel a flow run by killing its infrastructure\n    \"\"\"\n    if not flow_run.infrastructure_pid:\n        self.logger.error(\n            f\"Flow run '{flow_run.id}' does not have an infrastructure pid\"\n            \" attached. Cancellation cannot be guaranteed.\"\n        )\n        await self._mark_flow_run_as_cancelled(\n            flow_run,\n            state_updates={\n                \"message\": (\n                    \"This flow run is missing infrastructure tracking information\"\n                    \" and cancellation cannot be guaranteed.\"\n                )\n            },\n        )\n        return\n\n    try:\n        infrastructure = await self.get_infrastructure(flow_run)\n        if infrastructure.is_using_a_runner:\n            self.logger.info(\n                f\"Skipping cancellation because flow run {str(flow_run.id)!r} is\"\n                \" using enhanced cancellation. A dedicated runner will handle\"\n                \" cancellation.\"\n            )\n            return\n    except Exception:\n        self.logger.exception(\n            f\"Failed to get infrastructure for flow run '{flow_run.id}'. \"\n            \"Flow run cannot be cancelled.\"\n        )\n        # Note: We leave this flow run in the cancelling set because it cannot be\n        #       cancelled and this will prevent additional attempts.\n        return\n\n    if not hasattr(infrastructure, \"kill\"):\n        self.logger.error(\n            f\"Flow run '{flow_run.id}' infrastructure {infrastructure.type!r} \"\n            \"does not support killing created infrastructure. \"\n            \"Cancellation cannot be guaranteed.\"\n        )\n        return\n\n    self.logger.info(\n        f\"Killing {infrastructure.type} {flow_run.infrastructure_pid} for flow run \"\n        f\"'{flow_run.id}'...\"\n    )\n    try:\n        await infrastructure.kill(flow_run.infrastructure_pid)\n    except InfrastructureNotFound as exc:\n        self.logger.warning(f\"{exc} Marking flow run as cancelled.\")\n        await self._mark_flow_run_as_cancelled(flow_run)\n    except InfrastructureNotAvailable as exc:\n        self.logger.warning(f\"{exc} Flow run cannot be cancelled by this agent.\")\n    except Exception:\n        self.logger.exception(\n            \"Encountered exception while killing infrastructure for flow run \"\n            f\"'{flow_run.id}'. Flow run may not be cancelled.\"\n        )\n        # We will try again on generic exceptions\n        self.cancelling_flow_run_ids.remove(flow_run.id)\n        return\n    else:\n        await self._mark_flow_run_as_cancelled(flow_run)\n        self.logger.info(f\"Cancelled flow run '{flow_run.id}'!\")\n
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.get_and_submit_flow_runs","title":"get_and_submit_flow_runs async","text":"

The principal method on agents. Queries for scheduled flow runs and submits them for execution in parallel.

Source code in prefect/agent.py
async def get_and_submit_flow_runs(self) -> List[FlowRun]:\n    \"\"\"\n    The principle method on agents. Queries for scheduled flow runs and submits\n    them for execution in parallel.\n    \"\"\"\n    if not self.started:\n        raise RuntimeError(\n            \"Agent is not started. Use `async with PrefectAgent()...`\"\n        )\n\n    self.logger.debug(\"Checking for scheduled flow runs...\")\n\n    before = pendulum.now(\"utc\").add(\n        seconds=self.prefetch_seconds or PREFECT_AGENT_PREFETCH_SECONDS.value()\n    )\n\n    submittable_runs: List[FlowRun] = []\n\n    if self.work_pool_name:\n        responses = await self.client.get_scheduled_flow_runs_for_work_pool(\n            work_pool_name=self.work_pool_name,\n            work_queue_names=[wq.name async for wq in self.get_work_queues()],\n            scheduled_before=before,\n        )\n        submittable_runs.extend([response.flow_run for response in responses])\n\n    else:\n        # load runs from each work queue\n        async for work_queue in self.get_work_queues():\n            # print a nice message if the work queue is paused\n            if work_queue.is_paused:\n                self.logger.info(\n                    f\"Work queue {work_queue.name!r} ({work_queue.id}) is paused.\"\n                )\n\n            else:\n                try:\n                    queue_runs = await self.client.get_runs_in_work_queue(\n                        id=work_queue.id, limit=10, scheduled_before=before\n                    )\n                    submittable_runs.extend(queue_runs)\n                except ObjectNotFound:\n                    self.logger.error(\n                        f\"Work queue {work_queue.name!r} ({work_queue.id}) not\"\n                        \" found.\"\n                    )\n                except Exception as exc:\n                    self.logger.exception(exc)\n\n        submittable_runs.sort(key=lambda run: run.next_scheduled_start_time)\n\n    for flow_run in submittable_runs:\n        # don't resubmit a run\n        if flow_run.id in self.submitting_flow_run_ids:\n            continue\n\n        try:\n            if self.limiter:\n                self.limiter.acquire_on_behalf_of_nowait(flow_run.id)\n        except anyio.WouldBlock:\n            self.logger.info(\n                f\"Flow run limit reached; {self.limiter.borrowed_tokens} flow runs\"\n                \" in progress.\"\n            )\n            break\n        else:\n            self.logger.info(f\"Submitting flow run '{flow_run.id}'\")\n            self.submitting_flow_run_ids.add(flow_run.id)\n            self.task_group.start_soon(\n                self.submit_run,\n                flow_run,\n            )\n\n    return list(\n        filter(lambda run: run.id in self.submitting_flow_run_ids, submittable_runs)\n    )\n
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.get_work_queues","title":"get_work_queues async","text":"

Loads the work queue objects corresponding to the agent's target work queues. If any of them don't exist, they are created.

Source code in prefect/agent.py
async def get_work_queues(self) -> AsyncIterator[WorkQueue]:\n    \"\"\"\n    Loads the work queue objects corresponding to the agent's target work\n    queues. If any of them don't exist, they are created.\n    \"\"\"\n\n    # if the queue cache has not expired, yield queues from the cache\n    now = pendulum.now(\"UTC\")\n    if (self._work_queue_cache_expiration or now) > now:\n        for queue in self._work_queue_cache:\n            yield queue\n        return\n\n    # otherwise clear the cache, set the expiration for 30 seconds, and\n    # reload the work queues\n    self._work_queue_cache.clear()\n    self._work_queue_cache_expiration = now.add(seconds=30)\n\n    await self.update_matched_agent_work_queues()\n\n    for name in self.work_queues:\n        try:\n            work_queue = await self.client.read_work_queue_by_name(\n                work_pool_name=self.work_pool_name, name=name\n            )\n        except (ObjectNotFound, Exception):\n            work_queue = None\n\n        # if the work queue wasn't found and the agent is NOT polling\n        # for queues using a regex, try to create it\n        if work_queue is None and not self.work_queue_prefix:\n            try:\n                work_queue = await self.client.create_work_queue(\n                    work_pool_name=self.work_pool_name, name=name\n                )\n            except Exception:\n                # if creating it raises an exception, it was probably just\n                # created by some other agent; rather than entering a re-read\n                # loop with new error handling, we log the exception and\n                # continue.\n                self.logger.exception(f\"Failed to create work queue {name!r}.\")\n                continue\n            else:\n                log_str = f\"Created work queue {name!r}\"\n                if self.work_pool_name:\n                    log_str = (\n                        f\"Created work queue {name!r} in work pool\"\n                        f\" {self.work_pool_name!r}.\"\n                    )\n                else:\n                    log_str = f\"Created work queue '{name}'.\"\n                self.logger.info(log_str)\n\n        if work_queue is None:\n            self.logger.error(\n                f\"Work queue '{name!r}' with prefix {self.work_queue_prefix} wasn't\"\n                \" found\"\n            )\n        else:\n            self._work_queue_cache.append(work_queue)\n            yield work_queue\n
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.submit_run","title":"submit_run async","text":"

Submit a flow run to the infrastructure

Source code in prefect/agent.py
async def submit_run(self, flow_run: FlowRun) -> None:\n    \"\"\"\n    Submit a flow run to the infrastructure\n    \"\"\"\n    ready_to_submit = await self._propose_pending_state(flow_run)\n\n    if ready_to_submit:\n        try:\n            infrastructure = await self.get_infrastructure(flow_run)\n        except Exception as exc:\n            self.logger.exception(\n                f\"Failed to get infrastructure for flow run '{flow_run.id}'.\"\n            )\n            await self._propose_failed_state(flow_run, exc)\n            if self.limiter:\n                self.limiter.release_on_behalf_of(flow_run.id)\n        else:\n            # Wait for submission to be completed. Note that the submission function\n            # may continue to run in the background after this exits.\n            readiness_result = await self.task_group.start(\n                self._submit_run_and_capture_errors, flow_run, infrastructure\n            )\n\n            if readiness_result and not isinstance(readiness_result, Exception):\n                try:\n                    await self.client.update_flow_run(\n                        flow_run_id=flow_run.id,\n                        infrastructure_pid=str(readiness_result),\n                    )\n                except Exception:\n                    self.logger.exception(\n                        \"An error occurred while setting the `infrastructure_pid`\"\n                        f\" on flow run {flow_run.id!r}. The flow run will not be\"\n                        \" cancellable.\"\n                    )\n\n            self.logger.info(f\"Completed submission of flow run '{flow_run.id}'\")\n\n    else:\n        # If the run is not ready to submit, release the concurrency slot\n        if self.limiter:\n            self.limiter.release_on_behalf_of(flow_run.id)\n\n    self.submitting_flow_run_ids.remove(flow_run.id)\n
","tags":["Python API","agents"]},{"location":"api-ref/prefect/artifacts/","title":"prefect.artifacts","text":"","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts","title":"prefect.artifacts","text":"

Interface for creating and reading artifacts.

","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.Artifact","title":"Artifact","text":"

Bases: ArtifactCreate

An artifact is a piece of data that is created by a flow or task run. https://docs.prefect.io/latest/concepts/artifacts/

Parameters:

  • type (required): A string identifying the type of artifact.
  • key (required): A user-provided string identifier. The key must only contain lowercase letters, numbers, and dashes.
  • description (required): A user-specified description of the artifact.
  • data (required): A JSON payload that allows for a result to be retrieved.

Source code in prefect/artifacts.py
class Artifact(ArtifactRequest):\n    \"\"\"\n    An artifact is a piece of data that is created by a flow or task run.\n    https://docs.prefect.io/latest/concepts/artifacts/\n\n    Arguments:\n        type: A string identifying the type of artifact.\n        key: A user-provided string identifier.\n          The key must only contain lowercase letters, numbers, and dashes.\n        description: A user-specified description of the artifact.\n        data: A JSON payload that allows for a result to be retrieved.\n    \"\"\"\n\n    @sync_compatible\n    async def create(\n        self: Self,\n        client: Optional[PrefectClient] = None,\n    ) -> ArtifactResponse:\n        \"\"\"\n        A method to create an artifact.\n\n        Arguments:\n            client: The PrefectClient\n\n        Returns:\n            - The created artifact.\n        \"\"\"\n        client, _ = get_or_create_client(client)\n        task_run_id, flow_run_id = get_task_and_flow_run_ids()\n        return await client.create_artifact(\n            artifact=ArtifactRequest(\n                type=self.type,\n                key=self.key,\n                description=self.description,\n                task_run_id=self.task_run_id or task_run_id,\n                flow_run_id=self.flow_run_id or flow_run_id,\n                data=await self.format(),\n            )\n        )\n\n    @classmethod\n    @sync_compatible\n    async def get(\n        cls, key: Optional[str] = None, client: Optional[PrefectClient] = None\n    ) -> Optional[ArtifactResponse]:\n        \"\"\"\n        A method to get an artifact.\n\n        Arguments:\n            key (str, optional): The key of the artifact to get.\n            client (PrefectClient, optional): The PrefectClient\n\n        Returns:\n            (ArtifactResponse, optional): The artifact (if found).\n        \"\"\"\n        client, _ = get_or_create_client(client)\n        return next(\n            iter(\n                await client.read_artifacts(\n                    limit=1,\n                    sort=ArtifactSort.UPDATED_DESC,\n                    artifact_filter=ArtifactFilter(key=ArtifactFilterKey(any_=[key])),\n                )\n            ),\n            None,\n        )\n\n    @classmethod\n    @sync_compatible\n    async def get_or_create(\n        cls,\n        key: Optional[str] = None,\n        description: Optional[str] = None,\n        data: Optional[Union[Dict[str, Any], Any]] = None,\n        client: Optional[PrefectClient] = None,\n        **kwargs: Any,\n    ) -> Tuple[ArtifactResponse, bool]:\n        \"\"\"\n        A method to get or create an artifact.\n\n        Arguments:\n            key (str, optional): The key of the artifact to get or create.\n            description (str, optional): The description of the artifact to create.\n            data (Union[Dict[str, Any], Any], optional): The data of the artifact to create.\n            client (PrefectClient, optional): The PrefectClient\n\n        Returns:\n            (ArtifactResponse): The artifact, either retrieved or created.\n        \"\"\"\n        artifact = await cls.get(key, client)\n        if artifact:\n            return artifact, False\n        else:\n            return (\n                await cls(key=key, description=description, data=data, **kwargs).create(\n                    client\n                ),\n                True,\n            )\n\n    async def format(self) -> Optional[Union[Dict[str, Any], Any]]:\n        return json.dumps(self.data)\n
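A minimal usage sketch based on the class above (the key, type, and data values are illustrative, and the synchronous call style relies on the sync_compatible decorators shown in the source):

from prefect import flow
from prefect.artifacts import Artifact

@flow
def report_flow():
    # keys may only contain lowercase letters, numbers, and dashes
    Artifact(
        key="daily-report",
        type="markdown",  # an assumed artifact type for illustration
        description="An example artifact created from a flow run",
        data="**All checks passed.**",
    ).create()

    # fetch the most recently updated artifact with this key
    print(Artifact.get("daily-report"))

if __name__ == "__main__":
    report_flow()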
","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.Artifact.create","title":"create async","text":"

A method to create an artifact.

Parameters:

client (Optional[PrefectClient]): The PrefectClient. Default: None

Returns:

Artifact: The created artifact.
Source code in prefect/artifacts.py
@sync_compatible\nasync def create(\n    self: Self,\n    client: Optional[PrefectClient] = None,\n) -> ArtifactResponse:\n    \"\"\"\n    A method to create an artifact.\n\n    Arguments:\n        client: The PrefectClient\n\n    Returns:\n        - The created artifact.\n    \"\"\"\n    client, _ = get_or_create_client(client)\n    task_run_id, flow_run_id = get_task_and_flow_run_ids()\n    return await client.create_artifact(\n        artifact=ArtifactRequest(\n            type=self.type,\n            key=self.key,\n            description=self.description,\n            task_run_id=self.task_run_id or task_run_id,\n            flow_run_id=self.flow_run_id or flow_run_id,\n            data=await self.format(),\n        )\n    )\n
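As a usage note, here is a minimal sketch of creating an artifact directly from the class. It assumes the call happens inside a flow or task run so the run IDs can be attached automatically; the type label, key, and data are illustrative, and the helper functions further below (create_link_artifact, create_markdown_artifact, create_table_artifact) are the more common entry points.

from prefect.artifacts import Artifact

artifact = Artifact(
    type=\"markdown\",                 # illustrative type label
    key=\"demo-report\",               # lowercase letters, numbers, and dashes only
    description=\"A short demo artifact\",
    data={\"note\": \"hello\"},         # serialized with json.dumps by format()
).create()                              # sync_compatible: no await needed in sync code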
","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.Artifact.get","title":"get async classmethod","text":"

A method to get an artifact.

Parameters:

key (str, optional): The key of the artifact to get. Default: None
client (PrefectClient, optional): The PrefectClient. Default: None

Returns:

(Artifact, optional): The artifact (if found).

Source code in prefect/artifacts.py
@classmethod\n@sync_compatible\nasync def get(\n    cls, key: Optional[str] = None, client: Optional[PrefectClient] = None\n) -> Optional[ArtifactResponse]:\n    \"\"\"\n    A method to get an artifact.\n\n    Arguments:\n        key (str, optional): The key of the artifact to get.\n        client (PrefectClient, optional): The PrefectClient\n\n    Returns:\n        (ArtifactResponse, optional): The artifact (if found).\n    \"\"\"\n    client, _ = get_or_create_client(client)\n    return next(\n        iter(\n            await client.read_artifacts(\n                limit=1,\n                sort=ArtifactSort.UPDATED_DESC,\n                artifact_filter=ArtifactFilter(key=ArtifactFilterKey(any_=[key])),\n            )\n        ),\n        None,\n    )\n
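A small sketch of reading back the most recently updated artifact for a key; the key below is illustrative, and None is returned when nothing matches.

from prefect.artifacts import Artifact

artifact = Artifact.get(key=\"demo-report\")
if artifact is not None:
    print(artifact.data)
else:
    print(\"no artifact with that key yet\")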
","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.Artifact.get_or_create","title":"get_or_create async classmethod","text":"

A method to get or create an artifact.

Parameters:

key (str, optional): The key of the artifact to get or create. Default: None
description (str, optional): The description of the artifact to create. Default: None
data (Union[Dict[str, Any], Any], optional): The data of the artifact to create. Default: None
client (PrefectClient, optional): The PrefectClient. Default: None

Returns:

Tuple[ArtifactResponse, bool]: The artifact, either retrieved or created, and a flag indicating whether it was newly created.

Source code in prefect/artifacts.py
@classmethod\n@sync_compatible\nasync def get_or_create(\n    cls,\n    key: Optional[str] = None,\n    description: Optional[str] = None,\n    data: Optional[Union[Dict[str, Any], Any]] = None,\n    client: Optional[PrefectClient] = None,\n    **kwargs: Any,\n) -> Tuple[ArtifactResponse, bool]:\n    \"\"\"\n    A method to get or create an artifact.\n\n    Arguments:\n        key (str, optional): The key of the artifact to get or create.\n        description (str, optional): The description of the artifact to create.\n        data (Union[Dict[str, Any], Any], optional): The data of the artifact to create.\n        client (PrefectClient, optional): The PrefectClient\n\n    Returns:\n        (ArtifactResponse): The artifact, either retrieved or created.\n    \"\"\"\n    artifact = await cls.get(key, client)\n    if artifact:\n        return artifact, False\n    else:\n        return (\n            await cls(key=key, description=description, data=data, **kwargs).create(\n                client\n            ),\n            True,\n        )\n
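Because get_or_create returns an (artifact, created) pair, callers can tell whether anything new was written. A minimal sketch; the key, data, and the pass-through type keyword are illustrative.

from prefect.artifacts import Artifact

artifact, created = Artifact.get_or_create(
    key=\"daily-summary\",
    description=\"Created only when missing\",
    data={\"rows\": 10},
    type=\"table\",   # extra keyword arguments are forwarded to the Artifact constructor
)
print(\"created new artifact\" if created else \"reused existing artifact\")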
","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.TableArtifact","title":"TableArtifact","text":"

Bases: Artifact

Source code in prefect/artifacts.py
class TableArtifact(Artifact):\n    table: Union[Dict[str, List[Any]], List[Dict[str, Any]], List[List[Any]]]\n    type: Optional[str] = \"table\"\n\n    @classmethod\n    def _sanitize(\n        cls, item: Union[Dict[str, Any], List[Any], float]\n    ) -> Union[Dict[str, Any], List[Any], int, float, None]:\n        \"\"\"\n        Sanitize NaN values in a given item.\n        The item can be a dict, list or float.\n        \"\"\"\n        if isinstance(item, list):\n            return [cls._sanitize(sub_item) for sub_item in item]\n        elif isinstance(item, dict):\n            return {k: cls._sanitize(v) for k, v in item.items()}\n        elif isinstance(item, float) and math.isnan(item):\n            return None\n        else:\n            return item\n\n    async def format(self) -> str:\n        return json.dumps(self._sanitize(self.table))\n
","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.create_link_artifact","title":"create_link_artifact async","text":"

Create a link artifact.

Parameters:

link (str): The link to create. Required.
link_text (Optional[str]): The link text. Default: None
key (Optional[str]): A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. Default: None
description (Optional[str]): A user-specified description of the artifact. Default: None

Returns:

UUID: The link artifact ID.

Source code in prefect/artifacts.py
@sync_compatible\nasync def create_link_artifact(\n    link: str,\n    link_text: Optional[str] = None,\n    key: Optional[str] = None,\n    description: Optional[str] = None,\n    client: Optional[PrefectClient] = None,\n) -> UUID:\n    \"\"\"\n    Create a link artifact.\n\n    Arguments:\n        link: The link to create.\n        link_text: The link text.\n        key: A user-provided string identifier.\n          Required for the artifact to show in the Artifacts page in the UI.\n          The key must only contain lowercase letters, numbers, and dashes.\n        description: A user-specified description of the artifact.\n\n\n    Returns:\n        The table artifact ID.\n    \"\"\"\n    artifact = await LinkArtifact(\n        key=key,\n        description=description,\n        link=link,\n        link_text=link_text,\n    ).create(client)\n\n    return artifact.id\n
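A hedged usage sketch inside a flow; the URL and key are placeholders.

from prefect import flow
from prefect.artifacts import create_link_artifact

@flow
def nightly_report():
    artifact_id = create_link_artifact(
        link=\"https://example.com/report.html\",
        link_text=\"Nightly report\",
        key=\"nightly-report\",
        description=\"Link to the rendered report\",
    )
    return artifact_id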
","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.create_markdown_artifact","title":"create_markdown_artifact async","text":"

Create a markdown artifact.

Parameters:

markdown (str): The markdown to create. Required.
key (Optional[str]): A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. Default: None
description (Optional[str]): A user-specified description of the artifact. Default: None

Returns:

UUID: The markdown artifact ID.

Source code in prefect/artifacts.py
@sync_compatible\nasync def create_markdown_artifact(\n    markdown: str,\n    key: Optional[str] = None,\n    description: Optional[str] = None,\n) -> UUID:\n    \"\"\"\n    Create a markdown artifact.\n\n    Arguments:\n        markdown: The markdown to create.\n        key: A user-provided string identifier.\n          Required for the artifact to show in the Artifacts page in the UI.\n          The key must only contain lowercase letters, numbers, and dashes.\n        description: A user-specified description of the artifact.\n\n    Returns:\n        The table artifact ID.\n    \"\"\"\n    artifact = await MarkdownArtifact(\n        key=key,\n        description=description,\n        markdown=markdown,\n    ).create()\n\n    return artifact.id\n
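A minimal sketch of emitting a markdown artifact from a task; the key and markdown content are illustrative.

from prefect import flow, task
from prefect.artifacts import create_markdown_artifact

@task
def summarize(rows: int):
    create_markdown_artifact(
        markdown=f\"## Load summary: processed {rows} rows\",
        key=\"load-summary\",
        description=\"Markdown summary of the load\",
    )

@flow
def etl():
    summarize(10)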
","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.create_table_artifact","title":"create_table_artifact async","text":"

Create a table artifact.

Parameters:

table (Union[Dict[str, List[Any]], List[Dict[str, Any]], List[List[Any]]]): The table to create. Required.
key (Optional[str]): A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. Default: None
description (Optional[str]): A user-specified description of the artifact. Default: None

Returns:

UUID: The table artifact ID.

Source code in prefect/artifacts.py
@sync_compatible\nasync def create_table_artifact(\n    table: Union[Dict[str, List[Any]], List[Dict[str, Any]], List[List[Any]]],\n    key: Optional[str] = None,\n    description: Optional[str] = None,\n) -> UUID:\n    \"\"\"\n    Create a table artifact.\n\n    Arguments:\n        table: The table to create.\n        key: A user-provided string identifier.\n          Required for the artifact to show in the Artifacts page in the UI.\n          The key must only contain lowercase letters, numbers, and dashes.\n        description: A user-specified description of the artifact.\n\n    Returns:\n        The table artifact ID.\n    \"\"\"\n\n    artifact = await TableArtifact(\n        key=key,\n        description=description,\n        table=table,\n    ).create()\n\n    return artifact.id\n
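A short sketch using the list-of-dicts table shape; the rows and key are illustrative.

from prefect import flow
from prefect.artifacts import create_table_artifact

@flow
def quality_checks():
    create_table_artifact(
        table=[
            {\"check\": \"row_count\", \"passed\": True},
            {\"check\": \"null_ids\", \"passed\": False},
        ],
        key=\"quality-checks\",
        description=\"Result of the data quality checks\",
    )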
","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/automations/","title":"prefect.automations","text":"","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations","title":"prefect.automations","text":"","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.Automation","title":"Automation","text":"

Bases: AutomationCore

Source code in prefect/automations.py
class Automation(AutomationCore):\n    id: Optional[UUID] = Field(default=None, description=\"The ID of this automation\")\n\n    @sync_compatible\n    async def create(self: Self) -> Self:\n        \"\"\"\n        Create a new automation.\n\n        auto_to_create = Automation(\n            name=\"woodchonk\",\n            trigger=EventTrigger(\n                expect={\"animal.walked\"},\n                match={\n                    \"genus\": \"Marmota\",\n                    \"species\": \"monax\",\n                },\n                posture=\"Reactive\",\n                threshold=3,\n                within=timedelta(seconds=10),\n            ),\n            actions=[CancelFlowRun()]\n        )\n        created_automation = auto_to_create.create()\n        \"\"\"\n        client, _ = get_or_create_client()\n        automation = AutomationCore(**self.dict(exclude={\"id\"}))\n        self.id = await client.create_automation(automation=automation)\n        return self\n\n    @sync_compatible\n    async def update(self: Self):\n        \"\"\"\n        Updates an existing automation.\n        auto = Automation.read(id=123)\n        auto.name = \"new name\"\n        auto.update()\n        \"\"\"\n\n        client, _ = get_or_create_client()\n        automation = AutomationCore(**self.dict(exclude={\"id\", \"owner_resource\"}))\n        await client.update_automation(automation_id=self.id, automation=automation)\n\n    @classmethod\n    @sync_compatible\n    async def read(\n        cls: Self, id: Optional[UUID] = None, name: Optional[str] = None\n    ) -> Self:\n        \"\"\"\n        Read an automation by ID or name.\n        automation = Automation.read(name=\"woodchonk\")\n\n        or\n\n        automation = Automation.read(id=UUID(\"b3514963-02b1-47a5-93d1-6eeb131041cb\"))\n        \"\"\"\n        if id and name:\n            raise ValueError(\"Only one of id or name can be provided\")\n        if not id and not name:\n            raise ValueError(\"One of id or name must be provided\")\n        client, _ = get_or_create_client()\n        if id:\n            try:\n                automation = await client.read_automation(automation_id=id)\n            except PrefectHTTPStatusError as exc:\n                if exc.response.status_code == 404:\n                    raise ValueError(f\"Automation with ID {id!r} not found\")\n            return Automation(**automation.dict())\n        else:\n            automation = await client.read_automations_by_name(name=name)\n            if len(automation) > 0:\n                return Automation(**automation[0].dict()) if automation else None\n            else:\n                raise ValueError(f\"Automation with name {name!r} not found\")\n\n    @sync_compatible\n    async def delete(self: Self) -> bool:\n        \"\"\"\n        auto = Automation.read(id = 123)\n        auto.delete()\n        \"\"\"\n        try:\n            client, _ = get_or_create_client()\n            await client.delete_automation(self.id)\n            return True\n        except PrefectHTTPStatusError as exc:\n            if exc.response.status_code == 404:\n                return False\n            raise\n\n    @sync_compatible\n    async def disable(self: Self) -> bool:\n        \"\"\"\n        Disable an automation.\n        auto = Automation.read(id = 123)\n        auto.disable()\n        \"\"\"\n        try:\n            client, _ = get_or_create_client()\n            await client.pause_automation(self.id)\n            return True\n        except PrefectHTTPStatusError 
as exc:\n            if exc.response.status_code == 404:\n                return False\n            raise\n\n    @sync_compatible\n    async def enable(self: Self) -> bool:\n        \"\"\"\n        Enable an automation.\n        auto = Automation.read(id = 123)\n        auto.enable()\n        \"\"\"\n        try:\n            client, _ = get_or_create_client()\n            await client.resume_automation(self.id)\n            return True\n        except PrefectHTTPStatusError as exc:\n            if exc.response.status_code == 404:\n                return False\n            raise\n
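Pulling the methods below together, a hedged end-to-end sketch. Import paths follow the source files shown in this reference, except CancelFlowRun, whose path is assumed; the event names and labels are illustrative.

from datetime import timedelta

from prefect.automations import Automation
from prefect.events.schemas.automations import EventTrigger
from prefect.events.actions import CancelFlowRun   # assumed import path for the action

automation = Automation(
    name=\"woodchonk\",
    trigger=EventTrigger(
        expect={\"animal.walked\"},
        match={\"genus\": \"Marmota\", \"species\": \"monax\"},
        posture=\"Reactive\",
        threshold=3,
        within=timedelta(seconds=10),
    ),
    actions=[CancelFlowRun()],
).create()          # create() returns the Automation with its id populated

automation.disable()   # pause evaluation without deleting
automation.enable()
automation.delete()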
","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.Automation.create","title":"create async","text":"

Create a new automation.

auto_to_create = Automation(
    name=\"woodchonk\",
    trigger=EventTrigger(
        expect={\"animal.walked\"},
        match={
            \"genus\": \"Marmota\",
            \"species\": \"monax\",
        },
        posture=\"Reactive\",
        threshold=3,
        within=timedelta(seconds=10),
    ),
    actions=[CancelFlowRun()],
)
created_automation = auto_to_create.create()

Source code in prefect/automations.py
@sync_compatible\nasync def create(self: Self) -> Self:\n    \"\"\"\n    Create a new automation.\n\n    auto_to_create = Automation(\n        name=\"woodchonk\",\n        trigger=EventTrigger(\n            expect={\"animal.walked\"},\n            match={\n                \"genus\": \"Marmota\",\n                \"species\": \"monax\",\n            },\n            posture=\"Reactive\",\n            threshold=3,\n            within=timedelta(seconds=10),\n        ),\n        actions=[CancelFlowRun()]\n    )\n    created_automation = auto_to_create.create()\n    \"\"\"\n    client, _ = get_or_create_client()\n    automation = AutomationCore(**self.dict(exclude={\"id\"}))\n    self.id = await client.create_automation(automation=automation)\n    return self\n
","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.Automation.delete","title":"delete async","text":"

auto = Automation.read(id=123)
auto.delete()

Source code in prefect/automations.py
@sync_compatible\nasync def delete(self: Self) -> bool:\n    \"\"\"\n    auto = Automation.read(id = 123)\n    auto.delete()\n    \"\"\"\n    try:\n        client, _ = get_or_create_client()\n        await client.delete_automation(self.id)\n        return True\n    except PrefectHTTPStatusError as exc:\n        if exc.response.status_code == 404:\n            return False\n        raise\n
","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.Automation.disable","title":"disable async","text":"

Disable an automation.

auto = Automation.read(id=123)
auto.disable()

Source code in prefect/automations.py
@sync_compatible\nasync def disable(self: Self) -> bool:\n    \"\"\"\n    Disable an automation.\n    auto = Automation.read(id = 123)\n    auto.disable()\n    \"\"\"\n    try:\n        client, _ = get_or_create_client()\n        await client.pause_automation(self.id)\n        return True\n    except PrefectHTTPStatusError as exc:\n        if exc.response.status_code == 404:\n            return False\n        raise\n
","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.Automation.enable","title":"enable async","text":"

Enable an automation.

auto = Automation.read(id=123)
auto.enable()

Source code in prefect/automations.py
@sync_compatible\nasync def enable(self: Self) -> bool:\n    \"\"\"\n    Enable an automation.\n    auto = Automation.read(id = 123)\n    auto.enable()\n    \"\"\"\n    try:\n        client, _ = get_or_create_client()\n        await client.resume_automation(self.id)\n        return True\n    except PrefectHTTPStatusError as exc:\n        if exc.response.status_code == 404:\n            return False\n        raise\n
","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.Automation.read","title":"read async classmethod","text":"

Read an automation by ID or name.

automation = Automation.read(name=\"woodchonk\")

or

automation = Automation.read(id=UUID(\"b3514963-02b1-47a5-93d1-6eeb131041cb\"))

Source code in prefect/automations.py
@classmethod\n@sync_compatible\nasync def read(\n    cls: Self, id: Optional[UUID] = None, name: Optional[str] = None\n) -> Self:\n    \"\"\"\n    Read an automation by ID or name.\n    automation = Automation.read(name=\"woodchonk\")\n\n    or\n\n    automation = Automation.read(id=UUID(\"b3514963-02b1-47a5-93d1-6eeb131041cb\"))\n    \"\"\"\n    if id and name:\n        raise ValueError(\"Only one of id or name can be provided\")\n    if not id and not name:\n        raise ValueError(\"One of id or name must be provided\")\n    client, _ = get_or_create_client()\n    if id:\n        try:\n            automation = await client.read_automation(automation_id=id)\n        except PrefectHTTPStatusError as exc:\n            if exc.response.status_code == 404:\n                raise ValueError(f\"Automation with ID {id!r} not found\")\n        return Automation(**automation.dict())\n    else:\n        automation = await client.read_automations_by_name(name=name)\n        if len(automation) > 0:\n            return Automation(**automation[0].dict()) if automation else None\n        else:\n            raise ValueError(f\"Automation with name {name!r} not found\")\n
","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.Automation.update","title":"update async","text":"

Updates an existing automation.

auto = Automation.read(id=123)
auto.name = \"new name\"
auto.update()

Source code in prefect/automations.py
@sync_compatible\nasync def update(self: Self):\n    \"\"\"\n    Updates an existing automation.\n    auto = Automation.read(id=123)\n    auto.name = \"new name\"\n    auto.update()\n    \"\"\"\n\n    client, _ = get_or_create_client()\n    automation = AutomationCore(**self.dict(exclude={\"id\", \"owner_resource\"}))\n    await client.update_automation(automation_id=self.id, automation=automation)\n
","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.AutomationCore","title":"AutomationCore","text":"

Bases: PrefectBaseModel

Defines an action a user wants to take when a certain number of events do or don't happen to the matching resources

Source code in prefect/events/schemas/automations.py
class AutomationCore(PrefectBaseModel, extra=\"ignore\"):  # type: ignore[call-arg]\n    \"\"\"Defines an action a user wants to take when a certain number of events\n    do or don't happen to the matching resources\"\"\"\n\n    name: str = Field(..., description=\"The name of this automation\")\n    description: str = Field(\"\", description=\"A longer description of this automation\")\n\n    enabled: bool = Field(True, description=\"Whether this automation will be evaluated\")\n\n    trigger: TriggerTypes = Field(\n        ...,\n        description=(\n            \"The criteria for which events this Automation covers and how it will \"\n            \"respond to the presence or absence of those events\"\n        ),\n    )\n\n    actions: List[ActionTypes] = Field(\n        ...,\n        description=\"The actions to perform when this Automation triggers\",\n    )\n\n    actions_on_trigger: List[ActionTypes] = Field(\n        default_factory=list,\n        description=\"The actions to perform when an Automation goes into a triggered state\",\n    )\n\n    actions_on_resolve: List[ActionTypes] = Field(\n        default_factory=list,\n        description=\"The actions to perform when an Automation goes into a resolving state\",\n    )\n\n    owner_resource: Optional[str] = Field(\n        default=None, description=\"The owning resource of this automation\"\n    )\n
","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.CompositeTrigger","title":"CompositeTrigger","text":"

Bases: Trigger, ABC

Requires some number of triggers to have fired within the given time period.

Source code in prefect/events/schemas/automations.py
class CompositeTrigger(Trigger, abc.ABC):\n    \"\"\"\n    Requires some number of triggers to have fired within the given time period.\n    \"\"\"\n\n    type: Literal[\"compound\", \"sequence\"]\n    triggers: List[\"TriggerTypes\"]\n    within: Optional[timedelta]\n
","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.CompoundTrigger","title":"CompoundTrigger","text":"

Bases: CompositeTrigger

A composite trigger that requires some number of triggers to have fired within the given time period

Source code in prefect/events/schemas/automations.py
class CompoundTrigger(CompositeTrigger):\n    \"\"\"A composite trigger that requires some number of triggers to have\n    fired within the given time period\"\"\"\n\n    type: Literal[\"compound\"] = \"compound\"\n    require: Union[int, Literal[\"any\", \"all\"]]\n\n    @root_validator\n    def validate_require(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n        require = values.get(\"require\")\n\n        if isinstance(require, int):\n            if require < 1:\n                raise ValueError(\"required must be at least 1\")\n            if require > len(values[\"triggers\"]):\n                raise ValueError(\n                    \"required must be less than or equal to the number of triggers\"\n                )\n\n        return values\n\n    def describe_for_cli(self, indent: int = 0) -> str:\n        \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n        return textwrap.indent(\n            \"\\n\".join(\n                [\n                    f\"{str(self.require).capitalize()} of:\",\n                    \"\\n\".join(\n                        [\n                            trigger.describe_for_cli(indent=indent + 1)\n                            for trigger in self.triggers\n                        ]\n                    ),\n                ]\n            ),\n            prefix=\"  \" * indent,\n        )\n
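A brief sketch of combining two event triggers so the automation only fires when both are seen within the window; the event names are illustrative.

from datetime import timedelta

from prefect.events.schemas.automations import CompoundTrigger, EventTrigger

trigger = CompoundTrigger(
    require=\"all\",                 # or an integer count, or \"any\"
    within=timedelta(minutes=5),
    triggers=[
        EventTrigger(expect={\"prefect.flow-run.Failed\"}),
        EventTrigger(expect={\"prefect.flow-run.Crashed\"}),
    ],
)
print(trigger.describe_for_cli())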
","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.CompoundTrigger.describe_for_cli","title":"describe_for_cli","text":"

Return a human-readable description of this trigger for the CLI

Source code in prefect/events/schemas/automations.py
def describe_for_cli(self, indent: int = 0) -> str:\n    \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n    return textwrap.indent(\n        \"\\n\".join(\n            [\n                f\"{str(self.require).capitalize()} of:\",\n                \"\\n\".join(\n                    [\n                        trigger.describe_for_cli(indent=indent + 1)\n                        for trigger in self.triggers\n                    ]\n                ),\n            ]\n        ),\n        prefix=\"  \" * indent,\n    )\n
","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.EventTrigger","title":"EventTrigger","text":"

Bases: ResourceTrigger

A trigger that fires based on the presence or absence of events within a given period of time.

Source code in prefect/events/schemas/automations.py
class EventTrigger(ResourceTrigger):\n    \"\"\"\n    A trigger that fires based on the presence or absence of events within a given\n    period of time.\n    \"\"\"\n\n    type: Literal[\"event\"] = \"event\"\n\n    after: Set[str] = Field(\n        default_factory=set,\n        description=(\n            \"The event(s) which must first been seen to fire this trigger.  If \"\n            \"empty, then fire this trigger immediately.  Events may include \"\n            \"trailing wildcards, like `prefect.flow-run.*`\"\n        ),\n    )\n    expect: Set[str] = Field(\n        default_factory=set,\n        description=(\n            \"The event(s) this trigger is expecting to see.  If empty, this \"\n            \"trigger will match any event.  Events may include trailing wildcards, \"\n            \"like `prefect.flow-run.*`\"\n        ),\n    )\n\n    for_each: Set[str] = Field(\n        default_factory=set,\n        description=(\n            \"Evaluate the trigger separately for each distinct value of these labels \"\n            \"on the resource.  By default, labels refer to the primary resource of the \"\n            \"triggering event.  You may also refer to labels from related \"\n            \"resources by specifying `related:<role>:<label>`.  This will use the \"\n            \"value of that label for the first related resource in that role.  For \"\n            'example, `\"for_each\": [\"related:flow:prefect.resource.id\"]` would '\n            \"evaluate the trigger for each flow.\"\n        ),\n    )\n    posture: Literal[Posture.Reactive, Posture.Proactive] = Field(  # type: ignore[valid-type]\n        Posture.Reactive,\n        description=(\n            \"The posture of this trigger, either Reactive or Proactive.  Reactive \"\n            \"triggers respond to the _presence_ of the expected events, while \"\n            \"Proactive triggers respond to the _absence_ of those expected events.\"\n        ),\n    )\n    threshold: int = Field(\n        1,\n        description=(\n            \"The number of events required for this trigger to fire (for \"\n            \"Reactive triggers), or the number of events expected (for Proactive \"\n            \"triggers)\"\n        ),\n    )\n    within: timedelta = Field(\n        timedelta(0),\n        minimum=0.0,\n        exclusiveMinimum=False,\n        description=(\n            \"The time period over which the events must occur.  
For Reactive triggers, \"\n            \"this may be as low as 0 seconds, but must be at least 10 seconds for \"\n            \"Proactive triggers\"\n        ),\n    )\n\n    @validator(\"within\")\n    def enforce_minimum_within(\n        cls, value: timedelta, values, config, field: ModelField\n    ):\n        return validate_trigger_within(value, field)\n\n    @root_validator(skip_on_failure=True)\n    def enforce_minimum_within_for_proactive_triggers(cls, values: Dict[str, Any]):\n        posture: Optional[Posture] = values.get(\"posture\")\n        within: Optional[timedelta] = values.get(\"within\")\n\n        if posture == Posture.Proactive:\n            if not within or within == timedelta(0):\n                values[\"within\"] = timedelta(seconds=10.0)\n            elif within < timedelta(seconds=10.0):\n                raise ValueError(\n                    \"The minimum within for Proactive triggers is 10 seconds\"\n                )\n\n        return values\n\n    def describe_for_cli(self, indent: int = 0) -> str:\n        \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n        if self.posture == Posture.Reactive:\n            return textwrap.indent(\n                \"\\n\".join(\n                    [\n                        f\"Reactive: expecting {self.threshold} of {self.expect}\",\n                    ],\n                ),\n                prefix=\"  \" * indent,\n            )\n        else:\n            return textwrap.indent(\n                \"\\n\".join(\n                    [\n                        f\"Proactive: expecting {self.threshold} {self.expect} event \"\n                        f\"within {self.within}\",\n                    ],\n                ),\n                prefix=\"  \" * indent,\n            )\n
","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.EventTrigger.describe_for_cli","title":"describe_for_cli","text":"

Return a human-readable description of this trigger for the CLI

Source code in prefect/events/schemas/automations.py
def describe_for_cli(self, indent: int = 0) -> str:\n    \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n    if self.posture == Posture.Reactive:\n        return textwrap.indent(\n            \"\\n\".join(\n                [\n                    f\"Reactive: expecting {self.threshold} of {self.expect}\",\n                ],\n            ),\n            prefix=\"  \" * indent,\n        )\n    else:\n        return textwrap.indent(\n            \"\\n\".join(\n                [\n                    f\"Proactive: expecting {self.threshold} {self.expect} event \"\n                    f\"within {self.within}\",\n                ],\n            ),\n            prefix=\"  \" * indent,\n        )\n
","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.MetricTrigger","title":"MetricTrigger","text":"

Bases: ResourceTrigger

A trigger that fires based on the results of a metric query.

Source code in prefect/events/schemas/automations.py
class MetricTrigger(ResourceTrigger):\n    \"\"\"\n    A trigger that fires based on the results of a metric query.\n    \"\"\"\n\n    type: Literal[\"metric\"] = \"metric\"\n\n    posture: Literal[Posture.Metric] = Field(  # type: ignore[valid-type]\n        Posture.Metric,\n        description=\"Periodically evaluate the configured metric query.\",\n    )\n\n    metric: MetricTriggerQuery = Field(\n        ...,\n        description=\"The metric query to evaluate for this trigger. \",\n    )\n\n    def describe_for_cli(self, indent: int = 0) -> str:\n        \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n        m = self.metric\n        return textwrap.indent(\n            \"\\n\".join(\n                [\n                    f\"Metric: {m.name.value} {m.operator.value} {m.threshold} for {m.range}\",\n                ]\n            ),\n            prefix=\"  \" * indent,\n        )\n
","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.MetricTrigger.describe_for_cli","title":"describe_for_cli","text":"

Return a human-readable description of this trigger for the CLI

Source code in prefect/events/schemas/automations.py
def describe_for_cli(self, indent: int = 0) -> str:\n    \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n    m = self.metric\n    return textwrap.indent(\n        \"\\n\".join(\n            [\n                f\"Metric: {m.name.value} {m.operator.value} {m.threshold} for {m.range}\",\n            ]\n        ),\n        prefix=\"  \" * indent,\n    )\n
","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.MetricTriggerQuery","title":"MetricTriggerQuery","text":"

Bases: PrefectBaseModel

Defines a subset of the Trigger subclass, which is specific to Metric automations, that specify the query configurations and breaching conditions for the Automation

Source code in prefect/events/schemas/automations.py
class MetricTriggerQuery(PrefectBaseModel):\n    \"\"\"Defines a subset of the Trigger subclass, which is specific\n    to Metric automations, that specify the query configurations\n    and breaching conditions for the Automation\"\"\"\n\n    name: PrefectMetric = Field(\n        ...,\n        description=\"The name of the metric to query.\",\n    )\n    threshold: float = Field(\n        ...,\n        description=(\n            \"The threshold value against which we'll compare \" \"the query result.\"\n        ),\n    )\n    operator: MetricTriggerOperator = Field(\n        ...,\n        description=(\n            \"The comparative operator (LT / LTE / GT / GTE) used to compare \"\n            \"the query result against the threshold value.\"\n        ),\n    )\n    range: timedelta = Field(\n        timedelta(seconds=300),  # defaults to 5 minutes\n        minimum=300.0,\n        exclusiveMinimum=False,\n        description=(\n            \"The lookback duration (seconds) for a metric query. This duration is \"\n            \"used to determine the time range over which the query will be executed. \"\n            \"The minimum value is 300 seconds (5 minutes).\"\n        ),\n    )\n    firing_for: timedelta = Field(\n        timedelta(seconds=300),  # defaults to 5 minutes\n        minimum=300.0,\n        exclusiveMinimum=False,\n        description=(\n            \"The duration (seconds) for which the metric query must breach \"\n            \"or resolve continuously before the state is updated and the \"\n            \"automation is triggered. \"\n            \"The minimum value is 300 seconds (5 minutes).\"\n        ),\n    )\n
","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.ResourceSpecification","title":"ResourceSpecification","text":"

Bases: PrefectBaseModel

A specification that may match zero, one, or many resources, used to target or select a set of resources in a query or automation. A resource must match at least one value of all of the provided labels

Source code in prefect/events/schemas/events.py
class ResourceSpecification(PrefectBaseModel):\n    \"\"\"A specification that may match zero, one, or many resources, used to target or\n    select a set of resources in a query or automation.  A resource must match at least\n    one value of all of the provided labels\"\"\"\n\n    __root__: Dict[str, Union[str, List[str]]]\n\n    def matches_every_resource(self) -> bool:\n        return len(self) == 0\n\n    def matches_every_resource_of_kind(self, prefix: str) -> bool:\n        if self.matches_every_resource():\n            return True\n\n        if len(self.__root__) == 1:\n            if resource_id := self.__root__.get(\"prefect.resource.id\"):\n                values = [resource_id] if isinstance(resource_id, str) else resource_id\n                return any(value == f\"{prefix}.*\" for value in values)\n\n        return False\n\n    def includes(self, candidates: Iterable[Resource]) -> bool:\n        if self.matches_every_resource():\n            return True\n\n        for candidate in candidates:\n            if self.matches(candidate):\n                return True\n\n        return False\n\n    def matches(self, resource: Resource) -> bool:\n        for label, expected in self.items():\n            value = resource.get(label)\n            if not any(matches(candidate, value) for candidate in expected):\n                return False\n        return True\n\n    def items(self) -> Iterable[Tuple[str, List[str]]]:\n        return [\n            (label, [value] if isinstance(value, str) else value)\n            for label, value in self.__root__.items()\n        ]\n\n    def __contains__(self, key: str) -> bool:\n        return self.__root__.__contains__(key)\n\n    def __getitem__(self, key: str) -> List[str]:\n        value = self.__root__[key]\n        if not value:\n            return []\n        if not isinstance(value, list):\n            value = [value]\n        return value\n\n    def pop(\n        self, key: str, default: Optional[Union[str, List[str]]] = None\n    ) -> Optional[List[str]]:\n        value = self.__root__.pop(key, default)\n        if not value:\n            return []\n        if not isinstance(value, list):\n            value = [value]\n        return value\n\n    def get(\n        self, key: str, default: Optional[Union[str, List[str]]] = None\n    ) -> Optional[List[str]]:\n        value = self.__root__.get(key, default)\n        if not value:\n            return []\n        if not isinstance(value, list):\n            value = [value]\n        return value\n\n    def __len__(self) -> int:\n        return len(self.__root__)\n\n    def deepcopy(self) -> \"ResourceSpecification\":\n        return ResourceSpecification.parse_obj(copy.deepcopy(self.__root__))\n
","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.ResourceTrigger","title":"ResourceTrigger","text":"

Bases: Trigger, ABC

Base class for triggers that may filter by the labels of resources.

Source code in prefect/events/schemas/automations.py
class ResourceTrigger(Trigger, abc.ABC):\n    \"\"\"\n    Base class for triggers that may filter by the labels of resources.\n    \"\"\"\n\n    type: str\n\n    match: ResourceSpecification = Field(\n        default_factory=lambda: ResourceSpecification.parse_obj({}),\n        description=\"Labels for resources which this trigger will match.\",\n    )\n    match_related: ResourceSpecification = Field(\n        default_factory=lambda: ResourceSpecification.parse_obj({}),\n        description=\"Labels for related resources which this trigger will match.\",\n    )\n
","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.SequenceTrigger","title":"SequenceTrigger","text":"

Bases: CompositeTrigger

A composite trigger that requires some number of triggers to have fired within the given time period in a specific order

Source code in prefect/events/schemas/automations.py
class SequenceTrigger(CompositeTrigger):\n    \"\"\"A composite trigger that requires some number of triggers to have fired\n    within the given time period in a specific order\"\"\"\n\n    type: Literal[\"sequence\"] = \"sequence\"\n\n    def describe_for_cli(self, indent: int = 0) -> str:\n        \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n        return textwrap.indent(\n            \"\\n\".join(\n                [\n                    \"In this order:\",\n                    \"\\n\".join(\n                        [\n                            trigger.describe_for_cli(indent=indent + 1)\n                            for trigger in self.triggers\n                        ]\n                    ),\n                ]\n            ),\n            prefix=\"  \" * indent,\n        )\n
","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.SequenceTrigger.describe_for_cli","title":"describe_for_cli","text":"

Return a human-readable description of this trigger for the CLI

Source code in prefect/events/schemas/automations.py
def describe_for_cli(self, indent: int = 0) -> str:\n    \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n    return textwrap.indent(\n        \"\\n\".join(\n            [\n                \"In this order:\",\n                \"\\n\".join(\n                    [\n                        trigger.describe_for_cli(indent=indent + 1)\n                        for trigger in self.triggers\n                    ]\n                ),\n            ]\n        ),\n        prefix=\"  \" * indent,\n    )\n
","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.Trigger","title":"Trigger","text":"

Bases: PrefectBaseModel, ABC

Base class describing a set of criteria that must be satisfied in order to trigger an automation.

Source code in prefect/events/schemas/automations.py
class Trigger(PrefectBaseModel, abc.ABC, extra=\"ignore\"):  # type: ignore[call-arg]\n    \"\"\"\n    Base class describing a set of criteria that must be satisfied in order to trigger\n    an automation.\n    \"\"\"\n\n    type: str\n\n    @abc.abstractmethod\n    def describe_for_cli(self, indent: int = 0) -> str:\n        \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n\n    # The following allows the regular Trigger class to be used when serving or\n    # deploying flows, analogous to how the Deployment*Trigger classes work\n\n    _deployment_id: Optional[UUID] = PrivateAttr(default=None)\n\n    def set_deployment_id(self, deployment_id: UUID):\n        self._deployment_id = deployment_id\n\n    def owner_resource(self) -> Optional[str]:\n        return f\"prefect.deployment.{self._deployment_id}\"\n\n    def actions(self) -> List[ActionTypes]:\n        assert self._deployment_id\n        return [\n            RunDeployment(\n                source=\"selected\",\n                deployment_id=self._deployment_id,\n                parameters=getattr(self, \"parameters\", None),\n                job_variables=getattr(self, \"job_variables\", None),\n            )\n        ]\n\n    def as_automation(self) -> \"AutomationCore\":\n        assert self._deployment_id\n\n        trigger: TriggerTypes = cast(TriggerTypes, self)\n\n        # This is one of the Deployment*Trigger classes, so translate it over to a\n        # plain Trigger\n        if hasattr(self, \"trigger_type\"):\n            trigger = self.trigger_type(**self.dict())\n\n        return AutomationCore(\n            name=(\n                getattr(self, \"name\", None)\n                or f\"Automation for deployment {self._deployment_id}\"\n            ),\n            description=\"\",\n            enabled=getattr(self, \"enabled\", True),\n            trigger=trigger,\n            actions=self.actions(),\n            owner_resource=self.owner_resource(),\n        )\n
","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.Trigger.describe_for_cli","title":"describe_for_cli abstractmethod","text":"

Return a human-readable description of this trigger for the CLI

Source code in prefect/events/schemas/automations.py
@abc.abstractmethod\ndef describe_for_cli(self, indent: int = 0) -> str:\n    \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n
","tags":["Python API","automations"]},{"location":"api-ref/prefect/context/","title":"prefect.context","text":"","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context","title":"prefect.context","text":"

Async and thread safe models for passing runtime context data.

These contexts should never be directly mutated by the user.

For more user-accessible information about the current run, see prefect.runtime.

","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.ContextModel","title":"ContextModel","text":"

Bases: BaseModel

A base model for context data that forbids mutation and extra data while providing a context manager

Source code in prefect/context.py
class ContextModel(BaseModel):\n    \"\"\"\n    A base model for context data that forbids mutation and extra data while providing\n    a context manager\n    \"\"\"\n\n    # The context variable for storing data must be defined by the child class\n    __var__: ContextVar\n    _token: Token = PrivateAttr(None)\n\n    class Config:\n        # allow_mutation = False\n        arbitrary_types_allowed = True\n        extra = \"forbid\"\n\n    def __enter__(self):\n        if self._token is not None:\n            raise RuntimeError(\n                \"Context already entered. Context enter calls cannot be nested.\"\n            )\n        self._token = self.__var__.set(self)\n        return self\n\n    def __exit__(self, *_):\n        if not self._token:\n            raise RuntimeError(\n                \"Asymmetric use of context. Context exit called without an enter.\"\n            )\n        self.__var__.reset(self._token)\n        self._token = None\n\n    @classmethod\n    def get(cls: Type[T]) -> Optional[T]:\n        return cls.__var__.get(None)\n\n    def copy(self, **kwargs):\n        \"\"\"\n        Duplicate the context model, optionally choosing which fields to include, exclude, or change.\n\n        Attributes:\n            include: Fields to include in new model.\n            exclude: Fields to exclude from new model, as with values this takes precedence over include.\n            update: Values to change/add in the new model. Note: the data is not validated before creating\n                the new model - you should trust this data.\n            deep: Set to `True` to make a deep copy of the model.\n\n        Returns:\n            A new model instance.\n        \"\"\"\n        # Remove the token on copy to avoid re-entrance errors\n        new = super().copy(**kwargs)\n        new._token = None\n        return new\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.ContextModel.copy","title":"copy","text":"

Duplicate the context model, optionally choosing which fields to include, exclude, or change.

Attributes:

include: Fields to include in new model.
exclude: Fields to exclude from new model, as with values this takes precedence over include.
update: Values to change/add in the new model. Note: the data is not validated before creating the new model - you should trust this data.
deep: Set to True to make a deep copy of the model.

Returns:

A new model instance.

Source code in prefect/context.py
def copy(self, **kwargs):\n    \"\"\"\n    Duplicate the context model, optionally choosing which fields to include, exclude, or change.\n\n    Attributes:\n        include: Fields to include in new model.\n        exclude: Fields to exclude from new model, as with values this takes precedence over include.\n        update: Values to change/add in the new model. Note: the data is not validated before creating\n            the new model - you should trust this data.\n        deep: Set to `True` to make a deep copy of the model.\n\n    Returns:\n        A new model instance.\n    \"\"\"\n    # Remove the token on copy to avoid re-entrance errors\n    new = super().copy(**kwargs)\n    new._token = None\n    return new\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.EngineContext","title":"EngineContext","text":"

Bases: RunContext

The context for a flow run. Data in this context is only available from within a flow run function.

Attributes:

flow (Optional[Flow]): The flow instance associated with the run
flow_run (Optional[FlowRun]): The API metadata for the flow run
task_runner (BaseTaskRunner): The task runner instance being used for the flow run
task_run_futures (List[PrefectFuture]): A list of futures for task runs submitted within this flow run
task_run_states (List[State]): A list of states for task runs created within this flow run
task_run_results (Dict[int, State]): A mapping of result ids to task run states for this flow run
flow_run_states (List[State]): A list of states for flow runs created within this flow run
sync_portal (Optional[BlockingPortal]): A blocking portal for sync task/flow runs in an async flow
timeout_scope (Optional[CancelScope]): The cancellation scope for flow level timeouts

Source code in prefect/context.py
class EngineContext(RunContext):\n    \"\"\"\n    The context for a flow run. Data in this context is only available from within a\n    flow run function.\n\n    Attributes:\n        flow: The flow instance associated with the run\n        flow_run: The API metadata for the flow run\n        task_runner: The task runner instance being used for the flow run\n        task_run_futures: A list of futures for task runs submitted within this flow run\n        task_run_states: A list of states for task runs created within this flow run\n        task_run_results: A mapping of result ids to task run states for this flow run\n        flow_run_states: A list of states for flow runs created within this flow run\n        sync_portal: A blocking portal for sync task/flow runs in an async flow\n        timeout_scope: The cancellation scope for flow level timeouts\n    \"\"\"\n\n    flow: Optional[\"Flow\"] = None\n    flow_run: Optional[FlowRun] = None\n    autonomous_task_run: Optional[TaskRun] = None\n    task_runner: BaseTaskRunner\n    log_prints: bool = False\n    parameters: Optional[Dict[str, Any]] = None\n\n    # Result handling\n    result_factory: ResultFactory\n\n    # Counter for task calls allowing unique\n    task_run_dynamic_keys: Dict[str, int] = Field(default_factory=dict)\n\n    # Counter for flow pauses\n    observed_flow_pauses: Dict[str, int] = Field(default_factory=dict)\n\n    # Tracking for objects created by this flow run\n    task_run_futures: List[PrefectFuture] = Field(default_factory=list)\n    task_run_states: List[State] = Field(default_factory=list)\n    task_run_results: Dict[int, State] = Field(default_factory=dict)\n    flow_run_states: List[State] = Field(default_factory=list)\n\n    # The synchronous portal is only created for async flows for creating engine calls\n    # from synchronous task and subflow calls\n    sync_portal: Optional[anyio.abc.BlockingPortal] = None\n    timeout_scope: Optional[anyio.abc.CancelScope] = None\n\n    # Task group that can be used for background tasks during the flow run\n    background_tasks: anyio.abc.TaskGroup\n\n    # Events worker to emit events to Prefect Cloud\n    events: Optional[EventsWorker] = None\n\n    __var__ = ContextVar(\"flow_run\")\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.PrefectObjectRegistry","title":"PrefectObjectRegistry","text":"

Bases: ContextModel

A context that acts as a registry for all Prefect objects that are registered during load and execution.

Attributes:

start_time (DateTimeTZ): The time the object registry was created.
block_code_execution (bool): If set, flow calls will be ignored.
capture_failures (bool): If set, failures during __init__ will be silenced and tracked.

Source code in prefect/context.py
class PrefectObjectRegistry(ContextModel):\n    \"\"\"\n    A context that acts as a registry for all Prefect objects that are\n    registered during load and execution.\n\n    Attributes:\n        start_time: The time the object registry was created.\n        block_code_execution: If set, flow calls will be ignored.\n        capture_failures: If set, failures during __init__ will be silenced and tracked.\n    \"\"\"\n\n    start_time: DateTimeTZ = Field(default_factory=lambda: pendulum.now(\"UTC\"))\n\n    _instance_registry: Dict[Type[T], List[T]] = PrivateAttr(\n        default_factory=lambda: defaultdict(list)\n    )\n\n    # Failures will be a tuple of (exception, instance, args, kwargs)\n    _instance_init_failures: Dict[\n        Type[T], List[Tuple[Exception, T, Tuple, Dict]]\n    ] = PrivateAttr(default_factory=lambda: defaultdict(list))\n\n    block_code_execution: bool = False\n    capture_failures: bool = False\n\n    __var__ = ContextVar(\"object_registry\")\n\n    def get_instances(self, type_: Type[T]) -> List[T]:\n        instances = []\n        for registered_type, type_instances in self._instance_registry.items():\n            if type_ in registered_type.mro():\n                instances.extend(type_instances)\n        return instances\n\n    def get_instance_failures(\n        self, type_: Type[T]\n    ) -> List[Tuple[Exception, T, Tuple, Dict]]:\n        failures = []\n        for type__ in type_.mro():\n            failures.extend(self._instance_init_failures[type__])\n        return failures\n\n    def register_instance(self, object):\n        # TODO: Consider using a 'Set' to avoid duplicate entries\n        self._instance_registry[type(object)].append(object)\n\n    def register_init_failure(\n        self, exc: Exception, object: Any, init_args: Tuple, init_kwargs: Dict\n    ):\n        self._instance_init_failures[type(object)].append(\n            (exc, object, init_args, init_kwargs)\n        )\n\n    @classmethod\n    def register_instances(cls, type_: Type[T]) -> Type[T]:\n        \"\"\"\n        Decorator for a class that adds registration to the `PrefectObjectRegistry`\n        on initialization of instances.\n        \"\"\"\n        original_init = type_.__init__\n\n        def __register_init__(__self__: T, *args: Any, **kwargs: Any) -> None:\n            registry = cls.get()\n            try:\n                original_init(__self__, *args, **kwargs)\n            except Exception as exc:\n                if not registry or not registry.capture_failures:\n                    raise\n                else:\n                    registry.register_init_failure(exc, __self__, args, kwargs)\n            else:\n                if registry:\n                    registry.register_instance(__self__)\n\n        update_wrapper(__register_init__, original_init)\n\n        type_.__init__ = __register_init__\n        return type_\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.PrefectObjectRegistry.register_instances","title":"register_instances classmethod","text":"

Decorator for a class that adds registration to the PrefectObjectRegistry on initialization of instances.

Source code in prefect/context.py
@classmethod\ndef register_instances(cls, type_: Type[T]) -> Type[T]:\n    \"\"\"\n    Decorator for a class that adds registration to the `PrefectObjectRegistry`\n    on initialization of instances.\n    \"\"\"\n    original_init = type_.__init__\n\n    def __register_init__(__self__: T, *args: Any, **kwargs: Any) -> None:\n        registry = cls.get()\n        try:\n            original_init(__self__, *args, **kwargs)\n        except Exception as exc:\n            if not registry or not registry.capture_failures:\n                raise\n            else:\n                registry.register_init_failure(exc, __self__, args, kwargs)\n        else:\n            if registry:\n                registry.register_instance(__self__)\n\n    update_wrapper(__register_init__, original_init)\n\n    type_.__init__ = __register_init__\n    return type_\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.RunContext","title":"RunContext","text":"

Bases: ContextModel

The base context for a flow or task run. Data in this context will always be available when get_run_context is called.

Attributes:

start_time (DateTimeTZ): The time the run context was entered
client (PrefectClient): The Prefect client instance being used for API communication

Source code in prefect/context.py
class RunContext(ContextModel):\n    \"\"\"\n    The base context for a flow or task run. Data in this context will always be\n    available when `get_run_context` is called.\n\n    Attributes:\n        start_time: The time the run context was entered\n        client: The Prefect client instance being used for API communication\n    \"\"\"\n\n    start_time: DateTimeTZ = Field(default_factory=lambda: pendulum.now(\"UTC\"))\n    input_keyset: Optional[Dict[str, Dict[str, str]]] = None\n    client: PrefectClient\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.SettingsContext","title":"SettingsContext","text":"

Bases: ContextModel

The context for Prefect settings.

This allows for safe concurrent access and modification of settings.

Attributes:

profile (Profile): The profile that is in use.
settings (Settings): The complete settings model.

Source code in prefect/context.py
class SettingsContext(ContextModel):\n    \"\"\"\n    The context for a Prefect settings.\n\n    This allows for safe concurrent access and modification of settings.\n\n    Attributes:\n        profile: The profile that is in use.\n        settings: The complete settings model.\n    \"\"\"\n\n    profile: Profile\n    settings: Settings\n\n    __var__ = ContextVar(\"settings\")\n\n    def __hash__(self) -> int:\n        return hash(self.settings)\n\n    def __enter__(self):\n        \"\"\"\n        Upon entrance, we ensure the home directory for the profile exists.\n        \"\"\"\n        return_value = super().__enter__()\n\n        try:\n            prefect_home = Path(self.settings.value_of(PREFECT_HOME))\n            prefect_home.mkdir(mode=0o0700, exist_ok=True)\n        except OSError:\n            warnings.warn(\n                (\n                    \"Failed to create the Prefect home directory at \"\n                    f\"{self.settings.value_of(PREFECT_HOME)}\"\n                ),\n                stacklevel=2,\n            )\n\n        return return_value\n\n    @classmethod\n    def get(cls) -> \"SettingsContext\":\n        # Return the global context instead of `None` if no context exists\n        return super().get() or GLOBAL_SETTINGS_CONTEXT\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.TagsContext","title":"TagsContext","text":"

Bases: ContextModel

The context for prefect.tags management.

Attributes:

current_tags (Set[str]): A set of current tags in the context

Source code in prefect/context.py
class TagsContext(ContextModel):\n    \"\"\"\n    The context for `prefect.tags` management.\n\n    Attributes:\n        current_tags: A set of current tags in the context\n    \"\"\"\n\n    current_tags: Set[str] = Field(default_factory=set)\n\n    @classmethod\n    def get(cls) -> \"TagsContext\":\n        # Return an empty `TagsContext` instead of `None` if no context exists\n        return cls.__var__.get(TagsContext())\n\n    __var__ = ContextVar(\"tags\")\n
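
As a brief, hedged illustration (not part of the source listing above), the active tag set can be read directly from TagsContext while inside a tags block:

from prefect import tags\nfrom prefect.context import TagsContext\n\n# Sketch only: TagsContext.get() returns an empty context (not None) when no\n# tags have been applied, so current_tags is always safe to read.\nwith tags(\"etl\", \"nightly\"):\n    current = TagsContext.get().current_tags\n    print(sorted(current))  # ['etl', 'nightly']\n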
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.TaskRunContext","title":"TaskRunContext","text":"

Bases: RunContext

The context for a task run. Data in this context is only available from within a task run function.

Attributes:

task (Task): The task instance associated with the task run

task_run (TaskRun): The API metadata for this task run

Source code in prefect/context.py
class TaskRunContext(RunContext):\n    \"\"\"\n    The context for a task run. Data in this context is only available from within a\n    task run function.\n\n    Attributes:\n        task: The task instance associated with the task run\n        task_run: The API metadata for this task run\n    \"\"\"\n\n    task: \"Task\"\n    task_run: TaskRun\n    log_prints: bool = False\n    parameters: Dict[str, Any]\n\n    # Result handling\n    result_factory: ResultFactory\n\n    __var__ = ContextVar(\"task_run\")\n
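
A hedged usage sketch (the task and flow names are illustrative): inside a task, TaskRunContext.get() exposes the task, the task_run metadata, and inherited RunContext fields such as start_time:

from prefect import flow, task\nfrom prefect.context import TaskRunContext\n\n@task\ndef describe_myself():\n    # Returns None when called outside of a task run\n    ctx = TaskRunContext.get()\n    if ctx:\n        print(ctx.task.name, ctx.task_run.id, ctx.start_time)\n\n@flow\ndef my_flow():\n    describe_myself()\n\nif __name__ == \"__main__\":\n    my_flow()\n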
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.get_run_context","title":"get_run_context","text":"

Get the current run context from within a task or flow function.

Returns:

Union[FlowRunContext, TaskRunContext]: A FlowRunContext or TaskRunContext depending on the function type.

Raises RuntimeError if called outside of a flow or task run.

Source code in prefect/context.py
def get_run_context() -> Union[FlowRunContext, TaskRunContext]:\n    \"\"\"\n    Get the current run context from within a task or flow function.\n\n    Returns:\n        A `FlowRunContext` or `TaskRunContext` depending on the function type.\n\n    Raises\n        RuntimeError: If called outside of a flow or task run.\n    \"\"\"\n    task_run_ctx = TaskRunContext.get()\n    if task_run_ctx:\n        return task_run_ctx\n\n    flow_run_ctx = FlowRunContext.get()\n    if flow_run_ctx:\n        return flow_run_ctx\n\n    raise MissingContextError(\n        \"No run context available. You are not in a flow or task run context.\"\n    )\n
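
For example, a minimal sketch of calling get_run_context from both a flow and a task (names here are illustrative):

from prefect import flow, task\nfrom prefect.context import get_run_context\n\n@task\ndef inside_task():\n    print(type(get_run_context()).__name__)  # TaskRunContext\n\n@flow\ndef inside_flow():\n    print(type(get_run_context()).__name__)  # FlowRunContext\n    inside_task()\n\nif __name__ == \"__main__\":\n    inside_flow()\n    # Calling get_run_context() here, outside any run, raises an error instead.\n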
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.get_settings_context","title":"get_settings_context","text":"

Get the current settings context which contains profile information and the settings that are being used.

Generally, the settings that are being used are a combination of values from the profile and environment. See prefect.context.use_profile for more details.

Source code in prefect/context.py
def get_settings_context() -> SettingsContext:\n    \"\"\"\n    Get the current settings context which contains profile information and the\n    settings that are being used.\n\n    Generally, the settings that are being used are a combination of values from the\n    profile and environment. See `prefect.context.use_profile` for more details.\n    \"\"\"\n    settings_ctx = SettingsContext.get()\n\n    if not settings_ctx:\n        raise MissingContextError(\"No settings context found.\")\n\n    return settings_ctx\n
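
A small, hedged example of inspecting the active profile and one resolved setting (PREFECT_API_URL is used only as an illustration):

from prefect.context import get_settings_context\nfrom prefect.settings import PREFECT_API_URL\n\nsettings_ctx = get_settings_context()\nprint(settings_ctx.profile.name)                         # e.g. 'default'\nprint(settings_ctx.settings.value_of(PREFECT_API_URL))   # e.g. None if unset\n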
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.registry_from_script","title":"registry_from_script","text":"

Return a fresh registry with instances populated from execution of a script.

Source code in prefect/context.py
def registry_from_script(\n    path: str,\n    block_code_execution: bool = True,\n    capture_failures: bool = True,\n) -> PrefectObjectRegistry:\n    \"\"\"\n    Return a fresh registry with instances populated from execution of a script.\n    \"\"\"\n    with PrefectObjectRegistry(\n        block_code_execution=block_code_execution,\n        capture_failures=capture_failures,\n    ) as registry:\n        load_script_as_module(path)\n\n    return registry\n
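
As a sketch (the script path is hypothetical), loading a script this way registers flow and task instances without executing flow calls at module level, since code execution is blocked during loading:

from prefect.context import registry_from_script\n\nregistry = registry_from_script(\"my_flows.py\")  # hypothetical script defining flows\n# block_code_execution and capture_failures mirror the arguments passed above\nprint(registry.block_code_execution, registry.capture_failures)\n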
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.root_settings_context","title":"root_settings_context","text":"

Return the settings context that will exist as the root context for the module.

The profile to use is determined with the following precedence:

  • Command line via 'prefect --profile <name>'
  • Environment variable via 'PREFECT_PROFILE'
  • Profiles file via the 'active' key

Source code in prefect/context.py
def root_settings_context():\n    \"\"\"\n    Return the settings context that will exist as the root context for the module.\n\n    The profile to use is determined with the following precedence\n    - Command line via 'prefect --profile <name>'\n    - Environment variable via 'PREFECT_PROFILE'\n    - Profiles file via the 'active' key\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    active_name = profiles.active_name\n    profile_source = \"in the profiles file\"\n\n    if \"PREFECT_PROFILE\" in os.environ:\n        active_name = os.environ[\"PREFECT_PROFILE\"]\n        profile_source = \"by environment variable\"\n\n    if (\n        sys.argv[0].endswith(\"/prefect\")\n        and len(sys.argv) >= 3\n        and sys.argv[1] == \"--profile\"\n    ):\n        active_name = sys.argv[2]\n        profile_source = \"by command line argument\"\n\n    if active_name not in profiles.names:\n        print(\n            (\n                f\"WARNING: Active profile {active_name!r} set {profile_source} not \"\n                \"found. The default profile will be used instead. \"\n            ),\n            file=sys.stderr,\n        )\n        active_name = \"default\"\n\n    with use_profile(\n        profiles[active_name],\n        # Override environment variables if the profile was set by the CLI\n        override_environment_variables=profile_source == \"by command line argument\",\n    ) as settings_context:\n        return settings_context\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.tags","title":"tags","text":"

Context manager to add tags to flow and task run calls.

Tags are always combined with any existing tags.

Yields:

Set[str]: The current set of tags

Examples:

>>> from prefect import tags, task, flow\n>>> @task\n>>> def my_task():\n>>>     pass\n

Run a task with tags

>>> @flow\n>>> def my_flow():\n>>>     with tags(\"a\", \"b\"):\n>>>         my_task()  # has tags: a, b\n

Run a flow with tags

>>> @flow\n>>> def my_flow():\n>>>     pass\n>>> with tags(\"a\", \"b\"):\n>>>     my_flow()  # has tags: a, b\n

Run a task with nested tag contexts

>>> @flow\n>>> def my_flow():\n>>>     with tags(\"a\", \"b\"):\n>>>         with tags(\"c\", \"d\"):\n>>>             my_task()  # has tags: a, b, c, d\n>>>         my_task()  # has tags: a, b\n

Inspect the current tags

>>> @flow\n>>> def my_flow():\n>>>     with tags(\"c\", \"d\"):\n>>>         with tags(\"e\", \"f\") as current_tags:\n>>>              print(current_tags)\n>>> with tags(\"a\", \"b\"):\n>>>     my_flow()\n{\"a\", \"b\", \"c\", \"d\", \"e\", \"f\"}\n
Source code in prefect/context.py
@contextmanager\ndef tags(*new_tags: str) -> Generator[Set[str], None, None]:\n    \"\"\"\n    Context manager to add tags to flow and task run calls.\n\n    Tags are always combined with any existing tags.\n\n    Yields:\n        The current set of tags\n\n    Examples:\n        >>> from prefect import tags, task, flow\n        >>> @task\n        >>> def my_task():\n        >>>     pass\n\n        Run a task with tags\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     with tags(\"a\", \"b\"):\n        >>>         my_task()  # has tags: a, b\n\n        Run a flow with tags\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     pass\n        >>> with tags(\"a\", \"b\"):\n        >>>     my_flow()  # has tags: a, b\n\n        Run a task with nested tag contexts\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     with tags(\"a\", \"b\"):\n        >>>         with tags(\"c\", \"d\"):\n        >>>             my_task()  # has tags: a, b, c, d\n        >>>         my_task()  # has tags: a, b\n\n        Inspect the current tags\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     with tags(\"c\", \"d\"):\n        >>>         with tags(\"e\", \"f\") as current_tags:\n        >>>              print(current_tags)\n        >>> with tags(\"a\", \"b\"):\n        >>>     my_flow()\n        {\"a\", \"b\", \"c\", \"d\", \"e\", \"f\"}\n    \"\"\"\n    current_tags = TagsContext.get().current_tags\n    new_tags = current_tags.union(new_tags)\n    with TagsContext(current_tags=new_tags):\n        yield new_tags\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.use_profile","title":"use_profile","text":"

Switch to a profile for the duration of this context.

Profile contexts are confined to an async context in a single thread.

Parameters:

profile (Union[Profile, str], required): The name of the profile to load or an instance of a Profile.

override_environment_variables (bool, defaults to False): If set, variables in the profile will take precedence over current environment variables. By default, environment variables will override profile settings.

include_current_context (bool, defaults to True): If set, the new settings will be constructed with the current settings context as a base. If not set, the settings will be loaded from the environment and defaults.

Yields:

The created SettingsContext object

Source code in prefect/context.py
@contextmanager\ndef use_profile(\n    profile: Union[Profile, str],\n    override_environment_variables: bool = False,\n    include_current_context: bool = True,\n):\n    \"\"\"\n    Switch to a profile for the duration of this context.\n\n    Profile contexts are confined to an async context in a single thread.\n\n    Args:\n        profile: The name of the profile to load or an instance of a Profile.\n        override_environment_variable: If set, variables in the profile will take\n            precedence over current environment variables. By default, environment\n            variables will override profile settings.\n        include_current_context: If set, the new settings will be constructed\n            with the current settings context as a base. If not set, the use_base settings\n            will be loaded from the environment and defaults.\n\n    Yields:\n        The created `SettingsContext` object\n    \"\"\"\n    if isinstance(profile, str):\n        profiles = prefect.settings.load_profiles()\n        profile = profiles[profile]\n\n    if not isinstance(profile, Profile):\n        raise TypeError(\n            f\"Unexpected type {type(profile).__name__!r} for `profile`. \"\n            \"Expected 'str' or 'Profile'.\"\n        )\n\n    # Create a copy of the profiles settings as we will mutate it\n    profile_settings = profile.settings.copy()\n\n    existing_context = SettingsContext.get()\n    if existing_context and include_current_context:\n        settings = existing_context.settings\n    else:\n        settings = prefect.settings.get_settings_from_env()\n\n    if not override_environment_variables:\n        for key in os.environ:\n            if key in prefect.settings.SETTING_VARIABLES:\n                profile_settings.pop(prefect.settings.SETTING_VARIABLES[key], None)\n\n    new_settings = settings.copy_with_update(updates=profile_settings)\n\n    with SettingsContext(profile=profile, settings=new_settings) as ctx:\n        yield ctx\n
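
For instance, a hedged sketch of temporarily switching profiles ('staging' is a hypothetical profile that must already exist in your profiles file):

from prefect.context import use_profile\n\nwith use_profile(\"staging\") as ctx:\n    # ctx is the SettingsContext created for the duration of the block\n    print(ctx.profile.name)  # 'staging'\n\n# On exit, the previous settings context is active again.\n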
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/engine/","title":"prefect.engine","text":"","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine","title":"prefect.engine","text":"

Client-side execution and orchestration of flows and tasks.

","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine--engine-process-overview","title":"Engine process overview","text":"","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine--flows","title":"Flows","text":"
  • The flow is called by the user or an existing flow run is executed in a new process.

    See Flow.__call__ and prefect.engine.__main__ (python -m prefect.engine)

  • A synchronous function acts as an entrypoint to the engine. The engine executes on a dedicated \"global loop\" thread. For asynchronous flow calls, we return a coroutine from the entrypoint so the user can enter the engine without blocking their event loop (see the sketch after this list).

    See enter_flow_run_engine_from_flow_call, enter_flow_run_engine_from_subprocess

  • The thread that calls the entrypoint waits until orchestration of the flow run completes. This thread is referred to as the \"user\" thread and is usually the \"main\" thread. The thread is not blocked while waiting; it allows the engine to send work back to it. This allows us to send calls back to the user thread from the global loop thread.

    See wait_for_call_in_loop_thread and call_soon_in_waiting_thread

  • The asynchronous engine branches depending on if the flow run exists already and if there is a parent flow run in the current context.

    See create_then_begin_flow_run, create_and_begin_subflow_run, and retrieve_flow_then_begin_flow_run

  • The asynchronous engine prepares for execution of the flow run. This includes starting the task runner, preparing context, etc.

    See begin_flow_run

  • The flow run is orchestrated through states, calling the user's function as necessary. Generally, the user's function is sent for execution on the user thread. If the flow function cannot be safely executed on the user thread, e.g. it is a synchronous child in an asynchronous parent, it will be scheduled on a worker thread instead.

    See orchestrate_flow_run, call_soon_in_waiting_thread, call_soon_in_new_thread
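
As referenced above, a minimal sketch of the user-facing side of this behavior: calling an async flow returns a coroutine that enters the engine only when awaited (the flow body here is illustrative):

import asyncio\nfrom prefect import flow\n\n@flow\nasync def my_async_flow():\n    return 42\n\nasync def main():\n    # The flow call returns a coroutine; awaiting it runs the engine without\n    # blocking the caller's event loop.\n    print(await my_async_flow())\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n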

","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine--tasks","title":"Tasks","text":"
  • The task is called or submitted by the user. We require that this is always within a flow.

    See Task.__call__ and Task.submit

  • A synchronous function acts as an entrypoint to the engine. Unlike flow calls, this will not block until completion if submit was used.

    See enter_task_run_engine

  • A future is created for the task call. Creation of the task run and submission to the task runner is scheduled as a background task so submission of many tasks can occur concurrently.

    See create_task_run_future and create_task_run_then_submit

  • The engine branches depending on whether a future, state, or result is requested. If a future is requested, it is returned immediately to the user thread. Otherwise, the engine will wait for the task run to complete and return the final state or result (see the sketch after this list).

    See get_task_call_return_value

  • An engine function is submitted to the task runner. The task runner will schedule this function for execution on a worker. When executed, it will prepare for orchestration and wait for completion of the run.

    See create_task_run_then_submit and begin_task_run

  • The task run is orchestrated through states, calling the user's function as necessary. The user's function is always executed in a worker thread for isolation.

    See orchestrate_task_run, call_soon_in_new_thread

    Ideally, for local and sequential task runners we would send the task run to the user thread as we do for flows. See #9855.
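
As noted in the list above, a short sketch of the future / state / result behavior from the user's perspective (the task and flow names are illustrative):

from prefect import flow, task\n\n@task\ndef add(x, y):\n    return x + y\n\n@flow\ndef my_flow():\n    future = add.submit(1, 2)   # returns a PrefectFuture immediately\n    future.wait()               # block until the task run reaches a final state\n    print(future.result())      # 3\n\nif __name__ == \"__main__\":\n    my_flow()\n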

","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.begin_flow_run","title":"begin_flow_run async","text":"

Begins execution of a flow run; blocks until completion of the flow run

  • Starts a task runner
  • Determines the result storage block to use
  • Orchestrates the flow run (runs the user-function and generates tasks)
  • Waits for tasks to complete / shuts down the task runner
  • Sets a terminal state for the flow run

Note that the flow_run contains a parameters attribute, which is the serialized parameters sent to the backend, while the parameters argument here should be the deserialized and validated dictionary of Python objects.

Returns:

State: The final state of the run

Source code in prefect/engine.py
async def begin_flow_run(\n    flow: Flow,\n    flow_run: FlowRun,\n    parameters: Dict[str, Any],\n    client: PrefectClient,\n    user_thread: threading.Thread,\n) -> State:\n    \"\"\"\n    Begins execution of a flow run; blocks until completion of the flow run\n\n    - Starts a task runner\n    - Determines the result storage block to use\n    - Orchestrates the flow run (runs the user-function and generates tasks)\n    - Waits for tasks to complete / shutsdown the task runner\n    - Sets a terminal state for the flow run\n\n    Note that the `flow_run` contains a `parameters` attribute which is the serialized\n    parameters sent to the backend while the `parameters` argument here should be the\n    deserialized and validated dictionary of python objects.\n\n    Returns:\n        The final state of the run\n    \"\"\"\n    logger = flow_run_logger(flow_run, flow)\n\n    log_prints = should_log_prints(flow)\n    flow_run_context = FlowRunContext.construct(log_prints=log_prints)\n\n    async with AsyncExitStack() as stack:\n        await stack.enter_async_context(\n            report_flow_run_crashes(flow_run=flow_run, client=client, flow=flow)\n        )\n\n        # Create a task group for background tasks\n        flow_run_context.background_tasks = await stack.enter_async_context(\n            anyio.create_task_group()\n        )\n\n        # If the flow is async, we need to provide a portal so sync tasks can run\n        flow_run_context.sync_portal = (\n            stack.enter_context(start_blocking_portal()) if flow.isasync else None\n        )\n\n        task_runner = flow.task_runner.duplicate()\n        if task_runner is NotImplemented:\n            # Backwards compatibility; will not support concurrent flow runs\n            task_runner = flow.task_runner\n            logger.warning(\n                f\"Task runner {type(task_runner).__name__!r} does not implement the\"\n                \" `duplicate` method and will fail if used for concurrent execution of\"\n                \" the same flow.\"\n            )\n\n        logger.debug(\n            f\"Starting {type(flow.task_runner).__name__!r}; submitted tasks \"\n            f\"will be run {CONCURRENCY_MESSAGES[flow.task_runner.concurrency_type]}...\"\n        )\n\n        flow_run_context.task_runner = await stack.enter_async_context(\n            task_runner.start()\n        )\n\n        flow_run_context.result_factory = await ResultFactory.from_flow(\n            flow, client=client\n        )\n\n        if log_prints:\n            stack.enter_context(patch_print())\n\n        terminal_or_paused_state = await orchestrate_flow_run(\n            flow,\n            flow_run=flow_run,\n            parameters=parameters,\n            wait_for=None,\n            client=client,\n            partial_flow_run_context=flow_run_context,\n            # Orchestration needs to be interruptible if it has a timeout\n            interruptible=flow.timeout_seconds is not None,\n            user_thread=user_thread,\n        )\n\n    if terminal_or_paused_state.is_paused():\n        timeout = terminal_or_paused_state.state_details.pause_timeout\n        msg = \"Currently paused and suspending execution.\"\n        if timeout:\n            msg += f\" Resume before {timeout.to_rfc3339_string()} to finish execution.\"\n        logger.log(level=logging.INFO, msg=msg)\n        await APILogHandler.aflush()\n\n        return terminal_or_paused_state\n    else:\n        terminal_state = terminal_or_paused_state\n\n    # If debugging, use the more 
complete `repr` than the usual `str` description\n    display_state = repr(terminal_state) if PREFECT_DEBUG_MODE else str(terminal_state)\n\n    logger.log(\n        level=logging.INFO if terminal_state.is_completed() else logging.ERROR,\n        msg=f\"Finished in state {display_state}\",\n    )\n\n    # When a \"root\" flow run finishes, flush logs so we do not have to rely on handling\n    # during interpreter shutdown\n    await APILogHandler.aflush()\n\n    return terminal_state\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.begin_task_map","title":"begin_task_map async","text":"

Async entrypoint for task mapping

Source code in prefect/engine.py
async def begin_task_map(\n    task: Task,\n    flow_run_context: Optional[FlowRunContext],\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    return_type: EngineReturnType,\n    task_runner: Optional[BaseTaskRunner],\n    autonomous: bool = False,\n) -> List[Union[PrefectFuture, Awaitable[PrefectFuture], TaskRun]]:\n    \"\"\"Async entrypoint for task mapping\"\"\"\n    # We need to resolve some futures to map over their data, collect the upstream\n    # links beforehand to retain relationship tracking.\n    task_inputs = {\n        k: await collect_task_run_inputs(v, max_depth=0) for k, v in parameters.items()\n    }\n\n    # Resolve the top-level parameters in order to get mappable data of a known length.\n    # Nested parameters will be resolved in each mapped child where their relationships\n    # will also be tracked.\n    parameters = await resolve_inputs(parameters, max_depth=1)\n\n    # Ensure that any parameters in kwargs are expanded before this check\n    parameters = explode_variadic_parameter(task.fn, parameters)\n\n    iterable_parameters = {}\n    static_parameters = {}\n    annotated_parameters = {}\n    for key, val in parameters.items():\n        if isinstance(val, (allow_failure, quote)):\n            # Unwrap annotated parameters to determine if they are iterable\n            annotated_parameters[key] = val\n            val = val.unwrap()\n\n        if isinstance(val, unmapped):\n            static_parameters[key] = val.value\n        elif isiterable(val):\n            iterable_parameters[key] = list(val)\n        else:\n            static_parameters[key] = val\n\n    if not len(iterable_parameters):\n        raise MappingMissingIterable(\n            \"No iterable parameters were received. Parameters for map must \"\n            f\"include at least one iterable. Parameters: {parameters}\"\n        )\n\n    iterable_parameter_lengths = {\n        key: len(val) for key, val in iterable_parameters.items()\n    }\n    lengths = set(iterable_parameter_lengths.values())\n    if len(lengths) > 1:\n        raise MappingLengthMismatch(\n            \"Received iterable parameters with different lengths. Parameters for map\"\n            f\" must all be the same length. 
Got lengths: {iterable_parameter_lengths}\"\n        )\n\n    map_length = list(lengths)[0]\n\n    task_runs = []\n    for i in range(map_length):\n        call_parameters = {key: value[i] for key, value in iterable_parameters.items()}\n        call_parameters.update({key: value for key, value in static_parameters.items()})\n\n        # Add default values for parameters; these are skipped earlier since they should\n        # not be mapped over\n        for key, value in get_parameter_defaults(task.fn).items():\n            call_parameters.setdefault(key, value)\n\n        # Re-apply annotations to each key again\n        for key, annotation in annotated_parameters.items():\n            call_parameters[key] = annotation.rewrap(call_parameters[key])\n\n        # Collapse any previously exploded kwargs\n        call_parameters = collapse_variadic_parameters(task.fn, call_parameters)\n\n        if autonomous:\n            task_runs.append(\n                await create_autonomous_task_run(\n                    task=task,\n                    parameters=call_parameters,\n                )\n            )\n        else:\n            task_runs.append(\n                partial(\n                    get_task_call_return_value,\n                    task=task,\n                    flow_run_context=flow_run_context,\n                    parameters=call_parameters,\n                    wait_for=wait_for,\n                    return_type=return_type,\n                    task_runner=task_runner,\n                    extra_task_inputs=task_inputs,\n                )\n            )\n\n    if autonomous:\n        return task_runs\n\n    # Maintain the order of the task runs when using the sequential task runner\n    runner = task_runner if task_runner else flow_run_context.task_runner\n    if runner.concurrency_type == TaskConcurrencyType.SEQUENTIAL:\n        return [await task_run() for task_run in task_runs]\n\n    return await gather(*task_runs)\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.begin_task_run","title":"begin_task_run async","text":"

Entrypoint for task run execution.

This function is intended for submission to the task runner.

This method may be called from a worker so we ensure the settings context has been entered. For example, with a runner that is executing tasks in the same event loop, we will likely not enter the context again because the current context already matches:

main thread:
--> Flow called with settings A
--> begin_task_run executes same event loop
--> Profile A matches and is not entered again

However, with execution on a remote environment, we are going to need to ensure the settings for the task run are respected by entering the context:

main thread:
--> Flow called with settings A
--> begin_task_run is scheduled on a remote worker, settings A is serialized
remote worker:
--> Remote worker imports Prefect (may not occur)
--> Global settings is loaded with default settings
--> begin_task_run executes on a different event loop than the flow
--> Current settings is not set or does not match, settings A is entered

Source code in prefect/engine.py
async def begin_task_run(\n    task: Task,\n    task_run: TaskRun,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    result_factory: ResultFactory,\n    log_prints: bool,\n    settings: prefect.context.SettingsContext,\n):\n    \"\"\"\n    Entrypoint for task run execution.\n\n    This function is intended for submission to the task runner.\n\n    This method may be called from a worker so we ensure the settings context has been\n    entered. For example, with a runner that is executing tasks in the same event loop,\n    we will likely not enter the context again because the current context already\n    matches:\n\n    main thread:\n    --> Flow called with settings A\n    --> `begin_task_run` executes same event loop\n    --> Profile A matches and is not entered again\n\n    However, with execution on a remote environment, we are going to need to ensure the\n    settings for the task run are respected by entering the context:\n\n    main thread:\n    --> Flow called with settings A\n    --> `begin_task_run` is scheduled on a remote worker, settings A is serialized\n    remote worker:\n    --> Remote worker imports Prefect (may not occur)\n    --> Global settings is loaded with default settings\n    --> `begin_task_run` executes on a different event loop than the flow\n    --> Current settings is not set or does not match, settings A is entered\n    \"\"\"\n    maybe_flow_run_context = prefect.context.FlowRunContext.get()\n\n    async with AsyncExitStack() as stack:\n        # The settings context may be null on a remote worker so we use the safe `.get`\n        # method and compare it to the settings required for this task run\n        if prefect.context.SettingsContext.get() != settings:\n            stack.enter_context(settings)\n            setup_logging()\n\n        if maybe_flow_run_context:\n            # Accessible if on a worker that is running in the same thread as the flow\n            client = maybe_flow_run_context.client\n            # Only run the task in an interruptible thread if it in the same thread as\n            # the flow _and_ the flow run has a timeout attached. 
If the task is on a\n            # worker, the flow run timeout will not be raised in the worker process.\n            interruptible = maybe_flow_run_context.timeout_scope is not None\n        else:\n            # Otherwise, retrieve a new clien`t\n            client = await stack.enter_async_context(get_client())\n            interruptible = False\n            await stack.enter_async_context(anyio.create_task_group())\n\n        await stack.enter_async_context(report_task_run_crashes(task_run, client))\n\n        # TODO: Use the background tasks group to manage logging for this task\n\n        if log_prints:\n            stack.enter_context(patch_print())\n\n        await check_api_reachable(\n            client, f\"Cannot orchestrate task run '{task_run.id}'\"\n        )\n        try:\n            state = await orchestrate_task_run(\n                task=task,\n                task_run=task_run,\n                parameters=parameters,\n                wait_for=wait_for,\n                result_factory=result_factory,\n                log_prints=log_prints,\n                interruptible=interruptible,\n                client=client,\n            )\n\n            if not maybe_flow_run_context:\n                # When a a task run finishes on a remote worker flush logs to prevent\n                # loss if the process exits\n                await APILogHandler.aflush()\n\n        except Abort as abort:\n            # Task run probably already completed, fetch its state\n            task_run = await client.read_task_run(task_run.id)\n\n            if task_run.state.is_final():\n                task_run_logger(task_run).info(\n                    f\"Task run '{task_run.id}' already finished.\"\n                )\n            else:\n                # TODO: This is a concerning case; we should determine when this occurs\n                #       1. This can occur when the flow run is not in a running state\n                task_run_logger(task_run).warning(\n                    f\"Task run '{task_run.id}' received abort during orchestration: \"\n                    f\"{abort} Task run is in {task_run.state.type.value} state.\"\n                )\n            state = task_run.state\n\n        except Pause:\n            # A pause signal here should mean the flow run suspended, so we\n            # should do the same. We'll look up the flow run's pause state to\n            # try and reuse it, so we capture any data like timeouts.\n            flow_run = await client.read_flow_run(task_run.flow_run_id)\n            if flow_run.state and flow_run.state.is_paused():\n                state = flow_run.state\n            else:\n                state = Suspended()\n\n            task_run_logger(task_run).info(\n                \"Task run encountered a pause signal during orchestration.\"\n            )\n\n        return state\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.create_and_begin_subflow_run","title":"create_and_begin_subflow_run async","text":"

Async entrypoint for flow calls within a flow run

Subflows differ from parent flows in that they:

  • Resolve futures in passed parameters into values
  • Create a dummy task for representation in the parent flow
  • Retrieve default result storage from the parent flow rather than the server

Returns:

Any: The final state of the run

Source code in prefect/engine.py
@inject_client\nasync def create_and_begin_subflow_run(\n    flow: Flow,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    return_type: EngineReturnType,\n    client: PrefectClient,\n    user_thread: threading.Thread,\n) -> Any:\n    \"\"\"\n    Async entrypoint for flows calls within a flow run\n\n    Subflows differ from parent flows in that they\n    - Resolve futures in passed parameters into values\n    - Create a dummy task for representation in the parent flow\n    - Retrieve default result storage from the parent flow rather than the server\n\n    Returns:\n        The final state of the run\n    \"\"\"\n    parent_flow_run_context = FlowRunContext.get()\n    parent_logger = get_run_logger(parent_flow_run_context)\n    log_prints = should_log_prints(flow)\n    terminal_state = None\n\n    parent_logger.debug(f\"Resolving inputs to {flow.name!r}\")\n    task_inputs = {k: await collect_task_run_inputs(v) for k, v in parameters.items()}\n\n    if wait_for:\n        task_inputs[\"wait_for\"] = await collect_task_run_inputs(wait_for)\n\n    rerunning = (\n        parent_flow_run_context.flow_run.run_count > 1\n        if getattr(parent_flow_run_context, \"flow_run\", None)\n        else False\n    )\n\n    # Generate a task in the parent flow run to represent the result of the subflow run\n    dummy_task = Task(name=flow.name, fn=flow.fn, version=flow.version)\n    parent_task_run = await client.create_task_run(\n        task=dummy_task,\n        flow_run_id=(\n            parent_flow_run_context.flow_run.id\n            if getattr(parent_flow_run_context, \"flow_run\", None)\n            else None\n        ),\n        dynamic_key=_dynamic_key_for_task_run(parent_flow_run_context, dummy_task),\n        task_inputs=task_inputs,\n        state=Pending(),\n    )\n\n    # Resolve any task futures in the input\n    parameters = await resolve_inputs(parameters)\n\n    if parent_task_run.state.is_final() and not (\n        rerunning and not parent_task_run.state.is_completed()\n    ):\n        # Retrieve the most recent flow run from the database\n        flow_runs = await client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                parent_task_run_id={\"any_\": [parent_task_run.id]}\n            ),\n            sort=FlowRunSort.EXPECTED_START_TIME_ASC,\n        )\n        flow_run = flow_runs[-1]\n\n        # Set up variables required downstream\n        terminal_state = flow_run.state\n        logger = flow_run_logger(flow_run, flow)\n\n    else:\n        flow_run = await client.create_flow_run(\n            flow,\n            parameters=flow.serialize_parameters(parameters),\n            parent_task_run_id=parent_task_run.id,\n            state=parent_task_run.state if not rerunning else Pending(),\n            tags=TagsContext.get().current_tags,\n        )\n\n        parent_logger.info(\n            f\"Created subflow run {flow_run.name!r} for flow {flow.name!r}\"\n        )\n\n        logger = flow_run_logger(flow_run, flow)\n        ui_url = PREFECT_UI_URL.value()\n        if ui_url:\n            logger.info(\n                f\"View at {ui_url}/flow-runs/flow-run/{flow_run.id}\",\n                extra={\"send_to_api\": False},\n            )\n\n        result_factory = await ResultFactory.from_flow(\n            flow, client=parent_flow_run_context.client\n        )\n\n        if flow.should_validate_parameters:\n            try:\n                parameters = flow.validate_parameters(parameters)\n            except 
Exception:\n                message = \"Validation of flow parameters failed with error:\"\n                logger.exception(message)\n                terminal_state = await propose_state(\n                    client,\n                    state=await exception_to_failed_state(\n                        message=message, result_factory=result_factory\n                    ),\n                    flow_run_id=flow_run.id,\n                )\n\n        if terminal_state is None or not terminal_state.is_final():\n            async with AsyncExitStack() as stack:\n                await stack.enter_async_context(\n                    report_flow_run_crashes(flow_run=flow_run, client=client, flow=flow)\n                )\n\n                task_runner = flow.task_runner.duplicate()\n                if task_runner is NotImplemented:\n                    # Backwards compatibility; will not support concurrent flow runs\n                    task_runner = flow.task_runner\n                    logger.warning(\n                        f\"Task runner {type(task_runner).__name__!r} does not implement\"\n                        \" the `duplicate` method and will fail if used for concurrent\"\n                        \" execution of the same flow.\"\n                    )\n\n                await stack.enter_async_context(task_runner.start())\n\n                if log_prints:\n                    stack.enter_context(patch_print())\n\n                terminal_state = await orchestrate_flow_run(\n                    flow,\n                    flow_run=flow_run,\n                    parameters=parameters,\n                    wait_for=wait_for,\n                    # If the parent flow run has a timeout, then this one needs to be\n                    # interruptible as well\n                    interruptible=parent_flow_run_context.timeout_scope is not None,\n                    client=client,\n                    partial_flow_run_context=FlowRunContext.construct(\n                        sync_portal=parent_flow_run_context.sync_portal,\n                        task_runner=task_runner,\n                        background_tasks=parent_flow_run_context.background_tasks,\n                        result_factory=result_factory,\n                        log_prints=log_prints,\n                    ),\n                    user_thread=user_thread,\n                )\n\n    # Display the full state (including the result) if debugging\n    display_state = repr(terminal_state) if PREFECT_DEBUG_MODE else str(terminal_state)\n    logger.log(\n        level=logging.INFO if terminal_state.is_completed() else logging.ERROR,\n        msg=f\"Finished in state {display_state}\",\n    )\n\n    # Track the subflow state so the parent flow can use it to determine its final state\n    parent_flow_run_context.flow_run_states.append(terminal_state)\n\n    if return_type == \"state\":\n        return terminal_state\n    elif return_type == \"result\":\n        return await terminal_state.result(fetch=True)\n    else:\n        raise ValueError(f\"Invalid return type for flow engine {return_type!r}.\")\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.create_autonomous_task_run","title":"create_autonomous_task_run async","text":"

Create a task run in the API for an autonomous task submission and store the provided parameters using the existing result storage mechanism.

Source code in prefect/engine.py
async def create_autonomous_task_run(task: Task, parameters: Dict[str, Any]) -> TaskRun:\n    \"\"\"Create a task run in the API for an autonomous task submission and store\n    the provided parameters using the existing result storage mechanism.\n    \"\"\"\n    async with get_client() as client:\n        state = Scheduled()\n        if parameters:\n            parameters_id = uuid4()\n            state.state_details.task_parameters_id = parameters_id\n\n            # TODO: Improve use of result storage for parameter storage / reference\n            task.persist_result = True\n\n            factory = await ResultFactory.from_autonomous_task(task, client=client)\n            await factory.store_parameters(parameters_id, parameters)\n\n        task_run = await client.create_task_run(\n            task=task,\n            flow_run_id=None,\n            dynamic_key=f\"{task.task_key}-{str(uuid4())[:NUM_CHARS_DYNAMIC_KEY]}\",\n            state=state,\n        )\n\n        engine_logger.debug(f\"Submitted run of task {task.name!r} for execution\")\n\n    return task_run\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.create_then_begin_flow_run","title":"create_then_begin_flow_run async","text":"

Async entrypoint for flow calls

Creates the flow run in the backend, then enters the main flow run engine.

Source code in prefect/engine.py
@inject_client\nasync def create_then_begin_flow_run(\n    flow: Flow,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    return_type: EngineReturnType,\n    client: PrefectClient,\n    user_thread: threading.Thread,\n) -> Any:\n    \"\"\"\n    Async entrypoint for flow calls\n\n    Creates the flow run in the backend, then enters the main flow run engine.\n    \"\"\"\n    # TODO: Returns a `State` depending on `return_type` and we can add an overload to\n    #       the function signature to clarify this eventually.\n\n    await check_api_reachable(client, \"Cannot create flow run\")\n\n    state = Pending()\n    if flow.should_validate_parameters:\n        try:\n            parameters = flow.validate_parameters(parameters)\n        except Exception:\n            state = await exception_to_failed_state(\n                message=\"Validation of flow parameters failed with error:\"\n            )\n\n    flow_run = await client.create_flow_run(\n        flow,\n        # Send serialized parameters to the backend\n        parameters=flow.serialize_parameters(parameters),\n        state=state,\n        tags=TagsContext.get().current_tags,\n    )\n\n    engine_logger.info(f\"Created flow run {flow_run.name!r} for flow {flow.name!r}\")\n\n    logger = flow_run_logger(flow_run, flow)\n\n    ui_url = PREFECT_UI_URL.value()\n    if ui_url:\n        logger.info(\n            f\"View at {ui_url}/flow-runs/flow-run/{flow_run.id}\",\n            extra={\"send_to_api\": False},\n        )\n\n    if state.is_failed():\n        logger.error(state.message)\n        engine_logger.info(\n            f\"Flow run {flow_run.name!r} received invalid parameters and is marked as\"\n            \" failed.\"\n        )\n    else:\n        state = await begin_flow_run(\n            flow=flow,\n            flow_run=flow_run,\n            parameters=parameters,\n            client=client,\n            user_thread=user_thread,\n        )\n\n    if return_type == \"state\":\n        return state\n    elif return_type == \"result\":\n        return await state.result(fetch=True)\n    else:\n        raise ValueError(f\"Invalid return type for flow engine {return_type!r}.\")\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.enter_flow_run_engine_from_flow_call","title":"enter_flow_run_engine_from_flow_call","text":"

Sync entrypoint for flow calls.

This function does the heavy lifting of ensuring we can get into an async context for flow run execution with minimal overhead.

Source code in prefect/engine.py
def enter_flow_run_engine_from_flow_call(\n    flow: Flow,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    return_type: EngineReturnType,\n) -> Union[State, Awaitable[State]]:\n    \"\"\"\n    Sync entrypoint for flow calls.\n\n    This function does the heavy lifting of ensuring we can get into an async context\n    for flow run execution with minimal overhead.\n    \"\"\"\n    setup_logging()\n\n    registry = PrefectObjectRegistry.get()\n    if registry and registry.block_code_execution:\n        engine_logger.warning(\n            f\"Script loading is in progress, flow {flow.name!r} will not be executed.\"\n            \" Consider updating the script to only call the flow if executed\"\n            f' directly:\\n\\n\\tif __name__ == \"__main__\":\\n\\t\\t{flow.fn.__name__}()'\n        )\n        return None\n\n    parent_flow_run_context = FlowRunContext.get()\n    is_subflow_run = parent_flow_run_context is not None\n\n    if wait_for is not None and not is_subflow_run:\n        raise ValueError(\"Only flows run as subflows can wait for dependencies.\")\n\n    begin_run = create_call(\n        create_and_begin_subflow_run if is_subflow_run else create_then_begin_flow_run,\n        flow=flow,\n        parameters=parameters,\n        wait_for=wait_for,\n        return_type=return_type,\n        client=parent_flow_run_context.client if is_subflow_run else None,\n        user_thread=threading.current_thread(),\n    )\n\n    # On completion of root flows, wait for the global thread to ensure that\n    # any work there is complete\n    done_callbacks = (\n        [create_call(wait_for_global_loop_exit)] if not is_subflow_run else None\n    )\n\n    # WARNING: You must define any context managers here to pass to our concurrency\n    # api instead of entering them in here in the engine entrypoint. Otherwise, async\n    # flows will not use the context as this function _exits_ to return an awaitable to\n    # the user. Generally, you should enter contexts _within_ the async `begin_run`\n    # instead but if you need to enter a context from the main thread you'll need to do\n    # it here.\n    contexts = [capture_sigterm()]\n\n    if flow.isasync and (\n        not is_subflow_run or (is_subflow_run and parent_flow_run_context.flow.isasync)\n    ):\n        # return a coro for the user to await if the flow is async\n        # unless it is an async subflow called in a sync flow\n        retval = from_async.wait_for_call_in_loop_thread(\n            begin_run,\n            done_callbacks=done_callbacks,\n            contexts=contexts,\n        )\n\n    else:\n        retval = from_sync.wait_for_call_in_loop_thread(\n            begin_run,\n            done_callbacks=done_callbacks,\n            contexts=contexts,\n        )\n\n    return retval\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.enter_flow_run_engine_from_subprocess","title":"enter_flow_run_engine_from_subprocess","text":"

Sync entrypoint for flow runs that have been submitted for execution by an agent

Differs from enter_flow_run_engine_from_flow_call in that we have a flow run id but not a flow object. The flow must be retrieved before execution can begin. Additionally, this assumes that the caller is always in a context without an event loop as this should be called from a fresh process.

Source code in prefect/engine.py
def enter_flow_run_engine_from_subprocess(flow_run_id: UUID) -> State:\n    \"\"\"\n    Sync entrypoint for flow runs that have been submitted for execution by an agent\n\n    Differs from `enter_flow_run_engine_from_flow_call` in that we have a flow run id\n    but not a flow object. The flow must be retrieved before execution can begin.\n    Additionally, this assumes that the caller is always in a context without an event\n    loop as this should be called from a fresh process.\n    \"\"\"\n\n    # Ensure collections are imported and have the opportunity to register types before\n    # loading the user code from the deployment\n    prefect.plugins.load_prefect_collections()\n\n    setup_logging()\n\n    state = from_sync.wait_for_call_in_loop_thread(\n        create_call(\n            retrieve_flow_then_begin_flow_run,\n            flow_run_id,\n            user_thread=threading.current_thread(),\n        ),\n        contexts=[capture_sigterm()],\n    )\n\n    APILogHandler.flush()\n    return state\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.enter_task_run_engine","title":"enter_task_run_engine","text":"

Sync entrypoint for task calls

Source code in prefect/engine.py
def enter_task_run_engine(\n    task: Task,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    return_type: EngineReturnType,\n    task_runner: Optional[BaseTaskRunner],\n    mapped: bool,\n) -> Union[PrefectFuture, Awaitable[PrefectFuture], TaskRun]:\n    \"\"\"Sync entrypoint for task calls\"\"\"\n\n    flow_run_context = FlowRunContext.get()\n\n    if not flow_run_context:\n        if return_type == \"future\" or mapped:\n            raise RuntimeError(\n                \" If you meant to submit a background task, you need to set\"\n                \" `prefect config set PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING=true`\"\n                \" and use `your_task.submit()` instead of `your_task()`.\"\n            )\n        from prefect.task_engine import submit_autonomous_task_run_to_engine\n\n        return submit_autonomous_task_run_to_engine(\n            task=task,\n            task_run=None,\n            parameters=parameters,\n            task_runner=task_runner,\n            wait_for=wait_for,\n            return_type=return_type,\n            client=get_client(),\n        )\n\n    if flow_run_context.timeout_scope and flow_run_context.timeout_scope.cancel_called:\n        raise TimeoutError(\"Flow run timed out\")\n\n    begin_run = create_call(\n        begin_task_map if mapped else get_task_call_return_value,\n        task=task,\n        flow_run_context=flow_run_context,\n        parameters=parameters,\n        wait_for=wait_for,\n        return_type=return_type,\n        task_runner=task_runner,\n    )\n\n    if task.isasync and (\n        flow_run_context.flow is None or flow_run_context.flow.isasync\n    ):\n        # return a coro for the user to await if an async task in an async flow\n        return from_async.wait_for_call_in_loop_thread(begin_run)\n    else:\n        return from_sync.wait_for_call_in_loop_thread(begin_run)\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.orchestrate_flow_run","title":"orchestrate_flow_run async","text":"

Executes a flow run.

Note on flow timeouts

Since async flows are run directly in the main event loop, timeout behavior will match that described by anyio. If the flow is awaiting something, it will immediately return; otherwise, the next time it awaits it will exit. Sync flows are run in a worker thread, which cannot be interrupted. The worker thread will exit at the next task call. The worker thread also has access to the status of the cancellation scope at FlowRunContext.timeout_scope.cancel_called which allows it to raise a TimeoutError to respect the timeout.

Returns:

State: The final state of the run

Source code in prefect/engine.py
async def orchestrate_flow_run(\n    flow: Flow,\n    flow_run: FlowRun,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    interruptible: bool,\n    client: PrefectClient,\n    partial_flow_run_context: FlowRunContext,\n    user_thread: threading.Thread,\n) -> State:\n    \"\"\"\n    Executes a flow run.\n\n    Note on flow timeouts:\n        Since async flows are run directly in the main event loop, timeout behavior will\n        match that described by anyio. If the flow is awaiting something, it will\n        immediately return; otherwise, the next time it awaits it will exit. Sync flows\n        are being task runner in a worker thread, which cannot be interrupted. The worker\n        thread will exit at the next task call. The worker thread also has access to the\n        status of the cancellation scope at `FlowRunContext.timeout_scope.cancel_called`\n        which allows it to raise a `TimeoutError` to respect the timeout.\n\n    Returns:\n        The final state of the run\n    \"\"\"\n\n    logger = flow_run_logger(flow_run, flow)\n\n    flow_run_context = None\n    parent_flow_run_context = FlowRunContext.get()\n\n    try:\n        # Resolve futures in any non-data dependencies to ensure they are ready\n        if wait_for is not None:\n            await resolve_inputs({\"wait_for\": wait_for}, return_data=False)\n    except UpstreamTaskError as upstream_exc:\n        return await propose_state(\n            client,\n            Pending(name=\"NotReady\", message=str(upstream_exc)),\n            flow_run_id=flow_run.id,\n            # if orchestrating a run already in a pending state, force orchestration to\n            # update the state name\n            force=flow_run.state.is_pending(),\n        )\n\n    state = await propose_state(client, Running(), flow_run_id=flow_run.id)\n\n    # flag to ensure we only update the flow run name once\n    run_name_set = False\n\n    await _run_flow_hooks(flow=flow, flow_run=flow_run, state=state)\n\n    while state.is_running():\n        waited_for_task_runs = False\n\n        # Update the flow run to the latest data\n        flow_run = await client.read_flow_run(flow_run.id)\n        try:\n            with FlowRunContext(\n                **{\n                    **partial_flow_run_context.dict(),\n                    **{\n                        \"flow_run\": flow_run,\n                        \"flow\": flow,\n                        \"client\": client,\n                        \"parameters\": parameters,\n                    },\n                }\n            ) as flow_run_context:\n                # update flow run name\n                if not run_name_set and flow.flow_run_name:\n                    flow_run_name = _resolve_custom_flow_run_name(\n                        flow=flow, parameters=parameters\n                    )\n\n                    await client.update_flow_run(\n                        flow_run_id=flow_run.id, name=flow_run_name\n                    )\n                    logger.extra[\"flow_run_name\"] = flow_run_name\n                    logger.debug(\n                        f\"Renamed flow run {flow_run.name!r} to {flow_run_name!r}\"\n                    )\n                    flow_run.name = flow_run_name\n                    run_name_set = True\n\n                args, kwargs = parameters_to_args_kwargs(flow.fn, parameters)\n                logger.debug(\n                    f\"Executing flow {flow.name!r} for flow run {flow_run.name!r}...\"\n                )\n\n              
  if PREFECT_DEBUG_MODE:\n                    logger.debug(f\"Executing {call_repr(flow.fn, *args, **kwargs)}\")\n                else:\n                    logger.debug(\n                        \"Beginning execution...\", extra={\"state_message\": True}\n                    )\n\n                flow_call = create_call(flow.fn, *args, **kwargs)\n\n                # This check for a parent call is needed for cases where the engine\n                # was entered directly during testing\n                parent_call = get_current_call()\n\n                if parent_call and (\n                    not parent_flow_run_context\n                    or (\n                        getattr(parent_flow_run_context, \"flow\", None)\n                        and parent_flow_run_context.flow.isasync == flow.isasync\n                    )\n                ):\n                    from_async.call_soon_in_waiting_thread(\n                        flow_call,\n                        thread=user_thread,\n                        timeout=flow.timeout_seconds,\n                    )\n                else:\n                    from_async.call_soon_in_new_thread(\n                        flow_call, timeout=flow.timeout_seconds\n                    )\n\n                result = await flow_call.aresult()\n\n                waited_for_task_runs = await wait_for_task_runs_and_report_crashes(\n                    flow_run_context.task_run_futures, client=client\n                )\n        except PausedRun as exc:\n            # could get raised either via utility or by returning Paused from a task run\n            # if a task run pauses, we set its state as the flow's state\n            # to preserve reschedule and timeout behavior\n            paused_flow_run = await client.read_flow_run(flow_run.id)\n            if paused_flow_run.state.is_running():\n                state = await propose_state(\n                    client,\n                    state=exc.state,\n                    flow_run_id=flow_run.id,\n                )\n\n                return state\n            paused_flow_run_state = paused_flow_run.state\n            return paused_flow_run_state\n        except CancelledError as exc:\n            if not flow_call.timedout():\n                # If the flow call was not cancelled by us; this is a crash\n                raise\n            # Construct a new exception as `TimeoutError`\n            original = exc\n            exc = TimeoutError()\n            exc.__cause__ = original\n            logger.exception(\"Encountered exception during execution:\")\n            terminal_state = await exception_to_failed_state(\n                exc,\n                message=f\"Flow run exceeded timeout of {flow.timeout_seconds} seconds\",\n                result_factory=flow_run_context.result_factory,\n                name=\"TimedOut\",\n            )\n        except Exception:\n            # Generic exception in user code\n            logger.exception(\"Encountered exception during execution:\")\n            terminal_state = await exception_to_failed_state(\n                message=\"Flow run encountered an exception.\",\n                result_factory=flow_run_context.result_factory,\n            )\n        else:\n            if result is None:\n                # All tasks and subflows are reference tasks if there is no return value\n                # If there are no tasks, use `None` instead of an empty iterable\n                result = (\n                    flow_run_context.task_run_futures\n                    + 
flow_run_context.task_run_states\n                    + flow_run_context.flow_run_states\n                ) or None\n\n            terminal_state = await return_value_to_state(\n                await resolve_futures_to_states(result),\n                result_factory=flow_run_context.result_factory,\n            )\n\n        if not waited_for_task_runs:\n            # An exception occurred that prevented us from waiting for task runs to\n            # complete. Ensure that we wait for them before proposing a final state\n            # for the flow run.\n            await wait_for_task_runs_and_report_crashes(\n                flow_run_context.task_run_futures, client=client\n            )\n\n        # Before setting the flow run state, store state.data using\n        # block storage and send the resulting data document to the Prefect API instead.\n        # This prevents the pickled return value of flow runs\n        # from being sent to the Prefect API and stored in the Prefect database.\n        # state.data is left as is, otherwise we would have to load\n        # the data from block storage again after storing.\n        state = await propose_state(\n            client,\n            state=terminal_state,\n            flow_run_id=flow_run.id,\n        )\n\n        await _run_flow_hooks(flow=flow, flow_run=flow_run, state=state)\n\n        if state.type != terminal_state.type and PREFECT_DEBUG_MODE:\n            logger.debug(\n                (\n                    f\"Received new state {state} when proposing final state\"\n                    f\" {terminal_state}\"\n                ),\n                extra={\"send_to_api\": False},\n            )\n\n        if not state.is_final() and not state.is_paused():\n            logger.info(\n                (\n                    f\"Received non-final state {state.name!r} when proposing final\"\n                    f\" state {terminal_state.name!r} and will attempt to run again...\"\n                ),\n            )\n            # Attempt to enter a running state again\n            state = await propose_state(client, Running(), flow_run_id=flow_run.id)\n\n    return state\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.orchestrate_task_run","title":"orchestrate_task_run async","text":"

Execute a task run

This function should be submitted to a task runner. We must construct the context here instead of receiving it already populated since we may be in a new environment.

Proposes a RUNNING state, then:

  • if accepted, the task user function will be run
  • if rejected, the received state will be returned

When the user function is run, the result will be used to determine a final state:

  • if an exception is encountered, it is trapped and stored in a FAILED state
  • otherwise, return_value_to_state is used to determine the state

If the final state is COMPLETED, we generate a cache key as specified by the task

The final state is then proposed:

  • if accepted, this is the final state and will be returned
  • if rejected and a new final state is provided, it will be returned
  • if rejected and a non-final state is provided, we will attempt to enter a RUNNING state again

Returns:

State: The final state of the run
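As noted above, a COMPLETED final state carries the cache key produced by the task's cache_key_fn. A minimal sketch of configuring caching on a task with the task_input_hash helper (the task and flow names here are illustrative):

```python
from datetime import timedelta

from prefect import flow, task
from prefect.tasks import task_input_hash


@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))
def transform(x: int) -> int:
    # Re-runs with the same input within the hour reuse the cached COMPLETED state.
    return x * 2


@flow
def caching_flow():
    return transform(21)
```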

Source code in prefect/engine.py
async def orchestrate_task_run(\n    task: Task,\n    task_run: TaskRun,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    result_factory: ResultFactory,\n    log_prints: bool,\n    interruptible: bool,\n    client: PrefectClient,\n) -> State:\n    \"\"\"\n    Execute a task run\n\n    This function should be submitted to a task runner. We must construct the context\n    here instead of receiving it already populated since we may be in a new environment.\n\n    Proposes a RUNNING state, then\n    - if accepted, the task user function will be run\n    - if rejected, the received state will be returned\n\n    When the user function is run, the result will be used to determine a final state\n    - if an exception is encountered, it is trapped and stored in a FAILED state\n    - otherwise, `return_value_to_state` is used to determine the state\n\n    If the final state is COMPLETED, we generate a cache key as specified by the task\n\n    The final state is then proposed\n    - if accepted, this is the final state and will be returned\n    - if rejected and a new final state is provided, it will be returned\n    - if rejected and a non-final state is provided, we will attempt to enter a RUNNING\n        state again\n\n    Returns:\n        The final state of the run\n    \"\"\"\n    flow_run_context = prefect.context.FlowRunContext.get()\n    if flow_run_context:\n        flow_run = flow_run_context.flow_run\n    else:\n        flow_run = await client.read_flow_run(task_run.flow_run_id)\n    logger = task_run_logger(task_run, task=task, flow_run=flow_run)\n\n    partial_task_run_context = TaskRunContext.construct(\n        task_run=task_run,\n        task=task,\n        client=client,\n        result_factory=result_factory,\n        log_prints=log_prints,\n    )\n    task_introspection_start_time = time.perf_counter()\n    try:\n        # Resolve futures in parameters into data\n        resolved_parameters = await resolve_inputs(parameters)\n        # Resolve futures in any non-data dependencies to ensure they are ready\n        await resolve_inputs({\"wait_for\": wait_for}, return_data=False)\n    except UpstreamTaskError as upstream_exc:\n        return await propose_state(\n            client,\n            Pending(name=\"NotReady\", message=str(upstream_exc)),\n            task_run_id=task_run.id,\n            # if orchestrating a run already in a pending state, force orchestration to\n            # update the state name\n            force=task_run.state.is_pending(),\n        )\n    task_introspection_end_time = time.perf_counter()\n\n    introspection_time = round(\n        task_introspection_end_time - task_introspection_start_time, 3\n    )\n    threshold = PREFECT_TASK_INTROSPECTION_WARN_THRESHOLD.value()\n    if threshold and introspection_time > threshold:\n        logger.warning(\n            f\"Task parameter introspection took {introspection_time} seconds \"\n            f\", exceeding `PREFECT_TASK_INTROSPECTION_WARN_THRESHOLD` of {threshold}. \"\n            \"Try wrapping large task parameters with \"\n            \"`prefect.utilities.annotations.quote` for increased performance, \"\n            \"e.g. `my_task(quote(param))`. 
To disable this message set \"\n            \"`PREFECT_TASK_INTROSPECTION_WARN_THRESHOLD=0`.\"\n        )\n\n    # Generate the cache key to attach to proposed states\n    # The cache key uses a TaskRunContext that does not include a `timeout_context``\n\n    task_run_context = TaskRunContext(\n        **partial_task_run_context.dict(), parameters=resolved_parameters\n    )\n\n    cache_key = (\n        task.cache_key_fn(\n            task_run_context,\n            resolved_parameters,\n        )\n        if task.cache_key_fn\n        else None\n    )\n\n    # Ignore the cached results for a cache key, default = false\n    # Setting on task level overrules the Prefect setting (env var)\n    refresh_cache = (\n        task.refresh_cache\n        if task.refresh_cache is not None\n        else PREFECT_TASKS_REFRESH_CACHE.value()\n    )\n\n    # Emit an event to capture that the task run was in the `PENDING` state.\n    last_event = emit_task_run_state_change_event(\n        task_run=task_run, initial_state=None, validated_state=task_run.state\n    )\n    last_state = (\n        Pending()\n        if flow_run_context and flow_run_context.autonomous_task_run\n        else task_run.state\n    )\n\n    # Completed states with persisted results should have result data. If it's missing,\n    # this could be a manual state transition, so we should use the Unknown result type\n    # to represent that we know we don't know the result.\n    if (\n        last_state\n        and last_state.is_completed()\n        and result_factory.persist_result\n        and not last_state.data\n    ):\n        state = await propose_state(\n            client,\n            state=Completed(data=await UnknownResult.create()),\n            task_run_id=task_run.id,\n            force=True,\n        )\n\n    # Transition from `PENDING` -> `RUNNING`\n    try:\n        state = await propose_state(\n            client,\n            Running(\n                state_details=StateDetails(\n                    cache_key=cache_key, refresh_cache=refresh_cache\n                )\n            ),\n            task_run_id=task_run.id,\n        )\n    except Pause as exc:\n        # We shouldn't get a pause signal without a state, but if this happens,\n        # just use a Paused state to assume an in-process pause.\n        state = exc.state if exc.state else Paused()\n\n        # If a flow submits tasks and then pauses, we may reach this point due\n        # to concurrency timing because the tasks will try to transition after\n        # the flow run has paused. Orchestration will send back a Paused state\n        # for the task runs.\n        if state.state_details.pause_reschedule:\n            # If we're being asked to pause and reschedule, we should exit the\n            # task and expect to be resumed later.\n            raise\n\n    if state.is_paused():\n        BACKOFF_MAX = 10  # Seconds\n        backoff_count = 0\n\n        async def tick():\n            nonlocal backoff_count\n            if backoff_count < BACKOFF_MAX:\n                backoff_count += 1\n            interval = 1 + backoff_count + random.random() * backoff_count\n            await anyio.sleep(interval)\n\n        # Enter a loop to wait for the task run to be resumed, i.e.\n        # become Pending, and then propose a Running state again.\n        while True:\n            await tick()\n\n            # Propose a Running state again. 
We do this instead of reading the\n            # task run because if the flow run times out, this lets\n            # orchestration fail the task run.\n            try:\n                state = await propose_state(\n                    client,\n                    Running(\n                        state_details=StateDetails(\n                            cache_key=cache_key, refresh_cache=refresh_cache\n                        )\n                    ),\n                    task_run_id=task_run.id,\n                )\n            except Pause as exc:\n                if not exc.state:\n                    continue\n\n                if exc.state.state_details.pause_reschedule:\n                    # If the pause state includes pause_reschedule, we should exit the\n                    # task and expect to be resumed later. We've already checked for this\n                    # above, but we check again here in case the state changed; e.g. the\n                    # flow run suspended.\n                    raise\n                else:\n                    # Propose a Running state again.\n                    continue\n            else:\n                break\n\n    # Emit an event to capture the result of proposing a `RUNNING` state.\n    last_event = emit_task_run_state_change_event(\n        task_run=task_run,\n        initial_state=last_state,\n        validated_state=state,\n        follows=last_event,\n    )\n    last_state = state\n\n    # flag to ensure we only update the task run name once\n    run_name_set = False\n\n    # Only run the task if we enter a `RUNNING` state\n    while state.is_running():\n        # Retrieve the latest metadata for the task run context\n        task_run = await client.read_task_run(task_run.id)\n\n        with task_run_context.copy(\n            update={\"task_run\": task_run, \"start_time\": pendulum.now(\"UTC\")}\n        ):\n            try:\n                args, kwargs = parameters_to_args_kwargs(task.fn, resolved_parameters)\n                # update task run name\n                if not run_name_set and task.task_run_name:\n                    task_run_name = _resolve_custom_task_run_name(\n                        task=task, parameters=resolved_parameters\n                    )\n                    await client.set_task_run_name(\n                        task_run_id=task_run.id, name=task_run_name\n                    )\n                    logger.extra[\"task_run_name\"] = task_run_name\n                    logger.debug(\n                        f\"Renamed task run {task_run.name!r} to {task_run_name!r}\"\n                    )\n                    task_run.name = task_run_name\n                    run_name_set = True\n\n                if PREFECT_DEBUG_MODE.value():\n                    logger.debug(f\"Executing {call_repr(task.fn, *args, **kwargs)}\")\n                else:\n                    logger.debug(\n                        \"Beginning execution...\", extra={\"state_message\": True}\n                    )\n\n                call = from_async.call_soon_in_new_thread(\n                    create_call(task.fn, *args, **kwargs), timeout=task.timeout_seconds\n                )\n                result = await call.aresult()\n\n            except (CancelledError, asyncio.CancelledError) as exc:\n                if not call.timedout():\n                    # If the task call was not cancelled by us; this is a crash\n                    raise\n                # Construct a new exception as `TimeoutError`\n                original = exc\n           
     exc = TimeoutError()\n                exc.__cause__ = original\n                logger.exception(\"Encountered exception during execution:\")\n                terminal_state = await exception_to_failed_state(\n                    exc,\n                    message=(\n                        f\"Task run exceeded timeout of {task.timeout_seconds} seconds\"\n                    ),\n                    result_factory=task_run_context.result_factory,\n                    name=\"TimedOut\",\n                )\n            except Exception as exc:\n                logger.exception(\"Encountered exception during execution:\")\n                terminal_state = await exception_to_failed_state(\n                    exc,\n                    message=\"Task run encountered an exception\",\n                    result_factory=task_run_context.result_factory,\n                )\n            else:\n                terminal_state = await return_value_to_state(\n                    result,\n                    result_factory=task_run_context.result_factory,\n                )\n\n                # for COMPLETED tasks, add the cache key and expiration\n                if terminal_state.is_completed():\n                    terminal_state.state_details.cache_expiration = (\n                        (pendulum.now(\"utc\") + task.cache_expiration)\n                        if task.cache_expiration\n                        else None\n                    )\n                    terminal_state.state_details.cache_key = cache_key\n\n            if terminal_state.is_failed():\n                # Defer to user to decide whether failure is retriable\n                terminal_state.state_details.retriable = (\n                    await _check_task_failure_retriable(task, task_run, terminal_state)\n                )\n            state = await propose_state(client, terminal_state, task_run_id=task_run.id)\n            last_event = emit_task_run_state_change_event(\n                task_run=task_run,\n                initial_state=last_state,\n                validated_state=state,\n                follows=last_event,\n            )\n            last_state = state\n\n            await _run_task_hooks(\n                task=task,\n                task_run=task_run,\n                state=state,\n            )\n\n            if state.type != terminal_state.type and PREFECT_DEBUG_MODE:\n                logger.debug(\n                    (\n                        f\"Received new state {state} when proposing final state\"\n                        f\" {terminal_state}\"\n                    ),\n                    extra={\"send_to_api\": False},\n                )\n\n            if not state.is_final() and not state.is_paused():\n                logger.info(\n                    (\n                        f\"Received non-final state {state.name!r} when proposing final\"\n                        f\" state {terminal_state.name!r} and will attempt to run\"\n                        \" again...\"\n                    ),\n                )\n                # Attempt to enter a running state again\n                state = await propose_state(client, Running(), task_run_id=task_run.id)\n                last_event = emit_task_run_state_change_event(\n                    task_run=task_run,\n                    initial_state=last_state,\n                    validated_state=state,\n                    follows=last_event,\n                )\n                last_state = state\n\n    # If debugging, use the more complete `repr` than the usual 
`str` description\n    display_state = repr(state) if PREFECT_DEBUG_MODE else str(state)\n\n    logger.log(\n        level=logging.INFO if state.is_completed() else logging.ERROR,\n        msg=f\"Finished in state {display_state}\",\n    )\n    return state\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.pause_flow_run","title":"pause_flow_run async","text":"

Pauses the current flow run by blocking execution until resumed.

When called within a flow run, execution will block and no downstream tasks will run until the flow is resumed. Task runs that have already started will continue running. A timeout parameter can be passed; if the flow run is not resumed within the specified time, it will fail.

Parameters:

• flow_run_id (UUID, default None): a flow run id. If supplied, this function will attempt to pause the specified flow run outside of the flow run process. When paused, the flow run will continue execution until the NEXT task is orchestrated, at which point the flow will exit. Any tasks that have already started will run until completion. When resumed, the flow run will be rescheduled to finish execution. In order to pause a flow run in this way, the flow needs to have an associated deployment and results need to be configured with the persist_results option.

• timeout (int, default 3600): the number of seconds to wait for the flow to be resumed before failing. Defaults to 1 hour (3600 seconds). If the pause timeout exceeds any configured flow-level timeout, the flow might fail even after resuming.

• poll_interval (int, default 10): the number of seconds between checking whether the flow has been resumed. Defaults to 10 seconds.

• reschedule (bool, default False): flag that will reschedule the flow run if resumed. Instead of blocking execution, the flow will gracefully exit (with no result returned). To use this flag, a flow needs to have an associated deployment and results need to be configured with the persist_results option.

• key (str, default None): an optional key to prevent calling pauses more than once. This defaults to the number of pauses observed by the flow so far, and prevents pauses that use the "reschedule" option from running the same pause twice. A custom key can be supplied for custom pausing behavior.

• wait_for_input (Optional[Type[T]], default None): a subclass of RunInput or any type supported by Pydantic. If provided when the flow pauses, the flow will wait for the input to be provided before resuming. If the flow is resumed without providing the input, the flow will fail. If the flow is resumed with the input, the flow will resume and the input will be loaded and returned from this function. A sketch of this usage follows the example below.

Example:
@task\ndef task_one():\n    for i in range(3):\n        sleep(1)\n\n@flow\ndef my_flow():\n    terminal_state = task_one.submit(return_state=True)\n    if terminal_state.type == StateType.COMPLETED:\n        print(\"Task one succeeded! Pausing flow run..\")\n        pause_flow_run(timeout=2)\n    else:\n        print(\"Task one failed. Skipping pause flow run..\")\n
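The example above shows a basic timed pause. Below is a sketch of pausing until input is provided, assuming the RunInput base class from prefect.input; the ApprovalInput model and its field are hypothetical:

```python
from prefect import flow
from prefect.engine import pause_flow_run
from prefect.input import RunInput


class ApprovalInput(RunInput):
    # Hypothetical input model; any Pydantic-compatible fields work.
    approved: bool


@flow
def approval_flow():
    # Blocks until the run is resumed with an ApprovalInput payload,
    # or fails once the timeout elapses.
    decision = pause_flow_run(wait_for_input=ApprovalInput, timeout=600)
    if decision.approved:
        print("Approved, continuing...")
```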
Source code in prefect/engine.py
@sync_compatible\n@deprecated_parameter(\n    \"flow_run_id\", start_date=\"Dec 2023\", help=\"Use `suspend_flow_run` instead.\"\n)\n@deprecated_parameter(\n    \"reschedule\",\n    start_date=\"Dec 2023\",\n    when=lambda p: p is True,\n    help=\"Use `suspend_flow_run` instead.\",\n)\n@experimental_parameter(\n    \"wait_for_input\", group=\"flow_run_input\", when=lambda y: y is not None\n)\nasync def pause_flow_run(\n    wait_for_input: Optional[Type[T]] = None,\n    flow_run_id: UUID = None,\n    timeout: int = 3600,\n    poll_interval: int = 10,\n    reschedule: bool = False,\n    key: str = None,\n) -> Optional[T]:\n    \"\"\"\n    Pauses the current flow run by blocking execution until resumed.\n\n    When called within a flow run, execution will block and no downstream tasks will\n    run until the flow is resumed. Task runs that have already started will continue\n    running. A timeout parameter can be passed that will fail the flow run if it has not\n    been resumed within the specified time.\n\n    Args:\n        flow_run_id: a flow run id. If supplied, this function will attempt to pause\n            the specified flow run outside of the flow run process. When paused, the\n            flow run will continue execution until the NEXT task is orchestrated, at\n            which point the flow will exit. Any tasks that have already started will\n            run until completion. When resumed, the flow run will be rescheduled to\n            finish execution. In order pause a flow run in this way, the flow needs to\n            have an associated deployment and results need to be configured with the\n            `persist_results` option.\n        timeout: the number of seconds to wait for the flow to be resumed before\n            failing. Defaults to 1 hour (3600 seconds). If the pause timeout exceeds\n            any configured flow-level timeout, the flow might fail even after resuming.\n        poll_interval: The number of seconds between checking whether the flow has been\n            resumed. Defaults to 10 seconds.\n        reschedule: Flag that will reschedule the flow run if resumed. Instead of\n            blocking execution, the flow will gracefully exit (with no result returned)\n            instead. To use this flag, a flow needs to have an associated deployment and\n            results need to be configured with the `persist_results` option.\n        key: An optional key to prevent calling pauses more than once. This defaults to\n            the number of pauses observed by the flow so far, and prevents pauses that\n            use the \"reschedule\" option from running the same pause twice. A custom key\n            can be supplied for custom pausing behavior.\n        wait_for_input: a subclass of `RunInput` or any type supported by\n            Pydantic. If provided when the flow pauses, the flow will wait for the\n            input to be provided before resuming. If the flow is resumed without\n            providing the input, the flow will fail. If the flow is resumed with the\n            input, the flow will resume and the input will be loaded and returned\n            from this function.\n\n    Example:\n    ```python\n    @task\n    def task_one():\n        for i in range(3):\n            sleep(1)\n\n    @flow\n    def my_flow():\n        terminal_state = task_one.submit(return_state=True)\n        if terminal_state.type == StateType.COMPLETED:\n            print(\"Task one succeeded! 
Pausing flow run..\")\n            pause_flow_run(timeout=2)\n        else:\n            print(\"Task one failed. Skipping pause flow run..\")\n    ```\n\n    \"\"\"\n    if flow_run_id:\n        if wait_for_input is not None:\n            raise RuntimeError(\"Cannot wait for input when pausing out of process.\")\n\n        return await _out_of_process_pause(\n            flow_run_id=flow_run_id,\n            timeout=timeout,\n            reschedule=reschedule,\n            key=key,\n        )\n    else:\n        return await _in_process_pause(\n            timeout=timeout,\n            poll_interval=poll_interval,\n            reschedule=reschedule,\n            key=key,\n            wait_for_input=wait_for_input,\n        )\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.report_flow_run_crashes","title":"report_flow_run_crashes async","text":"

Detect flow run crashes during this context and update the run to a proper final state.

This context must reraise the exception to properly exit the run.

Source code in prefect/engine.py
@asynccontextmanager\nasync def report_flow_run_crashes(flow_run: FlowRun, client: PrefectClient, flow: Flow):\n    \"\"\"\n    Detect flow run crashes during this context and update the run to a proper final\n    state.\n\n    This context _must_ reraise the exception to properly exit the run.\n    \"\"\"\n\n    try:\n        yield\n    except (Abort, Pause):\n        # Do not capture internal signals as crashes\n        raise\n    except BaseException as exc:\n        state = await exception_to_crashed_state(exc)\n        logger = flow_run_logger(flow_run)\n        with anyio.CancelScope(shield=True):\n            logger.error(f\"Crash detected! {state.message}\")\n            logger.debug(\"Crash details:\", exc_info=exc)\n            flow_run_state = await propose_state(client, state, flow_run_id=flow_run.id)\n            engine_logger.debug(\n                f\"Reported crashed flow run {flow_run.name!r} successfully!\"\n            )\n\n            # Only `on_crashed` and `on_cancellation` flow run state change hooks can be called here.\n            # We call the hooks after the state change proposal to `CRASHED` is validated\n            # or rejected (if it is in a `CANCELLING` state).\n            await _run_flow_hooks(\n                flow=flow,\n                flow_run=flow_run,\n                state=flow_run_state,\n            )\n\n        # Reraise the exception\n        raise\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.report_task_run_crashes","title":"report_task_run_crashes async","text":"

Detect task run crashes during this context and update the run to a proper final state.

This context must reraise the exception to properly exit the run.

Source code in prefect/engine.py
@asynccontextmanager\nasync def report_task_run_crashes(task_run: TaskRun, client: PrefectClient):\n    \"\"\"\n    Detect task run crashes during this context and update the run to a proper final\n    state.\n\n    This context _must_ reraise the exception to properly exit the run.\n    \"\"\"\n    try:\n        yield\n    except (Abort, Pause):\n        # Do not capture internal signals as crashes\n        raise\n    except BaseException as exc:\n        state = await exception_to_crashed_state(exc)\n        logger = task_run_logger(task_run)\n        with anyio.CancelScope(shield=True):\n            logger.error(f\"Crash detected! {state.message}\")\n            logger.debug(\"Crash details:\", exc_info=exc)\n            await client.set_task_run_state(\n                state=state,\n                task_run_id=task_run.id,\n                force=True,\n            )\n            engine_logger.debug(\n                f\"Reported crashed task run {task_run.name!r} successfully!\"\n            )\n\n        # Reraise the exception\n        raise\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.resume_flow_run","title":"resume_flow_run async","text":"

Resumes a paused flow.

Parameters:

• flow_run_id (required): the flow_run_id to resume

• run_input (Optional[Dict], default None): a dictionary of inputs to provide to the flow run.

Source code in prefect/engine.py
@sync_compatible\nasync def resume_flow_run(flow_run_id, run_input: Optional[Dict] = None):\n    \"\"\"\n    Resumes a paused flow.\n\n    Args:\n        flow_run_id: the flow_run_id to resume\n        run_input: a dictionary of inputs to provide to the flow run.\n    \"\"\"\n    client = get_client()\n    async with client:\n        flow_run = await client.read_flow_run(flow_run_id)\n\n        if not flow_run.state.is_paused():\n            raise NotPausedError(\"Cannot resume a run that isn't paused!\")\n\n        response = await client.resume_flow_run(flow_run_id, run_input=run_input)\n\n    if response.status == SetStateStatus.REJECT:\n        if response.state.type == StateType.FAILED:\n            raise FlowPauseTimeout(\"Flow run can no longer be resumed.\")\n        else:\n            raise RuntimeError(f\"Cannot resume this run: {response.details.reason}\")\n
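A sketch of resuming a paused run from outside the flow process; the UUID is a placeholder and the run_input keys are hypothetical (run_input is only needed when the pause requested input via wait_for_input):

```python
from uuid import UUID

from prefect.engine import resume_flow_run

# Placeholder id of a flow run that is currently paused.
paused_run_id = UUID("00000000-0000-0000-0000-000000000000")

# `run_input` is only required when the pause was created with `wait_for_input`.
resume_flow_run(paused_run_id, run_input={"approved": True})
```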
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.retrieve_flow_then_begin_flow_run","title":"retrieve_flow_then_begin_flow_run async","text":"

Async entrypoint for flow runs that have been submitted for execution by an agent

  • Retrieves the deployment information
  • Loads the flow object using deployment information
  • Updates the flow run version
Source code in prefect/engine.py
@inject_client\nasync def retrieve_flow_then_begin_flow_run(\n    flow_run_id: UUID,\n    client: PrefectClient,\n    user_thread: threading.Thread,\n) -> State:\n    \"\"\"\n    Async entrypoint for flow runs that have been submitted for execution by an agent\n\n    - Retrieves the deployment information\n    - Loads the flow object using deployment information\n    - Updates the flow run version\n    \"\"\"\n    flow_run = await client.read_flow_run(flow_run_id)\n\n    entrypoint = os.environ.get(\"PREFECT__FLOW_ENTRYPOINT\")\n\n    try:\n        flow = (\n            load_flow_from_entrypoint(entrypoint)\n            if entrypoint\n            else await load_flow_from_flow_run(flow_run, client=client)\n        )\n    except Exception:\n        message = (\n            \"Flow could not be retrieved from\"\n            f\" {'entrypoint' if entrypoint else 'deployment'}.\"\n        )\n        flow_run_logger(flow_run).exception(message)\n        state = await exception_to_failed_state(message=message)\n        await client.set_flow_run_state(\n            state=state, flow_run_id=flow_run_id, force=True\n        )\n        return state\n\n    # Update the flow run policy defaults to match settings on the flow\n    # Note: Mutating the flow run object prevents us from performing another read\n    #       operation if these properties are used by the client downstream\n    if flow_run.empirical_policy.retry_delay is None:\n        flow_run.empirical_policy.retry_delay = flow.retry_delay_seconds\n\n    if flow_run.empirical_policy.retries is None:\n        flow_run.empirical_policy.retries = flow.retries\n\n    await client.update_flow_run(\n        flow_run_id=flow_run_id,\n        flow_version=flow.version,\n        empirical_policy=flow_run.empirical_policy,\n    )\n\n    if flow.should_validate_parameters:\n        failed_state = None\n        try:\n            parameters = flow.validate_parameters(flow_run.parameters)\n        except Exception:\n            message = \"Validation of flow parameters failed with error: \"\n            flow_run_logger(flow_run).exception(message)\n            failed_state = await exception_to_failed_state(message=message)\n\n        if failed_state is not None:\n            await propose_state(\n                client,\n                state=failed_state,\n                flow_run_id=flow_run_id,\n            )\n            return failed_state\n    else:\n        parameters = flow_run.parameters\n\n    # Ensure default values are populated\n    parameters = {**get_parameter_defaults(flow.fn), **parameters}\n\n    return await begin_flow_run(\n        flow=flow,\n        flow_run=flow_run,\n        parameters=parameters,\n        client=client,\n        user_thread=user_thread,\n    )\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.suspend_flow_run","title":"suspend_flow_run async","text":"

Suspends a flow run by stopping code execution until resumed.

When suspended, the flow run will continue execution until the NEXT task is orchestrated, at which point the flow will exit. Any tasks that have already started will run until completion. When resumed, the flow run will be rescheduled to finish execution. In order to suspend a flow run in this way, the flow needs to have an associated deployment and results need to be configured with the persist_results option.

Parameters:

• flow_run_id (Optional[UUID], default None): a flow run id. If supplied, this function will attempt to suspend the specified flow run. If not supplied, it will attempt to suspend the current flow run.

• timeout (Optional[int], default 3600): the number of seconds to wait for the flow to be resumed before failing. Defaults to 1 hour (3600 seconds). If the pause timeout exceeds any configured flow-level timeout, the flow might fail even after resuming.

• key (Optional[str], default None): an optional key to prevent calling suspend more than once. This defaults to a random string and prevents suspends from running the same suspend twice. A custom key can be supplied for custom suspending behavior.

• wait_for_input (Optional[Type[T]], default None): a subclass of RunInput or any type supported by Pydantic. If provided when the flow suspends, the flow will remain suspended until receiving the input before resuming. If the flow is resumed without providing the input, the flow will fail. If the flow is resumed with the input, the flow will resume and the input will be loaded and returned from this function.

Source code in prefect/engine.py
@sync_compatible\n@inject_client\n@experimental_parameter(\n    \"wait_for_input\", group=\"flow_run_input\", when=lambda y: y is not None\n)\nasync def suspend_flow_run(\n    wait_for_input: Optional[Type[T]] = None,\n    flow_run_id: Optional[UUID] = None,\n    timeout: Optional[int] = 3600,\n    key: Optional[str] = None,\n    client: PrefectClient = None,\n) -> Optional[T]:\n    \"\"\"\n    Suspends a flow run by stopping code execution until resumed.\n\n    When suspended, the flow run will continue execution until the NEXT task is\n    orchestrated, at which point the flow will exit. Any tasks that have\n    already started will run until completion. When resumed, the flow run will\n    be rescheduled to finish execution. In order suspend a flow run in this\n    way, the flow needs to have an associated deployment and results need to be\n    configured with the `persist_results` option.\n\n    Args:\n        flow_run_id: a flow run id. If supplied, this function will attempt to\n            suspend the specified flow run. If not supplied will attempt to\n            suspend the current flow run.\n        timeout: the number of seconds to wait for the flow to be resumed before\n            failing. Defaults to 1 hour (3600 seconds). If the pause timeout\n            exceeds any configured flow-level timeout, the flow might fail even\n            after resuming.\n        key: An optional key to prevent calling suspend more than once. This\n            defaults to a random string and prevents suspends from running the\n            same suspend twice. A custom key can be supplied for custom\n            suspending behavior.\n        wait_for_input: a subclass of `RunInput` or any type supported by\n            Pydantic. If provided when the flow suspends, the flow will remain\n            suspended until receiving the input before resuming. If the flow is\n            resumed without providing the input, the flow will fail. If the flow is\n            resumed with the input, the flow will resume and the input will be\n            loaded and returned from this function.\n    \"\"\"\n    context = FlowRunContext.get()\n\n    if flow_run_id is None:\n        if TaskRunContext.get():\n            raise RuntimeError(\"Cannot suspend task runs.\")\n\n        if context is None or context.flow_run is None:\n            raise RuntimeError(\n                \"Flow runs can only be suspended from within a flow run.\"\n            )\n\n        logger = get_run_logger(context=context)\n        logger.info(\n            \"Suspending flow run, execution will be rescheduled when this flow run is\"\n            \" resumed.\"\n        )\n        flow_run_id = context.flow_run.id\n        suspending_current_flow_run = True\n        pause_counter = _observed_flow_pauses(context)\n        pause_key = key or str(pause_counter)\n    else:\n        # Since we're suspending another flow run we need to generate a pause\n        # key that won't conflict with whatever suspends/pauses that flow may\n        # have. 
Since this method won't be called during that flow run it's\n        # okay that this is non-deterministic.\n        suspending_current_flow_run = False\n        pause_key = key or str(uuid4())\n\n    proposed_state = Suspended(timeout_seconds=timeout, pause_key=pause_key)\n\n    if wait_for_input:\n        wait_for_input = run_input_subclass_from_type(wait_for_input)\n        run_input_keyset = keyset_from_paused_state(proposed_state)\n        proposed_state.state_details.run_input_keyset = run_input_keyset\n\n    try:\n        state = await propose_state(\n            client=client,\n            state=proposed_state,\n            flow_run_id=flow_run_id,\n        )\n    except Abort as exc:\n        # Aborted requests mean the suspension is not allowed\n        raise RuntimeError(f\"Flow run cannot be suspended: {exc}\")\n\n    if state.is_running():\n        # The orchestrator rejected the suspended state which means that this\n        # suspend has happened before and the flow run has been resumed.\n        if wait_for_input:\n            # The flow run wanted input, so we need to load it and return it\n            # to the user.\n            return await wait_for_input.load(run_input_keyset)\n        return\n\n    if not state.is_paused():\n        # If we receive anything but a PAUSED state, we are unable to continue\n        raise RuntimeError(\n            f\"Flow run cannot be suspended. Received unexpected state from API: {state}\"\n        )\n\n    if wait_for_input:\n        await wait_for_input.save(run_input_keyset)\n\n    if suspending_current_flow_run:\n        # Exit this process so the run can be resubmitted later\n        raise Pause()\n
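A sketch of suspending the current run from inside a flow; as described above, this only works for runs created from a deployment with persisted results, and the flow and task names are illustrative:

```python
from prefect import flow, task
from prefect.engine import suspend_flow_run


@task
def stage_data() -> str:
    return "staged"


@flow(persist_result=True)
def long_running_flow():
    stage_data()
    # Exits the process here; the run is rescheduled to finish when resumed.
    suspend_flow_run(timeout=7200)
```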
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/events/","title":"prefect.events","text":"","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events","title":"prefect.events","text":"","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.TriggerTypes","title":"TriggerTypes: TypeAlias = Union[EventTrigger, MetricTrigger, CompoundTrigger, SequenceTrigger] module-attribute","text":"

The union of all concrete trigger types that a user may actually create

","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.Action","title":"Action","text":"

Bases: PrefectBaseModel, ABC

An Action that may be performed when an Automation is triggered

Source code in prefect/events/actions.py
class Action(PrefectBaseModel, abc.ABC):\n    \"\"\"An Action that may be performed when an Automation is triggered\"\"\"\n\n    type: str\n\n    def describe_for_cli(self) -> str:\n        \"\"\"A human-readable description of the action\"\"\"\n        return self.type.replace(\"-\", \" \").capitalize()\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.Action.describe_for_cli","title":"describe_for_cli","text":"

A human-readable description of the action

Source code in prefect/events/actions.py
def describe_for_cli(self) -> str:\n    \"\"\"A human-readable description of the action\"\"\"\n    return self.type.replace(\"-\", \" \").capitalize()\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.AutomationCore","title":"AutomationCore","text":"

Bases: PrefectBaseModel

Defines an action a user wants to take when a certain number of events do or don't happen to the matching resources

Source code in prefect/events/schemas/automations.py
class AutomationCore(PrefectBaseModel, extra=\"ignore\"):  # type: ignore[call-arg]\n    \"\"\"Defines an action a user wants to take when a certain number of events\n    do or don't happen to the matching resources\"\"\"\n\n    name: str = Field(..., description=\"The name of this automation\")\n    description: str = Field(\"\", description=\"A longer description of this automation\")\n\n    enabled: bool = Field(True, description=\"Whether this automation will be evaluated\")\n\n    trigger: TriggerTypes = Field(\n        ...,\n        description=(\n            \"The criteria for which events this Automation covers and how it will \"\n            \"respond to the presence or absence of those events\"\n        ),\n    )\n\n    actions: List[ActionTypes] = Field(\n        ...,\n        description=\"The actions to perform when this Automation triggers\",\n    )\n\n    actions_on_trigger: List[ActionTypes] = Field(\n        default_factory=list,\n        description=\"The actions to perform when an Automation goes into a triggered state\",\n    )\n\n    actions_on_resolve: List[ActionTypes] = Field(\n        default_factory=list,\n        description=\"The actions to perform when an Automation goes into a resolving state\",\n    )\n\n    owner_resource: Optional[str] = Field(\n        default=None, description=\"The owning resource of this automation\"\n    )\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.CallWebhook","title":"CallWebhook","text":"

Bases: Action

Call a webhook when an Automation is triggered.

Source code in prefect/events/actions.py
class CallWebhook(Action):\n    \"\"\"Call a webhook when an Automation is triggered.\"\"\"\n\n    type: Literal[\"call-webhook\"] = \"call-webhook\"\n    block_document_id: UUID = Field(\n        description=\"The identifier of the webhook block to use\"\n    )\n    payload: str = Field(\n        default=\"\",\n        description=\"An optional templatable payload to send when calling the webhook.\",\n    )\n
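A sketch of constructing this action; the block document id is a placeholder and the payload is a hypothetical JSON string:

```python
from uuid import UUID

from prefect.events import CallWebhook

action = CallWebhook(
    # Placeholder id of an existing webhook block document.
    block_document_id=UUID("00000000-0000-0000-0000-000000000000"),
    payload='{"message": "automation fired"}',
)
print(action.describe_for_cli())  # "Call webhook"
```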
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.CancelFlowRun","title":"CancelFlowRun","text":"

Bases: Action

Cancels a flow run associated with the trigger

Source code in prefect/events/actions.py
class CancelFlowRun(Action):\n    \"\"\"Cancels a flow run associated with the trigger\"\"\"\n\n    type: Literal[\"cancel-flow-run\"] = \"cancel-flow-run\"\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.ChangeFlowRunState","title":"ChangeFlowRunState","text":"

Bases: Action

Changes the state of a flow run associated with the trigger

Source code in prefect/events/actions.py
class ChangeFlowRunState(Action):\n    \"\"\"Changes the state of a flow run associated with the trigger\"\"\"\n\n    type: Literal[\"change-flow-run-state\"] = \"change-flow-run-state\"\n\n    name: Optional[str] = Field(\n        None,\n        description=\"The name of the state to change the flow run to\",\n    )\n    state: StateType = Field(\n        ...,\n        description=\"The type of the state to change the flow run to\",\n    )\n    message: Optional[str] = Field(\n        None,\n        description=\"An optional message to associate with the state change\",\n    )\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.CompositeTrigger","title":"CompositeTrigger","text":"

Bases: Trigger, ABC

Requires some number of triggers to have fired within the given time period.

Source code in prefect/events/schemas/automations.py
class CompositeTrigger(Trigger, abc.ABC):\n    \"\"\"\n    Requires some number of triggers to have fired within the given time period.\n    \"\"\"\n\n    type: Literal[\"compound\", \"sequence\"]\n    triggers: List[\"TriggerTypes\"]\n    within: Optional[timedelta]\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.CompoundTrigger","title":"CompoundTrigger","text":"

Bases: CompositeTrigger

A composite trigger that requires some number of triggers to have fired within the given time period

Source code in prefect/events/schemas/automations.py
class CompoundTrigger(CompositeTrigger):\n    \"\"\"A composite trigger that requires some number of triggers to have\n    fired within the given time period\"\"\"\n\n    type: Literal[\"compound\"] = \"compound\"\n    require: Union[int, Literal[\"any\", \"all\"]]\n\n    @root_validator\n    def validate_require(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n        require = values.get(\"require\")\n\n        if isinstance(require, int):\n            if require < 1:\n                raise ValueError(\"required must be at least 1\")\n            if require > len(values[\"triggers\"]):\n                raise ValueError(\n                    \"required must be less than or equal to the number of triggers\"\n                )\n\n        return values\n\n    def describe_for_cli(self, indent: int = 0) -> str:\n        \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n        return textwrap.indent(\n            \"\\n\".join(\n                [\n                    f\"{str(self.require).capitalize()} of:\",\n                    \"\\n\".join(\n                        [\n                            trigger.describe_for_cli(indent=indent + 1)\n                            for trigger in self.triggers\n                        ]\n                    ),\n                ]\n            ),\n            prefix=\"  \" * indent,\n        )\n
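A sketch of a compound trigger requiring two event triggers to fire within ten minutes; the expected event names are standard Prefect flow-run events, but the overall configuration is illustrative:

```python
from datetime import timedelta

from prefect.events import CompoundTrigger, EventTrigger

trigger = CompoundTrigger(
    require="all",
    within=timedelta(minutes=10),
    triggers=[
        EventTrigger(expect={"prefect.flow-run.Completed"}),
        EventTrigger(expect={"prefect.flow-run.Failed"}),
    ],
)
print(trigger.describe_for_cli())
```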
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.CompoundTrigger.describe_for_cli","title":"describe_for_cli","text":"

Return a human-readable description of this trigger for the CLI

Source code in prefect/events/schemas/automations.py
def describe_for_cli(self, indent: int = 0) -> str:\n    \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n    return textwrap.indent(\n        \"\\n\".join(\n            [\n                f\"{str(self.require).capitalize()} of:\",\n                \"\\n\".join(\n                    [\n                        trigger.describe_for_cli(indent=indent + 1)\n                        for trigger in self.triggers\n                    ]\n                ),\n            ]\n        ),\n        prefix=\"  \" * indent,\n    )\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.DeclareIncident","title":"DeclareIncident","text":"

Bases: Action

Declares an incident for the triggering event. Only available on Prefect Cloud

Source code in prefect/events/actions.py
class DeclareIncident(Action):\n    \"\"\"Declares an incident for the triggering event.  Only available on Prefect Cloud\"\"\"\n\n    type: Literal[\"declare-incident\"] = \"declare-incident\"\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.DeploymentCompoundTrigger","title":"DeploymentCompoundTrigger","text":"

Bases: BaseDeploymentTrigger, CompoundTrigger

A composite trigger that requires some number of triggers to have fired within the given time period

Source code in prefect/events/schemas/deployment_triggers.py
class DeploymentCompoundTrigger(BaseDeploymentTrigger, CompoundTrigger):\n    \"\"\"A composite trigger that requires some number of triggers to have\n    fired within the given time period\"\"\"\n\n    trigger_type = CompoundTrigger\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.DeploymentEventTrigger","title":"DeploymentEventTrigger","text":"

Bases: BaseDeploymentTrigger, EventTrigger

A trigger that fires based on the presence or absence of events within a given period of time.

Source code in prefect/events/schemas/deployment_triggers.py
class DeploymentEventTrigger(BaseDeploymentTrigger, EventTrigger):\n    \"\"\"\n    A trigger that fires based on the presence or absence of events within a given\n    period of time.\n    \"\"\"\n\n    trigger_type = EventTrigger\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.DeploymentMetricTrigger","title":"DeploymentMetricTrigger","text":"

Bases: BaseDeploymentTrigger, MetricTrigger

A trigger that fires based on the results of a metric query.

Source code in prefect/events/schemas/deployment_triggers.py
class DeploymentMetricTrigger(BaseDeploymentTrigger, MetricTrigger):\n    \"\"\"\n    A trigger that fires based on the results of a metric query.\n    \"\"\"\n\n    trigger_type = MetricTrigger\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.DeploymentSequenceTrigger","title":"DeploymentSequenceTrigger","text":"

Bases: BaseDeploymentTrigger, SequenceTrigger

A composite trigger that requires some number of triggers to have fired within the given time period in a specific order

Source code in prefect/events/schemas/deployment_triggers.py
class DeploymentSequenceTrigger(BaseDeploymentTrigger, SequenceTrigger):\n    \"\"\"A composite trigger that requires some number of triggers to have fired\n    within the given time period in a specific order\"\"\"\n\n    trigger_type = SequenceTrigger\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.DoNothing","title":"DoNothing","text":"

Bases: Action

Do nothing when an Automation is triggered

Source code in prefect/events/actions.py
class DoNothing(Action):\n    \"\"\"Do nothing when an Automation is triggered\"\"\"\n\n    type: Literal[\"do-nothing\"] = \"do-nothing\"\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.Event","title":"Event","text":"

Bases: PrefectBaseModel

The client-side view of an event that has happened to a Resource

Source code in prefect/events/schemas/events.py
class Event(PrefectBaseModel):\n    \"\"\"The client-side view of an event that has happened to a Resource\"\"\"\n\n    occurred: DateTimeTZ = Field(\n        default_factory=lambda: pendulum.now(\"UTC\"),\n        description=\"When the event happened from the sender's perspective\",\n    )\n    event: str = Field(\n        description=\"The name of the event that happened\",\n    )\n    resource: Resource = Field(\n        description=\"The primary Resource this event concerns\",\n    )\n    related: List[RelatedResource] = Field(\n        default_factory=list,\n        description=\"A list of additional Resources involved in this event\",\n    )\n    payload: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"An open-ended set of data describing what happened\",\n    )\n    id: UUID = Field(\n        default_factory=uuid4,\n        description=\"The client-provided identifier of this event\",\n    )\n    follows: Optional[UUID] = Field(\n        default=None,\n        description=(\n            \"The ID of an event that is known to have occurred prior to this one. \"\n            \"If set, this may be used to establish a more precise ordering of causally-\"\n            \"related events when they occur close enough together in time that the \"\n            \"system may receive them out-of-order.\"\n        ),\n    )\n\n    @property\n    def involved_resources(self) -> Sequence[Resource]:\n        return [self.resource] + list(self.related)\n\n    @property\n    def resource_in_role(self) -> Mapping[str, RelatedResource]:\n        \"\"\"Returns a mapping of roles to the first related resource in that role\"\"\"\n        return {related.role: related for related in reversed(self.related)}\n\n    @property\n    def resources_in_role(self) -> Mapping[str, Sequence[RelatedResource]]:\n        \"\"\"Returns a mapping of roles to related resources in that role\"\"\"\n        resources: Dict[str, List[RelatedResource]] = defaultdict(list)\n        for related in self.related:\n            resources[related.role].append(related)\n        return resources\n\n    @validator(\"related\")\n    def enforce_maximum_related_resources(cls, value: List[RelatedResource]):\n        if len(value) > PREFECT_EVENTS_MAXIMUM_RELATED_RESOURCES.value():\n            raise ValueError(\n                \"The maximum number of related resources \"\n                f\"is {PREFECT_EVENTS_MAXIMUM_RELATED_RESOURCES.value()}\"\n            )\n\n        return value\n\n    def find_resource_label(self, label: str) -> Optional[str]:\n        \"\"\"Finds the value of the given label in this event's resource or one of its\n        related resources.  If the label starts with `related:<role>:`, search for the\n        first matching label in a related resource with that role.\"\"\"\n        directive, _, related_label = label.rpartition(\":\")\n        directive, _, role = directive.partition(\":\")\n        if directive == \"related\":\n            for related in self.related:\n                if related.role == role:\n                    return related.get(related_label)\n        return self.resource.get(label)\n
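The schema above is the client-side view of an event. A sketch of producing such an event with the emit_event helper from prefect.events; the event name, resource id, and payload are hypothetical:

```python
from prefect.events import emit_event

emit_event(
    event="my-app.batch.completed",  # hypothetical event name
    resource={"prefect.resource.id": "my-app.batch.2024-05-03"},  # hypothetical id
    payload={"rows_processed": 1000},
)
```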
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.Event.resource_in_role","title":"resource_in_role: Mapping[str, RelatedResource] property","text":"

Returns a mapping of roles to the first related resource in that role

","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.Event.resources_in_role","title":"resources_in_role: Mapping[str, Sequence[RelatedResource]] property","text":"

Returns a mapping of roles to related resources in that role

","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.Event.find_resource_label","title":"find_resource_label","text":"

Finds the value of the given label in this event's resource or one of its related resources. If the label starts with related:<role>:, searches for the first matching label in a related resource with that role.

Source code in prefect/events/schemas/events.py
def find_resource_label(self, label: str) -> Optional[str]:\n    \"\"\"Finds the value of the given label in this event's resource or one of its\n    related resources.  If the label starts with `related:<role>:`, search for the\n    first matching label in a related resource with that role.\"\"\"\n    directive, _, related_label = label.rpartition(\":\")\n    directive, _, role = directive.partition(\":\")\n    if directive == \"related\":\n        for related in self.related:\n            if related.role == role:\n                return related.get(related_label)\n    return self.resource.get(label)\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.EventTrigger","title":"EventTrigger","text":"

Bases: ResourceTrigger

A trigger that fires based on the presence or absence of events within a given period of time.

Source code in prefect/events/schemas/automations.py
class EventTrigger(ResourceTrigger):\n    \"\"\"\n    A trigger that fires based on the presence or absence of events within a given\n    period of time.\n    \"\"\"\n\n    type: Literal[\"event\"] = \"event\"\n\n    after: Set[str] = Field(\n        default_factory=set,\n        description=(\n            \"The event(s) which must first been seen to fire this trigger.  If \"\n            \"empty, then fire this trigger immediately.  Events may include \"\n            \"trailing wildcards, like `prefect.flow-run.*`\"\n        ),\n    )\n    expect: Set[str] = Field(\n        default_factory=set,\n        description=(\n            \"The event(s) this trigger is expecting to see.  If empty, this \"\n            \"trigger will match any event.  Events may include trailing wildcards, \"\n            \"like `prefect.flow-run.*`\"\n        ),\n    )\n\n    for_each: Set[str] = Field(\n        default_factory=set,\n        description=(\n            \"Evaluate the trigger separately for each distinct value of these labels \"\n            \"on the resource.  By default, labels refer to the primary resource of the \"\n            \"triggering event.  You may also refer to labels from related \"\n            \"resources by specifying `related:<role>:<label>`.  This will use the \"\n            \"value of that label for the first related resource in that role.  For \"\n            'example, `\"for_each\": [\"related:flow:prefect.resource.id\"]` would '\n            \"evaluate the trigger for each flow.\"\n        ),\n    )\n    posture: Literal[Posture.Reactive, Posture.Proactive] = Field(  # type: ignore[valid-type]\n        Posture.Reactive,\n        description=(\n            \"The posture of this trigger, either Reactive or Proactive.  Reactive \"\n            \"triggers respond to the _presence_ of the expected events, while \"\n            \"Proactive triggers respond to the _absence_ of those expected events.\"\n        ),\n    )\n    threshold: int = Field(\n        1,\n        description=(\n            \"The number of events required for this trigger to fire (for \"\n            \"Reactive triggers), or the number of events expected (for Proactive \"\n            \"triggers)\"\n        ),\n    )\n    within: timedelta = Field(\n        timedelta(0),\n        minimum=0.0,\n        exclusiveMinimum=False,\n        description=(\n            \"The time period over which the events must occur.  
For Reactive triggers, \"\n            \"this may be as low as 0 seconds, but must be at least 10 seconds for \"\n            \"Proactive triggers\"\n        ),\n    )\n\n    @validator(\"within\")\n    def enforce_minimum_within(\n        cls, value: timedelta, values, config, field: ModelField\n    ):\n        return validate_trigger_within(value, field)\n\n    @root_validator(skip_on_failure=True)\n    def enforce_minimum_within_for_proactive_triggers(cls, values: Dict[str, Any]):\n        posture: Optional[Posture] = values.get(\"posture\")\n        within: Optional[timedelta] = values.get(\"within\")\n\n        if posture == Posture.Proactive:\n            if not within or within == timedelta(0):\n                values[\"within\"] = timedelta(seconds=10.0)\n            elif within < timedelta(seconds=10.0):\n                raise ValueError(\n                    \"The minimum within for Proactive triggers is 10 seconds\"\n                )\n\n        return values\n\n    def describe_for_cli(self, indent: int = 0) -> str:\n        \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n        if self.posture == Posture.Reactive:\n            return textwrap.indent(\n                \"\\n\".join(\n                    [\n                        f\"Reactive: expecting {self.threshold} of {self.expect}\",\n                    ],\n                ),\n                prefix=\"  \" * indent,\n            )\n        else:\n            return textwrap.indent(\n                \"\\n\".join(\n                    [\n                        f\"Proactive: expecting {self.threshold} {self.expect} event \"\n                        f\"within {self.within}\",\n                    ],\n                ),\n                prefix=\"  \" * indent,\n            )\n
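A minimal sketch of constructing this trigger; the event name, wildcard resource filter, and threshold are illustrative rather than taken from the source above:

```python
from datetime import timedelta

from prefect.events import EventTrigger

# Reactive (the default posture): fire once 3 matching failure events
# are seen within a one-hour window.
trigger = EventTrigger(
    expect={"prefect.flow-run.Failed"},
    match={"prefect.resource.id": "prefect.flow-run.*"},
    threshold=3,
    within=timedelta(hours=1),
)

print(trigger.describe_for_cli())
```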
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.EventTrigger.describe_for_cli","title":"describe_for_cli","text":"

Return a human-readable description of this trigger for the CLI

Source code in prefect/events/schemas/automations.py
def describe_for_cli(self, indent: int = 0) -> str:\n    \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n    if self.posture == Posture.Reactive:\n        return textwrap.indent(\n            \"\\n\".join(\n                [\n                    f\"Reactive: expecting {self.threshold} of {self.expect}\",\n                ],\n            ),\n            prefix=\"  \" * indent,\n        )\n    else:\n        return textwrap.indent(\n            \"\\n\".join(\n                [\n                    f\"Proactive: expecting {self.threshold} {self.expect} event \"\n                    f\"within {self.within}\",\n                ],\n            ),\n            prefix=\"  \" * indent,\n        )\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.MetricTrigger","title":"MetricTrigger","text":"

Bases: ResourceTrigger

A trigger that fires based on the results of a metric query.

Source code in prefect/events/schemas/automations.py
class MetricTrigger(ResourceTrigger):\n    \"\"\"\n    A trigger that fires based on the results of a metric query.\n    \"\"\"\n\n    type: Literal[\"metric\"] = \"metric\"\n\n    posture: Literal[Posture.Metric] = Field(  # type: ignore[valid-type]\n        Posture.Metric,\n        description=\"Periodically evaluate the configured metric query.\",\n    )\n\n    metric: MetricTriggerQuery = Field(\n        ...,\n        description=\"The metric query to evaluate for this trigger. \",\n    )\n\n    def describe_for_cli(self, indent: int = 0) -> str:\n        \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n        m = self.metric\n        return textwrap.indent(\n            \"\\n\".join(\n                [\n                    f\"Metric: {m.name.value} {m.operator.value} {m.threshold} for {m.range}\",\n                ]\n            ),\n            prefix=\"  \" * indent,\n        )\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.MetricTrigger.describe_for_cli","title":"describe_for_cli","text":"

Return a human-readable description of this trigger for the CLI

Source code in prefect/events/schemas/automations.py
def describe_for_cli(self, indent: int = 0) -> str:\n    \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n    m = self.metric\n    return textwrap.indent(\n        \"\\n\".join(\n            [\n                f\"Metric: {m.name.value} {m.operator.value} {m.threshold} for {m.range}\",\n            ]\n        ),\n        prefix=\"  \" * indent,\n    )\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.MetricTriggerQuery","title":"MetricTriggerQuery","text":"

Bases: PrefectBaseModel

Defines the query configuration and breaching conditions used by Metric automation triggers

Source code in prefect/events/schemas/automations.py
class MetricTriggerQuery(PrefectBaseModel):\n    \"\"\"Defines a subset of the Trigger subclass, which is specific\n    to Metric automations, that specify the query configurations\n    and breaching conditions for the Automation\"\"\"\n\n    name: PrefectMetric = Field(\n        ...,\n        description=\"The name of the metric to query.\",\n    )\n    threshold: float = Field(\n        ...,\n        description=(\n            \"The threshold value against which we'll compare \" \"the query result.\"\n        ),\n    )\n    operator: MetricTriggerOperator = Field(\n        ...,\n        description=(\n            \"The comparative operator (LT / LTE / GT / GTE) used to compare \"\n            \"the query result against the threshold value.\"\n        ),\n    )\n    range: timedelta = Field(\n        timedelta(seconds=300),  # defaults to 5 minutes\n        minimum=300.0,\n        exclusiveMinimum=False,\n        description=(\n            \"The lookback duration (seconds) for a metric query. This duration is \"\n            \"used to determine the time range over which the query will be executed. \"\n            \"The minimum value is 300 seconds (5 minutes).\"\n        ),\n    )\n    firing_for: timedelta = Field(\n        timedelta(seconds=300),  # defaults to 5 minutes\n        minimum=300.0,\n        exclusiveMinimum=False,\n        description=(\n            \"The duration (seconds) for which the metric query must breach \"\n            \"or resolve continuously before the state is updated and the \"\n            \"automation is triggered. \"\n            \"The minimum value is 300 seconds (5 minutes).\"\n        ),\n    )\n
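A sketch of a metric-based trigger built from these models. The metric/operator enum names and their import path are assumptions (they are not shown in this excerpt), and the thresholds are illustrative:

```python
from datetime import timedelta

from prefect.events import MetricTrigger, MetricTriggerQuery
# Assumed import location for the metric and operator enums:
from prefect.events.schemas.automations import MetricTriggerOperator, PrefectMetric

# Fire when flow-run duration exceeds 300 seconds, sustained for 10 minutes,
# evaluated over a 30-minute lookback window.
query = MetricTriggerQuery(
    name=PrefectMetric.duration,          # assumed enum member
    operator=MetricTriggerOperator.GT,
    threshold=300.0,
    range=timedelta(minutes=30),
    firing_for=timedelta(minutes=10),
)

trigger = MetricTrigger(metric=query)
print(trigger.describe_for_cli())
```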
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.PauseAutomation","title":"PauseAutomation","text":"

Bases: AutomationAction

Pauses an Automation

Source code in prefect/events/actions.py
class PauseAutomation(AutomationAction):\n    \"\"\"Pauses a Work Queue\"\"\"\n\n    type: Literal[\"pause-automation\"] = \"pause-automation\"\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.PauseDeployment","title":"PauseDeployment","text":"

Bases: DeploymentAction

Pauses the given Deployment

Source code in prefect/events/actions.py
class PauseDeployment(DeploymentAction):\n    \"\"\"Pauses the given Deployment\"\"\"\n\n    type: Literal[\"pause-deployment\"] = \"pause-deployment\"\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.PauseWorkPool","title":"PauseWorkPool","text":"

Bases: WorkPoolAction

Pauses a Work Pool

Source code in prefect/events/actions.py
class PauseWorkPool(WorkPoolAction):\n    \"\"\"Pauses a Work Pool\"\"\"\n\n    type: Literal[\"pause-work-pool\"] = \"pause-work-pool\"\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.PauseWorkQueue","title":"PauseWorkQueue","text":"

Bases: WorkQueueAction

Pauses a Work Queue

Source code in prefect/events/actions.py
class PauseWorkQueue(WorkQueueAction):\n    \"\"\"Pauses a Work Queue\"\"\"\n\n    type: Literal[\"pause-work-queue\"] = \"pause-work-queue\"\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.ReceivedEvent","title":"ReceivedEvent","text":"

Bases: Event

The server-side view of an event that has happened to a Resource after it has been received by the server

Source code in prefect/events/schemas/events.py
class ReceivedEvent(Event):\n    \"\"\"The server-side view of an event that has happened to a Resource after it has\n    been received by the server\"\"\"\n\n    class Config:\n        orm_mode = True\n\n    received: DateTimeTZ = Field(\n        ...,\n        description=\"When the event was received by Prefect Cloud\",\n    )\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.RelatedResource","title":"RelatedResource","text":"

Bases: Resource

A Resource with a specific role in an Event

Source code in prefect/events/schemas/events.py
class RelatedResource(Resource):\n    \"\"\"A Resource with a specific role in an Event\"\"\"\n\n    @root_validator(pre=True)\n    def requires_resource_role(cls, values: Dict[str, Any]):\n        labels = values.get(\"__root__\")\n        if not isinstance(labels, dict):\n            return values\n\n        labels = cast(Dict[str, str], labels)\n\n        if \"prefect.resource.role\" not in labels:\n            raise ValueError(\n                \"Related Resources must include the prefect.resource.role label\"\n            )\n        if not labels[\"prefect.resource.role\"]:\n            raise ValueError(\"The prefect.resource.role label must be non-empty\")\n\n        return values\n\n    @property\n    def role(self) -> str:\n        return self[\"prefect.resource.role\"]\n
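A short sketch of the role requirement enforced above (the IDs are placeholders):

```python
from prefect.events import RelatedResource

related = RelatedResource.parse_obj(
    {
        "prefect.resource.id": "prefect.flow.abcd",
        "prefect.resource.role": "flow",
        "prefect.resource.name": "my-flow",
    }
)
print(related.role)  # "flow"

# Omitting prefect.resource.role (or leaving it empty) raises a ValueError
# from the requires_resource_role validator shown above.
```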
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.Resource","title":"Resource","text":"

Bases: Labelled

An observable business object of interest to the user

Source code in prefect/events/schemas/events.py
class Resource(Labelled):\n    \"\"\"An observable business object of interest to the user\"\"\"\n\n    @root_validator(pre=True)\n    def enforce_maximum_labels(cls, values: Dict[str, Any]):\n        labels = values.get(\"__root__\")\n        if not isinstance(labels, dict):\n            return values\n\n        if len(labels) > PREFECT_EVENTS_MAXIMUM_LABELS_PER_RESOURCE.value():\n            raise ValueError(\n                \"The maximum number of labels per resource \"\n                f\"is {PREFECT_EVENTS_MAXIMUM_LABELS_PER_RESOURCE.value()}\"\n            )\n\n        return values\n\n    @root_validator(pre=True)\n    def requires_resource_id(cls, values: Dict[str, Any]):\n        labels = values.get(\"__root__\")\n        if not isinstance(labels, dict):\n            return values\n\n        labels = cast(Dict[str, str], labels)\n\n        if \"prefect.resource.id\" not in labels:\n            raise ValueError(\"Resources must include the prefect.resource.id label\")\n        if not labels[\"prefect.resource.id\"]:\n            raise ValueError(\"The prefect.resource.id label must be non-empty\")\n\n        return values\n\n    @property\n    def id(self) -> str:\n        return self[\"prefect.resource.id\"]\n\n    @property\n    def name(self) -> Optional[str]:\n        return self.get(\"prefect.resource.name\")\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.ResourceSpecification","title":"ResourceSpecification","text":"

Bases: PrefectBaseModel

A specification that may match zero, one, or many resources, used to target or select a set of resources in a query or automation. A resource must match at least one value of all of the provided labels

Source code in prefect/events/schemas/events.py
class ResourceSpecification(PrefectBaseModel):\n    \"\"\"A specification that may match zero, one, or many resources, used to target or\n    select a set of resources in a query or automation.  A resource must match at least\n    one value of all of the provided labels\"\"\"\n\n    __root__: Dict[str, Union[str, List[str]]]\n\n    def matches_every_resource(self) -> bool:\n        return len(self) == 0\n\n    def matches_every_resource_of_kind(self, prefix: str) -> bool:\n        if self.matches_every_resource():\n            return True\n\n        if len(self.__root__) == 1:\n            if resource_id := self.__root__.get(\"prefect.resource.id\"):\n                values = [resource_id] if isinstance(resource_id, str) else resource_id\n                return any(value == f\"{prefix}.*\" for value in values)\n\n        return False\n\n    def includes(self, candidates: Iterable[Resource]) -> bool:\n        if self.matches_every_resource():\n            return True\n\n        for candidate in candidates:\n            if self.matches(candidate):\n                return True\n\n        return False\n\n    def matches(self, resource: Resource) -> bool:\n        for label, expected in self.items():\n            value = resource.get(label)\n            if not any(matches(candidate, value) for candidate in expected):\n                return False\n        return True\n\n    def items(self) -> Iterable[Tuple[str, List[str]]]:\n        return [\n            (label, [value] if isinstance(value, str) else value)\n            for label, value in self.__root__.items()\n        ]\n\n    def __contains__(self, key: str) -> bool:\n        return self.__root__.__contains__(key)\n\n    def __getitem__(self, key: str) -> List[str]:\n        value = self.__root__[key]\n        if not value:\n            return []\n        if not isinstance(value, list):\n            value = [value]\n        return value\n\n    def pop(\n        self, key: str, default: Optional[Union[str, List[str]]] = None\n    ) -> Optional[List[str]]:\n        value = self.__root__.pop(key, default)\n        if not value:\n            return []\n        if not isinstance(value, list):\n            value = [value]\n        return value\n\n    def get(\n        self, key: str, default: Optional[Union[str, List[str]]] = None\n    ) -> Optional[List[str]]:\n        value = self.__root__.get(key, default)\n        if not value:\n            return []\n        if not isinstance(value, list):\n            value = [value]\n        return value\n\n    def __len__(self) -> int:\n        return len(self.__root__)\n\n    def deepcopy(self) -> \"ResourceSpecification\":\n        return ResourceSpecification.parse_obj(copy.deepcopy(self.__root__))\n
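A sketch of how a specification selects resources. The labels are illustrative, and the trailing-wildcard matching of resource IDs is assumed to follow the convention used elsewhere in this module:

```python
from prefect.events import Resource, ResourceSpecification

spec = ResourceSpecification.parse_obj(
    {"prefect.resource.id": "prefect.flow-run.*", "env": ["prod", "staging"]}
)

prod_run = Resource.parse_obj(
    {"prefect.resource.id": "prefect.flow-run.1234", "env": "prod"}
)
deployment = Resource.parse_obj({"prefect.resource.id": "prefect.deployment.5678"})

print(spec.matches(prod_run))    # True: every label has a matching value
print(spec.matches(deployment))  # False: neither label matches
print(spec.includes([deployment, prod_run]))  # True: at least one candidate matches
```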
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.ResourceTrigger","title":"ResourceTrigger","text":"

Bases: Trigger, ABC

Base class for triggers that may filter by the labels of resources.

Source code in prefect/events/schemas/automations.py
class ResourceTrigger(Trigger, abc.ABC):\n    \"\"\"\n    Base class for triggers that may filter by the labels of resources.\n    \"\"\"\n\n    type: str\n\n    match: ResourceSpecification = Field(\n        default_factory=lambda: ResourceSpecification.parse_obj({}),\n        description=\"Labels for resources which this trigger will match.\",\n    )\n    match_related: ResourceSpecification = Field(\n        default_factory=lambda: ResourceSpecification.parse_obj({}),\n        description=\"Labels for related resources which this trigger will match.\",\n    )\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.ResumeAutomation","title":"ResumeAutomation","text":"

Bases: AutomationAction

Resumes an Automation

Source code in prefect/events/actions.py
class ResumeAutomation(AutomationAction):\n    \"\"\"Resumes a Work Queue\"\"\"\n\n    type: Literal[\"resume-automation\"] = \"resume-automation\"\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.ResumeDeployment","title":"ResumeDeployment","text":"

Bases: DeploymentAction

Resumes the given Deployment

Source code in prefect/events/actions.py
class ResumeDeployment(DeploymentAction):\n    \"\"\"Resumes the given Deployment\"\"\"\n\n    type: Literal[\"resume-deployment\"] = \"resume-deployment\"\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.ResumeWorkPool","title":"ResumeWorkPool","text":"

Bases: WorkPoolAction

Resumes a Work Pool

Source code in prefect/events/actions.py
class ResumeWorkPool(WorkPoolAction):\n    \"\"\"Resumes a Work Pool\"\"\"\n\n    type: Literal[\"resume-work-pool\"] = \"resume-work-pool\"\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.ResumeWorkQueue","title":"ResumeWorkQueue","text":"

Bases: WorkQueueAction

Resumes a Work Queue

Source code in prefect/events/actions.py
class ResumeWorkQueue(WorkQueueAction):\n    \"\"\"Resumes a Work Queue\"\"\"\n\n    type: Literal[\"resume-work-queue\"] = \"resume-work-queue\"\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.RunDeployment","title":"RunDeployment","text":"

Bases: DeploymentAction

Runs the given deployment with the given parameters

Source code in prefect/events/actions.py
class RunDeployment(DeploymentAction):\n    \"\"\"Runs the given deployment with the given parameters\"\"\"\n\n    type: Literal[\"run-deployment\"] = \"run-deployment\"\n\n    parameters: Optional[Dict[str, Any]] = Field(\n        None,\n        description=(\n            \"The parameters to pass to the deployment, or None to use the \"\n            \"deployment's default parameters\"\n        ),\n    )\n    job_variables: Optional[Dict[str, Any]] = Field(\n        None,\n        description=(\n            \"The job variables to pass to the created flow run, or None \"\n            \"to use the deployment's default job variables\"\n        ),\n    )\n
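A sketch of configuring this action. The deployment ID is a placeholder, and `source="selected"` mirrors how `Trigger.actions()` constructs the action further below:

```python
from uuid import UUID

from prefect.events.actions import RunDeployment

action = RunDeployment(
    source="selected",
    deployment_id=UUID("00000000-0000-0000-0000-000000000000"),  # placeholder
    parameters={"name": "Marvin"},  # overrides the deployment's default parameters
)
```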
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.SendNotification","title":"SendNotification","text":"

Bases: Action

Send a notification when an Automation is triggered

Source code in prefect/events/actions.py
class SendNotification(Action):\n    \"\"\"Send a notification when an Automation is triggered\"\"\"\n\n    type: Literal[\"send-notification\"] = \"send-notification\"\n    block_document_id: UUID = Field(\n        description=\"The identifier of the notification block to use\"\n    )\n    subject: str = Field(\"Prefect automated notification\")\n    body: str = Field(description=\"The text of the notification to send\")\n
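A sketch with a placeholder block document ID; in practice this is the ID of a saved notification block:

```python
from uuid import UUID

from prefect.events.actions import SendNotification

action = SendNotification(
    block_document_id=UUID("00000000-0000-0000-0000-000000000000"),  # placeholder
    subject="Flow run failed",
    body="A flow run matching this automation's trigger has failed.",
)
```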
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.SequenceTrigger","title":"SequenceTrigger","text":"

Bases: CompositeTrigger

A composite trigger that requires some number of triggers to have fired within the given time period in a specific order

Source code in prefect/events/schemas/automations.py
class SequenceTrigger(CompositeTrigger):\n    \"\"\"A composite trigger that requires some number of triggers to have fired\n    within the given time period in a specific order\"\"\"\n\n    type: Literal[\"sequence\"] = \"sequence\"\n\n    def describe_for_cli(self, indent: int = 0) -> str:\n        \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n        return textwrap.indent(\n            \"\\n\".join(\n                [\n                    \"In this order:\",\n                    \"\\n\".join(\n                        [\n                            trigger.describe_for_cli(indent=indent + 1)\n                            for trigger in self.triggers\n                        ]\n                    ),\n                ]\n            ),\n            prefix=\"  \" * indent,\n        )\n
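A sketch of an ordered composite. The `triggers` and `within` fields come from the `CompositeTrigger` base class, which is not shown in this excerpt, so treat those field names as assumptions:

```python
from datetime import timedelta

from prefect.events import EventTrigger, SequenceTrigger

trigger = SequenceTrigger(
    triggers=[
        EventTrigger(expect={"prefect.flow-run.Pending"}),
        EventTrigger(expect={"prefect.flow-run.Failed"}),
    ],
    within=timedelta(minutes=15),  # both events must occur, in order, within 15 minutes
)

print(trigger.describe_for_cli())
```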
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.SequenceTrigger.describe_for_cli","title":"describe_for_cli","text":"

Return a human-readable description of this trigger for the CLI

Source code in prefect/events/schemas/automations.py
def describe_for_cli(self, indent: int = 0) -> str:\n    \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n    return textwrap.indent(\n        \"\\n\".join(\n            [\n                \"In this order:\",\n                \"\\n\".join(\n                    [\n                        trigger.describe_for_cli(indent=indent + 1)\n                        for trigger in self.triggers\n                    ]\n                ),\n            ]\n        ),\n        prefix=\"  \" * indent,\n    )\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.SuspendFlowRun","title":"SuspendFlowRun","text":"

Bases: Action

Suspends a flow run associated with the trigger

Source code in prefect/events/actions.py
class SuspendFlowRun(Action):\n    \"\"\"Suspends a flow run associated with the trigger\"\"\"\n\n    type: Literal[\"suspend-flow-run\"] = \"suspend-flow-run\"\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.Trigger","title":"Trigger","text":"

Bases: PrefectBaseModel, ABC

Base class describing a set of criteria that must be satisfied in order to trigger an automation.

Source code in prefect/events/schemas/automations.py
class Trigger(PrefectBaseModel, abc.ABC, extra=\"ignore\"):  # type: ignore[call-arg]\n    \"\"\"\n    Base class describing a set of criteria that must be satisfied in order to trigger\n    an automation.\n    \"\"\"\n\n    type: str\n\n    @abc.abstractmethod\n    def describe_for_cli(self, indent: int = 0) -> str:\n        \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n\n    # The following allows the regular Trigger class to be used when serving or\n    # deploying flows, analogous to how the Deployment*Trigger classes work\n\n    _deployment_id: Optional[UUID] = PrivateAttr(default=None)\n\n    def set_deployment_id(self, deployment_id: UUID):\n        self._deployment_id = deployment_id\n\n    def owner_resource(self) -> Optional[str]:\n        return f\"prefect.deployment.{self._deployment_id}\"\n\n    def actions(self) -> List[ActionTypes]:\n        assert self._deployment_id\n        return [\n            RunDeployment(\n                source=\"selected\",\n                deployment_id=self._deployment_id,\n                parameters=getattr(self, \"parameters\", None),\n                job_variables=getattr(self, \"job_variables\", None),\n            )\n        ]\n\n    def as_automation(self) -> \"AutomationCore\":\n        assert self._deployment_id\n\n        trigger: TriggerTypes = cast(TriggerTypes, self)\n\n        # This is one of the Deployment*Trigger classes, so translate it over to a\n        # plain Trigger\n        if hasattr(self, \"trigger_type\"):\n            trigger = self.trigger_type(**self.dict())\n\n        return AutomationCore(\n            name=(\n                getattr(self, \"name\", None)\n                or f\"Automation for deployment {self._deployment_id}\"\n            ),\n            description=\"\",\n            enabled=getattr(self, \"enabled\", True),\n            trigger=trigger,\n            actions=self.actions(),\n            owner_resource=self.owner_resource(),\n        )\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.Trigger.describe_for_cli","title":"describe_for_cli abstractmethod","text":"

Return a human-readable description of this trigger for the CLI

Source code in prefect/events/schemas/automations.py
@abc.abstractmethod\ndef describe_for_cli(self, indent: int = 0) -> str:\n    \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n
","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.emit_event","title":"emit_event","text":"

Send an event to Prefect Cloud.

Parameters:

- event (str, required): The name of the event that happened.
- resource (Dict[str, str], required): The primary Resource this event concerns.
- occurred (Optional[DateTimeTZ], default None): When the event happened from the sender's perspective. Defaults to the current datetime.
- related (Optional[Union[List[Dict[str, str]], List[RelatedResource]]], default None): A list of additional Resources involved in this event.
- payload (Optional[Dict[str, Any]], default None): An open-ended set of data describing what happened.
- id (Optional[UUID], default None): The sender-provided identifier for this event. Defaults to a random UUID.
- follows (Optional[Event], default None): The event that preceded this one. If the preceding event happened more than 5 minutes prior to this event the follows relationship will not be set.

Returns:

- Optional[Event]: The event that was emitted if the worker is using a client that emits events, otherwise None.

Source code in prefect/events/utilities.py
def emit_event(\n    event: str,\n    resource: Dict[str, str],\n    occurred: Optional[DateTimeTZ] = None,\n    related: Optional[Union[List[Dict[str, str]], List[RelatedResource]]] = None,\n    payload: Optional[Dict[str, Any]] = None,\n    id: Optional[UUID] = None,\n    follows: Optional[Event] = None,\n) -> Optional[Event]:\n    \"\"\"\n    Send an event to Prefect Cloud.\n\n    Args:\n        event: The name of the event that happened.\n        resource: The primary Resource this event concerns.\n        occurred: When the event happened from the sender's perspective.\n                  Defaults to the current datetime.\n        related: A list of additional Resources involved in this event.\n        payload: An open-ended set of data describing what happened.\n        id: The sender-provided identifier for this event. Defaults to a random\n            UUID.\n        follows: The event that preceded this one. If the preceding event\n            happened more than 5 minutes prior to this event the follows\n            relationship will not be set.\n\n    Returns:\n        The event that was emitted if worker is using a client that emit\n        events, otherwise None\n    \"\"\"\n    if not should_emit_events():\n        return None\n\n    operational_clients = [\n        AssertingEventsClient,\n        PrefectCloudEventsClient,\n        PrefectEventsClient,\n        PrefectEphemeralEventsClient,\n    ]\n    worker_instance = EventsWorker.instance()\n\n    if worker_instance.client_type not in operational_clients:\n        return None\n\n    event_kwargs: Dict[str, Any] = {\n        \"event\": event,\n        \"resource\": resource,\n    }\n\n    if occurred is None:\n        occurred = pendulum.now(\"UTC\")\n    event_kwargs[\"occurred\"] = occurred\n\n    if related is not None:\n        event_kwargs[\"related\"] = related\n\n    if payload is not None:\n        event_kwargs[\"payload\"] = payload\n\n    if id is not None:\n        event_kwargs[\"id\"] = id\n\n    if follows is not None:\n        if -TIGHT_TIMING < (occurred - follows.occurred) < TIGHT_TIMING:\n            event_kwargs[\"follows\"] = follows.id\n\n    event_obj = Event(**event_kwargs)\n    worker_instance.send(event_obj)\n\n    return event_obj\n
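A minimal usage sketch. The event name, resource ID, and payload are made up, and the call returns None unless an events client is configured (for example, when connected to Prefect Cloud or a server with events enabled):

```python
from prefect.events import emit_event

emitted = emit_event(
    event="my-app.order.processed",                        # custom event name
    resource={"prefect.resource.id": "my-app.order.123"},  # primary resource
    payload={"total": 19.99, "currency": "USD"},           # open-ended details
)
print(emitted)
```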
","tags":["Python API","events"]},{"location":"api-ref/prefect/exceptions/","title":"prefect.exceptions","text":"","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions","title":"prefect.exceptions","text":"

Prefect-specific exceptions.

","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.Abort","title":"Abort","text":"

Bases: PrefectSignal

Raised when the API sends an 'ABORT' instruction during state proposal.

Indicates that the run should exit immediately.

Source code in prefect/exceptions.py
class Abort(PrefectSignal):\n    \"\"\"\n    Raised when the API sends an 'ABORT' instruction during state proposal.\n\n    Indicates that the run should exit immediately.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.BlockMissingCapabilities","title":"BlockMissingCapabilities","text":"

Bases: PrefectException

Raised when a block does not have required capabilities for a given operation.

Source code in prefect/exceptions.py
class BlockMissingCapabilities(PrefectException):\n    \"\"\"\n    Raised when a block does not have required capabilities for a given operation.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.CancelledRun","title":"CancelledRun","text":"

Bases: PrefectException

Raised when the result from a cancelled run is retrieved and an exception is not attached.

This occurs when a string is attached to the state instead of an exception or if the state's data is null.

Source code in prefect/exceptions.py
class CancelledRun(PrefectException):\n    \"\"\"\n    Raised when the result from a cancelled run is retrieved and an exception\n    is not attached.\n\n    This occurs when a string is attached to the state instead of an exception\n    or if the state's data is null.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.CrashedRun","title":"CrashedRun","text":"

Bases: PrefectException

Raised when the result from a crashed run is retrieved.

This occurs when a string is attached to the state instead of an exception or if the state's data is null.

Source code in prefect/exceptions.py
class CrashedRun(PrefectException):\n    \"\"\"\n    Raised when the result from a crashed run is retrieved.\n\n    This occurs when a string is attached to the state instead of an exception or if\n    the state's data is null.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ExternalSignal","title":"ExternalSignal","text":"

Bases: BaseException

Base type for external signal-like exceptions that should never be caught by users.

Source code in prefect/exceptions.py
class ExternalSignal(BaseException):\n    \"\"\"\n    Base type for external signal-like exceptions that should never be caught by users.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.FailedRun","title":"FailedRun","text":"

Bases: PrefectException

Raised when the result from a failed run is retrieved and an exception is not attached.

This occurs when a string is attached to the state instead of an exception or if the state's data is null.

Source code in prefect/exceptions.py
class FailedRun(PrefectException):\n    \"\"\"\n    Raised when the result from a failed run is retrieved and an exception is not\n    attached.\n\n    This occurs when a string is attached to the state instead of an exception or if\n    the state's data is null.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.FlowPauseTimeout","title":"FlowPauseTimeout","text":"

Bases: PrefectException

Raised when a flow pause times out

Source code in prefect/exceptions.py
class FlowPauseTimeout(PrefectException):\n    \"\"\"Raised when a flow pause times out\"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.FlowRunWaitTimeout","title":"FlowRunWaitTimeout","text":"

Bases: PrefectException

Raised when a flow run takes longer than a given timeout

Source code in prefect/exceptions.py
class FlowRunWaitTimeout(PrefectException):\n    \"\"\"Raised when a flow run takes longer than a given timeout\"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.FlowScriptError","title":"FlowScriptError","text":"

Bases: PrefectException

Raised when a script errors during evaluation while attempting to load a flow.

Source code in prefect/exceptions.py
class FlowScriptError(PrefectException):\n    \"\"\"\n    Raised when a script errors during evaluation while attempting to load a flow.\n    \"\"\"\n\n    def __init__(\n        self,\n        user_exc: Exception,\n        script_path: str,\n    ) -> None:\n        message = f\"Flow script at {script_path!r} encountered an exception\"\n        super().__init__(message)\n\n        self.user_exc = user_exc\n\n    def rich_user_traceback(self, **kwargs):\n        trace = Traceback.extract(\n            type(self.user_exc),\n            self.user_exc,\n            self.user_exc.__traceback__.tb_next.tb_next.tb_next.tb_next,\n        )\n        return Traceback(trace, **kwargs)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InfrastructureError","title":"InfrastructureError","text":"

Bases: PrefectException

A base class for exceptions related to infrastructure blocks

Source code in prefect/exceptions.py
class InfrastructureError(PrefectException):\n    \"\"\"\n    A base class for exceptions related to infrastructure blocks\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InfrastructureNotAvailable","title":"InfrastructureNotAvailable","text":"

Bases: PrefectException

Raised when infrastructure is not accessible from the current machine. For example, if a process was spawned on another machine it cannot be managed.

Source code in prefect/exceptions.py
class InfrastructureNotAvailable(PrefectException):\n    \"\"\"\n    Raised when infrastructure is not accessible from the current machine. For example,\n    if a process was spawned on another machine it cannot be managed.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InfrastructureNotFound","title":"InfrastructureNotFound","text":"

Bases: PrefectException

Raised when infrastructure is missing, likely because it has exited or been deleted.

Source code in prefect/exceptions.py
class InfrastructureNotFound(PrefectException):\n    \"\"\"\n    Raised when infrastructure is missing, likely because it has exited or been\n    deleted.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InvalidNameError","title":"InvalidNameError","text":"

Bases: PrefectException, ValueError

Raised when a name contains characters that are not permitted.

Source code in prefect/exceptions.py
class InvalidNameError(PrefectException, ValueError):\n    \"\"\"\n    Raised when a name contains characters that are not permitted.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InvalidRepositoryURLError","title":"InvalidRepositoryURLError","text":"

Bases: PrefectException

Raised when an incorrect URL is provided to a GitHub filesystem block.

Source code in prefect/exceptions.py
class InvalidRepositoryURLError(PrefectException):\n    \"\"\"Raised when an incorrect URL is provided to a GitHub filesystem block.\"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MappingLengthMismatch","title":"MappingLengthMismatch","text":"

Bases: PrefectException

Raised when attempting to call Task.map with arguments of different lengths.

Source code in prefect/exceptions.py
class MappingLengthMismatch(PrefectException):\n    \"\"\"\n    Raised when attempting to call Task.map with arguments of different lengths.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MappingMissingIterable","title":"MappingMissingIterable","text":"

Bases: PrefectException

Raised when attempting to call Task.map with all static arguments

Source code in prefect/exceptions.py
class MappingMissingIterable(PrefectException):\n    \"\"\"\n    Raised when attempting to call Task.map with all static arguments\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingContextError","title":"MissingContextError","text":"

Bases: PrefectException, RuntimeError

Raised when a method is called that requires a task or flow run context to be active but one cannot be found.

Source code in prefect/exceptions.py
class MissingContextError(PrefectException, RuntimeError):\n    \"\"\"\n    Raised when a method is called that requires a task or flow run context to be\n    active but one cannot be found.\n    \"\"\"\n
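For example, calling `get_run_logger()` outside of a flow or task run raises this error (a small sketch):

```python
from prefect import get_run_logger
from prefect.exceptions import MissingContextError

try:
    get_run_logger()  # no flow or task run context is active here
except MissingContextError as exc:
    print(f"No run context available: {exc}")
```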
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingFlowError","title":"MissingFlowError","text":"

Bases: PrefectException

Raised when a given flow name is not found in the expected script.

Source code in prefect/exceptions.py
class MissingFlowError(PrefectException):\n    \"\"\"\n    Raised when a given flow name is not found in the expected script.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingProfileError","title":"MissingProfileError","text":"

Bases: PrefectException, ValueError

Raised when a profile name does not exist.

Source code in prefect/exceptions.py
class MissingProfileError(PrefectException, ValueError):\n    \"\"\"\n    Raised when a profile name does not exist.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingResult","title":"MissingResult","text":"

Bases: PrefectException

Raised when a result is missing from a state; often when result persistence is disabled and the state is retrieved from the API.

Source code in prefect/exceptions.py
class MissingResult(PrefectException):\n    \"\"\"\n    Raised when a result is missing from a state; often when result persistence is\n    disabled and the state is retrieved from the API.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.NotPausedError","title":"NotPausedError","text":"

Bases: PrefectException

Raised when attempting to unpause a run that isn't paused.

Source code in prefect/exceptions.py
class NotPausedError(PrefectException):\n    \"\"\"Raised when attempting to unpause a run that isn't paused.\"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ObjectAlreadyExists","title":"ObjectAlreadyExists","text":"

Bases: PrefectException

Raised when the client receives a 409 (conflict) from the API.

Source code in prefect/exceptions.py
class ObjectAlreadyExists(PrefectException):\n    \"\"\"\n    Raised when the client receives a 409 (conflict) from the API.\n    \"\"\"\n\n    def __init__(self, http_exc: Exception, *args, **kwargs):\n        self.http_exc = http_exc\n        super().__init__(*args, **kwargs)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ObjectNotFound","title":"ObjectNotFound","text":"

Bases: PrefectException

Raised when the client receives a 404 (not found) from the API.

Source code in prefect/exceptions.py
class ObjectNotFound(PrefectException):\n    \"\"\"\n    Raised when the client receives a 404 (not found) from the API.\n    \"\"\"\n\n    def __init__(self, http_exc: Exception, *args, **kwargs):\n        self.http_exc = http_exc\n        super().__init__(*args, **kwargs)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ParameterBindError","title":"ParameterBindError","text":"

Bases: TypeError, PrefectException

Raised when args and kwargs cannot be converted to parameters.

Source code in prefect/exceptions.py
class ParameterBindError(TypeError, PrefectException):\n    \"\"\"\n    Raised when args and kwargs cannot be converted to parameters.\n    \"\"\"\n\n    def __init__(self, msg: str):\n        super().__init__(msg)\n\n    @classmethod\n    def from_bind_failure(\n        cls, fn: Callable, exc: TypeError, call_args: List, call_kwargs: Dict\n    ) -> Self:\n        fn_signature = str(inspect.signature(fn)).strip(\"()\")\n\n        base = f\"Error binding parameters for function '{fn.__name__}': {exc}\"\n        signature = f\"Function '{fn.__name__}' has signature '{fn_signature}'\"\n        received = f\"received args: {call_args} and kwargs: {list(call_kwargs.keys())}\"\n        msg = f\"{base}.\\n{signature} but {received}.\"\n        return cls(msg)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ParameterTypeError","title":"ParameterTypeError","text":"

Bases: PrefectException

Raised when a parameter does not pass Pydantic type validation.

Source code in prefect/exceptions.py
class ParameterTypeError(PrefectException):\n    \"\"\"\n    Raised when a parameter does not pass Pydantic type validation.\n    \"\"\"\n\n    def __init__(self, msg: str):\n        super().__init__(msg)\n\n    @classmethod\n    def from_validation_error(cls, exc: ValidationError) -> Self:\n        bad_params = [f'{\".\".join(err[\"loc\"])}: {err[\"msg\"]}' for err in exc.errors()]\n        msg = \"Flow run received invalid parameters:\\n - \" + \"\\n - \".join(bad_params)\n        return cls(msg)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.Pause","title":"Pause","text":"

Bases: PrefectSignal

Raised when a flow run is PAUSED and needs to exit for resubmission.

Source code in prefect/exceptions.py
class Pause(PrefectSignal):\n    \"\"\"\n    Raised when a flow run is PAUSED and needs to exit for resubmission.\n    \"\"\"\n\n    def __init__(self, *args, state=None, **kwargs):\n        super().__init__(*args, **kwargs)\n        self.state = state\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PausedRun","title":"PausedRun","text":"

Bases: PrefectException

Raised when the result from a paused run is retrieved.

Source code in prefect/exceptions.py
class PausedRun(PrefectException):\n    \"\"\"\n    Raised when the result from a paused run is retrieved.\n    \"\"\"\n\n    def __init__(self, *args, state=None, **kwargs):\n        super().__init__(*args, **kwargs)\n        self.state = state\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectException","title":"PrefectException","text":"

Bases: Exception

Base exception type for Prefect errors.

Source code in prefect/exceptions.py
class PrefectException(Exception):\n    \"\"\"\n    Base exception type for Prefect errors.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectHTTPStatusError","title":"PrefectHTTPStatusError","text":"

Bases: HTTPStatusError

Raised when client receives a Response that contains an HTTPStatusError.

Used to include API error details in the error messages that the client provides users.

Source code in prefect/exceptions.py
class PrefectHTTPStatusError(HTTPStatusError):\n    \"\"\"\n    Raised when client receives a `Response` that contains an HTTPStatusError.\n\n    Used to include API error details in the error messages that the client provides users.\n    \"\"\"\n\n    @classmethod\n    def from_httpx_error(cls: Type[Self], httpx_error: HTTPStatusError) -> Self:\n        \"\"\"\n        Generate a `PrefectHTTPStatusError` from an `httpx.HTTPStatusError`.\n        \"\"\"\n        try:\n            details = httpx_error.response.json()\n        except Exception:\n            details = None\n\n        error_message, *more_info = str(httpx_error).split(\"\\n\")\n\n        if details:\n            message_components = [error_message, f\"Response: {details}\", *more_info]\n        else:\n            message_components = [error_message, *more_info]\n\n        new_message = \"\\n\".join(message_components)\n\n        return cls(\n            new_message, request=httpx_error.request, response=httpx_error.response\n        )\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectHTTPStatusError.from_httpx_error","title":"from_httpx_error classmethod","text":"

Generate a PrefectHTTPStatusError from an httpx.HTTPStatusError.

Source code in prefect/exceptions.py
@classmethod\ndef from_httpx_error(cls: Type[Self], httpx_error: HTTPStatusError) -> Self:\n    \"\"\"\n    Generate a `PrefectHTTPStatusError` from an `httpx.HTTPStatusError`.\n    \"\"\"\n    try:\n        details = httpx_error.response.json()\n    except Exception:\n        details = None\n\n    error_message, *more_info = str(httpx_error).split(\"\\n\")\n\n    if details:\n        message_components = [error_message, f\"Response: {details}\", *more_info]\n    else:\n        message_components = [error_message, *more_info]\n\n    new_message = \"\\n\".join(message_components)\n\n    return cls(\n        new_message, request=httpx_error.request, response=httpx_error.response\n    )\n
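A sketch of wrapping an httpx error; the URL is a placeholder for any endpoint that returns an error status:

```python
import httpx

from prefect.exceptions import PrefectHTTPStatusError

response = httpx.get("https://example.com/missing")  # placeholder URL
try:
    response.raise_for_status()
except httpx.HTTPStatusError as exc:
    # Re-raise with any JSON error details from the response body included.
    raise PrefectHTTPStatusError.from_httpx_error(exc) from exc
```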
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectSignal","title":"PrefectSignal","text":"

Bases: BaseException

Base type for signal-like exceptions that should never be caught by users.

Source code in prefect/exceptions.py
class PrefectSignal(BaseException):\n    \"\"\"\n    Base type for signal-like exceptions that should never be caught by users.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ProtectedBlockError","title":"ProtectedBlockError","text":"

Bases: PrefectException

Raised when an operation is prevented due to block protection.

Source code in prefect/exceptions.py
class ProtectedBlockError(PrefectException):\n    \"\"\"\n    Raised when an operation is prevented due to block protection.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ReservedArgumentError","title":"ReservedArgumentError","text":"

Bases: PrefectException, TypeError

Raised when a function used with Prefect has an argument with a name that is reserved for a Prefect feature

Source code in prefect/exceptions.py
class ReservedArgumentError(PrefectException, TypeError):\n    \"\"\"\n    Raised when a function used with Prefect has an argument with a name that is\n    reserved for a Prefect feature\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ScriptError","title":"ScriptError","text":"

Bases: PrefectException

Raised when a script errors during evaluation while attempting to load data

Source code in prefect/exceptions.py
class ScriptError(PrefectException):\n    \"\"\"\n    Raised when a script errors during evaluation while attempting to load data\n    \"\"\"\n\n    def __init__(\n        self,\n        user_exc: Exception,\n        path: str,\n    ) -> None:\n        message = f\"Script at {str(path)!r} encountered an exception: {user_exc!r}\"\n        super().__init__(message)\n        self.user_exc = user_exc\n\n        # Strip script run information from the traceback\n        self.user_exc.__traceback__ = _trim_traceback(\n            self.user_exc.__traceback__,\n            remove_modules=[prefect.utilities.importtools],\n        )\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.SignatureMismatchError","title":"SignatureMismatchError","text":"

Bases: PrefectException, TypeError

Raised when parameters passed to a function do not match its signature.

Source code in prefect/exceptions.py
class SignatureMismatchError(PrefectException, TypeError):\n    \"\"\"Raised when parameters passed to a function do not match its signature.\"\"\"\n\n    def __init__(self, msg: str):\n        super().__init__(msg)\n\n    @classmethod\n    def from_bad_params(cls, expected_params: List[str], provided_params: List[str]):\n        msg = (\n            f\"Function expects parameters {expected_params} but was provided with\"\n            f\" parameters {provided_params}\"\n        )\n        return cls(msg)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.TerminationSignal","title":"TerminationSignal","text":"

Bases: ExternalSignal

Raised when a flow run receives a termination signal.

Source code in prefect/exceptions.py
class TerminationSignal(ExternalSignal):\n    \"\"\"\n    Raised when a flow run receives a termination signal.\n    \"\"\"\n\n    def __init__(self, signal: int):\n        self.signal = signal\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.UnfinishedRun","title":"UnfinishedRun","text":"

Bases: PrefectException

Raised when the result from a run that is not finished is retrieved.

For example, if a run is in a SCHEDULED, PENDING, CANCELLING, or RUNNING state.

Source code in prefect/exceptions.py
class UnfinishedRun(PrefectException):\n    \"\"\"\n    Raised when the result from a run that is not finished is retrieved.\n\n    For example, if a run is in a SCHEDULED, PENDING, CANCELLING, or RUNNING state.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.UnspecifiedFlowError","title":"UnspecifiedFlowError","text":"

Bases: PrefectException

Raised when multiple flows are found in the expected script and no name is given.

Source code in prefect/exceptions.py
class UnspecifiedFlowError(PrefectException):\n    \"\"\"\n    Raised when multiple flows are found in the expected script and no name is given.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.UpstreamTaskError","title":"UpstreamTaskError","text":"

Bases: PrefectException

Raised when a task relies on the result of another task but that task is not 'COMPLETE'

Source code in prefect/exceptions.py
class UpstreamTaskError(PrefectException):\n    \"\"\"\n    Raised when a task relies on the result of another task but that task is not\n    'COMPLETE'\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.exception_traceback","title":"exception_traceback","text":"

Convert an exception to a printable string with a traceback

Source code in prefect/exceptions.py
def exception_traceback(exc: Exception) -> str:\n    \"\"\"\n    Convert an exception to a printable string with a traceback\n    \"\"\"\n    tb = traceback.TracebackException.from_exception(exc)\n    return \"\".join(list(tb.format()))\n
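A quick sketch:

```python
from prefect.exceptions import exception_traceback

try:
    1 / 0
except ZeroDivisionError as exc:
    print(exception_traceback(exc))  # full traceback as a printable string
```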
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/filesystems/","title":"prefect.filesystems","text":"","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems","title":"prefect.filesystems","text":"","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.Azure","title":"Azure","text":"

Bases: WritableFileSystem, WritableDeploymentStorage

DEPRECATION WARNING:

This class is deprecated as of March 2024 and will not be available after September 2024. It has been replaced by AzureBlobStorageContainer from the prefect-azure package, which offers enhanced functionality and a better user experience.

Store data as a file on Azure Datalake and Azure Blob Storage.

Example

Load stored Azure config:

from prefect.filesystems import Azure\n\naz_block = Azure.load(\"BLOCK_NAME\")\n

Source code in prefect/filesystems.py
@deprecated_class(\n    start_date=\"Mar 2024\",\n    help=\"Use the `AzureBlobStorageContainer` block from prefect-azure instead.\",\n)\nclass Azure(WritableFileSystem, WritableDeploymentStorage):\n    \"\"\"\n    DEPRECATION WARNING:\n\n    This class is deprecated as of March 2024 and will not be available after September 2024.\n    It has been replaced by `AzureBlobStorageContainer` from the `prefect-azure` package, which\n    offers enhanced functionality and better a better user experience.\n\n    Store data as a file on Azure Datalake and Azure Blob Storage.\n\n    Example:\n        Load stored Azure config:\n        ```python\n        from prefect.filesystems import Azure\n\n        az_block = Azure.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Azure\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/54e3fa7e00197a4fbd1d82ed62494cb58d08c96a-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#azure\"\n\n    bucket_path: str = Field(\n        default=...,\n        description=\"An Azure storage bucket path.\",\n        examples=[\"my-bucket/a-directory-within\"],\n    )\n    azure_storage_connection_string: Optional[SecretStr] = Field(\n        default=None,\n        title=\"Azure storage connection string\",\n        description=(\n            \"Equivalent to the AZURE_STORAGE_CONNECTION_STRING environment variable.\"\n        ),\n    )\n    azure_storage_account_name: Optional[SecretStr] = Field(\n        default=None,\n        title=\"Azure storage account name\",\n        description=(\n            \"Equivalent to the AZURE_STORAGE_ACCOUNT_NAME environment variable.\"\n        ),\n    )\n    azure_storage_account_key: Optional[SecretStr] = Field(\n        default=None,\n        title=\"Azure storage account key\",\n        description=\"Equivalent to the AZURE_STORAGE_ACCOUNT_KEY environment variable.\",\n    )\n    azure_storage_tenant_id: Optional[SecretStr] = Field(\n        None,\n        title=\"Azure storage tenant ID\",\n        description=\"Equivalent to the AZURE_TENANT_ID environment variable.\",\n    )\n    azure_storage_client_id: Optional[SecretStr] = Field(\n        None,\n        title=\"Azure storage client ID\",\n        description=\"Equivalent to the AZURE_CLIENT_ID environment variable.\",\n    )\n    azure_storage_client_secret: Optional[SecretStr] = Field(\n        None,\n        title=\"Azure storage client secret\",\n        description=\"Equivalent to the AZURE_CLIENT_SECRET environment variable.\",\n    )\n    azure_storage_anon: bool = Field(\n        default=True,\n        title=\"Azure storage anonymous connection\",\n        description=(\n            \"Set the 'anon' flag for ADLFS. This should be False for systems that\"\n            \" require ADLFS to use DefaultAzureCredentials.\"\n        ),\n    )\n    azure_storage_container: Optional[SecretStr] = Field(\n        default=None,\n        title=\"Azure storage container\",\n        description=(\n            \"Blob Container in Azure Storage Account. 
If set the 'bucket_path' will\"\n            \" be interpreted using the following URL format:\"\n            \"'az://<container>@<storage_account>.dfs.core.windows.net/<bucket_path>'.\"\n        ),\n    )\n    _remote_file_system: RemoteFileSystem = None\n\n    @property\n    def basepath(self) -> str:\n        if self.azure_storage_container:\n            return (\n                f\"az://{self.azure_storage_container.get_secret_value()}\"\n                f\"@{self.azure_storage_account_name.get_secret_value()}\"\n                f\".dfs.core.windows.net/{self.bucket_path}\"\n            )\n        else:\n            return f\"az://{self.bucket_path}\"\n\n    @property\n    def filesystem(self) -> RemoteFileSystem:\n        settings = {}\n        if self.azure_storage_connection_string:\n            settings[\n                \"connection_string\"\n            ] = self.azure_storage_connection_string.get_secret_value()\n        if self.azure_storage_account_name:\n            settings[\n                \"account_name\"\n            ] = self.azure_storage_account_name.get_secret_value()\n        if self.azure_storage_account_key:\n            settings[\"account_key\"] = self.azure_storage_account_key.get_secret_value()\n        if self.azure_storage_tenant_id:\n            settings[\"tenant_id\"] = self.azure_storage_tenant_id.get_secret_value()\n        if self.azure_storage_client_id:\n            settings[\"client_id\"] = self.azure_storage_client_id.get_secret_value()\n        if self.azure_storage_client_secret:\n            settings[\n                \"client_secret\"\n            ] = self.azure_storage_client_secret.get_secret_value()\n        settings[\"anon\"] = self.azure_storage_anon\n        self._remote_file_system = RemoteFileSystem(\n            basepath=self.basepath, settings=settings\n        )\n        return self._remote_file_system\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> bytes:\n        \"\"\"\n        Downloads a directory from a given remote path to a local directory.\n\n        Defaults to downloading the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        return await self.filesystem.get_directory(\n            from_path=from_path, local_path=local_path\n        )\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n    ) -> int:\n        \"\"\"\n        Uploads a directory from a given local path to a remote directory.\n\n        Defaults to uploading the entire contents of the current working directory to the block's basepath.\n        \"\"\"\n        return await self.filesystem.put_directory(\n            local_path=local_path, to_path=to_path, ignore_file=ignore_file\n        )\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        return await self.filesystem.read_path(path)\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        return await self.filesystem.write_path(path=path, content=content)\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.Azure.get_directory","title":"get_directory async","text":"

Downloads a directory from a given remote path to a local directory.

Defaults to downloading the entire contents of the block's basepath to the current working directory.

Source code in prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n    \"\"\"\n    Downloads a directory from a given remote path to a local directory.\n\n    Defaults to downloading the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    return await self.filesystem.get_directory(\n        from_path=from_path, local_path=local_path\n    )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.Azure.put_directory","title":"put_directory async","text":"

Uploads a directory from a given local path to a remote directory.

Defaults to uploading the entire contents of the current working directory to the block's basepath.

Source code in prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n) -> int:\n    \"\"\"\n    Uploads a directory from a given local path to a remote directory.\n\n    Defaults to uploading the entire contents of the current working directory to the block's basepath.\n    \"\"\"\n    return await self.filesystem.put_directory(\n        local_path=local_path, to_path=to_path, ignore_file=ignore_file\n    )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GCS","title":"GCS","text":"

Bases: WritableFileSystem, WritableDeploymentStorage

DEPRECATION WARNING:

This class is deprecated as of March 2024 and will not be available after September 2024. It has been replaced by GcsBucket from the prefect-gcp package, which offers enhanced functionality and a better user experience. Store data as a file on Google Cloud Storage.

Example

Load stored GCS config:

from prefect.filesystems import GCS\n\ngcs_block = GCS.load(\"BLOCK_NAME\")\n

Source code in prefect/filesystems.py
@deprecated_class(\n    start_date=\"Mar 2024\", help=\"Use the `GcsBucket` block from prefect-gcp instead.\"\n)\nclass GCS(WritableFileSystem, WritableDeploymentStorage):\n    \"\"\"\n    DEPRECATION WARNING:\n\n    This class is deprecated as of March 2024 and will not be available after September 2024.\n    It has been replaced by `GcsBucket` from the `prefect-gcp` package, which offers enhanced functionality\n    and better a better user experience.\n    Store data as a file on Google Cloud Storage.\n\n    Example:\n        Load stored GCS config:\n        ```python\n        from prefect.filesystems import GCS\n\n        gcs_block = GCS.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/422d13bb838cf247eb2b2cf229ce6a2e717d601b-256x256.png\"\n    _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#gcs\"\n\n    bucket_path: str = Field(\n        default=...,\n        description=\"A GCS bucket path.\",\n        examples=[\"my-bucket/a-directory-within\"],\n    )\n    service_account_info: Optional[SecretStr] = Field(\n        default=None,\n        description=\"The contents of a service account keyfile as a JSON string.\",\n    )\n    project: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The project the GCS bucket resides in. If not provided, the project will\"\n            \" be inferred from the credentials or environment.\"\n        ),\n    )\n\n    @property\n    def basepath(self) -> str:\n        return f\"gcs://{self.bucket_path}\"\n\n    @property\n    def filesystem(self) -> RemoteFileSystem:\n        settings = {}\n        if self.service_account_info:\n            try:\n                settings[\"token\"] = json.loads(\n                    self.service_account_info.get_secret_value()\n                )\n            except json.JSONDecodeError:\n                raise ValueError(\n                    \"Unable to load provided service_account_info. 
Please make sure\"\n                    \" that the provided value is a valid JSON string.\"\n                )\n        remote_file_system = RemoteFileSystem(\n            basepath=f\"gcs://{self.bucket_path}\", settings=settings\n        )\n        return remote_file_system\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> bytes:\n        \"\"\"\n        Downloads a directory from a given remote path to a local directory.\n\n        Defaults to downloading the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        return await self.filesystem.get_directory(\n            from_path=from_path, local_path=local_path\n        )\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n    ) -> int:\n        \"\"\"\n        Uploads a directory from a given local path to a remote directory.\n\n        Defaults to uploading the entire contents of the current working directory to the block's basepath.\n        \"\"\"\n        return await self.filesystem.put_directory(\n            local_path=local_path, to_path=to_path, ignore_file=ignore_file\n        )\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        return await self.filesystem.read_path(path)\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        return await self.filesystem.write_path(path=path, content=content)\n
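
As a hedged usage sketch, assuming a saved block named \"BLOCK_NAME\" whose service account can write to the bucket, put_directory and get_directory move whole directory trees:

from prefect.filesystems import GCS\n\ngcs_block = GCS.load(\"BLOCK_NAME\")  # placeholder block name\n\n# Upload the current working directory under the bucket path,\n# then download that copy into ./restored\ngcs_block.put_directory(local_path=\".\", to_path=\"my-project\")\ngcs_block.get_directory(from_path=\"my-project\", local_path=\"./restored\")\n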
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GCS.get_directory","title":"get_directory async","text":"

Downloads a directory from a given remote path to a local directory.

Defaults to downloading the entire contents of the block's basepath to the current working directory.

Source code in prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n    \"\"\"\n    Downloads a directory from a given remote path to a local directory.\n\n    Defaults to downloading the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    return await self.filesystem.get_directory(\n        from_path=from_path, local_path=local_path\n    )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GCS.put_directory","title":"put_directory async","text":"

Uploads a directory from a given local path to a remote directory.

Defaults to uploading the entire contents of the current working directory to the block's basepath.

Source code in prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n) -> int:\n    \"\"\"\n    Uploads a directory from a given local path to a remote directory.\n\n    Defaults to uploading the entire contents of the current working directory to the block's basepath.\n    \"\"\"\n    return await self.filesystem.put_directory(\n        local_path=local_path, to_path=to_path, ignore_file=ignore_file\n    )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GitHub","title":"GitHub","text":"

Bases: ReadableDeploymentStorage

DEPRECATION WARNING:\n\nThis class is deprecated as of March 2024 and will not be available after September 2024.\nIt has been replaced by `GitHubRepository` from the `prefect-github` package, which offers\nenhanced functionality and a better user experience.\n

Interact with files stored on GitHub repositories.
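
For example, a minimal sketch that pulls repository contents with this block; the repository URL and subdirectory below are placeholders, and an access_token is only needed for private repositories:

from prefect.filesystems import GitHub\n\ngh_block = GitHub(\n    repository=\"https://github.com/org/repo.git\",  # placeholder URL\n    reference=\"main\",\n)\n\n# Copy only the repository's \"flows\" subdirectory into ./flows\ngh_block.get_directory(from_path=\"flows\", local_path=\".\")\n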

Source code in prefect/filesystems.py
@deprecated_class(\n    start_date=\"Mar 2024\",\n    help=\"Use the `GitHubRepository` block from prefect-github instead.\",\n)\nclass GitHub(ReadableDeploymentStorage):\n    \"\"\"\n        DEPRECATION WARNING:\n\n        This class is deprecated as of March 2024 and will not be available after September 2024.\n        It has been replaced by `GitHubRepository` from the `prefect-github` package, which offers\n        enhanced functionality and better a better user experience.\n    q\n        Interact with files stored on GitHub repositories.\n    \"\"\"\n\n    _block_type_name = \"GitHub\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/41971cfecfea5f79ff334164f06ecb34d1038dd4-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#github\"\n\n    repository: str = Field(\n        default=...,\n        description=(\n            \"The URL of a GitHub repository to read from, in either HTTPS or SSH\"\n            \" format.\"\n        ),\n    )\n    reference: Optional[str] = Field(\n        default=None,\n        description=\"An optional reference to pin to; can be a branch name or tag.\",\n    )\n    access_token: Optional[SecretStr] = Field(\n        name=\"Personal Access Token\",\n        default=None,\n        description=(\n            \"A GitHub Personal Access Token (PAT) with repo scope.\"\n            \" To use a fine-grained PAT, provide '{username}:{PAT}' as the value.\"\n        ),\n    )\n    include_git_objects: bool = Field(\n        default=True,\n        description=(\n            \"Whether to include git objects when copying the repo contents to a\"\n            \" directory.\"\n        ),\n    )\n\n    @validator(\"access_token\")\n    def _ensure_credentials_go_with_https(cls, v: str, values: dict) -> str:\n        return validate_github_access_token(v, values)\n\n    def _create_repo_url(self) -> str:\n        \"\"\"Format the URL provided to the `git clone` command.\n\n        For private repos: https://<oauth-key>@github.com/<username>/<repo>.git\n        All other repos should be the same as `self.repository`.\n        \"\"\"\n        url_components = urllib.parse.urlparse(self.repository)\n        if url_components.scheme == \"https\" and self.access_token is not None:\n            updated_components = url_components._replace(\n                netloc=f\"{self.access_token.get_secret_value()}@{url_components.netloc}\"\n            )\n            full_url = urllib.parse.urlunparse(updated_components)\n        else:\n            full_url = self.repository\n\n        return full_url\n\n    @staticmethod\n    def _get_paths(\n        dst_dir: Union[str, None], src_dir: str, sub_directory: str\n    ) -> Tuple[str, str]:\n        \"\"\"Returns the fully formed paths for GitHubRepository contents in the form\n        (content_source, content_destination).\n        \"\"\"\n        if dst_dir is None:\n            content_destination = Path(\".\").absolute()\n        else:\n            content_destination = Path(dst_dir)\n\n        content_source = Path(src_dir)\n\n        if sub_directory:\n            content_destination = content_destination.joinpath(sub_directory)\n            content_source = content_source.joinpath(sub_directory)\n\n        return str(content_source), str(content_destination)\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> None:\n        \"\"\"\n        Clones a GitHub project specified in `from_path` 
to the provided `local_path`;\n        defaults to cloning the repository reference configured on the Block to the\n        present working directory.\n\n        Args:\n            from_path: If provided, interpreted as a subdirectory of the underlying\n                repository that will be copied to the provided local path.\n            local_path: A local path to clone to; defaults to present working directory.\n        \"\"\"\n        # CONSTRUCT COMMAND\n        cmd = [\"git\", \"clone\", self._create_repo_url()]\n        if self.reference:\n            cmd += [\"-b\", self.reference]\n\n        # Limit git history\n        cmd += [\"--depth\", \"1\"]\n\n        # Clone to a temporary directory and move the subdirectory over\n        with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n            cmd.append(tmp_dir)\n\n            err_stream = io.StringIO()\n            out_stream = io.StringIO()\n            process = await run_process(cmd, stream_output=(out_stream, err_stream))\n            if process.returncode != 0:\n                err_stream.seek(0)\n                raise OSError(f\"Failed to pull from remote:\\n {err_stream.read()}\")\n\n            content_source, content_destination = self._get_paths(\n                dst_dir=local_path, src_dir=tmp_dir, sub_directory=from_path\n            )\n\n            ignore_func = None\n            if not self.include_git_objects:\n                ignore_func = ignore_patterns(\".git\")\n\n            copytree(\n                src=content_source,\n                dst=content_destination,\n                dirs_exist_ok=True,\n                ignore=ignore_func,\n            )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GitHub.get_directory","title":"get_directory async","text":"

Clones a GitHub project specified in from_path to the provided local_path; defaults to cloning the repository reference configured on the Block to the present working directory.

Parameters:

Name Type Description Default from_path Optional[str]

If provided, interpreted as a subdirectory of the underlying repository that will be copied to the provided local path.

None local_path Optional[str]

A local path to clone to; defaults to present working directory.

None Source code in prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> None:\n    \"\"\"\n    Clones a GitHub project specified in `from_path` to the provided `local_path`;\n    defaults to cloning the repository reference configured on the Block to the\n    present working directory.\n\n    Args:\n        from_path: If provided, interpreted as a subdirectory of the underlying\n            repository that will be copied to the provided local path.\n        local_path: A local path to clone to; defaults to present working directory.\n    \"\"\"\n    # CONSTRUCT COMMAND\n    cmd = [\"git\", \"clone\", self._create_repo_url()]\n    if self.reference:\n        cmd += [\"-b\", self.reference]\n\n    # Limit git history\n    cmd += [\"--depth\", \"1\"]\n\n    # Clone to a temporary directory and move the subdirectory over\n    with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n        cmd.append(tmp_dir)\n\n        err_stream = io.StringIO()\n        out_stream = io.StringIO()\n        process = await run_process(cmd, stream_output=(out_stream, err_stream))\n        if process.returncode != 0:\n            err_stream.seek(0)\n            raise OSError(f\"Failed to pull from remote:\\n {err_stream.read()}\")\n\n        content_source, content_destination = self._get_paths(\n            dst_dir=local_path, src_dir=tmp_dir, sub_directory=from_path\n        )\n\n        ignore_func = None\n        if not self.include_git_objects:\n            ignore_func = ignore_patterns(\".git\")\n\n        copytree(\n            src=content_source,\n            dst=content_destination,\n            dirs_exist_ok=True,\n            ignore=ignore_func,\n        )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.LocalFileSystem","title":"LocalFileSystem","text":"

Bases: WritableFileSystem, WritableDeploymentStorage

Store data as a file on a local file system.

Example

Load stored local file system config:

from prefect.filesystems import LocalFileSystem\n\nlocal_file_system_block = LocalFileSystem.load(\"BLOCK_NAME\")\n

Source code in prefect/filesystems.py
class LocalFileSystem(WritableFileSystem, WritableDeploymentStorage):\n    \"\"\"\n    Store data as a file on a local file system.\n\n    Example:\n        Load stored local file system config:\n        ```python\n        from prefect.filesystems import LocalFileSystem\n\n        local_file_system_block = LocalFileSystem.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Local File System\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/ad39089fa66d273b943394a68f003f7a19aa850e-48x48.png\"\n    _documentation_url = (\n        \"https://docs.prefect.io/concepts/filesystems/#local-filesystem\"\n    )\n\n    basepath: Optional[str] = Field(\n        default=None, description=\"Default local path for this block to write to.\"\n    )\n\n    @validator(\"basepath\", pre=True)\n    def cast_pathlib(cls, value):\n        return stringify_path(value)\n\n    def _resolve_path(self, path: str) -> Path:\n        # Only resolve the base path at runtime, default to the current directory\n        basepath = (\n            Path(self.basepath).expanduser().resolve()\n            if self.basepath\n            else Path(\".\").resolve()\n        )\n\n        # Determine the path to access relative to the base path, ensuring that paths\n        # outside of the base path are off limits\n        if path is None:\n            return basepath\n\n        path: Path = Path(path).expanduser()\n\n        if not path.is_absolute():\n            path = basepath / path\n        else:\n            path = path.resolve()\n            if basepath not in path.parents and (basepath != path):\n                raise ValueError(\n                    f\"Provided path {path} is outside of the base path {basepath}.\"\n                )\n\n        return path\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: str = None, local_path: str = None\n    ) -> None:\n        \"\"\"\n        Copies a directory from one place to another on the local filesystem.\n\n        Defaults to copying the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        if not from_path:\n            from_path = Path(self.basepath).expanduser().resolve()\n        else:\n            from_path = self._resolve_path(from_path)\n\n        if not local_path:\n            local_path = Path(\".\").resolve()\n        else:\n            local_path = Path(local_path).resolve()\n\n        if from_path == local_path:\n            # If the paths are the same there is no need to copy\n            # and we avoid shutil.copytree raising an error\n            return\n\n        # .prefectignore exists in the original location, not the current location which\n        # is most likely temporary\n        if (from_path / Path(\".prefectignore\")).exists():\n            ignore_func = await self._get_ignore_func(\n                local_path=from_path, ignore_file=from_path / Path(\".prefectignore\")\n            )\n        else:\n            ignore_func = None\n\n        copytree(from_path, local_path, dirs_exist_ok=True, ignore=ignore_func)\n\n    async def _get_ignore_func(self, local_path: str, ignore_file: str):\n        with open(ignore_file, \"r\") as f:\n            ignore_patterns = f.readlines()\n        included_files = filter_files(root=local_path, ignore_patterns=ignore_patterns)\n\n        def ignore_func(directory, files):\n            relative_path = Path(directory).relative_to(local_path)\n\n            files_to_ignore = [\n                f for f in 
files if str(relative_path / f) not in included_files\n            ]\n            return files_to_ignore\n\n        return ignore_func\n\n    @sync_compatible\n    async def put_directory(\n        self, local_path: str = None, to_path: str = None, ignore_file: str = None\n    ) -> None:\n        \"\"\"\n        Copies a directory from one place to another on the local filesystem.\n\n        Defaults to copying the entire contents of the current working directory to the block's basepath.\n        An `ignore_file` path may be provided that can include gitignore style expressions for filepaths to ignore.\n        \"\"\"\n        destination_path = self._resolve_path(to_path)\n\n        if not local_path:\n            local_path = Path(\".\").absolute()\n\n        if ignore_file:\n            ignore_func = await self._get_ignore_func(\n                local_path=local_path, ignore_file=ignore_file\n            )\n        else:\n            ignore_func = None\n\n        if local_path == destination_path:\n            pass\n        else:\n            copytree(\n                src=local_path,\n                dst=destination_path,\n                ignore=ignore_func,\n                dirs_exist_ok=True,\n            )\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        path: Path = self._resolve_path(path)\n\n        # Check if the path exists\n        if not path.exists():\n            raise ValueError(f\"Path {path} does not exist.\")\n\n        # Validate that its a file\n        if not path.is_file():\n            raise ValueError(f\"Path {path} is not a file.\")\n\n        async with await anyio.open_file(str(path), mode=\"rb\") as f:\n            content = await f.read()\n\n        return content\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        path: Path = self._resolve_path(path)\n\n        # Construct the path if it does not exist\n        path.parent.mkdir(exist_ok=True, parents=True)\n\n        # Check if the file already exists\n        if path.exists() and not path.is_file():\n            raise ValueError(f\"Path {path} already exists and is not a file.\")\n\n        async with await anyio.open_file(path, mode=\"wb\") as f:\n            await f.write(content)\n        # Leave path stringify to the OS\n        return str(path)\n
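
As a small usage sketch with an illustrative base path, write_path creates any missing parent directories and read_path returns the stored bytes:

from prefect.filesystems import LocalFileSystem\n\nfs = LocalFileSystem(basepath=\"/tmp/prefect-storage\")  # illustrative path\n\n# Relative paths are resolved against basepath; parent directories are created as needed\nfs.write_path(\"results/output.txt\", b\"some bytes\")\nprint(fs.read_path(\"results/output.txt\"))\n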
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.LocalFileSystem.get_directory","title":"get_directory async","text":"

Copies a directory from one place to another on the local filesystem.

Defaults to copying the entire contents of the block's basepath to the current working directory.

Source code in prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n    self, from_path: str = None, local_path: str = None\n) -> None:\n    \"\"\"\n    Copies a directory from one place to another on the local filesystem.\n\n    Defaults to copying the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    if not from_path:\n        from_path = Path(self.basepath).expanduser().resolve()\n    else:\n        from_path = self._resolve_path(from_path)\n\n    if not local_path:\n        local_path = Path(\".\").resolve()\n    else:\n        local_path = Path(local_path).resolve()\n\n    if from_path == local_path:\n        # If the paths are the same there is no need to copy\n        # and we avoid shutil.copytree raising an error\n        return\n\n    # .prefectignore exists in the original location, not the current location which\n    # is most likely temporary\n    if (from_path / Path(\".prefectignore\")).exists():\n        ignore_func = await self._get_ignore_func(\n            local_path=from_path, ignore_file=from_path / Path(\".prefectignore\")\n        )\n    else:\n        ignore_func = None\n\n    copytree(from_path, local_path, dirs_exist_ok=True, ignore=ignore_func)\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.LocalFileSystem.put_directory","title":"put_directory async","text":"

Copies a directory from one place to another on the local filesystem.

Defaults to copying the entire contents of the current working directory to the block's basepath. An ignore_file path may be provided that can include gitignore style expressions for filepaths to ignore.

Source code in prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n    self, local_path: str = None, to_path: str = None, ignore_file: str = None\n) -> None:\n    \"\"\"\n    Copies a directory from one place to another on the local filesystem.\n\n    Defaults to copying the entire contents of the current working directory to the block's basepath.\n    An `ignore_file` path may be provided that can include gitignore style expressions for filepaths to ignore.\n    \"\"\"\n    destination_path = self._resolve_path(to_path)\n\n    if not local_path:\n        local_path = Path(\".\").absolute()\n\n    if ignore_file:\n        ignore_func = await self._get_ignore_func(\n            local_path=local_path, ignore_file=ignore_file\n        )\n    else:\n        ignore_func = None\n\n    if local_path == destination_path:\n        pass\n    else:\n        copytree(\n            src=local_path,\n            dst=destination_path,\n            ignore=ignore_func,\n            dirs_exist_ok=True,\n        )\n
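
For instance, a sketch of the ignore_file option, assuming a .prefectignore file with gitignore-style patterns exists in the current directory:

from prefect.filesystems import LocalFileSystem\n\nfs = LocalFileSystem(basepath=\"/tmp/prefect-storage\")  # illustrative path\n\n# Copy the current directory into basepath/project, skipping ignored files\nfs.put_directory(local_path=\".\", to_path=\"project\", ignore_file=\".prefectignore\")\n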
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.RemoteFileSystem","title":"RemoteFileSystem","text":"

Bases: WritableFileSystem, WritableDeploymentStorage

Store data as a file on a remote file system.

Supports any remote file system supported by fsspec. The file system is specified using a protocol. For example, \"s3://my-bucket/my-folder/\" will use S3.

Example

Load stored remote file system config:

from prefect.filesystems import RemoteFileSystem\n\nremote_file_system_block = RemoteFileSystem.load(\"BLOCK_NAME\")\n

Source code in prefect/filesystems.py
class RemoteFileSystem(WritableFileSystem, WritableDeploymentStorage):\n    \"\"\"\n    Store data as a file on a remote file system.\n\n    Supports any remote file system supported by `fsspec`. The file system is specified\n    using a protocol. For example, \"s3://my-bucket/my-folder/\" will use S3.\n\n    Example:\n        Load stored remote file system config:\n        ```python\n        from prefect.filesystems import RemoteFileSystem\n\n        remote_file_system_block = RemoteFileSystem.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Remote File System\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/e86b41bc0f9c99ba9489abeee83433b43d5c9365-48x48.png\"\n    _documentation_url = (\n        \"https://docs.prefect.io/concepts/filesystems/#remote-file-system\"\n    )\n\n    basepath: str = Field(\n        default=...,\n        description=\"Default path for this block to write to.\",\n        examples=[\"s3://my-bucket/my-folder/\"],\n    )\n    settings: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Additional settings to pass through to fsspec.\",\n    )\n\n    # Cache for the configured fsspec file system used for access\n    _filesystem: fsspec.AbstractFileSystem = None\n\n    @validator(\"basepath\")\n    def check_basepath(cls, value):\n        return validate_basepath(value)\n\n    def _resolve_path(self, path: str) -> str:\n        base_scheme, base_netloc, base_urlpath, _, _ = urllib.parse.urlsplit(\n            self.basepath\n        )\n        scheme, netloc, urlpath, _, _ = urllib.parse.urlsplit(path)\n\n        # Confirm that absolute paths are valid\n        if scheme:\n            if scheme != base_scheme:\n                raise ValueError(\n                    f\"Path {path!r} with scheme {scheme!r} must use the same scheme as\"\n                    f\" the base path {base_scheme!r}.\"\n                )\n\n        if netloc:\n            if (netloc != base_netloc) or not urlpath.startswith(base_urlpath):\n                raise ValueError(\n                    f\"Path {path!r} is outside of the base path {self.basepath!r}.\"\n                )\n\n        return f\"{self.basepath.rstrip('/')}/{urlpath.lstrip('/')}\"\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> None:\n        \"\"\"\n        Downloads a directory from a given remote path to a local directory.\n\n        Defaults to downloading the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        if from_path is None:\n            from_path = str(self.basepath)\n        else:\n            from_path = self._resolve_path(from_path)\n\n        if local_path is None:\n            local_path = Path(\".\").absolute()\n\n        # validate that from_path has a trailing slash for proper fsspec behavior across versions\n        if not from_path.endswith(\"/\"):\n            from_path += \"/\"\n\n        return self.filesystem.get(from_path, local_path, recursive=True)\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n        overwrite: bool = True,\n    ) -> int:\n        \"\"\"\n        Uploads a directory from a given local path to a remote directory.\n\n        Defaults to uploading the entire contents of the current working directory to the block's 
basepath.\n        \"\"\"\n        if to_path is None:\n            to_path = str(self.basepath)\n        else:\n            to_path = self._resolve_path(to_path)\n\n        if local_path is None:\n            local_path = \".\"\n\n        included_files = None\n        if ignore_file:\n            with open(ignore_file, \"r\") as f:\n                ignore_patterns = f.readlines()\n\n            included_files = filter_files(\n                local_path, ignore_patterns, include_dirs=True\n            )\n\n        counter = 0\n        for f in Path(local_path).rglob(\"*\"):\n            relative_path = f.relative_to(local_path)\n            if included_files and str(relative_path) not in included_files:\n                continue\n\n            if to_path.endswith(\"/\"):\n                fpath = to_path + relative_path.as_posix()\n            else:\n                fpath = to_path + \"/\" + relative_path.as_posix()\n\n            if f.is_dir():\n                pass\n            else:\n                f = f.as_posix()\n                if overwrite:\n                    self.filesystem.put_file(f, fpath, overwrite=True)\n                else:\n                    self.filesystem.put_file(f, fpath)\n\n                counter += 1\n\n        return counter\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        path = self._resolve_path(path)\n\n        with self.filesystem.open(path, \"rb\") as file:\n            content = await run_sync_in_worker_thread(file.read)\n\n        return content\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        path = self._resolve_path(path)\n        dirpath = path[: path.rindex(\"/\")]\n\n        self.filesystem.makedirs(dirpath, exist_ok=True)\n\n        with self.filesystem.open(path, \"wb\") as file:\n            await run_sync_in_worker_thread(file.write, content)\n        return path\n\n    @property\n    def filesystem(self) -> fsspec.AbstractFileSystem:\n        if not self._filesystem:\n            scheme, _, _, _, _ = urllib.parse.urlsplit(self.basepath)\n\n            try:\n                self._filesystem = fsspec.filesystem(scheme, **self.settings)\n            except ImportError as exc:\n                # The path is a remote file system that uses a lib that is not installed\n                raise RuntimeError(\n                    f\"File system created with scheme {scheme!r} from base path \"\n                    f\"{self.basepath!r} could not be created. \"\n                    \"You are likely missing a Python module required to use the given \"\n                    \"storage protocol.\"\n                ) from exc\n\n        return self._filesystem\n
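
As a hedged sketch of direct construction, the settings dictionary is passed straight to fsspec, so the backend for the chosen protocol (here s3fs for S3) must be installed; the bucket and credentials below are placeholders:

from prefect.filesystems import RemoteFileSystem\n\nremote = RemoteFileSystem(\n    basepath=\"s3://my-bucket/my-folder/\",\n    settings={\"key\": \"<access-key>\", \"secret\": \"<secret-key>\"},  # placeholder credentials\n)\n\nremote.write_path(\"data/report.json\", b\"{}\")\n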
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.RemoteFileSystem.get_directory","title":"get_directory async","text":"

Downloads a directory from a given remote path to a local directory.

Defaults to downloading the entire contents of the block's basepath to the current working directory.

Source code in prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> None:\n    \"\"\"\n    Downloads a directory from a given remote path to a local directory.\n\n    Defaults to downloading the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    if from_path is None:\n        from_path = str(self.basepath)\n    else:\n        from_path = self._resolve_path(from_path)\n\n    if local_path is None:\n        local_path = Path(\".\").absolute()\n\n    # validate that from_path has a trailing slash for proper fsspec behavior across versions\n    if not from_path.endswith(\"/\"):\n        from_path += \"/\"\n\n    return self.filesystem.get(from_path, local_path, recursive=True)\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.RemoteFileSystem.put_directory","title":"put_directory async","text":"

Uploads a directory from a given local path to a remote directory.

Defaults to uploading the entire contents of the current working directory to the block's basepath.

Source code in prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n    overwrite: bool = True,\n) -> int:\n    \"\"\"\n    Uploads a directory from a given local path to a remote directory.\n\n    Defaults to uploading the entire contents of the current working directory to the block's basepath.\n    \"\"\"\n    if to_path is None:\n        to_path = str(self.basepath)\n    else:\n        to_path = self._resolve_path(to_path)\n\n    if local_path is None:\n        local_path = \".\"\n\n    included_files = None\n    if ignore_file:\n        with open(ignore_file, \"r\") as f:\n            ignore_patterns = f.readlines()\n\n        included_files = filter_files(\n            local_path, ignore_patterns, include_dirs=True\n        )\n\n    counter = 0\n    for f in Path(local_path).rglob(\"*\"):\n        relative_path = f.relative_to(local_path)\n        if included_files and str(relative_path) not in included_files:\n            continue\n\n        if to_path.endswith(\"/\"):\n            fpath = to_path + relative_path.as_posix()\n        else:\n            fpath = to_path + \"/\" + relative_path.as_posix()\n\n        if f.is_dir():\n            pass\n        else:\n            f = f.as_posix()\n            if overwrite:\n                self.filesystem.put_file(f, fpath, overwrite=True)\n            else:\n                self.filesystem.put_file(f, fpath)\n\n            counter += 1\n\n    return counter\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.S3","title":"S3","text":"

Bases: WritableFileSystem, WritableDeploymentStorage

DEPRECATION WARNING:

This class is deprecated as of March 2024 and will not be available after September 2024. It has been replaced by S3Bucket from the prefect-aws package, which offers enhanced functionality and a better user experience.

Store data as a file on AWS S3.

Example

Load stored S3 config:

from prefect.filesystems import S3\n\ns3_block = S3.load(\"BLOCK_NAME\")\n

Source code in prefect/filesystems.py
@deprecated_class(\n    start_date=\"Mar 2024\", help=\"Use the `S3Bucket` block from prefect-aws instead.\"\n)\nclass S3(WritableFileSystem, WritableDeploymentStorage):\n    \"\"\"\n    DEPRECATION WARNING:\n\n    This class is deprecated as of March 2024 and will not be available after September 2024.\n    It has been replaced by `S3Bucket` from the `prefect-aws` package, which offers enhanced functionality\n    and better a better user experience.\n\n    Store data as a file on AWS S3.\n\n    Example:\n        Load stored S3 config:\n        ```python\n        from prefect.filesystems import S3\n\n        s3_block = S3.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"S3\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/d74b16fe84ce626345adf235a47008fea2869a60-225x225.png\"\n    _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#s3\"\n\n    bucket_path: str = Field(\n        default=...,\n        description=\"An S3 bucket path.\",\n        examples=[\"my-bucket/a-directory-within\"],\n    )\n    aws_access_key_id: Optional[SecretStr] = Field(\n        default=None,\n        title=\"AWS Access Key ID\",\n        description=\"Equivalent to the AWS_ACCESS_KEY_ID environment variable.\",\n        examples=[\"AKIAIOSFODNN7EXAMPLE\"],\n    )\n    aws_secret_access_key: Optional[SecretStr] = Field(\n        default=None,\n        title=\"AWS Secret Access Key\",\n        description=\"Equivalent to the AWS_SECRET_ACCESS_KEY environment variable.\",\n        examples=[\"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\"],\n    )\n\n    _remote_file_system: RemoteFileSystem = None\n\n    @property\n    def basepath(self) -> str:\n        return f\"s3://{self.bucket_path}\"\n\n    @property\n    def filesystem(self) -> RemoteFileSystem:\n        settings = {}\n        if self.aws_access_key_id:\n            settings[\"key\"] = self.aws_access_key_id.get_secret_value()\n        if self.aws_secret_access_key:\n            settings[\"secret\"] = self.aws_secret_access_key.get_secret_value()\n        self._remote_file_system = RemoteFileSystem(\n            basepath=f\"s3://{self.bucket_path}\", settings=settings\n        )\n        return self._remote_file_system\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> bytes:\n        \"\"\"\n        Downloads a directory from a given remote path to a local directory.\n\n        Defaults to downloading the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        return await self.filesystem.get_directory(\n            from_path=from_path, local_path=local_path\n        )\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n    ) -> int:\n        \"\"\"\n        Uploads a directory from a given local path to a remote directory.\n\n        Defaults to uploading the entire contents of the current working directory to the block's basepath.\n        \"\"\"\n        return await self.filesystem.put_directory(\n            local_path=local_path, to_path=to_path, ignore_file=ignore_file\n        )\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        return await self.filesystem.read_path(path)\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        return 
await self.filesystem.write_path(path=path, content=content)\n
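
As a usage sketch built with the documentation's example credentials (placeholders only), the block can be constructed directly and used to upload the working directory:

from prefect.filesystems import S3\n\ns3_block = S3(\n    bucket_path=\"my-bucket/a-directory-within\",\n    aws_access_key_id=\"AKIAIOSFODNN7EXAMPLE\",  # placeholder\n    aws_secret_access_key=\"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\",  # placeholder\n)\n\n# Upload the current working directory beneath the bucket path\ns3_block.put_directory(local_path=\".\")\n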
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.S3.get_directory","title":"get_directory async","text":"

Downloads a directory from a given remote path to a local directory.

Defaults to downloading the entire contents of the block's basepath to the current working directory.

Source code in prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n    \"\"\"\n    Downloads a directory from a given remote path to a local directory.\n\n    Defaults to downloading the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    return await self.filesystem.get_directory(\n        from_path=from_path, local_path=local_path\n    )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.S3.put_directory","title":"put_directory async","text":"

Uploads a directory from a given local path to a remote directory.

Defaults to uploading the entire contents of the current working directory to the block's basepath.

Source code in prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n) -> int:\n    \"\"\"\n    Uploads a directory from a given local path to a remote directory.\n\n    Defaults to uploading the entire contents of the current working directory to the block's basepath.\n    \"\"\"\n    return await self.filesystem.put_directory(\n        local_path=local_path, to_path=to_path, ignore_file=ignore_file\n    )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.SMB","title":"SMB","text":"

Bases: WritableFileSystem, WritableDeploymentStorage

Store data as a file on an SMB share.

Example

Load stored SMB config:

from prefect.filesystems import SMB\nsmb_block = SMB.load(\"BLOCK_NAME\")\n
Source code in prefect/filesystems.py
class SMB(WritableFileSystem, WritableDeploymentStorage):\n    \"\"\"\n    Store data as a file on a SMB share.\n\n    Example:\n        Load stored SMB config:\n\n        ```python\n        from prefect.filesystems import SMB\n        smb_block = SMB.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"SMB\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/3f624663f7beb97d011d011bffd51ecf6c499efc-195x195.png\"\n    _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#smb\"\n\n    share_path: str = Field(\n        default=...,\n        description=\"SMB target (requires <SHARE>, followed by <PATH>).\",\n        examples=[\"/SHARE/dir/subdir\"],\n    )\n    smb_username: Optional[SecretStr] = Field(\n        default=None,\n        title=\"SMB Username\",\n        description=\"Username with access to the target SMB SHARE.\",\n    )\n    smb_password: Optional[SecretStr] = Field(\n        default=None, title=\"SMB Password\", description=\"Password for SMB access.\"\n    )\n    smb_host: str = Field(\n        default=..., tile=\"SMB server/hostname\", description=\"SMB server/hostname.\"\n    )\n    smb_port: Optional[int] = Field(\n        default=None, title=\"SMB port\", description=\"SMB port (default: 445).\"\n    )\n\n    _remote_file_system: RemoteFileSystem = None\n\n    @property\n    def basepath(self) -> str:\n        return f\"smb://{self.smb_host.rstrip('/')}/{self.share_path.lstrip('/')}\"\n\n    @property\n    def filesystem(self) -> RemoteFileSystem:\n        settings = {}\n        if self.smb_username:\n            settings[\"username\"] = self.smb_username.get_secret_value()\n        if self.smb_password:\n            settings[\"password\"] = self.smb_password.get_secret_value()\n        if self.smb_host:\n            settings[\"host\"] = self.smb_host\n        if self.smb_port:\n            settings[\"port\"] = self.smb_port\n        self._remote_file_system = RemoteFileSystem(\n            basepath=f\"smb://{self.smb_host.rstrip('/')}/{self.share_path.lstrip('/')}\",\n            settings=settings,\n        )\n        return self._remote_file_system\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> bytes:\n        \"\"\"\n        Downloads a directory from a given remote path to a local directory.\n        Defaults to downloading the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        return await self.filesystem.get_directory(\n            from_path=from_path, local_path=local_path\n        )\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n    ) -> int:\n        \"\"\"\n        Uploads a directory from a given local path to a remote directory.\n        Defaults to uploading the entire contents of the current working directory to the block's basepath.\n        \"\"\"\n        return await self.filesystem.put_directory(\n            local_path=local_path,\n            to_path=to_path,\n            ignore_file=ignore_file,\n            overwrite=False,\n        )\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        return await self.filesystem.read_path(path)\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        return await 
self.filesystem.write_path(path=path, content=content)\n
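
As a hedged sketch (the host, share, and credentials are placeholders, and the SMB fsspec backend must be installed), the block can be built directly and used like the other file systems:

from prefect.filesystems import SMB\n\nsmb_block = SMB(\n    share_path=\"/SHARE/dir/subdir\",       # placeholder share and path\n    smb_host=\"fileserver.example.com\",  # placeholder host\n    smb_username=\"user\",\n    smb_password=\"password\",\n)\n\n# Port defaults to 445 when smb_port is not set\nsmb_block.put_directory(local_path=\".\", to_path=\"backups\")\n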
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.SMB.get_directory","title":"get_directory async","text":"

Downloads a directory from a given remote path to a local directory. Defaults to downloading the entire contents of the block's basepath to the current working directory.

Source code in prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n    \"\"\"\n    Downloads a directory from a given remote path to a local directory.\n    Defaults to downloading the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    return await self.filesystem.get_directory(\n        from_path=from_path, local_path=local_path\n    )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.SMB.put_directory","title":"put_directory async","text":"

Uploads a directory from a given local path to a remote directory. Defaults to uploading the entire contents of the current working directory to the block's basepath.

Source code in prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n) -> int:\n    \"\"\"\n    Uploads a directory from a given local path to a remote directory.\n    Defaults to uploading the entire contents of the current working directory to the block's basepath.\n    \"\"\"\n    return await self.filesystem.put_directory(\n        local_path=local_path,\n        to_path=to_path,\n        ignore_file=ignore_file,\n        overwrite=False,\n    )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/flow_runs/","title":"prefect.flow_runs","text":"","tags":["Python API","flow-runs"]},{"location":"api-ref/prefect/flow_runs/#prefect.flow_runs","title":"prefect.flow_runs","text":"","tags":["Python API","flow-runs"]},{"location":"api-ref/prefect/flow_runs/#prefect.flow_runs.wait_for_flow_run","title":"wait_for_flow_run async","text":"

Waits for the Prefect flow run to finish and returns the FlowRun

Parameters:

Name Type Description Default flow_run_id UUID

The flow run ID for the flow run to wait for.

required timeout Optional[int]

The wait timeout in seconds. Defaults to 10800 (3 hours).

10800 poll_interval int

The poll interval in seconds. Defaults to 5.

5

Returns:

Name Type Description FlowRun FlowRun

The finished flow run.

Raises:

Type Description FlowWaitTimeout

If flow run goes over the timeout.

Examples:

Create a flow run for a deployment and wait for it to finish:

import asyncio\n\nfrom prefect import get_client\nfrom prefect.flow_runs import wait_for_flow_run\n\nasync def main():\n    async with get_client() as client:\n        flow_run = await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n        flow_run = await wait_for_flow_run(flow_run_id=flow_run.id)\n        print(flow_run.state)\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n

Trigger multiple flow runs and wait for them to finish:

import asyncio\n\nfrom prefect import get_client\nfrom prefect.flow_runs import wait_for_flow_run\n\nasync def main(num_runs: int):\n    async with get_client() as client:\n        flow_runs = [\n            await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n            for _\n            in range(num_runs)\n        ]\n        coros = [wait_for_flow_run(flow_run_id=flow_run.id) for flow_run in flow_runs]\n        finished_flow_runs = await asyncio.gather(*coros)\n        print([flow_run.state for flow_run in finished_flow_runs])\n\nif __name__ == \"__main__\":\n    asyncio.run(main(num_runs=10))\n

Source code in prefect/flow_runs.py
@inject_client\nasync def wait_for_flow_run(\n    flow_run_id: UUID,\n    timeout: Optional[int] = 10800,\n    poll_interval: int = 5,\n    client: Optional[PrefectClient] = None,\n    log_states: bool = False,\n) -> FlowRun:\n    \"\"\"\n    Waits for the prefect flow run to finish and returns the FlowRun\n\n    Args:\n        flow_run_id: The flow run ID for the flow run to wait for.\n        timeout: The wait timeout in seconds. Defaults to 10800 (3 hours).\n        poll_interval: The poll interval in seconds. Defaults to 5.\n\n    Returns:\n        FlowRun: The finished flow run.\n\n    Raises:\n        prefect.exceptions.FlowWaitTimeout: If flow run goes over the timeout.\n\n    Examples:\n        Create a flow run for a deployment and wait for it to finish:\n            ```python\n            import asyncio\n\n            from prefect import get_client\n            from prefect.flow_runs import wait_for_flow_run\n\n            async def main():\n                async with get_client() as client:\n                    flow_run = await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n                    flow_run = await wait_for_flow_run(flow_run_id=flow_run.id)\n                    print(flow_run.state)\n\n            if __name__ == \"__main__\":\n                asyncio.run(main())\n\n            ```\n\n        Trigger multiple flow runs and wait for them to finish:\n            ```python\n            import asyncio\n\n            from prefect import get_client\n            from prefect.flow_runs import wait_for_flow_run\n\n            async def main(num_runs: int):\n                async with get_client() as client:\n                    flow_runs = [\n                        await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n                        for _\n                        in range(num_runs)\n                    ]\n                    coros = [wait_for_flow_run(flow_run_id=flow_run.id) for flow_run in flow_runs]\n                    finished_flow_runs = await asyncio.gather(*coros)\n                    print([flow_run.state for flow_run in finished_flow_runs])\n\n            if __name__ == \"__main__\":\n                asyncio.run(main(num_runs=10))\n\n            ```\n    \"\"\"\n    assert client is not None, \"Client injection failed\"\n    logger = get_logger()\n    with anyio.move_on_after(timeout):\n        while True:\n            flow_run = await client.read_flow_run(flow_run_id)\n            flow_state = flow_run.state\n            if log_states:\n                logger.info(f\"Flow run is in state {flow_run.state.name!r}\")\n            if flow_state and flow_state.is_final():\n                return flow_run\n            await anyio.sleep(poll_interval)\n    raise FlowRunWaitTimeout(\n        f\"Flow run with ID {flow_run_id} exceeded watch timeout of {timeout} seconds\"\n    )\n
","tags":["Python API","flow-runs"]},{"location":"api-ref/prefect/flows/","title":"prefect.flows","text":"","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows","title":"prefect.flows","text":"

Module containing the base workflow class and decorator - for most use cases, using the @flow decorator is preferred.
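
For orientation, a minimal sketch of the preferred decorator usage (the flow and its name are illustrative):

from prefect import flow\n\n@flow\ndef greet(name: str = \"world\") -> str:\n    return f\"Hello, {name}!\"\n\nif __name__ == \"__main__\":\n    print(greet(\"Marvin\"))\n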

","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow","title":"Flow","text":"

Bases: Generic[P, R]

A Prefect workflow definition.

Note

We recommend using the @flow decorator for most use cases.

Wraps a function with an entrypoint to the Prefect engine. To preserve the input and output types, we use the generic type variables P and R for \"Parameters\" and \"Returns\" respectively.
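
Before the full parameter reference below, a rough sketch of how a few of these options are passed through the decorator (all values shown are illustrative):

from prefect import flow\n\n@flow(\n    name=\"example-flow\",\n    retries=2,\n    retry_delay_seconds=10,\n    timeout_seconds=600,\n)\ndef process(x: int) -> int:\n    # validate_parameters is True by default, so \"5\" passed for x is coerced to 5\n    return x * 2\n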

Parameters:

- fn (Callable[P, R], required): The function defining the workflow.
- name (Optional[str], default None): An optional name for the flow; if not provided, the name will be inferred from the given function.
- version (Optional[str], default None): An optional version string for the flow; if not provided, we will attempt to create a version string as a hash of the file containing the wrapped function; if the file cannot be located, the version will be null.
- flow_run_name (Optional[Union[Callable[[], str], str]], default None): An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string.
- task_runner (Union[Type[BaseTaskRunner], BaseTaskRunner], default ConcurrentTaskRunner): An optional task runner to use for task execution within the flow; if not provided, a ConcurrentTaskRunner will be used.
- description (str, default None): An optional string description for the flow; if not provided, the description will be pulled from the docstring for the decorated function.
- timeout_seconds (Union[int, float], default None): An optional number of seconds indicating a maximum runtime for the flow. If the flow exceeds this runtime, it will be marked as failed. Flow execution may continue until the next task is called.
- validate_parameters (bool, default True): By default, parameters passed to flows are validated by Pydantic. This will check that input values conform to the annotated types on the function. Where possible, values will be coerced into the correct type; for example, if a parameter is defined as x: int and "5" is passed, it will be resolved to 5. If set to False, no validation will be performed on flow parameters (see the sketch after this list).
- retries (Optional[int], default None): An optional number of times to retry on flow run failure.
- retry_delay_seconds (Optional[Union[int, float]], default None): An optional number of seconds to wait before retrying the flow after failure. This is only applicable if retries is nonzero.
- persist_result (Optional[bool], default None): An optional toggle indicating whether the result of this flow should be persisted to result storage. Defaults to None, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.
- result_storage (Optional[ResultStorage], default None): An optional block to use to persist the result of this flow. This value will be used as the default for any tasks in this flow. If not provided, the local file system will be used unless called as a subflow, at which point the default will be loaded from the parent flow.
- result_serializer (Optional[ResultSerializer], default None): An optional serializer to use to serialize the result of this flow for persistence. This value will be used as the default for any tasks in this flow. If not provided, the value of PREFECT_RESULTS_DEFAULT_SERIALIZER will be used unless called as a subflow, at which point the default will be loaded from the parent flow.
- on_failure (Optional[List[Callable[[Flow, FlowRun, State], None]]], default None): An optional list of callables to run when the flow enters a failed state.
- on_completion (Optional[List[Callable[[Flow, FlowRun, State], None]]], default None): An optional list of callables to run when the flow enters a completed state.
- on_cancellation (Optional[List[Callable[[Flow, FlowRun, State], None]]], default None): An optional list of callables to run when the flow enters a cancelling state.
- on_crashed (Optional[List[Callable[[Flow, FlowRun, State], None]]], default None): An optional list of callables to run when the flow enters a crashed state.
- on_running (Optional[List[Callable[[Flow, FlowRun, State], None]]], default None): An optional list of callables to run when the flow enters a running state.
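To make the options above concrete, here is a hedged sketch of a flow configured with several of these parameters; the flow name, run-name template, retry values, and the failure hook body are illustrative assumptions, not taken from the source:

```python
from prefect import flow

def alert_on_failure(flow, flow_run, state):
    # Illustrative on_failure hook: it receives the Flow, the FlowRun, and the final State.
    print(f"Flow run {flow_run.name!r} ended in state {state.name!r}")

@flow(
    name="nightly-etl",             # explicit name instead of one inferred from the function
    flow_run_name="etl-{date}",     # string template rendered with the flow's parameters
    retries=2,                      # retry the flow run up to two times on failure
    retry_delay_seconds=30,         # wait 30 seconds before each retry
    timeout_seconds=600,            # mark the run failed if it exceeds 10 minutes
    validate_parameters=True,       # Pydantic coerces inputs, e.g. "100" -> 100 for batch_size
    on_failure=[alert_on_failure],  # hooks to run when the flow enters a failed state
)
def nightly_etl(date: str, batch_size: int = 100):
    ...
```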

Source code in prefect/flows.py
@PrefectObjectRegistry.register_instances\nclass Flow(Generic[P, R]):\n    \"\"\"\n    A Prefect workflow definition.\n\n    !!! note\n        We recommend using the [`@flow` decorator][prefect.flows.flow] for most use-cases.\n\n    Wraps a function with an entrypoint to the Prefect engine. To preserve the input\n    and output types, we use the generic type variables `P` and `R` for \"Parameters\" and\n    \"Returns\" respectively.\n\n    Args:\n        fn: The function defining the workflow.\n        name: An optional name for the flow; if not provided, the name will be inferred\n            from the given function.\n        version: An optional version string for the flow; if not provided, we will\n            attempt to create a version string as a hash of the file containing the\n            wrapped function; if the file cannot be located, the version will be null.\n        flow_run_name: An optional name to distinguish runs of this flow; this name can\n            be provided as a string template with the flow's parameters as variables,\n            or a function that returns a string.\n        task_runner: An optional task runner to use for task execution within the flow;\n            if not provided, a `ConcurrentTaskRunner` will be used.\n        description: An optional string description for the flow; if not provided, the\n            description will be pulled from the docstring for the decorated function.\n        timeout_seconds: An optional number of seconds indicating a maximum runtime for\n            the flow. If the flow exceeds this runtime, it will be marked as failed.\n            Flow execution may continue until the next task is called.\n        validate_parameters: By default, parameters passed to flows are validated by\n            Pydantic. This will check that input values conform to the annotated types\n            on the function. Where possible, values will be coerced into the correct\n            type; for example, if a parameter is defined as `x: int` and \"5\" is passed,\n            it will be resolved to `5`. If set to `False`, no validation will be\n            performed on flow parameters.\n        retries: An optional number of times to retry on flow run failure.\n        retry_delay_seconds: An optional number of seconds to wait before retrying the\n            flow after failure. This is only applicable if `retries` is nonzero.\n        persist_result: An optional toggle indicating whether the result of this flow\n            should be persisted to result storage. Defaults to `None`, which indicates\n            that Prefect should choose whether the result should be persisted depending on\n            the features being used.\n        result_storage: An optional block to use to persist the result of this flow.\n            This value will be used as the default for any tasks in this flow.\n            If not provided, the local file system will be used unless called as\n            a subflow, at which point the default will be loaded from the parent flow.\n        result_serializer: An optional serializer to use to serialize the result of this\n            flow for persistence. This value will be used as the default for any tasks\n            in this flow. 
If not provided, the value of `PREFECT_RESULTS_DEFAULT_SERIALIZER`\n            will be used unless called as a subflow, at which point the default will be\n            loaded from the parent flow.\n        on_failure: An optional list of callables to run when the flow enters a failed state.\n        on_completion: An optional list of callables to run when the flow enters a completed state.\n        on_cancellation: An optional list of callables to run when the flow enters a cancelling state.\n        on_crashed: An optional list of callables to run when the flow enters a crashed state.\n        on_running: An optional list of callables to run when the flow enters a running state.\n    \"\"\"\n\n    # NOTE: These parameters (types, defaults, and docstrings) should be duplicated\n    #       exactly in the @flow decorator\n    def __init__(\n        self,\n        fn: Callable[P, R],\n        name: Optional[str] = None,\n        version: Optional[str] = None,\n        flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n        retries: Optional[int] = None,\n        retry_delay_seconds: Optional[Union[int, float]] = None,\n        task_runner: Union[Type[BaseTaskRunner], BaseTaskRunner] = ConcurrentTaskRunner,\n        description: str = None,\n        timeout_seconds: Union[int, float] = None,\n        validate_parameters: bool = True,\n        persist_result: Optional[bool] = None,\n        result_storage: Optional[ResultStorage] = None,\n        result_serializer: Optional[ResultSerializer] = None,\n        cache_result_in_memory: bool = True,\n        log_prints: Optional[bool] = None,\n        on_completion: Optional[\n            List[Callable[[FlowSchema, FlowRun, State], None]]\n        ] = None,\n        on_failure: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n        on_cancellation: Optional[\n            List[Callable[[FlowSchema, FlowRun, State], None]]\n        ] = None,\n        on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n        on_running: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    ):\n        if name is not None and not isinstance(name, str):\n            raise TypeError(\n                \"Expected string for flow parameter 'name'; got {} instead. {}\".format(\n                    type(name).__name__,\n                    (\n                        \"Perhaps you meant to call it? 
e.g.\"\n                        \" '@flow(name=get_flow_run_name())'\"\n                        if callable(name)\n                        else \"\"\n                    ),\n                )\n            )\n\n        # Validate if hook passed is list and contains callables\n        hook_categories = [\n            on_completion,\n            on_failure,\n            on_cancellation,\n            on_crashed,\n            on_running,\n        ]\n        hook_names = [\n            \"on_completion\",\n            \"on_failure\",\n            \"on_cancellation\",\n            \"on_crashed\",\n            \"on_running\",\n        ]\n        for hooks, hook_name in zip(hook_categories, hook_names):\n            if hooks is not None:\n                if not hooks:\n                    raise ValueError(f\"Empty list passed for '{hook_name}'\")\n                try:\n                    hooks = list(hooks)\n                except TypeError:\n                    raise TypeError(\n                        f\"Expected iterable for '{hook_name}'; got\"\n                        f\" {type(hooks).__name__} instead. Please provide a list of\"\n                        f\" hooks to '{hook_name}':\\n\\n\"\n                        f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n                        \" my_flow():\\n\\tpass\"\n                    )\n\n                for hook in hooks:\n                    if not callable(hook):\n                        raise TypeError(\n                            f\"Expected callables in '{hook_name}'; got\"\n                            f\" {type(hook).__name__} instead. Please provide a list of\"\n                            f\" hooks to '{hook_name}':\\n\\n\"\n                            f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n                            \" my_flow():\\n\\tpass\"\n                        )\n\n        if not callable(fn):\n            raise TypeError(\"'fn' must be callable\")\n\n        # Validate name if given\n        if name:\n            raise_on_name_with_banned_characters(name)\n\n        self.name = name or fn.__name__.replace(\"_\", \"-\")\n\n        if flow_run_name is not None:\n            if not isinstance(flow_run_name, str) and not callable(flow_run_name):\n                raise TypeError(\n                    \"Expected string or callable for 'flow_run_name'; got\"\n                    f\" {type(flow_run_name).__name__} instead.\"\n                )\n        self.flow_run_name = flow_run_name\n\n        task_runner = task_runner or ConcurrentTaskRunner()\n        self.task_runner = (\n            task_runner() if isinstance(task_runner, type) else task_runner\n        )\n\n        self.log_prints = log_prints\n\n        self.description = description or inspect.getdoc(fn)\n        update_wrapper(self, fn)\n        self.fn = fn\n        self.isasync = is_async_fn(self.fn)\n\n        raise_for_reserved_arguments(self.fn, [\"return_state\", \"wait_for\"])\n\n        # Version defaults to a hash of the function's file\n        flow_file = inspect.getsourcefile(self.fn)\n        if not version:\n            try:\n                version = file_hash(flow_file)\n            except (FileNotFoundError, TypeError, OSError):\n                pass  # `getsourcefile` can return null values and \"<stdin>\" for objects in repls\n        self.version = version\n\n        self.timeout_seconds = float(timeout_seconds) if timeout_seconds else None\n\n        # FlowRunPolicy settings\n        # TODO: We can instantiate a `FlowRunPolicy` and add Pydantic 
bound checks to\n        #       validate that the user passes positive numbers here\n        self.retries = (\n            retries if retries is not None else PREFECT_FLOW_DEFAULT_RETRIES.value()\n        )\n\n        self.retry_delay_seconds = (\n            retry_delay_seconds\n            if retry_delay_seconds is not None\n            else PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS.value()\n        )\n\n        self.parameters = parameter_schema(self.fn)\n        self.should_validate_parameters = validate_parameters\n\n        if self.should_validate_parameters:\n            # Try to create the validated function now so that incompatibility can be\n            # raised at declaration time rather than at runtime\n            # We cannot, however, store the validated function on the flow because it\n            # is not picklable in some environments\n            try:\n                ValidatedFunction(self.fn, config={\"arbitrary_types_allowed\": True})\n            except pydantic.ConfigError as exc:\n                raise ValueError(\n                    \"Flow function is not compatible with `validate_parameters`. \"\n                    \"Disable validation or change the argument names.\"\n                ) from exc\n\n        self.persist_result = persist_result\n        self.result_storage = result_storage\n        self.result_serializer = result_serializer\n        self.cache_result_in_memory = cache_result_in_memory\n\n        # Check for collision in the registry\n        registry = PrefectObjectRegistry.get()\n\n        if registry and any(\n            other\n            for other in registry.get_instances(Flow)\n            if other.name == self.name and id(other.fn) != id(self.fn)\n        ):\n            file = inspect.getsourcefile(self.fn)\n            line_number = inspect.getsourcelines(self.fn)[1]\n            warnings.warn(\n                f\"A flow named {self.name!r} and defined at '{file}:{line_number}' \"\n                \"conflicts with another flow. 
Consider specifying a unique `name` \"\n                \"parameter in the flow definition:\\n\\n \"\n                \"`@flow(name='my_unique_name', ...)`\"\n            )\n        self.on_completion = on_completion\n        self.on_failure = on_failure\n        self.on_cancellation = on_cancellation\n        self.on_crashed = on_crashed\n        self.on_running = on_running\n\n        # Used for flows loaded from remote storage\n        self._storage: Optional[RunnerStorage] = None\n        self._entrypoint: Optional[str] = None\n\n        module = fn.__module__\n        if module in (\"__main__\", \"__prefect_loader__\"):\n            module_name = inspect.getfile(fn)\n            module = module_name if module_name != \"__main__\" else module\n\n        self._entrypoint = f\"{module}:{fn.__name__}\"\n\n    def with_options(\n        self,\n        *,\n        name: str = None,\n        version: str = None,\n        retries: Optional[int] = None,\n        retry_delay_seconds: Optional[Union[int, float]] = None,\n        description: str = None,\n        flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n        task_runner: Union[Type[BaseTaskRunner], BaseTaskRunner] = None,\n        timeout_seconds: Union[int, float] = None,\n        validate_parameters: bool = None,\n        persist_result: Optional[bool] = NotSet,\n        result_storage: Optional[ResultStorage] = NotSet,\n        result_serializer: Optional[ResultSerializer] = NotSet,\n        cache_result_in_memory: bool = None,\n        log_prints: Optional[bool] = NotSet,\n        on_completion: Optional[\n            List[Callable[[FlowSchema, FlowRun, State], None]]\n        ] = None,\n        on_failure: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n        on_cancellation: Optional[\n            List[Callable[[FlowSchema, FlowRun, State], None]]\n        ] = None,\n        on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n        on_running: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    ) -> Self:\n        \"\"\"\n        Create a new flow from the current object, updating provided options.\n\n        Args:\n            name: A new name for the flow.\n            version: A new version for the flow.\n            description: A new description for the flow.\n            flow_run_name: An optional name to distinguish runs of this flow; this name\n                can be provided as a string template with the flow's parameters as variables,\n                or a function that returns a string.\n            task_runner: A new task runner for the flow.\n            timeout_seconds: A new number of seconds to fail the flow after if still\n                running.\n            validate_parameters: A new value indicating if flow calls should validate\n                given parameters.\n            retries: A new number of times to retry on flow run failure.\n            retry_delay_seconds: A new number of seconds to wait before retrying the\n                flow after failure. 
This is only applicable if `retries` is nonzero.\n            persist_result: A new option for enabling or disabling result persistence.\n            result_storage: A new storage type to use for results.\n            result_serializer: A new serializer to use for results.\n            cache_result_in_memory: A new value indicating if the flow's result should\n                be cached in memory.\n            on_failure: A new list of callables to run when the flow enters a failed state.\n            on_completion: A new list of callables to run when the flow enters a completed state.\n            on_cancellation: A new list of callables to run when the flow enters a cancelling state.\n            on_crashed: A new list of callables to run when the flow enters a crashed state.\n            on_running: A new list of callables to run when the flow enters a running state.\n\n        Returns:\n            A new `Flow` instance.\n\n        Examples:\n\n            Create a new flow from an existing flow and update the name:\n\n            >>> @flow(name=\"My flow\")\n            >>> def my_flow():\n            >>>     return 1\n            >>>\n            >>> new_flow = my_flow.with_options(name=\"My new flow\")\n\n            Create a new flow from an existing flow, update the task runner, and call\n            it without an intermediate variable:\n\n            >>> from prefect.task_runners import SequentialTaskRunner\n            >>>\n            >>> @flow\n            >>> def my_flow(x, y):\n            >>>     return x + y\n            >>>\n            >>> state = my_flow.with_options(task_runner=SequentialTaskRunner)(1, 3)\n            >>> assert state.result() == 4\n\n        \"\"\"\n        new_flow = Flow(\n            fn=self.fn,\n            name=name or self.name,\n            description=description or self.description,\n            flow_run_name=flow_run_name or self.flow_run_name,\n            version=version or self.version,\n            task_runner=task_runner or self.task_runner,\n            retries=retries if retries is not None else self.retries,\n            retry_delay_seconds=(\n                retry_delay_seconds\n                if retry_delay_seconds is not None\n                else self.retry_delay_seconds\n            ),\n            timeout_seconds=(\n                timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n            ),\n            validate_parameters=(\n                validate_parameters\n                if validate_parameters is not None\n                else self.should_validate_parameters\n            ),\n            persist_result=(\n                persist_result if persist_result is not NotSet else self.persist_result\n            ),\n            result_storage=(\n                result_storage if result_storage is not NotSet else self.result_storage\n            ),\n            result_serializer=(\n                result_serializer\n                if result_serializer is not NotSet\n                else self.result_serializer\n            ),\n            cache_result_in_memory=(\n                cache_result_in_memory\n                if cache_result_in_memory is not None\n                else self.cache_result_in_memory\n            ),\n            log_prints=log_prints if log_prints is not NotSet else self.log_prints,\n            on_completion=on_completion or self.on_completion,\n            on_failure=on_failure or self.on_failure,\n            on_cancellation=on_cancellation or self.on_cancellation,\n            
on_crashed=on_crashed or self.on_crashed,\n            on_running=on_running or self.on_running,\n        )\n        new_flow._storage = self._storage\n        new_flow._entrypoint = self._entrypoint\n        return new_flow\n\n    def validate_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n        \"\"\"\n        Validate parameters for compatibility with the flow by attempting to cast the inputs to the\n        associated types specified by the function's type annotations.\n\n        Returns:\n            A new dict of parameters that have been cast to the appropriate types\n\n        Raises:\n            ParameterTypeError: if the provided parameters are not valid\n        \"\"\"\n        args, kwargs = parameters_to_args_kwargs(self.fn, parameters)\n\n        if HAS_PYDANTIC_V2:\n            has_v1_models = any(isinstance(o, V1BaseModel) for o in args) or any(\n                isinstance(o, V1BaseModel) for o in kwargs.values()\n            )\n            has_v2_types = any(is_v2_type(o) for o in args) or any(\n                is_v2_type(o) for o in kwargs.values()\n            )\n\n            if has_v1_models and has_v2_types:\n                raise ParameterTypeError(\n                    \"Cannot mix Pydantic v1 and v2 types as arguments to a flow.\"\n                )\n\n            if has_v1_models:\n                validated_fn = V1ValidatedFunction(\n                    self.fn, config={\"arbitrary_types_allowed\": True}\n                )\n            else:\n                validated_fn = V2ValidatedFunction(\n                    self.fn, config={\"arbitrary_types_allowed\": True}\n                )\n\n        else:\n            validated_fn = ValidatedFunction(\n                self.fn, config={\"arbitrary_types_allowed\": True}\n            )\n\n        try:\n            model = validated_fn.init_model_instance(*args, **kwargs)\n        except pydantic.ValidationError as exc:\n            # We capture the pydantic exception and raise our own because the pydantic\n            # exception is not picklable when using a cythonized pydantic installation\n            raise ParameterTypeError.from_validation_error(exc) from None\n        except V2ValidationError as exc:\n            # We capture the pydantic exception and raise our own because the pydantic\n            # exception is not picklable when using a cythonized pydantic installation\n            raise ParameterTypeError.from_validation_error(exc) from None\n\n        # Get the updated parameter dict with cast values from the model\n        cast_parameters = {\n            k: v\n            for k, v in model._iter()\n            if k in model.__fields_set__ or model.__fields__[k].default_factory\n        }\n        return cast_parameters\n\n    def serialize_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n        \"\"\"\n        Convert parameters to a serializable form.\n\n        Uses FastAPI's `jsonable_encoder` to convert to JSON compatible objects without\n        converting everything directly to a string. 
This maintains basic types like\n        integers during API roundtrips.\n        \"\"\"\n        serialized_parameters = {}\n        for key, value in parameters.items():\n            try:\n                serialized_parameters[key] = jsonable_encoder(value)\n            except (TypeError, ValueError):\n                logger.debug(\n                    f\"Parameter {key!r} for flow {self.name!r} is of unserializable \"\n                    f\"type {type(value).__name__!r} and will not be stored \"\n                    \"in the backend.\"\n                )\n                serialized_parameters[key] = f\"<{type(value).__name__}>\"\n        return serialized_parameters\n\n    @sync_compatible\n    @deprecated_parameter(\n        \"schedule\",\n        start_date=\"Mar 2024\",\n        when=lambda p: p is not None,\n        help=\"Use `schedules` instead.\",\n    )\n    @deprecated_parameter(\n        \"is_schedule_active\",\n        start_date=\"Mar 2024\",\n        when=lambda p: p is not None,\n        help=\"Use `paused` instead.\",\n    )\n    async def to_deployment(\n        self,\n        name: str,\n        interval: Optional[\n            Union[\n                Iterable[Union[int, float, datetime.timedelta]],\n                int,\n                float,\n                datetime.timedelta,\n            ]\n        ] = None,\n        cron: Optional[Union[Iterable[str], str]] = None,\n        rrule: Optional[Union[Iterable[str], str]] = None,\n        paused: Optional[bool] = None,\n        schedules: Optional[List[\"FlexibleScheduleList\"]] = None,\n        schedule: Optional[SCHEDULE_TYPES] = None,\n        is_schedule_active: Optional[bool] = None,\n        parameters: Optional[dict] = None,\n        triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n        description: Optional[str] = None,\n        tags: Optional[List[str]] = None,\n        version: Optional[str] = None,\n        enforce_parameter_schema: bool = False,\n        work_pool_name: Optional[str] = None,\n        work_queue_name: Optional[str] = None,\n        job_variables: Optional[Dict[str, Any]] = None,\n        entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n    ) -> \"RunnerDeployment\":\n        \"\"\"\n        Creates a runner deployment object for this flow.\n\n        Args:\n            name: The name to give the created deployment.\n            interval: An interval on which to execute the new deployment. Accepts either a number\n                or a timedelta object. If a number is given, it will be interpreted as seconds.\n            cron: A cron schedule of when to execute runs of this deployment.\n            rrule: An rrule schedule of when to execute runs of this deployment.\n            paused: Whether or not to set this deployment as paused.\n            schedules: A list of schedule objects defining when to execute runs of this deployment.\n                Used to define multiple schedules or additional scheduling options such as `timezone`.\n            schedule: A schedule object defining when to execute runs of this deployment.\n            is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n                not provided when creating a deployment, the schedule will be set as active. 
If not\n                provided when updating a deployment, the schedule's activation will not be changed.\n            parameters: A dictionary of default parameter values to pass to runs of this deployment.\n            triggers: A list of triggers that will kick off runs of this deployment.\n            description: A description for the created deployment. Defaults to the flow's\n                description if not provided.\n            tags: A list of tags to associate with the created deployment for organizational\n                purposes.\n            version: A version for the created deployment. Defaults to the flow's version.\n            enforce_parameter_schema: Whether or not the Prefect API should enforce the\n                parameter schema for the created deployment.\n            work_pool_name: The name of the work pool to use for this deployment.\n            work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n                If not provided the default work queue for the work pool will be used.\n            job_variables: Settings used to override the values specified default base job template\n                of the chosen work pool. Refer to the base job template of the chosen work pool for\n            entrypoint_type: Type of entrypoint to use for the deployment. When using a module path\n                entrypoint, ensure that the module will be importable in the execution environment.\n\n        Examples:\n            Prepare two deployments and serve them:\n\n            ```python\n            from prefect import flow, serve\n\n            @flow\n            def my_flow(name):\n                print(f\"hello {name}\")\n\n            @flow\n            def my_other_flow(name):\n                print(f\"goodbye {name}\")\n\n            if __name__ == \"__main__\":\n                hello_deploy = my_flow.to_deployment(\"hello\", tags=[\"dev\"])\n                bye_deploy = my_other_flow.to_deployment(\"goodbye\", tags=[\"dev\"])\n                serve(hello_deploy, bye_deploy)\n            ```\n        \"\"\"\n        from prefect.deployments.runner import RunnerDeployment\n\n        if not name.endswith(\".py\"):\n            raise_on_name_with_banned_characters(name)\n        if self._storage and self._entrypoint:\n            return await RunnerDeployment.from_storage(\n                storage=self._storage,\n                entrypoint=self._entrypoint,\n                name=name,\n                interval=interval,\n                cron=cron,\n                rrule=rrule,\n                paused=paused,\n                schedules=schedules,\n                schedule=schedule,\n                is_schedule_active=is_schedule_active,\n                tags=tags,\n                triggers=triggers,\n                parameters=parameters or {},\n                description=description,\n                version=version,\n                enforce_parameter_schema=enforce_parameter_schema,\n                work_pool_name=work_pool_name,\n                work_queue_name=work_queue_name,\n                job_variables=job_variables,\n            )\n        else:\n            return RunnerDeployment.from_flow(\n                self,\n                name=name,\n                interval=interval,\n                cron=cron,\n                rrule=rrule,\n                paused=paused,\n                schedules=schedules,\n                schedule=schedule,\n                is_schedule_active=is_schedule_active,\n                
tags=tags,\n                triggers=triggers,\n                parameters=parameters or {},\n                description=description,\n                version=version,\n                enforce_parameter_schema=enforce_parameter_schema,\n                work_pool_name=work_pool_name,\n                work_queue_name=work_queue_name,\n                job_variables=job_variables,\n                entrypoint_type=entrypoint_type,\n            )\n\n    @sync_compatible\n    async def serve(\n        self,\n        name: Optional[str] = None,\n        interval: Optional[\n            Union[\n                Iterable[Union[int, float, datetime.timedelta]],\n                int,\n                float,\n                datetime.timedelta,\n            ]\n        ] = None,\n        cron: Optional[Union[Iterable[str], str]] = None,\n        rrule: Optional[Union[Iterable[str], str]] = None,\n        paused: Optional[bool] = None,\n        schedules: Optional[List[\"FlexibleScheduleList\"]] = None,\n        schedule: Optional[SCHEDULE_TYPES] = None,\n        is_schedule_active: Optional[bool] = None,\n        triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n        parameters: Optional[dict] = None,\n        description: Optional[str] = None,\n        tags: Optional[List[str]] = None,\n        version: Optional[str] = None,\n        enforce_parameter_schema: bool = False,\n        pause_on_shutdown: bool = True,\n        print_starting_message: bool = True,\n        limit: Optional[int] = None,\n        webserver: bool = False,\n        entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n    ):\n        \"\"\"\n        Creates a deployment for this flow and starts a runner to monitor for scheduled work.\n\n        Args:\n            name: The name to give the created deployment. Defaults to the name of the flow.\n            interval: An interval on which to execute the deployment. Accepts a number or a\n                timedelta object to create a single schedule. If a number is given, it will be\n                interpreted as seconds. Also accepts an iterable of numbers or timedelta to create\n                multiple schedules.\n            cron: A cron schedule string of when to execute runs of this deployment.\n                Also accepts an iterable of cron schedule strings to create multiple schedules.\n            rrule: An rrule schedule string of when to execute runs of this deployment.\n                Also accepts an iterable of rrule schedule strings to create multiple schedules.\n            triggers: A list of triggers that will kick off runs of this deployment.\n            paused: Whether or not to set this deployment as paused.\n            schedules: A list of schedule objects defining when to execute runs of this deployment.\n                Used to define multiple schedules or additional scheduling options like `timezone`.\n            schedule: A schedule object defining when to execute runs of this deployment. Used to\n                define additional scheduling options such as `timezone`.\n            is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n                not provided when creating a deployment, the schedule will be set as active. 
If not\n                provided when updating a deployment, the schedule's activation will not be changed.\n            parameters: A dictionary of default parameter values to pass to runs of this deployment.\n            description: A description for the created deployment. Defaults to the flow's\n                description if not provided.\n            tags: A list of tags to associate with the created deployment for organizational\n                purposes.\n            version: A version for the created deployment. Defaults to the flow's version.\n            enforce_parameter_schema: Whether or not the Prefect API should enforce the\n                parameter schema for the created deployment.\n            pause_on_shutdown: If True, provided schedule will be paused when the serve function is stopped.\n                If False, the schedules will continue running.\n            print_starting_message: Whether or not to print the starting message when flow is served.\n            limit: The maximum number of runs that can be executed concurrently.\n            webserver: Whether or not to start a monitoring webserver for this flow.\n            entrypoint_type: Type of entrypoint to use for the deployment. When using a module path\n                entrypoint, ensure that the module will be importable in the execution environment.\n\n        Examples:\n            Serve a flow:\n\n            ```python\n            from prefect import flow\n\n            @flow\n            def my_flow(name):\n                print(f\"hello {name}\")\n\n            if __name__ == \"__main__\":\n                my_flow.serve(\"example-deployment\")\n            ```\n\n            Serve a flow and run it every hour:\n\n            ```python\n            from prefect import flow\n\n            @flow\n            def my_flow(name):\n                print(f\"hello {name}\")\n\n            if __name__ == \"__main__\":\n                my_flow.serve(\"example-deployment\", interval=3600)\n            ```\n        \"\"\"\n        from prefect.runner import Runner\n\n        if not name:\n            name = self.name\n        else:\n            # Handling for my_flow.serve(__file__)\n            # Will set name to name of file where my_flow.serve() without the extension\n            # Non filepath strings will pass through unchanged\n            name = Path(name).stem\n\n        runner = Runner(name=name, pause_on_shutdown=pause_on_shutdown, limit=limit)\n        deployment_id = await runner.add_flow(\n            self,\n            name=name,\n            triggers=triggers,\n            interval=interval,\n            cron=cron,\n            rrule=rrule,\n            paused=paused,\n            schedules=schedules,\n            schedule=schedule,\n            is_schedule_active=is_schedule_active,\n            parameters=parameters,\n            description=description,\n            tags=tags,\n            version=version,\n            enforce_parameter_schema=enforce_parameter_schema,\n            entrypoint_type=entrypoint_type,\n        )\n        if print_starting_message:\n            help_message = (\n                f\"[green]Your flow {self.name!r} is being served and polling for\"\n                \" scheduled runs!\\n[/]\\nTo trigger a run for this flow, use the\"\n                \" following command:\\n[blue]\\n\\t$ prefect deployment run\"\n                f\" '{self.name}/{name}'\\n[/]\"\n            )\n            if PREFECT_UI_URL:\n                help_message += (\n                    
\"\\nYou can also run your flow via the Prefect UI:\"\n                    f\" [blue]{PREFECT_UI_URL.value()}/deployments/deployment/{deployment_id}[/]\\n\"\n                )\n\n            console = Console()\n            console.print(help_message, soft_wrap=True)\n        await runner.start(webserver=webserver)\n\n    @classmethod\n    @sync_compatible\n    async def from_source(\n        cls: Type[F],\n        source: Union[str, RunnerStorage, ReadableDeploymentStorage],\n        entrypoint: str,\n    ) -> F:\n        \"\"\"\n        Loads a flow from a remote source.\n\n        Args:\n            source: Either a URL to a git repository or a storage object.\n            entrypoint:  The path to a file containing a flow and the name of the flow function in\n                the format `./path/to/file.py:flow_func_name`.\n\n        Returns:\n            A new `Flow` instance.\n\n        Examples:\n            Load a flow from a public git repository:\n\n\n            ```python\n            from prefect import flow\n            from prefect.runner.storage import GitRepository\n            from prefect.blocks.system import Secret\n\n            my_flow = flow.from_source(\n                source=\"https://github.com/org/repo.git\",\n                entrypoint=\"flows.py:my_flow\",\n            )\n\n            my_flow()\n            ```\n\n            Load a flow from a private git repository using an access token stored in a `Secret` block:\n\n            ```python\n            from prefect import flow\n            from prefect.runner.storage import GitRepository\n            from prefect.blocks.system import Secret\n\n            my_flow = flow.from_source(\n                source=GitRepository(\n                    url=\"https://github.com/org/repo.git\",\n                    credentials={\"access_token\": Secret.load(\"github-access-token\")}\n                ),\n                entrypoint=\"flows.py:my_flow\",\n            )\n\n            my_flow()\n            ```\n        \"\"\"\n        if isinstance(source, str):\n            storage = create_storage_from_url(source)\n        elif isinstance(source, RunnerStorage):\n            storage = source\n        elif hasattr(source, \"get_directory\"):\n            storage = BlockStorageAdapter(source)\n        else:\n            raise TypeError(\n                f\"Unsupported source type {type(source).__name__!r}. 
Please provide a\"\n                \" URL to remote storage or a storage object.\"\n            )\n        with tempfile.TemporaryDirectory() as tmpdir:\n            storage.set_base_path(Path(tmpdir))\n            await storage.pull_code()\n\n            full_entrypoint = str(storage.destination / entrypoint)\n            flow: \"Flow\" = await from_async.wait_for_call_in_new_thread(\n                create_call(load_flow_from_entrypoint, full_entrypoint)\n            )\n            flow._storage = storage\n            flow._entrypoint = entrypoint\n\n        return flow\n\n    @sync_compatible\n    async def deploy(\n        self,\n        name: str,\n        work_pool_name: Optional[str] = None,\n        image: Optional[Union[str, DeploymentImage]] = None,\n        build: bool = True,\n        push: bool = True,\n        work_queue_name: Optional[str] = None,\n        job_variables: Optional[dict] = None,\n        interval: Optional[Union[int, float, datetime.timedelta]] = None,\n        cron: Optional[str] = None,\n        rrule: Optional[str] = None,\n        paused: Optional[bool] = None,\n        schedules: Optional[List[MinimalDeploymentSchedule]] = None,\n        schedule: Optional[SCHEDULE_TYPES] = None,\n        is_schedule_active: Optional[bool] = None,\n        triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n        parameters: Optional[dict] = None,\n        description: Optional[str] = None,\n        tags: Optional[List[str]] = None,\n        version: Optional[str] = None,\n        enforce_parameter_schema: bool = False,\n        entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n        print_next_steps: bool = True,\n        ignore_warnings: bool = False,\n    ) -> UUID:\n        \"\"\"\n        Deploys a flow to run on dynamic infrastructure via a work pool.\n\n        By default, calling this method will build a Docker image for the flow, push it to a registry,\n        and create a deployment via the Prefect API that will run the flow on the given schedule.\n\n        If you want to use an existing image, you can pass `build=False` to skip building and pushing\n        an image.\n\n        Args:\n            name: The name to give the created deployment.\n            work_pool_name: The name of the work pool to use for this deployment. Defaults to\n                the value of `PREFECT_DEFAULT_WORK_POOL_NAME`.\n            image: The name of the Docker image to build, including the registry and\n                repository. Pass a DeploymentImage instance to customize the Dockerfile used\n                and build arguments.\n            build: Whether or not to build a new image for the flow. If False, the provided\n                image will be used as-is and pulled at runtime.\n            push: Whether or not to skip pushing the built image to a registry.\n            work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n                If not provided the default work queue for the work pool will be used.\n            job_variables: Settings used to override the values specified default base job template\n                of the chosen work pool. Refer to the base job template of the chosen work pool for\n                available settings.\n            interval: An interval on which to execute the deployment. Accepts a number or a\n                timedelta object to create a single schedule. If a number is given, it will be\n                interpreted as seconds. 
Also accepts an iterable of numbers or timedelta to create\n                multiple schedules.\n            cron: A cron schedule string of when to execute runs of this deployment.\n                Also accepts an iterable of cron schedule strings to create multiple schedules.\n            rrule: An rrule schedule string of when to execute runs of this deployment.\n                Also accepts an iterable of rrule schedule strings to create multiple schedules.\n            triggers: A list of triggers that will kick off runs of this deployment.\n            paused: Whether or not to set this deployment as paused.\n            schedules: A list of schedule objects defining when to execute runs of this deployment.\n                Used to define multiple schedules or additional scheduling options like `timezone`.\n            schedule: A schedule object defining when to execute runs of this deployment. Used to\n                define additional scheduling options like `timezone`.\n            is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n                not provided when creating a deployment, the schedule will be set as active. If not\n                provided when updating a deployment, the schedule's activation will not be changed.\n            parameters: A dictionary of default parameter values to pass to runs of this deployment.\n            description: A description for the created deployment. Defaults to the flow's\n                description if not provided.\n            tags: A list of tags to associate with the created deployment for organizational\n                purposes.\n            version: A version for the created deployment. Defaults to the flow's version.\n            enforce_parameter_schema: Whether or not the Prefect API should enforce the\n                parameter schema for the created deployment.\n            entrypoint_type: Type of entrypoint to use for the deployment. 
When using a module path\n                entrypoint, ensure that the module will be importable in the execution environment.\n            print_next_steps_message: Whether or not to print a message with next steps\n                after deploying the deployments.\n            ignore_warnings: Whether or not to ignore warnings about the work pool type.\n\n        Returns:\n            The ID of the created/updated deployment.\n\n        Examples:\n            Deploy a local flow to a work pool:\n\n            ```python\n            from prefect import flow\n\n            @flow\n            def my_flow(name):\n                print(f\"hello {name}\")\n\n            if __name__ == \"__main__\":\n                my_flow.deploy(\n                    \"example-deployment\",\n                    work_pool_name=\"my-work-pool\",\n                    image=\"my-repository/my-image:dev\",\n                )\n            ```\n\n            Deploy a remotely stored flow to a work pool:\n\n            ```python\n            from prefect import flow\n\n            if __name__ == \"__main__\":\n                flow.from_source(\n                    source=\"https://github.com/org/repo.git\",\n                    entrypoint=\"flows.py:my_flow\",\n                ).deploy(\n                    \"example-deployment\",\n                    work_pool_name=\"my-work-pool\",\n                    image=\"my-repository/my-image:dev\",\n                )\n            ```\n        \"\"\"\n        work_pool_name = work_pool_name or PREFECT_DEFAULT_WORK_POOL_NAME.value()\n\n        try:\n            async with get_client() as client:\n                work_pool = await client.read_work_pool(work_pool_name)\n        except ObjectNotFound as exc:\n            raise ValueError(\n                f\"Could not find work pool {work_pool_name!r}. 
Please create it before\"\n                \" deploying this flow.\"\n            ) from exc\n\n        deployment = await self.to_deployment(\n            name=name,\n            interval=interval,\n            cron=cron,\n            rrule=rrule,\n            schedules=schedules,\n            paused=paused,\n            schedule=schedule,\n            is_schedule_active=is_schedule_active,\n            triggers=triggers,\n            parameters=parameters,\n            description=description,\n            tags=tags,\n            version=version,\n            enforce_parameter_schema=enforce_parameter_schema,\n            work_queue_name=work_queue_name,\n            job_variables=job_variables,\n            entrypoint_type=entrypoint_type,\n        )\n\n        deployment_ids = await deploy(\n            deployment,\n            work_pool_name=work_pool_name,\n            image=image,\n            build=build,\n            push=push,\n            print_next_steps_message=False,\n            ignore_warnings=ignore_warnings,\n        )\n\n        if print_next_steps:\n            console = Console()\n            if not work_pool.is_push_pool and not work_pool.is_managed_pool:\n                console.print(\n                    \"\\nTo execute flow runs from this deployment, start a worker in a\"\n                    \" separate terminal that pulls work from the\"\n                    f\" {work_pool_name!r} work pool:\"\n                )\n                console.print(\n                    f\"\\n\\t$ prefect worker start --pool {work_pool_name!r}\",\n                    style=\"blue\",\n                )\n            console.print(\n                \"\\nTo schedule a run for this deployment, use the following command:\"\n            )\n            console.print(\n                f\"\\n\\t$ prefect deployment run '{self.name}/{name}'\\n\",\n                style=\"blue\",\n            )\n            if PREFECT_UI_URL:\n                message = (\n                    \"\\nYou can also run your flow via the Prefect UI:\"\n                    f\" [blue]{PREFECT_UI_URL.value()}/deployments/deployment/{deployment_ids[0]}[/]\\n\"\n                )\n                console.print(message, soft_wrap=True)\n\n        return deployment_ids[0]\n\n    @overload\n    def __call__(self: \"Flow[P, NoReturn]\", *args: P.args, **kwargs: P.kwargs) -> None:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def __call__(\n        self: \"Flow[P, Coroutine[Any, Any, T]]\", *args: P.args, **kwargs: P.kwargs\n    ) -> Awaitable[T]:\n        ...\n\n    @overload\n    def __call__(\n        self: \"Flow[P, T]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> T:\n        ...\n\n    @overload\n    def __call__(\n        self: \"Flow[P, T]\",\n        *args: P.args,\n        return_state: Literal[True],\n        **kwargs: P.kwargs,\n    ) -> State[T]:\n        ...\n\n    def __call__(\n        self,\n        *args: \"P.args\",\n        return_state: bool = False,\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: \"P.kwargs\",\n    ):\n        \"\"\"\n        Run the flow and return its result.\n\n\n        Flow parameter values must be serializable by Pydantic.\n\n        If writing an async flow, this call must be awaited.\n\n        This will create a new flow run in the API.\n\n        Args:\n            *args: Arguments to run the 
flow with.\n            return_state: Return a Prefect State containing the result of the\n                flow run.\n            wait_for: Upstream task futures to wait for before starting the flow if called as a subflow\n            **kwargs: Keyword arguments to run the flow with.\n\n        Returns:\n            If `return_state` is False, returns the result of the flow run.\n            If `return_state` is True, returns the result of the flow run\n                wrapped in a Prefect State which provides error handling.\n\n        Examples:\n\n            Define a flow\n\n            >>> @flow\n            >>> def my_flow(name):\n            >>>     print(f\"hello {name}\")\n            >>>     return f\"goodbye {name}\"\n\n            Run a flow\n\n            >>> my_flow(\"marvin\")\n            hello marvin\n            \"goodbye marvin\"\n\n            Run a flow with additional tags\n\n            >>> from prefect import tags\n            >>> with tags(\"db\", \"blue\"):\n            >>>     my_flow(\"foo\")\n        \"\"\"\n        from prefect.engine import enter_flow_run_engine_from_flow_call\n\n        # Convert the call args/kwargs to a parameter dict\n        parameters = get_call_parameters(self.fn, args, kwargs)\n\n        return_type = \"state\" if return_state else \"result\"\n\n        task_viz_tracker = get_task_viz_tracker()\n        if task_viz_tracker:\n            # this is a subflow, for now return a single task and do not go further\n            # we can add support for exploring subflows for tasks in the future.\n            return track_viz_task(self.isasync, self.name, parameters)\n\n        if PREFECT_EXPERIMENTAL_ENABLE_NEW_ENGINE.value():\n            from prefect.new_flow_engine import run_flow, run_flow_sync\n\n            run_kwargs = dict(\n                flow=self,\n                parameters=parameters,\n                wait_for=wait_for,\n                return_type=return_type,\n            )\n            if self.isasync:\n                # this returns an awaitable coroutine\n                return run_flow(**run_kwargs)\n            else:\n                return run_flow_sync(**run_kwargs)\n\n        return enter_flow_run_engine_from_flow_call(\n            self,\n            parameters,\n            wait_for=wait_for,\n            return_type=return_type,\n        )\n\n    @overload\n    def _run(self: \"Flow[P, NoReturn]\", *args: P.args, **kwargs: P.kwargs) -> State[T]:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def _run(\n        self: \"Flow[P, Coroutine[Any, Any, T]]\", *args: P.args, **kwargs: P.kwargs\n    ) -> Awaitable[T]:\n        ...\n\n    @overload\n    def _run(self: \"Flow[P, T]\", *args: P.args, **kwargs: P.kwargs) -> State[T]:\n        ...\n\n    def _run(\n        self,\n        *args: \"P.args\",\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: \"P.kwargs\",\n    ):\n        \"\"\"\n        Run the flow and return its final state.\n\n        Examples:\n\n            Run a flow and get the returned result\n\n            >>> state = my_flow._run(\"marvin\")\n            >>> state.result()\n           \"goodbye marvin\"\n        \"\"\"\n        from prefect.engine import enter_flow_run_engine_from_flow_call\n\n        # Convert the call args/kwargs to a parameter dict\n        parameters = get_call_parameters(self.fn, args, kwargs)\n\n        return 
enter_flow_run_engine_from_flow_call(\n            self,\n            parameters,\n            wait_for=wait_for,\n            return_type=\"state\",\n        )\n\n    @sync_compatible\n    async def visualize(self, *args, **kwargs):\n        \"\"\"\n        Generates a graphviz object representing the current flow. In IPython notebooks,\n        it's rendered inline, otherwise in a new window as a PNG.\n\n        Raises:\n            - ImportError: If `graphviz` isn't installed.\n            - GraphvizExecutableNotFoundError: If the `dot` executable isn't found.\n            - FlowVisualizationError: If the flow can't be visualized for any other reason.\n        \"\"\"\n        if not PREFECT_UNIT_TEST_MODE:\n            warnings.warn(\n                \"`flow.visualize()` will execute code inside of your flow that is not\"\n                \" decorated with `@task` or `@flow`.\"\n            )\n\n        try:\n            with TaskVizTracker() as tracker:\n                if self.isasync:\n                    await self.fn(*args, **kwargs)\n                else:\n                    self.fn(*args, **kwargs)\n\n                graph = build_task_dependencies(tracker)\n\n                visualize_task_dependencies(graph, self.name)\n\n        except GraphvizImportError:\n            raise\n        except GraphvizExecutableNotFoundError:\n            raise\n        except VisualizationUnsupportedError:\n            raise\n        except FlowVisualizationError:\n            raise\n        except Exception as e:\n            msg = (\n                \"It's possible you are trying to visualize a flow that contains \"\n                \"code that directly interacts with the result of a task\"\n                \" inside of the flow. \\nTry passing a `viz_return_value` \"\n                \"to the task decorator, e.g. `@task(viz_return_value=[1, 2, 3]).`\"\n            )\n\n            new_exception = type(e)(str(e) + \"\\n\" + msg)\n            # Copy traceback information from the original exception\n            new_exception.__traceback__ = e.__traceback__\n            raise new_exception\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.deploy","title":"deploy async","text":"

Deploys a flow to run on dynamic infrastructure via a work pool.

By default, calling this method will build a Docker image for the flow, push it to a registry, and create a deployment via the Prefect API that will run the flow on the given schedule.

If you want to use an existing image, you can pass build=False to skip building and pushing an image.
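For example, a minimal sketch of deploying with a pre-built image (the work pool name and image reference below are placeholders) might look like this:

```python
from prefect import flow

@flow
def my_flow(name):
    print(f"hello {name}")

if __name__ == "__main__":
    # Reference an image that already exists in the registry; with build=False
    # (and push=False) Prefect neither rebuilds nor pushes it, and the work pool
    # simply pulls it at runtime.
    my_flow.deploy(
        "existing-image-deployment",
        work_pool_name="my-docker-pool",                     # placeholder work pool
        image="registry.example.com/team/my-image:2024.05",  # pre-built image
        build=False,
        push=False,
    )
```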

Parameters:

Name Type Description Default name str

The name to give the created deployment.

required work_pool_name Optional[str]

The name of the work pool to use for this deployment. Defaults to the value of PREFECT_DEFAULT_WORK_POOL_NAME.

None image Optional[Union[str, DeploymentImage]]

The name of the Docker image to build, including the registry and repository. Pass a DeploymentImage instance to customize the Dockerfile used and build arguments.

None build bool

Whether or not to build a new image for the flow. If False, the provided image will be used as-is and pulled at runtime.

True push bool

Whether or not to skip pushing the built image to a registry.

True work_queue_name Optional[str]

The name of the work queue to use for this deployment's scheduled runs. If not provided the default work queue for the work pool will be used.

None job_variables Optional[dict]

Settings used to override the values specified default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.

None interval Optional[Union[int, float, timedelta]]

An interval on which to execute the deployment. Accepts a number or a timedelta object to create a single schedule. If a number is given, it will be interpreted as seconds. Also accepts an iterable of numbers or timedelta to create multiple schedules.

None cron Optional[str]

A cron schedule string of when to execute runs of this deployment. Also accepts an iterable of cron schedule strings to create multiple schedules.

None rrule Optional[str]

An rrule schedule string of when to execute runs of this deployment. Also accepts an iterable of rrule schedule strings to create multiple schedules.

None triggers Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]]

A list of triggers that will kick off runs of this deployment.

None paused Optional[bool]

Whether or not to set this deployment as paused.

None schedules Optional[List[MinimalDeploymentSchedule]]

A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like timezone.

None schedule Optional[SCHEDULE_TYPES]

A schedule object defining when to execute runs of this deployment. Used to define additional scheduling options like timezone.

None is_schedule_active Optional[bool]

Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.

None parameters Optional[dict]

A dictionary of default parameter values to pass to runs of this deployment.

None description Optional[str]

A description for the created deployment. Defaults to the flow's description if not provided.

None tags Optional[List[str]]

A list of tags to associate with the created deployment for organizational purposes.

None version Optional[str]

A version for the created deployment. Defaults to the flow's version.

None enforce_parameter_schema bool

Whether or not the Prefect API should enforce the parameter schema for the created deployment.

False entrypoint_type EntrypointType

Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.

FILE_PATH print_next_steps bool

Whether or not to print a message with next steps after deploying the deployment.

True ignore_warnings bool

Whether or not to ignore warnings about the work pool type.

False

Returns:

Type Description UUID

The ID of the created/updated deployment.

Examples:

Deploy a local flow to a work pool:

from prefect import flow\n\n@flow\ndef my_flow(name):\n    print(f\"hello {name}\")\n\nif __name__ == \"__main__\":\n    my_flow.deploy(\n        \"example-deployment\",\n        work_pool_name=\"my-work-pool\",\n        image=\"my-repository/my-image:dev\",\n    )\n

Deploy a remotely stored flow to a work pool:

from prefect import flow\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        source=\"https://github.com/org/repo.git\",\n        entrypoint=\"flows.py:my_flow\",\n    ).deploy(\n        \"example-deployment\",\n        work_pool_name=\"my-work-pool\",\n        image=\"my-repository/my-image:dev\",\n    )\n
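
As a minimal sketch (not part of the original docstring), deploy using an image that already exists in a registry by passing build=False and push=False; the image and work pool names are placeholders:

from prefect import flow\n\n@flow\ndef my_flow(name):\n    print(f\"hello {name}\")\n\nif __name__ == \"__main__\":\n    # The image is assumed to already exist in the registry, so skip building and pushing\n    my_flow.deploy(\n        \"example-deployment\",\n        work_pool_name=\"my-work-pool\",\n        image=\"my-repository/my-image:latest\",\n        build=False,\n        push=False,\n    )\n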
Source code in prefect/flows.py
@sync_compatible\nasync def deploy(\n    self,\n    name: str,\n    work_pool_name: Optional[str] = None,\n    image: Optional[Union[str, DeploymentImage]] = None,\n    build: bool = True,\n    push: bool = True,\n    work_queue_name: Optional[str] = None,\n    job_variables: Optional[dict] = None,\n    interval: Optional[Union[int, float, datetime.timedelta]] = None,\n    cron: Optional[str] = None,\n    rrule: Optional[str] = None,\n    paused: Optional[bool] = None,\n    schedules: Optional[List[MinimalDeploymentSchedule]] = None,\n    schedule: Optional[SCHEDULE_TYPES] = None,\n    is_schedule_active: Optional[bool] = None,\n    triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n    parameters: Optional[dict] = None,\n    description: Optional[str] = None,\n    tags: Optional[List[str]] = None,\n    version: Optional[str] = None,\n    enforce_parameter_schema: bool = False,\n    entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n    print_next_steps: bool = True,\n    ignore_warnings: bool = False,\n) -> UUID:\n    \"\"\"\n    Deploys a flow to run on dynamic infrastructure via a work pool.\n\n    By default, calling this method will build a Docker image for the flow, push it to a registry,\n    and create a deployment via the Prefect API that will run the flow on the given schedule.\n\n    If you want to use an existing image, you can pass `build=False` to skip building and pushing\n    an image.\n\n    Args:\n        name: The name to give the created deployment.\n        work_pool_name: The name of the work pool to use for this deployment. Defaults to\n            the value of `PREFECT_DEFAULT_WORK_POOL_NAME`.\n        image: The name of the Docker image to build, including the registry and\n            repository. Pass a DeploymentImage instance to customize the Dockerfile used\n            and build arguments.\n        build: Whether or not to build a new image for the flow. If False, the provided\n            image will be used as-is and pulled at runtime.\n        push: Whether or not to skip pushing the built image to a registry.\n        work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n            If not provided the default work queue for the work pool will be used.\n        job_variables: Settings used to override the values specified default base job template\n            of the chosen work pool. Refer to the base job template of the chosen work pool for\n            available settings.\n        interval: An interval on which to execute the deployment. Accepts a number or a\n            timedelta object to create a single schedule. If a number is given, it will be\n            interpreted as seconds. 
Also accepts an iterable of numbers or timedelta to create\n            multiple schedules.\n        cron: A cron schedule string of when to execute runs of this deployment.\n            Also accepts an iterable of cron schedule strings to create multiple schedules.\n        rrule: An rrule schedule string of when to execute runs of this deployment.\n            Also accepts an iterable of rrule schedule strings to create multiple schedules.\n        triggers: A list of triggers that will kick off runs of this deployment.\n        paused: Whether or not to set this deployment as paused.\n        schedules: A list of schedule objects defining when to execute runs of this deployment.\n            Used to define multiple schedules or additional scheduling options like `timezone`.\n        schedule: A schedule object defining when to execute runs of this deployment. Used to\n            define additional scheduling options like `timezone`.\n        is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n            not provided when creating a deployment, the schedule will be set as active. If not\n            provided when updating a deployment, the schedule's activation will not be changed.\n        parameters: A dictionary of default parameter values to pass to runs of this deployment.\n        description: A description for the created deployment. Defaults to the flow's\n            description if not provided.\n        tags: A list of tags to associate with the created deployment for organizational\n            purposes.\n        version: A version for the created deployment. Defaults to the flow's version.\n        enforce_parameter_schema: Whether or not the Prefect API should enforce the\n            parameter schema for the created deployment.\n        entrypoint_type: Type of entrypoint to use for the deployment. 
When using a module path\n            entrypoint, ensure that the module will be importable in the execution environment.\n        print_next_steps_message: Whether or not to print a message with next steps\n            after deploying the deployments.\n        ignore_warnings: Whether or not to ignore warnings about the work pool type.\n\n    Returns:\n        The ID of the created/updated deployment.\n\n    Examples:\n        Deploy a local flow to a work pool:\n\n        ```python\n        from prefect import flow\n\n        @flow\n        def my_flow(name):\n            print(f\"hello {name}\")\n\n        if __name__ == \"__main__\":\n            my_flow.deploy(\n                \"example-deployment\",\n                work_pool_name=\"my-work-pool\",\n                image=\"my-repository/my-image:dev\",\n            )\n        ```\n\n        Deploy a remotely stored flow to a work pool:\n\n        ```python\n        from prefect import flow\n\n        if __name__ == \"__main__\":\n            flow.from_source(\n                source=\"https://github.com/org/repo.git\",\n                entrypoint=\"flows.py:my_flow\",\n            ).deploy(\n                \"example-deployment\",\n                work_pool_name=\"my-work-pool\",\n                image=\"my-repository/my-image:dev\",\n            )\n        ```\n    \"\"\"\n    work_pool_name = work_pool_name or PREFECT_DEFAULT_WORK_POOL_NAME.value()\n\n    try:\n        async with get_client() as client:\n            work_pool = await client.read_work_pool(work_pool_name)\n    except ObjectNotFound as exc:\n        raise ValueError(\n            f\"Could not find work pool {work_pool_name!r}. Please create it before\"\n            \" deploying this flow.\"\n        ) from exc\n\n    deployment = await self.to_deployment(\n        name=name,\n        interval=interval,\n        cron=cron,\n        rrule=rrule,\n        schedules=schedules,\n        paused=paused,\n        schedule=schedule,\n        is_schedule_active=is_schedule_active,\n        triggers=triggers,\n        parameters=parameters,\n        description=description,\n        tags=tags,\n        version=version,\n        enforce_parameter_schema=enforce_parameter_schema,\n        work_queue_name=work_queue_name,\n        job_variables=job_variables,\n        entrypoint_type=entrypoint_type,\n    )\n\n    deployment_ids = await deploy(\n        deployment,\n        work_pool_name=work_pool_name,\n        image=image,\n        build=build,\n        push=push,\n        print_next_steps_message=False,\n        ignore_warnings=ignore_warnings,\n    )\n\n    if print_next_steps:\n        console = Console()\n        if not work_pool.is_push_pool and not work_pool.is_managed_pool:\n            console.print(\n                \"\\nTo execute flow runs from this deployment, start a worker in a\"\n                \" separate terminal that pulls work from the\"\n                f\" {work_pool_name!r} work pool:\"\n            )\n            console.print(\n                f\"\\n\\t$ prefect worker start --pool {work_pool_name!r}\",\n                style=\"blue\",\n            )\n        console.print(\n            \"\\nTo schedule a run for this deployment, use the following command:\"\n        )\n        console.print(\n            f\"\\n\\t$ prefect deployment run '{self.name}/{name}'\\n\",\n            style=\"blue\",\n        )\n        if PREFECT_UI_URL:\n            message = (\n                \"\\nYou can also run your flow via the Prefect UI:\"\n                f\" 
[blue]{PREFECT_UI_URL.value()}/deployments/deployment/{deployment_ids[0]}[/]\\n\"\n            )\n            console.print(message, soft_wrap=True)\n\n    return deployment_ids[0]\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.from_source","title":"from_source async classmethod","text":"

Loads a flow from a remote source.

Parameters:

Name Type Description Default source Union[str, RunnerStorage, ReadableDeploymentStorage]

Either a URL to a git repository or a storage object.

required entrypoint str

The path to a file containing a flow and the name of the flow function in the format ./path/to/file.py:flow_func_name.

required

Returns:

Type Description F

A new Flow instance.

Examples:

Load a flow from a public git repository:

from prefect import flow\nfrom prefect.runner.storage import GitRepository\nfrom prefect.blocks.system import Secret\n\nmy_flow = flow.from_source(\n    source=\"https://github.com/org/repo.git\",\n    entrypoint=\"flows.py:my_flow\",\n)\n\nmy_flow()\n

Load a flow from a private git repository using an access token stored in a Secret block:

from prefect import flow\nfrom prefect.runner.storage import GitRepository\nfrom prefect.blocks.system import Secret\n\nmy_flow = flow.from_source(\n    source=GitRepository(\n        url=\"https://github.com/org/repo.git\",\n        credentials={\"access_token\": Secret.load(\"github-access-token\")}\n    ),\n    entrypoint=\"flows.py:my_flow\",\n)\n\nmy_flow()\n
Source code in prefect/flows.py
@classmethod\n@sync_compatible\nasync def from_source(\n    cls: Type[F],\n    source: Union[str, RunnerStorage, ReadableDeploymentStorage],\n    entrypoint: str,\n) -> F:\n    \"\"\"\n    Loads a flow from a remote source.\n\n    Args:\n        source: Either a URL to a git repository or a storage object.\n        entrypoint:  The path to a file containing a flow and the name of the flow function in\n            the format `./path/to/file.py:flow_func_name`.\n\n    Returns:\n        A new `Flow` instance.\n\n    Examples:\n        Load a flow from a public git repository:\n\n\n        ```python\n        from prefect import flow\n        from prefect.runner.storage import GitRepository\n        from prefect.blocks.system import Secret\n\n        my_flow = flow.from_source(\n            source=\"https://github.com/org/repo.git\",\n            entrypoint=\"flows.py:my_flow\",\n        )\n\n        my_flow()\n        ```\n\n        Load a flow from a private git repository using an access token stored in a `Secret` block:\n\n        ```python\n        from prefect import flow\n        from prefect.runner.storage import GitRepository\n        from prefect.blocks.system import Secret\n\n        my_flow = flow.from_source(\n            source=GitRepository(\n                url=\"https://github.com/org/repo.git\",\n                credentials={\"access_token\": Secret.load(\"github-access-token\")}\n            ),\n            entrypoint=\"flows.py:my_flow\",\n        )\n\n        my_flow()\n        ```\n    \"\"\"\n    if isinstance(source, str):\n        storage = create_storage_from_url(source)\n    elif isinstance(source, RunnerStorage):\n        storage = source\n    elif hasattr(source, \"get_directory\"):\n        storage = BlockStorageAdapter(source)\n    else:\n        raise TypeError(\n            f\"Unsupported source type {type(source).__name__!r}. Please provide a\"\n            \" URL to remote storage or a storage object.\"\n        )\n    with tempfile.TemporaryDirectory() as tmpdir:\n        storage.set_base_path(Path(tmpdir))\n        await storage.pull_code()\n\n        full_entrypoint = str(storage.destination / entrypoint)\n        flow: \"Flow\" = await from_async.wait_for_call_in_new_thread(\n            create_call(load_flow_from_entrypoint, full_entrypoint)\n        )\n        flow._storage = storage\n        flow._entrypoint = entrypoint\n\n    return flow\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.serialize_parameters","title":"serialize_parameters","text":"

Convert parameters to a serializable form.

Uses FastAPI's jsonable_encoder to convert to JSON compatible objects without converting everything directly to a string. This maintains basic types like integers during API roundtrips.

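A brief sketch (not part of the original docstring) of the jsonable_encoder behavior this method relies on, assuming FastAPI is installed; values that raise TypeError or ValueError fall back to a <TypeName> placeholder, as shown in the source below:

from datetime import datetime\n\nfrom fastapi.encoders import jsonable_encoder\n\nparams = {\"count\": 5, \"when\": datetime(2024, 1, 1), \"tags\": {\"a\", \"b\"}}\n\n# Integers stay integers; the datetime becomes an ISO string and the set becomes a list\nprint(jsonable_encoder(params))\n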
Source code in prefect/flows.py
def serialize_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n    \"\"\"\n    Convert parameters to a serializable form.\n\n    Uses FastAPI's `jsonable_encoder` to convert to JSON compatible objects without\n    converting everything directly to a string. This maintains basic types like\n    integers during API roundtrips.\n    \"\"\"\n    serialized_parameters = {}\n    for key, value in parameters.items():\n        try:\n            serialized_parameters[key] = jsonable_encoder(value)\n        except (TypeError, ValueError):\n            logger.debug(\n                f\"Parameter {key!r} for flow {self.name!r} is of unserializable \"\n                f\"type {type(value).__name__!r} and will not be stored \"\n                \"in the backend.\"\n            )\n            serialized_parameters[key] = f\"<{type(value).__name__}>\"\n    return serialized_parameters\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.serve","title":"serve async","text":"

Creates a deployment for this flow and starts a runner to monitor for scheduled work.

Parameters:

Name Type Description Default name Optional[str]

The name to give the created deployment. Defaults to the name of the flow.

None interval Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]]

An interval on which to execute the deployment. Accepts a number or a timedelta object to create a single schedule. If a number is given, it will be interpreted as seconds. Also accepts an iterable of numbers or timedelta to create multiple schedules.

None cron Optional[Union[Iterable[str], str]]

A cron schedule string of when to execute runs of this deployment. Also accepts an iterable of cron schedule strings to create multiple schedules.

None rrule Optional[Union[Iterable[str], str]]

An rrule schedule string of when to execute runs of this deployment. Also accepts an iterable of rrule schedule strings to create multiple schedules.

None triggers Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]]

A list of triggers that will kick off runs of this deployment.

None paused Optional[bool]

Whether or not to set this deployment as paused.

None schedules Optional[List[FlexibleScheduleList]]

A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like timezone.

None schedule Optional[SCHEDULE_TYPES]

A schedule object defining when to execute runs of this deployment. Used to define additional scheduling options such as timezone.

None is_schedule_active Optional[bool]

Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.

None parameters Optional[dict]

A dictionary of default parameter values to pass to runs of this deployment.

None description Optional[str]

A description for the created deployment. Defaults to the flow's description if not provided.

None tags Optional[List[str]]

A list of tags to associate with the created deployment for organizational purposes.

None version Optional[str]

A version for the created deployment. Defaults to the flow's version.

None enforce_parameter_schema bool

Whether or not the Prefect API should enforce the parameter schema for the created deployment.

False pause_on_shutdown bool

If True, the provided schedules will be paused when the serve function is stopped. If False, the schedules will continue running.

True print_starting_message bool

Whether or not to print the starting message when the flow is served.

True limit Optional[int]

The maximum number of runs that can be executed concurrently.

None webserver bool

Whether or not to start a monitoring webserver for this flow.

False entrypoint_type EntrypointType

Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.

FILE_PATH

Examples:

Serve a flow:

from prefect import flow\n\n@flow\ndef my_flow(name):\n    print(f\"hello {name}\")\n\nif __name__ == \"__main__\":\n    my_flow.serve(\"example-deployment\")\n

Serve a flow and run it every hour:

from prefect import flow\n\n@flow\ndef my_flow(name):\n    print(f\"hello {name}\")\n\nif __name__ == \"__main__\":\n    my_flow.serve(\"example-deployment\", interval=3600)\n
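
A further sketch (not from the original docstring) of serving a flow on a cron schedule; the schedule string is illustrative:

from prefect import flow\n\n@flow\ndef my_flow(name):\n    print(f\"hello {name}\")\n\nif __name__ == \"__main__\":\n    # Runs every day at 09:00 in the schedule's timezone\n    my_flow.serve(\"example-deployment\", cron=\"0 9 * * *\")\n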
Source code in prefect/flows.py
@sync_compatible\nasync def serve(\n    self,\n    name: Optional[str] = None,\n    interval: Optional[\n        Union[\n            Iterable[Union[int, float, datetime.timedelta]],\n            int,\n            float,\n            datetime.timedelta,\n        ]\n    ] = None,\n    cron: Optional[Union[Iterable[str], str]] = None,\n    rrule: Optional[Union[Iterable[str], str]] = None,\n    paused: Optional[bool] = None,\n    schedules: Optional[List[\"FlexibleScheduleList\"]] = None,\n    schedule: Optional[SCHEDULE_TYPES] = None,\n    is_schedule_active: Optional[bool] = None,\n    triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n    parameters: Optional[dict] = None,\n    description: Optional[str] = None,\n    tags: Optional[List[str]] = None,\n    version: Optional[str] = None,\n    enforce_parameter_schema: bool = False,\n    pause_on_shutdown: bool = True,\n    print_starting_message: bool = True,\n    limit: Optional[int] = None,\n    webserver: bool = False,\n    entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n):\n    \"\"\"\n    Creates a deployment for this flow and starts a runner to monitor for scheduled work.\n\n    Args:\n        name: The name to give the created deployment. Defaults to the name of the flow.\n        interval: An interval on which to execute the deployment. Accepts a number or a\n            timedelta object to create a single schedule. If a number is given, it will be\n            interpreted as seconds. Also accepts an iterable of numbers or timedelta to create\n            multiple schedules.\n        cron: A cron schedule string of when to execute runs of this deployment.\n            Also accepts an iterable of cron schedule strings to create multiple schedules.\n        rrule: An rrule schedule string of when to execute runs of this deployment.\n            Also accepts an iterable of rrule schedule strings to create multiple schedules.\n        triggers: A list of triggers that will kick off runs of this deployment.\n        paused: Whether or not to set this deployment as paused.\n        schedules: A list of schedule objects defining when to execute runs of this deployment.\n            Used to define multiple schedules or additional scheduling options like `timezone`.\n        schedule: A schedule object defining when to execute runs of this deployment. Used to\n            define additional scheduling options such as `timezone`.\n        is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n            not provided when creating a deployment, the schedule will be set as active. If not\n            provided when updating a deployment, the schedule's activation will not be changed.\n        parameters: A dictionary of default parameter values to pass to runs of this deployment.\n        description: A description for the created deployment. Defaults to the flow's\n            description if not provided.\n        tags: A list of tags to associate with the created deployment for organizational\n            purposes.\n        version: A version for the created deployment. 
Defaults to the flow's version.\n        enforce_parameter_schema: Whether or not the Prefect API should enforce the\n            parameter schema for the created deployment.\n        pause_on_shutdown: If True, provided schedule will be paused when the serve function is stopped.\n            If False, the schedules will continue running.\n        print_starting_message: Whether or not to print the starting message when flow is served.\n        limit: The maximum number of runs that can be executed concurrently.\n        webserver: Whether or not to start a monitoring webserver for this flow.\n        entrypoint_type: Type of entrypoint to use for the deployment. When using a module path\n            entrypoint, ensure that the module will be importable in the execution environment.\n\n    Examples:\n        Serve a flow:\n\n        ```python\n        from prefect import flow\n\n        @flow\n        def my_flow(name):\n            print(f\"hello {name}\")\n\n        if __name__ == \"__main__\":\n            my_flow.serve(\"example-deployment\")\n        ```\n\n        Serve a flow and run it every hour:\n\n        ```python\n        from prefect import flow\n\n        @flow\n        def my_flow(name):\n            print(f\"hello {name}\")\n\n        if __name__ == \"__main__\":\n            my_flow.serve(\"example-deployment\", interval=3600)\n        ```\n    \"\"\"\n    from prefect.runner import Runner\n\n    if not name:\n        name = self.name\n    else:\n        # Handling for my_flow.serve(__file__)\n        # Will set name to name of file where my_flow.serve() without the extension\n        # Non filepath strings will pass through unchanged\n        name = Path(name).stem\n\n    runner = Runner(name=name, pause_on_shutdown=pause_on_shutdown, limit=limit)\n    deployment_id = await runner.add_flow(\n        self,\n        name=name,\n        triggers=triggers,\n        interval=interval,\n        cron=cron,\n        rrule=rrule,\n        paused=paused,\n        schedules=schedules,\n        schedule=schedule,\n        is_schedule_active=is_schedule_active,\n        parameters=parameters,\n        description=description,\n        tags=tags,\n        version=version,\n        enforce_parameter_schema=enforce_parameter_schema,\n        entrypoint_type=entrypoint_type,\n    )\n    if print_starting_message:\n        help_message = (\n            f\"[green]Your flow {self.name!r} is being served and polling for\"\n            \" scheduled runs!\\n[/]\\nTo trigger a run for this flow, use the\"\n            \" following command:\\n[blue]\\n\\t$ prefect deployment run\"\n            f\" '{self.name}/{name}'\\n[/]\"\n        )\n        if PREFECT_UI_URL:\n            help_message += (\n                \"\\nYou can also run your flow via the Prefect UI:\"\n                f\" [blue]{PREFECT_UI_URL.value()}/deployments/deployment/{deployment_id}[/]\\n\"\n            )\n\n        console = Console()\n        console.print(help_message, soft_wrap=True)\n    await runner.start(webserver=webserver)\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.to_deployment","title":"to_deployment async","text":"

Creates a runner deployment object for this flow.

Parameters:

Name Type Description Default name str

The name to give the created deployment.

required interval Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]]

An interval on which to execute the new deployment. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.

None cron Optional[Union[Iterable[str], str]]

A cron schedule of when to execute runs of this deployment.

None rrule Optional[Union[Iterable[str], str]]

An rrule schedule of when to execute runs of this deployment.

None paused Optional[bool]

Whether or not to set this deployment as paused.

None schedules Optional[List[FlexibleScheduleList]]

A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options such as timezone.

None schedule Optional[SCHEDULE_TYPES]

A schedule object defining when to execute runs of this deployment.

None is_schedule_active Optional[bool]

Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.

None parameters Optional[dict]

A dictionary of default parameter values to pass to runs of this deployment.

None triggers Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]]

A list of triggers that will kick off runs of this deployment.

None description Optional[str]

A description for the created deployment. Defaults to the flow's description if not provided.

None tags Optional[List[str]]

A list of tags to associate with the created deployment for organizational purposes.

None version Optional[str]

A version for the created deployment. Defaults to the flow's version.

None enforce_parameter_schema bool

Whether or not the Prefect API should enforce the parameter schema for the created deployment.

False work_pool_name Optional[str]

The name of the work pool to use for this deployment.

None work_queue_name Optional[str]

The name of the work queue to use for this deployment's scheduled runs. If not provided the default work queue for the work pool will be used.

None job_variables Optional[Dict[str, Any]]

Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.

None entrypoint_type EntrypointType

Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.

FILE_PATH

Examples:

Prepare two deployments and serve them:

from prefect import flow, serve\n\n@flow\ndef my_flow(name):\n    print(f\"hello {name}\")\n\n@flow\ndef my_other_flow(name):\n    print(f\"goodbye {name}\")\n\nif __name__ == \"__main__\":\n    hello_deploy = my_flow.to_deployment(\"hello\", tags=[\"dev\"])\n    bye_deploy = my_other_flow.to_deployment(\"goodbye\", tags=[\"dev\"])\n    serve(hello_deploy, bye_deploy)\n
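
A further sketch (not from the original docstring) of preparing a deployment on an interval schedule and serving it:

from prefect import flow, serve\n\n@flow\ndef my_flow(name):\n    print(f\"hello {name}\")\n\nif __name__ == \"__main__\":\n    # A numeric interval is interpreted as seconds, so this runs every 10 minutes\n    every_ten_minutes = my_flow.to_deployment(\"hello-interval\", interval=600)\n    serve(every_ten_minutes)\n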
Source code in prefect/flows.py
@sync_compatible\n@deprecated_parameter(\n    \"schedule\",\n    start_date=\"Mar 2024\",\n    when=lambda p: p is not None,\n    help=\"Use `schedules` instead.\",\n)\n@deprecated_parameter(\n    \"is_schedule_active\",\n    start_date=\"Mar 2024\",\n    when=lambda p: p is not None,\n    help=\"Use `paused` instead.\",\n)\nasync def to_deployment(\n    self,\n    name: str,\n    interval: Optional[\n        Union[\n            Iterable[Union[int, float, datetime.timedelta]],\n            int,\n            float,\n            datetime.timedelta,\n        ]\n    ] = None,\n    cron: Optional[Union[Iterable[str], str]] = None,\n    rrule: Optional[Union[Iterable[str], str]] = None,\n    paused: Optional[bool] = None,\n    schedules: Optional[List[\"FlexibleScheduleList\"]] = None,\n    schedule: Optional[SCHEDULE_TYPES] = None,\n    is_schedule_active: Optional[bool] = None,\n    parameters: Optional[dict] = None,\n    triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n    description: Optional[str] = None,\n    tags: Optional[List[str]] = None,\n    version: Optional[str] = None,\n    enforce_parameter_schema: bool = False,\n    work_pool_name: Optional[str] = None,\n    work_queue_name: Optional[str] = None,\n    job_variables: Optional[Dict[str, Any]] = None,\n    entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n) -> \"RunnerDeployment\":\n    \"\"\"\n    Creates a runner deployment object for this flow.\n\n    Args:\n        name: The name to give the created deployment.\n        interval: An interval on which to execute the new deployment. Accepts either a number\n            or a timedelta object. If a number is given, it will be interpreted as seconds.\n        cron: A cron schedule of when to execute runs of this deployment.\n        rrule: An rrule schedule of when to execute runs of this deployment.\n        paused: Whether or not to set this deployment as paused.\n        schedules: A list of schedule objects defining when to execute runs of this deployment.\n            Used to define multiple schedules or additional scheduling options such as `timezone`.\n        schedule: A schedule object defining when to execute runs of this deployment.\n        is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n            not provided when creating a deployment, the schedule will be set as active. If not\n            provided when updating a deployment, the schedule's activation will not be changed.\n        parameters: A dictionary of default parameter values to pass to runs of this deployment.\n        triggers: A list of triggers that will kick off runs of this deployment.\n        description: A description for the created deployment. Defaults to the flow's\n            description if not provided.\n        tags: A list of tags to associate with the created deployment for organizational\n            purposes.\n        version: A version for the created deployment. 
Defaults to the flow's version.\n        enforce_parameter_schema: Whether or not the Prefect API should enforce the\n            parameter schema for the created deployment.\n        work_pool_name: The name of the work pool to use for this deployment.\n        work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n            If not provided the default work queue for the work pool will be used.\n        job_variables: Settings used to override the values specified default base job template\n            of the chosen work pool. Refer to the base job template of the chosen work pool for\n        entrypoint_type: Type of entrypoint to use for the deployment. When using a module path\n            entrypoint, ensure that the module will be importable in the execution environment.\n\n    Examples:\n        Prepare two deployments and serve them:\n\n        ```python\n        from prefect import flow, serve\n\n        @flow\n        def my_flow(name):\n            print(f\"hello {name}\")\n\n        @flow\n        def my_other_flow(name):\n            print(f\"goodbye {name}\")\n\n        if __name__ == \"__main__\":\n            hello_deploy = my_flow.to_deployment(\"hello\", tags=[\"dev\"])\n            bye_deploy = my_other_flow.to_deployment(\"goodbye\", tags=[\"dev\"])\n            serve(hello_deploy, bye_deploy)\n        ```\n    \"\"\"\n    from prefect.deployments.runner import RunnerDeployment\n\n    if not name.endswith(\".py\"):\n        raise_on_name_with_banned_characters(name)\n    if self._storage and self._entrypoint:\n        return await RunnerDeployment.from_storage(\n            storage=self._storage,\n            entrypoint=self._entrypoint,\n            name=name,\n            interval=interval,\n            cron=cron,\n            rrule=rrule,\n            paused=paused,\n            schedules=schedules,\n            schedule=schedule,\n            is_schedule_active=is_schedule_active,\n            tags=tags,\n            triggers=triggers,\n            parameters=parameters or {},\n            description=description,\n            version=version,\n            enforce_parameter_schema=enforce_parameter_schema,\n            work_pool_name=work_pool_name,\n            work_queue_name=work_queue_name,\n            job_variables=job_variables,\n        )\n    else:\n        return RunnerDeployment.from_flow(\n            self,\n            name=name,\n            interval=interval,\n            cron=cron,\n            rrule=rrule,\n            paused=paused,\n            schedules=schedules,\n            schedule=schedule,\n            is_schedule_active=is_schedule_active,\n            tags=tags,\n            triggers=triggers,\n            parameters=parameters or {},\n            description=description,\n            version=version,\n            enforce_parameter_schema=enforce_parameter_schema,\n            work_pool_name=work_pool_name,\n            work_queue_name=work_queue_name,\n            job_variables=job_variables,\n            entrypoint_type=entrypoint_type,\n        )\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.validate_parameters","title":"validate_parameters","text":"

Validate parameters for compatibility with the flow by attempting to cast the inputs to the associated types specified by the function's type annotations.

Returns:

Type Description Dict[str, Any]

A new dict of parameters that have been cast to the appropriate types

Raises:

Type Description ParameterTypeError

if the provided parameters are not valid

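A short illustrative sketch (not part of the original docstring) of the coercion and error behavior; the add flow is hypothetical:

from prefect import flow\nfrom prefect.exceptions import ParameterTypeError\n\n@flow\ndef add(x: int, y: int):\n    return x + y\n\n# \"5\" is coerced to the annotated int type\nprint(add.validate_parameters({\"x\": \"5\", \"y\": 2}))  # {'x': 5, 'y': 2}\n\ntry:\n    add.validate_parameters({\"x\": \"not a number\", \"y\": 2})\nexcept ParameterTypeError as exc:\n    print(f\"invalid parameters: {exc}\")\n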
Source code in prefect/flows.py
def validate_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n    \"\"\"\n    Validate parameters for compatibility with the flow by attempting to cast the inputs to the\n    associated types specified by the function's type annotations.\n\n    Returns:\n        A new dict of parameters that have been cast to the appropriate types\n\n    Raises:\n        ParameterTypeError: if the provided parameters are not valid\n    \"\"\"\n    args, kwargs = parameters_to_args_kwargs(self.fn, parameters)\n\n    if HAS_PYDANTIC_V2:\n        has_v1_models = any(isinstance(o, V1BaseModel) for o in args) or any(\n            isinstance(o, V1BaseModel) for o in kwargs.values()\n        )\n        has_v2_types = any(is_v2_type(o) for o in args) or any(\n            is_v2_type(o) for o in kwargs.values()\n        )\n\n        if has_v1_models and has_v2_types:\n            raise ParameterTypeError(\n                \"Cannot mix Pydantic v1 and v2 types as arguments to a flow.\"\n            )\n\n        if has_v1_models:\n            validated_fn = V1ValidatedFunction(\n                self.fn, config={\"arbitrary_types_allowed\": True}\n            )\n        else:\n            validated_fn = V2ValidatedFunction(\n                self.fn, config={\"arbitrary_types_allowed\": True}\n            )\n\n    else:\n        validated_fn = ValidatedFunction(\n            self.fn, config={\"arbitrary_types_allowed\": True}\n        )\n\n    try:\n        model = validated_fn.init_model_instance(*args, **kwargs)\n    except pydantic.ValidationError as exc:\n        # We capture the pydantic exception and raise our own because the pydantic\n        # exception is not picklable when using a cythonized pydantic installation\n        raise ParameterTypeError.from_validation_error(exc) from None\n    except V2ValidationError as exc:\n        # We capture the pydantic exception and raise our own because the pydantic\n        # exception is not picklable when using a cythonized pydantic installation\n        raise ParameterTypeError.from_validation_error(exc) from None\n\n    # Get the updated parameter dict with cast values from the model\n    cast_parameters = {\n        k: v\n        for k, v in model._iter()\n        if k in model.__fields_set__ or model.__fields__[k].default_factory\n    }\n    return cast_parameters\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.visualize","title":"visualize async","text":"

Generates a graphviz object representing the current flow. In IPython notebooks, it's rendered inline, otherwise in a new window as a PNG.

Raises:

Type Description ImportError

If graphviz isn't installed.

GraphvizExecutableNotFoundError

If the dot executable isn't found.

FlowVisualizationError

If the flow can't be visualized for any other reason.

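A minimal sketch (not part of the original docstring), assuming the graphviz Python package and the dot executable are installed:

from prefect import flow, task\n\n@task\ndef extract():\n    return [1, 2, 3]\n\n@task\ndef load(data):\n    print(data)\n\n@flow\ndef etl():\n    load(extract())\n\nif __name__ == \"__main__\":\n    # Renders the task dependency graph inline in IPython, otherwise as a PNG\n    etl.visualize()\n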
Source code in prefect/flows.py
@sync_compatible\nasync def visualize(self, *args, **kwargs):\n    \"\"\"\n    Generates a graphviz object representing the current flow. In IPython notebooks,\n    it's rendered inline, otherwise in a new window as a PNG.\n\n    Raises:\n        - ImportError: If `graphviz` isn't installed.\n        - GraphvizExecutableNotFoundError: If the `dot` executable isn't found.\n        - FlowVisualizationError: If the flow can't be visualized for any other reason.\n    \"\"\"\n    if not PREFECT_UNIT_TEST_MODE:\n        warnings.warn(\n            \"`flow.visualize()` will execute code inside of your flow that is not\"\n            \" decorated with `@task` or `@flow`.\"\n        )\n\n    try:\n        with TaskVizTracker() as tracker:\n            if self.isasync:\n                await self.fn(*args, **kwargs)\n            else:\n                self.fn(*args, **kwargs)\n\n            graph = build_task_dependencies(tracker)\n\n            visualize_task_dependencies(graph, self.name)\n\n    except GraphvizImportError:\n        raise\n    except GraphvizExecutableNotFoundError:\n        raise\n    except VisualizationUnsupportedError:\n        raise\n    except FlowVisualizationError:\n        raise\n    except Exception as e:\n        msg = (\n            \"It's possible you are trying to visualize a flow that contains \"\n            \"code that directly interacts with the result of a task\"\n            \" inside of the flow. \\nTry passing a `viz_return_value` \"\n            \"to the task decorator, e.g. `@task(viz_return_value=[1, 2, 3]).`\"\n        )\n\n        new_exception = type(e)(str(e) + \"\\n\" + msg)\n        # Copy traceback information from the original exception\n        new_exception.__traceback__ = e.__traceback__\n        raise new_exception\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.with_options","title":"with_options","text":"

Create a new flow from the current object, updating provided options.

Parameters:

Name Type Description Default name str

A new name for the flow.

None version str

A new version for the flow.

None description str

A new description for the flow.

None flow_run_name Optional[Union[Callable[[], str], str]]

An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string.

None task_runner Union[Type[BaseTaskRunner], BaseTaskRunner]

A new task runner for the flow.

None timeout_seconds Union[int, float]

A new number of seconds after which to fail the flow if it is still running.

None validate_parameters bool

A new value indicating if flow calls should validate given parameters.

None retries Optional[int]

A new number of times to retry on flow run failure.

None retry_delay_seconds Optional[Union[int, float]]

A new number of seconds to wait before retrying the flow after failure. This is only applicable if retries is nonzero.

None persist_result Optional[bool]

A new option for enabling or disabling result persistence.

NotSet result_storage Optional[ResultStorage]

A new storage type to use for results.

NotSet result_serializer Optional[ResultSerializer]

A new serializer to use for results.

NotSet cache_result_in_memory bool

A new value indicating if the flow's result should be cached in memory.

None on_failure Optional[List[Callable[[Flow, FlowRun, State], None]]]

A new list of callables to run when the flow enters a failed state.

None on_completion Optional[List[Callable[[Flow, FlowRun, State], None]]]

A new list of callables to run when the flow enters a completed state.

None on_cancellation Optional[List[Callable[[Flow, FlowRun, State], None]]]

A new list of callables to run when the flow enters a cancelling state.

None on_crashed Optional[List[Callable[[Flow, FlowRun, State], None]]]

A new list of callables to run when the flow enters a crashed state.

None on_running Optional[List[Callable[[Flow, FlowRun, State], None]]]

A new list of callables to run when the flow enters a running state.

None

Returns:

Type Description Self

A new Flow instance.

Examples:

Create a new flow from an existing flow and update the name:

>>> @flow(name=\"My flow\")\n>>> def my_flow():\n>>>     return 1\n>>>\n>>> new_flow = my_flow.with_options(name=\"My new flow\")\n

Create a new flow from an existing flow, update the task runner, and call it without an intermediate variable:

>>> from prefect.task_runners import SequentialTaskRunner\n>>>\n>>> @flow\n>>> def my_flow(x, y):\n>>>     return x + y\n>>>\n>>> state = my_flow.with_options(task_runner=SequentialTaskRunner)(1, 3)\n>>> assert state.result() == 4\n
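
A further sketch (not from the original docstring) of overriding retry and timeout behavior on an existing flow:

from prefect import flow\n\n@flow\ndef fetch_data():\n    ...\n\n# The same flow function, now with retries and a one-minute timeout applied\nresilient_fetch = fetch_data.with_options(\n    retries=3,\n    retry_delay_seconds=10,\n    timeout_seconds=60,\n)\n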
Source code in prefect/flows.py
def with_options(\n    self,\n    *,\n    name: str = None,\n    version: str = None,\n    retries: Optional[int] = None,\n    retry_delay_seconds: Optional[Union[int, float]] = None,\n    description: str = None,\n    flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n    task_runner: Union[Type[BaseTaskRunner], BaseTaskRunner] = None,\n    timeout_seconds: Union[int, float] = None,\n    validate_parameters: bool = None,\n    persist_result: Optional[bool] = NotSet,\n    result_storage: Optional[ResultStorage] = NotSet,\n    result_serializer: Optional[ResultSerializer] = NotSet,\n    cache_result_in_memory: bool = None,\n    log_prints: Optional[bool] = NotSet,\n    on_completion: Optional[\n        List[Callable[[FlowSchema, FlowRun, State], None]]\n    ] = None,\n    on_failure: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    on_cancellation: Optional[\n        List[Callable[[FlowSchema, FlowRun, State], None]]\n    ] = None,\n    on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    on_running: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n) -> Self:\n    \"\"\"\n    Create a new flow from the current object, updating provided options.\n\n    Args:\n        name: A new name for the flow.\n        version: A new version for the flow.\n        description: A new description for the flow.\n        flow_run_name: An optional name to distinguish runs of this flow; this name\n            can be provided as a string template with the flow's parameters as variables,\n            or a function that returns a string.\n        task_runner: A new task runner for the flow.\n        timeout_seconds: A new number of seconds to fail the flow after if still\n            running.\n        validate_parameters: A new value indicating if flow calls should validate\n            given parameters.\n        retries: A new number of times to retry on flow run failure.\n        retry_delay_seconds: A new number of seconds to wait before retrying the\n            flow after failure. 
This is only applicable if `retries` is nonzero.\n        persist_result: A new option for enabling or disabling result persistence.\n        result_storage: A new storage type to use for results.\n        result_serializer: A new serializer to use for results.\n        cache_result_in_memory: A new value indicating if the flow's result should\n            be cached in memory.\n        on_failure: A new list of callables to run when the flow enters a failed state.\n        on_completion: A new list of callables to run when the flow enters a completed state.\n        on_cancellation: A new list of callables to run when the flow enters a cancelling state.\n        on_crashed: A new list of callables to run when the flow enters a crashed state.\n        on_running: A new list of callables to run when the flow enters a running state.\n\n    Returns:\n        A new `Flow` instance.\n\n    Examples:\n\n        Create a new flow from an existing flow and update the name:\n\n        >>> @flow(name=\"My flow\")\n        >>> def my_flow():\n        >>>     return 1\n        >>>\n        >>> new_flow = my_flow.with_options(name=\"My new flow\")\n\n        Create a new flow from an existing flow, update the task runner, and call\n        it without an intermediate variable:\n\n        >>> from prefect.task_runners import SequentialTaskRunner\n        >>>\n        >>> @flow\n        >>> def my_flow(x, y):\n        >>>     return x + y\n        >>>\n        >>> state = my_flow.with_options(task_runner=SequentialTaskRunner)(1, 3)\n        >>> assert state.result() == 4\n\n    \"\"\"\n    new_flow = Flow(\n        fn=self.fn,\n        name=name or self.name,\n        description=description or self.description,\n        flow_run_name=flow_run_name or self.flow_run_name,\n        version=version or self.version,\n        task_runner=task_runner or self.task_runner,\n        retries=retries if retries is not None else self.retries,\n        retry_delay_seconds=(\n            retry_delay_seconds\n            if retry_delay_seconds is not None\n            else self.retry_delay_seconds\n        ),\n        timeout_seconds=(\n            timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n        ),\n        validate_parameters=(\n            validate_parameters\n            if validate_parameters is not None\n            else self.should_validate_parameters\n        ),\n        persist_result=(\n            persist_result if persist_result is not NotSet else self.persist_result\n        ),\n        result_storage=(\n            result_storage if result_storage is not NotSet else self.result_storage\n        ),\n        result_serializer=(\n            result_serializer\n            if result_serializer is not NotSet\n            else self.result_serializer\n        ),\n        cache_result_in_memory=(\n            cache_result_in_memory\n            if cache_result_in_memory is not None\n            else self.cache_result_in_memory\n        ),\n        log_prints=log_prints if log_prints is not NotSet else self.log_prints,\n        on_completion=on_completion or self.on_completion,\n        on_failure=on_failure or self.on_failure,\n        on_cancellation=on_cancellation or self.on_cancellation,\n        on_crashed=on_crashed or self.on_crashed,\n        on_running=on_running or self.on_running,\n    )\n    new_flow._storage = self._storage\n    new_flow._entrypoint = self._entrypoint\n    return new_flow\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.flow","title":"flow","text":"

Decorator to designate a function as a Prefect workflow.

This decorator may be used for asynchronous or synchronous functions.

Flow parameters must be serializable by Pydantic.

Parameters:

Name Type Description Default name Optional[str]

An optional name for the flow; if not provided, the name will be inferred from the given function.

None version Optional[str]

An optional version string for the flow; if not provided, we will attempt to create a version string as a hash of the file containing the wrapped function; if the file cannot be located, the version will be null.

None flow_run_name Optional[Union[Callable[[], str], str]]

An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string.

None retries int

An optional number of times to retry on flow run failure.

None retry_delay_seconds Union[int, float]

An optional number of seconds to wait before retrying the flow after failure. This is only applicable if retries is nonzero.

None task_runner BaseTaskRunner

An optional task runner to use for task execution within the flow; if not provided, a ConcurrentTaskRunner will be instantiated.

ConcurrentTaskRunner description str

An optional string description for the flow; if not provided, the description will be pulled from the docstring for the decorated function.

None timeout_seconds Union[int, float]

An optional number of seconds indicating a maximum runtime for the flow. If the flow exceeds this runtime, it will be marked as failed. Flow execution may continue until the next task is called.

None validate_parameters bool

By default, parameters passed to flows are validated by Pydantic. This will check that input values conform to the annotated types on the function. Where possible, values will be coerced into the correct type; for example, if a parameter is defined as x: int and \"5\" is passed, it will be resolved to 5. If set to False, no validation will be performed on flow parameters.

True persist_result Optional[bool]

An optional toggle indicating whether the result of this flow should be persisted to result storage. Defaults to None, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.

None result_storage Optional[ResultStorage]

An optional block to use to persist the result of this flow. This value will be used as the default for any tasks in this flow. If not provided, the local file system will be used unless called as a subflow, at which point the default will be loaded from the parent flow.

None result_serializer Optional[ResultSerializer]

An optional serializer to use to serialize the result of this flow for persistence. This value will be used as the default for any tasks in this flow. If not provided, the value of PREFECT_RESULTS_DEFAULT_SERIALIZER will be used unless called as a subflow, at which point the default will be loaded from the parent flow.

None cache_result_in_memory bool

An optional toggle indicating whether the cached result of running the flow should be stored in memory. Defaults to True.

True log_prints Optional[bool]

If set, print statements in the flow will be redirected to the Prefect logger for the flow run. Defaults to None, which indicates that the value from the parent flow should be used. If this is a parent flow, the default is pulled from the PREFECT_LOGGING_LOG_PRINTS setting.

None on_completion Optional[List[Callable[[Flow, FlowRun, State], Union[Awaitable[None], None]]]]

An optional list of functions to call when the flow run is completed. Each function should accept three arguments: the flow, the flow run, and the final state of the flow run.

None on_failure Optional[List[Callable[[Flow, FlowRun, State], Union[Awaitable[None], None]]]]

An optional list of functions to call when the flow run fails. Each function should accept three arguments: the flow, the flow run, and the final state of the flow run.

None on_cancellation Optional[List[Callable[[Flow, FlowRun, State], None]]]

An optional list of functions to call when the flow run is cancelled. These functions will be passed the flow, flow run, and final state.

None on_crashed Optional[List[Callable[[Flow, FlowRun, State], None]]]

An optional list of functions to call when the flow run crashes. Each function should accept three arguments: the flow, the flow run, and the final state of the flow run.

None on_running Optional[List[Callable[[Flow, FlowRun, State], None]]]

An optional list of functions to call when the flow run is started. Each function should accept three arguments: the flow, the flow run, and the current state of the flow run.

None

Returns:

Type Description

A callable Flow object which, when called, will run the flow and return its final state.

Examples:

Define a simple flow

>>> from prefect import flow\n>>> @flow\n>>> def add(x, y):\n>>>     return x + y\n

Define an async flow

>>> @flow\n>>> async def add(x, y):\n>>>     return x + y\n

Define a flow with a version and description

>>> @flow(version=\"first-flow\", description=\"This flow is empty!\")\n>>> def my_flow():\n>>>     pass\n

Define a flow with a custom name

>>> @flow(name=\"The Ultimate Flow\")\n>>> def my_flow():\n>>>     pass\n

Define a flow that submits its tasks to dask

>>> from prefect_dask.task_runners import DaskTaskRunner\n>>>\n>>> @flow(task_runner=DaskTaskRunner)\n>>> def my_flow():\n>>>     pass\n
Source code in prefect/flows.py
def flow(\n    __fn=None,\n    *,\n    name: Optional[str] = None,\n    version: Optional[str] = None,\n    flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n    retries: int = None,\n    retry_delay_seconds: Union[int, float] = None,\n    task_runner: BaseTaskRunner = ConcurrentTaskRunner,\n    description: str = None,\n    timeout_seconds: Union[int, float] = None,\n    validate_parameters: bool = True,\n    persist_result: Optional[bool] = None,\n    result_storage: Optional[ResultStorage] = None,\n    result_serializer: Optional[ResultSerializer] = None,\n    cache_result_in_memory: bool = True,\n    log_prints: Optional[bool] = None,\n    on_completion: Optional[\n        List[Callable[[FlowSchema, FlowRun, State], Union[Awaitable[None], None]]]\n    ] = None,\n    on_failure: Optional[\n        List[Callable[[FlowSchema, FlowRun, State], Union[Awaitable[None], None]]]\n    ] = None,\n    on_cancellation: Optional[\n        List[Callable[[FlowSchema, FlowRun, State], None]]\n    ] = None,\n    on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    on_running: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n):\n    \"\"\"\n    Decorator to designate a function as a Prefect workflow.\n\n    This decorator may be used for asynchronous or synchronous functions.\n\n    Flow parameters must be serializable by Pydantic.\n\n    Args:\n        name: An optional name for the flow; if not provided, the name will be inferred\n            from the given function.\n        version: An optional version string for the flow; if not provided, we will\n            attempt to create a version string as a hash of the file containing the\n            wrapped function; if the file cannot be located, the version will be null.\n        flow_run_name: An optional name to distinguish runs of this flow; this name can\n            be provided as a string template with the flow's parameters as variables,\n            or a function that returns a string.\n        retries: An optional number of times to retry on flow run failure.\n        retry_delay_seconds: An optional number of seconds to wait before retrying the\n            flow after failure. This is only applicable if `retries` is nonzero.\n        task_runner: An optional task runner to use for task execution within the flow; if\n            not provided, a `ConcurrentTaskRunner` will be instantiated.\n        description: An optional string description for the flow; if not provided, the\n            description will be pulled from the docstring for the decorated function.\n        timeout_seconds: An optional number of seconds indicating a maximum runtime for\n            the flow. If the flow exceeds this runtime, it will be marked as failed.\n            Flow execution may continue until the next task is called.\n        validate_parameters: By default, parameters passed to flows are validated by\n            Pydantic. This will check that input values conform to the annotated types\n            on the function. Where possible, values will be coerced into the correct\n            type; for example, if a parameter is defined as `x: int` and \"5\" is passed,\n            it will be resolved to `5`. If set to `False`, no validation will be\n            performed on flow parameters.\n        persist_result: An optional toggle indicating whether the result of this flow\n            should be persisted to result storage. 
Defaults to `None`, which indicates\n            that Prefect should choose whether the result should be persisted depending on\n            the features being used.\n        result_storage: An optional block to use to persist the result of this flow.\n            This value will be used as the default for any tasks in this flow.\n            If not provided, the local file system will be used unless called as\n            a subflow, at which point the default will be loaded from the parent flow.\n        result_serializer: An optional serializer to use to serialize the result of this\n            flow for persistence. This value will be used as the default for any tasks\n            in this flow. If not provided, the value of `PREFECT_RESULTS_DEFAULT_SERIALIZER`\n            will be used unless called as a subflow, at which point the default will be\n            loaded from the parent flow.\n        cache_result_in_memory: An optional toggle indicating whether the cached result of\n            a running the flow should be stored in memory. Defaults to `True`.\n        log_prints: If set, `print` statements in the flow will be redirected to the\n            Prefect logger for the flow run. Defaults to `None`, which indicates that\n            the value from the parent flow should be used. If this is a parent flow,\n            the default is pulled from the `PREFECT_LOGGING_LOG_PRINTS` setting.\n        on_completion: An optional list of functions to call when the flow run is\n            completed. Each function should accept three arguments: the flow, the flow\n            run, and the final state of the flow run.\n        on_failure: An optional list of functions to call when the flow run fails. Each\n            function should accept three arguments: the flow, the flow run, and the\n            final state of the flow run.\n        on_cancellation: An optional list of functions to call when the flow run is\n            cancelled. These functions will be passed the flow, flow run, and final state.\n        on_crashed: An optional list of functions to call when the flow run crashes. Each\n            function should accept three arguments: the flow, the flow run, and the\n            final state of the flow run.\n        on_running: An optional list of functions to call when the flow run is started. 
Each\n            function should accept three arguments: the flow, the flow run, and the current state\n\n    Returns:\n        A callable `Flow` object which, when called, will run the flow and return its\n        final state.\n\n    Examples:\n        Define a simple flow\n\n        >>> from prefect import flow\n        >>> @flow\n        >>> def add(x, y):\n        >>>     return x + y\n\n        Define an async flow\n\n        >>> @flow\n        >>> async def add(x, y):\n        >>>     return x + y\n\n        Define a flow with a version and description\n\n        >>> @flow(version=\"first-flow\", description=\"This flow is empty!\")\n        >>> def my_flow():\n        >>>     pass\n\n        Define a flow with a custom name\n\n        >>> @flow(name=\"The Ultimate Flow\")\n        >>> def my_flow():\n        >>>     pass\n\n        Define a flow that submits its tasks to dask\n\n        >>> from prefect_dask.task_runners import DaskTaskRunner\n        >>>\n        >>> @flow(task_runner=DaskTaskRunner)\n        >>> def my_flow():\n        >>>     pass\n    \"\"\"\n    if __fn:\n        return cast(\n            Flow[P, R],\n            Flow(\n                fn=__fn,\n                name=name,\n                version=version,\n                flow_run_name=flow_run_name,\n                task_runner=task_runner,\n                description=description,\n                timeout_seconds=timeout_seconds,\n                validate_parameters=validate_parameters,\n                retries=retries,\n                retry_delay_seconds=retry_delay_seconds,\n                persist_result=persist_result,\n                result_storage=result_storage,\n                result_serializer=result_serializer,\n                cache_result_in_memory=cache_result_in_memory,\n                log_prints=log_prints,\n                on_completion=on_completion,\n                on_failure=on_failure,\n                on_cancellation=on_cancellation,\n                on_crashed=on_crashed,\n                on_running=on_running,\n            ),\n        )\n    else:\n        return cast(\n            Callable[[Callable[P, R]], Flow[P, R]],\n            partial(\n                flow,\n                name=name,\n                version=version,\n                flow_run_name=flow_run_name,\n                task_runner=task_runner,\n                description=description,\n                timeout_seconds=timeout_seconds,\n                validate_parameters=validate_parameters,\n                retries=retries,\n                retry_delay_seconds=retry_delay_seconds,\n                persist_result=persist_result,\n                result_storage=result_storage,\n                result_serializer=result_serializer,\n                cache_result_in_memory=cache_result_in_memory,\n                log_prints=log_prints,\n                on_completion=on_completion,\n                on_failure=on_failure,\n                on_cancellation=on_cancellation,\n                on_crashed=on_crashed,\n                on_running=on_running,\n            ),\n        )\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flow_from_entrypoint","title":"load_flow_from_entrypoint","text":"

Extract a flow object from a script at an entrypoint by running all of the code in the file.

Parameters:

Name Type Description Default entrypoint str

a string in the format <path_to_script>:<flow_func_name> or a module path to a flow function

required

Returns:

Type Description Flow

The flow object from the script

Raises:

Type Description FlowScriptError

If an exception is encountered while running the script

MissingFlowError

If the flow function specified in the entrypoint does not exist
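
For example, assuming a script at a hypothetical path flows.py that defines a flow function named etl, the entrypoint joins the path and the function name with a colon:

>>> from prefect.flows import load_flow_from_entrypoint\n>>>\n>>> my_flow = load_flow_from_entrypoint(\"flows.py:etl\")\n>>> my_flow()  # run the loaded flow\n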

Source code in prefect/flows.py
def load_flow_from_entrypoint(entrypoint: str) -> Flow:\n    \"\"\"\n    Extract a flow object from a script at an entrypoint by running all of the code in the file.\n\n    Args:\n        entrypoint: a string in the format `<path_to_script>:<flow_func_name>` or a module path\n            to a flow function\n\n    Returns:\n        The flow object from the script\n\n    Raises:\n        FlowScriptError: If an exception is encountered while running the script\n        MissingFlowError: If the flow function specified in the entrypoint does not exist\n    \"\"\"\n    with PrefectObjectRegistry(\n        block_code_execution=True,\n        capture_failures=True,\n    ):\n        if \":\" in entrypoint:\n            # split by the last colon once to handle Windows paths with drive letters i.e C:\\path\\to\\file.py:do_stuff\n            path, func_name = entrypoint.rsplit(\":\", maxsplit=1)\n        else:\n            path, func_name = entrypoint.rsplit(\".\", maxsplit=1)\n        try:\n            flow = import_object(entrypoint)\n        except AttributeError as exc:\n            raise MissingFlowError(\n                f\"Flow function with name {func_name!r} not found in {path!r}. \"\n            ) from exc\n\n        if not isinstance(flow, Flow):\n            raise MissingFlowError(\n                f\"Function with name {func_name!r} is not a flow. Make sure that it is \"\n                \"decorated with '@flow'.\"\n            )\n\n        return flow\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flow_from_script","title":"load_flow_from_script","text":"

Extract a flow object from a script by running all of the code in the file.

If the script has multiple flows in it, a flow name must be provided to specify the flow to return.

Parameters:

Name Type Description Default path str

A path to a Python script containing flows

required flow_name str

An optional flow name to look for in the script

None

Returns:

Type Description Flow

The flow object from the script

Raises:

Type Description FlowScriptError

If an exception is encountered while running the script

MissingFlowError

If no flows exist in the iterable

MissingFlowError

If a flow name is provided and that flow does not exist

UnspecifiedFlowError

If multiple flows exist but no flow name was provided
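
A minimal sketch, assuming a hypothetical flows.py script that contains more than one flow, so the desired flow is selected by name:

>>> from prefect.flows import load_flow_from_script\n>>>\n>>> my_flow = load_flow_from_script(\"flows.py\", flow_name=\"etl\")\n>>> my_flow()  # run the selected flow\n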

Source code in prefect/flows.py
def load_flow_from_script(path: str, flow_name: str = None) -> Flow:\n    \"\"\"\n    Extract a flow object from a script by running all of the code in the file.\n\n    If the script has multiple flows in it, a flow name must be provided to specify\n    the flow to return.\n\n    Args:\n        path: A path to a Python script containing flows\n        flow_name: An optional flow name to look for in the script\n\n    Returns:\n        The flow object from the script\n\n    Raises:\n        FlowScriptError: If an exception is encountered while running the script\n        MissingFlowError: If no flows exist in the iterable\n        MissingFlowError: If a flow name is provided and that flow does not exist\n        UnspecifiedFlowError: If multiple flows exist but no flow name was provided\n    \"\"\"\n    return select_flow(\n        load_flows_from_script(path),\n        flow_name=flow_name,\n        from_message=f\"in script '{path}'\",\n    )\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flow_from_text","title":"load_flow_from_text","text":"

Load a flow from a text script.

The script will be written to a temporary local file path so errors can refer to line numbers and contextual tracebacks can be provided.
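
A minimal sketch, assuming the flow source is held in a string and defines a single flow named etl (both illustrative):

>>> from prefect.flows import load_flow_from_text\n>>>\n>>> source = \"from prefect import flow\\n\\n@flow\\ndef etl():\\n    pass\\n\"\n>>> my_flow = load_flow_from_text(source, flow_name=\"etl\")\n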

Source code in prefect/flows.py
def load_flow_from_text(script_contents: AnyStr, flow_name: str):\n    \"\"\"\n    Load a flow from a text script.\n\n    The script will be written to a temporary local file path so errors can refer\n    to line numbers and contextual tracebacks can be provided.\n    \"\"\"\n    with NamedTemporaryFile(\n        mode=\"wt\" if isinstance(script_contents, str) else \"wb\",\n        prefix=f\"flow-script-{flow_name}\",\n        suffix=\".py\",\n        delete=False,\n    ) as tmpfile:\n        tmpfile.write(script_contents)\n        tmpfile.flush()\n    try:\n        flow = load_flow_from_script(tmpfile.name, flow_name=flow_name)\n    finally:\n        # windows compat\n        tmpfile.close()\n        os.remove(tmpfile.name)\n    return flow\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flows_from_script","title":"load_flows_from_script","text":"

Load all flow objects from the given python script. All of the code in the file will be executed.

Returns:

Type Description List[Flow]

A list of flows

Raises:

Type Description FlowScriptError

If an exception is encountered while running the script
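
A minimal sketch that lists the name of every flow found in a hypothetical flows.py script:

>>> from prefect.flows import load_flows_from_script\n>>>\n>>> flows = load_flows_from_script(\"flows.py\")\n>>> [f.name for f in flows]\n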

Source code in prefect/flows.py
def load_flows_from_script(path: str) -> List[Flow]:\n    \"\"\"\n    Load all flow objects from the given python script. All of the code in the file\n    will be executed.\n\n    Returns:\n        A list of flows\n\n    Raises:\n        FlowScriptError: If an exception is encountered while running the script\n    \"\"\"\n    return registry_from_script(path).get_instances(Flow)\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.select_flow","title":"select_flow","text":"

Select the only flow in an iterable or a flow specified by name.

Returns:

Type Description Flow

A single flow object

Raises:

Type Description MissingFlowError

If no flows exist in the iterable

MissingFlowError

If a flow name is provided and that flow does not exist

UnspecifiedFlowError

If multiple flows exist but no flow name was provided
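
A minimal sketch that picks one of two in-memory flows by name; the etl and report flows are illustrative only:

>>> from prefect import flow\n>>> from prefect.flows import select_flow\n>>>\n>>> @flow\n>>> def etl():\n>>>     pass\n>>>\n>>> @flow\n>>> def report():\n>>>     pass\n>>>\n>>> selected = select_flow([etl, report], flow_name=\"etl\")\n>>> selected.name  # 'etl'\n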

Source code in prefect/flows.py
def select_flow(\n    flows: Iterable[Flow], flow_name: str = None, from_message: str = None\n) -> Flow:\n    \"\"\"\n    Select the only flow in an iterable or a flow specified by name.\n\n    Returns\n        A single flow object\n\n    Raises:\n        MissingFlowError: If no flows exist in the iterable\n        MissingFlowError: If a flow name is provided and that flow does not exist\n        UnspecifiedFlowError: If multiple flows exist but no flow name was provided\n    \"\"\"\n    # Convert to flows by name\n    flows = {f.name: f for f in flows}\n\n    # Add a leading space if given, otherwise use an empty string\n    from_message = (\" \" + from_message) if from_message else \"\"\n    if not flows:\n        raise MissingFlowError(f\"No flows found{from_message}.\")\n\n    elif flow_name and flow_name not in flows:\n        raise MissingFlowError(\n            f\"Flow {flow_name!r} not found{from_message}. \"\n            f\"Found the following flows: {listrepr(flows.keys())}. \"\n            \"Check to make sure that your flow function is decorated with `@flow`.\"\n        )\n\n    elif not flow_name and len(flows) > 1:\n        raise UnspecifiedFlowError(\n            (\n                f\"Found {len(flows)} flows{from_message}:\"\n                f\" {listrepr(sorted(flows.keys()))}. Specify a flow name to select a\"\n                \" flow.\"\n            ),\n        )\n\n    if flow_name:\n        return flows[flow_name]\n    else:\n        return list(flows.values())[0]\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/futures/","title":"prefect.futures","text":"","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures","title":"prefect.futures","text":"

Futures represent the execution of a task and allow retrieval of the task run's state.

This module contains the definition for futures as well as utilities for resolving futures in nested data structures.

","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture","title":"PrefectFuture","text":"

Bases: Generic[R, A]

Represents the result of a computation happening in a task runner.

When tasks are called, they are submitted to a task runner which creates a future for access to the state and result of the task.

Examples:

Define a task that returns a string

>>> from prefect import flow, task\n>>> @task\n>>> def my_task() -> str:\n>>>     return \"hello\"\n

Calls of this task in a flow will return a future

>>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()  # PrefectFuture[str, Sync] includes result type\n>>>     future.task_run.id  # UUID for the task run\n

Wait for the task to complete

>>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()\n>>>     final_state = future.wait()\n

Wait N seconds for the task to complete

>>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()\n>>>     final_state = future.wait(0.1)\n>>>     if final_state:\n>>>         ... # Task done\n>>>     else:\n>>>         ... # Task not done yet\n

Wait for a task to complete and retrieve its result

>>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()\n>>>     result = future.result()\n>>>     assert result == \"hello\"\n

Wait N seconds for a task to complete and retrieve its result

>>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()\n>>>     result = future.result(timeout=5)\n>>>     assert result == \"hello\"\n

Retrieve the state of a task without waiting for completion

>>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()\n>>>     state = future.get_state()\n
Source code in prefect/futures.py
class PrefectFuture(Generic[R, A]):\n    \"\"\"\n    Represents the result of a computation happening in a task runner.\n\n    When tasks are called, they are submitted to a task runner which creates a future\n    for access to the state and result of the task.\n\n    Examples:\n        Define a task that returns a string\n\n        >>> from prefect import flow, task\n        >>> @task\n        >>> def my_task() -> str:\n        >>>     return \"hello\"\n\n        Calls of this task in a flow will return a future\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()  # PrefectFuture[str, Sync] includes result type\n        >>>     future.task_run.id  # UUID for the task run\n\n        Wait for the task to complete\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     final_state = future.wait()\n\n        Wait N seconds for the task to complete\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     final_state = future.wait(0.1)\n        >>>     if final_state:\n        >>>         ... # Task done\n        >>>     else:\n        >>>         ... # Task not done yet\n\n        Wait for a task to complete and retrieve its result\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     result = future.result()\n        >>>     assert result == \"hello\"\n\n        Wait N seconds for a task to complete and retrieve its result\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     result = future.result(timeout=5)\n        >>>     assert result == \"hello\"\n\n        Retrieve the state of a task without waiting for completion\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     state = future.get_state()\n    \"\"\"\n\n    def __init__(\n        self,\n        name: str,\n        key: UUID,\n        task_runner: \"BaseTaskRunner\",\n        asynchronous: A = True,\n        _final_state: State[R] = None,  # Exposed for testing\n    ) -> None:\n        self.key = key\n        self.name = name\n        self.asynchronous = asynchronous\n        self.task_run = None\n        self._final_state = _final_state\n        self._exception: Optional[Exception] = None\n        self._task_runner = task_runner\n        self._submitted = anyio.Event()\n\n        self._loop = asyncio.get_running_loop()\n\n    @overload\n    def wait(\n        self: \"PrefectFuture[R, Async]\", timeout: None = None\n    ) -> Awaitable[State[R]]:\n        ...\n\n    @overload\n    def wait(self: \"PrefectFuture[R, Sync]\", timeout: None = None) -> State[R]:\n        ...\n\n    @overload\n    def wait(\n        self: \"PrefectFuture[R, Async]\", timeout: float\n    ) -> Awaitable[Optional[State[R]]]:\n        ...\n\n    @overload\n    def wait(self: \"PrefectFuture[R, Sync]\", timeout: float) -> Optional[State[R]]:\n        ...\n\n    def wait(self, timeout=None):\n        \"\"\"\n        Wait for the run to finish and return the final state\n\n        If the timeout is reached before the run reaches a final state,\n        `None` is returned.\n        \"\"\"\n        wait = create_call(self._wait, timeout=timeout)\n        if self.asynchronous:\n            return from_async.call_soon_in_loop_thread(wait).aresult()\n        else:\n            # type checking cannot handle the overloaded timeout passing\n            return 
from_sync.call_soon_in_loop_thread(wait).result()  # type: ignore\n\n    @overload\n    async def _wait(self, timeout: None = None) -> State[R]:\n        ...\n\n    @overload\n    async def _wait(self, timeout: float) -> Optional[State[R]]:\n        ...\n\n    async def _wait(self, timeout=None):\n        \"\"\"\n        Async implementation for `wait`\n        \"\"\"\n        await self._wait_for_submission()\n\n        if self._final_state:\n            return self._final_state\n\n        self._final_state = await self._task_runner.wait(self.key, timeout)\n        return self._final_state\n\n    @overload\n    def result(\n        self: \"PrefectFuture[R, Sync]\",\n        timeout: float = None,\n        raise_on_failure: bool = True,\n    ) -> R:\n        ...\n\n    @overload\n    def result(\n        self: \"PrefectFuture[R, Sync]\",\n        timeout: float = None,\n        raise_on_failure: bool = False,\n    ) -> Union[R, Exception]:\n        ...\n\n    @overload\n    def result(\n        self: \"PrefectFuture[R, Async]\",\n        timeout: float = None,\n        raise_on_failure: bool = True,\n    ) -> Awaitable[R]:\n        ...\n\n    @overload\n    def result(\n        self: \"PrefectFuture[R, Async]\",\n        timeout: float = None,\n        raise_on_failure: bool = False,\n    ) -> Awaitable[Union[R, Exception]]:\n        ...\n\n    def result(self, timeout: float = None, raise_on_failure: bool = True):\n        \"\"\"\n        Wait for the run to finish and return the final state.\n\n        If the timeout is reached before the run reaches a final state, a `TimeoutError`\n        will be raised.\n\n        If `raise_on_failure` is `True` and the task run failed, the task run's\n        exception will be raised.\n        \"\"\"\n        result = create_call(\n            self._result, timeout=timeout, raise_on_failure=raise_on_failure\n        )\n        if self.asynchronous:\n            return from_async.call_soon_in_loop_thread(result).aresult()\n        else:\n            return from_sync.call_soon_in_loop_thread(result).result()\n\n    async def _result(self, timeout: float = None, raise_on_failure: bool = True):\n        \"\"\"\n        Async implementation of `result`\n        \"\"\"\n        final_state = await self._wait(timeout=timeout)\n        if not final_state:\n            raise TimeoutError(\"Call timed out before task finished.\")\n        return await final_state.result(raise_on_failure=raise_on_failure, fetch=True)\n\n    @overload\n    def get_state(\n        self: \"PrefectFuture[R, Async]\", client: PrefectClient = None\n    ) -> Awaitable[State[R]]:\n        ...\n\n    @overload\n    def get_state(\n        self: \"PrefectFuture[R, Sync]\", client: PrefectClient = None\n    ) -> State[R]:\n        ...\n\n    def get_state(self, client: PrefectClient = None):\n        \"\"\"\n        Get the current state of the task run.\n        \"\"\"\n        if self.asynchronous:\n            return cast(Awaitable[State[R]], self._get_state(client=client))\n        else:\n            return cast(State[R], sync(self._get_state, client=client))\n\n    @inject_client\n    async def _get_state(self, client: PrefectClient = None) -> State[R]:\n        assert client is not None  # always injected\n\n        # We must wait for the task run id to be populated\n        await self._wait_for_submission()\n\n        task_run = await client.read_task_run(self.task_run.id)\n\n        if not task_run:\n            raise RuntimeError(\"Future has no associated task run in the 
server.\")\n\n        # Update the task run reference\n        self.task_run = task_run\n        return task_run.state\n\n    async def _wait_for_submission(self):\n        await run_coroutine_in_loop_from_async(self._loop, self._submitted.wait())\n\n    def __hash__(self) -> int:\n        return hash(self.key)\n\n    def __repr__(self) -> str:\n        return f\"PrefectFuture({self.name!r})\"\n\n    def __bool__(self) -> bool:\n        warnings.warn(\n            (\n                \"A 'PrefectFuture' from a task call was cast to a boolean; \"\n                \"did you mean to check the result of the task instead? \"\n                \"e.g. `if my_task().result(): ...`\"\n            ),\n            stacklevel=2,\n        )\n        return True\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture.get_state","title":"get_state","text":"

Get the current state of the task run.

Source code in prefect/futures.py
def get_state(self, client: PrefectClient = None):\n    \"\"\"\n    Get the current state of the task run.\n    \"\"\"\n    if self.asynchronous:\n        return cast(Awaitable[State[R]], self._get_state(client=client))\n    else:\n        return cast(State[R], sync(self._get_state, client=client))\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture.result","title":"result","text":"

Wait for the run to finish and return the final state.

If the timeout is reached before the run reaches a final state, a TimeoutError will be raised.

If raise_on_failure is True and the task run failed, the task run's exception will be raised.
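
For example, to inspect a failure without re-raising it, pass raise_on_failure=False; this sketch assumes the my_task task defined in the examples above:

>>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()\n>>>     outcome = future.result(raise_on_failure=False)\n>>>     if isinstance(outcome, Exception):\n>>>         ...  # handle the task's exception\n>>>     else:\n>>>         ...  # use the returned value\n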

Source code in prefect/futures.py
def result(self, timeout: float = None, raise_on_failure: bool = True):\n    \"\"\"\n    Wait for the run to finish and return the final state.\n\n    If the timeout is reached before the run reaches a final state, a `TimeoutError`\n    will be raised.\n\n    If `raise_on_failure` is `True` and the task run failed, the task run's\n    exception will be raised.\n    \"\"\"\n    result = create_call(\n        self._result, timeout=timeout, raise_on_failure=raise_on_failure\n    )\n    if self.asynchronous:\n        return from_async.call_soon_in_loop_thread(result).aresult()\n    else:\n        return from_sync.call_soon_in_loop_thread(result).result()\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture.wait","title":"wait","text":"

Wait for the run to finish and return the final state

If the timeout is reached before the run reaches a final state, None is returned.

Source code in prefect/futures.py
def wait(self, timeout=None):\n    \"\"\"\n    Wait for the run to finish and return the final state\n\n    If the timeout is reached before the run reaches a final state,\n    `None` is returned.\n    \"\"\"\n    wait = create_call(self._wait, timeout=timeout)\n    if self.asynchronous:\n        return from_async.call_soon_in_loop_thread(wait).aresult()\n    else:\n        # type checking cannot handle the overloaded timeout passing\n        return from_sync.call_soon_in_loop_thread(wait).result()  # type: ignore\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.call_repr","title":"call_repr","text":"

Generate a repr for a function call as \"fn_name(arg_value, kwarg_name=kwarg_value)\"
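
A minimal sketch with a hypothetical greet function, showing how positional and keyword arguments are rendered:

>>> from prefect.futures import call_repr\n>>>\n>>> def greet(name, punctuation=\"!\"):\n>>>     pass\n>>>\n>>> call_repr(greet, \"world\", punctuation=\"?\")  # \"greet('world', punctuation='?')\"\n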

Source code in prefect/futures.py
def call_repr(__fn: Callable, *args: Any, **kwargs: Any) -> str:\n    \"\"\"\n    Generate a repr for a function call as \"fn_name(arg_value, kwarg_name=kwarg_value)\"\n    \"\"\"\n\n    name = __fn.__name__\n\n    # TODO: If this computation is concerningly expensive, we can iterate checking the\n    #       length at each arg or avoid calling `repr` on args with large amounts of\n    #       data\n    call_args = \", \".join(\n        [repr(arg) for arg in args]\n        + [f\"{key}={repr(val)}\" for key, val in kwargs.items()]\n    )\n\n    # Enforce a maximum length\n    if len(call_args) > 100:\n        call_args = call_args[:100] + \"...\"\n\n    return f\"{name}({call_args})\"\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.resolve_futures_to_data","title":"resolve_futures_to_data async","text":"

Given a Python built-in collection, recursively find PrefectFutures and build a new collection with the same structure with futures resolved to their results. Resolving futures to their results may wait for execution to complete and require communication with the API.

Unsupported object types will be returned without modification.
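
A minimal sketch, assuming the my_task task from the examples above that returns \"hello\"; futures nested in the dictionary are replaced with their results:

>>> from prefect import flow\n>>> from prefect.futures import resolve_futures_to_data\n>>>\n>>> @flow\n>>> async def my_flow():\n>>>     data = {\"greeting\": my_task.submit()}\n>>>     resolved = await resolve_futures_to_data(data)\n>>>     assert resolved == {\"greeting\": \"hello\"}\n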

Source code in prefect/futures.py
async def resolve_futures_to_data(\n    expr: Union[PrefectFuture[R, Any], Any],\n    raise_on_failure: bool = True,\n) -> Union[R, Any]:\n    \"\"\"\n    Given a Python built-in collection, recursively find `PrefectFutures` and build a\n    new collection with the same structure with futures resolved to their results.\n    Resolving futures to their results may wait for execution to complete and require\n    communication with the API.\n\n    Unsupported object types will be returned without modification.\n    \"\"\"\n    futures: Set[PrefectFuture] = set()\n\n    maybe_expr = visit_collection(\n        expr,\n        visit_fn=partial(_collect_futures, futures),\n        return_data=False,\n        context={},\n    )\n    if maybe_expr is not None:\n        expr = maybe_expr\n\n    # Get results\n    results = await asyncio.gather(\n        *[\n            # We must wait for the future in the thread it was created in\n            from_async.call_soon_in_loop_thread(\n                create_call(future._result, raise_on_failure=raise_on_failure)\n            ).aresult()\n            for future in futures\n        ]\n    )\n\n    results_by_future = dict(zip(futures, results))\n\n    def replace_futures_with_results(expr, context):\n        # Expressions inside quotes should not be modified\n        if isinstance(context.get(\"annotation\"), quote):\n            raise StopVisiting()\n\n        if isinstance(expr, PrefectFuture):\n            return results_by_future[expr]\n        else:\n            return expr\n\n    return visit_collection(\n        expr,\n        visit_fn=replace_futures_with_results,\n        return_data=True,\n        context={},\n    )\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.resolve_futures_to_states","title":"resolve_futures_to_states async","text":"

Given a Python built-in collection, recursively find PrefectFutures and build a new collection with the same structure with futures resolved to their final states. Resolving futures to their final states may wait for execution to complete.

Unsupported object types will be returned without modification.
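
A similar sketch for states, again assuming the my_task task from the examples above; each future is replaced with its final state:

>>> from prefect import flow\n>>> from prefect.futures import resolve_futures_to_states\n>>>\n>>> @flow\n>>> async def my_flow():\n>>>     futures = [my_task.submit(), my_task.submit()]\n>>>     states = await resolve_futures_to_states(futures)\n>>>     assert all(state.is_completed() for state in states)\n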

Source code in prefect/futures.py
async def resolve_futures_to_states(\n    expr: Union[PrefectFuture[R, Any], Any],\n) -> Union[State[R], Any]:\n    \"\"\"\n    Given a Python built-in collection, recursively find `PrefectFutures` and build a\n    new collection with the same structure with futures resolved to their final states.\n    Resolving futures to their final states may wait for execution to complete.\n\n    Unsupported object types will be returned without modification.\n    \"\"\"\n    futures: Set[PrefectFuture] = set()\n\n    visit_collection(\n        expr,\n        visit_fn=partial(_collect_futures, futures),\n        return_data=False,\n        context={},\n    )\n\n    # Get final states for each future\n    states = await asyncio.gather(\n        *[\n            # We must wait for the future in the thread it was created in\n            from_async.call_soon_in_loop_thread(create_call(future._wait)).aresult()\n            for future in futures\n        ]\n    )\n\n    states_by_future = dict(zip(futures, states))\n\n    def replace_futures_with_states(expr, context):\n        # Expressions inside quotes should not be modified\n        if isinstance(context.get(\"annotation\"), quote):\n            raise StopVisiting()\n\n        if isinstance(expr, PrefectFuture):\n            return states_by_future[expr]\n        else:\n            return expr\n\n    return visit_collection(\n        expr,\n        visit_fn=replace_futures_with_states,\n        return_data=True,\n        context={},\n    )\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/infrastructure/","title":"prefect.infrastructure","text":"","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure","title":"prefect.infrastructure","text":"","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainer","title":"DockerContainer","text":"

Bases: Infrastructure

Runs a command in a container.

Requires a Docker Engine to be connectable. Docker settings will be retrieved from the environment.

See the Docker deployment guide at https://docs.prefect.io/guides/deployment/docker for a tutorial.

Attributes:

Name Type Description auto_remove bool

If set, the container will be removed on completion. Otherwise, the container will remain after exit for inspection.

command Optional[List[str]]

A list of strings specifying the command to run in the container to start the flow run. In most cases you should not override this.

env Dict[str, Optional[str]]

Environment variables to set for the container.

image str

An optional string specifying the tag of a Docker image to use. Defaults to the Prefect image.

image_pull_policy Optional[ImagePullPolicy]

Specifies if the image should be pulled. One of 'ALWAYS', 'NEVER', 'IF_NOT_PRESENT'.

image_registry Optional[DockerRegistry]

A DockerRegistry block containing credentials to use if image is stored in a private image registry.

labels Dict[str, str]

An optional dictionary of labels, mapping name to value.

name Optional[str]

An optional name for the container.

network_mode Optional[str]

Set the network mode for the created container. Defaults to 'host' if a local API url is detected, otherwise the Docker default of 'bridge' is used. If 'networks' is set, this cannot be set.

networks List[str]

An optional list of strings specifying Docker networks to connect the container to.

stream_output bool

If set, stream output from the container to local standard output.

volumes List[str]

An optional list of volume mount strings in the format of \"local_path:container_path\".

memswap_limit Union[int, str]

Total memory (memory + swap), -1 to disable swap. Should only be set if mem_limit is also set. If mem_limit is set, this defaults to allowing the container to use as much swap as memory. For example, if mem_limit is 300m and memswap_limit is not set, the container can use 600m in total of memory and swap.

mem_limit Union[float, str]

Memory limit of the created container. Accepts float values to enforce a limit in bytes or a string with a unit e.g. 100000b, 1000k, 128m, 1g. If a string is given without a unit, bytes are assumed.

privileged bool

Give extended privileges to this container.

","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainer--connecting-to-a-locally-hosted-prefect-api","title":"Connecting to a locally hosted Prefect API","text":"

If using a local API URL on Linux, we will update the network mode default to 'host' to enable connectivity. If another OS is used, or an alternative network mode is set, we will replace 'localhost' in the API URL with 'host.docker.internal'. Generally, this will enable connectivity, but the API URL can be provided as an environment variable to override inference in more complex use cases.

Note, if using 'host.docker.internal' in the API URL on Linux, the API must be bound to 0.0.0.0 or the Docker IP address to allow connectivity. On macOS, this is not necessary and the API is connectable while bound to localhost.
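
A minimal sketch of configuring this (now deprecated) block; the image tag, environment variable, volume mount, and block name below are illustrative assumptions:

>>> from prefect.infrastructure import DockerContainer\n>>>\n>>> container = DockerContainer(\n>>>     image=\"prefecthq/prefect:2-latest\",\n>>>     env={\"PREFECT_LOGGING_LEVEL\": \"DEBUG\"},\n>>>     volumes=[\"/local/data:/opt/data\"],\n>>>     auto_remove=True,\n>>> )\n>>> container.save(\"my-docker-container\")\n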

Source code in prefect/infrastructure/container.py
@deprecated_class(\n    start_date=\"Mar 2024\",\n    help=\"Use the Docker worker from prefect-docker instead.\"\n    \" Refer to the upgrade guide for more information:\"\n    \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\",\n)\nclass DockerContainer(Infrastructure):\n    \"\"\"\n    Runs a command in a container.\n\n    Requires a Docker Engine to be connectable. Docker settings will be retrieved from\n    the environment.\n\n    Click [here](https://docs.prefect.io/guides/deployment/docker) to see a tutorial.\n\n    Attributes:\n        auto_remove: If set, the container will be removed on completion. Otherwise,\n            the container will remain after exit for inspection.\n        command: A list of strings specifying the command to run in the container to\n            start the flow run. In most cases you should not override this.\n        env: Environment variables to set for the container.\n        image: An optional string specifying the tag of a Docker image to use.\n            Defaults to the Prefect image.\n        image_pull_policy: Specifies if the image should be pulled. One of 'ALWAYS',\n            'NEVER', 'IF_NOT_PRESENT'.\n        image_registry: A `DockerRegistry` block containing credentials to use if `image` is stored in a private\n            image registry.\n        labels: An optional dictionary of labels, mapping name to value.\n        name: An optional name for the container.\n        network_mode: Set the network mode for the created container. Defaults to 'host'\n            if a local API url is detected, otherwise the Docker default of 'bridge' is\n            used. If 'networks' is set, this cannot be set.\n        networks: An optional list of strings specifying Docker networks to connect the\n            container to.\n        stream_output: If set, stream output from the container to local standard output.\n        volumes: An optional list of volume mount strings in the format of\n            \"local_path:container_path\".\n        memswap_limit: Total memory (memory + swap), -1 to disable swap. Should only be\n            set if `mem_limit` is also set. If `mem_limit` is set, this defaults to\n            allowing the container to use as much swap as memory. For example, if\n            `mem_limit` is 300m and `memswap_limit` is not set, the container can use\n            600m in total of memory and swap.\n        mem_limit: Memory limit of the created container. Accepts float values to enforce\n            a limit in bytes or a string with a unit e.g. 100000b, 1000k, 128m, 1g.\n            If a string is given without a unit, bytes are assumed.\n        privileged: Give extended privileges to this container.\n\n    ## Connecting to a locally hosted Prefect API\n\n    If using a local API URL on Linux, we will update the network mode default to 'host'\n    to enable connectivity. If using another OS or an alternative network mode is used,\n    we will replace 'localhost' in the API URL with 'host.docker.internal'. Generally,\n    this will enable connectivity, but the API URL can be provided as an environment\n    variable to override inference in more complex use-cases.\n\n    Note, if using 'host.docker.internal' in the API URL on Linux, the API must be bound\n    to 0.0.0.0 or the Docker IP address to allow connectivity. 
On macOS, this is not\n    necessary and the API is connectable while bound to localhost.\n    \"\"\"\n\n    type: Literal[\"docker-container\"] = Field(\n        default=\"docker-container\", description=\"The type of infrastructure.\"\n    )\n    image: str = Field(\n        default_factory=get_prefect_image_name,\n        description=\"Tag of a Docker image to use. Defaults to the Prefect image.\",\n    )\n    image_pull_policy: Optional[ImagePullPolicy] = Field(\n        default=None, description=\"Specifies if the image should be pulled.\"\n    )\n    image_registry: Optional[DockerRegistry] = None\n    networks: List[str] = Field(\n        default_factory=list,\n        description=(\n            \"A list of strings specifying Docker networks to connect the container to.\"\n        ),\n    )\n    network_mode: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The network mode for the created container (e.g. host, bridge). If\"\n            \" 'networks' is set, this cannot be set.\"\n        ),\n    )\n    auto_remove: bool = Field(\n        default=False,\n        description=\"If set, the container will be removed on completion.\",\n    )\n    volumes: List[str] = Field(\n        default_factory=list,\n        description=(\n            \"A list of volume mount strings in the format of\"\n            ' \"local_path:container_path\".'\n        ),\n    )\n    stream_output: bool = Field(\n        default=True,\n        description=(\n            \"If set, the output will be streamed from the container to local standard\"\n            \" output.\"\n        ),\n    )\n    memswap_limit: Union[int, str] = Field(\n        default=None,\n        description=(\n            \"Total memory (memory + swap), -1 to disable swap. Should only be \"\n            \"set if `mem_limit` is also set. If `mem_limit` is set, this defaults to\"\n            \"allowing the container to use as much swap as memory. For example, if \"\n            \"`mem_limit` is 300m and `memswap_limit` is not set, the container can use \"\n            \"600m in total of memory and swap.\"\n        ),\n    )\n    mem_limit: Union[float, str] = Field(\n        default=None,\n        description=(\n            \"Memory limit of the created container. Accepts float values to enforce \"\n            \"a limit in bytes or a string with a unit e.g. 100000b, 1000k, 128m, 1g. 
\"\n            \"If a string is given without a unit, bytes are assumed.\"\n        ),\n    )\n    privileged: bool = Field(\n        default=False,\n        description=\"Give extended privileges to this container.\",\n    )\n\n    _block_type_name = \"Docker Container\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/14a315b79990200db7341e42553e23650b34bb96-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainer\"\n\n    @validator(\"labels\")\n    def convert_labels_to_docker_format(cls, labels: Dict[str, str]):\n        labels = labels or {}\n        new_labels = {}\n        for name, value in labels.items():\n            if \"/\" in name:\n                namespace, key = name.split(\"/\", maxsplit=1)\n                new_namespace = \".\".join(reversed(namespace.split(\".\")))\n                new_labels[f\"{new_namespace}.{key}\"] = value\n            else:\n                new_labels[name] = value\n        return new_labels\n\n    @validator(\"volumes\")\n    def check_volume_format(cls, volumes):\n        for volume in volumes:\n            if \":\" not in volume:\n                raise ValueError(\n                    \"Invalid volume specification. \"\n                    f\"Expected format 'path:container_path', but got {volume!r}\"\n                )\n\n        return volumes\n\n    @sync_compatible\n    async def run(\n        self,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> Optional[bool]:\n        if not self.command:\n            raise ValueError(\"Docker container cannot be run with empty command.\")\n\n        # The `docker` library uses requests instead of an async http library so it must\n        # be run in a thread to avoid blocking the event loop.\n        container = await run_sync_in_worker_thread(self._create_and_start_container)\n        container_pid = self._get_infrastructure_pid(container_id=container.id)\n\n        # Mark as started and return the infrastructure id\n        if task_status:\n            task_status.started(container_pid)\n\n        # Monitor the container\n        container = await run_sync_in_worker_thread(\n            self._watch_container_safe, container\n        )\n\n        exit_code = container.attrs[\"State\"].get(\"ExitCode\")\n        return DockerContainerResult(\n            status_code=exit_code if exit_code is not None else -1,\n            identifier=container_pid,\n        )\n\n    async def kill(self, infrastructure_pid: str, grace_seconds: int = 30):\n        docker_client = self._get_client()\n        base_url, container_id = self._parse_infrastructure_pid(infrastructure_pid)\n\n        if docker_client.api.base_url != base_url:\n            raise InfrastructureNotAvailable(\n                \"\".join(\n                    [\n                        (\n                            f\"Unable to stop container {container_id!r}: the current\"\n                            \" Docker API \"\n                        ),\n                        (\n                            f\"URL {docker_client.api.base_url!r} does not match the\"\n                            \" expected \"\n                        ),\n                        f\"API base URL {base_url}.\",\n                    ]\n                )\n            )\n        try:\n            container = docker_client.containers.get(container_id=container_id)\n        except docker.errors.NotFound:\n            raise InfrastructureNotFound(\n                
f\"Unable to stop container {container_id!r}: The container was not\"\n                \" found.\"\n            )\n\n        try:\n            container.stop(timeout=grace_seconds)\n        except Exception:\n            raise\n\n    def preview(self):\n        # TODO: build and document a more sophisticated preview\n        docker_client = self._get_client()\n        try:\n            return json.dumps(self._build_container_settings(docker_client))\n        finally:\n            docker_client.close()\n\n    async def generate_work_pool_base_job_template(self):\n        from prefect.workers.utilities import (\n            get_default_base_job_template_for_infrastructure_type,\n        )\n\n        base_job_template = await get_default_base_job_template_for_infrastructure_type(\n            self.get_corresponding_worker_type()\n        )\n        if base_job_template is None:\n            return await super().generate_work_pool_base_job_template()\n        for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n            if key == \"command\":\n                base_job_template[\"variables\"][\"properties\"][\"command\"][\n                    \"default\"\n                ] = shlex.join(value)\n            elif key == \"image_registry\":\n                self.logger.warning(\n                    \"Image registry blocks are not supported by Docker\"\n                    \" work pools. Please authenticate to your registry using\"\n                    \" the `docker login` command on your worker instances.\"\n                )\n            elif key in [\n                \"type\",\n                \"block_type_slug\",\n                \"_block_document_id\",\n                \"_block_document_name\",\n                \"_is_anonymous\",\n            ]:\n                continue\n            elif key == \"image_pull_policy\":\n                new_value = None\n                if value == ImagePullPolicy.ALWAYS:\n                    new_value = \"Always\"\n                elif value == ImagePullPolicy.NEVER:\n                    new_value = \"Never\"\n                elif value == ImagePullPolicy.IF_NOT_PRESENT:\n                    new_value = \"IfNotPresent\"\n\n                base_job_template[\"variables\"][\"properties\"][key][\"default\"] = new_value\n            elif key in base_job_template[\"variables\"][\"properties\"]:\n                base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n            else:\n                self.logger.warning(\n                    f\"Variable {key!r} is not supported by Docker work pools. 
Skipping.\"\n                )\n\n        return base_job_template\n\n    def get_corresponding_worker_type(self):\n        return \"docker\"\n\n    def _get_infrastructure_pid(self, container_id: str) -> str:\n        \"\"\"Generates a Docker infrastructure_pid string in the form of\n        `<docker_host_base_url>:<container_id>`.\n        \"\"\"\n        docker_client = self._get_client()\n        base_url = docker_client.api.base_url\n        docker_client.close()\n        return f\"{base_url}:{container_id}\"\n\n    def _parse_infrastructure_pid(self, infrastructure_pid: str) -> Tuple[str, str]:\n        \"\"\"Splits a Docker infrastructure_pid into its component parts\"\"\"\n\n        # base_url can contain `:` so we only want the last item of the split\n        base_url, container_id = infrastructure_pid.rsplit(\":\", 1)\n        return base_url, str(container_id)\n\n    def _build_container_settings(\n        self,\n        docker_client: \"DockerClient\",\n    ) -> Dict:\n        network_mode = self._get_network_mode()\n        return dict(\n            image=self.image,\n            network=self.networks[0] if self.networks else None,\n            network_mode=network_mode,\n            command=self.command,\n            environment=self._get_environment_variables(network_mode),\n            auto_remove=self.auto_remove,\n            labels={**CONTAINER_LABELS, **self.labels},\n            extra_hosts=self._get_extra_hosts(docker_client),\n            name=self._get_container_name(),\n            volumes=self.volumes,\n            mem_limit=self.mem_limit,\n            memswap_limit=self.memswap_limit,\n            privileged=self.privileged,\n        )\n\n    def _create_and_start_container(self) -> \"Container\":\n        if self.image_registry:\n            # If an image registry block was supplied, load an authenticated Docker\n            # client from the block. Otherwise, use an unauthenticated client to\n            # pull images from public registries.\n            docker_client = self.image_registry.get_docker_client()\n        else:\n            docker_client = self._get_client()\n        container_settings = self._build_container_settings(docker_client)\n\n        if self._should_pull_image(docker_client):\n            self.logger.info(f\"Pulling image {self.image!r}...\")\n            self._pull_image(docker_client)\n\n        container = self._create_container(docker_client, **container_settings)\n\n        # Add additional networks after the container is created; only one network can\n        # be attached at creation time\n        if len(self.networks) > 1:\n            for network_name in self.networks[1:]:\n                network = docker_client.networks.get(network_name)\n                network.connect(container)\n\n        # Start the container\n        container.start()\n\n        docker_client.close()\n\n        return container\n\n    def _get_image_and_tag(self) -> Tuple[str, Optional[str]]:\n        return parse_image_tag(self.image)\n\n    def _determine_image_pull_policy(self) -> ImagePullPolicy:\n        \"\"\"\n        Determine the appropriate image pull policy.\n\n        1. If they specified an image pull policy, use that.\n\n        2. If they did not specify an image pull policy and gave us\n           the \"latest\" tag, use ImagePullPolicy.always.\n\n        3. If they did not specify an image pull policy and did not\n           specify a tag, use ImagePullPolicy.always.\n\n        4. 
If they did not specify an image pull policy and gave us\n           a tag other than \"latest\", use ImagePullPolicy.if_not_present.\n\n        This logic matches the behavior of Kubernetes.\n        See:https://kubernetes.io/docs/concepts/containers/images/#imagepullpolicy-defaulting\n        \"\"\"\n        if not self.image_pull_policy:\n            _, tag = self._get_image_and_tag()\n            if tag == \"latest\" or not tag:\n                return ImagePullPolicy.ALWAYS\n            return ImagePullPolicy.IF_NOT_PRESENT\n        return self.image_pull_policy\n\n    def _get_network_mode(self) -> Optional[str]:\n        # User's value takes precedence; this may collide with the incompatible options\n        # mentioned below.\n        if self.network_mode:\n            if sys.platform != \"linux\" and self.network_mode == \"host\":\n                warnings.warn(\n                    f\"{self.network_mode!r} network mode is not supported on platform \"\n                    f\"{sys.platform!r} and may not work as intended.\"\n                )\n            return self.network_mode\n\n        # Network mode is not compatible with networks or ports (we do not support ports\n        # yet though)\n        if self.networks:\n            return None\n\n        # Check for a local API connection\n        api_url = self.env.get(\"PREFECT_API_URL\", PREFECT_API_URL.value())\n\n        if api_url:\n            try:\n                _, netloc, _, _, _, _ = urllib.parse.urlparse(api_url)\n            except Exception as exc:\n                warnings.warn(\n                    f\"Failed to parse host from API URL {api_url!r} with exception: \"\n                    f\"{exc}\\nThe network mode will not be inferred.\"\n                )\n                return None\n\n            host = netloc.split(\":\")[0]\n\n            # If using a locally hosted API, use a host network on linux\n            if sys.platform == \"linux\" and (host == \"127.0.0.1\" or host == \"localhost\"):\n                return \"host\"\n\n        # Default to unset\n        return None\n\n    def _should_pull_image(self, docker_client: \"DockerClient\") -> bool:\n        \"\"\"\n        Decide whether we need to pull the Docker image.\n        \"\"\"\n        image_pull_policy = self._determine_image_pull_policy()\n\n        if image_pull_policy is ImagePullPolicy.ALWAYS:\n            return True\n        elif image_pull_policy is ImagePullPolicy.NEVER:\n            return False\n        elif image_pull_policy is ImagePullPolicy.IF_NOT_PRESENT:\n            try:\n                # NOTE: images.get() wants the tag included with the image\n                # name, while images.pull() wants them split.\n                docker_client.images.get(self.image)\n            except docker.errors.ImageNotFound:\n                self.logger.debug(f\"Could not find Docker image locally: {self.image}\")\n                return True\n        return False\n\n    def _pull_image(self, docker_client: \"DockerClient\"):\n        \"\"\"\n        Pull the image we're going to use to create the container.\n        \"\"\"\n        image, tag = self._get_image_and_tag()\n\n        return docker_client.images.pull(image, tag)\n\n    def _create_container(self, docker_client: \"DockerClient\", **kwargs) -> \"Container\":\n        \"\"\"\n        Create a docker container with retries on name conflicts.\n\n        If the container already exists with the given name, an incremented index is\n        added.\n        \"\"\"\n        # Create the 
container with retries on name conflicts (with an incremented idx)\n        index = 0\n        container = None\n        name = original_name = kwargs.pop(\"name\")\n\n        while not container:\n            from docker.errors import APIError\n\n            try:\n                display_name = repr(name) if name else \"with auto-generated name\"\n                self.logger.info(f\"Creating Docker container {display_name}...\")\n                container = docker_client.containers.create(name=name, **kwargs)\n            except APIError as exc:\n                if \"Conflict\" in str(exc) and \"container name\" in str(exc):\n                    self.logger.info(\n                        f\"Docker container name {display_name} already exists; \"\n                        \"retrying...\"\n                    )\n                    index += 1\n                    name = f\"{original_name}-{index}\"\n                else:\n                    raise\n\n        self.logger.info(\n            f\"Docker container {container.name!r} has status {container.status!r}\"\n        )\n        return container\n\n    def _watch_container_safe(self, container: \"Container\") -> \"Container\":\n        # Monitor the container capturing the latest snapshot while capturing\n        # not found errors\n        docker_client = self._get_client()\n\n        try:\n            for latest_container in self._watch_container(docker_client, container.id):\n                container = latest_container\n        except docker.errors.NotFound:\n            # The container was removed during watching\n            self.logger.warning(\n                f\"Docker container {container.name} was removed before we could wait \"\n                \"for its completion.\"\n            )\n        finally:\n            docker_client.close()\n\n        return container\n\n    def _watch_container(\n        self, docker_client: \"DockerClient\", container_id: str\n    ) -> Generator[None, None, \"Container\"]:\n        container: \"Container\" = docker_client.containers.get(container_id)\n\n        status = container.status\n        self.logger.info(\n            f\"Docker container {container.name!r} has status {container.status!r}\"\n        )\n        yield container\n\n        if self.stream_output:\n            try:\n                for log in container.logs(stream=True):\n                    log: bytes\n                    print(log.decode().rstrip())\n            except docker.errors.APIError as exc:\n                if \"marked for removal\" in str(exc):\n                    self.logger.warning(\n                        f\"Docker container {container.name} was marked for removal\"\n                        \" before logs could be retrieved. Output will not be\"\n                        \" streamed. 
\"\n                    )\n                else:\n                    self.logger.exception(\n                        \"An unexpected Docker API error occurred while streaming\"\n                        f\" output from container {container.name}.\"\n                    )\n\n            container.reload()\n            if container.status != status:\n                self.logger.info(\n                    f\"Docker container {container.name!r} has status\"\n                    f\" {container.status!r}\"\n                )\n            yield container\n\n        container.wait()\n        self.logger.info(\n            f\"Docker container {container.name!r} has status {container.status!r}\"\n        )\n        yield container\n\n    def _get_client(self):\n        try:\n            with warnings.catch_warnings():\n                # Silence warnings due to use of deprecated methods within dockerpy\n                # See https://github.com/docker/docker-py/pull/2931\n                warnings.filterwarnings(\n                    \"ignore\",\n                    message=\"distutils Version classes are deprecated.*\",\n                    category=DeprecationWarning,\n                )\n\n                docker_client = docker.from_env()\n\n        except docker.errors.DockerException as exc:\n            raise RuntimeError(\"Could not connect to Docker.\") from exc\n\n        return docker_client\n\n    def _get_container_name(self) -> Optional[str]:\n        \"\"\"\n        Generates a container name to match the configured name, ensuring it is Docker\n        compatible.\n        \"\"\"\n        # Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+` in the end\n        if not self.name:\n            return None\n\n        return (\n            slugify(\n                self.name,\n                lowercase=False,\n                # Docker does not limit length but URL limits apply eventually so\n                # limit the length for safety\n                max_length=250,\n                # Docker allows these characters for container names\n                regex_pattern=r\"[^a-zA-Z0-9_.-]+\",\n            ).lstrip(\n                # Docker does not allow leading underscore, dash, or period\n                \"_-.\"\n            )\n            # Docker does not allow 0 character names so cast to null if the name is\n            # empty after slufification\n            or None\n        )\n\n    def _get_extra_hosts(self, docker_client) -> Dict[str, str]:\n        \"\"\"\n        A host.docker.internal -> host-gateway mapping is necessary for communicating\n        with the API on Linux machines. Docker Desktop on macOS will automatically\n        already have this mapping.\n        \"\"\"\n        if sys.platform == \"linux\" and (\n            # Do not warn if the user has specified a host manually that does not use\n            # a local address\n            \"PREFECT_API_URL\" not in self.env\n            or re.search(\n                \".*(localhost)|(127.0.0.1)|(host.docker.internal).*\",\n                self.env[\"PREFECT_API_URL\"],\n            )\n        ):\n            user_version = packaging.version.parse(\n                format_outlier_version_name(docker_client.version()[\"Version\"])\n            )\n            required_version = packaging.version.parse(\"20.10.0\")\n\n            if user_version < required_version:\n                warnings.warn(\n                    \"`host.docker.internal` could not be automatically resolved to\"\n                    \" your local ip address. 
This feature is not supported on Docker\"\n                    f\" Engine v{user_version}, upgrade to v{required_version}+ if you\"\n                    \" encounter issues.\"\n                )\n                return {}\n            else:\n                # Compatibility for linux -- https://github.com/docker/cli/issues/2290\n                # Only supported by Docker v20.10.0+ which is our minimum recommend version\n                return {\"host.docker.internal\": \"host-gateway\"}\n\n    def _get_environment_variables(self, network_mode):\n        # If the API URL has been set by the base environment rather than the by the\n        # user, update the value to ensure connectivity when using a bridge network by\n        # updating local connections to use the docker internal host unless the\n        # network mode is \"host\" where localhost is available already.\n        env = {**self._base_environment(), **self.env}\n\n        if (\n            \"PREFECT_API_URL\" in env\n            and \"PREFECT_API_URL\" not in self.env\n            and network_mode != \"host\"\n        ):\n            env[\"PREFECT_API_URL\"] = (\n                env[\"PREFECT_API_URL\"]\n                .replace(\"localhost\", \"host.docker.internal\")\n                .replace(\"127.0.0.1\", \"host.docker.internal\")\n            )\n\n        # Drop null values allowing users to \"unset\" variables\n        return {key: value for key, value in env.items() if value is not None}\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainerResult","title":"DockerContainerResult","text":"

Bases: InfrastructureResult

Contains information about a completed Docker container

Source code in prefect/infrastructure/container.py
class DockerContainerResult(InfrastructureResult):\n    \"\"\"Contains information about a completed Docker container\"\"\"\n
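
A minimal usage sketch, assuming the identifier and status_code fields inherited from InfrastructureResult and a sync-compatible run() call:

from prefect.infrastructure import DockerContainer\n\ncontainer = DockerContainer(image=\"prefecthq/prefect:2-latest\", command=[\"echo\", \"hello\"])\nresult = container.run()  # returns a DockerContainerResult\n\n# identifier and status_code are assumed to come from InfrastructureResult\nprint(result.identifier, result.status_code)\n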
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Infrastructure","title":"Infrastructure","text":"

Bases: Block, ABC

Source code in prefect/infrastructure/base.py
@deprecated_class(\n    start_date=\"Mar 2024\",\n    help=\"Use the `BaseWorker` class to create custom infrastructure integrations instead.\"\n    \" Refer to the upgrade guide for more information:\"\n    \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\",\n)\nclass Infrastructure(Block, abc.ABC):\n    _block_schema_capabilities = [\"run-infrastructure\"]\n\n    type: str\n\n    env: Dict[str, Optional[str]] = pydantic.Field(\n        default_factory=dict,\n        title=\"Environment\",\n        description=\"Environment variables to set in the configured infrastructure.\",\n    )\n    labels: Dict[str, str] = pydantic.Field(\n        default_factory=dict,\n        description=\"Labels applied to the infrastructure for metadata purposes.\",\n    )\n    name: Optional[str] = pydantic.Field(\n        default=None,\n        description=\"Name applied to the infrastructure for identification.\",\n    )\n    command: Optional[List[str]] = pydantic.Field(\n        default=None,\n        description=\"The command to run in the infrastructure.\",\n    )\n\n    async def generate_work_pool_base_job_template(self):\n        if self._block_document_id is None:\n            raise BlockNotSavedError(\n                \"Cannot publish as work pool, block has not been saved. Please call\"\n                \" `.save()` on your block before publishing.\"\n            )\n\n        block_schema = self.__class__.schema()\n        return {\n            \"job_configuration\": {\"block\": \"{{ block }}\"},\n            \"variables\": {\n                \"type\": \"object\",\n                \"properties\": {\n                    \"block\": {\n                        \"title\": \"Block\",\n                        \"description\": (\n                            \"The infrastructure block to use for job creation.\"\n                        ),\n                        \"allOf\": [{\"$ref\": f\"#/definitions/{self.__class__.__name__}\"}],\n                        \"default\": {\n                            \"$ref\": {\"block_document_id\": str(self._block_document_id)}\n                        },\n                    }\n                },\n                \"required\": [\"block\"],\n                \"definitions\": {self.__class__.__name__: block_schema},\n            },\n        }\n\n    def get_corresponding_worker_type(self):\n        return \"block\"\n\n    @sync_compatible\n    async def publish_as_work_pool(self, work_pool_name: Optional[str] = None):\n        \"\"\"\n        Creates a work pool configured to use the given block as the job creator.\n\n        Used to migrate from a agents setup to a worker setup.\n\n        Args:\n            work_pool_name: The name to give to the created work pool. 
If not provided, the name of the current\n                block will be used.\n        \"\"\"\n\n        base_job_template = await self.generate_work_pool_base_job_template()\n        work_pool_name = work_pool_name or self._block_document_name\n\n        if work_pool_name is None:\n            raise ValueError(\n                \"`work_pool_name` must be provided if the block has not been saved.\"\n            )\n\n        console = Console()\n\n        try:\n            async with prefect.get_client() as client:\n                work_pool = await client.create_work_pool(\n                    work_pool=WorkPoolCreate(\n                        name=work_pool_name,\n                        type=self.get_corresponding_worker_type(),\n                        base_job_template=base_job_template,\n                    )\n                )\n        except ObjectAlreadyExists:\n            console.print(\n                (\n                    f\"Work pool with name {work_pool_name!r} already exists, please use\"\n                    \" a different name.\"\n                ),\n                style=\"red\",\n            )\n            return\n\n        console.print(\n            f\"Work pool {work_pool.name} created!\",\n            style=\"green\",\n        )\n        if PREFECT_UI_URL:\n            console.print(\n                \"You see your new work pool in the UI at\"\n                f\" {PREFECT_UI_URL.value()}/work-pools/work-pool/{work_pool.name}\"\n            )\n\n        deploy_script = (\n            \"my_flow.deploy(work_pool_name='{work_pool.name}', image='my_image:tag')\"\n        )\n        if not hasattr(self, \"image\"):\n            deploy_script = (\n                \"my_flow.from_source(source='https://github.com/org/repo.git',\"\n                f\" entrypoint='flow.py:my_flow').deploy(work_pool_name='{work_pool.name}')\"\n            )\n        console.print(\n            \"\\nYou can deploy a flow to this work pool by calling\"\n            f\" [blue].deploy[/]:\\n\\n\\t{deploy_script}\\n\"\n        )\n        console.print(\n            \"\\nTo start a worker to execute flow runs in this work pool run:\\n\"\n        )\n        console.print(f\"\\t[blue]prefect worker start --pool {work_pool.name}[/]\\n\")\n\n    @abc.abstractmethod\n    async def run(\n        self,\n        task_status: anyio.abc.TaskStatus = None,\n    ) -> InfrastructureResult:\n        \"\"\"\n        Run the infrastructure.\n\n        If provided a `task_status`, the status will be reported as started when the\n        infrastructure is successfully created. 
The status return value will be an\n        identifier for the infrastructure.\n\n        The call will then monitor the created infrastructure, returning a result at\n        the end containing a status code indicating if the infrastructure exited cleanly\n        or encountered an error.\n        \"\"\"\n        # Note: implementations should include `sync_compatible`\n\n    @abc.abstractmethod\n    def preview(self) -> str:\n        \"\"\"\n        View a preview of the infrastructure that would be run.\n        \"\"\"\n\n    @property\n    def logger(self):\n        return get_logger(f\"prefect.infrastructure.{self.type}\")\n\n    @property\n    def is_using_a_runner(self):\n        return self.command is not None and \"prefect flow-run execute\" in shlex.join(\n            self.command\n        )\n\n    @classmethod\n    def _base_environment(cls) -> Dict[str, str]:\n        \"\"\"\n        Environment variables that should be passed to all created infrastructure.\n\n        These values should be overridable with the `env` field.\n        \"\"\"\n        return get_current_settings().to_environment_variables(exclude_unset=True)\n\n    def prepare_for_flow_run(\n        self: Self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"Deployment\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ) -> Self:\n        \"\"\"\n        Return an infrastructure block that is prepared to execute a flow run.\n        \"\"\"\n        if deployment is not None:\n            deployment_labels = self._base_deployment_labels(deployment)\n        else:\n            deployment_labels = {}\n\n        if flow is not None:\n            flow_labels = self._base_flow_labels(flow)\n        else:\n            flow_labels = {}\n\n        return self.copy(\n            update={\n                \"env\": {**self._base_flow_run_environment(flow_run), **self.env},\n                \"labels\": {\n                    **self._base_flow_run_labels(flow_run),\n                    **deployment_labels,\n                    **flow_labels,\n                    **self.labels,\n                },\n                \"name\": self.name or flow_run.name,\n                \"command\": self.command or self._base_flow_run_command(),\n            }\n        )\n\n    @staticmethod\n    def _base_flow_run_command() -> List[str]:\n        \"\"\"\n        Generate a command for a flow run job.\n        \"\"\"\n        if experiment_enabled(\"enhanced_cancellation\"):\n            if (\n                PREFECT_EXPERIMENTAL_WARN\n                and PREFECT_EXPERIMENTAL_WARN_ENHANCED_CANCELLATION\n            ):\n                warnings.warn(\n                    EXPERIMENTAL_WARNING.format(\n                        feature=\"Enhanced flow run cancellation\",\n                        group=\"enhanced_cancellation\",\n                        help=\"\",\n                    ),\n                    ExperimentalFeature,\n                    stacklevel=3,\n                )\n            return [\"prefect\", \"flow-run\", \"execute\"]\n\n        return [\"python\", \"-m\", \"prefect.engine\"]\n\n    @staticmethod\n    def _base_flow_run_labels(flow_run: \"FlowRun\") -> Dict[str, str]:\n        \"\"\"\n        Generate a dictionary of labels for a flow run job.\n        \"\"\"\n        return {\n            \"prefect.io/flow-run-id\": str(flow_run.id),\n            \"prefect.io/flow-run-name\": flow_run.name,\n            \"prefect.io/version\": prefect.__version__,\n        }\n\n    @staticmethod\n    def 
_base_flow_run_environment(flow_run: \"FlowRun\") -> Dict[str, str]:\n        \"\"\"\n        Generate a dictionary of environment variables for a flow run job.\n        \"\"\"\n        environment = {}\n        environment[\"PREFECT__FLOW_RUN_ID\"] = str(flow_run.id)\n        return environment\n\n    @staticmethod\n    def _base_deployment_labels(deployment: \"Deployment\") -> Dict[str, str]:\n        labels = {\n            \"prefect.io/deployment-name\": deployment.name,\n        }\n        if deployment.updated is not None:\n            labels[\"prefect.io/deployment-updated\"] = deployment.updated.in_timezone(\n                \"utc\"\n            ).to_iso8601_string()\n        return labels\n\n    @staticmethod\n    def _base_flow_labels(flow: \"Flow\") -> Dict[str, str]:\n        return {\n            \"prefect.io/flow-name\": flow.name,\n        }\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Infrastructure.prepare_for_flow_run","title":"prepare_for_flow_run","text":"

Return an infrastructure block that is prepared to execute a flow run.

Source code in prefect/infrastructure/base.py
def prepare_for_flow_run(\n    self: Self,\n    flow_run: \"FlowRun\",\n    deployment: Optional[\"Deployment\"] = None,\n    flow: Optional[\"Flow\"] = None,\n) -> Self:\n    \"\"\"\n    Return an infrastructure block that is prepared to execute a flow run.\n    \"\"\"\n    if deployment is not None:\n        deployment_labels = self._base_deployment_labels(deployment)\n    else:\n        deployment_labels = {}\n\n    if flow is not None:\n        flow_labels = self._base_flow_labels(flow)\n    else:\n        flow_labels = {}\n\n    return self.copy(\n        update={\n            \"env\": {**self._base_flow_run_environment(flow_run), **self.env},\n            \"labels\": {\n                **self._base_flow_run_labels(flow_run),\n                **deployment_labels,\n                **flow_labels,\n                **self.labels,\n            },\n            \"name\": self.name or flow_run.name,\n            \"command\": self.command or self._base_flow_run_command(),\n        }\n    )\n
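
The merge order in the copy above means values set on the block win over the generated flow-run defaults. A small standalone sketch of that dictionary precedence (the variable names are illustrative only):

flow_run_env = {\"PREFECT__FLOW_RUN_ID\": \"abc123\"}  # generated flow-run defaults\nblock_env = {\"PREFECT__FLOW_RUN_ID\": \"override\", \"EXTRA\": \"1\"}  # values set on the block\n\n# Later keys win, mirroring {**self._base_flow_run_environment(flow_run), **self.env}\nmerged = {**flow_run_env, **block_env}\nprint(merged)  # {'PREFECT__FLOW_RUN_ID': 'override', 'EXTRA': '1'}\n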
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Infrastructure.preview","title":"preview abstractmethod","text":"

View a preview of the infrastructure that would be run.

Source code in prefect/infrastructure/base.py
@abc.abstractmethod\ndef preview(self) -> str:\n    \"\"\"\n    View a preview of the infrastructure that would be run.\n    \"\"\"\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Infrastructure.publish_as_work_pool","title":"publish_as_work_pool async","text":"

Creates a work pool configured to use the given block as the job creator.

Used to migrate from an agent setup to a worker setup.

Parameters:

Name Type Description Default work_pool_name Optional[str]

The name to give to the created work pool. If not provided, the name of the current block will be used.

None Source code in prefect/infrastructure/base.py
@sync_compatible\nasync def publish_as_work_pool(self, work_pool_name: Optional[str] = None):\n    \"\"\"\n    Creates a work pool configured to use the given block as the job creator.\n\n    Used to migrate from a agents setup to a worker setup.\n\n    Args:\n        work_pool_name: The name to give to the created work pool. If not provided, the name of the current\n            block will be used.\n    \"\"\"\n\n    base_job_template = await self.generate_work_pool_base_job_template()\n    work_pool_name = work_pool_name or self._block_document_name\n\n    if work_pool_name is None:\n        raise ValueError(\n            \"`work_pool_name` must be provided if the block has not been saved.\"\n        )\n\n    console = Console()\n\n    try:\n        async with prefect.get_client() as client:\n            work_pool = await client.create_work_pool(\n                work_pool=WorkPoolCreate(\n                    name=work_pool_name,\n                    type=self.get_corresponding_worker_type(),\n                    base_job_template=base_job_template,\n                )\n            )\n    except ObjectAlreadyExists:\n        console.print(\n            (\n                f\"Work pool with name {work_pool_name!r} already exists, please use\"\n                \" a different name.\"\n            ),\n            style=\"red\",\n        )\n        return\n\n    console.print(\n        f\"Work pool {work_pool.name} created!\",\n        style=\"green\",\n    )\n    if PREFECT_UI_URL:\n        console.print(\n            \"You see your new work pool in the UI at\"\n            f\" {PREFECT_UI_URL.value()}/work-pools/work-pool/{work_pool.name}\"\n        )\n\n    deploy_script = (\n        \"my_flow.deploy(work_pool_name='{work_pool.name}', image='my_image:tag')\"\n    )\n    if not hasattr(self, \"image\"):\n        deploy_script = (\n            \"my_flow.from_source(source='https://github.com/org/repo.git',\"\n            f\" entrypoint='flow.py:my_flow').deploy(work_pool_name='{work_pool.name}')\"\n        )\n    console.print(\n        \"\\nYou can deploy a flow to this work pool by calling\"\n        f\" [blue].deploy[/]:\\n\\n\\t{deploy_script}\\n\"\n    )\n    console.print(\n        \"\\nTo start a worker to execute flow runs in this work pool run:\\n\"\n    )\n    console.print(f\"\\t[blue]prefect worker start --pool {work_pool.name}[/]\\n\")\n
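
A usage sketch (the block and pool names are placeholders): load a saved infrastructure block and publish it as a work pool, then start a worker for that pool as suggested by the console output above:

from prefect.infrastructure import DockerContainer\n\nblock = DockerContainer.load(\"my-docker-block\")\nblock.publish_as_work_pool(work_pool_name=\"my-pool\")\n# then: prefect worker start --pool my-pool\n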
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Infrastructure.run","title":"run abstractmethod async","text":"

Run the infrastructure.

If provided a task_status, the status will be reported as started when the infrastructure is successfully created. The status return value will be an identifier for the infrastructure.

The call will then monitor the created infrastructure, returning a result at the end containing a status code indicating if the infrastructure exited cleanly or encountered an error.

Source code in prefect/infrastructure/base.py
@abc.abstractmethod\nasync def run(\n    self,\n    task_status: anyio.abc.TaskStatus = None,\n) -> InfrastructureResult:\n    \"\"\"\n    Run the infrastructure.\n\n    If provided a `task_status`, the status will be reported as started when the\n    infrastructure is successfully created. The status return value will be an\n    identifier for the infrastructure.\n\n    The call will then monitor the created infrastructure, returning a result at\n    the end containing a status code indicating if the infrastructure exited cleanly\n    or encountered an error.\n    \"\"\"\n
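
Because implementations are sync-compatible, run can also be called directly; a sketch using the Process block (assuming it is available from prefect.infrastructure):

from prefect.infrastructure import Process\n\nresult = Process(command=[\"echo\", \"hello\"]).run()\nprint(result.identifier, result.status_code)  # status_code should be 0 on success\n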
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesClusterConfig","title":"KubernetesClusterConfig","text":"

Bases: Block

Stores configuration for interaction with Kubernetes clusters.

See from_file for creation.

Attributes:

Name Type Description config Dict

The entire loaded YAML contents of a kubectl config file

context_name str

The name of the kubectl context to use

Example

Load a saved Kubernetes cluster config:

from prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n

Source code in prefect/blocks/kubernetes.py
@deprecated_class(\n    start_date=\"Mar 2024\",\n    help=\"Use the KubernetesClusterConfig block from prefect-kubernetes instead.\",\n)\nclass KubernetesClusterConfig(Block):\n    \"\"\"\n    Stores configuration for interaction with Kubernetes clusters.\n\n    See `from_file` for creation.\n\n    Attributes:\n        config: The entire loaded YAML contents of a kubectl config file\n        context_name: The name of the kubectl context to use\n\n    Example:\n        Load a saved Kubernetes cluster config:\n        ```python\n        from prefect.blocks.kubernetes import KubernetesClusterConfig\n\n        cluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Kubernetes Cluster Config\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/2d0b896006ad463b49c28aaac14f31e00e32cfab-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig\"\n\n    config: Dict = Field(\n        default=..., description=\"The entire contents of a kubectl config file.\"\n    )\n    context_name: str = Field(\n        default=..., description=\"The name of the kubectl context to use.\"\n    )\n\n    @validator(\"config\", pre=True)\n    def parse_yaml_config(cls, value):\n        return validate_yaml(value)\n\n    @classmethod\n    def from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n        \"\"\"\n        Create a cluster config from the a Kubernetes config file.\n\n        By default, the current context in the default Kubernetes config file will be\n        used.\n\n        An alternative file or context may be specified.\n\n        The entire config file will be loaded and stored.\n        \"\"\"\n        kube_config = kubernetes.config.kube_config\n\n        path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n        path = path.expanduser().resolve()\n\n        # Determine the context\n        existing_contexts, current_context = kube_config.list_kube_config_contexts(\n            config_file=str(path)\n        )\n        context_names = {ctx[\"name\"] for ctx in existing_contexts}\n        if context_name:\n            if context_name not in context_names:\n                raise ValueError(\n                    f\"Context {context_name!r} not found. \"\n                    f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n                )\n        else:\n            context_name = current_context[\"name\"]\n\n        # Load the entire config file\n        config_file_contents = path.read_text()\n        config_dict = yaml.safe_load(config_file_contents)\n\n        return cls(config=config_dict, context_name=context_name)\n\n    def get_api_client(self) -> \"ApiClient\":\n        \"\"\"\n        Returns a Kubernetes API client for this cluster config.\n        \"\"\"\n        return kubernetes.config.kube_config.new_client_from_config_dict(\n            config_dict=self.config, context=self.context_name\n        )\n\n    def configure_client(self) -> None:\n        \"\"\"\n        Activates this cluster configuration by loading the configuration into the\n        Kubernetes Python client. After calling this, Kubernetes API clients can use\n        this config's context.\n        \"\"\"\n        kubernetes.config.kube_config.load_kube_config_from_dict(\n            config_dict=self.config, context=self.context_name\n        )\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesClusterConfig.configure_client","title":"configure_client","text":"

Activates this cluster configuration by loading the configuration into the Kubernetes Python client. After calling this, Kubernetes API clients can use this config's context.

Source code in prefect/blocks/kubernetes.py
def configure_client(self) -> None:\n    \"\"\"\n    Activates this cluster configuration by loading the configuration into the\n    Kubernetes Python client. After calling this, Kubernetes API clients can use\n    this config's context.\n    \"\"\"\n    kubernetes.config.kube_config.load_kube_config_from_dict(\n        config_dict=self.config, context=self.context_name\n    )\n
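
For example, after configure_client the globally configured Kubernetes client uses this block's context (a sketch assuming the standard kubernetes client API):

import kubernetes\nfrom prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\ncluster_config_block.configure_client()\n\n# Clients created after configure_client() use the block's context\nv1 = kubernetes.client.CoreV1Api()\nfor pod in v1.list_namespaced_pod(namespace=\"default\").items:\n    print(pod.metadata.name)\n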
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesClusterConfig.from_file","title":"from_file classmethod","text":"

Create a cluster config from a Kubernetes config file.

By default, the current context in the default Kubernetes config file will be used.

An alternative file or context may be specified.

The entire config file will be loaded and stored.

Source code in prefect/blocks/kubernetes.py
@classmethod\ndef from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n    \"\"\"\n    Create a cluster config from the a Kubernetes config file.\n\n    By default, the current context in the default Kubernetes config file will be\n    used.\n\n    An alternative file or context may be specified.\n\n    The entire config file will be loaded and stored.\n    \"\"\"\n    kube_config = kubernetes.config.kube_config\n\n    path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n    path = path.expanduser().resolve()\n\n    # Determine the context\n    existing_contexts, current_context = kube_config.list_kube_config_contexts(\n        config_file=str(path)\n    )\n    context_names = {ctx[\"name\"] for ctx in existing_contexts}\n    if context_name:\n        if context_name not in context_names:\n            raise ValueError(\n                f\"Context {context_name!r} not found. \"\n                f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n            )\n    else:\n        context_name = current_context[\"name\"]\n\n    # Load the entire config file\n    config_file_contents = path.read_text()\n    config_dict = yaml.safe_load(config_file_contents)\n\n    return cls(config=config_dict, context_name=context_name)\n
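
A usage sketch (the context and block names are placeholders): load a config from the local kubeconfig and persist it as a block:

from prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.from_file(context_name=\"my-context\")\ncluster_config_block.save(\"my-cluster-config\")\n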
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesClusterConfig.get_api_client","title":"get_api_client","text":"

Returns a Kubernetes API client for this cluster config.

Source code in prefect/blocks/kubernetes.py
def get_api_client(self) -> \"ApiClient\":\n    \"\"\"\n    Returns a Kubernetes API client for this cluster config.\n    \"\"\"\n    return kubernetes.config.kube_config.new_client_from_config_dict(\n        config_dict=self.config, context=self.context_name\n    )\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob","title":"KubernetesJob","text":"

Bases: Infrastructure

Runs a command as a Kubernetes Job.

For a guided tutorial, see How to use Kubernetes with Prefect. For more information, including examples for customizing the resulting manifest, see KubernetesJob infrastructure concepts.

Attributes:

Name Type Description cluster_config Optional[KubernetesClusterConfig]

An optional Kubernetes cluster config to use for this job.

command Optional[List[str]]

A list of strings specifying the command to run in the container to start the flow run. In most cases you should not override this.

customizations JsonPatch

A list of JSON 6902 patches to apply to the base Job manifest (a usage sketch follows this attribute list).

env Dict[str, Optional[str]]

Environment variables to set for the container.

finished_job_ttl Optional[int]

The number of seconds to retain jobs after completion. If set, finished jobs will be cleaned up by Kubernetes after the given delay. If None (default), jobs will need to be manually removed.

image Optional[str]

An optional string specifying the image reference of a container image to use for the job, for example, docker.io/prefecthq/prefect:2-latest. The behavior is as described in https://kubernetes.io/docs/concepts/containers/images/#image-names. Defaults to the Prefect image.

image_pull_policy Optional[KubernetesImagePullPolicy]

The Kubernetes image pull policy to use for job containers.

job KubernetesManifest

The base manifest for the Kubernetes Job.

job_watch_timeout_seconds Optional[int]

Number of seconds to wait for the job to complete before marking it as crashed. Defaults to None, which means no timeout will be enforced.

labels Dict[str, str]

An optional dictionary of labels to add to the job.

name Optional[str]

An optional name for the job.

namespace Optional[str]

An optional string signifying the Kubernetes namespace to use.

pod_watch_timeout_seconds int

Number of seconds to watch for pod creation before timing out (default 60).

service_account_name Optional[str]

An optional string specifying which Kubernetes service account to use.

stream_output bool

If set, stream output from the job to local standard output.

Source code in prefect/infrastructure/kubernetes.py
@deprecated_class(\n    start_date=\"Mar 2024\",\n    help=\"Use the Kubernetes worker from prefect-kubernetes instead.\"\n    \" Refer to the upgrade guide for more information:\"\n    \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\",\n)\nclass KubernetesJob(Infrastructure):\n    \"\"\"\n    Runs a command as a Kubernetes Job.\n\n    For a guided tutorial, see [How to use Kubernetes with Prefect](https://medium.com/the-prefect-blog/how-to-use-kubernetes-with-prefect-419b2e8b8cb2/).\n    For more information, including examples for customizing the resulting manifest, see [`KubernetesJob` infrastructure concepts](https://docs.prefect.io/concepts/infrastructure/#kubernetesjob).\n\n    Attributes:\n        cluster_config: An optional Kubernetes cluster config to use for this job.\n        command: A list of strings specifying the command to run in the container to\n            start the flow run. In most cases you should not override this.\n        customizations: A list of JSON 6902 patches to apply to the base Job manifest.\n        env: Environment variables to set for the container.\n        finished_job_ttl: The number of seconds to retain jobs after completion. If set, finished jobs will\n            be cleaned up by Kubernetes after the given delay. If None (default), jobs will need to be\n            manually removed.\n        image: An optional string specifying the image reference of a container image\n            to use for the job, for example, docker.io/prefecthq/prefect:2-latest. The\n            behavior is as described in https://kubernetes.io/docs/concepts/containers/images/#image-names.\n            Defaults to the Prefect image.\n        image_pull_policy: The Kubernetes image pull policy to use for job containers.\n        job: The base manifest for the Kubernetes Job.\n        job_watch_timeout_seconds: Number of seconds to wait for the job to complete\n            before marking it as crashed. 
Defaults to `None`, which means no timeout will be enforced.\n        labels: An optional dictionary of labels to add to the job.\n        name: An optional name for the job.\n        namespace: An optional string signifying the Kubernetes namespace to use.\n        pod_watch_timeout_seconds: Number of seconds to watch for pod creation before timing out (default 60).\n        service_account_name: An optional string specifying which Kubernetes service account to use.\n        stream_output: If set, stream output from the job to local standard output.\n    \"\"\"\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/2d0b896006ad463b49c28aaac14f31e00e32cfab-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob\"\n\n    type: Literal[\"kubernetes-job\"] = Field(\n        default=\"kubernetes-job\", description=\"The type of infrastructure.\"\n    )\n    # shortcuts for the most common user-serviceable settings\n    image: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The image reference of a container image to use for the job, for example,\"\n            \" `docker.io/prefecthq/prefect:2-latest`.The behavior is as described in\"\n            \" the Kubernetes documentation and uses the latest version of Prefect by\"\n            \" default, unless an image is already present in a provided job manifest.\"\n        ),\n    )\n    namespace: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The Kubernetes namespace to use for this job. Defaults to 'default' \"\n            \"unless a namespace is already present in a provided job manifest.\"\n        ),\n    )\n    service_account_name: Optional[str] = Field(\n        default=None, description=\"The Kubernetes service account to use for this job.\"\n    )\n    image_pull_policy: Optional[KubernetesImagePullPolicy] = Field(\n        default=None,\n        description=\"The Kubernetes image pull policy to use for job containers.\",\n    )\n\n    # connection to a cluster\n    cluster_config: Optional[KubernetesClusterConfig] = Field(\n        default=None, description=\"The Kubernetes cluster config to use for this job.\"\n    )\n\n    # settings allowing full customization of the Job\n    job: KubernetesManifest = Field(\n        default_factory=lambda: KubernetesJob.base_job_manifest(),\n        description=\"The base manifest for the Kubernetes Job.\",\n        title=\"Base Job Manifest\",\n    )\n    customizations: JsonPatch = Field(\n        default_factory=lambda: JsonPatch([]),\n        description=\"A list of JSON 6902 patches to apply to the base Job manifest.\",\n    )\n\n    # controls the behavior of execution\n    job_watch_timeout_seconds: Optional[int] = Field(\n        default=None,\n        description=(\n            \"Number of seconds to wait for the job to complete before marking it as\"\n            \" crashed. 
Defaults to `None`, which means no timeout will be enforced.\"\n        ),\n    )\n    pod_watch_timeout_seconds: int = Field(\n        default=60,\n        description=\"Number of seconds to watch for pod creation before timing out.\",\n    )\n    stream_output: bool = Field(\n        default=True,\n        description=(\n            \"If set, output will be streamed from the job to local standard output.\"\n        ),\n    )\n    finished_job_ttl: Optional[int] = Field(\n        default=None,\n        description=(\n            \"The number of seconds to retain jobs after completion. If set, finished\"\n            \" jobs will be cleaned up by Kubernetes after the given delay. If None\"\n            \" (default), jobs will need to be manually removed.\"\n        ),\n    )\n\n    # internal-use only right now\n    _api_dns_name: Optional[str] = None  # Replaces 'localhost' in API URL\n\n    _block_type_name = \"Kubernetes Job\"\n\n    @validator(\"job\")\n    def ensure_job_includes_all_required_components(cls, value: KubernetesManifest):\n        return validate_k8s_job_required_components(cls, value)\n\n    @validator(\"job\")\n    def ensure_job_has_compatible_values(cls, value: KubernetesManifest):\n        return validate_k8s_job_compatible_values(cls, value)\n\n    @validator(\"customizations\", pre=True)\n    def cast_customizations_to_a_json_patch(\n        cls, value: Union[List[Dict], JsonPatch, str]\n    ) -> JsonPatch:\n        return cast_k8s_job_customizations(cls, value)\n\n    @root_validator\n    def default_namespace(cls, values):\n        return set_default_namespace(values)\n\n    @root_validator\n    def default_image(cls, values):\n        return set_default_image(values)\n\n    # Support serialization of the 'JsonPatch' type\n    class Config:\n        arbitrary_types_allowed = True\n        json_encoders = {JsonPatch: lambda p: p.patch}\n\n    def dict(self, *args, **kwargs) -> Dict:\n        d = super().dict(*args, **kwargs)\n        d[\"customizations\"] = self.customizations.patch\n        return d\n\n    @classmethod\n    def base_job_manifest(cls) -> KubernetesManifest:\n        \"\"\"Produces the bare minimum allowed Job manifest\"\"\"\n        return {\n            \"apiVersion\": \"batch/v1\",\n            \"kind\": \"Job\",\n            \"metadata\": {\"labels\": {}},\n            \"spec\": {\n                \"template\": {\n                    \"spec\": {\n                        \"parallelism\": 1,\n                        \"completions\": 1,\n                        \"restartPolicy\": \"Never\",\n                        \"containers\": [\n                            {\n                                \"name\": \"prefect-job\",\n                                \"env\": [],\n                            }\n                        ],\n                    }\n                }\n            },\n        }\n\n    # Note that we're using the yaml package to load both YAML and JSON files below.\n    # This works because YAML is a strict superset of JSON:\n    #\n    #   > The YAML 1.23 specification was published in 2009. Its primary focus was\n    #   > making YAML a strict superset of JSON. 
It also removed many of the problematic\n    #   > implicit typing recommendations.\n    #\n    #   https://yaml.org/spec/1.2.2/#12-yaml-history\n\n    @classmethod\n    def job_from_file(cls, filename: str) -> KubernetesManifest:\n        \"\"\"Load a Kubernetes Job manifest from a YAML or JSON file.\"\"\"\n        with open(filename, \"r\", encoding=\"utf-8\") as f:\n            return yaml.load(f, yaml.SafeLoader)\n\n    @classmethod\n    def customize_from_file(cls, filename: str) -> JsonPatch:\n        \"\"\"Load an RFC 6902 JSON patch from a YAML or JSON file.\"\"\"\n        with open(filename, \"r\", encoding=\"utf-8\") as f:\n            return JsonPatch(yaml.load(f, yaml.SafeLoader))\n\n    @sync_compatible\n    async def run(\n        self,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> KubernetesJobResult:\n        if not self.command:\n            raise ValueError(\"Kubernetes job cannot be run with empty command.\")\n\n        self._configure_kubernetes_library_client()\n        manifest = self.build_job()\n        job = await run_sync_in_worker_thread(self._create_job, manifest)\n\n        pid = await run_sync_in_worker_thread(self._get_infrastructure_pid, job)\n        # Indicate that the job has started\n        if task_status is not None:\n            task_status.started(pid)\n\n        # Monitor the job until completion\n        status_code = await run_sync_in_worker_thread(\n            self._watch_job, job.metadata.name\n        )\n        return KubernetesJobResult(identifier=pid, status_code=status_code)\n\n    async def kill(self, infrastructure_pid: str, grace_seconds: int = 30):\n        self._configure_kubernetes_library_client()\n        job_cluster_uid, job_namespace, job_name = self._parse_infrastructure_pid(\n            infrastructure_pid\n        )\n\n        if not job_namespace == self.namespace:\n            raise InfrastructureNotAvailable(\n                f\"Unable to kill job {job_name!r}: The job is running in namespace \"\n                f\"{job_namespace!r} but this block is configured to use \"\n                f\"{self.namespace!r}.\"\n            )\n\n        current_cluster_uid = self._get_cluster_uid()\n        if job_cluster_uid != current_cluster_uid:\n            raise InfrastructureNotAvailable(\n                f\"Unable to kill job {job_name!r}: The job is running on another \"\n                \"cluster.\"\n            )\n\n        with self.get_batch_client() as batch_client:\n            try:\n                batch_client.delete_namespaced_job(\n                    name=job_name,\n                    namespace=job_namespace,\n                    grace_period_seconds=grace_seconds,\n                    # Foreground propagation deletes dependent objects before deleting owner objects.\n                    # This ensures that the pods are cleaned up before the job is marked as deleted.\n                    # See: https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion\n                    propagation_policy=\"Foreground\",\n                )\n            except kubernetes.client.exceptions.ApiException as exc:\n                if exc.status == 404:\n                    raise InfrastructureNotFound(\n                        f\"Unable to kill job {job_name!r}: The job was not found.\"\n                    ) from exc\n                else:\n                    raise\n\n    def preview(self):\n        return yaml.dump(self.build_job())\n\n    def get_corresponding_worker_type(self):\n  
      return \"kubernetes\"\n\n    async def generate_work_pool_base_job_template(self):\n        from prefect.workers.utilities import (\n            get_default_base_job_template_for_infrastructure_type,\n        )\n\n        base_job_template = await get_default_base_job_template_for_infrastructure_type(\n            self.get_corresponding_worker_type()\n        )\n        assert (\n            base_job_template is not None\n        ), \"Failed to retrieve default base job template.\"\n        for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n            if key == \"command\":\n                base_job_template[\"variables\"][\"properties\"][\"command\"][\n                    \"default\"\n                ] = shlex.join(value)\n            elif key in [\n                \"type\",\n                \"block_type_slug\",\n                \"_block_document_id\",\n                \"_block_document_name\",\n                \"_is_anonymous\",\n                \"job\",\n                \"customizations\",\n            ]:\n                continue\n            elif key == \"image_pull_policy\":\n                base_job_template[\"variables\"][\"properties\"][\"image_pull_policy\"][\n                    \"default\"\n                ] = value.value\n            elif key == \"cluster_config\":\n                base_job_template[\"variables\"][\"properties\"][\"cluster_config\"][\n                    \"default\"\n                ] = {\n                    \"$ref\": {\n                        \"block_document_id\": str(self.cluster_config._block_document_id)\n                    }\n                }\n            elif key in base_job_template[\"variables\"][\"properties\"]:\n                base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n            else:\n                self.logger.warning(\n                    f\"Variable {key!r} is not supported by Kubernetes work pools.\"\n                    \" Skipping.\"\n                )\n\n        custom_job_manifest = self.dict(exclude_unset=True, exclude_defaults=True).get(\n            \"job\"\n        )\n        if custom_job_manifest:\n            job_manifest = self.build_job()\n        else:\n            job_manifest = copy.deepcopy(\n                base_job_template[\"job_configuration\"][\"job_manifest\"]\n            )\n            job_manifest = self.customizations.apply(job_manifest)\n        base_job_template[\"job_configuration\"][\"job_manifest\"] = job_manifest\n\n        return base_job_template\n\n    def build_job(self) -> KubernetesManifest:\n        \"\"\"Builds the Kubernetes Job Manifest\"\"\"\n        job_manifest = copy.copy(self.job)\n        job_manifest = self._shortcut_customizations().apply(job_manifest)\n        job_manifest = self.customizations.apply(job_manifest)\n        return job_manifest\n\n    @contextmanager\n    def get_batch_client(self) -> Generator[\"BatchV1Api\", None, None]:\n        with kubernetes.client.ApiClient() as client:\n            try:\n                yield kubernetes.client.BatchV1Api(api_client=client)\n            finally:\n                client.rest_client.pool_manager.clear()\n\n    @contextmanager\n    def get_client(self) -> Generator[\"CoreV1Api\", None, None]:\n        with kubernetes.client.ApiClient() as client:\n            try:\n                yield kubernetes.client.CoreV1Api(api_client=client)\n            finally:\n                client.rest_client.pool_manager.clear()\n\n    def _get_infrastructure_pid(self, job: 
\"V1Job\") -> str:\n        \"\"\"\n        Generates a Kubernetes infrastructure PID.\n\n        The PID is in the format: \"<cluster uid>:<namespace>:<job name>\".\n        \"\"\"\n        cluster_uid = self._get_cluster_uid()\n        pid = f\"{cluster_uid}:{self.namespace}:{job.metadata.name}\"\n        return pid\n\n    def _parse_infrastructure_pid(\n        self, infrastructure_pid: str\n    ) -> Tuple[str, str, str]:\n        \"\"\"\n        Parse a Kubernetes infrastructure PID into its component parts.\n\n        Returns a cluster UID, namespace, and job name.\n        \"\"\"\n        cluster_uid, namespace, job_name = infrastructure_pid.split(\":\", 2)\n        return cluster_uid, namespace, job_name\n\n    def _get_cluster_uid(self) -> str:\n        \"\"\"\n        Gets a unique id for the current cluster being used.\n\n        There is no real unique identifier for a cluster. However, the `kube-system`\n        namespace is immutable and has a persistence UID that we use instead.\n\n        PREFECT_KUBERNETES_CLUSTER_UID can be set in cases where the `kube-system`\n        namespace cannot be read e.g. when a cluster role cannot be created. If set,\n        this variable will be used and we will not attempt to read the `kube-system`\n        namespace.\n\n        See https://github.com/kubernetes/kubernetes/issues/44954\n        \"\"\"\n        # Default to an environment variable\n        env_cluster_uid = os.environ.get(\"PREFECT_KUBERNETES_CLUSTER_UID\")\n        if env_cluster_uid:\n            return env_cluster_uid\n\n        # Read the UID from the cluster namespace\n        with self.get_client() as client:\n            namespace = client.read_namespace(\"kube-system\")\n        cluster_uid = namespace.metadata.uid\n\n        return cluster_uid\n\n    def _configure_kubernetes_library_client(self) -> None:\n        \"\"\"\n        Set the correct kubernetes client configuration.\n\n        WARNING: This action is not threadsafe and may override the configuration\n                  specified by another `KubernetesJob` instance.\n        \"\"\"\n        # TODO: Investigate returning a configured client so calls on other threads\n        #       will not invalidate the config needed here\n\n        # if a k8s cluster block is provided to the flow runner, use that\n        if self.cluster_config:\n            self.cluster_config.configure_client()\n        else:\n            # If no block specified, try to load Kubernetes configuration within a cluster. 
If that doesn't\n            # work, try to load the configuration from the local environment, allowing\n            # any further ConfigExceptions to bubble up.\n            try:\n                kubernetes.config.load_incluster_config()\n            except kubernetes.config.ConfigException:\n                kubernetes.config.load_kube_config()\n\n    def _shortcut_customizations(self) -> JsonPatch:\n        \"\"\"Produces the JSON 6902 patch for the most commonly used customizations, like\n        image and namespace, which we offer as top-level parameters (with sensible\n        default values)\"\"\"\n        shortcuts = []\n\n        if self.namespace:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/metadata/namespace\",\n                    \"value\": self.namespace,\n                }\n            )\n\n        if self.image:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/spec/template/spec/containers/0/image\",\n                    \"value\": self.image,\n                }\n            )\n\n        shortcuts += [\n            {\n                \"op\": \"add\",\n                \"path\": (\n                    f\"/metadata/labels/{self._slugify_label_key(key).replace('/', '~1', 1)}\"\n                ),\n                \"value\": self._slugify_label_value(value),\n            }\n            for key, value in self.labels.items()\n        ]\n\n        shortcuts += [\n            {\n                \"op\": \"add\",\n                \"path\": \"/spec/template/spec/containers/0/env/-\",\n                \"value\": {\"name\": key, \"value\": value},\n            }\n            for key, value in self._get_environment_variables().items()\n        ]\n\n        if self.image_pull_policy:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/spec/template/spec/containers/0/imagePullPolicy\",\n                    \"value\": self.image_pull_policy.value,\n                }\n            )\n\n        if self.service_account_name:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/spec/template/spec/serviceAccountName\",\n                    \"value\": self.service_account_name,\n                }\n            )\n\n        if self.finished_job_ttl is not None:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/spec/ttlSecondsAfterFinished\",\n                    \"value\": self.finished_job_ttl,\n                }\n            )\n\n        if self.command:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/spec/template/spec/containers/0/args\",\n                    \"value\": self.command,\n                }\n            )\n\n        if self.name:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/metadata/generateName\",\n                    \"value\": self._slugify_name(self.name) + \"-\",\n                }\n            )\n        else:\n            # Generate name is required\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/metadata/generateName\",\n                    \"value\": (\n                        \"prefect-job-\"\n       
                 # We generate a name using a hash of the primary job settings\n                        + stable_hash(\n                            *self.command,\n                            *self.env.keys(),\n                            *[v for v in self.env.values() if v is not None],\n                        )\n                        + \"-\"\n                    ),\n                }\n            )\n\n        return JsonPatch(shortcuts)\n\n    def _get_job(self, job_id: str) -> Optional[\"V1Job\"]:\n        with self.get_batch_client() as batch_client:\n            try:\n                job = batch_client.read_namespaced_job(job_id, self.namespace)\n            except kubernetes.client.exceptions.ApiException:\n                self.logger.error(f\"Job {job_id!r} was removed.\", exc_info=True)\n                return None\n            return job\n\n    def _get_job_pod(self, job_name: str) -> \"V1Pod\":\n        \"\"\"Get the first running pod for a job.\"\"\"\n\n        # Wait until we find a running pod for the job\n        # if `pod_watch_timeout_seconds` is None, no timeout will be enforced\n        watch = kubernetes.watch.Watch()\n        self.logger.debug(f\"Job {job_name!r}: Starting watch for pod start...\")\n        last_phase = None\n        with self.get_client() as client:\n            for event in watch.stream(\n                func=client.list_namespaced_pod,\n                namespace=self.namespace,\n                label_selector=f\"job-name={job_name}\",\n                timeout_seconds=self.pod_watch_timeout_seconds,\n            ):\n                phase = event[\"object\"].status.phase\n                if phase != last_phase:\n                    self.logger.info(f\"Job {job_name!r}: Pod has status {phase!r}.\")\n\n                if phase != \"Pending\":\n                    watch.stop()\n                    return event[\"object\"]\n\n                last_phase = phase\n\n        self.logger.error(f\"Job {job_name!r}: Pod never started.\")\n\n    def _watch_job(self, job_name: str) -> int:\n        \"\"\"\n        Watch a job.\n\n        Return the final status code of the first container.\n        \"\"\"\n        self.logger.debug(f\"Job {job_name!r}: Monitoring job...\")\n\n        job = self._get_job(job_name)\n        if not job:\n            return -1\n\n        pod = self._get_job_pod(job_name)\n        if not pod:\n            return -1\n\n        # Calculate the deadline before streaming output\n        deadline = (\n            (time.monotonic() + self.job_watch_timeout_seconds)\n            if self.job_watch_timeout_seconds is not None\n            else None\n        )\n\n        if self.stream_output:\n            with self.get_client() as client:\n                logs = client.read_namespaced_pod_log(\n                    pod.metadata.name,\n                    self.namespace,\n                    follow=True,\n                    _preload_content=False,\n                    container=\"prefect-job\",\n                )\n                try:\n                    for log in logs.stream():\n                        print(log.decode().rstrip())\n\n                        # Check if we have passed the deadline and should stop streaming\n                        # logs\n                        remaining_time = (\n                            deadline - time.monotonic() if deadline else None\n                        )\n                        if deadline and remaining_time <= 0:\n                            break\n\n                except Exception:\n         
           self.logger.warning(\n                        (\n                            \"Error occurred while streaming logs - \"\n                            \"Job will continue to run but logs will \"\n                            \"no longer be streamed to stdout.\"\n                        ),\n                        exc_info=True,\n                    )\n\n        with self.get_batch_client() as batch_client:\n            # Check if the job is completed before beginning a watch\n            job = batch_client.read_namespaced_job(\n                name=job_name, namespace=self.namespace\n            )\n            completed = job.status.completion_time is not None\n\n            while not completed:\n                remaining_time = (\n                    math.ceil(deadline - time.monotonic()) if deadline else None\n                )\n                if deadline and remaining_time <= 0:\n                    self.logger.error(\n                        f\"Job {job_name!r}: Job did not complete within \"\n                        f\"timeout of {self.job_watch_timeout_seconds}s.\"\n                    )\n                    return -1\n\n                watch = kubernetes.watch.Watch()\n                # The kubernetes library will disable retries if the timeout kwarg is\n                # present regardless of the value so we do not pass it unless given\n                # https://github.com/kubernetes-client/python/blob/84f5fea2a3e4b161917aa597bf5e5a1d95e24f5a/kubernetes/base/watch/watch.py#LL160\n                timeout_seconds = (\n                    {\"timeout_seconds\": remaining_time} if deadline else {}\n                )\n\n                for event in watch.stream(\n                    func=batch_client.list_namespaced_job,\n                    field_selector=f\"metadata.name={job_name}\",\n                    namespace=self.namespace,\n                    **timeout_seconds,\n                ):\n                    if event[\"type\"] == \"DELETED\":\n                        self.logger.error(f\"Job {job_name!r}: Job has been deleted.\")\n                        completed = True\n                    elif event[\"object\"].status.completion_time:\n                        if not event[\"object\"].status.succeeded:\n                            # Job failed, exit while loop and return pod exit code\n                            self.logger.error(f\"Job {job_name!r}: Job failed.\")\n                        completed = True\n                    # Check if the job has reached its backoff limit\n                    # and stop watching if it has\n                    elif (\n                        event[\"object\"].spec.backoff_limit is not None\n                        and event[\"object\"].status.failed is not None\n                        and event[\"object\"].status.failed\n                        > event[\"object\"].spec.backoff_limit\n                    ):\n                        self.logger.error(\n                            f\"Job {job_name!r}: Job reached backoff limit.\"\n                        )\n                        completed = True\n                    # If the job has no backoff limit, check if it has failed\n                    # and stop watching if it has\n                    elif (\n                        not event[\"object\"].spec.backoff_limit\n                        and event[\"object\"].status.failed\n                    ):\n                        completed = True\n\n                    if completed:\n                        watch.stop()\n                       
 break\n\n        with self.get_client() as core_client:\n            # Get all pods for the job\n            pods = core_client.list_namespaced_pod(\n                namespace=self.namespace, label_selector=f\"job-name={job_name}\"\n            )\n            # Get the status for only the most recently used pod\n            pods.items.sort(\n                key=lambda pod: pod.metadata.creation_timestamp, reverse=True\n            )\n            most_recent_pod = pods.items[0] if pods.items else None\n            first_container_status = (\n                most_recent_pod.status.container_statuses[0]\n                if most_recent_pod\n                else None\n            )\n            if not first_container_status:\n                self.logger.error(f\"Job {job_name!r}: No pods found for job.\")\n                return -1\n\n            # In some cases, such as spot instance evictions, the pod will be forcibly\n            # terminated and not report a status correctly.\n            elif (\n                first_container_status.state is None\n                or first_container_status.state.terminated is None\n                or first_container_status.state.terminated.exit_code is None\n            ):\n                self.logger.error(\n                    f\"Could not determine exit code for {job_name!r}.\"\n                    \"Exit code will be reported as -1.\"\n                    \"First container status info did not report an exit code.\"\n                    f\"First container info: {first_container_status}.\"\n                )\n                return -1\n\n        return first_container_status.state.terminated.exit_code\n\n    def _create_job(self, job_manifest: KubernetesManifest) -> \"V1Job\":\n        \"\"\"\n        Given a Kubernetes Job Manifest, create the Job on the configured Kubernetes\n        cluster and return its name.\n        \"\"\"\n        with self.get_batch_client() as batch_client:\n            job = batch_client.create_namespaced_job(self.namespace, job_manifest)\n        return job\n\n    def _slugify_name(self, name: str) -> str:\n        \"\"\"\n        Slugify text for use as a name.\n\n        Keeps only alphanumeric characters and dashes, and caps the length\n        of the slug at 45 chars.\n\n        The 45 character length allows room for the k8s utility\n        \"generateName\" to generate a unique name from the slug while\n        keeping the total length of a name below 63 characters, which is\n        the limit for e.g. 
label names that follow RFC 1123 (hostnames) and\n        RFC 1035 (domain names).\n\n        Args:\n            name: The name of the job\n\n        Returns:\n            the slugified job name\n        \"\"\"\n        slug = slugify(\n            name,\n            max_length=45,  # Leave enough space for generateName\n            regex_pattern=r\"[^a-zA-Z0-9-]+\",\n        )\n\n        # TODO: Handle the case that the name is an empty string after being\n        # slugified.\n\n        return slug\n\n    def _slugify_label_key(self, key: str) -> str:\n        \"\"\"\n        Slugify text for use as a label key.\n\n        Keys are composed of an optional prefix and name, separated by a slash (/).\n\n        Keeps only alphanumeric characters, dashes, underscores, and periods.\n        Limits the length of the label prefix to 253 characters.\n        Limits the length of the label name to 63 characters.\n\n        See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set\n\n        Args:\n            key: The label key\n\n        Returns:\n            The slugified label key\n        \"\"\"\n        if \"/\" in key:\n            prefix, name = key.split(\"/\", maxsplit=1)\n        else:\n            prefix = None\n            name = key\n\n        name_slug = (\n            slugify(name, max_length=63, regex_pattern=r\"[^a-zA-Z0-9-_.]+\").strip(\n                \"_-.\"  # Must start or end with alphanumeric characters\n            )\n            or name\n        )\n        # Fallback to the original if we end up with an empty slug, this will allow\n        # Kubernetes to throw the validation error\n\n        if prefix:\n            prefix_slug = (\n                slugify(\n                    prefix,\n                    max_length=253,\n                    regex_pattern=r\"[^a-zA-Z0-9-\\.]+\",\n                ).strip(\"_-.\")  # Must start or end with alphanumeric characters\n                or prefix\n            )\n\n            return f\"{prefix_slug}/{name_slug}\"\n\n        return name_slug\n\n    def _slugify_label_value(self, value: str) -> str:\n        \"\"\"\n        Slugify text for use as a label value.\n\n        Keeps only alphanumeric characters, dashes, underscores, and periods.\n        Limits the total length of label text to below 63 characters.\n\n        See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set\n\n        Args:\n            value: The text for the label\n\n        Returns:\n            The slugified value\n        \"\"\"\n        slug = (\n            slugify(value, max_length=63, regex_pattern=r\"[^a-zA-Z0-9-_\\.]+\").strip(\n                \"_-.\"  # Must start or end with alphanumeric characters\n            )\n            or value\n        )\n        # Fallback to the original if we end up with an empty slug, this will allow\n        # Kubernetes to throw the validation error\n\n        return slug\n\n    def _get_environment_variables(self):\n        # If the API URL has been set by the base environment rather than the by the\n        # user, update the value to ensure connectivity when using a bridge network by\n        # updating local connections to use the internal host\n        env = {**self._base_environment(), **self.env}\n\n        if (\n            \"PREFECT_API_URL\" in env\n            and \"PREFECT_API_URL\" not in self.env\n            and self._api_dns_name\n        ):\n            env[\"PREFECT_API_URL\"] = (\n                
env[\"PREFECT_API_URL\"]\n                .replace(\"localhost\", self._api_dns_name)\n                .replace(\"127.0.0.1\", self._api_dns_name)\n            )\n\n        # Drop null values allowing users to \"unset\" variables\n        return {key: value for key, value in env.items() if value is not None}\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob.base_job_manifest","title":"base_job_manifest classmethod","text":"

Produces the bare minimum allowed Job manifest

Source code in prefect/infrastructure/kubernetes.py
@classmethod\ndef base_job_manifest(cls) -> KubernetesManifest:\n    \"\"\"Produces the bare minimum allowed Job manifest\"\"\"\n    return {\n        \"apiVersion\": \"batch/v1\",\n        \"kind\": \"Job\",\n        \"metadata\": {\"labels\": {}},\n        \"spec\": {\n            \"template\": {\n                \"spec\": {\n                    \"parallelism\": 1,\n                    \"completions\": 1,\n                    \"restartPolicy\": \"Never\",\n                    \"containers\": [\n                        {\n                            \"name\": \"prefect-job\",\n                            \"env\": [],\n                        }\n                    ],\n                }\n            }\n        },\n    }\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob.build_job","title":"build_job","text":"

Builds the Kubernetes Job Manifest

Source code in prefect/infrastructure/kubernetes.py
def build_job(self) -> KubernetesManifest:\n    \"\"\"Builds the Kubernetes Job Manifest\"\"\"\n    job_manifest = copy.copy(self.job)\n    job_manifest = self._shortcut_customizations().apply(job_manifest)\n    job_manifest = self.customizations.apply(job_manifest)\n    return job_manifest\n
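A hedged usage sketch of build_job, assuming a KubernetesJob configured with only a namespace (the value here is illustrative): the manifest starts from base_job_manifest() and then has the shortcut and user customizations applied.

from prefect.infrastructure import KubernetesJob

job = KubernetesJob(namespace="prefect")   # namespace value is illustrative
manifest = job.build_job()                 # dict in the Job manifest shape shown above
print(manifest["kind"])                    # -> "Job"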
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob.customize_from_file","title":"customize_from_file classmethod","text":"

Load an RFC 6902 JSON patch from a YAML or JSON file.

Source code in prefect/infrastructure/kubernetes.py
@classmethod\ndef customize_from_file(cls, filename: str) -> JsonPatch:\n    \"\"\"Load an RFC 6902 JSON patch from a YAML or JSON file.\"\"\"\n    with open(filename, \"r\", encoding=\"utf-8\") as f:\n        return JsonPatch(yaml.load(f, yaml.SafeLoader))\n
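A hedged sketch of loading customizations from a file; patch.yaml is a hypothetical file containing an RFC 6902 operation list, and the resulting JsonPatch can be passed to the customizations field used by build_job above.

from prefect.infrastructure import KubernetesJob

# patch.yaml is hypothetical, e.g.:
# - op: add
#   path: /spec/template/spec/containers/0/resources
#   value: {limits: {memory: "512Mi"}}
customizations = KubernetesJob.customize_from_file("patch.yaml")
job = KubernetesJob(customizations=customizations)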
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob.job_from_file","title":"job_from_file classmethod","text":"

Load a Kubernetes Job manifest from a YAML or JSON file.

Source code in prefect/infrastructure/kubernetes.py
@classmethod\ndef job_from_file(cls, filename: str) -> KubernetesManifest:\n    \"\"\"Load a Kubernetes Job manifest from a YAML or JSON file.\"\"\"\n    with open(filename, \"r\", encoding=\"utf-8\") as f:\n        return yaml.load(f, yaml.SafeLoader)\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJobResult","title":"KubernetesJobResult","text":"

Bases: InfrastructureResult

Contains information about the final state of a completed Kubernetes Job

Source code in prefect/infrastructure/kubernetes.py
class KubernetesJobResult(InfrastructureResult):\n    \"\"\"Contains information about the final state of a completed Kubernetes Job\"\"\"\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Process","title":"Process","text":"

Bases: Infrastructure

Run a command in a new process.

Current environment variables and Prefect settings will be included in the created process. Configured environment variables will override any current environment variables.

Attributes:

Name Type Description command

A list of strings specifying the command to run in the container to start the flow run. In most cases you should not override this.

env

Environment variables to set for the new process.

labels

Labels for the process. Labels are for metadata purposes only and cannot be attached to the process itself.

name

A name for the process. For display purposes only.

stream_output bool

Whether to stream output to local stdout.

working_dir Union[str, Path, None]

Working directory where the process should be opened. If not set, a tmp directory will be used.

Source code in prefect/infrastructure/process.py
@deprecated_class(\n    start_date=\"Mar 2024\",\n    help=\"Use the process worker instead.\"\n    \" Refer to the upgrade guide for more information:\"\n    \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\",\n)\nclass Process(Infrastructure):\n    \"\"\"\n    Run a command in a new process.\n\n    Current environment variables and Prefect settings will be included in the created\n    process. Configured environment variables will override any current environment\n    variables.\n\n    Attributes:\n        command: A list of strings specifying the command to run in the container to\n            start the flow run. In most cases you should not override this.\n        env: Environment variables to set for the new process.\n        labels: Labels for the process. Labels are for metadata purposes only and\n            cannot be attached to the process itself.\n        name: A name for the process. For display purposes only.\n        stream_output: Whether to stream output to local stdout.\n        working_dir: Working directory where the process should be opened. If not set,\n            a tmp directory will be used.\n    \"\"\"\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/356e6766a91baf20e1d08bbe16e8b5aaef4d8643-48x48.png\"\n    _documentation_url = \"https://docs.prefect.io/concepts/infrastructure/#process\"\n\n    type: Literal[\"process\"] = Field(\n        default=\"process\", description=\"The type of infrastructure.\"\n    )\n    stream_output: bool = Field(\n        default=True,\n        description=(\n            \"If set, output will be streamed from the process to local standard output.\"\n        ),\n    )\n    working_dir: Union[str, Path, None] = Field(\n        default=None,\n        description=(\n            \"If set, the process will open within the specified path as the working\"\n            \" directory. 
Otherwise, a temporary directory will be created.\"\n        ),\n    )  # Underlying accepted types are str, bytes, PathLike[str], None\n\n    @sync_compatible\n    async def run(\n        self,\n        task_status: anyio.abc.TaskStatus = None,\n    ) -> \"ProcessResult\":\n        if not self.command:\n            raise ValueError(\"Process cannot be run with empty command.\")\n\n        _use_threaded_child_watcher()\n        display_name = f\" {self.name!r}\" if self.name else \"\"\n\n        # Open a subprocess to execute the flow run\n        self.logger.info(f\"Opening process{display_name}...\")\n        working_dir_ctx = (\n            tempfile.TemporaryDirectory(suffix=\"prefect\")\n            if not self.working_dir\n            else contextlib.nullcontext(self.working_dir)\n        )\n        with working_dir_ctx as working_dir:\n            self.logger.debug(\n                f\"Process{display_name} running command: {' '.join(self.command)} in\"\n                f\" {working_dir}\"\n            )\n\n            # We must add creationflags to a dict so it is only passed as a function\n            # parameter on Windows, because the presence of creationflags causes\n            # errors on Unix even if set to None\n            kwargs: Dict[str, object] = {}\n            if sys.platform == \"win32\":\n                kwargs[\"creationflags\"] = subprocess.CREATE_NEW_PROCESS_GROUP\n\n            process = await run_process(\n                self.command,\n                stream_output=self.stream_output,\n                task_status=task_status,\n                task_status_handler=_infrastructure_pid_from_process,\n                env=self._get_environment_variables(),\n                cwd=working_dir,\n                **kwargs,\n            )\n\n        # Use the pid for display if no name was given\n        display_name = display_name or f\" {process.pid}\"\n\n        if process.returncode:\n            help_message = None\n            if process.returncode == -9:\n                help_message = (\n                    \"This indicates that the process exited due to a SIGKILL signal. \"\n                    \"Typically, this is either caused by manual cancellation or \"\n                    \"high memory usage causing the operating system to \"\n                    \"terminate the process.\"\n                )\n            if process.returncode == -15:\n                help_message = (\n                    \"This indicates that the process exited due to a SIGTERM signal. \"\n                    \"Typically, this is caused by manual cancellation.\"\n                )\n            elif process.returncode == 247:\n                help_message = (\n                    \"This indicates that the process was terminated due to high \"\n                    \"memory usage.\"\n                )\n            elif (\n                sys.platform == \"win32\" and process.returncode == STATUS_CONTROL_C_EXIT\n            ):\n                help_message = (\n                    \"Process was terminated due to a Ctrl+C or Ctrl+Break signal. 
\"\n                    \"Typically, this is caused by manual cancellation.\"\n                )\n\n            self.logger.error(\n                f\"Process{display_name} exited with status code: {process.returncode}\"\n                + (f\"; {help_message}\" if help_message else \"\")\n            )\n        else:\n            self.logger.info(f\"Process{display_name} exited cleanly.\")\n\n        return ProcessResult(\n            status_code=process.returncode, identifier=str(process.pid)\n        )\n\n    async def kill(self, infrastructure_pid: str, grace_seconds: int = 30):\n        hostname, pid = _parse_infrastructure_pid(infrastructure_pid)\n\n        if hostname != socket.gethostname():\n            raise InfrastructureNotAvailable(\n                f\"Unable to kill process {pid!r}: The process is running on a different\"\n                f\" host {hostname!r}.\"\n            )\n\n        # In a non-windows environment first send a SIGTERM, then, after\n        # `grace_seconds` seconds have passed subsequent send SIGKILL. In\n        # Windows we use CTRL_BREAK_EVENT as SIGTERM is useless:\n        # https://bugs.python.org/issue26350\n        if sys.platform == \"win32\":\n            try:\n                os.kill(pid, signal.CTRL_BREAK_EVENT)\n            except (ProcessLookupError, WindowsError):\n                raise InfrastructureNotFound(\n                    f\"Unable to kill process {pid!r}: The process was not found.\"\n                )\n        else:\n            try:\n                os.kill(pid, signal.SIGTERM)\n            except ProcessLookupError:\n                raise InfrastructureNotFound(\n                    f\"Unable to kill process {pid!r}: The process was not found.\"\n                )\n\n            # Throttle how often we check if the process is still alive to keep\n            # from making too many system calls in a short period of time.\n            check_interval = max(grace_seconds / 10, 1)\n\n            with anyio.move_on_after(grace_seconds):\n                while True:\n                    await anyio.sleep(check_interval)\n\n                    # Detect if the process is still alive. 
If not do an early\n                    # return as the process respected the SIGTERM from above.\n                    try:\n                        os.kill(pid, 0)\n                    except ProcessLookupError:\n                        return\n\n            try:\n                os.kill(pid, signal.SIGKILL)\n            except OSError:\n                # We shouldn't ever end up here, but it's possible that the\n                # process ended right after the check above.\n                return\n\n    def preview(self):\n        environment = self._get_environment_variables(include_os_environ=False)\n        return \" \\\\\\n\".join(\n            [f\"{key}={value}\" for key, value in environment.items()]\n            + [\" \".join(self.command)]\n        )\n\n    def _get_environment_variables(self, include_os_environ: bool = True):\n        os_environ = os.environ if include_os_environ else {}\n        # The base environment must override the current environment or\n        # the Prefect settings context may not be respected\n        env = {**os_environ, **self._base_environment(), **self.env}\n\n        # Drop null values allowing users to \"unset\" variables\n        return {key: value for key, value in env.items() if value is not None}\n\n    def _base_flow_run_command(self):\n        return [get_sys_executable(), \"-m\", \"prefect.engine\"]\n\n    def get_corresponding_worker_type(self):\n        return \"process\"\n\n    async def generate_work_pool_base_job_template(self):\n        from prefect.workers.utilities import (\n            get_default_base_job_template_for_infrastructure_type,\n        )\n\n        base_job_template = await get_default_base_job_template_for_infrastructure_type(\n            self.get_corresponding_worker_type(),\n        )\n        assert (\n            base_job_template is not None\n        ), \"Failed to generate default base job template for Process worker.\"\n        for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n            if key == \"command\":\n                base_job_template[\"variables\"][\"properties\"][\"command\"][\n                    \"default\"\n                ] = shlex.join(value)\n            elif key in [\n                \"type\",\n                \"block_type_slug\",\n                \"_block_document_id\",\n                \"_block_document_name\",\n                \"_is_anonymous\",\n            ]:\n                continue\n            elif key in base_job_template[\"variables\"][\"properties\"]:\n                base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n            else:\n                self.logger.warning(\n                    f\"Variable {key!r} is not supported by Process work pools.\"\n                    \" Skipping.\"\n                )\n\n        return base_job_template\n
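A hedged usage sketch of the (deprecated) Process block described above; the command shown is illustrative only.

from prefect.infrastructure import Process

process = Process(command=["echo", "hello"], stream_output=True)
print(process.preview())        # environment variables plus the command that would run
result = process.run()          # ProcessResult with status_code and identifier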
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.ProcessResult","title":"ProcessResult","text":"

Bases: InfrastructureResult

Contains information about the final state of a completed process

Source code in prefect/infrastructure/process.py
class ProcessResult(InfrastructureResult):\n    \"\"\"Contains information about the final state of a completed process\"\"\"\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/manifests/","title":"prefect.manifests","text":"","tags":["Python API","deployments"]},{"location":"api-ref/prefect/manifests/#prefect.manifests","title":"prefect.manifests","text":"

Manifests are portable descriptions of one or more workflows within a given directory structure.

They are the foundational building blocks for defining Flow Deployments.

","tags":["Python API","deployments"]},{"location":"api-ref/prefect/manifests/#prefect.manifests.Manifest","title":"Manifest","text":"

Bases: BaseModel

A JSON representation of a flow.

Source code in prefect/manifests.py
class Manifest(BaseModel):\n    \"\"\"A JSON representation of a flow.\"\"\"\n\n    flow_name: str = Field(default=..., description=\"The name of the flow.\")\n    import_path: str = Field(\n        default=..., description=\"The relative import path for the flow.\"\n    )\n    parameter_openapi_schema: ParameterSchema = Field(\n        default=..., description=\"The OpenAPI schema of the flow's parameters.\"\n    )\n
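A hedged sketch of constructing the model above; parameter_schema from prefect.utilities.callables is used here to build the ParameterSchema, and the import path is illustrative.

from prefect.manifests import Manifest
from prefect.utilities.callables import parameter_schema

def greet(name: str = "world") -> str:
    return f"hello {name}"

manifest = Manifest(
    flow_name="greet",
    import_path="flows/greeting.py:greet",   # illustrative path
    parameter_openapi_schema=parameter_schema(greet),
)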
","tags":["Python API","deployments"]},{"location":"api-ref/prefect/serializers/","title":"prefect.serializers","text":"","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers","title":"prefect.serializers","text":"

Serializer implementations for converting objects to bytes and bytes to objects.

All serializers are based on the Serializer class and include a type string that allows them to be referenced without referencing the actual class. For example, you can often specify the JSONSerializer with the string \"json\". Some serializers support additional settings for configuration of serialization. These are stored on the instance so the same settings can be used to load saved objects.

All serializers must implement dumps and loads which convert objects to bytes and bytes to an object respectively.

","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.CompressedJSONSerializer","title":"CompressedJSONSerializer","text":"

Bases: CompressedSerializer

A compressed serializer preconfigured to use the json serializer.

Source code in prefect/serializers.py
class CompressedJSONSerializer(CompressedSerializer):\n    \"\"\"\n    A compressed serializer preconfigured to use the json serializer.\n    \"\"\"\n\n    type: Literal[\"compressed/json\"] = \"compressed/json\"\n\n    serializer: Serializer = Field(default_factory=JSONSerializer)\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.CompressedPickleSerializer","title":"CompressedPickleSerializer","text":"

Bases: CompressedSerializer

A compressed serializer preconfigured to use the pickle serializer.

Source code in prefect/serializers.py
class CompressedPickleSerializer(CompressedSerializer):\n    \"\"\"\n    A compressed serializer preconfigured to use the pickle serializer.\n    \"\"\"\n\n    type: Literal[\"compressed/pickle\"] = \"compressed/pickle\"\n\n    serializer: Serializer = Field(default_factory=PickleSerializer)\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.CompressedSerializer","title":"CompressedSerializer","text":"

Bases: Serializer

Wraps another serializer, compressing its output. Uses lzma by default. See compressionlib for using alternative libraries.

Attributes:

Name Type Description serializer Serializer

The serializer to use before compression.

compressionlib str

The import path of a compression module to use. Must have methods compress(bytes) -> bytes and decompress(bytes) -> bytes.

level str

If not null, the level of compression to pass to compress.

Source code in prefect/serializers.py
class CompressedSerializer(Serializer):\n    \"\"\"\n    Wraps another serializer, compressing its output.\n    Uses `lzma` by default. See `compressionlib` for using alternative libraries.\n\n    Attributes:\n        serializer: The serializer to use before compression.\n        compressionlib: The import path of a compression module to use.\n            Must have methods `compress(bytes) -> bytes` and `decompress(bytes) -> bytes`.\n        level: If not null, the level of compression to pass to `compress`.\n    \"\"\"\n\n    type: Literal[\"compressed\"] = \"compressed\"\n\n    serializer: Serializer\n    compressionlib: str = \"lzma\"\n\n    @validator(\"serializer\", pre=True)\n    def validate_serializer(cls, value):\n        return cast_type_names_to_serializers(value)\n\n    @validator(\"compressionlib\")\n    def check_compressionlib(cls, value):\n        return validate_compressionlib(value)\n\n    def dumps(self, obj: Any) -> bytes:\n        blob = self.serializer.dumps(obj)\n        compressor = from_qualified_name(self.compressionlib)\n        return base64.encodebytes(compressor.compress(blob))\n\n    def loads(self, blob: bytes) -> Any:\n        compressor = from_qualified_name(self.compressionlib)\n        uncompressed = compressor.decompress(base64.decodebytes(blob))\n        return self.serializer.loads(uncompressed)\n
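A minimal roundtrip sketch using the defaults described above (lzma compression wrapped around the JSON serializer):

from prefect.serializers import CompressedSerializer, JSONSerializer

serializer = CompressedSerializer(serializer=JSONSerializer())
blob = serializer.dumps({"answer": 42})     # base64 of lzma-compressed JSON bytes
assert serializer.loads(blob) == {"answer": 42}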
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.JSONSerializer","title":"JSONSerializer","text":"

Bases: Serializer

Serializes data to JSON.

Input types must be compatible with the stdlib json library.

Wraps the json library to serialize to UTF-8 bytes instead of string types.

Source code in prefect/serializers.py
class JSONSerializer(Serializer):\n    \"\"\"\n    Serializes data to JSON.\n\n    Input types must be compatible with the stdlib json library.\n\n    Wraps the `json` library to serialize to UTF-8 bytes instead of string types.\n    \"\"\"\n\n    type: Literal[\"json\"] = \"json\"\n\n    jsonlib: str = \"json\"\n    object_encoder: Optional[str] = Field(\n        default=\"prefect.serializers.prefect_json_object_encoder\",\n        description=(\n            \"An optional callable to use when serializing objects that are not \"\n            \"supported by the JSON encoder. By default, this is set to a callable that \"\n            \"adds support for all types supported by \"\n        ),\n    )\n    object_decoder: Optional[str] = Field(\n        default=\"prefect.serializers.prefect_json_object_decoder\",\n        description=(\n            \"An optional callable to use when deserializing objects. This callable \"\n            \"is passed each dictionary encountered during JSON deserialization. \"\n            \"By default, this is set to a callable that deserializes content created \"\n            \"by our default `object_encoder`.\"\n        ),\n    )\n    dumps_kwargs: Dict[str, Any] = Field(default_factory=dict)\n    loads_kwargs: Dict[str, Any] = Field(default_factory=dict)\n\n    @validator(\"dumps_kwargs\")\n    def dumps_kwargs_cannot_contain_default(cls, value):\n        return validate_dump_kwargs(value)\n\n    @validator(\"loads_kwargs\")\n    def loads_kwargs_cannot_contain_object_hook(cls, value):\n        return validate_load_kwargs(value)\n\n    def dumps(self, data: Any) -> bytes:\n        json = from_qualified_name(self.jsonlib)\n        kwargs = self.dumps_kwargs.copy()\n        if self.object_encoder:\n            kwargs[\"default\"] = from_qualified_name(self.object_encoder)\n        result = json.dumps(data, **kwargs)\n        if isinstance(result, str):\n            # The standard library returns str but others may return bytes directly\n            result = result.encode()\n        return result\n\n    def loads(self, blob: bytes) -> Any:\n        json = from_qualified_name(self.jsonlib)\n        kwargs = self.loads_kwargs.copy()\n        if self.object_decoder:\n            kwargs[\"object_hook\"] = from_qualified_name(self.object_decoder)\n        return json.loads(blob.decode(), **kwargs)\n
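A minimal roundtrip sketch; as described above, dumps returns UTF-8 bytes rather than a string:

from prefect.serializers import JSONSerializer

serializer = JSONSerializer()
blob = serializer.dumps({"name": "my-flow"})
assert isinstance(blob, bytes)
assert serializer.loads(blob) == {"name": "my-flow"}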
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.PickleSerializer","title":"PickleSerializer","text":"

Bases: Serializer

Serializes objects using the pickle protocol.

  • Uses cloudpickle by default. See picklelib for using alternative libraries.
  • Stores the version of the pickle library to check for compatibility during deserialization.
  • Wraps pickles in base64 for safe transmission.
Source code in prefect/serializers.py
class PickleSerializer(Serializer):\n    \"\"\"\n    Serializes objects using the pickle protocol.\n\n    - Uses `cloudpickle` by default. See `picklelib` for using alternative libraries.\n    - Stores the version of the pickle library to check for compatibility during\n        deserialization.\n    - Wraps pickles in base64 for safe transmission.\n    \"\"\"\n\n    type: Literal[\"pickle\"] = \"pickle\"\n\n    picklelib: str = \"cloudpickle\"\n    picklelib_version: str = None\n\n    @validator(\"picklelib\")\n    def check_picklelib(cls, value):\n        return validate_picklelib(value)\n\n    @root_validator\n    def check_picklelib_version(cls, values):\n        return validate_picklelib_version(values)\n\n    def dumps(self, obj: Any) -> bytes:\n        pickler = from_qualified_name(self.picklelib)\n        blob = pickler.dumps(obj)\n        return base64.encodebytes(blob)\n\n    def loads(self, blob: bytes) -> Any:\n        pickler = from_qualified_name(self.picklelib)\n        return pickler.loads(base64.decodebytes(blob))\n
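A minimal roundtrip sketch; by default cloudpickle is used and the pickled payload is base64-wrapped:

from prefect.serializers import PickleSerializer

serializer = PickleSerializer()
blob = serializer.dumps({"nums": [1, 2, 3]})
assert serializer.loads(blob) == {"nums": [1, 2, 3]}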
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.Serializer","title":"Serializer","text":"

Bases: BaseModel, Generic[D], ABC

A serializer that can encode objects of type 'D' into bytes.

Source code in prefect/serializers.py
@register_base_type\nclass Serializer(BaseModel, Generic[D], abc.ABC):\n    \"\"\"\n    A serializer that can encode objects of type 'D' into bytes.\n    \"\"\"\n\n    def __init__(self, **data: Any) -> None:\n        type_string = get_dispatch_key(self) if type(self) != Serializer else \"__base__\"\n        data.setdefault(\"type\", type_string)\n        super().__init__(**data)\n\n    def __new__(cls: Type[Self], **kwargs) -> Self:\n        if \"type\" in kwargs:\n            try:\n                subcls = lookup_type(cls, dispatch_key=kwargs[\"type\"])\n            except KeyError as exc:\n                raise ValidationError(errors=[exc], model=cls)\n\n            return super().__new__(subcls)\n        else:\n            return super().__new__(cls)\n\n    type: str\n\n    @abc.abstractmethod\n    def dumps(self, obj: D) -> bytes:\n        \"\"\"Encode the object into a blob of bytes.\"\"\"\n\n    @abc.abstractmethod\n    def loads(self, blob: bytes) -> D:\n        \"\"\"Decode the blob of bytes into an object.\"\"\"\n\n    class Config:\n        extra = \"forbid\"\n\n    @classmethod\n    def __dispatch_key__(cls):\n        return cls.__fields__.get(\"type\").get_default()\n
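A hedged sketch of implementing the dumps/loads contract with a custom subclass; this class is not part of Prefect and exists only to illustrate the pattern the built-in serializers follow (a Literal type field used as the dispatch key).

from typing import Any, Literal

from prefect.serializers import Serializer

class UTF8Serializer(Serializer):
    """Example only: serializes strings as UTF-8 bytes."""

    type: Literal["utf8-example"] = "utf8-example"

    def dumps(self, obj: str) -> bytes:
        return obj.encode("utf-8")

    def loads(self, blob: bytes) -> str:
        return blob.decode("utf-8")

serializer = UTF8Serializer()
assert serializer.loads(serializer.dumps("hello")) == "hello"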
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.Serializer.dumps","title":"dumps abstractmethod","text":"

Encode the object into a blob of bytes.

Source code in prefect/serializers.py
@abc.abstractmethod\ndef dumps(self, obj: D) -> bytes:\n    \"\"\"Encode the object into a blob of bytes.\"\"\"\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.Serializer.loads","title":"loads abstractmethod","text":"

Decode the blob of bytes into an object.

Source code in prefect/serializers.py
@abc.abstractmethod\ndef loads(self, blob: bytes) -> D:\n    \"\"\"Decode the blob of bytes into an object.\"\"\"\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.prefect_json_object_decoder","title":"prefect_json_object_decoder","text":"

JSONDecoder.object_hook for decoding objects from JSON when previously encoded with prefect_json_object_encoder

Source code in prefect/serializers.py
def prefect_json_object_decoder(result: dict):\n    \"\"\"\n    `JSONDecoder.object_hook` for decoding objects from JSON when previously encoded\n    with `prefect_json_object_encoder`\n    \"\"\"\n    if \"__class__\" in result:\n        return parse_obj_as(from_qualified_name(result[\"__class__\"]), result[\"data\"])\n    elif \"__exc_type__\" in result:\n        return from_qualified_name(result[\"__exc_type__\"])(result[\"message\"])\n    else:\n        return result\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.prefect_json_object_encoder","title":"prefect_json_object_encoder","text":"

JSONEncoder.default for encoding objects into JSON with extended type support.

Raises a TypeError to fallback on other encoders on failure.

Source code in prefect/serializers.py
def prefect_json_object_encoder(obj: Any) -> Any:\n    \"\"\"\n    `JSONEncoder.default` for encoding objects into JSON with extended type support.\n\n    Raises a `TypeError` to fallback on other encoders on failure.\n    \"\"\"\n    if isinstance(obj, BaseException):\n        return {\"__exc_type__\": to_qualified_name(obj.__class__), \"message\": str(obj)}\n    else:\n        return {\n            \"__class__\": to_qualified_name(obj.__class__),\n            \"data\": custom_pydantic_encoder({}, obj),\n        }\n
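A small sketch of the encoder/decoder pair described above; the exact qualified name shown in the comment is an assumption based on the encoder's use of the class's qualified name.

from prefect.serializers import (
    prefect_json_object_decoder,
    prefect_json_object_encoder,
)

encoded = prefect_json_object_encoder(ValueError("boom"))
# roughly {'__exc_type__': 'builtins.ValueError', 'message': 'boom'}
decoded = prefect_json_object_decoder(encoded)
assert isinstance(decoded, ValueError)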
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/settings/","title":"prefect.settings","text":"","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings","title":"prefect.settings","text":"

Prefect settings management.

Each setting is defined as a Setting type. The name of each setting is stylized in all caps, matching the environment variable that can be used to change the setting.

All settings defined in this file are used to generate a dynamic Pydantic settings class called Settings. When instantiated, this class will load settings from environment variables and pull default values from the setting definitions.

The current instance of Settings being used by the application is stored in a SettingsContext model which allows each instance of the Settings class to be accessed in an async-safe manner.

Aside from environment variables, we allow settings to be changed during the runtime of the process using profiles. Profiles contain setting overrides that the user may persist without setting environment variables. Profiles are also used internally for managing settings during task run execution where differing settings may be used concurrently in the same process and during testing where we need to override settings to ensure their value is respected as intended.

The SettingsContext is set when the prefect module is imported. This context is referred to as the \"root\" settings context for clarity. Generally, this is the only settings context that will be used. When this context is entered, we will instantiate a Settings object, loading settings from environment variables and defaults, then we will load the active profile and use it to override settings. See enter_root_settings_context for details on determining the active profile.

Another SettingsContext may be entered at any time to change the settings being used by the code within the context. Generally, users should not use this. Settings management should be left to Prefect application internals.

Generally, settings should be accessed with SETTING_VARIABLE.value() which will pull the current Settings instance from the current SettingsContext and retrieve the value of the relevant setting.

Accessing a setting's value will also call the Setting.value_callback which allows settings to be dynamically modified on retrieval. This allows us to make settings dependent on the value of other settings or perform other dynamic effects.
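A minimal sketch of the access pattern above; .value() resolves each setting through the current SettingsContext, so the active profile, environment variables, and any value_callback are applied.

from prefect.settings import PREFECT_API_URL, PREFECT_LOGGING_LEVEL

print(PREFECT_API_URL.value())        # None unless configured
print(PREFECT_LOGGING_LEVEL.value())  # "INFO" unless overridden or in debug mode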

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_HOME","title":"PREFECT_HOME = Setting(Path, default=Path('~') / '.prefect', value_callback=expanduser_in_path) module-attribute","text":"

Prefect's home directory. Defaults to ~/.prefect. This directory may be created automatically when required.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXTRA_ENTRYPOINTS","title":"PREFECT_EXTRA_ENTRYPOINTS = Setting(str, default='') module-attribute","text":"

Modules for Prefect to import when Prefect is imported.

Values should be separated by commas, e.g. my_module,my_other_module. Objects within modules may be specified by a ':' partition, e.g. my_module:my_object. If a callable object is provided, it will be called with no arguments on import.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_DEBUG_MODE","title":"PREFECT_DEBUG_MODE = Setting(bool, default=False) module-attribute","text":"

If True, places the API in debug mode. This may modify behavior to facilitate debugging, including extra logs and other verbose assistance. Defaults to False.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLI_COLORS","title":"PREFECT_CLI_COLORS = Setting(bool, default=True) module-attribute","text":"

If True, use colors in CLI output. If False, output will not include colors codes. Defaults to True.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLI_PROMPT","title":"PREFECT_CLI_PROMPT = Setting(Optional[bool], default=None) module-attribute","text":"

If True, use interactive prompts in CLI commands. If False, no interactive prompts will be used. If None, the value will be dynamically determined based on the presence of an interactive-enabled terminal.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLI_WRAP_LINES","title":"PREFECT_CLI_WRAP_LINES = Setting(bool, default=True) module-attribute","text":"

If True, wrap text by inserting new lines in long lines in CLI output. If False, output will not be wrapped. Defaults to True.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TEST_MODE","title":"PREFECT_TEST_MODE = Setting(bool, default=False) module-attribute","text":"

If True, places the API in test mode. This may modify behavior to facilitate testing. Defaults to False.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UNIT_TEST_MODE","title":"PREFECT_UNIT_TEST_MODE = Setting(bool, default=False) module-attribute","text":"

This variable only exists to facilitate unit testing. If True, code is executing in a unit test context. Defaults to False.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UNIT_TEST_LOOP_DEBUG","title":"PREFECT_UNIT_TEST_LOOP_DEBUG = Setting(bool, default=True) module-attribute","text":"

If True, turns on debug mode for the unit testing event loop. Defaults to True.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TEST_SETTING","title":"PREFECT_TEST_SETTING = Setting(Any, default=None, value_callback=only_return_value_in_test_mode) module-attribute","text":"

This variable only exists to facilitate testing of settings. If accessed when PREFECT_TEST_MODE is not set, None is returned.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_TLS_INSECURE_SKIP_VERIFY","title":"PREFECT_API_TLS_INSECURE_SKIP_VERIFY = Setting(bool, default=False) module-attribute","text":"

If True, disables SSL checking to allow insecure requests. This is recommended only during development, e.g. when using self-signed certificates.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SSL_CERT_FILE","title":"PREFECT_API_SSL_CERT_FILE = Setting(str, default=os.environ.get('SSL_CERT_FILE')) module-attribute","text":"

This setting specifies the path to an SSL certificate file. When set, the application uses the specified certificate for secure communication. If left unset, the setting defaults to the value of the SSL_CERT_FILE environment variable.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_URL","title":"PREFECT_API_URL = Setting(str, default=None) module-attribute","text":"

If provided, the URL of a hosted Prefect API. Defaults to None.

When using Prefect Cloud, this will include an account and workspace.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SILENCE_API_URL_MISCONFIGURATION","title":"PREFECT_SILENCE_API_URL_MISCONFIGURATION = Setting(bool, default=False) module-attribute","text":"

If True, disables the warning shown when a user accidentally misconfigures PREFECT_API_URL. When PREFECT_API_URL is intentionally set to a custom URL, for example behind a reverse proxy, set this to True to silence the warning.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_KEY","title":"PREFECT_API_KEY = Setting(str, default=None, is_secret=True) module-attribute","text":"

API key used to authenticate with the Prefect API. Defaults to None.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_ENABLE_HTTP2","title":"PREFECT_API_ENABLE_HTTP2 = Setting(bool, default=True) module-attribute","text":"

If true, enable support for HTTP/2 for communicating with an API.

If the API does not support HTTP/2, this will have no effect and connections will be made via HTTP/1.1.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLIENT_MAX_RETRIES","title":"PREFECT_CLIENT_MAX_RETRIES = Setting(int, default=5) module-attribute","text":"

The maximum number of retries to perform on failed HTTP requests.

Defaults to 5. Set to 0 to disable retries.

See PREFECT_CLIENT_RETRY_EXTRA_CODES for details on which HTTP status codes are retried.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLIENT_RETRY_JITTER_FACTOR","title":"PREFECT_CLIENT_RETRY_JITTER_FACTOR = Setting(float, default=0.2) module-attribute","text":"

A value greater than or equal to zero to control the amount of jitter added to retried client requests. Higher values introduce larger amounts of jitter.

Set to 0 to disable jitter. See clamped_poisson_interval for details on how jitter can affect retry lengths.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLIENT_RETRY_EXTRA_CODES","title":"PREFECT_CLIENT_RETRY_EXTRA_CODES = Setting(str, default='', value_callback=status_codes_as_integers_in_range) module-attribute","text":"

A comma-separated list of extra HTTP status codes to retry on. Defaults to an empty string. 429, 502 and 503 are always retried. Please note that not all routes are idempotent and retrying may result in unexpected behavior.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLIENT_CSRF_SUPPORT_ENABLED","title":"PREFECT_CLIENT_CSRF_SUPPORT_ENABLED = Setting(bool, default=True) module-attribute","text":"

Determines if CSRF token handling is active in the Prefect client for API requests.

When enabled (True), the client automatically manages CSRF tokens by retrieving, storing, and including them in applicable state-changing requests (POST, PUT, PATCH, DELETE) to the API.

Disabling this setting (False) means the client will not handle CSRF tokens, which might be suitable for environments where CSRF protection is disabled.

Defaults to True, ensuring CSRF protection is enabled by default.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLOUD_API_URL","title":"PREFECT_CLOUD_API_URL = Setting(str, default='https://api.prefect.cloud/api', value_callback=check_for_deprecated_cloud_url) module-attribute","text":"

API URL for Prefect Cloud. Used for authentication.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLOUD_URL","title":"PREFECT_CLOUD_URL = Setting(str, default=None, deprecated=True, deprecated_start_date='Dec 2022', deprecated_help='Use `PREFECT_CLOUD_API_URL` instead.') module-attribute","text":"","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_URL","title":"PREFECT_UI_URL = Setting(Optional[str], default=None, value_callback=default_ui_url) module-attribute","text":"

The URL for the UI. By default, this is inferred from the PREFECT_API_URL.

When using Prefect Cloud, this will include the account and workspace. When using an ephemeral server, this will be None.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLOUD_UI_URL","title":"PREFECT_CLOUD_UI_URL = Setting(str, default=None, value_callback=default_cloud_ui_url) module-attribute","text":"

The URL for the Cloud UI. By default, this is inferred from the PREFECT_CLOUD_API_URL.

PREFECT_UI_URL will be workspace-specific and is also usable with the open source server.

In contrast, this value is only valid for Cloud and will not include the workspace.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_REQUEST_TIMEOUT","title":"PREFECT_API_REQUEST_TIMEOUT = Setting(float, default=60.0) module-attribute","text":"

The default timeout for requests to the API

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN","title":"PREFECT_EXPERIMENTAL_WARN = Setting(bool, default=True) module-attribute","text":"

If enabled, warn on usage of experimental features.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_PROFILES_PATH","title":"PREFECT_PROFILES_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'profiles.toml', value_callback=template_with_settings(PREFECT_HOME)) module-attribute","text":"

The path to the profiles configuration file.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RESULTS_DEFAULT_SERIALIZER","title":"PREFECT_RESULTS_DEFAULT_SERIALIZER = Setting(str, default='pickle') module-attribute","text":"

The default serializer to use when not otherwise specified.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RESULTS_PERSIST_BY_DEFAULT","title":"PREFECT_RESULTS_PERSIST_BY_DEFAULT = Setting(bool, default=False) module-attribute","text":"

The default setting for persisting results when not otherwise specified. If enabled, flow and task results will be persisted unless they opt out.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASKS_REFRESH_CACHE","title":"PREFECT_TASKS_REFRESH_CACHE = Setting(bool, default=False) module-attribute","text":"

If True, enables a refresh of cached results: re-executing the task will refresh the cached results. Defaults to False.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_DEFAULT_RETRIES","title":"PREFECT_TASK_DEFAULT_RETRIES = Setting(int, default=0) module-attribute","text":"

This value sets the default number of retries for all tasks. It does not overwrite retry values set on individual tasks.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_FLOW_DEFAULT_RETRIES","title":"PREFECT_FLOW_DEFAULT_RETRIES = Setting(int, default=0) module-attribute","text":"

This value sets the default number of retries for all flows. It does not overwrite retry values set on individual flows.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS","title":"PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS = Setting(Union[int, float], default=0) module-attribute","text":"

This value sets the default retry delay, in seconds, for all flows. It does not overwrite retry delays set on individual flows.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS","title":"PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS = Setting(Union[float, int, List[float]], default=0) module-attribute","text":"

This value sets the default retry delay, in seconds, for all tasks. It does not overwrite retry delays set on individual tasks.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS","title":"PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS = Setting(int, default=30) module-attribute","text":"

The number of seconds to wait before retrying when a task run cannot secure a concurrency slot from the server.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOCAL_STORAGE_PATH","title":"PREFECT_LOCAL_STORAGE_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'storage', value_callback=template_with_settings(PREFECT_HOME)) module-attribute","text":"

The path to a block storage directory to store things in.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_MEMO_STORE_PATH","title":"PREFECT_MEMO_STORE_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'memo_store.toml', value_callback=template_with_settings(PREFECT_HOME)) module-attribute","text":"

The path to the memo store file.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_MEMOIZE_BLOCK_AUTO_REGISTRATION","title":"PREFECT_MEMOIZE_BLOCK_AUTO_REGISTRATION = Setting(bool, default=True) module-attribute","text":"

Controls whether block auto-registration on startup is memoized. Setting to False may result in slower server startup times.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_LEVEL","title":"PREFECT_LOGGING_LEVEL = Setting(str, default='INFO', value_callback=debug_mode_log_level) module-attribute","text":"

The default logging level for Prefect loggers. Defaults to \"INFO\" during normal operation. Is forced to \"DEBUG\" during debug mode.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_INTERNAL_LEVEL","title":"PREFECT_LOGGING_INTERNAL_LEVEL = Setting(str, default='ERROR', value_callback=debug_mode_log_level) module-attribute","text":"

The default logging level for Prefect's internal machinery loggers. Defaults to \"ERROR\" during normal operation. Is forced to \"DEBUG\" during debug mode.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_SERVER_LEVEL","title":"PREFECT_LOGGING_SERVER_LEVEL = Setting(str, default='WARNING') module-attribute","text":"

The default logging level for the Prefect API server.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_SETTINGS_PATH","title":"PREFECT_LOGGING_SETTINGS_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'logging.yml', value_callback=template_with_settings(PREFECT_HOME)) module-attribute","text":"

The path to a custom YAML logging configuration file. If no file is found, the default logging.yml is used. Defaults to a logging.yml in the Prefect home directory.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_EXTRA_LOGGERS","title":"PREFECT_LOGGING_EXTRA_LOGGERS = Setting(str, default='', value_callback=get_extra_loggers) module-attribute","text":"

Additional loggers to attach to Prefect logging at runtime. Values should be comma separated. The handlers attached to the 'prefect' logger will be added to these loggers. Additionally, if the level is not set, it will be set to the same level as the 'prefect' logger.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_LOG_PRINTS","title":"PREFECT_LOGGING_LOG_PRINTS = Setting(bool, default=False) module-attribute","text":"

If set, print statements in flows and tasks will be redirected to the Prefect logger for the given run. This setting can be overridden by individual tasks and flows.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_ENABLED","title":"PREFECT_LOGGING_TO_API_ENABLED = Setting(bool, default=True) module-attribute","text":"

Toggles sending logs to the API. If False, logs sent to the API log handler will not be sent to the API.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_BATCH_INTERVAL","title":"PREFECT_LOGGING_TO_API_BATCH_INTERVAL = Setting(float, default=2.0) module-attribute","text":"

The number of seconds between batched writes of logs to the API.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_BATCH_SIZE","title":"PREFECT_LOGGING_TO_API_BATCH_SIZE = Setting(int, default=4000000) module-attribute","text":"

The maximum size in bytes for a batch of logs.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_MAX_LOG_SIZE","title":"PREFECT_LOGGING_TO_API_MAX_LOG_SIZE = Setting(int, default=1000000) module-attribute","text":"

The maximum size in bytes for a single log.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW","title":"PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW = Setting(Literal['warn', 'error', 'ignore'], default='warn') module-attribute","text":"

Controls the behavior when loggers attempt to send logs to the API handler from outside of a flow.

All logs sent to the API must be associated with a flow run. The API log handler can only be used outside of a flow by manually providing a flow run identifier. Logs that are not associated with a flow run will not be sent to the API. This setting can be used to determine if a warning or error is displayed when the identifier is missing.

The following options are available:

  • \"warn\": Log a warning message.
  • \"error\": Raise an error.
  • \"ignore\": Do not log a warning message or raise an error.
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SQLALCHEMY_POOL_SIZE","title":"PREFECT_SQLALCHEMY_POOL_SIZE = Setting(int, default=None) module-attribute","text":"

Controls connection pool size when using a PostgreSQL database with the Prefect API. If not set, the default SQLAlchemy pool size will be used.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SQLALCHEMY_MAX_OVERFLOW","title":"PREFECT_SQLALCHEMY_MAX_OVERFLOW = Setting(int, default=None) module-attribute","text":"

Controls maximum overflow of the connection pool when using a PostgreSQL database with the Prefect API. If not set, the default SQLAlchemy maximum overflow value will be used.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_COLORS","title":"PREFECT_LOGGING_COLORS = Setting(bool, default=True) module-attribute","text":"

Whether to style console logs with color.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_MARKUP","title":"PREFECT_LOGGING_MARKUP = Setting(bool, default=False) module-attribute","text":"

Whether to interpret strings wrapped in square brackets as a style. This allows styles to be conveniently added to log messages, e.g. [red]This is a red message.[/red]. However, the downside is that, if enabled, strings containing square brackets may be inaccurately interpreted and lead to incomplete output, e.g. DROP TABLE [dbo].[SomeTable]; outputs DROP TABLE .[SomeTable];.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_INTROSPECTION_WARN_THRESHOLD","title":"PREFECT_TASK_INTROSPECTION_WARN_THRESHOLD = Setting(float, default=10.0) module-attribute","text":"

Threshold time in seconds for logging a warning if task parameter introspection exceeds this duration. Parameter introspection can be a significant performance hit when the parameter is a large collection object, e.g. a large dictionary or DataFrame, and each element needs to be inspected. See prefect.utilities.annotations.quote for more details. Defaults to 10.0. Set to 0 to disable logging the warning.
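
As a sketch of the mitigation referenced above, wrapping a large argument in prefect.utilities.annotations.quote tells Prefect not to introspect it (the task and data here are hypothetical):

from prefect import flow, task\nfrom prefect.utilities.annotations import quote\n\n@task\ndef summarize(records):\n    return len(records)\n\n@flow\ndef my_flow():\n    big_list = list(range(1_000_000))\n    # quote() skips introspection of this argument, avoiding the per-element scan\n    return summarize(quote(big_list))\n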

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_AGENT_QUERY_INTERVAL","title":"PREFECT_AGENT_QUERY_INTERVAL = Setting(float, default=15) module-attribute","text":"

The agent loop interval, in seconds. Agents will check for new runs this often. Defaults to 15.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_AGENT_PREFETCH_SECONDS","title":"PREFECT_AGENT_PREFETCH_SECONDS = Setting(int, default=15) module-attribute","text":"

Agents will look for scheduled runs this many seconds in the future and attempt to run them. This accounts for any additional infrastructure spin-up time or latency in preparing a flow run. Note flow runs will not start before their scheduled time, even if they are prefetched. Defaults to 15.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ASYNC_FETCH_STATE_RESULT","title":"PREFECT_ASYNC_FETCH_STATE_RESULT = Setting(bool, default=False) module-attribute","text":"

Determines whether State.result() fetches results automatically. In Prefect 2.6.0, the State.result() method was updated to be async to facilitate automatic retrieval of results from storage, which means that when writing async code you must await the call. For backwards compatibility, the result is not retrieved by default for async users. You may opt in per call by passing fetch=True, or toggle this setting to change the behavior globally. This setting does not affect users writing synchronous tasks and flows, nor does it affect retrieval of results when using Future.result().
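
A sketch of the difference in async code, assuming a simple async flow called with return_state=True:

import asyncio\nfrom prefect import flow\n\n@flow\nasync def my_flow():\n    return 42\n\nasync def main():\n    state = await my_flow(return_state=True)\n    # With the default (False), opt in to fetching per call:\n    value = await state.result(fetch=True)\n    # With PREFECT_ASYNC_FETCH_STATE_RESULT=True, `await state.result()` fetches automatically.\n    print(value)\n\nasyncio.run(main())\n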

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_BLOCKS_REGISTER_ON_START","title":"PREFECT_API_BLOCKS_REGISTER_ON_START = Setting(bool, default=True) module-attribute","text":"

If set, any block types that have been imported will be registered with the backend on application startup. If not set, block types must be manually registered.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_PASSWORD","title":"PREFECT_API_DATABASE_PASSWORD = Setting(str, default=None, is_secret=True) module-attribute","text":"

Password to template into the PREFECT_API_DATABASE_CONNECTION_URL. This is useful if the password must be provided separately from the connection URL. To use this setting, you must include it in your connection URL.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_CONNECTION_URL","title":"PREFECT_API_DATABASE_CONNECTION_URL = Setting(str, default=None, value_callback=default_database_connection_url, is_secret=True) module-attribute","text":"

A database connection URL in a SQLAlchemy-compatible format. Prefect currently supports SQLite and Postgres. Note that all Prefect database engines must use an async driver - for SQLite, use sqlite+aiosqlite and for Postgres use postgresql+asyncpg.

SQLite in-memory databases can be used by providing the url sqlite+aiosqlite:///file::memory:?cache=shared&uri=true&check_same_thread=false, which will allow the database to be accessed by multiple threads. Note that in-memory databases can not be accessed from multiple processes and should only be used for simple tests.

Defaults to a sqlite database stored in the Prefect home directory.

If you need to provide the password via a different environment variable, use the PREFECT_API_DATABASE_PASSWORD setting. For example:

PREFECT_API_DATABASE_PASSWORD='mypassword'\nPREFECT_API_DATABASE_CONNECTION_URL='postgresql+asyncpg://postgres:${PREFECT_API_DATABASE_PASSWORD}@localhost/prefect'\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_ECHO","title":"PREFECT_API_DATABASE_ECHO = Setting(bool, default=False) module-attribute","text":"

If True, SQLAlchemy will log all SQL issued to the database. Defaults to False.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_MIGRATE_ON_START","title":"PREFECT_API_DATABASE_MIGRATE_ON_START = Setting(bool, default=True) module-attribute","text":"

If True, the database will be upgraded on application creation. If False, the database will need to be upgraded manually.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_TIMEOUT","title":"PREFECT_API_DATABASE_TIMEOUT = Setting(Optional[float], default=10.0) module-attribute","text":"

A statement timeout, in seconds, applied to all database interactions made by the API. Defaults to 10 seconds.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_CONNECTION_TIMEOUT","title":"PREFECT_API_DATABASE_CONNECTION_TIMEOUT = Setting(Optional[float], default=5) module-attribute","text":"

A connection timeout, in seconds, applied to database connections. Defaults to 5.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS","title":"PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS = Setting(float, default=60) module-attribute","text":"

The scheduler loop interval, in seconds. This determines how often the scheduler will attempt to schedule new flow runs, but has no impact on how quickly either flow runs or task runs are actually executed. Defaults to 60.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE","title":"PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE = Setting(int, default=100) module-attribute","text":"

The number of deployments the scheduler will attempt to schedule in a single batch. If there are more deployments than the batch size, the scheduler immediately attempts to schedule the next batch; it does not sleep for scheduler_loop_seconds until it has visited every deployment once. Defaults to 100.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS","title":"PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS = Setting(int, default=100) module-attribute","text":"

The scheduler will attempt to schedule up to this many auto-scheduled runs in the future. Note that deployments may have fewer than this many scheduled runs, depending on the value of scheduler_max_scheduled_time. Defaults to 100.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS","title":"PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS = Setting(int, default=3) module-attribute","text":"

The scheduler will attempt to schedule at least this many auto-scheduled runs in the future. Note that deployments may have more than this many scheduled runs, depending on the value of scheduler_min_scheduled_time. Defaults to 3.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME","title":"PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME = Setting(timedelta, default=timedelta(days=100)) module-attribute","text":"

The scheduler will create new runs up to this far in the future. Note that this setting will take precedence over scheduler_max_runs: if a flow runs once a month and scheduler_max_scheduled_time is three months, then only three runs will be scheduled. Defaults to 100 days (8640000 seconds).

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME","title":"PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME = Setting(timedelta, default=timedelta(hours=1)) module-attribute","text":"

The scheduler will create new runs at least this far in the future. Note that this setting will take precedence over scheduler_min_runs: if a flow runs every hour and scheduler_min_scheduled_time is three hours, then three runs will be scheduled even if scheduler_min_runs is 1. Defaults to 1 hour (3600 seconds).
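
To see how these interacting scheduler settings resolve on a given installation, their values can be read with the value() accessor documented later in this reference; a small sketch:

from prefect.settings import (\n    PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS,\n    PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME,\n    PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS,\n    PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME,\n)\n\n# Defaults: a ceiling of 100 runs / 100 days and a floor of 3 runs / 1 hour\nprint(PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS.value())\nprint(PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME.value())\nprint(PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS.value())\nprint(PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME.value())\n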

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE","title":"PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE = Setting(int, default=500) module-attribute","text":"

The number of flow runs the scheduler will attempt to insert in one batch across all deployments. If the number of flow runs to schedule exceeds this amount, the runs will be inserted in batches of this size. Defaults to 500.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS","title":"PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS = Setting(float, default=5) module-attribute","text":"

The late runs service will look for runs to mark as late this often. Defaults to 5.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS","title":"PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS = Setting(timedelta, default=timedelta(seconds=15)) module-attribute","text":"

The late runs service will mark runs as late after they have exceeded their scheduled start time by this many seconds. Defaults to 15 seconds.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS","title":"PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS = Setting(float, default=5) module-attribute","text":"

The pause expiration service will look for runs to mark as failed this often. Defaults to 5.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS","title":"PREFECT_API_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS = Setting(float, default=20) module-attribute","text":"

The cancellation cleanup service will look for non-terminal tasks and subflows this often. Defaults to 20.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_FOREMAN_ENABLED","title":"PREFECT_API_SERVICES_FOREMAN_ENABLED = Setting(bool, default=True) module-attribute","text":"

Whether or not to start the Foreman service in the server application.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_FOREMAN_LOOP_SECONDS","title":"PREFECT_API_SERVICES_FOREMAN_LOOP_SECONDS = Setting(float, default=15) module-attribute","text":"

The number of seconds to wait between each iteration of the Foreman loop which checks for offline workers and updates work pool status.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_FOREMAN_INACTIVITY_HEARTBEAT_MULTIPLE","title":"PREFECT_API_SERVICES_FOREMAN_INACTIVITY_HEARTBEAT_MULTIPLE = Setting(int, default=3) module-attribute","text":"

The number of heartbeats that must be missed before a worker is marked as offline.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_FOREMAN_FALLBACK_HEARTBEAT_INTERVAL_SECONDS","title":"PREFECT_API_SERVICES_FOREMAN_FALLBACK_HEARTBEAT_INTERVAL_SECONDS = Setting(int, default=30) module-attribute","text":"

The number of seconds to use for online/offline evaluation if a worker's heartbeat interval is not set.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_FOREMAN_DEPLOYMENT_LAST_POLLED_TIMEOUT_SECONDS","title":"PREFECT_API_SERVICES_FOREMAN_DEPLOYMENT_LAST_POLLED_TIMEOUT_SECONDS = Setting(int, default=60) module-attribute","text":"

The number of seconds before a deployment is marked as not ready if it has not been polled.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_FOREMAN_WORK_QUEUE_LAST_POLLED_TIMEOUT_SECONDS","title":"PREFECT_API_SERVICES_FOREMAN_WORK_QUEUE_LAST_POLLED_TIMEOUT_SECONDS = Setting(int, default=60) module-attribute","text":"

The number of seconds before a work queue is marked as not ready if it has not been polled.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DEFAULT_LIMIT","title":"PREFECT_API_DEFAULT_LIMIT = Setting(int, default=200) module-attribute","text":"

The default limit applied to queries that can return multiple objects, such as POST /flow_runs/filter.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SERVER_API_HOST","title":"PREFECT_SERVER_API_HOST = Setting(str, default='127.0.0.1') module-attribute","text":"

The API's host address (defaults to 127.0.0.1).

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SERVER_API_PORT","title":"PREFECT_SERVER_API_PORT = Setting(int, default=4200) module-attribute","text":"

The API's port (defaults to 4200).

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SERVER_API_KEEPALIVE_TIMEOUT","title":"PREFECT_SERVER_API_KEEPALIVE_TIMEOUT = Setting(int, default=5) module-attribute","text":"

The API's keep-alive timeout, in seconds (defaults to 5). Refer to https://www.uvicorn.org/settings/#timeouts for details.

When the API is hosted behind a load balancer, you may want to set this to a value greater than the load balancer's idle timeout.

Note this setting only applies when calling prefect server start; if hosting the API with another tool you will need to configure this there instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SERVER_CSRF_PROTECTION_ENABLED","title":"PREFECT_SERVER_CSRF_PROTECTION_ENABLED = Setting(bool, default=False) module-attribute","text":"

Controls the activation of CSRF protection for the Prefect server API.

When enabled (True), the server enforces CSRF validation checks on incoming state-changing requests (POST, PUT, PATCH, DELETE), requiring a valid CSRF token to be included in the request headers or body. This adds a layer of security by preventing unauthorized or malicious sites from making requests on behalf of authenticated users.

It is recommended to enable this setting in production environments where the API is exposed to web clients to safeguard against CSRF attacks.

Note: Enabling this setting requires corresponding support in the client for CSRF token management. See PREFECT_CLIENT_CSRF_SUPPORT_ENABLED for more.
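
As a sketch, both flags can be flipped together for a test using temporary_settings (for a persistent deployment you would set them in a profile or as environment variables instead):

from prefect.settings import (\n    PREFECT_CLIENT_CSRF_SUPPORT_ENABLED,\n    PREFECT_SERVER_CSRF_PROTECTION_ENABLED,\n    temporary_settings,\n)\n\n# Enable CSRF protection on the server and matching token management in the client\nwith temporary_settings({\n    PREFECT_SERVER_CSRF_PROTECTION_ENABLED: True,\n    PREFECT_CLIENT_CSRF_SUPPORT_ENABLED: True,\n}):\n    assert PREFECT_SERVER_CSRF_PROTECTION_ENABLED.value() is True\n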

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SERVER_CSRF_TOKEN_EXPIRATION","title":"PREFECT_SERVER_CSRF_TOKEN_EXPIRATION = Setting(timedelta, default=timedelta(hours=1)) module-attribute","text":"

Specifies the duration for which a CSRF token remains valid after being issued by the server.

The default expiration time is 1 hour, which offers a reasonable balance between security and convenience. Adjust this setting based on your specific security requirements and usage patterns.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_ENABLED","title":"PREFECT_UI_ENABLED = Setting(bool, default=True) module-attribute","text":"

Whether or not to serve the Prefect UI.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_API_URL","title":"PREFECT_UI_API_URL = Setting(str, default=None, value_callback=default_ui_api_url) module-attribute","text":"

The connection url for communication from the UI to the API. Defaults to PREFECT_API_URL if set. Otherwise, the default URL is generated from PREFECT_SERVER_API_HOST and PREFECT_SERVER_API_PORT. If providing a custom value, the aforementioned settings may be templated into the given string.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SERVER_ANALYTICS_ENABLED","title":"PREFECT_SERVER_ANALYTICS_ENABLED = Setting(bool, default=True) module-attribute","text":"

When enabled, Prefect sends anonymous data (e.g. count of flow runs, package version) on server startup to help us improve our product.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_ENABLED","title":"PREFECT_API_SERVICES_SCHEDULER_ENABLED = Setting(bool, default=True) module-attribute","text":"

Whether or not to start the scheduling service in the server application. If disabled, you will need to run this service separately to schedule runs for deployments.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_LATE_RUNS_ENABLED","title":"PREFECT_API_SERVICES_LATE_RUNS_ENABLED = Setting(bool, default=True) module-attribute","text":"

Whether or not to start the late runs service in the server application. If disabled, you will need to run this service separately to have runs past their scheduled start time marked as late.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED","title":"PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED = Setting(bool, default=True) module-attribute","text":"

Whether or not to start the flow run notifications service in the server application. If disabled, you will need to run this service separately to send flow run notifications.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED","title":"PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED = Setting(bool, default=True) module-attribute","text":"

Whether or not to start the paused flow run expiration service in the server application. If disabled, paused flows that have timed out will remain in a Paused state until a resume attempt.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH","title":"PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH = Setting(int, default=2000) module-attribute","text":"

The maximum number of characters allowed for a task run cache key. This setting cannot be changed client-side; it must be set on the server.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED","title":"PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED = Setting(bool, default=True) module-attribute","text":"

Whether or not to start the cancellation cleanup service in the server application. If disabled, task runs and subflow runs belonging to cancelled flows may remain in non-terminal states.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_MAX_FLOW_RUN_GRAPH_NODES","title":"PREFECT_API_MAX_FLOW_RUN_GRAPH_NODES = Setting(int, default=10000) module-attribute","text":"

The maximum number of nodes in a flow run graph on the v2 API.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_MAX_FLOW_RUN_GRAPH_ARTIFACTS","title":"PREFECT_API_MAX_FLOW_RUN_GRAPH_ARTIFACTS = Setting(int, default=10000) module-attribute","text":"

The maximum number of artifacts to show on a flow run graph on the v2 API

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_ARTIFACTS_ON_FLOW_RUN_GRAPH","title":"PREFECT_EXPERIMENTAL_ENABLE_ARTIFACTS_ON_FLOW_RUN_GRAPH = Setting(bool, default=True) module-attribute","text":"

Whether or not to enable artifacts on the flow run graph.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_STATES_ON_FLOW_RUN_GRAPH","title":"PREFECT_EXPERIMENTAL_ENABLE_STATES_ON_FLOW_RUN_GRAPH = Setting(bool, default=True) module-attribute","text":"

Whether or not to enable flow run states on the flow run graph.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_WORK_POOLS","title":"PREFECT_EXPERIMENTAL_ENABLE_WORK_POOLS = Setting(bool, default=True) module-attribute","text":"

Whether or not to enable experimental Prefect work pools.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_WORK_POOLS","title":"PREFECT_EXPERIMENTAL_WARN_WORK_POOLS = Setting(bool, default=False) module-attribute","text":"

Whether or not to warn when experimental Prefect work pools are used.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_WORKERS","title":"PREFECT_EXPERIMENTAL_ENABLE_WORKERS = Setting(bool, default=True) module-attribute","text":"

Whether or not to enable experimental Prefect workers.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_WORKERS","title":"PREFECT_EXPERIMENTAL_WARN_WORKERS = Setting(bool, default=False) module-attribute","text":"

Whether or not to warn when experimental Prefect workers are used.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_VISUALIZE","title":"PREFECT_EXPERIMENTAL_WARN_VISUALIZE = Setting(bool, default=False) module-attribute","text":"

Whether or not to warn when experimental Prefect visualize is used.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_ENHANCED_CANCELLATION","title":"PREFECT_EXPERIMENTAL_ENABLE_ENHANCED_CANCELLATION = Setting(bool, default=True) module-attribute","text":"

Whether or not to enable experimental enhanced flow run cancellation.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_ENHANCED_CANCELLATION","title":"PREFECT_EXPERIMENTAL_WARN_ENHANCED_CANCELLATION = Setting(bool, default=False) module-attribute","text":"

Whether or not to warn when experimental enhanced flow run cancellation is used.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_DEPLOYMENT_STATUS","title":"PREFECT_EXPERIMENTAL_ENABLE_DEPLOYMENT_STATUS = Setting(bool, default=True) module-attribute","text":"

Whether or not to enable deployment status in the UI

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_DEPLOYMENT_STATUS","title":"PREFECT_EXPERIMENTAL_WARN_DEPLOYMENT_STATUS = Setting(bool, default=False) module-attribute","text":"

Whether or not to warn when deployment status is used.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_FLOW_RUN_INPUT","title":"PREFECT_EXPERIMENTAL_FLOW_RUN_INPUT = Setting(bool, default=False) module-attribute","text":"

Whether or not to enable flow run input.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_FLOW_RUN_INPUT","title":"PREFECT_EXPERIMENTAL_WARN_FLOW_RUN_INPUT = Setting(bool, default=True) module-attribute","text":"

Whether or not to warn when flow run input is used.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_EVENTS","title":"PREFECT_EXPERIMENTAL_EVENTS = Setting(bool, default=False) module-attribute","text":"

Whether to enable Prefect's server-side event features. Note that Prefect Cloud clients will always emit events during flow and task runs regardless of this setting.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_PROCESS_LIMIT","title":"PREFECT_RUNNER_PROCESS_LIMIT = Setting(int, default=5) module-attribute","text":"

Maximum number of processes a runner will execute in parallel.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_POLL_FREQUENCY","title":"PREFECT_RUNNER_POLL_FREQUENCY = Setting(int, default=10) module-attribute","text":"

Number of seconds a runner should wait between queries for scheduled work.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_SERVER_MISSED_POLLS_TOLERANCE","title":"PREFECT_RUNNER_SERVER_MISSED_POLLS_TOLERANCE = Setting(int, default=2) module-attribute","text":"

Number of missed polls before a runner is considered unhealthy by its webserver.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_SERVER_HOST","title":"PREFECT_RUNNER_SERVER_HOST = Setting(str, default='localhost') module-attribute","text":"

The host address the runner's webserver should bind to.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_SERVER_PORT","title":"PREFECT_RUNNER_SERVER_PORT = Setting(int, default=8080) module-attribute","text":"

The port the runner's webserver should bind to.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_SERVER_LOG_LEVEL","title":"PREFECT_RUNNER_SERVER_LOG_LEVEL = Setting(str, default='error') module-attribute","text":"

The log level of the runner's webserver.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_SERVER_ENABLE","title":"PREFECT_RUNNER_SERVER_ENABLE = Setting(bool, default=False) module-attribute","text":"

Whether or not to enable the runner's webserver.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_HEARTBEAT_SECONDS","title":"PREFECT_WORKER_HEARTBEAT_SECONDS = Setting(float, default=30) module-attribute","text":"

Number of seconds a worker should wait between sending a heartbeat.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_QUERY_SECONDS","title":"PREFECT_WORKER_QUERY_SECONDS = Setting(float, default=10) module-attribute","text":"

Number of seconds a worker should wait between queries for scheduled flow runs.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_PREFETCH_SECONDS","title":"PREFECT_WORKER_PREFETCH_SECONDS = Setting(float, default=10) module-attribute","text":"

The number of seconds into the future a worker should query for scheduled flow runs. Can be used to compensate for infrastructure start up time for a worker.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_WEBSERVER_HOST","title":"PREFECT_WORKER_WEBSERVER_HOST = Setting(str, default='0.0.0.0') module-attribute","text":"

The host address the worker's webserver should bind to.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_WEBSERVER_PORT","title":"PREFECT_WORKER_WEBSERVER_PORT = Setting(int, default=8080) module-attribute","text":"

The port the worker's webserver should bind to.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_SCHEDULING_DEFAULT_STORAGE_BLOCK","title":"PREFECT_TASK_SCHEDULING_DEFAULT_STORAGE_BLOCK = Setting(str, default='local-file-system/prefect-task-scheduling') module-attribute","text":"

The block-type/block-document slug of a block to use as the default storage for autonomous tasks.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_SCHEDULING_DELETE_FAILED_SUBMISSIONS","title":"PREFECT_TASK_SCHEDULING_DELETE_FAILED_SUBMISSIONS = Setting(bool, default=True) module-attribute","text":"

Whether or not to delete failed task submissions from the database.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_SCHEDULING_MAX_SCHEDULED_QUEUE_SIZE","title":"PREFECT_TASK_SCHEDULING_MAX_SCHEDULED_QUEUE_SIZE = Setting(int, default=1000) module-attribute","text":"

The maximum number of scheduled tasks to queue for submission.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_SCHEDULING_MAX_RETRY_QUEUE_SIZE","title":"PREFECT_TASK_SCHEDULING_MAX_RETRY_QUEUE_SIZE = Setting(int, default=100) module-attribute","text":"

The maximum number of retries to queue for submission.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_SCHEDULING_PENDING_TASK_TIMEOUT","title":"PREFECT_TASK_SCHEDULING_PENDING_TASK_TIMEOUT = Setting(timedelta, default=timedelta(seconds=30)) module-attribute","text":"

How long before a PENDING task is made available to another task server. In practice, a task server should move a task from PENDING to RUNNING very quickly, so runs stuck in PENDING for a while are a sign that the task server may have crashed.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_EXTRA_RUNNER_ENDPOINTS","title":"PREFECT_EXPERIMENTAL_ENABLE_EXTRA_RUNNER_ENDPOINTS = Setting(bool, default=False) module-attribute","text":"

Whether or not to enable experimental runner webserver endpoints.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_ARTIFACTS","title":"PREFECT_EXPERIMENTAL_ENABLE_ARTIFACTS = Setting(bool, default=True) module-attribute","text":"

Whether or not to enable experimental Prefect artifacts.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_ARTIFACTS","title":"PREFECT_EXPERIMENTAL_WARN_ARTIFACTS = Setting(bool, default=False) module-attribute","text":"

Whether or not to warn when experimental Prefect artifacts are used.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_WORKSPACE_DASHBOARD","title":"PREFECT_EXPERIMENTAL_ENABLE_WORKSPACE_DASHBOARD = Setting(bool, default=True) module-attribute","text":"

Whether or not to enable the experimental workspace dashboard.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_WORKSPACE_DASHBOARD","title":"PREFECT_EXPERIMENTAL_WARN_WORKSPACE_DASHBOARD = Setting(bool, default=False) module-attribute","text":"

Whether or not to warn when the experimental workspace dashboard is enabled.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING","title":"PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING = Setting(bool, default=False) module-attribute","text":"

Whether or not to enable experimental task scheduling.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_WORK_QUEUE_STATUS","title":"PREFECT_EXPERIMENTAL_ENABLE_WORK_QUEUE_STATUS = Setting(bool, default=True) module-attribute","text":"

Whether or not to enable experimental work queue status in place of work queue health.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_NEW_ENGINE","title":"PREFECT_EXPERIMENTAL_ENABLE_NEW_ENGINE = Setting(bool, default=False) module-attribute","text":"

Whether or not to enable experimental new engine.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_DISABLE_SYNC_COMPAT","title":"PREFECT_EXPERIMENTAL_DISABLE_SYNC_COMPAT = Setting(bool, default=False) module-attribute","text":"

Whether or not to disable the sync_compatible decorator utility.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_DEFAULT_RESULT_STORAGE_BLOCK","title":"PREFECT_DEFAULT_RESULT_STORAGE_BLOCK = Setting(str, default=None) module-attribute","text":"

The block-type/block-document slug of a block to use as the default result storage.
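
For example, after saving a filesystem block, its slug can be used as the value of this setting (the block name and path below are hypothetical):

from prefect.filesystems import LocalFileSystem\n\n# Saving this block makes it addressable as 'local-file-system/shared-results'\nLocalFileSystem(basepath='/tmp/prefect-results').save('shared-results', overwrite=True)\n\n# Then: PREFECT_DEFAULT_RESULT_STORAGE_BLOCK='local-file-system/shared-results'\n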

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_DEFAULT_WORK_POOL_NAME","title":"PREFECT_DEFAULT_WORK_POOL_NAME = Setting(str, default=None) module-attribute","text":"

The default work pool to deploy to.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE","title":"PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE = Setting(str, default=None) module-attribute","text":"

The default Docker namespace to use when building images.

Can be either an organization/username or a registry URL with an organization/username.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_SERVE_BASE","title":"PREFECT_UI_SERVE_BASE = Setting(str, default='/') module-attribute","text":"

The base URL path to serve the Prefect UI from.

Defaults to the root path.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_STATIC_DIRECTORY","title":"PREFECT_UI_STATIC_DIRECTORY = Setting(str, default=None) module-attribute","text":"

The directory to serve static files from. This should be used when running into permissions issues when attempting to serve the UI from the default directory (for example, when running in a Docker container).

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_MESSAGING_BROKER","title":"PREFECT_MESSAGING_BROKER = Setting(str, default='prefect.server.utilities.messaging.memory') module-attribute","text":"

Which message broker implementation to use for the messaging system. It should point to a module that exports a Publisher and a Consumer class.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_MESSAGING_CACHE","title":"PREFECT_MESSAGING_CACHE = Setting(str, default='prefect.server.utilities.messaging.memory') module-attribute","text":"

Which cache implementation to use for the events system. Should point to a module that exports a Cache class.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EVENTS_MAXIMUM_LABELS_PER_RESOURCE","title":"PREFECT_EVENTS_MAXIMUM_LABELS_PER_RESOURCE = Setting(int, default=500) module-attribute","text":"

The maximum number of labels a resource may have.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EVENTS_MAXIMUM_RELATED_RESOURCES","title":"PREFECT_EVENTS_MAXIMUM_RELATED_RESOURCES = Setting(int, default=500) module-attribute","text":"

The maximum number of related resources an Event may have.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EVENTS_MAXIMUM_SIZE_BYTES","title":"PREFECT_EVENTS_MAXIMUM_SIZE_BYTES = Setting(int, default=1500000) module-attribute","text":"

The maximum size of an Event when serialized to JSON

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED","title":"PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED = Setting(bool, default=True) module-attribute","text":"

Whether or not to start the event debug logger service in the server application.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_TRIGGERS_ENABLED","title":"PREFECT_API_SERVICES_TRIGGERS_ENABLED = Setting(bool, default=True) module-attribute","text":"

Whether or not to start the triggers service in the server application.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EVENTS_EXPIRED_BUCKET_BUFFER","title":"PREFECT_EVENTS_EXPIRED_BUCKET_BUFFER = Setting(timedelta, default=timedelta(seconds=60)) module-attribute","text":"

The amount of time to retain expired automation buckets

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EVENTS_PROACTIVE_GRANULARITY","title":"PREFECT_EVENTS_PROACTIVE_GRANULARITY = Setting(timedelta, default=timedelta(seconds=5)) module-attribute","text":"

How frequently proactive automations are evaluated

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_EVENT_PERSISTER_ENABLED","title":"PREFECT_API_SERVICES_EVENT_PERSISTER_ENABLED = Setting(bool, default=True) module-attribute","text":"

Whether or not to start the event persister service in the server application.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_EVENT_PERSISTER_BATCH_SIZE","title":"PREFECT_API_SERVICES_EVENT_PERSISTER_BATCH_SIZE = Setting(int, default=20, gt=0) module-attribute","text":"

The number of events the event persister will attempt to insert in one batch.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_EVENT_PERSISTER_FLUSH_INTERVAL","title":"PREFECT_API_SERVICES_EVENT_PERSISTER_FLUSH_INTERVAL = Setting(float, default=5, gt=0.0) module-attribute","text":"

The maximum number of seconds between flushes of the event persister.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EVENTS_RETENTION_PERIOD","title":"PREFECT_EVENTS_RETENTION_PERIOD = Setting(timedelta, default=timedelta(days=7)) module-attribute","text":"

The amount of time to retain events in the database.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_EVENTS_STREAM_OUT_ENABLED","title":"PREFECT_API_EVENTS_STREAM_OUT_ENABLED = Setting(bool, default=True) module-attribute","text":"

Whether or not to allow streaming events out of the API via websockets.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_EVENTS_RELATED_RESOURCE_CACHE_TTL","title":"PREFECT_API_EVENTS_RELATED_RESOURCE_CACHE_TTL = Setting(timedelta, default=timedelta(minutes=5)) module-attribute","text":"

How long to cache related resource data for emitting server-side events.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Setting","title":"Setting","text":"

Bases: Generic[T]

Setting definition type.

Source code in prefect/settings.py
class Setting(Generic[T]):\n    \"\"\"\n    Setting definition type.\n    \"\"\"\n\n    def __init__(\n        self,\n        type: Type[T],\n        *,\n        deprecated: bool = False,\n        deprecated_start_date: Optional[str] = None,\n        deprecated_end_date: Optional[str] = None,\n        deprecated_help: str = \"\",\n        deprecated_when_message: str = \"\",\n        deprecated_when: Optional[Callable[[Any], bool]] = None,\n        deprecated_renamed_to: Optional[\"Setting[T]\"] = None,\n        value_callback: Optional[Callable[[\"Settings\", T], T]] = None,\n        is_secret: bool = False,\n        **kwargs: Any,\n    ) -> None:\n        self.field: fields.FieldInfo = Field(**kwargs)\n        self.type = type\n        self.value_callback = value_callback\n        self._name = None\n        self.is_secret = is_secret\n        self.deprecated = deprecated\n        self.deprecated_start_date = deprecated_start_date\n        self.deprecated_end_date = deprecated_end_date\n        self.deprecated_help = deprecated_help\n        self.deprecated_when = deprecated_when or (lambda _: True)\n        self.deprecated_when_message = deprecated_when_message\n        self.deprecated_renamed_to = deprecated_renamed_to\n        self.deprecated_renamed_from = None\n        self.__doc__ = self.field.description\n\n        # Validate the deprecation settings, will throw an error at setting definition\n        # time if the developer has not configured it correctly\n        if deprecated:\n            generate_deprecation_message(\n                name=\"...\",  # setting names not populated until after init\n                start_date=self.deprecated_start_date,\n                end_date=self.deprecated_end_date,\n                help=self.deprecated_help,\n                when=self.deprecated_when_message,\n            )\n\n        if deprecated_renamed_to is not None:\n            # Track the deprecation both ways\n            deprecated_renamed_to.deprecated_renamed_from = self\n\n    def value(self, bypass_callback: bool = False) -> T:\n        \"\"\"\n        Get the current value of a setting.\n\n        Example:\n        ```python\n        from prefect.settings import PREFECT_API_URL\n        PREFECT_API_URL.value()\n        ```\n        \"\"\"\n        return self.value_from(get_current_settings(), bypass_callback=bypass_callback)\n\n    def value_from(self, settings: \"Settings\", bypass_callback: bool = False) -> T:\n        \"\"\"\n        Get the value of a setting from a settings object\n\n        Example:\n        ```python\n        from prefect.settings import get_default_settings\n        PREFECT_API_URL.value_from(get_default_settings())\n        ```\n        \"\"\"\n        value = settings.value_of(self, bypass_callback=bypass_callback)\n\n        if not bypass_callback and self.deprecated and self.deprecated_when(value):\n            # Check if this setting is deprecated and someone is accessing the value\n            # via the old name\n            warnings.warn(self.deprecated_message, DeprecationWarning, stacklevel=3)\n\n            # If the the value is empty, return the new setting's value for compat\n            if value is None and self.deprecated_renamed_to is not None:\n                return self.deprecated_renamed_to.value_from(settings)\n\n        if not bypass_callback and self.deprecated_renamed_from is not None:\n            # Check if this setting is a rename of a deprecated setting and the\n            # deprecated setting is set and should be used for 
compatibility\n            deprecated_value = self.deprecated_renamed_from.value_from(\n                settings, bypass_callback=True\n            )\n            if deprecated_value is not None:\n                warnings.warn(\n                    (\n                        f\"{self.deprecated_renamed_from.deprecated_message} Because\"\n                        f\" {self.deprecated_renamed_from.name!r} is set it will be used\"\n                        f\" instead of {self.name!r} for backwards compatibility.\"\n                    ),\n                    DeprecationWarning,\n                    stacklevel=3,\n                )\n            return deprecated_value or value\n\n        return value\n\n    @property\n    def name(self):\n        if self._name:\n            return self._name\n\n        # Lookup the name on first access\n        for name, val in tuple(globals().items()):\n            if val == self:\n                self._name = name\n                return name\n\n        raise ValueError(\"Setting not found in `prefect.settings` module.\")\n\n    @name.setter\n    def name(self, value: str):\n        self._name = value\n\n    @property\n    def deprecated_message(self):\n        return generate_deprecation_message(\n            name=f\"Setting {self.name!r}\",\n            start_date=self.deprecated_start_date,\n            end_date=self.deprecated_end_date,\n            help=self.deprecated_help,\n            when=self.deprecated_when_message,\n        )\n\n    def __repr__(self) -> str:\n        return f\"<{self.name}: {self.type.__name__}>\"\n\n    def __bool__(self) -> bool:\n        \"\"\"\n        Returns a truthy check of the current value.\n        \"\"\"\n        return bool(self.value())\n\n    def __eq__(self, __o: object) -> bool:\n        return __o.__eq__(self.value())\n\n    def __hash__(self) -> int:\n        return hash((type(self), self.name))\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Setting.value","title":"value","text":"

Get the current value of a setting.

Example:

from prefect.settings import PREFECT_API_URL\nPREFECT_API_URL.value()\n

Source code in prefect/settings.py
def value(self, bypass_callback: bool = False) -> T:\n    \"\"\"\n    Get the current value of a setting.\n\n    Example:\n    ```python\n    from prefect.settings import PREFECT_API_URL\n    PREFECT_API_URL.value()\n    ```\n    \"\"\"\n    return self.value_from(get_current_settings(), bypass_callback=bypass_callback)\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Setting.value_from","title":"value_from","text":"

Get the value of a setting from a settings object

Example:

from prefect.settings import get_default_settings\nPREFECT_API_URL.value_from(get_default_settings())\n

Source code in prefect/settings.py
def value_from(self, settings: \"Settings\", bypass_callback: bool = False) -> T:\n    \"\"\"\n    Get the value of a setting from a settings object\n\n    Example:\n    ```python\n    from prefect.settings import get_default_settings\n    PREFECT_API_URL.value_from(get_default_settings())\n    ```\n    \"\"\"\n    value = settings.value_of(self, bypass_callback=bypass_callback)\n\n    if not bypass_callback and self.deprecated and self.deprecated_when(value):\n        # Check if this setting is deprecated and someone is accessing the value\n        # via the old name\n        warnings.warn(self.deprecated_message, DeprecationWarning, stacklevel=3)\n\n        # If the the value is empty, return the new setting's value for compat\n        if value is None and self.deprecated_renamed_to is not None:\n            return self.deprecated_renamed_to.value_from(settings)\n\n    if not bypass_callback and self.deprecated_renamed_from is not None:\n        # Check if this setting is a rename of a deprecated setting and the\n        # deprecated setting is set and should be used for compatibility\n        deprecated_value = self.deprecated_renamed_from.value_from(\n            settings, bypass_callback=True\n        )\n        if deprecated_value is not None:\n            warnings.warn(\n                (\n                    f\"{self.deprecated_renamed_from.deprecated_message} Because\"\n                    f\" {self.deprecated_renamed_from.name!r} is set it will be used\"\n                    f\" instead of {self.name!r} for backwards compatibility.\"\n                ),\n                DeprecationWarning,\n                stacklevel=3,\n            )\n        return deprecated_value or value\n\n    return value\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings","title":"Settings","text":"

Bases: SettingsFieldsMixin

Contains validated Prefect settings.

Settings should be accessed using the relevant Setting object. For example:

from prefect.settings import PREFECT_HOME\nPREFECT_HOME.value()\n

Accessing a setting attribute directly will ignore any value_callback mutations. This is not recommended:

from prefect.settings import Settings\nSettings().PREFECT_PROFILES_PATH  # PosixPath('${PREFECT_HOME}/profiles.toml')\n

Source code in prefect/settings.py
@add_cloudpickle_reduction\nclass Settings(SettingsFieldsMixin):\n    \"\"\"\n    Contains validated Prefect settings.\n\n    Settings should be accessed using the relevant `Setting` object. For example:\n    ```python\n    from prefect.settings import PREFECT_HOME\n    PREFECT_HOME.value()\n    ```\n\n    Accessing a setting attribute directly will ignore any `value_callback` mutations.\n    This is not recommended:\n    ```python\n    from prefect.settings import Settings\n    Settings().PREFECT_PROFILES_PATH  # PosixPath('${PREFECT_HOME}/profiles.toml')\n    ```\n    \"\"\"\n\n    def value_of(self, setting: Setting[T], bypass_callback: bool = False) -> T:\n        \"\"\"\n        Retrieve a setting's value.\n        \"\"\"\n        value = getattr(self, setting.name)\n        if setting.value_callback and not bypass_callback:\n            value = setting.value_callback(self, value)\n        return value\n\n    @validator(PREFECT_LOGGING_LEVEL.name, PREFECT_LOGGING_SERVER_LEVEL.name)\n    def check_valid_log_level(cls, value):\n        if isinstance(value, str):\n            value = value.upper()\n        logging._checkLevel(value)\n        return value\n\n    @root_validator\n    def post_root_validators(cls, values):\n        \"\"\"\n        Add root validation functions for settings here.\n        \"\"\"\n        # TODO: We could probably register these dynamically but this is the simpler\n        #       approach for now. We can explore more interesting validation features\n        #       in the future.\n        values = max_log_size_smaller_than_batch_size(values)\n        values = warn_on_database_password_value_without_usage(values)\n        if not values[\"PREFECT_SILENCE_API_URL_MISCONFIGURATION\"]:\n            values = warn_on_misconfigured_api_url(values)\n        return values\n\n    def copy_with_update(\n        self,\n        updates: Mapping[Setting, Any] = None,\n        set_defaults: Mapping[Setting, Any] = None,\n        restore_defaults: Iterable[Setting] = None,\n    ) -> \"Settings\":\n        \"\"\"\n        Create a new `Settings` object with validation.\n\n        Arguments:\n            updates: A mapping of settings to new values. Existing values for the\n                given settings will be overridden.\n            set_defaults: A mapping of settings to new default values. 
Existing values for\n                the given settings will only be overridden if they were not set.\n            restore_defaults: An iterable of settings to restore to their default values.\n\n        Returns:\n            A new `Settings` object.\n        \"\"\"\n        updates = updates or {}\n        set_defaults = set_defaults or {}\n        restore_defaults = restore_defaults or set()\n        restore_defaults_names = {setting.name for setting in restore_defaults}\n\n        return self.__class__(\n            **{\n                **{setting.name: value for setting, value in set_defaults.items()},\n                **self.dict(exclude_unset=True, exclude=restore_defaults_names),\n                **{setting.name: value for setting, value in updates.items()},\n            }\n        )\n\n    def with_obfuscated_secrets(self):\n        \"\"\"\n        Returns a copy of this settings object with secret setting values obfuscated.\n        \"\"\"\n        settings = self.copy(\n            update={\n                setting.name: obfuscate(self.value_of(setting))\n                for setting in SETTING_VARIABLES.values()\n                if setting.is_secret\n                # Exclude deprecated settings with null values to avoid warnings\n                and not (setting.deprecated and self.value_of(setting) is None)\n            }\n        )\n        # Ensure that settings that have not been marked as \"set\" before are still so\n        # after we have updated their value above\n        settings.__fields_set__.intersection_update(self.__fields_set__)\n        return settings\n\n    def hash_key(self) -> str:\n        \"\"\"\n        Return a hash key for the settings object.  This is needed since some\n        settings may be unhashable.  An example is lists.\n        \"\"\"\n        env_variables = self.to_environment_variables()\n        return str(hash(tuple((key, value) for key, value in env_variables.items())))\n\n    def to_environment_variables(\n        self, include: Iterable[Setting] = None, exclude_unset: bool = False\n    ) -> Dict[str, str]:\n        \"\"\"\n        Convert the settings object to environment variables.\n\n        Note that setting values will not be run through their `value_callback` allowing\n        dynamic resolution to occur when loaded from the returned environment.\n\n        Args:\n            include_keys: An iterable of settings to include in the return value.\n                If not set, all settings are used.\n            exclude_unset: Only include settings that have been set (i.e. the value is\n                not from the default). 
If set, unset keys will be dropped even if they\n                are set in `include_keys`.\n\n        Returns:\n            A dictionary of settings with values cast to strings\n        \"\"\"\n        include = set(include or SETTING_VARIABLES.values())\n\n        if exclude_unset:\n            set_keys = {\n                # Collect all of the \"set\" keys and cast to `Setting` objects\n                SETTING_VARIABLES[key]\n                for key in self.dict(exclude_unset=True)\n            }\n            include.intersection_update(set_keys)\n\n        # Validate the types of items in `include` to prevent exclusion bugs\n        for key in include:\n            if not isinstance(key, Setting):\n                raise TypeError(\n                    \"Invalid type {type(key).__name__!r} for key in `include`.\"\n                )\n\n        env = {\n            # Use `getattr` instead of `value_of` to avoid value callback resolution\n            key: getattr(self, key)\n            for key, setting in SETTING_VARIABLES.items()\n            if setting in include\n        }\n\n        # Cast to strings and drop null values\n        return {key: str(value) for key, value in env.items() if value is not None}\n\n    class Config:\n        frozen = True\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.value_of","title":"value_of","text":"

Retrieve a setting's value.
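A minimal usage sketch (assuming a typical environment) showing how value_of resolves a setting through its value_callback, unlike direct attribute access:

```python
from prefect.settings import PREFECT_LOGGING_LEVEL, get_current_settings

settings = get_current_settings()
# Applies the setting's `value_callback` (e.g. debug mode forces DEBUG)
print(settings.value_of(PREFECT_LOGGING_LEVEL))
```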

Source code in prefect/settings.py
def value_of(self, setting: Setting[T], bypass_callback: bool = False) -> T:\n    \"\"\"\n    Retrieve a setting's value.\n    \"\"\"\n    value = getattr(self, setting.name)\n    if setting.value_callback and not bypass_callback:\n        value = setting.value_callback(self, value)\n    return value\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.post_root_validators","title":"post_root_validators","text":"

Add root validation functions for settings here.

Source code in prefect/settings.py
@root_validator\ndef post_root_validators(cls, values):\n    \"\"\"\n    Add root validation functions for settings here.\n    \"\"\"\n    # TODO: We could probably register these dynamically but this is the simpler\n    #       approach for now. We can explore more interesting validation features\n    #       in the future.\n    values = max_log_size_smaller_than_batch_size(values)\n    values = warn_on_database_password_value_without_usage(values)\n    if not values[\"PREFECT_SILENCE_API_URL_MISCONFIGURATION\"]:\n        values = warn_on_misconfigured_api_url(values)\n    return values\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.with_obfuscated_secrets","title":"with_obfuscated_secrets","text":"

Returns a copy of this settings object with secret setting values obfuscated.

Source code in prefect/settings.py
def with_obfuscated_secrets(self):\n    \"\"\"\n    Returns a copy of this settings object with secret setting values obfuscated.\n    \"\"\"\n    settings = self.copy(\n        update={\n            setting.name: obfuscate(self.value_of(setting))\n            for setting in SETTING_VARIABLES.values()\n            if setting.is_secret\n            # Exclude deprecated settings with null values to avoid warnings\n            and not (setting.deprecated and self.value_of(setting) is None)\n        }\n    )\n    # Ensure that settings that have not been marked as \"set\" before are still so\n    # after we have updated their value above\n    settings.__fields_set__.intersection_update(self.__fields_set__)\n    return settings\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.hash_key","title":"hash_key","text":"

Return a hash key for the settings object. This is needed since some settings may be unhashable. An example is lists.

Source code in prefect/settings.py
def hash_key(self) -> str:\n    \"\"\"\n    Return a hash key for the settings object.  This is needed since some\n    settings may be unhashable.  An example is lists.\n    \"\"\"\n    env_variables = self.to_environment_variables()\n    return str(hash(tuple((key, value) for key, value in env_variables.items())))\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.to_environment_variables","title":"to_environment_variables","text":"

Convert the settings object to environment variables.

Note that setting values will not be run through their value_callback, allowing dynamic resolution to occur when loaded from the returned environment.

Parameters:

include_keys: An iterable of settings to include in the return value. If not set, all settings are used. (required)

exclude_unset (bool): Only include settings that have been set (i.e. the value is not from the default). If set, unset keys will be dropped even if they are set in include_keys. (default: False)

Returns:

Dict[str, str]: A dictionary of settings with values cast to strings
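For illustration, a short sketch exporting a single setting; the printed output depends on your configuration:

```python
from prefect.settings import PREFECT_API_URL, get_current_settings

settings = get_current_settings()
# Restrict the export to one setting; unset/null values are dropped
env = settings.to_environment_variables(include={PREFECT_API_URL})
print(env)  # e.g. {'PREFECT_API_URL': 'http://127.0.0.1:4200/api'} when the URL is set
```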

Source code in prefect/settings.py
def to_environment_variables(\n    self, include: Iterable[Setting] = None, exclude_unset: bool = False\n) -> Dict[str, str]:\n    \"\"\"\n    Convert the settings object to environment variables.\n\n    Note that setting values will not be run through their `value_callback` allowing\n    dynamic resolution to occur when loaded from the returned environment.\n\n    Args:\n        include_keys: An iterable of settings to include in the return value.\n            If not set, all settings are used.\n        exclude_unset: Only include settings that have been set (i.e. the value is\n            not from the default). If set, unset keys will be dropped even if they\n            are set in `include_keys`.\n\n    Returns:\n        A dictionary of settings with values cast to strings\n    \"\"\"\n    include = set(include or SETTING_VARIABLES.values())\n\n    if exclude_unset:\n        set_keys = {\n            # Collect all of the \"set\" keys and cast to `Setting` objects\n            SETTING_VARIABLES[key]\n            for key in self.dict(exclude_unset=True)\n        }\n        include.intersection_update(set_keys)\n\n    # Validate the types of items in `include` to prevent exclusion bugs\n    for key in include:\n        if not isinstance(key, Setting):\n            raise TypeError(\n                \"Invalid type {type(key).__name__!r} for key in `include`.\"\n            )\n\n    env = {\n        # Use `getattr` instead of `value_of` to avoid value callback resolution\n        key: getattr(self, key)\n        for key, setting in SETTING_VARIABLES.items()\n        if setting in include\n    }\n\n    # Cast to strings and drop null values\n    return {key: str(value) for key, value in env.items() if value is not None}\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Profile","title":"Profile","text":"

Bases: BaseModel

A user profile containing settings.

Source code in prefect/settings.py
class Profile(BaseModel):\n    \"\"\"\n    A user profile containing settings.\n    \"\"\"\n\n    name: str\n    settings: Dict[Setting, Any] = Field(default_factory=dict)\n    source: Optional[Path]\n\n    @validator(\"settings\", pre=True)\n    def map_names_to_settings(cls, value):\n        return validate_settings(value)\n\n    def validate_settings(self) -> None:\n        \"\"\"\n        Validate the settings contained in this profile.\n\n        Raises:\n            pydantic.ValidationError: When settings do not have valid values.\n        \"\"\"\n        # Create a new `Settings` instance with the settings from this profile relying\n        # on Pydantic validation to raise an error.\n        # We do not return the `Settings` object because this is not the recommended\n        # path for constructing settings with a profile. See `use_profile` instead.\n        Settings(**{setting.name: value for setting, value in self.settings.items()})\n\n    def convert_deprecated_renamed_settings(self) -> List[Tuple[Setting, Setting]]:\n        \"\"\"\n        Update settings in place to replace deprecated settings with new settings when\n        renamed.\n\n        Returns a list of tuples with the old and new setting.\n        \"\"\"\n        changed = []\n        for setting in tuple(self.settings):\n            if (\n                setting.deprecated\n                and setting.deprecated_renamed_to\n                and setting.deprecated_renamed_to not in self.settings\n            ):\n                self.settings[setting.deprecated_renamed_to] = self.settings.pop(\n                    setting\n                )\n                changed.append((setting, setting.deprecated_renamed_to))\n        return changed\n\n    class Config:\n        arbitrary_types_allowed = True\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Profile.validate_settings","title":"validate_settings","text":"

Validate the settings contained in this profile.

Raises:

pydantic.ValidationError: When settings do not have valid values.
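A brief sketch; the profile name and API URL value here are only examples:

```python
from prefect.settings import Profile

profile = Profile(
    name="example",
    settings={"PREFECT_API_URL": "http://127.0.0.1:4200/api"},
)
# Raises pydantic.ValidationError if any value is invalid for its setting
profile.validate_settings()
```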

Source code in prefect/settings.py
def validate_settings(self) -> None:\n    \"\"\"\n    Validate the settings contained in this profile.\n\n    Raises:\n        pydantic.ValidationError: When settings do not have valid values.\n    \"\"\"\n    # Create a new `Settings` instance with the settings from this profile relying\n    # on Pydantic validation to raise an error.\n    # We do not return the `Settings` object because this is not the recommended\n    # path for constructing settings with a profile. See `use_profile` instead.\n    Settings(**{setting.name: value for setting, value in self.settings.items()})\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Profile.convert_deprecated_renamed_settings","title":"convert_deprecated_renamed_settings","text":"

Update settings in place to replace deprecated settings with new settings when renamed.

Returns a list of tuples with the old and new setting.

Source code in prefect/settings.py
def convert_deprecated_renamed_settings(self) -> List[Tuple[Setting, Setting]]:\n    \"\"\"\n    Update settings in place to replace deprecated settings with new settings when\n    renamed.\n\n    Returns a list of tuples with the old and new setting.\n    \"\"\"\n    changed = []\n    for setting in tuple(self.settings):\n        if (\n            setting.deprecated\n            and setting.deprecated_renamed_to\n            and setting.deprecated_renamed_to not in self.settings\n        ):\n            self.settings[setting.deprecated_renamed_to] = self.settings.pop(\n                setting\n            )\n            changed.append((setting, setting.deprecated_renamed_to))\n    return changed\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection","title":"ProfilesCollection","text":"

\" A utility class for working with a collection of profiles.

Profiles in the collection must have unique names.

The collection may store the name of the active profile.
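A small sketch constructing a collection in memory (profile name and setting value are illustrative):

```python
from prefect.settings import Profile, ProfilesCollection

collection = ProfilesCollection(
    profiles=[Profile(name="dev", settings={"PREFECT_API_URL": "http://127.0.0.1:4200/api"})],
    active="dev",
)
print(collection.names)           # {'dev'}
print(collection.active_profile)  # the 'dev' Profile
```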

Source code in prefect/settings.py
class ProfilesCollection:\n    \"\"\" \"\n    A utility class for working with a collection of profiles.\n\n    Profiles in the collection must have unique names.\n\n    The collection may store the name of the active profile.\n    \"\"\"\n\n    def __init__(\n        self, profiles: Iterable[Profile], active: Optional[str] = None\n    ) -> None:\n        self.profiles_by_name = {profile.name: profile for profile in profiles}\n        self.active_name = active\n\n    @property\n    def names(self) -> Set[str]:\n        \"\"\"\n        Return a set of profile names in this collection.\n        \"\"\"\n        return set(self.profiles_by_name.keys())\n\n    @property\n    def active_profile(self) -> Optional[Profile]:\n        \"\"\"\n        Retrieve the active profile in this collection.\n        \"\"\"\n        if self.active_name is None:\n            return None\n        return self[self.active_name]\n\n    def set_active(self, name: Optional[str], check: bool = True):\n        \"\"\"\n        Set the active profile name in the collection.\n\n        A null value may be passed to indicate that this collection does not determine\n        the active profile.\n        \"\"\"\n        if check and name is not None and name not in self.names:\n            raise ValueError(f\"Unknown profile name {name!r}.\")\n        self.active_name = name\n\n    def update_profile(\n        self, name: str, settings: Mapping[Union[Dict, str], Any], source: Path = None\n    ) -> Profile:\n        \"\"\"\n        Add a profile to the collection or update the existing on if the name is already\n        present in this collection.\n\n        If updating an existing profile, the settings will be merged. Settings can\n        be dropped from the existing profile by setting them to `None` in the new\n        profile.\n\n        Returns the new profile object.\n        \"\"\"\n        existing = self.profiles_by_name.get(name)\n\n        # Convert the input to a `Profile` to cast settings to the correct type\n        profile = Profile(name=name, settings=settings, source=source)\n\n        if existing:\n            new_settings = {**existing.settings, **profile.settings}\n\n            # Drop null keys to restore to default\n            for key, value in tuple(new_settings.items()):\n                if value is None:\n                    new_settings.pop(key)\n\n            new_profile = Profile(\n                name=profile.name,\n                settings=new_settings,\n                source=source or profile.source,\n            )\n        else:\n            new_profile = profile\n\n        self.profiles_by_name[new_profile.name] = new_profile\n\n        return new_profile\n\n    def add_profile(self, profile: Profile) -> None:\n        \"\"\"\n        Add a profile to the collection.\n\n        If the profile name already exists, an exception will be raised.\n        \"\"\"\n        if profile.name in self.profiles_by_name:\n            raise ValueError(\n                f\"Profile name {profile.name!r} already exists in collection.\"\n            )\n\n        self.profiles_by_name[profile.name] = profile\n\n    def remove_profile(self, name: str) -> None:\n        \"\"\"\n        Remove a profile from the collection.\n        \"\"\"\n        self.profiles_by_name.pop(name)\n\n    def without_profile_source(self, path: Optional[Path]) -> \"ProfilesCollection\":\n        \"\"\"\n        Remove profiles that were loaded from a given path.\n\n        Returns a new collection.\n        \"\"\"\n        return 
ProfilesCollection(\n            [\n                profile\n                for profile in self.profiles_by_name.values()\n                if profile.source != path\n            ],\n            active=self.active_name,\n        )\n\n    def to_dict(self):\n        \"\"\"\n        Convert to a dictionary suitable for writing to disk.\n        \"\"\"\n        return {\n            \"active\": self.active_name,\n            \"profiles\": {\n                profile.name: {\n                    setting.name: value for setting, value in profile.settings.items()\n                }\n                for profile in self.profiles_by_name.values()\n            },\n        }\n\n    def __getitem__(self, name: str) -> Profile:\n        return self.profiles_by_name[name]\n\n    def __iter__(self):\n        return self.profiles_by_name.__iter__()\n\n    def items(self):\n        return self.profiles_by_name.items()\n\n    def __eq__(self, __o: object) -> bool:\n        if not isinstance(__o, ProfilesCollection):\n            return False\n\n        return (\n            self.profiles_by_name == __o.profiles_by_name\n            and self.active_name == __o.active_name\n        )\n\n    def __repr__(self) -> str:\n        return (\n            f\"ProfilesCollection(profiles={list(self.profiles_by_name.values())!r},\"\n            f\" active={self.active_name!r})>\"\n        )\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.names","title":"names: Set[str] property","text":"

Return a set of profile names in this collection.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.active_profile","title":"active_profile: Optional[Profile] property","text":"

Retrieve the active profile in this collection.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.set_active","title":"set_active","text":"

Set the active profile name in the collection.

A null value may be passed to indicate that this collection does not determine the active profile.

Source code in prefect/settings.py
def set_active(self, name: Optional[str], check: bool = True):\n    \"\"\"\n    Set the active profile name in the collection.\n\n    A null value may be passed to indicate that this collection does not determine\n    the active profile.\n    \"\"\"\n    if check and name is not None and name not in self.names:\n        raise ValueError(f\"Unknown profile name {name!r}.\")\n    self.active_name = name\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.update_profile","title":"update_profile","text":"

Add a profile to the collection or update the existing one if the name is already present in this collection.

If updating an existing profile, the settings will be merged. Settings can be dropped from the existing profile by setting them to None in the new profile.

Returns the new profile object.
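A minimal sketch merging a new value into an existing profile (names and values are illustrative):

```python
from prefect.settings import Profile, ProfilesCollection

collection = ProfilesCollection(profiles=[Profile(name="dev", settings={})])
# Merges with any existing settings; passing None for a key would drop it
collection.update_profile("dev", {"PREFECT_LOGGING_LEVEL": "DEBUG"})
print(collection["dev"].settings)
```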

Source code in prefect/settings.py
def update_profile(\n    self, name: str, settings: Mapping[Union[Dict, str], Any], source: Path = None\n) -> Profile:\n    \"\"\"\n    Add a profile to the collection or update the existing on if the name is already\n    present in this collection.\n\n    If updating an existing profile, the settings will be merged. Settings can\n    be dropped from the existing profile by setting them to `None` in the new\n    profile.\n\n    Returns the new profile object.\n    \"\"\"\n    existing = self.profiles_by_name.get(name)\n\n    # Convert the input to a `Profile` to cast settings to the correct type\n    profile = Profile(name=name, settings=settings, source=source)\n\n    if existing:\n        new_settings = {**existing.settings, **profile.settings}\n\n        # Drop null keys to restore to default\n        for key, value in tuple(new_settings.items()):\n            if value is None:\n                new_settings.pop(key)\n\n        new_profile = Profile(\n            name=profile.name,\n            settings=new_settings,\n            source=source or profile.source,\n        )\n    else:\n        new_profile = profile\n\n    self.profiles_by_name[new_profile.name] = new_profile\n\n    return new_profile\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.add_profile","title":"add_profile","text":"

Add a profile to the collection.

If the profile name already exists, an exception will be raised.

Source code in prefect/settings.py
def add_profile(self, profile: Profile) -> None:\n    \"\"\"\n    Add a profile to the collection.\n\n    If the profile name already exists, an exception will be raised.\n    \"\"\"\n    if profile.name in self.profiles_by_name:\n        raise ValueError(\n            f\"Profile name {profile.name!r} already exists in collection.\"\n        )\n\n    self.profiles_by_name[profile.name] = profile\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.remove_profile","title":"remove_profile","text":"

Remove a profile from the collection.

Source code in prefect/settings.py
def remove_profile(self, name: str) -> None:\n    \"\"\"\n    Remove a profile from the collection.\n    \"\"\"\n    self.profiles_by_name.pop(name)\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.without_profile_source","title":"without_profile_source","text":"

Remove profiles that were loaded from a given path.

Returns a new collection.

Source code in prefect/settings.py
def without_profile_source(self, path: Optional[Path]) -> \"ProfilesCollection\":\n    \"\"\"\n    Remove profiles that were loaded from a given path.\n\n    Returns a new collection.\n    \"\"\"\n    return ProfilesCollection(\n        [\n            profile\n            for profile in self.profiles_by_name.values()\n            if profile.source != path\n        ],\n        active=self.active_name,\n    )\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_extra_loggers","title":"get_extra_loggers","text":"

value_callback for PREFECT_LOGGING_EXTRA_LOGGERS that parses the CSV string into a list and trims whitespace from logger names.
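A short sketch of the behavior, assuming the environment variable is set before settings are loaded; the logger names are illustrative:

```python
import os

from prefect.settings import PREFECT_LOGGING_EXTRA_LOGGERS, get_settings_from_env

os.environ["PREFECT_LOGGING_EXTRA_LOGGERS"] = "my_lib, another_lib"
# The CSV string is split and each logger name is stripped of whitespace
print(PREFECT_LOGGING_EXTRA_LOGGERS.value_from(get_settings_from_env()))
# ['my_lib', 'another_lib']
```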

Source code in prefect/settings.py
def get_extra_loggers(_: \"Settings\", value: str) -> List[str]:\n    \"\"\"\n    `value_callback` for `PREFECT_LOGGING_EXTRA_LOGGERS`that parses the CSV string into a\n    list and trims whitespace from logger names.\n    \"\"\"\n    return [name.strip() for name in value.split(\",\")] if value else []\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.debug_mode_log_level","title":"debug_mode_log_level","text":"

value_callback for PREFECT_LOGGING_LEVEL that overrides the log level to DEBUG when debug mode is enabled.

Source code in prefect/settings.py
def debug_mode_log_level(settings, value):\n    \"\"\"\n    `value_callback` for `PREFECT_LOGGING_LEVEL` that overrides the log level to DEBUG\n    when debug mode is enabled.\n    \"\"\"\n    if PREFECT_DEBUG_MODE.value_from(settings):\n        return \"DEBUG\"\n    else:\n        return value\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.only_return_value_in_test_mode","title":"only_return_value_in_test_mode","text":"

value_callback for PREFECT_TEST_SETTING that only allows access during test mode.

Source code in prefect/settings.py
def only_return_value_in_test_mode(settings, value):\n    \"\"\"\n    `value_callback` for `PREFECT_TEST_SETTING` that only allows access during test mode\n    \"\"\"\n    if PREFECT_TEST_MODE.value_from(settings):\n        return value\n    else:\n        return None\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.default_ui_api_url","title":"default_ui_api_url","text":"

value_callback for PREFECT_UI_API_URL that sets the default value to relative path '/api', otherwise it constructs an API URL from the API settings.

Source code in prefect/settings.py
def default_ui_api_url(settings, value):\n    \"\"\"\n    `value_callback` for `PREFECT_UI_API_URL` that sets the default value to\n    relative path '/api', otherwise it constructs an API URL from the API settings.\n    \"\"\"\n    if value is None:\n        # Set a default value\n        value = \"/api\"\n\n    return template_with_settings(\n        PREFECT_SERVER_API_HOST, PREFECT_SERVER_API_PORT, PREFECT_API_URL\n    )(settings, value)\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.status_codes_as_integers_in_range","title":"status_codes_as_integers_in_range","text":"

value_callback for PREFECT_CLIENT_RETRY_EXTRA_CODES that ensures status codes are integers in the range 100-599.
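A brief sketch of the resulting value; the status codes here are only examples:

```python
from prefect.settings import PREFECT_CLIENT_RETRY_EXTRA_CODES, temporary_settings

with temporary_settings(updates={PREFECT_CLIENT_RETRY_EXTRA_CODES: "429,502"}):
    # The comma-separated string is validated and parsed into a set of integers
    print(PREFECT_CLIENT_RETRY_EXTRA_CODES.value())  # e.g. {429, 502}
```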

Source code in prefect/settings.py
def status_codes_as_integers_in_range(_, value):\n    \"\"\"\n    `value_callback` for `PREFECT_CLIENT_RETRY_EXTRA_CODES` that ensures status codes\n    are integers in the range 100-599.\n    \"\"\"\n    if value == \"\":\n        return set()\n\n    values = {v.strip() for v in value.split(\",\")}\n\n    if any(not v.isdigit() or int(v) < 100 or int(v) > 599 for v in values):\n        raise ValueError(\n            \"PREFECT_CLIENT_RETRY_EXTRA_CODES must be a comma separated list of \"\n            \"integers between 100 and 599.\"\n        )\n\n    values = {int(v) for v in values}\n    return values\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.template_with_settings","title":"template_with_settings","text":"

Returns a value_callback that will template the given settings into the runtime value for the setting.
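A sketch of building and applying a templater directly; the URL template is illustrative:

```python
from prefect.settings import (
    PREFECT_SERVER_API_HOST,
    PREFECT_SERVER_API_PORT,
    get_current_settings,
    template_with_settings,
)

templater = template_with_settings(PREFECT_SERVER_API_HOST, PREFECT_SERVER_API_PORT)
# Substitutes the current values of the upstream settings into the template string
value = templater(
    get_current_settings(),
    "http://${PREFECT_SERVER_API_HOST}:${PREFECT_SERVER_API_PORT}/api",
)
print(value)  # e.g. http://127.0.0.1:4200/api with default settings
```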

Source code in prefect/settings.py
def template_with_settings(*upstream_settings: Setting) -> Callable[[\"Settings\", T], T]:\n    \"\"\"\n    Returns a `value_callback` that will template the given settings into the runtime\n    value for the setting.\n    \"\"\"\n\n    def templater(settings, value):\n        if value is None:\n            return value  # Do not attempt to template a null string\n\n        original_type = type(value)\n        template_values = {\n            setting.name: setting.value_from(settings) for setting in upstream_settings\n        }\n        template = string.Template(str(value))\n        return original_type(template.substitute(template_values))\n\n    return templater\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.max_log_size_smaller_than_batch_size","title":"max_log_size_smaller_than_batch_size","text":"

Validator for settings asserting that the batch size and max log size are compatible.

Source code in prefect/settings.py
def max_log_size_smaller_than_batch_size(values):\n    \"\"\"\n    Validator for settings asserting the batch size and match log size are compatible\n    \"\"\"\n    if (\n        values[\"PREFECT_LOGGING_TO_API_BATCH_SIZE\"]\n        < values[\"PREFECT_LOGGING_TO_API_MAX_LOG_SIZE\"]\n    ):\n        raise ValueError(\n            \"`PREFECT_LOGGING_TO_API_MAX_LOG_SIZE` cannot be larger than\"\n            \" `PREFECT_LOGGING_TO_API_BATCH_SIZE`\"\n        )\n    return values\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.warn_on_database_password_value_without_usage","title":"warn_on_database_password_value_without_usage","text":"

Validator for settings warning if the database password is set but not used.

Source code in prefect/settings.py
def warn_on_database_password_value_without_usage(values):\n    \"\"\"\n    Validator for settings warning if the database password is set but not used.\n    \"\"\"\n    value = values[\"PREFECT_API_DATABASE_PASSWORD\"]\n    if (\n        value\n        and not value.startswith(OBFUSCATED_PREFIX)\n        and (\n            \"PREFECT_API_DATABASE_PASSWORD\"\n            not in values[\"PREFECT_API_DATABASE_CONNECTION_URL\"]\n        )\n    ):\n        warnings.warn(\n            \"PREFECT_API_DATABASE_PASSWORD is set but not included in the \"\n            \"PREFECT_API_DATABASE_CONNECTION_URL. \"\n            \"The provided password will be ignored.\"\n        )\n    return values\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.warn_on_misconfigured_api_url","title":"warn_on_misconfigured_api_url","text":"

Validator for settings warning if the API URL is misconfigured.

Source code in prefect/settings.py
def warn_on_misconfigured_api_url(values):\n    \"\"\"\n    Validator for settings warning if the API URL is misconfigured.\n    \"\"\"\n    api_url = values[\"PREFECT_API_URL\"]\n    if api_url is not None:\n        misconfigured_mappings = {\n            \"app.prefect.cloud\": (\n                \"`PREFECT_API_URL` points to `app.prefect.cloud`. Did you\"\n                \" mean `api.prefect.cloud`?\"\n            ),\n            \"account/\": (\n                \"`PREFECT_API_URL` uses `/account/` but should use `/accounts/`.\"\n            ),\n            \"workspace/\": (\n                \"`PREFECT_API_URL` uses `/workspace/` but should use `/workspaces/`.\"\n            ),\n        }\n        warnings_list = []\n\n        for misconfig, warning in misconfigured_mappings.items():\n            if misconfig in api_url:\n                warnings_list.append(warning)\n\n        parsed_url = urlparse(api_url)\n        if parsed_url.path and not parsed_url.path.startswith(\"/api\"):\n            warnings_list.append(\n                \"`PREFECT_API_URL` should have `/api` after the base URL.\"\n            )\n\n        if warnings_list:\n            example = 'e.g. PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"'\n            warnings_list.append(example)\n\n            warnings.warn(\"\\n\".join(warnings_list), stacklevel=2)\n\n    return values\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_current_settings","title":"get_current_settings","text":"

Returns a settings object populated with values from the current settings context or, if no settings context is active, the environment.

Source code in prefect/settings.py
def get_current_settings() -> Settings:\n    \"\"\"\n    Returns a settings object populated with values from the current settings context\n    or, if no settings context is active, the environment.\n    \"\"\"\n    from prefect.context import SettingsContext\n\n    settings_context = SettingsContext.get()\n    if settings_context is not None:\n        return settings_context.settings\n\n    return get_settings_from_env()\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_settings_from_env","title":"get_settings_from_env","text":"

Returns a settings object populated with default values and overrides from environment variables, ignoring any values in profiles.

Calls with the same environment return a cached object instead of reconstructing to avoid validation overhead.

Source code in prefect/settings.py
def get_settings_from_env() -> Settings:\n    \"\"\"\n    Returns a settings object populated with default values and overrides from\n    environment variables, ignoring any values in profiles.\n\n    Calls with the same environment return a cached object instead of reconstructing\n    to avoid validation overhead.\n    \"\"\"\n    # Since os.environ is a Dict[str, str] we can safely hash it by contents, but we\n    # must be careful to avoid hashing a generator instead of a tuple\n    cache_key = hash(tuple((key, value) for key, value in os.environ.items()))\n\n    if cache_key not in _FROM_ENV_CACHE:\n        _FROM_ENV_CACHE[cache_key] = Settings()\n\n    return _FROM_ENV_CACHE[cache_key]\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_default_settings","title":"get_default_settings","text":"

Returns a settings object populated with default values, ignoring any overrides from environment variables or profiles.

This is cached since the defaults should not change during the lifetime of the module.

Source code in prefect/settings.py
def get_default_settings() -> Settings:\n    \"\"\"\n    Returns a settings object populated with default values, ignoring any overrides\n    from environment variables or profiles.\n\n    This is cached since the defaults should not change during the lifetime of the\n    module.\n    \"\"\"\n    global _DEFAULTS_CACHE\n\n    if not _DEFAULTS_CACHE:\n        old = os.environ\n        try:\n            os.environ = {}\n            settings = get_settings_from_env()\n        finally:\n            os.environ = old\n\n        _DEFAULTS_CACHE = settings\n\n    return _DEFAULTS_CACHE\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.temporary_settings","title":"temporary_settings","text":"

Temporarily override the current settings by entering a new profile.

See Settings.copy_with_update for details on different argument behavior.

Examples:

>>> from prefect.settings import PREFECT_API_URL\n>>>\n>>> with temporary_settings(updates={PREFECT_API_URL: \"foo\"}):\n>>>    assert PREFECT_API_URL.value() == \"foo\"\n>>>\n>>>    with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"}):\n>>>         assert PREFECT_API_URL.value() == \"foo\"\n>>>\n>>>    with temporary_settings(restore_defaults={PREFECT_API_URL}):\n>>>         assert PREFECT_API_URL.value() is None\n>>>\n>>>         with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"})\n>>>             assert PREFECT_API_URL.value() == \"bar\"\n>>> assert PREFECT_API_URL.value() is None\n
Source code in prefect/settings.py
@contextmanager\ndef temporary_settings(\n    updates: Optional[Mapping[Setting[T], Any]] = None,\n    set_defaults: Optional[Mapping[Setting[T], Any]] = None,\n    restore_defaults: Optional[Iterable[Setting[T]]] = None,\n) -> Generator[Settings, None, None]:\n    \"\"\"\n    Temporarily override the current settings by entering a new profile.\n\n    See `Settings.copy_with_update` for details on different argument behavior.\n\n    Examples:\n        >>> from prefect.settings import PREFECT_API_URL\n        >>>\n        >>> with temporary_settings(updates={PREFECT_API_URL: \"foo\"}):\n        >>>    assert PREFECT_API_URL.value() == \"foo\"\n        >>>\n        >>>    with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"}):\n        >>>         assert PREFECT_API_URL.value() == \"foo\"\n        >>>\n        >>>    with temporary_settings(restore_defaults={PREFECT_API_URL}):\n        >>>         assert PREFECT_API_URL.value() is None\n        >>>\n        >>>         with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"})\n        >>>             assert PREFECT_API_URL.value() == \"bar\"\n        >>> assert PREFECT_API_URL.value() is None\n    \"\"\"\n    import prefect.context\n\n    context = prefect.context.get_settings_context()\n\n    new_settings = context.settings.copy_with_update(\n        updates=updates, set_defaults=set_defaults, restore_defaults=restore_defaults\n    )\n\n    with prefect.context.SettingsContext(\n        profile=context.profile, settings=new_settings\n    ):\n        yield new_settings\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.load_profiles","title":"load_profiles","text":"

Load all profiles from the default and current profile paths.

Source code in prefect/settings.py
def load_profiles() -> ProfilesCollection:\n    \"\"\"\n    Load all profiles from the default and current profile paths.\n    \"\"\"\n    profiles = _read_profiles_from(DEFAULT_PROFILES_PATH)\n\n    user_profiles_path = PREFECT_PROFILES_PATH.value()\n    if user_profiles_path.exists():\n        user_profiles = _read_profiles_from(user_profiles_path)\n\n        # Merge all of the user profiles with the defaults\n        for name in user_profiles:\n            profiles.update_profile(\n                name,\n                settings=user_profiles[name].settings,\n                source=user_profiles[name].source,\n            )\n\n        if user_profiles.active_name:\n            profiles.set_active(user_profiles.active_name, check=False)\n\n    return profiles\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.load_current_profile","title":"load_current_profile","text":"

Load the current profile from the default and current profile paths.

This will not include settings from the current settings context. Only settings that have been persisted to the profiles file will be included.

Source code in prefect/settings.py
def load_current_profile():\n    \"\"\"\n    Load the current profile from the default and current profile paths.\n\n    This will _not_ include settings from the current settings context. Only settings\n    that have been persisted to the profiles file will be saved.\n    \"\"\"\n    from prefect.context import SettingsContext\n\n    profiles = load_profiles()\n    context = SettingsContext.get()\n\n    if context:\n        profiles.set_active(context.profile.name)\n\n    return profiles.active_profile\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.save_profiles","title":"save_profiles","text":"

Writes all non-default profiles to the current profiles path.

Source code in prefect/settings.py
def save_profiles(profiles: ProfilesCollection) -> None:\n    \"\"\"\n    Writes all non-default profiles to the current profiles path.\n    \"\"\"\n    profiles_path = PREFECT_PROFILES_PATH.value()\n    profiles = profiles.without_profile_source(DEFAULT_PROFILES_PATH)\n    return _write_profiles_to(profiles_path, profiles)\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.load_profile","title":"load_profile","text":"

Load a single profile by name.

Source code in prefect/settings.py
def load_profile(name: str) -> Profile:\n    \"\"\"\n    Load a single profile by name.\n    \"\"\"\n    profiles = load_profiles()\n    try:\n        return profiles[name]\n    except KeyError:\n        raise ValueError(f\"Profile {name!r} not found.\")\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.update_current_profile","title":"update_current_profile","text":"

Update the persisted data for the profile currently in-use.

If the profile does not exist in the profiles file, it will be created.

Given settings will be merged with the existing settings as described in ProfilesCollection.update_profile.

Returns:

Profile: The new profile.
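A minimal sketch; the logging level value is only an example, and the change is persisted to the active profile in the profiles file:

```python
from prefect.settings import update_current_profile

# Merges the given settings into the currently active profile and saves it
update_current_profile({"PREFECT_LOGGING_LEVEL": "DEBUG"})
```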

Source code in prefect/settings.py
def update_current_profile(settings: Dict[Union[str, Setting], Any]) -> Profile:\n    \"\"\"\n    Update the persisted data for the profile currently in-use.\n\n    If the profile does not exist in the profiles file, it will be created.\n\n    Given settings will be merged with the existing settings as described in\n    `ProfilesCollection.update_profile`.\n\n    Returns:\n        The new profile.\n    \"\"\"\n    import prefect.context\n\n    current_profile = prefect.context.get_settings_context().profile\n\n    if not current_profile:\n        raise MissingProfileError(\"No profile is currently in use.\")\n\n    profiles = load_profiles()\n\n    # Ensure the current profile's settings are present\n    profiles.update_profile(current_profile.name, current_profile.settings)\n    # Then merge the new settings in\n    new_profile = profiles.update_profile(current_profile.name, settings)\n\n    # Validate before saving\n    new_profile.validate_settings()\n\n    save_profiles(profiles)\n\n    return profiles[current_profile.name]\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/software/","title":"prefect.software","text":"","tags":["Python API","software"]},{"location":"api-ref/prefect/software/#prefect.software","title":"prefect.software","text":"","tags":["Python API","software"]},{"location":"api-ref/prefect/states/","title":"prefect.states","text":"","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states","title":"prefect.states","text":"","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.AwaitingRetry","title":"AwaitingRetry","text":"

Convenience function for creating AwaitingRetry states.

Returns:

State (State[R]): an AwaitingRetry state

Source code in prefect/states.py
def AwaitingRetry(\n    cls: Type[State[R]] = State,\n    scheduled_time: Optional[datetime.datetime] = None,\n    **kwargs: Any,\n) -> State[R]:\n    \"\"\"Convenience function for creating `AwaitingRetry` states.\n\n    Returns:\n        State: a AwaitingRetry state\n    \"\"\"\n    return Scheduled(\n        cls=cls, scheduled_time=scheduled_time, name=\"AwaitingRetry\", **kwargs\n    )\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Cancelled","title":"Cancelled","text":"

Convenience function for creating Cancelled states.

Returns:

State (State[R]): a Cancelled state

Source code in prefect/states.py
def Cancelled(cls: Type[State[R]] = State, **kwargs: Any) -> State[R]:\n    \"\"\"Convenience function for creating `Cancelled` states.\n\n    Returns:\n        State: a Cancelled state\n    \"\"\"\n    return cls(type=StateType.CANCELLED, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Cancelling","title":"Cancelling","text":"

Convenience function for creating Cancelling states.

Returns:

State (State[R]): a Cancelling state

Source code in prefect/states.py
def Cancelling(cls: Type[State[R]] = State, **kwargs: Any) -> State[R]:\n    \"\"\"Convenience function for creating `Cancelling` states.\n\n    Returns:\n        State: a Cancelling state\n    \"\"\"\n    return cls(type=StateType.CANCELLING, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Completed","title":"Completed","text":"

Convenience function for creating Completed states.

Returns:

State (State[R]): a Completed state
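A minimal sketch of constructing a state with one of these convenience helpers:

```python
from prefect.states import Completed

state = Completed(message="All done")
print(state.is_completed())  # True
```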

Source code in prefect/states.py
def Completed(cls: Type[State[R]] = State, **kwargs: Any) -> State[R]:\n    \"\"\"Convenience function for creating `Completed` states.\n\n    Returns:\n        State: a Completed state\n    \"\"\"\n    return cls(type=StateType.COMPLETED, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Crashed","title":"Crashed","text":"

Convenience function for creating Crashed states.

Returns:

State (State[R]): a Crashed state

Source code in prefect/states.py
def Crashed(cls: Type[State[R]] = State, **kwargs: Any) -> State[R]:\n    \"\"\"Convenience function for creating `Crashed` states.\n\n    Returns:\n        State: a Crashed state\n    \"\"\"\n    return cls(type=StateType.CRASHED, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Failed","title":"Failed","text":"

Convenience function for creating Failed states.

Returns:

State (State[R]): a Failed state

Source code in prefect/states.py
def Failed(cls: Type[State[R]] = State, **kwargs: Any) -> State[R]:\n    \"\"\"Convenience function for creating `Failed` states.\n\n    Returns:\n        State: a Failed state\n    \"\"\"\n    return cls(type=StateType.FAILED, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Late","title":"Late","text":"

Convenience function for creating Late states.

Returns:

State (State[R]): a Late state

Source code in prefect/states.py
def Late(\n    cls: Type[State[R]] = State,\n    scheduled_time: Optional[datetime.datetime] = None,\n    **kwargs: Any,\n) -> State[R]:\n    \"\"\"Convenience function for creating `Late` states.\n\n    Returns:\n        State: a Late state\n    \"\"\"\n    return Scheduled(cls=cls, scheduled_time=scheduled_time, name=\"Late\", **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Paused","title":"Paused","text":"

Convenience function for creating Paused states.

Returns:

State (State[R]): a Paused state

Source code in prefect/states.py
def Paused(\n    cls: Type[State[R]] = State,\n    timeout_seconds: Optional[int] = None,\n    pause_expiration_time: Optional[datetime.datetime] = None,\n    reschedule: bool = False,\n    pause_key: Optional[str] = None,\n    **kwargs: Any,\n) -> State[R]:\n    \"\"\"Convenience function for creating `Paused` states.\n\n    Returns:\n        State: a Paused state\n    \"\"\"\n    state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n\n    if state_details.pause_timeout:\n        raise ValueError(\"An extra pause timeout was provided in state_details\")\n\n    if pause_expiration_time is not None and timeout_seconds is not None:\n        raise ValueError(\n            \"Cannot supply both a pause_expiration_time and timeout_seconds\"\n        )\n\n    if pause_expiration_time is None and timeout_seconds is None:\n        pass\n    else:\n        state_details.pause_timeout = pause_expiration_time or (\n            pendulum.now(\"UTC\") + pendulum.Duration(seconds=timeout_seconds)\n        )\n\n    state_details.pause_reschedule = reschedule\n    state_details.pause_key = pause_key\n\n    return cls(type=StateType.PAUSED, state_details=state_details, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Pending","title":"Pending","text":"

Convenience function for creating Pending states.

Returns:

State (State[R]): a Pending state

Source code in prefect/states.py
def Pending(cls: Type[State[R]] = State, **kwargs: Any) -> State[R]:\n    \"\"\"Convenience function for creating `Pending` states.\n\n    Returns:\n        State: a Pending state\n    \"\"\"\n    return cls(type=StateType.PENDING, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Retrying","title":"Retrying","text":"

Convenience function for creating Retrying states.

Returns:

State (State[R]): a Retrying state

Source code in prefect/states.py
def Retrying(cls: Type[State[R]] = State, **kwargs: Any) -> State[R]:\n    \"\"\"Convenience function for creating `Retrying` states.\n\n    Returns:\n        State: a Retrying state\n    \"\"\"\n    return cls(type=StateType.RUNNING, name=\"Retrying\", **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Running","title":"Running","text":"

Convenience function for creating Running states.

Returns:

State (State[R]): a Running state

Source code in prefect/states.py
def Running(cls: Type[State[R]] = State, **kwargs: Any) -> State[R]:\n    \"\"\"Convenience function for creating `Running` states.\n\n    Returns:\n        State: a Running state\n    \"\"\"\n    return cls(type=StateType.RUNNING, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Scheduled","title":"Scheduled","text":"

Convenience function for creating Scheduled states.

Returns:

State (State[R]): a Scheduled state

Source code in prefect/states.py
def Scheduled(\n    cls: Type[State[R]] = State,\n    scheduled_time: Optional[datetime.datetime] = None,\n    **kwargs: Any,\n) -> State[R]:\n    \"\"\"Convenience function for creating `Scheduled` states.\n\n    Returns:\n        State: a Scheduled state\n    \"\"\"\n    state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n    if scheduled_time is None:\n        scheduled_time = pendulum.now(\"UTC\")\n    elif state_details.scheduled_time:\n        raise ValueError(\"An extra scheduled_time was provided in state_details\")\n    state_details.scheduled_time = scheduled_time\n\n    return cls(type=StateType.SCHEDULED, state_details=state_details, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Suspended","title":"Suspended","text":"

Convenience function for creating Suspended states.

Returns:

State: a Suspended state

Source code in prefect/states.py
def Suspended(\n    cls: Type[State[R]] = State,\n    timeout_seconds: Optional[int] = None,\n    pause_expiration_time: Optional[datetime.datetime] = None,\n    pause_key: Optional[str] = None,\n    **kwargs: Any,\n):\n    \"\"\"Convenience function for creating `Suspended` states.\n\n    Returns:\n        State: a Suspended state\n    \"\"\"\n    return Paused(\n        cls=cls,\n        name=\"Suspended\",\n        reschedule=True,\n        timeout_seconds=timeout_seconds,\n        pause_expiration_time=pause_expiration_time,\n        pause_key=pause_key,\n        **kwargs,\n    )\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.exception_to_crashed_state","title":"exception_to_crashed_state async","text":"

Takes an exception that occurs outside of user code and converts it to a 'Crash' exception with a 'Crashed' state.

Source code in prefect/states.py
async def exception_to_crashed_state(\n    exc: BaseException,\n    result_factory: Optional[ResultFactory] = None,\n) -> State:\n    \"\"\"\n    Takes an exception that occurs _outside_ of user code and converts it to a\n    'Crash' exception with a 'Crashed' state.\n    \"\"\"\n    state_message = None\n\n    if isinstance(exc, anyio.get_cancelled_exc_class()):\n        state_message = \"Execution was cancelled by the runtime environment.\"\n\n    elif isinstance(exc, KeyboardInterrupt):\n        state_message = \"Execution was aborted by an interrupt signal.\"\n\n    elif isinstance(exc, TerminationSignal):\n        state_message = \"Execution was aborted by a termination signal.\"\n\n    elif isinstance(exc, SystemExit):\n        state_message = \"Execution was aborted by Python system exit call.\"\n\n    elif isinstance(exc, (httpx.TimeoutException, httpx.ConnectError)):\n        try:\n            request: httpx.Request = exc.request\n        except RuntimeError:\n            # The request property is not set\n            state_message = (\n                \"Request failed while attempting to contact the server:\"\n                f\" {format_exception(exc)}\"\n            )\n        else:\n            # TODO: We can check if this is actually our API url\n            state_message = f\"Request to {request.url} failed: {format_exception(exc)}.\"\n\n    else:\n        state_message = (\n            \"Execution was interrupted by an unexpected exception:\"\n            f\" {format_exception(exc)}\"\n        )\n\n    if result_factory:\n        data = await result_factory.create_result(exc)\n    else:\n        # Attach the exception for local usage, will not be available when retrieved\n        # from the API\n        data = exc\n\n    return Crashed(message=state_message, data=data)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.exception_to_failed_state","title":"exception_to_failed_state async","text":"

Convenience function for creating Failed states from exceptions

Source code in prefect/states.py
async def exception_to_failed_state(\n    exc: Optional[BaseException] = None,\n    result_factory: Optional[ResultFactory] = None,\n    **kwargs,\n) -> State:\n    \"\"\"\n    Convenience function for creating `Failed` states from exceptions\n    \"\"\"\n    if not exc:\n        _, exc, _ = sys.exc_info()\n        if exc is None:\n            raise ValueError(\n                \"Exception was not passed and no active exception could be found.\"\n            )\n    else:\n        pass\n\n    if result_factory:\n        data = await result_factory.create_result(exc)\n    else:\n        # Attach the exception for local usage, will not be available when retrieved\n        # from the API\n        data = exc\n\n    existing_message = kwargs.pop(\"message\", \"\")\n    if existing_message and not existing_message.endswith(\" \"):\n        existing_message += \" \"\n\n    # TODO: Consider if we want to include traceback information, it is intentionally\n    #       excluded from messages for now\n    message = existing_message + format_exception(exc)\n\n    return Failed(data=data, message=message, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.get_state_exception","title":"get_state_exception async","text":"

If not given a FAILED or CRASHED state, this raises a value error.

If the state result is a state, its exception will be returned.

If the state result is an iterable of states, the exception of the first failure will be returned.

If the state result is a string, a wrapper exception will be returned with the string as the message.

If the state result is null, a wrapper exception will be returned with the state message attached.

If the state result is not of a known type, a TypeError will be returned.

When a wrapper exception is returned, the type will be:

  • FailedRun if the state type is FAILED.
  • CrashedRun if the state type is CRASHED.
  • CancelledRun if the state type is CANCELLED.

Source code in prefect/states.py
@sync_compatible\nasync def get_state_exception(state: State) -> BaseException:\n    \"\"\"\n    If not given a FAILED or CRASHED state, this raise a value error.\n\n    If the state result is a state, its exception will be returned.\n\n    If the state result is an iterable of states, the exception of the first failure\n    will be returned.\n\n    If the state result is a string, a wrapper exception will be returned with the\n    string as the message.\n\n    If the state result is null, a wrapper exception will be returned with the state\n    message attached.\n\n    If the state result is not of a known type, a `TypeError` will be returned.\n\n    When a wrapper exception is returned, the type will be:\n        - `FailedRun` if the state type is FAILED.\n        - `CrashedRun` if the state type is CRASHED.\n        - `CancelledRun` if the state type is CANCELLED.\n    \"\"\"\n\n    if state.is_failed():\n        wrapper = FailedRun\n        default_message = \"Run failed.\"\n    elif state.is_crashed():\n        wrapper = CrashedRun\n        default_message = \"Run crashed.\"\n    elif state.is_cancelled():\n        wrapper = CancelledRun\n        default_message = \"Run cancelled.\"\n    else:\n        raise ValueError(f\"Expected failed or crashed state got {state!r}.\")\n\n    if isinstance(state.data, BaseResult):\n        result = await state.data.get()\n    elif state.data is None:\n        result = None\n    else:\n        result = state.data\n\n    if result is None:\n        return wrapper(state.message or default_message)\n\n    if isinstance(result, Exception):\n        return result\n\n    elif isinstance(result, BaseException):\n        return result\n\n    elif isinstance(result, str):\n        return wrapper(result)\n\n    elif is_state(result):\n        # Return the exception from the inner state\n        return await get_state_exception(result)\n\n    elif is_state_iterable(result):\n        # Return the first failure\n        for state in result:\n            if state.is_failed() or state.is_crashed() or state.is_cancelled():\n                return await get_state_exception(state)\n\n        raise ValueError(\n            \"Failed state result was an iterable of states but none were failed.\"\n        )\n\n    else:\n        raise TypeError(\n            f\"Unexpected result for failed state: {result!r} \u2014\u2014 \"\n            f\"{type(result).__name__} cannot be resolved into an exception\"\n        )\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.get_state_result","title":"get_state_result","text":"

Get the result from a state.

See State.result()
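A brief sketch with a locally constructed state carrying a literal value; results attached through the API may instead require fetching:

```python
from prefect.states import Completed, get_state_result

state = Completed(data=42)
print(get_state_result(state))  # 42
```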

Source code in prefect/states.py
def get_state_result(\n    state: State[R], raise_on_failure: bool = True, fetch: Optional[bool] = None\n) -> R:\n    \"\"\"\n    Get the result from a state.\n\n    See `State.result()`\n    \"\"\"\n\n    if fetch is None and (\n        PREFECT_ASYNC_FETCH_STATE_RESULT or not in_async_main_thread()\n    ):\n        # Fetch defaults to `True` for sync users or async users who have opted in\n        fetch = True\n\n    if not fetch:\n        if fetch is None and in_async_main_thread():\n            warnings.warn(\n                (\n                    \"State.result() was called from an async context but not awaited. \"\n                    \"This method will be updated to return a coroutine by default in \"\n                    \"the future. Pass `fetch=True` and `await` the call to get rid of \"\n                    \"this warning.\"\n                ),\n                DeprecationWarning,\n                stacklevel=2,\n            )\n        # Backwards compatibility\n        if isinstance(state.data, DataDocument):\n            return result_from_state_with_data_document(\n                state, raise_on_failure=raise_on_failure\n            )\n        else:\n            return state.data\n    else:\n        return _get_state_result(state, raise_on_failure=raise_on_failure)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.is_state","title":"is_state","text":"

Check if the given object is a state instance
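A quick illustration of what counts as a state instance (a hedged sketch using the public prefect.states helpers):

>>> from prefect.states import Completed, is_state
>>> is_state(Completed())
True
>>> is_state("COMPLETED")
False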

Source code in prefect/states.py
def is_state(obj: Any) -> TypeGuard[State]:\n    \"\"\"\n    Check if the given object is a state instance\n    \"\"\"\n    # We may want to narrow this to client-side state types but for now this provides\n    # backwards compatibility\n    try:\n        from prefect.server.schemas.states import State as State_\n\n        classes_ = (State, State_)\n    except ImportError:\n        classes_ = State\n\n    # return isinstance(obj, (State, State_))\n    return isinstance(obj, classes_)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.is_state_iterable","title":"is_state_iterable","text":"

Check if the given object is an iterable of state types

Supported iterables are:

  • set
  • list
  • tuple

Other iterables will return False even if they contain states.
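A small illustrative check, assuming the behaviour described above:

>>> from prefect.states import Completed, is_state_iterable
>>> is_state_iterable([Completed(), Completed()])
True
>>> is_state_iterable({"key": Completed()})  # dicts are not treated as state iterables
False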

Source code in prefect/states.py
def is_state_iterable(obj: Any) -> TypeGuard[Iterable[State]]:\n    \"\"\"\n    Check if a the given object is an iterable of states types\n\n    Supported iterables are:\n    - set\n    - list\n    - tuple\n\n    Other iterables will return `False` even if they contain states.\n    \"\"\"\n    # We do not check for arbitrary iterables because this is not intended to be used\n    # for things like dictionaries, dataframes, or pydantic models\n    if (\n        not isinstance(obj, BaseAnnotation)\n        and isinstance(obj, (list, set, tuple))\n        and obj\n    ):\n        return all([is_state(o) for o in obj])\n    else:\n        return False\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.raise_state_exception","title":"raise_state_exception async","text":"

Given a FAILED or CRASHED state, raise the contained exception.
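For example, given a manually constructed failed state with no attached exception, the message is wrapped in a FailedRun (an illustrative sketch; the message text is arbitrary):

>>> from prefect.states import Failed, raise_state_exception
>>> raise_state_exception(Failed(message="Something went wrong"))
Traceback (most recent call last):
    ...
prefect.exceptions.FailedRun: Something went wrong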

Source code in prefect/states.py
@sync_compatible\nasync def raise_state_exception(state: State) -> None:\n    \"\"\"\n    Given a FAILED or CRASHED state, raise the contained exception.\n    \"\"\"\n    if not (state.is_failed() or state.is_crashed() or state.is_cancelled()):\n        return None\n\n    raise await get_state_exception(state)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.return_value_to_state","title":"return_value_to_state async","text":"

Given a return value from a user's function, create a State the run should be placed in.

  • If data is returned, we create a 'COMPLETED' state with the data
  • If a single, manually created state is returned, we use that state as given (manual creation is determined by the lack of ids)
  • If an upstream state or iterable of upstream states is returned, we apply the aggregate rule

The aggregate rule says that given multiple states we will determine the final state such that:

  • If any states are not COMPLETED the final state is FAILED
  • If all of the states are COMPLETED the final state is COMPLETED
  • The states will be placed in the final state data attribute

Callers should resolve all futures into states before passing return values to this function.
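As an illustrative sketch of the aggregate rule (a hypothetical flow; returning any non-COMPLETED state alongside completed ones yields a FAILED final state):

>>> from prefect import flow
>>> from prefect.states import Completed, Failed
>>>
>>> @flow
>>> def my_flow():
>>>     # One of the returned states is not COMPLETED, so the flow run's final state is FAILED
>>>     return [Completed(), Failed(message="boom")]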

Source code in prefect/states.py
async def return_value_to_state(retval: R, result_factory: ResultFactory) -> State[R]:\n    \"\"\"\n    Given a return value from a user's function, create a `State` the run should\n    be placed in.\n\n    - If data is returned, we create a 'COMPLETED' state with the data\n    - If a single, manually created state is returned, we use that state as given\n        (manual creation is determined by the lack of ids)\n    - If an upstream state or iterable of upstream states is returned, we apply the\n        aggregate rule\n\n    The aggregate rule says that given multiple states we will determine the final state\n    such that:\n\n    - If any states are not COMPLETED the final state is FAILED\n    - If all of the states are COMPLETED the final state is COMPLETED\n    - The states will be placed in the final state `data` attribute\n\n    Callers should resolve all futures into states before passing return values to this\n    function.\n    \"\"\"\n\n    if (\n        is_state(retval)\n        # Check for manual creation\n        and not retval.state_details.flow_run_id\n        and not retval.state_details.task_run_id\n    ):\n        state = retval\n\n        # Do not modify states with data documents attached; backwards compatibility\n        if isinstance(state.data, DataDocument):\n            return state\n\n        # Unless the user has already constructed a result explicitly, use the factory\n        # to update the data to the correct type\n        if not isinstance(state.data, BaseResult):\n            state.data = await result_factory.create_result(state.data)\n\n        return state\n\n    # Determine a new state from the aggregate of contained states\n    if is_state(retval) or is_state_iterable(retval):\n        states = StateGroup(ensure_iterable(retval))\n\n        # Determine the new state type\n        if states.all_completed():\n            new_state_type = StateType.COMPLETED\n        elif states.any_cancelled():\n            new_state_type = StateType.CANCELLED\n        elif states.any_paused():\n            new_state_type = StateType.PAUSED\n        else:\n            new_state_type = StateType.FAILED\n\n        # Generate a nice message for the aggregate\n        if states.all_completed():\n            message = \"All states completed.\"\n        elif states.any_cancelled():\n            message = f\"{states.cancelled_count}/{states.total_count} states cancelled.\"\n        elif states.any_paused():\n            message = f\"{states.paused_count}/{states.total_count} states paused.\"\n        elif states.any_failed():\n            message = f\"{states.fail_count}/{states.total_count} states failed.\"\n        elif not states.all_final():\n            message = (\n                f\"{states.not_final_count}/{states.total_count} states are not final.\"\n            )\n        else:\n            message = \"Given states: \" + states.counts_message()\n\n        # TODO: We may actually want to set the data to a `StateGroup` object and just\n        #       allow it to be unpacked into a tuple and such so users can interact with\n        #       it\n        return State(\n            type=new_state_type,\n            message=message,\n            data=await result_factory.create_result(retval),\n        )\n\n    # Generators aren't portable, implicitly convert them to a list.\n    if isinstance(retval, GeneratorType):\n        data = list(retval)\n    else:\n        data = retval\n\n    # Otherwise, they just gave data and this is a completed retval\n    return 
Completed(data=await result_factory.create_result(data))\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/task-runners/","title":"prefect.task_runners","text":"","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners","title":"prefect.task_runners","text":"

Interface and implementations of various task runners.

Task Runners in Prefect are responsible for managing the execution of Prefect task runs. Generally speaking, users are not expected to interact with task runners outside of configuring and initializing them for a flow.

Example
>>> from prefect import flow, task\n>>> from prefect.task_runners import SequentialTaskRunner\n>>> from typing import List\n>>>\n>>> @task\n>>> def say_hello(name):\n...     print(f\"hello {name}\")\n>>>\n>>> @task\n>>> def say_goodbye(name):\n...     print(f\"goodbye {name}\")\n>>>\n>>> @flow(task_runner=SequentialTaskRunner())\n>>> def greetings(names: List[str]):\n...     for name in names:\n...         say_hello(name)\n...         say_goodbye(name)\n>>>\n>>> greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\nhello arthur\ngoodbye arthur\nhello trillian\ngoodbye trillian\nhello ford\ngoodbye ford\nhello marvin\ngoodbye marvin\n

Switching to a DaskTaskRunner:

>>> from prefect_dask.task_runners import DaskTaskRunner\n>>> flow.task_runner = DaskTaskRunner()\n>>> greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\nhello arthur\ngoodbye arthur\nhello trillian\nhello ford\ngoodbye marvin\nhello marvin\ngoodbye ford\ngoodbye trillian\n

For usage details, see the Task Runners documentation.

","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner","title":"BaseTaskRunner","text":"Source code in prefect/task_runners.py
class BaseTaskRunner(metaclass=abc.ABCMeta):\n    def __init__(self) -> None:\n        self.logger = get_logger(f\"task_runner.{self.name}\")\n        self._started: bool = False\n\n    @property\n    @abc.abstractmethod\n    def concurrency_type(self) -> TaskConcurrencyType:\n        pass  # noqa\n\n    @property\n    def name(self):\n        return type(self).__name__.lower().replace(\"taskrunner\", \"\")\n\n    def duplicate(self):\n        \"\"\"\n        Return a new task runner instance with the same options.\n        \"\"\"\n        # The base class returns `NotImplemented` to indicate that this is not yet\n        # implemented by a given task runner.\n        return NotImplemented\n\n    def __eq__(self, other: object) -> bool:\n        \"\"\"\n        Returns true if the task runners use the same options.\n        \"\"\"\n        if type(other) == type(self) and (\n            # Compare public attributes for naive equality check\n            # Subclasses should implement this method with a check init option equality\n            {k: v for k, v in self.__dict__.items() if not k.startswith(\"_\")}\n            == {k: v for k, v in other.__dict__.items() if not k.startswith(\"_\")}\n        ):\n            return True\n        else:\n            return NotImplemented\n\n    @abc.abstractmethod\n    async def submit(\n        self,\n        key: UUID,\n        call: Callable[..., Awaitable[State[R]]],\n    ) -> None:\n        \"\"\"\n        Submit a call for execution and return a `PrefectFuture` that can be used to\n        get the call result.\n\n        Args:\n            task_run: The task run being submitted.\n            task_key: A unique key for this orchestration run of the task. Can be used\n                for caching.\n            call: The function to be executed\n            run_kwargs: A dict of keyword arguments to pass to `call`\n\n        Returns:\n            A future representing the result of `call` execution\n        \"\"\"\n        raise NotImplementedError()\n\n    @abc.abstractmethod\n    async def wait(self, key: UUID, timeout: float = None) -> Optional[State]:\n        \"\"\"\n        Given a `PrefectFuture`, wait for its return state up to `timeout` seconds.\n        If it is not finished after the timeout expires, `None` should be returned.\n\n        Implementers should be careful to ensure that this function never returns or\n        raises an exception.\n        \"\"\"\n        raise NotImplementedError()\n\n    @asynccontextmanager\n    async def start(\n        self: T,\n    ) -> AsyncIterator[T]:\n        \"\"\"\n        Start the task runner, preparing any resources necessary for task submission.\n\n        Children should implement `_start` to prepare and clean up resources.\n\n        Yields:\n            The prepared task runner\n        \"\"\"\n        if self._started:\n            raise RuntimeError(\"The task runner is already started!\")\n\n        async with AsyncExitStack() as exit_stack:\n            self.logger.debug(\"Starting task runner...\")\n            try:\n                await self._start(exit_stack)\n                self._started = True\n                yield self\n            finally:\n                self.logger.debug(\"Shutting down task runner...\")\n                self._started = False\n\n    async def _start(self, exit_stack: AsyncExitStack) -> None:\n        \"\"\"\n        Create any resources required for this task runner to submit work.\n\n        Cleanup of resources should be submitted to the `exit_stack`.\n      
  \"\"\"\n        pass  # noqa\n\n    def __str__(self) -> str:\n        return type(self).__name__\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.duplicate","title":"duplicate","text":"

Return a new task runner instance with the same options.

Source code in prefect/task_runners.py
def duplicate(self):\n    \"\"\"\n    Return a new task runner instance with the same options.\n    \"\"\"\n    # The base class returns `NotImplemented` to indicate that this is not yet\n    # implemented by a given task runner.\n    return NotImplemented\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.start","title":"start async","text":"

Start the task runner, preparing any resources necessary for task submission.

Children should implement _start to prepare and clean up resources.

Yields:

  • AsyncIterator[T]: The prepared task runner
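A minimal, hypothetical sketch of starting a runner directly (most users never do this; Prefect starts the runner for you when a flow runs):

>>> import anyio
>>> from prefect.task_runners import SequentialTaskRunner
>>>
>>> async def main():
>>>     async with SequentialTaskRunner().start() as runner:
>>>         ...  # the runner is now ready to accept submitted work
>>>
>>> anyio.run(main)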

Source code in prefect/task_runners.py
@asynccontextmanager\nasync def start(\n    self: T,\n) -> AsyncIterator[T]:\n    \"\"\"\n    Start the task runner, preparing any resources necessary for task submission.\n\n    Children should implement `_start` to prepare and clean up resources.\n\n    Yields:\n        The prepared task runner\n    \"\"\"\n    if self._started:\n        raise RuntimeError(\"The task runner is already started!\")\n\n    async with AsyncExitStack() as exit_stack:\n        self.logger.debug(\"Starting task runner...\")\n        try:\n            await self._start(exit_stack)\n            self._started = True\n            yield self\n        finally:\n            self.logger.debug(\"Shutting down task runner...\")\n            self._started = False\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.submit","title":"submit abstractmethod async","text":"

Submit a call for execution and return a PrefectFuture that can be used to get the call result.

Parameters:

  • task_run (required): The task run being submitted.
  • task_key (required): A unique key for this orchestration run of the task. Can be used for caching.
  • call (Callable[..., Awaitable[State[R]]], required): The function to be executed.
  • run_kwargs (required): A dict of keyword arguments to pass to call.

Returns:

  • None: A future representing the result of call execution

Source code in prefect/task_runners.py
@abc.abstractmethod\nasync def submit(\n    self,\n    key: UUID,\n    call: Callable[..., Awaitable[State[R]]],\n) -> None:\n    \"\"\"\n    Submit a call for execution and return a `PrefectFuture` that can be used to\n    get the call result.\n\n    Args:\n        task_run: The task run being submitted.\n        task_key: A unique key for this orchestration run of the task. Can be used\n            for caching.\n        call: The function to be executed\n        run_kwargs: A dict of keyword arguments to pass to `call`\n\n    Returns:\n        A future representing the result of `call` execution\n    \"\"\"\n    raise NotImplementedError()\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.wait","title":"wait abstractmethod async","text":"

Given a PrefectFuture, wait for its return state up to timeout seconds. If it is not finished after the timeout expires, None should be returned.

Implementers should be careful to ensure that this function never returns or raises an exception.
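Since submit and wait are abstract, a concrete runner must provide both. Below is a minimal, hypothetical sketch (EagerTaskRunner is not a real Prefect class) that runs each call as soon as it is submitted, mirroring the SequentialTaskRunner shown later in this reference:

>>> from typing import Awaitable, Callable, Dict, Optional
>>> from uuid import UUID
>>> from prefect import State
>>> from prefect.task_runners import BaseTaskRunner, TaskConcurrencyType
>>>
>>> class EagerTaskRunner(BaseTaskRunner):
>>>     def __init__(self) -> None:
>>>         super().__init__()
>>>         self._results: Dict[UUID, State] = {}
>>>
>>>     @property
>>>     def concurrency_type(self) -> TaskConcurrencyType:
>>>         return TaskConcurrencyType.SEQUENTIAL
>>>
>>>     async def submit(self, key: UUID, call: Callable[..., Awaitable[State]]) -> None:
>>>         # Run the call immediately and keep its final state for later retrieval
>>>         self._results[key] = await call()
>>>
>>>     async def wait(self, key: UUID, timeout: float = None) -> Optional[State]:
>>>         return self._results.get(key)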

Source code in prefect/task_runners.py
@abc.abstractmethod\nasync def wait(self, key: UUID, timeout: float = None) -> Optional[State]:\n    \"\"\"\n    Given a `PrefectFuture`, wait for its return state up to `timeout` seconds.\n    If it is not finished after the timeout expires, `None` should be returned.\n\n    Implementers should be careful to ensure that this function never returns or\n    raises an exception.\n    \"\"\"\n    raise NotImplementedError()\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.ConcurrentTaskRunner","title":"ConcurrentTaskRunner","text":"

Bases: BaseTaskRunner

A concurrent task runner that allows tasks to switch when blocking on IO. Synchronous tasks will be submitted to a thread pool maintained by anyio.

Example
Using a thread for concurrency:\n>>> from prefect import flow\n>>> from prefect.task_runners import ConcurrentTaskRunner\n>>> @flow(task_runner=ConcurrentTaskRunner)\n>>> def my_flow():\n>>>     ...\n
Source code in prefect/task_runners.py
class ConcurrentTaskRunner(BaseTaskRunner):\n    \"\"\"\n    A concurrent task runner that allows tasks to switch when blocking on IO.\n    Synchronous tasks will be submitted to a thread pool maintained by `anyio`.\n\n    Example:\n        ```\n        Using a thread for concurrency:\n        >>> from prefect import flow\n        >>> from prefect.task_runners import ConcurrentTaskRunner\n        >>> @flow(task_runner=ConcurrentTaskRunner)\n        >>> def my_flow():\n        >>>     ...\n        ```\n    \"\"\"\n\n    def __init__(self):\n        # TODO: Consider adding `max_workers` support using anyio capacity limiters\n\n        # Runtime attributes\n        self._task_group: anyio.abc.TaskGroup = None\n        self._result_events: Dict[UUID, Event] = {}\n        self._results: Dict[UUID, Any] = {}\n        self._keys: Set[UUID] = set()\n\n        super().__init__()\n\n    @property\n    def concurrency_type(self) -> TaskConcurrencyType:\n        return TaskConcurrencyType.CONCURRENT\n\n    def duplicate(self):\n        return type(self)()\n\n    async def submit(\n        self,\n        key: UUID,\n        call: Callable[[], Awaitable[State[R]]],\n    ) -> None:\n        if not self._started:\n            raise RuntimeError(\n                \"The task runner must be started before submitting work.\"\n            )\n\n        if not self._task_group:\n            raise RuntimeError(\n                \"The concurrent task runner cannot be used to submit work after \"\n                \"serialization.\"\n            )\n\n        # Create an event to set on completion\n        self._result_events[key] = Event()\n\n        # Rely on the event loop for concurrency\n        self._task_group.start_soon(self._run_and_store_result, key, call)\n\n    async def wait(\n        self,\n        key: UUID,\n        timeout: float = None,\n    ) -> Optional[State]:\n        if not self._task_group:\n            raise RuntimeError(\n                \"The concurrent task runner cannot be used to wait for work after \"\n                \"serialization.\"\n            )\n\n        return await self._get_run_result(key, timeout)\n\n    async def _run_and_store_result(\n        self, key: UUID, call: Callable[[], Awaitable[State[R]]]\n    ):\n        \"\"\"\n        Simple utility to store the orchestration result in memory on completion\n\n        Since this run is occurring on the main thread, we capture exceptions to prevent\n        task crashes from crashing the flow run.\n        \"\"\"\n        try:\n            result = await call()\n        except BaseException as exc:\n            result = await exception_to_crashed_state(exc)\n\n        self._results[key] = result\n        self._result_events[key].set()\n\n    async def _get_run_result(\n        self, key: UUID, timeout: float = None\n    ) -> Optional[State]:\n        \"\"\"\n        Block until the run result has been populated.\n        \"\"\"\n        result = None  # retval on timeout\n\n        # Note we do not use `asyncio.wrap_future` and instead use an `Event` to avoid\n        # stdlib behavior where the wrapped future is cancelled if the parent future is\n        # cancelled (as it would be during a timeout here)\n        with anyio.move_on_after(timeout):\n            await self._result_events[key].wait()\n            result = self._results[key]\n\n        return result  # timeout reached\n\n    async def _start(self, exit_stack: AsyncExitStack):\n        \"\"\"\n        Start the process pool\n        \"\"\"\n        self._task_group 
= await exit_stack.enter_async_context(\n            anyio.create_task_group()\n        )\n\n    def __getstate__(self):\n        \"\"\"\n        Allow the `ConcurrentTaskRunner` to be serialized by dropping the task group.\n        \"\"\"\n        data = self.__dict__.copy()\n        data.update({k: None for k in {\"_task_group\"}})\n        return data\n\n    def __setstate__(self, data: dict):\n        \"\"\"\n        When deserialized, we will no longer have a reference to the task group.\n        \"\"\"\n        self.__dict__.update(data)\n        self._task_group = None\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.SequentialTaskRunner","title":"SequentialTaskRunner","text":"

Bases: BaseTaskRunner

A simple task runner that executes calls as they are submitted.

If writing synchronous tasks, this runner will always execute tasks sequentially. If writing async tasks, this runner will execute tasks sequentially unless grouped using anyio.create_task_group or asyncio.gather.
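For example (a small, hedged sketch in the style of the module-level example above):

>>> from prefect import flow, task
>>> from prefect.task_runners import SequentialTaskRunner
>>>
>>> @task
>>> def say_hello(name):
>>>     print(f"hello {name}")
>>>
>>> @flow(task_runner=SequentialTaskRunner())
>>> def greet(names):
>>>     for name in names:
>>>         say_hello.submit(name)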

Source code in prefect/task_runners.py
class SequentialTaskRunner(BaseTaskRunner):\n    \"\"\"\n    A simple task runner that executes calls as they are submitted.\n\n    If writing synchronous tasks, this runner will always execute tasks sequentially.\n    If writing async tasks, this runner will execute tasks sequentially unless grouped\n    using `anyio.create_task_group` or `asyncio.gather`.\n    \"\"\"\n\n    def __init__(self) -> None:\n        super().__init__()\n        self._results: Dict[str, State] = {}\n\n    @property\n    def concurrency_type(self) -> TaskConcurrencyType:\n        return TaskConcurrencyType.SEQUENTIAL\n\n    def duplicate(self):\n        return type(self)()\n\n    async def submit(\n        self,\n        key: UUID,\n        call: Callable[..., Awaitable[State[R]]],\n    ) -> None:\n        # Run the function immediately and store the result in memory\n        try:\n            result = await call()\n        except BaseException as exc:\n            result = await exception_to_crashed_state(exc)\n\n        self._results[key] = result\n\n    async def wait(self, key: UUID, timeout: float = None) -> Optional[State]:\n        return self._results[key]\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/tasks/","title":"prefect.tasks","text":"","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks","title":"prefect.tasks","text":"

Module containing the base workflow task class and decorator - for most use cases, using the @task decorator is preferred.

","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task","title":"Task","text":"

Bases: Generic[P, R]

A Prefect task definition.

Note

We recommend using the @task decorator for most use-cases.

Wraps a function with an entrypoint to the Prefect engine. Calling this class within a flow function creates a new task run.

To preserve the input and output types, we use the generic type variables P and R for \"Parameters\" and \"Returns\" respectively.
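As a brief, hedged sketch of the recommended decorator usage (the function name, URL, and parameter values are illustrative; the options themselves are documented below):

>>> from prefect import flow, task
>>> from prefect.tasks import task_input_hash
>>>
>>> @task(retries=2, retry_delay_seconds=5, cache_key_fn=task_input_hash)
>>> def fetch(url: str) -> str:
>>>     ...
>>>
>>> @flow
>>> def my_flow():
>>>     fetch("https://example.com")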

Parameters:

  • fn (Callable[P, R], required): The function defining the task.
  • name (Optional[str], default None): An optional name for the task; if not provided, the name will be inferred from the given function.
  • description (Optional[str], default None): An optional string description for the task.
  • tags (Optional[Iterable[str]], default None): An optional set of tags to be associated with runs of this task. These tags are combined with any tags defined by a prefect.tags context at task runtime.
  • version (Optional[str], default None): An optional string specifying the version of this task definition.
  • cache_key_fn (Optional[Callable[[TaskRunContext, Dict[str, Any]], Optional[str]]], default None): An optional callable that, given the task run context and call parameters, generates a string key; if the key matches a previous completed state, that state result will be restored instead of running the task again.
  • cache_expiration (Optional[timedelta], default None): An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire.
  • task_run_name (Optional[Union[Callable[[], str], str]], default None): An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string.
  • retries (Optional[int], default None): An optional number of times to retry on task run failure.
  • retry_delay_seconds (Optional[Union[float, int, List[float], Callable[[int], List[float]]]], default None): Optionally configures how long to wait before retrying the task after failure. This is only applicable if retries is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50.
  • retry_jitter_factor (Optional[float], default None): An optional factor that defines the factor to which a retry can be jittered in order to avoid a "thundering herd".
  • persist_result (Optional[bool], default None): An optional toggle indicating whether the result of this task should be persisted to result storage. Defaults to None, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.
  • result_storage (Optional[ResultStorage], default None): An optional block to use to persist the result of this task. Defaults to the value set in the flow the task is called in.
  • result_storage_key (Optional[str], default None): An optional key to store the result in storage at when persisted. Defaults to a unique identifier.
  • result_serializer (Optional[ResultSerializer], default None): An optional serializer to use to serialize the result of this task for persistence. Defaults to the value set in the flow the task is called in.
  • timeout_seconds (Union[int, float, None], default None): An optional number of seconds indicating a maximum runtime for the task. If the task exceeds this runtime, it will be marked as failed.
  • log_prints (Optional[bool], default False): If set, print statements in the task will be redirected to the Prefect logger for the task run. Defaults to None, which indicates that the value from the flow should be used.
  • refresh_cache (Optional[bool], default None): If set, cached results for the cache key are not used. Defaults to None, which indicates that a cached result from a previous execution with matching cache key is used.
  • on_failure (Optional[List[Callable[[Task, TaskRun, State], None]]], default None): An optional list of callables to run when the task enters a failed state.
  • on_completion (Optional[List[Callable[[Task, TaskRun, State], None]]], default None): An optional list of callables to run when the task enters a completed state.
  • retry_condition_fn (Optional[Callable[[Task, TaskRun, State], bool]], default None): An optional callable run when a task run returns a Failed state. Should return True if the task should continue to its retry policy (e.g. retries=3), and False if the task should end as failed. Defaults to None, indicating the task should always continue to its retry policy.
  • viz_return_value (Optional[Any], default None): An optional value to return when the task dependency tree is visualized.

Source code in prefect/tasks.py
@PrefectObjectRegistry.register_instances\nclass Task(Generic[P, R]):\n    \"\"\"\n    A Prefect task definition.\n\n    !!! note\n        We recommend using [the `@task` decorator][prefect.tasks.task] for most use-cases.\n\n    Wraps a function with an entrypoint to the Prefect engine. Calling this class within a flow function\n    creates a new task run.\n\n    To preserve the input and output types, we use the generic type variables P and R for \"Parameters\" and\n    \"Returns\" respectively.\n\n    Args:\n        fn: The function defining the task.\n        name: An optional name for the task; if not provided, the name will be inferred\n            from the given function.\n        description: An optional string description for the task.\n        tags: An optional set of tags to be associated with runs of this task. These\n            tags are combined with any tags defined by a `prefect.tags` context at\n            task runtime.\n        version: An optional string specifying the version of this task definition\n        cache_key_fn: An optional callable that, given the task run context and call\n            parameters, generates a string key; if the key matches a previous completed\n            state, that state result will be restored instead of running the task again.\n        cache_expiration: An optional amount of time indicating how long cached states\n            for this task should be restorable; if not provided, cached states will\n            never expire.\n        task_run_name: An optional name to distinguish runs of this task; this name can be provided\n            as a string template with the task's keyword arguments as variables,\n            or a function that returns a string.\n        retries: An optional number of times to retry on task run failure.\n        retry_delay_seconds: Optionally configures how long to wait before retrying the\n            task after failure. This is only applicable if `retries` is nonzero. This\n            setting can either be a number of seconds, a list of retry delays, or a\n            callable that, given the total number of retries, generates a list of retry\n            delays. If a number of seconds, that delay will be applied to all retries.\n            If a list, each retry will wait for the corresponding delay before retrying.\n            When passing a callable or a list, the number of configured retry delays\n            cannot exceed 50.\n        retry_jitter_factor: An optional factor that defines the factor to which a retry\n            can be jittered in order to avoid a \"thundering herd\".\n        persist_result: An optional toggle indicating whether the result of this task\n            should be persisted to result storage. Defaults to `None`, which indicates\n            that Prefect should choose whether the result should be persisted depending on\n            the features being used.\n        result_storage: An optional block to use to persist the result of this task.\n            Defaults to the value set in the flow the task is called in.\n        result_storage_key: An optional key to store the result in storage at when persisted.\n            Defaults to a unique identifier.\n        result_serializer: An optional serializer to use to serialize the result of this\n            task for persistence. Defaults to the value set in the flow the task is\n            called in.\n        timeout_seconds: An optional number of seconds indicating a maximum runtime for\n            the task. 
If the task exceeds this runtime, it will be marked as failed.\n        log_prints: If set, `print` statements in the task will be redirected to the\n            Prefect logger for the task run. Defaults to `None`, which indicates\n            that the value from the flow should be used.\n        refresh_cache: If set, cached results for the cache key are not used.\n            Defaults to `None`, which indicates that a cached result from a previous\n            execution with matching cache key is used.\n        on_failure: An optional list of callables to run when the task enters a failed state.\n        on_completion: An optional list of callables to run when the task enters a completed state.\n        retry_condition_fn: An optional callable run when a task run returns a Failed state. Should\n            return `True` if the task should continue to its retry policy (e.g. `retries=3`), and `False` if the task\n            should end as failed. Defaults to `None`, indicating the task should always continue\n            to its retry policy.\n        viz_return_value: An optional value to return when the task dependency tree is visualized.\n    \"\"\"\n\n    # NOTE: These parameters (types, defaults, and docstrings) should be duplicated\n    #       exactly in the @task decorator\n    def __init__(\n        self,\n        fn: Callable[P, R],\n        name: Optional[str] = None,\n        description: Optional[str] = None,\n        tags: Optional[Iterable[str]] = None,\n        version: Optional[str] = None,\n        cache_key_fn: Optional[\n            Callable[[\"TaskRunContext\", Dict[str, Any]], Optional[str]]\n        ] = None,\n        cache_expiration: Optional[datetime.timedelta] = None,\n        task_run_name: Optional[Union[Callable[[], str], str]] = None,\n        retries: Optional[int] = None,\n        retry_delay_seconds: Optional[\n            Union[\n                float,\n                int,\n                List[float],\n                Callable[[int], List[float]],\n            ]\n        ] = None,\n        retry_jitter_factor: Optional[float] = None,\n        persist_result: Optional[bool] = None,\n        result_storage: Optional[ResultStorage] = None,\n        result_serializer: Optional[ResultSerializer] = None,\n        result_storage_key: Optional[str] = None,\n        cache_result_in_memory: bool = True,\n        timeout_seconds: Union[int, float, None] = None,\n        log_prints: Optional[bool] = False,\n        refresh_cache: Optional[bool] = None,\n        on_completion: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n        on_failure: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n        retry_condition_fn: Optional[Callable[[\"Task\", TaskRun, State], bool]] = None,\n        viz_return_value: Optional[Any] = None,\n    ):\n        # Validate if hook passed is list and contains callables\n        hook_categories = [on_completion, on_failure]\n        hook_names = [\"on_completion\", \"on_failure\"]\n        for hooks, hook_name in zip(hook_categories, hook_names):\n            if hooks is not None:\n                if not hooks:\n                    raise ValueError(f\"Empty list passed for '{hook_name}'\")\n                try:\n                    hooks = list(hooks)\n                except TypeError:\n                    raise TypeError(\n                        f\"Expected iterable for '{hook_name}'; got\"\n                        f\" {type(hooks).__name__} instead. 
Please provide a list of\"\n                        f\" hooks to '{hook_name}':\\n\\n\"\n                        f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n                        \" my_flow():\\n\\tpass\"\n                    )\n\n                for hook in hooks:\n                    if not callable(hook):\n                        raise TypeError(\n                            f\"Expected callables in '{hook_name}'; got\"\n                            f\" {type(hook).__name__} instead. Please provide a list of\"\n                            f\" hooks to '{hook_name}':\\n\\n\"\n                            f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n                            \" my_flow():\\n\\tpass\"\n                        )\n\n        if not callable(fn):\n            raise TypeError(\"'fn' must be callable\")\n\n        self.description = description or inspect.getdoc(fn)\n        update_wrapper(self, fn)\n        self.fn = fn\n        self.isasync = inspect.iscoroutinefunction(self.fn)\n\n        if not name:\n            if not hasattr(self.fn, \"__name__\"):\n                self.name = type(self.fn).__name__\n            else:\n                self.name = self.fn.__name__\n        else:\n            self.name = name\n\n        if task_run_name is not None:\n            if not isinstance(task_run_name, str) and not callable(task_run_name):\n                raise TypeError(\n                    \"Expected string or callable for 'task_run_name'; got\"\n                    f\" {type(task_run_name).__name__} instead.\"\n                )\n        self.task_run_name = task_run_name\n\n        self.version = version\n        self.log_prints = log_prints\n\n        raise_for_reserved_arguments(self.fn, [\"return_state\", \"wait_for\"])\n\n        self.tags = set(tags if tags else [])\n\n        if not hasattr(self.fn, \"__qualname__\"):\n            self.task_key = to_qualified_name(type(self.fn))\n        else:\n            try:\n                task_origin_hash = hash_objects(\n                    self.name, os.path.abspath(inspect.getsourcefile(self.fn))\n                )\n            except TypeError:\n                task_origin_hash = \"unknown-source-file\"\n\n            self.task_key = f\"{self.fn.__qualname__}-{task_origin_hash}\"\n\n        self.cache_key_fn = cache_key_fn\n        self.cache_expiration = cache_expiration\n        self.refresh_cache = refresh_cache\n\n        # TaskRunPolicy settings\n        # TODO: We can instantiate a `TaskRunPolicy` and add Pydantic bound checks to\n        #       validate that the user passes positive numbers here\n\n        self.retries = (\n            retries if retries is not None else PREFECT_TASK_DEFAULT_RETRIES.value()\n        )\n        if retry_delay_seconds is None:\n            retry_delay_seconds = PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS.value()\n\n        if callable(retry_delay_seconds):\n            self.retry_delay_seconds = retry_delay_seconds(retries)\n        else:\n            self.retry_delay_seconds = retry_delay_seconds\n\n        if isinstance(self.retry_delay_seconds, list) and (\n            len(self.retry_delay_seconds) > 50\n        ):\n            raise ValueError(\"Can not configure more than 50 retry delays per task.\")\n\n        if retry_jitter_factor is not None and retry_jitter_factor < 0:\n            raise ValueError(\"`retry_jitter_factor` must be >= 0.\")\n\n        self.retry_jitter_factor = retry_jitter_factor\n        self.persist_result = persist_result\n        self.result_storage = 
result_storage\n        self.result_serializer = result_serializer\n        self.result_storage_key = result_storage_key\n        self.cache_result_in_memory = cache_result_in_memory\n\n        self.timeout_seconds = float(timeout_seconds) if timeout_seconds else None\n        # Warn if this task's `name` conflicts with another task while having a\n        # different function. This is to detect the case where two or more tasks\n        # share a name or are lambdas, which should result in a warning, and to\n        # differentiate it from the case where the task was 'copied' via\n        # `with_options`, which should not result in a warning.\n        registry = PrefectObjectRegistry.get()\n\n        if registry and any(\n            other\n            for other in registry.get_instances(Task)\n            if other.name == self.name and id(other.fn) != id(self.fn)\n        ):\n            try:\n                file = inspect.getsourcefile(self.fn)\n                line_number = inspect.getsourcelines(self.fn)[1]\n            except TypeError:\n                file = \"unknown\"\n                line_number = \"unknown\"\n\n            warnings.warn(\n                f\"A task named {self.name!r} and defined at '{file}:{line_number}' \"\n                \"conflicts with another task. Consider specifying a unique `name` \"\n                \"parameter in the task definition:\\n\\n \"\n                \"`@task(name='my_unique_name', ...)`\"\n            )\n        self.on_completion = on_completion\n        self.on_failure = on_failure\n\n        # retry_condition_fn must be a callable or None. If it is neither, raise a TypeError\n        if retry_condition_fn is not None and not (callable(retry_condition_fn)):\n            raise TypeError(\n                \"Expected `retry_condition_fn` to be callable, got\"\n                f\" {type(retry_condition_fn).__name__} instead.\"\n            )\n\n        self.retry_condition_fn = retry_condition_fn\n        self.viz_return_value = viz_return_value\n\n    def with_options(\n        self,\n        *,\n        name: str = None,\n        description: str = None,\n        tags: Iterable[str] = None,\n        cache_key_fn: Callable[\n            [\"TaskRunContext\", Dict[str, Any]], Optional[str]\n        ] = None,\n        task_run_name: Optional[Union[Callable[[], str], str]] = None,\n        cache_expiration: datetime.timedelta = None,\n        retries: Optional[int] = NotSet,\n        retry_delay_seconds: Union[\n            float,\n            int,\n            List[float],\n            Callable[[int], List[float]],\n        ] = NotSet,\n        retry_jitter_factor: Optional[float] = NotSet,\n        persist_result: Optional[bool] = NotSet,\n        result_storage: Optional[ResultStorage] = NotSet,\n        result_serializer: Optional[ResultSerializer] = NotSet,\n        result_storage_key: Optional[str] = NotSet,\n        cache_result_in_memory: Optional[bool] = None,\n        timeout_seconds: Union[int, float] = None,\n        log_prints: Optional[bool] = NotSet,\n        refresh_cache: Optional[bool] = NotSet,\n        on_completion: Optional[\n            List[Callable[[\"Task\", TaskRun, State], Union[Awaitable[None], None]]]\n        ] = None,\n        on_failure: Optional[\n            List[Callable[[\"Task\", TaskRun, State], Union[Awaitable[None], None]]]\n        ] = None,\n        retry_condition_fn: Optional[Callable[[\"Task\", TaskRun, State], bool]] = None,\n        viz_return_value: Optional[Any] = None,\n    ):\n        \"\"\"\n 
       Create a new task from the current object, updating provided options.\n\n        Args:\n            name: A new name for the task.\n            description: A new description for the task.\n            tags: A new set of tags for the task. If given, existing tags are ignored,\n                not merged.\n            cache_key_fn: A new cache key function for the task.\n            cache_expiration: A new cache expiration time for the task.\n            task_run_name: An optional name to distinguish runs of this task; this name can be provided\n                as a string template with the task's keyword arguments as variables,\n                or a function that returns a string.\n            retries: A new number of times to retry on task run failure.\n            retry_delay_seconds: Optionally configures how long to wait before retrying\n                the task after failure. This is only applicable if `retries` is nonzero.\n                This setting can either be a number of seconds, a list of retry delays,\n                or a callable that, given the total number of retries, generates a list\n                of retry delays. If a number of seconds, that delay will be applied to\n                all retries. If a list, each retry will wait for the corresponding delay\n                before retrying. When passing a callable or a list, the number of\n                configured retry delays cannot exceed 50.\n            retry_jitter_factor: An optional factor that defines the factor to which a\n                retry can be jittered in order to avoid a \"thundering herd\".\n            persist_result: A new option for enabling or disabling result persistence.\n            result_storage: A new storage type to use for results.\n            result_serializer: A new serializer to use for results.\n            result_storage_key: A new key for the persisted result to be stored at.\n            timeout_seconds: A new maximum time for the task to complete in seconds.\n            log_prints: A new option for enabling or disabling redirection of `print` statements.\n            refresh_cache: A new option for enabling or disabling cache refresh.\n            on_completion: A new list of callables to run when the task enters a completed state.\n            on_failure: A new list of callables to run when the task enters a failed state.\n            retry_condition_fn: An optional callable run when a task run returns a Failed state.\n                Should return `True` if the task should continue to its retry policy, and `False`\n                if the task should end as failed. 
Defaults to `None`, indicating the task should\n                always continue to its retry policy.\n            viz_return_value: An optional value to return when the task dependency tree is visualized.\n\n        Returns:\n            A new `Task` instance.\n\n        Examples:\n\n            Create a new task from an existing task and update the name\n\n            >>> @task(name=\"My task\")\n            >>> def my_task():\n            >>>     return 1\n            >>>\n            >>> new_task = my_task.with_options(name=\"My new task\")\n\n            Create a new task from an existing task and update the retry settings\n\n            >>> from random import randint\n            >>>\n            >>> @task(retries=1, retry_delay_seconds=5)\n            >>> def my_task():\n            >>>     x = randint(0, 5)\n            >>>     if x >= 3:  # Make a task that fails sometimes\n            >>>         raise ValueError(\"Retry me please!\")\n            >>>     return x\n            >>>\n            >>> new_task = my_task.with_options(retries=5, retry_delay_seconds=2)\n\n            Use a task with updated options within a flow\n\n            >>> @task(name=\"My task\")\n            >>> def my_task():\n            >>>     return 1\n            >>>\n            >>> @flow\n            >>> my_flow():\n            >>>     new_task = my_task.with_options(name=\"My new task\")\n            >>>     new_task()\n        \"\"\"\n        return Task(\n            fn=self.fn,\n            name=name or self.name,\n            description=description or self.description,\n            tags=tags or copy(self.tags),\n            cache_key_fn=cache_key_fn or self.cache_key_fn,\n            cache_expiration=cache_expiration or self.cache_expiration,\n            task_run_name=task_run_name,\n            retries=retries if retries is not NotSet else self.retries,\n            retry_delay_seconds=(\n                retry_delay_seconds\n                if retry_delay_seconds is not NotSet\n                else self.retry_delay_seconds\n            ),\n            retry_jitter_factor=(\n                retry_jitter_factor\n                if retry_jitter_factor is not NotSet\n                else self.retry_jitter_factor\n            ),\n            persist_result=(\n                persist_result if persist_result is not NotSet else self.persist_result\n            ),\n            result_storage=(\n                result_storage if result_storage is not NotSet else self.result_storage\n            ),\n            result_storage_key=(\n                result_storage_key\n                if result_storage_key is not NotSet\n                else self.result_storage_key\n            ),\n            result_serializer=(\n                result_serializer\n                if result_serializer is not NotSet\n                else self.result_serializer\n            ),\n            cache_result_in_memory=(\n                cache_result_in_memory\n                if cache_result_in_memory is not None\n                else self.cache_result_in_memory\n            ),\n            timeout_seconds=(\n                timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n            ),\n            log_prints=(log_prints if log_prints is not NotSet else self.log_prints),\n            refresh_cache=(\n                refresh_cache if refresh_cache is not NotSet else self.refresh_cache\n            ),\n            on_completion=on_completion or self.on_completion,\n            on_failure=on_failure or 
self.on_failure,\n            retry_condition_fn=retry_condition_fn or self.retry_condition_fn,\n            viz_return_value=viz_return_value or self.viz_return_value,\n        )\n\n    async def create_run(\n        self,\n        flow_run_context: FlowRunContext,\n        parameters: Dict[str, Any],\n        wait_for: Optional[Iterable[PrefectFuture]],\n        extra_task_inputs: Optional[Dict[str, Set[TaskRunInput]]] = None,\n    ) -> TaskRun:\n        # TODO: Investigate if we can replace create_task_run on the task run engine\n        # with this method. Would require updating to work without the flow run context.\n        from prefect.utilities.engine import (\n            _dynamic_key_for_task_run,\n            collect_task_run_inputs,\n        )\n\n        dynamic_key = _dynamic_key_for_task_run(flow_run_context, self)\n        task_inputs = {\n            k: await collect_task_run_inputs(v) for k, v in parameters.items()\n        }\n        if wait_for:\n            task_inputs[\"wait_for\"] = await collect_task_run_inputs(wait_for)\n\n        # Join extra task inputs\n        extra_task_inputs = extra_task_inputs or {}\n        for k, extras in extra_task_inputs.items():\n            task_inputs[k] = task_inputs[k].union(extras)\n\n        flow_run_logger = get_run_logger(flow_run_context)\n\n        task_run = await flow_run_context.client.create_task_run(\n            task=self,\n            name=f\"{self.name} - {dynamic_key}\",\n            flow_run_id=flow_run_context.flow_run.id,\n            dynamic_key=dynamic_key,\n            state=Pending(),\n            extra_tags=TagsContext.get().current_tags,\n            task_inputs=task_inputs,\n        )\n\n        if flow_run_context.flow_run:\n            flow_run_logger.info(\n                f\"Created task run {task_run.name!r} for task {self.name!r}\"\n            )\n        else:\n            logger.info(f\"Created task run {task_run.name!r} for task {self.name!r}\")\n\n        return task_run\n\n    @overload\n    def __call__(\n        self: \"Task[P, NoReturn]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> None:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def __call__(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> T:\n        ...\n\n    @overload\n    def __call__(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        return_state: Literal[True],\n        **kwargs: P.kwargs,\n    ) -> State[T]:\n        ...\n\n    def __call__(\n        self,\n        *args: P.args,\n        return_state: bool = False,\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: P.kwargs,\n    ):\n        \"\"\"\n        Run the task and return the result. 
If `return_state` is True returns\n        the result is wrapped in a Prefect State which provides error handling.\n        \"\"\"\n        from prefect.engine import enter_task_run_engine\n        from prefect.task_engine import submit_autonomous_task_run_to_engine\n        from prefect.task_runners import SequentialTaskRunner\n\n        # Convert the call args/kwargs to a parameter dict\n        parameters = get_call_parameters(self.fn, args, kwargs)\n\n        return_type = \"state\" if return_state else \"result\"\n\n        task_run_tracker = get_task_viz_tracker()\n        if task_run_tracker:\n            return track_viz_task(\n                self.isasync, self.name, parameters, self.viz_return_value\n            )\n\n        # new engine currently only compatible with async tasks\n        if PREFECT_EXPERIMENTAL_ENABLE_NEW_ENGINE.value():\n            from prefect.new_task_engine import run_task, run_task_sync\n\n            run_kwargs = dict(\n                task=self,\n                parameters=parameters,\n                wait_for=wait_for,\n                return_type=return_type,\n            )\n            if self.isasync:\n                # this returns an awaitable coroutine\n                return run_task(**run_kwargs)\n            else:\n                return run_task_sync(**run_kwargs)\n\n        if (\n            PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value()\n            and not FlowRunContext.get()\n        ):\n            from prefect import get_client\n\n            return submit_autonomous_task_run_to_engine(\n                task=self,\n                task_run=None,\n                task_runner=SequentialTaskRunner(),\n                parameters=parameters,\n                return_type=return_type,\n                client=get_client(),\n            )\n\n        return enter_task_run_engine(\n            self,\n            parameters=parameters,\n            wait_for=wait_for,\n            task_runner=SequentialTaskRunner(),\n            return_type=return_type,\n            mapped=False,\n        )\n\n    @overload\n    def _run(\n        self: \"Task[P, NoReturn]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> PrefectFuture[None, Sync]:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def _run(\n        self: \"Task[P, Coroutine[Any, Any, T]]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> Awaitable[State[T]]:\n        ...\n\n    @overload\n    def _run(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> State[T]:\n        ...\n\n    def _run(\n        self,\n        *args: P.args,\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: P.kwargs,\n    ) -> Union[State, Awaitable[State]]:\n        \"\"\"\n        Run the task and return the final state.\n        \"\"\"\n        from prefect.engine import enter_task_run_engine\n        from prefect.task_runners import SequentialTaskRunner\n\n        # Convert the call args/kwargs to a parameter dict\n        parameters = get_call_parameters(self.fn, args, kwargs)\n\n        return enter_task_run_engine(\n            self,\n            parameters=parameters,\n            wait_for=wait_for,\n            return_type=\"state\",\n            task_runner=SequentialTaskRunner(),\n            mapped=False,\n        )\n\n    @overload\n    def submit(\n        self: 
\"Task[P, NoReturn]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> PrefectFuture[None, Sync]:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def submit(\n        self: \"Task[P, Coroutine[Any, Any, T]]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> Awaitable[PrefectFuture[T, Async]]:\n        ...\n\n    @overload\n    def submit(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> PrefectFuture[T, Sync]:\n        ...\n\n    @overload\n    def submit(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        return_state: Literal[True],\n        **kwargs: P.kwargs,\n    ) -> State[T]:\n        ...\n\n    @overload\n    def submit(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> TaskRun:\n        ...\n\n    @overload\n    def submit(\n        self: \"Task[P, Coroutine[Any, Any, T]]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> Awaitable[TaskRun]:\n        ...\n\n    def submit(\n        self,\n        *args: Any,\n        return_state: bool = False,\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: Any,\n    ) -> Union[PrefectFuture, Awaitable[PrefectFuture], TaskRun, Awaitable[TaskRun]]:\n        \"\"\"\n        Submit a run of the task to the engine.\n\n        If writing an async task, this call must be awaited.\n\n        If called from within a flow function,\n\n        Will create a new task run in the backing API and submit the task to the flow's\n        task runner. This call only blocks execution while the task is being submitted,\n        once it is submitted, the flow function will continue executing. 
However, note\n        that the `SequentialTaskRunner` does not implement parallel execution for sync tasks\n        and they are fully resolved on submission.\n\n        Args:\n            *args: Arguments to run the task with\n            return_state: Return the result of the flow run wrapped in a\n                Prefect State.\n            wait_for: Upstream task futures to wait for before starting the task\n            **kwargs: Keyword arguments to run the task with\n\n        Returns:\n            If `return_state` is False a future allowing asynchronous access to\n                the state of the task\n            If `return_state` is True a future wrapped in a Prefect State allowing asynchronous access to\n                the state of the task\n\n        Examples:\n\n            Define a task\n\n            >>> from prefect import task\n            >>> @task\n            >>> def my_task():\n            >>>     return \"hello\"\n\n            Run a task in a flow\n\n            >>> from prefect import flow\n            >>> @flow\n            >>> def my_flow():\n            >>>     my_task.submit()\n\n            Wait for a task to finish\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     my_task.submit().wait()\n\n            Use the result from a task in a flow\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     print(my_task.submit().result())\n            >>>\n            >>> my_flow()\n            hello\n\n            Run an async task in an async flow\n\n            >>> @task\n            >>> async def my_async_task():\n            >>>     pass\n            >>>\n            >>> @flow\n            >>> async def my_flow():\n            >>>     await my_async_task.submit()\n\n            Run a sync task in an async flow\n\n            >>> @flow\n            >>> async def my_flow():\n            >>>     my_task.submit()\n\n            Enforce ordering between tasks that do not exchange data\n            >>> @task\n            >>> def task_1():\n            >>>     pass\n            >>>\n            >>> @task\n            >>> def task_2():\n            >>>     pass\n            >>>\n            >>> @flow\n            >>> def my_flow():\n            >>>     x = task_1.submit()\n            >>>\n            >>>     # task 2 will wait for task_1 to complete\n            >>>     y = task_2.submit(wait_for=[x])\n\n        \"\"\"\n\n        from prefect.engine import create_autonomous_task_run, enter_task_run_engine\n\n        # Convert the call args/kwargs to a parameter dict\n        parameters = get_call_parameters(self.fn, args, kwargs)\n        return_type = \"state\" if return_state else \"future\"\n        flow_run_context = FlowRunContext.get()\n\n        task_viz_tracker = get_task_viz_tracker()\n        if task_viz_tracker:\n            raise VisualizationUnsupportedError(\n                \"`task.submit()` is not currently supported by `flow.visualize()`\"\n            )\n\n        if PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING and not flow_run_context:\n            create_autonomous_task_run_call = create_call(\n                create_autonomous_task_run, task=self, parameters=parameters\n            )\n            if self.isasync:\n                return from_async.wait_for_call_in_loop_thread(\n                    create_autonomous_task_run_call\n                )\n            else:\n                return from_sync.wait_for_call_in_loop_thread(\n                    create_autonomous_task_run_call\n                )\n    
    if PREFECT_EXPERIMENTAL_ENABLE_NEW_ENGINE and flow_run_context:\n            if self.isasync:\n                return self._submit_async(\n                    parameters=parameters,\n                    flow_run_context=flow_run_context,\n                    wait_for=wait_for,\n                    return_state=return_state,\n                )\n            else:\n                raise NotImplementedError(\n                    \"Submitting sync tasks with the new engine has not be implemented yet.\"\n                )\n\n        else:\n            return enter_task_run_engine(\n                self,\n                parameters=parameters,\n                wait_for=wait_for,\n                return_type=return_type,\n                task_runner=None,  # Use the flow's task runner\n                mapped=False,\n            )\n\n    async def _submit_async(\n        self,\n        parameters: Dict[str, Any],\n        flow_run_context: FlowRunContext,\n        wait_for: Optional[Iterable[PrefectFuture]],\n        return_state: bool,\n    ):\n        from prefect.new_task_engine import run_task\n\n        task_runner = flow_run_context.task_runner\n\n        task_run = await self.create_run(\n            flow_run_context=flow_run_context,\n            parameters=parameters,\n            wait_for=wait_for,\n        )\n\n        future = PrefectFuture(\n            name=task_run.name,\n            key=uuid4(),\n            task_runner=task_runner,\n            asynchronous=(self.isasync and flow_run_context.flow.isasync),\n        )\n        future.task_run = task_run\n        flow_run_context.task_run_futures.append(future)\n        await task_runner.submit(\n            key=future.key,\n            call=partial(\n                run_task,\n                task=self,\n                task_run=task_run,\n                parameters=parameters,\n                wait_for=wait_for,\n                return_type=\"state\",\n            ),\n        )\n        # TODO: I don't like this. Can we move responsibility for creating the future\n        # and setting this anyio.Event to the task runner?\n        future._submitted.set()\n\n        if return_state:\n            return await future.wait()\n        else:\n            return future\n\n    @overload\n    def map(\n        self: \"Task[P, NoReturn]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> List[PrefectFuture[None, Sync]]:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def map(\n        self: \"Task[P, Coroutine[Any, Any, T]]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> Awaitable[List[PrefectFuture[T, Async]]]:\n        ...\n\n    @overload\n    def map(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> List[PrefectFuture[T, Sync]]:\n        ...\n\n    @overload\n    def map(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        return_state: Literal[True],\n        **kwargs: P.kwargs,\n    ) -> List[State[T]]:\n        ...\n\n    def map(\n        self,\n        *args: Any,\n        return_state: bool = False,\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: Any,\n    ) -> Any:\n        \"\"\"\n        Submit a mapped run of the task to a worker.\n\n        Must be called within a flow function. 
If writing an async task, this\n        call must be awaited.\n\n        Must be called with at least one iterable and all iterables must be\n        the same length. Any arguments that are not iterable will be treated as\n        a static value and each task run will receive the same value.\n\n        Will create as many task runs as the length of the iterable(s) in the\n        backing API and submit the task runs to the flow's task runner. This\n        call blocks if given a future as input while the future is resolved. It\n        also blocks while the tasks are being submitted, once they are\n        submitted, the flow function will continue executing. However, note\n        that the `SequentialTaskRunner` does not implement parallel execution\n        for sync tasks and they are fully resolved on submission.\n\n        Args:\n            *args: Iterable and static arguments to run the tasks with\n            return_state: Return a list of Prefect States that wrap the results\n                of each task run.\n            wait_for: Upstream task futures to wait for before starting the\n                task\n            **kwargs: Keyword iterable arguments to run the task with\n\n        Returns:\n            A list of futures allowing asynchronous access to the state of the\n            tasks\n\n        Examples:\n\n            Define a task\n\n            >>> from prefect import task\n            >>> @task\n            >>> def my_task(x):\n            >>>     return x + 1\n\n            Create mapped tasks\n\n            >>> from prefect import flow\n            >>> @flow\n            >>> def my_flow():\n            >>>     my_task.map([1, 2, 3])\n\n            Wait for all mapped tasks to finish\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     futures = my_task.map([1, 2, 3])\n            >>>     for future in futures:\n            >>>         future.wait()\n            >>>     # Now all of the mapped tasks have finished\n            >>>     my_task(10)\n\n            Use the result from mapped tasks in a flow\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     futures = my_task.map([1, 2, 3])\n            >>>     for future in futures:\n            >>>         print(future.result())\n            >>> my_flow()\n            2\n            3\n            4\n\n            Enforce ordering between tasks that do not exchange data\n            >>> @task\n            >>> def task_1(x):\n            >>>     pass\n            >>>\n            >>> @task\n            >>> def task_2(y):\n            >>>     pass\n            >>>\n            >>> @flow\n            >>> def my_flow():\n            >>>     x = task_1.submit()\n            >>>\n            >>>     # task 2 will wait for task_1 to complete\n            >>>     y = task_2.map([1, 2, 3], wait_for=[x])\n\n            Use a non-iterable input as a constant across mapped tasks\n            >>> @task\n            >>> def display(prefix, item):\n            >>>    print(prefix, item)\n            >>>\n            >>> @flow\n            >>> def my_flow():\n            >>>     display.map(\"Check it out: \", [1, 2, 3])\n            >>>\n            >>> my_flow()\n            Check it out: 1\n            Check it out: 2\n            Check it out: 3\n\n            Use `unmapped` to treat an iterable argument as a constant\n            >>> from prefect import unmapped\n            >>>\n            >>> @task\n            >>> def add_n_to_items(items, n):\n            >>>     return [item 
+ n for item in items]\n            >>>\n            >>> @flow\n            >>> def my_flow():\n            >>>     return add_n_to_items.map(unmapped([10, 20]), n=[1, 2, 3])\n            >>>\n            >>> my_flow()\n            [[11, 21], [12, 22], [13, 23]]\n        \"\"\"\n\n        from prefect.engine import begin_task_map, enter_task_run_engine\n\n        # Convert the call args/kwargs to a parameter dict; do not apply defaults\n        # since they should not be mapped over\n        parameters = get_call_parameters(self.fn, args, kwargs, apply_defaults=False)\n        return_type = \"state\" if return_state else \"future\"\n\n        task_viz_tracker = get_task_viz_tracker()\n        if task_viz_tracker:\n            raise VisualizationUnsupportedError(\n                \"`task.map()` is not currently supported by `flow.visualize()`\"\n            )\n\n        if (\n            PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value()\n            and not FlowRunContext.get()\n        ):\n            map_call = create_call(\n                begin_task_map,\n                task=self,\n                parameters=parameters,\n                flow_run_context=None,\n                wait_for=wait_for,\n                return_type=return_type,\n                task_runner=None,\n                autonomous=True,\n            )\n            if self.isasync:\n                return from_async.wait_for_call_in_loop_thread(map_call)\n            else:\n                return from_sync.wait_for_call_in_loop_thread(map_call)\n\n        return enter_task_run_engine(\n            self,\n            parameters=parameters,\n            wait_for=wait_for,\n            return_type=return_type,\n            task_runner=None,\n            mapped=True,\n        )\n\n    def serve(self, task_runner: Optional[BaseTaskRunner] = None) -> \"Task\":\n        \"\"\"Serve the task using the provided task runner. This method is used to\n        establish a websocket connection with the Prefect server and listen for\n        submitted task runs to execute.\n\n        Args:\n            task_runner: The task runner to use for serving the task. If not provided,\n                the default ConcurrentTaskRunner will be used.\n\n        Examples:\n            Serve a task using the default task runner\n            >>> @task\n            >>> def my_task():\n            >>>     return 1\n\n            >>> my_task.serve()\n        \"\"\"\n\n        if not PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING:\n            raise ValueError(\n                \"Task's `serve` method is an experimental feature and must be enabled with \"\n                \"`prefect config set PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING=True`\"\n            )\n\n        from prefect.task_server import serve\n\n        serve(self, task_runner=task_runner)\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task.map","title":"map","text":"

Submit a mapped run of the task to a worker.

Must be called within a flow function. If writing an async task, this call must be awaited.

Must be called with at least one iterable, and all iterables must be the same length. Any arguments that are not iterable are treated as static values, and each task run receives the same value.

Will create as many task runs as the length of the iterable(s) in the backing API and submit the task runs to the flow's task runner. This call blocks while any futures given as input are resolved. It also blocks while the task runs are being submitted; once they are submitted, the flow function continues executing. Note, however, that the SequentialTaskRunner does not implement parallel execution for sync tasks, and they are fully resolved on submission.

Parameters:

- *args (Any, default ()): Iterable and static arguments to run the tasks with.
- return_state (bool, default False): Return a list of Prefect States that wrap the results of each task run.
- wait_for (Optional[Iterable[PrefectFuture]], default None): Upstream task futures to wait for before starting the task.
- **kwargs (Any, default {}): Keyword iterable arguments to run the task with.

Returns:

- Any: A list of futures allowing asynchronous access to the state of the tasks.

Define a task\n\n>>> from prefect import task\n>>> @task\n>>> def my_task(x):\n>>>     return x + 1\n\nCreate mapped tasks\n\n>>> from prefect import flow\n>>> @flow\n>>> def my_flow():\n>>>     my_task.map([1, 2, 3])\n\nWait for all mapped tasks to finish\n\n>>> @flow\n>>> def my_flow():\n>>>     futures = my_task.map([1, 2, 3])\n>>>     for future in futures:\n>>>         future.wait()\n>>>     # Now all of the mapped tasks have finished\n>>>     my_task(10)\n\nUse the result from mapped tasks in a flow\n\n>>> @flow\n>>> def my_flow():\n>>>     futures = my_task.map([1, 2, 3])\n>>>     for future in futures:\n>>>         print(future.result())\n>>> my_flow()\n2\n3\n4\n\nEnforce ordering between tasks that do not exchange data\n>>> @task\n>>> def task_1(x):\n>>>     pass\n>>>\n>>> @task\n>>> def task_2(y):\n>>>     pass\n>>>\n>>> @flow\n>>> def my_flow():\n>>>     x = task_1.submit()\n>>>\n>>>     # task 2 will wait for task_1 to complete\n>>>     y = task_2.map([1, 2, 3], wait_for=[x])\n\nUse a non-iterable input as a constant across mapped tasks\n>>> @task\n>>> def display(prefix, item):\n>>>    print(prefix, item)\n>>>\n>>> @flow\n>>> def my_flow():\n>>>     display.map(\"Check it out: \", [1, 2, 3])\n>>>\n>>> my_flow()\nCheck it out: 1\nCheck it out: 2\nCheck it out: 3\n\nUse `unmapped` to treat an iterable argument as a constant\n>>> from prefect import unmapped\n>>>\n>>> @task\n>>> def add_n_to_items(items, n):\n>>>     return [item + n for item in items]\n>>>\n>>> @flow\n>>> def my_flow():\n>>>     return add_n_to_items.map(unmapped([10, 20]), n=[1, 2, 3])\n>>>\n>>> my_flow()\n[[11, 21], [12, 22], [13, 23]]\n
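One more illustrative sketch (not part of the original docstring): with return_state=True, map returns a list of Prefect States instead of futures, so the results can be pulled directly:

>>> @flow
>>> def my_flow():
>>>     # return_state=True yields a list of Prefect States
>>>     states = my_task.map([1, 2, 3], return_state=True)
>>>     return [state.result() for state in states]
>>>
>>> my_flow()
[2, 3, 4]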
Source code in prefect/tasks.py
def map(\n    self,\n    *args: Any,\n    return_state: bool = False,\n    wait_for: Optional[Iterable[PrefectFuture]] = None,\n    **kwargs: Any,\n) -> Any:\n    \"\"\"\n    Submit a mapped run of the task to a worker.\n\n    Must be called within a flow function. If writing an async task, this\n    call must be awaited.\n\n    Must be called with at least one iterable and all iterables must be\n    the same length. Any arguments that are not iterable will be treated as\n    a static value and each task run will receive the same value.\n\n    Will create as many task runs as the length of the iterable(s) in the\n    backing API and submit the task runs to the flow's task runner. This\n    call blocks if given a future as input while the future is resolved. It\n    also blocks while the tasks are being submitted, once they are\n    submitted, the flow function will continue executing. However, note\n    that the `SequentialTaskRunner` does not implement parallel execution\n    for sync tasks and they are fully resolved on submission.\n\n    Args:\n        *args: Iterable and static arguments to run the tasks with\n        return_state: Return a list of Prefect States that wrap the results\n            of each task run.\n        wait_for: Upstream task futures to wait for before starting the\n            task\n        **kwargs: Keyword iterable arguments to run the task with\n\n    Returns:\n        A list of futures allowing asynchronous access to the state of the\n        tasks\n\n    Examples:\n\n        Define a task\n\n        >>> from prefect import task\n        >>> @task\n        >>> def my_task(x):\n        >>>     return x + 1\n\n        Create mapped tasks\n\n        >>> from prefect import flow\n        >>> @flow\n        >>> def my_flow():\n        >>>     my_task.map([1, 2, 3])\n\n        Wait for all mapped tasks to finish\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     futures = my_task.map([1, 2, 3])\n        >>>     for future in futures:\n        >>>         future.wait()\n        >>>     # Now all of the mapped tasks have finished\n        >>>     my_task(10)\n\n        Use the result from mapped tasks in a flow\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     futures = my_task.map([1, 2, 3])\n        >>>     for future in futures:\n        >>>         print(future.result())\n        >>> my_flow()\n        2\n        3\n        4\n\n        Enforce ordering between tasks that do not exchange data\n        >>> @task\n        >>> def task_1(x):\n        >>>     pass\n        >>>\n        >>> @task\n        >>> def task_2(y):\n        >>>     pass\n        >>>\n        >>> @flow\n        >>> def my_flow():\n        >>>     x = task_1.submit()\n        >>>\n        >>>     # task 2 will wait for task_1 to complete\n        >>>     y = task_2.map([1, 2, 3], wait_for=[x])\n\n        Use a non-iterable input as a constant across mapped tasks\n        >>> @task\n        >>> def display(prefix, item):\n        >>>    print(prefix, item)\n        >>>\n        >>> @flow\n        >>> def my_flow():\n        >>>     display.map(\"Check it out: \", [1, 2, 3])\n        >>>\n        >>> my_flow()\n        Check it out: 1\n        Check it out: 2\n        Check it out: 3\n\n        Use `unmapped` to treat an iterable argument as a constant\n        >>> from prefect import unmapped\n        >>>\n        >>> @task\n        >>> def add_n_to_items(items, n):\n        >>>     return [item + n for item in items]\n        >>>\n        >>> @flow\n        >>> 
def my_flow():\n        >>>     return add_n_to_items.map(unmapped([10, 20]), n=[1, 2, 3])\n        >>>\n        >>> my_flow()\n        [[11, 21], [12, 22], [13, 23]]\n    \"\"\"\n\n    from prefect.engine import begin_task_map, enter_task_run_engine\n\n    # Convert the call args/kwargs to a parameter dict; do not apply defaults\n    # since they should not be mapped over\n    parameters = get_call_parameters(self.fn, args, kwargs, apply_defaults=False)\n    return_type = \"state\" if return_state else \"future\"\n\n    task_viz_tracker = get_task_viz_tracker()\n    if task_viz_tracker:\n        raise VisualizationUnsupportedError(\n            \"`task.map()` is not currently supported by `flow.visualize()`\"\n        )\n\n    if (\n        PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value()\n        and not FlowRunContext.get()\n    ):\n        map_call = create_call(\n            begin_task_map,\n            task=self,\n            parameters=parameters,\n            flow_run_context=None,\n            wait_for=wait_for,\n            return_type=return_type,\n            task_runner=None,\n            autonomous=True,\n        )\n        if self.isasync:\n            return from_async.wait_for_call_in_loop_thread(map_call)\n        else:\n            return from_sync.wait_for_call_in_loop_thread(map_call)\n\n    return enter_task_run_engine(\n        self,\n        parameters=parameters,\n        wait_for=wait_for,\n        return_type=return_type,\n        task_runner=None,\n        mapped=True,\n    )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task.serve","title":"serve","text":"

Serve the task using the provided task runner. This method establishes a websocket connection with the Prefect server and listens for submitted task runs to execute.

Parameters:

- task_runner (Optional[BaseTaskRunner], default None): The task runner to use for serving the task. If not provided, the default ConcurrentTaskRunner will be used.

Examples:

Serve a task using the default task runner

>>> @task\n>>> def my_task():\n>>>     return 1\n
>>> my_task.serve()\n
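A hypothetical variation (not part of the original docstring) that passes an explicit task runner; this assumes SequentialTaskRunner can be imported from prefect.task_runners:

>>> from prefect.task_runners import SequentialTaskRunner
>>>
>>> my_task.serve(task_runner=SequentialTaskRunner())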
Source code in prefect/tasks.py
def serve(self, task_runner: Optional[BaseTaskRunner] = None) -> \"Task\":\n    \"\"\"Serve the task using the provided task runner. This method is used to\n    establish a websocket connection with the Prefect server and listen for\n    submitted task runs to execute.\n\n    Args:\n        task_runner: The task runner to use for serving the task. If not provided,\n            the default ConcurrentTaskRunner will be used.\n\n    Examples:\n        Serve a task using the default task runner\n        >>> @task\n        >>> def my_task():\n        >>>     return 1\n\n        >>> my_task.serve()\n    \"\"\"\n\n    if not PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING:\n        raise ValueError(\n            \"Task's `serve` method is an experimental feature and must be enabled with \"\n            \"`prefect config set PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING=True`\"\n        )\n\n    from prefect.task_server import serve\n\n    serve(self, task_runner=task_runner)\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task.submit","title":"submit","text":"

Submit a run of the task to the engine.

If writing an async task, this call must be awaited.

If called from within a flow function, this will create a new task run in the backing API and submit the task to the flow's task runner. This call only blocks execution while the task is being submitted; once it is submitted, the flow function continues executing. Note, however, that the SequentialTaskRunner does not implement parallel execution for sync tasks, and they are fully resolved on submission.

Parameters:

- *args (Any, default ()): Arguments to run the task with.
- return_state (bool, default False): Return the result of the task run wrapped in a Prefect State.
- wait_for (Optional[Iterable[PrefectFuture]], default None): Upstream task futures to wait for before starting the task.
- **kwargs (Any, default {}): Keyword arguments to run the task with.

Returns:

- Union[PrefectFuture, Awaitable[PrefectFuture], TaskRun, Awaitable[TaskRun]]: If return_state is False, a future allowing asynchronous access to the state of the task. If return_state is True, the state of the task run as a Prefect State.

Define a task\n\n>>> from prefect import task\n>>> @task\n>>> def my_task():\n>>>     return \"hello\"\n\nRun a task in a flow\n\n>>> from prefect import flow\n>>> @flow\n>>> def my_flow():\n>>>     my_task.submit()\n\nWait for a task to finish\n\n>>> @flow\n>>> def my_flow():\n>>>     my_task.submit().wait()\n\nUse the result from a task in a flow\n\n>>> @flow\n>>> def my_flow():\n>>>     print(my_task.submit().result())\n>>>\n>>> my_flow()\nhello\n\nRun an async task in an async flow\n\n>>> @task\n>>> async def my_async_task():\n>>>     pass\n>>>\n>>> @flow\n>>> async def my_flow():\n>>>     await my_async_task.submit()\n\nRun a sync task in an async flow\n\n>>> @flow\n>>> async def my_flow():\n>>>     my_task.submit()\n\nEnforce ordering between tasks that do not exchange data\n>>> @task\n>>> def task_1():\n>>>     pass\n>>>\n>>> @task\n>>> def task_2():\n>>>     pass\n>>>\n>>> @flow\n>>> def my_flow():\n>>>     x = task_1.submit()\n>>>\n>>>     # task 2 will wait for task_1 to complete\n>>>     y = task_2.submit(wait_for=[x])\n
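As an additional sketch (not part of the original docstring), return_state=True hands back the task run's state, which can be inspected before pulling the result:

>>> @flow
>>> def my_flow():
>>>     state = my_task.submit(return_state=True)
>>>     if state.is_completed():
>>>         print(state.result())
>>>
>>> my_flow()
hello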
Source code in prefect/tasks.py
def submit(\n    self,\n    *args: Any,\n    return_state: bool = False,\n    wait_for: Optional[Iterable[PrefectFuture]] = None,\n    **kwargs: Any,\n) -> Union[PrefectFuture, Awaitable[PrefectFuture], TaskRun, Awaitable[TaskRun]]:\n    \"\"\"\n    Submit a run of the task to the engine.\n\n    If writing an async task, this call must be awaited.\n\n    If called from within a flow function,\n\n    Will create a new task run in the backing API and submit the task to the flow's\n    task runner. This call only blocks execution while the task is being submitted,\n    once it is submitted, the flow function will continue executing. However, note\n    that the `SequentialTaskRunner` does not implement parallel execution for sync tasks\n    and they are fully resolved on submission.\n\n    Args:\n        *args: Arguments to run the task with\n        return_state: Return the result of the flow run wrapped in a\n            Prefect State.\n        wait_for: Upstream task futures to wait for before starting the task\n        **kwargs: Keyword arguments to run the task with\n\n    Returns:\n        If `return_state` is False a future allowing asynchronous access to\n            the state of the task\n        If `return_state` is True a future wrapped in a Prefect State allowing asynchronous access to\n            the state of the task\n\n    Examples:\n\n        Define a task\n\n        >>> from prefect import task\n        >>> @task\n        >>> def my_task():\n        >>>     return \"hello\"\n\n        Run a task in a flow\n\n        >>> from prefect import flow\n        >>> @flow\n        >>> def my_flow():\n        >>>     my_task.submit()\n\n        Wait for a task to finish\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     my_task.submit().wait()\n\n        Use the result from a task in a flow\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     print(my_task.submit().result())\n        >>>\n        >>> my_flow()\n        hello\n\n        Run an async task in an async flow\n\n        >>> @task\n        >>> async def my_async_task():\n        >>>     pass\n        >>>\n        >>> @flow\n        >>> async def my_flow():\n        >>>     await my_async_task.submit()\n\n        Run a sync task in an async flow\n\n        >>> @flow\n        >>> async def my_flow():\n        >>>     my_task.submit()\n\n        Enforce ordering between tasks that do not exchange data\n        >>> @task\n        >>> def task_1():\n        >>>     pass\n        >>>\n        >>> @task\n        >>> def task_2():\n        >>>     pass\n        >>>\n        >>> @flow\n        >>> def my_flow():\n        >>>     x = task_1.submit()\n        >>>\n        >>>     # task 2 will wait for task_1 to complete\n        >>>     y = task_2.submit(wait_for=[x])\n\n    \"\"\"\n\n    from prefect.engine import create_autonomous_task_run, enter_task_run_engine\n\n    # Convert the call args/kwargs to a parameter dict\n    parameters = get_call_parameters(self.fn, args, kwargs)\n    return_type = \"state\" if return_state else \"future\"\n    flow_run_context = FlowRunContext.get()\n\n    task_viz_tracker = get_task_viz_tracker()\n    if task_viz_tracker:\n        raise VisualizationUnsupportedError(\n            \"`task.submit()` is not currently supported by `flow.visualize()`\"\n        )\n\n    if PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING and not flow_run_context:\n        create_autonomous_task_run_call = create_call(\n            create_autonomous_task_run, task=self, parameters=parameters\n       
 )\n        if self.isasync:\n            return from_async.wait_for_call_in_loop_thread(\n                create_autonomous_task_run_call\n            )\n        else:\n            return from_sync.wait_for_call_in_loop_thread(\n                create_autonomous_task_run_call\n            )\n    if PREFECT_EXPERIMENTAL_ENABLE_NEW_ENGINE and flow_run_context:\n        if self.isasync:\n            return self._submit_async(\n                parameters=parameters,\n                flow_run_context=flow_run_context,\n                wait_for=wait_for,\n                return_state=return_state,\n            )\n        else:\n            raise NotImplementedError(\n                \"Submitting sync tasks with the new engine has not be implemented yet.\"\n            )\n\n    else:\n        return enter_task_run_engine(\n            self,\n            parameters=parameters,\n            wait_for=wait_for,\n            return_type=return_type,\n            task_runner=None,  # Use the flow's task runner\n            mapped=False,\n        )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task.with_options","title":"with_options","text":"

Create a new task from the current object, updating provided options.

Parameters:

- name (str, default None): A new name for the task.
- description (str, default None): A new description for the task.
- tags (Iterable[str], default None): A new set of tags for the task. If given, existing tags are ignored, not merged.
- cache_key_fn (Callable[[TaskRunContext, Dict[str, Any]], Optional[str]], default None): A new cache key function for the task.
- cache_expiration (timedelta, default None): A new cache expiration time for the task.
- task_run_name (Optional[Union[Callable[[], str], str]], default None): An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string.
- retries (Optional[int], default NotSet): A new number of times to retry on task run failure.
- retry_delay_seconds (Union[float, int, List[float], Callable[[int], List[float]]], default NotSet): Optionally configures how long to wait before retrying the task after failure. This is only applicable if retries is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50.
- retry_jitter_factor (Optional[float], default NotSet): An optional factor that defines the factor to which a retry can be jittered in order to avoid a \"thundering herd\".
- persist_result (Optional[bool], default NotSet): A new option for enabling or disabling result persistence.
- result_storage (Optional[ResultStorage], default NotSet): A new storage type to use for results.
- result_serializer (Optional[ResultSerializer], default NotSet): A new serializer to use for results.
- result_storage_key (Optional[str], default NotSet): A new key for the persisted result to be stored at.
- timeout_seconds (Union[int, float], default None): A new maximum time for the task to complete in seconds.
- log_prints (Optional[bool], default NotSet): A new option for enabling or disabling redirection of print statements.
- refresh_cache (Optional[bool], default NotSet): A new option for enabling or disabling cache refresh.
- on_completion (Optional[List[Callable[[Task, TaskRun, State], Union[Awaitable[None], None]]]], default None): A new list of callables to run when the task enters a completed state.
- on_failure (Optional[List[Callable[[Task, TaskRun, State], Union[Awaitable[None], None]]]], default None): A new list of callables to run when the task enters a failed state.
- retry_condition_fn (Optional[Callable[[Task, TaskRun, State], bool]], default None): An optional callable run when a task run returns a Failed state. Should return True if the task should continue to its retry policy, and False if the task should end as failed. Defaults to None, indicating the task should always continue to its retry policy.
- viz_return_value (Optional[Any], default None): An optional value to return when the task dependency tree is visualized.

Returns:

A new Task instance.

Create a new task from an existing task and update the name\n\n>>> @task(name=\"My task\")\n>>> def my_task():\n>>>     return 1\n>>>\n>>> new_task = my_task.with_options(name=\"My new task\")\n\nCreate a new task from an existing task and update the retry settings\n\n>>> from random import randint\n>>>\n>>> @task(retries=1, retry_delay_seconds=5)\n>>> def my_task():\n>>>     x = randint(0, 5)\n>>>     if x >= 3:  # Make a task that fails sometimes\n>>>         raise ValueError(\"Retry me please!\")\n>>>     return x\n>>>\n>>> new_task = my_task.with_options(retries=5, retry_delay_seconds=2)\n\nUse a task with updated options within a flow\n\n>>> @task(name=\"My task\")\n>>> def my_task():\n>>>     return 1\n>>>\n>>> @flow\n>>> def my_flow():\n>>>     new_task = my_task.with_options(name=\"My new task\")\n>>>     new_task()\n
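A further illustrative variation (not part of the original docstring), using the timeout_seconds and log_prints options documented above; the values are arbitrary:

>>> @task
>>> def my_task():
>>>     print(\"working...\")
>>>     return 1
>>>
>>> strict_task = my_task.with_options(timeout_seconds=30, log_prints=True)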
Source code in prefect/tasks.py
def with_options(\n    self,\n    *,\n    name: str = None,\n    description: str = None,\n    tags: Iterable[str] = None,\n    cache_key_fn: Callable[\n        [\"TaskRunContext\", Dict[str, Any]], Optional[str]\n    ] = None,\n    task_run_name: Optional[Union[Callable[[], str], str]] = None,\n    cache_expiration: datetime.timedelta = None,\n    retries: Optional[int] = NotSet,\n    retry_delay_seconds: Union[\n        float,\n        int,\n        List[float],\n        Callable[[int], List[float]],\n    ] = NotSet,\n    retry_jitter_factor: Optional[float] = NotSet,\n    persist_result: Optional[bool] = NotSet,\n    result_storage: Optional[ResultStorage] = NotSet,\n    result_serializer: Optional[ResultSerializer] = NotSet,\n    result_storage_key: Optional[str] = NotSet,\n    cache_result_in_memory: Optional[bool] = None,\n    timeout_seconds: Union[int, float] = None,\n    log_prints: Optional[bool] = NotSet,\n    refresh_cache: Optional[bool] = NotSet,\n    on_completion: Optional[\n        List[Callable[[\"Task\", TaskRun, State], Union[Awaitable[None], None]]]\n    ] = None,\n    on_failure: Optional[\n        List[Callable[[\"Task\", TaskRun, State], Union[Awaitable[None], None]]]\n    ] = None,\n    retry_condition_fn: Optional[Callable[[\"Task\", TaskRun, State], bool]] = None,\n    viz_return_value: Optional[Any] = None,\n):\n    \"\"\"\n    Create a new task from the current object, updating provided options.\n\n    Args:\n        name: A new name for the task.\n        description: A new description for the task.\n        tags: A new set of tags for the task. If given, existing tags are ignored,\n            not merged.\n        cache_key_fn: A new cache key function for the task.\n        cache_expiration: A new cache expiration time for the task.\n        task_run_name: An optional name to distinguish runs of this task; this name can be provided\n            as a string template with the task's keyword arguments as variables,\n            or a function that returns a string.\n        retries: A new number of times to retry on task run failure.\n        retry_delay_seconds: Optionally configures how long to wait before retrying\n            the task after failure. This is only applicable if `retries` is nonzero.\n            This setting can either be a number of seconds, a list of retry delays,\n            or a callable that, given the total number of retries, generates a list\n            of retry delays. If a number of seconds, that delay will be applied to\n            all retries. If a list, each retry will wait for the corresponding delay\n            before retrying. 
When passing a callable or a list, the number of\n            configured retry delays cannot exceed 50.\n        retry_jitter_factor: An optional factor that defines the factor to which a\n            retry can be jittered in order to avoid a \"thundering herd\".\n        persist_result: A new option for enabling or disabling result persistence.\n        result_storage: A new storage type to use for results.\n        result_serializer: A new serializer to use for results.\n        result_storage_key: A new key for the persisted result to be stored at.\n        timeout_seconds: A new maximum time for the task to complete in seconds.\n        log_prints: A new option for enabling or disabling redirection of `print` statements.\n        refresh_cache: A new option for enabling or disabling cache refresh.\n        on_completion: A new list of callables to run when the task enters a completed state.\n        on_failure: A new list of callables to run when the task enters a failed state.\n        retry_condition_fn: An optional callable run when a task run returns a Failed state.\n            Should return `True` if the task should continue to its retry policy, and `False`\n            if the task should end as failed. Defaults to `None`, indicating the task should\n            always continue to its retry policy.\n        viz_return_value: An optional value to return when the task dependency tree is visualized.\n\n    Returns:\n        A new `Task` instance.\n\n    Examples:\n\n        Create a new task from an existing task and update the name\n\n        >>> @task(name=\"My task\")\n        >>> def my_task():\n        >>>     return 1\n        >>>\n        >>> new_task = my_task.with_options(name=\"My new task\")\n\n        Create a new task from an existing task and update the retry settings\n\n        >>> from random import randint\n        >>>\n        >>> @task(retries=1, retry_delay_seconds=5)\n        >>> def my_task():\n        >>>     x = randint(0, 5)\n        >>>     if x >= 3:  # Make a task that fails sometimes\n        >>>         raise ValueError(\"Retry me please!\")\n        >>>     return x\n        >>>\n        >>> new_task = my_task.with_options(retries=5, retry_delay_seconds=2)\n\n        Use a task with updated options within a flow\n\n        >>> @task(name=\"My task\")\n        >>> def my_task():\n        >>>     return 1\n        >>>\n        >>> @flow\n        >>> my_flow():\n        >>>     new_task = my_task.with_options(name=\"My new task\")\n        >>>     new_task()\n    \"\"\"\n    return Task(\n        fn=self.fn,\n        name=name or self.name,\n        description=description or self.description,\n        tags=tags or copy(self.tags),\n        cache_key_fn=cache_key_fn or self.cache_key_fn,\n        cache_expiration=cache_expiration or self.cache_expiration,\n        task_run_name=task_run_name,\n        retries=retries if retries is not NotSet else self.retries,\n        retry_delay_seconds=(\n            retry_delay_seconds\n            if retry_delay_seconds is not NotSet\n            else self.retry_delay_seconds\n        ),\n        retry_jitter_factor=(\n            retry_jitter_factor\n            if retry_jitter_factor is not NotSet\n            else self.retry_jitter_factor\n        ),\n        persist_result=(\n            persist_result if persist_result is not NotSet else self.persist_result\n        ),\n        result_storage=(\n            result_storage if result_storage is not NotSet else self.result_storage\n        ),\n        
result_storage_key=(\n            result_storage_key\n            if result_storage_key is not NotSet\n            else self.result_storage_key\n        ),\n        result_serializer=(\n            result_serializer\n            if result_serializer is not NotSet\n            else self.result_serializer\n        ),\n        cache_result_in_memory=(\n            cache_result_in_memory\n            if cache_result_in_memory is not None\n            else self.cache_result_in_memory\n        ),\n        timeout_seconds=(\n            timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n        ),\n        log_prints=(log_prints if log_prints is not NotSet else self.log_prints),\n        refresh_cache=(\n            refresh_cache if refresh_cache is not NotSet else self.refresh_cache\n        ),\n        on_completion=on_completion or self.on_completion,\n        on_failure=on_failure or self.on_failure,\n        retry_condition_fn=retry_condition_fn or self.retry_condition_fn,\n        viz_return_value=viz_return_value or self.viz_return_value,\n    )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.exponential_backoff","title":"exponential_backoff","text":"

A task retry backoff utility that configures exponential backoff for task retries. The exponential backoff design matches the urllib3 implementation.

Parameters:

- backoff_factor (float, required): The base delay for the first retry; subsequent retries will increase the delay time by powers of 2.

Returns:

- Callable[[int], List[float]]: a callable that can be passed to the task constructor.

Source code in prefect/tasks.py
def exponential_backoff(backoff_factor: float) -> Callable[[int], List[float]]:\n    \"\"\"\n    A task retry backoff utility that configures exponential backoff for task retries.\n    The exponential backoff design matches the urllib3 implementation.\n\n    Arguments:\n        backoff_factor: the base delay for the first retry, subsequent retries will\n            increase the delay time by powers of 2.\n\n    Returns:\n        a callable that can be passed to the task constructor\n    \"\"\"\n\n    def retry_backoff_callable(retries: int) -> List[float]:\n        # no more than 50 retry delays can be configured on a task\n        retries = min(retries, 50)\n\n        return [backoff_factor * max(0, 2**r) for r in range(retries)]\n\n    return retry_backoff_callable\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.task","title":"task","text":"

Decorator to designate a function as a task in a Prefect workflow.

This decorator may be used for asynchronous or synchronous functions.

Parameters:

- name (str, default None): An optional name for the task; if not provided, the name will be inferred from the given function.
- description (str, default None): An optional string description for the task.
- tags (Iterable[str], default None): An optional set of tags to be associated with runs of this task. These tags are combined with any tags defined by a prefect.tags context at task runtime.
- version (str, default None): An optional string specifying the version of this task definition.
- cache_key_fn (Callable[[TaskRunContext, Dict[str, Any]], Optional[str]], default None): An optional callable that, given the task run context and call parameters, generates a string key; if the key matches a previous completed state, that state result will be restored instead of running the task again.
- cache_expiration (timedelta, default None): An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire.
- task_run_name (Optional[Union[Callable[[], str], str]], default None): An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string.
- retries (int, default None): An optional number of times to retry on task run failure.
- retry_delay_seconds (Union[float, int, List[float], Callable[[int], List[float]]], default None): Optionally configures how long to wait before retrying the task after failure. This is only applicable if retries is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50.
- retry_jitter_factor (Optional[float], default None): An optional factor that defines the factor to which a retry can be jittered in order to avoid a \"thundering herd\".
- persist_result (Optional[bool], default None): An optional toggle indicating whether the result of this task should be persisted to result storage. Defaults to None, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.
- result_storage (Optional[ResultStorage], default None): An optional block to use to persist the result of this task. Defaults to the value set in the flow the task is called in.
- result_storage_key (Optional[str], default None): An optional key to store the result in storage at when persisted. Defaults to a unique identifier.
- result_serializer (Optional[ResultSerializer], default None): An optional serializer to use to serialize the result of this task for persistence. Defaults to the value set in the flow the task is called in.
- timeout_seconds (Union[int, float], default None): An optional number of seconds indicating a maximum runtime for the task. If the task exceeds this runtime, it will be marked as failed.
- log_prints (Optional[bool], default None): If set, print statements in the task will be redirected to the Prefect logger for the task run. Defaults to None, which indicates that the value from the flow should be used.
- refresh_cache (Optional[bool], default None): If set, cached results for the cache key are not used. Defaults to None, which indicates that a cached result from a previous execution with matching cache key is used.
- on_failure (Optional[List[Callable[[Task, TaskRun, State], None]]], default None): An optional list of callables to run when the task enters a failed state.
- on_completion (Optional[List[Callable[[Task, TaskRun, State], None]]], default None): An optional list of callables to run when the task enters a completed state.
- retry_condition_fn (Optional[Callable[[Task, TaskRun, State], bool]], default None): An optional callable run when a task run returns a Failed state. Should return True if the task should continue to its retry policy (e.g. retries=3), and False if the task should end as failed. Defaults to None, indicating the task should always continue to its retry policy.
- viz_return_value (Any, default None): An optional value to return when the task dependency tree is visualized.

Returns:

A callable Task object which, when called, will submit the task for execution.

Examples:

Define a simple task

>>> @task\n>>> def add(x, y):\n>>>     return x + y\n

Define an async task

>>> @task\n>>> async def add(x, y):\n>>>     return x + y\n

Define a task with tags and a description

>>> @task(tags={\"a\", \"b\"}, description=\"This task is empty but it's my first!\")\n>>> def my_task():\n>>>     pass\n

Define a task with a custom name

>>> @task(name=\"The Ultimate Task\")\n>>> def my_task():\n>>>     pass\n

Define a task that retries 3 times with a 5 second delay between attempts

>>> from random import randint\n>>>\n>>> @task(retries=3, retry_delay_seconds=5)\n>>> def my_task():\n>>>     x = randint(0, 5)\n>>>     if x >= 3:  # Make a task that fails sometimes\n>>>         raise ValueError(\"Retry me please!\")\n>>>     return x\n

Define a task that is cached for a day based on its inputs

>>> from prefect.tasks import task_input_hash\n>>> from datetime import timedelta\n>>>\n>>> @task(cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1))\n>>> def my_task():\n>>>     return \"hello\"\n
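Define a task with a custom retry condition (an illustrative sketch, not from the original docstring; it assumes state.result() re-raises the underlying exception for a failed run)

>>> def retry_on_value_error(task, task_run, state) -> bool:
>>>     # Continue to the retry policy only when the failure was a ValueError
>>>     try:
>>>         state.result()
>>>     except ValueError:
>>>         return True
>>>     except Exception:
>>>         return False
>>>     return False
>>>
>>> @task(retries=3, retry_delay_seconds=2, retry_condition_fn=retry_on_value_error)
>>> def my_task():
>>>     pass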
Source code in prefect/tasks.py
def task(\n    __fn=None,\n    *,\n    name: str = None,\n    description: str = None,\n    tags: Iterable[str] = None,\n    version: str = None,\n    cache_key_fn: Callable[[\"TaskRunContext\", Dict[str, Any]], Optional[str]] = None,\n    cache_expiration: datetime.timedelta = None,\n    task_run_name: Optional[Union[Callable[[], str], str]] = None,\n    retries: int = None,\n    retry_delay_seconds: Union[\n        float,\n        int,\n        List[float],\n        Callable[[int], List[float]],\n    ] = None,\n    retry_jitter_factor: Optional[float] = None,\n    persist_result: Optional[bool] = None,\n    result_storage: Optional[ResultStorage] = None,\n    result_storage_key: Optional[str] = None,\n    result_serializer: Optional[ResultSerializer] = None,\n    cache_result_in_memory: bool = True,\n    timeout_seconds: Union[int, float] = None,\n    log_prints: Optional[bool] = None,\n    refresh_cache: Optional[bool] = None,\n    on_completion: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n    on_failure: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n    retry_condition_fn: Optional[Callable[[\"Task\", TaskRun, State], bool]] = None,\n    viz_return_value: Any = None,\n):\n    \"\"\"\n    Decorator to designate a function as a task in a Prefect workflow.\n\n    This decorator may be used for asynchronous or synchronous functions.\n\n    Args:\n        name: An optional name for the task; if not provided, the name will be inferred\n            from the given function.\n        description: An optional string description for the task.\n        tags: An optional set of tags to be associated with runs of this task. These\n            tags are combined with any tags defined by a `prefect.tags` context at\n            task runtime.\n        version: An optional string specifying the version of this task definition\n        cache_key_fn: An optional callable that, given the task run context and call\n            parameters, generates a string key; if the key matches a previous completed\n            state, that state result will be restored instead of running the task again.\n        cache_expiration: An optional amount of time indicating how long cached states\n            for this task should be restorable; if not provided, cached states will\n            never expire.\n        task_run_name: An optional name to distinguish runs of this task; this name can be provided\n            as a string template with the task's keyword arguments as variables,\n            or a function that returns a string.\n        retries: An optional number of times to retry on task run failure\n        retry_delay_seconds: Optionally configures how long to wait before retrying the\n            task after failure. This is only applicable if `retries` is nonzero. This\n            setting can either be a number of seconds, a list of retry delays, or a\n            callable that, given the total number of retries, generates a list of retry\n            delays. 
If a number of seconds, that delay will be applied to all retries.\n            If a list, each retry will wait for the corresponding delay before retrying.\n            When passing a callable or a list, the number of configured retry delays\n            cannot exceed 50.\n        retry_jitter_factor: An optional factor that defines the factor to which a retry\n            can be jittered in order to avoid a \"thundering herd\".\n        persist_result: An optional toggle indicating whether the result of this task\n            should be persisted to result storage. Defaults to `None`, which indicates\n            that Prefect should choose whether the result should be persisted depending on\n            the features being used.\n        result_storage: An optional block to use to persist the result of this task.\n            Defaults to the value set in the flow the task is called in.\n        result_storage_key: An optional key to store the result in storage at when persisted.\n            Defaults to a unique identifier.\n        result_serializer: An optional serializer to use to serialize the result of this\n            task for persistence. Defaults to the value set in the flow the task is\n            called in.\n        timeout_seconds: An optional number of seconds indicating a maximum runtime for\n            the task. If the task exceeds this runtime, it will be marked as failed.\n        log_prints: If set, `print` statements in the task will be redirected to the\n            Prefect logger for the task run. Defaults to `None`, which indicates\n            that the value from the flow should be used.\n        refresh_cache: If set, cached results for the cache key are not used.\n            Defaults to `None`, which indicates that a cached result from a previous\n            execution with matching cache key is used.\n        on_failure: An optional list of callables to run when the task enters a failed state.\n        on_completion: An optional list of callables to run when the task enters a completed state.\n        retry_condition_fn: An optional callable run when a task run returns a Failed state. Should\n            return `True` if the task should continue to its retry policy (e.g. `retries=3`), and `False` if the task\n            should end as failed. 
Defaults to `None`, indicating the task should always continue\n            to its retry policy.\n        viz_return_value: An optional value to return when the task dependency tree is visualized.\n\n    Returns:\n        A callable `Task` object which, when called, will submit the task for execution.\n\n    Examples:\n        Define a simple task\n\n        >>> @task\n        >>> def add(x, y):\n        >>>     return x + y\n\n        Define an async task\n\n        >>> @task\n        >>> async def add(x, y):\n        >>>     return x + y\n\n        Define a task with tags and a description\n\n        >>> @task(tags={\"a\", \"b\"}, description=\"This task is empty but its my first!\")\n        >>> def my_task():\n        >>>     pass\n\n        Define a task with a custom name\n\n        >>> @task(name=\"The Ultimate Task\")\n        >>> def my_task():\n        >>>     pass\n\n        Define a task that retries 3 times with a 5 second delay between attempts\n\n        >>> from random import randint\n        >>>\n        >>> @task(retries=3, retry_delay_seconds=5)\n        >>> def my_task():\n        >>>     x = randint(0, 5)\n        >>>     if x >= 3:  # Make a task that fails sometimes\n        >>>         raise ValueError(\"Retry me please!\")\n        >>>     return x\n\n        Define a task that is cached for a day based on its inputs\n\n        >>> from prefect.tasks import task_input_hash\n        >>> from datetime import timedelta\n        >>>\n        >>> @task(cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1))\n        >>> def my_task():\n        >>>     return \"hello\"\n    \"\"\"\n\n    if __fn:\n        return cast(\n            Task[P, R],\n            Task(\n                fn=__fn,\n                name=name,\n                description=description,\n                tags=tags,\n                version=version,\n                cache_key_fn=cache_key_fn,\n                cache_expiration=cache_expiration,\n                task_run_name=task_run_name,\n                retries=retries,\n                retry_delay_seconds=retry_delay_seconds,\n                retry_jitter_factor=retry_jitter_factor,\n                persist_result=persist_result,\n                result_storage=result_storage,\n                result_storage_key=result_storage_key,\n                result_serializer=result_serializer,\n                cache_result_in_memory=cache_result_in_memory,\n                timeout_seconds=timeout_seconds,\n                log_prints=log_prints,\n                refresh_cache=refresh_cache,\n                on_completion=on_completion,\n                on_failure=on_failure,\n                retry_condition_fn=retry_condition_fn,\n                viz_return_value=viz_return_value,\n            ),\n        )\n    else:\n        return cast(\n            Callable[[Callable[P, R]], Task[P, R]],\n            partial(\n                task,\n                name=name,\n                description=description,\n                tags=tags,\n                version=version,\n                cache_key_fn=cache_key_fn,\n                cache_expiration=cache_expiration,\n                task_run_name=task_run_name,\n                retries=retries,\n                retry_delay_seconds=retry_delay_seconds,\n                retry_jitter_factor=retry_jitter_factor,\n                persist_result=persist_result,\n                result_storage=result_storage,\n                result_storage_key=result_storage_key,\n                
result_serializer=result_serializer,\n                cache_result_in_memory=cache_result_in_memory,\n                timeout_seconds=timeout_seconds,\n                log_prints=log_prints,\n                refresh_cache=refresh_cache,\n                on_completion=on_completion,\n                on_failure=on_failure,\n                retry_condition_fn=retry_condition_fn,\n                viz_return_value=viz_return_value,\n            ),\n        )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.task_input_hash","title":"task_input_hash","text":"

A task cache key implementation which hashes all inputs to the task using a JSON or cloudpickle serializer. If any arguments are not JSON serializable, the pickle serializer is used as a fallback. If cloudpickle fails, this will return a null key indicating that a cache key could not be generated for the given inputs.

Parameters:

- context (TaskRunContext, required): the active TaskRunContext.
- arguments (Dict[str, Any], required): a dictionary of arguments to be passed to the underlying task.

Returns:

- Optional[str]: a string hash if hashing succeeded, else None.
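Typical usage (an illustration mirroring the caching example in the task decorator docs above) passes this function as a task's cache_key_fn:

>>> from datetime import timedelta
>>> from prefect import task
>>> from prefect.tasks import task_input_hash
>>>
>>> @task(cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1))
>>> def my_task(x, y):
>>>     return x + y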

Source code in prefect/tasks.py
def task_input_hash(\n    context: \"TaskRunContext\", arguments: Dict[str, Any]\n) -> Optional[str]:\n    \"\"\"\n    A task cache key implementation which hashes all inputs to the task using a JSON or\n    cloudpickle serializer. If any arguments are not JSON serializable, the pickle\n    serializer is used as a fallback. If cloudpickle fails, this will return a null key\n    indicating that a cache key could not be generated for the given inputs.\n\n    Arguments:\n        context: the active `TaskRunContext`\n        arguments: a dictionary of arguments to be passed to the underlying task\n\n    Returns:\n        a string hash if hashing succeeded, else `None`\n    \"\"\"\n    return hash_objects(\n        # We use the task key to get the qualified name for the task and include the\n        # task functions `co_code` bytes to avoid caching when the underlying function\n        # changes\n        context.task.task_key,\n        context.task.fn.__code__.co_code.hex(),\n        arguments,\n    )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/testing/","title":"prefect.testing","text":"","tags":["Python API","testing"]},{"location":"api-ref/prefect/testing/#prefect.testing","title":"prefect.testing","text":"","tags":["Python API","testing"]},{"location":"api-ref/prefect/variables/","title":"prefect.variables","text":"","tags":["Python API","variables"]},{"location":"api-ref/prefect/variables/#prefect.variables","title":"prefect.variables","text":"","tags":["Python API","variables"]},{"location":"api-ref/prefect/variables/#prefect.variables.Variable","title":"Variable","text":"

Bases: VariableCreate

Variables are named, mutable string values, much like environment variables. Variables are scoped to a Prefect server instance or a single workspace in Prefect Cloud. https://docs.prefect.io/latest/concepts/variables/

Parameters:

    name: A string identifying the variable. (required)
    value: A string that is the value of the variable. (required)
    tags: An optional list of strings to associate with the variable. (required)
Source code in prefect/variables.py
class Variable(VariableRequest):\n    \"\"\"\n    Variables are named, mutable string values, much like environment variables. Variables are scoped to a Prefect server instance or a single workspace in Prefect Cloud.\n    https://docs.prefect.io/latest/concepts/variables/\n\n    Arguments:\n        name: A string identifying the variable.\n        value: A string that is the value of the variable.\n        tags: An optional list of strings to associate with the variable.\n    \"\"\"\n\n    @classmethod\n    @sync_compatible\n    async def set(\n        cls,\n        name: str,\n        value: str,\n        tags: Optional[List[str]] = None,\n        overwrite: bool = False,\n    ) -> Optional[str]:\n        \"\"\"\n        Sets a new variable. If one exists with the same name, user must pass `overwrite=True`\n        ```\n            from prefect.variables import Variable\n\n            @flow\n            def my_flow():\n                var = Variable.set(name=\"my_var\",value=\"test_value\", tags=[\"hi\", \"there\"], overwrite=True)\n        ```\n        or\n        ```\n            from prefect.variables import Variable\n\n            @flow\n            async def my_flow():\n                var = await Variable.set(name=\"my_var\",value=\"test_value\", tags=[\"hi\", \"there\"], overwrite=True)\n        ```\n        \"\"\"\n        client, _ = get_or_create_client()\n        variable = await client.read_variable_by_name(name)\n        var_dict = {\"name\": name, \"value\": value}\n        var_dict[\"tags\"] = tags or []\n        if variable:\n            if not overwrite:\n                raise ValueError(\n                    \"You are attempting to save a variable with a name that is already in use. If you would like to overwrite the values that are saved, then call .set with `overwrite=True`.\"\n                )\n            var = VariableUpdateRequest(**var_dict)\n            await client.update_variable(variable=var)\n            variable = await client.read_variable_by_name(name)\n        else:\n            var = VariableRequest(**var_dict)\n            variable = await client.create_variable(variable=var)\n\n        return variable if variable else None\n\n    @classmethod\n    @sync_compatible\n    async def get(cls, name: str, default: Optional[str] = None) -> Optional[str]:\n        \"\"\"\n        Get a variable by name. If doesn't exist return the default.\n        ```\n            from prefect.variables import Variable\n\n            @flow\n            def my_flow():\n                var = Variable.get(\"my_var\")\n        ```\n        or\n        ```\n            from prefect.variables import Variable\n\n            @flow\n            async def my_flow():\n                var = await Variable.get(\"my_var\")\n        ```\n        \"\"\"\n        client, _ = get_or_create_client()\n        variable = await client.read_variable_by_name(name)\n        return variable if variable else default\n
","tags":["Python API","variables"]},{"location":"api-ref/prefect/variables/#prefect.variables.Variable.get","title":"get async classmethod","text":"

Get a variable by name. If it doesn't exist, return the default.

    from prefect.variables import Variable\n\n    @flow\n    def my_flow():\n        var = Variable.get(\"my_var\")\n
or
    from prefect.variables import Variable\n\n    @flow\n    async def my_flow():\n        var = await Variable.get(\"my_var\")\n

Source code in prefect/variables.py
@classmethod\n@sync_compatible\nasync def get(cls, name: str, default: Optional[str] = None) -> Optional[str]:\n    \"\"\"\n    Get a variable by name. If doesn't exist return the default.\n    ```\n        from prefect.variables import Variable\n\n        @flow\n        def my_flow():\n            var = Variable.get(\"my_var\")\n    ```\n    or\n    ```\n        from prefect.variables import Variable\n\n        @flow\n        async def my_flow():\n            var = await Variable.get(\"my_var\")\n    ```\n    \"\"\"\n    client, _ = get_or_create_client()\n    variable = await client.read_variable_by_name(name)\n    return variable if variable else default\n
","tags":["Python API","variables"]},{"location":"api-ref/prefect/variables/#prefect.variables.Variable.set","title":"set async classmethod","text":"

Sets a new variable. If one exists with the same name, the user must pass overwrite=True.

    from prefect.variables import Variable\n\n    @flow\n    def my_flow():\n        var = Variable.set(name=\"my_var\",value=\"test_value\", tags=[\"hi\", \"there\"], overwrite=True)\n
or
    from prefect.variables import Variable\n\n    @flow\n    async def my_flow():\n        var = await Variable.set(name=\"my_var\",value=\"test_value\", tags=[\"hi\", \"there\"], overwrite=True)\n

Source code in prefect/variables.py
@classmethod\n@sync_compatible\nasync def set(\n    cls,\n    name: str,\n    value: str,\n    tags: Optional[List[str]] = None,\n    overwrite: bool = False,\n) -> Optional[str]:\n    \"\"\"\n    Sets a new variable. If one exists with the same name, user must pass `overwrite=True`\n    ```\n        from prefect.variables import Variable\n\n        @flow\n        def my_flow():\n            var = Variable.set(name=\"my_var\",value=\"test_value\", tags=[\"hi\", \"there\"], overwrite=True)\n    ```\n    or\n    ```\n        from prefect.variables import Variable\n\n        @flow\n        async def my_flow():\n            var = await Variable.set(name=\"my_var\",value=\"test_value\", tags=[\"hi\", \"there\"], overwrite=True)\n    ```\n    \"\"\"\n    client, _ = get_or_create_client()\n    variable = await client.read_variable_by_name(name)\n    var_dict = {\"name\": name, \"value\": value}\n    var_dict[\"tags\"] = tags or []\n    if variable:\n        if not overwrite:\n            raise ValueError(\n                \"You are attempting to save a variable with a name that is already in use. If you would like to overwrite the values that are saved, then call .set with `overwrite=True`.\"\n            )\n        var = VariableUpdateRequest(**var_dict)\n        await client.update_variable(variable=var)\n        variable = await client.read_variable_by_name(name)\n    else:\n        var = VariableRequest(**var_dict)\n        variable = await client.create_variable(variable=var)\n\n    return variable if variable else None\n
","tags":["Python API","variables"]},{"location":"api-ref/prefect/variables/#prefect.variables.get","title":"get async","text":"

Get a variable by name. If it doesn't exist, return the default.

    from prefect import variables\n\n    @flow\n    def my_flow():\n        var = variables.get(\"my_var\")\n
or
    from prefect import variables\n\n    @flow\n    async def my_flow():\n        var = await variables.get(\"my_var\")\n

Source code in prefect/variables.py
@deprecated_callable(start_date=\"Apr 2024\")\n@sync_compatible\nasync def get(name: str, default: Optional[str] = None) -> Optional[str]:\n    \"\"\"\n    Get a variable by name. If doesn't exist return the default.\n    ```\n        from prefect import variables\n\n        @flow\n        def my_flow():\n            var = variables.get(\"my_var\")\n    ```\n    or\n    ```\n        from prefect import variables\n\n        @flow\n        async def my_flow():\n            var = await variables.get(\"my_var\")\n    ```\n    \"\"\"\n    variable = await Variable.get(name)\n    return variable.value if variable else default\n
","tags":["Python API","variables"]},{"location":"api-ref/prefect/blocks/core/","title":"core","text":"","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core","title":"prefect.blocks.core","text":"","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block","title":"Block","text":"

Bases: BaseModel, ABC

A base class for implementing a block that wraps an external service.

This class can be defined with an arbitrary set of fields and methods, and couples business logic with data contained in a block document. _block_document_name, _block_document_id, _block_schema_id, and _block_type_id are reserved by Prefect as Block metadata fields, but otherwise a Block can implement arbitrary logic. Blocks can be instantiated without populating these metadata fields, but can only be used interactively, not with the Prefect API.

Instead of the __init__ method, a block implementation allows the definition of a block_initialization method that is called after initialization.
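A minimal illustrative sketch (the GreetingBlock class and its fields are hypothetical) showing block_initialization used in place of overriding __init__:

```python
from prefect.blocks.core import Block

class GreetingBlock(Block):
    # Arbitrary data fields; Prefect stores these values in the block document.
    message: str
    shout: bool = False

    def block_initialization(self) -> None:
        # Runs after normal initialization instead of overriding __init__.
        if self.shout:
            self.message = self.message.upper()
```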

Source code in prefect/blocks/core.py
@register_base_type\n@instrument_method_calls_on_class_instances\nclass Block(BaseModel, ABC):\n    \"\"\"\n    A base class for implementing a block that wraps an external service.\n\n    This class can be defined with an arbitrary set of fields and methods, and\n    couples business logic with data contained in an block document.\n    `_block_document_name`, `_block_document_id`, `_block_schema_id`, and\n    `_block_type_id` are reserved by Prefect as Block metadata fields, but\n    otherwise a Block can implement arbitrary logic. Blocks can be instantiated\n    without populating these metadata fields, but can only be used interactively,\n    not with the Prefect API.\n\n    Instead of the __init__ method, a block implementation allows the\n    definition of a `block_initialization` method that is called after\n    initialization.\n    \"\"\"\n\n    class Config:\n        extra = \"allow\"\n\n        json_encoders = {SecretDict: lambda v: v.dict()}\n\n        @staticmethod\n        def schema_extra(schema: Dict[str, Any], model: Type[\"Block\"]):\n            \"\"\"\n            Customizes Pydantic's schema generation feature to add blocks related information.\n            \"\"\"\n            schema[\"block_type_slug\"] = model.get_block_type_slug()\n            # Ensures args and code examples aren't included in the schema\n            description = model.get_description()\n            if description:\n                schema[\"description\"] = description\n            else:\n                # Prevent the description of the base class from being included in the schema\n                schema.pop(\"description\", None)\n\n            # create a list of secret field names\n            # secret fields include both top-level keys and dot-delimited nested secret keys\n            # A wildcard (*) means that all fields under a given key are secret.\n            # for example: [\"x\", \"y\", \"z.*\", \"child.a\"]\n            # means the top-level keys \"x\" and \"y\", all keys under \"z\", and the key \"a\" of a block\n            # nested under the \"child\" key are all secret. 
There is no limit to nesting.\n            secrets = schema[\"secret_fields\"] = []\n            for field in model.__fields__.values():\n                _collect_secret_fields(field.name, field.type_, secrets)\n\n            # create block schema references\n            refs = schema[\"block_schema_references\"] = {}\n            for field in model.__fields__.values():\n                if Block.is_block_class(field.type_):\n                    refs[field.name] = field.type_._to_block_schema_reference_dict()\n                if get_origin(field.type_) is Union:\n                    for type_ in get_args(field.type_):\n                        if Block.is_block_class(type_):\n                            if isinstance(refs.get(field.name), list):\n                                refs[field.name].append(\n                                    type_._to_block_schema_reference_dict()\n                                )\n                            elif isinstance(refs.get(field.name), dict):\n                                refs[field.name] = [\n                                    refs[field.name],\n                                    type_._to_block_schema_reference_dict(),\n                                ]\n                            else:\n                                refs[\n                                    field.name\n                                ] = type_._to_block_schema_reference_dict()\n\n    def __init__(self, *args, **kwargs):\n        super().__init__(*args, **kwargs)\n        self.block_initialization()\n\n    def __str__(self) -> str:\n        return self.__repr__()\n\n    def __repr_args__(self):\n        repr_args = super().__repr_args__()\n        data_keys = self.schema()[\"properties\"].keys()\n        return [\n            (key, value) for key, value in repr_args if key is None or key in data_keys\n        ]\n\n    def block_initialization(self) -> None:\n        pass\n\n    # -- private class variables\n    # set by the class itself\n\n    # Attribute to customize the name of the block type created\n    # when the block is registered with the API. 
If not set, block\n    # type name will default to the class name.\n    _block_type_name: Optional[str] = None\n    _block_type_slug: Optional[str] = None\n\n    # Attributes used to set properties on a block type when registered\n    # with the API.\n    _logo_url: Optional[HttpUrl] = None\n    _documentation_url: Optional[HttpUrl] = None\n    _description: Optional[str] = None\n    _code_example: Optional[str] = None\n\n    # -- private instance variables\n    # these are set when blocks are loaded from the API\n    _block_type_id: Optional[UUID] = None\n    _block_schema_id: Optional[UUID] = None\n    _block_schema_capabilities: Optional[List[str]] = None\n    _block_schema_version: Optional[str] = None\n    _block_document_id: Optional[UUID] = None\n    _block_document_name: Optional[str] = None\n    _is_anonymous: Optional[bool] = None\n\n    # Exclude `save` as it uses the `sync_compatible` decorator and needs to be\n    # decorated directly.\n    _events_excluded_methods = [\"block_initialization\", \"save\", \"dict\"]\n\n    @classmethod\n    def __dispatch_key__(cls):\n        if cls.__name__ == \"Block\":\n            return None  # The base class is abstract\n        return block_schema_to_key(cls._to_block_schema())\n\n    @classmethod\n    def get_block_type_name(cls):\n        return cls._block_type_name or cls.__name__\n\n    @classmethod\n    def get_block_type_slug(cls):\n        return slugify(cls._block_type_slug or cls.get_block_type_name())\n\n    @classmethod\n    def get_block_capabilities(cls) -> FrozenSet[str]:\n        \"\"\"\n        Returns the block capabilities for this Block. Recursively collects all block\n        capabilities of all parent classes into a single frozenset.\n        \"\"\"\n        return frozenset(\n            {\n                c\n                for base in (cls,) + cls.__mro__\n                for c in getattr(base, \"_block_schema_capabilities\", []) or []\n            }\n        )\n\n    @classmethod\n    def _get_current_package_version(cls):\n        current_module = inspect.getmodule(cls)\n        if current_module:\n            top_level_module = sys.modules[\n                current_module.__name__.split(\".\")[0] or \"__main__\"\n            ]\n            try:\n                version = Version(top_level_module.__version__)\n                # Strips off any local version information\n                return version.base_version\n            except (AttributeError, InvalidVersion):\n                # Module does not have a __version__ attribute or is not a parsable format\n                pass\n        return DEFAULT_BLOCK_SCHEMA_VERSION\n\n    @classmethod\n    def get_block_schema_version(cls) -> str:\n        return cls._block_schema_version or cls._get_current_package_version()\n\n    @classmethod\n    def _to_block_schema_reference_dict(cls):\n        return dict(\n            block_type_slug=cls.get_block_type_slug(),\n            block_schema_checksum=cls._calculate_schema_checksum(),\n        )\n\n    @classmethod\n    def _calculate_schema_checksum(\n        cls, block_schema_fields: Optional[Dict[str, Any]] = None\n    ):\n        \"\"\"\n        Generates a unique hash for the underlying schema of block.\n\n        Args:\n            block_schema_fields: Dictionary detailing block schema fields to generate a\n                checksum for. 
The fields of the current class is used if this parameter\n                is not provided.\n\n        Returns:\n            str: The calculated checksum prefixed with the hashing algorithm used.\n        \"\"\"\n        block_schema_fields = (\n            cls.schema() if block_schema_fields is None else block_schema_fields\n        )\n        fields_for_checksum = remove_nested_keys([\"secret_fields\"], block_schema_fields)\n        if fields_for_checksum.get(\"definitions\"):\n            non_block_definitions = _get_non_block_reference_definitions(\n                fields_for_checksum, fields_for_checksum[\"definitions\"]\n            )\n            if non_block_definitions:\n                fields_for_checksum[\"definitions\"] = non_block_definitions\n            else:\n                # Pop off definitions entirely instead of empty dict for consistency\n                # with the OpenAPI specification\n                fields_for_checksum.pop(\"definitions\")\n        checksum = hash_objects(fields_for_checksum, hash_algo=hashlib.sha256)\n        if checksum is None:\n            raise ValueError(\"Unable to compute checksum for block schema\")\n        else:\n            return f\"sha256:{checksum}\"\n\n    def _to_block_document(\n        self,\n        name: Optional[str] = None,\n        block_schema_id: Optional[UUID] = None,\n        block_type_id: Optional[UUID] = None,\n        is_anonymous: Optional[bool] = None,\n    ) -> BlockDocument:\n        \"\"\"\n        Creates the corresponding block document based on the data stored in a block.\n        The corresponding block document name, block type ID, and block schema ID must\n        either be passed into the method or configured on the block.\n\n        Args:\n            name: The name of the created block document. Not required if anonymous.\n            block_schema_id: UUID of the corresponding block schema.\n            block_type_id: UUID of the corresponding block type.\n            is_anonymous: if True, an anonymous block is created. Anonymous\n                blocks are not displayed in the UI and used primarily for system\n                operations and features that need to automatically generate blocks.\n\n        Returns:\n            BlockDocument: Corresponding block document\n                populated with the block's configured data.\n        \"\"\"\n        if is_anonymous is None:\n            is_anonymous = self._is_anonymous or False\n\n        # name must be present if not anonymous\n        if not is_anonymous and not name and not self._block_document_name:\n            raise ValueError(\"No name provided, either as an argument or on the block.\")\n\n        if not block_schema_id and not self._block_schema_id:\n            raise ValueError(\n                \"No block schema ID provided, either as an argument or on the block.\"\n            )\n        if not block_type_id and not self._block_type_id:\n            raise ValueError(\n                \"No block type ID provided, either as an argument or on the block.\"\n            )\n\n        # The keys passed to `include` must NOT be aliases, else some items will be missed\n        # i.e. 
must do `self.schema_` vs `self.schema` to get a `schema_ = Field(alias=\"schema\")`\n        # reported from https://github.com/PrefectHQ/prefect-dbt/issues/54\n        data_keys = self.schema(by_alias=False)[\"properties\"].keys()\n\n        # `block_document_data`` must return the aliased version for it to show in the UI\n        block_document_data = self.dict(by_alias=True, include=data_keys)\n\n        # Iterate through and find blocks that already have saved block documents to\n        # create references to those saved block documents.\n        for key in data_keys:\n            field_value = getattr(self, key)\n            if (\n                isinstance(field_value, Block)\n                and field_value._block_document_id is not None\n            ):\n                block_document_data[key] = {\n                    \"$ref\": {\"block_document_id\": field_value._block_document_id}\n                }\n\n        return BlockDocument(\n            id=self._block_document_id or uuid4(),\n            name=(name or self._block_document_name) if not is_anonymous else None,\n            block_schema_id=block_schema_id or self._block_schema_id,\n            block_type_id=block_type_id or self._block_type_id,\n            data=block_document_data,\n            block_schema=self._to_block_schema(\n                block_type_id=block_type_id or self._block_type_id,\n            ),\n            block_type=self._to_block_type(),\n            is_anonymous=is_anonymous,\n        )\n\n    @classmethod\n    def _to_block_schema(cls, block_type_id: Optional[UUID] = None) -> BlockSchema:\n        \"\"\"\n        Creates the corresponding block schema of the block.\n        The corresponding block_type_id must either be passed into\n        the method or configured on the block.\n\n        Args:\n            block_type_id: UUID of the corresponding block type.\n\n        Returns:\n            BlockSchema: The corresponding block schema.\n        \"\"\"\n        fields = cls.schema()\n        return BlockSchema(\n            id=cls._block_schema_id if cls._block_schema_id is not None else uuid4(),\n            checksum=cls._calculate_schema_checksum(),\n            fields=fields,\n            block_type_id=block_type_id or cls._block_type_id,\n            block_type=cls._to_block_type(),\n            capabilities=list(cls.get_block_capabilities()),\n            version=cls.get_block_schema_version(),\n        )\n\n    @classmethod\n    def _parse_docstring(cls) -> List[DocstringSection]:\n        \"\"\"\n        Parses the docstring into list of DocstringSection objects.\n        Helper method used primarily to suppress irrelevant logs, e.g.\n        `<module>:11: No type or annotation for parameter 'write_json'`\n        because griffe is unable to parse the types from pydantic.BaseModel.\n        \"\"\"\n        with disable_logger(\"griffe.docstrings.google\"):\n            with disable_logger(\"griffe.agents.nodes\"):\n                docstring = Docstring(cls.__doc__)\n                parsed = parse(docstring, Parser.google)\n        return parsed\n\n    @classmethod\n    def get_description(cls) -> Optional[str]:\n        \"\"\"\n        Returns the description for the current block. 
Attempts to parse\n        description from class docstring if an override is not defined.\n        \"\"\"\n        description = cls._description\n        # If no description override has been provided, find the first text section\n        # and use that as the description\n        if description is None and cls.__doc__ is not None:\n            parsed = cls._parse_docstring()\n            parsed_description = next(\n                (\n                    section.as_dict().get(\"value\")\n                    for section in parsed\n                    if section.kind == DocstringSectionKind.text\n                ),\n                None,\n            )\n            if isinstance(parsed_description, str):\n                description = parsed_description.strip()\n        return description\n\n    @classmethod\n    def get_code_example(cls) -> Optional[str]:\n        \"\"\"\n        Returns the code example for the given block. Attempts to parse\n        code example from the class docstring if an override is not provided.\n        \"\"\"\n        code_example = (\n            dedent(cls._code_example) if cls._code_example is not None else None\n        )\n        # If no code example override has been provided, attempt to find a examples\n        # section or an admonition with the annotation \"example\" and use that as the\n        # code example\n        if code_example is None and cls.__doc__ is not None:\n            parsed = cls._parse_docstring()\n            for section in parsed:\n                # Section kind will be \"examples\" if Examples section heading is used.\n                if section.kind == DocstringSectionKind.examples:\n                    # Examples sections are made up of smaller sections that need to be\n                    # joined with newlines. 
Smaller sections are represented as tuples\n                    # with shape (DocstringSectionKind, str)\n                    code_example = \"\\n\".join(\n                        (part[1] for part in section.as_dict().get(\"value\", []))\n                    )\n                    break\n                # Section kind will be \"admonition\" if Example section heading is used.\n                if section.kind == DocstringSectionKind.admonition:\n                    value = section.as_dict().get(\"value\", {})\n                    if value.get(\"annotation\") == \"example\":\n                        code_example = value.get(\"description\")\n                        break\n\n        if code_example is None:\n            # If no code example has been specified or extracted from the class\n            # docstring, generate a sensible default\n            code_example = cls._generate_code_example()\n\n        return code_example\n\n    @classmethod\n    def _generate_code_example(cls) -> str:\n        \"\"\"Generates a default code example for the current class\"\"\"\n        qualified_name = to_qualified_name(cls)\n        module_str = \".\".join(qualified_name.split(\".\")[:-1])\n        class_name = cls.__name__\n        block_variable_name = f'{cls.get_block_type_slug().replace(\"-\", \"_\")}_block'\n\n        return dedent(\n            f\"\"\"\\\n        ```python\n        from {module_str} import {class_name}\n\n        {block_variable_name} = {class_name}.load(\"BLOCK_NAME\")\n        ```\"\"\"\n        )\n\n    @classmethod\n    def _to_block_type(cls) -> BlockType:\n        \"\"\"\n        Creates the corresponding block type of the block.\n\n        Returns:\n            BlockType: The corresponding block type.\n        \"\"\"\n        return BlockType(\n            id=cls._block_type_id or uuid4(),\n            slug=cls.get_block_type_slug(),\n            name=cls.get_block_type_name(),\n            logo_url=cls._logo_url,\n            documentation_url=cls._documentation_url,\n            description=cls.get_description(),\n            code_example=cls.get_code_example(),\n        )\n\n    @classmethod\n    def _from_block_document(cls, block_document: BlockDocument):\n        \"\"\"\n        Instantiates a block from a given block document. 
The corresponding block class\n        will be looked up in the block registry based on the corresponding block schema\n        of the provided block document.\n\n        Args:\n            block_document: The block document used to instantiate a block.\n\n        Raises:\n            ValueError: If the provided block document doesn't have a corresponding block\n                schema.\n\n        Returns:\n            Block: Hydrated block with data from block document.\n        \"\"\"\n        if block_document.block_schema is None:\n            raise ValueError(\n                \"Unable to determine block schema for provided block document\"\n            )\n\n        block_cls = (\n            cls\n            if cls.__name__ != \"Block\"\n            # Look up the block class by dispatch\n            else cls.get_block_class_from_schema(block_document.block_schema)\n        )\n\n        block_cls = instrument_method_calls_on_class_instances(block_cls)\n\n        block = block_cls.parse_obj(block_document.data)\n        block._block_document_id = block_document.id\n        block.__class__._block_schema_id = block_document.block_schema_id\n        block.__class__._block_type_id = block_document.block_type_id\n        block._block_document_name = block_document.name\n        block._is_anonymous = block_document.is_anonymous\n        block._define_metadata_on_nested_blocks(\n            block_document.block_document_references\n        )\n\n        # Due to the way blocks are loaded we can't directly instrument the\n        # `load` method and have the data be about the block document. Instead\n        # this will emit a proxy event for the load method so that block\n        # document data can be included instead of the event being about an\n        # 'anonymous' block.\n\n        emit_instance_method_called_event(block, \"load\", successful=True)\n\n        return block\n\n    def _event_kind(self) -> str:\n        return f\"prefect.block.{self.get_block_type_slug()}\"\n\n    def _event_method_called_resources(self) -> Optional[ResourceTuple]:\n        if not (self._block_document_id and self._block_document_name):\n            return None\n\n        return (\n            {\n                \"prefect.resource.id\": (\n                    f\"prefect.block-document.{self._block_document_id}\"\n                ),\n                \"prefect.resource.name\": self._block_document_name,\n            },\n            [\n                {\n                    \"prefect.resource.id\": (\n                        f\"prefect.block-type.{self.get_block_type_slug()}\"\n                    ),\n                    \"prefect.resource.role\": \"block-type\",\n                }\n            ],\n        )\n\n    @classmethod\n    def get_block_class_from_schema(cls: Type[Self], schema: BlockSchema) -> Type[Self]:\n        \"\"\"\n        Retrieve the block class implementation given a schema.\n        \"\"\"\n        return cls.get_block_class_from_key(block_schema_to_key(schema))\n\n    @classmethod\n    def get_block_class_from_key(cls: Type[Self], key: str) -> Type[Self]:\n        \"\"\"\n        Retrieve the block class implementation given a key.\n        \"\"\"\n        # Ensure collections are imported and have the opportunity to register types\n        # before looking up the block class\n        prefect.plugins.load_prefect_collections()\n\n        return lookup_type(cls, key)\n\n    def _define_metadata_on_nested_blocks(\n        self, block_document_references: Dict[str, Dict[str, Any]]\n    ):\n   
     \"\"\"\n        Recursively populates metadata fields on nested blocks based on the\n        provided block document references.\n        \"\"\"\n        for item in block_document_references.items():\n            field_name, block_document_reference = item\n            nested_block = getattr(self, field_name)\n            if isinstance(nested_block, Block):\n                nested_block_document_info = block_document_reference.get(\n                    \"block_document\", {}\n                )\n                nested_block._define_metadata_on_nested_blocks(\n                    nested_block_document_info.get(\"block_document_references\", {})\n                )\n                nested_block_document_id = nested_block_document_info.get(\"id\")\n                nested_block._block_document_id = (\n                    UUID(nested_block_document_id) if nested_block_document_id else None\n                )\n                nested_block._block_document_name = nested_block_document_info.get(\n                    \"name\"\n                )\n                nested_block._is_anonymous = nested_block_document_info.get(\n                    \"is_anonymous\"\n                )\n\n    @classmethod\n    @inject_client\n    async def _get_block_document(\n        cls,\n        name: str,\n        client: \"PrefectClient\" = None,\n    ):\n        if cls.__name__ == \"Block\":\n            block_type_slug, block_document_name = name.split(\"/\", 1)\n        else:\n            block_type_slug = cls.get_block_type_slug()\n            block_document_name = name\n\n        try:\n            block_document = await client.read_block_document_by_name(\n                name=block_document_name, block_type_slug=block_type_slug\n            )\n        except prefect.exceptions.ObjectNotFound as e:\n            raise ValueError(\n                f\"Unable to find block document named {block_document_name} for block\"\n                f\" type {block_type_slug}\"\n            ) from e\n\n        return block_document, block_document_name\n\n    @classmethod\n    @sync_compatible\n    @inject_client\n    async def load(\n        cls,\n        name: str,\n        validate: bool = True,\n        client: \"PrefectClient\" = None,\n    ):\n        \"\"\"\n        Retrieves data from the block document with the given name for the block type\n        that corresponds with the current class and returns an instantiated version of\n        the current class with the data stored in the block document.\n\n        If a block document for a given block type is saved with a different schema\n        than the current class calling `load`, a warning will be raised.\n\n        If the current class schema is a subset of the block document schema, the block\n        can be loaded as normal using the default `validate = True`.\n\n        If the current class schema is a superset of the block document schema, `load`\n        must be called with `validate` set to False to prevent a validation error. In\n        this case, the block attributes will default to `None` and must be set manually\n        and saved to a new block document before the block can be used as expected.\n\n        Args:\n            name: The name or slug of the block document. A block document slug is a\n                string with the format <block_type_slug>/<block_document_name>\n            validate: If False, the block document will be loaded without Pydantic\n                validating the block schema. 
This is useful if the block schema has\n                changed client-side since the block document referred to by `name` was saved.\n            client: The client to use to load the block document. If not provided, the\n                default client will be injected.\n\n        Raises:\n            ValueError: If the requested block document is not found.\n\n        Returns:\n            An instance of the current class hydrated with the data stored in the\n            block document with the specified name.\n\n        Examples:\n            Load from a Block subclass with a block document name:\n            ```python\n            class Custom(Block):\n                message: str\n\n            Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n            loaded_block = Custom.load(\"my-custom-message\")\n            ```\n\n            Load from Block with a block document slug:\n            ```python\n            class Custom(Block):\n                message: str\n\n            Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n            loaded_block = Block.load(\"custom/my-custom-message\")\n            ```\n\n            Migrate a block document to a new schema:\n            ```python\n            # original class\n            class Custom(Block):\n                message: str\n\n            Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n            # Updated class with new required field\n            class Custom(Block):\n                message: str\n                number_of_ducks: int\n\n            loaded_block = Custom.load(\"my-custom-message\", validate=False)\n\n            # Prints UserWarning about schema mismatch\n\n            loaded_block.number_of_ducks = 42\n\n            loaded_block.save(\"my-custom-message\", overwrite=True)\n            ```\n        \"\"\"\n        block_document, block_document_name = await cls._get_block_document(name)\n\n        try:\n            return cls._from_block_document(block_document)\n        except ValidationError as e:\n            if not validate:\n                missing_fields = tuple(err[\"loc\"][0] for err in e.errors())\n                missing_block_data = {field: None for field in missing_fields}\n                warnings.warn(\n                    f\"Could not fully load {block_document_name!r} of block type\"\n                    f\" {cls._block_type_slug!r} - this is likely because one or more\"\n                    \" required fields were added to the schema for\"\n                    f\" {cls.__name__!r} that did not exist on the class when this block\"\n                    \" was last saved. Please specify values for new field(s):\"\n                    f\" {listrepr(missing_fields)}, then run\"\n                    f' `{cls.__name__}.save(\"{block_document_name}\", overwrite=True)`,'\n                    \" and load this block again before attempting to use it.\"\n                )\n                return cls.construct(**block_document.data, **missing_block_data)\n            raise RuntimeError(\n                f\"Unable to load {block_document_name!r} of block type\"\n                f\" {cls._block_type_slug!r} due to failed validation. 
To load without\"\n                \" validation, try loading again with `validate=False`.\"\n            ) from e\n\n    @staticmethod\n    def is_block_class(block) -> bool:\n        return _is_subclass(block, Block)\n\n    @classmethod\n    @sync_compatible\n    @inject_client\n    async def register_type_and_schema(cls, client: \"PrefectClient\" = None):\n        \"\"\"\n        Makes block available for configuration with current Prefect API.\n        Recursively registers all nested blocks. Registration is idempotent.\n\n        Args:\n            client: Optional client to use for registering type and schema with the\n                Prefect API. A new client will be created and used if one is not\n                provided.\n        \"\"\"\n        if cls.__name__ == \"Block\":\n            raise InvalidBlockRegistration(\n                \"`register_type_and_schema` should be called on a Block \"\n                \"subclass and not on the Block class directly.\"\n            )\n        if ABC in getattr(cls, \"__bases__\", []):\n            raise InvalidBlockRegistration(\n                \"`register_type_and_schema` should be called on a Block \"\n                \"subclass and not on a Block interface class directly.\"\n            )\n\n        for field in cls.__fields__.values():\n            if Block.is_block_class(field.type_):\n                await field.type_.register_type_and_schema(client=client)\n            if get_origin(field.type_) is Union:\n                for type_ in get_args(field.type_):\n                    if Block.is_block_class(type_):\n                        await type_.register_type_and_schema(client=client)\n\n        try:\n            block_type = await client.read_block_type_by_slug(\n                slug=cls.get_block_type_slug()\n            )\n            cls._block_type_id = block_type.id\n            local_block_type = cls._to_block_type()\n            if _should_update_block_type(\n                local_block_type=local_block_type, server_block_type=block_type\n            ):\n                await client.update_block_type(\n                    block_type_id=block_type.id, block_type=local_block_type\n                )\n        except prefect.exceptions.ObjectNotFound:\n            block_type = await client.create_block_type(block_type=cls._to_block_type())\n            cls._block_type_id = block_type.id\n\n        try:\n            block_schema = await client.read_block_schema_by_checksum(\n                checksum=cls._calculate_schema_checksum(),\n                version=cls.get_block_schema_version(),\n            )\n        except prefect.exceptions.ObjectNotFound:\n            block_schema = await client.create_block_schema(\n                block_schema=cls._to_block_schema(block_type_id=block_type.id)\n            )\n\n        cls._block_schema_id = block_schema.id\n\n    @inject_client\n    async def _save(\n        self,\n        name: Optional[str] = None,\n        is_anonymous: bool = False,\n        overwrite: bool = False,\n        client: \"PrefectClient\" = None,\n    ):\n        \"\"\"\n        Saves the values of a block as a block document with an option to save as an\n        anonymous block document.\n\n        Args:\n            name: User specified name to give saved block document which can later be used to load the\n                block document.\n            is_anonymous: Boolean value specifying whether the block document is anonymous. 
Anonymous\n                blocks are intended for system use and are not shown in the UI. Anonymous blocks do not\n                require a user-supplied name.\n            overwrite: Boolean value specifying if values should be overwritten if a block document with\n                the specified name already exists.\n\n        Raises:\n            ValueError: If a name is not given and `is_anonymous` is `False` or a name is given and\n                `is_anonymous` is `True`.\n        \"\"\"\n        if name is None and not is_anonymous:\n            if self._block_document_name is None:\n                raise ValueError(\n                    \"You're attempting to save a block document without a name.\"\n                    \" Please either call `save` with a `name` or pass\"\n                    \" `is_anonymous=True` to save an anonymous block.\"\n                )\n            else:\n                name = self._block_document_name\n\n        self._is_anonymous = is_anonymous\n\n        # Ensure block type and schema are registered before saving block document.\n        await self.register_type_and_schema(client=client)\n\n        try:\n            block_document = await client.create_block_document(\n                block_document=self._to_block_document(name=name)\n            )\n        except prefect.exceptions.ObjectAlreadyExists as err:\n            if overwrite:\n                block_document_id = self._block_document_id\n                if block_document_id is None:\n                    existing_block_document = await client.read_block_document_by_name(\n                        name=name, block_type_slug=self.get_block_type_slug()\n                    )\n                    block_document_id = existing_block_document.id\n                await client.update_block_document(\n                    block_document_id=block_document_id,\n                    block_document=self._to_block_document(name=name),\n                )\n                block_document = await client.read_block_document(\n                    block_document_id=block_document_id\n                )\n            else:\n                raise ValueError(\n                    \"You are attempting to save values with a name that is already in\"\n                    \" use for this block type. 
If you would like to overwrite the\"\n                    \" values that are saved, then save with `overwrite=True`.\"\n                ) from err\n\n        # Update metadata on block instance for later use.\n        self._block_document_name = block_document.name\n        self._block_document_id = block_document.id\n        return self._block_document_id\n\n    @sync_compatible\n    @instrument_instance_method_call\n    async def save(\n        self,\n        name: Optional[str] = None,\n        overwrite: bool = False,\n        client: \"PrefectClient\" = None,\n    ):\n        \"\"\"\n        Saves the values of a block as a block document.\n\n        Args:\n            name: User specified name to give saved block document which can later be used to load the\n                block document.\n            overwrite: Boolean value specifying if values should be overwritten if a block document with\n                the specified name already exists.\n\n        \"\"\"\n        document_id = await self._save(name=name, overwrite=overwrite, client=client)\n\n        return document_id\n\n    @classmethod\n    @sync_compatible\n    @inject_client\n    async def delete(\n        cls,\n        name: str,\n        client: \"PrefectClient\" = None,\n    ):\n        block_document, block_document_name = await cls._get_block_document(name)\n\n        await client.delete_block_document(block_document.id)\n\n    def _iter(self, *, include=None, exclude=None, **kwargs):\n        # Injects the `block_type_slug` into serialized payloads for dispatch\n        for key_value in super()._iter(include=include, exclude=exclude, **kwargs):\n            yield key_value\n\n        # Respect inclusion and exclusion still\n        if include and \"block_type_slug\" not in include:\n            return\n        if exclude and \"block_type_slug\" in exclude:\n            return\n\n        yield \"block_type_slug\", self.get_block_type_slug()\n\n    def __new__(cls: Type[Self], **kwargs) -> Self:\n        \"\"\"\n        Create an instance of the Block subclass type if a `block_type_slug` is\n        present in the data payload.\n        \"\"\"\n        block_type_slug = kwargs.pop(\"block_type_slug\", None)\n        if block_type_slug:\n            subcls = lookup_type(cls, dispatch_key=block_type_slug)\n            m = super().__new__(subcls)\n            # NOTE: This is a workaround for an obscure issue where copied models were\n            #       missing attributes. 
This pattern is from Pydantic's\n            #       `BaseModel._copy_and_set_values`.\n            #       The issue this fixes could not be reproduced in unit tests that\n            #       directly targeted dispatch handling and was only observed when\n            #       copying then saving infrastructure blocks on deployment models.\n            object.__setattr__(m, \"__dict__\", kwargs)\n            object.__setattr__(m, \"__fields_set__\", set(kwargs.keys()))\n            return m\n        else:\n            m = super().__new__(cls)\n            object.__setattr__(m, \"__dict__\", kwargs)\n            object.__setattr__(m, \"__fields_set__\", set(kwargs.keys()))\n            return m\n\n    def get_block_placeholder(self) -> str:\n        \"\"\"\n        Returns the block placeholder for the current block which can be used for\n        templating.\n\n        Returns:\n            str: The block placeholder for the current block in the format\n                `prefect.blocks.{block_type_name}.{block_document_name}`\n\n        Raises:\n            BlockNotSavedError: Raised if the block has not been saved.\n\n        If a block has not been saved, the return value will be `None`.\n        \"\"\"\n        block_document_name = self._block_document_name\n        if not block_document_name:\n            raise BlockNotSavedError(\n                \"Could not generate block placeholder for unsaved block.\"\n            )\n\n        return f\"prefect.blocks.{self.get_block_type_slug()}.{block_document_name}\"\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.Config","title":"Config","text":"Source code in prefect/blocks/core.py
class Config:\n    extra = \"allow\"\n\n    json_encoders = {SecretDict: lambda v: v.dict()}\n\n    @staticmethod\n    def schema_extra(schema: Dict[str, Any], model: Type[\"Block\"]):\n        \"\"\"\n        Customizes Pydantic's schema generation feature to add blocks related information.\n        \"\"\"\n        schema[\"block_type_slug\"] = model.get_block_type_slug()\n        # Ensures args and code examples aren't included in the schema\n        description = model.get_description()\n        if description:\n            schema[\"description\"] = description\n        else:\n            # Prevent the description of the base class from being included in the schema\n            schema.pop(\"description\", None)\n\n        # create a list of secret field names\n        # secret fields include both top-level keys and dot-delimited nested secret keys\n        # A wildcard (*) means that all fields under a given key are secret.\n        # for example: [\"x\", \"y\", \"z.*\", \"child.a\"]\n        # means the top-level keys \"x\" and \"y\", all keys under \"z\", and the key \"a\" of a block\n        # nested under the \"child\" key are all secret. There is no limit to nesting.\n        secrets = schema[\"secret_fields\"] = []\n        for field in model.__fields__.values():\n            _collect_secret_fields(field.name, field.type_, secrets)\n\n        # create block schema references\n        refs = schema[\"block_schema_references\"] = {}\n        for field in model.__fields__.values():\n            if Block.is_block_class(field.type_):\n                refs[field.name] = field.type_._to_block_schema_reference_dict()\n            if get_origin(field.type_) is Union:\n                for type_ in get_args(field.type_):\n                    if Block.is_block_class(type_):\n                        if isinstance(refs.get(field.name), list):\n                            refs[field.name].append(\n                                type_._to_block_schema_reference_dict()\n                            )\n                        elif isinstance(refs.get(field.name), dict):\n                            refs[field.name] = [\n                                refs[field.name],\n                                type_._to_block_schema_reference_dict(),\n                            ]\n                        else:\n                            refs[\n                                field.name\n                            ] = type_._to_block_schema_reference_dict()\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.Config.schema_extra","title":"schema_extra staticmethod","text":"

Customizes Pydantic's schema generation feature to add blocks related information.

Source code in prefect/blocks/core.py
@staticmethod\ndef schema_extra(schema: Dict[str, Any], model: Type[\"Block\"]):\n    \"\"\"\n    Customizes Pydantic's schema generation feature to add blocks related information.\n    \"\"\"\n    schema[\"block_type_slug\"] = model.get_block_type_slug()\n    # Ensures args and code examples aren't included in the schema\n    description = model.get_description()\n    if description:\n        schema[\"description\"] = description\n    else:\n        # Prevent the description of the base class from being included in the schema\n        schema.pop(\"description\", None)\n\n    # create a list of secret field names\n    # secret fields include both top-level keys and dot-delimited nested secret keys\n    # A wildcard (*) means that all fields under a given key are secret.\n    # for example: [\"x\", \"y\", \"z.*\", \"child.a\"]\n    # means the top-level keys \"x\" and \"y\", all keys under \"z\", and the key \"a\" of a block\n    # nested under the \"child\" key are all secret. There is no limit to nesting.\n    secrets = schema[\"secret_fields\"] = []\n    for field in model.__fields__.values():\n        _collect_secret_fields(field.name, field.type_, secrets)\n\n    # create block schema references\n    refs = schema[\"block_schema_references\"] = {}\n    for field in model.__fields__.values():\n        if Block.is_block_class(field.type_):\n            refs[field.name] = field.type_._to_block_schema_reference_dict()\n        if get_origin(field.type_) is Union:\n            for type_ in get_args(field.type_):\n                if Block.is_block_class(type_):\n                    if isinstance(refs.get(field.name), list):\n                        refs[field.name].append(\n                            type_._to_block_schema_reference_dict()\n                        )\n                    elif isinstance(refs.get(field.name), dict):\n                        refs[field.name] = [\n                            refs[field.name],\n                            type_._to_block_schema_reference_dict(),\n                        ]\n                    else:\n                        refs[\n                            field.name\n                        ] = type_._to_block_schema_reference_dict()\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_block_capabilities","title":"get_block_capabilities classmethod","text":"

Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset.
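An illustrative sketch (the class names and capability strings are hypothetical) of how capabilities declared on parent classes are collected:

```python
from prefect.blocks.core import Block

class Readable(Block):
    _block_schema_capabilities = ["read"]

class Writable(Block):
    _block_schema_capabilities = ["write"]

class Storage(Readable, Writable):
    value: str

# Collects capabilities from every class in the MRO into one frozenset.
Storage.get_block_capabilities()  # frozenset({"read", "write"})
```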

Source code in prefect/blocks/core.py
@classmethod\ndef get_block_capabilities(cls) -> FrozenSet[str]:\n    \"\"\"\n    Returns the block capabilities for this Block. Recursively collects all block\n    capabilities of all parent classes into a single frozenset.\n    \"\"\"\n    return frozenset(\n        {\n            c\n            for base in (cls,) + cls.__mro__\n            for c in getattr(base, \"_block_schema_capabilities\", []) or []\n        }\n    )\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_block_class_from_key","title":"get_block_class_from_key classmethod","text":"

Retrieve the block class implementation given a key.

Source code in prefect/blocks/core.py
@classmethod\ndef get_block_class_from_key(cls: Type[Self], key: str) -> Type[Self]:\n    \"\"\"\n    Retrieve the block class implementation given a key.\n    \"\"\"\n    # Ensure collections are imported and have the opportunity to register types\n    # before looking up the block class\n    prefect.plugins.load_prefect_collections()\n\n    return lookup_type(cls, key)\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_block_class_from_schema","title":"get_block_class_from_schema classmethod","text":"

Retrieve the block class implementation given a schema.

Source code in prefect/blocks/core.py
@classmethod\ndef get_block_class_from_schema(cls: Type[Self], schema: BlockSchema) -> Type[Self]:\n    \"\"\"\n    Retrieve the block class implementation given a schema.\n    \"\"\"\n    return cls.get_block_class_from_key(block_schema_to_key(schema))\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_block_placeholder","title":"get_block_placeholder","text":"

Returns the block placeholder for the current block which can be used for templating.

Returns:

    str: The block placeholder for the current block in the format prefect.blocks.{block_type_name}.{block_document_name}

Raises:

    BlockNotSavedError: Raised if the block has not been saved.

If a block has not been saved, the return value will be None.
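A small sketch reusing the Custom block from the load examples on this page (the block and document names are illustrative):

```python
from prefect.blocks.core import Block

class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

loaded_block = Custom.load("my-custom-message")
# e.g. "prefect.blocks.custom.my-custom-message"
placeholder = loaded_block.get_block_placeholder()
```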

Source code in prefect/blocks/core.py
def get_block_placeholder(self) -> str:\n    \"\"\"\n    Returns the block placeholder for the current block which can be used for\n    templating.\n\n    Returns:\n        str: The block placeholder for the current block in the format\n            `prefect.blocks.{block_type_name}.{block_document_name}`\n\n    Raises:\n        BlockNotSavedError: Raised if the block has not been saved.\n\n    If a block has not been saved, the return value will be `None`.\n    \"\"\"\n    block_document_name = self._block_document_name\n    if not block_document_name:\n        raise BlockNotSavedError(\n            \"Could not generate block placeholder for unsaved block.\"\n        )\n\n    return f\"prefect.blocks.{self.get_block_type_slug()}.{block_document_name}\"\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_code_example","title":"get_code_example classmethod","text":"

Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided.

Source code in prefect/blocks/core.py
@classmethod\ndef get_code_example(cls) -> Optional[str]:\n    \"\"\"\n    Returns the code example for the given block. Attempts to parse\n    code example from the class docstring if an override is not provided.\n    \"\"\"\n    code_example = (\n        dedent(cls._code_example) if cls._code_example is not None else None\n    )\n    # If no code example override has been provided, attempt to find a examples\n    # section or an admonition with the annotation \"example\" and use that as the\n    # code example\n    if code_example is None and cls.__doc__ is not None:\n        parsed = cls._parse_docstring()\n        for section in parsed:\n            # Section kind will be \"examples\" if Examples section heading is used.\n            if section.kind == DocstringSectionKind.examples:\n                # Examples sections are made up of smaller sections that need to be\n                # joined with newlines. Smaller sections are represented as tuples\n                # with shape (DocstringSectionKind, str)\n                code_example = \"\\n\".join(\n                    (part[1] for part in section.as_dict().get(\"value\", []))\n                )\n                break\n            # Section kind will be \"admonition\" if Example section heading is used.\n            if section.kind == DocstringSectionKind.admonition:\n                value = section.as_dict().get(\"value\", {})\n                if value.get(\"annotation\") == \"example\":\n                    code_example = value.get(\"description\")\n                    break\n\n    if code_example is None:\n        # If no code example has been specified or extracted from the class\n        # docstring, generate a sensible default\n        code_example = cls._generate_code_example()\n\n    return code_example\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_description","title":"get_description classmethod","text":"

Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined.

Source code in prefect/blocks/core.py
@classmethod\ndef get_description(cls) -> Optional[str]:\n    \"\"\"\n    Returns the description for the current block. Attempts to parse\n    description from class docstring if an override is not defined.\n    \"\"\"\n    description = cls._description\n    # If no description override has been provided, find the first text section\n    # and use that as the description\n    if description is None and cls.__doc__ is not None:\n        parsed = cls._parse_docstring()\n        parsed_description = next(\n            (\n                section.as_dict().get(\"value\")\n                for section in parsed\n                if section.kind == DocstringSectionKind.text\n            ),\n            None,\n        )\n        if isinstance(parsed_description, str):\n            description = parsed_description.strip()\n    return description\n
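A small illustrative sketch: the description comes from the _description override if one is set, otherwise from the first text section of the class docstring.

from prefect.blocks.system import Secret\n\n# Prints the description parsed from the Secret class docstring\nprint(Secret.get_description())\n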
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.load","title":"load async classmethod","text":"

Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document.

If a block document for a given block type is saved with a different schema than the current class calling load, a warning will be raised.

If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default validate = True.

If the current class schema is a superset of the block document schema, load must be called with validate set to False to prevent a validation error. In this case, the block attributes will default to None and must be set manually and saved to a new block document before the block can be used as expected.

Parameters:

name (str, required): The name or slug of the block document. A block document slug is a string with the format <block_type_slug>/<block_document_name>

validate (bool, default True): If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by name was saved.

client (PrefectClient, default None): The client to use to load the block document. If not provided, the default client will be injected.

Raises:

ValueError: If the requested block document is not found.

Returns:

An instance of the current class hydrated with the data stored in the block document with the specified name.

Examples:

Load from a Block subclass with a block document name:

class Custom(Block):\n    message: str\n\nCustom(message=\"Hello!\").save(\"my-custom-message\")\n\nloaded_block = Custom.load(\"my-custom-message\")\n

Load from Block with a block document slug:

class Custom(Block):\n    message: str\n\nCustom(message=\"Hello!\").save(\"my-custom-message\")\n\nloaded_block = Block.load(\"custom/my-custom-message\")\n

Migrate a block document to a new schema:

# original class\nclass Custom(Block):\n    message: str\n\nCustom(message=\"Hello!\").save(\"my-custom-message\")\n\n# Updated class with new required field\nclass Custom(Block):\n    message: str\n    number_of_ducks: int\n\nloaded_block = Custom.load(\"my-custom-message\", validate=False)\n\n# Prints UserWarning about schema mismatch\n\nloaded_block.number_of_ducks = 42\n\nloaded_block.save(\"my-custom-message\", overwrite=True)\n

Source code in prefect/blocks/core.py
@classmethod\n@sync_compatible\n@inject_client\nasync def load(\n    cls,\n    name: str,\n    validate: bool = True,\n    client: \"PrefectClient\" = None,\n):\n    \"\"\"\n    Retrieves data from the block document with the given name for the block type\n    that corresponds with the current class and returns an instantiated version of\n    the current class with the data stored in the block document.\n\n    If a block document for a given block type is saved with a different schema\n    than the current class calling `load`, a warning will be raised.\n\n    If the current class schema is a subset of the block document schema, the block\n    can be loaded as normal using the default `validate = True`.\n\n    If the current class schema is a superset of the block document schema, `load`\n    must be called with `validate` set to False to prevent a validation error. In\n    this case, the block attributes will default to `None` and must be set manually\n    and saved to a new block document before the block can be used as expected.\n\n    Args:\n        name: The name or slug of the block document. A block document slug is a\n            string with the format <block_type_slug>/<block_document_name>\n        validate: If False, the block document will be loaded without Pydantic\n            validating the block schema. This is useful if the block schema has\n            changed client-side since the block document referred to by `name` was saved.\n        client: The client to use to load the block document. If not provided, the\n            default client will be injected.\n\n    Raises:\n        ValueError: If the requested block document is not found.\n\n    Returns:\n        An instance of the current class hydrated with the data stored in the\n        block document with the specified name.\n\n    Examples:\n        Load from a Block subclass with a block document name:\n        ```python\n        class Custom(Block):\n            message: str\n\n        Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n        loaded_block = Custom.load(\"my-custom-message\")\n        ```\n\n        Load from Block with a block document slug:\n        ```python\n        class Custom(Block):\n            message: str\n\n        Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n        loaded_block = Block.load(\"custom/my-custom-message\")\n        ```\n\n        Migrate a block document to a new schema:\n        ```python\n        # original class\n        class Custom(Block):\n            message: str\n\n        Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n        # Updated class with new required field\n        class Custom(Block):\n            message: str\n            number_of_ducks: int\n\n        loaded_block = Custom.load(\"my-custom-message\", validate=False)\n\n        # Prints UserWarning about schema mismatch\n\n        loaded_block.number_of_ducks = 42\n\n        loaded_block.save(\"my-custom-message\", overwrite=True)\n        ```\n    \"\"\"\n    block_document, block_document_name = await cls._get_block_document(name)\n\n    try:\n        return cls._from_block_document(block_document)\n    except ValidationError as e:\n        if not validate:\n            missing_fields = tuple(err[\"loc\"][0] for err in e.errors())\n            missing_block_data = {field: None for field in missing_fields}\n            warnings.warn(\n                f\"Could not fully load {block_document_name!r} of block type\"\n                f\" {cls._block_type_slug!r} - this is 
likely because one or more\"\n                \" required fields were added to the schema for\"\n                f\" {cls.__name__!r} that did not exist on the class when this block\"\n                \" was last saved. Please specify values for new field(s):\"\n                f\" {listrepr(missing_fields)}, then run\"\n                f' `{cls.__name__}.save(\"{block_document_name}\", overwrite=True)`,'\n                \" and load this block again before attempting to use it.\"\n            )\n            return cls.construct(**block_document.data, **missing_block_data)\n        raise RuntimeError(\n            f\"Unable to load {block_document_name!r} of block type\"\n            f\" {cls._block_type_slug!r} due to failed validation. To load without\"\n            \" validation, try loading again with `validate=False`.\"\n        ) from e\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.register_type_and_schema","title":"register_type_and_schema async classmethod","text":"

Makes the block available for configuration with the current Prefect API. Recursively registers all nested blocks. Registration is idempotent.

Parameters:

client (PrefectClient, default None): Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided.

Source code in prefect/blocks/core.py
@classmethod\n@sync_compatible\n@inject_client\nasync def register_type_and_schema(cls, client: \"PrefectClient\" = None):\n    \"\"\"\n    Makes block available for configuration with current Prefect API.\n    Recursively registers all nested blocks. Registration is idempotent.\n\n    Args:\n        client: Optional client to use for registering type and schema with the\n            Prefect API. A new client will be created and used if one is not\n            provided.\n    \"\"\"\n    if cls.__name__ == \"Block\":\n        raise InvalidBlockRegistration(\n            \"`register_type_and_schema` should be called on a Block \"\n            \"subclass and not on the Block class directly.\"\n        )\n    if ABC in getattr(cls, \"__bases__\", []):\n        raise InvalidBlockRegistration(\n            \"`register_type_and_schema` should be called on a Block \"\n            \"subclass and not on a Block interface class directly.\"\n        )\n\n    for field in cls.__fields__.values():\n        if Block.is_block_class(field.type_):\n            await field.type_.register_type_and_schema(client=client)\n        if get_origin(field.type_) is Union:\n            for type_ in get_args(field.type_):\n                if Block.is_block_class(type_):\n                    await type_.register_type_and_schema(client=client)\n\n    try:\n        block_type = await client.read_block_type_by_slug(\n            slug=cls.get_block_type_slug()\n        )\n        cls._block_type_id = block_type.id\n        local_block_type = cls._to_block_type()\n        if _should_update_block_type(\n            local_block_type=local_block_type, server_block_type=block_type\n        ):\n            await client.update_block_type(\n                block_type_id=block_type.id, block_type=local_block_type\n            )\n    except prefect.exceptions.ObjectNotFound:\n        block_type = await client.create_block_type(block_type=cls._to_block_type())\n        cls._block_type_id = block_type.id\n\n    try:\n        block_schema = await client.read_block_schema_by_checksum(\n            checksum=cls._calculate_schema_checksum(),\n            version=cls.get_block_schema_version(),\n        )\n    except prefect.exceptions.ObjectNotFound:\n        block_schema = await client.create_block_schema(\n            block_schema=cls._to_block_schema(block_type_id=block_type.id)\n        )\n\n    cls._block_schema_id = block_schema.id\n
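A minimal sketch, assuming a reachable Prefect API and that the method is called through its sync-compatible wrapper; the DatabaseConfig block and its field are hypothetical:

from prefect.blocks.core import Block\n\n# Hypothetical example block\nclass DatabaseConfig(Block):\n    connection_url: str\n\n# Safe to run repeatedly; registration is idempotent\nDatabaseConfig.register_type_and_schema()\n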
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.save","title":"save async","text":"

Saves the values of a block as a block document.

Parameters:

name (Optional[str], default None): User-specified name to give the saved block document, which can later be used to load the block document.

overwrite (bool, default False): Whether values should be overwritten if a block document with the specified name already exists.

Source code in prefect/blocks/core.py
@sync_compatible\n@instrument_instance_method_call\nasync def save(\n    self,\n    name: Optional[str] = None,\n    overwrite: bool = False,\n    client: \"PrefectClient\" = None,\n):\n    \"\"\"\n    Saves the values of a block as a block document.\n\n    Args:\n        name: User specified name to give saved block document which can later be used to load the\n            block document.\n        overwrite: Boolean value specifying if values should be overwritten if a block document with\n            the specified name already exists.\n\n    \"\"\"\n    document_id = await self._save(name=name, overwrite=overwrite, client=client)\n\n    return document_id\n
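A minimal sketch, assuming a reachable Prefect API; the block type and names are illustrative. Passing overwrite=True updates an existing block document with the same name:

from prefect.blocks.system import JSON\n\nblock = JSON(value={\"retries\": 3})\n\n# Overwrite an existing document named my-config, or create it if absent\nblock.save(\"my-config\", overwrite=True)\n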
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.BlockNotSavedError","title":"BlockNotSavedError","text":"

Bases: RuntimeError

Raised when a given block is not saved and an operation that requires the block to be saved is attempted.

Source code in prefect/blocks/core.py
class BlockNotSavedError(RuntimeError):\n    \"\"\"\n    Raised when a given block is not saved and an operation that requires\n    the block to be saved is attempted.\n    \"\"\"\n\n    pass\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.InvalidBlockRegistration","title":"InvalidBlockRegistration","text":"

Bases: Exception

Raised on attempted registration of the base Block class or a Block interface class

Source code in prefect/blocks/core.py
class InvalidBlockRegistration(Exception):\n    \"\"\"\n    Raised on attempted registration of the base Block\n    class or a Block interface class\n    \"\"\"\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.block_schema_to_key","title":"block_schema_to_key","text":"

Defines the unique key used to look up the Block class for a given schema.

Source code in prefect/blocks/core.py
def block_schema_to_key(schema: BlockSchema) -> str:\n    \"\"\"\n    Defines the unique key used to lookup the Block class for a given schema.\n    \"\"\"\n    return f\"{schema.block_type.slug}\"\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/fields/","title":"fields","text":"","tags":["Python API","fields"]},{"location":"api-ref/prefect/blocks/fields/#prefect.blocks.fields","title":"prefect.blocks.fields","text":"","tags":["Python API","fields"]},{"location":"api-ref/prefect/blocks/kubernetes/","title":"kubernetes","text":"","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes","title":"prefect.blocks.kubernetes","text":"","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig","title":"KubernetesClusterConfig","text":"

Bases: Block

Stores configuration for interaction with Kubernetes clusters.

See from_file for creation.

Attributes:

config (Dict): The entire loaded YAML contents of a kubectl config file.

context_name (str): The name of the kubectl context to use.

Example

Load a saved Kubernetes cluster config:

from prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n

Source code in prefect/blocks/kubernetes.py
@deprecated_class(\n    start_date=\"Mar 2024\",\n    help=\"Use the KubernetesClusterConfig block from prefect-kubernetes instead.\",\n)\nclass KubernetesClusterConfig(Block):\n    \"\"\"\n    Stores configuration for interaction with Kubernetes clusters.\n\n    See `from_file` for creation.\n\n    Attributes:\n        config: The entire loaded YAML contents of a kubectl config file\n        context_name: The name of the kubectl context to use\n\n    Example:\n        Load a saved Kubernetes cluster config:\n        ```python\n        from prefect.blocks.kubernetes import KubernetesClusterConfig\n\n        cluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Kubernetes Cluster Config\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/2d0b896006ad463b49c28aaac14f31e00e32cfab-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig\"\n\n    config: Dict = Field(\n        default=..., description=\"The entire contents of a kubectl config file.\"\n    )\n    context_name: str = Field(\n        default=..., description=\"The name of the kubectl context to use.\"\n    )\n\n    @validator(\"config\", pre=True)\n    def parse_yaml_config(cls, value):\n        return validate_yaml(value)\n\n    @classmethod\n    def from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n        \"\"\"\n        Create a cluster config from the a Kubernetes config file.\n\n        By default, the current context in the default Kubernetes config file will be\n        used.\n\n        An alternative file or context may be specified.\n\n        The entire config file will be loaded and stored.\n        \"\"\"\n        kube_config = kubernetes.config.kube_config\n\n        path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n        path = path.expanduser().resolve()\n\n        # Determine the context\n        existing_contexts, current_context = kube_config.list_kube_config_contexts(\n            config_file=str(path)\n        )\n        context_names = {ctx[\"name\"] for ctx in existing_contexts}\n        if context_name:\n            if context_name not in context_names:\n                raise ValueError(\n                    f\"Context {context_name!r} not found. \"\n                    f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n                )\n        else:\n            context_name = current_context[\"name\"]\n\n        # Load the entire config file\n        config_file_contents = path.read_text()\n        config_dict = yaml.safe_load(config_file_contents)\n\n        return cls(config=config_dict, context_name=context_name)\n\n    def get_api_client(self) -> \"ApiClient\":\n        \"\"\"\n        Returns a Kubernetes API client for this cluster config.\n        \"\"\"\n        return kubernetes.config.kube_config.new_client_from_config_dict(\n            config_dict=self.config, context=self.context_name\n        )\n\n    def configure_client(self) -> None:\n        \"\"\"\n        Activates this cluster configuration by loading the configuration into the\n        Kubernetes Python client. After calling this, Kubernetes API clients can use\n        this config's context.\n        \"\"\"\n        kubernetes.config.kube_config.load_kube_config_from_dict(\n            config_dict=self.config, context=self.context_name\n        )\n
","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig.configure_client","title":"configure_client","text":"

Activates this cluster configuration by loading the configuration into the Kubernetes Python client. After calling this, Kubernetes API clients can use this config's context.

Source code in prefect/blocks/kubernetes.py
def configure_client(self) -> None:\n    \"\"\"\n    Activates this cluster configuration by loading the configuration into the\n    Kubernetes Python client. After calling this, Kubernetes API clients can use\n    this config's context.\n    \"\"\"\n    kubernetes.config.kube_config.load_kube_config_from_dict(\n        config_dict=self.config, context=self.context_name\n    )\n
","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig.from_file","title":"from_file classmethod","text":"

Create a cluster config from a Kubernetes config file.

By default, the current context in the default Kubernetes config file will be used.

An alternative file or context may be specified.

The entire config file will be loaded and stored.

Source code in prefect/blocks/kubernetes.py
@classmethod\ndef from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n    \"\"\"\n    Create a cluster config from the a Kubernetes config file.\n\n    By default, the current context in the default Kubernetes config file will be\n    used.\n\n    An alternative file or context may be specified.\n\n    The entire config file will be loaded and stored.\n    \"\"\"\n    kube_config = kubernetes.config.kube_config\n\n    path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n    path = path.expanduser().resolve()\n\n    # Determine the context\n    existing_contexts, current_context = kube_config.list_kube_config_contexts(\n        config_file=str(path)\n    )\n    context_names = {ctx[\"name\"] for ctx in existing_contexts}\n    if context_name:\n        if context_name not in context_names:\n            raise ValueError(\n                f\"Context {context_name!r} not found. \"\n                f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n            )\n    else:\n        context_name = current_context[\"name\"]\n\n    # Load the entire config file\n    config_file_contents = path.read_text()\n    config_dict = yaml.safe_load(config_file_contents)\n\n    return cls(config=config_dict, context_name=context_name)\n
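A usage sketch, assuming a local kubeconfig is available; the path, context name, and block name are illustrative:

from prefect.blocks.kubernetes import KubernetesClusterConfig\n\n# Use the current context from the default kubeconfig location\ncluster_config = KubernetesClusterConfig.from_file()\n\n# Or point at a specific file and context\ncluster_config = KubernetesClusterConfig.from_file(\n    path=\"~/.kube/config\", context_name=\"my-context\"\n)\n\ncluster_config.save(\"my-cluster\", overwrite=True)\n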
","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig.get_api_client","title":"get_api_client","text":"

Returns a Kubernetes API client for this cluster config.

Source code in prefect/blocks/kubernetes.py
def get_api_client(self) -> \"ApiClient\":\n    \"\"\"\n    Returns a Kubernetes API client for this cluster config.\n    \"\"\"\n    return kubernetes.config.kube_config.new_client_from_config_dict(\n        config_dict=self.config, context=self.context_name\n    )\n
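A usage sketch combining get_api_client with the Kubernetes Python client, assuming a saved cluster config block named my-cluster and access to the cluster; the namespace is illustrative:

from kubernetes import client\n\nfrom prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config = KubernetesClusterConfig.load(\"my-cluster\")\n\n# Build an ApiClient scoped to this block's context\napi_client = cluster_config.get_api_client()\nv1 = client.CoreV1Api(api_client=api_client)\npods = v1.list_namespaced_pod(namespace=\"default\")\n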
","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/notifications/","title":"notifications","text":"","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications","title":"prefect.blocks.notifications","text":"","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.AbstractAppriseNotificationBlock","title":"AbstractAppriseNotificationBlock","text":"

Bases: NotificationBlock, ABC

An abstract class for sending notifications using Apprise.

Source code in prefect/blocks/notifications.py
class AbstractAppriseNotificationBlock(NotificationBlock, ABC):\n    \"\"\"\n    An abstract class for sending notifications using Apprise.\n    \"\"\"\n\n    notify_type: Literal[\n        \"prefect_default\", \"info\", \"success\", \"warning\", \"failure\"\n    ] = Field(\n        default=PREFECT_NOTIFY_TYPE_DEFAULT,\n        description=(\n            \"The type of notification being performed; the prefect_default \"\n            \"is a plain notification that does not attach an image.\"\n        ),\n    )\n\n    def __init__(self, *args, **kwargs):\n        import apprise\n\n        if PREFECT_NOTIFY_TYPE_DEFAULT not in apprise.NOTIFY_TYPES:\n            apprise.NOTIFY_TYPES += (PREFECT_NOTIFY_TYPE_DEFAULT,)\n\n        super().__init__(*args, **kwargs)\n\n    def _start_apprise_client(self, url: SecretStr):\n        from apprise import Apprise, AppriseAsset\n\n        # A custom `AppriseAsset` that ensures Prefect Notifications\n        # appear correctly across multiple messaging platforms\n        prefect_app_data = AppriseAsset(\n            app_id=\"Prefect Notifications\",\n            app_desc=\"Prefect Notifications\",\n            app_url=\"https://prefect.io\",\n        )\n\n        self._apprise_client = Apprise(asset=prefect_app_data)\n        self._apprise_client.add(url.get_secret_value())\n\n    def block_initialization(self) -> None:\n        self._start_apprise_client(self.url)\n\n    @sync_compatible\n    @instrument_instance_method_call\n    async def notify(\n        self,\n        body: str,\n        subject: Optional[str] = None,\n    ):\n        with LogEavesdropper(\"apprise\", level=logging.DEBUG) as eavesdropper:\n            result = await self._apprise_client.async_notify(\n                body=body, title=subject, notify_type=self.notify_type\n            )\n        if not result and self._raise_on_failure:\n            raise NotificationError(log=eavesdropper.text())\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.AppriseNotificationBlock","title":"AppriseNotificationBlock","text":"

Bases: AbstractAppriseNotificationBlock, ABC

A base class for sending notifications using Apprise, through webhook URLs.

Source code in prefect/blocks/notifications.py
class AppriseNotificationBlock(AbstractAppriseNotificationBlock, ABC):\n    \"\"\"\n    A base class for sending notifications using Apprise, through webhook URLs.\n    \"\"\"\n\n    _documentation_url = \"https://docs.prefect.io/ui/notifications/\"\n    url: SecretStr = Field(\n        default=...,\n        title=\"Webhook URL\",\n        description=\"Incoming webhook URL used to send notifications.\",\n        examples=[\"https://hooks.example.com/XXX\"],\n    )\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.CustomWebhookNotificationBlock","title":"CustomWebhookNotificationBlock","text":"

Bases: NotificationBlock

Enables sending notifications via any custom webhook.

Any nested string parameter containing {{key}} will have {{key}} substituted with the corresponding value from the context or secrets.

Context values include: subject, body and name.

Examples:

Load a saved custom webhook and send a message:

from prefect.blocks.notifications import CustomWebhookNotificationBlock\n\ncustom_webhook_block = CustomWebhookNotificationBlock.load(\"BLOCK_NAME\")\n\ncustom_webhook_block.notify(\"Hello from Prefect!\")\n

Source code in prefect/blocks/notifications.py
class CustomWebhookNotificationBlock(NotificationBlock):\n    \"\"\"\n    Enables sending notifications via any custom webhook.\n\n    All nested string param contains `{{key}}` will be substituted with value from context/secrets.\n\n    Context values include: `subject`, `body` and `name`.\n\n    Examples:\n        Load a saved custom webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import CustomWebhookNotificationBlock\n\n        custom_webhook_block = CustomWebhookNotificationBlock.load(\"BLOCK_NAME\")\n\n        custom_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Custom Webhook\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/c7247cb359eb6cf276734d4b1fbf00fb8930e89e-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.CustomWebhookNotificationBlock\"\n\n    name: str = Field(title=\"Name\", description=\"Name of the webhook.\")\n\n    url: str = Field(\n        title=\"Webhook URL\",\n        description=\"The webhook URL.\",\n        examples=[\"https://hooks.slack.com/XXX\"],\n    )\n\n    method: Literal[\"GET\", \"POST\", \"PUT\", \"PATCH\", \"DELETE\"] = Field(\n        default=\"POST\", description=\"The webhook request method. Defaults to `POST`.\"\n    )\n\n    params: Optional[Dict[str, str]] = Field(\n        default=None, title=\"Query Params\", description=\"Custom query params.\"\n    )\n    json_data: Optional[dict] = Field(\n        default=None,\n        title=\"JSON Data\",\n        description=\"Send json data as payload.\",\n        examples=[\n            '{\"text\": \"{{subject}}\\\\n{{body}}\", \"title\": \"{{name}}\", \"token\":'\n            ' \"{{tokenFromSecrets}}\"}'\n        ],\n    )\n    form_data: Optional[Dict[str, str]] = Field(\n        default=None,\n        title=\"Form Data\",\n        description=(\n            \"Send form data as payload. Should not be used together with _JSON Data_.\"\n        ),\n        examples=[\n            '{\"text\": \"{{subject}}\\\\n{{body}}\", \"title\": \"{{name}}\", \"token\":'\n            ' \"{{tokenFromSecrets}}\"}'\n        ],\n    )\n\n    headers: Optional[Dict[str, str]] = Field(None, description=\"Custom headers.\")\n    cookies: Optional[Dict[str, str]] = Field(None, description=\"Custom cookies.\")\n\n    timeout: float = Field(\n        default=10, description=\"Request timeout in seconds. 
Defaults to 10.\"\n    )\n\n    secrets: SecretDict = Field(\n        default_factory=lambda: SecretDict(dict()),\n        title=\"Custom Secret Values\",\n        description=\"A dictionary of secret values to be substituted in other configs.\",\n        examples=['{\"tokenFromSecrets\":\"SomeSecretToken\"}'],\n    )\n\n    def _build_request_args(self, body: str, subject: Optional[str]):\n        \"\"\"Build kwargs for httpx.AsyncClient.request\"\"\"\n        # prepare values\n        values = self.secrets.get_secret_value()\n        # use 'null' when subject is None\n        values.update(\n            {\n                \"subject\": \"null\" if subject is None else subject,\n                \"body\": body,\n                \"name\": self.name,\n            }\n        )\n        # do substution\n        return apply_values(\n            {\n                \"method\": self.method,\n                \"url\": self.url,\n                \"params\": self.params,\n                \"data\": self.form_data,\n                \"json\": self.json_data,\n                \"headers\": self.headers,\n                \"cookies\": self.cookies,\n                \"timeout\": self.timeout,\n            },\n            values,\n        )\n\n    def block_initialization(self) -> None:\n        # check form_data and json_data\n        if self.form_data is not None and self.json_data is not None:\n            raise ValueError(\"both `Form Data` and `JSON Data` provided\")\n        allowed_keys = {\"subject\", \"body\", \"name\"}.union(\n            self.secrets.get_secret_value().keys()\n        )\n        # test template to raise a error early\n        for name in [\"url\", \"params\", \"form_data\", \"json_data\", \"headers\", \"cookies\"]:\n            template = getattr(self, name)\n            if template is None:\n                continue\n            # check for placeholders not in predefined keys and secrets\n            placeholders = find_placeholders(template)\n            for placeholder in placeholders:\n                if placeholder.name not in allowed_keys:\n                    raise KeyError(f\"{name}/{placeholder}\")\n\n    @sync_compatible\n    @instrument_instance_method_call\n    async def notify(self, body: str, subject: Optional[str] = None):\n        import httpx\n\n        request_args = self._build_request_args(body, subject)\n        cookies = request_args.pop(\"cookies\", None)\n        # make request with httpx\n        client = httpx.AsyncClient(\n            headers={\"user-agent\": \"Prefect Notifications\"}, cookies=cookies\n        )\n        async with client:\n            resp = await client.request(**request_args)\n        resp.raise_for_status()\n
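A creation sketch, assuming the plain dict passed to secrets is coerced to the block's SecretDict field; the URL, token, and block name are illustrative:

from prefect.blocks.notifications import CustomWebhookNotificationBlock\n\nwebhook = CustomWebhookNotificationBlock(\n    name=\"my-webhook\",\n    url=\"https://hooks.example.com/XXX\",\n    json_data={\"text\": \"{{subject}} - {{body}}\", \"token\": \"{{tokenFromSecrets}}\"},\n    secrets={\"tokenFromSecrets\": \"some-secret-token\"},\n)\n\nwebhook.save(\"my-custom-webhook\", overwrite=True)\nwebhook.notify(\"Hello from Prefect!\", subject=\"Test\")\n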
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.DiscordWebhook","title":"DiscordWebhook","text":"

Bases: AbstractAppriseNotificationBlock

Enables sending notifications via a provided Discord webhook. See Apprise notify_Discord docs.

Examples:

Load a saved Discord webhook and send a message:

from prefect.blocks.notifications import DiscordWebhook\n\ndiscord_webhook_block = DiscordWebhook.load(\"BLOCK_NAME\")\n\ndiscord_webhook_block.notify(\"Hello from Prefect!\")\n

Source code in prefect/blocks/notifications.py
class DiscordWebhook(AbstractAppriseNotificationBlock):\n    \"\"\"\n    Enables sending notifications via a provided Discord webhook.\n    See [Apprise notify_Discord docs](https://github.com/caronc/apprise/wiki/Notify_Discord) # noqa\n\n    Examples:\n        Load a saved Discord webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import DiscordWebhook\n\n        discord_webhook_block = DiscordWebhook.load(\"BLOCK_NAME\")\n\n        discord_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _description = \"Enables sending notifications via a provided Discord webhook.\"\n    _block_type_name = \"Discord Webhook\"\n    _block_type_slug = \"discord-webhook\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/9e94976c80ef925b66d24e5d14f0d47baa6b8f88-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.DiscordWebhook\"\n\n    webhook_id: SecretStr = Field(\n        default=...,\n        description=(\n            \"The first part of 2 tokens provided to you after creating a\"\n            \" incoming-webhook.\"\n        ),\n    )\n\n    webhook_token: SecretStr = Field(\n        default=...,\n        description=(\n            \"The second part of 2 tokens provided to you after creating a\"\n            \" incoming-webhook.\"\n        ),\n    )\n\n    botname: Optional[str] = Field(\n        title=\"Bot name\",\n        default=None,\n        description=(\n            \"Identify the name of the bot that should issue the message. If one isn't\"\n            \" specified then the default is to just use your account (associated with\"\n            \" the incoming-webhook).\"\n        ),\n    )\n\n    tts: bool = Field(\n        default=False,\n        description=\"Whether to enable Text-To-Speech.\",\n    )\n\n    include_image: bool = Field(\n        default=False,\n        description=(\n            \"Whether to include an image in-line with the message describing the\"\n            \" notification type.\"\n        ),\n    )\n\n    avatar: bool = Field(\n        default=False,\n        description=\"Whether to override the default discord avatar icon.\",\n    )\n\n    avatar_url: Optional[str] = Field(\n        title=\"Avatar URL\",\n        default=False,\n        description=(\n            \"Over-ride the default discord avatar icon URL. By default this is not set\"\n            \" and Apprise chooses the URL dynamically based on the type of message\"\n            \" (info, success, warning, or error).\"\n        ),\n    )\n\n    def block_initialization(self) -> None:\n        from apprise.plugins.NotifyDiscord import NotifyDiscord\n\n        url = SecretStr(\n            NotifyDiscord(\n                webhook_id=self.webhook_id.get_secret_value(),\n                webhook_token=self.webhook_token.get_secret_value(),\n                botname=self.botname,\n                tts=self.tts,\n                include_image=self.include_image,\n                avatar=self.avatar,\n                avatar_url=self.avatar_url,\n            ).url()\n        )\n        self._start_apprise_client(url)\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MattermostWebhook","title":"MattermostWebhook","text":"

Bases: AbstractAppriseNotificationBlock

Enables sending notifications via a provided Mattermost webhook. See Apprise notify_Mattermost docs.

Examples:

Load a saved Mattermost webhook and send a message:

from prefect.blocks.notifications import MattermostWebhook\n\nmattermost_webhook_block = MattermostWebhook.load(\"BLOCK_NAME\")\n\nmattermost_webhook_block.notify(\"Hello from Prefect!\")\n

Source code in prefect/blocks/notifications.py
class MattermostWebhook(AbstractAppriseNotificationBlock):\n    \"\"\"\n    Enables sending notifications via a provided Mattermost webhook.\n    See [Apprise notify_Mattermost docs](https://github.com/caronc/apprise/wiki/Notify_Mattermost) # noqa\n\n\n    Examples:\n        Load a saved Mattermost webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import MattermostWebhook\n\n        mattermost_webhook_block = MattermostWebhook.load(\"BLOCK_NAME\")\n\n        mattermost_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _description = \"Enables sending notifications via a provided Mattermost webhook.\"\n    _block_type_name = \"Mattermost Webhook\"\n    _block_type_slug = \"mattermost-webhook\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/1350a147130bf82cbc799a5f868d2c0116207736-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MattermostWebhook\"\n\n    hostname: str = Field(\n        default=...,\n        description=\"The hostname of your Mattermost server.\",\n        examples=[\"Mattermost.example.com\"],\n    )\n\n    token: SecretStr = Field(\n        default=...,\n        description=\"The token associated with your Mattermost webhook.\",\n    )\n\n    botname: Optional[str] = Field(\n        title=\"Bot name\",\n        default=None,\n        description=\"The name of the bot that will send the message.\",\n    )\n\n    channels: Optional[List[str]] = Field(\n        default=None,\n        description=\"The channel(s) you wish to notify.\",\n    )\n\n    include_image: bool = Field(\n        default=False,\n        description=\"Whether to include the Apprise status image in the message.\",\n    )\n\n    path: Optional[str] = Field(\n        default=None,\n        description=\"An optional sub-path specification to append to the hostname.\",\n    )\n\n    port: int = Field(\n        default=8065,\n        description=\"The port of your Mattermost server.\",\n    )\n\n    def block_initialization(self) -> None:\n        from apprise.plugins.NotifyMattermost import NotifyMattermost\n\n        url = SecretStr(\n            NotifyMattermost(\n                token=self.token.get_secret_value(),\n                fullpath=self.path,\n                host=self.hostname,\n                botname=self.botname,\n                channels=self.channels,\n                include_image=self.include_image,\n                port=self.port,\n            ).url()\n        )\n        self._start_apprise_client(url)\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MicrosoftTeamsWebhook","title":"MicrosoftTeamsWebhook","text":"

Bases: AppriseNotificationBlock

Enables sending notifications via a provided Microsoft Teams webhook.

Examples:

Load a saved Teams webhook and send a message:

from prefect.blocks.notifications import MicrosoftTeamsWebhook\nteams_webhook_block = MicrosoftTeamsWebhook.load(\"BLOCK_NAME\")\nteams_webhook_block.notify(\"Hello from Prefect!\")\n

Source code in prefect/blocks/notifications.py
class MicrosoftTeamsWebhook(AppriseNotificationBlock):\n    \"\"\"\n    Enables sending notifications via a provided Microsoft Teams webhook.\n\n    Examples:\n        Load a saved Teams webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import MicrosoftTeamsWebhook\n        teams_webhook_block = MicrosoftTeamsWebhook.load(\"BLOCK_NAME\")\n        teams_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Microsoft Teams Webhook\"\n    _block_type_slug = \"ms-teams-webhook\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/817efe008a57f0a24f3587414714b563e5e23658-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MicrosoftTeamsWebhook\"\n\n    url: SecretStr = Field(\n        ...,\n        title=\"Webhook URL\",\n        description=\"The Teams incoming webhook URL used to send notifications.\",\n        examples=[\n            \"https://your-org.webhook.office.com/webhookb2/XXX/IncomingWebhook/YYY/ZZZ\"\n        ],\n    )\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.OpsgenieWebhook","title":"OpsgenieWebhook","text":"

Bases: AbstractAppriseNotificationBlock

Enables sending notifications via a provided Opsgenie webhook. See Apprise notify_opsgenie docs for more info on formatting the URL.

Examples:

Load a saved Opsgenie webhook and send a message:

from prefect.blocks.notifications import OpsgenieWebhook\nopsgenie_webhook_block = OpsgenieWebhook.load(\"BLOCK_NAME\")\nopsgenie_webhook_block.notify(\"Hello from Prefect!\")\n

Source code in prefect/blocks/notifications.py
class OpsgenieWebhook(AbstractAppriseNotificationBlock):\n    \"\"\"\n    Enables sending notifications via a provided Opsgenie webhook.\n    See [Apprise notify_opsgenie docs](https://github.com/caronc/apprise/wiki/Notify_opsgenie)\n    for more info on formatting the URL.\n\n    Examples:\n        Load a saved Opsgenie webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import OpsgenieWebhook\n        opsgenie_webhook_block = OpsgenieWebhook.load(\"BLOCK_NAME\")\n        opsgenie_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _description = \"Enables sending notifications via a provided Opsgenie webhook.\"\n\n    _block_type_name = \"Opsgenie Webhook\"\n    _block_type_slug = \"opsgenie-webhook\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/d8b5bc6244ae6cd83b62ec42f10d96e14d6e9113-280x280.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.OpsgenieWebhook\"\n\n    apikey: SecretStr = Field(\n        default=...,\n        title=\"API Key\",\n        description=\"The API Key associated with your Opsgenie account.\",\n    )\n\n    target_user: Optional[List] = Field(\n        default=None, description=\"The user(s) you wish to notify.\"\n    )\n\n    target_team: Optional[List] = Field(\n        default=None, description=\"The team(s) you wish to notify.\"\n    )\n\n    target_schedule: Optional[List] = Field(\n        default=None, description=\"The schedule(s) you wish to notify.\"\n    )\n\n    target_escalation: Optional[List] = Field(\n        default=None, description=\"The escalation(s) you wish to notify.\"\n    )\n\n    region_name: Literal[\"us\", \"eu\"] = Field(\n        default=\"us\", description=\"The 2-character region code.\"\n    )\n\n    batch: bool = Field(\n        default=False,\n        description=\"Notify all targets in batches (instead of individually).\",\n    )\n\n    tags: Optional[List] = Field(\n        default=None,\n        description=(\n            \"A comma-separated list of tags you can associate with your Opsgenie\"\n            \" message.\"\n        ),\n        examples=['[\"tag1\", \"tag2\"]'],\n    )\n\n    priority: Optional[str] = Field(\n        default=3,\n        description=(\n            \"The priority to associate with the message. 
It is on a scale between 1\"\n            \" (LOW) and 5 (EMERGENCY).\"\n        ),\n    )\n\n    alias: Optional[str] = Field(\n        default=None, description=\"The alias to associate with the message.\"\n    )\n\n    entity: Optional[str] = Field(\n        default=None, description=\"The entity to associate with the message.\"\n    )\n\n    details: Optional[Dict[str, str]] = Field(\n        default=None,\n        description=\"Additional details composed of key/values pairs.\",\n        examples=['{\"key1\": \"value1\", \"key2\": \"value2\"}'],\n    )\n\n    def block_initialization(self) -> None:\n        from apprise.plugins.NotifyOpsgenie import NotifyOpsgenie\n\n        targets = []\n        if self.target_user:\n            [targets.append(f\"@{x}\") for x in self.target_user]\n        if self.target_team:\n            [targets.append(f\"#{x}\") for x in self.target_team]\n        if self.target_schedule:\n            [targets.append(f\"*{x}\") for x in self.target_schedule]\n        if self.target_escalation:\n            [targets.append(f\"^{x}\") for x in self.target_escalation]\n        url = SecretStr(\n            NotifyOpsgenie(\n                apikey=self.apikey.get_secret_value(),\n                targets=targets,\n                region_name=self.region_name,\n                details=self.details,\n                priority=self.priority,\n                alias=self.alias,\n                entity=self.entity,\n                batch=self.batch,\n                tags=self.tags,\n            ).url()\n        )\n        self._start_apprise_client(url)\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.PagerDutyWebHook","title":"PagerDutyWebHook","text":"

Bases: AbstractAppriseNotificationBlock

Enables sending notifications via a provided PagerDuty webhook. See Apprise notify_pagerduty docs for more info on formatting the URL.

Examples:

Load a saved PagerDuty webhook and send a message:

from prefect.blocks.notifications import PagerDutyWebHook\npagerduty_webhook_block = PagerDutyWebHook.load(\"BLOCK_NAME\")\npagerduty_webhook_block.notify(\"Hello from Prefect!\")\n

Source code in prefect/blocks/notifications.py
class PagerDutyWebHook(AbstractAppriseNotificationBlock):\n    \"\"\"\n    Enables sending notifications via a provided PagerDuty webhook.\n    See [Apprise notify_pagerduty docs](https://github.com/caronc/apprise/wiki/Notify_pagerduty)\n    for more info on formatting the URL.\n\n    Examples:\n        Load a saved PagerDuty webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import PagerDutyWebHook\n        pagerduty_webhook_block = PagerDutyWebHook.load(\"BLOCK_NAME\")\n        pagerduty_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _description = \"Enables sending notifications via a provided PagerDuty webhook.\"\n\n    _block_type_name = \"Pager Duty Webhook\"\n    _block_type_slug = \"pager-duty-webhook\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/8dbf37d17089c1ce531708eac2e510801f7b3aee-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.PagerDutyWebHook\"\n\n    # The default cannot be prefect_default because NotifyPagerDuty's\n    # PAGERDUTY_SEVERITY_MAP only has these notify types defined as keys\n    notify_type: Literal[\"info\", \"success\", \"warning\", \"failure\"] = Field(\n        default=\"info\", description=\"The severity of the notification.\"\n    )\n\n    integration_key: SecretStr = Field(\n        default=...,\n        description=(\n            \"This can be found on the Events API V2 \"\n            \"integration's detail page, and is also referred to as a Routing Key. \"\n            \"This must be provided alongside `api_key`, but will error if provided \"\n            \"alongside `url`.\"\n        ),\n    )\n\n    api_key: SecretStr = Field(\n        default=...,\n        title=\"API Key\",\n        description=(\n            \"This can be found under Integrations. 
\"\n            \"This must be provided alongside `integration_key`, but will error if \"\n            \"provided alongside `url`.\"\n        ),\n    )\n\n    source: Optional[str] = Field(\n        default=\"Prefect\", description=\"The source string as part of the payload.\"\n    )\n\n    component: str = Field(\n        default=\"Notification\",\n        description=\"The component string as part of the payload.\",\n    )\n\n    group: Optional[str] = Field(\n        default=None, description=\"The group string as part of the payload.\"\n    )\n\n    class_id: Optional[str] = Field(\n        default=None,\n        title=\"Class ID\",\n        description=\"The class string as part of the payload.\",\n    )\n\n    region_name: Literal[\"us\", \"eu\"] = Field(\n        default=\"us\", description=\"The region name.\"\n    )\n\n    clickable_url: Optional[AnyHttpUrl] = Field(\n        default=None,\n        title=\"Clickable URL\",\n        description=\"A clickable URL to associate with the notice.\",\n    )\n\n    include_image: bool = Field(\n        default=True,\n        description=\"Associate the notification status via a represented icon.\",\n    )\n\n    custom_details: Optional[Dict[str, str]] = Field(\n        default=None,\n        description=\"Additional details to include as part of the payload.\",\n        examples=['{\"disk_space_left\": \"145GB\"}'],\n    )\n\n    def block_initialization(self) -> None:\n        from apprise.plugins.NotifyPagerDuty import NotifyPagerDuty\n\n        url = SecretStr(\n            NotifyPagerDuty(\n                apikey=self.api_key.get_secret_value(),\n                integrationkey=self.integration_key.get_secret_value(),\n                source=self.source,\n                component=self.component,\n                group=self.group,\n                class_id=self.class_id,\n                region_name=self.region_name,\n                click=self.clickable_url,\n                include_image=self.include_image,\n                details=self.custom_details,\n            ).url()\n        )\n        self._start_apprise_client(url)\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SendgridEmail","title":"SendgridEmail","text":"

Bases: AbstractAppriseNotificationBlock

Enables sending notifications via any SendGrid account. See Apprise Notify_sendgrid docs.

Examples:

Load a saved Sendgrid email block and send an email message:

from prefect.blocks.notifications import SendgridEmail\n\nsendgrid_block = SendgridEmail.load(\"BLOCK_NAME\")\n\nsendgrid_block.notify(\"Hello from Prefect!\")\n

Source code in prefect/blocks/notifications.py
class SendgridEmail(AbstractAppriseNotificationBlock):\n    \"\"\"\n    Enables sending notifications via any sendgrid account.\n    See [Apprise Notify_sendgrid docs](https://github.com/caronc/apprise/wiki/Notify_Sendgrid)\n\n    Examples:\n        Load a saved Sendgrid and send a email message:\n        ```python\n        from prefect.blocks.notifications import SendgridEmail\n\n        sendgrid_block = SendgridEmail.load(\"BLOCK_NAME\")\n\n        sendgrid_block.notify(\"Hello from Prefect!\")\n    \"\"\"\n\n    _description = \"Enables sending notifications via Sendgrid email service.\"\n    _block_type_name = \"Sendgrid Email\"\n    _block_type_slug = \"sendgrid-email\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/82bc6ed16ca42a2252a5512c72233a253b8a58eb-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SendgridEmail\"\n\n    api_key: SecretStr = Field(\n        default=...,\n        title=\"API Key\",\n        description=\"The API Key associated with your sendgrid account.\",\n    )\n\n    sender_email: str = Field(\n        title=\"Sender email id\",\n        description=\"The sender email id.\",\n        examples=[\"test-support@gmail.com\"],\n    )\n\n    to_emails: List[str] = Field(\n        default=...,\n        title=\"Recipient emails\",\n        description=\"Email ids of all recipients.\",\n        examples=['\"recipient1@gmail.com\"'],\n    )\n\n    def block_initialization(self) -> None:\n        from apprise.plugins.NotifySendGrid import NotifySendGrid\n\n        url = SecretStr(\n            NotifySendGrid(\n                apikey=self.api_key.get_secret_value(),\n                from_email=self.sender_email,\n                targets=self.to_emails,\n            ).url()\n        )\n\n        self._start_apprise_client(url)\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SlackWebhook","title":"SlackWebhook","text":"

Bases: AppriseNotificationBlock

Enables sending notifications via a provided Slack webhook.

Examples:

Load a saved Slack webhook and send a message:

from prefect.blocks.notifications import SlackWebhook\n\nslack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\nslack_webhook_block.notify(\"Hello from Prefect!\")\n

Source code in prefect/blocks/notifications.py
class SlackWebhook(AppriseNotificationBlock):\n    \"\"\"\n    Enables sending notifications via a provided Slack webhook.\n\n    Examples:\n        Load a saved Slack webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import SlackWebhook\n\n        slack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\n        slack_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Slack Webhook\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/c1965ecbf8704ee1ea20d77786de9a41ce1087d1-500x500.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SlackWebhook\"\n\n    url: SecretStr = Field(\n        default=...,\n        title=\"Webhook URL\",\n        description=\"Slack incoming webhook URL used to send notifications.\",\n        examples=[\"https://hooks.slack.com/XXX\"],\n    )\n
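A creation sketch to complement the load example above; the webhook URL and block name are illustrative:

from prefect.blocks.notifications import SlackWebhook\n\n# Use your own Slack incoming-webhook URL\nSlackWebhook(url=\"https://hooks.slack.com/XXX\").save(\"my-slack\", overwrite=True)\n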
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.TwilioSMS","title":"TwilioSMS","text":"

Bases: AbstractAppriseNotificationBlock

Enables sending notifications via Twilio SMS. Find more on sending Twilio SMS messages in the docs.

Examples:

Load a saved TwilioSMS block and send a message:

from prefect.blocks.notifications import TwilioSMS\ntwilio_webhook_block = TwilioSMS.load(\"BLOCK_NAME\")\ntwilio_webhook_block.notify(\"Hello from Prefect!\")\n

Source code in prefect/blocks/notifications.py
class TwilioSMS(AbstractAppriseNotificationBlock):\n    \"\"\"Enables sending notifications via Twilio SMS.\n    Find more on sending Twilio SMS messages in the [docs](https://www.twilio.com/docs/sms).\n\n    Examples:\n        Load a saved `TwilioSMS` block and send a message:\n        ```python\n        from prefect.blocks.notifications import TwilioSMS\n        twilio_webhook_block = TwilioSMS.load(\"BLOCK_NAME\")\n        twilio_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _description = \"Enables sending notifications via Twilio SMS.\"\n    _block_type_name = \"Twilio SMS\"\n    _block_type_slug = \"twilio-sms\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/8bd8777999f82112c09b9c8d57083ac75a4a0d65-250x250.png\"  # noqa\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.TwilioSMS\"\n\n    account_sid: str = Field(\n        default=...,\n        description=(\n            \"The Twilio Account SID - it can be found on the homepage \"\n            \"of the Twilio console.\"\n        ),\n    )\n\n    auth_token: SecretStr = Field(\n        default=...,\n        description=(\n            \"The Twilio Authentication Token - \"\n            \"it can be found on the homepage of the Twilio console.\"\n        ),\n    )\n\n    from_phone_number: str = Field(\n        default=...,\n        description=\"The valid Twilio phone number to send the message from.\",\n        examples=[\"18001234567\"],\n    )\n\n    to_phone_numbers: List[str] = Field(\n        default=...,\n        description=\"A list of valid Twilio phone number(s) to send the message to.\",\n        # not wrapped in brackets because of the way UI displays examples; in code should be [\"18004242424\"]\n        examples=[\"18004242424\"],\n    )\n\n    def block_initialization(self) -> None:\n        from apprise.plugins.NotifyTwilio import NotifyTwilio\n\n        url = SecretStr(\n            NotifyTwilio(\n                account_sid=self.account_sid,\n                auth_token=self.auth_token.get_secret_value(),\n                source=self.from_phone_number,\n                targets=self.to_phone_numbers,\n            ).url()\n        )\n        self._start_apprise_client(url)\n
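A creation sketch; all credential and phone-number values below are illustrative placeholders:

from prefect.blocks.notifications import TwilioSMS\n\ntwilio_sms_block = TwilioSMS(\n    account_sid=\"your-account-sid\",\n    auth_token=\"your-auth-token\",\n    from_phone_number=\"18001234567\",\n    to_phone_numbers=[\"18004242424\"],\n)\n\ntwilio_sms_block.save(\"my-twilio-sms\", overwrite=True)\n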
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/system/","title":"system","text":"","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system","title":"prefect.blocks.system","text":"","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.DateTime","title":"DateTime","text":"

Bases: Block

A block that represents a datetime

Attributes:

value (DateTimeTZ): An ISO 8601-compatible datetime value.

Example

Load a stored datetime value:

from prefect.blocks.system import DateTime\n\ndata_time_block = DateTime.load(\"BLOCK_NAME\")\n

Source code in prefect/blocks/system.py
class DateTime(Block):\n    \"\"\"\n    A block that represents a datetime\n\n    Attributes:\n        value: An ISO 8601-compatible datetime value.\n\n    Example:\n        Load a stored JSON value:\n        ```python\n        from prefect.blocks.system import DateTime\n\n        data_time_block = DateTime.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Date Time\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/8b3da9a6621e92108b8e6a75b82e15374e170ff7-48x48.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.DateTime\"\n\n    value: DateTimeTZ = Field(\n        default=...,\n        description=\"An ISO 8601-compatible datetime value.\",\n    )\n
","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.JSON","title":"JSON","text":"

Bases: Block

A block that represents JSON

Attributes:

value (Any): A JSON-compatible value.

Example

Load a stored JSON value:

from prefect.blocks.system import JSON\n\njson_block = JSON.load(\"BLOCK_NAME\")\n
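
A minimal sketch (assumed usage) of storing a JSON-compatible value so it can be loaded later:

from prefect.blocks.system import JSON\n\nJSON(value={\"retries\": 3, \"environment\": \"staging\"}).save(\"BLOCK_NAME\")\n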

Source code in prefect/blocks/system.py
class JSON(Block):\n    \"\"\"\n    A block that represents JSON\n\n    Attributes:\n        value: A JSON-compatible value.\n\n    Example:\n        Load a stored JSON value:\n        ```python\n        from prefect.blocks.system import JSON\n\n        json_block = JSON.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/4fcef2294b6eeb423b1332d1ece5156bf296ff96-48x48.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.JSON\"\n\n    value: Any = Field(default=..., description=\"A JSON-compatible value.\")\n
","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.Secret","title":"Secret","text":"

Bases: Block

A block that represents a secret value. The value stored in this block will be obfuscated when this block is logged or shown in the UI.

Attributes:

value (SecretStr): A string value that should be kept secret.

Example
from prefect.blocks.system import Secret\n\nsecret_block = Secret.load(\"BLOCK_NAME\")\n\n# Access the stored secret\nsecret_block.get()\n
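
A short sketch (assumed usage; the token is a placeholder) of storing a secret; the plain string is wrapped in a SecretStr and obfuscated in logs and the UI:

from prefect.blocks.system import Secret\n\nSecret(value=\"my-api-token\").save(\"BLOCK_NAME\")\n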
Source code in prefect/blocks/system.py
class Secret(Block):\n    \"\"\"\n    A block that represents a secret value. The value stored in this block will be obfuscated when\n    this block is logged or shown in the UI.\n\n    Attributes:\n        value: A string value that should be kept secret.\n\n    Example:\n        ```python\n        from prefect.blocks.system import Secret\n\n        secret_block = Secret.load(\"BLOCK_NAME\")\n\n        # Access the stored secret\n        secret_block.get()\n        ```\n    \"\"\"\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/c6f20e556dd16effda9df16551feecfb5822092b-48x48.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.Secret\"\n\n    value: SecretStr = Field(\n        default=..., description=\"A string value that should be kept secret.\"\n    )\n\n    def get(self):\n        return self.value.get_secret_value()\n
","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.String","title":"String","text":"

Bases: Block

A block that represents a string

Attributes:

value (str): A string value.

Example

Load a stored string value:

from prefect.blocks.system import String\n\nstring_block = String.load(\"BLOCK_NAME\")\n
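
A minimal sketch (assumed usage) of saving a string value under a block name:

from prefect.blocks.system import String\n\nString(value=\"hello from a block\").save(\"BLOCK_NAME\")\n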

Source code in prefect/blocks/system.py
class String(Block):\n    \"\"\"\n    A block that represents a string\n\n    Attributes:\n        value: A string value.\n\n    Example:\n        Load a stored string value:\n        ```python\n        from prefect.blocks.system import String\n\n        string_block = String.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/c262ea2c80a2c043564e8763f3370c3db5a6b3e6-48x48.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.String\"\n\n    value: str = Field(default=..., description=\"A string value.\")\n
","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/webhook/","title":"webhook","text":"","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/blocks/webhook/#prefect.blocks.webhook","title":"prefect.blocks.webhook","text":"","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/blocks/webhook/#prefect.blocks.webhook.Webhook","title":"Webhook","text":"

Bases: Block

Block that enables calling webhooks.
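
The class docstring has no example, so here is a hedged sketch (URL, headers, and payload are placeholders) of configuring the block and calling it from asynchronous code:

from prefect.blocks.webhook import Webhook\n\nwebhook_block = Webhook(\n    method=\"POST\",\n    url=\"https://hooks.example.com/XXX\",\n    headers={\"Content-Type\": \"application/json\"},\n)\n\n# inside an async flow or coroutine\nresponse = await webhook_block.call(payload={\"message\": \"Hello from Prefect!\"})\n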

Source code in prefect/blocks/webhook.py
class Webhook(Block):\n    \"\"\"\n    Block that enables calling webhooks.\n    \"\"\"\n\n    _block_type_name = \"Webhook\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/c7247cb359eb6cf276734d4b1fbf00fb8930e89e-250x250.png\"  # type: ignore\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/webhook/#prefect.blocks.webhook.Webhook\"\n\n    method: Literal[\"GET\", \"POST\", \"PUT\", \"PATCH\", \"DELETE\"] = Field(\n        default=\"POST\", description=\"The webhook request method. Defaults to `POST`.\"\n    )\n\n    url: SecretStr = Field(\n        default=...,\n        title=\"Webhook URL\",\n        description=\"The webhook URL.\",\n        examples=[\"https://hooks.slack.com/XXX\"],\n    )\n\n    headers: SecretDict = Field(\n        default_factory=lambda: SecretDict(dict()),\n        title=\"Webhook Headers\",\n        description=\"A dictionary of headers to send with the webhook request.\",\n    )\n\n    def block_initialization(self):\n        self._client = AsyncClient(transport=_http_transport)\n\n    async def call(self, payload: Optional[dict] = None) -> Response:\n        \"\"\"\n        Call the webhook.\n\n        Args:\n            payload: an optional payload to send when calling the webhook.\n        \"\"\"\n        async with self._client:\n            return await self._client.request(\n                method=self.method,\n                url=self.url.get_secret_value(),\n                headers=self.headers.get_secret_value(),\n                json=payload,\n            )\n
","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/blocks/webhook/#prefect.blocks.webhook.Webhook.call","title":"call async","text":"

Call the webhook.

Parameters:

payload (Optional[dict]): an optional payload to send when calling the webhook. Default: None

Source code in prefect/blocks/webhook.py
async def call(self, payload: Optional[dict] = None) -> Response:\n    \"\"\"\n    Call the webhook.\n\n    Args:\n        payload: an optional payload to send when calling the webhook.\n    \"\"\"\n    async with self._client:\n        return await self._client.request(\n            method=self.method,\n            url=self.url.get_secret_value(),\n            headers=self.headers.get_secret_value(),\n            json=payload,\n        )\n
","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/cli/agent/","title":"agent","text":"","tags":["Python API","agents","CLI"]},{"location":"api-ref/prefect/cli/agent/#prefect.cli.agent","title":"prefect.cli.agent","text":"

Command line interface for working with agent services

","tags":["Python API","agents","CLI"]},{"location":"api-ref/prefect/cli/agent/#prefect.cli.agent.start","title":"start async","text":"

Start an agent process to poll one or more work queues for flow runs.
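
For example (a hedged illustration; the work queue name is a placeholder), an agent that polls a single work queue can be started with:

$ prefect agent start -q \"my-queue\"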

Source code in prefect/cli/agent.py
@agent_app.command()\nasync def start(\n    # deprecated main argument\n    work_queue: str = typer.Argument(\n        None,\n        show_default=False,\n        help=\"DEPRECATED: A work queue name or ID\",\n    ),\n    work_queues: List[str] = typer.Option(\n        None,\n        \"-q\",\n        \"--work-queue\",\n        help=\"One or more work queue names for the agent to pull from.\",\n    ),\n    work_queue_prefix: List[str] = typer.Option(\n        None,\n        \"-m\",\n        \"--match\",\n        help=(\n            \"Dynamically matches work queue names with the specified prefix for the\"\n            \" agent to pull from,for example `dev-` will match all work queues with a\"\n            \" name that starts with `dev-`\"\n        ),\n    ),\n    work_pool_name: str = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"A work pool name for the agent to pull from.\",\n    ),\n    hide_welcome: bool = typer.Option(False, \"--hide-welcome\"),\n    api: str = SettingsOption(PREFECT_API_URL),\n    run_once: bool = typer.Option(\n        False, help=\"Run the agent loop once, instead of forever.\"\n    ),\n    prefetch_seconds: int = SettingsOption(PREFECT_AGENT_PREFETCH_SECONDS),\n    # deprecated tags\n    tags: List[str] = typer.Option(\n        None,\n        \"-t\",\n        \"--tag\",\n        help=(\n            \"DEPRECATED: One or more optional tags that will be used to create a work\"\n            \" queue. This option will be removed on 2023-02-23.\"\n        ),\n    ),\n    limit: int = typer.Option(\n        None,\n        \"-l\",\n        \"--limit\",\n        help=\"Maximum number of flow runs to start simultaneously.\",\n    ),\n):\n    \"\"\"\n    Start an agent process to poll one or more work queues for flow runs.\n    \"\"\"\n    work_queues = work_queues or []\n\n    if work_queue is not None:\n        # try to treat the work_queue as a UUID\n        try:\n            async with get_client() as client:\n                q = await client.read_work_queue(UUID(work_queue))\n                work_queue = q.name\n        # otherwise treat it as a string name\n        except (TypeError, ValueError):\n            pass\n        work_queues.append(work_queue)\n        app.console.print(\n            (\n                \"Agents now support multiple work queues. Instead of passing a single\"\n                \" argument, provide work queue names with the `-q` or `--work-queue`\"\n                f\" flag: `prefect agent start -q {work_queue}`\\n\"\n            ),\n            style=\"blue\",\n        )\n    if work_pool_name:\n        is_queues_paused = await _check_work_queues_paused(\n            work_pool_name=work_pool_name,\n            work_queues=work_queues,\n        )\n        if is_queues_paused:\n            queue_scope = (\n                \"The 'default' work queue\"\n                if not work_queues\n                else \"Specified work queue(s)\"\n            )\n            app.console.print(\n                (\n                    f\"{queue_scope} in the work pool {work_pool_name!r} is currently\"\n                    \" paused. 
This agent will not execute any flow runs until the work\"\n                    \" queue(s) are unpaused.\"\n                ),\n                style=\"yellow\",\n            )\n\n    if not work_queues and not tags and not work_queue_prefix and not work_pool_name:\n        exit_with_error(\"No work queues provided!\", style=\"red\")\n    elif bool(work_queues) + bool(tags) + bool(work_queue_prefix) > 1:\n        exit_with_error(\n            \"Only one of `work_queues`, `match`, or `tags` can be provided.\",\n            style=\"red\",\n        )\n    if work_pool_name and tags:\n        exit_with_error(\n            \"`tag` and `pool` options cannot be used together.\", style=\"red\"\n        )\n\n    if tags:\n        work_queue_name = f\"Agent queue {'-'.join(sorted(tags))}\"\n        app.console.print(\n            (\n                \"`tags` are deprecated. For backwards-compatibility with old versions\"\n                \" of Prefect, this agent will create a work queue named\"\n                f\" `{work_queue_name}` that uses legacy tag-based matching. This option\"\n                \" will be removed on 2023-02-23.\"\n            ),\n            style=\"red\",\n        )\n\n        async with get_client() as client:\n            try:\n                work_queue = await client.read_work_queue_by_name(work_queue_name)\n                if work_queue.filter is None:\n                    # ensure the work queue has legacy (deprecated) tag-based behavior\n                    await client.update_work_queue(filter=dict(tags=tags))\n            except ObjectNotFound:\n                # if the work queue doesn't already exist, we create it with tags\n                # to enable legacy (deprecated) tag-matching behavior\n                await client.create_work_queue(name=work_queue_name, tags=tags)\n\n        work_queues = [work_queue_name]\n\n    if not hide_welcome:\n        if api:\n            app.console.print(\n                f\"Starting v{prefect.__version__} agent connected to {api}...\"\n            )\n        else:\n            app.console.print(\n                f\"Starting v{prefect.__version__} agent with ephemeral API...\"\n            )\n\n    agent_process_id = os.getpid()\n    setup_signal_handlers_agent(\n        agent_process_id, \"the Prefect agent\", app.console.print\n    )\n\n    async with PrefectAgent(\n        work_queues=work_queues,\n        work_queue_prefix=work_queue_prefix,\n        work_pool_name=work_pool_name,\n        prefetch_seconds=prefetch_seconds,\n        limit=limit,\n    ) as agent:\n        if not hide_welcome:\n            app.console.print(ascii_name)\n            if work_pool_name:\n                app.console.print(\n                    \"Agent started! Looking for work from \"\n                    f\"work pool '{work_pool_name}'...\"\n                )\n            elif work_queue_prefix:\n                app.console.print(\n                    \"Agent started! Looking for work from \"\n                    f\"queue(s) that start with the prefix: {work_queue_prefix}...\"\n                )\n            else:\n                app.console.print(\n                    \"Agent started! 
Looking for work from \"\n                    f\"queue(s): {', '.join(work_queues)}...\"\n                )\n\n        async with anyio.create_task_group() as tg:\n            tg.start_soon(\n                partial(\n                    critical_service_loop,\n                    agent.get_and_submit_flow_runs,\n                    PREFECT_AGENT_QUERY_INTERVAL.value(),\n                    printer=app.console.print,\n                    run_once=run_once,\n                    jitter_range=0.3,\n                    backoff=4,  # Up to ~1 minute interval during backoff\n                )\n            )\n\n            tg.start_soon(\n                partial(\n                    critical_service_loop,\n                    agent.check_for_cancelled_flow_runs,\n                    PREFECT_AGENT_QUERY_INTERVAL.value() * 2,\n                    printer=app.console.print,\n                    run_once=run_once,\n                    jitter_range=0.3,\n                    backoff=4,\n                )\n            )\n\n    app.console.print(\"Agent stopped!\")\n
","tags":["Python API","agents","CLI"]},{"location":"api-ref/prefect/cli/artifact/","title":"artifact","text":"","tags":["Python API","artifacts","CLI"]},{"location":"api-ref/prefect/cli/artifact/#prefect.cli.artifact","title":"prefect.cli.artifact","text":"","tags":["Python API","artifacts","CLI"]},{"location":"api-ref/prefect/cli/artifact/#prefect.cli.artifact.delete","title":"delete async","text":"

Delete an artifact.

Parameters:

key (Optional[str]): the key of the artifact to delete. Default: Argument(None, help='The key of the artifact to delete.')

Examples:

$ prefect artifact delete \"my-artifact\"

Source code in prefect/cli/artifact.py
@artifact_app.command(\"delete\")\nasync def delete(\n    key: Optional[str] = typer.Argument(\n        None, help=\"The key of the artifact to delete.\"\n    ),\n    artifact_id: Optional[str] = typer.Option(\n        None, \"--id\", help=\"The ID of the artifact to delete.\"\n    ),\n):\n    \"\"\"\n    Delete an artifact.\n\n    Arguments:\n        key: the key of the artifact to delete\n\n    Examples:\n        $ prefect artifact delete \"my-artifact\"\n    \"\"\"\n    if key and artifact_id:\n        exit_with_error(\"Please provide either a key or an artifact_id but not both.\")\n\n    async with get_client() as client:\n        if artifact_id is not None:\n            try:\n                confirm_delete = typer.confirm(\n                    (\n                        \"Are you sure you want to delete artifact with id\"\n                        f\" {artifact_id!r}?\"\n                    ),\n                    default=False,\n                )\n                if not confirm_delete:\n                    exit_with_error(\"Deletion aborted.\")\n\n                await client.delete_artifact(artifact_id)\n                exit_with_success(f\"Deleted artifact with id {artifact_id!r}.\")\n            except ObjectNotFound:\n                exit_with_error(f\"Artifact with id {artifact_id!r} not found!\")\n\n        elif key is not None:\n            artifacts = await client.read_artifacts(\n                artifact_filter=ArtifactFilter(key=ArtifactFilterKey(any_=[key])),\n            )\n            if not artifacts:\n                exit_with_error(\n                    f\"Artifact with key {key!r} not found. You can also specify an\"\n                    \" artifact id with the --id flag.\"\n                )\n\n            confirm_delete = typer.confirm(\n                (\n                    f\"Are you sure you want to delete {len(artifacts)} artifact(s) with\"\n                    f\" key {key!r}?\"\n                ),\n                default=False,\n            )\n            if not confirm_delete:\n                exit_with_error(\"Deletion aborted.\")\n\n            for a in artifacts:\n                await client.delete_artifact(a.id)\n\n            exit_with_success(f\"Deleted {len(artifacts)} artifact(s) with key {key!r}.\")\n\n        else:\n            exit_with_error(\"Please provide a key or an artifact_id.\")\n
","tags":["Python API","artifacts","CLI"]},{"location":"api-ref/prefect/cli/artifact/#prefect.cli.artifact.inspect","title":"inspect async","text":"
View details about an artifact.\n\nArguments:\n    key: the key of the artifact to inspect\n\nExamples:\n    $ prefect artifact inspect \"my-artifact\"\n   [\n    {\n        'id': 'ba1d67be-0bd7-452e-8110-247fe5e6d8cc',\n        'created': '2023-03-21T21:40:09.895910+00:00',\n        'updated': '2023-03-21T21:40:09.895910+00:00',\n        'key': 'my-artifact',\n        'type': 'markdown',\n        'description': None,\n        'data': 'my markdown',\n        'metadata_': None,\n        'flow_run_id': '8dc54b6f-6e24-4586-a05c-e98c6490cb98',\n        'task_run_id': None\n    },\n    {\n        'id': '57f235b5-2576-45a5-bd93-c829c2900966',\n        'created': '2023-03-27T23:16:15.536434+00:00',\n        'updated': '2023-03-27T23:16:15.536434+00:00',\n        'key': 'my-artifact',\n        'type': 'markdown',\n        'description': 'my-artifact-description',\n        'data': 'my markdown',\n        'metadata_': None,\n        'flow_run_id': 'ffa91051-f249-48c1-ae0f-4754fcb7eb29',\n        'task_run_id': None\n    }\n

]

Source code in prefect/cli/artifact.py
@artifact_app.command(\"inspect\")\nasync def inspect(\n    key: str,\n    limit: int = typer.Option(\n        10,\n        \"--limit\",\n        help=\"The maximum number of artifacts to return.\",\n    ),\n):\n    \"\"\"\n        View details about an artifact.\n\n        Arguments:\n            key: the key of the artifact to inspect\n\n        Examples:\n            $ prefect artifact inspect \"my-artifact\"\n           [\n            {\n                'id': 'ba1d67be-0bd7-452e-8110-247fe5e6d8cc',\n                'created': '2023-03-21T21:40:09.895910+00:00',\n                'updated': '2023-03-21T21:40:09.895910+00:00',\n                'key': 'my-artifact',\n                'type': 'markdown',\n                'description': None,\n                'data': 'my markdown',\n                'metadata_': None,\n                'flow_run_id': '8dc54b6f-6e24-4586-a05c-e98c6490cb98',\n                'task_run_id': None\n            },\n            {\n                'id': '57f235b5-2576-45a5-bd93-c829c2900966',\n                'created': '2023-03-27T23:16:15.536434+00:00',\n                'updated': '2023-03-27T23:16:15.536434+00:00',\n                'key': 'my-artifact',\n                'type': 'markdown',\n                'description': 'my-artifact-description',\n                'data': 'my markdown',\n                'metadata_': None,\n                'flow_run_id': 'ffa91051-f249-48c1-ae0f-4754fcb7eb29',\n                'task_run_id': None\n            }\n    ]\n    \"\"\"\n\n    async with get_client() as client:\n        artifacts = await client.read_artifacts(\n            limit=limit,\n            sort=ArtifactSort.UPDATED_DESC,\n            artifact_filter=ArtifactFilter(key=ArtifactFilterKey(any_=[key])),\n        )\n        if not artifacts:\n            exit_with_error(f\"Artifact {key!r} not found.\")\n\n        artifacts = [a.dict(json_compatible=True) for a in artifacts]\n\n        app.console.print(Pretty(artifacts))\n
","tags":["Python API","artifacts","CLI"]},{"location":"api-ref/prefect/cli/artifact/#prefect.cli.artifact.list_artifacts","title":"list_artifacts async","text":"

List artifacts.
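
For example (flags taken from the option definitions below; values are illustrative), list every version of each artifact rather than only the latest:

$ prefect artifact ls --all --limit 50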

Source code in prefect/cli/artifact.py
@artifact_app.command(\"ls\")\nasync def list_artifacts(\n    limit: int = typer.Option(\n        100,\n        \"--limit\",\n        help=\"The maximum number of artifacts to return.\",\n    ),\n    all: bool = typer.Option(\n        False,\n        \"--all\",\n        \"-a\",\n        help=\"Whether or not to only return the latest version of each artifact.\",\n    ),\n):\n    \"\"\"\n    List artifacts.\n    \"\"\"\n    table = Table(\n        title=\"Artifacts\",\n        caption=\"List Artifacts using `prefect artifact ls`\",\n        show_header=True,\n    )\n\n    table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Key\", style=\"blue\", no_wrap=True)\n    table.add_column(\"Type\", style=\"blue\", no_wrap=True)\n    table.add_column(\"Updated\", style=\"blue\", no_wrap=True)\n\n    async with get_client() as client:\n        if all:\n            artifacts = await client.read_artifacts(\n                sort=ArtifactSort.KEY_ASC,\n                limit=limit,\n            )\n\n            for artifact in sorted(artifacts, key=lambda x: f\"{x.key}\"):\n                table.add_row(\n                    str(artifact.id),\n                    artifact.key,\n                    artifact.type,\n                    pendulum.instance(artifact.updated).diff_for_humans(),\n                )\n\n        else:\n            artifacts = await client.read_latest_artifacts(\n                sort=ArtifactCollectionSort.KEY_ASC,\n                limit=limit,\n            )\n\n            for artifact in sorted(artifacts, key=lambda x: f\"{x.key}\"):\n                table.add_row(\n                    str(artifact.latest_id),\n                    artifact.key,\n                    artifact.type,\n                    pendulum.instance(artifact.updated).diff_for_humans(),\n                )\n\n        app.console.print(table)\n
","tags":["Python API","artifacts","CLI"]},{"location":"api-ref/prefect/cli/block/","title":"block","text":"","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block","title":"prefect.cli.block","text":"

Command line interface for working with blocks.

","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.block_create","title":"block_create async","text":"

Generate a link to the Prefect UI to create a block.
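
For example (assuming the built-in json block type is registered), generate a creation link for a JSON block:

$ prefect block create json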

Source code in prefect/cli/block.py
@blocks_app.command(\"create\")\nasync def block_create(\n    block_type_slug: str = typer.Argument(\n        ...,\n        help=\"A block type slug. View available types with: prefect block type ls\",\n        show_default=False,\n    ),\n):\n    \"\"\"\n    Generate a link to the Prefect UI to create a block.\n    \"\"\"\n    async with get_client() as client:\n        try:\n            block_type = await client.read_block_type_by_slug(block_type_slug)\n        except ObjectNotFound:\n            app.console.print(f\"[red]Block type {block_type_slug!r} not found![/red]\")\n            block_types = await client.read_block_types()\n            slugs = {block_type.slug for block_type in block_types}\n            app.console.print(f\"Available block types: {', '.join(slugs)}\")\n            raise typer.Exit(1)\n\n        if not PREFECT_UI_URL:\n            exit_with_error(\n                \"Prefect must be configured to use a hosted Prefect server or \"\n                \"Prefect Cloud to display the Prefect UI\"\n            )\n\n        block_link = f\"{PREFECT_UI_URL.value()}/blocks/catalog/{block_type.slug}/create\"\n        app.console.print(\n            f\"Create a {block_type_slug} block: {block_link}\",\n        )\n
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.block_delete","title":"block_delete async","text":"

Delete a configured block.
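
For example, using the slug format from the argument help ('<BLOCK_TYPE_SLUG>/<BLOCK_NAME>'):

$ prefect block delete json/my-json-block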

Source code in prefect/cli/block.py
@blocks_app.command(\"delete\")\nasync def block_delete(\n    slug: Optional[str] = typer.Argument(\n        None, help=\"A block slug. Formatted as '<BLOCK_TYPE_SLUG>/<BLOCK_NAME>'\"\n    ),\n    block_id: Optional[str] = typer.Option(None, \"--id\", help=\"A block id.\"),\n):\n    \"\"\"\n    Delete a configured block.\n    \"\"\"\n    async with get_client() as client:\n        if slug is None and block_id is not None:\n            try:\n                await client.delete_block_document(block_id)\n                exit_with_success(f\"Deleted Block '{block_id}'.\")\n            except ObjectNotFound:\n                exit_with_error(f\"Deployment {block_id!r} not found!\")\n        elif slug is not None:\n            if \"/\" not in slug:\n                exit_with_error(\n                    f\"{slug!r} is not valid. Slug must contain a '/', e.g. 'json/my-json-block'\"\n                )\n            block_type_slug, block_document_name = slug.split(\"/\")\n            try:\n                block_document = await client.read_block_document_by_name(\n                    block_document_name, block_type_slug, include_secrets=False\n                )\n                await client.delete_block_document(block_document.id)\n                exit_with_success(f\"Deleted Block '{slug}'.\")\n            except ObjectNotFound:\n                exit_with_error(f\"Block {slug!r} not found!\")\n        else:\n            exit_with_error(\"Must provide a block slug or id\")\n
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.block_inspect","title":"block_inspect async","text":"

Displays details about a configured block.

Source code in prefect/cli/block.py
@blocks_app.command(\"inspect\")\nasync def block_inspect(\n    slug: Optional[str] = typer.Argument(\n        None, help=\"A Block slug: <BLOCK_TYPE_SLUG>/<BLOCK_NAME>\"\n    ),\n    block_id: Optional[str] = typer.Option(\n        None, \"--id\", help=\"A Block id to search for if no slug is given\"\n    ),\n):\n    \"\"\"\n    Displays details about a configured block.\n    \"\"\"\n    async with get_client() as client:\n        if slug is None and block_id is not None:\n            try:\n                block_document = await client.read_block_document(\n                    block_id, include_secrets=False\n                )\n            except ObjectNotFound:\n                exit_with_error(f\"Deployment {block_id!r} not found!\")\n        elif slug is not None:\n            if \"/\" not in slug:\n                exit_with_error(\n                    f\"{slug!r} is not valid. Slug must contain a '/', e.g. 'json/my-json-block'\"\n                )\n            block_type_slug, block_document_name = slug.split(\"/\")\n            try:\n                block_document = await client.read_block_document_by_name(\n                    block_document_name, block_type_slug, include_secrets=False\n                )\n            except ObjectNotFound:\n                exit_with_error(f\"Block {slug!r} not found!\")\n        else:\n            exit_with_error(\"Must provide a block slug or id\")\n        app.console.print(display_block(block_document))\n
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.block_ls","title":"block_ls async","text":"

View all configured blocks.

Source code in prefect/cli/block.py
@blocks_app.command(\"ls\")\nasync def block_ls():\n    \"\"\"\n    View all configured blocks.\n    \"\"\"\n    async with get_client() as client:\n        blocks = await client.read_block_documents()\n\n    table = Table(\n        title=\"Blocks\", caption=\"List Block Types using `prefect block type ls`\"\n    )\n    table.add_column(\"ID\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Type\", style=\"blue\", no_wrap=True)\n    table.add_column(\"Name\", style=\"blue\", no_wrap=True)\n    table.add_column(\"Slug\", style=\"blue\", no_wrap=True)\n\n    for block in sorted(blocks, key=lambda x: f\"{x.block_type.slug}/{x.name}\"):\n        table.add_row(\n            str(block.id),\n            block.block_type.name,\n            str(block.name),\n            f\"{block.block_type.slug}/{block.name}\",\n        )\n\n    app.console.print(table)\n
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.blocktype_delete","title":"blocktype_delete async","text":"

Delete an unprotected Block Type.

Source code in prefect/cli/block.py
@blocktypes_app.command(\"delete\")\nasync def blocktype_delete(\n    slug: str = typer.Argument(..., help=\"A Block type slug\"),\n):\n    \"\"\"\n    Delete an unprotected Block Type.\n    \"\"\"\n    async with get_client() as client:\n        try:\n            block_type = await client.read_block_type_by_slug(slug)\n            await client.delete_block_type(block_type.id)\n            exit_with_success(f\"Deleted Block Type '{slug}'.\")\n        except ObjectNotFound:\n            exit_with_error(f\"Block Type {slug!r} not found!\")\n        except ProtectedBlockError:\n            exit_with_error(f\"Block Type {slug!r} is a protected block!\")\n        except PrefectHTTPStatusError:\n            exit_with_error(f\"Cannot delete Block Type {slug!r}!\")\n
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.blocktype_inspect","title":"blocktype_inspect async","text":"

Display details about a block type.

Source code in prefect/cli/block.py
@blocktypes_app.command(\"inspect\")\nasync def blocktype_inspect(\n    slug: str = typer.Argument(..., help=\"A block type slug\"),\n):\n    \"\"\"\n    Display details about a block type.\n    \"\"\"\n    async with get_client() as client:\n        try:\n            block_type = await client.read_block_type_by_slug(slug)\n        except ObjectNotFound:\n            exit_with_error(f\"Block type {slug!r} not found!\")\n\n        app.console.print(display_block_type(block_type))\n
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.list_types","title":"list_types async","text":"

List all block types.

Source code in prefect/cli/block.py
@blocktypes_app.command(\"ls\")\nasync def list_types():\n    \"\"\"\n    List all block types.\n    \"\"\"\n    async with get_client() as client:\n        block_types = await client.read_block_types()\n\n    table = Table(\n        title=\"Block Types\",\n        show_lines=True,\n    )\n\n    table.add_column(\"Block Type Slug\", style=\"italic cyan\", no_wrap=True)\n    table.add_column(\"Description\", style=\"blue\", no_wrap=False, justify=\"left\")\n    table.add_column(\n        \"Generate creation link\", style=\"italic cyan\", no_wrap=False, justify=\"left\"\n    )\n\n    for blocktype in sorted(block_types, key=lambda x: x.name):\n        table.add_row(\n            str(blocktype.slug),\n            (\n                str(blocktype.description.splitlines()[0].partition(\".\")[0])\n                if blocktype.description is not None\n                else \"\"\n            ),\n            f\"prefect block create {blocktype.slug}\",\n        )\n\n    app.console.print(table)\n
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.register","title":"register async","text":"

Register block types within a module or file.

This makes the blocks available for configuration via the UI. If a block type has already been registered, its registration will be updated to match the block's current definition.

Examples:

Register block types in a Python module: $ prefect block register -m prefect_aws.credentials

Register block types in a .py file: $ prefect block register -f my_blocks.py

Source code in prefect/cli/block.py
@blocks_app.command()\nasync def register(\n    module_name: Optional[str] = typer.Option(\n        None,\n        \"--module\",\n        \"-m\",\n        help=\"Python module containing block types to be registered\",\n    ),\n    file_path: Optional[Path] = typer.Option(\n        None,\n        \"--file\",\n        \"-f\",\n        help=\"Path to .py file containing block types to be registered\",\n    ),\n):\n    \"\"\"\n    Register blocks types within a module or file.\n\n    This makes the blocks available for configuration via the UI.\n    If a block type has already been registered, its registration will be updated to\n    match the block's current definition.\n\n    \\b\n    Examples:\n        \\b\n        Register block types in a Python module:\n        $ prefect block register -m prefect_aws.credentials\n        \\b\n        Register block types in a .py file:\n        $ prefect block register -f my_blocks.py\n    \"\"\"\n    # Handles if both options are specified or if neither are specified\n    if not (bool(file_path) ^ bool(module_name)):\n        exit_with_error(\n            \"Please specify either a module or a file containing blocks to be\"\n            \" registered, but not both.\"\n        )\n\n    if module_name:\n        try:\n            imported_module = import_module(name=module_name)\n        except ModuleNotFoundError:\n            exit_with_error(\n                f\"Unable to load {module_name}. Please make sure the module is \"\n                \"installed in your current environment.\"\n            )\n\n    if file_path:\n        if file_path.suffix != \".py\":\n            exit_with_error(\n                f\"{file_path} is not a .py file. Please specify a \"\n                \".py that contains blocks to be registered.\"\n            )\n        try:\n            imported_module = await run_sync_in_worker_thread(\n                load_script_as_module, str(file_path)\n            )\n        except ScriptError as exc:\n            app.console.print(exc)\n            app.console.print(exception_traceback(exc.user_exc))\n            exit_with_error(\n                f\"Unable to load file at {file_path}. Please make sure the file path \"\n                \"is correct and the file contains valid Python.\"\n            )\n\n    registered_blocks = await _register_blocks_in_module(imported_module)\n    number_of_registered_blocks = len(registered_blocks)\n    block_text = \"block\" if 0 < number_of_registered_blocks < 2 else \"blocks\"\n    app.console.print(\n        f\"[green]Successfully registered {number_of_registered_blocks} {block_text}\\n\"\n    )\n    app.console.print(_build_registered_blocks_table(registered_blocks))\n    msg = (\n        \"\\n To configure the newly registered blocks, \"\n        \"go to the Blocks page in the Prefect UI.\\n\"\n    )\n\n    ui_url = PREFECT_UI_URL.value()\n    if ui_url is not None:\n        block_catalog_url = f\"{ui_url}/blocks/catalog\"\n        msg = f\"{msg.rstrip().rstrip('.')}: {block_catalog_url}\\n\"\n\n    app.console.print(msg)\n
","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/cloud-webhook/","title":"Cloud webhook","text":"","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook","title":"prefect.cli.cloud.webhook","text":"

Command line interface for working with webhooks

","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.create","title":"create async","text":"

Create a new Cloud webhook
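
A hedged example (the webhook name and description are placeholders; the template shape follows the minimum attributes described in the command's error message):

$ prefect cloud webhook create my-webhook -d \"An example webhook\" -t '{ \"event\": \"your.event.name\", \"resource\": { \"prefect.resource.id\": \"your.resource.id\" } }'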

Source code in prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def create(\n    webhook_name: str,\n    description: str = typer.Option(\n        \"\", \"--description\", \"-d\", help=\"Description of the webhook\"\n    ),\n    template: str = typer.Option(\n        None, \"--template\", \"-t\", help=\"Jinja2 template expression\"\n    ),\n):\n    \"\"\"\n    Create a new Cloud webhook\n    \"\"\"\n    if not template:\n        exit_with_error(\n            \"Please provide a Jinja2 template expression in the --template flag \\nwhich\"\n            ' should define (at minimum) the following attributes: \\n{ \"event\":'\n            ' \"your.event.name\", \"resource\": { \"prefect.resource.id\":'\n            ' \"your.resource.id\" } }'\n            \" \\nhttps://docs.prefect.io/latest/cloud/webhooks/#webhook-templates\"\n        )\n\n    confirm_logged_in()\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        response = await client.request(\n            \"POST\",\n            \"/webhooks/\",\n            json={\n                \"name\": webhook_name,\n                \"description\": description,\n                \"template\": template,\n            },\n        )\n        app.console.print(f'Successfully created webhook {response[\"name\"]}')\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.delete","title":"delete async","text":"

Delete an existing Cloud webhook

Source code in prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def delete(webhook_id: UUID):\n    \"\"\"\n    Delete an existing Cloud webhook\n    \"\"\"\n    confirm_logged_in()\n\n    confirm_delete = typer.confirm(\n        \"Are you sure you want to delete it? This cannot be undone.\"\n    )\n\n    if not confirm_delete:\n        return\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        await client.request(\"DELETE\", f\"/webhooks/{webhook_id}\")\n        app.console.print(f\"Successfully deleted webhook {webhook_id}\")\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.get","title":"get async","text":"

Retrieve a webhook by ID.

Source code in prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def get(webhook_id: UUID):\n    \"\"\"\n    Retrieve a webhook by ID.\n    \"\"\"\n    confirm_logged_in()\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        webhook = await client.request(\"GET\", f\"/webhooks/{webhook_id}\")\n        display_table = _render_webhooks_into_table([webhook])\n        app.console.print(display_table)\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.ls","title":"ls async","text":"

Fetch and list all webhooks in your workspace

Source code in prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def ls():\n    \"\"\"\n    Fetch and list all webhooks in your workspace\n    \"\"\"\n    confirm_logged_in()\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        retrieved_webhooks = await client.request(\"POST\", \"/webhooks/filter\")\n        display_table = _render_webhooks_into_table(retrieved_webhooks)\n        app.console.print(display_table)\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.rotate","title":"rotate async","text":"

Rotate the URL for an existing Cloud webhook, in case it has been compromised.

Source code in prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def rotate(webhook_id: UUID):\n    \"\"\"\n    Rotate url for an existing Cloud webhook, in case it has been compromised\n    \"\"\"\n    confirm_logged_in()\n\n    confirm_rotate = typer.confirm(\n        \"Are you sure you want to rotate? This will invalidate the old URL.\"\n    )\n\n    if not confirm_rotate:\n        return\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        response = await client.request(\"POST\", f\"/webhooks/{webhook_id}/rotate\")\n        app.console.print(f'Successfully rotated webhook URL to {response[\"slug\"]}')\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.toggle","title":"toggle async","text":"

Toggle the enabled status of an existing Cloud webhook

Source code in prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def toggle(\n    webhook_id: UUID,\n):\n    \"\"\"\n    Toggle the enabled status of an existing Cloud webhook\n    \"\"\"\n    confirm_logged_in()\n\n    status_lookup = {True: \"enabled\", False: \"disabled\"}\n\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        response = await client.request(\"GET\", f\"/webhooks/{webhook_id}\")\n        current_status = response[\"enabled\"]\n        new_status = not current_status\n\n        await client.request(\n            \"PATCH\", f\"/webhooks/{webhook_id}\", json={\"enabled\": new_status}\n        )\n        app.console.print(f\"Webhook is now {status_lookup[new_status]}\")\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.update","title":"update async","text":"

Partially update an existing Cloud webhook

Source code in prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def update(\n    webhook_id: UUID,\n    webhook_name: str = typer.Option(None, \"--name\", \"-n\", help=\"Webhook name\"),\n    description: str = typer.Option(\n        None, \"--description\", \"-d\", help=\"Description of the webhook\"\n    ),\n    template: str = typer.Option(\n        None, \"--template\", \"-t\", help=\"Jinja2 template expression\"\n    ),\n):\n    \"\"\"\n    Partially update an existing Cloud webhook\n    \"\"\"\n    confirm_logged_in()\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        response = await client.request(\"GET\", f\"/webhooks/{webhook_id}\")\n        update_payload = {\n            \"name\": webhook_name or response[\"name\"],\n            \"description\": description or response[\"description\"],\n            \"template\": template or response[\"template\"],\n        }\n\n        await client.request(\"PUT\", f\"/webhooks/{webhook_id}\", json=update_payload)\n        app.console.print(f\"Successfully updated webhook {webhook_id}\")\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud/","title":"cloud","text":"","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud","title":"prefect.cli.cloud","text":"

Command line interface for interacting with Prefect Cloud

","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.login_api","title":"login_api = FastAPI(lifespan=lifespan) module-attribute","text":"

This small API server is used for data transmission for browser-based login.

","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.check_key_is_valid_for_login","title":"check_key_is_valid_for_login async","text":"

Attempt to use a key to see if it is valid

Source code in prefect/cli/cloud/__init__.py
async def check_key_is_valid_for_login(key: str):\n    \"\"\"\n    Attempt to use a key to see if it is valid\n    \"\"\"\n    async with get_cloud_client(api_key=key) as client:\n        try:\n            await client.read_workspaces()\n            return True\n        except CloudUnauthorizedError:\n            return False\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.login","title":"login async","text":"

Log in to Prefect Cloud. Creates a new profile configured to use the specified PREFECT_API_KEY. Uses a previously configured profile if it exists.
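
In a non-interactive terminal both flags must be supplied, for example (the key and workspace handle are placeholders):

$ prefect cloud login --key \"pnu_XXXXXXXX\" --workspace \"<account_handle>/<workspace_handle>\"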

Source code in prefect/cli/cloud/__init__.py
@cloud_app.command()\nasync def login(\n    key: Optional[str] = typer.Option(\n        None, \"--key\", \"-k\", help=\"API Key to authenticate with Prefect\"\n    ),\n    workspace_handle: Optional[str] = typer.Option(\n        None,\n        \"--workspace\",\n        \"-w\",\n        help=(\n            \"Full handle of workspace, in format '<account_handle>/<workspace_handle>'\"\n        ),\n    ),\n):\n    \"\"\"\n    Log in to Prefect Cloud.\n    Creates a new profile configured to use the specified PREFECT_API_KEY.\n    Uses a previously configured profile if it exists.\n    \"\"\"\n    if not is_interactive() and (not key or not workspace_handle):\n        exit_with_error(\n            \"When not using an interactive terminal, you must supply a `--key` and\"\n            \" `--workspace`.\"\n        )\n\n    profiles = load_profiles()\n    current_profile = get_settings_context().profile\n    env_var_api_key = PREFECT_API_KEY.value()\n    selected_workspace = None\n\n    if env_var_api_key and key and env_var_api_key != key:\n        exit_with_error(\n            \"Cannot log in with a key when a different PREFECT_API_KEY is present as an\"\n            \" environment variable that will override it.\"\n        )\n\n    if env_var_api_key and env_var_api_key == key:\n        is_valid_key = await check_key_is_valid_for_login(key)\n        is_correct_key_format = key.startswith(\"pnu_\") or key.startswith(\"pnb_\")\n        if not is_valid_key:\n            help_message = \"Please ensure your credentials are correct and unexpired.\"\n            if not is_correct_key_format:\n                help_message = \"Your key is not in our expected format.\"\n            exit_with_error(\n                f\"Unable to authenticate with Prefect Cloud. {help_message}\"\n            )\n\n    already_logged_in_profiles = []\n    for name, profile in profiles.items():\n        profile_key = profile.settings.get(PREFECT_API_KEY)\n        if (\n            # If a key is provided, only show profiles with the same key\n            (key and profile_key == key)\n            # Otherwise, show all profiles with a key set\n            or (not key and profile_key is not None)\n            # Check that the key is usable to avoid suggesting unauthenticated profiles\n            and await check_key_is_valid_for_login(profile_key)\n        ):\n            already_logged_in_profiles.append(name)\n\n    current_profile_is_logged_in = current_profile.name in already_logged_in_profiles\n\n    if current_profile_is_logged_in:\n        app.console.print(\"It looks like you're already authenticated on this profile.\")\n        if is_interactive():\n            should_reauth = typer.confirm(\n                \"? Would you like to reauthenticate?\", default=False\n            )\n        else:\n            should_reauth = True\n\n        if not should_reauth:\n            app.console.print(\"Using the existing authentication on this profile.\")\n            key = PREFECT_API_KEY.value()\n\n    elif already_logged_in_profiles:\n        app.console.print(\n            \"It looks like you're already authenticated with another profile.\"\n        )\n        if typer.confirm(\n            \"? 
Would you like to switch profiles?\",\n            default=True,\n        ):\n            profile_name = prompt_select_from_list(\n                app.console,\n                \"Which authenticated profile would you like to switch to?\",\n                already_logged_in_profiles,\n            )\n\n            profiles.set_active(profile_name)\n            save_profiles(profiles)\n            exit_with_success(f\"Switched to authenticated profile {profile_name!r}.\")\n\n    if not key:\n        choice = prompt_select_from_list(\n            app.console,\n            \"How would you like to authenticate?\",\n            [\n                (\"browser\", \"Log in with a web browser\"),\n                (\"key\", \"Paste an API key\"),\n            ],\n        )\n\n        if choice == \"key\":\n            key = typer.prompt(\"Paste your API key\", hide_input=True)\n        elif choice == \"browser\":\n            key = await login_with_browser()\n\n    async with get_cloud_client(api_key=key) as client:\n        try:\n            workspaces = await client.read_workspaces()\n            current_workspace = get_current_workspace(workspaces)\n            prompt_switch_workspace = False\n        except CloudUnauthorizedError:\n            if key.startswith(\"pcu\"):\n                help_message = (\n                    \"It looks like you're using API key from Cloud 1\"\n                    \" (https://cloud.prefect.io). Make sure that you generate API key\"\n                    \" using Cloud 2 (https://app.prefect.cloud)\"\n                )\n            elif not key.startswith(\"pnu_\") and not key.startswith(\"pnb_\"):\n                help_message = (\n                    \"Your key is not in our expected format: 'pnu_' or 'pnb_'.\"\n                )\n            else:\n                help_message = (\n                    \"Please ensure your credentials are correct and unexpired.\"\n                )\n            exit_with_error(\n                f\"Unable to authenticate with Prefect Cloud. {help_message}\"\n            )\n        except httpx.HTTPStatusError as exc:\n            exit_with_error(f\"Error connecting to Prefect Cloud: {exc!r}\")\n\n    if workspace_handle:\n        # Search for the given workspace\n        for workspace in workspaces:\n            if workspace.handle == workspace_handle:\n                selected_workspace = workspace\n                break\n        else:\n            if workspaces:\n                hint = (\n                    \" Available workspaces:\"\n                    f\" {listrepr((w.handle for w in workspaces), ', ')}\"\n                )\n            else:\n                hint = \"\"\n\n            exit_with_error(f\"Workspace {workspace_handle!r} not found.\" + hint)\n    else:\n        # Prompt a switch if the number of workspaces is greater than one\n        prompt_switch_workspace = len(workspaces) > 1\n\n        # Confirm that we want to switch if the current profile is already logged in\n        if (\n            current_profile_is_logged_in and current_workspace is not None\n        ) and prompt_switch_workspace:\n            app.console.print(\n                f\"You are currently using workspace {current_workspace.handle!r}.\"\n            )\n            prompt_switch_workspace = typer.confirm(\n                \"? 
Would you like to switch workspaces?\", default=False\n            )\n    if prompt_switch_workspace:\n        go_back = True\n        while go_back:\n            selected_workspace, go_back = await _prompt_for_account_and_workspace(\n                workspaces\n            )\n        if selected_workspace is None:\n            exit_with_error(\"No workspace selected.\")\n\n    elif not selected_workspace and not workspace_handle:\n        if current_workspace:\n            selected_workspace = current_workspace\n        elif len(workspaces) > 0:\n            selected_workspace = workspaces[0]\n        else:\n            exit_with_error(\n                \"No workspaces found! Create a workspace at\"\n                f\" {PREFECT_CLOUD_UI_URL.value()} and try again.\"\n            )\n\n    update_current_profile(\n        {\n            PREFECT_API_KEY: key,\n            PREFECT_API_URL: selected_workspace.api_url(),\n        }\n    )\n\n    exit_with_success(\n        f\"Authenticated with Prefect Cloud! Using workspace {selected_workspace.handle!r}.\"\n    )\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.login_with_browser","title":"login_with_browser async","text":"

Perform login using the browser.

On failure, this function will exit the process. On success, it will return an API key.

Source code in prefect/cli/cloud/__init__.py
async def login_with_browser() -> str:\n    \"\"\"\n    Perform login using the browser.\n\n    On failure, this function will exit the process.\n    On success, it will return an API key.\n    \"\"\"\n\n    # Set up an event that the login API will toggle on startup\n    ready_event = login_api.extra[\"ready-event\"] = anyio.Event()\n\n    # Set up an event that the login API will set when a response comes from the UI\n    result_event = login_api.extra[\"result-event\"] = anyio.Event()\n\n    timeout_scope = None\n    async with anyio.create_task_group() as tg:\n        # Run a server in the background to get payload from the browser\n        server = await tg.start(serve_login_api, tg.cancel_scope)\n\n        # Wait for the login server to be ready\n        with anyio.fail_after(10):\n            await ready_event.wait()\n\n            # The server may not actually be serving as the lifespan is started first\n            while not server.started:\n                await anyio.sleep(0)\n\n        # Get the port the server is using\n        server_port = server.servers[0].sockets[0].getsockname()[1]\n        callback = urllib.parse.quote(f\"http://localhost:{server_port}\")\n        ui_login_url = (\n            PREFECT_CLOUD_UI_URL.value() + f\"/auth/client?callback={callback}\"\n        )\n\n        # Then open the authorization page in a new browser tab\n        app.console.print(\"Opening browser...\")\n        await run_sync_in_worker_thread(webbrowser.open_new_tab, ui_login_url)\n\n        # Wait for the response from the browser,\n        with anyio.move_on_after(120) as timeout_scope:\n            app.console.print(\"Waiting for response...\")\n            await result_event.wait()\n\n        # Uvicorn installs signal handlers, this is the cleanest way to shutdown the\n        # login API\n        raise_signal(signal.SIGINT)\n\n    result = login_api.extra.get(\"result\")\n    if not result:\n        if timeout_scope and timeout_scope.cancel_called:\n            exit_with_error(\"Timed out while waiting for authorization.\")\n        else:\n            exit_with_error(\"Aborted.\")\n\n    if result.type == \"success\":\n        return result.content.api_key\n    elif result.type == \"failure\":\n        exit_with_error(f\"Failed to log in. {result.content.reason}\")\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.logout","title":"logout async","text":"

Log out of the current workspace. Reset PREFECT_API_KEY and PREFECT_API_URL to their defaults.

Source code in prefect/cli/cloud/__init__.py
@cloud_app.command()\nasync def logout():\n    \"\"\"\n    Logout the current workspace.\n    Reset PREFECT_API_KEY and PREFECT_API_URL to default.\n    \"\"\"\n    current_profile = prefect.context.get_settings_context().profile\n    if current_profile is None:\n        exit_with_error(\"There is no current profile set.\")\n\n    if current_profile.settings.get(PREFECT_API_KEY) is None:\n        exit_with_error(\"Current profile is not logged into Prefect Cloud.\")\n\n    update_current_profile(\n        {\n            PREFECT_API_URL: None,\n            PREFECT_API_KEY: None,\n        },\n    )\n\n    exit_with_success(\"Logged out from Prefect Cloud.\")\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.ls","title":"ls async","text":"

List available workspaces.
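
For example, to list the workspaces available to the logged-in account:

$ prefect cloud workspace ls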

Source code in prefect/cli/cloud/__init__.py
@workspace_app.command()\nasync def ls():\n    \"\"\"List available workspaces.\"\"\"\n\n    confirm_logged_in()\n\n    async with get_cloud_client() as client:\n        try:\n            workspaces = await client.read_workspaces()\n        except CloudUnauthorizedError:\n            exit_with_error(\n                \"Unable to authenticate. Please ensure your credentials are correct.\"\n            )\n\n    current_workspace = get_current_workspace(workspaces)\n\n    table = Table(caption=\"* active workspace\")\n    table.add_column(\n        \"[#024dfd]Workspaces:\", justify=\"left\", style=\"#8ea0ae\", no_wrap=True\n    )\n\n    for workspace_handle in sorted(workspace.handle for workspace in workspaces):\n        if workspace_handle == current_workspace.handle:\n            table.add_row(f\"[green]* {workspace_handle}[/green]\")\n        else:\n            table.add_row(f\"  {workspace_handle}\")\n\n    app.console.print(table)\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.open","title":"open async","text":"

Open the Prefect Cloud UI in the browser.
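
For example, to open the current workspace in your default browser:

$ prefect cloud open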

Source code in prefect/cli/cloud/__init__.py
@cloud_app.command()\nasync def open():\n    \"\"\"\n    Open the Prefect Cloud UI in the browser.\n    \"\"\"\n    confirm_logged_in()\n\n    current_profile = prefect.context.get_settings_context().profile\n    if current_profile is None:\n        exit_with_error(\n            \"There is no current profile set - set one with `prefect profile create\"\n            \" <name>` and `prefect profile use <name>`.\"\n        )\n\n    current_workspace = get_current_workspace(\n        await prefect.get_cloud_client().read_workspaces()\n    )\n    if current_workspace is None:\n        exit_with_error(\n            \"There is no current workspace set - set one with `prefect cloud workspace\"\n            \" set --workspace <workspace>`.\"\n        )\n\n    ui_url = current_workspace.ui_url()\n\n    await run_sync_in_worker_thread(webbrowser.open_new_tab, ui_url)\n\n    exit_with_success(f\"Opened {current_workspace.handle!r} in browser.\")\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.prompt_select_from_list","title":"prompt_select_from_list","text":"

Given a list of options, display the values to the user in a table and prompt them to select one.

Parameters:

options (Union[List[str], List[Tuple[Hashable, str]]], required): A list of options to present to the user. A list of tuples can be passed as key-value pairs. If a value is chosen, the key will be returned.

Returns:

str: the selected option
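
A minimal usage sketch (the prompt and option values are illustrative; the call needs an interactive terminal and blocks while it waits for arrow-key input):

from rich.console import Console
from prefect.cli.cloud import prompt_select_from_list

# (key, label) tuples: the label is displayed, the chosen key is returned
choice = prompt_select_from_list(
    Console(),
    "Select a workspace",
    [("ws-1", "acme/dev"), ("ws-2", "acme/prod")],
)
print(choice)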

Source code in prefect/cli/cloud/__init__.py
def prompt_select_from_list(\n    console, prompt: str, options: Union[List[str], List[Tuple[Hashable, str]]]\n) -> str:\n    \"\"\"\n    Given a list of options, display the values to user in a table and prompt them\n    to select one.\n\n    Args:\n        options: A list of options to present to the user.\n            A list of tuples can be passed as key value pairs. If a value is chosen, the\n            key will be returned.\n\n    Returns:\n        str: the selected option\n    \"\"\"\n\n    current_idx = 0\n    selected_option = None\n\n    def build_table() -> Table:\n        \"\"\"\n        Generate a table of options. The `current_idx` will be highlighted.\n        \"\"\"\n\n        table = Table(box=False, header_style=None, padding=(0, 0))\n        table.add_column(\n            f\"? [bold]{prompt}[/] [bright_blue][Use arrows to move; enter to select]\",\n            justify=\"left\",\n            no_wrap=True,\n        )\n\n        for i, option in enumerate(options):\n            if isinstance(option, tuple):\n                option = option[1]\n\n            if i == current_idx:\n                # Use blue for selected options\n                table.add_row(\"[bold][blue]> \" + option)\n            else:\n                table.add_row(\"  \" + option)\n        return table\n\n    with Live(build_table(), auto_refresh=False, console=console) as live:\n        while selected_option is None:\n            key = readchar.readkey()\n\n            if key == readchar.key.UP:\n                current_idx = current_idx - 1\n                # wrap to bottom if at the top\n                if current_idx < 0:\n                    current_idx = len(options) - 1\n            elif key == readchar.key.DOWN:\n                current_idx = current_idx + 1\n                # wrap to top if at the bottom\n                if current_idx >= len(options):\n                    current_idx = 0\n            elif key == readchar.key.CTRL_C:\n                # gracefully exit with no message\n                exit_with_error(\"\")\n            elif key == readchar.key.ENTER or key == readchar.key.CR:\n                selected_option = options[current_idx]\n                if isinstance(selected_option, tuple):\n                    selected_option = selected_option[0]\n\n            live.update(build_table(), refresh=True)\n\n        return selected_option\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.set","title":"set async","text":"

Set the current workspace. Shows a workspace picker if no workspace is specified.
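
For example (the account and workspace handles are illustrative):

$ prefect cloud workspace set --workspace "my-account/my-workspace"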

Source code in prefect/cli/cloud/__init__.py
@workspace_app.command()\nasync def set(\n    workspace_handle: str = typer.Option(\n        None,\n        \"--workspace\",\n        \"-w\",\n        help=(\n            \"Full handle of workspace, in format '<account_handle>/<workspace_handle>'\"\n        ),\n    ),\n):\n    \"\"\"Set current workspace. Shows a workspace picker if no workspace is specified.\"\"\"\n    confirm_logged_in()\n    async with get_cloud_client() as client:\n        try:\n            workspaces = await client.read_workspaces()\n        except CloudUnauthorizedError:\n            exit_with_error(\n                \"Unable to authenticate. Please ensure your credentials are correct.\"\n            )\n\n        if workspace_handle:\n            # Search for the given workspace\n            for workspace in workspaces:\n                if workspace.handle == workspace_handle:\n                    break\n            else:\n                exit_with_error(f\"Workspace {workspace_handle!r} not found.\")\n        else:\n            if not workspaces:\n                exit_with_error(\"No workspaces found in the selected account.\")\n\n            go_back = True\n            while go_back:\n                workspace, go_back = await _prompt_for_account_and_workspace(workspaces)\n            if workspace is None:\n                exit_with_error(\"No workspace selected.\")\n\n        profile = update_current_profile({PREFECT_API_URL: workspace.api_url()})\n        exit_with_success(\n            f\"Successfully set workspace to {workspace.handle!r} in profile\"\n            f\" {profile.name!r}.\"\n        )\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/concurrency_limit/","title":"concurrency_limit","text":"","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit","title":"prefect.cli.concurrency_limit","text":"

Command line interface for working with concurrency limits.

","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.create","title":"create async","text":"

Create a concurrency limit against a tag.

This limit controls how many task runs with that tag may simultaneously be in a Running state.
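
For example, to allow at most 10 concurrent task runs with an illustrative tag named small-instance:

$ prefect concurrency-limit create small-instance 10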

Source code in prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def create(tag: str, concurrency_limit: int):\n    \"\"\"\n    Create a concurrency limit against a tag.\n\n    This limit controls how many task runs with that tag may simultaneously be in a\n    Running state.\n    \"\"\"\n\n    async with get_client() as client:\n        await client.create_concurrency_limit(\n            tag=tag, concurrency_limit=concurrency_limit\n        )\n        await client.read_concurrency_limit_by_tag(tag)\n\n    app.console.print(\n        textwrap.dedent(\n            f\"\"\"\n            Created concurrency limit with properties:\n                tag - {tag!r}\n                concurrency_limit - {concurrency_limit}\n\n            Delete the concurrency limit:\n                prefect concurrency-limit delete {tag!r}\n\n            Inspect the concurrency limit:\n                prefect concurrency-limit inspect {tag!r}\n        \"\"\"\n        )\n    )\n
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.delete","title":"delete async","text":"

Delete the concurrency limit set on the specified tag.
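
For example, with an illustrative tag name:

$ prefect concurrency-limit delete small-instance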

Source code in prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def delete(tag: str):\n    \"\"\"\n    Delete the concurrency limit set on the specified tag.\n    \"\"\"\n\n    async with get_client() as client:\n        try:\n            await client.delete_concurrency_limit_by_tag(tag=tag)\n        except ObjectNotFound:\n            exit_with_error(f\"No concurrency limit found for the tag: {tag}\")\n\n    exit_with_success(f\"Deleted concurrency limit set on the tag: {tag}\")\n
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.inspect","title":"inspect async","text":"

View details about a concurrency limit. active_slots shows a list of TaskRun IDs which are currently using a concurrency slot.
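
For example, with an illustrative tag name:

$ prefect concurrency-limit inspect small-instance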

Source code in prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def inspect(tag: str):\n    \"\"\"\n    View details about a concurrency limit. `active_slots` shows a list of TaskRun IDs\n    which are currently using a concurrency slot.\n    \"\"\"\n\n    async with get_client() as client:\n        try:\n            result = await client.read_concurrency_limit_by_tag(tag=tag)\n        except ObjectNotFound:\n            exit_with_error(f\"No concurrency limit found for the tag: {tag}\")\n\n    trid_table = Table()\n    trid_table.add_column(\"Active Task Run IDs\", style=\"cyan\", no_wrap=True)\n\n    cl_table = Table(title=f\"Concurrency Limit ID: [red]{str(result.id)}\")\n    cl_table.add_column(\"Tag\", style=\"green\", no_wrap=True)\n    cl_table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n    cl_table.add_column(\"Created\", style=\"magenta\", no_wrap=True)\n    cl_table.add_column(\"Updated\", style=\"magenta\", no_wrap=True)\n\n    for trid in sorted(result.active_slots):\n        trid_table.add_row(str(trid))\n\n    cl_table.add_row(\n        str(result.tag),\n        str(result.concurrency_limit),\n        Pretty(pendulum.instance(result.created).diff_for_humans()),\n        Pretty(pendulum.instance(result.updated).diff_for_humans()),\n    )\n\n    group = Group(\n        cl_table,\n        trid_table,\n    )\n    app.console.print(Panel(group, expand=False))\n
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.ls","title":"ls async","text":"

View all concurrency limits.
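
For example:

$ prefect concurrency-limit ls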

Source code in prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def ls(limit: int = 15, offset: int = 0):\n    \"\"\"\n    View all concurrency limits.\n    \"\"\"\n    table = Table(\n        title=\"Concurrency Limits\",\n        caption=\"inspect a concurrency limit to show active task run IDs\",\n    )\n    table.add_column(\"Tag\", style=\"green\", no_wrap=True)\n    table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n    table.add_column(\"Active Task Runs\", style=\"magenta\", no_wrap=True)\n\n    async with get_client() as client:\n        concurrency_limits = await client.read_concurrency_limits(\n            limit=limit, offset=offset\n        )\n\n    for cl in sorted(concurrency_limits, key=lambda c: c.updated, reverse=True):\n        table.add_row(\n            str(cl.tag),\n            str(cl.id),\n            str(cl.concurrency_limit),\n            str(len(cl.active_slots)),\n        )\n\n    app.console.print(table)\n
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.reset","title":"reset async","text":"

Reset the concurrency limit slots set on the specified tag.
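
For example, with an illustrative tag name:

$ prefect concurrency-limit reset small-instance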

Source code in prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def reset(tag: str):\n    \"\"\"\n    Resets the concurrency limit slots set on the specified tag.\n    \"\"\"\n\n    async with get_client() as client:\n        try:\n            await client.reset_concurrency_limit_by_tag(tag=tag)\n        except ObjectNotFound:\n            exit_with_error(f\"No concurrency limit found for the tag: {tag}\")\n\n    exit_with_success(f\"Reset concurrency limit set on the tag: {tag}\")\n
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/config/","title":"config","text":"","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config","title":"prefect.cli.config","text":"

Command line interface for working with profiles.

","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.set_","title":"set_","text":"

Change the value for a setting by updating it in the current profile.
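
For example, to raise the logging level in the current profile (the value is illustrative):

$ prefect config set PREFECT_LOGGING_LEVEL="DEBUG"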

Source code in prefect/cli/config.py
@config_app.command(\"set\")\ndef set_(settings: List[str]):\n    \"\"\"\n    Change the value for a setting by setting the value in the current profile.\n    \"\"\"\n    parsed_settings = {}\n    for item in settings:\n        try:\n            setting, value = item.split(\"=\", maxsplit=1)\n        except ValueError:\n            exit_with_error(\n                f\"Failed to parse argument {item!r}. Use the format 'VAR=VAL'.\"\n            )\n\n        if setting not in prefect.settings.SETTING_VARIABLES:\n            exit_with_error(f\"Unknown setting name {setting!r}.\")\n\n        # Guard against changing settings that tweak config locations\n        if setting in {\"PREFECT_HOME\", \"PREFECT_PROFILES_PATH\"}:\n            exit_with_error(\n                f\"Setting {setting!r} cannot be changed with this command. \"\n                \"Use an environment variable instead.\"\n            )\n\n        parsed_settings[setting] = value\n\n    try:\n        new_profile = prefect.settings.update_current_profile(parsed_settings)\n    except pydantic.ValidationError as exc:\n        for error in exc.errors():\n            setting = error[\"loc\"][0]\n            message = error[\"msg\"]\n            app.console.print(f\"Validation error for setting {setting!r}: {message}\")\n        exit_with_error(\"Invalid setting value.\")\n\n    for setting, value in parsed_settings.items():\n        app.console.print(f\"Set {setting!r} to {value!r}.\")\n        if setting in os.environ:\n            app.console.print(\n                f\"[yellow]{setting} is also set by an environment variable which will \"\n                f\"override your config value. Run `unset {setting}` to clear it.\"\n            )\n\n        if prefect.settings.SETTING_VARIABLES[setting].deprecated:\n            app.console.print(\n                f\"[yellow]{prefect.settings.SETTING_VARIABLES[setting].deprecated_message}.\"\n            )\n\n    exit_with_success(f\"Updated profile {new_profile.name!r}.\")\n
","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.unset","title":"unset","text":"

Restore the default value for a setting.

Removes the setting from the current profile.
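
For example, to restore the default logging level:

$ prefect config unset PREFECT_LOGGING_LEVEL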

Source code in prefect/cli/config.py
@config_app.command()\ndef unset(settings: List[str]):\n    \"\"\"\n    Restore the default value for a setting.\n\n    Removes the setting from the current profile.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    profile = profiles[prefect.context.get_settings_context().profile.name]\n    parsed = set()\n\n    for setting in settings:\n        if setting not in prefect.settings.SETTING_VARIABLES:\n            exit_with_error(f\"Unknown setting name {setting!r}.\")\n        # Cast to settings objects\n        parsed.add(prefect.settings.SETTING_VARIABLES[setting])\n\n    for setting in parsed:\n        if setting not in profile.settings:\n            exit_with_error(f\"{setting.name!r} is not set in profile {profile.name!r}.\")\n\n    profiles.update_profile(\n        name=profile.name, settings={setting: None for setting in parsed}\n    )\n\n    for setting in settings:\n        app.console.print(f\"Unset {setting!r}.\")\n\n        if setting in os.environ:\n            app.console.print(\n                f\"[yellow]{setting!r} is also set by an environment variable. \"\n                f\"Use `unset {setting}` to clear it.\"\n            )\n\n    prefect.settings.save_profiles(profiles)\n    exit_with_success(f\"Updated profile {profile.name!r}.\")\n
","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.validate","title":"validate","text":"

Read and validate the current profile.

Deprecated settings will be automatically converted to new names unless both are set.
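
For example:

$ prefect config validate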

Source code in prefect/cli/config.py
@config_app.command()\ndef validate():\n    \"\"\"\n    Read and validate the current profile.\n\n    Deprecated settings will be automatically converted to new names unless both are\n    set.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    profile = profiles[prefect.context.get_settings_context().profile.name]\n    changed = profile.convert_deprecated_renamed_settings()\n    for old, new in changed:\n        app.console.print(f\"Updated {old.name!r} to {new.name!r}.\")\n\n    for setting in profile.settings.keys():\n        if setting.deprecated:\n            app.console.print(f\"Found deprecated setting {setting.name!r}.\")\n\n    profile.validate_settings()\n\n    prefect.settings.save_profiles(profiles)\n    exit_with_success(\"Configuration valid!\")\n
","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.view","title":"view","text":"

Display the current settings.
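
For example, to include default values and hide the source annotations:

$ prefect config view --show-defaults --hide-sources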

Source code in prefect/cli/config.py
@config_app.command()\ndef view(\n    show_defaults: Optional[bool] = typer.Option(\n        False, \"--show-defaults/--hide-defaults\", help=(show_defaults_help)\n    ),\n    show_sources: Optional[bool] = typer.Option(\n        True,\n        \"--show-sources/--hide-sources\",\n        help=(show_sources_help),\n    ),\n    show_secrets: Optional[bool] = typer.Option(\n        False,\n        \"--show-secrets/--hide-secrets\",\n        help=\"Toggle display of secrets setting values.\",\n    ),\n):\n    \"\"\"\n    Display the current settings.\n    \"\"\"\n    context = prefect.context.get_settings_context()\n\n    # Get settings at each level, converted to a flat dictionary for easy comparison\n    default_settings = prefect.settings.get_default_settings()\n    env_settings = prefect.settings.get_settings_from_env()\n    current_profile_settings = context.settings\n\n    # Obfuscate secrets\n    if not show_secrets:\n        default_settings = default_settings.with_obfuscated_secrets()\n        env_settings = env_settings.with_obfuscated_secrets()\n        current_profile_settings = current_profile_settings.with_obfuscated_secrets()\n\n    # Display the profile first\n    app.console.print(f\"PREFECT_PROFILE={context.profile.name!r}\")\n\n    settings_output = []\n\n    # The combination of environment variables and profile settings that are in use\n    profile_overrides = current_profile_settings.dict(exclude_unset=True)\n\n    # Used to see which settings in current_profile_settings came from env vars\n    env_overrides = env_settings.dict(exclude_unset=True)\n\n    for key, value in profile_overrides.items():\n        source = \"env\" if env_overrides.get(key) is not None else \"profile\"\n        source_blurb = f\" (from {source})\" if show_sources else \"\"\n        settings_output.append(f\"{key}='{value}'{source_blurb}\")\n\n    if show_defaults:\n        for key, value in default_settings.dict().items():\n            if key not in profile_overrides:\n                source_blurb = \" (from defaults)\" if show_sources else \"\"\n                settings_output.append(f\"{key}='{value}'{source_blurb}\")\n\n    app.console.print(\"\\n\".join(sorted(settings_output)))\n
","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/deploy/","title":"deploy","text":"","tags":["Python API","deploy","deployment","CLI"]},{"location":"api-ref/prefect/cli/deploy/#prefect.cli.deploy","title":"prefect.cli.deploy","text":"

Module containing implementation for deploying flows.

","tags":["Python API","deploy","deployment","CLI"]},{"location":"api-ref/prefect/cli/deploy/#prefect.cli.deploy.deploy","title":"deploy async","text":"

Deploy a flow from this project by creating a deployment.

Should be run from a project root directory.
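
For example (the entrypoint, deployment name, and work pool name are illustrative):

$ prefect deploy ./flows/hello.py:hello --name my-deployment --pool my-pool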

Source code in prefect/cli/deploy.py
@app.command()\nasync def deploy(\n    entrypoint: str = typer.Argument(\n        None,\n        help=(\n            \"The path to a flow entrypoint within a project, in the form of\"\n            \" `./path/to/file.py:flow_func_name`\"\n        ),\n    ),\n    names: List[str] = typer.Option(\n        None,\n        \"--name\",\n        \"-n\",\n        help=(\n            \"The name to give the deployment. Can be a pattern. Examples:\"\n            \" 'my-deployment', 'my-flow/my-deployment', 'my-deployment-*',\"\n            \" '*-flow-name/deployment*'\"\n        ),\n    ),\n    description: str = typer.Option(\n        None,\n        \"--description\",\n        \"-d\",\n        help=(\n            \"The description to give the deployment. If not provided, the description\"\n            \" will be populated from the flow's description.\"\n        ),\n    ),\n    version: str = typer.Option(\n        None, \"--version\", help=\"A version to give the deployment.\"\n    ),\n    tags: List[str] = typer.Option(\n        None,\n        \"-t\",\n        \"--tag\",\n        help=(\n            \"One or more optional tags to apply to the deployment. Note: tags are used\"\n            \" only for organizational purposes. For delegating work to agents, use the\"\n            \" --work-queue flag.\"\n        ),\n    ),\n    work_pool_name: str = SettingsOption(\n        PREFECT_DEFAULT_WORK_POOL_NAME,\n        \"-p\",\n        \"--pool\",\n        help=\"The work pool that will handle this deployment's runs.\",\n    ),\n    work_queue_name: str = typer.Option(\n        None,\n        \"-q\",\n        \"--work-queue\",\n        help=(\n            \"The work queue that will handle this deployment's runs. \"\n            \"It will be created if it doesn't already exist. Defaults to `None`.\"\n        ),\n    ),\n    variables: List[str] = typer.Option(\n        None,\n        \"-v\",\n        \"--variable\",\n        help=(\"DEPRECATED: Please use --jv/--job-variable for similar functionality \"),\n    ),\n    job_variables: List[str] = typer.Option(\n        None,\n        \"-jv\",\n        \"--job-variable\",\n        help=(\n            \"One or more job variable overrides for the work pool provided in the\"\n            \" format of key=value string or a JSON object\"\n        ),\n    ),\n    cron: List[str] = typer.Option(\n        None,\n        \"--cron\",\n        help=\"A cron string that will be used to set a CronSchedule on the deployment.\",\n    ),\n    interval: List[int] = typer.Option(\n        None,\n        \"--interval\",\n        help=(\n            \"An integer specifying an interval (in seconds) that will be used to set an\"\n            \" IntervalSchedule on the deployment.\"\n        ),\n    ),\n    interval_anchor: Optional[str] = typer.Option(\n        None, \"--anchor-date\", help=\"The anchor date for all interval schedules\"\n    ),\n    rrule: List[str] = typer.Option(\n        None,\n        \"--rrule\",\n        help=\"An RRule that will be used to set an RRuleSchedule on the deployment.\",\n    ),\n    timezone: str = typer.Option(\n        None,\n        \"--timezone\",\n        help=\"Deployment schedule timezone string e.g. 'America/New_York'\",\n    ),\n    trigger: List[str] = typer.Option(\n        None,\n        \"--trigger\",\n        help=(\n            \"Specifies a trigger for the deployment. The value can be a\"\n            \" json string or path to `.yaml`/`.json` file. 
This flag can be used\"\n            \" multiple times.\"\n        ),\n    ),\n    param: List[str] = typer.Option(\n        None,\n        \"--param\",\n        help=(\n            \"An optional parameter override, values are parsed as JSON strings e.g.\"\n            \" --param question=ultimate --param answer=42\"\n        ),\n    ),\n    params: str = typer.Option(\n        None,\n        \"--params\",\n        help=(\n            \"An optional parameter override in a JSON string format e.g.\"\n            ' --params=\\'{\"question\": \"ultimate\", \"answer\": 42}\\''\n        ),\n    ),\n    enforce_parameter_schema: bool = typer.Option(\n        False,\n        \"--enforce-parameter-schema\",\n        help=(\n            \"Whether to enforce the parameter schema on this deployment. If set to\"\n            \" True, any parameters passed to this deployment must match the signature\"\n            \" of the flow.\"\n        ),\n    ),\n    deploy_all: bool = typer.Option(\n        False,\n        \"--all\",\n        help=(\n            \"Deploy all flows in the project. If a flow name or entrypoint is also\"\n            \" provided, this flag will be ignored.\"\n        ),\n    ),\n    prefect_file: Path = typer.Option(\n        Path(\"prefect.yaml\"),\n        \"--prefect-file\",\n        help=\"Specify a custom path to a prefect.yaml file\",\n    ),\n):\n    \"\"\"\n    Deploy a flow from this project by creating a deployment.\n\n    Should be run from a project root directory.\n    \"\"\"\n\n    if variables is not None:\n        app.console.print(\n            generate_deprecation_message(\n                name=\"The `--variable` flag\",\n                start_date=\"Mar 2024\",\n                help=(\n                    \"Please use the `--job-variable foo=bar` argument instead: `prefect\"\n                    \" deploy --job-variable`.\"\n                ),\n            ),\n            style=\"yellow\",\n        )\n\n    if variables is None:\n        variables = list()\n    if job_variables is None:\n        job_variables = list()\n    job_variables.extend(variables)\n\n    options = {\n        \"entrypoint\": entrypoint,\n        \"description\": description,\n        \"version\": version,\n        \"tags\": tags,\n        \"work_pool_name\": work_pool_name,\n        \"work_queue_name\": work_queue_name,\n        \"variables\": job_variables,\n        \"cron\": cron,\n        \"interval\": interval,\n        \"anchor_date\": interval_anchor,\n        \"rrule\": rrule,\n        \"timezone\": timezone,\n        \"triggers\": trigger,\n        \"param\": param,\n        \"params\": params,\n        \"enforce_parameter_schema\": enforce_parameter_schema,\n    }\n    try:\n        deploy_configs, actions = _load_deploy_configs_and_actions(\n            prefect_file=prefect_file,\n        )\n        parsed_names = []\n        for name in names or []:\n            if \"*\" in name:\n                parsed_names.extend(_parse_name_from_pattern(deploy_configs, name))\n            else:\n                parsed_names.append(name)\n        deploy_configs = _pick_deploy_configs(\n            deploy_configs,\n            parsed_names,\n            deploy_all,\n        )\n\n        if len(deploy_configs) > 1:\n            if any(options.values()):\n                app.console.print(\n                    (\n                        \"You have passed options to the deploy command, but you are\"\n                        \" creating or updating multiple deployments. 
These options\"\n                        \" will be ignored.\"\n                    ),\n                    style=\"yellow\",\n                )\n            await _run_multi_deploy(\n                deploy_configs=deploy_configs,\n                actions=actions,\n                deploy_all=deploy_all,\n                prefect_file=prefect_file,\n            )\n        else:\n            # Accommodate passing in -n flow-name/deployment-name as well as -n deployment-name\n            options[\"names\"] = [\n                name.split(\"/\", 1)[-1] if \"/\" in name else name for name in parsed_names\n            ]\n\n            await _run_single_deploy(\n                deploy_config=deploy_configs[0] if deploy_configs else {},\n                actions=actions,\n                options=options,\n                prefect_file=prefect_file,\n            )\n    except ValueError as exc:\n        exit_with_error(str(exc))\n
","tags":["Python API","deploy","deployment","CLI"]},{"location":"api-ref/prefect/cli/deploy/#prefect.cli.deploy.init","title":"init async","text":"

Initialize a new deployment configuration recipe.
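
For example, to scaffold from a recipe and pass one of its required fields (the recipe and image names are illustrative):

$ prefect init --recipe docker --field image_name=my-registry/my-image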

Source code in prefect/cli/deploy.py
@app.command()\nasync def init(\n    name: str = None,\n    recipe: str = None,\n    fields: List[str] = typer.Option(\n        None,\n        \"-f\",\n        \"--field\",\n        help=(\n            \"One or more fields to pass to the recipe (e.g., image_name) in the format\"\n            \" of key=value.\"\n        ),\n    ),\n):\n    \"\"\"\n    Initialize a new deployment configuration recipe.\n    \"\"\"\n    inputs = {}\n    fields = fields or []\n    recipe_paths = prefect.__module_path__ / \"deployments\" / \"recipes\"\n\n    for field in fields:\n        key, value = field.split(\"=\")\n        inputs[key] = value\n\n    if not recipe and is_interactive():\n        recipe_paths = prefect.__module_path__ / \"deployments\" / \"recipes\"\n        recipes = []\n\n        for r in recipe_paths.iterdir():\n            if r.is_dir() and (r / \"prefect.yaml\").exists():\n                with open(r / \"prefect.yaml\") as f:\n                    recipe_data = yaml.safe_load(f)\n                    recipe_name = r.name\n                    recipe_description = recipe_data.get(\n                        \"description\", \"(no description available)\"\n                    )\n                    recipe_dict = {\n                        \"name\": recipe_name,\n                        \"description\": recipe_description,\n                    }\n                    recipes.append(recipe_dict)\n\n        selected_recipe = prompt_select_from_table(\n            app.console,\n            \"Would you like to initialize your deployment configuration with a recipe?\",\n            columns=[\n                {\"header\": \"Name\", \"key\": \"name\"},\n                {\"header\": \"Description\", \"key\": \"description\"},\n            ],\n            data=recipes,\n            opt_out_message=\"No, I'll use the default deployment configuration.\",\n            opt_out_response={},\n        )\n        if selected_recipe != {}:\n            recipe = selected_recipe[\"name\"]\n\n    if recipe and (recipe_paths / recipe / \"prefect.yaml\").exists():\n        with open(recipe_paths / recipe / \"prefect.yaml\") as f:\n            recipe_inputs = yaml.safe_load(f).get(\"required_inputs\") or {}\n\n        if recipe_inputs:\n            if set(recipe_inputs.keys()) < set(inputs.keys()):\n                # message to user about extra fields\n                app.console.print(\n                    (\n                        f\"Warning: extra fields provided for {recipe!r} recipe:\"\n                        f\" '{', '.join(set(inputs.keys()) - set(recipe_inputs.keys()))}'\"\n                    ),\n                    style=\"red\",\n                )\n            elif set(recipe_inputs.keys()) > set(inputs.keys()):\n                table = Table(\n                    title=f\"[red]Required inputs for {recipe!r} recipe[/red]\",\n                )\n                table.add_column(\"Field Name\", style=\"green\", no_wrap=True)\n                table.add_column(\n                    \"Description\", justify=\"left\", style=\"white\", no_wrap=False\n                )\n                for field, description in recipe_inputs.items():\n                    if field not in inputs:\n                        table.add_row(field, description)\n\n                app.console.print(table)\n\n                for key, description in recipe_inputs.items():\n                    if key not in inputs:\n                        inputs[key] = typer.prompt(key)\n\n            app.console.print(\"-\" * 15)\n\n    try:\n        files = 
[\n            f\"[green]{fname}[/green]\"\n            for fname in initialize_project(name=name, recipe=recipe, inputs=inputs)\n        ]\n    except ValueError as exc:\n        if \"Unknown recipe\" in str(exc):\n            exit_with_error(\n                f\"Unknown recipe {recipe!r} provided - run [yellow]`prefect init\"\n                \"`[/yellow] to see all available recipes.\"\n            )\n        else:\n            raise\n\n    files = \"\\n\".join(files)\n    empty_msg = (\n        f\"Created project in [green]{Path('.').resolve()}[/green]; no new files\"\n        \" created.\"\n    )\n    file_msg = (\n        f\"Created project in [green]{Path('.').resolve()}[/green] with the following\"\n        f\" new files:\\n{files}\"\n    )\n    app.console.print(file_msg if files else empty_msg)\n
","tags":["Python API","deploy","deployment","CLI"]},{"location":"api-ref/prefect/cli/deployment/","title":"deployment","text":"","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment","title":"prefect.cli.deployment","text":"

Command line interface for working with deployments.

","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.apply","title":"apply async","text":"

Create or update a deployment from a YAML file.

Source code in prefect/cli/deployment.py
@deployment_app.command(\n    deprecated=True,\n    deprecated_start_date=\"Mar 2024\",\n    deprecated_name=\"deployment apply\",\n    deprecated_help=\"Use 'prefect deploy' to deploy flows via YAML instead.\",\n)\nasync def apply(\n    paths: List[str] = typer.Argument(\n        ...,\n        help=\"One or more paths to deployment YAML files.\",\n    ),\n    upload: bool = typer.Option(\n        False,\n        \"--upload\",\n        help=(\n            \"A flag that, when provided, uploads this deployment's files to remote\"\n            \" storage.\"\n        ),\n    ),\n    work_queue_concurrency: int = typer.Option(\n        None,\n        \"--limit\",\n        \"-l\",\n        help=(\n            \"Sets the concurrency limit on the work queue that handles this\"\n            \" deployment's runs\"\n        ),\n    ),\n):\n    \"\"\"\n    Create or update a deployment from a YAML file.\n    \"\"\"\n    deployment = None\n    async with get_client() as client:\n        for path in paths:\n            try:\n                deployment = await Deployment.load_from_yaml(path)\n                app.console.print(\n                    f\"Successfully loaded {deployment.name!r}\", style=\"green\"\n                )\n            except Exception as exc:\n                exit_with_error(\n                    f\"'{path!s}' did not conform to deployment spec: {exc!r}\"\n                )\n\n            assert deployment\n\n            await create_work_queue_and_set_concurrency_limit(\n                deployment.work_queue_name,\n                deployment.work_pool_name,\n                work_queue_concurrency,\n            )\n\n            if upload:\n                if (\n                    deployment.storage\n                    and \"put-directory\" in deployment.storage.get_block_capabilities()\n                ):\n                    file_count = await deployment.upload_to_storage()\n                    if file_count:\n                        app.console.print(\n                            (\n                                f\"Successfully uploaded {file_count} files to\"\n                                f\" {deployment.location}\"\n                            ),\n                            style=\"green\",\n                        )\n                else:\n                    app.console.print(\n                        (\n                            f\"Deployment storage {deployment.storage} does not have\"\n                            \" upload capabilities; no files uploaded.\"\n                        ),\n                        style=\"red\",\n                    )\n            await check_work_pool_exists(\n                work_pool_name=deployment.work_pool_name, client=client\n            )\n\n            if client.server_type != ServerType.CLOUD and deployment.triggers:\n                app.console.print(\n                    (\n                        \"Deployment triggers are only supported on \"\n                        f\"Prefect Cloud. 
Triggers defined in {path!r} will be \"\n                        \"ignored.\"\n                    ),\n                    style=\"red\",\n                )\n\n            deployment_id = await deployment.apply()\n            app.console.print(\n                (\n                    f\"Deployment '{deployment.flow_name}/{deployment.name}'\"\n                    f\" successfully created with id '{deployment_id}'.\"\n                ),\n                style=\"green\",\n            )\n\n            if PREFECT_UI_URL:\n                app.console.print(\n                    \"View Deployment in UI:\"\n                    f\" {PREFECT_UI_URL.value()}/deployments/deployment/{deployment_id}\"\n                )\n\n            if deployment.work_pool_name is not None:\n                await _print_deployment_work_pool_instructions(\n                    work_pool_name=deployment.work_pool_name, client=client\n                )\n            elif deployment.work_queue_name is not None:\n                app.console.print(\n                    \"\\nTo execute flow runs from this deployment, start an agent that\"\n                    f\" pulls work from the {deployment.work_queue_name!r} work queue:\"\n                )\n                app.console.print(\n                    f\"$ prefect agent start -q {deployment.work_queue_name!r}\",\n                    style=\"blue\",\n                )\n            else:\n                app.console.print(\n                    (\n                        \"\\nThis deployment does not specify a work queue name, which\"\n                        \" means agents will not be able to pick up its runs. To add a\"\n                        \" work queue, edit the deployment spec and re-run this command,\"\n                        \" or visit the deployment in the UI.\"\n                    ),\n                    style=\"red\",\n                )\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.build","title":"build async","text":"

Generate a deployment YAML from /path/to/file.py:flow_function

Source code in prefect/cli/deployment.py
@deployment_app.command(\n    deprecated=True,\n    deprecated_start_date=\"Mar 2024\",\n    deprecated_name=\"deployment build\",\n    deprecated_help=\"Use 'prefect deploy' to deploy flows via YAML instead.\",\n)\nasync def build(\n    entrypoint: str = typer.Argument(\n        ...,\n        help=(\n            \"The path to a flow entrypoint, in the form of\"\n            \" `./path/to/file.py:flow_func_name`\"\n        ),\n    ),\n    name: str = typer.Option(\n        None, \"--name\", \"-n\", help=\"The name to give the deployment.\"\n    ),\n    description: str = typer.Option(\n        None,\n        \"--description\",\n        \"-d\",\n        help=(\n            \"The description to give the deployment. If not provided, the description\"\n            \" will be populated from the flow's description.\"\n        ),\n    ),\n    version: str = typer.Option(\n        None, \"--version\", \"-v\", help=\"A version to give the deployment.\"\n    ),\n    tags: List[str] = typer.Option(\n        None,\n        \"-t\",\n        \"--tag\",\n        help=(\n            \"One or more optional tags to apply to the deployment. Note: tags are used\"\n            \" only for organizational purposes. For delegating work to agents, use the\"\n            \" --work-queue flag.\"\n        ),\n    ),\n    work_queue_name: str = typer.Option(\n        None,\n        \"-q\",\n        \"--work-queue\",\n        help=(\n            \"The work queue that will handle this deployment's runs. \"\n            \"It will be created if it doesn't already exist. Defaults to `None`. \"\n            \"Note that if a work queue is not set, work will not be scheduled.\"\n        ),\n    ),\n    work_pool_name: str = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The work pool that will handle this deployment's runs.\",\n    ),\n    work_queue_concurrency: int = typer.Option(\n        None,\n        \"--limit\",\n        \"-l\",\n        help=(\n            \"Sets the concurrency limit on the work queue that handles this\"\n            \" deployment's runs\"\n        ),\n    ),\n    infra_type: str = typer.Option(\n        None,\n        \"--infra\",\n        \"-i\",\n        help=\"The infrastructure type to use, prepopulated with defaults. For example: \"\n        + listrepr(builtin_infrastructure_types, sep=\", \"),\n    ),\n    infra_block: str = typer.Option(\n        None,\n        \"--infra-block\",\n        \"-ib\",\n        help=\"The slug of the infrastructure block to use as a template.\",\n    ),\n    overrides: List[str] = typer.Option(\n        None,\n        \"--override\",\n        help=(\n            \"One or more optional infrastructure overrides provided as a dot delimited\"\n            \" path, e.g., `env.env_key=env_value`\"\n        ),\n    ),\n    storage_block: str = typer.Option(\n        None,\n        \"--storage-block\",\n        \"-sb\",\n        help=(\n            \"The slug of a remote storage block. 
Use the syntax:\"\n            \" 'block_type/block_name', where block_type is one of 'github', 's3',\"\n            \" 'gcs', 'azure', 'smb', or a registered block from a library that\"\n            \" implements the WritableDeploymentStorage interface such as\"\n            \" 'gitlab-repository', 'bitbucket-repository', 's3-bucket',\"\n            \" 'gcs-bucket'\"\n        ),\n    ),\n    skip_upload: bool = typer.Option(\n        False,\n        \"--skip-upload\",\n        help=(\n            \"A flag that, when provided, skips uploading this deployment's files to\"\n            \" remote storage.\"\n        ),\n    ),\n    cron: str = typer.Option(\n        None,\n        \"--cron\",\n        help=\"A cron string that will be used to set a CronSchedule on the deployment.\",\n    ),\n    interval: int = typer.Option(\n        None,\n        \"--interval\",\n        help=(\n            \"An integer specifying an interval (in seconds) that will be used to set an\"\n            \" IntervalSchedule on the deployment.\"\n        ),\n    ),\n    interval_anchor: Optional[str] = typer.Option(\n        None, \"--anchor-date\", help=\"The anchor date for an interval schedule\"\n    ),\n    rrule: str = typer.Option(\n        None,\n        \"--rrule\",\n        help=\"An RRule that will be used to set an RRuleSchedule on the deployment.\",\n    ),\n    timezone: str = typer.Option(\n        None,\n        \"--timezone\",\n        help=\"Deployment schedule timezone string e.g. 'America/New_York'\",\n    ),\n    path: str = typer.Option(\n        None,\n        \"--path\",\n        help=(\n            \"An optional path to specify a subdirectory of remote storage to upload to,\"\n            \" or to point to a subdirectory of a locally stored flow.\"\n        ),\n    ),\n    output: str = typer.Option(\n        None,\n        \"--output\",\n        \"-o\",\n        help=\"An optional filename to write the deployment file to.\",\n    ),\n    _apply: bool = typer.Option(\n        False,\n        \"--apply\",\n        \"-a\",\n        help=(\n            \"An optional flag to automatically register the resulting deployment with\"\n            \" the API.\"\n        ),\n    ),\n    param: List[str] = typer.Option(\n        None,\n        \"--param\",\n        help=(\n            \"An optional parameter override, values are parsed as JSON strings e.g.\"\n            \" --param question=ultimate --param answer=42\"\n        ),\n    ),\n    params: str = typer.Option(\n        None,\n        \"--params\",\n        help=(\n            \"An optional parameter override in a JSON string format e.g.\"\n            ' --params=\\'{\"question\": \"ultimate\", \"answer\": 42}\\''\n        ),\n    ),\n    no_schedule: bool = typer.Option(\n        False,\n        \"--no-schedule\",\n        help=\"An optional flag to disable scheduling for this deployment.\",\n    ),\n):\n    \"\"\"\n    Generate a deployment YAML from /path/to/file.py:flow_function\n    \"\"\"\n    # validate inputs\n    if not name:\n        exit_with_error(\n            \"A name for this deployment must be provided with the '--name' flag.\"\n        )\n\n    if (\n        len([value for value in (cron, rrule, interval) if value is not None])\n        + (1 if no_schedule else 0)\n        > 1\n    ):\n        exit_with_error(\"Only one schedule type can be provided.\")\n\n    if infra_block and infra_type:\n        exit_with_error(\n            \"Only one of `infra` or `infra_block` can be provided, please choose one.\"\n        )\n\n    
output_file = None\n    if output:\n        output_file = Path(output)\n        if output_file.suffix and output_file.suffix != \".yaml\":\n            exit_with_error(\"Output file must be a '.yaml' file.\")\n        else:\n            output_file = output_file.with_suffix(\".yaml\")\n\n    # validate flow\n    try:\n        fpath, obj_name = entrypoint.rsplit(\":\", 1)\n    except ValueError as exc:\n        if str(exc) == \"not enough values to unpack (expected 2, got 1)\":\n            missing_flow_name_msg = (\n                \"Your flow entrypoint must include the name of the function that is\"\n                f\" the entrypoint to your flow.\\nTry {entrypoint}:<flow_name>\"\n            )\n            exit_with_error(missing_flow_name_msg)\n        else:\n            raise exc\n    try:\n        flow = await run_sync_in_worker_thread(load_flow_from_entrypoint, entrypoint)\n    except Exception as exc:\n        exit_with_error(exc)\n    app.console.print(f\"Found flow {flow.name!r}\", style=\"green\")\n    job_variables = {}\n    for override in overrides or []:\n        key, value = override.split(\"=\", 1)\n        job_variables[key] = value\n\n    if infra_block:\n        infrastructure = await Block.load(infra_block)\n    elif infra_type:\n        # Create an instance of the given type\n        infrastructure = Block.get_block_class_from_key(infra_type)()\n    else:\n        # will reset to a default of Process is no infra is present on the\n        # server-side definition of this deployment\n        infrastructure = None\n\n    if interval_anchor and not interval:\n        exit_with_error(\"An anchor date can only be provided with an interval schedule\")\n\n    schedule = None\n    if cron:\n        cron_kwargs = {\"cron\": cron, \"timezone\": timezone}\n        schedule = CronSchedule(\n            **{k: v for k, v in cron_kwargs.items() if v is not None}\n        )\n    elif interval:\n        interval_kwargs = {\n            \"interval\": timedelta(seconds=interval),\n            \"anchor_date\": interval_anchor,\n            \"timezone\": timezone,\n        }\n        schedule = IntervalSchedule(\n            **{k: v for k, v in interval_kwargs.items() if v is not None}\n        )\n    elif rrule:\n        try:\n            schedule = RRuleSchedule(**json.loads(rrule))\n            if timezone:\n                # override timezone if specified via CLI argument\n                schedule.timezone = timezone\n        except json.JSONDecodeError:\n            schedule = RRuleSchedule(rrule=rrule, timezone=timezone)\n\n    # parse storage_block\n    if storage_block:\n        block_type, block_name, *block_path = storage_block.split(\"/\")\n        if block_path and path:\n            exit_with_error(\n                \"Must provide a `path` explicitly or provide one on the storage block\"\n                \" specification, but not both.\"\n            )\n        elif not path:\n            path = \"/\".join(block_path)\n        storage_block = f\"{block_type}/{block_name}\"\n        storage = await Block.load(storage_block)\n    else:\n        storage = None\n\n    if create_default_ignore_file(path=\".\"):\n        app.console.print(\n            (\n                \"Default '.prefectignore' file written to\"\n                f\" {(Path('.') / '.prefectignore').absolute()}\"\n            ),\n            style=\"green\",\n        )\n\n    if param and (params is not None):\n        exit_with_error(\"Can only pass one of `param` or `params` options\")\n\n    parameters = 
dict()\n\n    if param:\n        for p in param or []:\n            k, unparsed_value = p.split(\"=\", 1)\n            try:\n                v = json.loads(unparsed_value)\n                app.console.print(\n                    f\"The parameter value {unparsed_value} is parsed as a JSON string\"\n                )\n            except json.JSONDecodeError:\n                v = unparsed_value\n            parameters[k] = v\n\n    if params is not None:\n        parameters = json.loads(params)\n\n    # set up deployment object\n    entrypoint = (\n        f\"{Path(fpath).absolute().relative_to(Path('.').absolute())}:{obj_name}\"\n    )\n\n    init_kwargs = dict(\n        path=path,\n        entrypoint=entrypoint,\n        version=version,\n        storage=storage,\n        job_variables=job_variables or {},\n    )\n\n    if parameters:\n        init_kwargs[\"parameters\"] = parameters\n\n    if description:\n        init_kwargs[\"description\"] = description\n\n    # if a schedule, tags, work_queue_name, or infrastructure are not provided via CLI,\n    # we let `build_from_flow` load them from the server\n    if schedule or no_schedule:\n        init_kwargs.update(schedule=schedule)\n    if tags:\n        init_kwargs.update(tags=tags)\n    if infrastructure:\n        init_kwargs.update(infrastructure=infrastructure)\n    if work_queue_name:\n        init_kwargs.update(work_queue_name=work_queue_name)\n    if work_pool_name:\n        init_kwargs.update(work_pool_name=work_pool_name)\n\n    deployment_loc = output_file or f\"{obj_name}-deployment.yaml\"\n    deployment = await Deployment.build_from_flow(\n        flow=flow,\n        name=name,\n        output=deployment_loc,\n        skip_upload=skip_upload,\n        apply=False,\n        **init_kwargs,\n    )\n    app.console.print(\n        f\"Deployment YAML created at '{Path(deployment_loc).absolute()!s}'.\",\n        style=\"green\",\n    )\n\n    await create_work_queue_and_set_concurrency_limit(\n        deployment.work_queue_name, deployment.work_pool_name, work_queue_concurrency\n    )\n\n    # we process these separately for informative output\n    if not skip_upload:\n        if (\n            deployment.storage\n            and \"put-directory\" in deployment.storage.get_block_capabilities()\n        ):\n            file_count = await deployment.upload_to_storage()\n            if file_count:\n                app.console.print(\n                    (\n                        f\"Successfully uploaded {file_count} files to\"\n                        f\" {deployment.location}\"\n                    ),\n                    style=\"green\",\n                )\n        else:\n            app.console.print(\n                (\n                    f\"Deployment storage {deployment.storage} does not have upload\"\n                    \" capabilities; no files uploaded.  
Pass --skip-upload to suppress\"\n                    \" this warning.\"\n                ),\n                style=\"green\",\n            )\n\n    if _apply:\n        async with get_client() as client:\n            await check_work_pool_exists(\n                work_pool_name=deployment.work_pool_name, client=client\n            )\n            deployment_id = await deployment.apply()\n            app.console.print(\n                (\n                    f\"Deployment '{deployment.flow_name}/{deployment.name}'\"\n                    f\" successfully created with id '{deployment_id}'.\"\n                ),\n                style=\"green\",\n            )\n            if deployment.work_pool_name is not None:\n                await _print_deployment_work_pool_instructions(\n                    work_pool_name=deployment.work_pool_name, client=client\n                )\n\n            elif deployment.work_queue_name is not None:\n                app.console.print(\n                    \"\\nTo execute flow runs from this deployment, start an agent that\"\n                    f\" pulls work from the {deployment.work_queue_name!r} work queue:\"\n                )\n                app.console.print(\n                    f\"$ prefect agent start -q {deployment.work_queue_name!r}\",\n                    style=\"blue\",\n                )\n            else:\n                app.console.print(\n                    (\n                        \"\\nThis deployment does not specify a work queue name, which\"\n                        \" means agents will not be able to pick up its runs. To add a\"\n                        \" work queue, edit the deployment spec and re-run this command,\"\n                        \" or visit the deployment in the UI.\"\n                    ),\n                    style=\"red\",\n                )\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.clear_schedules","title":"clear_schedules async","text":"

Clear all schedules for a deployment.
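
For example (the flow and deployment names are illustrative):

$ prefect deployment schedule clear my-flow/my-deployment --accept-yes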

Source code in prefect/cli/deployment.py
@schedule_app.command(\"clear\")\nasync def clear_schedules(\n    deployment_name: str,\n    assume_yes: Optional[bool] = typer.Option(\n        False,\n        \"--accept-yes\",\n        \"-y\",\n        help=\"Accept the confirmation prompt without prompting\",\n    ),\n):\n    \"\"\"\n    Clear all schedules for a deployment.\n    \"\"\"\n    assert_deployment_name_format(deployment_name)\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(deployment_name)\n        except ObjectNotFound:\n            return exit_with_error(f\"Deployment {deployment_name!r} not found!\")\n\n        await client.read_flow(deployment.flow_id)\n\n        # Get input from user: confirm removal of all schedules\n        if not assume_yes and not typer.confirm(\n            \"Are you sure you want to clear all schedules for this deployment?\",\n        ):\n            exit_with_error(\"Clearing schedules cancelled.\")\n\n        for schedule in deployment.schedules:\n            try:\n                await client.delete_deployment_schedule(deployment.id, schedule.id)\n            except ObjectNotFound:\n                pass\n\n        exit_with_success(f\"Cleared all schedules for deployment {deployment_name}\")\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.create_schedule","title":"create_schedule async","text":"

Create a schedule for a given deployment.
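
For example, to add a daily 9 AM cron schedule (the deployment name is illustrative):

$ prefect deployment schedule create my-flow/my-deployment --cron "0 9 * * *"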

Source code in prefect/cli/deployment.py
@schedule_app.command(\"create\")\nasync def create_schedule(\n    name: str,\n    interval: Optional[float] = typer.Option(\n        None,\n        \"--interval\",\n        help=\"An interval to schedule on, specified in seconds\",\n        min=0.0001,\n    ),\n    interval_anchor: Optional[str] = typer.Option(\n        None,\n        \"--anchor-date\",\n        help=\"The anchor date for an interval schedule\",\n    ),\n    rrule_string: Optional[str] = typer.Option(\n        None, \"--rrule\", help=\"Deployment schedule rrule string\"\n    ),\n    cron_string: Optional[str] = typer.Option(\n        None, \"--cron\", help=\"Deployment schedule cron string\"\n    ),\n    cron_day_or: Optional[str] = typer.Option(\n        None,\n        \"--day_or\",\n        help=\"Control how croniter handles `day` and `day_of_week` entries\",\n    ),\n    timezone: Optional[str] = typer.Option(\n        None,\n        \"--timezone\",\n        help=\"Deployment schedule timezone string e.g. 'America/New_York'\",\n    ),\n    active: Optional[bool] = typer.Option(\n        True,\n        \"--active\",\n        help=\"Whether the schedule is active. Defaults to True.\",\n    ),\n    replace: Optional[bool] = typer.Option(\n        False,\n        \"--replace\",\n        help=\"Replace the deployment's current schedule(s) with this new schedule.\",\n    ),\n    assume_yes: Optional[bool] = typer.Option(\n        False,\n        \"--accept-yes\",\n        \"-y\",\n        help=\"Accept the confirmation prompt without prompting\",\n    ),\n):\n    \"\"\"\n    Create a schedule for a given deployment.\n    \"\"\"\n    assert_deployment_name_format(name)\n\n    if sum(option is not None for option in [interval, rrule_string, cron_string]) != 1:\n        exit_with_error(\n            \"Exactly one of `--interval`, `--rrule`, or `--cron` must be provided.\"\n        )\n\n    schedule = None\n\n    if interval_anchor and not interval:\n        exit_with_error(\"An anchor date can only be provided with an interval schedule\")\n\n    if interval is not None:\n        if interval_anchor:\n            try:\n                pendulum.parse(interval_anchor)\n            except ValueError:\n                return exit_with_error(\"The anchor date must be a valid date string.\")\n        interval_schedule = {\n            \"interval\": interval,\n            \"anchor_date\": interval_anchor,\n            \"timezone\": timezone,\n        }\n        schedule = IntervalSchedule(\n            **{k: v for k, v in interval_schedule.items() if v is not None}\n        )\n\n    if cron_string is not None:\n        cron_schedule = {\n            \"cron\": cron_string,\n            \"day_or\": cron_day_or,\n            \"timezone\": timezone,\n        }\n        schedule = CronSchedule(\n            **{k: v for k, v in cron_schedule.items() if v is not None}\n        )\n\n    if rrule_string is not None:\n        # a timezone in the `rrule_string` gets ignored by the RRuleSchedule constructor\n        if \"TZID\" in rrule_string and not timezone:\n            exit_with_error(\n                \"You can provide a timezone by providing a dict with a `timezone` key\"\n                \" to the --rrule option. E.g. 
{'rrule': 'FREQ=MINUTELY;INTERVAL=5',\"\n                \" 'timezone': 'America/New_York'}.\\nAlternatively, you can provide a\"\n                \" timezone by passing in a --timezone argument.\"\n            )\n        try:\n            schedule = RRuleSchedule(**json.loads(rrule_string))\n            if timezone:\n                # override timezone if specified via CLI argument\n                schedule.timezone = timezone\n        except json.JSONDecodeError:\n            schedule = RRuleSchedule(rrule=rrule_string, timezone=timezone)\n\n    if schedule is None:\n        return exit_with_success(\n            \"Could not create a valid schedule from the provided options.\"\n        )\n\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(name)\n        except ObjectNotFound:\n            return exit_with_error(f\"Deployment {name!r} not found!\")\n\n        num_schedules = len(deployment.schedules)\n        noun = \"schedule\" if num_schedules == 1 else \"schedules\"\n\n        if replace and num_schedules > 0:\n            if not assume_yes and not typer.confirm(\n                f\"Are you sure you want to replace {num_schedules} {noun} for {name}?\"\n            ):\n                return exit_with_error(\"Schedule replacement cancelled.\")\n\n            for existing_schedule in deployment.schedules:\n                try:\n                    await client.delete_deployment_schedule(\n                        deployment.id, existing_schedule.id\n                    )\n                except ObjectNotFound:\n                    pass\n\n        await client.create_deployment_schedules(deployment.id, [(schedule, active)])\n\n        if replace and num_schedules > 0:\n            exit_with_success(f\"Replaced existing deployment {noun} with new schedule!\")\n        else:\n            exit_with_success(\"Created deployment schedule!\")\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.delete","title":"delete async","text":"

Delete a deployment.

Examples: $ prefect deployment delete test_flow/test_deployment $ prefect deployment delete --id dfd3e220-a130-4149-9af6-8d487e02fea6

Source code in prefect/cli/deployment.py
@deployment_app.command()\nasync def delete(\n    name: Optional[str] = typer.Argument(\n        None, help=\"A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\"\n    ),\n    deployment_id: Optional[str] = typer.Option(\n        None, \"--id\", help=\"A deployment id to search for if no name is given\"\n    ),\n):\n    \"\"\"\n    Delete a deployment.\n\n    \\b\n    Examples:\n        \\b\n        $ prefect deployment delete test_flow/test_deployment\n        $ prefect deployment delete --id dfd3e220-a130-4149-9af6-8d487e02fea6\n    \"\"\"\n    async with get_client() as client:\n        if name is None and deployment_id is not None:\n            try:\n                await client.delete_deployment(deployment_id)\n                exit_with_success(f\"Deleted deployment '{deployment_id}'.\")\n            except ObjectNotFound:\n                exit_with_error(f\"Deployment {deployment_id!r} not found!\")\n        elif name is not None:\n            try:\n                deployment = await client.read_deployment_by_name(name)\n                await client.delete_deployment(deployment.id)\n                exit_with_success(f\"Deleted deployment '{name}'.\")\n            except ObjectNotFound:\n                exit_with_error(f\"Deployment {name!r} not found!\")\n        else:\n            exit_with_error(\"Must provide a deployment name or id\")\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.delete_schedule","title":"delete_schedule async","text":"

Delete a deployment schedule.
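A possible invocation (the deployment name and schedule ID below are placeholders):

$ prefect deployment schedule delete my-flow/my-deployment <schedule-id>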

Source code in prefect/cli/deployment.py
@schedule_app.command(\"delete\")\nasync def delete_schedule(\n    deployment_name: str,\n    schedule_id: UUID,\n    assume_yes: Optional[bool] = typer.Option(\n        False,\n        \"--accept-yes\",\n        \"-y\",\n        help=\"Accept the confirmation prompt without prompting\",\n    ),\n):\n    \"\"\"\n    Delete a deployment schedule.\n    \"\"\"\n    assert_deployment_name_format(deployment_name)\n\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(deployment_name)\n        except ObjectNotFound:\n            return exit_with_error(f\"Deployment {deployment_name} not found!\")\n\n        try:\n            schedule = [s for s in deployment.schedules if s.id == schedule_id][0]\n        except IndexError:\n            return exit_with_error(\"Deployment schedule not found!\")\n\n        if not assume_yes and not typer.confirm(\n            f\"Are you sure you want to delete this schedule: {schedule.schedule}\",\n        ):\n            return exit_with_error(\"Deletion cancelled.\")\n\n        try:\n            await client.delete_deployment_schedule(deployment.id, schedule_id)\n        except ObjectNotFound:\n            exit_with_error(\"Deployment schedule not found!\")\n\n        exit_with_success(f\"Deleted deployment schedule {schedule_id}\")\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.inspect","title":"inspect async","text":"

View details about a deployment.

\b Example: \b $ prefect deployment inspect \"hello-world/my-deployment\" { 'id': '610df9c3-0fb4-4856-b330-67f588d20201', 'created': '2022-08-01T18:36:25.192102+00:00', 'updated': '2022-08-01T18:36:25.188166+00:00', 'name': 'my-deployment', 'description': None, 'flow_id': 'b57b0aa2-ef3a-479e-be49-381fb0483b4e', 'schedules': None, 'parameters': {'name': 'Marvin'}, 'tags': ['test'], 'parameter_openapi_schema': { 'title': 'Parameters', 'type': 'object', 'properties': { 'name': { 'title': 'name', 'type': 'string' } }, 'required': ['name'] }, 'storage_document_id': '63ef008f-1e5d-4e07-a0d4-4535731adb32', 'infrastructure_document_id': '6702c598-7094-42c8-9785-338d2ec3a028', 'infrastructure': { 'type': 'process', 'env': {}, 'labels': {}, 'name': None, 'command': ['python', '-m', 'prefect.engine'], 'stream_output': True } }

Source code in prefect/cli/deployment.py
@deployment_app.command()\nasync def inspect(name: str):\n    \"\"\"\n    View details about a deployment.\n\n    \\b\n    Example:\n        \\b\n        $ prefect deployment inspect \"hello-world/my-deployment\"\n        {\n            'id': '610df9c3-0fb4-4856-b330-67f588d20201',\n            'created': '2022-08-01T18:36:25.192102+00:00',\n            'updated': '2022-08-01T18:36:25.188166+00:00',\n            'name': 'my-deployment',\n            'description': None,\n            'flow_id': 'b57b0aa2-ef3a-479e-be49-381fb0483b4e',\n            'schedules': None,\n            'parameters': {'name': 'Marvin'},\n            'tags': ['test'],\n            'parameter_openapi_schema': {\n                'title': 'Parameters',\n                'type': 'object',\n                'properties': {\n                    'name': {\n                        'title': 'name',\n                        'type': 'string'\n                    }\n                },\n                'required': ['name']\n            },\n            'storage_document_id': '63ef008f-1e5d-4e07-a0d4-4535731adb32',\n            'infrastructure_document_id': '6702c598-7094-42c8-9785-338d2ec3a028',\n            'infrastructure': {\n                'type': 'process',\n                'env': {},\n                'labels': {},\n                'name': None,\n                'command': ['python', '-m', 'prefect.engine'],\n                'stream_output': True\n            }\n        }\n\n    \"\"\"\n    assert_deployment_name_format(name)\n\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(name)\n        except ObjectNotFound:\n            exit_with_error(f\"Deployment {name!r} not found!\")\n\n        deployment_json = deployment.dict(json_compatible=True)\n\n        if deployment.infrastructure_document_id:\n            deployment_json[\"infrastructure\"] = Block._from_block_document(\n                await client.read_block_document(deployment.infrastructure_document_id)\n            ).dict(\n                exclude={\"_block_document_id\", \"_block_document_name\", \"_is_anonymous\"}\n            )\n\n        if client.server_type.supports_automations():\n            deployment_json[\"automations\"] = [\n                a.dict()\n                for a in await client.read_resource_related_automations(\n                    f\"prefect.deployment.{deployment.id}\"\n                )\n            ]\n\n    app.console.print(Pretty(deployment_json))\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.list_schedules","title":"list_schedules async","text":"

View all schedules for a deployment.
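A possible invocation (the deployment name is a placeholder):

$ prefect deployment schedule ls my-flow/my-deployment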

Source code in prefect/cli/deployment.py
@schedule_app.command(\"ls\")\nasync def list_schedules(deployment_name: str):\n    \"\"\"\n    View all schedules for a deployment.\n    \"\"\"\n    assert_deployment_name_format(deployment_name)\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(deployment_name)\n        except ObjectNotFound:\n            return exit_with_error(f\"Deployment {deployment_name!r} not found!\")\n\n    def sort_by_created_key(schedule: DeploymentSchedule):  # noqa\n        return pendulum.now(\"utc\") - schedule.created\n\n    def schedule_details(schedule: DeploymentSchedule):\n        if isinstance(schedule.schedule, IntervalSchedule):\n            return f\"interval: {schedule.schedule.interval}s\"\n        elif isinstance(schedule.schedule, CronSchedule):\n            return f\"cron: {schedule.schedule.cron}\"\n        elif isinstance(schedule.schedule, RRuleSchedule):\n            return f\"rrule: {schedule.schedule.rrule}\"\n        else:\n            return \"unknown\"\n\n    table = Table(\n        title=\"Deployment Schedules\",\n    )\n    table.add_column(\"ID\", style=\"blue\", no_wrap=True)\n    table.add_column(\"Schedule\", style=\"cyan\", no_wrap=False)\n    table.add_column(\"Active\", style=\"purple\", no_wrap=True)\n\n    for schedule in sorted(deployment.schedules, key=sort_by_created_key):\n        table.add_row(\n            str(schedule.id),\n            schedule_details(schedule),\n            str(schedule.active),\n        )\n\n    app.console.print(table)\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.ls","title":"ls async","text":"

View all deployments or deployments for specific flows.
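A possible invocation filtering by flow (the flow name is a placeholder; the option name assumes Typer's default underscore-to-dash conversion):

$ prefect deployment ls --flow-name my-flow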

Source code in prefect/cli/deployment.py
@deployment_app.command()\nasync def ls(flow_name: List[str] = None, by_created: bool = False):\n    \"\"\"\n    View all deployments or deployments for specific flows.\n    \"\"\"\n    async with get_client() as client:\n        deployments = await client.read_deployments(\n            flow_filter=FlowFilter(name={\"any_\": flow_name}) if flow_name else None\n        )\n        flows = {\n            flow.id: flow\n            for flow in await client.read_flows(\n                flow_filter=FlowFilter(id={\"any_\": [d.flow_id for d in deployments]})\n            )\n        }\n\n    def sort_by_name_keys(d):\n        return flows[d.flow_id].name, d.name\n\n    def sort_by_created_key(d):\n        return pendulum.now(\"utc\") - d.created\n\n    table = Table(\n        title=\"Deployments\",\n    )\n    table.add_column(\"Name\", style=\"blue\", no_wrap=True)\n    table.add_column(\"ID\", style=\"cyan\", no_wrap=True)\n\n    for deployment in sorted(\n        deployments, key=sort_by_created_key if by_created else sort_by_name_keys\n    ):\n        table.add_row(\n            f\"{flows[deployment.flow_id].name}/[bold]{deployment.name}[/]\",\n            str(deployment.id),\n        )\n\n    app.console.print(table)\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.pause_schedule","title":"pause_schedule async","text":"

Pause a deployment schedule.
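A possible invocation (deployment name and schedule ID are placeholders):

$ prefect deployment schedule pause my-flow/my-deployment <schedule-id>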

Source code in prefect/cli/deployment.py
@schedule_app.command(\"pause\")\nasync def pause_schedule(deployment_name: str, schedule_id: UUID):\n    \"\"\"\n    Pause a deployment schedule.\n    \"\"\"\n    assert_deployment_name_format(deployment_name)\n\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(deployment_name)\n        except ObjectNotFound:\n            return exit_with_error(f\"Deployment {deployment_name!r} not found!\")\n\n        try:\n            schedule = [s for s in deployment.schedules if s.id == schedule_id][0]\n        except IndexError:\n            return exit_with_error(\"Deployment schedule not found!\")\n\n        if not schedule.active:\n            return exit_with_error(\n                f\"Deployment schedule {schedule_id} is already inactive\"\n            )\n\n        await client.update_deployment_schedule(\n            deployment.id, schedule_id, active=False\n        )\n        exit_with_success(\n            f\"Paused schedule {schedule.schedule} for deployment {deployment_name}\"\n        )\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.resume_schedule","title":"resume_schedule async","text":"

Resume a deployment schedule.
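A possible invocation (deployment name and schedule ID are placeholders):

$ prefect deployment schedule resume my-flow/my-deployment <schedule-id>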

Source code in prefect/cli/deployment.py
@schedule_app.command(\"resume\")\nasync def resume_schedule(deployment_name: str, schedule_id: UUID):\n    \"\"\"\n    Resume a deployment schedule.\n    \"\"\"\n    assert_deployment_name_format(deployment_name)\n\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(deployment_name)\n        except ObjectNotFound:\n            return exit_with_error(f\"Deployment {deployment_name!r} not found!\")\n\n        try:\n            schedule = [s for s in deployment.schedules if s.id == schedule_id][0]\n        except IndexError:\n            return exit_with_error(\"Deployment schedule not found!\")\n\n        if schedule.active:\n            return exit_with_error(\n                f\"Deployment schedule {schedule_id} is already active\"\n            )\n\n        await client.update_deployment_schedule(deployment.id, schedule_id, active=True)\n        exit_with_success(\n            f\"Resumed schedule {schedule.schedule} for deployment {deployment_name}\"\n        )\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.run","title":"run async","text":"

Create a flow run for the given flow and deployment.

The flow run will be scheduled to run immediately unless --start-in or --start-at is specified. The flow run will not execute until a worker starts. To watch the flow run until it reaches a terminal state, use the --watch flag.
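For example, a run could be created with a parameter override and watched until it finishes (the deployment name and parameter value are placeholders):

$ prefect deployment run my-flow/my-deployment --param name=Marvin --watch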

Source code in prefect/cli/deployment.py
@deployment_app.command()\nasync def run(\n    name: Optional[str] = typer.Argument(\n        None, help=\"A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\"\n    ),\n    deployment_id: Optional[str] = typer.Option(\n        None,\n        \"--id\",\n        help=(\"A deployment id to search for if no name is given\"),\n    ),\n    job_variables: List[str] = typer.Option(\n        None,\n        \"-jv\",\n        \"--job-variable\",\n        help=(\n            \"A key, value pair (key=value) specifying a flow run job variable. The value will\"\n            \" be interpreted as JSON. May be passed multiple times to specify multiple\"\n            \" job variable values.\"\n        ),\n    ),\n    params: List[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--param\",\n        help=(\n            \"A key, value pair (key=value) specifying a flow parameter. The value will\"\n            \" be interpreted as JSON. May be passed multiple times to specify multiple\"\n            \" parameter values.\"\n        ),\n    ),\n    multiparams: Optional[str] = typer.Option(\n        None,\n        \"--params\",\n        help=(\n            \"A mapping of parameters to values. To use a stdin, pass '-'. Any \"\n            \"parameters passed with `--param` will take precedence over these values.\"\n        ),\n    ),\n    start_in: Optional[str] = typer.Option(\n        None,\n        \"--start-in\",\n        help=(\n            \"A human-readable string specifying a time interval to wait before starting\"\n            \" the flow run. E.g. 'in 5 minutes', 'in 1 hour', 'in 2 days'.\"\n        ),\n    ),\n    start_at: Optional[str] = typer.Option(\n        None,\n        \"--start-at\",\n        help=(\n            \"A human-readable string specifying a time to start the flow run. 
E.g.\"\n            \" 'at 5:30pm', 'at 2022-08-01 17:30', 'at 2022-08-01 17:30:00'.\"\n        ),\n    ),\n    tags: List[str] = typer.Option(\n        None,\n        \"--tag\",\n        help=(\"Tag(s) to be applied to flow run.\"),\n    ),\n    watch: bool = typer.Option(\n        False,\n        \"--watch\",\n        help=(\"Whether to poll the flow run until a terminal state is reached.\"),\n    ),\n    watch_interval: Optional[int] = typer.Option(\n        None,\n        \"--watch-interval\",\n        help=(\"How often to poll the flow run for state changes (in seconds).\"),\n    ),\n    watch_timeout: Optional[int] = typer.Option(\n        None,\n        \"--watch-timeout\",\n        help=(\"Timeout for --watch.\"),\n    ),\n):\n    \"\"\"\n    Create a flow run for the given flow and deployment.\n\n    The flow run will be scheduled to run immediately unless `--start-in` or `--start-at` is specified.\n    The flow run will not execute until a worker starts.\n    To watch the flow run until it reaches a terminal state, use the `--watch` flag.\n    \"\"\"\n    import dateparser\n\n    now = pendulum.now(\"UTC\")\n\n    multi_params = {}\n    if multiparams:\n        if multiparams == \"-\":\n            multiparams = sys.stdin.read()\n            if not multiparams:\n                exit_with_error(\"No data passed to stdin\")\n\n        try:\n            multi_params = json.loads(multiparams)\n        except ValueError as exc:\n            exit_with_error(f\"Failed to parse JSON: {exc}\")\n        if watch_interval and not watch:\n            exit_with_error(\n                \"`--watch-interval` can only be used with `--watch`.\",\n            )\n    cli_params = _load_json_key_values(params or [], \"parameter\")\n    conflicting_keys = set(cli_params.keys()).intersection(multi_params.keys())\n    if conflicting_keys:\n        app.console.print(\n            \"The following parameters were specified by `--param` and `--params`, the \"\n            f\"`--param` value will be used: {conflicting_keys}\"\n        )\n    parameters = {**multi_params, **cli_params}\n\n    job_vars = _load_json_key_values(job_variables or [], \"job variable\")\n    if start_in and start_at:\n        exit_with_error(\n            \"Only one of `--start-in` or `--start-at` can be set, not both.\"\n        )\n\n    elif start_in is None and start_at is None:\n        scheduled_start_time = now\n        human_dt_diff = \" (now)\"\n    else:\n        if start_in:\n            start_time_raw = \"in \" + start_in\n        else:\n            start_time_raw = \"at \" + start_at\n        with warnings.catch_warnings():\n            # PyTZ throws a warning based on dateparser usage of the library\n            # See https://github.com/scrapinghub/dateparser/issues/1089\n            warnings.filterwarnings(\"ignore\", module=\"dateparser\")\n\n            try:\n                start_time_parsed = dateparser.parse(\n                    start_time_raw,\n                    settings={\n                        \"TO_TIMEZONE\": \"UTC\",\n                        \"RETURN_AS_TIMEZONE_AWARE\": False,\n                        \"PREFER_DATES_FROM\": \"future\",\n                        \"RELATIVE_BASE\": datetime.fromtimestamp(\n                            now.timestamp(), tz=timezone.utc\n                        ),\n                    },\n                )\n\n            except Exception as exc:\n                exit_with_error(f\"Failed to parse '{start_time_raw!r}': {exc!s}\")\n\n        if start_time_parsed is None:\n       
     exit_with_error(f\"Unable to parse scheduled start time {start_time_raw!r}.\")\n\n        scheduled_start_time = pendulum.instance(start_time_parsed)\n        human_dt_diff = (\n            \" (\" + pendulum.format_diff(scheduled_start_time.diff(now)) + \")\"\n        )\n\n    async with get_client() as client:\n        deployment = await get_deployment(client, name, deployment_id)\n        flow = await client.read_flow(deployment.flow_id)\n\n        deployment_parameters = deployment.parameter_openapi_schema[\"properties\"].keys()\n        unknown_keys = set(parameters.keys()).difference(deployment_parameters)\n        if unknown_keys:\n            available_parameters = (\n                (\n                    \"The following parameters are available on the deployment: \"\n                    + listrepr(deployment_parameters, sep=\", \")\n                )\n                if deployment_parameters\n                else \"This deployment does not accept parameters.\"\n            )\n\n            exit_with_error(\n                \"The following parameters were specified but not found on the \"\n                f\"deployment: {listrepr(unknown_keys, sep=', ')}\"\n                f\"\\n{available_parameters}\"\n            )\n\n        app.console.print(\n            f\"Creating flow run for deployment '{flow.name}/{deployment.name}'...\",\n        )\n\n        try:\n            flow_run = await client.create_flow_run_from_deployment(\n                deployment.id,\n                parameters=parameters,\n                state=Scheduled(scheduled_time=scheduled_start_time),\n                tags=tags,\n                job_variables=job_vars,\n            )\n        except PrefectHTTPStatusError as exc:\n            detail = exc.response.json().get(\"detail\")\n            if detail:\n                exit_with_error(\n                    exc.response.json()[\"detail\"],\n                )\n            else:\n                raise\n\n    if PREFECT_UI_URL:\n        run_url = f\"{PREFECT_UI_URL.value()}/flow-runs/flow-run/{flow_run.id}\"\n    else:\n        run_url = \"<no dashboard available>\"\n\n    datetime_local_tz = scheduled_start_time.in_tz(pendulum.tz.local_timezone())\n    scheduled_display = (\n        datetime_local_tz.to_datetime_string()\n        + \" \"\n        + datetime_local_tz.tzname()\n        + human_dt_diff\n    )\n\n    app.console.print(f\"Created flow run {flow_run.name!r}.\")\n    app.console.print(\n        textwrap.dedent(\n            f\"\"\"\n        \u2514\u2500\u2500 UUID: {flow_run.id}\n        \u2514\u2500\u2500 Parameters: {flow_run.parameters}\n        \u2514\u2500\u2500 Job Variables: {flow_run.job_variables}\n        \u2514\u2500\u2500 Scheduled start time: {scheduled_display}\n        \u2514\u2500\u2500 URL: {run_url}\n        \"\"\"\n        ).strip(),\n        soft_wrap=True,\n    )\n    if watch:\n        watch_interval = 5 if watch_interval is None else watch_interval\n        app.console.print(f\"Watching flow run {flow_run.name!r}...\")\n        finished_flow_run = await wait_for_flow_run(\n            flow_run.id,\n            timeout=watch_timeout,\n            poll_interval=watch_interval,\n            log_states=True,\n        )\n        finished_flow_run_state = finished_flow_run.state\n        if finished_flow_run_state.is_completed():\n            exit_with_success(\n                f\"Flow run finished successfully in {finished_flow_run_state.name!r}.\"\n            )\n        exit_with_error(\n            f\"Flow run finished in 
state {finished_flow_run_state.name!r}.\",\n            code=1,\n        )\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.str_presenter","title":"str_presenter","text":"

Configures YAML for dumping multiline strings. Ref: https://stackoverflow.com/questions/8640959/how-can-i-control-what-scalar-form-pyyaml-uses-for-my-data
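A typical way to register this presenter with PyYAML (a usage sketch; this registration is not part of the function itself):

import yaml

# register the presenter for both the default and safe dumpers
yaml.add_representer(str, str_presenter)
yaml.representer.SafeRepresenter.add_representer(str, str_presenter)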

Source code in prefect/cli/deployment.py
def str_presenter(dumper, data):\n    \"\"\"\n    configures yaml for dumping multiline strings\n    Ref: https://stackoverflow.com/questions/8640959/how-can-i-control-what-scalar-form-pyyaml-uses-for-my-data\n    \"\"\"\n    if len(data.splitlines()) > 1:  # check for multiline string\n        return dumper.represent_scalar(\"tag:yaml.org,2002:str\", data, style=\"|\")\n    return dumper.represent_scalar(\"tag:yaml.org,2002:str\", data)\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/dev/","title":"dev","text":"","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev","title":"prefect.cli.dev","text":"

Command line interface for working with Prefect Server

","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.agent","title":"agent async","text":"

Starts a hot-reloading development agent process.
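A possible invocation, pulling from a single work queue:

$ prefect dev agent -q default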

Source code in prefect/cli/dev.py
@dev_app.command()\nasync def agent(\n    api_url: str = SettingsOption(PREFECT_API_URL),\n    work_queues: List[str] = typer.Option(\n        [\"default\"],\n        \"-q\",\n        \"--work-queue\",\n        help=\"One or more work queue names for the agent to pull from.\",\n    ),\n):\n    \"\"\"\n    Starts a hot-reloading development agent process.\n    \"\"\"\n    # Delayed import since this is only a 'dev' dependency\n    import watchfiles\n\n    app.console.print(\"Creating hot-reloading agent process...\")\n\n    try:\n        await watchfiles.arun_process(\n            prefect.__module_path__,\n            target=agent_process_entrypoint,\n            kwargs=dict(api=api_url, work_queues=work_queues),\n        )\n    except RuntimeError as err:\n        # a bug in watchfiles causes an 'Already borrowed' error from Rust when\n        # exiting: https://github.com/samuelcolvin/watchfiles/issues/200\n        if str(err).strip() != \"Already borrowed\":\n            raise\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.agent_process_entrypoint","title":"agent_process_entrypoint","text":"

An entrypoint for starting an agent in a subprocess. Adds a Rich console to the Typer app, processes Typer default parameters, then starts an agent. All kwargs are forwarded to prefect.cli.agent.start.

Source code in prefect/cli/dev.py
def agent_process_entrypoint(**kwargs):\n    \"\"\"\n    An entrypoint for starting an agent in a subprocess. Adds a Rich console\n    to the Typer app, processes Typer default parameters, then starts an agent.\n    All kwargs are forwarded to  `prefect.cli.agent.start`.\n    \"\"\"\n    import inspect\n\n    # import locally so only the `dev` command breaks if Typer internals change\n    from typer.models import ParameterInfo\n\n    # Typer does not process default parameters when calling a function\n    # directly, so we must set `start_agent`'s default parameters manually.\n    # get the signature of the `start_agent` function\n    start_agent_signature = inspect.signature(start_agent)\n\n    # for any arguments not present in kwargs, use the default value.\n    for name, param in start_agent_signature.parameters.items():\n        if name not in kwargs:\n            # All `param.default` values for start_agent are Typer params that store the\n            # actual default value in their `default` attribute and we must call\n            # `param.default.default` to get the actual default value. We should also\n            # ensure we extract the right default if non-Typer defaults are added\n            # to `start_agent` in the future.\n            if isinstance(param.default, ParameterInfo):\n                default = param.default.default\n            else:\n                default = param.default\n\n            # Some defaults are Prefect `SettingsOption.value` methods\n            # that must be called to get the actual value.\n            kwargs[name] = default() if callable(default) else default\n\n    try:\n        start_agent(**kwargs)  # type: ignore\n    except KeyboardInterrupt:\n        # expected when watchfiles kills the process\n        pass\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.api","title":"api async","text":"

Starts a hot-reloading development API.
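A possible invocation (host and port fall back to the PREFECT_SERVER_API_HOST and PREFECT_SERVER_API_PORT settings):

$ prefect dev api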

Source code in prefect/cli/dev.py
@dev_app.command()\nasync def api(\n    host: str = SettingsOption(PREFECT_SERVER_API_HOST),\n    port: int = SettingsOption(PREFECT_SERVER_API_PORT),\n    log_level: str = \"DEBUG\",\n    services: bool = True,\n):\n    \"\"\"\n    Starts a hot-reloading development API.\n    \"\"\"\n    import watchfiles\n\n    server_env = os.environ.copy()\n    server_env[\"PREFECT_API_SERVICES_RUN_IN_APP\"] = str(services)\n    server_env[\"PREFECT_API_SERVICES_UI\"] = \"False\"\n    server_env[\"PREFECT_UI_API_URL\"] = f\"http://{host}:{port}/api\"\n\n    command = [\n        sys.executable,\n        \"-m\",\n        \"uvicorn\",\n        \"--factory\",\n        \"prefect.server.api.server:create_app\",\n        \"--host\",\n        str(host),\n        \"--port\",\n        str(port),\n        \"--log-level\",\n        log_level.lower(),\n    ]\n\n    app.console.print(f\"Running: {' '.join(command)}\")\n    import signal\n\n    stop_event = anyio.Event()\n    start_command = partial(\n        run_process, command=command, env=server_env, stream_output=True\n    )\n\n    async with anyio.create_task_group() as tg:\n        try:\n            server_pid = await tg.start(start_command)\n            async for _ in watchfiles.awatch(\n                prefect.__module_path__,\n                stop_event=stop_event,  # type: ignore\n            ):\n                # when any watched files change, restart the server\n                app.console.print(\"Restarting Prefect Server...\")\n                os.kill(server_pid, signal.SIGTERM)  # type: ignore\n                # start a new server\n                server_pid = await tg.start(start_command)\n        except RuntimeError as err:\n            # a bug in watchfiles causes an 'Already borrowed' error from Rust when\n            # exiting: https://github.com/samuelcolvin/watchfiles/issues/200\n            if str(err).strip() != \"Already borrowed\":\n                raise\n        except KeyboardInterrupt:\n            # exit cleanly on ctrl-c by killing the server process if it's\n            # still running\n            try:\n                os.kill(server_pid, signal.SIGTERM)  # type: ignore\n            except ProcessLookupError:\n                # process already exited\n                pass\n\n            stop_event.set()\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.build_docs","title":"build_docs","text":"

Builds REST API reference documentation for static display.
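A possible invocation (requires an editable install of Prefect):

$ prefect dev build-docs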

Source code in prefect/cli/dev.py
@dev_app.command()\ndef build_docs(\n    schema_path: str = None,\n):\n    \"\"\"\n    Builds REST API reference documentation for static display.\n    \"\"\"\n    exit_with_error_if_not_editable_install()\n\n    from prefect.server.api.server import create_app\n\n    schema = create_app(ephemeral=True).openapi()\n\n    if not schema_path:\n        schema_path = (\n            prefect.__development_base_path__ / \"docs\" / \"api-ref\" / \"schema.json\"\n        ).absolute()\n    # overwrite info for display purposes\n    schema[\"info\"] = {}\n    with open(schema_path, \"w\") as f:\n        json.dump(schema, f)\n    app.console.print(f\"OpenAPI schema written to {schema_path}\")\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.build_image","title":"build_image","text":"

Build a Docker image for development.
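A possible invocation that only prints the docker build command it would run:

$ prefect dev build-image --dry-run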

Source code in prefect/cli/dev.py
@dev_app.command()\ndef build_image(\n    arch: str = typer.Option(\n        None,\n        help=(\n            \"The architecture to build the container for. \"\n            \"Defaults to the architecture of the host Python. \"\n            f\"[default: {platform.machine()}]\"\n        ),\n    ),\n    python_version: str = typer.Option(\n        None,\n        help=(\n            \"The Python version to build the container for. \"\n            \"Defaults to the version of the host Python. \"\n            f\"[default: {python_version_minor()}]\"\n        ),\n    ),\n    flavor: str = typer.Option(\n        None,\n        help=(\n            \"An alternative flavor to build, for example 'conda'. \"\n            \"Defaults to the standard Python base image\"\n        ),\n    ),\n    dry_run: bool = False,\n):\n    \"\"\"\n    Build a docker image for development.\n    \"\"\"\n    exit_with_error_if_not_editable_install()\n    # TODO: Once https://github.com/tiangolo/typer/issues/354 is addressed, the\n    #       default can be set in the function signature\n    arch = arch or platform.machine()\n    python_version = python_version or python_version_minor()\n\n    tag = get_prefect_image_name(python_version=python_version, flavor=flavor)\n\n    # Here we use a subprocess instead of the docker-py client to easily stream output\n    # as it comes\n    command = [\n        \"docker\",\n        \"build\",\n        str(prefect.__development_base_path__),\n        \"--tag\",\n        tag,\n        \"--platform\",\n        f\"linux/{arch}\",\n        \"--build-arg\",\n        \"PREFECT_EXTRAS=[dev]\",\n        \"--build-arg\",\n        f\"PYTHON_VERSION={python_version}\",\n    ]\n\n    if flavor:\n        command += [\"--build-arg\", f\"BASE_IMAGE=prefect-{flavor}\"]\n\n    if dry_run:\n        print(\" \".join(command))\n        return\n\n    try:\n        subprocess.check_call(command, shell=sys.platform == \"win32\")\n    except subprocess.CalledProcessError:\n        exit_with_error(\"Failed to build image!\")\n    else:\n        exit_with_success(f\"Built image {tag!r} for linux/{arch}\")\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.container","title":"container","text":"

Run a Docker container with local code mounted and installed.
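A possible invocation (assumes a local Docker daemon and an editable install of Prefect):

$ prefect dev container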

Source code in prefect/cli/dev.py
@dev_app.command()\ndef container(bg: bool = False, name=\"prefect-dev\", api: bool = True, tag: str = None):\n    \"\"\"\n    Run a docker container with local code mounted and installed.\n    \"\"\"\n    exit_with_error_if_not_editable_install()\n    import docker\n    from docker.models.containers import Container\n\n    client = docker.from_env()\n\n    containers = client.containers.list()\n    container_names = {container.name for container in containers}\n    if name in container_names:\n        exit_with_error(\n            f\"Container {name!r} already exists. Specify a different name or stop \"\n            \"the existing container.\"\n        )\n\n    blocking_cmd = \"prefect dev api\" if api else \"sleep infinity\"\n    tag = tag or get_prefect_image_name()\n\n    container: Container = client.containers.create(\n        image=tag,\n        command=[\n            \"/bin/bash\",\n            \"-c\",\n            (  # noqa\n                \"pip install -e /opt/prefect/repo\\\\[dev\\\\] && touch /READY &&\"\n                f\" {blocking_cmd}\"\n            ),\n        ],\n        name=name,\n        auto_remove=True,\n        working_dir=\"/opt/prefect/repo\",\n        volumes=[f\"{prefect.__development_base_path__}:/opt/prefect/repo\"],\n        shm_size=\"4G\",\n    )\n\n    print(f\"Starting container for image {tag!r}...\")\n    container.start()\n\n    print(\"Waiting for installation to complete\", end=\"\", flush=True)\n    try:\n        ready = False\n        while not ready:\n            print(\".\", end=\"\", flush=True)\n            result = container.exec_run(\"test -f /READY\")\n            ready = result.exit_code == 0\n            if not ready:\n                time.sleep(3)\n    except BaseException:\n        print(\"\\nInterrupted. Stopping container...\")\n        container.stop()\n        raise\n\n    print(\n        textwrap.dedent(\n            f\"\"\"\n            Container {container.name!r} is ready! To connect to the container, run:\n\n                docker exec -it {container.name} /bin/bash\n            \"\"\"\n        )\n    )\n\n    if bg:\n        print(\n            textwrap.dedent(\n                f\"\"\"\n                The container will run forever. Stop the container with:\n\n                    docker stop {container.name}\n                \"\"\"\n            )\n        )\n        # Exit without stopping\n        return\n\n    try:\n        print(\"Send a keyboard interrupt to exit...\")\n        container.wait()\n    except KeyboardInterrupt:\n        pass  # Avoid showing \"Abort\"\n    finally:\n        print(\"\\nStopping container...\")\n        container.stop()\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.kubernetes_manifest","title":"kubernetes_manifest","text":"

Generates a Kubernetes manifest for development.

Example

$ prefect dev kubernetes-manifest | kubectl apply -f -

Source code in prefect/cli/dev.py
@dev_app.command()\ndef kubernetes_manifest():\n    \"\"\"\n    Generates a Kubernetes manifest for development.\n\n    Example:\n        $ prefect dev kubernetes-manifest | kubectl apply -f -\n    \"\"\"\n    exit_with_error_if_not_editable_install()\n\n    template = Template(\n        (\n            prefect.__module_path__ / \"cli\" / \"templates\" / \"kubernetes-dev.yaml\"\n        ).read_text()\n    )\n    manifest = template.substitute(\n        {\n            \"prefect_root_directory\": prefect.__development_base_path__,\n            \"image_name\": get_prefect_image_name(),\n        }\n    )\n    print(manifest)\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.start","title":"start async","text":"

Starts a hot-reloading development server with API, UI, and agent processes.

Each service has an individual command if you wish to start it separately, and each service can also be excluded here.
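A possible invocation that skips the UI process:

$ prefect dev start --no-ui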

Source code in prefect/cli/dev.py
@dev_app.command()\nasync def start(\n    exclude_api: bool = typer.Option(False, \"--no-api\"),\n    exclude_ui: bool = typer.Option(False, \"--no-ui\"),\n    exclude_agent: bool = typer.Option(False, \"--no-agent\"),\n    work_queues: List[str] = typer.Option(\n        [\"default\"],\n        \"-q\",\n        \"--work-queue\",\n        help=\"One or more work queue names for the dev agent to pull from.\",\n    ),\n):\n    \"\"\"\n    Starts a hot-reloading development server with API, UI, and agent processes.\n\n    Each service has an individual command if you wish to start them separately.\n    Each service can be excluded here as well.\n    \"\"\"\n    async with anyio.create_task_group() as tg:\n        if not exclude_api:\n            tg.start_soon(\n                partial(\n                    api,\n                    host=PREFECT_SERVER_API_HOST.value(),\n                    port=PREFECT_SERVER_API_PORT.value(),\n                )\n            )\n        if not exclude_ui:\n            tg.start_soon(ui)\n        if not exclude_agent:\n            # Hook the agent to the hosted API if running\n            if not exclude_api:\n                host = f\"http://{PREFECT_SERVER_API_HOST.value()}:{PREFECT_SERVER_API_PORT.value()}/api\"  # noqa\n            else:\n                host = PREFECT_API_URL.value()\n            tg.start_soon(agent, host, work_queues)\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.ui","title":"ui async","text":"

Starts a hot-reloading development UI.
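A possible invocation (requires an editable install with the UI sources available):

$ prefect dev ui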

Source code in prefect/cli/dev.py
@dev_app.command()\nasync def ui():\n    \"\"\"\n    Starts a hot-reloading development UI.\n    \"\"\"\n    exit_with_error_if_not_editable_install()\n    with tmpchdir(prefect.__development_base_path__ / \"ui\"):\n        app.console.print(\"Installing npm packages...\")\n        await run_process([\"npm\", \"install\"], stream_output=True)\n\n        app.console.print(\"Starting UI development server...\")\n        await run_process(command=[\"npm\", \"run\", \"serve\"], stream_output=True)\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/events/","title":"Events","text":"","tags":["Python API","CLI","Events"]},{"location":"api-ref/prefect/cli/events/#prefect.cli.events","title":"prefect.cli.events","text":"","tags":["Python API","CLI","Events"]},{"location":"api-ref/prefect/cli/events/#prefect.cli.events.stream","title":"stream async","text":"

Subscribes to the event stream of a workspace, printing each event as it is received. By default, events are printed as JSON, but can be printed as text by passing --format text.
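A possible invocation that prints events as text and streams only one event:

$ prefect events stream --format text --run-once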

Source code in prefect/cli/events.py
@events_app.command()\nasync def stream(\n    format: StreamFormat = typer.Option(\n        StreamFormat.json, \"--format\", help=\"Output format (json or text)\"\n    ),\n    output_file: str = typer.Option(\n        None, \"--output-file\", help=\"File to write events to\"\n    ),\n    account: bool = typer.Option(\n        False,\n        \"--account\",\n        help=\"Stream events for entire account, including audit logs\",\n    ),\n    run_once: bool = typer.Option(False, \"--run-once\", help=\"Stream only one event\"),\n):\n    \"\"\"Subscribes to the event stream of a workspace, printing each event\n    as it is received. By default, events are printed as JSON, but can be\n    printed as text by passing `--format text`.\n    \"\"\"\n\n    try:\n        if account:\n            events_subscriber = PrefectCloudAccountEventSubscriber()\n        else:\n            events_subscriber = get_events_subscriber()\n\n        app.console.print(\"Subscribing to event stream...\")\n        async with events_subscriber as subscriber:\n            async for event in subscriber:\n                await handle_event(event, format, output_file)\n                if run_once:\n                    typer.Exit(0)\n    except Exception as exc:\n        handle_error(exc)\n
","tags":["Python API","CLI","Events"]},{"location":"api-ref/prefect/cli/flow/","title":"flow","text":"","tags":["Python API","flows","CLI"]},{"location":"api-ref/prefect/cli/flow/#prefect.cli.flow","title":"prefect.cli.flow","text":"

Command line interface for working with flows.

","tags":["Python API","flows","CLI"]},{"location":"api-ref/prefect/cli/flow/#prefect.cli.flow.ls","title":"ls async","text":"

View flows.
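A possible invocation (the option name assumes Typer's default handling of the limit parameter):

$ prefect flow ls --limit 10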

Source code in prefect/cli/flow.py
@flow_app.command()\nasync def ls(\n    limit: int = 15,\n):\n    \"\"\"\n    View flows.\n    \"\"\"\n    async with get_client() as client:\n        flows = await client.read_flows(\n            limit=limit,\n            sort=FlowSort.CREATED_DESC,\n        )\n\n    table = Table(title=\"Flows\")\n    table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Name\", style=\"green\", no_wrap=True)\n    table.add_column(\"Created\", no_wrap=True)\n\n    for flow in flows:\n        table.add_row(\n            str(flow.id),\n            str(flow.name),\n            str(flow.created),\n        )\n\n    app.console.print(table)\n
","tags":["Python API","flows","CLI"]},{"location":"api-ref/prefect/cli/flow/#prefect.cli.flow.serve","title":"serve async","text":"

Serve a flow via an entrypoint.
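A possible invocation (the entrypoint path, flow function, and deployment name are placeholders):

$ prefect flow serve ./my_flows.py:my_flow --name my-deployment --interval 60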

Source code in prefect/cli/flow.py
@flow_app.command()\nasync def serve(\n    entrypoint: str = typer.Argument(\n        ...,\n        help=(\n            \"The path to a file containing a flow and the name of the flow function in\"\n            \" the format `./path/to/file.py:flow_func_name`.\"\n        ),\n    ),\n    name: str = typer.Option(\n        ...,\n        \"--name\",\n        \"-n\",\n        help=\"The name to give the deployment created for the flow.\",\n    ),\n    description: Optional[str] = typer.Option(\n        None,\n        \"--description\",\n        \"-d\",\n        help=(\n            \"The description to give the created deployment. If not provided, the\"\n            \" description will be populated from the flow's description.\"\n        ),\n    ),\n    version: Optional[str] = typer.Option(\n        None, \"-v\", \"--version\", help=\"A version to give the created deployment.\"\n    ),\n    tags: Optional[List[str]] = typer.Option(\n        None,\n        \"-t\",\n        \"--tag\",\n        help=\"One or more optional tags to apply to the created deployment.\",\n    ),\n    cron: Optional[str] = typer.Option(\n        None,\n        \"--cron\",\n        help=(\n            \"A cron string that will be used to set a schedule for the created\"\n            \" deployment.\"\n        ),\n    ),\n    interval: Optional[int] = typer.Option(\n        None,\n        \"--interval\",\n        help=(\n            \"An integer specifying an interval (in seconds) between scheduled runs of\"\n            \" the flow.\"\n        ),\n    ),\n    interval_anchor: Optional[str] = typer.Option(\n        None, \"--anchor-date\", help=\"The start date for an interval schedule.\"\n    ),\n    rrule: Optional[str] = typer.Option(\n        None,\n        \"--rrule\",\n        help=\"An RRule that will be used to set a schedule for the created deployment.\",\n    ),\n    timezone: Optional[str] = typer.Option(\n        None,\n        \"--timezone\",\n        help=\"Timezone to used scheduling flow runs e.g. 'America/New_York'\",\n    ),\n    pause_on_shutdown: bool = typer.Option(\n        True,\n        help=(\n            \"If set, provided schedule will be paused when the serve command is\"\n            \" stopped. 
If not set, the schedules will continue running.\"\n        ),\n    ),\n):\n    \"\"\"\n    Serve a flow via an entrypoint.\n    \"\"\"\n    runner = Runner(name=name, pause_on_shutdown=pause_on_shutdown)\n    try:\n        schedules = []\n        if interval or cron or rrule:\n            schedule = construct_schedule(\n                interval=interval,\n                cron=cron,\n                rrule=rrule,\n                timezone=timezone,\n                anchor_date=interval_anchor,\n            )\n            schedules = [MinimalDeploymentSchedule(schedule=schedule, active=True)]\n\n        runner_deployment = RunnerDeployment.from_entrypoint(\n            entrypoint=entrypoint,\n            name=name,\n            schedules=schedules,\n            description=description,\n            tags=tags or [],\n            version=version,\n        )\n    except (MissingFlowError, ValueError) as exc:\n        exit_with_error(str(exc))\n    deployment_id = await runner.add_deployment(runner_deployment)\n\n    help_message = (\n        f\"[green]Your flow {runner_deployment.flow_name!r} is being served and polling\"\n        \" for scheduled runs!\\n[/]\\nTo trigger a run for this flow, use the following\"\n        \" command:\\n[blue]\\n\\t$ prefect deployment run\"\n        f\" '{runner_deployment.flow_name}/{name}'\\n[/]\"\n    )\n    if PREFECT_UI_URL:\n        help_message += (\n            \"\\nYou can also run your flow via the Prefect UI:\"\n            f\" [blue]{PREFECT_UI_URL.value()}/deployments/deployment/{deployment_id}[/]\\n\"\n        )\n\n    app.console.print(help_message, soft_wrap=True)\n    await runner.start()\n
","tags":["Python API","flows","CLI"]},{"location":"api-ref/prefect/cli/flow_run/","title":"flow_run","text":"","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run","title":"prefect.cli.flow_run","text":"

Command line interface for working with flow runs

","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.cancel","title":"cancel async","text":"

Cancel a flow run by ID.
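A possible invocation (the ID is a placeholder; the command group is assumed to be registered as flow-run):

$ prefect flow-run cancel <flow-run-id>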

Source code in prefect/cli/flow_run.py
@flow_run_app.command()\nasync def cancel(id: UUID):\n    \"\"\"Cancel a flow run by ID.\"\"\"\n    async with get_client() as client:\n        cancelling_state = State(type=StateType.CANCELLING)\n        try:\n            result = await client.set_flow_run_state(\n                flow_run_id=id, state=cancelling_state\n            )\n        except ObjectNotFound:\n            exit_with_error(f\"Flow run '{id}' not found!\")\n\n    if result.status == SetStateStatus.ABORT:\n        exit_with_error(\n            f\"Flow run '{id}' was unable to be cancelled. Reason:\"\n            f\" '{result.details.reason}'\"\n        )\n\n    exit_with_success(f\"Flow run '{id}' was successfully scheduled for cancellation.\")\n
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.delete","title":"delete async","text":"

Delete a flow run by ID.
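A possible invocation (the ID is a placeholder):

$ prefect flow-run delete <flow-run-id>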

Source code in prefect/cli/flow_run.py
@flow_run_app.command()\nasync def delete(id: UUID):\n    \"\"\"\n    Delete a flow run by ID.\n    \"\"\"\n    async with get_client() as client:\n        try:\n            await client.delete_flow_run(id)\n        except ObjectNotFound:\n            exit_with_error(f\"Flow run '{id}' not found!\")\n\n    exit_with_success(f\"Successfully deleted flow run '{id}'.\")\n
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.inspect","title":"inspect async","text":"

View details about a flow run.
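A possible invocation (the ID is a placeholder):

$ prefect flow-run inspect <flow-run-id>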

Source code in prefect/cli/flow_run.py
@flow_run_app.command()\nasync def inspect(id: UUID):\n    \"\"\"\n    View details about a flow run.\n    \"\"\"\n    async with get_client() as client:\n        try:\n            flow_run = await client.read_flow_run(id)\n        except httpx.HTTPStatusError as exc:\n            if exc.response.status_code == status.HTTP_404_NOT_FOUND:\n                exit_with_error(f\"Flow run {id!r} not found!\")\n            else:\n                raise\n\n    app.console.print(Pretty(flow_run))\n
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.logs","title":"logs async","text":"

View logs for a flow run.
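A possible invocation showing only the most recent 20 logs (the ID is a placeholder):

$ prefect flow-run logs <flow-run-id> --tail --num-logs 20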

Source code in prefect/cli/flow_run.py
@flow_run_app.command()\nasync def logs(\n    id: UUID,\n    head: bool = typer.Option(\n        False,\n        \"--head\",\n        \"-h\",\n        help=(\n            f\"Show the first {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS} logs instead of\"\n            \" all logs.\"\n        ),\n    ),\n    num_logs: int = typer.Option(\n        None,\n        \"--num-logs\",\n        \"-n\",\n        help=(\n            \"Number of logs to show when using the --head or --tail flag. If None,\"\n            f\" defaults to {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS}.\"\n        ),\n        min=1,\n    ),\n    reverse: bool = typer.Option(\n        False,\n        \"--reverse\",\n        \"-r\",\n        help=\"Reverse the logs order to print the most recent logs first\",\n    ),\n    tail: bool = typer.Option(\n        False,\n        \"--tail\",\n        \"-t\",\n        help=(\n            f\"Show the last {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS} logs instead of\"\n            \" all logs.\"\n        ),\n    ),\n):\n    \"\"\"\n    View logs for a flow run.\n    \"\"\"\n    # Pagination - API returns max 200 (LOGS_DEFAULT_PAGE_SIZE) logs at a time\n    offset = 0\n    more_logs = True\n    num_logs_returned = 0\n\n    # if head and tail flags are being used together\n    if head and tail:\n        exit_with_error(\"Please provide either a `head` or `tail` option but not both.\")\n\n    user_specified_num_logs = (\n        num_logs or LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS\n        if head or tail or num_logs\n        else None\n    )\n\n    # if using tail update offset according to LOGS_DEFAULT_PAGE_SIZE\n    if tail:\n        offset = max(0, user_specified_num_logs - LOGS_DEFAULT_PAGE_SIZE)\n\n    log_filter = LogFilter(flow_run_id={\"any_\": [id]})\n\n    async with get_client() as client:\n        # Get the flow run\n        try:\n            flow_run = await client.read_flow_run(id)\n        except ObjectNotFound:\n            exit_with_error(f\"Flow run {str(id)!r} not found!\")\n\n        while more_logs:\n            num_logs_to_return_from_page = (\n                LOGS_DEFAULT_PAGE_SIZE\n                if user_specified_num_logs is None\n                else min(\n                    LOGS_DEFAULT_PAGE_SIZE, user_specified_num_logs - num_logs_returned\n                )\n            )\n\n            # Get the next page of logs\n            page_logs = await client.read_logs(\n                log_filter=log_filter,\n                limit=num_logs_to_return_from_page,\n                offset=offset,\n                sort=(\n                    LogSort.TIMESTAMP_DESC if reverse or tail else LogSort.TIMESTAMP_ASC\n                ),\n            )\n\n            for log in reversed(page_logs) if tail and not reverse else page_logs:\n                app.console.print(\n                    # Print following the flow run format (declared in logging.yml)\n                    (\n                        f\"{pendulum.instance(log.timestamp).to_datetime_string()}.{log.timestamp.microsecond // 1000:03d} |\"\n                        f\" {logging.getLevelName(log.level):7s} | Flow run\"\n                        f\" {flow_run.name!r} - {log.message}\"\n                    ),\n                    soft_wrap=True,\n                )\n\n            # Update the number of logs retrieved\n            num_logs_returned += num_logs_to_return_from_page\n\n            if tail:\n                #  If the current offset is not 0, update the offset for the next page\n                if offset != 0:\n                    
offset = (\n                        0\n                        # Reset the offset to 0 if there are less logs than the LOGS_DEFAULT_PAGE_SIZE to get the remaining log\n                        if offset < LOGS_DEFAULT_PAGE_SIZE\n                        else offset - LOGS_DEFAULT_PAGE_SIZE\n                    )\n                else:\n                    more_logs = False\n            else:\n                if len(page_logs) == LOGS_DEFAULT_PAGE_SIZE:\n                    offset += LOGS_DEFAULT_PAGE_SIZE\n                else:\n                    # No more logs to show, exit\n                    more_logs = False\n
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.ls","title":"ls async","text":"

View recent flow runs or flow runs for specific flows.

Arguments:

flow_name: Name of the flow\n\nlimit: Maximum number of flow runs to list. Defaults to 15.\n\nstate: Name of the flow run's state. Can be provided multiple times. Options are 'SCHEDULED', 'PENDING', 'RUNNING', 'COMPLETED', 'FAILED', 'CRASHED', 'CANCELLING', 'CANCELLED', 'PAUSED', 'SUSPENDED', 'AWAITINGRETRY', 'RETRYING', and 'LATE'.\n\nstate_type: Type of the flow run's state. Can be provided multiple times. Options are 'SCHEDULED', 'PENDING', 'RUNNING', 'COMPLETED', 'FAILED', 'CRASHED', 'CANCELLING', 'CANCELLED', and 'PAUSED'.\n

Examples:

$ prefect flow-runs ls --state Running

$ prefect flow-runs ls --state Running --state late

$ prefect flow-runs ls --state-type RUNNING

$ prefect flow-runs ls --state-type RUNNING --state-type FAILED

Source code in prefect/cli/flow_run.py
@flow_run_app.command()\nasync def ls(\n    flow_name: List[str] = typer.Option(None, help=\"Name of the flow\"),\n    limit: int = typer.Option(15, help=\"Maximum number of flow runs to list\"),\n    state: List[str] = typer.Option(None, help=\"Name of the flow run's state\"),\n    state_type: List[str] = typer.Option(None, help=\"Type of the flow run's state\"),\n):\n    \"\"\"\n    View recent flow runs or flow runs for specific flows.\n\n    Arguments:\n\n        flow_name: Name of the flow\n\n        limit: Maximum number of flow runs to list. Defaults to 15.\n\n        state: Name of the flow run's state. Can be provided multiple times. Options are 'SCHEDULED', 'PENDING', 'RUNNING', 'COMPLETED', 'FAILED', 'CRASHED', 'CANCELLING', 'CANCELLED', 'PAUSED', 'SUSPENDED', 'AWAITINGRETRY', 'RETRYING', and 'LATE'.\n\n        state_type: Type of the flow run's state. Can be provided multiple times. Options are 'SCHEDULED', 'PENDING', 'RUNNING', 'COMPLETED', 'FAILED', 'CRASHED', 'CANCELLING', 'CANCELLED', 'CRASHED', and 'PAUSED'.\n\n    Examples:\n\n    $ prefect flow-runs ls --state Running\n\n    $ prefect flow-runs ls --state Running --state late\n\n    $ prefect flow-runs ls --state-type RUNNING\n\n    $ prefect flow-runs ls --state-type RUNNING --state-type FAILED\n    \"\"\"\n\n    # Handling `state` and `state_type` argument validity in the function instead of by specifying\n    # List[StateType] and List[StateName] in the type hints, allows users to provide\n    # case-insensitive arguments for `state` and `state_type`.\n\n    prefect_state_names = {\n        \"SCHEDULED\": \"Scheduled\",\n        \"PENDING\": \"Pending\",\n        \"RUNNING\": \"Running\",\n        \"COMPLETED\": \"Completed\",\n        \"FAILED\": \"Failed\",\n        \"CANCELLED\": \"Cancelled\",\n        \"CRASHED\": \"Crashed\",\n        \"PAUSED\": \"Paused\",\n        \"CANCELLING\": \"Cancelling\",\n        \"SUSPENDED\": \"Suspended\",\n        \"AWAITINGRETRY\": \"AwaitingRetry\",\n        \"RETRYING\": \"Retrying\",\n        \"LATE\": \"Late\",\n    }\n\n    state_filter = {}\n    formatted_states = []\n\n    if state:\n        for s in state:\n            uppercased_state = s.upper()\n            if uppercased_state in prefect_state_names:\n                capitalized_state = prefect_state_names[uppercased_state]\n                formatted_states.append(capitalized_state)\n            else:\n                # Do not change the case of the state name if it is not one of the official Prefect state names\n                formatted_states.append(s)\n                logger.warning(\n                    f\"State name {repr(s)} is not one of the official Prefect state names.\"\n                )\n\n        state_filter[\"name\"] = {\"any_\": formatted_states}\n\n    if state_type:\n        upper_cased_states = [s.upper() for s in state_type]\n        if not all(s in StateType.__members__ for s in upper_cased_states):\n            exit_with_error(\n                f\"Invalid state type. 
Options are {', '.join(StateType.__members__)}.\"\n            )\n\n        state_filter[\"type\"] = {\n            \"any_\": [StateType[s].value for s in upper_cased_states]\n        }\n\n    async with get_client() as client:\n        flow_runs = await client.read_flow_runs(\n            flow_filter=FlowFilter(name={\"any_\": flow_name}) if flow_name else None,\n            flow_run_filter=FlowRunFilter(state=state_filter) if state_filter else None,\n            limit=limit,\n            sort=FlowRunSort.EXPECTED_START_TIME_DESC,\n        )\n        flows_by_id = {\n            flow.id: flow\n            for flow in await client.read_flows(\n                flow_filter=FlowFilter(id={\"any_\": [run.flow_id for run in flow_runs]})\n            )\n        }\n\n        if not flow_runs:\n            exit_with_success(\"No flow runs found.\")\n\n    table = Table(title=\"Flow Runs\")\n    table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Flow\", style=\"blue\", no_wrap=True)\n    table.add_column(\"Name\", style=\"green\", no_wrap=True)\n    table.add_column(\"State\", no_wrap=True)\n    table.add_column(\"When\", style=\"bold\", no_wrap=True)\n\n    for flow_run in sorted(flow_runs, key=lambda d: d.created, reverse=True):\n        flow = flows_by_id[flow_run.flow_id]\n        timestamp = (\n            flow_run.state.state_details.scheduled_time\n            if flow_run.state.is_scheduled()\n            else flow_run.state.timestamp\n        )\n        table.add_row(\n            str(flow_run.id),\n            str(flow.name),\n            str(flow_run.name),\n            str(flow_run.state.type.value),\n            pendulum.instance(timestamp).diff_for_humans(),\n        )\n\n    app.console.print(table)\n
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/kubernetes/","title":"kubernetes","text":"","tags":["Python API","kubernetes","CLI"]},{"location":"api-ref/prefect/cli/kubernetes/#prefect.cli.kubernetes","title":"prefect.cli.kubernetes","text":"

Command line interface for working with Prefect on Kubernetes

","tags":["Python API","kubernetes","CLI"]},{"location":"api-ref/prefect/cli/kubernetes/#prefect.cli.kubernetes.manifest_agent","title":"manifest_agent","text":"

Generates a manifest for deploying an agent on Kubernetes.

Example

$ prefect kubernetes manifest agent | kubectl apply -f -

Source code in prefect/cli/kubernetes.py
@manifest_app.command(\"agent\")\ndef manifest_agent(\n    api_url: str = SettingsOption(PREFECT_API_URL),\n    api_key: str = SettingsOption(PREFECT_API_KEY),\n    image_tag: str = typer.Option(\n        get_prefect_image_name(),\n        \"-i\",\n        \"--image-tag\",\n        help=\"The tag of a Docker image to use for the Agent.\",\n    ),\n    namespace: str = typer.Option(\n        \"default\",\n        \"-n\",\n        \"--namespace\",\n        help=\"A Kubernetes namespace to create agent in.\",\n    ),\n    work_queue: str = typer.Option(\n        \"kubernetes\",\n        \"-q\",\n        \"--work-queue\",\n        help=\"A work queue name for the agent to pull from.\",\n    ),\n):\n    \"\"\"\n    Generates a manifest for deploying Agent on Kubernetes.\n\n    Example:\n        $ prefect kubernetes manifest agent | kubectl apply -f -\n    \"\"\"\n\n    template = Template(\n        (\n            prefect.__module_path__ / \"cli\" / \"templates\" / \"kubernetes-agent.yaml\"\n        ).read_text()\n    )\n    manifest = template.substitute(\n        {\n            \"api_url\": api_url,\n            \"api_key\": api_key,\n            \"image_name\": image_tag,\n            \"namespace\": namespace,\n            \"work_queue\": work_queue,\n        }\n    )\n    print(manifest)\n
","tags":["Python API","kubernetes","CLI"]},{"location":"api-ref/prefect/cli/kubernetes/#prefect.cli.kubernetes.manifest_flow_run_job","title":"manifest_flow_run_job async","text":"

Prints the default KubernetesJob Job manifest.

Use this file to fully customize your KubernetesJob deployments.

Example:

$ prefect kubernetes manifest flow-run-job

Output, a YAML file:

apiVersion: batch/v1
kind: Job
...

Source code in prefect/cli/kubernetes.py
@manifest_app.command(\"flow-run-job\")\nasync def manifest_flow_run_job():\n    \"\"\"\n    Prints the default KubernetesJob Job manifest.\n\n    Use this file to fully customize your `KubernetesJob` deployments.\n\n    \\b\n    Example:\n        \\b\n        $ prefect kubernetes manifest flow-run-job\n\n    \\b\n    Output, a YAML file:\n        \\b\n        apiVersion: batch/v1\n        kind: Job\n        ...\n    \"\"\"\n\n    KubernetesJob.base_job_manifest()\n\n    output = yaml.dump(KubernetesJob.base_job_manifest())\n\n    # add some commentary where appropriate\n    output = output.replace(\n        \"metadata:\\n  labels:\",\n        \"metadata:\\n  # labels are required, even if empty\\n  labels:\",\n    )\n    output = output.replace(\n        \"containers:\\n\",\n        \"containers:  # the first container is required\\n\",\n    )\n    output = output.replace(\n        \"env: []\\n\",\n        \"env: []  # env is required, even if empty\\n\",\n    )\n\n    print(output)\n
","tags":["Python API","kubernetes","CLI"]},{"location":"api-ref/prefect/cli/kubernetes/#prefect.cli.kubernetes.manifest_server","title":"manifest_server","text":"

Generates a manifest for deploying Prefect on Kubernetes.

Example

$ prefect kubernetes manifest server | kubectl apply -f -

Source code in prefect/cli/kubernetes.py
@manifest_app.command(\"server\")\ndef manifest_server(\n    image_tag: str = typer.Option(\n        get_prefect_image_name(),\n        \"-i\",\n        \"--image-tag\",\n        help=\"The tag of a Docker image to use for the server.\",\n    ),\n    namespace: str = typer.Option(\n        \"default\",\n        \"-n\",\n        \"--namespace\",\n        help=\"A Kubernetes namespace to create the server in.\",\n    ),\n    log_level: str = SettingsOption(PREFECT_LOGGING_SERVER_LEVEL),\n):\n    \"\"\"\n    Generates a manifest for deploying Prefect on Kubernetes.\n\n    Example:\n        $ prefect kubernetes manifest server | kubectl apply -f -\n    \"\"\"\n\n    template = Template(\n        (\n            prefect.__module_path__ / \"cli\" / \"templates\" / \"kubernetes-server.yaml\"\n        ).read_text()\n    )\n    manifest = template.substitute(\n        {\n            \"image_name\": image_tag,\n            \"namespace\": namespace,\n            \"log_level\": log_level,\n        }\n    )\n    print(manifest)\n
","tags":["Python API","kubernetes","CLI"]},{"location":"api-ref/prefect/cli/profile/","title":"profile","text":"","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile","title":"prefect.cli.profile","text":"

Command line interface for working with profiles.

","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.create","title":"create","text":"

Create a new profile.
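
Example (illustrative; profile names are placeholders):

$ prefect profile create my-profile

$ prefect profile create staging --from default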

Source code in prefect/cli/profile.py
@profile_app.command()\ndef create(\n    name: str,\n    from_name: str = typer.Option(None, \"--from\", help=\"Copy an existing profile.\"),\n):\n    \"\"\"\n    Create a new profile.\n    \"\"\"\n\n    profiles = prefect.settings.load_profiles()\n    if name in profiles:\n        app.console.print(\n            textwrap.dedent(\n                f\"\"\"\n                [red]Profile {name!r} already exists.[/red]\n                To create a new profile, remove the existing profile first:\n\n                    prefect profile delete {name!r}\n                \"\"\"\n            ).strip()\n        )\n        raise typer.Exit(1)\n\n    if from_name:\n        if from_name not in profiles:\n            exit_with_error(f\"Profile {from_name!r} not found.\")\n\n        # Create a copy of the profile with a new name and add to the collection\n        profiles.add_profile(profiles[from_name].copy(update={\"name\": name}))\n    else:\n        profiles.add_profile(prefect.settings.Profile(name=name, settings={}))\n\n    prefect.settings.save_profiles(profiles)\n\n    app.console.print(\n        textwrap.dedent(\n            f\"\"\"\n            Created profile with properties:\n                name - {name!r}\n                from name - {from_name or None}\n\n            Use created profile for future, subsequent commands:\n                prefect profile use {name!r}\n\n            Use created profile temporarily for a single command:\n                prefect -p {name!r} config view\n            \"\"\"\n        )\n    )\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.delete","title":"delete","text":"

Delete the given profile.
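
Example (illustrative; the profile name is a placeholder):

$ prefect profile delete my-profile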

Source code in prefect/cli/profile.py
@profile_app.command()\ndef delete(name: str):\n    \"\"\"\n    Delete the given profile.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    if name not in profiles:\n        exit_with_error(f\"Profile {name!r} not found.\")\n\n    current_profile = prefect.context.get_settings_context().profile\n    if current_profile.name == name:\n        exit_with_error(\n            f\"Profile {name!r} is the active profile. You must switch profiles before\"\n            \" it can be deleted.\"\n        )\n\n    profiles.remove_profile(name)\n\n    verb = \"Removed\"\n    if name == \"default\":\n        verb = \"Reset\"\n\n    prefect.settings.save_profiles(profiles)\n    exit_with_success(f\"{verb} profile {name!r}.\")\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.inspect","title":"inspect","text":"

Display settings from a given profile; defaults to active.
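
Example (illustrative; omit the name to inspect the active profile):

$ prefect profile inspect

$ prefect profile inspect my-profile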

Source code in prefect/cli/profile.py
@profile_app.command()\ndef inspect(\n    name: Optional[str] = typer.Argument(\n        None, help=\"Name of profile to inspect; defaults to active profile.\"\n    ),\n):\n    \"\"\"\n    Display settings from a given profile; defaults to active.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    if name is None:\n        current_profile = prefect.context.get_settings_context().profile\n        if not current_profile:\n            exit_with_error(\"No active profile set - please provide a name to inspect.\")\n        name = current_profile.name\n        print(f\"No name provided, defaulting to {name!r}\")\n    if name not in profiles:\n        exit_with_error(f\"Profile {name!r} not found.\")\n\n    if not profiles[name].settings:\n        # TODO: Consider instructing on how to add settings.\n        print(f\"Profile {name!r} is empty.\")\n\n    for setting, value in profiles[name].settings.items():\n        app.console.print(f\"{setting.name}='{value}'\")\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.ls","title":"ls","text":"

List profile names.
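
Example:

$ prefect profile ls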

Source code in prefect/cli/profile.py
@profile_app.command()\ndef ls():\n    \"\"\"\n    List profile names.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    current_profile = prefect.context.get_settings_context().profile\n    current_name = current_profile.name if current_profile is not None else None\n\n    table = Table(caption=\"* active profile\")\n    table.add_column(\n        \"[#024dfd]Available Profiles:\", justify=\"right\", style=\"#8ea0ae\", no_wrap=True\n    )\n\n    for name in profiles:\n        if name == current_name:\n            table.add_row(f\"[green]  * {name}[/green]\")\n        else:\n            table.add_row(f\"  {name}\")\n    app.console.print(table)\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.rename","title":"rename","text":"

Change the name of a profile.
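
Example (illustrative; both names are placeholders):

$ prefect profile rename my-profile my-renamed-profile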

Source code in prefect/cli/profile.py
@profile_app.command()\ndef rename(name: str, new_name: str):\n    \"\"\"\n    Change the name of a profile.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    if name not in profiles:\n        exit_with_error(f\"Profile {name!r} not found.\")\n\n    if new_name in profiles:\n        exit_with_error(f\"Profile {new_name!r} already exists.\")\n\n    profiles.add_profile(profiles[name].copy(update={\"name\": new_name}))\n    profiles.remove_profile(name)\n\n    # If the active profile was renamed switch the active profile to the new name.\n    prefect.context.get_settings_context().profile\n    if profiles.active_name == name:\n        profiles.set_active(new_name)\n    if os.environ.get(\"PREFECT_PROFILE\") == name:\n        app.console.print(\n            f\"You have set your current profile to {name!r} with the \"\n            \"PREFECT_PROFILE environment variable. You must update this variable to \"\n            f\"{new_name!r} to continue using the profile.\"\n        )\n\n    prefect.settings.save_profiles(profiles)\n    exit_with_success(f\"Renamed profile {name!r} to {new_name!r}.\")\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.use","title":"use async","text":"

Set the given profile to active.
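
Example (illustrative; the profile name is a placeholder):

$ prefect profile use my-profile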

Source code in prefect/cli/profile.py
@profile_app.command()\nasync def use(name: str):\n    \"\"\"\n    Set the given profile to active.\n    \"\"\"\n    status_messages = {\n        ConnectionStatus.CLOUD_CONNECTED: (\n            exit_with_success,\n            f\"Connected to Prefect Cloud using profile {name!r}\",\n        ),\n        ConnectionStatus.CLOUD_ERROR: (\n            exit_with_error,\n            f\"Error connecting to Prefect Cloud using profile {name!r}\",\n        ),\n        ConnectionStatus.CLOUD_UNAUTHORIZED: (\n            exit_with_error,\n            f\"Error authenticating with Prefect Cloud using profile {name!r}\",\n        ),\n        ConnectionStatus.ORION_CONNECTED: (\n            exit_with_success,\n            f\"Connected to Prefect server using profile {name!r}\",\n        ),\n        ConnectionStatus.ORION_ERROR: (\n            exit_with_error,\n            f\"Error connecting to Prefect server using profile {name!r}\",\n        ),\n        ConnectionStatus.EPHEMERAL: (\n            exit_with_success,\n            (\n                f\"No Prefect server specified using profile {name!r} - the API will run\"\n                \" in ephemeral mode.\"\n            ),\n        ),\n        ConnectionStatus.INVALID_API: (\n            exit_with_error,\n            \"Error connecting to Prefect API URL\",\n        ),\n    }\n\n    profiles = prefect.settings.load_profiles()\n    if name not in profiles.names:\n        exit_with_error(f\"Profile {name!r} not found.\")\n\n    profiles.set_active(name)\n    prefect.settings.save_profiles(profiles)\n\n    with Progress(\n        SpinnerColumn(),\n        TextColumn(\"[progress.description]{task.description}\"),\n        transient=False,\n    ) as progress:\n        progress.add_task(\n            description=\"Checking API connectivity...\",\n            total=None,\n        )\n\n        with use_profile(name, include_current_context=False):\n            connection_status = await check_orion_connection()\n\n        exit_method, msg = status_messages[connection_status]\n\n    exit_method(msg)\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/root/","title":"root","text":"","tags":["Python API","CLI"]},{"location":"api-ref/prefect/cli/root/#prefect.cli.root","title":"prefect.cli.root","text":"

Base prefect command-line application

","tags":["Python API","CLI"]},{"location":"api-ref/prefect/cli/root/#prefect.cli.root.version","title":"version async","text":"

Get the current Prefect version.
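
Example:

$ prefect version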

Source code in prefect/cli/root.py
@app.command()\nasync def version():\n    \"\"\"Get the current Prefect version.\"\"\"\n    import sqlite3\n\n    from prefect.server.utilities.database import get_dialect\n    from prefect.settings import PREFECT_API_DATABASE_CONNECTION_URL\n\n    version_info = {\n        \"Version\": prefect.__version__,\n        \"API version\": SERVER_API_VERSION,\n        \"Python version\": platform.python_version(),\n        \"Git commit\": prefect.__version_info__[\"full-revisionid\"][:8],\n        \"Built\": pendulum.parse(\n            prefect.__version_info__[\"date\"]\n        ).to_day_datetime_string(),\n        \"OS/Arch\": f\"{sys.platform}/{platform.machine()}\",\n        \"Profile\": prefect.context.get_settings_context().profile.name,\n    }\n\n    server_type: str\n\n    try:\n        # We do not context manage the client because when using an ephemeral app we do not\n        # want to create the database or run migrations\n        client = prefect.get_client()\n        server_type = client.server_type.value\n    except Exception:\n        server_type = \"<client error>\"\n\n    version_info[\"Server type\"] = server_type.lower()\n\n    # TODO: Consider adding an API route to retrieve this information?\n    if server_type == ServerType.EPHEMERAL.value:\n        database = get_dialect(PREFECT_API_DATABASE_CONNECTION_URL.value()).name\n        version_info[\"Server\"] = {\"Database\": database}\n        if database == \"sqlite\":\n            version_info[\"Server\"][\"SQLite version\"] = sqlite3.sqlite_version\n\n    def display(object: dict, nesting: int = 0):\n        # Recursive display of a dictionary with nesting\n        for key, value in object.items():\n            key += \":\"\n            if isinstance(value, dict):\n                app.console.print(key)\n                return display(value, nesting + 2)\n            prefix = \" \" * nesting\n            app.console.print(f\"{prefix}{key.ljust(20 - len(prefix))} {value}\")\n\n    display(version_info)\n
","tags":["Python API","CLI"]},{"location":"api-ref/prefect/cli/server/","title":"server","text":"","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server","title":"prefect.cli.server","text":"

Command line interface for working with Prefect

","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.downgrade","title":"downgrade async","text":"

Downgrade the Prefect database
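
Examples (illustrative; see the options in the source below):

$ prefect server database downgrade -y

$ prefect server database downgrade -y -r base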

Source code in prefect/cli/server.py
@database_app.command()\nasync def downgrade(\n    yes: bool = typer.Option(False, \"--yes\", \"-y\"),\n    revision: str = typer.Option(\n        \"-1\",\n        \"-r\",\n        help=(\n            \"The revision to pass to `alembic downgrade`. If not provided, \"\n            \"downgrades to the most recent revision. Use 'base' to run all \"\n            \"migrations.\"\n        ),\n    ),\n    dry_run: bool = typer.Option(\n        False,\n        help=(\n            \"Flag to show what migrations would be made without applying them. Will\"\n            \" emit sql statements to stdout.\"\n        ),\n    ),\n):\n    \"\"\"Downgrade the Prefect database\"\"\"\n    from prefect.server.database.alembic_commands import alembic_downgrade\n    from prefect.server.database.dependencies import provide_database_interface\n\n    db = provide_database_interface()\n\n    engine = await db.engine()\n\n    if not yes:\n        confirm = typer.confirm(\n            \"Are you sure you want to downgrade the Prefect \"\n            f\"database at {engine.url!r}?\"\n        )\n        if not confirm:\n            exit_with_error(\"Database downgrade aborted!\")\n\n    app.console.print(\"Running downgrade migrations ...\")\n    await run_sync_in_worker_thread(\n        alembic_downgrade, revision=revision, dry_run=dry_run\n    )\n    app.console.print(\"Migrations succeeded!\")\n    exit_with_success(f\"Prefect database at {engine.url!r} downgraded!\")\n
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.reset","title":"reset async","text":"

Drop and recreate all Prefect database tables
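
Example:

$ prefect server database reset -y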

Source code in prefect/cli/server.py
@database_app.command()\nasync def reset(yes: bool = typer.Option(False, \"--yes\", \"-y\")):\n    \"\"\"Drop and recreate all Prefect database tables\"\"\"\n    from prefect.server.database.dependencies import provide_database_interface\n\n    db = provide_database_interface()\n    engine = await db.engine()\n    if not yes:\n        confirm = typer.confirm(\n            \"Are you sure you want to reset the Prefect database located \"\n            f'at \"{engine.url!r}\"? This will drop and recreate all tables.'\n        )\n        if not confirm:\n            exit_with_error(\"Database reset aborted\")\n    app.console.print(\"Downgrading database...\")\n    await db.drop_db()\n    app.console.print(\"Upgrading database...\")\n    await db.create_db()\n    exit_with_success(f'Prefect database \"{engine.url!r}\" reset!')\n
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.revision","title":"revision async","text":"

Create a new migration for the Prefect database
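
Example (illustrative; the migration message is a placeholder):

$ prefect server database revision -m "my migration"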

Source code in prefect/cli/server.py
@database_app.command()\nasync def revision(\n    message: str = typer.Option(\n        None,\n        \"--message\",\n        \"-m\",\n        help=\"A message to describe the migration.\",\n    ),\n    autogenerate: bool = False,\n):\n    \"\"\"Create a new migration for the Prefect database\"\"\"\n    from prefect.server.database.alembic_commands import alembic_revision\n\n    app.console.print(\"Running migration file creation ...\")\n    await run_sync_in_worker_thread(\n        alembic_revision,\n        message=message,\n        autogenerate=autogenerate,\n    )\n    exit_with_success(\"Creating new migration file succeeded!\")\n
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.stamp","title":"stamp async","text":"

Stamp the revision table with the given revision; don't run any migrations
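
Example (illustrative; the revision id is a placeholder for an existing Alembic revision):

$ prefect server database stamp abcdef123456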

Source code in prefect/cli/server.py
@database_app.command()\nasync def stamp(revision: str):\n    \"\"\"Stamp the revision table with the given revision; don't run any migrations\"\"\"\n    from prefect.server.database.alembic_commands import alembic_stamp\n\n    app.console.print(\"Stamping database with revision ...\")\n    await run_sync_in_worker_thread(alembic_stamp, revision=revision)\n    exit_with_success(\"Stamping database with revision succeeded!\")\n
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.start","title":"start async","text":"

Start a Prefect server instance
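
Example (illustrative; host and port values are placeholders):

$ prefect server start --host 127.0.0.1 --port 4200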

Source code in prefect/cli/server.py
@server_app.command()\nasync def start(\n    host: str = SettingsOption(PREFECT_SERVER_API_HOST),\n    port: int = SettingsOption(PREFECT_SERVER_API_PORT),\n    keep_alive_timeout: int = SettingsOption(PREFECT_SERVER_API_KEEPALIVE_TIMEOUT),\n    log_level: str = SettingsOption(PREFECT_LOGGING_SERVER_LEVEL),\n    scheduler: bool = SettingsOption(PREFECT_API_SERVICES_SCHEDULER_ENABLED),\n    analytics: bool = SettingsOption(\n        PREFECT_SERVER_ANALYTICS_ENABLED, \"--analytics-on/--analytics-off\"\n    ),\n    late_runs: bool = SettingsOption(PREFECT_API_SERVICES_LATE_RUNS_ENABLED),\n    ui: bool = SettingsOption(PREFECT_UI_ENABLED),\n):\n    \"\"\"Start a Prefect server instance\"\"\"\n\n    server_env = os.environ.copy()\n    server_env[\"PREFECT_API_SERVICES_SCHEDULER_ENABLED\"] = str(scheduler)\n    server_env[\"PREFECT_SERVER_ANALYTICS_ENABLED\"] = str(analytics)\n    server_env[\"PREFECT_API_SERVICES_LATE_RUNS_ENABLED\"] = str(late_runs)\n    server_env[\"PREFECT_API_SERVICES_UI\"] = str(ui)\n    server_env[\"PREFECT_LOGGING_SERVER_LEVEL\"] = log_level\n\n    base_url = f\"http://{host}:{port}\"\n\n    async with anyio.create_task_group() as tg:\n        app.console.print(generate_welcome_blurb(base_url, ui_enabled=ui))\n        app.console.print(\"\\n\")\n\n        server_process_id = await tg.start(\n            partial(\n                run_process,\n                command=[\n                    get_sys_executable(),\n                    \"-m\",\n                    \"uvicorn\",\n                    \"--app-dir\",\n                    # quote wrapping needed for windows paths with spaces\n                    f'\"{prefect.__module_path__.parent}\"',\n                    \"--factory\",\n                    \"prefect.server.api.server:create_app\",\n                    \"--host\",\n                    str(host),\n                    \"--port\",\n                    str(port),\n                    \"--timeout-keep-alive\",\n                    str(keep_alive_timeout),\n                ],\n                env=server_env,\n                stream_output=True,\n            )\n        )\n\n        # Explicitly handle the interrupt signal here, as it will allow us to\n        # cleanly stop the uvicorn server. Failing to do that may cause a\n        # large amount of anyio error traces on the terminal, because the\n        # SIGINT is handled by Typer/Click in this process (the parent process)\n        # and will start shutting down subprocesses:\n        # https://github.com/PrefectHQ/server/issues/2475\n\n        setup_signal_handlers_server(\n            server_process_id, \"the Prefect server\", app.console.print\n        )\n\n    app.console.print(\"Server stopped!\")\n
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.upgrade","title":"upgrade async","text":"

Upgrade the Prefect database
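
Examples (illustrative):

$ prefect server database upgrade -y

$ prefect server database upgrade -y --dry-run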

Source code in prefect/cli/server.py
@database_app.command()\nasync def upgrade(\n    yes: bool = typer.Option(False, \"--yes\", \"-y\"),\n    revision: str = typer.Option(\n        \"head\",\n        \"-r\",\n        help=(\n            \"The revision to pass to `alembic upgrade`. If not provided, runs all\"\n            \" migrations.\"\n        ),\n    ),\n    dry_run: bool = typer.Option(\n        False,\n        help=(\n            \"Flag to show what migrations would be made without applying them. Will\"\n            \" emit sql statements to stdout.\"\n        ),\n    ),\n):\n    \"\"\"Upgrade the Prefect database\"\"\"\n    from prefect.server.database.alembic_commands import alembic_upgrade\n    from prefect.server.database.dependencies import provide_database_interface\n\n    db = provide_database_interface()\n    engine = await db.engine()\n\n    if not yes:\n        confirm = typer.confirm(\n            f\"Are you sure you want to upgrade the Prefect database at {engine.url!r}?\"\n        )\n        if not confirm:\n            exit_with_error(\"Database upgrade aborted!\")\n\n    app.console.print(\"Running upgrade migrations ...\")\n    await run_sync_in_worker_thread(alembic_upgrade, revision=revision, dry_run=dry_run)\n    app.console.print(\"Migrations succeeded!\")\n    exit_with_success(f\"Prefect database at {engine.url!r} upgraded!\")\n
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/shell/","title":"shell","text":"","tags":["Python API","CLI","shell","command line","terminal"]},{"location":"api-ref/prefect/cli/shell/#prefect.cli.shell","title":"prefect.cli.shell","text":"

Provides a set of tools for executing shell commands as Prefect flows. Includes functionality for running shell commands ad hoc or serving them as Prefect flows, with options for logging output, scheduling, and deployment customization.

","tags":["Python API","CLI","shell","command line","terminal"]},{"location":"api-ref/prefect/cli/shell/#prefect.cli.shell.output_collect","title":"output_collect","text":"

Collects output from a subprocess pipe and stores it in a container list.

Parameters:

pipe: The output pipe of the subprocess, either stdout or stderr. (required)

container: A list to store the collected output lines. (required)

Source code in prefect/cli/shell.py
def output_collect(pipe, container):\n    \"\"\"\n    Collects output from a subprocess pipe and stores it in a container list.\n\n    Args:\n        pipe: The output pipe of the subprocess, either stdout or stderr.\n        container: A list to store the collected output lines.\n    \"\"\"\n    for line in iter(pipe.readline, \"\"):\n        container.append(line)\n
","tags":["Python API","CLI","shell","command line","terminal"]},{"location":"api-ref/prefect/cli/shell/#prefect.cli.shell.output_stream","title":"output_stream","text":"

Read from a pipe line by line and log using the provided logging function.

Parameters:

pipe (IO): A file-like object for reading process output. (required)

logger_function (function): A logging function from the logger. (required)

Source code in prefect/cli/shell.py
def output_stream(pipe, logger_function):\n    \"\"\"\n    Read from a pipe line by line and log using the provided logging function.\n\n    Args:\n        pipe (IO): A file-like object for reading process output.\n        logger_function (function): A logging function from the logger.\n    \"\"\"\n    with pipe:\n        for line in iter(pipe.readline, \"\"):\n            logger_function(line.strip())\n
","tags":["Python API","CLI","shell","command line","terminal"]},{"location":"api-ref/prefect/cli/shell/#prefect.cli.shell.run_shell_process","title":"run_shell_process","text":"

Executes the specified shell command and logs its output.

This function is designed to be used within Prefect flows to run shell commands as part of task execution. It handles both the execution of the command and the collection of its output for logging purposes.

Parameters:

command (str): The shell command to execute. (required)

log_output (bool): If True, the output of the command (both stdout and stderr) is logged to Prefect. Defaults to True.

stream_stdout (bool): If True, the stdout of the command is streamed to Prefect logs. Defaults to False.

log_stderr (bool): If True, the stderr of the command is logged to Prefect logs. Defaults to False.

Source code in prefect/cli/shell.py
@flow\ndef run_shell_process(\n    command: str,\n    log_output: bool = True,\n    stream_stdout: bool = False,\n    log_stderr: bool = False,\n):\n    \"\"\"\n    Asynchronously executes the specified shell command and logs its output.\n\n    This function is designed to be used within Prefect flows to run shell commands as part of task execution.\n    It handles both the execution of the command and the collection of its output for logging purposes.\n\n    Args:\n        command (str): The shell command to execute.\n        log_output (bool, optional): If True, the output of the command (both stdout and stderr) is logged to Prefect.\n                                     Defaults to True\n        stream_stdout (bool, optional): If True, the stdout of the command is streamed to Prefect logs. Defaults to False.\n        log_stderr (bool, optional): If True, the stderr of the command is logged to Prefect logs. Defaults to False.\n\n\n    \"\"\"\n\n    logger = get_run_logger() if log_output else logging.getLogger(\"prefect\")\n\n    # Containers for log batching\n    stdout_container, stderr_container = [], []\n    with subprocess.Popen(\n        command,\n        stdout=subprocess.PIPE,\n        stderr=subprocess.PIPE,\n        shell=True,\n        text=True,\n        bufsize=1,\n        universal_newlines=True,\n    ) as proc:\n        # Create threads for collecting stdout and stderr\n        if stream_stdout:\n            stdout_logger = logger.info\n            output = output_stream\n        else:\n            stdout_logger = stdout_container\n            output = output_collect\n\n        stdout_thread = threading.Thread(\n            target=output, args=(proc.stdout, stdout_logger)\n        )\n\n        stderr_thread = threading.Thread(\n            target=output_collect, args=(proc.stderr, stderr_container)\n        )\n\n        stdout_thread.start()\n        stderr_thread.start()\n\n        stdout_thread.join()\n        stderr_thread.join()\n\n        proc.wait()\n        if stdout_container:\n            logger.info(\"\".join(stdout_container).strip())\n\n        if stderr_container and log_stderr:\n            logger.error(\"\".join(stderr_container).strip())\n            # Suppress traceback\n        if proc.returncode != 0:\n            logger.error(\"\".join(stderr_container).strip())\n            sys.tracebacklimit = 0\n            raise FailedRun(f\"Command failed with exit code {proc.returncode}\")\n
","tags":["Python API","CLI","shell","command line","terminal"]},{"location":"api-ref/prefect/cli/shell/#prefect.cli.shell.serve","title":"serve async","text":"

Creates and serves a Prefect deployment that runs a specified shell command according to a cron schedule or ad hoc.

This function allows users to integrate shell command execution into Prefect workflows seamlessly. It provides options for scheduled execution via cron expressions, flow and deployment naming for better management, and the application of tags for easier categorization and filtering within the Prefect UI. Additionally, it supports streaming command output to Prefect logs, setting concurrency limits to control flow execution, and optionally running the deployment once for ad-hoc tasks.
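
Example (illustrative; the command, flow name, tag, and schedule are placeholders):

$ prefect shell serve "echo 'hello world'" --flow-name echo-flow --cron-schedule "0 9 * * *" --tag demo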

Parameters:

command (str): The shell command the flow will execute. (required)

name (str): The name assigned to the flow. This is required. (required)

deployment_tags (List[str]): Optional tags for the deployment to facilitate filtering and organization. Default: Option(None, '--tag', help='Tag for the deployment (can be provided multiple times)')

log_output (bool): If True, streams the output of the shell command to the Prefect logs. Defaults to True. Default: Option(True, help='Stream the output of the command', hidden=True)

cron_schedule (str): A cron expression that defines when the flow will run. If not provided, the flow can be triggered manually. Default: Option(None, help='Cron schedule for the flow')

timezone (str): The timezone for the cron schedule. This is important if the schedule should align with local time. Default: Option(None, help='Timezone for the schedule')

concurrency_limit (int): The maximum number of instances of the flow that can run simultaneously. Default: Option(None, min=1, help='The maximum number of flow runs that can execute at the same time')

deployment_name (str): The name of the deployment. This helps distinguish deployments within the Prefect platform. Default: Option('CLI Runner Deployment', help='Name of the deployment')

run_once (bool): When True, the flow will only run once upon deployment initiation, rather than continuously. Default: Option(False, help='Run the agent loop once, instead of forever.')

Source code in prefect/cli/shell.py
@shell_app.command(\"serve\")\nasync def serve(\n    command: str,\n    flow_name: str = typer.Option(..., help=\"Name of the flow\"),\n    deployment_name: str = typer.Option(\n        \"CLI Runner Deployment\", help=\"Name of the deployment\"\n    ),\n    deployment_tags: List[str] = typer.Option(\n        None, \"--tag\", help=\"Tag for the deployment (can be provided multiple times)\"\n    ),\n    log_output: bool = typer.Option(\n        True, help=\"Stream the output of the command\", hidden=True\n    ),\n    stream_stdout: bool = typer.Option(True, help=\"Stream the output of the command\"),\n    cron_schedule: str = typer.Option(None, help=\"Cron schedule for the flow\"),\n    timezone: str = typer.Option(None, help=\"Timezone for the schedule\"),\n    concurrency_limit: int = typer.Option(\n        None,\n        min=1,\n        help=\"The maximum number of flow runs that can execute at the same time\",\n    ),\n    run_once: bool = typer.Option(\n        False, help=\"Run the agent loop once, instead of forever.\"\n    ),\n):\n    \"\"\"\n    Creates and serves a Prefect deployment that runs a specified shell command according to a cron schedule or ad hoc.\n\n    This function allows users to integrate shell command execution into Prefect workflows seamlessly. It provides options for\n    scheduled execution via cron expressions, flow and deployment naming for better management, and the application of tags for\n    easier categorization and filtering within the Prefect UI. Additionally, it supports streaming command output to Prefect logs,\n    setting concurrency limits to control flow execution, and optionally running the deployment once for ad-hoc tasks.\n\n    Args:\n        command (str): The shell command the flow will execute.\n        name (str): The name assigned to the flow. This is required..\n        deployment_tags (List[str], optional): Optional tags for the deployment to facilitate filtering and organization.\n        log_output (bool, optional): If True, streams the output of the shell command to the Prefect logs. Defaults to True.\n        cron_schedule (str, optional): A cron expression that defines when the flow will run. If not provided, the flow can be triggered manually.\n        timezone (str, optional): The timezone for the cron schedule. This is important if the schedule should align with local time.\n        concurrency_limit (int, optional): The maximum number of instances of the flow that can run simultaneously.\n        deployment_name (str, optional): The name of the deployment. 
This helps distinguish deployments within the Prefect platform.\n        run_once (bool, optional): When True, the flow will only run once upon deployment initiation, rather than continuously.\n    \"\"\"\n    schedule = (\n        CronSchedule(cron=cron_schedule, timezone=timezone) if cron_schedule else None\n    )\n    defined_flow = run_shell_process.with_options(name=flow_name)\n\n    runner_deployment = await defined_flow.to_deployment(\n        name=deployment_name,\n        parameters={\n            \"command\": command,\n            \"log_output\": log_output,\n            \"stream_stdout\": stream_stdout,\n        },\n        entrypoint_type=EntrypointType.MODULE_PATH,\n        schedule=schedule,\n        tags=(deployment_tags or []) + [\"shell\"],\n    )\n\n    runner = Runner(name=flow_name)\n    deployment_id = await runner.add_deployment(runner_deployment)\n    help_message = (\n        f\"[green]Your flow {runner_deployment.flow_name!r} is being served and polling\"\n        \" for scheduled runs!\\n[/]\\nTo trigger a run for this flow, use the following\"\n        \" command:\\n[blue]\\n\\t$ prefect deployment run\"\n        f\" '{runner_deployment.flow_name}/{deployment_name}'\\n[/]\"\n    )\n    if PREFECT_UI_URL:\n        help_message += (\n            \"\\nYou can also run your flow via the Prefect UI:\"\n            f\" [blue]{PREFECT_UI_URL.value()}/deployments/deployment/{deployment_id}[/]\\n\"\n        )\n\n    app.console.print(help_message, soft_wrap=True)\n    await runner.start(run_once=run_once)\n
","tags":["Python API","CLI","shell","command line","terminal"]},{"location":"api-ref/prefect/cli/shell/#prefect.cli.shell.watch","title":"watch async","text":"

Executes a shell command and observes it as a Prefect flow.
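
Example (illustrative; the command and flow name are placeholders):

$ prefect shell watch "ls -la" --flow-name list-directory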

Parameters:

command (str): The shell command to be executed. (required)

log_output (bool): If True, logs the command's output. Defaults to True. Default: Option(True, help='Log the output of the command to Prefect logs.')

flow_run_name (str): An optional name for the flow run. Default: Option(None, help='Name of the flow run.')

flow_name (str): An optional name for the flow. Useful for identification in the Prefect UI. Default: Option('Shell Command', help='Name of the flow.')

tag (List[str]): An optional list of tags for categorizing and filtering flows in the Prefect UI. Default: None

Source code in prefect/cli/shell.py
@shell_app.command(\"watch\")\nasync def watch(\n    command: str,\n    log_output: bool = typer.Option(\n        True, help=\"Log the output of the command to Prefect logs.\"\n    ),\n    flow_run_name: str = typer.Option(None, help=\"Name of the flow run.\"),\n    flow_name: str = typer.Option(\"Shell Command\", help=\"Name of the flow.\"),\n    stream_stdout: bool = typer.Option(True, help=\"Stream the output of the command.\"),\n    tag: Annotated[\n        Optional[List[str]], typer.Option(help=\"Optional tags for the flow run.\")\n    ] = None,\n):\n    \"\"\"\n    Executes a shell command and observes it as Prefect flow.\n\n    Args:\n        command (str): The shell command to be executed.\n        log_output (bool, optional): If True, logs the command's output. Defaults to True.\n        flow_run_name (str, optional): An optional name for the flow run.\n        flow_name (str, optional): An optional name for the flow. Useful for identification in the Prefect UI.\n        tag (List[str], optional): An optional list of tags for categorizing and filtering flows in the Prefect UI.\n    \"\"\"\n    tag = (tag or []) + [\"shell\"]\n\n    # Call the shell_run_command flow with provided arguments\n    defined_flow = run_shell_process.with_options(\n        name=flow_name, flow_run_name=flow_run_name\n    )\n    with tags(*tag):\n        defined_flow(\n            command=command, log_output=log_output, stream_stdout=stream_stdout\n        )\n
","tags":["Python API","CLI","shell","command line","terminal"]},{"location":"api-ref/prefect/cli/variable/","title":"variable","text":"","tags":["Python API","variables","CLI"]},{"location":"api-ref/prefect/cli/variable/#prefect.cli.variable","title":"prefect.cli.variable","text":"","tags":["Python API","variables","CLI"]},{"location":"api-ref/prefect/cli/variable/#prefect.cli.variable.delete","title":"delete async","text":"

Delete a variable.
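
Example (illustrative; the variable name is a placeholder):

$ prefect variable delete my_variable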

Parameters:

name (str): The name of the variable to delete. (required)

Source code in prefect/cli/variable.py
@variable_app.command(\"delete\")\nasync def delete(\n    name: str,\n):\n    \"\"\"\n    Delete a variable.\n\n    Arguments:\n        name: the name of the variable to delete\n    \"\"\"\n\n    async with get_client() as client:\n        try:\n            await client.delete_variable_by_name(\n                name=name,\n            )\n        except ObjectNotFound:\n            exit_with_error(f\"Variable {name!r} not found.\")\n\n        exit_with_success(f\"Deleted variable {name!r}.\")\n
","tags":["Python API","variables","CLI"]},{"location":"api-ref/prefect/cli/variable/#prefect.cli.variable.inspect","title":"inspect async","text":"

View details about a variable.
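
Example (illustrative; the variable name is a placeholder):

$ prefect variable inspect my_variable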

Parameters:

name (str): The name of the variable to inspect. (required)

Source code in prefect/cli/variable.py
@variable_app.command(\"inspect\")\nasync def inspect(\n    name: str,\n):\n    \"\"\"\n    View details about a variable.\n\n    Arguments:\n        name: the name of the variable to inspect\n    \"\"\"\n\n    async with get_client() as client:\n        variable = await client.read_variable_by_name(\n            name=name,\n        )\n        if not variable:\n            exit_with_error(f\"Variable {name!r} not found.\")\n\n        app.console.print(Pretty(variable))\n
","tags":["Python API","variables","CLI"]},{"location":"api-ref/prefect/cli/variable/#prefect.cli.variable.list_variables","title":"list_variables async","text":"

List variables.
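
Example (illustrative):

$ prefect variable ls --limit 50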

Source code in prefect/cli/variable.py
@variable_app.command(\"ls\")\nasync def list_variables(\n    limit: int = typer.Option(\n        100,\n        \"--limit\",\n        help=\"The maximum number of variables to return.\",\n    ),\n):\n    \"\"\"\n    List variables.\n    \"\"\"\n    async with get_client() as client:\n        variables = await client.read_variables(\n            limit=limit,\n        )\n\n        table = Table(\n            title=\"Variables\",\n            caption=\"List Variables using `prefect variable ls`\",\n            show_header=True,\n        )\n\n        table.add_column(\"Name\", style=\"blue\", no_wrap=True)\n        # values can be up 5000 characters so truncate early\n        table.add_column(\"Value\", style=\"blue\", no_wrap=True, max_width=50)\n        table.add_column(\"Created\", style=\"blue\", no_wrap=True)\n        table.add_column(\"Updated\", style=\"blue\", no_wrap=True)\n\n        for variable in sorted(variables, key=lambda x: f\"{x.name}\"):\n            table.add_row(\n                variable.name,\n                variable.value,\n                pendulum.instance(variable.created).diff_for_humans(),\n                pendulum.instance(variable.updated).diff_for_humans(),\n            )\n\n        app.console.print(table)\n
","tags":["Python API","variables","CLI"]},{"location":"api-ref/prefect/cli/work_pool/","title":"work_pool","text":"","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool","title":"prefect.cli.work_pool","text":"

Command line interface for working with work pools.

","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.clear_concurrency_limit","title":"clear_concurrency_limit async","text":"

Clear the concurrency limit for a work pool.

Examples:

$ prefect work-pool clear-concurrency-limit "my-pool"

Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def clear_concurrency_limit(\n    name: str = typer.Argument(..., help=\"The name of the work pool to update.\"),\n):\n    \"\"\"\n    Clear the concurrency limit for a work pool.\n\n    \\b\n    Examples:\n        $ prefect work-pool clear-concurrency-limit \"my-pool\"\n\n    \"\"\"\n    async with get_client() as client:\n        try:\n            await client.update_work_pool(\n                work_pool_name=name,\n                work_pool=WorkPoolUpdate(\n                    concurrency_limit=None,\n                ),\n            )\n        except ObjectNotFound as exc:\n            exit_with_error(exc)\n\n        exit_with_success(f\"Cleared concurrency limit for work pool {name!r}\")\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.create","title":"create async","text":"

Create a new work pool.

Examples:

Create a Kubernetes work pool in a paused state:

$ prefect work-pool create "my-pool" --type kubernetes --paused

Create a Docker work pool with a custom base job template:

$ prefect work-pool create "my-pool" --type docker --base-job-template ./base-job-template.json

Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def create(\n    name: str = typer.Argument(..., help=\"The name of the work pool.\"),\n    base_job_template: typer.FileText = typer.Option(\n        None,\n        \"--base-job-template\",\n        help=(\n            \"The path to a JSON file containing the base job template to use. If\"\n            \" unspecified, Prefect will use the default base job template for the given\"\n            \" worker type.\"\n        ),\n    ),\n    paused: bool = typer.Option(\n        False,\n        \"--paused\",\n        help=\"Whether or not to create the work pool in a paused state.\",\n    ),\n    type: str = typer.Option(\n        None, \"-t\", \"--type\", help=\"The type of work pool to create.\"\n    ),\n    set_as_default: bool = typer.Option(\n        False,\n        \"--set-as-default\",\n        help=(\n            \"Whether or not to use the created work pool as the local default for\"\n            \" deployment.\"\n        ),\n    ),\n    provision_infrastructure: bool = typer.Option(\n        False,\n        \"--provision-infrastructure\",\n        \"--provision-infra\",\n        help=(\n            \"Whether or not to provision infrastructure for the work pool if supported\"\n            \" for the given work pool type.\"\n        ),\n    ),\n):\n    \"\"\"\n    Create a new work pool.\n\n    \\b\n    Examples:\n        \\b\n        Create a Kubernetes work pool in a paused state:\n            \\b\n            $ prefect work-pool create \"my-pool\" --type kubernetes --paused\n        \\b\n        Create a Docker work pool with a custom base job template:\n            \\b\n            $ prefect work-pool create \"my-pool\" --type docker --base-job-template ./base-job-template.json\n\n    \"\"\"\n    if not name.lower().strip(\"'\\\" \"):\n        exit_with_error(\"Work pool name cannot be empty.\")\n    async with get_client() as client:\n        try:\n            await client.read_work_pool(work_pool_name=name)\n        except ObjectNotFound:\n            pass\n        else:\n            exit_with_error(\n                f\"Work pool named {name!r} already exists. 
Please try creating your\"\n                \" work pool again with a different name.\"\n            )\n\n        if type is None:\n            async with get_collections_metadata_client() as collections_client:\n                if not is_interactive():\n                    exit_with_error(\n                        \"When not using an interactive terminal, you must supply a\"\n                        \" `--type` value.\"\n                    )\n                worker_metadata = await collections_client.read_worker_metadata()\n\n                # Retrieve only push pools if provisioning infrastructure\n                data = [\n                    worker\n                    for collection in worker_metadata.values()\n                    for worker in collection.values()\n                    if provision_infrastructure\n                    and has_provisioner_for_type(worker[\"type\"])\n                    or not provision_infrastructure\n                ]\n                worker = prompt_select_from_table(\n                    app.console,\n                    \"What type of work pool infrastructure would you like to use?\",\n                    columns=[\n                        {\"header\": \"Infrastructure Type\", \"key\": \"display_name\"},\n                        {\"header\": \"Description\", \"key\": \"description\"},\n                    ],\n                    data=data,\n                    table_kwargs={\"show_lines\": True},\n                )\n                type = worker[\"type\"]\n\n        available_work_pool_types = await get_available_work_pool_types()\n        if type not in available_work_pool_types:\n            exit_with_error(\n                f\"Unknown work pool type {type!r}. \"\n                \"Please choose from\"\n                f\" {', '.join(available_work_pool_types)}.\"\n            )\n\n        if base_job_template is None:\n            template_contents = (\n                await get_default_base_job_template_for_infrastructure_type(type)\n            )\n        else:\n            template_contents = json.load(base_job_template)\n\n        if provision_infrastructure:\n            try:\n                provisioner = get_infrastructure_provisioner_for_work_pool_type(type)\n                provisioner.console = app.console\n                template_contents = await provisioner.provision(\n                    work_pool_name=name, base_job_template=template_contents\n                )\n            except ValueError as exc:\n                print(exc)\n                app.console.print(\n                    (\n                        \"Automatic infrastructure provisioning is not supported for\"\n                        f\" {type!r} work pools.\"\n                    ),\n                    style=\"yellow\",\n                )\n            except RuntimeError as exc:\n                exit_with_error(f\"Failed to provision infrastructure: {exc}\")\n\n        try:\n            wp = WorkPoolCreate(\n                name=name,\n                type=type,\n                base_job_template=template_contents,\n                is_paused=paused,\n            )\n            work_pool = await client.create_work_pool(work_pool=wp)\n            app.console.print(f\"Created work pool {work_pool.name!r}!\\n\", style=\"green\")\n            if (\n                not work_pool.is_paused\n                and not work_pool.is_managed_pool\n                and not work_pool.is_push_pool\n            ):\n                app.console.print(\"To start a worker for this work 
pool, run:\\n\")\n                app.console.print(\n                    f\"\\t[blue]prefect worker start --pool {work_pool.name}[/]\\n\"\n                )\n            if set_as_default:\n                set_work_pool_as_default(work_pool.name)\n            exit_with_success(\"\")\n        except ObjectAlreadyExists:\n            exit_with_error(\n                f\"Work pool named {name!r} already exists. Please try creating your\"\n                \" work pool again with a different name.\"\n            )\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.delete","title":"delete async","text":"

Delete a work pool.

Examples:

$ prefect work-pool delete "my-pool"

Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def delete(\n    name: str = typer.Argument(..., help=\"The name of the work pool to delete.\"),\n):\n    \"\"\"\n    Delete a work pool.\n\n    \\b\n    Examples:\n        $ prefect work-pool delete \"my-pool\"\n\n    \"\"\"\n    async with get_client() as client:\n        try:\n            await client.delete_work_pool(work_pool_name=name)\n        except ObjectNotFound as exc:\n            exit_with_error(exc)\n\n        exit_with_success(f\"Deleted work pool {name!r}\")\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.get_default_base_job_template","title":"get_default_base_job_template async","text":"

Get the default base job template for a given work pool type.

Examples:

$ prefect work-pool get-default-base-job-template --type kubernetes

Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def get_default_base_job_template(\n    type: str = typer.Option(\n        None,\n        \"-t\",\n        \"--type\",\n        help=\"The type of work pool for which to get the default base job template.\",\n    ),\n    file: str = typer.Option(\n        None, \"-f\", \"--file\", help=\"If set, write the output to a file.\"\n    ),\n):\n    \"\"\"\n    Get the default base job template for a given work pool type.\n\n    \\b\n    Examples:\n        $ prefect work-pool get-default-base-job-template --type kubernetes\n    \"\"\"\n    base_job_template = await get_default_base_job_template_for_infrastructure_type(\n        type\n    )\n    if base_job_template is None:\n        exit_with_error(\n            f\"Unknown work pool type {type!r}. \"\n            \"Please choose from\"\n            f\" {', '.join(await get_available_work_pool_types())}.\"\n        )\n\n    if file is None:\n        print(json.dumps(base_job_template, indent=2))\n    else:\n        with open(file, mode=\"w\") as f:\n            json.dump(base_job_template, fp=f, indent=2)\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.has_provisioner_for_type","title":"has_provisioner_for_type","text":"

Check if there is a provisioner for the given work pool type.

Parameters:

work_pool_type (str): The type of the work pool. (required)

Returns:

bool: True if a provisioner exists for the given type, False otherwise.

Source code in prefect/cli/work_pool.py
def has_provisioner_for_type(work_pool_type: str) -> bool:\n    \"\"\"\n    Check if there is a provisioner for the given work pool type.\n\n    Args:\n        work_pool_type (str): The type of the work pool.\n\n    Returns:\n        bool: True if a provisioner exists for the given type, False otherwise.\n    \"\"\"\n    return work_pool_type in _provisioners\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.inspect","title":"inspect async","text":"

Inspect a work pool.

Examples:

$ prefect work-pool inspect "my-pool"

Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def inspect(\n    name: str = typer.Argument(..., help=\"The name of the work pool to inspect.\"),\n):\n    \"\"\"\n    Inspect a work pool.\n\n    \\b\n    Examples:\n        $ prefect work-pool inspect \"my-pool\"\n\n    \"\"\"\n    async with get_client() as client:\n        try:\n            pool = await client.read_work_pool(work_pool_name=name)\n            app.console.print(Pretty(pool))\n        except ObjectNotFound:\n            exit_with_error(f\"Work pool {name!r} not found!\")\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.ls","title":"ls async","text":"

List work pools.

Examples: $ prefect work-pool ls

Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def ls(\n    verbose: bool = typer.Option(\n        False,\n        \"--verbose\",\n        \"-v\",\n        help=\"Show additional information about work pools.\",\n    ),\n):\n    \"\"\"\n    List work pools.\n\n    \\b\n    Examples:\n        $ prefect work-pool ls\n    \"\"\"\n    table = Table(\n        title=\"Work Pools\", caption=\"(**) denotes a paused pool\", caption_style=\"red\"\n    )\n    table.add_column(\"Name\", style=\"green\", no_wrap=True)\n    table.add_column(\"Type\", style=\"magenta\", no_wrap=True)\n    table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n    if verbose:\n        table.add_column(\"Base Job Template\", style=\"magenta\", no_wrap=True)\n\n    async with get_client() as client:\n        pools = await client.read_work_pools()\n\n    def sort_by_created_key(q):\n        return pendulum.now(\"utc\") - q.created\n\n    for pool in sorted(pools, key=sort_by_created_key):\n        row = [\n            f\"{pool.name} [red](**)\" if pool.is_paused else pool.name,\n            str(pool.type),\n            str(pool.id),\n            (\n                f\"[red]{pool.concurrency_limit}\"\n                if pool.concurrency_limit is not None\n                else \"[blue]None\"\n            ),\n        ]\n        if verbose:\n            row.append(str(pool.base_job_template))\n        table.add_row(*row)\n\n    app.console.print(table)\n
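For a programmatic listing, a minimal sketch using the same client call as the CLI (read_work_pools), assuming a reachable API:

import asyncio

from prefect import get_client

async def list_pools() -> None:
    # Equivalent of `prefect work-pool ls`: read all pools and print a summary line each.
    async with get_client() as client:
        pools = await client.read_work_pools()
    for pool in pools:
        paused = " (paused)" if pool.is_paused else ""
        print(f"{pool.name} [{pool.type}] concurrency={pool.concurrency_limit}{paused}")

asyncio.run(list_pools())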
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.pause","title":"pause async","text":"

Pause a work pool.

Examples: $ prefect work-pool pause \"my-pool\"

Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def pause(\n    name: str = typer.Argument(..., help=\"The name of the work pool to pause.\"),\n):\n    \"\"\"\n    Pause a work pool.\n\n    \\b\n    Examples:\n        $ prefect work-pool pause \"my-pool\"\n\n    \"\"\"\n    async with get_client() as client:\n        try:\n            await client.update_work_pool(\n                work_pool_name=name,\n                work_pool=WorkPoolUpdate(\n                    is_paused=True,\n                ),\n            )\n        except ObjectNotFound as exc:\n            exit_with_error(exc)\n\n        exit_with_success(f\"Paused work pool {name!r}\")\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.preview","title":"preview async","text":"

Preview the work pool's scheduled work for all queues.

Examples: $ prefect work-pool preview \"my-pool\" --hours 24

Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def preview(\n    name: str = typer.Argument(None, help=\"The name or ID of the work pool to preview\"),\n    hours: int = typer.Option(\n        None,\n        \"-h\",\n        \"--hours\",\n        help=\"The number of hours to look ahead; defaults to 1 hour\",\n    ),\n):\n    \"\"\"\n    Preview the work pool's scheduled work for all queues.\n\n    \\b\n    Examples:\n        $ prefect work-pool preview \"my-pool\" --hours 24\n\n    \"\"\"\n    if hours is None:\n        hours = 1\n\n    async with get_client() as client:\n        try:\n            responses = await client.get_scheduled_flow_runs_for_work_pool(\n                work_pool_name=name,\n            )\n        except ObjectNotFound as exc:\n            exit_with_error(exc)\n\n    runs = [response.flow_run for response in responses]\n    table = Table(caption=\"(**) denotes a late run\", caption_style=\"red\")\n\n    table.add_column(\n        \"Scheduled Start Time\", justify=\"left\", style=\"yellow\", no_wrap=True\n    )\n    table.add_column(\"Run ID\", justify=\"left\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Name\", style=\"green\", no_wrap=True)\n    table.add_column(\"Deployment ID\", style=\"blue\", no_wrap=True)\n\n    pendulum.now(\"utc\").add(hours=hours or 1)\n\n    now = pendulum.now(\"utc\")\n\n    def sort_by_created_key(r):\n        return now - r.created\n\n    for run in sorted(runs, key=sort_by_created_key):\n        table.add_row(\n            (\n                f\"{run.expected_start_time} [red](**)\"\n                if run.expected_start_time < now\n                else f\"{run.expected_start_time}\"\n            ),\n            str(run.id),\n            run.name,\n            str(run.deployment_id),\n        )\n\n    if runs:\n        app.console.print(table)\n    else:\n        app.console.print(\n            (\n                \"No runs found - try increasing how far into the future you preview\"\n                \" with the --hours flag\"\n            ),\n            style=\"yellow\",\n        )\n
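A minimal sketch of the underlying client call (get_scheduled_flow_runs_for_work_pool); the pool name is a placeholder:

import asyncio

from prefect import get_client

async def preview_pool(name: str) -> None:
    # Fetch scheduled runs across every queue in the pool, as the CLI does above.
    async with get_client() as client:
        responses = await client.get_scheduled_flow_runs_for_work_pool(work_pool_name=name)
    for response in responses:
        run = response.flow_run
        print(run.expected_start_time, run.id, run.name)

asyncio.run(preview_pool("my-pool"))  # placeholder pool name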
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.provision_infrastructure","title":"provision_infrastructure async","text":"

Provision infrastructure for a work pool.

Examples: $ prefect work-pool provision-infrastructure \"my-pool\"

$ prefect work-pool provision-infra \"my-pool\"
Source code in prefect/cli/work_pool.py
@work_pool_app.command(aliases=[\"provision-infra\"])\nasync def provision_infrastructure(\n    name: str = typer.Argument(\n        ..., help=\"The name of the work pool to provision infrastructure for.\"\n    ),\n):\n    \"\"\"\n    Provision infrastructure for a work pool.\n\n    \\b\n    Examples:\n        $ prefect work-pool provision-infrastructure \"my-pool\"\n\n        $ prefect work-pool provision-infra \"my-pool\"\n\n    \"\"\"\n    async with get_client() as client:\n        try:\n            work_pool = await client.read_work_pool(work_pool_name=name)\n            if not work_pool.is_push_pool:\n                exit_with_error(\n                    f\"Work pool {name!r} is not a push pool type. \"\n                    \"Please try provisioning infrastructure for a push pool.\"\n                )\n        except ObjectNotFound:\n            exit_with_error(f\"Work pool {name!r} does not exist.\")\n        except Exception as exc:\n            exit_with_error(f\"Failed to read work pool {name!r}: {exc}\")\n\n        try:\n            provisioner = get_infrastructure_provisioner_for_work_pool_type(\n                work_pool.type\n            )\n            provisioner.console = app.console\n            new_base_job_template = await provisioner.provision(\n                work_pool_name=name, base_job_template=work_pool.base_job_template\n            )\n\n            await client.update_work_pool(\n                work_pool_name=name,\n                work_pool=WorkPoolUpdate(\n                    base_job_template=new_base_job_template,\n                ),\n            )\n\n        except ValueError as exc:\n            app.console.print(f\"Error: {exc}\")\n            app.console.print(\n                (\n                    \"Automatic infrastructure provisioning is not supported for\"\n                    f\" {work_pool.type!r} work pools.\"\n                ),\n                style=\"yellow\",\n            )\n        except RuntimeError as exc:\n            exit_with_error(\n                f\"Failed to provision infrastructure for '{name}' work pool: {exc}\"\n            )\n
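A hedged pre-flight sketch mirroring the check the CLI makes before provisioning; only push pools qualify, and the pool name is a placeholder:

import asyncio

from prefect import get_client

async def is_push_pool(name: str) -> bool:
    # Provisioning only applies to push work pools, so check before running
    # `prefect work-pool provision-infrastructure <name>`.
    async with get_client() as client:
        pool = await client.read_work_pool(work_pool_name=name)
    return pool.is_push_pool

print(asyncio.run(is_push_pool("my-pool")))  # placeholder pool name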
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.resume","title":"resume async","text":"

Resume a work pool.

Examples: $ prefect work-pool resume \"my-pool\"

Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def resume(\n    name: str = typer.Argument(..., help=\"The name of the work pool to resume.\"),\n):\n    \"\"\"\n    Resume a work pool.\n\n    \\b\n    Examples:\n        $ prefect work-pool resume \"my-pool\"\n\n    \"\"\"\n    async with get_client() as client:\n        try:\n            await client.update_work_pool(\n                work_pool_name=name,\n                work_pool=WorkPoolUpdate(\n                    is_paused=False,\n                ),\n            )\n        except ObjectNotFound as exc:\n            exit_with_error(exc)\n\n        exit_with_success(f\"Resumed work pool {name!r}\")\n
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.set_concurrency_limit","title":"set_concurrency_limit async","text":"

Set the concurrency limit for a work pool.

Examples: $ prefect work-pool set-concurrency-limit \"my-pool\" 10

Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def set_concurrency_limit(\n    name: str = typer.Argument(..., help=\"The name of the work pool to update.\"),\n    concurrency_limit: int = typer.Argument(\n        ..., help=\"The new concurrency limit for the work pool.\"\n    ),\n):\n    \"\"\"\n    Set the concurrency limit for a work pool.\n\n    \\b\n    Examples:\n        $ prefect work-pool set-concurrency-limit \"my-pool\" 10\n\n    \"\"\"\n    async with get_client() as client:\n        try:\n            await client.update_work_pool(\n                work_pool_name=name,\n                work_pool=WorkPoolUpdate(\n                    concurrency_limit=concurrency_limit,\n                ),\n            )\n        except ObjectNotFound as exc:\n            exit_with_error(exc)\n\n        exit_with_success(\n            f\"Set concurrency limit for work pool {name!r} to {concurrency_limit}\"\n        )\n
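The same update expressed with the client; the WorkPoolUpdate import path is an assumption for current Prefect 2.x releases:

import asyncio

from prefect import get_client
from prefect.client.schemas.actions import WorkPoolUpdate  # import path is an assumption

async def set_limit(name: str, limit: int) -> None:
    # Only the concurrency limit field is sent; other work pool settings are untouched.
    async with get_client() as client:
        await client.update_work_pool(
            work_pool_name=name, work_pool=WorkPoolUpdate(concurrency_limit=limit)
        )

asyncio.run(set_limit("my-pool", 10))  # placeholder pool name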
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.update","title":"update async","text":"

Update a work pool.

Examples: $ prefect work-pool update \"my-pool\"

Source code in prefect/cli/work_pool.py
@work_pool_app.command()\nasync def update(\n    name: str = typer.Argument(..., help=\"The name of the work pool to update.\"),\n    base_job_template: typer.FileText = typer.Option(\n        None,\n        \"--base-job-template\",\n        help=(\n            \"The path to a JSON file containing the base job template to use. If\"\n            \" unspecified, Prefect will use the default base job template for the given\"\n            \" worker type. If None, the base job template will not be modified.\"\n        ),\n    ),\n    concurrency_limit: int = typer.Option(\n        None,\n        \"--concurrency-limit\",\n        help=(\n            \"The concurrency limit for the work pool. If None, the concurrency limit\"\n            \" will not be modified.\"\n        ),\n    ),\n    description: str = typer.Option(\n        None,\n        \"--description\",\n        help=(\n            \"The description for the work pool. If None, the description will not be\"\n            \" modified.\"\n        ),\n    ),\n):\n    \"\"\"\n    Update a work pool.\n\n    \\b\n    Examples:\n        $ prefect work-pool update \"my-pool\"\n\n    \"\"\"\n    wp = WorkPoolUpdate()\n    if base_job_template:\n        wp.base_job_template = json.load(base_job_template)\n    if concurrency_limit:\n        wp.concurrency_limit = concurrency_limit\n    if description:\n        wp.description = description\n\n    async with get_client() as client:\n        try:\n            await client.update_work_pool(\n                work_pool_name=name,\n                work_pool=wp,\n            )\n        except ObjectNotFound:\n            exit_with_error(\"Work pool named {name!r} does not exist.\")\n\n        exit_with_success(f\"Updated work pool {name!r}\")\n
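A sketch of applying an edited base job template from disk, mirroring `prefect work-pool update --base-job-template <file>`; the file name and import path are assumptions:

import asyncio
import json

from prefect import get_client
from prefect.client.schemas.actions import WorkPoolUpdate  # import path is an assumption

async def apply_template(name: str, template_path: str) -> None:
    # Load the JSON template and send only the base_job_template field.
    with open(template_path) as f:
        template = json.load(f)
    async with get_client() as client:
        await client.update_work_pool(
            work_pool_name=name, work_pool=WorkPoolUpdate(base_job_template=template)
        )

asyncio.run(apply_template("my-pool", "base-job-template.json"))  # placeholder names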
","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_queue/","title":"work_queue","text":"","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue","title":"prefect.cli.work_queue","text":"

Command line interface for working with work queues.

","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.clear_concurrency_limit","title":"clear_concurrency_limit async","text":"

Clear any concurrency limits from a work queue.

Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def clear_concurrency_limit(\n    name: str = typer.Argument(..., help=\"The name or ID of the work queue to clear\"),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n    \"\"\"\n    Clear any concurrency limits from a work queue.\n    \"\"\"\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n    async with get_client() as client:\n        try:\n            await client.update_work_queue(\n                id=queue_id,\n                concurrency_limit=None,\n            )\n        except ObjectNotFound:\n            if pool:\n                error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n            else:\n                error_message = f\"No work queue found: {name!r}\"\n            exit_with_error(error_message)\n\n    if pool:\n        success_message = (\n            f\"Concurrency limits removed on work queue {name!r} in work pool {pool!r}\"\n        )\n    else:\n        success_message = f\"Concurrency limits removed on work queue {name!r}\"\n    exit_with_success(success_message)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.create","title":"create async","text":"

Create a work queue.

Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def create(\n    name: str = typer.Argument(..., help=\"The unique name to assign this work queue\"),\n    limit: int = typer.Option(\n        None, \"-l\", \"--limit\", help=\"The concurrency limit to set on the queue.\"\n    ),\n    tags: List[str] = typer.Option(\n        None,\n        \"-t\",\n        \"--tag\",\n        help=(\n            \"DEPRECATED: One or more optional tags. This option will be removed on\"\n            \" 2023-02-23.\"\n        ),\n    ),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool to create the work queue in.\",\n    ),\n    priority: Optional[int] = typer.Option(\n        None,\n        \"-q\",\n        \"--priority\",\n        help=\"The associated priority for the created work queue\",\n    ),\n):\n    \"\"\"\n    Create a work queue.\n    \"\"\"\n    if tags:\n        app.console.print(\n            (\n                \"Supplying `tags` for work queues is deprecated. This work \"\n                \"queue will use legacy tag-matching behavior. \"\n                \"This option will be removed on 2023-02-23.\"\n            ),\n            style=\"red\",\n        )\n\n    if pool and tags:\n        exit_with_error(\n            \"Work queues created with tags cannot specify work pools or set priorities.\"\n        )\n\n    async with get_client() as client:\n        try:\n            result = await client.create_work_queue(\n                name=name, tags=tags or None, work_pool_name=pool, priority=priority\n            )\n            if limit is not None:\n                await client.update_work_queue(\n                    id=result.id,\n                    concurrency_limit=limit,\n                )\n        except ObjectAlreadyExists:\n            exit_with_error(f\"Work queue with name: {name!r} already exists.\")\n        except ObjectNotFound:\n            exit_with_error(f\"Work pool with name: {pool!r} not found.\")\n\n    if tags:\n        tags_message = f\"tags - {', '.join(sorted(tags))}\\n\" or \"\"\n        output_msg = dedent(\n            f\"\"\"\n            Created work queue with properties:\n                name - {name!r}\n                id - {result.id}\n                concurrency limit - {limit}\n                {tags_message}\n            Start an agent to pick up flow runs from the work queue:\n                prefect agent start -q '{result.name}'\n\n            Inspect the work queue:\n                prefect work-queue inspect '{result.name}'\n            \"\"\"\n        )\n    else:\n        if not pool:\n            # specify the default work pool name after work queue creation to allow the server\n            # to handle a bunch of logic associated with agents without work pools\n            pool = DEFAULT_AGENT_WORK_POOL_NAME\n        output_msg = dedent(\n            f\"\"\"\n            Created work queue with properties:\n                name - {name!r}\n                work pool - {pool!r}\n                id - {result.id}\n                concurrency limit - {limit}\n            Start an agent to pick up flow runs from the work queue:\n                prefect agent start -q '{result.name} -p {pool}'\n\n            Inspect the work queue:\n                prefect work-queue inspect '{result.name}'\n            \"\"\"\n        )\n    exit_with_success(output_msg)\n
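The two client calls the CLI chains together, as a minimal sketch (queue and pool names are placeholders):

import asyncio

from prefect import get_client

async def create_queue() -> None:
    # Create the queue in a work pool, then set its concurrency limit separately,
    # much like `prefect work-queue create "high-priority" -p my-pool -l 5` does.
    async with get_client() as client:
        queue = await client.create_work_queue(
            name="high-priority", work_pool_name="my-pool", priority=1
        )
        await client.update_work_queue(id=queue.id, concurrency_limit=5)
        print(f"Created work queue {queue.name} ({queue.id})")

asyncio.run(create_queue())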
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.delete","title":"delete async","text":"

Delete a work queue by ID.

Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def delete(\n    name: str = typer.Argument(..., help=\"The name or ID of the work queue to delete\"),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool containing the work queue to delete.\",\n    ),\n):\n    \"\"\"\n    Delete a work queue by ID.\n    \"\"\"\n\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n    async with get_client() as client:\n        try:\n            await client.delete_work_queue_by_id(id=queue_id)\n        except ObjectNotFound:\n            if pool:\n                error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n            else:\n                error_message = f\"No work queue found: {name!r}\"\n            exit_with_error(error_message)\n    if pool:\n        success_message = (\n            f\"Successfully deleted work queue {name!r} in work pool {pool!r}\"\n        )\n    else:\n        success_message = f\"Successfully deleted work queue {name!r}\"\n    exit_with_success(success_message)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.inspect","title":"inspect async","text":"

Inspect a work queue by ID.

Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def inspect(\n    name: str = typer.Argument(\n        None, help=\"The name or ID of the work queue to inspect\"\n    ),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n    \"\"\"\n    Inspect a work queue by ID.\n    \"\"\"\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n    async with get_client() as client:\n        try:\n            result = await client.read_work_queue(id=queue_id)\n            app.console.print(Pretty(result))\n        except ObjectNotFound:\n            if pool:\n                error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n            else:\n                error_message = f\"No work queue found: {name!r}\"\n            exit_with_error(error_message)\n\n        try:\n            status = await client.read_work_queue_status(id=queue_id)\n            app.console.print(Pretty(status))\n        except ObjectNotFound:\n            pass\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.ls","title":"ls async","text":"

View all work queues.

Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def ls(\n    verbose: bool = typer.Option(\n        False, \"--verbose\", \"-v\", help=\"Display more information.\"\n    ),\n    work_queue_prefix: str = typer.Option(\n        None,\n        \"--match\",\n        \"-m\",\n        help=(\n            \"Will match work queues with names that start with the specified prefix\"\n            \" string\"\n        ),\n    ),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool containing the work queues to list.\",\n    ),\n):\n    \"\"\"\n    View all work queues.\n    \"\"\"\n    if not pool and not experiment_enabled(\"work_pools\"):\n        table = Table(\n            title=\"Work Queues\",\n            caption=\"(**) denotes a paused queue\",\n            caption_style=\"red\",\n        )\n        table.add_column(\"Name\", style=\"green\", no_wrap=True)\n        table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n        table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n        if verbose:\n            table.add_column(\"Filter (Deprecated)\", style=\"magenta\", no_wrap=True)\n\n        async with get_client() as client:\n            if work_queue_prefix is not None:\n                queues = await client.match_work_queues([work_queue_prefix])\n            else:\n                queues = await client.read_work_queues()\n\n            def sort_by_created_key(q):\n                return pendulum.now(\"utc\") - q.created\n\n            for queue in sorted(queues, key=sort_by_created_key):\n                row = [\n                    f\"{queue.name} [red](**)\" if queue.is_paused else queue.name,\n                    str(queue.id),\n                    (\n                        f\"[red]{queue.concurrency_limit}\"\n                        if queue.concurrency_limit is not None\n                        else \"[blue]None\"\n                    ),\n                ]\n                if verbose and queue.filter is not None:\n                    row.append(queue.filter.json())\n                table.add_row(*row)\n    elif not pool:\n        table = Table(\n            title=\"Work Queues\",\n            caption=\"(**) denotes a paused queue\",\n            caption_style=\"red\",\n        )\n        table.add_column(\"Name\", style=\"green\", no_wrap=True)\n        table.add_column(\"Pool\", style=\"magenta\", no_wrap=True)\n        table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n        table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n        if verbose:\n            table.add_column(\"Filter (Deprecated)\", style=\"magenta\", no_wrap=True)\n\n        async with get_client() as client:\n            if work_queue_prefix is not None:\n                queues = await client.match_work_queues([work_queue_prefix])\n            else:\n                queues = await client.read_work_queues()\n\n            pool_ids = [q.work_pool_id for q in queues]\n            wp_filter = WorkPoolFilter(id=WorkPoolFilterId(any_=pool_ids))\n            pools = await client.read_work_pools(work_pool_filter=wp_filter)\n            pool_id_name_map = {p.id: p.name for p in pools}\n\n            def sort_by_created_key(q):\n                return pendulum.now(\"utc\") - q.created\n\n            for queue in sorted(queues, key=sort_by_created_key):\n                row = [\n                
    f\"{queue.name} [red](**)\" if queue.is_paused else queue.name,\n                    pool_id_name_map[queue.work_pool_id],\n                    str(queue.id),\n                    (\n                        f\"[red]{queue.concurrency_limit}\"\n                        if queue.concurrency_limit is not None\n                        else \"[blue]None\"\n                    ),\n                ]\n                if verbose and queue.filter is not None:\n                    row.append(queue.filter.json())\n                table.add_row(*row)\n\n    else:\n        table = Table(\n            title=f\"Work Queues in Work Pool {pool!r}\",\n            caption=\"(**) denotes a paused queue\",\n            caption_style=\"red\",\n        )\n        table.add_column(\"Name\", style=\"green\", no_wrap=True)\n        table.add_column(\"Priority\", style=\"magenta\", no_wrap=True)\n        table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n        if verbose:\n            table.add_column(\"Description\", style=\"cyan\", no_wrap=False)\n\n        async with get_client() as client:\n            try:\n                queues = await client.read_work_queues(work_pool_name=pool)\n            except ObjectNotFound:\n                exit_with_error(f\"No work pool found: {pool!r}\")\n\n            def sort_by_created_key(q):\n                return pendulum.now(\"utc\") - q.created\n\n            for queue in sorted(queues, key=sort_by_created_key):\n                row = [\n                    f\"{queue.name} [red](**)\" if queue.is_paused else queue.name,\n                    f\"{queue.priority}\",\n                    (\n                        f\"[red]{queue.concurrency_limit}\"\n                        if queue.concurrency_limit is not None\n                        else \"[blue]None\"\n                    ),\n                ]\n                if verbose:\n                    row.append(queue.description)\n                table.add_row(*row)\n\n    app.console.print(table)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.pause","title":"pause async","text":"

Pause a work queue.

Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def pause(\n    name: str = typer.Argument(..., help=\"The name or ID of the work queue to pause\"),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n    \"\"\"\n    Pause a work queue.\n    \"\"\"\n\n    if not pool and not typer.confirm(\n        f\"You have not specified a work pool. Are you sure you want to pause {name} work queue in `{DEFAULT_AGENT_WORK_POOL_NAME}`?\"\n    ):\n        exit_with_error(\"Work queue pause aborted!\")\n\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n\n    async with get_client() as client:\n        try:\n            await client.update_work_queue(\n                id=queue_id,\n                is_paused=True,\n            )\n        except ObjectNotFound:\n            if pool:\n                error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n            else:\n                error_message = f\"No work queue found: {name!r}\"\n            exit_with_error(error_message)\n\n    if pool:\n        success_message = f\"Work queue {name!r} in work pool {pool!r} paused\"\n    else:\n        success_message = f\"Work queue {name!r} paused\"\n    exit_with_success(success_message)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.preview","title":"preview async","text":"

Preview a work queue.

Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def preview(\n    name: str = typer.Argument(\n        None, help=\"The name or ID of the work queue to preview\"\n    ),\n    hours: int = typer.Option(\n        None,\n        \"-h\",\n        \"--hours\",\n        help=\"The number of hours to look ahead; defaults to 1 hour\",\n    ),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n    \"\"\"\n    Preview a work queue.\n    \"\"\"\n    if pool:\n        title = f\"Preview of Work Queue {name!r} in Work Pool {pool!r}\"\n    else:\n        title = f\"Preview of Work Queue {name!r}\"\n\n    table = Table(title=title, caption=\"(**) denotes a late run\", caption_style=\"red\")\n    table.add_column(\n        \"Scheduled Start Time\", justify=\"left\", style=\"yellow\", no_wrap=True\n    )\n    table.add_column(\"Run ID\", justify=\"left\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Name\", style=\"green\", no_wrap=True)\n    table.add_column(\"Deployment ID\", style=\"blue\", no_wrap=True)\n\n    window = pendulum.now(\"utc\").add(hours=hours or 1)\n\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name, work_pool_name=pool\n    )\n    async with get_client() as client:\n        if pool:\n            try:\n                responses = await client.get_scheduled_flow_runs_for_work_pool(\n                    work_pool_name=pool,\n                    work_queue_names=[name],\n                )\n                runs = [response.flow_run for response in responses]\n            except ObjectNotFound:\n                exit_with_error(f\"No work queue found: {name!r} in work pool {pool!r}\")\n        else:\n            try:\n                runs = await client.get_runs_in_work_queue(\n                    queue_id,\n                    limit=10,\n                    scheduled_before=window,\n                )\n            except ObjectNotFound:\n                exit_with_error(f\"No work queue found: {name!r}\")\n    now = pendulum.now(\"utc\")\n\n    def sort_by_created_key(r):\n        return now - r.created\n\n    for run in sorted(runs, key=sort_by_created_key):\n        table.add_row(\n            (\n                f\"{run.expected_start_time} [red](**)\"\n                if run.expected_start_time < now\n                else f\"{run.expected_start_time}\"\n            ),\n            str(run.id),\n            run.name,\n            str(run.deployment_id),\n        )\n\n    if runs:\n        app.console.print(table)\n    else:\n        app.console.print(\n            (\n                \"No runs found - try increasing how far into the future you preview\"\n                \" with the --hours flag\"\n            ),\n            style=\"yellow\",\n        )\n
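For a single queue without a pool, a sketch of the direct client call; the UUID below is a placeholder for a real queue ID (for example from `prefect work-queue ls`):

import asyncio
from uuid import UUID

import pendulum
from prefect import get_client

QUEUE_ID = UUID("00000000-0000-0000-0000-000000000000")  # placeholder queue ID

async def preview_queue() -> None:
    # Look ahead 24 hours, mirroring `prefect work-queue preview --hours 24`.
    window = pendulum.now("utc").add(hours=24)
    async with get_client() as client:
        runs = await client.get_runs_in_work_queue(QUEUE_ID, limit=10, scheduled_before=window)
    for run in runs:
        print(run.expected_start_time, run.id, run.name)

asyncio.run(preview_queue())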
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.read_wq_runs","title":"read_wq_runs async","text":"

Get runs in a work queue. Note that this will trigger an artificial poll of the work queue.

Source code in prefect/cli/work_queue.py
@work_app.command(\"read-runs\")\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def read_wq_runs(\n    name: str = typer.Argument(..., help=\"The name or ID of the work queue to poll\"),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool containing the work queue to poll.\",\n    ),\n):\n    \"\"\"\n    Get runs in a work queue. Note that this will trigger an artificial poll of\n    the work queue.\n    \"\"\"\n\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n    async with get_client() as client:\n        try:\n            runs = await client.get_runs_in_work_queue(id=queue_id)\n        except ObjectNotFound:\n            if pool:\n                error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n            else:\n                error_message = f\"No work queue found: {name!r}\"\n            exit_with_error(error_message)\n    success_message = (\n        f\"Read {len(runs)} runs for work queue {name!r} in work pool {pool}: {runs}\"\n    )\n    exit_with_success(success_message)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.resume","title":"resume async","text":"

Resume a paused work queue.

Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def resume(\n    name: str = typer.Argument(..., help=\"The name or ID of the work queue to resume\"),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n    \"\"\"\n    Resume a paused work queue.\n    \"\"\"\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n\n    async with get_client() as client:\n        try:\n            await client.update_work_queue(\n                id=queue_id,\n                is_paused=False,\n            )\n        except ObjectNotFound:\n            if pool:\n                error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n            else:\n                error_message = f\"No work queue found: {name!r}\"\n            exit_with_error(error_message)\n\n    if pool:\n        success_message = f\"Work queue {name!r} in work pool {pool!r} resumed\"\n    else:\n        success_message = f\"Work queue {name!r} resumed\"\n    exit_with_success(success_message)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.set_concurrency_limit","title":"set_concurrency_limit async","text":"

Set a concurrency limit on a work queue.

Source code in prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def set_concurrency_limit(\n    name: str = typer.Argument(..., help=\"The name or ID of the work queue\"),\n    limit: int = typer.Argument(..., help=\"The concurrency limit to set on the queue.\"),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n    \"\"\"\n    Set a concurrency limit on a work queue.\n    \"\"\"\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n\n    async with get_client() as client:\n        try:\n            await client.update_work_queue(\n                id=queue_id,\n                concurrency_limit=limit,\n            )\n        except ObjectNotFound:\n            if pool:\n                error_message = (\n                    f\"No work queue named {name!r} found in work pool {pool!r}.\"\n                )\n            else:\n                error_message = f\"No work queue named {name!r} found.\"\n            exit_with_error(error_message)\n\n    if pool:\n        success_message = (\n            f\"Concurrency limit of {limit} set on work queue {name!r} in work pool\"\n            f\" {pool!r}\"\n        )\n    else:\n        success_message = f\"Concurrency limit of {limit} set on work queue {name!r}\"\n    exit_with_success(success_message)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/worker/","title":"worker","text":"","tags":["Python API","workers","CLI"]},{"location":"api-ref/prefect/cli/worker/#prefect.cli.worker","title":"prefect.cli.worker","text":"","tags":["Python API","workers","CLI"]},{"location":"api-ref/prefect/cli/worker/#prefect.cli.worker.start","title":"start async","text":"

Start a worker process to poll a work pool for flow runs.

Source code in prefect/cli/worker.py
@worker_app.command()\nasync def start(\n    worker_name: str = typer.Option(\n        None,\n        \"-n\",\n        \"--name\",\n        help=(\n            \"The name to give to the started worker. If not provided, a unique name\"\n            \" will be generated.\"\n        ),\n    ),\n    work_pool_name: str = typer.Option(\n        ...,\n        \"-p\",\n        \"--pool\",\n        help=\"The work pool the started worker should poll.\",\n        prompt=True,\n    ),\n    work_queues: List[str] = typer.Option(\n        None,\n        \"-q\",\n        \"--work-queue\",\n        help=(\n            \"One or more work queue names for the worker to pull from. If not provided,\"\n            \" the worker will pull from all work queues in the work pool.\"\n        ),\n    ),\n    worker_type: Optional[str] = typer.Option(\n        None,\n        \"-t\",\n        \"--type\",\n        help=(\n            \"The type of worker to start. If not provided, the worker type will be\"\n            \" inferred from the work pool.\"\n        ),\n    ),\n    prefetch_seconds: int = SettingsOption(\n        PREFECT_WORKER_PREFETCH_SECONDS,\n        help=\"Number of seconds to look into the future for scheduled flow runs.\",\n    ),\n    run_once: bool = typer.Option(\n        False, help=\"Only run worker polling once. By default, the worker runs forever.\"\n    ),\n    limit: int = typer.Option(\n        None,\n        \"-l\",\n        \"--limit\",\n        help=\"Maximum number of flow runs to start simultaneously.\",\n    ),\n    with_healthcheck: bool = typer.Option(\n        False, help=\"Start a healthcheck server for the worker.\"\n    ),\n    install_policy: InstallPolicy = typer.Option(\n        InstallPolicy.PROMPT.value,\n        \"--install-policy\",\n        help=\"Install policy to use workers from Prefect integration packages.\",\n        case_sensitive=False,\n    ),\n    base_job_template: typer.FileText = typer.Option(\n        None,\n        \"--base-job-template\",\n        help=(\n            \"The path to a JSON file containing the base job template to use. If\"\n            \" unspecified, Prefect will use the default base job template for the given\"\n            \" worker type. If the work pool already exists, this will be ignored.\"\n        ),\n    ),\n):\n    \"\"\"\n    Start a worker process to poll a work pool for flow runs.\n    \"\"\"\n\n    is_paused = await _check_work_pool_paused(work_pool_name)\n    if is_paused:\n        app.console.print(\n            (\n                f\"The work pool {work_pool_name!r} is currently paused. This worker\"\n                \" will not execute any flow runs until the work pool is unpaused.\"\n            ),\n            style=\"yellow\",\n        )\n\n    is_queues_paused = await _check_work_queues_paused(\n        work_pool_name,\n        work_queues,\n    )\n    if is_queues_paused:\n        queue_scope = (\n            \"All work queues\" if not work_queues else \"Specified work queue(s)\"\n        )\n        app.console.print(\n            (\n                f\"{queue_scope} in the work pool {work_pool_name!r} are currently\"\n                \" paused. This worker will not execute any flow runs until the work\"\n                \" queues are unpaused.\"\n            ),\n            style=\"yellow\",\n        )\n\n    worker_cls = await _get_worker_class(worker_type, work_pool_name, install_policy)\n\n    if worker_cls is None:\n        exit_with_error(\n            \"Unable to start worker. 
Please ensure you have the necessary dependencies\"\n            \" installed to run your desired worker type.\"\n        )\n\n    worker_process_id = os.getpid()\n    setup_signal_handlers_worker(\n        worker_process_id, f\"the {worker_type} worker\", app.console.print\n    )\n\n    template_contents = None\n    if base_job_template is not None:\n        template_contents = json.load(fp=base_job_template)\n\n    async with worker_cls(\n        name=worker_name,\n        work_pool_name=work_pool_name,\n        work_queues=work_queues,\n        limit=limit,\n        prefetch_seconds=prefetch_seconds,\n        heartbeat_interval_seconds=PREFECT_WORKER_HEARTBEAT_SECONDS.value(),\n        base_job_template=template_contents,\n    ) as worker:\n        app.console.print(f\"Worker {worker.name!r} started!\", style=\"green\")\n        async with anyio.create_task_group() as tg:\n            # wait for an initial heartbeat to configure the worker\n            await worker.sync_with_backend()\n            # schedule the scheduled flow run polling loop\n            tg.start_soon(\n                partial(\n                    critical_service_loop,\n                    workload=worker.get_and_submit_flow_runs,\n                    interval=PREFECT_WORKER_QUERY_SECONDS.value(),\n                    run_once=run_once,\n                    printer=app.console.print,\n                    jitter_range=0.3,\n                    backoff=4,  # Up to ~1 minute interval during backoff\n                )\n            )\n            # schedule the sync loop\n            tg.start_soon(\n                partial(\n                    critical_service_loop,\n                    workload=worker.sync_with_backend,\n                    interval=worker.heartbeat_interval_seconds,\n                    run_once=run_once,\n                    printer=app.console.print,\n                    jitter_range=0.3,\n                    backoff=4,\n                )\n            )\n            tg.start_soon(\n                partial(\n                    critical_service_loop,\n                    workload=worker.check_for_cancelled_flow_runs,\n                    interval=PREFECT_WORKER_QUERY_SECONDS.value() * 2,\n                    run_once=run_once,\n                    printer=app.console.print,\n                    jitter_range=0.3,\n                    backoff=4,\n                )\n            )\n\n            started_event = await worker._emit_worker_started_event()\n\n            # if --with-healthcheck was passed, start the healthcheck server\n            if with_healthcheck:\n                # we'll start the ASGI server in a separate thread so that\n                # uvicorn does not block the main thread\n                server_thread = threading.Thread(\n                    name=\"healthcheck-server-thread\",\n                    target=partial(\n                        start_healthcheck_server,\n                        worker=worker,\n                        query_interval_seconds=PREFECT_WORKER_QUERY_SECONDS.value(),\n                    ),\n                    daemon=True,\n                )\n                server_thread.start()\n\n    await worker._emit_worker_stopped_event(started_event)\n    app.console.print(f\"Worker {worker.name!r} stopped!\")\n
","tags":["Python API","workers","CLI"]},{"location":"api-ref/prefect/client/base/","title":"base","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base","title":"prefect.client.base","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectResponse","title":"PrefectResponse","text":"

Bases: Response

A Prefect wrapper for the httpx.Response class.

Provides more informative error messages.

Source code in prefect/client/base.py
class PrefectResponse(httpx.Response):\n    \"\"\"\n    A Prefect wrapper for the `httpx.Response` class.\n\n    Provides more informative error messages.\n    \"\"\"\n\n    def raise_for_status(self) -> None:\n        \"\"\"\n        Raise an exception if the response contains an HTTPStatusError.\n\n        The `PrefectHTTPStatusError` contains useful additional information that\n        is not contained in the `HTTPStatusError`.\n        \"\"\"\n        try:\n            return super().raise_for_status()\n        except HTTPStatusError as exc:\n            raise PrefectHTTPStatusError.from_httpx_error(exc) from exc.__cause__\n\n    @classmethod\n    def from_httpx_response(cls: Type[Self], response: httpx.Response) -> Self:\n        \"\"\"\n        Create a `PrefectReponse` from an `httpx.Response`.\n\n        By changing the `__class__` attribute of the Response, we change the method\n        resolution order to look for methods defined in PrefectResponse, while leaving\n        everything else about the original Response instance intact.\n        \"\"\"\n        new_response = copy.copy(response)\n        new_response.__class__ = cls\n        return new_response\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectResponse.raise_for_status","title":"raise_for_status","text":"

Raise an exception if the response contains an HTTPStatusError.

The PrefectHTTPStatusError contains useful additional information that is not contained in the HTTPStatusError.

Source code in prefect/client/base.py
def raise_for_status(self) -> None:\n    \"\"\"\n    Raise an exception if the response contains an HTTPStatusError.\n\n    The `PrefectHTTPStatusError` contains useful additional information that\n    is not contained in the `HTTPStatusError`.\n    \"\"\"\n    try:\n        return super().raise_for_status()\n    except HTTPStatusError as exc:\n        raise PrefectHTTPStatusError.from_httpx_error(exc) from exc.__cause__\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectResponse.from_httpx_response","title":"from_httpx_response classmethod","text":"

Create a PrefectResponse from an httpx.Response.

By changing the __class__ attribute of the Response, we change the method resolution order to look for methods defined in PrefectResponse, while leaving everything else about the original Response instance intact.

Source code in prefect/client/base.py
@classmethod\ndef from_httpx_response(cls: Type[Self], response: httpx.Response) -> Self:\n    \"\"\"\n    Create a `PrefectReponse` from an `httpx.Response`.\n\n    By changing the `__class__` attribute of the Response, we change the method\n    resolution order to look for methods defined in PrefectResponse, while leaving\n    everything else about the original Response instance intact.\n    \"\"\"\n    new_response = copy.copy(response)\n    new_response.__class__ = cls\n    return new_response\n
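A small sketch of wrapping a plain httpx response so that raise_for_status raises Prefect's richer error; the URL and status code are arbitrary:

import httpx

from prefect.client.base import PrefectResponse

# Build a throwaway 503 response and wrap it; only the class is swapped,
# the underlying response data is untouched.
raw = httpx.Response(503, request=httpx.Request("GET", "https://example.invalid/api"))
wrapped = PrefectResponse.from_httpx_response(raw)

try:
    wrapped.raise_for_status()
except Exception as exc:
    print(type(exc).__name__, exc)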
","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectHttpxClient","title":"PrefectHttpxClient","text":"

Bases: AsyncClient

A Prefect wrapper for the async httpx client with support for retry-after headers for the provided status codes (typically 429, 502 and 503).

Additionally, this client will always call raise_for_status on responses.

For more details on rate limit headers, see: Configuring Cloudflare Rate Limiting

Source code in prefect/client/base.py
class PrefectHttpxClient(httpx.AsyncClient):\n    \"\"\"\n    A Prefect wrapper for the async httpx client with support for retry-after headers\n    for the provided status codes (typically 429, 502 and 503).\n\n    Additionally, this client will always call `raise_for_status` on responses.\n\n    For more details on rate limit headers, see:\n    [Configuring Cloudflare Rate Limiting](https://support.cloudflare.com/hc/en-us/articles/115001635128-Configuring-Rate-Limiting-from-UI)\n    \"\"\"\n\n    def __init__(\n        self,\n        *args,\n        enable_csrf_support: bool = False,\n        raise_on_all_errors: bool = True,\n        **kwargs,\n    ):\n        self.enable_csrf_support: bool = enable_csrf_support\n        self.csrf_token: Optional[str] = None\n        self.csrf_token_expiration: Optional[datetime] = None\n        self.csrf_client_id: uuid.UUID = uuid.uuid4()\n        self.raise_on_all_errors: bool = raise_on_all_errors\n\n        super().__init__(*args, **kwargs)\n\n        user_agent = (\n            f\"prefect/{prefect.__version__} (API {constants.SERVER_API_VERSION})\"\n        )\n        self.headers[\"User-Agent\"] = user_agent\n\n    async def _send_with_retry(\n        self,\n        request: Request,\n        send: Callable[[Request], Awaitable[Response]],\n        send_args: Tuple,\n        send_kwargs: Dict,\n        retry_codes: Set[int] = set(),\n        retry_exceptions: Tuple[Exception, ...] = tuple(),\n    ):\n        \"\"\"\n        Send a request and retry it if it fails.\n\n        Sends the provided request and retries it up to PREFECT_CLIENT_MAX_RETRIES times\n        if the request either raises an exception listed in `retry_exceptions` or\n        receives a response with a status code listed in `retry_codes`.\n\n        Retries will be delayed based on either the retry header (preferred) or\n        exponential backoff if a retry header is not provided.\n        \"\"\"\n        try_count = 0\n        response = None\n\n        is_change_request = request.method.lower() in {\"post\", \"put\", \"patch\", \"delete\"}\n\n        if self.enable_csrf_support and is_change_request:\n            await self._add_csrf_headers(request=request)\n\n        while try_count <= PREFECT_CLIENT_MAX_RETRIES.value():\n            try_count += 1\n            retry_seconds = None\n            exc_info = None\n\n            try:\n                response = await send(request, *send_args, **send_kwargs)\n            except retry_exceptions:  # type: ignore\n                if try_count > PREFECT_CLIENT_MAX_RETRIES.value():\n                    raise\n                # Otherwise, we will ignore this error but capture the info for logging\n                exc_info = sys.exc_info()\n            else:\n                # We got a response; check if it's a CSRF error, otherwise\n                # return immediately if it is not retryable\n                if (\n                    response.status_code == status.HTTP_403_FORBIDDEN\n                    and \"Invalid CSRF token\" in response.text\n                ):\n                    # We got a CSRF error, clear the token and try again\n                    self.csrf_token = None\n                    await self._add_csrf_headers(request)\n                elif response.status_code not in retry_codes:\n                    return response\n\n                if \"Retry-After\" in response.headers:\n                    retry_seconds = float(response.headers[\"Retry-After\"])\n\n            # Use an exponential back-off if not set in a 
header\n            if retry_seconds is None:\n                retry_seconds = 2**try_count\n\n            # Add jitter\n            jitter_factor = PREFECT_CLIENT_RETRY_JITTER_FACTOR.value()\n            if retry_seconds > 0 and jitter_factor > 0:\n                if response is not None and \"Retry-After\" in response.headers:\n                    # Always wait for _at least_ retry seconds if requested by the API\n                    retry_seconds = bounded_poisson_interval(\n                        retry_seconds, retry_seconds * (1 + jitter_factor)\n                    )\n                else:\n                    # Otherwise, use a symmetrical jitter\n                    retry_seconds = clamped_poisson_interval(\n                        retry_seconds, jitter_factor\n                    )\n\n            logger.debug(\n                (\n                    \"Encountered retryable exception during request. \"\n                    if exc_info\n                    else (\n                        \"Received response with retryable status code\"\n                        f\" {response.status_code}. \"\n                    )\n                )\n                + f\"Another attempt will be made in {retry_seconds}s. \"\n                \"This is attempt\"\n                f\" {try_count}/{PREFECT_CLIENT_MAX_RETRIES.value() + 1}.\",\n                exc_info=exc_info,\n            )\n            await anyio.sleep(retry_seconds)\n\n        assert (\n            response is not None\n        ), \"Retry handling ended without response or exception\"\n\n        # We ran out of retries, return the failed response\n        return response\n\n    async def send(self, request: Request, *args, **kwargs) -> Response:\n        \"\"\"\n        Send a request with automatic retry behavior for the following status codes:\n\n        - 403 Forbidden, if the request failed due to CSRF protection\n        - 408 Request Timeout\n        - 429 CloudFlare-style rate limiting\n        - 502 Bad Gateway\n        - 503 Service unavailable\n        - Any additional status codes provided in `PREFECT_CLIENT_RETRY_EXTRA_CODES`\n        \"\"\"\n\n        super_send = super().send\n        response = await self._send_with_retry(\n            request=request,\n            send=super_send,\n            send_args=args,\n            send_kwargs=kwargs,\n            retry_codes={\n                status.HTTP_429_TOO_MANY_REQUESTS,\n                status.HTTP_503_SERVICE_UNAVAILABLE,\n                status.HTTP_502_BAD_GATEWAY,\n                status.HTTP_408_REQUEST_TIMEOUT,\n                *PREFECT_CLIENT_RETRY_EXTRA_CODES.value(),\n            },\n            retry_exceptions=(\n                httpx.ReadTimeout,\n                httpx.PoolTimeout,\n                httpx.ConnectTimeout,\n                # `ConnectionResetError` when reading socket raises as a `ReadError`\n                httpx.ReadError,\n                # Sockets can be closed during writes resulting in a `WriteError`\n                httpx.WriteError,\n                # Uvicorn bug, see https://github.com/PrefectHQ/prefect/issues/7512\n                httpx.RemoteProtocolError,\n                # HTTP2 bug, see https://github.com/PrefectHQ/prefect/issues/7442\n                httpx.LocalProtocolError,\n            ),\n        )\n\n        # Convert to a Prefect response to add nicer errors messages\n        response = PrefectResponse.from_httpx_response(response)\n\n        if self.raise_on_all_errors:\n            response.raise_for_status()\n\n        
return response\n\n    async def _add_csrf_headers(self, request: Request):\n        now = datetime.now(timezone.utc)\n\n        if not self.enable_csrf_support:\n            return\n\n        if not self.csrf_token or (\n            self.csrf_token_expiration and now > self.csrf_token_expiration\n        ):\n            token_request = self.build_request(\n                \"GET\", f\"/csrf-token?client={self.csrf_client_id}\"\n            )\n\n            try:\n                token_response = await self.send(token_request)\n            except PrefectHTTPStatusError as exc:\n                old_server = exc.response.status_code == status.HTTP_404_NOT_FOUND\n                unconfigured_server = (\n                    exc.response.status_code == status.HTTP_422_UNPROCESSABLE_ENTITY\n                    and \"CSRF protection is disabled.\" in exc.response.text\n                )\n\n                if old_server or unconfigured_server:\n                    # The token endpoint is either unavailable, suggesting an\n                    # older server, or CSRF protection is disabled. In either\n                    # case we should disable CSRF support.\n                    self.enable_csrf_support = False\n                    return\n\n                raise\n\n            token: CsrfToken = CsrfToken.parse_obj(token_response.json())\n            self.csrf_token = token.token\n            self.csrf_token_expiration = token.expiration\n\n        request.headers[\"Prefect-Csrf-Token\"] = self.csrf_token\n        request.headers[\"Prefect-Csrf-Client\"] = str(self.csrf_client_id)\n
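A minimal usage sketch; the API URL is a placeholder, and `raise_on_all_errors=False` keeps error handling manual:

import asyncio

from prefect.client.base import PrefectHttpxClient

async def check_health(api_url: str) -> None:
    # Requests go through the retrying `send`, so 408/429/502/503 responses are
    # retried automatically before the final response is returned.
    async with PrefectHttpxClient(base_url=api_url, raise_on_all_errors=False) as client:
        response = await client.get("/health")
        print(response.status_code)

asyncio.run(check_health("http://127.0.0.1:4200/api"))  # placeholder API URL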
","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectHttpxClient.send","title":"send async","text":"

Send a request with automatic retry behavior for the following status codes:

  • 403 Forbidden, if the request failed due to CSRF protection
  • 408 Request Timeout
  • 429 CloudFlare-style rate limiting
  • 502 Bad Gateway
  • 503 Service unavailable
  • Any additional status codes provided in PREFECT_CLIENT_RETRY_EXTRA_CODES
Source code in prefect/client/base.py
async def send(self, request: Request, *args, **kwargs) -> Response:\n    \"\"\"\n    Send a request with automatic retry behavior for the following status codes:\n\n    - 403 Forbidden, if the request failed due to CSRF protection\n    - 408 Request Timeout\n    - 429 CloudFlare-style rate limiting\n    - 502 Bad Gateway\n    - 503 Service unavailable\n    - Any additional status codes provided in `PREFECT_CLIENT_RETRY_EXTRA_CODES`\n    \"\"\"\n\n    super_send = super().send\n    response = await self._send_with_retry(\n        request=request,\n        send=super_send,\n        send_args=args,\n        send_kwargs=kwargs,\n        retry_codes={\n            status.HTTP_429_TOO_MANY_REQUESTS,\n            status.HTTP_503_SERVICE_UNAVAILABLE,\n            status.HTTP_502_BAD_GATEWAY,\n            status.HTTP_408_REQUEST_TIMEOUT,\n            *PREFECT_CLIENT_RETRY_EXTRA_CODES.value(),\n        },\n        retry_exceptions=(\n            httpx.ReadTimeout,\n            httpx.PoolTimeout,\n            httpx.ConnectTimeout,\n            # `ConnectionResetError` when reading socket raises as a `ReadError`\n            httpx.ReadError,\n            # Sockets can be closed during writes resulting in a `WriteError`\n            httpx.WriteError,\n            # Uvicorn bug, see https://github.com/PrefectHQ/prefect/issues/7512\n            httpx.RemoteProtocolError,\n            # HTTP2 bug, see https://github.com/PrefectHQ/prefect/issues/7442\n            httpx.LocalProtocolError,\n        ),\n    )\n\n    # Convert to a Prefect response to add nicer errors messages\n    response = PrefectResponse.from_httpx_response(response)\n\n    if self.raise_on_all_errors:\n        response.raise_for_status()\n\n    return response\n
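If you need additional retryable codes, the setting referenced above can be changed; a hedged sketch (the temporary_settings helper and the value format are assumptions, and the same name also works as an environment variable):

from prefect.settings import PREFECT_CLIENT_RETRY_EXTRA_CODES, temporary_settings

# Treat HTTP 409 as retryable, in addition to the built-in codes, for client
# calls made while this context is active.
with temporary_settings(updates={PREFECT_CLIENT_RETRY_EXTRA_CODES: "409"}):
    ...  # make Prefect client requests here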
","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.app_lifespan_context","title":"app_lifespan_context async","text":"

A context manager that calls startup/shutdown hooks for the given application.

Lifespan contexts are cached per application to avoid calling the lifespan hooks more than once if the context is entered in nested code. A no-op context will be returned if the context for the given application is already being managed.

This manager is robust to concurrent access within the event loop. For example, if you have concurrent contexts for the same application, it is guaranteed that startup hooks will be called before their context starts and shutdown hooks will only be called after their context exits.

A reference count is used to support nested use of clients without running lifespan hooks excessively. The first client context entered will create and enter a lifespan context. Each subsequent client will increment a reference count but will not create a new lifespan context. When each client context exits, the reference count is decremented. When the last client context exits, the lifespan will be closed.

In simple nested cases, the first client context will be the one to exit the lifespan. However, if client contexts are entered concurrently they may not exit in a consistent order. If the first client context was responsible for closing the lifespan, it would have to wait for all other client contexts to exit to avoid firing shutdown hooks while the application is in use. Waiting for the other clients to exit can introduce deadlocks, so, instead, the first client will exit without closing the lifespan context and reference counts will be used to ensure the lifespan is closed once all of the clients are done.

Source code in prefect/client/base.py
@asynccontextmanager\nasync def app_lifespan_context(app: ASGIApp) -> AsyncGenerator[None, None]:\n    \"\"\"\n    A context manager that calls startup/shutdown hooks for the given application.\n\n    Lifespan contexts are cached per application to avoid calling the lifespan hooks\n    more than once if the context is entered in nested code. A no-op context will be\n    returned if the context for the given application is already being managed.\n\n    This manager is robust to concurrent access within the event loop. For example,\n    if you have concurrent contexts for the same application, it is guaranteed that\n    startup hooks will be called before their context starts and shutdown hooks will\n    only be called after their context exits.\n\n    A reference count is used to support nested use of clients without running\n    lifespan hooks excessively. The first client context entered will create and enter\n    a lifespan context. Each subsequent client will increment a reference count but will\n    not create a new lifespan context. When each client context exits, the reference\n    count is decremented. When the last client context exits, the lifespan will be\n    closed.\n\n    In simple nested cases, the first client context will be the one to exit the\n    lifespan. However, if client contexts are entered concurrently they may not exit\n    in a consistent order. If the first client context was responsible for closing\n    the lifespan, it would have to wait until all other client contexts to exit to\n    avoid firing shutdown hooks while the application is in use. Waiting for the other\n    clients to exit can introduce deadlocks, so, instead, the first client will exit\n    without closing the lifespan context and reference counts will be used to ensure\n    the lifespan is closed once all of the clients are done.\n    \"\"\"\n    # TODO: A deadlock has been observed during multithreaded use of clients while this\n    #       lifespan context is being used. This has only been reproduced on Python 3.7\n    #       and while we hope to discourage using multiple event loops in threads, this\n    #       bug may emerge again.\n    #       See https://github.com/PrefectHQ/orion/pull/1696\n    thread_id = threading.get_ident()\n\n    # The id of the application is used instead of the hash so each application instance\n    # is managed independently even if they share the same settings. 
We include the\n    # thread id since applications are managed separately per thread.\n    key = (thread_id, id(app))\n\n    # On exception, this will be populated with exception details\n    exc_info = (None, None, None)\n\n    # Get a lock unique to this thread since anyio locks are not threadsafe\n    lock = APP_LIFESPANS_LOCKS[thread_id]\n\n    async with lock:\n        if key in APP_LIFESPANS:\n            # The lifespan is already being managed, just increment the reference count\n            APP_LIFESPANS_REF_COUNTS[key] += 1\n        else:\n            # Create a new lifespan manager\n            APP_LIFESPANS[key] = context = LifespanManager(\n                app, startup_timeout=30, shutdown_timeout=30\n            )\n            APP_LIFESPANS_REF_COUNTS[key] = 1\n\n            # Ensure we enter the context before releasing the lock so startup hooks\n            # are complete before another client can be used\n            await context.__aenter__()\n\n    try:\n        yield\n    except BaseException:\n        exc_info = sys.exc_info()\n        raise\n    finally:\n        # If we do not shield against anyio cancellation, the lock will return\n        # immediately and the code in its context will not run, leaving the lifespan\n        # open\n        with anyio.CancelScope(shield=True):\n            async with lock:\n                # After the consumer exits the context, decrement the reference count\n                APP_LIFESPANS_REF_COUNTS[key] -= 1\n\n                # If this the last context to exit, close the lifespan\n                if APP_LIFESPANS_REF_COUNTS[key] <= 0:\n                    APP_LIFESPANS_REF_COUNTS.pop(key)\n                    context = APP_LIFESPANS.pop(key)\n                    await context.__aexit__(*exc_info)\n
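A short sketch of the nested-use behavior described above; the FastAPI application is only a stand-in for any ASGI app:

```
import asyncio

from fastapi import FastAPI

from prefect.client.base import app_lifespan_context

app = FastAPI()  # placeholder ASGI application


async def main():
    async with app_lifespan_context(app):      # first entry: startup hooks fire
        async with app_lifespan_context(app):  # nested entry: no-op, only the
            pass                               # reference count increments
    # exiting the last context above fires the shutdown hooks exactly once


asyncio.run(main())
```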
","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/","title":"cloud","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud","title":"prefect.client.cloud","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.CloudUnauthorizedError","title":"CloudUnauthorizedError","text":"

Bases: PrefectException

Raised when the CloudClient receives a 401 or 403 from the Cloud API.

Source code in prefect/client/cloud.py
class CloudUnauthorizedError(PrefectException):\n    \"\"\"\n    Raised when the CloudClient receives a 401 or 403 from the Cloud API.\n    \"\"\"\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.CloudClient","title":"CloudClient","text":"Source code in prefect/client/cloud.py
class CloudClient:\n    def __init__(\n        self,\n        host: str,\n        api_key: str,\n        httpx_settings: Optional[Dict[str, Any]] = None,\n    ) -> None:\n        httpx_settings = httpx_settings or dict()\n        httpx_settings.setdefault(\"headers\", dict())\n        httpx_settings[\"headers\"].setdefault(\"Authorization\", f\"Bearer {api_key}\")\n\n        httpx_settings.setdefault(\"base_url\", host)\n        if not PREFECT_UNIT_TEST_MODE.value():\n            httpx_settings.setdefault(\"follow_redirects\", True)\n        self._client = PrefectHttpxClient(**httpx_settings, enable_csrf_support=False)\n\n    async def api_healthcheck(self):\n        \"\"\"\n        Attempts to connect to the Cloud API and raises the encountered exception if not\n        successful.\n\n        If successful, returns `None`.\n        \"\"\"\n        with anyio.fail_after(10):\n            await self.read_workspaces()\n\n    async def read_workspaces(self) -> List[Workspace]:\n        workspaces = pydantic.parse_obj_as(\n            List[Workspace], await self.get(\"/me/workspaces\")\n        )\n        return workspaces\n\n    async def read_worker_metadata(self) -> Dict[str, Any]:\n        configured_url = prefect.settings.PREFECT_API_URL.value()\n        account_id, workspace_id = re.findall(PARSE_API_URL_REGEX, configured_url)[0]\n        return await self.get(\n            f\"accounts/{account_id}/workspaces/{workspace_id}/collections/work_pool_types\"\n        )\n\n    async def __aenter__(self):\n        await self._client.__aenter__()\n        return self\n\n    async def __aexit__(self, *exc_info):\n        return await self._client.__aexit__(*exc_info)\n\n    def __enter__(self):\n        raise RuntimeError(\n            \"The `CloudClient` must be entered with an async context. Use 'async \"\n            \"with CloudClient(...)' not 'with CloudClient(...)'\"\n        )\n\n    def __exit__(self, *_):\n        assert False, \"This should never be called but must be defined for __enter__\"\n\n    async def get(self, route, **kwargs):\n        return await self.request(\"GET\", route, **kwargs)\n\n    async def request(self, method, route, **kwargs):\n        try:\n            res = await self._client.request(method, route, **kwargs)\n            res.raise_for_status()\n        except httpx.HTTPStatusError as exc:\n            if exc.response.status_code in (\n                status.HTTP_401_UNAUTHORIZED,\n                status.HTTP_403_FORBIDDEN,\n            ):\n                raise CloudUnauthorizedError\n            else:\n                raise exc\n\n        if res.status_code == status.HTTP_204_NO_CONTENT:\n            return\n\n        return res.json()\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.CloudClient.api_healthcheck","title":"api_healthcheck async","text":"

Attempts to connect to the Cloud API and raises the encountered exception if not successful.

If successful, returns None.

Source code in prefect/client/cloud.py
async def api_healthcheck(self):\n    \"\"\"\n    Attempts to connect to the Cloud API and raises the encountered exception if not\n    successful.\n\n    If successful, returns `None`.\n    \"\"\"\n    with anyio.fail_after(10):\n        await self.read_workspaces()\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.get_cloud_client","title":"get_cloud_client","text":"

Create a CloudClient for the given host and API key. The host defaults to the configured Prefect Cloud API URL (or, with infer_cloud_url=True, is derived from PREFECT_API_URL), and the API key defaults to PREFECT_API_KEY.

Source code in prefect/client/cloud.py
def get_cloud_client(\n    host: Optional[str] = None,\n    api_key: Optional[str] = None,\n    httpx_settings: Optional[dict] = None,\n    infer_cloud_url: bool = False,\n) -> \"CloudClient\":\n    \"\"\"\n    Needs a docstring.\n    \"\"\"\n    if httpx_settings is not None:\n        httpx_settings = httpx_settings.copy()\n\n    if infer_cloud_url is False:\n        host = host or PREFECT_CLOUD_API_URL.value()\n    else:\n        configured_url = prefect.settings.PREFECT_API_URL.value()\n        host = re.sub(PARSE_API_URL_REGEX, \"\", configured_url)\n\n    return CloudClient(\n        host=host,\n        api_key=api_key or PREFECT_API_KEY.value(),\n        httpx_settings=httpx_settings,\n    )\n
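A usage sketch: listing workspaces with a client from get_cloud_client, assuming a valid API key is already configured (or passed explicitly):

```
import asyncio

from prefect.client.cloud import CloudUnauthorizedError, get_cloud_client


async def main():
    async with get_cloud_client() as client:
        try:
            workspaces = await client.read_workspaces()
        except CloudUnauthorizedError:
            print("The Cloud API rejected the API key (401/403).")
            return
        for workspace in workspaces:
            print(workspace)


asyncio.run(main())
```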
","tags":["Python API"]},{"location":"api-ref/prefect/client/orchestration/","title":"orchestration","text":"

Asynchronous client implementation for communicating with the Prefect REST API.

Explore the client by communicating with an in-memory webserver \u2014 no setup required:

$ # start python REPL with native await functionality\n$ python -m asyncio\n>>> from prefect import get_client\n>>> async with get_client() as client:\n...     response = await client.hello()\n...     print(response.json())\n\ud83d\udc4b\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration","title":"prefect.client.orchestration","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient","title":"PrefectClient","text":"

An asynchronous client for interacting with the Prefect REST API.

Parameters:

Name | Type | Description | Default
api | Union[str, ASGIApp] | the REST API URL or FastAPI application to connect to | required
api_key | str | An optional API key for authentication. | None
api_version | str | The API version this client is compatible with. | None
httpx_settings | Optional[Dict[str, Any]] | An optional dictionary of settings to pass to the underlying httpx.AsyncClient | None
Say hello to a Prefect REST API\n\n<div class=\"terminal\">\n```\n>>> async with get_client() as client:\n>>>     response = await client.hello()\n>>>\n>>> print(response.json())\n\ud83d\udc4b\n```\n</div>\n
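In addition to the hello example above, a sketch of constructing the client explicitly with custom httpx settings; the URL and timeout below are placeholders:

```
import asyncio

import httpx

from prefect.client.orchestration import PrefectClient


async def main():
    client = PrefectClient(
        api="http://127.0.0.1:4200/api",  # placeholder self-hosted server URL
        httpx_settings={"timeout": httpx.Timeout(30.0)},
    )
    async with client:
        # api_healthcheck() returns None when the API is reachable,
        # otherwise the encountered exception.
        print(await client.api_healthcheck())


asyncio.run(main())
```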
Source code in prefect/client/orchestration.py
class PrefectClient:\n    \"\"\"\n    An asynchronous client for interacting with the [Prefect REST API](/api-ref/rest-api/).\n\n    Args:\n        api: the REST API URL or FastAPI application to connect to\n        api_key: An optional API key for authentication.\n        api_version: The API version this client is compatible with.\n        httpx_settings: An optional dictionary of settings to pass to the underlying\n            `httpx.AsyncClient`\n\n    Examples:\n\n        Say hello to a Prefect REST API\n\n        <div class=\"terminal\">\n        ```\n        >>> async with get_client() as client:\n        >>>     response = await client.hello()\n        >>>\n        >>> print(response.json())\n        \ud83d\udc4b\n        ```\n        </div>\n    \"\"\"\n\n    def __init__(\n        self,\n        api: Union[str, ASGIApp],\n        *,\n        api_key: str = None,\n        api_version: str = None,\n        httpx_settings: Optional[Dict[str, Any]] = None,\n    ) -> None:\n        httpx_settings = httpx_settings.copy() if httpx_settings else {}\n        httpx_settings.setdefault(\"headers\", {})\n\n        if PREFECT_API_TLS_INSECURE_SKIP_VERIFY:\n            httpx_settings.setdefault(\"verify\", False)\n        else:\n            cert_file = PREFECT_API_SSL_CERT_FILE.value()\n            if not cert_file:\n                cert_file = certifi.where()\n            httpx_settings.setdefault(\"verify\", cert_file)\n\n        if api_version is None:\n            api_version = SERVER_API_VERSION\n        httpx_settings[\"headers\"].setdefault(\"X-PREFECT-API-VERSION\", api_version)\n        if api_key:\n            httpx_settings[\"headers\"].setdefault(\"Authorization\", f\"Bearer {api_key}\")\n\n        # Context management\n        self._exit_stack = AsyncExitStack()\n        self._ephemeral_app: Optional[ASGIApp] = None\n        self.manage_lifespan = True\n        self.server_type: ServerType\n\n        # Only set if this client started the lifespan of the application\n        self._ephemeral_lifespan: Optional[LifespanManager] = None\n\n        self._closed = False\n        self._started = False\n\n        # Connect to an external application\n        if isinstance(api, str):\n            if httpx_settings.get(\"app\"):\n                raise ValueError(\n                    \"Invalid httpx settings: `app` cannot be set when providing an \"\n                    \"api url. `app` is only for use with ephemeral instances. 
Provide \"\n                    \"it as the `api` parameter instead.\"\n                )\n            httpx_settings.setdefault(\"base_url\", api)\n\n            # See https://www.python-httpx.org/advanced/#pool-limit-configuration\n            httpx_settings.setdefault(\n                \"limits\",\n                httpx.Limits(\n                    # We see instability when allowing the client to open many connections at once.\n                    # Limiting concurrency results in more stable performance.\n                    max_connections=16,\n                    max_keepalive_connections=8,\n                    # The Prefect Cloud LB will keep connections alive for 30s.\n                    # Only allow the client to keep connections alive for 25s.\n                    keepalive_expiry=25,\n                ),\n            )\n\n            # See https://www.python-httpx.org/http2/\n            # Enabling HTTP/2 support on the client does not necessarily mean that your requests\n            # and responses will be transported over HTTP/2, since both the client and the server\n            # need to support HTTP/2. If you connect to a server that only supports HTTP/1.1 the\n            # client will use a standard HTTP/1.1 connection instead.\n            httpx_settings.setdefault(\"http2\", PREFECT_API_ENABLE_HTTP2.value())\n\n            self.server_type = (\n                ServerType.CLOUD\n                if api.startswith(PREFECT_CLOUD_API_URL.value())\n                else ServerType.SERVER\n            )\n\n        # Connect to an in-process application\n        elif isinstance(api, ASGIApp):\n            self._ephemeral_app = api\n            self.server_type = ServerType.EPHEMERAL\n\n            # When using an ephemeral server, server-side exceptions can be raised\n            # client-side breaking all of our response error code handling. To work\n            # around this, we create an ASGI transport with application exceptions\n            # disabled instead of using the application directly.\n            # refs:\n            # - https://github.com/PrefectHQ/prefect/pull/9637\n            # - https://github.com/encode/starlette/blob/d3a11205ed35f8e5a58a711db0ff59c86fa7bb31/starlette/middleware/errors.py#L184\n            # - https://github.com/tiangolo/fastapi/blob/8cc967a7605d3883bd04ceb5d25cc94ae079612f/fastapi/applications.py#L163-L164\n            httpx_settings.setdefault(\n                \"transport\",\n                httpx.ASGITransport(\n                    app=self._ephemeral_app, raise_app_exceptions=False\n                ),\n            )\n            httpx_settings.setdefault(\"base_url\", \"http://ephemeral-prefect/api\")\n\n        else:\n            raise TypeError(\n                f\"Unexpected type {type(api).__name__!r} for argument `api`. 
Expected\"\n                \" 'str' or 'ASGIApp/FastAPI'\"\n            )\n\n        # See https://www.python-httpx.org/advanced/#timeout-configuration\n        httpx_settings.setdefault(\n            \"timeout\",\n            httpx.Timeout(\n                connect=PREFECT_API_REQUEST_TIMEOUT.value(),\n                read=PREFECT_API_REQUEST_TIMEOUT.value(),\n                write=PREFECT_API_REQUEST_TIMEOUT.value(),\n                pool=PREFECT_API_REQUEST_TIMEOUT.value(),\n            ),\n        )\n\n        if not PREFECT_UNIT_TEST_MODE:\n            httpx_settings.setdefault(\"follow_redirects\", True)\n\n        enable_csrf_support = (\n            self.server_type != ServerType.CLOUD\n            and PREFECT_CLIENT_CSRF_SUPPORT_ENABLED.value()\n        )\n\n        self._client = PrefectHttpxClient(\n            **httpx_settings, enable_csrf_support=enable_csrf_support\n        )\n        self._loop = None\n\n        # See https://www.python-httpx.org/advanced/#custom-transports\n        #\n        # If we're using an HTTP/S client (not the ephemeral client), adjust the\n        # transport to add retries _after_ it is instantiated. If we alter the transport\n        # before instantiation, the transport will not be aware of proxies unless we\n        # reproduce all of the logic to make it so.\n        #\n        # Only alter the transport to set our default of 3 retries, don't modify any\n        # transport a user may have provided via httpx_settings.\n        #\n        # Making liberal use of getattr and isinstance checks here to avoid any\n        # surprises if the internals of httpx or httpcore change on us\n        if isinstance(api, str) and not httpx_settings.get(\"transport\"):\n            transport_for_url = getattr(self._client, \"_transport_for_url\", None)\n            if callable(transport_for_url):\n                server_transport = transport_for_url(httpx.URL(api))\n                if isinstance(server_transport, httpx.AsyncHTTPTransport):\n                    pool = getattr(server_transport, \"_pool\", None)\n                    if isinstance(pool, httpcore.AsyncConnectionPool):\n                        pool._retries = 3\n\n        self.logger = get_logger(\"client\")\n\n    @property\n    def api_url(self) -> httpx.URL:\n        \"\"\"\n        Get the base URL for the API.\n        \"\"\"\n        return self._client.base_url\n\n    # API methods ----------------------------------------------------------------------\n\n    async def api_healthcheck(self) -> Optional[Exception]:\n        \"\"\"\n        Attempts to connect to the API and returns the encountered exception if not\n        successful.\n\n        If successful, returns `None`.\n        \"\"\"\n        try:\n            await self._client.get(\"/health\")\n            return None\n        except Exception as exc:\n            return exc\n\n    async def hello(self) -> httpx.Response:\n        \"\"\"\n        Send a GET request to /hello for testing purposes.\n        \"\"\"\n        return await self._client.get(\"/hello\")\n\n    async def create_flow(self, flow: \"FlowObject\") -> UUID:\n        \"\"\"\n        Create a flow in the Prefect API.\n\n        Args:\n            flow: a [Flow][prefect.flows.Flow] object\n\n        Raises:\n            httpx.RequestError: if a flow was not created for any reason\n\n        Returns:\n            the ID of the flow in the backend\n        \"\"\"\n        return await self.create_flow_from_name(flow.name)\n\n    async def create_flow_from_name(self, 
flow_name: str) -> UUID:\n        \"\"\"\n        Create a flow in the Prefect API.\n\n        Args:\n            flow_name: the name of the new flow\n\n        Raises:\n            httpx.RequestError: if a flow was not created for any reason\n\n        Returns:\n            the ID of the flow in the backend\n        \"\"\"\n        flow_data = FlowCreate(name=flow_name)\n        response = await self._client.post(\n            \"/flows/\", json=flow_data.dict(json_compatible=True)\n        )\n\n        flow_id = response.json().get(\"id\")\n        if not flow_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        # Return the id of the created flow\n        return UUID(flow_id)\n\n    async def read_flow(self, flow_id: UUID) -> Flow:\n        \"\"\"\n        Query the Prefect API for a flow by id.\n\n        Args:\n            flow_id: the flow ID of interest\n\n        Returns:\n            a [Flow model][prefect.client.schemas.objects.Flow] representation of the flow\n        \"\"\"\n        response = await self._client.get(f\"/flows/{flow_id}\")\n        return Flow.parse_obj(response.json())\n\n    async def read_flows(\n        self,\n        *,\n        flow_filter: FlowFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        deployment_filter: DeploymentFilter = None,\n        work_pool_filter: WorkPoolFilter = None,\n        work_queue_filter: WorkQueueFilter = None,\n        sort: FlowSort = None,\n        limit: int = None,\n        offset: int = 0,\n    ) -> List[Flow]:\n        \"\"\"\n        Query the Prefect API for flows. Only flows matching all criteria will\n        be returned.\n\n        Args:\n            flow_filter: filter criteria for flows\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            deployment_filter: filter criteria for deployments\n            work_pool_filter: filter criteria for work pools\n            work_queue_filter: filter criteria for work pool queues\n            sort: sort criteria for the flows\n            limit: limit for the flow query\n            offset: offset for the flow query\n\n        Returns:\n            a list of Flow model representations of the flows\n        \"\"\"\n        body = {\n            \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n                if flow_run_filter\n                else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"deployments\": (\n                deployment_filter.dict(json_compatible=True)\n                if deployment_filter\n                else None\n            ),\n            \"work_pools\": (\n                work_pool_filter.dict(json_compatible=True)\n                if work_pool_filter\n                else None\n            ),\n            \"work_queues\": (\n                work_queue_filter.dict(json_compatible=True)\n                if work_queue_filter\n                else None\n            ),\n            \"sort\": sort,\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n\n        response = await self._client.post(\"/flows/filter\", json=body)\n        return pydantic.parse_obj_as(List[Flow], response.json())\n\n    
async def read_flow_by_name(\n        self,\n        flow_name: str,\n    ) -> Flow:\n        \"\"\"\n        Query the Prefect API for a flow by name.\n\n        Args:\n            flow_name: the name of a flow\n\n        Returns:\n            a fully hydrated Flow model\n        \"\"\"\n        response = await self._client.get(f\"/flows/name/{flow_name}\")\n        return Flow.parse_obj(response.json())\n\n    async def create_flow_run_from_deployment(\n        self,\n        deployment_id: UUID,\n        *,\n        parameters: Optional[Dict[str, Any]] = None,\n        context: Optional[Dict[str, Any]] = None,\n        state: prefect.states.State = None,\n        name: str = None,\n        tags: Iterable[str] = None,\n        idempotency_key: str = None,\n        parent_task_run_id: UUID = None,\n        work_queue_name: str = None,\n        job_variables: Optional[Dict[str, Any]] = None,\n    ) -> FlowRun:\n        \"\"\"\n        Create a flow run for a deployment.\n\n        Args:\n            deployment_id: The deployment ID to create the flow run from\n            parameters: Parameter overrides for this flow run. Merged with the\n                deployment defaults\n            context: Optional run context data\n            state: The initial state for the run. If not provided, defaults to\n                `Scheduled` for now. Should always be a `Scheduled` type.\n            name: An optional name for the flow run. If not provided, the server will\n                generate a name.\n            tags: An optional iterable of tags to apply to the flow run; these tags\n                are merged with the deployment's tags.\n            idempotency_key: Optional idempotency key for creation of the flow run.\n                If the key matches the key of an existing flow run, the existing run will\n                be returned instead of creating a new one.\n            parent_task_run_id: if a subflow run is being created, the placeholder task\n                run identifier in the parent flow\n            work_queue_name: An optional work queue name to add this run to. If not provided,\n                will default to the deployment's set work queue.  
If one is provided that does not\n                exist, a new work queue will be created within the deployment's work pool.\n            job_variables: Optional variables that will be supplied to the flow run job.\n\n        Raises:\n            httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n        Returns:\n            The flow run model\n        \"\"\"\n        parameters = parameters or {}\n        context = context or {}\n        state = state or prefect.states.Scheduled()\n        tags = tags or []\n\n        flow_run_create = DeploymentFlowRunCreate(\n            parameters=parameters,\n            context=context,\n            state=state.to_state_create(),\n            tags=tags,\n            name=name,\n            idempotency_key=idempotency_key,\n            parent_task_run_id=parent_task_run_id,\n            job_variables=job_variables,\n        )\n\n        # done separately to avoid including this field in payloads sent to older API versions\n        if work_queue_name:\n            flow_run_create.work_queue_name = work_queue_name\n\n        response = await self._client.post(\n            f\"/deployments/{deployment_id}/create_flow_run\",\n            json=flow_run_create.dict(json_compatible=True, exclude_unset=True),\n        )\n        return FlowRun.parse_obj(response.json())\n\n    async def create_flow_run(\n        self,\n        flow: \"FlowObject\",\n        name: Optional[str] = None,\n        parameters: Optional[Dict[str, Any]] = None,\n        context: Optional[Dict[str, Any]] = None,\n        tags: Optional[Iterable[str]] = None,\n        parent_task_run_id: Optional[UUID] = None,\n        state: Optional[\"prefect.states.State\"] = None,\n    ) -> FlowRun:\n        \"\"\"\n        Create a flow run for a flow.\n\n        Args:\n            flow: The flow model to create the flow run for\n            name: An optional name for the flow run\n            parameters: Parameter overrides for this flow run.\n            context: Optional run context data\n            tags: a list of tags to apply to this flow run\n            parent_task_run_id: if a subflow run is being created, the placeholder task\n                run identifier in the parent flow\n            state: The initial state for the run. If not provided, defaults to\n                `Scheduled` for now. 
Should always be a `Scheduled` type.\n\n        Raises:\n            httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n        Returns:\n            The flow run model\n        \"\"\"\n        parameters = parameters or {}\n        context = context or {}\n\n        if state is None:\n            state = prefect.states.Pending()\n\n        # Retrieve the flow id\n        flow_id = await self.create_flow(flow)\n\n        flow_run_create = FlowRunCreate(\n            flow_id=flow_id,\n            flow_version=flow.version,\n            name=name,\n            parameters=parameters,\n            context=context,\n            tags=list(tags or []),\n            parent_task_run_id=parent_task_run_id,\n            state=state.to_state_create(),\n            empirical_policy=FlowRunPolicy(\n                retries=flow.retries,\n                retry_delay=flow.retry_delay_seconds,\n            ),\n        )\n\n        flow_run_create_json = flow_run_create.dict(json_compatible=True)\n        response = await self._client.post(\"/flow_runs/\", json=flow_run_create_json)\n        flow_run = FlowRun.parse_obj(response.json())\n\n        # Restore the parameters to the local objects to retain expectations about\n        # Python objects\n        flow_run.parameters = parameters\n\n        return flow_run\n\n    async def update_flow_run(\n        self,\n        flow_run_id: UUID,\n        flow_version: Optional[str] = None,\n        parameters: Optional[dict] = None,\n        name: Optional[str] = None,\n        tags: Optional[Iterable[str]] = None,\n        empirical_policy: Optional[FlowRunPolicy] = None,\n        infrastructure_pid: Optional[str] = None,\n        job_variables: Optional[dict] = None,\n    ) -> httpx.Response:\n        \"\"\"\n        Update a flow run's details.\n\n        Args:\n            flow_run_id: The identifier for the flow run to update.\n            flow_version: A new version string for the flow run.\n            parameters: A dictionary of parameter values for the flow run. This will not\n                be merged with any existing parameters.\n            name: A new name for the flow run.\n            empirical_policy: A new flow run orchestration policy. This will not be\n                merged with any existing policy.\n            tags: An iterable of new tags for the flow run. 
These will not be merged with\n                any existing tags.\n            infrastructure_pid: The id of flow run as returned by an\n                infrastructure block.\n\n        Returns:\n            an `httpx.Response` object from the PATCH request\n        \"\"\"\n        params = {}\n        if flow_version is not None:\n            params[\"flow_version\"] = flow_version\n        if parameters is not None:\n            params[\"parameters\"] = parameters\n        if name is not None:\n            params[\"name\"] = name\n        if tags is not None:\n            params[\"tags\"] = tags\n        if empirical_policy is not None:\n            params[\"empirical_policy\"] = empirical_policy\n        if infrastructure_pid:\n            params[\"infrastructure_pid\"] = infrastructure_pid\n        if job_variables is not None:\n            params[\"job_variables\"] = job_variables\n\n        flow_run_data = FlowRunUpdate(**params)\n\n        return await self._client.patch(\n            f\"/flow_runs/{flow_run_id}\",\n            json=flow_run_data.dict(json_compatible=True, exclude_unset=True),\n        )\n\n    async def delete_flow_run(\n        self,\n        flow_run_id: UUID,\n    ) -> None:\n        \"\"\"\n        Delete a flow run by UUID.\n\n        Args:\n            flow_run_id: The flow run UUID of interest.\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If requests fails\n        \"\"\"\n        try:\n            await self._client.delete(f\"/flow_runs/{flow_run_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def create_concurrency_limit(\n        self,\n        tag: str,\n        concurrency_limit: int,\n    ) -> UUID:\n        \"\"\"\n        Create a tag concurrency limit in the Prefect API. 
These limits govern concurrently\n        running tasks.\n\n        Args:\n            tag: a tag the concurrency limit is applied to\n            concurrency_limit: the maximum number of concurrent task runs for a given tag\n\n        Raises:\n            httpx.RequestError: if the concurrency limit was not created for any reason\n\n        Returns:\n            the ID of the concurrency limit in the backend\n        \"\"\"\n\n        concurrency_limit_create = ConcurrencyLimitCreate(\n            tag=tag,\n            concurrency_limit=concurrency_limit,\n        )\n        response = await self._client.post(\n            \"/concurrency_limits/\",\n            json=concurrency_limit_create.dict(json_compatible=True),\n        )\n\n        concurrency_limit_id = response.json().get(\"id\")\n\n        if not concurrency_limit_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        return UUID(concurrency_limit_id)\n\n    async def read_concurrency_limit_by_tag(\n        self,\n        tag: str,\n    ):\n        \"\"\"\n        Read the concurrency limit set on a specific tag.\n\n        Args:\n            tag: a tag the concurrency limit is applied to\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: if the concurrency limit was not created for any reason\n\n        Returns:\n            the concurrency limit set on a specific tag\n        \"\"\"\n        try:\n            response = await self._client.get(\n                f\"/concurrency_limits/tag/{tag}\",\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n        concurrency_limit_id = response.json().get(\"id\")\n\n        if not concurrency_limit_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        concurrency_limit = ConcurrencyLimit.parse_obj(response.json())\n        return concurrency_limit\n\n    async def read_concurrency_limits(\n        self,\n        limit: int,\n        offset: int,\n    ):\n        \"\"\"\n        Lists concurrency limits set on task run tags.\n\n        Args:\n            limit: the maximum number of concurrency limits returned\n            offset: the concurrency limit query offset\n\n        Returns:\n            a list of concurrency limits\n        \"\"\"\n\n        body = {\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n\n        response = await self._client.post(\"/concurrency_limits/filter\", json=body)\n        return pydantic.parse_obj_as(List[ConcurrencyLimit], response.json())\n\n    async def reset_concurrency_limit_by_tag(\n        self,\n        tag: str,\n        slot_override: Optional[List[Union[UUID, str]]] = None,\n    ):\n        \"\"\"\n        Resets the concurrency limit slots set on a specific tag.\n\n        Args:\n            tag: a tag the concurrency limit is applied to\n            slot_override: a list of task run IDs that are currently using a\n                concurrency slot, please check that any task run IDs included in\n                `slot_override` are currently running, otherwise those concurrency\n                slots will never be released.\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If request fails\n\n        \"\"\"\n  
      if slot_override is not None:\n            slot_override = [str(slot) for slot in slot_override]\n\n        try:\n            await self._client.post(\n                f\"/concurrency_limits/tag/{tag}/reset\",\n                json=dict(slot_override=slot_override),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def delete_concurrency_limit_by_tag(\n        self,\n        tag: str,\n    ):\n        \"\"\"\n        Delete the concurrency limit set on a specific tag.\n\n        Args:\n            tag: a tag the concurrency limit is applied to\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If request fails\n\n        \"\"\"\n        try:\n            await self._client.delete(\n                f\"/concurrency_limits/tag/{tag}\",\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def create_work_queue(\n        self,\n        name: str,\n        tags: Optional[List[str]] = None,\n        description: Optional[str] = None,\n        is_paused: Optional[bool] = None,\n        concurrency_limit: Optional[int] = None,\n        priority: Optional[int] = None,\n        work_pool_name: Optional[str] = None,\n    ) -> WorkQueue:\n        \"\"\"\n        Create a work queue.\n\n        Args:\n            name: a unique name for the work queue\n            tags: DEPRECATED: an optional list of tags to filter on; only work scheduled with these tags\n                will be included in the queue. This option will be removed on 2023-02-23.\n            description: An optional description for the work queue.\n            is_paused: Whether or not the work queue is paused.\n            concurrency_limit: An optional concurrency limit for the work queue.\n            priority: The queue's priority. 
Lower values are higher priority (1 is the highest).\n            work_pool_name: The name of the work pool to use for this queue.\n\n        Raises:\n            prefect.exceptions.ObjectAlreadyExists: If request returns 409\n            httpx.RequestError: If request fails\n\n        Returns:\n            The created work queue\n        \"\"\"\n        if tags:\n            warnings.warn(\n                (\n                    \"The use of tags for creating work queue filters is deprecated.\"\n                    \" This option will be removed on 2023-02-23.\"\n                ),\n                DeprecationWarning,\n            )\n            filter = QueueFilter(tags=tags)\n        else:\n            filter = None\n        create_model = WorkQueueCreate(name=name, filter=filter)\n        if description is not None:\n            create_model.description = description\n        if is_paused is not None:\n            create_model.is_paused = is_paused\n        if concurrency_limit is not None:\n            create_model.concurrency_limit = concurrency_limit\n        if priority is not None:\n            create_model.priority = priority\n\n        data = create_model.dict(json_compatible=True)\n        try:\n            if work_pool_name is not None:\n                response = await self._client.post(\n                    f\"/work_pools/{work_pool_name}/queues\", json=data\n                )\n            else:\n                response = await self._client.post(\"/work_queues/\", json=data)\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_409_CONFLICT:\n                raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n            elif e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return WorkQueue.parse_obj(response.json())\n\n    async def read_work_queue_by_name(\n        self,\n        name: str,\n        work_pool_name: Optional[str] = None,\n    ) -> WorkQueue:\n        \"\"\"\n        Read a work queue by name.\n\n        Args:\n            name (str): a unique name for the work queue\n            work_pool_name (str, optional): the name of the work pool\n                the queue belongs to.\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: if no work queue is found\n            httpx.HTTPStatusError: other status errors\n\n        Returns:\n            WorkQueue: a work queue API object\n        \"\"\"\n        try:\n            if work_pool_name is not None:\n                response = await self._client.get(\n                    f\"/work_pools/{work_pool_name}/queues/{name}\"\n                )\n            else:\n                response = await self._client.get(f\"/work_queues/name/{name}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n        return WorkQueue.parse_obj(response.json())\n\n    async def update_work_queue(self, id: UUID, **kwargs):\n        \"\"\"\n        Update properties of a work queue.\n\n        Args:\n            id: the ID of the work queue to update\n            **kwargs: the fields to update\n\n        Raises:\n            ValueError: if no kwargs are provided\n            prefect.exceptions.ObjectNotFound: if request returns 404\n            
httpx.RequestError: if the request fails\n\n        \"\"\"\n        if not kwargs:\n            raise ValueError(\"No fields provided to update.\")\n\n        data = WorkQueueUpdate(**kwargs).dict(json_compatible=True, exclude_unset=True)\n        try:\n            await self._client.patch(f\"/work_queues/{id}\", json=data)\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def get_runs_in_work_queue(\n        self,\n        id: UUID,\n        limit: int = 10,\n        scheduled_before: datetime.datetime = None,\n    ) -> List[FlowRun]:\n        \"\"\"\n        Read flow runs off a work queue.\n\n        Args:\n            id: the id of the work queue to read from\n            limit: a limit on the number of runs to return\n            scheduled_before: a timestamp; only runs scheduled before this time will be returned.\n                Defaults to now.\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If request fails\n\n        Returns:\n            List[FlowRun]: a list of FlowRun objects read from the queue\n        \"\"\"\n        if scheduled_before is None:\n            scheduled_before = pendulum.now(\"UTC\")\n\n        try:\n            response = await self._client.post(\n                f\"/work_queues/{id}/get_runs\",\n                json={\n                    \"limit\": limit,\n                    \"scheduled_before\": scheduled_before.isoformat(),\n                },\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return pydantic.parse_obj_as(List[FlowRun], response.json())\n\n    async def read_work_queue(\n        self,\n        id: UUID,\n    ) -> WorkQueue:\n        \"\"\"\n        Read a work queue.\n\n        Args:\n            id: the id of the work queue to load\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If request fails\n\n        Returns:\n            WorkQueue: an instantiated WorkQueue object\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/work_queues/{id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return WorkQueue.parse_obj(response.json())\n\n    async def read_work_queue_status(\n        self,\n        id: UUID,\n    ) -> WorkQueueStatusDetail:\n        \"\"\"\n        Read a work queue status.\n\n        Args:\n            id: the id of the work queue to load\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If request fails\n\n        Returns:\n            WorkQueueStatus: an instantiated WorkQueueStatus object\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/work_queues/{id}/status\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            
else:\n                raise\n        return WorkQueueStatusDetail.parse_obj(response.json())\n\n    async def match_work_queues(\n        self,\n        prefixes: List[str],\n        work_pool_name: Optional[str] = None,\n    ) -> List[WorkQueue]:\n        \"\"\"\n        Query the Prefect API for work queues with names with a specific prefix.\n\n        Args:\n            prefixes: a list of strings used to match work queue name prefixes\n            work_pool_name: an optional work pool name to scope the query to\n\n        Returns:\n            a list of WorkQueue model representations\n                of the work queues\n        \"\"\"\n        page_length = 100\n        current_page = 0\n        work_queues = []\n\n        while True:\n            new_queues = await self.read_work_queues(\n                work_pool_name=work_pool_name,\n                offset=current_page * page_length,\n                limit=page_length,\n                work_queue_filter=WorkQueueFilter(\n                    name=WorkQueueFilterName(startswith_=prefixes)\n                ),\n            )\n            if not new_queues:\n                break\n            work_queues += new_queues\n            current_page += 1\n\n        return work_queues\n\n    async def delete_work_queue_by_id(\n        self,\n        id: UUID,\n    ):\n        \"\"\"\n        Delete a work queue by its ID.\n\n        Args:\n            id: the id of the work queue to delete\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If requests fails\n        \"\"\"\n        try:\n            await self._client.delete(\n                f\"/work_queues/{id}\",\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def create_block_type(self, block_type: BlockTypeCreate) -> BlockType:\n        \"\"\"\n        Create a block type in the Prefect API.\n        \"\"\"\n        try:\n            response = await self._client.post(\n                \"/block_types/\",\n                json=block_type.dict(\n                    json_compatible=True, exclude_unset=True, exclude={\"id\"}\n                ),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_409_CONFLICT:\n                raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n            else:\n                raise\n        return BlockType.parse_obj(response.json())\n\n    async def create_block_schema(self, block_schema: BlockSchemaCreate) -> BlockSchema:\n        \"\"\"\n        Create a block schema in the Prefect API.\n        \"\"\"\n        try:\n            response = await self._client.post(\n                \"/block_schemas/\",\n                json=block_schema.dict(\n                    json_compatible=True,\n                    exclude_unset=True,\n                    exclude={\"id\", \"block_type\", \"checksum\"},\n                ),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_409_CONFLICT:\n                raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n            else:\n                raise\n        return BlockSchema.parse_obj(response.json())\n\n    async def create_block_document(\n        self,\n        block_document: 
Union[BlockDocument, BlockDocumentCreate],\n        include_secrets: bool = True,\n    ) -> BlockDocument:\n        \"\"\"\n        Create a block document in the Prefect API. This data is used to configure a\n        corresponding Block.\n\n        Args:\n            include_secrets (bool): whether to include secret values\n                on the stored Block, corresponding to Pydantic's `SecretStr` and\n                `SecretBytes` fields. Note Blocks may not work as expected if\n                this is set to `False`.\n        \"\"\"\n        if isinstance(block_document, BlockDocument):\n            block_document = BlockDocumentCreate.parse_obj(\n                block_document.dict(\n                    json_compatible=True,\n                    include_secrets=include_secrets,\n                    exclude_unset=True,\n                    exclude={\"id\", \"block_schema\", \"block_type\"},\n                ),\n            )\n\n        try:\n            response = await self._client.post(\n                \"/block_documents/\",\n                json=block_document.dict(\n                    json_compatible=True,\n                    include_secrets=include_secrets,\n                    exclude_unset=True,\n                    exclude={\"id\", \"block_schema\", \"block_type\"},\n                ),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_409_CONFLICT:\n                raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n            else:\n                raise\n        return BlockDocument.parse_obj(response.json())\n\n    async def update_block_document(\n        self,\n        block_document_id: UUID,\n        block_document: BlockDocumentUpdate,\n    ):\n        \"\"\"\n        Update a block document in the Prefect API.\n        \"\"\"\n        try:\n            await self._client.patch(\n                f\"/block_documents/{block_document_id}\",\n                json=block_document.dict(\n                    json_compatible=True,\n                    exclude_unset=True,\n                    include={\"data\", \"merge_existing_data\", \"block_schema_id\"},\n                    include_secrets=True,\n                ),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def delete_block_document(self, block_document_id: UUID):\n        \"\"\"\n        Delete a block document.\n        \"\"\"\n        try:\n            await self._client.delete(f\"/block_documents/{block_document_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_block_type_by_slug(self, slug: str) -> BlockType:\n        \"\"\"\n        Read a block type by its slug.\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/block_types/slug/{slug}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return BlockType.parse_obj(response.json())\n\n    async def read_block_schema_by_checksum(\n        self, checksum: str, version: 
Optional[str] = None\n    ) -> BlockSchema:\n        \"\"\"\n        Look up a block schema checksum\n        \"\"\"\n        try:\n            url = f\"/block_schemas/checksum/{checksum}\"\n            if version is not None:\n                url = f\"{url}?version={version}\"\n            response = await self._client.get(url)\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return BlockSchema.parse_obj(response.json())\n\n    async def update_block_type(self, block_type_id: UUID, block_type: BlockTypeUpdate):\n        \"\"\"\n        Update a block document in the Prefect API.\n        \"\"\"\n        try:\n            await self._client.patch(\n                f\"/block_types/{block_type_id}\",\n                json=block_type.dict(\n                    json_compatible=True,\n                    exclude_unset=True,\n                    include=BlockTypeUpdate.updatable_fields(),\n                    include_secrets=True,\n                ),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def delete_block_type(self, block_type_id: UUID):\n        \"\"\"\n        Delete a block type.\n        \"\"\"\n        try:\n            await self._client.delete(f\"/block_types/{block_type_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            elif (\n                e.response.status_code == status.HTTP_403_FORBIDDEN\n                and e.response.json()[\"detail\"]\n                == \"protected block types cannot be deleted.\"\n            ):\n                raise prefect.exceptions.ProtectedBlockError(\n                    \"Protected block types cannot be deleted.\"\n                ) from e\n            else:\n                raise\n\n    async def read_block_types(self) -> List[BlockType]:\n        \"\"\"\n        Read all block types\n        Raises:\n            httpx.RequestError: if the block types were not found\n\n        Returns:\n            List of BlockTypes.\n        \"\"\"\n        response = await self._client.post(\"/block_types/filter\", json={})\n        return pydantic.parse_obj_as(List[BlockType], response.json())\n\n    async def read_block_schemas(self) -> List[BlockSchema]:\n        \"\"\"\n        Read all block schemas\n        Raises:\n            httpx.RequestError: if a valid block schema was not found\n\n        Returns:\n            A BlockSchema.\n        \"\"\"\n        response = await self._client.post(\"/block_schemas/filter\", json={})\n        return pydantic.parse_obj_as(List[BlockSchema], response.json())\n\n    async def get_most_recent_block_schema_for_block_type(\n        self,\n        block_type_id: UUID,\n    ) -> Optional[BlockSchema]:\n        \"\"\"\n        Fetches the most recent block schema for a specified block type ID.\n\n        Args:\n            block_type_id: The ID of the block type.\n\n        Raises:\n            httpx.RequestError: If the request fails for any reason.\n\n        Returns:\n            The most recent block schema or None.\n        \"\"\"\n        try:\n            response = await 
self._client.post(\n                \"/block_schemas/filter\",\n                json={\n                    \"block_schemas\": {\"block_type_id\": {\"any_\": [str(block_type_id)]}},\n                    \"limit\": 1,\n                },\n            )\n        except httpx.HTTPStatusError:\n            raise\n        return BlockSchema.parse_obj(response.json()[0]) if response.json() else None\n\n    async def read_block_document(\n        self,\n        block_document_id: UUID,\n        include_secrets: bool = True,\n    ):\n        \"\"\"\n        Read the block document with the specified ID.\n\n        Args:\n            block_document_id: the block document id\n            include_secrets (bool): whether to include secret values\n                on the Block, corresponding to Pydantic's `SecretStr` and\n                `SecretBytes` fields. These fields are automatically obfuscated\n                by Pydantic, but users can additionally choose not to receive\n                their values from the API. Note that any business logic on the\n                Block may not work if this is `False`.\n\n        Raises:\n            httpx.RequestError: if the block document was not found for any reason\n\n        Returns:\n            A block document or None.\n        \"\"\"\n        assert (\n            block_document_id is not None\n        ), \"Unexpected ID on block document. Was it persisted?\"\n        try:\n            response = await self._client.get(\n                f\"/block_documents/{block_document_id}\",\n                params=dict(include_secrets=include_secrets),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return BlockDocument.parse_obj(response.json())\n\n    async def read_block_document_by_name(\n        self,\n        name: str,\n        block_type_slug: str,\n        include_secrets: bool = True,\n    ) -> BlockDocument:\n        \"\"\"\n        Read the block document with the specified name that corresponds to a\n        specific block type name.\n\n        Args:\n            name: The block document name.\n            block_type_slug: The block type slug.\n            include_secrets (bool): whether to include secret values\n                on the Block, corresponding to Pydantic's `SecretStr` and\n                `SecretBytes` fields. These fields are automatically obfuscated\n                by Pydantic, but users can additionally choose not to receive\n                their values from the API. 
Note that any business logic on the\n                Block may not work if this is `False`.\n\n        Raises:\n            httpx.RequestError: if the block document was not found for any reason\n\n        Returns:\n            A block document or None.\n        \"\"\"\n        try:\n            response = await self._client.get(\n                f\"/block_types/slug/{block_type_slug}/block_documents/name/{name}\",\n                params=dict(include_secrets=include_secrets),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return BlockDocument.parse_obj(response.json())\n\n    async def read_block_documents(\n        self,\n        block_schema_type: Optional[str] = None,\n        offset: Optional[int] = None,\n        limit: Optional[int] = None,\n        include_secrets: bool = True,\n    ):\n        \"\"\"\n        Read block documents\n\n        Args:\n            block_schema_type: an optional block schema type\n            offset: an offset\n            limit: the number of blocks to return\n            include_secrets (bool): whether to include secret values\n                on the Block, corresponding to Pydantic's `SecretStr` and\n                `SecretBytes` fields. These fields are automatically obfuscated\n                by Pydantic, but users can additionally choose not to receive\n                their values from the API. Note that any business logic on the\n                Block may not work if this is `False`.\n\n        Returns:\n            A list of block documents\n        \"\"\"\n        response = await self._client.post(\n            \"/block_documents/filter\",\n            json=dict(\n                block_schema_type=block_schema_type,\n                offset=offset,\n                limit=limit,\n                include_secrets=include_secrets,\n            ),\n        )\n        return pydantic.parse_obj_as(List[BlockDocument], response.json())\n\n    async def read_block_documents_by_type(\n        self,\n        block_type_slug: str,\n        offset: Optional[int] = None,\n        limit: Optional[int] = None,\n        include_secrets: bool = True,\n    ) -> List[BlockDocument]:\n        \"\"\"Retrieve block documents by block type slug.\n\n        Args:\n            block_type_slug: The block type slug.\n            offset: an offset\n            limit: the number of blocks to return\n            include_secrets: whether to include secret values\n\n        Returns:\n            A list of block documents\n        \"\"\"\n        response = await self._client.get(\n            f\"/block_types/slug/{block_type_slug}/block_documents\",\n            params=dict(\n                offset=offset,\n                limit=limit,\n                include_secrets=include_secrets,\n            ),\n        )\n\n        return pydantic.parse_obj_as(List[BlockDocument], response.json())\n\n    async def create_deployment(\n        self,\n        flow_id: UUID,\n        name: str,\n        version: str = None,\n        schedule: SCHEDULE_TYPES = None,\n        schedules: List[DeploymentScheduleCreate] = None,\n        parameters: Optional[Dict[str, Any]] = None,\n        description: str = None,\n        work_queue_name: str = None,\n        work_pool_name: str = None,\n        tags: List[str] = None,\n        storage_document_id: UUID = None,\n        manifest_path: str 
= None,\n        path: str = None,\n        entrypoint: str = None,\n        infrastructure_document_id: UUID = None,\n        infra_overrides: Optional[Dict[str, Any]] = None,  # for backwards compat\n        parameter_openapi_schema: Optional[Dict[str, Any]] = None,\n        is_schedule_active: Optional[bool] = None,\n        paused: Optional[bool] = None,\n        pull_steps: Optional[List[dict]] = None,\n        enforce_parameter_schema: Optional[bool] = None,\n        job_variables: Optional[Dict[str, Any]] = None,\n    ) -> UUID:\n        \"\"\"\n        Create a deployment.\n\n        Args:\n            flow_id: the flow ID to create a deployment for\n            name: the name of the deployment\n            version: an optional version string for the deployment\n            schedule: an optional schedule to apply to the deployment\n            tags: an optional list of tags to apply to the deployment\n            storage_document_id: an reference to the storage block document\n                used for the deployed flow\n            infrastructure_document_id: an reference to the infrastructure block document\n                to use for this deployment\n            job_variables: A dictionary of dot delimited infrastructure overrides that\n                will be applied at runtime; for example `env.CONFIG_KEY=config_value` or\n                `namespace='prefect'`. This argument was previously named `infra_overrides`.\n                Both arguments are supported for backwards compatibility.\n\n        Raises:\n            httpx.RequestError: if the deployment was not created for any reason\n\n        Returns:\n            the ID of the deployment in the backend\n        \"\"\"\n        jv = handle_deprecated_infra_overrides_parameter(job_variables, infra_overrides)\n\n        deployment_create = DeploymentCreate(\n            flow_id=flow_id,\n            name=name,\n            version=version,\n            parameters=dict(parameters or {}),\n            tags=list(tags or []),\n            work_queue_name=work_queue_name,\n            description=description,\n            storage_document_id=storage_document_id,\n            path=path,\n            entrypoint=entrypoint,\n            manifest_path=manifest_path,  # for backwards compat\n            infrastructure_document_id=infrastructure_document_id,\n            job_variables=jv,\n            parameter_openapi_schema=parameter_openapi_schema,\n            is_schedule_active=is_schedule_active,\n            paused=paused,\n            schedule=schedule,\n            schedules=schedules or [],\n            pull_steps=pull_steps,\n            enforce_parameter_schema=enforce_parameter_schema,\n        )\n\n        if work_pool_name is not None:\n            deployment_create.work_pool_name = work_pool_name\n\n        # Exclude newer fields that are not set to avoid compatibility issues\n        exclude = {\n            field\n            for field in [\"work_pool_name\", \"work_queue_name\"]\n            if field not in deployment_create.__fields_set__\n        }\n\n        if deployment_create.is_schedule_active is None:\n            exclude.add(\"is_schedule_active\")\n\n        if deployment_create.paused is None:\n            exclude.add(\"paused\")\n\n        if deployment_create.pull_steps is None:\n            exclude.add(\"pull_steps\")\n\n        if deployment_create.enforce_parameter_schema is None:\n            exclude.add(\"enforce_parameter_schema\")\n\n        json = deployment_create.dict(json_compatible=True, 
exclude=exclude)\n        response = await self._client.post(\n            \"/deployments/\",\n            json=json,\n        )\n        deployment_id = response.json().get(\"id\")\n        if not deployment_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        return UUID(deployment_id)\n\n    async def update_schedule(self, deployment_id: UUID, active: bool = True):\n        path = \"set_schedule_active\" if active else \"set_schedule_inactive\"\n        await self._client.post(\n            f\"/deployments/{deployment_id}/{path}\",\n        )\n\n    async def set_deployment_paused_state(self, deployment_id: UUID, paused: bool):\n        await self._client.patch(\n            f\"/deployments/{deployment_id}\", json={\"paused\": paused}\n        )\n\n    async def update_deployment(\n        self,\n        deployment: Deployment,\n        schedule: SCHEDULE_TYPES = None,\n        is_schedule_active: bool = None,\n    ):\n        deployment_update = DeploymentUpdate(\n            version=deployment.version,\n            schedule=schedule if schedule is not None else deployment.schedule,\n            is_schedule_active=(\n                is_schedule_active\n                if is_schedule_active is not None\n                else deployment.is_schedule_active\n            ),\n            description=deployment.description,\n            work_queue_name=deployment.work_queue_name,\n            tags=deployment.tags,\n            manifest_path=deployment.manifest_path,\n            path=deployment.path,\n            entrypoint=deployment.entrypoint,\n            parameters=deployment.parameters,\n            storage_document_id=deployment.storage_document_id,\n            infrastructure_document_id=deployment.infrastructure_document_id,\n            job_variables=deployment.job_variables,\n            enforce_parameter_schema=deployment.enforce_parameter_schema,\n        )\n\n        if getattr(deployment, \"work_pool_name\", None) is not None:\n            deployment_update.work_pool_name = deployment.work_pool_name\n\n        exclude = set()\n        if deployment.enforce_parameter_schema is None:\n            exclude.add(\"enforce_parameter_schema\")\n\n        await self._client.patch(\n            f\"/deployments/{deployment.id}\",\n            json=deployment_update.dict(json_compatible=True, exclude=exclude),\n        )\n\n    async def _create_deployment_from_schema(self, schema: DeploymentCreate) -> UUID:\n        \"\"\"\n        Create a deployment from a prepared `DeploymentCreate` schema.\n        \"\"\"\n        # TODO: We are likely to remove this method once we have considered the\n        #       packaging interface for deployments further.\n        response = await self._client.post(\n            \"/deployments/\", json=schema.dict(json_compatible=True)\n        )\n        deployment_id = response.json().get(\"id\")\n        if not deployment_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        return UUID(deployment_id)\n\n    async def read_deployment(\n        self,\n        deployment_id: UUID,\n    ) -> DeploymentResponse:\n        \"\"\"\n        Query the Prefect API for a deployment by id.\n\n        Args:\n            deployment_id: the deployment ID of interest\n\n        Returns:\n            a [Deployment model][prefect.client.schemas.objects.Deployment] representation of the deployment\n        \"\"\"\n        try:\n            response = await 
self._client.get(f\"/deployments/{deployment_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return DeploymentResponse.parse_obj(response.json())\n\n    async def read_deployment_by_name(\n        self,\n        name: str,\n    ) -> DeploymentResponse:\n        \"\"\"\n        Query the Prefect API for a deployment by name.\n\n        Args:\n            name: A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If request fails\n\n        Returns:\n            a Deployment model representation of the deployment\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/deployments/name/{name}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n        return DeploymentResponse.parse_obj(response.json())\n\n    async def read_deployments(\n        self,\n        *,\n        flow_filter: FlowFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        deployment_filter: DeploymentFilter = None,\n        work_pool_filter: WorkPoolFilter = None,\n        work_queue_filter: WorkQueueFilter = None,\n        limit: int = None,\n        sort: DeploymentSort = None,\n        offset: int = 0,\n    ) -> List[DeploymentResponse]:\n        \"\"\"\n        Query the Prefect API for deployments. 
Only deployments matching all\n        the provided criteria will be returned.\n\n        Args:\n            flow_filter: filter criteria for flows\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            deployment_filter: filter criteria for deployments\n            work_pool_filter: filter criteria for work pools\n            work_queue_filter: filter criteria for work pool queues\n            limit: a limit for the deployment query\n            offset: an offset for the deployment query\n\n        Returns:\n            a list of Deployment model representations\n                of the deployments\n        \"\"\"\n        body = {\n            \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n                if flow_run_filter\n                else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"deployments\": (\n                deployment_filter.dict(json_compatible=True)\n                if deployment_filter\n                else None\n            ),\n            \"work_pools\": (\n                work_pool_filter.dict(json_compatible=True)\n                if work_pool_filter\n                else None\n            ),\n            \"work_pool_queues\": (\n                work_queue_filter.dict(json_compatible=True)\n                if work_queue_filter\n                else None\n            ),\n            \"limit\": limit,\n            \"offset\": offset,\n            \"sort\": sort,\n        }\n\n        response = await self._client.post(\"/deployments/filter\", json=body)\n        return pydantic.parse_obj_as(List[DeploymentResponse], response.json())\n\n    async def delete_deployment(\n        self,\n        deployment_id: UUID,\n    ):\n        \"\"\"\n        Delete deployment by id.\n\n        Args:\n            deployment_id: The deployment id of interest.\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If requests fails\n        \"\"\"\n        try:\n            await self._client.delete(f\"/deployments/{deployment_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def create_deployment_schedules(\n        self,\n        deployment_id: UUID,\n        schedules: List[Tuple[SCHEDULE_TYPES, bool]],\n    ) -> List[DeploymentSchedule]:\n        \"\"\"\n        Create deployment schedules.\n\n        Args:\n            deployment_id: the deployment ID\n            schedules: a list of tuples containing the schedule to create\n                       and whether or not it should be active.\n\n        Raises:\n            httpx.RequestError: if the schedules were not created for any reason\n\n        Returns:\n            the list of schedules created in the backend\n        \"\"\"\n        deployment_schedule_create = [\n            DeploymentScheduleCreate(schedule=schedule[0], active=schedule[1])\n            for schedule in schedules\n        ]\n\n        json = [\n            deployment_schedule_create.dict(json_compatible=True)\n            for deployment_schedule_create in 
deployment_schedule_create\n        ]\n        response = await self._client.post(\n            f\"/deployments/{deployment_id}/schedules\", json=json\n        )\n        return pydantic.parse_obj_as(List[DeploymentSchedule], response.json())\n\n    async def read_deployment_schedules(\n        self,\n        deployment_id: UUID,\n    ) -> List[DeploymentSchedule]:\n        \"\"\"\n        Query the Prefect API for a deployment's schedules.\n\n        Args:\n            deployment_id: the deployment ID\n\n        Returns:\n            a list of DeploymentSchedule model representations of the deployment schedules\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/deployments/{deployment_id}/schedules\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return pydantic.parse_obj_as(List[DeploymentSchedule], response.json())\n\n    async def update_deployment_schedule(\n        self,\n        deployment_id: UUID,\n        schedule_id: UUID,\n        active: Optional[bool] = None,\n        schedule: Optional[SCHEDULE_TYPES] = None,\n    ):\n        \"\"\"\n        Update a deployment schedule by ID.\n\n        Args:\n            deployment_id: the deployment ID\n            schedule_id: the deployment schedule ID of interest\n            active: whether or not the schedule should be active\n            schedule: the cron, rrule, or interval schedule this deployment schedule should use\n        \"\"\"\n        kwargs = {}\n        if active is not None:\n            kwargs[\"active\"] = active\n        elif schedule is not None:\n            kwargs[\"schedule\"] = schedule\n\n        deployment_schedule_update = DeploymentScheduleUpdate(**kwargs)\n        json = deployment_schedule_update.dict(json_compatible=True, exclude_unset=True)\n\n        try:\n            await self._client.patch(\n                f\"/deployments/{deployment_id}/schedules/{schedule_id}\", json=json\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def delete_deployment_schedule(\n        self,\n        deployment_id: UUID,\n        schedule_id: UUID,\n    ) -> None:\n        \"\"\"\n        Delete a deployment schedule.\n\n        Args:\n            deployment_id: the deployment ID\n            schedule_id: the ID of the deployment schedule to delete.\n\n        Raises:\n            httpx.RequestError: if the schedules were not deleted for any reason\n        \"\"\"\n        try:\n            await self._client.delete(\n                f\"/deployments/{deployment_id}/schedules/{schedule_id}\"\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_flow_run(self, flow_run_id: UUID) -> FlowRun:\n        \"\"\"\n        Query the Prefect API for a flow run by id.\n\n        Args:\n            flow_run_id: the flow run ID of interest\n\n        Returns:\n            a Flow Run model representation of the flow run\n        \"\"\"\n        try:\n            response = await 
self._client.get(f\"/flow_runs/{flow_run_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return FlowRun.parse_obj(response.json())\n\n    async def resume_flow_run(\n        self, flow_run_id: UUID, run_input: Optional[Dict] = None\n    ) -> OrchestrationResult:\n        \"\"\"\n        Resumes a paused flow run.\n\n        Args:\n            flow_run_id: the flow run ID of interest\n            run_input: the input to resume the flow run with\n\n        Returns:\n            an OrchestrationResult model representation of state orchestration output\n        \"\"\"\n        try:\n            response = await self._client.post(\n                f\"/flow_runs/{flow_run_id}/resume\", json={\"run_input\": run_input}\n            )\n        except httpx.HTTPStatusError:\n            raise\n\n        return OrchestrationResult.parse_obj(response.json())\n\n    async def read_flow_runs(\n        self,\n        *,\n        flow_filter: FlowFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        deployment_filter: DeploymentFilter = None,\n        work_pool_filter: WorkPoolFilter = None,\n        work_queue_filter: WorkQueueFilter = None,\n        sort: FlowRunSort = None,\n        limit: int = None,\n        offset: int = 0,\n    ) -> List[FlowRun]:\n        \"\"\"\n        Query the Prefect API for flow runs. Only flow runs matching all criteria will\n        be returned.\n\n        Args:\n            flow_filter: filter criteria for flows\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            deployment_filter: filter criteria for deployments\n            work_pool_filter: filter criteria for work pools\n            work_queue_filter: filter criteria for work pool queues\n            sort: sort criteria for the flow runs\n            limit: limit for the flow run query\n            offset: offset for the flow run query\n\n        Returns:\n            a list of Flow Run model representations\n                of the flow runs\n        \"\"\"\n        body = {\n            \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n                if flow_run_filter\n                else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"deployments\": (\n                deployment_filter.dict(json_compatible=True)\n                if deployment_filter\n                else None\n            ),\n            \"work_pools\": (\n                work_pool_filter.dict(json_compatible=True)\n                if work_pool_filter\n                else None\n            ),\n            \"work_pool_queues\": (\n                work_queue_filter.dict(json_compatible=True)\n                if work_queue_filter\n                else None\n            ),\n            \"sort\": sort,\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n\n        response = await self._client.post(\"/flow_runs/filter\", json=body)\n        return pydantic.parse_obj_as(List[FlowRun], response.json())\n\n    async def set_flow_run_state(\n        self,\n   
     flow_run_id: UUID,\n        state: \"prefect.states.State\",\n        force: bool = False,\n    ) -> OrchestrationResult:\n        \"\"\"\n        Set the state of a flow run.\n\n        Args:\n            flow_run_id: the id of the flow run\n            state: the state to set\n            force: if True, disregard orchestration logic when setting the state,\n                forcing the Prefect API to accept the state\n\n        Returns:\n            an OrchestrationResult model representation of state orchestration output\n        \"\"\"\n        state_create = state.to_state_create()\n        state_create.state_details.flow_run_id = flow_run_id\n        state_create.state_details.transition_id = uuid4()\n        try:\n            response = await self._client.post(\n                f\"/flow_runs/{flow_run_id}/set_state\",\n                json=dict(state=state_create.dict(json_compatible=True), force=force),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n        return OrchestrationResult.parse_obj(response.json())\n\n    async def read_flow_run_states(\n        self, flow_run_id: UUID\n    ) -> List[prefect.states.State]:\n        \"\"\"\n        Query for the states of a flow run\n\n        Args:\n            flow_run_id: the id of the flow run\n\n        Returns:\n            a list of State model representations\n                of the flow run states\n        \"\"\"\n        response = await self._client.get(\n            \"/flow_run_states/\", params=dict(flow_run_id=str(flow_run_id))\n        )\n        return pydantic.parse_obj_as(List[prefect.states.State], response.json())\n\n    async def set_task_run_name(self, task_run_id: UUID, name: str):\n        task_run_data = TaskRunUpdate(name=name)\n        return await self._client.patch(\n            f\"/task_runs/{task_run_id}\",\n            json=task_run_data.dict(json_compatible=True, exclude_unset=True),\n        )\n\n    async def create_task_run(\n        self,\n        task: \"TaskObject[P, R]\",\n        flow_run_id: Optional[UUID],\n        dynamic_key: str,\n        name: Optional[str] = None,\n        extra_tags: Optional[Iterable[str]] = None,\n        state: Optional[prefect.states.State[R]] = None,\n        task_inputs: Optional[\n            Dict[\n                str,\n                List[\n                    Union[\n                        TaskRunResult,\n                        Parameter,\n                        Constant,\n                    ]\n                ],\n            ]\n        ] = None,\n    ) -> TaskRun:\n        \"\"\"\n        Create a task run\n\n        Args:\n            task: The Task to run\n            flow_run_id: The flow run id with which to associate the task run\n            dynamic_key: A key unique to this particular run of a Task within the flow\n            name: An optional name for the task run\n            extra_tags: an optional list of extra tags to apply to the task run in\n                addition to `task.tags`\n            state: The initial state for the run. If not provided, defaults to\n                `Pending` for now. 
Should always be a `Scheduled` type.\n            task_inputs: the set of inputs passed to the task\n\n        Returns:\n            The created task run.\n        \"\"\"\n        tags = set(task.tags).union(extra_tags or [])\n\n        if state is None:\n            state = prefect.states.Pending()\n\n        task_run_data = TaskRunCreate(\n            name=name,\n            flow_run_id=flow_run_id,\n            task_key=task.task_key,\n            dynamic_key=dynamic_key,\n            tags=list(tags),\n            task_version=task.version,\n            empirical_policy=TaskRunPolicy(\n                retries=task.retries,\n                retry_delay=task.retry_delay_seconds,\n                retry_jitter_factor=task.retry_jitter_factor,\n            ),\n            state=state.to_state_create(),\n            task_inputs=task_inputs or {},\n        )\n\n        response = await self._client.post(\n            \"/task_runs/\", json=task_run_data.dict(json_compatible=True)\n        )\n        return TaskRun.parse_obj(response.json())\n\n    async def read_task_run(self, task_run_id: UUID) -> TaskRun:\n        \"\"\"\n        Query the Prefect API for a task run by id.\n\n        Args:\n            task_run_id: the task run ID of interest\n\n        Returns:\n            a Task Run model representation of the task run\n        \"\"\"\n        response = await self._client.get(f\"/task_runs/{task_run_id}\")\n        return TaskRun.parse_obj(response.json())\n\n    async def read_task_runs(\n        self,\n        *,\n        flow_filter: FlowFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        deployment_filter: DeploymentFilter = None,\n        sort: TaskRunSort = None,\n        limit: int = None,\n        offset: int = 0,\n    ) -> List[TaskRun]:\n        \"\"\"\n        Query the Prefect API for task runs. 
Only task runs matching all criteria will\n        be returned.\n\n        Args:\n            flow_filter: filter criteria for flows\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            deployment_filter: filter criteria for deployments\n            sort: sort criteria for the task runs\n            limit: a limit for the task run query\n            offset: an offset for the task run query\n\n        Returns:\n            a list of Task Run model representations\n                of the task runs\n        \"\"\"\n        body = {\n            \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n                if flow_run_filter\n                else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"deployments\": (\n                deployment_filter.dict(json_compatible=True)\n                if deployment_filter\n                else None\n            ),\n            \"sort\": sort,\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n        response = await self._client.post(\"/task_runs/filter\", json=body)\n        return pydantic.parse_obj_as(List[TaskRun], response.json())\n\n    async def delete_task_run(self, task_run_id: UUID) -> None:\n        \"\"\"\n        Delete a task run by id.\n\n        Args:\n            task_run_id: the task run ID of interest\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If requests fails\n        \"\"\"\n        try:\n            await self._client.delete(f\"/task_runs/{task_run_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def set_task_run_state(\n        self,\n        task_run_id: UUID,\n        state: prefect.states.State,\n        force: bool = False,\n    ) -> OrchestrationResult:\n        \"\"\"\n        Set the state of a task run.\n\n        Args:\n            task_run_id: the id of the task run\n            state: the state to set\n            force: if True, disregard orchestration logic when setting the state,\n                forcing the Prefect API to accept the state\n\n        Returns:\n            an OrchestrationResult model representation of state orchestration output\n        \"\"\"\n        state_create = state.to_state_create()\n        state_create.state_details.task_run_id = task_run_id\n        response = await self._client.post(\n            f\"/task_runs/{task_run_id}/set_state\",\n            json=dict(state=state_create.dict(json_compatible=True), force=force),\n        )\n        return OrchestrationResult.parse_obj(response.json())\n\n    async def read_task_run_states(\n        self, task_run_id: UUID\n    ) -> List[prefect.states.State]:\n        \"\"\"\n        Query for the states of a task run\n\n        Args:\n            task_run_id: the id of the task run\n\n        Returns:\n            a list of State model representations of the task run states\n        \"\"\"\n        response = await self._client.get(\n            \"/task_run_states/\", params=dict(task_run_id=str(task_run_id))\n        )\n        
return pydantic.parse_obj_as(List[prefect.states.State], response.json())\n\n    async def create_logs(self, logs: Iterable[Union[LogCreate, dict]]) -> None:\n        \"\"\"\n        Create logs for a flow or task run\n\n        Args:\n            logs: An iterable of `LogCreate` objects or already json-compatible dicts\n        \"\"\"\n        serialized_logs = [\n            log.dict(json_compatible=True) if isinstance(log, LogCreate) else log\n            for log in logs\n        ]\n        await self._client.post(\"/logs/\", json=serialized_logs)\n\n    async def create_flow_run_notification_policy(\n        self,\n        block_document_id: UUID,\n        is_active: bool = True,\n        tags: List[str] = None,\n        state_names: List[str] = None,\n        message_template: Optional[str] = None,\n    ) -> UUID:\n        \"\"\"\n        Create a notification policy for flow runs\n\n        Args:\n            block_document_id: The block document UUID\n            is_active: Whether the notification policy is active\n            tags: List of flow tags\n            state_names: List of state names\n            message_template: Notification message template\n        \"\"\"\n        if tags is None:\n            tags = []\n        if state_names is None:\n            state_names = []\n\n        policy = FlowRunNotificationPolicyCreate(\n            block_document_id=block_document_id,\n            is_active=is_active,\n            tags=tags,\n            state_names=state_names,\n            message_template=message_template,\n        )\n        response = await self._client.post(\n            \"/flow_run_notification_policies/\",\n            json=policy.dict(json_compatible=True),\n        )\n\n        policy_id = response.json().get(\"id\")\n        if not policy_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        return UUID(policy_id)\n\n    async def delete_flow_run_notification_policy(\n        self,\n        id: UUID,\n    ) -> None:\n        \"\"\"\n        Delete a flow run notification policy by id.\n\n        Args:\n            id: UUID of the flow run notification policy to delete.\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If requests fails\n        \"\"\"\n        try:\n            await self._client.delete(f\"/flow_run_notification_policies/{id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def update_flow_run_notification_policy(\n        self,\n        id: UUID,\n        block_document_id: Optional[UUID] = None,\n        is_active: Optional[bool] = None,\n        tags: Optional[List[str]] = None,\n        state_names: Optional[List[str]] = None,\n        message_template: Optional[str] = None,\n    ) -> None:\n        \"\"\"\n        Update a notification policy for flow runs\n\n        Args:\n            id: UUID of the notification policy\n            block_document_id: The block document UUID\n            is_active: Whether the notification policy is active\n            tags: List of flow tags\n            state_names: List of state names\n            message_template: Notification message template\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If requests fails\n        \"\"\"\n  
      params = {}\n        if block_document_id is not None:\n            params[\"block_document_id\"] = block_document_id\n        if is_active is not None:\n            params[\"is_active\"] = is_active\n        if tags is not None:\n            params[\"tags\"] = tags\n        if state_names is not None:\n            params[\"state_names\"] = state_names\n        if message_template is not None:\n            params[\"message_template\"] = message_template\n\n        policy = FlowRunNotificationPolicyUpdate(**params)\n\n        try:\n            await self._client.patch(\n                f\"/flow_run_notification_policies/{id}\",\n                json=policy.dict(json_compatible=True, exclude_unset=True),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_flow_run_notification_policies(\n        self,\n        flow_run_notification_policy_filter: FlowRunNotificationPolicyFilter,\n        limit: Optional[int] = None,\n        offset: int = 0,\n    ) -> List[FlowRunNotificationPolicy]:\n        \"\"\"\n        Query the Prefect API for flow run notification policies. Only policies matching all criteria will\n        be returned.\n\n        Args:\n            flow_run_notification_policy_filter: filter criteria for notification policies\n            limit: a limit for the notification policies query\n            offset: an offset for the notification policies query\n\n        Returns:\n            a list of FlowRunNotificationPolicy model representations\n                of the notification policies\n        \"\"\"\n        body = {\n            \"flow_run_notification_policy_filter\": (\n                flow_run_notification_policy_filter.dict(json_compatible=True)\n                if flow_run_notification_policy_filter\n                else None\n            ),\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n        response = await self._client.post(\n            \"/flow_run_notification_policies/filter\", json=body\n        )\n        return pydantic.parse_obj_as(List[FlowRunNotificationPolicy], response.json())\n\n    async def read_logs(\n        self,\n        log_filter: LogFilter = None,\n        limit: int = None,\n        offset: int = None,\n        sort: LogSort = LogSort.TIMESTAMP_ASC,\n    ) -> List[Log]:\n        \"\"\"\n        Read flow and task run logs.\n        \"\"\"\n        body = {\n            \"logs\": log_filter.dict(json_compatible=True) if log_filter else None,\n            \"limit\": limit,\n            \"offset\": offset,\n            \"sort\": sort,\n        }\n\n        response = await self._client.post(\"/logs/filter\", json=body)\n        return pydantic.parse_obj_as(List[Log], response.json())\n\n    async def resolve_datadoc(self, datadoc: DataDocument) -> Any:\n        \"\"\"\n        Recursively decode possibly nested data documents.\n\n        \"server\" encoded documents will be retrieved from the server.\n\n        Args:\n            datadoc: The data document to resolve\n\n        Returns:\n            a decoded object, the innermost data\n        \"\"\"\n        if not isinstance(datadoc, DataDocument):\n            raise TypeError(\n                f\"`resolve_datadoc` received invalid type {type(datadoc).__name__}\"\n            )\n\n        async def resolve_inner(data):\n            if 
isinstance(data, bytes):\n                try:\n                    data = DataDocument.parse_raw(data)\n                except pydantic.ValidationError:\n                    return data\n\n            if isinstance(data, DataDocument):\n                return await resolve_inner(data.decode())\n\n            return data\n\n        return await resolve_inner(datadoc)\n\n    async def send_worker_heartbeat(\n        self,\n        work_pool_name: str,\n        worker_name: str,\n        heartbeat_interval_seconds: Optional[float] = None,\n    ):\n        \"\"\"\n        Sends a worker heartbeat for a given work pool.\n\n        Args:\n            work_pool_name: The name of the work pool to heartbeat against.\n            worker_name: The name of the worker sending the heartbeat.\n        \"\"\"\n        await self._client.post(\n            f\"/work_pools/{work_pool_name}/workers/heartbeat\",\n            json={\n                \"name\": worker_name,\n                \"heartbeat_interval_seconds\": heartbeat_interval_seconds,\n            },\n        )\n\n    async def read_workers_for_work_pool(\n        self,\n        work_pool_name: str,\n        worker_filter: Optional[WorkerFilter] = None,\n        offset: Optional[int] = None,\n        limit: Optional[int] = None,\n    ) -> List[Worker]:\n        \"\"\"\n        Reads workers for a given work pool.\n\n        Args:\n            work_pool_name: The name of the work pool for which to get\n                member workers.\n            worker_filter: Criteria by which to filter workers.\n            limit: Limit for the worker query.\n            offset: Limit for the worker query.\n        \"\"\"\n        response = await self._client.post(\n            f\"/work_pools/{work_pool_name}/workers/filter\",\n            json={\n                \"worker_filter\": (\n                    worker_filter.dict(json_compatible=True, exclude_unset=True)\n                    if worker_filter\n                    else None\n                ),\n                \"offset\": offset,\n                \"limit\": limit,\n            },\n        )\n\n        return pydantic.parse_obj_as(List[Worker], response.json())\n\n    async def read_work_pool(self, work_pool_name: str) -> WorkPool:\n        \"\"\"\n        Reads information for a given work pool\n\n        Args:\n            work_pool_name: The name of the work pool to for which to get\n                information.\n\n        Returns:\n            Information about the requested work pool.\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/work_pools/{work_pool_name}\")\n            return pydantic.parse_obj_as(WorkPool, response.json())\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_work_pools(\n        self,\n        limit: Optional[int] = None,\n        offset: int = 0,\n        work_pool_filter: Optional[WorkPoolFilter] = None,\n    ) -> List[WorkPool]:\n        \"\"\"\n        Reads work pools.\n\n        Args:\n            limit: Limit for the work pool query.\n            offset: Offset for the work pool query.\n            work_pool_filter: Criteria by which to filter work pools.\n\n        Returns:\n            A list of work pools.\n        \"\"\"\n\n        body = {\n            \"limit\": limit,\n            \"offset\": offset,\n            \"work_pools\": (\n  
              work_pool_filter.dict(json_compatible=True)\n                if work_pool_filter\n                else None\n            ),\n        }\n        response = await self._client.post(\"/work_pools/filter\", json=body)\n        return pydantic.parse_obj_as(List[WorkPool], response.json())\n\n    async def create_work_pool(\n        self,\n        work_pool: WorkPoolCreate,\n    ) -> WorkPool:\n        \"\"\"\n        Creates a work pool with the provided configuration.\n\n        Args:\n            work_pool: Desired configuration for the new work pool.\n\n        Returns:\n            Information about the newly created work pool.\n        \"\"\"\n        try:\n            response = await self._client.post(\n                \"/work_pools/\",\n                json=work_pool.dict(json_compatible=True, exclude_unset=True),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_409_CONFLICT:\n                raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n            else:\n                raise\n\n        return pydantic.parse_obj_as(WorkPool, response.json())\n\n    async def update_work_pool(\n        self,\n        work_pool_name: str,\n        work_pool: WorkPoolUpdate,\n    ):\n        \"\"\"\n        Updates a work pool.\n\n        Args:\n            work_pool_name: Name of the work pool to update.\n            work_pool: Fields to update in the work pool.\n        \"\"\"\n        try:\n            await self._client.patch(\n                f\"/work_pools/{work_pool_name}\",\n                json=work_pool.dict(json_compatible=True, exclude_unset=True),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def delete_work_pool(\n        self,\n        work_pool_name: str,\n    ):\n        \"\"\"\n        Deletes a work pool.\n\n        Args:\n            work_pool_name: Name of the work pool to delete.\n        \"\"\"\n        try:\n            await self._client.delete(f\"/work_pools/{work_pool_name}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_work_queues(\n        self,\n        work_pool_name: Optional[str] = None,\n        work_queue_filter: Optional[WorkQueueFilter] = None,\n        limit: Optional[int] = None,\n        offset: Optional[int] = None,\n    ) -> List[WorkQueue]:\n        \"\"\"\n        Retrieves queues for a work pool.\n\n        Args:\n            work_pool_name: Name of the work pool for which to get queues.\n            work_queue_filter: Criteria by which to filter queues.\n            limit: Limit for the queue query.\n            offset: Limit for the queue query.\n\n        Returns:\n            List of queues for the specified work pool.\n        \"\"\"\n        json = {\n            \"work_queues\": (\n                work_queue_filter.dict(json_compatible=True, exclude_unset=True)\n                if work_queue_filter\n                else None\n            ),\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n\n        if work_pool_name:\n            try:\n                response = await self._client.post(\n                    
f\"/work_pools/{work_pool_name}/queues/filter\",\n                    json=json,\n                )\n            except httpx.HTTPStatusError as e:\n                if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                    raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n                else:\n                    raise\n        else:\n            response = await self._client.post(\"/work_queues/filter\", json=json)\n\n        return pydantic.parse_obj_as(List[WorkQueue], response.json())\n\n    async def get_scheduled_flow_runs_for_deployments(\n        self,\n        deployment_ids: List[UUID],\n        scheduled_before: Optional[datetime.datetime] = None,\n        limit: Optional[int] = None,\n    ):\n        body: Dict[str, Any] = dict(deployment_ids=[str(id) for id in deployment_ids])\n        if scheduled_before:\n            body[\"scheduled_before\"] = str(scheduled_before)\n        if limit:\n            body[\"limit\"] = limit\n\n        response = await self._client.post(\n            \"/deployments/get_scheduled_flow_runs\",\n            json=body,\n        )\n\n        return pydantic.parse_obj_as(List[FlowRunResponse], response.json())\n\n    async def get_scheduled_flow_runs_for_work_pool(\n        self,\n        work_pool_name: str,\n        work_queue_names: Optional[List[str]] = None,\n        scheduled_before: Optional[datetime.datetime] = None,\n    ) -> List[WorkerFlowRunResponse]:\n        \"\"\"\n        Retrieves scheduled flow runs for the provided set of work pool queues.\n\n        Args:\n            work_pool_name: The name of the work pool that the work pool\n                queues are associated with.\n            work_queue_names: The names of the work pool queues from which\n                to get scheduled flow runs.\n            scheduled_before: Datetime used to filter returned flow runs. 
Flow runs\n                scheduled for after the given datetime string will not be returned.\n\n        Returns:\n            A list of worker flow run responses containing information about the\n            retrieved flow runs.\n        \"\"\"\n        body: Dict[str, Any] = {}\n        if work_queue_names is not None:\n            body[\"work_queue_names\"] = list(work_queue_names)\n        if scheduled_before:\n            body[\"scheduled_before\"] = str(scheduled_before)\n\n        response = await self._client.post(\n            f\"/work_pools/{work_pool_name}/get_scheduled_flow_runs\",\n            json=body,\n        )\n        return pydantic.parse_obj_as(List[WorkerFlowRunResponse], response.json())\n\n    async def create_artifact(\n        self,\n        artifact: ArtifactCreate,\n    ) -> Artifact:\n        \"\"\"\n        Creates an artifact with the provided configuration.\n\n        Args:\n            artifact: Desired configuration for the new artifact.\n        Returns:\n            Information about the newly created artifact.\n        \"\"\"\n\n        response = await self._client.post(\n            \"/artifacts/\",\n            json=artifact.dict(json_compatible=True, exclude_unset=True),\n        )\n\n        return pydantic.parse_obj_as(Artifact, response.json())\n\n    async def read_artifacts(\n        self,\n        *,\n        artifact_filter: ArtifactFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        sort: ArtifactSort = None,\n        limit: int = None,\n        offset: int = 0,\n    ) -> List[Artifact]:\n        \"\"\"\n        Query the Prefect API for artifacts. Only artifacts matching all criteria will\n        be returned.\n        Args:\n            artifact_filter: filter criteria for artifacts\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            sort: sort criteria for the artifacts\n            limit: limit for the artifact query\n            offset: offset for the artifact query\n        Returns:\n            a list of Artifact model representations of the artifacts\n        \"\"\"\n        body = {\n            \"artifacts\": (\n                artifact_filter.dict(json_compatible=True) if artifact_filter else None\n            ),\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"sort\": sort,\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n        response = await self._client.post(\"/artifacts/filter\", json=body)\n        return pydantic.parse_obj_as(List[Artifact], response.json())\n\n    async def read_latest_artifacts(\n        self,\n        *,\n        artifact_filter: ArtifactCollectionFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        sort: ArtifactCollectionSort = None,\n        limit: int = None,\n        offset: int = 0,\n    ) -> List[ArtifactCollection]:\n        \"\"\"\n        Query the Prefect API for artifacts. 
Only artifacts matching all criteria will\n        be returned.\n        Args:\n            artifact_filter: filter criteria for artifacts\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            sort: sort criteria for the artifacts\n            limit: limit for the artifact query\n            offset: offset for the artifact query\n        Returns:\n            a list of Artifact model representations of the artifacts\n        \"\"\"\n        body = {\n            \"artifacts\": (\n                artifact_filter.dict(json_compatible=True) if artifact_filter else None\n            ),\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"sort\": sort,\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n        response = await self._client.post(\"/artifacts/latest/filter\", json=body)\n        return pydantic.parse_obj_as(List[ArtifactCollection], response.json())\n\n    async def delete_artifact(self, artifact_id: UUID) -> None:\n        \"\"\"\n        Deletes an artifact with the provided id.\n\n        Args:\n            artifact_id: The id of the artifact to delete.\n        \"\"\"\n        try:\n            await self._client.delete(f\"/artifacts/{artifact_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def create_variable(self, variable: VariableCreate) -> Variable:\n        \"\"\"\n        Creates a variable with the provided configuration.\n\n        Args:\n            variable: Desired configuration for the new variable.\n        Returns:\n            Information about the newly created variable.\n        \"\"\"\n        response = await self._client.post(\n            \"/variables/\",\n            json=variable.dict(json_compatible=True, exclude_unset=True),\n        )\n        return Variable(**response.json())\n\n    async def update_variable(self, variable: VariableUpdate) -> None:\n        \"\"\"\n        Updates a variable with the provided configuration.\n\n        Args:\n            variable: Desired configuration for the updated variable.\n        Returns:\n            Information about the updated variable.\n        \"\"\"\n        await self._client.patch(\n            f\"/variables/name/{variable.name}\",\n            json=variable.dict(json_compatible=True, exclude_unset=True),\n        )\n\n    async def read_variable_by_name(self, name: str) -> Optional[Variable]:\n        \"\"\"Reads a variable by name. 
Returns None if no variable is found.\"\"\"\n        try:\n            response = await self._client.get(f\"/variables/name/{name}\")\n            return Variable(**response.json())\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                return None\n            else:\n                raise\n\n    async def delete_variable_by_name(self, name: str):\n        \"\"\"Deletes a variable by name.\"\"\"\n        try:\n            await self._client.delete(f\"/variables/name/{name}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_variables(self, limit: int = None) -> List[Variable]:\n        \"\"\"Reads all variables.\"\"\"\n        response = await self._client.post(\"/variables/filter\", json={\"limit\": limit})\n        return pydantic.parse_obj_as(List[Variable], response.json())\n\n    async def read_worker_metadata(self) -> Dict[str, Any]:\n        \"\"\"Reads worker metadata stored in Prefect collection registry.\"\"\"\n        response = await self._client.get(\"collections/views/aggregate-worker-metadata\")\n        response.raise_for_status()\n        return response.json()\n\n    async def increment_concurrency_slots(\n        self, names: List[str], slots: int, mode: str\n    ) -> httpx.Response:\n        return await self._client.post(\n            \"/v2/concurrency_limits/increment\",\n            json={\"names\": names, \"slots\": slots, \"mode\": mode},\n        )\n\n    async def release_concurrency_slots(\n        self, names: List[str], slots: int, occupancy_seconds: float\n    ) -> httpx.Response:\n        return await self._client.post(\n            \"/v2/concurrency_limits/decrement\",\n            json={\n                \"names\": names,\n                \"slots\": slots,\n                \"occupancy_seconds\": occupancy_seconds,\n            },\n        )\n\n    async def create_global_concurrency_limit(\n        self, concurrency_limit: GlobalConcurrencyLimitCreate\n    ) -> UUID:\n        response = await self._client.post(\n            \"/v2/concurrency_limits/\",\n            json=concurrency_limit.dict(json_compatible=True, exclude_unset=True),\n        )\n        return UUID(response.json()[\"id\"])\n\n    async def update_global_concurrency_limit(\n        self, name: str, concurrency_limit: GlobalConcurrencyLimitUpdate\n    ) -> httpx.Response:\n        try:\n            response = await self._client.patch(\n                f\"/v2/concurrency_limits/{name}\",\n                json=concurrency_limit.dict(json_compatible=True, exclude_unset=True),\n            )\n            return response\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def delete_global_concurrency_limit_by_name(\n        self, name: str\n    ) -> httpx.Response:\n        try:\n            response = await self._client.delete(f\"/v2/concurrency_limits/{name}\")\n            return response\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def 
read_global_concurrency_limit_by_name(\n        self, name: str\n    ) -> GlobalConcurrencyLimitResponse:\n        try:\n            response = await self._client.get(f\"/v2/concurrency_limits/{name}\")\n            return GlobalConcurrencyLimitResponse.parse_obj(response.json())\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_global_concurrency_limits(\n        self, limit: int = 10, offset: int = 0\n    ) -> List[GlobalConcurrencyLimitResponse]:\n        response = await self._client.post(\n            \"/v2/concurrency_limits/filter\",\n            json={\n                \"limit\": limit,\n                \"offset\": offset,\n            },\n        )\n        return pydantic.parse_obj_as(\n            List[GlobalConcurrencyLimitResponse], response.json()\n        )\n\n    async def create_flow_run_input(\n        self, flow_run_id: UUID, key: str, value: str, sender: Optional[str] = None\n    ):\n        \"\"\"\n        Creates a flow run input.\n\n        Args:\n            flow_run_id: The flow run id.\n            key: The input key.\n            value: The input value.\n            sender: The sender of the input.\n        \"\"\"\n\n        # Initialize the input to ensure that the key is valid.\n        FlowRunInput(flow_run_id=flow_run_id, key=key, value=value)\n\n        response = await self._client.post(\n            f\"/flow_runs/{flow_run_id}/input\",\n            json={\"key\": key, \"value\": value, \"sender\": sender},\n        )\n        response.raise_for_status()\n\n    async def filter_flow_run_input(\n        self, flow_run_id: UUID, key_prefix: str, limit: int, exclude_keys: Set[str]\n    ) -> List[FlowRunInput]:\n        response = await self._client.post(\n            f\"/flow_runs/{flow_run_id}/input/filter\",\n            json={\n                \"prefix\": key_prefix,\n                \"limit\": limit,\n                \"exclude_keys\": list(exclude_keys),\n            },\n        )\n        response.raise_for_status()\n        return pydantic.parse_obj_as(List[FlowRunInput], response.json())\n\n    async def read_flow_run_input(self, flow_run_id: UUID, key: str) -> str:\n        \"\"\"\n        Reads a flow run input.\n\n        Args:\n            flow_run_id: The flow run id.\n            key: The input key.\n        \"\"\"\n        response = await self._client.get(f\"/flow_runs/{flow_run_id}/input/{key}\")\n        response.raise_for_status()\n        return response.content.decode()\n\n    async def delete_flow_run_input(self, flow_run_id: UUID, key: str):\n        \"\"\"\n        Deletes a flow run input.\n\n        Args:\n            flow_run_id: The flow run id.\n            key: The input key.\n        \"\"\"\n        response = await self._client.delete(f\"/flow_runs/{flow_run_id}/input/{key}\")\n        response.raise_for_status()\n\n    def _raise_for_unsupported_automations(self) -> NoReturn:\n        if not PREFECT_EXPERIMENTAL_EVENTS:\n            raise RuntimeError(\n                \"The current server and client configuration does not support \"\n                \"events.  
Enable experimental events support with the \"\n                \"PREFECT_EXPERIMENTAL_EVENTS setting.\"\n            )\n        else:\n            raise RuntimeError(\n                \"The current server and client configuration does not support \"\n                \"automations.  Enable experimental automations with the \"\n                \"PREFECT_API_SERVICES_TRIGGERS_ENABLED setting.\"\n            )\n\n    async def create_automation(self, automation: AutomationCore) -> UUID:\n        \"\"\"Creates an automation in Prefect Cloud.\"\"\"\n        if not self.server_type.supports_automations():\n            self._raise_for_unsupported_automations()\n\n        response = await self._client.post(\n            \"/automations/\",\n            json=automation.dict(json_compatible=True),\n        )\n\n        return UUID(response.json()[\"id\"])\n\n    async def update_automation(self, automation_id: UUID, automation: AutomationCore):\n        \"\"\"Updates an automation in Prefect Cloud.\"\"\"\n        if not self.server_type.supports_automations():\n            self._raise_for_unsupported_automations()\n        response = await self._client.put(\n            f\"/automations/{automation_id}\",\n            json=automation.dict(json_compatible=True, exclude_unset=True),\n        )\n        response.raise_for_status()\n\n    async def read_automations(self) -> List[Automation]:\n        if not self.server_type.supports_automations():\n            self._raise_for_unsupported_automations()\n\n        response = await self._client.post(\"/automations/filter\")\n        response.raise_for_status()\n        return pydantic.parse_obj_as(List[Automation], response.json())\n\n    async def find_automation(\n        self, id_or_name: Union[str, UUID], exit_if_not_found: bool = True\n    ) -> Optional[Automation]:\n        if isinstance(id_or_name, str):\n            try:\n                id = UUID(id_or_name)\n            except ValueError:\n                id = None\n        elif isinstance(id_or_name, UUID):\n            id = id_or_name\n\n        if id:\n            try:\n                automation = await self.read_automation(id)\n                return automation\n            except prefect.exceptions.HTTPStatusError as e:\n                if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                    raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n\n        automations = await self.read_automations()\n\n        # Look for it by an exact name\n        for automation in automations:\n            if automation.name == id_or_name:\n                return automation\n\n        # Look for it by a case-insensitive name\n        for automation in automations:\n            if automation.name.lower() == id_or_name.lower():\n                return automation\n\n        return None\n\n    async def read_automation(self, automation_id: UUID) -> Optional[Automation]:\n        if not self.server_type.supports_automations():\n            self._raise_for_unsupported_automations()\n\n        response = await self._client.get(f\"/automations/{automation_id}\")\n        if response.status_code == 404:\n            return None\n        response.raise_for_status()\n        return Automation.parse_obj(response.json())\n\n    async def read_automations_by_name(self, name: str) -> List[Automation]:\n        \"\"\"\n        Query the Prefect API for an automation by name. 
Only automations matching the provided name will be returned.\n\n        Args:\n            name: the name of the automation to query\n\n        Returns:\n            a list of Automation model representations of the automations\n        \"\"\"\n        if not self.server_type.supports_automations():\n            self._raise_for_unsupported_automations()\n        automation_filter = filters.AutomationFilter(name=dict(any_=[name]))\n\n        response = await self._client.post(\n            \"/automations/filter\",\n            json={\n                \"sort\": sorting.AutomationSort.UPDATED_DESC,\n                \"automations\": automation_filter.dict(json_compatible=True)\n                if automation_filter\n                else None,\n            },\n        )\n\n        response.raise_for_status()\n\n        return pydantic.parse_obj_as(List[Automation], response.json())\n\n    async def pause_automation(self, automation_id: UUID):\n        if not self.server_type.supports_automations():\n            self._raise_for_unsupported_automations()\n\n        response = await self._client.patch(\n            f\"/automations/{automation_id}\", json={\"enabled\": False}\n        )\n        response.raise_for_status()\n\n    async def resume_automation(self, automation_id: UUID):\n        if not self.server_type.supports_automations():\n            self._raise_for_unsupported_automations()\n\n        response = await self._client.patch(\n            f\"/automations/{automation_id}\", json={\"enabled\": True}\n        )\n        response.raise_for_status()\n\n    async def delete_automation(self, automation_id: UUID):\n        if not self.server_type.supports_automations():\n            self._raise_for_unsupported_automations()\n\n        response = await self._client.delete(f\"/automations/{automation_id}\")\n        if response.status_code == 404:\n            return\n\n        response.raise_for_status()\n\n    async def read_resource_related_automations(\n        self, resource_id: str\n    ) -> List[Automation]:\n        if not self.server_type.supports_automations():\n            self._raise_for_unsupported_automations()\n\n        response = await self._client.get(f\"/automations/related-to/{resource_id}\")\n        response.raise_for_status()\n        return pydantic.parse_obj_as(List[Automation], response.json())\n\n    async def delete_resource_owned_automations(self, resource_id: str):\n        if not self.server_type.supports_automations():\n            self._raise_for_unsupported_automations()\n\n        await self._client.delete(f\"/automations/owned-by/{resource_id}\")\n\n    async def __aenter__(self):\n        \"\"\"\n        Start the client.\n\n        If the client is already started, this will raise an exception.\n\n        If the client is already closed, this will raise an exception. Use a new client\n        instance instead.\n        \"\"\"\n        if self._closed:\n            # httpx.AsyncClient does not allow reuse so we will not either.\n            raise RuntimeError(\n                \"The client cannot be started again after closing. 
\"\n                \"Retrieve a new client with `get_client()` instead.\"\n            )\n\n        if self._started:\n            # httpx.AsyncClient does not allow reentrancy so we will not either.\n            raise RuntimeError(\"The client cannot be started more than once.\")\n\n        self._loop = asyncio.get_running_loop()\n        await self._exit_stack.__aenter__()\n\n        # Enter a lifespan context if using an ephemeral application.\n        # See https://github.com/encode/httpx/issues/350\n        if self._ephemeral_app and self.manage_lifespan:\n            self._ephemeral_lifespan = await self._exit_stack.enter_async_context(\n                app_lifespan_context(self._ephemeral_app)\n            )\n\n        if self._ephemeral_app:\n            self.logger.debug(\n                \"Using ephemeral application with database at \"\n                f\"{PREFECT_API_DATABASE_CONNECTION_URL.value()}\"\n            )\n        else:\n            self.logger.debug(f\"Connecting to API at {self.api_url}\")\n\n        # Enter the httpx client's context\n        await self._exit_stack.enter_async_context(self._client)\n\n        self._started = True\n\n        return self\n\n    async def __aexit__(self, *exc_info):\n        \"\"\"\n        Shutdown the client.\n        \"\"\"\n        self._closed = True\n        return await self._exit_stack.__aexit__(*exc_info)\n\n    def __enter__(self):\n        raise RuntimeError(\n            \"The `PrefectClient` must be entered with an async context. Use 'async \"\n            \"with PrefectClient(...)' not 'with PrefectClient(...)'\"\n        )\n\n    def __exit__(self, *_):\n        assert False, \"This should never be called but must be defined for __enter__\"\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.api_url","title":"api_url: httpx.URL property","text":"

Get the base URL for the API.

","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.api_healthcheck","title":"api_healthcheck async","text":"

Attempts to connect to the API and returns the encountered exception if not successful.

If successful, returns None.

Source code in prefect/client/orchestration.py
async def api_healthcheck(self) -> Optional[Exception]:\n    \"\"\"\n    Attempts to connect to the API and returns the encountered exception if not\n    successful.\n\n    If successful, returns `None`.\n    \"\"\"\n    try:\n        await self._client.get(\"/health\")\n        return None\n    except Exception as exc:\n        return exc\n
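A minimal usage sketch (not part of the generated reference): it assumes a reachable Prefect API and that `get_client` is importable from the `prefect` package, then calls `api_healthcheck` and reports the result.

import asyncio

from prefect import get_client


async def check_api() -> None:
    # get_client() returns a client configured from the active Prefect profile.
    async with get_client() as client:
        exc = await client.api_healthcheck()
        if exc is None:
            print("API is healthy")
        else:
            print(f"API healthcheck failed: {exc!r}")


if __name__ == "__main__":
    asyncio.run(check_api())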
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.hello","title":"hello async","text":"

Send a GET request to /hello for testing purposes.

Source code in prefect/client/orchestration.py
async def hello(self) -> httpx.Response:\n    \"\"\"\n    Send a GET request to /hello for testing purposes.\n    \"\"\"\n    return await self._client.get(\"/hello\")\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow","title":"create_flow async","text":"

Create a flow in the Prefect API.

Parameters:

Name Type Description Default flow Flow

a Flow object

required

Raises:

Type Description RequestError

if a flow was not created for any reason

Returns:

Type Description UUID

the ID of the flow in the backend

Source code in prefect/client/orchestration.py
async def create_flow(self, flow: \"FlowObject\") -> UUID:\n    \"\"\"\n    Create a flow in the Prefect API.\n\n    Args:\n        flow: a [Flow][prefect.flows.Flow] object\n\n    Raises:\n        httpx.RequestError: if a flow was not created for any reason\n\n    Returns:\n        the ID of the flow in the backend\n    \"\"\"\n    return await self.create_flow_from_name(flow.name)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_from_name","title":"create_flow_from_name async","text":"

Create a flow in the Prefect API.

Parameters:

Name Type Description Default flow_name str

the name of the new flow

required

Raises:

Type Description RequestError

if a flow was not created for any reason

Returns:

Type Description UUID

the ID of the flow in the backend

Source code in prefect/client/orchestration.py
async def create_flow_from_name(self, flow_name: str) -> UUID:\n    \"\"\"\n    Create a flow in the Prefect API.\n\n    Args:\n        flow_name: the name of the new flow\n\n    Raises:\n        httpx.RequestError: if a flow was not created for any reason\n\n    Returns:\n        the ID of the flow in the backend\n    \"\"\"\n    flow_data = FlowCreate(name=flow_name)\n    response = await self._client.post(\n        \"/flows/\", json=flow_data.dict(json_compatible=True)\n    )\n\n    flow_id = response.json().get(\"id\")\n    if not flow_id:\n        raise httpx.RequestError(f\"Malformed response: {response}\")\n\n    # Return the id of the created flow\n    return UUID(flow_id)\n
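A short sketch of registering a flow by name and capturing the returned UUID; the flow name `my-etl` is a placeholder, and a running API plus `get_client` from `prefect` are assumed.

import asyncio

from prefect import get_client


async def register_flow_name() -> None:
    async with get_client() as client:
        # Returns the UUID of the flow record in the backend.
        flow_id = await client.create_flow_from_name("my-etl")
        print(f"Flow id: {flow_id}")


asyncio.run(register_flow_name())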
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow","title":"read_flow async","text":"

Query the Prefect API for a flow by id.

Parameters:

Name Type Description Default flow_id UUID

the flow ID of interest

required

Returns:

Type Description Flow

a Flow model representation of the flow

Source code in prefect/client/orchestration.py
async def read_flow(self, flow_id: UUID) -> Flow:\n    \"\"\"\n    Query the Prefect API for a flow by id.\n\n    Args:\n        flow_id: the flow ID of interest\n\n    Returns:\n        a [Flow model][prefect.client.schemas.objects.Flow] representation of the flow\n    \"\"\"\n    response = await self._client.get(f\"/flows/{flow_id}\")\n    return Flow.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flows","title":"read_flows async","text":"

Query the Prefect API for flows. Only flows matching all criteria will be returned.

Parameters:

Name Type Description Default flow_filter FlowFilter

filter criteria for flows

None flow_run_filter FlowRunFilter

filter criteria for flow runs

None task_run_filter TaskRunFilter

filter criteria for task runs

None deployment_filter DeploymentFilter

filter criteria for deployments

None work_pool_filter WorkPoolFilter

filter criteria for work pools

None work_queue_filter WorkQueueFilter

filter criteria for work pool queues

None sort FlowSort

sort criteria for the flows

None limit int

limit for the flow query

None offset int

offset for the flow query

0

Returns:

Type Description List[Flow]

a list of Flow model representations of the flows

Source code in prefect/client/orchestration.py
async def read_flows(\n    self,\n    *,\n    flow_filter: FlowFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    deployment_filter: DeploymentFilter = None,\n    work_pool_filter: WorkPoolFilter = None,\n    work_queue_filter: WorkQueueFilter = None,\n    sort: FlowSort = None,\n    limit: int = None,\n    offset: int = 0,\n) -> List[Flow]:\n    \"\"\"\n    Query the Prefect API for flows. Only flows matching all criteria will\n    be returned.\n\n    Args:\n        flow_filter: filter criteria for flows\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        deployment_filter: filter criteria for deployments\n        work_pool_filter: filter criteria for work pools\n        work_queue_filter: filter criteria for work pool queues\n        sort: sort criteria for the flows\n        limit: limit for the flow query\n        offset: offset for the flow query\n\n    Returns:\n        a list of Flow model representations of the flows\n    \"\"\"\n    body = {\n        \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n            if flow_run_filter\n            else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"deployments\": (\n            deployment_filter.dict(json_compatible=True)\n            if deployment_filter\n            else None\n        ),\n        \"work_pools\": (\n            work_pool_filter.dict(json_compatible=True)\n            if work_pool_filter\n            else None\n        ),\n        \"work_queues\": (\n            work_queue_filter.dict(json_compatible=True)\n            if work_queue_filter\n            else None\n        ),\n        \"sort\": sort,\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n\n    response = await self._client.post(\"/flows/filter\", json=body)\n    return pydantic.parse_obj_as(List[Flow], response.json())\n
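A sketch of filtering flows by name. It assumes the `FlowFilter` and `FlowFilterName` schemas are importable from `prefect.client.schemas.filters`; the flow names listed in `any_` are placeholders.

import asyncio

from prefect import get_client
from prefect.client.schemas.filters import FlowFilter, FlowFilterName


async def list_named_flows() -> None:
    async with get_client() as client:
        flows = await client.read_flows(
            # Only flows whose name matches one of these placeholders are returned.
            flow_filter=FlowFilter(name=FlowFilterName(any_=["my-etl", "nightly-report"])),
            limit=10,
        )
        for f in flows:
            print(f.id, f.name)


asyncio.run(list_named_flows())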
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_by_name","title":"read_flow_by_name async","text":"

Query the Prefect API for a flow by name.

Parameters:

Name Type Description Default flow_name str

the name of a flow

required

Returns:

Type Description Flow

a fully hydrated Flow model

Source code in prefect/client/orchestration.py
async def read_flow_by_name(\n    self,\n    flow_name: str,\n) -> Flow:\n    \"\"\"\n    Query the Prefect API for a flow by name.\n\n    Args:\n        flow_name: the name of a flow\n\n    Returns:\n        a fully hydrated Flow model\n    \"\"\"\n    response = await self._client.get(f\"/flows/name/{flow_name}\")\n    return Flow.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_run_from_deployment","title":"create_flow_run_from_deployment async","text":"

Create a flow run for a deployment.

Parameters:

Name Type Description Default deployment_id UUID

The deployment ID to create the flow run from

required parameters Optional[Dict[str, Any]]

Parameter overrides for this flow run. Merged with the deployment defaults

None context Optional[Dict[str, Any]]

Optional run context data

None state State

The initial state for the run. If not provided, defaults to Scheduled for now. Should always be a Scheduled type.

None name str

An optional name for the flow run. If not provided, the server will generate a name.

None tags Iterable[str]

An optional iterable of tags to apply to the flow run; these tags are merged with the deployment's tags.

None idempotency_key str

Optional idempotency key for creation of the flow run. If the key matches the key of an existing flow run, the existing run will be returned instead of creating a new one.

None parent_task_run_id UUID

if a subflow run is being created, the placeholder task run identifier in the parent flow

None work_queue_name str

An optional work queue name to add this run to. If not provided, will default to the deployment's set work queue. If one is provided that does not exist, a new work queue will be created within the deployment's work pool.

None job_variables Optional[Dict[str, Any]]

Optional variables that will be supplied to the flow run job.

None

Raises:

Type Description RequestError

if the Prefect API does not successfully create a run for any reason

Returns:

Type Description FlowRun

The flow run model

Source code in prefect/client/orchestration.py
async def create_flow_run_from_deployment(\n    self,\n    deployment_id: UUID,\n    *,\n    parameters: Optional[Dict[str, Any]] = None,\n    context: Optional[Dict[str, Any]] = None,\n    state: prefect.states.State = None,\n    name: str = None,\n    tags: Iterable[str] = None,\n    idempotency_key: str = None,\n    parent_task_run_id: UUID = None,\n    work_queue_name: str = None,\n    job_variables: Optional[Dict[str, Any]] = None,\n) -> FlowRun:\n    \"\"\"\n    Create a flow run for a deployment.\n\n    Args:\n        deployment_id: The deployment ID to create the flow run from\n        parameters: Parameter overrides for this flow run. Merged with the\n            deployment defaults\n        context: Optional run context data\n        state: The initial state for the run. If not provided, defaults to\n            `Scheduled` for now. Should always be a `Scheduled` type.\n        name: An optional name for the flow run. If not provided, the server will\n            generate a name.\n        tags: An optional iterable of tags to apply to the flow run; these tags\n            are merged with the deployment's tags.\n        idempotency_key: Optional idempotency key for creation of the flow run.\n            If the key matches the key of an existing flow run, the existing run will\n            be returned instead of creating a new one.\n        parent_task_run_id: if a subflow run is being created, the placeholder task\n            run identifier in the parent flow\n        work_queue_name: An optional work queue name to add this run to. If not provided,\n            will default to the deployment's set work queue.  If one is provided that does not\n            exist, a new work queue will be created within the deployment's work pool.\n        job_variables: Optional variables that will be supplied to the flow run job.\n\n    Raises:\n        httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n    Returns:\n        The flow run model\n    \"\"\"\n    parameters = parameters or {}\n    context = context or {}\n    state = state or prefect.states.Scheduled()\n    tags = tags or []\n\n    flow_run_create = DeploymentFlowRunCreate(\n        parameters=parameters,\n        context=context,\n        state=state.to_state_create(),\n        tags=tags,\n        name=name,\n        idempotency_key=idempotency_key,\n        parent_task_run_id=parent_task_run_id,\n        job_variables=job_variables,\n    )\n\n    # done separately to avoid including this field in payloads sent to older API versions\n    if work_queue_name:\n        flow_run_create.work_queue_name = work_queue_name\n\n    response = await self._client.post(\n        f\"/deployments/{deployment_id}/create_flow_run\",\n        json=flow_run_create.dict(json_compatible=True, exclude_unset=True),\n    )\n    return FlowRun.parse_obj(response.json())\n
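A sketch of triggering a run from an existing deployment; `DEPLOYMENT_ID` is a placeholder for a real deployment UUID and the parameter override and tag are illustrative only.

import asyncio
from uuid import UUID

from prefect import get_client

DEPLOYMENT_ID = UUID("00000000-0000-0000-0000-000000000000")  # placeholder


async def trigger_deployment_run() -> None:
    async with get_client() as client:
        flow_run = await client.create_flow_run_from_deployment(
            DEPLOYMENT_ID,
            parameters={"table": "events"},  # merged with the deployment defaults
            tags=["manual"],                 # merged with the deployment's tags
        )
        print(f"Created flow run {flow_run.name} ({flow_run.id})")


asyncio.run(trigger_deployment_run())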
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_run","title":"create_flow_run async","text":"

Create a flow run for a flow.

Parameters:

Name Type Description Default flow Flow

The flow model to create the flow run for

required name Optional[str]

An optional name for the flow run

None parameters Optional[Dict[str, Any]]

Parameter overrides for this flow run.

None context Optional[Dict[str, Any]]

Optional run context data

None tags Optional[Iterable[str]]

a list of tags to apply to this flow run

None parent_task_run_id Optional[UUID]

if a subflow run is being created, the placeholder task run identifier in the parent flow

None state Optional[State]

The initial state for the run. If not provided, defaults to Scheduled for now. Should always be a Scheduled type.

None

Raises:

Type Description RequestError

if the Prefect API does not successfully create a run for any reason

Returns:

Type Description FlowRun

The flow run model

Source code in prefect/client/orchestration.py
async def create_flow_run(\n    self,\n    flow: \"FlowObject\",\n    name: Optional[str] = None,\n    parameters: Optional[Dict[str, Any]] = None,\n    context: Optional[Dict[str, Any]] = None,\n    tags: Optional[Iterable[str]] = None,\n    parent_task_run_id: Optional[UUID] = None,\n    state: Optional[\"prefect.states.State\"] = None,\n) -> FlowRun:\n    \"\"\"\n    Create a flow run for a flow.\n\n    Args:\n        flow: The flow model to create the flow run for\n        name: An optional name for the flow run\n        parameters: Parameter overrides for this flow run.\n        context: Optional run context data\n        tags: a list of tags to apply to this flow run\n        parent_task_run_id: if a subflow run is being created, the placeholder task\n            run identifier in the parent flow\n        state: The initial state for the run. If not provided, defaults to\n            `Scheduled` for now. Should always be a `Scheduled` type.\n\n    Raises:\n        httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n    Returns:\n        The flow run model\n    \"\"\"\n    parameters = parameters or {}\n    context = context or {}\n\n    if state is None:\n        state = prefect.states.Pending()\n\n    # Retrieve the flow id\n    flow_id = await self.create_flow(flow)\n\n    flow_run_create = FlowRunCreate(\n        flow_id=flow_id,\n        flow_version=flow.version,\n        name=name,\n        parameters=parameters,\n        context=context,\n        tags=list(tags or []),\n        parent_task_run_id=parent_task_run_id,\n        state=state.to_state_create(),\n        empirical_policy=FlowRunPolicy(\n            retries=flow.retries,\n            retry_delay=flow.retry_delay_seconds,\n        ),\n    )\n\n    flow_run_create_json = flow_run_create.dict(json_compatible=True)\n    response = await self._client.post(\"/flow_runs/\", json=flow_run_create_json)\n    flow_run = FlowRun.parse_obj(response.json())\n\n    # Restore the parameters to the local objects to retain expectations about\n    # Python objects\n    flow_run.parameters = parameters\n\n    return flow_run\n
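A sketch of creating a run record directly for a flow object; the flow, its parameters, and the tag are illustrative. Note that this only creates the run in the API as shown in the source above; it does not execute the flow.

import asyncio

from prefect import flow, get_client


@flow
def say_hello(name: str = "world"):
    print(f"hello {name}")


async def create_adhoc_run() -> None:
    async with get_client() as client:
        flow_run = await client.create_flow_run(
            say_hello,
            parameters={"name": "Marvin"},
            tags=["adhoc"],
        )
        print(f"Created run {flow_run.name} ({flow_run.id})")


asyncio.run(create_adhoc_run())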
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_flow_run","title":"update_flow_run async","text":"

Update a flow run's details.

Parameters:

Name Type Description Default flow_run_id UUID

The identifier for the flow run to update.

required flow_version Optional[str]

A new version string for the flow run.

None parameters Optional[dict]

A dictionary of parameter values for the flow run. This will not be merged with any existing parameters.

None name Optional[str]

A new name for the flow run.

None empirical_policy Optional[FlowRunPolicy]

A new flow run orchestration policy. This will not be merged with any existing policy.

None tags Optional[Iterable[str]]

An iterable of new tags for the flow run. These will not be merged with any existing tags.

None infrastructure_pid Optional[str]

The id of the flow run as returned by an infrastructure block.

None

Returns:

Type Description Response

an httpx.Response object from the PATCH request

Source code in prefect/client/orchestration.py
async def update_flow_run(\n    self,\n    flow_run_id: UUID,\n    flow_version: Optional[str] = None,\n    parameters: Optional[dict] = None,\n    name: Optional[str] = None,\n    tags: Optional[Iterable[str]] = None,\n    empirical_policy: Optional[FlowRunPolicy] = None,\n    infrastructure_pid: Optional[str] = None,\n    job_variables: Optional[dict] = None,\n) -> httpx.Response:\n    \"\"\"\n    Update a flow run's details.\n\n    Args:\n        flow_run_id: The identifier for the flow run to update.\n        flow_version: A new version string for the flow run.\n        parameters: A dictionary of parameter values for the flow run. This will not\n            be merged with any existing parameters.\n        name: A new name for the flow run.\n        empirical_policy: A new flow run orchestration policy. This will not be\n            merged with any existing policy.\n        tags: An iterable of new tags for the flow run. These will not be merged with\n            any existing tags.\n        infrastructure_pid: The id of flow run as returned by an\n            infrastructure block.\n\n    Returns:\n        an `httpx.Response` object from the PATCH request\n    \"\"\"\n    params = {}\n    if flow_version is not None:\n        params[\"flow_version\"] = flow_version\n    if parameters is not None:\n        params[\"parameters\"] = parameters\n    if name is not None:\n        params[\"name\"] = name\n    if tags is not None:\n        params[\"tags\"] = tags\n    if empirical_policy is not None:\n        params[\"empirical_policy\"] = empirical_policy\n    if infrastructure_pid:\n        params[\"infrastructure_pid\"] = infrastructure_pid\n    if job_variables is not None:\n        params[\"job_variables\"] = job_variables\n\n    flow_run_data = FlowRunUpdate(**params)\n\n    return await self._client.patch(\n        f\"/flow_runs/{flow_run_id}\",\n        json=flow_run_data.dict(json_compatible=True, exclude_unset=True),\n    )\n
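A sketch of patching run details; the flow run id is a placeholder, and, per the docstring above, the tags supplied here replace rather than merge with any existing tags.

import asyncio
from uuid import UUID

from prefect import get_client

FLOW_RUN_ID = UUID("00000000-0000-0000-0000-000000000000")  # placeholder


async def rename_run() -> None:
    async with get_client() as client:
        response = await client.update_flow_run(
            FLOW_RUN_ID,
            name="backfill-2024-05-01",
            tags=["backfill"],  # replaces existing tags
        )
        # The method returns the raw httpx.Response from the PATCH request.
        response.raise_for_status()


asyncio.run(rename_run())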
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_flow_run","title":"delete_flow_run async","text":"

Delete a flow run by UUID.

Parameters:

Name Type Description Default flow_run_id UUID

The flow run UUID of interest.

required Source code in prefect/client/orchestration.py
async def delete_flow_run(\n    self,\n    flow_run_id: UUID,\n) -> None:\n    \"\"\"\n    Delete a flow run by UUID.\n\n    Args:\n        flow_run_id: The flow run UUID of interest.\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If requests fails\n    \"\"\"\n    try:\n        await self._client.delete(f\"/flow_runs/{flow_run_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
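A sketch of deleting a run defensively, catching the `ObjectNotFound` raised on a 404; the run id is a placeholder.

import asyncio
from uuid import UUID

from prefect import get_client
from prefect.exceptions import ObjectNotFound

FLOW_RUN_ID = UUID("00000000-0000-0000-0000-000000000000")  # placeholder


async def delete_run() -> None:
    async with get_client() as client:
        try:
            await client.delete_flow_run(FLOW_RUN_ID)
        except ObjectNotFound:
            print("Flow run already deleted or never existed")


asyncio.run(delete_run())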
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_concurrency_limit","title":"create_concurrency_limit async","text":"

Create a tag concurrency limit in the Prefect API. These limits govern concurrently running tasks.

Parameters:

Name Type Description Default tag str

a tag the concurrency limit is applied to

required concurrency_limit int

the maximum number of concurrent task runs for a given tag

required

Raises:

Type Description RequestError

if the concurrency limit was not created for any reason

Returns:

Type Description UUID

the ID of the concurrency limit in the backend

Source code in prefect/client/orchestration.py
async def create_concurrency_limit(\n    self,\n    tag: str,\n    concurrency_limit: int,\n) -> UUID:\n    \"\"\"\n    Create a tag concurrency limit in the Prefect API. These limits govern concurrently\n    running tasks.\n\n    Args:\n        tag: a tag the concurrency limit is applied to\n        concurrency_limit: the maximum number of concurrent task runs for a given tag\n\n    Raises:\n        httpx.RequestError: if the concurrency limit was not created for any reason\n\n    Returns:\n        the ID of the concurrency limit in the backend\n    \"\"\"\n\n    concurrency_limit_create = ConcurrencyLimitCreate(\n        tag=tag,\n        concurrency_limit=concurrency_limit,\n    )\n    response = await self._client.post(\n        \"/concurrency_limits/\",\n        json=concurrency_limit_create.dict(json_compatible=True),\n    )\n\n    concurrency_limit_id = response.json().get(\"id\")\n\n    if not concurrency_limit_id:\n        raise httpx.RequestError(f\"Malformed response: {response}\")\n\n    return UUID(concurrency_limit_id)\n
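A sketch of limiting concurrent task runs for a tag; the tag name and limit are illustrative, and tasks participate by carrying the same tag.

import asyncio

from prefect import get_client


async def limit_database_tasks() -> None:
    async with get_client() as client:
        limit_id = await client.create_concurrency_limit(
            tag="database",        # task runs tagged "database" share this limit
            concurrency_limit=10,  # at most 10 such task runs at once
        )
        print(f"Created concurrency limit {limit_id}")


asyncio.run(limit_database_tasks())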
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_concurrency_limit_by_tag","title":"read_concurrency_limit_by_tag async","text":"

Read the concurrency limit set on a specific tag.

Parameters:

Name Type Description Default tag str

a tag the concurrency limit is applied to

required

Raises:

Type Description ObjectNotFound

If request returns 404

RequestError

if the concurrency limit was not created for any reason

Returns:

Type Description

the concurrency limit set on a specific tag

Source code in prefect/client/orchestration.py
async def read_concurrency_limit_by_tag(\n    self,\n    tag: str,\n):\n    \"\"\"\n    Read the concurrency limit set on a specific tag.\n\n    Args:\n        tag: a tag the concurrency limit is applied to\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: if the concurrency limit was not created for any reason\n\n    Returns:\n        the concurrency limit set on a specific tag\n    \"\"\"\n    try:\n        response = await self._client.get(\n            f\"/concurrency_limits/tag/{tag}\",\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n\n    concurrency_limit_id = response.json().get(\"id\")\n\n    if not concurrency_limit_id:\n        raise httpx.RequestError(f\"Malformed response: {response}\")\n\n    concurrency_limit = ConcurrencyLimit.parse_obj(response.json())\n    return concurrency_limit\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_concurrency_limits","title":"read_concurrency_limits async","text":"

Lists concurrency limits set on task run tags.

Parameters:

Name Type Description Default limit int

the maximum number of concurrency limits returned

required offset int

the concurrency limit query offset

required

Returns:

Type Description

a list of concurrency limits

Source code in prefect/client/orchestration.py
async def read_concurrency_limits(\n    self,\n    limit: int,\n    offset: int,\n):\n    \"\"\"\n    Lists concurrency limits set on task run tags.\n\n    Args:\n        limit: the maximum number of concurrency limits returned\n        offset: the concurrency limit query offset\n\n    Returns:\n        a list of concurrency limits\n    \"\"\"\n\n    body = {\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n\n    response = await self._client.post(\"/concurrency_limits/filter\", json=body)\n    return pydantic.parse_obj_as(List[ConcurrencyLimit], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.reset_concurrency_limit_by_tag","title":"reset_concurrency_limit_by_tag async","text":"

Resets the concurrency limit slots set on a specific tag.

Parameters:

Name Type Description Default tag str

a tag the concurrency limit is applied to

required slot_override Optional[List[Union[UUID, str]]]

a list of task run IDs that are currently using a concurrency slot. Check that any task run IDs included in slot_override are currently running; otherwise, those concurrency slots will never be released.

None

Raises:

Type Description ObjectNotFound

If request returns 404

RequestError

If request fails

Source code in prefect/client/orchestration.py
async def reset_concurrency_limit_by_tag(\n    self,\n    tag: str,\n    slot_override: Optional[List[Union[UUID, str]]] = None,\n):\n    \"\"\"\n    Resets the concurrency limit slots set on a specific tag.\n\n    Args:\n        tag: a tag the concurrency limit is applied to\n        slot_override: a list of task run IDs that are currently using a\n            concurrency slot, please check that any task run IDs included in\n            `slot_override` are currently running, otherwise those concurrency\n            slots will never be released.\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If request fails\n\n    \"\"\"\n    if slot_override is not None:\n        slot_override = [str(slot) for slot in slot_override]\n\n    try:\n        await self._client.post(\n            f\"/concurrency_limits/tag/{tag}/reset\",\n            json=dict(slot_override=slot_override),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_concurrency_limit_by_tag","title":"delete_concurrency_limit_by_tag async","text":"

Delete the concurrency limit set on a specific tag.

Parameters:

Name Type Description Default tag str

a tag the concurrency limit is applied to

required

Raises:

Type Description ObjectNotFound

If request returns 404

RequestError

If request fails

Source code in prefect/client/orchestration.py
async def delete_concurrency_limit_by_tag(\n    self,\n    tag: str,\n):\n    \"\"\"\n    Delete the concurrency limit set on a specific tag.\n\n    Args:\n        tag: a tag the concurrency limit is applied to\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If request fails\n\n    \"\"\"\n    try:\n        await self._client.delete(\n            f\"/concurrency_limits/tag/{tag}\",\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_work_queue","title":"create_work_queue async","text":"

Create a work queue.

Parameters:

Name Type Description Default name str

a unique name for the work queue

required tags Optional[List[str]]

DEPRECATED: an optional list of tags to filter on; only work scheduled with these tags will be included in the queue. This option will be removed on 2023-02-23.

None description Optional[str]

An optional description for the work queue.

None is_paused Optional[bool]

Whether or not the work queue is paused.

None concurrency_limit Optional[int]

An optional concurrency limit for the work queue.

None priority Optional[int]

The queue's priority. Lower values are higher priority (1 is the highest).

None work_pool_name Optional[str]

The name of the work pool to use for this queue.

None

Raises:

Type Description ObjectAlreadyExists

If request returns 409

RequestError

If request fails

Returns:

Type Description WorkQueue

The created work queue

Source code in prefect/client/orchestration.py
async def create_work_queue(\n    self,\n    name: str,\n    tags: Optional[List[str]] = None,\n    description: Optional[str] = None,\n    is_paused: Optional[bool] = None,\n    concurrency_limit: Optional[int] = None,\n    priority: Optional[int] = None,\n    work_pool_name: Optional[str] = None,\n) -> WorkQueue:\n    \"\"\"\n    Create a work queue.\n\n    Args:\n        name: a unique name for the work queue\n        tags: DEPRECATED: an optional list of tags to filter on; only work scheduled with these tags\n            will be included in the queue. This option will be removed on 2023-02-23.\n        description: An optional description for the work queue.\n        is_paused: Whether or not the work queue is paused.\n        concurrency_limit: An optional concurrency limit for the work queue.\n        priority: The queue's priority. Lower values are higher priority (1 is the highest).\n        work_pool_name: The name of the work pool to use for this queue.\n\n    Raises:\n        prefect.exceptions.ObjectAlreadyExists: If request returns 409\n        httpx.RequestError: If request fails\n\n    Returns:\n        The created work queue\n    \"\"\"\n    if tags:\n        warnings.warn(\n            (\n                \"The use of tags for creating work queue filters is deprecated.\"\n                \" This option will be removed on 2023-02-23.\"\n            ),\n            DeprecationWarning,\n        )\n        filter = QueueFilter(tags=tags)\n    else:\n        filter = None\n    create_model = WorkQueueCreate(name=name, filter=filter)\n    if description is not None:\n        create_model.description = description\n    if is_paused is not None:\n        create_model.is_paused = is_paused\n    if concurrency_limit is not None:\n        create_model.concurrency_limit = concurrency_limit\n    if priority is not None:\n        create_model.priority = priority\n\n    data = create_model.dict(json_compatible=True)\n    try:\n        if work_pool_name is not None:\n            response = await self._client.post(\n                f\"/work_pools/{work_pool_name}/queues\", json=data\n            )\n        else:\n            response = await self._client.post(\"/work_queues/\", json=data)\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_409_CONFLICT:\n            raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n        elif e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return WorkQueue.parse_obj(response.json())\n
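A sketch of creating a queue inside an existing work pool; the pool and queue names are placeholders, and the deprecated `tags` argument is deliberately omitted.

import asyncio

from prefect import get_client


async def create_priority_queue() -> None:
    async with get_client() as client:
        queue = await client.create_work_queue(
            name="high-priority",
            work_pool_name="my-process-pool",  # placeholder pool name
            priority=1,                        # 1 is the highest priority
            concurrency_limit=5,
        )
        print(f"Created work queue {queue.name} ({queue.id})")


asyncio.run(create_priority_queue())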
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_queue_by_name","title":"read_work_queue_by_name async","text":"

Read a work queue by name.

Parameters:

Name Type Description Default name str

a unique name for the work queue

required work_pool_name str

the name of the work pool the queue belongs to.

None

Raises:

Type Description ObjectNotFound

if no work queue is found

HTTPStatusError

other status errors

Returns:

Name Type Description WorkQueue WorkQueue

a work queue API object

Source code in prefect/client/orchestration.py
async def read_work_queue_by_name(\n    self,\n    name: str,\n    work_pool_name: Optional[str] = None,\n) -> WorkQueue:\n    \"\"\"\n    Read a work queue by name.\n\n    Args:\n        name (str): a unique name for the work queue\n        work_pool_name (str, optional): the name of the work pool\n            the queue belongs to.\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: if no work queue is found\n        httpx.HTTPStatusError: other status errors\n\n    Returns:\n        WorkQueue: a work queue API object\n    \"\"\"\n    try:\n        if work_pool_name is not None:\n            response = await self._client.get(\n                f\"/work_pools/{work_pool_name}/queues/{name}\"\n            )\n        else:\n            response = await self._client.get(f\"/work_queues/name/{name}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n\n    return WorkQueue.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_work_queue","title":"update_work_queue async","text":"

Update properties of a work queue.

Parameters:

Name Type Description Default id UUID

the ID of the work queue to update

required **kwargs

the fields to update

{}

Raises:

Type Description ValueError

if no kwargs are provided

ObjectNotFound

if request returns 404

RequestError

if the request fails

Source code in prefect/client/orchestration.py
async def update_work_queue(self, id: UUID, **kwargs):\n    \"\"\"\n    Update properties of a work queue.\n\n    Args:\n        id: the ID of the work queue to update\n        **kwargs: the fields to update\n\n    Raises:\n        ValueError: if no kwargs are provided\n        prefect.exceptions.ObjectNotFound: if request returns 404\n        httpx.RequestError: if the request fails\n\n    \"\"\"\n    if not kwargs:\n        raise ValueError(\"No fields provided to update.\")\n\n    data = WorkQueueUpdate(**kwargs).dict(json_compatible=True, exclude_unset=True)\n    try:\n        await self._client.patch(f\"/work_queues/{id}\", json=data)\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.get_runs_in_work_queue","title":"get_runs_in_work_queue async","text":"

Read flow runs off a work queue.

Parameters:

Name Type Description Default id UUID

the id of the work queue to read from

required limit int

a limit on the number of runs to return

10 scheduled_before datetime

a timestamp; only runs scheduled before this time will be returned. Defaults to now.

None

Raises:

Type Description ObjectNotFound

If request returns 404

RequestError

If request fails

Returns:

Type Description List[FlowRun]

List[FlowRun]: a list of FlowRun objects read from the queue

Source code in prefect/client/orchestration.py
async def get_runs_in_work_queue(\n    self,\n    id: UUID,\n    limit: int = 10,\n    scheduled_before: datetime.datetime = None,\n) -> List[FlowRun]:\n    \"\"\"\n    Read flow runs off a work queue.\n\n    Args:\n        id: the id of the work queue to read from\n        limit: a limit on the number of runs to return\n        scheduled_before: a timestamp; only runs scheduled before this time will be returned.\n            Defaults to now.\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If request fails\n\n    Returns:\n        List[FlowRun]: a list of FlowRun objects read from the queue\n    \"\"\"\n    if scheduled_before is None:\n        scheduled_before = pendulum.now(\"UTC\")\n\n    try:\n        response = await self._client.post(\n            f\"/work_queues/{id}/get_runs\",\n            json={\n                \"limit\": limit,\n                \"scheduled_before\": scheduled_before.isoformat(),\n            },\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return pydantic.parse_obj_as(List[FlowRun], response.json())\n
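A sketch of polling a queue the way a simple agent might: the queue is first resolved by name (placeholder queue and pool names), then up to five runs scheduled before now are requested.

import asyncio

from prefect import get_client


async def poll_queue() -> None:
    async with get_client() as client:
        queue = await client.read_work_queue_by_name(
            "high-priority", work_pool_name="my-process-pool"
        )
        runs = await client.get_runs_in_work_queue(id=queue.id, limit=5)
        for run in runs:
            print(run.id, run.name)


asyncio.run(poll_queue())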
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_queue","title":"read_work_queue async","text":"

Read a work queue.

Parameters:

Name Type Description Default id UUID

the id of the work queue to load

required

Raises:

Type Description ObjectNotFound

If request returns 404

RequestError

If request fails

Returns:

Name Type Description WorkQueue WorkQueue

an instantiated WorkQueue object

Source code in prefect/client/orchestration.py
async def read_work_queue(\n    self,\n    id: UUID,\n) -> WorkQueue:\n    \"\"\"\n    Read a work queue.\n\n    Args:\n        id: the id of the work queue to load\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If request fails\n\n    Returns:\n        WorkQueue: an instantiated WorkQueue object\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/work_queues/{id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return WorkQueue.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_queue_status","title":"read_work_queue_status async","text":"

Read a work queue status.

Parameters:

Name Type Description Default id UUID

the id of the work queue to load

required

Raises:

Type Description ObjectNotFound

If request returns 404

RequestError

If request fails

Returns:

Name Type Description WorkQueueStatus WorkQueueStatusDetail

an instantiated WorkQueueStatus object

Source code in prefect/client/orchestration.py
async def read_work_queue_status(\n    self,\n    id: UUID,\n) -> WorkQueueStatusDetail:\n    \"\"\"\n    Read a work queue status.\n\n    Args:\n        id: the id of the work queue to load\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If request fails\n\n    Returns:\n        WorkQueueStatus: an instantiated WorkQueueStatus object\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/work_queues/{id}/status\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return WorkQueueStatusDetail.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.match_work_queues","title":"match_work_queues async","text":"

Query the Prefect API for work queues whose names start with a specific prefix.

Parameters:

Name Type Description Default prefixes List[str]

a list of strings used to match work queue name prefixes

required work_pool_name Optional[str]

an optional work pool name to scope the query to

None

Returns:

Type Description List[WorkQueue]

a list of WorkQueue model representations of the work queues

Source code in prefect/client/orchestration.py
async def match_work_queues(\n    self,\n    prefixes: List[str],\n    work_pool_name: Optional[str] = None,\n) -> List[WorkQueue]:\n    \"\"\"\n    Query the Prefect API for work queues with names with a specific prefix.\n\n    Args:\n        prefixes: a list of strings used to match work queue name prefixes\n        work_pool_name: an optional work pool name to scope the query to\n\n    Returns:\n        a list of WorkQueue model representations\n            of the work queues\n    \"\"\"\n    page_length = 100\n    current_page = 0\n    work_queues = []\n\n    while True:\n        new_queues = await self.read_work_queues(\n            work_pool_name=work_pool_name,\n            offset=current_page * page_length,\n            limit=page_length,\n            work_queue_filter=WorkQueueFilter(\n                name=WorkQueueFilterName(startswith_=prefixes)\n            ),\n        )\n        if not new_queues:\n            break\n        work_queues += new_queues\n        current_page += 1\n\n    return work_queues\n
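A sketch of matching queues by name prefix; the prefixes and the pool name used for scoping are illustrative.

import asyncio

from prefect import get_client


async def find_etl_queues() -> None:
    async with get_client() as client:
        queues = await client.match_work_queues(
            ["etl-", "ingest-"],               # name prefixes, placeholders
            work_pool_name="my-process-pool",  # optional scoping, placeholder
        )
        print([q.name for q in queues])


asyncio.run(find_etl_queues())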
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_work_queue_by_id","title":"delete_work_queue_by_id async","text":"

Delete a work queue by its ID.

Parameters:

Name Type Description Default id UUID

the id of the work queue to delete

required

Raises:

Type Description ObjectNotFound

If request returns 404

RequestError

If requests fails

Source code in prefect/client/orchestration.py
async def delete_work_queue_by_id(\n    self,\n    id: UUID,\n):\n    \"\"\"\n    Delete a work queue by its ID.\n\n    Args:\n        id: the id of the work queue to delete\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If requests fails\n    \"\"\"\n    try:\n        await self._client.delete(\n            f\"/work_queues/{id}\",\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_block_type","title":"create_block_type async","text":"

Create a block type in the Prefect API.

Source code in prefect/client/orchestration.py
async def create_block_type(self, block_type: BlockTypeCreate) -> BlockType:\n    \"\"\"\n    Create a block type in the Prefect API.\n    \"\"\"\n    try:\n        response = await self._client.post(\n            \"/block_types/\",\n            json=block_type.dict(\n                json_compatible=True, exclude_unset=True, exclude={\"id\"}\n            ),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_409_CONFLICT:\n            raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n        else:\n            raise\n    return BlockType.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_block_schema","title":"create_block_schema async","text":"

Create a block schema in the Prefect API.

Source code in prefect/client/orchestration.py
async def create_block_schema(self, block_schema: BlockSchemaCreate) -> BlockSchema:\n    \"\"\"\n    Create a block schema in the Prefect API.\n    \"\"\"\n    try:\n        response = await self._client.post(\n            \"/block_schemas/\",\n            json=block_schema.dict(\n                json_compatible=True,\n                exclude_unset=True,\n                exclude={\"id\", \"block_type\", \"checksum\"},\n            ),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_409_CONFLICT:\n            raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n        else:\n            raise\n    return BlockSchema.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_block_document","title":"create_block_document async","text":"

Create a block document in the Prefect API. This data is used to configure a corresponding Block.

Parameters:

Name Type Description Default include_secrets bool

whether to include secret values on the stored Block, corresponding to Pydantic's SecretStr and SecretBytes fields. Note that Blocks may not work as expected if this is set to False.

True Source code in prefect/client/orchestration.py
async def create_block_document(\n    self,\n    block_document: Union[BlockDocument, BlockDocumentCreate],\n    include_secrets: bool = True,\n) -> BlockDocument:\n    \"\"\"\n    Create a block document in the Prefect API. This data is used to configure a\n    corresponding Block.\n\n    Args:\n        include_secrets (bool): whether to include secret values\n            on the stored Block, corresponding to Pydantic's `SecretStr` and\n            `SecretBytes` fields. Note Blocks may not work as expected if\n            this is set to `False`.\n    \"\"\"\n    if isinstance(block_document, BlockDocument):\n        block_document = BlockDocumentCreate.parse_obj(\n            block_document.dict(\n                json_compatible=True,\n                include_secrets=include_secrets,\n                exclude_unset=True,\n                exclude={\"id\", \"block_schema\", \"block_type\"},\n            ),\n        )\n\n    try:\n        response = await self._client.post(\n            \"/block_documents/\",\n            json=block_document.dict(\n                json_compatible=True,\n                include_secrets=include_secrets,\n                exclude_unset=True,\n                exclude={\"id\", \"block_schema\", \"block_type\"},\n            ),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_409_CONFLICT:\n            raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n        else:\n            raise\n    return BlockDocument.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_block_document","title":"update_block_document async","text":"

Update a block document in the Prefect API.

Source code in prefect/client/orchestration.py
async def update_block_document(\n    self,\n    block_document_id: UUID,\n    block_document: BlockDocumentUpdate,\n):\n    \"\"\"\n    Update a block document in the Prefect API.\n    \"\"\"\n    try:\n        await self._client.patch(\n            f\"/block_documents/{block_document_id}\",\n            json=block_document.dict(\n                json_compatible=True,\n                exclude_unset=True,\n                include={\"data\", \"merge_existing_data\", \"block_schema_id\"},\n                include_secrets=True,\n            ),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_block_document","title":"delete_block_document async","text":"

Delete a block document.

Source code in prefect/client/orchestration.py
async def delete_block_document(self, block_document_id: UUID):\n    \"\"\"\n    Delete a block document.\n    \"\"\"\n    try:\n        await self._client.delete(f\"/block_documents/{block_document_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_type_by_slug","title":"read_block_type_by_slug async","text":"

Read a block type by its slug.

Source code in prefect/client/orchestration.py
async def read_block_type_by_slug(self, slug: str) -> BlockType:\n    \"\"\"\n    Read a block type by its slug.\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/block_types/slug/{slug}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return BlockType.parse_obj(response.json())\n
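A minimal usage sketch (not from the Prefect source; assumes a reachable Prefect API, an async context, and a placeholder slug):

from prefect import get_client

async def show_block_type():
    async with get_client() as client:
        # "secret" is a placeholder; use any registered block type slug
        block_type = await client.read_block_type_by_slug("secret")
        print(block_type.name, block_type.id)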
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_schema_by_checksum","title":"read_block_schema_by_checksum async","text":"

Look up a block schema by its checksum

Source code in prefect/client/orchestration.py
async def read_block_schema_by_checksum(\n    self, checksum: str, version: Optional[str] = None\n) -> BlockSchema:\n    \"\"\"\n    Look up a block schema checksum\n    \"\"\"\n    try:\n        url = f\"/block_schemas/checksum/{checksum}\"\n        if version is not None:\n            url = f\"{url}?version={version}\"\n        response = await self._client.get(url)\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return BlockSchema.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_block_type","title":"update_block_type async","text":"

Update a block type in the Prefect API.

Source code in prefect/client/orchestration.py
async def update_block_type(self, block_type_id: UUID, block_type: BlockTypeUpdate):\n    \"\"\"\n    Update a block document in the Prefect API.\n    \"\"\"\n    try:\n        await self._client.patch(\n            f\"/block_types/{block_type_id}\",\n            json=block_type.dict(\n                json_compatible=True,\n                exclude_unset=True,\n                include=BlockTypeUpdate.updatable_fields(),\n                include_secrets=True,\n            ),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_block_type","title":"delete_block_type async","text":"

Delete a block type.

Source code in prefect/client/orchestration.py
async def delete_block_type(self, block_type_id: UUID):\n    \"\"\"\n    Delete a block type.\n    \"\"\"\n    try:\n        await self._client.delete(f\"/block_types/{block_type_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        elif (\n            e.response.status_code == status.HTTP_403_FORBIDDEN\n            and e.response.json()[\"detail\"]\n            == \"protected block types cannot be deleted.\"\n        ):\n            raise prefect.exceptions.ProtectedBlockError(\n                \"Protected block types cannot be deleted.\"\n            ) from e\n        else:\n            raise\n
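A hedged sketch of handling the protected-type case described above (block_type_id is a placeholder UUID):

import prefect.exceptions
from prefect import get_client

async def remove_block_type(block_type_id):
    async with get_client() as client:
        try:
            await client.delete_block_type(block_type_id)
        except prefect.exceptions.ProtectedBlockError:
            # built-in block types are protected and cannot be deleted
            print("Skipping protected block type")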
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_types","title":"read_block_types async","text":"

Read all block types. Raises: httpx.RequestError: if the block types were not found

Returns:

Type Description List[BlockType]

List of BlockTypes.

Source code in prefect/client/orchestration.py
async def read_block_types(self) -> List[BlockType]:\n    \"\"\"\n    Read all block types\n    Raises:\n        httpx.RequestError: if the block types were not found\n\n    Returns:\n        List of BlockTypes.\n    \"\"\"\n    response = await self._client.post(\"/block_types/filter\", json={})\n    return pydantic.parse_obj_as(List[BlockType], response.json())\n
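For illustration, a short sketch that lists all registered block type slugs (assumes a reachable Prefect API):

from prefect import get_client

async def list_block_type_slugs():
    async with get_client() as client:
        block_types = await client.read_block_types()
        return [bt.slug for bt in block_types]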
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_schemas","title":"read_block_schemas async","text":"

Read all block schemas. Raises: httpx.RequestError: if a valid block schema was not found

Returns:

Type Description List[BlockSchema]

A list of BlockSchemas.

Source code in prefect/client/orchestration.py
async def read_block_schemas(self) -> List[BlockSchema]:\n    \"\"\"\n    Read all block schemas\n    Raises:\n        httpx.RequestError: if a valid block schema was not found\n\n    Returns:\n        A BlockSchema.\n    \"\"\"\n    response = await self._client.post(\"/block_schemas/filter\", json={})\n    return pydantic.parse_obj_as(List[BlockSchema], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.get_most_recent_block_schema_for_block_type","title":"get_most_recent_block_schema_for_block_type async","text":"

Fetches the most recent block schema for a specified block type ID.

Parameters:

Name Type Description Default block_type_id UUID

The ID of the block type.

required

Raises:

Type Description RequestError

If the request fails for any reason.

Returns:

Type Description Optional[BlockSchema]

The most recent block schema or None.

Source code in prefect/client/orchestration.py
async def get_most_recent_block_schema_for_block_type(\n    self,\n    block_type_id: UUID,\n) -> Optional[BlockSchema]:\n    \"\"\"\n    Fetches the most recent block schema for a specified block type ID.\n\n    Args:\n        block_type_id: The ID of the block type.\n\n    Raises:\n        httpx.RequestError: If the request fails for any reason.\n\n    Returns:\n        The most recent block schema or None.\n    \"\"\"\n    try:\n        response = await self._client.post(\n            \"/block_schemas/filter\",\n            json={\n                \"block_schemas\": {\"block_type_id\": {\"any_\": [str(block_type_id)]}},\n                \"limit\": 1,\n            },\n        )\n    except httpx.HTTPStatusError:\n        raise\n    return BlockSchema.parse_obj(response.json()[0]) if response.json() else None\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_document","title":"read_block_document async","text":"

Read the block document with the specified ID.

Parameters:

Name Type Description Default block_document_id UUID

the block document id

required include_secrets bool

whether to include secret values on the Block, corresponding to Pydantic's SecretStr and SecretBytes fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is False.

True

Raises:

Type Description RequestError

if the block document was not found for any reason

Returns:

Type Description

A block document or None.

Source code in prefect/client/orchestration.py
async def read_block_document(\n    self,\n    block_document_id: UUID,\n    include_secrets: bool = True,\n):\n    \"\"\"\n    Read the block document with the specified ID.\n\n    Args:\n        block_document_id: the block document id\n        include_secrets (bool): whether to include secret values\n            on the Block, corresponding to Pydantic's `SecretStr` and\n            `SecretBytes` fields. These fields are automatically obfuscated\n            by Pydantic, but users can additionally choose not to receive\n            their values from the API. Note that any business logic on the\n            Block may not work if this is `False`.\n\n    Raises:\n        httpx.RequestError: if the block document was not found for any reason\n\n    Returns:\n        A block document or None.\n    \"\"\"\n    assert (\n        block_document_id is not None\n    ), \"Unexpected ID on block document. Was it persisted?\"\n    try:\n        response = await self._client.get(\n            f\"/block_documents/{block_document_id}\",\n            params=dict(include_secrets=include_secrets),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return BlockDocument.parse_obj(response.json())\n
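A minimal sketch showing the include_secrets toggle discussed above (block_document_id is a placeholder):

from prefect import get_client

async def inspect_block(block_document_id):
    async with get_client() as client:
        # include_secrets=False keeps SecretStr/SecretBytes values obfuscated
        doc = await client.read_block_document(block_document_id, include_secrets=False)
        return doc.data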
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_document_by_name","title":"read_block_document_by_name async","text":"

Read the block document with the specified name that corresponds to a specific block type name.

Parameters:

Name Type Description Default name str

The block document name.

required block_type_slug str

The block type slug.

required include_secrets bool

whether to include secret values on the Block, corresponding to Pydantic's SecretStr and SecretBytes fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is False.

True

Raises:

Type Description RequestError

if the block document was not found for any reason

Returns:

Type Description BlockDocument

A block document or None.

Source code in prefect/client/orchestration.py
async def read_block_document_by_name(\n    self,\n    name: str,\n    block_type_slug: str,\n    include_secrets: bool = True,\n) -> BlockDocument:\n    \"\"\"\n    Read the block document with the specified name that corresponds to a\n    specific block type name.\n\n    Args:\n        name: The block document name.\n        block_type_slug: The block type slug.\n        include_secrets (bool): whether to include secret values\n            on the Block, corresponding to Pydantic's `SecretStr` and\n            `SecretBytes` fields. These fields are automatically obfuscated\n            by Pydantic, but users can additionally choose not to receive\n            their values from the API. Note that any business logic on the\n            Block may not work if this is `False`.\n\n    Raises:\n        httpx.RequestError: if the block document was not found for any reason\n\n    Returns:\n        A block document or None.\n    \"\"\"\n    try:\n        response = await self._client.get(\n            f\"/block_types/slug/{block_type_slug}/block_documents/name/{name}\",\n            params=dict(include_secrets=include_secrets),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return BlockDocument.parse_obj(response.json())\n
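A usage sketch with placeholder name and slug values (not from the source):

from prefect import get_client
import prefect.exceptions

async def load_block_document():
    async with get_client() as client:
        try:
            # "my-bucket" and "s3-bucket" are placeholders for a document name and block type slug
            return await client.read_block_document_by_name(
                name="my-bucket", block_type_slug="s3-bucket"
            )
        except prefect.exceptions.ObjectNotFound:
            return None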
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_documents","title":"read_block_documents async","text":"

Read block documents

Parameters:

Name Type Description Default block_schema_type Optional[str]

an optional block schema type

None offset Optional[int]

an offset

None limit Optional[int]

the number of blocks to return

None include_secrets bool

whether to include secret values on the Block, corresponding to Pydantic's SecretStr and SecretBytes fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is False.

True

Returns:

Type Description

A list of block documents

Source code in prefect/client/orchestration.py
async def read_block_documents(\n    self,\n    block_schema_type: Optional[str] = None,\n    offset: Optional[int] = None,\n    limit: Optional[int] = None,\n    include_secrets: bool = True,\n):\n    \"\"\"\n    Read block documents\n\n    Args:\n        block_schema_type: an optional block schema type\n        offset: an offset\n        limit: the number of blocks to return\n        include_secrets (bool): whether to include secret values\n            on the Block, corresponding to Pydantic's `SecretStr` and\n            `SecretBytes` fields. These fields are automatically obfuscated\n            by Pydantic, but users can additionally choose not to receive\n            their values from the API. Note that any business logic on the\n            Block may not work if this is `False`.\n\n    Returns:\n        A list of block documents\n    \"\"\"\n    response = await self._client.post(\n        \"/block_documents/filter\",\n        json=dict(\n            block_schema_type=block_schema_type,\n            offset=offset,\n            limit=limit,\n            include_secrets=include_secrets,\n        ),\n    )\n    return pydantic.parse_obj_as(List[BlockDocument], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_documents_by_type","title":"read_block_documents_by_type async","text":"

Retrieve block documents by block type slug.

Parameters:

Name Type Description Default block_type_slug str

The block type slug.

required offset Optional[int]

an offset

None limit Optional[int]

the number of blocks to return

None include_secrets bool

whether to include secret values

True

Returns:

Type Description List[BlockDocument]

A list of block documents

Source code in prefect/client/orchestration.py
async def read_block_documents_by_type(\n    self,\n    block_type_slug: str,\n    offset: Optional[int] = None,\n    limit: Optional[int] = None,\n    include_secrets: bool = True,\n) -> List[BlockDocument]:\n    \"\"\"Retrieve block documents by block type slug.\n\n    Args:\n        block_type_slug: The block type slug.\n        offset: an offset\n        limit: the number of blocks to return\n        include_secrets: whether to include secret values\n\n    Returns:\n        A list of block documents\n    \"\"\"\n    response = await self._client.get(\n        f\"/block_types/slug/{block_type_slug}/block_documents\",\n        params=dict(\n            offset=offset,\n            limit=limit,\n            include_secrets=include_secrets,\n        ),\n    )\n\n    return pydantic.parse_obj_as(List[BlockDocument], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_deployment","title":"create_deployment async","text":"

Create a deployment.

Parameters:

Name Type Description Default flow_id UUID

the flow ID to create a deployment for

required name str

the name of the deployment

required version str

an optional version string for the deployment

None schedule SCHEDULE_TYPES

an optional schedule to apply to the deployment

None tags List[str]

an optional list of tags to apply to the deployment

None storage_document_id UUID

a reference to the storage block document used for the deployed flow

None infrastructure_document_id UUID

a reference to the infrastructure block document to use for this deployment

None job_variables Optional[Dict[str, Any]]

A dictionary of dot-delimited infrastructure overrides that will be applied at runtime; for example env.CONFIG_KEY=config_value or namespace='prefect'. This argument was previously named infra_overrides. Both arguments are supported for backwards compatibility.

None

Raises:

Type Description RequestError

if the deployment was not created for any reason

Returns:

Type Description UUID

the ID of the deployment in the backend

Source code in prefect/client/orchestration.py
async def create_deployment(\n    self,\n    flow_id: UUID,\n    name: str,\n    version: str = None,\n    schedule: SCHEDULE_TYPES = None,\n    schedules: List[DeploymentScheduleCreate] = None,\n    parameters: Optional[Dict[str, Any]] = None,\n    description: str = None,\n    work_queue_name: str = None,\n    work_pool_name: str = None,\n    tags: List[str] = None,\n    storage_document_id: UUID = None,\n    manifest_path: str = None,\n    path: str = None,\n    entrypoint: str = None,\n    infrastructure_document_id: UUID = None,\n    infra_overrides: Optional[Dict[str, Any]] = None,  # for backwards compat\n    parameter_openapi_schema: Optional[Dict[str, Any]] = None,\n    is_schedule_active: Optional[bool] = None,\n    paused: Optional[bool] = None,\n    pull_steps: Optional[List[dict]] = None,\n    enforce_parameter_schema: Optional[bool] = None,\n    job_variables: Optional[Dict[str, Any]] = None,\n) -> UUID:\n    \"\"\"\n    Create a deployment.\n\n    Args:\n        flow_id: the flow ID to create a deployment for\n        name: the name of the deployment\n        version: an optional version string for the deployment\n        schedule: an optional schedule to apply to the deployment\n        tags: an optional list of tags to apply to the deployment\n        storage_document_id: an reference to the storage block document\n            used for the deployed flow\n        infrastructure_document_id: an reference to the infrastructure block document\n            to use for this deployment\n        job_variables: A dictionary of dot delimited infrastructure overrides that\n            will be applied at runtime; for example `env.CONFIG_KEY=config_value` or\n            `namespace='prefect'`. This argument was previously named `infra_overrides`.\n            Both arguments are supported for backwards compatibility.\n\n    Raises:\n        httpx.RequestError: if the deployment was not created for any reason\n\n    Returns:\n        the ID of the deployment in the backend\n    \"\"\"\n    jv = handle_deprecated_infra_overrides_parameter(job_variables, infra_overrides)\n\n    deployment_create = DeploymentCreate(\n        flow_id=flow_id,\n        name=name,\n        version=version,\n        parameters=dict(parameters or {}),\n        tags=list(tags or []),\n        work_queue_name=work_queue_name,\n        description=description,\n        storage_document_id=storage_document_id,\n        path=path,\n        entrypoint=entrypoint,\n        manifest_path=manifest_path,  # for backwards compat\n        infrastructure_document_id=infrastructure_document_id,\n        job_variables=jv,\n        parameter_openapi_schema=parameter_openapi_schema,\n        is_schedule_active=is_schedule_active,\n        paused=paused,\n        schedule=schedule,\n        schedules=schedules or [],\n        pull_steps=pull_steps,\n        enforce_parameter_schema=enforce_parameter_schema,\n    )\n\n    if work_pool_name is not None:\n        deployment_create.work_pool_name = work_pool_name\n\n    # Exclude newer fields that are not set to avoid compatibility issues\n    exclude = {\n        field\n        for field in [\"work_pool_name\", \"work_queue_name\"]\n        if field not in deployment_create.__fields_set__\n    }\n\n    if deployment_create.is_schedule_active is None:\n        exclude.add(\"is_schedule_active\")\n\n    if deployment_create.paused is None:\n        exclude.add(\"paused\")\n\n    if deployment_create.pull_steps is None:\n        exclude.add(\"pull_steps\")\n\n    if 
deployment_create.enforce_parameter_schema is None:\n        exclude.add(\"enforce_parameter_schema\")\n\n    json = deployment_create.dict(json_compatible=True, exclude=exclude)\n    response = await self._client.post(\n        \"/deployments/\",\n        json=json,\n    )\n    deployment_id = response.json().get(\"id\")\n    if not deployment_id:\n        raise httpx.RequestError(f\"Malformed response: {response}\")\n\n    return UUID(deployment_id)\n
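A hedged sketch of a simple call; the flow ID must already exist, and the deployment name, tag, and work pool name are placeholders:

from prefect import get_client

async def register_deployment(flow_id):
    async with get_client() as client:
        deployment_id = await client.create_deployment(
            flow_id=flow_id,
            name="nightly",                       # placeholder deployment name
            work_pool_name="default-agent-pool",  # placeholder work pool
            tags=["example"],
        )
        return deployment_id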
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_deployment","title":"read_deployment async","text":"

Query the Prefect API for a deployment by id.

Parameters:

Name Type Description Default deployment_id UUID

the deployment ID of interest

required

Returns:

Type Description DeploymentResponse

a Deployment model representation of the deployment

Source code in prefect/client/orchestration.py
async def read_deployment(\n    self,\n    deployment_id: UUID,\n) -> DeploymentResponse:\n    \"\"\"\n    Query the Prefect API for a deployment by id.\n\n    Args:\n        deployment_id: the deployment ID of interest\n\n    Returns:\n        a [Deployment model][prefect.client.schemas.objects.Deployment] representation of the deployment\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/deployments/{deployment_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return DeploymentResponse.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_deployment_by_name","title":"read_deployment_by_name async","text":"

Query the Prefect API for a deployment by name.

Parameters:

Name Type Description Default name str

A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME> required

Raises:

Type Description ObjectNotFound

If request returns 404

RequestError

If request fails

Returns:

Type Description DeploymentResponse

a Deployment model representation of the deployment

Source code in prefect/client/orchestration.py
async def read_deployment_by_name(\n    self,\n    name: str,\n) -> DeploymentResponse:\n    \"\"\"\n    Query the Prefect API for a deployment by name.\n\n    Args:\n        name: A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If request fails\n\n    Returns:\n        a Deployment model representation of the deployment\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/deployments/name/{name}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n\n    return DeploymentResponse.parse_obj(response.json())\n
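A minimal sketch using the <FLOW_NAME>/<DEPLOYMENT_NAME> form ("etl/nightly" is a placeholder):

from prefect import get_client
import prefect.exceptions

async def lookup_deployment():
    async with get_client() as client:
        try:
            return await client.read_deployment_by_name("etl/nightly")
        except prefect.exceptions.ObjectNotFound:
            return None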
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_deployments","title":"read_deployments async","text":"

Query the Prefect API for deployments. Only deployments matching all the provided criteria will be returned.

Parameters:

Name Type Description Default flow_filter FlowFilter

filter criteria for flows

None flow_run_filter FlowRunFilter

filter criteria for flow runs

None task_run_filter TaskRunFilter

filter criteria for task runs

None deployment_filter DeploymentFilter

filter criteria for deployments

None work_pool_filter WorkPoolFilter

filter criteria for work pools

None work_queue_filter WorkQueueFilter

filter criteria for work pool queues

None limit int

a limit for the deployment query

None offset int

an offset for the deployment query

0

Returns:

Type Description List[DeploymentResponse]

a list of Deployment model representations of the deployments

Source code in prefect/client/orchestration.py
async def read_deployments(\n    self,\n    *,\n    flow_filter: FlowFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    deployment_filter: DeploymentFilter = None,\n    work_pool_filter: WorkPoolFilter = None,\n    work_queue_filter: WorkQueueFilter = None,\n    limit: int = None,\n    sort: DeploymentSort = None,\n    offset: int = 0,\n) -> List[DeploymentResponse]:\n    \"\"\"\n    Query the Prefect API for deployments. Only deployments matching all\n    the provided criteria will be returned.\n\n    Args:\n        flow_filter: filter criteria for flows\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        deployment_filter: filter criteria for deployments\n        work_pool_filter: filter criteria for work pools\n        work_queue_filter: filter criteria for work pool queues\n        limit: a limit for the deployment query\n        offset: an offset for the deployment query\n\n    Returns:\n        a list of Deployment model representations\n            of the deployments\n    \"\"\"\n    body = {\n        \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n            if flow_run_filter\n            else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"deployments\": (\n            deployment_filter.dict(json_compatible=True)\n            if deployment_filter\n            else None\n        ),\n        \"work_pools\": (\n            work_pool_filter.dict(json_compatible=True)\n            if work_pool_filter\n            else None\n        ),\n        \"work_pool_queues\": (\n            work_queue_filter.dict(json_compatible=True)\n            if work_queue_filter\n            else None\n        ),\n        \"limit\": limit,\n        \"offset\": offset,\n        \"sort\": sort,\n    }\n\n    response = await self._client.post(\"/deployments/filter\", json=body)\n    return pydantic.parse_obj_as(List[DeploymentResponse], response.json())\n
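An illustrative filter query (filter classes assumed to come from prefect.client.schemas.filters; the flow name "etl" is a placeholder):

from prefect import get_client
from prefect.client.schemas.filters import FlowFilter, FlowFilterName

async def deployments_for_flow():
    async with get_client() as client:
        # only deployments whose parent flow is named "etl"
        return await client.read_deployments(
            flow_filter=FlowFilter(name=FlowFilterName(any_=["etl"])),
            limit=10,
        )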
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_deployment","title":"delete_deployment async","text":"

Delete deployment by id.

Parameters:

Name Type Description Default deployment_id UUID

The deployment id of interest.

required Source code in prefect/client/orchestration.py
async def delete_deployment(\n    self,\n    deployment_id: UUID,\n):\n    \"\"\"\n    Delete deployment by id.\n\n    Args:\n        deployment_id: The deployment id of interest.\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If requests fails\n    \"\"\"\n    try:\n        await self._client.delete(f\"/deployments/{deployment_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_deployment_schedules","title":"create_deployment_schedules async","text":"

Create deployment schedules.

Parameters:

Name Type Description Default deployment_id UUID

the deployment ID

required schedules List[Tuple[SCHEDULE_TYPES, bool]]

a list of tuples containing the schedule to create and whether or not it should be active.

required

Raises:

Type Description RequestError

if the schedules were not created for any reason

Returns:

Type Description List[DeploymentSchedule]

the list of schedules created in the backend

Source code in prefect/client/orchestration.py
async def create_deployment_schedules(\n    self,\n    deployment_id: UUID,\n    schedules: List[Tuple[SCHEDULE_TYPES, bool]],\n) -> List[DeploymentSchedule]:\n    \"\"\"\n    Create deployment schedules.\n\n    Args:\n        deployment_id: the deployment ID\n        schedules: a list of tuples containing the schedule to create\n                   and whether or not it should be active.\n\n    Raises:\n        httpx.RequestError: if the schedules were not created for any reason\n\n    Returns:\n        the list of schedules created in the backend\n    \"\"\"\n    deployment_schedule_create = [\n        DeploymentScheduleCreate(schedule=schedule[0], active=schedule[1])\n        for schedule in schedules\n    ]\n\n    json = [\n        deployment_schedule_create.dict(json_compatible=True)\n        for deployment_schedule_create in deployment_schedule_create\n    ]\n    response = await self._client.post(\n        f\"/deployments/{deployment_id}/schedules\", json=json\n    )\n    return pydantic.parse_obj_as(List[DeploymentSchedule], response.json())\n
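A sketch passing one (schedule, active) tuple; the cron expression is a placeholder and CronSchedule is assumed to come from prefect.client.schemas.schedules:

from prefect import get_client
from prefect.client.schemas.schedules import CronSchedule

async def add_daily_schedule(deployment_id):
    async with get_client() as client:
        return await client.create_deployment_schedules(
            deployment_id,
            schedules=[(CronSchedule(cron="0 6 * * *"), True)],  # daily at 06:00, active
        )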
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_deployment_schedules","title":"read_deployment_schedules async","text":"

Query the Prefect API for a deployment's schedules.

Parameters:

Name Type Description Default deployment_id UUID

the deployment ID

required

Returns:

Type Description List[DeploymentSchedule]

a list of DeploymentSchedule model representations of the deployment schedules

Source code in prefect/client/orchestration.py
async def read_deployment_schedules(\n    self,\n    deployment_id: UUID,\n) -> List[DeploymentSchedule]:\n    \"\"\"\n    Query the Prefect API for a deployment's schedules.\n\n    Args:\n        deployment_id: the deployment ID\n\n    Returns:\n        a list of DeploymentSchedule model representations of the deployment schedules\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/deployments/{deployment_id}/schedules\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return pydantic.parse_obj_as(List[DeploymentSchedule], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_deployment_schedule","title":"update_deployment_schedule async","text":"

Update a deployment schedule by ID.

Parameters:

Name Type Description Default deployment_id UUID

the deployment ID

required schedule_id UUID

the deployment schedule ID of interest

required active Optional[bool]

whether or not the schedule should be active

None schedule Optional[SCHEDULE_TYPES]

the cron, rrule, or interval schedule this deployment schedule should use

None Source code in prefect/client/orchestration.py
async def update_deployment_schedule(\n    self,\n    deployment_id: UUID,\n    schedule_id: UUID,\n    active: Optional[bool] = None,\n    schedule: Optional[SCHEDULE_TYPES] = None,\n):\n    \"\"\"\n    Update a deployment schedule by ID.\n\n    Args:\n        deployment_id: the deployment ID\n        schedule_id: the deployment schedule ID of interest\n        active: whether or not the schedule should be active\n        schedule: the cron, rrule, or interval schedule this deployment schedule should use\n    \"\"\"\n    kwargs = {}\n    if active is not None:\n        kwargs[\"active\"] = active\n    elif schedule is not None:\n        kwargs[\"schedule\"] = schedule\n\n    deployment_schedule_update = DeploymentScheduleUpdate(**kwargs)\n    json = deployment_schedule_update.dict(json_compatible=True, exclude_unset=True)\n\n    try:\n        await self._client.patch(\n            f\"/deployments/{deployment_id}/schedules/{schedule_id}\", json=json\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_deployment_schedule","title":"delete_deployment_schedule async","text":"

Delete a deployment schedule.

Parameters:

Name Type Description Default deployment_id UUID

the deployment ID

required schedule_id UUID

the ID of the deployment schedule to delete.

required

Raises:

Type Description RequestError

if the schedules were not deleted for any reason

Source code in prefect/client/orchestration.py
async def delete_deployment_schedule(\n    self,\n    deployment_id: UUID,\n    schedule_id: UUID,\n) -> None:\n    \"\"\"\n    Delete a deployment schedule.\n\n    Args:\n        deployment_id: the deployment ID\n        schedule_id: the ID of the deployment schedule to delete.\n\n    Raises:\n        httpx.RequestError: if the schedules were not deleted for any reason\n    \"\"\"\n    try:\n        await self._client.delete(\n            f\"/deployments/{deployment_id}/schedules/{schedule_id}\"\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_run","title":"read_flow_run async","text":"

Query the Prefect API for a flow run by id.

Parameters:

Name Type Description Default flow_run_id UUID

the flow run ID of interest

required

Returns:

Type Description FlowRun

a Flow Run model representation of the flow run

Source code in prefect/client/orchestration.py
async def read_flow_run(self, flow_run_id: UUID) -> FlowRun:\n    \"\"\"\n    Query the Prefect API for a flow run by id.\n\n    Args:\n        flow_run_id: the flow run ID of interest\n\n    Returns:\n        a Flow Run model representation of the flow run\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/flow_runs/{flow_run_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return FlowRun.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.resume_flow_run","title":"resume_flow_run async","text":"

Resumes a paused flow run.

Parameters:

Name Type Description Default flow_run_id UUID

the flow run ID of interest

required run_input Optional[Dict]

the input to resume the flow run with

None

Returns:

Type Description OrchestrationResult

an OrchestrationResult model representation of state orchestration output

Source code in prefect/client/orchestration.py
async def resume_flow_run(\n    self, flow_run_id: UUID, run_input: Optional[Dict] = None\n) -> OrchestrationResult:\n    \"\"\"\n    Resumes a paused flow run.\n\n    Args:\n        flow_run_id: the flow run ID of interest\n        run_input: the input to resume the flow run with\n\n    Returns:\n        an OrchestrationResult model representation of state orchestration output\n    \"\"\"\n    try:\n        response = await self._client.post(\n            f\"/flow_runs/{flow_run_id}/resume\", json={\"run_input\": run_input}\n        )\n    except httpx.HTTPStatusError:\n        raise\n\n    return OrchestrationResult.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_runs","title":"read_flow_runs async","text":"

Query the Prefect API for flow runs. Only flow runs matching all criteria will be returned.

Parameters:

Name Type Description Default flow_filter FlowFilter

filter criteria for flows

None flow_run_filter FlowRunFilter

filter criteria for flow runs

None task_run_filter TaskRunFilter

filter criteria for task runs

None deployment_filter DeploymentFilter

filter criteria for deployments

None work_pool_filter WorkPoolFilter

filter criteria for work pools

None work_queue_filter WorkQueueFilter

filter criteria for work pool queues

None sort FlowRunSort

sort criteria for the flow runs

None limit int

limit for the flow run query

None offset int

offset for the flow run query

0

Returns:

Type Description List[FlowRun]

a list of Flow Run model representations of the flow runs

Source code in prefect/client/orchestration.py
async def read_flow_runs(\n    self,\n    *,\n    flow_filter: FlowFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    deployment_filter: DeploymentFilter = None,\n    work_pool_filter: WorkPoolFilter = None,\n    work_queue_filter: WorkQueueFilter = None,\n    sort: FlowRunSort = None,\n    limit: int = None,\n    offset: int = 0,\n) -> List[FlowRun]:\n    \"\"\"\n    Query the Prefect API for flow runs. Only flow runs matching all criteria will\n    be returned.\n\n    Args:\n        flow_filter: filter criteria for flows\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        deployment_filter: filter criteria for deployments\n        work_pool_filter: filter criteria for work pools\n        work_queue_filter: filter criteria for work pool queues\n        sort: sort criteria for the flow runs\n        limit: limit for the flow run query\n        offset: offset for the flow run query\n\n    Returns:\n        a list of Flow Run model representations\n            of the flow runs\n    \"\"\"\n    body = {\n        \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n            if flow_run_filter\n            else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"deployments\": (\n            deployment_filter.dict(json_compatible=True)\n            if deployment_filter\n            else None\n        ),\n        \"work_pools\": (\n            work_pool_filter.dict(json_compatible=True)\n            if work_pool_filter\n            else None\n        ),\n        \"work_pool_queues\": (\n            work_queue_filter.dict(json_compatible=True)\n            if work_queue_filter\n            else None\n        ),\n        \"sort\": sort,\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n\n    response = await self._client.post(\"/flow_runs/filter\", json=body)\n    return pydantic.parse_obj_as(List[FlowRun], response.json())\n
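A hedged sketch that filters for failed flow runs (filter and state classes assumed from prefect.client.schemas):

from prefect import get_client
from prefect.client.schemas.filters import (
    FlowRunFilter,
    FlowRunFilterState,
    FlowRunFilterStateType,
)
from prefect.client.schemas.objects import StateType

async def failed_flow_runs():
    async with get_client() as client:
        # restrict the query to runs whose current state type is FAILED
        state_filter = FlowRunFilterState(
            type=FlowRunFilterStateType(any_=[StateType.FAILED])
        )
        return await client.read_flow_runs(
            flow_run_filter=FlowRunFilter(state=state_filter),
            limit=20,
        )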
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.set_flow_run_state","title":"set_flow_run_state async","text":"

Set the state of a flow run.

Parameters:

Name Type Description Default flow_run_id UUID

the id of the flow run

required state State

the state to set

required force bool

if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state

False

Returns:

Type Description OrchestrationResult

an OrchestrationResult model representation of state orchestration output

Source code in prefect/client/orchestration.py
async def set_flow_run_state(\n    self,\n    flow_run_id: UUID,\n    state: \"prefect.states.State\",\n    force: bool = False,\n) -> OrchestrationResult:\n    \"\"\"\n    Set the state of a flow run.\n\n    Args:\n        flow_run_id: the id of the flow run\n        state: the state to set\n        force: if True, disregard orchestration logic when setting the state,\n            forcing the Prefect API to accept the state\n\n    Returns:\n        an OrchestrationResult model representation of state orchestration output\n    \"\"\"\n    state_create = state.to_state_create()\n    state_create.state_details.flow_run_id = flow_run_id\n    state_create.state_details.transition_id = uuid4()\n    try:\n        response = await self._client.post(\n            f\"/flow_runs/{flow_run_id}/set_state\",\n            json=dict(state=state_create.dict(json_compatible=True), force=force),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n\n    return OrchestrationResult.parse_obj(response.json())\n
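A minimal sketch cancelling a run through orchestration (flow_run_id is a placeholder; Cancelled comes from prefect.states):

from prefect import get_client
from prefect.states import Cancelled

async def cancel_flow_run(flow_run_id):
    async with get_client() as client:
        # force=False lets orchestration rules reject an invalid transition
        result = await client.set_flow_run_state(flow_run_id, state=Cancelled(), force=False)
        return result.status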
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_run_states","title":"read_flow_run_states async","text":"

Query for the states of a flow run

Parameters:

Name Type Description Default flow_run_id UUID

the id of the flow run

required

Returns:

Type Description List[State]

a list of State model representations of the flow run states

Source code in prefect/client/orchestration.py
async def read_flow_run_states(\n    self, flow_run_id: UUID\n) -> List[prefect.states.State]:\n    \"\"\"\n    Query for the states of a flow run\n\n    Args:\n        flow_run_id: the id of the flow run\n\n    Returns:\n        a list of State model representations\n            of the flow run states\n    \"\"\"\n    response = await self._client.get(\n        \"/flow_run_states/\", params=dict(flow_run_id=str(flow_run_id))\n    )\n    return pydantic.parse_obj_as(List[prefect.states.State], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_task_run","title":"create_task_run async","text":"

Create a task run

Parameters:

Name Type Description Default task Task[P, R]

The Task to run

required flow_run_id Optional[UUID]

The flow run id with which to associate the task run

required dynamic_key str

A key unique to this particular run of a Task within the flow

required name Optional[str]

An optional name for the task run

None extra_tags Optional[Iterable[str]]

an optional list of extra tags to apply to the task run in addition to task.tags

None state Optional[State[R]]

The initial state for the run. If not provided, defaults to Pending for now. Should always be a Scheduled type.

None task_inputs Optional[Dict[str, List[Union[TaskRunResult, Parameter, Constant]]]]

the set of inputs passed to the task

None

Returns:

Type Description TaskRun

The created task run.

Source code in prefect/client/orchestration.py
async def create_task_run(\n    self,\n    task: \"TaskObject[P, R]\",\n    flow_run_id: Optional[UUID],\n    dynamic_key: str,\n    name: Optional[str] = None,\n    extra_tags: Optional[Iterable[str]] = None,\n    state: Optional[prefect.states.State[R]] = None,\n    task_inputs: Optional[\n        Dict[\n            str,\n            List[\n                Union[\n                    TaskRunResult,\n                    Parameter,\n                    Constant,\n                ]\n            ],\n        ]\n    ] = None,\n) -> TaskRun:\n    \"\"\"\n    Create a task run\n\n    Args:\n        task: The Task to run\n        flow_run_id: The flow run id with which to associate the task run\n        dynamic_key: A key unique to this particular run of a Task within the flow\n        name: An optional name for the task run\n        extra_tags: an optional list of extra tags to apply to the task run in\n            addition to `task.tags`\n        state: The initial state for the run. If not provided, defaults to\n            `Pending` for now. Should always be a `Scheduled` type.\n        task_inputs: the set of inputs passed to the task\n\n    Returns:\n        The created task run.\n    \"\"\"\n    tags = set(task.tags).union(extra_tags or [])\n\n    if state is None:\n        state = prefect.states.Pending()\n\n    task_run_data = TaskRunCreate(\n        name=name,\n        flow_run_id=flow_run_id,\n        task_key=task.task_key,\n        dynamic_key=dynamic_key,\n        tags=list(tags),\n        task_version=task.version,\n        empirical_policy=TaskRunPolicy(\n            retries=task.retries,\n            retry_delay=task.retry_delay_seconds,\n            retry_jitter_factor=task.retry_jitter_factor,\n        ),\n        state=state.to_state_create(),\n        task_inputs=task_inputs or {},\n    )\n\n    response = await self._client.post(\n        \"/task_runs/\", json=task_run_data.dict(json_compatible=True)\n    )\n    return TaskRun.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_task_run","title":"read_task_run async","text":"

Query the Prefect API for a task run by id.

Parameters:

Name Type Description Default task_run_id UUID

the task run ID of interest

required

Returns:

Type Description TaskRun

a Task Run model representation of the task run

Source code in prefect/client/orchestration.py
async def read_task_run(self, task_run_id: UUID) -> TaskRun:\n    \"\"\"\n    Query the Prefect API for a task run by id.\n\n    Args:\n        task_run_id: the task run ID of interest\n\n    Returns:\n        a Task Run model representation of the task run\n    \"\"\"\n    response = await self._client.get(f\"/task_runs/{task_run_id}\")\n    return TaskRun.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_task_runs","title":"read_task_runs async","text":"

Query the Prefect API for task runs. Only task runs matching all criteria will be returned.

Parameters:

Name Type Description Default flow_filter FlowFilter

filter criteria for flows

None flow_run_filter FlowRunFilter

filter criteria for flow runs

None task_run_filter TaskRunFilter

filter criteria for task runs

None deployment_filter DeploymentFilter

filter criteria for deployments

None sort TaskRunSort

sort criteria for the task runs

None limit int

a limit for the task run query

None offset int

an offset for the task run query

0

Returns:

Type Description List[TaskRun]

a list of Task Run model representations of the task runs

Source code in prefect/client/orchestration.py
async def read_task_runs(\n    self,\n    *,\n    flow_filter: FlowFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    deployment_filter: DeploymentFilter = None,\n    sort: TaskRunSort = None,\n    limit: int = None,\n    offset: int = 0,\n) -> List[TaskRun]:\n    \"\"\"\n    Query the Prefect API for task runs. Only task runs matching all criteria will\n    be returned.\n\n    Args:\n        flow_filter: filter criteria for flows\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        deployment_filter: filter criteria for deployments\n        sort: sort criteria for the task runs\n        limit: a limit for the task run query\n        offset: an offset for the task run query\n\n    Returns:\n        a list of Task Run model representations\n            of the task runs\n    \"\"\"\n    body = {\n        \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n            if flow_run_filter\n            else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"deployments\": (\n            deployment_filter.dict(json_compatible=True)\n            if deployment_filter\n            else None\n        ),\n        \"sort\": sort,\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n    response = await self._client.post(\"/task_runs/filter\", json=body)\n    return pydantic.parse_obj_as(List[TaskRun], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_task_run","title":"delete_task_run async","text":"

Delete a task run by id.

Parameters:

Name Type Description Default task_run_id UUID

the task run ID of interest

required Source code in prefect/client/orchestration.py
async def delete_task_run(self, task_run_id: UUID) -> None:\n    \"\"\"\n    Delete a task run by id.\n\n    Args:\n        task_run_id: the task run ID of interest\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If requests fails\n    \"\"\"\n    try:\n        await self._client.delete(f\"/task_runs/{task_run_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.set_task_run_state","title":"set_task_run_state async","text":"

Set the state of a task run.

Parameters:

Name Type Description Default task_run_id UUID

the id of the task run

required state State

the state to set

required force bool

if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state

False

Returns:

Type Description OrchestrationResult

an OrchestrationResult model representation of state orchestration output

Source code in prefect/client/orchestration.py
async def set_task_run_state(\n    self,\n    task_run_id: UUID,\n    state: prefect.states.State,\n    force: bool = False,\n) -> OrchestrationResult:\n    \"\"\"\n    Set the state of a task run.\n\n    Args:\n        task_run_id: the id of the task run\n        state: the state to set\n        force: if True, disregard orchestration logic when setting the state,\n            forcing the Prefect API to accept the state\n\n    Returns:\n        an OrchestrationResult model representation of state orchestration output\n    \"\"\"\n    state_create = state.to_state_create()\n    state_create.state_details.task_run_id = task_run_id\n    response = await self._client.post(\n        f\"/task_runs/{task_run_id}/set_state\",\n        json=dict(state=state_create.dict(json_compatible=True), force=force),\n    )\n    return OrchestrationResult.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_task_run_states","title":"read_task_run_states async","text":"

Query for the states of a task run

Parameters:

Name Type Description Default task_run_id UUID

the id of the task run

required

Returns:

Type Description List[State]

a list of State model representations of the task run states

Source code in prefect/client/orchestration.py
async def read_task_run_states(\n    self, task_run_id: UUID\n) -> List[prefect.states.State]:\n    \"\"\"\n    Query for the states of a task run\n\n    Args:\n        task_run_id: the id of the task run\n\n    Returns:\n        a list of State model representations of the task run states\n    \"\"\"\n    response = await self._client.get(\n        \"/task_run_states/\", params=dict(task_run_id=str(task_run_id))\n    )\n    return pydantic.parse_obj_as(List[prefect.states.State], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_logs","title":"create_logs async","text":"

Create logs for a flow or task run

Parameters:

Name Type Description Default logs Iterable[Union[LogCreate, dict]]

An iterable of LogCreate objects or already json-compatible dicts

required Source code in prefect/client/orchestration.py
async def create_logs(self, logs: Iterable[Union[LogCreate, dict]]) -> None:\n    \"\"\"\n    Create logs for a flow or task run\n\n    Args:\n        logs: An iterable of `LogCreate` objects or already json-compatible dicts\n    \"\"\"\n    serialized_logs = [\n        log.dict(json_compatible=True) if isinstance(log, LogCreate) else log\n        for log in logs\n    ]\n    await self._client.post(\"/logs/\", json=serialized_logs)\n
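An illustrative sketch emitting one INFO log for a flow run (LogCreate is assumed to come from prefect.client.schemas.actions; the logger name and message are placeholders):

import pendulum
from prefect import get_client
from prefect.client.schemas.actions import LogCreate

async def emit_log(flow_run_id):
    async with get_client() as client:
        log = LogCreate(
            name="prefect.example",          # placeholder logger name
            level=20,                        # logging.INFO
            message="hello from the client",
            timestamp=pendulum.now("UTC"),
            flow_run_id=flow_run_id,
        )
        await client.create_logs([log])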
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_run_notification_policy","title":"create_flow_run_notification_policy async","text":"

Create a notification policy for flow runs

Parameters:

Name Type Description Default block_document_id UUID

The block document UUID

required is_active bool

Whether the notification policy is active

True tags List[str]

List of flow tags

None state_names List[str]

List of state names

None message_template Optional[str]

Notification message template

None Source code in prefect/client/orchestration.py
async def create_flow_run_notification_policy(\n    self,\n    block_document_id: UUID,\n    is_active: bool = True,\n    tags: List[str] = None,\n    state_names: List[str] = None,\n    message_template: Optional[str] = None,\n) -> UUID:\n    \"\"\"\n    Create a notification policy for flow runs\n\n    Args:\n        block_document_id: The block document UUID\n        is_active: Whether the notification policy is active\n        tags: List of flow tags\n        state_names: List of state names\n        message_template: Notification message template\n    \"\"\"\n    if tags is None:\n        tags = []\n    if state_names is None:\n        state_names = []\n\n    policy = FlowRunNotificationPolicyCreate(\n        block_document_id=block_document_id,\n        is_active=is_active,\n        tags=tags,\n        state_names=state_names,\n        message_template=message_template,\n    )\n    response = await self._client.post(\n        \"/flow_run_notification_policies/\",\n        json=policy.dict(json_compatible=True),\n    )\n\n    policy_id = response.json().get(\"id\")\n    if not policy_id:\n        raise httpx.RequestError(f\"Malformed response: {response}\")\n\n    return UUID(policy_id)\n
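A hedged sketch; block_document_id must reference an existing notification block document (for example a Slack webhook block), and the state names are illustrative:

from prefect import get_client

async def notify_on_failure(block_document_id):
    async with get_client() as client:
        policy_id = await client.create_flow_run_notification_policy(
            block_document_id=block_document_id,
            state_names=["Failed", "Crashed"],
        )
        return policy_id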
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_flow_run_notification_policy","title":"delete_flow_run_notification_policy async","text":"

Delete a flow run notification policy by id.

Parameters:

Name Type Description Default id UUID

UUID of the flow run notification policy to delete.

required Source code in prefect/client/orchestration.py
async def delete_flow_run_notification_policy(\n    self,\n    id: UUID,\n) -> None:\n    \"\"\"\n    Delete a flow run notification policy by id.\n\n    Args:\n        id: UUID of the flow run notification policy to delete.\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If requests fails\n    \"\"\"\n    try:\n        await self._client.delete(f\"/flow_run_notification_policies/{id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_flow_run_notification_policy","title":"update_flow_run_notification_policy async","text":"

Update a notification policy for flow runs

Parameters:

Name Type Description Default id UUID

UUID of the notification policy

required block_document_id Optional[UUID]

The block document UUID

None is_active Optional[bool]

Whether the notification policy is active

None tags Optional[List[str]]

List of flow tags

None state_names Optional[List[str]]

List of state names

None message_template Optional[str]

Notification message template

None Source code in prefect/client/orchestration.py
async def update_flow_run_notification_policy(\n    self,\n    id: UUID,\n    block_document_id: Optional[UUID] = None,\n    is_active: Optional[bool] = None,\n    tags: Optional[List[str]] = None,\n    state_names: Optional[List[str]] = None,\n    message_template: Optional[str] = None,\n) -> None:\n    \"\"\"\n    Update a notification policy for flow runs\n\n    Args:\n        id: UUID of the notification policy\n        block_document_id: The block document UUID\n        is_active: Whether the notification policy is active\n        tags: List of flow tags\n        state_names: List of state names\n        message_template: Notification message template\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If requests fails\n    \"\"\"\n    params = {}\n    if block_document_id is not None:\n        params[\"block_document_id\"] = block_document_id\n    if is_active is not None:\n        params[\"is_active\"] = is_active\n    if tags is not None:\n        params[\"tags\"] = tags\n    if state_names is not None:\n        params[\"state_names\"] = state_names\n    if message_template is not None:\n        params[\"message_template\"] = message_template\n\n    policy = FlowRunNotificationPolicyUpdate(**params)\n\n    try:\n        await self._client.patch(\n            f\"/flow_run_notification_policies/{id}\",\n            json=policy.dict(json_compatible=True, exclude_unset=True),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_run_notification_policies","title":"read_flow_run_notification_policies async","text":"

Query the Prefect API for flow run notification policies. Only policies matching all criteria will be returned.

Parameters:

Name Type Description Default flow_run_notification_policy_filter FlowRunNotificationPolicyFilter

filter criteria for notification policies

required limit Optional[int]

a limit for the notification policies query

None offset int

an offset for the notification policies query

0

Returns:

Type Description List[FlowRunNotificationPolicy]

a list of FlowRunNotificationPolicy model representations of the notification policies

Source code in prefect/client/orchestration.py
async def read_flow_run_notification_policies(\n    self,\n    flow_run_notification_policy_filter: FlowRunNotificationPolicyFilter,\n    limit: Optional[int] = None,\n    offset: int = 0,\n) -> List[FlowRunNotificationPolicy]:\n    \"\"\"\n    Query the Prefect API for flow run notification policies. Only policies matching all criteria will\n    be returned.\n\n    Args:\n        flow_run_notification_policy_filter: filter criteria for notification policies\n        limit: a limit for the notification policies query\n        offset: an offset for the notification policies query\n\n    Returns:\n        a list of FlowRunNotificationPolicy model representations\n            of the notification policies\n    \"\"\"\n    body = {\n        \"flow_run_notification_policy_filter\": (\n            flow_run_notification_policy_filter.dict(json_compatible=True)\n            if flow_run_notification_policy_filter\n            else None\n        ),\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n    response = await self._client.post(\n        \"/flow_run_notification_policies/filter\", json=body\n    )\n    return pydantic.parse_obj_as(List[FlowRunNotificationPolicy], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_logs","title":"read_logs async","text":"

Read flow and task run logs.

Source code in prefect/client/orchestration.py
async def read_logs(\n    self,\n    log_filter: LogFilter = None,\n    limit: int = None,\n    offset: int = None,\n    sort: LogSort = LogSort.TIMESTAMP_ASC,\n) -> List[Log]:\n    \"\"\"\n    Read flow and task run logs.\n    \"\"\"\n    body = {\n        \"logs\": log_filter.dict(json_compatible=True) if log_filter else None,\n        \"limit\": limit,\n        \"offset\": offset,\n        \"sort\": sort,\n    }\n\n    response = await self._client.post(\"/logs/filter\", json=body)\n    return pydantic.parse_obj_as(List[Log], response.json())\n
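A short usage sketch; it relies only on the documented signature and assumes the returned Log objects expose a message attribute.

import asyncio\nfrom prefect.client.orchestration import get_client\n\nasync def main():\n    async with get_client() as client:\n        # Fetch up to 50 log entries, sorted ascending by timestamp by default\n        logs = await client.read_logs(limit=50)\n        for log in logs:\n            print(log.message)\n\nasyncio.run(main())\n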
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.resolve_datadoc","title":"resolve_datadoc async","text":"

Recursively decode possibly nested data documents.

\"server\" encoded documents will be retrieved from the server.

Parameters:

Name Type Description Default datadoc DataDocument

The data document to resolve

required

Returns:

Type Description Any

a decoded object, the innermost data

Source code in prefect/client/orchestration.py
async def resolve_datadoc(self, datadoc: DataDocument) -> Any:\n    \"\"\"\n    Recursively decode possibly nested data documents.\n\n    \"server\" encoded documents will be retrieved from the server.\n\n    Args:\n        datadoc: The data document to resolve\n\n    Returns:\n        a decoded object, the innermost data\n    \"\"\"\n    if not isinstance(datadoc, DataDocument):\n        raise TypeError(\n            f\"`resolve_datadoc` received invalid type {type(datadoc).__name__}\"\n        )\n\n    async def resolve_inner(data):\n        if isinstance(data, bytes):\n            try:\n                data = DataDocument.parse_raw(data)\n            except pydantic.ValidationError:\n                return data\n\n        if isinstance(data, DataDocument):\n            return await resolve_inner(data.decode())\n\n        return data\n\n    return await resolve_inner(datadoc)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.send_worker_heartbeat","title":"send_worker_heartbeat async","text":"

Sends a worker heartbeat for a given work pool.

Parameters:

Name Type Description Default work_pool_name str

The name of the work pool to heartbeat against.

required worker_name str

The name of the worker sending the heartbeat.

required Source code in prefect/client/orchestration.py
async def send_worker_heartbeat(\n    self,\n    work_pool_name: str,\n    worker_name: str,\n    heartbeat_interval_seconds: Optional[float] = None,\n):\n    \"\"\"\n    Sends a worker heartbeat for a given work pool.\n\n    Args:\n        work_pool_name: The name of the work pool to heartbeat against.\n        worker_name: The name of the worker sending the heartbeat.\n    \"\"\"\n    await self._client.post(\n        f\"/work_pools/{work_pool_name}/workers/heartbeat\",\n        json={\n            \"name\": worker_name,\n            \"heartbeat_interval_seconds\": heartbeat_interval_seconds,\n        },\n    )\n
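A usage sketch with placeholder pool and worker names.

import asyncio\nfrom prefect.client.orchestration import get_client\n\nasync def main():\n    async with get_client() as client:\n        # 'my-pool' and 'my-worker' are placeholders for an existing work pool and worker\n        await client.send_worker_heartbeat(\n            work_pool_name='my-pool',\n            worker_name='my-worker',\n            heartbeat_interval_seconds=30.0,\n        )\n\nasyncio.run(main())\n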
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_workers_for_work_pool","title":"read_workers_for_work_pool async","text":"

Reads workers for a given work pool.

Parameters:

Name Type Description Default work_pool_name str

The name of the work pool for which to get member workers.

required worker_filter Optional[WorkerFilter]

Criteria by which to filter workers.

None limit Optional[int]

Limit for the worker query.

None offset Optional[int]

Offset for the worker query.

None Source code in prefect/client/orchestration.py
async def read_workers_for_work_pool(\n    self,\n    work_pool_name: str,\n    worker_filter: Optional[WorkerFilter] = None,\n    offset: Optional[int] = None,\n    limit: Optional[int] = None,\n) -> List[Worker]:\n    \"\"\"\n    Reads workers for a given work pool.\n\n    Args:\n        work_pool_name: The name of the work pool for which to get\n            member workers.\n        worker_filter: Criteria by which to filter workers.\n        limit: Limit for the worker query.\n        offset: Limit for the worker query.\n    \"\"\"\n    response = await self._client.post(\n        f\"/work_pools/{work_pool_name}/workers/filter\",\n        json={\n            \"worker_filter\": (\n                worker_filter.dict(json_compatible=True, exclude_unset=True)\n                if worker_filter\n                else None\n            ),\n            \"offset\": offset,\n            \"limit\": limit,\n        },\n    )\n\n    return pydantic.parse_obj_as(List[Worker], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_pool","title":"read_work_pool async","text":"

Reads information for a given work pool

Parameters:

Name Type Description Default work_pool_name str

The name of the work pool for which to get information.

required

Returns:

Type Description WorkPool

Information about the requested work pool.

Source code in prefect/client/orchestration.py
async def read_work_pool(self, work_pool_name: str) -> WorkPool:\n    \"\"\"\n    Reads information for a given work pool\n\n    Args:\n        work_pool_name: The name of the work pool to for which to get\n            information.\n\n    Returns:\n        Information about the requested work pool.\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/work_pools/{work_pool_name}\")\n        return pydantic.parse_obj_as(WorkPool, response.json())\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
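A sketch that reads a work pool by a placeholder name and handles the documented ObjectNotFound error; it assumes the WorkPool model exposes name and type attributes.

import asyncio\nfrom prefect.client.orchestration import get_client\nfrom prefect.exceptions import ObjectNotFound\n\nasync def main():\n    async with get_client() as client:\n        try:\n            pool = await client.read_work_pool('my-pool')  # placeholder name\n            print(pool.name, pool.type)\n        except ObjectNotFound:\n            print('work pool does not exist')\n\nasyncio.run(main())\n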
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_pools","title":"read_work_pools async","text":"

Reads work pools.

Parameters:

Name Type Description Default limit Optional[int]

Limit for the work pool query.

None offset int

Offset for the work pool query.

0 work_pool_filter Optional[WorkPoolFilter]

Criteria by which to filter work pools.

None

Returns:

Type Description List[WorkPool]

A list of work pools.

Source code in prefect/client/orchestration.py
async def read_work_pools(\n    self,\n    limit: Optional[int] = None,\n    offset: int = 0,\n    work_pool_filter: Optional[WorkPoolFilter] = None,\n) -> List[WorkPool]:\n    \"\"\"\n    Reads work pools.\n\n    Args:\n        limit: Limit for the work pool query.\n        offset: Offset for the work pool query.\n        work_pool_filter: Criteria by which to filter work pools.\n\n    Returns:\n        A list of work pools.\n    \"\"\"\n\n    body = {\n        \"limit\": limit,\n        \"offset\": offset,\n        \"work_pools\": (\n            work_pool_filter.dict(json_compatible=True)\n            if work_pool_filter\n            else None\n        ),\n    }\n    response = await self._client.post(\"/work_pools/filter\", json=body)\n    return pydantic.parse_obj_as(List[WorkPool], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_work_pool","title":"create_work_pool async","text":"

Creates a work pool with the provided configuration.

Parameters:

Name Type Description Default work_pool WorkPoolCreate

Desired configuration for the new work pool.

required

Returns:

Type Description WorkPool

Information about the newly created work pool.

Source code in prefect/client/orchestration.py
async def create_work_pool(\n    self,\n    work_pool: WorkPoolCreate,\n) -> WorkPool:\n    \"\"\"\n    Creates a work pool with the provided configuration.\n\n    Args:\n        work_pool: Desired configuration for the new work pool.\n\n    Returns:\n        Information about the newly created work pool.\n    \"\"\"\n    try:\n        response = await self._client.post(\n            \"/work_pools/\",\n            json=work_pool.dict(json_compatible=True, exclude_unset=True),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_409_CONFLICT:\n            raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n        else:\n            raise\n\n    return pydantic.parse_obj_as(WorkPool, response.json())\n
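A hedged sketch, assuming WorkPoolCreate can be imported from prefect.client.schemas.actions and accepts name and type fields; the pool name and type are example values.

import asyncio\nfrom prefect.client.orchestration import get_client\nfrom prefect.client.schemas.actions import WorkPoolCreate  # assumed import path\n\nasync def main():\n    async with get_client() as client:\n        pool = await client.create_work_pool(\n            work_pool=WorkPoolCreate(name='my-process-pool', type='process'),\n        )\n        print(pool.id)\n\nasyncio.run(main())\n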
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_work_pool","title":"update_work_pool async","text":"

Updates a work pool.

Parameters:

Name Type Description Default work_pool_name str

Name of the work pool to update.

required work_pool WorkPoolUpdate

Fields to update in the work pool.

required Source code in prefect/client/orchestration.py
async def update_work_pool(\n    self,\n    work_pool_name: str,\n    work_pool: WorkPoolUpdate,\n):\n    \"\"\"\n    Updates a work pool.\n\n    Args:\n        work_pool_name: Name of the work pool to update.\n        work_pool: Fields to update in the work pool.\n    \"\"\"\n    try:\n        await self._client.patch(\n            f\"/work_pools/{work_pool_name}\",\n            json=work_pool.dict(json_compatible=True, exclude_unset=True),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_work_pool","title":"delete_work_pool async","text":"

Deletes a work pool.

Parameters:

Name Type Description Default work_pool_name str

Name of the work pool to delete.

required Source code in prefect/client/orchestration.py
async def delete_work_pool(\n    self,\n    work_pool_name: str,\n):\n    \"\"\"\n    Deletes a work pool.\n\n    Args:\n        work_pool_name: Name of the work pool to delete.\n    \"\"\"\n    try:\n        await self._client.delete(f\"/work_pools/{work_pool_name}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_queues","title":"read_work_queues async","text":"

Retrieves queues for a work pool.

Parameters:

Name Type Description Default work_pool_name Optional[str]

Name of the work pool for which to get queues.

None work_queue_filter Optional[WorkQueueFilter]

Criteria by which to filter queues.

None limit Optional[int]

Limit for the queue query.

None offset Optional[int]

Offset for the queue query.

None

Returns:

Type Description List[WorkQueue]

List of queues for the specified work pool.

Source code in prefect/client/orchestration.py
async def read_work_queues(\n    self,\n    work_pool_name: Optional[str] = None,\n    work_queue_filter: Optional[WorkQueueFilter] = None,\n    limit: Optional[int] = None,\n    offset: Optional[int] = None,\n) -> List[WorkQueue]:\n    \"\"\"\n    Retrieves queues for a work pool.\n\n    Args:\n        work_pool_name: Name of the work pool for which to get queues.\n        work_queue_filter: Criteria by which to filter queues.\n        limit: Limit for the queue query.\n        offset: Limit for the queue query.\n\n    Returns:\n        List of queues for the specified work pool.\n    \"\"\"\n    json = {\n        \"work_queues\": (\n            work_queue_filter.dict(json_compatible=True, exclude_unset=True)\n            if work_queue_filter\n            else None\n        ),\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n\n    if work_pool_name:\n        try:\n            response = await self._client.post(\n                f\"/work_pools/{work_pool_name}/queues/filter\",\n                json=json,\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n    else:\n        response = await self._client.post(\"/work_queues/filter\", json=json)\n\n    return pydantic.parse_obj_as(List[WorkQueue], response.json())\n
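A usage sketch listing the queues of a placeholder work pool.

import asyncio\nfrom prefect.client.orchestration import get_client\n\nasync def main():\n    async with get_client() as client:\n        queues = await client.read_work_queues(work_pool_name='my-pool', limit=20)\n        for queue in queues:\n            print(queue.name)\n\nasyncio.run(main())\n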
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.get_scheduled_flow_runs_for_work_pool","title":"get_scheduled_flow_runs_for_work_pool async","text":"

Retrieves scheduled flow runs for the provided set of work pool queues.

Parameters:

Name Type Description Default work_pool_name str

The name of the work pool that the work pool queues are associated with.

required work_queue_names Optional[List[str]]

The names of the work pool queues from which to get scheduled flow runs.

None scheduled_before Optional[datetime]

Datetime used to filter returned flow runs. Flow runs scheduled for after the given datetime string will not be returned.

None

Returns:

Type Description List[WorkerFlowRunResponse]

A list of worker flow run responses containing information about the retrieved flow runs.

Source code in prefect/client/orchestration.py
async def get_scheduled_flow_runs_for_work_pool(\n    self,\n    work_pool_name: str,\n    work_queue_names: Optional[List[str]] = None,\n    scheduled_before: Optional[datetime.datetime] = None,\n) -> List[WorkerFlowRunResponse]:\n    \"\"\"\n    Retrieves scheduled flow runs for the provided set of work pool queues.\n\n    Args:\n        work_pool_name: The name of the work pool that the work pool\n            queues are associated with.\n        work_queue_names: The names of the work pool queues from which\n            to get scheduled flow runs.\n        scheduled_before: Datetime used to filter returned flow runs. Flow runs\n            scheduled for after the given datetime string will not be returned.\n\n    Returns:\n        A list of worker flow run responses containing information about the\n        retrieved flow runs.\n    \"\"\"\n    body: Dict[str, Any] = {}\n    if work_queue_names is not None:\n        body[\"work_queue_names\"] = list(work_queue_names)\n    if scheduled_before:\n        body[\"scheduled_before\"] = str(scheduled_before)\n\n    response = await self._client.post(\n        f\"/work_pools/{work_pool_name}/get_scheduled_flow_runs\",\n        json=body,\n    )\n    return pydantic.parse_obj_as(List[WorkerFlowRunResponse], response.json())\n
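A sketch that polls a placeholder pool and queue for runs scheduled up to the current time.

import asyncio\nimport datetime\n\nfrom prefect.client.orchestration import get_client\n\nasync def main():\n    async with get_client() as client:\n        responses = await client.get_scheduled_flow_runs_for_work_pool(\n            work_pool_name='my-pool',\n            work_queue_names=['default'],\n            scheduled_before=datetime.datetime.now(datetime.timezone.utc),\n        )\n        print(f'{len(responses)} scheduled flow runs')\n\nasyncio.run(main())\n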
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_artifact","title":"create_artifact async","text":"

Creates an artifact with the provided configuration.

Parameters:

Name Type Description Default artifact ArtifactCreate

Desired configuration for the new artifact.

required Source code in prefect/client/orchestration.py
async def create_artifact(\n    self,\n    artifact: ArtifactCreate,\n) -> Artifact:\n    \"\"\"\n    Creates an artifact with the provided configuration.\n\n    Args:\n        artifact: Desired configuration for the new artifact.\n    Returns:\n        Information about the newly created artifact.\n    \"\"\"\n\n    response = await self._client.post(\n        \"/artifacts/\",\n        json=artifact.dict(json_compatible=True, exclude_unset=True),\n    )\n\n    return pydantic.parse_obj_as(Artifact, response.json())\n
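A hedged sketch, assuming ArtifactCreate is importable from prefect.client.schemas.actions and accepts key, type, and data fields; the key and data shown are example values.

import asyncio\nfrom prefect.client.orchestration import get_client\nfrom prefect.client.schemas.actions import ArtifactCreate  # assumed import path and fields\n\nasync def main():\n    async with get_client() as client:\n        artifact = await client.create_artifact(\n            artifact=ArtifactCreate(key='nightly-report', type='markdown', data='# Nightly report'),\n        )\n        print(artifact.id)\n\nasyncio.run(main())\n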
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_artifacts","title":"read_artifacts async","text":"

Query the Prefect API for artifacts. Only artifacts matching all criteria will be returned.

Parameters:

Name Type Description Default artifact_filter ArtifactFilter

filter criteria for artifacts

None flow_run_filter FlowRunFilter

filter criteria for flow runs

None task_run_filter TaskRunFilter

filter criteria for task runs

None sort ArtifactSort

sort criteria for the artifacts

None limit int

limit for the artifact query

None offset int

offset for the artifact query

0

Returns:

Type Description List[Artifact]

a list of Artifact model representations of the artifacts

Source code in prefect/client/orchestration.py
async def read_artifacts(\n    self,\n    *,\n    artifact_filter: ArtifactFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    sort: ArtifactSort = None,\n    limit: int = None,\n    offset: int = 0,\n) -> List[Artifact]:\n    \"\"\"\n    Query the Prefect API for artifacts. Only artifacts matching all criteria will\n    be returned.\n    Args:\n        artifact_filter: filter criteria for artifacts\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        sort: sort criteria for the artifacts\n        limit: limit for the artifact query\n        offset: offset for the artifact query\n    Returns:\n        a list of Artifact model representations of the artifacts\n    \"\"\"\n    body = {\n        \"artifacts\": (\n            artifact_filter.dict(json_compatible=True) if artifact_filter else None\n        ),\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"sort\": sort,\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n    response = await self._client.post(\"/artifacts/filter\", json=body)\n    return pydantic.parse_obj_as(List[Artifact], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_latest_artifacts","title":"read_latest_artifacts async","text":"

Query the Prefect API for artifacts. Only artifacts matching all criteria will be returned.

Parameters:

Name Type Description Default artifact_filter ArtifactCollectionFilter

filter criteria for artifacts

None flow_run_filter FlowRunFilter

filter criteria for flow runs

None task_run_filter TaskRunFilter

filter criteria for task runs

None sort ArtifactCollectionSort

sort criteria for the artifacts

None limit int

limit for the artifact query

None offset int

offset for the artifact query

0

Returns:

Type Description List[ArtifactCollection]

a list of Artifact model representations of the artifacts

Source code in prefect/client/orchestration.py
async def read_latest_artifacts(\n    self,\n    *,\n    artifact_filter: ArtifactCollectionFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    sort: ArtifactCollectionSort = None,\n    limit: int = None,\n    offset: int = 0,\n) -> List[ArtifactCollection]:\n    \"\"\"\n    Query the Prefect API for artifacts. Only artifacts matching all criteria will\n    be returned.\n    Args:\n        artifact_filter: filter criteria for artifacts\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        sort: sort criteria for the artifacts\n        limit: limit for the artifact query\n        offset: offset for the artifact query\n    Returns:\n        a list of Artifact model representations of the artifacts\n    \"\"\"\n    body = {\n        \"artifacts\": (\n            artifact_filter.dict(json_compatible=True) if artifact_filter else None\n        ),\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"sort\": sort,\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n    response = await self._client.post(\"/artifacts/latest/filter\", json=body)\n    return pydantic.parse_obj_as(List[ArtifactCollection], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_artifact","title":"delete_artifact async","text":"

Deletes an artifact with the provided id.

Parameters:

Name Type Description Default artifact_id UUID

The id of the artifact to delete.

required Source code in prefect/client/orchestration.py
async def delete_artifact(self, artifact_id: UUID) -> None:\n    \"\"\"\n    Deletes an artifact with the provided id.\n\n    Args:\n        artifact_id: The id of the artifact to delete.\n    \"\"\"\n    try:\n        await self._client.delete(f\"/artifacts/{artifact_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_variable","title":"create_variable async","text":"

Creates a variable with the provided configuration.

Parameters:

Name Type Description Default variable VariableCreate

Desired configuration for the new variable.

required Source code in prefect/client/orchestration.py
async def create_variable(self, variable: VariableCreate) -> Variable:\n    \"\"\"\n    Creates an variable with the provided configuration.\n\n    Args:\n        variable: Desired configuration for the new variable.\n    Returns:\n        Information about the newly created variable.\n    \"\"\"\n    response = await self._client.post(\n        \"/variables/\",\n        json=variable.dict(json_compatible=True, exclude_unset=True),\n    )\n    return Variable(**response.json())\n
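A hedged sketch, assuming VariableCreate is importable from prefect.client.schemas.actions and accepts name and value fields; the variable name and value are example values.

import asyncio\nfrom prefect.client.orchestration import get_client\nfrom prefect.client.schemas.actions import VariableCreate  # assumed import path and fields\n\nasync def main():\n    async with get_client() as client:\n        variable = await client.create_variable(\n            variable=VariableCreate(name='my_variable', value='some-value'),\n        )\n        print(variable.name)\n\nasyncio.run(main())\n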
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_variable","title":"update_variable async","text":"

Updates a variable with the provided configuration.

Parameters:

Name Type Description Default variable VariableUpdate

Desired configuration for the updated variable.

required Source code in prefect/client/orchestration.py
async def update_variable(self, variable: VariableUpdate) -> None:\n    \"\"\"\n    Updates a variable with the provided configuration.\n\n    Args:\n        variable: Desired configuration for the updated variable.\n    Returns:\n        Information about the updated variable.\n    \"\"\"\n    await self._client.patch(\n        f\"/variables/name/{variable.name}\",\n        json=variable.dict(json_compatible=True, exclude_unset=True),\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_variable_by_name","title":"read_variable_by_name async","text":"

Reads a variable by name. Returns None if no variable is found.

Source code in prefect/client/orchestration.py
async def read_variable_by_name(self, name: str) -> Optional[Variable]:\n    \"\"\"Reads a variable by name. Returns None if no variable is found.\"\"\"\n    try:\n        response = await self._client.get(f\"/variables/name/{name}\")\n        return Variable(**response.json())\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            return None\n        else:\n            raise\n
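A short sketch showing the None return when the variable does not exist; the variable name is a placeholder.

import asyncio\nfrom prefect.client.orchestration import get_client\n\nasync def main():\n    async with get_client() as client:\n        variable = await client.read_variable_by_name('my_variable')  # placeholder name\n        print(variable.value if variable is not None else 'not found')\n\nasyncio.run(main())\n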
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_variable_by_name","title":"delete_variable_by_name async","text":"

Deletes a variable by name.

Source code in prefect/client/orchestration.py
async def delete_variable_by_name(self, name: str):\n    \"\"\"Deletes a variable by name.\"\"\"\n    try:\n        await self._client.delete(f\"/variables/name/{name}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_variables","title":"read_variables async","text":"

Reads all variables.

Source code in prefect/client/orchestration.py
async def read_variables(self, limit: int = None) -> List[Variable]:\n    \"\"\"Reads all variables.\"\"\"\n    response = await self._client.post(\"/variables/filter\", json={\"limit\": limit})\n    return pydantic.parse_obj_as(List[Variable], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_worker_metadata","title":"read_worker_metadata async","text":"

Reads worker metadata stored in the Prefect collection registry.

Source code in prefect/client/orchestration.py
async def read_worker_metadata(self) -> Dict[str, Any]:\n    \"\"\"Reads worker metadata stored in Prefect collection registry.\"\"\"\n    response = await self._client.get(\"collections/views/aggregate-worker-metadata\")\n    response.raise_for_status()\n    return response.json()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_run_input","title":"create_flow_run_input async","text":"

Creates a flow run input.

Parameters:

Name Type Description Default flow_run_id UUID

The flow run id.

required key str

The input key.

required value str

The input value.

required sender Optional[str]

The sender of the input.

None Source code in prefect/client/orchestration.py
async def create_flow_run_input(\n    self, flow_run_id: UUID, key: str, value: str, sender: Optional[str] = None\n):\n    \"\"\"\n    Creates a flow run input.\n\n    Args:\n        flow_run_id: The flow run id.\n        key: The input key.\n        value: The input value.\n        sender: The sender of the input.\n    \"\"\"\n\n    # Initialize the input to ensure that the key is valid.\n    FlowRunInput(flow_run_id=flow_run_id, key=key, value=value)\n\n    response = await self._client.post(\n        f\"/flow_runs/{flow_run_id}/input\",\n        json={\"key\": key, \"value\": value, \"sender\": sender},\n    )\n    response.raise_for_status()\n
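A sketch pairing create_flow_run_input with read_flow_run_input; the flow run UUID and the key are placeholders.

import asyncio\nfrom uuid import UUID\n\nfrom prefect.client.orchestration import get_client\n\nasync def main():\n    flow_run_id = UUID('00000000-0000-0000-0000-000000000000')  # placeholder flow run id\n    async with get_client() as client:\n        await client.create_flow_run_input(flow_run_id, key='approval', value='yes')\n        print(await client.read_flow_run_input(flow_run_id, key='approval'))\n\nasyncio.run(main())\n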
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_run_input","title":"read_flow_run_input async","text":"

Reads a flow run input.

Parameters:

Name Type Description Default flow_run_id UUID

The flow run id.

required key str

The input key.

required Source code in prefect/client/orchestration.py
async def read_flow_run_input(self, flow_run_id: UUID, key: str) -> str:\n    \"\"\"\n    Reads a flow run input.\n\n    Args:\n        flow_run_id: The flow run id.\n        key: The input key.\n    \"\"\"\n    response = await self._client.get(f\"/flow_runs/{flow_run_id}/input/{key}\")\n    response.raise_for_status()\n    return response.content.decode()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_flow_run_input","title":"delete_flow_run_input async","text":"

Deletes a flow run input.

Parameters:

Name Type Description Default flow_run_id UUID

The flow run id.

required key str

The input key.

required Source code in prefect/client/orchestration.py
async def delete_flow_run_input(self, flow_run_id: UUID, key: str):\n    \"\"\"\n    Deletes a flow run input.\n\n    Args:\n        flow_run_id: The flow run id.\n        key: The input key.\n    \"\"\"\n    response = await self._client.delete(f\"/flow_runs/{flow_run_id}/input/{key}\")\n    response.raise_for_status()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_automation","title":"create_automation async","text":"

Creates an automation in Prefect Cloud.

Source code in prefect/client/orchestration.py
async def create_automation(self, automation: AutomationCore) -> UUID:\n    \"\"\"Creates an automation in Prefect Cloud.\"\"\"\n    if not self.server_type.supports_automations():\n        self._raise_for_unsupported_automations()\n\n    response = await self._client.post(\n        \"/automations/\",\n        json=automation.dict(json_compatible=True),\n    )\n\n    return UUID(response.json()[\"id\"])\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_automation","title":"update_automation async","text":"

Updates an automation in Prefect Cloud.

Source code in prefect/client/orchestration.py
async def update_automation(self, automation_id: UUID, automation: AutomationCore):\n    \"\"\"Updates an automation in Prefect Cloud.\"\"\"\n    if not self.server_type.supports_automations():\n        self._raise_for_unsupported_automations()\n    response = await self._client.put(\n        f\"/automations/{automation_id}\",\n        json=automation.dict(json_compatible=True, exclude_unset=True),\n    )\n    response.raise_for_status()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_automations_by_name","title":"read_automations_by_name async","text":"

Query the Prefect API for an automation by name. Only automations matching the provided name will be returned.

Parameters:

Name Type Description Default name str

the name of the automation to query

required

Returns:

Type Description List[Automation]

a list of Automation model representations of the automations

Source code in prefect/client/orchestration.py
async def read_automations_by_name(self, name: str) -> List[Automation]:\n    \"\"\"\n    Query the Prefect API for an automation by name. Only automations matching the provided name will be returned.\n\n    Args:\n        name: the name of the automation to query\n\n    Returns:\n        a list of Automation model representations of the automations\n    \"\"\"\n    if not self.server_type.supports_automations():\n        self._raise_for_unsupported_automations()\n    automation_filter = filters.AutomationFilter(name=dict(any_=[name]))\n\n    response = await self._client.post(\n        \"/automations/filter\",\n        json={\n            \"sort\": sorting.AutomationSort.UPDATED_DESC,\n            \"automations\": automation_filter.dict(json_compatible=True)\n            if automation_filter\n            else None,\n        },\n    )\n\n    response.raise_for_status()\n\n    return pydantic.parse_obj_as(List[Automation], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.SyncPrefectClient","title":"SyncPrefectClient","text":"

A synchronous client for interacting with the Prefect REST API.

Parameters:

Name Type Description Default api Union[str, ASGIApp]

the REST API URL or FastAPI application to connect to

required api_key Optional[str]

An optional API key for authentication.

None api_version Optional[str]

The API version this client is compatible with.

None httpx_settings Optional[Dict[str, Any]]

An optional dictionary of settings to pass to the underlying httpx.AsyncClient

None
Examples:

Say hello to a Prefect REST API

>>> with get_client(sync_client=True) as client:\n>>>     response = client.hello()\n>>>\n>>> print(response.json())\n👋\n
Source code in prefect/client/orchestration.py
class SyncPrefectClient:\n    \"\"\"\n    A synchronous client for interacting with the [Prefect REST API](/api-ref/rest-api/).\n\n    Args:\n        api: the REST API URL or FastAPI application to connect to\n        api_key: An optional API key for authentication.\n        api_version: The API version this client is compatible with.\n        httpx_settings: An optional dictionary of settings to pass to the underlying\n            `httpx.AsyncClient`\n\n    Examples:\n\n        Say hello to a Prefect REST API\n\n        <div class=\"terminal\">\n        ```\n        >>> with get_client(sync_client=True) as client:\n        >>>     response = client.hello()\n        >>>\n        >>> print(response.json())\n        \ud83d\udc4b\n        ```\n        </div>\n    \"\"\"\n\n    def __init__(\n        self,\n        api: Union[str, ASGIApp],\n        *,\n        api_key: Optional[str] = None,\n        api_version: Optional[str] = None,\n        httpx_settings: Optional[Dict[str, Any]] = None,\n    ) -> None:\n        self._prefect_client = PrefectClient(\n            api=api,\n            api_key=api_key,\n            api_version=api_version,\n            httpx_settings=httpx_settings,\n        )\n\n    def __enter__(self):\n        run_sync(self._prefect_client.__aenter__())\n        return self\n\n    def __exit__(self, *exc_info):\n        return run_sync(self._prefect_client.__aexit__(*exc_info))\n\n    async def __aenter__(self):\n        raise RuntimeError(\n            \"The `SyncPrefectClient` must be entered with a sync context. Use '\"\n            \"with SyncPrefectClient(...)' not 'async with SyncPrefectClient(...)'\"\n        )\n\n    async def __aexit__(self, *_):\n        assert False, \"This should never be called but must be defined for __aenter__\"\n\n    def hello(self) -> httpx.Response:\n        \"\"\"\n        Send a GET request to /hello for testing purposes.\n        \"\"\"\n        return run_sync(self._prefect_client.hello())\n\n    def create_task_run(\n        self,\n        task: \"TaskObject[P, R]\",\n        flow_run_id: Optional[UUID],\n        dynamic_key: str,\n        name: Optional[str] = None,\n        extra_tags: Optional[Iterable[str]] = None,\n        state: Optional[prefect.states.State[R]] = None,\n        task_inputs: Optional[\n            Dict[\n                str,\n                List[\n                    Union[\n                        TaskRunResult,\n                        Parameter,\n                        Constant,\n                    ]\n                ],\n            ]\n        ] = None,\n    ) -> TaskRun:\n        \"\"\"\n        Create a task run\n\n        Args:\n            task: The Task to run\n            flow_run_id: The flow run id with which to associate the task run\n            dynamic_key: A key unique to this particular run of a Task within the flow\n            name: An optional name for the task run\n            extra_tags: an optional list of extra tags to apply to the task run in\n                addition to `task.tags`\n            state: The initial state for the run. If not provided, defaults to\n                `Pending` for now. 
Should always be a `Scheduled` type.\n            task_inputs: the set of inputs passed to the task\n\n        Returns:\n            The created task run.\n        \"\"\"\n        return run_sync(\n            self._prefect_client.create_task_run(\n                task=task,\n                flow_run_id=flow_run_id,\n                dynamic_key=dynamic_key,\n                name=name,\n                extra_tags=extra_tags,\n                state=state,\n                task_inputs=task_inputs,\n            )\n        )\n\n    def set_task_run_state(\n        self,\n        task_run_id: UUID,\n        state: prefect.states.State,\n        force: bool = False,\n    ) -> OrchestrationResult:\n        \"\"\"\n        Set the state of a task run.\n\n        Args:\n            task_run_id: the id of the task run\n            state: the state to set\n            force: if True, disregard orchestration logic when setting the state,\n                forcing the Prefect API to accept the state\n\n        Returns:\n            an OrchestrationResult model representation of state orchestration output\n        \"\"\"\n        return run_sync(\n            self._prefect_client.set_task_run_state(\n                task_run_id=task_run_id,\n                state=state,\n                force=force,\n            )\n        )\n\n    def create_flow_run(\n        self,\n        flow_id: UUID,\n        parameters: Optional[Dict[str, Any]] = None,\n        context: Optional[Dict[str, Any]] = None,\n        scheduled_start_time: Optional[datetime.datetime] = None,\n        run_name: Optional[str] = None,\n        labels: Optional[List[str]] = None,\n        parameters_json: Optional[str] = None,\n        run_config: Optional[Dict[str, Any]] = None,\n        idempotency_key: Optional[str] = None,\n    ) -> FlowRunResponse:\n        \"\"\"\n        Create a new flow run.\n\n        Args:\n            - flow_id (UUID): the ID of the flow to create a run for\n            - parameters (Optional[Dict[str, Any]]): a dictionary of parameter values to pass to the flow\n            - context (Optional[Dict[str, Any]]): a dictionary of context values to pass to the flow\n            - scheduled_start_time (Optional[datetime.datetime]): the scheduled start time for the flow run\n            - run_name (Optional[str]): a name to assign to the flow run\n            - labels (Optional[List[str]]): a list of labels to assign to the flow run\n            - parameters_json (Optional[str]): a JSON string of parameter values to pass to the flow\n            - run_config (Optional[Dict[str, Any]]): a dictionary of run configuration options\n            - idempotency_key (Optional[str]): a key to ensure idempotency when creating the flow run\n\n        Returns:\n            - FlowRunResponse: the created flow run\n        \"\"\"\n        return run_sync(\n            self._prefect_client.create_flow_run(\n                flow_id=flow_id,\n                parameters=parameters,\n                context=context,\n                scheduled_start_time=scheduled_start_time,\n                run_name=run_name,\n                labels=labels,\n                parameters_json=parameters_json,\n                run_config=run_config,\n                idempotency_key=idempotency_key,\n            )\n        )\n\n    async def set_flow_run_state(\n        self,\n        flow_run_id: UUID,\n        state: \"prefect.states.State\",\n        force: bool = False,\n    ) -> OrchestrationResult:\n        \"\"\"\n        Set the state of a flow 
run.\n\n        Args:\n            flow_run_id: the id of the flow run\n            state: the state to set\n            force: if True, disregard orchestration logic when setting the state,\n                forcing the Prefect API to accept the state\n\n        Returns:\n            an OrchestrationResult model representation of state orchestration output\n        \"\"\"\n        return run_sync(\n            self._prefect_client.set_flow_run_state(\n                flow_run_id=flow_run_id,\n                state=state,\n                force=force,\n            )\n        )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.SyncPrefectClient.hello","title":"hello","text":"

Send a GET request to /hello for testing purposes.

Source code in prefect/client/orchestration.py
def hello(self) -> httpx.Response:\n    \"\"\"\n    Send a GET request to /hello for testing purposes.\n    \"\"\"\n    return run_sync(self._prefect_client.hello())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.SyncPrefectClient.create_task_run","title":"create_task_run","text":"

Create a task run

Parameters:

Name Type Description Default task Task[P, R]

The Task to run

required flow_run_id Optional[UUID]

The flow run id with which to associate the task run

required dynamic_key str

A key unique to this particular run of a Task within the flow

required name Optional[str]

An optional name for the task run

None extra_tags Optional[Iterable[str]]

an optional list of extra tags to apply to the task run in addition to task.tags

None state Optional[State[R]]

The initial state for the run. If not provided, defaults to Pending for now. Should always be a Scheduled type.

None task_inputs Optional[Dict[str, List[Union[TaskRunResult, Parameter, Constant]]]]

the set of inputs passed to the task

None

Returns:

Type Description TaskRun

The created task run.

Source code in prefect/client/orchestration.py
def create_task_run(\n    self,\n    task: \"TaskObject[P, R]\",\n    flow_run_id: Optional[UUID],\n    dynamic_key: str,\n    name: Optional[str] = None,\n    extra_tags: Optional[Iterable[str]] = None,\n    state: Optional[prefect.states.State[R]] = None,\n    task_inputs: Optional[\n        Dict[\n            str,\n            List[\n                Union[\n                    TaskRunResult,\n                    Parameter,\n                    Constant,\n                ]\n            ],\n        ]\n    ] = None,\n) -> TaskRun:\n    \"\"\"\n    Create a task run\n\n    Args:\n        task: The Task to run\n        flow_run_id: The flow run id with which to associate the task run\n        dynamic_key: A key unique to this particular run of a Task within the flow\n        name: An optional name for the task run\n        extra_tags: an optional list of extra tags to apply to the task run in\n            addition to `task.tags`\n        state: The initial state for the run. If not provided, defaults to\n            `Pending` for now. Should always be a `Scheduled` type.\n        task_inputs: the set of inputs passed to the task\n\n    Returns:\n        The created task run.\n    \"\"\"\n    return run_sync(\n        self._prefect_client.create_task_run(\n            task=task,\n            flow_run_id=flow_run_id,\n            dynamic_key=dynamic_key,\n            name=name,\n            extra_tags=extra_tags,\n            state=state,\n            task_inputs=task_inputs,\n        )\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.SyncPrefectClient.set_task_run_state","title":"set_task_run_state","text":"

Set the state of a task run.

Parameters:

Name Type Description Default task_run_id UUID

the id of the task run

required state State

the state to set

required force bool

if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state

False

Returns:

Type Description OrchestrationResult

an OrchestrationResult model representation of state orchestration output

Source code in prefect/client/orchestration.py
def set_task_run_state(\n    self,\n    task_run_id: UUID,\n    state: prefect.states.State,\n    force: bool = False,\n) -> OrchestrationResult:\n    \"\"\"\n    Set the state of a task run.\n\n    Args:\n        task_run_id: the id of the task run\n        state: the state to set\n        force: if True, disregard orchestration logic when setting the state,\n            forcing the Prefect API to accept the state\n\n    Returns:\n        an OrchestrationResult model representation of state orchestration output\n    \"\"\"\n    return run_sync(\n        self._prefect_client.set_task_run_state(\n            task_run_id=task_run_id,\n            state=state,\n            force=force,\n        )\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.SyncPrefectClient.create_flow_run","title":"create_flow_run","text":"

Create a new flow run.

Parameters:

Name Type Description Default flow_id UUID

the ID of the flow to create a run for

required parameters Optional[Dict[str, Any]]

a dictionary of parameter values to pass to the flow

None context Optional[Dict[str, Any]]

a dictionary of context values to pass to the flow

None scheduled_start_time Optional[datetime.datetime]

the scheduled start time for the flow run

None run_name Optional[str]

a name to assign to the flow run

None labels Optional[List[str]]

a list of labels to assign to the flow run

None parameters_json Optional[str]

a JSON string of parameter values to pass to the flow

None run_config Optional[Dict[str, Any]]

a dictionary of run configuration options

None idempotency_key Optional[str]

a key to ensure idempotency when creating the flow run

None

Returns:

Type Description FlowRunResponse

the created flow run

Source code in prefect/client/orchestration.py
def create_flow_run(\n    self,\n    flow_id: UUID,\n    parameters: Optional[Dict[str, Any]] = None,\n    context: Optional[Dict[str, Any]] = None,\n    scheduled_start_time: Optional[datetime.datetime] = None,\n    run_name: Optional[str] = None,\n    labels: Optional[List[str]] = None,\n    parameters_json: Optional[str] = None,\n    run_config: Optional[Dict[str, Any]] = None,\n    idempotency_key: Optional[str] = None,\n) -> FlowRunResponse:\n    \"\"\"\n    Create a new flow run.\n\n    Args:\n        - flow_id (UUID): the ID of the flow to create a run for\n        - parameters (Optional[Dict[str, Any]]): a dictionary of parameter values to pass to the flow\n        - context (Optional[Dict[str, Any]]): a dictionary of context values to pass to the flow\n        - scheduled_start_time (Optional[datetime.datetime]): the scheduled start time for the flow run\n        - run_name (Optional[str]): a name to assign to the flow run\n        - labels (Optional[List[str]]): a list of labels to assign to the flow run\n        - parameters_json (Optional[str]): a JSON string of parameter values to pass to the flow\n        - run_config (Optional[Dict[str, Any]]): a dictionary of run configuration options\n        - idempotency_key (Optional[str]): a key to ensure idempotency when creating the flow run\n\n    Returns:\n        - FlowRunResponse: the created flow run\n    \"\"\"\n    return run_sync(\n        self._prefect_client.create_flow_run(\n            flow_id=flow_id,\n            parameters=parameters,\n            context=context,\n            scheduled_start_time=scheduled_start_time,\n            run_name=run_name,\n            labels=labels,\n            parameters_json=parameters_json,\n            run_config=run_config,\n            idempotency_key=idempotency_key,\n        )\n    )\n
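A synchronous sketch using the documented sync client; the flow UUID is a placeholder for a flow already registered with the API.

from uuid import UUID\n\nfrom prefect.client.orchestration import get_client\n\nwith get_client(sync_client=True) as client:\n    flow_run = client.create_flow_run(\n        flow_id=UUID('00000000-0000-0000-0000-000000000000'),  # placeholder flow id\n        run_name='example-run',\n    )\n    print(flow_run.id)\n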
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.SyncPrefectClient.set_flow_run_state","title":"set_flow_run_state async","text":"

Set the state of a flow run.

Parameters:

Name Type Description Default flow_run_id UUID

the id of the flow run

required state State

the state to set

required force bool

if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state

False

Returns:

Type Description OrchestrationResult

an OrchestrationResult model representation of state orchestration output

Source code in prefect/client/orchestration.py
async def set_flow_run_state(\n    self,\n    flow_run_id: UUID,\n    state: \"prefect.states.State\",\n    force: bool = False,\n) -> OrchestrationResult:\n    \"\"\"\n    Set the state of a flow run.\n\n    Args:\n        flow_run_id: the id of the flow run\n        state: the state to set\n        force: if True, disregard orchestration logic when setting the state,\n            forcing the Prefect API to accept the state\n\n    Returns:\n        an OrchestrationResult model representation of state orchestration output\n    \"\"\"\n    return run_sync(\n        self._prefect_client.set_flow_run_state(\n            flow_run_id=flow_run_id,\n            state=state,\n            force=force,\n        )\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.get_client","title":"get_client","text":"

Retrieve an HTTP client for communicating with the Prefect REST API.

The client must be context managed; for example:

async with get_client() as client:\n    await client.hello()\n

To return a synchronous client, pass sync_client=True:

with get_client(sync_client=True) as client:\n    client.hello()\n
Source code in prefect/client/orchestration.py
def get_client(\n    httpx_settings: Optional[Dict[str, Any]] = None, sync_client: bool = False\n) -> \"PrefectClient\":\n    \"\"\"\n    Retrieve a HTTP client for communicating with the Prefect REST API.\n\n    The client must be context managed; for example:\n\n    ```python\n    async with get_client() as client:\n        await client.hello()\n    ```\n\n    To return a synchronous client, pass sync_client=True:\n\n    ```python\n    with get_client(sync_client=True) as client:\n        client.hello()\n    ```\n    \"\"\"\n    ctx = prefect.context.get_settings_context()\n    api = PREFECT_API_URL.value()\n\n    if not api:\n        # create an ephemeral API if none was provided\n        from prefect.server.api.server import create_app\n\n        api = create_app(ctx.settings, ephemeral=True)\n\n    if sync_client:\n        return SyncPrefectClient(\n            api,\n            api_key=PREFECT_API_KEY.value(),\n            httpx_settings=httpx_settings,\n        )\n    else:\n        return PrefectClient(\n            api,\n            api_key=PREFECT_API_KEY.value(),\n            httpx_settings=httpx_settings,\n        )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_1","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions","title":"prefect.client.schemas.actions","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.StateCreate","title":"StateCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a new state.

Source code in prefect/client/schemas/actions.py
class StateCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a new state.\"\"\"\n\n    type: StateType\n    name: Optional[str] = Field(default=None)\n    message: Optional[str] = Field(default=None, examples=[\"Run started\"])\n    state_details: StateDetails = Field(default_factory=StateDetails)\n    data: Union[\"BaseResult[R]\", \"DataDocument[R]\", Any] = Field(\n        default=None,\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowCreate","title":"FlowCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a flow.

Source code in prefect/client/schemas/actions.py
class FlowCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a flow.\"\"\"\n\n    name: str = Field(\n        default=..., description=\"The name of the flow\", examples=[\"my-flow\"]\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of flow tags\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowUpdate","title":"FlowUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a flow.

Source code in prefect/client/schemas/actions.py
class FlowUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a flow.\"\"\"\n\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of flow tags\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.DeploymentCreate","title":"DeploymentCreate","text":"

Bases: DeprecatedInfraOverridesField, ActionBaseModel

Data used by the Prefect REST API to create a deployment.

Source code in prefect/client/schemas/actions.py
class DeploymentCreate(DeprecatedInfraOverridesField, ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a deployment.\"\"\"\n\n    @root_validator(pre=True)\n    def remove_old_fields(cls, values):\n        return remove_old_deployment_fields(values)\n\n    name: str = Field(..., description=\"The name of the deployment.\")\n    flow_id: UUID = Field(..., description=\"The ID of the flow to deploy.\")\n    is_schedule_active: Optional[bool] = Field(None)\n    paused: Optional[bool] = Field(None)\n    schedules: List[DeploymentScheduleCreate] = Field(\n        default_factory=list,\n        description=\"A list of schedules for the deployment.\",\n    )\n    enforce_parameter_schema: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Whether or not the deployment should enforce the parameter schema.\"\n        ),\n    )\n    parameter_openapi_schema: Optional[Dict[str, Any]] = Field(None)\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Parameters for flow runs scheduled by the deployment.\",\n    )\n    tags: List[str] = Field(default_factory=list)\n    pull_steps: Optional[List[dict]] = Field(None)\n\n    manifest_path: Optional[str] = Field(None)\n    work_queue_name: Optional[str] = Field(None)\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the deployment's work pool.\",\n        examples=[\"my-work-pool\"],\n    )\n    storage_document_id: Optional[UUID] = Field(None)\n    infrastructure_document_id: Optional[UUID] = Field(None)\n    schedule: Optional[SCHEDULE_TYPES] = Field(None)\n    description: Optional[str] = Field(None)\n    path: Optional[str] = Field(None)\n    version: Optional[str] = Field(None)\n    entrypoint: Optional[str] = Field(None)\n    job_variables: Optional[Dict[str, Any]] = Field(\n        default_factory=dict,\n        description=\"Overrides to apply to flow run infrastructure at runtime.\",\n    )\n\n    def check_valid_configuration(self, base_job_template: dict):\n        \"\"\"Check that the combination of base_job_template defaults\n        and job_variables conforms to the specified schema.\n        \"\"\"\n        variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n        if variables_schema is not None:\n            # jsonschema considers required fields, even if that field has a default,\n            # to still be required. To get around this we remove the fields from\n            # required if there is a default present.\n            required = variables_schema.get(\"required\")\n            properties = variables_schema.get(\"properties\")\n            if required is not None and properties is not None:\n                for k, v in properties.items():\n                    if \"default\" in v and k in required:\n                        required.remove(k)\n\n            jsonschema.validate(self.job_variables, variables_schema)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.DeploymentCreate.check_valid_configuration","title":"check_valid_configuration","text":"

Check that the combination of base_job_template defaults and job_variables conforms to the specified schema.

Source code in prefect/client/schemas/actions.py
def check_valid_configuration(self, base_job_template: dict):\n    \"\"\"Check that the combination of base_job_template defaults\n    and job_variables conforms to the specified schema.\n    \"\"\"\n    variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n    if variables_schema is not None:\n        # jsonschema considers required fields, even if that field has a default,\n        # to still be required. To get around this we remove the fields from\n        # required if there is a default present.\n        required = variables_schema.get(\"required\")\n        properties = variables_schema.get(\"properties\")\n        if required is not None and properties is not None:\n            for k, v in properties.items():\n                if \"default\" in v and k in required:\n                    required.remove(k)\n\n        jsonschema.validate(self.job_variables, variables_schema)\n
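The validation above strips defaulted fields from the schema's required list before calling jsonschema.validate on job_variables. A minimal sketch of how this might be exercised, assuming a hypothetical base job template and placeholder deployment values (the name and flow ID below are illustrative only):

from uuid import uuid4

from prefect.client.schemas.actions import DeploymentCreate

# Hypothetical base job template carrying a "variables" JSON schema.
base_job_template = {
    "variables": {
        "type": "object",
        "properties": {
            "image": {"type": "string", "default": "python:3.11"},
            "cpu": {"type": "integer"},
        },
        "required": ["image"],
    }
}

deployment = DeploymentCreate(
    name="example-deployment",  # placeholder name
    flow_id=uuid4(),            # placeholder flow ID
    job_variables={"image": "python:3.12", "cpu": 2},
)

# "image" is dropped from "required" because it has a default; a
# jsonschema ValidationError is raised if job_variables do not conform
# to the remaining schema.
deployment.check_valid_configuration(base_job_template)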
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.DeploymentUpdate","title":"DeploymentUpdate","text":"

Bases: DeprecatedInfraOverridesField, ActionBaseModel

Data used by the Prefect REST API to update a deployment.

Source code in prefect/client/schemas/actions.py
class DeploymentUpdate(DeprecatedInfraOverridesField, ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a deployment.\"\"\"\n\n    @root_validator(pre=True)\n    def remove_old_fields(cls, values):\n        return remove_old_deployment_fields(values)\n\n    @validator(\"schedule\")\n    def validate_none_schedule(cls, v):\n        return return_none_schedule(v)\n\n    version: Optional[str] = Field(None)\n    schedule: Optional[SCHEDULE_TYPES] = Field(None)\n    description: Optional[str] = Field(None)\n    is_schedule_active: bool = Field(None)\n    parameters: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=\"Parameters for flow runs scheduled by the deployment.\",\n    )\n    tags: List[str] = Field(default_factory=list)\n    work_queue_name: Optional[str] = Field(None)\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the deployment's work pool.\",\n        examples=[\"my-work-pool\"],\n    )\n    path: Optional[str] = Field(None)\n    job_variables: Optional[Dict[str, Any]] = Field(\n        default_factory=dict,\n        description=\"Overrides to apply to flow run infrastructure at runtime.\",\n    )\n    entrypoint: Optional[str] = Field(None)\n    manifest_path: Optional[str] = Field(None)\n    storage_document_id: Optional[UUID] = Field(None)\n    infrastructure_document_id: Optional[UUID] = Field(None)\n    enforce_parameter_schema: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Whether or not the deployment should enforce the parameter schema.\"\n        ),\n    )\n\n    def check_valid_configuration(self, base_job_template: dict):\n        \"\"\"Check that the combination of base_job_template defaults\n        and job_variables conforms to the specified schema.\n        \"\"\"\n        variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n        if variables_schema is not None:\n            # jsonschema considers required fields, even if that field has a default,\n            # to still be required. To get around this we remove the fields from\n            # required if there is a default present.\n            required = variables_schema.get(\"required\")\n            properties = variables_schema.get(\"properties\")\n            if required is not None and properties is not None:\n                for k, v in properties.items():\n                    if \"default\" in v and k in required:\n                        required.remove(k)\n\n        if variables_schema is not None:\n            jsonschema.validate(self.job_variables, variables_schema)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.DeploymentUpdate.check_valid_configuration","title":"check_valid_configuration","text":"

Check that the combination of base_job_template defaults and job_variables conforms to the specified schema.

Source code in prefect/client/schemas/actions.py
def check_valid_configuration(self, base_job_template: dict):\n    \"\"\"Check that the combination of base_job_template defaults\n    and job_variables conforms to the specified schema.\n    \"\"\"\n    variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n    if variables_schema is not None:\n        # jsonschema considers required fields, even if that field has a default,\n        # to still be required. To get around this we remove the fields from\n        # required if there is a default present.\n        required = variables_schema.get(\"required\")\n        properties = variables_schema.get(\"properties\")\n        if required is not None and properties is not None:\n            for k, v in properties.items():\n                if \"default\" in v and k in required:\n                    required.remove(k)\n\n    if variables_schema is not None:\n        jsonschema.validate(self.job_variables, variables_schema)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowRunUpdate","title":"FlowRunUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a flow run.

Source code in prefect/client/schemas/actions.py
class FlowRunUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a flow run.\"\"\"\n\n    name: Optional[str] = Field(None)\n    flow_version: Optional[str] = Field(None)\n    parameters: Optional[Dict[str, Any]] = Field(None)\n    empirical_policy: objects.FlowRunPolicy = Field(\n        default_factory=objects.FlowRunPolicy\n    )\n    tags: List[str] = Field(default_factory=list)\n    infrastructure_pid: Optional[str] = Field(None)\n    job_variables: Optional[Dict[str, Any]] = Field(None)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.TaskRunCreate","title":"TaskRunCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a task run.

Source code in prefect/client/schemas/actions.py
class TaskRunCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a task run\"\"\"\n\n    # TaskRunCreate states must be provided as StateCreate objects\n    state: Optional[StateCreate] = Field(\n        default=None, description=\"The state of the task run to create\"\n    )\n\n    name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the task run\",\n    )\n    flow_run_id: Optional[UUID] = Field(None)\n    task_key: str = Field(\n        default=..., description=\"A unique identifier for the task being run.\"\n    )\n    dynamic_key: str = Field(\n        default=...,\n        description=(\n            \"A dynamic key used to differentiate between multiple runs of the same task\"\n            \" within the same flow run.\"\n        ),\n    )\n    cache_key: Optional[str] = Field(None)\n    cache_expiration: Optional[objects.DateTimeTZ] = Field(None)\n    task_version: Optional[str] = Field(None)\n    empirical_policy: objects.TaskRunPolicy = Field(\n        default_factory=objects.TaskRunPolicy,\n    )\n    tags: List[str] = Field(default_factory=list)\n    task_inputs: Dict[\n        str,\n        List[\n            Union[\n                objects.TaskRunResult,\n                objects.Parameter,\n                objects.Constant,\n            ]\n        ],\n    ] = Field(default_factory=dict)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.TaskRunUpdate","title":"TaskRunUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a task run.

Source code in prefect/client/schemas/actions.py
class TaskRunUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a task run\"\"\"\n\n    name: Optional[str] = Field(None)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowRunCreate","title":"FlowRunCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a flow run.

Source code in prefect/client/schemas/actions.py
class FlowRunCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a flow run.\"\"\"\n\n    # FlowRunCreate states must be provided as StateCreate objects\n    state: Optional[StateCreate] = Field(\n        default=None, description=\"The state of the flow run to create\"\n    )\n\n    name: Optional[str] = Field(default=None, description=\"The name of the flow run.\")\n    flow_id: UUID = Field(default=..., description=\"The id of the flow being run.\")\n    deployment_id: Optional[UUID] = Field(None)\n    flow_version: Optional[str] = Field(None)\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The parameters for the flow run.\"\n    )\n    context: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The context for the flow run.\"\n    )\n    parent_task_run_id: Optional[UUID] = Field(None)\n    infrastructure_document_id: Optional[UUID] = Field(None)\n    empirical_policy: objects.FlowRunPolicy = Field(\n        default_factory=objects.FlowRunPolicy\n    )\n    tags: List[str] = Field(default_factory=list)\n    idempotency_key: Optional[str] = Field(None)\n\n    class Config(ActionBaseModel.Config):\n        json_dumps = orjson_dumps_extra_compatible\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.DeploymentFlowRunCreate","title":"DeploymentFlowRunCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a flow run from a deployment.

Source code in prefect/client/schemas/actions.py
class DeploymentFlowRunCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a flow run from a deployment.\"\"\"\n\n    # FlowRunCreate states must be provided as StateCreate objects\n    state: Optional[StateCreate] = Field(\n        default=None, description=\"The state of the flow run to create\"\n    )\n\n    name: Optional[str] = Field(default=None, description=\"The name of the flow run.\")\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The parameters for the flow run.\"\n    )\n    context: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The context for the flow run.\"\n    )\n    infrastructure_document_id: Optional[UUID] = Field(None)\n    empirical_policy: objects.FlowRunPolicy = Field(\n        default_factory=objects.FlowRunPolicy\n    )\n    tags: List[str] = Field(default_factory=list)\n    idempotency_key: Optional[str] = Field(None)\n    parent_task_run_id: Optional[UUID] = Field(None)\n    work_queue_name: Optional[str] = Field(None)\n    job_variables: Optional[dict] = Field(None)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.SavedSearchCreate","title":"SavedSearchCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a saved search.

Source code in prefect/client/schemas/actions.py
class SavedSearchCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a saved search.\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the saved search.\")\n    filters: List[objects.SavedSearchFilter] = Field(\n        default_factory=list, description=\"The filter set for the saved search.\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.ConcurrencyLimitCreate","title":"ConcurrencyLimitCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a concurrency limit.

Source code in prefect/client/schemas/actions.py
class ConcurrencyLimitCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a concurrency limit.\"\"\"\n\n    tag: str = Field(\n        default=..., description=\"A tag the concurrency limit is applied to.\"\n    )\n    concurrency_limit: int = Field(default=..., description=\"The concurrency limit.\")\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockTypeCreate","title":"BlockTypeCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a block type.

Source code in prefect/client/schemas/actions.py
class BlockTypeCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a block type.\"\"\"\n\n    name: str = Field(default=..., description=\"A block type's name\")\n    slug: str = Field(default=..., description=\"A block type's slug\")\n    logo_url: Optional[objects.HttpUrl] = Field(\n        default=None, description=\"Web URL for the block type's logo\"\n    )\n    documentation_url: Optional[objects.HttpUrl] = Field(\n        default=None, description=\"Web URL for the block type's documentation\"\n    )\n    description: Optional[str] = Field(\n        default=None,\n        description=\"A short blurb about the corresponding block's intended use\",\n    )\n    code_example: Optional[str] = Field(\n        default=None,\n        description=\"A code snippet demonstrating use of the corresponding block\",\n    )\n\n    # validators\n    _validate_slug_format = validator(\"slug\", allow_reuse=True)(\n        validate_block_type_slug\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockTypeUpdate","title":"BlockTypeUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a block type.

Source code in prefect/client/schemas/actions.py
class BlockTypeUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a block type.\"\"\"\n\n    logo_url: Optional[objects.HttpUrl] = Field(None)\n    documentation_url: Optional[objects.HttpUrl] = Field(None)\n    description: Optional[str] = Field(None)\n    code_example: Optional[str] = Field(None)\n\n    @classmethod\n    def updatable_fields(cls) -> set:\n        return get_class_fields_only(cls)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockSchemaCreate","title":"BlockSchemaCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a block schema.

Source code in prefect/client/schemas/actions.py
class BlockSchemaCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a block schema.\"\"\"\n\n    fields: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The block schema's field schema\"\n    )\n    block_type_id: Optional[UUID] = Field(None)\n    capabilities: List[str] = Field(\n        default_factory=list,\n        description=\"A list of Block capabilities\",\n    )\n    version: str = Field(\n        default=objects.DEFAULT_BLOCK_SCHEMA_VERSION,\n        description=\"Human readable identifier for the block schema\",\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockDocumentCreate","title":"BlockDocumentCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a block document.

Source code in prefect/client/schemas/actions.py
class BlockDocumentCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a block document.\"\"\"\n\n    name: Optional[str] = Field(\n        default=None, description=\"The name of the block document\"\n    )\n    data: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The block document's data\"\n    )\n    block_schema_id: UUID = Field(\n        default=..., description=\"The block schema ID for the block document\"\n    )\n    block_type_id: UUID = Field(\n        default=..., description=\"The block type ID for the block document\"\n    )\n    is_anonymous: bool = Field(\n        default=False,\n        description=(\n            \"Whether the block is anonymous (anonymous blocks are usually created by\"\n            \" Prefect automatically)\"\n        ),\n    )\n\n    _validate_name_format = validator(\"name\", allow_reuse=True)(\n        validate_block_document_name\n    )\n\n    @root_validator\n    def validate_name_is_present_if_not_anonymous(cls, values):\n        return validate_name_present_on_nonanonymous_blocks(values)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockDocumentUpdate","title":"BlockDocumentUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a block document.

Source code in prefect/client/schemas/actions.py
class BlockDocumentUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a block document.\"\"\"\n\n    block_schema_id: Optional[UUID] = Field(\n        default=None, description=\"A block schema ID\"\n    )\n    data: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The block document's data\"\n    )\n    merge_existing_data: bool = Field(\n        default=True,\n        description=\"Whether to merge the existing data with the new data or replace it\",\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockDocumentReferenceCreate","title":"BlockDocumentReferenceCreate","text":"

Bases: ActionBaseModel

Data used to create a block document reference.

Source code in prefect/client/schemas/actions.py
class BlockDocumentReferenceCreate(ActionBaseModel):\n    \"\"\"Data used to create block document reference.\"\"\"\n\n    id: UUID = Field(default_factory=uuid4)\n    parent_block_document_id: UUID = Field(\n        default=..., description=\"ID of block document the reference is nested within\"\n    )\n    reference_block_document_id: UUID = Field(\n        default=..., description=\"ID of the nested block document\"\n    )\n    name: str = Field(\n        default=..., description=\"The name that the reference is nested under\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.LogCreate","title":"LogCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a log.

Source code in prefect/client/schemas/actions.py
class LogCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a log.\"\"\"\n\n    name: str = Field(default=..., description=\"The logger name.\")\n    level: int = Field(default=..., description=\"The log level.\")\n    message: str = Field(default=..., description=\"The log message.\")\n    timestamp: DateTimeTZ = Field(default=..., description=\"The log timestamp.\")\n    flow_run_id: Optional[UUID] = Field(None)\n    task_run_id: Optional[UUID] = Field(None)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.WorkPoolCreate","title":"WorkPoolCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a work pool.

Source code in prefect/client/schemas/actions.py
class WorkPoolCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a work pool.\"\"\"\n\n    name: str = Field(\n        description=\"The name of the work pool.\",\n    )\n    description: Optional[str] = Field(None)\n    type: str = Field(\n        description=\"The work pool type.\", default=\"prefect-agent\"\n    )  # TODO: change default\n    base_job_template: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"The base job template for the work pool.\",\n    )\n    is_paused: bool = Field(\n        default=False,\n        description=\"Whether the work pool is paused.\",\n    )\n    concurrency_limit: Optional[NonNegativeInteger] = Field(\n        default=None, description=\"A concurrency limit for the work pool.\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.WorkPoolUpdate","title":"WorkPoolUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a work pool.

Source code in prefect/client/schemas/actions.py
class WorkPoolUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a work pool.\"\"\"\n\n    description: Optional[str] = Field(None)\n    is_paused: Optional[bool] = Field(None)\n    base_job_template: Optional[Dict[str, Any]] = Field(None)\n    concurrency_limit: Optional[int] = Field(None)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.WorkQueueCreate","title":"WorkQueueCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a work queue.

Source code in prefect/client/schemas/actions.py
class WorkQueueCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a work queue.\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the work queue.\")\n    description: Optional[str] = Field(None)\n    is_paused: bool = Field(\n        default=False,\n        description=\"Whether the work queue is paused.\",\n    )\n    concurrency_limit: Optional[int] = Field(\n        default=None,\n        description=\"A concurrency limit for the work queue.\",\n    )\n    priority: Optional[int] = Field(\n        default=None,\n        description=(\n            \"The queue's priority. Lower values are higher priority (1 is the highest).\"\n        ),\n    )\n\n    # DEPRECATED\n\n    filter: Optional[objects.QueueFilter] = Field(\n        None,\n        description=\"DEPRECATED: Filter criteria for the work queue.\",\n        deprecated=True,\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.WorkQueueUpdate","title":"WorkQueueUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a work queue.

Source code in prefect/client/schemas/actions.py
class WorkQueueUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a work queue.\"\"\"\n\n    name: Optional[str] = Field(None)\n    description: Optional[str] = Field(None)\n    is_paused: bool = Field(\n        default=False, description=\"Whether or not the work queue is paused.\"\n    )\n    concurrency_limit: Optional[int] = Field(None)\n    priority: Optional[int] = Field(None)\n    last_polled: Optional[DateTimeTZ] = Field(None)\n\n    # DEPRECATED\n\n    filter: Optional[objects.QueueFilter] = Field(\n        None,\n        description=\"DEPRECATED: Filter criteria for the work queue.\",\n        deprecated=True,\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowRunNotificationPolicyCreate","title":"FlowRunNotificationPolicyCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a flow run notification policy.

Source code in prefect/client/schemas/actions.py
class FlowRunNotificationPolicyCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a flow run notification policy.\"\"\"\n\n    is_active: bool = Field(\n        default=True, description=\"Whether the policy is currently active\"\n    )\n    state_names: List[str] = Field(\n        default=..., description=\"The flow run states that trigger notifications\"\n    )\n    tags: List[str] = Field(\n        default=...,\n        description=\"The flow run tags that trigger notifications (set [] to disable)\",\n    )\n    block_document_id: UUID = Field(\n        default=..., description=\"The block document ID used for sending notifications\"\n    )\n    message_template: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A templatable notification message. Use {braces} to add variables.\"\n            \" Valid variables include:\"\n            f\" {listrepr(sorted(objects.FLOW_RUN_NOTIFICATION_TEMPLATE_KWARGS), sep=', ')}\"\n        ),\n        examples=[\n            \"Flow run {flow_run_name} with id {flow_run_id} entered state\"\n            \" {flow_run_state_name}.\"\n        ],\n    )\n\n    @validator(\"message_template\")\n    def validate_message_template_variables(cls, v):\n        return validate_message_template_variables(v)\n
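A minimal sketch of constructing this policy, assuming a placeholder block document ID and reusing the template variables shown in the field's own example; the state names and tag are illustrative:

from uuid import uuid4

from prefect.client.schemas.actions import FlowRunNotificationPolicyCreate

policy = FlowRunNotificationPolicyCreate(
    state_names=["Failed", "Crashed"],  # states that trigger a notification
    tags=[],                            # [] disables tag filtering, per the field description above
    block_document_id=uuid4(),          # placeholder; use a real notification block document ID
    message_template=(
        "Flow run {flow_run_name} with id {flow_run_id} entered state"
        " {flow_run_state_name}."
    ),
)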
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowRunNotificationPolicyUpdate","title":"FlowRunNotificationPolicyUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a flow run notification policy.

Source code in prefect/client/schemas/actions.py
class FlowRunNotificationPolicyUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a flow run notification policy.\"\"\"\n\n    is_active: Optional[bool] = Field(None)\n    state_names: Optional[List[str]] = Field(None)\n    tags: Optional[List[str]] = Field(None)\n    block_document_id: Optional[UUID] = Field(None)\n    message_template: Optional[str] = Field(None)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.ArtifactCreate","title":"ArtifactCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create an artifact.

Source code in prefect/client/schemas/actions.py
class ArtifactCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create an artifact.\"\"\"\n\n    key: Optional[str] = Field(None)\n    type: Optional[str] = Field(None)\n    description: Optional[str] = Field(None)\n    data: Optional[Union[Dict[str, Any], Any]] = Field(None)\n    metadata_: Optional[Dict[str, str]] = Field(None)\n    flow_run_id: Optional[UUID] = Field(None)\n    task_run_id: Optional[UUID] = Field(None)\n\n    _validate_artifact_format = validator(\"key\", allow_reuse=True)(\n        validate_artifact_key\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.ArtifactUpdate","title":"ArtifactUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update an artifact.

Source code in prefect/client/schemas/actions.py
class ArtifactUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update an artifact.\"\"\"\n\n    data: Optional[Union[Dict[str, Any], Any]] = Field(None)\n    description: Optional[str] = Field(None)\n    metadata_: Optional[Dict[str, str]] = Field(None)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.VariableCreate","title":"VariableCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a Variable.

Source code in prefect/client/schemas/actions.py
class VariableCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a Variable.\"\"\"\n\n    name: str = Field(\n        default=...,\n        description=\"The name of the variable\",\n        examples=[\"my_variable\"],\n        max_length=objects.MAX_VARIABLE_NAME_LENGTH,\n    )\n    value: str = Field(\n        default=...,\n        description=\"The value of the variable\",\n        examples=[\"my-value\"],\n        max_length=objects.MAX_VARIABLE_VALUE_LENGTH,\n    )\n    tags: Optional[List[str]] = Field(default=None)\n\n    # validators\n    _validate_name_format = validator(\"name\", allow_reuse=True)(validate_variable_name)\n
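A minimal sketch of constructing a variable payload, reusing the example name and value from the field metadata above; the tag is a hypothetical addition:

from prefect.client.schemas.actions import VariableCreate

variable = VariableCreate(
    name="my_variable",  # example name from the field metadata above
    value="my-value",
    tags=["config"],     # hypothetical tag
)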
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.VariableUpdate","title":"VariableUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a Variable.

Source code in prefect/client/schemas/actions.py
class VariableUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a Variable.\"\"\"\n\n    name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the variable\",\n        examples=[\"my_variable\"],\n        max_length=objects.MAX_VARIABLE_NAME_LENGTH,\n    )\n    value: Optional[str] = Field(\n        default=None,\n        description=\"The value of the variable\",\n        examples=[\"my-value\"],\n        max_length=objects.MAX_VARIABLE_NAME_LENGTH,\n    )\n    tags: Optional[List[str]] = Field(default=None)\n\n    # validators\n    _validate_name_format = validator(\"name\", allow_reuse=True)(validate_variable_name)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.GlobalConcurrencyLimitCreate","title":"GlobalConcurrencyLimitCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a global concurrency limit.

Source code in prefect/client/schemas/actions.py
class GlobalConcurrencyLimitCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a global concurrency limit.\"\"\"\n\n    name: str = Field(description=\"The name of the global concurrency limit.\")\n    limit: int = Field(\n        description=(\n            \"The maximum number of slots that can be occupied on this concurrency\"\n            \" limit.\"\n        )\n    )\n    active: Optional[bool] = Field(\n        default=True,\n        description=\"Whether or not the concurrency limit is in an active state.\",\n    )\n    active_slots: Optional[int] = Field(\n        default=0,\n        description=\"Number of tasks currently using a concurrency slot.\",\n    )\n    slot_decay_per_second: Optional[float] = Field(\n        default=0.0,\n        description=(\n            \"Controls the rate at which slots are released when the concurrency limit\"\n            \" is used as a rate limit.\"\n        ),\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.GlobalConcurrencyLimitUpdate","title":"GlobalConcurrencyLimitUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a global concurrency limit.

Source code in prefect/client/schemas/actions.py
class GlobalConcurrencyLimitUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a global concurrency limit.\"\"\"\n\n    name: Optional[str] = Field(None)\n    limit: Optional[NonNegativeInteger] = Field(None)\n    active: Optional[NonNegativeInteger] = Field(None)\n    active_slots: Optional[NonNegativeInteger] = Field(None)\n    slot_decay_per_second: Optional[NonNegativeFloat] = Field(None)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_2","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters","title":"prefect.client.schemas.filters","text":"

Schemas that define Prefect REST API filtering operations.

","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.Operator","title":"Operator","text":"

Bases: AutoEnum

Operators for combining filter criteria.

Source code in prefect/client/schemas/filters.py
class Operator(AutoEnum):\n    \"\"\"Operators for combining filter criteria.\"\"\"\n\n    and_ = AutoEnum.auto()\n    or_ = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.OperatorMixin","title":"OperatorMixin","text":"

Base model for Prefect filters that combines criteria with a user-provided operator.

Source code in prefect/client/schemas/filters.py
class OperatorMixin:\n    \"\"\"Base model for Prefect filters that combines criteria with a user-provided operator\"\"\"\n\n    operator: Operator = Field(\n        default=Operator.and_,\n        description=\"Operator for combining filter criteria. Defaults to 'and_'.\",\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowFilterId","title":"FlowFilterId","text":"

Bases: PrefectBaseModel

Filter by Flow.id.

Source code in prefect/client/schemas/filters.py
class FlowFilterId(PrefectBaseModel):\n    \"\"\"Filter by `Flow.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow ids to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowFilterName","title":"FlowFilterName","text":"

Bases: PrefectBaseModel

Filter by Flow.name.

Source code in prefect/client/schemas/filters.py
class FlowFilterName(PrefectBaseModel):\n    \"\"\"Filter by `Flow.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of flow names to include\",\n        examples=[[\"my-flow-1\", \"my-flow-2\"]],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        examples=[\"marvin\"],\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowFilterTags","title":"FlowFilterTags","text":"

Bases: PrefectBaseModel, OperatorMixin

Filter by Flow.tags.

Source code in prefect/client/schemas/filters.py
class FlowFilterTags(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter by `Flow.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"tag-1\", \"tag-2\"]],\n        description=(\n            \"A list of tags. Flows will be returned only if their tags are a superset\"\n            \" of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include flows without tags\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowFilter","title":"FlowFilter","text":"

Bases: PrefectBaseModel, OperatorMixin

Filter for flows. Only flows matching all criteria will be returned.

Source code in prefect/client/schemas/filters.py
class FlowFilter(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter for flows. Only flows matching all criteria will be returned.\"\"\"\n\n    id: Optional[FlowFilterId] = Field(\n        default=None, description=\"Filter criteria for `Flow.id`\"\n    )\n    name: Optional[FlowFilterName] = Field(\n        default=None, description=\"Filter criteria for `Flow.name`\"\n    )\n    tags: Optional[FlowFilterTags] = Field(\n        default=None, description=\"Filter criteria for `Flow.tags`\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterId","title":"FlowRunFilterId","text":"

Bases: PrefectBaseModel

Filter by FlowRun.id.

Source code in prefect/client/schemas/filters.py
class FlowRunFilterId(PrefectBaseModel):\n    \"\"\"Filter by FlowRun.id.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run ids to include\"\n    )\n    not_any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run ids to exclude\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterName","title":"FlowRunFilterName","text":"

Bases: PrefectBaseModel

Filter by FlowRun.name.

Source code in prefect/client/schemas/filters.py
class FlowRunFilterName(PrefectBaseModel):\n    \"\"\"Filter by `FlowRun.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of flow run names to include\",\n        examples=[[\"my-flow-run-1\", \"my-flow-run-2\"]],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        examples=[\"marvin\"],\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterTags","title":"FlowRunFilterTags","text":"

Bases: PrefectBaseModel, OperatorMixin

Filter by FlowRun.tags.

Source code in prefect/client/schemas/filters.py
class FlowRunFilterTags(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter by `FlowRun.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"tag-1\", \"tag-2\"]],\n        description=(\n            \"A list of tags. Flow runs will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include flow runs without tags\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterDeploymentId","title":"FlowRunFilterDeploymentId","text":"

Bases: PrefectBaseModel, OperatorMixin

Filter by FlowRun.deployment_id.

Source code in prefect/client/schemas/filters.py
class FlowRunFilterDeploymentId(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter by `FlowRun.deployment_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run deployment ids to include\"\n    )\n    is_null_: Optional[bool] = Field(\n        default=None,\n        description=\"If true, only include flow runs without deployment ids\",\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterWorkQueueName","title":"FlowRunFilterWorkQueueName","text":"

Bases: PrefectBaseModel, OperatorMixin

Filter by FlowRun.work_queue_name.

Source code in prefect/client/schemas/filters.py
class FlowRunFilterWorkQueueName(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter by `FlowRun.work_queue_name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of work queue names to include\",\n        examples=[[\"work_queue_1\", \"work_queue_2\"]],\n    )\n    is_null_: Optional[bool] = Field(\n        default=None,\n        description=\"If true, only include flow runs without work queue names\",\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterStateType","title":"FlowRunFilterStateType","text":"

Bases: PrefectBaseModel

Filter by FlowRun.state_type.

Source code in prefect/client/schemas/filters.py
class FlowRunFilterStateType(PrefectBaseModel):\n    \"\"\"Filter by `FlowRun.state_type`.\"\"\"\n\n    any_: Optional[List[StateType]] = Field(\n        default=None, description=\"A list of flow run state types to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterFlowVersion","title":"FlowRunFilterFlowVersion","text":"

Bases: PrefectBaseModel

Filter by FlowRun.flow_version.

Source code in prefect/client/schemas/filters.py
class FlowRunFilterFlowVersion(PrefectBaseModel):\n    \"\"\"Filter by `FlowRun.flow_version`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of flow run flow_versions to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterStartTime","title":"FlowRunFilterStartTime","text":"

Bases: PrefectBaseModel

Filter by FlowRun.start_time.

Source code in prefect/client/schemas/filters.py
class FlowRunFilterStartTime(PrefectBaseModel):\n    \"\"\"Filter by `FlowRun.start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs starting at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs starting at or after this time\",\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only return flow runs without a start time\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterExpectedStartTime","title":"FlowRunFilterExpectedStartTime","text":"

Bases: PrefectBaseModel

Filter by FlowRun.expected_start_time.

Source code in prefect/client/schemas/filters.py
class FlowRunFilterExpectedStartTime(PrefectBaseModel):\n    \"\"\"Filter by `FlowRun.expected_start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs scheduled to start at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs scheduled to start at or after this time\",\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterNextScheduledStartTime","title":"FlowRunFilterNextScheduledStartTime","text":"

Bases: PrefectBaseModel

Filter by FlowRun.next_scheduled_start_time.

Source code in prefect/client/schemas/filters.py
class FlowRunFilterNextScheduledStartTime(PrefectBaseModel):\n    \"\"\"Filter by `FlowRun.next_scheduled_start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include flow runs with a next_scheduled_start_time or before this\"\n            \" time\"\n        ),\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include flow runs with a next_scheduled_start_time at or after this\"\n            \" time\"\n        ),\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterParentFlowRunId","title":"FlowRunFilterParentFlowRunId","text":"

Bases: PrefectBaseModel, OperatorMixin

Filter for subflows of the given flow runs.

Source code in prefect/client/schemas/filters.py
class FlowRunFilterParentFlowRunId(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter for subflows of the given flow runs\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run parents to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterParentTaskRunId","title":"FlowRunFilterParentTaskRunId","text":"

Bases: PrefectBaseModel, OperatorMixin

Filter by FlowRun.parent_task_run_id.

Source code in prefect/client/schemas/filters.py
class FlowRunFilterParentTaskRunId(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter by `FlowRun.parent_task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run parent_task_run_ids to include\"\n    )\n    is_null_: Optional[bool] = Field(\n        default=None,\n        description=\"If true, only include flow runs without parent_task_run_id\",\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterIdempotencyKey","title":"FlowRunFilterIdempotencyKey","text":"

Bases: PrefectBaseModel

Filter by FlowRun.idempotency_key.

Source code in prefect/client/schemas/filters.py
class FlowRunFilterIdempotencyKey(PrefectBaseModel):\n    \"\"\"Filter by FlowRun.idempotency_key.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of flow run idempotency keys to include\"\n    )\n    not_any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of flow run idempotency keys to exclude\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilter","title":"FlowRunFilter","text":"

Bases: PrefectBaseModel, OperatorMixin

Filter flow runs. Only flow runs matching all criteria will be returned.

Source code in prefect/client/schemas/filters.py
class FlowRunFilter(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter flow runs. Only flow runs matching all criteria will be returned\"\"\"\n\n    id: Optional[FlowRunFilterId] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.id`\"\n    )\n    name: Optional[FlowRunFilterName] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.name`\"\n    )\n    tags: Optional[FlowRunFilterTags] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.tags`\"\n    )\n    deployment_id: Optional[FlowRunFilterDeploymentId] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.deployment_id`\"\n    )\n    work_queue_name: Optional[FlowRunFilterWorkQueueName] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.work_queue_name\"\n    )\n    state: Optional[FlowRunFilterState] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.state`\"\n    )\n    flow_version: Optional[FlowRunFilterFlowVersion] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.flow_version`\"\n    )\n    start_time: Optional[FlowRunFilterStartTime] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.start_time`\"\n    )\n    expected_start_time: Optional[FlowRunFilterExpectedStartTime] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.expected_start_time`\"\n    )\n    next_scheduled_start_time: Optional[FlowRunFilterNextScheduledStartTime] = Field(\n        default=None,\n        description=\"Filter criteria for `FlowRun.next_scheduled_start_time`\",\n    )\n    parent_flow_run_id: Optional[FlowRunFilterParentFlowRunId] = Field(\n        default=None, description=\"Filter criteria for subflows of the given flow runs\"\n    )\n    parent_task_run_id: Optional[FlowRunFilterParentTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.parent_task_run_id`\"\n    )\n    idempotency_key: Optional[FlowRunFilterIdempotencyKey] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.idempotency_key`\"\n    )\n
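The nested filter models above compose into a single FlowRunFilter. A minimal sketch, assuming pendulum (the datetime library behind DateTimeTZ) is available; the resulting filter would typically be handed to a client query method such as read_flow_runs, which is outside this excerpt:

import pendulum

from prefect.client.schemas.filters import (
    FlowRunFilter,
    FlowRunFilterStartTime,
    FlowRunFilterTags,
)

# Match flow runs that carry both tags AND started within the last day.
flow_run_filter = FlowRunFilter(
    tags=FlowRunFilterTags(all_=["tag-1", "tag-2"]),
    start_time=FlowRunFilterStartTime(
        after_=pendulum.now("UTC").subtract(days=1)
    ),
)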
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterFlowRunId","title":"TaskRunFilterFlowRunId","text":"

Bases: PrefectBaseModel

Filter by TaskRun.flow_run_id.

Source code in prefect/client/schemas/filters.py
class TaskRunFilterFlowRunId(PrefectBaseModel):\n    \"\"\"Filter by `TaskRun.flow_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run ids to include\"\n    )\n\n    is_null_: bool = Field(\n        default=False,\n        description=\"If true, only include task runs without a flow run id\",\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterId","title":"TaskRunFilterId","text":"

Bases: PrefectBaseModel

Filter by TaskRun.id.

Source code in prefect/client/schemas/filters.py
class TaskRunFilterId(PrefectBaseModel):\n    \"\"\"Filter by `TaskRun.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run ids to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterName","title":"TaskRunFilterName","text":"

Bases: PrefectBaseModel

Filter by TaskRun.name.

Source code in prefect/client/schemas/filters.py
class TaskRunFilterName(PrefectBaseModel):\n    \"\"\"Filter by `TaskRun.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of task run names to include\",\n        examples=[[\"my-task-run-1\", \"my-task-run-2\"]],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        examples=[\"marvin\"],\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterTags","title":"TaskRunFilterTags","text":"

Bases: PrefectBaseModel, OperatorMixin

Filter by TaskRun.tags.

Source code in prefect/client/schemas/filters.py
class TaskRunFilterTags(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter by `TaskRun.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"tag-1\", \"tag-2\"]],\n        description=(\n            \"A list of tags. Task runs will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include task runs without tags\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterStateType","title":"TaskRunFilterStateType","text":"

Bases: PrefectBaseModel

Filter by TaskRun.state_type.

Source code in prefect/client/schemas/filters.py
class TaskRunFilterStateType(PrefectBaseModel):\n    \"\"\"Filter by `TaskRun.state_type`.\"\"\"\n\n    any_: Optional[List[StateType]] = Field(\n        default=None, description=\"A list of task run state types to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterSubFlowRuns","title":"TaskRunFilterSubFlowRuns","text":"

Bases: PrefectBaseModel

Filter by TaskRun.subflow_run.

Source code in prefect/client/schemas/filters.py
class TaskRunFilterSubFlowRuns(PrefectBaseModel):\n    \"\"\"Filter by `TaskRun.subflow_run`.\"\"\"\n\n    exists_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"If true, only include task runs that are subflow run parents; if false,\"\n            \" exclude parent task runs\"\n        ),\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterStartTime","title":"TaskRunFilterStartTime","text":"

Bases: PrefectBaseModel

Filter by TaskRun.start_time.

Source code in prefect/client/schemas/filters.py
class TaskRunFilterStartTime(PrefectBaseModel):\n    \"\"\"Filter by `TaskRun.start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include task runs starting at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include task runs starting at or after this time\",\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only return task runs without a start time\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilter","title":"TaskRunFilter","text":"

Bases: PrefectBaseModel, OperatorMixin

Filter task runs. Only task runs matching all criteria will be returned.

Source code in prefect/client/schemas/filters.py
class TaskRunFilter(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter task runs. Only task runs matching all criteria will be returned\"\"\"\n\n    id: Optional[TaskRunFilterId] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.id`\"\n    )\n    name: Optional[TaskRunFilterName] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.name`\"\n    )\n    tags: Optional[TaskRunFilterTags] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.tags`\"\n    )\n    state: Optional[TaskRunFilterState] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.state`\"\n    )\n    start_time: Optional[TaskRunFilterStartTime] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.start_time`\"\n    )\n    subflow_runs: Optional[TaskRunFilterSubFlowRuns] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.subflow_run`\"\n    )\n    flow_run_id: Optional[TaskRunFilterFlowRunId] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.flow_run_id`\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilterId","title":"DeploymentFilterId","text":"

Bases: PrefectBaseModel

Filter by Deployment.id.

Source code in prefect/client/schemas/filters.py
class DeploymentFilterId(PrefectBaseModel):\n    \"\"\"Filter by `Deployment.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of deployment ids to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilterName","title":"DeploymentFilterName","text":"

Bases: PrefectBaseModel

Filter by Deployment.name.

Source code in prefect/client/schemas/filters.py
class DeploymentFilterName(PrefectBaseModel):\n    \"\"\"Filter by `Deployment.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of deployment names to include\",\n        examples=[[\"my-deployment-1\", \"my-deployment-2\"]],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        examples=[\"marvin\"],\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilterWorkQueueName","title":"DeploymentFilterWorkQueueName","text":"

Bases: PrefectBaseModel

Filter by Deployment.work_queue_name.

Source code in prefect/client/schemas/filters.py
class DeploymentFilterWorkQueueName(PrefectBaseModel):\n    \"\"\"Filter by `Deployment.work_queue_name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of work queue names to include\",\n        examples=[[\"work_queue_1\", \"work_queue_2\"]],\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilterIsScheduleActive","title":"DeploymentFilterIsScheduleActive","text":"

Bases: PrefectBaseModel

Filter by Deployment.is_schedule_active.

Source code in prefect/client/schemas/filters.py
class DeploymentFilterIsScheduleActive(PrefectBaseModel):\n    \"\"\"Filter by `Deployment.is_schedule_active`.\"\"\"\n\n    eq_: Optional[bool] = Field(\n        default=None,\n        description=\"Only returns where deployment schedule is/is not active\",\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilterTags","title":"DeploymentFilterTags","text":"

Bases: PrefectBaseModel, OperatorMixin

Filter by Deployment.tags.

Source code in prefect/client/schemas/filters.py
class DeploymentFilterTags(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter by `Deployment.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"tag-1\", \"tag-2\"]],\n        description=(\n            \"A list of tags. Deployments will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include deployments without tags\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilter","title":"DeploymentFilter","text":"

Bases: PrefectBaseModel, OperatorMixin

Filter for deployments. Only deployments matching all criteria will be returned.

Source code in prefect/client/schemas/filters.py
class DeploymentFilter(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter for deployments. Only deployments matching all criteria will be returned.\"\"\"\n\n    id: Optional[DeploymentFilterId] = Field(\n        default=None, description=\"Filter criteria for `Deployment.id`\"\n    )\n    name: Optional[DeploymentFilterName] = Field(\n        default=None, description=\"Filter criteria for `Deployment.name`\"\n    )\n    is_schedule_active: Optional[DeploymentFilterIsScheduleActive] = Field(\n        default=None, description=\"Filter criteria for `Deployment.is_schedule_active`\"\n    )\n    tags: Optional[DeploymentFilterTags] = Field(\n        default=None, description=\"Filter criteria for `Deployment.tags`\"\n    )\n    work_queue_name: Optional[DeploymentFilterWorkQueueName] = Field(\n        default=None, description=\"Filter criteria for `Deployment.work_queue_name`\"\n    )\n
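Illustrative usage sketch (not part of the module source): the composite filter is built from the per-field filters above and passed to the orchestration client. The read_deployments call and its deployment_filter keyword are assumptions based on the client interface; a reachable Prefect API is also assumed.

from prefect.client.orchestration import get_client
from prefect.client.schemas.filters import (
    DeploymentFilter,
    DeploymentFilterName,
    DeploymentFilterTags,
)

async def list_matching_deployments():
    # Combine per-field criteria; only deployments matching all of them are returned.
    deployment_filter = DeploymentFilter(
        name=DeploymentFilterName(like_="marvin"),
        tags=DeploymentFilterTags(all_=["prod"]),
    )
    async with get_client() as client:
        # `deployment_filter=` keyword is assumed from the client interface.
        return await client.read_deployments(deployment_filter=deployment_filter)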
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilterName","title":"LogFilterName","text":"

Bases: PrefectBaseModel

Filter by Log.name.

Source code in prefect/client/schemas/filters.py
class LogFilterName(PrefectBaseModel):\n    \"\"\"Filter by `Log.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of log names to include\",\n        examples=[[\"prefect.logger.flow_runs\", \"prefect.logger.task_runs\"]],\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilterLevel","title":"LogFilterLevel","text":"

Bases: PrefectBaseModel

Filter by Log.level.

Source code in prefect/client/schemas/filters.py
class LogFilterLevel(PrefectBaseModel):\n    \"\"\"Filter by `Log.level`.\"\"\"\n\n    ge_: Optional[int] = Field(\n        default=None,\n        description=\"Include logs with a level greater than or equal to this level\",\n        examples=[20],\n    )\n\n    le_: Optional[int] = Field(\n        default=None,\n        description=\"Include logs with a level less than or equal to this level\",\n        examples=[50],\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilterTimestamp","title":"LogFilterTimestamp","text":"

Bases: PrefectBaseModel

Filter by Log.timestamp.

Source code in prefect/client/schemas/filters.py
class LogFilterTimestamp(PrefectBaseModel):\n    \"\"\"Filter by `Log.timestamp`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include logs with a timestamp at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include logs with a timestamp at or after this time\",\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilterFlowRunId","title":"LogFilterFlowRunId","text":"

Bases: PrefectBaseModel

Filter by Log.flow_run_id.

Source code in prefect/client/schemas/filters.py
class LogFilterFlowRunId(PrefectBaseModel):\n    \"\"\"Filter by `Log.flow_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run IDs to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilterTaskRunId","title":"LogFilterTaskRunId","text":"

Bases: PrefectBaseModel

Filter by Log.task_run_id.

Source code in prefect/client/schemas/filters.py
class LogFilterTaskRunId(PrefectBaseModel):\n    \"\"\"Filter by `Log.task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run IDs to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilter","title":"LogFilter","text":"

Bases: PrefectBaseModel, OperatorMixin

Filter logs. Only logs matching all criteria will be returned

Source code in prefect/client/schemas/filters.py
class LogFilter(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter logs. Only logs matching all criteria will be returned\"\"\"\n\n    level: Optional[LogFilterLevel] = Field(\n        default=None, description=\"Filter criteria for `Log.level`\"\n    )\n    timestamp: Optional[LogFilterTimestamp] = Field(\n        default=None, description=\"Filter criteria for `Log.timestamp`\"\n    )\n    flow_run_id: Optional[LogFilterFlowRunId] = Field(\n        default=None, description=\"Filter criteria for `Log.flow_run_id`\"\n    )\n    task_run_id: Optional[LogFilterTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `Log.task_run_id`\"\n    )\n
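A minimal sketch (assuming a reachable API; the read_logs method and its log_filter argument are assumed from the client interface) showing how the log filter fields compose:

from uuid import UUID

from prefect.client.orchestration import get_client
from prefect.client.schemas.filters import (
    LogFilter,
    LogFilterFlowRunId,
    LogFilterLevel,
)

async def read_warning_logs(flow_run_id: UUID):
    # Levels follow the standard logging scale: 20 is INFO, 30 is WARNING.
    log_filter = LogFilter(
        level=LogFilterLevel(ge_=30),
        flow_run_id=LogFilterFlowRunId(any_=[flow_run_id]),
    )
    async with get_client() as client:
        return await client.read_logs(log_filter=log_filter)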
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FilterSet","title":"FilterSet","text":"

Bases: PrefectBaseModel

A collection of filters for common objects

Source code in prefect/client/schemas/filters.py
class FilterSet(PrefectBaseModel):\n    \"\"\"A collection of filters for common objects\"\"\"\n\n    flows: FlowFilter = Field(\n        default_factory=FlowFilter, description=\"Filters that apply to flows\"\n    )\n    flow_runs: FlowRunFilter = Field(\n        default_factory=FlowRunFilter, description=\"Filters that apply to flow runs\"\n    )\n    task_runs: TaskRunFilter = Field(\n        default_factory=TaskRunFilter, description=\"Filters that apply to task runs\"\n    )\n    deployments: DeploymentFilter = Field(\n        default_factory=DeploymentFilter,\n        description=\"Filters that apply to deployments\",\n    )\n
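For illustration, a FilterSet bundles the object-level filters; constructing it with no arguments uses default (match-everything) filters, and individual members can be overridden. The FlowRunFilterTags field filter is assumed to be defined earlier in this module alongside FlowRunFilter.

from prefect.client.schemas.filters import (
    FilterSet,
    FlowRunFilter,
    FlowRunFilterTags,
)

# Default filters for flows, task runs, and deployments; narrow only flow runs by tag.
filters = FilterSet(
    flow_runs=FlowRunFilter(tags=FlowRunFilterTags(all_=["nightly"]))
)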
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockTypeFilterName","title":"BlockTypeFilterName","text":"

Bases: PrefectBaseModel

Filter by BlockType.name

Source code in prefect/client/schemas/filters.py
class BlockTypeFilterName(PrefectBaseModel):\n    \"\"\"Filter by `BlockType.name`\"\"\"\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        examples=[\"marvin\"],\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockTypeFilterSlug","title":"BlockTypeFilterSlug","text":"

Bases: PrefectBaseModel

Filter by BlockType.slug

Source code in prefect/client/schemas/filters.py
class BlockTypeFilterSlug(PrefectBaseModel):\n    \"\"\"Filter by `BlockType.slug`\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of slugs to match\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockTypeFilter","title":"BlockTypeFilter","text":"

Bases: PrefectBaseModel

Filter BlockTypes

Source code in prefect/client/schemas/filters.py
class BlockTypeFilter(PrefectBaseModel):\n    \"\"\"Filter BlockTypes\"\"\"\n\n    name: Optional[BlockTypeFilterName] = Field(\n        default=None, description=\"Filter criteria for `BlockType.name`\"\n    )\n\n    slug: Optional[BlockTypeFilterSlug] = Field(\n        default=None, description=\"Filter criteria for `BlockType.slug`\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockSchemaFilterBlockTypeId","title":"BlockSchemaFilterBlockTypeId","text":"

Bases: PrefectBaseModel

Filter by BlockSchema.block_type_id.

Source code in prefect/client/schemas/filters.py
class BlockSchemaFilterBlockTypeId(PrefectBaseModel):\n    \"\"\"Filter by `BlockSchema.block_type_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of block type ids to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockSchemaFilterId","title":"BlockSchemaFilterId","text":"

Bases: PrefectBaseModel

Filter by BlockSchema.id

Source code in prefect/client/schemas/filters.py
class BlockSchemaFilterId(PrefectBaseModel):\n    \"\"\"Filter by BlockSchema.id\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of IDs to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockSchemaFilterCapabilities","title":"BlockSchemaFilterCapabilities","text":"

Bases: PrefectBaseModel

Filter by BlockSchema.capabilities

Source code in prefect/client/schemas/filters.py
class BlockSchemaFilterCapabilities(PrefectBaseModel):\n    \"\"\"Filter by `BlockSchema.capabilities`\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"write-storage\", \"read-storage\"]],\n        description=(\n            \"A list of block capabilities. Block entities will be returned only if an\"\n            \" associated block schema has a superset of the defined capabilities.\"\n        ),\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockSchemaFilterVersion","title":"BlockSchemaFilterVersion","text":"

Bases: PrefectBaseModel

Filter by BlockSchema.version.

Source code in prefect/client/schemas/filters.py
class BlockSchemaFilterVersion(PrefectBaseModel):\n    \"\"\"Filter by `BlockSchema.capabilities`\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"2.0.0\", \"2.1.0\"]],\n        description=\"A list of block schema versions.\",\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockSchemaFilter","title":"BlockSchemaFilter","text":"

Bases: PrefectBaseModel, OperatorMixin

Filter BlockSchemas

Source code in prefect/client/schemas/filters.py
class BlockSchemaFilter(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter BlockSchemas\"\"\"\n\n    block_type_id: Optional[BlockSchemaFilterBlockTypeId] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.block_type_id`\"\n    )\n    block_capabilities: Optional[BlockSchemaFilterCapabilities] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.capabilities`\"\n    )\n    id: Optional[BlockSchemaFilterId] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.id`\"\n    )\n    version: Optional[BlockSchemaFilterVersion] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.version`\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockDocumentFilterIsAnonymous","title":"BlockDocumentFilterIsAnonymous","text":"

Bases: PrefectBaseModel

Filter by BlockDocument.is_anonymous.

Source code in prefect/client/schemas/filters.py
class BlockDocumentFilterIsAnonymous(PrefectBaseModel):\n    \"\"\"Filter by `BlockDocument.is_anonymous`.\"\"\"\n\n    eq_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Filter block documents for only those that are or are not anonymous.\"\n        ),\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockDocumentFilterBlockTypeId","title":"BlockDocumentFilterBlockTypeId","text":"

Bases: PrefectBaseModel

Filter by BlockDocument.block_type_id.

Source code in prefect/client/schemas/filters.py
class BlockDocumentFilterBlockTypeId(PrefectBaseModel):\n    \"\"\"Filter by `BlockDocument.block_type_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of block type ids to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockDocumentFilterId","title":"BlockDocumentFilterId","text":"

Bases: PrefectBaseModel

Filter by BlockDocument.id.

Source code in prefect/client/schemas/filters.py
class BlockDocumentFilterId(PrefectBaseModel):\n    \"\"\"Filter by `BlockDocument.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of block ids to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockDocumentFilterName","title":"BlockDocumentFilterName","text":"

Bases: PrefectBaseModel

Filter by BlockDocument.name.

Source code in prefect/client/schemas/filters.py
class BlockDocumentFilterName(PrefectBaseModel):\n    \"\"\"Filter by `BlockDocument.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of block names to include\"\n    )\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match block names against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        examples=[\"my-block%\"],\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockDocumentFilter","title":"BlockDocumentFilter","text":"

Bases: PrefectBaseModel, OperatorMixin

Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned

Source code in prefect/client/schemas/filters.py
class BlockDocumentFilter(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned\"\"\"\n\n    id: Optional[BlockDocumentFilterId] = Field(\n        default=None, description=\"Filter criteria for `BlockDocument.id`\"\n    )\n    is_anonymous: Optional[BlockDocumentFilterIsAnonymous] = Field(\n        # default is to exclude anonymous blocks\n        BlockDocumentFilterIsAnonymous(eq_=False),\n        description=(\n            \"Filter criteria for `BlockDocument.is_anonymous`. \"\n            \"Defaults to excluding anonymous blocks.\"\n        ),\n    )\n    block_type_id: Optional[BlockDocumentFilterBlockTypeId] = Field(\n        default=None, description=\"Filter criteria for `BlockDocument.block_type_id`\"\n    )\n    name: Optional[BlockDocumentFilterName] = Field(\n        default=None, description=\"Filter criteria for `BlockDocument.name`\"\n    )\n
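Construction sketch (illustrative only): note that is_anonymous defaults to excluding anonymous block documents, so it must be reset explicitly to include them.

from prefect.client.schemas.filters import (
    BlockDocumentFilter,
    BlockDocumentFilterIsAnonymous,
    BlockDocumentFilterName,
)

# Match block documents whose names start with "my-block"; include anonymous ones too
# by clearing the default `eq_=False` criterion.
block_document_filter = BlockDocumentFilter(
    name=BlockDocumentFilterName(like_="my-block%"),
    is_anonymous=BlockDocumentFilterIsAnonymous(eq_=None),
)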
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunNotificationPolicyFilterIsActive","title":"FlowRunNotificationPolicyFilterIsActive","text":"

Bases: PrefectBaseModel

Filter by FlowRunNotificationPolicy.is_active.

Source code in prefect/client/schemas/filters.py
class FlowRunNotificationPolicyFilterIsActive(PrefectBaseModel):\n    \"\"\"Filter by `FlowRunNotificationPolicy.is_active`.\"\"\"\n\n    eq_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Filter notification policies for only those that are or are not active.\"\n        ),\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunNotificationPolicyFilter","title":"FlowRunNotificationPolicyFilter","text":"

Bases: PrefectBaseModel

Filter FlowRunNotificationPolicies.

Source code in prefect/client/schemas/filters.py
class FlowRunNotificationPolicyFilter(PrefectBaseModel):\n    \"\"\"Filter FlowRunNotificationPolicies.\"\"\"\n\n    is_active: Optional[FlowRunNotificationPolicyFilterIsActive] = Field(\n        default=FlowRunNotificationPolicyFilterIsActive(eq_=False),\n        description=\"Filter criteria for `FlowRunNotificationPolicy.is_active`. \",\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkQueueFilterId","title":"WorkQueueFilterId","text":"

Bases: PrefectBaseModel

Filter by WorkQueue.id.

Source code in prefect/client/schemas/filters.py
class WorkQueueFilterId(PrefectBaseModel):\n    \"\"\"Filter by `WorkQueue.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None,\n        description=\"A list of work queue ids to include\",\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkQueueFilterName","title":"WorkQueueFilterName","text":"

Bases: PrefectBaseModel

Filter by WorkQueue.name.

Source code in prefect/client/schemas/filters.py
class WorkQueueFilterName(PrefectBaseModel):\n    \"\"\"Filter by `WorkQueue.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of work queue names to include\",\n        examples=[[\"wq-1\", \"wq-2\"]],\n    )\n\n    startswith_: Optional[List[str]] = Field(\n        default=None,\n        description=(\n            \"A list of case-insensitive starts-with matches. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', and 'Marvin-robot', but not 'sad-marvin'.\"\n        ),\n        examples=[[\"marvin\", \"Marvin-robot\"]],\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkQueueFilter","title":"WorkQueueFilter","text":"

Bases: PrefectBaseModel, OperatorMixin

Filter work queues. Only work queues matching all criteria will be returned

Source code in prefect/client/schemas/filters.py
class WorkQueueFilter(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter work queues. Only work queues matching all criteria will be\n    returned\"\"\"\n\n    id: Optional[WorkQueueFilterId] = Field(\n        default=None, description=\"Filter criteria for `WorkQueue.id`\"\n    )\n\n    name: Optional[WorkQueueFilterName] = Field(\n        default=None, description=\"Filter criteria for `WorkQueue.name`\"\n    )\n
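Construction sketch (illustrative only), using the case-insensitive starts-with matching described above:

from prefect.client.schemas.filters import (
    WorkQueueFilter,
    WorkQueueFilterName,
)

# Match work queues whose names start with "marvin" (case-insensitive).
work_queue_filter = WorkQueueFilter(
    name=WorkQueueFilterName(startswith_=["marvin"])
)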
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkPoolFilterId","title":"WorkPoolFilterId","text":"

Bases: PrefectBaseModel

Filter by WorkPool.id.

Source code in prefect/client/schemas/filters.py
class WorkPoolFilterId(PrefectBaseModel):\n    \"\"\"Filter by `WorkPool.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of work pool ids to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkPoolFilterName","title":"WorkPoolFilterName","text":"

Bases: PrefectBaseModel

Filter by WorkPool.name.

Source code in prefect/client/schemas/filters.py
class WorkPoolFilterName(PrefectBaseModel):\n    \"\"\"Filter by `WorkPool.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of work pool names to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkPoolFilterType","title":"WorkPoolFilterType","text":"

Bases: PrefectBaseModel

Filter by WorkPool.type.

Source code in prefect/client/schemas/filters.py
class WorkPoolFilterType(PrefectBaseModel):\n    \"\"\"Filter by `WorkPool.type`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of work pool types to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkerFilterWorkPoolId","title":"WorkerFilterWorkPoolId","text":"

Bases: PrefectBaseModel

Filter by Worker.worker_config_id.

Source code in prefect/client/schemas/filters.py
class WorkerFilterWorkPoolId(PrefectBaseModel):\n    \"\"\"Filter by `Worker.worker_config_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of work pool ids to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkerFilterLastHeartbeatTime","title":"WorkerFilterLastHeartbeatTime","text":"

Bases: PrefectBaseModel

Filter by Worker.last_heartbeat_time.

Source code in prefect/client/schemas/filters.py
class WorkerFilterLastHeartbeatTime(PrefectBaseModel):\n    \"\"\"Filter by `Worker.last_heartbeat_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include processes whose last heartbeat was at or before this time\"\n        ),\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include processes whose last heartbeat was at or after this time\"\n        ),\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilterId","title":"ArtifactFilterId","text":"

Bases: PrefectBaseModel

Filter by Artifact.id.

Source code in prefect/client/schemas/filters.py
class ArtifactFilterId(PrefectBaseModel):\n    \"\"\"Filter by `Artifact.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of artifact ids to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilterKey","title":"ArtifactFilterKey","text":"

Bases: PrefectBaseModel

Filter by Artifact.key.

Source code in prefect/client/schemas/filters.py
class ArtifactFilterKey(PrefectBaseModel):\n    \"\"\"Filter by `Artifact.key`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact keys to include\"\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match artifact keys against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        examples=[\"my-artifact-%\"],\n    )\n\n    exists_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"If `true`, only include artifacts with a non-null key. If `false`, \"\n            \"only include artifacts with a null key.\"\n        ),\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilterFlowRunId","title":"ArtifactFilterFlowRunId","text":"

Bases: PrefectBaseModel

Filter by Artifact.flow_run_id.

Source code in prefect/client/schemas/filters.py
class ArtifactFilterFlowRunId(PrefectBaseModel):\n    \"\"\"Filter by `Artifact.flow_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run IDs to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilterTaskRunId","title":"ArtifactFilterTaskRunId","text":"

Bases: PrefectBaseModel

Filter by Artifact.task_run_id.

Source code in prefect/client/schemas/filters.py
class ArtifactFilterTaskRunId(PrefectBaseModel):\n    \"\"\"Filter by `Artifact.task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run IDs to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilterType","title":"ArtifactFilterType","text":"

Bases: PrefectBaseModel

Filter by Artifact.type.

Source code in prefect/client/schemas/filters.py
class ArtifactFilterType(PrefectBaseModel):\n    \"\"\"Filter by `Artifact.type`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to include\"\n    )\n    not_any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to exclude\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilter","title":"ArtifactFilter","text":"

Bases: PrefectBaseModel, OperatorMixin

Filter artifacts. Only artifacts matching all criteria will be returned

Source code in prefect/client/schemas/filters.py
class ArtifactFilter(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter artifacts. Only artifacts matching all criteria will be returned\"\"\"\n\n    id: Optional[ArtifactFilterId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.id`\"\n    )\n    key: Optional[ArtifactFilterKey] = Field(\n        default=None, description=\"Filter criteria for `Artifact.key`\"\n    )\n    flow_run_id: Optional[ArtifactFilterFlowRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.flow_run_id`\"\n    )\n    task_run_id: Optional[ArtifactFilterTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.task_run_id`\"\n    )\n    type: Optional[ArtifactFilterType] = Field(\n        default=None, description=\"Filter criteria for `Artifact.type`\"\n    )\n
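A usage sketch (assuming a reachable API; the read_artifacts method and its artifact_filter keyword are assumptions based on the client interface):

from prefect.client.orchestration import get_client
from prefect.client.schemas.filters import (
    ArtifactFilter,
    ArtifactFilterKey,
    ArtifactFilterType,
)

async def read_table_artifacts():
    # Keep keyed artifacts of type "table"; exclude "result" artifacts.
    artifact_filter = ArtifactFilter(
        key=ArtifactFilterKey(exists_=True),
        type=ArtifactFilterType(any_=["table"], not_any_=["result"]),
    )
    async with get_client() as client:
        # `artifact_filter=` keyword is assumed from the client interface.
        return await client.read_artifacts(artifact_filter=artifact_filter)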
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilterLatestId","title":"ArtifactCollectionFilterLatestId","text":"

Bases: PrefectBaseModel

Filter by ArtifactCollection.latest_id.

Source code in prefect/client/schemas/filters.py
class ArtifactCollectionFilterLatestId(PrefectBaseModel):\n    \"\"\"Filter by `ArtifactCollection.latest_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of artifact ids to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilterKey","title":"ArtifactCollectionFilterKey","text":"

Bases: PrefectBaseModel

Filter by ArtifactCollection.key.

Source code in prefect/client/schemas/filters.py
class ArtifactCollectionFilterKey(PrefectBaseModel):\n    \"\"\"Filter by `ArtifactCollection.key`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact keys to include\"\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match artifact keys against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        examples=[\"my-artifact-%\"],\n    )\n\n    exists_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"If `true`, only include artifacts with a non-null key. If `false`, \"\n            \"only include artifacts with a null key. Should return all rows in \"\n            \"the ArtifactCollection table if specified.\"\n        ),\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilterFlowRunId","title":"ArtifactCollectionFilterFlowRunId","text":"

Bases: PrefectBaseModel

Filter by ArtifactCollection.flow_run_id.

Source code in prefect/client/schemas/filters.py
class ArtifactCollectionFilterFlowRunId(PrefectBaseModel):\n    \"\"\"Filter by `ArtifactCollection.flow_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run IDs to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilterTaskRunId","title":"ArtifactCollectionFilterTaskRunId","text":"

Bases: PrefectBaseModel

Filter by ArtifactCollection.task_run_id.

Source code in prefect/client/schemas/filters.py
class ArtifactCollectionFilterTaskRunId(PrefectBaseModel):\n    \"\"\"Filter by `ArtifactCollection.task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run IDs to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilterType","title":"ArtifactCollectionFilterType","text":"

Bases: PrefectBaseModel

Filter by ArtifactCollection.type.

Source code in prefect/client/schemas/filters.py
class ArtifactCollectionFilterType(PrefectBaseModel):\n    \"\"\"Filter by `ArtifactCollection.type`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to include\"\n    )\n    not_any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to exclude\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilter","title":"ArtifactCollectionFilter","text":"

Bases: PrefectBaseModel, OperatorMixin

Filter artifact collections. Only artifact collections matching all criteria will be returned

Source code in prefect/client/schemas/filters.py
class ArtifactCollectionFilter(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter artifact collections. Only artifact collections matching all criteria will be returned\"\"\"\n\n    latest_id: Optional[ArtifactCollectionFilterLatestId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.id`\"\n    )\n    key: Optional[ArtifactCollectionFilterKey] = Field(\n        default=None, description=\"Filter criteria for `Artifact.key`\"\n    )\n    flow_run_id: Optional[ArtifactCollectionFilterFlowRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.flow_run_id`\"\n    )\n    task_run_id: Optional[ArtifactCollectionFilterTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.task_run_id`\"\n    )\n    type: Optional[ArtifactCollectionFilterType] = Field(\n        default=None, description=\"Filter criteria for `Artifact.type`\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.VariableFilterId","title":"VariableFilterId","text":"

Bases: PrefectBaseModel

Filter by Variable.id.

Source code in prefect/client/schemas/filters.py
class VariableFilterId(PrefectBaseModel):\n    \"\"\"Filter by `Variable.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of variable ids to include\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.VariableFilterName","title":"VariableFilterName","text":"

Bases: PrefectBaseModel

Filter by Variable.name.

Source code in prefect/client/schemas/filters.py
class VariableFilterName(PrefectBaseModel):\n    \"\"\"Filter by `Variable.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of variables names to include\"\n    )\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match variable names against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        examples=[\"my_variable_%\"],\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.VariableFilterValue","title":"VariableFilterValue","text":"

Bases: PrefectBaseModel

Filter by Variable.value.

Source code in prefect/client/schemas/filters.py
class VariableFilterValue(PrefectBaseModel):\n    \"\"\"Filter by `Variable.value`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of variables value to include\"\n    )\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match variable value against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        examples=[\"my-value-%\"],\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.VariableFilterTags","title":"VariableFilterTags","text":"

Bases: PrefectBaseModel, OperatorMixin

Filter by Variable.tags.

Source code in prefect/client/schemas/filters.py
class VariableFilterTags(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter by `Variable.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"tag-1\", \"tag-2\"]],\n        description=(\n            \"A list of tags. Variables will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include Variables without tags\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.VariableFilter","title":"VariableFilter","text":"

Bases: PrefectBaseModel, OperatorMixin

Filter variables. Only variables matching all criteria will be returned

Source code in prefect/client/schemas/filters.py
class VariableFilter(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter variables. Only variables matching all criteria will be returned\"\"\"\n\n    id: Optional[VariableFilterId] = Field(\n        default=None, description=\"Filter criteria for `Variable.id`\"\n    )\n    name: Optional[VariableFilterName] = Field(\n        default=None, description=\"Filter criteria for `Variable.name`\"\n    )\n    value: Optional[VariableFilterValue] = Field(\n        default=None, description=\"Filter criteria for `Variable.value`\"\n    )\n    tags: Optional[VariableFilterTags] = Field(\n        default=None, description=\"Filter criteria for `Variable.tags`\"\n    )\n
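Construction sketch (illustrative only), combining name and tag criteria; all criteria must match:

from prefect.client.schemas.filters import (
    VariableFilter,
    VariableFilterName,
    VariableFilterTags,
)

# Match variables whose names start with "db_" and that carry the "prod" tag.
variable_filter = VariableFilter(
    name=VariableFilterName(like_="db_%"),
    tags=VariableFilterTags(all_=["prod"]),
)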
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_3","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects","title":"prefect.client.schemas.objects","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.StateType","title":"StateType","text":"

Bases: AutoEnum

Enumeration of state types.

Source code in prefect/client/schemas/objects.py
class StateType(AutoEnum):\n    \"\"\"Enumeration of state types.\"\"\"\n\n    SCHEDULED = AutoEnum.auto()\n    PENDING = AutoEnum.auto()\n    RUNNING = AutoEnum.auto()\n    COMPLETED = AutoEnum.auto()\n    FAILED = AutoEnum.auto()\n    CANCELLED = AutoEnum.auto()\n    CRASHED = AutoEnum.auto()\n    PAUSED = AutoEnum.auto()\n    CANCELLING = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkPoolStatus","title":"WorkPoolStatus","text":"

Bases: AutoEnum

Enumeration of work pool statuses.

Source code in prefect/client/schemas/objects.py
class WorkPoolStatus(AutoEnum):\n    \"\"\"Enumeration of work pool statuses.\"\"\"\n\n    READY = AutoEnum.auto()\n    NOT_READY = AutoEnum.auto()\n    PAUSED = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkerStatus","title":"WorkerStatus","text":"

Bases: AutoEnum

Enumeration of worker statuses.

Source code in prefect/client/schemas/objects.py
class WorkerStatus(AutoEnum):\n    \"\"\"Enumeration of worker statuses.\"\"\"\n\n    ONLINE = AutoEnum.auto()\n    OFFLINE = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.DeploymentStatus","title":"DeploymentStatus","text":"

Bases: AutoEnum

Enumeration of deployment statuses.

Source code in prefect/client/schemas/objects.py
class DeploymentStatus(AutoEnum):\n    \"\"\"Enumeration of deployment statuses.\"\"\"\n\n    READY = AutoEnum.auto()\n    NOT_READY = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkQueueStatus","title":"WorkQueueStatus","text":"

Bases: AutoEnum

Enumeration of work queue statuses.

Source code in prefect/client/schemas/objects.py
class WorkQueueStatus(AutoEnum):\n    \"\"\"Enumeration of work queue statuses.\"\"\"\n\n    READY = AutoEnum.auto()\n    NOT_READY = AutoEnum.auto()\n    PAUSED = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.State","title":"State","text":"

Bases: ObjectBaseModel, Generic[R]

The state of a run.

Source code in prefect/client/schemas/objects.py
class State(ObjectBaseModel, Generic[R]):\n    \"\"\"\n    The state of a run.\n    \"\"\"\n\n    type: StateType\n    name: Optional[str] = Field(default=None)\n    timestamp: DateTimeTZ = Field(default_factory=lambda: pendulum.now(\"UTC\"))\n    message: Optional[str] = Field(default=None, examples=[\"Run started\"])\n    state_details: StateDetails = Field(default_factory=StateDetails)\n    data: Union[\"BaseResult[R]\", \"DataDocument[R]\", Any] = Field(\n        default=None,\n    )\n\n    @overload\n    def result(self: \"State[R]\", raise_on_failure: bool = True) -> R:\n        ...\n\n    @overload\n    def result(self: \"State[R]\", raise_on_failure: bool = False) -> Union[R, Exception]:\n        ...\n\n    def result(\n        self, raise_on_failure: bool = True, fetch: Optional[bool] = None\n    ) -> Union[R, Exception]:\n        \"\"\"\n        Retrieve the result attached to this state.\n\n        Args:\n            raise_on_failure: a boolean specifying whether to raise an exception\n                if the state is of type `FAILED` and the underlying data is an exception\n            fetch: a boolean specifying whether to resolve references to persisted\n                results into data. For synchronous users, this defaults to `True`.\n                For asynchronous users, this defaults to `False` for backwards\n                compatibility.\n\n        Raises:\n            TypeError: If the state is failed but the result is not an exception.\n\n        Returns:\n            The result of the run\n\n        Examples:\n            >>> from prefect import flow, task\n            >>> @task\n            >>> def my_task(x):\n            >>>     return x\n\n            Get the result from a task future in a flow\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     future = my_task(\"hello\")\n            >>>     state = future.wait()\n            >>>     result = state.result()\n            >>>     print(result)\n            >>> my_flow()\n            hello\n\n            Get the result from a flow state\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     return \"hello\"\n            >>> my_flow(return_state=True).result()\n            hello\n\n            Get the result from a failed state\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     raise ValueError(\"oh no!\")\n            >>> state = my_flow(return_state=True)  # Error is wrapped in FAILED state\n            >>> state.result()  # Raises `ValueError`\n\n            Get the result from a failed state without erroring\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     raise ValueError(\"oh no!\")\n            >>> state = my_flow(return_state=True)\n            >>> result = state.result(raise_on_failure=False)\n            >>> print(result)\n            ValueError(\"oh no!\")\n\n\n            Get the result from a flow state in an async context\n\n            >>> @flow\n            >>> async def my_flow():\n            >>>     return \"hello\"\n            >>> state = await my_flow(return_state=True)\n            >>> await state.result()\n            hello\n        \"\"\"\n        from prefect.states import get_state_result\n\n        return get_state_result(self, raise_on_failure=raise_on_failure, fetch=fetch)\n\n    def to_state_create(self):\n        \"\"\"\n        Convert this state to a `StateCreate` type which can be used to set the state of\n        a run in the API.\n\n        This method will drop this state's 
`data` if it is not a result type. Only\n        results should be sent to the API. Other data is only available locally.\n        \"\"\"\n        from prefect.client.schemas.actions import StateCreate\n        from prefect.results import BaseResult\n\n        return StateCreate(\n            type=self.type,\n            name=self.name,\n            message=self.message,\n            data=self.data if isinstance(self.data, BaseResult) else None,\n            state_details=self.state_details,\n        )\n\n    @validator(\"name\", always=True)\n    def default_name_from_type(cls, v, *, values, **kwargs):\n        return get_or_create_state_name(v, values)\n\n    @root_validator\n    def default_scheduled_start_time(cls, values):\n        \"\"\"\n        TODO: This should throw an error instead of setting a default but is out of\n              scope for https://github.com/PrefectHQ/orion/pull/174/ and can be rolled\n              into work refactoring state initialization\n        \"\"\"\n        if values.get(\"type\") == StateType.SCHEDULED:\n            state_details = values.setdefault(\n                \"state_details\", cls.__fields__[\"state_details\"].get_default()\n            )\n            if not state_details.scheduled_time:\n                state_details.scheduled_time = pendulum.now(\"utc\")\n        return values\n\n    def is_scheduled(self) -> bool:\n        return self.type == StateType.SCHEDULED\n\n    def is_pending(self) -> bool:\n        return self.type == StateType.PENDING\n\n    def is_running(self) -> bool:\n        return self.type == StateType.RUNNING\n\n    def is_completed(self) -> bool:\n        return self.type == StateType.COMPLETED\n\n    def is_failed(self) -> bool:\n        return self.type == StateType.FAILED\n\n    def is_crashed(self) -> bool:\n        return self.type == StateType.CRASHED\n\n    def is_cancelled(self) -> bool:\n        return self.type == StateType.CANCELLED\n\n    def is_cancelling(self) -> bool:\n        return self.type == StateType.CANCELLING\n\n    def is_final(self) -> bool:\n        return self.type in {\n            StateType.CANCELLED,\n            StateType.FAILED,\n            StateType.COMPLETED,\n            StateType.CRASHED,\n        }\n\n    def is_paused(self) -> bool:\n        return self.type == StateType.PAUSED\n\n    def copy(\n        self,\n        *,\n        update: Optional[Dict[str, Any]] = None,\n        reset_fields: bool = False,\n        **kwargs,\n    ):\n        \"\"\"\n        Copying API models should return an object that could be inserted into the\n        database again. 
The 'timestamp' is reset using the default factory.\n        \"\"\"\n        update = update or {}\n        update.setdefault(\"timestamp\", self.__fields__[\"timestamp\"].get_default())\n        return super().copy(reset_fields=reset_fields, update=update, **kwargs)\n\n    def __repr__(self) -> str:\n        \"\"\"\n        Generates a complete state representation appropriate for introspection\n        and debugging, including the result:\n\n        `MyCompletedState(message=\"my message\", type=COMPLETED, result=...)`\n        \"\"\"\n        from prefect.deprecated.data_documents import DataDocument\n\n        if isinstance(self.data, DataDocument):\n            result = self.data.decode()\n        else:\n            result = self.data\n\n        display = dict(\n            message=repr(self.message),\n            type=str(self.type.value),\n            result=repr(result),\n        )\n\n        return f\"{self.name}({', '.join(f'{k}={v}' for k, v in display.items())})\"\n\n    def __str__(self) -> str:\n        \"\"\"\n        Generates a simple state representation appropriate for logging:\n\n        `MyCompletedState(\"my message\", type=COMPLETED)`\n        \"\"\"\n\n        display = []\n\n        if self.message:\n            display.append(repr(self.message))\n\n        if self.type.value.lower() != self.name.lower():\n            display.append(f\"type={self.type.value}\")\n\n        return f\"{self.name}({', '.join(display)})\"\n\n    def __hash__(self) -> int:\n        return hash(\n            (\n                getattr(self.state_details, \"flow_run_id\", None),\n                getattr(self.state_details, \"task_run_id\", None),\n                self.timestamp,\n                self.type,\n            )\n        )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.State.result","title":"result","text":"

Retrieve the result attached to this state.

Parameters:

Name Type Description Default raise_on_failure bool

a boolean specifying whether to raise an exception if the state is of type FAILED and the underlying data is an exception

True fetch Optional[bool]

a boolean specifying whether to resolve references to persisted results into data. For synchronous users, this defaults to True. For asynchronous users, this defaults to False for backwards compatibility.

None

Raises:

Type Description TypeError

If the state is failed but the result is not an exception.

Returns:

Type Description Union[R, Exception]

The result of the run

Examples:

>>> from prefect import flow, task\n>>> @task\n>>> def my_task(x):\n>>>     return x\n

Get the result from a task future in a flow

>>> @flow\n>>> def my_flow():\n>>>     future = my_task(\"hello\")\n>>>     state = future.wait()\n>>>     result = state.result()\n>>>     print(result)\n>>> my_flow()\nhello\n

Get the result from a flow state

>>> @flow\n>>> def my_flow():\n>>>     return \"hello\"\n>>> my_flow(return_state=True).result()\nhello\n

Get the result from a failed state

>>> @flow\n>>> def my_flow():\n>>>     raise ValueError(\"oh no!\")\n>>> state = my_flow(return_state=True)  # Error is wrapped in FAILED state\n>>> state.result()  # Raises `ValueError`\n

Get the result from a failed state without erroring

>>> @flow\n>>> def my_flow():\n>>>     raise ValueError(\"oh no!\")\n>>> state = my_flow(return_state=True)\n>>> result = state.result(raise_on_failure=False)\n>>> print(result)\nValueError(\"oh no!\")\n

Get the result from a flow state in an async context

>>> @flow\n>>> async def my_flow():\n>>>     return \"hello\"\n>>> state = await my_flow(return_state=True)\n>>> await state.result()\nhello\n
Source code in prefect/client/schemas/objects.py
def result(\n    self, raise_on_failure: bool = True, fetch: Optional[bool] = None\n) -> Union[R, Exception]:\n    \"\"\"\n    Retrieve the result attached to this state.\n\n    Args:\n        raise_on_failure: a boolean specifying whether to raise an exception\n            if the state is of type `FAILED` and the underlying data is an exception\n        fetch: a boolean specifying whether to resolve references to persisted\n            results into data. For synchronous users, this defaults to `True`.\n            For asynchronous users, this defaults to `False` for backwards\n            compatibility.\n\n    Raises:\n        TypeError: If the state is failed but the result is not an exception.\n\n    Returns:\n        The result of the run\n\n    Examples:\n        >>> from prefect import flow, task\n        >>> @task\n        >>> def my_task(x):\n        >>>     return x\n\n        Get the result from a task future in a flow\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task(\"hello\")\n        >>>     state = future.wait()\n        >>>     result = state.result()\n        >>>     print(result)\n        >>> my_flow()\n        hello\n\n        Get the result from a flow state\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     return \"hello\"\n        >>> my_flow(return_state=True).result()\n        hello\n\n        Get the result from a failed state\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     raise ValueError(\"oh no!\")\n        >>> state = my_flow(return_state=True)  # Error is wrapped in FAILED state\n        >>> state.result()  # Raises `ValueError`\n\n        Get the result from a failed state without erroring\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     raise ValueError(\"oh no!\")\n        >>> state = my_flow(return_state=True)\n        >>> result = state.result(raise_on_failure=False)\n        >>> print(result)\n        ValueError(\"oh no!\")\n\n\n        Get the result from a flow state in an async context\n\n        >>> @flow\n        >>> async def my_flow():\n        >>>     return \"hello\"\n        >>> state = await my_flow(return_state=True)\n        >>> await state.result()\n        hello\n    \"\"\"\n    from prefect.states import get_state_result\n\n    return get_state_result(self, raise_on_failure=raise_on_failure, fetch=fetch)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.State.to_state_create","title":"to_state_create","text":"

Convert this state to a StateCreate type which can be used to set the state of a run in the API.

This method will drop this state's data if it is not a result type. Only results should be sent to the API. Other data is only available locally.

Source code in prefect/client/schemas/objects.py
def to_state_create(self):\n    \"\"\"\n    Convert this state to a `StateCreate` type which can be used to set the state of\n    a run in the API.\n\n    This method will drop this state's `data` if it is not a result type. Only\n    results should be sent to the API. Other data is only available locally.\n    \"\"\"\n    from prefect.client.schemas.actions import StateCreate\n    from prefect.results import BaseResult\n\n    return StateCreate(\n        type=self.type,\n        name=self.name,\n        message=self.message,\n        data=self.data if isinstance(self.data, BaseResult) else None,\n        state_details=self.state_details,\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.State.default_scheduled_start_time","title":"default_scheduled_start_time","text":"This should throw an error instead of setting a default but is out of

scope for https://github.com/PrefectHQ/orion/pull/174/ and can be rolled into work refactoring state initialization

Source code in prefect/client/schemas/objects.py
@root_validator\ndef default_scheduled_start_time(cls, values):\n    \"\"\"\n    TODO: This should throw an error instead of setting a default but is out of\n          scope for https://github.com/PrefectHQ/orion/pull/174/ and can be rolled\n          into work refactoring state initialization\n    \"\"\"\n    if values.get(\"type\") == StateType.SCHEDULED:\n        state_details = values.setdefault(\n            \"state_details\", cls.__fields__[\"state_details\"].get_default()\n        )\n        if not state_details.scheduled_time:\n            state_details.scheduled_time = pendulum.now(\"utc\")\n    return values\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRunPolicy","title":"FlowRunPolicy","text":"

Bases: PrefectBaseModel

Defines how a flow run should be orchestrated.

Source code in prefect/client/schemas/objects.py
class FlowRunPolicy(PrefectBaseModel):\n    \"\"\"Defines of how a flow run should be orchestrated.\"\"\"\n\n    max_retries: int = Field(\n        default=0,\n        description=(\n            \"The maximum number of retries. Field is not used. Please use `retries`\"\n            \" instead.\"\n        ),\n        deprecated=True,\n    )\n    retry_delay_seconds: float = Field(\n        default=0,\n        description=(\n            \"The delay between retries. Field is not used. Please use `retry_delay`\"\n            \" instead.\"\n        ),\n        deprecated=True,\n    )\n    retries: Optional[int] = Field(default=None, description=\"The number of retries.\")\n    retry_delay: Optional[int] = Field(\n        default=None, description=\"The delay time between retries, in seconds.\"\n    )\n    pause_keys: Optional[set] = Field(\n        default_factory=set, description=\"Tracks pauses this run has observed.\"\n    )\n    resuming: Optional[bool] = Field(\n        default=False, description=\"Indicates if this run is resuming from a pause.\"\n    )\n\n    @root_validator\n    def populate_deprecated_fields(cls, values):\n        return set_run_policy_deprecated_fields(values)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRun","title":"FlowRun","text":"

Bases: ObjectBaseModel

Source code in prefect/client/schemas/objects.py
class FlowRun(ObjectBaseModel):\n    name: str = Field(\n        default_factory=lambda: generate_slug(2),\n        description=(\n            \"The name of the flow run. Defaults to a random slug if not specified.\"\n        ),\n        examples=[\"my-flow-run\"],\n    )\n    flow_id: UUID = Field(default=..., description=\"The id of the flow being run.\")\n    state_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the flow run's current state.\"\n    )\n    deployment_id: Optional[UUID] = Field(\n        default=None,\n        description=(\n            \"The id of the deployment associated with this flow run, if available.\"\n        ),\n    )\n    deployment_version: Optional[str] = Field(\n        default=None,\n        description=\"The version of the deployment associated with this flow run.\",\n        examples=[\"1.0\"],\n    )\n    work_queue_name: Optional[str] = Field(\n        default=None, description=\"The work queue that handled this flow run.\"\n    )\n    flow_version: Optional[str] = Field(\n        default=None,\n        description=\"The version of the flow executed in this flow run.\",\n        examples=[\"1.0\"],\n    )\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict, description=\"Parameters for the flow run.\"\n    )\n    idempotency_key: Optional[str] = Field(\n        default=None,\n        description=(\n            \"An optional idempotency key for the flow run. Used to ensure the same flow\"\n            \" run is not created multiple times.\"\n        ),\n    )\n    context: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Additional context for the flow run.\",\n        examples=[{\"my_var\": \"my_val\"}],\n    )\n    empirical_policy: FlowRunPolicy = Field(\n        default_factory=FlowRunPolicy,\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags on the flow run\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    parent_task_run_id: Optional[UUID] = Field(\n        default=None,\n        description=(\n            \"If the flow run is a subflow, the id of the 'dummy' task in the parent\"\n            \" flow used to track subflow state.\"\n        ),\n    )\n    run_count: int = Field(\n        default=0, description=\"The number of times the flow run was executed.\"\n    )\n    expected_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The flow run's expected start time.\",\n    )\n    next_scheduled_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The next time the flow run is scheduled to start.\",\n    )\n    start_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual start time.\"\n    )\n    end_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual end time.\"\n    )\n    total_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=(\n            \"Total run time. 
If the flow run was executed multiple times, the time of\"\n            \" each run will be summed.\"\n        ),\n    )\n    estimated_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"A real-time estimate of the total run time.\",\n    )\n    estimated_start_time_delta: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"The difference between actual and expected start time.\",\n    )\n    auto_scheduled: bool = Field(\n        default=False,\n        description=\"Whether or not the flow run was automatically scheduled.\",\n    )\n    infrastructure_document_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The block document defining infrastructure to use this flow run.\",\n    )\n    infrastructure_pid: Optional[str] = Field(\n        default=None,\n        description=\"The id of the flow run as returned by an infrastructure block.\",\n    )\n    created_by: Optional[CreatedBy] = Field(\n        default=None,\n        description=\"Optional information about the creator of this flow run.\",\n    )\n    work_queue_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the run's work pool queue.\"\n    )\n\n    work_pool_id: Optional[UUID] = Field(\n        description=\"The work pool with which the queue is associated.\"\n    )\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the flow run's work pool.\",\n        examples=[\"my-work-pool\"],\n    )\n    state: Optional[State] = Field(\n        default=None,\n        description=\"The state of the flow run.\",\n        examples=[State(type=StateType.COMPLETED)],\n    )\n    job_variables: Optional[dict] = Field(\n        default=None, description=\"Job variables for the flow run.\"\n    )\n\n    # These are server-side optimizations and should not be present on client models\n    # TODO: Deprecate these fields\n\n    state_type: Optional[StateType] = Field(\n        default=None, description=\"The type of the current flow run state.\"\n    )\n    state_name: Optional[str] = Field(\n        default=None, description=\"The name of the current flow run state.\"\n    )\n\n    def __eq__(self, other: Any) -> bool:\n        \"\"\"\n        Check for \"equality\" to another flow run schema\n\n        Estimates times are rolling and will always change with repeated queries for\n        a flow run so we ignore them during equality checks.\n        \"\"\"\n        if isinstance(other, FlowRun):\n            exclude_fields = {\"estimated_run_time\", \"estimated_start_time_delta\"}\n            return self.dict(exclude=exclude_fields) == other.dict(\n                exclude=exclude_fields\n            )\n        return super().__eq__(other)\n\n    @validator(\"name\", pre=True)\n    def set_default_name(cls, name):\n        return get_or_create_run_name(name)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.TaskRunPolicy","title":"TaskRunPolicy","text":"

Bases: PrefectBaseModel

Defines how a task run should retry.

Source code in prefect/client/schemas/objects.py
class TaskRunPolicy(PrefectBaseModel):\n    \"\"\"Defines of how a task run should retry.\"\"\"\n\n    max_retries: int = Field(\n        default=0,\n        description=(\n            \"The maximum number of retries. Field is not used. Please use `retries`\"\n            \" instead.\"\n        ),\n        deprecated=True,\n    )\n    retry_delay_seconds: float = Field(\n        default=0,\n        description=(\n            \"The delay between retries. Field is not used. Please use `retry_delay`\"\n            \" instead.\"\n        ),\n        deprecated=True,\n    )\n    retries: Optional[int] = Field(default=None, description=\"The number of retries.\")\n    retry_delay: Union[None, int, List[int]] = Field(\n        default=None,\n        description=\"A delay time or list of delay times between retries, in seconds.\",\n    )\n    retry_jitter_factor: Optional[float] = Field(\n        default=None, description=\"Determines the amount a retry should jitter\"\n    )\n\n    @root_validator\n    def populate_deprecated_fields(cls, values):\n        return set_run_policy_deprecated_fields(values)\n\n    @validator(\"retry_delay\")\n    def validate_configured_retry_delays(cls, v):\n        return list_length_50_or_less(v)\n\n    @validator(\"retry_jitter_factor\")\n    def validate_jitter_factor(cls, v):\n        return validate_not_negative(v)\n
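A hedged usage sketch using only the non-deprecated fields; the validators shown above cap the retry_delay list at 50 entries and reject negative jitter factors:

# Sketch only: configure retries with increasing delays and some jitter.
from prefect.client.schemas.objects import TaskRunPolicy

policy = TaskRunPolicy(
    retries=3,
    retry_delay=[1, 2, 4],    # seconds between attempts; at most 50 entries
    retry_jitter_factor=0.2,  # must not be negative
)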
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.TaskRunInput","title":"TaskRunInput","text":"

Bases: PrefectBaseModel

Base class for classes that represent inputs to task runs, which could include constants, parameters, or other task runs.

Source code in prefect/client/schemas/objects.py
class TaskRunInput(PrefectBaseModel):\n    \"\"\"\n    Base class for classes that represent inputs to task runs, which\n    could include, constants, parameters, or other task runs.\n    \"\"\"\n\n    # freeze TaskRunInputs to allow them to be placed in sets\n    class Config:\n        frozen = True\n\n    input_type: str\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.TaskRunResult","title":"TaskRunResult","text":"

Bases: TaskRunInput

Represents a task run result input to another task run.

Source code in prefect/client/schemas/objects.py
class TaskRunResult(TaskRunInput):\n    \"\"\"Represents a task run result input to another task run.\"\"\"\n\n    input_type: Literal[\"task_run\"] = \"task_run\"\n    id: UUID\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Parameter","title":"Parameter","text":"

Bases: TaskRunInput

Represents a parameter input to a task run.

Source code in prefect/client/schemas/objects.py
class Parameter(TaskRunInput):\n    \"\"\"Represents a parameter input to a task run.\"\"\"\n\n    input_type: Literal[\"parameter\"] = \"parameter\"\n    name: str\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Constant","title":"Constant","text":"

Bases: TaskRunInput

Represents a constant input value to a task run.

Source code in prefect/client/schemas/objects.py
class Constant(TaskRunInput):\n    \"\"\"Represents constant input value to a task run.\"\"\"\n\n    input_type: Literal[\"constant\"] = \"constant\"\n    type: str\n
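As a hedged sketch, the three TaskRunInput variants above are frozen, so they can be collected in a set to describe a task run's upstream inputs:

# Sketch: frozen TaskRunInput subclasses are hashable and can live in sets.
from uuid import uuid4

from prefect.client.schemas.objects import Constant, Parameter, TaskRunResult

inputs = {
    TaskRunResult(id=uuid4()),  # input_type == "task_run"
    Parameter(name="x"),        # input_type == "parameter"
    Constant(type="int"),       # input_type == "constant"
}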
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Workspace","title":"Workspace","text":"

Bases: PrefectBaseModel

A Prefect Cloud workspace.

Expected payload for each workspace returned by the me/workspaces route.

Source code in prefect/client/schemas/objects.py
class Workspace(PrefectBaseModel):\n    \"\"\"\n    A Prefect Cloud workspace.\n\n    Expected payload for each workspace returned by the `me/workspaces` route.\n    \"\"\"\n\n    account_id: UUID = Field(..., description=\"The account id of the workspace.\")\n    account_name: str = Field(..., description=\"The account name.\")\n    account_handle: str = Field(..., description=\"The account's unique handle.\")\n    workspace_id: UUID = Field(..., description=\"The workspace id.\")\n    workspace_name: str = Field(..., description=\"The workspace name.\")\n    workspace_description: str = Field(..., description=\"Description of the workspace.\")\n    workspace_handle: str = Field(..., description=\"The workspace's unique handle.\")\n\n    class Config:\n        extra = \"ignore\"\n\n    @property\n    def handle(self) -> str:\n        \"\"\"\n        The full handle of the workspace as `account_handle` / `workspace_handle`\n        \"\"\"\n        return self.account_handle + \"/\" + self.workspace_handle\n\n    def api_url(self) -> str:\n        \"\"\"\n        Generate the API URL for accessing this workspace\n        \"\"\"\n        return (\n            f\"{PREFECT_CLOUD_API_URL.value()}\"\n            f\"/accounts/{self.account_id}\"\n            f\"/workspaces/{self.workspace_id}\"\n        )\n\n    def ui_url(self) -> str:\n        \"\"\"\n        Generate the UI URL for accessing this workspace\n        \"\"\"\n        return (\n            f\"{PREFECT_CLOUD_UI_URL.value()}\"\n            f\"/account/{self.account_id}\"\n            f\"/workspace/{self.workspace_id}\"\n        )\n\n    def __hash__(self):\n        return hash(self.handle)\n
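A hedged sketch of the helper methods; the identifiers below are hypothetical, and api_url() resolves against the configured PREFECT_CLOUD_API_URL setting:

# Hypothetical values, shown purely to illustrate handle and api_url().
from uuid import uuid4

from prefect.client.schemas.objects import Workspace

ws = Workspace(
    account_id=uuid4(),
    account_name="Example Account",
    account_handle="example-account",
    workspace_id=uuid4(),
    workspace_name="production",
    workspace_description="Production workspace",
    workspace_handle="production",
)
print(ws.handle)     # "example-account/production"
print(ws.api_url())  # ".../accounts/<account_id>/workspaces/<workspace_id>"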
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Workspace.handle","title":"handle: str property","text":"

The full handle of the workspace as account_handle / workspace_handle

","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Workspace.api_url","title":"api_url","text":"

Generate the API URL for accessing this workspace

Source code in prefect/client/schemas/objects.py
def api_url(self) -> str:\n    \"\"\"\n    Generate the API URL for accessing this workspace\n    \"\"\"\n    return (\n        f\"{PREFECT_CLOUD_API_URL.value()}\"\n        f\"/accounts/{self.account_id}\"\n        f\"/workspaces/{self.workspace_id}\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Workspace.ui_url","title":"ui_url","text":"

Generate the UI URL for accessing this workspace

Source code in prefect/client/schemas/objects.py
def ui_url(self) -> str:\n    \"\"\"\n    Generate the UI URL for accessing this workspace\n    \"\"\"\n    return (\n        f\"{PREFECT_CLOUD_UI_URL.value()}\"\n        f\"/account/{self.account_id}\"\n        f\"/workspace/{self.workspace_id}\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.BlockType","title":"BlockType","text":"

Bases: ObjectBaseModel

An ORM representation of a block type

Source code in prefect/client/schemas/objects.py
class BlockType(ObjectBaseModel):\n    \"\"\"An ORM representation of a block type\"\"\"\n\n    name: str = Field(default=..., description=\"A block type's name\")\n    slug: str = Field(default=..., description=\"A block type's slug\")\n    logo_url: Optional[HttpUrl] = Field(\n        default=None, description=\"Web URL for the block type's logo\"\n    )\n    documentation_url: Optional[HttpUrl] = Field(\n        default=None, description=\"Web URL for the block type's documentation\"\n    )\n    description: Optional[str] = Field(\n        default=None,\n        description=\"A short blurb about the corresponding block's intended use\",\n    )\n    code_example: Optional[str] = Field(\n        default=None,\n        description=\"A code snippet demonstrating use of the corresponding block\",\n    )\n    is_protected: bool = Field(\n        default=False, description=\"Protected block types cannot be modified via API.\"\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.BlockDocument","title":"BlockDocument","text":"

Bases: ObjectBaseModel

An ORM representation of a block document.

Source code in prefect/client/schemas/objects.py
class BlockDocument(ObjectBaseModel):\n    \"\"\"An ORM representation of a block document.\"\"\"\n\n    name: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The block document's name. Not required for anonymous block documents.\"\n        ),\n    )\n    data: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The block document's data\"\n    )\n    block_schema_id: UUID = Field(default=..., description=\"A block schema ID\")\n    block_schema: Optional[BlockSchema] = Field(\n        default=None, description=\"The associated block schema\"\n    )\n    block_type_id: UUID = Field(default=..., description=\"A block type ID\")\n    block_type_name: Optional[str] = Field(None, description=\"A block type name\")\n    block_type: Optional[BlockType] = Field(\n        default=None, description=\"The associated block type\"\n    )\n    block_document_references: Dict[str, Dict[str, Any]] = Field(\n        default_factory=dict, description=\"Record of the block document's references\"\n    )\n    is_anonymous: bool = Field(\n        default=False,\n        description=(\n            \"Whether the block is anonymous (anonymous blocks are usually created by\"\n            \" Prefect automatically)\"\n        ),\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        # the BlockDocumentCreate subclass allows name=None\n        # and will inherit this validator\n        return raise_on_name_with_banned_characters(v)\n\n    @root_validator\n    def validate_name_is_present_if_not_anonymous(cls, values):\n        return validate_name_present_on_nonanonymous_blocks(values)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Flow","title":"Flow","text":"

Bases: ObjectBaseModel

An ORM representation of flow data.

Source code in prefect/client/schemas/objects.py
class Flow(ObjectBaseModel):\n    \"\"\"An ORM representation of flow data.\"\"\"\n\n    name: str = Field(\n        default=..., description=\"The name of the flow\", examples=[\"my-flow\"]\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of flow tags\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Deployment","title":"Deployment","text":"

Bases: DeprecatedInfraOverridesField, ObjectBaseModel

An ORM representation of deployment data.

Source code in prefect/client/schemas/objects.py
class Deployment(DeprecatedInfraOverridesField, ObjectBaseModel):\n    \"\"\"An ORM representation of deployment data.\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the deployment.\")\n    version: Optional[str] = Field(\n        default=None, description=\"An optional version for the deployment.\"\n    )\n    description: Optional[str] = Field(\n        default=None, description=\"A description for the deployment.\"\n    )\n    flow_id: UUID = Field(\n        default=..., description=\"The flow id associated with the deployment.\"\n    )\n    schedule: Optional[SCHEDULE_TYPES] = Field(\n        default=None, description=\"A schedule for the deployment.\"\n    )\n    is_schedule_active: bool = Field(\n        default=True, description=\"Whether or not the deployment schedule is active.\"\n    )\n    paused: bool = Field(\n        default=False, description=\"Whether or not the deployment is paused.\"\n    )\n    schedules: List[DeploymentSchedule] = Field(\n        default_factory=list, description=\"A list of schedules for the deployment.\"\n    )\n    job_variables: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Overrides to apply to flow run infrastructure at runtime.\",\n    )\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Parameters for flow runs scheduled by the deployment.\",\n    )\n    pull_steps: Optional[List[dict]] = Field(\n        default=None,\n        description=\"Pull steps for cloning and running this deployment.\",\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags for the deployment\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    work_queue_name: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The work queue for the deployment. 
If no work queue is set, work will not\"\n            \" be scheduled.\"\n        ),\n    )\n    last_polled: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The last time the deployment was polled for status updates.\",\n    )\n    parameter_openapi_schema: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=\"The parameter schema of the flow, including defaults.\",\n    )\n    path: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the working directory for the workflow, relative to remote\"\n            \" storage or an absolute path.\"\n        ),\n    )\n    entrypoint: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the entrypoint for the workflow, relative to the `path`.\"\n        ),\n    )\n    manifest_path: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the flow's manifest file, relative to the chosen storage.\"\n        ),\n    )\n    storage_document_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The block document defining storage used for this flow.\",\n    )\n    infrastructure_document_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The block document defining infrastructure to use for flow runs.\",\n    )\n    created_by: Optional[CreatedBy] = Field(\n        default=None,\n        description=\"Optional information about the creator of this deployment.\",\n    )\n    updated_by: Optional[UpdatedBy] = Field(\n        default=None,\n        description=\"Optional information about the updater of this deployment.\",\n    )\n    work_queue_id: UUID = Field(\n        default=None,\n        description=(\n            \"The id of the work pool queue to which this deployment is assigned.\"\n        ),\n    )\n    enforce_parameter_schema: bool = Field(\n        default=False,\n        description=(\n            \"Whether or not the deployment should enforce the parameter schema.\"\n        ),\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.ConcurrencyLimit","title":"ConcurrencyLimit","text":"

Bases: ObjectBaseModel

An ORM representation of a concurrency limit.

Source code in prefect/client/schemas/objects.py
class ConcurrencyLimit(ObjectBaseModel):\n    \"\"\"An ORM representation of a concurrency limit.\"\"\"\n\n    tag: str = Field(\n        default=..., description=\"A tag the concurrency limit is applied to.\"\n    )\n    concurrency_limit: int = Field(default=..., description=\"The concurrency limit.\")\n    active_slots: List[UUID] = Field(\n        default_factory=list,\n        description=\"A list of active run ids using a concurrency slot\",\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.BlockSchema","title":"BlockSchema","text":"

Bases: ObjectBaseModel

An ORM representation of a block schema.

Source code in prefect/client/schemas/objects.py
class BlockSchema(ObjectBaseModel):\n    \"\"\"An ORM representation of a block schema.\"\"\"\n\n    checksum: str = Field(default=..., description=\"The block schema's unique checksum\")\n    fields: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The block schema's field schema\"\n    )\n    block_type_id: Optional[UUID] = Field(default=..., description=\"A block type ID\")\n    block_type: Optional[BlockType] = Field(\n        default=None, description=\"The associated block type\"\n    )\n    capabilities: List[str] = Field(\n        default_factory=list,\n        description=\"A list of Block capabilities\",\n    )\n    version: str = Field(\n        default=DEFAULT_BLOCK_SCHEMA_VERSION,\n        description=\"Human readable identifier for the block schema\",\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.BlockSchemaReference","title":"BlockSchemaReference","text":"

Bases: ObjectBaseModel

An ORM representation of a block schema reference.

Source code in prefect/client/schemas/objects.py
class BlockSchemaReference(ObjectBaseModel):\n    \"\"\"An ORM representation of a block schema reference.\"\"\"\n\n    parent_block_schema_id: UUID = Field(\n        default=..., description=\"ID of block schema the reference is nested within\"\n    )\n    parent_block_schema: Optional[BlockSchema] = Field(\n        default=None, description=\"The block schema the reference is nested within\"\n    )\n    reference_block_schema_id: UUID = Field(\n        default=..., description=\"ID of the nested block schema\"\n    )\n    reference_block_schema: Optional[BlockSchema] = Field(\n        default=None, description=\"The nested block schema\"\n    )\n    name: str = Field(\n        default=..., description=\"The name that the reference is nested under\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.BlockDocumentReference","title":"BlockDocumentReference","text":"

Bases: ObjectBaseModel

An ORM representation of a block document reference.

Source code in prefect/client/schemas/objects.py
class BlockDocumentReference(ObjectBaseModel):\n    \"\"\"An ORM representation of a block document reference.\"\"\"\n\n    parent_block_document_id: UUID = Field(\n        default=..., description=\"ID of block document the reference is nested within\"\n    )\n    parent_block_document: Optional[BlockDocument] = Field(\n        default=None, description=\"The block document the reference is nested within\"\n    )\n    reference_block_document_id: UUID = Field(\n        default=..., description=\"ID of the nested block document\"\n    )\n    reference_block_document: Optional[BlockDocument] = Field(\n        default=None, description=\"The nested block document\"\n    )\n    name: str = Field(\n        default=..., description=\"The name that the reference is nested under\"\n    )\n\n    @root_validator\n    def validate_parent_and_ref_are_different(cls, values):\n        return validate_parent_and_ref_diff(values)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.SavedSearchFilter","title":"SavedSearchFilter","text":"

Bases: PrefectBaseModel

A filter for a saved search model. Intended for use by the Prefect UI.

Source code in prefect/client/schemas/objects.py
class SavedSearchFilter(PrefectBaseModel):\n    \"\"\"A filter for a saved search model. Intended for use by the Prefect UI.\"\"\"\n\n    object: str = Field(default=..., description=\"The object over which to filter.\")\n    property: str = Field(\n        default=..., description=\"The property of the object on which to filter.\"\n    )\n    type: str = Field(default=..., description=\"The type of the property.\")\n    operation: str = Field(\n        default=...,\n        description=\"The operator to apply to the object. For example, `equals`.\",\n    )\n    value: Any = Field(\n        default=..., description=\"A JSON-compatible value for the filter.\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.SavedSearch","title":"SavedSearch","text":"

Bases: ObjectBaseModel

An ORM representation of saved search data. Represents a set of filter criteria.

Source code in prefect/client/schemas/objects.py
class SavedSearch(ObjectBaseModel):\n    \"\"\"An ORM representation of saved search data. Represents a set of filter criteria.\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the saved search.\")\n    filters: List[SavedSearchFilter] = Field(\n        default_factory=list, description=\"The filter set for the saved search.\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Log","title":"Log","text":"

Bases: ObjectBaseModel

An ORM representation of log data.

Source code in prefect/client/schemas/objects.py
class Log(ObjectBaseModel):\n    \"\"\"An ORM representation of log data.\"\"\"\n\n    name: str = Field(default=..., description=\"The logger name.\")\n    level: int = Field(default=..., description=\"The log level.\")\n    message: str = Field(default=..., description=\"The log message.\")\n    timestamp: DateTimeTZ = Field(default=..., description=\"The log timestamp.\")\n    flow_run_id: Optional[UUID] = Field(\n        default=None, description=\"The flow run ID associated with the log.\"\n    )\n    task_run_id: Optional[UUID] = Field(\n        default=None, description=\"The task run ID associated with the log.\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.QueueFilter","title":"QueueFilter","text":"

Bases: PrefectBaseModel

Filter criteria definition for a work queue.

Source code in prefect/client/schemas/objects.py
class QueueFilter(PrefectBaseModel):\n    \"\"\"Filter criteria definition for a work queue.\"\"\"\n\n    tags: Optional[List[str]] = Field(\n        default=None,\n        description=\"Only include flow runs with these tags in the work queue.\",\n    )\n    deployment_ids: Optional[List[UUID]] = Field(\n        default=None,\n        description=\"Only include flow runs from these deployments in the work queue.\",\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkQueue","title":"WorkQueue","text":"

Bases: ObjectBaseModel

An ORM representation of a work queue

Source code in prefect/client/schemas/objects.py
class WorkQueue(ObjectBaseModel):\n    \"\"\"An ORM representation of a work queue\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the work queue.\")\n    description: Optional[str] = Field(\n        default=\"\", description=\"An optional description for the work queue.\"\n    )\n    is_paused: bool = Field(\n        default=False, description=\"Whether or not the work queue is paused.\"\n    )\n    concurrency_limit: Optional[NonNegativeInteger] = Field(\n        default=None, description=\"An optional concurrency limit for the work queue.\"\n    )\n    priority: PositiveInteger = Field(\n        default=1,\n        description=(\n            \"The queue's priority. Lower values are higher priority (1 is the highest).\"\n        ),\n    )\n    work_pool_name: Optional[str] = Field(default=None)\n    # Will be required after a future migration\n    work_pool_id: Optional[UUID] = Field(\n        description=\"The work pool with which the queue is associated.\"\n    )\n    filter: Optional[QueueFilter] = Field(\n        default=None,\n        description=\"DEPRECATED: Filter criteria for the work queue.\",\n        deprecated=True,\n    )\n    last_polled: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The last time an agent polled this queue for work.\"\n    )\n    status: Optional[WorkQueueStatus] = Field(\n        default=None, description=\"The queue status.\"\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkQueueHealthPolicy","title":"WorkQueueHealthPolicy","text":"

Bases: PrefectBaseModel

Source code in prefect/client/schemas/objects.py
class WorkQueueHealthPolicy(PrefectBaseModel):\n    maximum_late_runs: Optional[int] = Field(\n        default=0,\n        description=(\n            \"The maximum number of late runs in the work queue before it is deemed\"\n            \" unhealthy. Defaults to `0`.\"\n        ),\n    )\n    maximum_seconds_since_last_polled: Optional[int] = Field(\n        default=60,\n        description=(\n            \"The maximum number of time in seconds elapsed since work queue has been\"\n            \" polled before it is deemed unhealthy. Defaults to `60`.\"\n        ),\n    )\n\n    def evaluate_health_status(\n        self, late_runs_count: int, last_polled: Optional[DateTimeTZ] = None\n    ) -> bool:\n        \"\"\"\n        Given empirical information about the state of the work queue, evaluate its health status.\n\n        Args:\n            late_runs: the count of late runs for the work queue.\n            last_polled: the last time the work queue was polled, if available.\n\n        Returns:\n            bool: whether or not the work queue is healthy.\n        \"\"\"\n        healthy = True\n        if (\n            self.maximum_late_runs is not None\n            and late_runs_count > self.maximum_late_runs\n        ):\n            healthy = False\n\n        if self.maximum_seconds_since_last_polled is not None:\n            if (\n                last_polled is None\n                or pendulum.now(\"UTC\").diff(last_polled).in_seconds()\n                > self.maximum_seconds_since_last_polled\n            ):\n                healthy = False\n\n        return healthy\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkQueueHealthPolicy.evaluate_health_status","title":"evaluate_health_status","text":"

Given empirical information about the state of the work queue, evaluate its health status.

Parameters:

late_runs: the count of late runs for the work queue. (required)
last_polled (Optional[DateTimeTZ]): the last time the work queue was polled, if available. (default: None)

Returns:

bool: whether or not the work queue is healthy.

Source code in prefect/client/schemas/objects.py
def evaluate_health_status(\n    self, late_runs_count: int, last_polled: Optional[DateTimeTZ] = None\n) -> bool:\n    \"\"\"\n    Given empirical information about the state of the work queue, evaluate its health status.\n\n    Args:\n        late_runs: the count of late runs for the work queue.\n        last_polled: the last time the work queue was polled, if available.\n\n    Returns:\n        bool: whether or not the work queue is healthy.\n    \"\"\"\n    healthy = True\n    if (\n        self.maximum_late_runs is not None\n        and late_runs_count > self.maximum_late_runs\n    ):\n        healthy = False\n\n    if self.maximum_seconds_since_last_polled is not None:\n        if (\n            last_polled is None\n            or pendulum.now(\"UTC\").diff(last_polled).in_seconds()\n            > self.maximum_seconds_since_last_polled\n        ):\n            healthy = False\n\n    return healthy\n
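A minimal sketch of evaluating a queue that was polled recently and has no late runs, using the default thresholds documented above:

# Sketch: no late runs and a poll 30 seconds ago is within the 60-second limit.
import pendulum

from prefect.client.schemas.objects import WorkQueueHealthPolicy

policy = WorkQueueHealthPolicy(
    maximum_late_runs=0,
    maximum_seconds_since_last_polled=60,
)
healthy = policy.evaluate_health_status(
    late_runs_count=0,
    last_polled=pendulum.now("UTC").subtract(seconds=30),
)
print(healthy)  # True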
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRunNotificationPolicy","title":"FlowRunNotificationPolicy","text":"

Bases: ObjectBaseModel

An ORM representation of a flow run notification.

Source code in prefect/client/schemas/objects.py
class FlowRunNotificationPolicy(ObjectBaseModel):\n    \"\"\"An ORM representation of a flow run notification.\"\"\"\n\n    is_active: bool = Field(\n        default=True, description=\"Whether the policy is currently active\"\n    )\n    state_names: List[str] = Field(\n        default=..., description=\"The flow run states that trigger notifications\"\n    )\n    tags: List[str] = Field(\n        default=...,\n        description=\"The flow run tags that trigger notifications (set [] to disable)\",\n    )\n    block_document_id: UUID = Field(\n        default=..., description=\"The block document ID used for sending notifications\"\n    )\n    message_template: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A templatable notification message. Use {braces} to add variables.\"\n            \" Valid variables include:\"\n            f\" {listrepr(sorted(FLOW_RUN_NOTIFICATION_TEMPLATE_KWARGS), sep=', ')}\"\n        ),\n        examples=[\n            \"Flow run {flow_run_name} with id {flow_run_id} entered state\"\n            \" {flow_run_state_name}.\"\n        ],\n    )\n\n    @validator(\"message_template\")\n    def validate_message_template_variables(cls, v):\n        return validate_message_template_variables(v)\n
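A hedged sketch with a hypothetical block document id; the message template may only use the variables listed in the field description above:

# Sketch only: notify on failure states using a templated message.
from uuid import uuid4

from prefect.client.schemas.objects import FlowRunNotificationPolicy

policy = FlowRunNotificationPolicy(
    state_names=["Failed", "Crashed"],
    tags=[],  # per the field description, [] disables tag filtering
    block_document_id=uuid4(),  # hypothetical notification block document
    message_template=(
        "Flow run {flow_run_name} with id {flow_run_id} entered state"
        " {flow_run_state_name}."
    ),
)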
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Agent","title":"Agent","text":"

Bases: ObjectBaseModel

An ORM representation of an agent

Source code in prefect/client/schemas/objects.py
class Agent(ObjectBaseModel):\n    \"\"\"An ORM representation of an agent\"\"\"\n\n    name: str = Field(\n        default_factory=lambda: generate_slug(2),\n        description=(\n            \"The name of the agent. If a name is not provided, it will be\"\n            \" auto-generated.\"\n        ),\n    )\n    work_queue_id: UUID = Field(\n        default=..., description=\"The work queue with which the agent is associated.\"\n    )\n    last_activity_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The last time this agent polled for work.\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkPool","title":"WorkPool","text":"

Bases: ObjectBaseModel

An ORM representation of a work pool

Source code in prefect/client/schemas/objects.py
class WorkPool(ObjectBaseModel):\n    \"\"\"An ORM representation of a work pool\"\"\"\n\n    name: str = Field(\n        description=\"The name of the work pool.\",\n    )\n    description: Optional[str] = Field(\n        default=None, description=\"A description of the work pool.\"\n    )\n    type: str = Field(description=\"The work pool type.\")\n    base_job_template: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The work pool's base job template.\"\n    )\n    is_paused: bool = Field(\n        default=False,\n        description=\"Pausing the work pool stops the delivery of all work.\",\n    )\n    concurrency_limit: Optional[NonNegativeInteger] = Field(\n        default=None, description=\"A concurrency limit for the work pool.\"\n    )\n    status: Optional[WorkPoolStatus] = Field(\n        default=None, description=\"The current status of the work pool.\"\n    )\n\n    # this required field has a default of None so that the custom validator\n    # below will be called and produce a more helpful error message\n    default_queue_id: UUID = Field(\n        None, description=\"The id of the pool's default queue.\"\n    )\n\n    @property\n    def is_push_pool(self) -> bool:\n        return self.type.endswith(\":push\")\n\n    @property\n    def is_managed_pool(self) -> bool:\n        return self.type.endswith(\":managed\")\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n\n    @validator(\"default_queue_id\", always=True)\n    def helpful_error_for_missing_default_queue_id(cls, v):\n        return validate_default_queue_id_not_none(v)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Worker","title":"Worker","text":"

Bases: ObjectBaseModel

An ORM representation of a worker

Source code in prefect/client/schemas/objects.py
class Worker(ObjectBaseModel):\n    \"\"\"An ORM representation of a worker\"\"\"\n\n    name: str = Field(description=\"The name of the worker.\")\n    work_pool_id: UUID = Field(\n        description=\"The work pool with which the queue is associated.\"\n    )\n    last_heartbeat_time: datetime.datetime = Field(\n        None, description=\"The last time the worker process sent a heartbeat.\"\n    )\n    heartbeat_interval_seconds: Optional[int] = Field(\n        default=None,\n        description=(\n            \"The number of seconds to expect between heartbeats sent by the worker.\"\n        ),\n    )\n    status: WorkerStatus = Field(\n        WorkerStatus.OFFLINE,\n        description=\"Current status of the worker.\",\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRunInput","title":"FlowRunInput","text":"

Bases: ObjectBaseModel

Source code in prefect/client/schemas/objects.py
class FlowRunInput(ObjectBaseModel):\n    flow_run_id: UUID = Field(description=\"The flow run ID associated with the input.\")\n    key: str = Field(description=\"The key of the input.\")\n    value: str = Field(description=\"The value of the input.\")\n    sender: Optional[str] = Field(description=\"The sender of the input.\")\n\n    @property\n    def decoded_value(self) -> Any:\n        \"\"\"\n        Decode the value of the input.\n\n        Returns:\n            Any: the decoded value\n        \"\"\"\n        return orjson.loads(self.value)\n\n    @validator(\"key\", check_fields=False)\n    def validate_name_characters(cls, v):\n        raise_on_name_alphanumeric_dashes_only(v)\n        return v\n
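A minimal sketch, assuming the value is stored as a JSON string that decoded_value parses back into a Python object:

# Sketch: keys may only contain alphanumerics and dashes, per the validator.
from uuid import uuid4

from prefect.client.schemas.objects import FlowRunInput

run_input = FlowRunInput(
    flow_run_id=uuid4(),
    key="approval",
    value='{"approved": true}',
    sender="manual",
)
print(run_input.decoded_value)  # {'approved': True}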
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRunInput.decoded_value","title":"decoded_value: Any property","text":"

Decode the value of the input.

Returns:

Any: the decoded value

","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.GlobalConcurrencyLimit","title":"GlobalConcurrencyLimit","text":"

Bases: ObjectBaseModel

An ORM representation of a global concurrency limit

Source code in prefect/client/schemas/objects.py
class GlobalConcurrencyLimit(ObjectBaseModel):\n    \"\"\"An ORM representation of a global concurrency limit\"\"\"\n\n    name: str = Field(description=\"The name of the global concurrency limit.\")\n    limit: int = Field(\n        description=(\n            \"The maximum number of slots that can be occupied on this concurrency\"\n            \" limit.\"\n        )\n    )\n    active: Optional[bool] = Field(\n        default=True,\n        description=\"Whether or not the concurrency limit is in an active state.\",\n    )\n    active_slots: Optional[int] = Field(\n        default=0,\n        description=\"Number of tasks currently using a concurrency slot.\",\n    )\n    slot_decay_per_second: Optional[float] = Field(\n        default=0.0,\n        description=(\n            \"Controls the rate at which slots are released when the concurrency limit\"\n            \" is used as a rate limit.\"\n        ),\n    )\n
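A hedged sketch of a limit whose slots decay over time, which the field description above notes makes it behave as a rate limit:

# Sketch only: a named limit with slot decay (rate-limit style behavior).
from prefect.client.schemas.objects import GlobalConcurrencyLimit

limit = GlobalConcurrencyLimit(
    name="database-connections",
    limit=10,
    slot_decay_per_second=1.0,  # release roughly one slot per second
)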
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_4","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses","title":"prefect.client.schemas.responses","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.SetStateStatus","title":"SetStateStatus","text":"

Bases: AutoEnum

Enumerates return statuses for setting run states.

Source code in prefect/client/schemas/responses.py
class SetStateStatus(AutoEnum):\n    \"\"\"Enumerates return statuses for setting run states.\"\"\"\n\n    ACCEPT = AutoEnum.auto()\n    REJECT = AutoEnum.auto()\n    ABORT = AutoEnum.auto()\n    WAIT = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.StateAcceptDetails","title":"StateAcceptDetails","text":"

Bases: PrefectBaseModel

Details associated with an ACCEPT state transition.

Source code in prefect/client/schemas/responses.py
class StateAcceptDetails(PrefectBaseModel):\n    \"\"\"Details associated with an ACCEPT state transition.\"\"\"\n\n    type: Literal[\"accept_details\"] = Field(\n        default=\"accept_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.StateRejectDetails","title":"StateRejectDetails","text":"

Bases: PrefectBaseModel

Details associated with a REJECT state transition.

Source code in prefect/client/schemas/responses.py
class StateRejectDetails(PrefectBaseModel):\n    \"\"\"Details associated with a REJECT state transition.\"\"\"\n\n    type: Literal[\"reject_details\"] = Field(\n        default=\"reject_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n    reason: Optional[str] = Field(\n        default=None, description=\"The reason why the state transition was rejected.\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.StateAbortDetails","title":"StateAbortDetails","text":"

Bases: PrefectBaseModel

Details associated with an ABORT state transition.

Source code in prefect/client/schemas/responses.py
class StateAbortDetails(PrefectBaseModel):\n    \"\"\"Details associated with an ABORT state transition.\"\"\"\n\n    type: Literal[\"abort_details\"] = Field(\n        default=\"abort_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n    reason: Optional[str] = Field(\n        default=None, description=\"The reason why the state transition was aborted.\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.StateWaitDetails","title":"StateWaitDetails","text":"

Bases: PrefectBaseModel

Details associated with a WAIT state transition.

Source code in prefect/client/schemas/responses.py
class StateWaitDetails(PrefectBaseModel):\n    \"\"\"Details associated with a WAIT state transition.\"\"\"\n\n    type: Literal[\"wait_details\"] = Field(\n        default=\"wait_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n    delay_seconds: int = Field(\n        default=...,\n        description=(\n            \"The length of time in seconds the client should wait before transitioning\"\n            \" states.\"\n        ),\n    )\n    reason: Optional[str] = Field(\n        default=None, description=\"The reason why the state transition should wait.\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.HistoryResponseState","title":"HistoryResponseState","text":"

Bases: PrefectBaseModel

Represents a single state's history over an interval.

Source code in prefect/client/schemas/responses.py
class HistoryResponseState(PrefectBaseModel):\n    \"\"\"Represents a single state's history over an interval.\"\"\"\n\n    state_type: objects.StateType = Field(default=..., description=\"The state type.\")\n    state_name: str = Field(default=..., description=\"The state name.\")\n    count_runs: int = Field(\n        default=...,\n        description=\"The number of runs in the specified state during the interval.\",\n    )\n    sum_estimated_run_time: datetime.timedelta = Field(\n        default=...,\n        description=\"The total estimated run time of all runs during the interval.\",\n    )\n    sum_estimated_lateness: datetime.timedelta = Field(\n        default=...,\n        description=(\n            \"The sum of differences between actual and expected start time during the\"\n            \" interval.\"\n        ),\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.HistoryResponse","title":"HistoryResponse","text":"

Bases: PrefectBaseModel

Represents a history of aggregation states over an interval

Source code in prefect/client/schemas/responses.py
class HistoryResponse(PrefectBaseModel):\n    \"\"\"Represents a history of aggregation states over an interval\"\"\"\n\n    interval_start: DateTimeTZ = Field(\n        default=..., description=\"The start date of the interval.\"\n    )\n    interval_end: DateTimeTZ = Field(\n        default=..., description=\"The end date of the interval.\"\n    )\n    states: List[HistoryResponseState] = Field(\n        default=..., description=\"A list of state histories during the interval.\"\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.OrchestrationResult","title":"OrchestrationResult","text":"

Bases: PrefectBaseModel

A container for the output of state orchestration.

Source code in prefect/client/schemas/responses.py
class OrchestrationResult(PrefectBaseModel):\n    \"\"\"\n    A container for the output of state orchestration.\n    \"\"\"\n\n    state: Optional[objects.State]\n    status: SetStateStatus\n    details: StateResponseDetails\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.FlowRunResponse","title":"FlowRunResponse","text":"

Bases: ObjectBaseModel

Source code in prefect/client/schemas/responses.py
class FlowRunResponse(ObjectBaseModel):\n    name: str = Field(\n        default_factory=lambda: generate_slug(2),\n        description=(\n            \"The name of the flow run. Defaults to a random slug if not specified.\"\n        ),\n        examples=[\"my-flow-run\"],\n    )\n    flow_id: UUID = Field(default=..., description=\"The id of the flow being run.\")\n    state_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the flow run's current state.\"\n    )\n    deployment_id: Optional[UUID] = Field(\n        default=None,\n        description=(\n            \"The id of the deployment associated with this flow run, if available.\"\n        ),\n    )\n    deployment_version: Optional[str] = Field(\n        default=None,\n        description=\"The version of the deployment associated with this flow run.\",\n        examples=[\"1.0\"],\n    )\n    work_queue_name: Optional[str] = Field(\n        default=None, description=\"The work queue that handled this flow run.\"\n    )\n    flow_version: Optional[str] = Field(\n        default=None,\n        description=\"The version of the flow executed in this flow run.\",\n        examples=[\"1.0\"],\n    )\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict, description=\"Parameters for the flow run.\"\n    )\n    idempotency_key: Optional[str] = Field(\n        default=None,\n        description=(\n            \"An optional idempotency key for the flow run. Used to ensure the same flow\"\n            \" run is not created multiple times.\"\n        ),\n    )\n    context: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Additional context for the flow run.\",\n        examples=[{\"my_var\": \"my_val\"}],\n    )\n    empirical_policy: objects.FlowRunPolicy = Field(\n        default_factory=objects.FlowRunPolicy,\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags on the flow run\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    parent_task_run_id: Optional[UUID] = Field(\n        default=None,\n        description=(\n            \"If the flow run is a subflow, the id of the 'dummy' task in the parent\"\n            \" flow used to track subflow state.\"\n        ),\n    )\n    run_count: int = Field(\n        default=0, description=\"The number of times the flow run was executed.\"\n    )\n    expected_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The flow run's expected start time.\",\n    )\n    next_scheduled_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The next time the flow run is scheduled to start.\",\n    )\n    start_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual start time.\"\n    )\n    end_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual end time.\"\n    )\n    total_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=(\n            \"Total run time. 
If the flow run was executed multiple times, the time of\"\n            \" each run will be summed.\"\n        ),\n    )\n    estimated_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"A real-time estimate of the total run time.\",\n    )\n    estimated_start_time_delta: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"The difference between actual and expected start time.\",\n    )\n    auto_scheduled: bool = Field(\n        default=False,\n        description=\"Whether or not the flow run was automatically scheduled.\",\n    )\n    infrastructure_document_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The block document defining infrastructure to use this flow run.\",\n    )\n    infrastructure_pid: Optional[str] = Field(\n        default=None,\n        description=\"The id of the flow run as returned by an infrastructure block.\",\n    )\n    created_by: Optional[CreatedBy] = Field(\n        default=None,\n        description=\"Optional information about the creator of this flow run.\",\n    )\n    work_queue_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the run's work pool queue.\"\n    )\n\n    work_pool_id: Optional[UUID] = Field(\n        description=\"The work pool with which the queue is associated.\"\n    )\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the flow run's work pool.\",\n        examples=[\"my-work-pool\"],\n    )\n    state: Optional[objects.State] = Field(\n        default=None,\n        description=\"The state of the flow run.\",\n        examples=[objects.State(type=objects.StateType.COMPLETED)],\n    )\n    job_variables: Optional[dict] = Field(\n        default=None, description=\"Job variables for the flow run.\"\n    )\n\n    # These are server-side optimizations and should not be present on client models\n    # TODO: Deprecate these fields\n\n    state_type: Optional[objects.StateType] = Field(\n        default=None, description=\"The type of the current flow run state.\"\n    )\n    state_name: Optional[str] = Field(\n        default=None, description=\"The name of the current flow run state.\"\n    )\n\n    def __eq__(self, other: Any) -> bool:\n        \"\"\"\n        Check for \"equality\" to another flow run schema\n\n        Estimates times are rolling and will always change with repeated queries for\n        a flow run so we ignore them during equality checks.\n        \"\"\"\n        if isinstance(other, objects.FlowRun):\n            exclude_fields = {\"estimated_run_time\", \"estimated_start_time_delta\"}\n            return self.dict(exclude=exclude_fields) == other.dict(\n                exclude=exclude_fields\n            )\n        return super().__eq__(other)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.GlobalConcurrencyLimitResponse","title":"GlobalConcurrencyLimitResponse","text":"

Bases: ObjectBaseModel

A response object for global concurrency limits.

Source code in prefect/client/schemas/responses.py
class GlobalConcurrencyLimitResponse(ObjectBaseModel):\n    \"\"\"\n    A response object for global concurrency limits.\n    \"\"\"\n\n    active: bool = Field(\n        default=True, description=\"Whether the global concurrency limit is active.\"\n    )\n    name: str = Field(\n        default=..., description=\"The name of the global concurrency limit.\"\n    )\n    limit: int = Field(default=..., description=\"The concurrency limit.\")\n    active_slots: int = Field(default=..., description=\"The number of active slots.\")\n    slot_decay_per_second: float = Field(\n        default=2.0,\n        description=\"The decay rate for active slots when used as a rate limit.\",\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_5","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules","title":"prefect.client.schemas.schedules","text":"

Schedule schemas

","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules.IntervalSchedule","title":"IntervalSchedule","text":"

Bases: PrefectBaseModel

A schedule formed by adding interval increments to an anchor_date. If no anchor_date is supplied, the current UTC time is used. If a timezone-naive datetime is provided for anchor_date, it is assumed to be in the schedule's timezone (or UTC). Even if supplied with an IANA timezone, anchor dates are always stored as UTC offsets, so a timezone can be provided to determine localization behaviors like DST boundary handling. If none is provided it will be inferred from the anchor date.

NOTE: If the IntervalSchedule anchor_date or timezone is provided in a DST-observing timezone, then the schedule will adjust itself appropriately. Intervals greater than 24 hours will follow DST conventions, while intervals of less than 24 hours will follow UTC intervals. For example, an hourly schedule will fire every UTC hour, even across DST boundaries. When clocks are set back, this will result in two runs that appear to both be scheduled for 1am local time, even though they are an hour apart in UTC time. For longer intervals, like a daily schedule, the interval schedule will adjust for DST boundaries so that the clock-hour remains constant. This means that a daily schedule that always fires at 9am will observe DST and continue to fire at 9am in the local time zone.

Parameters:

interval (timedelta): an interval to schedule on (required)
anchor_date (DateTimeTZ): an anchor date to schedule increments against; if not provided, the current timestamp will be used (required)
timezone (str): a valid timezone string (required)

Source code in prefect/client/schemas/schedules.py
class IntervalSchedule(PrefectBaseModel):\n    \"\"\"\n    A schedule formed by adding `interval` increments to an `anchor_date`. If no\n    `anchor_date` is supplied, the current UTC time is used.  If a\n    timezone-naive datetime is provided for `anchor_date`, it is assumed to be\n    in the schedule's timezone (or UTC). Even if supplied with an IANA timezone,\n    anchor dates are always stored as UTC offsets, so a `timezone` can be\n    provided to determine localization behaviors like DST boundary handling. If\n    none is provided it will be inferred from the anchor date.\n\n    NOTE: If the `IntervalSchedule` `anchor_date` or `timezone` is provided in a\n    DST-observing timezone, then the schedule will adjust itself appropriately.\n    Intervals greater than 24 hours will follow DST conventions, while intervals\n    of less than 24 hours will follow UTC intervals. For example, an hourly\n    schedule will fire every UTC hour, even across DST boundaries. When clocks\n    are set back, this will result in two runs that *appear* to both be\n    scheduled for 1am local time, even though they are an hour apart in UTC\n    time. For longer intervals, like a daily schedule, the interval schedule\n    will adjust for DST boundaries so that the clock-hour remains constant. This\n    means that a daily schedule that always fires at 9am will observe DST and\n    continue to fire at 9am in the local time zone.\n\n    Args:\n        interval (datetime.timedelta): an interval to schedule on\n        anchor_date (DateTimeTZ, optional): an anchor date to schedule increments against;\n            if not provided, the current timestamp will be used\n        timezone (str, optional): a valid timezone string\n    \"\"\"\n\n    class Config:\n        extra = \"forbid\"\n        exclude_none = True\n\n    interval: PositiveDuration\n    anchor_date: DateTimeTZ = None\n    timezone: Optional[str] = Field(default=None, examples=[\"America/New_York\"])\n\n    @validator(\"anchor_date\", always=True)\n    def validate_anchor_date(cls, v):\n        return default_anchor_date(v)\n\n    @validator(\"timezone\", always=True)\n    def validate_default_timezone(cls, v, values):\n        return default_timezone(v, values=values)\n
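A minimal sketch of an hourly schedule; the anchor date is omitted, so the validator falls back to the current timestamp as described above:

# Sketch: run every hour in the given timezone.
import datetime

from prefect.client.schemas.schedules import IntervalSchedule

schedule = IntervalSchedule(
    interval=datetime.timedelta(hours=1),
    timezone="America/New_York",
)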
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules.CronSchedule","title":"CronSchedule","text":"

Bases: PrefectBaseModel

Cron schedule

NOTE: If the timezone is a DST-observing one, then the schedule will adjust itself appropriately. Cron's rules for DST are based on schedule times, not intervals. This means that an hourly cron schedule will fire on every new schedule hour, not every elapsed hour; for example, when clocks are set back this will result in a two-hour pause as the schedule will fire the first time 1am is reached and the first time 2am is reached, 120 minutes later. Longer schedules, such as one that fires at 9am every morning, will automatically adjust for DST.

Parameters:

cron (str): a valid cron string (required)
timezone (str): a valid timezone string in IANA tzdata format (for example, America/New_York). (required)
day_or (bool): Control how croniter handles day and day_of_week entries. Defaults to True, matching cron which connects those values using OR. If the switch is set to False, the values are connected using AND. This behaves like fcron and enables you to e.g. define a job that executes each 2nd friday of a month by setting the days of month and the weekday. (required)

Source code in prefect/client/schemas/schedules.py
class CronSchedule(PrefectBaseModel):\n    \"\"\"\n    Cron schedule\n\n    NOTE: If the timezone is a DST-observing one, then the schedule will adjust\n    itself appropriately. Cron's rules for DST are based on schedule times, not\n    intervals. This means that an hourly cron schedule will fire on every new\n    schedule hour, not every elapsed hour; for example, when clocks are set back\n    this will result in a two-hour pause as the schedule will fire *the first\n    time* 1am is reached and *the first time* 2am is reached, 120 minutes later.\n    Longer schedules, such as one that fires at 9am every morning, will\n    automatically adjust for DST.\n\n    Args:\n        cron (str): a valid cron string\n        timezone (str): a valid timezone string in IANA tzdata format (for example,\n            America/New_York).\n        day_or (bool, optional): Control how croniter handles `day` and `day_of_week`\n            entries. Defaults to True, matching cron which connects those values using\n            OR. If the switch is set to False, the values are connected using AND. This\n            behaves like fcron and enables you to e.g. define a job that executes each\n            2nd friday of a month by setting the days of month and the weekday.\n\n    \"\"\"\n\n    class Config:\n        extra = \"forbid\"\n\n    cron: str = Field(default=..., examples=[\"0 0 * * *\"])\n    timezone: Optional[str] = Field(default=None, examples=[\"America/New_York\"])\n    day_or: bool = Field(\n        default=True,\n        description=(\n            \"Control croniter behavior for handling day and day_of_week entries.\"\n        ),\n    )\n\n    @validator(\"timezone\")\n    def valid_timezone(cls, v):\n        return default_timezone(v)\n\n    @validator(\"cron\")\n    def valid_cron_string(cls, v):\n        return validate_cron_string(v)\n
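A minimal sketch of a daily schedule that, per the DST note above, keeps firing at 9am local time across DST boundaries:

# Sketch: fire at 9am local time every day.
from prefect.client.schemas.schedules import CronSchedule

schedule = CronSchedule(cron="0 9 * * *", timezone="America/New_York")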
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules.RRuleSchedule","title":"RRuleSchedule","text":"

Bases: PrefectBaseModel

RRule schedule, based on the iCalendar standard (RFC 5545) as implemented in dateutil.rrule.

RRules are appropriate for any kind of calendar-date manipulation, including irregular intervals, repetition, exclusions, week day or day-of-month adjustments, and more.

Note that as a calendar-oriented standard, RRuleSchedules are sensitive to the initial timezone provided. A 9am daily schedule with a daylight saving time-aware start date will maintain a local 9am time through DST boundaries; a 9am daily schedule with a UTC start date will maintain a 9am UTC time.

Parameters:

Name Type Description Default rrule str

a valid RRule string

required timezone str

a valid timezone string

required Source code in prefect/client/schemas/schedules.py
class RRuleSchedule(PrefectBaseModel):\n    \"\"\"\n    RRule schedule, based on the iCalendar standard\n    ([RFC 5545](https://datatracker.ietf.org/doc/html/rfc5545)) as\n    implemented in `dateutils.rrule`.\n\n    RRules are appropriate for any kind of calendar-date manipulation, including\n    irregular intervals, repetition, exclusions, week day or day-of-month\n    adjustments, and more.\n\n    Note that as a calendar-oriented standard, `RRuleSchedules` are sensitive to\n    to the initial timezone provided. A 9am daily schedule with a daylight saving\n    time-aware start date will maintain a local 9am time through DST boundaries;\n    a 9am daily schedule with a UTC start date will maintain a 9am UTC time.\n\n    Args:\n        rrule (str): a valid RRule string\n        timezone (str, optional): a valid timezone string\n    \"\"\"\n\n    class Config:\n        extra = \"forbid\"\n\n    rrule: str\n    timezone: Optional[str] = Field(default=None, examples=[\"America/New_York\"])\n\n    @validator(\"rrule\")\n    def validate_rrule_str(cls, v):\n        return validate_rrule_string(v)\n\n    @classmethod\n    def from_rrule(cls, rrule: dateutil.rrule.rrule):\n        if isinstance(rrule, dateutil.rrule.rrule):\n            if rrule._dtstart.tzinfo is not None:\n                timezone = rrule._dtstart.tzinfo.name\n            else:\n                timezone = \"UTC\"\n            return RRuleSchedule(rrule=str(rrule), timezone=timezone)\n        elif isinstance(rrule, dateutil.rrule.rruleset):\n            dtstarts = [rr._dtstart for rr in rrule._rrule if rr._dtstart is not None]\n            unique_dstarts = set(pendulum.instance(d).in_tz(\"UTC\") for d in dtstarts)\n            unique_timezones = set(d.tzinfo for d in dtstarts if d.tzinfo is not None)\n\n            if len(unique_timezones) > 1:\n                raise ValueError(\n                    f\"rruleset has too many dtstart timezones: {unique_timezones}\"\n                )\n\n            if len(unique_dstarts) > 1:\n                raise ValueError(f\"rruleset has too many dtstarts: {unique_dstarts}\")\n\n            if unique_dstarts and unique_timezones:\n                timezone = dtstarts[0].tzinfo.name\n            else:\n                timezone = \"UTC\"\n\n            rruleset_string = \"\"\n            if rrule._rrule:\n                rruleset_string += \"\\n\".join(str(r) for r in rrule._rrule)\n            if rrule._exrule:\n                rruleset_string += \"\\n\" if rruleset_string else \"\"\n                rruleset_string += \"\\n\".join(str(r) for r in rrule._exrule).replace(\n                    \"RRULE\", \"EXRULE\"\n                )\n            if rrule._rdate:\n                rruleset_string += \"\\n\" if rruleset_string else \"\"\n                rruleset_string += \"RDATE:\" + \",\".join(\n                    rd.strftime(\"%Y%m%dT%H%M%SZ\") for rd in rrule._rdate\n                )\n            if rrule._exdate:\n                rruleset_string += \"\\n\" if rruleset_string else \"\"\n                rruleset_string += \"EXDATE:\" + \",\".join(\n                    exd.strftime(\"%Y%m%dT%H%M%SZ\") for exd in rrule._exdate\n                )\n            return RRuleSchedule(rrule=rruleset_string, timezone=timezone)\n        else:\n            raise ValueError(f\"Invalid RRule object: {rrule}\")\n\n    def to_rrule(self) -> dateutil.rrule.rrule:\n        \"\"\"\n        Since rrule doesn't properly serialize/deserialize timezones, we localize dates\n        here\n        \"\"\"\n        rrule 
= dateutil.rrule.rrulestr(\n            self.rrule,\n            dtstart=DEFAULT_ANCHOR_DATE,\n            cache=True,\n        )\n        timezone = dateutil.tz.gettz(self.timezone)\n        if isinstance(rrule, dateutil.rrule.rrule):\n            kwargs = dict(dtstart=rrule._dtstart.replace(tzinfo=timezone))\n            if rrule._until:\n                kwargs.update(\n                    until=rrule._until.replace(tzinfo=timezone),\n                )\n            return rrule.replace(**kwargs)\n        elif isinstance(rrule, dateutil.rrule.rruleset):\n            # update rrules\n            localized_rrules = []\n            for rr in rrule._rrule:\n                kwargs = dict(dtstart=rr._dtstart.replace(tzinfo=timezone))\n                if rr._until:\n                    kwargs.update(\n                        until=rr._until.replace(tzinfo=timezone),\n                    )\n                localized_rrules.append(rr.replace(**kwargs))\n            rrule._rrule = localized_rrules\n\n            # update exrules\n            localized_exrules = []\n            for exr in rrule._exrule:\n                kwargs = dict(dtstart=exr._dtstart.replace(tzinfo=timezone))\n                if exr._until:\n                    kwargs.update(\n                        until=exr._until.replace(tzinfo=timezone),\n                    )\n                localized_exrules.append(exr.replace(**kwargs))\n            rrule._exrule = localized_exrules\n\n            # update rdates\n            localized_rdates = []\n            for rd in rrule._rdate:\n                localized_rdates.append(rd.replace(tzinfo=timezone))\n            rrule._rdate = localized_rdates\n\n            # update exdates\n            localized_exdates = []\n            for exd in rrule._exdate:\n                localized_exdates.append(exd.replace(tzinfo=timezone))\n            rrule._exdate = localized_exdates\n\n            return rrule\n\n    @validator(\"timezone\", always=True)\n    def valid_timezone(cls, v):\n        return validate_rrule_timezone(v)\n
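A minimal sketch of constructing an RRuleSchedule from an RRULE string; the rule and timezone are illustrative:

from prefect.client.schemas.schedules import RRuleSchedule

# 9am local time every weekday; the timezone keeps the 9am clock-hour
# through DST transitions.
weekday_9am = RRuleSchedule(
    rrule="FREQ=WEEKLY;BYDAY=MO,TU,WE,TH,FR;BYHOUR=9;BYMINUTE=0",
    timezone="America/New_York",
)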
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules.RRuleSchedule.to_rrule","title":"to_rrule","text":"

Since rrule doesn't properly serialize/deserialize timezones, we localize dates here

Source code in prefect/client/schemas/schedules.py
def to_rrule(self) -> dateutil.rrule.rrule:\n    \"\"\"\n    Since rrule doesn't properly serialize/deserialize timezones, we localize dates\n    here\n    \"\"\"\n    rrule = dateutil.rrule.rrulestr(\n        self.rrule,\n        dtstart=DEFAULT_ANCHOR_DATE,\n        cache=True,\n    )\n    timezone = dateutil.tz.gettz(self.timezone)\n    if isinstance(rrule, dateutil.rrule.rrule):\n        kwargs = dict(dtstart=rrule._dtstart.replace(tzinfo=timezone))\n        if rrule._until:\n            kwargs.update(\n                until=rrule._until.replace(tzinfo=timezone),\n            )\n        return rrule.replace(**kwargs)\n    elif isinstance(rrule, dateutil.rrule.rruleset):\n        # update rrules\n        localized_rrules = []\n        for rr in rrule._rrule:\n            kwargs = dict(dtstart=rr._dtstart.replace(tzinfo=timezone))\n            if rr._until:\n                kwargs.update(\n                    until=rr._until.replace(tzinfo=timezone),\n                )\n            localized_rrules.append(rr.replace(**kwargs))\n        rrule._rrule = localized_rrules\n\n        # update exrules\n        localized_exrules = []\n        for exr in rrule._exrule:\n            kwargs = dict(dtstart=exr._dtstart.replace(tzinfo=timezone))\n            if exr._until:\n                kwargs.update(\n                    until=exr._until.replace(tzinfo=timezone),\n                )\n            localized_exrules.append(exr.replace(**kwargs))\n        rrule._exrule = localized_exrules\n\n        # update rdates\n        localized_rdates = []\n        for rd in rrule._rdate:\n            localized_rdates.append(rd.replace(tzinfo=timezone))\n        rrule._rdate = localized_rdates\n\n        # update exdates\n        localized_exdates = []\n        for exd in rrule._exdate:\n            localized_exdates.append(exd.replace(tzinfo=timezone))\n        rrule._exdate = localized_exdates\n\n        return rrule\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules.construct_schedule","title":"construct_schedule","text":"

Construct a schedule from the provided arguments.

Parameters:

Name Type Description Default interval Optional[Union[int, float, timedelta]]

An interval on which to schedule runs. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.

None anchor_date Optional[Union[datetime, str]]

The start date for an interval schedule.

None cron Optional[str]

A cron schedule for runs.

None rrule Optional[str]

An rrule schedule of when to execute runs of this flow.

None timezone Optional[str]

A timezone to use for the schedule. Defaults to UTC.

None Source code in prefect/client/schemas/schedules.py
def construct_schedule(\n    interval: Optional[Union[int, float, datetime.timedelta]] = None,\n    anchor_date: Optional[Union[datetime.datetime, str]] = None,\n    cron: Optional[str] = None,\n    rrule: Optional[str] = None,\n    timezone: Optional[str] = None,\n) -> SCHEDULE_TYPES:\n    \"\"\"\n    Construct a schedule from the provided arguments.\n\n    Args:\n        interval: An interval on which to schedule runs. Accepts either a number\n            or a timedelta object. If a number is given, it will be interpreted as seconds.\n        anchor_date: The start date for an interval schedule.\n        cron: A cron schedule for runs.\n        rrule: An rrule schedule of when to execute runs of this flow.\n        timezone: A timezone to use for the schedule. Defaults to UTC.\n    \"\"\"\n    num_schedules = sum(1 for entry in (interval, cron, rrule) if entry is not None)\n    if num_schedules > 1:\n        raise ValueError(\"Only one of interval, cron, or rrule can be provided.\")\n\n    if anchor_date and not interval:\n        raise ValueError(\n            \"An anchor date can only be provided with an interval schedule\"\n        )\n\n    if timezone and not (interval or cron or rrule):\n        raise ValueError(\n            \"A timezone can only be provided with interval, cron, or rrule\"\n        )\n\n    schedule = None\n    if interval:\n        if isinstance(interval, (int, float)):\n            interval = datetime.timedelta(seconds=interval)\n        schedule = IntervalSchedule(\n            interval=interval, anchor_date=anchor_date, timezone=timezone\n        )\n    elif cron:\n        schedule = CronSchedule(cron=cron, timezone=timezone)\n    elif rrule:\n        schedule = RRuleSchedule(rrule=rrule, timezone=timezone)\n\n    if schedule is None:\n        raise ValueError(\"Either interval, cron, or rrule must be provided\")\n\n    return schedule\n
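For example, each of the following calls returns one of the schedule types above (a sketch; the values are illustrative):

from prefect.client.schemas.schedules import construct_schedule

every_ten_minutes = construct_schedule(interval=600)  # seconds -> IntervalSchedule
nightly = construct_schedule(cron="0 2 * * *", timezone="UTC")  # -> CronSchedule
biweekly = construct_schedule(rrule="FREQ=WEEKLY;INTERVAL=2")  # -> RRuleSchedule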
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_6","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting","title":"prefect.client.schemas.sorting","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.FlowRunSort","title":"FlowRunSort","text":"

Bases: AutoEnum

Defines flow run sorting options.

Source code in prefect/client/schemas/sorting.py
class FlowRunSort(AutoEnum):\n    \"\"\"Defines flow run sorting options.\"\"\"\n\n    ID_DESC = AutoEnum.auto()\n    START_TIME_ASC = AutoEnum.auto()\n    START_TIME_DESC = AutoEnum.auto()\n    EXPECTED_START_TIME_ASC = AutoEnum.auto()\n    EXPECTED_START_TIME_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n    NEXT_SCHEDULED_START_TIME_ASC = AutoEnum.auto()\n    END_TIME_DESC = AutoEnum.auto()\n
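These enum values are passed as the sort argument to client read methods. A minimal sketch, assuming read_flow_runs accepts sort and limit keyword arguments:

from prefect.client.orchestration import get_client
from prefect.client.schemas.sorting import FlowRunSort

async def most_recent_flow_runs():
    async with get_client() as client:
        # Return the ten most recently started flow runs.
        return await client.read_flow_runs(
            sort=FlowRunSort.START_TIME_DESC, limit=10
        )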
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.TaskRunSort","title":"TaskRunSort","text":"

Bases: AutoEnum

Defines task run sorting options.

Source code in prefect/client/schemas/sorting.py
class TaskRunSort(AutoEnum):\n    \"\"\"Defines task run sorting options.\"\"\"\n\n    ID_DESC = AutoEnum.auto()\n    EXPECTED_START_TIME_ASC = AutoEnum.auto()\n    EXPECTED_START_TIME_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n    NEXT_SCHEDULED_START_TIME_ASC = AutoEnum.auto()\n    END_TIME_DESC = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.AutomationSort","title":"AutomationSort","text":"

Bases: AutoEnum

Defines automation sorting options.

Source code in prefect/client/schemas/sorting.py
class AutomationSort(AutoEnum):\n    \"\"\"Defines automation sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.LogSort","title":"LogSort","text":"

Bases: AutoEnum

Defines log sorting options.

Source code in prefect/client/schemas/sorting.py
class LogSort(AutoEnum):\n    \"\"\"Defines log sorting options.\"\"\"\n\n    TIMESTAMP_ASC = AutoEnum.auto()\n    TIMESTAMP_DESC = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.FlowSort","title":"FlowSort","text":"

Bases: AutoEnum

Defines flow sorting options.

Source code in prefect/client/schemas/sorting.py
class FlowSort(AutoEnum):\n    \"\"\"Defines flow sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.DeploymentSort","title":"DeploymentSort","text":"

Bases: AutoEnum

Defines deployment sorting options.

Source code in prefect/client/schemas/sorting.py
class DeploymentSort(AutoEnum):\n    \"\"\"Defines deployment sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.ArtifactSort","title":"ArtifactSort","text":"

Bases: AutoEnum

Defines artifact sorting options.

Source code in prefect/client/schemas/sorting.py
class ArtifactSort(AutoEnum):\n    \"\"\"Defines artifact sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    ID_DESC = AutoEnum.auto()\n    KEY_DESC = AutoEnum.auto()\n    KEY_ASC = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.ArtifactCollectionSort","title":"ArtifactCollectionSort","text":"

Bases: AutoEnum

Defines artifact collection sorting options.

Source code in prefect/client/schemas/sorting.py
class ArtifactCollectionSort(AutoEnum):\n    \"\"\"Defines artifact collection sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    ID_DESC = AutoEnum.auto()\n    KEY_DESC = AutoEnum.auto()\n    KEY_ASC = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.VariableSort","title":"VariableSort","text":"

Bases: AutoEnum

Defines variables sorting options.

Source code in prefect/client/schemas/sorting.py
class VariableSort(AutoEnum):\n    \"\"\"Defines variables sorting options.\"\"\"\n\n    CREATED_DESC = \"CREATED_DESC\"\n    UPDATED_DESC = \"UPDATED_DESC\"\n    NAME_DESC = \"NAME_DESC\"\n    NAME_ASC = \"NAME_ASC\"\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.BlockDocumentSort","title":"BlockDocumentSort","text":"

Bases: AutoEnum

Defines block document sorting options.

Source code in prefect/client/schemas/sorting.py
class BlockDocumentSort(AutoEnum):\n    \"\"\"Defines block document sorting options.\"\"\"\n\n    NAME_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    BLOCK_TYPE_AND_NAME_ASC = AutoEnum.auto()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/utilities/","title":"utilities","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/utilities/#prefect.client.utilities","title":"prefect.client.utilities","text":"

Utilities for working with clients.

","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/utilities/#prefect.client.utilities.get_or_create_client","title":"get_or_create_client","text":"

Returns the provided client, infers a client from context if available, or creates a new client.

Parameters:

Name Type Description Default client Optional[PrefectClient]

an optional client to use

None

Returns:

Type Description Tuple[PrefectClient, bool]
  • tuple: a tuple of the client and a boolean indicating if the client was inferred from context
Source code in prefect/client/utilities.py
def get_or_create_client(\n    client: Optional[\"PrefectClient\"] = None,\n) -> Tuple[\"PrefectClient\", bool]:\n    \"\"\"\n    Returns provided client, infers a client from context if available, or creates a new client.\n\n    Args:\n        - client (PrefectClient, optional): an optional client to use\n\n    Returns:\n        - tuple: a tuple of the client and a boolean indicating if the client was inferred from context\n    \"\"\"\n    if client is not None:\n        return client, True\n    from prefect._internal.concurrency.event_loop import get_running_loop\n    from prefect.context import FlowRunContext, TaskRunContext\n\n    flow_run_context = FlowRunContext.get()\n    task_run_context = TaskRunContext.get()\n\n    if (\n        flow_run_context\n        and getattr(flow_run_context.client, \"_loop\") == get_running_loop()\n    ):\n        return flow_run_context.client, True\n    elif (\n        task_run_context\n        and getattr(task_run_context.client, \"_loop\") == get_running_loop()\n    ):\n        return task_run_context.client, True\n    else:\n        from prefect.client.orchestration import get_client as get_httpx_client\n\n        return get_httpx_client(), False\n
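A minimal sketch of the inference behavior: inside a flow run, the context's client is reused and the returned boolean is True (the flow name and healthcheck call are illustrative):

from prefect import flow
from prefect.client.utilities import get_or_create_client

@flow
async def check_api():
    client, inferred = get_or_create_client()
    # `inferred` is True here because the flow run context supplies a client.
    await client.api_healthcheck()
    return inferred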
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/utilities/#prefect.client.utilities.inject_client","title":"inject_client","text":"

Simple helper to provide a context-managed client to an asynchronous function.

The decorated function must take a client kwarg; if a client is passed when called, it will be used instead of creating a new one, but it will not be context managed, as it is assumed that the caller is managing the context.

Source code in prefect/client/utilities.py
def inject_client(\n    fn: Callable[P, Coroutine[Any, Any, Any]],\n) -> Callable[P, Coroutine[Any, Any, Any]]:\n    \"\"\"\n    Simple helper to provide a context managed client to a asynchronous function.\n\n    The decorated function _must_ take a `client` kwarg and if a client is passed when\n    called it will be used instead of creating a new one, but it will not be context\n    managed as it is assumed that the caller is managing the context.\n    \"\"\"\n\n    @wraps(fn)\n    async def with_injected_client(*args: P.args, **kwargs: P.kwargs) -> Any:\n        client = cast(Optional[\"PrefectClient\"], kwargs.pop(\"client\", None))\n        client, inferred = get_or_create_client(client)\n        if not inferred:\n            context = client\n        else:\n            from prefect.utilities.asyncutils import asyncnullcontext\n\n            context = asyncnullcontext()\n        async with context as new_client:\n            kwargs.setdefault(\"client\", new_client or client)\n            return await fn(*args, **kwargs)\n\n    return with_injected_client\n
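A minimal sketch of a decorated coroutine; the function name is illustrative:

from typing import Optional

from prefect.client.orchestration import PrefectClient
from prefect.client.utilities import inject_client

@inject_client
async def list_deployment_names(client: Optional[PrefectClient] = None):
    # `client` is created and context managed by the decorator when the
    # caller does not pass one explicitly.
    deployments = await client.read_deployments()
    return [d.name for d in deployments]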
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/concurrency/asyncio/","title":"asyncio","text":"","tags":["Python API","concurrency","asyncio"]},{"location":"api-ref/prefect/concurrency/asyncio/#prefect.concurrency.asyncio","title":"prefect.concurrency.asyncio","text":"","tags":["Python API","concurrency","asyncio"]},{"location":"api-ref/prefect/concurrency/asyncio/#prefect.concurrency.asyncio.ConcurrencySlotAcquisitionError","title":"ConcurrencySlotAcquisitionError","text":"

Bases: Exception

Raised when an unhandleable error occurs while acquiring concurrency slots.

Source code in prefect/concurrency/asyncio.py
class ConcurrencySlotAcquisitionError(Exception):\n    \"\"\"Raised when an unhandlable occurs while acquiring concurrency slots.\"\"\"\n
","tags":["Python API","concurrency","asyncio"]},{"location":"api-ref/prefect/concurrency/asyncio/#prefect.concurrency.asyncio.rate_limit","title":"rate_limit async","text":"

Block execution until an occupy number of slots of the concurrency limits given in names are acquired. Requires that all given concurrency limits have a slot decay.

Parameters:

Name Type Description Default names Union[str, List[str]]

The names of the concurrency limits to acquire slots from.

required occupy int

The number of slots to acquire and hold from each limit.

1 Source code in prefect/concurrency/asyncio.py
async def rate_limit(names: Union[str, List[str]], occupy: int = 1):\n    \"\"\"Block execution until an `occupy` number of slots of the concurrency\n    limits given in `names` are acquired. Requires that all given concurrency\n    limits have a slot decay.\n\n    Args:\n        names: The names of the concurrency limits to acquire slots from.\n        occupy: The number of slots to acquire and hold from each limit.\n    \"\"\"\n    names = names if isinstance(names, list) else [names]\n    limits = await _acquire_concurrency_slots(names, occupy, mode=\"rate_limit\")\n    _emit_concurrency_acquisition_events(limits, occupy)\n
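For example (a sketch; the limit name is illustrative and must refer to a concurrency limit configured with a slot decay):

from prefect.concurrency.asyncio import rate_limit

async def call_external_api(payload: dict) -> None:
    # Blocks until a slot is available on the "api-requests" limit.
    await rate_limit("api-requests")
    # ... issue the rate-limited request here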
","tags":["Python API","concurrency","asyncio"]},{"location":"api-ref/prefect/concurrency/events/","title":"events","text":"","tags":["Python API","concurrency"]},{"location":"api-ref/prefect/concurrency/events/#prefect.concurrency.events","title":"prefect.concurrency.events","text":"","tags":["Python API","concurrency"]},{"location":"api-ref/prefect/concurrency/services/","title":"services","text":"","tags":["Python API","concurrency"]},{"location":"api-ref/prefect/concurrency/services/#prefect.concurrency.services","title":"prefect.concurrency.services","text":"","tags":["Python API","concurrency"]},{"location":"api-ref/prefect/concurrency/sync/","title":"sync","text":"","tags":["Python API","concurrency","sync"]},{"location":"api-ref/prefect/concurrency/sync/#prefect.concurrency.sync","title":"prefect.concurrency.sync","text":"","tags":["Python API","concurrency","sync"]},{"location":"api-ref/prefect/concurrency/sync/#prefect.concurrency.sync.rate_limit","title":"rate_limit","text":"

Block execution until an occupy number of slots of the concurrency limits given in names are acquired. Requires that all given concurrency limits have a slot decay.

Parameters:

Name Type Description Default names Union[str, List[str]]

The names of the concurrency limits to acquire slots from.

required occupy int

The number of slots to acquire and hold from each limit.

1 Source code in prefect/concurrency/sync.py
def rate_limit(names: Union[str, List[str]], occupy: int = 1):\n    \"\"\"Block execution until an `occupy` number of slots of the concurrency\n    limits given in `names` are acquired. Requires that all given concurrency\n    limits have a slot decay.\n\n    Args:\n        names: The names of the concurrency limits to acquire slots from.\n        occupy: The number of slots to acquire and hold from each limit.\n    \"\"\"\n    names = names if isinstance(names, list) else [names]\n    limits = _call_async_function_from_sync(\n        _acquire_concurrency_slots, names, occupy, mode=\"rate_limit\"\n    )\n    _emit_concurrency_acquisition_events(limits, occupy)\n
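The synchronous variant is used the same way from non-async code, for example inside a task (a sketch; the task and limit names are illustrative):

from prefect import task
from prefect.concurrency.sync import rate_limit

@task
def send_notification(recipient: str) -> None:
    rate_limit("notification-sends")  # blocks until a slot is acquired
    # ... send the message here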
","tags":["Python API","concurrency","sync"]},{"location":"api-ref/prefect/deployments/base/","title":"base","text":"","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base","title":"prefect.deployments.base","text":"

Core primitives for managing Prefect deployments via prefect deploy, providing a minimally opinionated build system for managing flows and deployments.

To get started, follow along with the deployments tutorial.

","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.configure_project_by_recipe","title":"configure_project_by_recipe","text":"

Given a recipe name, returns a dictionary representing base configuration options.

Parameters:

Name Type Description Default recipe str

the name of the recipe to use

required formatting_kwargs dict

additional keyword arguments to format the recipe

{}

Raises:

Type Description ValueError

if provided recipe name does not exist.

Source code in prefect/deployments/base.py
def configure_project_by_recipe(recipe: str, **formatting_kwargs) -> dict:\n    \"\"\"\n    Given a recipe name, returns a dictionary representing base configuration options.\n\n    Args:\n        recipe (str): the name of the recipe to use\n        formatting_kwargs (dict, optional): additional keyword arguments to format the recipe\n\n    Raises:\n        ValueError: if provided recipe name does not exist.\n    \"\"\"\n    # load the recipe\n    recipe_path = Path(__file__).parent / \"recipes\" / recipe / \"prefect.yaml\"\n\n    if not recipe_path.exists():\n        raise ValueError(f\"Unknown recipe {recipe!r} provided.\")\n\n    with recipe_path.open(mode=\"r\") as f:\n        config = yaml.safe_load(f)\n\n    config = apply_values(\n        template=config, values=formatting_kwargs, remove_notset=False\n    )\n\n    return config\n
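For example, to load the bundled git recipe and fill its template placeholders (the repository URL and branch are illustrative):

from prefect.deployments.base import configure_project_by_recipe

config = configure_project_by_recipe(
    "git",
    repository="https://github.com/example-org/example-repo",
    branch="main",
)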
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.create_default_prefect_yaml","title":"create_default_prefect_yaml","text":"

Creates a default prefect.yaml file in the provided path if one does not already exist; returns a boolean specifying whether a file was created.

Parameters:

Name Type Description Default name str

the name of the project; if not provided, the current directory name will be used

None contents dict

a dictionary of contents to write to the file; if not provided, defaults will be used

None Source code in prefect/deployments/base.py
def create_default_prefect_yaml(\n    path: str, name: str = None, contents: Optional[Dict[str, Any]] = None\n) -> bool:\n    \"\"\"\n    Creates default `prefect.yaml` file in the provided path if one does not already exist;\n    returns boolean specifying whether a file was created.\n\n    Args:\n        name (str, optional): the name of the project; if not provided, the current directory name\n            will be used\n        contents (dict, optional): a dictionary of contents to write to the file; if not provided,\n            defaults will be used\n    \"\"\"\n    path = Path(path)\n    prefect_file = path / \"prefect.yaml\"\n    if prefect_file.exists():\n        return False\n    default_file = Path(__file__).parent / \"templates\" / \"prefect.yaml\"\n\n    with default_file.open(mode=\"r\") as df:\n        default_contents = yaml.safe_load(df)\n\n    import prefect\n\n    contents[\"prefect-version\"] = prefect.__version__\n    contents[\"name\"] = name\n\n    with prefect_file.open(mode=\"w\") as f:\n        # write header\n        f.write(\n            \"# Welcome to your prefect.yaml file! You can use this file for storing and\"\n            \" managing\\n# configuration for deploying your flows. We recommend\"\n            \" committing this file to source\\n# control along with your flow code.\\n\\n\"\n        )\n\n        f.write(\"# Generic metadata about this project\\n\")\n        yaml.dump({\"name\": contents[\"name\"]}, f, sort_keys=False)\n        yaml.dump({\"prefect-version\": contents[\"prefect-version\"]}, f, sort_keys=False)\n        f.write(\"\\n\")\n\n        # build\n        f.write(\"# build section allows you to manage and build docker images\\n\")\n        yaml.dump(\n            {\"build\": contents.get(\"build\", default_contents.get(\"build\"))},\n            f,\n            sort_keys=False,\n        )\n        f.write(\"\\n\")\n\n        # push\n        f.write(\n            \"# push section allows you to manage if and how this project is uploaded to\"\n            \" remote locations\\n\"\n        )\n        yaml.dump(\n            {\"push\": contents.get(\"push\", default_contents.get(\"push\"))},\n            f,\n            sort_keys=False,\n        )\n        f.write(\"\\n\")\n\n        # pull\n        f.write(\n            \"# pull section allows you to provide instructions for cloning this project\"\n            \" in remote locations\\n\"\n        )\n        yaml.dump(\n            {\"pull\": contents.get(\"pull\", default_contents.get(\"pull\"))},\n            f,\n            sort_keys=False,\n        )\n        f.write(\"\\n\")\n\n        # deployments\n        f.write(\n            \"# the deployments section allows you to provide configuration for\"\n            \" deploying flows\\n\"\n        )\n        yaml.dump(\n            {\n                \"deployments\": contents.get(\n                    \"deployments\", default_contents.get(\"deployments\")\n                )\n            },\n            f,\n            sort_keys=False,\n        )\n    return True\n
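A minimal sketch combining the two helpers above: generate base configuration from a recipe, then write it to ./prefect.yaml if the file is missing (the recipe and project name are illustrative):

from prefect.deployments.base import (
    configure_project_by_recipe,
    create_default_prefect_yaml,
)

contents = configure_project_by_recipe("local", name="my-project")
created = create_default_prefect_yaml(".", name="my-project", contents=contents)
print(created)  # True if a new prefect.yaml was written, False if one already existed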
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.initialize_project","title":"initialize_project","text":"

Initializes a basic project structure with base files. If no name is provided, the name of the current directory is used. If no recipe is provided, one is inferred.

Parameters:

Name Type Description Default name str

the name of the project; if not provided, the current directory name

None recipe str

the name of the recipe to use; if not provided, one is inferred

None inputs dict

a dictionary of inputs to use when formatting the recipe

None

Returns:

Type Description List[str]

List[str]: a list of files / directories that were created

Source code in prefect/deployments/base.py
def initialize_project(\n    name: str = None, recipe: str = None, inputs: Optional[Dict[str, Any]] = None\n) -> List[str]:\n    \"\"\"\n    Initializes a basic project structure with base files.  If no name is provided, the name\n    of the current directory is used.  If no recipe is provided, one is inferred.\n\n    Args:\n        name (str, optional): the name of the project; if not provided, the current directory name\n        recipe (str, optional): the name of the recipe to use; if not provided, one is inferred\n        inputs (dict, optional): a dictionary of inputs to use when formatting the recipe\n\n    Returns:\n        List[str]: a list of files / directories that were created\n    \"\"\"\n    # determine if in git repo or use directory name as a default\n    is_git_based = False\n    formatting_kwargs = {\"directory\": str(Path(\".\").absolute().resolve())}\n    dir_name = os.path.basename(os.getcwd())\n\n    remote_url = _get_git_remote_origin_url()\n    if remote_url:\n        formatting_kwargs[\"repository\"] = remote_url\n        is_git_based = True\n        branch = _get_git_branch()\n        formatting_kwargs[\"branch\"] = branch or \"main\"\n\n    formatting_kwargs[\"name\"] = dir_name\n\n    has_dockerfile = Path(\"Dockerfile\").exists()\n\n    if has_dockerfile:\n        formatting_kwargs[\"dockerfile\"] = \"Dockerfile\"\n    elif recipe is not None and \"docker\" in recipe:\n        formatting_kwargs[\"dockerfile\"] = \"auto\"\n\n    # hand craft a pull step\n    if is_git_based and recipe is None:\n        if has_dockerfile:\n            recipe = \"docker-git\"\n        else:\n            recipe = \"git\"\n    elif recipe is None and has_dockerfile:\n        recipe = \"docker\"\n    elif recipe is None:\n        recipe = \"local\"\n\n    formatting_kwargs.update(inputs or {})\n    configuration = configure_project_by_recipe(recipe=recipe, **formatting_kwargs)\n\n    project_name = name or dir_name\n\n    files = []\n    if create_default_ignore_file(\".\"):\n        files.append(\".prefectignore\")\n    if create_default_prefect_yaml(\".\", name=project_name, contents=configuration):\n        files.append(\"prefect.yaml\")\n\n    return files\n
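For example (a sketch; the project name is illustrative and the recipe is inferred from the working directory's git remote and Dockerfile):

from prefect.deployments.base import initialize_project

created_files = initialize_project(name="my-project")
print(created_files)  # e.g. [".prefectignore", "prefect.yaml"]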
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/deployments/","title":"deployments","text":"","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments","title":"prefect.deployments.deployments","text":"

Objects for specifying deployments and utilities for loading flows from deployments.

","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment","title":"Deployment","text":"

Bases: DeprecatedInfraOverridesField, BaseModel

DEPRECATION WARNING:

This class is deprecated as of March 2024 and will not be available after September 2024. It has been replaced by flow.deploy, which offers enhanced functionality and a better user experience. For upgrade instructions, see https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.

A Prefect Deployment definition, used for specifying and building deployments.

Parameters:

Name Type Description Default name

A name for the deployment (required).

required version

An optional version for the deployment; defaults to the flow's version

required description

An optional description of the deployment; defaults to the flow's description

required tags

An optional list of tags to associate with this deployment; note that tags are used only for organizational purposes. For delegating work to agents, see work_queue_name.

required schedule

A schedule to run this deployment on, once registered (deprecated)

required is_schedule_active

Whether or not the schedule is active (deprecated)

required schedules

A list of schedules to run this deployment on

required work_queue_name

The work queue that will handle this deployment's runs

required work_pool_name

The work pool for the deployment

required flow_name

The name of the flow this deployment encapsulates

required parameters

A dictionary of parameter values to pass to runs created from this deployment

required infrastructure

An optional infrastructure block used to configure infrastructure for runs; if not provided, will default to running this deployment in Agent subprocesses

required job_variables

A dictionary of dot delimited infrastructure overrides that will be applied at runtime; for example env.CONFIG_KEY=config_value or namespace='prefect'

required storage

An optional remote storage block used to store and retrieve this workflow; if not provided, will default to referencing this flow by its local path

required path

The path to the working directory for the workflow, relative to remote storage or, if stored on a local filesystem, an absolute path

required entrypoint

The path to the entrypoint for the workflow, always relative to the path

required parameter_openapi_schema

The parameter schema of the flow, including defaults.

required enforce_parameter_schema

Whether or not the Prefect API should enforce the parameter schema for this deployment.

required
Create a new deployment using configuration defaults for an imported flow:\n\n>>> from my_project.flows import my_flow\n>>> from prefect.deployments import Deployment\n>>>\n>>> deployment = Deployment.build_from_flow(\n...     flow=my_flow,\n...     name=\"example\",\n...     version=\"1\",\n...     tags=[\"demo\"],\n>>> )\n>>> deployment.apply()\n\nCreate a new deployment with custom storage and an infrastructure override:\n\n>>> from my_project.flows import my_flow\n>>> from prefect.deployments import Deployment\n>>> from prefect.filesystems import S3\n\n>>> storage = S3.load(\"dev-bucket\") # load a pre-defined block\n>>> deployment = Deployment.build_from_flow(\n...     flow=my_flow,\n...     name=\"s3-example\",\n...     version=\"2\",\n...     tags=[\"aws\"],\n...     storage=storage,\n...     job_variables=dict(\"env.PREFECT_LOGGING_LEVEL\"=\"DEBUG\"),\n>>> )\n>>> deployment.apply()\n
Source code in prefect/deployments/deployments.py
@deprecated_class(\n    start_date=\"Mar 2024\",\n    help=\"Use `flow.deploy` to deploy your flows instead.\"\n    \" Refer to the upgrade guide for more information:\"\n    \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\",\n)\nclass Deployment(DeprecatedInfraOverridesField, BaseModel):\n    \"\"\"\n    DEPRECATION WARNING:\n\n    This class is deprecated as of March 2024 and will not be available after September 2024.\n    It has been replaced by `flow.deploy`, which offers enhanced functionality and better a better user experience.\n    For upgrade instructions, see https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\n\n    A Prefect Deployment definition, used for specifying and building deployments.\n\n    Args:\n        name: A name for the deployment (required).\n        version: An optional version for the deployment; defaults to the flow's version\n        description: An optional description of the deployment; defaults to the flow's\n            description\n        tags: An optional list of tags to associate with this deployment; note that tags\n            are used only for organizational purposes. For delegating work to agents,\n            see `work_queue_name`.\n        schedule: A schedule to run this deployment on, once registered (deprecated)\n        is_schedule_active: Whether or not the schedule is active (deprecated)\n        schedules: A list of schedules to run this deployment on\n        work_queue_name: The work queue that will handle this deployment's runs\n        work_pool_name: The work pool for the deployment\n        flow_name: The name of the flow this deployment encapsulates\n        parameters: A dictionary of parameter values to pass to runs created from this\n            deployment\n        infrastructure: An optional infrastructure block used to configure\n            infrastructure for runs; if not provided, will default to running this\n            deployment in Agent subprocesses\n        job_variables: A dictionary of dot delimited infrastructure overrides that\n            will be applied at runtime; for example `env.CONFIG_KEY=config_value` or\n            `namespace='prefect'`\n        storage: An optional remote storage block used to store and retrieve this\n            workflow; if not provided, will default to referencing this flow by its\n            local path\n        path: The path to the working directory for the workflow, relative to remote\n            storage or, if stored on a local filesystem, an absolute path\n        entrypoint: The path to the entrypoint for the workflow, always relative to the\n            `path`\n        parameter_openapi_schema: The parameter schema of the flow, including defaults.\n        enforce_parameter_schema: Whether or not the Prefect API should enforce the\n            parameter schema for this deployment.\n\n    Examples:\n\n        Create a new deployment using configuration defaults for an imported flow:\n\n        >>> from my_project.flows import my_flow\n        >>> from prefect.deployments import Deployment\n        >>>\n        >>> deployment = Deployment.build_from_flow(\n        ...     flow=my_flow,\n        ...     name=\"example\",\n        ...     version=\"1\",\n        ...     
tags=[\"demo\"],\n        >>> )\n        >>> deployment.apply()\n\n        Create a new deployment with custom storage and an infrastructure override:\n\n        >>> from my_project.flows import my_flow\n        >>> from prefect.deployments import Deployment\n        >>> from prefect.filesystems import S3\n\n        >>> storage = S3.load(\"dev-bucket\") # load a pre-defined block\n        >>> deployment = Deployment.build_from_flow(\n        ...     flow=my_flow,\n        ...     name=\"s3-example\",\n        ...     version=\"2\",\n        ...     tags=[\"aws\"],\n        ...     storage=storage,\n        ...     job_variables=dict(\"env.PREFECT_LOGGING_LEVEL\"=\"DEBUG\"),\n        >>> )\n        >>> deployment.apply()\n\n    \"\"\"\n\n    class Config:\n        json_encoders = {SecretDict: lambda v: v.dict()}\n        validate_assignment = True\n        extra = \"forbid\"\n\n    @property\n    def _editable_fields(self) -> List[str]:\n        editable_fields = [\n            \"name\",\n            \"description\",\n            \"version\",\n            \"work_queue_name\",\n            \"work_pool_name\",\n            \"tags\",\n            \"parameters\",\n            \"schedule\",\n            \"schedules\",\n            \"is_schedule_active\",\n            # The `infra_overrides` field has been renamed to `job_variables`.\n            # We will continue writing it in the YAML file as `infra_overrides`\n            # instead of `job_variables` for better backwards compat, but we'll\n            # accept either `job_variables` or `infra_overrides` when we read\n            # the file.\n            \"infra_overrides\",\n        ]\n\n        # if infrastructure is baked as a pre-saved block, then\n        # editing its fields will not update anything\n        if self.infrastructure._block_document_id:\n            return editable_fields\n        else:\n            return editable_fields + [\"infrastructure\"]\n\n    @property\n    def location(self) -> str:\n        \"\"\"\n        The 'location' that this deployment points to is given by `path` alone\n        in the case of no remote storage, and otherwise by `storage.basepath / path`.\n\n        The underlying flow entrypoint is interpreted relative to this location.\n        \"\"\"\n        location = \"\"\n        if self.storage:\n            location = (\n                self.storage.basepath + \"/\"\n                if not self.storage.basepath.endswith(\"/\")\n                else \"\"\n            )\n        if self.path:\n            location += self.path\n        return location\n\n    @sync_compatible\n    async def to_yaml(self, path: Path) -> None:\n        yaml_dict = self._yaml_dict()\n        schema = self.schema()\n\n        with open(path, \"w\") as f:\n            # write header\n            f.write(\n                \"###\\n### A complete description of a Prefect Deployment for flow\"\n                f\" {self.flow_name!r}\\n###\\n\"\n            )\n\n            # write editable fields\n            for field in self._editable_fields:\n                # write any comments\n                if schema[\"properties\"][field].get(\"yaml_comment\"):\n                    f.write(f\"# {schema['properties'][field]['yaml_comment']}\\n\")\n                # write the field\n                yaml.dump({field: yaml_dict[field]}, f, sort_keys=False)\n\n            # write non-editable fields, excluding `job_variables` because we'll\n            # continue writing it as `infra_overrides` for better backwards compat\n            # 
with the existing file format.\n            f.write(\"\\n###\\n### DO NOT EDIT BELOW THIS LINE\\n###\\n\")\n            yaml.dump(\n                {\n                    k: v\n                    for k, v in yaml_dict.items()\n                    if k not in self._editable_fields and k != \"job_variables\"\n                },\n                f,\n                sort_keys=False,\n            )\n\n    def _yaml_dict(self) -> dict:\n        \"\"\"\n        Returns a YAML-compatible representation of this deployment as a dictionary.\n        \"\"\"\n        # avoids issues with UUIDs showing up in YAML\n        all_fields = json.loads(\n            self.json(\n                exclude={\n                    \"storage\": {\"_filesystem\", \"filesystem\", \"_remote_file_system\"}\n                }\n            )\n        )\n        if all_fields[\"storage\"]:\n            all_fields[\"storage\"][\n                \"_block_type_slug\"\n            ] = self.storage.get_block_type_slug()\n        if all_fields[\"infrastructure\"]:\n            all_fields[\"infrastructure\"][\n                \"_block_type_slug\"\n            ] = self.infrastructure.get_block_type_slug()\n        return all_fields\n\n    @classmethod\n    def _validate_schedule(cls, value):\n        \"\"\"We do not support COUNT-based (# of occurrences) RRule schedules for deployments.\"\"\"\n        if value:\n            rrule_value = getattr(value, \"rrule\", None)\n            if rrule_value and \"COUNT\" in rrule_value.upper():\n                raise ValueError(\n                    \"RRule schedules with `COUNT` are not supported. Please use `UNTIL`\"\n                    \" or the `/deployments/{id}/schedule` endpoint to schedule a fixed\"\n                    \" number of flow runs.\"\n                )\n\n    # top level metadata\n    name: str = Field(..., description=\"The name of the deployment.\")\n    description: Optional[str] = Field(\n        default=None, description=\"An optional description of the deployment.\"\n    )\n    version: Optional[str] = Field(\n        default=None, description=\"An optional version for the deployment.\"\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"One of more tags to apply to this deployment.\",\n    )\n    schedule: Optional[SCHEDULE_TYPES] = Field(default=None)\n    schedules: List[MinimalDeploymentSchedule] = Field(\n        default_factory=list,\n        description=\"The schedules to run this deployment on.\",\n    )\n    is_schedule_active: Optional[bool] = Field(\n        default=None, description=\"Whether or not the schedule is active.\"\n    )\n    flow_name: Optional[str] = Field(default=None, description=\"The name of the flow.\")\n    work_queue_name: Optional[str] = Field(\n        \"default\",\n        description=\"The work queue for the deployment.\",\n        yaml_comment=\"The work queue that will handle this deployment's runs\",\n    )\n    work_pool_name: Optional[str] = Field(\n        default=None, description=\"The work pool for the deployment\"\n    )\n    # flow data\n    parameters: Dict[str, Any] = Field(default_factory=dict)\n    manifest_path: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the flow's manifest file, relative to the chosen storage.\"\n        ),\n    )\n    infrastructure: Infrastructure = Field(default_factory=Process)\n    job_variables: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Overrides to apply to the 
base infrastructure block at runtime.\",\n    )\n    storage: Optional[Block] = Field(\n        None,\n        help=\"The remote storage to use for this workflow.\",\n    )\n    path: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the working directory for the workflow, relative to remote\"\n            \" storage or an absolute path.\"\n        ),\n    )\n    entrypoint: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the entrypoint for the workflow, relative to the `path`.\"\n        ),\n    )\n    parameter_openapi_schema: ParameterSchema = Field(\n        default_factory=ParameterSchema,\n        description=\"The parameter schema of the flow, including defaults.\",\n    )\n    timestamp: datetime = Field(default_factory=partial(pendulum.now, \"UTC\"))\n    triggers: List[Union[DeploymentTriggerTypes, TriggerTypes]] = Field(\n        default_factory=list,\n        description=\"The triggers that should cause this deployment to run.\",\n    )\n    # defaults to None to allow for backwards compatibility\n    enforce_parameter_schema: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Whether or not the Prefect API should enforce the parameter schema for\"\n            \" this deployment.\"\n        ),\n    )\n\n    @validator(\"infrastructure\", pre=True)\n    def validate_infrastructure_capabilities(cls, value):\n        return infrastructure_must_have_capabilities(value)\n\n    @validator(\"storage\", pre=True)\n    def validate_storage(cls, value):\n        return storage_must_have_capabilities(value)\n\n    @validator(\"parameter_openapi_schema\", pre=True)\n    def validate_parameter_openapi_schema(cls, value):\n        return handle_openapi_schema(value)\n\n    @validator(\"triggers\")\n    def validate_triggers(cls, field_value, values):\n        return validate_automation_names(field_value, values)\n\n    @root_validator(pre=True)\n    def validate_schedule(cls, values):\n        return validate_deprecated_schedule_fields(values, logger)\n\n    @root_validator(pre=True)\n    def validate_backwards_compatibility_for_schedule(cls, values):\n        return reconcile_schedules(cls, values)\n\n    @classmethod\n    @sync_compatible\n    async def load_from_yaml(cls, path: str):\n        data = yaml.safe_load(await anyio.Path(path).read_bytes())\n        # load blocks from server to ensure secret values are properly hydrated\n        if data.get(\"storage\"):\n            block_doc_name = data[\"storage\"].get(\"_block_document_name\")\n            # if no doc name, this block is not stored on the server\n            if block_doc_name:\n                block_slug = data[\"storage\"][\"_block_type_slug\"]\n                block = await Block.load(f\"{block_slug}/{block_doc_name}\")\n                data[\"storage\"] = block\n\n        if data.get(\"infrastructure\"):\n            block_doc_name = data[\"infrastructure\"].get(\"_block_document_name\")\n            # if no doc name, this block is not stored on the server\n            if block_doc_name:\n                block_slug = data[\"infrastructure\"][\"_block_type_slug\"]\n                block = await Block.load(f\"{block_slug}/{block_doc_name}\")\n                data[\"infrastructure\"] = block\n\n            return cls(**data)\n\n    @sync_compatible\n    async def load(self) -> bool:\n        \"\"\"\n        Queries the API for a deployment with this name for this flow, and if found,\n        
prepopulates any settings that were not set at initialization.\n\n        Returns a boolean specifying whether a load was successful or not.\n\n        Raises:\n            - ValueError: if both name and flow name are not set\n        \"\"\"\n        if not self.name or not self.flow_name:\n            raise ValueError(\"Both a deployment name and flow name must be provided.\")\n        async with get_client() as client:\n            try:\n                deployment = await client.read_deployment_by_name(\n                    f\"{self.flow_name}/{self.name}\"\n                )\n                if deployment.storage_document_id:\n                    Block._from_block_document(\n                        await client.read_block_document(deployment.storage_document_id)\n                    )\n\n                excluded_fields = self.__fields_set__.union(\n                    {\n                        \"infrastructure\",\n                        \"storage\",\n                        \"timestamp\",\n                        \"triggers\",\n                        \"enforce_parameter_schema\",\n                        \"schedules\",\n                        \"schedule\",\n                        \"is_schedule_active\",\n                    }\n                )\n                for field in set(self.__fields__.keys()) - excluded_fields:\n                    new_value = getattr(deployment, field)\n                    setattr(self, field, new_value)\n\n                if \"schedules\" not in self.__fields_set__:\n                    self.schedules = [\n                        MinimalDeploymentSchedule(\n                            **schedule.dict(include={\"schedule\", \"active\"})\n                        )\n                        for schedule in deployment.schedules\n                    ]\n\n                # The API server generates the \"schedule\" field from the\n                # current list of schedules, so if the user has locally set\n                # \"schedules\" to anything, we should avoid sending \"schedule\"\n                # and let the API server generate a new value if necessary.\n                if \"schedules\" in self.__fields_set__:\n                    self.schedule = None\n                    self.is_schedule_active = None\n                else:\n                    # The user isn't using \"schedules,\" so we should\n                    # populate \"schedule\" and \"is_schedule_active\" from the\n                    # API's version of the deployment, unless the user gave\n                    # us these fields in __init__().\n                    if \"schedule\" not in self.__fields_set__:\n                        self.schedule = deployment.schedule\n                    if \"is_schedule_active\" not in self.__fields_set__:\n                        self.is_schedule_active = deployment.is_schedule_active\n\n                if \"infrastructure\" not in self.__fields_set__:\n                    if deployment.infrastructure_document_id:\n                        self.infrastructure = Block._from_block_document(\n                            await client.read_block_document(\n                                deployment.infrastructure_document_id\n                            )\n                        )\n                if \"storage\" not in self.__fields_set__:\n                    if deployment.storage_document_id:\n                        self.storage = Block._from_block_document(\n                            await client.read_block_document(\n                                
deployment.storage_document_id\n                            )\n                        )\n            except ObjectNotFound:\n                return False\n        return True\n\n    @sync_compatible\n    async def update(self, ignore_none: bool = False, **kwargs):\n        \"\"\"\n        Performs an in-place update with the provided settings.\n\n        Args:\n            ignore_none: if True, all `None` values are ignored when performing the\n                update\n        \"\"\"\n        unknown_keys = set(kwargs.keys()) - set(self.dict().keys())\n        if unknown_keys:\n            raise ValueError(\n                f\"Received unexpected attributes: {', '.join(unknown_keys)}\"\n            )\n        for key, value in kwargs.items():\n            if ignore_none and value is None:\n                continue\n            setattr(self, key, value)\n\n    @sync_compatible\n    async def upload_to_storage(\n        self, storage_block: str = None, ignore_file: str = \".prefectignore\"\n    ) -> Optional[int]:\n        \"\"\"\n        Uploads the workflow this deployment represents using a provided storage block;\n        if no block is provided, defaults to configuring self for local storage.\n\n        Args:\n            storage_block: a string reference a remote storage block slug `$type/$name`;\n                if provided, used to upload the workflow's project\n            ignore_file: an optional path to a `.prefectignore` file that specifies\n                filename patterns to ignore when uploading to remote storage; if not\n                provided, looks for `.prefectignore` in the current working directory\n        \"\"\"\n        file_count = None\n        if storage_block:\n            storage = await Block.load(storage_block)\n\n            if \"put-directory\" not in storage.get_block_capabilities():\n                raise BlockMissingCapabilities(\n                    f\"Storage block {storage!r} missing 'put-directory' capability.\"\n                )\n\n            self.storage = storage\n\n            # upload current directory to storage location\n            file_count = await self.storage.put_directory(\n                ignore_file=ignore_file, to_path=self.path\n            )\n        elif self.storage:\n            if \"put-directory\" not in self.storage.get_block_capabilities():\n                raise BlockMissingCapabilities(\n                    f\"Storage block {self.storage!r} missing 'put-directory'\"\n                    \" capability.\"\n                )\n\n            file_count = await self.storage.put_directory(\n                ignore_file=ignore_file, to_path=self.path\n            )\n\n        # persists storage now in case it contains secret values\n        if self.storage and not self.storage._block_document_id:\n            await self.storage._save(is_anonymous=True)\n\n        return file_count\n\n    @sync_compatible\n    async def apply(\n        self, upload: bool = False, work_queue_concurrency: int = None\n    ) -> UUID:\n        \"\"\"\n        Registers this deployment with the API and returns the deployment's ID.\n\n        Args:\n            upload: if True, deployment files are automatically uploaded to remote\n                storage\n            work_queue_concurrency: If provided, sets the concurrency limit on the\n                deployment's work queue\n        \"\"\"\n        if not self.name or not self.flow_name:\n            raise ValueError(\"Both a deployment name and flow name must be set.\")\n        async with 
get_client() as client:\n            # prep IDs\n            flow_id = await client.create_flow_from_name(self.flow_name)\n\n            infrastructure_document_id = self.infrastructure._block_document_id\n            if not infrastructure_document_id:\n                # if not building off a block, will create an anonymous block\n                self.infrastructure = self.infrastructure.copy()\n                infrastructure_document_id = await self.infrastructure._save(\n                    is_anonymous=True,\n                )\n\n            if upload:\n                await self.upload_to_storage()\n\n            if self.work_queue_name and work_queue_concurrency is not None:\n                try:\n                    res = await client.create_work_queue(\n                        name=self.work_queue_name, work_pool_name=self.work_pool_name\n                    )\n                except ObjectAlreadyExists:\n                    res = await client.read_work_queue_by_name(\n                        name=self.work_queue_name, work_pool_name=self.work_pool_name\n                    )\n                await client.update_work_queue(\n                    res.id, concurrency_limit=work_queue_concurrency\n                )\n\n            if self.schedule:\n                logger.info(\n                    \"Interpreting the deprecated `schedule` field as an entry in \"\n                    \"`schedules`.\"\n                )\n                schedules = [\n                    DeploymentScheduleCreate(\n                        schedule=self.schedule, active=self.is_schedule_active\n                    )\n                ]\n            elif self.schedules:\n                schedules = [\n                    DeploymentScheduleCreate(**schedule.dict())\n                    for schedule in self.schedules\n                ]\n            else:\n                schedules = None\n\n            # we assume storage was already saved\n            storage_document_id = getattr(self.storage, \"_block_document_id\", None)\n            deployment_id = await client.create_deployment(\n                flow_id=flow_id,\n                name=self.name,\n                work_queue_name=self.work_queue_name,\n                work_pool_name=self.work_pool_name,\n                version=self.version,\n                schedules=schedules,\n                is_schedule_active=self.is_schedule_active,\n                parameters=self.parameters,\n                description=self.description,\n                tags=self.tags,\n                manifest_path=self.manifest_path,  # allows for backwards YAML compat\n                path=self.path,\n                entrypoint=self.entrypoint,\n                job_variables=self.job_variables,\n                storage_document_id=storage_document_id,\n                infrastructure_document_id=infrastructure_document_id,\n                parameter_openapi_schema=self.parameter_openapi_schema.dict(),\n                enforce_parameter_schema=self.enforce_parameter_schema,\n            )\n\n            if client.server_type.supports_automations():\n                try:\n                    # The triggers defined in the deployment spec are, essentially,\n                    # anonymous and attempting truly sync them with cloud is not\n                    # feasible. 
Instead, we remove all automations that are owned\n                    # by the deployment, meaning that they were created via this\n                    # mechanism below, and then recreate them.\n                    await client.delete_resource_owned_automations(\n                        f\"prefect.deployment.{deployment_id}\"\n                    )\n                except PrefectHTTPStatusError as e:\n                    if e.response.status_code == 404:\n                        # This Prefect server does not support automations, so we can safely\n                        # ignore this 404 and move on.\n                        return deployment_id\n                    raise e\n\n                for trigger in self.triggers:\n                    trigger.set_deployment_id(deployment_id)\n                    await client.create_automation(trigger.as_automation())\n\n            return deployment_id\n\n    @classmethod\n    @sync_compatible\n    async def build_from_flow(\n        cls,\n        flow: Flow,\n        name: str,\n        output: str = None,\n        skip_upload: bool = False,\n        ignore_file: str = \".prefectignore\",\n        apply: bool = False,\n        load_existing: bool = True,\n        schedules: Optional[FlexibleScheduleList] = None,\n        **kwargs,\n    ) -> \"Deployment\":\n        \"\"\"\n        Configure a deployment for a given flow.\n\n        Args:\n            flow: A flow function to deploy\n            name: A name for the deployment\n            output (optional): if provided, the full deployment specification will be\n                written as a YAML file in the location specified by `output`\n            skip_upload: if True, deployment files are not automatically uploaded to\n                remote storage\n            ignore_file: an optional path to a `.prefectignore` file that specifies\n                filename patterns to ignore when uploading to remote storage; if not\n                provided, looks for `.prefectignore` in the current working directory\n            apply: if True, the deployment is automatically registered with the API\n            load_existing: if True, load any settings that may already be configured for\n                the named deployment server-side (e.g., schedules, default parameter\n                values, etc.)\n            schedules: An optional list of schedules. Each item in the list can be:\n                  - An instance of `MinimalDeploymentSchedule`.\n                  - A dictionary with a `schedule` key, and optionally, an\n                    `active` key. 
The `schedule` key should correspond to a\n                    schedule type, and `active` is a boolean indicating whether\n                    the schedule is active or not.\n                  - An instance of one of the predefined schedule types:\n                    `IntervalSchedule`, `CronSchedule`, or `RRuleSchedule`.\n            **kwargs: other keyword arguments to pass to the constructor for the\n                `Deployment` class\n        \"\"\"\n        if not name:\n            raise ValueError(\"A deployment name must be provided.\")\n\n        # note that `deployment.load` only updates settings that were *not*\n        # provided at initialization\n\n        deployment_args = {\n            \"name\": name,\n            \"flow_name\": flow.name,\n            **kwargs,\n        }\n\n        if schedules is not None:\n            deployment_args[\"schedules\"] = schedules\n\n        deployment = cls(**deployment_args)\n        deployment.flow_name = flow.name\n        if not deployment.entrypoint:\n            ## first see if an entrypoint can be determined\n            flow_file = getattr(flow, \"__globals__\", {}).get(\"__file__\")\n            mod_name = getattr(flow, \"__module__\", None)\n            if not flow_file:\n                if not mod_name:\n                    # todo, check if the file location was manually set already\n                    raise ValueError(\"Could not determine flow's file location.\")\n                module = importlib.import_module(mod_name)\n                flow_file = getattr(module, \"__file__\", None)\n                if not flow_file:\n                    raise ValueError(\"Could not determine flow's file location.\")\n\n            # set entrypoint\n            entry_path = Path(flow_file).absolute().relative_to(Path(\".\").absolute())\n            deployment.entrypoint = f\"{entry_path}:{flow.fn.__name__}\"\n\n        if load_existing:\n            await deployment.load()\n\n        # set a few attributes for this flow object\n        deployment.parameter_openapi_schema = parameter_schema(flow)\n\n        # ensure the ignore file exists\n        if not Path(ignore_file).exists():\n            Path(ignore_file).touch()\n\n        if not deployment.version:\n            deployment.version = flow.version\n        if not deployment.description:\n            deployment.description = flow.description\n\n        # proxy for whether infra is docker-based\n        is_docker_based = hasattr(deployment.infrastructure, \"image\")\n\n        if not deployment.storage and not is_docker_based and not deployment.path:\n            deployment.path = str(Path(\".\").absolute())\n        elif not deployment.storage and is_docker_based:\n            # only update if a path is not already set\n            if not deployment.path:\n                deployment.path = \"/opt/prefect/flows\"\n\n        if not skip_upload:\n            if (\n                deployment.storage\n                and \"put-directory\" in deployment.storage.get_block_capabilities()\n            ):\n                await deployment.upload_to_storage(ignore_file=ignore_file)\n\n        if output:\n            await deployment.to_yaml(output)\n\n        if apply:\n            await deployment.apply()\n\n        return deployment\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.location","title":"location: str property","text":"

The 'location' that this deployment points to is given by path alone in the case of no remote storage, and otherwise by storage.basepath / path.

The underlying flow entrypoint is interpreted relative to this location.

","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.apply","title":"apply async","text":"

Registers this deployment with the API and returns the deployment's ID.
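
For illustration only, a minimal sketch (not part of the source; the flow and deployment names are placeholders) of building a deployment object and registering it with apply:

from prefect import flow\nfrom prefect.deployments import Deployment\n\n\n@flow\ndef my_flow():\n    pass\n\n\ndeployment = Deployment.build_from_flow(flow=my_flow, name=\"example\")\ndeployment_id = deployment.apply()  # registers the deployment and returns its UUID\n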

Parameters:

Name Type Description Default upload bool

if True, deployment files are automatically uploaded to remote storage

False work_queue_concurrency int

If provided, sets the concurrency limit on the deployment's work queue

None Source code in prefect/deployments/deployments.py
@sync_compatible\nasync def apply(\n    self, upload: bool = False, work_queue_concurrency: int = None\n) -> UUID:\n    \"\"\"\n    Registers this deployment with the API and returns the deployment's ID.\n\n    Args:\n        upload: if True, deployment files are automatically uploaded to remote\n            storage\n        work_queue_concurrency: If provided, sets the concurrency limit on the\n            deployment's work queue\n    \"\"\"\n    if not self.name or not self.flow_name:\n        raise ValueError(\"Both a deployment name and flow name must be set.\")\n    async with get_client() as client:\n        # prep IDs\n        flow_id = await client.create_flow_from_name(self.flow_name)\n\n        infrastructure_document_id = self.infrastructure._block_document_id\n        if not infrastructure_document_id:\n            # if not building off a block, will create an anonymous block\n            self.infrastructure = self.infrastructure.copy()\n            infrastructure_document_id = await self.infrastructure._save(\n                is_anonymous=True,\n            )\n\n        if upload:\n            await self.upload_to_storage()\n\n        if self.work_queue_name and work_queue_concurrency is not None:\n            try:\n                res = await client.create_work_queue(\n                    name=self.work_queue_name, work_pool_name=self.work_pool_name\n                )\n            except ObjectAlreadyExists:\n                res = await client.read_work_queue_by_name(\n                    name=self.work_queue_name, work_pool_name=self.work_pool_name\n                )\n            await client.update_work_queue(\n                res.id, concurrency_limit=work_queue_concurrency\n            )\n\n        if self.schedule:\n            logger.info(\n                \"Interpreting the deprecated `schedule` field as an entry in \"\n                \"`schedules`.\"\n            )\n            schedules = [\n                DeploymentScheduleCreate(\n                    schedule=self.schedule, active=self.is_schedule_active\n                )\n            ]\n        elif self.schedules:\n            schedules = [\n                DeploymentScheduleCreate(**schedule.dict())\n                for schedule in self.schedules\n            ]\n        else:\n            schedules = None\n\n        # we assume storage was already saved\n        storage_document_id = getattr(self.storage, \"_block_document_id\", None)\n        deployment_id = await client.create_deployment(\n            flow_id=flow_id,\n            name=self.name,\n            work_queue_name=self.work_queue_name,\n            work_pool_name=self.work_pool_name,\n            version=self.version,\n            schedules=schedules,\n            is_schedule_active=self.is_schedule_active,\n            parameters=self.parameters,\n            description=self.description,\n            tags=self.tags,\n            manifest_path=self.manifest_path,  # allows for backwards YAML compat\n            path=self.path,\n            entrypoint=self.entrypoint,\n            job_variables=self.job_variables,\n            storage_document_id=storage_document_id,\n            infrastructure_document_id=infrastructure_document_id,\n            parameter_openapi_schema=self.parameter_openapi_schema.dict(),\n            enforce_parameter_schema=self.enforce_parameter_schema,\n        )\n\n        if client.server_type.supports_automations():\n            try:\n                # The triggers defined in the deployment spec are, essentially,\n      
          # anonymous and attempting truly sync them with cloud is not\n                # feasible. Instead, we remove all automations that are owned\n                # by the deployment, meaning that they were created via this\n                # mechanism below, and then recreate them.\n                await client.delete_resource_owned_automations(\n                    f\"prefect.deployment.{deployment_id}\"\n                )\n            except PrefectHTTPStatusError as e:\n                if e.response.status_code == 404:\n                    # This Prefect server does not support automations, so we can safely\n                    # ignore this 404 and move on.\n                    return deployment_id\n                raise e\n\n            for trigger in self.triggers:\n                trigger.set_deployment_id(deployment_id)\n                await client.create_automation(trigger.as_automation())\n\n        return deployment_id\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.build_from_flow","title":"build_from_flow async classmethod","text":"

Configure a deployment for a given flow.
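
As a hedged sketch (the names, tags, and parameter values are placeholders, not taken from the source), building and registering a deployment in one call might look like:

from prefect import flow\nfrom prefect.deployments import Deployment\n\n\n@flow\ndef my_flow(name: str = \"world\"):\n    print(f\"hello {name}\")\n\n\ndeployment = Deployment.build_from_flow(\n    flow=my_flow,\n    name=\"example\",\n    version=\"1\",\n    tags=[\"demo\"],\n    parameters={\"name\": \"prefect\"},\n    apply=True,  # register with the API immediately\n)\n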

Parameters:

Name Type Description Default flow Flow

A flow function to deploy

required name str

A name for the deployment

required output optional

if provided, the full deployment specification will be written as a YAML file in the location specified by output

None skip_upload bool

if True, deployment files are not automatically uploaded to remote storage

False ignore_file str

an optional path to a .prefectignore file that specifies filename patterns to ignore when uploading to remote storage; if not provided, looks for .prefectignore in the current working directory

'.prefectignore' apply bool

if True, the deployment is automatically registered with the API

False load_existing bool

if True, load any settings that may already be configured for the named deployment server-side (e.g., schedules, default parameter values, etc.)

True schedules Optional[FlexibleScheduleList]

An optional list of schedules. Each item in the list can be: - An instance of MinimalDeploymentSchedule. - A dictionary with a schedule key, and optionally, an active key. The schedule key should correspond to a schedule type, and active is a boolean indicating whether the schedule is active or not. - An instance of one of the predefined schedule types: IntervalSchedule, CronSchedule, or RRuleSchedule.

None **kwargs

other keyword arguments to pass to the constructor for the Deployment class

{} Source code in prefect/deployments/deployments.py
@classmethod\n@sync_compatible\nasync def build_from_flow(\n    cls,\n    flow: Flow,\n    name: str,\n    output: str = None,\n    skip_upload: bool = False,\n    ignore_file: str = \".prefectignore\",\n    apply: bool = False,\n    load_existing: bool = True,\n    schedules: Optional[FlexibleScheduleList] = None,\n    **kwargs,\n) -> \"Deployment\":\n    \"\"\"\n    Configure a deployment for a given flow.\n\n    Args:\n        flow: A flow function to deploy\n        name: A name for the deployment\n        output (optional): if provided, the full deployment specification will be\n            written as a YAML file in the location specified by `output`\n        skip_upload: if True, deployment files are not automatically uploaded to\n            remote storage\n        ignore_file: an optional path to a `.prefectignore` file that specifies\n            filename patterns to ignore when uploading to remote storage; if not\n            provided, looks for `.prefectignore` in the current working directory\n        apply: if True, the deployment is automatically registered with the API\n        load_existing: if True, load any settings that may already be configured for\n            the named deployment server-side (e.g., schedules, default parameter\n            values, etc.)\n        schedules: An optional list of schedules. Each item in the list can be:\n              - An instance of `MinimalDeploymentSchedule`.\n              - A dictionary with a `schedule` key, and optionally, an\n                `active` key. The `schedule` key should correspond to a\n                schedule type, and `active` is a boolean indicating whether\n                the schedule is active or not.\n              - An instance of one of the predefined schedule types:\n                `IntervalSchedule`, `CronSchedule`, or `RRuleSchedule`.\n        **kwargs: other keyword arguments to pass to the constructor for the\n            `Deployment` class\n    \"\"\"\n    if not name:\n        raise ValueError(\"A deployment name must be provided.\")\n\n    # note that `deployment.load` only updates settings that were *not*\n    # provided at initialization\n\n    deployment_args = {\n        \"name\": name,\n        \"flow_name\": flow.name,\n        **kwargs,\n    }\n\n    if schedules is not None:\n        deployment_args[\"schedules\"] = schedules\n\n    deployment = cls(**deployment_args)\n    deployment.flow_name = flow.name\n    if not deployment.entrypoint:\n        ## first see if an entrypoint can be determined\n        flow_file = getattr(flow, \"__globals__\", {}).get(\"__file__\")\n        mod_name = getattr(flow, \"__module__\", None)\n        if not flow_file:\n            if not mod_name:\n                # todo, check if the file location was manually set already\n                raise ValueError(\"Could not determine flow's file location.\")\n            module = importlib.import_module(mod_name)\n            flow_file = getattr(module, \"__file__\", None)\n            if not flow_file:\n                raise ValueError(\"Could not determine flow's file location.\")\n\n        # set entrypoint\n        entry_path = Path(flow_file).absolute().relative_to(Path(\".\").absolute())\n        deployment.entrypoint = f\"{entry_path}:{flow.fn.__name__}\"\n\n    if load_existing:\n        await deployment.load()\n\n    # set a few attributes for this flow object\n    deployment.parameter_openapi_schema = parameter_schema(flow)\n\n    # ensure the ignore file exists\n    if not Path(ignore_file).exists():\n     
   Path(ignore_file).touch()\n\n    if not deployment.version:\n        deployment.version = flow.version\n    if not deployment.description:\n        deployment.description = flow.description\n\n    # proxy for whether infra is docker-based\n    is_docker_based = hasattr(deployment.infrastructure, \"image\")\n\n    if not deployment.storage and not is_docker_based and not deployment.path:\n        deployment.path = str(Path(\".\").absolute())\n    elif not deployment.storage and is_docker_based:\n        # only update if a path is not already set\n        if not deployment.path:\n            deployment.path = \"/opt/prefect/flows\"\n\n    if not skip_upload:\n        if (\n            deployment.storage\n            and \"put-directory\" in deployment.storage.get_block_capabilities()\n        ):\n            await deployment.upload_to_storage(ignore_file=ignore_file)\n\n    if output:\n        await deployment.to_yaml(output)\n\n    if apply:\n        await deployment.apply()\n\n    return deployment\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.load","title":"load async","text":"

Queries the API for a deployment with this name for this flow, and if found, prepopulates any settings that were not set at initialization.

Returns a boolean specifying whether a load was successful or not.
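
A brief sketch (the names are placeholders) of loading server-side settings into a locally constructed deployment:

from prefect.deployments import Deployment\n\ndeployment = Deployment(name=\"example\", flow_name=\"my-flow\")\nfound = deployment.load()  # True if a matching deployment exists on the server\n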

Raises:

Type Description ValueError

if the deployment name or flow name is not set

Source code in prefect/deployments/deployments.py
@sync_compatible\nasync def load(self) -> bool:\n    \"\"\"\n    Queries the API for a deployment with this name for this flow, and if found,\n    prepopulates any settings that were not set at initialization.\n\n    Returns a boolean specifying whether a load was successful or not.\n\n    Raises:\n        - ValueError: if both name and flow name are not set\n    \"\"\"\n    if not self.name or not self.flow_name:\n        raise ValueError(\"Both a deployment name and flow name must be provided.\")\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(\n                f\"{self.flow_name}/{self.name}\"\n            )\n            if deployment.storage_document_id:\n                Block._from_block_document(\n                    await client.read_block_document(deployment.storage_document_id)\n                )\n\n            excluded_fields = self.__fields_set__.union(\n                {\n                    \"infrastructure\",\n                    \"storage\",\n                    \"timestamp\",\n                    \"triggers\",\n                    \"enforce_parameter_schema\",\n                    \"schedules\",\n                    \"schedule\",\n                    \"is_schedule_active\",\n                }\n            )\n            for field in set(self.__fields__.keys()) - excluded_fields:\n                new_value = getattr(deployment, field)\n                setattr(self, field, new_value)\n\n            if \"schedules\" not in self.__fields_set__:\n                self.schedules = [\n                    MinimalDeploymentSchedule(\n                        **schedule.dict(include={\"schedule\", \"active\"})\n                    )\n                    for schedule in deployment.schedules\n                ]\n\n            # The API server generates the \"schedule\" field from the\n            # current list of schedules, so if the user has locally set\n            # \"schedules\" to anything, we should avoid sending \"schedule\"\n            # and let the API server generate a new value if necessary.\n            if \"schedules\" in self.__fields_set__:\n                self.schedule = None\n                self.is_schedule_active = None\n            else:\n                # The user isn't using \"schedules,\" so we should\n                # populate \"schedule\" and \"is_schedule_active\" from the\n                # API's version of the deployment, unless the user gave\n                # us these fields in __init__().\n                if \"schedule\" not in self.__fields_set__:\n                    self.schedule = deployment.schedule\n                if \"is_schedule_active\" not in self.__fields_set__:\n                    self.is_schedule_active = deployment.is_schedule_active\n\n            if \"infrastructure\" not in self.__fields_set__:\n                if deployment.infrastructure_document_id:\n                    self.infrastructure = Block._from_block_document(\n                        await client.read_block_document(\n                            deployment.infrastructure_document_id\n                        )\n                    )\n            if \"storage\" not in self.__fields_set__:\n                if deployment.storage_document_id:\n                    self.storage = Block._from_block_document(\n                        await client.read_block_document(\n                            deployment.storage_document_id\n                        )\n                    )\n        except ObjectNotFound:\n       
     return False\n    return True\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.update","title":"update async","text":"

Performs an in-place update with the provided settings.
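
For example, assuming deployment is an existing Deployment object (a hedged sketch, not from the source):

deployment.update(\n    parameters={\"name\": \"prefect\"},\n    tags=None,  # skipped because ignore_none=True\n    ignore_none=True,\n)\n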

Parameters:

Name Type Description Default ignore_none bool

if True, all None values are ignored when performing the update

False Source code in prefect/deployments/deployments.py
@sync_compatible\nasync def update(self, ignore_none: bool = False, **kwargs):\n    \"\"\"\n    Performs an in-place update with the provided settings.\n\n    Args:\n        ignore_none: if True, all `None` values are ignored when performing the\n            update\n    \"\"\"\n    unknown_keys = set(kwargs.keys()) - set(self.dict().keys())\n    if unknown_keys:\n        raise ValueError(\n            f\"Received unexpected attributes: {', '.join(unknown_keys)}\"\n        )\n    for key, value in kwargs.items():\n        if ignore_none and value is None:\n            continue\n        setattr(self, key, value)\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.upload_to_storage","title":"upload_to_storage async","text":"

Uploads the workflow this deployment represents using a provided storage block; if no block is provided, defaults to configuring self for local storage.
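
A minimal sketch, assuming an S3 storage block named my-bucket was created and saved beforehand (the slug is a placeholder):

# uploads the current directory using the referenced storage block\nfile_count = deployment.upload_to_storage(storage_block=\"s3/my-bucket\")\n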

Parameters:

Name Type Description Default storage_block str

a string reference to a remote storage block slug $type/$name; if provided, used to upload the workflow's project

None ignore_file str

an optional path to a .prefectignore file that specifies filename patterns to ignore when uploading to remote storage; if not provided, looks for .prefectignore in the current working directory

'.prefectignore' Source code in prefect/deployments/deployments.py
@sync_compatible\nasync def upload_to_storage(\n    self, storage_block: str = None, ignore_file: str = \".prefectignore\"\n) -> Optional[int]:\n    \"\"\"\n    Uploads the workflow this deployment represents using a provided storage block;\n    if no block is provided, defaults to configuring self for local storage.\n\n    Args:\n        storage_block: a string reference a remote storage block slug `$type/$name`;\n            if provided, used to upload the workflow's project\n        ignore_file: an optional path to a `.prefectignore` file that specifies\n            filename patterns to ignore when uploading to remote storage; if not\n            provided, looks for `.prefectignore` in the current working directory\n    \"\"\"\n    file_count = None\n    if storage_block:\n        storage = await Block.load(storage_block)\n\n        if \"put-directory\" not in storage.get_block_capabilities():\n            raise BlockMissingCapabilities(\n                f\"Storage block {storage!r} missing 'put-directory' capability.\"\n            )\n\n        self.storage = storage\n\n        # upload current directory to storage location\n        file_count = await self.storage.put_directory(\n            ignore_file=ignore_file, to_path=self.path\n        )\n    elif self.storage:\n        if \"put-directory\" not in self.storage.get_block_capabilities():\n            raise BlockMissingCapabilities(\n                f\"Storage block {self.storage!r} missing 'put-directory'\"\n                \" capability.\"\n            )\n\n        file_count = await self.storage.put_directory(\n            ignore_file=ignore_file, to_path=self.path\n        )\n\n    # persists storage now in case it contains secret values\n    if self.storage and not self.storage._block_document_id:\n        await self.storage._save(is_anonymous=True)\n\n    return file_count\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.load_deployments_from_yaml","title":"load_deployments_from_yaml","text":"

Load deployments from a yaml file.
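
A short, hedged sketch of calling this deprecated helper (the file path is a placeholder):

from prefect.deployments.deployments import load_deployments_from_yaml\n\n# deprecated since March 2024; returns a PrefectObjectRegistry containing the parsed deployments\nregistry = load_deployments_from_yaml(\"deployment.yaml\")\n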

Source code in prefect/deployments/deployments.py
@deprecated_callable(start_date=\"Mar 2024\")\ndef load_deployments_from_yaml(\n    path: str,\n) -> PrefectObjectRegistry:\n    \"\"\"\n    Load deployments from a yaml file.\n    \"\"\"\n    with open(path, \"r\") as f:\n        contents = f.read()\n\n    # Parse into a yaml tree to retrieve separate documents\n    nodes = yaml.compose_all(contents)\n\n    with PrefectObjectRegistry(capture_failures=True) as registry:\n        for node in nodes:\n            with tmpchdir(path):\n                deployment_dict = yaml.safe_load(yaml.serialize(node))\n                # The return value is not necessary, just instantiating the Deployment\n                # is enough to get it recorded on the registry\n                parse_obj_as(Deployment, deployment_dict)\n\n    return registry\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.load_flow_from_flow_run","title":"load_flow_from_flow_run async","text":"

Load a flow from the location/script provided in a deployment's storage document.

If ignore_storage=True is provided, no pull from remote storage occurs. This flag is largely for testing, and assumes the flow is already available locally.
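
A hedged sketch of how a worker-like process might call this function (flow_run_id is a placeholder, not from the source):

from prefect.client.orchestration import get_client\nfrom prefect.deployments.deployments import load_flow_from_flow_run\n\n\nasync def fetch_flow(flow_run_id):\n    async with get_client() as client:\n        flow_run = await client.read_flow_run(flow_run_id)\n        # ignore_storage=True assumes the flow code is already available locally\n        return await load_flow_from_flow_run(\n            flow_run, client=client, ignore_storage=True\n        )\n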

Source code in prefect/deployments/deployments.py
@inject_client\nasync def load_flow_from_flow_run(\n    flow_run: FlowRun,\n    client: PrefectClient,\n    ignore_storage: bool = False,\n    storage_base_path: Optional[str] = None,\n) -> Flow:\n    \"\"\"\n    Load a flow from the location/script provided in a deployment's storage document.\n\n    If `ignore_storage=True` is provided, no pull from remote storage occurs.  This flag\n    is largely for testing, and assumes the flow is already available locally.\n    \"\"\"\n    deployment = await client.read_deployment(flow_run.deployment_id)\n\n    if deployment.entrypoint is None:\n        raise ValueError(\n            f\"Deployment {deployment.id} does not have an entrypoint and can not be run.\"\n        )\n\n    run_logger = flow_run_logger(flow_run)\n\n    runner_storage_base_path = storage_base_path or os.environ.get(\n        \"PREFECT__STORAGE_BASE_PATH\"\n    )\n\n    # If there's no colon, assume it's a module path\n    if \":\" not in deployment.entrypoint:\n        run_logger.debug(\n            f\"Importing flow code from module path {deployment.entrypoint}\"\n        )\n        flow = await run_sync_in_worker_thread(\n            load_flow_from_entrypoint, deployment.entrypoint\n        )\n        return flow\n\n    if not ignore_storage and not deployment.pull_steps:\n        sys.path.insert(0, \".\")\n        if deployment.storage_document_id:\n            storage_document = await client.read_block_document(\n                deployment.storage_document_id\n            )\n            storage_block = Block._from_block_document(storage_document)\n        else:\n            basepath = deployment.path or Path(deployment.manifest_path).parent\n            if runner_storage_base_path:\n                basepath = str(basepath).replace(\n                    \"$STORAGE_BASE_PATH\", runner_storage_base_path\n                )\n            storage_block = LocalFileSystem(basepath=basepath)\n\n        from_path = (\n            str(deployment.path).replace(\"$STORAGE_BASE_PATH\", runner_storage_base_path)\n            if runner_storage_base_path and deployment.path\n            else deployment.path\n        )\n        run_logger.info(f\"Downloading flow code from storage at {from_path!r}\")\n        await storage_block.get_directory(from_path=from_path, local_path=\".\")\n\n    if deployment.pull_steps:\n        run_logger.debug(f\"Running {len(deployment.pull_steps)} deployment pull steps\")\n        output = await run_steps(deployment.pull_steps)\n        if output.get(\"directory\"):\n            run_logger.debug(f\"Changing working directory to {output['directory']!r}\")\n            os.chdir(output[\"directory\"])\n\n    import_path = relative_path_to_current_platform(deployment.entrypoint)\n    # for backwards compat\n    if deployment.manifest_path:\n        with open(deployment.manifest_path, \"r\") as f:\n            import_path = json.load(f)[\"import_path\"]\n            import_path = (\n                Path(deployment.manifest_path).parent / import_path\n            ).absolute()\n    run_logger.debug(f\"Importing flow code from '{import_path}'\")\n\n    flow = await run_sync_in_worker_thread(load_flow_from_entrypoint, str(import_path))\n\n    return flow\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.run_deployment","title":"run_deployment async","text":"

Create a flow run for a deployment and return it after completion or a timeout.

By default, this function blocks until the flow run finishes executing. Specify a timeout (in seconds) to wait for the flow run to execute before returning flow run metadata. To return immediately, without waiting for the flow run to execute, set timeout=0.

Note that if you specify a timeout, this function will return the flow run metadata whether or not the flow run finished executing.

If called within a flow or task, the flow run this function creates will be linked to the current flow run as a subflow. Disable this behavior by passing as_subflow=False.
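
A minimal, hedged sketch (the deployment name and parameters are placeholders):

from prefect.deployments import run_deployment\n\n# timeout=0 returns immediately with the scheduled flow run's metadata\nflow_run = run_deployment(\n    name=\"my-flow/my-deployment\",\n    parameters={\"name\": \"prefect\"},\n    timeout=0,\n)\n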

Parameters:

Name Type Description Default name Union[str, UUID]

The deployment id or deployment name in the form: \"flow name/deployment name\"

required parameters Optional[dict]

Parameter overrides for this flow run. Merged with the deployment defaults.

None scheduled_time Optional[datetime]

The time to schedule the flow run for; defaults to scheduling the flow run to start now.

None flow_run_name Optional[str]

A name for the created flow run

None timeout Optional[float]

The amount of time to wait (in seconds) for the flow run to complete before returning. Setting timeout to 0 will return the flow run metadata immediately. Setting timeout to None will allow this function to poll indefinitely. Defaults to None.

None poll_interval Optional[float]

The number of seconds between polls

5 tags Optional[Iterable[str]]

A list of tags to associate with this flow run; tags can be used in automations and for organizational purposes.

None idempotency_key Optional[str]

A unique value to recognize retries of the same run, and prevent creating multiple flow runs.

None work_queue_name Optional[str]

The name of a work queue to use for this run. Defaults to the default work queue for the deployment.

None as_subflow Optional[bool]

Whether to link the flow run as a subflow of the current flow or task run.

True job_variables Optional[dict]

A dictionary of dot delimited infrastructure overrides that will be applied at runtime; for example env.CONFIG_KEY=config_value or namespace='prefect'

None Source code in prefect/deployments/deployments.py
@sync_compatible\n@deprecated_parameter(\n    \"infra_overrides\",\n    start_date=\"Apr 2024\",\n    help=\"Use `job_variables` instead.\",\n)\n@inject_client\nasync def run_deployment(\n    name: Union[str, UUID],\n    client: Optional[PrefectClient] = None,\n    parameters: Optional[dict] = None,\n    scheduled_time: Optional[datetime] = None,\n    flow_run_name: Optional[str] = None,\n    timeout: Optional[float] = None,\n    poll_interval: Optional[float] = 5,\n    tags: Optional[Iterable[str]] = None,\n    idempotency_key: Optional[str] = None,\n    work_queue_name: Optional[str] = None,\n    as_subflow: Optional[bool] = True,\n    infra_overrides: Optional[dict] = None,\n    job_variables: Optional[dict] = None,\n) -> FlowRun:\n    \"\"\"\n    Create a flow run for a deployment and return it after completion or a timeout.\n\n    By default, this function blocks until the flow run finishes executing.\n    Specify a timeout (in seconds) to wait for the flow run to execute before\n    returning flow run metadata. To return immediately, without waiting for the\n    flow run to execute, set `timeout=0`.\n\n    Note that if you specify a timeout, this function will return the flow run\n    metadata whether or not the flow run finished executing.\n\n    If called within a flow or task, the flow run this function creates will\n    be linked to the current flow run as a subflow. Disable this behavior by\n    passing `as_subflow=False`.\n\n    Args:\n        name: The deployment id or deployment name in the form:\n            `\"flow name/deployment name\"`\n        parameters: Parameter overrides for this flow run. Merged with the deployment\n            defaults.\n        scheduled_time: The time to schedule the flow run for, defaults to scheduling\n            the flow run to start now.\n        flow_run_name: A name for the created flow run\n        timeout: The amount of time to wait (in seconds) for the flow run to\n            complete before returning. Setting `timeout` to 0 will return the flow\n            run metadata immediately. Setting `timeout` to None will allow this\n            function to poll indefinitely. Defaults to None.\n        poll_interval: The number of seconds between polls\n        tags: A list of tags to associate with this flow run; tags can be used in\n            automations and for organizational purposes.\n        idempotency_key: A unique value to recognize retries of the same run, and\n            prevent creating multiple flow runs.\n        work_queue_name: The name of a work queue to use for this run. 
Defaults to\n            the default work queue for the deployment.\n        as_subflow: Whether to link the flow run as a subflow of the current\n            flow or task run.\n        job_variables: A dictionary of dot delimited infrastructure overrides that\n            will be applied at runtime; for example `env.CONFIG_KEY=config_value` or\n            `namespace='prefect'`\n    \"\"\"\n    if timeout is not None and timeout < 0:\n        raise ValueError(\"`timeout` cannot be negative\")\n\n    if scheduled_time is None:\n        scheduled_time = pendulum.now(\"UTC\")\n\n    jv = handle_deprecated_infra_overrides_parameter(job_variables, infra_overrides)\n\n    parameters = parameters or {}\n\n    deployment_id = None\n\n    if isinstance(name, UUID):\n        deployment_id = name\n    else:\n        try:\n            deployment_id = UUID(name)\n        except ValueError:\n            pass\n\n    if deployment_id:\n        deployment = await client.read_deployment(deployment_id=deployment_id)\n    else:\n        deployment = await client.read_deployment_by_name(name)\n\n    flow_run_ctx = FlowRunContext.get()\n    task_run_ctx = TaskRunContext.get()\n    if as_subflow and (flow_run_ctx or task_run_ctx):\n        # This was called from a flow. Link the flow run as a subflow.\n        from prefect.engine import (\n            Pending,\n            _dynamic_key_for_task_run,\n            collect_task_run_inputs,\n        )\n\n        task_inputs = {\n            k: await collect_task_run_inputs(v) for k, v in parameters.items()\n        }\n\n        if deployment_id:\n            flow = await client.read_flow(deployment.flow_id)\n            deployment_name = f\"{flow.name}/{deployment.name}\"\n        else:\n            deployment_name = name\n\n        # Generate a task in the parent flow run to represent the result of the subflow\n        dummy_task = Task(\n            name=deployment_name,\n            fn=lambda: None,\n            version=deployment.version,\n        )\n        # Override the default task key to include the deployment name\n        dummy_task.task_key = f\"{__name__}.run_deployment.{slugify(deployment_name)}\"\n        flow_run_id = (\n            flow_run_ctx.flow_run.id\n            if flow_run_ctx\n            else task_run_ctx.task_run.flow_run_id\n        )\n        dynamic_key = (\n            _dynamic_key_for_task_run(flow_run_ctx, dummy_task)\n            if flow_run_ctx\n            else task_run_ctx.task_run.dynamic_key\n        )\n        parent_task_run = await client.create_task_run(\n            task=dummy_task,\n            flow_run_id=flow_run_id,\n            dynamic_key=dynamic_key,\n            task_inputs=task_inputs,\n            state=Pending(),\n        )\n        parent_task_run_id = parent_task_run.id\n    else:\n        parent_task_run_id = None\n\n    flow_run = await client.create_flow_run_from_deployment(\n        deployment.id,\n        parameters=parameters,\n        state=Scheduled(scheduled_time=scheduled_time),\n        name=flow_run_name,\n        tags=tags,\n        idempotency_key=idempotency_key,\n        parent_task_run_id=parent_task_run_id,\n        work_queue_name=work_queue_name,\n        job_variables=jv,\n    )\n\n    flow_run_id = flow_run.id\n\n    if timeout == 0:\n        return flow_run\n\n    with anyio.move_on_after(timeout):\n        while True:\n            flow_run = await client.read_flow_run(flow_run_id)\n            flow_state = flow_run.state\n            if flow_state and flow_state.is_final():\n          
      return flow_run\n            await anyio.sleep(poll_interval)\n\n    return flow_run\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/runner/","title":"runner","text":"","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner","title":"prefect.deployments.runner","text":"

Objects for creating and configuring deployments for flows using serve functionality.

Example
import time\nfrom prefect import flow, serve\n\n\n@flow\ndef slow_flow(sleep: int = 60):\n    \"Sleepy flow - sleeps the provided amount of time (in seconds).\"\n    time.sleep(sleep)\n\n\n@flow\ndef fast_flow():\n    \"Fastest flow this side of the Mississippi.\"\n    return\n\n\nif __name__ == \"__main__\":\n    # to_deployment creates RunnerDeployment instances\n    slow_deploy = slow_flow.to_deployment(name=\"sleeper\", interval=45)\n    fast_deploy = fast_flow.to_deployment(name=\"fast\")\n\n    serve(slow_deploy, fast_deploy)\n
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.DeploymentApplyError","title":"DeploymentApplyError","text":"

Bases: RuntimeError

Raised when an error occurs while applying a deployment.

Source code in prefect/deployments/runner.py
class DeploymentApplyError(RuntimeError):\n    \"\"\"\n    Raised when an error occurs while applying a deployment.\n    \"\"\"\n
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.DeploymentImage","title":"DeploymentImage","text":"

Configuration used to build and push a Docker image for a deployment.
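
For illustration, a hedged sketch (registry, repository, and tag are placeholders) of configuring an image to build from a local Dockerfile:

from prefect.deployments.runner import DeploymentImage\n\nimage = DeploymentImage(\n    name=\"registry.example.com/my-org/my-image\",\n    tag=\"dev\",\n    dockerfile=\"Dockerfile\",  # omit to auto-generate a default Dockerfile\n)\n

Such an image is typically passed along when deploying to a Docker-based work pool, for example via flow.deploy(..., image=image).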

Attributes:

Name Type Description name

The name of the Docker image to build, including the registry and repository.

tag

The tag to apply to the built image.

dockerfile

The path to the Dockerfile to use for building the image. If not provided, a default Dockerfile will be generated.

**build_kwargs

Additional keyword arguments to pass to the Docker build request. See the docker-py documentation for more information.

Source code in prefect/deployments/runner.py
class DeploymentImage:\n    \"\"\"\n    Configuration used to build and push a Docker image for a deployment.\n\n    Attributes:\n        name: The name of the Docker image to build, including the registry and\n            repository.\n        tag: The tag to apply to the built image.\n        dockerfile: The path to the Dockerfile to use for building the image. If\n            not provided, a default Dockerfile will be generated.\n        **build_kwargs: Additional keyword arguments to pass to the Docker build request.\n            See the [`docker-py` documentation](https://docker-py.readthedocs.io/en/stable/images.html#docker.models.images.ImageCollection.build)\n            for more information.\n\n    \"\"\"\n\n    def __init__(self, name, tag=None, dockerfile=\"auto\", **build_kwargs):\n        image_name, image_tag = parse_image_tag(name)\n        if tag and image_tag:\n            raise ValueError(\n                f\"Only one tag can be provided - both {image_tag!r} and {tag!r} were\"\n                \" provided as tags.\"\n            )\n        namespace, repository = split_repository_path(image_name)\n        # if the provided image name does not include a namespace (registry URL or user/org name),\n        # use the default namespace\n        if not namespace:\n            namespace = PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE.value()\n        # join the namespace and repository to create the full image name\n        # ignore namespace if it is None\n        self.name = \"/\".join(filter(None, [namespace, repository]))\n        self.tag = tag or image_tag or slugify(pendulum.now(\"utc\").isoformat())\n        self.dockerfile = dockerfile\n        self.build_kwargs = build_kwargs\n\n    @property\n    def reference(self):\n        return f\"{self.name}:{self.tag}\"\n\n    def build(self):\n        full_image_name = self.reference\n        build_kwargs = self.build_kwargs.copy()\n        build_kwargs[\"context\"] = Path.cwd()\n        build_kwargs[\"tag\"] = full_image_name\n        build_kwargs[\"pull\"] = build_kwargs.get(\"pull\", True)\n\n        if self.dockerfile == \"auto\":\n            with generate_default_dockerfile():\n                build_image(**build_kwargs)\n        else:\n            build_kwargs[\"dockerfile\"] = self.dockerfile\n            build_image(**build_kwargs)\n\n    def push(self):\n        with docker_client() as client:\n            events = client.api.push(\n                repository=self.name, tag=self.tag, stream=True, decode=True\n            )\n            for event in events:\n                if \"error\" in event:\n                    raise PushError(event[\"error\"])\n
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.EntrypointType","title":"EntrypointType","text":"

Bases: Enum

Enum representing an entrypoint type.

File path entrypoints are in the format: path/to/file.py:function_name. Module path entrypoints are in the format: path.to.module.function_name.
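
For example, the two members correspond to entrypoint strings like these (the paths are placeholders):

from prefect.deployments.runner import EntrypointType\n\nEntrypointType.FILE_PATH    # e.g. \"flows/etl.py:my_flow\"\nEntrypointType.MODULE_PATH  # e.g. \"my_package.flows.my_flow\"\n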

Source code in prefect/deployments/runner.py
class EntrypointType(enum.Enum):\n    \"\"\"\n    Enum representing a entrypoint type.\n\n    File path entrypoints are in the format: `path/to/file.py:function_name`.\n    Module path entrypoints are in the format: `path.to.module.function_name`.\n    \"\"\"\n\n    FILE_PATH = \"file_path\"\n    MODULE_PATH = \"module_path\"\n
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment","title":"RunnerDeployment","text":"

Bases: BaseModel

A Prefect RunnerDeployment definition, used for specifying and building deployments.
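
A hedged sketch (the name and interval are placeholders) of creating and registering a RunnerDeployment from a flow:

from prefect import flow\n\n\n@flow\ndef my_flow():\n    pass\n\n\n# to_deployment returns a RunnerDeployment without registering it\nrunner_deployment = my_flow.to_deployment(name=\"example\", interval=3600)\ndeployment_id = runner_deployment.apply()  # register with the API\n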

Attributes:

Name Type Description name str

A name for the deployment (required).

version Optional[str]

An optional version for the deployment; defaults to the flow's version

description Optional[str]

An optional description of the deployment; defaults to the flow's description

tags List[str]

An optional list of tags to associate with this deployment; note that tags are used only for organizational purposes. For delegating work to agents, see work_queue_name.

schedule Optional[SCHEDULE_TYPES]

A schedule to run this deployment on, once registered

is_schedule_active Optional[bool]

Whether or not the schedule is active

parameters Dict[str, Any]

A dictionary of parameter values to pass to runs created from this deployment

path Optional[str]

The path to the working directory for the workflow, relative to remote storage or, if stored on a local filesystem, an absolute path

entrypoint Optional[str]

The path to the entrypoint for the workflow, always relative to the path

parameter_openapi_schema ParameterSchema

The parameter schema of the flow, including defaults.

enforce_parameter_schema bool

Whether or not the Prefect API should enforce the parameter schema for this deployment.

work_pool_name Optional[str]

The name of the work pool to use for this deployment.

work_queue_name Optional[str]

The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used.

job_variables Dict[str, Any]

Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.

Source code in prefect/deployments/runner.py
class RunnerDeployment(BaseModel):\n    \"\"\"\n    A Prefect RunnerDeployment definition, used for specifying and building deployments.\n\n    Attributes:\n        name: A name for the deployment (required).\n        version: An optional version for the deployment; defaults to the flow's version\n        description: An optional description of the deployment; defaults to the flow's\n            description\n        tags: An optional list of tags to associate with this deployment; note that tags\n            are used only for organizational purposes. For delegating work to agents,\n            see `work_queue_name`.\n        schedule: A schedule to run this deployment on, once registered\n        is_schedule_active: Whether or not the schedule is active\n        parameters: A dictionary of parameter values to pass to runs created from this\n            deployment\n        path: The path to the working directory for the workflow, relative to remote\n            storage or, if stored on a local filesystem, an absolute path\n        entrypoint: The path to the entrypoint for the workflow, always relative to the\n            `path`\n        parameter_openapi_schema: The parameter schema of the flow, including defaults.\n        enforce_parameter_schema: Whether or not the Prefect API should enforce the\n            parameter schema for this deployment.\n        work_pool_name: The name of the work pool to use for this deployment.\n        work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n            If not provided the default work queue for the work pool will be used.\n        job_variables: Settings used to override the values specified default base job template\n            of the chosen work pool. Refer to the base job template of the chosen work pool for\n            available settings.\n    \"\"\"\n\n    class Config:\n        arbitrary_types_allowed = True\n\n    name: str = Field(..., description=\"The name of the deployment.\")\n    flow_name: Optional[str] = Field(\n        None, description=\"The name of the underlying flow; typically inferred.\"\n    )\n    description: Optional[str] = Field(\n        default=None, description=\"An optional description of the deployment.\"\n    )\n    version: Optional[str] = Field(\n        default=None, description=\"An optional version for the deployment.\"\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"One of more tags to apply to this deployment.\",\n    )\n    schedules: Optional[List[MinimalDeploymentSchedule]] = Field(\n        default=None,\n        description=\"The schedules that should cause this deployment to run.\",\n    )\n    schedule: Optional[SCHEDULE_TYPES] = None\n    paused: Optional[bool] = Field(\n        default=None, description=\"Whether or not the deployment is paused.\"\n    )\n    is_schedule_active: Optional[bool] = Field(\n        default=None, description=\"DEPRECATED: Whether or not the schedule is active.\"\n    )\n    parameters: Dict[str, Any] = Field(default_factory=dict)\n    entrypoint: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the entrypoint for the workflow, relative to the `path`.\"\n        ),\n    )\n    triggers: List[Union[DeploymentTriggerTypes, TriggerTypes]] = Field(\n        default_factory=list,\n        description=\"The triggers that should cause this deployment to run.\",\n    )\n    enforce_parameter_schema: bool = Field(\n        default=False,\n        
description=(\n            \"Whether or not the Prefect API should enforce the parameter schema for\"\n            \" this deployment.\"\n        ),\n    )\n    storage: Optional[RunnerStorage] = Field(\n        default=None,\n        description=(\n            \"The storage object used to retrieve flow code for this deployment.\"\n        ),\n    )\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The name of the work pool to use for this deployment. Only used when\"\n            \" the deployment is registered with a built runner.\"\n        ),\n    )\n    work_queue_name: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The name of the work queue to use for this deployment. Only used when\"\n            \" the deployment is registered with a built runner.\"\n        ),\n    )\n    job_variables: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=(\n            \"Job variables used to override the default values of a work pool\"\n            \" base job template. Only used when the deployment is registered with\"\n            \" a built runner.\"\n        ),\n    )\n    _entrypoint_type: EntrypointType = PrivateAttr(\n        default=EntrypointType.FILE_PATH,\n    )\n    _path: Optional[str] = PrivateAttr(\n        default=None,\n    )\n    _parameter_openapi_schema: ParameterSchema = PrivateAttr(\n        default_factory=ParameterSchema,\n    )\n\n    @property\n    def entrypoint_type(self) -> EntrypointType:\n        return self._entrypoint_type\n\n    @validator(\"triggers\", allow_reuse=True)\n    def validate_automation_names(cls, field_value, values):\n        \"\"\"Ensure that each trigger has a name for its automation if none is provided.\"\"\"\n        return validate_automation_names(field_value, values)\n\n    @root_validator(pre=True)\n    def reconcile_paused(cls, values):\n        return reconcile_paused_deployment(values)\n\n    @root_validator(pre=True)\n    def reconcile_schedules(cls, values):\n        return reconcile_schedules_runner(values)\n\n    @sync_compatible\n    async def apply(\n        self, work_pool_name: Optional[str] = None, image: Optional[str] = None\n    ) -> UUID:\n        \"\"\"\n        Registers this deployment with the API and returns the deployment's ID.\n\n        Args:\n            work_pool_name: The name of the work pool to use for this\n                deployment.\n            image: The registry, name, and tag of the Docker image to\n                use for this deployment. 
Only used when the deployment is\n                deployed to a work pool.\n\n        Returns:\n            The ID of the created deployment.\n        \"\"\"\n\n        work_pool_name = work_pool_name or self.work_pool_name\n\n        if image and not work_pool_name:\n            raise ValueError(\n                \"An image can only be provided when registering a deployment with a\"\n                \" work pool.\"\n            )\n\n        if self.work_queue_name and not work_pool_name:\n            raise ValueError(\n                \"A work queue can only be provided when registering a deployment with\"\n                \" a work pool.\"\n            )\n\n        if self.job_variables and not work_pool_name:\n            raise ValueError(\n                \"Job variables can only be provided when registering a deployment\"\n                \" with a work pool.\"\n            )\n\n        async with get_client() as client:\n            flow_id = await client.create_flow_from_name(self.flow_name)\n\n            create_payload = dict(\n                flow_id=flow_id,\n                name=self.name,\n                work_queue_name=self.work_queue_name,\n                work_pool_name=work_pool_name,\n                version=self.version,\n                paused=self.paused,\n                schedules=self.schedules,\n                parameters=self.parameters,\n                description=self.description,\n                tags=self.tags,\n                path=self._path,\n                entrypoint=self.entrypoint,\n                storage_document_id=None,\n                infrastructure_document_id=None,\n                parameter_openapi_schema=self._parameter_openapi_schema.dict(),\n                enforce_parameter_schema=self.enforce_parameter_schema,\n            )\n\n            if work_pool_name:\n                create_payload[\"job_variables\"] = self.job_variables\n                if image:\n                    create_payload[\"job_variables\"][\"image\"] = image\n                create_payload[\"path\"] = None if self.storage else self._path\n                create_payload[\"pull_steps\"] = (\n                    [self.storage.to_pull_step()] if self.storage else []\n                )\n\n            try:\n                deployment_id = await client.create_deployment(**create_payload)\n            except Exception as exc:\n                if isinstance(exc, PrefectHTTPStatusError):\n                    detail = exc.response.json().get(\"detail\")\n                    if detail:\n                        raise DeploymentApplyError(detail) from exc\n                raise DeploymentApplyError(\n                    f\"Error while applying deployment: {str(exc)}\"\n                ) from exc\n\n            if client.server_type.supports_automations():\n                try:\n                    # The triggers defined in the deployment spec are, essentially,\n                    # anonymous and attempting truly sync them with cloud is not\n                    # feasible. 
Instead, we remove all automations that are owned\n                    # by the deployment, meaning that they were created via this\n                    # mechanism below, and then recreate them.\n                    await client.delete_resource_owned_automations(\n                        f\"prefect.deployment.{deployment_id}\"\n                    )\n                except PrefectHTTPStatusError as e:\n                    if e.response.status_code == 404:\n                        # This Prefect server does not support automations, so we can safely\n                        # ignore this 404 and move on.\n                        return deployment_id\n                    raise e\n\n                for trigger in self.triggers:\n                    trigger.set_deployment_id(deployment_id)\n                    await client.create_automation(trigger.as_automation())\n\n            return deployment_id\n\n    @staticmethod\n    def _construct_deployment_schedules(\n        interval: Optional[\n            Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n        ] = None,\n        anchor_date: Optional[Union[datetime, str]] = None,\n        cron: Optional[Union[Iterable[str], str]] = None,\n        rrule: Optional[Union[Iterable[str], str]] = None,\n        timezone: Optional[str] = None,\n        schedule: Optional[SCHEDULE_TYPES] = None,\n        schedules: Optional[FlexibleScheduleList] = None,\n    ) -> Union[List[MinimalDeploymentSchedule], FlexibleScheduleList]:\n        \"\"\"\n        Construct a schedule or schedules from the provided arguments.\n\n        This method serves as a unified interface for creating deployment\n        schedules. If `schedules` is provided, it is directly returned. If\n        `schedule` is provided, it is encapsulated in a list and returned. If\n        `interval`, `cron`, or `rrule` are provided, they are used to construct\n        schedule objects.\n\n        Args:\n            interval: An interval on which to schedule runs, either as a single\n              value or as a list of values. Accepts numbers (interpreted as\n              seconds) or `timedelta` objects. Each value defines a separate\n              scheduling interval.\n            anchor_date: The anchor date from which interval schedules should\n              start. This applies to all intervals if a list is provided.\n            cron: A cron expression or a list of cron expressions defining cron\n              schedules. Each expression defines a separate cron schedule.\n            rrule: An rrule string or a list of rrule strings for scheduling.\n              Each string defines a separate recurrence rule.\n            timezone: The timezone to apply to the cron or rrule schedules.\n              This is a single value applied uniformly to all schedules.\n            schedule: A singular schedule object, used for advanced scheduling\n              options like specifying a timezone. This is returned as a list\n              containing this single schedule.\n            schedules: A pre-defined list of schedule objects. 
If provided,\n              this list is returned as-is, bypassing other schedule construction\n              logic.\n        \"\"\"\n\n        num_schedules = sum(\n            1\n            for entry in (interval, cron, rrule, schedule, schedules)\n            if entry is not None\n        )\n        if num_schedules > 1:\n            raise ValueError(\n                \"Only one of interval, cron, rrule, schedule, or schedules can be provided.\"\n            )\n        elif num_schedules == 0:\n            return []\n\n        if schedules is not None:\n            return schedules\n        elif interval or cron or rrule:\n            # `interval`, `cron`, and `rrule` can be lists of values. This\n            # block figures out which one is not None and uses that to\n            # construct the list of schedules via `construct_schedule`.\n            parameters = [(\"interval\", interval), (\"cron\", cron), (\"rrule\", rrule)]\n            schedule_type, value = [\n                param for param in parameters if param[1] is not None\n            ][0]\n\n            if not isiterable(value):\n                value = [value]\n\n            return [\n                create_minimal_deployment_schedule(\n                    construct_schedule(\n                        **{\n                            schedule_type: v,\n                            \"timezone\": timezone,\n                            \"anchor_date\": anchor_date,\n                        }\n                    )\n                )\n                for v in value\n            ]\n        else:\n            return [create_minimal_deployment_schedule(schedule)]\n\n    def _set_defaults_from_flow(self, flow: \"Flow\"):\n        self._parameter_openapi_schema = parameter_schema(flow)\n\n        if not self.version:\n            self.version = flow.version\n        if not self.description:\n            self.description = flow.description\n\n    @classmethod\n    def from_flow(\n        cls,\n        flow: \"Flow\",\n        name: str,\n        interval: Optional[\n            Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n        ] = None,\n        cron: Optional[Union[Iterable[str], str]] = None,\n        rrule: Optional[Union[Iterable[str], str]] = None,\n        paused: Optional[bool] = None,\n        schedules: Optional[FlexibleScheduleList] = None,\n        schedule: Optional[SCHEDULE_TYPES] = None,\n        is_schedule_active: Optional[bool] = None,\n        parameters: Optional[dict] = None,\n        triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n        description: Optional[str] = None,\n        tags: Optional[List[str]] = None,\n        version: Optional[str] = None,\n        enforce_parameter_schema: bool = False,\n        work_pool_name: Optional[str] = None,\n        work_queue_name: Optional[str] = None,\n        job_variables: Optional[Dict[str, Any]] = None,\n        entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n    ) -> \"RunnerDeployment\":\n        \"\"\"\n        Configure a deployment for a given flow.\n\n        Args:\n            flow: A flow function to deploy\n            name: A name for the deployment\n            interval: An interval on which to execute the current flow. Accepts either a number\n                or a timedelta object. 
If a number is given, it will be interpreted as seconds.\n            cron: A cron schedule of when to execute runs of this flow.\n            rrule: An rrule schedule of when to execute runs of this flow.\n            paused: Whether or not to set this deployment as paused.\n            schedules: A list of schedule objects defining when to execute runs of this deployment.\n                Used to define multiple schedules or additional scheduling options like `timezone`.\n            schedule: A schedule object of when to execute runs of this flow. Used for\n                advanced scheduling options like timezone.\n            is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n                not provided when creating a deployment, the schedule will be set as active. If not\n                provided when updating a deployment, the schedule's activation will not be changed.\n            triggers: A list of triggers that should kick of a run of this flow.\n            parameters: A dictionary of default parameter values to pass to runs of this flow.\n            description: A description for the created deployment. Defaults to the flow's\n                description if not provided.\n            tags: A list of tags to associate with the created deployment for organizational\n                purposes.\n            version: A version for the created deployment. Defaults to the flow's version.\n            enforce_parameter_schema: Whether or not the Prefect API should enforce the\n                parameter schema for this deployment.\n            work_pool_name: The name of the work pool to use for this deployment.\n            work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n                If not provided the default work queue for the work pool will be used.\n            job_variables: Settings used to override the values specified default base job template\n                of the chosen work pool. Refer to the base job template of the chosen work pool for\n                available settings.\n        \"\"\"\n        constructed_schedules = cls._construct_deployment_schedules(\n            interval=interval,\n            cron=cron,\n            rrule=rrule,\n            schedule=schedule,\n            schedules=schedules,\n        )\n\n        job_variables = job_variables or {}\n\n        deployment = cls(\n            name=Path(name).stem,\n            flow_name=flow.name,\n            schedule=schedule,\n            schedules=constructed_schedules,\n            is_schedule_active=is_schedule_active,\n            paused=paused,\n            tags=tags or [],\n            triggers=triggers or [],\n            parameters=parameters or {},\n            description=description,\n            version=version,\n            enforce_parameter_schema=enforce_parameter_schema,\n            work_pool_name=work_pool_name,\n            work_queue_name=work_queue_name,\n            job_variables=job_variables,\n        )\n\n        if not deployment.entrypoint:\n            no_file_location_error = (\n                \"Flows defined interactively cannot be deployed. 
Check out the\"\n                \" quickstart guide for help getting started:\"\n                \" https://docs.prefect.io/latest/getting-started/quickstart\"\n            )\n            ## first see if an entrypoint can be determined\n            flow_file = getattr(flow, \"__globals__\", {}).get(\"__file__\")\n            mod_name = getattr(flow, \"__module__\", None)\n            if entrypoint_type == EntrypointType.MODULE_PATH:\n                if mod_name:\n                    deployment.entrypoint = f\"{mod_name}.{flow.__name__}\"\n                else:\n                    raise ValueError(\n                        \"Unable to determine module path for provided flow.\"\n                    )\n            else:\n                if not flow_file:\n                    if not mod_name:\n                        raise ValueError(no_file_location_error)\n                    try:\n                        module = importlib.import_module(mod_name)\n                        flow_file = getattr(module, \"__file__\", None)\n                    except ModuleNotFoundError as exc:\n                        if \"__prefect_loader__\" in str(exc):\n                            raise ValueError(\n                                \"Cannot create a RunnerDeployment from a flow that has been\"\n                                \" loaded from an entrypoint. To deploy a flow via\"\n                                \" entrypoint, use RunnerDeployment.from_entrypoint instead.\"\n                            )\n                        raise ValueError(no_file_location_error)\n                    if not flow_file:\n                        raise ValueError(no_file_location_error)\n\n                # set entrypoint\n                entry_path = (\n                    Path(flow_file).absolute().relative_to(Path.cwd().absolute())\n                )\n                deployment.entrypoint = f\"{entry_path}:{flow.fn.__name__}\"\n\n        if entrypoint_type == EntrypointType.FILE_PATH and not deployment._path:\n            deployment._path = \".\"\n\n        deployment._entrypoint_type = entrypoint_type\n\n        cls._set_defaults_from_flow(deployment, flow)\n\n        return deployment\n\n    @classmethod\n    def from_entrypoint(\n        cls,\n        entrypoint: str,\n        name: str,\n        interval: Optional[\n            Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n        ] = None,\n        cron: Optional[Union[Iterable[str], str]] = None,\n        rrule: Optional[Union[Iterable[str], str]] = None,\n        paused: Optional[bool] = None,\n        schedules: Optional[FlexibleScheduleList] = None,\n        schedule: Optional[SCHEDULE_TYPES] = None,\n        is_schedule_active: Optional[bool] = None,\n        parameters: Optional[dict] = None,\n        triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n        description: Optional[str] = None,\n        tags: Optional[List[str]] = None,\n        version: Optional[str] = None,\n        enforce_parameter_schema: bool = False,\n        work_pool_name: Optional[str] = None,\n        work_queue_name: Optional[str] = None,\n        job_variables: Optional[Dict[str, Any]] = None,\n    ) -> \"RunnerDeployment\":\n        \"\"\"\n        Configure a deployment for a given flow located at a given entrypoint.\n\n        Args:\n            entrypoint:  The path to a file containing a flow and the name of the flow function in\n                the format `./path/to/file.py:flow_func_name`.\n            name: A name for the 
deployment\n            interval: An interval on which to execute the current flow. Accepts either a number\n                or a timedelta object. If a number is given, it will be interpreted as seconds.\n            cron: A cron schedule of when to execute runs of this flow.\n            rrule: An rrule schedule of when to execute runs of this flow.\n            paused: Whether or not to set this deployment as paused.\n            schedules: A list of schedule objects defining when to execute runs of this deployment.\n                Used to define multiple schedules or additional scheduling options like `timezone`.\n            schedule: A schedule object of when to execute runs of this flow. Used for\n                advanced scheduling options like timezone.\n            is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n                not provided when creating a deployment, the schedule will be set as active. If not\n                provided when updating a deployment, the schedule's activation will not be changed.\n            triggers: A list of triggers that should kick of a run of this flow.\n            parameters: A dictionary of default parameter values to pass to runs of this flow.\n            description: A description for the created deployment. Defaults to the flow's\n                description if not provided.\n            tags: A list of tags to associate with the created deployment for organizational\n                purposes.\n            version: A version for the created deployment. Defaults to the flow's version.\n            enforce_parameter_schema: Whether or not the Prefect API should enforce the\n                parameter schema for this deployment.\n            work_pool_name: The name of the work pool to use for this deployment.\n            work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n                If not provided the default work queue for the work pool will be used.\n            job_variables: Settings used to override the values specified default base job template\n                of the chosen work pool. 
Refer to the base job template of the chosen work pool for\n                available settings.\n        \"\"\"\n        from prefect.flows import load_flow_from_entrypoint\n\n        job_variables = job_variables or {}\n        flow = load_flow_from_entrypoint(entrypoint)\n\n        constructed_schedules = cls._construct_deployment_schedules(\n            interval=interval,\n            cron=cron,\n            rrule=rrule,\n            schedule=schedule,\n            schedules=schedules,\n        )\n\n        deployment = cls(\n            name=Path(name).stem,\n            flow_name=flow.name,\n            schedule=schedule,\n            schedules=constructed_schedules,\n            paused=paused,\n            is_schedule_active=is_schedule_active,\n            tags=tags or [],\n            triggers=triggers or [],\n            parameters=parameters or {},\n            description=description,\n            version=version,\n            entrypoint=entrypoint,\n            enforce_parameter_schema=enforce_parameter_schema,\n            work_pool_name=work_pool_name,\n            work_queue_name=work_queue_name,\n            job_variables=job_variables,\n        )\n        deployment._path = str(Path.cwd())\n\n        cls._set_defaults_from_flow(deployment, flow)\n\n        return deployment\n\n    @classmethod\n    @sync_compatible\n    async def from_storage(\n        cls,\n        storage: RunnerStorage,\n        entrypoint: str,\n        name: str,\n        interval: Optional[\n            Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n        ] = None,\n        cron: Optional[Union[Iterable[str], str]] = None,\n        rrule: Optional[Union[Iterable[str], str]] = None,\n        paused: Optional[bool] = None,\n        schedules: Optional[FlexibleScheduleList] = None,\n        schedule: Optional[SCHEDULE_TYPES] = None,\n        is_schedule_active: Optional[bool] = None,\n        parameters: Optional[dict] = None,\n        triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n        description: Optional[str] = None,\n        tags: Optional[List[str]] = None,\n        version: Optional[str] = None,\n        enforce_parameter_schema: bool = False,\n        work_pool_name: Optional[str] = None,\n        work_queue_name: Optional[str] = None,\n        job_variables: Optional[Dict[str, Any]] = None,\n    ):\n        \"\"\"\n        Create a RunnerDeployment from a flow located at a given entrypoint and stored in a\n        local storage location.\n\n        Args:\n            entrypoint:  The path to a file containing a flow and the name of the flow function in\n                the format `./path/to/file.py:flow_func_name`.\n            name: A name for the deployment\n            storage: A storage object to use for retrieving flow code. If not provided, a\n                URL must be provided.\n            interval: An interval on which to execute the current flow. Accepts either a number\n                or a timedelta object. If a number is given, it will be interpreted as seconds.\n            cron: A cron schedule of when to execute runs of this flow.\n            rrule: An rrule schedule of when to execute runs of this flow.\n            schedule: A schedule object of when to execute runs of this flow. Used for\n                advanced scheduling options like timezone.\n            is_schedule_active: Whether or not to set the schedule for this deployment as active. 
If\n                not provided when creating a deployment, the schedule will be set as active. If not\n                provided when updating a deployment, the schedule's activation will not be changed.\n            triggers: A list of triggers that should kick of a run of this flow.\n            parameters: A dictionary of default parameter values to pass to runs of this flow.\n            description: A description for the created deployment. Defaults to the flow's\n                description if not provided.\n            tags: A list of tags to associate with the created deployment for organizational\n                purposes.\n            version: A version for the created deployment. Defaults to the flow's version.\n            enforce_parameter_schema: Whether or not the Prefect API should enforce the\n                parameter schema for this deployment.\n            work_pool_name: The name of the work pool to use for this deployment.\n            work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n                If not provided the default work queue for the work pool will be used.\n            job_variables: Settings used to override the values specified default base job template\n                of the chosen work pool. Refer to the base job template of the chosen work pool for\n                available settings.\n        \"\"\"\n        from prefect.flows import load_flow_from_entrypoint\n\n        constructed_schedules = cls._construct_deployment_schedules(\n            interval=interval,\n            cron=cron,\n            rrule=rrule,\n            schedule=schedule,\n            schedules=schedules,\n        )\n\n        job_variables = job_variables or {}\n\n        with tempfile.TemporaryDirectory() as tmpdir:\n            storage.set_base_path(Path(tmpdir))\n            await storage.pull_code()\n\n            full_entrypoint = str(storage.destination / entrypoint)\n            flow = await from_async.wait_for_call_in_new_thread(\n                create_call(load_flow_from_entrypoint, full_entrypoint)\n            )\n\n        deployment = cls(\n            name=Path(name).stem,\n            flow_name=flow.name,\n            schedule=schedule,\n            schedules=constructed_schedules,\n            paused=paused,\n            is_schedule_active=is_schedule_active,\n            tags=tags or [],\n            triggers=triggers or [],\n            parameters=parameters or {},\n            description=description,\n            version=version,\n            entrypoint=entrypoint,\n            enforce_parameter_schema=enforce_parameter_schema,\n            storage=storage,\n            work_pool_name=work_pool_name,\n            work_queue_name=work_queue_name,\n            job_variables=job_variables,\n        )\n        deployment._path = str(storage.destination).replace(\n            tmpdir, \"$STORAGE_BASE_PATH\"\n        )\n\n        cls._set_defaults_from_flow(deployment, flow)\n\n        return deployment\n
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment.apply","title":"apply async","text":"

Registers this deployment with the API and returns the deployment's ID.

Parameters:

- work_pool_name (Optional[str], default: None): The name of the work pool to use for this deployment.
- image (Optional[str], default: None): The registry, name, and tag of the Docker image to use for this deployment. Only used when the deployment is deployed to a work pool.

Returns:

- UUID: The ID of the created deployment.

Source code in prefect/deployments/runner.py
@sync_compatible\nasync def apply(\n    self, work_pool_name: Optional[str] = None, image: Optional[str] = None\n) -> UUID:\n    \"\"\"\n    Registers this deployment with the API and returns the deployment's ID.\n\n    Args:\n        work_pool_name: The name of the work pool to use for this\n            deployment.\n        image: The registry, name, and tag of the Docker image to\n            use for this deployment. Only used when the deployment is\n            deployed to a work pool.\n\n    Returns:\n        The ID of the created deployment.\n    \"\"\"\n\n    work_pool_name = work_pool_name or self.work_pool_name\n\n    if image and not work_pool_name:\n        raise ValueError(\n            \"An image can only be provided when registering a deployment with a\"\n            \" work pool.\"\n        )\n\n    if self.work_queue_name and not work_pool_name:\n        raise ValueError(\n            \"A work queue can only be provided when registering a deployment with\"\n            \" a work pool.\"\n        )\n\n    if self.job_variables and not work_pool_name:\n        raise ValueError(\n            \"Job variables can only be provided when registering a deployment\"\n            \" with a work pool.\"\n        )\n\n    async with get_client() as client:\n        flow_id = await client.create_flow_from_name(self.flow_name)\n\n        create_payload = dict(\n            flow_id=flow_id,\n            name=self.name,\n            work_queue_name=self.work_queue_name,\n            work_pool_name=work_pool_name,\n            version=self.version,\n            paused=self.paused,\n            schedules=self.schedules,\n            parameters=self.parameters,\n            description=self.description,\n            tags=self.tags,\n            path=self._path,\n            entrypoint=self.entrypoint,\n            storage_document_id=None,\n            infrastructure_document_id=None,\n            parameter_openapi_schema=self._parameter_openapi_schema.dict(),\n            enforce_parameter_schema=self.enforce_parameter_schema,\n        )\n\n        if work_pool_name:\n            create_payload[\"job_variables\"] = self.job_variables\n            if image:\n                create_payload[\"job_variables\"][\"image\"] = image\n            create_payload[\"path\"] = None if self.storage else self._path\n            create_payload[\"pull_steps\"] = (\n                [self.storage.to_pull_step()] if self.storage else []\n            )\n\n        try:\n            deployment_id = await client.create_deployment(**create_payload)\n        except Exception as exc:\n            if isinstance(exc, PrefectHTTPStatusError):\n                detail = exc.response.json().get(\"detail\")\n                if detail:\n                    raise DeploymentApplyError(detail) from exc\n            raise DeploymentApplyError(\n                f\"Error while applying deployment: {str(exc)}\"\n            ) from exc\n\n        if client.server_type.supports_automations():\n            try:\n                # The triggers defined in the deployment spec are, essentially,\n                # anonymous and attempting truly sync them with cloud is not\n                # feasible. 
Instead, we remove all automations that are owned\n                # by the deployment, meaning that they were created via this\n                # mechanism below, and then recreate them.\n                await client.delete_resource_owned_automations(\n                    f\"prefect.deployment.{deployment_id}\"\n                )\n            except PrefectHTTPStatusError as e:\n                if e.response.status_code == 404:\n                    # This Prefect server does not support automations, so we can safely\n                    # ignore this 404 and move on.\n                    return deployment_id\n                raise e\n\n            for trigger in self.triggers:\n                trigger.set_deployment_id(deployment_id)\n                await client.create_automation(trigger.as_automation())\n\n        return deployment_id\n
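A minimal sketch of registering a deployment with apply. The flow, deployment name, and my-docker-pool work pool below are placeholders rather than values taken from this module:

```python
from prefect import flow
from prefect.deployments.runner import RunnerDeployment


@flow
def hello(name: str = "world"):
    print(f"Hello, {name}!")


if __name__ == "__main__":
    # Build a deployment from the flow, then register it with the API.
    deployment = RunnerDeployment.from_flow(
        flow=hello,
        name="example-deployment",
        cron="0 * * * *",
    )
    deployment_id = deployment.apply(work_pool_name="my-docker-pool")
    print(f"Created deployment {deployment_id}")
```

Because apply is sync_compatible, it can be called directly from synchronous code or awaited from asynchronous code.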
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment.from_entrypoint","title":"from_entrypoint classmethod","text":"

Configure a deployment for a given flow located at a given entrypoint.

Parameters:

- entrypoint (str, required): The path to a file containing a flow and the name of the flow function in the format ./path/to/file.py:flow_func_name.
- name (str, required): A name for the deployment.
- interval (Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]], default: None): An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.
- cron (Optional[Union[Iterable[str], str]], default: None): A cron schedule of when to execute runs of this flow.
- rrule (Optional[Union[Iterable[str], str]], default: None): An rrule schedule of when to execute runs of this flow.
- paused (Optional[bool], default: None): Whether or not to set this deployment as paused.
- schedules (Optional[FlexibleScheduleList], default: None): A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like timezone.
- schedule (Optional[SCHEDULE_TYPES], default: None): A schedule object of when to execute runs of this flow. Used for advanced scheduling options like timezone.
- is_schedule_active (Optional[bool], default: None): Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.
- triggers (Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]], default: None): A list of triggers that should kick off a run of this flow.
- parameters (Optional[dict], default: None): A dictionary of default parameter values to pass to runs of this flow.
- description (Optional[str], default: None): A description for the created deployment. Defaults to the flow's description if not provided.
- tags (Optional[List[str]], default: None): A list of tags to associate with the created deployment for organizational purposes.
- version (Optional[str], default: None): A version for the created deployment. Defaults to the flow's version.
- enforce_parameter_schema (bool, default: False): Whether or not the Prefect API should enforce the parameter schema for this deployment.
- work_pool_name (Optional[str], default: None): The name of the work pool to use for this deployment.
- work_queue_name (Optional[str], default: None): The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used.
- job_variables (Optional[Dict[str, Any]], default: None): Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.

Source code in prefect/deployments/runner.py
@classmethod\ndef from_entrypoint(\n    cls,\n    entrypoint: str,\n    name: str,\n    interval: Optional[\n        Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n    ] = None,\n    cron: Optional[Union[Iterable[str], str]] = None,\n    rrule: Optional[Union[Iterable[str], str]] = None,\n    paused: Optional[bool] = None,\n    schedules: Optional[FlexibleScheduleList] = None,\n    schedule: Optional[SCHEDULE_TYPES] = None,\n    is_schedule_active: Optional[bool] = None,\n    parameters: Optional[dict] = None,\n    triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n    description: Optional[str] = None,\n    tags: Optional[List[str]] = None,\n    version: Optional[str] = None,\n    enforce_parameter_schema: bool = False,\n    work_pool_name: Optional[str] = None,\n    work_queue_name: Optional[str] = None,\n    job_variables: Optional[Dict[str, Any]] = None,\n) -> \"RunnerDeployment\":\n    \"\"\"\n    Configure a deployment for a given flow located at a given entrypoint.\n\n    Args:\n        entrypoint:  The path to a file containing a flow and the name of the flow function in\n            the format `./path/to/file.py:flow_func_name`.\n        name: A name for the deployment\n        interval: An interval on which to execute the current flow. Accepts either a number\n            or a timedelta object. If a number is given, it will be interpreted as seconds.\n        cron: A cron schedule of when to execute runs of this flow.\n        rrule: An rrule schedule of when to execute runs of this flow.\n        paused: Whether or not to set this deployment as paused.\n        schedules: A list of schedule objects defining when to execute runs of this deployment.\n            Used to define multiple schedules or additional scheduling options like `timezone`.\n        schedule: A schedule object of when to execute runs of this flow. Used for\n            advanced scheduling options like timezone.\n        is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n            not provided when creating a deployment, the schedule will be set as active. If not\n            provided when updating a deployment, the schedule's activation will not be changed.\n        triggers: A list of triggers that should kick of a run of this flow.\n        parameters: A dictionary of default parameter values to pass to runs of this flow.\n        description: A description for the created deployment. Defaults to the flow's\n            description if not provided.\n        tags: A list of tags to associate with the created deployment for organizational\n            purposes.\n        version: A version for the created deployment. Defaults to the flow's version.\n        enforce_parameter_schema: Whether or not the Prefect API should enforce the\n            parameter schema for this deployment.\n        work_pool_name: The name of the work pool to use for this deployment.\n        work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n            If not provided the default work queue for the work pool will be used.\n        job_variables: Settings used to override the values specified default base job template\n            of the chosen work pool. 
Refer to the base job template of the chosen work pool for\n            available settings.\n    \"\"\"\n    from prefect.flows import load_flow_from_entrypoint\n\n    job_variables = job_variables or {}\n    flow = load_flow_from_entrypoint(entrypoint)\n\n    constructed_schedules = cls._construct_deployment_schedules(\n        interval=interval,\n        cron=cron,\n        rrule=rrule,\n        schedule=schedule,\n        schedules=schedules,\n    )\n\n    deployment = cls(\n        name=Path(name).stem,\n        flow_name=flow.name,\n        schedule=schedule,\n        schedules=constructed_schedules,\n        paused=paused,\n        is_schedule_active=is_schedule_active,\n        tags=tags or [],\n        triggers=triggers or [],\n        parameters=parameters or {},\n        description=description,\n        version=version,\n        entrypoint=entrypoint,\n        enforce_parameter_schema=enforce_parameter_schema,\n        work_pool_name=work_pool_name,\n        work_queue_name=work_queue_name,\n        job_variables=job_variables,\n    )\n    deployment._path = str(Path.cwd())\n\n    cls._set_defaults_from_flow(deployment, flow)\n\n    return deployment\n
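A minimal sketch of from_entrypoint, assuming a hypothetical ./flows.py file that defines a flow function named my_flow:

```python
from prefect.deployments.runner import RunnerDeployment

# Build the deployment from an entrypoint string instead of an imported
# flow object; the flow is loaded from ./flows.py to introspect its schema.
deployment = RunnerDeployment.from_entrypoint(
    entrypoint="./flows.py:my_flow",
    name="entrypoint-deployment",
    interval=3600,  # every hour; plain numbers are interpreted as seconds
    tags=["example"],
)
deployment_id = deployment.apply()
```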
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment.from_flow","title":"from_flow classmethod","text":"

Configure a deployment for a given flow.

Parameters:

- flow (Flow, required): A flow function to deploy.
- name (str, required): A name for the deployment.
- interval (Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]], default: None): An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.
- cron (Optional[Union[Iterable[str], str]], default: None): A cron schedule of when to execute runs of this flow.
- rrule (Optional[Union[Iterable[str], str]], default: None): An rrule schedule of when to execute runs of this flow.
- paused (Optional[bool], default: None): Whether or not to set this deployment as paused.
- schedules (Optional[FlexibleScheduleList], default: None): A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like timezone.
- schedule (Optional[SCHEDULE_TYPES], default: None): A schedule object of when to execute runs of this flow. Used for advanced scheduling options like timezone.
- is_schedule_active (Optional[bool], default: None): Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.
- triggers (Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]], default: None): A list of triggers that should kick off a run of this flow.
- parameters (Optional[dict], default: None): A dictionary of default parameter values to pass to runs of this flow.
- description (Optional[str], default: None): A description for the created deployment. Defaults to the flow's description if not provided.
- tags (Optional[List[str]], default: None): A list of tags to associate with the created deployment for organizational purposes.
- version (Optional[str], default: None): A version for the created deployment. Defaults to the flow's version.
- enforce_parameter_schema (bool, default: False): Whether or not the Prefect API should enforce the parameter schema for this deployment.
- work_pool_name (Optional[str], default: None): The name of the work pool to use for this deployment.
- work_queue_name (Optional[str], default: None): The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used.
- job_variables (Optional[Dict[str, Any]], default: None): Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.

Source code in prefect/deployments/runner.py
@classmethod\ndef from_flow(\n    cls,\n    flow: \"Flow\",\n    name: str,\n    interval: Optional[\n        Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n    ] = None,\n    cron: Optional[Union[Iterable[str], str]] = None,\n    rrule: Optional[Union[Iterable[str], str]] = None,\n    paused: Optional[bool] = None,\n    schedules: Optional[FlexibleScheduleList] = None,\n    schedule: Optional[SCHEDULE_TYPES] = None,\n    is_schedule_active: Optional[bool] = None,\n    parameters: Optional[dict] = None,\n    triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n    description: Optional[str] = None,\n    tags: Optional[List[str]] = None,\n    version: Optional[str] = None,\n    enforce_parameter_schema: bool = False,\n    work_pool_name: Optional[str] = None,\n    work_queue_name: Optional[str] = None,\n    job_variables: Optional[Dict[str, Any]] = None,\n    entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n) -> \"RunnerDeployment\":\n    \"\"\"\n    Configure a deployment for a given flow.\n\n    Args:\n        flow: A flow function to deploy\n        name: A name for the deployment\n        interval: An interval on which to execute the current flow. Accepts either a number\n            or a timedelta object. If a number is given, it will be interpreted as seconds.\n        cron: A cron schedule of when to execute runs of this flow.\n        rrule: An rrule schedule of when to execute runs of this flow.\n        paused: Whether or not to set this deployment as paused.\n        schedules: A list of schedule objects defining when to execute runs of this deployment.\n            Used to define multiple schedules or additional scheduling options like `timezone`.\n        schedule: A schedule object of when to execute runs of this flow. Used for\n            advanced scheduling options like timezone.\n        is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n            not provided when creating a deployment, the schedule will be set as active. If not\n            provided when updating a deployment, the schedule's activation will not be changed.\n        triggers: A list of triggers that should kick of a run of this flow.\n        parameters: A dictionary of default parameter values to pass to runs of this flow.\n        description: A description for the created deployment. Defaults to the flow's\n            description if not provided.\n        tags: A list of tags to associate with the created deployment for organizational\n            purposes.\n        version: A version for the created deployment. Defaults to the flow's version.\n        enforce_parameter_schema: Whether or not the Prefect API should enforce the\n            parameter schema for this deployment.\n        work_pool_name: The name of the work pool to use for this deployment.\n        work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n            If not provided the default work queue for the work pool will be used.\n        job_variables: Settings used to override the values specified default base job template\n            of the chosen work pool. 
Refer to the base job template of the chosen work pool for\n            available settings.\n    \"\"\"\n    constructed_schedules = cls._construct_deployment_schedules(\n        interval=interval,\n        cron=cron,\n        rrule=rrule,\n        schedule=schedule,\n        schedules=schedules,\n    )\n\n    job_variables = job_variables or {}\n\n    deployment = cls(\n        name=Path(name).stem,\n        flow_name=flow.name,\n        schedule=schedule,\n        schedules=constructed_schedules,\n        is_schedule_active=is_schedule_active,\n        paused=paused,\n        tags=tags or [],\n        triggers=triggers or [],\n        parameters=parameters or {},\n        description=description,\n        version=version,\n        enforce_parameter_schema=enforce_parameter_schema,\n        work_pool_name=work_pool_name,\n        work_queue_name=work_queue_name,\n        job_variables=job_variables,\n    )\n\n    if not deployment.entrypoint:\n        no_file_location_error = (\n            \"Flows defined interactively cannot be deployed. Check out the\"\n            \" quickstart guide for help getting started:\"\n            \" https://docs.prefect.io/latest/getting-started/quickstart\"\n        )\n        ## first see if an entrypoint can be determined\n        flow_file = getattr(flow, \"__globals__\", {}).get(\"__file__\")\n        mod_name = getattr(flow, \"__module__\", None)\n        if entrypoint_type == EntrypointType.MODULE_PATH:\n            if mod_name:\n                deployment.entrypoint = f\"{mod_name}.{flow.__name__}\"\n            else:\n                raise ValueError(\n                    \"Unable to determine module path for provided flow.\"\n                )\n        else:\n            if not flow_file:\n                if not mod_name:\n                    raise ValueError(no_file_location_error)\n                try:\n                    module = importlib.import_module(mod_name)\n                    flow_file = getattr(module, \"__file__\", None)\n                except ModuleNotFoundError as exc:\n                    if \"__prefect_loader__\" in str(exc):\n                        raise ValueError(\n                            \"Cannot create a RunnerDeployment from a flow that has been\"\n                            \" loaded from an entrypoint. To deploy a flow via\"\n                            \" entrypoint, use RunnerDeployment.from_entrypoint instead.\"\n                        )\n                    raise ValueError(no_file_location_error)\n                if not flow_file:\n                    raise ValueError(no_file_location_error)\n\n            # set entrypoint\n            entry_path = (\n                Path(flow_file).absolute().relative_to(Path.cwd().absolute())\n            )\n            deployment.entrypoint = f\"{entry_path}:{flow.fn.__name__}\"\n\n    if entrypoint_type == EntrypointType.FILE_PATH and not deployment._path:\n        deployment._path = \".\"\n\n    deployment._entrypoint_type = entrypoint_type\n\n    cls._set_defaults_from_flow(deployment, flow)\n\n    return deployment\n
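A minimal sketch of from_flow with a list of interval schedules. The flow and values here are illustrative only; plain numbers are interpreted as seconds and timedelta objects are accepted:

```python
from datetime import timedelta

from prefect import flow
from prefect.deployments.runner import RunnerDeployment


@flow
def nightly_etl(source: str = "s3://example-bucket"):
    print(f"Extracting from {source}")


# Two interval schedules: hourly (given in seconds) and every six hours.
deployment = RunnerDeployment.from_flow(
    flow=nightly_etl,
    name="nightly-etl",
    interval=[3600, timedelta(hours=6)],
    tags=["etl"],
)
deployment_id = deployment.apply()
```

Note that flows defined interactively (for example, in a REPL) cannot be deployed this way; the flow must live in a file or importable module so an entrypoint can be determined.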
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment.from_storage","title":"from_storage async classmethod","text":"

Create a RunnerDeployment from a flow located at a given entrypoint and stored in a local storage location.

Parameters:

- entrypoint (str, required): The path to a file containing a flow and the name of the flow function in the format ./path/to/file.py:flow_func_name.
- name (str, required): A name for the deployment.
- storage (RunnerStorage, required): A storage object to use for retrieving flow code. If not provided, a URL must be provided.
- interval (Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]], default: None): An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.
- cron (Optional[Union[Iterable[str], str]], default: None): A cron schedule of when to execute runs of this flow.
- rrule (Optional[Union[Iterable[str], str]], default: None): An rrule schedule of when to execute runs of this flow.
- schedule (Optional[SCHEDULE_TYPES], default: None): A schedule object of when to execute runs of this flow. Used for advanced scheduling options like timezone.
- is_schedule_active (Optional[bool], default: None): Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.
- triggers (Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]], default: None): A list of triggers that should kick off a run of this flow.
- parameters (Optional[dict], default: None): A dictionary of default parameter values to pass to runs of this flow.
- description (Optional[str], default: None): A description for the created deployment. Defaults to the flow's description if not provided.
- tags (Optional[List[str]], default: None): A list of tags to associate with the created deployment for organizational purposes.
- version (Optional[str], default: None): A version for the created deployment. Defaults to the flow's version.
- enforce_parameter_schema (bool, default: False): Whether or not the Prefect API should enforce the parameter schema for this deployment.
- work_pool_name (Optional[str], default: None): The name of the work pool to use for this deployment.
- work_queue_name (Optional[str], default: None): The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used.
- job_variables (Optional[Dict[str, Any]], default: None): Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.

Source code in prefect/deployments/runner.py
@classmethod\n@sync_compatible\nasync def from_storage(\n    cls,\n    storage: RunnerStorage,\n    entrypoint: str,\n    name: str,\n    interval: Optional[\n        Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n    ] = None,\n    cron: Optional[Union[Iterable[str], str]] = None,\n    rrule: Optional[Union[Iterable[str], str]] = None,\n    paused: Optional[bool] = None,\n    schedules: Optional[FlexibleScheduleList] = None,\n    schedule: Optional[SCHEDULE_TYPES] = None,\n    is_schedule_active: Optional[bool] = None,\n    parameters: Optional[dict] = None,\n    triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n    description: Optional[str] = None,\n    tags: Optional[List[str]] = None,\n    version: Optional[str] = None,\n    enforce_parameter_schema: bool = False,\n    work_pool_name: Optional[str] = None,\n    work_queue_name: Optional[str] = None,\n    job_variables: Optional[Dict[str, Any]] = None,\n):\n    \"\"\"\n    Create a RunnerDeployment from a flow located at a given entrypoint and stored in a\n    local storage location.\n\n    Args:\n        entrypoint:  The path to a file containing a flow and the name of the flow function in\n            the format `./path/to/file.py:flow_func_name`.\n        name: A name for the deployment\n        storage: A storage object to use for retrieving flow code. If not provided, a\n            URL must be provided.\n        interval: An interval on which to execute the current flow. Accepts either a number\n            or a timedelta object. If a number is given, it will be interpreted as seconds.\n        cron: A cron schedule of when to execute runs of this flow.\n        rrule: An rrule schedule of when to execute runs of this flow.\n        schedule: A schedule object of when to execute runs of this flow. Used for\n            advanced scheduling options like timezone.\n        is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n            not provided when creating a deployment, the schedule will be set as active. If not\n            provided when updating a deployment, the schedule's activation will not be changed.\n        triggers: A list of triggers that should kick of a run of this flow.\n        parameters: A dictionary of default parameter values to pass to runs of this flow.\n        description: A description for the created deployment. Defaults to the flow's\n            description if not provided.\n        tags: A list of tags to associate with the created deployment for organizational\n            purposes.\n        version: A version for the created deployment. Defaults to the flow's version.\n        enforce_parameter_schema: Whether or not the Prefect API should enforce the\n            parameter schema for this deployment.\n        work_pool_name: The name of the work pool to use for this deployment.\n        work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n            If not provided the default work queue for the work pool will be used.\n        job_variables: Settings used to override the values specified default base job template\n            of the chosen work pool. 
Refer to the base job template of the chosen work pool for\n            available settings.\n    \"\"\"\n    from prefect.flows import load_flow_from_entrypoint\n\n    constructed_schedules = cls._construct_deployment_schedules(\n        interval=interval,\n        cron=cron,\n        rrule=rrule,\n        schedule=schedule,\n        schedules=schedules,\n    )\n\n    job_variables = job_variables or {}\n\n    with tempfile.TemporaryDirectory() as tmpdir:\n        storage.set_base_path(Path(tmpdir))\n        await storage.pull_code()\n\n        full_entrypoint = str(storage.destination / entrypoint)\n        flow = await from_async.wait_for_call_in_new_thread(\n            create_call(load_flow_from_entrypoint, full_entrypoint)\n        )\n\n    deployment = cls(\n        name=Path(name).stem,\n        flow_name=flow.name,\n        schedule=schedule,\n        schedules=constructed_schedules,\n        paused=paused,\n        is_schedule_active=is_schedule_active,\n        tags=tags or [],\n        triggers=triggers or [],\n        parameters=parameters or {},\n        description=description,\n        version=version,\n        entrypoint=entrypoint,\n        enforce_parameter_schema=enforce_parameter_schema,\n        storage=storage,\n        work_pool_name=work_pool_name,\n        work_queue_name=work_queue_name,\n        job_variables=job_variables,\n    )\n    deployment._path = str(storage.destination).replace(\n        tmpdir, \"$STORAGE_BASE_PATH\"\n    )\n\n    cls._set_defaults_from_flow(deployment, flow)\n\n    return deployment\n
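A hedged sketch of from_storage with a Git-based storage object. The repository URL, entrypoint, work pool, and image are hypothetical, and GitRepository is assumed to be available from prefect.runner.storage:

```python
from prefect.deployments.runner import RunnerDeployment
from prefect.runner.storage import GitRepository

# The flow code is pulled into a temporary directory so the flow's
# parameter schema can be introspected before the deployment is created.
storage = GitRepository(url="https://github.com/org/repo.git")

deployment = RunnerDeployment.from_storage(
    storage=storage,
    entrypoint="flows.py:my_flow",
    name="stored-flow-deployment",
    cron="0 6 * * *",
    work_pool_name="my-docker-pool",
)
deployment_id = deployment.apply(image="my-registry/my-image:latest")
```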
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment.validate_automation_names","title":"validate_automation_names","text":"

Ensure that each trigger has a name for its automation if none is provided.

Source code in prefect/deployments/runner.py
@validator(\"triggers\", allow_reuse=True)\ndef validate_automation_names(cls, field_value, values):\n    \"\"\"Ensure that each trigger has a name for its automation if none is provided.\"\"\"\n    return validate_automation_names(field_value, values)\n
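For illustration, a hedged sketch of providing a named trigger when building a deployment. DeploymentEventTrigger is assumed to be importable from prefect.events, and the flow, deployment name, and matched resource are hypothetical; triggers passed without a name have an automation name generated for them by this validator.

```python
from prefect import flow
from prefect.events import DeploymentEventTrigger
from prefect.deployments.runner import RunnerDeployment


@flow
def downstream_flow():
    ...


# A named trigger keeps its name; unnamed triggers receive a generated
# automation name via validate_automation_names.
deployment = RunnerDeployment.from_flow(
    flow=downstream_flow,
    name="run-after-upstream",
    triggers=[
        DeploymentEventTrigger(
            name="upstream-completed",
            expect={"prefect.flow-run.Completed"},
            match_related={"prefect.resource.name": "upstream-deployment"},
        )
    ],
)
```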
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.deploy","title":"deploy async","text":"

Deploy the provided list of deployments to dynamic infrastructure via a work pool.

By default, calling this function builds a Docker image for the deployments, pushes it to a registry, and creates each deployment via the Prefect API so that the corresponding flow runs on the given schedule.

If you want to use an existing image, you can pass build=False to skip building and pushing an image.

Parameters:

- *deployments (RunnerDeployment, default: ()): A list of deployments to deploy.
- work_pool_name (Optional[str], default: None): The name of the work pool to use for these deployments. Defaults to the value of PREFECT_DEFAULT_WORK_POOL_NAME.
- image (Optional[Union[str, DeploymentImage]], default: None): The name of the Docker image to build, including the registry and repository. Pass a DeploymentImage instance to customize the Dockerfile used and build arguments.
- build (bool, default: True): Whether or not to build a new image for the flow. If False, the provided image will be used as-is and pulled at runtime.
- push (bool, default: True): Whether or not to push the built image to a registry.
- print_next_steps_message (bool, default: True): Whether or not to print a message with next steps after deploying the deployments.

Returns:

- List[UUID]: A list of deployment IDs for the created/updated deployments.

Examples:

Deploy a group of flows to a work pool:

from prefect import deploy, flow\n\n@flow(log_prints=True)\ndef local_flow():\n    print(\"I'm a locally defined flow!\")\n\nif __name__ == \"__main__\":\n    deploy(\n        local_flow.to_deployment(name=\"example-deploy-local-flow\"),\n        flow.from_source(\n            source=\"https://github.com/org/repo.git\",\n            entrypoint=\"flows.py:my_flow\",\n        ).to_deployment(\n            name=\"example-deploy-remote-flow\",\n        ),\n        work_pool_name=\"my-work-pool\",\n        image=\"my-registry/my-image:dev\",\n    )\n
Source code in prefect/deployments/runner.py
@sync_compatible\nasync def deploy(\n    *deployments: RunnerDeployment,\n    work_pool_name: Optional[str] = None,\n    image: Optional[Union[str, DeploymentImage]] = None,\n    build: bool = True,\n    push: bool = True,\n    print_next_steps_message: bool = True,\n    ignore_warnings: bool = False,\n) -> List[UUID]:\n    \"\"\"\n    Deploy the provided list of deployments to dynamic infrastructure via a\n    work pool.\n\n    By default, calling this function will build a Docker image for the deployments, push it to a\n    registry, and create each deployment via the Prefect API that will run the corresponding\n    flow on the given schedule.\n\n    If you want to use an existing image, you can pass `build=False` to skip building and pushing\n    an image.\n\n    Args:\n        *deployments: A list of deployments to deploy.\n        work_pool_name: The name of the work pool to use for these deployments. Defaults to\n            the value of `PREFECT_DEFAULT_WORK_POOL_NAME`.\n        image: The name of the Docker image to build, including the registry and\n            repository. Pass a DeploymentImage instance to customize the Dockerfile used\n            and build arguments.\n        build: Whether or not to build a new image for the flow. If False, the provided\n            image will be used as-is and pulled at runtime.\n        push: Whether or not to skip pushing the built image to a registry.\n        print_next_steps_message: Whether or not to print a message with next steps\n            after deploying the deployments.\n\n    Returns:\n        A list of deployment IDs for the created/updated deployments.\n\n    Examples:\n        Deploy a group of flows to a work pool:\n\n        ```python\n        from prefect import deploy, flow\n\n        @flow(log_prints=True)\n        def local_flow():\n            print(\"I'm a locally defined flow!\")\n\n        if __name__ == \"__main__\":\n            deploy(\n                local_flow.to_deployment(name=\"example-deploy-local-flow\"),\n                flow.from_source(\n                    source=\"https://github.com/org/repo.git\",\n                    entrypoint=\"flows.py:my_flow\",\n                ).to_deployment(\n                    name=\"example-deploy-remote-flow\",\n                ),\n                work_pool_name=\"my-work-pool\",\n                image=\"my-registry/my-image:dev\",\n            )\n        ```\n    \"\"\"\n    work_pool_name = work_pool_name or PREFECT_DEFAULT_WORK_POOL_NAME.value()\n\n    if not image and not all(\n        d.storage or d.entrypoint_type == EntrypointType.MODULE_PATH\n        for d in deployments\n    ):\n        raise ValueError(\n            \"Either an image or remote storage location must be provided when deploying\"\n            \" a deployment.\"\n        )\n\n    if not work_pool_name:\n        raise ValueError(\n            \"A work pool name must be provided when deploying a deployment. Either\"\n            \" provide a work pool name when calling `deploy` or set\"\n            \" `PREFECT_DEFAULT_WORK_POOL_NAME` in your profile.\"\n        )\n\n    if image and isinstance(image, str):\n        image_name, image_tag = parse_image_tag(image)\n        image = DeploymentImage(name=image_name, tag=image_tag)\n\n    try:\n        async with get_client() as client:\n            work_pool = await client.read_work_pool(work_pool_name)\n    except ObjectNotFound as exc:\n        raise ValueError(\n            f\"Could not find work pool {work_pool_name!r}. 
Please create it before\"\n            \" deploying this flow.\"\n        ) from exc\n\n    is_docker_based_work_pool = get_from_dict(\n        work_pool.base_job_template, \"variables.properties.image\", False\n    )\n    is_block_based_work_pool = get_from_dict(\n        work_pool.base_job_template, \"variables.properties.block\", False\n    )\n    # carve out an exception for block based work pools that only have a block in their base job template\n    console = Console()\n    if not is_docker_based_work_pool and not is_block_based_work_pool:\n        if image:\n            raise ValueError(\n                f\"Work pool {work_pool_name!r} does not support custom Docker images.\"\n                \" Please use a work pool with an `image` variable in its base job template\"\n                \" or specify a remote storage location for the flow with `.from_source`.\"\n                \" If you are attempting to deploy a flow to a local process work pool,\"\n                \" consider using `flow.serve` instead. See the documentation for more\"\n                \" information: https://docs.prefect.io/latest/concepts/flows/#serving-a-flow\"\n            )\n        elif work_pool.type == \"process\" and not ignore_warnings:\n            console.print(\n                \"Looks like you're deploying to a process work pool. If you're creating a\"\n                \" deployment for local development, calling `.serve` on your flow is a great\"\n                \" way to get started. See the documentation for more information:\"\n                \" https://docs.prefect.io/latest/concepts/flows/#serving-a-flow. \"\n                \" Set `ignore_warnings=True` to suppress this message.\",\n                style=\"yellow\",\n            )\n\n    is_managed_pool = work_pool.is_managed_pool\n    if is_managed_pool:\n        build = False\n        push = False\n\n    if image and build:\n        with Progress(\n            SpinnerColumn(),\n            TextColumn(f\"Building image {image.reference}...\"),\n            transient=True,\n            console=console,\n        ) as progress:\n            docker_build_task = progress.add_task(\"docker_build\", total=1)\n            image.build()\n\n            progress.update(docker_build_task, completed=1)\n            console.print(\n                f\"Successfully built image {image.reference!r}\", style=\"green\"\n            )\n\n    if image and build and push:\n        with Progress(\n            SpinnerColumn(),\n            TextColumn(\"Pushing image...\"),\n            transient=True,\n            console=console,\n        ) as progress:\n            docker_push_task = progress.add_task(\"docker_push\", total=1)\n\n            image.push()\n\n            progress.update(docker_push_task, completed=1)\n\n        console.print(f\"Successfully pushed image {image.reference!r}\", style=\"green\")\n\n    deployment_exceptions = []\n    deployment_ids = []\n    image_ref = image.reference if image else None\n    for deployment in track(\n        deployments,\n        description=\"Creating/updating deployments...\",\n        console=console,\n        transient=True,\n    ):\n        try:\n            deployment_ids.append(\n                await deployment.apply(image=image_ref, work_pool_name=work_pool_name)\n            )\n        except Exception as exc:\n            if len(deployments) == 1:\n                raise\n            deployment_exceptions.append({\"deployment\": deployment, \"exc\": exc})\n\n    if deployment_exceptions:\n        
console.print(\n            \"Encountered errors while creating/updating deployments:\\n\",\n            style=\"orange_red1\",\n        )\n    else:\n        console.print(\"Successfully created/updated all deployments!\\n\", style=\"green\")\n\n    complete_failure = len(deployment_exceptions) == len(deployments)\n\n    table = Table(\n        title=\"Deployments\",\n        show_lines=True,\n    )\n\n    table.add_column(header=\"Name\", style=\"blue\", no_wrap=True)\n    table.add_column(header=\"Status\", style=\"blue\", no_wrap=True)\n    table.add_column(header=\"Details\", style=\"blue\")\n\n    for deployment in deployments:\n        errored_deployment = next(\n            (d for d in deployment_exceptions if d[\"deployment\"] == deployment),\n            None,\n        )\n        if errored_deployment:\n            table.add_row(\n                f\"{deployment.flow_name}/{deployment.name}\",\n                \"failed\",\n                str(errored_deployment[\"exc\"]),\n                style=\"red\",\n            )\n        else:\n            table.add_row(f\"{deployment.flow_name}/{deployment.name}\", \"applied\")\n    console.print(table)\n\n    if print_next_steps_message and not complete_failure:\n        if not work_pool.is_push_pool and not work_pool.is_managed_pool:\n            console.print(\n                \"\\nTo execute flow runs from these deployments, start a worker in a\"\n                \" separate terminal that pulls work from the\"\n                f\" {work_pool_name!r} work pool:\"\n            )\n            console.print(\n                f\"\\n\\t$ prefect worker start --pool {work_pool_name!r}\",\n                style=\"blue\",\n            )\n        console.print(\n            \"\\nTo trigger any of these deployments, use the\"\n            \" following command:\\n[blue]\\n\\t$ prefect deployment run\"\n            \" [DEPLOYMENT_NAME]\\n[/]\"\n        )\n\n        if PREFECT_UI_URL:\n            console.print(\n                \"\\nYou can also trigger your deployments via the Prefect UI:\"\n                f\" [blue]{PREFECT_UI_URL.value()}/deployments[/]\\n\"\n            )\n\n    return deployment_ids\n
","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/steps/core/","title":"steps.core","text":"","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/steps/core/#prefect.deployments.steps.core","title":"prefect.deployments.steps.core","text":"

Core primitives for running Prefect deployment steps.

Deployment steps are YAML representations of Python functions along with their inputs.

Whenever a step is run, the following actions are taken:

  • The step's inputs and block / variable references are resolved (see the prefect deploy documentation for more details)
  • The step's function is imported; if it cannot be found, the requires keyword is used to install the necessary packages
  • The step's function is called with the resolved inputs
  • The step's output is returned and used to resolve inputs for subsequent steps
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/steps/core/#prefect.deployments.steps.core.StepExecutionError","title":"StepExecutionError","text":"

Bases: Exception

Raised when a step fails to execute.

Source code in prefect/deployments/steps/core.py
class StepExecutionError(Exception):\n    \"\"\"\n    Raised when a step fails to execute.\n    \"\"\"\n
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/steps/core/#prefect.deployments.steps.core.run_step","title":"run_step async","text":"

Runs a step, returns the step's output.

Steps are assumed to be in the format {\"importable.func.name\": {\"kwarg1\": \"value1\", ...}}.

The 'id' and 'requires' keywords are reserved for specific purposes and are removed from the inputs before they are passed to the step function:

The 'requires' keyword specifies packages to install before the step runs, and the 'id' keyword exposes the step's outputs to subsequent steps.
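
For illustration, a minimal sketch of invoking run_step directly on a single, hypothetical step definition (run_step is asynchronous, so it is driven with asyncio here; resolving block and variable references assumes a reachable Prefect API):

import asyncio\n\nfrom prefect.deployments.steps.core import run_step\n\n# A single step in the documented format; \"id\" is reserved and stripped before the function call\nstep = {\n    \"prefect.deployments.steps.run_shell_script\": {\n        \"id\": \"get-commit-hash\",\n        \"script\": \"git rev-parse --short HEAD\",\n        \"stream_output\": False,\n    }\n}\n\noutput = asyncio.run(run_step(step))\nprint(output[\"stdout\"])\n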

Source code in prefect/deployments/steps/core.py
async def run_step(step: Dict, upstream_outputs: Optional[Dict] = None) -> Dict:\n    \"\"\"\n    Runs a step, returns the step's output.\n\n    Steps are assumed to be in the format `{\"importable.func.name\": {\"kwarg1\": \"value1\", ...}}`.\n\n    The 'id and 'requires' keywords are reserved for specific purposes and will be removed from the\n    inputs before passing to the step function:\n\n    This keyword is used to specify packages that should be installed before running the step.\n    \"\"\"\n    fqn, inputs = _get_step_fully_qualified_name_and_inputs(step)\n    upstream_outputs = upstream_outputs or {}\n\n    if len(step.keys()) > 1:\n        raise ValueError(\n            f\"Step has unexpected additional keys: {', '.join(list(step.keys())[1:])}\"\n        )\n\n    keywords = {\n        keyword: inputs.pop(keyword)\n        for keyword in RESERVED_KEYWORDS\n        if keyword in inputs\n    }\n\n    inputs = apply_values(inputs, upstream_outputs)\n    inputs = await resolve_block_document_references(inputs)\n    inputs = await resolve_variables(inputs)\n    inputs = apply_values(inputs, os.environ)\n    step_func = _get_function_for_step(fqn, requires=keywords.get(\"requires\"))\n    result = await from_async.call_soon_in_new_thread(\n        Call.new(step_func, **inputs)\n    ).aresult()\n    return result\n
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/steps/pull/","title":"steps.pull","text":"","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull","title":"prefect.deployments.steps.pull","text":"

Core set of steps for specifying a Prefect project pull step.

","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.git_clone","title":"git_clone async","text":"

Clones a git repository into the current working directory.

Parameters:

Name Type Description Default repository str

the URL of the repository to clone

required branch Optional[str]

the branch to clone; if not provided, the default branch will be used

None include_submodules bool

whether to include git submodules when cloning the repository

False access_token Optional[str]

an access token to use for cloning the repository; if not provided the repository will be cloned using the default git credentials

None credentials Optional[Block]

a GitHubCredentials, GitLabCredentials, or BitBucketCredentials block can be used to specify the credentials to use for cloning the repository.

None

Returns:

Name Type Description dict dict

a dictionary containing a directory key of the new directory that was created

Raises:

Type Description CalledProcessError

if the git clone command fails for any reason

Examples:

Clone a public repository:

pull:\n    - prefect.deployments.steps.git_clone:\n        repository: https://github.com/PrefectHQ/prefect.git\n

Clone a branch of a public repository:

pull:\n    - prefect.deployments.steps.git_clone:\n        repository: https://github.com/PrefectHQ/prefect.git\n        branch: my-branch\n

Clone a private repository using a GitHubCredentials block:

pull:\n    - prefect.deployments.steps.git_clone:\n        repository: https://github.com/org/repo.git\n        credentials: \"{{ prefect.blocks.github-credentials.my-github-credentials-block }}\"\n

Clone a private repository using an access token:

pull:\n    - prefect.deployments.steps.git_clone:\n        repository: https://github.com/org/repo.git\n        access_token: \"{{ prefect.blocks.secret.github-access-token }}\" # Requires creation of a Secret block\n
Note that you will need to create a Secret block to store the value of your git credentials. You can also store a username/password combo or token prefix (e.g. x-token-auth) in your secret block. Refer to your git provider's documentation for the correct authentication schema.

Clone a repository with submodules:

pull:\n    - prefect.deployments.steps.git_clone:\n        repository: https://github.com/org/repo.git\n        include_submodules: true\n

Clone a repository with an SSH key (note that the SSH key must be added to the worker before executing flows):

pull:\n    - prefect.deployments.steps.git_clone:\n        repository: git@github.com:org/repo.git\n

Source code in prefect/deployments/steps/pull.py
@sync_compatible\nasync def git_clone(\n    repository: str,\n    branch: Optional[str] = None,\n    include_submodules: bool = False,\n    access_token: Optional[str] = None,\n    credentials: Optional[Block] = None,\n) -> dict:\n    \"\"\"\n    Clones a git repository into the current working directory.\n\n    Args:\n        repository: the URL of the repository to clone\n        branch: the branch to clone; if not provided, the default branch will be used\n        include_submodules (bool): whether to include git submodules when cloning the repository\n        access_token: an access token to use for cloning the repository; if not provided\n            the repository will be cloned using the default git credentials\n        credentials: a GitHubCredentials, GitLabCredentials, or BitBucketCredentials block can be used to specify the\n            credentials to use for cloning the repository.\n\n    Returns:\n        dict: a dictionary containing a `directory` key of the new directory that was created\n\n    Raises:\n        subprocess.CalledProcessError: if the git clone command fails for any reason\n\n    Examples:\n        Clone a public repository:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: https://github.com/PrefectHQ/prefect.git\n        ```\n\n        Clone a branch of a public repository:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: https://github.com/PrefectHQ/prefect.git\n                branch: my-branch\n        ```\n\n        Clone a private repository using a GitHubCredentials block:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: https://github.com/org/repo.git\n                credentials: \"{{ prefect.blocks.github-credentials.my-github-credentials-block }}\"\n        ```\n\n        Clone a private repository using an access token:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: https://github.com/org/repo.git\n                access_token: \"{{ prefect.blocks.secret.github-access-token }}\" # Requires creation of a Secret block\n        ```\n        Note that you will need to [create a Secret block](/concepts/blocks/#using-existing-block-types) to store the\n        value of your git credentials. You can also store a username/password combo or token prefix (e.g. `x-token-auth`)\n        in your secret block. 
Refer to your git providers documentation for the correct authentication schema.\n\n        Clone a repository with submodules:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: https://github.com/org/repo.git\n                include_submodules: true\n        ```\n\n        Clone a repository with an SSH key (note that the SSH key must be added to the worker\n        before executing flows):\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: git@github.com:org/repo.git\n        ```\n    \"\"\"\n    if access_token and credentials:\n        raise ValueError(\n            \"Please provide either an access token or credentials but not both.\"\n        )\n\n    credentials = {\"access_token\": access_token} if access_token else credentials\n\n    storage = GitRepository(\n        url=repository,\n        credentials=credentials,\n        branch=branch,\n        include_submodules=include_submodules,\n    )\n\n    await storage.pull_code()\n\n    directory = str(storage.destination.relative_to(Path.cwd()))\n    deployment_logger.info(f\"Cloned repository {repository!r} into {directory!r}\")\n    return {\"directory\": directory}\n
","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.git_clone_project","title":"git_clone_project async","text":"

Deprecated. Use git_clone instead.

Source code in prefect/deployments/steps/pull.py
@deprecated_callable(start_date=\"Jun 2023\", help=\"Use 'git clone' instead.\")\n@sync_compatible\nasync def git_clone_project(\n    repository: str,\n    branch: Optional[str] = None,\n    include_submodules: bool = False,\n    access_token: Optional[str] = None,\n) -> dict:\n    \"\"\"Deprecated. Use `git_clone` instead.\"\"\"\n    return await git_clone(\n        repository=repository,\n        branch=branch,\n        include_submodules=include_submodules,\n        access_token=access_token,\n    )\n
","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.pull_from_remote_storage","title":"pull_from_remote_storage async","text":"

Pulls code from a remote storage location into the current working directory.

Works with protocols supported by fsspec.

Parameters:

Name Type Description Default url str

the URL of the remote storage location. Should be a valid fsspec URL. Some protocols may require an additional fsspec dependency to be installed. Refer to the fsspec docs for more details.

required **settings Any

any additional settings to pass to the fsspec filesystem class.

{}

Returns:

Name Type Description dict

a dictionary containing a directory key of the new directory that was created

Examples:

Pull code from a remote storage location:

pull:\n    - prefect.deployments.steps.pull_from_remote_storage:\n        url: s3://my-bucket/my-folder\n

Pull code from a remote storage location with additional settings:

pull:\n    - prefect.deployments.steps.pull_from_remote_storage:\n        url: s3://my-bucket/my-folder\n        key: {{ prefect.blocks.secret.my-aws-access-key }}\n        secret: {{ prefect.blocks.secret.my-aws-secret-key }}\n

Source code in prefect/deployments/steps/pull.py
async def pull_from_remote_storage(url: str, **settings: Any):\n    \"\"\"\n    Pulls code from a remote storage location into the current working directory.\n\n    Works with protocols supported by `fsspec`.\n\n    Args:\n        url (str): the URL of the remote storage location. Should be a valid `fsspec` URL.\n            Some protocols may require an additional `fsspec` dependency to be installed.\n            Refer to the [`fsspec` docs](https://filesystem-spec.readthedocs.io/en/latest/api.html#other-known-implementations)\n            for more details.\n        **settings (Any): any additional settings to pass the `fsspec` filesystem class.\n\n    Returns:\n        dict: a dictionary containing a `directory` key of the new directory that was created\n\n    Examples:\n        Pull code from a remote storage location:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.pull_from_remote_storage:\n                url: s3://my-bucket/my-folder\n        ```\n\n        Pull code from a remote storage location with additional settings:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.pull_from_remote_storage:\n                url: s3://my-bucket/my-folder\n                key: {{ prefect.blocks.secret.my-aws-access-key }}}\n                secret: {{ prefect.blocks.secret.my-aws-secret-key }}}\n        ```\n    \"\"\"\n    storage = RemoteStorage(url, **settings)\n\n    await storage.pull_code()\n\n    directory = str(storage.destination.relative_to(Path.cwd()))\n    deployment_logger.info(f\"Pulled code from {url!r} into {directory!r}\")\n    return {\"directory\": directory}\n
","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.pull_with_block","title":"pull_with_block async","text":"

Pulls code using a block.
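
A minimal sketch of calling this step directly from Python; the block name and type slug are hypothetical and assume an s3-bucket block named my-code-bucket already exists:

import asyncio\n\nfrom prefect.deployments.steps.pull import pull_with_block\n\n# Loads the block \"s3-bucket/my-code-bucket\", pulls its code, and reports the new directory\nresult = asyncio.run(\n    pull_with_block(block_document_name=\"my-code-bucket\", block_type_slug=\"s3-bucket\")\n)\nprint(result[\"directory\"])\n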

Parameters:

Name Type Description Default block_document_name str

The name of the block document to use

required block_type_slug str

The slug of the type of block to use

required Source code in prefect/deployments/steps/pull.py
async def pull_with_block(block_document_name: str, block_type_slug: str):\n    \"\"\"\n    Pulls code using a block.\n\n    Args:\n        block_document_name: The name of the block document to use\n        block_type_slug: The slug of the type of block to use\n    \"\"\"\n    full_slug = f\"{block_type_slug}/{block_document_name}\"\n    try:\n        block = await Block.load(full_slug)\n    except Exception:\n        deployment_logger.exception(\"Unable to load block '%s'\", full_slug)\n        raise\n\n    try:\n        storage = BlockStorageAdapter(block)\n    except Exception:\n        deployment_logger.exception(\n            \"Unable to create storage adapter for block '%s'\", full_slug\n        )\n        raise\n\n    await storage.pull_code()\n\n    directory = str(storage.destination.relative_to(Path.cwd()))\n    deployment_logger.info(\n        \"Pulled code using block '%s' into '%s'\", full_slug, directory\n    )\n    return {\"directory\": directory}\n
","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.set_working_directory","title":"set_working_directory","text":"

Sets the working directory; works with both absolute and relative paths.
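
A minimal sketch of calling the step directly; the system temp directory is used only so the snippet runs anywhere:

import tempfile\n\nfrom prefect.deployments.steps.pull import set_working_directory\n\n# Changes the process's working directory and returns it, e.g. {\"directory\": \"/tmp\"}\nresult = set_working_directory(tempfile.gettempdir())\nprint(result)\n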

Parameters:

Name Type Description Default directory str

the directory to set as the working directory

required

Returns:

Name Type Description dict dict

a dictionary containing a directory key of the directory that was set

Source code in prefect/deployments/steps/pull.py
def set_working_directory(directory: str) -> dict:\n    \"\"\"\n    Sets the working directory; works with both absolute and relative paths.\n\n    Args:\n        directory (str): the directory to set as the working directory\n\n    Returns:\n        dict: a dictionary containing a `directory` key of the\n            directory that was set\n    \"\"\"\n    os.chdir(directory)\n    return dict(directory=directory)\n
","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/utility/","title":"steps.utility","text":"","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility","title":"prefect.deployments.steps.utility","text":"

Utility project steps that are useful for managing a project's deployment lifecycle.

Steps within this module can be used within a build, push, or pull deployment action.

Example

Use the run_shell_script step to retrieve the short Git commit hash of the current repository and use it as a Docker image tag:

build:\n    - prefect.deployments.steps.run_shell_script:\n        id: get-commit-hash\n        script: git rev-parse --short HEAD\n        stream_output: false\n    - prefect_docker.deployments.steps.build_docker_image:\n        requires: prefect-docker\n        image_name: my-image\n        image_tag: \"{{ get-commit-hash.stdout }}\"\n        dockerfile: auto\n

","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility.RunShellScriptResult","title":"RunShellScriptResult","text":"

Bases: TypedDict

The result of a run_shell_script step.

Attributes:

Name Type Description stdout str

The captured standard output of the script.

stderr str

The captured standard error of the script.

Source code in prefect/deployments/steps/utility.py
class RunShellScriptResult(TypedDict):\n    \"\"\"\n    The result of a `run_shell_script` step.\n\n    Attributes:\n        stdout: The captured standard output of the script.\n        stderr: The captured standard error of the script.\n    \"\"\"\n\n    stdout: str\n    stderr: str\n
","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility.pip_install_requirements","title":"pip_install_requirements async","text":"

Installs dependencies from a requirements.txt file.

Parameters:

Name Type Description Default requirements_file str

The requirements.txt to use for installation.

'requirements.txt' directory Optional[str]

The directory the requirements.txt file is in. Defaults to the current working directory.

None stream_output bool

Whether to stream the output from the pip install command to the console

True

Returns:

Type Description

A dictionary with the keys stdout and stderr containing the output of the pip install command

Raises:

Type Description CalledProcessError

if the pip install command fails for any reason

Example
pull:\n    - prefect.deployments.steps.git_clone:\n        id: clone-step\n        repository: https://github.com/org/repo.git\n    - prefect.deployments.steps.pip_install_requirements:\n        directory: {{ clone-step.directory }}\n        requirements_file: requirements.txt\n        stream_output: False\n
Source code in prefect/deployments/steps/utility.py
async def pip_install_requirements(\n    directory: Optional[str] = None,\n    requirements_file: str = \"requirements.txt\",\n    stream_output: bool = True,\n):\n    \"\"\"\n    Installs dependencies from a requirements.txt file.\n\n    Args:\n        requirements_file: The requirements.txt to use for installation.\n        directory: The directory the requirements.txt file is in. Defaults to\n            the current working directory.\n        stream_output: Whether to stream the output from pip install should be\n            streamed to the console\n\n    Returns:\n        A dictionary with the keys `stdout` and `stderr` containing the output\n            the `pip install` command\n\n    Raises:\n        subprocess.CalledProcessError: if the pip install command fails for any reason\n\n    Example:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                id: clone-step\n                repository: https://github.com/org/repo.git\n            - prefect.deployments.steps.pip_install_requirements:\n                directory: {{ clone-step.directory }}\n                requirements_file: requirements.txt\n                stream_output: False\n        ```\n    \"\"\"\n    stdout_sink = io.StringIO()\n    stderr_sink = io.StringIO()\n\n    async with open_process(\n        [get_sys_executable(), \"-m\", \"pip\", \"install\", \"-r\", requirements_file],\n        stdout=subprocess.PIPE,\n        stderr=subprocess.PIPE,\n        cwd=directory,\n    ) as process:\n        await _stream_capture_process_output(\n            process,\n            stdout_sink=stdout_sink,\n            stderr_sink=stderr_sink,\n            stream_output=stream_output,\n        )\n        await process.wait()\n\n        if process.returncode != 0:\n            raise RuntimeError(\n                f\"pip_install_requirements failed with error code {process.returncode}:\"\n                f\" {stderr_sink.getvalue()}\"\n            )\n\n    return {\n        \"stdout\": stdout_sink.getvalue().strip(),\n        \"stderr\": stderr_sink.getvalue().strip(),\n    }\n
","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility.run_shell_script","title":"run_shell_script async","text":"

Runs one or more shell commands in a subprocess. Returns the standard output and standard error of the script.

Parameters:

Name Type Description Default script str

The script to run

required directory Optional[str]

The directory to run the script in. Defaults to the current working directory.

None env Optional[Dict[str, str]]

A dictionary of environment variables to set for the script

None stream_output bool

Whether to stream the output of the script to stdout/stderr

True expand_env_vars bool

Whether to expand environment variables in the script before running it

False

Returns:

Type Description RunShellScriptResult

A dictionary with the keys stdout and stderr containing the output of the script

Examples:

Retrieve the short Git commit hash of the current repository to use as a Docker image tag:

build:\n    - prefect.deployments.steps.run_shell_script:\n        id: get-commit-hash\n        script: git rev-parse --short HEAD\n        stream_output: false\n    - prefect_docker.deployments.steps.build_docker_image:\n        requires: prefect-docker\n        image_name: my-image\n        image_tag: \"{{ get-commit-hash.stdout }}\"\n        dockerfile: auto\n

Run a multi-line shell script:

build:\n    - prefect.deployments.steps.run_shell_script:\n        script: |\n            echo \"Hello\"\n            echo \"World\"\n

Run a shell script with environment variables:

build:\n    - prefect.deployments.steps.run_shell_script:\n        script: echo \"Hello $NAME\"\n        env:\n            NAME: World\n

Run a shell script with environment variables expanded from the current environment:

pull:\n    - prefect.deployments.steps.run_shell_script:\n        script: |\n            echo \"User: $USER\"\n            echo \"Home Directory: $HOME\"\n        stream_output: true\n        expand_env_vars: true\n

Run a shell script in a specific directory:

build:\n    - prefect.deployments.steps.run_shell_script:\n        script: echo \"Hello\"\n        directory: /path/to/directory\n

Run a script stored in a file:

build:\n    - prefect.deployments.steps.run_shell_script:\n        script: \"bash path/to/script.sh\"\n

Source code in prefect/deployments/steps/utility.py
async def run_shell_script(\n    script: str,\n    directory: Optional[str] = None,\n    env: Optional[Dict[str, str]] = None,\n    stream_output: bool = True,\n    expand_env_vars: bool = False,\n) -> RunShellScriptResult:\n    \"\"\"\n    Runs one or more shell commands in a subprocess. Returns the standard\n    output and standard error of the script.\n\n    Args:\n        script: The script to run\n        directory: The directory to run the script in. Defaults to the current\n            working directory.\n        env: A dictionary of environment variables to set for the script\n        stream_output: Whether to stream the output of the script to\n            stdout/stderr\n        expand_env_vars: Whether to expand environment variables in the script\n            before running it\n\n    Returns:\n        A dictionary with the keys `stdout` and `stderr` containing the output\n            of the script\n\n    Examples:\n        Retrieve the short Git commit hash of the current repository to use as\n            a Docker image tag:\n        ```yaml\n        build:\n            - prefect.deployments.steps.run_shell_script:\n                id: get-commit-hash\n                script: git rev-parse --short HEAD\n                stream_output: false\n            - prefect_docker.deployments.steps.build_docker_image:\n                requires: prefect-docker\n                image_name: my-image\n                image_tag: \"{{ get-commit-hash.stdout }}\"\n                dockerfile: auto\n        ```\n\n        Run a multi-line shell script:\n        ```yaml\n        build:\n            - prefect.deployments.steps.run_shell_script:\n                script: |\n                    echo \"Hello\"\n                    echo \"World\"\n        ```\n\n        Run a shell script with environment variables:\n        ```yaml\n        build:\n            - prefect.deployments.steps.run_shell_script:\n                script: echo \"Hello $NAME\"\n                env:\n                    NAME: World\n        ```\n\n        Run a shell script with environment variables expanded\n            from the current environment:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.run_shell_script:\n                script: |\n                    echo \"User: $USER\"\n                    echo \"Home Directory: $HOME\"\n                stream_output: true\n                expand_env_vars: true\n        ```\n\n        Run a shell script in a specific directory:\n        ```yaml\n        build:\n            - prefect.deployments.steps.run_shell_script:\n                script: echo \"Hello\"\n                directory: /path/to/directory\n        ```\n\n        Run a script stored in a file:\n        ```yaml\n        build:\n            - prefect.deployments.steps.run_shell_script:\n                script: \"bash path/to/script.sh\"\n        ```\n    \"\"\"\n    current_env = os.environ.copy()\n    current_env.update(env or {})\n\n    commands = script.splitlines()\n    stdout_sink = io.StringIO()\n    stderr_sink = io.StringIO()\n\n    for command in commands:\n        if expand_env_vars:\n            # Expand environment variables in command and provided environment\n            command = string.Template(command).safe_substitute(current_env)\n        split_command = shlex.split(command, posix=sys.platform != \"win32\")\n        if not split_command:\n            continue\n        async with open_process(\n            split_command,\n            stdout=subprocess.PIPE,\n            
stderr=subprocess.PIPE,\n            cwd=directory,\n            env=current_env,\n        ) as process:\n            await _stream_capture_process_output(\n                process,\n                stdout_sink=stdout_sink,\n                stderr_sink=stderr_sink,\n                stream_output=stream_output,\n            )\n\n            await process.wait()\n\n            if process.returncode != 0:\n                raise RuntimeError(\n                    f\"`run_shell_script` failed with error code {process.returncode}:\"\n                    f\" {stderr_sink.getvalue()}\"\n                )\n\n    return {\n        \"stdout\": stdout_sink.getvalue().strip(),\n        \"stderr\": stderr_sink.getvalue().strip(),\n    }\n
","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/input/actions/","title":"actions","text":"","tags":["flow run","pause","suspend","input"]},{"location":"api-ref/prefect/input/actions/#prefect.input.actions","title":"prefect.input.actions","text":"","tags":["flow run","pause","suspend","input"]},{"location":"api-ref/prefect/input/actions/#prefect.input.actions.create_flow_run_input","title":"create_flow_run_input async","text":"

Create a new flow run input. The given value will be serialized to JSON and stored as a flow run input value.
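
A minimal sketch pairing this with read_flow_run_input inside a flow, where the flow run ID is inferred from the current context (the key and value are illustrative):

from prefect import flow\nfrom prefect.input.actions import create_flow_run_input, read_flow_run_input\n\n@flow\ndef record_threshold():\n    # The value is serialized to JSON and stored against the current flow run\n    create_flow_run_input(key=\"threshold\", value=42)\n    # Reading an unknown key returns None; here the stored value is returned\n    print(read_flow_run_input(key=\"threshold\"))\n\nif __name__ == \"__main__\":\n    record_threshold()\n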

Parameters:

Name Type Description Default - key (str

the flow run input key

required - value (Any

the flow run input value

required - flow_run_id (UUID

the optional flow run ID; if not given, it defaults to the flow run ID from the current context.

required Source code in prefect/input/actions.py
@sync_compatible\n@client_injector\nasync def create_flow_run_input(\n    client: \"PrefectClient\",\n    key: str,\n    value: Any,\n    flow_run_id: Optional[UUID] = None,\n    sender: Optional[str] = None,\n):\n    \"\"\"\n    Create a new flow run input. The given `value` will be serialized to JSON\n    and stored as a flow run input value.\n\n    Args:\n        - key (str): the flow run input key\n        - value (Any): the flow run input value\n        - flow_run_id (UUID): the, optional, flow run ID. If not given will\n          default to pulling the flow run ID from the current context.\n    \"\"\"\n    flow_run_id = ensure_flow_run_id(flow_run_id)\n\n    await client.create_flow_run_input(\n        flow_run_id=flow_run_id,\n        key=key,\n        sender=sender,\n        value=orjson.dumps(value).decode(),\n    )\n
","tags":["flow run","pause","suspend","input"]},{"location":"api-ref/prefect/input/actions/#prefect.input.actions.delete_flow_run_input","title":"delete_flow_run_input async","text":"

Delete a flow run input.

Parameters:

Name Type Description Default - flow_run_id (UUID

the flow run ID

required - key (str

the flow run input key

required Source code in prefect/input/actions.py
@sync_compatible\n@client_injector\nasync def delete_flow_run_input(\n    client: \"PrefectClient\", key: str, flow_run_id: Optional[UUID] = None\n):\n    \"\"\"Delete a flow run input.\n\n    Args:\n        - flow_run_id (UUID): the flow run ID\n        - key (str): the flow run input key\n    \"\"\"\n\n    flow_run_id = ensure_flow_run_id(flow_run_id)\n\n    await client.delete_flow_run_input(flow_run_id=flow_run_id, key=key)\n
","tags":["flow run","pause","suspend","input"]},{"location":"api-ref/prefect/input/actions/#prefect.input.actions.read_flow_run_input","title":"read_flow_run_input async","text":"

Read a flow run input.

Parameters:

Name Type Description Default - key (str

the flow run input key

required - flow_run_id (UUID

the flow run ID

required Source code in prefect/input/actions.py
@sync_compatible\n@client_injector\nasync def read_flow_run_input(\n    client: \"PrefectClient\", key: str, flow_run_id: Optional[UUID] = None\n) -> Any:\n    \"\"\"Read a flow run input.\n\n    Args:\n        - key (str): the flow run input key\n        - flow_run_id (UUID): the flow run ID\n    \"\"\"\n    flow_run_id = ensure_flow_run_id(flow_run_id)\n\n    try:\n        value = await client.read_flow_run_input(flow_run_id=flow_run_id, key=key)\n    except PrefectHTTPStatusError as exc:\n        if exc.response.status_code == 404:\n            return None\n        raise\n    else:\n        return orjson.loads(value)\n
","tags":["flow run","pause","suspend","input"]},{"location":"api-ref/prefect/input/run_input/","title":"run_input","text":"","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input","title":"prefect.input.run_input","text":"

This module contains functions that allow sending type-checked RunInput data to flows at runtime. Flows can send back responses, establishing two-way channels with senders. These functions are particularly useful for systems that require real-time interaction and efficient data handling, such as distributed or microservices-oriented architectures that need continuous data synchronization and processing.

The following is an example of two flows. One sends a random number to the other and waits for a response. The other receives the number, squares it, and sends the result back. The sender flow then prints the result.

Sender flow:

import random\nfrom uuid import UUID\nfrom prefect import flow, get_run_logger\nfrom prefect.input import RunInput\n\nclass NumberData(RunInput):\n    number: int\n\n\n@flow\nasync def sender_flow(receiver_flow_run_id: UUID):\n    logger = get_run_logger()\n\n    the_number = random.randint(1, 100)\n\n    await NumberData(number=the_number).send_to(receiver_flow_run_id)\n\n    receiver = NumberData.receive(flow_run_id=receiver_flow_run_id)\n    squared = await receiver.next()\n\n    logger.info(f\"{the_number} squared is {squared.number}\")\n

Receiver flow:

import random\nfrom uuid import UUID\nfrom prefect import flow, get_run_logger\nfrom prefect.input import RunInput\n\nclass NumberData(RunInput):\n    number: int\n\n\n@flow\nasync def receiver_flow():\n    async for data in NumberData.receive():\n        squared = data.number ** 2\n        data.respond(NumberData(number=squared))\n

","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.AutomaticRunInput","title":"AutomaticRunInput","text":"

Bases: RunInput, Generic[T]

Source code in prefect/input/run_input.py
class AutomaticRunInput(RunInput, Generic[T]):\n    value: T\n\n    @classmethod\n    @sync_compatible\n    async def load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> T:\n        \"\"\"\n        Load the run input response from the given key.\n\n        Args:\n            - keyset (Keyset): the keyset to load the input for\n            - flow_run_id (UUID, optional): the flow run ID to load the input for\n        \"\"\"\n        instance = await super().load(keyset, flow_run_id=flow_run_id)\n        return instance.value\n\n    @classmethod\n    def subclass_from_type(cls, _type: Type[T]) -> Type[\"AutomaticRunInput[T]\"]:\n        \"\"\"\n        Create a new `AutomaticRunInput` subclass from the given type.\n\n        This method uses the type's name as a key prefix to identify related\n        flow run inputs. This helps in ensuring that values saved under a type\n        (like List[int]) are retrievable under the generic type name (like \"list\").\n        \"\"\"\n        fields: Dict[str, Any] = {\"value\": (_type, ...)}\n\n        # Explanation for using getattr for type name extraction:\n        # - \"__name__\": This is the usual attribute for getting the name of\n        #   most types.\n        # - \"_name\": Used as a fallback, some type annotations in Python 3.9\n        #   and earlier might only have this attribute instead of __name__.\n        # - If neither is available, defaults to an empty string to prevent\n        #   errors, but typically we should find at least one valid name\n        #   attribute. This will match all automatic inputs sent to the flow\n        #   run, rather than a specific type.\n        #\n        # This approach ensures compatibility across Python versions and\n        # handles various edge cases in type annotations.\n\n        type_prefix: str = getattr(\n            _type, \"__name__\", getattr(_type, \"_name\", \"\")\n        ).lower()\n\n        class_name = f\"{type_prefix}AutomaticRunInput\"\n\n        # Creating a new Pydantic model class dynamically with the name based\n        # on the type prefix.\n        new_cls: Type[\"AutomaticRunInput\"] = pydantic.create_model(\n            class_name, **fields, __base__=AutomaticRunInput\n        )\n        return new_cls\n\n    @classmethod\n    def receive(cls, *args, **kwargs):\n        if kwargs.get(\"key_prefix\") is None:\n            kwargs[\"key_prefix\"] = f\"{cls.__name__.lower()}-auto\"\n\n        return GetAutomaticInputHandler(run_input_cls=cls, *args, **kwargs)\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.AutomaticRunInput.load","title":"load async classmethod","text":"

Load the run input response from the given key.

Parameters:

Name Type Description Default - keyset (Keyset

the keyset to load the input for

required - flow_run_id (UUID

the flow run ID to load the input for

required Source code in prefect/input/run_input.py
@classmethod\n@sync_compatible\nasync def load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> T:\n    \"\"\"\n    Load the run input response from the given key.\n\n    Args:\n        - keyset (Keyset): the keyset to load the input for\n        - flow_run_id (UUID, optional): the flow run ID to load the input for\n    \"\"\"\n    instance = await super().load(keyset, flow_run_id=flow_run_id)\n    return instance.value\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.AutomaticRunInput.subclass_from_type","title":"subclass_from_type classmethod","text":"

Create a new AutomaticRunInput subclass from the given type.

This method uses the type's name as a key prefix to identify related flow run inputs. This helps in ensuring that values saved under a type (like List[int]) are retrievable under the generic type name (like \"list\").

Source code in prefect/input/run_input.py
@classmethod\ndef subclass_from_type(cls, _type: Type[T]) -> Type[\"AutomaticRunInput[T]\"]:\n    \"\"\"\n    Create a new `AutomaticRunInput` subclass from the given type.\n\n    This method uses the type's name as a key prefix to identify related\n    flow run inputs. This helps in ensuring that values saved under a type\n    (like List[int]) are retrievable under the generic type name (like \"list\").\n    \"\"\"\n    fields: Dict[str, Any] = {\"value\": (_type, ...)}\n\n    # Explanation for using getattr for type name extraction:\n    # - \"__name__\": This is the usual attribute for getting the name of\n    #   most types.\n    # - \"_name\": Used as a fallback, some type annotations in Python 3.9\n    #   and earlier might only have this attribute instead of __name__.\n    # - If neither is available, defaults to an empty string to prevent\n    #   errors, but typically we should find at least one valid name\n    #   attribute. This will match all automatic inputs sent to the flow\n    #   run, rather than a specific type.\n    #\n    # This approach ensures compatibility across Python versions and\n    # handles various edge cases in type annotations.\n\n    type_prefix: str = getattr(\n        _type, \"__name__\", getattr(_type, \"_name\", \"\")\n    ).lower()\n\n    class_name = f\"{type_prefix}AutomaticRunInput\"\n\n    # Creating a new Pydantic model class dynamically with the name based\n    # on the type prefix.\n    new_cls: Type[\"AutomaticRunInput\"] = pydantic.create_model(\n        class_name, **fields, __base__=AutomaticRunInput\n    )\n    return new_cls\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput","title":"RunInput","text":"

Bases: BaseModel

Source code in prefect/input/run_input.py
class RunInput(pydantic.BaseModel):\n    class Config:\n        extra = \"forbid\"\n\n    _description: Optional[str] = pydantic.PrivateAttr(default=None)\n    _metadata: RunInputMetadata = pydantic.PrivateAttr()\n\n    @property\n    def metadata(self) -> RunInputMetadata:\n        return self._metadata\n\n    @classmethod\n    def keyset_from_type(cls) -> Keyset:\n        return keyset_from_base_key(cls.__name__.lower())\n\n    @classmethod\n    @sync_compatible\n    async def save(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None):\n        \"\"\"\n        Save the run input response to the given key.\n\n        Args:\n            - keyset (Keyset): the keyset to save the input for\n            - flow_run_id (UUID, optional): the flow run ID to save the input for\n        \"\"\"\n\n        if HAS_PYDANTIC_V2:\n            schema = create_v2_schema(cls.__name__, model_base=cls)\n        else:\n            schema = cls.schema(by_alias=True)\n\n        await create_flow_run_input(\n            key=keyset[\"schema\"], value=schema, flow_run_id=flow_run_id\n        )\n\n        description = cls._description if isinstance(cls._description, str) else None\n        if description:\n            await create_flow_run_input(\n                key=keyset[\"description\"],\n                value=description,\n                flow_run_id=flow_run_id,\n            )\n\n    @classmethod\n    @sync_compatible\n    async def load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None):\n        \"\"\"\n        Load the run input response from the given key.\n\n        Args:\n            - keyset (Keyset): the keyset to load the input for\n            - flow_run_id (UUID, optional): the flow run ID to load the input for\n        \"\"\"\n        flow_run_id = ensure_flow_run_id(flow_run_id)\n        value = await read_flow_run_input(keyset[\"response\"], flow_run_id=flow_run_id)\n        if value:\n            instance = cls(**value)\n        else:\n            instance = cls()\n        instance._metadata = RunInputMetadata(\n            key=keyset[\"response\"], sender=None, receiver=flow_run_id\n        )\n        return instance\n\n    @classmethod\n    def load_from_flow_run_input(cls, flow_run_input: \"FlowRunInput\"):\n        \"\"\"\n        Load the run input from a FlowRunInput object.\n\n        Args:\n            - flow_run_input (FlowRunInput): the flow run input to load the input for\n        \"\"\"\n        instance = cls(**flow_run_input.decoded_value)\n        instance._metadata = RunInputMetadata(\n            key=flow_run_input.key,\n            sender=flow_run_input.sender,\n            receiver=flow_run_input.flow_run_id,\n        )\n        return instance\n\n    @classmethod\n    def with_initial_data(\n        cls: Type[R], description: Optional[str] = None, **kwargs: Any\n    ) -> Type[R]:\n        \"\"\"\n        Create a new `RunInput` subclass with the given initial data as field\n        defaults.\n\n        Args:\n            - description (str, optional): a description to show when resuming\n                a flow run that requires input\n            - kwargs (Any): the initial data to populate the subclass\n        \"\"\"\n        fields: Dict[str, Any] = {}\n        for key, value in kwargs.items():\n            fields[key] = (type(value), value)\n        model = pydantic.create_model(cls.__name__, **fields, __base__=cls)\n\n        if description is not None:\n            model._description = description\n\n        return model\n\n    @sync_compatible\n    async def 
respond(\n        self,\n        run_input: \"RunInput\",\n        sender: Optional[str] = None,\n        key_prefix: Optional[str] = None,\n    ):\n        flow_run_id = None\n        if self.metadata.sender and self.metadata.sender.startswith(\"prefect.flow-run\"):\n            _, _, id = self.metadata.sender.rpartition(\".\")\n            flow_run_id = UUID(id)\n\n        if not flow_run_id:\n            raise RuntimeError(\n                \"Cannot respond to an input that was not sent by a flow run.\"\n            )\n\n        await _send_input(\n            flow_run_id=flow_run_id,\n            run_input=run_input,\n            sender=sender,\n            key_prefix=key_prefix,\n        )\n\n    @sync_compatible\n    async def send_to(\n        self,\n        flow_run_id: UUID,\n        sender: Optional[str] = None,\n        key_prefix: Optional[str] = None,\n    ):\n        await _send_input(\n            flow_run_id=flow_run_id,\n            run_input=self,\n            sender=sender,\n            key_prefix=key_prefix,\n        )\n\n    @classmethod\n    def receive(\n        cls,\n        timeout: Optional[float] = 3600,\n        poll_interval: float = 10,\n        raise_timeout_error: bool = False,\n        exclude_keys: Optional[Set[str]] = None,\n        key_prefix: Optional[str] = None,\n        flow_run_id: Optional[UUID] = None,\n    ):\n        if key_prefix is None:\n            key_prefix = f\"{cls.__name__.lower()}-auto\"\n\n        return GetInputHandler(\n            run_input_cls=cls,\n            key_prefix=key_prefix,\n            timeout=timeout,\n            poll_interval=poll_interval,\n            raise_timeout_error=raise_timeout_error,\n            exclude_keys=exclude_keys,\n            flow_run_id=flow_run_id,\n        )\n\n    @classmethod\n    def subclass_from_base_model_type(\n        cls, model_cls: Type[pydantic.BaseModel]\n    ) -> Type[\"RunInput\"]:\n        \"\"\"\n        Create a new `RunInput` subclass from the given `pydantic.BaseModel`\n        subclass.\n\n        Args:\n            - model_cls (pydantic.BaseModel subclass): the class from which\n                to create the new `RunInput` subclass\n        \"\"\"\n        return type(f\"{model_cls.__name__}RunInput\", (RunInput, model_cls), {})  # type: ignore\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput.load","title":"load async classmethod","text":"

Load the run input response from the given key.

Parameters:

Name Type Description Default - keyset (Keyset

the keyset to load the input for

required - flow_run_id (UUID

the flow run ID to load the input for

required Source code in prefect/input/run_input.py
@classmethod\n@sync_compatible\nasync def load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None):\n    \"\"\"\n    Load the run input response from the given key.\n\n    Args:\n        - keyset (Keyset): the keyset to load the input for\n        - flow_run_id (UUID, optional): the flow run ID to load the input for\n    \"\"\"\n    flow_run_id = ensure_flow_run_id(flow_run_id)\n    value = await read_flow_run_input(keyset[\"response\"], flow_run_id=flow_run_id)\n    if value:\n        instance = cls(**value)\n    else:\n        instance = cls()\n    instance._metadata = RunInputMetadata(\n        key=keyset[\"response\"], sender=None, receiver=flow_run_id\n    )\n    return instance\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput.load_from_flow_run_input","title":"load_from_flow_run_input classmethod","text":"

Load the run input from a FlowRunInput object.

Parameters:

Name Type Description Default - flow_run_input (FlowRunInput

the flow run input to load the input for

required Source code in prefect/input/run_input.py
@classmethod\ndef load_from_flow_run_input(cls, flow_run_input: \"FlowRunInput\"):\n    \"\"\"\n    Load the run input from a FlowRunInput object.\n\n    Args:\n        - flow_run_input (FlowRunInput): the flow run input to load the input for\n    \"\"\"\n    instance = cls(**flow_run_input.decoded_value)\n    instance._metadata = RunInputMetadata(\n        key=flow_run_input.key,\n        sender=flow_run_input.sender,\n        receiver=flow_run_input.flow_run_id,\n    )\n    return instance\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput.save","title":"save async classmethod","text":"

Save the run input response to the given key.

Parameters:

Name Type Description Default - keyset (Keyset

the keyset to save the input for

required - flow_run_id (UUID

the flow run ID to save the input for

required Source code in prefect/input/run_input.py
@classmethod\n@sync_compatible\nasync def save(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None):\n    \"\"\"\n    Save the run input response to the given key.\n\n    Args:\n        - keyset (Keyset): the keyset to save the input for\n        - flow_run_id (UUID, optional): the flow run ID to save the input for\n    \"\"\"\n\n    if HAS_PYDANTIC_V2:\n        schema = create_v2_schema(cls.__name__, model_base=cls)\n    else:\n        schema = cls.schema(by_alias=True)\n\n    await create_flow_run_input(\n        key=keyset[\"schema\"], value=schema, flow_run_id=flow_run_id\n    )\n\n    description = cls._description if isinstance(cls._description, str) else None\n    if description:\n        await create_flow_run_input(\n            key=keyset[\"description\"],\n            value=description,\n            flow_run_id=flow_run_id,\n        )\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput.subclass_from_base_model_type","title":"subclass_from_base_model_type classmethod","text":"

Create a new RunInput subclass from the given pydantic.BaseModel subclass.

Parameters:

Name Type Description Default - model_cls (pydantic.BaseModel subclass

the class from which to create the new RunInput subclass

required Source code in prefect/input/run_input.py
@classmethod\ndef subclass_from_base_model_type(\n    cls, model_cls: Type[pydantic.BaseModel]\n) -> Type[\"RunInput\"]:\n    \"\"\"\n    Create a new `RunInput` subclass from the given `pydantic.BaseModel`\n    subclass.\n\n    Args:\n        - model_cls (pydantic.BaseModel subclass): the class from which\n            to create the new `RunInput` subclass\n    \"\"\"\n    return type(f\"{model_cls.__name__}RunInput\", (RunInput, model_cls), {})  # type: ignore\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput.with_initial_data","title":"with_initial_data classmethod","text":"

Create a new RunInput subclass with the given initial data as field defaults.
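
A minimal sketch of the typical use of such a subclass with pause_flow_run, assuming a recent Prefect release where pause_flow_run accepts a wait_for_input argument; the field name and default are illustrative:

from prefect import flow, pause_flow_run\nfrom prefect.input import RunInput\n\nclass UserInput(RunInput):\n    name: str\n\n@flow\ndef greeter():\n    # Pauses the run and waits for input; \"anonymous\" is pre-filled as the default value\n    user_input = pause_flow_run(\n        wait_for_input=UserInput.with_initial_data(\n            description=\"Please enter your name.\", name=\"anonymous\"\n        )\n    )\n    print(f\"Hello, {user_input.name}!\")\n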

Parameters:

Name Type Description Default - description (str

a description to show when resuming a flow run that requires input

required - kwargs (Any

the initial data to populate the subclass

required Source code in prefect/input/run_input.py
@classmethod\ndef with_initial_data(\n    cls: Type[R], description: Optional[str] = None, **kwargs: Any\n) -> Type[R]:\n    \"\"\"\n    Create a new `RunInput` subclass with the given initial data as field\n    defaults.\n\n    Args:\n        - description (str, optional): a description to show when resuming\n            a flow run that requires input\n        - kwargs (Any): the initial data to populate the subclass\n    \"\"\"\n    fields: Dict[str, Any] = {}\n    for key, value in kwargs.items():\n        fields[key] = (type(value), value)\n    model = pydantic.create_model(cls.__name__, **fields, __base__=cls)\n\n    if description is not None:\n        model._description = description\n\n    return model\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.keyset_from_base_key","title":"keyset_from_base_key","text":"

Get the keyset for the given base key.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `base_key` | `str` | the base key to get the keyset for | required |

Returns:

| Type | Description |
| --- | --- |
| `Keyset` | `Dict[str, str]`: the keyset |
Source code in prefect/input/run_input.py
def keyset_from_base_key(base_key: str) -> Keyset:\n    \"\"\"\n    Get the keyset for the given base key.\n\n    Args:\n        - base_key (str): the base key to get the keyset for\n\n    Returns:\n        - Dict[str, str]: the keyset\n    \"\"\"\n    return {\n        \"description\": f\"{base_key}-description\",\n        \"response\": f\"{base_key}-response\",\n        \"schema\": f\"{base_key}-schema\",\n    }\n
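Based on the source above, the returned keyset is a plain dictionary derived from the base key; the base key `"approval"` is just an example value:

```python
from prefect.input.run_input import keyset_from_base_key

keyset = keyset_from_base_key("approval")
# {
#     "description": "approval-description",
#     "response": "approval-response",
#     "schema": "approval-schema",
# }
```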
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.keyset_from_paused_state","title":"keyset_from_paused_state","text":"

Get the keyset for the given Paused state.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `state` | `State` | the state to get the keyset for | required |

Source code in prefect/input/run_input.py
def keyset_from_paused_state(state: \"State\") -> Keyset:\n    \"\"\"\n    Get the keyset for the given Paused state.\n\n    Args:\n        - state (State): the state to get the keyset for\n    \"\"\"\n\n    if not state.is_paused():\n        raise RuntimeError(f\"{state.type.value!r} is unsupported.\")\n\n    state_name = state.name or \"\"\n    base_key = f\"{state_name.lower()}-{str(state.state_details.pause_key)}\"\n    return keyset_from_base_key(base_key)\n
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.run_input_subclass_from_type","title":"run_input_subclass_from_type","text":"

Create a new RunInput subclass from the given type.

Source code in prefect/input/run_input.py
def run_input_subclass_from_type(\n    _type: Union[Type[R], Type[T], pydantic.BaseModel],\n) -> Union[Type[AutomaticRunInput[T]], Type[R]]:\n    \"\"\"\n    Create a new `RunInput` subclass from the given type.\n    \"\"\"\n    if isclass(_type):\n        if issubclass(_type, RunInput):\n            return cast(Type[R], _type)\n        elif issubclass(_type, pydantic.BaseModel):\n            return cast(Type[R], RunInput.subclass_from_base_model_type(_type))\n\n    # Could be something like a typing._GenericAlias or any other type that\n    # isn't a `RunInput` subclass or `pydantic.BaseModel` subclass. Try passing\n    # it to AutomaticRunInput to see if we can create a model from it.\n    return cast(\n        Type[AutomaticRunInput[T]],\n        AutomaticRunInput.subclass_from_type(cast(Type[T], _type)),\n    )\n
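A short sketch of the two branches shown in the source: an existing `RunInput` subclass is returned unchanged, while other types are wrapped; `MyInput` is an assumed example class:

```python
from prefect.input.run_input import RunInput, run_input_subclass_from_type

class MyInput(RunInput):
    value: str

# An existing RunInput subclass passes through unchanged...
assert run_input_subclass_from_type(MyInput) is MyInput

# ...while a plain type is wrapped in an automatically generated input model.
IntInput = run_input_subclass_from_type(int)
```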
","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/logging/configuration/","title":"configuration","text":"

\"\"\"

","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/configuration/#prefect.logging.configuration","title":"prefect.logging.configuration","text":"","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/configuration/#prefect.logging.configuration.load_logging_config","title":"load_logging_config","text":"

Loads the logging configuration from a path, allowing overrides from the environment.

Source code in prefect/logging/configuration.py
def load_logging_config(path: Path) -> dict:\n    \"\"\"\n    Loads logging configuration from a path allowing override from the environment\n    \"\"\"\n    template = string.Template(path.read_text())\n    with warnings.catch_warnings():\n        warnings.filterwarnings(\"ignore\", category=DeprecationWarning)\n        config = yaml.safe_load(\n            # Substitute settings into the template in format $SETTING / ${SETTING}\n            template.substitute(\n                {\n                    setting.name: str(setting.value())\n                    for setting in SETTING_VARIABLES.values()\n                    if setting.value() is not None\n                }\n            )\n        )\n\n    # Load overrides from the environment\n    flat_config = dict_to_flatdict(config)\n\n    for key_tup, val in flat_config.items():\n        env_val = os.environ.get(\n            # Generate a valid environment variable with nesting indicated with '_'\n            to_envvar(\"PREFECT_LOGGING_\" + \"_\".join(key_tup)).upper()\n        )\n        if env_val:\n            val = env_val\n\n        # reassign the updated value\n        flat_config[key_tup] = val\n\n    return flatdict_to_dict(flat_config)\n
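A minimal sketch of loading a configuration file; the path `logging.yml` is an assumption for the example and would normally point at a real logging configuration file:

```python
from pathlib import Path
from prefect.logging.configuration import load_logging_config

# Settings placeholders such as ${PREFECT_LOGGING_LEVEL} are substituted from
# Prefect settings, then PREFECT_LOGGING_* environment variables override values.
config = load_logging_config(Path("logging.yml"))
```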
","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/configuration/#prefect.logging.configuration.setup_logging","title":"setup_logging","text":"

Sets up logging.

Returns the config used.

Source code in prefect/logging/configuration.py
def setup_logging(incremental: Optional[bool] = None) -> dict:\n    \"\"\"\n    Sets up logging.\n\n    Returns the config used.\n    \"\"\"\n    global PROCESS_LOGGING_CONFIG\n\n    # If the user has specified a logging path and it exists we will ignore the\n    # default entirely rather than dealing with complex merging\n    config = load_logging_config(\n        (\n            PREFECT_LOGGING_SETTINGS_PATH.value()\n            if PREFECT_LOGGING_SETTINGS_PATH.value().exists()\n            else DEFAULT_LOGGING_SETTINGS_PATH\n        )\n    )\n\n    incremental = (\n        incremental if incremental is not None else bool(PROCESS_LOGGING_CONFIG)\n    )\n\n    # Perform an incremental update if setup has already been run\n    config.setdefault(\"incremental\", incremental)\n\n    try:\n        logging.config.dictConfig(config)\n    except ValueError:\n        if incremental:\n            setup_logging(incremental=False)\n\n    # Copy configuration of the 'prefect.extra' logger to the extra loggers\n    extra_config = logging.getLogger(\"prefect.extra\")\n\n    for logger_name in PREFECT_LOGGING_EXTRA_LOGGERS.value():\n        logger = logging.getLogger(logger_name)\n        for handler in extra_config.handlers:\n            if not config[\"incremental\"]:\n                logger.addHandler(handler)\n            if logger.level == logging.NOTSET:\n                logger.setLevel(extra_config.level)\n            logger.propagate = extra_config.propagate\n\n    PROCESS_LOGGING_CONFIG = config\n\n    return config\n
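Calling this directly is rarely necessary since Prefect configures logging on startup, but a minimal sketch looks like this:

```python
from prefect.logging.configuration import setup_logging

# Applies Prefect's logging configuration and returns the dict config that was used.
config = setup_logging()
```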
","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/formatters/","title":"formatters","text":"

\"\"\"

","tags":["Python API","logging","formatters"]},{"location":"api-ref/prefect/logging/formatters/#prefect.logging.formatters","title":"prefect.logging.formatters","text":"","tags":["Python API","logging","formatters"]},{"location":"api-ref/prefect/logging/formatters/#prefect.logging.formatters.JsonFormatter","title":"JsonFormatter","text":"

Bases: Formatter

Formats log records as a JSON string.

The format may be specified as \"pretty\" to format the JSON with indents and newlines.

Source code in prefect/logging/formatters.py
class JsonFormatter(logging.Formatter):\n    \"\"\"\n    Formats log records as a JSON string.\n\n    The format may be specified as \"pretty\" to format the JSON with indents and\n    newlines.\n    \"\"\"\n\n    def __init__(self, fmt, dmft, style) -> None:  # noqa\n        super().__init__()\n\n        if fmt not in [\"pretty\", \"default\"]:\n            raise ValueError(\"Format must be either 'pretty' or 'default'.\")\n\n        self.serializer = JSONSerializer(\n            jsonlib=\"orjson\",\n            object_encoder=\"pydantic.json.pydantic_encoder\",\n            dumps_kwargs={\"option\": orjson.OPT_INDENT_2} if fmt == \"pretty\" else {},\n        )\n\n    def format(self, record: logging.LogRecord) -> str:\n        record_dict = record.__dict__.copy()\n\n        # GCP severity detection compatibility\n        record_dict.setdefault(\"severity\", record.levelname)\n\n        # replace any exception tuples returned by `sys.exc_info()`\n        # with a JSON-serializable `dict`.\n        if record.exc_info:\n            record_dict[\"exc_info\"] = format_exception_info(record.exc_info)\n\n        log_json_bytes = self.serializer.dumps(record_dict)\n\n        # JSONSerializer returns bytes; decode to string to conform to\n        # the `logging.Formatter.format` interface\n        return log_json_bytes.decode()\n
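A minimal sketch of attaching the formatter to a standard handler, using the three positional arguments shown in the source (only the first, the format name, is actually used):

```python
import logging
from prefect.logging.formatters import JsonFormatter

handler = logging.StreamHandler()
# "pretty" indents the JSON output; "default" emits compact records.
handler.setFormatter(JsonFormatter("pretty", None, "%"))

logger = logging.getLogger("example")
logger.addHandler(handler)
```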
","tags":["Python API","logging","formatters"]},{"location":"api-ref/prefect/logging/formatters/#prefect.logging.formatters.PrefectFormatter","title":"PrefectFormatter","text":"

Bases: Formatter

Source code in prefect/logging/formatters.py
class PrefectFormatter(logging.Formatter):\n    def __init__(\n        self,\n        format=None,\n        datefmt=None,\n        style=\"%\",\n        validate=True,\n        *,\n        defaults=None,\n        task_run_fmt: str = None,\n        flow_run_fmt: str = None,\n    ) -> None:\n        \"\"\"\n        Implementation of the standard Python formatter with support for multiple\n        message formats.\n\n        \"\"\"\n        # See https://github.com/python/cpython/blob/c8c6113398ee9a7867fe9b08bc539cceb61e2aaa/Lib/logging/__init__.py#L546\n        # for implementation details\n\n        init_kwargs = {}\n        style_kwargs = {}\n\n        # defaults added in 3.10\n        if sys.version_info >= (3, 10):\n            init_kwargs[\"defaults\"] = defaults\n            style_kwargs[\"defaults\"] = defaults\n\n        # validate added in 3.8\n        if sys.version_info >= (3, 8):\n            init_kwargs[\"validate\"] = validate\n        else:\n            validate = False\n\n        super().__init__(format, datefmt, style, **init_kwargs)\n\n        self.flow_run_fmt = flow_run_fmt\n        self.task_run_fmt = task_run_fmt\n\n        # Retrieve the style class from the base class to avoid importing private\n        # `_STYLES` mapping\n        style_class = type(self._style)\n\n        self._flow_run_style = (\n            style_class(flow_run_fmt, **style_kwargs) if flow_run_fmt else self._style\n        )\n        self._task_run_style = (\n            style_class(task_run_fmt, **style_kwargs) if task_run_fmt else self._style\n        )\n        if validate:\n            self._flow_run_style.validate()\n            self._task_run_style.validate()\n\n    def formatMessage(self, record: logging.LogRecord):\n        if record.name == \"prefect.flow_runs\":\n            style = self._flow_run_style\n        elif record.name == \"prefect.task_runs\":\n            style = self._task_run_style\n        else:\n            style = self._style\n\n        return style.format(record)\n
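A minimal sketch of supplying separate formats for flow run and task run records; the `flow_run_name` and `task_run_name` fields are the extras attached by the run loggers documented below, and the exact format strings are illustrative:

```python
from prefect.logging.formatters import PrefectFormatter

# One default message format, with overrides for flow run and task run loggers.
formatter = PrefectFormatter(
    format="%(asctime)s | %(levelname)s | %(message)s",
    flow_run_fmt="%(asctime)s | flow %(flow_run_name)s | %(message)s",
    task_run_fmt="%(asctime)s | task %(task_run_name)s | %(message)s",
)
```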
","tags":["Python API","logging","formatters"]},{"location":"api-ref/prefect/logging/handlers/","title":"handlers","text":"

\"\"\"

","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers","title":"prefect.logging.handlers","text":"","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.APILogHandler","title":"APILogHandler","text":"

Bases: Handler

A logging handler that sends logs to the Prefect API.

Sends log records to the APILogWorker which manages sending batches of logs in the background.

Source code in prefect/logging/handlers.py
class APILogHandler(logging.Handler):\n    \"\"\"\n    A logging handler that sends logs to the Prefect API.\n\n    Sends log records to the `APILogWorker` which manages sending batches of logs in\n    the background.\n    \"\"\"\n\n    @classmethod\n    def flush(cls):\n        \"\"\"\n        Tell the `APILogWorker` to send any currently enqueued logs and block until\n        completion.\n\n        Use `aflush` from async contexts instead.\n        \"\"\"\n        loop = get_running_loop()\n        if loop:\n            if in_global_loop():  # Guard against internal misuse\n                raise RuntimeError(\n                    \"Cannot call `APILogWorker.flush` from the global event loop; it\"\n                    \" would block the event loop and cause a deadlock. Use\"\n                    \" `APILogWorker.aflush` instead.\"\n                )\n\n            # Not ideal, but this method is called by the stdlib and cannot return a\n            # coroutine so we just schedule the drain in a new thread and continue\n            from_sync.call_soon_in_new_thread(create_call(APILogWorker.drain_all))\n            return None\n        else:\n            # We set a timeout of 5s because we don't want to block forever if the worker\n            # is stuck. This can occur when the handler is being shutdown and the\n            # `logging._lock` is held but the worker is attempting to emit logs resulting\n            # in a deadlock.\n            return APILogWorker.drain_all(timeout=5)\n\n    @classmethod\n    def aflush(cls):\n        \"\"\"\n        Tell the `APILogWorker` to send any currently enqueued logs and block until\n        completion.\n\n        If called in a synchronous context, will only block up to 5s before returning.\n        \"\"\"\n\n        if not get_running_loop():\n            raise RuntimeError(\n                \"`aflush` cannot be used from a synchronous context; use `flush`\"\n                \" instead.\"\n            )\n\n        return APILogWorker.drain_all()\n\n    def emit(self, record: logging.LogRecord):\n        \"\"\"\n        Send a log to the `APILogWorker`\n        \"\"\"\n        try:\n            profile = prefect.context.get_settings_context()\n\n            if not PREFECT_LOGGING_TO_API_ENABLED.value_from(profile.settings):\n                return  # Respect the global settings toggle\n            if not getattr(record, \"send_to_api\", True):\n                return  # Do not send records that have opted out\n            if not getattr(record, \"send_to_orion\", True):\n                return  # Backwards compatibility\n\n            log = self.prepare(record)\n            APILogWorker.instance().send(log)\n\n        except Exception:\n            self.handleError(record)\n\n    def handleError(self, record: logging.LogRecord) -> None:\n        _, exc, _ = sys.exc_info()\n\n        if isinstance(exc, MissingContextError):\n            log_handling_when_missing_flow = (\n                PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW.value()\n            )\n            if log_handling_when_missing_flow == \"warn\":\n                # Warn when a logger is used outside of a run context, the stack level here\n                # gets us to the user logging call\n                warnings.warn(str(exc), stacklevel=8)\n                return\n            elif log_handling_when_missing_flow == \"ignore\":\n                return\n            else:\n                raise exc\n\n        # Display a longer traceback for other errors\n        return 
super().handleError(record)\n\n    def prepare(self, record: logging.LogRecord) -> Dict[str, Any]:\n        \"\"\"\n        Convert a `logging.LogRecord` to the API `LogCreate` schema and serialize.\n\n        This infers the linked flow or task run from the log record or the current\n        run context.\n\n        If a flow run id cannot be found, the log will be dropped.\n\n        Logs exceeding the maximum size will be dropped.\n        \"\"\"\n        flow_run_id = getattr(record, \"flow_run_id\", None)\n        task_run_id = getattr(record, \"task_run_id\", None)\n\n        if not flow_run_id:\n            try:\n                context = prefect.context.get_run_context()\n            except MissingContextError:\n                raise MissingContextError(\n                    f\"Logger {record.name!r} attempted to send logs to the API without\"\n                    \" a flow run id. The API log handler can only send logs within\"\n                    \" flow run contexts unless the flow run id is manually provided.\"\n                ) from None\n\n            if hasattr(context, \"flow_run\"):\n                flow_run_id = context.flow_run.id\n            elif hasattr(context, \"task_run\"):\n                flow_run_id = context.task_run.flow_run_id\n                task_run_id = task_run_id or context.task_run.id\n            else:\n                raise ValueError(\n                    \"Encountered malformed run context. Does not contain flow or task \"\n                    \"run information.\"\n                )\n\n        # Parsing to a `LogCreate` object here gives us nice parsing error messages\n        # from the standard lib `handleError` method if something goes wrong and\n        # prevents malformed logs from entering the queue\n        try:\n            is_uuid_like = isinstance(flow_run_id, uuid.UUID) or (\n                isinstance(flow_run_id, str) and uuid.UUID(flow_run_id)\n            )\n        except ValueError:\n            is_uuid_like = False\n\n        log = LogCreate(\n            flow_run_id=flow_run_id if is_uuid_like else None,\n            task_run_id=task_run_id,\n            name=record.name,\n            level=record.levelno,\n            timestamp=pendulum.from_timestamp(\n                getattr(record, \"created\", None) or time.time()\n            ),\n            message=self.format(record),\n        ).dict(json_compatible=True)\n\n        log_size = log[\"__payload_size__\"] = self._get_payload_size(log)\n        if log_size > PREFECT_LOGGING_TO_API_MAX_LOG_SIZE.value():\n            raise ValueError(\n                f\"Log of size {log_size} is greater than the max size of \"\n                f\"{PREFECT_LOGGING_TO_API_MAX_LOG_SIZE.value()}\"\n            )\n\n        return log\n\n    def _get_payload_size(self, log: Dict[str, Any]) -> int:\n        return len(json.dumps(log).encode())\n
","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.APILogHandler.aflush","title":"aflush classmethod","text":"

Tell the APILogWorker to send any currently enqueued logs and block until completion.

If called in a synchronous context, will only block up to 5s before returning.

Source code in prefect/logging/handlers.py
@classmethod\ndef aflush(cls):\n    \"\"\"\n    Tell the `APILogWorker` to send any currently enqueued logs and block until\n    completion.\n\n    If called in a synchronous context, will only block up to 5s before returning.\n    \"\"\"\n\n    if not get_running_loop():\n        raise RuntimeError(\n            \"`aflush` cannot be used from a synchronous context; use `flush`\"\n            \" instead.\"\n        )\n\n    return APILogWorker.drain_all()\n
","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.APILogHandler.emit","title":"emit","text":"

Send a log to the APILogWorker

Source code in prefect/logging/handlers.py
def emit(self, record: logging.LogRecord):\n    \"\"\"\n    Send a log to the `APILogWorker`\n    \"\"\"\n    try:\n        profile = prefect.context.get_settings_context()\n\n        if not PREFECT_LOGGING_TO_API_ENABLED.value_from(profile.settings):\n            return  # Respect the global settings toggle\n        if not getattr(record, \"send_to_api\", True):\n            return  # Do not send records that have opted out\n        if not getattr(record, \"send_to_orion\", True):\n            return  # Backwards compatibility\n\n        log = self.prepare(record)\n        APILogWorker.instance().send(log)\n\n    except Exception:\n        self.handleError(record)\n
","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.APILogHandler.flush","title":"flush classmethod","text":"

Tell the APILogWorker to send any currently enqueued logs and block until completion.

Use aflush from async contexts instead.

Source code in prefect/logging/handlers.py
@classmethod\ndef flush(cls):\n    \"\"\"\n    Tell the `APILogWorker` to send any currently enqueued logs and block until\n    completion.\n\n    Use `aflush` from async contexts instead.\n    \"\"\"\n    loop = get_running_loop()\n    if loop:\n        if in_global_loop():  # Guard against internal misuse\n            raise RuntimeError(\n                \"Cannot call `APILogWorker.flush` from the global event loop; it\"\n                \" would block the event loop and cause a deadlock. Use\"\n                \" `APILogWorker.aflush` instead.\"\n            )\n\n        # Not ideal, but this method is called by the stdlib and cannot return a\n        # coroutine so we just schedule the drain in a new thread and continue\n        from_sync.call_soon_in_new_thread(create_call(APILogWorker.drain_all))\n        return None\n    else:\n        # We set a timeout of 5s because we don't want to block forever if the worker\n        # is stuck. This can occur when the handler is being shutdown and the\n        # `logging._lock` is held but the worker is attempting to emit logs resulting\n        # in a deadlock.\n        return APILogWorker.drain_all(timeout=5)\n
","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.APILogHandler.prepare","title":"prepare","text":"

Convert a logging.LogRecord to the API LogCreate schema and serialize.

This infers the linked flow or task run from the log record or the current run context.

If a flow run id cannot be found, the log will be dropped.

Logs exceeding the maximum size will be dropped.

Source code in prefect/logging/handlers.py
def prepare(self, record: logging.LogRecord) -> Dict[str, Any]:\n    \"\"\"\n    Convert a `logging.LogRecord` to the API `LogCreate` schema and serialize.\n\n    This infers the linked flow or task run from the log record or the current\n    run context.\n\n    If a flow run id cannot be found, the log will be dropped.\n\n    Logs exceeding the maximum size will be dropped.\n    \"\"\"\n    flow_run_id = getattr(record, \"flow_run_id\", None)\n    task_run_id = getattr(record, \"task_run_id\", None)\n\n    if not flow_run_id:\n        try:\n            context = prefect.context.get_run_context()\n        except MissingContextError:\n            raise MissingContextError(\n                f\"Logger {record.name!r} attempted to send logs to the API without\"\n                \" a flow run id. The API log handler can only send logs within\"\n                \" flow run contexts unless the flow run id is manually provided.\"\n            ) from None\n\n        if hasattr(context, \"flow_run\"):\n            flow_run_id = context.flow_run.id\n        elif hasattr(context, \"task_run\"):\n            flow_run_id = context.task_run.flow_run_id\n            task_run_id = task_run_id or context.task_run.id\n        else:\n            raise ValueError(\n                \"Encountered malformed run context. Does not contain flow or task \"\n                \"run information.\"\n            )\n\n    # Parsing to a `LogCreate` object here gives us nice parsing error messages\n    # from the standard lib `handleError` method if something goes wrong and\n    # prevents malformed logs from entering the queue\n    try:\n        is_uuid_like = isinstance(flow_run_id, uuid.UUID) or (\n            isinstance(flow_run_id, str) and uuid.UUID(flow_run_id)\n        )\n    except ValueError:\n        is_uuid_like = False\n\n    log = LogCreate(\n        flow_run_id=flow_run_id if is_uuid_like else None,\n        task_run_id=task_run_id,\n        name=record.name,\n        level=record.levelno,\n        timestamp=pendulum.from_timestamp(\n            getattr(record, \"created\", None) or time.time()\n        ),\n        message=self.format(record),\n    ).dict(json_compatible=True)\n\n    log_size = log[\"__payload_size__\"] = self._get_payload_size(log)\n    if log_size > PREFECT_LOGGING_TO_API_MAX_LOG_SIZE.value():\n        raise ValueError(\n            f\"Log of size {log_size} is greater than the max size of \"\n            f\"{PREFECT_LOGGING_TO_API_MAX_LOG_SIZE.value()}\"\n        )\n\n    return log\n
","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.PrefectConsoleHandler","title":"PrefectConsoleHandler","text":"

Bases: StreamHandler

Source code in prefect/logging/handlers.py
class PrefectConsoleHandler(logging.StreamHandler):\n    def __init__(\n        self,\n        stream=None,\n        highlighter: Highlighter = PrefectConsoleHighlighter,\n        styles: Dict[str, str] = None,\n        level: Union[int, str] = logging.NOTSET,\n    ):\n        \"\"\"\n        The default console handler for Prefect, which highlights log levels,\n        web and file URLs, flow and task (run) names, and state types in the\n        local console (terminal).\n\n        Highlighting can be toggled on/off with the PREFECT_LOGGING_COLORS setting.\n        For finer control, use logging.yml to add or remove styles, and/or\n        adjust colors.\n        \"\"\"\n        super().__init__(stream=stream)\n\n        styled_console = PREFECT_LOGGING_COLORS.value()\n        markup_console = PREFECT_LOGGING_MARKUP.value()\n        if styled_console:\n            highlighter = highlighter()\n            theme = Theme(styles, inherit=False)\n        else:\n            highlighter = NullHighlighter()\n            theme = Theme(inherit=False)\n\n        self.level = level\n        self.console = Console(\n            highlighter=highlighter,\n            theme=theme,\n            file=self.stream,\n            markup=markup_console,\n        )\n\n    def emit(self, record: logging.LogRecord):\n        try:\n            message = self.format(record)\n            self.console.print(message, soft_wrap=True)\n        except RecursionError:\n            # This was copied over from logging.StreamHandler().emit()\n            # https://bugs.python.org/issue36272\n            raise\n        except Exception:\n            self.handleError(record)\n
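A minimal sketch of using the console handler on an ordinary logger; the logger name is an arbitrary example:

```python
import logging
from prefect.logging.handlers import PrefectConsoleHandler

logger = logging.getLogger("example")
logger.setLevel(logging.INFO)
# Rich-based console output; highlighting is controlled by PREFECT_LOGGING_COLORS.
logger.addHandler(PrefectConsoleHandler())
logger.info("Hello from the console handler")
```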
","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/highlighters/","title":"highlighters","text":"

\"\"\"

","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers","title":"prefect.logging.loggers","text":"","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.LogEavesdropper","title":"LogEavesdropper","text":"

Bases: Handler

A context manager that collects logs for the duration of the context

Example:\n\n    ```python\n    import logging\n    from prefect.logging import LogEavesdropper\n\n    with LogEavesdropper(\"my_logger\") as eavesdropper:\n        logging.getLogger(\"my_logger\").info(\"Hello, world!\")\n        logging.getLogger(\"my_logger.child_module\").info(\"Another one!\")\n\n    print(eavesdropper.text())\n\n    # Outputs: \"Hello, world!\\nAnother one!\"\n    ```

Source code in prefect/logging/loggers.py
class LogEavesdropper(logging.Handler):\n    \"\"\"A context manager that collects logs for the duration of the context\n\n    Example:\n\n        ```python\n        import logging\n        from prefect.logging import LogEavesdropper\n\n        with LogEavesdropper(\"my_logger\") as eavesdropper:\n            logging.getLogger(\"my_logger\").info(\"Hello, world!\")\n            logging.getLogger(\"my_logger.child_module\").info(\"Another one!\")\n\n        print(eavesdropper.text())\n\n        # Outputs: \"Hello, world!\\nAnother one!\"\n    \"\"\"\n\n    _target_logger: logging.Logger\n    _lines: List[str]\n\n    def __init__(self, eavesdrop_on: str, level: int = logging.NOTSET):\n        \"\"\"\n        Args:\n            eavesdrop_on (str): the name of the logger to eavesdrop on\n            level (int): the minimum log level to eavesdrop on; if omitted, all levels\n                are captured\n        \"\"\"\n\n        super().__init__(level=level)\n        self.eavesdrop_on = eavesdrop_on\n        self._target_logger = None\n\n        # It's important that we use a very minimalistic formatter for use cases where\n        # we may present these logs back to the user.  We shouldn't leak filenames,\n        # versions, or other environmental information.\n        self.formatter = logging.Formatter(\"[%(levelname)s]: %(message)s\")\n\n    def __enter__(self) -> Self:\n        self._target_logger = logging.getLogger(self.eavesdrop_on)\n        self._original_level = self._target_logger.level\n        self._target_logger.level = self.level\n        self._target_logger.addHandler(self)\n        self._lines = []\n        return self\n\n    def __exit__(self, *_):\n        if self._target_logger:\n            self._target_logger.removeHandler(self)\n            self._target_logger.level = self._original_level\n\n    def emit(self, record: LogRecord) -> None:\n        \"\"\"The logging.Handler implementation, not intended to be called directly.\"\"\"\n        self._lines.append(self.format(record))\n\n    def text(self) -> str:\n        \"\"\"Return the collected logs as a single newline-delimited string\"\"\"\n        return \"\\n\".join(self._lines)\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.LogEavesdropper.emit","title":"emit","text":"

The logging.Handler implementation, not intended to be called directly.

Source code in prefect/logging/loggers.py
def emit(self, record: LogRecord) -> None:\n    \"\"\"The logging.Handler implementation, not intended to be called directly.\"\"\"\n    self._lines.append(self.format(record))\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.LogEavesdropper.text","title":"text","text":"

Return the collected logs as a single newline-delimited string

Source code in prefect/logging/loggers.py
def text(self) -> str:\n    \"\"\"Return the collected logs as a single newline-delimited string\"\"\"\n    return \"\\n\".join(self._lines)\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.PrefectLogAdapter","title":"PrefectLogAdapter","text":"

Bases: LoggerAdapter

Adapter that ensures extra kwargs are passed through correctly; without this the extra fields set on the adapter would overshadow any provided on a log-by-log basis.

See https://bugs.python.org/issue32732 \u2014 the Python team has declared that this is not a bug in the LoggingAdapter and subclassing is the intended workaround.

Source code in prefect/logging/loggers.py
class PrefectLogAdapter(logging.LoggerAdapter):\n    \"\"\"\n    Adapter that ensures extra kwargs are passed through correctly; without this\n    the `extra` fields set on the adapter would overshadow any provided on a\n    log-by-log basis.\n\n    See https://bugs.python.org/issue32732 \u2014 the Python team has declared that this is\n    not a bug in the LoggingAdapter and subclassing is the intended workaround.\n    \"\"\"\n\n    def process(self, msg, kwargs):\n        kwargs[\"extra\"] = {**(self.extra or {}), **(kwargs.get(\"extra\") or {})}\n\n        from prefect._internal.compatibility.deprecated import (\n            PrefectDeprecationWarning,\n            generate_deprecation_message,\n        )\n\n        if \"send_to_orion\" in kwargs[\"extra\"]:\n            warnings.warn(\n                generate_deprecation_message(\n                    'The \"send_to_orion\" option',\n                    start_date=\"May 2023\",\n                    help='Use \"send_to_api\" instead.',\n                ),\n                PrefectDeprecationWarning,\n                stacklevel=4,\n            )\n\n        return (msg, kwargs)\n\n    def getChild(\n        self, suffix: str, extra: Optional[Dict[str, str]] = None\n    ) -> \"PrefectLogAdapter\":\n        if extra is None:\n            extra = {}\n\n        return PrefectLogAdapter(\n            self.logger.getChild(suffix),\n            extra={\n                **self.extra,\n                **extra,\n            },\n        )\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.disable_logger","title":"disable_logger","text":"

Gets a logger by name and disables it within the context manager. Upon exiting the context manager, the logger is returned to its original state.

Source code in prefect/logging/loggers.py
@contextmanager\ndef disable_logger(name: str):\n    \"\"\"\n    Get a logger by name and disables it within the context manager.\n    Upon exiting the context manager, the logger is returned to its\n    original state.\n    \"\"\"\n    logger = logging.getLogger(name=name)\n\n    # determine if it's already disabled\n    base_state = logger.disabled\n    try:\n        # disable the logger\n        logger.disabled = True\n        yield\n    finally:\n        # return to base state\n        logger.disabled = base_state\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.disable_run_logger","title":"disable_run_logger","text":"

Gets both prefect.flow_run and prefect.task_run and disables them within the context manager. Upon exiting the context manager, both loggers are returned to their original state.

Source code in prefect/logging/loggers.py
@contextmanager\ndef disable_run_logger():\n    \"\"\"\n    Gets both `prefect.flow_run` and `prefect.task_run` and disables them\n    within the context manager. Upon exiting the context manager, both loggers\n    are returned to its original state.\n    \"\"\"\n    with disable_logger(\"prefect.flow_run\"), disable_logger(\"prefect.task_run\"):\n        yield\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.flow_run_logger","title":"flow_run_logger","text":"

Create a flow run logger with the run's metadata attached.

Additional keyword arguments can be provided to attach custom data to the log records.

If the flow run context is available, see get_run_logger instead.

Source code in prefect/logging/loggers.py
def flow_run_logger(\n    flow_run: Union[\"FlowRun\", \"ClientFlowRun\"],\n    flow: Optional[\"Flow\"] = None,\n    **kwargs: str,\n):\n    \"\"\"\n    Create a flow run logger with the run's metadata attached.\n\n    Additional keyword arguments can be provided to attach custom data to the log\n    records.\n\n    If the flow run context is available, see `get_run_logger` instead.\n    \"\"\"\n    return PrefectLogAdapter(\n        get_logger(\"prefect.flow_runs\"),\n        extra={\n            **{\n                \"flow_run_name\": flow_run.name if flow_run else \"<unknown>\",\n                \"flow_run_id\": str(flow_run.id) if flow_run else \"<unknown>\",\n                \"flow_name\": flow.name if flow else \"<unknown>\",\n            },\n            **kwargs,\n        },\n    )\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.get_logger","title":"get_logger cached","text":"

Get a prefect logger. These loggers are intended for internal use within the prefect package.

See get_run_logger for retrieving loggers for use within task or flow runs. By default, only run-related loggers are connected to the APILogHandler.

Source code in prefect/logging/loggers.py
@lru_cache()\ndef get_logger(name: str = None) -> logging.Logger:\n    \"\"\"\n    Get a `prefect` logger. These loggers are intended for internal use within the\n    `prefect` package.\n\n    See `get_run_logger` for retrieving loggers for use within task or flow runs.\n    By default, only run-related loggers are connected to the `APILogHandler`.\n    \"\"\"\n    parent_logger = logging.getLogger(\"prefect\")\n\n    if name:\n        # Append the name if given but allow explicit full names e.g. \"prefect.test\"\n        # should not become \"prefect.prefect.test\"\n        if not name.startswith(parent_logger.name + \".\"):\n            logger = parent_logger.getChild(name)\n        else:\n            logger = logging.getLogger(name)\n    else:\n        logger = parent_logger\n\n    # Prevent the current API key from being logged in plain text\n    obfuscate_api_key_filter = ObfuscateApiKeyFilter()\n    logger.addFilter(obfuscate_api_key_filter)\n\n    return logger\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.get_run_logger","title":"get_run_logger","text":"

Get a Prefect logger for the current task run or flow run.

The logger will be named either prefect.task_runs or prefect.flow_runs. Contextual data about the run will be attached to the log records.

These loggers are connected to the APILogHandler by default to send log records to the API.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `context` | `RunContext` | A specific context may be provided as an override. By default, the context is inferred from global state and this should not be needed. | `None` |
| `**kwargs` | `str` | Additional keyword arguments will be attached to the log records in addition to the run metadata | `{}` |

Raises:

| Type | Description |
| --- | --- |
| `RuntimeError` | If no context can be found |

Source code in prefect/logging/loggers.py
def get_run_logger(\n    context: \"RunContext\" = None, **kwargs: str\n) -> Union[logging.Logger, logging.LoggerAdapter]:\n    \"\"\"\n    Get a Prefect logger for the current task run or flow run.\n\n    The logger will be named either `prefect.task_runs` or `prefect.flow_runs`.\n    Contextual data about the run will be attached to the log records.\n\n    These loggers are connected to the `APILogHandler` by default to send log records to\n    the API.\n\n    Arguments:\n        context: A specific context may be provided as an override. By default, the\n            context is inferred from global state and this should not be needed.\n        **kwargs: Additional keyword arguments will be attached to the log records in\n            addition to the run metadata\n\n    Raises:\n        RuntimeError: If no context can be found\n    \"\"\"\n    # Check for existing contexts\n    task_run_context = prefect.context.TaskRunContext.get()\n    flow_run_context = prefect.context.FlowRunContext.get()\n\n    # Apply the context override\n    if context:\n        if isinstance(context, prefect.context.FlowRunContext):\n            flow_run_context = context\n        elif isinstance(context, prefect.context.TaskRunContext):\n            task_run_context = context\n        else:\n            raise TypeError(\n                f\"Received unexpected type {type(context).__name__!r} for context. \"\n                \"Expected one of 'None', 'FlowRunContext', or 'TaskRunContext'.\"\n            )\n\n    # Determine if this is a task or flow run logger\n    if task_run_context:\n        logger = task_run_logger(\n            task_run=task_run_context.task_run,\n            task=task_run_context.task,\n            flow_run=flow_run_context.flow_run if flow_run_context else None,\n            flow=flow_run_context.flow if flow_run_context else None,\n            **kwargs,\n        )\n    elif flow_run_context:\n        logger = flow_run_logger(\n            flow_run=flow_run_context.flow_run, flow=flow_run_context.flow, **kwargs\n        )\n    elif (\n        get_logger(\"prefect.flow_run\").disabled\n        and get_logger(\"prefect.task_run\").disabled\n    ):\n        logger = logging.getLogger(\"null\")\n    else:\n        raise MissingContextError(\"There is no active flow or task run context.\")\n\n    return logger\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.patch_print","title":"patch_print","text":"

Patches the Python builtin print function to use print_as_log

Source code in prefect/logging/loggers.py
@contextmanager\ndef patch_print():\n    \"\"\"\n    Patches the Python builtin `print` method to use `print_as_log`\n    \"\"\"\n    import builtins\n\n    original = builtins.print\n\n    try:\n        builtins.print = print_as_log\n        yield\n    finally:\n        builtins.print = original\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.print_as_log","title":"print_as_log","text":"

A patch for print to send printed messages to the Prefect run logger.

If no run is active, print will behave as if it were not patched.

If print sends data to a file other than sys.stdout or sys.stderr, it will not be forwarded to the Prefect logger either.

Source code in prefect/logging/loggers.py
def print_as_log(*args, **kwargs):\n    \"\"\"\n    A patch for `print` to send printed messages to the Prefect run logger.\n\n    If no run is active, `print` will behave as if it were not patched.\n\n    If `print` sends data to a file other than `sys.stdout` or `sys.stderr`, it will\n    not be forwarded to the Prefect logger either.\n    \"\"\"\n    from prefect.context import FlowRunContext, TaskRunContext\n\n    context = TaskRunContext.get() or FlowRunContext.get()\n    if (\n        not context\n        or not context.log_prints\n        or kwargs.get(\"file\") not in {None, sys.stdout, sys.stderr}\n    ):\n        return print(*args, **kwargs)\n\n    logger = get_run_logger()\n\n    # Print to an in-memory buffer; so we do not need to implement `print`\n    buffer = io.StringIO()\n    kwargs[\"file\"] = buffer\n    print(*args, **kwargs)\n\n    # Remove trailing whitespace to prevent duplicates\n    logger.info(buffer.getvalue().rstrip())\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.task_run_logger","title":"task_run_logger","text":"

Create a task run logger with the run's metadata attached.

Additional keyword arguments can be provided to attach custom data to the log records.

If the task run context is available, see get_run_logger instead.

If only the flow run context is available, it will be used for default values of flow_run and flow.

Source code in prefect/logging/loggers.py
def task_run_logger(\n    task_run: \"TaskRun\",\n    task: \"Task\" = None,\n    flow_run: \"FlowRun\" = None,\n    flow: \"Flow\" = None,\n    **kwargs: str,\n):\n    \"\"\"\n    Create a task run logger with the run's metadata attached.\n\n    Additional keyword arguments can be provided to attach custom data to the log\n    records.\n\n    If the task run context is available, see `get_run_logger` instead.\n\n    If only the flow run context is available, it will be used for default values\n    of `flow_run` and `flow`.\n    \"\"\"\n    if not flow_run or not flow:\n        flow_run_context = prefect.context.FlowRunContext.get()\n        if flow_run_context:\n            flow_run = flow_run or flow_run_context.flow_run\n            flow = flow or flow_run_context.flow\n\n    return PrefectLogAdapter(\n        get_logger(\"prefect.task_runs\"),\n        extra={\n            **{\n                \"task_run_id\": str(task_run.id),\n                \"flow_run_id\": str(task_run.flow_run_id),\n                \"task_run_name\": task_run.name,\n                \"task_name\": task.name if task else \"<unknown>\",\n                \"flow_run_name\": flow_run.name if flow_run else \"<unknown>\",\n                \"flow_name\": flow.name if flow else \"<unknown>\",\n            },\n            **kwargs,\n        },\n    )\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/","title":"loggers","text":"

\"\"\"

","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers","title":"prefect.logging.loggers","text":"","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.LogEavesdropper","title":"LogEavesdropper","text":"

Bases: Handler

A context manager that collects logs for the duration of the context

Example:\n\n    ```python\n    import logging\n    from prefect.logging import LogEavesdropper\n\n    with LogEavesdropper(\"my_logger\") as eavesdropper:\n        logging.getLogger(\"my_logger\").info(\"Hello, world!\")\n        logging.getLogger(\"my_logger.child_module\").info(\"Another one!\")\n\n    print(eavesdropper.text())\n\n    # Outputs: \"Hello, world!\\nAnother one!\"\n    ```

Source code in prefect/logging/loggers.py
class LogEavesdropper(logging.Handler):\n    \"\"\"A context manager that collects logs for the duration of the context\n\n    Example:\n\n        ```python\n        import logging\n        from prefect.logging import LogEavesdropper\n\n        with LogEavesdropper(\"my_logger\") as eavesdropper:\n            logging.getLogger(\"my_logger\").info(\"Hello, world!\")\n            logging.getLogger(\"my_logger.child_module\").info(\"Another one!\")\n\n        print(eavesdropper.text())\n\n        # Outputs: \"Hello, world!\\nAnother one!\"\n    \"\"\"\n\n    _target_logger: logging.Logger\n    _lines: List[str]\n\n    def __init__(self, eavesdrop_on: str, level: int = logging.NOTSET):\n        \"\"\"\n        Args:\n            eavesdrop_on (str): the name of the logger to eavesdrop on\n            level (int): the minimum log level to eavesdrop on; if omitted, all levels\n                are captured\n        \"\"\"\n\n        super().__init__(level=level)\n        self.eavesdrop_on = eavesdrop_on\n        self._target_logger = None\n\n        # It's important that we use a very minimalistic formatter for use cases where\n        # we may present these logs back to the user.  We shouldn't leak filenames,\n        # versions, or other environmental information.\n        self.formatter = logging.Formatter(\"[%(levelname)s]: %(message)s\")\n\n    def __enter__(self) -> Self:\n        self._target_logger = logging.getLogger(self.eavesdrop_on)\n        self._original_level = self._target_logger.level\n        self._target_logger.level = self.level\n        self._target_logger.addHandler(self)\n        self._lines = []\n        return self\n\n    def __exit__(self, *_):\n        if self._target_logger:\n            self._target_logger.removeHandler(self)\n            self._target_logger.level = self._original_level\n\n    def emit(self, record: LogRecord) -> None:\n        \"\"\"The logging.Handler implementation, not intended to be called directly.\"\"\"\n        self._lines.append(self.format(record))\n\n    def text(self) -> str:\n        \"\"\"Return the collected logs as a single newline-delimited string\"\"\"\n        return \"\\n\".join(self._lines)\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.LogEavesdropper.emit","title":"emit","text":"

The logging.Handler implementation, not intended to be called directly.

Source code in prefect/logging/loggers.py
def emit(self, record: LogRecord) -> None:\n    \"\"\"The logging.Handler implementation, not intended to be called directly.\"\"\"\n    self._lines.append(self.format(record))\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.LogEavesdropper.text","title":"text","text":"

Return the collected logs as a single newline-delimited string

Source code in prefect/logging/loggers.py
def text(self) -> str:\n    \"\"\"Return the collected logs as a single newline-delimited string\"\"\"\n    return \"\\n\".join(self._lines)\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.PrefectLogAdapter","title":"PrefectLogAdapter","text":"

Bases: LoggerAdapter

Adapter that ensures extra kwargs are passed through correctly; without this the extra fields set on the adapter would overshadow any provided on a log-by-log basis.

See https://bugs.python.org/issue32732 \u2014 the Python team has declared that this is not a bug in the LoggingAdapter and subclassing is the intended workaround.

Source code in prefect/logging/loggers.py
class PrefectLogAdapter(logging.LoggerAdapter):\n    \"\"\"\n    Adapter that ensures extra kwargs are passed through correctly; without this\n    the `extra` fields set on the adapter would overshadow any provided on a\n    log-by-log basis.\n\n    See https://bugs.python.org/issue32732 \u2014 the Python team has declared that this is\n    not a bug in the LoggingAdapter and subclassing is the intended workaround.\n    \"\"\"\n\n    def process(self, msg, kwargs):\n        kwargs[\"extra\"] = {**(self.extra or {}), **(kwargs.get(\"extra\") or {})}\n\n        from prefect._internal.compatibility.deprecated import (\n            PrefectDeprecationWarning,\n            generate_deprecation_message,\n        )\n\n        if \"send_to_orion\" in kwargs[\"extra\"]:\n            warnings.warn(\n                generate_deprecation_message(\n                    'The \"send_to_orion\" option',\n                    start_date=\"May 2023\",\n                    help='Use \"send_to_api\" instead.',\n                ),\n                PrefectDeprecationWarning,\n                stacklevel=4,\n            )\n\n        return (msg, kwargs)\n\n    def getChild(\n        self, suffix: str, extra: Optional[Dict[str, str]] = None\n    ) -> \"PrefectLogAdapter\":\n        if extra is None:\n            extra = {}\n\n        return PrefectLogAdapter(\n            self.logger.getChild(suffix),\n            extra={\n                **self.extra,\n                **extra,\n            },\n        )\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.disable_logger","title":"disable_logger","text":"

Gets a logger by name and disables it within the context manager. Upon exiting the context manager, the logger is returned to its original state.

Source code in prefect/logging/loggers.py
@contextmanager\ndef disable_logger(name: str):\n    \"\"\"\n    Get a logger by name and disables it within the context manager.\n    Upon exiting the context manager, the logger is returned to its\n    original state.\n    \"\"\"\n    logger = logging.getLogger(name=name)\n\n    # determine if it's already disabled\n    base_state = logger.disabled\n    try:\n        # disable the logger\n        logger.disabled = True\n        yield\n    finally:\n        # return to base state\n        logger.disabled = base_state\n
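A minimal sketch; the logger name here is one of Prefect's run loggers, though any logger name works:

```python
from prefect.logging.loggers import disable_logger

with disable_logger("prefect.flow_runs"):
    ...  # records emitted to "prefect.flow_runs" are suppressed here
```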
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.disable_run_logger","title":"disable_run_logger","text":"

Gets both prefect.flow_run and prefect.task_run and disables them within the context manager. Upon exiting the context manager, both loggers are returned to their original state.

Source code in prefect/logging/loggers.py
@contextmanager\ndef disable_run_logger():\n    \"\"\"\n    Gets both `prefect.flow_run` and `prefect.task_run` and disables them\n    within the context manager. Upon exiting the context manager, both loggers\n    are returned to its original state.\n    \"\"\"\n    with disable_logger(\"prefect.flow_run\"), disable_logger(\"prefect.task_run\"):\n        yield\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.flow_run_logger","title":"flow_run_logger","text":"

Create a flow run logger with the run's metadata attached.

Additional keyword arguments can be provided to attach custom data to the log records.

If the flow run context is available, see get_run_logger instead.

Source code in prefect/logging/loggers.py
def flow_run_logger(\n    flow_run: Union[\"FlowRun\", \"ClientFlowRun\"],\n    flow: Optional[\"Flow\"] = None,\n    **kwargs: str,\n):\n    \"\"\"\n    Create a flow run logger with the run's metadata attached.\n\n    Additional keyword arguments can be provided to attach custom data to the log\n    records.\n\n    If the flow run context is available, see `get_run_logger` instead.\n    \"\"\"\n    return PrefectLogAdapter(\n        get_logger(\"prefect.flow_runs\"),\n        extra={\n            **{\n                \"flow_run_name\": flow_run.name if flow_run else \"<unknown>\",\n                \"flow_run_id\": str(flow_run.id) if flow_run else \"<unknown>\",\n                \"flow_name\": flow.name if flow else \"<unknown>\",\n            },\n            **kwargs,\n        },\n    )\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.get_logger","title":"get_logger cached","text":"

Get a prefect logger. These loggers are intended for internal use within the prefect package.

See get_run_logger for retrieving loggers for use within task or flow runs. By default, only run-related loggers are connected to the APILogHandler.

Source code in prefect/logging/loggers.py
@lru_cache()\ndef get_logger(name: str = None) -> logging.Logger:\n    \"\"\"\n    Get a `prefect` logger. These loggers are intended for internal use within the\n    `prefect` package.\n\n    See `get_run_logger` for retrieving loggers for use within task or flow runs.\n    By default, only run-related loggers are connected to the `APILogHandler`.\n    \"\"\"\n    parent_logger = logging.getLogger(\"prefect\")\n\n    if name:\n        # Append the name if given but allow explicit full names e.g. \"prefect.test\"\n        # should not become \"prefect.prefect.test\"\n        if not name.startswith(parent_logger.name + \".\"):\n            logger = parent_logger.getChild(name)\n        else:\n            logger = logging.getLogger(name)\n    else:\n        logger = parent_logger\n\n    # Prevent the current API key from being logged in plain text\n    obfuscate_api_key_filter = ObfuscateApiKeyFilter()\n    logger.addFilter(obfuscate_api_key_filter)\n\n    return logger\n
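A minimal sketch; the child name `"my_module"` is an arbitrary example:

```python
from prefect.logging.loggers import get_logger

logger = get_logger("my_module")  # resolves to the "prefect.my_module" logger
logger.debug("internal debug message")
```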
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.get_run_logger","title":"get_run_logger","text":"

Get a Prefect logger for the current task run or flow run.

The logger will be named either prefect.task_runs or prefect.flow_runs. Contextual data about the run will be attached to the log records.

These loggers are connected to the APILogHandler by default to send log records to the API.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `context` | `RunContext` | A specific context may be provided as an override. By default, the context is inferred from global state and this should not be needed. | `None` |
| `**kwargs` | `str` | Additional keyword arguments will be attached to the log records in addition to the run metadata | `{}` |

Raises:

| Type | Description |
| --- | --- |
| `RuntimeError` | If no context can be found |

Source code in prefect/logging/loggers.py
def get_run_logger(\n    context: \"RunContext\" = None, **kwargs: str\n) -> Union[logging.Logger, logging.LoggerAdapter]:\n    \"\"\"\n    Get a Prefect logger for the current task run or flow run.\n\n    The logger will be named either `prefect.task_runs` or `prefect.flow_runs`.\n    Contextual data about the run will be attached to the log records.\n\n    These loggers are connected to the `APILogHandler` by default to send log records to\n    the API.\n\n    Arguments:\n        context: A specific context may be provided as an override. By default, the\n            context is inferred from global state and this should not be needed.\n        **kwargs: Additional keyword arguments will be attached to the log records in\n            addition to the run metadata\n\n    Raises:\n        RuntimeError: If no context can be found\n    \"\"\"\n    # Check for existing contexts\n    task_run_context = prefect.context.TaskRunContext.get()\n    flow_run_context = prefect.context.FlowRunContext.get()\n\n    # Apply the context override\n    if context:\n        if isinstance(context, prefect.context.FlowRunContext):\n            flow_run_context = context\n        elif isinstance(context, prefect.context.TaskRunContext):\n            task_run_context = context\n        else:\n            raise TypeError(\n                f\"Received unexpected type {type(context).__name__!r} for context. \"\n                \"Expected one of 'None', 'FlowRunContext', or 'TaskRunContext'.\"\n            )\n\n    # Determine if this is a task or flow run logger\n    if task_run_context:\n        logger = task_run_logger(\n            task_run=task_run_context.task_run,\n            task=task_run_context.task,\n            flow_run=flow_run_context.flow_run if flow_run_context else None,\n            flow=flow_run_context.flow if flow_run_context else None,\n            **kwargs,\n        )\n    elif flow_run_context:\n        logger = flow_run_logger(\n            flow_run=flow_run_context.flow_run, flow=flow_run_context.flow, **kwargs\n        )\n    elif (\n        get_logger(\"prefect.flow_run\").disabled\n        and get_logger(\"prefect.task_run\").disabled\n    ):\n        logger = logging.getLogger(\"null\")\n    else:\n        raise MissingContextError(\"There is no active flow or task run context.\")\n\n    return logger\n
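A minimal sketch of the usual pattern, calling it inside a flow so the run context can be inferred; the flow itself is a made-up example:

```python
from prefect import flow
from prefect.logging.loggers import get_run_logger

@flow
def greet():
    logger = get_run_logger()
    logger.info("Hello from a flow run")
```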
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.patch_print","title":"patch_print","text":"

Patches the Python builtin print method to use print_as_log

Source code in prefect/logging/loggers.py
@contextmanager\ndef patch_print():\n    \"\"\"\n    Patches the Python builtin `print` method to use `print_as_log`\n    \"\"\"\n    import builtins\n\n    original = builtins.print\n\n    try:\n        builtins.print = print_as_log\n        yield\n    finally:\n        builtins.print = original\n
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.print_as_log","title":"print_as_log","text":"

A patch for print to send printed messages to the Prefect run logger.

If no run is active, print will behave as if it were not patched.

If print sends data to a file other than sys.stdout or sys.stderr, it will not be forwarded to the Prefect logger either.

Source code in prefect/logging/loggers.py
def print_as_log(*args, **kwargs):\n    \"\"\"\n    A patch for `print` to send printed messages to the Prefect run logger.\n\n    If no run is active, `print` will behave as if it were not patched.\n\n    If `print` sends data to a file other than `sys.stdout` or `sys.stderr`, it will\n    not be forwarded to the Prefect logger either.\n    \"\"\"\n    from prefect.context import FlowRunContext, TaskRunContext\n\n    context = TaskRunContext.get() or FlowRunContext.get()\n    if (\n        not context\n        or not context.log_prints\n        or kwargs.get(\"file\") not in {None, sys.stdout, sys.stderr}\n    ):\n        return print(*args, **kwargs)\n\n    logger = get_run_logger()\n\n    # Print to an in-memory buffer; so we do not need to implement `print`\n    buffer = io.StringIO()\n    kwargs[\"file\"] = buffer\n    print(*args, **kwargs)\n\n    # Remove trailing whitespace to prevent duplicates\n    logger.info(buffer.getvalue().rstrip())\n
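In user code, the usual way to opt into this behavior is the log_prints flag on a flow or task rather than calling print_as_log directly; a minimal sketch (assuming a local Prefect installation):

```python
# Hedged sketch: print() calls inside this flow are forwarded to the run logger.
from prefect import flow

@flow(log_prints=True)
def chatty_flow():
    print("this line is emitted as a log record instead of plain stdout")

if __name__ == "__main__":
    chatty_flow()
```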
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.task_run_logger","title":"task_run_logger","text":"

Create a task run logger with the run's metadata attached.

Additional keyword arguments can be provided to attach custom data to the log records.

If the task run context is available, see get_run_logger instead.

If only the flow run context is available, it will be used for default values of flow_run and flow.

Source code in prefect/logging/loggers.py
def task_run_logger(\n    task_run: \"TaskRun\",\n    task: \"Task\" = None,\n    flow_run: \"FlowRun\" = None,\n    flow: \"Flow\" = None,\n    **kwargs: str,\n):\n    \"\"\"\n    Create a task run logger with the run's metadata attached.\n\n    Additional keyword arguments can be provided to attach custom data to the log\n    records.\n\n    If the task run context is available, see `get_run_logger` instead.\n\n    If only the flow run context is available, it will be used for default values\n    of `flow_run` and `flow`.\n    \"\"\"\n    if not flow_run or not flow:\n        flow_run_context = prefect.context.FlowRunContext.get()\n        if flow_run_context:\n            flow_run = flow_run or flow_run_context.flow_run\n            flow = flow or flow_run_context.flow\n\n    return PrefectLogAdapter(\n        get_logger(\"prefect.task_runs\"),\n        extra={\n            **{\n                \"task_run_id\": str(task_run.id),\n                \"flow_run_id\": str(task_run.flow_run_id),\n                \"task_run_name\": task_run.name,\n                \"task_name\": task.name if task else \"<unknown>\",\n                \"flow_run_name\": flow_run.name if flow_run else \"<unknown>\",\n                \"flow_name\": flow.name if flow else \"<unknown>\",\n            },\n            **kwargs,\n        },\n    )\n
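As the description notes, get_run_logger is usually preferred when a task run context is available; a brief sketch of that pattern (names are illustrative):

```python
# Hedged sketch: inside a task, get_run_logger returns the task-run logger
# that task_run_logger constructs under the hood.
from prefect import flow, task, get_run_logger

@task
def add(x: int, y: int) -> int:
    get_run_logger().info("adding %s and %s", x, y)
    return x + y

@flow
def math_flow() -> int:
    return add(1, 2)

if __name__ == "__main__":
    math_flow()
```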
","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/runner/runner/","title":"runner","text":"","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner","title":"prefect.runner.runner","text":"

Runners are responsible for managing the execution of deployments created and managed by either flow.serve or the serve utility.

Example
import time\nfrom prefect import flow, serve\n\n\n@flow\ndef slow_flow(sleep: int = 60):\n    \"Sleepy flow - sleeps the provided amount of time (in seconds).\"\n    time.sleep(sleep)\n\n\n@flow\ndef fast_flow():\n    \"Fastest flow this side of the Mississippi.\"\n    return\n\n\nif __name__ == \"__main__\":\n    slow_deploy = slow_flow.to_deployment(name=\"sleeper\", interval=45)\n    fast_deploy = fast_flow.to_deployment(name=\"fast\")\n\n    # serve generates a Runner instance\n    serve(slow_deploy, fast_deploy)\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner","title":"Runner","text":"Source code in prefect/runner/runner.py
class Runner:\n    def __init__(\n        self,\n        name: Optional[str] = None,\n        query_seconds: Optional[float] = None,\n        prefetch_seconds: float = 10,\n        limit: Optional[int] = None,\n        pause_on_shutdown: bool = True,\n        webserver: bool = False,\n    ):\n        \"\"\"\n        Responsible for managing the execution of remotely initiated flow runs.\n\n        Args:\n            name: The name of the runner. If not provided, a random one\n                will be generated. If provided, it cannot contain '/' or '%'.\n            query_seconds: The number of seconds to wait between querying for\n                scheduled flow runs; defaults to `PREFECT_RUNNER_POLL_FREQUENCY`\n            prefetch_seconds: The number of seconds to prefetch flow runs for.\n            limit: The maximum number of flow runs this runner should be running at\n            pause_on_shutdown: A boolean for whether or not to automatically pause\n                deployment schedules on shutdown; defaults to `True`\n            webserver: a boolean flag for whether to start a webserver for this runner\n\n        Examples:\n            Set up a Runner to manage the execute of scheduled flow runs for two flows:\n                ```python\n                from prefect import flow, Runner\n\n                @flow\n                def hello_flow(name):\n                    print(f\"hello {name}\")\n\n                @flow\n                def goodbye_flow(name):\n                    print(f\"goodbye {name}\")\n\n                if __name__ == \"__main__\"\n                    runner = Runner(name=\"my-runner\")\n\n                    # Will be runnable via the API\n                    runner.add_flow(hello_flow)\n\n                    # Run on a cron schedule\n                    runner.add_flow(goodbye_flow, schedule={\"cron\": \"0 * * * *\"})\n\n                    runner.start()\n                ```\n        \"\"\"\n        if name and (\"/\" in name or \"%\" in name):\n            raise ValueError(\"Runner name cannot contain '/' or '%'\")\n        self.name = Path(name).stem if name is not None else f\"runner-{uuid4()}\"\n        self._logger = get_logger(\"runner\")\n\n        self.started = False\n        self.stopping = False\n        self.pause_on_shutdown = pause_on_shutdown\n        self.limit = limit or PREFECT_RUNNER_PROCESS_LIMIT.value()\n        self.webserver = webserver\n\n        self.query_seconds = query_seconds or PREFECT_RUNNER_POLL_FREQUENCY.value()\n        self._prefetch_seconds = prefetch_seconds\n\n        self._runs_task_group: anyio.abc.TaskGroup = anyio.create_task_group()\n        self._loops_task_group: anyio.abc.TaskGroup = anyio.create_task_group()\n\n        self._limiter: Optional[anyio.CapacityLimiter] = anyio.CapacityLimiter(\n            self.limit\n        )\n        self._client = get_client()\n        self._submitting_flow_run_ids = set()\n        self._cancelling_flow_run_ids = set()\n        self._scheduled_task_scopes = set()\n        self._deployment_ids: Set[UUID] = set()\n        self._flow_run_process_map = dict()\n\n        self._tmp_dir: Path = (\n            Path(tempfile.gettempdir()) / \"runner_storage\" / str(uuid4())\n        )\n        self._storage_objs: List[RunnerStorage] = []\n        self._deployment_storage_map: Dict[UUID, RunnerStorage] = {}\n        self._loop = asyncio.get_event_loop()\n\n    @sync_compatible\n    async def add_deployment(\n        self,\n        deployment: RunnerDeployment,\n    ) -> UUID:\n        
\"\"\"\n        Registers the deployment with the Prefect API and will monitor for work once\n        the runner is started.\n\n        Args:\n            deployment: A deployment for the runner to register.\n        \"\"\"\n        deployment_id = await deployment.apply()\n        storage = deployment.storage\n        if storage is not None:\n            storage = await self._add_storage(storage)\n            self._deployment_storage_map[deployment_id] = storage\n        self._deployment_ids.add(deployment_id)\n\n        return deployment_id\n\n    @sync_compatible\n    async def add_flow(\n        self,\n        flow: Flow,\n        name: str = None,\n        interval: Optional[\n            Union[\n                Iterable[Union[int, float, datetime.timedelta]],\n                int,\n                float,\n                datetime.timedelta,\n            ]\n        ] = None,\n        cron: Optional[Union[Iterable[str], str]] = None,\n        rrule: Optional[Union[Iterable[str], str]] = None,\n        paused: Optional[bool] = None,\n        schedules: Optional[FlexibleScheduleList] = None,\n        schedule: Optional[SCHEDULE_TYPES] = None,\n        is_schedule_active: Optional[bool] = None,\n        parameters: Optional[dict] = None,\n        triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n        description: Optional[str] = None,\n        tags: Optional[List[str]] = None,\n        version: Optional[str] = None,\n        enforce_parameter_schema: bool = False,\n        entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n    ) -> UUID:\n        \"\"\"\n        Provides a flow to the runner to be run based on the provided configuration.\n\n        Will create a deployment for the provided flow and register the deployment\n        with the runner.\n\n        Args:\n            flow: A flow for the runner to run.\n            name: The name to give the created deployment. Will default to the name\n                of the runner.\n            interval: An interval on which to execute the current flow. Accepts either a number\n                or a timedelta object. If a number is given, it will be interpreted as seconds.\n            cron: A cron schedule of when to execute runs of this flow.\n            rrule: An rrule schedule of when to execute runs of this flow.\n            schedule: A schedule object of when to execute runs of this flow. Used for\n                advanced scheduling options like timezone.\n            is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n                not provided when creating a deployment, the schedule will be set as active. If not\n                provided when updating a deployment, the schedule's activation will not be changed.\n            triggers: A list of triggers that should kick of a run of this flow.\n            parameters: A dictionary of default parameter values to pass to runs of this flow.\n            description: A description for the created deployment. Defaults to the flow's\n                description if not provided.\n            tags: A list of tags to associate with the created deployment for organizational\n                purposes.\n            version: A version for the created deployment. Defaults to the flow's version.\n            entrypoint_type: Type of entrypoint to use for the deployment. 
When using a module path\n                entrypoint, ensure that the module will be importable in the execution environment.\n        \"\"\"\n        api = PREFECT_API_URL.value()\n        if any([interval, cron, rrule]) and not api:\n            self._logger.warning(\n                \"Cannot schedule flows on an ephemeral server; run `prefect server\"\n                \" start` to start the scheduler.\"\n            )\n        name = self.name if name is None else name\n\n        deployment = await flow.to_deployment(\n            name=name,\n            interval=interval,\n            cron=cron,\n            rrule=rrule,\n            schedules=schedules,\n            schedule=schedule,\n            paused=paused,\n            is_schedule_active=is_schedule_active,\n            triggers=triggers,\n            parameters=parameters,\n            description=description,\n            tags=tags,\n            version=version,\n            enforce_parameter_schema=enforce_parameter_schema,\n            entrypoint_type=entrypoint_type,\n        )\n        return await self.add_deployment(deployment)\n\n    @sync_compatible\n    async def _add_storage(self, storage: RunnerStorage) -> RunnerStorage:\n        \"\"\"\n        Adds a storage object to the runner. The storage object will be used to pull\n        code to the runner's working directory before the runner starts.\n\n        Args:\n            storage: The storage object to add to the runner.\n        Returns:\n            The updated storage object that was added to the runner.\n        \"\"\"\n        if storage not in self._storage_objs:\n            storage_copy = deepcopy(storage)\n            storage_copy.set_base_path(self._tmp_dir)\n\n            self._logger.debug(\n                f\"Adding storage {storage_copy!r} to runner at\"\n                f\" {str(storage_copy.destination)!r}\"\n            )\n            self._storage_objs.append(storage_copy)\n\n            return storage_copy\n        else:\n            return next(s for s in self._storage_objs if s == storage)\n\n    def handle_sigterm(self, signum, frame):\n        \"\"\"\n        Gracefully shuts down the runner when a SIGTERM is received.\n        \"\"\"\n        self._logger.info(\"SIGTERM received, initiating graceful shutdown...\")\n        from_sync.call_in_loop_thread(create_call(self.stop))\n\n        sys.exit(0)\n\n    @sync_compatible\n    async def start(\n        self, run_once: bool = False, webserver: Optional[bool] = None\n    ) -> None:\n        \"\"\"\n        Starts a runner.\n\n        The runner will begin monitoring for and executing any scheduled work for all added flows.\n\n        Args:\n            run_once: If True, the runner will through one query loop and then exit.\n            webserver: a boolean for whether to start a webserver for this runner. 
If provided,\n                overrides the default on the runner\n\n        Examples:\n            Initialize a Runner, add two flows, and serve them by starting the Runner:\n\n            ```python\n            from prefect import flow, Runner\n\n            @flow\n            def hello_flow(name):\n                print(f\"hello {name}\")\n\n            @flow\n            def goodbye_flow(name):\n                print(f\"goodbye {name}\")\n\n            if __name__ == \"__main__\"\n                runner = Runner(name=\"my-runner\")\n\n                # Will be runnable via the API\n                runner.add_flow(hello_flow)\n\n                # Run on a cron schedule\n                runner.add_flow(goodbye_flow, schedule={\"cron\": \"0 * * * *\"})\n\n                runner.start()\n            ```\n        \"\"\"\n        _register_signal(signal.SIGTERM, self.handle_sigterm)\n\n        webserver = webserver if webserver is not None else self.webserver\n\n        if webserver or PREFECT_RUNNER_SERVER_ENABLE.value():\n            # we'll start the ASGI server in a separate thread so that\n            # uvicorn does not block the main thread\n            server_thread = threading.Thread(\n                name=\"runner-server-thread\",\n                target=partial(\n                    start_webserver,\n                    runner=self,\n                ),\n                daemon=True,\n            )\n            server_thread.start()\n\n        async with self as runner:\n            async with self._loops_task_group as tg:\n                for storage in self._storage_objs:\n                    if storage.pull_interval:\n                        tg.start_soon(\n                            partial(\n                                critical_service_loop,\n                                workload=storage.pull_code,\n                                interval=storage.pull_interval,\n                                run_once=run_once,\n                                jitter_range=0.3,\n                            )\n                        )\n                    else:\n                        tg.start_soon(storage.pull_code)\n                tg.start_soon(\n                    partial(\n                        critical_service_loop,\n                        workload=runner._get_and_submit_flow_runs,\n                        interval=self.query_seconds,\n                        run_once=run_once,\n                        jitter_range=0.3,\n                    )\n                )\n                tg.start_soon(\n                    partial(\n                        critical_service_loop,\n                        workload=runner._check_for_cancelled_flow_runs,\n                        interval=self.query_seconds * 2,\n                        run_once=run_once,\n                        jitter_range=0.3,\n                    )\n                )\n\n    def execute_in_background(self, func, *args, **kwargs):\n        \"\"\"\n        Executes a function in the background.\n        \"\"\"\n\n        return asyncio.run_coroutine_threadsafe(func(*args, **kwargs), self._loop)\n\n    async def cancel_all(self):\n        runs_to_cancel = []\n\n        # done to avoid dictionary size changing during iteration\n        for info in self._flow_run_process_map.values():\n            runs_to_cancel.append(info[\"flow_run\"])\n        if runs_to_cancel:\n            for run in runs_to_cancel:\n                try:\n                    await self._cancel_run(run, state_msg=\"Runner is shutting down.\")\n         
       except Exception:\n                    self._logger.exception(\n                        f\"Exception encountered while cancelling {run.id}\",\n                        exc_info=True,\n                    )\n\n    @sync_compatible\n    async def stop(self):\n        \"\"\"Stops the runner's polling cycle.\"\"\"\n        if not self.started:\n            raise RuntimeError(\n                \"Runner has not yet started. Please start the runner by calling\"\n                \" .start()\"\n            )\n\n        self.started = False\n        self.stopping = True\n        await self.cancel_all()\n        try:\n            self._loops_task_group.cancel_scope.cancel()\n        except Exception:\n            self._logger.exception(\n                \"Exception encountered while shutting down\", exc_info=True\n            )\n\n    async def execute_flow_run(\n        self, flow_run_id: UUID, entrypoint: Optional[str] = None\n    ):\n        \"\"\"\n        Executes a single flow run with the given ID.\n\n        Execution will wait to monitor for cancellation requests. Exits once\n        the flow run process has exited.\n        \"\"\"\n        self.pause_on_shutdown = False\n        context = self if not self.started else asyncnullcontext()\n\n        async with context:\n            if not self._acquire_limit_slot(flow_run_id):\n                return\n\n            async with anyio.create_task_group() as tg:\n                with anyio.CancelScope():\n                    self._submitting_flow_run_ids.add(flow_run_id)\n                    flow_run = await self._client.read_flow_run(flow_run_id)\n\n                    pid = await self._runs_task_group.start(\n                        partial(\n                            self._submit_run_and_capture_errors,\n                            flow_run=flow_run,\n                            entrypoint=entrypoint,\n                        ),\n                    )\n\n                    self._flow_run_process_map[flow_run.id] = dict(\n                        pid=pid, flow_run=flow_run\n                    )\n\n                    # We want this loop to stop when the flow run process exits\n                    # so we'll check if the flow run process is still alive on\n                    # each iteration and cancel the task group if it is not.\n                    workload = partial(\n                        self._check_for_cancelled_flow_runs,\n                        should_stop=lambda: not self._flow_run_process_map,\n                        on_stop=tg.cancel_scope.cancel,\n                    )\n\n                    tg.start_soon(\n                        partial(\n                            critical_service_loop,\n                            workload=workload,\n                            interval=self.query_seconds,\n                            jitter_range=0.3,\n                        )\n                    )\n\n    def _get_flow_run_logger(self, flow_run: \"FlowRun\") -> PrefectLogAdapter:\n        return flow_run_logger(flow_run=flow_run).getChild(\n            \"runner\",\n            extra={\n                \"runner_name\": self.name,\n            },\n        )\n\n    async def _run_process(\n        self,\n        flow_run: \"FlowRun\",\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n        entrypoint: Optional[str] = None,\n    ):\n        \"\"\"\n        Runs the given flow run in a subprocess.\n\n        Args:\n            flow_run: Flow run to execute via process. 
The ID of this flow run\n                is stored in the PREFECT__FLOW_RUN_ID environment variable to\n                allow the engine to retrieve the corresponding flow's code and\n                begin execution.\n            task_status: anyio task status used to send a message to the caller\n                than the flow run process has started.\n        \"\"\"\n        command = f\"{shlex.quote(sys.executable)} -m prefect.engine\"\n\n        flow_run_logger = self._get_flow_run_logger(flow_run)\n\n        # We must add creationflags to a dict so it is only passed as a function\n        # parameter on Windows, because the presence of creationflags causes\n        # errors on Unix even if set to None\n        kwargs: Dict[str, object] = {}\n        if sys.platform == \"win32\":\n            kwargs[\"creationflags\"] = subprocess.CREATE_NEW_PROCESS_GROUP\n\n        _use_threaded_child_watcher()\n        flow_run_logger.info(\"Opening process...\")\n\n        env = get_current_settings().to_environment_variables(exclude_unset=True)\n        env.update(\n            {\n                **{\n                    \"PREFECT__FLOW_RUN_ID\": str(flow_run.id),\n                    \"PREFECT__STORAGE_BASE_PATH\": str(self._tmp_dir),\n                    \"PREFECT__ENABLE_CANCELLATION_AND_CRASHED_HOOKS\": \"false\",\n                },\n                **({\"PREFECT__FLOW_ENTRYPOINT\": entrypoint} if entrypoint else {}),\n            }\n        )\n        env.update(**os.environ)  # is this really necessary??\n\n        storage = self._deployment_storage_map.get(flow_run.deployment_id)\n        if storage and storage.pull_interval:\n            # perform an adhoc pull of code before running the flow if an\n            # adhoc pull hasn't been performed in the last pull_interval\n            # TODO: Explore integrating this behavior with global concurrency.\n            last_adhoc_pull = getattr(storage, \"last_adhoc_pull\", None)\n            if (\n                last_adhoc_pull is None\n                or last_adhoc_pull\n                < datetime.datetime.now()\n                - datetime.timedelta(seconds=storage.pull_interval)\n            ):\n                self._logger.debug(\n                    \"Performing adhoc pull of code for flow run %s with storage %r\",\n                    flow_run.id,\n                    storage,\n                )\n                await storage.pull_code()\n                setattr(storage, \"last_adhoc_pull\", datetime.datetime.now())\n\n        process = await run_process(\n            shlex.split(command),\n            stream_output=True,\n            task_status=task_status,\n            env=env,\n            **kwargs,\n            cwd=storage.destination if storage else None,\n        )\n\n        # Use the pid for display if no name was given\n\n        if process.returncode:\n            help_message = None\n            level = logging.ERROR\n            if process.returncode == -9:\n                level = logging.INFO\n                help_message = (\n                    \"This indicates that the process exited due to a SIGKILL signal. 
\"\n                    \"Typically, this is either caused by manual cancellation or \"\n                    \"high memory usage causing the operating system to \"\n                    \"terminate the process.\"\n                )\n            if process.returncode == -15:\n                level = logging.INFO\n                help_message = (\n                    \"This indicates that the process exited due to a SIGTERM signal. \"\n                    \"Typically, this is caused by manual cancellation.\"\n                )\n            elif process.returncode == 247:\n                help_message = (\n                    \"This indicates that the process was terminated due to high \"\n                    \"memory usage.\"\n                )\n            elif (\n                sys.platform == \"win32\" and process.returncode == STATUS_CONTROL_C_EXIT\n            ):\n                level = logging.INFO\n                help_message = (\n                    \"Process was terminated due to a Ctrl+C or Ctrl+Break signal. \"\n                    \"Typically, this is caused by manual cancellation.\"\n                )\n\n            flow_run_logger.log(\n                level,\n                f\"Process for flow run {flow_run.name!r} exited with status code:\"\n                f\" {process.returncode}\"\n                + (f\"; {help_message}\" if help_message else \"\"),\n            )\n        else:\n            flow_run_logger.info(\n                f\"Process for flow run {flow_run.name!r} exited cleanly.\"\n            )\n\n        return process.returncode\n\n    async def _kill_process(\n        self,\n        pid: int,\n        grace_seconds: int = 30,\n    ):\n        \"\"\"\n        Kills a given flow run process.\n\n        Args:\n            pid: ID of the process to kill\n            grace_seconds: Number of seconds to wait for the process to end.\n        \"\"\"\n        # In a non-windows environment first send a SIGTERM, then, after\n        # `grace_seconds` seconds have passed subsequent send SIGKILL. In\n        # Windows we use CTRL_BREAK_EVENT as SIGTERM is useless:\n        # https://bugs.python.org/issue26350\n        if sys.platform == \"win32\":\n            try:\n                os.kill(pid, signal.CTRL_BREAK_EVENT)\n            except (ProcessLookupError, WindowsError):\n                raise RuntimeError(\n                    f\"Unable to kill process {pid!r}: The process was not found.\"\n                )\n        else:\n            try:\n                os.kill(pid, signal.SIGTERM)\n            except ProcessLookupError:\n                raise RuntimeError(\n                    f\"Unable to kill process {pid!r}: The process was not found.\"\n                )\n\n            # Throttle how often we check if the process is still alive to keep\n            # from making too many system calls in a short period of time.\n            check_interval = max(grace_seconds / 10, 1)\n\n            with anyio.move_on_after(grace_seconds):\n                while True:\n                    await anyio.sleep(check_interval)\n\n                    # Detect if the process is still alive. 
If not do an early\n                    # return as the process respected the SIGTERM from above.\n                    try:\n                        os.kill(pid, 0)\n                    except ProcessLookupError:\n                        return\n\n            try:\n                os.kill(pid, signal.SIGKILL)\n            except OSError:\n                # We shouldn't ever end up here, but it's possible that the\n                # process ended right after the check above.\n                return\n\n    async def _pause_schedules(self):\n        \"\"\"\n        Pauses all deployment schedules.\n        \"\"\"\n        self._logger.info(\"Pausing all deployments...\")\n        for deployment_id in self._deployment_ids:\n            self._logger.debug(f\"Pausing deployment '{deployment_id}'\")\n            await self._client.set_deployment_paused_state(deployment_id, True)\n        self._logger.info(\"All deployments have been paused!\")\n\n    async def _get_and_submit_flow_runs(self):\n        if self.stopping:\n            return\n        runs_response = await self._get_scheduled_flow_runs()\n        self.last_polled = pendulum.now(\"UTC\")\n        return await self._submit_scheduled_flow_runs(flow_run_response=runs_response)\n\n    async def _check_for_cancelled_flow_runs(\n        self, should_stop: Callable = lambda: False, on_stop: Callable = lambda: None\n    ):\n        \"\"\"\n        Checks for flow runs with CANCELLING a cancelling state and attempts to\n        cancel them.\n\n        Args:\n            should_stop: A callable that returns a boolean indicating whether or not\n                the runner should stop checking for cancelled flow runs.\n            on_stop: A callable that is called when the runner should stop checking\n                for cancelled flow runs.\n        \"\"\"\n        if self.stopping:\n            return\n        if not self.started:\n            raise RuntimeError(\n                \"Runner is not set up. Please make sure you are running this runner \"\n                \"as an async context manager.\"\n            )\n\n        if should_stop():\n            self._logger.debug(\n                \"Runner has no active flow runs or deployments. 
Sending message to loop\"\n                \" service that no further cancellation checks are needed.\"\n            )\n            on_stop()\n\n        self._logger.debug(\"Checking for cancelled flow runs...\")\n\n        named_cancelling_flow_runs = await self._client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=FlowRunFilterState(\n                    type=FlowRunFilterStateType(any_=[StateType.CANCELLED]),\n                    name=FlowRunFilterStateName(any_=[\"Cancelling\"]),\n                ),\n                # Avoid duplicate cancellation calls\n                id=FlowRunFilterId(\n                    any_=list(\n                        self._flow_run_process_map.keys()\n                        - self._cancelling_flow_run_ids\n                    )\n                ),\n            ),\n        )\n\n        typed_cancelling_flow_runs = await self._client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=FlowRunFilterState(\n                    type=FlowRunFilterStateType(any_=[StateType.CANCELLING]),\n                ),\n                # Avoid duplicate cancellation calls\n                id=FlowRunFilterId(\n                    any_=list(\n                        self._flow_run_process_map.keys()\n                        - self._cancelling_flow_run_ids\n                    )\n                ),\n            ),\n        )\n\n        cancelling_flow_runs = named_cancelling_flow_runs + typed_cancelling_flow_runs\n\n        if cancelling_flow_runs:\n            self._logger.info(\n                f\"Found {len(cancelling_flow_runs)} flow runs awaiting cancellation.\"\n            )\n\n        for flow_run in cancelling_flow_runs:\n            self._cancelling_flow_run_ids.add(flow_run.id)\n            self._runs_task_group.start_soon(self._cancel_run, flow_run)\n\n        return cancelling_flow_runs\n\n    async def _cancel_run(self, flow_run: \"FlowRun\", state_msg: Optional[str] = None):\n        run_logger = self._get_flow_run_logger(flow_run)\n\n        pid = self._flow_run_process_map.get(flow_run.id, {}).get(\"pid\")\n        if not pid:\n            await self._run_on_cancellation_hooks(flow_run, flow_run.state)\n            await self._mark_flow_run_as_cancelled(\n                flow_run,\n                state_updates={\n                    \"message\": (\n                        \"Could not find process ID for flow run\"\n                        \" and cancellation cannot be guaranteed.\"\n                    )\n                },\n            )\n            return\n\n        try:\n            await self._kill_process(pid)\n        except RuntimeError as exc:\n            self._logger.warning(f\"{exc} Marking flow run as cancelled.\")\n            await self._run_on_cancellation_hooks(flow_run, flow_run.state)\n            await self._mark_flow_run_as_cancelled(flow_run)\n        except Exception:\n            run_logger.exception(\n                \"Encountered exception while killing process for flow run \"\n                f\"'{flow_run.id}'. 
Flow run may not be cancelled.\"\n            )\n            # We will try again on generic exceptions\n            self._cancelling_flow_run_ids.remove(flow_run.id)\n        else:\n            await self._run_on_cancellation_hooks(flow_run, flow_run.state)\n            await self._mark_flow_run_as_cancelled(\n                flow_run,\n                state_updates={\n                    \"message\": state_msg or \"Flow run was cancelled successfully.\"\n                },\n            )\n            run_logger.info(f\"Cancelled flow run '{flow_run.name}'!\")\n\n    async def _get_scheduled_flow_runs(\n        self,\n    ) -> List[\"FlowRun\"]:\n        \"\"\"\n        Retrieve scheduled flow runs for this runner.\n        \"\"\"\n        scheduled_before = pendulum.now(\"utc\").add(seconds=int(self._prefetch_seconds))\n        self._logger.debug(\n            f\"Querying for flow runs scheduled before {scheduled_before}\"\n        )\n\n        scheduled_flow_runs = (\n            await self._client.get_scheduled_flow_runs_for_deployments(\n                deployment_ids=list(self._deployment_ids),\n                scheduled_before=scheduled_before,\n            )\n        )\n        self._logger.debug(f\"Discovered {len(scheduled_flow_runs)} scheduled_flow_runs\")\n        return scheduled_flow_runs\n\n    def has_slots_available(self) -> bool:\n        \"\"\"\n        Determine if the flow run limit has been reached.\n\n        Returns:\n            - bool: True if the limit has not been reached, False otherwise.\n        \"\"\"\n        return self._limiter.available_tokens > 0\n\n    def _acquire_limit_slot(self, flow_run_id: str) -> bool:\n        \"\"\"\n        Enforces flow run limit set on runner.\n\n        Returns:\n            - bool: True if a slot was acquired, False otherwise.\n        \"\"\"\n        try:\n            if self._limiter:\n                self._limiter.acquire_on_behalf_of_nowait(flow_run_id)\n                self._logger.debug(\"Limit slot acquired for flow run '%s'\", flow_run_id)\n            return True\n        except RuntimeError as exc:\n            if (\n                \"this borrower is already holding one of this CapacityLimiter's tokens\"\n                in str(exc)\n            ):\n                self._logger.warning(\n                    f\"Duplicate submission of flow run '{flow_run_id}' detected. Runner\"\n                    \" will not re-submit flow run.\"\n                )\n                return False\n            else:\n                raise\n        except anyio.WouldBlock:\n            self._logger.info(\n                f\"Flow run limit reached; {self._limiter.borrowed_tokens} flow runs\"\n                \" in progress. 
You can control this limit by passing a `limit` value\"\n                \" to `serve` or adjusting the PREFECT_RUNNER_PROCESS_LIMIT setting.\"\n            )\n            return False\n\n    def _release_limit_slot(self, flow_run_id: str) -> None:\n        \"\"\"\n        Frees up a slot taken by the given flow run id.\n        \"\"\"\n        if self._limiter:\n            self._limiter.release_on_behalf_of(flow_run_id)\n            self._logger.debug(\"Limit slot released for flow run '%s'\", flow_run_id)\n\n    async def _submit_scheduled_flow_runs(\n        self,\n        flow_run_response: List[\"FlowRun\"],\n        entrypoints: Optional[List[str]] = None,\n    ) -> List[\"FlowRun\"]:\n        \"\"\"\n        Takes a list of FlowRuns and submits the referenced flow runs\n        for execution by the runner.\n        \"\"\"\n        submittable_flow_runs = flow_run_response\n        submittable_flow_runs.sort(key=lambda run: run.next_scheduled_start_time)\n        for i, flow_run in enumerate(submittable_flow_runs):\n            if flow_run.id in self._submitting_flow_run_ids:\n                continue\n\n            if self._acquire_limit_slot(flow_run.id):\n                run_logger = self._get_flow_run_logger(flow_run)\n                run_logger.info(\n                    f\"Runner '{self.name}' submitting flow run '{flow_run.id}'\"\n                )\n                self._submitting_flow_run_ids.add(flow_run.id)\n                self._runs_task_group.start_soon(\n                    partial(\n                        self._submit_run,\n                        flow_run=flow_run,\n                        entrypoint=(\n                            entrypoints[i] if entrypoints else None\n                        ),  # TODO: avoid relying on index\n                    )\n                )\n            else:\n                break\n\n        return list(\n            filter(\n                lambda run: run.id in self._submitting_flow_run_ids,\n                submittable_flow_runs,\n            )\n        )\n\n    async def _submit_run(self, flow_run: \"FlowRun\", entrypoint: Optional[str] = None):\n        \"\"\"\n        Submits a given flow run for execution by the runner.\n        \"\"\"\n        run_logger = self._get_flow_run_logger(flow_run)\n\n        ready_to_submit = await self._propose_pending_state(flow_run)\n\n        if ready_to_submit:\n            readiness_result = await self._runs_task_group.start(\n                partial(\n                    self._submit_run_and_capture_errors,\n                    flow_run=flow_run,\n                    entrypoint=entrypoint,\n                ),\n            )\n\n            if readiness_result and not isinstance(readiness_result, Exception):\n                self._flow_run_process_map[flow_run.id] = dict(\n                    pid=readiness_result, flow_run=flow_run\n                )\n\n            run_logger.info(f\"Completed submission of flow run '{flow_run.id}'\")\n        else:\n            # If the run is not ready to submit, release the concurrency slot\n            self._release_limit_slot(flow_run.id)\n\n        self._submitting_flow_run_ids.remove(flow_run.id)\n\n    async def _submit_run_and_capture_errors(\n        self,\n        flow_run: \"FlowRun\",\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n        entrypoint: Optional[str] = None,\n    ) -> Union[Optional[int], Exception]:\n        run_logger = self._get_flow_run_logger(flow_run)\n\n        try:\n            status_code = await 
self._run_process(\n                flow_run=flow_run,\n                task_status=task_status,\n                entrypoint=entrypoint,\n            )\n        except Exception as exc:\n            if not task_status._future.done():\n                # This flow run was being submitted and did not start successfully\n                run_logger.exception(\n                    f\"Failed to start process for flow run '{flow_run.id}'.\"\n                )\n                # Mark the task as started to prevent agent crash\n                task_status.started(exc)\n                await self._propose_crashed_state(\n                    flow_run, \"Flow run process could not be started\"\n                )\n            else:\n                run_logger.exception(\n                    f\"An error occurred while monitoring flow run '{flow_run.id}'. \"\n                    \"The flow run will not be marked as failed, but an issue may have \"\n                    \"occurred.\"\n                )\n            return exc\n        finally:\n            self._release_limit_slot(flow_run.id)\n            self._flow_run_process_map.pop(flow_run.id, None)\n\n        if status_code != 0:\n            await self._propose_crashed_state(\n                flow_run,\n                f\"Flow run process exited with non-zero status code {status_code}.\",\n            )\n\n        api_flow_run = await self._client.read_flow_run(flow_run_id=flow_run.id)\n        terminal_state = api_flow_run.state\n        if terminal_state.is_crashed():\n            await self._run_on_crashed_hooks(flow_run=flow_run, state=terminal_state)\n\n        return status_code\n\n    async def _propose_pending_state(self, flow_run: \"FlowRun\") -> bool:\n        run_logger = self._get_flow_run_logger(flow_run)\n        state = flow_run.state\n        try:\n            state = await propose_state(\n                self._client, Pending(), flow_run_id=flow_run.id\n            )\n        except Abort as exc:\n            run_logger.info(\n                (\n                    f\"Aborted submission of flow run '{flow_run.id}'. 
\"\n                    f\"Server sent an abort signal: {exc}\"\n                ),\n            )\n            return False\n        except Exception:\n            run_logger.exception(\n                f\"Failed to update state of flow run '{flow_run.id}'\",\n            )\n            return False\n\n        if not state.is_pending():\n            run_logger.info(\n                (\n                    f\"Aborted submission of flow run '{flow_run.id}': \"\n                    f\"Server returned a non-pending state {state.type.value!r}\"\n                ),\n            )\n            return False\n\n        return True\n\n    async def _propose_failed_state(self, flow_run: \"FlowRun\", exc: Exception) -> None:\n        run_logger = self._get_flow_run_logger(flow_run)\n        try:\n            await propose_state(\n                self._client,\n                await exception_to_failed_state(message=\"Submission failed.\", exc=exc),\n                flow_run_id=flow_run.id,\n            )\n        except Abort:\n            # We've already failed, no need to note the abort but we don't want it to\n            # raise in the agent process\n            pass\n        except Exception:\n            run_logger.error(\n                f\"Failed to update state of flow run '{flow_run.id}'\",\n                exc_info=True,\n            )\n\n    async def _propose_crashed_state(self, flow_run: \"FlowRun\", message: str) -> None:\n        run_logger = self._get_flow_run_logger(flow_run)\n        try:\n            state = await propose_state(\n                self._client,\n                Crashed(message=message),\n                flow_run_id=flow_run.id,\n            )\n        except Abort:\n            # Flow run already marked as failed\n            pass\n        except Exception:\n            run_logger.exception(f\"Failed to update state of flow run '{flow_run.id}'\")\n        else:\n            if state.is_crashed():\n                run_logger.info(\n                    f\"Reported flow run '{flow_run.id}' as crashed: {message}\"\n                )\n\n    async def _mark_flow_run_as_cancelled(\n        self, flow_run: \"FlowRun\", state_updates: Optional[dict] = None\n    ) -> None:\n        state_updates = state_updates or {}\n        state_updates.setdefault(\"name\", \"Cancelled\")\n        state_updates.setdefault(\"type\", StateType.CANCELLED)\n        state = flow_run.state.copy(update=state_updates)\n\n        await self._client.set_flow_run_state(flow_run.id, state, force=True)\n\n        # Do not remove the flow run from the cancelling set immediately because\n        # the API caches responses for the `read_flow_runs` and we do not want to\n        # duplicate cancellations.\n        await self._schedule_task(\n            60 * 10, self._cancelling_flow_run_ids.remove, flow_run.id\n        )\n\n    async def _schedule_task(self, __in_seconds: int, fn, *args, **kwargs):\n        \"\"\"\n        Schedule a background task to start after some time.\n\n        These tasks will be run immediately when the runner exits instead of waiting.\n\n        The function may be async or sync. 
Async functions will be awaited.\n        \"\"\"\n\n        async def wrapper(task_status):\n            # If we are shutting down, do not sleep; otherwise sleep until the scheduled\n            # time or shutdown\n            if self.started:\n                with anyio.CancelScope() as scope:\n                    self._scheduled_task_scopes.add(scope)\n                    task_status.started()\n                    await anyio.sleep(__in_seconds)\n\n                self._scheduled_task_scopes.remove(scope)\n            else:\n                task_status.started()\n\n            result = fn(*args, **kwargs)\n            if inspect.iscoroutine(result):\n                await result\n\n        await self._runs_task_group.start(wrapper)\n\n    async def _run_on_cancellation_hooks(\n        self,\n        flow_run: \"FlowRun\",\n        state: State,\n    ) -> None:\n        \"\"\"\n        Run the hooks for a flow.\n        \"\"\"\n        if state.is_cancelling():\n            flow = await load_flow_from_flow_run(\n                flow_run, client=self._client, storage_base_path=str(self._tmp_dir)\n            )\n            hooks = flow.on_cancellation or []\n\n            await _run_hooks(hooks, flow_run, flow, state)\n\n    async def _run_on_crashed_hooks(\n        self,\n        flow_run: \"FlowRun\",\n        state: State,\n    ) -> None:\n        \"\"\"\n        Run the hooks for a flow.\n        \"\"\"\n        if state.is_crashed():\n            flow = await load_flow_from_flow_run(\n                flow_run, client=self._client, storage_base_path=str(self._tmp_dir)\n            )\n            hooks = flow.on_crashed or []\n\n            await _run_hooks(hooks, flow_run, flow, state)\n\n    async def __aenter__(self):\n        self._logger.debug(\"Starting runner...\")\n        self._client = get_client()\n        self._tmp_dir.mkdir(parents=True)\n        await self._client.__aenter__()\n        await self._runs_task_group.__aenter__()\n\n        self.started = True\n        return self\n\n    async def __aexit__(self, *exc_info):\n        self._logger.debug(\"Stopping runner...\")\n        if self.pause_on_shutdown:\n            await self._pause_schedules()\n        self.started = False\n        for scope in self._scheduled_task_scopes:\n            scope.cancel()\n        if self._runs_task_group:\n            await self._runs_task_group.__aexit__(*exc_info)\n        if self._client:\n            await self._client.__aexit__(*exc_info)\n        shutil.rmtree(str(self._tmp_dir))\n\n    def __repr__(self):\n        return f\"Runner(name={self.name!r})\"\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.add_deployment","title":"add_deployment async","text":"

Registers the deployment with the Prefect API and will monitor for work once the runner is started.

Parameters:

Name Type Description Default deployment RunnerDeployment

A deployment for the runner to register.

required Source code in prefect/runner/runner.py
@sync_compatible\nasync def add_deployment(\n    self,\n    deployment: RunnerDeployment,\n) -> UUID:\n    \"\"\"\n    Registers the deployment with the Prefect API and will monitor for work once\n    the runner is started.\n\n    Args:\n        deployment: A deployment for the runner to register.\n    \"\"\"\n    deployment_id = await deployment.apply()\n    storage = deployment.storage\n    if storage is not None:\n        storage = await self._add_storage(storage)\n        self._deployment_storage_map[deployment_id] = storage\n    self._deployment_ids.add(deployment_id)\n\n    return deployment_id\n
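A minimal sketch of registering a deployment with a runner (the flow name, deployment name, and one-hour interval are illustrative):

```python
# Hedged sketch: build a deployment from a flow and register it with a Runner.
from prefect import flow, Runner

@flow
def my_flow():
    print("hello")

if __name__ == "__main__":
    runner = Runner(name="example-runner")
    deployment = my_flow.to_deployment(name="example-deployment", interval=3600)
    runner.add_deployment(deployment)  # sync-compatible; returns the deployment ID
    runner.start()
```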
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.add_flow","title":"add_flow async","text":"

Provides a flow to the runner to be run based on the provided configuration.

Will create a deployment for the provided flow and register the deployment with the runner.

Parameters:

Name Type Description Default flow Flow

A flow for the runner to run.

required name str

The name to give the created deployment. Will default to the name of the runner.

None interval Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]]

An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.

None cron Optional[Union[Iterable[str], str]]

A cron schedule of when to execute runs of this flow.

None rrule Optional[Union[Iterable[str], str]]

An rrule schedule of when to execute runs of this flow.

None schedule Optional[SCHEDULE_TYPES]

A schedule object of when to execute runs of this flow. Used for advanced scheduling options like timezone.

None is_schedule_active Optional[bool]

Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.

None triggers Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]]

A list of triggers that should kick off a run of this flow.

None parameters Optional[dict]

A dictionary of default parameter values to pass to runs of this flow.

None description Optional[str]

A description for the created deployment. Defaults to the flow's description if not provided.

None tags Optional[List[str]]

A list of tags to associate with the created deployment for organizational purposes.

None version Optional[str]

A version for the created deployment. Defaults to the flow's version.

None entrypoint_type EntrypointType

Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.

FILE_PATH Source code in prefect/runner/runner.py
@sync_compatible\nasync def add_flow(\n    self,\n    flow: Flow,\n    name: str = None,\n    interval: Optional[\n        Union[\n            Iterable[Union[int, float, datetime.timedelta]],\n            int,\n            float,\n            datetime.timedelta,\n        ]\n    ] = None,\n    cron: Optional[Union[Iterable[str], str]] = None,\n    rrule: Optional[Union[Iterable[str], str]] = None,\n    paused: Optional[bool] = None,\n    schedules: Optional[FlexibleScheduleList] = None,\n    schedule: Optional[SCHEDULE_TYPES] = None,\n    is_schedule_active: Optional[bool] = None,\n    parameters: Optional[dict] = None,\n    triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n    description: Optional[str] = None,\n    tags: Optional[List[str]] = None,\n    version: Optional[str] = None,\n    enforce_parameter_schema: bool = False,\n    entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n) -> UUID:\n    \"\"\"\n    Provides a flow to the runner to be run based on the provided configuration.\n\n    Will create a deployment for the provided flow and register the deployment\n    with the runner.\n\n    Args:\n        flow: A flow for the runner to run.\n        name: The name to give the created deployment. Will default to the name\n            of the runner.\n        interval: An interval on which to execute the current flow. Accepts either a number\n            or a timedelta object. If a number is given, it will be interpreted as seconds.\n        cron: A cron schedule of when to execute runs of this flow.\n        rrule: An rrule schedule of when to execute runs of this flow.\n        schedule: A schedule object of when to execute runs of this flow. Used for\n            advanced scheduling options like timezone.\n        is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n            not provided when creating a deployment, the schedule will be set as active. If not\n            provided when updating a deployment, the schedule's activation will not be changed.\n        triggers: A list of triggers that should kick of a run of this flow.\n        parameters: A dictionary of default parameter values to pass to runs of this flow.\n        description: A description for the created deployment. Defaults to the flow's\n            description if not provided.\n        tags: A list of tags to associate with the created deployment for organizational\n            purposes.\n        version: A version for the created deployment. Defaults to the flow's version.\n        entrypoint_type: Type of entrypoint to use for the deployment. 
When using a module path\n            entrypoint, ensure that the module will be importable in the execution environment.\n    \"\"\"\n    api = PREFECT_API_URL.value()\n    if any([interval, cron, rrule]) and not api:\n        self._logger.warning(\n            \"Cannot schedule flows on an ephemeral server; run `prefect server\"\n            \" start` to start the scheduler.\"\n        )\n    name = self.name if name is None else name\n\n    deployment = await flow.to_deployment(\n        name=name,\n        interval=interval,\n        cron=cron,\n        rrule=rrule,\n        schedules=schedules,\n        schedule=schedule,\n        paused=paused,\n        is_schedule_active=is_schedule_active,\n        triggers=triggers,\n        parameters=parameters,\n        description=description,\n        tags=tags,\n        version=version,\n        enforce_parameter_schema=enforce_parameter_schema,\n        entrypoint_type=entrypoint_type,\n    )\n    return await self.add_deployment(deployment)\n
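A minimal sketch of add_flow with an interval schedule (the names and the 60-second interval are illustrative):

```python
# Hedged sketch: hand a flow straight to the runner; a deployment is created for it.
from prefect import flow, Runner

@flow
def heartbeat():
    print("still alive")

if __name__ == "__main__":
    runner = Runner(name="example-runner")
    runner.add_flow(heartbeat, name="heartbeat", interval=60)  # run every 60 seconds
    runner.start()
```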
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.execute_flow_run","title":"execute_flow_run async","text":"

Executes a single flow run with the given ID.

Execution will wait to monitor for cancellation requests. Exits once the flow run process has exited.

Source code in prefect/runner/runner.py
async def execute_flow_run(\n    self, flow_run_id: UUID, entrypoint: Optional[str] = None\n):\n    \"\"\"\n    Executes a single flow run with the given ID.\n\n    Execution will wait to monitor for cancellation requests. Exits once\n    the flow run process has exited.\n    \"\"\"\n    self.pause_on_shutdown = False\n    context = self if not self.started else asyncnullcontext()\n\n    async with context:\n        if not self._acquire_limit_slot(flow_run_id):\n            return\n\n        async with anyio.create_task_group() as tg:\n            with anyio.CancelScope():\n                self._submitting_flow_run_ids.add(flow_run_id)\n                flow_run = await self._client.read_flow_run(flow_run_id)\n\n                pid = await self._runs_task_group.start(\n                    partial(\n                        self._submit_run_and_capture_errors,\n                        flow_run=flow_run,\n                        entrypoint=entrypoint,\n                    ),\n                )\n\n                self._flow_run_process_map[flow_run.id] = dict(\n                    pid=pid, flow_run=flow_run\n                )\n\n                # We want this loop to stop when the flow run process exits\n                # so we'll check if the flow run process is still alive on\n                # each iteration and cancel the task group if it is not.\n                workload = partial(\n                    self._check_for_cancelled_flow_runs,\n                    should_stop=lambda: not self._flow_run_process_map,\n                    on_stop=tg.cancel_scope.cancel,\n                )\n\n                tg.start_soon(\n                    partial(\n                        critical_service_loop,\n                        workload=workload,\n                        interval=self.query_seconds,\n                        jitter_range=0.3,\n                    )\n                )\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.execute_in_background","title":"execute_in_background","text":"

Executes a function in the background.

Source code in prefect/runner/runner.py
def execute_in_background(self, func, *args, **kwargs):\n    \"\"\"\n    Executes a function in the background.\n    \"\"\"\n\n    return asyncio.run_coroutine_threadsafe(func(*args, **kwargs), self._loop)\n
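
This is only meaningful while the runner's event loop is running (for example, from the runner webserver's thread). A hedged sketch, where my_coroutine is a hypothetical coroutine function and runner is an already-started Runner:

async def my_coroutine(x):\n    return x * 2\n\n# Returns a concurrent.futures.Future bound to the runner's event loop\nfuture = runner.execute_in_background(my_coroutine, 21)\nresult = future.result()  # blocks the calling thread until the coroutine completes\n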
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.handle_sigterm","title":"handle_sigterm","text":"

Gracefully shuts down the runner when a SIGTERM is received.

Source code in prefect/runner/runner.py
def handle_sigterm(self, signum, frame):\n    \"\"\"\n    Gracefully shuts down the runner when a SIGTERM is received.\n    \"\"\"\n    self._logger.info(\"SIGTERM received, initiating graceful shutdown...\")\n    from_sync.call_in_loop_thread(create_call(self.stop))\n\n    sys.exit(0)\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.has_slots_available","title":"has_slots_available","text":"

Determine if the flow run limit has been reached.

Returns:

Type Description bool
  • bool: True if the limit has not been reached, False otherwise.
Source code in prefect/runner/runner.py
def has_slots_available(self) -> bool:\n    \"\"\"\n    Determine if the flow run limit has been reached.\n\n    Returns:\n        - bool: True if the limit has not been reached, False otherwise.\n    \"\"\"\n    return self._limiter.available_tokens > 0\n
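
For example, a caller could gate new submissions on the concurrency limit (sketch only; runner and flow_run_id are assumed from the earlier sketches and the call is made from an async context):

if runner.has_slots_available():\n    await runner.execute_flow_run(flow_run_id)\nelse:\n    print(\"Flow run limit reached; deferring submission\")\n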
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.start","title":"start async","text":"

Starts a runner.

The runner will begin monitoring for and executing any scheduled work for all added flows.

Parameters:

Name Type Description Default run_once bool

If True, the runner will run through one query loop and then exit.

False webserver Optional[bool]

A boolean for whether to start a webserver for this runner. If provided, overrides the default on the runner.

None

Examples:

Initialize a Runner, add two flows, and serve them by starting the Runner:

from prefect import flow, Runner\n\n@flow\ndef hello_flow(name):\n    print(f\"hello {name}\")\n\n@flow\ndef goodbye_flow(name):\n    print(f\"goodbye {name}\")\n\nif __name__ == \"__main__\":\n    runner = Runner(name=\"my-runner\")\n\n    # Will be runnable via the API\n    runner.add_flow(hello_flow)\n\n    # Run on a cron schedule\n    runner.add_flow(goodbye_flow, schedule={\"cron\": \"0 * * * *\"})\n\n    runner.start()\n
Source code in prefect/runner/runner.py
@sync_compatible\nasync def start(\n    self, run_once: bool = False, webserver: Optional[bool] = None\n) -> None:\n    \"\"\"\n    Starts a runner.\n\n    The runner will begin monitoring for and executing any scheduled work for all added flows.\n\n    Args:\n        run_once: If True, the runner will through one query loop and then exit.\n        webserver: a boolean for whether to start a webserver for this runner. If provided,\n            overrides the default on the runner\n\n    Examples:\n        Initialize a Runner, add two flows, and serve them by starting the Runner:\n\n        ```python\n        from prefect import flow, Runner\n\n        @flow\n        def hello_flow(name):\n            print(f\"hello {name}\")\n\n        @flow\n        def goodbye_flow(name):\n            print(f\"goodbye {name}\")\n\n        if __name__ == \"__main__\"\n            runner = Runner(name=\"my-runner\")\n\n            # Will be runnable via the API\n            runner.add_flow(hello_flow)\n\n            # Run on a cron schedule\n            runner.add_flow(goodbye_flow, schedule={\"cron\": \"0 * * * *\"})\n\n            runner.start()\n        ```\n    \"\"\"\n    _register_signal(signal.SIGTERM, self.handle_sigterm)\n\n    webserver = webserver if webserver is not None else self.webserver\n\n    if webserver or PREFECT_RUNNER_SERVER_ENABLE.value():\n        # we'll start the ASGI server in a separate thread so that\n        # uvicorn does not block the main thread\n        server_thread = threading.Thread(\n            name=\"runner-server-thread\",\n            target=partial(\n                start_webserver,\n                runner=self,\n            ),\n            daemon=True,\n        )\n        server_thread.start()\n\n    async with self as runner:\n        async with self._loops_task_group as tg:\n            for storage in self._storage_objs:\n                if storage.pull_interval:\n                    tg.start_soon(\n                        partial(\n                            critical_service_loop,\n                            workload=storage.pull_code,\n                            interval=storage.pull_interval,\n                            run_once=run_once,\n                            jitter_range=0.3,\n                        )\n                    )\n                else:\n                    tg.start_soon(storage.pull_code)\n            tg.start_soon(\n                partial(\n                    critical_service_loop,\n                    workload=runner._get_and_submit_flow_runs,\n                    interval=self.query_seconds,\n                    run_once=run_once,\n                    jitter_range=0.3,\n                )\n            )\n            tg.start_soon(\n                partial(\n                    critical_service_loop,\n                    workload=runner._check_for_cancelled_flow_runs,\n                    interval=self.query_seconds * 2,\n                    run_once=run_once,\n                    jitter_range=0.3,\n                )\n            )\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.stop","title":"stop async","text":"

Stops the runner's polling cycle.

Source code in prefect/runner/runner.py
@sync_compatible\nasync def stop(self):\n    \"\"\"Stops the runner's polling cycle.\"\"\"\n    if not self.started:\n        raise RuntimeError(\n            \"Runner has not yet started. Please start the runner by calling\"\n            \" .start()\"\n        )\n\n    self.started = False\n    self.stopping = True\n    await self.cancel_all()\n    try:\n        self._loops_task_group.cancel_scope.cancel()\n    except Exception:\n        self._logger.exception(\n            \"Exception encountered while shutting down\", exc_info=True\n        )\n
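
Because stop is sync_compatible, it can be called from synchronous code once the runner has started, for example from a separate control thread (sketch; assumes an existing runner):

if runner.started:\n    runner.stop()  # cancels in-flight work and stops the polling loops\n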
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.serve","title":"serve async","text":"

Serve the provided list of deployments.

Parameters:

Name Type Description Default *args RunnerDeployment

A list of deployments to serve.

() pause_on_shutdown bool

A boolean for whether or not to automatically pause deployment schedules on shutdown.

True print_starting_message bool

Whether or not to print a message to the console on startup.

True limit Optional[int]

The maximum number of runs that can be executed concurrently.

None **kwargs

Additional keyword arguments to pass to the runner.

{}

Examples:

Prepare two deployments and serve them:

import datetime\n\nfrom prefect import flow, serve\n\n@flow\ndef my_flow(name):\n    print(f\"hello {name}\")\n\n@flow\ndef my_other_flow(name):\n    print(f\"goodbye {name}\")\n\nif __name__ == \"__main__\":\n    # Run once a day\n    hello_deploy = my_flow.to_deployment(\n        \"hello\", tags=[\"dev\"], interval=datetime.timedelta(days=1)\n    )\n\n    # Run every Sunday at 4:00 AM\n    bye_deploy = my_other_flow.to_deployment(\n        \"goodbye\", tags=[\"dev\"], cron=\"0 4 * * sun\"\n    )\n\n    serve(hello_deploy, bye_deploy)\n
Source code in prefect/runner/runner.py
@sync_compatible\nasync def serve(\n    *args: RunnerDeployment,\n    pause_on_shutdown: bool = True,\n    print_starting_message: bool = True,\n    limit: Optional[int] = None,\n    **kwargs,\n):\n    \"\"\"\n    Serve the provided list of deployments.\n\n    Args:\n        *args: A list of deployments to serve.\n        pause_on_shutdown: A boolean for whether or not to automatically pause\n            deployment schedules on shutdown.\n        print_starting_message: Whether or not to print message to the console\n            on startup.\n        limit: The maximum number of runs that can be executed concurrently.\n        **kwargs: Additional keyword arguments to pass to the runner.\n\n    Examples:\n        Prepare two deployments and serve them:\n\n        ```python\n        import datetime\n\n        from prefect import flow, serve\n\n        @flow\n        def my_flow(name):\n            print(f\"hello {name}\")\n\n        @flow\n        def my_other_flow(name):\n            print(f\"goodbye {name}\")\n\n        if __name__ == \"__main__\":\n            # Run once a day\n            hello_deploy = my_flow.to_deployment(\n                \"hello\", tags=[\"dev\"], interval=datetime.timedelta(days=1)\n            )\n\n            # Run every Sunday at 4:00 AM\n            bye_deploy = my_other_flow.to_deployment(\n                \"goodbye\", tags=[\"dev\"], cron=\"0 4 * * sun\"\n            )\n\n            serve(hello_deploy, bye_deploy)\n        ```\n    \"\"\"\n    runner = Runner(pause_on_shutdown=pause_on_shutdown, limit=limit, **kwargs)\n    for deployment in args:\n        await runner.add_deployment(deployment)\n\n    if print_starting_message:\n        help_message_top = (\n            \"[green]Your deployments are being served and polling for\"\n            \" scheduled runs!\\n[/]\"\n        )\n\n        table = Table(title=\"Deployments\", show_header=False)\n\n        table.add_column(style=\"blue\", no_wrap=True)\n\n        for deployment in args:\n            table.add_row(f\"{deployment.flow_name}/{deployment.name}\")\n\n        help_message_bottom = (\n            \"\\nTo trigger any of these deployments, use the\"\n            \" following command:\\n[blue]\\n\\t$ prefect deployment run\"\n            \" [DEPLOYMENT_NAME]\\n[/]\"\n        )\n        if PREFECT_UI_URL:\n            help_message_bottom += (\n                \"\\nYou can also trigger your deployments via the Prefect UI:\"\n                f\" [blue]{PREFECT_UI_URL.value()}/deployments[/]\\n\"\n            )\n\n        console = Console()\n        console.print(\n            Group(help_message_top, table, help_message_bottom), soft_wrap=True\n        )\n\n    await runner.start()\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/server/","title":"server","text":"","tags":["Python API","server"]},{"location":"api-ref/prefect/runner/server/#prefect.runner.server","title":"prefect.runner.server","text":"","tags":["Python API","server"]},{"location":"api-ref/prefect/runner/server/#prefect.runner.server.build_server","title":"build_server async","text":"

Build a FastAPI server for a runner.

Parameters:

Name Type Description Default runner Runner

the runner this server interacts with and monitors

required log_level str

the log level to use for the server

required Source code in prefect/runner/server.py
@sync_compatible\nasync def build_server(runner: \"Runner\") -> FastAPI:\n    \"\"\"\n    Build a FastAPI server for a runner.\n\n    Args:\n        runner (Runner): the runner this server interacts with and monitors\n        log_level (str): the log level to use for the server\n    \"\"\"\n    webserver = FastAPI()\n    router = APIRouter()\n\n    router.add_api_route(\n        \"/health\", perform_health_check(runner=runner), methods=[\"GET\"]\n    )\n    router.add_api_route(\"/run_count\", run_count(runner=runner), methods=[\"GET\"])\n    router.add_api_route(\"/shutdown\", shutdown(runner=runner), methods=[\"POST\"])\n    webserver.include_router(router)\n\n    if PREFECT_EXPERIMENTAL_ENABLE_EXTRA_RUNNER_ENDPOINTS.value():\n        deployments_router, deployment_schemas = await get_deployment_router(runner)\n        webserver.include_router(deployments_router)\n\n        subflow_schemas = await get_subflow_schemas(runner)\n        webserver.add_api_route(\n            \"/flow/run\",\n            _build_generic_endpoint_for_flows(runner=runner, schemas=subflow_schemas),\n            methods=[\"POST\"],\n            name=\"Run flow in background\",\n            description=\"Trigger any flow run as a background task on the runner.\",\n            summary=\"Run flow\",\n        )\n\n        def customize_openapi():\n            if webserver.openapi_schema:\n                return webserver.openapi_schema\n\n            openapi_schema = inject_schemas_into_openapi(webserver, deployment_schemas)\n            webserver.openapi_schema = openapi_schema\n            return webserver.openapi_schema\n\n        webserver.openapi = customize_openapi\n\n    return webserver\n
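
A sketch of building the app and serving it manually with uvicorn, which is roughly what start_webserver does for you (assumes an existing runner; host and port are illustrative):

import uvicorn\n\nfrom prefect.runner.server import build_server\n\napp = build_server(runner)  # sync_compatible: usable without await here\nuvicorn.run(app, host=\"127.0.0.1\", port=8080)\n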
","tags":["Python API","server"]},{"location":"api-ref/prefect/runner/server/#prefect.runner.server.get_subflow_schemas","title":"get_subflow_schemas async","text":"

Load available subflow schemas by filtering for only those subflows in the deployment entrypoint's import space.

Source code in prefect/runner/server.py
async def get_subflow_schemas(runner: \"Runner\") -> Dict[str, Dict]:\n    \"\"\"\n    Load available subflow schemas by filtering for only those subflows in the\n    deployment entrypoint's import space.\n    \"\"\"\n    schemas = {}\n    async with get_client() as client:\n        for deployment_id in runner._deployment_ids:\n            deployment = await client.read_deployment(deployment_id)\n            if deployment.entrypoint is None:\n                continue\n\n            script = deployment.entrypoint.split(\":\")[0]\n            subflows = load_flows_from_script(script)\n            for flow in subflows:\n                schemas[flow.name] = flow.parameters.dict()\n\n    return schemas\n
","tags":["Python API","server"]},{"location":"api-ref/prefect/runner/server/#prefect.runner.server.start_webserver","title":"start_webserver","text":"

Run a FastAPI server for a runner.

Parameters:

Name Type Description Default runner Runner

the runner this server interacts with and monitors

required log_level str

the log level to use for the server

None Source code in prefect/runner/server.py
def start_webserver(runner: \"Runner\", log_level: Optional[str] = None) -> None:\n    \"\"\"\n    Run a FastAPI server for a runner.\n\n    Args:\n        runner (Runner): the runner this server interacts with and monitors\n        log_level (str): the log level to use for the server\n    \"\"\"\n    host = PREFECT_RUNNER_SERVER_HOST.value()\n    port = PREFECT_RUNNER_SERVER_PORT.value()\n    log_level = log_level or PREFECT_RUNNER_SERVER_LOG_LEVEL.value()\n    webserver = build_server(runner)\n    uvicorn.run(webserver, host=host, port=port, log_level=log_level)\n
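
The runner normally starts this in a daemon thread when its webserver is enabled; a sketch of invoking it directly, which blocks the calling thread (assumes an existing runner):

from prefect.runner.server import start_webserver\n\n# Host and port are read from the PREFECT_RUNNER_SERVER_* settings\nstart_webserver(runner, log_level=\"info\")\n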
","tags":["Python API","server"]},{"location":"api-ref/prefect/runner/storage/","title":"storage","text":"","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage","title":"prefect.runner.storage","text":"","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.BlockStorageAdapter","title":"BlockStorageAdapter","text":"

A storage adapter for a storage block object to allow it to be used as a runner storage object.

Source code in prefect/runner/storage.py
class BlockStorageAdapter:\n    \"\"\"\n    A storage adapter for a storage block object to allow it to be used as a\n    runner storage object.\n    \"\"\"\n\n    def __init__(\n        self,\n        block: Union[ReadableDeploymentStorage, WritableDeploymentStorage],\n        pull_interval: Optional[int] = 60,\n    ):\n        self._block = block\n        self._pull_interval = pull_interval\n        self._storage_base_path = Path.cwd()\n        if not isinstance(block, Block):\n            raise TypeError(\n                f\"Expected a block object. Received a {type(block).__name__!r} object.\"\n            )\n        if not hasattr(block, \"get_directory\"):\n            raise ValueError(\"Provided block must have a `get_directory` method.\")\n\n        self._name = (\n            f\"{block.get_block_type_slug()}-{block._block_document_name}\"\n            if block._block_document_name\n            else str(uuid4())\n        )\n\n    def set_base_path(self, path: Path):\n        self._storage_base_path = path\n\n    @property\n    def pull_interval(self) -> Optional[int]:\n        return self._pull_interval\n\n    @property\n    def destination(self) -> Path:\n        return self._storage_base_path / self._name\n\n    async def pull_code(self):\n        if not self.destination.exists():\n            self.destination.mkdir(parents=True, exist_ok=True)\n        await self._block.get_directory(local_path=str(self.destination))\n\n    def to_pull_step(self) -> dict:\n        # Give blocks the change to implement their own pull step\n        if hasattr(self._block, \"get_pull_step\"):\n            return self._block.get_pull_step()\n        else:\n            if not self._block._block_document_name:\n                raise BlockNotSavedError(\n                    \"Block must be saved with `.save()` before it can be converted to a\"\n                    \" pull step.\"\n                )\n            return {\n                \"prefect.deployments.steps.pull_with_block\": {\n                    \"block_type_slug\": self._block.get_block_type_slug(),\n                    \"block_document_name\": self._block._block_document_name,\n                }\n            }\n\n    def __eq__(self, __value) -> bool:\n        if isinstance(__value, BlockStorageAdapter):\n            return self._block == __value._block\n        return False\n
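
A sketch of wrapping a saved storage block so it can be used as runner storage (the block name my-bucket is hypothetical; any saved block exposing a get_directory method should work):

from prefect.filesystems import RemoteFileSystem\nfrom prefect.runner.storage import BlockStorageAdapter\n\nblock = RemoteFileSystem.load(\"my-bucket\")  # hypothetical saved block\nstorage = BlockStorageAdapter(block, pull_interval=120)\n\nawait storage.pull_code()\n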
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.GitRepository","title":"GitRepository","text":"

Pulls the contents of a git repository to the local filesystem.

Parameters:

Name Type Description Default url str

The URL of the git repository to pull from

required credentials Union[GitCredentials, Block, Dict[str, Any], None]

A dictionary of credentials to use when pulling from the repository. If a username is provided, an access token must also be provided.

None name Optional[str]

The name of the repository. If not provided, the name will be inferred from the repository URL.

None branch Optional[str]

The branch to pull from. Defaults to \"main\".

None pull_interval Optional[int]

The interval in seconds at which to pull contents from remote storage to local storage. If None, remote storage will perform a one-time sync.

60

Examples:

Pull the contents of a private git repository to the local filesystem:

from prefect.runner.storage import GitRepository\n\nstorage = GitRepository(\n    url=\"https://github.com/org/repo.git\",\n    credentials={\"username\": \"oauth2\", \"access_token\": \"my-access-token\"},\n)\n\nawait storage.pull_code()\n
Source code in prefect/runner/storage.py
class GitRepository:\n    \"\"\"\n    Pulls the contents of a git repository to the local filesystem.\n\n    Parameters:\n        url: The URL of the git repository to pull from\n        credentials: A dictionary of credentials to use when pulling from the\n            repository. If a username is provided, an access token must also be\n            provided.\n        name: The name of the repository. If not provided, the name will be\n            inferred from the repository URL.\n        branch: The branch to pull from. Defaults to \"main\".\n        pull_interval: The interval in seconds at which to pull contents from\n            remote storage to local storage. If None, remote storage will perform\n            a one-time sync.\n\n    Examples:\n        Pull the contents of a private git repository to the local filesystem:\n\n        ```python\n        from prefect.runner.storage import GitRepository\n\n        storage = GitRepository(\n            url=\"https://github.com/org/repo.git\",\n            credentials={\"username\": \"oauth2\", \"access_token\": \"my-access-token\"},\n        )\n\n        await storage.pull_code()\n        ```\n    \"\"\"\n\n    def __init__(\n        self,\n        url: str,\n        credentials: Union[GitCredentials, Block, Dict[str, Any], None] = None,\n        name: Optional[str] = None,\n        branch: Optional[str] = None,\n        include_submodules: bool = False,\n        pull_interval: Optional[int] = 60,\n    ):\n        if credentials is None:\n            credentials = {}\n\n        if (\n            isinstance(credentials, dict)\n            and credentials.get(\"username\")\n            and not (credentials.get(\"access_token\") or credentials.get(\"password\"))\n        ):\n            raise ValueError(\n                \"If a username is provided, an access token or password must also be\"\n                \" provided.\"\n            )\n        self._url = url\n        self._branch = branch\n        self._credentials = credentials\n        self._include_submodules = include_submodules\n        repo_name = urlparse(url).path.split(\"/\")[-1].replace(\".git\", \"\")\n        default_name = f\"{repo_name}-{branch}\" if branch else repo_name\n        self._name = name or default_name\n        self._logger = get_logger(f\"runner.storage.git-repository.{self._name}\")\n        self._storage_base_path = Path.cwd()\n        self._pull_interval = pull_interval\n\n    @property\n    def destination(self) -> Path:\n        return self._storage_base_path / self._name\n\n    def set_base_path(self, path: Path):\n        self._storage_base_path = path\n\n    @property\n    def pull_interval(self) -> Optional[int]:\n        return self._pull_interval\n\n    @property\n    def _repository_url_with_credentials(self) -> str:\n        if not self._credentials:\n            return self._url\n\n        url_components = urlparse(self._url)\n\n        credentials = (\n            self._credentials.dict()\n            if isinstance(self._credentials, Block)\n            else deepcopy(self._credentials)\n        )\n\n        for k, v in credentials.items():\n            if isinstance(v, Secret):\n                credentials[k] = v.get()\n            elif isinstance(v, SecretStr):\n                credentials[k] = v.get_secret_value()\n\n        formatted_credentials = _format_token_from_credentials(\n            urlparse(self._url).netloc, credentials\n        )\n        if url_components.scheme == \"https\" and formatted_credentials is not None:\n            
updated_components = url_components._replace(\n                netloc=f\"{formatted_credentials}@{url_components.netloc}\"\n            )\n            repository_url = urlunparse(updated_components)\n        else:\n            repository_url = self._url\n\n        return repository_url\n\n    async def pull_code(self):\n        \"\"\"\n        Pulls the contents of the configured repository to the local filesystem.\n        \"\"\"\n        self._logger.debug(\n            \"Pulling contents from repository '%s' to '%s'...\",\n            self._name,\n            self.destination,\n        )\n\n        git_dir = self.destination / \".git\"\n\n        if git_dir.exists():\n            # Check if the existing repository matches the configured repository\n            result = await run_process(\n                [\"git\", \"config\", \"--get\", \"remote.origin.url\"],\n                cwd=str(self.destination),\n            )\n            existing_repo_url = None\n            if result.stdout is not None:\n                existing_repo_url = _strip_auth_from_url(result.stdout.decode().strip())\n\n            if existing_repo_url != self._url:\n                raise ValueError(\n                    f\"The existing repository at {str(self.destination)} \"\n                    f\"does not match the configured repository {self._url}\"\n                )\n\n            self._logger.debug(\"Pulling latest changes from origin/%s\", self._branch)\n            # Update the existing repository\n            cmd = [\"git\", \"pull\", \"origin\"]\n            if self._branch:\n                cmd += [self._branch]\n            if self._include_submodules:\n                cmd += [\"--recurse-submodules\"]\n            cmd += [\"--depth\", \"1\"]\n            try:\n                await run_process(cmd, cwd=self.destination)\n                self._logger.debug(\"Successfully pulled latest changes\")\n            except subprocess.CalledProcessError as exc:\n                self._logger.error(\n                    f\"Failed to pull latest changes with exit code {exc}\"\n                )\n                shutil.rmtree(self.destination)\n                await self._clone_repo()\n\n        else:\n            await self._clone_repo()\n\n    async def _clone_repo(self):\n        \"\"\"\n        Clones the repository into the local destination.\n        \"\"\"\n        self._logger.debug(\"Cloning repository %s\", self._url)\n\n        repository_url = self._repository_url_with_credentials\n\n        cmd = [\n            \"git\",\n            \"clone\",\n            repository_url,\n        ]\n        if self._branch:\n            cmd += [\"--branch\", self._branch]\n        if self._include_submodules:\n            cmd += [\"--recurse-submodules\"]\n\n        # Limit git history and set path to clone to\n        cmd += [\"--depth\", \"1\", str(self.destination)]\n\n        try:\n            await run_process(cmd)\n        except subprocess.CalledProcessError as exc:\n            # Hide the command used to avoid leaking the access token\n            exc_chain = None if self._credentials else exc\n            raise RuntimeError(\n                f\"Failed to clone repository {self._url!r} with exit code\"\n                f\" {exc.returncode}.\"\n            ) from exc_chain\n\n    def __eq__(self, __value) -> bool:\n        if isinstance(__value, GitRepository):\n            return (\n                self._url == __value._url\n                and self._branch == __value._branch\n                and self._name == 
__value._name\n            )\n        return False\n\n    def __repr__(self) -> str:\n        return (\n            f\"GitRepository(name={self._name!r} repository={self._url!r},\"\n            f\" branch={self._branch!r})\"\n        )\n\n    def to_pull_step(self) -> Dict:\n        pull_step = {\n            \"prefect.deployments.steps.git_clone\": {\n                \"repository\": self._url,\n                \"branch\": self._branch,\n            }\n        }\n        if isinstance(self._credentials, Block):\n            pull_step[\"prefect.deployments.steps.git_clone\"][\n                \"credentials\"\n            ] = f\"{{{{ {self._credentials.get_block_placeholder()} }}}}\"\n        elif isinstance(self._credentials, dict):\n            if isinstance(self._credentials.get(\"access_token\"), Secret):\n                pull_step[\"prefect.deployments.steps.git_clone\"][\"credentials\"] = {\n                    **self._credentials,\n                    \"access_token\": (\n                        \"{{\"\n                        f\" {self._credentials['access_token'].get_block_placeholder()} }}}}\"\n                    ),\n                }\n            elif self._credentials.get(\"access_token\") is not None:\n                raise ValueError(\n                    \"Please save your access token as a Secret block before converting\"\n                    \" this storage object to a pull step.\"\n                )\n\n        return pull_step\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.GitRepository.pull_code","title":"pull_code async","text":"

Pulls the contents of the configured repository to the local filesystem.

Source code in prefect/runner/storage.py
async def pull_code(self):\n    \"\"\"\n    Pulls the contents of the configured repository to the local filesystem.\n    \"\"\"\n    self._logger.debug(\n        \"Pulling contents from repository '%s' to '%s'...\",\n        self._name,\n        self.destination,\n    )\n\n    git_dir = self.destination / \".git\"\n\n    if git_dir.exists():\n        # Check if the existing repository matches the configured repository\n        result = await run_process(\n            [\"git\", \"config\", \"--get\", \"remote.origin.url\"],\n            cwd=str(self.destination),\n        )\n        existing_repo_url = None\n        if result.stdout is not None:\n            existing_repo_url = _strip_auth_from_url(result.stdout.decode().strip())\n\n        if existing_repo_url != self._url:\n            raise ValueError(\n                f\"The existing repository at {str(self.destination)} \"\n                f\"does not match the configured repository {self._url}\"\n            )\n\n        self._logger.debug(\"Pulling latest changes from origin/%s\", self._branch)\n        # Update the existing repository\n        cmd = [\"git\", \"pull\", \"origin\"]\n        if self._branch:\n            cmd += [self._branch]\n        if self._include_submodules:\n            cmd += [\"--recurse-submodules\"]\n        cmd += [\"--depth\", \"1\"]\n        try:\n            await run_process(cmd, cwd=self.destination)\n            self._logger.debug(\"Successfully pulled latest changes\")\n        except subprocess.CalledProcessError as exc:\n            self._logger.error(\n                f\"Failed to pull latest changes with exit code {exc}\"\n            )\n            shutil.rmtree(self.destination)\n            await self._clone_repo()\n\n    else:\n        await self._clone_repo()\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RemoteStorage","title":"RemoteStorage","text":"

Pulls the contents of a remote storage location to the local filesystem.

Parameters:

Name Type Description Default url str

The URL of the remote storage location to pull from. Supports fsspec URLs. Some protocols may require an additional fsspec dependency to be installed. Refer to the fsspec docs for more details.

required pull_interval Optional[int]

The interval in seconds at which to pull contents from remote storage to local storage. If None, remote storage will perform a one-time sync.

60 **settings Any

Any additional settings to pass to the fsspec filesystem class.

{}

Examples:

Pull the contents of a remote storage location to the local filesystem:

from prefect.runner.storage import RemoteStorage\n\nstorage = RemoteStorage(url=\"s3://my-bucket/my-folder\")\n\nawait storage.pull_code()\n

Pull the contents of a remote storage location to the local filesystem with additional settings:

from prefect.runner.storage import RemoteStorage\nfrom prefect.blocks.system import Secret\n\nstorage = RemoteStorage(\n    url=\"s3://my-bucket/my-folder\",\n    # Use Secret blocks to keep credentials out of your code\n    key=Secret.load(\"my-aws-access-key\"),\n    secret=Secret.load(\"my-aws-secret-key\"),\n)\n\nawait storage.pull_code()\n
Source code in prefect/runner/storage.py
class RemoteStorage:\n    \"\"\"\n    Pulls the contents of a remote storage location to the local filesystem.\n\n    Parameters:\n        url: The URL of the remote storage location to pull from. Supports\n            `fsspec` URLs. Some protocols may require an additional `fsspec`\n            dependency to be installed. Refer to the\n            [`fsspec` docs](https://filesystem-spec.readthedocs.io/en/latest/api.html#other-known-implementations)\n            for more details.\n        pull_interval: The interval in seconds at which to pull contents from\n            remote storage to local storage. If None, remote storage will perform\n            a one-time sync.\n        **settings: Any additional settings to pass the `fsspec` filesystem class.\n\n    Examples:\n        Pull the contents of a remote storage location to the local filesystem:\n\n        ```python\n        from prefect.runner.storage import RemoteStorage\n\n        storage = RemoteStorage(url=\"s3://my-bucket/my-folder\")\n\n        await storage.pull_code()\n        ```\n\n        Pull the contents of a remote storage location to the local filesystem\n        with additional settings:\n\n        ```python\n        from prefect.runner.storage import RemoteStorage\n        from prefect.blocks.system import Secret\n\n        storage = RemoteStorage(\n            url=\"s3://my-bucket/my-folder\",\n            # Use Secret blocks to keep credentials out of your code\n            key=Secret.load(\"my-aws-access-key\"),\n            secret=Secret.load(\"my-aws-secret-key\"),\n        )\n\n        await storage.pull_code()\n        ```\n    \"\"\"\n\n    def __init__(\n        self,\n        url: str,\n        pull_interval: Optional[int] = 60,\n        **settings: Any,\n    ):\n        self._url = url\n        self._settings = settings\n        self._logger = get_logger(\"runner.storage.remote-storage\")\n        self._storage_base_path = Path.cwd()\n        self._pull_interval = pull_interval\n\n    @staticmethod\n    def _get_required_package_for_scheme(scheme: str) -> Optional[str]:\n        # attempt to discover the package name for the given scheme\n        # from fsspec's registry\n        known_implementation = fsspec.registry.get(scheme)\n        if known_implementation:\n            return known_implementation.__module__.split(\".\")[0]\n        # if we don't know the implementation, try to guess it for some\n        # common schemes\n        elif scheme == \"s3\":\n            return \"s3fs\"\n        elif scheme == \"gs\" or scheme == \"gcs\":\n            return \"gcsfs\"\n        elif scheme == \"abfs\" or scheme == \"az\":\n            return \"adlfs\"\n        else:\n            return None\n\n    @property\n    def _filesystem(self) -> fsspec.AbstractFileSystem:\n        scheme, _, _, _, _ = urlsplit(self._url)\n\n        def replace_blocks_with_values(obj: Any) -> Any:\n            if isinstance(obj, Block):\n                if hasattr(obj, \"get\"):\n                    return obj.get()\n                if hasattr(obj, \"value\"):\n                    return obj.value\n                else:\n                    return obj.dict()\n            return obj\n\n        settings_with_block_values = visit_collection(\n            self._settings, replace_blocks_with_values, return_data=True\n        )\n\n        return fsspec.filesystem(scheme, **settings_with_block_values)\n\n    def set_base_path(self, path: Path):\n        self._storage_base_path = path\n\n    @property\n    def pull_interval(self) -> 
Optional[int]:\n        \"\"\"\n        The interval at which contents from remote storage should be pulled to\n        local storage. If None, remote storage will perform a one-time sync.\n        \"\"\"\n        return self._pull_interval\n\n    @property\n    def destination(self) -> Path:\n        \"\"\"\n        The local file path to pull contents from remote storage to.\n        \"\"\"\n        return self._storage_base_path / self._remote_path\n\n    @property\n    def _remote_path(self) -> Path:\n        \"\"\"\n        The remote file path to pull contents from remote storage to.\n        \"\"\"\n        _, netloc, urlpath, _, _ = urlsplit(self._url)\n        return Path(netloc) / Path(urlpath.lstrip(\"/\"))\n\n    async def pull_code(self):\n        \"\"\"\n        Pulls contents from remote storage to the local filesystem.\n        \"\"\"\n        self._logger.debug(\n            \"Pulling contents from remote storage '%s' to '%s'...\",\n            self._url,\n            self.destination,\n        )\n\n        if not self.destination.exists():\n            self.destination.mkdir(parents=True, exist_ok=True)\n\n        remote_path = str(self._remote_path) + \"/\"\n\n        try:\n            await from_async.wait_for_call_in_new_thread(\n                create_call(\n                    self._filesystem.get,\n                    remote_path,\n                    str(self.destination),\n                    recursive=True,\n                )\n            )\n        except Exception as exc:\n            raise RuntimeError(\n                f\"Failed to pull contents from remote storage {self._url!r} to\"\n                f\" {self.destination!r}\"\n            ) from exc\n\n    def to_pull_step(self) -> dict:\n        \"\"\"\n        Returns a dictionary representation of the storage object that can be\n        used as a deployment pull step.\n        \"\"\"\n\n        def replace_block_with_placeholder(obj: Any) -> Any:\n            if isinstance(obj, Block):\n                return f\"{{{{ {obj.get_block_placeholder()} }}}}\"\n            return obj\n\n        settings_with_placeholders = visit_collection(\n            self._settings, replace_block_with_placeholder, return_data=True\n        )\n        required_package = self._get_required_package_for_scheme(\n            urlparse(self._url).scheme\n        )\n        step = {\n            \"prefect.deployments.steps.pull_from_remote_storage\": {\n                \"url\": self._url,\n                **settings_with_placeholders,\n            }\n        }\n        if required_package:\n            step[\"prefect.deployments.steps.pull_from_remote_storage\"][\n                \"requires\"\n            ] = required_package\n        return step\n\n    def __eq__(self, __value) -> bool:\n        \"\"\"\n        Equality check for runner storage objects.\n        \"\"\"\n        if isinstance(__value, RemoteStorage):\n            return self._url == __value._url and self._settings == __value._settings\n        return False\n\n    def __repr__(self) -> str:\n        return f\"RemoteStorage(url={self._url!r})\"\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RemoteStorage.destination","title":"destination: Path property","text":"

The local file path to pull contents from remote storage to.

","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RemoteStorage.pull_interval","title":"pull_interval: Optional[int] property","text":"

The interval at which contents from remote storage should be pulled to local storage. If None, remote storage will perform a one-time sync.

","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RemoteStorage.pull_code","title":"pull_code async","text":"

Pulls contents from remote storage to the local filesystem.

Source code in prefect/runner/storage.py
async def pull_code(self):\n    \"\"\"\n    Pulls contents from remote storage to the local filesystem.\n    \"\"\"\n    self._logger.debug(\n        \"Pulling contents from remote storage '%s' to '%s'...\",\n        self._url,\n        self.destination,\n    )\n\n    if not self.destination.exists():\n        self.destination.mkdir(parents=True, exist_ok=True)\n\n    remote_path = str(self._remote_path) + \"/\"\n\n    try:\n        await from_async.wait_for_call_in_new_thread(\n            create_call(\n                self._filesystem.get,\n                remote_path,\n                str(self.destination),\n                recursive=True,\n            )\n        )\n    except Exception as exc:\n        raise RuntimeError(\n            f\"Failed to pull contents from remote storage {self._url!r} to\"\n            f\" {self.destination!r}\"\n        ) from exc\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RemoteStorage.to_pull_step","title":"to_pull_step","text":"

Returns a dictionary representation of the storage object that can be used as a deployment pull step.

Source code in prefect/runner/storage.py
def to_pull_step(self) -> dict:\n    \"\"\"\n    Returns a dictionary representation of the storage object that can be\n    used as a deployment pull step.\n    \"\"\"\n\n    def replace_block_with_placeholder(obj: Any) -> Any:\n        if isinstance(obj, Block):\n            return f\"{{{{ {obj.get_block_placeholder()} }}}}\"\n        return obj\n\n    settings_with_placeholders = visit_collection(\n        self._settings, replace_block_with_placeholder, return_data=True\n    )\n    required_package = self._get_required_package_for_scheme(\n        urlparse(self._url).scheme\n    )\n    step = {\n        \"prefect.deployments.steps.pull_from_remote_storage\": {\n            \"url\": self._url,\n            **settings_with_placeholders,\n        }\n    }\n    if required_package:\n        step[\"prefect.deployments.steps.pull_from_remote_storage\"][\n            \"requires\"\n        ] = required_package\n    return step\n
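
For instance, an S3 URL with no extra settings would translate into roughly the following pull step (sketch; the exact dictionary depends on the configured settings and installed fsspec implementation):

from prefect.runner.storage import RemoteStorage\n\nstorage = RemoteStorage(url=\"s3://my-bucket/my-folder\")\nstorage.to_pull_step()\n# Roughly:\n# {\n#     \"prefect.deployments.steps.pull_from_remote_storage\": {\n#         \"url\": \"s3://my-bucket/my-folder\",\n#         \"requires\": \"s3fs\",\n#     }\n# }\n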
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage","title":"RunnerStorage","text":"

Bases: Protocol

A storage interface for a runner to use to retrieve remotely stored flow code.

Source code in prefect/runner/storage.py
@runtime_checkable\nclass RunnerStorage(Protocol):\n    \"\"\"\n    A storage interface for a runner to use to retrieve\n    remotely stored flow code.\n    \"\"\"\n\n    def set_base_path(self, path: Path):\n        \"\"\"\n        Sets the base path to use when pulling contents from remote storage to\n        local storage.\n        \"\"\"\n        ...\n\n    @property\n    def pull_interval(self) -> Optional[int]:\n        \"\"\"\n        The interval at which contents from remote storage should be pulled to\n        local storage. If None, remote storage will perform a one-time sync.\n        \"\"\"\n        ...\n\n    @property\n    def destination(self) -> Path:\n        \"\"\"\n        The local file path to pull contents from remote storage to.\n        \"\"\"\n        ...\n\n    async def pull_code(self):\n        \"\"\"\n        Pulls contents from remote storage to the local filesystem.\n        \"\"\"\n        ...\n\n    def to_pull_step(self) -> dict:\n        \"\"\"\n        Returns a dictionary representation of the storage object that can be\n        used as a deployment pull step.\n        \"\"\"\n        ...\n\n    def __eq__(self, __value) -> bool:\n        \"\"\"\n        Equality check for runner storage objects.\n        \"\"\"\n        ...\n
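
Because this is a runtime-checkable protocol, any object implementing these members can serve as runner storage; a bare-bones, purely illustrative sketch of a custom implementation:

from pathlib import Path\nfrom typing import Optional\n\nfrom prefect.runner.storage import RunnerStorage\n\nclass LocalFolderStorage:\n    \"\"\"Illustrative storage that simply points at an existing local folder.\"\"\"\n\n    def __init__(self, folder: str):\n        self._folder = Path(folder)\n        self._base = Path.cwd()\n\n    def set_base_path(self, path: Path):\n        self._base = path\n\n    @property\n    def pull_interval(self) -> Optional[int]:\n        return None  # one-time sync\n\n    @property\n    def destination(self) -> Path:\n        return self._folder\n\n    async def pull_code(self):\n        pass  # nothing to pull; the code is already local\n\n    def to_pull_step(self) -> dict:\n        # Illustrative pull step; adjust to your deployment's needs\n        return {\"prefect.deployments.steps.set_working_directory\": {\"directory\": str(self._folder)}}\n\n    def __eq__(self, other) -> bool:\n        return isinstance(other, LocalFolderStorage) and self._folder == other._folder\n\nassert isinstance(LocalFolderStorage(\".\"), RunnerStorage)  # structural check only\n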
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage.destination","title":"destination: Path property","text":"

The local file path to pull contents from remote storage to.

","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage.pull_interval","title":"pull_interval: Optional[int] property","text":"

The interval at which contents from remote storage should be pulled to local storage. If None, remote storage will perform a one-time sync.

","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage.pull_code","title":"pull_code async","text":"

Pulls contents from remote storage to the local filesystem.

Source code in prefect/runner/storage.py
async def pull_code(self):\n    \"\"\"\n    Pulls contents from remote storage to the local filesystem.\n    \"\"\"\n    ...\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage.set_base_path","title":"set_base_path","text":"

Sets the base path to use when pulling contents from remote storage to local storage.

Source code in prefect/runner/storage.py
def set_base_path(self, path: Path):\n    \"\"\"\n    Sets the base path to use when pulling contents from remote storage to\n    local storage.\n    \"\"\"\n    ...\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage.to_pull_step","title":"to_pull_step","text":"

Returns a dictionary representation of the storage object that can be used as a deployment pull step.

Source code in prefect/runner/storage.py
def to_pull_step(self) -> dict:\n    \"\"\"\n    Returns a dictionary representation of the storage object that can be\n    used as a deployment pull step.\n    \"\"\"\n    ...\n
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.create_storage_from_url","title":"create_storage_from_url","text":"

Creates a storage object from a URL.

Parameters:

Name Type Description Default url str

The URL to create a storage object from. Supports git and fsspec URLs.

required pull_interval Optional[int]

The interval, in seconds, at which to pull contents from remote storage to local storage.

60

Returns:

Name Type Description RunnerStorage RunnerStorage

A runner storage compatible object

Source code in prefect/runner/storage.py
def create_storage_from_url(\n    url: str, pull_interval: Optional[int] = 60\n) -> RunnerStorage:\n    \"\"\"\n    Creates a storage object from a URL.\n\n    Args:\n        url: The URL to create a storage object from. Supports git and `fsspec`\n            URLs.\n        pull_interval: The interval at which to pull contents from remote storage to\n            local storage\n\n    Returns:\n        RunnerStorage: A runner storage compatible object\n    \"\"\"\n    parsed_url = urlparse(url)\n    if parsed_url.scheme == \"git\" or parsed_url.path.endswith(\".git\"):\n        return GitRepository(url=url, pull_interval=pull_interval)\n    else:\n        return RemoteStorage(url=url, pull_interval=pull_interval)\n
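
For example (sketch; the URLs are illustrative):

from prefect.runner.storage import create_storage_from_url\n\ngit_storage = create_storage_from_url(\"https://github.com/org/repo.git\")  # -> GitRepository\ns3_storage = create_storage_from_url(\"s3://my-bucket/flows\", pull_interval=300)  # -> RemoteStorage\n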
","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/utils/","title":"utils","text":"","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runner/utils/#prefect.runner.utils","title":"prefect.runner.utils","text":"","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runner/utils/#prefect.runner.utils.inject_schemas_into_openapi","title":"inject_schemas_into_openapi","text":"

Augments the webserver's OpenAPI schema with additional schemas from deployments / flows / tasks.

Parameters:

Name Type Description Default webserver FastAPI

The FastAPI instance representing the webserver.

required schemas_to_inject Dict[str, Any]

A dictionary of OpenAPI schemas to integrate.

required

Returns:

Type Description Dict[str, Any]

The augmented OpenAPI schema dictionary.

Source code in prefect/runner/utils.py
def inject_schemas_into_openapi(\n    webserver: FastAPI, schemas_to_inject: Dict[str, Any]\n) -> Dict[str, Any]:\n    \"\"\"\n    Augments the webserver's OpenAPI schema with additional schemas from deployments / flows / tasks.\n\n    Args:\n        webserver: The FastAPI instance representing the webserver.\n        schemas_to_inject: A dictionary of OpenAPI schemas to integrate.\n\n    Returns:\n        The augmented OpenAPI schema dictionary.\n    \"\"\"\n    openapi_schema = get_openapi(\n        title=\"FastAPI Prefect Runner\", version=PREFECT_VERSION, routes=webserver.routes\n    )\n\n    augmented_schema = merge_definitions(schemas_to_inject, openapi_schema)\n    return update_refs_to_components(augmented_schema)\n
","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runner/utils/#prefect.runner.utils.merge_definitions","title":"merge_definitions","text":"

Integrates definitions from injected schemas into the OpenAPI components.

Parameters:

Name Type Description Default injected_schemas Dict[str, Any]

A dictionary of deployment-specific schemas.

required openapi_schema Dict[str, Any]

The base OpenAPI schema to update.

required Source code in prefect/runner/utils.py
def merge_definitions(\n    injected_schemas: Dict[str, Any], openapi_schema: Dict[str, Any]\n) -> Dict[str, Any]:\n    \"\"\"\n    Integrates definitions from injected schemas into the OpenAPI components.\n\n    Args:\n        injected_schemas: A dictionary of deployment-specific schemas.\n        openapi_schema: The base OpenAPI schema to update.\n    \"\"\"\n    openapi_schema_copy = deepcopy(openapi_schema)\n    components = openapi_schema_copy.setdefault(\"components\", {}).setdefault(\n        \"schemas\", {}\n    )\n    for definitions in injected_schemas.values():\n        if \"definitions\" in definitions:\n            for def_name, def_schema in definitions[\"definitions\"].items():\n                def_schema_copy = deepcopy(def_schema)\n                update_refs_in_schema(def_schema_copy, \"#/components/schemas/\")\n                components[def_name] = def_schema_copy\n    return openapi_schema_copy\n
","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runner/utils/#prefect.runner.utils.update_refs_in_schema","title":"update_refs_in_schema","text":"

Recursively replaces $ref with a new reference base in a schema item.

Parameters:

Name Type Description Default schema_item Any

A schema or part of a schema to update references in.

required new_ref str

The new base string to replace in $ref values.

required Source code in prefect/runner/utils.py
def update_refs_in_schema(schema_item: Any, new_ref: str) -> None:\n    \"\"\"\n    Recursively replaces `$ref` with a new reference base in a schema item.\n\n    Args:\n        schema_item: A schema or part of a schema to update references in.\n        new_ref: The new base string to replace in `$ref` values.\n    \"\"\"\n    if isinstance(schema_item, dict):\n        if \"$ref\" in schema_item:\n            schema_item[\"$ref\"] = schema_item[\"$ref\"].replace(\"#/definitions/\", new_ref)\n        for value in schema_item.values():\n            update_refs_in_schema(value, new_ref)\n    elif isinstance(schema_item, list):\n        for item in schema_item:\n            update_refs_in_schema(item, new_ref)\n
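
A small illustration of the rewrite (sketch; the schema is mutated in place):

from prefect.runner.utils import update_refs_in_schema\n\nschema = {\"properties\": {\"user\": {\"$ref\": \"#/definitions/User\"}}}\nupdate_refs_in_schema(schema, \"#/components/schemas/\")\n# schema is now:\n# {\"properties\": {\"user\": {\"$ref\": \"#/components/schemas/User\"}}}\n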
","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runner/utils/#prefect.runner.utils.update_refs_to_components","title":"update_refs_to_components","text":"

Updates all $ref fields in the OpenAPI schema to reference the components section.

Parameters:

Name Type Description Default openapi_schema Dict[str, Any]

The OpenAPI schema to modify $ref fields in.

required Source code in prefect/runner/utils.py
def update_refs_to_components(openapi_schema: Dict[str, Any]) -> Dict[str, Any]:\n    \"\"\"\n    Updates all `$ref` fields in the OpenAPI schema to reference the components section.\n\n    Args:\n        openapi_schema: The OpenAPI schema to modify `$ref` fields in.\n    \"\"\"\n    for path_item in openapi_schema.get(\"paths\", {}).values():\n        for operation in path_item.values():\n            schema = (\n                operation.get(\"requestBody\", {})\n                .get(\"content\", {})\n                .get(\"application/json\", {})\n                .get(\"schema\", {})\n            )\n            update_refs_in_schema(schema, \"#/components/schemas/\")\n\n    for definition in openapi_schema.get(\"definitions\", {}).values():\n        update_refs_in_schema(definition, \"#/components/schemas/\")\n\n    return openapi_schema\n
","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runtime/deployment/","title":"deployment","text":"","tags":["Python API","deployment context","context"]},{"location":"api-ref/prefect/runtime/deployment/#prefect.runtime.deployment","title":"prefect.runtime.deployment","text":"

Access attributes of the current deployment run dynamically.

Note that if a deployment is not currently being run, all attributes will return empty values.

You can mock the runtime attributes for testing purposes by setting environment variables prefixed with PREFECT__RUNTIME__DEPLOYMENT.

Example usage
from prefect.runtime import deployment\n\ndef get_task_runner():\n    task_runner_config = deployment.parameters.get(\"runner_config\", \"default config here\")\n    return DummyTaskRunner(task_runner_specs=task_runner_config)\n
Available attributes
  • id: the deployment's unique ID
  • name: the deployment's name
  • version: the deployment's version
  • flow_run_id: the current flow run ID for this deployment
  • parameters: the parameters that were passed to this run; note that these do not necessarily include default values set on the flow function, only the parameter values set on the deployment object or those directly provided via API for this run
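
For example, these attributes can be read directly inside a flow; outside of a deployment run they fall back to empty values (sketch):

from prefect import flow\nfrom prefect.runtime import deployment\n\n@flow\ndef my_flow():\n    print(f\"deployment: {deployment.name} (id={deployment.id})\")\n    print(f\"parameters: {deployment.parameters}\")\n\nmy_flow()\n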
","tags":["Python API","deployment context","context"]},{"location":"api-ref/prefect/runtime/flow_run/","title":"flow_run","text":"","tags":["Python API","flow run context","context"]},{"location":"api-ref/prefect/runtime/flow_run/#prefect.runtime.flow_run","title":"prefect.runtime.flow_run","text":"

Access attributes of the current flow run dynamically.

Note that if a flow run cannot be discovered, all attributes will return empty values.

You can mock the runtime attributes for testing purposes by setting environment variables prefixed with PREFECT__RUNTIME__FLOW_RUN.

Available attributes
  • id: the flow run's unique ID
  • tags: the flow run's set of tags
  • scheduled_start_time: the flow run's expected scheduled start time; defaults to now if not present
  • name: the name of the flow run
  • flow_name: the name of the flow
  • parameters: the parameters that were passed to this run; note that these do not necessarily include default values set on the flow function, only the parameter values explicitly passed for the run
  • parent_flow_run_id: the ID of the flow run that triggered this run, if any
  • parent_deployment_id: the ID of the deployment that triggered this run, if any
  • run_count: the number of times this flow run has been run
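
For example, inside a flow these attributes can be used for logging or conditional behavior (sketch):

from prefect import flow\nfrom prefect.runtime import flow_run\n\n@flow\ndef report():\n    print(f\"run {flow_run.name} ({flow_run.id}) of flow {flow_run.flow_name}\")\n    print(f\"tags: {flow_run.tags}\")\n\nreport()\n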
","tags":["Python API","flow run context","context"]},{"location":"api-ref/prefect/runtime/task_run/","title":"task_run","text":"","tags":["Python API","task run context","context","task run"]},{"location":"api-ref/prefect/runtime/task_run/#prefect.runtime.task_run","title":"prefect.runtime.task_run","text":"

Access attributes of the current task run dynamically.

Note that if a task run cannot be discovered, all attributes will return empty values.

You can mock the runtime attributes for testing purposes by setting environment variables prefixed with PREFECT__RUNTIME__TASK_RUN.

Available attributes
  • id: the task run's unique ID
  • name: the name of the task run
  • tags: the task run's set of tags
  • parameters: the parameters the task was called with
  • run_count: the number of times this task run has been run
  • task_name: the name of the task
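
For example, inside a task these attributes can be read directly (sketch):

from prefect import flow, task\nfrom prefect.runtime import task_run\n\n@task\ndef announce():\n    print(f\"task run {task_run.name} (attempt {task_run.run_count}) of task {task_run.task_name}\")\n\n@flow\ndef my_flow():\n    announce()\n\nmy_flow()\n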
","tags":["Python API","task run context","context","task run"]},{"location":"api-ref/prefect/utilities/annotations/","title":"annotations","text":"","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations","title":"prefect.utilities.annotations","text":"","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.BaseAnnotation","title":"BaseAnnotation","text":"

Bases: namedtuple('BaseAnnotation', field_names='value'), ABC, Generic[T]

Base class for Prefect annotation types.

Inherits from namedtuple for unpacking support in other tools.

Source code in prefect/utilities/annotations.py
class BaseAnnotation(\n    namedtuple(\"BaseAnnotation\", field_names=\"value\"), ABC, Generic[T]\n):\n    \"\"\"\n    Base class for Prefect annotation types.\n\n    Inherits from `namedtuple` for unpacking support in another tools.\n    \"\"\"\n\n    def unwrap(self) -> T:\n        if sys.version_info < (3, 8):\n            # cannot simply return self.value due to recursion error in Python 3.7\n            # also _asdict does not follow convention; it's not an internal method\n            # https://stackoverflow.com/a/26180604\n            return self._asdict()[\"value\"]\n        else:\n            return self.value\n\n    def rewrap(self, value: T) -> \"BaseAnnotation[T]\":\n        return type(self)(value)\n\n    def __eq__(self, other: object) -> bool:\n        if not type(self) == type(other):\n            return False\n        return self.unwrap() == other.unwrap()\n\n    def __repr__(self) -> str:\n        return f\"{type(self).__name__}({self.value!r})\"\n
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.NotSet","title":"NotSet","text":"

Singleton to distinguish None from a value that is not provided by the user.

Source code in prefect/utilities/annotations.py
class NotSet:\n    \"\"\"\n    Singleton to distinguish `None` from a value that is not provided by the user.\n    \"\"\"\n
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.allow_failure","title":"allow_failure","text":"

Bases: BaseAnnotation[T]

Wrapper for states or futures.

Indicates that the upstream run for this input can be failed.

Generally, Prefect will not allow a downstream run to start if any of its inputs are failed. This annotation allows you to opt into receiving a failed input downstream.

If the input is from a failed run, the attached exception will be passed to your function.
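
A minimal sketch, with illustrative task names, of opting a downstream task into receiving an upstream failure:

from prefect import flow, task\nfrom prefect.utilities.annotations import allow_failure\n\n@task\ndef may_fail():\n    raise ValueError('upstream failed')\n\n@task\ndef handle(upstream_result):\n    # Receives the exception raised upstream instead of never starting\n    print(upstream_result)\n\n@flow\ndef my_flow():\n    future = may_fail.submit()\n    handle.submit(allow_failure(future))\n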

Source code in prefect/utilities/annotations.py
class allow_failure(BaseAnnotation[T]):\n    \"\"\"\n    Wrapper for states or futures.\n\n    Indicates that the upstream run for this input can be failed.\n\n    Generally, Prefect will not allow a downstream run to start if any of its inputs\n    are failed. This annotation allows you to opt into receiving a failed input\n    downstream.\n\n    If the input is from a failed run, the attached exception will be passed to your\n    function.\n    \"\"\"\n
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.quote","title":"quote","text":"

Bases: BaseAnnotation[T]

Simple wrapper to mark an expression as a different type so it will not be coerced by Prefect. For example, if you want to return a state from a flow without having the flow assume that state.

quote will also instruct prefect to ignore introspection of the wrapped object when passed as flow or task parameter. Parameter introspection can be a significant performance hit when the object is a large collection, e.g. a large dictionary or DataFrame, and each element needs to be visited. This will disable task dependency tracking for the wrapped object, but likely will increase performance.

@task\ndef my_task(df):\n    ...\n\n@flow\ndef my_flow():\n    my_task(quote(df))\n
Source code in prefect/utilities/annotations.py
class quote(BaseAnnotation[T]):\n    \"\"\"\n    Simple wrapper to mark an expression as a different type so it will not be coerced\n    by Prefect. For example, if you want to return a state from a flow without having\n    the flow assume that state.\n\n    quote will also instruct prefect to ignore introspection of the wrapped object\n    when passed as flow or task parameter. Parameter introspection can be a\n    significant performance hit when the object is a large collection,\n    e.g. a large dictionary or DataFrame, and each element needs to be visited. This\n    will disable task dependency tracking for the wrapped object, but likely will\n    increase performance.\n\n    ```\n    @task\n    def my_task(df):\n        ...\n\n    @flow\n    def my_flow():\n        my_task(quote(df))\n    ```\n    \"\"\"\n\n    def unquote(self) -> T:\n        return self.unwrap()\n
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.unmapped","title":"unmapped","text":"

Bases: BaseAnnotation[T]

Wrapper for iterables.

Indicates that this input should be sent as-is to all runs created during a mapping operation instead of being split.
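
A small sketch, using illustrative names, of keeping one argument whole during a mapping operation:

from prefect import flow, task\nfrom prefect.utilities.annotations import unmapped\n\n@task\ndef add_offset(value, offset):\n    return value + offset\n\n@flow\ndef my_flow():\n    # offset is sent as-is to every mapped run instead of being split\n    add_offset.map([1, 2, 3], offset=unmapped(10))\n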

Source code in prefect/utilities/annotations.py
class unmapped(BaseAnnotation[T]):\n    \"\"\"\n    Wrapper for iterables.\n\n    Indicates that this input should be sent as-is to all runs created during a mapping\n    operation instead of being split.\n    \"\"\"\n\n    def __getitem__(self, _) -> T:\n        # Internally, this acts as an infinite array where all items are the same value\n        return self.unwrap()\n
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/asyncutils/","title":"asyncutils","text":"","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils","title":"prefect.utilities.asyncutils","text":"

Utilities for interoperability with async functions and workers from various contexts.

","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.GatherIncomplete","title":"GatherIncomplete","text":"

Bases: RuntimeError

Used to indicate retrieving gather results before completion

Source code in prefect/utilities/asyncutils.py
class GatherIncomplete(RuntimeError):\n    \"\"\"Used to indicate retrieving gather results before completion\"\"\"\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.GatherTaskGroup","title":"GatherTaskGroup","text":"

Bases: TaskGroup

A task group that gathers results.

AnyIO does not include support for gather. This class extends the TaskGroup interface to allow simple gathering.

See https://github.com/agronholm/anyio/issues/100

This class should be instantiated with create_gather_task_group.

Source code in prefect/utilities/asyncutils.py
class GatherTaskGroup(anyio.abc.TaskGroup):\n    \"\"\"\n    A task group that gathers results.\n\n    AnyIO does not include support `gather`. This class extends the `TaskGroup`\n    interface to allow simple gathering.\n\n    See https://github.com/agronholm/anyio/issues/100\n\n    This class should be instantiated with `create_gather_task_group`.\n    \"\"\"\n\n    def __init__(self, task_group: anyio.abc.TaskGroup):\n        self._results: Dict[UUID, Any] = {}\n        # The concrete task group implementation to use\n        self._task_group: anyio.abc.TaskGroup = task_group\n\n    async def _run_and_store(self, key, fn, args):\n        self._results[key] = await fn(*args)\n\n    def start_soon(self, fn, *args) -> UUID:\n        key = uuid4()\n        # Put a placeholder in-case the result is retrieved earlier\n        self._results[key] = GatherIncomplete\n        self._task_group.start_soon(self._run_and_store, key, fn, args)\n        return key\n\n    async def start(self, fn, *args):\n        \"\"\"\n        Since `start` returns the result of `task_status.started()` but here we must\n        return the key instead, we just won't support this method for now.\n        \"\"\"\n        raise RuntimeError(\"`GatherTaskGroup` does not support `start`.\")\n\n    def get_result(self, key: UUID) -> Any:\n        result = self._results[key]\n        if result is GatherIncomplete:\n            raise GatherIncomplete(\n                \"Task is not complete. \"\n                \"Results should not be retrieved until the task group exits.\"\n            )\n        return result\n\n    async def __aenter__(self):\n        await self._task_group.__aenter__()\n        return self\n\n    async def __aexit__(self, *tb):\n        try:\n            retval = await self._task_group.__aexit__(*tb)\n            return retval\n        finally:\n            del self._task_group\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.GatherTaskGroup.start","title":"start async","text":"

Since start returns the result of task_status.started() but here we must return the key instead, we just won't support this method for now.

Source code in prefect/utilities/asyncutils.py
async def start(self, fn, *args):\n    \"\"\"\n    Since `start` returns the result of `task_status.started()` but here we must\n    return the key instead, we just won't support this method for now.\n    \"\"\"\n    raise RuntimeError(\"`GatherTaskGroup` does not support `start`.\")\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.add_event_loop_shutdown_callback","title":"add_event_loop_shutdown_callback async","text":"

Registers the given callable as a callback to run on event loop closure. The callable must be a coroutine function; it will be awaited when the current event loop is shutting down.

Requires use of asyncio.run() which waits for async generator shutdown by default or explicit call of asyncio.shutdown_asyncgens(). If the application is entered with asyncio.run_until_complete() and the user calls asyncio.close() without the generator shutdown call, this will not trigger callbacks.

asyncio does not provide any other way to clean up a resource when the event loop is about to close.

Source code in prefect/utilities/asyncutils.py
async def add_event_loop_shutdown_callback(coroutine_fn: Callable[[], Awaitable]):\n    \"\"\"\n    Adds a callback to the given callable on event loop closure. The callable must be\n    a coroutine function. It will be awaited when the current event loop is shutting\n    down.\n\n    Requires use of `asyncio.run()` which waits for async generator shutdown by\n    default or explicit call of `asyncio.shutdown_asyncgens()`. If the application\n    is entered with `asyncio.run_until_complete()` and the user calls\n    `asyncio.close()` without the generator shutdown call, this will not trigger\n    callbacks.\n\n    asyncio does not provided _any_ other way to clean up a resource when the event\n    loop is about to close.\n    \"\"\"\n\n    async def on_shutdown(key):\n        # It appears that EVENT_LOOP_GC_REFS is somehow being garbage collected early.\n        # We hold a reference to it so as to preserve it, at least for the lifetime of\n        # this coroutine. See the issue below for the initial report/discussion:\n        # https://github.com/PrefectHQ/prefect/issues/7709#issuecomment-1560021109\n        _ = EVENT_LOOP_GC_REFS\n        try:\n            yield\n        except GeneratorExit:\n            await coroutine_fn()\n            # Remove self from the garbage collection set\n            EVENT_LOOP_GC_REFS.pop(key)\n\n    # Create the iterator and store it in a global variable so it is not garbage\n    # collected. If the iterator is garbage collected before the event loop closes, the\n    # callback will not run. Since this function does not know the scope of the event\n    # loop that is calling it, a reference with global scope is necessary to ensure\n    # garbage collection does not occur until after event loop closure.\n    key = id(on_shutdown)\n    EVENT_LOOP_GC_REFS[key] = on_shutdown(key)\n\n    # Begin iterating so it will be cleaned up as an incomplete generator\n    try:\n        await EVENT_LOOP_GC_REFS[key].__anext__()\n    # There is a poorly understood edge case we've seen in CI where the key is\n    # removed from the dict before we begin generator iteration.\n    except KeyError:\n        logger.warning(\"The event loop shutdown callback was not properly registered. \")\n        pass\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.create_gather_task_group","title":"create_gather_task_group","text":"

Create a new task group that gathers results

Source code in prefect/utilities/asyncutils.py
def create_gather_task_group() -> GatherTaskGroup:\n    \"\"\"Create a new task group that gathers results\"\"\"\n    # This function matches the AnyIO API which uses callables since the concrete\n    # task group class depends on the async library being used and cannot be\n    # determined until runtime\n    return GatherTaskGroup(anyio.create_task_group())\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.gather","title":"gather async","text":"

Run calls concurrently and gather their results.

Unlike asyncio.gather, this expects to receive callables, not coroutines. This matches anyio semantics.
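
For example, a minimal sketch passing callables and collecting results in call order; the fetch coroutine is illustrative:

import anyio\n\nfrom prefect.utilities.asyncutils import gather\n\nasync def fetch(n):\n    await anyio.sleep(0.1)\n    return n * 2\n\nasync def main():\n    # Callables are passed, not awaited coroutines\n    results = await gather(lambda: fetch(1), lambda: fetch(2))\n    assert results == [2, 4]\n\nanyio.run(main)\n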

Source code in prefect/utilities/asyncutils.py
async def gather(*calls: Callable[[], Coroutine[Any, Any, T]]) -> List[T]:\n    \"\"\"\n    Run calls concurrently and gather their results.\n\n    Unlike `asyncio.gather` this expects to receive _callables_ not _coroutines_.\n    This matches `anyio` semantics.\n    \"\"\"\n    keys = []\n    async with create_gather_task_group() as tg:\n        for call in calls:\n            keys.append(tg.start_soon(call))\n    return [tg.get_result(key) for key in keys]\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.is_async_fn","title":"is_async_fn","text":"

Returns True if a function returns a coroutine.

See https://github.com/microsoft/pyright/issues/2142 for an example use

Source code in prefect/utilities/asyncutils.py
def is_async_fn(\n    func: Union[Callable[P, R], Callable[P, Awaitable[R]]],\n) -> TypeGuard[Callable[P, Awaitable[R]]]:\n    \"\"\"\n    Returns `True` if a function returns a coroutine.\n\n    See https://github.com/microsoft/pyright/issues/2142 for an example use\n    \"\"\"\n    while hasattr(func, \"__wrapped__\"):\n        func = func.__wrapped__\n\n    return inspect.iscoroutinefunction(func)\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.is_async_gen_fn","title":"is_async_gen_fn","text":"

Returns True if a function is an async generator.

Source code in prefect/utilities/asyncutils.py
def is_async_gen_fn(func):\n    \"\"\"\n    Returns `True` if a function is an async generator.\n    \"\"\"\n    while hasattr(func, \"__wrapped__\"):\n        func = func.__wrapped__\n\n    return inspect.isasyncgenfunction(func)\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.raise_async_exception_in_thread","title":"raise_async_exception_in_thread","text":"

Raise an exception in a thread asynchronously.

This will not interrupt long-running system calls like sleep or wait.

Source code in prefect/utilities/asyncutils.py
def raise_async_exception_in_thread(thread: Thread, exc_type: Type[BaseException]):\n    \"\"\"\n    Raise an exception in a thread asynchronously.\n\n    This will not interrupt long-running system calls like `sleep` or `wait`.\n    \"\"\"\n    ret = ctypes.pythonapi.PyThreadState_SetAsyncExc(\n        ctypes.c_long(thread.ident), ctypes.py_object(exc_type)\n    )\n    if ret == 0:\n        raise ValueError(\"Thread not found.\")\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.run_async_from_worker_thread","title":"run_async_from_worker_thread","text":"

Runs an async function in the main thread's event loop, blocking the worker thread until completion

Source code in prefect/utilities/asyncutils.py
def run_async_from_worker_thread(\n    __fn: Callable[..., Awaitable[T]], *args: Any, **kwargs: Any\n) -> T:\n    \"\"\"\n    Runs an async function in the main thread's event loop, blocking the worker\n    thread until completion\n    \"\"\"\n    call = partial(__fn, *args, **kwargs)\n    return anyio.from_thread.run(call)\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.run_sync","title":"run_sync","text":"

Runs a coroutine from a synchronous context. A thread will be spawned to run the event loop if necessary, which allows coroutines to run in environments like Jupyter notebooks where the event loop runs on the main thread.

Parameters:

Name Type Description Default coroutine Coroutine[Any, Any, T]

The coroutine to run.

required

Returns:

Type Description T

The return value of the coroutine.

Example

Basic usage:

async def my_async_function(x: int) -> int:\n    return x + 1\n\nrun_sync(my_async_function(1))\n

Source code in prefect/utilities/asyncutils.py
def run_sync(coroutine: Coroutine[Any, Any, T]) -> T:\n    \"\"\"\n    Runs a coroutine from a synchronous context. A thread will be spawned\n    to run the event loop if necessary, which allows coroutines to run in\n    environments like Jupyter notebooks where the event loop runs on the main\n    thread.\n\n    Args:\n        coroutine: The coroutine to run.\n\n    Returns:\n        The return value of the coroutine.\n\n    Example:\n        Basic usage:\n        ```python\n        async def my_async_function(x: int) -> int:\n            return x + 1\n\n        run_sync(my_async_function(1))\n        ```\n    \"\"\"\n    # ensure context variables are properly copied to the async frame\n    context = copy_context()\n    try:\n        loop = asyncio.get_running_loop()\n    except RuntimeError:\n        loop = None\n\n    if loop and loop.is_running():\n        with ThreadPoolExecutor() as executor:\n            future = executor.submit(context.run, asyncio.run, coroutine)\n            return cast(T, future.result())\n    else:\n        return context.run(asyncio.run, coroutine)\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.run_sync_in_interruptible_worker_thread","title":"run_sync_in_interruptible_worker_thread async","text":"

Runs a sync function in a new interruptible worker thread so that the main thread's event loop is not blocked

Unlike the anyio function, this performs best-effort cancellation of the thread using the C API. Cancellation will not interrupt system calls like sleep.

Source code in prefect/utilities/asyncutils.py
async def run_sync_in_interruptible_worker_thread(\n    __fn: Callable[..., T], *args: Any, **kwargs: Any\n) -> T:\n    \"\"\"\n    Runs a sync function in a new interruptible worker thread so that the main\n    thread's event loop is not blocked\n\n    Unlike the anyio function, this performs best-effort cancellation of the\n    thread using the C API. Cancellation will not interrupt system calls like\n    `sleep`.\n    \"\"\"\n\n    class NotSet:\n        pass\n\n    thread: Thread = None\n    result = NotSet\n    event = asyncio.Event()\n    loop = asyncio.get_running_loop()\n\n    def capture_worker_thread_and_result():\n        # Captures the worker thread that AnyIO is using to execute the function so\n        # the main thread can perform actions on it\n        nonlocal thread, result\n        try:\n            thread = threading.current_thread()\n            result = __fn(*args, **kwargs)\n        except BaseException as exc:\n            result = exc\n            raise\n        finally:\n            loop.call_soon_threadsafe(event.set)\n\n    async def send_interrupt_to_thread():\n        # This task waits until the result is returned from the thread, if cancellation\n        # occurs during that time, we will raise the exception in the thread as well\n        try:\n            await event.wait()\n        except anyio.get_cancelled_exc_class():\n            # NOTE: We could send a SIGINT here which allow us to interrupt system\n            # calls but the interrupt bubbles from the child thread into the main thread\n            # and there is not a clear way to prevent it.\n            raise_async_exception_in_thread(thread, anyio.get_cancelled_exc_class())\n            raise\n\n    async with anyio.create_task_group() as tg:\n        tg.start_soon(send_interrupt_to_thread)\n        tg.start_soon(\n            partial(\n                anyio.to_thread.run_sync,\n                capture_worker_thread_and_result,\n                cancellable=True,\n                limiter=get_thread_limiter(),\n            )\n        )\n\n    assert result is not NotSet\n    return result\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.run_sync_in_worker_thread","title":"run_sync_in_worker_thread async","text":"

Runs a sync function in a new worker thread so that the main thread's event loop is not blocked

Unlike the anyio function, this defaults to a cancellable thread and does not allow passing arguments to the anyio function so users can pass kwargs to their function.

Note that cancellation of threads will not result in interrupted computation; the thread may continue running and the outcome will simply be ignored.
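
A minimal sketch of offloading a blocking call, with an illustrative function name:

import time\n\nimport anyio\n\nfrom prefect.utilities.asyncutils import run_sync_in_worker_thread\n\ndef blocking_work(seconds):\n    time.sleep(seconds)\n    return seconds\n\nasync def main():\n    # The event loop stays responsive while the sync call runs in a thread\n    result = await run_sync_in_worker_thread(blocking_work, 1)\n    assert result == 1\n\nanyio.run(main)\n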

Source code in prefect/utilities/asyncutils.py
async def run_sync_in_worker_thread(\n    __fn: Callable[..., T], *args: Any, **kwargs: Any\n) -> T:\n    \"\"\"\n    Runs a sync function in a new worker thread so that the main thread's event loop\n    is not blocked\n\n    Unlike the anyio function, this defaults to a cancellable thread and does not allow\n    passing arguments to the anyio function so users can pass kwargs to their function.\n\n    Note that cancellation of threads will not result in interrupted computation, the\n    thread may continue running \u2014 the outcome will just be ignored.\n    \"\"\"\n    call = partial(__fn, *args, **kwargs)\n    return await anyio.to_thread.run_sync(\n        call, cancellable=True, limiter=get_thread_limiter()\n    )\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.sync","title":"sync","text":"

Call an async function from a synchronous context. Block until completion.

If in an asynchronous context, the code will run in a separate loop instead of failing, but a warning will be displayed since this is not recommended.
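
A minimal sketch of calling a coroutine function from plain synchronous code with no running event loop:

from prefect.utilities.asyncutils import sync\n\nasync def add_one(x):\n    return x + 1\n\nresult = sync(add_one, 1)\nassert result == 2\n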

Source code in prefect/utilities/asyncutils.py
def sync(__async_fn: Callable[P, Awaitable[T]], *args: P.args, **kwargs: P.kwargs) -> T:\n    \"\"\"\n    Call an async function from a synchronous context. Block until completion.\n\n    If in an asynchronous context, we will run the code in a separate loop instead of\n    failing but a warning will be displayed since this is not recommended.\n    \"\"\"\n    if in_async_main_thread():\n        warnings.warn(\n            \"`sync` called from an asynchronous context; \"\n            \"you should `await` the async function directly instead.\"\n        )\n        with anyio.start_blocking_portal() as portal:\n            return portal.call(partial(__async_fn, *args, **kwargs))\n    elif in_async_worker_thread():\n        # In a sync context but we can access the event loop thread; send the async\n        # call to the parent\n        return run_async_from_worker_thread(__async_fn, *args, **kwargs)\n    else:\n        # In a sync context and there is no event loop; just create an event loop\n        # to run the async code then tear it down\n        return run_async_in_new_loop(__async_fn, *args, **kwargs)\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.sync_compatible","title":"sync_compatible","text":"

Converts an async function into a dual async and sync function.

When the returned function is called, we will attempt to determine the best way to enter the async function.

  • If in a thread with a running event loop, we will return the coroutine for the caller to await. This is normal async behavior.
  • If in a blocking worker thread with access to an event loop in another thread, we will submit the async method to the event loop.
  • If we cannot find an event loop, we will create a new one and run the async method then tear down the loop.
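
A rough sketch of the resulting dual behavior, using an illustrative coroutine function:

import asyncio\n\nfrom prefect.utilities.asyncutils import sync_compatible\n\n@sync_compatible\nasync def add_one(x):\n    return x + 1\n\n# Sync context with no event loop: the call blocks and returns the value\nassert add_one(1) == 2\n\nasync def main():\n    # Async context: the call returns a coroutine to await\n    return await add_one(2)\n\nassert asyncio.run(main()) == 3\n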
Source code in prefect/utilities/asyncutils.py
def sync_compatible(async_fn: T) -> T:\n    \"\"\"\n    Converts an async function into a dual async and sync function.\n\n    When the returned function is called, we will attempt to determine the best way\n    to enter the async function.\n\n    - If in a thread with a running event loop, we will return the coroutine for the\n        caller to await. This is normal async behavior.\n    - If in a blocking worker thread with access to an event loop in another thread, we\n        will submit the async method to the event loop.\n    - If we cannot find an event loop, we will create a new one and run the async method\n        then tear down the loop.\n    \"\"\"\n\n    @wraps(async_fn)\n    def coroutine_wrapper(*args, **kwargs):\n        from prefect._internal.concurrency.api import create_call, from_sync\n        from prefect._internal.concurrency.calls import get_current_call, logger\n        from prefect._internal.concurrency.event_loop import get_running_loop\n        from prefect._internal.concurrency.threads import get_global_loop\n        from prefect.settings import PREFECT_EXPERIMENTAL_DISABLE_SYNC_COMPAT\n\n        if PREFECT_EXPERIMENTAL_DISABLE_SYNC_COMPAT:\n            return async_fn(*args, **kwargs)\n\n        global_thread_portal = get_global_loop()\n        current_thread = threading.current_thread()\n        current_call = get_current_call()\n        current_loop = get_running_loop()\n\n        if current_thread.ident == global_thread_portal.thread.ident:\n            logger.debug(f\"{async_fn} --> return coroutine for internal await\")\n            # In the prefect async context; return the coro for us to await\n            return async_fn(*args, **kwargs)\n        elif in_async_main_thread() and (\n            not current_call or is_async_fn(current_call.fn)\n        ):\n            # In the main async context; return the coro for them to await\n            logger.debug(f\"{async_fn} --> return coroutine for user await\")\n            return async_fn(*args, **kwargs)\n        elif in_async_worker_thread():\n            # In a sync context but we can access the event loop thread; send the async\n            # call to the parent\n            return run_async_from_worker_thread(async_fn, *args, **kwargs)\n        elif current_loop is not None:\n            logger.debug(f\"{async_fn} --> run async in global loop portal\")\n            # An event loop is already present but we are in a sync context, run the\n            # call in Prefect's event loop thread\n            return from_sync.call_soon_in_loop_thread(\n                create_call(async_fn, *args, **kwargs)\n            ).result()\n        else:\n            logger.debug(f\"{async_fn} --> run async in new loop\")\n            # Run in a new event loop, but use a `Call` for nested context detection\n            call = create_call(async_fn, *args, **kwargs)\n            return call()\n\n    # TODO: This is breaking type hints on the callable... mypy is behind the curve\n    #       on argument annotations. We can still fix this for editors though.\n    if is_async_fn(async_fn):\n        wrapper = coroutine_wrapper\n    elif is_async_gen_fn(async_fn):\n        raise ValueError(\"Async generators cannot yet be marked as `sync_compatible`\")\n    else:\n        raise TypeError(\"The decorated function must be async.\")\n\n    wrapper.aio = async_fn\n    return wrapper\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/callables/","title":"callables","text":"","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables","title":"prefect.utilities.callables","text":"

Utilities for working with Python callables.

","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.ParameterSchema","title":"ParameterSchema","text":"

Bases: BaseModel

Simple data model corresponding to an OpenAPI Schema.

Source code in prefect/utilities/callables.py
class ParameterSchema(pydantic.BaseModel):\n    \"\"\"Simple data model corresponding to an OpenAPI `Schema`.\"\"\"\n\n    title: Literal[\"Parameters\"] = \"Parameters\"\n    type: Literal[\"object\"] = \"object\"\n    properties: Dict[str, Any] = pydantic.Field(default_factory=dict)\n    required: List[str] = None\n    definitions: Optional[Dict[str, Any]] = None\n\n    def dict(self, *args, **kwargs):\n        \"\"\"Exclude `None` fields by default to comply with\n        the OpenAPI spec.\n        \"\"\"\n        kwargs.setdefault(\"exclude_none\", True)\n        return super().dict(*args, **kwargs)\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.ParameterSchema.dict","title":"dict","text":"

Exclude None fields by default to comply with the OpenAPI spec.

Source code in prefect/utilities/callables.py
def dict(self, *args, **kwargs):\n    \"\"\"Exclude `None` fields by default to comply with\n    the OpenAPI spec.\n    \"\"\"\n    kwargs.setdefault(\"exclude_none\", True)\n    return super().dict(*args, **kwargs)\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.call_with_parameters","title":"call_with_parameters","text":"

Call a function with parameters extracted with get_call_parameters

The function must have an identical signature to the original function or this will fail. If you need to send to a function with a different signature, extract the args/kwargs using parameters_to_positional_and_keyword directly
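
A minimal sketch pairing the two helpers; the example function is illustrative:

from prefect.utilities.callables import call_with_parameters, get_call_parameters\n\ndef greet(name, punctuation='!'):\n    return name + punctuation\n\n# Capture the bound parameters once, then replay the call later\nparameters = get_call_parameters(greet, ('Marvin',), {})\nassert call_with_parameters(greet, parameters) == 'Marvin!'\n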

Source code in prefect/utilities/callables.py
def call_with_parameters(fn: Callable, parameters: Dict[str, Any]):\n    \"\"\"\n    Call a function with parameters extracted with `get_call_parameters`\n\n    The function _must_ have an identical signature to the original function or this\n    will fail. If you need to send to a function with a different signature, extract\n    the args/kwargs using `parameters_to_positional_and_keyword` directly\n    \"\"\"\n    args, kwargs = parameters_to_args_kwargs(fn, parameters)\n    return fn(*args, **kwargs)\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.cloudpickle_wrapped_call","title":"cloudpickle_wrapped_call","text":"

Serializes a function call using cloudpickle then returns a callable which will execute that call and return a cloudpickle serialized return value

This is particularly useful for sending calls to libraries that only use the Python built-in pickler (e.g. anyio.to_process and multiprocessing) but may require a wider range of pickling support.

Source code in prefect/utilities/callables.py
def cloudpickle_wrapped_call(\n    __fn: Callable, *args: Any, **kwargs: Any\n) -> Callable[[], bytes]:\n    \"\"\"\n    Serializes a function call using cloudpickle then returns a callable which will\n    execute that call and return a cloudpickle serialized return value\n\n    This is particularly useful for sending calls to libraries that only use the Python\n    built-in pickler (e.g. `anyio.to_process` and `multiprocessing`) but may require\n    a wider range of pickling support.\n    \"\"\"\n    payload = cloudpickle.dumps((__fn, args, kwargs))\n    return partial(_run_serialized_call, payload)\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.collapse_variadic_parameters","title":"collapse_variadic_parameters","text":"

Given a parameter dictionary, move any parameters not present in the signature into the variadic keyword argument.

Example:

```python\ndef foo(a, b, **kwargs):\n    pass\n\nparameters = {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\ncollapse_variadic_parameters(foo, parameters)\n# {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\n```\n
Source code in prefect/utilities/callables.py
def collapse_variadic_parameters(\n    fn: Callable, parameters: Dict[str, Any]\n) -> Dict[str, Any]:\n    \"\"\"\n    Given a parameter dictionary, move any parameters stored not present in the\n    signature into the variadic keyword argument.\n\n    Example:\n\n        ```python\n        def foo(a, b, **kwargs):\n            pass\n\n        parameters = {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\n        collapse_variadic_parameters(foo, parameters)\n        # {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\n        ```\n    \"\"\"\n    signature_parameters = inspect.signature(fn).parameters\n    variadic_key = None\n    for key, parameter in signature_parameters.items():\n        if parameter.kind == parameter.VAR_KEYWORD:\n            variadic_key = key\n            break\n\n    missing_parameters = set(parameters.keys()) - set(signature_parameters.keys())\n\n    if not variadic_key and missing_parameters:\n        raise ValueError(\n            f\"Signature for {fn} does not include any variadic keyword argument \"\n            \"but parameters were given that are not present in the signature.\"\n        )\n\n    if variadic_key and not missing_parameters:\n        # variadic key is present but no missing parameters, return parameters unchanged\n        return parameters\n\n    new_parameters = parameters.copy()\n    if variadic_key:\n        new_parameters[variadic_key] = {}\n\n    for key in missing_parameters:\n        new_parameters[variadic_key][key] = new_parameters.pop(key)\n\n    return new_parameters\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.explode_variadic_parameter","title":"explode_variadic_parameter","text":"

Given a parameter dictionary, move any parameters stored in a variadic keyword argument parameter (i.e. **kwargs) into the top level.

Example:

```python\ndef foo(a, b, **kwargs):\n    pass\n\nparameters = {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\nexplode_variadic_parameter(foo, parameters)\n# {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\n```\n
Source code in prefect/utilities/callables.py
def explode_variadic_parameter(\n    fn: Callable, parameters: Dict[str, Any]\n) -> Dict[str, Any]:\n    \"\"\"\n    Given a parameter dictionary, move any parameters stored in a variadic keyword\n    argument parameter (i.e. **kwargs) into the top level.\n\n    Example:\n\n        ```python\n        def foo(a, b, **kwargs):\n            pass\n\n        parameters = {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\n        explode_variadic_parameter(foo, parameters)\n        # {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\n        ```\n    \"\"\"\n    variadic_key = None\n    for key, parameter in inspect.signature(fn).parameters.items():\n        if parameter.kind == parameter.VAR_KEYWORD:\n            variadic_key = key\n            break\n\n    if not variadic_key:\n        return parameters\n\n    new_parameters = parameters.copy()\n    for key, value in new_parameters.pop(variadic_key, {}).items():\n        new_parameters[key] = value\n\n    return new_parameters\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.get_call_parameters","title":"get_call_parameters","text":"

Bind a call to a function to get parameter/value mapping. Default values on the signature will be included if not overridden.

Raises a ParameterBindError if the arguments/kwargs are not valid for the function
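
For instance, a small sketch showing that signature defaults are filled in unless apply_defaults=False; the example function is illustrative:

from prefect.utilities.callables import get_call_parameters\n\ndef connect(host, port=5432):\n    ...\n\nassert get_call_parameters(connect, ('localhost',), {}) == {'host': 'localhost', 'port': 5432}\nassert get_call_parameters(connect, ('localhost',), {}, apply_defaults=False) == {'host': 'localhost'}\n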

Source code in prefect/utilities/callables.py
def get_call_parameters(\n    fn: Callable,\n    call_args: Tuple[Any, ...],\n    call_kwargs: Dict[str, Any],\n    apply_defaults: bool = True,\n) -> Dict[str, Any]:\n    \"\"\"\n    Bind a call to a function to get parameter/value mapping. Default values on the\n    signature will be included if not overridden.\n\n    Raises a ParameterBindError if the arguments/kwargs are not valid for the function\n    \"\"\"\n    try:\n        bound_signature = inspect.signature(fn).bind(*call_args, **call_kwargs)\n    except TypeError as exc:\n        raise ParameterBindError.from_bind_failure(fn, exc, call_args, call_kwargs)\n\n    if apply_defaults:\n        bound_signature.apply_defaults()\n\n    # We cast from `OrderedDict` to `dict` because Dask will not convert futures in an\n    # ordered dictionary to values during execution; this is the default behavior in\n    # Python 3.9 anyway.\n    return dict(bound_signature.arguments)\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.get_parameter_defaults","title":"get_parameter_defaults","text":"

Get default parameter values for a callable.

Source code in prefect/utilities/callables.py
def get_parameter_defaults(\n    fn: Callable,\n) -> Dict[str, Any]:\n    \"\"\"\n    Get default parameter values for a callable.\n    \"\"\"\n    signature = inspect.signature(fn)\n\n    parameter_defaults = {}\n\n    for name, param in signature.parameters.items():\n        if param.default is not signature.empty:\n            parameter_defaults[name] = param.default\n\n    return parameter_defaults\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.parameter_docstrings","title":"parameter_docstrings","text":"

Given a docstring in Google docstring format, parse the parameter section and return a dictionary that maps parameter names to docstrings.

Parameters:

Name Type Description Default docstring Optional[str]

The function's docstring.

required

Returns:

Type Description Dict[str, str]

Mapping from parameter names to docstrings.
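
A small sketch, assuming a Google-style docstring on an illustrative function; the expected mapping shown in the comment is what the parser should produce:

import inspect\n\nfrom prefect.utilities.callables import parameter_docstrings\n\ndef my_function(x, y):\n    '''Do something.\n\n    Args:\n        x: the first value\n        y: the second value\n    '''\n\ndocs = parameter_docstrings(inspect.getdoc(my_function))\n# Expected mapping: {'x': 'the first value', 'y': 'the second value'}\nprint(docs)\n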

Source code in prefect/utilities/callables.py
def parameter_docstrings(docstring: Optional[str]) -> Dict[str, str]:\n    \"\"\"\n    Given a docstring in Google docstring format, parse the parameter section\n    and return a dictionary that maps parameter names to docstring.\n\n    Args:\n        docstring: The function's docstring.\n\n    Returns:\n        Mapping from parameter names to docstrings.\n    \"\"\"\n    param_docstrings = {}\n\n    if not docstring:\n        return param_docstrings\n\n    with disable_logger(\"griffe.docstrings.google\"), disable_logger(\n        \"griffe.agents.nodes\"\n    ):\n        parsed = parse(Docstring(docstring), Parser.google)\n        for section in parsed:\n            if section.kind != DocstringSectionKind.parameters:\n                continue\n            param_docstrings = {\n                parameter.name: parameter.description for parameter in section.value\n            }\n\n    return param_docstrings\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.parameter_schema","title":"parameter_schema","text":"

Given a function, generates an OpenAPI-compatible description of the function's arguments, including:
  • name
  • typing information
  • whether it is required
  • a default value
  • additional constraints (like possible enum values)

Parameters:

Name Type Description Default fn Callable

The function whose arguments will be serialized

required

Returns:

Name Type Description ParameterSchema ParameterSchema

the argument schema
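
A minimal sketch, with an illustrative function, of generating and inspecting the schema:

from prefect.utilities.callables import parameter_schema\n\ndef my_flow_function(name, retries=3):\n    ...\n\nschema = parameter_schema(my_flow_function)\n# An OpenAPI-style object schema: name is required, retries defaults to 3\nprint(schema.dict())\n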

Source code in prefect/utilities/callables.py
def parameter_schema(fn: Callable) -> ParameterSchema:\n    \"\"\"Given a function, generates an OpenAPI-compatible description\n    of the function's arguments, including:\n        - name\n        - typing information\n        - whether it is required\n        - a default value\n        - additional constraints (like possible enum values)\n\n    Args:\n        fn (Callable): The function whose arguments will be serialized\n\n    Returns:\n        ParameterSchema: the argument schema\n    \"\"\"\n    try:\n        signature = inspect.signature(fn, eval_str=True)  # novm\n    except (NameError, TypeError):\n        # `eval_str` is not available in Python < 3.10\n        signature = inspect.signature(fn)\n\n    model_fields = {}\n    aliases = {}\n    docstrings = parameter_docstrings(inspect.getdoc(fn))\n\n    class ModelConfig:\n        arbitrary_types_allowed = True\n\n    if HAS_PYDANTIC_V2 and not has_v1_type_as_param(signature):\n        create_schema = create_v2_schema\n        process_params = process_v2_params\n    else:\n        create_schema = create_v1_schema\n        process_params = process_v1_params\n\n    for position, param in enumerate(signature.parameters.values()):\n        name, type_, field = process_params(\n            param, position=position, docstrings=docstrings, aliases=aliases\n        )\n        # Generate a Pydantic model at each step so we can check if this parameter\n        # type supports schema generation\n        try:\n            create_schema(\n                \"CheckParameter\", model_cfg=ModelConfig, **{name: (type_, field)}\n            )\n        except (ValueError, TypeError):\n            # This field's type is not valid for schema creation, update it to `Any`\n            type_ = Any\n        model_fields[name] = (type_, field)\n\n    # Generate the final model and schema\n    schema = create_schema(\"Parameters\", model_cfg=ModelConfig, **model_fields)\n    return ParameterSchema(**schema)\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.parameters_to_args_kwargs","title":"parameters_to_args_kwargs","text":"

Convert a parameters dictionary to positional and keyword arguments

The function must have an identical signature to the original function or this will return an empty tuple and dict.
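
A minimal sketch with an illustrative function whose signature matches the parameter dictionary:

from prefect.utilities.callables import parameters_to_args_kwargs\n\ndef download(url, retries=3):\n    ...\n\nargs, kwargs = parameters_to_args_kwargs(download, {'url': 'https://example.com', 'retries': 5})\ndownload(*args, **kwargs)\n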

Source code in prefect/utilities/callables.py
def parameters_to_args_kwargs(\n    fn: Callable,\n    parameters: Dict[str, Any],\n) -> Tuple[Tuple[Any, ...], Dict[str, Any]]:\n    \"\"\"\n    Convert a `parameters` dictionary to positional and keyword arguments\n\n    The function _must_ have an identical signature to the original function or this\n    will return an empty tuple and dict.\n    \"\"\"\n    function_params = dict(inspect.signature(fn).parameters).keys()\n    # Check for parameters that are not present in the function signature\n    unknown_params = parameters.keys() - function_params\n    if unknown_params:\n        raise SignatureMismatchError.from_bad_params(\n            list(function_params), list(parameters.keys())\n        )\n    bound_signature = inspect.signature(fn).bind_partial()\n    bound_signature.arguments = parameters\n\n    return bound_signature.args, bound_signature.kwargs\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.raise_for_reserved_arguments","title":"raise_for_reserved_arguments","text":"

Raise a ReservedArgumentError if fn has any parameters that conflict with the names contained in reserved_arguments.

Source code in prefect/utilities/callables.py
def raise_for_reserved_arguments(fn: Callable, reserved_arguments: Iterable[str]):\n    \"\"\"Raise a ReservedArgumentError if `fn` has any parameters that conflict\n    with the names contained in `reserved_arguments`.\"\"\"\n    function_paremeters = inspect.signature(fn).parameters\n\n    for argument in reserved_arguments:\n        if argument in function_paremeters:\n            raise ReservedArgumentError(\n                f\"{argument!r} is a reserved argument name and cannot be used.\"\n            )\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/collections/","title":"collections","text":"","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections","title":"prefect.utilities.collections","text":"

Utilities for extensions of and operations on Python collections.

","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.AutoEnum","title":"AutoEnum","text":"

Bases: str, Enum

An enum class that automatically generates values from variable names.

This guards against common errors where variable names are updated but values are not.

In addition, because AutoEnums inherit from str, they are automatically JSON-serializable.

See https://docs.python.org/3/library/enum.html#using-automatic-values

Example
class MyEnum(AutoEnum):\n    RED = AutoEnum.auto() # equivalent to RED = 'RED'\n    BLUE = AutoEnum.auto() # equivalent to BLUE = 'BLUE'\n
Source code in prefect/utilities/collections.py
class AutoEnum(str, Enum):\n    \"\"\"\n    An enum class that automatically generates value from variable names.\n\n    This guards against common errors where variable names are updated but values are\n    not.\n\n    In addition, because AutoEnums inherit from `str`, they are automatically\n    JSON-serializable.\n\n    See https://docs.python.org/3/library/enum.html#using-automatic-values\n\n    Example:\n        ```python\n        class MyEnum(AutoEnum):\n            RED = AutoEnum.auto() # equivalent to RED = 'RED'\n            BLUE = AutoEnum.auto() # equivalent to BLUE = 'BLUE'\n        ```\n    \"\"\"\n\n    def _generate_next_value_(name, start, count, last_values):\n        return name\n\n    @staticmethod\n    def auto():\n        \"\"\"\n        Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n        \"\"\"\n        return auto()\n\n    def __repr__(self) -> str:\n        return f\"{type(self).__name__}.{self.value}\"\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.AutoEnum.auto","title":"auto staticmethod","text":"

Exposes enum.auto() to avoid requiring a second import to use AutoEnum

Source code in prefect/utilities/collections.py
@staticmethod\ndef auto():\n    \"\"\"\n    Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n    \"\"\"\n    return auto()\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.StopVisiting","title":"StopVisiting","text":"

Bases: BaseException

A special exception used to stop recursive visits in visit_collection.

When raised, the expression is returned without modification and recursive visits in that path will end.

Source code in prefect/utilities/collections.py
class StopVisiting(BaseException):\n    \"\"\"\n    A special exception used to stop recursive visits in `visit_collection`.\n\n    When raised, the expression is returned without modification and recursive visits\n    in that path will end.\n    \"\"\"\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.batched_iterable","title":"batched_iterable","text":"

Yield batches of a certain size from an iterable

Parameters:

Name Type Description Default iterable Iterable

An iterable

required size int

The batch size to return

required

Yields:

Name Type Description tuple T

A batch of the iterable
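
A minimal sketch of iterating over a range in batches of three:

from prefect.utilities.collections import batched_iterable\n\nfor batch in batched_iterable(range(7), size=3):\n    print(batch)\n# (0, 1, 2)\n# (3, 4, 5)\n# (6,)\n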

Source code in prefect/utilities/collections.py
def batched_iterable(iterable: Iterable[T], size: int) -> Iterator[Tuple[T, ...]]:\n    \"\"\"\n    Yield batches of a certain size from an iterable\n\n    Args:\n        iterable (Iterable): An iterable\n        size (int): The batch size to return\n\n    Yields:\n        tuple: A batch of the iterable\n    \"\"\"\n    it = iter(iterable)\n    while True:\n        batch = tuple(itertools.islice(it, size))\n        if not batch:\n            break\n        yield batch\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.dict_to_flatdict","title":"dict_to_flatdict","text":"

Converts a (nested) dictionary to a flattened representation.

Each key of the flat dict will be a CompoundKey tuple containing the \"chain of keys\" for the corresponding value.

Parameters:

Name Type Description Default dct dict

The dictionary to flatten

required _parent Tuple

The current parent for recursion

None

Returns:

Type Description Dict[Tuple[KT, ...], Any]

A flattened dict of the same type as dct
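
A minimal sketch of flattening a nested dictionary and restoring it with flatdict_to_dict:

from prefect.utilities.collections import dict_to_flatdict, flatdict_to_dict\n\nnested = {'a': {'b': 1, 'c': {'d': 2}}}\nflat = dict_to_flatdict(nested)\n# {('a', 'b'): 1, ('a', 'c', 'd'): 2}\nassert flatdict_to_dict(flat) == nested\n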

Source code in prefect/utilities/collections.py
def dict_to_flatdict(\n    dct: Dict[KT, Union[Any, Dict[KT, Any]]], _parent: Tuple[KT, ...] = None\n) -> Dict[Tuple[KT, ...], Any]:\n    \"\"\"Converts a (nested) dictionary to a flattened representation.\n\n    Each key of the flat dict will be a CompoundKey tuple containing the \"chain of keys\"\n    for the corresponding value.\n\n    Args:\n        dct (dict): The dictionary to flatten\n        _parent (Tuple, optional): The current parent for recursion\n\n    Returns:\n        A flattened dict of the same type as dct\n    \"\"\"\n    typ = cast(Type[Dict[Tuple[KT, ...], Any]], type(dct))\n    items: List[Tuple[Tuple[KT, ...], Any]] = []\n    parent = _parent or tuple()\n\n    for k, v in dct.items():\n        k_parent = tuple(parent + (k,))\n        # if v is a non-empty dict, recurse\n        if isinstance(v, dict) and v:\n            items.extend(dict_to_flatdict(v, _parent=k_parent).items())\n        else:\n            items.append((k_parent, v))\n    return typ(items)\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.extract_instances","title":"extract_instances","text":"

Extract objects from an iterable and return a dict of type -> instances

Parameters:

Name Type Description Default objects Iterable

An iterable of objects

required types Union[Type[T], Tuple[Type[T], ...]]

A type or tuple of types to extract, defaults to all objects

object

Returns:

Type Description Union[List[T], Dict[Type[T], T]]

If a single type is given: a list of instances of that type

Union[List[T], Dict[Type[T], T]]

If a tuple of types is given: a mapping of type to a list of instances
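
A minimal sketch of both return shapes:

from prefect.utilities.collections import extract_instances\n\nobjects = [1, 'a', 2.5, 'b', 3]\n\n# A single type returns a flat list of matching instances\nassert extract_instances(objects, types=int) == [1, 3]\n\n# A tuple of types returns a mapping of type to instances\nby_type = extract_instances(objects, types=(int, str))\nassert by_type[str] == ['a', 'b']\n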

Source code in prefect/utilities/collections.py
def extract_instances(\n    objects: Iterable,\n    types: Union[Type[T], Tuple[Type[T], ...]] = object,\n) -> Union[List[T], Dict[Type[T], T]]:\n    \"\"\"\n    Extract objects from a file and returns a dict of type -> instances\n\n    Args:\n        objects: An iterable of objects\n        types: A type or tuple of types to extract, defaults to all objects\n\n    Returns:\n        If a single type is given: a list of instances of that type\n        If a tuple of types is given: a mapping of type to a list of instances\n    \"\"\"\n    types = ensure_iterable(types)\n\n    # Create a mapping of type -> instance from the exec values\n    ret = defaultdict(list)\n\n    for o in objects:\n        # We iterate here so that the key is the passed type rather than type(o)\n        for type_ in types:\n            if isinstance(o, type_):\n                ret[type_].append(o)\n\n    if len(types) == 1:\n        return ret[types[0]]\n\n    return ret\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.flatdict_to_dict","title":"flatdict_to_dict","text":"

Converts a flattened dictionary back to a nested dictionary.

Parameters:

Name Type Description Default dct dict

The dictionary to be nested. Each key should be a tuple of keys as generated by dict_to_flatdict

required

Returns: A nested dict of the same type as dct

Source code in prefect/utilities/collections.py
def flatdict_to_dict(\n    dct: Dict[Tuple[KT, ...], VT],\n) -> Dict[KT, Union[VT, Dict[KT, VT]]]:\n    \"\"\"Converts a flattened dictionary back to a nested dictionary.\n\n    Args:\n        dct (dict): The dictionary to be nested. Each key should be a tuple of keys\n            as generated by `dict_to_flatdict`\n\n    Returns\n        A nested dict of the same type as dct\n    \"\"\"\n    typ = type(dct)\n    result = cast(Dict[KT, Union[VT, Dict[KT, VT]]], typ())\n    for key_tuple, value in dct.items():\n        current_dict = result\n        for prefix_key in key_tuple[:-1]:\n            # Build nested dictionaries up for the current key tuple\n            # Use `setdefault` in case the nested dict has already been created\n            current_dict = current_dict.setdefault(prefix_key, typ())  # type: ignore\n        # Set the value\n        current_dict[key_tuple[-1]] = value\n\n    return result\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.get_from_dict","title":"get_from_dict","text":"

Fetch a value from a nested dictionary or list using a sequence of keys.

This function allows you to fetch a value from a deeply nested structure of dictionaries and lists using either a dot-separated string or a list of keys. If a requested key does not exist, the function returns the provided default value.

Parameters:

Name Type Description Default dct Dict

The nested dictionary or list from which to fetch the value.

required keys Union[str, List[str]]

The sequence of keys to use for access. Can be a dot-separated string or a list of keys. List indices can be included in the sequence as either integer keys or as string indices in square brackets.

required default Any

The default value to return if the requested key path does not exist. Defaults to None.

None

Returns:

Type Description Any

The fetched value if the key exists, or the default value if it does not.

Examples:

>>> get_from_dict({'a': {'b': {'c': [1, 2, 3, 4]}}}, 'a.b.c[1]')
2
>>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, ['a', 'b', 1, 'c', 1])
2
>>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, 'a.b.1.c.2', 'default')
'default'

Source code in prefect/utilities/collections.py
def get_from_dict(dct: Dict, keys: Union[str, List[str]], default: Any = None) -> Any:\n    \"\"\"\n    Fetch a value from a nested dictionary or list using a sequence of keys.\n\n    This function allows to fetch a value from a deeply nested structure\n    of dictionaries and lists using either a dot-separated string or a list\n    of keys. If a requested key does not exist, the function returns the\n    provided default value.\n\n    Args:\n        dct: The nested dictionary or list from which to fetch the value.\n        keys: The sequence of keys to use for access. Can be a\n            dot-separated string or a list of keys. List indices can be included\n            in the sequence as either integer keys or as string indices in square\n            brackets.\n        default: The default value to return if the requested key path does not\n            exist. Defaults to None.\n\n    Returns:\n        The fetched value if the key exists, or the default value if it does not.\n\n    Examples:\n    >>> get_from_dict({'a': {'b': {'c': [1, 2, 3, 4]}}}, 'a.b.c[1]')\n    2\n    >>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, ['a', 'b', 1, 'c', 1])\n    2\n    >>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, 'a.b.1.c.2', 'default')\n    'default'\n    \"\"\"\n    if isinstance(keys, str):\n        keys = keys.replace(\"[\", \".\").replace(\"]\", \"\").split(\".\")\n    try:\n        for key in keys:\n            try:\n                # Try to cast to int to handle list indices\n                key = int(key)\n            except ValueError:\n                # If it's not an int, use the key as-is\n                # for dict lookup\n                pass\n            dct = dct[key]\n        return dct\n    except (TypeError, KeyError, IndexError):\n        return default\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.isiterable","title":"isiterable","text":"

Return a boolean indicating if an object is iterable.

Excludes types that are iterable but typically used as singletons:
  • str
  • bytes
  • IO objects

Source code in prefect/utilities/collections.py
def isiterable(obj: Any) -> bool:\n    \"\"\"\n    Return a boolean indicating if an object is iterable.\n\n    Excludes types that are iterable but typically used as singletons:\n    - str\n    - bytes\n    - IO objects\n    \"\"\"\n    try:\n        iter(obj)\n    except TypeError:\n        return False\n    else:\n        return not isinstance(obj, (str, bytes, io.IOBase))\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.remove_nested_keys","title":"remove_nested_keys","text":"

Recurses a dictionary and returns a copy without any keys that match an entry in keys_to_remove. Returns obj unchanged if it is not a dictionary.

Parameters:

Name Type Description Default keys_to_remove List[Hashable]

A list of keys to remove from obj.

required obj

The object to remove keys from.

required

Returns:

Type Description

obj without keys matching an entry in keys_to_remove if obj is a dictionary. obj if obj is not a dictionary.
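
A minimal sketch, with illustrative keys, of stripping a key at every nesting level:

from prefect.utilities.collections import remove_nested_keys\n\nconfig = {'name': 'demo', 'auth': {'token': 'abc', 'user': 'marvin'}}\n\nassert remove_nested_keys(['token'], config) == {'name': 'demo', 'auth': {'user': 'marvin'}}\n\n# Non-dictionary inputs are returned unchanged\nassert remove_nested_keys(['token'], 'not a dict') == 'not a dict'\n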

Source code in prefect/utilities/collections.py
def remove_nested_keys(keys_to_remove: List[Hashable], obj):\n    \"\"\"\n    Recurses a dictionary returns a copy without all keys that match an entry in\n    `key_to_remove`. Return `obj` unchanged if not a dictionary.\n\n    Args:\n        keys_to_remove: A list of keys to remove from obj obj: The object to remove keys\n            from.\n\n    Returns:\n        `obj` without keys matching an entry in `keys_to_remove` if `obj` is a\n            dictionary. `obj` if `obj` is not a dictionary.\n    \"\"\"\n    if not isinstance(obj, dict):\n        return obj\n    return {\n        key: remove_nested_keys(keys_to_remove, value)\n        for key, value in obj.items()\n        if key not in keys_to_remove\n    }\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.visit_collection","title":"visit_collection","text":"

This function visits every element of an arbitrary Python collection. If an element is a Python collection, it will be visited recursively. If an element is not a collection, visit_fn will be called with the element. The return value of visit_fn can be used to alter the element if return_data is set.

Note that when using return_data a copy of each collection is created to avoid mutating the original object. This may have significant performance penalties and should only be used if you intend to transform the collection.

Supported types:
  • List
  • Tuple
  • Set
  • Dict (note: keys are also visited recursively)
  • Dataclass
  • Pydantic model
  • Prefect annotations
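
A minimal sketch of transforming terminal values with return_data=True; the visitor function is illustrative:

from prefect.utilities.collections import visit_collection\n\ndef double_ints(value):\n    # Called on every non-collection element; the return value replaces it\n    return value * 2 if isinstance(value, int) else value\n\nexpr = {'a': [1, 2], 'b': ('x', 3)}\nresult = visit_collection(expr, visit_fn=double_ints, return_data=True)\nassert result == {'a': [2, 4], 'b': ('x', 6)}\n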

Parameters:

Name Type Description Default expr Any

a Python object or expression

required visit_fn Callable[[Any], Awaitable[Any]]

an async function that will be applied to every non-collection element of expr.

required return_data bool

if True, a copy of expr containing data modified by visit_fn will be returned. This is slower than return_data=False (the default).

False max_depth int

Controls the depth of recursive visitation. If set to zero, no recursion will occur. If set to a positive integer N, visitation will only descend to N layers deep. If set to any negative integer, no limit will be enforced and recursion will continue until terminal items are reached. By default, recursion is unlimited.

-1 context Optional[dict]

An optional dictionary. If passed, the context will be sent to each call to the visit_fn. The context can be mutated by each visitor and will be available for later visits to expressions at the given depth. Values will not be available \"up\" a level from a given expression.

The context will be automatically populated with an 'annotation' key when visiting collections within a BaseAnnotation type. This requires the caller to pass context={} and will not be activated by default.

None remove_annotations bool

If set, annotations will be replaced by their contents. By default, annotations are preserved but their contents are visited.

False Source code in prefect/utilities/collections.py
def visit_collection(\n    expr,\n    visit_fn: Callable[[Any], Any],\n    return_data: bool = False,\n    max_depth: int = -1,\n    context: Optional[dict] = None,\n    remove_annotations: bool = False,\n):\n    \"\"\"\n    This function visits every element of an arbitrary Python collection. If an element\n    is a Python collection, it will be visited recursively. If an element is not a\n    collection, `visit_fn` will be called with the element. The return value of\n    `visit_fn` can be used to alter the element if `return_data` is set.\n\n    Note that when using `return_data` a copy of each collection is created to avoid\n    mutating the original object. This may have significant performance penalties and\n    should only be used if you intend to transform the collection.\n\n    Supported types:\n    - List\n    - Tuple\n    - Set\n    - Dict (note: keys are also visited recursively)\n    - Dataclass\n    - Pydantic model\n    - Prefect annotations\n\n    Args:\n        expr (Any): a Python object or expression\n        visit_fn (Callable[[Any], Awaitable[Any]]): an async function that\n            will be applied to every non-collection element of expr.\n        return_data (bool): if `True`, a copy of `expr` containing data modified\n            by `visit_fn` will be returned. This is slower than `return_data=False`\n            (the default).\n        max_depth: Controls the depth of recursive visitation. If set to zero, no\n            recursion will occur. If set to a positive integer N, visitation will only\n            descend to N layers deep. If set to any negative integer, no limit will be\n            enforced and recursion will continue until terminal items are reached. By\n            default, recursion is unlimited.\n        context: An optional dictionary. If passed, the context will be sent to each\n            call to the `visit_fn`. The context can be mutated by each visitor and will\n            be available for later visits to expressions at the given depth. Values\n            will not be available \"up\" a level from a given expression.\n\n            The context will be automatically populated with an 'annotation' key when\n            visiting collections within a `BaseAnnotation` type. This requires the\n            caller to pass `context={}` and will not be activated by default.\n        remove_annotations: If set, annotations will be replaced by their contents. 
By\n            default, annotations are preserved but their contents are visited.\n    \"\"\"\n\n    def visit_nested(expr):\n        # Utility for a recursive call, preserving options and updating the depth.\n        return visit_collection(\n            expr,\n            visit_fn=visit_fn,\n            return_data=return_data,\n            remove_annotations=remove_annotations,\n            max_depth=max_depth - 1,\n            # Copy the context on nested calls so it does not \"propagate up\"\n            context=context.copy() if context is not None else None,\n        )\n\n    def visit_expression(expr):\n        if context is not None:\n            return visit_fn(expr, context)\n        else:\n            return visit_fn(expr)\n\n    # Visit every expression\n    try:\n        result = visit_expression(expr)\n    except StopVisiting:\n        max_depth = 0\n        result = expr\n\n    if return_data:\n        # Only mutate the expression while returning data, otherwise it could be null\n        expr = result\n\n    # Then, visit every child of the expression recursively\n\n    # If we have reached the maximum depth, do not perform any recursion\n    if max_depth == 0:\n        return result if return_data else None\n\n    # Get the expression type; treat iterators like lists\n    typ = list if isinstance(expr, IteratorABC) and isiterable(expr) else type(expr)\n    typ = cast(type, typ)  # mypy treats this as 'object' otherwise and complains\n\n    # Then visit every item in the expression if it is a collection\n    if isinstance(expr, Mock):\n        # Do not attempt to recurse into mock objects\n        result = expr\n\n    elif isinstance(expr, BaseAnnotation):\n        if context is not None:\n            context[\"annotation\"] = expr\n        value = visit_nested(expr.unwrap())\n\n        if remove_annotations:\n            result = value if return_data else None\n        else:\n            result = expr.rewrap(value) if return_data else None\n\n    elif typ in (list, tuple, set):\n        items = [visit_nested(o) for o in expr]\n        result = typ(items) if return_data else None\n\n    elif typ in (dict, OrderedDict):\n        assert isinstance(expr, (dict, OrderedDict))  # typecheck assertion\n        items = [(visit_nested(k), visit_nested(v)) for k, v in expr.items()]\n        result = typ(items) if return_data else None\n\n    elif is_dataclass(expr) and not isinstance(expr, type):\n        values = [visit_nested(getattr(expr, f.name)) for f in fields(expr)]\n        items = {field.name: value for field, value in zip(fields(expr), values)}\n        result = typ(**items) if return_data else None\n\n    elif isinstance(expr, pydantic.BaseModel):\n        # NOTE: This implementation *does not* traverse private attributes\n        # Pydantic does not expose extras in `__fields__` so we use `__fields_set__`\n        # as well to get all of the relevant attributes\n        # Check for presence of attrs even if they're in the field set due to pydantic#4916\n        model_fields = {\n            f for f in expr.__fields_set__.union(expr.__fields__) if hasattr(expr, f)\n        }\n        items = [visit_nested(getattr(expr, key)) for key in model_fields]\n\n        if return_data:\n            # Collect fields with aliases so reconstruction can use the correct field name\n            aliases = {\n                key: value.alias\n                for key, value in expr.__fields__.items()\n                if value.has_alias\n            }\n\n            model_instance = typ(\n   
             **{\n                    aliases.get(key) or key: value\n                    for key, value in zip(model_fields, items)\n                }\n            )\n\n            # Private attributes are not included in `__fields_set__` but we do not want\n            # to drop them from the model so we restore them after constructing a new\n            # model\n            for attr in expr.__private_attributes__:\n                # Use `object.__setattr__` to avoid errors on immutable models\n                object.__setattr__(model_instance, attr, getattr(expr, attr))\n\n            # Preserve data about which fields were explicitly set on the original model\n            object.__setattr__(model_instance, \"__fields_set__\", expr.__fields_set__)\n            result = model_instance\n        else:\n            result = None\n\n    else:\n        result = result if return_data else None\n\n    return result\n
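A small usage sketch with illustrative data, showing how visit_fn is applied to every non-collection element and how return_data=True produces a transformed copy while leaving the original untouched:

```python
from prefect.utilities.collections import visit_collection

def double_ints(value):
    # Called for every non-collection element (including dict keys);
    # the return value replaces the element because return_data=True below.
    return value * 2 if isinstance(value, int) else value

data = {"a": [1, 2, {"b": 3}], "c": "unchanged"}

visit_collection(data, visit_fn=double_ints, return_data=True)
# {'a': [2, 4, {'b': 6}], 'c': 'unchanged'}

data
# {'a': [1, 2, {'b': 3}], 'c': 'unchanged'}  -- the original is not mutated
```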
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/compat/","title":"compat","text":"","tags":["Python API","utilities","compatibility"]},{"location":"api-ref/prefect/utilities/compat/#prefect.utilities.compat","title":"prefect.utilities.compat","text":"

Utilities for Python version compatibility

","tags":["Python API","utilities","compatibility"]},{"location":"api-ref/prefect/utilities/context/","title":"context","text":"","tags":["Python API","utilities","context"]},{"location":"api-ref/prefect/utilities/context/#prefect.utilities.context","title":"prefect.utilities.context","text":"","tags":["Python API","utilities","context"]},{"location":"api-ref/prefect/utilities/context/#prefect.utilities.context.get_task_and_flow_run_ids","title":"get_task_and_flow_run_ids","text":"

Get the task run and flow run ids from the context, if available.

Returns:

Type Description Tuple[Optional[UUID], Optional[UUID]]

Tuple[Optional[UUID], Optional[UUID]]: a tuple of the task run id and flow run id

Source code in prefect/utilities/context.py
def get_task_and_flow_run_ids() -> Tuple[Optional[UUID], Optional[UUID]]:\n    \"\"\"\n    Get the task run and flow run ids from the context, if available.\n\n    Returns:\n        Tuple[Optional[UUID], Optional[UUID]]: a tuple of the task run id and flow run id\n    \"\"\"\n    return get_task_run_id(), get_flow_run_id()\n
","tags":["Python API","utilities","context"]},{"location":"api-ref/prefect/utilities/dispatch/","title":"dispatch","text":"","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch","title":"prefect.utilities.dispatch","text":"

Provides methods for performing dynamic dispatch of actions on a base type to one of its subtypes.

Example:

@register_base_type\nclass Base:\n    @classmethod\n    def __dispatch_key__(cls):\n        return cls.__name__.lower()\n\n\nclass Foo(Base):\n    ...\n\nkey = get_dispatch_key(Foo)  # 'foo'\nlookup_type(Base, key) # Foo\n
","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch.get_dispatch_key","title":"get_dispatch_key","text":"

Retrieve the unique dispatch key for a class type or instance.

This key is defined by the __dispatch_key__ attribute. If it is a callable, it will be resolved.

If allow_missing is False, an exception will be raised if the attribute is not defined or the key is null. If True, None will be returned in these cases.

Source code in prefect/utilities/dispatch.py
def get_dispatch_key(\n    cls_or_instance: Any, allow_missing: bool = False\n) -> Optional[str]:\n    \"\"\"\n    Retrieve the unique dispatch key for a class type or instance.\n\n    This key is defined at the `__dispatch_key__` attribute. If it is a callable, it\n    will be resolved.\n\n    If `allow_missing` is `False`, an exception will be raised if the attribute is not\n    defined or the key is null. If `True`, `None` will be returned in these cases.\n    \"\"\"\n    dispatch_key = getattr(cls_or_instance, \"__dispatch_key__\", None)\n\n    type_name = (\n        cls_or_instance.__name__\n        if isinstance(cls_or_instance, type)\n        else type(cls_or_instance).__name__\n    )\n\n    if dispatch_key is None:\n        if allow_missing:\n            return None\n        raise ValueError(\n            f\"Type {type_name!r} does not define a value for \"\n            \"'__dispatch_key__' which is required for registry lookup.\"\n        )\n\n    if callable(dispatch_key):\n        dispatch_key = dispatch_key()\n\n    if allow_missing and dispatch_key is None:\n        return None\n\n    if not isinstance(dispatch_key, str):\n        raise TypeError(\n            f\"Type {type_name!r} has a '__dispatch_key__' of type \"\n            f\"{type(dispatch_key).__name__} but a type of 'str' is required.\"\n        )\n\n    return dispatch_key\n
","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch.get_registry_for_type","title":"get_registry_for_type","text":"

Get the first matching registry for a class or any of its base classes.

If not found, None is returned.

Source code in prefect/utilities/dispatch.py
def get_registry_for_type(cls: T) -> Optional[Dict[str, T]]:\n    \"\"\"\n    Get the first matching registry for a class or any of its base classes.\n\n    If not found, `None` is returned.\n    \"\"\"\n    return next(\n        filter(\n            lambda registry: registry is not None,\n            (_TYPE_REGISTRIES.get(cls) for cls in cls.mro()),\n        ),\n        None,\n    )\n
","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch.lookup_type","title":"lookup_type","text":"

Look up a dispatch key in the type registry for the given class.

Source code in prefect/utilities/dispatch.py
def lookup_type(cls: T, dispatch_key: str) -> T:\n    \"\"\"\n    Look up a dispatch key in the type registry for the given class.\n    \"\"\"\n    # Get the first matching registry for the class or one of its bases\n    registry = get_registry_for_type(cls)\n\n    # Look up this type in the registry\n    subcls = registry.get(dispatch_key)\n\n    if subcls is None:\n        raise KeyError(\n            f\"No class found for dispatch key {dispatch_key!r} in registry for type \"\n            f\"{cls.__name__!r}.\"\n        )\n\n    return subcls\n
","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch.register_base_type","title":"register_base_type","text":"

Register a base type allowing child types to be registered for dispatch with register_type.

The base class may or may not define a __dispatch_key__ to allow lookups of the base type.

Source code in prefect/utilities/dispatch.py
def register_base_type(cls: T) -> T:\n    \"\"\"\n    Register a base type allowing child types to be registered for dispatch with\n    `register_type`.\n\n    The base class may or may not define a `__dispatch_key__` to allow lookups of the\n    base type.\n    \"\"\"\n    registry = _TYPE_REGISTRIES.setdefault(cls, {})\n    base_key = get_dispatch_key(cls, allow_missing=True)\n    if base_key is not None:\n        registry[base_key] = cls\n\n    # Add automatic subtype registration\n    cls.__init_subclass_original__ = getattr(cls, \"__init_subclass__\")\n    cls.__init_subclass__ = _register_subclass_of_base_type\n\n    return cls\n
","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch.register_type","title":"register_type","text":"

Register a type for lookup with dispatch.

The type or one of its parents must define a unique __dispatch_key__.

One of the class's base types must be registered using register_base_type.

Source code in prefect/utilities/dispatch.py
def register_type(cls: T) -> T:\n    \"\"\"\n    Register a type for lookup with dispatch.\n\n    The type or one of its parents must define a unique `__dispatch_key__`.\n\n    One of the classes base types must be registered using `register_base_type`.\n    \"\"\"\n    # Lookup the registry for this type\n    registry = get_registry_for_type(cls)\n\n    # Check if a base type is registered\n    if registry is None:\n        # Include a description of registered base types\n        known = \", \".join(repr(base.__name__) for base in _TYPE_REGISTRIES)\n        known_message = (\n            f\" Did you mean to inherit from one of the following known types: {known}.\"\n            if known\n            else \"\"\n        )\n\n        # And a list of all base types for the type they tried to register\n        bases = \", \".join(\n            repr(base.__name__) for base in cls.mro() if base not in (object, cls)\n        )\n\n        raise ValueError(\n            f\"No registry found for type {cls.__name__!r} with bases {bases}.\"\n            + known_message\n        )\n\n    key = get_dispatch_key(cls)\n    existing_value = registry.get(key)\n    if existing_value is not None and id(existing_value) != id(cls):\n        # Get line numbers for debugging\n        file = inspect.getsourcefile(cls)\n        line_number = inspect.getsourcelines(cls)[1]\n        existing_file = inspect.getsourcefile(existing_value)\n        existing_line_number = inspect.getsourcelines(existing_value)[1]\n        warnings.warn(\n            f\"Type {cls.__name__!r} at {file}:{line_number} has key {key!r} that \"\n            f\"matches existing registered type {existing_value.__name__!r} from \"\n            f\"{existing_file}:{existing_line_number}. The existing type will be \"\n            \"overridden.\"\n        )\n\n    # Add to the registry\n    registry[key] = cls\n\n    return cls\n
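Putting the registry functions together, a minimal end-to-end sketch (the class names Animal and Dog are illustrative) looks like this:

```python
from prefect.utilities.dispatch import (
    lookup_type,
    register_base_type,
    register_type,
)

@register_base_type
class Animal:
    @classmethod
    def __dispatch_key__(cls):
        return cls.__name__.lower()

@register_type
class Dog(Animal):
    ...

# The subclass can now be recovered from the base type by its dispatch key
lookup_type(Animal, "dog")  # <class 'Dog'>
```

Since register_base_type installs automatic subtype registration, subclasses are typically registered as soon as they are defined; the explicit register_type decorator is shown here to match the documented API.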
","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dockerutils/","title":"dockerutils","text":"","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils","title":"prefect.utilities.dockerutils","text":"","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.BuildError","title":"BuildError","text":"

Bases: Exception

Raised when a Docker build fails

Source code in prefect/utilities/dockerutils.py
class BuildError(Exception):\n    \"\"\"Raised when a Docker build fails\"\"\"\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder","title":"ImageBuilder","text":"

An interface for preparing Docker build contexts and building images

Source code in prefect/utilities/dockerutils.py
class ImageBuilder:\n    \"\"\"An interface for preparing Docker build contexts and building images\"\"\"\n\n    base_directory: Path\n    context: Optional[Path]\n    platform: Optional[str]\n    dockerfile_lines: List[str]\n\n    def __init__(\n        self,\n        base_image: str,\n        base_directory: Path = None,\n        platform: str = None,\n        context: Path = None,\n    ):\n        \"\"\"Create an ImageBuilder\n\n        Args:\n            base_image: the base image to use\n            base_directory: the starting point on your host for relative file locations,\n                defaulting to the current directory\n            context: use this path as the build context (if not provided, will create a\n                temporary directory for the context)\n\n        Returns:\n            The image ID\n        \"\"\"\n        self.base_directory = base_directory or context or Path().absolute()\n        self.temporary_directory = None\n        self.context = context\n        self.platform = platform\n        self.dockerfile_lines = []\n\n        if self.context:\n            dockerfile_path: Path = self.context / \"Dockerfile\"\n            if dockerfile_path.exists():\n                raise ValueError(f\"There is already a Dockerfile at {context}\")\n\n        self.add_line(f\"FROM {base_image}\")\n\n    def __enter__(self) -> Self:\n        if self.context and not self.temporary_directory:\n            return self\n\n        self.temporary_directory = TemporaryDirectory()\n        self.context = Path(self.temporary_directory.__enter__())\n        return self\n\n    def __exit__(\n        self, exc: Type[BaseException], value: BaseException, traceback: TracebackType\n    ) -> None:\n        if not self.temporary_directory:\n            return\n\n        self.temporary_directory.__exit__(exc, value, traceback)\n        self.temporary_directory = None\n        self.context = None\n\n    def add_line(self, line: str) -> None:\n        \"\"\"Add a line to this image's Dockerfile\"\"\"\n        self.add_lines([line])\n\n    def add_lines(self, lines: Iterable[str]) -> None:\n        \"\"\"Add lines to this image's Dockerfile\"\"\"\n        self.dockerfile_lines.extend(lines)\n\n    def copy(self, source: Union[str, Path], destination: Union[str, PurePosixPath]):\n        \"\"\"Copy a file to this image\"\"\"\n        if not self.context:\n            raise Exception(\"No context available\")\n\n        if not isinstance(destination, PurePosixPath):\n            destination = PurePosixPath(destination)\n\n        if not isinstance(source, Path):\n            source = Path(source)\n\n        if source.is_absolute():\n            source = source.resolve().relative_to(self.base_directory)\n\n        if self.temporary_directory:\n            os.makedirs(self.context / source.parent, exist_ok=True)\n\n            if source.is_dir():\n                shutil.copytree(self.base_directory / source, self.context / source)\n            else:\n                shutil.copy2(self.base_directory / source, self.context / source)\n\n        self.add_line(f\"COPY {source} {destination}\")\n\n    def write_text(self, text: str, destination: Union[str, PurePosixPath]):\n        if not self.context:\n            raise Exception(\"No context available\")\n\n        if not isinstance(destination, PurePosixPath):\n            destination = PurePosixPath(destination)\n\n        source_hash = hashlib.sha256(text.encode()).hexdigest()\n        (self.context / f\".{source_hash}\").write_text(text)\n        
self.add_line(f\"COPY .{source_hash} {destination}\")\n\n    def build(\n        self, pull: bool = False, stream_progress_to: Optional[TextIO] = None\n    ) -> str:\n        \"\"\"Build the Docker image from the current state of the ImageBuilder\n\n        Args:\n            pull: True to pull the base image during the build\n            stream_progress_to: an optional stream (like sys.stdout, or an io.TextIO)\n                that will collect the build output as it is reported by Docker\n\n        Returns:\n            The image ID\n        \"\"\"\n        dockerfile_path: Path = self.context / \"Dockerfile\"\n\n        with dockerfile_path.open(\"w\") as dockerfile:\n            dockerfile.writelines(line + \"\\n\" for line in self.dockerfile_lines)\n\n        try:\n            return build_image(\n                self.context,\n                platform=self.platform,\n                pull=pull,\n                stream_progress_to=stream_progress_to,\n            )\n        finally:\n            os.unlink(dockerfile_path)\n\n    def assert_has_line(self, line: str) -> None:\n        \"\"\"Asserts that the given line is in the Dockerfile\"\"\"\n        all_lines = \"\\n\".join(\n            [f\"  {i+1:>3}: {line}\" for i, line in enumerate(self.dockerfile_lines)]\n        )\n        message = (\n            f\"Expected {line!r} not found in Dockerfile.  Dockerfile:\\n{all_lines}\"\n        )\n        assert line in self.dockerfile_lines, message\n\n    def assert_line_absent(self, line: str) -> None:\n        \"\"\"Asserts that the given line is absent from the Dockerfile\"\"\"\n        if line not in self.dockerfile_lines:\n            return\n\n        i = self.dockerfile_lines.index(line)\n\n        surrounding_lines = \"\\n\".join(\n            [\n                f\"  {i+1:>3}: {line}\"\n                for i, line in enumerate(self.dockerfile_lines[i - 2 : i + 2])\n            ]\n        )\n        message = (\n            f\"Unexpected {line!r} found in Dockerfile at line {i+1}.  \"\n            f\"Surrounding lines:\\n{surrounding_lines}\"\n        )\n\n        assert line not in self.dockerfile_lines, message\n\n    def assert_line_before(self, first: str, second: str) -> None:\n        \"\"\"Asserts that the first line appears before the second line\"\"\"\n        self.assert_has_line(first)\n        self.assert_has_line(second)\n\n        first_index = self.dockerfile_lines.index(first)\n        second_index = self.dockerfile_lines.index(second)\n\n        surrounding_lines = \"\\n\".join(\n            [\n                f\"  {i+1:>3}: {line}\"\n                for i, line in enumerate(\n                    self.dockerfile_lines[second_index - 2 : first_index + 2]\n                )\n            ]\n        )\n\n        message = (\n            f\"Expected {first!r} to appear before {second!r} in the Dockerfile, but \"\n            f\"{first!r} was at line {first_index+1} and {second!r} as at line \"\n            f\"{second_index+1}.  
Surrounding lines:\\n{surrounding_lines}\"\n        )\n\n        assert first_index < second_index, message\n\n    def assert_line_after(self, second: str, first: str) -> None:\n        \"\"\"Asserts that the second line appears after the first line\"\"\"\n        self.assert_line_before(first, second)\n\n    def assert_has_file(self, source: Path, container_path: PurePosixPath) -> None:\n        \"\"\"Asserts that the given file or directory will be copied into the container\n        at the given path\"\"\"\n        if source.is_absolute():\n            source = source.relative_to(self.base_directory)\n\n        self.assert_has_line(f\"COPY {source} {container_path}\")\n
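A short usage sketch of the context-manager flow; the base image tag and file contents are arbitrary examples, and the final build call requires a running Docker daemon:

```python
from sys import stdout

from prefect.utilities.dockerutils import ImageBuilder

with ImageBuilder("python:3.11-slim") as builder:
    # Lines are accumulated and written to a Dockerfile at build time
    builder.add_line("RUN pip install --no-cache-dir requests")
    builder.write_text("print('hello')", "/opt/app/hello.py")

    # The assertion helpers are handy in tests
    builder.assert_has_line("FROM python:3.11-slim")

    image_id = builder.build(stream_progress_to=stdout)
```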
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.add_line","title":"add_line","text":"

Add a line to this image's Dockerfile

Source code in prefect/utilities/dockerutils.py
def add_line(self, line: str) -> None:\n    \"\"\"Add a line to this image's Dockerfile\"\"\"\n    self.add_lines([line])\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.add_lines","title":"add_lines","text":"

Add lines to this image's Dockerfile

Source code in prefect/utilities/dockerutils.py
def add_lines(self, lines: Iterable[str]) -> None:\n    \"\"\"Add lines to this image's Dockerfile\"\"\"\n    self.dockerfile_lines.extend(lines)\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.assert_has_file","title":"assert_has_file","text":"

Asserts that the given file or directory will be copied into the container at the given path

Source code in prefect/utilities/dockerutils.py
def assert_has_file(self, source: Path, container_path: PurePosixPath) -> None:\n    \"\"\"Asserts that the given file or directory will be copied into the container\n    at the given path\"\"\"\n    if source.is_absolute():\n        source = source.relative_to(self.base_directory)\n\n    self.assert_has_line(f\"COPY {source} {container_path}\")\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.assert_has_line","title":"assert_has_line","text":"

Asserts that the given line is in the Dockerfile

Source code in prefect/utilities/dockerutils.py
def assert_has_line(self, line: str) -> None:\n    \"\"\"Asserts that the given line is in the Dockerfile\"\"\"\n    all_lines = \"\\n\".join(\n        [f\"  {i+1:>3}: {line}\" for i, line in enumerate(self.dockerfile_lines)]\n    )\n    message = (\n        f\"Expected {line!r} not found in Dockerfile.  Dockerfile:\\n{all_lines}\"\n    )\n    assert line in self.dockerfile_lines, message\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.assert_line_absent","title":"assert_line_absent","text":"

Asserts that the given line is absent from the Dockerfile

Source code in prefect/utilities/dockerutils.py
def assert_line_absent(self, line: str) -> None:\n    \"\"\"Asserts that the given line is absent from the Dockerfile\"\"\"\n    if line not in self.dockerfile_lines:\n        return\n\n    i = self.dockerfile_lines.index(line)\n\n    surrounding_lines = \"\\n\".join(\n        [\n            f\"  {i+1:>3}: {line}\"\n            for i, line in enumerate(self.dockerfile_lines[i - 2 : i + 2])\n        ]\n    )\n    message = (\n        f\"Unexpected {line!r} found in Dockerfile at line {i+1}.  \"\n        f\"Surrounding lines:\\n{surrounding_lines}\"\n    )\n\n    assert line not in self.dockerfile_lines, message\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.assert_line_after","title":"assert_line_after","text":"

Asserts that the second line appears after the first line

Source code in prefect/utilities/dockerutils.py
def assert_line_after(self, second: str, first: str) -> None:\n    \"\"\"Asserts that the second line appears after the first line\"\"\"\n    self.assert_line_before(first, second)\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.assert_line_before","title":"assert_line_before","text":"

Asserts that the first line appears before the second line

Source code in prefect/utilities/dockerutils.py
def assert_line_before(self, first: str, second: str) -> None:\n    \"\"\"Asserts that the first line appears before the second line\"\"\"\n    self.assert_has_line(first)\n    self.assert_has_line(second)\n\n    first_index = self.dockerfile_lines.index(first)\n    second_index = self.dockerfile_lines.index(second)\n\n    surrounding_lines = \"\\n\".join(\n        [\n            f\"  {i+1:>3}: {line}\"\n            for i, line in enumerate(\n                self.dockerfile_lines[second_index - 2 : first_index + 2]\n            )\n        ]\n    )\n\n    message = (\n        f\"Expected {first!r} to appear before {second!r} in the Dockerfile, but \"\n        f\"{first!r} was at line {first_index+1} and {second!r} as at line \"\n        f\"{second_index+1}.  Surrounding lines:\\n{surrounding_lines}\"\n    )\n\n    assert first_index < second_index, message\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.build","title":"build","text":"

Build the Docker image from the current state of the ImageBuilder

Parameters:

Name Type Description Default pull bool

True to pull the base image during the build

False stream_progress_to Optional[TextIO]

an optional stream (like sys.stdout, or an io.TextIO) that will collect the build output as it is reported by Docker

None

Returns:

Type Description str

The image ID

Source code in prefect/utilities/dockerutils.py
def build(\n    self, pull: bool = False, stream_progress_to: Optional[TextIO] = None\n) -> str:\n    \"\"\"Build the Docker image from the current state of the ImageBuilder\n\n    Args:\n        pull: True to pull the base image during the build\n        stream_progress_to: an optional stream (like sys.stdout, or an io.TextIO)\n            that will collect the build output as it is reported by Docker\n\n    Returns:\n        The image ID\n    \"\"\"\n    dockerfile_path: Path = self.context / \"Dockerfile\"\n\n    with dockerfile_path.open(\"w\") as dockerfile:\n        dockerfile.writelines(line + \"\\n\" for line in self.dockerfile_lines)\n\n    try:\n        return build_image(\n            self.context,\n            platform=self.platform,\n            pull=pull,\n            stream_progress_to=stream_progress_to,\n        )\n    finally:\n        os.unlink(dockerfile_path)\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.copy","title":"copy","text":"

Copy a file to this image

Source code in prefect/utilities/dockerutils.py
def copy(self, source: Union[str, Path], destination: Union[str, PurePosixPath]):\n    \"\"\"Copy a file to this image\"\"\"\n    if not self.context:\n        raise Exception(\"No context available\")\n\n    if not isinstance(destination, PurePosixPath):\n        destination = PurePosixPath(destination)\n\n    if not isinstance(source, Path):\n        source = Path(source)\n\n    if source.is_absolute():\n        source = source.resolve().relative_to(self.base_directory)\n\n    if self.temporary_directory:\n        os.makedirs(self.context / source.parent, exist_ok=True)\n\n        if source.is_dir():\n            shutil.copytree(self.base_directory / source, self.context / source)\n        else:\n            shutil.copy2(self.base_directory / source, self.context / source)\n\n    self.add_line(f\"COPY {source} {destination}\")\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.PushError","title":"PushError","text":"

Bases: Exception

Raised when a Docker image push fails

Source code in prefect/utilities/dockerutils.py
class PushError(Exception):\n    \"\"\"Raised when a Docker image push fails\"\"\"\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.build_image","title":"build_image","text":"

Builds a Docker image, returning the image ID

Parameters:

Name Type Description Default context Path

the root directory for the Docker build context

required dockerfile str

the path to the Dockerfile, relative to the context

'Dockerfile' tag Optional[str]

the tag to give this image

None pull bool

True to pull the base image during the build

False stream_progress_to Optional[TextIO]

an optional stream (like sys.stdout, or an io.TextIO) that will collect the build output as it is reported by Docker

None

Returns:

Type Description str

The image ID

Source code in prefect/utilities/dockerutils.py
@silence_docker_warnings()\ndef build_image(\n    context: Path,\n    dockerfile: str = \"Dockerfile\",\n    tag: Optional[str] = None,\n    pull: bool = False,\n    platform: str = None,\n    stream_progress_to: Optional[TextIO] = None,\n    **kwargs,\n) -> str:\n    \"\"\"Builds a Docker image, returning the image ID\n\n    Args:\n        context: the root directory for the Docker build context\n        dockerfile: the path to the Dockerfile, relative to the context\n        tag: the tag to give this image\n        pull: True to pull the base image during the build\n        stream_progress_to: an optional stream (like sys.stdout, or an io.TextIO) that\n            will collect the build output as it is reported by Docker\n\n    Returns:\n        The image ID\n    \"\"\"\n\n    if not context:\n        raise ValueError(\"context required to build an image\")\n\n    if not Path(context).exists():\n        raise ValueError(f\"Context path {context} does not exist\")\n\n    kwargs = {key: kwargs[key] for key in kwargs if key not in [\"decode\", \"labels\"]}\n\n    image_id = None\n    with docker_client() as client:\n        events = client.api.build(\n            path=str(context),\n            tag=tag,\n            dockerfile=dockerfile,\n            pull=pull,\n            decode=True,\n            labels=IMAGE_LABELS,\n            platform=platform,\n            **kwargs,\n        )\n\n        try:\n            for event in events:\n                if \"stream\" in event:\n                    if not stream_progress_to:\n                        continue\n                    stream_progress_to.write(event[\"stream\"])\n                    stream_progress_to.flush()\n                elif \"aux\" in event:\n                    image_id = event[\"aux\"][\"ID\"]\n                elif \"error\" in event:\n                    raise BuildError(event[\"error\"])\n                elif \"message\" in event:\n                    raise BuildError(event[\"message\"])\n        except docker.errors.APIError as e:\n            raise BuildError(e.explanation) from e\n\n    assert image_id, \"The Docker daemon did not return an image ID\"\n    return image_id\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.docker_client","title":"docker_client","text":"

Get the environmentally-configured Docker client

Source code in prefect/utilities/dockerutils.py
@contextmanager\ndef docker_client() -> Generator[\"DockerClient\", None, None]:\n    \"\"\"Get the environmentally-configured Docker client\"\"\"\n    client = None\n    try:\n        with silence_docker_warnings():\n            client = docker.DockerClient.from_env()\n\n            yield client\n    except docker.errors.DockerException as exc:\n        raise RuntimeError(\n            \"This error is often thrown because Docker is not running. Please ensure Docker is running.\"\n        ) from exc\n    finally:\n        client is not None and client.close()\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.format_outlier_version_name","title":"format_outlier_version_name","text":"

Formats outlier Docker version names so they pass packaging.version.parse validation. The current cases are simple, but this creates a stub for more complicated formatting if it is eventually needed. Example outlier versions that throw a parsing exception: \"20.10.0-ce\" (a variant of the community edition label) and \"20.10.0-ee\" (a variant of the enterprise edition label).

Parameters:

Name Type Description Default version str

raw docker version value

required

Returns:

Name Type Description str

value that can pass packaging.version.parse validation

Source code in prefect/utilities/dockerutils.py
def format_outlier_version_name(version: str):\n    \"\"\"\n    Formats outlier docker version names to pass `packaging.version.parse` validation\n    - Current cases are simple, but creates stub for more complicated formatting if eventually needed.\n    - Example outlier versions that throw a parsing exception:\n      - \"20.10.0-ce\" (variant of community edition label)\n      - \"20.10.0-ee\" (variant of enterprise edition label)\n\n    Args:\n        version (str): raw docker version value\n\n    Returns:\n        str: value that can pass `packaging.version.parse` validation\n    \"\"\"\n    return version.replace(\"-ce\", \"\").replace(\"-ee\", \"\")\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.generate_default_dockerfile","title":"generate_default_dockerfile","text":"

Generates a default Dockerfile used for deploying flows. The Dockerfile is written to a temporary file and yielded. The temporary file is removed after the context manager exits.

Parameters:

Name Type Description Default - context

The context to use for the Dockerfile. Defaults to the current working directory.

required Source code in prefect/utilities/dockerutils.py
@contextmanager\ndef generate_default_dockerfile(context: Optional[Path] = None):\n    \"\"\"\n    Generates a default Dockerfile used for deploying flows. The Dockerfile is written\n    to a temporary file and yielded. The temporary file is removed after the context\n    manager exits.\n\n    Args:\n        - context: The context to use for the Dockerfile. Defaults to\n            the current working directory.\n    \"\"\"\n    if not context:\n        context = Path.cwd()\n    lines = []\n    base_image = get_prefect_image_name()\n    lines.append(f\"FROM {base_image}\")\n    dir_name = context.name\n\n    if (context / \"requirements.txt\").exists():\n        lines.append(f\"COPY requirements.txt /opt/prefect/{dir_name}/requirements.txt\")\n        lines.append(\n            f\"RUN python -m pip install -r /opt/prefect/{dir_name}/requirements.txt\"\n        )\n\n    lines.append(f\"COPY . /opt/prefect/{dir_name}/\")\n    lines.append(f\"WORKDIR /opt/prefect/{dir_name}/\")\n\n    temp_dockerfile = context / \"Dockerfile\"\n    if Path(temp_dockerfile).exists():\n        raise RuntimeError(\n            \"Failed to generate Dockerfile. Dockerfile already exists in the\"\n            \" current directory.\"\n        )\n\n    with Path(temp_dockerfile).open(\"w\") as f:\n        f.writelines(line + \"\\n\" for line in lines)\n\n    try:\n        yield temp_dockerfile\n    finally:\n        temp_dockerfile.unlink()\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.get_prefect_image_name","title":"get_prefect_image_name","text":"

Get the Prefect image name matching the current Prefect and Python versions.

Parameters:

Name Type Description Default prefect_version str

An optional override for the Prefect version.

None python_version str

An optional override for the Python version; must be at the minor level e.g. '3.9'.

None flavor str

An optional alternative image flavor to build, like 'conda'

None Source code in prefect/utilities/dockerutils.py
def get_prefect_image_name(\n    prefect_version: str = None, python_version: str = None, flavor: str = None\n) -> str:\n    \"\"\"\n    Get the Prefect image name matching the current Prefect and Python versions.\n\n    Args:\n        prefect_version: An optional override for the Prefect version.\n        python_version: An optional override for the Python version; must be at the\n            minor level e.g. '3.9'.\n        flavor: An optional alternative image flavor to build, like 'conda'\n    \"\"\"\n    parsed_version = (prefect_version or prefect.__version__).split(\"+\")\n    is_prod_build = len(parsed_version) == 1\n    prefect_version = (\n        parsed_version[0]\n        if is_prod_build\n        else \"sha-\" + prefect.__version_info__[\"full-revisionid\"][:7]\n    )\n\n    python_version = python_version or python_version_minor()\n\n    tag = slugify(\n        f\"{prefect_version}-python{python_version}\" + (f\"-{flavor}\" if flavor else \"\"),\n        lowercase=False,\n        max_length=128,\n        # Docker allows these characters for tag names\n        regex_pattern=r\"[^a-zA-Z0-9_.-]+\",\n    )\n\n    image = \"prefect\" if is_prod_build else \"prefect-dev\"\n    return f\"prefecthq/{image}:{tag}\"\n
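For example (the version numbers below are illustrative; without overrides the tag tracks the installed Prefect and Python versions):

```python
from prefect.utilities.dockerutils import get_prefect_image_name

get_prefect_image_name()
# e.g. 'prefecthq/prefect:2.14.3-python3.10'

get_prefect_image_name(prefect_version="2.14.3", python_version="3.11", flavor="conda")
# 'prefecthq/prefect:2.14.3-python3.11-conda'
```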
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.parse_image_tag","title":"parse_image_tag","text":"

Parse Docker Image String

  • If a tag exists, this function parses and returns the image registry and tag, separately as a tuple.
  • Example 1: 'prefecthq/prefect:latest' -> ('prefecthq/prefect', 'latest')
  • Example 2: 'hostname.io:5050/folder/subfolder:latest' -> ('hostname.io:5050/folder/subfolder', 'latest')
  • Supports parsing Docker Image strings that follow Docker Image Specification v1.1.0
  • Image building tools typically enforce this standard

Parameters:

Name Type Description Default name str

Name of Docker Image

required Return Source code in prefect/utilities/dockerutils.py
def parse_image_tag(name: str) -> Tuple[str, Optional[str]]:\n    \"\"\"\n    Parse Docker Image String\n\n    - If a tag exists, this function parses and returns the image registry and tag,\n      separately as a tuple.\n      - Example 1: 'prefecthq/prefect:latest' -> ('prefecthq/prefect', 'latest')\n      - Example 2: 'hostname.io:5050/folder/subfolder:latest' -> ('hostname.io:5050/folder/subfolder', 'latest')\n    - Supports parsing Docker Image strings that follow Docker Image Specification v1.1.0\n      - Image building tools typically enforce this standard\n\n    Args:\n        name (str): Name of Docker Image\n\n    Return:\n        tuple: image registry, image tag\n    \"\"\"\n    tag = None\n    name_parts = name.split(\"/\")\n    # First handles the simplest image names (DockerHub-based, index-free, potentionally with a tag)\n    # - Example: simplename:latest\n    if len(name_parts) == 1:\n        if \":\" in name_parts[0]:\n            image_name, tag = name_parts[0].split(\":\")\n        else:\n            image_name = name_parts[0]\n    else:\n        # 1. Separates index (hostname.io or prefecthq) from path:tag (folder/subfolder:latest or prefect:latest)\n        # 2. Separates path and tag (if tag exists)\n        # 3. Reunites index and path (without tag) as image name\n        index_name = name_parts[0]\n        image_path = \"/\".join(name_parts[1:])\n        if \":\" in image_path:\n            image_path, tag = image_path.split(\":\")\n        image_name = f\"{index_name}/{image_path}\"\n    return image_name, tag\n
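A couple of quick examples (the image names are illustrative):

```python
from prefect.utilities.dockerutils import parse_image_tag

parse_image_tag("prefecthq/prefect:2-latest")
# ('prefecthq/prefect', '2-latest')

parse_image_tag("registry.example.com:5000/team/app")
# ('registry.example.com:5000/team/app', None) -- no tag present
```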
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.push_image","title":"push_image","text":"

Pushes a local image to a Docker registry, returning the registry-qualified tag for that image

This assumes that the environment's Docker daemon is already authenticated to the given registry, and currently makes no attempt to authenticate.

Parameters:

Name Type Description Default image_id str

a Docker image ID

required registry_url str

the URL of a Docker registry

required name str

the name of this image

required tag str

the tag to give this image (defaults to a short representation of the image's ID)

None stream_progress_to Optional[TextIO]

an optional stream (like sys.stdout, or an io.TextIO) that will collect the build output as it is reported by Docker

None

Returns:

Type Description str

A registry-qualified tag, like my-registry.example.com/my-image:abcdefg

Source code in prefect/utilities/dockerutils.py
@silence_docker_warnings()\ndef push_image(\n    image_id: str,\n    registry_url: str,\n    name: str,\n    tag: Optional[str] = None,\n    stream_progress_to: Optional[TextIO] = None,\n) -> str:\n    \"\"\"Pushes a local image to a Docker registry, returning the registry-qualified tag\n    for that image\n\n    This assumes that the environment's Docker daemon is already authenticated to the\n    given registry, and currently makes no attempt to authenticate.\n\n    Args:\n        image_id (str): a Docker image ID\n        registry_url (str): the URL of a Docker registry\n        name (str): the name of this image\n        tag (str): the tag to give this image (defaults to a short representation of\n            the image's ID)\n        stream_progress_to: an optional stream (like sys.stdout, or an io.TextIO) that\n            will collect the build output as it is reported by Docker\n\n    Returns:\n        A registry-qualified tag, like my-registry.example.com/my-image:abcdefg\n    \"\"\"\n\n    if not tag:\n        tag = slugify(pendulum.now(\"utc\").isoformat())\n\n    _, registry, _, _, _ = urlsplit(registry_url)\n    repository = f\"{registry}/{name}\"\n\n    with docker_client() as client:\n        image: \"docker.Image\" = client.images.get(image_id)\n        image.tag(repository, tag=tag)\n        events = client.api.push(repository, tag=tag, stream=True, decode=True)\n        try:\n            for event in events:\n                if \"status\" in event:\n                    if not stream_progress_to:\n                        continue\n                    stream_progress_to.write(event[\"status\"])\n                    if \"progress\" in event:\n                        stream_progress_to.write(\" \" + event[\"progress\"])\n                    stream_progress_to.write(\"\\n\")\n                    stream_progress_to.flush()\n                elif \"error\" in event:\n                    raise PushError(event[\"error\"])\n        finally:\n            client.api.remove_image(f\"{repository}:{tag}\", noprune=True)\n\n    return f\"{repository}:{tag}\"\n
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.split_repository_path","title":"split_repository_path","text":"

Splits a Docker repository path into its namespace and repository components.

Parameters:

Name Type Description Default repository_path str

The Docker repository path to split.

required

Returns:

Type Description Tuple[Optional[str], str]

Tuple[Optional[str], str]: A tuple containing the namespace and repository components. - namespace (Optional[str]): The Docker namespace, combining the registry and organization; None if not present. - repository (str): The repository name.

Source code in prefect/utilities/dockerutils.py
def split_repository_path(repository_path: str) -> Tuple[Optional[str], str]:\n    \"\"\"\n    Splits a Docker repository path into its namespace and repository components.\n\n    Args:\n        repository_path: The Docker repository path to split.\n\n    Returns:\n        Tuple[Optional[str], str]: A tuple containing the namespace and repository components.\n            - namespace (Optional[str]): The Docker namespace, combining the registry and organization. None if not present.\n            - repository (Optionals[str]): The repository name.\n    \"\"\"\n    parts = repository_path.split(\"/\", 2)\n\n    # Check if the path includes a registry and organization or just organization/repository\n    if len(parts) == 3 or (len(parts) == 2 and (\".\" in parts[0] or \":\" in parts[0])):\n        # Namespace includes registry and organization\n        namespace = \"/\".join(parts[:-1])\n        repository = parts[-1]\n    elif len(parts) == 2:\n        # Only organization/repository provided, so namespace is just the first part\n        namespace = parts[0]\n        repository = parts[1]\n    else:\n        # No namespace provided\n        namespace = None\n        repository = parts[0]\n\n    return namespace, repository\n
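A few quick examples (the repository paths are illustrative):

```python
from prefect.utilities.dockerutils import split_repository_path

split_repository_path("my-image")
# (None, 'my-image')

split_repository_path("my-org/my-image")
# ('my-org', 'my-image')

split_repository_path("registry.example.com/my-org/my-image")
# ('registry.example.com/my-org', 'my-image')
```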
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.to_run_command","title":"to_run_command","text":"

Convert a process-style list of command arguments to a single Dockerfile RUN instruction.

Source code in prefect/utilities/dockerutils.py
def to_run_command(command: List[str]) -> str:\n    \"\"\"\n    Convert a process-style list of command arguments to a single Dockerfile RUN\n    instruction.\n    \"\"\"\n    if not command:\n        return \"\"\n\n    run_command = f\"RUN {command[0]}\"\n    if len(command) > 1:\n        run_command += \" \" + \" \".join([repr(arg) for arg in command[1:]])\n\n    # TODO: Consider performing text-wrapping to improve readability of the generated\n    #       Dockerfile\n    # return textwrap.wrap(\n    #     run_command,\n    #     subsequent_indent=\" \" * 4,\n    #     break_on_hyphens=False,\n    #     break_long_words=False\n    # )\n\n    return run_command\n
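For example, arguments after the first are quoted with repr:

```python
from prefect.utilities.dockerutils import to_run_command

to_run_command(["pip", "install", "prefect"])
# "RUN pip 'install' 'prefect'"
```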
","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/filesystem/","title":"filesystem","text":"","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem","title":"prefect.utilities.filesystem","text":"

Utilities for working with file systems

","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.create_default_ignore_file","title":"create_default_ignore_file","text":"

Creates default ignore file in the provided path if one does not already exist; returns boolean specifying whether a file was created.

Source code in prefect/utilities/filesystem.py
def create_default_ignore_file(path: str) -> bool:\n    \"\"\"\n    Creates default ignore file in the provided path if one does not already exist; returns boolean specifying\n    whether a file was created.\n    \"\"\"\n    path = pathlib.Path(path)\n    ignore_file = path / \".prefectignore\"\n    if ignore_file.exists():\n        return False\n    default_file = pathlib.Path(prefect.__module_path__) / \".prefectignore\"\n    with ignore_file.open(mode=\"w\") as f:\n        f.write(default_file.read_text())\n    return True\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.filename","title":"filename","text":"

Extract the file name from a path with remote file system support

Source code in prefect/utilities/filesystem.py
def filename(path: str) -> str:\n    \"\"\"Extract the file name from a path with remote file system support\"\"\"\n    try:\n        of: OpenFile = fsspec.open(path)\n        sep = of.fs.sep\n    except (ImportError, AttributeError):\n        sep = \"\\\\\" if \"\\\\\" in path else \"/\"\n    return path.split(sep)[-1]\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.filter_files","title":"filter_files","text":"

This function accepts a root directory path and a list of file patterns to ignore, and returns the set of files under that root, excluding any that match the ignore patterns.

The specification matches that of .gitignore files.

Source code in prefect/utilities/filesystem.py
def filter_files(\n    root: str = \".\", ignore_patterns: list = None, include_dirs: bool = True\n) -> set:\n    \"\"\"\n    This function accepts a root directory path and a list of file patterns to ignore, and returns\n    a list of files that excludes those that should be ignored.\n\n    The specification matches that of [.gitignore files](https://git-scm.com/docs/gitignore).\n    \"\"\"\n    if ignore_patterns is None:\n        ignore_patterns = []\n    spec = pathspec.PathSpec.from_lines(\"gitwildmatch\", ignore_patterns)\n    ignored_files = {p.path for p in spec.match_tree_entries(root)}\n    if include_dirs:\n        all_files = {p.path for p in pathspec.util.iter_tree_entries(root)}\n    else:\n        all_files = set(pathspec.util.iter_tree_files(root))\n    included_files = all_files - ignored_files\n    return included_files\n
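A minimal usage sketch; the ignore patterns below are arbitrary examples:

```python
from prefect.utilities.filesystem import filter_files

included = filter_files(
    root=".",
    ignore_patterns=["__pycache__/", "*.log", ".git/"],
    include_dirs=False,  # return only files, not directories
)
```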
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.get_open_file_limit","title":"get_open_file_limit","text":"

Get the maximum number of open files allowed for the current process

Source code in prefect/utilities/filesystem.py
def get_open_file_limit() -> int:\n    \"\"\"Get the maximum number of open files allowed for the current process\"\"\"\n\n    try:\n        if os.name == \"nt\":\n            import ctypes\n\n            return ctypes.cdll.ucrtbase._getmaxstdio()\n        else:\n            import resource\n\n            soft_limit, _ = resource.getrlimit(resource.RLIMIT_NOFILE)\n            return soft_limit\n    except Exception:\n        # Catch all exceptions, as ctypes can raise several errors\n        # depending on what went wrong. Return a safe default if we\n        # can't get the limit from the OS.\n        return 200\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.is_local_path","title":"is_local_path","text":"

Check if the given path points to a local or remote file system

Source code in prefect/utilities/filesystem.py
def is_local_path(path: Union[str, pathlib.Path, OpenFile]):\n    \"\"\"Check if the given path points to a local or remote file system\"\"\"\n    if isinstance(path, str):\n        try:\n            of = fsspec.open(path)\n        except ImportError:\n            # The path is a remote file system that uses a lib that is not installed\n            return False\n    elif isinstance(path, pathlib.Path):\n        return True\n    elif isinstance(path, OpenFile):\n        of = path\n    else:\n        raise TypeError(f\"Invalid path of type {type(path).__name__!r}\")\n\n    return type(of.fs) == LocalFileSystem\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.relative_path_to_current_platform","title":"relative_path_to_current_platform","text":"

Converts a relative path generated on any platform to a relative path for the current platform.

Source code in prefect/utilities/filesystem.py
def relative_path_to_current_platform(path_str: str) -> Path:\n    \"\"\"\n    Converts a relative path generated on any platform to a relative path for the\n    current platform.\n    \"\"\"\n\n    return Path(PureWindowsPath(path_str).as_posix())\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.tmpchdir","title":"tmpchdir","text":"

Change the current working directory for the duration of the context.

Source code in prefect/utilities/filesystem.py
@contextmanager\ndef tmpchdir(path: str):\n    \"\"\"\n    Change current-working directories for the duration of the context\n    \"\"\"\n    path = os.path.abspath(path)\n    if os.path.isfile(path) or (not os.path.exists(path) and not path.endswith(\"/\")):\n        path = os.path.dirname(path)\n\n    owd = os.getcwd()\n\n    with chdir_lock:\n        try:\n            os.chdir(path)\n            yield path\n        finally:\n            os.chdir(owd)\n
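A brief usage sketch; the working directory is restored when the block exits, even if an exception is raised:

```python
import os

from prefect.utilities.filesystem import tmpchdir

original = os.getcwd()

with tmpchdir("/tmp"):
    print(os.getcwd())  # the (absolute) temporary working directory

assert os.getcwd() == original  # restored after the context exits
```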
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.to_display_path","title":"to_display_path","text":"

Convert a path to a displayable path. The absolute path or relative path to the current (or given) directory will be returned, whichever is shorter.

Source code in prefect/utilities/filesystem.py
def to_display_path(\n    path: Union[pathlib.Path, str], relative_to: Union[pathlib.Path, str] = None\n) -> str:\n    \"\"\"\n    Convert a path to a displayable path. The absolute path or relative path to the\n    current (or given) directory will be returned, whichever is shorter.\n    \"\"\"\n    path, relative_to = (\n        pathlib.Path(path).resolve(),\n        pathlib.Path(relative_to or \".\").resolve(),\n    )\n    relative_path = str(path.relative_to(relative_to))\n    absolute_path = str(path)\n    return relative_path if len(relative_path) < len(absolute_path) else absolute_path\n
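For example (the paths are illustrative), the shorter of the relative and absolute forms is returned:

```python
from prefect.utilities.filesystem import to_display_path

to_display_path("./flows/etl.py")
# 'flows/etl.py' when run from the project root

to_display_path("/opt/prefect/flows/etl.py", relative_to="/opt/prefect")
# 'flows/etl.py'
```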
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/hashing/","title":"hashing","text":"","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing","title":"prefect.utilities.hashing","text":"","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing.file_hash","title":"file_hash","text":"

Given a path to a file, produces a stable hash of the file contents.

Parameters:

path (str): the path to a file (required)

hash_algo: Hash algorithm from hashlib to use (default: _md5)

Returns:

str: a hash of the file contents

Source code in prefect/utilities/hashing.py
def file_hash(path: str, hash_algo=_md5) -> str:\n    \"\"\"Given a path to a file, produces a stable hash of the file contents.\n\n    Args:\n        path (str): the path to a file\n        hash_algo: Hash algorithm from hashlib to use.\n\n    Returns:\n        str: a hash of the file contents\n    \"\"\"\n    contents = Path(path).read_bytes()\n    return stable_hash(contents, hash_algo=hash_algo)\n
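Example usage (a minimal sketch; 'flow.py' is a hypothetical readable file):
from prefect.utilities.hashing import file_hash\n\n# Produce a stable hex digest (md5 by default) of the file's contents\nprint(file_hash(\"flow.py\"))\n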
","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing.hash_objects","title":"hash_objects","text":"

Attempt to hash objects by dumping to JSON or serializing with cloudpickle. If both fail, None is returned.

Source code in prefect/utilities/hashing.py
def hash_objects(*args, hash_algo=_md5, **kwargs) -> Optional[str]:\n    \"\"\"\n    Attempt to hash objects by dumping to JSON or serializing with cloudpickle.\n    On failure of both, `None` will be returned\n    \"\"\"\n    try:\n        serializer = JSONSerializer(dumps_kwargs={\"sort_keys\": True})\n        return stable_hash(serializer.dumps((args, kwargs)), hash_algo=hash_algo)\n    except Exception:\n        pass\n\n    try:\n        return stable_hash(cloudpickle.dumps((args, kwargs)), hash_algo=hash_algo)\n    except Exception:\n        pass\n\n    return None\n
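Example usage (a minimal sketch):
from prefect.utilities.hashing import hash_objects\n\n# JSON serialization is tried first, then cloudpickle; None is returned only\n# if both fail\nkey = hash_objects([1, 2, 3], retries=2)\nprint(key)\n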
","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing.stable_hash","title":"stable_hash","text":"

Given some arguments, produces a stable 64-bit hash of their contents.

Supports bytes and strings. Strings will be UTF-8 encoded.

Parameters:

*args (Union[str, bytes]): Items to include in the hash (default: ())

hash_algo: Hash algorithm from hashlib to use (default: _md5)

Returns:

str: A hex hash.

Source code in prefect/utilities/hashing.py
def stable_hash(*args: Union[str, bytes], hash_algo=_md5) -> str:\n    \"\"\"Given some arguments, produces a stable 64-bit hash of their contents.\n\n    Supports bytes and strings. Strings will be UTF-8 encoded.\n\n    Args:\n        *args: Items to include in the hash.\n        hash_algo: Hash algorithm from hashlib to use.\n\n    Returns:\n        A hex hash.\n    \"\"\"\n    h = hash_algo()\n    for a in args:\n        if isinstance(a, str):\n            a = a.encode()\n        h.update(a)\n    return h.hexdigest()\n
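Example usage (a minimal sketch):
from prefect.utilities.hashing import stable_hash\n\n# Strings are UTF-8 encoded before hashing, so str and bytes can be mixed\nprint(stable_hash(\"hello\", b\"world\"))\n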
","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/importtools/","title":"importtools","text":"","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools","title":"prefect.utilities.importtools","text":"","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.AliasedModuleDefinition","title":"AliasedModuleDefinition","text":"

Bases: NamedTuple

A definition for the AliasedModuleFinder.

Parameters:

alias: The import name to create (required)

real: The import name of the module to reference for the alias (required)

callback: A function to call when the alias module is loaded (required)

Source code in prefect/utilities/importtools.py
class AliasedModuleDefinition(NamedTuple):\n    \"\"\"\n    A definition for the `AliasedModuleFinder`.\n\n    Args:\n        alias: The import name to create\n        real: The import name of the module to reference for the alias\n        callback: A function to call when the alias module is loaded\n    \"\"\"\n\n    alias: str\n    real: str\n    callback: Optional[Callable[[str], None]]\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.AliasedModuleFinder","title":"AliasedModuleFinder","text":"

Bases: MetaPathFinder

Source code in prefect/utilities/importtools.py
class AliasedModuleFinder(MetaPathFinder):\n    def __init__(self, aliases: Iterable[AliasedModuleDefinition]):\n        \"\"\"\n        See `AliasedModuleDefinition` for alias specification.\n\n        Aliases apply to all modules nested within an alias.\n        \"\"\"\n        self.aliases = aliases\n\n    def find_spec(\n        self,\n        fullname: str,\n        path=None,\n        target=None,\n    ) -> Optional[ModuleSpec]:\n        \"\"\"\n        The fullname is the imported path, e.g. \"foo.bar\". If there is an alias \"phi\"\n        for \"foo\" then on import of \"phi.bar\" we will find the spec for \"foo.bar\" and\n        create a new spec for \"phi.bar\" that points to \"foo.bar\".\n        \"\"\"\n        for alias, real, callback in self.aliases:\n            if fullname.startswith(alias):\n                # Retrieve the spec of the real module\n                real_spec = importlib.util.find_spec(fullname.replace(alias, real, 1))\n                # Create a new spec for the alias\n                return ModuleSpec(\n                    fullname,\n                    AliasedModuleLoader(fullname, callback, real_spec),\n                    origin=real_spec.origin,\n                    is_package=real_spec.submodule_search_locations is not None,\n                )\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.AliasedModuleFinder.find_spec","title":"find_spec","text":"

The fullname is the imported path, e.g. \"foo.bar\". If there is an alias \"phi\" for \"foo\" then on import of \"phi.bar\" we will find the spec for \"foo.bar\" and create a new spec for \"phi.bar\" that points to \"foo.bar\".

Source code in prefect/utilities/importtools.py
def find_spec(\n    self,\n    fullname: str,\n    path=None,\n    target=None,\n) -> Optional[ModuleSpec]:\n    \"\"\"\n    The fullname is the imported path, e.g. \"foo.bar\". If there is an alias \"phi\"\n    for \"foo\" then on import of \"phi.bar\" we will find the spec for \"foo.bar\" and\n    create a new spec for \"phi.bar\" that points to \"foo.bar\".\n    \"\"\"\n    for alias, real, callback in self.aliases:\n        if fullname.startswith(alias):\n            # Retrieve the spec of the real module\n            real_spec = importlib.util.find_spec(fullname.replace(alias, real, 1))\n            # Create a new spec for the alias\n            return ModuleSpec(\n                fullname,\n                AliasedModuleLoader(fullname, callback, real_spec),\n                origin=real_spec.origin,\n                is_package=real_spec.submodule_search_locations is not None,\n            )\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.DelayedImportErrorModule","title":"DelayedImportErrorModule","text":"

Bases: ModuleType

A fake module returned by lazy_import when the module cannot be found. When any of the module's attributes are accessed, we will throw a ModuleNotFoundError.

Adapted from lazy_loader

Source code in prefect/utilities/importtools.py
class DelayedImportErrorModule(ModuleType):\n    \"\"\"\n    A fake module returned by `lazy_import` when the module cannot be found. When any\n    of the module's attributes are accessed, we will throw a `ModuleNotFoundError`.\n\n    Adapted from [lazy_loader][1]\n\n    [1]: https://github.com/scientific-python/lazy_loader\n    \"\"\"\n\n    def __init__(self, frame_data, help_message, *args, **kwargs):\n        self.__frame_data = frame_data\n        self.__help_message = (\n            help_message or \"Import errors for this module are only reported when used.\"\n        )\n        super().__init__(*args, **kwargs)\n\n    def __getattr__(self, attr):\n        if attr in (\"__class__\", \"__file__\", \"__frame_data\", \"__help_message\"):\n            super().__getattr__(attr)\n        else:\n            fd = self.__frame_data\n            raise ModuleNotFoundError(\n                f\"No module named '{fd['spec']}'\\n\\nThis module was originally imported\"\n                f\" at:\\n  File \\\"{fd['filename']}\\\", line {fd['lineno']}, in\"\n                f\" {fd['function']}\\n\\n    {''.join(fd['code_context']).strip()}\\n\"\n                + self.__help_message\n            )\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.from_qualified_name","title":"from_qualified_name","text":"

Import an object given a fully-qualified name.

Parameters:

name (str): The fully-qualified name of the object to import (required)

Returns:

Any: the imported object

Examples:

>>> obj = from_qualified_name(\"random.randint\")\n>>> import random\n>>> obj == random.randint\nTrue\n
Source code in prefect/utilities/importtools.py
def from_qualified_name(name: str) -> Any:\n    \"\"\"\n    Import an object given a fully-qualified name.\n\n    Args:\n        name: The fully-qualified name of the object to import.\n\n    Returns:\n        the imported object\n\n    Examples:\n        >>> obj = from_qualified_name(\"random.randint\")\n        >>> import random\n        >>> obj == random.randint\n        True\n    \"\"\"\n    # Try importing it first so we support \"module\" or \"module.sub_module\"\n    try:\n        module = importlib.import_module(name)\n        return module\n    except ImportError:\n        # If no subitem was included raise the import error\n        if \".\" not in name:\n            raise\n\n    # Otherwise, we'll try to load it as an attribute of a module\n    mod_name, attr_name = name.rsplit(\".\", 1)\n    module = importlib.import_module(mod_name)\n    return getattr(module, attr_name)\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.import_object","title":"import_object","text":"

Load an object from an import path.

Import paths can be formatted as one of:

- module.object
- module:object
- /path/to/script.py:object

This function is not thread safe as it modifies the 'sys' module during execution.

Source code in prefect/utilities/importtools.py
def import_object(import_path: str):\n    \"\"\"\n    Load an object from an import path.\n\n    Import paths can be formatted as one of:\n    - module.object\n    - module:object\n    - /path/to/script.py:object\n\n    This function is not thread safe as it modifies the 'sys' module during execution.\n    \"\"\"\n    if \".py:\" in import_path:\n        script_path, object_name = import_path.rsplit(\":\", 1)\n        module = load_script_as_module(script_path)\n    else:\n        if \":\" in import_path:\n            module_name, object_name = import_path.rsplit(\":\", 1)\n        elif \".\" in import_path:\n            module_name, object_name = import_path.rsplit(\".\", 1)\n        else:\n            raise ValueError(\n                f\"Invalid format for object import. Received {import_path!r}.\"\n            )\n\n        module = load_module(module_name)\n\n    return getattr(module, object_name)\n
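Example usage (a minimal sketch using the standard library; the script path variant is shown commented out because it requires a real file):
from prefect.utilities.importtools import import_object\n\n# The module.object and module:object forms resolve the same attribute\ndumps = import_object(\"json.dumps\")\ndumps = import_object(\"json:dumps\")\nprint(dumps({\"a\": 1}))\n\n# The /path/to/script.py:object form executes the script and returns the attribute\n# import_object(\"/path/to/script.py:my_flow\")  # hypothetical file\n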
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.lazy_import","title":"lazy_import","text":"

Create a lazily-imported module to use in place of the module of the given name. Use this to retain module-level imports for libraries that we don't want to actually import until they are needed.

Adapted from the Python documentation and lazy_loader

Source code in prefect/utilities/importtools.py
def lazy_import(\n    name: str, error_on_import: bool = False, help_message: str = \"\"\n) -> ModuleType:\n    \"\"\"\n    Create a lazily-imported module to use in place of the module of the given name.\n    Use this to retain module-level imports for libraries that we don't want to\n    actually import until they are needed.\n\n    Adapted from the [Python documentation][1] and [lazy_loader][2]\n\n    [1]: https://docs.python.org/3/library/importlib.html#implementing-lazy-imports\n    [2]: https://github.com/scientific-python/lazy_loader\n    \"\"\"\n\n    try:\n        return sys.modules[name]\n    except KeyError:\n        pass\n\n    spec = importlib.util.find_spec(name)\n    if spec is None:\n        if error_on_import:\n            raise ModuleNotFoundError(f\"No module named '{name}'.\\n{help_message}\")\n        else:\n            try:\n                parent = inspect.stack()[1]\n                frame_data = {\n                    \"spec\": name,\n                    \"filename\": parent.filename,\n                    \"lineno\": parent.lineno,\n                    \"function\": parent.function,\n                    \"code_context\": parent.code_context,\n                }\n                return DelayedImportErrorModule(\n                    frame_data, help_message, \"DelayedImportErrorModule\"\n                )\n            finally:\n                del parent\n\n    module = importlib.util.module_from_spec(spec)\n    sys.modules[name] = module\n\n    loader = importlib.util.LazyLoader(spec.loader)\n    loader.exec_module(module)\n\n    return module\n
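Example usage (a minimal sketch; 'numpy' is an arbitrary example module and may not be installed):
from prefect.utilities.importtools import lazy_import\n\n# The real import is deferred; if the module is missing, an error is raised\n# only when an attribute is first accessed (unless error_on_import=True)\nnp = lazy_import(\"numpy\")\nprint(type(np))\n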
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.load_module","title":"load_module","text":"

Import a module with support for relative imports within the module.

Source code in prefect/utilities/importtools.py
def load_module(module_name: str) -> ModuleType:\n    \"\"\"\n    Import a module with support for relative imports within the module.\n    \"\"\"\n    # Ensure relative imports within the imported module work if the user is in the\n    # correct working directory\n    working_directory = os.getcwd()\n    sys.path.insert(0, working_directory)\n\n    try:\n        return importlib.import_module(module_name)\n    finally:\n        sys.path.remove(working_directory)\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.load_script_as_module","title":"load_script_as_module","text":"

Execute a script at the given path.

Sets the module name to __prefect_loader__.

If an exception occurs during execution of the script, a prefect.exceptions.ScriptError is created to wrap the exception and raised.

For the duration of this function call, sys is modified to support loading. These changes are reverted after completion, but this function is not thread safe, and using it in threaded contexts may result in undesirable behavior.

See https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly

Source code in prefect/utilities/importtools.py
def load_script_as_module(path: str) -> ModuleType:\n    \"\"\"\n    Execute a script at the given path.\n\n    Sets the module name to `__prefect_loader__`.\n\n    If an exception occurs during execution of the script, a\n    `prefect.exceptions.ScriptError` is created to wrap the exception and raised.\n\n    During the duration of this function call, `sys` is modified to support loading.\n    These changes are reverted after completion, but this function is not thread safe\n    and use of it in threaded contexts may result in undesirable behavior.\n\n    See https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly\n    \"\"\"\n    # We will add the parent directory to search locations to support relative imports\n    # during execution of the script\n    if not path.endswith(\".py\"):\n        raise ValueError(f\"The provided path does not point to a python file: {path!r}\")\n\n    parent_path = str(Path(path).resolve().parent)\n    working_directory = os.getcwd()\n\n    spec = importlib.util.spec_from_file_location(\n        \"__prefect_loader__\",\n        path,\n        # Support explicit relative imports i.e. `from .foo import bar`\n        submodule_search_locations=[parent_path, working_directory],\n    )\n    module = importlib.util.module_from_spec(spec)\n    sys.modules[\"__prefect_loader__\"] = module\n\n    # Support implicit relative imports i.e. `from foo import bar`\n    sys.path.insert(0, working_directory)\n    sys.path.insert(0, parent_path)\n    try:\n        spec.loader.exec_module(module)\n    except Exception as exc:\n        raise ScriptError(user_exc=exc, path=path) from exc\n    finally:\n        sys.modules.pop(\"__prefect_loader__\")\n        sys.path.remove(parent_path)\n        sys.path.remove(working_directory)\n\n    return module\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.objects_from_script","title":"objects_from_script","text":"

Run a python script and return all the global variables

Supports remote paths by copying to a local temporary file.

WARNING: The Python documentation does not recommend using runpy for this pattern.

Furthermore, any functions and classes defined by the executed code are not guaranteed to work correctly after a runpy function has returned. If that limitation is not acceptable for a given use case, importlib is likely to be a more suitable choice than this module.

The function load_script_as_module uses importlib instead and is the preferred way to load objects from scripts.

Parameters:

path (str): The path to the script to run (required)

text (Union[str, bytes]): Optionally, the text of the script. Skips loading the contents if given (default: None)

Returns:

Dict[str, Any]: A dictionary mapping variable name to value

Raises:

ScriptError: if the script raises an exception during execution

Source code in prefect/utilities/importtools.py
def objects_from_script(path: str, text: Union[str, bytes] = None) -> Dict[str, Any]:\n    \"\"\"\n    Run a python script and return all the global variables\n\n    Supports remote paths by copying to a local temporary file.\n\n    WARNING: The Python documentation does not recommend using runpy for this pattern.\n\n    > Furthermore, any functions and classes defined by the executed code are not\n    > guaranteed to work correctly after a runpy function has returned. If that\n    > limitation is not acceptable for a given use case, importlib is likely to be a\n    > more suitable choice than this module.\n\n    The function `load_script_as_module` uses importlib instead and should be used\n    instead for loading objects from scripts.\n\n    Args:\n        path: The path to the script to run\n        text: Optionally, the text of the script. Skips loading the contents if given.\n\n    Returns:\n        A dictionary mapping variable name to value\n\n    Raises:\n        ScriptError: if the script raises an exception during execution\n    \"\"\"\n\n    def run_script(run_path: str):\n        # Cast to an absolute path before changing directories to ensure relative paths\n        # are not broken\n        abs_run_path = os.path.abspath(run_path)\n        with tmpchdir(run_path):\n            try:\n                return runpy.run_path(abs_run_path)\n            except Exception as exc:\n                raise ScriptError(user_exc=exc, path=path) from exc\n\n    if text:\n        with NamedTemporaryFile(\n            mode=\"wt\" if isinstance(text, str) else \"wb\",\n            prefix=f\"run-{filename(path)}\",\n            suffix=\".py\",\n        ) as tmpfile:\n            tmpfile.write(text)\n            tmpfile.flush()\n            return run_script(tmpfile.name)\n\n    else:\n        if not is_local_path(path):\n            # Remote paths need to be local to run\n            with fsspec.open(path) as f:\n                contents = f.read()\n            return objects_from_script(path, contents)\n        else:\n            return run_script(path)\n
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.to_qualified_name","title":"to_qualified_name","text":"

Given an object, returns its fully-qualified name: a string that represents its Python import path.

Parameters:

obj (Any): an importable Python object (required)

Returns:

str: the qualified name

Source code in prefect/utilities/importtools.py
def to_qualified_name(obj: Any) -> str:\n    \"\"\"\n    Given an object, returns its fully-qualified name: a string that represents its\n    Python import path.\n\n    Args:\n        obj (Any): an importable Python object\n\n    Returns:\n        str: the qualified name\n    \"\"\"\n    if sys.version_info < (3, 10):\n        # These attributes are only available in Python 3.10+\n        if isinstance(obj, (classmethod, staticmethod)):\n            obj = obj.__func__\n    return obj.__module__ + \".\" + obj.__qualname__\n
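Example usage (a minimal sketch pairing this with from_qualified_name):
from prefect.utilities.importtools import from_qualified_name, to_qualified_name\n\n# Round-trip an object through its dotted import path\nname = to_qualified_name(from_qualified_name)\nprint(name)\nprint(from_qualified_name(name) is from_qualified_name)\n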
","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/math/","title":"math","text":"","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/math/#prefect.utilities.math","title":"prefect.utilities.math","text":"","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/math/#prefect.utilities.math.bounded_poisson_interval","title":"bounded_poisson_interval","text":"

Bounds Poisson \"inter-arrival times\" to a range.

Unlike clamped_poisson_interval this does not take a target average interval. Instead, the interval is predetermined and the average is calculated as their midpoint. This allows Poisson intervals to be used in cases where a lower bound must be enforced.

Source code in prefect/utilities/math.py
def bounded_poisson_interval(lower_bound, upper_bound):\n    \"\"\"\n    Bounds Poisson \"inter-arrival times\" to a range.\n\n    Unlike `clamped_poisson_interval` this does not take a target average interval.\n    Instead, the interval is predetermined and the average is calculated as their\n    midpoint. This allows Poisson intervals to be used in cases where a lower bound\n    must be enforced.\n    \"\"\"\n    average = (float(lower_bound) + float(upper_bound)) / 2.0\n    upper_rv = exponential_cdf(upper_bound, average)\n    lower_rv = exponential_cdf(lower_bound, average)\n    return poisson_interval(average, lower_rv, upper_rv)\n
","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/math/#prefect.utilities.math.clamped_poisson_interval","title":"clamped_poisson_interval","text":"

Bounds Poisson \"inter-arrival times\" to a range defined by the clamping factor.

The upper bound for this random variate is: average_interval * (1 + clamping_factor). A lower bound is picked so that the average interval remains approximately fixed.

Source code in prefect/utilities/math.py
def clamped_poisson_interval(average_interval, clamping_factor=0.3):\n    \"\"\"\n    Bounds Poisson \"inter-arrival times\" to a range defined by the clamping factor.\n\n    The upper bound for this random variate is: average_interval * (1 + clamping_factor).\n    A lower bound is picked so that the average interval remains approximately fixed.\n    \"\"\"\n    if clamping_factor <= 0:\n        raise ValueError(\"`clamping_factor` must be >= 0.\")\n\n    upper_clamp_multiple = 1 + clamping_factor\n    upper_bound = average_interval * upper_clamp_multiple\n    lower_bound = max(0, average_interval * lower_clamp_multiple(upper_clamp_multiple))\n\n    upper_rv = exponential_cdf(upper_bound, average_interval)\n    lower_rv = exponential_cdf(lower_bound, average_interval)\n    return poisson_interval(average_interval, lower_rv, upper_rv)\n
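Example usage (a minimal sketch):
from prefect.utilities.math import clamped_poisson_interval\n\n# Draw jittered intervals around a 10 second average; each draw is bounded\n# above by 10 * (1 + 0.3) = 13 seconds\nfor _ in range(3):\n    print(clamped_poisson_interval(10, clamping_factor=0.3))\n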
","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/math/#prefect.utilities.math.lower_clamp_multiple","title":"lower_clamp_multiple","text":"

Computes a lower clamp multiple that can be used to bound a random variate drawn from an exponential distribution.

Given an upper clamp multiple k (and corresponding upper bound k * average_interval), this function computes a lower clamp multiple c (corresponding to a lower bound c * average_interval) where the probability mass between the lower bound and the median is equal to the probability mass between the median and the upper bound.

Source code in prefect/utilities/math.py
def lower_clamp_multiple(k):\n    \"\"\"\n    Computes a lower clamp multiple that can be used to bound a random variate drawn\n    from an exponential distribution.\n\n    Given an upper clamp multiple `k` (and corresponding upper bound k * average_interval),\n    this function computes a lower clamp multiple `c` (corresponding to a lower bound\n    c * average_interval) where the probability mass between the lower bound and the\n    median is equal to the probability mass between the median and the upper bound.\n    \"\"\"\n    if k >= 50:\n        # return 0 for large values of `k` to prevent numerical overflow\n        return 0.0\n\n    return math.log(max(2**k / (2**k - 1), 1e-10), 2)\n
","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/math/#prefect.utilities.math.poisson_interval","title":"poisson_interval","text":"

Generates an \"inter-arrival time\" for a Poisson process.

Draws a random variable from an exponential distribution using the inverse-CDF method. Can optionally be passed a lower and upper bound between (0, 1] to clamp the potential output values.

Source code in prefect/utilities/math.py
def poisson_interval(average_interval, lower=0, upper=1):\n    \"\"\"\n    Generates an \"inter-arrival time\" for a Poisson process.\n\n    Draws a random variable from an exponential distribution using the inverse-CDF\n    method. Can optionally be passed a lower and upper bound between (0, 1] to clamp\n    the potential output values.\n    \"\"\"\n\n    # note that we ensure the argument to the logarithm is stabilized to prevent\n    # calling log(0), which results in a DomainError\n    return -math.log(max(1 - random.uniform(lower, upper), 1e-10)) * average_interval\n
","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/names/","title":"names","text":"","tags":["Python API","names"]},{"location":"api-ref/prefect/utilities/names/#prefect.utilities.names","title":"prefect.utilities.names","text":"","tags":["Python API","names"]},{"location":"api-ref/prefect/utilities/names/#prefect.utilities.names.generate_slug","title":"generate_slug","text":"

Generates a random slug.

Parameters:

n_words (int): the number of words in the slug (required)

Source code in prefect/utilities/names.py
def generate_slug(n_words: int) -> str:\n    \"\"\"\n    Generates a random slug.\n\n    Args:\n        - n_words (int): the number of words in the slug\n    \"\"\"\n    words = coolname.generate(n_words)\n\n    # regenerate words if they include ignored words\n    while IGNORE_LIST.intersection(words):\n        words = coolname.generate(n_words)\n\n    return \"-\".join(words)\n
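Example usage (a minimal sketch; the output is random):
from prefect.utilities.names import generate_slug\n\n# e.g. 'daring-elephant'\nprint(generate_slug(2))\n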
","tags":["Python API","names"]},{"location":"api-ref/prefect/utilities/names/#prefect.utilities.names.obfuscate","title":"obfuscate","text":"

Obfuscates any data type's string representation. See obfuscate_string.

Source code in prefect/utilities/names.py
def obfuscate(s: Any, show_tail=False) -> str:\n    \"\"\"\n    Obfuscates any data type's string representation. See `obfuscate_string`.\n    \"\"\"\n    if s is None:\n        return OBFUSCATED_PREFIX + \"*\" * 4\n\n    return obfuscate_string(str(s), show_tail=show_tail)\n
","tags":["Python API","names"]},{"location":"api-ref/prefect/utilities/names/#prefect.utilities.names.obfuscate_string","title":"obfuscate_string","text":"

Obfuscates a string by returning a new string of 8 characters. If the input string is longer than 10 characters and show_tail is True, then up to 4 of its final characters will become final characters of the obfuscated string; all other characters are \"*\".

\"abc\" -> \"********\"
\"abcdefgh\" -> \"********\"
\"abcdefghijk\" -> \"*******k\"
\"abcdefghijklmnopqrs\" -> \"****pqrs\"

Source code in prefect/utilities/names.py
def obfuscate_string(s: str, show_tail=False) -> str:\n    \"\"\"\n    Obfuscates a string by returning a new string of 8 characters. If the input\n    string is longer than 10 characters and show_tail is True, then up to 4 of\n    its final characters will become final characters of the obfuscated string;\n    all other characters are \"*\".\n\n    \"abc\"      -> \"********\"\n    \"abcdefgh\" -> \"********\"\n    \"abcdefghijk\" -> \"*******k\"\n    \"abcdefghijklmnopqrs\" -> \"****pqrs\"\n    \"\"\"\n    result = OBFUSCATED_PREFIX + \"*\" * 4\n    # take up to 4 characters, but only after the 10th character\n    suffix = s[10:][-4:]\n    if suffix and show_tail:\n        result = f\"{result[:-len(suffix)]}{suffix}\"\n    return result\n
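Example usage (a minimal sketch; the value is hypothetical):
from prefect.utilities.names import obfuscate_string\n\n# Fully masked by default; with show_tail=True up to 4 trailing characters of\n# strings longer than 10 characters are preserved\nprint(obfuscate_string(\"my-secret-api-key\"))\nprint(obfuscate_string(\"my-secret-api-key\", show_tail=True))\n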
","tags":["Python API","names"]},{"location":"api-ref/prefect/utilities/processutils/","title":"processutils","text":""},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils","title":"prefect.utilities.processutils","text":""},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.forward_signal_handler","title":"forward_signal_handler","text":"

Forward subsequent signum events (e.g. interrupts) to respective signums.

Source code in prefect/utilities/processutils.py
def forward_signal_handler(\n    pid: int, signum: int, *signums: int, process_name: str, print_fn: Callable\n):\n    \"\"\"Forward subsequent signum events (e.g. interrupts) to respective signums.\"\"\"\n    current_signal, future_signals = signums[0], signums[1:]\n\n    # avoid RecursionError when setting up a direct signal forward to the same signal for the main pid\n    avoid_infinite_recursion = signum == current_signal and pid == os.getpid()\n    if avoid_infinite_recursion:\n        # store the vanilla handler so it can be temporarily restored below\n        original_handler = signal.getsignal(current_signal)\n\n    def handler(*args):\n        print_fn(\n            f\"Received {getattr(signum, 'name', signum)}. \"\n            f\"Sending {getattr(current_signal, 'name', current_signal)} to\"\n            f\" {process_name} (PID {pid})...\"\n        )\n        if avoid_infinite_recursion:\n            signal.signal(current_signal, original_handler)\n        os.kill(pid, current_signal)\n        if future_signals:\n            forward_signal_handler(\n                pid,\n                signum,\n                *future_signals,\n                process_name=process_name,\n                print_fn=print_fn,\n            )\n\n    # register current and future signal handlers\n    _register_signal(signum, handler)\n
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.open_process","title":"open_process async","text":"

Like anyio.open_process but with:

  • Support for Windows command joining
  • Termination of the process on exception during yield
  • Forced cleanup of process resources during cancellation

Source code in prefect/utilities/processutils.py
@asynccontextmanager\nasync def open_process(command: List[str], **kwargs):\n    \"\"\"\n    Like `anyio.open_process` but with:\n    - Support for Windows command joining\n    - Termination of the process on exception during yield\n    - Forced cleanup of process resources during cancellation\n    \"\"\"\n    # Passing a string to open_process is equivalent to shell=True which is\n    # generally necessary for Unix-like commands on Windows but otherwise should\n    # be avoided\n    if not isinstance(command, list):\n        raise TypeError(\n            \"The command passed to open process must be a list. You passed the command\"\n            f\"'{command}', which is type '{type(command)}'.\"\n        )\n\n    if sys.platform == \"win32\":\n        command = \" \".join(command)\n        process = await _open_anyio_process(command, **kwargs)\n    else:\n        process = await anyio.open_process(command, **kwargs)\n\n    # if there's a creationflags kwarg and it contains CREATE_NEW_PROCESS_GROUP,\n    # use SetConsoleCtrlHandler to handle CTRL-C\n    win32_process_group = False\n    if (\n        sys.platform == \"win32\"\n        and \"creationflags\" in kwargs\n        and kwargs[\"creationflags\"] & subprocess.CREATE_NEW_PROCESS_GROUP\n    ):\n        win32_process_group = True\n        _windows_process_group_pids.add(process.pid)\n        # Add a handler for CTRL-C. Re-adding the handler is safe as Windows\n        # will not add a duplicate handler if _win32_ctrl_handler is\n        # already registered.\n        windll.kernel32.SetConsoleCtrlHandler(_win32_ctrl_handler, 1)\n\n    try:\n        async with process:\n            yield process\n    finally:\n        try:\n            process.terminate()\n            if win32_process_group:\n                _windows_process_group_pids.remove(process.pid)\n\n        except OSError:\n            # Occurs if the process is already terminated\n            pass\n\n        # Ensure the process resource is closed. If not shielded from cancellation,\n        # this resource can be left open and the subprocess output can appear after\n        # the parent process has exited.\n        with anyio.CancelScope(shield=True):\n            await process.aclose()\n
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.run_process","title":"run_process async","text":"

Like anyio.run_process but with:

  • Use of our open_process utility to ensure resources are cleaned up
  • Simple stream_output support to connect the subprocess to the parent stdout/err
  • Support for submission with TaskGroup.start marking as 'started' after the process has been created. When used, the PID is returned to the task status.
Source code in prefect/utilities/processutils.py
async def run_process(\n    command: List[str],\n    stream_output: Union[bool, Tuple[Optional[TextSink], Optional[TextSink]]] = False,\n    task_status: Optional[anyio.abc.TaskStatus] = None,\n    task_status_handler: Optional[Callable[[anyio.abc.Process], Any]] = None,\n    **kwargs,\n):\n    \"\"\"\n    Like `anyio.run_process` but with:\n\n    - Use of our `open_process` utility to ensure resources are cleaned up\n    - Simple `stream_output` support to connect the subprocess to the parent stdout/err\n    - Support for submission with `TaskGroup.start` marking as 'started' after the\n        process has been created. When used, the PID is returned to the task status.\n\n    \"\"\"\n    if stream_output is True:\n        stream_output = (sys.stdout, sys.stderr)\n\n    async with open_process(\n        command,\n        stdout=subprocess.PIPE if stream_output else subprocess.DEVNULL,\n        stderr=subprocess.PIPE if stream_output else subprocess.DEVNULL,\n        **kwargs,\n    ) as process:\n        if task_status is not None:\n            if not task_status_handler:\n\n                def task_status_handler(process):\n                    return process.pid\n\n            task_status.started(task_status_handler(process))\n\n        if stream_output:\n            await consume_process_output(\n                process, stdout_sink=stream_output[0], stderr_sink=stream_output[1]\n            )\n\n        await process.wait()\n\n    return process\n
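Example usage (a minimal sketch; assumes a POSIX 'echo' command is available on PATH):
import anyio\n\nfrom prefect.utilities.processutils import run_process\n\nasync def main():\n    # Stream the subprocess output to this process's stdout/stderr\n    process = await run_process([\"echo\", \"hello\"], stream_output=True)\n    print(\"exit code:\", process.returncode)\n\nanyio.run(main)\n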
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.setup_signal_handlers_agent","title":"setup_signal_handlers_agent","text":"

Handle interrupts of the agent gracefully.

Source code in prefect/utilities/processutils.py
def setup_signal_handlers_agent(pid: int, process_name: str, print_fn: Callable):\n    \"\"\"Handle interrupts of the agent gracefully.\"\"\"\n    setup_handler = partial(\n        forward_signal_handler, pid, process_name=process_name, print_fn=print_fn\n    )\n    # when agent receives SIGINT, it stops dequeueing new FlowRuns, and runs until the subprocesses finish\n    # the signal is not forwarded to subprocesses, so they can continue to run and hopefully still complete\n    if sys.platform == \"win32\":\n        # on Windows, use CTRL_BREAK_EVENT as SIGTERM is useless:\n        # https://bugs.python.org/issue26350\n        setup_handler(signal.SIGINT, signal.CTRL_BREAK_EVENT)\n    else:\n        # forward first SIGINT directly, send SIGKILL on subsequent interrupt\n        setup_handler(signal.SIGINT, signal.SIGINT, signal.SIGKILL)\n        # first SIGTERM: send SIGINT, send SIGKILL on subsequent SIGTERM\n        setup_handler(signal.SIGTERM, signal.SIGINT, signal.SIGKILL)\n
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.setup_signal_handlers_server","title":"setup_signal_handlers_server","text":"

Handle interrupts of the server gracefully.

Source code in prefect/utilities/processutils.py
def setup_signal_handlers_server(pid: int, process_name: str, print_fn: Callable):\n    \"\"\"Handle interrupts of the server gracefully.\"\"\"\n    setup_handler = partial(\n        forward_signal_handler, pid, process_name=process_name, print_fn=print_fn\n    )\n    # when server receives a signal, it needs to be propagated to the uvicorn subprocess\n    if sys.platform == \"win32\":\n        # on Windows, use CTRL_BREAK_EVENT as SIGTERM is useless:\n        # https://bugs.python.org/issue26350\n        setup_handler(signal.SIGINT, signal.CTRL_BREAK_EVENT)\n    else:\n        # first interrupt: SIGTERM, second interrupt: SIGKILL\n        setup_handler(signal.SIGINT, signal.SIGTERM, signal.SIGKILL)\n        # forward first SIGTERM directly, send SIGKILL on subsequent SIGTERM\n        setup_handler(signal.SIGTERM, signal.SIGTERM, signal.SIGKILL)\n
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.setup_signal_handlers_worker","title":"setup_signal_handlers_worker","text":"

Handle interrupts of workers gracefully.

Source code in prefect/utilities/processutils.py
def setup_signal_handlers_worker(pid: int, process_name: str, print_fn: Callable):\n    \"\"\"Handle interrupts of workers gracefully.\"\"\"\n    setup_handler = partial(\n        forward_signal_handler, pid, process_name=process_name, print_fn=print_fn\n    )\n    # when agent receives SIGINT, it stops dequeueing new FlowRuns, and runs until the subprocesses finish\n    # the signal is not forwarded to subprocesses, so they can continue to run and hopefully still complete\n    if sys.platform == \"win32\":\n        # on Windows, use CTRL_BREAK_EVENT as SIGTERM is useless:\n        # https://bugs.python.org/issue26350\n        setup_handler(signal.SIGINT, signal.CTRL_BREAK_EVENT)\n    else:\n        # forward first SIGINT directly, send SIGKILL on subsequent interrupt\n        setup_handler(signal.SIGINT, signal.SIGINT, signal.SIGKILL)\n        # first SIGTERM: send SIGINT, send SIGKILL on subsequent SIGTERM\n        setup_handler(signal.SIGTERM, signal.SIGINT, signal.SIGKILL)\n
"},{"location":"api-ref/prefect/utilities/pydantic/","title":"pydantic","text":"","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/pydantic/#prefect.utilities.pydantic","title":"prefect.utilities.pydantic","text":"","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/pydantic/#prefect.utilities.pydantic.PartialModel","title":"PartialModel","text":"

Bases: Generic[M]

A utility for creating a Pydantic model in several steps.

Fields may be set at initialization, via attribute assignment, or at finalization when the concrete model is returned.

Pydantic validation does not occur until finalization.

Each field can only be set once and a ValueError will be raised on assignment if a field already has a value.

Example

class MyModel(pydantic_v1.BaseModel):
    x: int
    y: str
    z: float

partial_model = PartialModel(MyModel, x=1)
partial_model.y = \"two\"
model = partial_model.finalize(z=3.0)

Source code in prefect/utilities/pydantic.py
class PartialModel(Generic[M]):\n    \"\"\"\n    A utility for creating a Pydantic model in several steps.\n\n    Fields may be set at initialization, via attribute assignment, or at finalization\n    when the concrete model is returned.\n\n    Pydantic validation does not occur until finalization.\n\n    Each field can only be set once and a `ValueError` will be raised on assignment if\n    a field already has a value.\n\n    Example:\n        >>> class MyModel(pydantic_v1.BaseModel):\n        >>>     x: int\n        >>>     y: str\n        >>>     z: float\n        >>>\n        >>> partial_model = PartialModel(MyModel, x=1)\n        >>> partial_model.y = \"two\"\n        >>> model = partial_model.finalize(z=3.0)\n    \"\"\"\n\n    def __init__(self, __model_cls: Type[M], **kwargs: Any) -> None:\n        self.fields = kwargs\n        # Set fields first to avoid issues if `fields` is also set on the `model_cls`\n        # in our custom `setattr` implementation.\n        self.model_cls = __model_cls\n\n        for name in kwargs.keys():\n            self.raise_if_not_in_model(name)\n\n    def finalize(self, **kwargs: Any) -> M:\n        for name in kwargs.keys():\n            self.raise_if_already_set(name)\n            self.raise_if_not_in_model(name)\n        return self.model_cls(**self.fields, **kwargs)\n\n    def raise_if_already_set(self, name):\n        if name in self.fields:\n            raise ValueError(f\"Field {name!r} has already been set.\")\n\n    def raise_if_not_in_model(self, name):\n        if name not in self.model_cls.__fields__:\n            raise ValueError(f\"Field {name!r} is not present in the model.\")\n\n    def __setattr__(self, __name: str, __value: Any) -> None:\n        if __name in {\"fields\", \"model_cls\"}:\n            return super().__setattr__(__name, __value)\n\n        self.raise_if_already_set(__name)\n        self.raise_if_not_in_model(__name)\n        self.fields[__name] = __value\n\n    def __repr__(self) -> str:\n        dsp_fields = \", \".join(\n            f\"{key}={repr(value)}\" for key, value in self.fields.items()\n        )\n        return f\"PartialModel(cls={self.model_cls.__name__}, {dsp_fields})\"\n
","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/pydantic/#prefect.utilities.pydantic.add_cloudpickle_reduction","title":"add_cloudpickle_reduction","text":"

Adds a __reducer__ to the given class that ensures it is cloudpickle compatible.

Workaround for issues with cloudpickle when using cythonized pydantic which throws exceptions when attempting to pickle the class which has \"compiled\" validator methods dynamically attached to it.

We cannot define this utility in the model class itself because the class is the type that contains unserializable methods.

Any model using some features of Pydantic (e.g. Path validation) with a Cython compiled Pydantic installation may encounter pickling issues.

See related issue at https://github.com/cloudpipe/cloudpickle/issues/408

Source code in prefect/utilities/pydantic.py
def add_cloudpickle_reduction(__model_cls: Type[M] = None, **kwargs: Any):\n    \"\"\"\n    Adds a `__reducer__` to the given class that ensures it is cloudpickle compatible.\n\n    Workaround for issues with cloudpickle when using cythonized pydantic which\n    throws exceptions when attempting to pickle the class which has \"compiled\"\n    validator methods dynamically attached to it.\n\n    We cannot define this utility in the model class itself because the class is the\n    type that contains unserializable methods.\n\n    Any model using some features of Pydantic (e.g. `Path` validation) with a Cython\n    compiled Pydantic installation may encounter pickling issues.\n\n    See related issue at https://github.com/cloudpipe/cloudpickle/issues/408\n    \"\"\"\n    if __model_cls:\n        __model_cls.__reduce__ = _reduce_model\n        __model_cls.__reduce_kwargs__ = kwargs\n        return __model_cls\n    else:\n        return cast(\n            Callable[[Type[M]], Type[M]],\n            partial(\n                add_cloudpickle_reduction,\n                **kwargs,\n            ),\n        )\n
","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/pydantic/#prefect.utilities.pydantic.add_type_dispatch","title":"add_type_dispatch","text":"

Extend a Pydantic model to add a 'type' field that is used as a discriminator field to dynamically determine the subtype when deserializing models.

This allows automatic resolution to subtypes of the decorated model.

If a type field already exists, it should be a string literal field that has a constant value for each subclass. The default value of this field will be used as the dispatch key.

If a type field does not exist, one will be added. In this case, the value of the field will be set to the value of the __dispatch_key__. The base class should define a __dispatch_key__ class method that is used to determine the unique key for each subclass. Alternatively, each subclass can define the __dispatch_key__ as a string literal.

The base class must not define a 'type' field. If it is not desirable to add a field to the model and the dispatch key can be tracked separately, the lower level utilities in prefect.utilities.dispatch should be used directly.

Source code in prefect/utilities/pydantic.py
def add_type_dispatch(model_cls: Type[M]) -> Type[M]:\n    \"\"\"\n    Extend a Pydantic model to add a 'type' field that is used as a discriminator field\n    to dynamically determine the subtype that when deserializing models.\n\n    This allows automatic resolution to subtypes of the decorated model.\n\n    If a type field already exists, it should be a string literal field that has a\n    constant value for each subclass. The default value of this field will be used as\n    the dispatch key.\n\n    If a type field does not exist, one will be added. In this case, the value of the\n    field will be set to the value of the `__dispatch_key__`. The base class should\n    define a `__dispatch_key__` class method that is used to determine the unique key\n    for each subclass. Alternatively, each subclass can define the `__dispatch_key__`\n    as a string literal.\n\n    The base class must not define a 'type' field. If it is not desirable to add a field\n    to the model and the dispatch key can be tracked separately, the lower level\n    utilities in `prefect.utilities.dispatch` should be used directly.\n    \"\"\"\n    defines_dispatch_key = hasattr(\n        model_cls, \"__dispatch_key__\"\n    ) or \"__dispatch_key__\" in getattr(model_cls, \"__annotations__\", {})\n\n    defines_type_field = \"type\" in model_cls.__fields__\n\n    if not defines_dispatch_key and not defines_type_field:\n        raise ValueError(\n            f\"Model class {model_cls.__name__!r} does not define a `__dispatch_key__` \"\n            \"or a type field. One of these is required for dispatch.\"\n        )\n\n    elif defines_dispatch_key and not defines_type_field:\n        # Add a type field to store the value of the dispatch key\n        model_cls.__fields__[\"type\"] = pydantic_v1.fields.ModelField(\n            name=\"type\",\n            type_=str,\n            required=True,\n            class_validators=None,\n            model_config=model_cls.__config__,\n        )\n\n    elif not defines_dispatch_key and defines_type_field:\n        field_type_annotation = model_cls.__fields__[\"type\"].type_\n        if field_type_annotation != str:\n            raise TypeError(\n                f\"Model class {model_cls.__name__!r} defines a 'type' field with \"\n                f\"type {field_type_annotation.__name__!r} but it must be 'str'.\"\n            )\n\n        # Set the dispatch key to retrieve the value from the type field\n        @classmethod\n        def dispatch_key_from_type_field(cls):\n            return cls.__fields__[\"type\"].default\n\n        model_cls.__dispatch_key__ = dispatch_key_from_type_field\n\n    else:\n        raise ValueError(\n            f\"Model class {model_cls.__name__!r} defines a `__dispatch_key__` \"\n            \"and a type field. 
Only one of these may be defined for dispatch.\"\n        )\n\n    cls_init = model_cls.__init__\n    cls_new = model_cls.__new__\n\n    def __init__(__pydantic_self__, **data: Any) -> None:\n        type_string = (\n            get_dispatch_key(__pydantic_self__)\n            if type(__pydantic_self__) != model_cls\n            else \"__base__\"\n        )\n        data.setdefault(\"type\", type_string)\n        cls_init(__pydantic_self__, **data)\n\n    def __new__(cls: Type[Self], **kwargs) -> Self:\n        if \"type\" in kwargs:\n            try:\n                subcls = lookup_type(cls, dispatch_key=kwargs[\"type\"])\n            except KeyError as exc:\n                raise pydantic_v1.ValidationError(errors=[exc], model=cls)\n            return cls_new(subcls)\n        else:\n            return cls_new(cls)\n\n    model_cls.__init__ = __init__\n    model_cls.__new__ = __new__\n\n    register_base_type(model_cls)\n\n    return model_cls\n
","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/pydantic/#prefect.utilities.pydantic.get_class_fields_only","title":"get_class_fields_only","text":"

Gets all the field names defined on the model class but not any parent classes. Any fields that are on the parent but redefined on the subclass are included.

Source code in prefect/utilities/pydantic.py
def get_class_fields_only(model: Type[pydantic_v1.BaseModel]) -> set:\n    \"\"\"\n    Gets all the field names defined on the model class but not any parent classes.\n    Any fields that are on the parent but redefined on the subclass are included.\n    \"\"\"\n    subclass_class_fields = set(model.__annotations__.keys())\n    parent_class_fields = set()\n\n    for base in model.__class__.__bases__:\n        if issubclass(base, pydantic_v1.BaseModel):\n            parent_class_fields.update(base.__annotations__.keys())\n\n    return (subclass_class_fields - parent_class_fields) | (\n        subclass_class_fields & parent_class_fields\n    )\n
","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/render_swagger/","title":"render_swagger","text":"","tags":["Python API","Swagger"]},{"location":"api-ref/prefect/utilities/render_swagger/#prefect.utilities.render_swagger","title":"prefect.utilities.render_swagger","text":"","tags":["Python API","Swagger"]},{"location":"api-ref/prefect/utilities/render_swagger/#prefect.utilities.render_swagger.swagger_lib","title":"swagger_lib","text":"

Provides the actual swagger library used

Source code in prefect/utilities/render_swagger.py
def swagger_lib(config) -> dict:\n    \"\"\"\n    Provides the actual swagger library used\n    \"\"\"\n    lib_swagger = {\n        \"css\": \"https://unpkg.com/swagger-ui-dist@3/swagger-ui.css\",\n        \"js\": \"https://unpkg.com/swagger-ui-dist@3/swagger-ui-bundle.js\",\n    }\n\n    extra_javascript = config.get(\"extra_javascript\", [])\n    extra_css = config.get(\"extra_css\", [])\n    for lib in extra_javascript:\n        if os.path.basename(urllib.parse.urlparse(lib).path) == \"swagger-ui-bundle.js\":\n            lib_swagger[\"js\"] = lib\n            break\n\n    for css in extra_css:\n        if os.path.basename(urllib.parse.urlparse(css).path) == \"swagger-ui.css\":\n            lib_swagger[\"css\"] = css\n            break\n    return lib_swagger\n
","tags":["Python API","Swagger"]},{"location":"api-ref/prefect/utilities/services/","title":"services","text":"","tags":["Python API","services"]},{"location":"api-ref/prefect/utilities/services/#prefect.utilities.services","title":"prefect.utilities.services","text":"","tags":["Python API","services"]},{"location":"api-ref/prefect/utilities/services/#prefect.utilities.services.critical_service_loop","title":"critical_service_loop async","text":"

Runs the given workload function on the specified interval, while being forgiving of intermittent issues like temporary HTTP errors. If more than a certain number of consecutive errors occur, a summary of up to memory recent exceptions is printed to printer, and backoff begins.

The loop will exit after reaching the consecutive error limit backoff times. On each backoff, the interval will be doubled. On a successful loop, the backoff will be reset.

Parameters:

workload (Callable[..., Coroutine]): the function to call (required)

interval (float): how frequently to call it (required)

memory (int): how many recent errors to remember (default: 10)

consecutive (int): how many consecutive errors must we see before we begin backoff (default: 3)

backoff (int): how many times we should allow consecutive errors before exiting (default: 1)

printer (Callable[..., None]): a print-like function where errors will be reported (default: print)

run_once (bool): if set, the loop will only run once then return (default: False)

jitter_range (float): if set, the interval will be a random variable (rv) drawn from a clamped Poisson distribution where lambda = interval and the rv is bound between interval * (1 - range) < rv < interval * (1 + range) (default: None)

Source code in prefect/utilities/services.py
async def critical_service_loop(\n    workload: Callable[..., Coroutine],\n    interval: float,\n    memory: int = 10,\n    consecutive: int = 3,\n    backoff: int = 1,\n    printer: Callable[..., None] = print,\n    run_once: bool = False,\n    jitter_range: float = None,\n):\n    \"\"\"\n    Runs the given `workload` function on the specified `interval`, while being\n    forgiving of intermittent issues like temporary HTTP errors.  If more than a certain\n    number of `consecutive` errors occur, print a summary of up to `memory` recent\n    exceptions to `printer`, then begin backoff.\n\n    The loop will exit after reaching the consecutive error limit `backoff` times.\n    On each backoff, the interval will be doubled. On a successful loop, the backoff\n    will be reset.\n\n    Args:\n        workload: the function to call\n        interval: how frequently to call it\n        memory: how many recent errors to remember\n        consecutive: how many consecutive errors must we see before we begin backoff\n        backoff: how many times we should allow consecutive errors before exiting\n        printer: a `print`-like function where errors will be reported\n        run_once: if set, the loop will only run once then return\n        jitter_range: if set, the interval will be a random variable (rv) drawn from\n            a clamped Poisson distribution where lambda = interval and the rv is bound\n            between `interval * (1 - range) < rv < interval * (1 + range)`\n    \"\"\"\n\n    track_record: Deque[bool] = deque([True] * consecutive, maxlen=consecutive)\n    failures: Deque[Tuple[Exception, TracebackType]] = deque(maxlen=memory)\n    backoff_count = 0\n\n    while True:\n        try:\n            workload_display_name = (\n                workload.__name__ if hasattr(workload, \"__name__\") else workload\n            )\n            logger.debug(f\"Starting run of {workload_display_name!r}\")\n            await workload()\n\n            # Reset the backoff count on success; we may want to consider resetting\n            # this only if the track record is _all_ successful to avoid ending backoff\n            # prematurely\n            if backoff_count > 0:\n                printer(\"Resetting backoff due to successful run.\")\n                backoff_count = 0\n\n            track_record.append(True)\n        except httpx.TransportError as exc:\n            # httpx.TransportError is the base class for any kind of communications\n            # error, like timeouts, connection failures, etc.  This does _not_ cover\n            # routine HTTP error codes (even 5xx errors like 502/503) so this\n            # handler should not be attempting to cover cases where the Prefect server\n            # or Prefect Cloud is having an outage (which will be covered by the\n            # exception clause below)\n            track_record.append(False)\n            failures.append((exc, sys.exc_info()[-1]))\n            logger.debug(\n                f\"Run of {workload!r} failed with TransportError\", exc_info=exc\n            )\n        except httpx.HTTPStatusError as exc:\n            if exc.response.status_code >= 500:\n                # 5XX codes indicate a potential outage of the Prefect API which is\n                # likely to be temporary and transient.  
Don't quit over these unless\n                # it is prolonged.\n                track_record.append(False)\n                failures.append((exc, sys.exc_info()[-1]))\n                logger.debug(\n                    f\"Run of {workload!r} failed with HTTPStatusError\", exc_info=exc\n                )\n            else:\n                raise\n\n        # Decide whether to exit now based on recent history.\n        #\n        # Given some typical background error rate of, say, 1%, we may still observe\n        # quite a few errors in our recent samples, but this is not necessarily a cause\n        # for concern.\n        #\n        # Imagine two distributions that could reflect our situation at any time: the\n        # everything-is-fine distribution of errors, and the everything-is-on-fire\n        # distribution of errors. We are trying to determine which of those two worlds\n        # we are currently experiencing.  We compare the likelihood that we'd draw N\n        # consecutive errors from each.  In the everything-is-fine distribution, that\n        # would be a very low-probability occurrence, but in the everything-is-on-fire\n        # distribution, that is a high-probability occurrence.\n        #\n        # Remarkably, we only need to look back for a small number of consecutive\n        # errors to have reasonable confidence that this is indeed an anomaly.\n        # @anticorrelator and @chrisguidry estimated that we should only need to look\n        # back for 3 consecutive errors.\n        if not any(track_record):\n            # We've failed enough times to be sure something is wrong, the writing is\n            # on the wall.  Let's explain what we've seen and exit.\n            printer(\n                f\"\\nFailed the last {consecutive} attempts. \"\n                \"Please check your environment and configuration.\"\n            )\n\n            printer(\"Examples of recent errors:\\n\")\n\n            failures_by_type = distinct(\n                reversed(failures),\n                key=lambda pair: type(pair[0]),  # Group by the type of exception\n            )\n            for exception, traceback in failures_by_type:\n                printer(\"\".join(format_exception(None, exception, traceback)))\n                printer()\n\n            backoff_count += 1\n\n            if backoff_count >= backoff:\n                raise RuntimeError(\"Service exceeded error threshold.\")\n\n            # Reset the track record\n            track_record.extend([True] * consecutive)\n            failures.clear()\n            printer(\n                \"Backing off due to consecutive errors, using increased interval of \"\n                f\" {interval * 2**backoff_count}s.\"\n            )\n\n        if run_once:\n            return\n\n        if jitter_range is not None:\n            sleep = clamped_poisson_interval(interval, clamping_factor=jitter_range)\n        else:\n            sleep = interval * 2**backoff_count\n\n        await anyio.sleep(sleep)\n
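To make the interval, backoff, and jitter parameters above concrete, here is a minimal usage sketch; the `poll_health` workload and its URL are hypothetical and not part of Prefect.

```python
import anyio
import httpx

from prefect.utilities.services import critical_service_loop


async def poll_health():
    # Hypothetical workload: any coroutine that may raise transient HTTP errors.
    async with httpx.AsyncClient() as client:
        response = await client.get("https://example.com/health")
        response.raise_for_status()


async def main():
    # Poll every 10 seconds, keep the last 10 errors, begin backoff after
    # 3 consecutive failures, and exit after 2 rounds of backoff.
    await critical_service_loop(
        workload=poll_health,
        interval=10,
        memory=10,
        consecutive=3,
        backoff=2,
        jitter_range=0.3,  # each sleep is drawn from a clamped Poisson around `interval`
    )


anyio.run(main)
```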
","tags":["Python API","services"]},{"location":"api-ref/prefect/utilities/slugify/","title":"slugify","text":"","tags":["Python API","slugify"]},{"location":"api-ref/prefect/utilities/slugify/#prefect.utilities.slugify","title":"prefect.utilities.slugify","text":"","tags":["Python API","slugify"]},{"location":"api-ref/prefect/utilities/templating/","title":"templating","text":"","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating","title":"prefect.utilities.templating","text":"","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.apply_values","title":"apply_values","text":"

Replaces placeholders in a template with values from a supplied dictionary.

Will recursively replace placeholders in dictionaries and lists.

If a value has no placeholders, it will be returned unchanged.

If a template contains only a single placeholder, the placeholder will be fully replaced with the value.

If a template contains text before or after a placeholder or there are multiple placeholders, the placeholders will be replaced with the corresponding variable values.

If a template contains a placeholder that is not in values, NotSet will be returned to signify that no placeholder replacement occurred. If template is a dictionary that contains a key with a value of NotSet, the key will be removed in the return value unless remove_notset is set to False.

Parameters:

- template (T): template to discover and replace values in. Required.
- values (Dict[str, Any]): The values to apply to placeholders in the template. Required.
- remove_notset (bool): If True, remove keys with an unset value. Default: True.

Returns:

- Union[T, Type[NotSet]]: The template with the values applied

Source code in prefect/utilities/templating.py
def apply_values(\n    template: T, values: Dict[str, Any], remove_notset: bool = True\n) -> Union[T, Type[NotSet]]:\n    \"\"\"\n    Replaces placeholders in a template with values from a supplied dictionary.\n\n    Will recursively replace placeholders in dictionaries and lists.\n\n    If a value has no placeholders, it will be returned unchanged.\n\n    If a template contains only a single placeholder, the placeholder will be\n    fully replaced with the value.\n\n    If a template contains text before or after a placeholder or there are\n    multiple placeholders, the placeholders will be replaced with the\n    corresponding variable values.\n\n    If a template contains a placeholder that is not in `values`, NotSet will\n    be returned to signify that no placeholder replacement occurred. If\n    `template` is a dictionary that contains a key with a value of NotSet,\n    the key will be removed in the return value unless `remove_notset` is set to False.\n\n    Args:\n        template: template to discover and replace values in\n        values: The values to apply to placeholders in the template\n        remove_notset: If True, remove keys with an unset value\n\n    Returns:\n        The template with the values applied\n    \"\"\"\n    if isinstance(template, (int, float, bool, type(NotSet), type(None))):\n        return template\n    if isinstance(template, str):\n        placeholders = find_placeholders(template)\n        if not placeholders:\n            # If there are no values, we can just use the template\n            return template\n        elif (\n            len(placeholders) == 1\n            and list(placeholders)[0].full_match == template\n            and list(placeholders)[0].type is PlaceholderType.STANDARD\n        ):\n            # If there is only one variable with no surrounding text,\n            # we can replace it. 
If there is no variable value, we\n            # return NotSet to indicate that the value should not be included.\n            return get_from_dict(values, list(placeholders)[0].name, NotSet)\n        else:\n            for full_match, name, placeholder_type in placeholders:\n                if placeholder_type is PlaceholderType.STANDARD:\n                    value = get_from_dict(values, name, NotSet)\n                elif placeholder_type is PlaceholderType.ENV_VAR:\n                    name = name.lstrip(ENV_VAR_PLACEHOLDER_PREFIX)\n                    value = os.environ.get(name, NotSet)\n                else:\n                    continue\n\n                if value is NotSet and not remove_notset:\n                    continue\n                elif value is NotSet:\n                    template = template.replace(full_match, \"\")\n                else:\n                    template = template.replace(full_match, str(value))\n\n            return template\n    elif isinstance(template, dict):\n        updated_template = {}\n        for key, value in template.items():\n            updated_value = apply_values(value, values, remove_notset=remove_notset)\n            if updated_value is not NotSet:\n                updated_template[key] = updated_value\n            elif not remove_notset:\n                updated_template[key] = value\n\n        return updated_template\n    elif isinstance(template, list):\n        updated_list = []\n        for value in template:\n            updated_value = apply_values(value, values, remove_notset=remove_notset)\n            if updated_value is not NotSet:\n                updated_list.append(updated_value)\n        return updated_list\n    else:\n        raise ValueError(f\"Unexpected template type {type(template).__name__!r}\")\n
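To make the replacement rules above concrete, a small hedged example follows; the template keys and values are invented for illustration.

```python
from prefect.utilities.templating import apply_values

template = {
    "image": "{{ image }}",                        # lone placeholder: replaced with the full value
    "command": "echo {{ greeting }}, {{ name }}",  # embedded placeholders: string substitution
    "memory": "{{ memory }}",                      # missing from values: key removed by default
}
values = {"image": "python:3.11", "greeting": "hello", "name": "world"}

print(apply_values(template, values))
# Expected, per the rules above:
# {'image': 'python:3.11', 'command': 'echo hello, world'}
```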
","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.determine_placeholder_type","title":"determine_placeholder_type","text":"

Determines the type of a placeholder based on its name.

Parameters:

- name (str): The name of the placeholder. Required.

Returns:

- PlaceholderType: The type of the placeholder

Source code in prefect/utilities/templating.py
def determine_placeholder_type(name: str) -> PlaceholderType:\n    \"\"\"\n    Determines the type of a placeholder based on its name.\n\n    Args:\n        name: The name of the placeholder\n\n    Returns:\n        The type of the placeholder\n    \"\"\"\n    if name.startswith(BLOCK_DOCUMENT_PLACEHOLDER_PREFIX):\n        return PlaceholderType.BLOCK_DOCUMENT\n    elif name.startswith(VARIABLE_PLACEHOLDER_PREFIX):\n        return PlaceholderType.VARIABLE\n    elif name.startswith(ENV_VAR_PLACEHOLDER_PREFIX):\n        return PlaceholderType.ENV_VAR\n    else:\n        return PlaceholderType.STANDARD\n
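For example, assuming the standard `prefect.blocks.` and `prefect.variables.` placeholder prefixes, classification might look like this:

```python
from prefect.utilities.templating import determine_placeholder_type

print(determine_placeholder_type("prefect.blocks.secret.db-password"))  # PlaceholderType.BLOCK_DOCUMENT
print(determine_placeholder_type("prefect.variables.environment"))      # PlaceholderType.VARIABLE
print(determine_placeholder_type("image"))                               # PlaceholderType.STANDARD
```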
","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.find_placeholders","title":"find_placeholders","text":"

Finds all placeholders in a template.

Parameters:

- template (T): template to discover placeholders in. Required.

Returns:

- Set[Placeholder]: A set of all placeholders in the template

Source code in prefect/utilities/templating.py
def find_placeholders(template: T) -> Set[Placeholder]:\n    \"\"\"\n    Finds all placeholders in a template.\n\n    Args:\n        template: template to discover placeholders in\n\n    Returns:\n        A set of all placeholders in the template\n    \"\"\"\n    if isinstance(template, (int, float, bool)):\n        return set()\n    if isinstance(template, str):\n        result = PLACEHOLDER_CAPTURE_REGEX.findall(template)\n        return {\n            Placeholder(full_match, name, determine_placeholder_type(name))\n            for full_match, name in result\n        }\n    elif isinstance(template, dict):\n        return set().union(\n            *[find_placeholders(value) for key, value in template.items()]\n        )\n    elif isinstance(template, list):\n        return set().union(*[find_placeholders(item) for item in template])\n    else:\n        raise ValueError(f\"Unexpected type: {type(template)}\")\n
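A short sketch of discovering placeholders in a nested template; the template contents are illustrative only.

```python
from prefect.utilities.templating import find_placeholders

template = {
    "command": "echo {{ greeting }}",
    "env": ["{{ prefect.variables.environment }}"],
}

for placeholder in find_placeholders(template):
    print(placeholder.full_match, placeholder.name, placeholder.type)
# e.g. "{{ greeting }}" greeting PlaceholderType.STANDARD
#      "{{ prefect.variables.environment }}" prefect.variables.environment PlaceholderType.VARIABLE
```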
","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.resolve_block_document_references","title":"resolve_block_document_references async","text":"

Resolve block document references in a template by replacing each reference with the data of the block document.

Recursively searches for block document references in dictionaries and lists.

Identifies block document references as dictionaries with the following structure:

{\n    \"$ref\": {\n        \"block_document_id\": <block_document_id>\n    }\n}\n
where <block_document_id> is the ID of the block document to resolve.

Once the block document is retrieved from the API, the data of the block document is used to replace the reference.

","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.resolve_block_document_references--accessing-values","title":"Accessing Values:","text":"

To access different values in a block document, use dot notation combined with the block document's prefix, slug, and block name.

For a block document with the structure:

{\n    \"value\": {\n        \"key\": {\n            \"nested-key\": \"nested-value\"\n        },\n        \"list\": [\n            {\"list-key\": \"list-value\"},\n            1,\n            2\n        ]\n    }\n}\n
examples of value resolution are as follows:

  1. Accessing a nested dictionary: Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.key Example: Returns {\"nested-key\": \"nested-value\"}

  2. Accessing a specific nested value: Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.key.nested-key Example: Returns \"nested-value\"

  3. Accessing a list element's key-value: Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.list[0].list-key Example: Returns \"list-value\"","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.resolve_block_document_references--default-resolution-for-system-blocks","title":"Default Resolution for System Blocks:","text":"

    For system blocks, which only contain a value attribute, this attribute is resolved by default.

    Parameters:

    - template (T): The template to resolve block documents in. Required.

    Returns:

    - Union[T, Dict[str, Any]]: The template with block documents resolved

    Source code in prefect/utilities/templating.py
    @inject_client\nasync def resolve_block_document_references(\n    template: T, client: \"PrefectClient\" = None\n) -> Union[T, Dict[str, Any]]:\n    \"\"\"\n    Resolve block document references in a template by replacing each reference with\n    the data of the block document.\n\n    Recursively searches for block document references in dictionaries and lists.\n\n    Identifies block document references by the as dictionary with the following\n    structure:\n    ```\n    {\n        \"$ref\": {\n            \"block_document_id\": <block_document_id>\n        }\n    }\n    ```\n    where `<block_document_id>` is the ID of the block document to resolve.\n\n    Once the block document is retrieved from the API, the data of the block document\n    is used to replace the reference.\n\n    Accessing Values:\n    -----------------\n    To access different values in a block document, use dot notation combined with the block document's prefix, slug, and block name.\n\n    For a block document with the structure:\n    ```json\n    {\n        \"value\": {\n            \"key\": {\n                \"nested-key\": \"nested-value\"\n            },\n            \"list\": [\n                {\"list-key\": \"list-value\"},\n                1,\n                2\n            ]\n        }\n    }\n    ```\n    examples of value resolution are as follows:\n\n    1. Accessing a nested dictionary:\n       Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.key\n       Example: Returns {\"nested-key\": \"nested-value\"}\n\n    2. Accessing a specific nested value:\n       Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.key.nested-key\n       Example: Returns \"nested-value\"\n\n    3. Accessing a list element's key-value:\n       Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.list[0].list-key\n       Example: Returns \"list-value\"\n\n    Default Resolution for System Blocks:\n    -------------------------------------\n    For system blocks, which only contain a `value` attribute, this attribute is resolved by default.\n\n    Args:\n        template: The template to resolve block documents in\n\n    Returns:\n        The template with block documents resolved\n    \"\"\"\n    if isinstance(template, dict):\n        block_document_id = template.get(\"$ref\", {}).get(\"block_document_id\")\n        if block_document_id:\n            block_document = await client.read_block_document(block_document_id)\n            return block_document.data\n        updated_template = {}\n        for key, value in template.items():\n            updated_value = await resolve_block_document_references(\n                value, client=client\n            )\n            updated_template[key] = updated_value\n        return updated_template\n    elif isinstance(template, list):\n        return [\n            await resolve_block_document_references(item, client=client)\n            for item in template\n        ]\n    elif isinstance(template, str):\n        placeholders = find_placeholders(template)\n        has_block_document_placeholder = any(\n            placeholder.type is PlaceholderType.BLOCK_DOCUMENT\n            for placeholder in placeholders\n        )\n        if len(placeholders) == 0 or not has_block_document_placeholder:\n            return template\n        elif (\n            len(placeholders) == 1\n            and list(placeholders)[0].full_match == template\n            and list(placeholders)[0].type is PlaceholderType.BLOCK_DOCUMENT\n        ):\n           
 # value_keypath will be a list containing a dot path if additional\n            # attributes are accessed and an empty list otherwise.\n            block_type_slug, block_document_name, *value_keypath = (\n                list(placeholders)[0]\n                .name.replace(BLOCK_DOCUMENT_PLACEHOLDER_PREFIX, \"\")\n                .split(\".\", 2)\n            )\n            block_document = await client.read_block_document_by_name(\n                name=block_document_name, block_type_slug=block_type_slug\n            )\n            value = block_document.data\n\n            # resolving system blocks to their data for backwards compatibility\n            if len(value) == 1 and \"value\" in value:\n                # only resolve the value if the keypath is not already pointing to \"value\"\n                if len(value_keypath) == 0 or value_keypath[0][:5] != \"value\":\n                    value = value[\"value\"]\n\n            # resolving keypath/block attributes\n            if len(value_keypath) > 0:\n                value_keypath: str = value_keypath[0]\n                value = get_from_dict(value, value_keypath, default=NotSet)\n                if value is NotSet:\n                    raise ValueError(\n                        f\"Invalid template: {template!r}. Could not resolve the\"\n                        \" keypath in the block document data.\"\n                    )\n\n            return value\n        else:\n            raise ValueError(\n                f\"Invalid template: {template!r}. Only a single block placeholder is\"\n                \" allowed in a string and no surrounding text is allowed.\"\n            )\n\n    return template\n
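A rough usage sketch follows, assuming a Prefect API is reachable and a Secret block document named `db-password` exists; both names are illustrative.

```python
import asyncio

from prefect.utilities.templating import resolve_block_document_references


async def main():
    template = {
        # prefect.blocks.<block_type_slug>.<block_document_name>, per the format above
        "password": "{{ prefect.blocks.secret.db-password }}",
    }
    # The injected client reads the block document and replaces the placeholder
    # with its data (the `value` attribute for system blocks such as Secret).
    resolved = await resolve_block_document_references(template)
    print(resolved)


asyncio.run(main())
```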
    ","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.resolve_variables","title":"resolve_variables async","text":"

    Resolve variables in a template by replacing each variable placeholder with the value of the variable.

    Recursively searches for variable placeholders in dictionaries and lists.

    Strips variable placeholders if the variable is not found.

    Parameters:

    - template (T): The template to resolve variables in. Required.

    Returns:

    - The template with variables resolved

    Source code in prefect/utilities/templating.py
    @inject_client\nasync def resolve_variables(template: T, client: \"PrefectClient\" = None):\n    \"\"\"\n    Resolve variables in a template by replacing each variable placeholder with the\n    value of the variable.\n\n    Recursively searches for variable placeholders in dictionaries and lists.\n\n    Strips variable placeholders if the variable is not found.\n\n    Args:\n        template: The template to resolve variables in\n\n    Returns:\n        The template with variables resolved\n    \"\"\"\n    if isinstance(template, str):\n        placeholders = find_placeholders(template)\n        has_variable_placeholder = any(\n            placeholder.type is PlaceholderType.VARIABLE for placeholder in placeholders\n        )\n        if not placeholders or not has_variable_placeholder:\n            # If there are no values, we can just use the template\n            return template\n        elif (\n            len(placeholders) == 1\n            and list(placeholders)[0].full_match == template\n            and list(placeholders)[0].type is PlaceholderType.VARIABLE\n        ):\n            variable_name = list(placeholders)[0].name.replace(\n                VARIABLE_PLACEHOLDER_PREFIX, \"\"\n            )\n            variable = await client.read_variable_by_name(name=variable_name)\n            if variable is None:\n                return \"\"\n            else:\n                return variable.value\n        else:\n            for full_match, name, placeholder_type in placeholders:\n                if placeholder_type is PlaceholderType.VARIABLE:\n                    variable_name = name.replace(VARIABLE_PLACEHOLDER_PREFIX, \"\")\n                    variable = await client.read_variable_by_name(name=variable_name)\n                    if variable is None:\n                        template = template.replace(full_match, \"\")\n                    else:\n                        template = template.replace(full_match, variable.value)\n            return template\n    elif isinstance(template, dict):\n        return {\n            key: await resolve_variables(value, client=client)\n            for key, value in template.items()\n        }\n    elif isinstance(template, list):\n        return [await resolve_variables(item, client=client) for item in template]\n    else:\n        return template\n
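A minimal sketch, assuming a Prefect API is reachable; the variable name `environment` is illustrative and may or may not exist.

```python
import asyncio

from prefect.utilities.templating import resolve_variables


async def main():
    template = "deploying to {{ prefect.variables.environment }}"
    # If the variable exists its value is substituted; otherwise the
    # placeholder is stripped and the surrounding text is kept.
    print(await resolve_variables(template))


asyncio.run(main())
```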
    ","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/text/","title":"text","text":"","tags":["Python API","text"]},{"location":"api-ref/prefect/utilities/text/#prefect.utilities.text","title":"prefect.utilities.text","text":"","tags":["Python API","text"]},{"location":"api-ref/prefect/utilities/visualization/","title":"visualization","text":"","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization","title":"prefect.utilities.visualization","text":"

    Utilities for working with Flow.visualize()
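These utilities back `Flow.visualize()`. A minimal sketch of how they are typically exercised from user code; the task and flow names are illustrative.

```python
from prefect import flow, task


@task(viz_return_value=[1, 2, 3])
def extract():
    return [1, 2, 3]


@task
def load(data):
    print(data)


@flow
def etl():
    load(extract())


if __name__ == "__main__":
    # Builds and renders the task graph without executing the tasks;
    # requires the Graphviz binaries to be installed and on PATH.
    etl.visualize()
```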

    ","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization.TaskVizTracker","title":"TaskVizTracker","text":"Source code in prefect/utilities/visualization.py
    class TaskVizTracker:\n    def __init__(self):\n        self.tasks = []\n        self.dynamic_task_counter = {}\n        self.object_id_to_task = {}\n\n    def add_task(self, task: VizTask):\n        if task.name not in self.dynamic_task_counter:\n            self.dynamic_task_counter[task.name] = 0\n        else:\n            self.dynamic_task_counter[task.name] += 1\n\n        task.name = f\"{task.name}-{self.dynamic_task_counter[task.name]}\"\n        self.tasks.append(task)\n\n    def __enter__(self):\n        TaskVizTrackerState.current = self\n        return self\n\n    def __exit__(self, exc_type, exc_val, exc_tb):\n        TaskVizTrackerState.current = None\n\n    def link_viz_return_value_to_viz_task(\n        self, viz_return_value: Any, viz_task: VizTask\n    ) -> None:\n        \"\"\"\n        We cannot track booleans, Ellipsis, None, NotImplemented, or the integers from -5 to 256\n        because they are singletons.\n        \"\"\"\n        from prefect.utilities.engine import UNTRACKABLE_TYPES\n\n        if (type(viz_return_value) in UNTRACKABLE_TYPES) or (\n            isinstance(viz_return_value, int) and (-5 <= viz_return_value <= 256)\n        ):\n            return\n        self.object_id_to_task[id(viz_return_value)] = viz_task\n
    ","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization.TaskVizTracker.link_viz_return_value_to_viz_task","title":"link_viz_return_value_to_viz_task","text":"

    We cannot track booleans, Ellipsis, None, NotImplemented, or the integers from -5 to 256 because they are singletons.

    Source code in prefect/utilities/visualization.py
    def link_viz_return_value_to_viz_task(\n    self, viz_return_value: Any, viz_task: VizTask\n) -> None:\n    \"\"\"\n    We cannot track booleans, Ellipsis, None, NotImplemented, or the integers from -5 to 256\n    because they are singletons.\n    \"\"\"\n    from prefect.utilities.engine import UNTRACKABLE_TYPES\n\n    if (type(viz_return_value) in UNTRACKABLE_TYPES) or (\n        isinstance(viz_return_value, int) and (-5 <= viz_return_value <= 256)\n    ):\n        return\n    self.object_id_to_task[id(viz_return_value)] = viz_task\n
    ","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization.build_task_dependencies","title":"build_task_dependencies","text":"

    Constructs a Graphviz directed graph object that represents the dependencies between tasks in the given TaskVizTracker.

    Parameters: - task_run_tracker (TaskVizTracker): An object containing tasks and their dependencies.

    Returns: - graphviz.Digraph: A directed graph object depicting the relationships and dependencies between tasks.

    Raises: - GraphvizImportError: If there's an ImportError related to graphviz. - FlowVisualizationError: If there's any other error during the visualization process or if return values of tasks are directly accessed without specifying a viz_return_value.

    Source code in prefect/utilities/visualization.py
    def build_task_dependencies(task_run_tracker: TaskVizTracker):\n    \"\"\"\n    Constructs a Graphviz directed graph object that represents the dependencies\n    between tasks in the given TaskVizTracker.\n\n    Parameters:\n    - task_run_tracker (TaskVizTracker): An object containing tasks and their\n      dependencies.\n\n    Returns:\n    - graphviz.Digraph: A directed graph object depicting the relationships and\n      dependencies between tasks.\n\n    Raises:\n    - GraphvizImportError: If there's an ImportError related to graphviz.\n    - FlowVisualizationError: If there's any other error during the visualization\n      process or if return values of tasks are directly accessed without\n      specifying a `viz_return_value`.\n    \"\"\"\n    try:\n        g = graphviz.Digraph()\n        for task in task_run_tracker.tasks:\n            g.node(task.name)\n            for upstream in task.upstream_tasks:\n                g.edge(upstream.name, task.name)\n        return g\n    except ImportError as exc:\n        raise GraphvizImportError from exc\n    except Exception:\n        raise FlowVisualizationError(\n            \"Something went wrong building the flow's visualization.\"\n            \" If you're interacting with the return value of a task\"\n            \" directly inside of your flow, you must set a set a `viz_return_value`\"\n            \", for example `@task(viz_return_value=[1, 2, 3])`.\"\n        )\n
    ","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization.track_viz_task","title":"track_viz_task","text":"

    Return a result if sync; otherwise, return a coroutine that returns the result.

    Source code in prefect/utilities/visualization.py
    def track_viz_task(\n    is_async: bool,\n    task_name: str,\n    parameters: dict,\n    viz_return_value: Optional[Any] = None,\n):\n    \"\"\"Return a result if sync otherwise return a coroutine that returns the result\"\"\"\n    if is_async:\n        return from_async.wait_for_call_in_loop_thread(\n            partial(_track_viz_task, task_name, parameters, viz_return_value)\n        )\n    else:\n        return _track_viz_task(task_name, parameters, viz_return_value)\n
    ","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization.visualize_task_dependencies","title":"visualize_task_dependencies","text":"

    Renders and displays a Graphviz directed graph representing task dependencies.

    The graph is rendered in PNG format and saved with the name specified by flow_run_name. After rendering, the visualization is opened and displayed.

    Parameters: - graph (graphviz.Digraph): The directed graph object to visualize. - flow_run_name (str): The name to use when saving the rendered graph image.

    Raises: - GraphvizExecutableNotFoundError: If Graphviz isn't found on the system. - FlowVisualizationError: If there's any other error during the visualization process or if return values of tasks are directly accessed without specifying a viz_return_value.

    Source code in prefect/utilities/visualization.py
    def visualize_task_dependencies(graph: graphviz.Digraph, flow_run_name: str):\n    \"\"\"\n    Renders and displays a Graphviz directed graph representing task dependencies.\n\n    The graph is rendered in PNG format and saved with the name specified by\n    flow_run_name. After rendering, the visualization is opened and displayed.\n\n    Parameters:\n    - graph (graphviz.Digraph): The directed graph object to visualize.\n    - flow_run_name (str): The name to use when saving the rendered graph image.\n\n    Raises:\n    - GraphvizExecutableNotFoundError: If Graphviz isn't found on the system.\n    - FlowVisualizationError: If there's any other error during the visualization\n      process or if return values of tasks are directly accessed without\n      specifying a `viz_return_value`.\n    \"\"\"\n    try:\n        graph.render(filename=flow_run_name, view=True, format=\"png\", cleanup=True)\n    except graphviz.backend.ExecutableNotFound as exc:\n        msg = (\n            \"It appears you do not have Graphviz installed, or it is not on your \"\n            \"PATH. Please install Graphviz from http://www.graphviz.org/download/. \"\n            \"Note: Just installing the `graphviz` python package is not \"\n            \"sufficient.\"\n        )\n        raise GraphvizExecutableNotFoundError(msg) from exc\n    except Exception:\n        raise FlowVisualizationError(\n            \"Something went wrong building the flow's visualization.\"\n            \" If you're interacting with the return value of a task\"\n            \" directly inside of your flow, you must set a set a `viz_return_value`\"\n            \", for example `@task(viz_return_value=[1, 2, 3])`.\"\n        )\n
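A hedged sketch of rendering a hand-built graph; normally `build_task_dependencies` produces the `graphviz.Digraph` from a `TaskVizTracker` during `Flow.visualize()`, and the node names below are invented.

```python
import graphviz

from prefect.utilities.visualization import visualize_task_dependencies

graph = graphviz.Digraph()
graph.node("extract-0")
graph.node("load-0")
graph.edge("extract-0", "load-0")  # load-0 depends on extract-0

# Renders "my-flow-run.png" and opens it; requires the Graphviz binaries on PATH.
visualize_task_dependencies(graph, "my-flow-run")
```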
    ","tags":["Python API","visualization"]},{"location":"api-ref/prefect/workers/base/","title":"base","text":"","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base","title":"prefect.workers.base","text":"","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration","title":"BaseJobConfiguration","text":"

    Bases: BaseModel

    Source code in prefect/workers/base.py
    class BaseJobConfiguration(BaseModel):\n    command: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The command to use when starting a flow run. \"\n            \"In most cases, this should be left blank and the command \"\n            \"will be automatically generated by the worker.\"\n        ),\n    )\n    env: Dict[str, Optional[str]] = Field(\n        default_factory=dict,\n        title=\"Environment Variables\",\n        description=\"Environment variables to set when starting a flow run.\",\n    )\n    labels: Dict[str, str] = Field(\n        default_factory=dict,\n        description=(\n            \"Labels applied to infrastructure created by the worker using \"\n            \"this job configuration.\"\n        ),\n    )\n    name: Optional[str] = Field(\n        default=None,\n        description=(\n            \"Name given to infrastructure created by the worker using this \"\n            \"job configuration.\"\n        ),\n    )\n\n    _related_objects: Dict[str, Any] = PrivateAttr(default_factory=dict)\n\n    @property\n    def is_using_a_runner(self):\n        return self.command is not None and \"prefect flow-run execute\" in self.command\n\n    @validator(\"command\")\n    def _coerce_command(cls, v):\n        return return_v_or_none(v)\n\n    @staticmethod\n    def _get_base_config_defaults(variables: dict) -> dict:\n        \"\"\"Get default values from base config for all variables that have them.\"\"\"\n        defaults = dict()\n        for variable_name, attrs in variables.items():\n            if \"default\" in attrs:\n                defaults[variable_name] = attrs[\"default\"]\n\n        return defaults\n\n    @classmethod\n    @inject_client\n    async def from_template_and_values(\n        cls, base_job_template: dict, values: dict, client: \"PrefectClient\" = None\n    ):\n        \"\"\"Creates a valid worker configuration object from the provided base\n        configuration and overrides.\n\n        Important: this method expects that the base_job_template was already\n        validated server-side.\n        \"\"\"\n        job_config: Dict[str, Any] = base_job_template[\"job_configuration\"]\n        variables_schema = base_job_template[\"variables\"]\n        variables = cls._get_base_config_defaults(\n            variables_schema.get(\"properties\", {})\n        )\n        variables.update(values)\n\n        populated_configuration = apply_values(template=job_config, values=variables)\n        populated_configuration = await resolve_block_document_references(\n            template=populated_configuration, client=client\n        )\n        populated_configuration = await resolve_variables(\n            template=populated_configuration, client=client\n        )\n        return cls(**populated_configuration)\n\n    @classmethod\n    def json_template(cls) -> dict:\n        \"\"\"Returns a dict with job configuration as keys and the corresponding templates as values\n\n        Defaults to using the job configuration parameter name as the template variable name.\n\n        e.g.\n        {\n            key1: '{{ key1 }}',     # default variable template\n            key2: '{{ template2 }}', # `template2` specifically provide as template\n        }\n        \"\"\"\n        configuration = {}\n        properties = cls.schema()[\"properties\"]\n        for k, v in properties.items():\n            if v.get(\"template\"):\n                template = v[\"template\"]\n            else:\n                template = \"{{ \" + k + \" 
}}\"\n            configuration[k] = template\n\n        return configuration\n\n    def prepare_for_flow_run(\n        self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"DeploymentResponse\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ):\n        \"\"\"\n        Prepare the job configuration for a flow run.\n\n        This method is called by the worker before starting a flow run. It\n        should be used to set any configuration values that are dependent on\n        the flow run.\n\n        Args:\n            flow_run: The flow run to be executed.\n            deployment: The deployment that the flow run is associated with.\n            flow: The flow that the flow run is associated with.\n        \"\"\"\n\n        self._related_objects = {\n            \"deployment\": deployment,\n            \"flow\": flow,\n            \"flow-run\": flow_run,\n        }\n        if deployment is not None:\n            deployment_labels = self._base_deployment_labels(deployment)\n        else:\n            deployment_labels = {}\n\n        if flow is not None:\n            flow_labels = self._base_flow_labels(flow)\n        else:\n            flow_labels = {}\n\n        env = {\n            **self._base_environment(),\n            **self._base_flow_run_environment(flow_run),\n            **self.env,\n        }\n        self.env = {key: value for key, value in env.items() if value is not None}\n        self.labels = {\n            **self._base_flow_run_labels(flow_run),\n            **deployment_labels,\n            **flow_labels,\n            **self.labels,\n        }\n        self.name = self.name or flow_run.name\n        self.command = self.command or self._base_flow_run_command()\n\n    @staticmethod\n    def _base_flow_run_command() -> str:\n        \"\"\"\n        Generate a command for a flow run job.\n        \"\"\"\n        if experiment_enabled(\"enhanced_cancellation\"):\n            if (\n                PREFECT_EXPERIMENTAL_WARN\n                and PREFECT_EXPERIMENTAL_WARN_ENHANCED_CANCELLATION\n            ):\n                warnings.warn(\n                    EXPERIMENTAL_WARNING.format(\n                        feature=\"Enhanced flow run cancellation\",\n                        group=\"enhanced_cancellation\",\n                        help=\"\",\n                    ),\n                    ExperimentalFeature,\n                    stacklevel=3,\n                )\n            return \"prefect flow-run execute\"\n        return \"python -m prefect.engine\"\n\n    @staticmethod\n    def _base_flow_run_labels(flow_run: \"FlowRun\") -> Dict[str, str]:\n        \"\"\"\n        Generate a dictionary of labels for a flow run job.\n        \"\"\"\n        return {\n            \"prefect.io/flow-run-id\": str(flow_run.id),\n            \"prefect.io/flow-run-name\": flow_run.name,\n            \"prefect.io/version\": prefect.__version__,\n        }\n\n    @classmethod\n    def _base_environment(cls) -> Dict[str, str]:\n        \"\"\"\n        Environment variables that should be passed to all created infrastructure.\n\n        These values should be overridable with the `env` field.\n        \"\"\"\n        return get_current_settings().to_environment_variables(exclude_unset=True)\n\n    @staticmethod\n    def _base_flow_run_environment(flow_run: \"FlowRun\") -> Dict[str, str]:\n        \"\"\"\n        Generate a dictionary of environment variables for a flow run job.\n        \"\"\"\n        return {\"PREFECT__FLOW_RUN_ID\": str(flow_run.id)}\n\n    
@staticmethod\n    def _base_deployment_labels(deployment: \"DeploymentResponse\") -> Dict[str, str]:\n        labels = {\n            \"prefect.io/deployment-id\": str(deployment.id),\n            \"prefect.io/deployment-name\": deployment.name,\n        }\n        if deployment.updated is not None:\n            labels[\"prefect.io/deployment-updated\"] = deployment.updated.in_timezone(\n                \"utc\"\n            ).to_iso8601_string()\n        return labels\n\n    @staticmethod\n    def _base_flow_labels(flow: \"Flow\") -> Dict[str, str]:\n        return {\n            \"prefect.io/flow-id\": str(flow.id),\n            \"prefect.io/flow-name\": flow.name,\n        }\n\n    def _related_resources(self) -> List[RelatedResource]:\n        tags = set()\n        related = []\n\n        for kind, obj in self._related_objects.items():\n            if obj is None:\n                continue\n            if hasattr(obj, \"tags\"):\n                tags.update(obj.tags)\n            related.append(object_as_related_resource(kind=kind, role=kind, object=obj))\n\n        return related + tags_as_related_resources(tags)\n
    ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration.from_template_and_values","title":"from_template_and_values async classmethod","text":"

    Creates a valid worker configuration object from the provided base configuration and overrides.

    Important: this method expects that the base_job_template was already validated server-side.

    Source code in prefect/workers/base.py
    @classmethod\n@inject_client\nasync def from_template_and_values(\n    cls, base_job_template: dict, values: dict, client: \"PrefectClient\" = None\n):\n    \"\"\"Creates a valid worker configuration object from the provided base\n    configuration and overrides.\n\n    Important: this method expects that the base_job_template was already\n    validated server-side.\n    \"\"\"\n    job_config: Dict[str, Any] = base_job_template[\"job_configuration\"]\n    variables_schema = base_job_template[\"variables\"]\n    variables = cls._get_base_config_defaults(\n        variables_schema.get(\"properties\", {})\n    )\n    variables.update(values)\n\n    populated_configuration = apply_values(template=job_config, values=variables)\n    populated_configuration = await resolve_block_document_references(\n        template=populated_configuration, client=client\n    )\n    populated_configuration = await resolve_variables(\n        template=populated_configuration, client=client\n    )\n    return cls(**populated_configuration)\n
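A rough sketch of building a configuration from a pared-down base job template; real work pools carry richer templates, and block/variable resolution assumes a reachable Prefect API.

```python
import asyncio

from prefect.workers.base import BaseJobConfiguration


async def main():
    base_job_template = {
        "job_configuration": {
            "command": "{{ command }}",
            "env": "{{ env }}",
            "labels": "{{ labels }}",
            "name": "{{ name }}",
        },
        "variables": {
            "properties": {
                "command": {"default": None},
                "env": {"default": {}},
                "labels": {"default": {}},
                "name": {"default": None},
            }
        },
    }

    # Overrides are merged over the defaults declared in the variables schema.
    config = await BaseJobConfiguration.from_template_and_values(
        base_job_template=base_job_template,
        values={"name": "my-job"},
    )
    print(config.name)  # my-job


asyncio.run(main())
```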
    ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration.json_template","title":"json_template classmethod","text":"

    Returns a dict with job configuration parameters as keys and the corresponding templates as values.

    Defaults to using the job configuration parameter name as the template variable name.

    e.g.
    {
        key1: '{{ key1 }}',      # default variable template
        key2: '{{ template2 }}', # `template2` specifically provided as template
    }

    Source code in prefect/workers/base.py
    @classmethod\ndef json_template(cls) -> dict:\n    \"\"\"Returns a dict with job configuration as keys and the corresponding templates as values\n\n    Defaults to using the job configuration parameter name as the template variable name.\n\n    e.g.\n    {\n        key1: '{{ key1 }}',     # default variable template\n        key2: '{{ template2 }}', # `template2` specifically provide as template\n    }\n    \"\"\"\n    configuration = {}\n    properties = cls.schema()[\"properties\"]\n    for k, v in properties.items():\n        if v.get(\"template\"):\n            template = v[\"template\"]\n        else:\n            template = \"{{ \" + k + \" }}\"\n        configuration[k] = template\n\n    return configuration\n
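Calling it on the base class illustrates the shape of the output; the exact keys follow the configuration's fields.

```python
from prefect.workers.base import BaseJobConfiguration

print(BaseJobConfiguration.json_template())
# Roughly: {'command': '{{ command }}', 'env': '{{ env }}',
#           'labels': '{{ labels }}', 'name': '{{ name }}'}
```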
    ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration.prepare_for_flow_run","title":"prepare_for_flow_run","text":"

    Prepare the job configuration for a flow run.

    This method is called by the worker before starting a flow run. It should be used to set any configuration values that are dependent on the flow run.

    Parameters:

    - flow_run (FlowRun): The flow run to be executed. Required.
    - deployment (Optional[DeploymentResponse]): The deployment that the flow run is associated with. Default: None.
    - flow (Optional[Flow]): The flow that the flow run is associated with. Default: None.

    Source code in prefect/workers/base.py
    def prepare_for_flow_run(\n    self,\n    flow_run: \"FlowRun\",\n    deployment: Optional[\"DeploymentResponse\"] = None,\n    flow: Optional[\"Flow\"] = None,\n):\n    \"\"\"\n    Prepare the job configuration for a flow run.\n\n    This method is called by the worker before starting a flow run. It\n    should be used to set any configuration values that are dependent on\n    the flow run.\n\n    Args:\n        flow_run: The flow run to be executed.\n        deployment: The deployment that the flow run is associated with.\n        flow: The flow that the flow run is associated with.\n    \"\"\"\n\n    self._related_objects = {\n        \"deployment\": deployment,\n        \"flow\": flow,\n        \"flow-run\": flow_run,\n    }\n    if deployment is not None:\n        deployment_labels = self._base_deployment_labels(deployment)\n    else:\n        deployment_labels = {}\n\n    if flow is not None:\n        flow_labels = self._base_flow_labels(flow)\n    else:\n        flow_labels = {}\n\n    env = {\n        **self._base_environment(),\n        **self._base_flow_run_environment(flow_run),\n        **self.env,\n    }\n    self.env = {key: value for key, value in env.items() if value is not None}\n    self.labels = {\n        **self._base_flow_run_labels(flow_run),\n        **deployment_labels,\n        **flow_labels,\n        **self.labels,\n    }\n    self.name = self.name or flow_run.name\n    self.command = self.command or self._base_flow_run_command()\n
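A hedged sketch of how a worker-style caller might use this; the flow run ID is a placeholder, and a reachable Prefect API is assumed.

```python
import asyncio
from uuid import UUID

from prefect.client.orchestration import get_client
from prefect.workers.base import BaseJobConfiguration


async def main():
    async with get_client() as client:
        # Hypothetical flow run ID; inside a worker this object is handed to you.
        flow_run = await client.read_flow_run(
            UUID("00000000-0000-0000-0000-000000000000")
        )

    config = BaseJobConfiguration(env={"MY_SETTING": "value"})
    config.prepare_for_flow_run(flow_run)

    # The configuration now carries flow-run labels, merged environment
    # variables, a name, and a default command for the job.
    print(config.labels)
    print(config.env)
    print(config.name, config.command)


asyncio.run(main())
```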
    ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker","title":"BaseWorker","text":"

    Bases: ABC

    Source code in prefect/workers/base.py
    @register_base_type\nclass BaseWorker(abc.ABC):\n    type: str\n    job_configuration: Type[BaseJobConfiguration] = BaseJobConfiguration\n    job_configuration_variables: Optional[Type[BaseVariables]] = None\n\n    _documentation_url = \"\"\n    _logo_url = \"\"\n    _description = \"\"\n\n    def __init__(\n        self,\n        work_pool_name: str,\n        work_queues: Optional[List[str]] = None,\n        name: Optional[str] = None,\n        prefetch_seconds: Optional[float] = None,\n        create_pool_if_not_found: bool = True,\n        limit: Optional[int] = None,\n        heartbeat_interval_seconds: Optional[int] = None,\n        *,\n        base_job_template: Optional[Dict[str, Any]] = None,\n    ):\n        \"\"\"\n        Base class for all Prefect workers.\n\n        Args:\n            name: The name of the worker. If not provided, a random one\n                will be generated. If provided, it cannot contain '/' or '%'.\n                The name is used to identify the worker in the UI; if two\n                processes have the same name, they will be treated as the same\n                worker.\n            work_pool_name: The name of the work pool to poll.\n            work_queues: A list of work queues to poll. If not provided, all\n                work queue in the work pool will be polled.\n            prefetch_seconds: The number of seconds to prefetch flow runs for.\n            create_pool_if_not_found: Whether to create the work pool\n                if it is not found. Defaults to `True`, but can be set to `False` to\n                ensure that work pools are not created accidentally.\n            limit: The maximum number of flow runs this worker should be running at\n                a given time.\n            base_job_template: If creating the work pool, provide the base job\n                template to use. 
Logs a warning if the pool already exists.\n        \"\"\"\n        if name and (\"/\" in name or \"%\" in name):\n            raise ValueError(\"Worker name cannot contain '/' or '%'\")\n        self.name = name or f\"{self.__class__.__name__} {uuid4()}\"\n        self._logger = get_logger(f\"worker.{self.__class__.type}.{self.name.lower()}\")\n\n        self.is_setup = False\n        self._create_pool_if_not_found = create_pool_if_not_found\n        self._base_job_template = base_job_template\n        self._work_pool_name = work_pool_name\n        self._work_queues: Set[str] = set(work_queues) if work_queues else set()\n\n        self._prefetch_seconds: float = (\n            prefetch_seconds or PREFECT_WORKER_PREFETCH_SECONDS.value()\n        )\n        self.heartbeat_interval_seconds = (\n            heartbeat_interval_seconds or PREFECT_WORKER_HEARTBEAT_SECONDS.value()\n        )\n\n        self._work_pool: Optional[WorkPool] = None\n        self._runs_task_group: Optional[anyio.abc.TaskGroup] = None\n        self._client: Optional[PrefectClient] = None\n        self._last_polled_time: pendulum.DateTime = pendulum.now(\"utc\")\n        self._limit = limit\n        self._limiter: Optional[anyio.CapacityLimiter] = None\n        self._submitting_flow_run_ids = set()\n        self._cancelling_flow_run_ids = set()\n        self._scheduled_task_scopes = set()\n\n    @classmethod\n    def get_documentation_url(cls) -> str:\n        return cls._documentation_url\n\n    @classmethod\n    def get_logo_url(cls) -> str:\n        return cls._logo_url\n\n    @classmethod\n    def get_description(cls) -> str:\n        return cls._description\n\n    @classmethod\n    def get_default_base_job_template(cls) -> Dict:\n        if cls.job_configuration_variables is None:\n            schema = cls.job_configuration.schema()\n            # remove \"template\" key from all dicts in schema['properties'] because it is not a\n            # relevant field\n            for key, value in schema[\"properties\"].items():\n                if isinstance(value, dict):\n                    schema[\"properties\"][key].pop(\"template\", None)\n            variables_schema = schema\n        else:\n            variables_schema = cls.job_configuration_variables.schema()\n        variables_schema.pop(\"title\", None)\n        return {\n            \"job_configuration\": cls.job_configuration.json_template(),\n            \"variables\": variables_schema,\n        }\n\n    @staticmethod\n    def get_worker_class_from_type(type: str) -> Optional[Type[\"BaseWorker\"]]:\n        \"\"\"\n        Returns the worker class for a given worker type. 
If the worker type\n        is not recognized, returns None.\n        \"\"\"\n        load_prefect_collections()\n        worker_registry = get_registry_for_type(BaseWorker)\n        if worker_registry is not None:\n            return worker_registry.get(type)\n\n    @staticmethod\n    def get_all_available_worker_types() -> List[str]:\n        \"\"\"\n        Returns all worker types available in the local registry.\n        \"\"\"\n        load_prefect_collections()\n        worker_registry = get_registry_for_type(BaseWorker)\n        if worker_registry is not None:\n            return list(worker_registry.keys())\n        return []\n\n    def get_name_slug(self):\n        return slugify(self.name)\n\n    def get_flow_run_logger(self, flow_run: \"FlowRun\") -> PrefectLogAdapter:\n        return flow_run_logger(flow_run=flow_run).getChild(\n            \"worker\",\n            extra={\n                \"worker_name\": self.name,\n                \"work_pool_name\": (\n                    self._work_pool_name if self._work_pool else \"<unknown>\"\n                ),\n                \"work_pool_id\": str(getattr(self._work_pool, \"id\", \"unknown\")),\n            },\n        )\n\n    @abc.abstractmethod\n    async def run(\n        self,\n        flow_run: \"FlowRun\",\n        configuration: BaseJobConfiguration,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> BaseWorkerResult:\n        \"\"\"\n        Runs a given flow run on the current worker.\n        \"\"\"\n        raise NotImplementedError(\n            \"Workers must implement a method for running submitted flow runs\"\n        )\n\n    async def kill_infrastructure(\n        self,\n        infrastructure_pid: str,\n        configuration: BaseJobConfiguration,\n        grace_seconds: int = 30,\n    ):\n        \"\"\"\n        Method for killing infrastructure created by a worker. 
Should be implemented by\n        individual workers if they support killing infrastructure.\n        \"\"\"\n        raise NotImplementedError(\n            \"This worker does not support killing infrastructure.\"\n        )\n\n    @classmethod\n    def __dispatch_key__(cls):\n        if cls.__name__ == \"BaseWorker\":\n            return None  # The base class is abstract\n        return cls.type\n\n    async def setup(self):\n        \"\"\"Prepares the worker to run.\"\"\"\n        self._logger.debug(\"Setting up worker...\")\n        self._runs_task_group = anyio.create_task_group()\n        self._limiter = (\n            anyio.CapacityLimiter(self._limit) if self._limit is not None else None\n        )\n        self._client = get_client()\n        await self._client.__aenter__()\n        await self._runs_task_group.__aenter__()\n\n        self.is_setup = True\n\n    async def teardown(self, *exc_info):\n        \"\"\"Cleans up resources after the worker is stopped.\"\"\"\n        self._logger.debug(\"Tearing down worker...\")\n        self.is_setup = False\n        for scope in self._scheduled_task_scopes:\n            scope.cancel()\n        if self._runs_task_group:\n            await self._runs_task_group.__aexit__(*exc_info)\n        if self._client:\n            await self._client.__aexit__(*exc_info)\n        self._runs_task_group = None\n        self._client = None\n\n    def is_worker_still_polling(self, query_interval_seconds: int) -> bool:\n        \"\"\"\n        This method is invoked by a webserver healthcheck handler\n        and returns a boolean indicating if the worker has recorded a\n        scheduled flow run poll within a variable amount of time.\n\n        The `query_interval_seconds` is the same value that is used by\n        the loop services - we will evaluate if the _last_polled_time\n        was within that interval x 30 (so 10s -> 5m)\n\n        The instance property `self._last_polled_time`\n        is currently set/updated in `get_and_submit_flow_runs()`\n        \"\"\"\n        threshold_seconds = query_interval_seconds * 30\n\n        seconds_since_last_poll = (\n            pendulum.now(\"utc\") - self._last_polled_time\n        ).in_seconds()\n\n        is_still_polling = seconds_since_last_poll <= threshold_seconds\n\n        if not is_still_polling:\n            self._logger.error(\n                f\"Worker has not polled in the last {seconds_since_last_poll} seconds \"\n                \"and should be restarted\"\n            )\n\n        return is_still_polling\n\n    async def get_and_submit_flow_runs(self):\n        runs_response = await self._get_scheduled_flow_runs()\n\n        self._last_polled_time = pendulum.now(\"utc\")\n\n        return await self._submit_scheduled_flow_runs(flow_run_response=runs_response)\n\n    async def check_for_cancelled_flow_runs(self):\n        if not self.is_setup:\n            raise RuntimeError(\n                \"Worker is not set up. 
Please make sure you are running this worker \"\n                \"as an async context manager.\"\n            )\n\n        self._logger.debug(\"Checking for cancelled flow runs...\")\n\n        work_queue_filter = (\n            WorkQueueFilter(name=WorkQueueFilterName(any_=list(self._work_queues)))\n            if self._work_queues\n            else None\n        )\n\n        named_cancelling_flow_runs = await self._client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=FlowRunFilterState(\n                    type=FlowRunFilterStateType(any_=[StateType.CANCELLED]),\n                    name=FlowRunFilterStateName(any_=[\"Cancelling\"]),\n                ),\n                # Avoid duplicate cancellation calls\n                id=FlowRunFilterId(not_any_=list(self._cancelling_flow_run_ids)),\n            ),\n            work_pool_filter=WorkPoolFilter(\n                name=WorkPoolFilterName(any_=[self._work_pool_name])\n            ),\n            work_queue_filter=work_queue_filter,\n        )\n\n        typed_cancelling_flow_runs = await self._client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=FlowRunFilterState(\n                    type=FlowRunFilterStateType(any_=[StateType.CANCELLING]),\n                ),\n                # Avoid duplicate cancellation calls\n                id=FlowRunFilterId(not_any_=list(self._cancelling_flow_run_ids)),\n            ),\n            work_pool_filter=WorkPoolFilter(\n                name=WorkPoolFilterName(any_=[self._work_pool_name])\n            ),\n            work_queue_filter=work_queue_filter,\n        )\n\n        cancelling_flow_runs = named_cancelling_flow_runs + typed_cancelling_flow_runs\n\n        if cancelling_flow_runs:\n            self._logger.info(\n                f\"Found {len(cancelling_flow_runs)} flow runs awaiting cancellation.\"\n            )\n\n        for flow_run in cancelling_flow_runs:\n            self._cancelling_flow_run_ids.add(flow_run.id)\n            self._runs_task_group.start_soon(self.cancel_run, flow_run)\n\n        return cancelling_flow_runs\n\n    async def cancel_run(self, flow_run: \"FlowRun\"):\n        run_logger = self.get_flow_run_logger(flow_run)\n\n        try:\n            configuration = await self._get_configuration(flow_run)\n        except ObjectNotFound:\n            self._logger.warning(\n                f\"Flow run {flow_run.id!r} cannot be cancelled by this worker:\"\n                f\" associated deployment {flow_run.deployment_id!r} does not exist.\"\n            )\n            await self._mark_flow_run_as_cancelled(\n                flow_run,\n                state_updates={\n                    \"message\": (\n                        \"This flow run is missing infrastructure configuration information\"\n                        \" and cancellation cannot be guaranteed.\"\n                    )\n                },\n            )\n            return\n        else:\n            if configuration.is_using_a_runner:\n                self._logger.info(\n                    f\"Skipping cancellation because flow run {str(flow_run.id)!r} is\"\n                    \" using enhanced cancellation. A dedicated runner will handle\"\n                    \" cancellation.\"\n                )\n                return\n\n        if not flow_run.infrastructure_pid:\n            run_logger.error(\n                f\"Flow run '{flow_run.id}' does not have an infrastructure pid\"\n                \" attached. 
Cancellation cannot be guaranteed.\"\n            )\n            await self._mark_flow_run_as_cancelled(\n                flow_run,\n                state_updates={\n                    \"message\": (\n                        \"This flow run is missing infrastructure tracking information\"\n                        \" and cancellation cannot be guaranteed.\"\n                    )\n                },\n            )\n            return\n\n        try:\n            await self.kill_infrastructure(\n                infrastructure_pid=flow_run.infrastructure_pid,\n                configuration=configuration,\n            )\n        except NotImplementedError:\n            self._logger.error(\n                f\"Worker type {self.type!r} does not support killing created \"\n                \"infrastructure. Cancellation cannot be guaranteed.\"\n            )\n        except InfrastructureNotFound as exc:\n            self._logger.warning(f\"{exc} Marking flow run as cancelled.\")\n            await self._mark_flow_run_as_cancelled(flow_run)\n        except InfrastructureNotAvailable as exc:\n            self._logger.warning(f\"{exc} Flow run cannot be cancelled by this worker.\")\n        except Exception:\n            run_logger.exception(\n                \"Encountered exception while killing infrastructure for flow run \"\n                f\"'{flow_run.id}'. Flow run may not be cancelled.\"\n            )\n            # We will try again on generic exceptions\n            self._cancelling_flow_run_ids.remove(flow_run.id)\n            return\n        else:\n            self._emit_flow_run_cancelled_event(\n                flow_run=flow_run, configuration=configuration\n            )\n            await self._mark_flow_run_as_cancelled(flow_run)\n            run_logger.info(f\"Cancelled flow run '{flow_run.id}'!\")\n\n    async def _update_local_work_pool_info(self):\n        try:\n            work_pool = await self._client.read_work_pool(\n                work_pool_name=self._work_pool_name\n            )\n        except ObjectNotFound:\n            if self._create_pool_if_not_found:\n                wp = WorkPoolCreate(\n                    name=self._work_pool_name,\n                    type=self.type,\n                )\n                if self._base_job_template is not None:\n                    wp.base_job_template = self._base_job_template\n\n                work_pool = await self._client.create_work_pool(work_pool=wp)\n                self._logger.info(f\"Work pool {self._work_pool_name!r} created.\")\n            else:\n                self._logger.warning(f\"Work pool {self._work_pool_name!r} not found!\")\n                if self._base_job_template is not None:\n                    self._logger.warning(\n                        \"Ignoring supplied base job template because the work pool\"\n                        \" already exists\"\n                    )\n                return\n\n        # if the remote config type changes (or if it's being loaded for the\n        # first time), check if it matches the local type and warn if not\n        if getattr(self._work_pool, \"type\", 0) != work_pool.type:\n            if work_pool.type != self.__class__.type:\n                self._logger.warning(\n                    \"Worker type mismatch! This worker process expects type \"\n                    f\"{self.type!r} but received {work_pool.type!r}\"\n                    \" from the server. 
Unexpected behavior may occur.\"\n                )\n\n        # once the work pool is loaded, verify that it has a `base_job_template` and\n        # set it if not\n        if not work_pool.base_job_template:\n            job_template = self.__class__.get_default_base_job_template()\n            await self._set_work_pool_template(work_pool, job_template)\n            work_pool.base_job_template = job_template\n\n        self._work_pool = work_pool\n\n    async def _send_worker_heartbeat(self):\n        if self._work_pool:\n            await self._client.send_worker_heartbeat(\n                work_pool_name=self._work_pool_name,\n                worker_name=self.name,\n                heartbeat_interval_seconds=self.heartbeat_interval_seconds,\n            )\n\n    async def sync_with_backend(self):\n        \"\"\"\n        Updates the worker's local information about it's current work pool and\n        queues. Sends a worker heartbeat to the API.\n        \"\"\"\n        await self._update_local_work_pool_info()\n\n        await self._send_worker_heartbeat()\n\n        self._logger.debug(\"Worker synchronized with the Prefect API server.\")\n\n    async def _get_scheduled_flow_runs(\n        self,\n    ) -> List[\"WorkerFlowRunResponse\"]:\n        \"\"\"\n        Retrieve scheduled flow runs from the work pool's queues.\n        \"\"\"\n        scheduled_before = pendulum.now(\"utc\").add(seconds=int(self._prefetch_seconds))\n        self._logger.debug(\n            f\"Querying for flow runs scheduled before {scheduled_before}\"\n        )\n        try:\n            scheduled_flow_runs = (\n                await self._client.get_scheduled_flow_runs_for_work_pool(\n                    work_pool_name=self._work_pool_name,\n                    scheduled_before=scheduled_before,\n                    work_queue_names=list(self._work_queues),\n                )\n            )\n            self._logger.debug(\n                f\"Discovered {len(scheduled_flow_runs)} scheduled_flow_runs\"\n            )\n            return scheduled_flow_runs\n        except ObjectNotFound:\n            # the pool doesn't exist; it will be created on the next\n            # heartbeat (or an appropriate warning will be logged)\n            return []\n\n    async def _submit_scheduled_flow_runs(\n        self, flow_run_response: List[\"WorkerFlowRunResponse\"]\n    ) -> List[\"FlowRun\"]:\n        \"\"\"\n        Takes a list of WorkerFlowRunResponses and submits the referenced flow runs\n        for execution by the worker.\n        \"\"\"\n        submittable_flow_runs = [entry.flow_run for entry in flow_run_response]\n        submittable_flow_runs.sort(key=lambda run: run.next_scheduled_start_time)\n        for flow_run in submittable_flow_runs:\n            if flow_run.id in self._submitting_flow_run_ids:\n                continue\n\n            try:\n                if self._limiter:\n                    self._limiter.acquire_on_behalf_of_nowait(flow_run.id)\n            except anyio.WouldBlock:\n                self._logger.info(\n                    f\"Flow run limit reached; {self._limiter.borrowed_tokens} flow runs\"\n                    \" in progress.\"\n                )\n                break\n            else:\n                run_logger = self.get_flow_run_logger(flow_run)\n                run_logger.info(\n                    f\"Worker '{self.name}' submitting flow run '{flow_run.id}'\"\n                )\n                self._submitting_flow_run_ids.add(flow_run.id)\n                
self._runs_task_group.start_soon(\n                    self._submit_run,\n                    flow_run,\n                )\n\n        return list(\n            filter(\n                lambda run: run.id in self._submitting_flow_run_ids,\n                submittable_flow_runs,\n            )\n        )\n\n    async def _check_flow_run(self, flow_run: \"FlowRun\") -> None:\n        \"\"\"\n        Performs a check on a submitted flow run to warn the user if the flow run\n        was created from a deployment with a storage block.\n        \"\"\"\n        if flow_run.deployment_id:\n            deployment = await self._client.read_deployment(flow_run.deployment_id)\n            if deployment.storage_document_id:\n                raise ValueError(\n                    f\"Flow run {flow_run.id!r} was created from deployment\"\n                    f\" {deployment.name!r} which is configured with a storage block.\"\n                    \" Please use an\"\n                    \" agent to execute this flow run.\"\n                )\n\n    async def _submit_run(self, flow_run: \"FlowRun\") -> None:\n        \"\"\"\n        Submits a given flow run for execution by the worker.\n        \"\"\"\n        run_logger = self.get_flow_run_logger(flow_run)\n\n        try:\n            await self._check_flow_run(flow_run)\n        except (ValueError, ObjectNotFound):\n            self._logger.exception(\n                (\n                    \"Flow run %s did not pass checks and will not be submitted for\"\n                    \" execution\"\n                ),\n                flow_run.id,\n            )\n            self._submitting_flow_run_ids.remove(flow_run.id)\n            return\n\n        ready_to_submit = await self._propose_pending_state(flow_run)\n\n        if ready_to_submit:\n            readiness_result = await self._runs_task_group.start(\n                self._submit_run_and_capture_errors, flow_run\n            )\n\n            if readiness_result and not isinstance(readiness_result, Exception):\n                try:\n                    await self._client.update_flow_run(\n                        flow_run_id=flow_run.id,\n                        infrastructure_pid=str(readiness_result),\n                    )\n                except Exception:\n                    run_logger.exception(\n                        \"An error occurred while setting the `infrastructure_pid` on \"\n                        f\"flow run {flow_run.id!r}. 
The flow run will \"\n                        \"not be cancellable.\"\n                    )\n\n            run_logger.info(f\"Completed submission of flow run '{flow_run.id}'\")\n\n        else:\n            # If the run is not ready to submit, release the concurrency slot\n            if self._limiter:\n                self._limiter.release_on_behalf_of(flow_run.id)\n\n        self._submitting_flow_run_ids.remove(flow_run.id)\n\n    async def _submit_run_and_capture_errors(\n        self, flow_run: \"FlowRun\", task_status: anyio.abc.TaskStatus = None\n    ) -> Union[BaseWorkerResult, Exception]:\n        run_logger = self.get_flow_run_logger(flow_run)\n\n        try:\n            configuration = await self._get_configuration(flow_run)\n            submitted_event = self._emit_flow_run_submitted_event(configuration)\n            result = await self.run(\n                flow_run=flow_run,\n                task_status=task_status,\n                configuration=configuration,\n            )\n        except Exception as exc:\n            if not task_status._future.done():\n                # This flow run was being submitted and did not start successfully\n                run_logger.exception(\n                    f\"Failed to submit flow run '{flow_run.id}' to infrastructure.\"\n                )\n                # Mark the task as started to prevent agent crash\n                task_status.started(exc)\n                await self._propose_crashed_state(\n                    flow_run, \"Flow run could not be submitted to infrastructure\"\n                )\n            else:\n                run_logger.exception(\n                    f\"An error occurred while monitoring flow run '{flow_run.id}'. \"\n                    \"The flow run will not be marked as failed, but an issue may have \"\n                    \"occurred.\"\n                )\n            return exc\n        finally:\n            if self._limiter:\n                self._limiter.release_on_behalf_of(flow_run.id)\n\n        if not task_status._future.done():\n            run_logger.error(\n                f\"Infrastructure returned without reporting flow run '{flow_run.id}' \"\n                \"as started or raising an error. This behavior is not expected and \"\n                \"generally indicates improper implementation of infrastructure. 
The \"\n                \"flow run will not be marked as failed, but an issue may have occurred.\"\n            )\n            # Mark the task as started to prevent agent crash\n            task_status.started()\n\n        if result.status_code != 0:\n            await self._propose_crashed_state(\n                flow_run,\n                (\n                    \"Flow run infrastructure exited with non-zero status code\"\n                    f\" {result.status_code}.\"\n                ),\n            )\n\n        self._emit_flow_run_executed_event(result, configuration, submitted_event)\n\n        return result\n\n    def get_status(self):\n        \"\"\"\n        Retrieves the status of the current worker including its name, current worker\n        pool, the work pool queues it is polling, and its local settings.\n        \"\"\"\n        return {\n            \"name\": self.name,\n            \"work_pool\": (\n                self._work_pool.dict(json_compatible=True)\n                if self._work_pool is not None\n                else None\n            ),\n            \"settings\": {\n                \"prefetch_seconds\": self._prefetch_seconds,\n            },\n        }\n\n    async def _get_configuration(\n        self,\n        flow_run: \"FlowRun\",\n    ) -> BaseJobConfiguration:\n        deployment = await self._client.read_deployment(flow_run.deployment_id)\n        flow = await self._client.read_flow(flow_run.flow_id)\n\n        deployment_vars = deployment.job_variables or {}\n        flow_run_vars = flow_run.job_variables or {}\n        job_variables = {**deployment_vars, **flow_run_vars}\n\n        configuration = await self.job_configuration.from_template_and_values(\n            base_job_template=self._work_pool.base_job_template,\n            values=job_variables,\n            client=self._client,\n        )\n        configuration.prepare_for_flow_run(\n            flow_run=flow_run, deployment=deployment, flow=flow\n        )\n        return configuration\n\n    async def _propose_pending_state(self, flow_run: \"FlowRun\") -> bool:\n        run_logger = self.get_flow_run_logger(flow_run)\n        state = flow_run.state\n        try:\n            state = await propose_state(\n                self._client, Pending(), flow_run_id=flow_run.id\n            )\n        except Abort as exc:\n            run_logger.info(\n                (\n                    f\"Aborted submission of flow run '{flow_run.id}'. 
\"\n                    f\"Server sent an abort signal: {exc}\"\n                ),\n            )\n            return False\n        except Exception:\n            run_logger.exception(\n                f\"Failed to update state of flow run '{flow_run.id}'\",\n            )\n            return False\n\n        if not state.is_pending():\n            run_logger.info(\n                (\n                    f\"Aborted submission of flow run '{flow_run.id}': \"\n                    f\"Server returned a non-pending state {state.type.value!r}\"\n                ),\n            )\n            return False\n\n        return True\n\n    async def _propose_failed_state(self, flow_run: \"FlowRun\", exc: Exception) -> None:\n        run_logger = self.get_flow_run_logger(flow_run)\n        try:\n            await propose_state(\n                self._client,\n                await exception_to_failed_state(message=\"Submission failed.\", exc=exc),\n                flow_run_id=flow_run.id,\n            )\n        except Abort:\n            # We've already failed, no need to note the abort but we don't want it to\n            # raise in the agent process\n            pass\n        except Exception:\n            run_logger.error(\n                f\"Failed to update state of flow run '{flow_run.id}'\",\n                exc_info=True,\n            )\n\n    async def _propose_crashed_state(self, flow_run: \"FlowRun\", message: str) -> None:\n        run_logger = self.get_flow_run_logger(flow_run)\n        try:\n            state = await propose_state(\n                self._client,\n                Crashed(message=message),\n                flow_run_id=flow_run.id,\n            )\n        except Abort:\n            # Flow run already marked as failed\n            pass\n        except Exception:\n            run_logger.exception(f\"Failed to update state of flow run '{flow_run.id}'\")\n        else:\n            if state.is_crashed():\n                run_logger.info(\n                    f\"Reported flow run '{flow_run.id}' as crashed: {message}\"\n                )\n\n    async def _mark_flow_run_as_cancelled(\n        self, flow_run: \"FlowRun\", state_updates: Optional[dict] = None\n    ) -> None:\n        state_updates = state_updates or {}\n        state_updates.setdefault(\"name\", \"Cancelled\")\n        state_updates.setdefault(\"type\", StateType.CANCELLED)\n        state = flow_run.state.copy(update=state_updates)\n\n        await self._client.set_flow_run_state(flow_run.id, state, force=True)\n\n        # Do not remove the flow run from the cancelling set immediately because\n        # the API caches responses for the `read_flow_runs` and we do not want to\n        # duplicate cancellations.\n        await self._schedule_task(\n            60 * 10, self._cancelling_flow_run_ids.remove, flow_run.id\n        )\n\n    async def _set_work_pool_template(self, work_pool, job_template):\n        \"\"\"Updates the `base_job_template` for the worker's work pool server side.\"\"\"\n        await self._client.update_work_pool(\n            work_pool_name=work_pool.name,\n            work_pool=WorkPoolUpdate(\n                base_job_template=job_template,\n            ),\n        )\n\n    async def _schedule_task(self, __in_seconds: int, fn, *args, **kwargs):\n        \"\"\"\n        Schedule a background task to start after some time.\n\n        These tasks will be run immediately when the worker exits instead of waiting.\n\n        The function may be async or sync. 
Async functions will be awaited.\n        \"\"\"\n\n        async def wrapper(task_status):\n            # If we are shutting down, do not sleep; otherwise sleep until the scheduled\n            # time or shutdown\n            if self.is_setup:\n                with anyio.CancelScope() as scope:\n                    self._scheduled_task_scopes.add(scope)\n                    task_status.started()\n                    await anyio.sleep(__in_seconds)\n\n                self._scheduled_task_scopes.remove(scope)\n            else:\n                task_status.started()\n\n            result = fn(*args, **kwargs)\n            if inspect.iscoroutine(result):\n                await result\n\n        await self._runs_task_group.start(wrapper)\n\n    async def __aenter__(self):\n        self._logger.debug(\"Entering worker context...\")\n        await self.setup()\n        return self\n\n    async def __aexit__(self, *exc_info):\n        self._logger.debug(\"Exiting worker context...\")\n        await self.teardown(*exc_info)\n\n    def __repr__(self):\n        return f\"Worker(pool={self._work_pool_name!r}, name={self.name!r})\"\n\n    def _event_resource(self):\n        return {\n            \"prefect.resource.id\": f\"prefect.worker.{self.type}.{self.get_name_slug()}\",\n            \"prefect.resource.name\": self.name,\n            \"prefect.version\": prefect.__version__,\n            \"prefect.worker-type\": self.type,\n        }\n\n    def _event_related_resources(\n        self,\n        configuration: Optional[BaseJobConfiguration] = None,\n        include_self: bool = False,\n    ) -> List[RelatedResource]:\n        related = []\n        if configuration:\n            related += configuration._related_resources()\n\n        if self._work_pool:\n            related.append(\n                object_as_related_resource(\n                    kind=\"work-pool\", role=\"work-pool\", object=self._work_pool\n                )\n            )\n\n        if include_self:\n            worker_resource = self._event_resource()\n            worker_resource[\"prefect.resource.role\"] = \"worker\"\n            related.append(RelatedResource.parse_obj(worker_resource))\n\n        return related\n\n    def _emit_flow_run_submitted_event(\n        self, configuration: BaseJobConfiguration\n    ) -> Event:\n        return emit_event(\n            event=\"prefect.worker.submitted-flow-run\",\n            resource=self._event_resource(),\n            related=self._event_related_resources(configuration=configuration),\n        )\n\n    def _emit_flow_run_executed_event(\n        self,\n        result: BaseWorkerResult,\n        configuration: BaseJobConfiguration,\n        submitted_event: Event,\n    ):\n        related = self._event_related_resources(configuration=configuration)\n\n        for resource in related:\n            if resource.role == \"flow-run\":\n                resource[\"prefect.infrastructure.identifier\"] = str(result.identifier)\n                resource[\"prefect.infrastructure.status-code\"] = str(result.status_code)\n\n        emit_event(\n            event=\"prefect.worker.executed-flow-run\",\n            resource=self._event_resource(),\n            related=related,\n            follows=submitted_event,\n        )\n\n    async def _emit_worker_started_event(self) -> Event:\n        return emit_event(\n            \"prefect.worker.started\",\n            resource=self._event_resource(),\n            related=self._event_related_resources(),\n        )\n\n    async def 
_emit_worker_stopped_event(self, started_event: Event):\n        emit_event(\n            \"prefect.worker.stopped\",\n            resource=self._event_resource(),\n            related=self._event_related_resources(),\n            follows=started_event,\n        )\n\n    def _emit_flow_run_cancelled_event(\n        self, flow_run: \"FlowRun\", configuration: BaseJobConfiguration\n    ):\n        related = self._event_related_resources(configuration=configuration)\n\n        for resource in related:\n            if resource.role == \"flow-run\":\n                resource[\"prefect.infrastructure.identifier\"] = str(\n                    flow_run.infrastructure_pid\n                )\n\n        emit_event(\n            event=\"prefect.worker.cancelled-flow-run\",\n            resource=self._event_resource(),\n            related=related,\n        )\n
    ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.get_all_available_worker_types","title":"get_all_available_worker_types staticmethod","text":"

    Returns all worker types available in the local registry.

    Source code in prefect/workers/base.py
    @staticmethod\ndef get_all_available_worker_types() -> List[str]:\n    \"\"\"\n    Returns all worker types available in the local registry.\n    \"\"\"\n    load_prefect_collections()\n    worker_registry = get_registry_for_type(BaseWorker)\n    if worker_registry is not None:\n        return list(worker_registry.keys())\n    return []\n
    ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.get_status","title":"get_status","text":"

    Retrieves the status of the current worker including its name, current worker pool, the work pool queues it is polling, and its local settings.

    Source code in prefect/workers/base.py
    def get_status(self):\n    \"\"\"\n    Retrieves the status of the current worker including its name, current worker\n    pool, the work pool queues it is polling, and its local settings.\n    \"\"\"\n    return {\n        \"name\": self.name,\n        \"work_pool\": (\n            self._work_pool.dict(json_compatible=True)\n            if self._work_pool is not None\n            else None\n        ),\n        \"settings\": {\n            \"prefetch_seconds\": self._prefetch_seconds,\n        },\n    }\n
    ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.get_worker_class_from_type","title":"get_worker_class_from_type staticmethod","text":"

    Returns the worker class for a given worker type. If the worker type is not recognized, returns None.

    Source code in prefect/workers/base.py
    @staticmethod\ndef get_worker_class_from_type(type: str) -> Optional[Type[\"BaseWorker\"]]:\n    \"\"\"\n    Returns the worker class for a given worker type. If the worker type\n    is not recognized, returns None.\n    \"\"\"\n    load_prefect_collections()\n    worker_registry = get_registry_for_type(BaseWorker)\n    if worker_registry is not None:\n        return worker_registry.get(type)\n
    ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.is_worker_still_polling","title":"is_worker_still_polling","text":"

    This method is invoked by a webserver healthcheck handler and returns a boolean indicating if the worker has recorded a scheduled flow run poll within a variable amount of time.

    The query_interval_seconds is the same value that is used by the loop services - we will evaluate if the _last_polled_time was within that interval x 30 (so 10s -> 5m)

    The instance property self._last_polled_time is currently set/updated in get_and_submit_flow_runs()

    Source code in prefect/workers/base.py
    def is_worker_still_polling(self, query_interval_seconds: int) -> bool:\n    \"\"\"\n    This method is invoked by a webserver healthcheck handler\n    and returns a boolean indicating if the worker has recorded a\n    scheduled flow run poll within a variable amount of time.\n\n    The `query_interval_seconds` is the same value that is used by\n    the loop services - we will evaluate if the _last_polled_time\n    was within that interval x 30 (so 10s -> 5m)\n\n    The instance property `self._last_polled_time`\n    is currently set/updated in `get_and_submit_flow_runs()`\n    \"\"\"\n    threshold_seconds = query_interval_seconds * 30\n\n    seconds_since_last_poll = (\n        pendulum.now(\"utc\") - self._last_polled_time\n    ).in_seconds()\n\n    is_still_polling = seconds_since_last_poll <= threshold_seconds\n\n    if not is_still_polling:\n        self._logger.error(\n            f\"Worker has not polled in the last {seconds_since_last_poll} seconds \"\n            \"and should be restarted\"\n        )\n\n    return is_still_polling\n
    ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.kill_infrastructure","title":"kill_infrastructure async","text":"

    Method for killing infrastructure created by a worker. Should be implemented by individual workers if they support killing infrastructure.

    Source code in prefect/workers/base.py
    async def kill_infrastructure(\n    self,\n    infrastructure_pid: str,\n    configuration: BaseJobConfiguration,\n    grace_seconds: int = 30,\n):\n    \"\"\"\n    Method for killing infrastructure created by a worker. Should be implemented by\n    individual workers if they support killing infrastructure.\n    \"\"\"\n    raise NotImplementedError(\n        \"This worker does not support killing infrastructure.\"\n    )\n
    ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.run","title":"run abstractmethod async","text":"

    Runs a given flow run on the current worker.

    Source code in prefect/workers/base.py
    @abc.abstractmethod\nasync def run(\n    self,\n    flow_run: \"FlowRun\",\n    configuration: BaseJobConfiguration,\n    task_status: Optional[anyio.abc.TaskStatus] = None,\n) -> BaseWorkerResult:\n    \"\"\"\n    Runs a given flow run on the current worker.\n    \"\"\"\n    raise NotImplementedError(\n        \"Workers must implement a method for running submitted flow runs\"\n    )\n
    ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.setup","title":"setup async","text":"

    Prepares the worker to run.

    Source code in prefect/workers/base.py
    async def setup(self):\n    \"\"\"Prepares the worker to run.\"\"\"\n    self._logger.debug(\"Setting up worker...\")\n    self._runs_task_group = anyio.create_task_group()\n    self._limiter = (\n        anyio.CapacityLimiter(self._limit) if self._limit is not None else None\n    )\n    self._client = get_client()\n    await self._client.__aenter__()\n    await self._runs_task_group.__aenter__()\n\n    self.is_setup = True\n
    ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.sync_with_backend","title":"sync_with_backend async","text":"

    Updates the worker's local information about its current work pool and queues. Sends a worker heartbeat to the API.

    Source code in prefect/workers/base.py
    async def sync_with_backend(self):\n    \"\"\"\n    Updates the worker's local information about it's current work pool and\n    queues. Sends a worker heartbeat to the API.\n    \"\"\"\n    await self._update_local_work_pool_info()\n\n    await self._send_worker_heartbeat()\n\n    self._logger.debug(\"Worker synchronized with the Prefect API server.\")\n
    ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.teardown","title":"teardown async","text":"

    Cleans up resources after the worker is stopped.

    Source code in prefect/workers/base.py
    async def teardown(self, *exc_info):\n    \"\"\"Cleans up resources after the worker is stopped.\"\"\"\n    self._logger.debug(\"Tearing down worker...\")\n    self.is_setup = False\n    for scope in self._scheduled_task_scopes:\n        scope.cancel()\n    if self._runs_task_group:\n        await self._runs_task_group.__aexit__(*exc_info)\n    if self._client:\n        await self._client.__aexit__(*exc_info)\n    self._runs_task_group = None\n    self._client = None\n
    ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/block/","title":"block","text":"","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/block/#prefect.workers.block","title":"prefect.workers.block","text":"","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/block/#prefect.workers.block.BlockWorkerJobConfiguration","title":"BlockWorkerJobConfiguration","text":"

    Bases: BaseModel

    Source code in prefect/workers/block.py
    class BlockWorkerJobConfiguration(BaseModel):\n    block: Block = Field(\n        default=..., description=\"The infrastructure block to use for job creation.\"\n    )\n\n    @validator(\"block\")\n    def _validate_infrastructure_block(cls, v):\n        return validate_block_is_infrastructure(v)\n\n    _related_objects: Dict[str, Any] = PrivateAttr(default_factory=dict)\n\n    @property\n    def is_using_a_runner(self):\n        return (\n            self.block.command is not None\n            and \"prefect flow-run execute\" in shlex.join(self.block.command)\n        )\n\n    @staticmethod\n    def _get_base_config_defaults(variables: dict) -> dict:\n        \"\"\"Get default values from base config for all variables that have them.\"\"\"\n        defaults = dict()\n        for variable_name, attrs in variables.items():\n            if \"default\" in attrs:\n                defaults[variable_name] = attrs[\"default\"]\n\n        return defaults\n\n    @classmethod\n    @inject_client\n    async def from_template_and_values(\n        cls, base_job_template: dict, values: dict, client: \"PrefectClient\" = None\n    ):\n        \"\"\"Creates a valid worker configuration object from the provided base\n        configuration and overrides.\n\n        Important: this method expects that the base_job_template was already\n        validated server-side.\n        \"\"\"\n        job_config: Dict[str, Any] = base_job_template[\"job_configuration\"]\n        variables_schema = base_job_template[\"variables\"]\n        variables = cls._get_base_config_defaults(\n            variables_schema.get(\"properties\", {})\n        )\n        variables.update(values)\n\n        populated_configuration = apply_values(template=job_config, values=variables)\n\n        block_document_id = get_from_dict(\n            populated_configuration, \"block.$ref.block_document_id\"\n        )\n        if not block_document_id:\n            raise ValueError(\n                \"Base job template is invalid for this worker type because it does not\"\n                \" contain a block_document_id after variable resolution.\"\n            )\n\n        block_document = await client.read_block_document(\n            block_document_id=block_document_id\n        )\n        infrastructure_block = Block._from_block_document(block_document)\n\n        populated_configuration[\"block\"] = infrastructure_block\n\n        return cls(**populated_configuration)\n\n    @classmethod\n    def json_template(cls) -> dict:\n        \"\"\"Returns a dict with job configuration as keys and the corresponding templates as values\n\n        Defaults to using the job configuration parameter name as the template variable name.\n\n        e.g.\n        {\n            key1: '{{ key1 }}',     # default variable template\n            key2: '{{ template2 }}', # `template2` specifically provide as template\n        }\n        \"\"\"\n        configuration = {}\n        properties = cls.schema()[\"properties\"]\n        for k, v in properties.items():\n            if v.get(\"template\"):\n                template = v[\"template\"]\n            else:\n                template = \"{{ \" + k + \" }}\"\n            configuration[k] = template\n\n        return configuration\n\n    def _related_resources(self) -> List[RelatedResource]:\n        tags = set()\n        related = []\n\n        for kind, obj in self._related_objects.items():\n            if obj is None:\n                continue\n            if hasattr(obj, \"tags\"):\n                
tags.update(obj.tags)\n            related.append(object_as_related_resource(kind=kind, role=kind, object=obj))\n\n        return related + tags_as_related_resources(tags)\n\n    def prepare_for_flow_run(\n        self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"DeploymentResponse\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ):\n        self.block = self.block.prepare_for_flow_run(\n            flow_run=flow_run, deployment=deployment, flow=flow\n        )\n
    ","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/block/#prefect.workers.block.BlockWorkerJobConfiguration.from_template_and_values","title":"from_template_and_values async classmethod","text":"

    Creates a valid worker configuration object from the provided base configuration and overrides.

    Important: this method expects that the base_job_template was already validated server-side.

    Source code in prefect/workers/block.py
    @classmethod\n@inject_client\nasync def from_template_and_values(\n    cls, base_job_template: dict, values: dict, client: \"PrefectClient\" = None\n):\n    \"\"\"Creates a valid worker configuration object from the provided base\n    configuration and overrides.\n\n    Important: this method expects that the base_job_template was already\n    validated server-side.\n    \"\"\"\n    job_config: Dict[str, Any] = base_job_template[\"job_configuration\"]\n    variables_schema = base_job_template[\"variables\"]\n    variables = cls._get_base_config_defaults(\n        variables_schema.get(\"properties\", {})\n    )\n    variables.update(values)\n\n    populated_configuration = apply_values(template=job_config, values=variables)\n\n    block_document_id = get_from_dict(\n        populated_configuration, \"block.$ref.block_document_id\"\n    )\n    if not block_document_id:\n        raise ValueError(\n            \"Base job template is invalid for this worker type because it does not\"\n            \" contain a block_document_id after variable resolution.\"\n        )\n\n    block_document = await client.read_block_document(\n        block_document_id=block_document_id\n    )\n    infrastructure_block = Block._from_block_document(block_document)\n\n    populated_configuration[\"block\"] = infrastructure_block\n\n    return cls(**populated_configuration)\n
    ","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/block/#prefect.workers.block.BlockWorkerJobConfiguration.json_template","title":"json_template classmethod","text":"

    Returns a dict with job configuration as keys and the corresponding templates as values

    Defaults to using the job configuration parameter name as the template variable name.

    e.g. { key1: '{{ key1 }}', # default variable template key2: '{{ template2 }}', # template2 specifically provided as the template }

    Source code in prefect/workers/block.py
    @classmethod\ndef json_template(cls) -> dict:\n    \"\"\"Returns a dict with job configuration as keys and the corresponding templates as values\n\n    Defaults to using the job configuration parameter name as the template variable name.\n\n    e.g.\n    {\n        key1: '{{ key1 }}',     # default variable template\n        key2: '{{ template2 }}', # `template2` specifically provide as template\n    }\n    \"\"\"\n    configuration = {}\n    properties = cls.schema()[\"properties\"]\n    for k, v in properties.items():\n        if v.get(\"template\"):\n            template = v[\"template\"]\n        else:\n            template = \"{{ \" + k + \" }}\"\n        configuration[k] = template\n\n    return configuration\n
    ","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/block/#prefect.workers.block.BlockWorkerResult","title":"BlockWorkerResult","text":"

    Bases: BaseWorkerResult

    Result of a block worker job

    Source code in prefect/workers/block.py
    class BlockWorkerResult(BaseWorkerResult):\n    \"\"\"Result of a block worker job\"\"\"\n
    ","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/process/","title":"process","text":"","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/process/#prefect.workers.process","title":"prefect.workers.process","text":"

    Module containing the Process worker used for executing flow runs as subprocesses.

    To start a Process worker, run the following command:

    prefect worker start --pool 'my-work-pool' --type process\n

    Replace my-work-pool with the name of the work pool you want the worker to poll for flow runs.

    For more information about work pools and workers, check out the Prefect docs.

    ","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/process/#prefect.workers.process.ProcessJobConfiguration","title":"ProcessJobConfiguration","text":"

    Bases: BaseJobConfiguration

    Source code in prefect/workers/process.py
    class ProcessJobConfiguration(BaseJobConfiguration):\n    stream_output: bool = Field(default=True)\n    working_dir: Optional[Path] = Field(default=None)\n\n    @validator(\"working_dir\")\n    def validate_command(cls, v):\n        return validate_command(v)\n\n    def prepare_for_flow_run(\n        self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"DeploymentResponse\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ):\n        super().prepare_for_flow_run(flow_run, deployment, flow)\n\n        self.env = {**os.environ, **self.env}\n        self.command = (\n            f\"{get_sys_executable()} -m prefect.engine\"\n            if self.command == self._base_flow_run_command()\n            else self.command\n        )\n\n    def _base_flow_run_command(self) -> str:\n        \"\"\"\n        Override the base flow run command because enhanced cancellation doesn't\n        work with the process worker.\n        \"\"\"\n        return \"python -m prefect.engine\"\n
    ","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/process/#prefect.workers.process.ProcessWorkerResult","title":"ProcessWorkerResult","text":"

    Bases: BaseWorkerResult

    Contains information about the final state of a completed process

    Source code in prefect/workers/process.py
    class ProcessWorkerResult(BaseWorkerResult):\n    \"\"\"Contains information about the final state of a completed process\"\"\"\n
    ","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/server/","title":"server","text":"","tags":["Python API","workers","server"]},{"location":"api-ref/prefect/workers/server/#prefect.workers.server","title":"prefect.workers.server","text":"","tags":["Python API","workers","server"]},{"location":"api-ref/prefect/workers/server/#prefect.workers.server.start_healthcheck_server","title":"start_healthcheck_server","text":"

    Run a healthcheck FastAPI server for a worker.

    Parameters:

    Name Type Description Default worker BaseWorker | ProcessWorker

    the worker whose health we will check

    required log_level str

    the log level to use for the server

    'error' Source code in prefect/workers/server.py
    def start_healthcheck_server(\n    worker: Union[BaseWorker, ProcessWorker],\n    query_interval_seconds: int,\n    log_level: str = \"error\",\n) -> None:\n    \"\"\"\n    Run a healthcheck FastAPI server for a worker.\n\n    Args:\n        worker (BaseWorker | ProcessWorker): the worker whose health we will check\n        log_level (str): the log level to use for the server\n    \"\"\"\n    webserver = FastAPI()\n    router = APIRouter()\n\n    def perform_health_check():\n        did_recently_poll = worker.is_worker_still_polling(\n            query_interval_seconds=query_interval_seconds\n        )\n\n        if not did_recently_poll:\n            return JSONResponse(\n                status_code=status.HTTP_503_SERVICE_UNAVAILABLE,\n                content={\"message\": \"Worker may be unresponsive at this time\"},\n            )\n        return JSONResponse(status_code=status.HTTP_200_OK, content={\"message\": \"OK\"})\n\n    router.add_api_route(\"/health\", perform_health_check, methods=[\"GET\"])\n\n    webserver.include_router(router)\n\n    uvicorn.run(\n        webserver,\n        host=PREFECT_WORKER_WEBSERVER_HOST.value(),\n        port=PREFECT_WORKER_WEBSERVER_PORT.value(),\n        log_level=log_level,\n    )\n
    ","tags":["Python API","workers","server"]},{"location":"api-ref/prefect/workers/utilities/","title":"utilities","text":"","tags":["Python API","workers","utilities"]},{"location":"api-ref/prefect/workers/utilities/#prefect.workers.utilities","title":"prefect.workers.utilities","text":"","tags":["Python API","workers","utilities"]},{"location":"api-ref/python/","title":"Python SDK","text":"

    The Prefect Python SDK is used to build, test, and execute workflows against the Prefect API.

    Explore the modules in the navigation bar to the left to learn more.

    ","tags":["API","Python SDK"]},{"location":"api-ref/rest-api/","title":"REST API","text":"

    The Prefect REST API is used for communicating data from clients to a Prefect server so that orchestration can be performed. This API is consumed by clients such as the Prefect Python SDK or the server dashboard.

    Prefect Cloud and a locally hosted Prefect server each provide a REST API.

    • Prefect Cloud:
      • Interactive Prefect Cloud REST API documentation
      • Finding your Prefect Cloud details
    • A Locally hosted open-source Prefect server:
      • Interactive REST API documentation for a locally hosted open-source Prefect server is available at http://localhost:4200/docs or the /docs endpoint of the PREFECT_API_URL you have configured to access the server. You must have the server running with prefect server start to access the interactive documentation.
      • Prefect REST API documentation
    ","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#interacting-with-the-rest-api","title":"Interacting with the REST API","text":"

    You have many options to interact with the Prefect REST API:

    • Create an instance of PrefectClient
    • Use your favorite Python HTTP library such as Requests or HTTPX
    • Use an HTTP library in your language of choice
    • Use curl from the command line
    ","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#prefectclient-with-a-prefect-server","title":"PrefectClient with a Prefect server","text":"

    This example uses PrefectClient with a locally hosted Prefect server:

    import asyncio\nfrom prefect.client import get_client\n\nasync def get_flows():\n    client = get_client()\n    r = await client.read_flows(limit=5)\n    return r\n\nif __name__ == \"__main__\":\n    flows = asyncio.run(get_flows())\n    for flow in flows:\n        print(flow.name, flow.id)\n

    Output:

    cat-facts 58ed68b1-0201-4f37-adef-0ea24bd2a022\ndog-facts e7c0403d-44e7-45cf-a6c8-79117b7f3766\nsloth-facts 771c0574-f5bf-4f59-a69d-3be3e061a62d\ncapybara-facts fbadaf8b-584f-48b9-b092-07d351edd424\nlemur-facts 53f710e7-3b0f-4b2f-ab6b-44934111818c\n
    ","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#requests-with-prefect","title":"Requests with Prefect","text":"

    This example uses the Requests library with Prefect Cloud to return the five newest artifacts.

    import requests\n\nPREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/abc-my-cloud-account-id-is-here/workspaces/123-my-workspace-id-is-here\"\nPREFECT_API_KEY=\"123abc_my_api_key_goes_here\"\ndata = {\n    \"sort\": \"CREATED_DESC\",\n    \"limit\": 5,\n    \"artifacts\": {\n        \"key\": {\n            \"exists_\": True\n        }\n    }\n}\n\nheaders = {\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"}\nendpoint = f\"{PREFECT_API_URL}/artifacts/filter\"\n\nresponse = requests.post(endpoint, headers=headers, json=data)\nassert response.status_code == 200\nfor artifact in response.json():\n    print(artifact)\n
    ","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#curl-with-prefect-cloud","title":"curl with Prefect Cloud","text":"

    This example uses curl with Prefect Cloud to create a flow run:

    ACCOUNT_ID=\"abc-my-cloud-account-id-goes-here\"\nWORKSPACE_ID=\"123-my-workspace-id-goes-here\"\nPREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/$ACCOUNT_ID/workspaces/$WORKSPACE_ID\"\nPREFECT_API_KEY=\"123abc_my_api_key_goes_here\"\nDEPLOYMENT_ID=\"my_deployment_id\"\n\ncurl --location --request POST \"$PREFECT_API_URL/deployments/$DEPLOYMENT_ID/create_flow_run\" \\\n  --header \"Content-Type: application/json\" \\\n  --header \"Authorization: Bearer $PREFECT_API_KEY\" \\\n  --header \"X-PREFECT-API-VERSION: 0.8.4\" \\\n  --data-raw \"{}\"\n

    Note that in this example --data-raw \"{}\" is required and is where you can specify other aspects of the flow run, such as the state. Windows users should substitute ^ for \\ in multi-line commands.
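    The same request can be made from Python with the Requests library. The sketch below reuses the placeholder account, workspace, deployment, and API key values from the example above; the state field in the body is one illustration of the optional flow run attributes mentioned here and is an assumption about what you might want to set, not a required field.

```python
import requests

# Placeholder values, as in the curl example above
PREFECT_API_URL = "https://api.prefect.cloud/api/accounts/abc-my-cloud-account-id-goes-here/workspaces/123-my-workspace-id-goes-here"
PREFECT_API_KEY = "123abc_my_api_key_goes_here"
DEPLOYMENT_ID = "my_deployment_id"

headers = {
    "Authorization": f"Bearer {PREFECT_API_KEY}",
    "X-PREFECT-API-VERSION": "0.8.4",
}

# An empty body ({}) creates a flow run with default settings; optional
# fields such as a starting state (shown here as an assumption) can be added.
body = {"state": {"type": "SCHEDULED"}}

response = requests.post(
    f"{PREFECT_API_URL}/deployments/{DEPLOYMENT_ID}/create_flow_run",
    headers=headers,
    json=body,
)
response.raise_for_status()

# The response body describes the created flow run
print(response.json()["id"])
```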

    ","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#finding-your-prefect-cloud-details","title":"Finding your Prefect Cloud details","text":"

    When working with the Prefect Cloud REST API you will need your Account ID and often the Workspace ID for the workspace you want to interact with. You can find both IDs for a Prefect profile in the CLI with prefect profile inspect my_profile. This command will also display your Prefect API key, as shown below:

    PREFECT_API_URL='https://api.prefect.cloud/api/accounts/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here'\nPREFECT_API_KEY='123abc_my_api_key_is_here'\n

    Alternatively, view your Account ID and Workspace ID in your browser URL. For example: https://app.prefect.cloud/account/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here.
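    If you prefer to assemble the API URL programmatically, a minimal sketch reusing the placeholder IDs shown above looks like this:

```python
# Placeholder IDs taken from the examples above
ACCOUNT_ID = "abc-my-account-id-is-here"
WORKSPACE_ID = "123-my-workspace-id-is-here"

# Prefect Cloud API URLs follow the pattern shown earlier on this page
PREFECT_API_URL = (
    "https://api.prefect.cloud/api"
    f"/accounts/{ACCOUNT_ID}/workspaces/{WORKSPACE_ID}"
)
print(PREFECT_API_URL)
```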

    ","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#rest-guidelines","title":"REST Guidelines","text":"

    The REST APIs adhere to the following guidelines:

    • Collection names are pluralized (for example, /flows or /runs).
    • We indicate variable placeholders with colons: GET /flows/:id.
    • We use snake case for route names: GET /task_runs.
    • We avoid nested resources unless there is no possibility of accessing the child resource outside the parent context. For example, we query /task_runs with a flow run filter instead of accessing /flow_runs/:id/task_runs.
    • The API is hosted with an /api/:version prefix that (optionally) allows versioning in the future. By convention, we treat that as part of the base URL and do not include that in API examples.
    • Filtering, sorting, and pagination parameters are provided in the request body of POST requests where applicable.
      • Pagination parameters are limit and offset.
      • Sorting is specified with a single sort parameter.
      • See more information on filtering below.
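    To illustrate these conventions, the sketch below sends a paginated, sorted query to a locally hosted server with the Requests library. The /flows/filter route and the CREATED_DESC sort value mirror examples elsewhere on this page; the limit and offset values are arbitrary.

```python
import requests

# A locally hosted server, as described earlier on this page
PREFECT_API_URL = "http://localhost:4200/api"

# Pagination via `limit`/`offset` and sorting via a single `sort` parameter,
# all supplied in the body of a POST request.
body = {
    "sort": "CREATED_DESC",  # assumed to be an accepted sort value for flows
    "limit": 10,
    "offset": 0,
}

response = requests.post(f"{PREFECT_API_URL}/flows/filter", json=body)
response.raise_for_status()
for flow in response.json():
    print(flow["name"], flow["id"])
```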
    ","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#http-verbs","title":"HTTP verbs","text":"
    • GET, PUT and DELETE requests are always idempotent. POST and PATCH are not guaranteed to be idempotent.
    • GET requests cannot receive information from the request body.
    • POST requests can receive information from the request body.
    • POST /collection creates a new member of the collection.
    • GET /collection lists all members of the collection.
    • GET /collection/:id gets a specific member of the collection by ID.
    • DELETE /collection/:id deletes a specific member of the collection.
    • PUT /collection/:id creates or replaces a specific member of the collection.
    • PATCH /collection/:id partially updates a specific member of the collection.
    • POST /collection/action is how we implement non-CRUD actions. For example, to set a flow run's state, we use POST /flow_runs/:id/set_state.
    • POST /collection/action may also be used for read-only queries. This is to allow us to send complex arguments as body arguments (which often cannot be done via GET). Examples include POST /flow_runs/filter, POST /flow_runs/count, and POST /flow_runs/history.
    ","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#filtering","title":"Filtering","text":"

    Objects can be filtered by providing filter criteria in the body of a POST request. When multiple criteria are specified, logical AND will be applied to the criteria.

    Filter criteria are structured as follows:

    {\n    \"objects\": {\n        \"object_field\": {\n            \"field_operator_\": <field_value>\n        }\n    }\n}\n

    In this example, objects is the name of the collection to filter over (for example, flows). The collection can be either the object being queried for (flows for POST /flows/filter) or a related object (flow_runs for POST /flows/filter).

    object_field is the name of the field over which to filter (name for flows). Note that some objects may have nested object fields, such as {flow_run: {state: {type: {any_: []}}}}.

    field_operator_ is the operator to apply to a field when filtering. Common examples include:

    • any_: return objects where this field matches any of the following values.
    • is_null_: return objects where this field is or is not null.
    • eq_: return objects where this field is equal to the following value.
    • all_: return objects where this field matches all of the following values.
    • before_: return objects where this datetime field is less than or equal to the following value.
    • after_: return objects where this datetime field is greater than or equal to the following value.

    For example, to query for flows with the tag \"database\" and failed flow runs, POST /flows/filter with the following request body:

    {\n    \"flows\": {\n        \"tags\": {\n            \"all_\": [\"database\"]\n        }\n    },\n    \"flow_runs\": {\n        \"state\": {\n            \"type\": {\n              \"any_\": [\"FAILED\"]\n            }\n        }\n    }\n}\n
    ","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#openapi","title":"OpenAPI","text":"

    The Prefect REST API can be fully described with an OpenAPI 3.0 compliant document. OpenAPI is a standard specification for describing REST APIs.

    To generate the Prefect server's complete OpenAPI document, run the following commands in an interactive Python session:

    from prefect.server.api.server import create_app\n\napp = create_app()\nopenapi_doc = app.openapi()\n

    This document allows you to generate your own API client, explore the API using an API inspection tool, or write tests to ensure API compliance.
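    For example, the generated document can be serialized to a file for use with an API client generator or inspection tool. This sketch simply extends the snippet above; the output filename is arbitrary.

```python
import json

from prefect.server.api.server import create_app

app = create_app()
openapi_doc = app.openapi()

# Write the OpenAPI document to a file for use with external tooling.
with open("prefect-openapi.json", "w") as f:
    json.dump(openapi_doc, f, indent=2)
```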

    ","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/server/","title":"Server API","text":"

    The Prefect server API is used by the server to work with workflow metadata and enforce orchestration logic. This API is primarily used by Prefect developers.

    Select links in the left navigation menu to explore.

    ","tags":["API","Server API"]},{"location":"api-ref/server/api/admin/","title":"server.api.admin","text":"","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin","title":"prefect.server.api.admin","text":"

    Routes for admin-level interactions with the Prefect REST API.

    ","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.clear_database","title":"clear_database async","text":"

    Clear all database tables without dropping them.

    Source code in prefect/server/api/admin.py
    @router.post(\"/database/clear\", status_code=status.HTTP_204_NO_CONTENT)\nasync def clear_database(\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    confirm: bool = Body(\n        False,\n        embed=True,\n        description=\"Pass confirm=True to confirm you want to modify the database.\",\n    ),\n    response: Response = None,\n):\n    \"\"\"Clear all database tables without dropping them.\"\"\"\n    if not confirm:\n        response.status_code = status.HTTP_400_BAD_REQUEST\n        return\n    async with db.session_context(begin_transaction=True) as session:\n        # work pool has a circular dependency on pool queue; delete it first\n        await session.execute(db.WorkPool.__table__.delete())\n        for table in reversed(db.Base.metadata.sorted_tables):\n            await session.execute(table.delete())\n
    ","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.create_database","title":"create_database async","text":"

    Create all database objects.

    Source code in prefect/server/api/admin.py
    @router.post(\"/database/create\", status_code=status.HTTP_204_NO_CONTENT)\nasync def create_database(\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    confirm: bool = Body(\n        False,\n        embed=True,\n        description=\"Pass confirm=True to confirm you want to modify the database.\",\n    ),\n    response: Response = None,\n):\n    \"\"\"Create all database objects.\"\"\"\n    if not confirm:\n        response.status_code = status.HTTP_400_BAD_REQUEST\n        return\n\n    await db.create_db()\n
    ","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.drop_database","title":"drop_database async","text":"

    Drop all database objects.

    Source code in prefect/server/api/admin.py
    @router.post(\"/database/drop\", status_code=status.HTTP_204_NO_CONTENT)\nasync def drop_database(\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    confirm: bool = Body(\n        False,\n        embed=True,\n        description=\"Pass confirm=True to confirm you want to modify the database.\",\n    ),\n    response: Response = None,\n):\n    \"\"\"Drop all database objects.\"\"\"\n    if not confirm:\n        response.status_code = status.HTTP_400_BAD_REQUEST\n        return\n\n    await db.drop_db()\n
    ","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.read_settings","title":"read_settings async","text":"

    Get the current Prefect REST API settings.

    Secret setting values will be obfuscated.

    Source code in prefect/server/api/admin.py
    @router.get(\"/settings\")\nasync def read_settings() -> prefect.settings.Settings:\n    \"\"\"\n    Get the current Prefect REST API settings.\n\n    Secret setting values will be obfuscated.\n    \"\"\"\n    return prefect.settings.get_current_settings().with_obfuscated_secrets()\n
    ","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.read_version","title":"read_version async","text":"

    Returns the Prefect version number

    Source code in prefect/server/api/admin.py
    @router.get(\"/version\")\nasync def read_version() -> str:\n    \"\"\"Returns the Prefect version number\"\"\"\n    return prefect.__version__\n
    ","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/csrf_token/","title":"server.api.csrf_token","text":"","tags":["Prefect API","csrf","security"]},{"location":"api-ref/server/api/csrf_token/#prefect.server.api.csrf_token","title":"prefect.server.api.csrf_token","text":"","tags":["Prefect API","csrf","security"]},{"location":"api-ref/server/api/csrf_token/#prefect.server.api.csrf_token.create_csrf_token","title":"create_csrf_token async","text":"

    Create or update a CSRF token for a client

    Source code in prefect/server/api/csrf_token.py
    @router.get(\"\")\nasync def create_csrf_token(\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    client: str = Query(..., description=\"The client to create a CSRF token for\"),\n) -> schemas.core.CsrfToken:\n    \"\"\"Create or update a CSRF token for a client\"\"\"\n    if PREFECT_SERVER_CSRF_PROTECTION_ENABLED.value() is False:\n        raise HTTPException(\n            status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n            detail=\"CSRF protection is disabled.\",\n        )\n\n    async with db.session_context(begin_transaction=True) as session:\n        token = await models.csrf_token.create_or_update_csrf_token(\n            session=session, client=client\n        )\n        await models.csrf_token.delete_expired_tokens(session=session)\n\n    return token\n
    ","tags":["Prefect API","csrf","security"]},{"location":"api-ref/server/api/dependencies/","title":"server.api.dependencies","text":"","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies","title":"prefect.server.api.dependencies","text":"

    Utilities for injecting FastAPI dependencies.

    ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies.EnforceMinimumAPIVersion","title":"EnforceMinimumAPIVersion","text":"

    FastAPI dependency used to check compatibility between the version of the API and a given request.

    Looks for the header 'X-PREFECT-API-VERSION' in the request and compares it to the API's version. Rejects requests whose version is lower than the minimum.

    Source code in prefect/server/api/dependencies.py
    class EnforceMinimumAPIVersion:\n    \"\"\"\n    FastAPI Dependency used to check compatibility between the version of the api\n    and a given request.\n\n    Looks for the header 'X-PREFECT-API-VERSION' in the request and compares it\n    to the api's version. Rejects requests that are lower than the minimum version.\n    \"\"\"\n\n    def __init__(self, minimum_api_version: str, logger: logging.Logger):\n        self.minimum_api_version = minimum_api_version\n        versions = [int(v) for v in minimum_api_version.split(\".\")]\n        self.api_major = versions[0]\n        self.api_minor = versions[1]\n        self.api_patch = versions[2]\n        self.logger = logger\n\n    async def __call__(\n        self,\n        x_prefect_api_version: str = Header(None),\n    ):\n        request_version = x_prefect_api_version\n\n        # if no version header, assume latest and continue\n        if not request_version:\n            return\n\n        # parse version\n        try:\n            major, minor, patch = [int(v) for v in request_version.split(\".\")]\n        except ValueError:\n            await self._notify_of_invalid_value(request_version)\n            raise HTTPException(\n                status_code=status.HTTP_400_BAD_REQUEST,\n                detail=(\n                    \"Invalid X-PREFECT-API-VERSION header format.\"\n                    f\"Expected header in format 'x.y.z' but received {request_version}\"\n                ),\n            )\n\n        if (major, minor, patch) < (self.api_major, self.api_minor, self.api_patch):\n            await self._notify_of_outdated_version(request_version)\n            raise HTTPException(\n                status_code=status.HTTP_400_BAD_REQUEST,\n                detail=(\n                    f\"The request specified API version {request_version} but this \"\n                    f\"server requires version {self.minimum_api_version} or higher.\"\n                ),\n            )\n\n    async def _notify_of_invalid_value(self, request_version: str):\n        self.logger.error(\n            f\"Invalid X-PREFECT-API-VERSION header format: '{request_version}'\"\n        )\n\n    async def _notify_of_outdated_version(self, request_version: str):\n        self.logger.error(\n            f\"X-PREFECT-API-VERSION header specifies version '{request_version}' \"\n            f\"but minimum allowed version is '{self.minimum_api_version}'\"\n        )\n
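    Example (illustrative sketch): a client request carrying the header this dependency inspects; the version string and address below are placeholders.

    import httpx

    # Requests with no header are treated as the latest version and pass through; a
    # header lower than the server's minimum is rejected with 400.
    resp = httpx.get(
        "http://127.0.0.1:4200/api/admin/version",   # assumed default local address
        headers={"X-PREFECT-API-VERSION": "0.8.4"},  # hypothetical version string
    )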
    ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies.LimitBody","title":"LimitBody","text":"

    A fastapi.Depends factory for pulling a limit: int parameter from the request body while determining the default from the current settings.

    Source code in prefect/server/api/dependencies.py
    def LimitBody() -> Depends:\n    \"\"\"\n    A `fastapi.Depends` factory for pulling a `limit: int` parameter from the\n    request body while determining the default from the current settings.\n    \"\"\"\n\n    def get_limit(\n        limit: int = Body(\n            None,\n            description=\"Defaults to PREFECT_API_DEFAULT_LIMIT if not provided.\",\n        ),\n    ):\n        default_limit = PREFECT_API_DEFAULT_LIMIT.value()\n        limit = limit if limit is not None else default_limit\n        if not limit >= 0:\n            raise HTTPException(\n                status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n                detail=\"Invalid limit: must be greater than or equal to 0.\",\n            )\n        if limit > default_limit:\n            raise HTTPException(\n                status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n                detail=f\"Invalid limit: must be less than or equal to {default_limit}.\",\n            )\n        return limit\n\n    return Depends(get_limit)\n
    ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies.get_created_by","title":"get_created_by","text":"

    A dependency that returns the provenance information to use when creating objects during this API call.

    Source code in prefect/server/api/dependencies.py
    def get_created_by(\n    prefect_automation_id: Optional[UUID] = Header(None, include_in_schema=False),\n    prefect_automation_name: Optional[str] = Header(None, include_in_schema=False),\n) -> Optional[schemas.core.CreatedBy]:\n    \"\"\"A dependency that returns the provenance information to use when creating objects\n    during this API call.\"\"\"\n    if prefect_automation_id and prefect_automation_name:\n        try:\n            display_value = b64decode(prefect_automation_name.encode()).decode()\n        except Exception:\n            display_value = None\n\n        if display_value:\n            return schemas.core.CreatedBy(\n                id=prefect_automation_id,\n                type=\"AUTOMATION\",\n                display_value=display_value,\n            )\n\n    return None\n
    ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies.get_updated_by","title":"get_updated_by","text":"

    A dependency that returns the provenance information to use when updating objects during this API call.

    Source code in prefect/server/api/dependencies.py
    def get_updated_by(\n    prefect_automation_id: Optional[UUID] = Header(None, include_in_schema=False),\n    prefect_automation_name: Optional[str] = Header(None, include_in_schema=False),\n) -> Optional[schemas.core.UpdatedBy]:\n    \"\"\"A dependency that returns the provenance information to use when updating objects\n    during this API call.\"\"\"\n    if prefect_automation_id and prefect_automation_name:\n        return schemas.core.UpdatedBy(\n            id=prefect_automation_id,\n            type=\"AUTOMATION\",\n            display_value=prefect_automation_name,\n        )\n\n    return None\n
    ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies.is_ephemeral_request","title":"is_ephemeral_request","text":"

    A dependency that returns whether the request is to an ephemeral server.

    Source code in prefect/server/api/dependencies.py
    def is_ephemeral_request(request: Request):\n    \"\"\"\n    A dependency that returns whether the request is to an ephemeral server.\n    \"\"\"\n    return \"ephemeral-prefect\" in str(request.base_url)\n
    ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/deployments/","title":"server.api.deployments","text":"","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments","title":"prefect.server.api.deployments","text":"

    Routes for interacting with Deployment objects.

    ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.count_deployments","title":"count_deployments async","text":"

    Count deployments.

    Source code in prefect/server/api/deployments.py
    @router.post(\"/count\")\nasync def count_deployments(\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_pool_queues: schemas.filters.WorkQueueFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> int:\n    \"\"\"\n    Count deployments.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.deployments.count_deployments(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_pool_queues,\n        )\n
    ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.create_deployment","title":"create_deployment async","text":"

    Gracefully creates a new deployment from the provided schema. If a deployment with the same name and flow_id already exists, the deployment is updated.

    If the deployment has an active schedule, flow runs will be scheduled. When upserting, any scheduled runs from the existing deployment will be deleted.

    Source code in prefect/server/api/deployments.py
    @router.post(\"/\")\nasync def create_deployment(\n    deployment: schemas.actions.DeploymentCreate,\n    response: Response,\n    worker_lookups: WorkerLookups = Depends(WorkerLookups),\n    created_by: Optional[schemas.core.CreatedBy] = Depends(dependencies.get_created_by),\n    updated_by: Optional[schemas.core.UpdatedBy] = Depends(dependencies.get_updated_by),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.DeploymentResponse:\n    \"\"\"\n    Gracefully creates a new deployment from the provided schema. If a deployment with\n    the same name and flow_id already exists, the deployment is updated.\n\n    If the deployment has an active schedule, flow runs will be scheduled.\n    When upserting, any scheduled runs from the existing deployment will be deleted.\n    \"\"\"\n\n    data = deployment.dict(exclude_unset=True)\n    data[\"created_by\"] = created_by.dict() if created_by else None\n    data[\"updated_by\"] = updated_by.dict() if created_by else None\n\n    async with db.session_context(begin_transaction=True) as session:\n        if (\n            deployment.work_pool_name\n            and deployment.work_pool_name != DEFAULT_AGENT_WORK_POOL_NAME\n        ):\n            # Make sure that deployment is valid before beginning creation process\n            work_pool = await models.workers.read_work_pool_by_name(\n                session=session, work_pool_name=deployment.work_pool_name\n            )\n            if work_pool is None:\n                raise HTTPException(\n                    status_code=status.HTTP_404_NOT_FOUND,\n                    detail=f'Work pool \"{deployment.work_pool_name}\" not found.',\n                )\n            try:\n                deployment.check_valid_configuration(work_pool.base_job_template)\n            except (MissingVariableError, jsonschema.exceptions.ValidationError) as exc:\n                raise HTTPException(\n                    status_code=status.HTTP_409_CONFLICT,\n                    detail=f\"Error creating deployment: {exc!r}\",\n                )\n\n        # hydrate the input model into a full model\n        deployment_dict = deployment.dict(exclude={\"work_pool_name\"})\n        if deployment.work_pool_name and deployment.work_queue_name:\n            # If a specific pool name/queue name combination was provided, get the\n            # ID for that work pool queue.\n            deployment_dict[\n                \"work_queue_id\"\n            ] = await worker_lookups._get_work_queue_id_from_name(\n                session=session,\n                work_pool_name=deployment.work_pool_name,\n                work_queue_name=deployment.work_queue_name,\n                create_queue_if_not_found=True,\n            )\n        elif deployment.work_pool_name:\n            # If just a pool name was provided, get the ID for its default\n            # work pool queue.\n            deployment_dict[\n                \"work_queue_id\"\n            ] = await worker_lookups._get_default_work_queue_id_from_work_pool_name(\n                session=session,\n                work_pool_name=deployment.work_pool_name,\n            )\n        elif deployment.work_queue_name:\n            # If just a queue name was provided, ensure that the queue exists and\n            # get its ID.\n            work_queue = await models.work_queues.ensure_work_queue_exists(\n                session=session, name=deployment.work_queue_name\n            )\n            deployment_dict[\"work_queue_id\"] = work_queue.id\n\n 
       deployment = schemas.core.Deployment(**deployment_dict)\n        # check to see if relevant blocks exist, allowing us throw a useful error message\n        # for debugging\n        if deployment.infrastructure_document_id is not None:\n            infrastructure_block = (\n                await models.block_documents.read_block_document_by_id(\n                    session=session,\n                    block_document_id=deployment.infrastructure_document_id,\n                )\n            )\n            if not infrastructure_block:\n                raise HTTPException(\n                    status_code=status.HTTP_409_CONFLICT,\n                    detail=(\n                        \"Error creating deployment. Could not find infrastructure\"\n                        f\" block with id: {deployment.infrastructure_document_id}. This\"\n                        \" usually occurs when applying a deployment specification that\"\n                        \" was built against a different Prefect database / workspace.\"\n                    ),\n                )\n\n        if deployment.storage_document_id is not None:\n            storage_block = await models.block_documents.read_block_document_by_id(\n                session=session,\n                block_document_id=deployment.storage_document_id,\n            )\n            if not storage_block:\n                raise HTTPException(\n                    status_code=status.HTTP_409_CONFLICT,\n                    detail=(\n                        \"Error creating deployment. Could not find storage block with\"\n                        f\" id: {deployment.storage_document_id}. This usually occurs\"\n                        \" when applying a deployment specification that was built\"\n                        \" against a different Prefect database / workspace.\"\n                    ),\n                )\n\n        # Ensure that `paused` and `is_schedule_active` are consistent.\n        if \"paused\" in data:\n            deployment.is_schedule_active = not data[\"paused\"]\n        elif \"is_schedule_active\" in data:\n            deployment.paused = not data[\"is_schedule_active\"]\n\n        now = pendulum.now(\"UTC\")\n        model = await models.deployments.create_deployment(\n            session=session, deployment=deployment\n        )\n\n        if model.created >= now:\n            response.status_code = status.HTTP_201_CREATED\n\n        return schemas.responses.DeploymentResponse.from_orm(model)\n
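    Example (illustrative sketch): creating (or upserting) a deployment over HTTP with httpx, assuming the deployments router is mounted under /deployments on a local server at the default address; the flow id and deployment name are placeholders, and the flow must already exist.

    import httpx

    API = "http://127.0.0.1:4200/api"  # assumed default local server address
    flow_id = "11111111-1111-1111-1111-111111111111"  # hypothetical existing flow id

    # Minimal body: name + flow_id. Re-posting the same name/flow_id updates the
    # existing deployment instead of creating a new one.
    resp = httpx.post(
        f"{API}/deployments/",
        json={"name": "my-deployment", "flow_id": flow_id},  # hypothetical name
    )
    deployment = resp.json()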
    ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.create_flow_run_from_deployment","title":"create_flow_run_from_deployment async","text":"

    Create a flow run from a deployment.

    Any parameters not provided will be inferred from the deployment's parameters. If tags are not provided, the deployment's tags will be used.

    If no state is provided, the flow run will be created in a SCHEDULED state.

    Source code in prefect/server/api/deployments.py
    @router.post(\"/{id}/create_flow_run\")\nasync def create_flow_run_from_deployment(\n    flow_run: schemas.actions.DeploymentFlowRunCreate,\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    created_by: Optional[schemas.core.CreatedBy] = Depends(dependencies.get_created_by),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    worker_lookups: WorkerLookups = Depends(WorkerLookups),\n    response: Response = None,\n) -> schemas.responses.FlowRunResponse:\n    \"\"\"\n    Create a flow run from a deployment.\n\n    Any parameters not provided will be inferred from the deployment's parameters.\n    If tags are not provided, the deployment's tags will be used.\n\n    If no state is provided, the flow run will be created in a SCHEDULED state.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        # get relevant info from the deployment\n        deployment = await models.deployments.read_deployment(\n            session=session, deployment_id=deployment_id\n        )\n\n        if not deployment:\n            raise HTTPException(\n                status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n            )\n\n        try:\n            dehydrated_params = deployment.parameters\n            dehydrated_params.update(flow_run.parameters or {})\n            ctx = await HydrationContext.build(session=session, raise_on_error=True)\n            parameters = hydrate(dehydrated_params, ctx)\n        except HydrationError as exc:\n            raise HTTPException(\n                status.HTTP_400_BAD_REQUEST,\n                detail=f\"Error hydrating flow run parameters: {exc}\",\n            )\n\n        if deployment.enforce_parameter_schema:\n            if not isinstance(deployment.parameter_openapi_schema, dict):\n                raise HTTPException(\n                    status.HTTP_409_CONFLICT,\n                    detail=(\n                        \"Error updating deployment: Cannot update parameters because\"\n                        \" parameter schema enforcement is enabled and the deployment\"\n                        \" does not have a valid parameter schema.\"\n                    ),\n                )\n            try:\n                validate(\n                    parameters, deployment.parameter_openapi_schema, raise_on_error=True\n                )\n            except ValidationError as exc:\n                raise HTTPException(\n                    status.HTTP_409_CONFLICT,\n                    detail=f\"Error creating flow run: {exc}\",\n                )\n            except CircularSchemaRefError:\n                raise HTTPException(\n                    status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n                    detail=\"Invalid schema: Unable to validate schema with circular references.\",\n                )\n\n        await validate_job_variables_for_flow_run(flow_run, deployment, session)\n\n        work_queue_name = deployment.work_queue_name\n        work_queue_id = deployment.work_queue_id\n\n        if flow_run.work_queue_name:\n            # can't mutate the ORM model or else it will commit the changes back\n            work_queue_id = await worker_lookups._get_work_queue_id_from_name(\n                session=session,\n                work_pool_name=deployment.work_queue.work_pool.name,\n                work_queue_name=flow_run.work_queue_name,\n                create_queue_if_not_found=True,\n            )\n            work_queue_name = 
flow_run.work_queue_name\n\n        # hydrate the input model into a full flow run / state model\n        flow_run = schemas.core.FlowRun(\n            **flow_run.dict(\n                exclude={\n                    \"parameters\",\n                    \"tags\",\n                    \"infrastructure_document_id\",\n                    \"work_queue_name\",\n                }\n            ),\n            flow_id=deployment.flow_id,\n            deployment_id=deployment.id,\n            deployment_version=deployment.version,\n            parameters=parameters,\n            tags=set(deployment.tags).union(flow_run.tags),\n            infrastructure_document_id=(\n                flow_run.infrastructure_document_id\n                or deployment.infrastructure_document_id\n            ),\n            work_queue_name=work_queue_name,\n            work_queue_id=work_queue_id,\n            created_by=created_by,\n        )\n\n        if not flow_run.state:\n            flow_run.state = schemas.states.Scheduled()\n\n        now = pendulum.now(\"UTC\")\n        model = await models.flow_runs.create_flow_run(\n            session=session, flow_run=flow_run\n        )\n        if model.created >= now:\n            response.status_code = status.HTTP_201_CREATED\n        return schemas.responses.FlowRunResponse.from_orm(model)\n
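    Example (illustrative sketch): creating a run from an existing deployment, assuming the same /deployments mount and default local address; the deployment id and flow parameter are placeholders.

    import httpx

    API = "http://127.0.0.1:4200/api"  # assumed default local server address
    deployment_id = "22222222-2222-2222-2222-222222222222"  # hypothetical deployment id

    # Parameters not supplied here fall back to the deployment's defaults; with no
    # state in the body the run is created in a SCHEDULED state.
    resp = httpx.post(
        f"{API}/deployments/{deployment_id}/create_flow_run",
        json={"parameters": {"name": "world"}},  # hypothetical flow parameter
    )
    flow_run = resp.json()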
    ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.delete_deployment","title":"delete_deployment async","text":"

    Delete a deployment by id.

    Source code in prefect/server/api/deployments.py
    @router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_deployment(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n    \"\"\"\n    Delete a deployment by id.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.deployments.delete_deployment(\n            session=session, deployment_id=deployment_id\n        )\n    if not result:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n        )\n
    ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.get_scheduled_flow_runs_for_deployments","title":"get_scheduled_flow_runs_for_deployments async","text":"

    Get scheduled runs for a set of deployments. Used by a runner to poll for work.

    Source code in prefect/server/api/deployments.py
    @router.post(\"/get_scheduled_flow_runs\")\nasync def get_scheduled_flow_runs_for_deployments(\n    background_tasks: BackgroundTasks,\n    deployment_ids: List[UUID] = Body(\n        default=..., description=\"The deployment IDs to get scheduled runs for\"\n    ),\n    scheduled_before: DateTimeTZ = Body(\n        None, description=\"The maximum time to look for scheduled flow runs\"\n    ),\n    limit: int = dependencies.LimitBody(),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.FlowRunResponse]:\n    \"\"\"\n    Get scheduled runs for a set of deployments. Used by a runner to poll for work.\n    \"\"\"\n    async with db.session_context() as session:\n        orm_flow_runs = await models.flow_runs.read_flow_runs(\n            session=session,\n            limit=limit,\n            deployment_filter=schemas.filters.DeploymentFilter(\n                id=schemas.filters.DeploymentFilterId(any_=deployment_ids),\n            ),\n            flow_run_filter=schemas.filters.FlowRunFilter(\n                next_scheduled_start_time=schemas.filters.FlowRunFilterNextScheduledStartTime(\n                    before_=scheduled_before\n                ),\n                state=schemas.filters.FlowRunFilterState(\n                    type=schemas.filters.FlowRunFilterStateType(\n                        any_=[schemas.states.StateType.SCHEDULED]\n                    )\n                ),\n            ),\n            sort=schemas.sorting.FlowRunSort.NEXT_SCHEDULED_START_TIME_ASC,\n        )\n\n        flow_run_responses = [\n            schemas.responses.FlowRunResponse.from_orm(orm_flow_run=orm_flow_run)\n            for orm_flow_run in orm_flow_runs\n        ]\n\n    background_tasks.add_task(\n        mark_deployments_ready,\n        deployment_ids=deployment_ids,\n    )\n\n    return flow_run_responses\n
    ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.pause_deployment","title":"pause_deployment async","text":"

    Set a deployment schedule to inactive. Any auto-scheduled runs still in a Scheduled state will be deleted.

    Source code in prefect/server/api/deployments.py
    @router.post(\"/{id:uuid}/set_schedule_inactive\")  # Legacy route\n@router.post(\"/{id:uuid}/pause_deployment\")\nasync def pause_deployment(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> None:\n    \"\"\"\n    Set a deployment schedule to inactive. Any auto-scheduled runs still in a Scheduled\n    state will be deleted.\n    \"\"\"\n    async with db.session_context(begin_transaction=False) as session:\n        deployment = await models.deployments.read_deployment(\n            session=session, deployment_id=deployment_id\n        )\n        if not deployment:\n            raise HTTPException(\n                status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n            )\n        deployment.is_schedule_active = False\n        deployment.paused = True\n\n        # Ensure that we're updating the replicated schedule's `active` field,\n        # if there is only a single schedule. This is support for legacy\n        # clients.\n\n        number_of_schedules = len(deployment.schedules)\n\n        if number_of_schedules == 1:\n            deployment.schedules[0].active = False\n        elif number_of_schedules > 1:\n            raise _multiple_schedules_error(deployment_id)\n\n        # commit here to make the inactive schedule \"visible\" to the scheduler service\n        await session.commit()\n\n        # delete any auto scheduled runs\n        await models.deployments._delete_scheduled_runs(\n            session=session,\n            deployment_id=deployment_id,\n            auto_scheduled_only=True,\n        )\n\n        await session.commit()\n
    ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.read_deployment","title":"read_deployment async","text":"

    Get a deployment by id.

    Source code in prefect/server/api/deployments.py
    @router.get(\"/{id}\")\nasync def read_deployment(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.DeploymentResponse:\n    \"\"\"\n    Get a deployment by id.\n    \"\"\"\n    async with db.session_context() as session:\n        deployment = await models.deployments.read_deployment(\n            session=session, deployment_id=deployment_id\n        )\n        if not deployment:\n            raise HTTPException(\n                status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n            )\n        return schemas.responses.DeploymentResponse.from_orm(deployment)\n
    ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.read_deployment_by_name","title":"read_deployment_by_name async","text":"

    Get a deployment using the name of the flow and the deployment.

    Source code in prefect/server/api/deployments.py
    @router.get(\"/name/{flow_name}/{deployment_name}\")\nasync def read_deployment_by_name(\n    flow_name: str = Path(..., description=\"The name of the flow\"),\n    deployment_name: str = Path(..., description=\"The name of the deployment\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.DeploymentResponse:\n    \"\"\"\n    Get a deployment using the name of the flow and the deployment.\n    \"\"\"\n    async with db.session_context() as session:\n        deployment = await models.deployments.read_deployment_by_name(\n            session=session, name=deployment_name, flow_name=flow_name\n        )\n        if not deployment:\n            raise HTTPException(\n                status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n            )\n        return schemas.responses.DeploymentResponse.from_orm(deployment)\n
    ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.read_deployments","title":"read_deployments async","text":"

    Query for deployments.

    Source code in prefect/server/api/deployments.py
    @router.post(\"/filter\")\nasync def read_deployments(\n    limit: int = dependencies.LimitBody(),\n    offset: int = Body(0, ge=0),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_pool_queues: schemas.filters.WorkQueueFilter = None,\n    sort: schemas.sorting.DeploymentSort = Body(\n        schemas.sorting.DeploymentSort.NAME_ASC\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.DeploymentResponse]:\n    \"\"\"\n    Query for deployments.\n    \"\"\"\n    async with db.session_context() as session:\n        response = await models.deployments.read_deployments(\n            session=session,\n            offset=offset,\n            sort=sort,\n            limit=limit,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_pool_queues,\n        )\n        return [\n            schemas.responses.DeploymentResponse.from_orm(orm_deployment=deployment)\n            for deployment in response\n        ]\n
    ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.resume_deployment","title":"resume_deployment async","text":"

    Set a deployment schedule to active. Runs will be scheduled immediately.

    Source code in prefect/server/api/deployments.py
    @router.post(\"/{id:uuid}/set_schedule_active\")  # Legacy route\n@router.post(\"/{id:uuid}/resume_deployment\")\nasync def resume_deployment(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> None:\n    \"\"\"\n    Set a deployment schedule to active. Runs will be scheduled immediately.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        deployment = await models.deployments.read_deployment(\n            session=session, deployment_id=deployment_id\n        )\n        if not deployment:\n            raise HTTPException(\n                status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n            )\n        deployment.is_schedule_active = True\n        deployment.paused = False\n\n        # Ensure that we're updating the replicated schedule's `active` field,\n        # if there is only a single schedule. This is support for legacy\n        # clients.\n\n        number_of_schedules = len(deployment.schedules)\n\n        if number_of_schedules == 1:\n            deployment.schedules[0].active = True\n        elif number_of_schedules > 1:\n            raise _multiple_schedules_error(deployment_id)\n
    ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.schedule_deployment","title":"schedule_deployment async","text":"

    Schedule runs for a deployment. For backfills, provide start/end times in the past.

    This function will generate the minimum number of runs that satisfy the min and max times, and the min and max counts. Specifically, the following order will be respected.

    - Runs will be generated starting on or after the `start_time`\n- No more than `max_runs` runs will be generated\n- No runs will be generated after `end_time` is reached\n- At least `min_runs` runs will be generated\n- Runs will be generated until at least `start_time + min_time` is reached\n
    Source code in prefect/server/api/deployments.py
    @router.post(\"/{id}/schedule\")\nasync def schedule_deployment(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    start_time: DateTimeTZ = Body(None, description=\"The earliest date to schedule\"),\n    end_time: DateTimeTZ = Body(None, description=\"The latest date to schedule\"),\n    min_time: datetime.timedelta = Body(\n        None,\n        description=(\n            \"Runs will be scheduled until at least this long after the `start_time`\"\n        ),\n    ),\n    min_runs: int = Body(None, description=\"The minimum number of runs to schedule\"),\n    max_runs: int = Body(None, description=\"The maximum number of runs to schedule\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> None:\n    \"\"\"\n    Schedule runs for a deployment. For backfills, provide start/end times in the past.\n\n    This function will generate the minimum number of runs that satisfy the min\n    and max times, and the min and max counts. Specifically, the following order\n    will be respected.\n\n        - Runs will be generated starting on or after the `start_time`\n        - No more than `max_runs` runs will be generated\n        - No runs will be generated after `end_time` is reached\n        - At least `min_runs` runs will be generated\n        - Runs will be generated until at least `start_time + min_time` is reached\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        await models.deployments.schedule_runs(\n            session=session,\n            deployment_id=deployment_id,\n            start_time=start_time,\n            min_time=min_time,\n            end_time=end_time,\n            min_runs=min_runs,\n            max_runs=max_runs,\n        )\n
    ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.work_queue_check_for_deployment","title":"work_queue_check_for_deployment async","text":"

    Get a list of work queues that are able to pick up the specified deployment.

    This endpoint is intended to be used by the UI to provide users warnings about deployments that are unable to be executed because there are no work queues that will pick up their runs, based on existing filter criteria. It may be deprecated in the future because there is not a strict relationship between work queues and deployments.

    Source code in prefect/server/api/deployments.py
    @router.get(\"/{id}/work_queue_check\", deprecated=True)\nasync def work_queue_check_for_deployment(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.WorkQueue]:\n    \"\"\"\n    Get list of work-queues that are able to pick up the specified deployment.\n\n    This endpoint is intended to be used by the UI to provide users warnings\n    about deployments that are unable to be executed because there are no work\n    queues that will pick up their runs, based on existing filter criteria. It\n    may be deprecated in the future because there is not a strict relationship\n    between work queues and deployments.\n    \"\"\"\n    try:\n        async with db.session_context() as session:\n            work_queues = await models.deployments.check_work_queues_for_deployment(\n                session=session, deployment_id=deployment_id\n            )\n    except ObjectNotFoundError:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n        )\n    return work_queues\n
    ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/flow_run_states/","title":"server.api.flow_run_states","text":"","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_run_states/#prefect.server.api.flow_run_states","title":"prefect.server.api.flow_run_states","text":"

    Routes for interacting with flow run state objects.

    ","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_run_states/#prefect.server.api.flow_run_states.read_flow_run_state","title":"read_flow_run_state async","text":"

    Get a flow run state by id.

    Source code in prefect/server/api/flow_run_states.py
    @router.get(\"/{id}\")\nasync def read_flow_run_state(\n    flow_run_state_id: UUID = Path(\n        ..., description=\"The flow run state id\", alias=\"id\"\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.states.State:\n    \"\"\"\n    Get a flow run state by id.\n    \"\"\"\n    async with db.session_context() as session:\n        flow_run_state = await models.flow_run_states.read_flow_run_state(\n            session=session, flow_run_state_id=flow_run_state_id\n        )\n    if not flow_run_state:\n        raise HTTPException(\n            status.HTTP_404_NOT_FOUND, detail=\"Flow run state not found\"\n        )\n    return flow_run_state\n
    ","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_run_states/#prefect.server.api.flow_run_states.read_flow_run_states","title":"read_flow_run_states async","text":"

    Get states associated with a flow run.

    Source code in prefect/server/api/flow_run_states.py
    @router.get(\"/\")\nasync def read_flow_run_states(\n    flow_run_id: UUID,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.states.State]:\n    \"\"\"\n    Get states associated with a flow run.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.flow_run_states.read_flow_run_states(\n            session=session, flow_run_id=flow_run_id\n        )\n
    ","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_runs/","title":"server.api.flow_runs","text":"","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs","title":"prefect.server.api.flow_runs","text":"

    Routes for interacting with flow run objects.

    ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.average_flow_run_lateness","title":"average_flow_run_lateness async","text":"

    Query for average flow-run lateness in seconds.

    Source code in prefect/server/api/flow_runs.py
    @router.post(\"/lateness\")\nasync def average_flow_run_lateness(\n    flows: Optional[schemas.filters.FlowFilter] = None,\n    flow_runs: Optional[schemas.filters.FlowRunFilter] = None,\n    task_runs: Optional[schemas.filters.TaskRunFilter] = None,\n    deployments: Optional[schemas.filters.DeploymentFilter] = None,\n    work_pools: Optional[schemas.filters.WorkPoolFilter] = None,\n    work_pool_queues: Optional[schemas.filters.WorkQueueFilter] = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> Optional[float]:\n    \"\"\"\n    Query for average flow-run lateness in seconds.\n    \"\"\"\n    async with db.session_context() as session:\n        if db.dialect.name == \"sqlite\":\n            # Since we want an _average_ of the lateness we're unable to use\n            # the existing FlowRun.expected_start_time_delta property as it\n            # returns a timedelta and SQLite is unable to properly deal with it\n            # and always returns 1970.0 as the average. This copies the same\n            # logic but ensures that it returns the number of seconds instead\n            # so it's compatible with SQLite.\n            base_query = sa.case(\n                (\n                    db.FlowRun.start_time > db.FlowRun.expected_start_time,\n                    sa.func.strftime(\"%s\", db.FlowRun.start_time)\n                    - sa.func.strftime(\"%s\", db.FlowRun.expected_start_time),\n                ),\n                (\n                    db.FlowRun.start_time.is_(None)\n                    & db.FlowRun.state_type.notin_(schemas.states.TERMINAL_STATES)\n                    & (db.FlowRun.expected_start_time < sa.func.datetime(\"now\")),\n                    sa.func.strftime(\"%s\", sa.func.datetime(\"now\"))\n                    - sa.func.strftime(\"%s\", db.FlowRun.expected_start_time),\n                ),\n                else_=0,\n            )\n        else:\n            base_query = db.FlowRun.estimated_start_time_delta\n\n        query = await models.flow_runs._apply_flow_run_filters(\n            sa.select(sa.func.avg(base_query)),\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_pool_queues,\n        )\n        result = await session.execute(query)\n\n        avg_lateness = result.scalar()\n\n        if avg_lateness is None:\n            return None\n        elif isinstance(avg_lateness, datetime.timedelta):\n            return avg_lateness.total_seconds()\n        else:\n            return avg_lateness\n
    ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.count_flow_runs","title":"count_flow_runs async","text":"

    Count flow runs matching the provided filters.

    Source code in prefect/server/api/flow_runs.py
    @router.post(\"/count\")\nasync def count_flow_runs(\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_pool_queues: schemas.filters.WorkQueueFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> int:\n    \"\"\"\n    Query for flow runs.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.flow_runs.count_flow_runs(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_pool_queues,\n        )\n
    ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.create_flow_run","title":"create_flow_run async","text":"

    Create a flow run. If a flow run with the same flow_id and idempotency key already exists, the existing flow run will be returned.

    If no state is provided, the flow run will be created in a PENDING state.

    Source code in prefect/server/api/flow_runs.py
    @router.post(\"/\")\nasync def create_flow_run(\n    flow_run: schemas.actions.FlowRunCreate,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    response: Response = None,\n    created_by: Optional[schemas.core.CreatedBy] = Depends(dependencies.get_created_by),\n    orchestration_parameters: Dict[str, Any] = Depends(\n        orchestration_dependencies.provide_flow_orchestration_parameters\n    ),\n    api_version=Depends(dependencies.provide_request_api_version),\n) -> schemas.responses.FlowRunResponse:\n    \"\"\"\n    Create a flow run. If a flow run with the same flow_id and\n    idempotency key already exists, the existing flow run will be returned.\n\n    If no state is provided, the flow run will be created in a PENDING state.\n    \"\"\"\n    # hydrate the input model into a full flow run / state model\n    flow_run = schemas.core.FlowRun(**flow_run.dict(), created_by=created_by)\n\n    # pass the request version to the orchestration engine to support compatibility code\n    orchestration_parameters.update({\"api-version\": api_version})\n\n    if not flow_run.state:\n        flow_run.state = schemas.states.Pending()\n\n    now = pendulum.now(\"UTC\")\n\n    async with db.session_context(begin_transaction=True) as session:\n        model = await models.flow_runs.create_flow_run(\n            session=session,\n            flow_run=flow_run,\n            orchestration_parameters=orchestration_parameters,\n        )\n        if model.created >= now:\n            response.status_code = status.HTTP_201_CREATED\n\n        return schemas.responses.FlowRunResponse.from_orm(model)\n
    ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.create_flow_run_input","title":"create_flow_run_input async","text":"

    Create a key/value input for a flow run.

    Source code in prefect/server/api/flow_runs.py
    @router.post(\"/{id}/input\", status_code=status.HTTP_201_CREATED)\nasync def create_flow_run_input(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    key: str = Body(..., description=\"The input key\"),\n    value: bytes = Body(..., description=\"The value of the input\"),\n    sender: Optional[str] = Body(None, description=\"The sender of the input\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n    \"\"\"\n    Create a key/value input for a flow run.\n    \"\"\"\n    async with db.session_context() as session:\n        try:\n            await models.flow_run_input.create_flow_run_input(\n                session=session,\n                flow_run_input=schemas.core.FlowRunInput(\n                    flow_run_id=flow_run_id,\n                    key=key,\n                    sender=sender,\n                    value=value.decode(),\n                ),\n            )\n            await session.commit()\n\n        except IntegrityError as exc:\n            if \"unique constraint\" in str(exc).lower():\n                raise HTTPException(\n                    status_code=status.HTTP_409_CONFLICT,\n                    detail=\"A flow run input with this key already exists.\",\n                )\n            else:\n                raise HTTPException(\n                    status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\"\n                )\n
    ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.delete_flow_run","title":"delete_flow_run async","text":"

    Delete a flow run by id.

    Source code in prefect/server/api/flow_runs.py
    @router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_flow_run(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n    \"\"\"\n    Delete a flow run by id.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.flow_runs.delete_flow_run(\n            session=session, flow_run_id=flow_run_id\n        )\n    if not result:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\"\n        )\n
    ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.delete_flow_run_input","title":"delete_flow_run_input async","text":"

    Delete a flow run input

    Source code in prefect/server/api/flow_runs.py
    @router.delete(\"/{id}/input/{key}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_flow_run_input(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    key: str = Path(..., description=\"The input key\", alias=\"key\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n    \"\"\"\n    Delete a flow run input\n    \"\"\"\n\n    async with db.session_context() as session:\n        deleted = await models.flow_run_input.delete_flow_run_input(\n            session=session, flow_run_id=flow_run_id, key=key\n        )\n        await session.commit()\n\n        if not deleted:\n            raise HTTPException(\n                status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run input not found\"\n            )\n
    ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.filter_flow_run_input","title":"filter_flow_run_input async","text":"

    Filter flow run inputs by key prefix

    Source code in prefect/server/api/flow_runs.py
    @router.post(\"/{id}/input/filter\")\nasync def filter_flow_run_input(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    prefix: str = Body(..., description=\"The input key prefix\", embed=True),\n    limit: int = Body(\n        1, description=\"The maximum number of results to return\", embed=True\n    ),\n    exclude_keys: List[str] = Body(\n        [], description=\"Exclude inputs with these keys\", embed=True\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.FlowRunInput]:\n    \"\"\"\n    Filter flow run inputs by key prefix\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.flow_run_input.filter_flow_run_input(\n            session=session,\n            flow_run_id=flow_run_id,\n            prefix=prefix,\n            limit=limit,\n            exclude_keys=exclude_keys,\n        )\n
    ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.flow_run_history","title":"flow_run_history async","text":"

    Query for flow run history data across a given range and interval.

    Source code in prefect/server/api/flow_runs.py
    @router.post(\"/history\")\nasync def flow_run_history(\n    history_start: DateTimeTZ = Body(..., description=\"The history's start time.\"),\n    history_end: DateTimeTZ = Body(..., description=\"The history's end time.\"),\n    history_interval: datetime.timedelta = Body(\n        ...,\n        description=(\n            \"The size of each history interval, in seconds. Must be at least 1 second.\"\n        ),\n        alias=\"history_interval_seconds\",\n    ),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_queues: schemas.filters.WorkQueueFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.HistoryResponse]:\n    \"\"\"\n    Query for flow run history data across a given range and interval.\n    \"\"\"\n    if history_interval < datetime.timedelta(seconds=1):\n        raise HTTPException(\n            status.HTTP_422_UNPROCESSABLE_ENTITY,\n            detail=\"History interval must not be less than 1 second.\",\n        )\n\n    async with db.session_context() as session:\n        return await run_history(\n            session=session,\n            run_type=\"flow_run\",\n            history_start=history_start,\n            history_end=history_end,\n            history_interval=history_interval,\n            flows=flows,\n            flow_runs=flow_runs,\n            task_runs=task_runs,\n            deployments=deployments,\n            work_pools=work_pools,\n            work_queues=work_queues,\n        )\n
    ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_run","title":"read_flow_run async","text":"

    Get a flow run by id.

    Source code in prefect/server/api/flow_runs.py
    @router.get(\"/{id}\")\nasync def read_flow_run(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.FlowRunResponse:\n    \"\"\"\n    Get a flow run by id.\n    \"\"\"\n    async with db.session_context() as session:\n        flow_run = await models.flow_runs.read_flow_run(\n            session=session, flow_run_id=flow_run_id\n        )\n        if not flow_run:\n            raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\")\n        return schemas.responses.FlowRunResponse.from_orm(flow_run)\n
    ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_run_graph_v1","title":"read_flow_run_graph_v1 async","text":"

    Get a task run dependency map for a given flow run.

    Source code in prefect/server/api/flow_runs.py
    @router.get(\"/{id}/graph\")\nasync def read_flow_run_graph_v1(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[DependencyResult]:\n    \"\"\"\n    Get a task run dependency map for a given flow run.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.flow_runs.read_task_run_dependencies(\n            session=session, flow_run_id=flow_run_id\n        )\n
    ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_run_graph_v2","title":"read_flow_run_graph_v2 async","text":"

    Get a graph of the tasks and subflow runs for the given flow run

    Source code in prefect/server/api/flow_runs.py
    @router.get(\"/{id:uuid}/graph-v2\")\nasync def read_flow_run_graph_v2(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    since: datetime.datetime = Query(\n        datetime.datetime.min,\n        description=\"Only include runs that start or end after this time.\",\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> Graph:\n    \"\"\"\n    Get a graph of the tasks and subflow runs for the given flow run\n    \"\"\"\n    async with db.session_context() as session:\n        try:\n            return await read_flow_run_graph(\n                session=session,\n                flow_run_id=flow_run_id,\n                since=since,\n            )\n        except FlowRunGraphTooLarge as e:\n            raise HTTPException(\n                status_code=status.HTTP_400_BAD_REQUEST,\n                detail=str(e),\n            )\n
    ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_run_input","title":"read_flow_run_input async","text":"

    Read the value of a flow run input by key.

    Source code in prefect/server/api/flow_runs.py
    @router.get(\"/{id}/input/{key}\")\nasync def read_flow_run_input(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    key: str = Path(..., description=\"The input key\", alias=\"key\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> PlainTextResponse:\n    \"\"\"\n    Create a value from a flow run input\n    \"\"\"\n\n    async with db.session_context() as session:\n        flow_run_input = await models.flow_run_input.read_flow_run_input(\n            session=session, flow_run_id=flow_run_id, key=key\n        )\n\n    if flow_run_input:\n        return PlainTextResponse(flow_run_input.value)\n    else:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run input not found\"\n        )\n
    ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_runs","title":"read_flow_runs async","text":"

    Query for flow runs.

    Source code in prefect/server/api/flow_runs.py
    @router.post(\"/filter\", response_class=ORJSONResponse)\nasync def read_flow_runs(\n    sort: schemas.sorting.FlowRunSort = Body(schemas.sorting.FlowRunSort.ID_DESC),\n    limit: int = dependencies.LimitBody(),\n    offset: int = Body(0, ge=0),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_pool_queues: schemas.filters.WorkQueueFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.FlowRunResponse]:\n    \"\"\"\n    Query for flow runs.\n    \"\"\"\n    async with db.session_context() as session:\n        db_flow_runs = await models.flow_runs.read_flow_runs(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_pool_queues,\n            offset=offset,\n            limit=limit,\n            sort=sort,\n        )\n\n        # Instead of relying on fastapi.encoders.jsonable_encoder to convert the\n        # response to JSON, we do so more efficiently ourselves.\n        # In particular, the FastAPI encoder is very slow for large, nested objects.\n        # See: https://github.com/tiangolo/fastapi/issues/1224\n        encoded = [\n            schemas.responses.FlowRunResponse.from_orm(fr).dict(json_compatible=True)\n            for fr in db_flow_runs\n        ]\n        return ORJSONResponse(content=encoded)\n
    ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.resume_flow_run","title":"resume_flow_run async","text":"

    Resume a paused flow run.

    Source code in prefect/server/api/flow_runs.py
    @router.post(\"/{id}/resume\")\nasync def resume_flow_run(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    run_input: Optional[Dict] = Body(default=None, embed=True),\n    response: Response = None,\n    flow_policy: BaseOrchestrationPolicy = Depends(\n        orchestration_dependencies.provide_flow_policy\n    ),\n    task_policy: BaseOrchestrationPolicy = Depends(\n        orchestration_dependencies.provide_task_policy\n    ),\n    orchestration_parameters: Dict[str, Any] = Depends(\n        orchestration_dependencies.provide_flow_orchestration_parameters\n    ),\n    api_version=Depends(dependencies.provide_request_api_version),\n) -> OrchestrationResult:\n    \"\"\"\n    Resume a paused flow run.\n    \"\"\"\n    now = pendulum.now(\"UTC\")\n\n    async with db.session_context(begin_transaction=True) as session:\n        flow_run = await models.flow_runs.read_flow_run(session, flow_run_id)\n        state = flow_run.state\n\n        if state is None or state.type != schemas.states.StateType.PAUSED:\n            result = OrchestrationResult(\n                state=None,\n                status=schemas.responses.SetStateStatus.ABORT,\n                details=schemas.responses.StateAbortDetails(\n                    reason=\"Cannot resume a flow run that is not paused.\"\n                ),\n            )\n            return result\n\n        orchestration_parameters.update({\"api-version\": api_version})\n\n        keyset = state.state_details.run_input_keyset\n\n        if keyset:\n            run_input = run_input or {}\n\n            try:\n                hydration_context = await schema_tools.HydrationContext.build(\n                    session=session, raise_on_error=True\n                )\n                run_input = schema_tools.hydrate(run_input, hydration_context) or {}\n            except schema_tools.HydrationError as exc:\n                return OrchestrationResult(\n                    state=state,\n                    status=schemas.responses.SetStateStatus.REJECT,\n                    details=schemas.responses.StateAbortDetails(\n                        reason=f\"Error hydrating run input: {exc}\",\n                    ),\n                )\n\n            schema_json = await models.flow_run_input.read_flow_run_input(\n                session=session, flow_run_id=flow_run.id, key=keyset[\"schema\"]\n            )\n\n            if schema_json is None:\n                return OrchestrationResult(\n                    state=state,\n                    status=schemas.responses.SetStateStatus.REJECT,\n                    details=schemas.responses.StateAbortDetails(\n                        reason=\"Run input schema not found.\"\n                    ),\n                )\n\n            try:\n                schema = orjson.loads(schema_json.value)\n            except orjson.JSONDecodeError:\n                return OrchestrationResult(\n                    state=state,\n                    status=schemas.responses.SetStateStatus.REJECT,\n                    details=schemas.responses.StateAbortDetails(\n                        reason=\"Run input schema is not valid JSON.\"\n                    ),\n                )\n\n            try:\n                schema_tools.validate(run_input, schema, raise_on_error=True)\n            except schema_tools.ValidationError as exc:\n                return OrchestrationResult(\n                    state=state,\n                    
status=schemas.responses.SetStateStatus.REJECT,\n                    details=schemas.responses.StateAbortDetails(\n                        reason=f\"Reason: {exc}\"\n                    ),\n                )\n            except schema_tools.CircularSchemaRefError:\n                return OrchestrationResult(\n                    state=state,\n                    status=schemas.responses.SetStateStatus.REJECT,\n                    details=schemas.responses.StateAbortDetails(\n                        reason=\"Invalid schema: Unable to validate schema with circular references.\",\n                    ),\n                )\n\n        if state.state_details.pause_reschedule:\n            orchestration_result = await models.flow_runs.set_flow_run_state(\n                session=session,\n                flow_run_id=flow_run_id,\n                state=schemas.states.Scheduled(\n                    name=\"Resuming\", scheduled_time=pendulum.now(\"UTC\")\n                ),\n                flow_policy=flow_policy,\n                orchestration_parameters=orchestration_parameters,\n            )\n        else:\n            orchestration_result = await models.flow_runs.set_flow_run_state(\n                session=session,\n                flow_run_id=flow_run_id,\n                state=schemas.states.Running(),\n                flow_policy=flow_policy,\n                orchestration_parameters=orchestration_parameters,\n            )\n\n        if (\n            keyset\n            and run_input\n            and orchestration_result.status == schemas.responses.SetStateStatus.ACCEPT\n        ):\n            # The state change is accepted, go ahead and store the validated\n            # run input.\n            await models.flow_run_input.create_flow_run_input(\n                session=session,\n                flow_run_input=schemas.core.FlowRunInput(\n                    flow_run_id=flow_run_id,\n                    key=keyset[\"response\"],\n                    value=orjson.dumps(run_input).decode(\"utf-8\"),\n                ),\n            )\n\n        # set the 201 if a new state was created\n        if orchestration_result.state and orchestration_result.state.timestamp >= now:\n            response.status_code = status.HTTP_201_CREATED\n        else:\n            response.status_code = status.HTTP_200_OK\n\n        return orchestration_result\n
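    A hedged sketch of resuming a paused run that asked for input (placeholder id; the run_input keys must match the schema the pause defined):

        import httpx

        flow_run_id = "00000000-0000-0000-0000-000000000000"  # placeholder paused run
        resp = httpx.post(
            f"http://127.0.0.1:4200/api/flow_runs/{flow_run_id}/resume",
            # run_input is only required when the pause defined a run-input schema
            json={"run_input": {"approved": True}},
        )
        result = resp.json()
        print(result["status"])  # ACCEPT, REJECT, or ABORT per the orchestration result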
    ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.set_flow_run_state","title":"set_flow_run_state async","text":"

    Set a flow run state, invoking any orchestration rules.

    Source code in prefect/server/api/flow_runs.py
    @router.post(\"/{id}/set_state\")\nasync def set_flow_run_state(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    state: schemas.actions.StateCreate = Body(..., description=\"The intended state.\"),\n    force: bool = Body(\n        False,\n        description=(\n            \"If false, orchestration rules will be applied that may alter or prevent\"\n            \" the state transition. If True, orchestration rules are not applied.\"\n        ),\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    response: Response = None,\n    flow_policy: BaseOrchestrationPolicy = Depends(\n        orchestration_dependencies.provide_flow_policy\n    ),\n    orchestration_parameters: Dict[str, Any] = Depends(\n        orchestration_dependencies.provide_flow_orchestration_parameters\n    ),\n    api_version=Depends(dependencies.provide_request_api_version),\n) -> OrchestrationResult:\n    \"\"\"Set a flow run state, invoking any orchestration rules.\"\"\"\n\n    # pass the request version to the orchestration engine to support compatibility code\n    orchestration_parameters.update({\"api-version\": api_version})\n\n    now = pendulum.now(\"UTC\")\n\n    # create the state\n    async with db.session_context(\n        begin_transaction=True, with_for_update=True\n    ) as session:\n        orchestration_result = await models.flow_runs.set_flow_run_state(\n            session=session,\n            flow_run_id=flow_run_id,\n            # convert to a full State object\n            state=schemas.states.State.parse_obj(state),\n            force=force,\n            flow_policy=flow_policy,\n            orchestration_parameters=orchestration_parameters,\n        )\n\n    # set the 201 if a new state was created\n    if orchestration_result.state and orchestration_result.state.timestamp >= now:\n        response.status_code = status.HTTP_201_CREATED\n    else:\n        response.status_code = status.HTTP_200_OK\n\n    return orchestration_result\n
    ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.update_flow_run","title":"update_flow_run async","text":"

    Updates a flow run.

    Source code in prefect/server/api/flow_runs.py
    @router.patch(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def update_flow_run(\n    flow_run: schemas.actions.FlowRunUpdate,\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n    \"\"\"\n    Updates a flow run.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        if flow_run.job_variables is not None:\n            this_run = await models.flow_runs.read_flow_run(\n                session, flow_run_id=flow_run_id\n            )\n            if this_run is None:\n                raise HTTPException(\n                    status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\"\n                )\n            if not this_run.state:\n                raise HTTPException(\n                    status.HTTP_400_BAD_REQUEST,\n                    detail=\"Flow run state is required to update job variables but none exists\",\n                )\n            if this_run.state.type != schemas.states.StateType.SCHEDULED:\n                raise HTTPException(\n                    status_code=status.HTTP_400_BAD_REQUEST,\n                    detail=f\"Job variables for a flow run in state {this_run.state.type.name} cannot be updated\",\n                )\n            if this_run.deployment_id is None:\n                raise HTTPException(\n                    status_code=status.HTTP_400_BAD_REQUEST,\n                    detail=\"A deployment for the flow run could not be found\",\n                )\n\n            deployment = await models.deployments.read_deployment(\n                session=session, deployment_id=this_run.deployment_id\n            )\n            if deployment is None:\n                raise HTTPException(\n                    status_code=status.HTTP_400_BAD_REQUEST,\n                    detail=\"A deployment for the flow run could not be found\",\n                )\n\n            await validate_job_variables_for_flow_run(flow_run, deployment, session)\n\n        result = await models.flow_runs.update_flow_run(\n            session=session, flow_run=flow_run, flow_run_id=flow_run_id\n        )\n    if not result:\n        raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\")\n
    ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flows/","title":"server.api.flows","text":"","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows","title":"prefect.server.api.flows","text":"

    Routes for interacting with flow objects.

    ","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.count_flows","title":"count_flows async","text":"

    Count flows.

    Source code in prefect/server/api/flows.py
    @router.post(\"/count\")\nasync def count_flows(\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> int:\n    \"\"\"\n    Count flows.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.flows.count_flows(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n        )\n
    ","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.create_flow","title":"create_flow async","text":"

    Gracefully creates a new flow from the provided schema. If a flow with the same name already exists, the existing flow is returned.

    Source code in prefect/server/api/flows.py
    @router.post(\"/\")\nasync def create_flow(\n    flow: schemas.actions.FlowCreate,\n    response: Response,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.Flow:\n    \"\"\"Gracefully creates a new flow from the provided schema. If a flow with the\n    same name already exists, the existing flow is returned.\n    \"\"\"\n    # hydrate the input model into a full flow model\n    flow = schemas.core.Flow(**flow.dict())\n\n    now = pendulum.now(\"UTC\")\n\n    async with db.session_context(begin_transaction=True) as session:\n        model = await models.flows.create_flow(session=session, flow=flow)\n\n    if model.created >= now:\n        response.status_code = status.HTTP_201_CREATED\n    return model\n
    ","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.delete_flow","title":"delete_flow async","text":"

    Delete a flow by id.

    Source code in prefect/server/api/flows.py
    @router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_flow(\n    flow_id: UUID = Path(..., description=\"The flow id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n    \"\"\"\n    Delete a flow by id.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.flows.delete_flow(session=session, flow_id=flow_id)\n    if not result:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n        )\n
    ","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.read_flow","title":"read_flow async","text":"

    Get a flow by id.

    Source code in prefect/server/api/flows.py
    @router.get(\"/{id}\")\nasync def read_flow(\n    flow_id: UUID = Path(..., description=\"The flow id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.Flow:\n    \"\"\"\n    Get a flow by id.\n    \"\"\"\n    async with db.session_context() as session:\n        flow = await models.flows.read_flow(session=session, flow_id=flow_id)\n    if not flow:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n        )\n    return flow\n
    ","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.read_flow_by_name","title":"read_flow_by_name async","text":"

    Get a flow by name.

    Source code in prefect/server/api/flows.py
    @router.get(\"/name/{name}\")\nasync def read_flow_by_name(\n    name: str = Path(..., description=\"The name of the flow\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.Flow:\n    \"\"\"\n    Get a flow by name.\n    \"\"\"\n    async with db.session_context() as session:\n        flow = await models.flows.read_flow_by_name(session=session, name=name)\n    if not flow:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n        )\n    return flow\n
    ","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.read_flows","title":"read_flows async","text":"

    Query for flows.

    Source code in prefect/server/api/flows.py
    @router.post(\"/filter\")\nasync def read_flows(\n    limit: int = dependencies.LimitBody(),\n    offset: int = Body(0, ge=0),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    sort: schemas.sorting.FlowSort = Body(schemas.sorting.FlowSort.NAME_ASC),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.Flow]:\n    \"\"\"\n    Query for flows.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.flows.read_flows(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            sort=sort,\n            offset=offset,\n            limit=limit,\n        )\n
    ","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.update_flow","title":"update_flow async","text":"

    Updates a flow.

    Source code in prefect/server/api/flows.py
    @router.patch(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def update_flow(\n    flow: schemas.actions.FlowUpdate,\n    flow_id: UUID = Path(..., description=\"The flow id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n    \"\"\"\n    Updates a flow.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.flows.update_flow(\n            session=session, flow=flow, flow_id=flow_id\n        )\n    if not result:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n        )\n
    ","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/middleware/","title":"server.api.middleware","text":"","tags":["Prefect API","middleware"]},{"location":"api-ref/server/api/middleware/#prefect.server.api.middleware","title":"prefect.server.api.middleware","text":"","tags":["Prefect API","middleware"]},{"location":"api-ref/server/api/middleware/#prefect.server.api.middleware.CsrfMiddleware","title":"CsrfMiddleware","text":"

    Bases: BaseHTTPMiddleware

    Middleware for CSRF protection. This middleware will check for a CSRF token in the headers of any POST, PUT, PATCH, or DELETE request. If the token is not present or does not match the token stored in the database for the client, the request will be rejected with a 403 status code.

    Source code in prefect/server/api/middleware.py
    class CsrfMiddleware(BaseHTTPMiddleware):\n    \"\"\"\n    Middleware for CSRF protection. This middleware will check for a CSRF token\n    in the headers of any POST, PUT, PATCH, or DELETE request. If the token is\n    not present or does not match the token stored in the database for the\n    client, the request will be rejected with a 403 status code.\n    \"\"\"\n\n    async def dispatch(\n        self, request: Request, call_next: NextMiddlewareFunction\n    ) -> Response:\n        \"\"\"\n        Dispatch method for the middleware. This method will check for the\n        presence of a CSRF token in the headers of the request and compare it\n        to the token stored in the database for the client. If the token is not\n        present or does not match, the request will be rejected with a 403\n        status code.\n        \"\"\"\n\n        request_needs_csrf_protection = request.method in {\n            \"POST\",\n            \"PUT\",\n            \"PATCH\",\n            \"DELETE\",\n        }\n\n        if (\n            settings.PREFECT_SERVER_CSRF_PROTECTION_ENABLED.value()\n            and request_needs_csrf_protection\n        ):\n            incoming_token = request.headers.get(\"Prefect-Csrf-Token\")\n            incoming_client = request.headers.get(\"Prefect-Csrf-Client\")\n\n            if incoming_token is None:\n                return JSONResponse(\n                    {\"detail\": \"Missing CSRF token.\"},\n                    status_code=status.HTTP_403_FORBIDDEN,\n                )\n\n            if incoming_client is None:\n                return JSONResponse(\n                    {\"detail\": \"Missing client identifier.\"},\n                    status_code=status.HTTP_403_FORBIDDEN,\n                )\n\n            db = provide_database_interface()\n            async with db.session_context() as session:\n                token = await models.csrf_token.read_token_for_client(\n                    session=session, client=incoming_client\n                )\n\n                if token is None or token.token != incoming_token:\n                    return JSONResponse(\n                        {\"detail\": \"Invalid CSRF token or client identifier.\"},\n                        status_code=status.HTTP_403_FORBIDDEN,\n                        headers={\"Access-Control-Allow-Origin\": \"*\"},\n                    )\n\n        return await call_next(request)\n
    ","tags":["Prefect API","middleware"]},{"location":"api-ref/server/api/middleware/#prefect.server.api.middleware.CsrfMiddleware.dispatch","title":"dispatch async","text":"

    Dispatch method for the middleware. This method will check for the presence of a CSRF token in the headers of the request and compare it to the token stored in the database for the client. If the token is not present or does not match, the request will be rejected with a 403 status code.

    Source code in prefect/server/api/middleware.py
    async def dispatch(\n    self, request: Request, call_next: NextMiddlewareFunction\n) -> Response:\n    \"\"\"\n    Dispatch method for the middleware. This method will check for the\n    presence of a CSRF token in the headers of the request and compare it\n    to the token stored in the database for the client. If the token is not\n    present or does not match, the request will be rejected with a 403\n    status code.\n    \"\"\"\n\n    request_needs_csrf_protection = request.method in {\n        \"POST\",\n        \"PUT\",\n        \"PATCH\",\n        \"DELETE\",\n    }\n\n    if (\n        settings.PREFECT_SERVER_CSRF_PROTECTION_ENABLED.value()\n        and request_needs_csrf_protection\n    ):\n        incoming_token = request.headers.get(\"Prefect-Csrf-Token\")\n        incoming_client = request.headers.get(\"Prefect-Csrf-Client\")\n\n        if incoming_token is None:\n            return JSONResponse(\n                {\"detail\": \"Missing CSRF token.\"},\n                status_code=status.HTTP_403_FORBIDDEN,\n            )\n\n        if incoming_client is None:\n            return JSONResponse(\n                {\"detail\": \"Missing client identifier.\"},\n                status_code=status.HTTP_403_FORBIDDEN,\n            )\n\n        db = provide_database_interface()\n        async with db.session_context() as session:\n            token = await models.csrf_token.read_token_for_client(\n                session=session, client=incoming_client\n            )\n\n            if token is None or token.token != incoming_token:\n                return JSONResponse(\n                    {\"detail\": \"Invalid CSRF token or client identifier.\"},\n                    status_code=status.HTTP_403_FORBIDDEN,\n                    headers={\"Access-Control-Allow-Origin\": \"*\"},\n                )\n\n    return await call_next(request)\n
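    When CSRF protection is enabled, clients therefore need to send both headers on every mutating request. A hedged sketch; the token and client values are placeholders that a real client would obtain from the server rather than hard-code:

        import httpx

        headers = {
            "Prefect-Csrf-Token": "<token issued by the server>",  # placeholder
            "Prefect-Csrf-Client": "<stable client identifier>",   # placeholder
        }
        # Without these headers the middleware rejects POST/PUT/PATCH/DELETE with a 403.
        resp = httpx.post(
            "http://127.0.0.1:4200/api/flows/filter",
            json={"limit": 1},
            headers=headers,
        )
        print(resp.status_code)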
    ","tags":["Prefect API","middleware"]},{"location":"api-ref/server/api/run_history/","title":"server.api.run_history","text":"","tags":["Prefect API","flow runs","task runs","observability"]},{"location":"api-ref/server/api/run_history/#prefect.server.api.run_history","title":"prefect.server.api.run_history","text":"

    Utilities for querying flow and task run history.

    ","tags":["Prefect API","flow runs","task runs","observability"]},{"location":"api-ref/server/api/run_history/#prefect.server.api.run_history.run_history","title":"run_history async","text":"

    Produce a history of runs aggregated by interval and state

    Source code in prefect/server/api/run_history.py
    @db_injector\nasync def run_history(\n    db: PrefectDBInterface,\n    session: sa.orm.Session,\n    run_type: Literal[\"flow_run\", \"task_run\"],\n    history_start: DateTimeTZ,\n    history_end: DateTimeTZ,\n    history_interval: datetime.timedelta,\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_queues: schemas.filters.WorkQueueFilter = None,\n) -> List[schemas.responses.HistoryResponse]:\n    \"\"\"\n    Produce a history of runs aggregated by interval and state\n    \"\"\"\n\n    # SQLite has issues with very small intervals\n    # (by 0.001 seconds it stops incrementing the interval)\n    if history_interval < datetime.timedelta(seconds=1):\n        raise ValueError(\"History interval must not be less than 1 second.\")\n\n    # prepare run-specific models\n    if run_type == \"flow_run\":\n        run_model = db.FlowRun\n        run_filter_function = models.flow_runs._apply_flow_run_filters\n    elif run_type == \"task_run\":\n        run_model = db.TaskRun\n        run_filter_function = models.task_runs._apply_task_run_filters\n    else:\n        raise ValueError(\n            f\"Unknown run type {run_type!r}. Expected 'flow_run' or 'task_run'.\"\n        )\n\n    # create a CTE for timestamp intervals\n    intervals = db.make_timestamp_intervals(\n        history_start,\n        history_end,\n        history_interval,\n    ).cte(\"intervals\")\n\n    # apply filters to the flow runs (and related states)\n    runs = (\n        await run_filter_function(\n            sa.select(\n                run_model.id,\n                run_model.expected_start_time,\n                run_model.estimated_run_time,\n                run_model.estimated_start_time_delta,\n                run_model.state_type,\n                run_model.state_name,\n            ).select_from(run_model),\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_queues,\n        )\n    ).alias(\"runs\")\n    # outer join intervals to the filtered runs to create a dataset composed of\n    # every interval and the aggregate of all its runs. 
The runs aggregate is represented\n    # by a descriptive JSON object\n    counts = (\n        sa.select(\n            intervals.c.interval_start,\n            intervals.c.interval_end,\n            # build a JSON object, ignoring the case where the count of runs is 0\n            sa.case(\n                (sa.func.count(runs.c.id) == 0, None),\n                else_=db.build_json_object(\n                    \"state_type\",\n                    runs.c.state_type,\n                    \"state_name\",\n                    runs.c.state_name,\n                    \"count_runs\",\n                    sa.func.count(runs.c.id),\n                    # estimated run times only includes positive run times (to avoid any unexpected corner cases)\n                    \"sum_estimated_run_time\",\n                    sa.func.sum(\n                        db.greatest(0, sa.extract(\"epoch\", runs.c.estimated_run_time))\n                    ),\n                    # estimated lateness is the sum of any positive start time deltas\n                    \"sum_estimated_lateness\",\n                    sa.func.sum(\n                        db.greatest(\n                            0, sa.extract(\"epoch\", runs.c.estimated_start_time_delta)\n                        )\n                    ),\n                ),\n            ).label(\"state_agg\"),\n        )\n        .select_from(intervals)\n        .join(\n            runs,\n            sa.and_(\n                runs.c.expected_start_time >= intervals.c.interval_start,\n                runs.c.expected_start_time < intervals.c.interval_end,\n            ),\n            isouter=True,\n        )\n        .group_by(\n            intervals.c.interval_start,\n            intervals.c.interval_end,\n            runs.c.state_type,\n            runs.c.state_name,\n        )\n    ).alias(\"counts\")\n\n    # aggregate all state-aggregate objects into a single array for each interval,\n    # ensuring that intervals with no runs have an empty array\n    query = (\n        sa.select(\n            counts.c.interval_start,\n            counts.c.interval_end,\n            sa.func.coalesce(\n                db.json_arr_agg(db.cast_to_json(counts.c.state_agg)).filter(\n                    counts.c.state_agg.is_not(None)\n                ),\n                sa.text(\"'[]'\"),\n            ).label(\"states\"),\n        )\n        .group_by(counts.c.interval_start, counts.c.interval_end)\n        .order_by(counts.c.interval_start)\n        # return no more than 500 bars\n        .limit(500)\n    )\n\n    # issue the query\n    result = await session.execute(query)\n    records = result.mappings()\n\n    # load and parse the record if the database returns JSON as strings\n    if db.uses_json_strings:\n        records = [dict(r) for r in records]\n        for r in records:\n            r[\"states\"] = json.loads(r[\"states\"])\n\n    return pydantic.parse_obj_as(List[schemas.responses.HistoryResponse], list(records))\n
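    A hedged sketch of requesting flow run history over the REST route that fronts this helper; the field names below are assumed from the public API and the time window is a placeholder:

        import httpx

        resp = httpx.post(
            "http://127.0.0.1:4200/api/flow_runs/history",
            json={
                "history_start": "2024-05-01T00:00:00+00:00",
                "history_end": "2024-05-02T00:00:00+00:00",
                "history_interval_seconds": 3600,  # intervals below 1 second are rejected
            },
        )
        for bucket in resp.json():
            print(bucket["interval_start"], bucket["states"])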
    ","tags":["Prefect API","flow runs","task runs","observability"]},{"location":"api-ref/server/api/saved_searches/","title":"server.api.saved_searches","text":"","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches","title":"prefect.server.api.saved_searches","text":"

    Routes for interacting with saved search objects.

    ","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.create_saved_search","title":"create_saved_search async","text":"

    Gracefully creates a new saved search from the provided schema.

    If a saved search with the same name already exists, the saved search's fields are replaced.

    Source code in prefect/server/api/saved_searches.py
    @router.put(\"/\")\nasync def create_saved_search(\n    saved_search: schemas.actions.SavedSearchCreate,\n    response: Response,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.SavedSearch:\n    \"\"\"Gracefully creates a new saved search from the provided schema.\n\n    If a saved search with the same name already exists, the saved search's fields are\n    replaced.\n    \"\"\"\n\n    # hydrate the input model into a full model\n    saved_search = schemas.core.SavedSearch(**saved_search.dict())\n\n    now = pendulum.now(\"UTC\")\n\n    async with db.session_context(begin_transaction=True) as session:\n        model = await models.saved_searches.create_saved_search(\n            session=session, saved_search=saved_search\n        )\n\n    if model.created >= now:\n        response.status_code = status.HTTP_201_CREATED\n\n    return model\n
    ","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.delete_saved_search","title":"delete_saved_search async","text":"

    Delete a saved search by id.

    Source code in prefect/server/api/saved_searches.py
    @router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_saved_search(\n    saved_search_id: UUID = Path(..., description=\"The saved search id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n    \"\"\"\n    Delete a saved search by id.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.saved_searches.delete_saved_search(\n            session=session, saved_search_id=saved_search_id\n        )\n    if not result:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Saved search not found\"\n        )\n
    ","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.read_saved_search","title":"read_saved_search async","text":"

    Get a saved search by id.

    Source code in prefect/server/api/saved_searches.py
    @router.get(\"/{id}\")\nasync def read_saved_search(\n    saved_search_id: UUID = Path(..., description=\"The saved search id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.SavedSearch:\n    \"\"\"\n    Get a saved search by id.\n    \"\"\"\n    async with db.session_context() as session:\n        saved_search = await models.saved_searches.read_saved_search(\n            session=session, saved_search_id=saved_search_id\n        )\n    if not saved_search:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Saved search not found\"\n        )\n    return saved_search\n
    ","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.read_saved_searches","title":"read_saved_searches async","text":"

    Query for saved searches.

    Source code in prefect/server/api/saved_searches.py
    @router.post(\"/filter\")\nasync def read_saved_searches(\n    limit: int = dependencies.LimitBody(),\n    offset: int = Body(0, ge=0),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.SavedSearch]:\n    \"\"\"\n    Query for saved searches.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.saved_searches.read_saved_searches(\n            session=session,\n            offset=offset,\n            limit=limit,\n        )\n
    ","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/server/","title":"server.api.server","text":"","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server","title":"prefect.server.api.server","text":"

    Defines the Prefect REST API FastAPI app.

    ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.RequestLimitMiddleware","title":"RequestLimitMiddleware","text":"

    A middleware that limits the number of concurrent requests handled by the API.

    This is a blunt tool for limiting concurrent SQLite writes, which cause failures at high volume. Ideally, the limit would apply only to routes that perform writes.

    Source code in prefect/server/api/server.py
    class RequestLimitMiddleware:\n    \"\"\"\n    A middleware that limits the number of concurrent requests handled by the API.\n\n    This is a blunt tool for limiting SQLite concurrent writes which will cause failures\n    at high volume. Ideally, we would only apply the limit to routes that perform\n    writes.\n    \"\"\"\n\n    def __init__(self, app, limit: float):\n        self.app = app\n        self._limiter = anyio.CapacityLimiter(limit)\n\n    async def __call__(self, scope, receive, send) -> None:\n        async with self._limiter:\n            await self.app(scope, receive, send)\n
    ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.SPAStaticFiles","title":"SPAStaticFiles","text":"

    Bases: StaticFiles

    Implementation of StaticFiles for serving single page applications.

    Adds get_response handling to ensure that when a resource isn't found, the application still returns the index.

    Source code in prefect/server/api/server.py
    class SPAStaticFiles(StaticFiles):\n    \"\"\"\n    Implementation of `StaticFiles` for serving single page applications.\n\n    Adds `get_response` handling to ensure that when a resource isn't found the\n    application still returns the index.\n    \"\"\"\n\n    async def get_response(self, path: str, scope):\n        try:\n            return await super().get_response(path, scope)\n        except HTTPException:\n            return await super().get_response(\"./index.html\", scope)\n
    ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.create_api_app","title":"create_api_app","text":"

    Create a FastAPI app that includes the Prefect REST API

    Parameters:

    router_prefix (Optional[str], default ''): a prefix to apply to all included routers

    dependencies (Optional[List[Depends]], default None): a list of global dependencies to add to each Prefect REST API router

    health_check_path (str, default '/health'): the health check route path

    fast_api_app_kwargs (Optional[Dict[str, Any]], default None): kwargs to pass to the FastAPI constructor

    router_overrides (Mapping[str, Optional[APIRouter]], default None): a mapping of route prefixes (i.e. \"/admin\") to new routers allowing the caller to override the default routers. If None is provided as a value, the default router will be dropped from the application.

    Returns:

    FastAPI: a FastAPI app that serves the Prefect REST API

    Source code in prefect/server/api/server.py
    def create_api_app(\n    router_prefix: Optional[str] = \"\",\n    dependencies: Optional[List[Depends]] = None,\n    health_check_path: str = \"/health\",\n    version_check_path: str = \"/version\",\n    fast_api_app_kwargs: Optional[Dict[str, Any]] = None,\n    router_overrides: Mapping[str, Optional[APIRouter]] = None,\n) -> FastAPI:\n    \"\"\"\n    Create a FastAPI app that includes the Prefect REST API\n\n    Args:\n        router_prefix: a prefix to apply to all included routers\n        dependencies: a list of global dependencies to add to each Prefect REST API router\n        health_check_path: the health check route path\n        fast_api_app_kwargs: kwargs to pass to the FastAPI constructor\n        router_overrides: a mapping of route prefixes (i.e. \"/admin\") to new routers\n            allowing the caller to override the default routers. If `None` is provided\n            as a value, the default router will be dropped from the application.\n\n    Returns:\n        a FastAPI app that serves the Prefect REST API\n    \"\"\"\n    fast_api_app_kwargs = fast_api_app_kwargs or {}\n    api_app = FastAPI(title=API_TITLE, **fast_api_app_kwargs)\n    api_app.add_middleware(GZipMiddleware)\n\n    @api_app.get(health_check_path, tags=[\"Root\"])\n    async def health_check():\n        return True\n\n    @api_app.get(version_check_path, tags=[\"Root\"])\n    async def orion_info():\n        return SERVER_API_VERSION\n\n    # always include version checking\n    if dependencies is None:\n        dependencies = [Depends(enforce_minimum_version)]\n    else:\n        dependencies.append(Depends(enforce_minimum_version))\n\n    routers = {router.prefix: router for router in API_ROUTERS}\n\n    if router_overrides:\n        for prefix, router in router_overrides.items():\n            # We may want to allow this behavior in the future to inject new routes, but\n            # for now this will be treated an as an exception\n            if prefix not in routers:\n                raise KeyError(\n                    \"Router override provided for prefix that does not exist:\"\n                    f\" {prefix!r}\"\n                )\n\n            # Drop the existing router\n            existing_router = routers.pop(prefix)\n\n            # Replace it with a new router if provided\n            if router is not None:\n                if prefix != router.prefix:\n                    # We may want to allow this behavior in the future, but it will\n                    # break expectations without additional routing and is banned for\n                    # now\n                    raise ValueError(\n                        f\"Router override for {prefix!r} defines a different prefix \"\n                        f\"{router.prefix!r}.\"\n                    )\n\n                existing_paths = method_paths_from_routes(existing_router.routes)\n                new_paths = method_paths_from_routes(router.routes)\n                if not existing_paths.issubset(new_paths):\n                    raise ValueError(\n                        f\"Router override for {prefix!r} is missing paths defined by \"\n                        f\"the original router: {existing_paths.difference(new_paths)}\"\n                    )\n\n                routers[prefix] = router\n\n    for router in routers.values():\n        api_app.include_router(router, prefix=router_prefix, dependencies=dependencies)\n\n    return api_app\n
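    A hedged embedding sketch: build just the REST API app and serve it with uvicorn (assumes the database has already been created and migrated, and that uvicorn is installed):

        import uvicorn
        from prefect.server.api.server import create_api_app

        # Bare REST API app with the default routers plus the health and version routes.
        api_app = create_api_app()

        if __name__ == "__main__":
            uvicorn.run(api_app, host="127.0.0.1", port=4201)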
    ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.create_app","title":"create_app","text":"

    Create a FastAPI app that includes the Prefect REST API and UI.

    Parameters:

    settings (Settings, default None): The settings to use to create the app. If not set, settings are pulled from the context.

    ignore_cache (bool, default False): If set, a new application will be created even if the settings match. Otherwise, an application is returned from the cache.

    ephemeral (bool, default False): If set, the application will be treated as ephemeral. The UI and services will be disabled.

    Source code in prefect/server/api/server.py
    def create_app(\n    settings: prefect.settings.Settings = None,\n    ephemeral: bool = False,\n    ignore_cache: bool = False,\n) -> FastAPI:\n    \"\"\"\n    Create an FastAPI app that includes the Prefect REST API and UI\n\n    Args:\n        settings: The settings to use to create the app. If not set, settings are pulled\n            from the context.\n        ignore_cache: If set, a new application will be created even if the settings\n            match. Otherwise, an application is returned from the cache.\n        ephemeral: If set, the application will be treated as ephemeral. The UI\n            and services will be disabled.\n    \"\"\"\n    settings = settings or prefect.settings.get_current_settings()\n    cache_key = (settings.hash_key(), ephemeral)\n\n    if cache_key in APP_CACHE and not ignore_cache:\n        return APP_CACHE[cache_key]\n\n    # TODO: Move these startup functions out of this closure into the top-level or\n    #       another dedicated location\n    async def run_migrations():\n        \"\"\"Ensure the database is created and up to date with the current migrations\"\"\"\n        if prefect.settings.PREFECT_API_DATABASE_MIGRATE_ON_START:\n            from prefect.server.database.dependencies import provide_database_interface\n\n            db = provide_database_interface()\n            await db.create_db()\n\n    @_memoize_block_auto_registration\n    async def add_block_types():\n        \"\"\"Add all registered blocks to the database\"\"\"\n        if not prefect.settings.PREFECT_API_BLOCKS_REGISTER_ON_START:\n            return\n\n        from prefect.server.database.dependencies import provide_database_interface\n        from prefect.server.models.block_registration import run_block_auto_registration\n\n        db = provide_database_interface()\n        session = await db.session()\n\n        async with session:\n            await run_block_auto_registration(session=session)\n\n    async def start_services():\n        \"\"\"Start additional services when the Prefect REST API starts up.\"\"\"\n\n        if ephemeral:\n            app.state.services = None\n            return\n\n        service_instances = []\n\n        if prefect.settings.PREFECT_API_SERVICES_SCHEDULER_ENABLED.value():\n            service_instances.append(services.scheduler.Scheduler())\n            service_instances.append(services.scheduler.RecentDeploymentsScheduler())\n\n        if prefect.settings.PREFECT_API_SERVICES_LATE_RUNS_ENABLED.value():\n            service_instances.append(services.late_runs.MarkLateRuns())\n\n        if prefect.settings.PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED.value():\n            service_instances.append(services.pause_expirations.FailExpiredPauses())\n\n        if prefect.settings.PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED.value():\n            service_instances.append(\n                services.cancellation_cleanup.CancellationCleanup()\n            )\n\n        if prefect.settings.PREFECT_SERVER_ANALYTICS_ENABLED.value():\n            service_instances.append(services.telemetry.Telemetry())\n\n        if prefect.settings.PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED.value():\n            service_instances.append(\n                services.flow_run_notifications.FlowRunNotifications()\n            )\n\n        if prefect.settings.PREFECT_API_SERVICES_FOREMAN_ENABLED.value():\n            service_instances.append(services.foreman.Foreman())\n\n        if prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value():\n        
    service_instances.append(services.task_scheduling.TaskSchedulingTimeouts())\n\n        if (\n            prefect.settings.PREFECT_EXPERIMENTAL_EVENTS.value()\n            and prefect.settings.PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED.value()\n        ):\n            service_instances.append(EventLogger())\n\n        if (\n            prefect.settings.PREFECT_EXPERIMENTAL_EVENTS.value()\n            and prefect.settings.PREFECT_API_SERVICES_TRIGGERS_ENABLED.value()\n        ):\n            service_instances.append(ReactiveTriggers())\n            service_instances.append(ProactiveTriggers())\n            service_instances.append(Actions())\n\n        if (\n            prefect.settings.PREFECT_EXPERIMENTAL_EVENTS\n            and prefect.settings.PREFECT_API_SERVICES_EVENT_PERSISTER_ENABLED\n        ):\n            service_instances.append(EventPersister())\n\n        if (\n            prefect.settings.PREFECT_EXPERIMENTAL_EVENTS\n            and prefect.settings.PREFECT_API_EVENTS_STREAM_OUT_ENABLED\n        ):\n            service_instances.append(stream.Distributor())\n\n        loop = asyncio.get_running_loop()\n\n        app.state.services = {\n            service: loop.create_task(service.start()) for service in service_instances\n        }\n\n        for service, task in app.state.services.items():\n            logger.info(f\"{service.name} service scheduled to start in-app\")\n            task.add_done_callback(partial(on_service_exit, service))\n\n    async def stop_services():\n        \"\"\"Ensure services are stopped before the Prefect REST API shuts down.\"\"\"\n        if hasattr(app.state, \"services\") and app.state.services:\n            await asyncio.gather(*[service.stop() for service in app.state.services])\n            try:\n                await asyncio.gather(\n                    *[task.stop() for task in app.state.services.values()]\n                )\n            except Exception:\n                # `on_service_exit` should handle logging exceptions on exit\n                pass\n\n    @asynccontextmanager\n    async def lifespan(app):\n        try:\n            await run_migrations()\n            await add_block_types()\n            await start_services()\n            yield\n        finally:\n            await stop_services()\n\n    def on_service_exit(service, task):\n        \"\"\"\n        Added as a callback for completion of services to log exit\n        \"\"\"\n        try:\n            # Retrieving the result will raise the exception\n            task.result()\n        except Exception:\n            logger.error(f\"{service.name} service failed!\", exc_info=True)\n        else:\n            logger.info(f\"{service.name} service stopped!\")\n\n    app = FastAPI(\n        title=TITLE,\n        version=API_VERSION,\n        lifespan=lifespan,\n    )\n    api_app = create_api_app(\n        fast_api_app_kwargs={\n            \"exception_handlers\": {\n                # NOTE: FastAPI special cases the generic `Exception` handler and\n                #       registers it as a separate middleware from the others\n                Exception: custom_internal_exception_handler,\n                RequestValidationError: validation_exception_handler,\n                sa.exc.IntegrityError: integrity_exception_handler,\n                ObjectNotFoundError: prefect_object_not_found_exception_handler,\n            }\n        },\n    )\n    ui_app = create_ui_app(ephemeral)\n\n    # middleware\n    app.add_middleware(\n        CORSMiddleware,\n        
allow_origins=[\"*\"],\n        allow_methods=[\"*\"],\n        allow_headers=[\"*\"],\n    )\n\n    # Limit the number of concurrent requests when using a SQLite database to reduce\n    # chance of errors where the database cannot be opened due to a high number of\n    # concurrent writes\n    if (\n        get_dialect(prefect.settings.PREFECT_API_DATABASE_CONNECTION_URL.value()).name\n        == \"sqlite\"\n    ):\n        app.add_middleware(RequestLimitMiddleware, limit=100)\n\n    if prefect.settings.PREFECT_SERVER_CSRF_PROTECTION_ENABLED.value():\n        app.add_middleware(api.middleware.CsrfMiddleware)\n\n    api_app.mount(\n        \"/static\",\n        StaticFiles(\n            directory=os.path.join(\n                os.path.dirname(os.path.realpath(__file__)), \"static\"\n            )\n        ),\n        name=\"static\",\n    )\n    app.api_app = api_app\n    app.mount(\"/api\", app=api_app, name=\"api\")\n    app.mount(\"/\", app=ui_app, name=\"ui\")\n\n    def openapi():\n        \"\"\"\n        Convenience method for extracting the user facing OpenAPI schema from the API app.\n\n        This method is attached to the global public app for easy access.\n        \"\"\"\n        partial_schema = get_openapi(\n            title=API_TITLE,\n            version=API_VERSION,\n            routes=api_app.routes,\n        )\n        new_schema = partial_schema.copy()\n        new_schema[\"paths\"] = {}\n        for path, value in partial_schema[\"paths\"].items():\n            new_schema[\"paths\"][f\"/api{path}\"] = value\n\n        new_schema[\"info\"][\"x-logo\"] = {\"url\": \"static/prefect-logo-mark-gradient.png\"}\n        return new_schema\n\n    app.openapi = openapi\n\n    APP_CACHE[cache_key] = app\n    return app\n
    ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.custom_internal_exception_handler","title":"custom_internal_exception_handler async","text":"

    Log a detailed exception for internal server errors before returning.

    Send a 503 for errors that clients can retry.

    Source code in prefect/server/api/server.py
    async def custom_internal_exception_handler(request: Request, exc: Exception):\n    \"\"\"\n    Log a detailed exception for internal server errors before returning.\n\n    Send 503 for errors clients can retry on.\n    \"\"\"\n    logger.error(\"Encountered exception in request:\", exc_info=True)\n\n    if is_client_retryable_exception(exc):\n        return JSONResponse(\n            content={\"exception_message\": \"Service Unavailable\"},\n            status_code=status.HTTP_503_SERVICE_UNAVAILABLE,\n        )\n\n    return JSONResponse(\n        content={\"exception_message\": \"Internal Server Error\"},\n        status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,\n    )\n
    ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.integrity_exception_handler","title":"integrity_exception_handler async","text":"

    Capture database integrity errors.

    Source code in prefect/server/api/server.py
    async def integrity_exception_handler(request: Request, exc: Exception):\n    \"\"\"Capture database integrity errors.\"\"\"\n    logger.error(\"Encountered exception in request:\", exc_info=True)\n    return JSONResponse(\n        content={\n            \"detail\": (\n                \"Data integrity conflict. This usually means a \"\n                \"unique or foreign key constraint was violated. \"\n                \"See server logs for details.\"\n            )\n        },\n        status_code=status.HTTP_409_CONFLICT,\n    )\n
    ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.prefect_object_not_found_exception_handler","title":"prefect_object_not_found_exception_handler async","text":"

    Return 404 status code on object not found exceptions.

    Source code in prefect/server/api/server.py
    async def prefect_object_not_found_exception_handler(\n    request: Request, exc: ObjectNotFoundError\n):\n    \"\"\"Return 404 status code on object not found exceptions.\"\"\"\n    return JSONResponse(\n        content={\"exception_message\": str(exc)}, status_code=status.HTTP_404_NOT_FOUND\n    )\n
    ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.replace_placeholder_string_in_files","title":"replace_placeholder_string_in_files","text":"

    Recursively loops through all files in the given directory and replaces a placeholder string.

    Source code in prefect/server/api/server.py
    def replace_placeholder_string_in_files(\n    directory, placeholder, replacement, allowed_extensions=None\n):\n    \"\"\"\n    Recursively loops through all files in the given directory and replaces\n    a placeholder string.\n    \"\"\"\n    if allowed_extensions is None:\n        allowed_extensions = [\".txt\", \".html\", \".css\", \".js\", \".json\", \".txt\"]\n\n    for root, dirs, files in os.walk(directory):\n        for file in files:\n            if any(file.endswith(ext) for ext in allowed_extensions):\n                file_path = os.path.join(root, file)\n\n                with open(file_path, \"r\", encoding=\"utf-8\") as file:\n                    file_data = file.read()\n\n                file_data = file_data.replace(placeholder, replacement)\n\n                with open(file_path, \"w\", encoding=\"utf-8\") as file:\n                    file.write(file_data)\n
    ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.validation_exception_handler","title":"validation_exception_handler async","text":"

    Provide a detailed message for request validation errors.

    Source code in prefect/server/api/server.py
    async def validation_exception_handler(request: Request, exc: RequestValidationError):\n    \"\"\"Provide a detailed message for request validation errors.\"\"\"\n    return JSONResponse(\n        status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n        content=jsonable_encoder(\n            {\n                \"exception_message\": \"Invalid request received.\",\n                \"exception_detail\": exc.errors(),\n                \"request_body\": exc.body,\n            }\n        ),\n    )\n
    ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/task_run_states/","title":"server.api.task_run_states","text":"","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_run_states/#prefect.server.api.task_run_states","title":"prefect.server.api.task_run_states","text":"

    Routes for interacting with task run state objects.

    ","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_run_states/#prefect.server.api.task_run_states.read_task_run_state","title":"read_task_run_state async","text":"

    Get a task run state by id.

    Source code in prefect/server/api/task_run_states.py
    @router.get(\"/{id}\")\nasync def read_task_run_state(\n    task_run_state_id: UUID = Path(\n        ..., description=\"The task run state id\", alias=\"id\"\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.states.State:\n    \"\"\"\n    Get a task run state by id.\n    \"\"\"\n    async with db.session_context() as session:\n        task_run_state = await models.task_run_states.read_task_run_state(\n            session=session, task_run_state_id=task_run_state_id\n        )\n    if not task_run_state:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run state not found\"\n        )\n    return task_run_state\n
    ","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_run_states/#prefect.server.api.task_run_states.read_task_run_states","title":"read_task_run_states async","text":"

    Get states associated with a task run.

    Source code in prefect/server/api/task_run_states.py
    @router.get(\"/\")\nasync def read_task_run_states(\n    task_run_id: UUID,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.states.State]:\n    \"\"\"\n    Get states associated with a task run.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.task_run_states.read_task_run_states(\n            session=session, task_run_id=task_run_id\n        )\n
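A hedged example of calling this route over HTTP with httpx; the /task_run_states path prefix and the local server address are assumptions:

import httpx

task_run_id = "00000000-0000-0000-0000-000000000000"  # placeholder UUID
with httpx.Client(base_url="http://127.0.0.1:4200/api") as client:
    # The route takes the task run id as a query parameter.
    states = client.get("/task_run_states/", params={"task_run_id": task_run_id}).json()
    for state in states:
        print(state["timestamp"], state["type"], state["name"])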
    ","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_runs/","title":"server.api.task_runs","text":"","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs","title":"prefect.server.api.task_runs","text":"

    Routes for interacting with task run objects.

    ","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.count_task_runs","title":"count_task_runs async","text":"

    Count task runs.

    Source code in prefect/server/api/task_runs.py
    @router.post(\"/count\")\nasync def count_task_runs(\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n) -> int:\n    \"\"\"\n    Count task runs.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.task_runs.count_task_runs(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n        )\n
    ","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.create_task_run","title":"create_task_run async","text":"

    Create a task run. If a task run with the same flow_run_id, task_key, and dynamic_key already exists, the existing task run will be returned.

    If no state is provided, the task run will be created in a PENDING state.

    Source code in prefect/server/api/task_runs.py
    @router.post(\"/\")\nasync def create_task_run(\n    task_run: schemas.actions.TaskRunCreate,\n    response: Response,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    orchestration_parameters: Dict[str, Any] = Depends(\n        orchestration_dependencies.provide_task_orchestration_parameters\n    ),\n) -> schemas.core.TaskRun:\n    \"\"\"\n    Create a task run. If a task run with the same flow_run_id,\n    task_key, and dynamic_key already exists, the existing task\n    run will be returned.\n\n    If no state is provided, the task run will be created in a PENDING state.\n    \"\"\"\n    # hydrate the input model into a full task run / state model\n    task_run = schemas.core.TaskRun(**task_run.dict())\n\n    if not task_run.state:\n        task_run.state = schemas.states.Pending()\n\n    now = pendulum.now(\"UTC\")\n\n    async with db.session_context(begin_transaction=True) as session:\n        model = await models.task_runs.create_task_run(\n            session=session,\n            task_run=task_run,\n            orchestration_parameters=orchestration_parameters,\n        )\n\n    if model.created >= now:\n        response.status_code = status.HTTP_201_CREATED\n\n    new_task_run: schemas.core.TaskRun = schemas.core.TaskRun.from_orm(model)\n\n    return new_task_run\n
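A hedged HTTP example of the create route; the payload is a guess at a minimal TaskRunCreate body (task_key, dynamic_key, flow_run_id), and the base URL assumes a local default server:

import httpx

with httpx.Client(base_url="http://127.0.0.1:4200/api") as client:
    response = client.post(
        "/task_runs/",
        json={
            "task_key": "my-task",       # identifies the task within its flow
            "dynamic_key": "0",          # distinguishes repeated calls of the same task
            "flow_run_id": "00000000-0000-0000-0000-000000000000",  # placeholder UUID
        },
    )
    response.raise_for_status()
    # With no state in the request, the created run should report a PENDING state.
    print(response.json()["state"]["type"])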
    ","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.delete_task_run","title":"delete_task_run async","text":"

    Delete a task run by id.

    Source code in prefect/server/api/task_runs.py
    @router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_task_run(\n    task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n    \"\"\"\n    Delete a task run by id.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.task_runs.delete_task_run(\n            session=session, task_run_id=task_run_id\n        )\n    if not result:\n        raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Task not found\")\n
    ","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.read_task_run","title":"read_task_run async","text":"

    Get a task run by id.

    Source code in prefect/server/api/task_runs.py
    @router.get(\"/{id}\")\nasync def read_task_run(\n    task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.TaskRun:\n    \"\"\"\n    Get a task run by id.\n    \"\"\"\n    async with db.session_context() as session:\n        task_run = await models.task_runs.read_task_run(\n            session=session, task_run_id=task_run_id\n        )\n    if not task_run:\n        raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Task not found\")\n    return task_run\n
    ","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.read_task_runs","title":"read_task_runs async","text":"

    Query for task runs.

    Source code in prefect/server/api/task_runs.py
    @router.post(\"/filter\")\nasync def read_task_runs(\n    sort: schemas.sorting.TaskRunSort = Body(schemas.sorting.TaskRunSort.ID_DESC),\n    limit: int = dependencies.LimitBody(),\n    offset: int = Body(0, ge=0),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.TaskRun]:\n    \"\"\"\n    Query for task runs.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.task_runs.read_task_runs(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            offset=offset,\n            limit=limit,\n            sort=sort,\n        )\n
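A hedged sketch of the filter route; the sort value comes from the signature above, while the flow run filter shape and server address are assumptions:

import httpx

with httpx.Client(base_url="http://127.0.0.1:4200/api") as client:
    response = client.post(
        "/task_runs/filter",
        json={
            "sort": "ID_DESC",
            "limit": 5,
            "offset": 0,
            # Restrict to task runs belonging to one flow run (placeholder UUID).
            "flow_runs": {"id": {"any_": ["00000000-0000-0000-0000-000000000000"]}},
        },
    )
    for task_run in response.json():
        print(task_run["id"], task_run["name"])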
    ","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.set_task_run_state","title":"set_task_run_state async","text":"

    Set a task run state, invoking any orchestration rules.

    Source code in prefect/server/api/task_runs.py
    @router.post(\"/{id}/set_state\")\nasync def set_task_run_state(\n    task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n    state: schemas.actions.StateCreate = Body(..., description=\"The intended state.\"),\n    force: bool = Body(\n        False,\n        description=(\n            \"If false, orchestration rules will be applied that may alter or prevent\"\n            \" the state transition. If True, orchestration rules are not applied.\"\n        ),\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    response: Response = None,\n    task_policy: BaseOrchestrationPolicy = Depends(\n        orchestration_dependencies.provide_task_policy\n    ),\n    orchestration_parameters: Dict[str, Any] = Depends(\n        orchestration_dependencies.provide_task_orchestration_parameters\n    ),\n) -> OrchestrationResult:\n    \"\"\"Set a task run state, invoking any orchestration rules.\"\"\"\n\n    now = pendulum.now(\"UTC\")\n\n    # create the state\n    async with db.session_context(\n        begin_transaction=True, with_for_update=True\n    ) as session:\n        orchestration_result = await models.task_runs.set_task_run_state(\n            session=session,\n            task_run_id=task_run_id,\n            state=schemas.states.State.parse_obj(\n                state\n            ),  # convert to a full State object\n            force=force,\n            task_policy=task_policy,\n            orchestration_parameters=orchestration_parameters,\n        )\n\n    # set the 201 if a new state was created\n    if orchestration_result.state and orchestration_result.state.timestamp >= now:\n        response.status_code = status.HTTP_201_CREATED\n    else:\n        response.status_code = status.HTTP_200_OK\n\n    return orchestration_result\n
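A hedged example of forcing a state transition through this route; the state payload shape is inferred from StateCreate and may omit optional fields:

import httpx

task_run_id = "00000000-0000-0000-0000-000000000000"  # placeholder UUID
with httpx.Client(base_url="http://127.0.0.1:4200/api") as client:
    response = client.post(
        f"/task_runs/{task_run_id}/set_state",
        json={"state": {"type": "COMPLETED"}, "force": True},
    )
    # The OrchestrationResult reports whether the transition was accepted.
    print(response.json()["status"])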
    ","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.task_run_history","title":"task_run_history async","text":"

    Query for task run history data across a given range and interval.

    Source code in prefect/server/api/task_runs.py
    @router.post(\"/history\")\nasync def task_run_history(\n    history_start: DateTimeTZ = Body(..., description=\"The history's start time.\"),\n    history_end: DateTimeTZ = Body(..., description=\"The history's end time.\"),\n    history_interval: datetime.timedelta = Body(\n        ...,\n        description=(\n            \"The size of each history interval, in seconds. Must be at least 1 second.\"\n        ),\n        alias=\"history_interval_seconds\",\n    ),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.HistoryResponse]:\n    \"\"\"\n    Query for task run history data across a given range and interval.\n    \"\"\"\n    if history_interval < datetime.timedelta(seconds=1):\n        raise HTTPException(\n            status.HTTP_422_UNPROCESSABLE_ENTITY,\n            detail=\"History interval must not be less than 1 second.\",\n        )\n\n    async with db.session_context() as session:\n        return await run_history(\n            session=session,\n            run_type=\"task_run\",\n            history_start=history_start,\n            history_end=history_end,\n            history_interval=history_interval,\n            flows=flows,\n            flow_runs=flow_runs,\n            task_runs=task_runs,\n            deployments=deployments,\n        )\n
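A hedged example requesting hourly buckets for the last day; the request field names follow the signature above, while the response fields printed here are assumptions about HistoryResponse:

import httpx
import pendulum

now = pendulum.now("UTC")
with httpx.Client(base_url="http://127.0.0.1:4200/api") as client:
    response = client.post(
        "/task_runs/history",
        json={
            "history_start": now.subtract(days=1).isoformat(),
            "history_end": now.isoformat(),
            "history_interval_seconds": 3600,  # must be at least 1 second
        },
    )
    for bucket in response.json():
        print(bucket["interval_start"], bucket["states"])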
    ","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.update_task_run","title":"update_task_run async","text":"

    Updates a task run.

    Source code in prefect/server/api/task_runs.py
    @router.patch(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def update_task_run(\n    task_run: schemas.actions.TaskRunUpdate,\n    task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n    \"\"\"\n    Updates a task run.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.task_runs.update_task_run(\n            session=session, task_run=task_run, task_run_id=task_run_id\n        )\n    if not result:\n        raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Task run not found\")\n
    ","tags":["Prefect API","task runs"]},{"location":"api-ref/server/models/csrf_token/","title":"server.models.csrf_token","text":""},{"location":"api-ref/server/models/csrf_token/#prefect.server.models.csrf_token","title":"prefect.server.models.csrf_token","text":""},{"location":"api-ref/server/models/csrf_token/#prefect.server.models.csrf_token.create_or_update_csrf_token","title":"create_or_update_csrf_token async","text":"

    Create or update a CSRF token for a client. If the client already has a token, it will be updated.

    Parameters:

    Name Type Description Default session AsyncSession

    The database session

    required client str

    The client identifier

    required

    Returns:

    Type Description CsrfToken

    core.CsrfToken: The CSRF token

    Source code in prefect/server/models/csrf_token.py
    @db_injector\nasync def create_or_update_csrf_token(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    client: str,\n) -> core.CsrfToken:\n    \"\"\"Create or update a CSRF token for a client. If the client already has a\n    token, it will be updated.\n\n    Args:\n        session (AsyncSession): The database session\n        client (str): The client identifier\n\n    Returns:\n        core.CsrfToken: The CSRF token\n    \"\"\"\n\n    expiration = (\n        datetime.now(timezone.utc)\n        + settings.PREFECT_SERVER_CSRF_TOKEN_EXPIRATION.value()\n    )\n    token = secrets.token_hex(32)\n\n    await session.execute(\n        db.insert(db.CsrfToken)\n        .values(\n            client=client,\n            token=token,\n            expiration=expiration,\n        )\n        .on_conflict_do_update(\n            index_elements=[db.CsrfToken.client],\n            set_={\"token\": token, \"expiration\": expiration},\n        ),\n    )\n\n    # Return the created / updated token object\n    return await read_token_for_client(session=session, client=client)\n
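A hedged sketch of calling this model function from server-side code; the provide_database_interface import path is an assumption, and the session-context pattern mirrors the route handlers documented above:

from prefect.server.database.dependencies import provide_database_interface  # assumed path
from prefect.server.models import csrf_token

async def issue_csrf_token(client_id: str) -> str:
    db = provide_database_interface()
    async with db.session_context(begin_transaction=True) as session:
        # Creates a token for the client, or refreshes an existing one.
        token = await csrf_token.create_or_update_csrf_token(
            session=session, client=client_id
        )
    return token.token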
    "},{"location":"api-ref/server/models/csrf_token/#prefect.server.models.csrf_token.delete_expired_tokens","title":"delete_expired_tokens async","text":"

    Delete expired CSRF tokens.

    Parameters:

    Name Type Description Default session AsyncSession

    The database session

    required

    Returns:

    Name Type Description int int

    The number of tokens deleted

    Source code in prefect/server/models/csrf_token.py
    @db_injector\nasync def delete_expired_tokens(db: PrefectDBInterface, session: AsyncSession) -> int:\n    \"\"\"Delete expired CSRF tokens.\n\n    Args:\n        session (AsyncSession): The database session\n\n    Returns:\n        int: The number of tokens deleted\n    \"\"\"\n\n    result = await session.execute(\n        sa.delete(db.CsrfToken).where(\n            db.CsrfToken.expiration < datetime.now(timezone.utc)\n        )\n    )\n    return result.rowcount\n
    "},{"location":"api-ref/server/models/csrf_token/#prefect.server.models.csrf_token.read_token_for_client","title":"read_token_for_client async","text":"

    Read a CSRF token for a client.

    Parameters:

    Name Type Description Default session AsyncSession

    The database session

    required client str

    The client identifier

    required

    Returns:

    Type Description Optional[CsrfToken]

    Optional[core.CsrfToken]: The CSRF token, if it exists and is not expired.

    Source code in prefect/server/models/csrf_token.py
    @db_injector\nasync def read_token_for_client(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    client: str,\n) -> Optional[core.CsrfToken]:\n    \"\"\"Read a CSRF token for a client.\n\n    Args:\n        session (AsyncSession): The database session\n        client (str): The client identifier\n\n    Returns:\n        Optional[core.CsrfToken]: The CSRF token, if it exists and is not\n            expired.\n    \"\"\"\n    token = (\n        await session.execute(\n            sa.select(db.CsrfToken).where(\n                sa.and_(\n                    db.CsrfToken.expiration > datetime.now(timezone.utc),\n                    db.CsrfToken.client == client,\n                )\n            )\n        )\n    ).scalar_one_or_none()\n\n    if token is None:\n        return None\n\n    return core.CsrfToken.from_orm(token)\n
    "},{"location":"api-ref/server/models/deployments/","title":"server.models.deployments","text":""},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments","title":"prefect.server.models.deployments","text":"

    Functions for interacting with deployment ORM objects. Intended for internal use by the Prefect REST API.

    "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.check_work_queues_for_deployment","title":"check_work_queues_for_deployment async","text":"

    Get work queues that can pick up the specified deployment.

    Work queues will pick up a deployment when all of the following are met.

    • The deployment has ALL tags that the work queue has (i.e. the work queue's tags must be a subset of the deployment's tags).
    • The work queue's specified deployment IDs match the deployment's ID, or the work queue does NOT have specified deployment IDs.
    • The work queue's specified flow runners match the deployment's flow runner or the work queue does NOT have a specified flow runner.

    Notes on the query:

    • Our database currently allows either \"null\" or empty lists as null values in filters, so we need to catch both cases with \"or\".
    • json_contains(A, B) should be interpreted as \"True if A contains B\".

    Returns:

    Type Description List[WorkQueue]

    List[db.WorkQueue]: WorkQueues

    Source code in prefect/server/models/deployments.py
    @db_injector\nasync def check_work_queues_for_deployment(\n    db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID\n) -> List[schemas.core.WorkQueue]:\n    \"\"\"\n    Get work queues that can pick up the specified deployment.\n\n    Work queues will pick up a deployment when all of the following are met.\n\n    - The deployment has ALL tags that the work queue has (i.e. the work\n    queue's tags must be a subset of the deployment's tags).\n    - The work queue's specified deployment IDs match the deployment's ID,\n    or the work queue does NOT have specified deployment IDs.\n    - The work queue's specified flow runners match the deployment's flow\n    runner or the work queue does NOT have a specified flow runner.\n\n    Notes on the query:\n\n    - Our database currently allows either \"null\" and empty lists as\n    null values in filters, so we need to catch both cases with \"or\".\n    - `json_contains(A, B)` should be interpreted as \"True if A\n    contains B\".\n\n    Returns:\n        List[db.WorkQueue]: WorkQueues\n    \"\"\"\n    deployment = await session.get(db.Deployment, deployment_id)\n    if not deployment:\n        raise ObjectNotFoundError(f\"Deployment with id {deployment_id} not found\")\n\n    query = (\n        select(db.WorkQueue)\n        # work queue tags are a subset of deployment tags\n        .filter(\n            or_(\n                json_contains(deployment.tags, db.WorkQueue.filter[\"tags\"]),\n                json_contains([], db.WorkQueue.filter[\"tags\"]),\n                json_contains(None, db.WorkQueue.filter[\"tags\"]),\n            )\n        )\n        # deployment_ids is null or contains the deployment's ID\n        .filter(\n            or_(\n                json_contains(\n                    db.WorkQueue.filter[\"deployment_ids\"],\n                    str(deployment.id),\n                ),\n                json_contains(None, db.WorkQueue.filter[\"deployment_ids\"]),\n                json_contains([], db.WorkQueue.filter[\"deployment_ids\"]),\n            )\n        )\n    )\n\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
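A hedged sketch of using this helper to list the names of matching work queues; the imports and session handling follow the same assumed pattern as the earlier examples:

from prefect.server.database.dependencies import provide_database_interface  # assumed path
from prefect.server.models.deployments import check_work_queues_for_deployment

async def matching_queue_names(deployment_id):
    db = provide_database_interface()
    async with db.session_context() as session:
        queues = await check_work_queues_for_deployment(
            session=session, deployment_id=deployment_id
        )
    return [queue.name for queue in queues]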
    "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.count_deployments","title":"count_deployments async","text":"

    Count deployments.

    Parameters:

    Name Type Description Default session AsyncSession

    A database session

    required flow_filter FlowFilter

    only count deployments whose flows match these criteria

    None flow_run_filter FlowRunFilter

    only count deployments whose flow runs match these criteria

    None task_run_filter TaskRunFilter

    only count deployments whose task runs match these criteria

    None deployment_filter DeploymentFilter

    only count deployment that match these filters

    None work_pool_filter WorkPoolFilter

    only count deployments that match these work pool filters

    None work_queue_filter WorkQueueFilter

    only count deployments that match these work pool queue filters

    None

    Returns:

    Name Type Description int int

    the number of deployments matching filters

    Source code in prefect/server/models/deployments.py
    @db_injector\nasync def count_deployments(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    work_pool_filter: schemas.filters.WorkPoolFilter = None,\n    work_queue_filter: schemas.filters.WorkQueueFilter = None,\n) -> int:\n    \"\"\"\n    Count deployments.\n\n    Args:\n        session: A database session\n        flow_filter: only count deployments whose flows match these criteria\n        flow_run_filter: only count deployments whose flow runs match these criteria\n        task_run_filter: only count deployments whose task runs match these criteria\n        deployment_filter: only count deployment that match these filters\n        work_pool_filter: only count deployments that match these work pool filters\n        work_queue_filter: only count deployments that match these work pool queue filters\n\n    Returns:\n        int: the number of deployments matching filters\n    \"\"\"\n\n    query = select(sa.func.count(sa.text(\"*\"))).select_from(db.Deployment)\n\n    query = await _apply_deployment_filters(\n        query=query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n        work_queue_filter=work_queue_filter,\n    )\n\n    result = await session.execute(query)\n    return result.scalar()\n
    "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.create_deployment","title":"create_deployment async","text":"

    Upserts a deployment.

    Parameters:

    Name Type Description Default session AsyncSession

    a database session

    required deployment Deployment

    a deployment model

    required

    Returns:

    Type Description Optional[ORMDeployment]

    db.Deployment: the newly-created or updated deployment

    Source code in prefect/server/models/deployments.py
    @db_injector\nasync def create_deployment(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    deployment: schemas.core.Deployment,\n) -> Optional[\"ORMDeployment\"]:\n    \"\"\"Upserts a deployment.\n\n    Args:\n        session: a database session\n        deployment: a deployment model\n\n    Returns:\n        db.Deployment: the newly-created or updated deployment\n\n    \"\"\"\n\n    # set `updated` manually\n    # known limitation of `on_conflict_do_update`, will not use `Column.onupdate`\n    # https://docs.sqlalchemy.org/en/14/dialects/sqlite.html#the-set-clause\n    deployment.updated = pendulum.now(\"UTC\")\n\n    schedules = deployment.schedules\n    insert_values = deployment.dict(\n        shallow=True, exclude_unset=True, exclude={\"schedules\"}\n    )\n\n    # The job_variables field in client and server schemas is named\n    # infra_overrides in the database.\n    job_variables = insert_values.pop(\"job_variables\", None)\n    if job_variables:\n        insert_values[\"infra_overrides\"] = job_variables\n\n    conflict_update_fields = deployment.dict(\n        shallow=True,\n        exclude_unset=True,\n        exclude={\"id\", \"created\", \"created_by\", \"schedules\", \"job_variables\"},\n    )\n    if job_variables:\n        conflict_update_fields[\"infra_overrides\"] = job_variables\n\n    insert_stmt = (\n        db.insert(db.Deployment)\n        .values(**insert_values)\n        .on_conflict_do_update(\n            index_elements=db.deployment_unique_upsert_columns,\n            set_={**conflict_update_fields},\n        )\n    )\n\n    await session.execute(insert_stmt)\n\n    # Get the id of the deployment we just created or updated\n    result = await session.execute(\n        sa.select(db.Deployment.id).where(\n            sa.and_(\n                db.Deployment.flow_id == deployment.flow_id,\n                db.Deployment.name == deployment.name,\n            )\n        )\n    )\n    deployment_id = result.scalar_one_or_none()\n\n    if not deployment_id:\n        return None\n\n    # Because this was possibly an upsert, we need to delete any existing\n    # schedules and any runs from the old deployment.\n\n    await _delete_scheduled_runs(\n        session=session, deployment_id=deployment_id, auto_scheduled_only=True\n    )\n\n    await delete_schedules_for_deployment(session=session, deployment_id=deployment_id)\n\n    if schedules:\n        await create_deployment_schedules(\n            session=session,\n            deployment_id=deployment_id,\n            schedules=[\n                schemas.actions.DeploymentScheduleCreate(\n                    schedule=schedule.schedule,\n                    active=schedule.active,  # type: ignore[call-arg]\n                )\n                for schedule in schedules\n            ],\n        )\n\n    query = (\n        sa.select(db.Deployment)\n        .where(\n            sa.and_(\n                db.Deployment.flow_id == deployment.flow_id,\n                db.Deployment.name == deployment.name,\n            )\n        )\n        .execution_options(populate_existing=True)\n    )\n    result = await session.execute(query)\n    return result.scalar()\n
    "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.create_deployment_schedules","title":"create_deployment_schedules async","text":"

    Creates a deployment's schedules.

    Parameters:

    Name Type Description Default session AsyncSession

    A database session

    required deployment_id UUID

    a deployment id

    required schedules List[DeploymentScheduleCreate]

    a list of deployment schedule create actions

    required Source code in prefect/server/models/deployments.py
    @db_injector\nasync def create_deployment_schedules(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    deployment_id: UUID,\n    schedules: List[schemas.actions.DeploymentScheduleCreate],\n) -> List[schemas.core.DeploymentSchedule]:\n    \"\"\"\n    Creates a deployment's schedules.\n\n    Args:\n        session: A database session\n        deployment_id: a deployment id\n        schedules: a list of deployment schedule create actions\n    \"\"\"\n\n    schedules_with_deployment_id = []\n    for schedule in schedules:\n        data = schedule.dict()\n        data[\"deployment_id\"] = deployment_id\n        schedules_with_deployment_id.append(data)\n\n    models = [\n        db.DeploymentSchedule(**schedule) for schedule in schedules_with_deployment_id\n    ]\n    session.add_all(models)\n    await session.flush()\n\n    return [schemas.core.DeploymentSchedule.from_orm(m) for m in models]\n
    "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.delete_deployment","title":"delete_deployment async","text":"

    Delete a deployment by id.

    Parameters:

    Name Type Description Default session AsyncSession

    A database session

    required deployment_id UUID

    a deployment id

    required

    Returns:

    Name Type Description bool bool

    whether or not the deployment was deleted

    Source code in prefect/server/models/deployments.py
    @db_injector\nasync def delete_deployment(\n    db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID\n) -> bool:\n    \"\"\"\n    Delete a deployment by id.\n\n    Args:\n        session: A database session\n        deployment_id: a deployment id\n\n    Returns:\n        bool: whether or not the deployment was deleted\n    \"\"\"\n\n    # delete scheduled runs, both auto- and user- created.\n    await _delete_scheduled_runs(\n        session=session, deployment_id=deployment_id, auto_scheduled_only=False\n    )\n\n    result = await session.execute(\n        delete(db.Deployment).where(db.Deployment.id == deployment_id)\n    )\n    return result.rowcount > 0\n
    "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.delete_deployment_schedule","title":"delete_deployment_schedule async","text":"

    Deletes a deployment schedule.

    Parameters:

    Name Type Description Default session AsyncSession

    A database session

    required deployment_schedule_id UUID

    a deployment schedule id

    required Source code in prefect/server/models/deployments.py
    @db_injector\nasync def delete_deployment_schedule(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    deployment_id: UUID,\n    deployment_schedule_id: UUID,\n) -> bool:\n    \"\"\"\n    Deletes a deployment schedule.\n\n    Args:\n        session: A database session\n        deployment_schedule_id: a deployment schedule id\n    \"\"\"\n\n    result = await session.execute(\n        sa.delete(db.DeploymentSchedule).where(\n            sa.and_(\n                db.DeploymentSchedule.id == deployment_schedule_id,\n                db.DeploymentSchedule.deployment_id == deployment_id,\n            )\n        )\n    )\n\n    return result.rowcount > 0\n
    "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.delete_schedules_for_deployment","title":"delete_schedules_for_deployment async","text":"

    Deletes all schedules for a deployment.

    Parameters:

    Name Type Description Default session AsyncSession

    A database session

    required deployment_id UUID

    a deployment id

    required Source code in prefect/server/models/deployments.py
    @db_injector\nasync def delete_schedules_for_deployment(\n    db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID\n) -> bool:\n    \"\"\"\n    Deletes a deployment schedule.\n\n    Args:\n        session: A database session\n        deployment_id: a deployment id\n    \"\"\"\n\n    result = await session.execute(\n        sa.delete(db.DeploymentSchedule).where(\n            db.DeploymentSchedule.deployment_id == deployment_id\n        )\n    )\n\n    return result.rowcount > 0\n
    "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.read_deployment","title":"read_deployment async","text":"

    Reads a deployment by id.

    Parameters:

    Name Type Description Default session AsyncSession

    A database session

    required deployment_id UUID

    a deployment id

    required

    Returns:

    Type Description Optional[ORMDeployment]

    db.Deployment: the deployment

    Source code in prefect/server/models/deployments.py
    @db_injector\nasync def read_deployment(\n    db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID\n) -> Optional[\"ORMDeployment\"]:\n    \"\"\"Reads a deployment by id.\n\n    Args:\n        session: A database session\n        deployment_id: a deployment id\n\n    Returns:\n        db.Deployment: the deployment\n    \"\"\"\n\n    return await session.get(db.Deployment, deployment_id)\n
    "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.read_deployment_by_name","title":"read_deployment_by_name async","text":"

    Reads a deployment by name.

    Parameters:

    Name Type Description Default session AsyncSession

    A database session

    required name str

    a deployment name

    required flow_name str

    the name of the flow the deployment belongs to

    required

    Returns:

    Type Description Optional[ORMDeployment]

    db.Deployment: the deployment

    Source code in prefect/server/models/deployments.py
    @db_injector\nasync def read_deployment_by_name(\n    db: PrefectDBInterface, session: AsyncSession, name: str, flow_name: str\n) -> Optional[\"ORMDeployment\"]:\n    \"\"\"Reads a deployment by name.\n\n    Args:\n        session: A database session\n        name: a deployment name\n        flow_name: the name of the flow the deployment belongs to\n\n    Returns:\n        db.Deployment: the deployment\n    \"\"\"\n\n    result = await session.execute(\n        select(db.Deployment)\n        .join(db.Flow, db.Deployment.flow_id == db.Flow.id)\n        .where(\n            sa.and_(\n                db.Flow.name == flow_name,\n                db.Deployment.name == name,\n            )\n        )\n        .limit(1)\n    )\n    return result.scalar()\n
    "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.read_deployment_schedules","title":"read_deployment_schedules async","text":"

    Reads a deployment's schedules.

    Parameters:

    Name Type Description Default session AsyncSession

    A database session

    required deployment_id UUID

    a deployment id

    required

    Returns:

    Type Description List[DeploymentSchedule]

    list[schemas.core.DeploymentSchedule]: the deployment's schedules

    Source code in prefect/server/models/deployments.py
    @db_injector\nasync def read_deployment_schedules(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    deployment_id: UUID,\n    deployment_schedule_filter: Optional[\n        schemas.filters.DeploymentScheduleFilter\n    ] = None,\n) -> List[schemas.core.DeploymentSchedule]:\n    \"\"\"\n    Reads a deployment's schedules.\n\n    Args:\n        session: A database session\n        deployment_id: a deployment id\n\n    Returns:\n        list[schemas.core.DeploymentSchedule]: the deployment's schedules\n    \"\"\"\n\n    query = (\n        sa.select(db.DeploymentSchedule)\n        .where(db.DeploymentSchedule.deployment_id == deployment_id)\n        .order_by(db.DeploymentSchedule.updated.desc())\n    )\n\n    if deployment_schedule_filter:\n        query = query.where(deployment_schedule_filter.as_sql_filter(db))\n\n    result = await session.execute(query)\n\n    return [schemas.core.DeploymentSchedule.from_orm(s) for s in result.scalars().all()]\n
    "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.read_deployments","title":"read_deployments async","text":"

    Read deployments.

    Parameters:

    Name Type Description Default session AsyncSession

    A database session

    required offset int

    Query offset

    None limit int

    Query limit

    None flow_filter FlowFilter

    only select deployments whose flows match these criteria

    None flow_run_filter FlowRunFilter

    only select deployments whose flow runs match these criteria

    None task_run_filter TaskRunFilter

    only select deployments whose task runs match these criteria

    None deployment_filter DeploymentFilter

    only select deployment that match these filters

    None work_pool_filter WorkPoolFilter

    only select deployments whose work pools match these criteria

    None work_queue_filter WorkQueueFilter

    only select deployments whose work pool queues match these criteria

    None sort DeploymentSort

    the sort criteria for selected deployments. Defaults to name ASC.

    NAME_ASC

    Returns:

    Type Description Sequence[ORMDeployment]

    List[db.Deployment]: deployments

    Source code in prefect/server/models/deployments.py
    @db_injector\nasync def read_deployments(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    offset: int = None,\n    limit: int = None,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    work_pool_filter: schemas.filters.WorkPoolFilter = None,\n    work_queue_filter: schemas.filters.WorkQueueFilter = None,\n    sort: schemas.sorting.DeploymentSort = schemas.sorting.DeploymentSort.NAME_ASC,\n) -> Sequence[\"ORMDeployment\"]:\n    \"\"\"\n    Read deployments.\n\n    Args:\n        session: A database session\n        offset: Query offset\n        limit: Query limit\n        flow_filter: only select deployments whose flows match these criteria\n        flow_run_filter: only select deployments whose flow runs match these criteria\n        task_run_filter: only select deployments whose task runs match these criteria\n        deployment_filter: only select deployment that match these filters\n        work_pool_filter: only select deployments whose work pools match these criteria\n        work_queue_filter: only select deployments whose work pool queues match these criteria\n        sort: the sort criteria for selected deployments. Defaults to `name` ASC.\n\n    Returns:\n        List[db.Deployment]: deployments\n    \"\"\"\n\n    query = select(db.Deployment).order_by(sort.as_sql_sort(db=db))\n\n    query = await _apply_deployment_filters(\n        query=query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n        work_queue_filter=work_queue_filter,\n    )\n\n    if offset is not None:\n        query = query.offset(offset)\n    if limit is not None:\n        query = query.limit(limit)\n\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
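A hedged sketch of reading deployments for a single flow through the model layer; the FlowFilter construction follows the schemas referenced above, and the import paths are assumptions:

from prefect.server import schemas
from prefect.server.database.dependencies import provide_database_interface  # assumed path
from prefect.server.models.deployments import read_deployments

async def deployments_for_flow(flow_name: str):
    db = provide_database_interface()
    async with db.session_context() as session:
        # Return up to 10 deployments whose flow has the given name.
        return await read_deployments(
            session=session,
            limit=10,
            flow_filter=schemas.filters.FlowFilter(
                name=schemas.filters.FlowFilterName(any_=[flow_name])
            ),
        )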
    "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.schedule_runs","title":"schedule_runs async","text":"

    Schedule flow runs for a deployment

    Parameters:

    Name Type Description Default session AsyncSession

    a database session

    required deployment_id UUID

    the id of the deployment to schedule

    required start_time datetime

    the time from which to start scheduling runs

    None end_time datetime

    runs will be scheduled until at most this time

    None min_time timedelta

    runs will be scheduled until at least this far in the future

    None min_runs int

    a minimum number of runs to schedule

    None max_runs int

    a maximum number of runs to schedule

    None

    This function will generate the minimum number of runs that satisfy the min and max times, and the min and max counts. Specifically, the following order will be respected.

    • Runs will be generated starting on or after the start_time
    • No more than max_runs runs will be generated
    • No runs will be generated after end_time is reached
    • At least min_runs runs will be generated
    • Runs will be generated until at least start_time + min_time is reached

    Returns:

    Type Description List[UUID]

    a list of flow run ids scheduled for the deployment

    Source code in prefect/server/models/deployments.py
    async def schedule_runs(\n    session: AsyncSession,\n    deployment_id: UUID,\n    start_time: datetime.datetime = None,\n    end_time: datetime.datetime = None,\n    min_time: datetime.timedelta = None,\n    min_runs: int = None,\n    max_runs: int = None,\n    auto_scheduled: bool = True,\n) -> List[UUID]:\n    \"\"\"\n    Schedule flow runs for a deployment\n\n    Args:\n        session: a database session\n        deployment_id: the id of the deployment to schedule\n        start_time: the time from which to start scheduling runs\n        end_time: runs will be scheduled until at most this time\n        min_time: runs will be scheduled until at least this far in the future\n        min_runs: a minimum amount of runs to schedule\n        max_runs: a maximum amount of runs to schedule\n\n    This function will generate the minimum number of runs that satisfy the min\n    and max times, and the min and max counts. Specifically, the following order\n    will be respected.\n\n        - Runs will be generated starting on or after the `start_time`\n        - No more than `max_runs` runs will be generated\n        - No runs will be generated after `end_time` is reached\n        - At least `min_runs` runs will be generated\n        - Runs will be generated until at least `start_time` + `min_time` is reached\n\n    Returns:\n        a list of flow run ids scheduled for the deployment\n    \"\"\"\n    if min_runs is None:\n        min_runs = PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS.value()\n    if max_runs is None:\n        max_runs = PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS.value()\n    if start_time is None:\n        start_time = pendulum.now(\"UTC\")\n    if end_time is None:\n        end_time = start_time + (\n            PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME.value()\n        )\n    if min_time is None:\n        min_time = PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME.value()\n\n    start_time = pendulum.instance(start_time)\n    end_time = pendulum.instance(end_time)\n\n    runs = await _generate_scheduled_flow_runs(\n        session=session,\n        deployment_id=deployment_id,\n        start_time=start_time,\n        end_time=end_time,\n        min_time=min_time,\n        min_runs=min_runs,\n        max_runs=max_runs,\n        auto_scheduled=auto_scheduled,\n    )\n    return await _insert_scheduled_flow_runs(session=session, runs=runs)\n
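A hedged sketch of invoking the scheduler helper for one deployment over a one-day window; the deployment id, the window, and the import paths are placeholders and assumptions:

import pendulum

from prefect.server.database.dependencies import provide_database_interface  # assumed path
from prefect.server.models.deployments import schedule_runs

async def schedule_next_day(deployment_id):
    db = provide_database_interface()
    now = pendulum.now("UTC")
    async with db.session_context(begin_transaction=True) as session:
        # Generate at most 10 auto-scheduled runs between now and tomorrow.
        run_ids = await schedule_runs(
            session=session,
            deployment_id=deployment_id,
            start_time=now,
            end_time=now.add(days=1),
            max_runs=10,
        )
    return run_ids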
    "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.update_deployment","title":"update_deployment async","text":"

    Updates a deployment.

    Parameters:

    Name Type Description Default session AsyncSession

    a database session

    required deployment_id UUID

    the ID of the deployment to modify

    required deployment DeploymentUpdate

    changes to a deployment model

    required

    Returns:

    Name Type Description bool bool

    whether the deployment was updated

    Source code in prefect/server/models/deployments.py
    @db_injector\nasync def update_deployment(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    deployment_id: UUID,\n    deployment: schemas.actions.DeploymentUpdate,\n) -> bool:\n    \"\"\"Updates a deployment.\n\n    Args:\n        session: a database session\n        deployment_id: the ID of the deployment to modify\n        deployment: changes to a deployment model\n\n    Returns:\n        bool: whether the deployment was updated\n\n    \"\"\"\n\n    from prefect.server.api.workers import WorkerLookups\n\n    schedules = deployment.schedules\n\n    # exclude_unset=True allows us to only update values provided by\n    # the user, ignoring any defaults on the model\n    update_data = deployment.dict(\n        shallow=True,\n        exclude_unset=True,\n        exclude={\"work_pool_name\"},\n    )\n\n    # The job_variables field in client and server schemas is named\n    # infra_overrides in the database.\n    job_variables = update_data.pop(\"job_variables\", None)\n    if job_variables:\n        update_data[\"infra_overrides\"] = job_variables\n\n    should_update_schedules = update_data.pop(\"schedules\", None) is not None\n\n    if deployment.work_pool_name and deployment.work_queue_name:\n        # If a specific pool name/queue name combination was provided, get the\n        # ID for that work pool queue.\n        update_data[\n            \"work_queue_id\"\n        ] = await WorkerLookups()._get_work_queue_id_from_name(\n            session=session,\n            work_pool_name=deployment.work_pool_name,\n            work_queue_name=deployment.work_queue_name,\n            create_queue_if_not_found=True,\n        )\n    elif deployment.work_pool_name:\n        # If just a pool name was provided, get the ID for its default\n        # work pool queue.\n        update_data[\n            \"work_queue_id\"\n        ] = await WorkerLookups()._get_default_work_queue_id_from_work_pool_name(\n            session=session,\n            work_pool_name=deployment.work_pool_name,\n        )\n    elif deployment.work_queue_name:\n        # If just a queue name was provided, ensure the queue exists and\n        # get its ID.\n        work_queue = await models.work_queues.ensure_work_queue_exists(\n            session=session, name=update_data[\"work_queue_name\"]\n        )\n        update_data[\"work_queue_id\"] = work_queue.id\n\n    if \"is_schedule_active\" in update_data:\n        update_data[\"paused\"] = not update_data[\"is_schedule_active\"]\n\n    update_stmt = (\n        sa.update(db.Deployment)\n        .where(db.Deployment.id == deployment_id)\n        .values(**update_data)\n    )\n    result = await session.execute(update_stmt)\n\n    # delete any auto scheduled runs that would have reflected the old deployment config\n    await _delete_scheduled_runs(\n        session=session, deployment_id=deployment_id, auto_scheduled_only=True\n    )\n\n    if should_update_schedules:\n        # If schedules were provided, remove the existing schedules and\n        # replace them with the new ones.\n        await delete_schedules_for_deployment(\n            session=session, deployment_id=deployment_id\n        )\n        await create_deployment_schedules(\n            session=session,\n            deployment_id=deployment_id,\n            schedules=[\n                schemas.actions.DeploymentScheduleCreate(\n                    schedule=schedule.schedule,\n                    active=schedule.active,  # type: ignore[call-arg]\n                )\n                for schedule in 
schedules\n            ],\n        )\n\n    return result.rowcount > 0\n
    "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.update_deployment_schedule","title":"update_deployment_schedule async","text":"

    Updates a deployment schedule.

    Parameters:

    Name Type Description Default session AsyncSession

    A database session

    required deployment_schedule_id UUID

    a deployment schedule id

    required schedule DeploymentScheduleUpdate

    a deployment schedule update action

    required Source code in prefect/server/models/deployments.py
    @db_injector\nasync def update_deployment_schedule(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    deployment_id: UUID,\n    deployment_schedule_id: UUID,\n    schedule: schemas.actions.DeploymentScheduleUpdate,\n) -> bool:\n    \"\"\"\n    Updates a deployment's schedules.\n\n    Args:\n        session: A database session\n        deployment_schedule_id: a deployment schedule id\n        schedule: a deployment schedule update action\n    \"\"\"\n\n    result = await session.execute(\n        sa.update(db.DeploymentSchedule)\n        .where(\n            sa.and_(\n                db.DeploymentSchedule.id == deployment_schedule_id,\n                db.DeploymentSchedule.deployment_id == deployment_id,\n            )\n        )\n        .values(**schedule.dict(exclude_none=True))\n    )\n\n    return result.rowcount > 0\n
    "},{"location":"api-ref/server/models/flow_run_states/","title":"server.models.flow_run_states","text":""},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states","title":"prefect.server.models.flow_run_states","text":"

    Functions for interacting with flow run state ORM objects. Intended for internal use by the Prefect REST API.

    "},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states.delete_flow_run_state","title":"delete_flow_run_state async","text":"

    Delete a flow run state by id.

    Parameters:

    Name Type Description Default session Session

    A database session

    required flow_run_state_id UUID

    a flow run state id

    required

    Returns:

    Name Type Description bool bool

    whether or not the flow run state was deleted

    Source code in prefect/server/models/flow_run_states.py
    @inject_db\nasync def delete_flow_run_state(\n    session: sa.orm.Session, flow_run_state_id: UUID, db: PrefectDBInterface\n) -> bool:\n    \"\"\"\n    Delete a flow run state by id.\n\n    Args:\n        session: A database session\n        flow_run_state_id: a flow run state id\n\n    Returns:\n        bool: whether or not the flow run state was deleted\n    \"\"\"\n\n    result = await session.execute(\n        delete(db.FlowRunState).where(db.FlowRunState.id == flow_run_state_id)\n    )\n    return result.rowcount > 0\n
    "},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states.read_flow_run_state","title":"read_flow_run_state async","text":"

    Reads a flow run state by id.

    Parameters:

    Name Type Description Default session Session

    A database session

    required flow_run_state_id UUID

    a flow run state id

    required

    Returns:

    Type Description

    db.FlowRunState: the flow state

    Source code in prefect/server/models/flow_run_states.py
    @inject_db\nasync def read_flow_run_state(\n    session: sa.orm.Session, flow_run_state_id: UUID, db: PrefectDBInterface\n):\n    \"\"\"\n    Reads a flow run state by id.\n\n    Args:\n        session: A database session\n        flow_run_state_id: a flow run state id\n\n    Returns:\n        db.FlowRunState: the flow state\n    \"\"\"\n\n    return await session.get(db.FlowRunState, flow_run_state_id)\n
    "},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states.read_flow_run_states","title":"read_flow_run_states async","text":"

    Reads flow run states for a flow run.

    Parameters:

    Name Type Description Default session Session

    A database session

    required flow_run_id UUID

    the flow run id

    required

    Returns:

    Type Description

    List[db.FlowRunState]: the flow run states

    Source code in prefect/server/models/flow_run_states.py
    @inject_db\nasync def read_flow_run_states(\n    session: sa.orm.Session, flow_run_id: UUID, db: PrefectDBInterface\n):\n    \"\"\"\n    Reads flow runs states for a flow run.\n\n    Args:\n        session: A database session\n        flow_run_id: the flow run id\n\n    Returns:\n        List[db.FlowRunState]: the flow run states\n    \"\"\"\n\n    query = (\n        select(db.FlowRunState)\n        .filter_by(flow_run_id=flow_run_id)\n        .order_by(db.FlowRunState.timestamp)\n    )\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
    "},{"location":"api-ref/server/models/flow_runs/","title":"server.models.flow_runs","text":""},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs","title":"prefect.server.models.flow_runs","text":"

    Functions for interacting with flow run ORM objects. Intended for internal use by the Prefect REST API.

    "},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.count_flow_runs","title":"count_flow_runs async","text":"

    Count flow runs.

    Parameters:

    Name Type Description Default session AsyncSession

    a database session

    required flow_filter FlowFilter

    only count flow runs whose flows match these filters

    None flow_run_filter FlowRunFilter

    only count flow runs that match these filters

    None task_run_filter TaskRunFilter

    only count flow runs whose task runs match these filters

    None deployment_filter DeploymentFilter

    only count flow runs whose deployments match these filters

    None

    Returns:

    Name Type Description int int

    count of flow runs

    Source code in prefect/server/models/flow_runs.py
    @db_injector\nasync def count_flow_runs(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    work_pool_filter: schemas.filters.WorkPoolFilter = None,\n    work_queue_filter: schemas.filters.WorkQueueFilter = None,\n) -> int:\n    \"\"\"\n    Count flow runs.\n\n    Args:\n        session: a database session\n        flow_filter: only count flow runs whose flows match these filters\n        flow_run_filter: only count flow runs that match these filters\n        task_run_filter: only count flow runs whose task runs match these filters\n        deployment_filter: only count flow runs whose deployments match these filters\n\n    Returns:\n        int: count of flow runs\n    \"\"\"\n\n    query = select(sa.func.count(sa.text(\"*\"))).select_from(db.FlowRun)\n\n    query = await _apply_flow_run_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n        work_queue_filter=work_queue_filter,\n    )\n\n    result = await session.execute(query)\n    return result.scalar()\n
    "},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.create_flow_run","title":"create_flow_run async","text":"

    Creates a new flow run.

    If the provided flow run has a state attached, it will also be created.

    Parameters:

    Name Type Description Default session AsyncSession

    a database session

    required flow_run FlowRun

    a flow run model

    required

    Returns:

    Type Description ORMFlowRun

    db.FlowRun: the newly-created flow run

    Source code in prefect/server/models/flow_runs.py
    @db_injector\nasync def create_flow_run(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    flow_run: schemas.core.FlowRun,\n    orchestration_parameters: Optional[dict] = None,\n) -> \"ORMFlowRun\":\n    \"\"\"Creates a new flow run.\n\n    If the provided flow run has a state attached, it will also be created.\n\n    Args:\n        session: a database session\n        flow_run: a flow run model\n\n    Returns:\n        db.FlowRun: the newly-created flow run\n    \"\"\"\n    now = pendulum.now(\"UTC\")\n\n    flow_run_dict = dict(\n        **flow_run.dict(\n            shallow=True,\n            exclude={\n                \"created\",\n                \"state\",\n                \"estimated_run_time\",\n                \"estimated_start_time_delta\",\n            },\n            exclude_unset=True,\n        ),\n        created=now,\n    )\n\n    # if no idempotency key was provided, create the run directly\n    if not flow_run.idempotency_key:\n        model = db.FlowRun(**flow_run_dict)\n        session.add(model)\n        await session.flush()\n\n    # otherwise let the database take care of enforcing idempotency\n    else:\n        insert_stmt = (\n            db.insert(db.FlowRun)\n            .values(**flow_run_dict)\n            .on_conflict_do_nothing(\n                index_elements=db.flow_run_unique_upsert_columns,\n            )\n        )\n        await session.execute(insert_stmt)\n\n        # read the run to see if idempotency was applied or not\n        query = (\n            sa.select(db.FlowRun)\n            .where(\n                sa.and_(\n                    db.FlowRun.flow_id == flow_run.flow_id,\n                    db.FlowRun.idempotency_key == flow_run.idempotency_key,\n                )\n            )\n            .limit(1)\n            .execution_options(populate_existing=True)\n            .options(\n                selectinload(db.FlowRun.work_queue).selectinload(db.WorkQueue.work_pool)\n            )\n        )\n        result = await session.execute(query)\n        model = result.scalar()\n\n    # if the flow run was created in this function call then we need to set the\n    # state. If it was created idempotently, the created time won't match.\n    if model.created == now and flow_run.state:\n        await models.flow_runs.set_flow_run_state(\n            session=session,\n            flow_run_id=model.id,\n            state=flow_run.state,\n            force=True,\n            orchestration_parameters=orchestration_parameters,\n        )\n    return model\n
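A hedged sketch of creating a flow run idempotently through the model layer; the idempotency key value, the FlowRun construction, and the import paths are illustrative assumptions:

from prefect.server import schemas
from prefect.server.database.dependencies import provide_database_interface  # assumed path
from prefect.server.models.flow_runs import create_flow_run

async def create_idempotent_run(flow_id):
    db = provide_database_interface()
    async with db.session_context(begin_transaction=True) as session:
        # A second call with the same idempotency key returns the existing run.
        run = await create_flow_run(
            session=session,
            flow_run=schemas.core.FlowRun(
                flow_id=flow_id, idempotency_key="nightly-run"  # hypothetical key
            ),
        )
    return run.id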
    "},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.delete_flow_run","title":"delete_flow_run async","text":"

    Delete a flow run by flow_run_id.

    Parameters:

    Name Type Description Default session AsyncSession

    A database session

    required flow_run_id UUID

    a flow run id

    required

    Returns:

    Name Type Description bool bool

    whether or not the flow run was deleted

    Source code in prefect/server/models/flow_runs.py
    @db_injector\nasync def delete_flow_run(\n    db: PrefectDBInterface, session: AsyncSession, flow_run_id: UUID\n) -> bool:\n    \"\"\"\n    Delete a flow run by flow_run_id.\n\n    Args:\n        session: A database session\n        flow_run_id: a flow run id\n\n    Returns:\n        bool: whether or not the flow run was deleted\n    \"\"\"\n\n    result = await session.execute(\n        delete(db.FlowRun).where(db.FlowRun.id == flow_run_id)\n    )\n    return result.rowcount > 0\n
    "},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.read_flow_run","title":"read_flow_run async","text":"

    Reads a flow run by id.

    Parameters:

    Name Type Description Default session AsyncSession

    A database session

    required flow_run_id UUID

    a flow run id

    required

    Returns:

    Type Description Optional[ORMFlowRun]

    db.FlowRun: the flow run

    Source code in prefect/server/models/flow_runs.py
    @db_injector\nasync def read_flow_run(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    flow_run_id: UUID,\n    for_update: bool = False,\n) -> Optional[\"ORMFlowRun\"]:\n    \"\"\"\n    Reads a flow run by id.\n\n    Args:\n        session: A database session\n        flow_run_id: a flow run id\n\n    Returns:\n        db.FlowRun: the flow run\n    \"\"\"\n    select = (\n        sa.select(db.FlowRun)\n        .where(db.FlowRun.id == flow_run_id)\n        .options(\n            selectinload(db.FlowRun.work_queue).selectinload(db.WorkQueue.work_pool)\n        )\n    )\n\n    if for_update:\n        select = select.with_for_update()\n\n    result = await session.execute(select)\n    return result.scalar()\n
    "},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.read_flow_run_graph","title":"read_flow_run_graph async","text":"

Given a flow run, return the graph of its task and subflow runs. If a since datetime is provided, only return items that may have changed since that time.

    Source code in prefect/server/models/flow_runs.py
    @db_injector\nasync def read_flow_run_graph(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    flow_run_id: UUID,\n    since: datetime.datetime = datetime.datetime.min,\n) -> Graph:\n    \"\"\"Given a flow run, return the graph of it's task and subflow runs. If a `since`\n    datetime is provided, only return items that may have changed since that time.\"\"\"\n    return await db.queries.flow_run_graph_v2(\n        db=db,\n        session=session,\n        flow_run_id=flow_run_id,\n        since=since,\n        max_nodes=PREFECT_API_MAX_FLOW_RUN_GRAPH_NODES.value(),\n        max_artifacts=PREFECT_API_MAX_FLOW_RUN_GRAPH_ARTIFACTS.value(),\n    )\n
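As a small illustration, the graph can be limited to recent changes by passing a since timestamp; the helper below is a sketch and assumes the caller supplies the session:

```python
import datetime
from uuid import UUID

from sqlalchemy.ext.asyncio import AsyncSession

from prefect.server import models


async def recent_graph(session: AsyncSession, flow_run_id: UUID):
    # Only items that may have changed in the last hour are returned.
    since = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(hours=1)
    return await models.flow_runs.read_flow_run_graph(
        session=session, flow_run_id=flow_run_id, since=since
    )
```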
    "},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.read_flow_runs","title":"read_flow_runs async","text":"

    Read flow runs.

    Parameters:

    Name Type Description Default session AsyncSession

    a database session

    required columns List

    a list of the flow run ORM columns to load, for performance

    None flow_filter FlowFilter

    only select flow runs whose flows match these filters

    None flow_run_filter FlowRunFilter

only select flow runs that match these filters

    None task_run_filter TaskRunFilter

    only select flow runs whose task runs match these filters

    None deployment_filter DeploymentFilter

    only select flow runs whose deployments match these filters

    None offset int

    Query offset

    None limit int

    Query limit

    None sort FlowRunSort

    Query sort

    ID_DESC

    Returns:

    Type Description Sequence[ORMFlowRun]

    List[db.FlowRun]: flow runs

    Source code in prefect/server/models/flow_runs.py
    @db_injector\nasync def read_flow_runs(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    columns: List = None,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    work_pool_filter: schemas.filters.WorkPoolFilter = None,\n    work_queue_filter: schemas.filters.WorkQueueFilter = None,\n    offset: int = None,\n    limit: int = None,\n    sort: schemas.sorting.FlowRunSort = schemas.sorting.FlowRunSort.ID_DESC,\n) -> Sequence[\"ORMFlowRun\"]:\n    \"\"\"\n    Read flow runs.\n\n    Args:\n        session: a database session\n        columns: a list of the flow run ORM columns to load, for performance\n        flow_filter: only select flow runs whose flows match these filters\n        flow_run_filter: only select flow runs match these filters\n        task_run_filter: only select flow runs whose task runs match these filters\n        deployment_filter: only select flow runs whose deployments match these filters\n        offset: Query offset\n        limit: Query limit\n        sort: Query sort\n\n    Returns:\n        List[db.FlowRun]: flow runs\n    \"\"\"\n    query = (\n        select(db.FlowRun)\n        .order_by(sort.as_sql_sort(db))\n        .options(\n            selectinload(db.FlowRun.work_queue).selectinload(db.WorkQueue.work_pool)\n        )\n    )\n\n    if columns:\n        query = query.options(load_only(*columns))\n\n    query = await _apply_flow_run_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n        work_queue_filter=work_queue_filter,\n    )\n\n    if offset is not None:\n        query = query.offset(offset)\n\n    if limit is not None:\n        query = query.limit(limit)\n\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
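A minimal, illustrative pagination sketch built on the offset, limit, and filter parameters above; the helper name and page-size choice are assumptions:

```python
from typing import List
from uuid import UUID

from sqlalchemy.ext.asyncio import AsyncSession

from prefect.server import models, schemas


async def page_of_runs(
    session: AsyncSession, run_ids: List[UUID], page: int, page_size: int = 50
):
    # Restrict to specific run IDs and page through results (default sort is ID_DESC).
    flow_run_filter = schemas.filters.FlowRunFilter(
        id=schemas.filters.FlowRunFilterId(any_=run_ids)
    )
    return await models.flow_runs.read_flow_runs(
        session=session,
        flow_run_filter=flow_run_filter,
        offset=page * page_size,
        limit=page_size,
    )
```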
    "},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.read_task_run_dependencies","title":"read_task_run_dependencies async","text":"

    Get a task run dependency map for a given flow run.

    Source code in prefect/server/models/flow_runs.py
    async def read_task_run_dependencies(\n    session: AsyncSession,\n    flow_run_id: UUID,\n) -> List[DependencyResult]:\n    \"\"\"\n    Get a task run dependency map for a given flow run.\n    \"\"\"\n    flow_run = await models.flow_runs.read_flow_run(\n        session=session, flow_run_id=flow_run_id\n    )\n    if not flow_run:\n        raise ObjectNotFoundError(f\"Flow run with id {flow_run_id} not found\")\n\n    task_runs = await models.task_runs.read_task_runs(\n        session=session,\n        flow_run_filter=schemas.filters.FlowRunFilter(\n            id=schemas.filters.FlowRunFilterId(any_=[flow_run_id])\n        ),\n    )\n\n    dependency_graph = []\n\n    for task_run in task_runs:\n        inputs = list(set(chain(*task_run.task_inputs.values())))\n        untrackable_result_status = (\n            False\n            if task_run.state is None\n            else task_run.state.state_details.untrackable_result\n        )\n        dependency_graph.append(\n            {\n                \"id\": task_run.id,\n                \"upstream_dependencies\": inputs,\n                \"state\": task_run.state,\n                \"expected_start_time\": task_run.expected_start_time,\n                \"name\": task_run.name,\n                \"start_time\": task_run.start_time,\n                \"end_time\": task_run.end_time,\n                \"total_run_time\": task_run.total_run_time,\n                \"estimated_run_time\": task_run.estimated_run_time,\n                \"untrackable_result\": untrackable_result_status,\n            }\n        )\n\n    return dependency_graph\n
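For illustration, the dependency map can be summarized directly; each entry is a mapping with the keys built in the source above, and the helper name is an assumption:

```python
from uuid import UUID

from sqlalchemy.ext.asyncio import AsyncSession

from prefect.server import models


async def upstream_counts(session: AsyncSession, flow_run_id: UUID) -> dict:
    # Map each task run's name to how many upstream dependencies it has.
    graph = await models.flow_runs.read_task_run_dependencies(
        session=session, flow_run_id=flow_run_id
    )
    return {node["name"]: len(node["upstream_dependencies"]) for node in graph}
```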
    "},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.set_flow_run_state","title":"set_flow_run_state async","text":"

    Creates a new orchestrated flow run state.

Setting a new state on a run is one of the principal actions governed by Prefect's orchestration logic. Setting a new run state does not guarantee creation; instead, it triggers orchestration rules that govern the proposed state input. If the state is considered valid, it is written to the database. Otherwise, a different state, or no state, may be created. A force flag is supplied to bypass a subset of orchestration logic.

    Parameters:

    Name Type Description Default session AsyncSession

    a database session

    required flow_run_id UUID

    the flow run id

    required state State

    a flow run state model

    required force bool

    if False, orchestration rules will be applied that may alter or prevent the state transition. If True, orchestration rules are not applied.

    False

    Returns:

    Type Description OrchestrationResult

    OrchestrationResult object

    Source code in prefect/server/models/flow_runs.py
    async def set_flow_run_state(\n    session: AsyncSession,\n    flow_run_id: UUID,\n    state: schemas.states.State,\n    force: bool = False,\n    flow_policy: BaseOrchestrationPolicy = None,\n    orchestration_parameters: Optional[Dict[str, Any]] = None,\n) -> OrchestrationResult:\n    \"\"\"\n    Creates a new orchestrated flow run state.\n\n    Setting a new state on a run is the one of the principal actions that is governed by\n    Prefect's orchestration logic. Setting a new run state will not guarantee creation,\n    but instead trigger orchestration rules to govern the proposed `state` input. If\n    the state is considered valid, it will be written to the database. Otherwise, a\n    it's possible a different state, or no state, will be created. A `force` flag is\n    supplied to bypass a subset of orchestration logic.\n\n    Args:\n        session: a database session\n        flow_run_id: the flow run id\n        state: a flow run state model\n        force: if False, orchestration rules will be applied that may alter or prevent\n            the state transition. If True, orchestration rules are not applied.\n\n    Returns:\n        OrchestrationResult object\n    \"\"\"\n\n    # load the flow run\n    run = await models.flow_runs.read_flow_run(\n        session=session,\n        flow_run_id=flow_run_id,\n        # Lock the row to prevent orchestration race conditions\n        for_update=True,\n    )\n\n    if not run:\n        raise ObjectNotFoundError(f\"Flow run with id {flow_run_id} not found\")\n\n    initial_state = run.state.as_state() if run.state else None\n    initial_state_type = initial_state.type if initial_state else None\n    proposed_state_type = state.type if state else None\n    intended_transition = (initial_state_type, proposed_state_type)\n\n    if force or flow_policy is None:\n        flow_policy = MinimalFlowPolicy\n\n    orchestration_rules = flow_policy.compile_transition_rules(*intended_transition)\n    global_rules = GlobalFlowPolicy.compile_transition_rules(*intended_transition)\n\n    context = FlowOrchestrationContext(\n        session=session,\n        run=run,\n        initial_state=initial_state,\n        proposed_state=state,\n    )\n\n    if orchestration_parameters is not None:\n        context.parameters = orchestration_parameters\n\n    # apply orchestration rules and create the new flow run state\n    async with contextlib.AsyncExitStack() as stack:\n        for rule in orchestration_rules:\n            context = await stack.enter_async_context(\n                rule(context, *intended_transition)\n            )\n\n        for rule in global_rules:\n            context = await stack.enter_async_context(\n                rule(context, *intended_transition)\n            )\n\n        await context.validate_proposed_state()\n\n    if context.orchestration_error is not None:\n        raise context.orchestration_error\n\n    result = OrchestrationResult(\n        state=context.validated_state,\n        status=context.response_status,\n        details=context.response_details,\n    )\n\n    # if a new state is being set (either ACCEPTED from user or REJECTED\n    # and set by the server), check for any notification policies\n    if result.status in (SetStateStatus.ACCEPT, SetStateStatus.REJECT):\n        await models.flow_run_notification_policies.queue_flow_run_notifications(\n            session=session, flow_run=run\n        )\n\n    return result\n
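A hedged sketch of proposing a new state and inspecting the orchestration outcome; the Cancelled state constructor and helper name are assumptions for illustration only:

```python
from uuid import UUID

from sqlalchemy.ext.asyncio import AsyncSession

from prefect.server import models, schemas


async def propose_cancellation(session: AsyncSession, flow_run_id: UUID):
    # Propose a Cancelled state; orchestration rules decide whether it is accepted.
    result = await models.flow_runs.set_flow_run_state(
        session=session,
        flow_run_id=flow_run_id,
        state=schemas.states.Cancelled(),  # assumed state constructor
    )
    # result.status reports the outcome (for example ACCEPT or REJECT).
    return result.status
```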
    "},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.update_flow_run","title":"update_flow_run async","text":"

    Updates a flow run.

    Parameters:

    Name Type Description Default session AsyncSession

    a database session

    required flow_run_id UUID

    the flow run id to update

    required flow_run FlowRunUpdate

    a flow run model

    required

    Returns:

    Name Type Description bool bool

    whether or not matching rows were found to update

    Source code in prefect/server/models/flow_runs.py
    @db_injector\nasync def update_flow_run(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    flow_run_id: UUID,\n    flow_run: schemas.actions.FlowRunUpdate,\n) -> bool:\n    \"\"\"\n    Updates a flow run.\n\n    Args:\n        session: a database session\n        flow_run_id: the flow run id to update\n        flow_run: a flow run model\n\n    Returns:\n        bool: whether or not matching rows were found to update\n    \"\"\"\n    update_stmt = (\n        sa.update(db.FlowRun)\n        .where(db.FlowRun.id == flow_run_id)\n        # exclude_unset=True allows us to only update values provided by\n        # the user, ignoring any defaults on the model\n        .values(**flow_run.dict(shallow=True, exclude_unset=True))\n    )\n    result = await session.execute(update_stmt)\n    return result.rowcount > 0\n
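A minimal sketch of a partial update; it assumes FlowRunUpdate accepts a name field and that the caller supplies the session:

```python
from uuid import UUID

from sqlalchemy.ext.asyncio import AsyncSession

from prefect.server import models, schemas


async def rename_flow_run(session: AsyncSession, flow_run_id: UUID, new_name: str) -> bool:
    # Only fields explicitly set on the update model are written (exclude_unset=True).
    return await models.flow_runs.update_flow_run(
        session=session,
        flow_run_id=flow_run_id,
        flow_run=schemas.actions.FlowRunUpdate(name=new_name),  # assumed field
    )
```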
    "},{"location":"api-ref/server/models/flows/","title":"server.models.flows","text":""},{"location":"api-ref/server/models/flows/#prefect.server.models.flows","title":"prefect.server.models.flows","text":"

    Functions for interacting with flow ORM objects. Intended for internal use by the Prefect REST API.

    "},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.count_flows","title":"count_flows async","text":"

    Count flows.

    Parameters:

    Name Type Description Default session AsyncSession

    A database session

    required flow_filter Union[FlowFilter, None]

    only count flows that match these filters

    None flow_run_filter Union[FlowRunFilter, None]

    only count flows whose flow runs match these filters

    None task_run_filter Union[TaskRunFilter, None]

    only count flows whose task runs match these filters

    None deployment_filter Union[DeploymentFilter, None]

    only count flows whose deployments match these filters

    None work_pool_filter Union[WorkPoolFilter, None]

    only count flows whose work pools match these filters

    None

    Returns:

    Name Type Description int int

    count of flows

    Source code in prefect/server/models/flows.py
    @db_injector\nasync def count_flows(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    flow_filter: Union[schemas.filters.FlowFilter, None] = None,\n    flow_run_filter: Union[schemas.filters.FlowRunFilter, None] = None,\n    task_run_filter: Union[schemas.filters.TaskRunFilter, None] = None,\n    deployment_filter: Union[schemas.filters.DeploymentFilter, None] = None,\n    work_pool_filter: Union[schemas.filters.WorkPoolFilter, None] = None,\n) -> int:\n    \"\"\"\n    Count flows.\n\n    Args:\n        session: A database session\n        flow_filter: only count flows that match these filters\n        flow_run_filter: only count flows whose flow runs match these filters\n        task_run_filter: only count flows whose task runs match these filters\n        deployment_filter: only count flows whose deployments match these filters\n        work_pool_filter: only count flows whose work pools match these filters\n\n    Returns:\n        int: count of flows\n    \"\"\"\n\n    query = select(sa.func.count(sa.text(\"*\"))).select_from(db.Flow)\n\n    query = await _apply_flow_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n    )\n\n    result = await session.execute(query)\n    return result.scalar_one()\n
    "},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.create_flow","title":"create_flow async","text":"

    Creates a new flow.

    If a flow with the same name already exists, the existing flow is returned.

    Parameters:

    Name Type Description Default session AsyncSession

    a database session

    required flow Flow

    a flow model

    required

    Returns:

    Type Description ORMFlow

    db.Flow: the newly-created or existing flow

    Source code in prefect/server/models/flows.py
    @db_injector\nasync def create_flow(\n    db: PrefectDBInterface, session: AsyncSession, flow: schemas.core.Flow\n) -> \"ORMFlow\":\n    \"\"\"\n    Creates a new flow.\n\n    If a flow with the same name already exists, the existing flow is returned.\n\n    Args:\n        session: a database session\n        flow: a flow model\n\n    Returns:\n        db.Flow: the newly-created or existing flow\n    \"\"\"\n\n    insert_stmt = (\n        db.insert(db.Flow)\n        .values(**flow.dict(shallow=True, exclude_unset=True))\n        .on_conflict_do_nothing(\n            index_elements=db.flow_unique_upsert_columns,\n        )\n    )\n    await session.execute(insert_stmt)\n\n    query = (\n        sa.select(db.Flow)\n        .where(\n            db.Flow.name == flow.name,\n        )\n        .limit(1)\n        .execution_options(populate_existing=True)\n    )\n    result = await session.execute(query)\n    model = result.scalar_one()\n    return model\n
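Because the insert is idempotent on the flow name, create_flow doubles as a create-or-get helper; a minimal sketch, with the helper name as an assumption:

```python
from sqlalchemy.ext.asyncio import AsyncSession

from prefect.server import models, schemas


async def ensure_flow(session: AsyncSession, name: str):
    # Returns the existing flow if one with this name is already registered.
    return await models.flows.create_flow(
        session=session, flow=schemas.core.Flow(name=name)
    )
```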
    "},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.delete_flow","title":"delete_flow async","text":"

    Delete a flow by id.

    Parameters:

    Name Type Description Default session AsyncSession

    A database session

    required flow_id UUID

    a flow id

    required

    Returns:

    Name Type Description bool bool

    whether or not the flow was deleted

    Source code in prefect/server/models/flows.py
    @db_injector\nasync def delete_flow(\n    db: PrefectDBInterface, session: AsyncSession, flow_id: UUID\n) -> bool:\n    \"\"\"\n    Delete a flow by id.\n\n    Args:\n        session: A database session\n        flow_id: a flow id\n\n    Returns:\n        bool: whether or not the flow was deleted\n    \"\"\"\n\n    result = await session.execute(delete(db.Flow).where(db.Flow.id == flow_id))\n    return result.rowcount > 0\n
    "},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.read_flow","title":"read_flow async","text":"

    Reads a flow by id.

    Parameters:

    Name Type Description Default session AsyncSession

    A database session

    required flow_id UUID

    a flow id

    required

    Returns:

    Type Description Optional[ORMFlow]

    db.Flow: the flow

    Source code in prefect/server/models/flows.py
    @db_injector\nasync def read_flow(\n    db: PrefectDBInterface, session: AsyncSession, flow_id: UUID\n) -> Optional[\"ORMFlow\"]:\n    \"\"\"\n    Reads a flow by id.\n\n    Args:\n        session: A database session\n        flow_id: a flow id\n\n    Returns:\n        db.Flow: the flow\n    \"\"\"\n    return await session.get(db.Flow, flow_id)\n
    "},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.read_flow_by_name","title":"read_flow_by_name async","text":"

    Reads a flow by name.

    Parameters:

    Name Type Description Default session AsyncSession

    A database session

    required name str

    a flow name

    required

    Returns:

    Type Description Optional[ORMFlow]

    db.Flow: the flow

    Source code in prefect/server/models/flows.py
    @db_injector\nasync def read_flow_by_name(\n    db: PrefectDBInterface, session: AsyncSession, name: str\n) -> Optional[\"ORMFlow\"]:\n    \"\"\"\n    Reads a flow by name.\n\n    Args:\n        session: A database session\n        name: a flow name\n\n    Returns:\n        db.Flow: the flow\n    \"\"\"\n\n    result = await session.execute(select(db.Flow).filter_by(name=name))\n    return result.scalar()\n
    "},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.read_flows","title":"read_flows async","text":"

    Read multiple flows.

    Parameters:

    Name Type Description Default session AsyncSession

    A database session

    required flow_filter Union[FlowFilter, None]

    only select flows that match these filters

    None flow_run_filter Union[FlowRunFilter, None]

    only select flows whose flow runs match these filters

    None task_run_filter Union[TaskRunFilter, None]

    only select flows whose task runs match these filters

    None deployment_filter Union[DeploymentFilter, None]

    only select flows whose deployments match these filters

    None work_pool_filter Union[WorkPoolFilter, None]

    only select flows whose work pools match these filters

    None offset Union[int, None]

    Query offset

    None limit Union[int, None]

    Query limit

    None

    Returns:

    Type Description Sequence[ORMFlow]

    List[db.Flow]: flows

    Source code in prefect/server/models/flows.py
    @db_injector\nasync def read_flows(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    flow_filter: Union[schemas.filters.FlowFilter, None] = None,\n    flow_run_filter: Union[schemas.filters.FlowRunFilter, None] = None,\n    task_run_filter: Union[schemas.filters.TaskRunFilter, None] = None,\n    deployment_filter: Union[schemas.filters.DeploymentFilter, None] = None,\n    work_pool_filter: Union[schemas.filters.WorkPoolFilter, None] = None,\n    sort: schemas.sorting.FlowSort = schemas.sorting.FlowSort.NAME_ASC,\n    offset: Union[int, None] = None,\n    limit: Union[int, None] = None,\n) -> Sequence[\"ORMFlow\"]:\n    \"\"\"\n    Read multiple flows.\n\n    Args:\n        session: A database session\n        flow_filter: only select flows that match these filters\n        flow_run_filter: only select flows whose flow runs match these filters\n        task_run_filter: only select flows whose task runs match these filters\n        deployment_filter: only select flows whose deployments match these filters\n        work_pool_filter: only select flows whose work pools match these filters\n        offset: Query offset\n        limit: Query limit\n\n    Returns:\n        List[db.Flow]: flows\n    \"\"\"\n\n    query = select(db.Flow).order_by(sort.as_sql_sort(db=db))\n\n    query = await _apply_flow_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n    )\n\n    if offset is not None:\n        query = query.offset(offset)\n\n    if limit is not None:\n        query = query.limit(limit)\n\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
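An illustrative read with a name filter and a result cap; the FlowFilterName sub-filter is assumed to follow the same any_ pattern as the other filters in this module:

```python
from typing import List

from sqlalchemy.ext.asyncio import AsyncSession

from prefect.server import models, schemas


async def flows_named(session: AsyncSession, names: List[str]):
    # Select only flows whose names appear in `names`, capped at 100 results.
    flow_filter = schemas.filters.FlowFilter(
        name=schemas.filters.FlowFilterName(any_=names)  # assumed sub-filter shape
    )
    return await models.flows.read_flows(
        session=session, flow_filter=flow_filter, limit=100
    )
```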
    "},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.update_flow","title":"update_flow async","text":"

    Updates a flow.

    Parameters:

    Name Type Description Default session AsyncSession

    a database session

    required flow_id UUID

    the flow id to update

    required flow FlowUpdate

    a flow update model

    required

    Returns:

    Name Type Description bool bool

    whether or not matching rows were found to update

    Source code in prefect/server/models/flows.py
    @db_injector\nasync def update_flow(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    flow_id: UUID,\n    flow: schemas.actions.FlowUpdate,\n) -> bool:\n    \"\"\"\n    Updates a flow.\n\n    Args:\n        session: a database session\n        flow_id: the flow id to update\n        flow: a flow update model\n\n    Returns:\n        bool: whether or not matching rows were found to update\n    \"\"\"\n    update_stmt = (\n        sa.update(db.Flow)\n        .where(db.Flow.id == flow_id)\n        # exclude_unset=True allows us to only update values provided by\n        # the user, ignoring any defaults on the model\n        .values(**flow.dict(shallow=True, exclude_unset=True))\n    )\n    result = await session.execute(update_stmt)\n    return result.rowcount > 0\n
    "},{"location":"api-ref/server/models/saved_searches/","title":"server.models.saved_searches","text":""},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches","title":"prefect.server.models.saved_searches","text":"

    Functions for interacting with saved search ORM objects. Intended for internal use by the Prefect REST API.

    "},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.create_saved_search","title":"create_saved_search async","text":"

    Upserts a SavedSearch.

    If a SavedSearch with the same name exists, all properties will be updated.

    Parameters:

    Name Type Description Default session Session

    a database session

    required saved_search SavedSearch

    a SavedSearch model

    required

    Returns:

    Type Description

    db.SavedSearch: the newly-created or updated SavedSearch

    Source code in prefect/server/models/saved_searches.py
    @inject_db\nasync def create_saved_search(\n    session: sa.orm.Session,\n    saved_search: schemas.core.SavedSearch,\n    db: PrefectDBInterface,\n):\n    \"\"\"\n    Upserts a SavedSearch.\n\n    If a SavedSearch with the same name exists, all properties will be updated.\n\n    Args:\n        session (sa.orm.Session): a database session\n        saved_search (schemas.core.SavedSearch): a SavedSearch model\n\n    Returns:\n        db.SavedSearch: the newly-created or updated SavedSearch\n\n    \"\"\"\n\n    insert_stmt = (\n        db.insert(db.SavedSearch)\n        .values(**saved_search.dict(shallow=True, exclude_unset=True))\n        .on_conflict_do_update(\n            index_elements=db.saved_search_unique_upsert_columns,\n            set_=saved_search.dict(shallow=True, include={\"filters\"}),\n        )\n    )\n\n    await session.execute(insert_stmt)\n\n    query = (\n        sa.select(db.SavedSearch)\n        .where(\n            db.SavedSearch.name == saved_search.name,\n        )\n        .execution_options(populate_existing=True)\n    )\n    result = await session.execute(query)\n    model = result.scalar()\n\n    return model\n
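A hedged sketch of the upsert behavior; the search name is arbitrary and the filters payload is left empty rather than guessing its exact shape:

```python
from sqlalchemy.ext.asyncio import AsyncSession

from prefect.server import models, schemas


async def save_search(session: AsyncSession):
    # Re-running this with the same name updates the stored filters in place.
    search = schemas.core.SavedSearch(name="example-search", filters=[])
    return await models.saved_searches.create_saved_search(
        session=session, saved_search=search
    )
```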
    "},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.delete_saved_search","title":"delete_saved_search async","text":"

    Delete a SavedSearch by id.

    Parameters:

    Name Type Description Default session Session

    A database session

    required saved_search_id str

    a SavedSearch id

    required

    Returns:

    Name Type Description bool bool

    whether or not the SavedSearch was deleted

    Source code in prefect/server/models/saved_searches.py
    @inject_db\nasync def delete_saved_search(\n    session: sa.orm.Session, saved_search_id: UUID, db: PrefectDBInterface\n) -> bool:\n    \"\"\"\n    Delete a SavedSearch by id.\n\n    Args:\n        session (sa.orm.Session): A database session\n        saved_search_id (str): a SavedSearch id\n\n    Returns:\n        bool: whether or not the SavedSearch was deleted\n    \"\"\"\n\n    result = await session.execute(\n        delete(db.SavedSearch).where(db.SavedSearch.id == saved_search_id)\n    )\n    return result.rowcount > 0\n
    "},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.read_saved_search","title":"read_saved_search async","text":"

    Reads a SavedSearch by id.

    Parameters:

    Name Type Description Default session Session

    A database session

    required saved_search_id str

    a SavedSearch id

    required

    Returns:

    Type Description

    db.SavedSearch: the SavedSearch

    Source code in prefect/server/models/saved_searches.py
    @inject_db\nasync def read_saved_search(\n    session: sa.orm.Session, saved_search_id: UUID, db: PrefectDBInterface\n):\n    \"\"\"\n    Reads a SavedSearch by id.\n\n    Args:\n        session (sa.orm.Session): A database session\n        saved_search_id (str): a SavedSearch id\n\n    Returns:\n        db.SavedSearch: the SavedSearch\n    \"\"\"\n\n    return await session.get(db.SavedSearch, saved_search_id)\n
    "},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.read_saved_search_by_name","title":"read_saved_search_by_name async","text":"

    Reads a SavedSearch by name.

    Parameters:

    Name Type Description Default session Session

    A database session

    required name str

    a SavedSearch name

    required

    Returns:

    Type Description

    db.SavedSearch: the SavedSearch

    Source code in prefect/server/models/saved_searches.py
    @inject_db\nasync def read_saved_search_by_name(\n    session: sa.orm.Session, name: str, db: PrefectDBInterface\n):\n    \"\"\"\n    Reads a SavedSearch by name.\n\n    Args:\n        session (sa.orm.Session): A database session\n        name (str): a SavedSearch name\n\n    Returns:\n        db.SavedSearch: the SavedSearch\n    \"\"\"\n    result = await session.execute(\n        select(db.SavedSearch).where(db.SavedSearch.name == name).limit(1)\n    )\n    return result.scalar()\n
    "},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.read_saved_searches","title":"read_saved_searches async","text":"

    Read SavedSearches.

    Parameters:

    Name Type Description Default session Session

    A database session

    required offset int

    Query offset

    None limit(int)

    Query limit

    required

    Returns:

    Type Description

    List[db.SavedSearch]: SavedSearches

    Source code in prefect/server/models/saved_searches.py
    @inject_db\nasync def read_saved_searches(\n    db: PrefectDBInterface,\n    session: sa.orm.Session,\n    offset: int = None,\n    limit: int = None,\n):\n    \"\"\"\n    Read SavedSearches.\n\n    Args:\n        session (sa.orm.Session): A database session\n        offset (int): Query offset\n        limit(int): Query limit\n\n    Returns:\n        List[db.SavedSearch]: SavedSearches\n    \"\"\"\n\n    query = select(db.SavedSearch).order_by(db.SavedSearch.name)\n\n    if offset is not None:\n        query = query.offset(offset)\n    if limit is not None:\n        query = query.limit(limit)\n\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
    "},{"location":"api-ref/server/models/task_run_states/","title":"server.models.task_run_states","text":""},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states","title":"prefect.server.models.task_run_states","text":"

    Functions for interacting with task run state ORM objects. Intended for internal use by the Prefect REST API.

    "},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states.delete_task_run_state","title":"delete_task_run_state async","text":"

    Delete a task run state by id.

    Parameters:

    Name Type Description Default session Session

    A database session

    required task_run_state_id UUID

    a task run state id

    required

    Returns:

    Name Type Description bool bool

    whether or not the task run state was deleted

    Source code in prefect/server/models/task_run_states.py
    @inject_db\nasync def delete_task_run_state(\n    session: sa.orm.Session, task_run_state_id: UUID, db: PrefectDBInterface\n) -> bool:\n    \"\"\"\n    Delete a task run state by id.\n\n    Args:\n        session: A database session\n        task_run_state_id: a task run state id\n\n    Returns:\n        bool: whether or not the task run state was deleted\n    \"\"\"\n\n    result = await session.execute(\n        delete(db.TaskRunState).where(db.TaskRunState.id == task_run_state_id)\n    )\n    return result.rowcount > 0\n
    "},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states.read_task_run_state","title":"read_task_run_state async","text":"

    Reads a task run state by id.

    Parameters:

    Name Type Description Default session Session

    A database session

    required task_run_state_id UUID

    a task run state id

    required

    Returns:

    Type Description

    db.TaskRunState: the task state

    Source code in prefect/server/models/task_run_states.py
    @inject_db\nasync def read_task_run_state(\n    session: sa.orm.Session, task_run_state_id: UUID, db: PrefectDBInterface\n):\n    \"\"\"\n    Reads a task run state by id.\n\n    Args:\n        session: A database session\n        task_run_state_id: a task run state id\n\n    Returns:\n        db.TaskRunState: the task state\n    \"\"\"\n\n    return await session.get(db.TaskRunState, task_run_state_id)\n
    "},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states.read_task_run_states","title":"read_task_run_states async","text":"

Reads task run states for a task run.

    Parameters:

    Name Type Description Default session Session

    A database session

    required task_run_id UUID

    the task run id

    required

    Returns:

    Type Description

    List[db.TaskRunState]: the task run states

    Source code in prefect/server/models/task_run_states.py
    @inject_db\nasync def read_task_run_states(\n    session: sa.orm.Session, task_run_id: UUID, db: PrefectDBInterface\n):\n    \"\"\"\n    Reads task runs states for a task run.\n\n    Args:\n        session: A database session\n        task_run_id: the task run id\n\n    Returns:\n        List[db.TaskRunState]: the task run states\n    \"\"\"\n\n    query = (\n        select(db.TaskRunState)\n        .filter_by(task_run_id=task_run_id)\n        .order_by(db.TaskRunState.timestamp)\n    )\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
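For illustration, the ordered state history can be reduced to (timestamp, type) pairs; the helper name is an assumption:

```python
from uuid import UUID

from sqlalchemy.ext.asyncio import AsyncSession

from prefect.server import models


async def state_history(session: AsyncSession, task_run_id: UUID):
    # States come back ordered by timestamp (see the query above).
    states = await models.task_run_states.read_task_run_states(
        session=session, task_run_id=task_run_id
    )
    return [(state.timestamp, state.type) for state in states]
```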
    "},{"location":"api-ref/server/models/task_runs/","title":"server.models.task_runs","text":""},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs","title":"prefect.server.models.task_runs","text":"

    Functions for interacting with task run ORM objects. Intended for internal use by the Prefect REST API.

    "},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.count_task_runs","title":"count_task_runs async","text":"

    Count task runs.

    Parameters:

    Name Type Description Default session Session

    a database session

    required flow_filter FlowFilter

    only count task runs whose flows match these filters

    None flow_run_filter FlowRunFilter

    only count task runs whose flow runs match these filters

    None task_run_filter TaskRunFilter

    only count task runs that match these filters

    None deployment_filter DeploymentFilter

    only count task runs whose deployments match these filters

    None Source code in prefect/server/models/task_runs.py
    @inject_db\nasync def count_task_runs(\n    session: sa.orm.Session,\n    db: PrefectDBInterface,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n) -> int:\n    \"\"\"\n    Count task runs.\n\n    Args:\n        session: a database session\n        flow_filter: only count task runs whose flows match these filters\n        flow_run_filter: only count task runs whose flow runs match these filters\n        task_run_filter: only count task runs that match these filters\n        deployment_filter: only count task runs whose deployments match these filters\n    Returns:\n        int: count of task runs\n    \"\"\"\n\n    query = select(sa.func.count(sa.text(\"*\"))).select_from(db.TaskRun)\n\n    query = await _apply_task_run_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        db=db,\n    )\n\n    result = await session.execute(query)\n    return result.scalar()\n
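A minimal sketch of counting the task runs that belong to one flow run, reusing the FlowRunFilterId filter shown elsewhere in this reference; the helper name is an assumption:

```python
from uuid import UUID

from sqlalchemy.ext.asyncio import AsyncSession

from prefect.server import models, schemas


async def count_runs_in_flow_run(session: AsyncSession, flow_run_id: UUID) -> int:
    # Count only task runs whose parent flow run matches the given ID.
    flow_run_filter = schemas.filters.FlowRunFilter(
        id=schemas.filters.FlowRunFilterId(any_=[flow_run_id])
    )
    return await models.task_runs.count_task_runs(
        session=session, flow_run_filter=flow_run_filter
    )
```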
    "},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.count_task_runs_by_state","title":"count_task_runs_by_state async","text":"

    Count task runs by state.

    Parameters:

    Name Type Description Default session AsyncSession

    a database session

    required flow_filter Optional[FlowFilter]

    only count task runs whose flows match these filters

    None flow_run_filter Optional[FlowRunFilter]

    only count task runs whose flow runs match these filters

    None task_run_filter Optional[TaskRunFilter]

    only count task runs that match these filters

    None deployment_filter Optional[DeploymentFilter]

    only count task runs whose deployments match these filters

    None Source code in prefect/server/models/task_runs.py
    async def count_task_runs_by_state(\n    session: AsyncSession,\n    db: PrefectDBInterface,\n    flow_filter: Optional[schemas.filters.FlowFilter] = None,\n    flow_run_filter: Optional[schemas.filters.FlowRunFilter] = None,\n    task_run_filter: Optional[schemas.filters.TaskRunFilter] = None,\n    deployment_filter: Optional[schemas.filters.DeploymentFilter] = None,\n) -> schemas.states.CountByState:\n    \"\"\"\n    Count task runs by state.\n\n    Args:\n        session: a database session\n        flow_filter: only count task runs whose flows match these filters\n        flow_run_filter: only count task runs whose flow runs match these filters\n        task_run_filter: only count task runs that match these filters\n        deployment_filter: only count task runs whose deployments match these filters\n    Returns:\n        schemas.states.CountByState: count of task runs by state\n    \"\"\"\n\n    base_query = (\n        select(\n            db.TaskRun.state_type,\n            sa.func.count(sa.text(\"*\")).label(\"count\"),\n        )\n        .select_from(db.TaskRun)\n        .group_by(db.TaskRun.state_type)\n    )\n\n    query = await _apply_task_run_filters(\n        base_query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n    )\n\n    result = await session.execute(query)\n\n    counts = schemas.states.CountByState()\n\n    for row in result:\n        setattr(counts, row.state_type, row.count)\n\n    return counts\n
    "},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.create_task_run","title":"create_task_run async","text":"

    Creates a new task run.

    If a task run with the same flow_run_id, task_key, and dynamic_key already exists, the existing task run will be returned. If the provided task run has a state attached, it will also be created.

    Parameters:

    Name Type Description Default session Session

    a database session

    required task_run TaskRun

    a task run model

    required

    Returns:

    Type Description

    db.TaskRun: the newly-created or existing task run

    Source code in prefect/server/models/task_runs.py
    @inject_db\nasync def create_task_run(\n    session: sa.orm.Session,\n    task_run: schemas.core.TaskRun,\n    db: PrefectDBInterface,\n    orchestration_parameters: Optional[Dict[str, Any]] = None,\n):\n    \"\"\"\n    Creates a new task run.\n\n    If a task run with the same flow_run_id, task_key, and dynamic_key already exists,\n    the existing task run will be returned. If the provided task run has a state\n    attached, it will also be created.\n\n    Args:\n        session: a database session\n        task_run: a task run model\n\n    Returns:\n        db.TaskRun: the newly-created or existing task run\n    \"\"\"\n\n    now = pendulum.now(\"UTC\")\n\n    # if a dynamic key exists, we need to guard against conflicts\n    if task_run.flow_run_id:\n        insert_stmt = (\n            db.insert(db.TaskRun)\n            .values(\n                created=now,\n                **task_run.dict(\n                    shallow=True, exclude={\"state\", \"created\"}, exclude_unset=True\n                ),\n            )\n            .on_conflict_do_nothing(\n                index_elements=db.task_run_unique_upsert_columns,\n            )\n        )\n        await session.execute(insert_stmt)\n\n        query = (\n            sa.select(db.TaskRun)\n            .where(\n                sa.and_(\n                    db.TaskRun.flow_run_id == task_run.flow_run_id,\n                    db.TaskRun.task_key == task_run.task_key,\n                    db.TaskRun.dynamic_key == task_run.dynamic_key,\n                )\n            )\n            .limit(1)\n            .execution_options(populate_existing=True)\n        )\n        result = await session.execute(query)\n        model = result.scalar()\n    else:\n        # Upsert on (task_key, dynamic_key) application logic.\n        query = (\n            sa.select(db.TaskRun)\n            .where(\n                sa.and_(\n                    db.TaskRun.flow_run_id.is_(None),\n                    db.TaskRun.task_key == task_run.task_key,\n                    db.TaskRun.dynamic_key == task_run.dynamic_key,\n                )\n            )\n            .limit(1)\n            .execution_options(populate_existing=True)\n        )\n\n        result = await session.execute(query)\n        model = result.scalar()\n\n        if model is None:\n            model = db.TaskRun(\n                created=now,\n                **task_run.dict(\n                    shallow=True, exclude={\"state\", \"created\"}, exclude_unset=True\n                ),\n                state=None,\n            )\n            session.add(model)\n            await session.flush()\n\n    if model.created == now and task_run.state:\n        await models.task_runs.set_task_run_state(\n            session=session,\n            task_run_id=model.id,\n            state=task_run.state,\n            force=True,\n            orchestration_parameters=orchestration_parameters,\n        )\n    return model\n
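An illustrative sketch of creating a task run inside an existing flow run; the task_key and dynamic_key values are placeholders:

```python
from uuid import UUID

from sqlalchemy.ext.asyncio import AsyncSession

from prefect.server import models, schemas


async def add_task_run(session: AsyncSession, flow_run_id: UUID):
    # The (flow_run_id, task_key, dynamic_key) triple makes the insert idempotent.
    task_run = schemas.core.TaskRun(
        flow_run_id=flow_run_id,
        task_key="example-task",  # placeholder
        dynamic_key="0",          # placeholder
    )
    return await models.task_runs.create_task_run(session=session, task_run=task_run)
```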
    "},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.delete_task_run","title":"delete_task_run async","text":"

    Delete a task run by id.

    Parameters:

    Name Type Description Default session Session

    a database session

    required task_run_id UUID

    the task run id to delete

    required

    Returns:

    Name Type Description bool bool

    whether or not the task run was deleted

    Source code in prefect/server/models/task_runs.py
    @inject_db\nasync def delete_task_run(\n    session: sa.orm.Session, task_run_id: UUID, db: PrefectDBInterface\n) -> bool:\n    \"\"\"\n    Delete a task run by id.\n\n    Args:\n        session: a database session\n        task_run_id: the task run id to delete\n\n    Returns:\n        bool: whether or not the task run was deleted\n    \"\"\"\n\n    result = await session.execute(\n        delete(db.TaskRun).where(db.TaskRun.id == task_run_id)\n    )\n    return result.rowcount > 0\n
    "},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.read_task_run","title":"read_task_run async","text":"

    Read a task run by id.

    Parameters:

    Name Type Description Default session Session

    a database session

    required task_run_id UUID

    the task run id

    required

    Returns:

    Type Description

    db.TaskRun: the task run

    Source code in prefect/server/models/task_runs.py
    @inject_db\nasync def read_task_run(\n    session: sa.orm.Session, task_run_id: UUID, db: PrefectDBInterface\n):\n    \"\"\"\n    Read a task run by id.\n\n    Args:\n        session: a database session\n        task_run_id: the task run id\n\n    Returns:\n        db.TaskRun: the task run\n    \"\"\"\n\n    model = await session.get(db.TaskRun, task_run_id)\n    return model\n
    "},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.read_task_runs","title":"read_task_runs async","text":"

    Read task runs.

    Parameters:

    Name Type Description Default session Session

    a database session

    required flow_filter FlowFilter

    only select task runs whose flows match these filters

    None flow_run_filter FlowRunFilter

    only select task runs whose flow runs match these filters

    None task_run_filter TaskRunFilter

    only select task runs that match these filters

    None deployment_filter DeploymentFilter

    only select task runs whose deployments match these filters

    None offset int

    Query offset

    None limit int

    Query limit

    None sort TaskRunSort

    Query sort

    ID_DESC

    Returns:

    Type Description

    List[db.TaskRun]: the task runs

    Source code in prefect/server/models/task_runs.py
    @inject_db\nasync def read_task_runs(\n    session: sa.orm.Session,\n    db: PrefectDBInterface,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    offset: int = None,\n    limit: int = None,\n    sort: schemas.sorting.TaskRunSort = schemas.sorting.TaskRunSort.ID_DESC,\n):\n    \"\"\"\n    Read task runs.\n\n    Args:\n        session: a database session\n        flow_filter: only select task runs whose flows match these filters\n        flow_run_filter: only select task runs whose flow runs match these filters\n        task_run_filter: only select task runs that match these filters\n        deployment_filter: only select task runs whose deployments match these filters\n        offset: Query offset\n        limit: Query limit\n        sort: Query sort\n\n    Returns:\n        List[db.TaskRun]: the task runs\n    \"\"\"\n\n    query = select(db.TaskRun).order_by(sort.as_sql_sort(db))\n\n    query = await _apply_task_run_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        db=db,\n    )\n\n    if offset is not None:\n        query = query.offset(offset)\n\n    if limit is not None:\n        query = query.limit(limit)\n\n    logger.debug(f\"In read_task_runs, query generated is:\\n{query}\")\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
    "},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.set_task_run_state","title":"set_task_run_state async","text":"

    Creates a new orchestrated task run state.

Setting a new state on a run is one of the principal actions governed by Prefect's orchestration logic. Setting a new run state does not guarantee creation; instead, it triggers orchestration rules that govern the proposed state input. If the state is considered valid, it is written to the database. Otherwise, a different state, or no state, may be created. A force flag is supplied to bypass a subset of orchestration logic.

    Parameters:

    Name Type Description Default session Session

    a database session

    required task_run_id UUID

    the task run id

    required state State

    a task run state model

    required force bool

    if False, orchestration rules will be applied that may alter or prevent the state transition. If True, orchestration rules are not applied.

    False

    Returns:

    Type Description OrchestrationResult

    OrchestrationResult object

    Source code in prefect/server/models/task_runs.py
    async def set_task_run_state(\n    session: sa.orm.Session,\n    task_run_id: UUID,\n    state: schemas.states.State,\n    force: bool = False,\n    task_policy: BaseOrchestrationPolicy = None,\n    orchestration_parameters: Optional[Dict[str, Any]] = None,\n) -> OrchestrationResult:\n    \"\"\"\n    Creates a new orchestrated task run state.\n\n    Setting a new state on a run is the one of the principal actions that is governed by\n    Prefect's orchestration logic. Setting a new run state will not guarantee creation,\n    but instead trigger orchestration rules to govern the proposed `state` input. If\n    the state is considered valid, it will be written to the database. Otherwise, a\n    it's possible a different state, or no state, will be created. A `force` flag is\n    supplied to bypass a subset of orchestration logic.\n\n    Args:\n        session: a database session\n        task_run_id: the task run id\n        state: a task run state model\n        force: if False, orchestration rules will be applied that may alter or prevent\n            the state transition. If True, orchestration rules are not applied.\n\n    Returns:\n        OrchestrationResult object\n    \"\"\"\n\n    # load the task run\n    run = await models.task_runs.read_task_run(session=session, task_run_id=task_run_id)\n\n    if not run:\n        raise ObjectNotFoundError(f\"Task run with id {task_run_id} not found\")\n\n    initial_state = run.state.as_state() if run.state else None\n    initial_state_type = initial_state.type if initial_state else None\n    proposed_state_type = state.type if state else None\n    intended_transition = (initial_state_type, proposed_state_type)\n\n    if run.flow_run_id is None:\n        task_policy = AutonomousTaskPolicy  # CoreTaskPolicy + prevent `Running` -> `Running` transition\n    elif force or task_policy is None:\n        task_policy = MinimalTaskPolicy\n\n    orchestration_rules = task_policy.compile_transition_rules(*intended_transition)\n    global_rules = GlobalTaskPolicy.compile_transition_rules(*intended_transition)\n\n    context = TaskOrchestrationContext(\n        session=session,\n        run=run,\n        initial_state=initial_state,\n        proposed_state=state,\n    )\n\n    if orchestration_parameters is not None:\n        context.parameters = orchestration_parameters\n\n    # apply orchestration rules and create the new task run state\n    async with contextlib.AsyncExitStack() as stack:\n        for rule in orchestration_rules:\n            context = await stack.enter_async_context(\n                rule(context, *intended_transition)\n            )\n\n        for rule in global_rules:\n            context = await stack.enter_async_context(\n                rule(context, *intended_transition)\n            )\n\n        await context.validate_proposed_state()\n\n    if context.orchestration_error is not None:\n        raise context.orchestration_error\n\n    result = OrchestrationResult(\n        state=context.validated_state,\n        status=context.response_status,\n        details=context.response_details,\n    )\n\n    return result\n
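A hedged sketch of proposing a terminal state for a task run; the Completed state constructor and helper name are assumptions for illustration:

```python
from uuid import UUID

from sqlalchemy.ext.asyncio import AsyncSession

from prefect.server import models, schemas


async def mark_completed(session: AsyncSession, task_run_id: UUID):
    # Orchestration rules may still alter or reject the proposed transition.
    result = await models.task_runs.set_task_run_state(
        session=session,
        task_run_id=task_run_id,
        state=schemas.states.Completed(),  # assumed state constructor
    )
    return result.state  # the validated state actually written, if any
```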
    "},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.update_task_run","title":"update_task_run async","text":"

    Updates a task run.

    Parameters:

    Name Type Description Default session AsyncSession

    a database session

    required task_run_id UUID

    the task run id to update

    required task_run TaskRunUpdate

    a task run model

    required

    Returns:

    Name Type Description bool bool

    whether or not matching rows were found to update

    Source code in prefect/server/models/task_runs.py
    @inject_db\nasync def update_task_run(\n    session: AsyncSession,\n    task_run_id: UUID,\n    task_run: schemas.actions.TaskRunUpdate,\n    db: PrefectDBInterface,\n) -> bool:\n    \"\"\"\n    Updates a task run.\n\n    Args:\n        session: a database session\n        task_run_id: the task run id to update\n        task_run: a task run model\n\n    Returns:\n        bool: whether or not matching rows were found to update\n    \"\"\"\n    update_stmt = (\n        sa.update(db.TaskRun)\n        .where(db.TaskRun.id == task_run_id)\n        # exclude_unset=True allows us to only update values provided by\n        # the user, ignoring any defaults on the model\n        .values(**task_run.dict(shallow=True, exclude_unset=True))\n    )\n    result = await session.execute(update_stmt)\n    return result.rowcount > 0\n
    "},{"location":"api-ref/server/orchestration/core_policy/","title":"server.orchestration.core_policy","text":""},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy","title":"prefect.server.orchestration.core_policy","text":"

    Orchestration logic that fires on state transitions.

    CoreFlowPolicy and CoreTaskPolicy contain all default orchestration rules that Prefect enforces on a state transition.

    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CoreFlowPolicy","title":"CoreFlowPolicy","text":"

    Bases: BaseOrchestrationPolicy

    Orchestration rules that run against flow-run-state transitions in priority order.

    Source code in prefect/server/orchestration/core_policy.py
    class CoreFlowPolicy(BaseOrchestrationPolicy):\n    \"\"\"\n    Orchestration rules that run against flow-run-state transitions in priority order.\n    \"\"\"\n\n    def priority():\n        return [\n            PreventDuplicateTransitions,\n            HandleFlowTerminalStateTransitions,\n            EnforceCancellingToCancelledTransition,\n            BypassCancellingFlowRunsWithNoInfra,\n            PreventPendingTransitions,\n            EnsureOnlyScheduledFlowsMarkedLate,\n            HandlePausingFlows,\n            HandleResumingPausedFlows,\n            CopyScheduledTime,\n            WaitForScheduledTime,\n            RetryFailedFlows,\n        ] + ([InstrumentFlowRunStateTransitions] if PREFECT_EXPERIMENTAL_EVENTS else [])\n
    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CoreTaskPolicy","title":"CoreTaskPolicy","text":"

    Bases: BaseOrchestrationPolicy

    Orchestration rules that run against task-run-state transitions in priority order.

    Source code in prefect/server/orchestration/core_policy.py
    class CoreTaskPolicy(BaseOrchestrationPolicy):\n    \"\"\"\n    Orchestration rules that run against task-run-state transitions in priority order.\n    \"\"\"\n\n    def priority():\n        return [\n            CacheRetrieval,\n            HandleTaskTerminalStateTransitions,\n            PreventRunningTasksFromStoppedFlows,\n            SecureTaskConcurrencySlots,  # retrieve cached states even if slots are full\n            CopyScheduledTime,\n            WaitForScheduledTime,\n            RetryFailedTasks,\n            RenameReruns,\n            UpdateFlowRunTrackerOnTasks,\n            CacheInsertion,\n            ReleaseTaskConcurrencySlots,\n        ]\n
    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.AutonomousTaskPolicy","title":"AutonomousTaskPolicy","text":"

    Bases: BaseOrchestrationPolicy

    Orchestration rules that run against task-run-state transitions in priority order.

    Source code in prefect/server/orchestration/core_policy.py
    class AutonomousTaskPolicy(BaseOrchestrationPolicy):\n    \"\"\"\n    Orchestration rules that run against task-run-state transitions in priority order.\n    \"\"\"\n\n    def priority():\n        return [\n            PreventPendingTransitions,\n            CacheRetrieval,\n            HandleTaskTerminalStateTransitions,\n            SecureTaskConcurrencySlots,  # retrieve cached states even if slots are full\n            CopyScheduledTime,\n            WaitForScheduledTime,\n            RetryFailedTasks,\n            RenameReruns,\n            UpdateFlowRunTrackerOnTasks,\n            CacheInsertion,\n            ReleaseTaskConcurrencySlots,\n            EnqueueScheduledTasks,\n        ]\n
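As an illustration of the pattern these policies share, a custom policy only needs to return an ordered list of rules from priority(); the sketch below reuses rules referenced in this module and assumes BaseOrchestrationPolicy is importable from prefect.server.orchestration.policies:

```python
from prefect.server.orchestration.policies import BaseOrchestrationPolicy  # assumed path
from prefect.server.orchestration.core_policy import (
    CopyScheduledTime,
    HandleTaskTerminalStateTransitions,
    WaitForScheduledTime,
)


class MinimalSchedulingTaskPolicy(BaseOrchestrationPolicy):
    """Illustrative policy: only terminal-state handling and scheduling rules."""

    def priority():
        # Rules are applied to each proposed transition in this order.
        return [
            HandleTaskTerminalStateTransitions,
            CopyScheduledTime,
            WaitForScheduledTime,
        ]
```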
    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.SecureTaskConcurrencySlots","title":"SecureTaskConcurrencySlots","text":"

    Bases: BaseOrchestrationRule

    Checks relevant concurrency slots are available before entering a Running state.

    This rule checks if concurrency limits have been set on the tags associated with a TaskRun. If so, a concurrency slot will be secured against each concurrency limit before being allowed to transition into a running state. If a concurrency limit has been reached, the client will be instructed to delay the transition for the duration specified by the \"PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS\" setting before trying again. If the concurrency limit set on a tag is 0, the transition will be aborted to prevent deadlocks.

    Source code in prefect/server/orchestration/core_policy.py
    class SecureTaskConcurrencySlots(BaseOrchestrationRule):\n    \"\"\"\n    Checks relevant concurrency slots are available before entering a Running state.\n\n    This rule checks if concurrency limits have been set on the tags associated with a\n    TaskRun. If so, a concurrency slot will be secured against each concurrency limit\n    before being allowed to transition into a running state. If a concurrency limit has\n    been reached, the client will be instructed to delay the transition for the duration\n    specified by the \"PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS\" setting\n    before trying again. If the concurrency limit set on a tag is 0, the transition will\n    be aborted to prevent deadlocks.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.RUNNING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        self._applied_limits = []\n        filtered_limits = (\n            await concurrency_limits.filter_concurrency_limits_for_orchestration(\n                context.session, tags=context.run.tags\n            )\n        )\n        run_limits = {limit.tag: limit for limit in filtered_limits}\n        for tag, cl in run_limits.items():\n            limit = cl.concurrency_limit\n            if limit == 0:\n                # limits of 0 will deadlock, and the transition needs to abort\n                for stale_tag in self._applied_limits:\n                    stale_limit = run_limits.get(stale_tag, None)\n                    active_slots = set(stale_limit.active_slots)\n                    active_slots.discard(str(context.run.id))\n                    stale_limit.active_slots = list(active_slots)\n\n                await self.abort_transition(\n                    reason=(\n                        f'The concurrency limit on tag \"{tag}\" is 0 and will deadlock'\n                        \" if the task tries to run again.\"\n                    ),\n                )\n            elif len(cl.active_slots) >= limit:\n                # if the limit has already been reached, delay the transition\n                for stale_tag in self._applied_limits:\n                    stale_limit = run_limits.get(stale_tag, None)\n                    active_slots = set(stale_limit.active_slots)\n                    active_slots.discard(str(context.run.id))\n                    stale_limit.active_slots = list(active_slots)\n\n                await self.delay_transition(\n                    PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS.value(),\n                    f\"Concurrency limit for the {tag} tag has been reached\",\n                )\n            else:\n                # log the TaskRun ID to active_slots\n                self._applied_limits.append(tag)\n                active_slots = set(cl.active_slots)\n                active_slots.add(str(context.run.id))\n                cl.active_slots = list(active_slots)\n\n    async def cleanup(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        for tag in self._applied_limits:\n            cl = await concurrency_limits.read_concurrency_limit_by_tag(\n                context.session, tag\n            )\n            active_slots = set(cl.active_slots)\n            active_slots.discard(str(context.run.id))\n            cl.active_slots = list(active_slots)\n
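
    SecureTaskConcurrencySlots (and ReleaseTaskConcurrencySlots below) operate on tag-based concurrency limits. A hedged sketch of how a limit is typically paired with task tags from client code follows; the "database" tag and limit of 2 are arbitrary example values, and create_concurrency_limit is assumed to be available on the orchestration client.

    import asyncio

    from prefect import flow, get_client, task


    @task(tags=["database"])
    def write_row(i: int) -> int:
        return i


    @flow
    def load_rows():
        # at most two "database"-tagged task runs may hold a Running slot at once
        return [write_row.submit(i) for i in range(10)]


    async def set_limit():
        async with get_client() as client:
            # registers one slot pool for the "database" tag
            await client.create_concurrency_limit(tag="database", concurrency_limit=2)


    if __name__ == "__main__":
        asyncio.run(set_limit())
        load_rows()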
    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.ReleaseTaskConcurrencySlots","title":"ReleaseTaskConcurrencySlots","text":"

    Bases: BaseUniversalTransform

    Releases any concurrency slots held by a run upon exiting a Running or Cancelling state.

    Source code in prefect/server/orchestration/core_policy.py
    class ReleaseTaskConcurrencySlots(BaseUniversalTransform):\n    \"\"\"\n    Releases any concurrency slots held by a run upon exiting a Running or\n    Cancelling state.\n    \"\"\"\n\n    async def after_transition(\n        self,\n        context: OrchestrationContext,\n    ):\n        if self.nullified_transition():\n            return\n\n        if context.validated_state and context.validated_state.type not in [\n            states.StateType.RUNNING,\n            states.StateType.CANCELLING,\n        ]:\n            filtered_limits = (\n                await concurrency_limits.filter_concurrency_limits_for_orchestration(\n                    context.session, tags=context.run.tags\n                )\n            )\n            run_limits = {limit.tag: limit for limit in filtered_limits}\n            for tag, cl in run_limits.items():\n                active_slots = set(cl.active_slots)\n                active_slots.discard(str(context.run.id))\n                cl.active_slots = list(active_slots)\n
    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.AddUnknownResult","title":"AddUnknownResult","text":"

    Bases: BaseOrchestrationRule

    Assign an \"unknown\" result to runs that are forced to complete from a failed or crashed state, if the previous state used a persisted result.

    When we retry a flow run, we retry any task runs that were in a failed or crashed state, but we also retry completed task runs that didn't use a persisted result. This means that without a sentinel value for unknown results, a task run forced into Completed state will always get rerun if the flow run retries because the task run lacks a persisted result. The \"unknown\" sentinel ensures that when we see a completed task run with an unknown result, we know that it was forced to complete and we shouldn't rerun it.

    Flow runs forced into a Completed state have a similar problem: without a sentinel value, attempting to refer to the flow run's result will raise an exception because the flow run has no result. The sentinel ensures that we can distinguish between a flow run that has no result and a flow run that has an unknown result.

    Source code in prefect/server/orchestration/core_policy.py
    class AddUnknownResult(BaseOrchestrationRule):\n    \"\"\"\n    Assign an \"unknown\" result to runs that are forced to complete from a\n    failed or crashed state, if the previous state used a persisted result.\n\n    When we retry a flow run, we retry any task runs that were in a failed or\n    crashed state, but we also retry completed task runs that didn't use a\n    persisted result. This means that without a sentinel value for unknown\n    results, a task run forced into Completed state will always get rerun if the\n    flow run retries because the task run lacks a persisted result. The\n    \"unknown\" sentinel ensures that when we see a completed task run with an\n    unknown result, we know that it was forced to complete and we shouldn't\n    rerun it.\n\n    Flow runs forced into a Completed state have a similar problem: without a\n    sentinel value, attempting to refer to the flow run's result will raise an\n    exception because the flow run has no result. The sentinel ensures that we\n    can distinguish between a flow run that has no result and a flow run that\n    has an unknown result.\n    \"\"\"\n\n    FROM_STATES = [StateType.FAILED, StateType.CRASHED]\n    TO_STATES = [StateType.COMPLETED]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        if (\n            initial_state\n            and initial_state.data\n            and initial_state.data.get(\"type\") == \"reference\"\n        ):\n            unknown_result = await UnknownResult.create()\n            self.context.proposed_state.data = unknown_result.dict()\n
    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CacheInsertion","title":"CacheInsertion","text":"

    Bases: BaseOrchestrationRule

    Caches completed states with cache keys after they are validated.

    Source code in prefect/server/orchestration/core_policy.py
    class CacheInsertion(BaseOrchestrationRule):\n    \"\"\"\n    Caches completed states with cache keys after they are validated.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.COMPLETED]\n\n    @inject_db\n    async def after_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n        db: PrefectDBInterface,\n    ) -> None:\n        if not validated_state or not context.session:\n            return\n\n        cache_key = validated_state.state_details.cache_key\n        if cache_key:\n            new_cache_item = db.TaskRunStateCache(\n                cache_key=cache_key,\n                cache_expiration=validated_state.state_details.cache_expiration,\n                task_run_state_id=validated_state.id,\n            )\n            context.session.add(new_cache_item)\n
    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CacheRetrieval","title":"CacheRetrieval","text":"

    Bases: BaseOrchestrationRule

    Rejects running states if a completed state has been cached.

    This rule rejects transitions into a running state with a cache key if the key has already been associated with a completed state in the cache table. The client will be instructed to transition into the cached completed state instead.

    Source code in prefect/server/orchestration/core_policy.py
    class CacheRetrieval(BaseOrchestrationRule):\n    \"\"\"\n    Rejects running states if a completed state has been cached.\n\n    This rule rejects transitions into a running state with a cache key if the key\n    has already been associated with a completed state in the cache table. The client\n    will be instructed to transition into the cached completed state instead.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.RUNNING]\n\n    @inject_db\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n        db: PrefectDBInterface,\n    ) -> None:\n        cache_key = proposed_state.state_details.cache_key\n        if cache_key and not proposed_state.state_details.refresh_cache:\n            # Check for cached states matching the cache key\n            cached_state_id = (\n                select(db.TaskRunStateCache.task_run_state_id)\n                .where(\n                    sa.and_(\n                        db.TaskRunStateCache.cache_key == cache_key,\n                        sa.or_(\n                            db.TaskRunStateCache.cache_expiration.is_(None),\n                            db.TaskRunStateCache.cache_expiration > pendulum.now(\"utc\"),\n                        ),\n                    ),\n                )\n                .order_by(db.TaskRunStateCache.created.desc())\n                .limit(1)\n            ).scalar_subquery()\n            query = select(db.TaskRunState).where(db.TaskRunState.id == cached_state_id)\n            cached_state = (await context.session.execute(query)).scalar()\n            if cached_state:\n                new_state = cached_state.as_state().copy(reset_fields=True)\n                new_state.name = \"Cached\"\n                await self.reject_transition(\n                    state=new_state, reason=\"Retrieved state from cache\"\n                )\n
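
    CacheInsertion and CacheRetrieval are the server side of task-level caching. A minimal sketch of producing cache keys from the client, assuming the standard cache_key_fn and cache_expiration task options:

    import datetime

    from prefect import flow, task
    from prefect.tasks import task_input_hash


    @task(
        cache_key_fn=task_input_hash,                  # completed states are cached under this key
        cache_expiration=datetime.timedelta(hours=1),  # entries expire after an hour
    )
    def expensive(x: int) -> int:
        return x * x


    @flow
    def pipeline():
        first = expensive(2)   # runs and inserts a Completed state into the cache
        second = expensive(2)  # the transition to Running is rejected in favor of a "Cached" state
        return first, second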
    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.RetryFailedFlows","title":"RetryFailedFlows","text":"

    Bases: BaseOrchestrationRule

    Rejects failed states and schedules a retry if the retry limit has not been reached.

    This rule rejects transitions into a failed state if retries has been set and the run count has not reached the specified limit. The client will be instructed to transition into a scheduled state to retry flow execution.

    Source code in prefect/server/orchestration/core_policy.py
    class RetryFailedFlows(BaseOrchestrationRule):\n    \"\"\"\n    Rejects failed states and schedules a retry if the retry limit has not been reached.\n\n    This rule rejects transitions into a failed state if `retries` has been\n    set and the run count has not reached the specified limit. The client will be\n    instructed to transition into a scheduled state to retry flow execution.\n    \"\"\"\n\n    FROM_STATES = [StateType.RUNNING]\n    TO_STATES = [StateType.FAILED]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: FlowOrchestrationContext,\n    ) -> None:\n        run_settings = context.run_settings\n        run_count = context.run.run_count\n\n        if run_settings.retries is None or run_count > run_settings.retries:\n            return  # Retry count exceeded, allow transition to failed\n\n        scheduled_start_time = pendulum.now(\"UTC\").add(\n            seconds=run_settings.retry_delay or 0\n        )\n\n        # support old-style flow run retries for older clients\n        # older flow retries require us to loop over failed tasks to update their state\n        # this is not required after API version 0.8.3\n        api_version = context.parameters.get(\"api-version\", None)\n        if api_version and api_version < Version(\"0.8.3\"):\n            failed_task_runs = await models.task_runs.read_task_runs(\n                context.session,\n                flow_run_filter=filters.FlowRunFilter(id={\"any_\": [context.run.id]}),\n                task_run_filter=filters.TaskRunFilter(\n                    state={\"type\": {\"any_\": [\"FAILED\"]}}\n                ),\n            )\n            for run in failed_task_runs:\n                await models.task_runs.set_task_run_state(\n                    context.session,\n                    run.id,\n                    state=states.AwaitingRetry(scheduled_time=scheduled_start_time),\n                    force=True,\n                )\n                # Reset the run count so that the task run retries still work correctly\n                run.run_count = 0\n\n        # Reset pause metadata on retry\n        # Pauses as a concept only exist after API version 0.8.4\n        api_version = context.parameters.get(\"api-version\", None)\n        if api_version is None or api_version >= Version(\"0.8.4\"):\n            updated_policy = context.run.empirical_policy.dict()\n            updated_policy[\"resuming\"] = False\n            updated_policy[\"pause_keys\"] = set()\n            context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n\n        # Generate a new state for the flow\n        retry_state = states.AwaitingRetry(\n            scheduled_time=scheduled_start_time,\n            message=proposed_state.message,\n            data=proposed_state.data,\n        )\n        await self.reject_transition(state=retry_state, reason=\"Retrying\")\n
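
    On the client, flow run retries are driven by the retries and retry_delay_seconds flow options; a minimal sketch:

    from prefect import flow


    @flow(retries=2, retry_delay_seconds=10)
    def fragile_flow():
        # a Failed proposal is rejected into AwaitingRetry until retries are exhausted
        raise RuntimeError("transient failure")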
    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.RetryFailedTasks","title":"RetryFailedTasks","text":"

    Bases: BaseOrchestrationRule

    Rejects failed states and schedules a retry if the retry limit has not been reached.

    This rule rejects transitions into a failed state if retries has been set, the run count has not reached the specified limit, and the client asserts it is a retriable task run. The client will be instructed to transition into a scheduled state to retry task execution.

    Source code in prefect/server/orchestration/core_policy.py
    class RetryFailedTasks(BaseOrchestrationRule):\n    \"\"\"\n    Rejects failed states and schedules a retry if the retry limit has not been reached.\n\n    This rule rejects transitions into a failed state if `retries` has been\n    set, the run count has not reached the specified limit, and the client\n    asserts it is a retriable task run. The client will be instructed to\n    transition into a scheduled state to retry task execution.\n    \"\"\"\n\n    FROM_STATES = [StateType.RUNNING]\n    TO_STATES = [StateType.FAILED]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        run_settings = context.run_settings\n        run_count = context.run.run_count\n        delay = run_settings.retry_delay\n\n        if isinstance(delay, list):\n            base_delay = delay[min(run_count - 1, len(delay) - 1)]\n        else:\n            base_delay = run_settings.retry_delay or 0\n\n        # guard against negative relative jitter inputs\n        if run_settings.retry_jitter_factor:\n            delay = clamped_poisson_interval(\n                base_delay, clamping_factor=run_settings.retry_jitter_factor\n            )\n        else:\n            delay = base_delay\n\n        # set by user to conditionally retry a task using @task(retry_condition_fn=...)\n        if getattr(proposed_state.state_details, \"retriable\", True) is False:\n            return\n\n        if run_settings.retries is not None and run_count <= run_settings.retries:\n            retry_state = states.AwaitingRetry(\n                scheduled_time=pendulum.now(\"UTC\").add(seconds=delay),\n                message=proposed_state.message,\n                data=proposed_state.data,\n            )\n            await self.reject_transition(state=retry_state, reason=\"Retrying\")\n
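
    The matching task options cover a retry count, per-attempt delays, jitter, and a retry predicate. A hedged sketch; is_transient below is a hypothetical predicate, and the (task, task_run, state) signature is an assumption based on the retry_condition_fn hook referenced above.

    from prefect import task


    def is_transient(task, task_run, state) -> bool:
        # hypothetical predicate: only retry when the failure message mentions a timeout
        return "timeout" in (state.message or "")


    @task(
        retries=3,
        retry_delay_seconds=[1, 10, 100],  # per-attempt delays, indexed by run count
        retry_jitter_factor=0.5,           # adds bounded random jitter to each delay
        retry_condition_fn=is_transient,
    )
    def call_service() -> None:
        ...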
    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.EnqueueScheduledTasks","title":"EnqueueScheduledTasks","text":"

    Bases: BaseOrchestrationRule

    Enqueues autonomous task runs when they are scheduled

    Source code in prefect/server/orchestration/core_policy.py
    class EnqueueScheduledTasks(BaseOrchestrationRule):\n    \"\"\"\n    Enqueues autonomous task runs when they are scheduled\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.SCHEDULED]\n\n    async def after_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        if not PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value():\n            # Only if task scheduling is enabled\n            return\n\n        if not validated_state:\n            # Only if the transition was valid\n            return\n\n        if context.run.flow_run_id:\n            # Only for autonomous tasks\n            return\n\n        task_run: TaskRun = TaskRun.from_orm(context.run)\n        queue = TaskQueue.for_key(task_run.task_key)\n\n        if validated_state.name == \"AwaitingRetry\":\n            await queue.retry(task_run)\n        else:\n            await queue.enqueue(task_run)\n
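
    EnqueueScheduledTasks only fires when experimental task scheduling is enabled. Assuming the setting is exposed through prefect.settings (as the rule's reference to PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING suggests), one way to enable it temporarily in-process is:

    from prefect.settings import (
        PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING,
        temporary_settings,
    )

    with temporary_settings({PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING: True}):
        # autonomous task runs scheduled here are placed on a TaskQueue by this rule
        ...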
    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.RenameReruns","title":"RenameReruns","text":"

    Bases: BaseOrchestrationRule

    Renames states when a run has executed more than once.

    In the special case where the initial state is an \"AwaitingRetry\" scheduled state, the proposed state will be renamed to \"Retrying\" instead.

    Source code in prefect/server/orchestration/core_policy.py
    class RenameReruns(BaseOrchestrationRule):\n    \"\"\"\n    Name the states if they have run more than once.\n\n    In the special case where the initial state is an \"AwaitingRetry\" scheduled state,\n    the proposed state will be renamed to \"Retrying\" instead.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.RUNNING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        run_count = context.run.run_count\n        if run_count > 0:\n            if initial_state.name == \"AwaitingRetry\":\n                await self.rename_state(\"Retrying\")\n            else:\n                await self.rename_state(\"Rerunning\")\n
    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CopyScheduledTime","title":"CopyScheduledTime","text":"

    Bases: BaseOrchestrationRule

    Ensures scheduled time is copied from scheduled states to pending states.

    If a new scheduled time has been proposed on the pending state, the scheduled time on the scheduled state will be ignored.

    Source code in prefect/server/orchestration/core_policy.py
    class CopyScheduledTime(BaseOrchestrationRule):\n    \"\"\"\n    Ensures scheduled time is copied from scheduled states to pending states.\n\n    If a new scheduled time has been proposed on the pending state, the scheduled time\n    on the scheduled state will be ignored.\n    \"\"\"\n\n    FROM_STATES = [StateType.SCHEDULED]\n    TO_STATES = [StateType.PENDING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        if not proposed_state.state_details.scheduled_time:\n            proposed_state.state_details.scheduled_time = (\n                initial_state.state_details.scheduled_time\n            )\n
    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.WaitForScheduledTime","title":"WaitForScheduledTime","text":"

    Bases: BaseOrchestrationRule

    Prevents transitions to running states from happening too early.

    This rule enforces that all scheduled states will only start with the machine clock used by the Prefect REST API instance. This rule will identify transitions from scheduled states that are too early and nullify them. Instead, no state will be written to the database and the client will be sent an instruction to wait for delay_seconds before attempting the transition again.

    Source code in prefect/server/orchestration/core_policy.py
    class WaitForScheduledTime(BaseOrchestrationRule):\n    \"\"\"\n    Prevents transitions to running states from happening to early.\n\n    This rule enforces that all scheduled states will only start with the machine clock\n    used by the Prefect REST API instance. This rule will identify transitions from scheduled\n    states that are too early and nullify them. Instead, no state will be written to the\n    database and the client will be sent an instruction to wait for `delay_seconds`\n    before attempting the transition again.\n    \"\"\"\n\n    FROM_STATES = [StateType.SCHEDULED, StateType.PENDING]\n    TO_STATES = [StateType.RUNNING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        scheduled_time = initial_state.state_details.scheduled_time\n        if not scheduled_time:\n            return\n\n        # At this moment, we round delay to the nearest second as the API schema\n        # specifies an integer return value.\n        delay = scheduled_time - pendulum.now(\"UTC\")\n        delay_seconds = delay.in_seconds()\n        delay_seconds += round(delay.microseconds / 1e6)\n        if delay_seconds > 0:\n            await self.delay_transition(\n                delay_seconds, reason=\"Scheduled time is in the future\"\n            )\n
    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandlePausingFlows","title":"HandlePausingFlows","text":"

    Bases: BaseOrchestrationRule

    Governs runs attempting to enter a Paused/Suspended state

    Source code in prefect/server/orchestration/core_policy.py
    class HandlePausingFlows(BaseOrchestrationRule):\n    \"\"\"\n    Governs runs attempting to enter a Paused/Suspended state\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.PAUSED]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        verb = \"suspend\" if proposed_state.name == \"Suspended\" else \"pause\"\n\n        if initial_state is None:\n            await self.abort_transition(f\"Cannot {verb} flows with no state.\")\n            return\n\n        if not initial_state.is_running():\n            await self.reject_transition(\n                state=None,\n                reason=f\"Cannot {verb} flows that are not currently running.\",\n            )\n            return\n\n        self.key = proposed_state.state_details.pause_key\n        if self.key is None:\n            # if no pause key is provided, default to a UUID\n            self.key = str(uuid4())\n\n        if self.key in context.run.empirical_policy.pause_keys:\n            await self.reject_transition(\n                state=None, reason=f\"This {verb} has already fired.\"\n            )\n            return\n\n        if proposed_state.state_details.pause_reschedule:\n            if context.run.parent_task_run_id:\n                await self.abort_transition(\n                    reason=f\"Cannot {verb} subflows.\",\n                )\n                return\n\n            if context.run.deployment_id is None:\n                await self.abort_transition(\n                    reason=f\"Cannot {verb} flows without a deployment.\",\n                )\n                return\n\n    async def after_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        updated_policy = context.run.empirical_policy.dict()\n        updated_policy[\"pause_keys\"].add(self.key)\n        context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n
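
    From client code, a flow run enters the Paused state governed by this rule through pause_flow_run; a minimal sketch, assuming the in-process pause with a timeout in seconds:

    from prefect import flow, pause_flow_run


    @flow
    def approval_flow():
        # proposes a Paused state; HandlePausingFlows records the pause key
        pause_flow_run(timeout=300)
        # execution continues here once the run is resumed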
    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandleResumingPausedFlows","title":"HandleResumingPausedFlows","text":"

    Bases: BaseOrchestrationRule

    Governs runs attempting to leave a Paused state

    Source code in prefect/server/orchestration/core_policy.py
    class HandleResumingPausedFlows(BaseOrchestrationRule):\n    \"\"\"\n    Governs runs attempting to leave a Paused state\n    \"\"\"\n\n    FROM_STATES = [StateType.PAUSED]\n    TO_STATES = ALL_ORCHESTRATION_STATES\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        if not (\n            proposed_state.is_running()\n            or proposed_state.is_scheduled()\n            or proposed_state.is_final()\n        ):\n            await self.reject_transition(\n                state=None,\n                reason=(\n                    f\"This run cannot transition to the {proposed_state.type} state\"\n                    f\" from the {initial_state.type} state.\"\n                ),\n            )\n            return\n\n        verb = \"suspend\" if proposed_state.name == \"Suspended\" else \"pause\"\n\n        if initial_state.state_details.pause_reschedule:\n            if not context.run.deployment_id:\n                await self.reject_transition(\n                    state=None,\n                    reason=(\n                        f\"Cannot reschedule a {proposed_state.name.lower()} flow run\"\n                        \" without a deployment.\"\n                    ),\n                )\n                return\n        pause_timeout = initial_state.state_details.pause_timeout\n        if pause_timeout and pause_timeout < pendulum.now(\"UTC\"):\n            pause_timeout_failure = states.Failed(\n                message=(\n                    f\"The flow was {proposed_state.name.lower()} and never resumed.\"\n                ),\n            )\n            await self.reject_transition(\n                state=pause_timeout_failure,\n                reason=f\"The flow run {verb} has timed out and can no longer resume.\",\n            )\n            return\n\n    async def after_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        updated_policy = context.run.empirical_policy.dict()\n        updated_policy[\"resuming\"] = True\n        context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n
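
    Resuming is requested from the client or UI; a hedged sketch using the orchestration client, assuming it exposes resume_flow_run (the flow run id below is a placeholder):

    import asyncio

    from prefect import get_client


    async def resume(flow_run_id: str) -> None:
        async with get_client() as client:
            # asks the API to move the paused run back toward Running;
            # HandleResumingPausedFlows validates the transition
            await client.resume_flow_run(flow_run_id)


    # asyncio.run(resume("<your-flow-run-id>"))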
    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.UpdateFlowRunTrackerOnTasks","title":"UpdateFlowRunTrackerOnTasks","text":"

    Bases: BaseOrchestrationRule

    Tracks the flow run attempt a task run state is associated with.

    Source code in prefect/server/orchestration/core_policy.py
    class UpdateFlowRunTrackerOnTasks(BaseOrchestrationRule):\n    \"\"\"\n    Tracks the flow run attempt a task run state is associated with.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.RUNNING]\n\n    async def after_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        if context.run.flow_run_id is not None:\n            self.flow_run = await context.flow_run()\n            if self.flow_run:\n                context.run.flow_run_run_count = self.flow_run.run_count\n            else:\n                raise ObjectNotFoundError(\n                    (\n                        \"Unable to read flow run associated with task run:\"\n                        f\" {context.run.id}, this flow run might have been deleted\"\n                    ),\n                )\n
    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandleTaskTerminalStateTransitions","title":"HandleTaskTerminalStateTransitions","text":"

    Bases: BaseOrchestrationRule

    We do not allow tasks to leave terminal states if: - The task is completed and has a persisted result - The task is going to CANCELLING / PAUSED / CRASHED

    We reset the run count when a task leaves a terminal state for a non-terminal state which resets task run retries; this is particularly relevant for flow run retries.

    Source code in prefect/server/orchestration/core_policy.py
    class HandleTaskTerminalStateTransitions(BaseOrchestrationRule):\n    \"\"\"\n    We do not allow tasks to leave terminal states if:\n    - The task is completed and has a persisted result\n    - The task is going to CANCELLING / PAUSED / CRASHED\n\n    We reset the run count when a task leaves a terminal state for a non-terminal state\n    which resets task run retries; this is particularly relevant for flow run retries.\n    \"\"\"\n\n    FROM_STATES = TERMINAL_STATES\n    TO_STATES = ALL_ORCHESTRATION_STATES\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        self.original_run_count = context.run.run_count\n\n        # Do not allow runs to be marked as crashed, paused, or cancelling if already terminal\n        if proposed_state.type in {\n            StateType.CANCELLING,\n            StateType.PAUSED,\n            StateType.CRASHED,\n        }:\n            await self.abort_transition(f\"Run is already {initial_state.type.value}.\")\n            return\n\n        # Only allow departure from a happily completed state if the result is not persisted\n        if (\n            initial_state.is_completed()\n            and initial_state.data\n            and initial_state.data.get(\"type\") != \"unpersisted\"\n        ):\n            await self.reject_transition(None, \"This run is already completed.\")\n            return\n\n        if not proposed_state.is_final():\n            # Reset run count to reset retries\n            context.run.run_count = 0\n\n        # Change the name of the state to retrying if its a flow run retry\n        if proposed_state.is_running() and context.run.flow_run_id is not None:\n            self.flow_run = await context.flow_run()\n            flow_retrying = context.run.flow_run_run_count < self.flow_run.run_count\n            if flow_retrying:\n                await self.rename_state(\"Retrying\")\n\n    async def cleanup(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: OrchestrationContext,\n    ):\n        # reset run count\n        context.run.run_count = self.original_run_count\n
    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandleFlowTerminalStateTransitions","title":"HandleFlowTerminalStateTransitions","text":"

    Bases: BaseOrchestrationRule

    We do not allow flows to leave terminal states if: - The flow is completed and has a persisted result - The flow is going to CANCELLING / PAUSED / CRASHED - The flow is going to scheduled and has no deployment

    We reset the pause metadata when a flow leaves a terminal state for a non-terminal state. This resets pause behavior during manual flow run retries.

    Source code in prefect/server/orchestration/core_policy.py
    class HandleFlowTerminalStateTransitions(BaseOrchestrationRule):\n    \"\"\"\n    We do not allow flows to leave terminal states if:\n    - The flow is completed and has a persisted result\n    - The flow is going to CANCELLING / PAUSED / CRASHED\n    - The flow is going to scheduled and has no deployment\n\n    We reset the pause metadata when a flow leaves a terminal state for a non-terminal\n    state. This resets pause behavior during manual flow run retries.\n    \"\"\"\n\n    FROM_STATES = TERMINAL_STATES\n    TO_STATES = ALL_ORCHESTRATION_STATES\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: FlowOrchestrationContext,\n    ) -> None:\n        self.original_flow_policy = context.run.empirical_policy.dict()\n\n        # Do not allow runs to be marked as crashed, paused, or cancelling if already terminal\n        if proposed_state.type in {\n            StateType.CANCELLING,\n            StateType.PAUSED,\n            StateType.CRASHED,\n        }:\n            await self.abort_transition(\n                f\"Run is already in terminal state {initial_state.type.value}.\"\n            )\n            return\n\n        # Only allow departure from a happily completed state if the result is not\n        # persisted and the a rerun is being proposed\n        if (\n            initial_state.is_completed()\n            and not proposed_state.is_final()\n            and initial_state.data\n            and initial_state.data.get(\"type\") != \"unpersisted\"\n        ):\n            await self.reject_transition(None, \"Run is already COMPLETED.\")\n            return\n\n        # Do not allows runs to be rescheduled without a deployment\n        if proposed_state.is_scheduled() and not context.run.deployment_id:\n            await self.abort_transition(\n                \"Cannot reschedule a run without an associated deployment.\"\n            )\n            return\n\n        if not proposed_state.is_final():\n            # Reset pause metadata when leaving a terminal state\n            api_version = context.parameters.get(\"api-version\", None)\n            if api_version is None or api_version >= Version(\"0.8.4\"):\n                updated_policy = context.run.empirical_policy.dict()\n                updated_policy[\"resuming\"] = False\n                updated_policy[\"pause_keys\"] = set()\n                context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n\n    async def cleanup(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: OrchestrationContext,\n    ):\n        context.run.empirical_policy = core.FlowRunPolicy(**self.original_flow_policy)\n
    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.PreventPendingTransitions","title":"PreventPendingTransitions","text":"

    Bases: BaseOrchestrationRule

    Prevents transitions to PENDING.

    This rule is only used for flow runs.

    This is intended to prevent race conditions during duplicate submissions of runs. Before a run is submitted to its execution environment, it should be placed in a PENDING state. If two workers attempt to submit the same run, one of them should encounter a PENDING -> PENDING transition and abort orchestration of the run.

    Similarly, if the execution environment starts quickly the run may be in a RUNNING state when the second worker attempts the PENDING transition. We deny these state changes as well to prevent duplicate submission. If a run has transitioned to a RUNNING state a worker should not attempt to submit it again unless it has moved into a terminal state.

    CANCELLING and CANCELLED runs should not be allowed to transition to PENDING. For re-runs of deployed runs, they should transition to SCHEDULED first. For re-runs of ad-hoc runs, they should transition directly to RUNNING.

    Source code in prefect/server/orchestration/core_policy.py
    class PreventPendingTransitions(BaseOrchestrationRule):\n    \"\"\"\n    Prevents transitions to PENDING.\n\n    This rule is only used for flow runs.\n\n    This is intended to prevent race conditions during duplicate submissions of runs.\n    Before a run is submitted to its execution environment, it should be placed in a\n    PENDING state. If two workers attempt to submit the same run, one of them should\n    encounter a PENDING -> PENDING transition and abort orchestration of the run.\n\n    Similarly, if the execution environment starts quickly the run may be in a RUNNING\n    state when the second worker attempts the PENDING transition. We deny these state\n    changes as well to prevent duplicate submission. If a run has transitioned to a\n    RUNNING state a worker should not attempt to submit it again unless it has moved\n    into a terminal state.\n\n    CANCELLING and CANCELLED runs should not be allowed to transition to PENDING.\n    For re-runs of deployed runs, they should transition to SCHEDULED first.\n    For re-runs of ad-hoc runs, they should transition directly to RUNNING.\n    \"\"\"\n\n    FROM_STATES = [\n        StateType.PENDING,\n        StateType.CANCELLING,\n        StateType.RUNNING,\n        StateType.CANCELLED,\n    ]\n    TO_STATES = [StateType.PENDING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        await self.abort_transition(\n            reason=(\n                f\"This run is in a {initial_state.type.name} state and cannot\"\n                \" transition to a PENDING state.\"\n            )\n        )\n
    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.PreventRunningTasksFromStoppedFlows","title":"PreventRunningTasksFromStoppedFlows","text":"

    Bases: BaseOrchestrationRule

    Prevents running tasks from stopped flows.

    A running state implies execution, and the converse should hold as well: a task should only execute while its flow is running. This rule ensures that a flow's tasks cannot run unless the flow is also running.

    Source code in prefect/server/orchestration/core_policy.py
    class PreventRunningTasksFromStoppedFlows(BaseOrchestrationRule):\n    \"\"\"\n    Prevents running tasks from stopped flows.\n\n    A running state implies execution, but also the converse. This rule ensures that a\n    flow's tasks cannot be run unless the flow is also running.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.RUNNING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        flow_run = await context.flow_run()\n        if flow_run is not None:\n            if flow_run.state is None:\n                await self.abort_transition(\n                    reason=\"The enclosing flow must be running to begin task execution.\"\n                )\n            elif flow_run.state.type == StateType.PAUSED:\n                # Use the flow run's Paused state details to preserve data like\n                # timeouts.\n                paused_state = states.Paused(\n                    name=\"NotReady\",\n                    pause_expiration_time=flow_run.state.state_details.pause_timeout,\n                    reschedule=flow_run.state.state_details.pause_reschedule,\n                )\n                await self.reject_transition(\n                    state=paused_state,\n                    reason=(\n                        \"The flow is paused, new tasks can execute after resuming flow\"\n                        f\" run: {flow_run.id}.\"\n                    ),\n                )\n            elif not flow_run.state.type == StateType.RUNNING:\n                # task runners should abort task run execution\n                await self.abort_transition(\n                    reason=(\n                        \"The enclosing flow must be running to begin task execution.\"\n                    ),\n                )\n
    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.EnforceCancellingToCancelledTransition","title":"EnforceCancellingToCancelledTransition","text":"

    Bases: BaseOrchestrationRule

    Rejects transitions from Cancelling to any terminal state except for Cancelled.

    Source code in prefect/server/orchestration/core_policy.py
    class EnforceCancellingToCancelledTransition(BaseOrchestrationRule):\n    \"\"\"\n    Rejects transitions from Cancelling to any terminal state except for Cancelled.\n    \"\"\"\n\n    FROM_STATES = {StateType.CANCELLED, StateType.CANCELLING}\n    TO_STATES = ALL_ORCHESTRATION_STATES - {StateType.CANCELLED}\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        await self.reject_transition(\n            state=None,\n            reason=(\n                \"Cannot transition flows that are cancelling to a state other \"\n                \"than Cancelled.\"\n            ),\n        )\n        return\n
    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.BypassCancellingFlowRunsWithNoInfra","title":"BypassCancellingFlowRunsWithNoInfra","text":"

    Bases: BaseOrchestrationRule

    Rejects transitions from Scheduled to Cancelling, and instead sets the state to Cancelled, if the flow run has no associated infrastructure process ID. Also rejects transitions from Paused to Cancelling if the Paused state's details indicate the flow run has been suspended, exiting the flow and tearing down infrastructure.

    The Cancelling state is used to clean up infrastructure. If there is no infrastructure to clean up, we can transition directly to Cancelled. Runs that are AwaitingRetry are in a Scheduled state that may have associated infrastructure.

    Source code in prefect/server/orchestration/core_policy.py
    class BypassCancellingFlowRunsWithNoInfra(BaseOrchestrationRule):\n    \"\"\"Rejects transitions from Scheduled to Cancelling, and instead sets the state to Cancelled,\n    if the flow run has no associated infrastructure process ID. Also Rejects transitions from\n    Paused to Cancelling if the Paused state's details indicates the flow run has been suspended,\n    exiting the flow and tearing down infra.\n\n    The `Cancelling` state is used to clean up infrastructure. If there is not infrastructure\n    to clean up, we can transition directly to `Cancelled`. Runs that are `AwaitingRetry` are\n    a `Scheduled` state that may have associated infrastructure.\n    \"\"\"\n\n    FROM_STATES = {StateType.SCHEDULED, StateType.PAUSED}\n    TO_STATES = {StateType.CANCELLING}\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: FlowOrchestrationContext,\n    ) -> None:\n        if (\n            initial_state.type == states.StateType.SCHEDULED\n            and not context.run.infrastructure_pid\n        ):\n            await self.reject_transition(\n                state=states.Cancelled(),\n                reason=\"Scheduled flow run has no infrastructure to terminate.\",\n            )\n        elif (\n            initial_state.type == states.StateType.PAUSED\n            and initial_state.state_details.pause_reschedule\n        ):\n            await self.reject_transition(\n                state=states.Cancelled(),\n                reason=\"Suspended flow run has no infrastructure to terminate.\",\n            )\n
    "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.PreventDuplicateTransitions","title":"PreventDuplicateTransitions","text":"

    Bases: BaseOrchestrationRule

    Prevent duplicate transitions from being made right after one another.

    This rule allows clients to set an optional transition_id on a state. If the run's next transition has the same transition_id, the transition will be rejected and the existing state will be returned.

    This allows clients to make state transition requests without worrying about the following case: - A client makes a state transition request - The server accepts and commits the transition - The client never receives the response and retries the request

    Source code in prefect/server/orchestration/core_policy.py
    class PreventDuplicateTransitions(BaseOrchestrationRule):\n    \"\"\"\n    Prevent duplicate transitions from being made right after one another.\n\n    This rule allows for clients to set an optional transition_id on a state. If the\n    run's next transition has the same transition_id, the transition will be\n    rejected and the existing state will be returned.\n\n    This allows for clients to make state transition requests without worrying about\n    the following case:\n    - A client making a state transition request\n    - The server accepts transition and commits the transition\n    - The client is unable to receive the response and retries the request\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = ALL_ORCHESTRATION_STATES\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        if (\n            initial_state is None\n            or proposed_state is None\n            or initial_state.state_details is None\n            or proposed_state.state_details is None\n        ):\n            return\n\n        initial_transition_id = getattr(\n            initial_state.state_details, \"transition_id\", None\n        )\n        proposed_transition_id = getattr(\n            proposed_state.state_details, \"transition_id\", None\n        )\n        if (\n            initial_transition_id is not None\n            and proposed_transition_id is not None\n            and initial_transition_id == proposed_transition_id\n        ):\n            await self.reject_transition(\n                # state=None will return the initial (current) state\n                state=None,\n                reason=\"This run has already made this state transition.\",\n            )\n
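
    A hedged sketch of the idempotency this rule provides: a client stamps the same transition_id on a state and can safely retry the request. The state_details.transition_id attribute and set_flow_run_state call are assumptions based on the rule above; the flow run id is a placeholder.

    import asyncio
    from uuid import uuid4

    from prefect import get_client
    from prefect.states import Completed


    async def mark_completed(flow_run_id: str) -> None:
        state = Completed()
        # reusing the same transition_id makes a retried request a no-op
        state.state_details.transition_id = uuid4()
        async with get_client() as client:
            await client.set_flow_run_state(flow_run_id, state)
            # e.g. a network-level retry of the same request is rejected back
            # to the existing state instead of producing a duplicate transition
            await client.set_flow_run_state(flow_run_id, state)


    # asyncio.run(mark_completed("<your-flow-run-id>"))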
    "},{"location":"api-ref/server/orchestration/global_policy/","title":"server.orchestration.global_policy","text":""},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy","title":"prefect.server.orchestration.global_policy","text":"

    Bookkeeping logic that fires on every state transition.

    For clarity, GlobalFlowPolicy and GlobalTaskPolicy contain all transition logic implemented using BaseUniversalTransform. None of these operations modify state, and regardless of what orchestration the Prefect REST API might enforce on a transition, the global policies contain Prefect's necessary bookkeeping. Because these transforms record information about the validated state committed to the state database, they should be the most deeply nested contexts in the orchestration loop.

    "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.GlobalFlowPolicy","title":"GlobalFlowPolicy","text":"

    Bases: BaseOrchestrationPolicy

    Global transforms that run against flow-run-state transitions in priority order.

    These transforms are intended to run immediately before and after a state transition is validated.

    Source code in prefect/server/orchestration/global_policy.py
    class GlobalFlowPolicy(BaseOrchestrationPolicy):\n    \"\"\"\n    Global transforms that run against flow-run-state transitions in priority order.\n\n    These transforms are intended to run immediately before and after a state transition\n    is validated.\n    \"\"\"\n\n    def priority():\n        return COMMON_GLOBAL_TRANSFORMS() + [\n            UpdateSubflowParentTask,\n            UpdateSubflowStateDetails,\n            IncrementFlowRunCount,\n            RemoveResumingIndicator,\n        ]\n
    "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.GlobalTaskPolicy","title":"GlobalTaskPolicy","text":"

    Bases: BaseOrchestrationPolicy

    Global transforms that run against task-run-state transitions in priority order.

    These transforms are intended to run immediately before and after a state transition is validated.

    Source code in prefect/server/orchestration/global_policy.py
    class GlobalTaskPolicy(BaseOrchestrationPolicy):\n    \"\"\"\n    Global transforms that run against task-run-state transitions in priority order.\n\n    These transforms are intended to run immediately before and after a state transition\n    is validated.\n    \"\"\"\n\n    def priority():\n        return COMMON_GLOBAL_TRANSFORMS() + [\n            IncrementTaskRunCount,\n        ]\n
    "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetRunStateType","title":"SetRunStateType","text":"

    Bases: BaseUniversalTransform

    Updates the state type of a run on a state transition.

    Source code in prefect/server/orchestration/global_policy.py
    class SetRunStateType(BaseUniversalTransform):\n    \"\"\"\n    Updates the state type of a run on a state transition.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # record the new state's type\n        context.run.state_type = context.proposed_state.type\n
    "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetRunStateName","title":"SetRunStateName","text":"

    Bases: BaseUniversalTransform

    Updates the state name of a run on a state transition.

    Source code in prefect/server/orchestration/global_policy.py
    class SetRunStateName(BaseUniversalTransform):\n    \"\"\"\n    Updates the state name of a run on a state transition.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # record the new state's name\n        context.run.state_name = context.proposed_state.name\n
    "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetStartTime","title":"SetStartTime","text":"

    Bases: BaseUniversalTransform

    Records the time a run enters a running state for the first time.

    Source code in prefect/server/orchestration/global_policy.py
    class SetStartTime(BaseUniversalTransform):\n    \"\"\"\n    Records the time a run enters a running state for the first time.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # if entering a running state and no start time is set...\n        if context.proposed_state.is_running() and context.run.start_time is None:\n            # set the start time\n            context.run.start_time = context.proposed_state.timestamp\n
    "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetRunStateTimestamp","title":"SetRunStateTimestamp","text":"

    Bases: BaseUniversalTransform

    Records the time a run changes states.

    Source code in prefect/server/orchestration/global_policy.py
    class SetRunStateTimestamp(BaseUniversalTransform):\n    \"\"\"\n    Records the time a run changes states.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # record the new state's timestamp\n        context.run.state_timestamp = context.proposed_state.timestamp\n
    "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetEndTime","title":"SetEndTime","text":"

    Bases: BaseUniversalTransform

    Records the time a run enters a terminal state.

    With normal client usage, a run will not transition out of a terminal state. However, it's possible to force these transitions manually via the API. When a run leaves a terminal state, its end time is unset.

    Source code in prefect/server/orchestration/global_policy.py
    class SetEndTime(BaseUniversalTransform):\n    \"\"\"\n    Records the time a run enters a terminal state.\n\n    With normal client usage, a run will not transition out of a terminal state.\n    However, it's possible to force these transitions manually via the API. While\n    leaving a terminal state, the end time will be unset.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # if exiting a final state for a non-final state...\n        if (\n            context.initial_state\n            and context.initial_state.is_final()\n            and not context.proposed_state.is_final()\n        ):\n            # clear the end time\n            context.run.end_time = None\n\n        # if entering a final state...\n        if context.proposed_state.is_final():\n            # if the run has a start time and no end time, give it one\n            if context.run.start_time and not context.run.end_time:\n                context.run.end_time = context.proposed_state.timestamp\n
    "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.IncrementRunTime","title":"IncrementRunTime","text":"

    Bases: BaseUniversalTransform

    Records the amount of time a run spends in the running state.

    Source code in prefect/server/orchestration/global_policy.py
    class IncrementRunTime(BaseUniversalTransform):\n    \"\"\"\n    Records the amount of time a run spends in the running state.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # if exiting a running state...\n        if context.initial_state and context.initial_state.is_running():\n            # increment the run time by the time spent in the previous state\n            context.run.total_run_time += (\n                context.proposed_state.timestamp - context.initial_state.timestamp\n            )\n
    "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.IncrementFlowRunCount","title":"IncrementFlowRunCount","text":"

    Bases: BaseUniversalTransform

    Records the number of times a run enters a running state. For use with retries.

    Source code in prefect/server/orchestration/global_policy.py
    class IncrementFlowRunCount(BaseUniversalTransform):\n    \"\"\"\n    Records the number of times a run enters a running state. For use with retries.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # if entering a running state...\n        if context.proposed_state.is_running():\n            # do not increment the run count if resuming a paused flow\n            api_version = context.parameters.get(\"api-version\", None)\n            if api_version is None or api_version >= Version(\"0.8.4\"):\n                if context.run.empirical_policy.resuming:\n                    return\n\n            # increment the run count\n            context.run.run_count += 1\n
    "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.RemoveResumingIndicator","title":"RemoveResumingIndicator","text":"

    Bases: BaseUniversalTransform

    Removes the indicator on a flow run that marks it as resuming.

    Source code in prefect/server/orchestration/global_policy.py
    class RemoveResumingIndicator(BaseUniversalTransform):\n    \"\"\"\n    Removes the indicator on a flow run that marks it as resuming.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        proposed_state = context.proposed_state\n\n        api_version = context.parameters.get(\"api-version\", None)\n        if api_version is None or api_version >= Version(\"0.8.4\"):\n            if proposed_state.is_running() or proposed_state.is_final():\n                if context.run.empirical_policy.resuming:\n                    updated_policy = context.run.empirical_policy.dict()\n                    updated_policy[\"resuming\"] = False\n                    context.run.empirical_policy = FlowRunPolicy(**updated_policy)\n
    "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.IncrementTaskRunCount","title":"IncrementTaskRunCount","text":"

    Bases: BaseUniversalTransform

    Records the number of times a run enters a running state. For use with retries.

    Source code in prefect/server/orchestration/global_policy.py
    class IncrementTaskRunCount(BaseUniversalTransform):\n    \"\"\"\n    Records the number of times a run enters a running state. For use with retries.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # if entering a running state...\n        if context.proposed_state.is_running():\n            # increment the run count\n            context.run.run_count += 1\n
    "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetExpectedStartTime","title":"SetExpectedStartTime","text":"

    Bases: BaseUniversalTransform

    Estimates the time a state is expected to start running if not set.

    For scheduled states, this estimate is simply the scheduled time. For other states, this is set to the time the proposed state was created by Prefect.

    Source code in prefect/server/orchestration/global_policy.py
    class SetExpectedStartTime(BaseUniversalTransform):\n    \"\"\"\n    Estimates the time a state is expected to start running if not set.\n\n    For scheduled states, this estimate is simply the scheduled time. For other states,\n    this is set to the time the proposed state was created by Prefect.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # set expected start time if this is the first state\n        if not context.run.expected_start_time:\n            if context.proposed_state.is_scheduled():\n                context.run.expected_start_time = (\n                    context.proposed_state.state_details.scheduled_time\n                )\n            else:\n                context.run.expected_start_time = context.proposed_state.timestamp\n
    "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetNextScheduledStartTime","title":"SetNextScheduledStartTime","text":"

    Bases: BaseUniversalTransform

    Records the scheduled time on a run.

    When a run enters a scheduled state, run.next_scheduled_start_time is set to the state's scheduled time. When leaving a scheduled state, run.next_scheduled_start_time is unset.

    Source code in prefect/server/orchestration/global_policy.py
    class SetNextScheduledStartTime(BaseUniversalTransform):\n    \"\"\"\n    Records the scheduled time on a run.\n\n    When a run enters a scheduled state, `run.next_scheduled_start_time` is set to\n    the state's scheduled time. When leaving a scheduled state,\n    `run.next_scheduled_start_time` is unset.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # remove the next scheduled start time if exiting a scheduled state\n        if context.initial_state and context.initial_state.is_scheduled():\n            context.run.next_scheduled_start_time = None\n\n        # set next scheduled start time if entering a scheduled state\n        if context.proposed_state.is_scheduled():\n            context.run.next_scheduled_start_time = (\n                context.proposed_state.state_details.scheduled_time\n            )\n
    "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.UpdateSubflowParentTask","title":"UpdateSubflowParentTask","text":"

    Bases: BaseUniversalTransform

    Whenever a subflow changes state, it must update its parent task run's state.

    Source code in prefect/server/orchestration/global_policy.py
    class UpdateSubflowParentTask(BaseUniversalTransform):\n    \"\"\"\n    Whenever a subflow changes state, it must update its parent task run's state.\n    \"\"\"\n\n    async def after_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # only applies to flow runs with a parent task run id\n        if context.run.parent_task_run_id is not None:\n            # avoid mutation of the flow run state\n            subflow_parent_task_state = context.validated_state.copy(\n                reset_fields=True,\n                include={\n                    \"type\",\n                    \"timestamp\",\n                    \"name\",\n                    \"message\",\n                    \"state_details\",\n                    \"data\",\n                },\n            )\n\n            # set the task's \"child flow run id\" to be the subflow run id\n            subflow_parent_task_state.state_details.child_flow_run_id = context.run.id\n\n            await models.task_runs.set_task_run_state(\n                session=context.session,\n                task_run_id=context.run.parent_task_run_id,\n                state=subflow_parent_task_state,\n                force=True,\n            )\n
    "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.UpdateSubflowStateDetails","title":"UpdateSubflowStateDetails","text":"

    Bases: BaseUniversalTransform

    Updates a child subflow state's references to the corresponding tracking task run id in the parent flow run.

    Source code in prefect/server/orchestration/global_policy.py
    class UpdateSubflowStateDetails(BaseUniversalTransform):\n    \"\"\"\n    Update a child subflow state's references to a corresponding tracking task run id\n    in the parent flow run\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # only applies to flow runs with a parent task run id\n        if context.run.parent_task_run_id is not None:\n            context.proposed_state.state_details.task_run_id = (\n                context.run.parent_task_run_id\n            )\n
    "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.UpdateStateDetails","title":"UpdateStateDetails","text":"

    Bases: BaseUniversalTransform

    Update a state's references to a corresponding flow- or task- run.

    Source code in prefect/server/orchestration/global_policy.py
    class UpdateStateDetails(BaseUniversalTransform):\n    \"\"\"\n    Update a state's references to a corresponding flow- or task- run.\n    \"\"\"\n\n    async def before_transition(\n        self,\n        context: OrchestrationContext,\n    ) -> None:\n        if self.nullified_transition():\n            return\n\n        if isinstance(context, FlowOrchestrationContext):\n            flow_run = await context.flow_run()\n            context.proposed_state.state_details.flow_run_id = flow_run.id\n\n        elif isinstance(context, TaskOrchestrationContext):\n            task_run = await context.task_run()\n            context.proposed_state.state_details.flow_run_id = task_run.flow_run_id\n            context.proposed_state.state_details.task_run_id = task_run.id\n
    "},{"location":"api-ref/server/orchestration/policies/","title":"server.orchestration.policies","text":""},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies","title":"prefect.server.orchestration.policies","text":"

    Policies are collections of orchestration rules and transforms.

    Prefect implements (most) orchestration with logic that governs a Prefect flow or task changing state. Policies organize orchestration logic both to provide an ordering mechanism and to provide observability into the orchestration process.

    While Prefect's orchestration rules can gracefully run independently of one another, ordering can still have an impact on the observed behavior of the system. For example, it makes no sense to secure a concurrency slot for a run if a cached state exists. Furthermore, policies provide a mechanism to configure and observe exactly what logic will fire against a transition.

    "},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies.BaseOrchestrationPolicy","title":"BaseOrchestrationPolicy","text":"

    Bases: ABC

    An abstract base class used to organize orchestration rules in priority order.

    Different collections of orchestration rules might be used to govern various kinds of transitions. For example, flow-run states and task-run states might require different orchestration logic.

    Source code in prefect/server/orchestration/policies.py
    class BaseOrchestrationPolicy(ABC):\n    \"\"\"\n    An abstract base class used to organize orchestration rules in priority order.\n\n    Different collections of orchestration rules might be used to govern various kinds\n    of transitions. For example, flow-run states and task-run states might require\n    different orchestration logic.\n    \"\"\"\n\n    @staticmethod\n    @abstractmethod\n    def priority():\n        \"\"\"\n        A list of orchestration rules in priority order.\n        \"\"\"\n\n        return []\n\n    @classmethod\n    def compile_transition_rules(cls, from_state=None, to_state=None):\n        \"\"\"\n        Returns rules in policy that are valid for the specified state transition.\n        \"\"\"\n\n        transition_rules = []\n        for rule in cls.priority():\n            if from_state in rule.FROM_STATES and to_state in rule.TO_STATES:\n                transition_rules.append(rule)\n        return transition_rules\n
    "},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies.BaseOrchestrationPolicy.priority","title":"priority abstractmethod staticmethod","text":"

    A list of orchestration rules in priority order.

    Source code in prefect/server/orchestration/policies.py
    @staticmethod\n@abstractmethod\ndef priority():\n    \"\"\"\n    A list of orchestration rules in priority order.\n    \"\"\"\n\n    return []\n
    "},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies.BaseOrchestrationPolicy.compile_transition_rules","title":"compile_transition_rules classmethod","text":"

    Returns rules in policy that are valid for the specified state transition.

    Source code in prefect/server/orchestration/policies.py
    @classmethod\ndef compile_transition_rules(cls, from_state=None, to_state=None):\n    \"\"\"\n    Returns rules in policy that are valid for the specified state transition.\n    \"\"\"\n\n    transition_rules = []\n    for rule in cls.priority():\n        if from_state in rule.FROM_STATES and to_state in rule.TO_STATES:\n            transition_rules.append(rule)\n    return transition_rules\n
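As a hedged sketch of how these pieces fit together, the hypothetical policy below returns a single rule from priority() and then uses compile_transition_rules to select the rules that govern a RUNNING to COMPLETED transition. It assumes BaseOrchestrationRule and StateType are importable from prefect.server.orchestration.rules and prefect.server.schemas.states respectively; MinimalRule is a stand-in, not a real Prefect rule.

from prefect.server.orchestration.policies import BaseOrchestrationPolicy
from prefect.server.orchestration.rules import BaseOrchestrationRule
from prefect.server.schemas.states import StateType


class MinimalRule(BaseOrchestrationRule):
    # this hypothetical rule only governs RUNNING -> COMPLETED transitions
    FROM_STATES = [StateType.RUNNING]
    TO_STATES = [StateType.COMPLETED]

    async def before_transition(self, initial_state, proposed_state, context):
        ...

    async def after_transition(self, initial_state, validated_state, context):
        ...

    async def cleanup(self, initial_state, validated_state, context):
        ...


class MinimalPolicy(BaseOrchestrationPolicy):
    @staticmethod
    def priority():
        # rules fire in the order returned here
        return [MinimalRule]


# select only the rules valid for a RUNNING -> COMPLETED transition
rules = MinimalPolicy.compile_transition_rules(
    from_state=StateType.RUNNING, to_state=StateType.COMPLETED
)
assert rules == [MinimalRule]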
    "},{"location":"api-ref/server/orchestration/rules/","title":"server.orchestration.rules","text":""},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules","title":"prefect.server.orchestration.rules","text":"

    Prefect's flow and task-run orchestration machinery.

    This module contains all the core concepts necessary to implement Prefect's state orchestration engine. These states correspond to intuitive descriptions of all the points at which a Prefect flow or task can be observed executing user code, and where Prefect can intervene if necessary. A detailed description of states can be found in our concept documentation.

    Prefect's orchestration engine operates under the assumption that no governed user code will execute without first requesting that the Prefect REST API validate a change in state and record metadata about the run. Because every attempt to run user code is checked against a Prefect instance, the Prefect REST API database becomes the unambiguous source of truth for managing the execution of complex interacting workflows. Orchestration rules can be implemented as discrete units of logic that operate on each state transition and can be fully observable, extensible, and customizable -- all without needing to store or parse a single line of user code.
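In practice, this means every state change is first proposed to the API, where the rules described below decide whether it is accepted, rejected, aborted, or delayed. The hedged sketch below proposes a transition from the client side; it assumes get_client and set_flow_run_state behave as in recent Prefect 2.x releases, and the flow run id is a placeholder.

import asyncio
from uuid import UUID

from prefect.client.orchestration import get_client
from prefect.states import Completed


async def propose_completion(flow_run_id: UUID) -> None:
    # propose a state change; the server runs it through its orchestration
    # rules before deciding whether to accept it
    async with get_client() as client:
        result = await client.set_flow_run_state(
            flow_run_id=flow_run_id, state=Completed()
        )
        # the returned status reflects the orchestration decision
        print(result.status)


# placeholder id; substitute the id of a real flow run before running this
# asyncio.run(propose_completion(UUID("00000000-0000-0000-0000-000000000000")))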

    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext","title":"OrchestrationContext","text":"

    Bases: PrefectBaseModel

    A container for a state transition, governed by orchestration rules.

    Note

    An OrchestrationContext should not be instantiated directly; instead, use the flow- or task-specific subclasses, FlowOrchestrationContext and TaskOrchestrationContext.

    When a flow- or task- run attempts to change state, Prefect REST API has an opportunity to decide whether this transition can proceed. All the relevant information associated with the state transition is stored in an OrchestrationContext, which is subsequently governed by nested orchestration rules implemented using the BaseOrchestrationRule ABC.

    OrchestrationContext introduces the concept of a state being None in the context of an intended state transition. An initial state can be None if a run is attempting to set a state for the first time. The proposed state might be None if a rule governing the transition determines that no state change should occur at all and nothing is written to the database.

    Attributes:

    - session (Optional[Union[Session, AsyncSession]]): a SQLAlchemy database session
    - initial_state (Optional[State]): the initial state of a run
    - proposed_state (Optional[State]): the proposed state a run is transitioning into
    - validated_state (Optional[State]): a proposed state that has been committed to the database
    - rule_signature (List[str]): a record of rules that have fired on entry into a managed context, currently only used for debugging purposes
    - finalization_signature (List[str]): a record of rules that have fired on exit from a managed context, currently only used for debugging purposes
    - response_status (SetStateStatus): a SetStateStatus object used to build the API response
    - response_details (StateResponseDetails): a StateResponseDetails object used to build the API response

    Parameters:

    - session: a SQLAlchemy database session (required)
    - initial_state: the initial state of a run (required)
    - proposed_state: the proposed state a run is transitioning into (required)

    Source code in prefect/server/orchestration/rules.py
    class OrchestrationContext(PrefectBaseModel):\n    \"\"\"\n    A container for a state transition, governed by orchestration rules.\n\n    Note:\n        An `OrchestrationContext` should not be instantiated directly, instead\n        use the flow- or task- specific subclasses, `FlowOrchestrationContext` and\n        `TaskOrchestrationContext`.\n\n    When a flow- or task- run attempts to change state, Prefect REST API has an opportunity\n    to decide whether this transition can proceed. All the relevant information\n    associated with the state transition is stored in an `OrchestrationContext`,\n    which is subsequently governed by nested orchestration rules implemented using\n    the `BaseOrchestrationRule` ABC.\n\n    `OrchestrationContext` introduces the concept of a state being `None` in the\n    context of an intended state transition. An initial state can be `None` if a run\n    is is attempting to set a state for the first time. The proposed state might be\n    `None` if a rule governing the transition determines that no state change\n    should occur at all and nothing is written to the database.\n\n    Attributes:\n        session: a SQLAlchemy database session\n        initial_state: the initial state of a run\n        proposed_state: the proposed state a run is transitioning into\n        validated_state: a proposed state that has committed to the database\n        rule_signature: a record of rules that have fired on entry into a\n            managed context, currently only used for debugging purposes\n        finalization_signature: a record of rules that have fired on exit from a\n            managed context, currently only used for debugging purposes\n        response_status: a SetStateStatus object used to build the API response\n        response_details:a StateResponseDetails object use to build the API response\n\n    Args:\n        session: a SQLAlchemy database session\n        initial_state: the initial state of a run\n        proposed_state: the proposed state a run is transitioning into\n    \"\"\"\n\n    class Config:\n        arbitrary_types_allowed = True\n\n    session: Optional[Union[sa.orm.Session, AsyncSession]] = ...\n    initial_state: Optional[states.State] = ...\n    proposed_state: Optional[states.State] = ...\n    validated_state: Optional[states.State]\n    rule_signature: List[str] = Field(default_factory=list)\n    finalization_signature: List[str] = Field(default_factory=list)\n    response_status: SetStateStatus = Field(default=SetStateStatus.ACCEPT)\n    response_details: StateResponseDetails = Field(default_factory=StateAcceptDetails)\n    orchestration_error: Optional[Exception] = Field(default=None)\n    parameters: Dict[Any, Any] = Field(default_factory=dict)\n\n    @property\n    def initial_state_type(self) -> Optional[states.StateType]:\n        \"\"\"The state type of `self.initial_state` if it exists.\"\"\"\n\n        return self.initial_state.type if self.initial_state else None\n\n    @property\n    def proposed_state_type(self) -> Optional[states.StateType]:\n        \"\"\"The state type of `self.proposed_state` if it exists.\"\"\"\n\n        return self.proposed_state.type if self.proposed_state else None\n\n    @property\n    def validated_state_type(self) -> Optional[states.StateType]:\n        \"\"\"The state type of `self.validated_state` if it exists.\"\"\"\n        return self.validated_state.type if self.validated_state else None\n\n    def safe_copy(self):\n        \"\"\"\n        Creates a mostly-mutation-safe copy for 
use in orchestration rules.\n\n        Orchestration rules govern state transitions using information stored in\n        an `OrchestrationContext`. However, mutating objects stored on the context\n        directly can have unintended side-effects. To guard against this,\n        `self.safe_copy` can be used to pass information to orchestration rules\n        without risking mutation.\n\n        Returns:\n            A mutation-safe copy of the `OrchestrationContext`\n        \"\"\"\n\n        safe_copy = self.copy()\n\n        safe_copy.initial_state = (\n            self.initial_state.copy() if self.initial_state else None\n        )\n        safe_copy.proposed_state = (\n            self.proposed_state.copy() if self.proposed_state else None\n        )\n        safe_copy.validated_state = (\n            self.validated_state.copy() if self.validated_state else None\n        )\n        safe_copy.parameters = self.parameters.copy()\n        return safe_copy\n\n    def entry_context(self):\n        \"\"\"\n        A convenience method that generates input parameters for orchestration rules.\n\n        An `OrchestrationContext` defines a state transition that is managed by\n        orchestration rules which can fire hooks before a transition has been committed\n        to the database. These hooks have a consistent interface which can be generated\n        with this method.\n        \"\"\"\n\n        safe_context = self.safe_copy()\n        return safe_context.initial_state, safe_context.proposed_state, safe_context\n\n    def exit_context(self):\n        \"\"\"\n        A convenience method that generates input parameters for orchestration rules.\n\n        An `OrchestrationContext` defines a state transition that is managed by\n        orchestration rules which can fire hooks after a transition has been committed\n        to the database. These hooks have a consistent interface which can be generated\n        with this method.\n        \"\"\"\n\n        safe_context = self.safe_copy()\n        return safe_context.initial_state, safe_context.validated_state, safe_context\n
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.initial_state_type","title":"initial_state_type: Optional[states.StateType] property","text":"

    The state type of self.initial_state if it exists.

    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.proposed_state_type","title":"proposed_state_type: Optional[states.StateType] property","text":"

    The state type of self.proposed_state if it exists.

    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.validated_state_type","title":"validated_state_type: Optional[states.StateType] property","text":"

    The state type of self.validated_state if it exists.

    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.safe_copy","title":"safe_copy","text":"

    Creates a mostly-mutation-safe copy for use in orchestration rules.

    Orchestration rules govern state transitions using information stored in an OrchestrationContext. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, self.safe_copy can be used to pass information to orchestration rules without risking mutation.

    Returns:

    Type Description

    A mutation-safe copy of the OrchestrationContext

    Source code in prefect/server/orchestration/rules.py
    def safe_copy(self):\n    \"\"\"\n    Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n    Orchestration rules govern state transitions using information stored in\n    an `OrchestrationContext`. However, mutating objects stored on the context\n    directly can have unintended side-effects. To guard against this,\n    `self.safe_copy` can be used to pass information to orchestration rules\n    without risking mutation.\n\n    Returns:\n        A mutation-safe copy of the `OrchestrationContext`\n    \"\"\"\n\n    safe_copy = self.copy()\n\n    safe_copy.initial_state = (\n        self.initial_state.copy() if self.initial_state else None\n    )\n    safe_copy.proposed_state = (\n        self.proposed_state.copy() if self.proposed_state else None\n    )\n    safe_copy.validated_state = (\n        self.validated_state.copy() if self.validated_state else None\n    )\n    safe_copy.parameters = self.parameters.copy()\n    return safe_copy\n
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.entry_context","title":"entry_context","text":"

    A convenience method that generates input parameters for orchestration rules.

    An OrchestrationContext defines a state transition that is managed by orchestration rules which can fire hooks before a transition has been committed to the database. These hooks have a consistent interface which can be generated with this method.

    Source code in prefect/server/orchestration/rules.py
    def entry_context(self):\n    \"\"\"\n    A convenience method that generates input parameters for orchestration rules.\n\n    An `OrchestrationContext` defines a state transition that is managed by\n    orchestration rules which can fire hooks before a transition has been committed\n    to the database. These hooks have a consistent interface which can be generated\n    with this method.\n    \"\"\"\n\n    safe_context = self.safe_copy()\n    return safe_context.initial_state, safe_context.proposed_state, safe_context\n
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.exit_context","title":"exit_context","text":"

    A convenience method that generates input parameters for orchestration rules.

    An OrchestrationContext defines a state transition that is managed by orchestration rules which can fire hooks after a transition has been committed to the database. These hooks have a consistent interface which can be generated with this method.

    Source code in prefect/server/orchestration/rules.py
    def exit_context(self):\n    \"\"\"\n    A convenience method that generates input parameters for orchestration rules.\n\n    An `OrchestrationContext` defines a state transition that is managed by\n    orchestration rules which can fire hooks after a transition has been committed\n    to the database. These hooks have a consistent interface which can be generated\n    with this method.\n    \"\"\"\n\n    safe_context = self.safe_copy()\n    return safe_context.initial_state, safe_context.validated_state, safe_context\n
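These two helpers are what feed a rule's hooks: entry_context() supplies (initial_state, proposed_state, context) before a transition is committed, and exit_context() supplies (initial_state, validated_state, context) afterwards, both built from safe_copy. The hypothetical read-only rule below (not part of Prefect) only inspects those safe copies; it assumes the same BaseOrchestrationRule and StateType import paths used elsewhere on this page.

from prefect.server.orchestration.rules import BaseOrchestrationRule
from prefect.server.schemas.states import StateType


class LogTransition(BaseOrchestrationRule):
    """Hypothetical rule that only reads the safe copies handed to its hooks."""

    FROM_STATES = [StateType.PENDING, StateType.RUNNING]
    TO_STATES = [StateType.RUNNING, StateType.COMPLETED, StateType.FAILED]

    async def before_transition(self, initial_state, proposed_state, context):
        # arguments are produced by context.entry_context(), i.e. from a
        # safe_copy, so inspecting them cannot mutate the real context
        print(f"proposing {context.initial_state_type} -> {context.proposed_state_type}")

    async def after_transition(self, initial_state, validated_state, context):
        # arguments are produced by context.exit_context() after the state
        # has been committed to the database
        print(f"validated state type: {context.validated_state_type}")

    async def cleanup(self, initial_state, validated_state, context):
        # nothing to revert for a read-only rule
        pass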
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext","title":"FlowOrchestrationContext","text":"

    Bases: OrchestrationContext

    A container for a flow run state transition, governed by orchestration rules.

    When a flow run attempts to change state, Prefect REST API has an opportunity to decide whether this transition can proceed. All the relevant information associated with the state transition is stored in an OrchestrationContext, which is subsequently governed by nested orchestration rules implemented using the BaseOrchestrationRule ABC.

    FlowOrchestrationContext introduces the concept of a state being None in the context of an intended state transition. An initial state can be None if a run is attempting to set a state for the first time. The proposed state might be None if a rule governing the transition determines that no state change should occur at all and nothing is written to the database.

    Attributes:

    - session: a SQLAlchemy database session
    - run (Any): the flow run attempting to change state
    - initial_state (Any): the initial state of the run
    - proposed_state (Any): the proposed state the run is transitioning into
    - validated_state (Any): a proposed state that has been committed to the database
    - rule_signature (Any): a record of rules that have fired on entry into a managed context, currently only used for debugging purposes
    - finalization_signature (Any): a record of rules that have fired on exit from a managed context, currently only used for debugging purposes
    - response_status (Any): a SetStateStatus object used to build the API response
    - response_details (Any): a StateResponseDetails object used to build the API response

    Parameters:

    - session: a SQLAlchemy database session (required)
    - run: the flow run attempting to change state (required)
    - initial_state: the initial state of a run (required)
    - proposed_state: the proposed state a run is transitioning into (required)

    Source code in prefect/server/orchestration/rules.py
    class FlowOrchestrationContext(OrchestrationContext):\n    \"\"\"\n    A container for a flow run state transition, governed by orchestration rules.\n\n    When a flow- run attempts to change state, Prefect REST API has an opportunity\n    to decide whether this transition can proceed. All the relevant information\n    associated with the state transition is stored in an `OrchestrationContext`,\n    which is subsequently governed by nested orchestration rules implemented using\n    the `BaseOrchestrationRule` ABC.\n\n    `FlowOrchestrationContext` introduces the concept of a state being `None` in the\n    context of an intended state transition. An initial state can be `None` if a run\n    is is attempting to set a state for the first time. The proposed state might be\n    `None` if a rule governing the transition determines that no state change\n    should occur at all and nothing is written to the database.\n\n    Attributes:\n        session: a SQLAlchemy database session\n        run: the flow run attempting to change state\n        initial_state: the initial state of the run\n        proposed_state: the proposed state the run is transitioning into\n        validated_state: a proposed state that has committed to the database\n        rule_signature: a record of rules that have fired on entry into a\n            managed context, currently only used for debugging purposes\n        finalization_signature: a record of rules that have fired on exit from a\n            managed context, currently only used for debugging purposes\n        response_status: a SetStateStatus object used to build the API response\n        response_details:a StateResponseDetails object use to build the API response\n\n    Args:\n        session: a SQLAlchemy database session\n        run: the flow run attempting to change state\n        initial_state: the initial state of a run\n        proposed_state: the proposed state a run is transitioning into\n    \"\"\"\n\n    # run: db.FlowRun = ...\n    run: Any = ...\n\n    @inject_db\n    async def validate_proposed_state(\n        self,\n        db: PrefectDBInterface,\n    ):\n        \"\"\"\n        Validates a proposed state by committing it to the database.\n\n        After the `FlowOrchestrationContext` is governed by orchestration rules, the\n        proposed state can be validated: the proposed state is added to the current\n        SQLAlchemy session and is flushed. `self.validated_state` set to the flushed\n        state. 
The state on the run is set to the validated state as well.\n\n        If the proposed state is `None` when this method is called, no state will be\n        written and `self.validated_state` will be set to the run's current state.\n\n        Returns:\n            None\n        \"\"\"\n        # (circular import)\n        from prefect.server.api.server import is_client_retryable_exception\n\n        try:\n            await self._validate_proposed_state()\n            return\n        except Exception as exc:\n            logger.exception(\"Encountered error during state validation\")\n            self.proposed_state = None\n\n            if is_client_retryable_exception(exc):\n                # Do not capture retryable database exceptions, this exception will be\n                # raised as a 503 in the API layer\n                raise\n\n            reason = f\"Error validating state: {exc!r}\"\n            self.response_status = SetStateStatus.ABORT\n            self.response_details = StateAbortDetails(reason=reason)\n\n    @inject_db\n    async def _validate_proposed_state(\n        self,\n        db: PrefectDBInterface,\n    ):\n        if self.proposed_state is None:\n            validated_orm_state = self.run.state\n            # We cannot access `self.run.state.data` directly for unknown reasons\n            state_data = (\n                (\n                    await artifacts.read_artifact(\n                        self.session, self.run.state.result_artifact_id\n                    )\n                ).data\n                if self.run.state.result_artifact_id\n                else None\n            )\n        else:\n            state_payload = self.proposed_state.dict(shallow=True)\n            state_data = state_payload.pop(\"data\", None)\n\n            if state_data is not None and not (\n                isinstance(state_data, dict) and state_data.get(\"type\") == \"unpersisted\"\n            ):\n                state_result_artifact = core.Artifact.from_result(state_data)\n                state_result_artifact.flow_run_id = self.run.id\n                await artifacts.create_artifact(self.session, state_result_artifact)\n                state_payload[\"result_artifact_id\"] = state_result_artifact.id\n\n            validated_orm_state = db.FlowRunState(\n                flow_run_id=self.run.id,\n                **state_payload,\n            )\n\n        self.session.add(validated_orm_state)\n        self.run.set_state(validated_orm_state)\n\n        await self.session.flush()\n        if validated_orm_state:\n            self.validated_state = states.State.from_orm_without_result(\n                validated_orm_state, with_data=state_data\n            )\n        else:\n            self.validated_state = None\n\n    def safe_copy(self):\n        \"\"\"\n        Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n        Orchestration rules govern state transitions using information stored in\n        an `OrchestrationContext`. However, mutating objects stored on the context\n        directly can have unintended side-effects. 
To guard against this,\n        `self.safe_copy` can be used to pass information to orchestration rules\n        without risking mutation.\n\n        Note:\n            `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n        Returns:\n            A mutation-safe copy of `FlowOrchestrationContext`\n        \"\"\"\n\n        return super().safe_copy()\n\n    @property\n    def run_settings(self) -> Dict:\n        \"\"\"Run-level settings used to orchestrate the state transition.\"\"\"\n\n        return self.run.empirical_policy\n\n    async def task_run(self):\n        return None\n\n    async def flow_run(self):\n        return self.run\n
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext.run_settings","title":"run_settings: Dict property","text":"

    Run-level settings used to orchestrate the state transition.

    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext.validate_proposed_state","title":"validate_proposed_state async","text":"

    Validates a proposed state by committing it to the database.

    After the FlowOrchestrationContext is governed by orchestration rules, the proposed state can be validated: the proposed state is added to the current SQLAlchemy session and is flushed. self.validated_state is set to the flushed state. The state on the run is set to the validated state as well.

    If the proposed state is None when this method is called, no state will be written and self.validated_state will be set to the run's current state.

    Returns:

    Type Description

    None

    Source code in prefect/server/orchestration/rules.py
    @inject_db\nasync def validate_proposed_state(\n    self,\n    db: PrefectDBInterface,\n):\n    \"\"\"\n    Validates a proposed state by committing it to the database.\n\n    After the `FlowOrchestrationContext` is governed by orchestration rules, the\n    proposed state can be validated: the proposed state is added to the current\n    SQLAlchemy session and is flushed. `self.validated_state` set to the flushed\n    state. The state on the run is set to the validated state as well.\n\n    If the proposed state is `None` when this method is called, no state will be\n    written and `self.validated_state` will be set to the run's current state.\n\n    Returns:\n        None\n    \"\"\"\n    # (circular import)\n    from prefect.server.api.server import is_client_retryable_exception\n\n    try:\n        await self._validate_proposed_state()\n        return\n    except Exception as exc:\n        logger.exception(\"Encountered error during state validation\")\n        self.proposed_state = None\n\n        if is_client_retryable_exception(exc):\n            # Do not capture retryable database exceptions, this exception will be\n            # raised as a 503 in the API layer\n            raise\n\n        reason = f\"Error validating state: {exc!r}\"\n        self.response_status = SetStateStatus.ABORT\n        self.response_details = StateAbortDetails(reason=reason)\n
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext.safe_copy","title":"safe_copy","text":"

    Creates a mostly-mutation-safe copy for use in orchestration rules.

    Orchestration rules govern state transitions using information stored in an OrchestrationContext. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, self.safe_copy can be used to pass information to orchestration rules without risking mutation.

    Note

    self.run is an ORM model, and even when copied is unsafe to mutate

    Returns:

    Type Description

    A mutation-safe copy of FlowOrchestrationContext

    Source code in prefect/server/orchestration/rules.py
    def safe_copy(self):\n    \"\"\"\n    Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n    Orchestration rules govern state transitions using information stored in\n    an `OrchestrationContext`. However, mutating objects stored on the context\n    directly can have unintended side-effects. To guard against this,\n    `self.safe_copy` can be used to pass information to orchestration rules\n    without risking mutation.\n\n    Note:\n        `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n    Returns:\n        A mutation-safe copy of `FlowOrchestrationContext`\n    \"\"\"\n\n    return super().safe_copy()\n
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext","title":"TaskOrchestrationContext","text":"

    Bases: OrchestrationContext

    A container for a task run state transition, governed by orchestration rules.

    When a task run attempts to change state, Prefect REST API has an opportunity to decide whether this transition can proceed. All the relevant information associated with the state transition is stored in an OrchestrationContext, which is subsequently governed by nested orchestration rules implemented using the BaseOrchestrationRule ABC.

    TaskOrchestrationContext introduces the concept of a state being None in the context of an intended state transition. An initial state can be None if a run is attempting to set a state for the first time. The proposed state might be None if a rule governing the transition determines that no state change should occur at all and nothing is written to the database.

    Attributes:

    - session: a SQLAlchemy database session
    - run (Any): the task run attempting to change state
    - initial_state (Any): the initial state of the run
    - proposed_state (Any): the proposed state the run is transitioning into
    - validated_state (Any): a proposed state that has been committed to the database
    - rule_signature (Any): a record of rules that have fired on entry into a managed context, currently only used for debugging purposes
    - finalization_signature (Any): a record of rules that have fired on exit from a managed context, currently only used for debugging purposes
    - response_status (Any): a SetStateStatus object used to build the API response
    - response_details (Any): a StateResponseDetails object used to build the API response

    Parameters:

    - session: a SQLAlchemy database session (required)
    - run: the task run attempting to change state (required)
    - initial_state: the initial state of a run (required)
    - proposed_state: the proposed state a run is transitioning into (required)

    Source code in prefect/server/orchestration/rules.py
    class TaskOrchestrationContext(OrchestrationContext):\n    \"\"\"\n    A container for a task run state transition, governed by orchestration rules.\n\n    When a task- run attempts to change state, Prefect REST API has an opportunity\n    to decide whether this transition can proceed. All the relevant information\n    associated with the state transition is stored in an `OrchestrationContext`,\n    which is subsequently governed by nested orchestration rules implemented using\n    the `BaseOrchestrationRule` ABC.\n\n    `TaskOrchestrationContext` introduces the concept of a state being `None` in the\n    context of an intended state transition. An initial state can be `None` if a run\n    is is attempting to set a state for the first time. The proposed state might be\n    `None` if a rule governing the transition determines that no state change\n    should occur at all and nothing is written to the database.\n\n    Attributes:\n        session: a SQLAlchemy database session\n        run: the task run attempting to change state\n        initial_state: the initial state of the run\n        proposed_state: the proposed state the run is transitioning into\n        validated_state: a proposed state that has committed to the database\n        rule_signature: a record of rules that have fired on entry into a\n            managed context, currently only used for debugging purposes\n        finalization_signature: a record of rules that have fired on exit from a\n            managed context, currently only used for debugging purposes\n        response_status: a SetStateStatus object used to build the API response\n        response_details:a StateResponseDetails object use to build the API response\n\n    Args:\n        session: a SQLAlchemy database session\n        run: the task run attempting to change state\n        initial_state: the initial state of a run\n        proposed_state: the proposed state a run is transitioning into\n    \"\"\"\n\n    # run: db.TaskRun = ...\n    run: Any = ...\n\n    @inject_db\n    async def validate_proposed_state(\n        self,\n        db: PrefectDBInterface,\n    ):\n        \"\"\"\n        Validates a proposed state by committing it to the database.\n\n        After the `TaskOrchestrationContext` is governed by orchestration rules, the\n        proposed state can be validated: the proposed state is added to the current\n        SQLAlchemy session and is flushed. `self.validated_state` set to the flushed\n        state. 
The state on the run is set to the validated state as well.\n\n        If the proposed state is `None` when this method is called, no state will be\n        written and `self.validated_state` will be set to the run's current state.\n\n        Returns:\n            None\n        \"\"\"\n        # (circular import)\n        from prefect.server.api.server import is_client_retryable_exception\n\n        try:\n            await self._validate_proposed_state()\n            return\n        except Exception as exc:\n            logger.exception(\"Encountered error during state validation\")\n            self.proposed_state = None\n\n            if is_client_retryable_exception(exc):\n                # Do not capture retryable database exceptions, this exception will be\n                # raised as a 503 in the API layer\n                raise\n\n            reason = f\"Error validating state: {exc!r}\"\n            self.response_status = SetStateStatus.ABORT\n            self.response_details = StateAbortDetails(reason=reason)\n\n    @inject_db\n    async def _validate_proposed_state(\n        self,\n        db: PrefectDBInterface,\n    ):\n        if self.proposed_state is None:\n            validated_orm_state = self.run.state\n            # We cannot access `self.run.state.data` directly for unknown reasons\n            state_data = (\n                (\n                    await artifacts.read_artifact(\n                        self.session, self.run.state.result_artifact_id\n                    )\n                ).data\n                if self.run.state.result_artifact_id\n                else None\n            )\n        else:\n            state_payload = self.proposed_state.dict(shallow=True)\n            state_data = state_payload.pop(\"data\", None)\n\n            if state_data is not None and not (\n                isinstance(state_data, dict) and state_data.get(\"type\") == \"unpersisted\"\n            ):\n                state_result_artifact = core.Artifact.from_result(state_data)\n                state_result_artifact.task_run_id = self.run.id\n\n                if self.run.flow_run_id is not None:\n                    flow_run = await self.flow_run()\n                    state_result_artifact.flow_run_id = flow_run.id\n\n                await artifacts.create_artifact(self.session, state_result_artifact)\n                state_payload[\"result_artifact_id\"] = state_result_artifact.id\n\n            validated_orm_state = db.TaskRunState(\n                task_run_id=self.run.id,\n                **state_payload,\n            )\n\n        self.session.add(validated_orm_state)\n        self.run.set_state(validated_orm_state)\n\n        await self.session.flush()\n        if validated_orm_state:\n            self.validated_state = states.State.from_orm_without_result(\n                validated_orm_state, with_data=state_data\n            )\n        else:\n            self.validated_state = None\n\n    def safe_copy(self):\n        \"\"\"\n        Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n        Orchestration rules govern state transitions using information stored in\n        an `OrchestrationContext`. However, mutating objects stored on the context\n        directly can have unintended side-effects. 
To guard against this,\n        `self.safe_copy` can be used to pass information to orchestration rules\n        without risking mutation.\n\n        Note:\n            `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n        Returns:\n            A mutation-safe copy of `TaskOrchestrationContext`\n        \"\"\"\n\n        return super().safe_copy()\n\n    @property\n    def run_settings(self) -> Dict:\n        \"\"\"Run-level settings used to orchestrate the state transition.\"\"\"\n\n        return self.run.empirical_policy\n\n    async def task_run(self):\n        return self.run\n\n    async def flow_run(self):\n        return await flow_runs.read_flow_run(\n            session=self.session,\n            flow_run_id=self.run.flow_run_id,\n        )\n
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext.run_settings","title":"run_settings: Dict property","text":"

    Run-level settings used to orchestrate the state transition.

    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext.validate_proposed_state","title":"validate_proposed_state async","text":"

    Validates a proposed state by committing it to the database.

    After the TaskOrchestrationContext is governed by orchestration rules, the proposed state can be validated: the proposed state is added to the current SQLAlchemy session and is flushed. self.validated_state is set to the flushed state. The state on the run is set to the validated state as well.

    If the proposed state is None when this method is called, no state will be written and self.validated_state will be set to the run's current state.

    Returns:

    Type Description

    None

    Source code in prefect/server/orchestration/rules.py
    @inject_db\nasync def validate_proposed_state(\n    self,\n    db: PrefectDBInterface,\n):\n    \"\"\"\n    Validates a proposed state by committing it to the database.\n\n    After the `TaskOrchestrationContext` is governed by orchestration rules, the\n    proposed state can be validated: the proposed state is added to the current\n    SQLAlchemy session and is flushed. `self.validated_state` set to the flushed\n    state. The state on the run is set to the validated state as well.\n\n    If the proposed state is `None` when this method is called, no state will be\n    written and `self.validated_state` will be set to the run's current state.\n\n    Returns:\n        None\n    \"\"\"\n    # (circular import)\n    from prefect.server.api.server import is_client_retryable_exception\n\n    try:\n        await self._validate_proposed_state()\n        return\n    except Exception as exc:\n        logger.exception(\"Encountered error during state validation\")\n        self.proposed_state = None\n\n        if is_client_retryable_exception(exc):\n            # Do not capture retryable database exceptions, this exception will be\n            # raised as a 503 in the API layer\n            raise\n\n        reason = f\"Error validating state: {exc!r}\"\n        self.response_status = SetStateStatus.ABORT\n        self.response_details = StateAbortDetails(reason=reason)\n
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext.safe_copy","title":"safe_copy","text":"

    Creates a mostly-mutation-safe copy for use in orchestration rules.

    Orchestration rules govern state transitions using information stored in an OrchestrationContext. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, self.safe_copy can be used to pass information to orchestration rules without risking mutation.

    Note

    self.run is an ORM model, and even when copied is unsafe to mutate

    Returns:

    Type Description

    A mutation-safe copy of TaskOrchestrationContext

    Source code in prefect/server/orchestration/rules.py
    def safe_copy(self):\n    \"\"\"\n    Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n    Orchestration rules govern state transitions using information stored in\n    an `OrchestrationContext`. However, mutating objects stored on the context\n    directly can have unintended side-effects. To guard against this,\n    `self.safe_copy` can be used to pass information to orchestration rules\n    without risking mutation.\n\n    Note:\n        `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n    Returns:\n        A mutation-safe copy of `TaskOrchestrationContext`\n    \"\"\"\n\n    return super().safe_copy()\n
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule","title":"BaseOrchestrationRule","text":"

    Bases: AbstractAsyncContextManager

    An abstract base class used to implement a discrete piece of orchestration logic.

    An OrchestrationRule is a stateful context manager that directly governs a state transition. Complex orchestration is achieved by nesting multiple rules. Each rule runs against an OrchestrationContext that contains the transition details; this context is then passed to subsequent rules. The context can be modified by hooks that fire before and after a new state is validated and committed to the database. These hooks will fire as long as the state transition is considered \"valid\" and govern a transition by either modifying the proposed state before it is validated or by producing a side-effect.

    A state transition occurs whenever a flow- or task- run changes state, prompting Prefect REST API to decide whether or not this transition can proceed. The current state of the run is referred to as the \"initial state\", and the state a run is attempting to transition into is the \"proposed state\". Together, the initial state transitioning into the proposed state is the intended transition that is governed by these orchestration rules. After using rules to enter a runtime context, the OrchestrationContext will contain a proposed state that has been governed by each rule, and at that point can validate the proposed state and commit it to the database. The validated state will be set on the context as context.validated_state, and rules will call the self.after_transition hook upon exiting the managed context.

    Examples:

    Create a rule:\n\n>>> class BasicRule(BaseOrchestrationRule):\n>>>     # allowed initial state types\n>>>     FROM_STATES = [StateType.RUNNING]\n>>>     # allowed proposed state types\n>>>     TO_STATES = [StateType.COMPLETED, StateType.FAILED]\n>>>\n>>>     async def before_transition(initial_state, proposed_state, ctx):\n>>>         # side effects and proposed state mutation can happen here\n>>>         ...\n>>>\n>>>     async def after_transition(initial_state, validated_state, ctx):\n>>>         # operations on states that have been validated can happen here\n>>>         ...\n>>>\n>>>     async def cleanup(intitial_state, validated_state, ctx):\n>>>         # reverts side effects generated by `before_transition` if necessary\n>>>         ...\n\nUse a rule:\n\n>>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n>>> async with BasicRule(context, *intended_transition):\n>>>     # context.proposed_state has been governed by BasicRule\n>>>     ...\n\nUse multiple rules:\n\n>>> rules = [BasicRule, BasicRule]\n>>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n>>> async with contextlib.AsyncExitStack() as stack:\n>>>     for rule in rules:\n>>>         stack.enter_async_context(rule(context, *intended_transition))\n>>>\n>>>     # context.proposed_state has been governed by all rules\n>>>     ...\n

    Attributes:

    - FROM_STATES (Iterable): list of valid initial state types this rule governs
    - TO_STATES (Iterable): list of valid proposed state types this rule governs
    - context: the orchestration context
    - from_state_type: the state type a run is currently in
    - to_state_type: the intended proposed state type prior to any orchestration

    Parameters:

    - context (OrchestrationContext): a FlowOrchestrationContext or TaskOrchestrationContext that is passed between rules (required)
    - from_state_type (Optional[StateType]): the state type of the initial state of a run; if this state type is not contained in FROM_STATES, no hooks will fire (required)
    - to_state_type (Optional[StateType]): the state type of the proposed state before orchestration; if this state type is not contained in TO_STATES, no hooks will fire (required)

    Source code in prefect/server/orchestration/rules.py
    class BaseOrchestrationRule(contextlib.AbstractAsyncContextManager):\n    \"\"\"\n    An abstract base class used to implement a discrete piece of orchestration logic.\n\n    An `OrchestrationRule` is a stateful context manager that directly governs a state\n    transition. Complex orchestration is achieved by nesting multiple rules.\n    Each rule runs against an `OrchestrationContext` that contains the transition\n    details; this context is then passed to subsequent rules. The context can be\n    modified by hooks that fire before and after a new state is validated and committed\n    to the database. These hooks will fire as long as the state transition is\n    considered \"valid\" and govern a transition by either modifying the proposed state\n    before it is validated or by producing a side-effect.\n\n    A state transition occurs whenever a flow- or task- run changes state, prompting\n    Prefect REST API to decide whether or not this transition can proceed. The current state of\n    the run is referred to as the \"initial state\", and the state a run is\n    attempting to transition into is the \"proposed state\". Together, the initial state\n    transitioning into the proposed state is the intended transition that is governed\n    by these orchestration rules. After using rules to enter a runtime context, the\n    `OrchestrationContext` will contain a proposed state that has been governed by\n    each rule, and at that point can validate the proposed state and commit it to\n    the database. The validated state will be set on the context as\n    `context.validated_state`, and rules will call the `self.after_transition` hook\n    upon exiting the managed context.\n\n    Examples:\n\n        Create a rule:\n\n        >>> class BasicRule(BaseOrchestrationRule):\n        >>>     # allowed initial state types\n        >>>     FROM_STATES = [StateType.RUNNING]\n        >>>     # allowed proposed state types\n        >>>     TO_STATES = [StateType.COMPLETED, StateType.FAILED]\n        >>>\n        >>>     async def before_transition(initial_state, proposed_state, ctx):\n        >>>         # side effects and proposed state mutation can happen here\n        >>>         ...\n        >>>\n        >>>     async def after_transition(initial_state, validated_state, ctx):\n        >>>         # operations on states that have been validated can happen here\n        >>>         ...\n        >>>\n        >>>     async def cleanup(intitial_state, validated_state, ctx):\n        >>>         # reverts side effects generated by `before_transition` if necessary\n        >>>         ...\n\n        Use a rule:\n\n        >>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n        >>> async with BasicRule(context, *intended_transition):\n        >>>     # context.proposed_state has been governed by BasicRule\n        >>>     ...\n\n        Use multiple rules:\n\n        >>> rules = [BasicRule, BasicRule]\n        >>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n        >>> async with contextlib.AsyncExitStack() as stack:\n        >>>     for rule in rules:\n        >>>         stack.enter_async_context(rule(context, *intended_transition))\n        >>>\n        >>>     # context.proposed_state has been governed by all rules\n        >>>     ...\n\n    Attributes:\n        FROM_STATES: list of valid initial state types this rule governs\n        TO_STATES: list of valid proposed state types this rule governs\n        context: the orchestration context\n        
from_state_type: the state type a run is currently in\n        to_state_type: the intended proposed state type prior to any orchestration\n\n    Args:\n        context: A `FlowOrchestrationContext` or `TaskOrchestrationContext` that is\n            passed between rules\n        from_state_type: The state type of the initial state of a run, if this\n            state type is not contained in `FROM_STATES`, no hooks will fire\n        to_state_type: The state type of the proposed state before orchestration, if\n            this state type is not contained in `TO_STATES`, no hooks will fire\n    \"\"\"\n\n    FROM_STATES: Iterable = []\n    TO_STATES: Iterable = []\n\n    def __init__(\n        self,\n        context: OrchestrationContext,\n        from_state_type: Optional[states.StateType],\n        to_state_type: Optional[states.StateType],\n    ):\n        self.context = context\n        self.from_state_type = from_state_type\n        self.to_state_type = to_state_type\n        self._invalid_on_entry = None\n\n    async def __aenter__(self) -> OrchestrationContext:\n        \"\"\"\n        Enter an async runtime context governed by this rule.\n\n        The `with` statement will bind a governed `OrchestrationContext` to the target\n        specified by the `as` clause. If the transition proposed by the\n        `OrchestrationContext` is considered invalid on entry, entering this context\n        will do nothing. Otherwise, `self.before_transition` will fire.\n        \"\"\"\n\n        if await self.invalid():\n            pass\n        else:\n            try:\n                entry_context = self.context.entry_context()\n                await self.before_transition(*entry_context)\n                self.context.rule_signature.append(str(self.__class__))\n            except Exception as before_transition_error:\n                reason = (\n                    f\"Aborting orchestration due to error in {self.__class__!r}:\"\n                    f\" !{before_transition_error!r}\"\n                )\n                logger.exception(\n                    f\"Error running before-transition hook in rule {self.__class__!r}:\"\n                    f\" !{before_transition_error!r}\"\n                )\n\n                self.context.proposed_state = None\n                self.context.response_status = SetStateStatus.ABORT\n                self.context.response_details = StateAbortDetails(reason=reason)\n                self.context.orchestration_error = before_transition_error\n\n        return self.context\n\n    async def __aexit__(\n        self,\n        exc_type: Optional[Type[BaseException]],\n        exc_val: Optional[BaseException],\n        exc_tb: Optional[TracebackType],\n    ) -> None:\n        \"\"\"\n        Exit the async runtime context governed by this rule.\n\n        One of three outcomes can happen upon exiting this rule's context depending on\n        the state of the rule. If the rule was found to be invalid on entry, nothing\n        happens. If the rule was valid on entry and continues to be valid on exit,\n        `self.after_transition` will fire. 
If the rule was valid on entry but invalid\n        on exit, the rule will \"fizzle\" and `self.cleanup` will fire in order to revert\n        any side-effects produced by `self.before_transition`.\n        \"\"\"\n\n        exit_context = self.context.exit_context()\n        if await self.invalid():\n            pass\n        elif await self.fizzled():\n            await self.cleanup(*exit_context)\n        else:\n            await self.after_transition(*exit_context)\n            self.context.finalization_signature.append(str(self.__class__))\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        \"\"\"\n        Implements a hook that can fire before a state is committed to the database.\n\n        This hook may produce side-effects or mutate the proposed state of a\n        transition using one of four methods: `self.reject_transition`,\n        `self.delay_transition`, `self.abort_transition`, and `self.rename_state`.\n\n        Note:\n            As currently implemented, the `before_transition` hook is not\n            perfectly isolated from mutating the transition. It is a standard instance\n            method that has access to `self`, and therefore `self.context`. This should\n            never be modified directly. Furthermore, `context.run` is an ORM model, and\n            mutating the run can also cause unintended writes to the database.\n\n        Args:\n            initial_state: The initial state of a transition\n            proposed_state: The proposed state of a transition\n            context: A safe copy of the `OrchestrationContext`, with the exception of\n                `context.run`, mutating this context will have no effect on the broader\n                orchestration environment.\n\n        Returns:\n            None\n        \"\"\"\n\n    async def after_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        \"\"\"\n        Implements a hook that can fire after a state is committed to the database.\n\n        Args:\n            initial_state: The initial state of a transition\n            validated_state: The governed state that has been committed to the database\n            context: A safe copy of the `OrchestrationContext`, with the exception of\n                `context.run`, mutating this context will have no effect on the broader\n                orchestration environment.\n\n        Returns:\n            None\n        \"\"\"\n\n    async def cleanup(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        \"\"\"\n        Implements a hook that can fire after a state is committed to the database.\n\n        The intended use of this method is to revert side-effects produced by\n        `self.before_transition` when the transition is found to be invalid on exit.\n        This allows multiple rules to be gracefully run in sequence, without logic that\n        keeps track of all other rules that might govern a transition.\n\n        Args:\n            initial_state: The initial state of a transition\n            validated_state: The governed state that has been committed to the database\n            context: A safe copy of the `OrchestrationContext`, 
with the exception of\n                `context.run`, mutating this context will have no effect on the broader\n                orchestration environment.\n\n        Returns:\n            None\n        \"\"\"\n\n    async def invalid(self) -> bool:\n        \"\"\"\n        Determines if a rule is invalid.\n\n        Invalid rules do nothing and no hooks fire upon entering or exiting a governed\n        context. Rules are invalid if the transition states types are not contained in\n        `self.FROM_STATES` and `self.TO_STATES`, or if the context is proposing\n        a transition that differs from the transition the rule was instantiated with.\n\n        Returns:\n            True if the rules in invalid, False otherwise.\n        \"\"\"\n        # invalid and fizzled states are mutually exclusive,\n        # `_invalid_on_entry` holds this statefulness\n        if self.from_state_type not in self.FROM_STATES:\n            self._invalid_on_entry = True\n        if self.to_state_type not in self.TO_STATES:\n            self._invalid_on_entry = True\n\n        if self._invalid_on_entry is None:\n            self._invalid_on_entry = await self.invalid_transition()\n        return self._invalid_on_entry\n\n    async def fizzled(self) -> bool:\n        \"\"\"\n        Determines if a rule is fizzled and side-effects need to be reverted.\n\n        Rules are fizzled if the transitions were valid on entry (thus firing\n        `self.before_transition`) but are invalid upon exiting the governed context,\n        most likely caused by another rule mutating the transition.\n\n        Returns:\n            True if the rule is fizzled, False otherwise.\n        \"\"\"\n\n        if self._invalid_on_entry:\n            return False\n        return await self.invalid_transition()\n\n    async def invalid_transition(self) -> bool:\n        \"\"\"\n        Determines if the transition proposed by the `OrchestrationContext` is invalid.\n\n        If the `OrchestrationContext` is attempting to manage a transition with this\n        rule that differs from the transition the rule was instantiated with, the\n        transition is considered to be invalid. Depending on the context, a rule with an\n        invalid transition is either \"invalid\" or \"fizzled\".\n\n        Returns:\n            True if the transition is invalid, False otherwise.\n        \"\"\"\n\n        initial_state_type = self.context.initial_state_type\n        proposed_state_type = self.context.proposed_state_type\n        return (self.from_state_type != initial_state_type) or (\n            self.to_state_type != proposed_state_type\n        )\n\n    async def reject_transition(self, state: Optional[states.State], reason: str):\n        \"\"\"\n        Rejects a proposed transition before the transition is validated.\n\n        This method will reject a proposed transition, mutating the proposed state to\n        the provided `state`. A reason for rejecting the transition is also passed on\n        to the `OrchestrationContext`. Rules that reject the transition will not fizzle,\n        despite the proposed state type changing.\n\n        Args:\n            state: The new proposed state. 
If `None`, the current run state will be\n                returned in the result instead.\n            reason: The reason for rejecting the transition\n        \"\"\"\n\n        # don't run if the transition is already validated\n        if self.context.validated_state:\n            raise RuntimeError(\"The transition is already validated\")\n\n        # the current state will be used if a new one is not provided\n        if state is None:\n            if self.from_state_type is None:\n                raise OrchestrationError(\n                    \"The current run has no state; this transition cannot be \"\n                    \"rejected without providing a new state.\"\n                )\n            self.to_state_type = None\n            self.context.proposed_state = None\n        else:\n            # a rule that mutates state should not fizzle itself\n            self.to_state_type = state.type\n            self.context.proposed_state = state\n\n        self.context.response_status = SetStateStatus.REJECT\n        self.context.response_details = StateRejectDetails(reason=reason)\n\n    async def delay_transition(\n        self,\n        delay_seconds: int,\n        reason: str,\n    ):\n        \"\"\"\n        Delays a proposed transition before the transition is validated.\n\n        This method will delay a proposed transition, setting the proposed state to\n        `None`, signaling to the `OrchestrationContext` that no state should be\n        written to the database. The number of seconds a transition should be delayed is\n        passed to the `OrchestrationContext`. A reason for delaying the transition is\n        also provided. Rules that delay the transition will not fizzle, despite the\n        proposed state type changing.\n\n        Args:\n            delay_seconds: The number of seconds the transition should be delayed\n            reason: The reason for delaying the transition\n        \"\"\"\n\n        # don't run if the transition is already validated\n        if self.context.validated_state:\n            raise RuntimeError(\"The transition is already validated\")\n\n        # a rule that mutates state should not fizzle itself\n        self.to_state_type = None\n        self.context.proposed_state = None\n        self.context.response_status = SetStateStatus.WAIT\n        self.context.response_details = StateWaitDetails(\n            delay_seconds=delay_seconds, reason=reason\n        )\n\n    async def abort_transition(self, reason: str):\n        \"\"\"\n        Aborts a proposed transition before the transition is validated.\n\n        This method will abort a proposed transition, expecting no further action to\n        occur for this run. The proposed state is set to `None`, signaling to the\n        `OrchestrationContext` that no state should be written to the database. A\n        reason for aborting the transition is also provided. 
Rules that abort the\n        transition will not fizzle, despite the proposed state type changing.\n\n        Args:\n            reason: The reason for aborting the transition\n        \"\"\"\n\n        # don't run if the transition is already validated\n        if self.context.validated_state:\n            raise RuntimeError(\"The transition is already validated\")\n\n        # a rule that mutates state should not fizzle itself\n        self.to_state_type = None\n        self.context.proposed_state = None\n        self.context.response_status = SetStateStatus.ABORT\n        self.context.response_details = StateAbortDetails(reason=reason)\n\n    async def rename_state(self, state_name):\n        \"\"\"\n        Sets the \"name\" attribute on a proposed state.\n\n        The name of a state is an annotation intended to provide rich, human-readable\n        context for how a run is progressing. This method only updates the name and not\n        the canonical state TYPE, and will not fizzle or invalidate any other rules\n        that might govern this state transition.\n        \"\"\"\n        if self.context.proposed_state is not None:\n            self.context.proposed_state.name = state_name\n\n    async def update_context_parameters(self, key, value):\n        \"\"\"\n        Updates the \"parameters\" dictionary attribute with the specified key-value pair.\n\n        This mechanism streamlines the process of passing messages and information\n        between orchestration rules if necessary and is simpler and more ephemeral than\n        message-passing via the database or some other side-effect. This mechanism can\n        be used to break up large rules for ease of testing or comprehension, but note\n        that any rules coupled this way (or any other way) are no longer independent and\n        the order in which they appear in the orchestration policy priority will matter.\n        \"\"\"\n\n        self.context.parameters.update({key: value})\n
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.before_transition","title":"before_transition async","text":"

    Implements a hook that can fire before a state is committed to the database.

    This hook may produce side-effects or mutate the proposed state of a transition using one of four methods: self.reject_transition, self.delay_transition, self.abort_transition, and self.rename_state.

    Note

    As currently implemented, the before_transition hook is not perfectly isolated from mutating the transition. It is a standard instance method that has access to self, and therefore self.context. This should never be modified directly. Furthermore, context.run is an ORM model, and mutating the run can also cause unintended writes to the database.

    Parameters:

    Name Type Description Default initial_state Optional[State]

    The initial state of a transition

    required proposed_state Optional[State]

    The proposed state of a transition

    required context OrchestrationContext

    A safe copy of the OrchestrationContext; with the exception of context.run, mutating this context will have no effect on the broader orchestration environment.

    required

    Returns:

    Type Description None

    None

    Source code in prefect/server/orchestration/rules.py
    async def before_transition(\n    self,\n    initial_state: Optional[states.State],\n    proposed_state: Optional[states.State],\n    context: OrchestrationContext,\n) -> None:\n    \"\"\"\n    Implements a hook that can fire before a state is committed to the database.\n\n    This hook may produce side-effects or mutate the proposed state of a\n    transition using one of four methods: `self.reject_transition`,\n    `self.delay_transition`, `self.abort_transition`, and `self.rename_state`.\n\n    Note:\n        As currently implemented, the `before_transition` hook is not\n        perfectly isolated from mutating the transition. It is a standard instance\n        method that has access to `self`, and therefore `self.context`. This should\n        never be modified directly. Furthermore, `context.run` is an ORM model, and\n        mutating the run can also cause unintended writes to the database.\n\n    Args:\n        initial_state: The initial state of a transition\n        proposed_state: The proposed state of a transition\n        context: A safe copy of the `OrchestrationContext`, with the exception of\n            `context.run`, mutating this context will have no effect on the broader\n            orchestration environment.\n\n    Returns:\n        None\n    \"\"\"\n
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.after_transition","title":"after_transition async","text":"

    Implements a hook that can fire after a state is committed to the database.

    Parameters:

    Name Type Description Default initial_state Optional[State]

    The initial state of a transition

    required validated_state Optional[State]

    The governed state that has been committed to the database

    required context OrchestrationContext

    A safe copy of the OrchestrationContext; with the exception of context.run, mutating this context will have no effect on the broader orchestration environment.

    required

    Returns:

    Type Description None

    None

    Source code in prefect/server/orchestration/rules.py
    async def after_transition(\n    self,\n    initial_state: Optional[states.State],\n    validated_state: Optional[states.State],\n    context: OrchestrationContext,\n) -> None:\n    \"\"\"\n    Implements a hook that can fire after a state is committed to the database.\n\n    Args:\n        initial_state: The initial state of a transition\n        validated_state: The governed state that has been committed to the database\n        context: A safe copy of the `OrchestrationContext`, with the exception of\n            `context.run`, mutating this context will have no effect on the broader\n            orchestration environment.\n\n    Returns:\n        None\n    \"\"\"\n
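    For illustration, here is a minimal sketch of a rule that only reacts once a state has been committed. The rule name, the print-style logging, and the context.run.id access are illustrative assumptions; only the hook signature follows the source above.

    from prefect.server.orchestration.rules import BaseOrchestrationRule
    from prefect.server.schemas.states import StateType

    class LogCompletionsRule(BaseOrchestrationRule):
        FROM_STATES = [StateType.RUNNING]
        TO_STATES = [StateType.COMPLETED]

        async def after_transition(self, initial_state, validated_state, context):
            # validated_state has already been committed to the database here
            if validated_state is not None:
                # illustrative: assumes context.run exposes an id attribute
                print(f"run {context.run.id} was committed as {validated_state.name}")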
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.cleanup","title":"cleanup async","text":"

    Implements a hook that can fire after a state is committed to the database.

    The intended use of this method is to revert side-effects produced by self.before_transition when the transition is found to be invalid on exit. This allows multiple rules to be gracefully run in sequence, without logic that keeps track of all other rules that might govern a transition.

    Parameters:

    Name Type Description Default initial_state Optional[State]

    The initial state of a transition

    required validated_state Optional[State]

    The governed state that has been committed to the database

    required context OrchestrationContext

    A safe copy of the OrchestrationContext; with the exception of context.run, mutating this context will have no effect on the broader orchestration environment.

    required

    Returns:

    Type Description None

    None

    Source code in prefect/server/orchestration/rules.py
    async def cleanup(\n    self,\n    initial_state: Optional[states.State],\n    validated_state: Optional[states.State],\n    context: OrchestrationContext,\n) -> None:\n    \"\"\"\n    Implements a hook that can fire after a state is committed to the database.\n\n    The intended use of this method is to revert side-effects produced by\n    `self.before_transition` when the transition is found to be invalid on exit.\n    This allows multiple rules to be gracefully run in sequence, without logic that\n    keeps track of all other rules that might govern a transition.\n\n    Args:\n        initial_state: The initial state of a transition\n        validated_state: The governed state that has been committed to the database\n        context: A safe copy of the `OrchestrationContext`, with the exception of\n            `context.run`, mutating this context will have no effect on the broader\n            orchestration environment.\n\n    Returns:\n        None\n    \"\"\"\n
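    As a concrete sketch, a rule can pair a side effect in before_transition with a compensating cleanup so that a later rule invalidating the transition leaves no trace. The acquire_slot and release_slot helpers are hypothetical placeholders; the hook signatures follow the source above.

    from prefect.server.orchestration.rules import BaseOrchestrationRule
    from prefect.server.schemas.states import StateType

    class ReserveSlotRule(BaseOrchestrationRule):
        FROM_STATES = [StateType.PENDING]
        TO_STATES = [StateType.RUNNING]

        async def before_transition(self, initial_state, proposed_state, context):
            # side effect: reserve a slot before the run is allowed to start
            await acquire_slot(context.run.id)  # hypothetical helper

        async def cleanup(self, initial_state, validated_state, context):
            # a later rule invalidated the transition, so undo the reservation
            await release_slot(context.run.id)  # hypothetical helper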
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.invalid","title":"invalid async","text":"

    Determines if a rule is invalid.

    Invalid rules do nothing and no hooks fire upon entering or exiting a governed context. Rules are invalid if the transition's state types are not contained in self.FROM_STATES and self.TO_STATES, or if the context is proposing a transition that differs from the transition the rule was instantiated with.

    Returns:

    Type Description bool

    True if the rule is invalid, False otherwise.

    Source code in prefect/server/orchestration/rules.py
    async def invalid(self) -> bool:\n    \"\"\"\n    Determines if a rule is invalid.\n\n    Invalid rules do nothing and no hooks fire upon entering or exiting a governed\n    context. Rules are invalid if the transition states types are not contained in\n    `self.FROM_STATES` and `self.TO_STATES`, or if the context is proposing\n    a transition that differs from the transition the rule was instantiated with.\n\n    Returns:\n        True if the rules in invalid, False otherwise.\n    \"\"\"\n    # invalid and fizzled states are mutually exclusive,\n    # `_invalid_on_entry` holds this statefulness\n    if self.from_state_type not in self.FROM_STATES:\n        self._invalid_on_entry = True\n    if self.to_state_type not in self.TO_STATES:\n        self._invalid_on_entry = True\n\n    if self._invalid_on_entry is None:\n        self._invalid_on_entry = await self.invalid_transition()\n    return self._invalid_on_entry\n
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.fizzled","title":"fizzled async","text":"

    Determines if a rule is fizzled and side-effects need to be reverted.

    Rules are fizzled if the transition was valid on entry (thus firing self.before_transition) but is invalid upon exiting the governed context, most likely because another rule mutated the transition.

    Returns:

    Type Description bool

    True if the rule is fizzled, False otherwise.

    Source code in prefect/server/orchestration/rules.py
    async def fizzled(self) -> bool:\n    \"\"\"\n    Determines if a rule is fizzled and side-effects need to be reverted.\n\n    Rules are fizzled if the transitions were valid on entry (thus firing\n    `self.before_transition`) but are invalid upon exiting the governed context,\n    most likely caused by another rule mutating the transition.\n\n    Returns:\n        True if the rule is fizzled, False otherwise.\n    \"\"\"\n\n    if self._invalid_on_entry:\n        return False\n    return await self.invalid_transition()\n
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.invalid_transition","title":"invalid_transition async","text":"

    Determines if the transition proposed by the OrchestrationContext is invalid.

    If the OrchestrationContext is attempting to manage a transition with this rule that differs from the transition the rule was instantiated with, the transition is considered to be invalid. Depending on the context, a rule with an invalid transition is either \"invalid\" or \"fizzled\".

    Returns:

    Type Description bool

    True if the transition is invalid, False otherwise.

    Source code in prefect/server/orchestration/rules.py
    async def invalid_transition(self) -> bool:\n    \"\"\"\n    Determines if the transition proposed by the `OrchestrationContext` is invalid.\n\n    If the `OrchestrationContext` is attempting to manage a transition with this\n    rule that differs from the transition the rule was instantiated with, the\n    transition is considered to be invalid. Depending on the context, a rule with an\n    invalid transition is either \"invalid\" or \"fizzled\".\n\n    Returns:\n        True if the transition is invalid, False otherwise.\n    \"\"\"\n\n    initial_state_type = self.context.initial_state_type\n    proposed_state_type = self.context.proposed_state_type\n    return (self.from_state_type != initial_state_type) or (\n        self.to_state_type != proposed_state_type\n    )\n
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.reject_transition","title":"reject_transition async","text":"

    Rejects a proposed transition before the transition is validated.

    This method will reject a proposed transition, mutating the proposed state to the provided state. A reason for rejecting the transition is also passed on to the OrchestrationContext. Rules that reject the transition will not fizzle, despite the proposed state type changing.

    Parameters:

    Name Type Description Default state Optional[State]

    The new proposed state. If None, the current run state will be returned in the result instead.

    required reason str

    The reason for rejecting the transition

    required Source code in prefect/server/orchestration/rules.py
    async def reject_transition(self, state: Optional[states.State], reason: str):\n    \"\"\"\n    Rejects a proposed transition before the transition is validated.\n\n    This method will reject a proposed transition, mutating the proposed state to\n    the provided `state`. A reason for rejecting the transition is also passed on\n    to the `OrchestrationContext`. Rules that reject the transition will not fizzle,\n    despite the proposed state type changing.\n\n    Args:\n        state: The new proposed state. If `None`, the current run state will be\n            returned in the result instead.\n        reason: The reason for rejecting the transition\n    \"\"\"\n\n    # don't run if the transition is already validated\n    if self.context.validated_state:\n        raise RuntimeError(\"The transition is already validated\")\n\n    # the current state will be used if a new one is not provided\n    if state is None:\n        if self.from_state_type is None:\n            raise OrchestrationError(\n                \"The current run has no state; this transition cannot be \"\n                \"rejected without providing a new state.\"\n            )\n        self.to_state_type = None\n        self.context.proposed_state = None\n    else:\n        # a rule that mutates state should not fizzle itself\n        self.to_state_type = state.type\n        self.context.proposed_state = state\n\n    self.context.response_status = SetStateStatus.REJECT\n    self.context.response_details = StateRejectDetails(reason=reason)\n
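    As a hedged sketch, a rule might veto a proposed transition from within before_transition; per the description above, passing state=None keeps the run in its current state. The rule name and the guarded condition are illustrative only.

    from prefect.server.orchestration.rules import BaseOrchestrationRule
    from prefect.server.schemas.states import StateType

    class RejectDirectResumesRule(BaseOrchestrationRule):
        FROM_STATES = [StateType.PAUSED]
        TO_STATES = [StateType.RUNNING]

        async def before_transition(self, initial_state, proposed_state, context):
            # keep the run in its current (paused) state and record the reason
            await self.reject_transition(
                state=None,
                reason="paused runs may not move directly to RUNNING in this sketch",
            )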
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.delay_transition","title":"delay_transition async","text":"

    Delays a proposed transition before the transition is validated.

    This method will delay a proposed transition, setting the proposed state to None, signaling to the OrchestrationContext that no state should be written to the database. The number of seconds a transition should be delayed is passed to the OrchestrationContext. A reason for delaying the transition is also provided. Rules that delay the transition will not fizzle, despite the proposed state type changing.

    Parameters:

    Name Type Description Default delay_seconds int

    The number of seconds the transition should be delayed

    required reason str

    The reason for delaying the transition

    required Source code in prefect/server/orchestration/rules.py
    async def delay_transition(\n    self,\n    delay_seconds: int,\n    reason: str,\n):\n    \"\"\"\n    Delays a proposed transition before the transition is validated.\n\n    This method will delay a proposed transition, setting the proposed state to\n    `None`, signaling to the `OrchestrationContext` that no state should be\n    written to the database. The number of seconds a transition should be delayed is\n    passed to the `OrchestrationContext`. A reason for delaying the transition is\n    also provided. Rules that delay the transition will not fizzle, despite the\n    proposed state type changing.\n\n    Args:\n        delay_seconds: The number of seconds the transition should be delayed\n        reason: The reason for delaying the transition\n    \"\"\"\n\n    # don't run if the transition is already validated\n    if self.context.validated_state:\n        raise RuntimeError(\"The transition is already validated\")\n\n    # a rule that mutates state should not fizzle itself\n    self.to_state_type = None\n    self.context.proposed_state = None\n    self.context.response_status = SetStateStatus.WAIT\n    self.context.response_details = StateWaitDetails(\n        delay_seconds=delay_seconds, reason=reason\n    )\n
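    For example, a rule could use delay_transition to ask the client to try again later instead of letting a run start immediately. The capacity check is a hypothetical placeholder; only the hook and the delay_transition signature come from the source above.

    from prefect.server.orchestration.rules import BaseOrchestrationRule
    from prefect.server.schemas.states import StateType

    class ThrottleRunStartsRule(BaseOrchestrationRule):
        FROM_STATES = [StateType.PENDING]
        TO_STATES = [StateType.RUNNING]

        async def before_transition(self, initial_state, proposed_state, context):
            if await capacity_exhausted():  # hypothetical predicate
                # no state is written; the caller is told to wait 30 seconds
                await self.delay_transition(
                    delay_seconds=30,
                    reason="throttling run starts in this sketch",
                )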
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.abort_transition","title":"abort_transition async","text":"

    Aborts a proposed transition before the transition is validated.

    This method will abort a proposed transition, expecting no further action to occur for this run. The proposed state is set to None, signaling to the OrchestrationContext that no state should be written to the database. A reason for aborting the transition is also provided. Rules that abort the transition will not fizzle, despite the proposed state type changing.

    Parameters:

    Name Type Description Default reason str

    The reason for aborting the transition

    required Source code in prefect/server/orchestration/rules.py
    async def abort_transition(self, reason: str):\n    \"\"\"\n    Aborts a proposed transition before the transition is validated.\n\n    This method will abort a proposed transition, expecting no further action to\n    occur for this run. The proposed state is set to `None`, signaling to the\n    `OrchestrationContext` that no state should be written to the database. A\n    reason for aborting the transition is also provided. Rules that abort the\n    transition will not fizzle, despite the proposed state type changing.\n\n    Args:\n        reason: The reason for aborting the transition\n    \"\"\"\n\n    # don't run if the transition is already validated\n    if self.context.validated_state:\n        raise RuntimeError(\"The transition is already validated\")\n\n    # a rule that mutates state should not fizzle itself\n    self.to_state_type = None\n    self.context.proposed_state = None\n    self.context.response_status = SetStateStatus.ABORT\n    self.context.response_details = StateAbortDetails(reason=reason)\n
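    Here is a sketch of a rule that gives up on a transition entirely; run_is_orphaned is a hypothetical predicate, while abort_transition behaves as documented above.

    from prefect.server.orchestration.rules import BaseOrchestrationRule
    from prefect.server.schemas.states import StateType

    class AbortOrphanedRunsRule(BaseOrchestrationRule):
        FROM_STATES = [StateType.SCHEDULED]
        TO_STATES = [StateType.PENDING]

        async def before_transition(self, initial_state, proposed_state, context):
            if await run_is_orphaned(context.run):  # hypothetical predicate
                # nothing is written to the database; no further action is expected
                await self.abort_transition(reason="run is orphaned in this sketch")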
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.rename_state","title":"rename_state async","text":"

    Sets the \"name\" attribute on a proposed state.

    The name of a state is an annotation intended to provide rich, human-readable context for how a run is progressing. This method only updates the name and not the canonical state TYPE, and will not fizzle or invalidate any other rules that might govern this state transition.

    Source code in prefect/server/orchestration/rules.py
    async def rename_state(self, state_name):\n    \"\"\"\n    Sets the \"name\" attribute on a proposed state.\n\n    The name of a state is an annotation intended to provide rich, human-readable\n    context for how a run is progressing. This method only updates the name and not\n    the canonical state TYPE, and will not fizzle or invalidate any other rules\n    that might govern this state transition.\n    \"\"\"\n    if self.context.proposed_state is not None:\n        self.context.proposed_state.name = state_name\n
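    For instance, a rule can relabel the proposed state for readability without affecting its type or any other rules. The sketch below assumes context.run exposes a run_count attribute, which is illustrative.

    from prefect.server.orchestration.rules import BaseOrchestrationRule
    from prefect.server.schemas.states import StateType

    class RenameRerunsRule(BaseOrchestrationRule):
        FROM_STATES = [StateType.SCHEDULED]
        TO_STATES = [StateType.RUNNING]

        async def before_transition(self, initial_state, proposed_state, context):
            # only the display name changes; the canonical state type stays RUNNING
            if getattr(context.run, "run_count", 0) > 0:  # assumed attribute
                await self.rename_state("Rerunning")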
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.update_context_parameters","title":"update_context_parameters async","text":"

    Updates the \"parameters\" dictionary attribute with the specified key-value pair.

    This mechanism streamlines the process of passing messages and information between orchestration rules if necessary and is simpler and more ephemeral than message-passing via the database or some other side-effect. This mechanism can be used to break up large rules for ease of testing or comprehension, but note that any rules coupled this way (or any other way) are no longer independent and the order in which they appear in the orchestration policy priority will matter.

    Source code in prefect/server/orchestration/rules.py
    async def update_context_parameters(self, key, value):\n    \"\"\"\n    Updates the \"parameters\" dictionary attribute with the specified key-value pair.\n\n    This mechanism streamlines the process of passing messages and information\n    between orchestration rules if necessary and is simpler and more ephemeral than\n    message-passing via the database or some other side-effect. This mechanism can\n    be used to break up large rules for ease of testing or comprehension, but note\n    that any rules coupled this way (or any other way) are no longer independent and\n    the order in which they appear in the orchestration policy priority will matter.\n    \"\"\"\n\n    self.context.parameters.update({key: value})\n
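    A sketch of two deliberately coupled rules that pass a value through context.parameters; as noted above, coupling rules this way makes their order in the orchestration policy significant. The parameter key and the timestamp attribute are illustrative.

    from prefect.server.orchestration.rules import BaseOrchestrationRule
    from prefect.server.schemas.states import StateType

    class RecordProposedTimeRule(BaseOrchestrationRule):
        FROM_STATES = [StateType.PENDING]
        TO_STATES = [StateType.RUNNING]

        async def before_transition(self, initial_state, proposed_state, context):
            # stash a value for rules that run later in the same policy
            await self.update_context_parameters(
                "proposed-at", getattr(proposed_state, "timestamp", None)  # assumed attribute
            )

    class ReadProposedTimeRule(BaseOrchestrationRule):
        FROM_STATES = [StateType.PENDING]
        TO_STATES = [StateType.RUNNING]

        async def before_transition(self, initial_state, proposed_state, context):
            # must be ordered after RecordProposedTimeRule to see the value
            proposed_at = context.parameters.get("proposed-at")
            ...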
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform","title":"BaseUniversalTransform","text":"

    Bases: AbstractAsyncContextManager

    An abstract base class used to implement privileged bookkeeping logic.

    Warning

    In almost all cases, use the BaseOrchestrationRule base class instead.

    Unlike the orchestration rules implemented with the BaseOrchestrationRule ABC, universal transforms are not stateful, and they fire their before- and after-transition hooks on every state transition unless the proposed state is None, indicating that no state should be written to the database. Because there are no guardrails in place to prevent directly mutating state or other parts of the orchestration context, universal transforms should be used with care.

    Attributes:

    Name Type Description FROM_STATES Iterable

    for compatibility with BaseOrchestrationPolicy

    TO_STATES Iterable

    for compatibility with BaseOrchestrationPolicy

    context

    the orchestration context

    from_state_type

    the state type a run is currently in

    to_state_type

    the intended proposed state type prior to any orchestration

    Parameters:

    Name Type Description Default context OrchestrationContext

    A FlowOrchestrationContext or TaskOrchestrationContext that is passed between transforms

    required Source code in prefect/server/orchestration/rules.py
    class BaseUniversalTransform(contextlib.AbstractAsyncContextManager):\n    \"\"\"\n    An abstract base class used to implement privileged bookkeeping logic.\n\n    Warning:\n        In almost all cases, use the `BaseOrchestrationRule` base class instead.\n\n    Beyond the orchestration rules implemented with the `BaseOrchestrationRule` ABC,\n    Universal transforms are not stateful, and fire their before- and after- transition\n    hooks on every state transition unless the proposed state is `None`, indicating that\n    no state should be written to the database. Because there are no guardrails in place\n    to prevent directly mutating state or other parts of the orchestration context,\n    universal transforms should only be used with care.\n\n    Attributes:\n        FROM_STATES: for compatibility with `BaseOrchestrationPolicy`\n        TO_STATES: for compatibility with `BaseOrchestrationPolicy`\n        context: the orchestration context\n        from_state_type: the state type a run is currently in\n        to_state_type: the intended proposed state type prior to any orchestration\n\n    Args:\n        context: A `FlowOrchestrationContext` or `TaskOrchestrationContext` that is\n            passed between transforms\n    \"\"\"\n\n    # `BaseUniversalTransform` will always fire on non-null transitions\n    FROM_STATES: Iterable = ALL_ORCHESTRATION_STATES\n    TO_STATES: Iterable = ALL_ORCHESTRATION_STATES\n\n    def __init__(\n        self,\n        context: OrchestrationContext,\n        from_state_type: Optional[states.StateType],\n        to_state_type: Optional[states.StateType],\n    ):\n        self.context = context\n        self.from_state_type = from_state_type\n        self.to_state_type = to_state_type\n\n    async def __aenter__(self):\n        \"\"\"\n        Enter an async runtime context governed by this transform.\n\n        The `with` statement will bind a governed `OrchestrationContext` to the target\n        specified by the `as` clause. If the transition proposed by the\n        `OrchestrationContext` has been nullified on entry and `context.proposed_state`\n        is `None`, entering this context will do nothing. Otherwise\n        `self.before_transition` will fire.\n        \"\"\"\n\n        await self.before_transition(self.context)\n        self.context.rule_signature.append(str(self.__class__))\n        return self.context\n\n    async def __aexit__(\n        self,\n        exc_type: Optional[Type[BaseException]],\n        exc_val: Optional[BaseException],\n        exc_tb: Optional[TracebackType],\n    ) -> None:\n        \"\"\"\n        Exit the async runtime context governed by this transform.\n\n        If the transition has been nullified or errorred upon exiting this transforms's context,\n        nothing happens. 
Otherwise, `self.after_transition` will fire on every non-null\n        proposed state.\n        \"\"\"\n\n        if not self.exception_in_transition():\n            await self.after_transition(self.context)\n            self.context.finalization_signature.append(str(self.__class__))\n\n    async def before_transition(self, context) -> None:\n        \"\"\"\n        Implements a hook that fires before a state is committed to the database.\n\n        Args:\n            context: the `OrchestrationContext` that contains transition details\n\n        Returns:\n            None\n        \"\"\"\n\n    async def after_transition(self, context) -> None:\n        \"\"\"\n        Implements a hook that can fire after a state is committed to the database.\n\n        Args:\n            context: the `OrchestrationContext` that contains transition details\n\n        Returns:\n            None\n        \"\"\"\n\n    def nullified_transition(self) -> bool:\n        \"\"\"\n        Determines if the transition has been nullified.\n\n        Transitions are nullified if the proposed state is `None`, indicating that\n        nothing should be written to the database.\n\n        Returns:\n            True if the transition is nullified, False otherwise.\n        \"\"\"\n\n        return self.context.proposed_state is None\n\n    def exception_in_transition(self) -> bool:\n        \"\"\"\n        Determines if the transition has encountered an exception.\n\n        Returns:\n            True if the transition is encountered an exception, False otherwise.\n        \"\"\"\n\n        return self.context.orchestration_error is not None\n
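    As a hedged illustration, a universal transform might perform lightweight bookkeeping on every transition. The counter key is illustrative; the nullified_transition guard follows the behavior described above.

    from prefect.server.orchestration.rules import BaseUniversalTransform

    class CountTransitionsTransform(BaseUniversalTransform):
        async def before_transition(self, context) -> None:
            # fires on every transition; skip transitions that write no state
            if not self.nullified_transition():
                count = context.parameters.get("transition-count", 0)  # illustrative key
                context.parameters["transition-count"] = count + 1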
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.before_transition","title":"before_transition async","text":"

    Implements a hook that fires before a state is committed to the database.

    Parameters:

    Name Type Description Default context

    the OrchestrationContext that contains transition details

    required

    Returns:

    Type Description None

    None

    Source code in prefect/server/orchestration/rules.py
    async def before_transition(self, context) -> None:\n    \"\"\"\n    Implements a hook that fires before a state is committed to the database.\n\n    Args:\n        context: the `OrchestrationContext` that contains transition details\n\n    Returns:\n        None\n    \"\"\"\n
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.after_transition","title":"after_transition async","text":"

    Implements a hook that can fire after a state is committed to the database.

    Parameters:

    Name Type Description Default context

    the OrchestrationContext that contains transition details

    required

    Returns:

    Type Description None

    None

    Source code in prefect/server/orchestration/rules.py
    async def after_transition(self, context) -> None:\n    \"\"\"\n    Implements a hook that can fire after a state is committed to the database.\n\n    Args:\n        context: the `OrchestrationContext` that contains transition details\n\n    Returns:\n        None\n    \"\"\"\n
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.nullified_transition","title":"nullified_transition","text":"

    Determines if the transition has been nullified.

    Transitions are nullified if the proposed state is None, indicating that nothing should be written to the database.

    Returns:

    Type Description bool

    True if the transition is nullified, False otherwise.

    Source code in prefect/server/orchestration/rules.py
    def nullified_transition(self) -> bool:\n    \"\"\"\n    Determines if the transition has been nullified.\n\n    Transitions are nullified if the proposed state is `None`, indicating that\n    nothing should be written to the database.\n\n    Returns:\n        True if the transition is nullified, False otherwise.\n    \"\"\"\n\n    return self.context.proposed_state is None\n
    "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.exception_in_transition","title":"exception_in_transition","text":"

    Determines if the transition has encountered an exception.

    Returns:

    Type Description bool

    True if the transition has encountered an exception, False otherwise.

    Source code in prefect/server/orchestration/rules.py
    def exception_in_transition(self) -> bool:\n    \"\"\"\n    Determines if the transition has encountered an exception.\n\n    Returns:\n        True if the transition is encountered an exception, False otherwise.\n    \"\"\"\n\n    return self.context.orchestration_error is not None\n
    "},{"location":"api-ref/server/schemas/actions/","title":"server.schemas.actions","text":""},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions","title":"prefect.server.schemas.actions","text":"

    Reduced schemas for accepting API actions.

    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactCreate","title":"ArtifactCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create an artifact.

    Source code in prefect/server/schemas/actions.py
    class ArtifactCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create an artifact.\"\"\"\n\n    key: Optional[str] = Field(\n        default=None, description=\"An optional unique reference key for this artifact.\"\n    )\n    type: Optional[str] = Field(\n        default=None,\n        description=(\n            \"An identifier that describes the shape of the data field. e.g. 'result',\"\n            \" 'table', 'markdown'\"\n        ),\n    )\n    description: Optional[str] = Field(\n        default=None, description=\"A markdown-enabled description of the artifact.\"\n    )\n    data: Optional[Union[Dict[str, Any], Any]] = Field(\n        default=None,\n        description=(\n            \"Data associated with the artifact, e.g. a result.; structure depends on\"\n            \" the artifact type.\"\n        ),\n    )\n    metadata_: Optional[Dict[str, str]] = Field(\n        default=None,\n        description=(\n            \"User-defined artifact metadata. Content must be string key and value\"\n            \" pairs.\"\n        ),\n    )\n    flow_run_id: Optional[UUID] = Field(\n        default=None, description=\"The flow run associated with the artifact.\"\n    )\n    task_run_id: Optional[UUID] = Field(\n        default=None, description=\"The task run associated with the artifact.\"\n    )\n\n    @classmethod\n    def from_result(cls, data: Any):\n        artifact_info = dict()\n        if isinstance(data, dict):\n            artifact_key = data.pop(\"artifact_key\", None)\n            if artifact_key:\n                artifact_info[\"key\"] = artifact_key\n\n            artifact_type = data.pop(\"artifact_type\", None)\n            if artifact_type:\n                artifact_info[\"type\"] = artifact_type\n\n            description = data.pop(\"artifact_description\", None)\n            if description:\n                artifact_info[\"description\"] = description\n\n        return cls(data=data, **artifact_info)\n\n    _validate_metadata_length = validator(\"metadata_\")(validate_max_metadata_length)\n\n    _validate_artifact_format = validator(\"key\", allow_reuse=True)(\n        validate_artifact_key\n    )\n
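    A minimal sketch of building this action model directly; the key, data, and the uuid4() flow run ID are illustrative values rather than references to real records.

    from uuid import uuid4

    from prefect.server.schemas.actions import ArtifactCreate

    artifact = ArtifactCreate(
        key="nightly-report",           # optional, lowercase/dash-separated reference key
        type="markdown",
        description="Summary of the nightly run.",
        data="# Nightly report\nAll flows completed.",
        flow_run_id=uuid4(),            # illustrative ID, not an existing flow run
    )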
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactCreate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
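    For example, serializing a model with and without secret fields revealed; the ArtifactCreate instance below is illustrative and has no secret fields, so both calls produce the same output here.

    from prefect.server.schemas.actions import ArtifactCreate

    artifact = ArtifactCreate(key="nightly-report", type="markdown", data="# Report")

    print(artifact.json())                      # SecretStr / SecretBytes obfuscated (default)
    print(artifact.json(include_secrets=True))  # secret values fully revealed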
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactUpdate","title":"ArtifactUpdate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to update an artifact.

    Source code in prefect/server/schemas/actions.py
    class ArtifactUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update an artifact.\"\"\"\n\n    data: Optional[Union[Dict[str, Any], Any]] = Field(None)\n    description: Optional[str] = Field(None)\n    metadata_: Optional[Dict[str, str]] = Field(None)\n\n    _validate_metadata_length = validator(\"metadata_\", allow_reuse=True)(\n        validate_max_metadata_length\n    )\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactUpdate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentCreate","title":"BlockDocumentCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a block document.

    Source code in prefect/server/schemas/actions.py
    class BlockDocumentCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a block document.\"\"\"\n\n    name: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The block document's name. Not required for anonymous block documents.\"\n        ),\n    )\n    data: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The block document's data\"\n    )\n    block_schema_id: UUID = Field(default=..., description=\"A block schema ID\")\n\n    block_type_id: UUID = Field(default=..., description=\"A block type ID\")\n\n    is_anonymous: bool = Field(\n        default=False,\n        description=(\n            \"Whether the block is anonymous (anonymous blocks are usually created by\"\n            \" Prefect automatically)\"\n        ),\n    )\n\n    _validate_name_format = validator(\"name\", allow_reuse=True)(\n        validate_block_document_name\n    )\n\n    @root_validator\n    def validate_name_is_present_if_not_anonymous(cls, values):\n        return validate_name_present_on_nonanonymous_blocks(values)\n
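    A hedged sketch of constructing the create action; the uuid4() schema and type IDs are placeholders, since a real request would reference existing block schema and block type records.

    from uuid import uuid4

    from prefect.server.schemas.actions import BlockDocumentCreate

    block_document = BlockDocumentCreate(
        name="my-storage-credentials",   # required unless is_anonymous=True
        data={"token": "example-token"},
        block_schema_id=uuid4(),         # placeholder IDs; real calls would reference
        block_type_id=uuid4(),           # existing schema and type rows
    )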
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentCreate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentReferenceCreate","title":"BlockDocumentReferenceCreate","text":"

    Bases: ActionBaseModel

    Data used to create block document reference.

    Source code in prefect/server/schemas/actions.py
    class BlockDocumentReferenceCreate(ActionBaseModel):\n    \"\"\"Data used to create block document reference.\"\"\"\n\n    id: UUID = Field(\n        default_factory=uuid4, description=\"The block document reference ID\"\n    )\n    parent_block_document_id: UUID = Field(\n        default=..., description=\"ID of the parent block document\"\n    )\n    reference_block_document_id: UUID = Field(\n        default=..., description=\"ID of the nested block document\"\n    )\n    name: str = Field(\n        default=..., description=\"The name that the reference is nested under\"\n    )\n\n    @root_validator\n    def validate_parent_and_ref_are_different(cls, values):\n        return validate_parent_and_ref_diff(values)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentReferenceCreate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentUpdate","title":"BlockDocumentUpdate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to update a block document.

    Source code in prefect/server/schemas/actions.py
    class BlockDocumentUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a block document.\"\"\"\n\n    block_schema_id: Optional[UUID] = Field(\n        default=None, description=\"A block schema ID\"\n    )\n    data: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The block document's data\"\n    )\n    merge_existing_data: bool = True\n
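    A sketch of a partial update, assuming the import path above; the merge_existing_data flag (True by default) indicates that the supplied keys should be merged into the stored document data rather than replacing it wholesale.

    from prefect.server.schemas.actions import BlockDocumentUpdate

    update = BlockDocumentUpdate(
        data={"token": "rotated-token"},  # only this key is supplied
        merge_existing_data=True,         # default: merge with the existing data
    )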
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentUpdate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockSchemaCreate","title":"BlockSchemaCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a block schema.

    Source code in prefect/server/schemas/actions.py
    class BlockSchemaCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a block schema.\"\"\"\n\n    fields: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The block schema's field schema\"\n    )\n    block_type_id: Optional[UUID] = Field(default=..., description=\"A block type ID\")\n\n    capabilities: List[str] = Field(\n        default_factory=list,\n        description=\"A list of Block capabilities\",\n    )\n    version: str = Field(\n        default=schemas.core.DEFAULT_BLOCK_SCHEMA_VERSION,\n        description=\"Human readable identifier for the block schema\",\n    )\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockSchemaCreate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeCreate","title":"BlockTypeCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a block type.

    Source code in prefect/server/schemas/actions.py
    class BlockTypeCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a block type.\"\"\"\n\n    name: str = Field(default=..., description=\"A block type's name\")\n    slug: str = Field(default=..., description=\"A block type's slug\")\n    logo_url: Optional[HttpUrl] = Field(\n        default=None, description=\"Web URL for the block type's logo\"\n    )\n    documentation_url: Optional[HttpUrl] = Field(\n        default=None, description=\"Web URL for the block type's documentation\"\n    )\n    description: Optional[str] = Field(\n        default=None,\n        description=\"A short blurb about the corresponding block's intended use\",\n    )\n    code_example: Optional[str] = Field(\n        default=None,\n        description=\"A code snippet demonstrating use of the corresponding block\",\n    )\n\n    # validators\n    _validate_slug_format = validator(\"slug\", allow_reuse=True)(\n        validate_block_type_slug\n    )\n\n    _validate_name_characters = validator(\"name\", check_fields=False)(\n        raise_on_name_with_banned_characters\n    )\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeCreate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeUpdate","title":"BlockTypeUpdate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to update a block type.

    Source code in prefect/server/schemas/actions.py
    class BlockTypeUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a block type.\"\"\"\n\n    logo_url: Optional[schemas.core.HttpUrl] = Field(None)\n    documentation_url: Optional[schemas.core.HttpUrl] = Field(None)\n    description: Optional[str] = Field(None)\n    code_example: Optional[str] = Field(None)\n\n    @classmethod\n    def updatable_fields(cls) -> set:\n        return get_class_fields_only(cls)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeUpdate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitCreate","title":"ConcurrencyLimitCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a concurrency limit.

    Source code in prefect/server/schemas/actions.py
    class ConcurrencyLimitCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a concurrency limit.\"\"\"\n\n    tag: str = Field(\n        default=..., description=\"A tag the concurrency limit is applied to.\"\n    )\n    concurrency_limit: int = Field(default=..., description=\"The concurrency limit.\")\n
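    A sketch of a tag-based (v1) concurrency limit, assuming the import path above; the limit applies to runs carrying the given tag.

    from prefect.server.schemas.actions import ConcurrencyLimitCreate

    limit = ConcurrencyLimitCreate(
        tag="database",       # applied to runs tagged "database"
        concurrency_limit=3,  # at most three such runs hold slots at once
    )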
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitCreate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitV2Create","title":"ConcurrencyLimitV2Create","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a v2 concurrency limit.

    Source code in prefect/server/schemas/actions.py
    class ConcurrencyLimitV2Create(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a v2 concurrency limit.\"\"\"\n\n    active: bool = Field(\n        default=True, description=\"Whether the concurrency limit is active.\"\n    )\n    name: str = Field(default=..., description=\"The name of the concurrency limit.\")\n    limit: int = Field(default=..., description=\"The concurrency limit.\")\n    active_slots: int = Field(default=0, description=\"The number of active slots.\")\n    denied_slots: int = Field(default=0, description=\"The number of denied slots.\")\n    slot_decay_per_second: float = Field(\n        default=0,\n        description=\"The decay rate for active slots when used as a rate limit.\",\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
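    A sketch of a v2 concurrency limit, assuming the import path above; per the field description, a non-zero slot_decay_per_second makes active slots decay so the limit behaves as a rate limit.

    from prefect.server.schemas.actions import ConcurrencyLimitV2Create

    rate_limit = ConcurrencyLimitV2Create(
        name="outbound-api-calls",  # the name is checked for banned characters
        limit=10,                   # at most ten slots held at any moment
        slot_decay_per_second=0.5,  # slots decay, turning the limit into a rate limit
    )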
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitV2Create.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitV2Update","title":"ConcurrencyLimitV2Update","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to update a v2 concurrency limit.

    Source code in prefect/server/schemas/actions.py
    class ConcurrencyLimitV2Update(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a v2 concurrency limit.\"\"\"\n\n    active: Optional[bool] = Field(None)\n    name: Optional[str] = Field(None)\n    limit: Optional[NonNegativeInteger] = Field(None)\n    active_slots: Optional[NonNegativeInteger] = Field(None)\n    denied_slots: Optional[NonNegativeInteger] = Field(None)\n    slot_decay_per_second: Optional[NonNegativeFloat] = Field(None)\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitV2Update.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentCreate","title":"DeploymentCreate","text":"

    Bases: DeprecatedInfraOverridesField, ActionBaseModel

    Data used by the Prefect REST API to create a deployment.

    Source code in prefect/server/schemas/actions.py
    class DeploymentCreate(DeprecatedInfraOverridesField, ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a deployment.\"\"\"\n\n    @root_validator\n    def populate_schedules(cls, values):\n        return set_deployment_schedules(values)\n\n    @root_validator(pre=True)\n    def remove_old_fields(cls, values):\n        return remove_old_deployment_fields(values)\n\n    name: str = Field(\n        default=...,\n        description=\"The name of the deployment.\",\n        examples=[\"my-deployment\"],\n    )\n    flow_id: UUID = Field(\n        default=..., description=\"The ID of the flow associated with the deployment.\"\n    )\n    is_schedule_active: bool = Field(\n        default=True, description=\"Whether the schedule is active.\"\n    )\n    paused: bool = Field(\n        default=False, description=\"Whether or not the deployment is paused.\"\n    )\n    schedules: List[DeploymentScheduleCreate] = Field(\n        default_factory=list,\n        description=\"A list of schedules for the deployment.\",\n    )\n    enforce_parameter_schema: bool = Field(\n        default=False,\n        description=(\n            \"Whether or not the deployment should enforce the parameter schema.\"\n        ),\n    )\n    parameter_openapi_schema: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=\"The parameter schema of the flow, including defaults.\",\n    )\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Parameters for flow runs scheduled by the deployment.\",\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of deployment tags.\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    pull_steps: Optional[List[dict]] = Field(None)\n\n    manifest_path: Optional[str] = Field(None)\n    work_queue_name: Optional[str] = Field(None)\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the deployment's work pool.\",\n        examples=[\"my-work-pool\"],\n    )\n    storage_document_id: Optional[UUID] = Field(None)\n    infrastructure_document_id: Optional[UUID] = Field(None)\n    schedule: Optional[schemas.schedules.SCHEDULE_TYPES] = Field(\n        None, description=\"The schedule for the deployment.\"\n    )\n    description: Optional[str] = Field(None)\n    path: Optional[str] = Field(None)\n    version: Optional[str] = Field(None)\n    entrypoint: Optional[str] = Field(None)\n    job_variables: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Overrides for the flow's infrastructure configuration.\",\n    )\n\n    def check_valid_configuration(self, base_job_template: dict):\n        \"\"\"Check that the combination of base_job_template defaults\n        and job_variables conforms to the specified schema.\n        \"\"\"\n        variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n        if variables_schema is not None:\n            # jsonschema considers required fields, even if that field has a default,\n            # to still be required. To get around this we remove the fields from\n            # required if there is a default present.\n            required = variables_schema.get(\"required\")\n            properties = variables_schema.get(\"properties\")\n            if required is not None and properties is not None:\n                for k, v in properties.items():\n                    if \"default\" in v and k in required:\n                        required.remove(k)\n\n            jsonschema.validate(self.job_variables, variables_schema)\n\n    @validator(\"parameters\")\n    def _validate_parameters_conform_to_schema(cls, value, values):\n        return validate_parameters_conform_to_schema(value, values)\n\n    @validator(\"parameter_openapi_schema\")\n    def _validate_parameter_openapi_schema(cls, value, values):\n        return validate_parameter_openapi_schema(value, values)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentCreate.check_valid_configuration","title":"check_valid_configuration","text":"

    Check that the combination of base_job_template defaults and job_variables conforms to the specified schema.

    Source code in prefect/server/schemas/actions.py
    def check_valid_configuration(self, base_job_template: dict):\n    \"\"\"Check that the combination of base_job_template defaults\n    and job_variables conforms to the specified schema.\n    \"\"\"\n    variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n    if variables_schema is not None:\n        # jsonschema considers required fields, even if that field has a default,\n        # to still be required. To get around this we remove the fields from\n        # required if there is a default present.\n        required = variables_schema.get(\"required\")\n        properties = variables_schema.get(\"properties\")\n        if required is not None and properties is not None:\n            for k, v in properties.items():\n                if \"default\" in v and k in required:\n                    required.remove(k)\n\n        jsonschema.validate(self.job_variables, variables_schema)\n
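    A rough sketch of exercising this method, assuming a minimal DeploymentCreate can be constructed from just a name and flow ID (all other fields take the defaults shown above); the base job template below is hypothetical, with a single defaulted variable.

    from uuid import uuid4

    from prefect.server.schemas.actions import DeploymentCreate

    deployment = DeploymentCreate(
        name="my-deployment",
        flow_id=uuid4(),  # placeholder flow ID
        job_variables={"image": "python:3.11"},
    )

    # Hypothetical base job template: "image" is listed as required but has a
    # default, so check_valid_configuration drops it from "required" before
    # handing the job variables to jsonschema.validate.
    base_job_template = {
        "variables": {
            "type": "object",
            "required": ["image"],
            "properties": {"image": {"type": "string", "default": "python:3.10"}},
        }
    }

    deployment.check_valid_configuration(base_job_template)  # raises on a schema violation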
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentCreate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentCreate.schema","title":"schema classmethod","text":"

    Don't use the mixin docstring as the description if this class is missing a docstring.

    Source code in prefect/_internal/compatibility/deprecated.py
    @classmethod\ndef schema(\n    cls, by_alias: bool = True, ref_template: str = default_ref_template\n) -> Dict[str, Any]:\n    \"\"\"\n    Don't use the mixin docstring as the description if this class is missing a\n    docstring.\n    \"\"\"\n    schema = super().schema(by_alias=by_alias, ref_template=ref_template)\n\n    if not cls.__doc__:\n        schema.pop(\"description\", None)\n\n    return schema\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentFlowRunCreate","title":"DeploymentFlowRunCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a flow run from a deployment.

    Source code in prefect/server/schemas/actions.py
    class DeploymentFlowRunCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a flow run from a deployment.\"\"\"\n\n    # FlowRunCreate states must be provided as StateCreate objects\n    state: Optional[StateCreate] = Field(\n        default=None, description=\"The state of the flow run to create\"\n    )\n\n    name: str = Field(\n        default_factory=lambda: generate_slug(2),\n        description=(\n            \"The name of the flow run. Defaults to a random slug if not specified.\"\n        ),\n        examples=[\"my-flow-run\"],\n    )\n    parameters: Dict[str, Any] = Field(default_factory=dict)\n    context: Dict[str, Any] = Field(default_factory=dict)\n    infrastructure_document_id: Optional[UUID] = Field(None)\n    empirical_policy: schemas.core.FlowRunPolicy = Field(\n        default_factory=schemas.core.FlowRunPolicy,\n        description=\"The empirical policy for the flow run.\",\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags for the flow run.\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    idempotency_key: Optional[str] = Field(\n        None,\n        description=(\n            \"An optional idempotency key. If a flow run with the same idempotency key\"\n            \" has already been created, the existing flow run will be returned.\"\n        ),\n    )\n    parent_task_run_id: Optional[UUID] = Field(None)\n    work_queue_name: Optional[str] = Field(None)\n    job_variables: Optional[Dict[str, Any]] = Field(None)\n\n    @validator(\"name\", pre=True)\n    def set_name(cls, name):\n        return get_or_create_run_name(name)\n
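    A sketch of requesting a run from a deployment, assuming the import path above; per the field description, resubmitting with the same idempotency key returns the flow run created by the first request instead of creating a new one.

    from prefect.server.schemas.actions import DeploymentFlowRunCreate

    run_request = DeploymentFlowRunCreate(
        parameters={"batch_size": 100},        # passed through to the flow run
        idempotency_key="nightly-2024-05-03",  # dedupes repeated submissions
        tags=["nightly"],
    )  # name defaults to a random two-word slug when omitted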
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentFlowRunCreate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentUpdate","title":"DeploymentUpdate","text":"

    Bases: DeprecatedInfraOverridesField, ActionBaseModel

    Data used by the Prefect REST API to update a deployment.

    Source code in prefect/server/schemas/actions.py
    class DeploymentUpdate(DeprecatedInfraOverridesField, ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a deployment.\"\"\"\n\n    @root_validator(pre=True)\n    def remove_old_fields(cls, values):\n        return remove_old_deployment_fields(values)\n\n    version: Optional[str] = Field(None)\n    schedule: Optional[schemas.schedules.SCHEDULE_TYPES] = Field(\n        None, description=\"The schedule for the deployment.\"\n    )\n    description: Optional[str] = Field(None)\n    is_schedule_active: bool = Field(\n        default=True, description=\"Whether the schedule is active.\"\n    )\n    paused: bool = Field(\n        default=False, description=\"Whether or not the deployment is paused.\"\n    )\n    schedules: List[DeploymentScheduleCreate] = Field(\n        default_factory=list,\n        description=\"A list of schedules for the deployment.\",\n    )\n    parameters: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=\"Parameters for flow runs scheduled by the deployment.\",\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of deployment tags.\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    work_queue_name: Optional[str] = Field(None)\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the deployment's work pool.\",\n        examples=[\"my-work-pool\"],\n    )\n    path: Optional[str] = Field(None)\n    job_variables: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=\"Overrides for the flow's infrastructure configuration.\",\n    )\n    entrypoint: Optional[str] = Field(None)\n    manifest_path: Optional[str] = Field(None)\n    storage_document_id: Optional[UUID] = Field(None)\n    infrastructure_document_id: Optional[UUID] = Field(None)\n    enforce_parameter_schema: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Whether or not the deployment should enforce the parameter schema.\"\n        ),\n    )\n\n    class Config:\n        allow_population_by_field_name = True\n\n    def check_valid_configuration(self, base_job_template: dict):\n        \"\"\"Check that the combination of base_job_template defaults\n        and job_variables conforms to the specified schema.\n        \"\"\"\n        variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n        if variables_schema is not None:\n            # jsonschema considers required fields, even if that field has a default,\n            # to still be required. To get around this we remove the fields from\n            # required if there is a default present.\n            required = variables_schema.get(\"required\")\n            properties = variables_schema.get(\"properties\")\n            if required is not None and properties is not None:\n                for k, v in properties.items():\n                    if \"default\" in v and k in required:\n                        required.remove(k)\n\n        if variables_schema is not None:\n            jsonschema.validate(self.job_variables, variables_schema)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentUpdate.check_valid_configuration","title":"check_valid_configuration","text":"

    Check that the combination of base_job_template defaults and job_variables conforms to the specified schema.

    Source code in prefect/server/schemas/actions.py
    def check_valid_configuration(self, base_job_template: dict):\n    \"\"\"Check that the combination of base_job_template defaults\n    and job_variables conforms to the specified schema.\n    \"\"\"\n    variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n    if variables_schema is not None:\n        # jsonschema considers required fields, even if that field has a default,\n        # to still be required. To get around this we remove the fields from\n        # required if there is a default present.\n        required = variables_schema.get(\"required\")\n        properties = variables_schema.get(\"properties\")\n        if required is not None and properties is not None:\n            for k, v in properties.items():\n                if \"default\" in v and k in required:\n                    required.remove(k)\n\n    if variables_schema is not None:\n        jsonschema.validate(self.job_variables, variables_schema)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentUpdate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentUpdate.schema","title":"schema classmethod","text":"

    Don't use the mixin docstring as the description if this class is missing a docstring.

    Source code in prefect/_internal/compatibility/deprecated.py
    @classmethod\ndef schema(\n    cls, by_alias: bool = True, ref_template: str = default_ref_template\n) -> Dict[str, Any]:\n    \"\"\"\n    Don't use the mixin docstring as the description if this class is missing a\n    docstring.\n    \"\"\"\n    schema = super().schema(by_alias=by_alias, ref_template=ref_template)\n\n    if not cls.__doc__:\n        schema.pop(\"description\", None)\n\n    return schema\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowCreate","title":"FlowCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a flow.

    Source code in prefect/server/schemas/actions.py
    class FlowCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a flow.\"\"\"\n\n    name: str = Field(\n        default=..., description=\"The name of the flow\", examples=[\"my-flow\"]\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of flow tags\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowCreate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunCreate","title":"FlowRunCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a flow run.

    Source code in prefect/server/schemas/actions.py
    class FlowRunCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a flow run.\"\"\"\n\n    # FlowRunCreate states must be provided as StateCreate objects\n    state: Optional[StateCreate] = Field(\n        default=None, description=\"The state of the flow run to create\"\n    )\n\n    name: str = Field(\n        default_factory=lambda: generate_slug(2),\n        description=(\n            \"The name of the flow run. Defaults to a random slug if not specified.\"\n        ),\n        examples=[\"my-flow-run\"],\n    )\n    flow_id: UUID = Field(default=..., description=\"The id of the flow being run.\")\n    flow_version: Optional[str] = Field(\n        default=None, description=\"The version of the flow being run.\"\n    )\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict,\n    )\n    context: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"The context of the flow run.\",\n    )\n    parent_task_run_id: Optional[UUID] = Field(None)\n    infrastructure_document_id: Optional[UUID] = Field(None)\n    empirical_policy: schemas.core.FlowRunPolicy = Field(\n        default_factory=schemas.core.FlowRunPolicy,\n        description=\"The empirical policy for the flow run.\",\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags for the flow run.\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    idempotency_key: Optional[str] = Field(\n        None,\n        description=(\n            \"An optional idempotency key. If a flow run with the same idempotency key\"\n            \" has already been created, the existing flow run will be returned.\"\n        ),\n    )\n\n    # DEPRECATED\n\n    deployment_id: Optional[UUID] = Field(\n        None,\n        description=(\n            \"DEPRECATED: The id of the deployment associated with this flow run, if\"\n            \" available.\"\n        ),\n        deprecated=True,\n    )\n\n    class Config(ActionBaseModel.Config):\n        json_dumps = orjson_dumps_extra_compatible\n\n    @validator(\"name\", pre=True)\n    def set_name(cls, name):\n        return get_or_create_run_name(name)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunCreate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyCreate","title":"FlowRunNotificationPolicyCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a flow run notification policy.

    Source code in prefect/server/schemas/actions.py
    class FlowRunNotificationPolicyCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a flow run notification policy.\"\"\"\n\n    is_active: bool = Field(\n        default=True, description=\"Whether the policy is currently active\"\n    )\n    state_names: List[str] = Field(\n        default=..., description=\"The flow run states that trigger notifications\"\n    )\n    tags: List[str] = Field(\n        default=...,\n        description=\"The flow run tags that trigger notifications (set [] to disable)\",\n    )\n    block_document_id: UUID = Field(\n        default=..., description=\"The block document ID used for sending notifications\"\n    )\n    message_template: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A templatable notification message. Use {braces} to add variables.\"\n            \" Valid variables include:\"\n            f\" {listrepr(sorted(schemas.core.FLOW_RUN_NOTIFICATION_TEMPLATE_KWARGS), sep=', ')}\"\n        ),\n        examples=[\n            \"Flow run {flow_run_name} with id {flow_run_id} entered state\"\n            \" {flow_run_state_name}.\"\n        ],\n    )\n\n    @validator(\"message_template\")\n    def validate_message_template_variables(cls, v):\n        return validate_message_template_variables(v)\n
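    A sketch of a notification policy, assuming the import path above and an existing notification block document (the UUID below is a placeholder); the message template uses the {braces} variables listed in the field description.

    from uuid import uuid4

    from prefect.server.schemas.actions import FlowRunNotificationPolicyCreate

    policy = FlowRunNotificationPolicyCreate(
        state_names=["Failed", "Crashed"],  # example states that should trigger a notification
        tags=[],                            # per the field description, [] disables tag filtering
        block_document_id=uuid4(),          # placeholder: the notification block document ID
        message_template=(
            "Flow run {flow_run_name} with id {flow_run_id} entered state"
            " {flow_run_state_name}."
        ),
    )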
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyCreate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyUpdate","title":"FlowRunNotificationPolicyUpdate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to update a flow run notification policy.

    Source code in prefect/server/schemas/actions.py
    class FlowRunNotificationPolicyUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a flow run notification policy.\"\"\"\n\n    is_active: Optional[bool] = Field(None)\n    state_names: Optional[List[str]] = Field(None)\n    tags: Optional[List[str]] = Field(None)\n    block_document_id: Optional[UUID] = Field(None)\n    message_template: Optional[str] = Field(None)\n\n    @validator(\"message_template\")\n    def validate_message_template_variables(cls, v):\n        return validate_message_template_variables(v)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyUpdate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunUpdate","title":"FlowRunUpdate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to update a flow run.

    Source code in prefect/server/schemas/actions.py
    class FlowRunUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a flow run.\"\"\"\n\n    name: Optional[str] = Field(None)\n    flow_version: Optional[str] = Field(None)\n    parameters: Dict[str, Any] = Field(default_factory=dict)\n    empirical_policy: schemas.core.FlowRunPolicy = Field(\n        default_factory=schemas.core.FlowRunPolicy\n    )\n    tags: List[str] = Field(default_factory=list)\n    infrastructure_pid: Optional[str] = Field(None)\n    job_variables: Optional[Dict[str, Any]] = Field(None)\n\n    @validator(\"name\", pre=True)\n    def set_name(cls, name):\n        return get_or_create_run_name(name)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunUpdate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowUpdate","title":"FlowUpdate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to update a flow.

    Source code in prefect/server/schemas/actions.py
    class FlowUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a flow.\"\"\"\n\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of flow tags\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowUpdate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.LogCreate","title":"LogCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a log.

    Source code in prefect/server/schemas/actions.py
    class LogCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a log.\"\"\"\n\n    name: str = Field(default=..., description=\"The logger name.\")\n    level: int = Field(default=..., description=\"The log level.\")\n    message: str = Field(default=..., description=\"The log message.\")\n    timestamp: DateTimeTZ = Field(default=..., description=\"The log timestamp.\")\n    flow_run_id: Optional[UUID] = Field(None)\n    task_run_id: Optional[UUID] = Field(None)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.LogCreate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.SavedSearchCreate","title":"SavedSearchCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a saved search.

    Source code in prefect/server/schemas/actions.py
    class SavedSearchCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a saved search.\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the saved search.\")\n    filters: List[schemas.core.SavedSearchFilter] = Field(\n        default_factory=list, description=\"The filter set for the saved search.\"\n    )\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.SavedSearchCreate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.StateCreate","title":"StateCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a new state.

    Source code in prefect/server/schemas/actions.py
    class StateCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a new state.\"\"\"\n\n    type: schemas.states.StateType = Field(\n        default=..., description=\"The type of the state to create\"\n    )\n    name: Optional[str] = Field(\n        default=None, description=\"The name of the state to create\"\n    )\n    message: Optional[str] = Field(\n        default=None, description=\"The message of the state to create\"\n    )\n    data: Optional[Any] = Field(\n        default=None, description=\"The data of the state to create\"\n    )\n    state_details: schemas.states.StateDetails = Field(\n        default_factory=schemas.states.StateDetails,\n        description=\"The details of the state to create\",\n    )\n\n    timestamp: Optional[DateTimeTZ] = Field(\n        default=None,\n        repr=False,\n        ignored=True,\n    )\n    id: Optional[UUID] = Field(default=None, repr=False, ignored=True)\n\n    @validator(\"name\", always=True)\n    def default_name_from_type(cls, v, *, values, **kwargs):\n        return get_or_create_state_name(v, values)\n\n    @root_validator\n    def default_scheduled_start_time(cls, values):\n        return set_default_scheduled_time(cls, values)\n
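    A sketch of a state payload, assuming StateType is importable from prefect.server.schemas.states (the schemas.states module referenced in the source); when name is omitted, the validator derives it from the state type.

    from prefect.server.schemas.actions import StateCreate
    from prefect.server.schemas.states import StateType

    state = StateCreate(
        type=StateType.COMPLETED,        # name defaults to "Completed" via the validator
        message="Processed 42 records",  # optional human-readable message
    )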
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.StateCreate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunCreate","title":"TaskRunCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a task run.

    Source code in prefect/server/schemas/actions.py
    class TaskRunCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a task run\"\"\"\n\n    # TaskRunCreate states must be provided as StateCreate objects\n    state: Optional[StateCreate] = Field(\n        default=None, description=\"The state of the task run to create\"\n    )\n\n    name: str = Field(\n        default_factory=lambda: generate_slug(2), examples=[\"my-task-run\"]\n    )\n    flow_run_id: Optional[UUID] = Field(\n        default=None, description=\"The flow run id of the task run.\"\n    )\n    task_key: str = Field(\n        default=..., description=\"A unique identifier for the task being run.\"\n    )\n    dynamic_key: str = Field(\n        default=...,\n        description=(\n            \"A dynamic key used to differentiate between multiple runs of the same task\"\n            \" within the same flow run.\"\n        ),\n    )\n    cache_key: Optional[str] = Field(\n        default=None,\n        description=(\n            \"An optional cache key. If a COMPLETED state associated with this cache key\"\n            \" is found, the cached COMPLETED state will be used instead of executing\"\n            \" the task run.\"\n        ),\n    )\n    cache_expiration: Optional[DateTimeTZ] = Field(\n        default=None, description=\"Specifies when the cached state should expire.\"\n    )\n    task_version: Optional[str] = Field(\n        default=None, description=\"The version of the task being run.\"\n    )\n    empirical_policy: schemas.core.TaskRunPolicy = Field(\n        default_factory=schemas.core.TaskRunPolicy,\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags for the task run.\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    task_inputs: Dict[\n        str,\n        List[\n            Union[\n                schemas.core.TaskRunResult,\n                schemas.core.Parameter,\n                schemas.core.Constant,\n            ]\n        ],\n    ] = Field(\n        default_factory=dict,\n        description=\"The inputs to the task run.\",\n    )\n\n    @validator(\"name\", pre=True)\n    def set_name(cls, name):\n        return get_or_create_run_name(name)\n\n    @validator(\"cache_key\")\n    def validate_cache_key(cls, cache_key):\n        return validate_cache_key_length(cache_key)\n
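    A sketch of a task run payload, assuming the import path above; the flow run ID is a placeholder, dynamic_key distinguishes repeated runs of the same task within one flow run, and the cache key opts the run into cached COMPLETED states.

    from uuid import uuid4

    from prefect.server.schemas.actions import TaskRunCreate

    task_run = TaskRunCreate(
        flow_run_id=uuid4(),               # placeholder: the parent flow run
        task_key="my_module.transform",    # unique identifier for the task being run
        dynamic_key="0",                   # differentiates repeated runs of the same task
        cache_key="transform-2024-05-03",  # reuse a cached COMPLETED state if one exists
        tags=["etl"],
    )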
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunCreate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunUpdate","title":"TaskRunUpdate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to update a task run.

    Source code in prefect/server/schemas/actions.py
    class TaskRunUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a task run\"\"\"\n\n    name: str = Field(\n        default_factory=lambda: generate_slug(2), examples=[\"my-task-run\"]\n    )\n\n    @validator(\"name\", pre=True)\n    def set_name(cls, name):\n        return get_or_create_run_name(name)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunUpdate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableCreate","title":"VariableCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a Variable.

    Source code in prefect/server/schemas/actions.py
    class VariableCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a Variable.\"\"\"\n\n    name: str = Field(\n        default=...,\n        description=\"The name of the variable\",\n        examples=[\"my-variable\"],\n        max_length=schemas.core.MAX_VARIABLE_NAME_LENGTH,\n    )\n    value: str = Field(\n        default=...,\n        description=\"The value of the variable\",\n        examples=[\"my-value\"],\n        max_length=schemas.core.MAX_VARIABLE_VALUE_LENGTH,\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of variable tags\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n\n    # validators\n    _validate_name_format = validator(\"name\", allow_reuse=True)(validate_variable_name)\n
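    A sketch for creating a variable, assuming the import path above; the name must pass validate_variable_name, so a conservative lowercase, underscore-separated name is used here, and the value is a string capped at MAX_VARIABLE_VALUE_LENGTH.

    from prefect.server.schemas.actions import VariableCreate

    variable = VariableCreate(
        name="default_batch_size",  # must satisfy validate_variable_name
        value="100",                # variable values are stored as strings
        tags=["etl"],
    )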
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableCreate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableUpdate","title":"VariableUpdate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to update a Variable.

    Source code in prefect/server/schemas/actions.py
    class VariableUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a Variable.\"\"\"\n\n    name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the variable\",\n        examples=[\"my-variable\"],\n        max_length=schemas.core.MAX_VARIABLE_NAME_LENGTH,\n    )\n    value: Optional[str] = Field(\n        default=None,\n        description=\"The value of the variable\",\n        examples=[\"my-value\"],\n        max_length=schemas.core.MAX_VARIABLE_VALUE_LENGTH,\n    )\n    tags: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of variable tags\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n\n    # validators\n    _validate_name_format = validator(\"name\", allow_reuse=True)(validate_variable_name)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableUpdate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolCreate","title":"WorkPoolCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a work pool.

    Source code in prefect/server/schemas/actions.py
    class WorkPoolCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a work pool.\"\"\"\n\n    name: str = Field(..., description=\"The name of the work pool.\")\n    description: Optional[str] = Field(None, description=\"The work pool description.\")\n    type: str = Field(description=\"The work pool type.\", default=\"prefect-agent\")\n    base_job_template: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The work pool's base job template.\"\n    )\n    is_paused: bool = Field(\n        default=False,\n        description=\"Pausing the work pool stops the delivery of all work.\",\n    )\n    concurrency_limit: Optional[NonNegativeInteger] = Field(\n        default=None, description=\"A concurrency limit for the work pool.\"\n    )\n\n    _validate_base_job_template = validator(\"base_job_template\", allow_reuse=True)(\n        validate_base_job_template\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
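    A minimal construction sketch based on the fields shown above; the pool name and type are illustrative:

    from prefect.server.schemas.actions import WorkPoolCreate

    pool = WorkPoolCreate(
        name="my-pool",          # required
        type="process",          # defaults to "prefect-agent" when omitted
        concurrency_limit=5,     # optional non-negative integer
    )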
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolCreate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolUpdate","title":"WorkPoolUpdate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to update a work pool.

    Source code in prefect/server/schemas/actions.py
    class WorkPoolUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a work pool.\"\"\"\n\n    description: Optional[str] = Field(None)\n    is_paused: Optional[bool] = Field(None)\n    base_job_template: Optional[Dict[str, Any]] = Field(None)\n    concurrency_limit: Optional[NonNegativeInteger] = Field(None)\n\n    _validate_base_job_template = validator(\"base_job_template\", allow_reuse=True)(\n        validate_base_job_template\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolUpdate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueCreate","title":"WorkQueueCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a work queue.

    Source code in prefect/server/schemas/actions.py
    class WorkQueueCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a work queue.\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the work queue.\")\n    description: Optional[str] = Field(\n        default=\"\", description=\"An optional description for the work queue.\"\n    )\n    is_paused: bool = Field(\n        default=False, description=\"Whether or not the work queue is paused.\"\n    )\n    concurrency_limit: Optional[NonNegativeInteger] = Field(\n        None, description=\"The work queue's concurrency limit.\"\n    )\n    priority: Optional[PositiveInteger] = Field(\n        None,\n        description=(\n            \"The queue's priority. Lower values are higher priority (1 is the highest).\"\n        ),\n    )\n\n    # DEPRECATED\n\n    filter: Optional[schemas.core.QueueFilter] = Field(\n        None,\n        description=\"DEPRECATED: Filter criteria for the work queue.\",\n        deprecated=True,\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
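    A minimal construction sketch based on the fields shown above; the queue name and limits are illustrative:

    from prefect.server.schemas.actions import WorkQueueCreate

    queue = WorkQueueCreate(
        name="my-queue",         # required
        description="Queue for ad-hoc runs",
        concurrency_limit=10,    # optional
        priority=1,              # lower values are higher priority; 1 is the highest
    )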
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueCreate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueUpdate","title":"WorkQueueUpdate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to update a work queue.

    Source code in prefect/server/schemas/actions.py
    class WorkQueueUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a work queue.\"\"\"\n\n    name: Optional[str] = Field(None)\n    description: Optional[str] = Field(None)\n    is_paused: bool = Field(\n        default=False, description=\"Whether or not the work queue is paused.\"\n    )\n    concurrency_limit: Optional[NonNegativeInteger] = Field(None)\n    priority: Optional[PositiveInteger] = Field(None)\n    last_polled: Optional[DateTimeTZ] = Field(None)\n\n    # DEPRECATED\n\n    filter: Optional[schemas.core.QueueFilter] = Field(\n        None,\n        description=\"DEPRECATED: Filter criteria for the work queue.\",\n        deprecated=True,\n    )\n
    "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueUpdate.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/core/","title":"server.schemas.core","text":""},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core","title":"prefect.server.schemas.core","text":"

    Full schemas of Prefect REST API objects.

    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Agent","title":"Agent","text":"

    Bases: ORMBaseModel

    An ORM representation of an agent

    Source code in prefect/server/schemas/core.py
    class Agent(ORMBaseModel):\n    \"\"\"An ORM representation of an agent\"\"\"\n\n    name: str = Field(\n        default_factory=lambda: generate_slug(2),\n        description=(\n            \"The name of the agent. If a name is not provided, it will be\"\n            \" auto-generated.\"\n        ),\n    )\n    work_queue_id: UUID = Field(\n        default=..., description=\"The work queue with which the agent is associated.\"\n    )\n    last_activity_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The last time this agent polled for work.\"\n    )\n
    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockDocument","title":"BlockDocument","text":"

    Bases: ORMBaseModel

    An ORM representation of a block document.

    Source code in prefect/server/schemas/core.py
    class BlockDocument(ORMBaseModel):\n    \"\"\"An ORM representation of a block document.\"\"\"\n\n    name: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The block document's name. Not required for anonymous block documents.\"\n        ),\n    )\n    data: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The block document's data\"\n    )\n    block_schema_id: UUID = Field(default=..., description=\"A block schema ID\")\n    block_schema: Optional[BlockSchema] = Field(\n        default=None, description=\"The associated block schema\"\n    )\n    block_type_id: UUID = Field(default=..., description=\"A block type ID\")\n    block_type_name: Optional[str] = Field(\n        default=None, description=\"The associated block type's name\"\n    )\n    block_type: Optional[BlockType] = Field(\n        default=None, description=\"The associated block type\"\n    )\n    block_document_references: Dict[str, Dict[str, Any]] = Field(\n        default_factory=dict, description=\"Record of the block document's references\"\n    )\n    is_anonymous: bool = Field(\n        default=False,\n        description=(\n            \"Whether the block is anonymous (anonymous blocks are usually created by\"\n            \" Prefect automatically)\"\n        ),\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        # the BlockDocumentCreate subclass allows name=None\n        # and will inherit this validator\n        return raise_on_name_with_banned_characters(v)\n\n    @root_validator\n    def validate_name_is_present_if_not_anonymous(cls, values):\n        return validate_name_present_on_nonanonymous_blocks(values)\n\n    @classmethod\n    async def from_orm_model(\n        cls,\n        session,\n        orm_block_document: \"prefect.server.database.orm_models.ORMBlockDocument\",\n        include_secrets: bool = False,\n    ):\n        data = await orm_block_document.decrypt_data(session=session)\n        # if secrets are not included, obfuscate them based on the schema's\n        # `secret_fields`. Note this walks any nested blocks as well. If the\n        # nested blocks were recovered from named blocks, they will already\n        # be obfuscated, but if nested fields were hardcoded into the parent\n        # blocks data, this is the only opportunity to obfuscate them.\n        if not include_secrets:\n            flat_data = dict_to_flatdict(data)\n            # iterate over the (possibly nested) secret fields\n            # and obfuscate their data\n            for secret_field in orm_block_document.block_schema.fields.get(\n                \"secret_fields\", []\n            ):\n                secret_key = tuple(secret_field.split(\".\"))\n                if flat_data.get(secret_key) is not None:\n                    flat_data[secret_key] = obfuscate_string(flat_data[secret_key])\n                # If a wildcard (*) is in the current secret key path, we take the portion\n                # of the path before the wildcard and compare it to the same level of each\n                # key. 
A match means that the field is nested under the secret key and should\n                # be obfuscated.\n                elif \"*\" in secret_key:\n                    wildcard_index = secret_key.index(\"*\")\n                    for data_key in flat_data.keys():\n                        if secret_key[0:wildcard_index] == data_key[0:wildcard_index]:\n                            flat_data[data_key] = obfuscate(flat_data[data_key])\n            data = flatdict_to_dict(flat_data)\n\n        return cls(\n            id=orm_block_document.id,\n            created=orm_block_document.created,\n            updated=orm_block_document.updated,\n            name=orm_block_document.name,\n            data=data,\n            block_schema_id=orm_block_document.block_schema_id,\n            block_schema=orm_block_document.block_schema,\n            block_type_id=orm_block_document.block_type_id,\n            block_type_name=orm_block_document.block_type_name,\n            block_type=orm_block_document.block_type,\n            is_anonymous=orm_block_document.is_anonymous,\n        )\n
    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockDocumentReference","title":"BlockDocumentReference","text":"

    Bases: ORMBaseModel

    An ORM representation of a block document reference.

    Source code in prefect/server/schemas/core.py
    class BlockDocumentReference(ORMBaseModel):\n    \"\"\"An ORM representation of a block document reference.\"\"\"\n\n    parent_block_document_id: UUID = Field(\n        default=..., description=\"ID of block document the reference is nested within\"\n    )\n    parent_block_document: Optional[BlockDocument] = Field(\n        default=None, description=\"The block document the reference is nested within\"\n    )\n    reference_block_document_id: UUID = Field(\n        default=..., description=\"ID of the nested block document\"\n    )\n    reference_block_document: Optional[BlockDocument] = Field(\n        default=None, description=\"The nested block document\"\n    )\n    name: str = Field(\n        default=..., description=\"The name that the reference is nested under\"\n    )\n\n    @root_validator\n    def validate_parent_and_ref_are_different(cls, values):\n        return validate_parent_and_ref_diff(values)\n
    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockSchema","title":"BlockSchema","text":"

    Bases: ORMBaseModel

    An ORM representation of a block schema.

    Source code in prefect/server/schemas/core.py
    class BlockSchema(ORMBaseModel):\n    \"\"\"An ORM representation of a block schema.\"\"\"\n\n    checksum: str = Field(default=..., description=\"The block schema's unique checksum\")\n    fields: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The block schema's field schema\"\n    )\n    block_type_id: Optional[UUID] = Field(default=..., description=\"A block type ID\")\n    block_type: Optional[BlockType] = Field(\n        default=None, description=\"The associated block type\"\n    )\n    capabilities: List[str] = Field(\n        default_factory=list,\n        description=\"A list of Block capabilities\",\n    )\n    version: str = Field(\n        default=DEFAULT_BLOCK_SCHEMA_VERSION,\n        description=\"Human readable identifier for the block schema\",\n    )\n
    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockSchemaReference","title":"BlockSchemaReference","text":"

    Bases: ORMBaseModel

    An ORM representation of a block schema reference.

    Source code in prefect/server/schemas/core.py
    class BlockSchemaReference(ORMBaseModel):\n    \"\"\"An ORM representation of a block schema reference.\"\"\"\n\n    parent_block_schema_id: UUID = Field(\n        default=..., description=\"ID of block schema the reference is nested within\"\n    )\n    parent_block_schema: Optional[BlockSchema] = Field(\n        default=None, description=\"The block schema the reference is nested within\"\n    )\n    reference_block_schema_id: UUID = Field(\n        default=..., description=\"ID of the nested block schema\"\n    )\n    reference_block_schema: Optional[BlockSchema] = Field(\n        default=None, description=\"The nested block schema\"\n    )\n    name: str = Field(\n        default=..., description=\"The name that the reference is nested under\"\n    )\n
    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockType","title":"BlockType","text":"

    Bases: ORMBaseModel

    An ORM representation of a block type

    Source code in prefect/server/schemas/core.py
    class BlockType(ORMBaseModel):\n    \"\"\"An ORM representation of a block type\"\"\"\n\n    name: str = Field(default=..., description=\"A block type's name\")\n    slug: str = Field(default=..., description=\"A block type's slug\")\n    logo_url: Optional[HttpUrl] = Field(\n        default=None, description=\"Web URL for the block type's logo\"\n    )\n    documentation_url: Optional[HttpUrl] = Field(\n        default=None, description=\"Web URL for the block type's documentation\"\n    )\n    description: Optional[str] = Field(\n        default=None,\n        description=\"A short blurb about the corresponding block's intended use\",\n    )\n    code_example: Optional[str] = Field(\n        default=None,\n        description=\"A code snippet demonstrating use of the corresponding block\",\n    )\n    is_protected: bool = Field(\n        default=False, description=\"Protected block types cannot be modified via API.\"\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.ConcurrencyLimit","title":"ConcurrencyLimit","text":"

    Bases: ORMBaseModel

    An ORM representation of a concurrency limit.

    Source code in prefect/server/schemas/core.py
    class ConcurrencyLimit(ORMBaseModel):\n    \"\"\"An ORM representation of a concurrency limit.\"\"\"\n\n    tag: str = Field(\n        default=..., description=\"A tag the concurrency limit is applied to.\"\n    )\n    concurrency_limit: int = Field(default=..., description=\"The concurrency limit.\")\n    active_slots: List[UUID] = Field(\n        default_factory=list,\n        description=\"A list of active run ids using a concurrency slot\",\n    )\n
    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.ConcurrencyLimitV2","title":"ConcurrencyLimitV2","text":"

    Bases: ORMBaseModel

    An ORM representation of a v2 concurrency limit.

    Source code in prefect/server/schemas/core.py
    class ConcurrencyLimitV2(ORMBaseModel):\n    \"\"\"An ORM representation of a v2 concurrency limit.\"\"\"\n\n    active: bool = Field(\n        default=True, description=\"Whether the concurrency limit is active.\"\n    )\n    name: str = Field(default=..., description=\"The name of the concurrency limit.\")\n    limit: int = Field(default=..., description=\"The concurrency limit.\")\n    active_slots: int = Field(default=0, description=\"The number of active slots.\")\n    denied_slots: int = Field(default=0, description=\"The number of denied slots.\")\n    slot_decay_per_second: float = Field(\n        default=0,\n        description=\"The decay rate for active slots when used as a rate limit.\",\n    )\n    avg_slot_occupancy_seconds: float = Field(\n        default=2.0, description=\"The average amount of time a slot is occupied.\"\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
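    A minimal construction sketch based on the fields shown above; the limit name and values are illustrative:

    from prefect.server.schemas.core import ConcurrencyLimitV2

    limit = ConcurrencyLimitV2(
        name="database-connections",  # required
        limit=10,                     # required concurrency limit
        slot_decay_per_second=0.5,    # non-zero decay treats the limit as a rate limit
    )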
    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Configuration","title":"Configuration","text":"

    Bases: ORMBaseModel

    An ORM representation of account info.

    Source code in prefect/server/schemas/core.py
    class Configuration(ORMBaseModel):\n    \"\"\"An ORM representation of account info.\"\"\"\n\n    key: str = Field(default=..., description=\"Account info key\")\n    value: Dict[str, Any] = Field(default=..., description=\"Account info\")\n
    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Deployment","title":"Deployment","text":"

    Bases: DeprecatedInfraOverridesField, ORMBaseModel

    An ORM representation of deployment data.

    Source code in prefect/server/schemas/core.py
    class Deployment(DeprecatedInfraOverridesField, ORMBaseModel):\n    \"\"\"An ORM representation of deployment data.\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the deployment.\")\n    version: Optional[str] = Field(\n        default=None, description=\"An optional version for the deployment.\"\n    )\n    description: Optional[str] = Field(\n        default=None, description=\"A description for the deployment.\"\n    )\n    flow_id: UUID = Field(\n        default=..., description=\"The flow id associated with the deployment.\"\n    )\n    schedule: Optional[schedules.SCHEDULE_TYPES] = Field(\n        default=None, description=\"A schedule for the deployment.\"\n    )\n    is_schedule_active: bool = Field(\n        default=True, description=\"Whether or not the deployment schedule is active.\"\n    )\n    paused: bool = Field(\n        default=False, description=\"Whether or not the deployment is paused.\"\n    )\n    schedules: List[DeploymentSchedule] = Field(\n        default_factory=list, description=\"A list of schedules for the deployment.\"\n    )\n    job_variables: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Overrides to apply to flow run infrastructure at runtime.\",\n    )\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Parameters for flow runs scheduled by the deployment.\",\n    )\n    pull_steps: Optional[List[dict]] = Field(\n        default=None,\n        description=\"Pull steps for cloning and running this deployment.\",\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags for the deployment\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    work_queue_name: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The work queue for the deployment. 
If no work queue is set, work will not\"\n            \" be scheduled.\"\n        ),\n    )\n    last_polled: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The last time the deployment was polled for status updates.\",\n    )\n    parameter_openapi_schema: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=\"The parameter schema of the flow, including defaults.\",\n    )\n    path: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the working directory for the workflow, relative to remote\"\n            \" storage or an absolute path.\"\n        ),\n    )\n    entrypoint: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the entrypoint for the workflow, relative to the `path`.\"\n        ),\n    )\n    manifest_path: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the flow's manifest file, relative to the chosen storage.\"\n        ),\n    )\n    storage_document_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The block document defining storage used for this flow.\",\n    )\n    infrastructure_document_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The block document defining infrastructure to use for flow runs.\",\n    )\n    created_by: Optional[CreatedBy] = Field(\n        default=None,\n        description=\"Optional information about the creator of this deployment.\",\n    )\n    updated_by: Optional[UpdatedBy] = Field(\n        default=None,\n        description=\"Optional information about the updater of this deployment.\",\n    )\n    work_queue_id: UUID = Field(\n        default=None,\n        description=(\n            \"The id of the work pool queue to which this deployment is assigned.\"\n        ),\n    )\n    enforce_parameter_schema: bool = Field(\n        default=False,\n        description=(\n            \"Whether or not the deployment should enforce the parameter schema.\"\n        ),\n    )\n\n    class Config:\n        allow_population_by_field_name = True\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
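    A minimal construction sketch using a few of the fields shown above; the flow ID is a placeholder and the other values are illustrative:

    from uuid import uuid4
    from prefect.server.schemas.core import Deployment

    deployment = Deployment(
        name="daily-etl",
        flow_id=uuid4(),                     # placeholder flow ID
        work_queue_name="default",
        parameters={"target": "warehouse"},
        tags=["prod"],
    )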
    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Flow","title":"Flow","text":"

    Bases: ORMBaseModel

    An ORM representation of flow data.

    Source code in prefect/server/schemas/core.py
    class Flow(ORMBaseModel):\n    \"\"\"An ORM representation of flow data.\"\"\"\n\n    name: str = Field(\n        default=..., description=\"The name of the flow\", examples=[\"my-flow\"]\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of flow tags\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRun","title":"FlowRun","text":"

    Bases: ORMBaseModel

    An ORM representation of flow run data.

    Source code in prefect/server/schemas/core.py
    class FlowRun(ORMBaseModel):\n    \"\"\"An ORM representation of flow run data.\"\"\"\n\n    name: str = Field(\n        default_factory=lambda: generate_slug(2),\n        description=(\n            \"The name of the flow run. Defaults to a random slug if not specified.\"\n        ),\n        examples=[\"my-flow-run\"],\n    )\n    flow_id: UUID = Field(default=..., description=\"The id of the flow being run.\")\n    state_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the flow run's current state.\"\n    )\n    deployment_id: Optional[UUID] = Field(\n        default=None,\n        description=(\n            \"The id of the deployment associated with this flow run, if available.\"\n        ),\n    )\n    deployment_version: Optional[str] = Field(\n        default=None,\n        description=\"The version of the deployment associated with this flow run.\",\n        examples=[\"1.0\"],\n    )\n    work_queue_name: Optional[str] = Field(\n        default=None, description=\"The work queue that handled this flow run.\"\n    )\n    flow_version: Optional[str] = Field(\n        default=None,\n        description=\"The version of the flow executed in this flow run.\",\n        examples=[\"1.0\"],\n    )\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict, description=\"Parameters for the flow run.\"\n    )\n    idempotency_key: Optional[str] = Field(\n        default=None,\n        description=(\n            \"An optional idempotency key for the flow run. Used to ensure the same flow\"\n            \" run is not created multiple times.\"\n        ),\n    )\n    context: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Additional context for the flow run.\",\n        examples=[{\"my_var\": \"my_value\"}],\n    )\n    empirical_policy: FlowRunPolicy = Field(\n        default_factory=FlowRunPolicy,\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags on the flow run\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    parent_task_run_id: Optional[UUID] = Field(\n        default=None,\n        description=(\n            \"If the flow run is a subflow, the id of the 'dummy' task in the parent\"\n            \" flow used to track subflow state.\"\n        ),\n    )\n\n    state_type: Optional[states.StateType] = Field(\n        default=None, description=\"The type of the current flow run state.\"\n    )\n    state_name: Optional[str] = Field(\n        default=None, description=\"The name of the current flow run state.\"\n    )\n    run_count: int = Field(\n        default=0, description=\"The number of times the flow run was executed.\"\n    )\n    expected_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The flow run's expected start time.\",\n    )\n    next_scheduled_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The next time the flow run is scheduled to start.\",\n    )\n    start_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual start time.\"\n    )\n    end_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual end time.\"\n    )\n    total_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=(\n            \"Total run time. 
If the flow run was executed multiple times, the time of\"\n            \" each run will be summed.\"\n        ),\n    )\n    estimated_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"A real-time estimate of the total run time.\",\n    )\n    estimated_start_time_delta: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"The difference between actual and expected start time.\",\n    )\n    auto_scheduled: bool = Field(\n        default=False,\n        description=\"Whether or not the flow run was automatically scheduled.\",\n    )\n    infrastructure_document_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The block document defining infrastructure to use this flow run.\",\n    )\n    infrastructure_pid: Optional[str] = Field(\n        default=None,\n        description=\"The id of the flow run as returned by an infrastructure block.\",\n    )\n    created_by: Optional[CreatedBy] = Field(\n        default=None,\n        description=\"Optional information about the creator of this flow run.\",\n    )\n    work_queue_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the run's work pool queue.\"\n    )\n\n    # relationships\n    # flow: Flow = None\n    # task_runs: List[\"TaskRun\"] = Field(default_factory=list)\n    state: Optional[states.State] = Field(\n        default=None, description=\"The current state of the flow run.\"\n    )\n    # parent_task_run: \"TaskRun\" = None\n\n    job_variables: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=\"Variables used as overrides in the base job template\",\n    )\n\n    @validator(\"name\", pre=True)\n    def set_name(cls, name):\n        return get_or_create_run_name(name)\n\n    def __eq__(self, other: Any) -> bool:\n        \"\"\"\n        Check for \"equality\" to another flow run schema\n\n        Estimates times are rolling and will always change with repeated queries for\n        a flow run so we ignore them during equality checks.\n        \"\"\"\n        if isinstance(other, FlowRun):\n            exclude_fields = {\"estimated_run_time\", \"estimated_start_time_delta\"}\n            return self.dict(exclude=exclude_fields) == other.dict(\n                exclude=exclude_fields\n            )\n        return super().__eq__(other)\n
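    A minimal construction sketch; the flow ID is a placeholder and the parameters are illustrative:

    from uuid import uuid4
    from prefect.server.schemas.core import FlowRun

    flow_run = FlowRun(
        flow_id=uuid4(),                 # placeholder; the only required field
        parameters={"batch_size": 100},
    )
    # `name` defaults to a random two-word slug when not supplied
    print(flow_run.name)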
    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRunNotificationPolicy","title":"FlowRunNotificationPolicy","text":"

    Bases: ORMBaseModel

    An ORM representation of a flow run notification.

    Source code in prefect/server/schemas/core.py
    class FlowRunNotificationPolicy(ORMBaseModel):\n    \"\"\"An ORM representation of a flow run notification.\"\"\"\n\n    is_active: bool = Field(\n        default=True, description=\"Whether the policy is currently active\"\n    )\n    state_names: List[str] = Field(\n        default=..., description=\"The flow run states that trigger notifications\"\n    )\n    tags: List[str] = Field(\n        default=...,\n        description=\"The flow run tags that trigger notifications (set [] to disable)\",\n    )\n    block_document_id: UUID = Field(\n        default=..., description=\"The block document ID used for sending notifications\"\n    )\n    message_template: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A templatable notification message. Use {braces} to add variables.\"\n            \" Valid variables include:\"\n            f\" {listrepr(sorted(FLOW_RUN_NOTIFICATION_TEMPLATE_KWARGS), sep=', ')}\"\n        ),\n        examples=[\n            \"Flow run {flow_run_name} with id {flow_run_id} entered state\"\n            \" {flow_run_state_name}.\"\n        ],\n    )\n\n    @validator(\"message_template\")\n    def validate_message_template_variables(cls, v):\n        return validate_message_template_variables(v)\n
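    A minimal construction sketch; the block document ID is a placeholder and the states, tags, and template are illustrative (the template uses the variables documented above):

    from uuid import uuid4
    from prefect.server.schemas.core import FlowRunNotificationPolicy

    policy = FlowRunNotificationPolicy(
        state_names=["Failed", "Crashed"],   # states that trigger a notification
        tags=["prod"],                       # only runs carrying these tags
        block_document_id=uuid4(),           # placeholder ID of a notification block document
        message_template=(
            "Flow run {flow_run_name} with id {flow_run_id} entered state"
            " {flow_run_state_name}."
        ),
    )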
    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRunPolicy","title":"FlowRunPolicy","text":"

    Bases: PrefectBaseModel

    Defines how a flow run should retry.

    Source code in prefect/server/schemas/core.py
    class FlowRunPolicy(PrefectBaseModel):\n    \"\"\"Defines of how a flow run should retry.\"\"\"\n\n    # TODO: Determine how to separate between infrastructure and within-process level\n    #       retries\n    max_retries: int = Field(\n        default=0,\n        description=(\n            \"The maximum number of retries. Field is not used. Please use `retries`\"\n            \" instead.\"\n        ),\n        deprecated=True,\n    )\n    retry_delay_seconds: float = Field(\n        default=0,\n        description=(\n            \"The delay between retries. Field is not used. Please use `retry_delay`\"\n            \" instead.\"\n        ),\n        deprecated=True,\n    )\n    retries: Optional[int] = Field(default=None, description=\"The number of retries.\")\n    retry_delay: Optional[int] = Field(\n        default=None, description=\"The delay time between retries, in seconds.\"\n    )\n    pause_keys: Optional[set] = Field(\n        default_factory=set, description=\"Tracks pauses this run has observed.\"\n    )\n    resuming: Optional[bool] = Field(\n        default=False, description=\"Indicates if this run is resuming from a pause.\"\n    )\n\n    @root_validator\n    def populate_deprecated_fields(cls, values):\n        return set_run_policy_deprecated_fields(values)\n
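    A minimal construction sketch using the non-deprecated fields shown above; the values are illustrative:

    from prefect.server.schemas.core import FlowRunPolicy

    policy = FlowRunPolicy(
        retries=3,        # preferred over the deprecated `max_retries`
        retry_delay=30,   # seconds between retries; preferred over `retry_delay_seconds`
    )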
    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Log","title":"Log","text":"

    Bases: ORMBaseModel

    An ORM representation of log data.

    Source code in prefect/server/schemas/core.py
    class Log(ORMBaseModel):\n    \"\"\"An ORM representation of log data.\"\"\"\n\n    name: str = Field(default=..., description=\"The logger name.\")\n    level: int = Field(default=..., description=\"The log level.\")\n    message: str = Field(default=..., description=\"The log message.\")\n    timestamp: DateTimeTZ = Field(default=..., description=\"The log timestamp.\")\n    flow_run_id: Optional[UUID] = Field(\n        default=None, description=\"The flow run ID associated with the log.\"\n    )\n    task_run_id: Optional[UUID] = Field(\n        default=None, description=\"The task run ID associated with the log.\"\n    )\n
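    A minimal construction sketch; the logger name and message are illustrative, and the timestamp assumes `pendulum` (a Prefect dependency) is available:

    import logging
    import pendulum
    from prefect.server.schemas.core import Log

    log = Log(
        name="prefect.flow_runs",
        level=logging.INFO,
        message="Flow run started",
        timestamp=pendulum.now("UTC"),
    )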
    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.QueueFilter","title":"QueueFilter","text":"

    Bases: PrefectBaseModel

    Filter criteria definition for a work queue.

    Source code in prefect/server/schemas/core.py
    class QueueFilter(PrefectBaseModel):\n    \"\"\"Filter criteria definition for a work queue.\"\"\"\n\n    tags: Optional[List[str]] = Field(\n        default=None,\n        description=\"Only include flow runs with these tags in the work queue.\",\n    )\n    deployment_ids: Optional[List[UUID]] = Field(\n        default=None,\n        description=\"Only include flow runs from these deployments in the work queue.\",\n    )\n
    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.SavedSearch","title":"SavedSearch","text":"

    Bases: ORMBaseModel

    An ORM representation of saved search data. Represents a set of filter criteria.

    Source code in prefect/server/schemas/core.py
    class SavedSearch(ORMBaseModel):\n    \"\"\"An ORM representation of saved search data. Represents a set of filter criteria.\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the saved search.\")\n    filters: List[SavedSearchFilter] = Field(\n        default_factory=list, description=\"The filter set for the saved search.\"\n    )\n
    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.SavedSearchFilter","title":"SavedSearchFilter","text":"

    Bases: PrefectBaseModel

    A filter for a saved search model. Intended for use by the Prefect UI.

    Source code in prefect/server/schemas/core.py
    class SavedSearchFilter(PrefectBaseModel):\n    \"\"\"A filter for a saved search model. Intended for use by the Prefect UI.\"\"\"\n\n    object: str = Field(default=..., description=\"The object over which to filter.\")\n    property: str = Field(\n        default=..., description=\"The property of the object on which to filter.\"\n    )\n    type: str = Field(default=..., description=\"The type of the property.\")\n    operation: str = Field(\n        default=...,\n        description=\"The operator to apply to the object. For example, `equals`.\",\n    )\n    value: Any = Field(\n        default=..., description=\"A JSON-compatible value for the filter.\"\n    )\n
    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.TaskRun","title":"TaskRun","text":"

    Bases: ORMBaseModel

    An ORM representation of task run data.

    Source code in prefect/server/schemas/core.py
    class TaskRun(ORMBaseModel):\n    \"\"\"An ORM representation of task run data.\"\"\"\n\n    name: str = Field(\n        default_factory=lambda: generate_slug(2), examples=[\"my-task-run\"]\n    )\n    flow_run_id: Optional[UUID] = Field(\n        default=None, description=\"The flow run id of the task run.\"\n    )\n    task_key: str = Field(\n        default=..., description=\"A unique identifier for the task being run.\"\n    )\n    dynamic_key: str = Field(\n        default=...,\n        description=(\n            \"A dynamic key used to differentiate between multiple runs of the same task\"\n            \" within the same flow run.\"\n        ),\n    )\n    cache_key: Optional[str] = Field(\n        default=None,\n        description=(\n            \"An optional cache key. If a COMPLETED state associated with this cache key\"\n            \" is found, the cached COMPLETED state will be used instead of executing\"\n            \" the task run.\"\n        ),\n    )\n    cache_expiration: Optional[DateTimeTZ] = Field(\n        default=None, description=\"Specifies when the cached state should expire.\"\n    )\n    task_version: Optional[str] = Field(\n        default=None, description=\"The version of the task being run.\"\n    )\n    empirical_policy: TaskRunPolicy = Field(\n        default_factory=TaskRunPolicy,\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags for the task run.\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    state_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the current task run state.\"\n    )\n    task_inputs: Dict[str, List[Union[TaskRunResult, Parameter, Constant]]] = Field(\n        default_factory=dict,\n        description=(\n            \"Tracks the source of inputs to a task run. Used for internal bookkeeping.\"\n        ),\n    )\n    state_type: Optional[states.StateType] = Field(\n        default=None, description=\"The type of the current task run state.\"\n    )\n    state_name: Optional[str] = Field(\n        default=None, description=\"The name of the current task run state.\"\n    )\n    run_count: int = Field(\n        default=0, description=\"The number of times the task run has been executed.\"\n    )\n    flow_run_run_count: int = Field(\n        default=0,\n        description=(\n            \"If the parent flow has retried, this indicates the flow retry this run is\"\n            \" associated with.\"\n        ),\n    )\n    expected_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The task run's expected start time.\",\n    )\n\n    # the next scheduled start time will be populated\n    # whenever the run is in a scheduled state\n    next_scheduled_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The next time the task run is scheduled to start.\",\n    )\n    start_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual start time.\"\n    )\n    end_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual end time.\"\n    )\n    total_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=(\n            \"Total run time. 
If the task run was executed multiple times, the time of\"\n            \" each run will be summed.\"\n        ),\n    )\n    estimated_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"A real-time estimate of total run time.\",\n    )\n    estimated_start_time_delta: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"The difference between actual and expected start time.\",\n    )\n\n    # relationships\n    # flow_run: FlowRun = None\n    # subflow_runs: List[FlowRun] = Field(default_factory=list)\n    state: Optional[states.State] = Field(\n        default=None, description=\"The current task run state.\"\n    )\n\n    @validator(\"name\", pre=True)\n    def set_name(cls, name):\n        return get_or_create_run_name(name)\n\n    @validator(\"cache_key\")\n    def validate_cache_key(cls, cache_key):\n        return validate_cache_key_length(cache_key)\n
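    A minimal construction sketch of the required fields; the IDs and keys are placeholders:

    from uuid import uuid4
    from prefect.server.schemas.core import TaskRun

    task_run = TaskRun(
        flow_run_id=uuid4(),   # placeholder parent flow run ID
        task_key="my_task",    # identifier for the task being run
        dynamic_key="0",       # differentiates repeated runs of the same task in one flow run
    )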
    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkPool","title":"WorkPool","text":"

    Bases: ORMBaseModel

    An ORM representation of a work pool

    Source code in prefect/server/schemas/core.py
    class WorkPool(ORMBaseModel):\n    \"\"\"An ORM representation of a work pool\"\"\"\n\n    name: str = Field(\n        description=\"The name of the work pool.\",\n    )\n    description: Optional[str] = Field(\n        default=None, description=\"A description of the work pool.\"\n    )\n    type: str = Field(description=\"The work pool type.\")\n    base_job_template: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The work pool's base job template.\"\n    )\n    is_paused: bool = Field(\n        default=False,\n        description=\"Pausing the work pool stops the delivery of all work.\",\n    )\n    concurrency_limit: Optional[NonNegativeInteger] = Field(\n        default=None, description=\"A concurrency limit for the work pool.\"\n    )\n    status: Optional[WorkPoolStatus] = Field(\n        default=None, description=\"The current status of the work pool.\"\n    )\n\n    # this required field has a default of None so that the custom validator\n    # below will be called and produce a more helpful error message\n    default_queue_id: UUID = Field(\n        None, description=\"The id of the pool's default queue.\"\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n\n    @validator(\"default_queue_id\", always=True)\n    def helpful_error_for_missing_default_queue_id(cls, v):\n        return validate_default_queue_id_not_none(v)\n\n    @classmethod\n    def from_orm(cls, work_pool: \"ORMWorkPool\") -> Self:\n        parsed: WorkPool = super().from_orm(work_pool)\n        if work_pool.type == \"prefect-agent\":\n            parsed.status = None\n        return parsed\n
    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkQueue","title":"WorkQueue","text":"

    Bases: ORMBaseModel

    An ORM representation of a work queue

    Source code in prefect/server/schemas/core.py
    class WorkQueue(ORMBaseModel):\n    \"\"\"An ORM representation of a work queue\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the work queue.\")\n    description: Optional[str] = Field(\n        default=\"\", description=\"An optional description for the work queue.\"\n    )\n    is_paused: bool = Field(\n        default=False, description=\"Whether or not the work queue is paused.\"\n    )\n    concurrency_limit: Optional[NonNegativeInteger] = Field(\n        default=None, description=\"An optional concurrency limit for the work queue.\"\n    )\n    priority: PositiveInteger = Field(\n        default=1,\n        description=(\n            \"The queue's priority. Lower values are higher priority (1 is the highest).\"\n        ),\n    )\n    # Will be required after a future migration\n    work_pool_id: Optional[UUID] = Field(\n        default=None, description=\"The work pool with which the queue is associated.\"\n    )\n    filter: Optional[QueueFilter] = Field(\n        default=None,\n        description=\"DEPRECATED: Filter criteria for the work queue.\",\n        deprecated=True,\n    )\n    last_polled: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The last time an agent polled this queue for work.\"\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkQueueHealthPolicy","title":"WorkQueueHealthPolicy","text":"

    Bases: PrefectBaseModel

    Source code in prefect/server/schemas/core.py
    class WorkQueueHealthPolicy(PrefectBaseModel):\n    maximum_late_runs: Optional[int] = Field(\n        default=0,\n        description=(\n            \"The maximum number of late runs in the work queue before it is deemed\"\n            \" unhealthy. Defaults to `0`.\"\n        ),\n    )\n    maximum_seconds_since_last_polled: Optional[int] = Field(\n        default=60,\n        description=(\n            \"The maximum number of time in seconds elapsed since work queue has been\"\n            \" polled before it is deemed unhealthy. Defaults to `60`.\"\n        ),\n    )\n\n    def evaluate_health_status(\n        self, late_runs_count: int, last_polled: Optional[DateTimeTZ] = None\n    ) -> bool:\n        \"\"\"\n        Given empirical information about the state of the work queue, evaluate its health status.\n\n        Args:\n            late_runs: the count of late runs for the work queue.\n            last_polled: the last time the work queue was polled, if available.\n\n        Returns:\n            bool: whether or not the work queue is healthy.\n        \"\"\"\n        healthy = True\n        if (\n            self.maximum_late_runs is not None\n            and late_runs_count > self.maximum_late_runs\n        ):\n            healthy = False\n\n        if self.maximum_seconds_since_last_polled is not None:\n            if (\n                last_polled is None\n                or pendulum.now(\"UTC\").diff(last_polled).in_seconds()\n                > self.maximum_seconds_since_last_polled\n            ):\n                healthy = False\n\n        return healthy\n
    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkQueueHealthPolicy.evaluate_health_status","title":"evaluate_health_status","text":"

    Given empirical information about the state of the work queue, evaluate its health status.

    Parameters:

    late_runs_count (int): the count of late runs for the work queue. Required.

    last_polled (Optional[DateTimeTZ]): the last time the work queue was polled, if available. Defaults to None.

    Returns:

    bool: whether or not the work queue is healthy.

    Source code in prefect/server/schemas/core.py
    def evaluate_health_status(\n    self, late_runs_count: int, last_polled: Optional[DateTimeTZ] = None\n) -> bool:\n    \"\"\"\n    Given empirical information about the state of the work queue, evaluate its health status.\n\n    Args:\n        late_runs: the count of late runs for the work queue.\n        last_polled: the last time the work queue was polled, if available.\n\n    Returns:\n        bool: whether or not the work queue is healthy.\n    \"\"\"\n    healthy = True\n    if (\n        self.maximum_late_runs is not None\n        and late_runs_count > self.maximum_late_runs\n    ):\n        healthy = False\n\n    if self.maximum_seconds_since_last_polled is not None:\n        if (\n            last_polled is None\n            or pendulum.now(\"UTC\").diff(last_polled).in_seconds()\n            > self.maximum_seconds_since_last_polled\n        ):\n            healthy = False\n\n    return healthy\n
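    A small usage sketch of the health check, assuming `pendulum` is available; the thresholds shown are the documented defaults:

    import pendulum
    from prefect.server.schemas.core import WorkQueueHealthPolicy

    policy = WorkQueueHealthPolicy(maximum_late_runs=0, maximum_seconds_since_last_polled=60)

    policy.evaluate_health_status(late_runs_count=0, last_polled=pendulum.now("UTC"))  # True
    policy.evaluate_health_status(late_runs_count=3, last_polled=None)                 # False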
    "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Worker","title":"Worker","text":"

    Bases: ORMBaseModel

    An ORM representation of a worker

    Source code in prefect/server/schemas/core.py
    class Worker(ORMBaseModel):\n    \"\"\"An ORM representation of a worker\"\"\"\n\n    name: str = Field(description=\"The name of the worker.\")\n    work_pool_id: UUID = Field(\n        description=\"The work pool with which the queue is associated.\"\n    )\n    last_heartbeat_time: datetime.datetime = Field(\n        None, description=\"The last time the worker process sent a heartbeat.\"\n    )\n    heartbeat_interval_seconds: Optional[int] = Field(\n        default=None,\n        description=(\n            \"The number of seconds to expect between heartbeats sent by the worker.\"\n        ),\n    )\n
    "},{"location":"api-ref/server/schemas/filters/","title":"server.schemas.filters","text":""},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters","title":"prefect.server.schemas.filters","text":"

    Schemas that define Prefect REST API filtering operations.

    Each filter schema includes logic for transforming itself into a SQL where clause.

    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilter","title":"ArtifactCollectionFilter","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter artifact collections. Only artifact collections matching all criteria will be returned

    Source code in prefect/server/schemas/filters.py
    class ArtifactCollectionFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter artifact collections. Only artifact collections matching all criteria will be returned\"\"\"\n\n    latest_id: Optional[ArtifactCollectionFilterLatestId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.id`\"\n    )\n    key: Optional[ArtifactCollectionFilterKey] = Field(\n        default=None, description=\"Filter criteria for `Artifact.key`\"\n    )\n    flow_run_id: Optional[ArtifactCollectionFilterFlowRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.flow_run_id`\"\n    )\n    task_run_id: Optional[ArtifactCollectionFilterTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.task_run_id`\"\n    )\n    type: Optional[ArtifactCollectionFilterType] = Field(\n        default=None, description=\"Filter criteria for `Artifact.type`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.latest_id is not None:\n            filters.append(self.latest_id.as_sql_filter(db))\n        if self.key is not None:\n            filters.append(self.key.as_sql_filter(db))\n        if self.flow_run_id is not None:\n            filters.append(self.flow_run_id.as_sql_filter(db))\n        if self.task_run_id is not None:\n            filters.append(self.task_run_id.as_sql_filter(db))\n        if self.type is not None:\n            filters.append(self.type.as_sql_filter(db))\n\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilter.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
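For example, the flag can be passed when serializing any of these filter models. A sketch, assuming no secret fields are present (so both calls produce the same output):

from prefect.server.schemas import filters

key_filter = filters.ArtifactCollectionFilterKey(like_="report-%")

# Secrets are obfuscated by default; include_secrets=True reveals any
# SecretStr / SecretBytes values (none exist on this model).
print(key_filter.json())
print(key_filter.json(include_secrets=True))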
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterFlowRunId","title":"ArtifactCollectionFilterFlowRunId","text":"

    Bases: PrefectFilterBaseModel

    Filter by ArtifactCollection.flow_run_id.

    Source code in prefect/server/schemas/filters.py
    class ArtifactCollectionFilterFlowRunId(PrefectFilterBaseModel):\n    \"\"\"Filter by `ArtifactCollection.flow_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.ArtifactCollection.flow_run_id.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterFlowRunId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterKey","title":"ArtifactCollectionFilterKey","text":"

    Bases: PrefectFilterBaseModel

    Filter by ArtifactCollection.key.

    Source code in prefect/server/schemas/filters.py
    class ArtifactCollectionFilterKey(PrefectFilterBaseModel):\n    \"\"\"Filter by `ArtifactCollection.key`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact keys to include\"\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match artifact keys against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        examples=[\"my-artifact-%\"],\n    )\n\n    exists_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"If `true`, only include artifacts with a non-null key. If `false`, \"\n            \"only include artifacts with a null key. Should return all rows in \"\n            \"the ArtifactCollection table if specified.\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.ArtifactCollection.key.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.ArtifactCollection.key.ilike(f\"%{self.like_}%\"))\n        if self.exists_ is not None:\n            filters.append(\n                db.ArtifactCollection.key.isnot(None)\n                if self.exists_\n                else db.ArtifactCollection.key.is_(None)\n            )\n        return filters\n
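A small sketch of the three criteria on this filter:

from prefect.server.schemas import filters

# Wildcard match on the key; `%` matches any run of characters.
by_pattern = filters.ArtifactCollectionFilterKey(like_="my-artifact-%")

# Exact membership in a list of keys.
by_keys = filters.ArtifactCollectionFilterKey(any_=["daily-report", "weekly-report"])

# Keep only rows with a non-null key (False keeps only null keys).
non_null_keys = filters.ArtifactCollectionFilterKey(exists_=True)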
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterKey.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterLatestId","title":"ArtifactCollectionFilterLatestId","text":"

    Bases: PrefectFilterBaseModel

    Filter by ArtifactCollection.latest_id.

    Source code in prefect/server/schemas/filters.py
    class ArtifactCollectionFilterLatestId(PrefectFilterBaseModel):\n    \"\"\"Filter by `ArtifactCollection.latest_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of artifact ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.ArtifactCollection.latest_id.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterLatestId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterTaskRunId","title":"ArtifactCollectionFilterTaskRunId","text":"

    Bases: PrefectFilterBaseModel

    Filter by ArtifactCollection.task_run_id.

    Source code in prefect/server/schemas/filters.py
    class ArtifactCollectionFilterTaskRunId(PrefectFilterBaseModel):\n    \"\"\"Filter by `ArtifactCollection.task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.ArtifactCollection.task_run_id.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterTaskRunId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterType","title":"ArtifactCollectionFilterType","text":"

    Bases: PrefectFilterBaseModel

    Filter by ArtifactCollection.type.

    Source code in prefect/server/schemas/filters.py
    class ArtifactCollectionFilterType(PrefectFilterBaseModel):\n    \"\"\"Filter by `ArtifactCollection.type`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to include\"\n    )\n    not_any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to exclude\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.ArtifactCollection.type.in_(self.any_))\n        if self.not_any_ is not None:\n            filters.append(db.ArtifactCollection.type.notin_(self.not_any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterType.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilter","title":"ArtifactFilter","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter artifacts. Only artifacts matching all criteria will be returned

    Source code in prefect/server/schemas/filters.py
    class ArtifactFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter artifacts. Only artifacts matching all criteria will be returned\"\"\"\n\n    id: Optional[ArtifactFilterId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.id`\"\n    )\n    key: Optional[ArtifactFilterKey] = Field(\n        default=None, description=\"Filter criteria for `Artifact.key`\"\n    )\n    flow_run_id: Optional[ArtifactFilterFlowRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.flow_run_id`\"\n    )\n    task_run_id: Optional[ArtifactFilterTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.task_run_id`\"\n    )\n    type: Optional[ArtifactFilterType] = Field(\n        default=None, description=\"Filter criteria for `Artifact.type`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.key is not None:\n            filters.append(self.key.as_sql_filter(db))\n        if self.flow_run_id is not None:\n            filters.append(self.flow_run_id.as_sql_filter(db))\n        if self.task_run_id is not None:\n            filters.append(self.task_run_id.as_sql_filter(db))\n        if self.type is not None:\n            filters.append(self.type.as_sql_filter(db))\n\n        return filters\n
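A short sketch of combining criteria; every populated field is AND-ed together:

from uuid import uuid4
from prefect.server.schemas import filters

artifact_filter = filters.ArtifactFilter(
    flow_run_id=filters.ArtifactFilterFlowRunId(any_=[uuid4()]),
    type=filters.ArtifactFilterType(not_any_=["result"]),
)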
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilter.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterFlowRunId","title":"ArtifactFilterFlowRunId","text":"

    Bases: PrefectFilterBaseModel

    Filter by Artifact.flow_run_id.

    Source code in prefect/server/schemas/filters.py
    class ArtifactFilterFlowRunId(PrefectFilterBaseModel):\n    \"\"\"Filter by `Artifact.flow_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Artifact.flow_run_id.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterFlowRunId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterId","title":"ArtifactFilterId","text":"

    Bases: PrefectFilterBaseModel

    Filter by Artifact.id.

    Source code in prefect/server/schemas/filters.py
    class ArtifactFilterId(PrefectFilterBaseModel):\n    \"\"\"Filter by `Artifact.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of artifact ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Artifact.id.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterKey","title":"ArtifactFilterKey","text":"

    Bases: PrefectFilterBaseModel

    Filter by Artifact.key.

    Source code in prefect/server/schemas/filters.py
    class ArtifactFilterKey(PrefectFilterBaseModel):\n    \"\"\"Filter by `Artifact.key`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact keys to include\"\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match artifact keys against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        examples=[\"my-artifact-%\"],\n    )\n\n    exists_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"If `true`, only include artifacts with a non-null key. If `false`, \"\n            \"only include artifacts with a null key.\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Artifact.key.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.Artifact.key.ilike(f\"%{self.like_}%\"))\n        if self.exists_ is not None:\n            filters.append(\n                db.Artifact.key.isnot(None)\n                if self.exists_\n                else db.Artifact.key.is_(None)\n            )\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterKey.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterTaskRunId","title":"ArtifactFilterTaskRunId","text":"

    Bases: PrefectFilterBaseModel

    Filter by Artifact.task_run_id.

    Source code in prefect/server/schemas/filters.py
    class ArtifactFilterTaskRunId(PrefectFilterBaseModel):\n    \"\"\"Filter by `Artifact.task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Artifact.task_run_id.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterTaskRunId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterType","title":"ArtifactFilterType","text":"

    Bases: PrefectFilterBaseModel

    Filter by Artifact.type.

    Source code in prefect/server/schemas/filters.py
    class ArtifactFilterType(PrefectFilterBaseModel):\n    \"\"\"Filter by `Artifact.type`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to include\"\n    )\n    not_any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to exclude\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Artifact.type.in_(self.any_))\n        if self.not_any_ is not None:\n            filters.append(db.Artifact.type.notin_(self.not_any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterType.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilter","title":"BlockDocumentFilter","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned

    Source code in prefect/server/schemas/filters.py
    class BlockDocumentFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned\"\"\"\n\n    id: Optional[BlockDocumentFilterId] = Field(\n        default=None, description=\"Filter criteria for `BlockDocument.id`\"\n    )\n    is_anonymous: Optional[BlockDocumentFilterIsAnonymous] = Field(\n        # default is to exclude anonymous blocks\n        BlockDocumentFilterIsAnonymous(eq_=False),\n        description=(\n            \"Filter criteria for `BlockDocument.is_anonymous`. \"\n            \"Defaults to excluding anonymous blocks.\"\n        ),\n    )\n    block_type_id: Optional[BlockDocumentFilterBlockTypeId] = Field(\n        default=None, description=\"Filter criteria for `BlockDocument.block_type_id`\"\n    )\n    name: Optional[BlockDocumentFilterName] = Field(\n        default=None, description=\"Filter criteria for `BlockDocument.name`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.is_anonymous is not None:\n            filters.append(self.is_anonymous.as_sql_filter(db))\n        if self.block_type_id is not None:\n            filters.append(self.block_type_id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        return filters\n
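Note the default: anonymous block documents are excluded unless you override is_anonymous. A small sketch:

from prefect.server.schemas import filters

# Named (non-anonymous) block documents whose names match a pattern.
named_docs = filters.BlockDocumentFilter(
    name=filters.BlockDocumentFilterName(like_="prod-"),
)

# Passing eq_=None clears the default so anonymous documents are included too.
all_docs = filters.BlockDocumentFilter(
    is_anonymous=filters.BlockDocumentFilterIsAnonymous(eq_=None),
)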
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilter.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterBlockTypeId","title":"BlockDocumentFilterBlockTypeId","text":"

    Bases: PrefectFilterBaseModel

    Filter by BlockDocument.block_type_id.

    Source code in prefect/server/schemas/filters.py
    class BlockDocumentFilterBlockTypeId(PrefectFilterBaseModel):\n    \"\"\"Filter by `BlockDocument.block_type_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of block type ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockDocument.block_type_id.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterBlockTypeId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterId","title":"BlockDocumentFilterId","text":"

    Bases: PrefectFilterBaseModel

    Filter by BlockDocument.id.

    Source code in prefect/server/schemas/filters.py
    class BlockDocumentFilterId(PrefectFilterBaseModel):\n    \"\"\"Filter by `BlockDocument.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of block ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockDocument.id.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterIsAnonymous","title":"BlockDocumentFilterIsAnonymous","text":"

    Bases: PrefectFilterBaseModel

    Filter by BlockDocument.is_anonymous.

    Source code in prefect/server/schemas/filters.py
    class BlockDocumentFilterIsAnonymous(PrefectFilterBaseModel):\n    \"\"\"Filter by `BlockDocument.is_anonymous`.\"\"\"\n\n    eq_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Filter block documents for only those that are or are not anonymous.\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.eq_ is not None:\n            filters.append(db.BlockDocument.is_anonymous.is_(self.eq_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterIsAnonymous.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterName","title":"BlockDocumentFilterName","text":"

    Bases: PrefectFilterBaseModel

    Filter by BlockDocument.name.

    Source code in prefect/server/schemas/filters.py
    class BlockDocumentFilterName(PrefectFilterBaseModel):\n    \"\"\"Filter by `BlockDocument.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of block names to include\"\n    )\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match block names against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        examples=[\"my-block%\"],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockDocument.name.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.BlockDocument.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterName.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilter","title":"BlockSchemaFilter","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter BlockSchemas

    Source code in prefect/server/schemas/filters.py
    class BlockSchemaFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter BlockSchemas\"\"\"\n\n    block_type_id: Optional[BlockSchemaFilterBlockTypeId] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.block_type_id`\"\n    )\n    block_capabilities: Optional[BlockSchemaFilterCapabilities] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.capabilities`\"\n    )\n    id: Optional[BlockSchemaFilterId] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.id`\"\n    )\n    version: Optional[BlockSchemaFilterVersion] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.version`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.block_type_id is not None:\n            filters.append(self.block_type_id.as_sql_filter(db))\n        if self.block_capabilities is not None:\n            filters.append(self.block_capabilities.as_sql_filter(db))\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.version is not None:\n            filters.append(self.version.as_sql_filter(db))\n\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilter.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterBlockTypeId","title":"BlockSchemaFilterBlockTypeId","text":"

    Bases: PrefectFilterBaseModel

    Filter by BlockSchema.block_type_id.

    Source code in prefect/server/schemas/filters.py
    class BlockSchemaFilterBlockTypeId(PrefectFilterBaseModel):\n    \"\"\"Filter by `BlockSchema.block_type_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of block type ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockSchema.block_type_id.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterBlockTypeId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterCapabilities","title":"BlockSchemaFilterCapabilities","text":"

    Bases: PrefectFilterBaseModel

    Filter by BlockSchema.capabilities

    Source code in prefect/server/schemas/filters.py
    class BlockSchemaFilterCapabilities(PrefectFilterBaseModel):\n    \"\"\"Filter by `BlockSchema.capabilities`\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"write-storage\", \"read-storage\"]],\n        description=(\n            \"A list of block capabilities. Block entities will be returned only if an\"\n            \" associated block schema has a superset of the defined capabilities.\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.BlockSchema.capabilities, self.all_))\n        return filters\n
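For instance, to keep only schemas that can both read and write storage (a sketch; a schema matches only if its capabilities are a superset of the listed keys):

from prefect.server.schemas import filters

schema_filter = filters.BlockSchemaFilter(
    block_capabilities=filters.BlockSchemaFilterCapabilities(
        all_=["read-storage", "write-storage"],
    ),
)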
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterCapabilities.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterId","title":"BlockSchemaFilterId","text":"

    Bases: PrefectFilterBaseModel

    Filter by BlockSchema.id

    Source code in prefect/server/schemas/filters.py
    class BlockSchemaFilterId(PrefectFilterBaseModel):\n    \"\"\"Filter by BlockSchema.id\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockSchema.id.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterVersion","title":"BlockSchemaFilterVersion","text":"

    Bases: PrefectFilterBaseModel

Filter by BlockSchema.version

    Source code in prefect/server/schemas/filters.py
class BlockSchemaFilterVersion(PrefectFilterBaseModel):\n    \"\"\"Filter by `BlockSchema.version`\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"2.0.0\", \"2.1.0\"]],\n        description=\"A list of block schema versions.\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockSchema.version.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterVersion.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilter","title":"BlockTypeFilter","text":"

    Bases: PrefectFilterBaseModel

    Filter BlockTypes

    Source code in prefect/server/schemas/filters.py
    class BlockTypeFilter(PrefectFilterBaseModel):\n    \"\"\"Filter BlockTypes\"\"\"\n\n    name: Optional[BlockTypeFilterName] = Field(\n        default=None, description=\"Filter criteria for `BlockType.name`\"\n    )\n\n    slug: Optional[BlockTypeFilterSlug] = Field(\n        default=None, description=\"Filter criteria for `BlockType.slug`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.slug is not None:\n            filters.append(self.slug.as_sql_filter(db))\n\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilter.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterName","title":"BlockTypeFilterName","text":"

    Bases: PrefectFilterBaseModel

    Filter by BlockType.name

    Source code in prefect/server/schemas/filters.py
    class BlockTypeFilterName(PrefectFilterBaseModel):\n    \"\"\"Filter by `BlockType.name`\"\"\"\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        examples=[\"marvin\"],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.like_ is not None:\n            filters.append(db.BlockType.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterName.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterSlug","title":"BlockTypeFilterSlug","text":"

    Bases: PrefectFilterBaseModel

    Filter by BlockType.slug

    Source code in prefect/server/schemas/filters.py
    class BlockTypeFilterSlug(PrefectFilterBaseModel):\n    \"\"\"Filter by `BlockType.slug`\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of slugs to match\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockType.slug.in_(self.any_))\n\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterSlug.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilter","title":"DeploymentFilter","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter for deployments. Only deployments matching all criteria will be returned.

    Source code in prefect/server/schemas/filters.py
    class DeploymentFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter for deployments. Only deployments matching all criteria will be returned.\"\"\"\n\n    id: Optional[DeploymentFilterId] = Field(\n        default=None, description=\"Filter criteria for `Deployment.id`\"\n    )\n    name: Optional[DeploymentFilterName] = Field(\n        default=None, description=\"Filter criteria for `Deployment.name`\"\n    )\n    paused: Optional[DeploymentFilterPaused] = Field(\n        default=None, description=\"Filter criteria for `Deployment.paused`\"\n    )\n    is_schedule_active: Optional[DeploymentFilterIsScheduleActive] = Field(\n        default=None, description=\"Filter criteria for `Deployment.is_schedule_active`\"\n    )\n    tags: Optional[DeploymentFilterTags] = Field(\n        default=None, description=\"Filter criteria for `Deployment.tags`\"\n    )\n    work_queue_name: Optional[DeploymentFilterWorkQueueName] = Field(\n        default=None, description=\"Filter criteria for `Deployment.work_queue_name`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.paused is not None:\n            filters.append(self.paused.as_sql_filter(db))\n        if self.is_schedule_active is not None:\n            filters.append(self.is_schedule_active.as_sql_filter(db))\n        if self.tags is not None:\n            filters.append(self.tags.as_sql_filter(db))\n        if self.work_queue_name is not None:\n            filters.append(self.work_queue_name.as_sql_filter(db))\n\n        return filters\n
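A short sketch combining two of the criteria shown on this page:

from prefect.server.schemas import filters

deployment_filter = filters.DeploymentFilter(
    name=filters.DeploymentFilterName(like_="etl-"),
    is_schedule_active=filters.DeploymentFilterIsScheduleActive(eq_=True),
)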
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilter.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterId","title":"DeploymentFilterId","text":"

    Bases: PrefectFilterBaseModel

    Filter by Deployment.id.

    Source code in prefect/server/schemas/filters.py
    class DeploymentFilterId(PrefectFilterBaseModel):\n    \"\"\"Filter by `Deployment.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of deployment ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Deployment.id.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterIsScheduleActive","title":"DeploymentFilterIsScheduleActive","text":"

    Bases: PrefectFilterBaseModel

Legacy filter for Deployment.is_schedule_active, which is always the opposite of Deployment.paused.

    Source code in prefect/server/schemas/filters.py
    class DeploymentFilterIsScheduleActive(PrefectFilterBaseModel):\n    \"\"\"Legacy filter to filter by `Deployment.is_schedule_active` which\n    is always the opposite of `Deployment.paused`.\"\"\"\n\n    eq_: Optional[bool] = Field(\n        default=None,\n        description=\"Only returns where deployment schedule is/is not active\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.eq_ is not None:\n            filters.append(db.Deployment.paused.is_not(self.eq_))\n        return filters\n
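In other words, eq_=True is rendered as Deployment.paused IS NOT true, so the two usages below express the same intent (a sketch; the paused filter's eq_ field is assumed to mirror this one):

from prefect.server.schemas import filters

# Legacy spelling: keep deployments whose schedule is active.
active_legacy = filters.DeploymentFilterIsScheduleActive(eq_=True)

# Equivalent modern spelling (assumed field name): deployment not paused.
not_paused = filters.DeploymentFilterPaused(eq_=False)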
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterIsScheduleActive.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterName","title":"DeploymentFilterName","text":"

    Bases: PrefectFilterBaseModel

    Filter by Deployment.name.

    Source code in prefect/server/schemas/filters.py
    class DeploymentFilterName(PrefectFilterBaseModel):\n    \"\"\"Filter by `Deployment.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of deployment names to include\",\n        examples=[[\"my-deployment-1\", \"my-deployment-2\"]],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        examples=[\"marvin\"],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Deployment.name.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.Deployment.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
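    A short sketch of the two matching modes (exact membership vs. case-insensitive partial match); the deployment names are made up:

    from prefect.server.schemas.filters import DeploymentFilterName

    # only these exact names
    by_name = DeploymentFilterName(any_=["my-deployment-1", "my-deployment-2"])

    # case-insensitive partial match: 'marvin' matches 'marvin', 'sad-Marvin',
    # and 'marvin-robot', as described above
    by_pattern = DeploymentFilterName(like_="marvin")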
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterName.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterPaused","title":"DeploymentFilterPaused","text":"

    Bases: PrefectFilterBaseModel

    Filter by Deployment.paused.

    Source code in prefect/server/schemas/filters.py
    class DeploymentFilterPaused(PrefectFilterBaseModel):\n    \"\"\"Filter by `Deployment.paused`.\"\"\"\n\n    eq_: Optional[bool] = Field(\n        default=None,\n        description=\"Only returns where deployment is/is not paused\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.eq_ is not None:\n            filters.append(db.Deployment.paused.is_(self.eq_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterPaused.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterTags","title":"DeploymentFilterTags","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter by Deployment.tags.

    Source code in prefect/server/schemas/filters.py
    class DeploymentFilterTags(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `Deployment.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"tag-1\", \"tag-2\"]],\n        description=(\n            \"A list of tags. Deployments will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include deployments without tags\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.Deployment.tags, self.all_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.Deployment.tags == [] if self.is_null_ else db.Deployment.tags != []\n            )\n        return filters\n
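    To illustrate the superset semantics of all_ and the is_null_ switch, a sketch with placeholder tags:

    from prefect.server.schemas.filters import DeploymentFilterTags

    # only deployments tagged with both "prod" and "etl" (their tags must be a superset)
    tagged = DeploymentFilterTags(all_=["prod", "etl"])

    # only deployments that have no tags at all
    untagged = DeploymentFilterTags(is_null_=True)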
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterTags.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterWorkQueueName","title":"DeploymentFilterWorkQueueName","text":"

    Bases: PrefectFilterBaseModel

    Filter by Deployment.work_queue_name.

    Source code in prefect/server/schemas/filters.py
    class DeploymentFilterWorkQueueName(PrefectFilterBaseModel):\n    \"\"\"Filter by `Deployment.work_queue_name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of work queue names to include\",\n        examples=[[\"work_queue_1\", \"work_queue_2\"]],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Deployment.work_queue_name.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterWorkQueueName.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentScheduleFilter","title":"DeploymentScheduleFilter","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter for deployment schedules. Only deployment schedules matching all criteria will be returned.

    Source code in prefect/server/schemas/filters.py
    class DeploymentScheduleFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter for deployments. Only deployments matching all criteria will be returned.\"\"\"\n\n    active: Optional[DeploymentScheduleFilterActive] = Field(\n        default=None, description=\"Filter criteria for `DeploymentSchedule.active`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.active is not None:\n            filters.append(self.active.as_sql_filter(db))\n\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentScheduleFilter.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentScheduleFilterActive","title":"DeploymentScheduleFilterActive","text":"

    Bases: PrefectFilterBaseModel

    Filter by DeploymentSchedule.active.

    Source code in prefect/server/schemas/filters.py
    class DeploymentScheduleFilterActive(PrefectFilterBaseModel):\n    \"\"\"Filter by `DeploymentSchedule.active`.\"\"\"\n\n    eq_: Optional[bool] = Field(\n        default=None,\n        description=\"Only returns where deployment schedule is/is not active\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.eq_ is not None:\n            filters.append(db.DeploymentSchedule.active.is_(self.eq_))\n        return filters\n
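    A sketch combining the schedule filter with its only criterion, using the two classes documented above:

    from prefect.server.schemas.filters import (
        DeploymentScheduleFilter,
        DeploymentScheduleFilterActive,
    )

    # match only deployment schedules that are currently active
    schedule_filter = DeploymentScheduleFilter(
        active=DeploymentScheduleFilterActive(eq_=True)
    )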
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentScheduleFilterActive.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FilterSet","title":"FilterSet","text":"

    Bases: PrefectBaseModel

    A collection of filters for common objects

    Source code in prefect/server/schemas/filters.py
    class FilterSet(PrefectBaseModel):\n    \"\"\"A collection of filters for common objects\"\"\"\n\n    flows: FlowFilter = Field(\n        default_factory=FlowFilter, description=\"Filters that apply to flows\"\n    )\n    flow_runs: FlowRunFilter = Field(\n        default_factory=FlowRunFilter, description=\"Filters that apply to flow runs\"\n    )\n    task_runs: TaskRunFilter = Field(\n        default_factory=TaskRunFilter, description=\"Filters that apply to task runs\"\n    )\n    deployments: DeploymentFilter = Field(\n        default_factory=DeploymentFilter,\n        description=\"Filters that apply to deployments\",\n    )\n
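    A sketch of bundling several filters into one FilterSet; members that are not set keep their default (empty) filters:

    from prefect.server.schemas.filters import (
        FilterSet,
        FlowFilter,
        FlowFilterName,
        FlowRunFilter,
        FlowRunFilterTags,
    )

    filter_set = FilterSet(
        flows=FlowFilter(name=FlowFilterName(like_="etl")),
        flow_runs=FlowRunFilter(tags=FlowRunFilterTags(all_=["prod"])),
        # task_runs and deployments fall back to their empty defaults
    )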
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FilterSet.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilter","title":"FlowFilter","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter for flows. Only flows matching all criteria will be returned.

    Source code in prefect/server/schemas/filters.py
    class FlowFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter for flows. Only flows matching all criteria will be returned.\"\"\"\n\n    id: Optional[FlowFilterId] = Field(\n        default=None, description=\"Filter criteria for `Flow.id`\"\n    )\n    deployment: Optional[FlowFilterDeployment] = Field(\n        default=None, description=\"Filter criteria for Flow deployments\"\n    )\n    name: Optional[FlowFilterName] = Field(\n        default=None, description=\"Filter criteria for `Flow.name`\"\n    )\n    tags: Optional[FlowFilterTags] = Field(\n        default=None, description=\"Filter criteria for `Flow.tags`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.deployment is not None:\n            filters.append(self.deployment.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.tags is not None:\n            filters.append(self.tags.as_sql_filter(db))\n\n        return filters\n
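    Sub-filters combine with AND, so every criterion supplied must match. A small sketch:

    from prefect.server.schemas.filters import FlowFilter, FlowFilterName, FlowFilterTags

    # flows whose name contains "etl" and whose tags include "prod"
    flow_filter = FlowFilter(
        name=FlowFilterName(like_="etl"),
        tags=FlowFilterTags(all_=["prod"]),
    )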
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilter.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterDeployment","title":"FlowFilterDeployment","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter flows by deployment

    Source code in prefect/server/schemas/filters.py
    class FlowFilterDeployment(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by flows by deployment\"\"\"\n\n    is_null_: Optional[bool] = Field(\n        default=None,\n        description=\"If true, only include flows without deployments\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.is_null_ is not None:\n            deployments_subquery = (\n                sa.select(db.Deployment.flow_id).distinct().subquery()\n            )\n\n            if self.is_null_:\n                filters.append(\n                    db.Flow.id.not_in(sa.select(deployments_subquery.c.flow_id))\n                )\n            else:\n                filters.append(\n                    db.Flow.id.in_(sa.select(deployments_subquery.c.flow_id))\n                )\n\n        return filters\n
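    The is_null_ switch selects flows with or without deployments through the subquery shown above; a sketch:

    from prefect.server.schemas.filters import FlowFilter, FlowFilterDeployment

    # flows that have no deployments at all
    undeployed = FlowFilter(deployment=FlowFilterDeployment(is_null_=True))

    # flows that have at least one deployment
    deployed = FlowFilter(deployment=FlowFilterDeployment(is_null_=False))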
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterDeployment.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterId","title":"FlowFilterId","text":"

    Bases: PrefectFilterBaseModel

    Filter by Flow.id.

    Source code in prefect/server/schemas/filters.py
    class FlowFilterId(PrefectFilterBaseModel):\n    \"\"\"Filter by `Flow.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Flow.id.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterName","title":"FlowFilterName","text":"

    Bases: PrefectFilterBaseModel

    Filter by Flow.name.

    Source code in prefect/server/schemas/filters.py
    class FlowFilterName(PrefectFilterBaseModel):\n    \"\"\"Filter by `Flow.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of flow names to include\",\n        examples=[[\"my-flow-1\", \"my-flow-2\"]],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        examples=[\"marvin\"],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Flow.name.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.Flow.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterName.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterTags","title":"FlowFilterTags","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter by Flow.tags.

    Source code in prefect/server/schemas/filters.py
    class FlowFilterTags(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `Flow.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"tag-1\", \"tag-2\"]],\n        description=(\n            \"A list of tags. Flows will be returned only if their tags are a superset\"\n            \" of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include flows without tags\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.Flow.tags, self.all_))\n        if self.is_null_ is not None:\n            filters.append(db.Flow.tags == [] if self.is_null_ else db.Flow.tags != [])\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterTags.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilter","title":"FlowRunFilter","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter flow runs. Only flow runs matching all criteria will be returned.

    Source code in prefect/server/schemas/filters.py
    class FlowRunFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter flow runs. Only flow runs matching all criteria will be returned\"\"\"\n\n    id: Optional[FlowRunFilterId] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.id`\"\n    )\n    name: Optional[FlowRunFilterName] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.name`\"\n    )\n    tags: Optional[FlowRunFilterTags] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.tags`\"\n    )\n    deployment_id: Optional[FlowRunFilterDeploymentId] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.deployment_id`\"\n    )\n    work_queue_name: Optional[FlowRunFilterWorkQueueName] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.work_queue_name\"\n    )\n    state: Optional[FlowRunFilterState] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.state`\"\n    )\n    flow_version: Optional[FlowRunFilterFlowVersion] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.flow_version`\"\n    )\n    start_time: Optional[FlowRunFilterStartTime] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.start_time`\"\n    )\n    expected_start_time: Optional[FlowRunFilterExpectedStartTime] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.expected_start_time`\"\n    )\n    next_scheduled_start_time: Optional[FlowRunFilterNextScheduledStartTime] = Field(\n        default=None,\n        description=\"Filter criteria for `FlowRun.next_scheduled_start_time`\",\n    )\n    parent_flow_run_id: Optional[FlowRunFilterParentFlowRunId] = Field(\n        default=None, description=\"Filter criteria for subflows of the given flow runs\"\n    )\n    parent_task_run_id: Optional[FlowRunFilterParentTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.parent_task_run_id`\"\n    )\n    idempotency_key: Optional[FlowRunFilterIdempotencyKey] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.idempotency_key`\"\n    )\n\n    def only_filters_on_id(self):\n        return (\n            self.id is not None\n            and (self.id.any_ and not self.id.not_any_)\n            and self.name is None\n            and self.tags is None\n            and self.deployment_id is None\n            and self.work_queue_name is None\n            and self.state is None\n            and self.flow_version is None\n            and self.start_time is None\n            and self.expected_start_time is None\n            and self.next_scheduled_start_time is None\n            and self.parent_flow_run_id is None\n            and self.parent_task_run_id is None\n            and self.idempotency_key is None\n        )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.tags is not None:\n            filters.append(self.tags.as_sql_filter(db))\n        if self.deployment_id is not None:\n            filters.append(self.deployment_id.as_sql_filter(db))\n        if self.work_queue_name is not None:\n            filters.append(self.work_queue_name.as_sql_filter(db))\n        if self.flow_version is not None:\n            filters.append(self.flow_version.as_sql_filter(db))\n        if self.state is not None:\n            filters.append(self.state.as_sql_filter(db))\n        if self.start_time is not None:\n            filters.append(self.start_time.as_sql_filter(db))\n        if self.expected_start_time is not None:\n            filters.append(self.expected_start_time.as_sql_filter(db))\n        if self.next_scheduled_start_time is not None:\n            filters.append(self.next_scheduled_start_time.as_sql_filter(db))\n        if self.parent_flow_run_id is not None:\n            filters.append(self.parent_flow_run_id.as_sql_filter(db))\n        if self.parent_task_run_id is not None:\n            filters.append(self.parent_task_run_id.as_sql_filter(db))\n        if self.idempotency_key is not None:\n            filters.append(self.idempotency_key.as_sql_filter(db))\n\n        return filters\n
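    A sketch of a composite flow-run query; the state types and placeholder deployment id are illustrative assumptions, and the sub-filters used here are documented below:

    from uuid import uuid4
    from prefect.server.schemas.filters import (
        FlowRunFilter,
        FlowRunFilterDeploymentId,
        FlowRunFilterState,
        FlowRunFilterStateType,
        FlowRunFilterTags,
    )
    from prefect.server.schemas.states import StateType

    # failed or crashed runs of one deployment, tagged "prod"; all criteria must match
    run_filter = FlowRunFilter(
        deployment_id=FlowRunFilterDeploymentId(any_=[uuid4()]),
        state=FlowRunFilterState(
            type=FlowRunFilterStateType(any_=[StateType.FAILED, StateType.CRASHED])
        ),
        tags=FlowRunFilterTags(all_=["prod"]),
    )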
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilter.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterDeploymentId","title":"FlowRunFilterDeploymentId","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter by FlowRun.deployment_id.

    Source code in prefect/server/schemas/filters.py
    class FlowRunFilterDeploymentId(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `FlowRun.deployment_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run deployment ids to include\"\n    )\n    is_null_: Optional[bool] = Field(\n        default=None,\n        description=\"If true, only include flow runs without deployment ids\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.deployment_id.in_(self.any_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.FlowRun.deployment_id.is_(None)\n                if self.is_null_\n                else db.FlowRun.deployment_id.is_not(None)\n            )\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterDeploymentId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterExpectedStartTime","title":"FlowRunFilterExpectedStartTime","text":"

    Bases: PrefectFilterBaseModel

    Filter by FlowRun.expected_start_time.

    Source code in prefect/server/schemas/filters.py
    class FlowRunFilterExpectedStartTime(PrefectFilterBaseModel):\n    \"\"\"Filter by `FlowRun.expected_start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs scheduled to start at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs scheduled to start at or after this time\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.FlowRun.expected_start_time <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.FlowRun.expected_start_time >= self.after_)\n        return filters\n
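    A sketch of a time-window filter. It assumes pendulum datetimes; Prefect's DateTimeTZ fields are timezone-aware, and any timezone-aware datetime should coerce:

    import pendulum

    from prefect.server.schemas.filters import FlowRunFilterExpectedStartTime

    now = pendulum.now("UTC")

    # runs scheduled to start within the last hour
    recent_window = FlowRunFilterExpectedStartTime(
        after_=now.subtract(hours=1),
        before_=now,
    )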
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterExpectedStartTime.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterFlowVersion","title":"FlowRunFilterFlowVersion","text":"

    Bases: PrefectFilterBaseModel

    Filter by FlowRun.flow_version.

    Source code in prefect/server/schemas/filters.py
    class FlowRunFilterFlowVersion(PrefectFilterBaseModel):\n    \"\"\"Filter by `FlowRun.flow_version`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of flow run flow_versions to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.flow_version.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterFlowVersion.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterId","title":"FlowRunFilterId","text":"

    Bases: PrefectFilterBaseModel

    Filter by FlowRun.id.

    Source code in prefect/server/schemas/filters.py
    class FlowRunFilterId(PrefectFilterBaseModel):\n    \"\"\"Filter by `FlowRun.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run ids to include\"\n    )\n    not_any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run ids to exclude\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.id.in_(self.any_))\n        if self.not_any_ is not None:\n            filters.append(db.FlowRun.id.not_in(self.not_any_))\n        return filters\n
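    Include and exclude lists can be combined; a sketch with placeholder ids:

    from uuid import uuid4
    from prefect.server.schemas.filters import FlowRunFilterId

    wanted = [uuid4(), uuid4()]
    excluded = [uuid4()]

    # keep only the first two run ids, and never the excluded one
    id_filter = FlowRunFilterId(any_=wanted, not_any_=excluded)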
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterIdempotencyKey","title":"FlowRunFilterIdempotencyKey","text":"

    Bases: PrefectFilterBaseModel

    Filter by FlowRun.idempotency_key.

    Source code in prefect/server/schemas/filters.py
    class FlowRunFilterIdempotencyKey(PrefectFilterBaseModel):\n    \"\"\"Filter by FlowRun.idempotency_key.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of flow run idempotency keys to include\"\n    )\n    not_any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of flow run idempotency keys to exclude\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.idempotency_key.in_(self.any_))\n        if self.not_any_ is not None:\n            filters.append(db.FlowRun.idempotency_key.not_in(self.not_any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterIdempotencyKey.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterName","title":"FlowRunFilterName","text":"

    Bases: PrefectFilterBaseModel

    Filter by FlowRun.name.

    Source code in prefect/server/schemas/filters.py
    class FlowRunFilterName(PrefectFilterBaseModel):\n    \"\"\"Filter by `FlowRun.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of flow run names to include\",\n        examples=[[\"my-flow-run-1\", \"my-flow-run-2\"]],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        examples=[\"marvin\"],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.name.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.FlowRun.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterName.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterNextScheduledStartTime","title":"FlowRunFilterNextScheduledStartTime","text":"

    Bases: PrefectFilterBaseModel

    Filter by FlowRun.next_scheduled_start_time.

    Source code in prefect/server/schemas/filters.py
    class FlowRunFilterNextScheduledStartTime(PrefectFilterBaseModel):\n    \"\"\"Filter by `FlowRun.next_scheduled_start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include flow runs with a next_scheduled_start_time or before this\"\n            \" time\"\n        ),\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include flow runs with a next_scheduled_start_time at or after this\"\n            \" time\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.FlowRun.next_scheduled_start_time <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.FlowRun.next_scheduled_start_time >= self.after_)\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterNextScheduledStartTime.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterParentFlowRunId","title":"FlowRunFilterParentFlowRunId","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter for subflows of a given flow run

    Source code in prefect/server/schemas/filters.py
    class FlowRunFilterParentFlowRunId(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter for subflows of a given flow run\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of parent flow run ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(\n                db.FlowRun.id.in_(\n                    sa.select(db.FlowRun.id)\n                    .join(\n                        db.TaskRun,\n                        sa.and_(\n                            db.TaskRun.id == db.FlowRun.parent_task_run_id,\n                        ),\n                    )\n                    .where(db.TaskRun.flow_run_id.in_(self.any_))\n                )\n            )\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterParentFlowRunId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterParentTaskRunId","title":"FlowRunFilterParentTaskRunId","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter by FlowRun.parent_task_run_id.

    Source code in prefect/server/schemas/filters.py
    class FlowRunFilterParentTaskRunId(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `FlowRun.parent_task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run parent_task_run_ids to include\"\n    )\n    is_null_: Optional[bool] = Field(\n        default=None,\n        description=\"If true, only include flow runs without parent_task_run_id\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.parent_task_run_id.in_(self.any_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.FlowRun.parent_task_run_id.is_(None)\n                if self.is_null_\n                else db.FlowRun.parent_task_run_id.is_not(None)\n            )\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterParentTaskRunId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStartTime","title":"FlowRunFilterStartTime","text":"

    Bases: PrefectFilterBaseModel

    Filter by FlowRun.start_time.

    Source code in prefect/server/schemas/filters.py
    class FlowRunFilterStartTime(PrefectFilterBaseModel):\n    \"\"\"Filter by `FlowRun.start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs starting at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs starting at or after this time\",\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only return flow runs without a start time\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.FlowRun.start_time <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.FlowRun.start_time >= self.after_)\n        if self.is_null_ is not None:\n            filters.append(\n                db.FlowRun.start_time.is_(None)\n                if self.is_null_\n                else db.FlowRun.start_time.is_not(None)\n            )\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStartTime.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterState","title":"FlowRunFilterState","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter by FlowRun.state_type and FlowRun.state_name.

    Source code in prefect/server/schemas/filters.py
    class FlowRunFilterState(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `FlowRun.state_type` and `FlowRun.state_name`.\"\"\"\n\n    type: Optional[FlowRunFilterStateType] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.state_type`\"\n    )\n    name: Optional[FlowRunFilterStateName] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.state_name`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.type is not None:\n            filters.extend(self.type._get_filter_list(db))\n        if self.name is not None:\n            filters.extend(self.name._get_filter_list(db))\n        return filters\n
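    State can be narrowed by type, by name, or both. The state name "Late" is a common built-in name but is used here only as an illustration:

    from prefect.server.schemas.filters import (
        FlowRunFilterState,
        FlowRunFilterStateName,
        FlowRunFilterStateType,
    )
    from prefect.server.schemas.states import StateType

    # scheduled runs whose state is named "Late"
    late_runs = FlowRunFilterState(
        type=FlowRunFilterStateType(any_=[StateType.SCHEDULED]),
        name=FlowRunFilterStateName(any_=["Late"]),
    )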
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterState.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateName","title":"FlowRunFilterStateName","text":"

    Bases: PrefectFilterBaseModel

    Filter by FlowRun.state_name.

    Source code in prefect/server/schemas/filters.py
    class FlowRunFilterStateName(PrefectFilterBaseModel):\n    \"\"\"Filter by `FlowRun.state_name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of flow run state names to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.state_name.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateName.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateType","title":"FlowRunFilterStateType","text":"

    Bases: PrefectFilterBaseModel

    Filter by FlowRun.state_type.

    Source code in prefect/server/schemas/filters.py
    class FlowRunFilterStateType(PrefectFilterBaseModel):\n    \"\"\"Filter by `FlowRun.state_type`.\"\"\"\n\n    any_: Optional[List[schemas.states.StateType]] = Field(\n        default=None, description=\"A list of flow run state types to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.state_type.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateType.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterTags","title":"FlowRunFilterTags","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter by FlowRun.tags.

    Source code in prefect/server/schemas/filters.py
    class FlowRunFilterTags(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `FlowRun.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"tag-1\", \"tag-2\"]],\n        description=(\n            \"A list of tags. Flow runs will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include flow runs without tags\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.FlowRun.tags, self.all_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.FlowRun.tags == [] if self.is_null_ else db.FlowRun.tags != []\n            )\n        return filters\n
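    A minimal sketch of the two ways this filter is typically used, assuming Prefect is installed: requiring a superset of tags, or selecting runs with no tags at all.

    from prefect.server.schemas.filters import FlowRunFilterTags

    # Only flow runs tagged with BOTH "prod" and "etl" (tags must be a superset of this list).
    tagged = FlowRunFilterTags(all_=["prod", "etl"])

    # Only flow runs that have no tags at all.
    untagged = FlowRunFilterTags(is_null_=True)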
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterTags.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterWorkQueueName","title":"FlowRunFilterWorkQueueName","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter by FlowRun.work_queue_name.

    Source code in prefect/server/schemas/filters.py
    class FlowRunFilterWorkQueueName(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `FlowRun.work_queue_name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of work queue names to include\",\n        examples=[[\"work_queue_1\", \"work_queue_2\"]],\n    )\n    is_null_: Optional[bool] = Field(\n        default=None,\n        description=\"If true, only include flow runs without work queue names\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.work_queue_name.in_(self.any_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.FlowRun.work_queue_name.is_(None)\n                if self.is_null_\n                else db.FlowRun.work_queue_name.is_not(None)\n            )\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterWorkQueueName.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilter","title":"FlowRunNotificationPolicyFilter","text":"

    Bases: PrefectFilterBaseModel

    Filter FlowRunNotificationPolicies.

    Source code in prefect/server/schemas/filters.py
    class FlowRunNotificationPolicyFilter(PrefectFilterBaseModel):\n    \"\"\"Filter FlowRunNotificationPolicies.\"\"\"\n\n    is_active: Optional[FlowRunNotificationPolicyFilterIsActive] = Field(\n        default=FlowRunNotificationPolicyFilterIsActive(eq_=False),\n        description=\"Filter criteria for `FlowRunNotificationPolicy.is_active`. \",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.is_active is not None:\n            filters.append(self.is_active.as_sql_filter(db))\n\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilter.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilterIsActive","title":"FlowRunNotificationPolicyFilterIsActive","text":"

    Bases: PrefectFilterBaseModel

    Filter by FlowRunNotificationPolicy.is_active.

    Source code in prefect/server/schemas/filters.py
    class FlowRunNotificationPolicyFilterIsActive(PrefectFilterBaseModel):\n    \"\"\"Filter by `FlowRunNotificationPolicy.is_active`.\"\"\"\n\n    eq_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Filter notification policies for only those that are or are not active.\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.eq_ is not None:\n            filters.append(db.FlowRunNotificationPolicy.is_active.is_(self.eq_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilterIsActive.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilter","title":"LogFilter","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter logs. Only logs matching all criteria will be returned

    Source code in prefect/server/schemas/filters.py
    class LogFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter logs. Only logs matching all criteria will be returned\"\"\"\n\n    level: Optional[LogFilterLevel] = Field(\n        default=None, description=\"Filter criteria for `Log.level`\"\n    )\n    timestamp: Optional[LogFilterTimestamp] = Field(\n        default=None, description=\"Filter criteria for `Log.timestamp`\"\n    )\n    flow_run_id: Optional[LogFilterFlowRunId] = Field(\n        default=None, description=\"Filter criteria for `Log.flow_run_id`\"\n    )\n    task_run_id: Optional[LogFilterTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `Log.task_run_id`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.level is not None:\n            filters.append(self.level.as_sql_filter(db))\n        if self.timestamp is not None:\n            filters.append(self.timestamp.as_sql_filter(db))\n        if self.flow_run_id is not None:\n            filters.append(self.flow_run_id.as_sql_filter(db))\n        if self.task_run_id is not None:\n            filters.append(self.task_run_id.as_sql_filter(db))\n\n        return filters\n
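    A minimal sketch of combining criteria on this filter, assuming Prefect is installed; the flow run ID is a hypothetical placeholder, and the numeric level follows the standard Python logging scale used in the field examples above.

    from uuid import UUID

    from prefect.server.schemas.filters import LogFilter, LogFilterFlowRunId, LogFilterLevel

    # Hypothetical flow run ID, used purely for illustration.
    flow_run_id = UUID("00000000-0000-0000-0000-000000000000")

    # Logs at WARNING (30) or above for a single flow run; all criteria are ANDed.
    log_filter = LogFilter(
        level=LogFilterLevel(ge_=30),
        flow_run_id=LogFilterFlowRunId(any_=[flow_run_id]),
    )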
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilter.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterFlowRunId","title":"LogFilterFlowRunId","text":"

    Bases: PrefectFilterBaseModel

    Filter by Log.flow_run_id.

    Source code in prefect/server/schemas/filters.py
    class LogFilterFlowRunId(PrefectFilterBaseModel):\n    \"\"\"Filter by `Log.flow_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Log.flow_run_id.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterFlowRunId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterLevel","title":"LogFilterLevel","text":"

    Bases: PrefectFilterBaseModel

    Filter by Log.level.

    Source code in prefect/server/schemas/filters.py
    class LogFilterLevel(PrefectFilterBaseModel):\n    \"\"\"Filter by `Log.level`.\"\"\"\n\n    ge_: Optional[int] = Field(\n        default=None,\n        description=\"Include logs with a level greater than or equal to this level\",\n        examples=[20],\n    )\n\n    le_: Optional[int] = Field(\n        default=None,\n        description=\"Include logs with a level less than or equal to this level\",\n        examples=[50],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.ge_ is not None:\n            filters.append(db.Log.level >= self.ge_)\n        if self.le_ is not None:\n            filters.append(db.Log.level <= self.le_)\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterLevel.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterName","title":"LogFilterName","text":"

    Bases: PrefectFilterBaseModel

    Filter by Log.name.

    Source code in prefect/server/schemas/filters.py
    class LogFilterName(PrefectFilterBaseModel):\n    \"\"\"Filter by `Log.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of log names to include\",\n        examples=[[\"prefect.logger.flow_runs\", \"prefect.logger.task_runs\"]],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Log.name.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterName.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTaskRunId","title":"LogFilterTaskRunId","text":"

    Bases: PrefectFilterBaseModel

    Filter by Log.task_run_id.

    Source code in prefect/server/schemas/filters.py
    class LogFilterTaskRunId(PrefectFilterBaseModel):\n    \"\"\"Filter by `Log.task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Log.task_run_id.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTaskRunId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTimestamp","title":"LogFilterTimestamp","text":"

    Bases: PrefectFilterBaseModel

    Filter by Log.timestamp.

    Source code in prefect/server/schemas/filters.py
    class LogFilterTimestamp(PrefectFilterBaseModel):\n    \"\"\"Filter by `Log.timestamp`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include logs with a timestamp at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include logs with a timestamp at or after this time\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.Log.timestamp <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.Log.timestamp >= self.after_)\n        return filters\n
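    A minimal sketch of a time-window filter, assuming Prefect is installed and that the DateTimeTZ fields accept timezone-aware datetime objects.

    from datetime import datetime, timedelta, timezone

    from prefect.server.schemas.filters import LogFilterTimestamp

    now = datetime.now(timezone.utc)

    # Only logs emitted within the last hour.
    recent = LogFilterTimestamp(after_=now - timedelta(hours=1), before_=now)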
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTimestamp.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.Operator","title":"Operator","text":"

    Bases: AutoEnum

    Operators for combining filter criteria.

    Source code in prefect/server/schemas/filters.py
    class Operator(AutoEnum):\n    \"\"\"Operators for combining filter criteria.\"\"\"\n\n    and_ = AutoEnum.auto()\n    or_ = AutoEnum.auto()\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.Operator.auto","title":"auto staticmethod","text":"

    Exposes enum.auto() to avoid requiring a second import to use AutoEnum

    Source code in prefect/utilities/collections.py
    @staticmethod\ndef auto():\n    \"\"\"\n    Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n    \"\"\"\n    return auto()\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectFilterBaseModel","title":"PrefectFilterBaseModel","text":"

    Bases: PrefectBaseModel

    Base model for Prefect filters

    Source code in prefect/server/schemas/filters.py
    class PrefectFilterBaseModel(PrefectBaseModel):\n    \"\"\"Base model for Prefect filters\"\"\"\n\n    class Config:\n        extra = \"forbid\"\n\n    def as_sql_filter(self, db: \"PrefectDBInterface\") -> \"BooleanClauseList\":\n        \"\"\"Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter.\"\"\"\n        filters = self._get_filter_list(db)\n        if not filters:\n            return True\n        return sa.and_(*filters)\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        \"\"\"Return a list of boolean filter statements based on filter parameters\"\"\"\n        raise NotImplementedError(\"_get_filter_list must be implemented\")\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectFilterBaseModel.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectOperatorFilterBaseModel","title":"PrefectOperatorFilterBaseModel","text":"

    Bases: PrefectFilterBaseModel

    Base model for Prefect filters that combines criteria with a user-provided operator

    Source code in prefect/server/schemas/filters.py
    class PrefectOperatorFilterBaseModel(PrefectFilterBaseModel):\n    \"\"\"Base model for Prefect filters that combines criteria with a user-provided operator\"\"\"\n\n    operator: Operator = Field(\n        default=Operator.and_,\n        description=\"Operator for combining filter criteria. Defaults to 'and_'.\",\n    )\n\n    def as_sql_filter(self, db: \"PrefectDBInterface\") -> \"BooleanClauseList\":\n        filters = self._get_filter_list(db)\n        if not filters:\n            return True\n        return sa.and_(*filters) if self.operator == Operator.and_ else sa.or_(*filters)\n
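    A minimal sketch of overriding the default and_ operator on one of the subclasses documented above, assuming Prefect is installed; the operator field is inherited from this base model.

    from prefect.server.schemas.filters import FlowRunFilterTags, Operator

    # OR the criteria instead of the default AND: flow runs tagged with both
    # "prod" and "etl", or flow runs with no tags at all.
    tags_or_untagged = FlowRunFilterTags(
        operator=Operator.or_,
        all_=["prod", "etl"],
        is_null_=True,
    )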
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectOperatorFilterBaseModel.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilter","title":"TaskRunFilter","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter task runs. Only task runs matching all criteria will be returned

    Source code in prefect/server/schemas/filters.py
    class TaskRunFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter task runs. Only task runs matching all criteria will be returned\"\"\"\n\n    id: Optional[TaskRunFilterId] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.id`\"\n    )\n    name: Optional[TaskRunFilterName] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.name`\"\n    )\n    tags: Optional[TaskRunFilterTags] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.tags`\"\n    )\n    state: Optional[TaskRunFilterState] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.state`\"\n    )\n    start_time: Optional[TaskRunFilterStartTime] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.start_time`\"\n    )\n    subflow_runs: Optional[TaskRunFilterSubFlowRuns] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.subflow_run`\"\n    )\n    flow_run_id: Optional[TaskRunFilterFlowRunId] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.flow_run_id`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.tags is not None:\n            filters.append(self.tags.as_sql_filter(db))\n        if self.state is not None:\n            filters.append(self.state.as_sql_filter(db))\n        if self.start_time is not None:\n            filters.append(self.start_time.as_sql_filter(db))\n        if self.subflow_runs is not None:\n            filters.append(self.subflow_runs.as_sql_filter(db))\n        if self.flow_run_id is not None:\n            filters.append(self.flow_run_id.as_sql_filter(db))\n\n        return filters\n
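    A minimal sketch of composing this filter from the sub-filters documented on this page, assuming Prefect is installed; StateType is assumed importable from prefect.server.schemas.states. The state sub-filter's type and name fields are declared Optional above, so name is passed explicitly as None when filtering on type only.

    from prefect.server.schemas.filters import (
        TaskRunFilter,
        TaskRunFilterName,
        TaskRunFilterState,
        TaskRunFilterStateType,
    )
    from prefect.server.schemas.states import StateType

    # Failed task runs whose name contains "etl" (case-insensitive); criteria are ANDed.
    task_run_filter = TaskRunFilter(
        name=TaskRunFilterName(like_="etl"),
        state=TaskRunFilterState(
            type=TaskRunFilterStateType(any_=[StateType.FAILED]),
            name=None,
        ),
    )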
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilter.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterFlowRunId","title":"TaskRunFilterFlowRunId","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter by TaskRun.flow_run_id.

    Source code in prefect/server/schemas/filters.py
    class TaskRunFilterFlowRunId(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `TaskRun.flow_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run flow run ids to include\"\n    )\n\n    is_null_: bool = Field(\n        default=False, description=\"Filter for task runs with None as their flow run id\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.is_null_ is True:\n            filters.append(db.TaskRun.flow_run_id.is_(None))\n        else:\n            if self.any_ is not None:\n                filters.append(db.TaskRun.flow_run_id.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterFlowRunId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterId","title":"TaskRunFilterId","text":"

    Bases: PrefectFilterBaseModel

    Filter by TaskRun.id.

    Source code in prefect/server/schemas/filters.py
    class TaskRunFilterId(PrefectFilterBaseModel):\n    \"\"\"Filter by `TaskRun.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.TaskRun.id.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterName","title":"TaskRunFilterName","text":"

    Bases: PrefectFilterBaseModel

    Filter by TaskRun.name.

    Source code in prefect/server/schemas/filters.py
    class TaskRunFilterName(PrefectFilterBaseModel):\n    \"\"\"Filter by `TaskRun.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of task run names to include\",\n        examples=[[\"my-task-run-1\", \"my-task-run-2\"]],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        examples=[\"marvin\"],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.TaskRun.name.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.TaskRun.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterName.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStartTime","title":"TaskRunFilterStartTime","text":"

    Bases: PrefectFilterBaseModel

    Filter by TaskRun.start_time.

    Source code in prefect/server/schemas/filters.py
    class TaskRunFilterStartTime(PrefectFilterBaseModel):\n    \"\"\"Filter by `TaskRun.start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include task runs starting at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include task runs starting at or after this time\",\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only return task runs without a start time\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.TaskRun.start_time <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.TaskRun.start_time >= self.after_)\n        if self.is_null_ is not None:\n            filters.append(\n                db.TaskRun.start_time.is_(None)\n                if self.is_null_\n                else db.TaskRun.start_time.is_not(None)\n            )\n        return filters\n
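    A minimal sketch showing the time-window and is_null_ variants of this filter, assuming Prefect is installed and that the DateTimeTZ fields accept timezone-aware datetime objects.

    from datetime import datetime, timedelta, timezone

    from prefect.server.schemas.filters import TaskRunFilterStartTime

    cutoff = datetime.now(timezone.utc) - timedelta(days=1)

    # Task runs that started within the last day...
    started_recently = TaskRunFilterStartTime(after_=cutoff)

    # ...or task runs that never started at all.
    never_started = TaskRunFilterStartTime(is_null_=True)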
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStartTime.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterState","title":"TaskRunFilterState","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter by TaskRun.state_type and TaskRun.state_name.

    Source code in prefect/server/schemas/filters.py
    class TaskRunFilterState(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `TaskRun.type` and `TaskRun.name`.\"\"\"\n\n    type: Optional[TaskRunFilterStateType]\n    name: Optional[TaskRunFilterStateName]\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.type is not None:\n            filters.extend(self.type._get_filter_list(db))\n        if self.name is not None:\n            filters.extend(self.name._get_filter_list(db))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterState.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateName","title":"TaskRunFilterStateName","text":"

    Bases: PrefectFilterBaseModel

    Filter by TaskRun.state_name.

    Source code in prefect/server/schemas/filters.py
    class TaskRunFilterStateName(PrefectFilterBaseModel):\n    \"\"\"Filter by `TaskRun.state_name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of task run state names to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.TaskRun.state_name.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateName.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateType","title":"TaskRunFilterStateType","text":"

    Bases: PrefectFilterBaseModel

    Filter by TaskRun.state_type.

    Source code in prefect/server/schemas/filters.py
    class TaskRunFilterStateType(PrefectFilterBaseModel):\n    \"\"\"Filter by `TaskRun.state_type`.\"\"\"\n\n    any_: Optional[List[schemas.states.StateType]] = Field(\n        default=None, description=\"A list of task run state types to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.TaskRun.state_type.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateType.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterSubFlowRuns","title":"TaskRunFilterSubFlowRuns","text":"

    Bases: PrefectFilterBaseModel

    Filter by TaskRun.subflow_run.

    Source code in prefect/server/schemas/filters.py
    class TaskRunFilterSubFlowRuns(PrefectFilterBaseModel):\n    \"\"\"Filter by `TaskRun.subflow_run`.\"\"\"\n\n    exists_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"If true, only include task runs that are subflow run parents; if false,\"\n            \" exclude parent task runs\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.exists_ is True:\n            filters.append(db.TaskRun.subflow_run.has())\n        elif self.exists_ is False:\n            filters.append(sa.not_(db.TaskRun.subflow_run.has()))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterSubFlowRuns.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterTags","title":"TaskRunFilterTags","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter by TaskRun.tags.

    Source code in prefect/server/schemas/filters.py
    class TaskRunFilterTags(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `TaskRun.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"tag-1\", \"tag-2\"]],\n        description=(\n            \"A list of tags. Task runs will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include task runs without tags\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.TaskRun.tags, self.all_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.TaskRun.tags == [] if self.is_null_ else db.TaskRun.tags != []\n            )\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterTags.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilter","title":"VariableFilter","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter variables. Only variables matching all criteria will be returned

    Source code in prefect/server/schemas/filters.py
    class VariableFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter variables. Only variables matching all criteria will be returned\"\"\"\n\n    id: Optional[VariableFilterId] = Field(\n        default=None, description=\"Filter criteria for `Variable.id`\"\n    )\n    name: Optional[VariableFilterName] = Field(\n        default=None, description=\"Filter criteria for `Variable.name`\"\n    )\n    value: Optional[VariableFilterValue] = Field(\n        default=None, description=\"Filter criteria for `Variable.value`\"\n    )\n    tags: Optional[VariableFilterTags] = Field(\n        default=None, description=\"Filter criteria for `Variable.tags`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.value is not None:\n            filters.append(self.value.as_sql_filter(db))\n        if self.tags is not None:\n            filters.append(self.tags.as_sql_filter(db))\n        return filters\n
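    A minimal sketch of composing this filter from the sub-filters documented below, assuming Prefect is installed; the name and tag values are illustrative only.

    from prefect.server.schemas.filters import (
        VariableFilter,
        VariableFilterName,
        VariableFilterTags,
    )

    # Variables whose name contains "database" (case-insensitive) and that are tagged "prod".
    variable_filter = VariableFilter(
        name=VariableFilterName(like_="database"),
        tags=VariableFilterTags(all_=["prod"]),
    )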
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilter.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterId","title":"VariableFilterId","text":"

    Bases: PrefectFilterBaseModel

    Filter by Variable.id.

    Source code in prefect/server/schemas/filters.py
    class VariableFilterId(PrefectFilterBaseModel):\n    \"\"\"Filter by `Variable.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of variable ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Variable.id.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterName","title":"VariableFilterName","text":"

    Bases: PrefectFilterBaseModel

    Filter by Variable.name.

    Source code in prefect/server/schemas/filters.py
    class VariableFilterName(PrefectFilterBaseModel):\n    \"\"\"Filter by `Variable.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of variables names to include\"\n    )\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match variable names against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        examples=[\"my_variable_%\"],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Variable.name.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.Variable.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterName.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterTags","title":"VariableFilterTags","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter by Variable.tags.

    Source code in prefect/server/schemas/filters.py
    class VariableFilterTags(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `Variable.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"tag-1\", \"tag-2\"]],\n        description=(\n            \"A list of tags. Variables will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include Variables without tags\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.Variable.tags, self.all_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.Variable.tags == [] if self.is_null_ else db.Variable.tags != []\n            )\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterTags.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterValue","title":"VariableFilterValue","text":"

    Bases: PrefectFilterBaseModel

    Filter by Variable.value.

    Source code in prefect/server/schemas/filters.py
    class VariableFilterValue(PrefectFilterBaseModel):\n    \"\"\"Filter by `Variable.value`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of variables value to include\"\n    )\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match variable value against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        examples=[\"my-value-%\"],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Variable.value.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.Variable.value.ilike(f\"%{self.like_}%\"))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterValue.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilter","title":"WorkPoolFilter","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter work pools. Only work pools matching all criteria will be returned.

    Source code in prefect/server/schemas/filters.py
    class WorkPoolFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter work pools. Only work pools matching all criteria will be returned\"\"\"\n\n    id: Optional[WorkPoolFilterId] = Field(\n        default=None, description=\"Filter criteria for `WorkPool.id`\"\n    )\n    name: Optional[WorkPoolFilterName] = Field(\n        default=None, description=\"Filter criteria for `WorkPool.name`\"\n    )\n    type: Optional[WorkPoolFilterType] = Field(\n        default=None, description=\"Filter criteria for `WorkPool.type`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.type is not None:\n            filters.append(self.type.as_sql_filter(db))\n\n        return filters\n
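    A minimal sketch of composing the sub-filters documented in this module (assuming the import path shown here; the pool name and type values are illustrative):

```python
from prefect.server.schemas.filters import (
    WorkPoolFilter,
    WorkPoolFilterName,
    WorkPoolFilterType,
)

# Work pools must match all provided criteria: named "default-pool"
# and of type "process".
pool_filter = WorkPoolFilter(
    name=WorkPoolFilterName(any_=["default-pool"]),
    type=WorkPoolFilterType(any_=["process"]),
)

print(pool_filter.json())
```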
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilter.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterId","title":"WorkPoolFilterId","text":"

    Bases: PrefectFilterBaseModel

    Filter by WorkPool.id.

    Source code in prefect/server/schemas/filters.py
    class WorkPoolFilterId(PrefectFilterBaseModel):\n    \"\"\"Filter by `WorkPool.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of work pool ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.WorkPool.id.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterName","title":"WorkPoolFilterName","text":"

    Bases: PrefectFilterBaseModel

    Filter by WorkPool.name.

    Source code in prefect/server/schemas/filters.py
    class WorkPoolFilterName(PrefectFilterBaseModel):\n    \"\"\"Filter by `WorkPool.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of work pool names to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.WorkPool.name.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterName.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterType","title":"WorkPoolFilterType","text":"

    Bases: PrefectFilterBaseModel

    Filter by WorkPool.type.

    Source code in prefect/server/schemas/filters.py
    class WorkPoolFilterType(PrefectFilterBaseModel):\n    \"\"\"Filter by `WorkPool.type`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of work pool types to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.WorkPool.type.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterType.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilter","title":"WorkQueueFilter","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter work queues. Only work queues matching all criteria will be returned.

    Source code in prefect/server/schemas/filters.py
    class WorkQueueFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter work queues. Only work queues matching all criteria will be\n    returned\"\"\"\n\n    id: Optional[WorkQueueFilterId] = Field(\n        default=None, description=\"Filter criteria for `WorkQueue.id`\"\n    )\n\n    name: Optional[WorkQueueFilterName] = Field(\n        default=None, description=\"Filter criteria for `WorkQueue.name`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilter.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterId","title":"WorkQueueFilterId","text":"

    Bases: PrefectFilterBaseModel

    Filter by WorkQueue.id.

    Source code in prefect/server/schemas/filters.py
    class WorkQueueFilterId(PrefectFilterBaseModel):\n    \"\"\"Filter by `WorkQueue.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None,\n        description=\"A list of work queue ids to include\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.WorkQueue.id.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterName","title":"WorkQueueFilterName","text":"

    Bases: PrefectFilterBaseModel

    Filter by WorkQueue.name.

    Source code in prefect/server/schemas/filters.py
    class WorkQueueFilterName(PrefectFilterBaseModel):\n    \"\"\"Filter by `WorkQueue.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of work queue names to include\",\n        examples=[[\"wq-1\", \"wq-2\"]],\n    )\n\n    startswith_: Optional[List[str]] = Field(\n        default=None,\n        description=(\n            \"A list of case-insensitive starts-with matches. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', and 'Marvin-robot', but not 'sad-marvin'.\"\n        ),\n        examples=[[\"marvin\", \"Marvin-robot\"]],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.WorkQueue.name.in_(self.any_))\n        if self.startswith_ is not None:\n            filters.append(\n                sa.or_(\n                    *[db.WorkQueue.name.ilike(f\"{item}%\") for item in self.startswith_]\n                )\n            )\n        return filters\n
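    A minimal sketch, assuming the import path shown in this reference, using the case-insensitive startswith_ match described above:

```python
from prefect.server.schemas.filters import WorkQueueFilterName

# Case-insensitive starts-with match: "marvin" matches "marvin" and
# "Marvin-robot", but not "sad-marvin".
queue_filter = WorkQueueFilterName(startswith_=["marvin"])

print(queue_filter.json())
```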
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterName.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilter","title":"WorkerFilter","text":"

    Bases: PrefectOperatorFilterBaseModel

    Filter by Worker.last_heartbeat_time.

    Source code in prefect/server/schemas/filters.py
    class WorkerFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `Worker.last_heartbeat_time`.\"\"\"\n\n    # worker_config_id: Optional[WorkerFilterWorkPoolId] = Field(\n    #     default=None, description=\"Filter criteria for `Worker.worker_config_id`\"\n    # )\n\n    last_heartbeat_time: Optional[WorkerFilterLastHeartbeatTime] = Field(\n        default=None,\n        description=\"Filter criteria for `Worker.last_heartbeat_time`\",\n    )\n\n    status: Optional[WorkerFilterStatus] = Field(\n        default=None, description=\"Filter criteria for `Worker.status`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.last_heartbeat_time is not None:\n            filters.append(self.last_heartbeat_time.as_sql_filter(db))\n\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilter.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterLastHeartbeatTime","title":"WorkerFilterLastHeartbeatTime","text":"

    Bases: PrefectFilterBaseModel

    Filter by Worker.last_heartbeat_time.

    Source code in prefect/server/schemas/filters.py
    class WorkerFilterLastHeartbeatTime(PrefectFilterBaseModel):\n    \"\"\"Filter by `Worker.last_heartbeat_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include processes whose last heartbeat was at or before this time\"\n        ),\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include processes whose last heartbeat was at or after this time\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.Worker.last_heartbeat_time <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.Worker.last_heartbeat_time >= self.after_)\n        return filters\n
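    A minimal sketch, assuming the import path shown here and using pendulum (already used throughout these schemas) for the timestamp; the five-minute window is illustrative:

```python
import pendulum

from prefect.server.schemas.filters import WorkerFilterLastHeartbeatTime

# Include only workers whose last heartbeat was within the past five minutes.
recent_workers = WorkerFilterLastHeartbeatTime(
    after_=pendulum.now("UTC").subtract(minutes=5)
)

print(recent_workers.json())
```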
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterLastHeartbeatTime.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterStatus","title":"WorkerFilterStatus","text":"

    Bases: PrefectFilterBaseModel

    Filter by Worker.status.

    Source code in prefect/server/schemas/filters.py
    class WorkerFilterStatus(PrefectFilterBaseModel):\n    \"\"\"Filter by `Worker.status`.\"\"\"\n\n    any_: Optional[List[schemas.statuses.WorkerStatus]] = Field(\n        default=None, description=\"A list of worker statuses to include\"\n    )\n    not_any_: Optional[List[schemas.statuses.WorkerStatus]] = Field(\n        default=None, description=\"A list of worker statuses to exclude\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Worker.status.in_(self.any_))\n        if self.not_any_ is not None:\n            filters.append(db.Worker.status.notin_(self.not_any_))\n        return filters\n
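    A minimal sketch, assuming the import path shown here; the "OFFLINE" status value is an assumption about the server's WorkerStatus enum, not confirmed by this page:

```python
from prefect.server.schemas.filters import WorkerFilterStatus

# Exclude workers in the "OFFLINE" status (assumed WorkerStatus member);
# strings are coerced into the enum by pydantic validation.
online_only = WorkerFilterStatus(not_any_=["OFFLINE"])

print(online_only.json())
```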
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterStatus.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterWorkPoolId","title":"WorkerFilterWorkPoolId","text":"

    Bases: PrefectFilterBaseModel

    Filter by Worker.worker_config_id.

    Source code in prefect/server/schemas/filters.py
    class WorkerFilterWorkPoolId(PrefectFilterBaseModel):\n    \"\"\"Filter by `Worker.worker_config_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of work pool ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Worker.worker_config_id.in_(self.any_))\n        return filters\n
    "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterWorkPoolId.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/responses/","title":"server.schemas.responses","text":""},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses","title":"prefect.server.schemas.responses","text":"

    Schemas for special responses from the Prefect REST API.

    "},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.FlowRunResponse","title":"FlowRunResponse","text":"

    Bases: ORMBaseModel

    Source code in prefect/server/schemas/responses.py
    class FlowRunResponse(ORMBaseModel):\n    name: str = Field(\n        default_factory=lambda: generate_slug(2),\n        description=(\n            \"The name of the flow run. Defaults to a random slug if not specified.\"\n        ),\n        examples=[\"my-flow-run\"],\n    )\n    flow_id: UUID = Field(default=..., description=\"The id of the flow being run.\")\n    state_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the flow run's current state.\"\n    )\n    deployment_id: Optional[UUID] = Field(\n        default=None,\n        description=(\n            \"The id of the deployment associated with this flow run, if available.\"\n        ),\n    )\n    deployment_version: Optional[str] = Field(\n        default=None,\n        description=\"The version of the deployment associated with this flow run.\",\n        examples=[\"1.0\"],\n    )\n    work_queue_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the run's work pool queue.\"\n    )\n    work_queue_name: Optional[str] = Field(\n        default=None, description=\"The work queue that handled this flow run.\"\n    )\n    flow_version: Optional[str] = Field(\n        default=None,\n        description=\"The version of the flow executed in this flow run.\",\n        examples=[\"1.0\"],\n    )\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict, description=\"Parameters for the flow run.\"\n    )\n    idempotency_key: Optional[str] = Field(\n        default=None,\n        description=(\n            \"An optional idempotency key for the flow run. Used to ensure the same flow\"\n            \" run is not created multiple times.\"\n        ),\n    )\n    context: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Additional context for the flow run.\",\n        examples=[{\"my_var\": \"my_val\"}],\n    )\n    empirical_policy: FlowRunPolicy = Field(\n        default_factory=FlowRunPolicy,\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags on the flow run\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    parent_task_run_id: Optional[UUID] = Field(\n        default=None,\n        description=(\n            \"If the flow run is a subflow, the id of the 'dummy' task in the parent\"\n            \" flow used to track subflow state.\"\n        ),\n    )\n    state_type: Optional[schemas.states.StateType] = Field(\n        default=None, description=\"The type of the current flow run state.\"\n    )\n    state_name: Optional[str] = Field(\n        default=None, description=\"The name of the current flow run state.\"\n    )\n    run_count: int = Field(\n        default=0, description=\"The number of times the flow run was executed.\"\n    )\n    expected_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The flow run's expected start time.\",\n    )\n    next_scheduled_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The next time the flow run is scheduled to start.\",\n    )\n    start_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual start time.\"\n    )\n    end_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual end time.\"\n    )\n    total_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=(\n            \"Total run time. 
If the flow run was executed multiple times, the time of\"\n            \" each run will be summed.\"\n        ),\n    )\n    estimated_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"A real-time estimate of the total run time.\",\n    )\n    estimated_start_time_delta: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"The difference between actual and expected start time.\",\n    )\n    auto_scheduled: bool = Field(\n        default=False,\n        description=\"Whether or not the flow run was automatically scheduled.\",\n    )\n    infrastructure_document_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The block document defining infrastructure to use this flow run.\",\n    )\n    infrastructure_pid: Optional[str] = Field(\n        default=None,\n        description=\"The id of the flow run as returned by an infrastructure block.\",\n    )\n    created_by: Optional[CreatedBy] = Field(\n        default=None,\n        description=\"Optional information about the creator of this flow run.\",\n    )\n    work_pool_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The id of the flow run's work pool.\",\n    )\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the flow run's work pool.\",\n        examples=[\"my-work-pool\"],\n    )\n    state: Optional[schemas.states.State] = Field(\n        default=None, description=\"The current state of the flow run.\"\n    )\n    job_variables: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=\"Variables used as overrides in the base job template\",\n    )\n\n    @classmethod\n    def from_orm(cls, orm_flow_run: \"ORMFlowRun\"):\n        response = super().from_orm(orm_flow_run)\n        if orm_flow_run.work_queue:\n            response.work_queue_id = orm_flow_run.work_queue.id\n            response.work_queue_name = orm_flow_run.work_queue.name\n            if orm_flow_run.work_queue.work_pool:\n                response.work_pool_id = orm_flow_run.work_queue.work_pool.id\n                response.work_pool_name = orm_flow_run.work_queue.work_pool.name\n\n        return response\n\n    def __eq__(self, other: Any) -> bool:\n        \"\"\"\n        Check for \"equality\" to another flow run schema\n\n        Estimates times are rolling and will always change with repeated queries for\n        a flow run so we ignore them during equality checks.\n        \"\"\"\n        if isinstance(other, FlowRunResponse):\n            exclude_fields = {\"estimated_run_time\", \"estimated_start_time_delta\"}\n            return self.dict(exclude=exclude_fields) == other.dict(\n                exclude=exclude_fields\n            )\n        return super().__eq__(other)\n
    "},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.GlobalConcurrencyLimitResponse","title":"GlobalConcurrencyLimitResponse","text":"

    Bases: ORMBaseModel

    A response object for global concurrency limits.

    Source code in prefect/server/schemas/responses.py
    class GlobalConcurrencyLimitResponse(ORMBaseModel):\n    \"\"\"\n    A response object for global concurrency limits.\n    \"\"\"\n\n    active: bool = Field(\n        default=True, description=\"Whether the global concurrency limit is active.\"\n    )\n    name: str = Field(\n        default=..., description=\"The name of the global concurrency limit.\"\n    )\n    limit: int = Field(default=..., description=\"The concurrency limit.\")\n    active_slots: int = Field(default=..., description=\"The number of active slots.\")\n    slot_decay_per_second: float = Field(\n        default=2.0,\n        description=\"The decay rate for active slots when used as a rate limit.\",\n    )\n
    "},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.HistoryResponse","title":"HistoryResponse","text":"

    Bases: PrefectBaseModel

    Represents a history of aggregation states over an interval.

    Source code in prefect/server/schemas/responses.py
    class HistoryResponse(PrefectBaseModel):\n    \"\"\"Represents a history of aggregation states over an interval\"\"\"\n\n    interval_start: DateTimeTZ = Field(\n        default=..., description=\"The start date of the interval.\"\n    )\n    interval_end: DateTimeTZ = Field(\n        default=..., description=\"The end date of the interval.\"\n    )\n    states: List[HistoryResponseState] = Field(\n        default=..., description=\"A list of state histories during the interval.\"\n    )\n
    "},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.HistoryResponseState","title":"HistoryResponseState","text":"

    Bases: PrefectBaseModel

    Represents a single state's history over an interval.

    Source code in prefect/server/schemas/responses.py
    class HistoryResponseState(PrefectBaseModel):\n    \"\"\"Represents a single state's history over an interval.\"\"\"\n\n    state_type: schemas.states.StateType = Field(\n        default=..., description=\"The state type.\"\n    )\n    state_name: str = Field(default=..., description=\"The state name.\")\n    count_runs: int = Field(\n        default=...,\n        description=\"The number of runs in the specified state during the interval.\",\n    )\n    sum_estimated_run_time: datetime.timedelta = Field(\n        default=...,\n        description=\"The total estimated run time of all runs during the interval.\",\n    )\n    sum_estimated_lateness: datetime.timedelta = Field(\n        default=...,\n        description=(\n            \"The sum of differences between actual and expected start time during the\"\n            \" interval.\"\n        ),\n    )\n
    "},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.OrchestrationResult","title":"OrchestrationResult","text":"

    Bases: PrefectBaseModel

    A container for the output of state orchestration.

    Source code in prefect/server/schemas/responses.py
    class OrchestrationResult(PrefectBaseModel):\n    \"\"\"\n    A container for the output of state orchestration.\n    \"\"\"\n\n    state: Optional[schemas.states.State]\n    status: SetStateStatus\n    details: StateResponseDetails\n
    "},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.SetStateStatus","title":"SetStateStatus","text":"

    Bases: AutoEnum

    Enumerates return statuses for setting run states.

    Source code in prefect/server/schemas/responses.py
    class SetStateStatus(AutoEnum):\n    \"\"\"Enumerates return statuses for setting run states.\"\"\"\n\n    ACCEPT = AutoEnum.auto()\n    REJECT = AutoEnum.auto()\n    ABORT = AutoEnum.auto()\n    WAIT = AutoEnum.auto()\n
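    A minimal sketch of branching on the status carried by an OrchestrationResult (documented above); the handler logic is illustrative only:

```python
from prefect.server.schemas.responses import OrchestrationResult, SetStateStatus

def handle(result: OrchestrationResult) -> None:
    # Branch on the orchestration outcome; the handling here is illustrative.
    if result.status == SetStateStatus.ACCEPT:
        print("transition accepted:", result.state)
    elif result.status == SetStateStatus.WAIT:
        print("server asked the client to wait:", result.details)
    elif result.status == SetStateStatus.REJECT:
        print("transition rejected:", result.details)
    else:  # SetStateStatus.ABORT
        print("transition aborted:", result.details)
```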
    "},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateAbortDetails","title":"StateAbortDetails","text":"

    Bases: PrefectBaseModel

    Details associated with an ABORT state transition.

    Source code in prefect/server/schemas/responses.py
    class StateAbortDetails(PrefectBaseModel):\n    \"\"\"Details associated with an ABORT state transition.\"\"\"\n\n    type: Literal[\"abort_details\"] = Field(\n        default=\"abort_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n    reason: Optional[str] = Field(\n        default=None, description=\"The reason why the state transition was aborted.\"\n    )\n
    "},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateAcceptDetails","title":"StateAcceptDetails","text":"

    Bases: PrefectBaseModel

    Details associated with an ACCEPT state transition.

    Source code in prefect/server/schemas/responses.py
    class StateAcceptDetails(PrefectBaseModel):\n    \"\"\"Details associated with an ACCEPT state transition.\"\"\"\n\n    type: Literal[\"accept_details\"] = Field(\n        default=\"accept_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n
    "},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateRejectDetails","title":"StateRejectDetails","text":"

    Bases: PrefectBaseModel

    Details associated with a REJECT state transition.

    Source code in prefect/server/schemas/responses.py
    class StateRejectDetails(PrefectBaseModel):\n    \"\"\"Details associated with a REJECT state transition.\"\"\"\n\n    type: Literal[\"reject_details\"] = Field(\n        default=\"reject_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n    reason: Optional[str] = Field(\n        default=None, description=\"The reason why the state transition was rejected.\"\n    )\n
    "},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateWaitDetails","title":"StateWaitDetails","text":"

    Bases: PrefectBaseModel

    Details associated with a WAIT state transition.

    Source code in prefect/server/schemas/responses.py
    class StateWaitDetails(PrefectBaseModel):\n    \"\"\"Details associated with a WAIT state transition.\"\"\"\n\n    type: Literal[\"wait_details\"] = Field(\n        default=\"wait_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n    delay_seconds: int = Field(\n        default=...,\n        description=(\n            \"The length of time in seconds the client should wait before transitioning\"\n            \" states.\"\n        ),\n    )\n    reason: Optional[str] = Field(\n        default=None, description=\"The reason why the state transition should wait.\"\n    )\n
    "},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.WorkQueueWithStatus","title":"WorkQueueWithStatus","text":"

    Bases: WorkQueueResponse, WorkQueueStatusDetail

    Combines a work queue and its status details into a single object.

    Source code in prefect/server/schemas/responses.py
    class WorkQueueWithStatus(WorkQueueResponse, WorkQueueStatusDetail):\n    \"\"\"Combines a work queue and its status details into a single object\"\"\"\n
    "},{"location":"api-ref/server/schemas/schedules/","title":"server.schemas.schedules","text":""},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules","title":"prefect.server.schemas.schedules","text":"

    Schedule schemas

    "},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.CronSchedule","title":"CronSchedule","text":"

    Bases: PrefectBaseModel

    Cron schedule

    NOTE: If the timezone is a DST-observing one, then the schedule will adjust itself appropriately. Cron's rules for DST are based on schedule times, not intervals. This means that an hourly cron schedule will fire on every new schedule hour, not every elapsed hour; for example, when clocks are set back this will result in a two-hour pause as the schedule will fire the first time 1am is reached and the first time 2am is reached, 120 minutes later. Longer schedules, such as one that fires at 9am every morning, will automatically adjust for DST.

    Parameters:

    cron (str): a valid cron string. (required)

    timezone (str): a valid timezone string in IANA tzdata format (for example, America/New_York). (required)

    day_or (bool): Controls how croniter handles day and day_of_week entries. Defaults to True, matching cron, which connects those values using OR. If set to False, the values are connected using AND. This behaves like fcron and lets you, for example, define a job that executes on each second Friday of a month by setting the day of month and the weekday. (required)

    Source code in prefect/server/schemas/schedules.py
    class CronSchedule(PrefectBaseModel):\n    \"\"\"\n    Cron schedule\n\n    NOTE: If the timezone is a DST-observing one, then the schedule will adjust\n    itself appropriately. Cron's rules for DST are based on schedule times, not\n    intervals. This means that an hourly cron schedule will fire on every new\n    schedule hour, not every elapsed hour; for example, when clocks are set back\n    this will result in a two-hour pause as the schedule will fire *the first\n    time* 1am is reached and *the first time* 2am is reached, 120 minutes later.\n    Longer schedules, such as one that fires at 9am every morning, will\n    automatically adjust for DST.\n\n    Args:\n        cron (str): a valid cron string\n        timezone (str): a valid timezone string in IANA tzdata format (for example,\n            America/New_York).\n        day_or (bool, optional): Control how croniter handles `day` and `day_of_week`\n            entries. Defaults to True, matching cron which connects those values using\n            OR. If the switch is set to False, the values are connected using AND. This\n            behaves like fcron and enables you to e.g. define a job that executes each\n            2nd friday of a month by setting the days of month and the weekday.\n\n    \"\"\"\n\n    class Config:\n        extra = \"forbid\"\n\n    cron: str = Field(default=..., examples=[\"0 0 * * *\"])\n    timezone: Optional[str] = Field(default=None, examples=[\"America/New_York\"])\n    day_or: bool = Field(\n        default=True,\n        description=(\n            \"Control croniter behavior for handling day and day_of_week entries.\"\n        ),\n    )\n\n    @validator(\"timezone\")\n    def validate_timezone(cls, v, *, values, **kwargs):\n        return default_timezone(v, values)\n\n    @validator(\"cron\")\n    def valid_cron_string(cls, v):\n        return validate_cron_string(v)\n\n    async def get_dates(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> List[pendulum.DateTime]:\n        \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to None.  If a timezone-naive datetime is\n                provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): The maximum scheduled date to return. If\n                a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: A list of dates\n        \"\"\"\n        return sorted(self._get_dates_generator(n=n, start=start, end=end))\n\n    def _get_dates_generator(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> Generator[pendulum.DateTime, None, None]:\n        \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to the current date. 
If a timezone-naive\n                datetime is provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): No returned date will exceed this date.\n                If a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: a list of dates\n        \"\"\"\n        if start is None:\n            start = pendulum.now(\"UTC\")\n\n        start, end = _prepare_scheduling_start_and_end(start, end, self.timezone)\n\n        if n is None:\n            # if an end was supplied, we do our best to supply all matching dates (up to\n            # MAX_ITERATIONS)\n            if end is not None:\n                n = MAX_ITERATIONS\n            else:\n                n = 1\n\n        elif self.timezone:\n            start = start.in_tz(self.timezone)\n\n        # subtract one second from the start date, so that croniter returns it\n        # as an event (if it meets the cron criteria)\n        start = start.subtract(seconds=1)\n\n        # Respect microseconds by rounding up\n        if start.microsecond > 0:\n            start += datetime.timedelta(seconds=1)\n\n        # croniter's DST logic interferes with all other datetime libraries except pytz\n        start_localized = pytz.timezone(start.tz.name).localize(\n            datetime.datetime(\n                year=start.year,\n                month=start.month,\n                day=start.day,\n                hour=start.hour,\n                minute=start.minute,\n                second=start.second,\n                microsecond=start.microsecond,\n            )\n        )\n        start_naive_tz = start.naive()\n\n        cron = croniter(self.cron, start_naive_tz, day_or=self.day_or)  # type: ignore\n        dates = set()\n        counter = 0\n\n        while True:\n            # croniter does not handle DST properly when the start time is\n            # in and around when the actual shift occurs. To work around this,\n            # we use the naive start time to get the next cron date delta, then\n            # add that time to the original scheduling anchor.\n            next_time = cron.get_next(datetime.datetime)\n            delta = next_time - start_naive_tz\n            next_date = pendulum.instance(start_localized + delta)\n\n            # if the end date was exceeded, exit\n            if end and next_date > end:\n                break\n            # ensure no duplicates; weird things can happen with DST\n            if next_date not in dates:\n                dates.add(next_date)\n                yield next_date\n\n            # if enough dates have been collected or enough attempts were made, exit\n            if len(dates) >= n or counter > MAX_ITERATIONS:\n                break\n\n            counter += 1\n
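    A minimal sketch, assuming the import path shown in this reference and the async get_dates signature documented below; the cron string and timezone are illustrative:

```python
import asyncio

from prefect.server.schemas.schedules import CronSchedule

# Fire at 09:00 every morning, localized to America/New_York.
schedule = CronSchedule(cron="0 9 * * *", timezone="America/New_York")

async def main() -> None:
    # get_dates is async; request the next three scheduled dates.
    for date in await schedule.get_dates(n=3):
        print(date)

asyncio.run(main())
```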
    "},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.CronSchedule.get_dates","title":"get_dates async","text":"

    Retrieves dates from the schedule. Up to 1,000 candidate dates are checked following the start date.

    Parameters:

    n (int): The number of dates to generate. Default: None.

    start (datetime): The first returned date will be on or after this date. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone. Default: None.

    end (datetime): The maximum scheduled date to return. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone. Default: None.

    Returns:

    List[pendulum.DateTime]: A list of dates.

    Source code in prefect/server/schemas/schedules.py
    async def get_dates(\n    self,\n    n: int = None,\n    start: datetime.datetime = None,\n    end: datetime.datetime = None,\n) -> List[pendulum.DateTime]:\n    \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n    following the start date.\n\n    Args:\n        n (int): The number of dates to generate\n        start (datetime.datetime, optional): The first returned date will be on or\n            after this date. Defaults to None.  If a timezone-naive datetime is\n            provided, it is assumed to be in the schedule's timezone.\n        end (datetime.datetime, optional): The maximum scheduled date to return. If\n            a timezone-naive datetime is provided, it is assumed to be in the\n            schedule's timezone.\n\n    Returns:\n        List[pendulum.DateTime]: A list of dates\n    \"\"\"\n    return sorted(self._get_dates_generator(n=n, start=start, end=end))\n
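    A minimal sketch of the start/end window described above, again assuming the import path shown in this reference; the dates are illustrative:

```python
import asyncio

import pendulum

from prefect.server.schemas.schedules import CronSchedule

schedule = CronSchedule(cron="0 9 * * *", timezone="America/New_York")

async def main() -> None:
    # With an explicit end and no n, every matching date in the window
    # (up to the internal iteration limit) is returned.
    start = pendulum.datetime(2024, 1, 1, tz="America/New_York")
    dates = await schedule.get_dates(start=start, end=start.add(days=7))
    print(len(dates), "dates in the first week of January")

asyncio.run(main())
```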
    "},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.IntervalSchedule","title":"IntervalSchedule","text":"

    Bases: PrefectBaseModel

    A schedule formed by adding interval increments to an anchor_date. If no anchor_date is supplied, the current UTC time is used. If a timezone-naive datetime is provided for anchor_date, it is assumed to be in the schedule's timezone (or UTC). Even if supplied with an IANA timezone, anchor dates are always stored as UTC offsets, so a timezone can be provided to determine localization behaviors like DST boundary handling. If none is provided it will be inferred from the anchor date.

    NOTE: If the IntervalSchedule anchor_date or timezone is provided in a DST-observing timezone, then the schedule will adjust itself appropriately. Intervals greater than 24 hours will follow DST conventions, while intervals of less than 24 hours will follow UTC intervals. For example, an hourly schedule will fire every UTC hour, even across DST boundaries. When clocks are set back, this will result in two runs that appear to both be scheduled for 1am local time, even though they are an hour apart in UTC time. For longer intervals, like a daily schedule, the interval schedule will adjust for DST boundaries so that the clock-hour remains constant. This means that a daily schedule that always fires at 9am will observe DST and continue to fire at 9am in the local time zone.

    Parameters:

    interval (timedelta): an interval to schedule on. (required)

    anchor_date (DateTimeTZ): an anchor date to schedule increments against; if not provided, the current timestamp will be used. (required)

    timezone (str): a valid timezone string. (required)

    Source code in prefect/server/schemas/schedules.py
    class IntervalSchedule(PrefectBaseModel):\n    \"\"\"\n    A schedule formed by adding `interval` increments to an `anchor_date`. If no\n    `anchor_date` is supplied, the current UTC time is used.  If a\n    timezone-naive datetime is provided for `anchor_date`, it is assumed to be\n    in the schedule's timezone (or UTC). Even if supplied with an IANA timezone,\n    anchor dates are always stored as UTC offsets, so a `timezone` can be\n    provided to determine localization behaviors like DST boundary handling. If\n    none is provided it will be inferred from the anchor date.\n\n    NOTE: If the `IntervalSchedule` `anchor_date` or `timezone` is provided in a\n    DST-observing timezone, then the schedule will adjust itself appropriately.\n    Intervals greater than 24 hours will follow DST conventions, while intervals\n    of less than 24 hours will follow UTC intervals. For example, an hourly\n    schedule will fire every UTC hour, even across DST boundaries. When clocks\n    are set back, this will result in two runs that *appear* to both be\n    scheduled for 1am local time, even though they are an hour apart in UTC\n    time. For longer intervals, like a daily schedule, the interval schedule\n    will adjust for DST boundaries so that the clock-hour remains constant. This\n    means that a daily schedule that always fires at 9am will observe DST and\n    continue to fire at 9am in the local time zone.\n\n    Args:\n        interval (datetime.timedelta): an interval to schedule on.\n        anchor_date (DateTimeTZ, optional): an anchor date to schedule increments against;\n            if not provided, the current timestamp will be used.\n        timezone (str, optional): a valid timezone string.\n    \"\"\"\n\n    class Config:\n        extra = \"forbid\"\n        exclude_none = True\n\n    interval: PositiveDuration\n    anchor_date: DateTimeTZ = None\n    timezone: Optional[str] = Field(default=None, examples=[\"America/New_York\"])\n\n    @validator(\"anchor_date\", always=True)\n    def validate_anchor_date(cls, v):\n        return default_anchor_date(v)\n\n    @validator(\"timezone\", always=True)\n    def validate_timezone(cls, v, *, values, **kwargs):\n        return default_timezone(v, values)\n\n    async def get_dates(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> List[pendulum.DateTime]:\n        \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to None.  If a timezone-naive datetime is\n                provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): The maximum scheduled date to return. If\n                a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: A list of dates\n        \"\"\"\n        return sorted(self._get_dates_generator(n=n, start=start, end=end))\n\n    def _get_dates_generator(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> Generator[pendulum.DateTime, None, None]:\n        \"\"\"Retrieves dates from the schedule. 
Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to None.  If a timezone-naive datetime is\n                provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): The maximum scheduled date to return. If\n                a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: a list of dates\n        \"\"\"\n        if n is None:\n            # if an end was supplied, we do our best to supply all matching dates (up to\n            # MAX_ITERATIONS)\n            if end is not None:\n                n = MAX_ITERATIONS\n            else:\n                n = 1\n\n        if start is None:\n            start = pendulum.now(\"UTC\")\n\n        anchor_tz = self.anchor_date.in_tz(self.timezone)\n        start, end = _prepare_scheduling_start_and_end(start, end, self.timezone)\n\n        # compute the offset between the anchor date and the start date to jump to the\n        # next date\n        offset = (start - anchor_tz).total_seconds() / self.interval.total_seconds()\n        next_date = anchor_tz.add(seconds=self.interval.total_seconds() * int(offset))\n\n        # break the interval into `days` and `seconds` because pendulum\n        # will handle DST boundaries properly if days are provided, but not\n        # if we add `total seconds`. Therefore, `next_date + self.interval`\n        # fails while `next_date.add(days=days, seconds=seconds)` works.\n        interval_days = self.interval.days\n        interval_seconds = self.interval.total_seconds() - (\n            interval_days * 24 * 60 * 60\n        )\n\n        # daylight saving time boundaries can create a situation where the next date is\n        # before the start date, so we advance it if necessary\n        while next_date < start:\n            next_date = next_date.add(days=interval_days, seconds=interval_seconds)\n\n        counter = 0\n        dates = set()\n\n        while True:\n            # if the end date was exceeded, exit\n            if end and next_date > end:\n                break\n\n            # ensure no duplicates; weird things can happen with DST\n            if next_date not in dates:\n                dates.add(next_date)\n                yield next_date\n\n            # if enough dates have been collected or enough attempts were made, exit\n            if len(dates) >= n or counter > MAX_ITERATIONS:\n                break\n\n            counter += 1\n\n            next_date = next_date.add(days=interval_days, seconds=interval_seconds)\n
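    A minimal sketch, assuming the import path shown in this reference; the six-hour interval is illustrative:

```python
import asyncio
import datetime

from prefect.server.schemas.schedules import IntervalSchedule

# Run every six hours, anchored to the current UTC time by default.
schedule = IntervalSchedule(interval=datetime.timedelta(hours=6))

async def main() -> None:
    for date in await schedule.get_dates(n=4):
        print(date)

asyncio.run(main())
```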
    "},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.IntervalSchedule.get_dates","title":"get_dates async","text":"

    Retrieves dates from the schedule. Up to 1,000 candidate dates are checked following the start date.

    Parameters:

n (int): The number of dates to generate. Default: None.

start (datetime, optional): The first returned date will be on or after this date. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.

end (datetime, optional): The maximum scheduled date to return. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone. Default: None.

    Returns:

List[pendulum.DateTime]: A list of dates

    Source code in prefect/server/schemas/schedules.py
    async def get_dates(\n    self,\n    n: int = None,\n    start: datetime.datetime = None,\n    end: datetime.datetime = None,\n) -> List[pendulum.DateTime]:\n    \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n    following the start date.\n\n    Args:\n        n (int): The number of dates to generate\n        start (datetime.datetime, optional): The first returned date will be on or\n            after this date. Defaults to None.  If a timezone-naive datetime is\n            provided, it is assumed to be in the schedule's timezone.\n        end (datetime.datetime, optional): The maximum scheduled date to return. If\n            a timezone-naive datetime is provided, it is assumed to be in the\n            schedule's timezone.\n\n    Returns:\n        List[pendulum.DateTime]: A list of dates\n    \"\"\"\n    return sorted(self._get_dates_generator(n=n, start=start, end=end))\n
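For orientation, here is a minimal, hedged sketch of calling this method on an IntervalSchedule (assumes the prefect package and pendulum are installed; the hourly interval and n=3 are arbitrary choices):

import asyncio
from datetime import timedelta

import pendulum

from prefect.server.schemas.schedules import IntervalSchedule

# an hourly schedule anchored at the current UTC time
schedule = IntervalSchedule(interval=timedelta(hours=1))

# get_dates is a coroutine, so drive it with an event loop
dates = asyncio.run(schedule.get_dates(n=3, start=pendulum.now("UTC")))
print(dates)  # the next three hourly occurrences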
    "},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.RRuleSchedule","title":"RRuleSchedule","text":"

    Bases: PrefectBaseModel

    RRule schedule, based on the iCalendar standard (RFC 5545) as implemented in dateutil.rrule.

    RRules are appropriate for any kind of calendar-date manipulation, including irregular intervals, repetition, exclusions, week day or day-of-month adjustments, and more.

    Note that as a calendar-oriented standard, RRuleSchedules are sensitive to the initial timezone provided. A 9am daily schedule with a daylight saving time-aware start date will maintain a local 9am time through DST boundaries; a 9am daily schedule with a UTC start date will maintain a 9am UTC time.

    Parameters:

rrule (str): a valid RRule string. Required.

timezone (str, optional): a valid timezone string.

Source code in prefect/server/schemas/schedules.py
    class RRuleSchedule(PrefectBaseModel):\n    \"\"\"\n    RRule schedule, based on the iCalendar standard\n    ([RFC 5545](https://datatracker.ietf.org/doc/html/rfc5545)) as\n    implemented in `dateutils.rrule`.\n\n    RRules are appropriate for any kind of calendar-date manipulation, including\n    irregular intervals, repetition, exclusions, week day or day-of-month\n    adjustments, and more.\n\n    Note that as a calendar-oriented standard, `RRuleSchedules` are sensitive to\n    to the initial timezone provided. A 9am daily schedule with a daylight saving\n    time-aware start date will maintain a local 9am time through DST boundaries;\n    a 9am daily schedule with a UTC start date will maintain a 9am UTC time.\n\n    Args:\n        rrule (str): a valid RRule string\n        timezone (str, optional): a valid timezone string\n    \"\"\"\n\n    class Config:\n        extra = \"forbid\"\n\n    rrule: str\n    timezone: Optional[str] = Field(default=None, examples=[\"America/New_York\"])\n\n    @validator(\"rrule\")\n    def validate_rrule_str(cls, v):\n        return validate_rrule_string(v)\n\n    @classmethod\n    def from_rrule(cls, rrule: dateutil.rrule.rrule):\n        if isinstance(rrule, dateutil.rrule.rrule):\n            if rrule._dtstart.tzinfo is not None:\n                timezone = rrule._dtstart.tzinfo.name\n            else:\n                timezone = \"UTC\"\n            return RRuleSchedule(rrule=str(rrule), timezone=timezone)\n        elif isinstance(rrule, dateutil.rrule.rruleset):\n            dtstarts = [rr._dtstart for rr in rrule._rrule if rr._dtstart is not None]\n            unique_dstarts = set(pendulum.instance(d).in_tz(\"UTC\") for d in dtstarts)\n            unique_timezones = set(d.tzinfo for d in dtstarts if d.tzinfo is not None)\n\n            if len(unique_timezones) > 1:\n                raise ValueError(\n                    f\"rruleset has too many dtstart timezones: {unique_timezones}\"\n                )\n\n            if len(unique_dstarts) > 1:\n                raise ValueError(f\"rruleset has too many dtstarts: {unique_dstarts}\")\n\n            if unique_dstarts and unique_timezones:\n                timezone = dtstarts[0].tzinfo.name\n            else:\n                timezone = \"UTC\"\n\n            rruleset_string = \"\"\n            if rrule._rrule:\n                rruleset_string += \"\\n\".join(str(r) for r in rrule._rrule)\n            if rrule._exrule:\n                rruleset_string += \"\\n\" if rruleset_string else \"\"\n                rruleset_string += \"\\n\".join(str(r) for r in rrule._exrule).replace(\n                    \"RRULE\", \"EXRULE\"\n                )\n            if rrule._rdate:\n                rruleset_string += \"\\n\" if rruleset_string else \"\"\n                rruleset_string += \"RDATE:\" + \",\".join(\n                    rd.strftime(\"%Y%m%dT%H%M%SZ\") for rd in rrule._rdate\n                )\n            if rrule._exdate:\n                rruleset_string += \"\\n\" if rruleset_string else \"\"\n                rruleset_string += \"EXDATE:\" + \",\".join(\n                    exd.strftime(\"%Y%m%dT%H%M%SZ\") for exd in rrule._exdate\n                )\n            return RRuleSchedule(rrule=rruleset_string, timezone=timezone)\n        else:\n            raise ValueError(f\"Invalid RRule object: {rrule}\")\n\n    def to_rrule(self) -> dateutil.rrule.rrule:\n        \"\"\"\n        Since rrule doesn't properly serialize/deserialize timezones, we localize dates\n        here\n        \"\"\"\n        
rrule = dateutil.rrule.rrulestr(\n            self.rrule,\n            dtstart=DEFAULT_ANCHOR_DATE,\n            cache=True,\n        )\n        timezone = dateutil.tz.gettz(self.timezone)\n        if isinstance(rrule, dateutil.rrule.rrule):\n            kwargs = dict(dtstart=rrule._dtstart.replace(tzinfo=timezone))\n            if rrule._until:\n                kwargs.update(\n                    until=rrule._until.replace(tzinfo=timezone),\n                )\n            return rrule.replace(**kwargs)\n        elif isinstance(rrule, dateutil.rrule.rruleset):\n            # update rrules\n            localized_rrules = []\n            for rr in rrule._rrule:\n                kwargs = dict(dtstart=rr._dtstart.replace(tzinfo=timezone))\n                if rr._until:\n                    kwargs.update(\n                        until=rr._until.replace(tzinfo=timezone),\n                    )\n                localized_rrules.append(rr.replace(**kwargs))\n            rrule._rrule = localized_rrules\n\n            # update exrules\n            localized_exrules = []\n            for exr in rrule._exrule:\n                kwargs = dict(dtstart=exr._dtstart.replace(tzinfo=timezone))\n                if exr._until:\n                    kwargs.update(\n                        until=exr._until.replace(tzinfo=timezone),\n                    )\n                localized_exrules.append(exr.replace(**kwargs))\n            rrule._exrule = localized_exrules\n\n            # update rdates\n            localized_rdates = []\n            for rd in rrule._rdate:\n                localized_rdates.append(rd.replace(tzinfo=timezone))\n            rrule._rdate = localized_rdates\n\n            # update exdates\n            localized_exdates = []\n            for exd in rrule._exdate:\n                localized_exdates.append(exd.replace(tzinfo=timezone))\n            rrule._exdate = localized_exdates\n\n            return rrule\n\n    @validator(\"timezone\", always=True)\n    def valid_timezone(cls, v):\n        return validate_rrule_timezone(v)\n\n    async def get_dates(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> List[pendulum.DateTime]:\n        \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to None.  If a timezone-naive datetime is\n                provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): The maximum scheduled date to return. If\n                a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: A list of dates\n        \"\"\"\n        return sorted(self._get_dates_generator(n=n, start=start, end=end))\n\n    def _get_dates_generator(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> Generator[pendulum.DateTime, None, None]:\n        \"\"\"Retrieves dates from the schedule. 
Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to the current date. If a timezone-naive\n                datetime is provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): No returned date will exceed this date.\n                If a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: a list of dates\n        \"\"\"\n        if start is None:\n            start = pendulum.now(\"UTC\")\n\n        start, end = _prepare_scheduling_start_and_end(start, end, self.timezone)\n\n        if n is None:\n            # if an end was supplied, we do our best to supply all matching dates (up\n            # to MAX_ITERATIONS)\n            if end is not None:\n                n = MAX_ITERATIONS\n            else:\n                n = 1\n\n        dates = set()\n        counter = 0\n\n        # pass count = None to account for discrepancies with duplicates around DST\n        # boundaries\n        for next_date in self.to_rrule().xafter(start, count=None, inc=True):\n            next_date = pendulum.instance(next_date).in_tz(self.timezone)\n\n            # if the end date was exceeded, exit\n            if end and next_date > end:\n                break\n\n            # ensure no duplicates; weird things can happen with DST\n            if next_date not in dates:\n                dates.add(next_date)\n                yield next_date\n\n            # if enough dates have been collected or enough attempts were made, exit\n            if len(dates) >= n or counter > MAX_ITERATIONS:\n                break\n\n            counter += 1\n
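To make the timezone sensitivity described above concrete, a brief sketch (assumes the prefect package; the RRULE string and timezones are arbitrary examples):

from prefect.server.schemas.schedules import RRuleSchedule

# a daily 9am rule in a DST-observing timezone keeps 9am local time
local_9am = RRuleSchedule(
    rrule="FREQ=DAILY;BYHOUR=9;BYMINUTE=0;BYSECOND=0",
    timezone="America/New_York",
)

# the same rule with a UTC timezone keeps 9am UTC across DST changes
utc_9am = RRuleSchedule(
    rrule="FREQ=DAILY;BYHOUR=9;BYMINUTE=0;BYSECOND=0",
    timezone="UTC",
)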
    "},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.RRuleSchedule.get_dates","title":"get_dates async","text":"

    Retrieves dates from the schedule. Up to 1,000 candidate dates are checked following the start date.

    Parameters:

n (int): The number of dates to generate. Default: None.

start (datetime, optional): The first returned date will be on or after this date. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.

end (datetime, optional): The maximum scheduled date to return. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone. Default: None.

    Returns:

List[pendulum.DateTime]: A list of dates

    Source code in prefect/server/schemas/schedules.py
    async def get_dates(\n    self,\n    n: int = None,\n    start: datetime.datetime = None,\n    end: datetime.datetime = None,\n) -> List[pendulum.DateTime]:\n    \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n    following the start date.\n\n    Args:\n        n (int): The number of dates to generate\n        start (datetime.datetime, optional): The first returned date will be on or\n            after this date. Defaults to None.  If a timezone-naive datetime is\n            provided, it is assumed to be in the schedule's timezone.\n        end (datetime.datetime, optional): The maximum scheduled date to return. If\n            a timezone-naive datetime is provided, it is assumed to be in the\n            schedule's timezone.\n\n    Returns:\n        List[pendulum.DateTime]: A list of dates\n    \"\"\"\n    return sorted(self._get_dates_generator(n=n, start=start, end=end))\n
    "},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.RRuleSchedule.to_rrule","title":"to_rrule","text":"

    Since rrule doesn't properly serialize/deserialize timezones, we localize dates here

    Source code in prefect/server/schemas/schedules.py
    def to_rrule(self) -> dateutil.rrule.rrule:\n    \"\"\"\n    Since rrule doesn't properly serialize/deserialize timezones, we localize dates\n    here\n    \"\"\"\n    rrule = dateutil.rrule.rrulestr(\n        self.rrule,\n        dtstart=DEFAULT_ANCHOR_DATE,\n        cache=True,\n    )\n    timezone = dateutil.tz.gettz(self.timezone)\n    if isinstance(rrule, dateutil.rrule.rrule):\n        kwargs = dict(dtstart=rrule._dtstart.replace(tzinfo=timezone))\n        if rrule._until:\n            kwargs.update(\n                until=rrule._until.replace(tzinfo=timezone),\n            )\n        return rrule.replace(**kwargs)\n    elif isinstance(rrule, dateutil.rrule.rruleset):\n        # update rrules\n        localized_rrules = []\n        for rr in rrule._rrule:\n            kwargs = dict(dtstart=rr._dtstart.replace(tzinfo=timezone))\n            if rr._until:\n                kwargs.update(\n                    until=rr._until.replace(tzinfo=timezone),\n                )\n            localized_rrules.append(rr.replace(**kwargs))\n        rrule._rrule = localized_rrules\n\n        # update exrules\n        localized_exrules = []\n        for exr in rrule._exrule:\n            kwargs = dict(dtstart=exr._dtstart.replace(tzinfo=timezone))\n            if exr._until:\n                kwargs.update(\n                    until=exr._until.replace(tzinfo=timezone),\n                )\n            localized_exrules.append(exr.replace(**kwargs))\n        rrule._exrule = localized_exrules\n\n        # update rdates\n        localized_rdates = []\n        for rd in rrule._rdate:\n            localized_rdates.append(rd.replace(tzinfo=timezone))\n        rrule._rdate = localized_rdates\n\n        # update exdates\n        localized_exdates = []\n        for exd in rrule._exdate:\n            localized_exdates.append(exd.replace(tzinfo=timezone))\n        rrule._exdate = localized_exdates\n\n        return rrule\n
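A small sketch of the localization this method performs (assumes prefect and python-dateutil are installed; the rule and timezone are arbitrary):

import datetime

from prefect.server.schemas.schedules import RRuleSchedule

schedule = RRuleSchedule(rrule="FREQ=WEEKLY;BYDAY=MO", timezone="Europe/Berlin")

# to_rrule re-parses the stored RRULE string and attaches the schedule's
# timezone to its start date, so generated occurrences are timezone-aware
rule = schedule.to_rrule()
next_run = rule.after(datetime.datetime.now(datetime.timezone.utc))
print(next_run.tzinfo)  # the Europe/Berlin tz attached by to_rrule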
    "},{"location":"api-ref/server/schemas/sorting/","title":"server.schemas.sorting","text":""},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting","title":"prefect.server.schemas.sorting","text":"

    Schemas for sorting Prefect REST API objects.

    "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactCollectionSort","title":"ArtifactCollectionSort","text":"

    Bases: AutoEnum

    Defines artifact collection sorting options.

    Source code in prefect/server/schemas/sorting.py
    class ArtifactCollectionSort(AutoEnum):\n    \"\"\"Defines artifact collection sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    ID_DESC = AutoEnum.auto()\n    KEY_DESC = AutoEnum.auto()\n    KEY_ASC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n        \"\"\"Return an expression used to sort artifact collections\"\"\"\n        sort_mapping = {\n            \"CREATED_DESC\": db.ArtifactCollection.created.desc(),\n            \"UPDATED_DESC\": db.ArtifactCollection.updated.desc(),\n            \"ID_DESC\": db.ArtifactCollection.id.desc(),\n            \"KEY_DESC\": db.ArtifactCollection.key.desc(),\n            \"KEY_ASC\": db.ArtifactCollection.key.asc(),\n        }\n        return sort_mapping[self.value]\n
    "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactCollectionSort.as_sql_sort","title":"as_sql_sort","text":"

    Return an expression used to sort artifact collections

    Source code in prefect/server/schemas/sorting.py
    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n    \"\"\"Return an expression used to sort artifact collections\"\"\"\n    sort_mapping = {\n        \"CREATED_DESC\": db.ArtifactCollection.created.desc(),\n        \"UPDATED_DESC\": db.ArtifactCollection.updated.desc(),\n        \"ID_DESC\": db.ArtifactCollection.id.desc(),\n        \"KEY_DESC\": db.ArtifactCollection.key.desc(),\n        \"KEY_ASC\": db.ArtifactCollection.key.asc(),\n    }\n    return sort_mapping[self.value]\n
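Each of the sorting enums in this module follows the same pattern: map an enum value to a SQLAlchemy ordering expression. A standalone sketch of that pattern using plain SQLAlchemy (the Artifact table and ArtifactSortSketch enum here are hypothetical stand-ins, not the Prefect models):

from enum import Enum

import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Artifact(Base):
    # hypothetical stand-in for the Prefect ORM model
    __tablename__ = "artifact"
    id = sa.Column(sa.Integer, primary_key=True)
    key = sa.Column(sa.String)

class ArtifactSortSketch(Enum):
    KEY_ASC = "KEY_ASC"
    KEY_DESC = "KEY_DESC"

    def as_sql_sort(self):
        # map the enum value to an ORDER BY expression
        return {
            "KEY_ASC": Artifact.key.asc(),
            "KEY_DESC": Artifact.key.desc(),
        }[self.value]

query = sa.select(Artifact).order_by(ArtifactSortSketch.KEY_DESC.as_sql_sort())
print(query)  # SELECT ... ORDER BY artifact.key DESC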
    "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactSort","title":"ArtifactSort","text":"

    Bases: AutoEnum

    Defines artifact sorting options.

    Source code in prefect/server/schemas/sorting.py
    class ArtifactSort(AutoEnum):\n    \"\"\"Defines artifact sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    ID_DESC = AutoEnum.auto()\n    KEY_DESC = AutoEnum.auto()\n    KEY_ASC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n        \"\"\"Return an expression used to sort artifacts\"\"\"\n        sort_mapping = {\n            \"CREATED_DESC\": db.Artifact.created.desc(),\n            \"UPDATED_DESC\": db.Artifact.updated.desc(),\n            \"ID_DESC\": db.Artifact.id.desc(),\n            \"KEY_DESC\": db.Artifact.key.desc(),\n            \"KEY_ASC\": db.Artifact.key.asc(),\n        }\n        return sort_mapping[self.value]\n
    "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactSort.as_sql_sort","title":"as_sql_sort","text":"

    Return an expression used to sort artifacts

    Source code in prefect/server/schemas/sorting.py
    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n    \"\"\"Return an expression used to sort artifacts\"\"\"\n    sort_mapping = {\n        \"CREATED_DESC\": db.Artifact.created.desc(),\n        \"UPDATED_DESC\": db.Artifact.updated.desc(),\n        \"ID_DESC\": db.Artifact.id.desc(),\n        \"KEY_DESC\": db.Artifact.key.desc(),\n        \"KEY_ASC\": db.Artifact.key.asc(),\n    }\n    return sort_mapping[self.value]\n
    "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.BlockDocumentSort","title":"BlockDocumentSort","text":"

    Bases: AutoEnum

    Defines block document sorting options.

    Source code in prefect/server/schemas/sorting.py
    class BlockDocumentSort(AutoEnum):\n    \"\"\"Defines block document sorting options.\"\"\"\n\n    NAME_DESC = \"NAME_DESC\"\n    NAME_ASC = \"NAME_ASC\"\n    BLOCK_TYPE_AND_NAME_ASC = \"BLOCK_TYPE_AND_NAME_ASC\"\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n        \"\"\"Return an expression used to sort block documents\"\"\"\n        sort_mapping = {\n            \"NAME_DESC\": db.BlockDocument.name.desc(),\n            \"NAME_ASC\": db.BlockDocument.name.asc(),\n            \"BLOCK_TYPE_AND_NAME_ASC\": sa.text(\"block_type_name asc, name asc\"),\n        }\n        return sort_mapping[self.value]\n
    "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.BlockDocumentSort.as_sql_sort","title":"as_sql_sort","text":"

    Return an expression used to sort block documents

    Source code in prefect/server/schemas/sorting.py
    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n    \"\"\"Return an expression used to sort block documents\"\"\"\n    sort_mapping = {\n        \"NAME_DESC\": db.BlockDocument.name.desc(),\n        \"NAME_ASC\": db.BlockDocument.name.asc(),\n        \"BLOCK_TYPE_AND_NAME_ASC\": sa.text(\"block_type_name asc, name asc\"),\n    }\n    return sort_mapping[self.value]\n
    "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.DeploymentSort","title":"DeploymentSort","text":"

    Bases: AutoEnum

    Defines deployment sorting options.

    Source code in prefect/server/schemas/sorting.py
    class DeploymentSort(AutoEnum):\n    \"\"\"Defines deployment sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n        \"\"\"Return an expression used to sort deployments\"\"\"\n        sort_mapping = {\n            \"CREATED_DESC\": db.Deployment.created.desc(),\n            \"UPDATED_DESC\": db.Deployment.updated.desc(),\n            \"NAME_ASC\": db.Deployment.name.asc(),\n            \"NAME_DESC\": db.Deployment.name.desc(),\n        }\n        return sort_mapping[self.value]\n
    "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.DeploymentSort.as_sql_sort","title":"as_sql_sort","text":"

    Return an expression used to sort deployments

    Source code in prefect/server/schemas/sorting.py
    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n    \"\"\"Return an expression used to sort deployments\"\"\"\n    sort_mapping = {\n        \"CREATED_DESC\": db.Deployment.created.desc(),\n        \"UPDATED_DESC\": db.Deployment.updated.desc(),\n        \"NAME_ASC\": db.Deployment.name.asc(),\n        \"NAME_DESC\": db.Deployment.name.desc(),\n    }\n    return sort_mapping[self.value]\n
    "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.FlowRunSort","title":"FlowRunSort","text":"

    Bases: AutoEnum

    Defines flow run sorting options.

    Source code in prefect/server/schemas/sorting.py
    class FlowRunSort(AutoEnum):\n    \"\"\"Defines flow run sorting options.\"\"\"\n\n    ID_DESC = AutoEnum.auto()\n    START_TIME_ASC = AutoEnum.auto()\n    START_TIME_DESC = AutoEnum.auto()\n    EXPECTED_START_TIME_ASC = AutoEnum.auto()\n    EXPECTED_START_TIME_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n    NEXT_SCHEDULED_START_TIME_ASC = AutoEnum.auto()\n    END_TIME_DESC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n        from sqlalchemy.sql.functions import coalesce\n\n        \"\"\"Return an expression used to sort flow runs\"\"\"\n        sort_mapping = {\n            \"ID_DESC\": db.FlowRun.id.desc(),\n            \"START_TIME_ASC\": coalesce(\n                db.FlowRun.start_time, db.FlowRun.expected_start_time\n            ).asc(),\n            \"START_TIME_DESC\": coalesce(\n                db.FlowRun.start_time, db.FlowRun.expected_start_time\n            ).desc(),\n            \"EXPECTED_START_TIME_ASC\": db.FlowRun.expected_start_time.asc(),\n            \"EXPECTED_START_TIME_DESC\": db.FlowRun.expected_start_time.desc(),\n            \"NAME_ASC\": db.FlowRun.name.asc(),\n            \"NAME_DESC\": db.FlowRun.name.desc(),\n            \"NEXT_SCHEDULED_START_TIME_ASC\": db.FlowRun.next_scheduled_start_time.asc(),\n            \"END_TIME_DESC\": db.FlowRun.end_time.desc(),\n        }\n        return sort_mapping[self.value]\n
    "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.FlowSort","title":"FlowSort","text":"

    Bases: AutoEnum

    Defines flow sorting options.

    Source code in prefect/server/schemas/sorting.py
    class FlowSort(AutoEnum):\n    \"\"\"Defines flow sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n        \"\"\"Return an expression used to sort flows\"\"\"\n        sort_mapping = {\n            \"CREATED_DESC\": db.Flow.created.desc(),\n            \"UPDATED_DESC\": db.Flow.updated.desc(),\n            \"NAME_ASC\": db.Flow.name.asc(),\n            \"NAME_DESC\": db.Flow.name.desc(),\n        }\n        return sort_mapping[self.value]\n
    "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.FlowSort.as_sql_sort","title":"as_sql_sort","text":"

    Return an expression used to sort flows

    Source code in prefect/server/schemas/sorting.py
    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n    \"\"\"Return an expression used to sort flows\"\"\"\n    sort_mapping = {\n        \"CREATED_DESC\": db.Flow.created.desc(),\n        \"UPDATED_DESC\": db.Flow.updated.desc(),\n        \"NAME_ASC\": db.Flow.name.asc(),\n        \"NAME_DESC\": db.Flow.name.desc(),\n    }\n    return sort_mapping[self.value]\n
    "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.LogSort","title":"LogSort","text":"

    Bases: AutoEnum

    Defines log sorting options.

    Source code in prefect/server/schemas/sorting.py
    class LogSort(AutoEnum):\n    \"\"\"Defines log sorting options.\"\"\"\n\n    TIMESTAMP_ASC = AutoEnum.auto()\n    TIMESTAMP_DESC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n        \"\"\"Return an expression used to sort task runs\"\"\"\n        sort_mapping = {\n            \"TIMESTAMP_ASC\": db.Log.timestamp.asc(),\n            \"TIMESTAMP_DESC\": db.Log.timestamp.desc(),\n        }\n        return sort_mapping[self.value]\n
    "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.LogSort.as_sql_sort","title":"as_sql_sort","text":"

    Return an expression used to sort logs

    Source code in prefect/server/schemas/sorting.py
    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n    \"\"\"Return an expression used to sort task runs\"\"\"\n    sort_mapping = {\n        \"TIMESTAMP_ASC\": db.Log.timestamp.asc(),\n        \"TIMESTAMP_DESC\": db.Log.timestamp.desc(),\n    }\n    return sort_mapping[self.value]\n
    "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.TaskRunSort","title":"TaskRunSort","text":"

    Bases: AutoEnum

    Defines task run sorting options.

    Source code in prefect/server/schemas/sorting.py
    class TaskRunSort(AutoEnum):\n    \"\"\"Defines task run sorting options.\"\"\"\n\n    ID_DESC = AutoEnum.auto()\n    EXPECTED_START_TIME_ASC = AutoEnum.auto()\n    EXPECTED_START_TIME_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n    NEXT_SCHEDULED_START_TIME_ASC = AutoEnum.auto()\n    END_TIME_DESC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n        \"\"\"Return an expression used to sort task runs\"\"\"\n        sort_mapping = {\n            \"ID_DESC\": db.TaskRun.id.desc(),\n            \"EXPECTED_START_TIME_ASC\": db.TaskRun.expected_start_time.asc(),\n            \"EXPECTED_START_TIME_DESC\": db.TaskRun.expected_start_time.desc(),\n            \"NAME_ASC\": db.TaskRun.name.asc(),\n            \"NAME_DESC\": db.TaskRun.name.desc(),\n            \"NEXT_SCHEDULED_START_TIME_ASC\": db.TaskRun.next_scheduled_start_time.asc(),\n            \"END_TIME_DESC\": db.TaskRun.end_time.desc(),\n        }\n        return sort_mapping[self.value]\n
    "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.TaskRunSort.as_sql_sort","title":"as_sql_sort","text":"

    Return an expression used to sort task runs

    Source code in prefect/server/schemas/sorting.py
    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n    \"\"\"Return an expression used to sort task runs\"\"\"\n    sort_mapping = {\n        \"ID_DESC\": db.TaskRun.id.desc(),\n        \"EXPECTED_START_TIME_ASC\": db.TaskRun.expected_start_time.asc(),\n        \"EXPECTED_START_TIME_DESC\": db.TaskRun.expected_start_time.desc(),\n        \"NAME_ASC\": db.TaskRun.name.asc(),\n        \"NAME_DESC\": db.TaskRun.name.desc(),\n        \"NEXT_SCHEDULED_START_TIME_ASC\": db.TaskRun.next_scheduled_start_time.asc(),\n        \"END_TIME_DESC\": db.TaskRun.end_time.desc(),\n    }\n    return sort_mapping[self.value]\n
    "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.VariableSort","title":"VariableSort","text":"

    Bases: AutoEnum

    Defines variables sorting options.

    Source code in prefect/server/schemas/sorting.py
    class VariableSort(AutoEnum):\n    \"\"\"Defines variables sorting options.\"\"\"\n\n    CREATED_DESC = \"CREATED_DESC\"\n    UPDATED_DESC = \"UPDATED_DESC\"\n    NAME_DESC = \"NAME_DESC\"\n    NAME_ASC = \"NAME_ASC\"\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n        \"\"\"Return an expression used to sort variables\"\"\"\n        sort_mapping = {\n            \"CREATED_DESC\": db.Variable.created.desc(),\n            \"UPDATED_DESC\": db.Variable.updated.desc(),\n            \"NAME_DESC\": db.Variable.name.desc(),\n            \"NAME_ASC\": db.Variable.name.asc(),\n        }\n        return sort_mapping[self.value]\n
    "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.VariableSort.as_sql_sort","title":"as_sql_sort","text":"

    Return an expression used to sort variables

    Source code in prefect/server/schemas/sorting.py
    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n    \"\"\"Return an expression used to sort variables\"\"\"\n    sort_mapping = {\n        \"CREATED_DESC\": db.Variable.created.desc(),\n        \"UPDATED_DESC\": db.Variable.updated.desc(),\n        \"NAME_DESC\": db.Variable.name.desc(),\n        \"NAME_ASC\": db.Variable.name.asc(),\n    }\n    return sort_mapping[self.value]\n
    "},{"location":"api-ref/server/schemas/states/","title":"server.schemas.states","text":""},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states","title":"prefect.server.schemas.states","text":"

    State schemas.

    "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.State","title":"State","text":"

    Bases: StateBaseModel

    Represents the state of a run.

    Source code in prefect/server/schemas/states.py
    class State(StateBaseModel):\n    \"\"\"Represents the state of a run.\"\"\"\n\n    class Config:\n        orm_mode = True\n\n    type: StateType\n    name: Optional[str] = Field(default=None)\n    timestamp: DateTimeTZ = Field(default_factory=lambda: pendulum.now(\"UTC\"))\n    message: Optional[str] = Field(default=None, examples=[\"Run started\"])\n    data: Optional[Any] = Field(\n        default=None,\n        description=(\n            \"Data associated with the state, e.g. a result. \"\n            \"Content must be storable as JSON.\"\n        ),\n    )\n    state_details: StateDetails = Field(default_factory=StateDetails)\n\n    @classmethod\n    def from_orm_without_result(\n        cls,\n        orm_state: Union[\n            \"prefect.server.database.orm_models.ORMFlowRunState\",\n            \"prefect.server.database.orm_models.ORMTaskRunState\",\n        ],\n        with_data: Optional[Any] = None,\n    ):\n        \"\"\"\n        During orchestration, ORM states can be instantiated prior to inserting results\n        into the artifact table and the `data` field will not be eagerly loaded. In\n        these cases, sqlalchemy will attempt to lazily load the the relationship, which\n        will fail when called within a synchronous pydantic method.\n\n        This method will construct a `State` object from an ORM model without a loaded\n        artifact and attach data passed using the `with_data` argument to the `data`\n        field.\n        \"\"\"\n\n        field_keys = cls.schema()[\"properties\"].keys()\n        state_data = {\n            field: getattr(orm_state, field, None)\n            for field in field_keys\n            if field != \"data\"\n        }\n        state_data[\"data\"] = with_data\n        return cls(**state_data)\n\n    @validator(\"name\", always=True)\n    def default_name_from_type(cls, v, *, values, **kwargs):\n        return get_or_create_state_name(v, values)\n\n    @root_validator\n    def default_scheduled_start_time(cls, values):\n        return set_default_scheduled_time(cls, values)\n\n    def is_scheduled(self) -> bool:\n        return self.type == StateType.SCHEDULED\n\n    def is_pending(self) -> bool:\n        return self.type == StateType.PENDING\n\n    def is_running(self) -> bool:\n        return self.type == StateType.RUNNING\n\n    def is_completed(self) -> bool:\n        return self.type == StateType.COMPLETED\n\n    def is_failed(self) -> bool:\n        return self.type == StateType.FAILED\n\n    def is_crashed(self) -> bool:\n        return self.type == StateType.CRASHED\n\n    def is_cancelled(self) -> bool:\n        return self.type == StateType.CANCELLED\n\n    def is_cancelling(self) -> bool:\n        return self.type == StateType.CANCELLING\n\n    def is_final(self) -> bool:\n        return self.type in TERMINAL_STATES\n\n    def is_paused(self) -> bool:\n        return self.type == StateType.PAUSED\n\n    def copy(\n        self,\n        *,\n        update: Optional[Dict[str, Any]] = None,\n        reset_fields: bool = False,\n        **kwargs,\n    ):\n        \"\"\"\n        Copying API models should return an object that could be inserted into the\n        database again. 
The 'timestamp' is reset using the default factory.\n        \"\"\"\n        update = update or {}\n        update.setdefault(\"timestamp\", self.__fields__[\"timestamp\"].get_default())\n        return super().copy(reset_fields=reset_fields, update=update, **kwargs)\n\n    def result(self, raise_on_failure: bool = True, fetch: Optional[bool] = None):\n        # Backwards compatible `result` handling on the server-side schema\n        from prefect.states import State\n\n        warnings.warn(\n            (\n                \"`result` is no longer supported by\"\n                \" `prefect.server.schemas.states.State` and will be removed in a future\"\n                \" release. When result retrieval is needed, use `prefect.states.State`.\"\n            ),\n            DeprecationWarning,\n            stacklevel=2,\n        )\n\n        state = State.parse_obj(self)\n        return state.result(raise_on_failure=raise_on_failure, fetch=fetch)\n\n    def to_state_create(self):\n        # Backwards compatibility for `to_state_create`\n        from prefect.client.schemas import State\n\n        warnings.warn(\n            (\n                \"Use of `prefect.server.schemas.states.State` from the client is\"\n                \" deprecated and support will be removed in a future release. Use\"\n                \" `prefect.states.State` instead.\"\n            ),\n            DeprecationWarning,\n            stacklevel=2,\n        )\n\n        state = State.parse_obj(self)\n        return state.to_state_create()\n\n    def __repr__(self) -> str:\n        \"\"\"\n        Generates a complete state representation appropriate for introspection\n        and debugging, including the result:\n\n        `MyCompletedState(message=\"my message\", type=COMPLETED, result=...)`\n        \"\"\"\n        from prefect.deprecated.data_documents import DataDocument\n\n        if isinstance(self.data, DataDocument):\n            result = self.data.decode()\n        else:\n            result = self.data\n\n        display = dict(\n            message=repr(self.message),\n            type=str(self.type.value),\n            result=repr(result),\n        )\n\n        return f\"{self.name}({', '.join(f'{k}={v}' for k, v in display.items())})\"\n\n    def __str__(self) -> str:\n        \"\"\"\n        Generates a simple state representation appropriate for logging:\n\n        `MyCompletedState(\"my message\", type=COMPLETED)`\n        \"\"\"\n\n        display = []\n\n        if self.message:\n            display.append(repr(self.message))\n\n        if self.type.value.lower() != self.name.lower():\n            display.append(f\"type={self.type.value}\")\n\n        return f\"{self.name}({', '.join(display)})\"\n\n    def __hash__(self) -> int:\n        return hash(\n            (\n                getattr(self.state_details, \"flow_run_id\", None),\n                getattr(self.state_details, \"task_run_id\", None),\n                self.timestamp,\n                self.type,\n            )\n        )\n
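A brief, hedged sketch of working with this schema (assumes the prefect package is installed):

from prefect.server.schemas.states import State, StateType

state = State(type=StateType.COMPLETED, message="Run finished")

print(state.name)            # "Completed" -- defaulted from the type
print(state.is_completed())  # True
print(state.is_final())      # True; COMPLETED is a terminal state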
    "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.State.from_orm_without_result","title":"from_orm_without_result classmethod","text":"

    During orchestration, ORM states can be instantiated prior to inserting results into the artifact table and the data field will not be eagerly loaded. In these cases, sqlalchemy will attempt to lazily load the relationship, which will fail when called within a synchronous pydantic method.

    This method will construct a State object from an ORM model without a loaded artifact and attach data passed using the with_data argument to the data field.

    Source code in prefect/server/schemas/states.py
    @classmethod\ndef from_orm_without_result(\n    cls,\n    orm_state: Union[\n        \"prefect.server.database.orm_models.ORMFlowRunState\",\n        \"prefect.server.database.orm_models.ORMTaskRunState\",\n    ],\n    with_data: Optional[Any] = None,\n):\n    \"\"\"\n    During orchestration, ORM states can be instantiated prior to inserting results\n    into the artifact table and the `data` field will not be eagerly loaded. In\n    these cases, sqlalchemy will attempt to lazily load the the relationship, which\n    will fail when called within a synchronous pydantic method.\n\n    This method will construct a `State` object from an ORM model without a loaded\n    artifact and attach data passed using the `with_data` argument to the `data`\n    field.\n    \"\"\"\n\n    field_keys = cls.schema()[\"properties\"].keys()\n    state_data = {\n        field: getattr(orm_state, field, None)\n        for field in field_keys\n        if field != \"data\"\n    }\n    state_data[\"data\"] = with_data\n    return cls(**state_data)\n
    "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.State.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
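To make the obfuscation behaviour concrete, a minimal pydantic v1 sketch (the Credentials model is hypothetical, not a Prefect schema) of what the default rendering hides and what include_secrets=True is meant to reveal:

from pydantic import BaseModel, SecretStr

class Credentials(BaseModel):
    token: SecretStr

creds = Credentials(token="abc123")
print(creds.json())                    # {"token": "**********"} -- obfuscated by default
print(creds.token.get_secret_value())  # "abc123" -- the underlying value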
    "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateBaseModel","title":"StateBaseModel","text":"

    Bases: IDBaseModel

    Source code in prefect/server/schemas/states.py
    class StateBaseModel(IDBaseModel):\n    def orm_dict(\n        self, *args, shallow: bool = False, json_compatible: bool = False, **kwargs\n    ) -> dict:\n        \"\"\"\n        This method is used as a convenience method for constructing fixtues by first\n        building a `State` schema object and converting it into an ORM-compatible\n        format. Because the `data` field is not writable on ORM states, this method\n        omits the `data` field entirely for the purposes of constructing an ORM model.\n        If state data is required, an artifact must be created separately.\n        \"\"\"\n\n        schema_dict = self.dict(\n            *args, shallow=shallow, json_compatible=json_compatible, **kwargs\n        )\n        # remove the data field in order to construct a state ORM model\n        schema_dict.pop(\"data\", None)\n        return schema_dict\n
    "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateBaseModel.json","title":"json","text":"

    Returns a representation of the model as JSON.

    If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

    Source code in prefect/server/utilities/schemas/bases.py
    def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
    "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateType","title":"StateType","text":"

    Bases: AutoEnum

    Enumeration of state types.

    Source code in prefect/server/schemas/states.py
    class StateType(AutoEnum):\n    \"\"\"Enumeration of state types.\"\"\"\n\n    SCHEDULED = AutoEnum.auto()\n    PENDING = AutoEnum.auto()\n    RUNNING = AutoEnum.auto()\n    COMPLETED = AutoEnum.auto()\n    FAILED = AutoEnum.auto()\n    CANCELLED = AutoEnum.auto()\n    CRASHED = AutoEnum.auto()\n    PAUSED = AutoEnum.auto()\n    CANCELLING = AutoEnum.auto()\n
    "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateType.auto","title":"auto staticmethod","text":"

    Exposes enum.auto() to avoid requiring a second import to use AutoEnum

    Source code in prefect/utilities/collections.py
    @staticmethod\ndef auto():\n    \"\"\"\n    Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n    \"\"\"\n    return auto()\n
    "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.AwaitingRetry","title":"AwaitingRetry","text":"

    Convenience function for creating AwaitingRetry states.

    Returns:

State: an AwaitingRetry state

    Source code in prefect/server/schemas/states.py
    def AwaitingRetry(\n    scheduled_time: datetime.datetime = None, cls: Type[State] = State, **kwargs\n) -> State:\n    \"\"\"Convenience function for creating `AwaitingRetry` states.\n\n    Returns:\n        State: a AwaitingRetry state\n    \"\"\"\n    return Scheduled(\n        cls=cls, scheduled_time=scheduled_time, name=\"AwaitingRetry\", **kwargs\n    )\n
    "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Cancelled","title":"Cancelled","text":"

    Convenience function for creating Cancelled states.

    Returns:

State: a Cancelled state

    Source code in prefect/server/schemas/states.py
    def Cancelled(cls: Type[State] = State, **kwargs) -> State:\n    \"\"\"Convenience function for creating `Cancelled` states.\n\n    Returns:\n        State: a Cancelled state\n    \"\"\"\n    return cls(type=StateType.CANCELLED, **kwargs)\n
    "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Cancelling","title":"Cancelling","text":"

    Convenience function for creating Cancelling states.

    Returns:

State: a Cancelling state

    Source code in prefect/server/schemas/states.py
    def Cancelling(cls: Type[State] = State, **kwargs) -> State:\n    \"\"\"Convenience function for creating `Cancelling` states.\n\n    Returns:\n        State: a Cancelling state\n    \"\"\"\n    return cls(type=StateType.CANCELLING, **kwargs)\n
    "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Completed","title":"Completed","text":"

    Convenience function for creating Completed states.

    Returns:

State: a Completed state

    Source code in prefect/server/schemas/states.py
    def Completed(cls: Type[State] = State, **kwargs) -> State:\n    \"\"\"Convenience function for creating `Completed` states.\n\n    Returns:\n        State: a Completed state\n    \"\"\"\n    return cls(type=StateType.COMPLETED, **kwargs)\n
    "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Crashed","title":"Crashed","text":"

    Convenience function for creating Crashed states.

    Returns:

State: a Crashed state

    Source code in prefect/server/schemas/states.py
    def Crashed(cls: Type[State] = State, **kwargs) -> State:\n    \"\"\"Convenience function for creating `Crashed` states.\n\n    Returns:\n        State: a Crashed state\n    \"\"\"\n    return cls(type=StateType.CRASHED, **kwargs)\n
    "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Failed","title":"Failed","text":"

    Convenience function for creating Failed states.

    Returns:

State: a Failed state

    Source code in prefect/server/schemas/states.py
    def Failed(cls: Type[State] = State, **kwargs) -> State:\n    \"\"\"Convenience function for creating `Failed` states.\n\n    Returns:\n        State: a Failed state\n    \"\"\"\n    return cls(type=StateType.FAILED, **kwargs)\n
    "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Late","title":"Late","text":"

    Convenience function for creating Late states.

    Returns:

State: a Late state

    Source code in prefect/server/schemas/states.py
    def Late(\n    scheduled_time: datetime.datetime = None, cls: Type[State] = State, **kwargs\n) -> State:\n    \"\"\"Convenience function for creating `Late` states.\n\n    Returns:\n        State: a Late state\n    \"\"\"\n    return Scheduled(cls=cls, scheduled_time=scheduled_time, name=\"Late\", **kwargs)\n
    "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Paused","title":"Paused","text":"

    Convenience function for creating Paused states.

    Returns:

State: a Paused state

    Source code in prefect/server/schemas/states.py
    def Paused(\n    cls: Type[State] = State,\n    timeout_seconds: int = None,\n    pause_expiration_time: datetime.datetime = None,\n    reschedule: bool = False,\n    pause_key: str = None,\n    **kwargs,\n) -> State:\n    \"\"\"Convenience function for creating `Paused` states.\n\n    Returns:\n        State: a Paused state\n    \"\"\"\n    state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n\n    if state_details.pause_timeout:\n        raise ValueError(\"An extra pause timeout was provided in state_details\")\n\n    if pause_expiration_time is not None and timeout_seconds is not None:\n        raise ValueError(\n            \"Cannot supply both a pause_expiration_time and timeout_seconds\"\n        )\n\n    if pause_expiration_time is None and timeout_seconds is None:\n        pass\n    else:\n        state_details.pause_timeout = pause_expiration_time or (\n            pendulum.now(\"UTC\") + pendulum.Duration(seconds=timeout_seconds)\n        )\n\n    state_details.pause_reschedule = reschedule\n    state_details.pause_key = pause_key\n\n    return cls(type=StateType.PAUSED, state_details=state_details, **kwargs)\n
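A short sketch of the options Paused accepts (assumes the prefect package; the 300-second timeout is arbitrary, and only one of timeout_seconds or pause_expiration_time may be supplied):

from prefect.server.schemas.states import Paused

state = Paused(timeout_seconds=300, reschedule=True)

print(state.type)                            # StateType.PAUSED
print(state.state_details.pause_reschedule)  # True
print(state.state_details.pause_timeout)     # roughly five minutes from now (UTC)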
    "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Pending","title":"Pending","text":"

    Convenience function for creating Pending states.

    Returns:

State: a Pending state

    Source code in prefect/server/schemas/states.py
    def Pending(cls: Type[State] = State, **kwargs) -> State:\n    \"\"\"Convenience function for creating `Pending` states.\n\n    Returns:\n        State: a Pending state\n    \"\"\"\n    return cls(type=StateType.PENDING, **kwargs)\n
    "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Retrying","title":"Retrying","text":"

    Convenience function for creating Retrying states.

    Returns:

State: a Retrying state

    Source code in prefect/server/schemas/states.py
    def Retrying(cls: Type[State] = State, **kwargs) -> State:\n    \"\"\"Convenience function for creating `Retrying` states.\n\n    Returns:\n        State: a Retrying state\n    \"\"\"\n    return cls(type=StateType.RUNNING, name=\"Retrying\", **kwargs)\n
    "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Running","title":"Running","text":"

    Convenience function for creating Running states.

    Returns:

State: a Running state

    Source code in prefect/server/schemas/states.py
    def Running(cls: Type[State] = State, **kwargs) -> State:\n    \"\"\"Convenience function for creating `Running` states.\n\n    Returns:\n        State: a Running state\n    \"\"\"\n    return cls(type=StateType.RUNNING, **kwargs)\n
    "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Scheduled","title":"Scheduled","text":"

    Convenience function for creating Scheduled states.

    Returns:

State: a Scheduled state

    Source code in prefect/server/schemas/states.py
    def Scheduled(\n    scheduled_time: datetime.datetime = None, cls: Type[State] = State, **kwargs\n) -> State:\n    \"\"\"Convenience function for creating `Scheduled` states.\n\n    Returns:\n        State: a Scheduled state\n    \"\"\"\n    # NOTE: `scheduled_time` must come first for backwards compatibility\n\n    state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n    if scheduled_time is None:\n        scheduled_time = pendulum.now(\"UTC\")\n    elif state_details.scheduled_time:\n        raise ValueError(\"An extra scheduled_time was provided in state_details\")\n    state_details.scheduled_time = scheduled_time\n\n    return cls(type=StateType.SCHEDULED, state_details=state_details, **kwargs)\n
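For example, a hedged sketch of the Scheduled helper (assumes prefect and pendulum; the ten-minute offset is arbitrary):

import pendulum

from prefect.server.schemas.states import Scheduled, StateType

state = Scheduled(scheduled_time=pendulum.now("UTC").add(minutes=10))

print(state.type == StateType.SCHEDULED)   # True
print(state.state_details.scheduled_time)  # ten minutes from now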
    "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Suspended","title":"Suspended","text":"

    Convenience function for creating Suspended states.

    Returns:

State: a Suspended state

    Source code in prefect/server/schemas/states.py
    def Suspended(\n    cls: Type[State] = State,\n    timeout_seconds: Optional[int] = None,\n    pause_expiration_time: Optional[datetime.datetime] = None,\n    pause_key: Optional[str] = None,\n    **kwargs,\n):\n    \"\"\"Convenience function for creating `Suspended` states.\n\n    Returns:\n        State: a Suspended state\n    \"\"\"\n    return Paused(\n        cls=cls,\n        name=\"Suspended\",\n        reschedule=True,\n        timeout_seconds=timeout_seconds,\n        pause_expiration_time=pause_expiration_time,\n        pause_key=pause_key,\n        **kwargs,\n    )\n
    "},{"location":"api-ref/server/services/late_runs/","title":"server.services.late_runs","text":""},{"location":"api-ref/server/services/late_runs/#prefect.server.services.late_runs","title":"prefect.server.services.late_runs","text":"

    The MarkLateRuns service. Responsible for putting flow runs into a Late state if they do not start on time. The threshold for a late run can be configured by changing PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS.
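Both the grace period and the polling interval are exposed as settings; a minimal sketch of reading them from Python (they can also be changed with prefect config set, like the other settings in this guide):

from prefect.settings import (
    PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS,
    PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS,
)

# grace period before a scheduled run is marked Late, and how often the
# MarkLateRuns loop checks for such runs
print(PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS.value())
print(PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS.value())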

    "},{"location":"api-ref/server/services/late_runs/#prefect.server.services.late_runs.MarkLateRuns","title":"MarkLateRuns","text":"

    Bases: LoopService

    A simple loop service responsible for identifying flow runs that are \"late\".

    A flow run is defined as \"late\" if it has not started within a certain amount of time after its scheduled start time. The exact amount is configurable in Prefect REST API Settings.

    Source code in prefect/server/services/late_runs.py
    class MarkLateRuns(LoopService):\n    \"\"\"\n    A simple loop service responsible for identifying flow runs that are \"late\".\n\n    A flow run is defined as \"late\" if has not scheduled within a certain amount\n    of time after its scheduled start time. The exact amount is configurable in\n    Prefect REST API Settings.\n    \"\"\"\n\n    def __init__(self, loop_seconds: float = None, **kwargs):\n        super().__init__(\n            loop_seconds=loop_seconds\n            or PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS.value(),\n            **kwargs,\n        )\n\n        # mark runs late if they are this far past their expected start time\n        self.mark_late_after: datetime.timedelta = (\n            PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS.value()\n        )\n\n        # query for this many runs to mark as late at once\n        self.batch_size = 400\n\n    @inject_db\n    async def run_once(self, db: PrefectDBInterface):\n        \"\"\"\n        Mark flow runs as late by:\n\n        - Querying for flow runs in a scheduled state that are Scheduled to start in the past\n        - For any runs past the \"late\" threshold, setting the flow run state to a new `Late` state\n        \"\"\"\n        scheduled_to_start_before = pendulum.now(\"UTC\").subtract(\n            seconds=self.mark_late_after.total_seconds()\n        )\n\n        while True:\n            async with db.session_context(begin_transaction=True) as session:\n                query = self._get_select_late_flow_runs_query(\n                    scheduled_to_start_before=scheduled_to_start_before, db=db\n                )\n\n                result = await session.execute(query)\n                runs = result.all()\n\n                # mark each run as late\n                for run in runs:\n                    await self._mark_flow_run_as_late(session=session, flow_run=run)\n\n                # if no runs were found, exit the loop\n                if len(runs) < self.batch_size:\n                    break\n\n        self.logger.info(\"Finished monitoring for late runs.\")\n\n    @inject_db\n    def _get_select_late_flow_runs_query(\n        self, scheduled_to_start_before: datetime.datetime, db: PrefectDBInterface\n    ):\n        \"\"\"\n        Returns a sqlalchemy query for late flow runs.\n\n        Args:\n            scheduled_to_start_before: the maximum next scheduled start time of\n                scheduled flow runs to consider in the returned query\n        \"\"\"\n        query = (\n            sa.select(\n                db.FlowRun.id,\n                db.FlowRun.next_scheduled_start_time,\n            )\n            .where(\n                # The next scheduled start time is in the past, including the mark late\n                # after buffer\n                (db.FlowRun.next_scheduled_start_time <= scheduled_to_start_before),\n                db.FlowRun.state_type == states.StateType.SCHEDULED,\n                db.FlowRun.state_name == \"Scheduled\",\n            )\n            .limit(self.batch_size)\n        )\n        return query\n\n    async def _mark_flow_run_as_late(\n        self, session: AsyncSession, flow_run: PrefectDBInterface.FlowRun\n    ) -> None:\n        \"\"\"\n        Mark a flow run as late.\n\n        Pass-through method for overrides.\n        \"\"\"\n        try:\n            await models.flow_runs.set_flow_run_state(\n                session=session,\n                flow_run_id=flow_run.id,\n                
state=states.Late(scheduled_time=flow_run.next_scheduled_start_time),\n                flow_policy=MarkLateRunsPolicy,  # type: ignore\n            )\n        except ObjectNotFoundError:\n            return  # flow run was deleted, ignore it\n
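    Because MarkLateRuns follows the standard LoopService lifecycle, a single pass can be triggered with start(loops=1). A minimal, illustrative sketch (assumes a Prefect database is reachable via your configured PREFECT_API_DATABASE_CONNECTION_URL):

    import asyncio
    from prefect.server.services.late_runs import MarkLateRuns

    async def main():
        # run one pass of the late-run check, then exit;
        # handle_signals=False skips installing SIGINT/SIGTERM handlers
        await MarkLateRuns(handle_signals=False).start(loops=1)

    asyncio.run(main())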
    "},{"location":"api-ref/server/services/late_runs/#prefect.server.services.late_runs.MarkLateRuns.run_once","title":"run_once async","text":"

    Mark flow runs as late by:

    • Querying for flow runs in a scheduled state that are Scheduled to start in the past
    • For any runs past the \"late\" threshold, setting the flow run state to a new Late state
    Source code in prefect/server/services/late_runs.py
    @inject_db\nasync def run_once(self, db: PrefectDBInterface):\n    \"\"\"\n    Mark flow runs as late by:\n\n    - Querying for flow runs in a scheduled state that are Scheduled to start in the past\n    - For any runs past the \"late\" threshold, setting the flow run state to a new `Late` state\n    \"\"\"\n    scheduled_to_start_before = pendulum.now(\"UTC\").subtract(\n        seconds=self.mark_late_after.total_seconds()\n    )\n\n    while True:\n        async with db.session_context(begin_transaction=True) as session:\n            query = self._get_select_late_flow_runs_query(\n                scheduled_to_start_before=scheduled_to_start_before, db=db\n            )\n\n            result = await session.execute(query)\n            runs = result.all()\n\n            # mark each run as late\n            for run in runs:\n                await self._mark_flow_run_as_late(session=session, flow_run=run)\n\n            # if no runs were found, exit the loop\n            if len(runs) < self.batch_size:\n                break\n\n    self.logger.info(\"Finished monitoring for late runs.\")\n
    "},{"location":"api-ref/server/services/loop_service/","title":"server.services.loop_service","text":""},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service","title":"prefect.server.services.loop_service","text":"

    The base class for all Prefect REST API loop services.

    "},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService","title":"LoopService","text":"

    Loop services are relatively lightweight maintenance routines that need to run periodically.

    This class makes it straightforward to design and integrate them. Users only need to define the run_once coroutine to describe the behavior of the service on each loop.

    Source code in prefect/server/services/loop_service.py
    class LoopService:\n    \"\"\"\n    Loop services are relatively lightweight maintenance routines that need to run periodically.\n\n    This class makes it straightforward to design and integrate them. Users only need to\n    define the `run_once` coroutine to describe the behavior of the service on each loop.\n    \"\"\"\n\n    loop_seconds = 60\n\n    def __init__(self, loop_seconds: float = None, handle_signals: bool = True):\n        \"\"\"\n        Args:\n            loop_seconds (float): if provided, overrides the loop interval\n                otherwise specified as a class variable\n            handle_signals (bool): if True (default), SIGINT and SIGTERM are\n                gracefully intercepted and shut down the running service.\n        \"\"\"\n        if loop_seconds:\n            self.loop_seconds = loop_seconds  # seconds between runs\n        self._should_stop = False  # flag for whether the service should stop running\n        self._is_running = False  # flag for whether the service is running\n        self.name = type(self).__name__\n        self.logger = get_logger(f\"server.services.{self.name.lower()}\")\n\n        if handle_signals:\n            _register_signal(signal.SIGINT, self._stop)\n            _register_signal(signal.SIGTERM, self._stop)\n\n    @inject_db\n    async def _on_start(self, db: PrefectDBInterface) -> None:\n        \"\"\"\n        Called prior to running the service\n        \"\"\"\n        # reset the _should_stop flag\n        self._should_stop = False\n        # set the _is_running flag\n        self._is_running = True\n\n    async def _on_stop(self) -> None:\n        \"\"\"\n        Called after running the service\n        \"\"\"\n        # reset the _is_running flag\n        self._is_running = False\n\n    async def start(self, loops=None) -> None:\n        \"\"\"\n        Run the service `loops` time. 
Pass loops=None to run forever.\n\n        Args:\n            loops (int, optional): the number of loops to run before exiting.\n        \"\"\"\n\n        await self._on_start()\n\n        i = 0\n        while not self._should_stop:\n            start_time = pendulum.now(\"UTC\")\n\n            try:\n                self.logger.debug(f\"About to run {self.name}...\")\n                await self.run_once()\n\n            except NotImplementedError as exc:\n                raise exc from None\n\n            # if an error is raised, log and continue\n            except Exception as exc:\n                self.logger.error(f\"Unexpected error in: {repr(exc)}\", exc_info=True)\n\n            end_time = pendulum.now(\"UTC\")\n\n            # if service took longer than its loop interval, log a warning\n            # that the interval might be too short\n            if (end_time - start_time).total_seconds() > self.loop_seconds:\n                self.logger.warning(\n                    f\"{self.name} took {(end_time-start_time).total_seconds()} seconds\"\n                    \" to run, which is longer than its loop interval of\"\n                    f\" {self.loop_seconds} seconds.\"\n                )\n\n            # check if early stopping was requested\n            i += 1\n            if loops is not None and i == loops:\n                self.logger.debug(f\"{self.name} exiting after {loops} loop(s).\")\n                await self.stop(block=False)\n\n            # next run is every \"loop seconds\" after each previous run *started*.\n            # note that if the loop took unexpectedly long, the \"next_run\" time\n            # might be in the past, which will result in an instant start\n            next_run = max(\n                start_time.add(seconds=self.loop_seconds), pendulum.now(\"UTC\")\n            )\n            self.logger.debug(f\"Finished running {self.name}. Next run at {next_run}\")\n\n            # check the `_should_stop` flag every 1 seconds until the next run time is reached\n            while pendulum.now(\"UTC\") < next_run and not self._should_stop:\n                await asyncio.sleep(\n                    min(1, (next_run - pendulum.now(\"UTC\")).total_seconds())\n                )\n\n        await self._on_stop()\n\n    async def stop(self, block=True) -> None:\n        \"\"\"\n        Gracefully stops a running LoopService and optionally blocks until the\n        service stops.\n\n        Args:\n            block (bool): if True, blocks until the service is\n                finished running. Otherwise it requests a stop and returns but\n                the service may still be running a final loop.\n\n        \"\"\"\n        self._stop()\n\n        if block:\n            # if block=True, sleep until the service stops running,\n            # but no more than `loop_seconds` to avoid a deadlock\n            with anyio.move_on_after(self.loop_seconds):\n                while self._is_running:\n                    await asyncio.sleep(0.1)\n\n            # if the service is still running after `loop_seconds`, something's wrong\n            if self._is_running:\n                self.logger.warning(\n                    f\"`stop(block=True)` was called on {self.name} but more than one\"\n                    f\" loop interval ({self.loop_seconds} seconds) has passed. This\"\n                    \" usually means something is wrong. 
If `stop()` was called from\"\n                    \" inside the loop service, use `stop(block=False)` instead.\"\n                )\n\n    def _stop(self, *_) -> None:\n        \"\"\"\n        Private, synchronous method for setting the `_should_stop` flag. Takes arbitrary\n        arguments so it can be used as a signal handler.\n        \"\"\"\n        self._should_stop = True\n\n    async def run_once(self) -> None:\n        \"\"\"\n        Represents one loop of the service.\n\n        Users should override this method.\n\n        To actually run the service once, call `LoopService().start(loops=1)`\n        instead of `LoopService().run_once()`, because this method will not invoke setup\n        and teardown methods properly.\n        \"\"\"\n        raise NotImplementedError(\"LoopService subclasses must implement this method.\")\n
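    A custom loop service only needs to subclass LoopService and implement run_once. A minimal, illustrative sketch (HeartbeatService is a hypothetical name):

    import asyncio
    from prefect.server.services.loop_service import LoopService

    class HeartbeatService(LoopService):
        # run every 30 seconds instead of the 60-second default
        loop_seconds = 30

        async def run_once(self) -> None:
            # one unit of maintenance work per loop iteration
            self.logger.info("heartbeat")

    # run a single iteration for demonstration; omit loops to run until stopped
    asyncio.run(HeartbeatService(handle_signals=False).start(loops=1))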
    "},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService.run_once","title":"run_once async","text":"

    Represents one loop of the service.

    Users should override this method.

    To actually run the service once, call LoopService().start(loops=1) instead of LoopService().run_once(), because this method will not invoke setup and teardown methods properly.

    Source code in prefect/server/services/loop_service.py
    async def run_once(self) -> None:\n    \"\"\"\n    Represents one loop of the service.\n\n    Users should override this method.\n\n    To actually run the service once, call `LoopService().start(loops=1)`\n    instead of `LoopService().run_once()`, because this method will not invoke setup\n    and teardown methods properly.\n    \"\"\"\n    raise NotImplementedError(\"LoopService subclasses must implement this method.\")\n
    "},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService.start","title":"start async","text":"

    Run the service for loops iterations. Pass loops=None to run forever.

    Parameters:

    loops (int, default None): the number of loops to run before exiting.

    None Source code in prefect/server/services/loop_service.py
    async def start(self, loops=None) -> None:\n    \"\"\"\n    Run the service `loops` time. Pass loops=None to run forever.\n\n    Args:\n        loops (int, optional): the number of loops to run before exiting.\n    \"\"\"\n\n    await self._on_start()\n\n    i = 0\n    while not self._should_stop:\n        start_time = pendulum.now(\"UTC\")\n\n        try:\n            self.logger.debug(f\"About to run {self.name}...\")\n            await self.run_once()\n\n        except NotImplementedError as exc:\n            raise exc from None\n\n        # if an error is raised, log and continue\n        except Exception as exc:\n            self.logger.error(f\"Unexpected error in: {repr(exc)}\", exc_info=True)\n\n        end_time = pendulum.now(\"UTC\")\n\n        # if service took longer than its loop interval, log a warning\n        # that the interval might be too short\n        if (end_time - start_time).total_seconds() > self.loop_seconds:\n            self.logger.warning(\n                f\"{self.name} took {(end_time-start_time).total_seconds()} seconds\"\n                \" to run, which is longer than its loop interval of\"\n                f\" {self.loop_seconds} seconds.\"\n            )\n\n        # check if early stopping was requested\n        i += 1\n        if loops is not None and i == loops:\n            self.logger.debug(f\"{self.name} exiting after {loops} loop(s).\")\n            await self.stop(block=False)\n\n        # next run is every \"loop seconds\" after each previous run *started*.\n        # note that if the loop took unexpectedly long, the \"next_run\" time\n        # might be in the past, which will result in an instant start\n        next_run = max(\n            start_time.add(seconds=self.loop_seconds), pendulum.now(\"UTC\")\n        )\n        self.logger.debug(f\"Finished running {self.name}. Next run at {next_run}\")\n\n        # check the `_should_stop` flag every 1 seconds until the next run time is reached\n        while pendulum.now(\"UTC\") < next_run and not self._should_stop:\n            await asyncio.sleep(\n                min(1, (next_run - pendulum.now(\"UTC\")).total_seconds())\n            )\n\n    await self._on_stop()\n
    "},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService.stop","title":"stop async","text":"

    Gracefully stops a running LoopService and optionally blocks until the service stops.

    Parameters:

    block (bool, default True): if True, blocks until the service is finished running. Otherwise it requests a stop and returns, but the service may still be running a final loop.

    True Source code in prefect/server/services/loop_service.py
    async def stop(self, block=True) -> None:\n    \"\"\"\n    Gracefully stops a running LoopService and optionally blocks until the\n    service stops.\n\n    Args:\n        block (bool): if True, blocks until the service is\n            finished running. Otherwise it requests a stop and returns but\n            the service may still be running a final loop.\n\n    \"\"\"\n    self._stop()\n\n    if block:\n        # if block=True, sleep until the service stops running,\n        # but no more than `loop_seconds` to avoid a deadlock\n        with anyio.move_on_after(self.loop_seconds):\n            while self._is_running:\n                await asyncio.sleep(0.1)\n\n        # if the service is still running after `loop_seconds`, something's wrong\n        if self._is_running:\n            self.logger.warning(\n                f\"`stop(block=True)` was called on {self.name} but more than one\"\n                f\" loop interval ({self.loop_seconds} seconds) has passed. This\"\n                \" usually means something is wrong. If `stop()` was called from\"\n                \" inside the loop service, use `stop(block=False)` instead.\"\n            )\n
    "},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.run_multiple_services","title":"run_multiple_services async","text":"

    Only one signal handler can be active at a time, so this function takes a list of loop services and runs all of them with a global signal handler.

    Source code in prefect/server/services/loop_service.py
    async def run_multiple_services(loop_services: List[LoopService]):\n    \"\"\"\n    Only one signal handler can be active at a time, so this function takes a list\n    of loop services and runs all of them with a global signal handler.\n    \"\"\"\n\n    def stop_all_services(self, *_):\n        for service in loop_services:\n            service._stop()\n\n    signal.signal(signal.SIGINT, stop_all_services)\n    signal.signal(signal.SIGTERM, stop_all_services)\n    await asyncio.gather(*[service.start() for service in loop_services])\n
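    For example, several services can share one SIGINT/SIGTERM handler like this (an illustrative sketch; handle_signals=False keeps the individual services from registering their own handlers):

    import asyncio
    from prefect.server.services.late_runs import MarkLateRuns
    from prefect.server.services.loop_service import run_multiple_services
    from prefect.server.services.scheduler import RecentDeploymentsScheduler, Scheduler

    asyncio.run(
        run_multiple_services(
            [
                Scheduler(handle_signals=False),
                RecentDeploymentsScheduler(handle_signals=False),
                MarkLateRuns(handle_signals=False),
            ]
        )
    )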
    "},{"location":"api-ref/server/services/scheduler/","title":"server.services.scheduler","text":""},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler","title":"prefect.server.services.scheduler","text":"

    The Scheduler service.

    "},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.RecentDeploymentsScheduler","title":"RecentDeploymentsScheduler","text":"

    Bases: Scheduler

    A scheduler that only schedules deployments that were updated very recently. This scheduler can run on a tight loop and ensure that runs from newly-created or updated deployments are rapidly scheduled without having to wait for the \"main\" scheduler to complete its loop.

    Note that scheduling is idempotent, so it's OK for this scheduler to attempt to schedule the same deployments as the main scheduler. Its purpose is to accelerate scheduling for any deployments that users are interacting with.

    Source code in prefect/server/services/scheduler.py
    class RecentDeploymentsScheduler(Scheduler):\n    \"\"\"\n    A scheduler that only schedules deployments that were updated very recently.\n    This scheduler can run on a tight loop and ensure that runs from\n    newly-created or updated deployments are rapidly scheduled without having to\n    wait for the \"main\" scheduler to complete its loop.\n\n    Note that scheduling is idempotent, so its ok for this scheduler to attempt\n    to schedule the same deployments as the main scheduler. It's purpose is to\n    accelerate scheduling for any deployments that users are interacting with.\n    \"\"\"\n\n    # this scheduler runs on a tight loop\n    loop_seconds = 5\n\n    @inject_db\n    def _get_select_deployments_to_schedule_query(self, db: PrefectDBInterface):\n        \"\"\"\n        Returns a sqlalchemy query for selecting deployments to schedule\n        \"\"\"\n        query = (\n            sa.select(db.Deployment.id)\n            .where(\n                sa.and_(\n                    db.Deployment.paused.is_not(True),\n                    # use a slightly larger window than the loop interval to pick up\n                    # any deployments that were created *while* the scheduler was\n                    # last running (assuming the scheduler takes less than one\n                    # second to run). Scheduling is idempotent so picking up schedules\n                    # multiple times is not a concern.\n                    db.Deployment.updated\n                    >= pendulum.now(\"UTC\").subtract(seconds=self.loop_seconds + 1),\n                    (\n                        # Only include deployments that have at least one\n                        # active schedule.\n                        sa.select(db.DeploymentSchedule.deployment_id)\n                        .where(\n                            sa.and_(\n                                db.DeploymentSchedule.deployment_id == db.Deployment.id,\n                                db.DeploymentSchedule.active.is_(True),\n                            )\n                        )\n                        .exists()\n                    ),\n                )\n            )\n            .order_by(db.Deployment.id)\n            .limit(self.deployment_batch_size)\n        )\n        return query\n
    "},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.Scheduler","title":"Scheduler","text":"

    Bases: LoopService

    A loop service that schedules flow runs from deployments.

    Source code in prefect/server/services/scheduler.py
    class Scheduler(LoopService):\n    \"\"\"\n    A loop service that schedules flow runs from deployments.\n    \"\"\"\n\n    # the main scheduler takes its loop interval from\n    # PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS\n    loop_seconds = None\n\n    def __init__(self, loop_seconds: float = None, **kwargs):\n        super().__init__(\n            loop_seconds=(\n                loop_seconds\n                or self.loop_seconds\n                or PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS.value()\n            ),\n            **kwargs,\n        )\n        self.deployment_batch_size: int = (\n            PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE.value()\n        )\n        self.max_runs: int = PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS.value()\n        self.min_runs: int = PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS.value()\n        self.max_scheduled_time: datetime.timedelta = (\n            PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME.value()\n        )\n        self.min_scheduled_time: datetime.timedelta = (\n            PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME.value()\n        )\n        self.insert_batch_size = (\n            PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE.value()\n        )\n\n    @inject_db\n    async def run_once(self, db: PrefectDBInterface):\n        \"\"\"\n        Schedule flow runs by:\n\n        - Querying for deployments with active schedules\n        - Generating the next set of flow runs based on each deployments schedule\n        - Inserting all scheduled flow runs into the database\n\n        All inserted flow runs are committed to the database at the termination of the\n        loop.\n        \"\"\"\n        total_inserted_runs = 0\n\n        last_id = None\n        while True:\n            async with db.session_context(begin_transaction=False) as session:\n                query = self._get_select_deployments_to_schedule_query()\n\n                # use cursor based pagination\n                if last_id:\n                    query = query.where(db.Deployment.id > last_id)\n\n                result = await session.execute(query)\n                deployment_ids = result.scalars().unique().all()\n\n                # collect runs across all deployments\n                try:\n                    runs_to_insert = await self._collect_flow_runs(\n                        session=session, deployment_ids=deployment_ids\n                    )\n                except TryAgain:\n                    continue\n\n            # bulk insert the runs based on batch size setting\n            for batch in batched_iterable(runs_to_insert, self.insert_batch_size):\n                async with db.session_context(begin_transaction=True) as session:\n                    inserted_runs = await self._insert_scheduled_flow_runs(\n                        session=session, runs=batch\n                    )\n                    total_inserted_runs += len(inserted_runs)\n\n            # if this is the last page of deployments, exit the loop\n            if len(deployment_ids) < self.deployment_batch_size:\n                break\n            else:\n                # record the last deployment ID\n                last_id = deployment_ids[-1]\n\n        self.logger.info(f\"Scheduled {total_inserted_runs} runs.\")\n\n    @inject_db\n    def _get_select_deployments_to_schedule_query(self, db: PrefectDBInterface):\n        \"\"\"\n        Returns a sqlalchemy query for selecting deployments to schedule.\n\n        The query gets the IDs of any deployments with:\n\n  
          - an active schedule\n            - EITHER:\n                - fewer than `min_runs` auto-scheduled runs\n                - OR the max scheduled time is less than `max_scheduled_time` in the future\n        \"\"\"\n        now = pendulum.now(\"UTC\")\n        query = (\n            sa.select(db.Deployment.id)\n            .select_from(db.Deployment)\n            # TODO: on Postgres, this could be replaced with a lateral join that\n            # sorts by `next_scheduled_start_time desc` and limits by\n            # `self.min_runs` for a ~ 50% speedup. At the time of writing,\n            # performance of this universal query appears to be fast enough that\n            # this optimization is not worth maintaining db-specific queries\n            .join(\n                db.FlowRun,\n                # join on matching deployments, only picking up future scheduled runs\n                sa.and_(\n                    db.Deployment.id == db.FlowRun.deployment_id,\n                    db.FlowRun.state_type == StateType.SCHEDULED,\n                    db.FlowRun.next_scheduled_start_time >= now,\n                    db.FlowRun.auto_scheduled.is_(True),\n                ),\n                isouter=True,\n            )\n            .where(\n                sa.and_(\n                    db.Deployment.paused.is_not(True),\n                    (\n                        # Only include deployments that have at least one\n                        # active schedule.\n                        sa.select(db.DeploymentSchedule.deployment_id)\n                        .where(\n                            sa.and_(\n                                db.DeploymentSchedule.deployment_id == db.Deployment.id,\n                                db.DeploymentSchedule.active.is_(True),\n                            )\n                        )\n                        .exists()\n                    ),\n                )\n            )\n            .group_by(db.Deployment.id)\n            # having EITHER fewer than three runs OR runs not scheduled far enough out\n            .having(\n                sa.or_(\n                    sa.func.count(db.FlowRun.next_scheduled_start_time) < self.min_runs,\n                    sa.func.max(db.FlowRun.next_scheduled_start_time)\n                    < now + self.min_scheduled_time,\n                )\n            )\n            .order_by(db.Deployment.id)\n            .limit(self.deployment_batch_size)\n        )\n        return query\n\n    async def _collect_flow_runs(\n        self,\n        session: sa.orm.Session,\n        deployment_ids: List[UUID],\n    ) -> List[Dict]:\n        runs_to_insert = []\n        for deployment_id in deployment_ids:\n            now = pendulum.now(\"UTC\")\n            # guard against erroneously configured schedules\n            try:\n                runs_to_insert.extend(\n                    await self._generate_scheduled_flow_runs(\n                        session=session,\n                        deployment_id=deployment_id,\n                        start_time=now,\n                        end_time=now + self.max_scheduled_time,\n                        min_time=self.min_scheduled_time,\n                        min_runs=self.min_runs,\n                        max_runs=self.max_runs,\n                    )\n                )\n            except Exception:\n                self.logger.exception(\n                    f\"Error scheduling deployment {deployment_id!r}.\",\n                )\n            finally:\n                connection = await 
session.connection()\n                if connection.invalidated:\n                    # If the error we handled above was the kind of database error that\n                    # causes underlying transaction to rollback and the connection to\n                    # become invalidated, rollback this session.  Errors that may cause\n                    # this are connection drops, database restarts, and things of the\n                    # sort.\n                    #\n                    # This rollback _does not rollback a transaction_, since that has\n                    # actually already happened due to the error above.  It brings the\n                    # Python session in sync with underlying connection so that when we\n                    # exec the outer with block, the context manager will not attempt to\n                    # commit the session.\n                    #\n                    # Then, raise TryAgain to break out of these nested loops, back to\n                    # the outer loop, where we'll begin a new transaction with\n                    # session.begin() in the next loop iteration.\n                    await session.rollback()\n                    raise TryAgain()\n        return runs_to_insert\n\n    @inject_db\n    async def _generate_scheduled_flow_runs(\n        self,\n        session: sa.orm.Session,\n        deployment_id: UUID,\n        start_time: datetime.datetime,\n        end_time: datetime.datetime,\n        min_time: datetime.timedelta,\n        min_runs: int,\n        max_runs: int,\n        db: PrefectDBInterface,\n    ) -> List[Dict]:\n        \"\"\"\n        Given a `deployment_id` and schedule params, generates a list of flow run\n        objects and associated scheduled states that represent scheduled flow runs.\n\n        Pass-through method for overrides.\n\n\n        Args:\n            session: a database session\n            deployment_id: the id of the deployment to schedule\n            start_time: the time from which to start scheduling runs\n            end_time: runs will be scheduled until at most this time\n            min_time: runs will be scheduled until at least this far in the future\n            min_runs: a minimum amount of runs to schedule\n            max_runs: a maximum amount of runs to schedule\n\n        This function will generate the minimum number of runs that satisfy the min\n        and max times, and the min and max counts. Specifically, the following order\n        will be respected:\n\n            - Runs will be generated starting on or after the `start_time`\n            - No more than `max_runs` runs will be generated\n            - No runs will be generated after `end_time` is reached\n            - At least `min_runs` runs will be generated\n            - Runs will be generated until at least `start_time + min_time` is reached\n\n        \"\"\"\n        return await models.deployments._generate_scheduled_flow_runs(\n            session=session,\n            deployment_id=deployment_id,\n            start_time=start_time,\n            end_time=end_time,\n            min_time=min_time,\n            min_runs=min_runs,\n            max_runs=max_runs,\n        )\n\n    @inject_db\n    async def _insert_scheduled_flow_runs(\n        self,\n        session: sa.orm.Session,\n        runs: List[Dict],\n        db: PrefectDBInterface,\n    ) -> List[UUID]:\n        \"\"\"\n        Given a list of flow runs to schedule, as generated by\n        `_generate_scheduled_flow_runs`, inserts them into the database. 
Note this is a\n        separate method to facilitate batch operations on many scheduled runs.\n\n        Pass-through method for overrides.\n        \"\"\"\n        return await models.deployments._insert_scheduled_flow_runs(\n            session=session, runs=runs\n        )\n
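    To run a single scheduling pass with a custom loop interval, something like the following works (an illustrative sketch; loop_seconds here overrides the PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS setting for this instance only):

    import asyncio
    from prefect.server.services.scheduler import Scheduler

    # schedule runs for eligible deployments once, then exit
    asyncio.run(Scheduler(loop_seconds=15, handle_signals=False).start(loops=1))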
    "},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.Scheduler.run_once","title":"run_once async","text":"

    Schedule flow runs by:

    • Querying for deployments with active schedules
    • Generating the next set of flow runs based on each deployment's schedule
    • Inserting all scheduled flow runs into the database

    All inserted flow runs are committed to the database at the termination of the loop.

    Source code in prefect/server/services/scheduler.py
    @inject_db\nasync def run_once(self, db: PrefectDBInterface):\n    \"\"\"\n    Schedule flow runs by:\n\n    - Querying for deployments with active schedules\n    - Generating the next set of flow runs based on each deployments schedule\n    - Inserting all scheduled flow runs into the database\n\n    All inserted flow runs are committed to the database at the termination of the\n    loop.\n    \"\"\"\n    total_inserted_runs = 0\n\n    last_id = None\n    while True:\n        async with db.session_context(begin_transaction=False) as session:\n            query = self._get_select_deployments_to_schedule_query()\n\n            # use cursor based pagination\n            if last_id:\n                query = query.where(db.Deployment.id > last_id)\n\n            result = await session.execute(query)\n            deployment_ids = result.scalars().unique().all()\n\n            # collect runs across all deployments\n            try:\n                runs_to_insert = await self._collect_flow_runs(\n                    session=session, deployment_ids=deployment_ids\n                )\n            except TryAgain:\n                continue\n\n        # bulk insert the runs based on batch size setting\n        for batch in batched_iterable(runs_to_insert, self.insert_batch_size):\n            async with db.session_context(begin_transaction=True) as session:\n                inserted_runs = await self._insert_scheduled_flow_runs(\n                    session=session, runs=batch\n                )\n                total_inserted_runs += len(inserted_runs)\n\n        # if this is the last page of deployments, exit the loop\n        if len(deployment_ids) < self.deployment_batch_size:\n            break\n        else:\n            # record the last deployment ID\n            last_id = deployment_ids[-1]\n\n    self.logger.info(f\"Scheduled {total_inserted_runs} runs.\")\n
    "},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.TryAgain","title":"TryAgain","text":"

    Bases: Exception

    Internal control-flow exception used to retry the Scheduler's main loop

    Source code in prefect/server/services/scheduler.py
    class TryAgain(Exception):\n    \"\"\"Internal control-flow exception used to retry the Scheduler's main loop\"\"\"\n
    "},{"location":"api-ref/server/utilities/database/","title":"server.utilities.database","text":""},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database","title":"prefect.server.utilities.database","text":"

    Utilities for interacting with Prefect REST API database and ORM layer.

    Prefect supports both SQLite and Postgres. Many of these utilities allow the Prefect REST API to seamlessly switch between the two.

    "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.GenerateUUID","title":"GenerateUUID","text":"

    Bases: FunctionElement

    Platform-independent UUID default generator. Note that the actual functionality for this class is specified in the compiles-decorated functions below.

    Source code in prefect/server/utilities/database.py
    class GenerateUUID(FunctionElement):\n    \"\"\"\n    Platform-independent UUID default generator.\n    Note the actual functionality for this class is specified in the\n    `compiles`-decorated functions below\n    \"\"\"\n\n    name = \"uuid_default\"\n
    "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.JSON","title":"JSON","text":"

    Bases: TypeDecorator

    JSON type that returns SQLAlchemy's dialect-specific JSON types, where possible. Uses generic JSON otherwise.

    The \"base\" type is postgresql.JSONB to expose useful methods prior to SQL compilation

    Source code in prefect/server/utilities/database.py
    class JSON(TypeDecorator):\n    \"\"\"\n    JSON type that returns SQLAlchemy's dialect-specific JSON types, where\n    possible. Uses generic JSON otherwise.\n\n    The \"base\" type is postgresql.JSONB to expose useful methods prior\n    to SQL compilation\n    \"\"\"\n\n    impl = postgresql.JSONB\n    cache_ok = True\n\n    def load_dialect_impl(self, dialect):\n        if dialect.name == \"postgresql\":\n            return dialect.type_descriptor(postgresql.JSONB(none_as_null=True))\n        elif dialect.name == \"sqlite\":\n            return dialect.type_descriptor(sqlite.JSON(none_as_null=True))\n        else:\n            return dialect.type_descriptor(sa.JSON(none_as_null=True))\n\n    def process_bind_param(self, value, dialect):\n        \"\"\"Prepares the given value to be used as a JSON field in a parameter binding\"\"\"\n        if not value:\n            return value\n\n        # PostgreSQL does not support the floating point extrema values `NaN`,\n        # `-Infinity`, or `Infinity`\n        # https://www.postgresql.org/docs/current/datatype-json.html#JSON-TYPE-MAPPING-TABLE\n        #\n        # SQLite supports storing and retrieving full JSON values that include\n        # `NaN`, `-Infinity`, or `Infinity`, but any query that requires SQLite to parse\n        # the value (like `json_extract`) will fail.\n        #\n        # Replace any `NaN`, `-Infinity`, or `Infinity` values with `None` in the\n        # returned value.  See more about `parse_constant` at\n        # https://docs.python.org/3/library/json.html#json.load.\n        return json.loads(json.dumps(value), parse_constant=lambda c: None)\n
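    In a model definition the JSON type is used like any other SQLAlchemy column type. A minimal, illustrative sketch (the Event table is hypothetical):

    import sqlalchemy as sa
    from sqlalchemy.orm import declarative_base
    from prefect.server.utilities.database import JSON

    Base = declarative_base()

    class Event(Base):
        __tablename__ = "event"
        id = sa.Column(sa.Integer, primary_key=True)
        # stored as JSONB on Postgres and JSON on SQLite
        payload = sa.Column(JSON(), nullable=True)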
    "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.Pydantic","title":"Pydantic","text":"

    Bases: TypeDecorator

    A pydantic type that converts inserted parameters to json and converts read values to the pydantic type.

    Source code in prefect/server/utilities/database.py
    class Pydantic(TypeDecorator):\n    \"\"\"\n    A pydantic type that converts inserted parameters to\n    json and converts read values to the pydantic type.\n    \"\"\"\n\n    impl = JSON\n    cache_ok = True\n\n    def __init__(self, pydantic_type, sa_column_type=None):\n        super().__init__()\n        self._pydantic_type = pydantic_type\n        if sa_column_type is not None:\n            self.impl = sa_column_type\n\n    def process_bind_param(self, value, dialect):\n        if value is None:\n            return None\n        # parse the value to ensure it complies with the schema\n        # (this will raise validation errors if not)\n        value = pydantic.parse_obj_as(self._pydantic_type, value)\n        # sqlalchemy requires the bind parameter's value to be a python-native\n        # collection of JSON-compatible objects. we achieve that by dumping the\n        # value to a json string using the pydantic JSON encoder and re-parsing\n        # it into a python-native form.\n        return json.loads(json.dumps(value, default=pydantic.json.pydantic_encoder))\n\n    def process_result_value(self, value, dialect):\n        if value is not None:\n            # load the json object into a fully hydrated typed object\n            return pydantic.parse_obj_as(self._pydantic_type, value)\n
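    The Pydantic type is constructed with the pydantic model (or other type) it should validate against. A minimal, illustrative sketch (RetryPolicy and RunConfig are hypothetical):

    import pydantic
    import sqlalchemy as sa
    from sqlalchemy.orm import declarative_base
    from prefect.server.utilities.database import Pydantic

    class RetryPolicy(pydantic.BaseModel):
        max_retries: int = 0
        retry_delay_seconds: float = 0.0

    Base = declarative_base()

    class RunConfig(Base):
        __tablename__ = "run_config"
        id = sa.Column(sa.Integer, primary_key=True)
        # values are validated against RetryPolicy on write and
        # hydrated back into RetryPolicy instances on read
        retry_policy = sa.Column(Pydantic(RetryPolicy))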
    "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.Timestamp","title":"Timestamp","text":"

    Bases: TypeDecorator

    TypeDecorator that ensures that timestamps have a timezone.

    For SQLite, all timestamps are converted to UTC (since they are stored as naive timestamps without timezones) and recovered as UTC.

    Source code in prefect/server/utilities/database.py
    class Timestamp(TypeDecorator):\n    \"\"\"TypeDecorator that ensures that timestamps have a timezone.\n\n    For SQLite, all timestamps are converted to UTC (since they are stored\n    as naive timestamps without timezones) and recovered as UTC.\n    \"\"\"\n\n    impl = sa.TIMESTAMP(timezone=True)\n    cache_ok = True\n\n    def load_dialect_impl(self, dialect):\n        if dialect.name == \"postgresql\":\n            return dialect.type_descriptor(postgresql.TIMESTAMP(timezone=True))\n        elif dialect.name == \"sqlite\":\n            return dialect.type_descriptor(\n                sqlite.DATETIME(\n                    # SQLite is very particular about datetimes, and performs all comparisons\n                    # as alphanumeric comparisons without regard for actual timestamp\n                    # semantics or timezones. Therefore, it's important to have uniform\n                    # and sortable datetime representations. The default is an ISO8601-compatible\n                    # string with NO time zone and a space (\" \") delimiter between the date\n                    # and the time. The below settings can be used to add a \"T\" delimiter but\n                    # will require all other sqlite datetimes to be set similarly, including\n                    # the custom default value for datetime columns and any handwritten SQL\n                    # formed with `strftime()`.\n                    #\n                    # store with \"T\" separator for time\n                    # storage_format=(\n                    #     \"%(year)04d-%(month)02d-%(day)02d\"\n                    #     \"T%(hour)02d:%(minute)02d:%(second)02d.%(microsecond)06d\"\n                    # ),\n                    # handle ISO 8601 with \"T\" or \" \" as the time separator\n                    # regexp=r\"(\\d+)-(\\d+)-(\\d+)[T ](\\d+):(\\d+):(\\d+).(\\d+)\",\n                )\n            )\n        else:\n            return dialect.type_descriptor(sa.TIMESTAMP(timezone=True))\n\n    def process_bind_param(self, value, dialect):\n        if value is None:\n            return None\n        else:\n            if value.tzinfo is None:\n                raise ValueError(\"Timestamps must have a timezone.\")\n            elif dialect.name == \"sqlite\":\n                return pendulum.instance(value).in_timezone(\"UTC\")\n            else:\n                return value\n\n    def process_result_value(self, value, dialect):\n        # retrieve timestamps in their native timezone (or UTC)\n        if value is not None:\n            return pendulum.instance(value).in_timezone(\"utc\")\n
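    Columns declared with Timestamp only accept timezone-aware datetimes; naive values raise a ValueError at bind time. A minimal, illustrative sketch (the Audit table is hypothetical):

    import pendulum
    import sqlalchemy as sa
    from sqlalchemy.orm import declarative_base
    from prefect.server.utilities.database import Timestamp

    Base = declarative_base()

    class Audit(Base):
        __tablename__ = "audit"
        id = sa.Column(sa.Integer, primary_key=True)
        occurred = sa.Column(Timestamp(), nullable=False)

    # always pass timezone-aware values, e.g. pendulum.now("UTC")
    row = Audit(id=1, occurred=pendulum.now("UTC"))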
    "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.UUID","title":"UUID","text":"

    Bases: TypeDecorator

    Platform-independent UUID type.

    Uses PostgreSQL's UUID type, otherwise uses CHAR(36), storing as stringified hex values with hyphens.

    Source code in prefect/server/utilities/database.py
    class UUID(TypeDecorator):\n    \"\"\"\n    Platform-independent UUID type.\n\n    Uses PostgreSQL's UUID type, otherwise uses\n    CHAR(36), storing as stringified hex values with\n    hyphens.\n    \"\"\"\n\n    impl = TypeEngine\n    cache_ok = True\n\n    def load_dialect_impl(self, dialect):\n        if dialect.name == \"postgresql\":\n            return dialect.type_descriptor(postgresql.UUID())\n        else:\n            return dialect.type_descriptor(CHAR(36))\n\n    def process_bind_param(self, value, dialect):\n        if value is None:\n            return None\n        elif dialect.name == \"postgresql\":\n            return str(value)\n        elif isinstance(value, uuid.UUID):\n            return str(value)\n        else:\n            return str(uuid.UUID(value))\n\n    def process_result_value(self, value, dialect):\n        if value is None:\n            return value\n        else:\n            if not isinstance(value, uuid.UUID):\n                value = uuid.UUID(value)\n            return value\n
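    The UUID type pairs naturally with GenerateUUID as a server-side default, which compiles to a dialect-appropriate expression. A minimal, illustrative sketch (the Token table is hypothetical):

    import sqlalchemy as sa
    from sqlalchemy.orm import declarative_base
    from prefect.server.utilities.database import UUID, GenerateUUID

    Base = declarative_base()

    class Token(Base):
        __tablename__ = "token"
        # native UUID on Postgres, CHAR(36) elsewhere
        id = sa.Column(UUID(), primary_key=True, server_default=GenerateUUID())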
    "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.date_add","title":"date_add","text":"

    Bases: FunctionElement

    Platform-independent way to add a date and an interval.

    Source code in prefect/server/utilities/database.py
    class date_add(FunctionElement):\n    \"\"\"\n    Platform-independent way to add a date and an interval.\n    \"\"\"\n\n    type = Timestamp()\n    name = \"date_add\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, dt, interval):\n        self.dt = dt\n        self.interval = interval\n        super().__init__()\n
    "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.date_diff","title":"date_diff","text":"

    Bases: FunctionElement

    Platform-independent difference of dates. Computes d1 - d2.

    Source code in prefect/server/utilities/database.py
    class date_diff(FunctionElement):\n    \"\"\"\n    Platform-independent difference of dates. Computes d1 - d2.\n    \"\"\"\n\n    type = sa.Interval()\n    name = \"date_diff\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, d1, d2):\n        self.d1 = d1\n        self.d2 = d2\n        super().__init__()\n
    "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.interval_add","title":"interval_add","text":"

    Bases: FunctionElement

    Platform-independent way to add two intervals.

    Source code in prefect/server/utilities/database.py
    class interval_add(FunctionElement):\n    \"\"\"\n    Platform-independent way to add two intervals.\n    \"\"\"\n\n    type = sa.Interval()\n    name = \"interval_add\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, i1, i2):\n        self.i1 = i1\n        self.i2 = i2\n        super().__init__()\n
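    These date helpers are used as ordinary SQL expressions, so the same query text works on both SQLite and Postgres. A minimal, illustrative sketch using a hypothetical lightweight table construct:

    import datetime
    import sqlalchemy as sa
    from prefect.server.utilities.database import Timestamp, date_add, date_diff

    runs = sa.table(
        "flow_run",
        sa.column("created", Timestamp()),
        sa.column("updated", Timestamp()),
    )

    # add an interval to a timestamp column in a dialect-independent way
    one_hour_later = sa.select(date_add(runs.c.created, datetime.timedelta(hours=1)))

    # compute the difference between two timestamp columns as an interval
    elapsed = sa.select(date_diff(runs.c.updated, runs.c.created))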
    "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.json_contains","title":"json_contains","text":"

    Bases: FunctionElement

    Platform-independent json_contains operator that tests whether the left expression contains the right expression.

    On postgres this is equivalent to the @> containment operator. https://www.postgresql.org/docs/current/functions-json.html

    Source code in prefect/server/utilities/database.py
    class json_contains(FunctionElement):\n    \"\"\"\n    Platform independent json_contains operator, tests if the\n    `left` expression contains the `right` expression.\n\n    On postgres this is equivalent to the @> containment operator.\n    https://www.postgresql.org/docs/current/functions-json.html\n    \"\"\"\n\n    type = BOOLEAN\n    name = \"json_contains\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, left, right):\n        self.left = left\n        self.right = right\n        super().__init__()\n
    "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.json_extract","title":"json_extract","text":"

    Bases: FunctionElement

    Platform-independent json_extract operator that extracts a value from a JSON field via key.

    On postgres this is equivalent to the ->> operator. https://www.postgresql.org/docs/current/functions-json.html

    Source code in prefect/server/utilities/database.py
    class json_extract(FunctionElement):\n    \"\"\"\n    Platform independent json_extract operator, extracts a value from a JSON\n    field via key.\n\n    On postgres this is equivalent to the ->> operator.\n    https://www.postgresql.org/docs/current/functions-json.html\n    \"\"\"\n\n    type = sa.Text()\n    name = \"json_extract\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, column: sa.Column, path: str, wrap_quotes: bool = False):\n        self.column = column\n        self.path = path\n        self.wrap_quotes = wrap_quotes\n        super().__init__()\n
    "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.json_has_all_keys","title":"json_has_all_keys","text":"

    Bases: FunctionElement

    Platform independent json_has_all_keys operator.

    On postgres this is equivalent to the ?& existence operator. https://www.postgresql.org/docs/current/functions-json.html

    Source code in prefect/server/utilities/database.py
    class json_has_all_keys(FunctionElement):\n    \"\"\"Platform independent json_has_all_keys operator.\n\n    On postgres this is equivalent to the ?& existence operator.\n    https://www.postgresql.org/docs/current/functions-json.html\n    \"\"\"\n\n    type = BOOLEAN\n    name = \"json_has_all_keys\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, json_expr, values: List):\n        self.json_expr = json_expr\n        if isinstance(values, list) and not all(isinstance(v, str) for v in values):\n            raise ValueError(\n                \"json_has_all_key values must be strings if provided as a literal list\"\n            )\n        self.values = values\n        super().__init__()\n
    "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.json_has_any_key","title":"json_has_any_key","text":"

    Bases: FunctionElement

    Platform independent json_has_any_key operator.

    On postgres this is equivalent to the ?| existence operator. https://www.postgresql.org/docs/current/functions-json.html

    Source code in prefect/server/utilities/database.py
    class json_has_any_key(FunctionElement):\n    \"\"\"\n    Platform independent json_has_any_key operator.\n\n    On postgres this is equivalent to the ?| existence operator.\n    https://www.postgresql.org/docs/current/functions-json.html\n    \"\"\"\n\n    type = BOOLEAN\n    name = \"json_has_any_key\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, json_expr, values: List):\n        self.json_expr = json_expr\n        if not all(isinstance(v, str) for v in values):\n            raise ValueError(\"json_has_any_key values must be strings\")\n        self.values = values\n        super().__init__()\n
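    The JSON operators are likewise used directly in query expressions. A minimal, illustrative sketch (the table and tag values are hypothetical):

    import sqlalchemy as sa
    from prefect.server.utilities.database import JSON, json_contains, json_has_any_key

    runs = sa.table("flow_run", sa.column("tags", JSON()))

    # rows whose tags contain the given values (compiled to @> on Postgres)
    contains_stmt = sa.select(runs.c.tags).where(json_contains(runs.c.tags, ["production"]))

    # rows whose tags include any of the listed keys (compiled to ?| on Postgres)
    any_key_stmt = sa.select(runs.c.tags).where(json_has_any_key(runs.c.tags, ["prod", "staging"]))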
    "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.now","title":"now","text":"

    Bases: FunctionElement

    Platform-independent \"now\" generator.

    Source code in prefect/server/utilities/database.py
    class now(FunctionElement):\n    \"\"\"\n    Platform-independent \"now\" generator.\n    \"\"\"\n\n    type = Timestamp()\n    name = \"now\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = True\n
    "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.get_dialect","title":"get_dialect","text":"

    Get the dialect of a session, engine, or connection url.

    Primary use case is figuring out whether the Prefect REST API is communicating with SQLite or Postgres.

    Example
    import prefect.settings\nfrom prefect.server.utilities.database import get_dialect\n\ndialect = get_dialect(PREFECT_API_DATABASE_CONNECTION_URL.value())\nif dialect.name == \"sqlite\":\n    print(\"Using SQLite!\")\nelse:\n    print(\"Using Postgres!\")\n
    Source code in prefect/server/utilities/database.py
    def get_dialect(\n    obj: Union[str, sa.orm.Session, sa.engine.Engine],\n) -> sa.engine.Dialect:\n    \"\"\"\n    Get the dialect of a session, engine, or connection url.\n\n    Primary use case is figuring out whether the Prefect REST API is communicating with\n    SQLite or Postgres.\n\n    Example:\n        ```python\n        import prefect.settings\n        from prefect.server.utilities.database import get_dialect\n\n        dialect = get_dialect(PREFECT_API_DATABASE_CONNECTION_URL.value())\n        if dialect.name == \"sqlite\":\n            print(\"Using SQLite!\")\n        else:\n            print(\"Using Postgres!\")\n        ```\n    \"\"\"\n    if isinstance(obj, sa.orm.Session):\n        url = obj.bind.url\n    elif isinstance(obj, sa.engine.Engine):\n        url = obj.url\n    else:\n        url = sa.engine.url.make_url(obj)\n\n    return url.get_dialect()\n
    "},{"location":"api-ref/server/utilities/schemas/","title":"server.utilities.schemas","text":""},{"location":"api-ref/server/utilities/schemas/#prefect.server.utilities.schemas","title":"prefect.server.utilities.schemas","text":""},{"location":"api-ref/server/utilities/server/","title":"server.utilities.server","text":""},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server","title":"prefect.server.utilities.server","text":"

    Utilities for the Prefect REST API server.

    "},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.PrefectAPIRoute","title":"PrefectAPIRoute","text":"

    Bases: APIRoute

    An APIRoute subclass that attaches an async stack to requests; the stack exits before a response is returned.

    Requests already have request.scope['fastapi_astack'] which is an async stack for the full scope of the request. This stack is used for managing contexts of FastAPI dependencies. If we want to close a dependency before the request is complete (i.e. before returning a response to the user), we need a stack with a different scope. This extension adds this stack at request.state.response_scoped_stack.

    Source code in prefect/server/utilities/server.py
    class PrefectAPIRoute(APIRoute):\n    \"\"\"\n    A FastAPIRoute class which attaches an async stack to requests that exits before\n    a response is returned.\n\n    Requests already have `request.scope['fastapi_astack']` which is an async stack for\n    the full scope of the request. This stack is used for managing contexts of FastAPI\n    dependencies. If we want to close a dependency before the request is complete\n    (i.e. before returning a response to the user), we need a stack with a different\n    scope. This extension adds this stack at `request.state.response_scoped_stack`.\n    \"\"\"\n\n    def get_route_handler(self) -> Callable[[Request], Coroutine[Any, Any, Response]]:\n        default_handler = super().get_route_handler()\n\n        async def handle_response_scoped_depends(request: Request) -> Response:\n            # Create a new stack scoped to exit before the response is returned\n            async with AsyncExitStack() as stack:\n                request.state.response_scoped_stack = stack\n                response = await default_handler(request)\n\n            return response\n\n        return handle_response_scoped_depends\n
    "},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.PrefectRouter","title":"PrefectRouter","text":"

    Bases: APIRouter

    A base class for Prefect REST API routers.

    Source code in prefect/server/utilities/server.py
    class PrefectRouter(APIRouter):\n    \"\"\"\n    A base class for Prefect REST API routers.\n    \"\"\"\n\n    def __init__(self, **kwargs: Any) -> None:\n        kwargs.setdefault(\"route_class\", PrefectAPIRoute)\n        super().__init__(**kwargs)\n\n    def add_api_route(\n        self, path: str, endpoint: Callable[..., Any], **kwargs: Any\n    ) -> None:\n        \"\"\"\n        Add an API route.\n\n        For routes that return content and have not specified a `response_model`,\n        use return type annotation to infer the response model.\n\n        For routes that return No-Content status codes, explicitly set\n        a `response_class` to ensure nothing is returned in the response body.\n        \"\"\"\n        if kwargs.get(\"status_code\") == status.HTTP_204_NO_CONTENT:\n            # any routes that return No-Content status codes must\n            # explicitly set a response_class that will handle status codes\n            # and not return anything in the body\n            kwargs[\"response_class\"] = Response\n        if kwargs.get(\"response_model\") is None:\n            kwargs[\"response_model\"] = get_type_hints(endpoint).get(\"return\")\n        return super().add_api_route(path, endpoint, **kwargs)\n
    "},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.PrefectRouter.add_api_route","title":"add_api_route","text":"

    Add an API route.

    For routes that return content and have not specified a response_model, use return type annotation to infer the response model.

    For routes that return No-Content status codes, explicitly set a response_class to ensure nothing is returned in the response body.

    Source code in prefect/server/utilities/server.py
    def add_api_route(\n    self, path: str, endpoint: Callable[..., Any], **kwargs: Any\n) -> None:\n    \"\"\"\n    Add an API route.\n\n    For routes that return content and have not specified a `response_model`,\n    use return type annotation to infer the response model.\n\n    For routes that return No-Content status codes, explicitly set\n    a `response_class` to ensure nothing is returned in the response body.\n    \"\"\"\n    if kwargs.get(\"status_code\") == status.HTTP_204_NO_CONTENT:\n        # any routes that return No-Content status codes must\n        # explicitly set a response_class that will handle status codes\n        # and not return anything in the body\n        kwargs[\"response_class\"] = Response\n    if kwargs.get(\"response_model\") is None:\n        kwargs[\"response_model\"] = get_type_hints(endpoint).get(\"return\")\n    return super().add_api_route(path, endpoint, **kwargs)\n
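    In practice this means route handlers registered on a PrefectRouter can rely on their return annotation as the response model. A minimal, illustrative sketch (the /examples prefix and ping endpoint are hypothetical):

    from prefect.server.utilities.server import PrefectRouter

    router = PrefectRouter(prefix="/examples", tags=["Examples"])

    @router.get("/ping")
    async def ping() -> dict:
        # no response_model is given, so the dict return annotation is used
        return {"status": "ok"}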
    "},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.method_paths_from_routes","title":"method_paths_from_routes","text":"

    Generate a set of strings describing the given routes in the format: <method> <path>

    For example, \"GET /logs/\"

    Source code in prefect/server/utilities/server.py
    def method_paths_from_routes(routes: Sequence[BaseRoute]) -> Set[str]:\n    \"\"\"\n    Generate a set of strings describing the given routes in the format: <method> <path>\n\n    For example, \"GET /logs/\"\n    \"\"\"\n    method_paths = set()\n    for route in routes:\n        if isinstance(route, (APIRoute, StarletteRoute)):\n            for method in route.methods:\n                method_paths.add(f\"{method} {route.path}\")\n\n    return method_paths\n
    "},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.response_scoped_dependency","title":"response_scoped_dependency","text":"

    Ensure that this dependency closes before the response is returned to the client. By default, FastAPI closes dependencies after sending the response.

    Uses an async stack that is exited before the response is returned. This is particularly useful for database sessions which must be committed before the client can do more work.

    Do not use a response-scoped dependency within a FastAPI background task.

    Background tasks run after FastAPI sends the response, so a response-scoped dependency will already be closed. Use a normal FastAPI dependency instead.

Parameters:

• dependency (Callable, required): An async callable. FastAPI dependencies may still be used.

Returns:

• A wrapped dependency which will push the dependency context manager onto a stack when called.

    Source code in prefect/server/utilities/server.py
    def response_scoped_dependency(dependency: Callable):\n    \"\"\"\n    Ensure that this dependency closes before the response is returned to the client. By\n    default, FastAPI closes dependencies after sending the response.\n\n    Uses an async stack that is exited before the response is returned. This is\n    particularly useful for database sessions which must be committed before the client\n    can do more work.\n\n    NOTE: Do not use a response-scoped dependency within a FastAPI background task.\n          Background tasks run after FastAPI sends the response, so a response-scoped\n          dependency will already be closed. Use a normal FastAPI dependency instead.\n\n    Args:\n        dependency: An async callable. FastAPI dependencies may still be used.\n\n    Returns:\n        A wrapped `dependency` which will push the `dependency` context manager onto\n        a stack when called.\n    \"\"\"\n    signature = inspect.signature(dependency)\n\n    async def wrapper(*args, request: Request, **kwargs):\n        # Replicate FastAPI behavior of auto-creating a context manager\n        if inspect.isasyncgenfunction(dependency):\n            context_manager = asynccontextmanager(dependency)\n        else:\n            context_manager = dependency\n\n        # Ensure request is provided if requested\n        if \"request\" in signature.parameters:\n            kwargs[\"request\"] = request\n\n        # Enter the route handler provided stack that is closed before responding,\n        # return the value yielded by the wrapped dependency\n        return await request.state.response_scoped_stack.enter_async_context(\n            context_manager(*args, **kwargs)\n        )\n\n    # Ensure that the signature includes `request: Request` to ensure that FastAPI will\n    # inject the request as a dependency; maintain the old signature so those depends\n    # work\n    request_parameter = inspect.signature(wrapper).parameters[\"request\"]\n    functools.update_wrapper(wrapper, dependency)\n\n    if \"request\" not in signature.parameters:\n        new_parameters = signature.parameters.copy()\n        new_parameters[\"request\"] = request_parameter\n        wrapper.__signature__ = signature.replace(\n            parameters=tuple(new_parameters.values())\n        )\n\n    return wrapper\n
    "},{"location":"cloud/","title":"Welcome to Prefect Cloud","text":"

    Prefect Cloud is a hosted workflow application framework that provides all the capabilities of Prefect server plus additional features, such as:

    • automations, events, and webhooks so you can create event-driven workflows
    • workspaces, RBAC, SSO, audit logs and related user management tools for collaboration
    • push work pools for running flows on serverless infrastructure without a worker
    • error summaries powered by Marvin AI to help you resolve errors faster

    Getting Started with Prefect Cloud

    Ready to jump right in and start running with Prefect Cloud? See the Quickstart and follow the instructions on the Cloud tabs to write and deploy your first Prefect Cloud-monitored flow run.

    Prefect Cloud includes all the features in the open-source Prefect server plus the following:

    Prefect Cloud features

    • User accounts \u2014 personal accounts for working in Prefect Cloud.
    • Workspaces \u2014 isolated environments to organize your flows, deployments, and flow runs.
    • Automations \u2014 configure triggers, actions, and notifications in response to real-time monitoring events.
    • Email notifications \u2014 send email alerts from Prefect's server based on automation triggers.
    • Service accounts \u2014 configure API access for running workers or executing flow runs on remote infrastructure.
    • Custom role-based access controls (RBAC) \u2014 assign users granular permissions to perform certain activities within an account or a workspace.
    • Single Sign-on (SSO) \u2014 authentication using your identity provider.
    • Audit Log \u2014 a record of user activities to monitor security and compliance.
    • Collaboration \u2014 invite other people to your account.
    • Error summaries \u2014 (enabled by Marvin AI) distill the error logs of Failed and Crashed flow runs into actionable information.
    • Push work pools \u2014 run flows on your serverless infrastructure without running a worker.
    ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#user-accounts","title":"User accounts","text":"

    When you sign up for Prefect Cloud, an account and a user profile are automatically provisioned for you.

    Your profile is the place where you'll manage settings related to yourself as a user, including:

    • Profile, including profile handle and image
    • API keys
    • Preferences, including timezone and color mode

    As an account Admin, you will also have access to account settings from the Account Settings page, such as:

    • Members
    • Workspaces
    • Roles

    As an account Admin you can create a workspace and invite other individuals to your workspace.

    Upgrading from a Prefect Cloud Free tier plan to a Pro or Custom tier plan enables additional functionality for adding workspaces, managing teams, and running higher volume workloads.

    Workspace Admins for Pro tier plans have the ability to set role-based access controls (RBAC), view Audit Logs, and configure service accounts.

Custom plans have object-level access control lists, custom roles, teams, and [single sign-on (SSO)](#single-sign-on-sso) with Directory Sync/SCIM provisioning.

    Prefect Cloud plans for teams of every size

    See the Prefect Cloud plans for details on Pro and Custom account tiers.

    ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#workspaces","title":"Workspaces","text":"

    A workspace is an isolated environment within Prefect Cloud for your flows, deployments, and block configuration. See the Workspaces documentation for more information about configuring and using workspaces.

    Each workspace keeps track of its own:

    • Flow runs and task runs executed in an environment that is syncing with the workspace
    • Flows associated with flow runs and deployments observed by the Prefect Cloud API
    • Deployments
    • Work pools
    • Blocks and storage
    • Events
    • Automations
    • Incidents

    ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#events","title":"Events","text":"

Prefect Cloud lets you view the events in your workspace. Events provide information about the state of your workflows and can be used as automation triggers.

    ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#automations","title":"Automations","text":"

    Prefect Cloud automations provide additional notification capabilities beyond those in a self-hosted open-source Prefect server. Automations also enable you to create event-driven workflows, toggle resources such as schedules and work pools, and declare incidents.

    ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#incidents","title":"Incidents","text":"

Prefect Cloud's incidents help teams identify, rectify, and document issues in mission-critical workflows. Incidents are formal declarations of disruptions to a workspace. With automations, activity in that workspace can be paused when an incident is created and resumed when it is resolved.

    ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#error-summaries","title":"Error summaries","text":"

    Prefect Cloud error summaries, enabled by Marvin AI, distill the error logs of Failed and Crashed flow runs into actionable information. To enable this feature and others powered by Marvin AI, visit the Settings page for your account.

    ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#service-accounts","title":"Service accounts","text":"

    Service accounts enable you to create Prefect Cloud API keys that are not associated with a user account. Service accounts are typically used to configure API access for running workers or executing flow runs on remote infrastructure. See the service accounts documentation for more information about creating and managing service accounts.

    ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#roles-and-custom-permissions","title":"Roles and custom permissions","text":"

    Role-based access controls (RBAC) enable you to assign users a role with permissions to perform certain activities within an account or a workspace. See the role-based access controls (RBAC) documentation for more information about managing user roles in a Prefect Cloud account.

    ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#single-sign-on-sso","title":"Single Sign-on (SSO)","text":"

Prefect Cloud's Pro and Custom plans offer single sign-on (SSO) authentication integration with your team\u2019s identity provider. SSO integration can be set up with identity providers that support OIDC and SAML. Directory Sync and SCIM provisioning are also available with Custom plans.

    ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#audit-log","title":"Audit log","text":"

    Prefect Cloud's Pro and Custom plans offer Audit Logs for compliance and security. Audit logs provide a chronological record of activities performed by users in an account.

    ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#prefect-cloud-rest-api","title":"Prefect Cloud REST API","text":"

    The Prefect REST API is used for communicating data from Prefect clients to Prefect Cloud or a local Prefect server for orchestration and monitoring. This API is mainly consumed by Prefect clients like the Prefect Python Client or the Prefect UI.

    Prefect Cloud REST API interactive documentation

    Prefect Cloud REST API documentation is available at https://app.prefect.cloud/api/docs.

    ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#start-using-prefect-cloud","title":"Start using Prefect Cloud","text":"

    To create an account or sign in with an existing Prefect Cloud account, go to https://app.prefect.cloud/.

    Then follow the steps in the UI to deploy your first Prefect Cloud-monitored flow run. For more details, see the Prefect Quickstart and follow the instructions on the Cloud tabs.

    Need help?

    Get your questions answered by a Prefect Product Advocate! Book a Meeting

    ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/connecting/","title":"Connecting & Troubleshooting Prefect Cloud","text":"

    To create flow runs in a local or remote execution environment and use either Prefect Cloud or a Prefect server as the backend API server, you need to

    • Configure the execution environment with the location of the API.
    • Authenticate with the API, either by logging in or providing a valid API key (Prefect Cloud only).
    ","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#log-into-prefect-cloud-from-a-terminal","title":"Log into Prefect Cloud from a terminal","text":"

    Configure a local execution environment to use Prefect Cloud as the API server for flow runs. In other words, \"log in\" to Prefect Cloud from a local environment where you want to run a flow.

    1. Open a new terminal session.
    2. Install Prefect in the environment in which you want to execute flow runs.
    $ pip install -U prefect\n
3. Use the prefect cloud login Prefect CLI command to log into Prefect Cloud from your environment.
    $ prefect cloud login\n

The prefect cloud login command, used on its own, provides an interactive login experience. Using this command, you can log in either with an API key or through a browser.

    $ prefect cloud login\n? How would you like to authenticate? [Use arrows to move; enter to select]\n> Log in with a web browser\n    Paste an API key\nPaste your authentication key:\n? Which workspace would you like to use? [Use arrows to move; enter to select]\n> prefect/terry-prefect-workspace\n    g-gadflow/g-workspace\nAuthenticated with Prefect Cloud! Using workspace 'prefect/terry-prefect-workspace'.\n

    You can also log in by providing a Prefect Cloud API key that you create.

    ","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#change-workspaces","title":"Change workspaces","text":"

    If you need to change which workspace you're syncing with, use the prefect cloud workspace set Prefect CLI command while logged in, passing the account handle and workspace name.

    $ prefect cloud workspace set --workspace \"prefect/my-workspace\"\n

    If no workspace is provided, you will be prompted to select one.

    Workspace Settings also shows you the prefect cloud workspace set Prefect CLI command you can use to sync a local execution environment with a given workspace.

    You may also use the prefect cloud login command with the --workspace or -w option to set the current workspace.

    $ prefect cloud login --workspace \"prefect/my-workspace\"\n
    ","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#manually-configure-prefect-api-settings","title":"Manually configure Prefect API settings","text":"

    You can also manually configure the PREFECT_API_URL setting to specify the Prefect Cloud API.

For Prefect Cloud, configure the PREFECT_API_URL and PREFECT_API_KEY settings to authenticate using an account ID, workspace ID, and API key.

    $ prefect config set PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"\n$ prefect config set PREFECT_API_KEY=\"[API-KEY]\"\n

    When you're in a Prefect Cloud workspace, you can copy the PREFECT_API_URL value directly from the page URL.

    In this example, we configured PREFECT_API_URL and PREFECT_API_KEY in the default profile. You can use prefect profile CLI commands to create settings profiles for different configurations. For example, you could have a \"cloud\" profile configured to use the Prefect Cloud API URL and API key, and another \"local\" profile for local development using a local Prefect API server started with prefect server start. See Settings for details.

    Environment variables

    You can also set PREFECT_API_URL and PREFECT_API_KEY as you would any other environment variable. See Overriding defaults with environment variables for more information.
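For example, a minimal sketch of setting these values from Python for the current process, assuming they are set before the Prefect client reads its settings (the account ID, workspace ID, and API key placeholders are illustrative):

import os\n\n# Point the Prefect client at a Prefect Cloud workspace for this process.\n# Replace the bracketed placeholders with your own values.\nos.environ[\"PREFECT_API_URL\"] = (\n    \"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"\n)\nos.environ[\"PREFECT_API_KEY\"] = \"[API-KEY]\"\n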

    See the Flow orchestration with Prefect tutorial for examples.

    ","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#install-requirements-in-execution-environments","title":"Install requirements in execution environments","text":"

    In local and remote execution environments \u2014 such as VMs and containers \u2014 you must make sure any flow requirements or dependencies have been installed before creating a flow run.

    ","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#troubleshooting-prefect-cloud","title":"Troubleshooting Prefect Cloud","text":"

    This section provides tips that may be helpful if you run into problems using Prefect Cloud.

    ","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#prefect-cloud-and-proxies","title":"Prefect Cloud and proxies","text":"

    Proxies intermediate network requests between a server and a client.

    To communicate with Prefect Cloud, the Prefect client library makes HTTPS requests. These requests are made using the httpx Python library. httpx respects accepted proxy environment variables, so the Prefect client is able to communicate through proxies.

To enable communication through proxies, set the HTTPS_PROXY and SSL_CERT_FILE environment variables as appropriate in your execution environment.

    See the Using Prefect Cloud with proxies topic in Prefect Discourse for examples of proxy configuration.

URLs that should be whitelisted for outbound communication in a secure environment include the UI, the API, Authentication, and the current OCSP server:

    • app.prefect.cloud
    • api.prefect.cloud
    • auth.workos.com
    • api.github.com
    • github.com
    • ocsp.pki.goog/s/gts1d4/OxYEb8XcYmo
    ","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#prefect-cloud-access-via-api","title":"Prefect Cloud access via API","text":"

    If the Prefect Cloud API key, environment variable settings, or account login for your execution environment are not configured correctly, you may experience errors or unexpected flow run results when using Prefect CLI commands, running flows, or observing flow run results in Prefect Cloud.

    Use the prefect config view CLI command to make sure your execution environment is correctly configured to access Prefect Cloud.

    $ prefect config view\nPREFECT_PROFILE='cloud'\nPREFECT_API_KEY='pnu_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' (from profile)\nPREFECT_API_URL='https://api.prefect.cloud/api/accounts/...' (from profile)\n

    Make sure PREFECT_API_URL is configured to use https://api.prefect.cloud/api/....

    Make sure PREFECT_API_KEY is configured to use a valid API key.

    You can use the prefect cloud workspace ls CLI command to view or set the active workspace.

    $ prefect cloud workspace ls\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503   Available Workspaces: \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502   g-gadflow/g-workspace \u2502\n\u2502    * prefect/workinonit \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n    * active workspace\n

    You can also check that the account and workspace IDs specified in the URL for PREFECT_API_URL match those shown in the URL bar for your Prefect Cloud workspace.

    ","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#prefect-cloud-login-errors","title":"Prefect Cloud login errors","text":"

If you're having difficulty logging in to Prefect Cloud, the following troubleshooting steps may resolve the issue or provide more information to share with the support channel.

    • Are you logging into Prefect Cloud 2? Prefect Cloud 1 and Prefect Cloud 2 use separate accounts. Make sure to use the right Prefect Cloud 2 URL: https://app.prefect.cloud/
    • Do you already have a Prefect Cloud account? If you\u2019re having difficulty accepting an invitation, try creating an account first using the email associated with the invitation, then accept the invitation.
    • Are you using a single sign-on (SSO) provider, social authentication (Google, Microsoft, or GitHub) or just using an emailed link?

    Other tips to help with login difficulties:

    • Hard refresh your browser with Cmd+Shift+R.
    • Try in a different browser. We actively test against the following browsers:
    • Chrome
    • Edge
    • Firefox
    • Safari
    • Clear recent browser history/cookies

    None of this worked?

    Email us at help@prefect.io and provide answers to the questions above in your email to make it faster to troubleshoot and unblock you. Make sure you add the email address with which you were trying to log in, your Prefect Cloud account name, and, if applicable, the organization to which it belongs.

    ","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/events/","title":"Events","text":"

    An event is a notification of a change. Together, events form a feed of activity recording what's happening across your stack.

    Events power several features in Prefect Cloud, including flow run logs, audit logs, and automations.

    Events can represent API calls, state transitions, or changes in your execution environment or infrastructure.

    Events enable observability into your data stack via the event feed, and the configuration of Prefect's reactivity via automations.

    ","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#event-specification","title":"Event specification","text":"

    Events adhere to a structured specification.
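For illustration, a single event following this specification might look like the sketch below; every name and value shown is hypothetical, and the fields themselves are described next.

example_event = {\n    \"occurred\": \"2024-05-03T17:56:14.000Z\",\n    \"event\": \"acme.etl.completed\",\n    \"resource\": {\"prefect.resource.id\": \"acme.etl.daily-load\"},\n    \"related\": [],\n    \"payload\": {\"rows_loaded\": 1000},\n    \"id\": \"6f1d38a0-0000-4000-8000-000000000000\",\n}\n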

\u2022 occurred (String, required): when the event happened
\u2022 event (String, required): the name of the event that happened
\u2022 resource (Object, required): the primary Resource this event concerns
\u2022 related (Array, optional): a list of additional Resources involved in this event
\u2022 payload (Object, optional): an open-ended set of data describing what happened
\u2022 id (String, required): the client-provided identifier of this event
\u2022 follows (String, optional): the ID of an event that is known to have occurred prior to this one.","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#event-grammar","title":"Event grammar","text":"

    Generally, events have a consistent and informative grammar - an event describes a resource and an action that the resource took or that was taken on that resource. For example, events emitted by Prefect objects take the form of:

    prefect.block.write-method.called\nprefect-cloud.automation.action.executed\nprefect-cloud.user.logged-in\n
    ","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#event-sources","title":"Event sources","text":"

Events are automatically emitted by all Prefect objects, including flows, tasks, deployments, work queues, and logs. Prefect-emitted events will contain the prefect or prefect-cloud resource prefix. Events can also be sent to the Prefect events API through an authenticated HTTP request.

    ","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#emit-custom-events-from-python-code","title":"Emit custom events from Python code","text":"

The Prefect Python SDK provides an emit_event function that emits a Prefect event when called. The function can be called inside or outside of a task or flow. Running the following code will emit an event to Prefect Cloud, which will validate and ingest the event data.

    from prefect.events import emit_event\n\ndef some_function(name: str=\"kiki\") -> None:\n    print(f\"hi {name}!\")\n    emit_event(event=f\"{name}.sent.event!\", resource={\"prefect.resource.id\": f\"coder.{name}\"})\n\nsome_function()\n

    Note that the emit_event arguments shown above are required: event represents the name of the event and resource={\"prefect.resource.id\": \"my_string\"} is the resource id. To get data into an event for use in an automation action, you can specify a dictionary of values for the payload parameter.
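For example, a minimal sketch that extends the function above to attach a payload (the payload keys here are hypothetical):

from prefect.events import emit_event\n\n# Emit an event that carries extra data for downstream automation actions.\nemit_event(\n    event=\"kiki.sent.event!\",\n    resource={\"prefect.resource.id\": \"coder.kiki\"},\n    payload={\"greeting\": \"hi kiki!\", \"attempt\": 1},\n)\n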

    ","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#emit-events-via-webhooks","title":"Emit events via webhooks","text":"

    Prefect Cloud offers programmable webhooks to receive HTTP requests from other systems and translate them into events within your workspace. Webhooks can emit pre-defined static events, dynamic events that use portions of the incoming HTTP request, or events derived from CloudEvents.

    Events emitted from any source will appear in the event feed, where you can visualize activity in context and configure automations to react to the presence or absence of it in the future.

    ","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#resources","title":"Resources","text":"

    Every event has a primary resource, which describes the object that emitted an event. Resources are used as quasi-stable identifiers for sources of events, and are constructed as dot-delimited strings, for example:

    prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3.action.0\nacme.user.kiki.elt_script_1\nprefect.flow-run.e3755d32-cec5-42ca-9bcd-af236e308ba6\n

    Resources can optionally have additional arbitrary labels which can be used in event aggregation queries, such as:

    \"resource\": {\n    \"prefect.resource.id\": \"prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3\",\n    \"prefect-cloud.action.type\": \"call-webhook\"\n    }\n

    Events can optionally contain related resources, used to associate the event with other resources, such as in the case that the primary resource acted on or with another resource:

    \"resource\": {\n    \"prefect.resource.id\": \"prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3.action.0\",\n    \"prefect-cloud.action.type\": \"call-webhook\"\n  },\n\"related\": [\n  {\n      \"prefect.resource.id\": \"prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3\",\n      \"prefect.resource.role\": \"automation\",\n      \"prefect-cloud.name\": \"webhook_body_demo\",\n      \"prefect-cloud.posture\": \"Reactive\"\n  }\n]\n
    ","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#events-in-the-cloud-ui","title":"Events in the Cloud UI","text":"

    Prefect Cloud provides an interactive dashboard to analyze and take action on events that occurred in your workspace on the event feed page.

    The event feed is the primary place to view, search, and filter events to understand activity across your stack. Each entry displays data on the resource, related resource, and event that took place.

    You can view more information about an event by clicking into it, where you can view the full details of an event's resource, related resources, and its payload.

    ","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#reacting-to-events","title":"Reacting to events","text":"

    From an event page, you can configure an automation to trigger on the observation of matching events or a lack of matching events by clicking the automate button in the overflow menu:

    The default trigger configuration will fire every time it sees an event with a matching resource identifier. Advanced configuration is possible via custom triggers.

    ","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/incidents/","title":"Incidents","text":"","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#overview","title":"Overview","text":"

    Incidents are a Prefect Cloud feature to help your team manage workflow disruptions. Incidents help you identify, resolve, and document issues with mission-critical workflows. This system enhances operational efficiency by automating the incident management process and providing a centralized platform for collaboration and compliance.

    ","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#what-are-incidents","title":"What are incidents?","text":"

    Incidents are formal declarations of disruptions to a workspace. With automations, activity in a workspace can be paused when an incident is created and resumed when it is resolved.

    Incidents vary in nature and severity, ranging from minor glitches to critical system failures. Prefect Cloud enables users to effectively and automatically track and manage these incidents, ensuring minimal impact on operational continuity.

    ","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#why-use-incident-management","title":"Why use incident management?","text":"
    1. Automated detection and reporting: Incidents can be automatically identified based on specific triggers or manually reported by team members, facilitating prompt response.

    2. Collaborative problem-solving: The platform fosters collaboration, allowing team members to share insights, discuss resolutions, and track contributions.

    3. Comprehensive impact assessment: Users gain insights into the incident's influence on workflows, helping in prioritizing response efforts.

    4. Compliance with incident management processes: Detailed documentation and reporting features support compliance with incident management systems.

    5. Enhanced operational transparency: The system provides a transparent view of both ongoing and resolved incidents, promoting accountability and continuous improvement.

    ","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#how-to-use-incident-management-in-prefect-cloud","title":"How to use incident management in Prefect Cloud","text":"","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#creating-an-incident","title":"Creating an incident","text":"

    There are several ways to create an incident:

    1. From the Incidents page:

      • Click on the + button.
      • Fill in required fields and attach any Prefect resources related to your incident.
    2. From a flow run, work pool, or block:

      • Initiate an incident directly from a failed flow run, automatically linking it as a resource, by clicking on the menu button and selecting \"Declare an incident\".
    3. Via an automation:

      • Set up incident creation as an automated response to selected triggers.
    ","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#incident-automations","title":"Incident automations","text":"

    Automations can be used for triggering an incident and for selecting actions to take when an incident is triggered. For example, a work pool status change could trigger the declaration of an incident, or a critical level incident could trigger a notification action.

    To automatically take action when an incident is declared, set up a custom trigger that listens for declaration events.

    {\n  \"match\": {\n    \"prefect.resource.id\": \"prefect-cloud.incident.*\"\n  },\n  \"expect\": [\n    \"prefect-cloud.incident.declared\"\n  ],\n  \"posture\": \"Reactive\",\n  \"threshold\": 1,\n  \"within\": 0\n}\n

    Building custom triggers

    To get started with incident automations, you only need to specify two fields in your trigger:

    • match: The resource emitting your event of interest. You can match on specific resource IDs, use wildcards to match on all resources of a given type, and even match on other resource attributes, like prefect.resource.name.

    • expect: The event type to listen for. For example, you could listen for any (or all) of the following event types:

      • prefect-cloud.incident.declared
      • prefect-cloud.incident.resolved
      • prefect-cloud.incident.updated.severity

    See Event Triggers for more information on custom triggers, and check out your Event Feed to see the event types emitted by your incidents and other resources (i.e. events that you can react to).

When an incident is declared, any actions you configure, such as pausing work pools or sending notifications, will execute immediately.

    ","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#managing-an-incident","title":"Managing an incident","text":"
    • Monitor active incidents: View real-time status, severity, and impact.
    • Adjust incident details: Update status, severity, and other relevant information.
    • Collaborate: Add comments and insights; these will display with user identifiers and timestamps.
    • Impact assessment: Evaluate how the incident affects ongoing and future workflows.
    ","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#resolving-and-documenting-incidents","title":"Resolving and documenting incidents","text":"
    • Resolution: Update the incident status to reflect resolution steps taken.
    • Documentation: Ensure all actions, comments, and changes are logged for future reference.
    ","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#incident-reporting","title":"Incident reporting","text":"
• Generate a detailed timeline of the incident, including actions taken and updates to severity and resolution, suitable for compliance and retrospective analysis.
    ","tags":["incidents"],"boost":2},{"location":"cloud/rate-limits/","title":"API Rate Limits & Retention Periods","text":"

    API rate limits restrict the number of requests that a single client can make in a given time period. They ensure Prefect Cloud's stability, so that when you make an API call, you always get a response.

    Prefect Cloud rate limits are subject to change

    The following rate limits are in effect currently, but are subject to change. Contact Prefect support at help@prefect.io if you have questions about current rate limits.

    Prefect Cloud enforces the following rate limits:

    • Flow and task creation rate limits
    • Log service rate limits
    ","tags":["API","Prefect Cloud","rate limits"],"boost":2},{"location":"cloud/rate-limits/#flow-flow-run-and-task-run-rate-limits","title":"Flow, flow run, and task run rate limits","text":"

    Prefect Cloud limits the flow_runs, task_runs, and flows endpoints and their subroutes at the following levels:

    • 400 per minute for personal accounts
    • 2,000 per minute for Pro accounts

    The Prefect Cloud API will return a 429 response with an appropriate Retry-After header if these limits are triggered.
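If you call the Prefect Cloud REST API directly rather than through the Prefect client, a reasonable pattern is to wait for the duration given in the Retry-After header before retrying. A minimal sketch, with a placeholder URL and API key:

import time\n\nimport httpx\n\ndef get_with_backoff(url: str, api_key: str) -> httpx.Response:\n    \"\"\"Issue a GET request and retry once if the API responds with 429.\"\"\"\n    headers = {\"Authorization\": f\"Bearer {api_key}\"}\n    response = httpx.get(url, headers=headers)\n    if response.status_code == 429:\n        # Honor the server-provided backoff before retrying.\n        time.sleep(float(response.headers.get(\"Retry-After\", \"1\")))\n        response = httpx.get(url, headers=headers)\n    return response\n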

    ","tags":["API","Prefect Cloud","rate limits"],"boost":2},{"location":"cloud/rate-limits/#log-service-rate-limits","title":"Log service rate limits","text":"

    Prefect Cloud limits the number of logs accepted:

    • 700 logs per minute for personal accounts
    • 10,000 logs per minute for Pro accounts

    The Prefect Cloud API will return a 429 response if these limits are triggered.

    ","tags":["API","Prefect Cloud","rate limits"],"boost":2},{"location":"cloud/rate-limits/#flow-run-retention","title":"Flow run retention","text":"

    Prefect Cloud feature

    The Flow Run Retention Policy setting is only applicable in Prefect Cloud.

    Flow runs in Prefect Cloud are retained according to the Flow Run Retention Policy set by your account tier. The policy setting applies to all workspaces owned by the account.

    The flow run retention policy represents the number of days each flow run is available in the Prefect Cloud UI, and via the Prefect CLI and API after it ends. Once a flow run reaches a terminal state (detailed in the chart here), it will be retained until the end of the flow run retention period.

    Flow Run Retention Policy keys on terminal state

Because the Flow Run Retention Policy keys on terminal state, two flow runs that start at the same time but reach a terminal state at different times will be removed at different times, according to when each reached its terminal state.

    This retention policy applies to all details about a flow run, including its task runs. Subflow runs follow the retention policy independently from their parent flow runs, and are removed based on the time each subflow run reaches a terminal state.

    If you or your organization have needs that require a tailored retention period, contact the Prefect Sales team.

    ","tags":["API","Prefect Cloud","rate limits"],"boost":2},{"location":"cloud/workspaces/","title":"Workspaces","text":"

    A workspace is a discrete environment within Prefect Cloud for your workflows and blocks. Workspaces are available to Prefect Cloud accounts only.

    Workspaces can be used to organize and compartmentalize your workflows. For example, you can use separate workspaces to isolate dev, staging, and prod environments, or to provide separation between different teams.

    When you first log into Prefect Cloud, you will be prompted to create your own initial workspace. After creating your workspace, you'll be able to view flow runs, flows, deployments, and other workspace-specific features in the Prefect Cloud UI.

    Select a workspace name in the navigation menu to see all workspaces you can access.

    Your list of available workspaces may include:

    • Your own account's workspace.
    • Workspaces in an account to which you've been invited and have been given access as an Admin or Member.

    Workspace-specific features

    Each workspace keeps track of its own:

    • Flow runs and task runs executed in an environment that is syncing with the workspace
    • Flows associated with flow runs or deployments observed by the Prefect Cloud API
    • Deployments
    • Work pools
    • Blocks and Storage
    • Automations

    Your user permissions within workspaces may vary. Account admins can assign roles and permissions at the workspace level.

    ","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#create-a-workspace","title":"Create a workspace","text":"

    On the Account Workspaces dropdown or the Workspaces page select the + icon to create a new workspace.

    You'll be prompted to configure:

    • The Workspace Owner from the dropdown account menu options.
    • The Workspace Name must be unique within the account.
    • An optional description for the workspace.

    Select Create to create the new workspace. The number of available workspaces varies by Prefect Cloud plan. See Pricing if you need additional workspaces or users.

    ","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#workspace-settings","title":"Workspace settings","text":"

    Within a workspace, select Settings -> General to view or edit workspace details.

    On this page you can edit workspace details or delete the workspace.

    Deleting a workspace

Deleting a workspace deletes all deployments, flow run history, work pools, and notifications configured in the workspace.

    ","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#workspace-access","title":"Workspace access","text":"

    Within a Prefect Cloud Pro or Custom tier account, Workspace Owners can invite other people to be members and provision service accounts to a workspace. In addition to giving the user access to the workspace, a Workspace Owner assigns a workspace role to the user. The role specifies the scope of permissions for the user within the workspace.

    As a Workspace Owner, select Workspaces -> Sharing to manage members and service accounts for the workspace.

    If you've previously invited individuals to your account or provisioned service accounts, you'll see them listed here.

    To invite someone to an account, select the Members + icon. You can select from a list of existing account members.

    Select a Role for the user. This will be the initial role for the user within the workspace. A workspace Owner can change this role at any time.

    Select Send to initiate the invitation.

    To add a service account to a workspace, select the Service Accounts + icon. You can select from a list of configured service accounts. Select a Workspace Role for the service account. This will be the initial role for the service account within the workspace. A workspace Owner can change this role at any time. Select Share to finalize adding the service account.

    To remove a workspace member or service account, select Remove from the menu on the right side of the user or service account information on this page.

    ","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#workspace-transfer","title":"Workspace transfer","text":"

    Workspace transfer enables you to move an existing workspace from one account to another.

    Workspace transfer retains existing workspace configuration and flow run history, including blocks, deployments, notifications, work pools, and logs.

    Workspace transfer permissions

    Workspace transfer must be initiated or approved by a user with admin privileges for the workspace to be transferred.

    To initiate a workspace transfer between personal accounts, contact support@prefect.io.

    ","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#transfer-a-workspace","title":"Transfer a workspace","text":"

    To transfer a workspace, select Settings -> General within the workspace. Then, from the three dot menu in the upper right of the page, select Transfer.

    The Transfer Workspace page shows the workspace to be transferred on the left. Select the target account for the workspace on the right.

    Workspace transfer impact on accounts

    Workspace transfer may impact resource usage and costs for source and target accounts.

When you transfer a workspace, users, API keys, and service accounts may lose access to the workspace. The audit log will no longer track activity in the workspace. Flow runs ending outside of the destination account\u2019s flow run retention period will be removed. You may also need to update Prefect CLI profiles and execution environment settings to access the workspace's new location.

    You may also incur new charges in the target account to accommodate the transferred workspace.

    The Transfer Workspace page outlines the impacts of transferring the selected workspace to the selected target. Please review these notes carefully before selecting Transfer to transfer the workspace.

    ","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/users/","title":"User accounts","text":"

    Sign up for a Prefect Cloud account at app.prefect.cloud.

    An individual user can be invited to become a member of other accounts.

    ","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#user-settings","title":"User settings","text":"

    Users can access their personal settings in the profile menu, including:

• Profile: View and edit basic information, such as your name.
    • API keys: Create and view API keys for connecting to Prefect Cloud from the CLI or other environments.
    • Preferences: Manage settings, such as color mode and default time zone.
    • Feature previews: Enable or disable feature previews.
    ","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#account-roles","title":"Account roles","text":"

    Users who are part of an account can hold the role of Admin or Member. Admins can invite other users to join the account and manage the account's workspaces and teams.

Admins on Pro and Custom tier Prefect Cloud accounts can grant account members roles in a workspace, such as Runner or Viewer. Custom roles are available on Custom tier accounts.

    ","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#api-keys","title":"API keys","text":"

    API keys enable you to authenticate an environment to work with Prefect Cloud.

    ","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#service-accounts","title":"Service accounts","text":"

    Service accounts enable you to create a Prefect Cloud API key that is not associated with a user account.

    ","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#single-sign-on-sso","title":"Single sign-on (SSO)","text":"

    Custom tier plans offer single sign-on (SSO) integration with your team\u2019s identity provider, including options for directory sync and SCIM provisioning.

    ","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#audit-log","title":"Audit log","text":"

    Audit logs provide a chronological record of activities performed by Prefect Cloud users who are members of an account.

    ","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#object-level-access-control-lists-acls","title":"Object-level access control lists (ACLs)","text":"

    Prefect Cloud's Custom plan offers object-level access control lists to restrict access to specific users and service accounts within a workspace.

    ","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#teams","title":"Teams","text":"

    Users of Custom tier Prefect Cloud accounts can be added to Teams to simplify access control governance.

    ","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/api-keys/","title":"Manage Prefect Cloud API Keys","text":"

    API keys enable you to authenticate a local environment to work with Prefect Cloud.

    If you run prefect cloud login from your CLI, you'll have the choice to authenticate through your browser or by pasting an API key.

    If you choose to authenticate through your browser, you'll be directed to an authorization page. After you grant approval to connect, you'll be redirected to the CLI and the API key will be saved to your local Prefect profile.

    If you choose to authenticate by pasting an API key, you'll need to create an API key in the Prefect Cloud UI first.

    ","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/api-keys/#create-an-api-key","title":"Create an API key","text":"

    To create an API key, select the account icon at the bottom-left corner of the UI.

    Select API Keys. The page displays a list of previously generated keys and lets you create new API keys or delete keys.

    Select the + button to create a new API key. Provide a name for the key and an expiration date.

    Warning

    Note that an API key cannot be revealed again in the UI after it is generated, so copy the key to a secure location.

    ","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/api-keys/#log-into-prefect-cloud-with-an-api-key","title":"Log into Prefect Cloud with an API Key","text":"
    prefect cloud login -k '<my-api-key>'\n
    ","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/api-keys/#service-account-api-keys","title":"Service account API keys","text":"

    Service accounts are a feature of Prefect Cloud Pro and Custom tier plans that enable you to create a Prefect Cloud API key that is not associated with a user account.

    Service accounts are typically used to configure API access for running workers or executing flow runs on remote infrastructure. Events and logs for flow runs in those environments are then associated with the service account rather than a user, and API access may be managed or revoked by configuring or removing the service account without disrupting user access.

    See the service accounts documentation for more information about creating and managing service accounts in Prefect Cloud.

    ","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/audit-log/","title":"Audit Log","text":"

    Prefect Cloud's Pro and Custom plans offer enhanced compliance and transparency tools with Audit Log. Audit logs provide a chronological record of activities performed by members in your account, allowing you to monitor detailed Prefect Cloud actions for security and compliance purposes.

    Audit logs enable you to identify who took what action, when, and using what resources within your Prefect Cloud account. In conjunction with appropriate tools and procedures, audit logs can assist in detecting potential security violations and investigating application errors.

    Audit logs can be used to identify changes in:

    • Access to workspaces
    • User login activity
    • User API key creation and removal
    • Workspace creation and removal
    • Account member invitations and removal
    • Service account creation, API key rotation, and removal
    • Billing payment method for self-serve pricing tiers

    See the Prefect Cloud plan information to learn more about options for supporting audit logs.

    ","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","audit logs","compliance"],"boost":2},{"location":"cloud/users/audit-log/#viewing-audit-logs","title":"Viewing audit logs","text":"

    From your Pro or Custom account settings page, select the Audit Log page to view audit logs.

    Pro and Custom account tier admins can view audit logs for:

    • Account-level events in Prefect Cloud, such as:
    • Member invites
    • Changing a member\u2019s role
    • Member login and logout of Prefect Cloud
    • Creating or deleting a service account
    • Workspace-level events in Prefect Cloud, such as:
    • Adding a member to a workspace
    • Changing a member\u2019s workspace role
    • Creating or deleting a workspace

    Admins can filter audit logs on multiple dimensions to restrict the results they see by workspace, user, or event type. Available audit log events are displayed in the Events drop-down menu.

    Audit logs may also be filtered by date range. Audit log retention period varies by Prefect Cloud plan.

    ","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","audit logs","compliance"],"boost":2},{"location":"cloud/users/object-access-control-lists/","title":"Object Access Control Lists","text":"

    Prefect Cloud's Custom plan offers object-level access control lists to restrict access to specific users and service accounts within a workspace. ACLs are supported for blocks and deployments.

    Organization Admins and Workspace Owners can configure access control lists by navigating to an object and clicking manage access. When an ACL is added, all users and service accounts with access to an object via their workspace role will lose access if not explicitly added to the ACL.

    ACLs and visibility

Objects not governed by access control lists, such as flow runs, flows, and artifacts, will be visible to a user within a workspace even if an associated block or deployment has been restricted for that user.

    See the Prefect Cloud plans to learn more about options for supporting object-level access control.

    ","tags":["UI","Permissions","Access","Prefect Cloud","enterprise","teams","workspaces","organizations","audit logs","compliance"],"boost":2},{"location":"cloud/users/roles/","title":"User and Service Account Roles","text":"

    Prefect Cloud's Pro and Custom tiers allow you to set team member access to the appropriate level within specific workspaces.

    Role-based access controls (RBAC) enable you to assign users granular permissions to perform certain activities.

    To give users access to functionality beyond the scope of Prefect\u2019s built-in workspace roles, Custom account Admins can create custom roles for users.

    ","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#built-in-roles","title":"Built-in roles","text":"

    Roles give users abilities at either the account level or at the individual workspace level.

    • An account-level role defines a user's default permissions within an account.
    • A workspace-level role defines a user's permissions within a specific workspace.

The following sections outline the abilities of the built-in, Prefect-defined account and workspace roles.

    ","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#account-level-roles","title":"Account-level roles","text":"

    The following built-in roles have permissions across an account in Prefect Cloud.

Owner:
\u2022 Set/change all account profile settings allowed to be set/changed by a Prefect user.
\u2022 Add and remove account members, and their account roles.
\u2022 Create and delete service accounts in the account.
\u2022 Create workspaces in the account.
\u2022 Implicit workspace owner access on all workspaces in the account.
\u2022 Bypass SSO.

Admin:
\u2022 Set/change all account profile settings allowed to be set/changed by a Prefect user.
\u2022 Add and remove account members, and their account roles.
\u2022 Create and delete service accounts in the account.
\u2022 Create workspaces in the account.
\u2022 Implicit workspace owner access on all workspaces in the account.
\u2022 Cannot bypass SSO.

Member:
\u2022 View account profile settings.
\u2022 View workspaces I have access to in the account.
\u2022 View account members and their roles.
\u2022 View service accounts in the account.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#workspace-level-roles","title":"Workspace-level roles","text":"

    The following built-in roles have permissions within a given workspace in Prefect Cloud.

Viewer:
\u2022 View flow runs within a workspace.
\u2022 View deployments within a workspace.
\u2022 View all work pools within a workspace.
\u2022 View all blocks within a workspace.
\u2022 View all automations within a workspace.
\u2022 View workspace handle and description.

Runner: All Viewer abilities, plus:
\u2022 Run deployments within a workspace.

Developer: All Runner abilities, plus:
\u2022 Run flows within a workspace.
\u2022 Delete flow runs within a workspace.
\u2022 Create, edit, and delete deployments within a workspace.
\u2022 Create, edit, and delete work pools within a workspace.
\u2022 Create, edit, and delete all blocks and their secrets within a workspace.
\u2022 Create, edit, and delete automations within a workspace.
\u2022 View all workspace settings.

Owner: All Developer abilities, plus:
\u2022 Add and remove account members, and set their role within a workspace.
\u2022 Set the workspace\u2019s default workspace role for all users in the account.
\u2022 Set, view, edit workspace settings.

Worker: The minimum scopes required for a worker to poll for and submit work.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#custom-workspace-roles","title":"Custom workspace roles","text":"

    The built-in roles will serve the needs of most users, but your team may need to configure custom roles, giving users access to specific permissions within a workspace.

    Custom roles can inherit permissions from a built-in role. This enables tweaks to the role to meet your team\u2019s needs, while ensuring users can still benefit from Prefect\u2019s default workspace role permission curation as new functionality becomes available.

    Custom workspace roles can also be created independent of Prefect\u2019s built-in roles. This option gives workspace admins full control of user access to workspace functionality. However, for non-inherited custom roles, the workspace admin takes on the responsibility for monitoring and setting permissions for new functionality as it is released.

    See Role permissions for details of permissions you may set for custom roles.

After you create a new role, it becomes available on the account Members page and the Workspace Sharing page for you to apply to users.

    ","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#inherited-roles","title":"Inherited roles","text":"

    A custom role may be configured as an Inherited Role. Using an inherited role allows you to create a custom role using a set of initial permissions associated with a built-in Prefect role. Additional permissions can be added to the custom role. Permissions included in the inherited role cannot be removed.

    Custom roles created using an inherited role will follow Prefect's default workspace role permission curation as new functionality becomes available.

    To configure an inherited role when configuring a custom role, select the Inherit permission from a default role check box, then select the role from which the new role should inherit permissions.

    ","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#workspace-role-permissions","title":"Workspace role permissions","text":"

    The following permissions are available for custom roles.

    ","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#automations","title":"Automations","text":"Permission Description View automations User can see configured automations within a workspace. Create, edit, and delete automations User can create, edit, and delete automations within a workspace. Includes permissions of View automations.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#blocks","title":"Blocks","text":"Permission Description View blocks User can see configured blocks within a workspace. View secret block data User can see configured blocks and their secrets within a workspace. Includes permissions of\u00a0View blocks. Create, edit, and delete blocks User can create, edit, and delete blocks within a workspace. Includes permissions of View blocks and View secret block data.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#deployments","title":"Deployments","text":"Permission Description View deployments User can see configured deployments within a workspace. Run deployments User can run deployments within a workspace. This does not give a user permission to execute the flow associated with the deployment. This only gives a user (via their key) the ability to run a deployment \u2014 another user/key must actually execute that flow, such as a service account with an appropriate role. Includes permissions of View deployments. Create and edit deployments User can create and edit deployments within a workspace. Includes permissions of View deployments and Run deployments. Delete deployments User can delete deployments within a workspace. Includes permissions of View deployments, Run deployments, and Create and edit deployments.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#flows","title":"Flows","text":"Permission Description View flows and flow runs User can see flows and flow runs within a workspace. Create, update, and delete saved search filters User can create, update, and delete saved flow run search filters configured within a workspace. Includes permissions of View flows and flow runs. Create, update, and run flows User can create, update, and run flows within a workspace. Includes permissions of View flows and flow runs. Delete flows User can delete flows within a workspace. Includes permissions of View flows and flow runs and Create, update, and run flows.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#notifications","title":"Notifications","text":"Permission Description View notification policies User can see notification policies configured within a workspace. Create and edit notification policies User can create and edit notification policies configured within a workspace. Includes permissions of View notification policies. Delete notification policies User can delete notification policies configured within a workspace. 
Includes permissions of View notification policies and Create and edit notification policies.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#task-run-concurrency","title":"Task run concurrency","text":"Permission Description View concurrency limits User can see configured task run concurrency limits within a workspace. Create, edit, and delete concurrency limits User can create, edit, and delete task run concurrency limits within a workspace. Includes permissions of View concurrency limits.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#work-pools","title":"Work pools","text":"Permission Description View work pools User can see work pools configured within a workspace. Create, edit, and pause work pools User can create, edit, and pause work pools configured within a workspace. Includes permissions of View work pools. Delete work pools User can delete work pools configured within a workspace. Includes permissions of View work pools and Create, edit, and pause work pools.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#workspace-management","title":"Workspace management","text":"Permission Description View information about workspace service accounts User can see service accounts configured within a workspace. View information about workspace users User can see user accounts for users invited to the workspace. View workspace settings User can see settings configured within a workspace. Edit workspace settings User can edit settings for a workspace. Includes permissions of View workspace settings. Delete the workspace User can delete a workspace. Includes permissions of View workspace settings and Edit workspace settings.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/service-accounts/","title":"Service Accounts","text":"

    Service accounts enable you to create a Prefect Cloud API key that is not associated with a user account. Service accounts are typically used to configure API access for running workers or executing deployment flow runs on remote infrastructure.

    Service accounts are non-user accounts that have the following features:

    • Prefect Cloud API keys
    • Roles and permissions

    Using service account credentials, you can configure an execution environment to interact with your Prefect Cloud workspaces without a user having to manually log in from that environment. Service accounts may be created, added to workspaces, have their roles changed, or deleted without affecting other user accounts.

    Select Service Accounts to view, create, or edit service accounts.

    Service accounts are created at the account level, but individual workspaces may be shared with the service account. See workspace sharing for more information.

    Service account credentials

    When you create a service account, Prefect Cloud creates a new API key for the account and provides the API configuration command for the execution environment. Save these to a safe location for future use. If the access credentials are lost or compromised, you should regenerate the credentials from the service account page.

    Service account roles

    Service accounts are created at the account level, and can then be added to workspaces within the account. You may apply any valid workspace-level role to a service account.

    ","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/users/service-accounts/#create-a-service-account","title":"Create a service account","text":"

    Within your account, on the Service Accounts page, select the + icon to create a new service account. You'll be prompted to configure:

    • The service account name. This name must be unique within your account.
    • An expiration date, or the Never Expire option.

    Service account roles

    A service account may only be a Member of an account. You may apply any valid workspace-level role to a service account when it is added to a workspace.

    Select Create to create the new service account.

    Warning

    Note that an API key cannot be revealed again in the UI after it is generated, so copy the key to a secure location.

    You can change the API key and expiration for a service account by rotating the API key. Select Rotate API Key from the menu on the left side of the service account's information on this page. Optionally, you can set a period of time for your old service account key to remain active.

    To delete a service account, select Remove from the menu on the left side of the service account's information.

    ","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/users/sso/","title":"Single Sign-on (SSO)","text":"

    Prefect Cloud's Custom plans offer single sign-on (SSO) integration with your team\u2019s identity provider. SSO integration can be set up with any identity provider that supports:

    • OIDC
    • SAML 2.0

    When using SSO, Prefect Cloud won't store passwords for any accounts managed by your identity provider. Members of your Prefect Cloud account will instead log in and authenticate using your identity provider.

    Once your SSO integration has been set up, non-admins will be required to authenticate through the SSO provider when accessing account resources.

    See the Prefect Cloud plans to learn more about options for supporting more users and workspaces, service accounts, and SSO.

    ","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","single sign-on","SSO","authentication"],"boost":2},{"location":"cloud/users/sso/#configuring-sso","title":"Configuring SSO","text":"

    Within your account, select the SSO page to enable SSO for users.

    If you haven't enabled SSO for a domain yet, enter the email domains for which you want to configure SSO in Prefect Cloud and save it.

    Under Enabled Domains, select the domains from the Domains list, then select Generate Link. This step creates a link you can use to configure SSO with your identity provider.

    Using the provided link, navigate to the Identity Provider Configuration dashboard and select your identity provider to continue configuration. If your provider isn't listed, you can continue with the SAML or Open ID Connect choices instead.

    Once you complete SSO configuration, your users will be required to authenticate via your identity provider when accessing account resources, giving you full control over application access.

    ","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","single sign-on","SSO","authentication"],"boost":2},{"location":"cloud/users/sso/#directory-sync","title":"Directory sync","text":"

    Directory sync automatically provisions and de-provisions users for your account.

    Provisioned users are given basic \u201cMember\u201d roles and will have access to any resources that role entails.

    When a user is unassigned from the Prefect Cloud application in your identity provider, they will automatically lose access to Prefect Cloud resources, allowing your IT team to control access to Prefect Cloud without ever signing into the Prefect UI.

    ","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","single sign-on","SSO","authentication"],"boost":2},{"location":"cloud/users/sso/#scim-provisioning","title":"SCIM Provisioning","text":"

    Custom accounts have access to SCIM for user provisioning. The SSO tab provides access to enable SCIM provisioning.

    ","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","single sign-on","SSO","authentication"],"boost":2},{"location":"cloud/users/teams/","title":"Teams","text":"

    Prefect Cloud's Custom plan offers team management to simplify access control governance.

    Account Admins can configure teams and team membership from the account settings menu by clicking Teams. Teams are composed of users and service accounts. Teams can be added to workspaces or object access control lists just like users and service accounts.

    If SCIM is enabled on your account, the set of teams and the users within them is governed by your IDP. Prefect Cloud service accounts, which are not governed by your IDP, can still be added to your existing set of teams.

    See the Prefect Cloud plans to learn more about options for supporting teams.

    ","tags":["UI","Permissions","Access","Prefect Cloud","enterprise","teams","workspaces","organizations","audit logs","compliance"],"boost":2},{"location":"community/","title":"Community","text":"

    There are many ways to get involved with the Prefect community:

    • Join over 26,000 engineers in the Prefect Slack community
    • Get help in Prefect Discourse - the community-driven knowledge base
    • Give Prefect a \u2b50\ufe0f on GitHub
    • Contribute to Prefect's open source libraries
    • Become a Prefect Ambassador by joining Club 42
    ","tags":["community","Slack","Discourse"],"boost":2},{"location":"concepts/","title":"Explore Prefect concepts","text":"Concept Description Flows A Prefect workflow, defined as a Python function. Tasks Discrete units of work in a Prefect workflow. Deployments A server-side concept that encapsulates flow metadata, allowing it to be scheduled and triggered via API. Work Pools & Workers Use Prefect to dynamically provision and configure infrastructure in your execution environment. Schedules Tell the Prefect API how to create new flow runs for you automatically on a specified cadence. Results The data returned by a flow or a task. Artifacts Formatted outputs rendered in the Prefect UI, such as markdown, tables, or links. States Rich objects that capture the status of a particular task run or flow run. Blocks Prefect primitives that enable the storage of configuration and provide a UI interface. Task Runners Configure how tasks are run - concurrently, in parallel, or in a distributed environment. Automations Configure actions that Prefect executes automatically based on trigger conditions. Block and Agent-Based Deployments Description Block-based Deployments Create deployments that rely on blocks. Infrastructure Blocks that specify infrastructure for flow runs created by a deployment. Storage Lets you configure how flow code for deployments is persisted and retrieved. Agents Like untyped workers.

    Many features specific to Prefect Cloud are in their own navigation subheading.

    ","tags":["concepts","features","overview"],"boost":2},{"location":"concepts/agents/","title":"Agents","text":"

    Workers are recommended

    Agents are part of the block-based deployment model. Work Pools and Workers simplify the specification of a flow's infrastructure and runtime environment. If you have existing agents, you can upgrade from agents to workers to significantly enhance the experience of deploying flows.

    ","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/agents/#agent-overview","title":"Agent overview","text":"

    Agent processes are lightweight polling services that get scheduled work from a work pool and deploy the corresponding flow runs.

    Agents poll for work every 15 seconds by default. This interval is configurable in your profile settings with the PREFECT_AGENT_QUERY_INTERVAL setting.

    Multiple agent processes can be started for a single work pool. Each agent process sends a unique ID to the server to disambiguate itself and let users know how many agents are active.

    ","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/agents/#agent-options","title":"Agent options","text":"

    Agents are configured to pull work from one or more work pool queues. If the agent references a work queue that doesn't exist, it will be created automatically.

    Configuration parameters you can specify when starting an agent include:

    Option Description --api The API URL for the Prefect server. Default is the value of PREFECT_API_URL. --hide-welcome Do not display the startup ASCII art for the agent process. --limit Maximum number of flow runs to start simultaneously. [default: None] --match, -m Dynamically matches work queue names with the specified prefix for the agent to pull from, for example, dev- will match all work queues with a name that starts with dev-. [default: None] --pool, -p A work pool name for the agent to pull from. [default: None] --prefetch-seconds The amount of time before a flow run's scheduled start time to begin submission. Default is the value of PREFECT_AGENT_PREFETCH_SECONDS. --run-once Only run agent polling once. By default, the agent runs forever. [default: no-run-once] --work-queue, -q One or more work queue names for the agent to pull from. [default: None]

    You must start an agent within an environment that can access or create the infrastructure needed to execute flow runs. Your agent will deploy flow runs to the infrastructure specified by the deployment.

    Prefect must be installed in execution environments

    Prefect must be installed in any environment in which you intend to run the agent or execute a flow run.

    PREFECT_API_URL and PREFECT_API_KEY settings for agents

    PREFECT_API_URL must be set for the environment in which your agent is running or specified when starting the agent with the --api flag. You must also have a user or service account with the Worker role, which can be configured by setting the PREFECT_API_KEY.

    If you want an agent to communicate with Prefect Cloud or a Prefect server from a remote execution environment such as a VM or Docker container, you must configure PREFECT_API_URL in that environment.

    ","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/agents/#starting-an-agent","title":"Starting an agent","text":"

    Use the prefect agent start CLI command to start an agent. You must pass at least one work pool name or match string that the agent will poll for work. If the work pool does not exist, it will be created.

    prefect agent start -p [work pool name]\n

    For example:

    Starting agent with ephemeral API...\n\u00a0 ___ ___ ___ ___ ___ ___ _____ \u00a0 \u00a0 _ \u00a0 ___ ___ _\u00a0 _ _____\n\u00a0| _ \\ _ \\ __| __| __/ __|_ \u00a0 _| \u00a0 /_\\ / __| __| \\| |_ \u00a0 _|\n\u00a0|\u00a0 _/ \u00a0 / _|| _|| _| (__\u00a0 | |\u00a0 \u00a0 / _ \\ (_ | _|| .` | | |\n\u00a0|_| |_|_\\___|_| |___\\___| |_| \u00a0 /_/ \\_\\___|___|_|\\_| |_|\n\nAgent started! Looking for work from work pool 'my-pool'...\n

    By default, the agent polls the API specified by the PREFECT_API_URL environment variable. To configure the agent to poll from a different server location, use the --api flag, specifying the URL of the server.

    In addition, agents can match multiple queues in a work pool by providing a --match string instead of specifying all of the queues. The agent will poll every queue with a name that starts with the given string. New queues matching this prefix will be found by the agent without needing to restart it.

    For example:

    prefect agent start --match \"foo-\"\n

    This example will poll every work queue that starts with \"foo-\".

    ","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/agents/#configuring-prefetch","title":"Configuring prefetch","text":"

    By default, the agent begins submission of flow runs a short time (10 seconds) before they are scheduled to run. This allows time for the infrastructure to be created, so the flow run can start on time. In some cases, infrastructure will take longer than this to actually start the flow run. In these cases, the prefetch can be increased using the --prefetch-seconds option or the PREFECT_AGENT_PREFETCH_SECONDS setting.

    Submission can begin an arbitrary amount of time before the flow run is scheduled to start. If this value is larger than the amount of time it takes for the infrastructure to start, the flow run will wait until its scheduled start time. This allows flow runs to start exactly on time.

    ","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/agents/#troubleshooting","title":"Troubleshooting","text":"","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/agents/#agent-crash-or-keyboard-interrupt","title":"Agent crash or keyboard interrupt","text":"

    If the agent process ends abruptly, you may be left with flow runs that were destined for that agent. In the UI, these show up as pending. You will need to delete these flow runs for the restarted agent to begin processing the work queue again. Take note of the flow runs you delete; you might need to set them to run manually.

    ","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/artifacts/","title":"Artifacts","text":"

    Artifacts are persisted outputs such as tables, Markdown, or links. They are stored on Prefect Cloud or a Prefect server instance and rendered in the Prefect UI. Artifacts make it easy to track and monitor the objects that your flows produce and update over time.

    Published artifacts may be associated with a particular task run or flow run. Artifacts can also be created outside of any flow run context.

    Whether you're publishing links, Markdown, or tables, artifacts provide a powerful and flexible way to showcase data within your workflows.

    With artifacts, you can easily manage and share information with your team, providing valuable insights and context.

    Common use cases for artifacts include:

    • Debugging: By publishing data that you care about in the UI, you can easily see when and where your results were written. If an artifact doesn't look the way you expect, you can find out which flow run last updated it, and you can click through a link in the artifact to a storage location (such as an S3 bucket).
    • Data quality checks: Artifacts can be used to publish data quality checks from in-progress tasks. This can help ensure that data quality is maintained throughout the pipeline. During long-running tasks such as ML model training, you might use artifacts to publish performance graphs. This can help you visualize how well your models are performing and make adjustments as needed. You can also track the versions of these artifacts over time, making it easier to identify changes in your data.
    • Documentation: Artifacts can be used to publish documentation and sample data to help you keep track of your work and share information with your colleagues. For instance, artifacts allow you to add a description to let your colleagues know why this piece of data is important.
    ","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#creating-artifacts","title":"Creating artifacts","text":"

    Creating artifacts allows you to publish data from task and flow runs or outside of a flow run context. Currently, you can render three artifact types: links, Markdown, and tables.

    Artifacts render individually

    Please note that every artifact created within a task will be displayed as an individual artifact in the Prefect UI. This means that each call to create_link_artifact() or create_markdown_artifact() generates a distinct artifact.

    Unlike the print() command, where you can concatenate multiple calls to build up a single report, each call to one of these functions within a task produces its own separate artifact.

    To create artifacts like reports or summaries using create_markdown_artifact(), compile your message string separately and then pass it to create_markdown_artifact() to create the complete artifact.
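
    For example, a minimal sketch (the key, function name, and report contents are illustrative) that builds the complete Markdown string first and then publishes it with a single call:

    from prefect.artifacts import create_markdown_artifact\n\ndef publish_quality_report(rows_checked: int, rows_failing: int):\n    # build the complete report as a single string first\n    report = f\"\"\"# Data Quality Report\n\n- Rows checked: {rows_checked}\n- Rows failing: {rows_failing}\n\"\"\"\n    # then publish it with one call so it renders as one artifact\n    create_markdown_artifact(key=\"quality-report\", markdown=report)\n\nif __name__ == \"__main__\":\n    publish_quality_report(1000, 7)\n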

    ","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#creating-link-artifacts","title":"Creating link artifacts","text":"

    To create a link artifact, use the create_link_artifact() function. To create multiple versions of the same artifact and/or view them on the Artifacts page of the Prefect UI, provide a key argument to the create_link_artifact() function to track an artifact's history over time. Without a key, the artifact will only be visible in the Artifacts tab of the associated flow run or task run.

    from prefect import flow, task\nfrom prefect.artifacts import create_link_artifact\n\n@task\ndef my_first_task():\n        create_link_artifact(\n            key=\"irregular-data\",\n            link=\"https://nyc3.digitaloceanspaces.com/my-bucket-name/highly_variable_data.csv\",\n            description=\"## Highly variable data\",\n        )\n\n@task\ndef my_second_task():\n        create_link_artifact(\n            key=\"irregular-data\",\n            link=\"https://nyc3.digitaloceanspaces.com/my-bucket-name/low_pred_data.csv\",\n            description=\"# Low prediction accuracy\",\n        )\n\n@flow\ndef my_flow():\n    my_first_task()\n    my_second_task()\n\nif __name__ == \"__main__\":\n    my_flow()\n

    Tip

    You can specify multiple artifacts with the same key to more easily track something very specific that you care about, such as irregularities in your data pipeline.

    After running the above flow, you can find your new artifacts in the Artifacts page of the UI. Click into the \"irregular-data\" artifact and see all versions of it, along with custom descriptions and links to the relevant data.

    Here, you'll also be able to view information about your artifact such as its associated flow run or task run id, previous and future versions of the artifact (multiple artifacts can have the same key in order to show lineage), the data you've stored (in this case a Markdown-rendered link), an optional Markdown description, and when the artifact was created or updated.

    To make the links more readable for you and your collaborators, you can pass in a link_text argument for your link artifacts:

    from prefect import flow\nfrom prefect.artifacts import create_link_artifact\n\n@flow\ndef my_flow():\n    create_link_artifact(\n        key=\"my-important-link\",\n        link=\"https://www.prefect.io/\",\n        link_text=\"Prefect\",\n    )\n\nif __name__ == \"__main__\":\n    my_flow()\n

    In the above example, the create_link_artifact method is used within a flow to create a link artifact with a key of my-important-link. The link parameter is used to specify the external resource to be linked to, and link_text is used to specify the text to be displayed for the link. An optional description could also be added for context.
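
    For instance, a minimal sketch (the key, link, and description text are illustrative) that adds such a description alongside link_text:

    from prefect import flow\nfrom prefect.artifacts import create_link_artifact\n\n@flow\ndef my_flow():\n    create_link_artifact(\n        key=\"my-important-link\",\n        link=\"https://www.prefect.io/\",\n        link_text=\"Prefect\",\n        # the description renders as Markdown alongside the link\n        description=\"Link to the Prefect homepage for quick reference.\",\n    )\n\nif __name__ == \"__main__\":\n    my_flow()\n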

    ","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#creating-markdown-artifacts","title":"Creating Markdown artifacts","text":"

    To create a Markdown artifact, you can use the create_markdown_artifact() function. To create multiple versions of the same artifact and/or view them on the Artifacts page of the Prefect UI, provide a key argument to the create_markdown_artifact() function to track an artifact's history over time. Without a key, the artifact will only be visible in the Artifacts tab of the associated flow run or task run.

    Don't indent Markdown

    Markdown in multi-line strings must be unindented to be interpreted correctly.

    from prefect import flow, task\nfrom prefect.artifacts import create_markdown_artifact\n\n@task\ndef markdown_task():\n    na_revenue = 500000\n    markdown_report = f\"\"\"# Sales Report\n\n## Summary\n\nIn the past quarter, our company saw a significant increase in sales, with a total revenue of $1,000,000. \nThis represents a 20% increase over the same period last year.\n\n## Sales by Region\n\n| Region        | Revenue |\n|:--------------|-------:|\n| North America | ${na_revenue:,} |\n| Europe        | $250,000 |\n| Asia          | $150,000 |\n| South America | $75,000 |\n| Africa        | $25,000 |\n\n## Top Products\n\n1. Product A - $300,000 in revenue\n2. Product B - $200,000 in revenue\n3. Product C - $150,000 in revenue\n\n## Conclusion\n\nOverall, these results are very encouraging and demonstrate the success of our sales team in increasing revenue \nacross all regions. However, we still have room for improvement and should focus on further increasing sales in \nthe coming quarter.\n\"\"\"\n    create_markdown_artifact(\n        key=\"gtm-report\",\n        markdown=markdown_report,\n        description=\"Quarterly Sales Report\",\n    )\n\n@flow()\ndef my_flow():\n    markdown_task()\n\n\nif __name__ == \"__main__\":\n    my_flow()\n

    After running the above flow, you should see your \"gtm-report\" artifact in the Artifacts page of the UI.

    As with all artifacts, you'll be able to view the associated flow run or task run id, previous and future versions of the artifact, your rendered Markdown data, and your optional Markdown description.

    ","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#create-table-artifacts","title":"Create table artifacts","text":"

    You can create a table artifact by calling create_table_artifact(). To create multiple versions of the same artifact and/or view them on the Artifacts page of the Prefect UI, provide a key argument to the create_table_artifact() function to track an artifact's history over time. Without a key, the artifact will only be visible in the artifacts tab of the associated flow run or task run.

    Note

    The create_table_artifact() function accepts a table argument, which can be provided as either a list of lists, a list of dictionaries, or a dictionary of lists.

    from prefect.artifacts import create_table_artifact\n\ndef my_fn():\n    highest_churn_possibility = [\n       {'customer_id':'12345', 'name': 'John Smith', 'churn_probability': 0.85 }, \n       {'customer_id':'56789', 'name': 'Jane Jones', 'churn_probability': 0.65 } \n    ]\n\n    create_table_artifact(\n        key=\"personalized-reachout\",\n        table=highest_churn_possibility,\n        description= \"# Marvin, please reach out to these customers today!\"\n    )\n\nif __name__ == \"__main__\":\n    my_fn()\n

    As you can see, you don't need to create an artifact in a flow run context. You can create one anywhere in a Python script and see it in the Prefect UI.
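
    As noted above, the table argument also accepts a dictionary of lists (one list per column); a minimal sketch using the same illustrative data:

    from prefect.artifacts import create_table_artifact\n\ndef my_column_oriented_fn():\n    # the same data expressed as a dictionary of lists, keyed by column name\n    highest_churn_possibility = {\n        \"customer_id\": [\"12345\", \"56789\"],\n        \"name\": [\"John Smith\", \"Jane Jones\"],\n        \"churn_probability\": [0.85, 0.65],\n    }\n\n    create_table_artifact(\n        key=\"personalized-reachout\",\n        table=highest_churn_possibility,\n        description=\"# Marvin, please reach out to these customers today!\"\n    )\n\nif __name__ == \"__main__\":\n    my_column_oriented_fn()\n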

    ","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#managing-artifacts","title":"Managing artifacts","text":"","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#reading-artifacts","title":"Reading artifacts","text":"

    In the Prefect UI, you can view all of the latest versions of your artifacts and click into a specific artifact to see its lineage over time. Additionally, you can inspect all versions of an artifact with a given key from the CLI by running:

    prefect artifact inspect <my_key>\n

    or view all artifacts by running:

    prefect artifact ls\n

    You can also use the Prefect REST API to programmatically filter your results.

    ","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#fetching-artifacts","title":"Fetching artifacts","text":"

    In Python code, you can retrieve an existing artifact with the Artifact.get class method:

    from prefect.artifacts import Artifact\n\nmy_retrieved_artifact = Artifact.get(\"my_artifact_key\")\n
    ","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#deleting-artifacts","title":"Deleting artifacts","text":"

    You can delete specific artifacts directly from the CLI by providing a key or id:

    prefect artifact delete <my_key>\n
    prefect artifact delete --id <my_id>\n

    Alternatively, you can delete artifacts using the Prefect REST API.
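
    For example, a minimal sketch of deleting an artifact through the REST API, assuming a DELETE /artifacts/{id} endpoint and using a placeholder API URL, key, and artifact id:

    import requests\n\n# placeholder values - substitute your own account, workspace, key, and artifact id\nPREFECT_API_URL = \"https://api.prefect.cloud/api/accounts/abc/workspaces/xyz\"\nPREFECT_API_KEY = \"pnu_ghijk\"\nARTIFACT_ID = \"00000000-0000-0000-0000-000000000000\"\n\nheaders = {\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"}\nendpoint = f\"{PREFECT_API_URL}/artifacts/{ARTIFACT_ID}\"\n\n# delete the artifact with the given id\nresponse = requests.delete(endpoint, headers=headers)\nresponse.raise_for_status()\n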

    ","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#artifacts-api","title":"Artifacts API","text":"

    Prefect provides the Prefect REST API to allow you to create, read, and delete artifacts programmatically. With the Artifacts API, you can automate the creation and management of artifacts as part of your workflow.

    For example, to read the five most recently created Markdown, table, and link artifacts, you can run the following:

    import requests\n\nPREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/abc/workspaces/xyz\"\nPREFECT_API_KEY=\"pnu_ghijk\"\ndata = {\n    \"sort\": \"CREATED_DESC\",\n    \"limit\": 5,\n    \"artifacts\": {\n        \"key\": {\n            \"exists_\": True\n        }\n    }\n}\n\nheaders = {\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"}\nendpoint = f\"{PREFECT_API_URL}/artifacts/filter\"\n\nresponse = requests.post(endpoint, headers=headers, json=data)\nassert response.status_code == 200\nfor artifact in response.json():\n    print(artifact)\n

    If you don't specify a key, or require that a key exists, your query will also return results, which are a type of key-less artifact.

    See the rest of the Prefect REST API documentation on artifacts for more information!

    ","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/automations/","title":"Automations","text":"

    Automations in Prefect Cloud enable you to configure actions that Prefect executes automatically based on trigger conditions.

    Potential triggers include the occurrence of events from changes in a flow run's state - or the absence of such events. You can even define your own custom trigger to fire based on an event created from a webhook or a custom event defined in Python code.

    Potential actions include kicking off flow runs, pausing schedules, and sending custom notifications.

    Automations are only available in Prefect Cloud

    Notifications in an open-source Prefect server provide a subset of the notification message-sending features available in Automations.

    Automations provide a flexible and powerful framework for automatically taking action in response to events.

    ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#automations-overview","title":"Automations overview","text":"

    The Automations page provides an overview of all configured automations for your workspace.

    Selecting the toggle next to an automation pauses execution of the automation.

    The button next to the toggle provides commands to copy the automation ID, edit the automation, or delete the automation.

    Select the name of an automation to view Details about it and relevant Events.

    ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#create-an-automation","title":"Create an automation","text":"

    On the Automations page, select the + icon to create a new automation. You'll be prompted to configure:

    • A trigger condition that causes the automation to execute.
    • One or more actions carried out by the automation.
    • Details about the automation, such as a name and description.
    ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#triggers","title":"Triggers","text":"

    Triggers specify the conditions under which your action should be performed. The Prefect UI includes templates for many common conditions, such as:

    • Flow run state change
    • Note - Flow Run Tags currently are only evaluated with OR criteria
    • Work pool status
    • Work queue status
    • Deployment status
    • Metric thresholds, such as average duration, lateness, or completion percentage
    • Incident declarations (available on Pro and Custom plans)
    • Custom event triggers

    Automations API

    The automations API enables further programmatic customization of trigger and action policies based on arbitrary events.

    Importantly, triggers can be configured not only in reaction to events, but also proactively: to fire in the absence of an expected event.

    For example, in the case of flow run state change triggers, you might expect production flows to finish in no longer than thirty minutes. But transient infrastructure or network issues could cause your flow to get \u201cstuck\u201d in a running state. A trigger could kick off an action if the flow stays in a running state for more than 30 minutes. This action could be taken on the flow itself, such as canceling or restarting it. Or the action could take the form of a notification so someone can take manual remediation steps. Or you could set both actions to take place when the trigger occurs.

    ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#actions","title":"Actions","text":"

    Actions specify what your automation does when its trigger criteria are met. Current action types include:

    • Cancel a flow run
    • Pause or resume a schedule
    • Run a deployment
    • Pause or resume a deployment schedule
    • Pause or resume a work pool
    • Pause or resume a work queue
    • Pause or resume an automation
    • Send a notification
    • Call a webhook
    • Suspend a flow run
    • Declare an incident (available on Pro and Custom plans)
    • Change the state of a flow run

    ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#creating-automations-in-python-code","title":"Creating automations In Python code","text":"

    You can create and access any automation with the Python SDK's Automation class and its methods.

    from prefect.automations import Automation\nfrom prefect.events.schemas.automations import EventTrigger\nfrom prefect.server.events.actions import CancelFlowRun\n\n# creating an automation\nautomation = Automation(\n    name=\"woodchonk\",\n    trigger=EventTrigger(\n        expect={\"animal.walked\"},\n        match={\n            \"genus\": \"Marmota\",\n            \"species\": \"monax\",\n        },\n        posture=\"Reactive\",\n        threshold=3,\n    ),\n    actions=[CancelFlowRun()],\n).create()\nprint(automation)\n# name='woodchonk' description='' enabled=True trigger=EventTrigger(type='event', match=ResourceSpecification(__root__={'genus': 'Marmota', 'species': 'monax'}), match_related=ResourceSpecification(__root__={}), after=set(), expect={'animal.walked'}, for_each=set(), posture=Posture.Reactive, threshold=3, within=datetime.timedelta(seconds=10)) actions=[CancelFlowRun(type='cancel-flow-run')] actions_on_trigger=[] actions_on_resolve=[] owner_resource=None id=UUID('d641c552-775c-4dc6-a31e-541cb11137a6')\n\n# reading the automation by id\nautomation = Automation.read(id=\"d641c552-775c-4dc6-a31e-541cb11137a6\")\n# or by name\nautomation = Automation.read(\"woodchonk\")\n\nprint(automation)\n# name='woodchonk' description='' enabled=True trigger=EventTrigger(type='event', match=ResourceSpecification(__root__={'genus': 'Marmota', 'species': 'monax'}), match_related=ResourceSpecification(__root__={}), after=set(), expect={'animal.walked'}, for_each=set(), posture=Posture.Reactive, threshold=3, within=datetime.timedelta(seconds=10)) actions=[CancelFlowRun(type='cancel-flow-run')] actions_on_trigger=[] actions_on_resolve=[] owner_resource=None id=UUID('d641c552-775c-4dc6-a31e-541cb11137a6')\n
    ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#selected-and-inferred-action-targets","title":"Selected and inferred action targets","text":"

    Some actions require you to either select the target of the action, or specify that the target of the action should be inferred.

    Selected targets are simple, and useful for when you know exactly what object your action should act on \u2014 for example, the case of a cleanup flow you want to run or a specific notification you\u2019d like to send.

    Inferred targets are deduced from the trigger itself.

    For example, if a trigger fires on a flow run that is stuck in a running state, and the action is to cancel an inferred flow run, the flow run to cancel is inferred as the stuck run that caused the trigger to fire.

    Similarly, if a trigger fires on a work queue event and the corresponding action is to pause an inferred work queue, the inferred work queue is the one that emitted the event.

    Prefect tries to infer the relevant event whenever possible, but sometimes one does not exist.

    Specify a name and, optionally, a description for the automation.

    ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#custom-triggers","title":"Custom triggers","text":"

    When you need a trigger that doesn't quite fit the templates in the UI trigger builder, you can define a custom trigger in JSON. With custom triggers, you have access to the full capabilities of Prefect's automation system - allowing you to react to many kinds of events and metrics in your workspace.

    Each automation has a single trigger that, when fired, will cause all of its associated actions to run. That single trigger may be a reactive or proactive event trigger, a trigger monitoring the value of a metric, or a composite trigger that combines several underlying triggers.

    ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#event-triggers","title":"Event triggers","text":"

    Event triggers are the most common types of trigger, and they are intended to react to the presence or absence of an event happening in your workspace. Event triggers are indicated with {\"type\": \"event\"}.

    The schema that defines an event trigger is as follows:

    Name Type Supports trailing wildcards Description match object Labels for resources which this Automation will match. match_related object Labels for related resources which this Automation will match. posture string enum N/A The posture of this Automation, either Reactive or Proactive. Reactive automations respond to the presence of the expected events, while Proactive automations respond to the absence of those expected events. after array of strings Event(s), one of which must have first been seen to start this automation. expect array of strings The event(s) this automation is expecting to see. If empty, this automation will evaluate any matched event. for_each array of strings Evaluate the Automation separately for each distinct value of these labels on the resource. By default, labels refer to the primary resource of the triggering event. You may also refer to labels from related resources by specifying related:<role>:<label>. This will use the value of that label for the first related resource in that role. threshold integer N/A The number of events required for this Automation to trigger (for Reactive automations), or the number of events expected (for Proactive automations) within number N/A The time period over which the events must occur. For Reactive triggers, this may be as low as 0 seconds, but must be at least 10 seconds for Proactive triggers","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#resource-matching","title":"Resource matching","text":"

    Both the event and metric triggers support matching events for specific resources in your workspace, including most Prefect objects (like flows, deployments, blocks, work pools, tags, etc.) as well as resources you have defined in any events you emit yourself. The match and match_related fields control which events a trigger considers for evaluation by filtering on the contents of their resource and related fields, respectively. Each label added to a match filter is ANDed with the other labels, and can accept a single value or a list of multiple values that are ORed together.

    Consider the resource and related fields on the following prefect.flow-run.Completed event, truncated for the sake of example. Its primary resource is a flow run, and since that flow run was started via a deployment, it is related to both its flow and its deployment:

    \"resource\": {\n  \"prefect.resource.id\": \"prefect.flow-run.925eacce-7fe5-4753-8f02-77f1511543db\",\n  \"prefect.resource.name\": \"cute-kittiwake\"\n}\n\"related\": [\n  {\n    \"prefect.resource.id\": \"prefect.flow.cb6126db-d528-402f-b439-96637187a8ca\",\n    \"prefect.resource.role\": \"flow\",\n    \"prefect.resource.name\": \"hello\"\n  },\n  {\n    \"prefect.resource.id\": \"prefect.deployment.37ca4a08-e2d9-4628-a310-cc15a323378e\",\n    \"prefect.resource.role\": \"deployment\",\n    \"prefect.resource.name\": \"example\"\n  }\n]\n

    There are a number of valid ways to select the above event for evaluation, and the approach depends on the purpose of the automation.

    The following configuration will filter for any events whose primary resource is a flow run, and that flow run has a name starting with cute- or radical-.

    \"match\": {\n  \"prefect.resource.id\": \"prefect.flow-run.*\",\n  \"prefect.resource.name\": [\"cute-*\", \"radical-*\"]\n},\n\"match_related\": {},\n...\n

    This configuration, on the other hand, will filter for any events for which this specific deployment is a related resource.

    \"match\": {},\n\"match_related\": {\n  \"prefect.resource.id\": \"prefect.deployment.37ca4a08-e2d9-4628-a310-cc15a323378e\"\n},\n...\n

    Both of the above approaches will select the example prefect.flow-run.Completed event, but will permit additional, possibly undesired events through the filter as well. match and match_related can be combined for more restrictive filtering:

    \"match\": {\n  \"prefect.resource.id\": \"prefect.flow-run.*\",\n  \"prefect.resource.name\": [\"cute-*\", \"radical-*\"]\n},\n\"match_related\": {\n  \"prefect.resource.id\": \"prefect.deployment.37ca4a08-e2d9-4628-a310-cc15a323378e\"\n},\n...\n

    Now this trigger will filter only for events whose primary resource is a flow run started by a specific deployment, and that flow run has a name starting with cute- or radical-.

    ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#expected-events","title":"Expected events","text":"

    Once an event has passed through the match filters, it must be decided if this event should be counted toward the trigger's threshold. Whether that is the case is determined by the event names present in expect.

    This configuration informs the trigger to evaluate only prefect.flow-run.Completed events that have passed the match filters.

    \"expect\": [\n  \"prefect.flow-run.Completed\"\n],\n...\n

    threshold decides the quantity of expected events needed to satisfy the trigger. Increasing the threshold above 1 will also require use of within to define a range of time in which multiple events are seen. The following configuration will expect two occurrences of prefect.flow-run.Completed within 60 seconds.

    \"expect\": [\n  \"prefect.flow-run.Completed\"\n],\n\"threshold\": 2,\n\"within\": 60,\n...\n

    after can be used to handle scenarios that require more complex event reactivity.

    Take, for example, this flow which emits an event indicating the table it operates on is missing or empty:

    from prefect import flow\nfrom prefect.events import emit_event\nfrom db import Table\n\n\n@flow\ndef transform(table_name: str):\n  table = Table(table_name)\n\n  if not table.exists():\n    emit_event(\n        event=\"table-missing\",\n        resource={\"prefect.resource.id\": \"etl-events.transform\"}\n    )\n  elif table.is_empty():\n    emit_event(\n        event=\"table-empty\",\n        resource={\"prefect.resource.id\": \"etl-events.transform\"}\n    )\n  else:\n    ...  # transform data\n

    The following configuration uses after to prevent this automation from firing unless either a table-missing or a table-empty event has occurred before a flow run of this deployment completes.

    Tip

    Note how match and match_related are used to ensure the trigger only evaluates events that are relevant to its purpose.

    \"match\": {\n  \"prefect.resource.id\": [\n    \"prefect.flow-run.*\",\n    \"etl-events.transform\"\n  ]\n},\n\"match_related\": {\n  \"prefect.resource.id\": \"prefect.deployment.37ca4a08-e2d9-4628-a310-cc15a323378e\"\n},\n\"after\": [\n  \"table-missing\",\n  \"table-empty\"\n],\n\"expect\": [\n  \"prefect.flow-run.Completed\"\n],\n...\n
    ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#evaluation-strategy","title":"Evaluation strategy","text":"

    All of the previous examples were designed around a reactive posture - that is, count up events toward the threshold until it is met, then execute actions. To respond to the absence of events, use a proactive posture. A proactive trigger will fire when its threshold has not been met by the end of the window of time defined by within. Proactive triggers must have a within value of at least 10 seconds.

    The following trigger will fire if a prefect.flow-run.Completed event is not seen within 60 seconds after a prefect.flow-run.Running event is seen.

    {\n  \"match\": {\n    \"prefect.resource.id\": \"prefect.flow-run.*\"\n  },\n  \"match_related\": {},\n  \"after\": [\n    \"prefect.flow-run.Running\"\n  ],\n  \"expect\": [\n    \"prefect.flow-run.Completed\"\n  ],\n  \"for_each\": [],\n  \"posture\": \"Proactive\",\n  \"threshold\": 1,\n  \"within\": 60\n}\n

    However, without for_each, a prefect.flow-run.Completed event from a different flow run than the one that started this trigger with its prefect.flow-run.Running event could satisfy the condition. Adding a for_each of prefect.resource.id will cause this trigger to be evaluated separately for each flow run id associated with these events.

    {\n  \"match\": {\n    \"prefect.resource.id\": \"prefect.flow-run.*\"\n  },\n  \"match_related\": {},\n  \"after\": [\n    \"prefect.flow-run.Running\"\n  ],\n  \"expect\": [\n    \"prefect.flow-run.Completed\"\n  ],\n  \"for_each\": [\n    \"prefect.resource.id\"\n  ],\n  \"posture\": \"Proactive\",\n  \"threshold\": 1,\n  \"within\": 60\n}\n
    ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#metric-triggers","title":"Metric triggers","text":"

    Metric triggers ({\"type\": \"metric\"}) fire when the value of a metric in your workspace crosses a threshold you've defined. For example, you can trigger an automation when the success rate of flows in your workspace drops below 95% over the course of an hour.

    Prefect's metrics are all derived by examining your workspace's events, and if applicable, use the occurred times of those events as the basis for their calculations.

    Prefect defines three metrics today:

    • Successes ({\"name\": \"successes\"}), defined as the number of flow runs that went Pending and then the latest state we saw was not a failure (Failed or Crashed). This metric accounts for retries if the ultimate state was successful.
    • Duration ({\"name\": \"duration\"}), defined as the length of time that a flow remains in a Running state before transitioning to a terminal state such as Completed, Failed, or Crashed. Because this time is derived in terms of flow run state change events, it may be greater than the runtime of your function.
    • Lateness ({\"name\": \"lateness\"}), defined as the length of time that a Scheduled flow remains in a Late state before transitioning to a Running and/or Crashed state. Only flow runs that the system marks Late are included.

    The schema of a metric trigger is as follows:

    Name Type Supports trailing wildcards Description match object Labels for resources which this Automation will match. match_related object Labels for related resources which this Automation will match. metric MetricTriggerQuery N/A The definition of the metric query to run

    And the MetricTriggerQuery query is defined as:

    Name Type Description name string The name of the Prefect metric to evaluate (see above). threshold number The threshold to which the current metric value is compared operator string (\"<\", \"<=\", \">\", \">=\") The comparison operator to use to decide if the threshold value is met range duration in seconds How far back to evaluate the metric firing_for duration in seconds How long the value must exceed the threshold before this trigger fires

    For example, to fire when flow runs tagged production in your workspace are failing at a rate of 10% or worse for the last hour (in other words, your success rate is below 90%), create this trigger:

    {\n  \"type\": \"metric\",\n  \"match\": {\n    \"prefect.resource.id\": \"prefect.flow-run.*\"\n  },\n  \"match_related\": {\n    \"prefect.resource.id\": \"prefect.tag.production\",\n    \"prefect.resource.role\": \"tag\"\n  },\n  \"metric\": {\n    \"name\": \"successes\",\n    \"threshold\": 0.9,\n    \"operator\": \"<\",\n    \"range\": 3600,\n    \"firing_for\": 0\n  }\n}\n

    To detect when the average lateness of your Kubernetes workloads (running on a work pool named kubernetes) in the last day exceeds 5 minutes late, and that number hasn't gotten better for the last 10 minutes, use a trigger like this:

    {\n  \"type\": \"metric\",\n  \"match\": {\n    \"prefect.resource.id\": \"prefect.flow-run.*\"\n  },\n  \"match_related\": {\n    \"prefect.resource.id\": \"prefect.work-pool.kubernetes\",\n    \"prefect.resource.role\": \"work-pool\"\n  },\n  \"metric\": {\n    \"name\": \"lateness\",\n    \"threshold\": 300,\n    \"operator\": \">\",\n    \"range\": 86400,\n    \"firing_for\": 600\n  }\n}\n
    ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#composite-triggers","title":"Composite triggers","text":"

    To create a trigger from multiple kinds of events and metrics, use a compound or sequence trigger. These higher-order triggers are composed from a set of underlying event and metric triggers.

    For example, if you want to run a deployment only after three different flows in your workspace have written their results to a remote filesystem, combine them with a 'compound' trigger:

    {\n  \"type\": \"compound\",\n  \"require\": \"all\",\n  \"within\": 3600,\n  \"triggers\": [\n    {\n      \"type\": \"event\",\n      \"posture\": \"Reactive\",\n      \"expect\": [\"prefect.block.remote-file-system.write_path.called\"],\n      \"match_related\": {\n        \"prefect.resource.name\": \"daily-customer-export\",\n        \"prefect.resource.role\": \"flow\"\n      }\n    },\n    {\n      \"type\": \"event\",\n      \"posture\": \"Reactive\",\n      \"expect\": [\"prefect.block.remote-file-system.write_path.called\"],\n      \"match_related\": {\n        \"prefect.resource.name\": \"daily-revenue-export\",\n        \"prefect.resource.role\": \"flow\"\n      }\n    },\n    {\n      \"type\": \"event\",\n      \"posture\": \"Reactive\",\n      \"expect\": [\"prefect.block.remote-file-system.write_path.called\"],\n      \"match_related\": {\n        \"prefect.resource.name\": \"daily-expenses-export\",\n        \"prefect.resource.role\": \"flow\"\n      }\n    }\n  ]\n}\n

    This trigger will fire once it sees at least one of each of the underlying event triggers fire within the time frame specified. Then the trigger will reset its state and fire the next time these three events all happen. The order the events occur doesn't matter, just that all of the events occur within one hour.

    If you want a flow run to complete prior to starting to watch for those three events, you can combine the entire previous trigger as the second part of a sequence of two triggers:

    {\n  // the outer trigger is now a \"sequence\" trigger\n  \"type\": \"sequence\",\n  \"within\": 7200,\n  \"triggers\": [\n    // with the first child trigger expecting a Completed event\n    {\n      \"type\": \"event\",\n      \"posture\": \"Reactive\",\n      \"expect\": [\"prefect.flow-run.Completed\"],\n      \"match_related\": {\n        \"prefect.resource.name\": \"daily-export-initiator\",\n        \"prefect.resource.role\": \"flow\"\n      }\n    },\n    // and the second child trigger being the compound trigger from the prior example\n    {\n      \"type\": \"compound\",\n      \"require\": \"all\",\n      \"within\": 3600,\n      \"triggers\": [\n        {\n          \"type\": \"event\",\n          \"posture\": \"Reactive\",\n          \"expect\": [\"prefect.block.remote-file-system.write_path.called\"],\n          \"match_related\": {\n            \"prefect.resource.name\": \"daily-customer-export\",\n            \"prefect.resource.role\": \"flow\"\n          }\n        },\n        {\n          \"type\": \"event\",\n          \"posture\": \"Reactive\",\n          \"expect\": [\"prefect.block.remote-file-system.write_path.called\"],\n          \"match_related\": {\n            \"prefect.resource.name\": \"daily-revenue-export\",\n            \"prefect.resource.role\": \"flow\"\n          }\n        },\n        {\n          \"type\": \"event\",\n          \"posture\": \"Reactive\",\n          \"expect\": [\"prefect.block.remote-file-system.write_path.called\"],\n          \"match_related\": {\n            \"prefect.resource.name\": \"daily-expenses-export\",\n            \"prefect.resource.role\": \"flow\"\n          }\n        }\n      ]\n    }\n  ]\n}\n

    In this case, the trigger will only fire if it sees the daily-export-initiator flow complete, and then the three files written by the other flows.

    The within parameter for compound and sequence triggers constrains how close in time (in seconds) the child triggers must fire to satisfy the composite trigger. For example, if the daily-export-initiator flow runs, but the other three flows don't write their result files until three hours later, this trigger won't fire. Placing these time constraints on the triggers can prevent a misfire if you know that the events will generally happen within a specific timeframe, and you don't want a stray older event to be included in the evaluation of the trigger. If this isn't a concern for you, you may omit the within period, in which case there is no limit to how far apart in time the child triggers occur.

    Any type of trigger may be composed into higher-order composite triggers, including proactive event triggers and metric triggers. In the following example, the compound trigger will fire if any of the following events occur: a flow run stuck in Pending, a work pool becoming unready, or the average amount of Late work in your workspace going over 10 minutes:

    {\n  \"type\": \"compound\",\n  \"require\": \"any\",\n  \"triggers\": [\n    {\n      \"type\": \"event\",\n      \"posture\": \"Proactive\",\n      \"after\": [\"prefect.flow-run.Pending\"],\n      \"expect\": [\"prefect.flow-run.Running\", \"prefect.flow-run.Crashed\"],\n      \"for_each\": [\"prefect.resource.id\"],\n      \"match_related\": {\n        \"prefect.resource.name\": \"daily-customer-export\",\n        \"prefect.resource.role\": \"flow\"\n      }\n    },\n    {\n      \"type\": \"event\",\n      \"posture\": \"Reactive\",\n      \"expect\": [\"prefect.work-pool.not-ready\"],\n      \"match\": {\n        \"prefect.resource.name\": \"kubernetes-workers\",\n      }\n    },\n    {\n      \"type\": \"metric\",\n      \"metric\": {\n        \"name\": \"lateness\",\n        \"operator\": \">\",\n        \"threshold\": 600,\n        \"range\": 3600,\n        \"firing_for\": 300\n      }\n    }\n  ]\n}\n

    For compound triggers, the require parameter may be \"any\", \"all\", or a number between 1 and the number of child triggers. In the example above, if you feel that you are receiving too many spurious notifications for issues that resolve on their own, you can specify {\"require\": 2} to express that any two of the triggers must fire in order for the compound trigger to fire. Sequence triggers, on the other hand, always require all of their child triggers to fire before they fire.

    Compound triggers are defined as:

    • require (number, \"any\", or \"all\"): how many of the child triggers must fire for this trigger to fire
    • within (time, in seconds): how close in time the child triggers must fire for this trigger to fire
    • triggers: an array of other triggers

    Sequence triggers are defined as:

    • within (time, in seconds): how close in time the child triggers must fire for this trigger to fire
    • triggers: an array of other triggers
    ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#create-an-automation-via-deployment-triggers","title":"Create an automation via deployment triggers","text":"

    To enable the simple configuration of event-driven deployments, Prefect provides deployment triggers - a shorthand for creating automations that are linked to specific deployments to run them based on the presence or absence of events.

    Trigger definitions for deployments are supported in prefect.yaml, .serve, and .deploy. At deployment time, specified trigger definitions will create linked automations that are triggered by events matching your chosen grammar. Each trigger definition may include a Jinja template to render the triggering event as the parameters of your deployment's flow run.

    ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#defining-triggers-in-prefectyaml","title":"Defining triggers in prefect.yaml","text":"

    A list of triggers can be included directly on any deployment in a prefect.yaml file:

    deployments:\n  - name: my-deployment\n    entrypoint: path/to/flow.py:decorated_fn\n    work_pool:\n      name: my-work-pool\n    triggers:\n      - type: event\n        enabled: true\n        match:\n          prefect.resource.id: my.external.resource\n        expect:\n          - external.resource.pinged\n        parameters:\n          param_1: \"{{ event }}\"\n

    The following deployment will create a flow run only after both an external.resource.pinged event and an external.resource.replied event have been seen from my.external.resource:

    deployments:\n  - name: my-deployment\n    entrypoint: path/to/flow.py:decorated_fn\n    work_pool:\n      name: my-work-pool\n    triggers:\n      - type: compound\n        require: all\n        parameters:\n          param_1: \"{{ event }}\"\n        triggers:\n          - type: event\n            match:\n              prefect.resource.id: my.external.resource\n            expect:\n              - external.resource.pinged\n          - type: event\n            match:\n              prefect.resource.id: my.external.resource\n            expect:\n              - external.resource.replied\n
    ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#defining-triggers-in-serve-and-deploy","title":"Defining triggers in .serve and .deploy","text":"

    For creating deployments with triggers in Python, the trigger types DeploymentEventTrigger, DeploymentMetricTrigger, DeploymentCompoundTrigger, and DeploymentSequenceTrigger can be imported from prefect.events:

    from prefect import flow\nfrom prefect.events import DeploymentEventTrigger\n\n\n@flow(log_prints=True)\ndef decorated_fn(param_1: str):\n    print(param_1)\n\n\nif __name__==\"__main__\":\n    decorated_fn.serve(\n        name=\"my-deployment\",\n        triggers=[\n            DeploymentEventTrigger(\n                enabled=True,\n                match={\"prefect.resource.id\": \"my.external.resource\"},\n                expect=[\"external.resource.pinged\"],\n                parameters={\n                    \"param_1\": \"{{ event }}\",\n                },\n            )\n        ],\n    )\n

    As with prior examples, composite triggers must be supplied with a list of underlying triggers:

    from prefect import flow\nfrom prefect.events import DeploymentCompoundTrigger\n\n\n@flow(log_prints=True)\ndef decorated_fn(param_1: str):\n    print(param_1)\n\n\nif __name__==\"__main__\":\n    decorated_fn.deploy(\n        name=\"my-deployment\",\n        image=\"my-image-registry/my-image:my-tag\",\n        triggers=[\n            DeploymentCompoundTrigger(\n                enabled=True,\n                name=\"my-compound-trigger\",\n                require=\"all\",\n                triggers=[\n                    {\n                      \"type\": \"event\",\n                      \"match\": {\"prefect.resource.id\": \"my.external.resource\"},\n                      \"expect\": [\"external.resource.pinged\"],\n                    },\n                    {\n                      \"type\": \"event\",\n                      \"match\": {\"prefect.resource.id\": \"my.external.resource\"},\n                      \"expect\": [\"external.resource.replied\"],\n                    },\n                ],\n                parameters={\n                    \"param_1\": \"{{ event }}\",\n                },\n            )\n        ],\n        work_pool_name=\"my-work-pool\",\n    )\n
    ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#pass-triggers-to-prefect-deploy","title":"Pass triggers to prefect deploy","text":"

    You can pass one or more --trigger arguments to prefect deploy, which can be either a JSON string or a path to a .yaml or .json file.

    # Pass a trigger as a JSON string\nprefect deploy -n test-deployment \\\n  --trigger '{\n    \"enabled\": true,\n    \"match\": {\n      \"prefect.resource.id\": \"prefect.flow-run.*\"\n    },\n    \"expect\": [\"prefect.flow-run.Completed\"]\n  }'\n\n# Pass a trigger using a JSON/YAML file\nprefect deploy -n test-deployment --trigger triggers.yaml\nprefect deploy -n test-deployment --trigger my_stuff/triggers.json\n

    For example, a triggers.yaml file could have many triggers defined:

    triggers:\n  - enabled: true\n    match:\n      prefect.resource.id: my.external.resource\n    expect:\n      - external.resource.pinged\n    parameters:\n      param_1: \"{{ event }}\"\n  - enabled: true\n    match:\n      prefect.resource.id: my.other.external.resource\n    expect:\n      - some.other.event\n    parameters:\n      param_1: \"{{ event }}\"\n

    Both of the above triggers would be attached to test-deployment after running prefect deploy.

    Triggers passed to prefect deploy will override any triggers defined in prefect.yaml

    While you can define triggers in prefect.yaml for a given deployment, triggers passed to prefect deploy will take precedence over those defined in prefect.yaml.

    Note that deployment triggers contribute to the total number of automations in your workspace.

    ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#automation-notifications","title":"Automation notifications","text":"

    Notifications enable you to set up automation actions that send a message.

    Automation notifications can be sent through any predefined block that is capable of and configured to send a message. That includes, for example:

    • Slack message to a channel
    • Microsoft Teams message to a channel
    • Email to a configured email address
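    For example, a Slack notification block can be created and saved ahead of time so that an automation action can reference it. This is a minimal sketch; the webhook URL and block document name are placeholders:

    from prefect.blocks.notifications import SlackWebhook\n\n# The webhook URL below is a placeholder; use your own Slack incoming webhook\nslack_block = SlackWebhook(url=\"https://hooks.slack.com/services/XXX/YYY/ZZZ\")\nslack_block.save(\"my-slack-alerts\")\n\n# The same block can also send a message directly\nslack_block.notify(\"Hello from Prefect!\")\n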

    ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#templating-with-jinja","title":"Templating with Jinja","text":"

    Automation actions can access templated variables through Jinja syntax. Templated variables enable you to dynamically include details from an automation trigger, such as a flow or pool name.

    Jinja templated variable syntax wraps the variable name in double curly brackets, like this: {{ variable }}.

    You can access properties of the underlying flow run objects including:

    • flow_run
    • flow
    • deployment
    • work_queue
    • work_pool

    In addition to its native properties, each object includes an id along with created and updated timestamps.

    The flow_run|ui_url token returns the URL for viewing the flow run in Prefect Cloud.

    Here\u2019s an example template that would be relevant to a flow run state-based notification:

    Flow run {{ flow_run.name }} entered state {{ flow_run.state.name }}.\n\n    Timestamp: {{ flow_run.state.timestamp }}\n    Flow ID: {{ flow_run.flow_id }}\n    Flow Run ID: {{ flow_run.id }}\n    State message: {{ flow_run.state.message }}\n

    The resulting Slack webhook notification would look something like this:

    You can also include flow and deployment properties:

    Flow run {{ flow_run.name }} for flow {{ flow.name }}\nentered state {{ flow_run.state.name }}\nwith message {{ flow_run.state.message }}\n\nFlow tags: {{ flow_run.tags }}\nDeployment name: {{ deployment.name }}\nDeployment version: {{ deployment.version }}\nDeployment parameters: {{ deployment.parameters }}\n

    An automation that reports on work pool status might include notifications using work_pool properties.

    Work pool status alert!\n\nName: {{ work_pool.name }}\nLast polled: {{ work_pool.last_polled }}\n

    In addition to those shortcuts for flows, deployments, and work pools, you have access to the automation and the event that triggered the automation. See the Automations API for additional details.

    Automation: {{ automation.name }}\nDescription: {{ automation.description }}\n\nEvent: {{ event.id }}\nResource:\n{% for label, value in event.resource %}\n{{ label }}: {{ value }}\n{% endfor %}\nRelated Resources:\n{% for related in event.related %}\n    Role: {{ related.role }}\n    {% for label, value in related %}\n    {{ label }}: {{ value }}\n    {% endfor %}\n{% endfor %}\n

    Note that this example also illustrates the ability to use Jinja features such as iteration and for-loop control structures when templating notifications.

    ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/blocks/","title":"Blocks","text":"

    Blocks are a primitive within Prefect that enable the storage of configuration and provide an interface for interacting with external systems.

    With blocks, you can securely store credentials for authenticating with services like AWS, GitHub, Slack, and any other system you'd like to orchestrate with Prefect.

    Blocks expose methods that provide pre-built functionality for performing actions against an external system. They can be used to download data from or upload data to an S3 bucket, query data from or write data to a database, or send a message to a Slack channel.

    You may configure blocks through code or through the Prefect Cloud or Prefect server UI.

    You can access blocks for both configuring flow deployments and directly from within your flow code.

    Prefect provides some built-in block types that you can use right out of the box. Additional blocks are available through Prefect Integrations. To use these blocks you can pip install the package, then register the blocks you want to use with Prefect Cloud or a Prefect server.

    The Prefect Cloud and Prefect server UIs display a library of block types that you can configure and use in your flows.

    Blocks and parameters

    Blocks are useful for configuration that needs to be shared across flow runs and between flows.

    For configuration that will change between flow runs, we recommend using parameters.

    ","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#prefect-built-in-blocks","title":"Prefect built-in blocks","text":"

    Prefect provides a broad range of commonly used, built-in block types. These block types are available in Prefect Cloud and the Prefect server UI.

    Block Slug Description Azure azure Store data as a file on Azure Datalake and Azure Blob Storage. Date Time date-time A block that represents a datetime. Docker Container docker-container Runs a command in a container. Docker Registry docker-registry Connects to a Docker registry. Requires a Docker Engine to be connectable. GCS gcs Store data as a file on Google Cloud Storage. GitHub github Interact with files stored on public GitHub repositories. JSON json A block that represents JSON. Kubernetes Cluster Config kubernetes-cluster-config Stores configuration for interaction with Kubernetes clusters. Kubernetes Job kubernetes-job Runs a command as a Kubernetes Job. Local File System local-file-system Store data as a file on a local file system. Microsoft Teams Webhook ms-teams-webhook Enables sending notifications via a provided Microsoft Teams webhook. Opsgenie Webhook opsgenie-webhook Enables sending notifications via a provided Opsgenie webhook. Pager Duty Webhook pager-duty-webhook Enables sending notifications via a provided PagerDuty webhook. Process process Run a command in a new process. Remote File System remote-file-system Store data as a file on a remote file system. Supports any remote file system supported by fsspec. S3 s3 Store data as a file on AWS S3. Secret secret A block that represents a secret value. The value stored in this block will be obfuscated when this block is logged or shown in the UI. Slack Webhook slack-webhook Enables sending notifications via a provided Slack webhook. SMB smb Store data as a file on a SMB share. String string A block that represents a string. Twilio SMS twilio-sms Enables sending notifications via Twilio SMS. Webhook webhook Block that enables calling webhooks.","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#blocks-in-prefect-integrations","title":"Blocks in Prefect Integrations","text":"

    Blocks can also be created by anyone and shared with the community. You'll find blocks that are available for consumption in many of the published Prefect Integrations. The following table provides an overview of the blocks available from our most popular Prefect Integrations.

    Integration Block Slug prefect-airbyte Airbyte Connection airbyte-connection prefect-airbyte Airbyte Server airbyte-server prefect-aws AWS Credentials aws-credentials prefect-aws ECS Task ecs-task prefect-aws MinIO Credentials minio-credentials prefect-aws S3 Bucket s3-bucket prefect-azure Azure Blob Storage Credentials azure-blob-storage-credentials prefect-azure Azure Container Instance Credentials azure-container-instance-credentials prefect-azure Azure Container Instance Job azure-container-instance-job prefect-azure Azure Cosmos DB Credentials azure-cosmos-db-credentials prefect-azure AzureML Credentials azureml-credentials prefect-bitbucket BitBucket Credentials bitbucket-credentials prefect-bitbucket BitBucket Repository bitbucket-repository prefect-census Census Credentials census-credentials prefect-census Census Sync census-sync prefect-databricks Databricks Credentials databricks-credentials prefect-dbt dbt CLI BigQuery Target Configs dbt-cli-bigquery-target-configs prefect-dbt dbt CLI Profile dbt-cli-profile prefect-dbt dbt Cloud Credentials dbt-cloud-credentials prefect-dbt dbt CLI Global Configs dbt-cli-global-configs prefect-dbt dbt CLI Postgres Target Configs dbt-cli-postgres-target-configs prefect-dbt dbt CLI Snowflake Target Configs dbt-cli-snowflake-target-configs prefect-dbt dbt CLI Target Configs dbt-cli-target-configs prefect-docker Docker Host docker-host prefect-docker Docker Registry Credentials docker-registry-credentials prefect-email Email Server Credentials email-server-credentials prefect-firebolt Firebolt Credentials firebolt-credentials prefect-firebolt Firebolt Database firebolt-database prefect-gcp BigQuery Warehouse bigquery-warehouse prefect-gcp GCP Cloud Run Job cloud-run-job prefect-gcp GCP Credentials gcp-credentials prefect-gcp GcpSecret gcpsecret prefect-gcp GCS Bucket gcs-bucket prefect-gcp Vertex AI Custom Training Job vertex-ai-custom-training-job prefect-github GitHub Credentials github-credentials prefect-github GitHub Repository github-repository prefect-gitlab GitLab Credentials gitlab-credentials prefect-gitlab GitLab Repository gitlab-repository prefect-hex Hex Credentials hex-credentials prefect-hightouch Hightouch Credentials hightouch-credentials prefect-kubernetes Kubernetes Credentials kubernetes-credentials prefect-monday Monday Credentials monday-credentials prefect-monte-carlo Monte Carlo Credentials monte-carlo-credentials prefect-openai OpenAI Completion Model openai-completion-model prefect-openai OpenAI Image Model openai-image-model prefect-openai OpenAI Credentials openai-credentials prefect-slack Slack Credentials slack-credentials prefect-slack Slack Incoming Webhook slack-incoming-webhook prefect-snowflake Snowflake Connector snowflake-connector prefect-snowflake Snowflake Credentials snowflake-credentials prefect-sqlalchemy Database Credentials database-credentials prefect-sqlalchemy SQLAlchemy Connector sqlalchemy-connector prefect-twitter Twitter Credentials twitter-credentials","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#using-existing-block-types","title":"Using existing block types","text":"

    Blocks are classes that subclass the Block base class. They can be instantiated and used like normal classes.

    ","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#instantiating-blocks","title":"Instantiating blocks","text":"

    For example, to instantiate a block that stores a JSON value, use the JSON block:

    from prefect.blocks.system import JSON\n\njson_block = JSON(value={\"the_answer\": 42})\n
    ","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#saving-blocks","title":"Saving blocks","text":"

    If this JSON value needs to be retrieved later within a flow or task, use the .save() method on the block to store the value in a block document in the Prefect database:

    json_block.save(name=\"life-the-universe-everything\")\n

    If you'd like to update the block value stored for a given name, you can overwrite the existing block document by setting overwrite=True:

    json_block.save(overwrite=True)\n

    Tip

    In the above example, the name \"life-the-universe-everything\" is inferred from the existing block document.

    ... or save the same block value as a new block document by setting the name parameter to a new value:

    json_block.save(name=\"actually-life-the-universe-everything\")\n

    Utilizing the UI

    Block documents can also be created and updated via the Prefect UI.

    ","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#loading-blocks","title":"Loading blocks","text":"

    The name given when saving the value stored in the JSON block can be used when retrieving the value during a flow or task run:

    from prefect import flow\nfrom prefect.blocks.system import JSON\n\n@flow\ndef what_is_the_answer():\n    json_block = JSON.load(\"life-the-universe-everything\")\n    print(json_block.value[\"the_answer\"])\n\nwhat_is_the_answer() # 42\n

    Blocks can also be loaded with a unique slug that is a combination of a block type slug and a block document name.

    To load our JSON block document from before, we can run the following:

    from prefect.blocks.core import Block\n\njson_block = Block.load(\"json/life-the-universe-everything\")\nprint(json_block.value[\"the_answer\"])  # 42\n

    Sharing Blocks

    Blocks can also be loaded by fellow Workspace Collaborators, available on Prefect Cloud.

    ","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#deleting-blocks","title":"Deleting blocks","text":"

    You can delete a block by using the .delete() method on the block:

    from prefect.blocks.core import Block\nBlock.delete(\"json/life-the-universe-everything\")\n

    You can also use the CLI to delete specific blocks with a given slug or id:

    prefect block delete json/life-the-universe-everything\n
    prefect block delete --id <my-id>\n
    ","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#creating-new-block-types","title":"Creating new block types","text":"

    To create a custom block type, define a class that subclasses Block. The Block base class builds off of Pydantic's BaseModel, so custom blocks can be declared in the same manner as a Pydantic model.

    Here's a block that represents a cube and holds information about the length of each edge in inches:

    from prefect.blocks.core import Block\n\nclass Cube(Block):\n    edge_length_inches: float\n

    You can also include methods on a block to provide useful functionality. Here's the same cube block with methods to calculate the volume and surface area of the cube:

    from prefect.blocks.core import Block\n\nclass Cube(Block):\n    edge_length_inches: float\n\n    def get_volume(self):\n        return self.edge_length_inches**3\n\n    def get_surface_area(self):\n        return 6 * self.edge_length_inches**2\n

    Now the Cube block can be used to store different cube configurations that can later be used in a flow:

    from prefect import flow\n\nrubiks_cube = Cube(edge_length_inches=2.25)\nrubiks_cube.save(\"rubiks-cube\")\n\n@flow\ndef calculate_cube_surface_area(cube_name):\n    cube = Cube.load(cube_name)\n    print(cube.get_surface_area())\n\ncalculate_cube_surface_area(\"rubiks-cube\") # 30.375\n
    ","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#secret-fields","title":"Secret fields","text":"

    All block values are encrypted before being stored, but if you have values that you would not like visible in the UI or in logs, then you can use the SecretStr field type provided by Pydantic to automatically obfuscate those values. This can be useful for fields that are used to store credentials like passwords and API tokens.

    Here's an example of an AWSCredentials block that uses SecretStr:

    from typing import Optional\n\nfrom prefect.blocks.core import Block\nfrom pydantic import SecretStr  # if pydantic version >= 2.0, use: from pydantic.v1 import SecretStr\n\nclass AWSCredentials(Block):\n    aws_access_key_id: Optional[str] = None\n    aws_secret_access_key: Optional[SecretStr] = None\n    aws_session_token: Optional[str] = None\n    profile_name: Optional[str] = None\n    region_name: Optional[str] = None\n

    Because aws_secret_access_key has the SecretStr type hint assigned to it, the value of that field will not be exposed if the object is logged:

    aws_credentials_block = AWSCredentials(\n    aws_access_key_id=\"AKIAJKLJKLJKLJKLJKLJK\",\n    aws_secret_access_key=\"secret_access_key\"\n)\n\nprint(aws_credentials_block)\n# aws_access_key_id='AKIAJKLJKLJKLJKLJKLJK' aws_secret_access_key=SecretStr('**********') aws_session_token=None profile_name=None region_name=None\n

    You can also use the SecretDict field type provided by Prefect. This type allows you to add a dictionary field to your block that will have values at all levels automatically obfuscated in the UI or in logs. This is useful for blocks where the typing or structure of secret fields is not known until configuration time.

    Here's an example of a block that uses SecretDict:

    from typing import Dict\n\nfrom prefect.blocks.core import Block\nfrom prefect.blocks.fields import SecretDict\n\n\nclass SystemConfiguration(Block):\n    system_secrets: SecretDict\n    system_variables: Dict\n\n\nsystem_configuration_block = SystemConfiguration(\n    system_secrets={\n        \"password\": \"p@ssw0rd\",\n        \"api_token\": \"token_123456789\",\n        \"private_key\": \"<private key here>\",\n    },\n    system_variables={\n        \"self_destruct_countdown_seconds\": 60,\n        \"self_destruct_countdown_stop_time\": 7,\n    },\n)\n
    system_secrets will be obfuscated when system_configuration_block is displayed, but system_variables will be shown in plain-text:

    print(system_configuration_block)\n# SystemConfiguration(\n#   system_secrets=SecretDict('{'password': '**********', 'api_token': '**********', 'private_key': '**********'}'), \n#   system_variables={'self_destruct_countdown_seconds': 60, 'self_destruct_countdown_stop_time': 7}\n# )\n
    ","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#blocks-metadata","title":"Blocks metadata","text":"

    The way that a block is displayed can be controlled by metadata fields that can be set on a block subclass.
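    For example, a custom block might set a few of these fields to control how it appears in the UI. This is a minimal sketch; the display name, logo URL, and description values are placeholders:

    from prefect.blocks.core import Block\n\nclass Cube(Block):\n    # Placeholder metadata values for illustration\n    _block_type_name = \"Cube\"\n    _logo_url = \"https://example.com/cube-logo.png\"\n    _description = \"Stores the edge length of a cube, in inches.\"\n\n    edge_length_inches: float\n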

    Available metadata fields include:

    Property Description _block_type_name Display name of the block in the UI. Defaults to the class name. _block_type_slug Unique slug used to reference the block type in the API. Defaults to a lowercase, dash-delimited version of the block type name. _logo_url URL pointing to an image that should be displayed for the block type in the UI. Default to None. _description Short description of block type. Defaults to docstring, if provided. _code_example Short code snippet shown in UI for how to load/use block type. Default to first example provided in the docstring of the class, if provided.","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#nested-blocks","title":"Nested blocks","text":"

    Blocks are composable. This means that you can create a block that uses functionality from another block by declaring it as an attribute on the block that you're creating. It also means that configuration can be changed for each block independently, so configuration that changes on different time frames can be managed separately, and configuration can be shared across multiple use cases.

    To illustrate, here's an expanded AWSCredentials block that includes the ability to get an authenticated session via the boto3 library:

    from typing import Optional\n\nimport boto3\nfrom prefect.blocks.core import Block\nfrom pydantic import SecretStr\n\nclass AWSCredentials(Block):\n    aws_access_key_id: Optional[str] = None\n    aws_secret_access_key: Optional[SecretStr] = None\n    aws_session_token: Optional[str] = None\n    profile_name: Optional[str] = None\n    region_name: Optional[str] = None\n\n    def get_boto3_session(self):\n        return boto3.Session(\n            aws_access_key_id=self.aws_access_key_id,\n            # unwrap the SecretStr so boto3 receives a plain string\n            aws_secret_access_key=(\n                self.aws_secret_access_key.get_secret_value()\n                if self.aws_secret_access_key\n                else None\n            ),\n            aws_session_token=self.aws_session_token,\n            profile_name=self.profile_name,\n            region_name=self.region_name,\n        )\n

    The AWSCredentials block can be used within an S3Bucket block to provide authentication when interacting with an S3 bucket:

    import io\n\nclass S3Bucket(Block):\n    bucket_name: str\n    credentials: AWSCredentials\n\n    def read(self, key: str) -> bytes:\n        s3_client = self.credentials.get_boto3_session().client(\"s3\")\n\n        stream = io.BytesIO()\n        s3_client.download_fileobj(Bucket=self.bucket_name, Key=key, Fileobj=stream)\n\n        stream.seek(0)\n        output = stream.read()\n\n        return output\n\n    def write(self, key: str, data: bytes) -> None:\n        s3_client = self.credentials.get_boto3_session().client(\"s3\")\n        stream = io.BytesIO(data)\n        s3_client.upload_fileobj(stream, Bucket=self.bucket_name, Key=key)\n

    You can use this S3Bucket block with previously saved AWSCredentials block values in order to interact with the configured S3 bucket:

    my_s3_bucket = S3Bucket(\n    bucket_name=\"my_s3_bucket\",\n    credentials=AWSCredentials.load(\"my_aws_credentials\")\n)\n\nmy_s3_bucket.save(\"my_s3_bucket\")\n

    Saving block values like this links the values of the two blocks, so any changes to the values stored for the AWSCredentials block named my_aws_credentials will be reflected the next time the block values for the S3Bucket block named my_s3_bucket are loaded.

    You can also hard-code values for nested blocks by not saving the child blocks first:

    my_s3_bucket = S3Bucket(\n    bucket_name=\"my_s3_bucket\",\n    credentials=AWSCredentials(\n        aws_access_key_id=\"AKIAJKLJKLJKLJKLJKLJK\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n)\n\nmy_s3_bucket.save(\"my_s3_bucket\")\n

    In the above example, the values for AWSCredentials are saved with my_s3_bucket and will not be usable with any other blocks.

    ","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#handling-updates-to-custom-block-types","title":"Handling updates to custom Block types","text":"

    Let's say that you now want to add a bucket_folder field to your custom S3Bucket block that represents the default path to read and write objects from (this field exists on our implementation).

    We can add the new field to the class definition:

    class S3Bucket(Block):\n    bucket_name: str\n    credentials: AWSCredentials\n    bucket_folder: str = None\n    ...\n

    Then register the updated block type with either Prefect Cloud or your self-hosted Prefect server.

    If you have any existing blocks of this type that were created before the update and you'd prefer to not re-create them, you can migrate them to the new version of your block type by adding the missing values:

    # Bypass Pydantic validation to allow your local Block class to load the old block version\nmy_s3_bucket_block = S3Bucket.load(\"my-s3-bucket\", validate=False)\n\n# Set the new field to an appropriate value\nmy_s3_bucket_block.bucket_folder = \"my-default-bucket-path\"\n\n# Overwrite the old block values and update the expected fields on the block\nmy_s3_bucket_block.save(\"my-s3-bucket\", overwrite=True)\n
    ","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#registering-blocks-for-use-in-the-prefect-ui","title":"Registering blocks for use in the Prefect UI","text":"

    Blocks can be registered from a Python module available in the current virtual environment with a CLI command like this:

    prefect block register --module prefect_aws.credentials\n

    This command is useful for registering all blocks found in the credentials module within Prefect Integrations.

    Or, if a block has been created in a .py file, the block can also be registered with the CLI command:

    prefect block register --file my_block.py\n

    The registered block will then be available in the Prefect UI for configuration.

    ","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/deployments-block-based/","title":"Block Based Deployments","text":"

    Workers are recommended

    This page is about the block-based deployment model. The Work Pools and Workers based deployment model simplifies the specification of a flow's infrastructure and runtime environment. If you have existing agents, you can upgrade from agents to workers to significantly enhance the experience of deploying flows.

    We encourage you to check out the new deployment experience with guided command line prompts and convenient CI/CD with prefect.yaml files.

    With remote storage blocks, you can package not only your flow code script but also any supporting files, including your custom modules, SQL scripts and any configuration files needed in your project.

    To define how your flow execution environment should be configured, you may either reference pre-configured infrastructure blocks or let Prefect create those automatically for you as anonymous blocks (this happens when you specify the infrastructure type using the --infra flag during the build process).
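    For example, a pre-configured infrastructure block can be created and saved in Python and then referenced at build time. This is a minimal sketch; the environment variable and block name are placeholders:

    from prefect.infrastructure import Process\n\n# Saving with the name \"my-process\" produces the slug process/my-process\nprocess_block = Process(env={\"MY_ENV_VAR\": \"value\"})\nprocess_block.save(\"my-process\")\n

    You could then reference this block during the build step with -ib process/my-process.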

    Work queue affinity improved starting from Prefect 2.0.5

    Until Prefect 2.0.4, tags were used to associate flow runs with work queues. Starting in Prefect 2.0.5, tag-based work queues are deprecated. Instead, work queue names are used to explicitly direct flow runs from deployments into queues.

    Note that backward compatibility is maintained and work queues that use tag-based matching can still be created and will continue to work. However, those work queues are now considered legacy and we encourage you to use the new behavior by specifying work queues explicitly on agents and deployments.

    See Agents & Work Pools for details.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#deployments-and-flows","title":"Deployments and flows","text":"

    Each deployment is associated with a single flow, but any given flow can be referenced by multiple deployments.

    Deployments are uniquely identified by the combination of: flow_name/deployment_name.

    graph LR\n    F(\"my_flow\"):::yellow -.-> A(\"Deployment 'daily'\"):::tan --> W(\"my_flow/daily\"):::fgreen\n    F -.-> B(\"Deployment 'weekly'\"):::gold  --> X(\"my_flow/weekly\"):::green\n    F -.-> C(\"Deployment 'ad-hoc'\"):::dgold --> Y(\"my_flow/ad-hoc\"):::dgreen\n    F -.-> D(\"Deployment 'trigger-based'\"):::dgold --> Z(\"my_flow/trigger-based\"):::dgreen\n\n    classDef gold fill:goldenrod,stroke:goldenrod,stroke-width:4px,color:white\n    classDef yellow fill:gold,stroke:gold,stroke-width:4px\n    classDef dgold fill:darkgoldenrod,stroke:darkgoldenrod,stroke-width:4px,color:white\n    classDef tan fill:tan,stroke:tan,stroke-width:4px,color:white\n    classDef fgreen fill:forestgreen,stroke:forestgreen,stroke-width:4px,color:white\n    classDef green fill:green,stroke:green,stroke-width:4px,color:white\n    classDef dgreen fill:darkgreen,stroke:darkgreen,stroke-width:4px,color:white

    This enables you to run a single flow with different parameters, based on multiple schedules and triggers, and in different environments. This also enables you to run different versions of the same flow for testing and production purposes.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#deployment-definition","title":"Deployment definition","text":"

    A deployment definition captures the settings for creating a deployment object on the Prefect API. You can create a deployment definition in either of two ways:

    • Run the prefect deployment build CLI command with deployment options to create a deployment.yaml deployment definition file, then run prefect deployment apply to create a deployment on the API using the settings in deployment.yaml.
    • Define a Deployment Python object, specifying the deployment options as properties of the object, then building and applying the object using methods of Deployment.

    The minimum required information to create a deployment includes:

    • The path and filename of the file containing the flow script.
    • The name of the entrypoint flow function \u2014 this is the flow function that starts the flow and calls any additional tasks or subflows.
    • The name of the deployment.

    You may provide additional settings for the deployment. Any settings you do not explicitly specify are inferred from defaults.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-deployment-on-the-cli","title":"Create a deployment on the CLI","text":"

    To create a deployment on the CLI, there are two steps:

    1. Build the deployment definition file deployment.yaml. This step includes uploading your flow to its configured remote storage location, if one is specified.
    2. Create the deployment on the API.
    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#build-the-deployment","title":"Build the deployment","text":"

    To build the deployment definition file deployment.yaml, run the prefect deployment build Prefect CLI command from the folder containing your flow script and any dependencies of the script.

    $ prefect deployment build [OPTIONS] PATH\n

    The path to the flow is specified in the format path-to-script:flow-function-name \u2014 the path and filename of the flow script file, a colon, then the name of the entrypoint flow function.

    For example:

    $ prefect deployment build -n marvin -p default-agent-pool -q test flows/marvin.py:say_hi\n

    When you run this command, Prefect:

    • Creates a marvin_flow-deployment.yaml file for your deployment based on your flow code and options.
    • Uploads your flow files to the configured storage location (local by default).
    • Submits your deployment to the work queue test. The work queue test will be created if it doesn't exist.

    Uploading files may require storage filesystem libraries

    Note that the appropriate filesystem library supporting the storage location must be installed prior to building a deployment with a storage block. For example, the AWS S3 Storage block requires the s3fs library.

    Ignore files or directories from a deployment

    By default, Prefect uploads all files in the current folder to the configured storage location (local by default) when you build a deployment.

    If you want to omit certain files or directories from your deployments, add a .prefectignore file to the root directory.

    Similar to other .ignore files, the syntax supports pattern matching, so an entry of *.pyc will ensure all .pyc files are ignored by the deployment call when uploading to remote storage.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#deployment-build-options","title":"Deployment build options","text":"

    You may specify additional options to further customize your deployment.

    Options Description PATH Path, filename, and flow name of the flow definition. (Required) --apply, -a When provided, automatically registers the resulting deployment with the API. --cron TEXT A cron string that will be used to set a CronSchedule on the deployment. For example, --cron \"*/1 * * * *\" to create flow runs from that deployment every minute. --help Display help for available commands and options. --infra-block TEXT, -ib The infrastructure block to use, in block-type/block-name format. --infra, -i The infrastructure type to use. (Default is Process) --interval INTEGER An integer specifying an interval (in seconds) that will be used to set an IntervalSchedule on the deployment. For example, --interval 60 to create flow runs from that deployment every minute. --name TEXT, -n The name of the deployment. --output TEXT, -o Optional location for the YAML manifest generated as a result of the build step. You can version-control that file, but it's not required since the CLI can generate everything you need to define a deployment. --override TEXT One or more optional infrastructure overrides provided as a dot delimited path. For example, specify an environment variable: env.env_key=env_value. For Kubernetes, specify customizations: customizations='[{\"op\": \"add\",\"path\": \"/spec/template/spec/containers/0/resources/limits\", \"value\": {\"memory\": \"8Gi\",\"cpu\": \"4000m\"}}]' (note the string format). --param An optional parameter override, values are parsed as JSON strings. For example, --param question=ultimate --param answer=42. --params An optional parameter override in a JSON string format. For example, --params=\\'{\"question\": \"ultimate\", \"answer\": 42}\\'. --path An optional path to specify a subdirectory of remote storage to upload to, or to point to a subdirectory of a locally stored flow. --pool TEXT, -p The work pool that will handle this deployment's runs. \u2502 --rrule TEXT An RRule that will be used to set an RRuleSchedule on the deployment. For example, --rrule 'FREQ=HOURLY;BYDAY=MO,TU,WE,TH,FR;BYHOUR=9,10,11,12,13,14,15,16,17' to create flow runs from that deployment every hour but only during business hours. --skip-upload When provided, skips uploading this deployment's files to remote storage. --storage-block TEXT, -sb The storage block to use, in block-type/block-name or block-type/block-name/path format. Note that the appropriate library supporting the storage filesystem must be installed. --tag TEXT, -t One or more optional tags to apply to the deployment. --version TEXT, -v An optional version for the deployment. This could be a git commit hash if you use this command from a CI/CD pipeline. --work-queue TEXT, -q The work queue that will handle this deployment's runs. It will be created if it doesn't already exist. Defaults to None. Note that if a work queue is not set, work will not be scheduled.","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#block-identifiers","title":"Block identifiers","text":"

    When specifying a storage block with the -sb or --storage-block flag, you may specify the block by passing its slug. The storage block slug is formatted as block-type/block-name.

    For example, s3/example-block is the slug for an S3 block named example-block.

    In addition, when passing the storage block slug, you may pass just the block slug or the block slug and a path.

    • block-type/block-name indicates just the block, including any path included in the block configuration.
    • block-type/block-name/path indicates a storage path in addition to any path included in the block configuration.
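    For example, the s3/example-block slug above could come from a storage block created and saved in Python. This is a minimal sketch; the bucket path is a placeholder:

    from prefect.filesystems import S3\n\n# Saving with the name \"example-block\" produces the slug s3/example-block\ns3_block = S3(bucket_path=\"my-bucket/flows\")\ns3_block.save(\"example-block\")\n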

    When specifying an infrastructure block with the -ib or --infra-block flag, you specify the block by passing its slug. The infrastructure block slug is formatted as block-type/block-name.

    Block name Block class name Block type for a slug Azure Azure azure Docker Container DockerContainer docker-container GitHub GitHub github GCS GCS gcs Kubernetes Job KubernetesJob kubernetes-job Process Process process Remote File System RemoteFileSystem remote-file-system S3 S3 s3 SMB SMB smb GitLab Repository GitLabRepository gitlab-repository

    Note that the appropriate library supporting the storage filesystem must be installed prior to building a deployment with a storage block. For example, the AWS S3 Storage block requires the s3fs library. See Storage for more information.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#deploymentyaml","title":"deployment.yaml","text":"

    A deployment's YAML file configures additional settings needed to create a deployment on the server.

    A single flow may have multiple deployments created for it, with different schedules, tags, and so on. A single flow definition may have multiple deployment YAML files referencing it, each specifying different settings. The only requirement is that each deployment must have a unique name.

    The default {flow-name}-deployment.yaml filename may be edited as needed with the --output flag to prefect deployment build.

    ###\n### A complete description of a Prefect Deployment for flow 'Cat Facts'\n###\nname: catfact\ndescription: null\nversion: c0fc95308d8137c50d2da51af138aa23\n# The work queue that will handle this deployment's runs\nwork_queue_name: test\nwork_pool_name: null\ntags: []\nparameters: {}\nschedule: null\ninfra_overrides: {}\ninfrastructure:\n  type: process\n  env: {}\n  labels: {}\n  name: null\n  command:\n  - python\n  - -m\n  - prefect.engine\n  stream_output: true\n###\n### DO NOT EDIT BELOW THIS LINE\n###\nflow_name: Cat Facts\nmanifest_path: null\nstorage: null\npath: /Users/terry/test/testflows/catfact\nentrypoint: catfact.py:catfacts_flow\nparameter_openapi_schema:\n  title: Parameters\n  type: object\n  properties:\n    url:\n      title: url\n  required:\n  - url\n  definitions: null\n

    Editing deployment.yaml

    Note the big DO NOT EDIT comment in your deployment's YAML: In practice, anything above this block can be freely edited before running prefect deployment apply to create the deployment on the API.

    We recommend editing most of these fields from the CLI or Prefect UI for convenience.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#parameters-in-deployments","title":"Parameters in deployments","text":"

    You may provide default parameter values in the deployment.yaml configuration, and these parameter values will be used for flow runs based on the deployment.

    To configure default parameter values, add them to the parameters: {} line of deployment.yaml as JSON key-value pairs. The parameter list configured in deployment.yaml must match the parameters expected by the entrypoint flow function.

    parameters: {\"name\": \"Marvin\", \"num\": 42, \"url\": \"https://catfact.ninja/fact\"}\n

    Passing **kwargs as flow parameters

    You may pass **kwargs as a deployment parameter by providing a \"kwargs\": {} JSON object containing the key-value pairs of any keyword arguments to pass.

    parameters: {\"name\": \"Marvin\", \"kwargs\":{\"cattype\":\"tabby\",\"num\": 42}\n

    You can edit default parameters for deployments in the Prefect UI, and you can override default parameter values when creating ad-hoc flow runs via the Prefect UI.

    To edit parameters in the Prefect UI, go to the details page for a deployment, then select Edit from the commands menu. If you change parameter values, the new values are used for all future flow runs based on the deployment.

    To create an ad-hoc flow run with different parameter values, go to the details page for a deployment, select Run, then select Custom. You will be able to provide custom values for any editable deployment fields. Under Parameters, select Custom. Provide the new values, then select Save. Select Run to begin the flow run with custom values.

    If you want the Prefect API to verify the parameter values passed to a flow run against the schema defined by parameter_openapi_schema, set enforce_parameter_schema to true.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-deployment","title":"Create a deployment","text":"

    When you've configured deployment.yaml for a deployment, you can create the deployment on the API by running the prefect deployment apply Prefect CLI command.

    $ prefect deployment apply catfacts_flow-deployment.yaml\n

    For example:

    $ prefect deployment apply ./catfacts_flow-deployment.yaml\nSuccessfully loaded 'catfact'\nDeployment '76a9f1ac-4d8c-4a92-8869-615bec502685' successfully created.\n

    prefect deployment apply accepts an optional --upload flag that, when provided, uploads this deployment's files to remote storage.

    Once the deployment has been created, you'll see it in the Prefect UI and can inspect it using the CLI.

    $ prefect deployment ls\n                               Deployments\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name                           \u2503 ID                                   \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 Cat Facts/catfact              \u2502 76a9f1ac-4d8c-4a92-8869-615bec502685 \u2502\n\u2502 leonardo_dicapriflow/hello_leo \u2502 fb4681d7-aa5a-4617-bf6f-f67e6f964984 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n

    When you run a deployed flow with Prefect, the following happens:

    • The user runs the deployment, which creates a flow run. (The API creates flow runs automatically for deployments with schedules.)
    • An agent picks up the flow run from a work queue and uses an infrastructure block to create infrastructure for the run.
    • The flow run executes within the infrastructure.

    Agents and work pools enable the Prefect orchestration engine and API to run deployments in your local execution environments. To execute deployed flow runs you need to configure at least one agent.

    Scheduled flow runs

    Scheduled flow runs will not be created unless the scheduler is running with either Prefect Cloud or a local Prefect server started with prefect server start.

    Scheduled flow runs will not run unless an appropriate agent and work pool are configured.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-deployment-from-a-python-object","title":"Create a deployment from a Python object","text":"

    You can also create deployments from Python scripts by using the prefect.deployments.Deployment class.

    Create a new deployment using configuration defaults for an imported flow:

    from my_project.flows import my_flow\nfrom prefect.deployments import Deployment\n\ndeployment = Deployment.build_from_flow(\n    flow=my_flow,\n    name=\"example-deployment\", \n    version=1, \n    work_queue_name=\"demo\",\n    work_pool_name=\"default-agent-pool\",\n)\ndeployment.apply()\n

    Create a new deployment with a pre-defined storage block and an infrastructure override:

    from my_project.flows import my_flow\nfrom prefect.deployments import Deployment\nfrom prefect.filesystems import S3\n\nstorage = S3.load(\"dev-bucket\") # load a pre-defined block\n\ndeployment = Deployment.build_from_flow(\n    flow=my_flow,\n    name=\"s3-example\",\n    version=2,\n    work_queue_name=\"aws\",\n    work_pool_name=\"default-agent-pool\",\n    storage=storage,\n    infra_overrides={\n        \"env\": {\n            \"ENV_VAR\": \"value\"\n        }\n    },\n)\n\ndeployment.apply()\n

    If you have settings that you want to share from an existing deployment you can load those settings:

    deployment = Deployment(\n    name=\"a-name-you-used\", \n    flow_name=\"name-of-flow\"\n)\ndeployment.load() # loads server-side settings\n

    Once the existing deployment settings are loaded, you may update them as needed by changing deployment properties.
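    For example, here is a minimal sketch of loading an existing deployment, changing a property, and re-applying it; the deployment name, flow name, and tag value are placeholders:

    from prefect.deployments import Deployment\n\ndeployment = Deployment(\n    name=\"a-name-you-used\",\n    flow_name=\"name-of-flow\"\n)\ndeployment.load()  # loads server-side settings\n\n# Update a property, then re-apply to persist the change\ndeployment.tags = [\"production\"]\ndeployment.apply()\n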

    View all of the parameters for the Deployment object in the Python API documentation.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#deployment-api-representation","title":"Deployment API representation","text":"

    When you create a deployment, it is constructed from deployment definition data you provide and additional properties set by client-side utilities.

    Deployment properties include:

    Property Description id An auto-generated UUID ID value identifying the deployment. created A datetime timestamp indicating when the deployment was created. updated A datetime timestamp indicating when the deployment was last changed. name The name of the deployment. version The version of the deployment description A description of the deployment. flow_id The id of the flow associated with the deployment. schedule An optional schedule for the deployment. is_schedule_active Boolean indicating whether the deployment schedule is active. Default is True. infra_overrides One or more optional infrastructure overrides parameters An optional dictionary of parameters for flow runs scheduled by the deployment. tags An optional list of tags for the deployment. work_queue_name The optional work queue that will handle the deployment's run parameter_openapi_schema JSON schema for flow parameters. enforce_parameter_schema Whether the API should validate the parameters passed to a flow run against the schema defined by parameter_openapi_schema path The path to the deployment.yaml file entrypoint The path to a flow entry point storage_document_id Storage block configured for the deployment. infrastructure_document_id Infrastructure block configured for the deployment.

    You can inspect a deployment using the CLI with the prefect deployment inspect command, referencing the deployment with <flow_name>/<deployment_name>.

    $ prefect deployment inspect 'Cat Facts/catfact'\n{\n    'id': '76a9f1ac-4d8c-4a92-8869-615bec502685',\n    'created': '2022-07-26T03:48:14.723328+00:00',\n    'updated': '2022-07-26T03:50:02.043238+00:00',\n    'name': 'catfact',\n    'version': '899b136ebc356d58562f48d8ddce7c19',\n    'description': None,\n    'flow_id': '2c7b36d1-0bdb-462e-bb97-f6eb9fef6fd5',\n    'schedule': None,\n    'is_schedule_active': True,\n    'infra_overrides': {},\n    'parameters': {},\n    'tags': [],\n    'work_queue_name': 'test',\n    'parameter_openapi_schema': {\n        'title': 'Parameters',\n        'type': 'object',\n        'properties': {'url': {'title': 'url'}},\n        'required': ['url']\n    },\n    'path': '/Users/terry/test/testflows/catfact',\n    'entrypoint': 'catfact.py:catfacts_flow',\n    'manifest_path': None,\n    'storage_document_id': None,\n    'infrastructure_document_id': 'f958db1c-b143-4709-846c-321125247e07',\n    'infrastructure': {\n        'type': 'process',\n        'env': {},\n        'labels': {},\n        'name': None,\n        'command': ['python', '-m', 'prefect.engine'],\n        'stream_output': True\n    }\n}\n
    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-flow-run-from-a-deployment","title":"Create a flow run from a deployment","text":"","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-flow-run-with-a-schedule","title":"Create a flow run with a schedule","text":"

    If you specify a schedule for a deployment, the deployment will execute its flow automatically on that schedule as long as a Prefect server and agent are running. Prefect Cloud creates scheduled flow runs automatically, and they will run on schedule if an agent is configured to pick up flow runs for the deployment.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-flow-run-with-an-event-trigger","title":"Create a flow run with an event trigger","text":"

    Deployment triggers are only available in Prefect Cloud

    Deployments can optionally take a trigger specification, which will configure an automation to run the deployment based on the presence or absence of events, and optionally pass event data into the deployment run as parameters via Jinja templating.

    triggers:\n  - enabled: true\n    match:\n      prefect.resource.id: prefect.flow-run.*\n    expect:\n      - prefect.flow-run.Completed\n    match_related:\n      prefect.resource.name: prefect.flow.etl-flow\n      prefect.resource.role: flow\n    parameters:\n      param_1: \"{{ event }}\"\n

    When applied, this deployment will start a flow run upon the completion of the upstream flow specified in the match_related key, with the flow run passed in as a parameter. Triggers can be configured to respond to the presence or absence of arbitrary internal or external events. The trigger system and API are detailed in Automations.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-flow-run-with-prefect-ui","title":"Create a flow run with Prefect UI","text":"

    In the Prefect UI, you can click the Run button next to any deployment to execute an ad hoc flow run for that deployment.

    The prefect deployment CLI command provides commands for managing and running deployments locally.

    Command Description apply Create or update a deployment from a YAML file. build Generate a deployment YAML from /path/to/file.py:flow_function. delete Delete a deployment. inspect View details about a deployment. ls View all deployments or deployments for specific flows. pause-schedule Pause schedule of a given deployment. resume-schedule Resume schedule of a given deployment. run Create a flow run for the given flow and deployment. schedule Commands for interacting with your deployment's schedules. set-schedule Set schedule for a given deployment.

    Deprecated Schedule Commands

    The pause-schedule, resume-schedule, and set-schedule commands are deprecated due to the introduction of multi-schedule support for deployments. Use the new prefect deployment schedule command for enhanced flexibility and control over your deployment schedules.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-flow-run-in-a-python-script","title":"Create a flow run in a Python script","text":"

    You can create a flow run from a deployment in a Python script with the run_deployment function.

    from prefect.deployments import run_deployment\n\n\ndef main():\n    response = run_deployment(name=\"flow-name/deployment-name\")\n    print(response)\n\n\nif __name__ == \"__main__\":\n   main()\n

    PREFECT_API_URL setting for agents

    You'll need to configure agents and work pools that can create flow runs for deployments in remote environments. PREFECT_API_URL must be set for the environment in which your agent is running.

    If you want the agent to communicate with Prefect Cloud from a remote execution environment such as a VM or Docker container, you must configure PREFECT_API_URL in that environment.
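
    For example, you can point a remote environment at a self-hosted Prefect server with the prefect config CLI; the hostname below is a placeholder:

    prefect config set PREFECT_API_URL=\"http://<your-prefect-server>:4200/api\"\n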

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#examples","title":"Examples","text":"
    • How to deploy Prefect flows to AWS
    • How to deploy Prefect flows to GCP
    • How to deploy Prefect flows to Azure
    • How to deploy Prefect flows using files stored locally
    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments/","title":"Deployments","text":"

    Deployments are server-side representations of flows. They store the crucial metadata needed for remote orchestration including when, where, and how a workflow should run. Deployments elevate workflows from functions that you must call manually to API-managed entities that can be triggered remotely.

    Here we will focus largely on the metadata that defines a deployment and how it is used. Different ways of creating a deployment populate these fields differently.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#overview","title":"Overview","text":"

    Every Prefect deployment references one and only one \"entrypoint\" flow (though that flow may itself call any number of subflows). Different deployments may reference the same underlying flow, a useful pattern when developing or promoting workflow changes through staged environments.

    The complete schema that defines a deployment is as follows:

    class Deployment:\n    \"\"\"\n    Structure of the schema defining a deployment\n    \"\"\"\n\n    # required defining data\n    name: str \n    flow_id: UUID\n    entrypoint: str\n    path: str = None\n\n    # workflow scheduling and parametrization\n    parameters: Optional[Dict[str, Any]] = None\n    parameter_openapi_schema: Optional[Dict[str, Any]] = None\n    schedules: list[Schedule] = None\n    paused: bool = False\n    trigger: Trigger = None\n\n    # metadata for bookkeeping\n    version: str = None\n    description: str = None\n    tags: list = None\n\n    # worker-specific fields\n    work_pool_name: str = None\n    work_queue_name: str = None\n    infra_overrides: Optional[Dict[str, Any]] = None\n    pull_steps: Optional[Dict[str, Any]] = None\n

    All methods for creating Prefect deployments are interfaces for populating this schema. Let's look at each section in turn.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#required-data","title":"Required data","text":"

    Deployments universally require both a name and a reference to an underlying Flow. In almost all instances of deployment creation, users do not need to concern themselves with the flow_id as most interfaces will only need the flow's name. Note that the deployment name is not required to be unique across all deployments but is required to be unique for a given flow ID. As a consequence, you will often see references to the deployment's unique identifying name {FLOW_NAME}/{DEPLOYMENT_NAME}. For example, triggering a run of a deployment from the Prefect CLI can be done via:

    prefect deployment run my-first-flow/my-first-deployment\n

    The other two fields are less obvious:

    • path: the path can generally be interpreted as the runtime working directory for the flow. For example, if a deployment references a workflow defined within a Docker image, the path will be the absolute path to the parent directory where that workflow will run anytime the deployment is triggered. This interpretation is more subtle in the case of flows defined in remote filesystems.
    • entrypoint: the entrypoint of a deployment is a relative reference to a function decorated as a flow that exists on some filesystem. It is always specified relative to the path. Entrypoints use Python's standard path-to-object syntax (e.g., path/to/file.py:function_name or simply path:object).

    The entrypoint must reference the same flow as the flow ID.

    Note that Prefect requires that deployments reference flows defined within Python files. Flows defined within interactive REPLs or notebooks cannot currently be deployed as such. They are still valid flows that will be monitored by the API and observable in the UI whenever they are run, but Prefect cannot trigger them.

    Deployments do not contain code definitions

    Deployment metadata references code that exists in potentially diverse locations within your environment. This separation of concerns means that your flow code stays within your storage and execution infrastructure and never lives on the Prefect server or database.

    This is the heart of the Prefect hybrid model: there's a boundary between your proprietary assets, such as your flow code, and the Prefect backend (including Prefect Cloud).

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#scheduling-and-parametrization","title":"Scheduling and parametrization","text":"

    One of the primary motivations for creating deployments of flows is to remotely schedule and trigger them. Just as flows can be called as functions with different input values, so can deployments be triggered or scheduled with different values through the use of parameters.

    These fields capture the necessary metadata to perform such actions (a short example follows the list):

    • schedules: a list of schedule objects. Most of the convenient interfaces for creating deployments allow users to avoid creating this object themselves. For example, when updating a deployment schedule in the UI basic information such as a cron string or interval is all that's required.
    • trigger (Cloud-only): triggers allow you to define event-based rules for running a deployment. For more information see Automations.
    • parameter_openapi_schema: an OpenAPI compatible schema that defines the types and defaults for the flow's parameters. This is used by both the UI and the backend to expose options for creating manual runs as well as type validation.
    • parameters: default values of flow parameters that this deployment will pass on each run. These can be overwritten through a trigger or when manually creating a custom run.
    • enforce_parameter_schema: a boolean flag that determines whether the API should validate the parameters passed to a flow run against the schema defined by parameter_openapi_schema.
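
    As a minimal sketch of how these fields get populated, the serve interface accepts a schedule and default parameters directly; the flow, deployment name, cron string, and parameter values below are made up for illustration:

    from prefect import flow\n\n@flow\ndef daily_report(customer: str = \"acme\"):\n    print(f\"Reporting for {customer}\")\n\nif __name__ == \"__main__\":\n    # creates a deployment with a cron schedule and default parameters,\n    # then polls for scheduled runs in a long-running process\n    daily_report.serve(\n        name=\"daily-report\",\n        cron=\"0 9 * * *\",\n        parameters={\"customer\": \"initech\"},\n    )\n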

    Scheduling is asynchronous and decoupled

    Because deployments are nothing more than metadata, runs can be created at any time. Note that pausing a schedule, updating your deployment, and other actions reset your auto-scheduled runs.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#running-a-deployed-flow-from-within-python-flow-code","title":"Running a deployed flow from within Python flow code","text":"

    Prefect provides a run_deployment function that can be used to schedule the run of an existing deployment when your Python code executes.

    from prefect.deployments import run_deployment\n\ndef main():\n    run_deployment(name=\"my_flow_name/my_deployment_name\")\n

    Run a deployment without blocking

    By default, run_deployment blocks until the scheduled flow run finishes executing. Pass timeout=0 to return immediately and not block.
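
    For example (the deployment name is a placeholder):

    from prefect.deployments import run_deployment\n\n# returns as soon as the run is scheduled instead of waiting for it to finish\nflow_run = run_deployment(name=\"my_flow_name/my_deployment_name\", timeout=0)\n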

    If you call run_deployment from within a flow or task, the scheduled flow run will be linked to the calling flow run (or the calling task's flow run) as a subflow run by default.

    Subflow runs have different behavior than regular flow runs. For example, a subflow run can't be suspended independently of its parent flow. If you'd rather not link the scheduled flow run to the calling flow or task run, you can disable this behavior by passing as_subflow=False:

    from prefect import flow\nfrom prefect.deployments import run_deployment\n\n\n@flow\ndef my_flow():\n    # The scheduled flow run will not be linked to this flow as a subflow.\n    run_deployment(name=\"my_other_flow/my_deployment_name\", as_subflow=False)\n

    The return value of run_deployment is a FlowRun object containing metadata about the scheduled run. You can use this object to retrieve information about the run after calling run_deployment:

    from prefect import get_client\nfrom prefect.deployments import run_deployment\n\nasync def main():\n    flow_run = await run_deployment(name=\"my_flow_name/my_deployment_name\")\n    flow_run_id = flow_run.id\n\n    # If you save the flow run's ID, you can use it later to retrieve\n    # flow run metadata again, e.g. to check if it's completed.\n    async with get_client() as client:\n        flow_run = await client.read_flow_run(flow_run_id)\n        print(f\"Current state of the flow run: {flow_run.state}\")\n

    Using the Prefect client

    For more information on using the Prefect client to interact with Prefect's REST API, see our guide.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#versioning-and-bookkeeping","title":"Versioning and bookkeeping","text":"

    Versions, descriptions and tags are omnipresent fields throughout Prefect that can be easy to overlook. However, putting some extra thought into how you use these fields can pay dividends down the road.

    • version: versions are always set by the client and can be any arbitrary string. We recommend tightly coupling this field on your deployments to your software development lifecycle. For example, if you leverage git to manage code changes, use either a tag or commit hash in this field. If you don't set a value for the version, Prefect will compute a hash.
    • description: the description field of a deployment is a place to provide rich reference material for downstream stakeholders such as intended use and parameter documentation. Markdown formatting will be rendered in the Prefect UI, allowing for section headers, links, tables, and other formatting. If not provided explicitly, Prefect will use the docstring of your flow function as a default value.
    • tags: tags are a mechanism for grouping related work together across a diverse set of objects. Tags set on a deployment will be inherited by that deployment's flow runs. These tags can then be used to filter what runs are displayed on the primary UI dashboard, allowing you to customize different views into your work. In addition, in Prefect Cloud you can easily find objects through searching by tag.

    All of these bits of metadata can be leveraged to great effect by injecting them into the processes that Prefect is orchestrating. For example, you can use the run ID and version to organize files that you produce from your workflows, or associate your flow run's tags with the metadata of a job it orchestrates. This metadata is available during execution through Prefect runtime, as sketched below.
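
    For instance, a small sketch that reads run metadata at execution time; the output path scheme is hypothetical:

    from prefect import flow\nfrom prefect.runtime import flow_run\n\n@flow\ndef produce_report():\n    # use run metadata to organize workflow outputs\n    run_id = flow_run.id\n    tags = flow_run.tags\n    print(f\"writing results to reports/{run_id} with tags {tags}\")\n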

    Everything has a version

    Deployments aren't the only entity in Prefect with a version attached; both flows and tasks also have versions that can be set through their respective decorators. These versions will be sent to the API anytime the flow or task is run and thereby allow you to audit your changes across all levels.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#workers-and-work-pools","title":"Workers and Work Pools","text":"

    Workers and work pools are an advanced deployment pattern that allow you to dynamically provision infrastructure for each flow run. In addition, the work pool job template interface allows users to create and govern opinionated interfaces to their workflow infrastructure. To do this, a deployment using workers needs to evaluate the following fields:

    • work_pool_name: the name of the work pool this deployment will be associated with. Work pool types mirror infrastructure types and therefore the decision here affects the options available for the other fields.
    • work_queue_name: if you are using work queues to either manage priority or concurrency, you can associate a deployment with a specific queue within a work pool using this field.
    • infra_overrides: often called job_variables within various interfaces, this field allows deployment authors to customize whatever infrastructure options have been exposed on this work pool. This field is often used for things such as Docker image names, Kubernetes annotations and limits, and environment variables.
    • pull_steps: a JSON description of steps that should be performed to retrieve flow code or configuration and prepare the runtime environment for workflow execution.

    Pull steps allow users to highly decouple their workflow architecture. For example, a common use of pull steps is to dynamically pull code from remote filesystems such as GitHub with each run of their deployment.
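
    As a rough sketch of how these fields fit together in a prefect.yaml file (the pool name, repository URL, and variable values here are hypothetical):

    pull:\n  - prefect.deployments.steps.git_clone:\n      repository: https://github.com/org/repo.git\n      branch: main\n\ndeployments:\n  - name: etl\n    entrypoint: flows/etl.py:etl_flow\n    work_pool:\n      name: my-kubernetes-pool\n      work_queue_name: default\n      job_variables:\n        env:\n          ENV_VAR: value\n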

    For more information see the guide to deploying with a worker.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#two-approaches-to-deployments","title":"Two approaches to deployments","text":"

    There are two primary ways to deploy flows with Prefect, differentiated by how much control Prefect has over the infrastructure in which the flows run.

    In one setup, deploying Prefect flows is analogous to deploying a webserver - users author their workflows and then start a long-running process (often within a Docker container) that is responsible for managing all of the runs for the associated deployment(s).

    In the other setup, you do a little extra up-front work to set up a work pool and a base job template that defines how individual flow runs will be submitted to infrastructure.

    Prefect provides several types of work pools corresponding to different types of infrastructure. Prefect Cloud provides a Prefect Managed work pool option that is the simplest way to run workflows remotely. A cloud-provider account, such as AWS, is not required with a Prefect Managed work pool.

    Some work pool types require a client-side worker to submit job definitions to the appropriate infrastructure with each run.

    Each of these setups can support production workloads. The choice ultimately boils down to your use case and preferences. Read further to decide which setup is best for your situation.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#serving-flows-on-long-lived-infrastructure","title":"Serving flows on long-lived infrastructure","text":"

    When you have several flows running regularly, the serve method of the Flow object or the serve utility is a great option for managing multiple flows simultaneously.

    Once you have authored your flow and decided on its deployment settings as described above, all that's left is to run this long-running process in a location of your choosing. The process will stay in communication with the Prefect API, monitoring for work and submitting each run within an individual subprocess. Note that because runs are submitted to subprocesses, any external infrastructure configuration will need to be set up beforehand and kept associated with this process.

    This approach has many benefits:

    • Users are in complete control of their infrastructure, and anywhere the \"serve\" Python process can run is a suitable deployment environment.
    • It is simple to reason about.
    • Creating deployments requires a minimal set of decisions.
    • Iteration speed is fast.

    However, there are a few reasons you might consider running flows on dynamically provisioned infrastructure with work pools instead:

    • Flows that have expensive infrastructure needs may be more costly in this setup due to the long-running process.
    • Flows with heterogeneous infrastructure needs across runs will be more difficult to configure and schedule.
    • Large volumes of deployments can be harder to track.
    • If your internal team structure requires that deployment authors be members of a different team than the team managing infrastructure, the work pool interface may be preferred.
    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#dynamically-provisioning-infrastructure-with-work-pools","title":"Dynamically provisioning infrastructure with work pools","text":"

    Work pools allow Prefect to exercise greater control of the infrastructure on which flows run. Options for serverless work pools allow you to scale to zero when workflows aren't running. Prefect even provides you with the ability to provision cloud infrastructure via a single CLI command, if you use a Prefect Cloud push work pool option.

    With work pools:

    • You can configure and monitor infrastructure configuration within the Prefect UI.
    • Infrastructure is ephemeral and dynamically provisioned.
    • Prefect is more infrastructure-aware and therefore collects more event data from your infrastructure by default.
    • Highly decoupled setups are possible.

    You don't have to commit to one approach

    You are not required to use only one of these approaches for your deployments. You can mix and match approaches based on the needs of each flow. Further, you can change the deployment approach for a particular flow as its needs evolve. For example, you might use workers for your expensive machine learning pipelines, but use the serve mechanics for smaller, more frequent file-processing pipelines.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/filesystems/","title":"Filesystems","text":"

    A filesystem block is an object that allows you to read and write data from paths. Prefect provides multiple built-in file system types that cover a wide range of use cases.

    • LocalFileSystem
    • RemoteFileSystem
    • Azure
    • GitHub
    • GitLab
    • GCS
    • S3
    • SMB

    Additional file system types are available in Prefect Collections.

    ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#local-filesystem","title":"Local filesystem","text":"

    The LocalFileSystem block enables interaction with the files in your current development environment.

    LocalFileSystem properties include:

    Property Description basepath String path to the location of files on the local filesystem. Access to files outside of the base path will not be allowed.
    from prefect.filesystems import LocalFileSystem\n\nfs = LocalFileSystem(basepath=\"/foo/bar\")\n

    Limited access to local file system

    Be aware that LocalFileSystem access is limited to the exact path provided. This file system may not be ideal for some use cases because the execution environment for your workflows may not have the same file system as the environment where you write and deploy your code.

    Use of this file system can limit the availability of results after a flow run has completed or prevent the code for a flow from being retrieved successfully at the start of a run.

    ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#remote-file-system","title":"Remote file system","text":"

    The RemoteFileSystem block enables interaction with arbitrary remote file systems. Under the hood, RemoteFileSystem uses fsspec and supports any file system that fsspec supports.

    RemoteFileSystem properties include:

    Property Description basepath String path to the location of files on the remote filesystem. Access to files outside of the base path will not be allowed. settings Dictionary containing extra parameters required to access the remote file system.

    The file system is specified using a protocol:

    • s3://my-bucket/my-folder/ will use S3
    • gcs://my-bucket/my-folder/ will use GCS
    • az://my-bucket/my-folder/ will use Azure

    For example, to use it with Amazon S3:

    from prefect.filesystems import RemoteFileSystem\n\nblock = RemoteFileSystem(basepath=\"s3://my-bucket/folder/\")\nblock.save(\"dev\")\n

    You may need to install additional libraries to use some remote storage types.

    ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#remotefilesystem-examples","title":"RemoteFileSystem examples","text":"

    How can we use RemoteFileSystem to store our flow code? The following is a use case where we use MinIO as a storage backend:

    from prefect.filesystems import RemoteFileSystem\n\nminio_block = RemoteFileSystem(\n    basepath=\"s3://my-bucket\",\n    settings={\n        \"key\": \"MINIO_ROOT_USER\",\n        \"secret\": \"MINIO_ROOT_PASSWORD\",\n        \"client_kwargs\": {\"endpoint_url\": \"http://localhost:9000\"},\n    },\n)\nminio_block.save(\"minio\")\n
    ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#azure","title":"Azure","text":"

    The Azure file system block enables interaction with Azure Datalake and Azure Blob Storage. Under the hood, the Azure block uses adlfs.

    Azure properties include:

    Property Description bucket_path String path to the location of files on the remote filesystem. Access to files outside of the bucket path will not be allowed. azure_storage_connection_string Azure storage connection string. azure_storage_account_name Azure storage account name. azure_storage_account_key Azure storage account key. azure_storage_tenant_id Azure storage tenant ID. azure_storage_client_id Azure storage client ID. azure_storage_client_secret Azure storage client secret. azure_storage_anon Anonymous authentication, disable to use DefaultAzureCredential.

    To create a block:

    from prefect.filesystems import Azure\n\nblock = Azure(bucket_path=\"my-bucket/folder/\")\nblock.save(\"dev\")\n

    To use it in a deployment:

    prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb az/dev\n

    You need to install adlfs to use it.

    ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#github","title":"GitHub","text":"

    The GitHub filesystem block enables interaction with GitHub repositories. This block is read-only and works with both public and private repositories.

    GitHub properties include:

    Property Description reference An optional reference to pin to, such as a branch name or tag. repository The URL of a GitHub repository to read from, in either HTTPS or SSH format. access_token A GitHub Personal Access Token (PAT) with repo scope.

    To create a block:

    from prefect.filesystems import GitHub\n\nblock = GitHub(\n    repository=\"https://github.com/my-repo/\",\n    access_token=<my_access_token> # only required for private repos\n)\nblock.get_directory(\"folder-in-repo\") # specify a subfolder of repo\nblock.save(\"dev\")\n

    To use it in a deployment:

    prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb github/dev -a\n
    ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#gitlabrepository","title":"GitLabRepository","text":"

    The GitLabRepository block is read-only and works with private GitLab repositories.

    GitLabRepository properties include:

    Property Description reference An optional reference to pin to, such as a branch name or tag. repository The URL of a GitLab repository to read from, in either HTTPS or SSH format. credentials A GitLabCredentials block with Personal Access Token (PAT) with read_repository scope.

    To create a block:

    from prefect_gitlab.credentials import GitLabCredentials\nfrom prefect_gitlab.repositories import GitLabRepository\n\ngitlab_creds = GitLabCredentials(token=\"YOUR_GITLAB_ACCESS_TOKEN\")\ngitlab_repo = GitLabRepository(\n    repository=\"https://gitlab.com/yourorg/yourrepo.git\",\n    reference=\"main\",\n    credentials=gitlab_creds,\n)\ngitlab_repo.save(\"dev\")\n

    To use it in a deployment (and apply):

    prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb gitlab-repository/dev -a\n

    Note that to use this block, you need to install the prefect-gitlab collection.

    ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#gcs","title":"GCS","text":"

    The GCS file system block enables interaction with Google Cloud Storage. Under the hood, GCS uses gcsfs.

    GCS properties include:

    Property Description bucket_path A GCS bucket path service_account_info The contents of a service account keyfile as a JSON string. project The project the GCS bucket resides in. If not provided, the project will be inferred from the credentials or environment.

    To create a block:

    from prefect.filesystems import GCS\n\nblock = GCS(bucket_path=\"my-bucket/folder/\")\nblock.save(\"dev\")\n

    To use it in a deployment:

    prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb gcs/dev\n

    You need to install gcsfs to use it.

    ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#s3","title":"S3","text":"

    The S3 file system block enables interaction with Amazon S3. Under the hood, S3 uses s3fs.

    S3 properties include:

    Property Description bucket_path An S3 bucket path aws_access_key_id AWS Access Key ID aws_secret_access_key AWS Secret Access Key

    To create a block:

    from prefect.filesystems import S3\n\nblock = S3(bucket_path=\"my-bucket/folder/\")\nblock.save(\"dev\")\n

    To use it in a deployment:

    prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb s3/dev\n

    You need to install s3fs to use this block.

    ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#smb","title":"SMB","text":"

    The SMB file system block enables interaction with SMB shared network storage. Under the hood, SMB uses smbprotocol. It is used to connect to Windows-based SMB shares from Linux-based Prefect flows. The SMB file system block is able to copy files, but cannot create directories.

    SMB properties include:

    Property Description basepath String path to the location of files on the remote filesystem. Access to files outside of the base path will not be allowed. smb_host Hostname or IP address where SMB network share is located. smb_port Port for SMB network share (defaults to 445). smb_username SMB username with read/write permissions. smb_password SMB password.

    To create a block:

    from prefect.filesystems import SMB\n\nblock = SMB(basepath=\"my-share/folder/\")\nblock.save(\"dev\")\n

    To use it in a deployment:

    prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb smb/dev\n

    You need to install smbprotocol to use it.

    ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#handling-credentials-for-cloud-object-storage-services","title":"Handling credentials for cloud object storage services","text":"

    If you leverage S3, GCS, or Azure storage blocks and you don't explicitly configure credentials on the respective storage block, those credentials will be inferred from the environment. Make sure credentials are set either explicitly on the block or available through environment variables, configuration files, or IAM roles in both the build and runtime environments for your deployments.

    ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#filesystem-package-dependencies","title":"Filesystem package dependencies","text":"

    A Prefect installation doesn't include filesystem-specific package dependencies such as s3fs, gcsfs, or adlfs. This includes the Prefect base Docker images.

    You must ensure that filesystem-specific libraries are installed in an execution environment where they will be used by flow runs.

    In Dockerized deployments using the Prefect base image, you can leverage the EXTRA_PIP_PACKAGES environment variable. Those dependencies will be installed at runtime within your Docker container or Kubernetes Job before the flow starts running.

    In Dockerized deployments using a custom image, you must include the filesystem-specific package dependency in your image.

    Here is an example from a deployment YAML file showing how to specify the installation of s3fs into your image:

    infrastructure:\n  type: docker-container\n  env:\n    EXTRA_PIP_PACKAGES: s3fs  # could be gcsfs, adlfs, etc.\n

    You may specify multiple dependencies by providing a comma-delimited list.

    ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#saving-and-loading-file-systems","title":"Saving and loading file systems","text":"

    Configuration for a file system can be saved to the Prefect API. For example:

    fs = RemoteFileSystem(basepath=\"s3://my-bucket/folder/\")\nfs.write_path(\"foo\", b\"hello\")\nfs.save(\"dev-s3\")\n

    This file system can be retrieved for later use with load.

    fs = RemoteFileSystem.load(\"dev-s3\")\nfs.read_path(\"foo\")  # b'hello'\n
    ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#readable-and-writable-file-systems","title":"Readable and writable file systems","text":"

    Prefect provides two abstract file system types, ReadableFileSystem and WriteableFileSystem.

    • All readable file systems must implement read_path, which takes a file path to read content from and returns bytes.
    • All writeable file systems must implement write_path which takes a file path and content and writes the content to the file as bytes.

    A file system may implement both of these types.
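
    For example, LocalFileSystem implements both interfaces; the base path below is an arbitrary example:

    from prefect.filesystems import LocalFileSystem\n\nfs = LocalFileSystem(basepath=\"/tmp/prefect-demo\")\nfs.write_path(\"hello.txt\", b\"hello\")   # writable file system interface\nfs.read_path(\"hello.txt\")              # readable file system interface, returns b'hello'\n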

    ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/flows/","title":"Flows","text":"

    Flows are the most central Prefect object. A flow is a container for workflow logic as-code and allows users to configure how their workflows behave. Flows are defined as Python functions, and any Python function is eligible to be a flow.

    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#flows-overview","title":"Flows overview","text":"

    Flows can be thought of as special types of functions. They can take inputs, perform work, and return an output. In fact, you can turn any function into a Prefect flow by adding the @flow decorator. When a function becomes a flow, its behavior changes, giving it the following advantages:

    • Every invocation of this function is tracked and all state transitions are reported to the API, allowing observation of flow execution.
    • Input arguments are automatically type checked and coerced to the appropriate types.
    • Retries can be performed on failure.
    • Timeouts can be enforced to prevent unintentional, long-running workflows.

    Flows also take advantage of automatic Prefect logging to capture details about flow runs such as run time and final state.

    Flows can include calls to tasks as well as to other flows, which Prefect calls \"subflows\" in this context. Flows may be defined within modules and imported for use as subflows in your flow definitions.

    Deployments elevate individual workflows from functions that you call manually to API-managed entities.

    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#flow-runs","title":"Flow runs","text":"

    A flow run represents a single execution of the flow.

    You can create a flow run by calling the flow manually. For example, by running a Python script or importing the flow into an interactive session and calling it.

    You can also create a flow run by:

    • Using external schedulers such as cron to invoke a flow function
    • Creating a deployment on Prefect Cloud or a locally run Prefect server.
    • Creating a flow run for the deployment via a schedule, the Prefect UI, or the Prefect API.

    However you run the flow, the Prefect API monitors the flow run, capturing flow run state for observability.

    When you run a flow that contains tasks or additional flows, Prefect will track the relationship of each child run to the parent flow run.

    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#writing-flows","title":"Writing flows","text":"

    The @flow decorator is used to designate a flow:

    from prefect import flow\n\n@flow\ndef my_flow():\n    return\n

    There are no rigid rules for what code you include within a flow definition - all valid Python is acceptable.

    Flows are uniquely identified by name. You can provide a name parameter value for the flow. If you don't provide a name, Prefect uses the flow function name.

    @flow(name=\"My Flow\")\ndef my_flow():\n    return\n

    Flows can call tasks to allow Prefect to orchestrate and track more granular units of work:

    from prefect import flow, task\n\n@task\ndef print_hello(name):\n    print(f\"Hello {name}!\")\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n    print_hello(name)\n

    Flows and tasks

    There's nothing stopping you from putting all of your code in a single flow function \u2014 Prefect will happily run it!

    However, organizing your workflow code into smaller flow and task units lets you take advantage of Prefect features like retries, more granular visibility into runtime state, the ability to determine final state regardless of individual task state, and more.

    In addition, if you put all of your workflow logic in a single flow function and any line of code fails, the entire flow will fail and must be retried from the beginning. This can be avoided by breaking up the code into multiple tasks.

    You may call any number of other tasks, subflows, and even regular Python functions within your flow. You can pass parameters to your flow function that will be used elsewhere in the workflow, and Prefect will report on the progress and final state of any invocation.

    Prefect encourages \"small tasks\" \u2014 each one should represent a single logical step of your workflow. This allows Prefect to better contain task failures.

    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#flow-settings","title":"Flow settings","text":"

    Flows allow a great deal of configuration by passing arguments to the decorator. Flows accept the following optional settings.

    Argument Description description An optional string description for the flow. If not provided, the description will be pulled from the docstring for the decorated function. name An optional name for the flow. If not provided, the name will be inferred from the function. retries An optional number of times to retry on flow run failure. retry_delay_seconds An optional number of seconds to wait before retrying the flow after failure. This is only applicable if retries is nonzero. flow_run_name An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables; this name can also be provided as a function that returns a string. task_runner An optional task runner to use for task execution within the flow when you .submit() tasks. If not provided and you .submit() tasks, the ConcurrentTaskRunner will be used. timeout_seconds An optional number of seconds indicating a maximum runtime for the flow. If the flow exceeds this runtime, it will be marked as failed. Flow execution may continue until the next task is called. validate_parameters Boolean indicating whether parameters passed to flows are validated by Pydantic. Default is True. version An optional version string for the flow. If not provided, we will attempt to create a version string as a hash of the file containing the wrapped function. If the file cannot be located, the version will be null.

    For example, you can provide a name value for the flow. Here we've also used the optional description argument and specified a non-default task runner.

    from prefect import flow\nfrom prefect.task_runners import SequentialTaskRunner\n\n@flow(name=\"My Flow\",\n      description=\"My flow using SequentialTaskRunner\",\n      task_runner=SequentialTaskRunner())\ndef my_flow():\n    return\n

    You can also provide the description as the docstring on the flow function.

    @flow(name=\"My Flow\",\n      task_runner=SequentialTaskRunner())\ndef my_flow():\n    \"\"\"My flow using SequentialTaskRunner\"\"\"\n    return\n

    You can distinguish runs of this flow by providing a flow_run_name. This setting accepts a string that can optionally contain templated references to the parameters of your flow. The name will be formatted using Python's standard string formatting syntax as can be seen here:

    import datetime\nfrom prefect import flow\n\n@flow(flow_run_name=\"{name}-on-{date:%A}\")\ndef my_flow(name: str, date: datetime.datetime):\n    pass\n\n# creates a flow run called 'marvin-on-Thursday'\nmy_flow(name=\"marvin\", date=datetime.datetime.now(datetime.timezone.utc))\n

    Additionally this setting also accepts a function that returns a string for the flow run name:

    import datetime\nfrom prefect import flow\n\ndef generate_flow_run_name():\n    date = datetime.datetime.now(datetime.timezone.utc)\n\n    return f\"{date:%A}-is-a-nice-day\"\n\n@flow(flow_run_name=generate_flow_run_name)\ndef my_flow(name: str):\n    pass\n\n# creates a flow run called 'Thursday-is-a-nice-day'\nif __name__ == \"__main__\":\n    my_flow(name=\"marvin\")\n

    If you need access to information about the flow, use the prefect.runtime module. For example:

    from prefect import flow\nfrom prefect.runtime import flow_run\n\ndef generate_flow_run_name():\n    flow_name = flow_run.flow_name\n\n    parameters = flow_run.parameters\n    name = parameters[\"name\"]\n    limit = parameters[\"limit\"]\n\n    return f\"{flow_name}-with-{name}-and-{limit}\"\n\n@flow(flow_run_name=generate_flow_run_name)\ndef my_flow(name: str, limit: int = 100):\n    pass\n\n# creates a flow run called 'my-flow-with-marvin-and-100'\nif __name__ == \"__main__\":\n    my_flow(name=\"marvin\")\n

    Note that validate_parameters will check that input values conform to the annotated types on the function. Where possible, values will be coerced into the correct type. For example, if a parameter is defined as x: int and \"5\" is passed, it will be resolved to 5. If set to False, no validation will be performed on flow parameters.
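
    For example, with validation disabled the string is passed through unchanged:

    from prefect import flow\n\n@flow(validate_parameters=False)\ndef my_flow(x: int):\n    # \"5\" stays a string because parameter validation is turned off\n    print(type(x), x)\n\nif __name__ == \"__main__\":\n    my_flow(\"5\")\n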

    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#separating-logic-into-tasks","title":"Separating logic into tasks","text":"

    The simplest workflow is just a @flow function that does all the work of the workflow.

    from prefect import flow\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n    print(f\"Hello {name}!\")\n\nif __name__ == \"__main__\":\n    hello_world(\"Marvin\")\n

    When you run this flow, you'll see output like the following:

    $ python hello.py\n15:11:23.594 | INFO    | prefect.engine - Created flow run 'benevolent-donkey' for flow 'hello-world'\n15:11:23.594 | INFO    | Flow run 'benevolent-donkey' - Using task runner 'ConcurrentTaskRunner'\nHello Marvin!\n15:11:24.447 | INFO    | Flow run 'benevolent-donkey' - Finished in state Completed()\n

    A better practice is to create @task functions that do the specific work of your flow, and use your @flow function as the conductor that orchestrates the flow of your application:

    from prefect import flow, task\n\n@task(name=\"Print Hello\")\ndef print_hello(name):\n    msg = f\"Hello {name}!\"\n    print(msg)\n    return msg\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n    message = print_hello(name)\n\nif __name__ == \"__main__\":\n    hello_world(\"Marvin\")\n

    When you run this flow, you'll see the following output, which illustrates how the work is encapsulated in a task run.

    $ python hello.py\n15:15:58.673 | INFO    | prefect.engine - Created flow run 'loose-wolverine' for flow 'Hello Flow'\n15:15:58.674 | INFO    | Flow run 'loose-wolverine' - Using task runner 'ConcurrentTaskRunner'\n15:15:58.973 | INFO    | Flow run 'loose-wolverine' - Created task run 'Print Hello-84f0fe0e-0' for task 'Print Hello'\nHello Marvin!\n15:15:59.037 | INFO    | Task run 'Print Hello-84f0fe0e-0' - Finished in state Completed()\n15:15:59.568 | INFO    | Flow run 'loose-wolverine' - Finished in state Completed('All states completed.')\n
    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#visualizing-flow-structure","title":"Visualizing flow structure","text":"

    You can get a quick sense of the structure of your flow using the .visualize() method on your flow. Calling this method will attempt to produce a schematic diagram of your flow and tasks without actually running your flow code.

    Functions and code outside of flows or tasks will still run when calling .visualize(). This may have unintended consequences. Place your code into tasks to avoid unintended execution.

    To use the visualize() method, Graphviz must be installed and on your PATH. Install Graphviz from http://www.graphviz.org/download/. Note that installing the graphviz Python package alone is not sufficient.

    from prefect import flow, task\n\n@task(name=\"Print Hello\")\ndef print_hello(name):\n    msg = f\"Hello {name}!\"\n    print(msg)\n    return msg\n\n@task(name=\"Print Hello Again\")\ndef print_hello_again(name):\n    msg = f\"Hello {name}!\"\n    print(msg)\n    return msg\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n    message = print_hello(name)\n    message2 = print_hello_again(message)\n\nif __name__ == \"__main__\":\n    hello_world.visualize()\n

    Prefect cannot automatically produce a schematic for dynamic workflows, such as those with loops or if/else control flow. In this case, you can provide tasks with mock return values for use in the visualize() call.

    from prefect import flow, task\n@task(viz_return_value=[4])\ndef get_list():\n    return [1, 2, 3]\n\n@task\ndef append_one(n):\n    return n.append(6)\n\n@flow\ndef viz_return_value_tracked():\n    l = get_list()\n    for num in range(3):\n        l.append(5)\n        append_one(l)\n\nif __name__ == \"__main__\":\n    viz_return_value_tracked.visualize()\n

    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#composing-flows","title":"Composing flows","text":"

    A subflow run is created when a flow function is called inside the execution of another flow. The primary flow is the \"parent\" flow. The flow created within the parent is the \"child\" flow or \"subflow.\"

    Subflow runs behave like normal flow runs. There is a full representation of the flow run in the backend as if it had been called separately. When a subflow starts, it will create a new task runner for tasks within the subflow. When the subflow completes, the task runner is shut down.

    Subflows will block execution of the parent flow until completion. However, asynchronous subflows can be run concurrently by using AnyIO task groups or asyncio.gather.
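
    A minimal sketch of concurrent async subflows using asyncio.gather:

    import asyncio\nfrom prefect import flow\n\n@flow\nasync def child_flow(n: int):\n    return n * 2\n\n@flow\nasync def parent_flow():\n    # both subflow runs are awaited concurrently\n    return await asyncio.gather(child_flow(1), child_flow(2))\n\nif __name__ == \"__main__\":\n    asyncio.run(parent_flow())\n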

    Subflows differ from normal flows in that they will resolve any passed task futures into data. This allows data to be passed from the parent flow to the child easily.

    The relationship between a child and parent flow is tracked by creating a special task run in the parent flow. This task run will mirror the state of the child flow run.

    A task that represents a subflow will be annotated as such in its state_details via the presence of a child_flow_run_id field. A subflow can be identified via the presence of a parent_task_run_id on state_details.

    You can define multiple flows within the same file. Whether running locally or via a deployment, you must indicate which flow is the entrypoint for a flow run.

    Cancelling subflow runs

    Inline subflow runs, specifically those created without run_deployment, cannot be cancelled without cancelling their parent flow run. If you may need to cancel a subflow run independent of its parent flow run, we recommend deploying it separately and starting it using the run_deployment function.

    from prefect import flow, task\n\n@task(name=\"Print Hello\")\ndef print_hello(name):\n    msg = f\"Hello {name}!\"\n    print(msg)\n    return msg\n\n@flow(name=\"Subflow\")\ndef my_subflow(msg):\n    print(f\"Subflow says: {msg}\")\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n    message = print_hello(name)\n    my_subflow(message)\n\nif __name__ == \"__main__\":\n    hello_world(\"Marvin\")\n

    You can also define flows or tasks in separate modules and import them for usage. For example, here's a simple subflow module:

    from prefect import flow, task\n\n@flow(name=\"Subflow\")\ndef my_subflow(msg):\n    print(f\"Subflow says: {msg}\")\n

    Here's a parent flow that imports and uses my_subflow() as a subflow:

    from prefect import flow, task\nfrom subflow import my_subflow\n\n@task(name=\"Print Hello\")\ndef print_hello(name):\n    msg = f\"Hello {name}!\"\n    print(msg)\n    return msg\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n    message = print_hello(name)\n    my_subflow(message)\n\nhello_world(\"Marvin\")\n

    Running the hello_world() flow (in this example from the file hello.py) creates a flow run like this:

    $ python hello.py\n15:19:21.651 | INFO    | prefect.engine - Created flow run 'daft-cougar' for flow 'Hello Flow'\n15:19:21.651 | INFO    | Flow run 'daft-cougar' - Using task runner 'ConcurrentTaskRunner'\n15:19:21.945 | INFO    | Flow run 'daft-cougar' - Created task run 'Print Hello-84f0fe0e-0' for task 'Print Hello'\nHello Marvin!\n15:19:22.055 | INFO    | Task run 'Print Hello-84f0fe0e-0' - Finished in state Completed()\n15:19:22.107 | INFO    | Flow run 'daft-cougar' - Created subflow run 'ninja-duck' for flow 'Subflow'\nSubflow says: Hello Marvin!\n15:19:22.794 | INFO    | Flow run 'ninja-duck' - Finished in state Completed()\n15:19:23.215 | INFO    | Flow run 'daft-cougar' - Finished in state Completed('All states completed.')\n

    Subflows or tasks?

    In Prefect you can call tasks or subflows to do work within your workflow, including passing results from other tasks to your subflow. So a common question is:

    \"When should I use a subflow instead of a task?\"

    We recommend writing tasks that do a discrete, specific piece of work in your workflow: calling an API, performing a database operation, analyzing or transforming a data point. Prefect tasks are well suited to parallel or distributed execution using distributed computation frameworks such as Dask or Ray. For troubleshooting, the more granular you create your tasks, the easier it is to find and fix issues should a task fail.

    Subflows enable you to group related tasks within your workflow. Here are some scenarios where you might choose to use a subflow rather than calling tasks individually:

    • Observability: Subflows, like any other flow run, have first-class observability within the Prefect UI and Prefect Cloud. You'll see subflow status in the Flow Runs dashboard rather than having to dig down into the tasks within a specific flow run. See Final state determination for some examples of leveraging task state within flows.
    • Conditional flows: If you have a group of tasks that run only under certain conditions, you can group them within a subflow and conditionally run the subflow rather than each task individually.
    • Parameters: Flows have first-class support for parameterization, making it easy to run the same group of tasks in different use cases by simply passing different parameters to the subflow in which they run.
    • Task runners: Subflows enable you to specify the task runner used for tasks within the flow. For example, if you want to optimize parallel execution of certain tasks with Dask, you can group them in a subflow that uses the Dask task runner. You can use a different task runner for each subflow.
    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#parameters","title":"Parameters","text":"

    Flows can be called with both positional and keyword arguments. These arguments are resolved at runtime into a dictionary of parameters mapping name to value. These parameters are stored by the Prefect orchestration engine on the flow run object.

    Prefect API requires keyword arguments

    When creating flow runs from the Prefect API, parameter names must be specified when overriding defaults \u2014 they cannot be positional.
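
    For example, when triggering a deployment run programmatically, parameter overrides are passed by name in a dictionary (the deployment and parameter names here are placeholders):

    from prefect.deployments import run_deployment\n\nrun_deployment(\n    name=\"hello-flow/my-deployment\",\n    parameters={\"name\": \"marvin\"},  # overrides must be referenced by name\n)\n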

    Type hints provide an easy way to enforce typing on your flow parameters via pydantic. This means any pydantic model used as a type hint within a flow will be coerced automatically into the relevant object type:

    from prefect import flow\nfrom pydantic import BaseModel\n\nclass Model(BaseModel):\n    a: int\n    b: float\n    c: str\n\n@flow\ndef model_validator(model: Model):\n    print(model)\n

    Note that parameter values can be provided to a flow via API using a deployment. Flow run parameters sent to the API on flow calls are coerced to a serializable form. Type hints on your flow functions provide you a way of automatically coercing JSON provided values to their appropriate Python representation.

    For example, to automatically convert something to a datetime:

    from prefect import flow\nfrom datetime import datetime, timezone\n\n@flow\ndef what_day_is_it(date: datetime = None):\n    if date is None:\n        date = datetime.now(timezone.utc)\n    print(f\"It was {date.strftime('%A')} on {date.isoformat()}\")\n\nif __name__ == \"__main__\":\n    what_day_is_it(\"2021-01-01T02:00:19.180906\")\n

    When you run this flow, you'll see the following output:

    It was Friday on 2021-01-01T02:00:19.180906\n

    Parameters are validated before a flow is run. If a flow call receives invalid parameters, a flow run is created in a Failed state. If a flow run for a deployment receives invalid parameters, it will move from a Pending state to a Failed state without entering a Running state.

    Flow run parameters cannot exceed 512kb in size

    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#final-state-determination","title":"Final state determination","text":"

    Prerequisite

    Read the documentation about states before proceeding with this section.

    The final state of the flow is determined by its return value. The following rules apply:

    • If an exception is raised directly in the flow function, the flow run is marked as failed.
    • If the flow does not return a value (or returns None), its state is determined by the states of all of the tasks and subflows within it.
    • If any task run or subflow run failed, then the final flow run state is marked as FAILED.
    • If any task run was cancelled, then the final flow run state is marked as CANCELLED.
    • If a flow returns a manually created state, it is used as the state of the final flow run. This allows for manual determination of final state.
    • If the flow run returns any other object, then it is marked as completed.

    The following examples illustrate each of these cases:

    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#raise-an-exception","title":"Raise an exception","text":"

    If an exception is raised within the flow function, the flow is immediately marked as failed.

    from prefect import flow\n\n@flow\ndef always_fails_flow():\n    raise ValueError(\"This flow immediately fails\")\n\nif __name__ == \"__main__\":\n    always_fails_flow()\n

    Running this flow produces the following result:

    22:22:36.864 | INFO    | prefect.engine - Created flow run 'acrid-tuatara' for flow 'always-fails-flow'\n22:22:36.864 | INFO    | Flow run 'acrid-tuatara' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n22:22:37.060 | ERROR   | Flow run 'acrid-tuatara' - Encountered exception during execution:\nTraceback (most recent call last):...\nValueError: This flow immediately fails\n
    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-none","title":"Return none","text":"

    The final state of a flow with no return statement is determined by the states of all of its task runs.

    from prefect import flow, task\n\n@task\ndef always_fails_task():\n    raise ValueError(\"I fail successfully\")\n\n@task\ndef always_succeeds_task():\n    print(\"I'm fail safe!\")\n    return \"success\"\n\n@flow\ndef always_fails_flow():\n    always_fails_task.submit().result(raise_on_failure=False)\n    always_succeeds_task()\n\nif __name__ == \"__main__\":\n    always_fails_flow()\n

    Running this flow produces the following result:

    18:32:05.345 | INFO    | prefect.engine - Created flow run 'auburn-lionfish' for flow 'always-fails-flow'\n18:32:05.346 | INFO    | Flow run 'auburn-lionfish' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n18:32:05.582 | INFO    | Flow run 'auburn-lionfish' - Created task run 'always_fails_task-96e4be14-0' for task 'always_fails_task'\n18:32:05.582 | INFO    | Flow run 'auburn-lionfish' - Submitted task run 'always_fails_task-96e4be14-0' for execution.\n18:32:05.610 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Encountered exception during execution:\nTraceback (most recent call last):\n  ...\nValueError: I fail successfully\n18:32:05.638 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Finished in state Failed('Task run encountered an exception.')\n18:32:05.658 | INFO    | Flow run 'auburn-lionfish' - Created task run 'always_succeeds_task-9c27db32-0' for task 'always_succeeds_task'\n18:32:05.659 | INFO    | Flow run 'auburn-lionfish' - Executing 'always_succeeds_task-9c27db32-0' immediately...\nI'm fail safe!\n18:32:05.703 | INFO    | Task run 'always_succeeds_task-9c27db32-0' - Finished in state Completed()\n18:32:05.730 | ERROR   | Flow run 'auburn-lionfish' - Finished in state Failed('1/2 states failed.')\nTraceback (most recent call last):\n  ...\nValueError: I fail successfully\n
    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-a-future","title":"Return a future","text":"

    If a flow returns one or more futures, the final state is determined based on the underlying states.

    from prefect import flow, task\n\n@task\ndef always_fails_task():\n    raise ValueError(\"I fail successfully\")\n\n@task\ndef always_succeeds_task():\n    print(\"I'm fail safe!\")\n    return \"success\"\n\n@flow\ndef always_succeeds_flow():\n    x = always_fails_task.submit().result(raise_on_failure=False)\n    y = always_succeeds_task.submit(wait_for=[x])\n    return y\n\nif __name__ == \"__main__\":\n    always_succeeds_flow()\n

    Running this flow produces the following result \u2014 it succeeds because it returns the future of the task that succeeds:

    18:35:24.965 | INFO    | prefect.engine - Created flow run 'whispering-guan' for flow 'always-succeeds-flow'\n18:35:24.965 | INFO    | Flow run 'whispering-guan' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n18:35:25.204 | INFO    | Flow run 'whispering-guan' - Created task run 'always_fails_task-96e4be14-0' for task 'always_fails_task'\n18:35:25.205 | INFO    | Flow run 'whispering-guan' - Submitted task run 'always_fails_task-96e4be14-0' for execution.\n18:35:25.232 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Encountered exception during execution:\nTraceback (most recent call last):\n  ...\nValueError: I fail successfully\n18:35:25.265 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Finished in state Failed('Task run encountered an exception.')\n18:35:25.289 | INFO    | Flow run 'whispering-guan' - Created task run 'always_succeeds_task-9c27db32-0' for task 'always_succeeds_task'\n18:35:25.289 | INFO    | Flow run 'whispering-guan' - Submitted task run 'always_succeeds_task-9c27db32-0' for execution.\nI'm fail safe!\n18:35:25.335 | INFO    | Task run 'always_succeeds_task-9c27db32-0' - Finished in state Completed()\n18:35:25.362 | INFO    | Flow run 'whispering-guan' - Finished in state Completed('All states completed.')\n
    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-multiple-states-or-futures","title":"Return multiple states or futures","text":"

    If a flow returns a mix of futures and states, the final state is determined by resolving all futures to states, then determining if any of the states are not COMPLETED.

    from prefect import task, flow\n\n@task\ndef always_fails_task():\n    raise ValueError(\"I am bad task\")\n\n@task\ndef always_succeeds_task():\n    return \"foo\"\n\n@flow\ndef always_succeeds_flow():\n    return \"bar\"\n\n@flow\ndef always_fails_flow():\n    x = always_fails_task()\n    y = always_succeeds_task()\n    z = always_succeeds_flow()\n    return x, y, z\n

    Running this flow produces the following result. It fails because one of the three returned futures failed. Note that the final state is Failed, but the states of each of the returned futures are included in the flow state:

    20:57:51.547 | INFO    | prefect.engine - Created flow run 'impartial-gorilla' for flow 'always-fails-flow'\n20:57:51.548 | INFO    | Flow run 'impartial-gorilla' - Using task runner 'ConcurrentTaskRunner'\n20:57:51.645 | INFO    | Flow run 'impartial-gorilla' - Created task run 'always_fails_task-58ea43a6-0' for task 'always_fails_task'\n20:57:51.686 | INFO    | Flow run 'impartial-gorilla' - Created task run 'always_succeeds_task-c9014725-0' for task 'always_succeeds_task'\n20:57:51.727 | ERROR   | Task run 'always_fails_task-58ea43a6-0' - Encountered exception during execution:\nTraceback (most recent call last):...\nValueError: I am bad task\n20:57:51.787 | INFO    | Task run 'always_succeeds_task-c9014725-0' - Finished in state Completed()\n20:57:51.808 | INFO    | Flow run 'impartial-gorilla' - Created subflow run 'unbiased-firefly' for flow 'always-succeeds-flow'\n20:57:51.884 | ERROR   | Task run 'always_fails_task-58ea43a6-0' - Finished in state Failed('Task run encountered an exception.')\n20:57:52.438 | INFO    | Flow run 'unbiased-firefly' - Finished in state Completed()\n20:57:52.811 | ERROR   | Flow run 'impartial-gorilla' - Finished in state Failed('1/3 states failed.')\nFailed(message='1/3 states failed.', type=FAILED, result=(Failed(message='Task run encountered an exception.', type=FAILED, result=ValueError('I am bad task'), task_run_id=5fd4c697-7c4c-440d-8ebc-dd9c5bbf2245), Completed(message=None, type=COMPLETED, result='foo', task_run_id=df9b6256-f8ac-457c-ba69-0638ac9b9367), Completed(message=None, type=COMPLETED, result='bar', task_run_id=cfdbf4f1-dccd-4816-8d0f-128750017d0c)), flow_run_id=6d2ec094-001a-4cb0-a24e-d2051db6318d)\n

    Returning multiple states

    When returning multiple states, they must be contained in a set, list, or tuple. If other collection types are used, the result of the contained states will not be checked.
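
    For example, a minimal sketch (hypothetical tasks) of returning multiple states in a list:

    from prefect import flow, task\n\n@task\ndef first_task():\n    return 1\n\n@task\ndef second_task():\n    return 2\n\n@flow\ndef my_flow():\n    # states returned in a list are checked when determining the final flow state\n    return [\n        first_task(return_state=True),\n        second_task(return_state=True),\n    ]\n\nif __name__ == \"__main__\":\n    my_flow()\n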

    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-a-manual-state","title":"Return a manual state","text":"

    If a flow returns a manually created state, the final state is determined based on the return value.

    from prefect import task, flow\nfrom prefect.states import Completed, Failed\n\n@task\ndef always_fails_task():\n    raise ValueError(\"I fail successfully\")\n\n@task\ndef always_succeeds_task():\n    print(\"I'm fail safe!\")\n    return \"success\"\n\n@flow\ndef always_succeeds_flow():\n    x = always_fails_task.submit()\n    y = always_succeeds_task.submit()\n    if y.result() == \"success\":\n        return Completed(message=\"I am happy with this result\")\n    else:\n        return Failed(message=\"How did this happen!?\")\n\nif __name__ == \"__main__\":\n    always_succeeds_flow()\n

    Running this flow produces the following result.

    18:37:42.844 | INFO    | prefect.engine - Created flow run 'lavender-elk' for flow 'always-succeeds-flow'\n18:37:42.845 | INFO    | Flow run 'lavender-elk' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n18:37:43.125 | INFO    | Flow run 'lavender-elk' - Created task run 'always_fails_task-96e4be14-0' for task 'always_fails_task'\n18:37:43.126 | INFO    | Flow run 'lavender-elk' - Submitted task run 'always_fails_task-96e4be14-0' for execution.\n18:37:43.162 | INFO    | Flow run 'lavender-elk' - Created task run 'always_succeeds_task-9c27db32-0' for task 'always_succeeds_task'\n18:37:43.163 | INFO    | Flow run 'lavender-elk' - Submitted task run 'always_succeeds_task-9c27db32-0' for execution.\n18:37:43.175 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Encountered exception during execution:\nTraceback (most recent call last):\n  ...\nValueError: I fail successfully\nI'm fail safe!\n18:37:43.217 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Finished in state Failed('Task run encountered an exception.')\n18:37:43.236 | INFO    | Task run 'always_succeeds_task-9c27db32-0' - Finished in state Completed()\n18:37:43.264 | INFO    | Flow run 'lavender-elk' - Finished in state Completed('I am happy with this result')\n
    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-an-object","title":"Return an object","text":"

    If the flow run returns any other object, then it is marked as completed.

    from prefect import task, flow\n\n@task\ndef always_fails_task():\n    raise ValueError(\"I fail successfully\")\n\n@flow\ndef always_succeeds_flow():\n    always_fails_task.submit()\n    return \"foo\"\n\nif __name__ == \"__main__\":\n    always_succeeds_flow()\n

    Running this flow produces the following result.

    21:02:45.715 | INFO    | prefect.engine - Created flow run 'sparkling-pony' for flow 'always-succeeds-flow'\n21:02:45.715 | INFO    | Flow run 'sparkling-pony' - Using task runner 'ConcurrentTaskRunner'\n21:02:45.816 | INFO    | Flow run 'sparkling-pony' - Created task run 'always_fails_task-58ea43a6-0' for task 'always_fails_task'\n21:02:45.853 | ERROR   | Task run 'always_fails_task-58ea43a6-0' - Encountered exception during execution:\nTraceback (most recent call last):...\nValueError: I fail successfully\n21:02:45.879 | ERROR   | Task run 'always_fails_task-58ea43a6-0' - Finished in state Failed('Task run encountered an exception.')\n21:02:46.593 | INFO    | Flow run 'sparkling-pony' - Finished in state Completed()\nCompleted(message=None, type=COMPLETED, result='foo', flow_run_id=7240e6f5-f0a8-4e00-9440-a7b33fb51153)\n
    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#serving-a-flow","title":"Serving a flow","text":"

    The simplest way to create a deployment for your flow is by calling its serve method. This method creates a deployment for the flow and starts a long-running process that monitors for work from the Prefect server. When work is found, it is executed within its own isolated subprocess.

    hello_world.py
    from prefect import flow\n\n\n@flow(log_prints=True)\ndef hello_world(name: str = \"world\", goodbye: bool = False):\n    print(f\"Hello {name} from Prefect! \ud83e\udd17\")\n\n    if goodbye:\n        print(f\"Goodbye {name}!\")\n\n\nif __name__ == \"__main__\":\n    # creates a deployment and stays running to monitor for work instructions generated on the server\n\n    hello_world.serve(name=\"my-first-deployment\",\n                      tags=[\"onboarding\"],\n                      parameters={\"goodbye\": True},\n                      interval=60)\n

    This interface provides all of the configuration needed for a deployment with no strong infrastructure requirements:

    • schedules
    • event triggers
    • metadata such as tags and description
    • default parameter values
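
    A sketch of these options together for the hello_world flow above (hypothetical schedule and description), assuming the cron, description, and tags keywords of serve:

    if __name__ == \"__main__\":\n    hello_world.serve(\n        name=\"my-first-deployment\",\n        description=\"An example deployment\",   # metadata\n        tags=[\"onboarding\"],                    # metadata\n        parameters={\"goodbye\": True},           # default parameter values\n        cron=\"0 9 * * *\",                       # schedule: every day at 9 AM\n    )\n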

    Schedules are auto-paused on shutdown

    By default, stopping the process running flow.serve will pause the schedule for the deployment (if it has one). When running this in environments where restarts are expected, use the pause_on_shutdown=False flag to prevent this behavior:

    if __name__ == \"__main__\":\n    hello_world.serve(name=\"my-first-deployment\",\n                      tags=[\"onboarding\"],\n                      parameters={\"goodbye\": True},\n                      pause_on_shutdown=False,\n                      interval=60)\n
    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#serving-multiple-flows-at-once","title":"Serving multiple flows at once","text":"

    You can take this further and serve multiple flows with the same process using the serve utility along with the to_deployment method of flows:

    import time\nfrom prefect import flow, serve\n\n\n@flow\ndef slow_flow(sleep: int = 60):\n    \"Sleepy flow - sleeps the provided amount of time (in seconds).\"\n    time.sleep(sleep)\n\n\n@flow\ndef fast_flow():\n    \"Fastest flow this side of the Mississippi.\"\n    return\n\n\nif __name__ == \"__main__\":\n    slow_deploy = slow_flow.to_deployment(name=\"sleeper\", interval=45)\n    fast_deploy = fast_flow.to_deployment(name=\"fast\")\n    serve(slow_deploy, fast_deploy)\n

    The behavior and interfaces are identical to the single flow case.

    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#retrieve-a-flow-from-remote-storage","title":"Retrieve a flow from remote storage","text":"

    Flows can be retrieved from remote storage using the flow.from_source method.

    flow.from_source accepts a git repository URL and an entrypoint pointing to the flow to load from the repository:

    load_from_url.py
    from prefect import flow\n\nmy_flow = flow.from_source(\n    source=\"https://github.com/PrefectHQ/prefect.git\",\n    entrypoint=\"flows/hello_world.py:hello\"\n)\n\nif __name__ == \"__main__\":\n    my_flow()\n
    16:40:33.818 | INFO    | prefect.engine - Created flow run 'muscular-perch' for flow 'hello'\n16:40:34.048 | INFO    | Flow run 'muscular-perch' - Hello world!\n16:40:34.706 | INFO    | Flow run 'muscular-perch' - Finished in state Completed()\n

    A flow entrypoint is the path to the file the flow is located in and the name of the flow function separated by a colon.

    If you need additional configuration, such as specifying a private repository, you can provide a GitRepository instead of a URL:

    load_from_storage.py
    from prefect import flow\nfrom prefect.runner.storage import GitRepository\nfrom prefect.blocks.system import Secret\n\nmy_flow = flow.from_source(\n    source=GitRepository(\n        url=\"https://github.com/org/private-repo.git\",\n        branch=\"dev\",\n        credentials={\n            \"access_token\": Secret.load(\"github-access-token\").get()\n        }\n    ),\n    entrypoint=\"flows.py:my_flow\"\n)\n\nif __name__ == \"__main__\":\n    my_flow()\n

    You can serve loaded flows

    Flows loaded from remote storage can be served using the same serve method as local flows:

    serve_loaded_flow.py
    from prefect import flow\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        source=\"https://github.com/org/repo.git\",\n        entrypoint=\"flows.py:my_flow\"\n    ).serve(name=\"my-deployment\")\n

    When you serve a flow loaded from remote storage, the serving process will periodically poll your remote storage for updates to the flow's code. This pattern allows you to update your flow code without restarting the serving process.

    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#pausing-or-suspending-a-flow-run","title":"Pausing or suspending a flow run","text":"

    Prefect provides you with the ability to halt a flow run with two functions that are similar, but slightly different. When a flow run is paused, code execution is stopped and the process continues to run. When a flow run is suspended, code execution is stopped and so is the process.

    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#pausing-a-flow-run","title":"Pausing a flow run","text":"

    Prefect enables pausing an in-progress flow run for manual approval. Prefect exposes this functionality via the pause_flow_run and resume_flow_run functions.

    Timeouts

    Paused flow runs time out after one hour by default. After the timeout, the flow run will fail with a message saying it paused and never resumed. You can specify a different timeout period in seconds using the timeout parameter.

    Most simply, pause_flow_run can be called inside a flow:

    from prefect import task, flow, pause_flow_run, resume_flow_run\n\n@task\nasync def marvin_setup():\n    return \"a raft of ducks walk into a bar...\"\n\n\n@task\nasync def marvin_punchline():\n    return \"it's a wonder none of them ducked!\"\n\n\n@flow\nasync def inspiring_joke():\n    await marvin_setup()\n    await pause_flow_run(timeout=600)  # pauses for 10 minutes\n    await marvin_punchline()\n

    You can also implement conditional pauses:

    from time import sleep\n\nfrom prefect import task, flow, pause_flow_run\nfrom prefect.client.schemas.objects import StateType\n\n@task\ndef task_one():\n    for i in range(3):\n        sleep(1)\n        print(i)\n\n@flow(log_prints=True)\ndef my_flow():\n    terminal_state = task_one.submit(return_state=True)\n    if terminal_state.type == StateType.COMPLETED:\n        print(\"Task one succeeded! Pausing flow run..\")\n        pause_flow_run(timeout=2)\n    else:\n        print(\"Task one failed. Skipping pause flow run..\")\n

    Calling the inspiring_joke flow above will block code execution after the first task and wait for resumption to deliver the punchline.

    await inspiring_joke()\n> \"a raft of ducks walk into a bar...\"\n

    Paused flow runs can be resumed by clicking the Resume button in the Prefect UI or calling the resume_flow_run utility via client code.

    resume_flow_run(FLOW_RUN_ID)\n

    The paused flow run will then finish!

    > \"it's a wonder none of them ducked!\"\n
    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#suspending-a-flow-run","title":"Suspending a flow run","text":"

    Similar to pausing a flow run, Prefect enables suspending an in-progress flow run.

    The difference between pausing and suspending a flow run

    There is an important difference between pausing and suspending a flow run. When you pause a flow run, the flow code is still running but is blocked until someone resumes the flow. This is not the case with suspending a flow run! When you suspend a flow run, the flow exits completely and the infrastructure running it (e.g., a Kubernetes Job) tears down.

    This means that you can suspend flow runs to save costs instead of paying for long-running infrastructure. However, when the flow run resumes, the flow code will execute again from the beginning of the flow, so you should use tasks and task caching to avoid recomputing expensive operations.

    Prefect exposes this functionality via the suspend_flow_run and resume_flow_run functions, as well as the Prefect UI.

    When called inside of a flow, suspend_flow_run will immediately suspend execution of the flow run. The flow run will be marked as Suspended and will not be resumed until resume_flow_run is called.
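
    A minimal sketch (hypothetical tasks), assuming suspend_flow_run is importable from the top-level prefect package like pause_flow_run above:

    from prefect import flow, task, suspend_flow_run\n\n@task(persist_result=True)\ndef expensive_step():\n    return 42\n\n@flow(persist_result=True, log_prints=True)\ndef suspendable_flow():\n    value = expensive_step()\n    # the process exits here; on resume the flow code runs again from the start\n    # and the persisted task result is retrieved instead of being recomputed\n    suspend_flow_run(timeout=3600)\n    print(f\"Resumed with {value}\")\n\nif __name__ == \"__main__\":\n    suspendable_flow()\n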

    Timeouts

    Suspended flow runs time out after one hour by default. After the timeout, the flow run will fail with a message saying it suspended and never resumed. You can specify a different timeout period in seconds using the timeout parameter or pass timeout=None for no timeout.

    Here is an example of a flow that does not block flow execution while paused. This flow will exit after one task, and will be rescheduled upon resuming. The stored result of the first task is retrieved instead of being rerun.

    from prefect import flow, pause_flow_run, task\n\n@task(persist_result=True)\ndef foo():\n    return 42\n\n@flow(persist_result=True)\ndef noblock_pausing():\n    x = foo.submit()\n    pause_flow_run(timeout=30, reschedule=True)\n    y = foo.submit()\n    z = foo(wait_for=[x])\n    alpha = foo(wait_for=[y])\n    omega = foo(wait_for=[x, y])\n

    Flow runs can be suspended out-of-process by calling suspend_flow_run(flow_run_id=<ID>) or selecting the Suspend button in the Prefect UI or Prefect Cloud.

    Suspended flow runs can be resumed by clicking the Resume button in the Prefect UI or calling the resume_flow_run utility via client code.

    resume_flow_run(FLOW_RUN_ID)\n

    Subflows can't be suspended independently of their parent run

    You can't suspend a subflow run independently of its parent flow run.

    If you use a flow to schedule a flow run with run_deployment, the scheduled flow run will be linked to the calling flow as a subflow run by default. This means you won't be able to suspend the scheduled flow run independently of the calling flow. Call run_deployment with as_subflow=False to disable this linking if you need to be able to suspend the scheduled flow run independently of the calling flow.

    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#waiting-for-input-when-pausing-or-suspending-a-flow-run","title":"Waiting for input when pausing or suspending a flow run","text":"

    Experimental

    The wait_for_input parameter used in the pause_flow_run or suspend_flow_run functions is an experimental feature. The interface or behavior of this feature may change without warning in future releases.

    If you encounter any issues, please let us know in Slack or with a Github issue.

    When pausing or suspending a flow run you may want to wait for input from a user. Prefect provides a way to do this by leveraging the pause_flow_run and suspend_flow_run functions. These functions accept a wait_for_input argument, the value of which should be a subclass of prefect.input.RunInput, a pydantic model. When resuming the flow run, users are required to provide data for this model. Upon successful validation, the flow run resumes, and the return value of the pause_flow_run or suspend_flow_run is an instance of the model containing the provided data.

    Here is an example of a flow that pauses and waits for input from a user:

    from prefect import flow, pause_flow_run\nfrom prefect.input import RunInput\n\n\nclass UserNameInput(RunInput):\n    name: str\n\n\n@flow(log_prints=True)\nasync def greet_user():\n    user_input = await pause_flow_run(\n        wait_for_input=UserNameInput\n    )\n\n    print(f\"Hello, {user_input.name}!\")\n

    Running this flow will create a flow run. The flow run will advance until code execution reaches pause_flow_run, at which point it will move into a Paused state. Execution will block and wait for resumption.

    When resuming the flow run, users will be prompted to provide a value for the name field of the UserNameInput model. Upon successful validation, the flow run will resume, and the return value of the pause_flow_run will be an instance of the UserNameInput model containing the provided data.
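
    A sketch of resuming programmatically with input (hypothetical flow run ID), assuming resume_flow_run accepts a run_input dictionary:

    from prefect import resume_flow_run\n\nresume_flow_run(\n    FLOW_RUN_ID,\n    run_input={\"name\": \"Marvin\"},\n)\n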

    For more in-depth information on receiving input from users when pausing and suspending flow runs, see the Creating interactive workflows guide.

    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#canceling-a-flow-run","title":"Canceling a flow run","text":"

    You may cancel a scheduled or in-progress flow run from the CLI, UI, REST API, or Python client.

    When cancellation is requested, the flow run is moved to a \"Cancelling\" state. If the deployment is a work pool-based deployment with a worker, then the worker monitors the state of flow runs and detects that cancellation has been requested. The worker then sends a signal to the flow run infrastructure, requesting termination of the run. If the run does not terminate after a grace period (default of 30 seconds), the infrastructure will be killed, ensuring the flow run exits.

    A deployment is required

    Flow run cancellation requires the flow run to be associated with a deployment. A monitoring process must be running to enforce the cancellation. Inline subflow runs, i.e. those created without run_deployment, cannot be cancelled without cancelling the parent flow run. If you need to cancel a subflow run independently of its parent flow run, we recommend deploying it separately and starting it with the run_deployment function, as sketched below.
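
    A minimal sketch of that pattern (hypothetical deployment name):

    from prefect import flow\nfrom prefect.deployments import run_deployment\n\n@flow\ndef orchestrator():\n    # not linked as a subflow, so this run can be cancelled independently\n    run_deployment(name=\"my-flow/my-deployment\", as_subflow=False)\n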

    Cancellation is robust to restarts of Prefect workers. To enable this, we attach metadata about the created infrastructure to the flow run. Internally, this is referred to as the infrastructure_pid or infrastructure identifier. Generally, this is composed of two parts:

    1. Scope: identifying where the infrastructure is running.
    2. ID: a unique identifier for the infrastructure within the scope.

    The scope is used to ensure that Prefect does not kill the wrong infrastructure. For example, workers running on multiple machines may have overlapping process IDs but should not have a matching scope.

    The identifiers for infrastructure types:

    • Processes: The machine hostname and the PID.
    • Docker Containers: The Docker API URL and container ID.
    • Kubernetes Jobs: The Kubernetes cluster name and the job name.

    While the cancellation process is robust, there are a few issues that can occur:

    • If the infrastructure block for the flow run has been removed or altered, cancellation may not work.
    • If the infrastructure block for the flow run does not have support for cancellation, cancellation will not work.
    • If the identifier scope does not match when attempting to cancel a flow run, the worker will be unable to cancel it. Another worker may attempt cancellation.
    • If the infrastructure associated with the run cannot be found or has already been killed, the worker will mark the flow run as cancelled.
    • If the infrastructure_pid is missing from the flow run, it will be marked as cancelled but cancellation cannot be enforced.
    • If the worker runs into an unexpected error during cancellation, the flow run may or may not be cancelled, depending on where the error occurred. The worker will try again to cancel the flow run. Another worker may attempt cancellation.

    Enhanced cancellation

    We are working on improving cases where cancellation can fail. You can try the improved cancellation experience by enabling the PREFECT_EXPERIMENTAL_ENABLE_ENHANCED_CANCELLATION setting on your worker or agents:

    prefect config set PREFECT_EXPERIMENTAL_ENABLE_ENHANCED_CANCELLATION=True\n

    If you encounter any issues, please let us know in Slack or with a Github issue.

    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#cancel-via-the-cli","title":"Cancel via the CLI","text":"

    From the command line in your execution environment, you can cancel a flow run by using the prefect flow-run cancel CLI command, passing the ID of the flow run.

    prefect flow-run cancel 'a55a4804-9e3c-4042-8b59-b3b6b7618736'\n
    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#cancel-via-the-ui","title":"Cancel via the UI","text":"

    From the UI you can cancel a flow run by navigating to the flow run's detail page and clicking the Cancel button in the upper right corner.

    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#timeouts","title":"Timeouts","text":"

    Flow timeouts are used to prevent unintentional long-running flows. When the duration of execution for a flow exceeds the duration specified in the timeout, a timeout exception will be raised and the flow will be marked as failed. In the UI, the flow will be visibly designated as TimedOut.

    Timeout durations are specified using the timeout_seconds keyword argument.

    from prefect import flow\nimport time\n\n@flow(timeout_seconds=1, log_prints=True)\ndef show_timeouts():\n    print(\"I will execute\")\n    time.sleep(5)\n    print(\"I will not execute\")\n
    ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/infrastructure/","title":"Infrastructure","text":"

    Workers are recommended

    Infrastructure blocks are part of the agent-based deployment model. Work Pools and Workers simplify the specification of each flow's infrastructure and runtime environment. If you have existing agents, you can upgrade from agents to workers to significantly enhance the experience of deploying flows.

    Users may specify an infrastructure block when creating a deployment. This block will be used to specify infrastructure for flow runs created by the deployment at runtime.

    Infrastructure can only be used with a deployment. When you run a flow directly by calling the flow yourself, you are responsible for the environment in which the flow executes.

    ","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#infrastructure-overview","title":"Infrastructure overview","text":"

    Prefect uses infrastructure to create the environment for a user's flow to execute.

    Infrastructure is attached to a deployment and is propagated to flow runs created for that deployment. Infrastructure is deserialized by the agent and it has two jobs:

    • Create execution environment infrastructure for the flow run.
    • Run a Python command to start the prefect.engine in the infrastructure, which retrieves the flow from storage and executes the flow.

    The engine acquires and calls the flow. Infrastructure doesn't know anything about how the flow is stored; it just passes a flow run ID to the engine.

    Infrastructure is specific to the environments in which flows will run. Prefect currently provides the following infrastructure types:

    • Process runs flows in a local subprocess.
    • DockerContainer runs flows in a Docker container.
    • KubernetesJob runs flows in a Kubernetes Job.
    • ECSTask runs flows in an Amazon ECS Task.
    • Cloud Run runs flows in a Google Cloud Run Job.
    • Container Instance runs flows in an Azure Container Instance.

    What about tasks?

    Flows and tasks can both use configuration objects to manage the environment in which code runs.

    Flows use infrastructure.

    Tasks use task runners. For more on how task runners work, see Task Runners.

    ","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#using-infrastructure","title":"Using infrastructure","text":"

    You may create customized infrastructure blocks through the Prefect UI or Prefect Cloud Blocks page, or create them in code and save them to the API using the block's .save() method.
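
    For example, a sketch of creating and saving a Process infrastructure block in code (hypothetical block name and settings):

    from prefect.infrastructure import Process\n\nprocess_block = Process(\n    env={\"MY_SETTING\": \"value\"},  # hypothetical environment variable\n    labels={\"team\": \"data\"},      # hypothetical label\n)\nprocess_block.save(\"my-process\", overwrite=True)\n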

    Once created, there are two distinct ways to use infrastructure in a deployment:

    • Starting with Prefect defaults \u2014 this is what happens when you pass the -i or --infra flag and provide a type when building deployment files.
    • Pre-configure infrastructure settings as blocks and base your deployment infrastructure on those settings \u2014 by passing -ib or --infra-block and a block slug when building deployment files.

    For example, when creating your deployment files, the supported Prefect infrastructure types are:

    • process
    • docker-container
    • kubernetes-job
    • ecs-task
    • cloud-run-job
    • container-instance-job
    $ prefect deployment build ./my_flow.py:my_flow -n my-flow-deployment -t test -i docker-container -sb s3/my-bucket --override env.EXTRA_PIP_PACKAGES=s3fs\nFound flow 'my-flow'\nSuccessfully uploaded 2 files to s3://bucket-full-of-sunshine\nDeployment YAML created at '/Users/terry/test/flows/infra/deployment.yaml'.\n

    In this example, we specify the DockerContainer infrastructure in addition to a preconfigured AWS S3 bucket storage block.

    The default deployment YAML file may be edited as needed to add an infrastructure type or infrastructure settings.

    ###\n### A complete description of a Prefect Deployment for flow 'my-flow'\n###\nname: my-flow-deployment\ndescription: null\nversion: e29de5d01b06d61b4e321d40f34a480c\n# The work queue that will handle this deployment's runs\nwork_queue_name: default\nwork_pool_name: default-agent-pool\ntags:\n- test\nparameters: {}\nschedules: []\npaused: true\ninfra_overrides:\n  env.EXTRA_PIP_PACKAGES: s3fs\ninfrastructure:\n  type: docker-container\n  env: {}\n  labels: {}\n  name: null\n  command:\n  - python\n  - -m\n  - prefect.engine\n  image: prefecthq/prefect:2-latest\n  image_pull_policy: null\n  networks: []\n  network_mode: null\n  auto_remove: false\n  volumes: []\n  stream_output: true\n  memswap_limit: null\n  mem_limit: null\n  privileged: false\n  block_type_slug: docker-container\n  _block_type_slug: docker-container\n\n###\n### DO NOT EDIT BELOW THIS LINE\n###\nflow_name: my-flow\nmanifest_path: my_flow-manifest.json\nstorage:\n  bucket_path: bucket-full-of-sunshine\n  aws_access_key_id: '**********'\n  aws_secret_access_key: '**********'\n  _is_anonymous: true\n  _block_document_name: anonymous-xxxxxxxx-f1ff-4265-b55c-6353a6d65333\n  _block_document_id: xxxxxxxx-06c2-4c3c-a505-4a8db0147011\n  block_type_slug: s3\n  _block_type_slug: s3\npath: ''\nentrypoint: my_flow.py:my-flow\nparameter_openapi_schema:\n  title: Parameters\n  type: object\n  properties: {}\n  required: null\n  definitions: null\ntimestamp: '2023-02-08T23:00:14.974642+00:00'\n

    Editing deployment YAML

    Note the big DO NOT EDIT comment in the deployment YAML: In practice, anything above this block can be freely edited before running prefect deployment apply to create the deployment on the API.

    Once the deployment exists, any flow runs that this deployment starts will use DockerContainer infrastructure.

    You can also create custom infrastructure blocks \u2014 either in the Prefect UI or in code via the API \u2014 and use the settings in the block to configure your infrastructure. For example, here we specify settings for Kubernetes infrastructure in a block named k8sdev.

    from prefect.infrastructure import KubernetesJob, KubernetesImagePullPolicy\n\nk8s_job = KubernetesJob(\n    namespace=\"dev\",\n    image=\"prefecthq/prefect:2.0.0-python3.9\",\n    image_pull_policy=KubernetesImagePullPolicy.IF_NOT_PRESENT,\n)\nk8s_job.save(\"k8sdev\")\n

    Now we can apply the infrastructure type and settings in the block by specifying the block slug kubernetes-job/k8sdev as the infrastructure type when building a deployment:

    prefect deployment build flows/k8s_example.py:k8s_flow --name k8sdev --tag k8s -sb s3/dev -ib kubernetes-job/k8sdev\n

    See Deployments for more information about deployment build options.

    ","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#configuring-infrastructure","title":"Configuring infrastructure","text":"

    Every infrastructure type has type-specific options.

    ","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#process","title":"Process","text":"

    Process infrastructure runs a command in a new process.

    Current environment variables and Prefect settings will be included in the created process. Configured environment variables will override any current environment variables.

    Process supports the following settings:

    • command: A list of strings specifying the command to start the flow run. In most cases you should not override this.
    • env: Environment variables to set for the new process.
    • labels: Labels for the process. Labels are for metadata purposes only and cannot be attached to the process itself.
    • name: A name for the process. For display purposes only.
    ","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#dockercontainer","title":"DockerContainer","text":"

    DockerContainer infrastructure executes flow runs in a container.

    Requirements for DockerContainer:

    • Docker Engine must be available.
    • You must configure remote Storage. Local storage is not supported for Docker.
    • The API must be available from within the flow run container. To facilitate connections to locally hosted APIs, localhost and 127.0.0.1 will be replaced with host.docker.internal.
    • The ephemeral Prefect API won't work with Docker and Kubernetes. You must have a Prefect server or Prefect Cloud API endpoint set in your agent's configuration.

    DockerContainer supports the following settings:

    • auto_remove: Bool indicating whether the container will be removed on completion. If False, the container will remain after exit for inspection.
    • command: A list of strings specifying the command to run in the container to start the flow run. In most cases you should not override this.
    • env: Environment variables to set for the container.
    • image: An optional string specifying the name of a Docker image to use. Defaults to the Prefect image. If the image is stored anywhere other than a public Docker Hub registry, use a corresponding registry block, e.g. DockerRegistry, or otherwise ensure that your execution layer is authenticated to pull the image from the image registry.
    • image_pull_policy: Specifies if the image should be pulled. One of 'ALWAYS', 'NEVER', 'IF_NOT_PRESENT'.
    • image_registry: A DockerRegistry block containing credentials to use if image is stored in a private image registry.
    • labels: An optional dictionary of labels, mapping name to value.
    • name: An optional name for the container.
    • networks: An optional list of strings specifying Docker networks to connect the container to.
    • network_mode: Set the network mode for the created container. Defaults to 'host' if a local API url is detected, otherwise the Docker default of 'bridge' is used. If 'networks' is set, this cannot be set.
    • stream_output: Bool indicating whether to stream output from the subprocess to local standard output.
    • volumes: An optional list of volume mount strings in the format of \"local_path:container_path\".

    Prefect automatically sets a Docker image matching the Python and Prefect version you're using at deployment time. You can see all available images at Docker Hub.
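
    For example, a sketch of a DockerContainer block configured and saved in code (hypothetical block name and settings):

    from prefect.infrastructure import DockerContainer\n\ndocker_block = DockerContainer(\n    image=\"prefecthq/prefect:2-latest\",\n    image_pull_policy=\"ALWAYS\",\n    auto_remove=True,\n)\ndocker_block.save(\"my-docker-block\", overwrite=True)\n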

    ","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#kubernetesjob","title":"KubernetesJob","text":"

    KubernetesJob infrastructure executes flow runs in a Kubernetes Job.

    Requirements for KubernetesJob:

    • kubectl must be available.
    • You must configure remote Storage. Local storage is not supported for Kubernetes.
    • The ephemeral Prefect API won't work with Docker and Kubernetes. You must have a Prefect server or Prefect Cloud API endpoint set in your agent's configuration.

    The Prefect CLI command prefect kubernetes manifest server automatically generates a Kubernetes manifest with default settings for Prefect deployments. By default, it simply prints out the YAML configuration for a manifest. You can pipe this output to a file of your choice and edit as necessary.
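
    For example, to write the generated manifest to a file (hypothetical filename) for editing:

    prefect kubernetes manifest server > manifest.yaml\n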

    KubernetesJob supports the following settings:

    • cluster_config: An optional Kubernetes cluster config to use for this job.
    • command: A list of strings specifying the command to run in the container to start the flow run. In most cases you should not override this.
    • customizations: A list of JSON 6902 patches to apply to the base Job manifest. Alternatively, a valid JSON string is allowed (handy for deployments CLI).
    • env: Environment variables to set for the container.
    • finished_job_ttl: The number of seconds to retain jobs after completion. If set, finished jobs will be cleaned up by Kubernetes after the given delay. If None (default), jobs will need to be manually removed.
    • image: String specifying the tag of a Docker image to use for the Job.
    • image_pull_policy: The Kubernetes image pull policy to use for job containers.
    • job: The base manifest for the Kubernetes Job.
    • job_watch_timeout_seconds: Number of seconds to watch for job creation before timing out (defaults to None).
    • labels: Dictionary of labels to add to the Job.
    • name: An optional name for the job.
    • namespace: String signifying the Kubernetes namespace to use.
    • pod_watch_timeout_seconds: Number of seconds to watch for pod creation before timing out (default 60).
    • service_account_name: An optional string specifying which Kubernetes service account to use.
    • stream_output: Bool indicating whether to stream output from the subprocess to local standard output.
    ","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#kubernetesjob-overrides-and-customizations","title":"KubernetesJob overrides and customizations","text":"

    When creating deployments using KubernetesJob infrastructure, the infra_overrides parameter expects a dictionary. For a KubernetesJob, the customizations parameter expects a list.

    Containers expect a list of objects, even if there is only one. For any patches applying to the container, the path value should index into the containers list, for example: /spec/template/spec/containers/0/resources

    A Kubernetes-Job infrastructure block defined in Python:

    from prefect.infrastructure import KubernetesJob, KubernetesImagePullPolicy\n\ncustomizations = [\n    {\n        \"op\": \"add\",\n        \"path\": \"/spec/template/spec/containers/0/resources\",\n        \"value\": {\n            \"requests\": {\n                \"cpu\": \"2000m\",\n                \"memory\": \"4Gi\"\n            },\n            \"limits\": {\n                \"cpu\": \"4000m\",\n                \"memory\": \"8Gi\",\n                \"nvidia.com/gpu\": \"1\"\n            }\n        }\n    }\n]\n\n# namespace and image_name are defined elsewhere in your code\nk8s_job = KubernetesJob(\n    namespace=namespace,\n    image=image_name,\n    image_pull_policy=KubernetesImagePullPolicy.ALWAYS,\n    finished_job_ttl=300,\n    job_watch_timeout_seconds=600,\n    pod_watch_timeout_seconds=600,\n    service_account_name=\"prefect-server\",\n    customizations=customizations,\n)\nk8s_job.save(\"devk8s\")\n

    A Deployment with infra-overrides defined in Python:

    from prefect.deployments import Deployment\nfrom prefect.infrastructure import KubernetesJob\n\ninfra_overrides = {\n    \"customizations\": [\n        {\n            \"op\": \"add\",\n            \"path\": \"/spec/template/spec/containers/0/resources\",\n            \"value\": {\n                \"requests\": {\n                    \"cpu\": \"2000m\",\n                    \"memory\": \"4Gi\"\n                },\n                \"limits\": {\n                    \"cpu\": \"4000m\",\n                    \"memory\": \"8Gi\",\n                    \"nvidia.com/gpu\": \"1\"\n                }\n            }\n        }\n    ]\n}\n\n# Load the already created K8s block\nk8s_job = KubernetesJob.load(\"devk8s\")\n\n# my_flow and storage are defined elsewhere in your code\ndeployment = Deployment.build_from_flow(\n    flow=my_flow,\n    name=\"s3-example\",\n    version=2,\n    work_queue_name=\"aws\",\n    infrastructure=k8s_job,\n    storage=storage,\n    infra_overrides=infra_overrides,\n)\n\ndeployment.apply()\n
    ","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#ecstask","title":"ECSTask","text":"

    ECSTask infrastructure runs your flow in an ECS Task.

    Requirements for ECSTask:

    • The ephemeral Prefect API won't work with ECS directly. You must have a Prefect server or Prefect Cloud API endpoint set in your agent's configuration.
    • The prefect-aws collection must be installed within the agent environment: pip install prefect-aws
    • The ECSTask and AwsCredentials blocks must be registered within the agent environment: prefect block register -m prefect_aws.ecs
    • You must configure remote Storage. Local storage is not supported for ECS tasks. The most commonly used type of storage with ECSTask is S3. If you leverage that type of block, make sure that s3fs is installed within your agent and flow run environment. The easiest way to satisfy all the installation-related points mentioned above is to include the following commands in your Dockerfile:
    FROM prefecthq/prefect:2-python3.9  # example base image \nRUN pip install s3fs prefect-aws\n

    Make sure to allocate enough CPU and memory to your agent, and consider adding retries

    When you start a Prefect agent on AWS ECS Fargate, allocate as much CPU and memory as needed for your workloads. Your agent needs enough resources to appropriately provision infrastructure for your flow runs and to monitor their execution. Otherwise, your flow runs may get stuck in a Pending state. Alternatively, set a work-queue concurrency limit to ensure that the agent will not try to process all runs at the same time.

    Some API calls to provision infrastructure may fail due to unexpected issues on the client side (for example, transient errors such as ConnectionError, HTTPClientError, or RequestTimeout), or due to server-side rate limiting from the AWS service. To mitigate those issues, we recommend adding environment variables such as AWS_MAX_ATTEMPTS (can be set to an integer value such as 10) and AWS_RETRY_MODE (can be set to a string value including standard or adaptive modes). Those environment variables must be added within the agent environment, e.g. on your ECS service running the agent, rather than on the ECSTask infrastructure block.
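
    For example, a sketch of setting these variables in the agent's environment, using the values suggested above:

    export AWS_MAX_ATTEMPTS=10\nexport AWS_RETRY_MODE=adaptive\n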

    ","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#docker-images","title":"Docker images","text":"

    Learn about options for Prefect-maintained Docker images in the Docker guide.

    ","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/results/","title":"Results","text":"

    Results represent the data returned by a flow or a task.

    ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#retrieving-results","title":"Retrieving results","text":"

    When calling flows or tasks, the result is returned directly:

    from prefect import flow, task\n\n@task\ndef my_task():\n    return 1\n\n@flow\ndef my_flow():\n    task_result = my_task()\n    return task_result + 1\n\nresult = my_flow()\nassert result == 2\n

    When working with flow and task states, the result can be retrieved with the State.result() method:

    from prefect import flow, task\n\n@task\ndef my_task():\n    return 1\n\n@flow\ndef my_flow():\n    state = my_task(return_state=True)\n    return state.result() + 1\n\nstate = my_flow(return_state=True)\nassert state.result() == 2\n

    When submitting tasks to a runner, the result can be retrieved with the Future.result() method:

    from prefect import flow, task\n\n@task\ndef my_task():\n    return 1\n\n@flow\ndef my_flow():\n    future = my_task.submit()\n    return future.result() + 1\n\nresult = my_flow()\nassert result == 2\n
    ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#handling-failures","title":"Handling failures","text":"

    Sometimes your flows or tasks will encounter an exception. Prefect captures all exceptions in order to report states to the orchestrator, but we do not hide them from you (unless you ask us to) as your program needs to know if an unexpected error has occurred.

    When calling flows or tasks, the exceptions are raised as in normal Python:

    from prefect import flow, task\n\n@task\ndef my_task():\n    raise ValueError()\n\n@flow\ndef my_flow():\n    try:\n        my_task()\n    except ValueError:\n        print(\"Oh no! The task failed.\")\n\n    return True\n\nmy_flow()\n

    If you would prefer to check for a failed task without using try/except, you may ask Prefect to return the state:

    from prefect import flow, task\n\n@task\ndef my_task():\n    raise ValueError()\n\n@flow\ndef my_flow():\n    state = my_task(return_state=True)\n\n    if state.is_failed():\n        print(\"Oh no! The task failed. Falling back to '1'.\")\n        result = 1\n    else:\n        result = state.result()\n\n    return result + 1\n\nresult = my_flow()\nassert result == 2\n

    If you retrieve the result from a failed state, the exception will be raised. For this reason, it's often best to check if the state is failed first.

    from prefect import flow, task\n\n@task\ndef my_task():\n    raise ValueError()\n\n@flow\ndef my_flow():\n    state = my_task(return_state=True)\n\n    try:\n        result = state.result()\n    except ValueError:\n        print(\"Oh no! The state raised the error!\")\n\n    return True\n\nmy_flow()\n

    When retrieving the result from a state, you can ask Prefect not to raise exceptions:

    from prefect import flow, task\n\n@task\ndef my_task():\n    raise ValueError()\n\n@flow\ndef my_flow():\n    state = my_task(return_state=True)\n\n    maybe_result = state.result(raise_on_failure=False)\n    if isinstance(maybe_result, ValueError):\n        print(\"Oh no! The task failed. Falling back to '1'.\")\n        result = 1\n    else:\n        result = maybe_result\n\n    return result + 1\n\nresult = my_flow()\nassert result == 2\n

    When submitting tasks to a runner, Future.result() works the same as State.result():

    from prefect import flow, task\n\n@task\ndef my_task():\n    raise ValueError()\n\n@flow\ndef my_flow():\n    future = my_task.submit()\n\n    try:\n        future.result()\n    except ValueError:\n        print(\"Ah! Futures will raise the failure as well.\")\n\n    # You can ask it not to raise the exception too\n    maybe_result = future.result(raise_on_failure=False)\n    print(f\"Got {type(maybe_result)}\")\n\n    return True\n\nmy_flow()\n
    ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#working-with-async-results","title":"Working with async results","text":"

    When calling flows or tasks, the result is returned directly:

    import asyncio\nfrom prefect import flow, task\n\n@task\nasync def my_task():\n    return 1\n\n@flow\nasync def my_flow():\n    task_result = await my_task()\n    return task_result + 1\n\nresult = asyncio.run(my_flow())\nassert result == 2\n

    When working with flow and task states, the result can be retrieved with the State.result() method:

    import asyncio\nfrom prefect import flow, task\n\n@task\nasync def my_task():\n    return 1\n\n@flow\nasync def my_flow():\n    state = await my_task(return_state=True)\n    result = await state.result(fetch=True)\n    return result + 1\n\nasync def main():\n    state = await my_flow(return_state=True)\n    assert await state.result(fetch=True) == 2\n\nasyncio.run(main())\n

    Resolving results

    Prefect 2.6.0 added automatic retrieval of persisted results. Prior to this version, State.result() did not require an await. For backwards compatibility, when used from an asynchronous context, State.result() returns a raw result type.

    You may opt-in to the new behavior by passing fetch=True as shown in the example above. If you would like this behavior to be used automatically, you may enable the PREFECT_ASYNC_FETCH_STATE_RESULT setting. If you do not opt-in to this behavior, you will see a warning.
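
    For example, to enable the automatic fetching behavior:

    prefect config set PREFECT_ASYNC_FETCH_STATE_RESULT=true\n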

    You may also opt-out by setting fetch=False. This will silence the warning, but you will need to retrieve your result manually from the result type.

    When submitting tasks to a runner, the result can be retrieved with the Future.result() method:

    import asyncio\nfrom prefect import flow, task\n\n@task\nasync def my_task():\n    return 1\n\n@flow\nasync def my_flow():\n    future = await my_task.submit()\n    result = await future.result()\n    return result + 1\n\nresult = asyncio.run(my_flow())\nassert result == 2\n
    ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#persisting-results","title":"Persisting results","text":"

    The Prefect API does not store your results except in special cases. Instead, the result is persisted to a storage location in your infrastructure and Prefect stores a reference to the result.

    The following Prefect features require results to be persisted:

    • Task cache keys
    • Flow run retries

    If results are not persisted, these features may not be usable.

    ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#configuring-persistence-of-results","title":"Configuring persistence of results","text":"

    Persistence of results requires a serializer and a storage location. Prefect sets defaults for these, and you should not need to adjust them unless you want to customize behavior. You can configure results on the flow and task decorators with the following options (a short sketch follows this list):

    • persist_result: Whether the result should be persisted to storage.
    • result_storage: Where to store the result when persisted.
    • result_serializer: How to convert the result to a storable form.
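
    As a minimal sketch (the storage slug s3/my-result-block and the task body are hypothetical, not taken from the examples in these docs), these options can be set directly on the decorators:

    from prefect import flow, task\nfrom prefect.serializers import JSONSerializer\n\n@task(\n    persist_result=True,                  # persist this task's result\n    result_storage=\"s3/my-result-block\",  # hypothetical storage block slug\n    result_serializer=JSONSerializer(),   # store the result as JSON\n)\ndef my_task():\n    return {\"answer\": 42}\n\n@flow\ndef my_flow():\n    return my_task()\n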
    ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#toggling-persistence","title":"Toggling persistence","text":"

    Persistence of the result of a task or flow can be configured with the persist_result option. The persist_result option defaults to a null value, which will automatically enable persistence if it is needed for a Prefect feature used by the flow or task. Otherwise, persistence is disabled by default.

    For example, the following flow has retries enabled. Flow retries require that all task results are persisted, so the task's result will be persisted:

    from prefect import flow, task\n\n@task\ndef my_task():\n    return \"hello world!\"\n\n@flow(retries=2)\ndef my_flow():\n    # This task does not have persistence toggled off and it is needed for the flow feature,\n    # so Prefect will persist its result at runtime\n    my_task()\n

    Flow retries do not require the flow's result to be persisted, so it will not be.

    In this next example, one task has caching enabled. Task caching requires that the given task's result is persisted:

    from prefect import flow, task\nfrom datetime import timedelta\n\n@task(cache_key_fn=lambda: \"always\", cache_expiration=timedelta(seconds=20))\ndef my_task():\n    # This task uses caching so its result will be persisted by default\n    return \"hello world!\"\n\n\n@task\ndef my_other_task():\n    ...\n\n@flow\ndef my_flow():\n    # This task uses a feature that requires result persistence\n    my_task()\n\n    # This task does not use a feature that requires result persistence and the\n    # flow does not use any features that require task result persistence so its\n    # result will not be persisted by default\n    my_other_task()\n

    Persistence of results can be manually toggled on or off:

    from prefect import flow, task\n\n@flow(persist_result=True)\ndef my_flow():\n    # This flow will persist its result even if not necessary for a feature.\n    ...\n\n@task(persist_result=False)\ndef my_task():\n    # This task will never persist its result.\n    # If persistence is needed for a feature, an error will be raised.\n    ...\n

    Toggling persistence manually will always override any behavior that Prefect would infer.

    You may also change Prefect's default persistence behavior with the PREFECT_RESULTS_PERSIST_BY_DEFAULT setting. To persist results by default, even if they are not needed for a feature, change the value to a truthy value:

    prefect config set PREFECT_RESULTS_PERSIST_BY_DEFAULT=true\n

    Tasks and flows with persist_result=False will not persist their results even if PREFECT_RESULTS_PERSIST_BY_DEFAULT is true.

    ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-storage-location","title":"Result storage location","text":"

    The result storage location can be configured with the result_storage option. The result_storage option defaults to a null value, which infers storage from the context. Generally, this means that tasks will use the result storage configured on the flow unless otherwise specified. If there is no context to load the storage from and results must be persisted, results will be stored in the path specified by the PREFECT_LOCAL_STORAGE_PATH setting (defaults to ~/.prefect/storage).

    from prefect import flow, task\nfrom prefect.filesystems import LocalFileSystem, S3\n\n@flow(persist_result=True)\ndef my_flow():\n    my_task()  # This task will use the flow's result storage\n\n@task(persist_result=True)\ndef my_task():\n    ...\n\nmy_flow()  # The flow has no result storage configured and no parent, so the local file system will be used.\n\n\n# Reconfigure the flow to use a different storage type\nnew_flow = my_flow.with_options(result_storage=S3(bucket_path=\"my-bucket\"))\n\nnew_flow()  # The flow and task within it will use S3 for result storage.\n

    You can configure a specific storage location using one of the following (a short sketch follows this list):

    • A storage instance, e.g. LocalFileSystem(basepath=\".my-results\")
    • A storage slug, e.g. 's3/dev-s3-block'
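
    For example, a short sketch of both forms on task decorators (the slug s3/dev-s3-block assumes a block with that name has already been saved):

    from prefect import task\nfrom prefect.filesystems import LocalFileSystem\n\n@task(persist_result=True, result_storage=LocalFileSystem(basepath=\".my-results\"))\ndef task_with_instance_storage():\n    ...\n\n@task(persist_result=True, result_storage=\"s3/dev-s3-block\")  # slug of an existing block\ndef task_with_slug_storage():\n    ...\n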
    ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-storage-key","title":"Result storage key","text":"

    The path of the result file in the result storage can be configured with the result_storage_key. The result_storage_key option defaults to a null value, which generates a unique identifier for each result.

    from prefect import flow, task\nfrom prefect.filesystems import LocalFileSystem, S3\n\n@flow(result_storage=S3(bucket_path=\"my-bucket\"))\ndef my_flow():\n    my_task()\n\n@task(persist_result=True, result_storage_key=\"my_task.json\")\ndef my_task():\n    ...\n\nmy_flow()  # The task's result will be persisted to 's3://my-bucket/my_task.json'\n

    Result storage keys are formatted with access to all of the modules in prefect.runtime and the run's parameters. In the following example, we will run a flow with three runs of the same task. Each task run will write its result to a unique file based on the name parameter.

    from prefect import flow, task\n\n@flow()\ndef my_flow():\n    hello_world()\n    hello_world(name=\"foo\")\n    hello_world(name=\"bar\")\n\n@task(persist_result=True, result_storage_key=\"hello-{parameters[name]}.json\")\ndef hello_world(name: str = \"world\"):\n    return f\"hello {name}\"\n\nmy_flow()\n

    After running the flow, we can see three persisted result files in our storage directory:

    $ ls ~/.prefect/storage | grep \"hello-\"\nhello-bar.json\nhello-foo.json\nhello-world.json\n

    In the next example, we include metadata about the flow run from the prefect.runtime.flow_run module:

    from prefect import flow, task\n\n@flow\ndef my_flow():\n    hello_world()\n\n@task(persist_result=True, result_storage_key=\"{flow_run.flow_name}_{flow_run.name}_hello.json\")\ndef hello_world(name: str = \"world\"):\n    return f\"hello {name}\"\n\nmy_flow()\n

    After running this flow, we can see a result file templated with the name of the flow and the flow run:

    \u276f ls ~/.prefect/storage | grep \"my-flow\"    \nmy-flow_industrious-trout_hello.json\n

    If a result exists at a given storage key in the storage location, it will be overwritten.

    Result storage keys can only be configured on tasks at this time.

    ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-serializer","title":"Result serializer","text":"

    The result serializer can be configured with the result_serializer option. The result_serializer option defaults to a null value, which infers the serializer from the context. Generally, this means that tasks will use the result serializer configured on the flow unless otherwise specified. If there is no context to load the serializer from, the serializer defined by PREFECT_RESULTS_DEFAULT_SERIALIZER will be used. This setting defaults to Prefect's pickle serializer.

    You may configure the result serializer using either of the following (a short sketch follows this list):

    • A type name, e.g. \"json\" or \"pickle\" \u2014 this corresponds to an instance with default values
    • An instance, e.g. JSONSerializer(jsonlib=\"orjson\")
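
    For example, a minimal sketch of both forms:

    from prefect import flow, task\nfrom prefect.serializers import JSONSerializer\n\n@task(persist_result=True, result_serializer=\"json\")  # type name with default values\ndef task_with_json_results():\n    ...\n\n@flow(persist_result=True, result_serializer=JSONSerializer(jsonlib=\"orjson\"))  # configured instance\ndef flow_with_orjson_results():\n    ...\n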
    ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#compressing-results","title":"Compressing results","text":"

    Prefect provides a CompressedSerializer which can be used to wrap other serializers to provide compression over the bytes they generate. The compressed serializer uses lzma compression by default. We test other compression schemes provided in the Python standard library such as bz2 and zlib, but you should be able to use any compression library that provides compress and decompress methods.

    You may configure compression of results using:

    • A type name, prefixed with compressed/ e.g. \"compressed/json\" or \"compressed/pickle\"
    • An instance e.g. CompressedSerializer(serializer=\"pickle\", compressionlib=\"lzma\")

    Note that the \"compressed/<serializer-type>\" shortcut will only work for serializers provided by Prefect. If you are using custom serializers, you must pass a full instance.

    ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#storage-of-results-in-prefect","title":"Storage of results in Prefect","text":"

    The Prefect API does not store your results in most cases for the following reasons:

    • Results can be large and slow to send to and from the API.
    • Results often contain private information or data.
    • Results would need to be stored in the database or complex logic implemented to hydrate from another source.

    There are a few cases where Prefect will store your results directly in the database. This is an optimization to reduce the overhead of reading and writing to result storage.

    The following data types will be stored by the API without persistence to storage:

    • booleans (True, False)
    • nulls (None)

    If persist_result is set to False, these values will never be stored.

    ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#tracking-results","title":"Tracking results","text":"

    The Prefect API tracks metadata about your results. The value of your result is only stored in specific cases. Result metadata can be seen in the UI on the \"Results\" page for flows.

    Prefect tracks the following result metadata:

    • Data type
    • Storage location (if persisted)
    ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#caching-of-results-in-memory","title":"Caching of results in memory","text":"

    When running your workflows, Prefect will keep the results of all tasks and flows in memory so they can be passed downstream. In some cases, it is desirable to override this behavior. For example, if you are returning a large amount of data from a task, it can be costly to keep it in memory for the entire duration of the flow run.

    Flows and tasks both include an option to drop the result from memory with cache_result_in_memory:

    @flow(cache_result_in_memory=False)\ndef foo():\n    return \"pretend this is large data\"\n\n@task(cache_result_in_memory=False)\ndef bar():\n    return \"pretend this is biiiig data\"\n

    When cache_result_in_memory is disabled, the result of your flow or task will be persisted by default. The result will then be pulled from storage when needed.

    @flow\ndef foo():\n    result = bar()\n    state = bar(return_state=True)\n\n    # The result will be retrieved from storage here\n    state.result()\n\n    future = bar.submit()\n    # The result will be retrieved from storage here\n    future.result()\n\n@task(cache_result_in_memory=False)\ndef bar():\n    # This result will be persisted\n    return \"pretend this is biiiig data\"\n

    If both cache_result_in_memory and persistence are disabled, your results will not be available downstream.

    @task(persist_result=False, cache_result_in_memory=False)\ndef bar():\n    return \"pretend this is biiiig data\"\n\n@flow\ndef foo():\n    # Raises an error\n    result = bar()\n\n    # This is okay\n    state = bar(return_state=True)\n\n    # Raises an error\n    state.result()\n\n    # This is okay\n    future = bar.submit()\n\n    # Raises an error\n    future.result()\n
    ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-storage-types","title":"Result storage types","text":"

    Result storage is responsible for reading and writing serialized data to an external location. At this time, any file system block can be used for result storage.

    ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-serializer-types","title":"Result serializer types","text":"

    A result serializer is responsible for converting your Python object to and from bytes. This is necessary to store the object outside of Python and retrieve it later.

    ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#pickle-serializer","title":"Pickle serializer","text":"

    Pickle is a standard Python protocol for encoding arbitrary Python objects. We supply a custom pickle serializer at prefect.serializers.PickleSerializer. Prefect's pickle serializer uses the cloudpickle project by default to support more object types. Alternative pickle libraries can be specified:

    from prefect.serializers import PickleSerializer\n\nPickleSerializer(picklelib=\"custompickle\")\n

    Benefits of the pickle serializer:

    • Many object types are supported.
    • Objects can define custom pickle support.

    Drawbacks of the pickle serializer:

    • When nested attributes of an object cannot be pickled, it is hard to determine the cause.
    • When deserializing objects, your Python and pickle library versions must match those used at serialization time.
    • Serialized objects cannot be easily shared across different programming languages.
    • Serialized objects are not human readable.
    ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#json-serializer","title":"JSON serializer","text":"

    We supply a custom JSON serializer at prefect.serializers.JSONSerializer. Prefect's JSON serializer uses custom hooks by default to support more object types. Specifically, we add support for all types supported by Pydantic.

    By default, we use the standard Python json library. Alternative JSON libraries can be specified:

    from prefect.serializers import JSONSerializer\n\nJSONSerializer(jsonlib=\"orjson\")\n

    Benefits of the JSON serializer:

    • Serialized objects are human readable.
    • Serialized objects can often be shared across different programming languages.
    • Deserialization of serialized objects is generally version agnostic.

    Drawbacks of the JSON serializer:

    • Supported types are limited.
    • Implementing support for additional types must be done at the serializer level.
    ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-types","title":"Result types","text":"

    Prefect uses internal result types to capture information about the result attached to a state. The following types are used:

    • UnpersistedResult: Stores result metadata but the value is only available when created.
    • LiteralResult: Stores simple values inline.
    • PersistedResult: Stores a reference to a result persisted to storage.

    All result types include a get() method that can be called to return the value of the result. This is done behind the scenes when the result() method is used on states or futures.
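
    For illustration, a minimal sketch: calling result() on a state resolves the attached result type (and its get() call) for you:

    from prefect import flow, task\n\n@task(persist_result=True)\ndef my_task():\n    return 42\n\n@flow\ndef my_flow():\n    state = my_task(return_state=True)\n    # result() resolves the attached result type behind the scenes\n    return state.result()\n\nassert my_flow() == 42\n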

    ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#unpersisted-results","title":"Unpersisted results","text":"

    Unpersisted results are used to represent results that have not been and will not be persisted beyond the current flow run. The value associated with the result is stored in memory, but will not be available later. Result metadata is attached to this object for storage in the API and representation in the UI.

    ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#literal-results","title":"Literal results","text":"

    Literal results are used to represent results stored in the Prefect database. The values contained by these results must always be JSON serializable.

    Example:

    from prefect.results import LiteralResult\n\nresult = LiteralResult(value=None)\nresult.json()\n# {\"type\": \"result\", \"value\": \"null\"}\n

    Literal results reduce the overhead required to persist simple results.

    ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#persisted-results","title":"Persisted results","text":"

    The persisted result type contains all of the information needed to retrieve the result from storage. This includes:

    • Storage: A reference to the result storage that can be used to read the serialized result.
    • Key: Indicates where this specific result is in storage.

    Persisted result types also contain metadata for inspection without retrieving the result:

    • Serializer type: The name of the result serializer type.

    The get() method on result references retrieves the data from storage, deserializes it, and returns the original object. The get() operation will cache the resolved object to reduce the overhead of subsequent calls.

    ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#persisted-result-blob","title":"Persisted result blob","text":"

    When results are persisted to storage, they are always written as a JSON document. The schema for this is described by the PersistedResultBlob type. The document contains the following (a short sketch of inspecting a blob follows this list):

    • The serialized data of the result.
    • A full description of the result serializer that can be used to deserialize the result data.
    • The Prefect version used to create the result.
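
    As a rough sketch, you can read such a blob directly from storage; the file name below is hypothetical and assumes the hello-world.json key from the earlier storage key example under the default local storage path:

    import json\nfrom pathlib import Path\n\n# Hypothetical persisted result file under the default local storage path\nblob_path = Path.home() / \".prefect\" / \"storage\" / \"hello-world.json\"\nblob = json.loads(blob_path.read_text())\n\n# The document holds the serialized data, a description of the serializer, and the Prefect version\nprint(sorted(blob.keys()))\n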
    ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/schedules/","title":"Schedules","text":"

    Scheduling is one of the primary reasons for using an orchestrator such as Prefect. Prefect allows you to use schedules to automatically create new flow runs for deployments.

    Prefect Cloud can also schedule flow runs through event-driven automations.

    Schedules tell the Prefect API how to create new flow runs for you automatically on a specified cadence.

    You can add a schedule to any deployment. The Prefect Scheduler service periodically reviews every deployment and creates new flow runs according to the schedule configured for the deployment.

    Support for multiple schedules

    We are currently rolling out support for multiple schedules per deployment. You can now assign multiple schedules to deployments in the Prefect UI, the CLI via prefect deployment schedule commands, the Deployment class, and in block-based deployment YAML files.

    Support for multiple schedules in flow.serve, flow.deploy, serve, and worker-based deployments with prefect deploy will arrive soon.

    ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#schedule-types","title":"Schedule types","text":"

    Prefect supports several types of schedules that cover a wide range of use cases and offer a large degree of customization:

    • Cron is most appropriate for users who are already familiar with cron from previous use.
    • Interval is best suited for deployments that need to run at some consistent cadence that isn't related to absolute time.
    • RRule is best suited for deployments that rely on calendar logic for simple recurring schedules, irregular intervals, exclusions, or day-of-month adjustments.

    Schedules can be inactive

    When you create or edit a schedule, you can set the active property to False in Python (or false in a YAML file) to deactivate the schedule. This is useful if you want to keep the schedule configuration but temporarily stop the schedule from creating new flow runs.

    ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#cron","title":"Cron","text":"

    A schedule may be specified with a cron pattern. Users may also provide a timezone to enforce DST behaviors.

    Cron uses croniter to specify datetime iteration with a cron-like format.

    Cron properties include:

    • cron: A valid cron string. (Required)
    • day_or: Boolean indicating how croniter handles day and day_of_week entries. Default is True.
    • timezone: String name of a time zone. (See the IANA Time Zone Database for valid time zones.)
    ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#how-the-day_or-property-works","title":"How the day_or property works","text":"

    The day_or property defaults to True, matching the behavior of cron. In this mode, if you specify a day (of the month) entry and a day_of_week entry, the schedule will run a flow on both the specified day of the month and on the specified day of the week. The \"or\" in day_or refers to the fact that the two entries are treated like an OR statement, so the schedule should include both, as in the SQL statement SELECT * FROM employees WHERE first_name = 'Xi\u0101ng' OR last_name = 'Brookins';.

    For example, with day_or set to True, the cron schedule * * 3 1 2 runs a flow every minute on the 3rd day of the month (whatever that is) and on Tuesday (the second day of the week) in January (the first month of the year).

    With day_or set to False, the day (of the month) and day_of_week entries are joined with the more restrictive AND operation, as in the SQL statement SELECT * from employees WHERE first_name = 'Andrew' AND last_name = 'Brookins';. For example, the same schedule, when day_or is False, runs a flow on every minute on the 3rd Tuesday in January. This behavior matches fcron instead of cron.
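
    For example, a sketch of setting day_or on a CronSchedule in Python (assuming the schedule classes in prefect.client.schemas.schedules shown elsewhere in these docs):

    from prefect.client.schemas.schedules import CronSchedule\n\n# day (of the month) and day_of_week entries are joined with AND, per the behavior described above\nschedule = CronSchedule(cron=\"* * 3 1 2\", day_or=False, timezone=\"America/Chicago\")\n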

    Supported croniter features

    While Prefect supports most features of croniter for creating cron-like schedules, we do not currently support \"R\" random or \"H\" hashed keyword expressions or the schedule jittering possible with those expressions.

    Daylight saving time considerations

    If the timezone is a DST-observing one, then the schedule will adjust itself appropriately.

    The cron rules for DST are based on schedule times, not intervals. This means that an hourly cron schedule fires on every new schedule hour, not every elapsed hour. For example, when clocks are set back, this results in a two-hour pause as the schedule will fire the first time 1am is reached and the first time 2am is reached, 120 minutes later.

    Longer schedules, such as one that fires at 9am every morning, will adjust for DST automatically.

    ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#interval","title":"Interval","text":"

    An Interval schedule creates new flow runs on a regular interval measured in seconds. Intervals are computed using an optional anchor_date. For example, here's how you can create a schedule for every 10 minutes in a block-based deployment YAML file:

    schedule:\n  interval: 600\n  timezone: America/Chicago \n

    Interval properties include:

    • interval: datetime.timedelta indicating the time between flow runs. (Required)
    • anchor_date: datetime.datetime indicating the starting or \"anchor\" date to begin the schedule. If no anchor_date is supplied, the current UTC time is used.
    • timezone: String name of a time zone, used to enforce localization behaviors like DST boundaries. (See the IANA Time Zone Database for valid time zones.)

    Note that the anchor_date does not indicate a \"start time\" for the schedule, but rather a fixed point in time from which to compute intervals. If the anchor date is in the future, then schedule dates are computed by subtracting the interval from it. When defining an interval schedule in Python, the Pendulum package can make it easier to build a timezone-aware anchor_date; Pendulum isn't required, but it's a useful tool for specifying dates (see the sketch below).
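
    For example, a sketch of an interval schedule anchored to a timezone-aware datetime, built with Pendulum for convenience:

    import pendulum\nfrom datetime import timedelta\nfrom prefect.client.schemas.schedules import IntervalSchedule\n\n# Pendulum is not required; it just makes timezone-aware datetimes easy to build\nschedule = IntervalSchedule(\n    interval=timedelta(minutes=10),\n    anchor_date=pendulum.datetime(2024, 1, 1, tz=\"America/Chicago\"),\n)\n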

    Daylight saving time considerations

    If the schedule's anchor_date or timezone are provided with a DST-observing timezone, then the schedule will adjust itself appropriately. Intervals greater than 24 hours will follow DST conventions, while intervals of less than 24 hours will follow UTC intervals.

    For example, an hourly schedule will fire every UTC hour, even across DST boundaries. When clocks are set back, this will result in two runs that appear to both be scheduled for 1am local time, even though they are an hour apart in UTC time.

    For longer intervals, like a daily schedule, the interval schedule will adjust for DST boundaries so that the clock-hour remains constant. This means that a daily schedule that always fires at 9am will observe DST and continue to fire at 9am in the local time zone.

    ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#rrule","title":"RRule","text":"

    An RRule schedule supports iCal recurrence rules (RRules), which provide convenient syntax for creating repetitive schedules. Schedules can repeat on a frequency from yearly down to every minute.

    RRule uses the dateutil rrule module to specify iCal recurrence rules.

    RRules are appropriate for any kind of calendar-date manipulation, including simple repetition, irregular intervals, exclusions, week day or day-of-month adjustments, and more. RRules can represent complex logic like:

    • The last weekday of each month
    • The fourth Thursday of November
    • Every other day of the week

    RRule properties include:

    • rrule: String representation of an RRule schedule. See the rrulestr examples for syntax.
    • timezone: String name of a time zone. See the IANA Time Zone Database for valid time zones.

    You may find it useful to use an RRule string generator such as the iCalendar.org RRule Tool to help create valid RRules.

    For example, the following RRule schedule in a block-based deployment YAML file creates flow runs on Monday, Wednesday, and Friday until July 30, 2024.

    schedule:\n  rrule: 'FREQ=WEEKLY;BYDAY=MO,WE,FR;UNTIL=20240730T040000Z'\n
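
    The equivalent schedule can be sketched in Python, assuming the RRuleSchedule class alongside the other schedule classes in prefect.client.schemas.schedules:

    from prefect.client.schemas.schedules import RRuleSchedule\n\nschedule = RRuleSchedule(rrule=\"FREQ=WEEKLY;BYDAY=MO,WE,FR;UNTIL=20240730T040000Z\")\n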

    RRule restrictions

    Note that the maximum supported character length of an rrulestr is 6,500 characters.

    Note that COUNT is not supported. Please use UNTIL or the /deployments/{id}/runs endpoint to schedule a fixed number of flow runs.

    Daylight saving time considerations

    Note that as a calendar-oriented standard, RRules are sensitive to the initial timezone provided. A 9am daily schedule with a DST-aware start date will maintain a local 9am time through DST boundaries. A 9am daily schedule with a UTC start date will maintain a 9am UTC time.

    ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#creating-schedules","title":"Creating schedules","text":"

    There are several ways to create a schedule for a deployment:

    • Through the Prefect UI
    • Through the cron, interval, or rrule parameters when building your deployment with the serve method of the Flow object or the serve utility for managing multiple flows simultaneously
    • If using worker-based deployments
    • When you define a deployment with flow.serve or flow.deploy
    • Through the interactive prefect deploy command
    • With the deployments -> schedules section of the prefect.yaml file
    • If using block-based deployments - Deprecated
    • Through the schedules section of the deployment YAML file
    • By passing schedules into the Deployment class or Deployment.build_from_flow
    ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#creating-schedules-in-the-ui","title":"Creating schedules in the UI","text":"

    You can add schedules in the Schedules section on a Deployment page in the UI.

    ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#locating-the-schedules-section","title":"Locating the Schedules section","text":"

    The Schedules section appears in the sidebar on the right side of the page on wider displays. On narrower displays, it appears on the Details tab of the page.

    ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#adding-a-schedule","title":"Adding a schedule","text":"

    Under Schedules, select the + Schedule button. A modal dialog will open. Choose Interval or Cron to create a schedule.

    What about RRule?

    The UI does not support creating RRule schedules. However, the UI will display RRule schedules that you've created via the command line.

    The new schedule will appear on the Deployment page where you created it. In addition, the schedule will be viewable in human-friendly text in the list of deployments on the Deployments page.

    After you create a schedule, new scheduled flow runs will be visible in the Upcoming tab of the Deployment page where you created it.

    ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#editing-schedules","title":"Editing schedules","text":"

    You can edit a schedule by selecting Edit from the three-dot menu next to a schedule on a Deployment page.

    ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#creating-schedules-with-a-python-deployment-creation-file","title":"Creating schedules with a Python deployment creation file","text":"

    When you create a deployment in a Python file with flow.serve(), serve, flow.deploy(), or deploy, you can specify the schedule. Just add the keyword argument cron, interval, or rrule.

    interval: An interval on which to execute the deployment. Accepts a number or a \n    timedelta object to create a single schedule. If a number is given, it will be \n    interpreted as seconds. Also accepts an iterable of numbers or timedelta to create \n    multiple schedules.\ncron: A cron schedule string of when to execute runs of this deployment. \n    Also accepts an iterable of cron schedule strings to create multiple schedules.\nrrule: An rrule schedule string of when to execute runs of this deployment.\n    Also accepts an iterable of rrule schedule strings to create multiple schedules.\nschedules: A list of schedule objects defining when to execute runs of this deployment.\n    Used to define multiple schedules or additional scheduling options such as `timezone`.\nschedule: A schedule object defining when to execute runs of this deployment. Used to\n    define additional scheduling options like `timezone`.\n

    Here's an example of creating a cron schedule with serve for a deployment flow that will run every minute of every day:

    my_flow.serve(name=\"flowing\", cron=\"* * * * *\")\n

    If using work pool-based deployments, the deploy method has the same schedule-based parameters.

    Here's an example of creating an interval schedule with serve for a deployment flow that will run every 10 minutes with an anchor date and a timezone:

    from datetime import timedelta, datetime\nfrom prefect.client.schemas.schedules import IntervalSchedule\n\nmy_flow.serve(name=\"flowing\", schedule=IntervalSchedule(interval=timedelta(minutes=10), anchor_date=datetime(2023, 1, 1, 0, 0), timezone=\"America/Chicago\"))\n

    Block and agent-based deployments with Python files are not a recommended way to create deployments. However, if you are using that deployment creation method you can create a schedule by passing a schedule argument to the Deployment.build_from_flow method.

    Here's how you create the equivalent schedule in a Python deployment file.

    from prefect.deployments import Deployment\nfrom prefect.client.schemas.schedules import CronSchedule\n\n# \"pipeline\" is a flow defined elsewhere in your project\ncron_demo = Deployment.build_from_flow(\n    pipeline,\n    \"etl\",\n    schedule=(CronSchedule(cron=\"0 0 * * *\", timezone=\"America/Chicago\"))\n)\n
    ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#creating-schedules-with-the-interactive-prefect-deploy-command","title":"Creating schedules with the interactive prefect deploy command","text":"

    If you are using worker-based deployments, you can create a schedule through the interactive prefect deploy command. You will be prompted to choose which type of schedule to create.

    ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#creating-schedules-in-the-prefectyaml-files-deployments-schedule-section","title":"Creating schedules in the prefect.yaml file's deployments -> schedule section","text":"

    If you save the prefect.yaml file from the prefect deploy command, you will see it has a schedules section for your deployment. Alternatively, you can create a prefect.yaml file from a recipe or from scratch and add a schedules section to it.

    deployments:\n  ...\n  schedules:\n    - cron: \"0 0 * * *\"\n      timezone: \"America/Chicago\"\n      active: false\n    - cron: \"0 12 * * *\"\n      timezone: \"America/New_York\"\n      active: true\n    - cron: \"0 18 * * *\"\n      timezone: \"Europe/London\"\n      active: true\n
    ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#the-scheduler-service","title":"The Scheduler service","text":"

    The Scheduler service is started automatically when prefect server start is run and it is a built-in service of Prefect Cloud.

    By default, the Scheduler service visits deployments on a 60-second loop, though recently-modified deployments will be visited more frequently. The Scheduler evaluates each deployment's schedules and creates new runs appropriately. For typical deployments, it will create the next three runs, though more runs will be scheduled if the next 3 would all start in the next hour.

    More specifically, the Scheduler tries to create the smallest number of runs that satisfy the following constraints, in order:

    • No more than 100 runs will be scheduled.
    • Runs will not be scheduled more than 100 days in the future.
    • At least 3 runs will be scheduled.
    • Runs will be scheduled until at least one hour in the future.

    These behaviors can all be adjusted through the relevant settings that can be viewed with the terminal command prefect config view --show-defaults:

    PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE='100'\nPREFECT_API_SERVICES_SCHEDULER_ENABLED='True'\nPREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE='500'\nPREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS='60.0'\nPREFECT_API_SERVICES_SCHEDULER_MIN_RUNS='3'\nPREFECT_API_SERVICES_SCHEDULER_MAX_RUNS='100'\nPREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME='1:00:00'\nPREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME='100 days, 0:00:00'\n

    See the Settings docs for more information on altering your settings.

    These settings mean that if a deployment has an hourly schedule, the default settings will create runs for the next 4 days (or 100 hours). If it has a weekly schedule, the default settings will maintain the next 14 runs (up to 100 days in the future).

    The Scheduler does not affect execution

    The Prefect Scheduler service only creates new flow runs and places them in Scheduled states. It is not involved in flow or task execution.

    If you change a schedule, previously scheduled flow runs that have not started are removed, and new scheduled flow runs are created to reflect the new schedule.

    To remove all scheduled runs for a flow deployment, you can remove the schedule via the UI.

    ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/states/","title":"States","text":"","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#overview","title":"Overview","text":"

    States are rich objects that contain information about the status of a particular task run or flow run. While you don't need to know the details of states to use Prefect, you can give your workflows superpowers by taking advantage of them.

    At any moment, you can learn anything you need to know about a task or flow by examining its current state or the history of its states. For example, a state could tell you that a task:

    • is scheduled to make a third run attempt in an hour

    • succeeded and what data it produced

    • was scheduled to run, but later cancelled

    • used the cached result of a previous run instead of re-running

    • failed because it timed out

    By manipulating a relatively small number of task states, Prefect flows can harness the complexity that emerges in workflows.

    Only runs have states

    Though we often refer to the \"state\" of a flow or a task, what we really mean is the state of a flow run or a task run. Flows and tasks are templates that describe what a system does; only when we run the system does it also take on a state. So while we might refer to a task as \"running\" or being \"successful\", we really mean that a specific instance of the task is in that state.

    ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#state-types","title":"State Types","text":"

    States have names and types. State types are canonical, with specific orchestration rules that apply to transitions into and out of each state type. A state's name is often, but not always, synonymous with its type. For example, a task run that is running for the first time has a state with the name Running and the type RUNNING. However, if the task retries, that same task run will have the name Retrying and the type RUNNING. Each time the task run transitions into the RUNNING state, the same orchestration rules are applied.

    There are terminal state types from which there are no orchestrated transitions to any other state type.

    • COMPLETED
    • CANCELLED
    • FAILED
    • CRASHED

    The full complement of states and state types includes:

    • Scheduled (type SCHEDULED, not terminal): The run will begin at a particular time in the future.
    • Late (type SCHEDULED, not terminal): The run's scheduled start time has passed, but it has not transitioned to PENDING (15 seconds by default).
    • AwaitingRetry (type SCHEDULED, not terminal): The run did not complete successfully because of a code issue and had remaining retry attempts.
    • Pending (type PENDING, not terminal): The run has been submitted to run, but is waiting on necessary preconditions to be satisfied.
    • Running (type RUNNING, not terminal): The run code is currently executing.
    • Retrying (type RUNNING, not terminal): The run code is currently executing after previously not completing successfully.
    • Paused (type PAUSED, not terminal): The run code has stopped executing until it receives manual approval to proceed.
    • Cancelling (type CANCELLING, not terminal): The infrastructure on which the code was running is being cleaned up.
    • Cancelled (type CANCELLED, terminal): The run did not complete because a user determined that it should not.
    • Completed (type COMPLETED, terminal): The run completed successfully.
    • Failed (type FAILED, terminal): The run did not complete because of a code issue and had no remaining retry attempts.
    • Crashed (type CRASHED, terminal): The run did not complete because of an infrastructure issue.
    ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#returned-values","title":"Returned values","text":"

    When calling a task or a flow, there are three types of returned values:

    • Data: A Python object (such as int, str, dict, list, and so on).
    • State: A Prefect object indicating the state of a flow or task run.
    • PrefectFuture: A Prefect object that contains both data and State.

    Returning data is the default behavior any time you call your_task().

    Returning Prefect State occurs anytime you call your task or flow with the argument return_state=True.

    Returning PrefectFuture is achieved by calling your_task.submit().

    ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#return-data","title":"Return Data","text":"

    By default, running a task will return data:

    from prefect import flow, task \n\n@task \ndef add_one(x):\n    return x + 1\n\n@flow \ndef my_flow():\n    result = add_one(1) # return int\n

    The same rule applies for a subflow:

    @flow \ndef subflow():\n    return 42 \n\n@flow \ndef my_flow():\n    result = subflow() # return data\n
    ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#return-prefect-state","title":"Return Prefect State","text":"

    To return a State instead, add return_state=True as a parameter of your task call.

    @flow \ndef my_flow():\n    state = add_one(1, return_state=True) # return State\n

    To get data from a State, call .result().

    @flow \ndef my_flow():\n    state = add_one(1, return_state=True) # return State\n    result = state.result() # return int\n

    The same rule applies for a subflow:

    @flow \ndef subflow():\n    return 42 \n\n@flow \ndef my_flow():\n    state = subflow(return_state=True) # return State\n    result = state.result() # return int\n
    ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#return-a-prefectfuture","title":"Return a PrefectFuture","text":"

    To get a PrefectFuture, add .submit() to your task call.

    @flow \ndef my_flow():\n    future = add_one.submit(1) # return PrefectFuture\n

    To get data from a PrefectFuture, call .result().

    @flow \ndef my_flow():\n    future = add_one.submit(1) # return PrefectFuture\n    result = future.result() # return data\n

    To get a State from a PrefectFuture, call .wait().

    @flow \ndef my_flow():\n    future = add_one.submit(1) # return PrefectFuture\n    state = future.wait() # return State\n
    ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#final-state-determination","title":"Final state determination","text":"

    The final state of a flow is determined by its return value. The following rules apply:

    • If an exception is raised directly in the flow function, the flow run is marked as FAILED.
    • If the flow does not return a value (or returns None), its state is determined by the states of all of the tasks and subflows within it.
    • If any task run or subflow run failed and none were cancelled, then the final flow run state is marked as FAILED.
    • If any task run or subflow run was cancelled, then the final flow run state is marked as CANCELLED.
    • If a flow returns a manually created state, it is used as the state of the final flow run. This allows for manual determination of final state (see the sketch after this list).
    • If the flow run returns any other object, then it is marked as successfully completed.
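
    For example, a minimal sketch of the manually created state rule (assuming the state constructors in prefect.states accept a message):

    from prefect import flow\nfrom prefect.states import Completed\n\n@flow\ndef my_flow():\n    # Returning a manually created state determines the final flow run state\n    return Completed(message=\"Marked completed on purpose\")\n\nmy_flow()\n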

    See the Final state determination section of the Flows documentation for further details and examples.

    ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#state-change-hooks","title":"State Change Hooks","text":"

    State change hooks execute code in response to changes in flow or task run states, enabling you to define actions for specific state transitions in a workflow.

    ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#a-simple-example","title":"A simple example","text":"
    from prefect import flow\n\ndef my_success_hook(flow, flow_run, state):\n    print(\"Flow run succeeded!\")\n\n@flow(on_completion=[my_success_hook])\ndef my_flow():\n    return 42\n\nmy_flow()\n
    ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#create-and-use-hooks","title":"Create and use hooks","text":"","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#available-state-change-hooks","title":"Available state change hooks","text":"Type Flow Task Description on_completion \u2713 \u2713 Executes when a flow or task run enters a Completed state. on_failure \u2713 \u2713 Executes when a flow or task run enters a Failed state. on_cancellation \u2713 - Executes when a flow run enters a Cancelling state. on_crashed \u2713 - Executes when a flow run enters a Crashed state. on_running \u2713 - Executes when a flow run enters a Running state.","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#create-flow-run-state-change-hooks","title":"Create flow run state change hooks","text":"
    def my_flow_hook(flow: Flow, flow_run: FlowRun, state: State):\n    \"\"\"This is the required signature for a flow run state\n    change hook. This hook can only be passed into flows.\n    \"\"\"\n\n# pass hook as a list of callables\n@flow(on_completion=[my_flow_hook])\n
    ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#create-task-run-state-change-hooks","title":"Create task run state change hooks","text":"
    def my_task_hook(task: Task, task_run: TaskRun, state: State):\n    \"\"\"This is the required signature for a task run state change\n    hook. This hook can only be passed into tasks.\n    \"\"\"\n\n# pass hook as a list of callables\n@task(on_failure=[my_task_hook])\n
    ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#use-multiple-state-change-hooks","title":"Use multiple state change hooks","text":"

    State change hooks are versatile, allowing you to specify multiple state change hooks for the same state transition, or to use the same state change hook for different transitions:

    def my_success_hook(task, task_run, state):\n    print(\"Task run succeeded!\")\n\ndef my_failure_hook(task, task_run, state):\n    print(\"Task run failed!\")\n\ndef my_succeed_or_fail_hook(task, task_run, state):\n    print(\"If the task run succeeds or fails, this hook runs.\")\n\n@task(\n    on_completion=[my_success_hook, my_succeed_or_fail_hook],\n    on_failure=[my_failure_hook, my_succeed_or_fail_hook]\n)\n
    ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#pass-kwargs-to-your-hooks","title":"Pass kwargs to your hooks","text":"

    The Prefect engine will call your hooks for you upon the state change, passing in the flow, flow run, and state objects.

    However, you can define your hook to have additional default arguments:

    from prefect import flow\n\ndata = {}\n\ndef my_hook(flow, flow_run, state, my_arg=\"custom_value\"):\n    data.update(my_arg=my_arg, state=state)\n\n@flow(on_completion=[my_hook])\ndef lazy_flow():\n    pass\n\nstate = lazy_flow(return_state=True)\n\nassert data == {\"my_arg\": \"custom_value\", \"state\": state}\n

    ... or define your hook to accept arbitrary keyword arguments:

    from functools import partial\nfrom prefect import flow, task\n\ndata = {}\n\ndef my_hook(task, task_run, state, **kwargs):\n    data.update(state=state, **kwargs)\n\n@task\ndef bad_task():\n    raise ValueError(\"meh\")\n\n@flow\ndef ok_with_failure_flow(x: str = \"foo\", y: int = 42):\n    bad_task_with_a_hook = bad_task.with_options(\n        on_failure=[partial(my_hook, **dict(x=x, y=y))]\n    )\n    # return a tuple of \"bar\" and the task run state\n    # to avoid raising the task's exception\n    return \"bar\", bad_task_with_a_hook(return_state=True)\n\n_, task_run_state = ok_with_failure_flow()\n\nassert data == {\"x\": \"foo\", \"y\": 42, \"state\": task_run_state}\n
    ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#more-examples-of-state-change-hooks","title":"More examples of state change hooks","text":"
    • Send a notification when a flow run fails
    • Delete a Cloud Run job when a flow crashes
    ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/storage/","title":"Storage","text":"

    Storage blocks are not recommended

    Storage blocks are part of the legacy block-based deployment model. Instead, the recommended ways to create a deployment are the serve or runner-based Python creation methods, or workers and work pools with prefect deploy via the CLI. Flow code storage can be specified in the Python file with serve or runner-based Python creation methods; alternatively, with the work pools and workers style of flow deployment, you can specify flow code storage during the interactive prefect deploy CLI experience and in its resulting prefect.yaml file.

    Storage lets you configure how flow code for deployments is persisted and retrieved by Prefect workers (or legacy agents). Anytime you build a block-based deployment, a storage block is used to upload the entire directory containing your workflow code (along with supporting files) to its configured location. This helps ensure portability of your relative imports, configuration files, and more. Note that your environment dependencies (for example, external Python packages) still need to be managed separately.

    If no storage is explicitly configured, Prefect will use LocalFileSystem storage by default. Local storage works fine for many local flow run scenarios, especially when testing and getting started. However, due to the inherent lack of portability, many use cases are better served by using remote storage such as S3 or Google Cloud Storage.

    Prefect supports creating multiple storage configurations and switching between storage as needed.

    Storage uses blocks

    Blocks are the Prefect technology underlying storage, and they enable you to do much more.

    In addition to creating storage blocks via the Prefect CLI, you can now create storage blocks and other kinds of block configuration objects via the Prefect UI and Prefect Cloud.

    ","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":0.5},{"location":"concepts/storage/#configuring-storage-for-a-deployment","title":"Configuring storage for a deployment","text":"

    When building a deployment for a workflow, you have two options for configuring workflow storage:

    • Use the default local storage
    • Preconfigure a storage block to use
    ","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":0.5},{"location":"concepts/storage/#using-the-default","title":"Using the default","text":"

    Anytime you call prefect deployment build without providing the --storage-block flag, a default LocalFileSystem block will be used. Note that this block will always use your present working directory as its basepath (which is usually desirable). You can see the block's settings by inspecting the deployment.yaml file that Prefect creates after calling prefect deployment build.

    While you generally can't run a deployment stored on a local file system on other machines, any agent running on the same machine will be able to successfully run your deployment.

    ","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":0.5},{"location":"concepts/storage/#supported-storage-blocks","title":"Supported storage blocks","text":"

    Current options for deployment storage blocks include:

    • Local File System: Store code in a run's local file system.
    • Remote File System: Store code in any filesystem supported by fsspec.
    • AWS S3 Storage: Store code in an AWS S3 bucket. (Requires the s3fs library.)
    • Azure Storage: Store code in Azure Datalake and Azure Blob Storage. (Requires the adlfs library.)
    • GitHub Storage: Store code in a GitHub repository.
    • Google Cloud Storage: Store code in a Google Cloud Platform (GCP) Cloud Storage bucket. (Requires the gcsfs library.)
    • SMB: Store code in SMB shared network storage. (Requires the smbprotocol library.)
    • GitLab Repository: Store code in a GitLab repository. (Requires the prefect-gitlab library.)
    • Bitbucket Repository: Store code in a Bitbucket repository. (Requires the prefect-bitbucket library.)

    Accessing files may require storage filesystem libraries

    Note that the appropriate filesystem library supporting the storage location must be installed prior to building a deployment with a storage block or accessing the storage location from flow scripts.

    For example, the AWS S3 Storage block requires the s3fs library.

    See Filesystem package dependencies for more information about configuring filesystem libraries in your execution environment.

    ","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":0.5},{"location":"concepts/storage/#configuring-a-block","title":"Configuring a block","text":"

    You can create these blocks either via the UI or via Python.

    You can create, edit, and manage storage blocks in the Prefect UI and Prefect Cloud. On a Prefect server, blocks are created in the server's database. On Prefect Cloud, blocks are created on a workspace.

    To create a new block, select the + button. Prefect displays a library of block types you can configure to create blocks to be used by your flows.

    Select Add + to configure a new storage block based on a specific block type. Prefect displays a Create page that enables specifying storage settings.

    You can also create blocks using the Prefect Python API:

    from prefect.filesystems import S3\n\nblock = S3(bucket_path=\"my-bucket/a-sub-directory\", \n           aws_access_key_id=\"foo\", \n           aws_secret_access_key=\"bar\"\n)\nblock.save(\"example-block\")\n

    This block configuration is now available to be used by anyone with appropriate access to your Prefect API. We can use this block to build a deployment by passing its slug to the prefect deployment build command. The storage block slug is formatted as block-type/block-name. In this case, s3/example-block for an AWS S3 Bucket block named example-block. See block identifiers for details.

    prefect deployment build ./flows/my_flow.py:my_flow --name \"Example Deployment\" --storage-block s3/example-block\n

    This command will upload the contents of your flow's directory to the designated storage location, then the full deployment specification will be persisted to a newly created deployment.yaml file. For more information, see Deployments.

    ","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":0.5},{"location":"concepts/task-runners/","title":"Task runners","text":"

    Task runners enable you to engage specific executors for Prefect tasks, such as for concurrent, parallel, or distributed execution of tasks.

    Task runners are not required for task execution. If you call a task function directly, the task executes as a regular Python function, without a task runner, and produces whatever result is returned by the function.

    ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#task-runner-overview","title":"Task runner overview","text":"

    Calling a task function from within a flow, using the default task settings, executes the function sequentially. Execution of the task function blocks execution of the flow until the task completes. This means, by default, calling multiple tasks in a flow causes them to run in order.

    However, that's not the only way to run tasks!

    You can use the .submit() method on a task function to submit the task to a task runner. Using a task runner enables you to control whether tasks run sequentially, concurrently, or if you want to take advantage of a parallel or distributed execution library such as Dask or Ray.

Submitting a task with .submit() also causes the task run to return a PrefectFuture, a Prefect object that contains both any data returned by the task function and a State, which indicates the state of the task run.

    Prefect currently provides the following built-in task runners:

    • SequentialTaskRunner can run tasks sequentially.
    • ConcurrentTaskRunner can run tasks concurrently, allowing tasks to switch when blocking on IO. Tasks will be submitted to a thread pool maintained by anyio.

    In addition, the following Prefect-developed task runners for parallel or distributed task execution may be installed as Prefect Integrations.

    • DaskTaskRunner can run tasks requiring parallel execution using dask.distributed.
    • RayTaskRunner can run tasks requiring parallel execution using Ray.

    Concurrency versus parallelism

    The words \"concurrency\" and \"parallelism\" may sound the same, but they mean different things in computing.

    Concurrency refers to a system that can do more than one thing simultaneously, but not at the exact same time. It may be more accurate to think of concurrent execution as non-blocking: within the restrictions of resources available in the execution environment and data dependencies between tasks, execution of one task does not block execution of other tasks in a flow.

    Parallelism refers to a system that can do more than one thing at the exact same time. Again, within the restrictions of resources available, parallel execution can run tasks at the same time, such as for operations mapped across a dataset.

    ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-a-task-runner","title":"Using a task runner","text":"

    You do not need to specify a task runner for a flow unless your tasks require a specific type of execution.

    To configure your flow to use a specific task runner, import a task runner and assign it as an argument for the flow when the flow is defined.

    Remember to call .submit() when using a task runner

    Make sure you use .submit() to run your task with a task runner. Calling the task directly, without .submit(), from within a flow will run the task sequentially instead of using a specified task runner.

    For example, you can use ConcurrentTaskRunner to allow tasks to switch when they would block.

    from prefect import flow, task\nfrom prefect.task_runners import ConcurrentTaskRunner\nimport time\n\n@task\ndef stop_at_floor(floor):\n    print(f\"elevator moving to floor {floor}\")\n    time.sleep(floor)\n    print(f\"elevator stops on floor {floor}\")\n\n@flow(task_runner=ConcurrentTaskRunner())\ndef elevator():\n    for floor in range(10, 0, -1):\n        stop_at_floor.submit(floor)\n

    If you specify an uninitialized task runner class, a task runner instance of that type is created with the default settings. You can also pass additional configuration parameters for task runners that accept parameters, such as DaskTaskRunner and RayTaskRunner.
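
For example, here is a brief sketch (assuming the prefect-dask collection is installed) showing both forms: passing the uninitialized SequentialTaskRunner class, and passing a DaskTaskRunner instance with additional configuration:

from prefect import flow\nfrom prefect.task_runners import SequentialTaskRunner\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n# Passing the uninitialized class uses the task runner's default settings\n@flow(task_runner=SequentialTaskRunner)\ndef default_settings_flow():\n    ...\n\n# Passing an instance lets you supply additional configuration parameters\n@flow(task_runner=DaskTaskRunner(cluster_kwargs={\"n_workers\": 2}))\ndef configured_flow():\n    ...\n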

    Default task runner

    If you don't specify a task runner for a flow and you call a task with .submit() within the flow, Prefect uses the default ConcurrentTaskRunner.

    ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#running-tasks-sequentially","title":"Running tasks sequentially","text":"

    Sometimes, it's useful to force tasks to run sequentially to make it easier to reason about the behavior of your program. Switching to the SequentialTaskRunner will force submitted tasks to run sequentially rather than concurrently.

    Synchronous and asynchronous tasks

    The SequentialTaskRunner works with both synchronous and asynchronous task functions. Asynchronous tasks are Python functions defined using async def rather than def.
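
As a brief sketch of the asynchronous case, the following flow submits an async task to the SequentialTaskRunner; note that, in an async flow, the .submit() call on an async task is awaited:

import asyncio\nfrom prefect import flow, task\nfrom prefect.task_runners import SequentialTaskRunner\n\n@task\nasync def async_stop_at_floor(floor):\n    await asyncio.sleep(0.1)  # simulate non-blocking work\n    print(f\"elevator stops on floor {floor}\")\n\n@flow(task_runner=SequentialTaskRunner())\nasync def async_glass_tower():\n    for floor in range(1, 4):\n        await async_stop_at_floor.submit(floor)\n\nasyncio.run(async_glass_tower())\n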

    The following example demonstrates using the SequentialTaskRunner to ensure that tasks run sequentially. In the example, the flow glass_tower runs the task stop_at_floor for floors one through 38, in that order.

    from prefect import flow, task\nfrom prefect.task_runners import SequentialTaskRunner\nimport random\n\n@task\ndef stop_at_floor(floor):\n    situation = random.choice([\"on fire\",\"clear\"])\n    print(f\"elevator stops on {floor} which is {situation}\")\n\n@flow(task_runner=SequentialTaskRunner(),\n      name=\"towering-infernflow\",\n      )\ndef glass_tower():\n    for floor in range(1, 39):\n        stop_at_floor.submit(floor)\n\nglass_tower()\n
    ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-multiple-task-runners","title":"Using multiple task runners","text":"

    Each flow can only have a single task runner, but sometimes you may want a subset of your tasks to run using a specific task runner. In this case, you can create subflows for tasks that need to use a different task runner.

    For example, you can have a flow (in the example below called sequential_flow) that runs its tasks locally using the SequentialTaskRunner. If you have some tasks that can run more efficiently in parallel on a Dask cluster, you could create a subflow (such as dask_subflow) to run those tasks using the DaskTaskRunner.

    from prefect import flow, task\nfrom prefect.task_runners import SequentialTaskRunner\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef hello_local():\n    print(\"Hello!\")\n\n@task\ndef hello_dask():\n    print(\"Hello from Dask!\")\n\n@flow(task_runner=SequentialTaskRunner())\ndef sequential_flow():\n    hello_local.submit()\n    dask_subflow()\n    hello_local.submit()\n\n@flow(task_runner=DaskTaskRunner())\ndef dask_subflow():\n    hello_dask.submit()\n\nif __name__ == \"__main__\":\n    sequential_flow()\n

    Guarding main

    Note that you should guard the main function by using if __name__ == \"__main__\" to avoid issues with parallel processing.

    This script outputs the following logs demonstrating the use of the Dask task runner:

20:14:29.785 | INFO    | prefect.engine - Created flow run 'ivory-caiman' for flow 'sequential-flow'\n20:14:29.785 | INFO    | Flow run 'ivory-caiman' - Starting 'SequentialTaskRunner'; submitted tasks will be run sequentially...\n20:14:29.880 | INFO    | Flow run 'ivory-caiman' - Created task run 'hello_local-7633879f-0' for task 'hello_local'\n20:14:29.881 | INFO    | Flow run 'ivory-caiman' - Executing 'hello_local-7633879f-0' immediately...\nHello!\n20:14:29.904 | INFO    | Task run 'hello_local-7633879f-0' - Finished in state Completed()\n20:14:29.952 | INFO    | Flow run 'ivory-caiman' - Created subflow run 'nimble-sparrow' for flow 'dask-subflow'\n20:14:29.953 | INFO    | prefect.task_runner.dask - Creating a new Dask cluster with `distributed.deploy.local.LocalCluster`\n20:14:31.862 | INFO    | prefect.task_runner.dask - The Dask dashboard is available at http://127.0.0.1:8787/status\n20:14:31.901 | INFO    | Flow run 'nimble-sparrow' - Created task run 'hello_dask-2b96d711-0' for task 'hello_dask'\n20:14:32.370 | INFO    | Flow run 'nimble-sparrow' - Submitted task run 'hello_dask-2b96d711-0' for execution.\nHello from Dask!\n20:14:33.358 | INFO    | Flow run 'nimble-sparrow' - Finished in state Completed('All states completed.')\n20:14:33.368 | INFO    | Flow run 'ivory-caiman' - Created task run 'hello_local-7633879f-1' for task 'hello_local'\n20:14:33.368 | INFO    | Flow run 'ivory-caiman' - Executing 'hello_local-7633879f-1' immediately...\nHello!\n20:14:33.386 | INFO    | Task run 'hello_local-7633879f-1' - Finished in state Completed()\n20:14:33.399 | INFO    | Flow run 'ivory-caiman' - Finished in state Completed('All states completed.')\n
    ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-results-from-submitted-tasks","title":"Using results from submitted tasks","text":"

    When you use .submit() to submit a task to a task runner, the task runner creates a PrefectFuture for access to the state and result of the task.

    A PrefectFuture is an object that provides access to a computation happening in a task runner \u2014 even if that computation is happening on a remote system.

    In the following example, we save the return value of calling .submit() on the task say_hello to the variable future, and then we print the type of the variable:

    from prefect import flow, task\n\n@task\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\n@flow\ndef hello_world():\n    future = say_hello.submit(\"Marvin\")\n    print(f\"variable 'future' is type {type(future)}\")\n\nhello_world()\n

    When you run this code, you'll see that the variable future is a PrefectFuture:

    variable 'future' is type <class 'prefect.futures.PrefectFuture'>\n

    When you pass a future into a task, Prefect waits for the \"upstream\" task \u2014 the one that the future references \u2014 to reach a final state before starting the downstream task.

    This means that the downstream task won't receive the PrefectFuture you passed as an argument. Instead, the downstream task will receive the value that the upstream task returned.

Take a look at how this works in the following example:

    from prefect import flow, task\n\n@task\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\n@task\ndef print_result(result):\n    print(type(result))\n    print(result)\n\n@flow(name=\"hello-flow\")\ndef hello_world():\n    future = say_hello.submit(\"Marvin\")\n    print_result.submit(future)\n\nhello_world()\n
    <class 'str'>\nHello Marvin!\n

    Futures have a few useful methods. For example, you can get the return value of the task run with .result():

    from prefect import flow, task\n\n@task\ndef my_task():\n    return 42\n\n@flow\ndef my_flow():\n    future = my_task.submit()\n    result = future.result()\n    print(result)\n\nmy_flow()\n

    The .result() method will wait for the task to complete before returning the result to the caller. If the task run fails, .result() will raise the task run's exception. You may disable this behavior with the raise_on_failure option:

    from prefect import flow, task\n\n@task\ndef my_task():\n    return \"I'm a task!\"\n\n\n@flow\ndef my_flow():\n    future = my_task.submit()\n    result = future.result(raise_on_failure=False)\n    if future.get_state().is_failed():\n        # `result` is an exception! handle accordingly\n        ...\n    else:\n        # `result` is the expected return value of our task\n        ...\n

    You can retrieve the current state of the task run associated with the PrefectFuture using .get_state():

    @flow\ndef my_flow():\n    future = my_task.submit()\n    state = future.get_state()\n

    You can also wait for a task to complete by using the .wait() method:

    @flow\ndef my_flow():\n    future = my_task.submit()\n    final_state = future.wait()\n

    You can include a timeout in the wait call to perform logic if the task has not finished in a given amount of time:

@flow\ndef my_flow():\n    future = my_task.submit()\n    final_state = future.wait(1)  # Wait one second max\n    if final_state:\n        # Take action if the task is done\n        result = final_state.result()\n    else:\n        ... # Take action if the task is still running\n

    You may also use the wait_for=[] parameter when calling a task, specifying upstream task dependencies. This enables you to control task execution order for tasks that do not share data dependencies.

    @task\ndef task_a():\n    pass\n\n@task\ndef task_b():\n    pass\n\n@task\ndef task_c():\n    pass\n\n@task\ndef task_d():\n    pass\n\n@flow\ndef my_flow():\n    a = task_a.submit()\n    b = task_b.submit()\n    # Wait for task_a and task_b to complete\n    c = task_c.submit(wait_for=[a, b])\n    # task_d will wait for task_c to complete\n    # Note: If waiting for one task it must still be in a list.\n    d = task_d(wait_for=[c])\n
    ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#when-to-use-result-in-flows","title":"When to use .result() in flows","text":"

    The simplest pattern for writing a flow is either only using tasks or only using pure Python functions. When you need to mix the two, use .result().

    Using only tasks:

    from prefect import flow, task\n\n@task\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\n@task\ndef say_nice_to_meet_you(hello_greeting):\n    return f\"{hello_greeting} Nice to meet you :)\"\n\n@flow\ndef hello_world():\n    hello = say_hello.submit(\"Marvin\")\n    nice_to_meet_you = say_nice_to_meet_you.submit(hello)\n\nhello_world()\n

    Using only Python functions:

    from prefect import flow, task\n\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\ndef say_nice_to_meet_you(hello_greeting):\n    return f\"{hello_greeting} Nice to meet you :)\"\n\n@flow\ndef hello_world():\n    # because this is just a Python function, calls will not be tracked\n    hello = say_hello(\"Marvin\") \n    nice_to_meet_you = say_nice_to_meet_you(hello)\n\nhello_world()\n

    Mixing tasks and Python functions:

    from prefect import flow, task\n\ndef say_hello_extra_nicely_to_marvin(hello): # not a task or flow!\n    if hello == \"Hello Marvin!\":\n        return \"HI MARVIN!\"\n    return hello\n\n@task\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\n@task\ndef say_nice_to_meet_you(hello_greeting):\n    return f\"{hello_greeting} Nice to meet you :)\"\n\n@flow\ndef hello_world():\n    # run a task and get the result\n    hello = say_hello.submit(\"Marvin\").result()\n\n    # not calling a task or flow\n    special_greeting = say_hello_extra_nicely_to_marvin(hello)\n\n    # pass our modified greeting back into a task\n    nice_to_meet_you = say_nice_to_meet_you.submit(special_greeting)\n\n    print(nice_to_meet_you.result())\n\nhello_world()\n

    Note that .result() also limits Prefect's ability to track task dependencies. In the \"mixed\" example above, Prefect will not be aware that say_hello is upstream of nice_to_meet_you.

    Calling .result() is blocking

When calling .result(), be mindful that your flow function will have to wait until the task run is completed before continuing.

    from prefect import flow, task\n\n@task\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\n@task\ndef do_important_stuff():\n    print(\"Doing lots of important stuff!\")\n\n@flow\ndef hello_world():\n    # blocks until `say_hello` has finished\n    result = say_hello.submit(\"Marvin\").result() \n    do_important_stuff.submit()\n\nhello_world()\n
    ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#running-tasks-on-dask","title":"Running tasks on Dask","text":"

    The DaskTaskRunner is a parallel task runner that submits tasks to the dask.distributed scheduler. By default, a temporary Dask cluster is created for the duration of the flow run. If you already have a Dask cluster running, either local or cloud hosted, you can provide the connection URL via the address kwarg.

    1. Make sure the prefect-dask collection is installed: pip install prefect-dask.
    2. In your flow code, import DaskTaskRunner from prefect_dask.task_runners.
    3. Assign it as the task runner when the flow is defined using the task_runner=DaskTaskRunner argument.

    For example, this flow uses the DaskTaskRunner configured to access an existing Dask cluster at http://my-dask-cluster.

    from prefect import flow\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@flow(task_runner=DaskTaskRunner(address=\"http://my-dask-cluster\"))\ndef my_flow():\n    ...\n

    DaskTaskRunner accepts the following optional parameters:

• address: Address of a currently running Dask scheduler.
• cluster_class: The cluster class to use when creating a temporary Dask cluster. It can be either the full class name (for example, \"distributed.LocalCluster\"), or the class itself.
• cluster_kwargs: Additional kwargs to pass to the cluster_class when creating a temporary Dask cluster.
• adapt_kwargs: Additional kwargs to pass to cluster.adapt when creating a temporary Dask cluster. Note that adaptive scaling is only enabled if adapt_kwargs are provided.
• client_kwargs: Additional kwargs to use when creating a dask.distributed.Client.

    Multiprocessing safety

    Note that, because the DaskTaskRunner uses multiprocessing, calls to flows in scripts must be guarded with if __name__ == \"__main__\": or you will encounter warnings and errors.

    If you don't provide the address of a Dask scheduler, Prefect creates a temporary local cluster automatically. The number of workers used is based on the number of cores on your machine. The default provides a mix of processes and threads that should work well for most workloads. If you want to specify this explicitly, you can pass values for n_workers or threads_per_worker to cluster_kwargs.

    # Use 4 worker processes, each with 2 threads\nDaskTaskRunner(\n    cluster_kwargs={\"n_workers\": 4, \"threads_per_worker\": 2}\n)\n
    ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-a-temporary-cluster","title":"Using a temporary cluster","text":"

    The DaskTaskRunner is capable of creating a temporary cluster using any of Dask's cluster-manager options. This can be useful when you want each flow run to have its own Dask cluster, allowing for per-flow adaptive scaling.

    To configure, you need to provide a cluster_class. This can be:

    • A string specifying the import path to the cluster class (for example, \"dask_cloudprovider.aws.FargateCluster\")
    • The cluster class itself
    • A function for creating a custom cluster.

    You can also configure cluster_kwargs, which takes a dictionary of keyword arguments to pass to cluster_class when starting the flow run.

    For example, to configure a flow to use a temporary dask_cloudprovider.aws.FargateCluster with 4 workers running with an image named my-prefect-image:

    DaskTaskRunner(\n    cluster_class=\"dask_cloudprovider.aws.FargateCluster\",\n    cluster_kwargs={\"n_workers\": 4, \"image\": \"my-prefect-image\"},\n)\n
    ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#connecting-to-an-existing-cluster","title":"Connecting to an existing cluster","text":"

    Multiple Prefect flow runs can all use the same existing Dask cluster. You might manage a single long-running Dask cluster (maybe using the Dask Helm Chart) and configure flows to connect to it during execution. This has a few downsides when compared to using a temporary cluster (as described above):

    • All workers in the cluster must have dependencies installed for all flows you intend to run.
    • Multiple flow runs may compete for resources. Dask tries to do a good job sharing resources between tasks, but you may still run into issues.

    That said, you may prefer managing a single long-running cluster.

    To configure a DaskTaskRunner to connect to an existing cluster, pass in the address of the scheduler to the address argument:

    # Connect to an existing cluster running at a specified address\nDaskTaskRunner(address=\"tcp://...\")\n
    ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#adaptive-scaling","title":"Adaptive scaling","text":"

    One nice feature of using a DaskTaskRunner is the ability to scale adaptively to the workload. Instead of specifying n_workers as a fixed number, this lets you specify a minimum and maximum number of workers to use, and the dask cluster will scale up and down as needed.

    To do this, you can pass adapt_kwargs to DaskTaskRunner. This takes the following fields:

    • maximum (int or None, optional): the maximum number of workers to scale to. Set to None for no maximum.
    • minimum (int or None, optional): the minimum number of workers to scale to. Set to None for no minimum.

    For example, here we configure a flow to run on a FargateCluster scaling up to at most 10 workers.

    DaskTaskRunner(\n    cluster_class=\"dask_cloudprovider.aws.FargateCluster\",\n    adapt_kwargs={\"maximum\": 10}\n)\n
    ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#dask-annotations","title":"Dask annotations","text":"

    Dask annotations can be used to further control the behavior of tasks.

    For example, we can set the priority of tasks in the Dask scheduler:

    import dask\nfrom prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef show(x):\n    print(x)\n\n\n@flow(task_runner=DaskTaskRunner())\ndef my_flow():\n    with dask.annotate(priority=-10):\n        future = show.submit(1)  # low priority task\n\n    with dask.annotate(priority=10):\n        future = show.submit(2)  # high priority task\n

    Another common use case is resource annotations:

    import dask\nfrom prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef show(x):\n    print(x)\n\n# Create a `LocalCluster` with some resource annotations\n# Annotations are abstract in dask and not inferred from your system.\n# Here, we claim that our system has 1 GPU and 1 process available per worker\n@flow(\n    task_runner=DaskTaskRunner(\n        cluster_kwargs={\"n_workers\": 1, \"resources\": {\"GPU\": 1, \"process\": 1}}\n    )\n)\n\ndef my_flow():\n    with dask.annotate(resources={'GPU': 1}):\n        future = show(0)  # this task requires 1 GPU resource on a worker\n\n    with dask.annotate(resources={'process': 1}):\n        # These tasks each require 1 process on a worker; because we've \n        # specified that our cluster has 1 process per worker and 1 worker,\n        # these tasks will run sequentially\n        future = show(1)\n        future = show(2)\n        future = show(3)\n\n\nif __name__ == \"__main__\":\n    my_flow()\n
    ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#running-tasks-on-ray","title":"Running tasks on Ray","text":"

    The RayTaskRunner \u2014 installed separately as a Prefect Collection \u2014 is a parallel task runner that submits tasks to Ray. By default, a temporary Ray instance is created for the duration of the flow run. If you already have a Ray instance running, you can provide the connection URL via an address argument.

    Remote storage and Ray tasks

    We recommend configuring remote storage for task execution with the RayTaskRunner. This ensures tasks executing in Ray have access to task result storage, particularly when accessing a Ray instance outside of your execution environment.
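
For example, one way to do this is to persist results to a remote filesystem block on the flow. This is a sketch only; the s3/my-results slug is a hypothetical, previously saved S3 block:

from prefect import flow, task\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@task(persist_result=True)\ndef add_one(x):\n    return x + 1\n\n@flow(\n    task_runner=RayTaskRunner(address=\"ray://192.0.2.255:8786\"),\n    persist_result=True,\n    result_storage=\"s3/my-results\",  # hypothetical, previously saved S3 block\n)\ndef my_flow():\n    return add_one.submit(41)\n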

    To configure your flow to use the RayTaskRunner:

    1. Make sure the prefect-ray collection is installed: pip install prefect-ray.
    2. In your flow code, import RayTaskRunner from prefect_ray.task_runners.
    3. Assign it as the task runner when the flow is defined using the task_runner=RayTaskRunner argument.

    For example, this flow uses the RayTaskRunner configured to access an existing Ray instance at ray://192.0.2.255:8786.

    from prefect import flow\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@flow(task_runner=RayTaskRunner(address=\"ray://192.0.2.255:8786\"))\ndef my_flow():\n    ... \n

    RayTaskRunner accepts the following optional parameters:

• address: Address of a currently running Ray instance, starting with the ray:// URI.
• init_kwargs: Additional kwargs to use when calling ray.init.

    Note that Ray Client uses the ray:// URI to indicate the address of a Ray instance. If you don't provide the address of a Ray instance, Prefect creates a temporary instance automatically.
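
For example, a sketch of configuring the temporary Ray instance by forwarding init_kwargs to ray.init (num_cpus is a standard ray.init argument):

from prefect import flow\nfrom prefect_ray.task_runners import RayTaskRunner\n\n# No address is provided, so Prefect starts a temporary Ray instance;\n# init_kwargs are passed through to ray.init\n@flow(task_runner=RayTaskRunner(init_kwargs={\"num_cpus\": 4}))\ndef my_flow():\n    ...\n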

    Ray environment limitations

    While we're excited about adding support for parallel task execution via Ray to Prefect, there are some inherent limitations with Ray you should be aware of:

    Ray's support for Python 3.11 is experimental.

Ray does not support non-x86/64 architectures such as ARM/M1 processors with installation from pip alone, so Ray support is skipped during installation of Prefect on those platforms. It is possible to manually install the blocking component with conda. See the Ray documentation for instructions.

    See the Ray installation documentation for further compatibility information.

    ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/tasks/","title":"Tasks","text":"

    A task is a function that represents a discrete unit of work in a Prefect workflow. Tasks are not required \u2014 you may define Prefect workflows that consist only of flows, using regular Python statements and functions. Tasks enable you to encapsulate elements of your workflow logic in observable units that can be reused across flows and subflows.

    ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#tasks-overview","title":"Tasks overview","text":"

    Tasks are functions: they can take inputs, perform work, and return an output. A Prefect task can do almost anything a Python function can do.

    Tasks are special because they receive metadata about upstream dependencies and the state of those dependencies before they run, even if they don't receive any explicit data inputs from them. This gives you the opportunity to, for example, have a task wait on the completion of another task before executing.

    Tasks also take advantage of automatic Prefect logging to capture details about task runs such as runtime, tags, and final state.

    You can define your tasks within the same file as your flow definition, or you can define tasks within modules and import them for use in your flow definitions. Tasks may be called from within a flow, from within a subflow, or (as of prefect 2.18.x) from within another task.

    Calling a task from a flow

    Use the @task decorator to designate a function as a task. Calling the task creates a new task run:

    from prefect import flow, task\n\n@task\ndef my_task():\n    print(\"Hello, I'm a task\")\n\n@flow\ndef my_flow():\n    my_task()\n

    Calling a task from another task

    As of prefect 2.18.x, you can call a task from within another task:

    from prefect import task\n\n@task\ndef my_task():\n    print(\"Hello, I'm a task\")\n\n@task(log_prints=True)\ndef my_parent_task():\n    my_task()\n

    Tasks are uniquely identified by a task key, which is a hash composed of the task name, the fully-qualified name of the function, and any tags. If the task does not have a name specified, the name is derived from the task function.

    How big should a task be?

    Prefect encourages \"small tasks\" \u2014 each one should represent a single logical step of your workflow. This allows Prefect to better contain task failures.

    To be clear, there's nothing stopping you from putting all of your code in a single task \u2014 Prefect will happily run it! However, if any line of code fails, the entire task will fail and must be retried from the beginning. This can be avoided by splitting the code into multiple dependent tasks.
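
As an illustrative sketch, a single monolithic task can be split into smaller dependent tasks so that a failure in one step only retries that step (the function bodies are placeholders):

from prefect import flow, task\n\n@task(retries=2)\ndef extract():\n    return [1, 2, 3]  # placeholder for fetching raw data\n\n@task(retries=2)\ndef transform(data):\n    return [n * 10 for n in data]  # placeholder for processing\n\n@task(retries=2)\ndef load(data):\n    print(f\"loading {data}\")  # placeholder for writing results\n\n@flow\ndef etl_flow():\n    load(transform(extract()))\n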

    ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#task-arguments","title":"Task arguments","text":"

    Tasks allow for customization through optional arguments:

• name: An optional name for the task. If not provided, the name will be inferred from the function name.
• description: An optional string description for the task. If not provided, the description will be pulled from the docstring for the decorated function.
• tags: An optional set of tags to be associated with runs of this task. These tags are combined with any tags defined by a prefect.tags context at task runtime.
• cache_key_fn: An optional callable that, given the task run context and call parameters, generates a string key. If the key matches a previous completed state, that state result will be restored instead of running the task again.
• cache_expiration: An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire.
• retries: An optional number of times to retry on task run failure.
• retry_delay_seconds: An optional number of seconds to wait before retrying the task after failure. This is only applicable if retries is nonzero.
• log_prints: An optional boolean indicating whether to log print statements.
• persist_result: An optional boolean indicating whether to persist the result of the task run to storage.

    See all possible parameters in the Python SDK API docs.

    For example, you can provide a name value for the task. Here we've used the optional description argument as well.

    @task(name=\"hello-task\", \n      description=\"This task says hello.\")\ndef my_task():\n    print(\"Hello, I'm a task\")\n

    You can distinguish runs of this task by providing a task_run_name; this setting accepts a string that can optionally contain templated references to the keyword arguments of your task. The name will be formatted using Python's standard string formatting syntax as can be seen here:

    import datetime\nfrom prefect import flow, task\n\n@task(name=\"My Example Task\", \n      description=\"An example task for a tutorial.\",\n      task_run_name=\"hello-{name}-on-{date:%A}\")\ndef my_task(name, date):\n    pass\n\n@flow\ndef my_flow():\n    # creates a run with a name like \"hello-marvin-on-Thursday\"\n    my_task(name=\"marvin\", date=datetime.datetime.now(datetime.timezone.utc))\n

    Additionally this setting also accepts a function that returns a string to be used for the task run name:

    import datetime\nfrom prefect import flow, task\n\ndef generate_task_name():\n    date = datetime.datetime.now(datetime.timezone.utc)\n    return f\"{date:%A}-is-a-lovely-day\"\n\n@task(name=\"My Example Task\",\n      description=\"An example task for a tutorial.\",\n      task_run_name=generate_task_name)\ndef my_task(name):\n    pass\n\n@flow\ndef my_flow():\n    # creates a run with a name like \"Thursday-is-a-lovely-day\"\n    my_task(name=\"marvin\")\n

    If you need access to information about the task, use the prefect.runtime module. For example:

    from prefect import flow\nfrom prefect.runtime import flow_run, task_run\n\ndef generate_task_name():\n    flow_name = flow_run.flow_name\n    task_name = task_run.task_name\n\n    parameters = task_run.parameters\n    name = parameters[\"name\"]\n    limit = parameters[\"limit\"]\n\n    return f\"{flow_name}-{task_name}-with-{name}-and-{limit}\"\n\n@task(name=\"my-example-task\",\n      description=\"An example task for a tutorial.\",\n      task_run_name=generate_task_name)\ndef my_task(name: str, limit: int = 100):\n    pass\n\n@flow\ndef my_flow(name: str):\n    # creates a run with a name like \"my-flow-my-example-task-with-marvin-and-100\"\n    my_task(name=\"marvin\")\n
    ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#tags","title":"Tags","text":"

    Tags are optional string labels that enable you to identify and group tasks other than by name or flow. Tags are useful for:

    • Filtering task runs by tag in the UI and via the Prefect REST API.
    • Setting concurrency limits on task runs by tag.

    Tags may be specified as a keyword argument on the task decorator.

    @task(name=\"hello-task\", tags=[\"test\"])\ndef my_task():\n    print(\"Hello, I'm a task\")\n

    You can also provide tags as an argument with a tags context manager, specifying tags when the task is called rather than in its definition.

    from prefect import flow, task\nfrom prefect import tags\n\n@task\ndef my_task():\n    print(\"Hello, I'm a task\")\n\n@flow\ndef my_flow():\n    with tags(\"test\"):\n        my_task()\n
    ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#retries","title":"Retries","text":"

    Prefect can automatically retry tasks on failure. In Prefect, a task fails if its Python function raises an exception.

To enable retries, pass retries and retry_delay_seconds parameters to your task. If the task fails, Prefect will retry it up to retries times, waiting retry_delay_seconds seconds between each attempt. If the task still fails after the final retry, Prefect marks the task run as failed.

    Retries don't create new task runs

    A new task run is not created when a task is retried. A new state is added to the state history of the original task run.

    ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#a-real-world-example-making-an-api-request","title":"A real-world example: making an API request","text":"

    Consider the real-world problem of making an API request. In this example, we'll use the httpx library to make an HTTP request.

    import httpx\n\nfrom prefect import flow, task\n\n\n@task(retries=2, retry_delay_seconds=5)\ndef get_data_task(\n    url: str = \"https://api.brittle-service.com/endpoint\"\n) -> dict:\n    response = httpx.get(url)\n\n    # If the response status code is anything but a 2xx, httpx will raise\n    # an exception. This task doesn't handle the exception, so Prefect will\n    # catch the exception and will consider the task run failed.\n    response.raise_for_status()\n\n    return response.json()\n\n\n@flow\ndef get_data_flow():\n    get_data_task()\n

    In this task, if the HTTP request to the brittle API receives any status code other than a 2xx (200, 201, etc.), Prefect will retry the task a maximum of two times, waiting five seconds in between retries.

    ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#custom-retry-behavior","title":"Custom retry behavior","text":"

    The retry_delay_seconds option accepts a list of delays for more custom retry behavior. The following task will wait for successively increasing intervals of 1, 10, and 100 seconds, respectively, before the next attempt starts:

    from prefect import task\n\n@task(retries=3, retry_delay_seconds=[1, 10, 100])\ndef some_task_with_manual_backoff_retries():\n   ...\n

    The retry_condition_fn option accepts a callable that returns a boolean. If the callable returns True, the task will be retried. If the callable returns False, the task will not be retried. The callable accepts three arguments \u2014 the task, the task run, and the state of the task run. The following task will retry on HTTP status codes other than 401 or 404:

    import httpx\nfrom prefect import flow, task\n\ndef retry_handler(task, task_run, state) -> bool:\n    \"\"\"This is a custom retry handler to handle when we want to retry a task\"\"\"\n    try:\n        # Attempt to get the result of the task\n        state.result()\n    except httpx.HTTPStatusError as exc:\n        # Retry on any HTTP status code that is not 401 or 404\n        do_not_retry_on_these_codes = [401, 404]\n        return exc.response.status_code not in do_not_retry_on_these_codes\n    except httpx.ConnectError:\n        # Do not retry\n        return False\n    except:\n        # For any other exception, retry\n        return True\n\n@task(retries=1, retry_condition_fn=retry_handler)\ndef my_api_call_task(url):\n    response = httpx.get(url)\n    response.raise_for_status()\n    return response.json()\n\n@flow\ndef get_data_flow(url):\n    my_api_call_task(url=url)\n\nif __name__ == \"__main__\":\n    get_data_flow(url=\"https://httpbin.org/status/503\")\n

Additionally, you can pass a callable that accepts the number of retries as an argument and returns a list. Prefect includes an exponential_backoff utility that will automatically generate a list of retry delays that correspond to an exponential backoff retry strategy. The following task will wait 10, 20, then 40 seconds before its successive retries.

    from prefect import task\nfrom prefect.tasks import exponential_backoff\n\n@task(retries=3, retry_delay_seconds=exponential_backoff(backoff_factor=10))\ndef some_task_with_exponential_backoff_retries():\n   ...\n
    ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#advanced-topic-adding-jitter","title":"Advanced topic: adding \"jitter\"","text":"

    While using exponential backoff, you may also want to add jitter to the delay times. Jitter is a random amount of time added to retry periods that helps prevent \"thundering herd\" scenarios, which is when many tasks all retry at the exact same time, potentially overwhelming systems.

    The retry_jitter_factor option can be used to add variance to the base delay. For example, a retry delay of 10 seconds with a retry_jitter_factor of 0.5 will be allowed to delay up to 15 seconds. Large values of retry_jitter_factor provide more protection against \"thundering herds,\" while keeping the average retry delay time constant. For example, the following task adds jitter to its exponential backoff so the retry delays will vary up to a maximum delay time of 20, 40, and 80 seconds respectively.

    from prefect import task\nfrom prefect.tasks import exponential_backoff\n\n@task(\n    retries=3,\n    retry_delay_seconds=exponential_backoff(backoff_factor=10),\n    retry_jitter_factor=1,\n)\ndef some_task_with_exponential_backoff_retries():\n   ...\n
    ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#configuring-retry-behavior-globally-with-settings","title":"Configuring retry behavior globally with settings","text":"

    You can also set retries and retry delays by using the following global settings. These settings will not override the retries or retry_delay_seconds that are set in the flow or task decorator.

prefect config set PREFECT_FLOW_DEFAULT_RETRIES=2\nprefect config set PREFECT_TASK_DEFAULT_RETRIES=2\nprefect config set PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS=\"[1, 10, 100]\"\nprefect config set PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS=\"[1, 10, 100]\"\n
    ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#caching","title":"Caching","text":"

    Caching refers to the ability of a task run to reflect a finished state without actually running the code that defines the task. This allows you to efficiently reuse results of tasks that may be expensive to run with every flow run, or reuse cached results if the inputs to a task have not changed.

    To determine whether a task run should retrieve a cached state, we use \"cache keys\". A cache key is a string value that indicates if one run should be considered identical to another. When a task run with a cache key finishes, we attach that cache key to the state. When each task run starts, Prefect checks for states with a matching cache key. If a state with an identical key is found, Prefect will use the cached state instead of running the task again.

    To enable caching, specify a cache_key_fn \u2014 a function that returns a cache key \u2014 on your task. You may optionally provide a cache_expiration timedelta indicating when the cache expires. If you do not specify a cache_expiration, the cache key does not expire.

    You can define a task that is cached based on its inputs by using the Prefect task_input_hash. This is a task cache key implementation that hashes all inputs to the task using a JSON or cloudpickle serializer. If the task inputs do not change, the cached results are used rather than running the task until the cache expires.

    Note that, if any arguments are not JSON serializable, the pickle serializer is used as a fallback. If cloudpickle fails, task_input_hash returns a null key indicating that a cache key could not be generated for the given inputs.

    In this example, until the cache_expiration time ends, as long as the input to hello_task() remains the same when it is called, the cached return value is returned. In this situation the task is not rerun. However, if the input argument value changes, hello_task() runs using the new input.

    from datetime import timedelta\nfrom prefect import flow, task\nfrom prefect.tasks import task_input_hash\n\n@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1))\ndef hello_task(name_input):\n    # Doing some work\n    print(\"Saying hello\")\n    return \"hello \" + name_input\n\n@flow(log_prints=True)\ndef hello_flow(name_input):\n    hello_task(name_input)\n

    Alternatively, you can provide your own function or other callable that returns a string cache key. A generic cache_key_fn is a function that accepts two positional arguments:

    • The first argument corresponds to the TaskRunContext, which stores task run metadata in the attributes task_run_id, flow_run_id, and task.
    • The second argument corresponds to a dictionary of input values to the task. For example, if your task is defined with signature fn(x, y, z) then the dictionary will have keys \"x\", \"y\", and \"z\" with corresponding values that can be used to compute your cache key.

    Note that the cache_key_fn is not defined as a @task.

    Task cache keys

    By default, a task cache key is limited to 2000 characters, specified by the PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH setting.

    from prefect import task, flow\n\ndef static_cache_key(context, parameters):\n    # return a constant\n    return \"static cache key\"\n\n@task(cache_key_fn=static_cache_key)\ndef cached_task():\n    print('running an expensive operation')\n    return 42\n\n@flow\ndef test_caching():\n    cached_task()\n    cached_task()\n    cached_task()\n

    In this case, there's no expiration for the cache key, and no logic to change the cache key, so cached_task() only runs once.

    >>> test_caching()\nrunning an expensive operation\n>>> test_caching()\n>>> test_caching()\n

    When each task run requested to enter a Running state, it provided its cache key computed from the cache_key_fn. The Prefect backend identified that there was a COMPLETED state associated with this key and instructed the run to immediately enter the same COMPLETED state, including the same return values.

    A real-world example might include the flow run ID from the context in the cache key so only repeated calls in the same flow run are cached.

from prefect import task\nfrom prefect.tasks import task_input_hash\n\ndef cache_within_flow_run(context, parameters):\n    return f\"{context.task_run.flow_run_id}-{task_input_hash(context, parameters)}\"\n\n@task(cache_key_fn=cache_within_flow_run)\ndef cached_task():\n    print('running an expensive operation')\n    return 42\n

    Task results, retries, and caching

    Task results are cached in memory during a flow run and persisted to the location specified by the PREFECT_LOCAL_STORAGE_PATH setting. As a result, task caching between flow runs is currently limited to flow runs with access to that local storage path.

    ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#refreshing-the-cache","title":"Refreshing the cache","text":"

    Sometimes, you want a task to update the data associated with its cache key instead of using the cache. This is a cache \"refresh\".

    The refresh_cache option can be used to enable this behavior for a specific task:

import random\n\nfrom prefect import task\n\n\ndef static_cache_key(context, parameters):\n    # return a constant\n    return \"static cache key\"\n\n\n@task(cache_key_fn=static_cache_key, refresh_cache=True)\ndef caching_task():\n    return random.random()\n

    When this task runs, it will always update the cache key instead of using the cached value. This is particularly useful when you have a flow that is responsible for updating the cache.

    If you want to refresh the cache for all tasks, you can use the PREFECT_TASKS_REFRESH_CACHE setting. Setting PREFECT_TASKS_REFRESH_CACHE=true will change the default behavior of all tasks to refresh. This is particularly useful if you want to rerun a flow without cached results.

    If you have tasks that should not refresh when this setting is enabled, you may explicitly set refresh_cache to False. These tasks will never refresh the cache \u2014 if a cache key exists it will be read, not updated. Note that, if a cache key does not exist yet, these tasks can still write to the cache.

    @task(cache_key_fn=static_cache_key, refresh_cache=False)\ndef caching_task():\n    return random.random()\n
    ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#timeouts","title":"Timeouts","text":"

    Task timeouts are used to prevent unintentional long-running tasks. When the duration of execution for a task exceeds the duration specified in the timeout, a timeout exception will be raised and the task will be marked as failed. In the UI, the task will be visibly designated as TimedOut. From the perspective of the flow, the timed-out task will be treated like any other failed task.

    Timeout durations are specified using the timeout_seconds keyword argument.

    from prefect import task\nimport time\n\n@task(timeout_seconds=1, log_prints=True)\ndef show_timeouts():\n    print(\"I will execute\")\n    time.sleep(5)\n    print(\"I will not execute\")\n
    ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#task-results","title":"Task results","text":"

    Depending on how you call tasks, they can return different types of results and optionally engage the use of a task runner.

    Any task can return:

• Data, such as int, str, dict, list, and so on \u2014 this is the default behavior any time you call your_task().
• PrefectFuture \u2014 this is achieved by calling your_task.submit(). A PrefectFuture contains both data and State.
• Prefect State \u2014 anytime you call your task or flow with the argument return_state=True, it will directly return a state you can use to build custom behavior based on a state change you care about, such as a task or flow failing or retrying (see the sketch below).
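
Here is a minimal sketch of requesting a State directly with return_state=True:

from prefect import flow, task\n\n@task\ndef my_task():\n    return 42\n\n@flow\ndef my_flow():\n    # return_state=True returns the task run's State instead of the data\n    state = my_task(return_state=True)\n    if state.is_completed():\n        print(state.result())  # 42\n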

    To run your task with a task runner, you must call the task with .submit().

    See state returned values for examples.

    Task runners are optional

    If you just need the result from a task, you can simply call the task from your flow. For most workflows, the default behavior of calling a task directly and receiving a result is all you'll need.

    ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#wait-for","title":"Wait for","text":"

    To create a dependency between two tasks that do not exchange data, but one needs to wait for the other to finish, use the special wait_for keyword argument:

    @task\ndef task_1():\n    pass\n\n@task\ndef task_2():\n    pass\n\n@flow\ndef my_flow():\n    x = task_1()\n\n    # task 2 will wait for task_1 to complete\n    y = task_2(wait_for=[x])\n
    ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#map","title":"Map","text":"

Prefect provides a .map() implementation that automatically creates a task run for each element of its input data. Mapped tasks represent the computations of many individual child tasks.

The simplest Prefect map takes a task and applies it to each element of its inputs.

    from prefect import flow, task\n\n@task\ndef print_nums(nums):\n    for n in nums:\n        print(n)\n\n@task\ndef square_num(num):\n    return num**2\n\n@flow\ndef map_flow(nums):\n    print_nums(nums)\n    squared_nums = square_num.map(nums) \n    print_nums(squared_nums)\n\nmap_flow([1,2,3,5,8,13])\n

    Prefect also supports unmapped arguments, allowing you to pass static values that don't get mapped over.

    from prefect import flow, task\n\n@task\ndef add_together(x, y):\n    return x + y\n\n@flow\ndef sum_it(numbers, static_value):\n    futures = add_together.map(numbers, static_value)\n    return futures\n\nsum_it([1, 2, 3], 5)\n

    If your static argument is an iterable, you'll need to wrap it with unmapped to tell Prefect that it should be treated as a static value.

    from prefect import flow, task, unmapped\n\n@task\ndef sum_plus(x, static_iterable):\n    return x + sum(static_iterable)\n\n@flow\ndef sum_it(numbers, static_iterable):\n    futures = sum_plus.map(numbers, static_iterable)\n    return futures\n\nsum_it([4, 5, 6], unmapped([1, 2, 3]))\n
    ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#async-tasks","title":"Async tasks","text":"

    Prefect also supports asynchronous task and flow definitions by default. All of the standard rules of async apply:

    import asyncio\n\nfrom prefect import task, flow\n\n@task\nasync def print_values(values):\n    for value in values:\n        await asyncio.sleep(1) # yield\n        print(value, end=\" \")\n\n@flow\nasync def async_flow():\n    await print_values([1, 2])  # runs immediately\n    coros = [print_values(\"abcd\"), print_values(\"6789\")]\n\n    # asynchronously gather the tasks\n    await asyncio.gather(*coros)\n\nasyncio.run(async_flow())\n

Note that if you are not using asyncio.gather, calling .submit() is required for asynchronous execution on the ConcurrentTaskRunner.

    ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#task-run-concurrency-limits","title":"Task run concurrency limits","text":"

    There are situations in which you want to actively prevent too many tasks from running simultaneously. For example, if many tasks across multiple flows are designed to interact with a database that only allows 10 connections, you want to make sure that no more than 10 tasks that connect to this database are running at any given time.

    Prefect has built-in functionality for achieving this: task concurrency limits.

    Task concurrency limits use task tags. You can specify an optional concurrency limit as the maximum number of concurrent task runs in a Running state for tasks with a given tag. The specified concurrency limit applies to any task to which the tag is applied.

    If a task has multiple tags, it will run only if all tags have available concurrency.

    Tags without explicit limits are considered to have unlimited concurrency.
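
For example, a sketch of a task that carries two tags; a run of this task starts only when both the database and small_instance tags (hypothetical tag names) have an available concurrency slot:

from prefect import task\n\n# This task run occupies a slot under both tags' concurrency limits\n@task(tags=[\"database\", \"small_instance\"])\ndef query_database():\n    ...\n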

    0 concurrency limit aborts task runs

    Currently, if the concurrency limit is set to 0 for a tag, any attempt to run a task with that tag will be aborted instead of delayed.

    ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#execution-behavior","title":"Execution behavior","text":"

    Task tag limits are checked whenever a task run attempts to enter a Running state.

    If there are no concurrency slots available for any one of your task's tags, the transition to a Running state will be delayed and the client is instructed to try entering a Running state again in 30 seconds (or the value specified by the PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS setting).

    ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#configuring-concurrency-limits","title":"Configuring concurrency limits","text":"

    Flow run concurrency limits are set at a work pool and/or work queue level

    While task run concurrency limits are configured via tags (as shown below), flow run concurrency limits are configured via work pools and/or work queues.

    You can set concurrency limits on as few or as many tags as you wish. You can set limits through:

    • Prefect CLI
    • Prefect API by using PrefectClient Python client
    • Prefect server UI or Prefect Cloud
    ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#cli","title":"CLI","text":"

    You can create, list, and remove concurrency limits by using Prefect CLI concurrency-limit commands.

    prefect concurrency-limit [command] [arguments]\n
• create: Create a concurrency limit by specifying a tag and limit.
• delete: Delete the concurrency limit set on the specified tag.
• inspect: View details about a concurrency limit set on the specified tag.
• ls: View all defined concurrency limits.

    For example, to set a concurrency limit of 10 on the 'small_instance' tag:

    prefect concurrency-limit create small_instance 10\n

    To delete the concurrency limit on the 'small_instance' tag:

    prefect concurrency-limit delete small_instance\n

    To view details about the concurrency limit on the 'small_instance' tag:

    prefect concurrency-limit inspect small_instance\n
    ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#python-client","title":"Python client","text":"

    To update your tag concurrency limits programmatically, use PrefectClient.orchestration.create_concurrency_limit.

    create_concurrency_limit takes two arguments:

    • tag specifies the task tag on which you're setting a limit.
    • concurrency_limit specifies the maximum number of concurrent task runs for that tag.

    For example, to set a concurrency limit of 10 on the 'small_instance' tag:

    from prefect import get_client\n\nasync with get_client() as client:\n    # set a concurrency limit of 10 on the 'small_instance' tag\n    limit_id = await client.create_concurrency_limit(\n        tag=\"small_instance\", \n        concurrency_limit=10\n        )\n

    To remove all concurrency limits on a tag, use PrefectClient.delete_concurrency_limit_by_tag, passing the tag:

    async with get_client() as client:\n    # remove a concurrency limit on the 'small_instance' tag\n    await client.delete_concurrency_limit_by_tag(tag=\"small_instance\")\n

    If you wish to query for the currently set limit on a tag, use PrefectClient.read_concurrency_limit_by_tag, passing the tag:

    To see all of your limits across all of your tags, use PrefectClient.read_concurrency_limits.

    async with get_client() as client:\n    # query the concurrency limit on the 'small_instance' tag\n    limit = await client.read_concurrency_limit_by_tag(tag=\"small_instance\")\n
    ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/work-pools/","title":"Work Pools & Workers","text":"

    Work pools and workers bridge the Prefect orchestration environment with your execution environment. When a deployment creates a flow run, it is submitted to a specific work pool for scheduling. A worker running in the execution environment can poll its respective work pool for new runs to execute, or the work pool can submit flow runs to serverless infrastructure directly, depending on your configuration.

    ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#work-pool-overview","title":"Work pool overview","text":"

    Work pools organize work for execution. Work pools have types corresponding to the infrastructure that will execute the flow code, as well as the delivery method of work to that environment. Pull work pools require workers (or less ideally, agents) to poll the work pool for flow runs to execute. Push work pools can submit runs directly to your serverless infrastructure providers such as Google Cloud Run, Azure Container Instances, and AWS ECS without the need for an agent or worker. Managed work pools are administered by Prefect and handle the submission and execution of code on your behalf.

    Work pools are like pub/sub topics

    It's helpful to think of work pools as a way to coordinate (potentially many) deployments with (potentially many) workers through a known channel: the pool itself. This is similar to how \"topics\" are used to connect producers and consumers in a pub/sub or message-based system. By switching a deployment's work pool, users can quickly change the worker that will execute their runs, making it easy to promote runs through environments or even debug locally.

In addition, users can control aspects of work pool behavior, such as how many runs the pool allows to run concurrently, or whether delivery of work is paused entirely. These options can be modified at any time, and any workers requesting work for a specific pool will only see matching flow runs.

    ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#work-pool-configuration","title":"Work pool configuration","text":"

    You can configure work pools by using any of the following:

    • Prefect UI
    • Prefect CLI commands
    • Prefect REST API
    • Terraform provider for Prefect Cloud

    To manage work pools in the UI, click the Work Pools icon. This displays a list of currently configured work pools.

    You can pause a work pool from this page by using the toggle.

    Select the + button to create a new work pool. You'll be able to specify the details for work served by this work pool.

    To create a work pool via the Prefect CLI, use the prefect work-pool create command:

    prefect work-pool create [OPTIONS] NAME\n

    NAME is a required, unique name for the work pool.

    Optional configuration parameters you can specify to filter work on the pool include:

• --paused: If provided, the work pool will be created in a paused state.
• --type: The type of infrastructure that can execute runs from this work pool.
• --set-as-default: Whether to use the created work pool as the local default for deployment.
• --base-job-template: The path to a JSON file containing the base job template to use. If unspecified, Prefect will use the default base job template for the given worker type.

    For example, to create a work pool called test-pool, you would run this command:

    prefect work-pool create test-pool\n
    ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#work-pool-types","title":"Work pool types","text":"

    If you don't use the --type flag to specify an infrastructure type, you are prompted to select from the following options:

The following infrastructure types are available on both Prefect Cloud and a Prefect server instance:

• Prefect Agent: Execute flow runs on heterogeneous infrastructure using infrastructure blocks.
• Process: Execute flow runs as subprocesses on a worker. Works well for local execution when first getting started.
• AWS Elastic Container Service: Execute flow runs within containers on AWS ECS. Works with EC2 and Fargate clusters. Requires an AWS account.
• Azure Container Instances: Execute flow runs within containers on Azure's Container Instances service. Requires an Azure account.
• Docker: Execute flow runs within Docker containers. Works well for managing flow execution environments via Docker images. Requires access to a running Docker daemon.
• Google Cloud Run: Execute flow runs within containers on Google Cloud Run. Requires a Google Cloud Platform account.
• Google Cloud Run V2: Execute flow runs within containers on Google Cloud Run (V2 API). Requires a Google Cloud Platform account.
• Google Vertex AI: Execute flow runs within containers on Google Vertex AI. Requires a Google Cloud Platform account.
• Kubernetes: Execute flow runs within jobs scheduled on a Kubernetes cluster. Requires a Kubernetes cluster.

The following additional types are available with Prefect Cloud:

• Google Cloud Run - Push: Execute flow runs within containers on Google Cloud Run. Requires a Google Cloud Platform account. Flow runs are pushed directly to your environment, without the need for a Prefect worker.
• AWS Elastic Container Service - Push: Execute flow runs within containers on AWS ECS. Works with existing ECS clusters and serverless execution via AWS Fargate. Requires an AWS account. Flow runs are pushed directly to your environment, without the need for a Prefect worker.
• Azure Container Instances - Push: Execute flow runs within containers on Azure's Container Instances service. Requires an Azure account. Flow runs are pushed directly to your environment, without the need for a Prefect worker.
• Modal - Push: Execute flow runs on Modal. Requires a Modal account. Flow runs are pushed directly to your Modal workspace, without the need for a Prefect worker.
• Prefect Managed: Execute flow runs within containers on Prefect managed infrastructure.

    On success, the command returns the details of the newly created work pool.

    Created work pool with properties:\n    name - 'test-pool'\n    id - a51adf8c-58bb-4949-abe6-1b87af46eabd\n    concurrency limit - None\n\nStart a worker to pick up flows from the work pool:\n    prefect worker start -p 'test-pool'\n\nInspect the work pool:\n    prefect work-pool inspect 'test-pool'\n

    Set a work pool as the default for new deployments by adding the --set-as-default flag.

Doing so results in output similar to the following:

    Set 'test-pool' as default work pool for profile 'default'\n\nTo change your default work pool, run:\n\n        prefect config set PREFECT_DEFAULT_WORK_POOL_NAME=<work-pool-name>\n

    To update a work pool via the Prefect CLI, use the prefect work-pool update command:

    prefect work-pool update [OPTIONS] NAME\n

    NAME is the name of the work pool to update.

    Optional configuration parameters you can specify to update the work pool include:

• --base-job-template: The path to a JSON file containing the base job template to use. If unspecified, Prefect will use the default base job template for the given worker type.
• --description: A description of the work pool.
• --concurrency-limit: The maximum number of flow runs to run simultaneously in the work pool.

    Managing work pools in CI/CD

    You can version control your base job template by committing it as a JSON file to your repository and control updates to your work pools' base job templates by using the prefect work-pool update command in your CI/CD pipeline. For example, you could use the following command to update a work pool's base job template to the contents of a file named base-job-template.json:

    prefect work-pool update --base-job-template base-job-template.json my-work-pool\n
    ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#base-job-template","title":"Base job template","text":"

Each work pool has a base job template that lets you customize the behavior of the worker executing flow runs from the work pool.

    The base job template acts as a contract defining the configuration passed to the worker for each flow run and the options available to deployment creators to customize worker behavior per deployment.

    A base job template comprises a job_configuration section and a variables section.

    The variables section defines the fields available to be customized per deployment. The variables section follows the OpenAPI specification, which allows work pool creators to place limits on provided values (type, minimum, maximum, etc.).

    The job configuration section defines how values provided for fields in the variables section should be translated into the configuration given to a worker when executing a flow run.

    The values in the job_configuration can use placeholders to reference values provided in the variables section. Placeholders are declared using double curly braces, e.g., {{ variable_name }}. job_configuration values can also be hard-coded if the value should not be customizable.

    Each worker type is configured with a default base job template, making it easy to start with a work pool. The default base template defines values that will be passed to every flow run, but can be overridden on a per-deployment or per-flow run basis.

    For example, if we create a process work pool named 'above-ground' via the CLI:

    prefect work-pool create --type process above-ground\n

    We see these configuration options available in the Prefect UI:

For a process work pool with the default base job template, we can set environment variables for spawned processes, set the working directory to execute flows, and control whether the flow run output is streamed to workers' standard output. You can also see an example of a JSON-formatted base job template on the 'Advanced' tab.

    You can examine the default base job template for a given worker type by running:

    prefect work-pool get-default-base-job-template --type process\n
    {\n  \"job_configuration\": {\n    \"command\": \"{{ command }}\",\n    \"env\": \"{{ env }}\",\n    \"labels\": \"{{ labels }}\",\n    \"name\": \"{{ name }}\",\n    \"stream_output\": \"{{ stream_output }}\",\n    \"working_dir\": \"{{ working_dir }}\"\n  },\n  \"variables\": {\n    \"type\": \"object\",\n    \"properties\": {\n      \"name\": {\n        \"title\": \"Name\",\n        \"description\": \"Name given to infrastructure created by a worker.\",\n        \"type\": \"string\"\n      },\n      \"env\": {\n        \"title\": \"Environment Variables\",\n        \"description\": \"Environment variables to set when starting a flow run.\",\n        \"type\": \"object\",\n        \"additionalProperties\": {\n          \"type\": \"string\"\n        }\n      },\n      \"labels\": {\n        \"title\": \"Labels\",\n        \"description\": \"Labels applied to infrastructure created by a worker.\",\n        \"type\": \"object\",\n        \"additionalProperties\": {\n          \"type\": \"string\"\n        }\n      },\n      \"command\": {\n        \"title\": \"Command\",\n        \"description\": \"The command to use when starting a flow run. In most cases, this should be left blank and the command will be automatically generated by the worker.\",\n        \"type\": \"string\"\n      },\n      \"stream_output\": {\n        \"title\": \"Stream Output\",\n        \"description\": \"If enabled, workers will stream output from flow run processes to local standard output.\",\n        \"default\": true,\n        \"type\": \"boolean\"\n      },\n      \"working_dir\": {\n        \"title\": \"Working Directory\",\n        \"description\": \"If provided, workers will open flow run processes within the specified path as the working directory. Otherwise, a temporary directory will be created.\",\n        \"type\": \"string\",\n        \"format\": \"path\"\n      }\n    }\n  }\n}\n

    You can override each of these attributes on a per-deployment or per-flow run basis. When creating a deployment, you can specify these overrides in the deployments.work_pool.job_variables section of a prefect.yaml file or in the job_variables argument of a Python flow.deploy method.

    For example, to turn off streaming output for a specific deployment, we could add the following to our prefect.yaml:

    deployments:\n- name: demo-deployment\n  entrypoint: demo_project/demo_flow.py:some_work\n  work_pool:\n    name: above-ground  \n    job_variables:\n        stream_output: false\n
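The same override can also be expressed with the Python deployment API. The following is a minimal sketch (the source repository URL is a placeholder):

from prefect import flow\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        source=\"https://github.com/org/repo.git\",  # placeholder repository URL\n        entrypoint=\"demo_project/demo_flow.py:some_work\",\n    ).deploy(\n        name=\"demo-deployment\",\n        work_pool_name=\"above-ground\",\n        job_variables={\"stream_output\": False},\n    )\n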

    See more about overriding job variables in the Overriding Job Variables Guide.

    Advanced Customization of the Base Job Template

    For advanced use cases, you can create work pools with fully customizable job templates. This customization is available when creating or editing a work pool on the 'Advanced' tab within the UI or when updating a work pool via the Prefect CLI.

    Advanced customization is useful anytime the underlying infrastructure supports a high degree of customization. In these scenarios a work pool job template allows you to expose a minimal and easy-to-digest set of options to deployment authors. Additionally, these options are the only customizable aspects for deployment infrastructure, which can be useful for restricting functionality in secure environments. For example, the kubernetes worker type allows users to specify a custom job template that can be used to configure the manifest that workers use to create jobs for flow execution.

    For more information and advanced configuration examples, see the Kubernetes Worker documentation.

    For more information on overriding a work pool's job variables see this guide.

    ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#viewing-work-pools","title":"Viewing work pools","text":"

    At any time, users can see and edit configured work pools in the Prefect UI.

    To view work pools with the Prefect CLI, you can:

    • List (ls) all available pools
    • Inspect (inspect) the details of a single pool
    • Preview (preview) scheduled work for a single pool

    prefect work-pool ls lists all configured work pools for the server.

    prefect work-pool ls\n

    For example:

                                   Work pools\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name       \u2503    Type        \u2503                                   ID \u2503 Concurrency Limit \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 barbeque   \u2502 docker         \u2502 72c0a101-b3e2-4448-b5f8-a8c5184abd17 \u2502 None              \u2502\n\u2502 k8s-pool   \u2502 kubernetes     \u2502 7b6e3523-d35b-4882-84a7-7a107325bb3f \u2502 None              \u2502\n\u2502 test-pool  \u2502 prefect-agent  \u2502 a51adf8c-58bb-4949-abe6-1b87af46eabd \u2502 None              |\n| my-pool    \u2502 process        \u2502 cd6ff9e8-bfd8-43be-9be3-69375f7a11cd \u2502 None              \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n                       (**) denotes a paused pool\n

    prefect work-pool inspect provides all configuration metadata for a specific work pool by ID.

    prefect work-pool inspect 'test-pool'\n

    Outputs information similar to the following:

    Workpool(\n    id='a51adf8c-58bb-4949-abe6-1b87af46eabd',\n    created='2 minutes ago',\n    updated='2 minutes ago',\n    name='test-pool',\n    filter=None,\n)\n

    prefect work-pool preview displays scheduled flow runs for a specific work pool by ID for the upcoming hour. The optional --hours flag lets you specify the number of hours to look ahead.

    prefect work-pool preview 'test-pool' --hours 12\n
    \u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Scheduled Star\u2026 \u2503 Run ID                     \u2503 Name         \u2503 Deployment ID               \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 2022-02-26 06:\u2026 \u2502 741483d4-dc90-4913-b88d-0\u2026 \u2502 messy-petrel \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 05:\u2026 \u2502 14e23a19-a51b-4833-9322-5\u2026 \u2502 unselfish-g\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 04:\u2026 \u2502 deb44d4d-5fa2-4f70-a370-e\u2026 \u2502 solid-ostri\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 03:\u2026 \u2502 07374b5c-121f-4c8d-9105-b\u2026 \u2502 sophisticat\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 02:\u2026 \u2502 545bc975-b694-4ece-9def-8\u2026 \u2502 gorgeous-mo\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 01:\u2026 \u2502 704f2d67-9dfa-4fb8-9784-4\u2026 \u2502 sassy-hedge\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 00:\u2026 \u2502 691312f0-d142-4218-b617-a\u2026 \u2502 sincere-moo\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-25 23:\u2026 \u2502 7cb3ff96-606b-4d8c-8a33-4\u2026 \u2502 curious-cat\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-25 22:\u2026 \u2502 3ea559fe-cb34-43b0-8090-1\u2026 \u2502 primitive-f\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-25 21:\u2026 \u2502 96212e80-426d-4bf4-9c49-e\u2026 \u2502 phenomenal-\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n                                   (**) denotes a late run\n
    ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#work-pool-status","title":"Work pool status","text":"

    Work pools have three statuses: READY, NOT_READY, and PAUSED. A work pool is considered ready if it has at least one online worker sending heartbeats to the work pool. If a work pool has no online workers, it is considered not ready to execute work. A work pool can be placed in a paused status manually by a user or via an automation. When a paused work pool is unpaused, it will be reassigned the appropriate status based on whether any workers are sending heartbeats.

    ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#pausing-and-deleting-work-pools","title":"Pausing and deleting work pools","text":"

    A work pool can be paused at any time to stop the delivery of work to workers. Workers will not receive any work when polling a paused pool.

    To pause a work pool through the Prefect CLI, use the prefect work-pool pause command:

    prefect work-pool pause 'test-pool'\n

    To resume a work pool through the Prefect CLI, use the prefect work-pool resume command with the work pool name.

    To delete a work pool through the Prefect CLI, use the prefect work-pool delete command with the work pool name.
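For example, continuing with the 'test-pool' example:

prefect work-pool resume 'test-pool'\n

And to delete the same pool:

prefect work-pool delete 'test-pool'\n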

    ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#managing-concurrency","title":"Managing concurrency","text":"

    Each work pool can optionally restrict concurrent runs of matching flows.

    For example, a work pool with a concurrency limit of 5 will only release new work if fewer than 5 matching runs are currently in a Running or Pending state. If 3 runs are Running or Pending, polling the pool for work will only result in 2 new runs, even if there are many more available, to ensure that the concurrency limit is not exceeded.

    When using the prefect work-pool Prefect CLI command to configure a work pool, the following subcommands set concurrency limits:

    • set-concurrency-limit sets a concurrency limit on a work pool.
    • clear-concurrency-limit clears any concurrency limits from a work pool.
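For example, to limit a work pool named 'my-pool' to five concurrent flow runs (the pool name is illustrative):

prefect work-pool set-concurrency-limit \"my-pool\" 5\n

And to remove the limit again:

prefect work-pool clear-concurrency-limit \"my-pool\"\n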
    ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#work-queues","title":"Work queues","text":"

    Advanced topic

    Prefect will automatically create a default work queue if needed.

    Work queues offer advanced control over how runs are executed. Each work pool has a \"default\" queue that all work will be sent to by default. Additional queues can be added to a work pool to enable greater control over work delivery through fine grained priority and concurrency. Each work queue has a priority indicated by a unique positive integer. Lower numbers take greater priority in the allocation of work. Accordingly, new queues can be added without changing the rank of the higher-priority queues (e.g. no matter how many queues you add, the queue with priority 1 will always be the highest priority).

    Work queues can also have their own concurrency limits. Note that each queue is also subject to the global work pool concurrency limit, which cannot be exceeded.

    Together work queue priority and concurrency enable precise control over work. For example, a pool may have three queues: A \"low\" queue with priority 10 and no concurrency limit, a \"high\" queue with priority 5 and a concurrency limit of 3, and a \"critical\" queue with priority 1 and a concurrency limit of 1. This arrangement would enable a pattern in which there are two levels of priority, \"high\" and \"low\" for regularly scheduled flow runs, with the remaining \"critical\" queue for unplanned, urgent work, such as a backfill.

    Priority is evaluated to determine the order in which flow runs are submitted for execution. If all flow runs are capable of being executed with no limitation due to concurrency or otherwise, priority is still used to determine order of submission, but there is no impact to execution. If not all flow runs can be executed, usually as a result of concurrency limits, priority is used to determine which queues receive precedence to submit runs for execution.

    Priority for flow run submission proceeds from the highest priority to the lowest priority. In the preceding example, all work from the \"critical\" queue (priority 1) will be submitted, before any work is submitted from \"high\" (priority 5). Once all work has been submitted from priority queue \"critical\", work from the \"high\" queue will begin submission.

If new flow runs are received on the \"critical\" queue while flow runs are still scheduled on the \"high\" and \"low\" queues, flow run submission goes back to ensuring all scheduled work is first satisfied from the highest priority queue until it is empty, in waterfall fashion.

    Work queue status

    A work queue has a READY status when it has been polled by a worker in the last 60 seconds. Pausing a work queue will give it a PAUSED status and mean that it will accept no new work until it is unpaused. A user can control the work queue's paused status in the UI. Unpausing a work queue will give the work queue a NOT_READY status unless a worker has polled it in the last 60 seconds.

    ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#local-debugging","title":"Local debugging","text":"

    As long as your deployment's infrastructure block supports it, you can use work pools to temporarily send runs to a worker running on your local machine for debugging by running prefect worker start -p my-local-machine and updating the deployment's work pool to my-local-machine.

    ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#worker-overview","title":"Worker overview","text":"

    Workers are lightweight polling services that retrieve scheduled runs from a work pool and execute them.

    Workers are similar to agents, but offer greater control over infrastructure configuration and the ability to route work to specific types of execution environments.

    Workers each have a type corresponding to the execution environment to which they will submit flow runs. Workers are only able to poll work pools that match their type. As a result, when deployments are assigned to a work pool, you know in which execution environment scheduled flow runs for that deployment will run.

    ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#worker-types","title":"Worker types","text":"

    Below is a list of available worker types. Note that most worker types will require installation of an additional package.

• process: Executes flow runs in subprocesses.
• kubernetes: Executes flow runs as Kubernetes jobs. Required package: prefect-kubernetes.
• docker: Executes flow runs within Docker containers. Required package: prefect-docker.
• ecs: Executes flow runs as ECS tasks. Required package: prefect-aws.
• cloud-run: Executes flow runs as Google Cloud Run jobs. Required package: prefect-gcp.
• vertex-ai: Executes flow runs as Google Cloud Vertex AI jobs. Required package: prefect-gcp.
• azure-container-instance: Execute flow runs in ACI containers. Required package: prefect-azure.

    If you don\u2019t see a worker type that meets your needs, consider developing a new worker type!

    ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#worker-options","title":"Worker options","text":"

    Workers poll for work from one or more queues within a work pool. If the worker references a work queue that doesn't exist, it will be created automatically. The worker CLI is able to infer the worker type from the work pool. Alternatively, you can also specify the worker type explicitly. If you supply the worker type to the worker CLI, a work pool will be created automatically if it doesn't exist (using default job settings).

    Configuration parameters you can specify when starting a worker include:

• --name, -n: The name to give to the started worker. If not provided, a unique name will be generated.
• --pool, -p: The work pool the started worker should poll.
• --work-queue, -q: One or more work queue names for the worker to pull from. If not provided, the worker will pull from all work queues in the work pool.
• --type, -t: The type of worker to start. If not provided, the worker type will be inferred from the work pool.
• --prefetch-seconds: The amount of time before a flow run's scheduled start time to begin submission. Default is the value of PREFECT_WORKER_PREFETCH_SECONDS.
• --run-once: Only run worker polling once. By default, the worker runs forever.
• --limit, -l: The maximum number of flow runs to start simultaneously.
• --with-healthcheck: Start a healthcheck server for the worker.
• --install-policy: Install policy to use workers from Prefect integration packages.
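For example, to start a worker that polls only the 'high' and 'critical' queues of a pool named 'my-pool' and starts at most three flow runs at a time (the pool and queue names are illustrative):

prefect worker start --pool \"my-pool\" --work-queue \"high\" --work-queue \"critical\" --limit 3\n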

    You must start a worker within an environment that can access or create the infrastructure needed to execute flow runs. The worker will deploy flow runs to the infrastructure corresponding to the worker type. For example, if you start a worker with type kubernetes, the worker will deploy flow runs to a Kubernetes cluster.

    Prefect must be installed in execution environments

    Prefect must be installed in any environment (virtual environment, Docker container, etc.) where you intend to run the worker or execute a flow run.

PREFECT_API_URL and PREFECT_API_KEY settings for workers

    PREFECT_API_URL must be set for the environment in which your worker is running. You must also have a user or service account with the Worker role, which can be configured by setting the PREFECT_API_KEY.
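For example, both values can be set in the worker's environment with the Prefect CLI. The URL below follows the Prefect Cloud pattern; the account, workspace, and key values are placeholders:

prefect config set PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/<ACCOUNT-ID>/workspaces/<WORKSPACE-ID>\"\nprefect config set PREFECT_API_KEY=\"<YOUR-API-KEY>\"\n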

    ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#worker-status","title":"Worker status","text":"

    Workers have two statuses: ONLINE and OFFLINE. A worker is online if it sends regular heartbeat messages to the Prefect API. If a worker has missed three heartbeats, it is considered offline. By default, a worker is considered offline a maximum of 90 seconds after it stopped sending heartbeats, but the threshold can be configured via the PREFECT_WORKER_HEARTBEAT_SECONDS setting.
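For example, to lower the heartbeat interval to 10 seconds:

prefect config set PREFECT_WORKER_HEARTBEAT_SECONDS=10\n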

    ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#starting-a-worker","title":"Starting a worker","text":"

    Use the prefect worker start CLI command to start a worker. You must pass at least the work pool name. If the work pool does not exist, it will be created if the --type flag is used.

    prefect worker start -p [work pool name]\n

    For example:

    prefect worker start -p \"my-pool\"\n

    Results in output like this:

    Discovered worker type 'process' for work pool 'my-pool'.\nWorker 'ProcessWorker 65716280-96f8-420b-9300-7e94417f2673' started!\n

    In this case, Prefect automatically discovered the worker type from the work pool. To create a work pool and start a worker in one command, use the --type flag:

    prefect worker start -p \"my-pool\" --type \"process\"\n
    Worker 'ProcessWorker d24f3768-62a9-4141-9480-a056b9539a25' started!\n06:57:53.289 | INFO    | prefect.worker.process.processworker d24f3768-62a9-4141-9480-a056b9539a25 - Worker pool 'my-pool' created.\n

    In addition, workers can limit the number of flow runs they will start simultaneously with the --limit flag. For example, to limit a worker to five concurrent flow runs:

    prefect worker start --pool \"my-pool\" --limit 5\n
    ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#configuring-prefetch","title":"Configuring prefetch","text":"

    By default, the worker begins submitting flow runs a short time (10 seconds) before they are scheduled to run. This behavior allows time for the infrastructure to be created so that the flow run can start on time.

    In some cases, infrastructure will take longer than 10 seconds to start the flow run. The prefetch can be increased using the --prefetch-seconds option or the PREFECT_WORKER_PREFETCH_SECONDS setting.

    If this value is more than the amount of time it takes for the infrastructure to start, the flow run will wait until its scheduled start time.
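For example, to give slow-starting infrastructure a 60-second head start:

prefect config set PREFECT_WORKER_PREFETCH_SECONDS=60\n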

    ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#polling-for-work","title":"Polling for work","text":"

    Workers poll for work every 15 seconds by default. This interval is configurable in your profile settings with the PREFECT_WORKER_QUERY_SECONDS setting.

    ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#install-policy","title":"Install policy","text":"

The Prefect CLI can install the required package for Prefect-maintained worker types automatically. You can configure this behavior with the --install-policy option. The following are valid install policies:

• always: Always install the required package. Will update the required package to the most recent version if already installed.
• if-not-present: Install the required package if it is not already installed.
• never: Never install the required package.
• prompt: Prompt the user to choose whether to install the required package. This is the default install policy. If prefect worker start is run non-interactively, the prompt install policy will behave the same as never.
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#additional-resources","title":"Additional resources","text":"

    See how to daemonize a Prefect worker in this guide.

    For more information on overriding a work pool's job variables see this guide.

    ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"contributing/overview/","title":"Contributing","text":"

    Thanks for considering contributing to Prefect!

    ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#setting-up-a-development-environment","title":"Setting up a development environment","text":"

    First, download the source code and install an editable version of the Python package:

    # Clone the repository\ngit clone https://github.com/PrefectHQ/prefect.git\ncd prefect\n\n# We recommend using a virtual environment\n\npython -m venv .venv\nsource .venv/bin/activate\n\n# Install the package with development dependencies\n\npip install -e \".[dev]\"\n\n# Setup pre-commit hooks for required formatting\n\npre-commit install\n

    If you don't want to install the pre-commit hooks, you can manually install the formatting dependencies with:

    pip install $(./scripts/precommit-versions.py)\n

    You'll need to run black and ruff before a contribution can be accepted.

    After installation, you can run the test suite with pytest:

    # Run all the tests\npytest tests\n\n# Run a subset of tests\n\npytest tests/test_flows.py\n

    Building the Prefect UI

    If you intend to run a local Prefect server during development, you must first build the UI. See UI development for instructions.

    ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#prefect-code-of-conduct","title":"Prefect Code of Conduct","text":"","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#our-pledge","title":"Our Pledge","text":"

    In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

    ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#our-standards","title":"Our Standards","text":"

    Examples of behavior that contributes to creating a positive environment include:

    • Using welcoming and inclusive language
    • Being respectful of differing viewpoints and experiences
    • Gracefully accepting constructive criticism
    • Focusing on what is best for the community
    • Showing empathy towards other community members

    Examples of unacceptable behavior by participants include:

    • The use of sexualized language or imagery and unwelcome sexual attention or advances
    • Trolling, insulting/derogatory comments, and personal or political attacks
    • Public or private harassment
    • Publishing others' private information, such as a physical or electronic address, without explicit permission
    • Other conduct which could reasonably be considered inappropriate in a professional setting
    ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#our-responsibilities","title":"Our Responsibilities","text":"

    Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

    Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

    ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#scope","title":"Scope","text":"

    This Code of Conduct applies within all project spaces, and it also applies when an individual is representing the project or its community in public spaces. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

    ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#enforcement","title":"Enforcement","text":"

    Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting Chris White at chris@prefect.io. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

    Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

    ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#attribution","title":"Attribution","text":"

    This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

    For answers to common questions about this code of conduct, see https://www.contributor-covenant.org/faq

    ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#developer-tooling","title":"Developer tooling","text":"

    The Prefect CLI provides several helpful commands to aid development.

    Start all services with hot-reloading on code changes (requires UI dependencies to be installed):

    prefect dev start\n

    Start a Prefect API that reloads on code changes:

    prefect dev api\n

Start a Prefect agent that reloads on code changes:

    prefect dev agent\n
    ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#ui-development","title":"UI development","text":"

    Developing the Prefect UI requires that npm is installed.

    Start a development UI that reloads on code changes:

    prefect dev ui\n

    Build the static UI (the UI served by prefect server start):

    prefect dev build-ui\n
    ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#docs-development","title":"Docs Development","text":"

    Prefect uses mkdocs for the docs website and the mkdocs-material theme. While we use mkdocs-material-insiders for production, builds can still happen without the extra plugins. Deploy previews are available on pull requests, so you'll be able to browse the final look of your changes before merging.

    To build the docs:

    mkdocs build\n

    To serve the docs locally at http://127.0.0.1:8000/:

    mkdocs serve\n

    For additional mkdocs help and options:

    mkdocs --help\n

    We use the mkdocs-material theme. To add additional JavaScript or CSS to the docs, please see the theme documentation here.

    Internal developers can install the production theme by running:

    pip install -e git+https://github.com/PrefectHQ/mkdocs-material-insiders.git#egg=mkdocs-material\nmkdocs build # or mkdocs build --config-file mkdocs.insiders.yml if needed\n
    ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#kubernetes-development","title":"Kubernetes development","text":"

    Generate a manifest to deploy a development API to a local kubernetes cluster:

    prefect dev kubernetes-manifest\n

    To access the Prefect UI running in a Kubernetes cluster, use the kubectl port-forward command to forward a port on your local machine to an open port within the cluster. For example:

    kubectl port-forward deployment/prefect-dev 4200:4200\n

This forwards port 4200 on the default internal loopback IP for localhost to the Prefect server deployment.

    To tell the local prefect command how to communicate with the Prefect API running in Kubernetes, set the PREFECT_API_URL environment variable:

    export PREFECT_API_URL=http://localhost:4200/api\n

    Since you previously configured port forwarding for the localhost port to the Kubernetes environment, you\u2019ll be able to interact with the Prefect API running in Kubernetes when using local Prefect CLI commands.

    ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#adding-database-migrations","title":"Adding database migrations","text":"

    To make changes to a table, first update the SQLAlchemy model in src/prefect/server/database/orm_models.py. For example, if you wanted to add a new column to the flow_run table, you would add a new column to the FlowRun model:

    # src/prefect/server/database/orm_models.py\n\n@declarative_mixin\nclass ORMFlowRun(ORMRun):\n    \"\"\"SQLAlchemy model of a flow run.\"\"\"\n    ...\n    new_column = Column(String, nullable=True) # <-- add this line\n

    Next, you will need to generate new migration files. You must generate a new migration file for each database type. Migrations will be generated for whatever database type PREFECT_API_DATABASE_CONNECTION_URL is set to. See here for how to set the database connection URL for each database type.

    To generate a new migration file, run the following command:

    prefect server database revision --autogenerate -m \"<migration name>\"\n

    Try to make your migration name brief but descriptive. For example:

    • add_flow_run_new_column
    • add_flow_run_new_column_idx
    • rename_flow_run_old_column_to_new_column

    The --autogenerate flag will automatically generate a migration file based on the changes to the models.

    Always inspect the output of --autogenerate

    --autogenerate will generate a migration file based on the changes to the models. However, it is not perfect. Be sure to check the file to make sure it only includes the changes you want to make. Additionally, you may need to remove extra statements that were included and not related to your change.

    The new migration can be found in the src/prefect/server/database/migrations/versions/ directory. Each database type has its own subdirectory. For example, the SQLite migrations are stored in src/prefect/server/database/migrations/versions/sqlite/.

    After you have inspected the migration file, you can apply the migration to your database by running the following command:

    prefect server database upgrade -y\n

    Once you have successfully created and applied migrations for all database types, make sure to update MIGRATION-NOTES.md to document your additions.

    ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/style/","title":"Code style and practices","text":"

    Generally, we follow the Google Python Style Guide. This document covers sections where we differ or where additional clarification is necessary.

    ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#imports","title":"Imports","text":"

    A brief collection of rules and guidelines for how imports should be handled in this repository.

    ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#imports-in-__init__-files","title":"Imports in __init__ files","text":"

    Leave __init__ files empty unless exposing an interface. If you must expose objects to present a simpler API, please follow these rules.

    ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#exposing-objects-from-submodules","title":"Exposing objects from submodules","text":"

    If importing objects from submodules, the __init__ file should use a relative import. This is required for type checkers to understand the exposed interface.

    # Correct\nfrom .flows import flow\n
    # Wrong\nfrom prefect.flows import flow\n
    ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#exposing-submodules","title":"Exposing submodules","text":"

    Generally, submodules should not be imported in the __init__ file. Submodules should only be exposed when the module is designed to be imported and used as a namespaced object.

    For example, we do this for our schema and model modules because it is important to know if you are working with an API schema or database model, both of which may have similar names.

    import prefect.server.schemas as schemas\n\n# The full module is accessible now\nschemas.core.FlowRun\n

    If exposing a submodule, use a relative import as you would when exposing an object.

    # Correct\nfrom . import flows\n
    # Wrong\nimport prefect.flows\n
    ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#importing-to-run-side-effects","title":"Importing to run side-effects","text":"

Another use case for importing submodules is to perform global side-effects that occur when they are imported.

    Often, global side-effects on import are a dangerous pattern. Avoid them if feasible.

We currently have a couple of acceptable use cases for this:

    • To register dispatchable types, e.g. prefect.serializers.
    • To extend a CLI application e.g. prefect.cli.
    ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#imports-in-modules","title":"Imports in modules","text":"","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#importing-other-modules","title":"Importing other modules","text":"

    The from syntax should be reserved for importing objects from modules. Modules should not be imported using the from syntax.

    # Correct\nimport prefect.server.schemas  # use with the full name\nimport prefect.server.schemas as schemas  # use the shorter name\n
    # Wrong\nfrom prefect.server import schemas\n

    Unless in an __init__.py file, relative imports should not be used.

    # Correct\nfrom prefect.utilities.foo import bar\n
    # Wrong\nfrom .utilities.foo import bar\n

    Imports dependent on file location should never be used without explicit indication it is relative. This avoids confusion about the source of a module.

    # Correct\nfrom . import test\n
    # Wrong\nimport test\n
    ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#resolving-circular-dependencies","title":"Resolving circular dependencies","text":"

    Sometimes, we must defer an import and perform it within a function to avoid a circular dependency.

    ## This function in `settings.py` requires a method from the global `context` but the context\n## uses settings\ndef from_context():\n    from prefect.context import get_profile_context\n\n    ...\n

    Attempt to avoid circular dependencies. This often reveals overentanglement in the design.

    When performing deferred imports, they should all be placed at the top of the function.

    ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#with-type-annotations","title":"With type annotations","text":"

    If you are just using the imported object for a type signature, you should use the TYPE_CHECKING flag.

    # Correct\nfrom typing import TYPE_CHECKING\n\nif TYPE_CHECKING:\n    from prefect.server.schemas.states import State\n\ndef foo(state: \"State\"):\n    pass\n

    Note that usage of the type within the module will need quotes e.g. \"State\" since it is not available at runtime.

    ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#importing-optional-requirements","title":"Importing optional requirements","text":"

    We do not have a best practice for this yet. See the kubernetes, docker, and distributed implementations for now.

    ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#delaying-expensive-imports","title":"Delaying expensive imports","text":"

    Sometimes, imports are slow. We'd like to keep the prefect module import times fast. In these cases, we can lazily import the slow module by deferring import to the relevant function body. For modules that are consumed by many functions, the pattern used for optional requirements may be used instead.
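A minimal sketch of the pattern (the module and function names here are hypothetical, not part of the Prefect codebase):

def plot_results(data):\n    # Defer the slow import until the function is actually called so that\n    # importing prefect itself stays fast\n    import matplotlib.pyplot as plt\n\n    plt.plot(data)\n    plt.show()\n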

    ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#command-line-interface-cli-output-messages","title":"Command line interface (CLI) output messages","text":"

Upon executing a command that creates an object, the output message should offer:

• A short description of what the command just did.
• A bullet point list, rehashing user inputs, if possible.
• Next steps, like the next command to run, if applicable.
• Other relevant, pre-formatted commands that can be copied and pasted, if applicable.
• A new line before the first line and after the last line.

    Output Example:

    $ prefect work-queue create testing\n\nCreated work queue with properties:\n    name - 'abcde'\n    uuid - 940f9828-c820-4148-9526-ea8107082bda\n    tags - None\n    deployment_ids - None\n\nStart an agent to pick up flows from the created work queue:\n    prefect agent start -q 'abcde'\n\nInspect the created work queue:\n    prefect work-queue inspect 'abcde'\n

    Additionally:

    • Wrap generated arguments in apostrophes (') to ensure validity by suffixing format strings with !r.
    • Indent example commands, instead of wrapping in backticks (`).
    • Use placeholders if the example cannot be pre-formatted completely.
    • Capitalize placeholder labels and wrap them in less than (<) and greater than (>) signs.
    • Utilize textwrap.dedent to remove extraneous spacing for strings that are written with triple quotes (\"\"\").

    Placeholder Example:

    Create a work queue with tags:\n    prefect work-queue create '<WORK QUEUE NAME>' -t '<OPTIONAL TAG 1>' -t '<OPTIONAL TAG 2>'\n

    Dedent Example:

    from textwrap import dedent\n...\noutput_msg = dedent(\n    f\"\"\"\n    Created work queue with properties:\n        name - {name!r}\n        uuid - {result}\n        tags - {tags or None}\n        deployment_ids - {deployment_ids or None}\n\n    Start an agent to pick up flows from the created work queue:\n        prefect agent start -q {name!r}\n\n    Inspect the created work queue:\n        prefect work-queue inspect {name!r}\n    \"\"\"\n)\n

    ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#api-versioning","title":"API Versioning","text":"

    The Prefect client can be run separately from the Prefect orchestration server and communicate entirely via an API. Among other things, the Prefect client includes anything that runs task or flow code (e.g. agents and the Python client) or any consumer of Prefect metadata (e.g. the Prefect UI and CLI). The Prefect server stores this metadata and serves it via the REST API.

    Sometimes, we make breaking changes to the API (for good reasons). In order to check that a Prefect client is compatible with the API it's making requests to, every API call the client makes includes a three-component API_VERSION header with major, minor, and patch versions.

    For example, a request with the X-PREFECT-API-VERSION=3.2.1 header has a major version of 3, minor version 2, and patch version 1.
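
    As a rough illustration (this is not Prefect's client code), a request carrying such a header might look like the following, assuming a locally hosted API:

    import httpx\n\nAPI_VERSION = \"3.2.1\"  # illustrative value; the real constant lives in prefect.server.api.server\n\nresponse = httpx.get(\n    \"http://127.0.0.1:4200/api/health\",  # assumed local server URL\n    headers={\"X-PREFECT-API-VERSION\": API_VERSION},\n)\nprint(response.status_code)\n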

    This version header can be changed by modifying the API_VERSION constant in prefect.server.api.server.

    When making a breaking change to the API, we should consider if the change might be backwards compatible for clients, meaning that the previous version of the client would still be able to make calls against the updated version of the server code. This might happen if the changes are purely additive, such as adding a non-critical API route. In these cases, we should make sure to bump the patch version.

    In almost all other cases, we should bump the minor version, which denotes a non-backwards-compatible API change. We have reserved major version changes to denote backwards-compatible changes that are significant in some way, such as a major release milestone.

    ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/versioning/","title":"Versioning","text":"","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#understanding-version-numbers","title":"Understanding version numbers","text":"

    Versions are composed of three parts: MAJOR.MINOR.PATCH. For example, the version 2.5.0 has a major version of 2, a minor version of 5, and a patch version of 0.

    Occasionally, we will add a suffix to the version such as rc, a, or b. These indicate pre-release versions that users can opt-into installing to test functionality before it is ready for release.

    Each release will increase one of the version numbers. If we increase a number other than the patch version, the versions to the right of it will be reset to zero. For example, a minor release following 2.5.1 would be 2.6.0.

    ","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#prefects-versioning-scheme","title":"Prefect's versioning scheme","text":"

    Prefect will increase the major version when significant and widespread changes are made to the core product. It is very unlikely that the major version will change without extensive warning.

    Prefect will increase the minor version when:

    • Introducing a new concept that changes how Prefect can be used
    • Changing an existing concept in a way that fundamentally alters how it is used
    • Removing a deprecated feature

    Prefect will increase the patch version when:

    • Making enhancements to existing features
    • Fixing behavior in existing features
    • Adding new functionality to existing concepts
    • Updating dependencies
    ","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#breaking-changes-and-deprecation","title":"Breaking changes and deprecation","text":"

    A breaking change means that your code will need to change to use a new version of Prefect. We strive to avoid breaking changes in all releases.

    At times, Prefect will deprecate a feature. This means that a feature has been marked for removal in the future. When you use it, you may see warnings that it will be removed. A feature is deprecated when it will no longer be maintained. Frequently, a deprecated feature will have a new and improved alternative. Deprecated features will be retained for at least 3 minor version increases or 6 months, whichever is longer. We may retain deprecated features longer than this time period.

    Prefect will sometimes include changes to behavior to fix a bug. These changes are not categorized as breaking changes.

    ","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#client-compatibility-with-prefect","title":"Client compatibility with Prefect","text":"

    When running a Prefect server, you are in charge of ensuring the version is compatible with those of the clients that are using the server. Prefect aims to maintain backwards compatibility with old clients for each server release. In contrast, sometimes new clients cannot be used with an old server. The new client may expect the server to support functionality that it does not yet include. For this reason, we recommend that all clients are the same version as the server or older.

    For example, a client on 2.1.0 can be used with a server on 2.5.0. A client on 2.5.0 cannot be used with a server on 2.1.0.

    ","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#client-compatibility-with-cloud","title":"Client compatibility with Cloud","text":"

    Prefect Cloud targets compatibility with all versions of Prefect clients. If you encounter a compatibility issue, please file a bug report.

    ","tags":["versioning","semver"],"boost":2},{"location":"getting-started/installation/","title":"Installation","text":"

    Prefect requires Python 3.8 or newer.

    We recommend installing Prefect using a Python virtual environment manager such as pipenv, conda, or virtualenv/venv.

    You can use Prefect Cloud as your API server or host your own Prefect server instance backed by PostgreSQL. For development, you can use SQLite 3.24 or newer as your database.
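
    If you want to confirm which SQLite version is available to Python, a minimal check looks like this:

    import sqlite3\n\n# Prefect development with SQLite needs version 3.24 or newer\nprint(sqlite3.sqlite_version)\n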

    Prefect Cloud is a managed solution that provides strong scaling, performance, and security. Learn more about Prefect Cloud solutions for enterprises here.

    Windows and Linux requirements

    See Windows installation notes and Linux installation notes for details on additional installation requirements and considerations.

    ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#install-prefect","title":"Install Prefect","text":"

    The following sections describe how to install Prefect in your environment.

    ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#installing-the-latest-version","title":"Installing the latest version","text":"

    Prefect is published as a Python package. To install the latest release (or upgrade an existing Prefect installation) along with its Python dependencies, run the following command in your terminal:

    pip install -U prefect\n

    To install a specific Prefect version, specify the version number like this:

    pip install -U \"prefect==2.17.1\"\n

    See available release versions in the Prefect Release Notes.

    See our Contributing guide for instructions on installing Prefect for development and see the section below to install directly from the main branch.

    ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#checking-your-installation","title":"Checking your installation","text":"

    To confirm that Prefect was installed correctly, run the command prefect version in your terminal.

    prefect version\n

    You should see output similar to the following:

    Version:             2.17.1\nAPI version:         0.8.4\nPython version:      3.12.2\nGit commit:          d6bdb075\nBuilt:               Thu, Apr 11, 2024 6:58 PM\nOS/Arch:             darwin/arm64\nProfile:              local\nServer type:         ephemeral\nServer:\n  Database:          sqlite\n  SQLite version:    3.45.2\n
    ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#windows-installation-notes","title":"Windows installation notes","text":"

    You can install and run Prefect via Windows PowerShell, the Windows Command Prompt, or conda. After installation, you may need to manually add the Python local packages Scripts folder to your Path environment variable.

    The Scripts folder path looks something like this (the username and Python version may be different on your system):

    C:\\Users\\MyUserNameHere\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\Scripts\n

    Watch the pip install output messages for the Scripts folder path on your system.

    If you're using Windows Subsystem for Linux (WSL), see Linux installation notes.

    ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#linux-installation-notes","title":"Linux installation notes","text":"

    Linux is a popular operating system for running Prefect. If you are hosting your own Prefect server instance with a SQLite database, note that the SQLite version bundled with certain Linux distributions can be problematic. Compatible distributions include Ubuntu 22.04 LTS and Ubuntu 20.04 LTS.

    Alternatively, you can install SQLite on Red Hat Enterprise Linux (RHEL) or use the conda virtual environment manager to configure a compatible SQLite version.

    ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#using-a-self-signed-ssl-certificate","title":"Using a self-signed SSL certificate","text":"

    If you're using a self-signed SSL certificate, you need to configure your environment to trust the certificate. Add the certificate to your system bundle and point your tools to that bundle by setting the SSL_CERT_FILE environment variable.

    If the certificate is not part of your system bundle, set the PREFECT_API_TLS_INSECURE_SKIP_VERIFY setting to True to disable certificate verification altogether.

    Note: Disabling certificate validation is insecure and only suggested as an option for testing!
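
    As a minimal sketch (the bundle path is hypothetical), these settings can be exported from Python before Prefect makes any API calls:

    import os\n\n# Point the TLS stack at a bundle that includes your self-signed certificate\nos.environ[\"SSL_CERT_FILE\"] = \"/path/to/ca-bundle.pem\"\n\n# For testing only: skips certificate verification entirely\n# os.environ[\"PREFECT_API_TLS_INSECURE_SKIP_VERIFY\"] = \"True\"\n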

    ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#proxies","title":"Proxies","text":"

    Prefect supports communicating via proxies through environment variables. Whether you are using Prefect Cloud or hosting your own Prefect server instance, set HTTPS_PROXY and SSL_CERT_FILE in your environment, and the underlying network libraries will route Prefect\u2019s requests appropriately.

    Alternatively, the Prefect library will connect to the API via any proxies you have listed in the HTTP_PROXY or ALL_PROXY environment variables. You may also use the NO_PROXY environment variable to specify which hosts should not be sent through the proxy.
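
    For example, here is a minimal sketch (the proxy addresses are hypothetical) of setting these variables from Python before using Prefect:

    import os\n\nos.environ[\"HTTPS_PROXY\"] = \"http://proxy.internal:3128\"  # route Prefect API traffic through a proxy\nos.environ[\"NO_PROXY\"] = \"localhost,127.0.0.1\"  # bypass the proxy for local addresses\n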

    For more information about these environment variables, see the cURL documentation.

    ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#prefect-client-library","title":"prefect-client library","text":"

    The prefect-client library is a minimal installation of Prefect designed for interacting with Prefect Cloud or a remote self-hosted server instance.

    prefect-client enables a subset of Prefect's functionality with a smaller installation size, making it ideal for use in lightweight, resource-constrained, or ephemeral environments. It omits all CLI and server components found in the prefect library.

    Install the latest version with:

    pip install -U prefect-client\n
    ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#sqlite","title":"SQLite","text":"

    By default, a local Prefect server instance uses SQLite as the backing database. SQLite is not packaged with the Prefect installation. Most systems will already have SQLite installed, because it is typically bundled with Python.

    Note

    Note that in production we recommend using Prefect Cloud as your API server or hosting your own Prefect server instance backed by PostgreSQL.

    Info

    If you install the prefect-client library, which provides a limited subset of the full Prefect library's functionality, you do not need SQLite installed.

    ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#install-sqlite-on-rhel","title":"Install SQLite on RHEL","text":"

    To install an appropriate version of SQLite on Red Hat Enterprise Linux (RHEL), follow the instructions below:

    Expand for instructions Note that some RHEL instances have no C compiler, so you may need to check for and install `gcc` first:
    yum install gcc\n
    Download and extract the tarball for SQLite.
    wget https://www.sqlite.org/2022/sqlite-autoconf-3390200.tar.gz\ntar -xzf sqlite-autoconf-3390200.tar.gz\n
    Move to the extracted SQLite directory, then build and install SQLite.
    cd sqlite-autoconf-3390200/\n./configure\nmake\nmake install\n
    Add `LD_LIBRARY_PATH` to your profile.
    echo 'export LD_LIBRARY_PATH=\"/usr/local/lib\"' >> /etc/profile\n
    Restart your shell to register these changes. Now you can install Prefect using `pip`.
    pip3 install prefect\n
    ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#installing-unreleased-code","title":"Installing unreleased code","text":"

    To use the most up-to-date, unreleased Prefect code, you can install directly off the main GitHub branch:

    pip install -U git+https://github.com/PrefectHQ/prefect\n

    The main branch may not be stable

    Please be aware that this method installs unreleased code and may not be stable.

    ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#next-steps","title":"Next steps","text":"

    Now that you have Prefect installed and your environment configured, check out the Tutorial to get more familiar with Prefect.

    ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/quickstart/","title":"Quickstart","text":"

    Prefect is an orchestration and observability platform that empowers developers to build and scale code quickly, turning their Python scripts into resilient, recurring workflows.

    In this quickstart, you'll see how you can schedule your code on remote infrastructure and observe the state of your workflows. With Prefect, you can go from a Python script to a production-ready workflow that runs remotely in a few minutes.

    Let's get started!

    ","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#setup","title":"Setup","text":"

    Here's a basic script that fetches statistics about the main Prefect GitHub repository.

    import httpx\n\ndef get_repo_info():\n    url = \"https://api.github.com/repos/PrefectHQ/prefect\"\n    response = httpx.get(url)\n    repo = response.json()\n    print(\"PrefectHQ/prefect repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n\nif __name__ == \"__main__\":\n    get_repo_info()\n

    Let's make this script schedulable, observable, resilient, and capable of running anywhere.

    ","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#step-1-install-prefect","title":"Step 1: Install Prefect","text":"
    pip install -U prefect\n

    See the install guide for more detailed installation instructions, if needed.

    ","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#step-2-connect-to-prefects-api","title":"Step 2: Connect to Prefect's API","text":"

    Much of Prefect's functionality is backed by an API. The easiest way to get started is to use the API hosted by Prefect:

    1. Create a forever-free Prefect Cloud account or sign in at https://app.prefect.cloud/
    2. Use the prefect cloud login CLI command to log in to Prefect Cloud from your development environment
    prefect cloud login\n

    Choose Log in with a web browser and click the Authorize button in the browser window that opens. Your CLI is now authenticated with your Prefect Cloud account through a locally-stored API key that expires in 30 days.

    If you have any issues with browser-based authentication, see the Prefect Cloud docs to learn how to authenticate with a manually created API key.

    Self-hosted Prefect server instance

    If you would like to host a Prefect server instance on your own infrastructure, see the tutorial and select the \"Self-hosted\" tab. Note that you will need to both host your own server and run your flows on your own infrastructure.

    ","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#step-3-turn-your-function-into-a-prefect-flow","title":"Step 3: Turn your function into a Prefect flow","text":"

    The fastest way to get started with Prefect is to add a @flow decorator to your Python function. Flows are the core observable, deployable units in Prefect and are the primary entrypoint to orchestrated work.

    my_gh_workflow.py
    import httpx   # an HTTP client library and dependency of Prefect\nfrom prefect import flow, task\n\n\n@task(retries=2)\ndef get_repo_info(repo_owner: str, repo_name: str):\n    \"\"\"Get info about a repo - will retry twice after failing\"\"\"\n    url = f\"https://api.github.com/repos/{repo_owner}/{repo_name}\"\n    api_response = httpx.get(url)\n    api_response.raise_for_status()\n    repo_info = api_response.json()\n    return repo_info\n\n\n@task\ndef get_contributors(repo_info: dict):\n    \"\"\"Get contributors for a repo\"\"\"\n    contributors_url = repo_info[\"contributors_url\"]\n    response = httpx.get(contributors_url)\n    response.raise_for_status()\n    contributors = response.json()\n    return contributors\n\n\n@flow(log_prints=True)\ndef repo_info(repo_owner: str = \"PrefectHQ\", repo_name: str = \"prefect\"):\n    \"\"\"\n    Given a GitHub repository, logs the number of stargazers\n    and contributors for that repo.\n    \"\"\"\n    repo_info = get_repo_info(repo_owner, repo_name)\n    print(f\"Stars \ud83c\udf20 : {repo_info['stargazers_count']}\")\n\n    contributors = get_contributors(repo_info)\n    print(f\"Number of contributors \ud83d\udc77: {len(contributors)}\")\n\n\nif __name__ == \"__main__\":\n    repo_info()\n

    Note that we added a log_prints=True argument to the @flow decorator so that print statements within the flow-decorated function will be logged. Also note that our flow calls two tasks, which are defined by the @task decorator. Tasks are the smallest unit of observed and orchestrated work in Prefect.

    python my_gh_workflow.py\n

    Now when we run this script, Prefect will automatically track the state of the flow run and log the output where we can see it in the UI and CLI.

    14:28:31.099 | INFO    | prefect.engine - Created flow run 'energetic-panther' for flow 'repo-info'\n14:28:31.100 | INFO    | Flow run 'energetic-panther' - View at https://app.prefect.cloud/account/123/workspace/abc/flow-runs/flow-run/xyz\n14:28:32.178 | INFO    | Flow run 'energetic-panther' - Created task run 'get_repo_info-0' for task 'get_repo_info'\n14:28:32.179 | INFO    | Flow run 'energetic-panther' - Executing 'get_repo_info-0' immediately...\n14:28:32.584 | INFO    | Task run 'get_repo_info-0' - Finished in state Completed()\n14:28:32.599 | INFO    | Flow run 'energetic-panther' - Stars \ud83c\udf20 : 13609\n14:28:32.682 | INFO    | Flow run 'energetic-panther' - Created task run 'get_contributors-0' for task 'get_contributors'\n14:28:32.682 | INFO    | Flow run 'energetic-panther' - Executing 'get_contributors-0' immediately...\n14:28:33.118 | INFO    | Task run 'get_contributors-0' - Finished in state Completed()\n14:28:33.134 | INFO    | Flow run 'energetic-panther' - Number of contributors \ud83d\udc77: 30\n14:28:33.255 | INFO    | Flow run 'energetic-panther' - Finished in state Completed('All states completed.')\n

    You should see similar output in your terminal, with your own randomly generated flow run name and your own Prefect Cloud account URL.

    ","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#step-4-choose-a-remote-infrastructure-location","title":"Step 4: Choose a remote infrastructure location","text":"

    Let's get this workflow running on infrastructure other than your local machine! We can tell Prefect where we want to run our workflow by creating a work pool.

    We can have Prefect Cloud run our flow code for us with a Prefect Managed work pool.

    Let's create a Prefect Managed work pool so that Prefect can run our flows for us. We can create a work pool in the UI or from the CLI. Let's use the CLI:

    prefect work-pool create my-managed-pool --type prefect:managed\n

    You should see a message in the CLI that your work pool was created. Feel free to check out your new work pool on the Work Pools page in the UI.

    ","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#step-5-make-your-code-schedulable","title":"Step 5: Make your code schedulable","text":"

    We have a flow function and we have a work pool where we can run our flow remotely. Let's package both of these things, along with the location for where to find our flow code, into a deployment so that we can schedule our workflow to run remotely.

    Deployments elevate flows to remotely configurable entities that have their own API.

    Let's make a script to build a deployment with the name my-first-deployment and set it to run on a schedule.

    create_deployment.py
    from prefect import flow\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        source=\"https://github.com/discdiver/demos.git\",\n        entrypoint=\"my_gh_workflow.py:repo_info\",\n    ).deploy(\n        name=\"my-first-deployment\",\n        work_pool_name=\"my-managed-pool\",\n        cron=\"0 1 * * *\",\n    )\n

    Run the script to create the deployment on the Prefect Cloud server. Note that the cron argument will schedule the deployment to run at 1am every day.

    python create_deployment.py\n

    You should see a message that your deployment was created, similar to the one below.

    Successfully created/updated all deployments!\n______________________________________________________\n|                    Deployments                     |  \n______________________________________________________\n|    Name                       |  Status  | Details |\n______________________________________________________\n| repo-info/my-first-deployment | applied  |         |\n______________________________________________________\n\nTo schedule a run for this deployment, use the following command:\n\n       $ prefect deployment run 'repo-info/my-first-deployment'\n\nYou can also run your flow via the Prefect UI: <https://app.prefect.cloud/account/abc/workspace/123/deployments/deployment/xyz>\n

    Head to the Deployments page of the UI to check it out.

    Code storage options

    You can store your flow code in nearly any location. You just need to tell Prefect where to find it. In this example, we use a GitHub repository, but you could bake your code into a Docker image or store it in cloud provider storage. Read more in this guide.

    Push your code to GitHub

    In the example above, we use an existing GitHub repository. If you make changes to the flow code, you will need to push those changes to your own GitHub account and update the source argument to point to your repository.

    You can trigger a manual run of this deployment by either clicking the Run button in the top right of the deployment page in the UI, or by running the following CLI command in your terminal:

    prefect deployment run 'repo-info/my-first-deployment'\n

    The deployment is configured to run on a Prefect Managed work pool, so Prefect will automatically spin up the infrastructure to run this flow. It may take a minute to set up the Docker image in which the flow will run.

    After a minute or so, you should see the flow run graph and logs on the Flow Run page in the UI.

    Remove the schedule

    Click the Remove button in the top right of the Deployment page so that the workflow is no longer scheduled to run once per day.

    ","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#next-steps","title":"Next steps","text":"

    You've seen how to move from a Python script to a scheduled, observable, remotely orchestrated workflow with Prefect.

    To learn how to run flows on your own infrastructure, customize the Docker image where your flow runs, and gain more orchestration and observation benefits, check out the tutorial.

    Need help?

    Get your questions answered by a Prefect Product Advocate! Book a meeting

    Happy building!

    ","tags":["getting started","quickstart","overview"],"boost":2},{"location":"guides/","title":"How-to Guides","text":"

    This section of the documentation contains how-to guides for common workflows and use cases.

    ","tags":["guides","how to"],"boost":2},{"location":"guides/#development","title":"Development","text":"Title Description Hosting Host your own Prefect server instance. Profiles & Settings Configure Prefect and save your settings. Testing Easily test your workflows. Global Concurrency Limits Limit flow runs. Runtime Context Enable a flow to access metadata about itself and its context when it runs. Variables Store and retrieve configuration data. Prefect Client Use PrefectClient to interact with the API server. Interactive Workflows Create human-in-the-loop workflows by pausing flow runs for input. Automations Configure actions that Prefect executes automatically based on trigger conditions. Webhooks Receive, observe, and react to events from other systems. Terraform Provider Use the Terraform Provider for Prefect Cloud for infrastructure as code. CI/CD Use CI/CD with Prefect. Specifying Upstream Dependencies Run tasks in a desired order. Third-party Secrets Use credentials stored in a secrets manager in your workflows. Prefect Recipes Common, extensible examples for setting up Prefect.","tags":["guides","how to"],"boost":2},{"location":"guides/#execution","title":"Execution","text":"Title Description Docker Deploy flows with Docker containers. State Change Hooks Execute code in response to state changes. Dask and Ray Scale your flows with parallel computing frameworks. Read and Write Data Read and write data to and from cloud provider storage. Big Data Handle large data with Prefect. Logging Configure Prefect's logger and aggregate logs from other tools. Troubleshooting Identify and resolve common issues with Prefect. Managed Execution Let prefect run your code. Shell Commands Shell commands as Prefect flows","tags":["guides","how to"],"boost":2},{"location":"guides/#work-pools","title":"Work Pools","text":"Title Description Deploying Flows to Work Pools and Workers Learn how to run you code with dynamic infrastructure. Upgrade from Agents to Workers Why and how to upgrade from agents to workers. Flow Code Storage Where to store your code for deployments. Kubernetes Deploy flows on Kubernetes. Serverless Push Work Pools Run flows on serverless infrastructure without a worker. Serverless Work Pools with Workers Run flows on serverless infrastructure with a worker. Daemonize Processes Set up a systemd service to run a Prefect worker or .serve process. Custom Workers Develop your own worker type. Overriding Work Pool Job Variables Override job variables for a work pool for a given deployment.

    Need help?

    Get your questions answered by a Prefect Product Advocate! Book a Meeting

    ","tags":["guides","how to"],"boost":2},{"location":"guides/automations/","title":"Using Automations for Dynamic Responses","text":"

    From the Automations concept page, we saw what an automation can do and how to configure one within the UI.

    In this guide, we will showcase the following common use cases:

    • Create a simple notification automation in just a few UI clicks
    • Build upon an event-based automation
    • Combine into a multi-layered responsive deployment pattern

    Available only on Prefect Cloud

    Automations are a Prefect Cloud feature.\n
    ","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#prerequisites","title":"Prerequisites","text":"

    Please have the following before exploring the guide:

    • Python installed
    • Prefect installed (follow the installation guide)
    • Authenticated to a Prefect Cloud workspace
    • A work pool set up to handle the deployments
    ","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#creating-the-example-script","title":"Creating the example script","text":"

    Automations allow you to take actions in response to triggering events recorded by Prefect.

    For example, let's try to grab data from an API and send a notification based on the end state.

    We can start by pulling hypothetical user data from an endpoint and then performing data cleaning and transformations.

    Let's create a simple extract method that pulls the data from a random user data generator endpoint.

    from prefect import flow, task, get_run_logger\nimport requests\nimport json\n\n@task\ndef fetch(url: str):\n    logger = get_run_logger()\n    response = requests.get(url)\n    raw_data = response.json()\n    logger.info(f\"Raw response: {raw_data}\")\n    return raw_data\n\n@task\ndef clean(raw_data: dict):\n    print(raw_data.get('results')[0])\n    results = raw_data.get('results')[0]\n    logger = get_run_logger()\n    logger.info(f\"Cleaned results: {results}\")\n    return results['name']\n\n@flow\ndef build_names(num: int = 10):\n    df = []\n    url = \"https://randomuser.me/api/\"\n    logger = get_run_logger()\n    copy = num\n    while num != 0:\n        raw_data = fetch(url)\n        df.append(clean(raw_data))\n        num -= 1\n    logger.info(f\"Built {copy} names: {df}\")\n    return df\n\nif __name__ == \"__main__\":\n    list_of_names = build_names()\n

    The data cleaning workflow has visibility into each step, and we are sending a list of names to our next step of our pipeline.

    ","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#create-notification-block-within-the-ui","title":"Create notification block within the UI","text":"

    Now let's try to send a notification based on a completed state outcome. We can configure a notification to be sent so that we know when to look into our workflow logic.

    1. Prior to creating the automation, let's confirm the notification location. We have to create a notification block to help define where the notification will be sent.

    2. Let's navigate to the blocks page on the UI, and click into creating an email notification block.

    3. Now that we created a notification block, we can go to the automations page to create our first automation.

    4. Next, select the trigger type. In this case, let's use a flow completion.

    5. Finally, let's create the actions that run once the trigger fires. In this case, let's create a notification to be sent out to showcase the completion.

    6. Now the automation is ready to be triggered from a flow run completion. Let's run the file locally and see that the notification is sent to our inbox after the completion. It may take a few minutes for the notification to arrive.

    No deployment created

    Keep in mind that we did not need to create a deployment to trigger this automation: the state outcome of a local flow run was enough to trigger the notification.

    Now that you've seen how to create an email notification from a flow run completion, let's see how we can kick off a deployment run in response to an event.

    ","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#event-based-deployment-automation","title":"Event-based deployment automation","text":"

    We can create an automation that kicks off a deployment instead of a notification. Let's explore how to create this automation programmatically using Prefect's REST API.

    See the REST API documentation as a reference for interacting with the Prefect Cloud automation endpoints.

    Let's create a deployment where we can kick off some work based on how long a flow is running. For example, if the build_names flow is taking too long to execute, we can kick off a deployment of the same build_names flow, but replace the count value with a lower number to speed up completion. You can create a deployment with a prefect.yaml file or a Python file that uses flow.deploy.


    Create a prefect.yaml file like this one for our flow build_names:

      # Welcome to your prefect.yaml file! You can use this file for storing and managing\n  # configuration for deploying your flows. We recommend committing this file to source\n  # control along with your flow code.\n\n  # Generic metadata about this project\n  name: automations-guide\n  prefect-version: 2.13.1\n\n  # build section allows you to manage and build docker images\n  build: null\n\n  # push section allows you to manage if and how this project is uploaded to remote locations\n  push: null\n\n  # pull section allows you to provide instructions for cloning this project in remote locations\n  pull:\n  - prefect.deployments.steps.set_working_directory:\n      directory: /Users/src/prefect/Playground/automations-guide\n\n  # the deployments section allows you to provide configuration for deploying flows\n  deployments:\n  - name: deploy-build-names\n    version: null\n    tags: []\n    description: null\n    entrypoint: test-automations.py:build_names\n    parameters: {}\n    work_pool:\n      name: tutorial-process-pool\n      work_queue_name: null\n      job_variables: {}\n    schedule: null\n

    To follow a more Python-based approach to creating a deployment, you can use flow.deploy as in the example below.

    # .deploy only needs a name, a valid work pool,\n# and a reference to where the flow code exists\n\nif __name__ == \"__main__\":\n    build_names.deploy(\n        name=\"deploy-build-names\",\n        work_pool_name=\"tutorial-process-pool\",\n        image=\"my_registry/my_image:my_image_tag\",\n    )\n

    Now let's grab our deployment_id from this deployment and embed it in our automation. There are many ways to obtain the deployment_id, but the CLI is a quick way to see all of your deployment IDs.

    Find deployment_id from the CLI

    The quickest way to see the IDs associated with your deployments is to run prefect deployment ls in an authenticated command prompt. The output lists the IDs associated with all of your deployments.

    prefect deployment ls\n                                          Deployments                                           \n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name                                                  \u2503 ID                                   \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 Extract islands/island-schedule                       \u2502 d9d7289c-7a41-436d-8313-80a044e61532 \u2502\n\u2502 build-names/deploy-build-names                        \u2502 8b10a65e-89ef-4c19-9065-eec5c50242f4 \u2502\n\u2502 ride-duration-prediction-backfill/backfill-deployment \u2502 76dc6581-1773-45c5-a291-7f864d064c57 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n
    We can programmatically create the automation with a POST call to the Prefect Cloud API. Ensure you have your api_key, account_id, and workspace_id.

    def create_event_driven_automation():\n    api_url = f\"https://api.prefect.cloud/api/accounts/{account_id}/workspaces/{workspace_id}/automations/\"\n    data = {\n    \"name\": \"Event Driven Redeploy\",\n    \"description\": \"Programmatically created an automation to redeploy a flow based on an event\",\n    \"enabled\": \"true\",\n    \"trigger\": {\n    \"after\": [\n        \"string\"\n    ],\n    \"expect\": [\n        \"prefect.flow-run.Running\"\n    ],\n    \"for_each\": [\n        \"prefect.resource.id\"\n    ],\n    \"posture\": \"Proactive\",\n    \"threshold\": 30,\n    \"within\": 0\n    },\n    \"actions\": [\n    {\n        \"type\": \"run-deployment\",\n        \"source\": \"selected\",\n        \"deployment_id\": \"YOUR-DEPLOYMENT-ID\", \n        \"parameters\": \"10\"\n    }\n    ],\n    \"owner_resource\": \"string\"\n        }\n\n    headers = {\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"}\n    response = requests.post(api_url, headers=headers, json=data)\n\n    print(response.json())\n    return response.json()\n

    After running this function, you will see the changes that came from the POST request in the UI. Keep in mind that the context will be \"custom\" in the UI.

    Let's run the underlying flow and see the deployment get kicked off after 30 seconds have elapsed. This will result in a new flow run of build_names, and we can see this new deployment run get initiated with the custom parameters we outlined above.

    In a few quick changes, we are able to programmatically create an automation that deploys workflows with custom parameters.

    ","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#using-an-underlying-yaml-file","title":"Using an underlying .yaml file","text":"

    We can extend this idea one step further by maintaining our own .yaml version of the automation and registering that file with the UI. This simplifies the automation's requirements by declaring it in its own .yaml file and then registering that file with the API.

    Let's start by creating the .yaml file that will house the automation requirements. Here is how it would look:

    name: Cancel long running flows\ndescription: Cancel any flow run after an hour of execution\ntrigger:\n  match:\n    \"prefect.resource.id\": \"prefect.flow-run.*\"\n  match_related: {}\n  after:\n    - \"prefect.flow-run.Failed\"\n  expect:\n    - \"prefect.flow-run.*\"\n  for_each:\n    - \"prefect.resource.id\"\n  posture: \"Proactive\"\n  threshold: 1\n  within: 30\nactions:\n  - type: \"cancel-flow-run\"\n

    We can then have a helper function that applies this YAML file with the REST API function.

    import yaml\n\nfrom utils import post, put\n\ndef create_or_update_automation(path: str = \"automation.yaml\"):\n    \"\"\"Create or update an automation from a local YAML file\"\"\"\n    # Load the definition\n    with open(path, \"r\") as fh:\n        payload = yaml.safe_load(fh)\n\n    # Find existing automations by name\n    automations = post(\"/automations/filter\")\n    existing_automation = [a[\"id\"] for a in automations if a[\"name\"] == payload[\"name\"]]\n    automation_exists = len(existing_automation) > 0\n\n    # Create or update the automation\n    if automation_exists:\n        print(f\"Automation '{payload['name']}' already exists and will be updated\")\n        put(f\"/automations/{existing_automation[0]}\", payload=payload)\n    else:\n        print(f\"Creating automation '{payload['name']}'\")\n        post(\"/automations/\", payload=payload)\n\nif __name__ == \"__main__\":\n    create_or_update_automation()\n

    You can find a complete repo with these API examples in this GitHub repository.

    In this example, we created the automation by registering the .yaml file with a helper function. This offers another way to create an automation.

    ","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#custom-webhook-kicking-off-an-automation","title":"Custom webhook kicking off an automation","text":"

    We can use webhooks to expose the events API, which allows us to extend the functionality of deployments and respond to changes in our workflow in a few easy steps.

    By exposing a webhook endpoint, we can kick off workflows that can trigger deployments - all from a simple event created from an HTTP request.

    Let's create a webhook within the UI. Here is the webhook we can use to create these dynamic events.

    {\n    \"event\": \"model-update\",\n    \"resource\": {\n        \"prefect.resource.id\": \"product.models.{{ body.model_id}}\",\n        \"prefect.resource.name\": \"{{ body.friendly_name }}\",\n        \"run_count\": \"{{body.run_count}}\"\n    }\n}\n
    From a simple input, we can easily create an exposed webhook endpoint.

    Each webhook will correspond to a custom event created, where you can react to it downstream with a separate deployment or automation.

    For example, we can create a curl request that sends the endpoint information such as a run count for our deployment.

    curl -X POST https://api.prefect.cloud/hooks/34iV2SFke3mVa6y5Y-YUoA -d \"model_id=adhoc\" -d \"run_count=10\" -d \"friendly_name=test-user-input\"\n
    From here, we can make a webhook that pulls in the parameters from the curl command and then kicks off a deployment that uses those parameters.

    Let's go into the event feed, where we can automate straight from this event.

    This allows us to create automations that respond to these webhook events. With a few clicks in the UI, we can associate an external process with the Prefect events API and trigger downstream deployments.

    In the next section, we will explore event triggers that automate the kickoff of a deployment run.

    ","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#using-triggers","title":"Using triggers","text":"

    Let's take this idea one step further by creating a deployment that will be triggered when a flow run takes longer than expected. We can take advantage of Prefect's Marvin library, which uses an LLM to classify our data. Marvin is great at embedding data science and data analysis applications within your pre-existing data engineering workflows. In this case, we can use Marvin's AI functions to help make our dataset more information-rich.

    Install Marvin with pip install marvin and set your OpenAI API key as shown here.

    We can add a trigger to run a deployment in response to a specific event.

    Let's create an example with Marvin's AI functions. We will take in a pandas DataFrame and use the AI function to analyze it.

    Here is an example of pulling in that data and classifying it with Marvin AI. We can create dummy data based on classifications we have already created.

    from marvin import ai_fn\nfrom prefect import flow\nfrom prefect.artifacts import create_table_artifact\nimport pandas as pd\n\n@ai_fn\ndef generate_synthetic_user_data(build_of_names: list[dict]) -> list:\n    \"\"\"\n    Generate additional data for userID (numerical values with 6 digits), location, and timestamp as separate columns and append the data onto 'build_of_names'. Make userID the first column\n    \"\"\"\n\n@flow\ndef create_fake_user_dataset(df):\n    artifact_df = generate_synthetic_user_data(df)\n    print(artifact_df)\n\n    create_table_artifact(\n        key=\"fake-user-data\",\n        table=artifact_df,\n        description=\"Dataset that is comprised of a mix of autogenerated data based on user data\"\n    )\n\nif __name__ == \"__main__\":\n    create_fake_user_dataset([])  # pass in the list of names built earlier, e.g. by build_names()\n

    Let's kick off a deployment with a trigger defined in a prefect.yaml file. We will specify that we want to trigger the deployment when the flow run stays in a Running state for longer than 30 seconds.

    # Welcome to your prefect.yaml file! You can use this file for storing and managing\n# configuration for deploying your flows. We recommend committing this file to source\n# control along with your flow code.\n\n# Generic metadata about this project\nname: automations-guide\nprefect-version: 2.13.1\n\n# build section allows you to manage and build docker images\nbuild: null\n\n# push section allows you to manage if and how this project is uploaded to remote locations\npush: null\n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect.deployments.steps.set_working_directory:\n    directory: /Users/src/prefect/Playground/marvin-extension\n\n# the deployments section allows you to provide configuration for deploying flows\ndeployments:\n- name: create-fake-user-dataset\n  triggers:\n    - enabled: true\n      match:\n        prefect.resource.id: \"prefect.flow-run.*\"\n      after: \"prefect.flow-run.Running\"\n      expect: []\n      for_each: [\"prefect.resource.id\"]\n      parameters:\n        param_1: 10\n      posture: \"Proactive\"\n  version: null\n  tags: []\n  description: null\n  entrypoint: marvin-extension.py:create_fake_user_dataset\n  parameters: {}\n  work_pool:\n    name: tutorial-process-pool\n    work_queue_name: null\n    job_variables: {}\n  schedule: null\n

    ","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#next-steps","title":"Next steps","text":"

    You've seen how to create automations via the UI, the REST API, and triggers defined in a prefect.yaml deployment definition.

    To learn more about events that can act as automation triggers, see the events docs. To learn more about event webhooks in particular, see the webhooks guide.

    ","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/big-data/","title":"Big Data with Prefect","text":"

    In this guide you'll learn tips for working with large amounts of data in Prefect.

    Big data doesn't have a widely accepted, precise definition. In this guide, we'll discuss methods to reduce the processing time or memory utilization of Prefect workflows, without editing your Python code.

    ","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#optimizing-your-python-code-with-prefect-for-big-data","title":"Optimizing your Python code with Prefect for big data","text":"

    Depending upon your needs, you may want to optimize your Python code for speed, memory, compute, or disk space.

    Prefect provides several options that we'll explore in this guide:

    1. Remove task introspection with quote to save time running your code.
    2. Write task results to cloud storage such as S3 using a block to save memory.
    3. Save data to disk within a flow rather than using results.
    4. Cache task results to save time and compute.
    5. Compress results written to disk to save space.
    6. Use a task runner for parallelizable operations to save time.
    ","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#remove-task-introspection","title":"Remove task introspection","text":"

    When a task is called from a flow, each argument is introspected by Prefect, by default. To speed up your flow runs, you can disable this behavior for a task by wrapping the argument using quote.

    To demonstrate, let's use a basic example that extracts and transforms some New York taxi data.

    et_quote.py
    from prefect import task, flow\nfrom prefect.utilities.annotations import quote\nimport pandas as pd\n\n\n@task\ndef extract(url: str):\n    \"\"\"Extract data\"\"\"\n    df_raw = pd.read_parquet(url)\n    print(df_raw.info())\n    return df_raw\n\n\n@task\ndef transform(df: pd.DataFrame):\n    \"\"\"Basic transformation\"\"\"\n    df[\"tip_fraction\"] = df[\"tip_amount\"] / df[\"total_amount\"]\n    print(df.info())\n    return df\n\n\n@flow(log_prints=True)\ndef et(url: str):\n    \"\"\"ET pipeline\"\"\"\n    df_raw = extract(url)\n    df = transform(quote(df_raw))\n\n\nif __name__ == \"__main__\":\n    url = \"https://d37ci6vzurychx.cloudfront.net/trip-data/yellow_tripdata_2023-09.parquet\"\n    et(url)\n

    Introspection can take significant time when the object being passed is a large collection, such as dictionary or DataFrame, where each element needs to be visited. Note that using quote reduces execution time at the expense of disabling task dependency tracking for the wrapped object.

    ","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#write-task-results-to-cloud-storage","title":"Write task results to cloud storage","text":"

    By default, the results of task runs are stored in memory in your execution environment. This behavior makes flow runs fast for small data, but can be problematic for large data. You can save memory by writing results to disk. In production, you'll generally want to write results to cloud provider storage such as AWS S3. Prefect lets you use a storage block from a Prefect cloud integration library such as prefect-aws to save your configuration information. Learn more about blocks here.

    Install the relevant library, register the block with the server, and create your storage block. Then you can reference the block in your flow like this:

    ...\nfrom prefect_aws.s3 import S3Bucket\n\nmy_s3_block = S3Bucket.load(\"MY_BLOCK_NAME\")\n\n...\n@task(result_storage=my_s3_block)\n

    Now the result of the task will be written to S3, rather than stored in memory.

    ","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#save-data-to-disk-within-a-flow","title":"Save data to disk within a flow","text":"

    To save memory and time with big data, you don't need to pass results between tasks at all. Instead, you can write and read data to disk directly in your flow code. Prefect has integration libraries for each of the major cloud providers. Each library contains blocks with methods that make it convenient to read and write data to and from cloud object storage. The moving data guide has step-by-step examples for each cloud provider.
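
    Here is a minimal sketch of the idea (the URL and scratch path are hypothetical): write the intermediate data to disk inside the flow instead of returning a large object from a task.

    import pandas as pd\nfrom prefect import flow\n\n\n@flow\ndef et_to_disk(url: str, scratch_path: str = \"/tmp/raw.parquet\"):\n    \"\"\"Read raw data, persist it to disk, and re-read only what is needed\"\"\"\n    df_raw = pd.read_parquet(url)\n    df_raw.to_parquet(scratch_path)  # persisted to disk rather than held in memory between steps\n    df = pd.read_parquet(scratch_path, columns=[\"tip_amount\", \"total_amount\"])\n    print(df.info())\n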

    ","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#cache-task-results","title":"Cache task results","text":"

    Caching allows you to avoid re-running tasks when doing so is unnecessary. Caching can save you time and compute. Note that caching requires task result persistence. Caching is discussed in detail in the tasks concept page.
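
    For example, here is a minimal sketch of a cached task using Prefect's task_input_hash cache key function (the one-hour expiration window is arbitrary):

    from datetime import timedelta\n\nfrom prefect import task\nfrom prefect.tasks import task_input_hash\n\n\n@task(\n    cache_key_fn=task_input_hash,\n    cache_expiration=timedelta(hours=1),\n    persist_result=True,  # caching requires result persistence\n)\ndef expensive_transform(x: int) -> int:\n    return x * 2  # reuses the cached result for repeated calls with the same input within the hour\n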

    ","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#compress-results-written-to-disk","title":"Compress results written to disk","text":"

    If you're using Prefect's task result persistence, you can save disk space by compressing the results. You just need to prefix the result serializer type with compressed/ like this:

    @task(result_serializer=\"compressed/json\")\n

    Read about compressing results with Prefect for more details. The tradeoff of using compression is that it takes time to compress and decompress the data.

    ","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#use-a-task-runner-for-parallelizable-operations","title":"Use a task runner for parallelizable operations","text":"

    Prefect's task runners allow you to use the Dask and Ray Python libraries to run tasks in parallel and distributed across multiple machines. This can save you time and compute when operating on large data structures. See the guide to working with Dask and Ray Task Runners for details.
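
    As a brief sketch (assuming the prefect-dask integration is installed), submitting tasks to a Dask task runner lets independent invocations run in parallel:

    from prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n\n@task\ndef double(x: int) -> int:\n    return x * 2\n\n\n@flow(task_runner=DaskTaskRunner())\ndef parallel_doubles(numbers: list[int]) -> list[int]:\n    futures = [double.submit(n) for n in numbers]  # submitted tasks run in parallel on Dask workers\n    return [future.result() for future in futures]\n\n\nif __name__ == \"__main__\":\n    parallel_doubles(list(range(10)))\n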

    ","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/ci-cd/","title":"CI/CD With Prefect","text":"

    Many organizations deploy Prefect workflows via their CI/CD process. Each organization has their own unique CI/CD setup, but a common pattern is to use CI/CD to manage Prefect deployments. Combining Prefect's deployment features with CI/CD tools enables efficient management of flow code updates, scheduling changes, and container builds. This guide uses GitHub Actions to implement a CI/CD process, but these concepts are generally applicable across many CI/CD tools.

    Note that Prefect's primary ways for creating deployments, a .deploy flow method or a prefect.yaml configuration file, are both designed with building and pushing images to a Docker registry in mind.

    ","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#getting-started-with-github-actions-and-prefect","title":"Getting started with GitHub Actions and Prefect","text":"

    In this example, you'll write a GitHub Actions workflow that will run each time you push to your repository's main branch. This workflow will build and push a Docker image containing your flow code to Docker Hub, then deploy the flow to Prefect Cloud.

    ","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#repository-secrets","title":"Repository secrets","text":"

    Your CI/CD process must be able to authenticate with Prefect in order to deploy flows.

    To deploy flows securely and non-interactively in your CI/CD process, save your PREFECT_API_URL and PREFECT_API_KEY as secrets in your repository's settings so they can be accessed in your CI/CD runner's environment without being exposed in any scripts or configuration files.

    In this scenario, deploying flows involves building and pushing Docker images, so add DOCKER_USERNAME and DOCKER_PASSWORD as secrets to your repository as well.

    You can create secrets for GitHub Actions in your repository under Settings -> Secrets and variables -> Actions -> New repository secret.
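
    If you prefer the command line, the GitHub CLI can create the same repository secrets; each command below prompts for the secret's value (this is an aside, not part of the original example):

    gh secret set PREFECT_API_URL\ngh secret set PREFECT_API_KEY\ngh secret set DOCKER_USERNAME\ngh secret set DOCKER_PASSWORD\n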

    ","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#writing-a-github-workflow","title":"Writing a GitHub workflow","text":"

    To deploy your flow via GitHub Actions, you'll need a workflow YAML file. GitHub will look for workflow YAML files in the .github/workflows/ directory in the root of your repository. In their simplest form, GitHub workflow files are made up of triggers and jobs.

    The on: trigger is set to run the workflow each time a push occurs on the main branch of the repository.

    The deploy job comprises four steps:

    • Checkout clones your repository into the GitHub Actions runner so you can reference files or run scripts from your repository in later steps.
    • Log in to Docker Hub authenticates to DockerHub so your image can be pushed to the Docker registry in your DockerHub account. docker/login-action is an existing GitHub action maintained by Docker. with: passes values into the Action, similar to passing parameters to a function.
    • Setup Python installs your selected version of Python.
    • Prefect Deploy installs the dependencies used in your flow, then deploys your flow. env: makes the PREFECT_API_KEY and PREFECT_API_URL secrets from your repository available as environment variables during this step's execution.

    For reference, the examples below can be found on their respective branches of this repository.

    Using .deploy:
    .\n\u251c\u2500\u2500 .github/\n\u2502   \u2514\u2500\u2500 workflows/\n\u2502       \u2514\u2500\u2500 deploy-prefect-flow.yaml\n\u251c\u2500\u2500 flow.py\n\u2514\u2500\u2500 requirements.txt\n

    flow.py

    from prefect import flow\n\n@flow(log_prints=True)\ndef hello():\n  print(\"Hello!\")\n\nif __name__ == \"__main__\":\n    hello.deploy(\n        name=\"my-deployment\",\n        work_pool_name=\"my-work-pool\",\n        image=\"my_registry/my_image:my_image_tag\",\n    )\n

    .github/workflows/deploy-prefect-flow.yaml

    name: Deploy Prefect flow\n\non:\n  push:\n    branches:\n      - main\n\njobs:\n  deploy:\n    name: Deploy\n    runs-on: ubuntu-latest\n\n    steps:\n      - name: Checkout\n        uses: actions/checkout@v4\n\n      - name: Log in to Docker Hub\n        uses: docker/login-action@v3\n        with:\n          username: ${{ secrets.DOCKER_USERNAME }}\n          password: ${{ secrets.DOCKER_PASSWORD }}\n\n      - name: Setup Python\n        uses: actions/setup-python@v5\n        with:\n          python-version: '3.11'\n\n      - name: Prefect Deploy\n        env:\n          PREFECT_API_KEY: ${{ secrets.PREFECT_API_KEY }}\n          PREFECT_API_URL: ${{ secrets.PREFECT_API_URL }}\n        run: |\n          pip install -r requirements.txt\n          python flow.py\n

    Using prefect.yaml:
    .\n\u251c\u2500\u2500 .github/\n\u2502   \u2514\u2500\u2500 workflows/\n\u2502       \u2514\u2500\u2500 deploy-prefect-flow.yaml\n\u251c\u2500\u2500 flow.py\n\u251c\u2500\u2500 prefect.yaml\n\u2514\u2500\u2500 requirements.txt\n

    flow.py

    from prefect import flow\n\n@flow(log_prints=True)\ndef hello():\n  print(\"Hello!\")\n

    prefect.yaml

    name: cicd-example\nprefect-version: 2.14.11\n\nbuild:\n  - prefect_docker.deployments.steps.build_docker_image:\n      id: build_image\n      requires: prefect-docker>=0.3.1\n      image_name: my_registry/my_image\n      tag: my_image_tag\n      dockerfile: auto\n\npush:\n  - prefect_docker.deployments.steps.push_docker_image:\n      requires: prefect-docker>=0.3.1\n      image_name: \"{{ build_image.image_name }}\"\n      tag: \"{{ build_image.tag }}\"\n\npull: null\n\ndeployments:\n  - name: my-deployment\n    entrypoint: flow.py:hello\n    work_pool:\n      name: my-work-pool\n      work_queue_name: default\n      job_variables:\n        image: \"{{ build_image.image }}\"\n

    .github/workflows/deploy-prefect-flow.yaml

    name: Deploy Prefect flow\n\non:\n  push:\n    branches:\n      - main\n\njobs:\n  deploy:\n    name: Deploy\n    runs-on: ubuntu-latest\n\n    steps:\n      - name: Checkout\n        uses: actions/checkout@v4\n\n      - name: Log in to Docker Hub\n        uses: docker/login-action@v3\n        with:\n          username: ${{ secrets.DOCKER_USERNAME }}\n          password: ${{ secrets.DOCKER_PASSWORD }}\n\n      - name: Setup Python\n        uses: actions/setup-python@v5\n        with:\n          python-version: '3.11'\n\n      - name: Prefect Deploy\n        env:\n          PREFECT_API_KEY: ${{ secrets.PREFECT_API_KEY }}\n          PREFECT_API_URL: ${{ secrets.PREFECT_API_URL }}\n        run: |\n          pip install -r requirements.txt\n          prefect deploy -n my-deployment\n
    ","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#running-a-github-workflow","title":"Running a GitHub workflow","text":"

    After pushing commits to your repository, GitHub will automatically trigger a run of your workflow. The status of running and completed workflows can be monitored from the Actions tab of your repository.

    You can view the logs from each workflow step as they run. The Prefect Deploy step will include output about your image build and push, and the creation/update of your deployment.

    Successfully built image '***/cicd-example:latest'\n\nSuccessfully pushed image '***/cicd-example:latest'\n\nSuccessfully created/updated all deployments!\n\n                Deployments\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name                \u2503 Status  \u2503 Details \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 hello/my-deployment \u2502 applied \u2502         \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n
    ","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#advanced-example","title":"Advanced example","text":"

    In more complex scenarios, CI/CD processes often need to accommodate several additional considerations to enable a smooth development workflow:

    • Making code available in different environments as it advances through stages of development
    • Handling independent deployment of distinct groupings of work, as in a monorepo
    • Efficiently using build time to avoid repeated work

    This example repository demonstrates how each of these considerations can be addressed using a combination of Prefect's and GitHub's capabilities.

    ","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#deploying-to-multiple-workspaces","title":"Deploying to multiple workspaces","text":"

    The deployment processes that run are selected automatically when changes are pushed, based on two conditions:

    on:\n  push:\n    branches:\n      - stg\n      - main\n    paths:\n      - \"project_1/**\"\n
    1. branches: - which branch has changed. This will ultimately select which Prefect workspace a deployment is created or updated in. In this example, changes on the stg branch will deploy flows to a staging workspace, and changes on the main branch will deploy flows to a production workspace.
    2. paths: - which project folders' files have changed. Since each project folder contains its own flows, dependencies, and prefect.yaml, it represents a complete set of logic and configuration that can be deployed independently. Each project in this repository gets its own GitHub Actions workflow YAML file.

    The prefect.yaml file in each project folder depends on environment variables that are dictated by the selected job in each CI/CD workflow, enabling external code storage for Prefect deployments that is clearly separated across projects and environments.

      .\n  \u251c\u2500\u2500 cicd-example-workspaces-prod  # production bucket\n  \u2502   \u251c\u2500\u2500 project_1\n  \u2502   \u2514\u2500\u2500 project_2\n  \u2514\u2500\u2500 cicd-example-workspaces-stg  # staging bucket\n      \u251c\u2500\u2500 project_1\n      \u2514\u2500\u2500 project_2  \n

    Since the deployments in this example use S3 for code storage, it's important that push steps place flow files in separate locations depending upon their respective environment and project so no deployment overwrites another deployment's files.
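
    For illustration only, a push step in one project's prefect.yaml might reference such an environment variable; the step, bucket variable, and folder shown here are assumptions rather than excerpts from the example repository:

    push:\n  - prefect_aws.deployments.steps.push_to_s3:\n      bucket: \"{{ $S3_BUCKET_NAME }}\"\n      folder: project_1\n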

    ","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#caching-build-dependencies","title":"Caching build dependencies","text":"

    Since building Docker images and installing Python dependencies are essential parts of the deployment process, it's useful to rely on caching to skip repeated build steps.

    The setup-python action offers caching options so Python packages do not have to be downloaded on repeat workflow runs.

    - name: Setup Python\n  uses: actions/setup-python@v5\n  with:\n    python-version: \"3.11\"\n    cache: \"pip\"\n
    Using cached prefect-2.16.1-py3-none-any.whl (2.9 MB)\nUsing cached prefect_aws-0.4.10-py3-none-any.whl (61 kB)\n

    The build-push-action for building Docker images also offers caching options for GitHub Actions. If you are not using GitHub, other remote cache backends are available as well.

    - name: Build and push\n  id: build-docker-image\n  env:\n      GITHUB_SHA: ${{ steps.get-commit-hash.outputs.COMMIT_HASH }}\n  uses: docker/build-push-action@v5\n  with:\n    context: ${{ env.PROJECT_NAME }}/\n    push: true\n    tags: ${{ secrets.DOCKER_USERNAME }}/${{ env.PROJECT_NAME }}:${{ env.GITHUB_SHA }}-stg\n    cache-from: type=gha\n    cache-to: type=gha,mode=max\n
    importing cache manifest from gha:***\nDONE 0.1s\n\n[internal] load build context\ntransferring context: 70B done\nDONE 0.0s\n\n[2/3] COPY requirements.txt requirements.txt\nCACHED\n\n[3/3] RUN pip install -r requirements.txt\nCACHED\n
    ","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#prefect-github-actions","title":"Prefect GitHub Actions","text":"

    Prefect provides its own GitHub Actions for authentication and deployment creation. These actions can simplify deploying with CI/CD when using prefect.yaml, especially in cases where a repository contains flows that are used in multiple deployments across multiple Prefect Cloud workspaces.

    Here's an example of integrating these actions into the workflow we created above:

    name: Deploy Prefect flow\n\non:\n  push:\n    branches:\n      - main\n\njobs:\n  deploy:\n    name: Deploy\n    runs-on: ubuntu-latest\n\n    steps:\n      - name: Checkout\n        uses: actions/checkout@v4\n\n      - name: Log in to Docker Hub\n        uses: docker/login-action@v3\n        with:\n          username: ${{ secrets.DOCKER_USERNAME }}\n          password: ${{ secrets.DOCKER_PASSWORD }}\n\n      - name: Setup Python\n        uses: actions/setup-python@v5\n        with:\n          python-version: \"3.11\"\n\n      - name: Prefect Auth\n        uses: PrefectHQ/actions-prefect-auth@v1\n        with:\n          prefect-api-key: ${{ secrets.PREFECT_API_KEY }}\n          prefect-workspace: ${{ secrets.PREFECT_WORKSPACE }}\n\n      - name: Run Prefect Deploy\n        uses: PrefectHQ/actions-prefect-deploy@v3\n        with:\n          deployment-names: my-deployment\n          requirements-file-paths: requirements.txt\n
    ","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#authenticating-to-other-docker-image-registries","title":"Authenticating to other Docker image registries","text":"

    The docker/login-action GitHub Action supports pushing images to a wide variety of image registries.

    For example, if you are storing Docker images in AWS Elastic Container Registry, you can add your ECR registry URL to the registry key in the with: part of the action and use an AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as your username and password.

    - name: Login to ECR\n  uses: docker/login-action@v3\n  with:\n    registry: <aws-account-number>.dkr.ecr.<region>.amazonaws.com\n    username: ${{ secrets.AWS_ACCESS_KEY_ID }}\n    password: ${{ secrets.AWS_SECRET_ACCESS_KEY }}\n
    ","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#other-resources","title":"Other resources","text":"

    Check out the Prefect Cloud Terraform provider if you're using Terraform to manage your infrastructure.

    ","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/cli-shell/","title":"Orchestrating Shell Commands with Prefect","text":"

    Harness the power of the Prefect CLI to execute and schedule shell commands as Prefect flows. This guide shows how to use the watch and serve commands, showcasing the CLI's versatility for automation tasks.

    Here's what you'll learn:

    • Running a shell command as a Prefect flow on-demand with watch.
    • Scheduling a shell command as a recurring Prefect flow using serve.
    • The benefits of embedding these commands into your automation workflows.
    ","tags":["CLI","Shell Commands","Prefect Flows","Automation","Scheduling"],"boost":2},{"location":"guides/cli-shell/#prerequisites","title":"Prerequisites","text":"

    Before you begin, ensure you have:

    • A basic understanding of Prefect flows. Start with the Getting Started guide if you're new.
    • A recent version of Prefect installed in your command line environment. Follow the instructions in the docs if you have any issues.
    ","tags":["CLI","Shell Commands","Prefect Flows","Automation","Scheduling"],"boost":2},{"location":"guides/cli-shell/#the-watch-command","title":"The watch command","text":"

    The watch command wraps any shell command in a Prefect flow for instant execution, ideal for quick tasks or integrating shell scripts into your workflows.

    ","tags":["CLI","Shell Commands","Prefect Flows","Automation","Scheduling"],"boost":2},{"location":"guides/cli-shell/#example-usage","title":"Example usage","text":"

    Imagine you want to fetch the current weather in Chicago using the curl command. The following Prefect CLI command does just that:

    prefect shell watch \"curl http://wttr.in/Chicago?format=3\"\n

    This command makes a request to wttr.in, a console-oriented weather service, and prints the weather conditions for Chicago.

    ","tags":["CLI","Shell Commands","Prefect Flows","Automation","Scheduling"],"boost":2},{"location":"guides/cli-shell/#benefits-of-watch","title":"Benefits of watch","text":"
    • Immediate feedback: Execute shell commands within the Prefect framework for immediate results.
    • Easy integration: Seamlessly blend external scripts or data fetching into your data workflows.
    • Visibility and logging: Leverage Prefect's logging to track the execution and output of your shell tasks.
    ","tags":["CLI","Shell Commands","Prefect Flows","Automation","Scheduling"],"boost":2},{"location":"guides/cli-shell/#deploying-with-serve","title":"Deploying with serve","text":"

    When you need to run shell commands on a schedule, the serve command creates a Prefect deployment for regular execution. This is an extremely quick way to create a deployment that is served by Prefect.

    ","tags":["CLI","Shell Commands","Prefect Flows","Automation","Scheduling"],"boost":2},{"location":"guides/cli-shell/#example-usage_1","title":"Example usage","text":"

    To set up a daily weather report for Chicago at 9 AM, you can use the serve command as follows:

    prefect shell serve \"curl http://wttr.in/Chicago?format=3\" --flow-name \"Daily Chicago Weather Report\" --cron-schedule \"0 9 * * *\" --deployment-name \"Chicago Weather\"\n

    This command schedules a Prefect flow to fetch Chicago's weather conditions daily, providing consistent updates without manual intervention. Additionally, if you want to fetch the Chicago weather, you can manually create a run of your new deployment from the UI or the CLI.
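
    From the CLI, for example, a manual run could be triggered with a command along these lines, using the flow and deployment names from above:

    prefect deployment run 'Daily Chicago Weather Report/Chicago Weather'\n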

    To shut down your server and pause your scheduled runs, hit ctrl + c in the CLI.

    ","tags":["CLI","Shell Commands","Prefect Flows","Automation","Scheduling"],"boost":2},{"location":"guides/cli-shell/#benefits-of-serve","title":"Benefits of serve","text":"
    • Automated scheduling: Schedule shell commands to run automatically, ensuring critical updates are generated and available on time.
    • Centralized workflow management: Manage and monitor your scheduled shell commands inside Prefect for a unified workflow overview.
    • Configurable execution: Tailor execution frequency, concurrency limits, and other parameters to suit your project's needs and resources.
    ","tags":["CLI","Shell Commands","Prefect Flows","Automation","Scheduling"],"boost":2},{"location":"guides/cli-shell/#next-steps","title":"Next steps","text":"

    With the watch and serve commands at your disposal, you're ready to incorporate shell command automation into your Prefect workflows. You can start with straightforward tasks like observing cron jobs and expand to more complex automation scenarios to enhance your workflows' efficiency and capabilities.

    Check out the tutorial and explore other Prefect docs to learn how to gain more observability and orchestration capabilities in your workflows.

    ","tags":["CLI","Shell Commands","Prefect Flows","Automation","Scheduling"],"boost":2},{"location":"guides/creating-interactive-workflows/","title":"Creating Interactive Workflows","text":"

    Flows can pause or suspend execution and automatically resume when they receive type-checked input in Prefect's UI. Flows can also send and receive type-checked input at any time while running, without pausing or suspending. This guide will show you how to use these features to build interactive workflows.

    A note on async Python syntax

    Most of the example code in this section uses async Python functions and await. However, as with other Prefect features, you can call these functions with or without await.

    ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#pausing-or-suspending-a-flow-until-it-receives-input","title":"Pausing or suspending a flow until it receives input","text":"

    You can pause or suspend a flow until it receives input from a user in Prefect's UI. This is useful when you need to ask for additional information or feedback before resuming a flow. Such workflows are often called human-in-the-loop (HITL) systems.

    What is human-in-the-loop interactivity used for?

    Approval workflows that pause to ask a human to confirm whether a workflow should continue are very common in the business world. Certain types of machine learning training and artificial intelligence workflows benefit from incorporating HITL design.

    ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#waiting-for-input","title":"Waiting for input","text":"

    To receive input while paused or suspended, use the wait_for_input parameter in the pause_flow_run or suspend_flow_run functions. This parameter accepts one of the following:

    • A built-in type like int or str, or a built-in collection like List[int]
    • A pydantic.BaseModel subclass
    • A subclass of prefect.input.RunInput

    When to use a RunInput or BaseModel instead of a built-in type

    There are a few reasons to use a RunInput or BaseModel. The first is that when you let Prefect automatically create one of these classes for your input type, the field that users will see in Prefect's UI when they click \"Resume\" on a flow run is named value and has no help text to suggest what the field is. If you create a RunInput or BaseModel, you can change details like the field name, help text, and default value, and users will see those reflected in the \"Resume\" form.

    The simplest way to pause or suspend and wait for input is to pass a built-in type:

    from prefect import flow, pause_flow_run, get_run_logger\n\n@flow\ndef greet_user():\n    logger = get_run_logger()\n\n    user = pause_flow_run(wait_for_input=str)\n\n    logger.info(f\"Hello, {user}!\")\n

    In this example, the flow run will pause until a user clicks the Resume button in the Prefect UI, enters a name, and submits the form.

    What types can you pass for wait_for_input?

    When you pass a built-in type such as int as an argument for the wait_for_input parameter to pause_flow_run or suspend_flow_run, Prefect automatically creates a Pydantic model containing one field annotated with the type you specified. This means you can use any type annotation that Pydantic accepts for model fields with these functions.
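
    For example, a collection annotation works the same way; this sketch (not from the original guide, with a hypothetical flow name) pauses for a list of integers:

    from typing import List\n\nfrom prefect import flow, pause_flow_run\n\n@flow\nasync def collect_scores():\n    # The generated model has a single \"value\" field annotated as List[int]\n    scores = await pause_flow_run(wait_for_input=List[int])\n    print(scores)\n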

    Instead of a built-in type, you can pass in a pydantic.BaseModel class. This is useful if you already have a BaseModel you want to use:

    from prefect import flow, pause_flow_run, get_run_logger\nfrom pydantic import BaseModel\n\n\nclass User(BaseModel):\n    name: str\n    age: int\n\n\n@flow\nasync def greet_user():\n    logger = get_run_logger()\n\n    user = await pause_flow_run(wait_for_input=User)\n\n    logger.info(f\"Hello, {user.name}!\")\n

    BaseModel classes are upgraded to RunInput classes automatically

    When you pass a pydantic.BaseModel class as the wait_for_input argument to pause_flow_run or suspend_flow_run, Prefect automatically creates a RunInput class with the same behavior as your BaseModel and uses that instead.

    RunInput classes contain extra logic that allows flows to send and receive them at runtime. You shouldn't notice any difference!

    Finally, for advanced use cases like overriding how Prefect stores flow run inputs, you can create a RunInput class:

    from prefect import flow, get_run_logger, pause_flow_run\nfrom prefect.input import RunInput\n\nclass UserInput(RunInput):\n    name: str\n    age: int\n\n    # Imagine overridden methods here!\n    def override_something(self, *args, **kwargs):\n        super().override_something(*args, **kwargs)\n\n@flow\nasync def greet_user():\n    logger = get_run_logger()\n\n    user = await pause_flow_run(wait_for_input=UserInput)\n\n    logger.info(f\"Hello, {user.name}!\")\n
    ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#providing-initial-data","title":"Providing initial data","text":"

    You can set default values for fields in your model by using the with_initial_data method. This is useful when you want to provide default values for the fields in your own RunInput class.

    Expanding on the example above, you could make the name field default to \"anonymous\":

    from prefect import flow, get_run_logger, pause_flow_run\nfrom prefect.input import RunInput\n\nclass UserInput(RunInput):\n    name: str\n    age: int\n\n@flow\nasync def greet_user():\n    logger = get_run_logger()\n\n    user_input = await pause_flow_run(\n        wait_for_input=UserInput.with_initial_data(name=\"anonymous\")\n    )\n\n    if user_input.name == \"anonymous\":\n        logger.info(\"Hello, stranger!\")\n    else:\n        logger.info(f\"Hello, {user_input.name}!\")\n

    When a user sees the form for this input, the name field will contain \"anonymous\" as the default.

    ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#providing-a-description-with-runtime-data","title":"Providing a description with runtime data","text":"

    You can provide a dynamic, markdown description that will appear in the Prefect UI when the flow run pauses. This feature enables context-specific prompts, enhancing clarity and user interaction. Building on the example above:

    from datetime import datetime\nfrom prefect import flow, pause_flow_run, get_run_logger\nfrom prefect.input import RunInput\n\n\nclass UserInput(RunInput):\n    name: str\n    age: int\n\n\n@flow\nasync def greet_user():\n    logger = get_run_logger()\n    current_date = datetime.now().strftime(\"%B %d, %Y\")\n\n    description_md = f\"\"\"\n**Welcome to the User Greeting Flow!**\nToday's Date: {current_date}\n\nPlease enter your details below:\n- **Name**: What should we call you?\n- **Age**: Just a number, nothing more.\n\"\"\"\n\n    user_input = await pause_flow_run(\n        wait_for_input=UserInput.with_initial_data(\n            description=description_md, name=\"anonymous\"\n        )\n    )\n\n    if user_input.name == \"anonymous\":\n        logger.info(\"Hello, stranger!\")\n    else:\n        logger.info(f\"Hello, {user_input.name}!\")\n

    When a user sees the form for this input, the given markdown will appear above the input fields.

    ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#handling-custom-validation","title":"Handling custom validation","text":"

    Prefect uses the fields and type hints on your RunInput or BaseModel class to validate the general structure of input your flow receives, but you might require more complex validation. If you do, you can use Pydantic validators.

    Custom validation runs after the flow resumes

    Prefect transforms the type annotations in your RunInput or BaseModel class to a JSON schema and uses that schema in the UI for client-side validation. However, custom validation requires running Python logic defined in your RunInput class. Because of this, validation happens after the flow resumes, so you'll want to handle it explicitly in your flow. Continue reading for an example best practice.

    The following is an example RunInput class that uses a custom field validator:

    from typing import Literal\n\nimport pydantic\nfrom prefect.input import RunInput\n\n\nclass ShirtOrder(RunInput):\n    size: Literal[\"small\", \"medium\", \"large\", \"xlarge\"]\n    color: Literal[\"red\", \"green\", \"black\"]\n\n    @pydantic.validator(\"color\")\n    def validate_age(cls, value, values, **kwargs):\n        if value == \"green\" and values[\"size\"] == \"small\":\n            raise ValueError(\n                \"Green is only in-stock for medium, large, and XL sizes.\"\n            )\n\n        return value\n

    In the example, we use Pydantic's validator decorator to define a custom validation method for the color field. We can use it in a flow like this:

    from typing import Literal\n\nimport pydantic\nfrom prefect import flow, pause_flow_run\nfrom prefect.input import RunInput\n\n\nclass ShirtOrder(RunInput):\n    size: Literal[\"small\", \"medium\", \"large\", \"xlarge\"]\n    color: Literal[\"red\", \"green\", \"black\"]\n\n    @pydantic.validator(\"color\")\n    def validate_age(cls, value, values, **kwargs):\n        if value == \"green\" and values[\"size\"] == \"small\":\n            raise ValueError(\n                \"Green is only in-stock for medium, large, and XL sizes.\"\n            )\n\n        return value\n\n\n@flow\ndef get_shirt_order():\n    shirt_order = pause_flow_run(wait_for_input=ShirtOrder)\n

    If a user chooses any size and color combination other than small and green, the flow run will resume successfully. However, if the user chooses size small and color green, the flow run will resume, and pause_flow_run will raise a ValidationError exception. This will cause the flow run to fail and log the error.

    However, what if you don't want the flow run to fail? One way to handle this case is to use a while loop and pause again if the ValidationError exception is raised:

    from typing import Literal\n\nimport pydantic\nfrom prefect import flow, get_run_logger, pause_flow_run\nfrom prefect.input import RunInput\n\n\nclass ShirtOrder(RunInput):\n    size: Literal[\"small\", \"medium\", \"large\", \"xlarge\"]\n    color: Literal[\"red\", \"green\", \"black\"]\n\n    @pydantic.validator(\"color\")\n    def validate_age(cls, value, values, **kwargs):\n        if value == \"green\" and values[\"size\"] == \"small\":\n            raise ValueError(\n                \"Green is only in-stock for medium, large, and XL sizes.\"\n            )\n\n        return value\n\n\n@flow\ndef get_shirt_order():\n    logger = get_run_logger()\n    shirt_order = None\n\n    while shirt_order is None:\n        try:\n            shirt_order = pause_flow_run(wait_for_input=ShirtOrder)\n        except pydantic.ValidationError as exc:\n            logger.error(f\"Invalid size and color combination: {exc}\")\n\n    logger.info(\n        f\"Shirt order: {shirt_order.size}, {shirt_order.color}\"\n    )\n

    This code will cause the flow run to continually pause until the user enters a valid size and color combination.

    As an additional step, you may want to use an automation or notification to alert the user to the error.

    ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#sending-and-receiving-input-at-runtime","title":"Sending and receiving input at runtime","text":"

    Use the send_input and receive_input functions to send input to a flow or receive input from a flow at runtime. You don't need to pause or suspend the flow to send or receive input.

    Why would you send or receive input without pausing or suspending?

    You might want to send or receive input without pausing or suspending in scenarios where the flow run is designed to handle real-time data. For instance, in a live monitoring system, you might need to update certain parameters based on the incoming data without interrupting the flow. Another use is having a long-running flow that continually responds to runtime input with low latency. For example, if you're building a chatbot, you could have a flow that starts a GPT Assistant and manages a conversation thread.

    The most important parameter to the send_input and receive_input functions is run_input, which should be one of the following:

    • A built-in type such as int or str
    • A pydantic.BaseModel class
    • A prefect.input.RunInput class

    When to use a BaseModel or RunInput instead of a built-in type

    Most built-in types and collections of built-in types should work with send_input and receive_input, but there is a caveat with nested collection types, such as lists of tuples, e.g. List[Tuple[str, float]]. In this case, validation may happen after your flow receives the data, so calling receive_input may raise a ValidationError. You can plan to catch this exception, but also consider placing the field in an explicit BaseModel or RunInput so that your flow only receives exact type matches.
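
    For instance, rather than passing List[Tuple[str, float]] directly, a small RunInput wrapper (a sketch with hypothetical class and field names) gives the sender and receiver an exact type to agree on:

    from typing import List, Tuple\n\nfrom prefect.input import RunInput\n\nclass Measurements(RunInput):\n    # Wrapping the nested collection in an explicit model means the flow\n    # only receives exact type matches\n    readings: List[Tuple[str, float]]\n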

    Let's look at some examples! We'll check out receive_input first, followed by send_input, and then we'll see the two functions working together.

    ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#receiving-input","title":"Receiving input","text":"

    The following flow uses receive_input to continually receive names and print a personalized greeting for each name it receives:

    from prefect import flow\nfrom prefect.input.run_input import receive_input\n\n\n@flow\nasync def greeter_flow():\n    async for name_input in receive_input(str, timeout=None):\n        # Prints \"Hello, andrew!\" if another flow sent \"andrew\"\n        print(f\"Hello, {name_input}!\")\n

    When you pass a type such as str into receive_input, Prefect creates a RunInput class to manage your input automatically. When a flow sends input of this type, Prefect uses the RunInput class to validate the input. If the validation succeeds, your flow receives the input in the type you specified. In this example, if the flow received a valid string as input, the variable name_input would contain the string value.

    If, instead, you pass a BaseModel, Prefect upgrades your BaseModel to a RunInput class, and the variable your flow sees \u2014 in this case, name_input \u2014 is a RunInput instance that behaves like a BaseModel. Of course, if you pass in a RunInput class, no upgrade is needed, and you'll get a RunInput instance.

    If you prefer to keep things simple and pass types such as str into receive_input, you can do so. If you need access to the generated RunInput that contains the received value, pass with_metadata=True to receive_input:

    from prefect import flow\nfrom prefect.input.run_input import receive_input\n\n\n@flow\nasync def greeter_flow():\n    async for name_input in receive_input(\n        str,\n        timeout=None,\n        with_metadata=True\n    ):\n        # Input will always be in the field \"value\" on this object.\n        print(f\"Hello, {name_input.value}!\")\n

    Why would you need to use with_metadata=True?

    The primary uses of accessing the RunInput object for a received input are to respond to the sender with the RunInput.respond() function or to access the input's unique key. Later in this guide, we'll discuss how and why you might use these features.

    Notice that we are now printing name_input.value. When Prefect generates a RunInput for you from a built-in type, the RunInput class has a single field, value, that uses a type annotation matching the type you specified. So if you call receive_input like this: receive_input(str, with_metadata=True), that's equivalent to manually creating the following RunInput class and receive_input call:

    from prefect import flow\nfrom prefect.input.run_input import RunInput, receive_input\n\nclass GreeterInput(RunInput):\n    value: str\n\n@flow\nasync def greeter_flow():\n    async for name_input in receive_input(GreeterInput, timeout=None):\n        print(f\"Hello, {name_input.value}!\")\n

    The type used in receive_input and send_input must match

    For a flow to receive input, the sender must use the same type that the receiver is receiving. This means that if the receiver is receiving GreeterInput, the sender must send GreeterInput. If the receiver is receiving GreeterInput and the sender sends str input that Prefect automatically upgrades to a RunInput class, the types won't match, so the receiving flow run won't receive the input. However, the input will be waiting if the flow ever calls receive_input(str)!

    ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#keeping-track-of-inputs-youve-already-seen","title":"Keeping track of inputs you've already seen","text":"

    By default, each time you call receive_input, you get an iterator that iterates over all known inputs to a specific flow run, starting with the first received. The iterator will keep track of your current position as you iterate over it, or you can call next() to explicitly get the next input. If you're using the iterator in a loop, you should probably assign it to a variable:

    from prefect import flow, get_client\nfrom prefect.deployments.deployments import run_deployment\nfrom prefect.input.run_input import receive_input, send_input\n\nEXIT_SIGNAL = \"__EXIT__\"\n\n\n@flow\nasync def sender():\n    greeter_flow_run = await run_deployment(\n        \"greeter/send-receive\", timeout=0, as_subflow=False\n    )\n    client = get_client()\n\n    # Assigning the \`receive_input\` iterator to a variable\n    # outside of the \`while True\` loop allows us to continue\n    # iterating over inputs in subsequent passes through the\n    # while loop without losing our position.\n    receiver = receive_input(\n        str,\n        with_metadata=True,\n        timeout=None,\n        poll_interval=0.1\n    )\n\n    while True:\n        name = input(\"What is your name? \")\n        if not name:\n            continue\n\n        if name == \"q\" or name == \"quit\":\n            await send_input(\n                EXIT_SIGNAL,\n                flow_run_id=greeter_flow_run.id\n            )\n            print(\"Goodbye!\")\n            break\n\n        await send_input(name, flow_run_id=greeter_flow_run.id)\n\n        # Saving the iterator outside of the while loop and\n        # calling next() on each iteration of the loop ensures\n        # that we're always getting the newest greeting. If we\n        # had instead called \`receive_input\` here, we would\n        # always get the _first_ greeting this flow received,\n        # print it, and then ask for a new name.\n        greeting = await receiver.next()\n        print(greeting)\n

    So, an iterator helps to keep track of the inputs your flow has already received. But what if you want your flow to suspend and then resume later, picking up where it left off? In that case, you will need to save the keys of the inputs you've seen so that the flow can read them back out when it resumes. You might use a Block, such as a JSONBlock.

    The following flow receives input for 30 seconds then suspends itself, which exits the flow and tears down infrastructure:

    from prefect import flow, get_run_logger, suspend_flow_run\nfrom prefect.blocks.system import JSON\nfrom prefect.context import get_run_context\nfrom prefect.input.run_input import receive_input\n\n\nEXIT_SIGNAL = \"__EXIT__\"\n\n\n@flow\nasync def greeter():\n    logger = get_run_logger()\n    run_context = get_run_context()\n    assert run_context.flow_run, \"Could not see my flow run ID\"\n\n    block_name = f\"{run_context.flow_run.id}-seen-ids\"\n\n    try:\n        seen_keys_block = await JSON.load(block_name)\n    except ValueError:\n        seen_keys_block = JSON(\n            value=[],\n        )\n\n    try:\n        async for name_input in receive_input(\n            str,\n            with_metadata=True,\n            poll_interval=0.1,\n            timeout=30,\n            exclude_keys=seen_keys_block.value\n        ):\n            if name_input.value == EXIT_SIGNAL:\n                print(\"Goodbye!\")\n                return\n            await name_input.respond(f\"Hello, {name_input.value}!\")\n\n            seen_keys_block.value.append(name_input.metadata.key)\n            await seen_keys_block.save(\n                name=block_name,\n                overwrite=True\n            )\n    except TimeoutError:\n        logger.info(\"Suspending greeter after 30 seconds of idle time\")\n        await suspend_flow_run(timeout=10000)\n

    As this flow processes name input, it adds the key of the flow run input to the seen_keys_block. When the flow later suspends and then resumes, it reads the keys it has already seen out of the JSON Block and passes them as the exclude_keys parameter to receive_input.

    ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#responding-to-the-inputs-sender","title":"Responding to the input's sender","text":"

    When your flow receives input from another flow, Prefect knows the sending flow run ID, so the receiving flow can respond by calling the respond method on the RunInput instance the flow received. There are a couple of requirements:

    1. You will need to pass in a BaseModel or RunInput, or use with_metadata=True.
    2. The flow you are responding to must receive the same type of input you send in order to see it.

    The respond method is equivalent to calling send_input(..., flow_run_id=sending_flow_run.id), but with respond, your flow doesn't need to know the sending flow run's ID.

    Now that we know about respond, let's make our greeter_flow respond to name inputs instead of printing them:

    from prefect import flow\nfrom prefect.input.run_input import receive_input\n\n\n@flow\nasync def greeter():\n    async for name_input in receive_input(\n        str,\n        with_metadata=True,\n        timeout=None\n    ):\n        await name_input.respond(f\"Hello, {name_input.value}!\")\n

    Cool! There's one problem left: this flow runs forever! We need a way to signal that it should exit. Let's keep things simple and teach it to look for a special string:

    from prefect import flow\nfrom prefect.input.run_input import receive_input\n\n\n\nEXIT_SIGNAL = \"__EXIT__\"\n\n\n@flow\nasync def greeter():\n    async for name_input in receive_input(\n        str,\n        with_metadata=True,\n        poll_interval=0.1,\n        timeout=None\n    ):\n        if name_input.value == EXIT_SIGNAL:\n            print(\"Goodbye!\")\n            return\n        await name_input.respond(f\"Hello, {name_input.value}!\")\n

    With a greeter flow in place, we're ready to create the flow that sends greeter names!

    ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#sending-input","title":"Sending input","text":"

    You can send input to a flow with the send_input function. This works similarly to receive_input and, like that function, accepts the same run_input argument, which can be a built-in type such as str, or else a BaseModel or RunInput subclass.

    When can you send input to a flow run?

    You can send input to a flow run as soon as you have the flow run's ID. The flow does not have to be receiving input for you to send input. If you send input to a flow before it is receiving, the flow will see your input when it calls receive_input (as long as the types in the send_input and receive_input calls match!)

    Next, we'll create a sender flow that starts a greeter flow run and then enters a loop, continuously getting input from the terminal and sending it to the greeter flow:

    from prefect import flow, get_client\nfrom prefect.deployments.deployments import run_deployment\nfrom prefect.input.run_input import receive_input, send_input\n\nEXIT_SIGNAL = \"__EXIT__\"\n\n\n@flow\nasync def sender():\n    greeter_flow_run = await run_deployment(\n        \"greeter/send-receive\", timeout=0, as_subflow=False\n    )\n    receiver = receive_input(str, timeout=None, poll_interval=0.1)\n    client = get_client()\n\n    while True:\n        flow_run = await client.read_flow_run(greeter_flow_run.id)\n\n        if not flow_run.state or not flow_run.state.is_running():\n            continue\n\n        name = input(\"What is your name? \")\n        if not name:\n            continue\n\n        if name == \"q\" or name == \"quit\":\n            await send_input(\n                EXIT_SIGNAL,\n                flow_run_id=greeter_flow_run.id\n            )\n            print(\"Goodbye!\")\n            break\n\n        await send_input(name, flow_run_id=greeter_flow_run.id)\n        greeting = await receiver.next()\n        print(greeting)\n

    There's more going on here than in greeter, so let's take a closer look at the pieces.

    First, we use run_deployment to start a greeter flow run. This means we must have a worker or flow.serve() running in a separate process. That process will begin running greeter while sender continues to execute. Calling run_deployment(..., timeout=0) ensures that sender won't wait for the greeter flow run to complete, because it's running a loop and will only exit when we send EXIT_SIGNAL.

    Next, we capture the iterator returned by receive_input as receiver. This flow works by entering a loop, and on each iteration of the loop, the flow asks for terminal input, sends that to the greeter flow, and then runs receiver.next() to wait until it receives the response from greeter.

    Next, we let the terminal user who ran this flow exit by entering the string q or quit. When that happens, we send the greeter flow an exit signal so it will shut down too.

    Finally, we send the new name to greeter. We know that greeter is going to send back a greeting as a string, so we immediately wait for new string input. When we receive the greeting, we print it and continue the loop that gets terminal input.

    ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#seeing-a-complete-example","title":"Seeing a complete example","text":"

    Finally, let's see a complete example of using send_input and receive_input. Here is what the greeter and sender flows look like together:

    import asyncio\nimport sys\nfrom prefect import flow, get_client\nfrom prefect.blocks.system import JSON\nfrom prefect.context import get_run_context\nfrom prefect.deployments.deployments import run_deployment\nfrom prefect.input.run_input import receive_input, send_input\n\n\nEXIT_SIGNAL = \"__EXIT__\"\n\n\n@flow\nasync def greeter():\n    run_context = get_run_context()\n    assert run_context.flow_run, \"Could not see my flow run ID\"\n\n    block_name = f\"{run_context.flow_run.id}-seen-ids\"\n\n    try:\n        seen_keys_block = await JSON.load(block_name)\n    except ValueError:\n        seen_keys_block = JSON(\n            value=[],\n        )\n\n    async for name_input in receive_input(\n        str,\n        with_metadata=True,\n        poll_interval=0.1,\n        timeout=None\n    ):\n        if name_input.value == EXIT_SIGNAL:\n            print(\"Goodbye!\")\n            return\n        await name_input.respond(f\"Hello, {name_input.value}!\")\n\n        seen_keys_block.value.append(name_input.metadata.key)\n        await seen_keys_block.save(\n            name=block_name,\n            overwrite=True\n        )\n\n\n@flow\nasync def sender():\n    greeter_flow_run = await run_deployment(\n        \"greeter/send-receive\", timeout=0, as_subflow=False\n    )\n    receiver = receive_input(str, timeout=None, poll_interval=0.1)\n    client = get_client()\n\n    while True:\n        flow_run = await client.read_flow_run(greeter_flow_run.id)\n\n        if not flow_run.state or not flow_run.state.is_running():\n            continue\n\n        name = input(\"What is your name? \")\n        if not name:\n            continue\n\n        if name == \"q\" or name == \"quit\":\n            await send_input(\n                EXIT_SIGNAL,\n                flow_run_id=greeter_flow_run.id\n            )\n            print(\"Goodbye!\")\n            break\n\n        await send_input(name, flow_run_id=greeter_flow_run.id)\n        greeting = await receiver.next()\n        print(greeting)\n\n\nif __name__ == \"__main__\":\n    if sys.argv[1] == \"greeter\":\n        asyncio.run(greeter.serve(name=\"send-receive\"))\n    elif sys.argv[1] == \"sender\":\n        asyncio.run(sender())\n

    To run the example, you'll need a Python environment with Prefect installed, pointed at either an open-source Prefect server instance or Prefect Cloud.

    With your environment set up, start a flow runner in one terminal with the following command:

    python my_file_name greeter\n

    For example, with Prefect Cloud, you should see output like this:

    \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Your flow 'greeter' is being served and polling for scheduled runs!                              \u2502\n\u2502                                                                                                  \u2502\n\u2502 To trigger a run for this flow, use the following command:                                       \u2502\n\u2502                                                                                                  \u2502\n\u2502         $ prefect deployment run 'greeter/send-receive'                                          \u2502\n\u2502                                                                                                  \u2502\n\u2502 You can also run your flow via the Prefect UI:                                                   \u2502\n\u2502 https://app.prefect.cloud/account/...(a URL for your account)                                    \u2502\n\u2502                                                                                                  \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n

    Then start the sender process in another terminal:

    python my_file_name sender\n

    You should see output like this:

    11:38:41.800 | INFO    | prefect.engine - Created flow run 'gregarious-owl' for flow 'sender'\n11:38:41.802 | INFO    | Flow run 'gregarious-owl' - View at https://app.prefect.cloud/account/...\nWhat is your name?\n

    Type a name and press the enter key to see a greeting, and you'll see sending and receiving in action:

    What is your name? andrew\nHello, andrew!\n
    ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/dask-ray-task-runners/","title":"Dask and Ray Task Runners","text":"

    Task runners provide an execution environment for tasks. In a flow decorator, you can specify a task runner to run the tasks called in that flow.

    The default task runner is the ConcurrentTaskRunner.

    Use .submit to run your tasks asynchronously

    To run tasks asynchronously use the .submit method when you call them. If you call a task as you would normally in Python code it will run synchronously, even if you are calling the task within a flow that uses the ConcurrentTaskRunner, DaskTaskRunner, or RayTaskRunner.
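
    As a quick illustration (with a hypothetical task), the difference looks like this:

    from prefect import flow, task\n\n@task\ndef double(x):\n    return x * 2\n\n@flow\ndef my_flow():\n    double(2)         # runs synchronously and returns 4\n    double.submit(2)  # submitted to the task runner; returns a PrefectFuture\n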

    Many real-world data workflows benefit from true parallel, distributed task execution. For these use cases, the following Prefect-developed task runners for parallel task execution may be installed as Prefect Integrations.

    • DaskTaskRunner runs tasks requiring parallel execution using dask.distributed.
    • RayTaskRunner runs tasks requiring parallel execution using Ray.

    These task runners can spin up a local Dask cluster or Ray instance on the fly, or let you connect with a Dask or Ray environment you've set up separately. Then you can take advantage of massively parallel computing environments.

    Use Dask or Ray in your flows to choose the execution environment that fits your particular needs.

    To show you how they work, let's start small.

    Remote storage

    We recommend configuring remote file storage for task execution with DaskTaskRunner or RayTaskRunner. This ensures tasks executing in Dask or Ray have access to task result storage, particularly when accessing a Dask or Ray instance outside of your execution environment.
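
    A sketch of what this could look like, assuming the prefect-aws and prefect-dask integrations are installed and an S3Bucket block named MY_BLOCK_NAME already exists:

    from prefect import flow\nfrom prefect_aws.s3 import S3Bucket\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n# Task results go to shared remote storage so Dask workers outside this\n# environment can access them\n@flow(\n    task_runner=DaskTaskRunner(),\n    result_storage=S3Bucket.load(\"MY_BLOCK_NAME\"),  # hypothetical block name\n)\ndef my_flow():\n    ...\n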

    ","tags":["tasks","task runners","flow configuration","parallel execution","distributed execution","Dask","Ray"],"boost":2},{"location":"guides/dask-ray-task-runners/#configuring-a-task-runner","title":"Configuring a task runner","text":"

    You may have seen this briefly in a previous tutorial, but let's look a bit more closely at how you can configure a specific task runner for a flow.

    Let's start with the SequentialTaskRunner. This task runner runs all tasks synchronously and may be useful when used as a debugging tool in conjunction with async code.

    Let's start with this simple flow. We import the SequentialTaskRunner, specify a task_runner on the flow, and call the tasks with .submit().

    from prefect import flow, task\nfrom prefect.task_runners import SequentialTaskRunner\n\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n    print(f\"goodbye {name}\")\n\n@flow(task_runner=SequentialTaskRunner())\ndef greetings(names):\n    for name in names:\n        say_hello.submit(name)\n        say_goodbye.submit(name)\n\ngreetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n

    Save this as sequential_flow.py and run it in a terminal. You'll see output similar to the following:

    $ python sequential_flow.py\n16:51:17.967 | INFO    | prefect.engine - Created flow run 'humongous-mink' for flow 'greetings'\n16:51:17.967 | INFO    | Flow run 'humongous-mink' - Starting 'SequentialTaskRunner'; submitted tasks will be run sequentially...\n16:51:18.038 | INFO    | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-0' for task 'say_hello'\n16:51:18.038 | INFO    | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-0' immediately...\nhello arthur\n16:51:18.060 | INFO    | Task run 'say_hello-811087cd-0' - Finished in state Completed()\n16:51:18.107 | INFO    | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-0' for task 'say_goodbye'\n16:51:18.107 | INFO    | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-0' immediately...\ngoodbye arthur\n16:51:18.123 | INFO    | Task run 'say_goodbye-261e56a8-0' - Finished in state Completed()\n16:51:18.134 | INFO    | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-1' for task 'say_hello'\n16:51:18.134 | INFO    | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-1' immediately...\nhello trillian\n16:51:18.150 | INFO    | Task run 'say_hello-811087cd-1' - Finished in state Completed()\n16:51:18.159 | INFO    | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-1' for task 'say_goodbye'\n16:51:18.159 | INFO    | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-1' immediately...\ngoodbye trillian\n16:51:18.181 | INFO    | Task run 'say_goodbye-261e56a8-1' - Finished in state Completed()\n16:51:18.190 | INFO    | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-2' for task 'say_hello'\n16:51:18.190 | INFO    | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-2' immediately...\nhello ford\n16:51:18.210 | INFO    | Task run 'say_hello-811087cd-2' - Finished in state Completed()\n16:51:18.219 | INFO    | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-2' for task 'say_goodbye'\n16:51:18.219 | INFO    | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-2' immediately...\ngoodbye ford\n16:51:18.237 | INFO    | Task run 'say_goodbye-261e56a8-2' - Finished in state Completed()\n16:51:18.246 | INFO    | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-3' for task 'say_hello'\n16:51:18.246 | INFO    | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-3' immediately...\nhello marvin\n16:51:18.264 | INFO    | Task run 'say_hello-811087cd-3' - Finished in state Completed()\n16:51:18.273 | INFO    | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-3' for task 'say_goodbye'\n16:51:18.273 | INFO    | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-3' immediately...\ngoodbye marvin\n16:51:18.290 | INFO    | Task run 'say_goodbye-261e56a8-3' - Finished in state Completed()\n16:51:18.321 | INFO    | Flow run 'humongous-mink' - Finished in state Completed('All states completed.')\n

If we take out the log messages and look only at the printed output of the tasks, we see they're executed in sequential order:

    $ python sequential_flow.py\nhello arthur\ngoodbye arthur\nhello trillian\ngoodbye trillian\nhello ford\ngoodbye ford\nhello marvin\ngoodbye marvin\n
    ","tags":["tasks","task runners","flow configuration","parallel execution","distributed execution","Dask","Ray"],"boost":2},{"location":"guides/dask-ray-task-runners/#running-parallel-tasks-with-dask","title":"Running parallel tasks with Dask","text":"

    You could argue that this simple flow gains nothing from parallel execution, but let's roll with it so you can see just how simple it is to take advantage of the DaskTaskRunner.

    To configure your flow to use the DaskTaskRunner:

    1. Make sure the prefect-dask collection is installed by running pip install prefect-dask.
    2. In your flow code, import DaskTaskRunner from prefect_dask.task_runners.
    3. Assign it as the task runner when the flow is defined using the task_runner=DaskTaskRunner argument.
    4. Use the .submit method when calling functions.

This is the same flow as above, with a few minor changes to use DaskTaskRunner where we previously configured SequentialTaskRunner. Install prefect-dask, make these changes, then save the updated code as dask_flow.py.

    from prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n    print(f\"goodbye {name}\")\n\n@flow(task_runner=DaskTaskRunner())\ndef greetings(names):\n    for name in names:\n        say_hello.submit(name)\n        say_goodbye.submit(name)\n\nif __name__ == \"__main__\":\n    greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n

    Note that, because you're using DaskTaskRunner in a script, you must use if __name__ == \"__main__\": or you'll see warnings and errors.

    Now run dask_flow.py. If you get a warning about accepting incoming network connections, that's okay - everything is local in this example.

    $ python dask_flow.py\n19:29:03.798 | INFO    | prefect.engine - Created flow run 'fine-bison' for flow 'greetings'\n\n19:29:03.798 | INFO    | Flow run 'fine-bison' - Using task runner 'DaskTaskRunner'\n\n19:29:04.080 | INFO    | prefect.task_runner.dask - Creating a new Dask cluster with `distributed.deploy.local.LocalCluster`\n16:54:18.465 | INFO    | prefect.engine - Created flow run 'radical-finch' for flow 'greetings'\n16:54:18.465 | INFO    | Flow run 'radical-finch' - Starting 'DaskTaskRunner'; submitted tasks will be run concurrently...\n16:54:18.465 | INFO    | prefect.task_runner.dask - Creating a new Dask cluster with `distributed.deploy.local.LocalCluster`\n16:54:19.811 | INFO    | prefect.task_runner.dask - The Dask dashboard is available at <http://127.0.0.1:8787/status>\n16:54:19.881 | INFO    | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-0' for task 'say_hello'\n16:54:20.364 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-0' for execution.\n16:54:20.379 | INFO    | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-0' for task 'say_goodbye'\n16:54:20.386 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-0' for execution.\n16:54:20.397 | INFO    | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-1' for task 'say_hello'\n16:54:20.401 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-1' for execution.\n16:54:20.417 | INFO    | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-1' for task 'say_goodbye'\n16:54:20.423 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-1' for execution.\n16:54:20.443 | INFO    | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-2' for task 'say_hello'\n16:54:20.449 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-2' for execution.\n16:54:20.462 | INFO    | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-2' for task 'say_goodbye'\n16:54:20.474 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-2' for execution.\n16:54:20.500 | INFO    | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-3' for task 'say_hello'\n16:54:20.511 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-3' for execution.\n16:54:20.544 | INFO    | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-3' for task 'say_goodbye'\n16:54:20.555 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-3' for execution.\nhello arthur\ngoodbye ford\ngoodbye arthur\nhello ford\ngoodbye marvin\ngoodbye trillian\nhello trillian\nhello marvin\n

    DaskTaskRunner automatically creates a local Dask cluster, then starts executing all of the tasks in parallel. The results do not return in the same order as the sequential code above.
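
If you need more control over the temporary cluster that DaskTaskRunner creates, it accepts configuration such as cluster_kwargs. Here's a minimal sketch assuming a local cluster; the worker counts below are illustrative values, not recommendations:

from prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n# Size the temporary local Dask cluster created by the task runner.\n# The n_workers and threads_per_worker values below are illustrative.\n@flow(task_runner=DaskTaskRunner(cluster_kwargs={\"n_workers\": 2, \"threads_per_worker\": 1}))\ndef greetings(names):\n    for name in names:\n        say_hello.submit(name)\n\nif __name__ == \"__main__\":\n    greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n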

    Notice what happens if you do not use the submit method when calling tasks:

    from prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n\n@task\ndef say_goodbye(name):\n    print(f\"goodbye {name}\")\n\n\n@flow(task_runner=DaskTaskRunner())\ndef greetings(names):\n    for name in names:\n        say_hello(name)\n        say_goodbye(name)\n\n\nif __name__ == \"__main__\":\n    greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n
    $ python dask_flow.py\n\n16:57:34.534 | INFO    | prefect.engine - Created flow run 'papaya-honeybee' for flow 'greetings'\n16:57:34.534 | INFO    | Flow run 'papaya-honeybee' - Starting 'DaskTaskRunner'; submitted tasks will be run concurrently...\n16:57:34.535 | INFO    | prefect.task_runner.dask - Creating a new Dask cluster with `distributed.deploy.local.LocalCluster`\n16:57:35.715 | INFO    | prefect.task_runner.dask - The Dask dashboard is available at <http://127.0.0.1:8787/status>\n16:57:35.787 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-0' for task 'say_hello'\n16:57:35.788 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-0' immediately...\nhello arthur\n16:57:35.810 | INFO    | Task run 'say_hello-811087cd-0' - Finished in state Completed()\n16:57:35.820 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-0' for task 'say_goodbye'\n16:57:35.820 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-0' immediately...\ngoodbye arthur\n16:57:35.840 | INFO    | Task run 'say_goodbye-261e56a8-0' - Finished in state Completed()\n16:57:35.849 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-1' for task 'say_hello'\n16:57:35.849 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-1' immediately...\nhello trillian\n16:57:35.869 | INFO    | Task run 'say_hello-811087cd-1' - Finished in state Completed()\n16:57:35.878 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-1' for task 'say_goodbye'\n16:57:35.878 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-1' immediately...\ngoodbye trillian\n16:57:35.894 | INFO    | Task run 'say_goodbye-261e56a8-1' - Finished in state Completed()\n16:57:35.907 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-2' for task 'say_hello'\n16:57:35.907 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-2' immediately...\nhello ford\n16:57:35.924 | INFO    | Task run 'say_hello-811087cd-2' - Finished in state Completed()\n16:57:35.933 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-2' for task 'say_goodbye'\n16:57:35.933 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-2' immediately...\ngoodbye ford\n16:57:35.951 | INFO    | Task run 'say_goodbye-261e56a8-2' - Finished in state Completed()\n16:57:35.959 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-3' for task 'say_hello'\n16:57:35.959 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-3' immediately...\nhello marvin\n16:57:35.976 | INFO    | Task run 'say_hello-811087cd-3' - Finished in state Completed()\n16:57:35.985 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-3' for task 'say_goodbye'\n16:57:35.985 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-3' immediately...\ngoodbye marvin\n16:57:36.004 | INFO    | Task run 'say_goodbye-261e56a8-3' - Finished in state Completed()\n16:57:36.289 | INFO    | Flow run 'papaya-honeybee' - Finished in state Completed('All states completed.')\n

    The tasks are not submitted to the DaskTaskRunner and are run sequentially.

    ","tags":["tasks","task runners","flow configuration","parallel execution","distributed execution","Dask","Ray"],"boost":2},{"location":"guides/dask-ray-task-runners/#running-parallel-tasks-with-ray","title":"Running parallel tasks with Ray","text":"

    To demonstrate the ability to flexibly apply the task runner appropriate for your workflow, use the same flow as above, with a few minor changes to use the RayTaskRunner where we previously configured DaskTaskRunner.

    To configure your flow to use the RayTaskRunner:

    1. Make sure the prefect-ray collection is installed by running pip install prefect-ray.
    2. In your flow code, import RayTaskRunner from prefect_ray.task_runners.
    3. Assign it as the task runner when the flow is defined using the task_runner=RayTaskRunner argument.

    Ray environment limitations

While we're excited about parallel task execution via Ray in Prefect, there are some inherent limitations with Ray you should be aware of:

    • Support for Python 3.11 is experimental.
• Ray does not support non-x86/64 architectures such as ARM/M1 processors with installation from pip alone, so it will be skipped during installation of Prefect on those platforms. It is possible to manually install the blocking component with conda. See the Ray documentation for instructions.
    • Ray's Windows support is currently in beta.

    See the Ray installation documentation for further compatibility information.

    Save this code in ray_flow.py.

    from prefect import flow, task\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n    print(f\"goodbye {name}\")\n\n@flow(task_runner=RayTaskRunner())\ndef greetings(names):\n    for name in names:\n        say_hello.submit(name)\n        say_goodbye.submit(name)\n\nif __name__ == \"__main__\":\n    greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n

Now run ray_flow.py. RayTaskRunner automatically creates a local Ray instance, then immediately starts executing all of the tasks in parallel. If you have an existing Ray instance, you can provide its address as a parameter to run tasks on that instance. See Running tasks on Ray for details.
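
As a rough sketch of connecting to an existing cluster (the address below is a placeholder you would replace with your own Ray head node):

from prefect import flow, task\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n# Connect to an already-running Ray cluster instead of creating a local one.\n# Replace the placeholder address with the address of your Ray head node.\n@flow(task_runner=RayTaskRunner(address=\"ray://<head-node-host>:10001\"))\ndef greetings(names):\n    for name in names:\n        say_hello.submit(name)\n\nif __name__ == \"__main__\":\n    greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n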

    ","tags":["tasks","task runners","flow configuration","parallel execution","distributed execution","Dask","Ray"],"boost":2},{"location":"guides/dask-ray-task-runners/#using-multiple-task-runners","title":"Using multiple task runners","text":"

    Many workflows include a variety of tasks, and not all of them benefit from parallel execution. You'll most likely want to use the Dask or Ray task runners and spin up their respective resources only for those tasks that need them.

    Because task runners are specified on flows, you can assign different task runners to tasks by using subflows to organize those tasks.

    This example uses the same tasks as the previous examples, but on the parent flow greetings() we use the default ConcurrentTaskRunner. Then we call a ray_greetings() subflow that uses the RayTaskRunner to execute the same tasks in a Ray instance.

    from prefect import flow, task\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n    print(f\"goodbye {name}\")\n\n@flow(task_runner=RayTaskRunner())\ndef ray_greetings(names):\n    for name in names:\n        say_hello.submit(name)\n        say_goodbye.submit(name)\n\n@flow()\ndef greetings(names):\n    for name in names:\n        say_hello.submit(name)\n        say_goodbye.submit(name)\n    ray_greetings(names)\n\nif __name__ == \"__main__\":\n    greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n

    If you save this as ray_subflow.py and run it, you'll see that the flow greetings runs as you'd expect for a concurrent flow, then flow ray-greetings spins up a Ray instance to run the tasks again.

    ","tags":["tasks","task runners","flow configuration","parallel execution","distributed execution","Dask","Ray"],"boost":2},{"location":"guides/docker/","title":"Running Flows with Docker","text":"

    In the Deployments tutorial, we looked at serving a flow that enables scheduling or creating flow runs via the Prefect API.

    With our Python script in hand, we can build a Docker image for our script, allowing us to serve our flow in various remote environments. We'll use Kubernetes in this guide, but you can use any Docker-compatible infrastructure.

    In this guide we'll:

    • Write a Dockerfile to build an image that stores our Prefect flow code.
    • Build a Docker image for our flow.
    • Deploy and run our Docker image on a Kubernetes cluster.
• Look at the Prefect-maintained Docker images and discuss options for their use.

    Note that in this guide we'll create a Dockerfile from scratch. Alternatively, Prefect makes it convenient to build a Docker image as part of deployment creation. You can even include environment variables and specify additional Python packages to install at runtime.

    If creating a deployment with a prefect.yaml file, the build step makes it easy to customize your Docker image and push it to the registry of your choice. See an example here.

    Deployment creation with a Python script that includes flow.deploy similarly allows you to customize your Docker image with keyword arguments as shown below.

    ...\n\nif __name__ == \"__main__\":\n    hello_world.deploy(\n        name=\"my-first-deployment\",\n        work_pool_name=\"above-ground\",\n        image='my_registry/hello_world:demo',\n        job_variables={\"env\": { \"EXTRA_PIP_PACKAGES\": \"boto3\" } }\n    )\n
    ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#prerequisites","title":"Prerequisites","text":"

    To complete this guide, you'll need the following:

    • A Python script that defines and serves a flow.
    • We'll use the flow script and deployment from the Deployments tutorial.
    • Access to a running Prefect API server.
    • You can sign up for a forever free Prefect Cloud account or run a Prefect API server locally with prefect server start.
    • Docker Desktop installed on your machine.
    ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#writing-a-dockerfile","title":"Writing a Dockerfile","text":"

    First let's make a clean directory to work from, prefect-docker-guide.

    mkdir prefect-docker-guide\ncd prefect-docker-guide\n

    In this directory, we'll create a sub-directory named flows and put our flow script from the Deployments tutorial in it.

    mkdir flows\ncd flows\ntouch prefect-docker-guide-flow.py\n

    Here's the flow code for reference:

    prefect-docker-guide-flow.py
    import httpx\nfrom prefect import flow\n\n\n@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info.serve(name=\"prefect-docker-guide\")\n

    The next file we'll add to the prefect-docker-guide directory is a requirements.txt. We'll include all dependencies required for our prefect-docker-guide-flow.py script in the Docker image we'll build.

    # ensure you run this line from the top level of the `prefect-docker-guide` directory\ntouch requirements.txt\n

    Here's what we'll put in our requirements.txt file:

    requirements.txt
    prefect>=2.12.0\nhttpx\n

    Next, we'll create a Dockerfile that we'll use to create a Docker image that will also store the flow code.

    touch Dockerfile\n

    We'll add the following content to our Dockerfile:

    Dockerfile
    # We're using the latest version of Prefect with Python 3.10\nFROM prefecthq/prefect:2-python3.10\n\n# Add our requirements.txt file to the image and install dependencies\nCOPY requirements.txt .\nRUN pip install -r requirements.txt --trusted-host pypi.python.org --no-cache-dir\n\n# Add our flow code to the image\nCOPY flows /opt/prefect/flows\n\n# Run our flow script when the container starts\nCMD [\"python\", \"flows/prefect-docker-guide-flow.py\"]\n
    ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#building-a-docker-image","title":"Building a Docker image","text":"

    Now that we have a Dockerfile we can build our image by running:

    docker build -t prefect-docker-guide-image .\n

    We can check that our build worked by running a container from our new image.

Cloud

Our container will need an API URL and an API key to communicate with Prefect Cloud.

    • You can get an API key from the API Keys section of the user settings in the Prefect UI.

    • You can get your API URL by running prefect config view and copying the PREFECT_API_URL value.

    We'll provide both these values to our container by passing them as environment variables with the -e flag.

    docker run -e PREFECT_API_URL=YOUR_PREFECT_API_URL -e PREFECT_API_KEY=YOUR_API_KEY prefect-docker-guide-image\n

    After running the above command, the container should start up and serve the flow within the container!

Self-hosted

Our container will need an API URL and network access to communicate with the Prefect API.

    For this guide, we'll assume the Prefect API is running on the same machine that we'll run our container on and the Prefect API was started with prefect server start. If you're running a different setup, check out the Hosting a Prefect server guide for information on how to connect to your Prefect API instance.

    To ensure that our flow container can communicate with the Prefect API, we'll set our PREFECT_API_URL to http://host.docker.internal:4200/api. If you're running Linux, you'll need to set your PREFECT_API_URL to http://localhost:4200/api and use the --network=\"host\" option instead.

    docker run --network=\"host\" -e PREFECT_API_URL=http://host.docker.internal:4200/api prefect-docker-guide-image\n

    After running the above command, the container should start up and serve the flow within the container!

    ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#deploying-to-a-remote-environment","title":"Deploying to a remote environment","text":"

    Now that we have a Docker image with our flow code embedded, we can deploy it to a remote environment!

    For this guide, we'll simulate a remote environment by using Kubernetes locally with Docker Desktop. You can use the instructions provided by Docker to set up Kubernetes locally.

    ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#creating-a-kubernetes-deployment-manifest","title":"Creating a Kubernetes deployment manifest","text":"

    To ensure the process serving our flow is always running, we'll create a Kubernetes deployment. If our flow's container ever crashes, Kubernetes will automatically restart it, ensuring that we won't miss any scheduled runs.

    First, we'll create a deployment-manifest.yaml file in our prefect-docker-guide directory:

    touch deployment-manifest.yaml\n

    And we'll add the following content to our deployment-manifest.yaml file:

Cloud deployment-manifest.yaml
    apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: prefect-docker-guide\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      flow: get-repo-info\n  template:\n    metadata:\n      labels:\n        flow: get-repo-info\n    spec:\n      containers:\n      - name: flow-container\n        image: prefect-docker-guide-image:latest\n        env:\n        - name: PREFECT_API_URL\n          value: YOUR_PREFECT_API_URL\n        - name: PREFECT_API_KEY\n          value: YOUR_API_KEY\n        # Never pull the image because we're using a local image\n        imagePullPolicy: Never\n

    Keep your API key secret

    In the above manifest we are passing in the Prefect API URL and API key as environment variables. This approach is simple, but it is not secure. If you are deploying your flow to a remote cluster, you should use a Kubernetes secret to store your API key.

Self-hosted deployment-manifest.yaml
apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: prefect-docker-guide\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      flow: get-repo-info\n  template:\n    metadata:\n      labels:\n        flow: get-repo-info\n    spec:\n      containers:\n      - name: flow-container\n        image: prefect-docker-guide-image:latest\n        env:\n        - name: PREFECT_API_URL\n          value: http://host.docker.internal:4200/api\n        # Never pull the image because we're using a local image\n        imagePullPolicy: Never\n

    Linux users

    If you're running Linux, you'll need to set your PREFECT_API_URL to use the IP address of your machine instead of host.docker.internal.

This manifest defines how our image will run when deployed in our Kubernetes cluster. Note that we will be running a single replica of our flow container. If you want to run multiple replicas of your flow container, to keep up with an active schedule or because your flow is resource-intensive, you can increase the replicas value.

    ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#deploying-our-flow-to-the-cluster","title":"Deploying our flow to the cluster","text":"

    Now that we have a deployment manifest, we can deploy our flow to the cluster by running:

    kubectl apply -f deployment-manifest.yaml\n

    We can monitor the status of our Kubernetes deployment by running:

    kubectl get deployments\n

    Once the deployment has successfully started, we can check the logs of our flow container by running the following:

    kubectl logs -l flow=get-repo-info\n

    Now that we're serving our flow in our cluster, we can trigger a flow run by running:

    prefect deployment run get-repo-info/prefect-docker-guide\n

    If we navigate to the URL provided by the prefect deployment run command, we can follow the flow run via the logs in the Prefect UI!

    ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#prefect-maintained-docker-images","title":"Prefect-maintained Docker images","text":"

    Every release of Prefect results in several new Docker images. These images are all named prefecthq/prefect and their tags identify their differences.

    ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#image-tags","title":"Image tags","text":"

    When a release is published, images are built for all of Prefect's supported Python versions. These images are tagged to identify the combination of Prefect and Python versions contained. Additionally, we have \"convenience\" tags which are updated with each release to facilitate automatic updates.

    For example, when release 2.11.5 is published:

    1. Images with the release packaged are built for each supported Python version (3.8, 3.9, 3.10, 3.11) with both standard Python and Conda.
2. These images are tagged with the full description, e.g. prefect:2.11.5-python3.10 and prefect:2.11.5-python3.10-conda.
    3. For users that want more specific pins, these images are also tagged with the SHA of the git commit of the release, e.g. sha-88a7ff17a3435ec33c95c0323b8f05d7b9f3f6d2-python3.10
4. For users that want to be on the latest 2.11.x release, receiving patch updates, we update a tag without the patch version to this release, e.g. prefect:2.11-python3.10.
5. For users that want to be on the latest 2.x.y release, receiving minor version updates, we update a tag without the minor or patch version to this release, e.g. prefect:2-python3.10.
6. Finally, for users who want the latest 2.x.y release without specifying a Python version, we update 2-latest to the image for our highest supported Python version, which in this case would be equivalent to prefect:2.11.5-python3.10.

    Choose image versions carefully

    It's a good practice to use Docker images with specific Prefect versions in production.

    Use care when employing images that automatically update to new versions (such as prefecthq/prefect:2-python3.11 or prefecthq/prefect:2-latest).

    ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#standard-python","title":"Standard Python","text":"

    Standard Python images are based on the official Python slim images, e.g. python:3.10-slim.

Tag | Prefect Version | Python Version
2-latest | most recent v2 PyPi version | 3.10
2-python3.11 | most recent v2 PyPi version | 3.11
2-python3.10 | most recent v2 PyPi version | 3.10
2-python3.9 | most recent v2 PyPi version | 3.9
2-python3.8 | most recent v2 PyPi version | 3.8
2.X-python3.11 | 2.X | 3.11
2.X-python3.10 | 2.X | 3.10
2.X-python3.9 | 2.X | 3.9
2.X-python3.8 | 2.X | 3.8
sha-<hash>-python3.11 | <hash> | 3.11
sha-<hash>-python3.10 | <hash> | 3.10
sha-<hash>-python3.9 | <hash> | 3.9
sha-<hash>-python3.8 | <hash> | 3.8

","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#conda-flavored-python","title":"Conda-flavored Python","text":"

    Conda flavored images are based on continuumio/miniconda3. Prefect is installed into a conda environment named prefect.

Tag | Prefect Version | Python Version
2-latest-conda | most recent v2 PyPi version | 3.10
2-python3.11-conda | most recent v2 PyPi version | 3.11
2-python3.10-conda | most recent v2 PyPi version | 3.10
2-python3.9-conda | most recent v2 PyPi version | 3.9
2-python3.8-conda | most recent v2 PyPi version | 3.8
2.X-python3.11-conda | 2.X | 3.11
2.X-python3.10-conda | 2.X | 3.10
2.X-python3.9-conda | 2.X | 3.9
2.X-python3.8-conda | 2.X | 3.8
sha-<hash>-python3.11-conda | <hash> | 3.11
sha-<hash>-python3.10-conda | <hash> | 3.10
sha-<hash>-python3.9-conda | <hash> | 3.9
sha-<hash>-python3.8-conda | <hash> | 3.8

","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#building-your-own-image","title":"Building your own image","text":"

    If your flow relies on dependencies not found in the default prefecthq/prefect images, you may want to build your own image. You can either base it off of one of the provided prefecthq/prefect images, or build your own image. See the Work pool deployment guide for discussion of how Prefect can help you build custom images with dependencies specified in a requirements.txt file.

    By default, Prefect work pools that use containers refer to the 2-latest image. You can specify another image at work pool creation. The work pool image choice can be overridden in individual deployments.

    ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#extending-the-prefecthqprefect-image-manually","title":"Extending the prefecthq/prefect image manually","text":"

    Here we provide an example Dockerfile for building an image based on prefecthq/prefect:2-latest, but with scikit-learn installed.

    FROM prefecthq/prefect:2-latest\n\nRUN pip install scikit-learn\n
    ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#choosing-an-image-strategy","title":"Choosing an image strategy","text":"

    The options described above have different complexity (and performance) characteristics. For choosing a strategy, we provide the following recommendations:

    • If your flow only makes use of tasks defined in the same file as the flow, or tasks that are part of prefect itself, then you can rely on the default provided prefecthq/prefect image.

• If your flow requires a few extra dependencies found on PyPI, you can use the default prefecthq/prefect image and set prefect.deployments.steps.pip_install_requirements: in the pull step to install these dependencies at runtime.

    • If the installation process requires compiling code or other expensive operations, you may be better off building a custom image instead.

    • If your flow (or flows) require extra dependencies or shared libraries, we recommend building a shared custom image with all the extra dependencies and shared task definitions you need. Your flows can then all rely on the same image, but have their source stored externally. This option can ease development, as the shared image only needs to be rebuilt when dependencies change, not when the flow source changes.

    ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#next-steps","title":"Next steps","text":"

We only served a single flow in this guide, but you can extend this setup to serve multiple flows from a single Docker image by updating your Python script to use flow.to_deployment and serve to serve multiple flows or the same flow with different configurations.
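
As a rough sketch of what that could look like (the second flow, my_other_flow, is hypothetical and included only to show the pattern):

from prefect import flow, serve\n\n\n@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    print(f\"Fetching stats for {repo_name}...\")\n\n\n@flow(log_prints=True)\ndef my_other_flow():\n    print(\"Doing other work...\")\n\n\nif __name__ == \"__main__\":\n    # Convert each flow to a deployment and serve them together from a\n    # single long-running process, all packaged in one Docker image.\n    serve(\n        get_repo_info.to_deployment(name=\"prefect-docker-guide\"),\n        my_other_flow.to_deployment(name=\"other-work\"),\n    )\n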

    To learn more about deploying flows, check out the Deployments concept doc!

    For advanced infrastructure requirements, such as executing each flow run within its own dedicated Docker container, learn more in the Work pool deployment guide.

    ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/global-concurrency-limits/","title":"Global Concurrency Limits and Rate Limits","text":"

    Global concurrency limits allow you to manage execution efficiently, controlling how many tasks, flows, or other operations can run simultaneously. They are ideal when optimizing resource usage, preventing bottlenecks, and customizing task execution are priorities.

    Clarification on use of the term 'tasks'

    In the context of global concurrency and rate limits, \"tasks\" refers not specifically to Prefect tasks, but to concurrent units of work in general, such as those managed by an event loop or TaskGroup in asynchronous programming. These general \"tasks\" could include Prefect tasks when they are part of an asynchronous execution environment.

    Rate Limits ensure system stability by governing the frequency of requests or operations. They are suitable for preventing overuse, ensuring fairness, and handling errors gracefully.

    When selecting between Concurrency and Rate Limits, consider your primary goal. Choose Concurrency Limits for resource optimization and task management. Choose Rate Limits to maintain system stability and fair access to services.

The core difference between a rate limit and a concurrency limit is the way in which slots are released. With a rate limit, slots are released at a controlled rate determined by slot_decay_per_second, whereas with a concurrency limit, slots are released when the concurrency manager exits.

    ","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#managing-global-concurrency-limits-and-rate-limits","title":"Managing Global concurrency limits and rate limits","text":"

    You can create, read, edit, and delete concurrency limits via the Prefect UI.

    When creating a concurrency limit, you can specify the following parameters:

    • Name: The name of the concurrency limit. This name is also how you'll reference the concurrency limit in your code. Special characters, such as /, %, &, >, <, are not allowed.
    • Concurrency Limit: The maximum number of slots that can be occupied on this concurrency limit.
    • Slot Decay Per Second: Controls the rate at which slots are released when the concurrency limit is used as a rate limit. This value must be configured when using the rate_limit function.
    • Active: Whether or not the concurrency limit is in an active state.
    ","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#active-vs-inactive-limits","title":"Active vs inactive limits","text":"

    Global concurrency limits can be in either an active or inactive state.

    • Active: In this state, slots can be occupied, and code execution will be blocked when slots are unable to be acquired.
    • Inactive: In this state, slots will not be occupied, and code execution will not be blocked. Concurrency enforcement occurs only when you activate the limit.
    ","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#slot-decay","title":"Slot decay","text":"

    Global concurrency limits can be configured with slot decay. This is used when the concurrency limit is used as a rate limit, and it governs the pace at which slots are released or become available for reuse after being occupied. These slots effectively represent the concurrency capacity within a specific concurrency limit. The concept is best understood as the rate at which these slots \"decay\" or refresh.

    To configure slot decay, you can set the slot_decay_per_second parameter when defining or adjusting a concurrency limit.

    For practical use, consider the following:

    • Higher values: Setting slot_decay_per_second to a higher value, such as 5.0, results in slots becoming available relatively quickly. In this scenario, a slot that was occupied by a task will free up after just 0.2 (1.0 / 5.0) seconds.

• Lower values: Conversely, setting slot_decay_per_second to a lower value, like 0.1, causes slots to become available more slowly. In this scenario, it would take 10 (1.0 / 0.1) seconds for a slot to become available again after occupancy.

    Slot decay provides fine-grained control over the availability of slots, enabling you to optimize the rate of your workflow based on your specific requirements.

    ","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#using-the-concurrency-context-manager","title":"Using the concurrency context manager","text":"

The concurrency context manager allows control over the maximum number of concurrent operations. You can select either the synchronous (sync) or asynchronous (async) version, depending on your use case. Here's how to use it:

    Concurrency limits are implicitly created

    When using the concurrency context manager, the concurrency limit you use will be created, in an inactive state, if it does not already exist.

    Sync

    from prefect import flow, task\nfrom prefect.concurrency.sync import concurrency\n\n\n@task\ndef process_data(x, y):\n    with concurrency(\"database\", occupy=1):\n        return x + y\n\n\n@flow\ndef my_flow():\n    for x, y in [(1, 2), (2, 3), (3, 4), (4, 5)]:\n        process_data.submit(x, y)\n\n\nif __name__ == \"__main__\":\n    my_flow()\n

    Async

    import asyncio\nfrom prefect import flow, task\nfrom prefect.concurrency.asyncio import concurrency\n\n\n@task\nasync def process_data(x, y):\n    async with concurrency(\"database\", occupy=1):\n        return x + y\n\n\n@flow\nasync def my_flow():\n    for x, y in [(1, 2), (2, 3), (3, 4), (4, 5)]:\n        await process_data.submit(x, y)\n\n\nif __name__ == \"__main__\":\n    asyncio.run(my_flow())\n
    1. The code imports the necessary modules and the concurrency context manager. Use the prefect.concurrency.sync module for sync usage and the prefect.concurrency.asyncio module for async usage.
    2. It defines a process_data task, taking x and y as input arguments. Inside this task, the concurrency context manager controls concurrency, using the database concurrency limit and occupying one slot. If another task attempts to run with the same limit and no slots are available, that task will be blocked until a slot becomes available.
    3. A flow named my_flow is defined. Within this flow, it iterates through a list of tuples, each containing pairs of x and y values. For each pair, the process_data task is submitted with the corresponding x and y values for processing.
    ","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#using-rate_limit","title":"Using rate_limit","text":"

    The Rate Limit feature provides control over the frequency of requests or operations, ensuring responsible usage and system stability. Depending on your requirements, you can utilize rate_limit to govern both synchronous (sync) and asynchronous (async) operations. Here's how to make the most of it:

    Slot decay

    When using the rate_limit function the concurrency limit you use must have a slot decay configured.

    Sync

    from prefect import flow, task\nfrom prefect.concurrency.sync import rate_limit\n\n\n@task\ndef make_http_request():\n    rate_limit(\"rate-limited-api\")\n    print(\"Making an HTTP request...\")\n\n\n@flow\ndef my_flow():\n    for _ in range(10):\n        make_http_request.submit()\n\n\nif __name__ == \"__main__\":\n    my_flow()\n

    Async

    import asyncio\n\nfrom prefect import flow, task\nfrom prefect.concurrency.asyncio import rate_limit\n\n\n@task\nasync def make_http_request():\n    await rate_limit(\"rate-limited-api\")\n    print(\"Making an HTTP request...\")\n\n\n@flow\nasync def my_flow():\n    for _ in range(10):\n        await make_http_request.submit()\n\n\nif __name__ == \"__main__\":\n    asyncio.run(my_flow())\n
    1. The code imports the necessary modules and the rate_limit function. Use the prefect.concurrency.sync module for sync usage and the prefect.concurrency.asyncio module for async usage.
    2. It defines a make_http_request task. Inside this task, the rate_limit function is used to ensure that the requests are made at a controlled pace.
    3. A flow named my_flow is defined. Within this flow the make_http_request task is submitted 10 times.
    ","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#using-concurrency-and-rate_limit-outside-of-a-flow","title":"Using concurrency and rate_limit outside of a flow","text":"

concurrency and rate_limit can be used outside of a flow to control concurrency and rate limits for any operation.

    import asyncio\n\nfrom prefect.concurrency.asyncio import rate_limit\n\n\nasync def main():\n    for _ in range(10):\n        await rate_limit(\"rate-limited-api\")\n        print(\"Making an HTTP request...\")\n\n\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n
    ","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#use-cases","title":"Use cases","text":"","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#throttling-task-submission","title":"Throttling task submission","text":"

Throttle task submission to avoid overloading resources, to comply with external rate limits, or to ensure a steady, controlled flow of work.

    In this scenario the rate_limit function is used to throttle the submission of tasks. The rate limit acts as a bottleneck, ensuring that tasks are submitted at a controlled rate, governed by the slot_decay_per_second setting on the associated concurrency limit.

    from prefect import flow, task\nfrom prefect.concurrency.sync import rate_limit\n\n\n@task\ndef my_task(i):\n    return i\n\n\n@flow\ndef my_flow():\n    for _ in range(100):\n        rate_limit(\"slow-my-flow\", occupy=1)\n        my_task.submit(1)\n\n\nif __name__ == \"__main__\":\n    my_flow()\n
    ","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#managing-database-connections","title":"Managing database connections","text":"

    Managing the maximum number of concurrent database connections to avoid exhausting database resources.

In this scenario we've set up a concurrency limit named database and given it a maximum concurrency limit that matches the maximum number of database connections we want to allow. We then use the concurrency context manager to control the number of database connections allowed at any one time.

from prefect import flow, task\nfrom prefect.concurrency.sync import concurrency\nimport psycopg2\n\n@task\ndef database_query(query):\n    # Here we request a single slot on the 'database' concurrency limit. This\n    # will block in the case that all of the database connections are in use\n    # ensuring that we never exceed the maximum number of database connections.\n    with concurrency(\"database\", occupy=1):\n        connection = psycopg2.connect(\"<connection_string>\")\n        cursor = connection.cursor()\n        cursor.execute(query)\n        result = cursor.fetchall()\n        connection.close()\n        return result\n\n@flow\ndef my_flow():\n    queries = [\"SELECT * FROM table1\", \"SELECT * FROM table2\", \"SELECT * FROM table3\"]\n\n    for query in queries:\n        database_query.submit(query)\n\nif __name__ == \"__main__\":\n    my_flow()\n
    ","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#parallel-data-processing","title":"Parallel data processing","text":"

    Limiting the maximum number of parallel processing tasks.

    In this scenario we want to limit the number of process_data tasks to five at any one time. We do this by using the concurrency context manager to request five slots on the data-processing concurrency limit. This will block until five slots are free and then submit five more tasks, ensuring that we never exceed the maximum number of parallel processing tasks.

    import asyncio\nfrom prefect.concurrency.sync import concurrency\n\n\nasync def process_data(data):\n    print(f\"Processing: {data}\")\n    await asyncio.sleep(1)\n    return f\"Processed: {data}\"\n\n\nasync def main():\n    data_items = list(range(100))\n    processed_data = []\n\n    while data_items:\n        with concurrency(\"data-processing\", occupy=5):\n            chunk = [data_items.pop() for _ in range(5)]\n            processed_data += await asyncio.gather(\n                *[process_data(item) for item in chunk]\n            )\n\n    print(processed_data)\n\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n
    ","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/host/","title":"Hosting a Prefect server instance","text":"

    After you install Prefect you have a Python SDK client that can communicate with Prefect Cloud, the platform hosted by Prefect. You also have an API server instance backed by a database and a UI.

    In this section you'll learn how to host your own Prefect server instance. If you would like to host a Prefect server instance on Kubernetes, check out the prefect-server Helm chart.

    Spin up a local Prefect server UI by running the prefect server start CLI command in the terminal:

    prefect server start\n

    Open the URL for the Prefect server UI (http://127.0.0.1:4200 by default) in a browser.

    Shut down the Prefect server with ctrl + c in the terminal.","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#differences-between-a-self-hosted-prefect-server-instance-and-prefect-cloud","title":"Differences between a self-hosted Prefect server instance and Prefect Cloud","text":"

    A self-hosted Prefect server instance and Prefect Cloud share a common set of features. Prefect Cloud includes the following additional features:

    • Workspaces \u2014 isolated environments to organize your flows, deployments, and flow runs.
    • Automations \u2014 configure triggers, actions, and notifications in response to real-time monitoring events.
    • Email notifications \u2014 send email alerts from Prefect's servers based on automation triggers.
    • Service accounts \u2014 configure API access for running workers or executing flow runs on remote infrastructure.
    • Custom role-based access controls (RBAC) \u2014 assign users granular permissions to perform activities within an account or workspace.
    • Single Sign-on (SSO) \u2014 authentication using your identity provider.
    • Audit Log \u2014 a record of user activities to monitor security and compliance.

    You can read more about Prefect Cloud in the Cloud section.

    ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#configuring-a-prefect-server-instance","title":"Configuring a Prefect server instance","text":"

    Go to your terminal session and run this command to set the API URL to point to a Prefect server instance:

    prefect config set PREFECT_API_URL=\"http://127.0.0.1:4200/api\"\n

    PREFECT_API_URL required when running Prefect inside a container

    You must set the API server address to use Prefect within a container, such as a Docker container.

You can save the API server address in a Prefect profile. Whenever that profile is active, the API endpoint will be at that address.

    See Profiles & Configuration for more information on profiles and configurable Prefect settings.

    ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#prefect-database","title":"Prefect database","text":"

    The Prefect database persists data to track the state of your flow runs and related Prefect concepts, including:

    • Flow run and task run state
    • Run history
    • Logs
    • Deployments
    • Flow and task run concurrency limits
    • Storage blocks for flow and task results
    • Variables
    • Artifacts
    • Work pool status

    Currently Prefect supports the following databases:

    • SQLite: The default in Prefect, and our recommendation for lightweight, single-server deployments. SQLite requires essentially no setup.
    • PostgreSQL: Best for connecting to external databases, but does require additional setup (such as Docker). Prefect uses the pg_trgm extension, so it must be installed and enabled.
    ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#using-the-database","title":"Using the database","text":"

    A local SQLite database is the default database and is configured upon Prefect installation. The database is located at ~/.prefect/prefect.db by default.

    To reset your database, run the CLI command:

    prefect server database reset -y\n

    This command will clear all data and reapply the schema.

    ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#database-settings","title":"Database settings","text":"

    Prefect provides several settings for configuring the database. Here are the default settings:

    PREFECT_API_DATABASE_CONNECTION_URL='sqlite+aiosqlite:///${PREFECT_HOME}/prefect.db'\nPREFECT_API_DATABASE_ECHO='False'\nPREFECT_API_DATABASE_MIGRATE_ON_START='True'\nPREFECT_API_DATABASE_PASSWORD='None'\n

    You can save a setting to your active Prefect profile with prefect config set.

    ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#configuring-a-postgresql-database","title":"Configuring a PostgreSQL database","text":"

    To connect Prefect to a PostgreSQL database, you can set the following environment variable:

    prefect config set PREFECT_API_DATABASE_CONNECTION_URL=\"postgresql+asyncpg://postgres:yourTopSecretPassword@localhost:5432/prefect\"\n

    The above environment variable assumes that:

    • You have a username called postgres
    • Your password is set to yourTopSecretPassword
    • Your database runs on the same host as the Prefect server instance, localhost
    • You use the default PostgreSQL port 5432
    • Your PostgreSQL instance has a database called prefect
    ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#quick-start-configuring-a-postgresql-database-with-docker","title":"Quick start: configuring a PostgreSQL database with Docker","text":"

    To quickly start a PostgreSQL instance that can be used as your Prefect database, use the following command, which will start a Docker container running PostgreSQL:

    docker run -d --name prefect-postgres -v prefectdb:/var/lib/postgresql/data -p 5432:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=yourTopSecretPassword -e POSTGRES_DB=prefect postgres:latest\n

    The above command:

    • Pulls the latest version of the official postgres Docker image, which is compatible with Prefect.
    • Starts a container with the name prefect-postgres.
    • Creates a database prefect with a user postgres and yourTopSecretPassword password.
    • Mounts the PostgreSQL data to a Docker volume called prefectdb to provide persistence if you ever have to restart or rebuild that container.

Then run the following command to point your current Prefect profile at the PostgreSQL database instance running in your Docker container.

    prefect config set PREFECT_API_DATABASE_CONNECTION_URL=\"postgresql+asyncpg://postgres:yourTopSecretPassword@localhost:5432/prefect\"\n
    ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#confirming-your-postgresql-database-configuration","title":"Confirming your PostgreSQL database configuration","text":"

    Inspect your Prefect profile to confirm that the environment variable has been set properly:

    prefect config view --show-sources\n
You should see output similar to the following:

PREFECT_PROFILE='my_profile'\nPREFECT_API_DATABASE_CONNECTION_URL='********' (from profile)\nPREFECT_API_URL='http://127.0.0.1:4200/api' (from profile)\n

    Start the Prefect server and it should begin to use your PostgreSQL database instance:

    prefect server start\n
    ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#in-memory-database","title":"In-memory database","text":"

    One of the benefits of SQLite is in-memory database support.

    To use an in-memory SQLite database, set the following environment variable:

    prefect config set PREFECT_API_DATABASE_CONNECTION_URL=\"sqlite+aiosqlite:///file::memory:?cache=shared&uri=true&check_same_thread=false\"\n

    Use SQLite database for testing only

    SQLite is only supported by Prefect for testing purposes and is not compatible with multiprocessing.

    ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#migrations","title":"Migrations","text":"

    Prefect uses Alembic to manage database migrations. Alembic is a database migration tool for usage with the SQLAlchemy Database Toolkit for Python. Alembic provides a framework for generating and applying schema changes to a database.

    To apply migrations to your database you can run the following commands:

    To upgrade:

    prefect server database upgrade -y\n

    To downgrade:

    prefect server database downgrade -y\n

    You can use the -r flag to specify a specific migration version to upgrade or downgrade to. For example, to downgrade to the previous migration version you can run:

    prefect server database downgrade -y -r -1\n

    or to downgrade to a specific revision:

    prefect server database downgrade -y -r d20618ce678e\n

    To downgrade all migrations, use the base revision.

    See the contributing docs for information on how to create new database migrations.

    ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#notifications","title":"Notifications","text":"

    When you use Prefect Cloud you gain access to a hosted platform with Workspace & User controls, Events, and Automations. Prefect Cloud has an option for automation notifications. The more limited Notifications option is provided for the self-hosted Prefect server.

Notifications enable you to set up alerts that are sent when a flow enters any state you specify. When your flow and task runs change state, Prefect notes the state change and checks whether the new state matches any notification policies. If it does, a new notification is queued.

    Prefect supports sending notifications via:

    • Slack message to a channel
    • Microsoft Teams message to a channel
• Opsgenie alerts
• PagerDuty alerts
    • Twilio to phone numbers
    • Email (requires your own server)

    Notifications in Prefect Cloud

    Prefect Cloud uses the robust Automations interface to enable notifications related to flow run state changes and work pool status.

    ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#configure-notifications","title":"Configure notifications","text":"

    To configure a notification in a Prefect server, go to the Notifications page and select Create Notification or the + button.

    Notifications are structured just as you would describe them to someone. You can choose:

    • Which run states should trigger a notification.
    • Tags to filter which flow runs are covered by the notification.
• Whether to send an email, a Slack message, a Microsoft Teams message, or a message to another service.

    For email notifications (supported on Prefect Cloud only), the configuration requires email addresses to which the message is sent.

    For Slack notifications, the configuration requires webhook credentials for your Slack and the channel to which the message is sent.

    For example, to get a Slack message if a flow with a daily-etl tag fails, the notification will read:

    If a run of any flow with daily-etl tag enters a failed state, send a notification to my-slack-webhook

    When the conditions of the notification are triggered, you\u2019ll receive a message:

    The fuzzy-leopard run of the daily-etl flow entered a failed state at 22-06-27 16:21:37 EST.

    On the Notifications page you can pause, edit, or delete any configured notification.

    ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/logs/","title":"Logging","text":"

    Prefect enables you to log a variety of useful information about your flow and task runs, capturing information about your workflows for purposes such as monitoring, troubleshooting, and auditing.

    Prefect captures logs for your flow and task runs by default, even if you have not started a Prefect server with prefect server start.

    You can view and filter logs in the Prefect UI or Prefect Cloud, or access log records via the API.

    Prefect enables fine-grained customization of log levels for flows and tasks, including configuration for default levels and log message formatting.

    ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#logging-overview","title":"Logging overview","text":"

    Whenever you run a flow, Prefect automatically logs events for flow runs and task runs, along with any custom log handlers you have configured. No configuration is needed to enable Prefect logging.

    If you create a local flow run with python flow.py, you'll see the log messages created automatically by Prefect:

    16:45:44.534 | INFO    | prefect.engine - Created flow run 'gray-dingo' for flow\n'hello-flow'\n16:45:44.534 | INFO    | Flow run 'gray-dingo' - Using task runner 'SequentialTaskRunner'\n16:45:44.598 | INFO    | Flow run 'gray-dingo' - Created task run 'hello-task-54135dc1-0'\nfor task 'hello-task'\nHello world!\n16:45:44.650 | INFO    | Task run 'hello-task-54135dc1-0' - Finished in state\nCompleted(None)\n16:45:44.672 | INFO    | Flow run 'gray-dingo' - Finished in state\nCompleted('All states completed.')\n

    You can see logs for a flow run in the Prefect UI by navigating to the Flow Runs page and selecting a specific flow run to inspect.

    These log messages reflect the logging configuration for log levels and message formatters. You may customize the log levels captured and the default message format through configuration, and you can capture custom logging events by explicitly emitting log messages during flow and task runs.

    Prefect supports the standard Python logging levels CRITICAL, ERROR, WARNING, INFO, and DEBUG. By default, Prefect displays INFO-level and above events. You can configure the root logging level as well as specific logging levels for flow and task runs.

    ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#logging-configuration","title":"Logging configuration","text":"","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#logging-settings","title":"Logging settings","text":"

    Prefect provides several settings for configuring logging level and loggers.

    By default, Prefect displays INFO-level and above logging records. You may change this level to DEBUG to also show DEBUG-level logs created by Prefect. You may need to change the log level used by loggers from other libraries to see their log records.

    You can override any logging configuration by setting an environment variable or Prefect Profile setting using the syntax PREFECT_LOGGING_[PATH]_[TO]_[KEY], with [PATH]_[TO]_[KEY] corresponding to the nested address of any setting.

    For example, to change the default logging levels for Prefect to DEBUG, you can set the environment variable PREFECT_LOGGING_LEVEL=\"DEBUG\".

    You may also configure the \"root\" Python logger. The root logger receives logs from all loggers unless they explicitly opt out by disabling propagation. By default, the root logger is configured to output WARNING level logs to the console. As with other logging settings, you can override this from the environment or in the logging configuration file. For example, you can change the level with the variable PREFECT_LOGGING_ROOT_LEVEL.

    You may adjust the log level used by specific handlers. For example, you could set PREFECT_LOGGING_HANDLERS_API_LEVEL=ERROR to have only ERROR logs reported to the Prefect API. The console handlers will still default to level INFO.

    There is a logging.yml file packaged with Prefect that defines the default logging configuration.

    You can customize logging configuration by creating your own version of logging.yml with custom settings: either create the file at the default location (~/.prefect/logging.yml) or specify the path to the file with PREFECT_LOGGING_SETTINGS_PATH. (If the file does not exist at the specified location, Prefect ignores the setting and uses the default configuration.)

    See the Python Logging configuration documentation for more information about the configuration options and syntax used by logging.yml.

    ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#prefect-loggers","title":"Prefect loggers","text":"

    To access the Prefect logger, use from prefect import get_run_logger. You can send messages to the logger in both flows and tasks.

    ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#logging-in-flows","title":"Logging in flows","text":"

    To log from a flow, retrieve a logger instance with get_run_logger(), then call the standard Python logging methods.

    from prefect import flow, get_run_logger\n\n@flow(name=\"log-example-flow\")\ndef logger_flow():\n    logger = get_run_logger()\n    logger.info(\"INFO level log message.\")\n

    Prefect automatically uses the flow run logger based on the flow context. If you run the above code, Prefect captures the following as a log event.

    15:35:17.304 | INFO    | Flow run 'mottled-marten' - INFO level log message.\n

    The default flow run log formatter uses the flow run name for log messages.

    Note

    Starting in 2.7.11, if you use a logger that sends logs to the API outside of a flow or task run, a warning will be displayed instead of an error. You can silence this warning by setting `PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW=ignore` or have the logger raise an error by setting the value to `error`.\n
    ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#logging-in-tasks","title":"Logging in tasks","text":"

    Logging in tasks works much like logging in flows: retrieve a logger instance with get_run_logger(), then call the standard Python logging methods.

    from prefect import flow, task, get_run_logger\n\n@task(name=\"log-example-task\")\ndef logger_task():\n    logger = get_run_logger()\n    logger.info(\"INFO level log message from a task.\")\n\n@flow(name=\"log-example-flow\")\ndef logger_flow():\n    logger_task()\n

    Prefect automatically uses the task run logger based on the task context. The default task run log formatter uses the task run name for log messages.

    15:33:47.179 | INFO   | Task run 'logger_task-80a1ffd1-0' - INFO level log message from a task.\n

    The underlying log model for task runs captures the task name, task run ID, and parent flow run ID, which are persisted to the database for reporting and may also be used in custom message formatting.

    ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#logging-print-statements","title":"Logging print statements","text":"

    Prefect provides the log_prints option to enable the logging of print statements at the task or flow level. When log_prints=True for a given task or flow, the Python builtin print will be patched to redirect to the Prefect logger for the scope of that task or flow.

    By default, tasks and subflows will inherit the log_prints setting from their parent flow, unless opted out with their own explicit log_prints setting.

    from prefect import task, flow\n\n@task\ndef my_task():\n    print(\"we're logging print statements from a task\")\n\n@flow(log_prints=True)\ndef my_flow():\n    print(\"we're logging print statements from a flow\")\n    my_task()\n

    Will output:

    15:52:11.244 | INFO    | prefect.engine - Created flow run 'emerald-gharial' for flow 'my-flow'\n15:52:11.812 | INFO    | Flow run 'emerald-gharial' - we're logging print statements from a flow\n15:52:11.926 | INFO    | Flow run 'emerald-gharial' - Created task run 'my_task-20c6ece6-0' for task 'my_task'\n15:52:11.927 | INFO    | Flow run 'emerald-gharial' - Executing 'my_task-20c6ece6-0' immediately...\n15:52:12.217 | INFO    | Task run 'my_task-20c6ece6-0' - we're logging print statements from a task\n
    from prefect import task, flow\n\n@task(log_prints=False)\ndef my_task():\n    print(\"not logging print statements in this task\")\n\n@flow(log_prints=True)\ndef my_flow():\n    print(\"we're logging print statements from a flow\")\n    my_task()\n

    Using log_prints=False at the task level will output:

    15:52:11.244 | INFO    | prefect.engine - Created flow run 'emerald-gharial' for flow 'my-flow'\n15:52:11.812 | INFO    | Flow run 'emerald-gharial' - we're logging print statements from a flow\n15:52:11.926 | INFO    | Flow run 'emerald-gharial' - Created task run 'my_task-20c6ece6-0' for task 'my_task'\n15:52:11.927 | INFO    | Flow run 'emerald-gharial' - Executing 'my_task-20c6ece6-0' immediately...\nnot logging print statements in this task\n

    You can also configure this behavior globally for all Prefect flows, tasks, and subflows.

    prefect config set PREFECT_LOGGING_LOG_PRINTS=True\n
    ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#formatters","title":"Formatters","text":"

    Prefect log formatters specify the format of log messages. You can see details of message formatting for different loggers in logging.yml. For example, the default formatting for task run log records is:

    \"%(asctime)s.%(msecs)03d | %(levelname)-7s | Task run %(task_run_name)r - %(message)s\"\n

    The variables available to interpolate in log messages vary by logger. In addition to the run context, message string, and any keyword arguments, flow and task run loggers have access to additional variables.

    The flow run logger has the following:

    • flow_run_name
    • flow_run_id
    • flow_name

    The task run logger has the following:

    • task_run_id
    • flow_run_id
    • task_run_name
    • task_name
    • flow_run_name
    • flow_name

    You can specify custom formatting by setting an environment variable or by modifying the formatter in a logging.yml file as described earlier. For example, to change the formatting for the flow runs formatter:

    PREFECT_LOGGING_FORMATTERS_STANDARD_FLOW_RUN_FMT=\"%(asctime)s.%(msecs)03d | %(levelname)-7s | %(flow_run_id)s - %(message)s\"\n

    The resulting messages, using the flow run ID instead of name, would look like this:

    10:40:01.211 | INFO    | e43a5a80-417a-41c4-a39e-2ef7421ee1fc - Created task run\n'othertask-1c085beb-3' for task 'othertask'\n
    ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#styles","title":"Styles","text":"

    By default, Prefect highlights specific keywords in the console logs with a variety of colors.

    Highlighting can be toggled on/off with the PREFECT_LOGGING_COLORS setting, e.g.

    PREFECT_LOGGING_COLORS=False\n

    You can change what gets highlighted and also adjust the colors by updating the styles in a logging.yml file. The following lists the specific keys built into the PrefectConsoleHighlighter.

    URLs:

    • log.web_url
    • log.local_url

    Log levels:

    • log.info_level
    • log.warning_level
    • log.error_level
    • log.critical_level

    State types:

    • log.pending_state
    • log.running_state
    • log.scheduled_state
    • log.completed_state
    • log.cancelled_state
    • log.failed_state
    • log.crashed_state

    Flow (run) names:

    • log.flow_run_name
    • log.flow_name

    Task (run) names:

    • log.task_run_name
    • log.task_name

    You can also build your own handler with a custom highlighter. For example, to additionally highlight emails:

    1. Copy and paste the following into my_package_or_module.py (rename as needed) in the same directory as the flow run script, or ideally as part of a Python package so it's available in site-packages and can be accessed anywhere within your environment.
    import logging\nfrom typing import Dict, Union\n\nfrom rich.highlighter import Highlighter\n\nfrom prefect.logging.handlers import PrefectConsoleHandler\nfrom prefect.logging.highlighters import PrefectConsoleHighlighter\n\nclass CustomConsoleHighlighter(PrefectConsoleHighlighter):\n    base_style = \"log.\"\n    highlights = PrefectConsoleHighlighter.highlights + [\n        # ?P<email> is naming this expression as `email`\n        r\"(?P<email>[\\w-]+@([\\w-]+\\.)+[\\w-]+)\",\n    ]\n\nclass CustomConsoleHandler(PrefectConsoleHandler):\n    def __init__(\n        self,\n        highlighter: Highlighter = CustomConsoleHighlighter,\n        styles: Dict[str, str] = None,\n        level: Union[int, str] = logging.NOTSET,\n   ):\n        super().__init__(highlighter=highlighter, styles=styles, level=level)\n
    2. Update ~/.prefect/logging.yml to use my_package_or_module.CustomConsoleHandler and additionally reference the base_style and named expression: log.email.
        console_flow_runs:\n        level: 0\n        class: my_package_or_module.CustomConsoleHandler\n        formatter: flow_runs\n        styles:\n            log.email: magenta\n            # other styles can be appended here, e.g.\n            # log.completed_state: green\n
    3. On your next flow run, text that looks like an email will be highlighted. For example, my@email.com is colored in magenta here.
    from prefect import flow, get_run_logger\n\n@flow\ndef log_email_flow():\n    logger = get_run_logger()\n    logger.info(\"my@email.com\")\n\nlog_email_flow()\n
    ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#applying-markup-in-logs","title":"Applying markup in logs","text":"

    To use Rich's markup in Prefect logs, first configure PREFECT_LOGGING_MARKUP.

    PREFECT_LOGGING_MARKUP=True\n

    Then, the following will highlight \"fancy\" in red.

    from prefect import flow, get_run_logger\n\n@flow\ndef my_flow():\n    logger = get_run_logger()\n    logger.info(\"This is [bold red]fancy[/]\")\n\nmy_flow()\n

    Inaccurate logs could result

    Although this can be convenient, the downside is that, if enabled, strings containing square brackets may be interpreted inaccurately and produce incomplete output. For example, DROP TABLE [dbo].[SomeTable]; outputs as DROP TABLE .[SomeTable];.

    ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#log-database-schema","title":"Log database schema","text":"

    Logged events are also persisted to the Prefect database. A log record includes the following data:

    • id: Primary key ID of the log record.
    • created: Timestamp specifying when the record was created.
    • updated: Timestamp specifying when the record was updated.
    • name: String specifying the name of the logger.
    • level: Integer representation of the logging level.
    • flow_run_id: ID of the flow run associated with the log record. If the log record is for a task run, this is the parent flow of the task.
    • task_run_id: ID of the task run associated with the log record. Null if logging a flow run event.
    • message: Log message.
    • timestamp: The client-side timestamp of this logged statement.

    For more information, see Log schema in the API documentation.
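
    As a quick illustration of reading these persisted records programmatically, here is a minimal sketch; it assumes the read_logs method on the Prefect orchestration client and the limit keyword shown here:

    import asyncio
    from prefect import get_client

    async def fetch_recent_logs():
        # Query the Prefect API for persisted log records (assumed signature).
        async with get_client() as client:
            logs = await client.read_logs(limit=10)
            for log in logs:
                print(log.timestamp, log.level, log.message)

    if __name__ == "__main__":
        asyncio.run(fetch_recent_logs())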

    ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#including-logs-from-other-libraries","title":"Including logs from other libraries","text":"

    By default, Prefect won't capture log statements from libraries that your flows and tasks use. You can tell Prefect to include logs from these libraries with the PREFECT_LOGGING_EXTRA_LOGGERS setting.

    To use this setting, specify one or more Python library names to include, separated by commas. For example, if you want to make sure Prefect captures Dask and SciPy logging statements with your flow and task run logs:

    PREFECT_LOGGING_EXTRA_LOGGERS=dask,scipy\n

    You can set this setting as an environment variable or in a profile. See Settings for more details about how to use settings.
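
    For instance, with the setting above in place, log records emitted by a library logger inside your flow are captured alongside your Prefect run logs. A minimal sketch (the dask logger name is only an illustration):

    import logging
    from prefect import flow

    @flow
    def my_flow():
        # With PREFECT_LOGGING_EXTRA_LOGGERS=dask,scipy set, records from these
        # library loggers appear with the flow run logs.
        logging.getLogger("dask").warning("A library log record captured by Prefect.")

    if __name__ == "__main__":
        my_flow()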

    ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/managed-execution/","title":"Managed Execution","text":"

    Prefect Cloud can run your flows on your behalf with Prefect Managed work pools. Flows run with this work pool do not require a worker or cloud provider account. Prefect handles the infrastructure and code execution for you.

    Managed execution is a great option for users who want to get started quickly, with no infrastructure setup.

    Managed Execution is in beta

    Managed Execution is currently in beta. Features are likely to change without warning.

    ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#usage-guide","title":"Usage guide","text":"

    Run a flow with managed infrastructure in three steps.

    ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#step-1","title":"Step 1","text":"

    Create a new work pool of type Prefect Managed in the UI or the CLI. Here's the command to create a new work pool using the CLI:

    prefect work-pool create my-managed-pool --type prefect:managed\n
    ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#step-2","title":"Step 2","text":"

    Create a deployment using the flow deploy method or prefect.yaml.

    Specify the name of your managed work pool, as shown in this example that uses the deploy method:

    managed-execution.py
    from prefect import flow\n\nif __name__ == \"__main__\":\n    flow.from_source(\n    source=\"https://github.com/desertaxle/demo.git\",\n    entrypoint=\"flow.py:my_flow\",\n    ).deploy(\n        name=\"test-managed-flow\",\n        work_pool_name=\"my-managed-pool\",\n    )\n

    With your CLI authenticated to your Prefect Cloud workspace, run the script to create your deployment:

    python managed-execution.py\n

    Note that this deployment uses flow code stored in a GitHub repository.

    ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#step-3","title":"Step 3","text":"

    Run the deployment from the UI or from the CLI.

    That's it! You ran a flow on remote infrastructure without any infrastructure setup, starting a worker, or needing a cloud provider account.

    ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#adding-dependencies","title":"Adding dependencies","text":"

    Prefect can install Python packages in the container that runs your flow at runtime. You can specify these dependencies in the Pip Packages field in the UI, or by configuring job_variables={\"pip_packages\": [\"pandas\", \"prefect-aws\"]} in your deployment creation like this:

    from prefect import flow\n\nif __name__ == \"__main__\":\n    flow.from_source(\n    source=\"https://github.com/desertaxle/demo.git\",\n    entrypoint=\"flow.py:my_flow\",\n    ).deploy(\n        name=\"test-managed-flow\",\n        work_pool_name=\"my-managed-pool\",\n        job_variables={\"pip_packages\": [\"pandas\", \"prefect-aws\"]}\n    )\n

    Alternatively, you can create a requirements.txt file and reference it in your prefect.yaml pull_step.

    ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#limitations","title":"Limitations","text":"

    Managed execution requires Prefect 2.14.4 or newer.

    All limitations listed below may change without warning during the beta period. We will update this page as we make changes.

    ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#concurrency-work-pools","title":"Concurrency & work pools","text":"

    Free tier accounts are limited to:

    • Maximum of 1 concurrent flow run per workspace across all prefect:managed pools.
    • Maximum of 1 managed execution work pool per workspace.

    Pro tier and above accounts are limited to:

    • Maximum of 10 concurrent flow runs per workspace across all prefect:managed pools.
    • Maximum of 5 managed execution work pools per workspace.
    ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#images","title":"Images","text":"

    At this time, managed execution requires that you run the official Prefect Docker image: prefecthq/prefect:2-latest. However, as noted above, you can install Python package dependencies at runtime. If you need to use your own image, we recommend using another type of work pool.

    ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#code-storage","title":"Code storage","text":"

    Flow code must be stored in an accessible remote location. This means git-based cloud providers such as GitHub, Bitbucket, or GitLab are supported. Remote block-based storage is also supported, so S3, GCS, and Azure Blob are additional code storage options.

    ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#resources","title":"Resources","text":"

    Memory is limited to 2GB of RAM, which includes all operations such as dependency installation. Maximum job run time is 24 hours.

    ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#usage-limits","title":"Usage limits","text":"

    Free tier accounts are limited to ten compute hours per workspace per month. Pro tier and above accounts are limited to 250 hours per workspace per month. You can view your compute hours quota usage on the Work Pools page in the UI.

    ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#next-steps","title":"Next steps","text":"

    Read more about creating deployments in the deployment guide.

    If you find that you need more control over your infrastructure, such as the ability to run custom Docker images, serverless push work pools might be a good option. Read more here.

    ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/migration-guide/","title":"Migrating from Prefect 1 to Prefect 2","text":"

    This guide is designed to help you migrate your workflows from Prefect 1 to Prefect 2.

    ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#what-stayed-the-same","title":"What stayed the same","text":"

    Prefect 2 still:

    • Has tasks and flows.
    • Orchestrates your flow runs and provides observability into their execution states.
    • Runs and inspects flow runs locally.
    • Provides a coordination plane for your dataflows based on the same principles.
    • Employs the same hybrid execution model, where Prefect doesn't store your flow code or data.
    ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#what-changed","title":"What changed","text":"

    Prefect 2 requires modifications to your existing tasks, flows, and deployment patterns. We've organized this section into the following categories:

    • Simplified patterns \u2014 abstractions from Prefect 1 that are no longer necessary in the dynamic, DAG-free Prefect workflows that support running native Python code in your flows.
    • Conceptual and syntax changes that often clarify names and simplify familiar abstractions such as retries and caching.
    • New features enabled by the dynamic and flexible Prefect API.
    ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#simplified-patterns","title":"Simplified patterns","text":"

    Since Prefect 2 allows running native Python code within the flow function, some abstractions are no longer necessary:

    • Parameter tasks: in Prefect 2, inputs to your flow function are automatically treated as parameters of your flow. You can define the parameter values in your flow code when you create your Deployment, or when you schedule an ad-hoc flow run. One benefit of Prefect parametrization is built-in type validation with pydantic (see the sketch after this list).
    • Task-level state_handlers: in Prefect 2, you can build custom logic that reacts to task-run states within your flow function without the need for state_handlers. The page "How to take action on a state change of a task run" provides further explanation and code examples.
    • Instead of using signals, Prefect 2 allows you to raise an arbitrary exception in your task or flow and return a custom state. For more details and examples, see How can I stop the task run based on a custom logic.
    • Conditional tasks such as case are no longer required. Use Python native if...else statements to build a conditional logic. The Discourse tag \"conditional-logic\" provides more resources.
    • Since you can use any context manager directly in your flow, a resource_manager is no longer necessary. As long as you point to your flow script in your Deployment, you can share database connections and any other resources between tasks in your flow. The Discourse page How to clean up resources used in a flow provides a full example.
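
    As a small illustration of the parameter and conditional-logic points above, here is a minimal sketch; the flow name, parameters, and print statements are hypothetical:

    from prefect import flow

    @flow
    def process(batch_size: int = 10, dry_run: bool = False):
        # Flow function inputs become flow parameters, type-validated with pydantic.
        if dry_run:
            # Native Python if...else replaces Prefect 1's case construct.
            print(f"Would process {batch_size} records")
        else:
            print(f"Processing {batch_size} records")

    if __name__ == "__main__":
        process(batch_size=5)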
    ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#conceptual-and-syntax-changes","title":"Conceptual and syntax changes","text":"

    The changes listed below require you to modify your workflow code. The following list shows how Prefect 1 concepts have been implemented in Prefect 2. Each entry ends with a reference to additional resources that provide more details and examples.
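
    For orientation, here is a short Prefect 2 sketch that combines several of the new syntaxes listed below: the @flow decorator, task retries, and the run logger. It is a minimal illustration; the function names are hypothetical:

    from prefect import flow, task, get_run_logger

    @task(retries=2, retry_delay_seconds=5)
    def fetch_data():
        # retries/retry_delay_seconds replace Prefect 1's max_retries/retry_delay.
        return [1, 2, 3]

    @flow(name="flow_name")
    def my_flow():
        logger = get_run_logger()  # usable in flows as well as tasks in Prefect 2
        logger.info("Fetched %s records", len(fetch_data()))

    if __name__ == "__main__":
        my_flow()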

    • Flow definition. Prefect 1: with Flow(\"flow_name\") as flow:. Prefect 2: @flow(name=\"flow_name\"). Reference: How can I define a flow?
    • Flow executor that determines how to execute your task runs. Prefect 1: Executor such as LocalExecutor. Prefect 2: Task runner such as ConcurrentTaskRunner. Reference: What is the default TaskRunner (executor)?
    • Configuration that determines how and where to execute your flow runs. Prefect 1: Run configuration such as flow.run_config = DockerRun(). Prefect 2: Create an infrastructure block such as a Docker Container and specify it as the infrastructure when creating a deployment. Reference: How can I run my flow in a Docker container?
    • Assignment of schedules and default parameter values. Prefect 1: Schedules are attached to the flow object and default parameter values are defined within the Parameter tasks. Prefect 2: Schedules and default parameters are assigned to a flow\u2019s Deployment, rather than to a Flow object. Reference: How can I attach a schedule to a flow?
    • Retries. Prefect 1: @task(max_retries=2, retry_delay=timedelta(seconds=5)). Prefect 2: @task(retries=2, retry_delay_seconds=5). Reference: How can I specify the retry behavior for a specific task?
    • Logger syntax. Prefect 1: Logger is retrieved from prefect.context and can only be used within tasks. Prefect 2: You can log not only from tasks, but also within flows; to get the logger object, use prefect.get_run_logger(). Reference: How can I add logs to my flow?
    • The syntax and contents of Prefect context. Prefect 1: Context is a thread-safe way of accessing variables related to the flow run and task run; retrieve it with prefect.context. Prefect 2: Context is still available, but its content is much richer, allowing you to retrieve even more information about your flow runs and task runs; retrieve it with prefect.context.get_run_context(). Reference: How to access Prefect context values?
    • Task library. Prefect 1: Included in the main Prefect Core repository. Prefect 2: Separated into individual repositories per system, cloud provider, or technology. Reference: How to migrate Prefect 1 tasks to Prefect 2 integrations.
    ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#what-changed-in-dataflow-orchestration","title":"What changed in dataflow orchestration?","text":"

    Let\u2019s look at the differences in how Prefect 2 transitions your flow and task runs between various execution states.

    • In Prefect 2, the final state of a flow run that finished without errors is Completed, while in Prefect 1, this flow run has a Success state. You can find more about that topic here.
    • The decision about whether a flow run should be considered successful or not is no longer based on special reference tasks. Instead, your flow\u2019s return value determines the final state of a flow run (see the sketch after this list). This link provides a more detailed explanation with code examples.
    • In Prefect 1, concurrency limits were only available to Prefect Cloud users. Prefect 2 provides customizable concurrency limits with the open-source Prefect server and Prefect Cloud. In Prefect 2, flow run concurrency limits are set on work pools.
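
    For instance, a flow can return a state explicitly to control its final outcome. A minimal sketch using the Completed and Failed state constructors from prefect.states; the threshold logic is hypothetical:

    from prefect import flow
    from prefect.states import Completed, Failed

    @flow
    def quality_check(error_count: int = 0):
        # The flow's return value, not a reference task, decides the final state.
        if error_count > 0:
            return Failed(message=f"{error_count} errors found")
        return Completed(message="no errors found")

    if __name__ == "__main__":
        quality_check()  # finishes Completed; pass error_count=2 to see a Failed run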
    ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#what-changed-in-flow-deployment-patterns","title":"What changed in flow deployment patterns?","text":"

    To deploy your Prefect 1 flows, you have to send flow metadata to the backend in a step called registration. Prefect 2 no longer requires flow pre-registration. Instead, you create a Deployment that specifies the entry point to your flow code and optionally specifies:

    • Where to run your flow (your Infrastructure, such as a DockerContainer, KubernetesJob, or ECSTask).
    • When to run your flow (an Interval, Cron, or RRule schedule).
    • How to run your flow (execution details such as parameters, flow deployment name, and more).
    • The work pool for your deployment. If no work pool is specified, a default work pool named default is used.

    The API is now implemented as a REST API rather than GraphQL. This page illustrates how you can interact with the API.

    In Prefect 1, the logical grouping of flows was based on projects. Prefect 2 provides a much more flexible way of organizing your flows, tasks, and deployments through customizable filters and tags. This page provides more details on how to assign tags to various Prefect 2 objects.

    The role of agents has changed:

    • In Prefect 2, there is only one generic agent type. The agent polls a work pool looking for flow runs.
    • See this Discourse page for a more detailed discussion.
    ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#new-features-introduced-in-prefect-2","title":"New features introduced in Prefect 2","text":"

    The following new components and capabilities are enabled by Prefect 2.

    • More flexibility thanks to the elimination of flow pre-registration.
    • More flexibility for flow deployments, including easier promotion of a flow through development, staging, and production environments.
    • Native async support.
    • Out-of-the-box pydantic validation.
    • Blocks allowing you to securely store UI-editable, type-checked configuration to external systems and an easy-to-use Key-Value Store. All those components are configurable in one place and provided as part of the open-source Prefect 2 product. In contrast, the concept of Secrets in Prefect 1 was much more narrow and only available in Prefect Cloud.
    • Notifications available in the open-source Prefect 2 version, as opposed to Cloud-only Automations in Prefect 1.
    • A first-class subflows concept: Prefect 1 only allowed the flow-of-flows orchestrator pattern. With Prefect 2 subflows, you gain a natural and intuitive way of organizing your flows into modular sub-components. For more details, see the following list of resources about subflows.
    ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#orchestration-behind-the-api","title":"Orchestration behind the API","text":"

    Apart from new features, Prefect 2 simplifies many usage patterns and provides a much more seamless onboarding experience.

    Every time you run a flow, whether it is tracked by the API server or run ad hoc through a Python script, it appears on the same UI page for easier debugging and observability.

    ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#code-as-workflows","title":"Code as workflows","text":"

    With Prefect 2, your functions are your flows and tasks. Prefect 2 automatically detects your flows and tasks without the need to define a rigid DAG structure. While use of tasks is encouraged to provide you the maximum visibility into your workflows, they are no longer required. You can add a single @flow decorator to your main function to transform any Python script into a Prefect workflow.
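
    A minimal sketch of that last point; the script content is hypothetical:

    from prefect import flow

    @flow
    def main():
        # Any existing script logic placed here now runs as an observable flow run.
        print("Running my existing Python logic as a Prefect flow")

    if __name__ == "__main__":
        main()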

    ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#incremental-adoption","title":"Incremental adoption","text":"

    The built-in SQLite database automatically tracks all your locally executed flow runs. As soon as you start a Prefect server and open the Prefect UI in your browser (or authenticate your CLI with your Prefect Cloud workspace), you can see all your locally executed flow runs in the UI. You don't even need to start an agent.

    Then, when you want to move toward scheduled, repeatable workflows, you can build a deployment and send it to the server by running a CLI command or a Python script.

    • You can create a deployment to run on remote infrastructure, where the run environment is defined by a reusable infrastructure block.
    ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#fewer-ambiguities","title":"Fewer ambiguities","text":"

    Prefect 2 eliminates ambiguities in many ways. For example, there is no more confusion between Prefect Core and Prefect Server \u2014 Prefect 2 unifies those into a single open source product. This product is also much easier to deploy with no requirement for Docker or docker-compose.

    If you want to switch your backend to use Prefect Cloud for an easier production-level managed experience, Prefect profiles let you quickly connect to your workspace.

    In Prefect 1, there are several confusing ways you could implement caching. Prefect 2 resolves those ambiguities by providing a single cache_key_fn function paired with cache_expiration, allowing you to define arbitrary caching mechanisms \u2014 no more confusion about whether you need to use cache_for, cache_validator, or file-based caching using targets.
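
    For illustration, input-based caching with these options might look like the following minimal sketch, using the built-in task_input_hash as the cache_key_fn:

    from datetime import timedelta
    from prefect import flow, task
    from prefect.tasks import task_input_hash

    @task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))
    def transform(x: int) -> int:
        # Repeated calls with the same input reuse the cached result for an hour.
        return x * 2

    @flow
    def my_flow():
        return transform(21)

    if __name__ == "__main__":
        my_flow()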

    For more details on how to configure caching, check out the following resources:

    • Caching docs
    • Time-based caching
    • Input-based caching

    A similarly confusing concept in Prefect 1 was distinguishing between the functional and imperative APIs. This distinction caused ambiguities with respect to how to define state dependencies between tasks. Prefect 1 users were often unsure whether they should use the functional upstream_tasks keyword argument or the imperative methods such as task.set_upstream(), task.set_downstream(), or flow.set_dependencies(). In Prefect 2, there is only the functional API.

    ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#next-steps","title":"Next steps","text":"

    We know migrations can be tough. We encourage you to take it step-by-step and experiment with the new features.

    To make the migration process easier for you:

    • We provided a detailed FAQ section to help you find the information you need to move your workflows to Prefect 2. If you still have open questions, feel free to create a new topic describing your migration issue.
    • We have dedicated resources in the Customer Success team to help you along your migration journey. Reach out to cs@prefect.io to discuss how we can help.
    • You can ask questions in our 20,000+ member Community Slack.

    Happy Engineering!

    ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/moving-data/","title":"Read and Write Data to and from Cloud Provider Storage","text":"

    Writing data to cloud-based storage and reading data from that storage is a common task in data engineering. In this guide we'll learn how to use Prefect to move data to and from AWS, Azure, and GCP blob storage.

    ","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#prerequisites","title":"Prerequisites","text":"
    • Prefect installed
    • Authenticated with Prefect Cloud (or self-hosted Prefect server instance)
    • A cloud provider account (e.g. AWS)
    ","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#install-relevant-prefect-integration-library","title":"Install relevant Prefect integration library","text":"

    In the CLI, install the Prefect integration library for your cloud provider:

    AWS | Azure | GCP

    prefect-aws provides blocks for interacting with AWS services.

    pip install -U prefect-aws\n

    prefect-azure provides blocks for interacting with Azure services.

     pip install -U prefect-azure\n

    prefect-gcp provides blocks for interacting with GCP services.

     pip install -U prefect-gcp\n

    ","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#register-the-block-types","title":"Register the block types","text":"

    Register the new block types with Prefect Cloud (or with your self-hosted Prefect server instance):

    AWS | Azure | GCP

    prefect block register -m prefect_aws  \n

    prefect block register -m prefect_azure \n

    prefect block register -m prefect_gcp\n

    We should see a message in the CLI that several block types were registered. If we check the UI, we should see the new block types listed.

    ","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#create-a-storage-bucket","title":"Create a storage bucket","text":"

    Create a storage bucket in the cloud provider account. Ensure the bucket is publicly accessible or create a user or service account with the appropriate permissions to fetch and write data to the bucket.

    ","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#create-a-credentials-block","title":"Create a credentials block","text":"

    If the bucket is private, there are several options to authenticate:

    1. At deployment runtime, ensure the runtime environment is authenticated.
    2. Create a block with configuration details and reference it when creating the storage block.

    If saving credential details in a block, we can use a credentials block specific to the cloud provider or a more generic secret block. We can create blocks via the UI or Python code. Below we'll use Python code to create a credentials block for our cloud provider.

    Credentials safety

    Reminder, don't store credential values in public locations such as public git platform repositories. In the examples below we use environment variables to store credential values.

    AWS | Azure | GCP
    import os\nfrom prefect_aws import AwsCredentials\n\nmy_aws_creds = AwsCredentials(\n    aws_access_key_id=\"123abc\",\n    aws_secret_access_key=os.environ.get(\"MY_AWS_SECRET_ACCESS_KEY\"),\n)\nmy_aws_creds.save(name=\"my-aws-creds-block\", overwrite=True)\n
    import os\nfrom prefect_azure import AzureBlobStorageCredentials\n\nmy_azure_creds = AzureBlobStorageCredentials(\n    connection_string=os.environ.get(\"MY_AZURE_CONNECTION_STRING\"),\n)\nmy_azure_creds.save(name=\"my-azure-creds-block\", overwrite=True)\n

    We recommend specifying the service account key file contents as a string, rather than the path to the file, because that file might not be available in your production environments.

    import os\nfrom prefect_gcp import GcpCredentials\n\nmy_gcp_creds = GcpCredentials(\n    service_account_info=os.environ.get(\"GCP_SERVICE_ACCOUNT_KEY_FILE_CONTENTS\"), \n)\nmy_gcp_creds.save(name=\"my-gcp-creds-block\", overwrite=True)\n

    Run the code to create the block. We should see a message that the block was created.

    ","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#create-a-storage-block","title":"Create a storage block","text":"

    Let's create a block for the chosen cloud provider using Python code or the UI. In this example we'll use Python code.

    AWS | Azure | GCP

    Note that the S3Bucket block is not the same as the S3 block that ships with Prefect. The S3Bucket block we use in this example is part of the prefect-aws library and provides additional functionality.

    We'll reference the credentials block created above.

    from prefect_aws import AwsCredentials, S3Bucket\n\ns3bucket = S3Bucket(\n    bucket_name=\"my-bucket-name\",\n    credentials=AwsCredentials.load(\"my-aws-creds-block\")\n)\ns3bucket.save(name=\"my-s3-bucket-block\", overwrite=True)\n

    Note that the AzureBlobStorageCredentials block is not the same as the Azure block that ships with Prefect. The AzureBlobStorageCredentials block we use in this example is part of the prefect-azure library and provides additional functionality.

    Azure Blob Storage doesn't require a separate storage block; the connection string used in the AzureBlobStorageCredentials block can encode the information needed.

    Note that the GcsBucket block is not the same as the GCS block that ships with Prefect. The GcsBucket block is part of the prefect-gcp library and provides additional functionality. We'll use it here.

    We'll reference the credentials block created above.

    from prefect_gcp import GcpCredentials\nfrom prefect_gcp.cloud_storage import GcsBucket\n\ngcsbucket = GcsBucket(\n    bucket=\"my-bucket-name\",\n    gcp_credentials=GcpCredentials.load(\"my-gcp-creds-block\")\n)\ngcsbucket.save(name=\"my-gcs-bucket-block\", overwrite=True)\n

    Run the code to create the block. We should see a message that the block was created.

    ","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#write-data","title":"Write data","text":"

    Use your new block inside a flow to write data to your cloud provider.

    AWS | Azure | GCP
    from pathlib import Path\nfrom prefect import flow\nfrom prefect_aws.s3 import S3Bucket\n\n@flow()\ndef upload_to_s3():\n    \"\"\"Flow function to upload data\"\"\"\n    path = Path(\"my_path_to/my_file.parquet\")\n    aws_block = S3Bucket.load(\"my-s3-bucket-block\")\n    aws_block.upload_from_path(from_path=path, to_path=path)\n\nif __name__ == \"__main__\":\n    upload_to_s3()\n
    from prefect import flow\nfrom prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import blob_storage_upload\n\n@flow\ndef upload_to_azure():\n    \"\"\"Flow function to upload data\"\"\"\n    blob_storage_credentials = AzureBlobStorageCredentials.load(\n        name=\"my-azure-creds-block\"\n    )\n\n    with open(\"my_path_to/my_file.parquet\", \"rb\") as f:\n        blob_storage_upload(\n            data=f.read(),\n            container=\"my_container\",\n            blob=\"my_path_to/my_file.parquet\",\n            blob_storage_credentials=blob_storage_credentials,\n        )\n\nif __name__ == \"__main__\":\n    upload_to_azure()\n
    from pathlib import Path\nfrom prefect import flow\nfrom prefect_gcp.cloud_storage import GcsBucket\n\n@flow()\ndef upload_to_gcs():\n    \"\"\"Flow function to upload data\"\"\"\n    path = Path(\"my_path_to/my_file.parquet\")\n    gcs_block = GcsBucket.load(\"my-gcs-bucket-block\")\n    gcs_block.upload_from_path(from_path=path, to_path=path)\n\nif __name__ == \"__main__\":\n    upload_to_gcs()\n
    ","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#read-data","title":"Read data","text":"

    Use your block to read data from your cloud provider inside a flow.

    AWS | Azure | GCP
    from prefect import flow\nfrom prefect_aws import S3Bucket\n\n@flow\ndef download_from_s3():\n    \"\"\"Flow function to download data\"\"\"\n    s3_block = S3Bucket.load(\"my-s3-bucket-block\")\n    s3_block.get_directory(\n        from_path=\"my_path_to/my_file.parquet\", \n        local_path=\"my_path_to/my_file.parquet\"\n    )\n\nif __name__ == \"__main__\":\n    download_from_s3()\n
    from prefect import flow\nfrom prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import blob_storage_download\n\n@flow\ndef download_from_azure():\n    \"\"\"Flow function to download data\"\"\"\n    blob_storage_credentials = AzureBlobStorageCredentials.load(\n        name=\"my-azure-creds-block\"\n    )\n    blob_storage_download(\n        blob=\"my_path_to/my_file.parquet\",\n        container=\"my_container\",\n        blob_storage_credentials=blob_storage_credentials,\n    )\n\nif __name__ == \"__main__\":\n    download_from_azure()\n
    from prefect import flow\nfrom prefect_gcp.cloud_storage import GcsBucket\n\n@flow\ndef download_from_gcs():\n    gcs_block = GcsBucket.load(\"my-gcs-bucket-block\")\n    gcs_block.get_directory(\n        from_path=\"my_path_to/my_file.parquet\", \n        local_path=\"my_path_to/my_file.parquet\"\n    )\n\nif __name__ == \"__main__\":\n    download_from_gcs()\n

    In this guide we've seen how to use Prefect to read data from and write data to cloud providers!

    ","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#next-steps","title":"Next steps","text":"

    Check out the prefect-aws, prefect-azure, and prefect-gcp docs to see additional methods for interacting with cloud storage providers. Each library also contains blocks for interacting with other cloud-provider services.

    ","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/prefect-deploy/","title":"Deploying Flows to Work Pools and Workers","text":"

    In this guide, we will configure a deployment that uses a work pool for dynamically provisioned infrastructure.

    All Prefect flow runs are tracked by the API. The API does not require prior registration of flows. With Prefect, you can call a flow locally or on a remote environment and it will be tracked.

    A deployment turns your workflow into an application that can be interacted with and managed via the Prefect API. A deployment enables you to:

    • Schedule flow runs.
    • Specify event triggers for flow runs.
    • Assign one or more tags to organize your deployments and flow runs. You can use those tags as filters in the Prefect UI.
    • Assign custom parameter values for flow runs based on the deployment.
    • Create ad-hoc flow runs from the API or Prefect UI.
    • Upload flow files to a defined storage location for retrieval at run time.

    Deployments created with .serve

    A deployment created with the Python flow.serve method or the serve function runs flows in a subprocess on the same machine where the deployment is created. It does not use a work pool or worker.

    ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#work-pool-based-deployments","title":"Work pool-based deployments","text":"

    A work pool-based deployment is useful when you want to dynamically scale the infrastructure where your flow code runs. Work pool-based deployments contain information about the infrastructure type and configuration for your workflow execution.

    Work pool-based deployment infrastructure options include the following:

    • Process - runs flow in a subprocess. In most cases, you're better off using .serve.
    • Docker - runs flows in an ephemeral Docker container.
    • Kubernetes - runs flows as a Kubernetes Job.
    • Serverless Cloud Provider options - runs flows in a Docker container in a serverless cloud provider environment, such as AWS ECS, Azure Container Instance, Google Cloud Run, or Vertex AI.

    The following diagram provides a high-level overview of the conceptual elements involved in defining a work-pool based deployment that is polled by a worker and executes a flow run based on that deployment.

    %%{\n  init: {\n    'theme': 'base',\n    'themeVariables': {\n      'fontSize': '19px'\n    }\n  }\n}%%\n\nflowchart LR\n    F(\"<div style='margin: 5px 10px 5px 5px;'>Flow Code</div>\"):::yellow -.-> A(\"<div style='margin: 5px 10px 5px 5px;'>Deployment Definition</div>\"):::gold\n    subgraph Server [\"<div style='width: 150px; text-align: center; margin-top: 5px;'>Prefect API</div>\"]\n        D(\"<div style='margin: 5px 10px 5px 5px;'>Deployment</div>\"):::green\n    end\n    subgraph Remote Storage [\"<div style='width: 160px; text-align: center; margin-top: 5px;'>Remote Storage</div>\"]\n        B(\"<div style='margin: 5px 6px 5px 5px;'>Flow</div>\"):::yellow\n    end\n    subgraph Infrastructure [\"<div style='width: 150px; text-align: center; margin-top: 5px;'>Infrastructure</div>\"]\n        G(\"<div style='margin: 5px 10px 5px 5px;'>Flow Run</div>\"):::blue\n    end\n\n    A --> D\n    D --> E(\"<div style='margin: 5px 10px 5px 5px;'>Worker</div>\"):::red\n    B -.-> E\n    A -.-> B\n    E -.-> G\n\n    classDef gold fill:goldenrod,stroke:goldenrod,stroke-width:4px,color:black\n    classDef yellow fill:gold,stroke:gold,stroke-width:4px,color:black\n    classDef gray fill:lightgray,stroke:lightgray,stroke-width:4px\n    classDef blue fill:blue,stroke:blue,stroke-width:4px,color:white\n    classDef green fill:green,stroke:green,stroke-width:4px,color:white\n    classDef red fill:red,stroke:red,stroke-width:4px,color:white\n    classDef dkgray fill:darkgray,stroke:darkgray,stroke-width:4px,color:white

    The work pool types above require a worker to be running on your infrastructure to poll a work pool for scheduled flow runs.

    Additional work pool options available with Prefect Cloud

    Prefect Cloud offers other flavors of work pools that don't require a worker:

    • Push Work Pools - serverless cloud options that don't require a worker because Prefect Cloud submits them to your serverless cloud infrastructure on your behalf. Prefect can auto-provision your cloud infrastructure for you and set it up to use your work pool.

    • Managed Execution Prefect Cloud submits and runs your deployment on serverless infrastructure. No cloud provider account required.

    In this guide, we focus on deployments that require a worker.

    Work pool-based deployments that use a worker also allow you to assign a work queue name to prioritize work and allow you to limit concurrent runs at the work pool level.

    When creating a deployment that uses a work pool and worker, we must answer two basic questions:

    • What instructions does a worker need to set up an execution environment for our flow? For example, a flow may have Python package requirements, unique Kubernetes settings, or Docker networking configuration.
    • How should the flow code be accessed?

    The tutorial shows how you can create a deployment with a long-running process using .serve and how to move to a work-pool-based deployment setup with .deploy. See the discussion of when you might want to move to work-pool-based deployments there.

    Next, we'll explore how to use .deploy to create deployments with Python code. If you'd prefer to learn about using a YAML-based alternative for managing deployment configuration, skip to the later section on prefect.yaml.

    ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#creating-work-pool-based-deployments-with-deploy","title":"Creating work pool-based deployments with .deploy","text":"","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#automatically-bake-your-code-into-a-docker-image","title":"Automatically bake your code into a Docker image","text":"

    You can create a deployment from Python code by calling the .deploy method on a flow.

    buy.py
    from prefect import flow\n\n\n@flow(log_prints=True)\ndef buy():\n    print(\"Buying securities\")\n\n\nif __name__ == \"__main__\":\n    buy.deploy(\n        name=\"my-code-baked-into-an-image-deployment\", \n        work_pool_name=\"my-docker-pool\", \n        image=\"my_registry/my_image:my_image_tag\"\n    )\n

    Make sure you have the work pool created in the Prefect Cloud workspace you are authenticated to or on your running self-hosted server instance. Then run the script to create a deployment (in future examples this step will be omitted for brevity):

    python buy.py\n

    You should see messages in your terminal that Docker is building your image. When the deployment build succeeds you will see helpful information in your terminal showing you how to start a worker for your deployment and how to run your deployment. Your deployment will be visible on the Deployments page in the UI.

    By default, .deploy will build a Docker image with your flow code baked into it and push the image to the Docker Hub registry specified in the image argument.

    Authentication to Docker Hub

    You need your environment to be authenticated to your Docker registry to push an image to it.

    You can specify a registry other than Docker Hub by providing the full registry path in the image argument.

    Warning

    If building a Docker image, the environment in which you are creating the deployment needs to have Docker installed and running.

    To avoid pushing to a registry, set push=False in the .deploy method.

    if __name__ == \"__main__\":\n    buy.deploy(\n        name=\"my-code-baked-into-an-image-deployment\", \n        work_pool_name=\"my-docker-pool\", \n        image=\"my_registry/my_image:my_image_tag\",\n        push=False\n    )\n

    To avoid building an image, set build=False in the .deploy method.

    if __name__ == \"__main__\":\n    buy.deploy(\n        name=\"my-code-baked-into-an-image-deployment\", \n        work_pool_name=\"my-docker-pool\", \n        image=\"discdiver/no-build-image:1.0\",\n        build=False\n    )\n

    The specified image will need to be available in your deployment's execution environment for your flow code to be accessible.

    Prefect generates a Dockerfile for you that will build an image based off of one of Prefect's published images. The generated Dockerfile will copy the current directory into the Docker image and install any dependencies listed in a requirements.txt file.

    ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#automatically-build-a-custom-docker-image-with-a-local-dockerfile","title":"Automatically build a custom Docker image with a local Dockerfile","text":"

    If you want to use a custom Dockerfile, you can specify the path to the Dockerfile with the DeploymentImage class:

    custom_dockerfile.py
    from prefect import flow\nfrom prefect.deployments import DeploymentImage\n\n\n@flow(log_prints=True)\ndef buy():\n    print(\"Buying securities\")\n\n\nif __name__ == \"__main__\":\n    buy.deploy(\n        name=\"my-custom-dockerfile-deployment\", \n        work_pool_name=\"my-docker-pool\", \n        image=DeploymentImage(\n            name=\"my_image\",\n            tag=\"deploy-guide\",\n            dockerfile=\"Dockerfile\"\n    ),\n    push=False\n)\n

    The DeploymentImage object allows for a great deal of image customization.

For example, you can install a private Python package from GCP's Artifact Registry like this:

    Create a custom base Dockerfile.

    FROM python:3.10\n\nARG AUTHED_ARTIFACT_REG_URL\nCOPY ./requirements.txt /requirements.txt\n\nRUN pip install --extra-index-url ${AUTHED_ARTIFACT_REG_URL} -r /requirements.txt\n

Then create the deployment using the DeploymentImage class.

    private-package.py
    from prefect import flow\nfrom prefect.deployments.runner import DeploymentImage\nfrom prefect.blocks.system import Secret\nfrom my_private_package import do_something_cool\n\n\n@flow(log_prints=True)\ndef my_flow():\n    do_something_cool()\n\n\nif __name__ == \"__main__\":\n    artifact_reg_url: Secret = Secret.load(\"artifact-reg-url\")\n\n    my_flow.deploy(\n        name=\"my-deployment\",\n        work_pool_name=\"k8s-demo\",\n        image=DeploymentImage(\n            name=\"my-image\",\n            tag=\"test\",\n            dockerfile=\"Dockerfile\",\n            buildargs={\"AUTHED_ARTIFACT_REG_URL\": artifact_reg_url.get()},\n        ),\n    )\n

    Note that we used a Prefect Secret block to load the URL configuration for the artifact registry above.

    See all the optional keyword arguments for the DeploymentImage class here.

    Default Docker namespace

You can set the PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE setting to append a default Docker namespace to all images you build with .deploy. This is useful if you use a private registry to store your images.

    To set a default Docker namespace for your current profile run:

    prefect config set PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE=<docker-registry-url>/<organization-or-username>\n

    Once set, you can omit the namespace from your image name when creating a deployment:

    with_default_docker_namespace.py
    if __name__ == \"__main__\":\n    buy.deploy(\n        name=\"my-code-baked-into-an-image-deployment\", \n        work_pool_name=\"my-docker-pool\", \n        image=\"my_image:my_image_tag\"\n    )\n

    The above code will build an image with the format <docker-registry-url>/<organization-or-username>/my_image:my_image_tag when PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE is set.

While baking code into Docker images is a popular deployment option, many teams store their workflow code in git-based storage, such as GitHub, Bitbucket, or GitLab. Let's see how to do that next.

    ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#store-your-code-in-git-based-cloud-storage","title":"Store your code in git-based cloud storage","text":"

    If you don't specify an image argument for .deploy, then you need to specify where to pull the flow code from at runtime with the from_source method.

    Here's how we can pull our flow code from a GitHub repository.

    git_storage.py
    from prefect import flow\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        \"https://github.com/my_github_account/my_repo/my_file.git\",\n        entrypoint=\"flows/no-image.py:hello_world\",\n    ).deploy(\n        name=\"no-image-deployment\",\n        work_pool_name=\"my_pool\",\n        build=False\n    )\n

    The entrypoint is the path to the file the flow is located in and the function name, separated by a colon.

Alternatively, you can specify a git-based cloud storage URL for a Bitbucket or GitLab repository, as in the sketch below.
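
This sketch uses a hypothetical GitLab repository URL; the URL and entrypoint are placeholders:

from prefect import flow\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        \"https://gitlab.com/my_gitlab_group/my_repo.git\",\n        entrypoint=\"flows/no-image.py:hello_world\",\n    ).deploy(\n        name=\"gitlab-storage-deployment\",\n        work_pool_name=\"my_pool\",\n        build=False\n    )\n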

    Note

    If you don't specify an image as part of your deployment creation, the image specified in the work pool will be used to run your flow.

After creating a deployment, you might change your flow code. Generally, you can push your code to GitHub without rebuilding your deployment. The exception is when something the server needs to know about changes, such as the flow entrypoint parameters. Rerunning the Python script with .deploy will update your deployment on the server with the new flow code.

    If you need to provide additional configuration, such as specifying a private repository, you can provide a GitRepository object instead of a URL:

    private_git_storage.py
    from prefect import flow\nfrom prefect.runner.storage import GitRepository\nfrom prefect.blocks.system import Secret\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        source=GitRepository(\n        url=\"https://github.com/org/private-repo.git\",\n        branch=\"dev\",\n        credentials={\n            \"access_token\": Secret.load(\"github-access-token\")\n        }\n    ),\n    entrypoint=\"flows/no-image.py:hello_world\",\n    ).deploy(\n        name=\"private-git-storage-deployment\",\n        work_pool_name=\"my_pool\",\n        build=False\n    )\n

    Note the use of the Secret block to load the GitHub access token. Alternatively, you could provide a username and password to the username and password fields of the credentials argument.
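
For example, a minimal sketch using a username and password instead of an access token; the Secret block name github-password is hypothetical:

from prefect import flow\nfrom prefect.runner.storage import GitRepository\nfrom prefect.blocks.system import Secret\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        source=GitRepository(\n        url=\"https://github.com/org/private-repo.git\",\n        branch=\"dev\",\n        credentials={\n            \"username\": \"my-username\",\n            \"password\": Secret.load(\"github-password\")\n        }\n    ),\n    entrypoint=\"flows/no-image.py:hello_world\",\n    ).deploy(\n        name=\"private-git-storage-deployment\",\n        work_pool_name=\"my_pool\",\n        build=False\n    )\n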

    ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#store-your-code-in-cloud-provider-storage","title":"Store your code in cloud provider storage","text":"

    Another option for flow code storage is any fsspec-supported storage location, such as AWS S3, GCP GCS, or Azure Blob Storage.

For example, you can pass an S3 bucket path to the source argument.

    s3_storage.py
    from prefect import flow\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        source=\"s3://my-bucket/my-folder\",\n        entrypoint=\"flows.py:my_flow\",\n    ).deploy(\n        name=\"deployment-from-aws-flow\",\n        work_pool_name=\"my_pool\",\n    )\n

In the example above, your credentials are auto-discovered from the environment in which you create the deployment; the same credentials must also be available in your runtime environment.

    If you need additional configuration for your cloud-based storage - for example, with a private S3 Bucket - we recommend using a storage block. A storage block also ensures your credentials will be available in both your deployment creation environment and your execution environment.

    Here's an example that uses an S3Bucket block from the prefect-aws library.

    s3_storage_auth.py
    from prefect import flow\nfrom prefect_aws.s3 import S3Bucket\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        source=S3Bucket.load(\"my-code-storage\"), entrypoint=\"my_file.py:my_flow\"\n    ).deploy(name=\"test-s3\", work_pool_name=\"my_pool\")\n

    If you are familiar with the deployment creation mechanics with .serve, you will notice that .deploy is very similar. .deploy just requires a work pool name and has a number of parameters dealing with flow-code storage for Docker images.

Unlike with .serve, if you don't specify an image for your flow, you must specify where to pull the flow code from at runtime with the from_source method; from_source is optional with .serve.

    ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#additional-configuration-with-deploy","title":"Additional configuration with .deploy","text":"

    Our examples thus far have explored options for where to store flow code. Let's turn our attention to other deployment configuration options.

    To pass parameters to your flow, you can use the parameters argument in the .deploy method. Just pass in a dictionary of key-value pairs.

    pass_params.py
    from prefect import flow\n\n@flow\ndef hello_world(name: str):\n    print(f\"Hello, {name}!\")\n\nif __name__ == \"__main__\":\n    hello_world.deploy(\n        name=\"pass-params-deployment\",\n        work_pool_name=\"my_pool\",\n        parameters=dict(name=\"Prefect\"),\n        image=\"my_registry/my_image:my_image_tag\",\n    )\n

    The job_variables parameter allows you to fine-tune the infrastructure settings for a deployment. The values passed in override default values in the specified work pool's base job template.

You can override job variables, such as image_pull_policy and image, for a specific deployment with the job_variables argument.

    job_var_image_pull.py
if __name__ == \"__main__\":\n    get_repo_info.deploy(\n        name=\"my-deployment-never-pull\", \n        work_pool_name=\"my-docker-pool\", \n        job_variables={\"image_pull_policy\": \"Never\"},\n        image=\"my-image:my-tag\",\n        push=False\n    )\n

    Similarly, you can override the environment variables specified in a work pool through the job_variables parameter:

    job_var_env_vars.py
if __name__ == \"__main__\":\n    get_repo_info.deploy(\n        name=\"my-deployment-never-pull\", \n        work_pool_name=\"my-docker-pool\", \n        job_variables={\"env\": {\"EXTRA_PIP_PACKAGES\": \"boto3\"} },\n        image=\"my-image:my-tag\",\n        push=False\n    )\n

    The dictionary key \"EXTRA_PIP_PACKAGES\" denotes a special environment variable that Prefect will use to install additional Python packages at runtime. This approach is an alternative to building an image with a custom requirements.txt copied into it.

    For more information on overriding job variables see this guide.

    ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#working-with-multiple-deployments-with-deploy","title":"Working with multiple deployments with deploy","text":"

    You can create multiple deployments from one or more Python files that use .deploy. These deployments can be managed independently of one another, allowing you to deploy the same flow with different configurations in the same codebase.

    To create multiple work pool-based deployments at once you can use the deploy function, which is analogous to the serve function.

    from prefect import deploy, flow\n\n@flow(log_prints=True)\ndef buy():\n    print(\"Buying securities\")\n\n\nif __name__ == \"__main__\":\n    deploy(\n        buy.to_deployment(name=\"dev-deploy\", work_pool_name=\"my-dev-work-pool\"),\n        buy.to_deployment(name=\"prod-deploy\", work_pool_name=\"my-prod-work-pool\"),\n        image=\"my-registry/my-image:dev\",\n        push=False,\n    )\n

    Note that in the example above we created two deployments from the same flow, but with different work pools. Alternatively, we could have created two deployments from different flows.

from prefect import deploy, flow\n\n@flow(log_prints=True)\ndef buy():\n    print(\"Buying securities.\")\n\n@flow(log_prints=True)\ndef sell():\n    print(\"Selling securities.\")\n\n\nif __name__ == \"__main__\":\n    deploy(\n        buy.to_deployment(name=\"buy-deploy\"),\n        sell.to_deployment(name=\"sell-deploy\"),\n        work_pool_name=\"my-dev-work-pool\",\n        image=\"my-registry/my-image:dev\",\n        push=False,\n    )\n

    In the example above the code for both flows gets baked into the same image.

    We can specify that one or more flows should be pulled from a remote location at runtime by using the from_source method. Here's an example of deploying two flows, one defined locally and one defined in a remote repository:

    from prefect import deploy, flow\n\n\n@flow(log_prints=True)\ndef local_flow():\n    print(\"I'm a flow!\")\n\nif __name__ == \"__main__\":\n    deploy(\n        local_flow.to_deployment(name=\"example-deploy-local-flow\"),\n        flow.from_source(\n            source=\"https://github.com/org/repo.git\",\n            entrypoint=\"flows.py:my_flow\",\n        ).to_deployment(\n            name=\"example-deploy-remote-flow\",\n        ),\n        work_pool_name=\"my-work-pool\",\n        image=\"my-registry/my-image:dev\",\n    )\n

You can pass any number of flows to the deploy function. This is useful if you use a monorepo approach to your workflows.

    ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#creating-work-pool-based-deployments-with-prefectyaml","title":"Creating work pool-based deployments with prefect.yaml","text":"

    The prefect.yaml file is a YAML file describing base settings for your deployments, procedural steps for preparing deployments, and instructions for preparing the execution environment for a deployment run.

    You can initialize your deployment configuration, which creates the prefect.yaml file, by running the CLI command prefect init in any directory or repository that stores your flow code.

    Deployment configuration recipes

    Prefect ships with many off-the-shelf \"recipes\" that allow you to get started with more structure within your prefect.yaml file; run prefect init to be prompted with available recipes in your installation. You can provide a recipe name in your initialization command with the --recipe flag, otherwise Prefect will attempt to guess an appropriate recipe based on the structure of your working directory (for example if you initialize within a git repository, Prefect will use the git recipe).

    The prefect.yaml file contains deployment configuration for deployments created from this file, default instructions for how to build and push any necessary code artifacts (such as Docker images), and default instructions for pulling a deployment in remote execution environments (e.g., cloning a GitHub repository).

    Any deployment configuration can be overridden via options available on the prefect deploy CLI command when creating a deployment.

    prefect.yaml file flexibility

    In older versions of Prefect, this file had to be in the root of your repository or project directory and named prefect.yaml. Now this file can be located in a directory outside the project or a subdirectory inside the project. It can be named differently, provided the filename ends in .yaml. You can even have multiple prefect.yaml files with the same name in different directories. By default, prefect deploy will use a prefect.yaml file in the project's root directory. To use a custom deployment configuration file, supply the new --prefect-file CLI argument when running the deploy command from the root of your project directory:

    prefect deploy --prefect-file path/to/my_file.yaml

    The base structure for prefect.yaml is as follows:

    # generic metadata\nprefect-version: null\nname: null\n\n# preparation steps\nbuild: null\npush: null\n\n# runtime steps\npull: null\n\n# deployment configurations\ndeployments:\n- # base metadata\n    name: null\n    version: null\n    tags: []\n    description: null\n    schedule: null\n\n    # flow-specific fields\n    entrypoint: null\n    parameters: {}\n\n    # infra-specific fields\n    work_pool:\n    name: null\n    work_queue_name: null\n    job_variables: {}\n

    The metadata fields are always pre-populated for you. These fields are for bookkeeping purposes only. The other sections are pre-populated based on recipe; if no recipe is provided, Prefect will attempt to guess an appropriate one based on local configuration.

    You can create deployments via the CLI command prefect deploy without ever needing to alter the deployments section of your prefect.yaml file \u2014 the prefect deploy command will help in deployment creation via interactive prompts. The prefect.yaml file facilitates version-controlling your deployment configuration and managing multiple deployments.

    ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#deployment-actions","title":"Deployment actions","text":"

    Deployment actions defined in your prefect.yaml file control the lifecycle of the creation and execution of your deployments. The three actions available are build, push, and pull. pull is the only required deployment action \u2014 it is used to define how Prefect will pull your deployment in remote execution environments.

Each action is defined as a list of steps that execute in sequence.

    Each step has the following format:

    section:\n- prefect_package.path.to.importable.step:\n    id: \"step-id\" # optional\n    requires: \"pip-installable-package-spec\" # optional\n    kwarg1: value\n    kwarg2: more-values\n

Every step can optionally provide a requires field that Prefect uses to auto-install the specified packages if the step cannot be found in the current environment. Each step can also specify an id, which is used when referencing step outputs in later steps. The additional fields map directly onto Python keyword arguments to the step function. Within a given section, steps always run in the order they are provided in the prefect.yaml file.

    Deployment Instruction Overrides

    build, push, and pull sections can all be overridden on a per-deployment basis by defining build, push, and pull fields within a deployment definition in the prefect.yaml file.

    The prefect deploy command will use any build, push, or pull instructions provided in a deployment's definition in the prefect.yaml file.

    This capability is useful with multiple deployments that require different deployment instructions.

    ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#the-build-action","title":"The build action","text":"

    The build section of prefect.yaml is where any necessary side effects for running your deployments are built - the most common type of side effect produced here is a Docker image. If you initialize with the docker recipe, you will be prompted to provide required information, such as image name and tag:

    prefect init --recipe docker\n>> image_name: < insert image name here >\n>> tag: < insert image tag here >\n

    Use --field to avoid the interactive experience

    We recommend that you only initialize a recipe when you are first creating your deployment structure, and afterwards store your configuration files within version control. However, sometimes you may need to initialize programmatically and avoid the interactive prompts. To do so, provide all required fields for your recipe using the --field flag:

    prefect init --recipe docker \\\n    --field image_name=my-repo/my-image \\\n    --field tag=my-tag\n

    build:\n- prefect_docker.deployments.steps.build_docker_image:\n    requires: prefect-docker>=0.3.0\n    image_name: my-repo/my-image\n    tag: my-tag\n    dockerfile: auto\n    push: true\n

    Once you've confirmed that these fields are set to their desired values, this step will automatically build a Docker image with the provided name and tag and push it to the repository referenced by the image name. As the prefect-docker package documentation notes, this step produces a few fields that can optionally be used in future steps or within prefect.yaml as template values. It is best practice to use {{ image }} within prefect.yaml (specifically the work pool's job variables section) so that you don't risk having your build step and deployment specification get out of sync with hardcoded values.

    Note

    Note that in the build step example above, we relied on the prefect-docker package; in cases that deal with external services, additional packages are often required and will be auto-installed for you.

    Pass output to downstream steps

    Each deployment action can be composed of multiple steps. For example, if you wanted to build a Docker image tagged with the current commit hash, you could use the run_shell_script step and feed the output into the build_docker_image step:

    build:\n    - prefect.deployments.steps.run_shell_script:\n        id: get-commit-hash\n        script: git rev-parse --short HEAD\n        stream_output: false\n    - prefect_docker.deployments.steps.build_docker_image:\n        requires: prefect-docker\n        image_name: my-image\n        image_tag: \"{{ get-commit-hash.stdout }}\"\n        dockerfile: auto\n

    Note that the id field is used in the run_shell_script step so that its output can be referenced in the next step.

    ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#the-push-action","title":"The push action","text":"

The push section is most critical for situations in which code is not stored on persistent filesystems or in version control. In this scenario, code is often pushed to and pulled from a cloud storage bucket (e.g., S3, GCS, or Azure Blob Storage). The push section allows users to specify and customize the logic for pushing this code repository to arbitrary remote locations.

    For example, a user wishing to store their code in an S3 bucket and rely on default worker settings for its runtime environment could use the s3 recipe:

    prefect init --recipe s3\n>> bucket: < insert bucket name here >\n

    Inspecting our newly created prefect.yaml file we find that the push and pull sections have been templated out for us as follows:

    push:\n- prefect_aws.deployments.steps.push_to_s3:\n    id: push-code\n    requires: prefect-aws>=0.3.0\n    bucket: my-bucket\n    folder: project-name\n    credentials: null\n\npull:\n- prefect_aws.deployments.steps.pull_from_s3:\n    requires: prefect-aws>=0.3.0\n    bucket: my-bucket\n    folder: \"{{ push-code.folder }}\"\n    credentials: null\n

The bucket has been populated with our provided value (which also could have been provided with the --field flag); note that the folder property of the push step is a template - the push_to_s3 step outputs both a bucket value and a folder value that can be used to template downstream steps. Doing this helps you keep your steps consistent across edits.

    As discussed above, if you are using blocks, the credentials section can be templated with a block reference for secure and dynamic credentials access:

    push:\n- prefect_aws.deployments.steps.push_to_s3:\n    requires: prefect-aws>=0.3.0\n    bucket: my-bucket\n    folder: project-name\n    credentials: \"{{ prefect.blocks.aws-credentials.dev-credentials }}\"\n

    Anytime you run prefect deploy, this push section will be executed upon successful completion of your build section. For more information on the mechanics of steps, see below.

    ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#the-pull-action","title":"The pull action","text":"

    The pull section is the most important section within the prefect.yaml file. It contains instructions for preparing your flows for a deployment run. These instructions will be executed each time a deployment created within this folder is run via a worker.

    There are three main types of steps that typically show up in a pull section:

    • set_working_directory: this step simply sets the working directory for the process prior to importing your flow
    • git_clone: this step clones the provided repository on the provided branch
    • pull_from_{cloud}: this step pulls the working directory from a Cloud storage location (e.g., S3)

    Use block and variable references

    All block and variable references within your pull step will remain unresolved until runtime and will be pulled each time your deployment is run. This allows you to avoid storing sensitive information insecurely; it also allows you to manage certain types of configuration from the API and UI without having to rebuild your deployment every time.

    Below is an example of how to use an existing GitHubCredentials block to clone a private GitHub repository:

    pull:\n    - prefect.deployments.steps.git_clone:\n        repository: https://github.com/org/repo.git\n        credentials: \"{{ prefect.blocks.github-credentials.my-credentials }}\"\n

Alternatively, you can specify a BitBucketCredentials or GitLabCredentials block to clone from Bitbucket or GitLab. In lieu of a credentials block, you can also provide a GitHub, GitLab, or Bitbucket token directly to the access_token field. You can use a Secret block to do this securely:

    pull:\n    - prefect.deployments.steps.git_clone:\n        repository: https://bitbucket.org/org/repo.git\n        access_token: \"{{ prefect.blocks.secret.bitbucket-token }}\"\n
    ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#utility-steps","title":"Utility steps","text":"

    Utility steps can be used within a build, push, or pull action to assist in managing the deployment lifecycle:

    • run_shell_script allows for the execution of one or more shell commands in a subprocess, and returns the standard output and standard error of the script. This step is useful for scripts that require execution in a specific environment, or those which have specific input and output requirements.

    Here is an example of retrieving the short Git commit hash of the current repository to use as a Docker image tag:

    build:\n    - prefect.deployments.steps.run_shell_script:\n        id: get-commit-hash\n        script: git rev-parse --short HEAD\n        stream_output: false\n    - prefect_docker.deployments.steps.build_docker_image:\n        requires: prefect-docker>=0.3.0\n        image_name: my-image\n        tag: \"{{ get-commit-hash.stdout }}\"\n        dockerfile: auto\n

    Provided environment variables are not expanded by default

    To expand environment variables in your shell script, set expand_env_vars: true in your run_shell_script step. For example:

    - prefect.deployments.steps.run_shell_script:\n    id: get-user\n    script: echo $USER\n    stream_output: true\n    expand_env_vars: true\n

    Without expand_env_vars: true, the above step would return a literal string $USER instead of the current user.

    • pip_install_requirements installs dependencies from a requirements.txt file within a specified directory.

    Below is an example of installing dependencies from a requirements.txt file after cloning:

pull:\n    - prefect.deployments.steps.git_clone:\n        id: clone-step  # needed in order to be referenced in subsequent steps\n        repository: https://github.com/org/repo.git\n    - prefect.deployments.steps.pip_install_requirements:\n        directory: \"{{ clone-step.directory }}\"  # `clone-step` is a user-provided `id` field\n        requirements_file: requirements.txt\n

    Below is an example that retrieves an access token from a 3rd party Key Vault and uses it in a private clone step:

    pull:\n- prefect.deployments.steps.run_shell_script:\n    id: get-access-token\n    script: az keyvault secret show --name <secret name> --vault-name <secret vault> --query \"value\" --output tsv\n    stream_output: false\n- prefect.deployments.steps.git_clone:\n    repository: https://bitbucket.org/samples/deployments.git\n    branch: master\n    access_token: \"{{ get-access-token.stdout }}\"\n

You can also run custom steps by packaging them. In the example below, retrieve_secrets is a custom Python module that has been packaged into the default working directory of a Docker image (which is /opt/prefect by default). main is the function entry point, which returns an access token (e.g. return {\"access_token\": access_token}) like the preceding example, but using the Azure Python SDK for retrieval.

    - retrieve_secrets.main:\n    id: get-access-token\n- prefect.deployments.steps.git_clone:\n    repository: https://bitbucket.org/samples/deployments.git\n    branch: master\n    access_token: '{{ get-access-token.access_token }}'\n
    ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#templating-options","title":"Templating options","text":"

    Values that you place within your prefect.yaml file can reference dynamic values in several different ways:

• step outputs: every step of both build and push produces named fields such as image_name; you can reference these fields within prefect.yaml and prefect deploy will populate them with each call. References must be enclosed in double curly braces and be of the form \"{{ field_name }}\"
    • blocks: Prefect blocks can also be referenced with the special syntax {{ prefect.blocks.block_type.block_slug }}. It is highly recommended that you use block references for any sensitive information (such as a GitHub access token or any credentials) to avoid hardcoding these values in plaintext
    • variables: Prefect variables can also be referenced with the special syntax {{ prefect.variables.variable_name }}. Variables can be used to reference non-sensitive, reusable pieces of information such as a default image name or a default work pool name.
    • environment variables: you can also reference environment variables with the special syntax {{ $MY_ENV_VAR }}. This is especially useful for referencing environment variables that are set at runtime.

    As an example, consider the following prefect.yaml file:

    build:\n- prefect_docker.deployments.steps.build_docker_image:\n    id: build-image\n    requires: prefect-docker>=0.3.0\n    image_name: my-repo/my-image\n    tag: my-tag\n    dockerfile: auto\n    push: true\n\ndeployments:\n- # base metadata\n    name: null\n    version: \"{{ build-image.tag }}\"\n    tags:\n        - \"{{ $my_deployment_tag }}\"\n        - \"{{ prefect.variables.some_common_tag }}\"\n    description: null\n    schedule: null\n\n    # flow-specific fields\n    entrypoint: null\n    parameters: {}\n\n    # infra-specific fields\n    work_pool:\n        name: \"my-k8s-work-pool\"\n        work_queue_name: null\n        job_variables:\n            image: \"{{ build-image.image }}\"\n            cluster_config: \"{{ prefect.blocks.kubernetes-cluster-config.my-favorite-config }}\"\n

    So long as our build steps produce fields called image_name and tag, every time we deploy a new version of our deployment, the {{ build-image.image }} variable will be dynamically populated with the relevant values.

    Docker step

    The most commonly used build step is prefect_docker.deployments.steps.build_docker_image which produces both the image_name and tag fields.

    For an example, check out the deployments tutorial.

    A prefect.yaml file can have multiple deployment configurations that control the behavior of several deployments. These deployments can be managed independently of one another, allowing you to deploy the same flow with different configurations in the same codebase.

    ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#working-with-multiple-deployments-with-prefectyaml","title":"Working with multiple deployments with prefect.yaml","text":"

    Prefect supports multiple deployment declarations within the prefect.yaml file. This method of declaring multiple deployments allows the configuration for all deployments to be version controlled and deployed with a single command.

    New deployment declarations can be added to the prefect.yaml file by adding a new entry to the deployments list. Each deployment declaration must have a unique name field which is used to select deployment declarations when using the prefect deploy command.

    Warning

    When using a prefect.yaml file that is in another directory or differently named, remember that the value for the deployment entrypoint must be relative to the root directory of the project.

    For example, consider the following prefect.yaml file:

build: ...\npush: ...\npull: ...\n\ndeployments:\n- name: deployment-1\n    entrypoint: flows/hello.py:my_flow\n    parameters:\n        number: 42\n        message: Don't panic!\n    work_pool:\n        name: my-process-work-pool\n        work_queue_name: primary-queue\n\n- name: deployment-2\n    entrypoint: flows/goodbye.py:my_other_flow\n    work_pool:\n        name: my-process-work-pool\n        work_queue_name: secondary-queue\n\n- name: deployment-3\n    entrypoint: flows/hello.py:yet_another_flow\n    work_pool:\n        name: my-docker-work-pool\n        work_queue_name: tertiary-queue\n

    This file has three deployment declarations, each referencing a different flow. Each deployment declaration has a unique name field and can be deployed individually by using the --name flag when deploying.

    For example, to deploy deployment-1 you would run:

    prefect deploy --name deployment-1\n

    To deploy multiple deployments you can provide multiple --name flags:

    prefect deploy --name deployment-1 --name deployment-2\n

    To deploy multiple deployments with the same name, you can prefix the deployment name with its flow name:

    prefect deploy --name my_flow/deployment-1 --name my_other_flow/deployment-1\n

    To deploy all deployments you can use the --all flag:

    prefect deploy --all\n

    To deploy deployments that match a pattern you can run:

    prefect deploy -n my-flow/* -n *dev/my-deployment -n dep*prod\n

    The above command will deploy all deployments from the flow my-flow, all flows ending in dev with a deployment named my-deployment, and all deployments starting with dep and ending in prod.

    CLI Options When Deploying Multiple Deployments

    When deploying more than one deployment with a single prefect deploy command, any additional attributes provided via the CLI will be ignored.

    To provide overrides to a deployment via the CLI, you must deploy that deployment individually.

    ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#reusing-configuration-across-deployments","title":"Reusing configuration across deployments","text":"

    Because a prefect.yaml file is a standard YAML file, you can use YAML aliases to reuse configuration across deployments.

    This functionality is useful when multiple deployments need to share the work pool configuration, deployment actions, or other configurations.

    You can declare a YAML alias by using the &{alias_name} syntax and insert that alias elsewhere in the file with the *{alias_name} syntax. When aliasing YAML maps, you can also override specific fields of the aliased map by using the <<: *{alias_name} syntax and adding additional fields below.

    We recommend adding a definitions section to your prefect.yaml file at the same level as the deployments section to store your aliases.

    For example, consider the following prefect.yaml file:

build: ...\npush: ...\npull: ...\n\ndefinitions:\n    work_pools:\n        my_docker_work_pool: &my_docker_work_pool\n            name: my-docker-work-pool\n            work_queue_name: default\n            job_variables:\n                image: \"{{ build-image.image }}\"\n    schedules:\n        every_ten_minutes: &every_10_minutes\n            interval: 600\n    actions:\n        docker_build: &docker_build\n            - prefect_docker.deployments.steps.build_docker_image: &docker_build_config\n                id: build-image\n                requires: prefect-docker>=0.3.0\n                image_name: my-example-image\n                tag: dev\n                dockerfile: auto\n                push: true\n\ndeployments:\n- name: deployment-1\n    entrypoint: flows/hello.py:my_flow\n    schedule: *every_10_minutes\n    parameters:\n        number: 42\n        message: Don't panic!\n    work_pool: *my_docker_work_pool\n    build: *docker_build # Uses the full docker_build action with no overrides\n\n- name: deployment-2\n    entrypoint: flows/goodbye.py:my_other_flow\n    work_pool: *my_docker_work_pool\n    build:\n        - prefect_docker.deployments.steps.build_docker_image:\n            <<: *docker_build_config # Uses the docker_build_config alias and overrides the dockerfile field\n            dockerfile: Dockerfile.custom\n\n- name: deployment-3\n    entrypoint: flows/hello.py:yet_another_flow\n    schedule: *every_10_minutes\n    work_pool:\n        name: my-process-work-pool\n        work_queue_name: primary-queue\n

    In the above example, we are using YAML aliases to reuse work pool, schedule, and build configuration across multiple deployments:

    • deployment-1 and deployment-2 are using the same work pool configuration
    • deployment-1 and deployment-3 are using the same schedule
    • deployment-1 and deployment-2 are using the same build deployment action, but deployment-2 is overriding the dockerfile field to use a custom Dockerfile
    ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#deployment-declaration-reference","title":"Deployment declaration reference","text":"","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#deployment-fields","title":"Deployment fields","text":"

    Below are fields that can be added to each deployment declaration.

• name: The name to give to the created deployment. Used with the prefect deploy command to create or update specific deployments.
• version: An optional version for the deployment.
• tags: A list of strings to assign to the deployment as tags.
• description: An optional description for the deployment.
• schedule: An optional schedule to assign to the deployment. Fields for this section are documented in the Schedule Fields section.
• triggers: An optional array of triggers to assign to the deployment.
• entrypoint: Required path to the .py file containing the flow you want to deploy (relative to the root directory of your development folder) combined with the name of the flow function. Should be in the format path/to/file.py:flow_function_name.
• parameters: Optional default values to provide for the parameters of the deployed flow. Should be an object with key/value pairs.
• enforce_parameter_schema: Boolean flag that determines whether the API should validate the parameters passed to a flow run against the parameter schema generated for the deployed flow.
• work_pool: Information on where to schedule flow runs for the deployment. Fields for this section are documented in the Work Pool Fields section.
","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#schedule-fields","title":"Schedule fields","text":"

    Below are fields that can be added to a deployment declaration's schedule section.

• interval: Number of seconds indicating the time between flow runs. Cannot be used in conjunction with cron or rrule.
• anchor_date: Datetime string indicating the starting or \"anchor\" date to begin the schedule. If no anchor_date is supplied, the current UTC time is used. Can only be used with interval.
• timezone: String name of a time zone, used to enforce localization behaviors like DST boundaries. See the IANA Time Zone Database for valid time zones.
• cron: A valid cron string. Cannot be used in conjunction with interval or rrule.
• day_or: Boolean indicating how croniter handles day and day_of_week entries. Must be used with cron. Defaults to True.
• rrule: String representation of an RRule schedule. See the rrulestr examples for syntax. Cannot be used in conjunction with interval or cron.

    For more information about schedules, see the Schedules concept doc.

    ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#work-pool-fields","title":"Work pool fields","text":"

    Below are fields that can be added to a deployment declaration's work_pool section.

• name: The name of the work pool to schedule flow runs in for the deployment.
• work_queue_name: The name of the work queue within the specified work pool to schedule flow runs in for the deployment. If not provided, the default queue for the specified work pool will be used.
• job_variables: Values used to override the default values in the specified work pool's base job template. Maps directly to a created deployment's infra_overrides attribute.
","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#deployment-mechanics","title":"Deployment mechanics","text":"

    Anytime you run prefect deploy in a directory that contains a prefect.yaml file, the following actions are taken in order:

    • The prefect.yaml file is loaded. First, the build section is loaded and all variable and block references are resolved. The steps are then run in the order provided.
• Next, the push section is loaded and all variable and block references are resolved; the steps within this section are then run in the order provided.
• Next, the pull section is templated with any step outputs but is not run. Note that block references are not hydrated for security purposes - block references are always resolved at runtime.
• Next, all variable and block references are resolved within the deployment declaration. All flags provided via the prefect deploy CLI are then overlaid on the values loaded from the file.
• The final step occurs when the fully realized deployment specification is registered with the Prefect API.

    Deployment Instruction Overrides

    The build, push, and pull sections in deployment definitions take precedence over the corresponding sections above them in prefect.yaml.

    Each time a step is run, the following actions are taken in order:

    • The step's inputs and block / variable references are resolved (see the templating documentation above for more details).
    • The step's function is imported; if it cannot be found, the special requires keyword is used to install the necessary packages
    • The step's function is called with the resolved inputs.
    • The step's output is returned and used to resolve inputs for subsequent steps.
    ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#next-steps","title":"Next steps","text":"

    Now that you are familiar with creating deployments, you may want to explore infrastructure options for running your deployments:

    • Managed work pools
    • Push work pools
    • Kubernetes work pools
    • Serverless hybrid work pools
    ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/runtime-context/","title":"Get Information about the Runtime Context","text":"

    Prefect tracks information about the current flow or task run with a run context. The run context can be thought of as a global variable that allows the Prefect engine to determine relationships between your runs, such as which flow your task was called from.

    The run context itself contains many internal objects used by Prefect to manage execution of your run and is only available in specific situations. For this reason, we expose a simple interface that only includes the items you care about and dynamically retrieves additional information when necessary. We call this the \"runtime context\" as it contains information that can be accessed only when a run is happening.

    Mock values via environment variable

Oftentimes, you may want to mock certain values for testing purposes - for example, manually setting an ID or a scheduled start time to ensure your code functions properly. Starting in version 2.10.3, you can mock runtime values via environment variables using the schema PREFECT__RUNTIME__{SUBMODULE}__{KEY_NAME}=value:

    $ export PREFECT__RUNTIME__TASK_RUN__FAKE_KEY='foo'\n$ python -c 'from prefect.runtime import task_run; print(task_run.fake_key)' # \"foo\"\n

    If the environment variable mocks an existing runtime attribute, the value is cast to the same type. This works for runtime attributes of basic types (bool, int, float and str) and pendulum.DateTime. For complex types like list or dict, we suggest mocking them using monkeypatch or a similar tool.
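
For example, a minimal sketch of mocking a complex runtime value in a test with pytest's monkeypatch fixture; the mocked parameters dictionary is only illustrative:

from prefect.runtime import flow_run\n\ndef test_with_mocked_parameters(monkeypatch):\n    # Shadow the dynamically resolved attribute with a plain dict for this test\n    monkeypatch.setattr(flow_run, \"parameters\", {\"name\": \"Prefect\"}, raising=False)\n    assert flow_run.parameters == {\"name\": \"Prefect\"}\n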

    ","tags":["flows","subflows","tasks","deployments"],"boost":2},{"location":"guides/runtime-context/#accessing-runtime-information","title":"Accessing runtime information","text":"

    The prefect.runtime module is the home for all runtime context access. Each major runtime concept has its own submodule:

    • deployment: Access information about the deployment for the current run
    • flow_run: Access information about the current flow run
    • task_run: Access information about the current task run

    For example:

    my_runtime_info.py
    from prefect import flow, task\nfrom prefect import runtime\n\n@flow(log_prints=True)\ndef my_flow(x):\n    print(\"My name is\", runtime.flow_run.name)\n    print(\"I belong to deployment\", runtime.deployment.name)\n    my_task(2)\n\n@task\ndef my_task(y):\n    print(\"My name is\", runtime.task_run.name)\n    print(\"Flow run parameters:\", runtime.flow_run.parameters)\n\nmy_flow(1)\n

    Running this file will produce output similar to the following:

    10:08:02.948 | INFO    | prefect.engine - Created flow run 'solid-gibbon' for flow 'my-flow'\n10:08:03.555 | INFO    | Flow run 'solid-gibbon' - My name is solid-gibbon\n10:08:03.558 | INFO    | Flow run 'solid-gibbon' - I belong to deployment None\n10:08:03.703 | INFO    | Flow run 'solid-gibbon' - Created task run 'my_task-0' for task 'my_task'\n10:08:03.704 | INFO    | Flow run 'solid-gibbon' - Executing 'my_task-0' immediately...\n10:08:04.006 | INFO    | Task run 'my_task-0' - My name is my_task-0\n10:08:04.007 | INFO    | Task run 'my_task-0' - Flow run parameters: {'x': 1}\n10:08:04.105 | INFO    | Task run 'my_task-0' - Finished in state Completed()\n10:08:04.968 | INFO    | Flow run 'solid-gibbon' - Finished in state Completed('All states completed.')\n

    Above, we demonstrated access to information about the current flow run, task run, and deployment. When run without a deployment (via python my_runtime_info.py), you should see \"I belong to deployment None\" logged. When information is not available, the runtime will always return an empty value. Because this flow was run outside of a deployment, there is no deployment data. If this flow was run as part of a deployment, we'd see the name of the deployment instead.

    See the runtime API reference for a full list of available attributes.

    ","tags":["flows","subflows","tasks","deployments"],"boost":2},{"location":"guides/runtime-context/#accessing-the-run-context-directly","title":"Accessing the run context directly","text":"

    The current run context can be accessed with prefect.context.get_run_context(). This function will raise an exception if no run context is available, meaning you are not in a flow or task run. If a task run context is available, it will be returned even if a flow run context is available.
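
For example, a minimal sketch; the printed name is just one of the attributes available on the returned context:

from prefect import flow\nfrom prefect.context import get_run_context\n\n@flow\ndef my_flow():\n    context = get_run_context()  # raises if called outside of a flow or task run\n    print(context.flow_run.name)\n\nmy_flow()\n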

    Alternatively, you can access the flow run or task run context explicitly. This will, for example, allow you to access the flow run context from a task run.

    Note that we do not send the flow run context to distributed task workers because the context is costly to serialize and deserialize.

    from prefect.context import FlowRunContext, TaskRunContext\n\nflow_run_ctx = FlowRunContext.get()\ntask_run_ctx = TaskRunContext.get()\n

    Unlike get_run_context, these method calls will not raise an error if the context is not available. Instead, they will return None.

    ","tags":["flows","subflows","tasks","deployments"],"boost":2},{"location":"guides/secrets/","title":"Third-party Secrets: Connect to services without storing credentials in blocks","text":"

    Credentials blocks and secret blocks are popular ways to store and retrieve sensitive information for connecting to third-party services.

    In Prefect Cloud, these block values are stored in encrypted format. Organizations whose security policies make such storage infeasible can still use Prefect to connect to third-party services securely.

    In this example, we interact with a Snowflake database and store the credentials we need to connect in AWS Secrets Manager. This example can be generalized to other third party services that require credentials. We use Prefect Cloud in this example.

    "},{"location":"guides/secrets/#prerequisites","title":"Prerequisites","text":"
    1. Prefect installed.
    2. CLI authenticated to your Prefect Cloud account.
    3. Snowflake account.
    4. AWS account.
    "},{"location":"guides/secrets/#steps","title":"Steps","text":"
    1. Install prefect-aws and prefect-snowflake integration libraries.
    2. Store Snowflake password in AWS Secrets Manager.
    3. Create AwsSecret block to access the Snowflake password.
    4. Create AwsCredentials block for authentication.
    5. Ensure the compute environment has access to AWS credentials that are authorized to access the secret in AWS.
    6. Create and use SnowflakeCredentials and SnowflakeConnector blocks in Python code to interact with Snowflake.
    "},{"location":"guides/secrets/#install-prefect-aws-and-prefect-snowflake-libraries","title":"Install prefect-aws and prefect-snowflake libraries","text":"

The following command installs and upgrades the necessary libraries and their dependencies.

    pip install -U prefect-aws prefect-snowflake\n
    "},{"location":"guides/secrets/#store-snowflake-password-in-aws-secrets-manager","title":"Store Snowflake password in AWS Secrets Manager","text":"

Go to the AWS Secrets Manager console and create a new secret. Alternatively, create a secret using the AWS CLI or a script, as sketched below.
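
If you go the script route, here is a minimal sketch using boto3; the secret name, key, and region are placeholders chosen to match the rest of this example:

import json\nimport boto3\n\n# Store the Snowflake password as a JSON key-value pair in AWS Secrets Manager\nclient = boto3.client(\"secretsmanager\", region_name=\"us-east-2\")\nclient.create_secret(\n    Name=\"my-snowflake-pw\",\n    SecretString=json.dumps({\"my-snowflake-pw\": \"REPLACE-WITH-YOUR-SNOWFLAKE-PASSWORD\"}),\n)\n

To create the secret in the UI instead: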

    1. In the UI, choose Store a new secret.
    2. Select Other type of secret.
    3. Input the key-value pair for your Snowflake password where the key is any string and the value is your Snowflake password.
    4. Copy the key for future reference and click Next.
    5. Enter a name for your secret, copy the name, and click Next.
    6. For this demo, we won't rotate the key, so click Next.
    7. Click Store.
    "},{"location":"guides/secrets/#create-awssecret-block-to-access-your-snowflake-password","title":"Create AwsSecret block to access your Snowflake password","text":"

    You can create blocks with Python code or via the Prefect UI. Block creation through the UI can help you visualize how the pieces fit together, so let's use it here.

    On the Blocks page, click on + to add a new block and select AWS Secret from the list of block types. Enter a name for your block and enter the secret name from AWS Secrets Manager.

    Note that if you're using a self-hosted Prefect server instance, you'll need to register the block types in the newly installed modules before creating blocks.

    prefect block register -m prefect_aws && prefect block register -m prefect_snowflake\n
    "},{"location":"guides/secrets/#create-awscredentials-block","title":"Create AwsCredentials block","text":"

    In the AwsCredentials section, click Add + and a form will appear to create an AWS Credentials block.

    Values for Access Key ID and Secret Access Key will be read from the compute environment. My AWS Access Key ID and Secret Access Key values with permissions to read the AWS Secret are stored locally in my ~/.aws/credentials file, so I'll leave those fields blank. You could enter those values at block creation, but then they would be saved to the database, and that's what we're trying to avoid. By leaving those attributes blank, Prefect knows to look to the compute environment.

We need to specify a region in our local AWS config file or in our AwsCredentials block. The AwsCredentials block takes precedence, so let's specify it here for portability.
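
If you prefer to create this block with Python instead of the UI, a minimal sketch (the block name and region are placeholders) looks like this; leaving the key fields unset tells Prefect to read them from the compute environment:

from prefect_aws import AwsCredentials\n\n# Only the region is stored on the block; access keys are resolved from the environment\nAwsCredentials(region_name=\"us-east-2\").save(\"my-aws-creds\", overwrite=True)\n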

Under the hood, Prefect uses the AWS boto3 client to create a session.

    Click Create to save the blocks.

    "},{"location":"guides/secrets/#ensure-the-compute-environment-has-access-to-aws-credentials","title":"Ensure the compute environment has access to AWS credentials","text":"

    Ensure the compute environment contains AWS credentials with authorization to access AWS Secrets Manager. When we connect to Snowflake, Prefect will automatically use these credentials to authenticate and access the secret.

    "},{"location":"guides/secrets/#create-and-use-snowflakecredentials-and-snowflakeconnector-blocks-in-python-code","title":"Create and use SnowflakeCredentials and SnowflakeConnector blocks in Python code","text":"

    Let's use Prefect's blocks for convenient access to Snowflake. We won't save the blocks, to ensure the credentials are not stored in Prefect Cloud.

    We'll create a flow that connects to Snowflake and calls two tasks. The first task creates a table and inserts some data. The second task reads the data out.

    import json\nfrom prefect import flow, task\nfrom prefect_aws import AwsSecret\nfrom prefect_snowflake import SnowflakeConnector, SnowflakeCredentials\n\n\n@task\ndef setup_table(snow_connector: SnowflakeConnector) -> None:\n    with snow_connector as connector:\n        connector.execute(\n            \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n        )\n        connector.execute_many(\n            \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n            seq_of_parameters=[\n                {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                {\"name\": \"Unknown\", \"address\": \"Space\"},\n                {\"name\": \"Me\", \"address\": \"Myway 88\"},\n            ],\n        )\n\n\n@task\ndef fetch_data(snow_connector: SnowflakeConnector) -> list:\n    all_rows = []\n    with snow_connector as connector:\n        while True:\n            new_rows = connector.fetch_many(\"SELECT * FROM customers\", size=2)\n            if len(new_rows) == 0:\n                break\n            all_rows.append(new_rows)\n    return all_rows\n\n\n@flow(log_prints=True)\ndef snowflake_flow():\n    aws_secret_block = AwsSecret.load(\"my-snowflake-pw\")\n\n    snow_connector = SnowflakeConnector(\n        schema=\"MY_SCHEMA\",\n        database=\"MY_DATABASE\",\n        warehouse=\"COMPUTE_WH\",\n        fetch_size=1,\n        credentials=SnowflakeCredentials(\n            role=\"MYROLE\",\n            user=\"MYUSERNAME\",\n            account=\"ab12345.us-east-2.aws\",\n            password=json.loads(aws_secret_block.read_secret()).get(\"my-snowflake-pw\"),\n        ),\n        poll_frequency_s=1,\n    )\n\n    setup_table(snow_connector)\n    all_rows = fetch_data(snow_connector)\n    print(all_rows)\n\n\nif __name__ == \"__main__\":\n    snowflake_flow()\n

    Fill in the relevant details for your Snowflake account and run the script.

    Note that the flow reads the Snowflake password from the AWS Secret Manager and uses it in the SnowflakeCredentials block. The SnowflakeConnector block uses the nested SnowflakeCredentials block to connect to Snowflake. Again, neither of the Snowflake blocks are saved, so the credentials are not stored in Prefect Cloud.

    Check out the prefect-snowflake docs for more examples of working with Snowflake.

    "},{"location":"guides/secrets/#next-steps","title":"Next steps","text":"

    Now you can turn your flow into a deployment so that you and your team can run it remotely on a schedule, in response to an event, or manually.

    Make sure to specify the prefect-aws and prefect-snowflake dependencies in your work pool or deployment so that they are available at runtime.

    Also ensure your compute has the AWS credentials for accessing the secret in AWS Secrets Manager.

    You've seen how to use Prefect blocks to store non-sensitive configuration and fetch sensitive configuration values from the environment. You can use this pattern to connect to other third-party services that require credentials, such as databases and APIs. You can use a similar pattern with any secret manager, or extend it to work with environment variables.
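
    For example, here is a minimal sketch of the same pattern using an environment variable instead of a secret manager. The SNOWFLAKE_PASSWORD variable name and the connection details are only illustrative, and the blocks are again never saved:

    import os\nfrom prefect import flow\nfrom prefect_snowflake import SnowflakeConnector, SnowflakeCredentials\n\n\n@flow\ndef env_var_snowflake_flow():\n    # read the sensitive value from the environment at runtime\n    password = os.environ[\"SNOWFLAKE_PASSWORD\"]\n\n    # build the blocks in memory only; nothing is saved to Prefect Cloud\n    snow_connector = SnowflakeConnector(\n        schema=\"MY_SCHEMA\",\n        database=\"MY_DATABASE\",\n        warehouse=\"COMPUTE_WH\",\n        credentials=SnowflakeCredentials(\n            role=\"MYROLE\",\n            user=\"MYUSERNAME\",\n            account=\"ab12345.us-east-2.aws\",\n            password=password,\n        ),\n    )\n    with snow_connector as connector:\n        print(connector.fetch_one(\"SELECT CURRENT_TIMESTAMP;\"))\n\n\nif __name__ == \"__main__\":\n    env_var_snowflake_flow()\n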

    "},{"location":"guides/settings/","title":"Profiles & Configuration","text":"

    Prefect's local settings are documented and type-validated.

    By modifying the default settings, you can customize various aspects of the system. You can override a setting with an environment variable or by updating the setting in a Prefect profile.

    Prefect profiles are persisted groups of settings on your local machine. A single profile is always active.

    Initially, a default profile named default is active and contains no settings overrides.

    View all currently active settings from the command line with the following command:

    prefect config view --show-defaults\n

    When you switch to a different profile, all of the settings configured in the newly activated profile are applied.
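
    As a minimal sketch (the script name check_settings.py is only illustrative), you can read a setting's validated value from Python and see an environment variable override take effect:

    import prefect.settings\n\n# Settings are type-validated: .value() returns a typed value, not a raw string.\n# Try running this script with an override, for example:\n#   PREFECT_API_REQUEST_TIMEOUT=30 python check_settings.py\ntimeout = prefect.settings.PREFECT_API_REQUEST_TIMEOUT.value()\nprint(timeout, type(timeout))  # a float, e.g. 60.0 by default\n\n# the active profile is exposed as a setting too\nprint(prefect.settings.PREFECT_PROFILE.value())\n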

    ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#commonly-configured-settings","title":"Commonly configured settings","text":"

    This section describes some commonly configured settings. See Configuring settings for details on setting and unsetting configuration values.

    ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#prefect_api_key","title":"PREFECT_API_KEY","text":"

    The PREFECT_API_KEY value specifies the API key used to authenticate with Prefect Cloud.

    PREFECT_API_KEY=\"[API-KEY]\"\n

    Generally, you will set the PREFECT_API_URL and PREFECT_API_KEY for your active profile by running prefect cloud login. If you're curious, read more about managing API keys.

    ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#prefect_api_url","title":"PREFECT_API_URL","text":"

    The PREFECT_API_URL value specifies the API endpoint of your Prefect Cloud workspace or a self-hosted Prefect server instance.

    For example, if using Prefect Cloud:

    PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"\n

    You can view your Account ID and Workspace ID in your browser URL when at a Prefect Cloud workspace page. For example: https://app.prefect.cloud/account/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here.

    If using a local Prefect server instance, set your API URL like this:

    PREFECT_API_URL=\"http://127.0.0.1:4200/api\"\n

    PREFECT_API_URL setting for workers

    If using a worker (agent and block-based deployments are legacy) that can create flow runs for deployments in remote environments, PREFECT_API_URL must be set for the environment in which your worker is running.

    If you want the worker to communicate with Prefect Cloud or a Prefect server instance from a remote execution environment such as a VM or Docker container, you must configure PREFECT_API_URL in that environment.

    Running the Prefect UI behind a reverse proxy

    When using a reverse proxy (such as Nginx or Traefik) to proxy traffic to a locally-hosted Prefect UI instance, the Prefect server instance also needs to be configured to know how to connect to the API. The PREFECT_UI_API_URL should be set to the external proxy URL (e.g. if your external URL is https://prefect-server.example.com/ then set PREFECT_UI_API_URL=https://prefect-server.example.com/api for the Prefect server process). You can also accomplish this by setting PREFECT_API_URL to the API URL, as this setting is used as a fallback if PREFECT_UI_API_URL is not set.

    ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#prefect_home","title":"PREFECT_HOME","text":"

    The PREFECT_HOME value specifies the local Prefect directory for configuration files, profiles, and the location of the default Prefect SQLite database.

    PREFECT_HOME='~/.prefect'\n
    ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#prefect_local_storage_path","title":"PREFECT_LOCAL_STORAGE_PATH","text":"

    The PREFECT_LOCAL_STORAGE_PATH value specifies the default location of local storage for flow runs.

    PREFECT_LOCAL_STORAGE_PATH='${PREFECT_HOME}/storage'\n
    ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#csrf-protection-settings","title":"CSRF Protection Settings","text":"

    If using a local Prefect server instance, you can configure CSRF protection settings.

    PREFECT_SERVER_CSRF_PROTECTION_ENABLED - Activates CSRF protection on the server, requiring valid CSRF tokens for applicable requests. Recommended for production to prevent CSRF attacks. Defaults to False.

    PREFECT_SERVER_CSRF_PROTECTION_ENABLED=True\n

    PREFECT_SERVER_CSRF_TOKEN_EXPIRATION - Sets the expiration duration for server-issued CSRF tokens, influencing how often tokens need to be refreshed. The default is 1 hour.

    PREFECT_SERVER_CSRF_TOKEN_EXPIRATION='3600'  # 1 hour in seconds\n

    By default clients expect that CSRF protection is enabled on the server. If you are running a server without CSRF protection, you can disable CSRF support in the client.

    PREFECT_CLIENT_CSRF_SUPPORT_ENABLED - Enables or disables CSRF token handling in the Prefect client. When enabled, the client manages CSRF tokens for state-changing API requests. Defaults to True.

    PREFECT_CLIENT_CSRF_SUPPORT_ENABLED=True\n
    ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#database-settings","title":"Database settings","text":"

    If running a self-hosted Prefect server instance, there are several database configuration settings you can read about here.

    ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#logging-settings","title":"Logging settings","text":"

    Prefect provides several logging configuration settings that you can read about in the logging docs.

    ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#configuring-settings","title":"Configuring settings","text":"

    The prefect config CLI commands enable you to view, set, and unset settings.

    • set: Change the value for a setting.
    • unset: Restore the default value for a setting.
    • view: Display the current settings.
    ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#viewing-settings-from-the-cli","title":"Viewing settings from the CLI","text":"

    The prefect config view command will display settings that override default values.

    $ prefect config view\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='DEBUG'\n

    You can show the sources of values with --show-sources:

    $ prefect config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='DEBUG' (from env)\n

    You can also include default values with --show-defaults:

    $ prefect config view --show-defaults\nPREFECT_PROFILE='default'\nPREFECT_AGENT_PREFETCH_SECONDS='10' (from defaults)\nPREFECT_AGENT_QUERY_INTERVAL='5.0' (from defaults)\nPREFECT_API_KEY='None' (from defaults)\nPREFECT_API_REQUEST_TIMEOUT='60.0' (from defaults)\nPREFECT_API_URL='None' (from defaults)\n...\n
    ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#setting-and-clearing-values","title":"Setting and clearing values","text":"

    The prefect config set command lets you change the value of a default setting.

    A commonly used example is setting the PREFECT_API_URL, which you may need to change when interacting with different Prefect server instances or Prefect Cloud.

    # use a local Prefect server\nprefect config set PREFECT_API_URL=\"http://127.0.0.1:4200/api\"\n\n# use Prefect Cloud\nprefect config set PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"\n

    If you want to configure a setting to use its default value, use the prefect config unset command.

    prefect config unset PREFECT_API_URL\n
    ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#overriding-defaults-with-environment-variables","title":"Overriding defaults with environment variables","text":"

    Each setting has a key that matches the environment variable that can be used to override it.

    For example, configuring the home directory:

    # environment variable\nexport PREFECT_HOME=\"/path/to/home\"\n
    # python\nimport prefect.settings\nprefect.settings.PREFECT_HOME.value()  # PosixPath('/path/to/home')\n

    Configuring a server instance's port:

    # environment variable\nexport PREFECT_SERVER_API_PORT=4242\n
    # python\nprefect.settings.PREFECT_SERVER_API_PORT.value()  # 4242\n
    ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#configuration-profiles","title":"Configuration profiles","text":"

    Prefect allows you to persist settings instead of setting an environment variable each time you open a new shell. Settings are persisted to profiles, which allow you to move between groups of settings quickly.

    The prefect profile CLI commands enable you to create, review, and manage profiles.

    • create: Create a new profile.
    • delete: Delete the given profile.
    • inspect: Display settings from a given profile; defaults to the active profile.
    • ls: List profile names.
    • rename: Change the name of a profile.
    • use: Switch the active profile.

    If you configured settings for a profile, prefect profile inspect displays those settings:

    $ prefect profile inspect\nPREFECT_PROFILE = \"default\"\nPREFECT_API_KEY = \"pnu_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\"\nPREFECT_API_URL = \"http://127.0.0.1:4200/api\"\n

    You can pass the name of a profile to view its settings:

    $ prefect profile create test\n$ prefect profile inspect test\nPREFECT_PROFILE=\"test\"\n
    ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#creating-and-removing-profiles","title":"Creating and removing profiles","text":"

    Create a new profile with no settings:

    $ prefect profile create test\nCreated profile 'test' at /Users/terry/.prefect/profiles.toml.\n

    Create a new profile foo with settings cloned from an existing default profile:

    $ prefect profile create foo --from default\nCreated profile 'foo' matching 'default' at /Users/terry/.prefect/profiles.toml.\n

    Rename a profile:

    $ prefect profile rename temp test\nRenamed profile 'temp' to 'test'.\n

    Remove a profile:

    $ prefect profile delete test\nRemoved profile 'test'.\n

    Removing the default profile resets it:

    $ prefect profile delete default\nReset profile 'default'.\n
    ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#change-values-in-profiles","title":"Change values in profiles","text":"

    Set a value in the current profile:

    $ prefect config set VAR=X\nSet variable 'VAR' to 'X'\nUpdated profile 'default'\n

    Set multiple values in the current profile:

    $ prefect config set VAR2=Y VAR3=Z\nSet variable 'VAR2' to 'Y'\nSet variable 'VAR3' to 'Z'\nUpdated profile 'default'\n

    You can set a value in another profile by passing the --profile NAME option to a CLI command:

    $ prefect --profile \"foo\" config set VAR=Y\nSet variable 'VAR' to 'Y'\nUpdated profile 'foo'\n

    Unset values in the current profile to restore the defaults:

    $ prefect config unset VAR2 VAR3\nUnset variable 'VAR2'\nUnset variable 'VAR3'\nUpdated profile 'default'\n
    ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#inspecting-profiles","title":"Inspecting profiles","text":"

    See a list of available profiles:

    $ prefect profile ls\n* default\ncloud\ntest\nlocal\n

    View all settings for a profile:

    $ prefect profile inspect cloud\nPREFECT_API_URL='https://api.prefect.cloud/api/accounts/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx\nx/workspaces/xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'\nPREFECT_API_KEY='xxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'          \n
    ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#using-profiles","title":"Using profiles","text":"

    The profile named default is used by default. There are several methods to switch to another profile.

    The recommended method is to use the prefect profile use command with the name of the profile:

    $ prefect profile use foo\nProfile 'foo' now active.\n

    Alternatively, you can set the environment variable PREFECT_PROFILE to the name of the profile:

    export PREFECT_PROFILE=foo\n

    Or, specify the profile in the CLI command for one-time usage:

    prefect --profile \"foo\" ...\n

    Note that this option must come before the subcommand. For example, to list flow runs using the profile foo:

    prefect --profile \"foo\" flow-run ls\n

    You may use the -p flag as well:

    prefect -p \"foo\" flow-run ls\n

    You may also create an 'alias' to automatically use your profile:

    $ alias prefect-foo=\"prefect --profile 'foo' \"\n# uses our profile!\n$ prefect-foo config view  \n
    ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#conflicts-with-environment-variables","title":"Conflicts with environment variables","text":"

    If setting the profile from the CLI with --profile, environment variables that conflict with settings in the profile will be ignored.

    In all other cases, environment variables will take precedence over the value in the profile.

    For example, a value set in a profile will be used by default:

    $ prefect config set PREFECT_LOGGING_LEVEL=\"ERROR\"\n$ prefect config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='ERROR' (from profile)\n

    But, setting an environment variable will override the profile setting:

    $ export PREFECT_LOGGING_LEVEL=\"DEBUG\"\n$ prefect config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='DEBUG' (from env)\n

    Unless the profile is explicitly requested when using the CLI:

    $ prefect --profile default config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='ERROR' (from profile)\n
    ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#profile-files","title":"Profile files","text":"

    Profiles are persisted to the file location specified by PREFECT_PROFILES_PATH. The default location is a profiles.toml file in the PREFECT_HOME directory:

    $ prefect config view --show-defaults\n...\nPREFECT_PROFILES_PATH='${PREFECT_HOME}/profiles.toml'\n...\n

    The TOML format is used to store profile data.
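
    As a minimal sketch, you can locate and read the active profiles file from Python through the PREFECT_PROFILES_PATH setting:

    import prefect.settings\n\n# resolve the path to the profiles file (a pathlib.Path)\nprofiles_path = prefect.settings.PREFECT_PROFILES_PATH.value()\nprint(profiles_path)\n\n# print the raw TOML contents if the file exists\nif profiles_path.exists():\n    print(profiles_path.read_text())\n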

    ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/specifying-upstream-dependencies/","title":"Specifying Upstream Dependencies","text":"

    Results from a task can be provided to other tasks (or subflows) as upstream dependencies. Prefect uses upstream dependencies in two ways:

    1. To populate dependency arrows in the flow run graph
    2. To determine execution order for concurrently submitted units of work that depend on each other

    Tasks vs. other functions

    Only results from tasks inform Prefect's ability to determine dependencies. Return values from functions without task decorators, including subflows, do not carry the same information about their origin as task results.
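
    As a minimal sketch of the difference, a plain helper function's return value carries no dependency information, while a task result does:

    from prefect import flow, task\n\n\ndef plain_helper():\n    # not a task, so its return value carries no information about its origin\n    return \"plain value\"\n\n@task\ndef upstream_task():\n    return \"task value\"\n\n@task\ndef consumer(value):\n    return value\n\n@flow\ndef example_flow():\n    # no upstream dependency is inferred from the plain function's return value\n    consumer(plain_helper())\n    # passing a task result makes upstream_task an upstream dependency of consumer\n    consumer(upstream_task())\n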

    When using non-sequential task runners such as the ConcurrentTaskRunner or DaskTaskRunner, the order of execution for submitted tasks is not guaranteed unless their dependencies are specified.

    For example, compare how tasks submitted to the ConcurrentTaskRunner behave with and without upstream dependencies by clicking on the tabs below.

    Without dependenciesWith dependencies
    @flow(log_prints=True) # Default task runner is ConcurrentTaskRunner\ndef flow_of_tasks():\n    # no dependencies, so execution order is not guaranteed\n    first.submit()\n    second.submit()\n    third.submit()\n\n@task\ndef first():\n    print(\"I'm first!\")\n\n@task\ndef second():\n    print(\"I'm second!\")\n\n@task\ndef third():\n    print(\"I'm third!\")\n
    Flow run 'pumpkin-puffin' - Created task run 'first-0' for task 'first'\nFlow run 'pumpkin-puffin' - Submitted task run 'first-0' for execution.\nFlow run 'pumpkin-puffin' - Created task run 'second-0' for task 'second'\nFlow run 'pumpkin-puffin' - Submitted task run 'second-0' for execution.\nFlow run 'pumpkin-puffin' - Created task run 'third-0' for task 'third'\nFlow run 'pumpkin-puffin' - Submitted task run 'third-0' for execution.\nTask run 'third-0' - I'm third!\nTask run 'first-0' - I'm first!\nTask run 'second-0' - I'm second!\nTask run 'second-0' - Finished in state Completed()\nTask run 'third-0' - Finished in state Completed()\nTask run 'first-0' - Finished in state Completed()\nFlow run 'pumpkin-puffin' - Finished in state Completed('All states completed.')\n
    @flow(log_prints=True) # Default task runner is ConcurrentTaskRunner\ndef flow_of_tasks():\n    # with dependencies, tasks execute in order\n    first_result = first.submit()\n    second_result = second.submit(first_result)\n    third.submit(second_result)\n\n@task\ndef first():\n    print(\"I'm first!\")\n\n@task\ndef second(input):\n    print(\"I'm second!\")\n\n@task\ndef third(input):\n    print(\"I'm third!\")\n
    Flow run 'statuesque-waxbill' - Created task run 'first-0' for task 'first'\nFlow run 'statuesque-waxbill' - Submitted task run 'first-0' for execution.\nFlow run 'statuesque-waxbill' - Created task run 'second-0' for task 'second'\nFlow run 'statuesque-waxbill' - Submitted task run 'second-0' for execution.\nFlow run 'statuesque-waxbill' - Created task run 'third-0' for task 'third'\nFlow run 'statuesque-waxbill' - Submitted task run 'third-0' for execution.\nTask run 'first-0' - I'm first!\nTask run 'first-0' - Finished in state Completed()\nTask run 'second-0' - I'm second!\nTask run 'second-0' - Finished in state Completed()\nTask run 'third-0' - I'm third!\nTask run 'third-0' - Finished in state Completed()\nFlow run 'statuesque-waxbill' - Finished in state Completed('All states completed.')\n
    ","tags":["tasks","dependencies","wait_for"],"boost":2},{"location":"guides/specifying-upstream-dependencies/#determination-methods","title":"Determination methods","text":"

    A task or subflow's upstream dependencies can be inferred automatically via its inputs, or stated explicitly via the wait_for parameter.

    ","tags":["tasks","dependencies","wait_for"],"boost":2},{"location":"guides/specifying-upstream-dependencies/#automatic","title":"Automatic","text":"

    When a result from a task is used as input for another task, Prefect automatically recognizes the task that result originated from as an upstream dependency.

    This applies to every way you can run tasks with Prefect, whether you're calling the task function directly, calling .submit(), or calling .map(). Subflows similarly recognize task results as upstream dependencies.

    from prefect import flow, task\n\n\n@flow(log_prints=True)\ndef flow_of_tasks():\n    upstream_result = upstream.submit()\n    downstream_1_result = downstream_1.submit(upstream_result)\n    downstream_2_result = downstream_2.submit(upstream_result)\n    mapped_task_results = mapped_task.map([downstream_1_result, downstream_2_result])\n    final_task(mapped_task_results)\n\n@task\ndef upstream():\n    return \"Hello from upstream!\"\n\n@task\ndef downstream_1(input):\n    return input\n\n@task\ndef downstream_2(input):\n    return input\n\n@task\ndef mapped_task(input):\n    return input\n\n@task\ndef final_task(input):\n    print(input)\n
    Flow run graph displaying inferred dependencies with the \"Dependency grid\" layout selected","tags":["tasks","dependencies","wait_for"],"boost":2},{"location":"guides/specifying-upstream-dependencies/#manual","title":"Manual","text":"

    Tasks that do not share data can be informed of their upstream dependencies through the wait_for parameter. Just as with automatic dependencies, this applies to direct task function calls, .submit(), .map(), and subflows.

    Differences with .map()

    Manually defined upstream dependencies apply to all tasks submitted by .map(), so each mapped task must wait for all upstream dependencies passed into wait_for to finish. This is distinct from automatic dependencies for mapped tasks, where each mapped task must only wait for the upstream tasks whose results it depends on.

    from prefect import flow, task\n\n\n@flow(log_prints=True)\ndef flow_of_tasks():\n    upstream_result = upstream.submit()\n    downstream_1_result = downstream_1.submit(wait_for=[upstream_result])\n    downstream_2_result = downstream_2.submit(wait_for=[upstream_result])\n    mapped_task_results = mapped_task.map([1, 2], wait_for=[downstream_1_result, downstream_2_result])\n    final_task(wait_for=mapped_task_results)\n\n@task\ndef upstream():\n    pass\n\n@task\ndef downstream_1():\n    pass\n\n@task\ndef downstream_2():\n    pass\n\n@task\ndef mapped_task(input):\n    pass\n\n@task\ndef final_task():\n    pass\n
    Flow run graph displaying manual dependencies with the \"Dependency grid\" layout selected","tags":["tasks","dependencies","wait_for"],"boost":2},{"location":"guides/specifying-upstream-dependencies/#deployments-as-dependencies","title":"Deployments as dependencies","text":"

    For more complex workflows, parts of your logic may require additional resources, different infrastructure, or independent parallel execution. A typical approach for addressing these needs is to execute that logic as separate deployment runs from within a flow.

    Composing deployment runs into a flow so that they can be treated as upstream dependencies is as simple as calling run_deployment from within a task.

    Given a deployment process-user of flow parallel-work, a flow of deployments might look like this:

    from prefect import flow, task\nfrom prefect.deployments import run_deployment\n\n\n@flow\ndef flow_of_deployments():\n    deployment_run_1 = run_deployment_task.submit(\n        flow_name=\"parallel-work\",\n        deployment_name=\"process-user\",\n        parameters={\"user_id\": 1},\n    )\n    deployment_run_2 = run_deployment_task.submit(\n        flow_name=\"parallel-work\",\n        deployment_name=\"process-user\",\n        parameters={\"user_id\": 2},\n    )\n    downstream_task(wait_for=[deployment_run_1, deployment_run_2])\n\n\n@task(task_run_name=\"Run deployment {flow_name}/{deployment_name}\")\ndef run_deployment_task(\n    flow_name: str,\n    deployment_name: str,\n    parameters: dict\n):\n    run_deployment(\n        name=f\"{flow_name}/{deployment_name}\",\n        parameters=parameters\n    )\n\n\n@task\ndef downstream_task():\n    print(\"I'm downstream!\")\n

    By default, deployments started from run_deployment will also appear as subflows for tracking purposes. This behavior can be disabled by setting the as_subflow parameter for run_deployment to False.
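
    For example, here is a minimal sketch of disabling subflow tracking, reusing the parallel-work/process-user deployment from the example above:

    from prefect import flow\nfrom prefect.deployments import run_deployment\n\n\n@flow\ndef launch_without_subflow_tracking():\n    # the deployment run is created but not linked to this flow as a subflow\n    run_deployment(\n        name=\"parallel-work/process-user\",\n        parameters={\"user_id\": 1},\n        as_subflow=False,\n    )\n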

    Flow run graph displaying deployments as dependencies with the \"Dependency grid\" layout selected","tags":["tasks","dependencies","wait_for"],"boost":2},{"location":"guides/state-change-hooks/","title":"State Change Hooks","text":"

    State change hooks execute code in response to changes in flow or task run states, enabling you to define actions for specific state transitions in a workflow. This guide provides examples of real-world use cases.

    ","tags":["state change","hooks","triggers"],"boost":2},{"location":"guides/state-change-hooks/#example-use-cases","title":"Example use cases","text":"","tags":["state change","hooks","triggers"],"boost":2},{"location":"guides/state-change-hooks/#send-a-notification-when-a-flow-run-fails","title":"Send a notification when a flow run fails","text":"

    State change hooks enable you to customize messages sent when tasks transition between states, such as sending notifications containing sensitive information when tasks enter a Failed state. Let's run a client-side hook upon a flow run entering a Failed state.

    from prefect import flow\nfrom prefect.blocks.core import Block\nfrom prefect.settings import PREFECT_API_URL\n\ndef notify_slack(flow, flow_run, state):\n    slack_webhook_block = Block.load(\n        \"slack-webhook/my-slack-webhook\"\n    )\n\n    slack_webhook_block.notify(\n        (\n            f\"Your job {flow_run.name} entered {state.name} \"\n            f\"with message:\\n\\n\"\n            f\"See <https://{PREFECT_API_URL.value()}/flow-runs/\"\n            f\"flow-run/{flow_run.id}|the flow run in the UI>\\n\\n\"\n            f\"Tags: {flow_run.tags}\\n\\n\"\n            f\"Scheduled start: {flow_run.expected_start_time}\"\n        )\n    )\n\n@flow(on_failure=[notify_slack], retries=1)\ndef failing_flow():\n    raise ValueError(\"oops!\")\n\nif __name__ == \"__main__\":\n    failing_flow()\n

    Note that because we've configured retries in this example, the on_failure hook will not run until all retries have completed, when the flow run enters a Failed state.

    ","tags":["state change","hooks","triggers"],"boost":2},{"location":"guides/state-change-hooks/#delete-a-cloud-run-job-when-a-flow-run-crashes","title":"Delete a Cloud Run job when a flow run crashes","text":"

    State change hooks can aid in managing infrastructure cleanup in scenarios where tasks spin up individual infrastructure resources independently of Prefect. When a flow run crashes, tasks may exit abruptly and skip their own cleanup logic. State change hooks can ensure infrastructure is properly cleaned up even when a flow run enters a Crashed state!

    Let's create a hook that deletes a Cloud Run job if the flow run crashes.

    import os\nfrom prefect import flow, task\nfrom prefect.blocks.system import String\nfrom prefect.client import get_client\nimport prefect.runtime\n\nasync def delete_cloud_run_job(flow, flow_run, state):\n    \"\"\"Flow run state change hook that deletes a Cloud Run Job if\n    the flow run crashes.\"\"\"\n\n    # retrieve Cloud Run job name\n    cloud_run_job_name = await String.load(\n        name=\"crashing-flow-cloud-run-job\"\n    )\n\n    # delete Cloud Run job\n    delete_job_command = (\n        f\"yes | gcloud beta run jobs delete \"\n        f\"{cloud_run_job_name.value} --region us-central1\"\n    )\n    os.system(delete_job_command)\n\n    # clean up the Cloud Run job string block as well\n    async with get_client() as client:\n        block_document = await client.read_block_document_by_name(\n            \"crashing-flow-cloud-run-job\", block_type_slug=\"string\"\n        )\n        await client.delete_block_document(block_document.id)\n\n@task\ndef my_task_that_crashes():\n    raise SystemExit(\"Crashing on purpose!\")\n\n@flow(on_crashed=[delete_cloud_run_job])\ndef crashing_flow():\n    \"\"\"Save the flow run name (i.e. Cloud Run job name) as a \n    String block. It then executes a task that ends up crashing.\"\"\"\n    flow_run_name = prefect.runtime.flow_run.name\n    cloud_run_job_name = String(value=flow_run_name)\n    cloud_run_job_name.save(\n        name=\"crashing-flow-cloud-run-job\", overwrite=True\n    )\n\n    my_task_that_crashes()\n\nif __name__ == \"__main__\":\n    crashing_flow()\n
    ","tags":["state change","hooks","triggers"],"boost":2},{"location":"guides/testing/","title":"Testing","text":"

    Once you have some awesome flows, you probably want to test them!

    ","tags":["testing","unit testing","development"],"boost":2},{"location":"guides/testing/#unit-testing-flows","title":"Unit testing flows","text":"

    Prefect provides a simple context manager for unit tests that allows you to run flows and tasks against a temporary local SQLite database.

    from prefect import flow\nfrom prefect.testing.utilities import prefect_test_harness\n\n@flow\ndef my_favorite_flow():\n    return 42\n\ndef test_my_favorite_flow():\n  with prefect_test_harness():\n      # run the flow against a temporary testing database\n      assert my_favorite_flow() == 42\n

    For more extensive testing, you can leverage prefect_test_harness as a fixture in your unit testing framework. For example, when using pytest:

    from prefect import flow\nimport pytest\nfrom prefect.testing.utilities import prefect_test_harness\n\n@pytest.fixture(autouse=True, scope=\"session\")\ndef prefect_test_fixture():\n    with prefect_test_harness():\n        yield\n\n@flow\ndef my_favorite_flow():\n    return 42\n\ndef test_my_favorite_flow():\n    assert my_favorite_flow() == 42\n

    Note

    In this example, the fixture is scoped to run once for the entire test session. In most cases, you will not need a clean database for each test and just want to isolate your test runs to a test database. Creating a new test database per test creates significant overhead, so we recommend scoping the fixture to the session. If you need to isolate some tests fully, you can use the test harness again to create a fresh database.
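
    For example, here is a minimal sketch of a function-scoped fixture that gives selected tests a fresh database while the rest of the suite shares the session-scoped harness:

    import pytest\nfrom prefect import flow\nfrom prefect.testing.utilities import prefect_test_harness\n\n@pytest.fixture\ndef fresh_prefect_db():\n    # each test that requests this fixture gets its own temporary database\n    with prefect_test_harness():\n        yield\n\n@flow\ndef my_favorite_flow():\n    return 42\n\ndef test_with_full_isolation(fresh_prefect_db):\n    assert my_favorite_flow() == 42\n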

    ","tags":["testing","unit testing","development"],"boost":2},{"location":"guides/testing/#unit-testing-tasks","title":"Unit testing tasks","text":"

    To test an individual task, you can access the original function using .fn:

    from prefect import flow, task\n\n@task\ndef my_favorite_task():\n    return 42\n\n@flow\ndef my_favorite_flow():\n    val = my_favorite_task()\n    return val\n\ndef test_my_favorite_task():\n    assert my_favorite_task.fn() == 42\n

    Disable logger

    If your task uses a logger, you can disable the logger to avoid the RuntimeError raised from a missing flow context.

    from prefect.logging import disable_run_logger\n\ndef test_my_favorite_task():\n    with disable_run_logger():\n        assert my_favorite_task.fn() == 42\n

    ","tags":["testing","unit testing","development"],"boost":2},{"location":"guides/troubleshooting/","title":"Troubleshooting","text":"

    Don't Panic! If you experience an error with Prefect, there are many paths to understanding and resolving it. The first troubleshooting step is confirming that you are running the latest version of Prefect. If you are not, be sure to upgrade to the latest version, since the issue may have already been fixed. Beyond that, there are several categories of errors:

    • The issue may be in your flow code, in which case you should carefully read the logs.
    • The issue could be with how you are authenticated, and whether or not you are connected to Cloud.
    • The issue might have to do with how your code is executed.
    ","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#upgrade","title":"Upgrade","text":"

    Prefect is constantly evolving, adding new features and fixing bugs. Chances are that a patch has already been identified and released. Search existing issues for similar reports and check out the Release Notes. Upgrade to the newest version with the following command:

    pip install --upgrade prefect\n

    Different components may use different versions of Prefect:

    • Cloud will generally always be the newest version. Cloud is continuously deployed by the Prefect team. When using a self-hosted server, you can control this version.
    • Workers and agents typically don't change versions frequently, and are usually whatever the latest version was at the time of creation. Workers and agents provision infrastructure for flow runs, so upgrading them may help with infrastructure problems.
    • Flows could use a different version than the worker or agent that created them, especially when running in different environments. Suppose your worker and flow both use the latest official Docker image, but your worker was created a month ago. Your worker will often be on an older version than your flow.
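
    For example, here is a minimal sketch of a flow that logs the Prefect version of its execution environment, which you can compare against the version reported by your worker:

    import prefect\nfrom prefect import flow, get_run_logger\n\n@flow\ndef report_prefect_version():\n    # logs the version of Prefect installed where this flow run executes\n    get_run_logger().info(\"Prefect version: %s\", prefect.__version__)\n\nif __name__ == \"__main__\":\n    report_prefect_version()\n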

    Integration Versions

    Keep in mind that integrations are versioned and released independently of the core Prefect library. They should be upgraded simultaneously with the core library, using the same method.

    ","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#logs","title":"Logs","text":"

    In many cases, there will be an informative stack trace in Prefect's logs. Read it carefully, locate the source of the error, and try to identify the cause.

    There are two types of logs:

    • Flow and task logs are always scoped to a flow. They are sent to Prefect and are viewable in the UI.
    • Worker and agent logs are not scoped to a flow and may have more information on what happened before the flow started. These logs are generally only available where the worker or agent is running.

    If your flow and task logs are empty, there may have been an infrastructure issue that prevented your flow from starting. Check your worker logs for more details.

    If there is no clear indication of what went wrong, try updating the logging level from the default INFO level to the DEBUG level. Settings such as the logging level are propagated from the worker environment to the flow run environment and can be set via environment variables or the prefect config set CLI:

    # Using the CLI\nprefect config set PREFECT_LOGGING_LEVEL=DEBUG\n\n# Using environment variables\nexport PREFECT_LOGGING_LEVEL=DEBUG\n

    The DEBUG logging level produces a high volume of logs so consider setting it back to INFO once any issues are resolved.

    ","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#cloud","title":"Cloud","text":"

    When using Prefect Cloud, there are the additional concerns of authentication and authorization. The Prefect API authenticates users and service accounts - collectively known as actors - with API keys. Missing, incorrect, or expired API keys will result in a 401 response with detail Invalid authentication credentials. Use the following command to check your authentication, replacing $PREFECT_API_KEY with your API key:

    curl -s -H \"Authorization: Bearer $PREFECT_API_KEY\" \"https://api.prefect.cloud/api/me/\"\n

    Users vs Service Accounts

    Service accounts - sometimes referred to as bots - represent non-human actors that interact with Prefect such as workers and CI/CD systems. Each human that interacts with Prefect should be represented as a user. User API keys start with pnu_ and service account API keys start with pnb_.

    If that request succeeds, check your authorization next. Actors can be members of workspaces. An actor attempting an action in a workspace they are not a member of will receive a 404 response. Use the following command to check your actor's workspace memberships:

    curl -s -H \"Authorization: Bearer $PREFECT_API_KEY\" \"https://api.prefect.cloud/api/me/workspaces\"\n

    Formatting JSON

    Python comes with a helpful tool for formatting JSON. Append the following to the end of the command above to make the output more readable: | python -m json.tool

    Make sure your actor is a member of the workspace you are working in. Within a workspace, an actor has a role which grants them certain permissions. Insufficient permissions will result in an error. For example, starting an agent or worker with the Viewer role will result in errors.

    ","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#execution","title":"Execution","text":"

    Prefect flows can be executed locally by the user, or remotely by a worker or agent. Local execution generally means that you - the user - run your flow directly with a command like python flow.py. Remote execution generally means that a worker runs your flow via a deployment, optionally on different infrastructure.

    With remote execution, the creation of your flow run happens separately from its execution. Flow runs are assigned to a work pool and a work queue. For flow runs to execute, a worker must be subscribed to the work pool and work queue, otherwise the flow runs will go from Scheduled to Late. Ensure that your work pool and work queue have a subscribed worker.

    Local and remote execution can also differ in their treatment of relative imports. If switching from local to remote execution results in local import errors, try replicating the behavior by executing the flow locally with the -m flag (i.e. python -m flow instead of python flow.py). Read more about -m here.

    ","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#api-tests-return-an-unexpected-307-redirected","title":"API tests return an unexpected 307 Redirected","text":"

    Summary: Requests require a trailing / in the request URL.

    If you write a test that does not include a trailing / when making a request to a specific endpoint:

    async def test_example(client):\n    response = await client.post(\"/my_route\")\n    assert response.status_code == 201\n

    You'll see a failure like:

    E       assert 307 == 201\nE        +  where 307 = <Response [307 Temporary Redirect]>.status_code\n

    To resolve this, include the trailing /:

    async def test_example(client):\n    response = await client.post(\"/my_route/\")\n    assert response.status_code == 201\n

    Note: requests to nested URLs may exhibit the opposite behavior and require no trailing slash:

    async def test_nested_example(client):\n    response = await client.post(\"/my_route/filter/\")\n    assert response.status_code == 307\n\n    response = await client.post(\"/my_route/filter\")\n    assert response.status_code == 200\n

    Reference: \"HTTPX disabled redirect following by default\" in 0.22.0.

    ","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#pytestpytestunraisableexceptionwarning-or-resourcewarning","title":"pytest.PytestUnraisableExceptionWarning or ResourceWarning","text":"

    As you're working with one of the FlowRunner implementations, you may get an error like this one:

    E               pytest.PytestUnraisableExceptionWarning: Exception ignored in: <ssl.SSLSocket fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0>\nE\nE               Traceback (most recent call last):\nE                 File \".../pytest_asyncio/plugin.py\", line 306, in setup\nE                   res = await func(**_add_kwargs(func, kwargs, event_loop, request))\nE               ResourceWarning: unclosed <ssl.SSLSocket fd=10, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 60605), raddr=('127.0.0.1', 6443)>\n\n.../_pytest/unraisableexception.py:78: PytestUnraisableExceptionWarning\n

    This error is saying that your test suite (or the prefect library code) opened a connection to something (like a Docker daemon or a Kubernetes cluster) and didn't close it.

    It may help to re-run the specific test with PYTHONTRACEMALLOC=25 pytest ... so that Python can display more of the stack trace where the connection was opened.

    ","tags":["troubleshooting","guides","how to"]},{"location":"guides/upgrade-guide-agents-to-workers/","title":"Upgrade from Agents to Workers","text":"

    Upgrading from agents to workers significantly enhances the experience of deploying flows. It simplifies the specification of each flow's infrastructure and runtime environment.

    A worker is the fusion of an agent with an infrastructure block. Like agents, workers poll a work pool for flow runs that are scheduled to start. Like infrastructure blocks, workers are typed - they work with only one kind of infrastructure, and they specify the default configuration for jobs submitted to that infrastructure.

    Accordingly, workers are not a drop-in replacement for agents. Using workers requires deploying flows differently. In particular, deploying a flow with a worker does not involve specifying an infrastructure block. Instead, infrastructure configuration is specified on the work pool and passed to each worker that polls work from that pool.

    This guide provides an overview of the differences between agents and workers. It also describes how to upgrade from agents to workers in just a few quick steps.

    ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#enhancements","title":"Enhancements","text":"","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#workers","title":"Workers","text":"
    • Improved visibility into the status of each worker, including when a worker was started and when it last polled.
    • Better handling of race conditions for high availability use cases.
    ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#work-pools","title":"Work pools","text":"
    • Work pools allow greater customization and governance of infrastructure parameters for deployments via their base job template.
    • Prefect Cloud push work pools enable flow execution in your cloud provider environment without needing to host a worker.
    • Prefect Cloud managed work pools allow you to run flows on Prefect's infrastructure, without needing to host a worker or configure cloud provider infrastructure.
    ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#improved-deployment-interfaces","title":"Improved deployment interfaces","text":"
    • The Python deployment experience with .deploy() or the alternative deployment experience with prefect.yaml are more flexible and easier to use than block and agent-based deployments.
    • Both options allow you to deploy multiple flows with a single command.
    • Both options allow you to build Docker images for your flows to create portable execution environments.
    • The YAML-based API supports templating to keep deployment definitions DRY.
    ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#whats-different","title":"What's different","text":"
    1. Deployment CLI and Python SDK:

      prefect deployment build <entrypoint>/prefect deployment apply --> prefect deploy

      Prefect will now automatically detect flows in your repo and provide a wizard \ud83e\uddd9 to guide you through setting required attributes for your deployments.

      Deployment.build_from_flow --> flow.deploy

    2. Configuring remote flow code storage:

      storage blocks --> pull action

      When using the YAML-based deployment API, you can configure a pull action in your prefect.yaml file to specify how to retrieve flow code for your deployments. You can use configuration from your existing storage blocks to define your pull action via templating.

      When using the Python deployment API, you can pass any storage block to the flow.deploy method to specify how to retrieve flow code for your deployment.

    3. Configuring flow run infrastructure:

      infrastructure blocks --> typed work pool

      Default infrastructure config is now set on the typed work pool, and can be overwritten by individual deployments.

    4. Managing multiple deployments:

      Create and/or update many deployments at once through a prefect.yaml file or use the deploy function.

    ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#whats-similar","title":"What's similar","text":"
    • Storage blocks can be set as the pull action in a prefect.yaml file.
    • Infrastructure blocks have configuration fields similar to typed work pools.
    • Deployment-level infrastructure overrides operate in much the same way.

      infra_overrides -> job_variables

    • The process for starting an agent and starting a worker in your environment are virtually identical.

      prefect agent start --pool <work pool name> --> prefect worker start --pool <work pool name>

      Worker Helm chart

      If you host your agents in a Kubernetes cluster, you can use the Prefect worker Helm chart to host workers in your cluster.

    ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#upgrade-guide","title":"Upgrade guide","text":"

    If you have existing deployments that use infrastructure blocks, you can quickly upgrade them to be compatible with workers by following these steps:

    1. Create a work pool

    This new work pool will replace your infrastructure block.

    You can use the .publish_as_work_pool method on any infrastructure block to create a work pool with the same configuration.

    For example, if you have a KubernetesJob infrastructure block named 'my-k8s-job', you can create a work pool with the same configuration with this script:

    from prefect.infrastructure import KubernetesJob\n\nKubernetesJob.load(\"my-k8s-job\").publish_as_work_pool()\n
    Running this script will create a work pool named 'my-k8s-job' with the same configuration as your infrastructure block.\n

    Serving flows

    If you are using a Process infrastructure block and a LocalFilesystem storage block (or aren't using an infrastructure and storage block at all), you can use flow.serve to create a deployment without needing to specify a work pool name or start a worker.

    This is a quick way to create a deployment for a flow and is a great way to manage your deployments if you don't need the dynamic infrastructure creation or configuration offered by workers.

    Check out our Docker guide for how to build a served flow into a Docker image and host it in your environment.

    2. Start a worker

    This worker will replace your agent and poll your new work pool for flow runs to execute.

    prefect worker start -p <work pool name>\n
    3. Deploy your flows to the new work pool

    To deploy your flows to the new work pool, you can use flow.deploy for a Pythonic deployment experience or prefect deploy for a YAML-based deployment experience.

    If you currently use Deployment.build_from_flow, we recommend using flow.deploy.

    If you currently use prefect deployment build and prefect deployment apply, we recommend using prefect deploy.

    ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#flowdeploy","title":"flow.deploy","text":"

    If you have a Python script that uses Deployment.build_from_flow, you can replace it with flow.deploy.

    Most arguments to Deployment.build_from_flow can be translated directly to flow.deploy, but here are some changes that you may need to make:

    • Replace infrastructure with work_pool_name.
    • If you've used the .publish_as_work_pool method on your infrastructure block, use the name of the created work pool.
    • Replace infra_overrides with job_variables.
    • Replace storage with a call to flow.from_source.
    • flow.from_source will load your flow from a remote storage location and make it deployable. Your existing storage block can be passed to the source argument of flow.from_source.

    Below are some examples of how to translate Deployment.build_from_flow to flow.deploy.

    ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#deploying-without-any-blocks","title":"Deploying without any blocks","text":"

    If you aren't using any blocks:

    from prefect import flow\nfrom prefect.deployments import Deployment\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n    print(f\"Hello {name}! I'm a flow from a Python script!\")\n\nif __name__ == \"__main__\":\n    Deployment.build_from_flow(\n        my_flow,\n        name=\"my-deployment\",\n        parameters=dict(name=\"Marvin\"),\n    )\n

    You can replace Deployment.build_from_flow with flow.serve :

    from prefect import flow\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n    print(f\"Hello {name}! I'm a flow from a Python script!\")\n\nif __name__ == \"__main__\":\n    my_flow.serve(\n        name=\"my-deployment\",\n        parameters=dict(name=\"Marvin\"),\n    )\n

    This will start a process that will serve your flow and execute any flow runs that are scheduled to start.

    ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#deploying-using-a-storage-block","title":"Deploying using a storage block","text":"

    If you currently use a storage block to load your flow code but no infrastructure block:

    from prefect import flow\nfrom prefect.deployments import Deployment\nfrom prefect.filesystems import GitHub\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n    print(f\"Hello {name}! I'm a flow from a GitHub repo!\")\n\nif __name__ == \"__main__\":\n    Deployment.build_from_flow(\n        my_flow,\n        name=\"my-deployment\",\n        storage=GitHub.load(\"demo-repo\"),\n        parameters=dict(name=\"Marvin\"),\n    )\n

    you can use flow.from_source to load your flow from the same location and flow.serve to create a deployment:

    from prefect import flow\nfrom prefect.filesystems import GitHub\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        source=GitHub.load(\"demo-repo\"),\n        entrypoint=\"example.py:my_flow\"\n    ).serve(\n        name=\"my-deployment\",\n        parameters=dict(name=\"Marvin\"),\n    )\n

    This will allow you to execute scheduled flow runs without starting a worker. Additionally, the process serving your flow will regularly check for updates to your flow code and automatically update the flow if it detects any changes to the code.

    ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#deploying-using-an-infrastructure-and-storage-block","title":"Deploying using an infrastructure and storage block","text":"

    For the code below, we'll need to create a work pool from our infrastructure block and pass it to flow.deploy as the work_pool_name argument. We'll also need to pass our storage block to flow.from_source as the source argument.

    from prefect import flow\nfrom prefect.deployments import Deployment\nfrom prefect.filesystems import GitHub\nfrom prefect.infrastructure.kubernetes import KubernetesJob\n\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n    print(f\"Hello {name}! I'm a flow from a GitHub repo!\")\n\n\nif __name__ == \"__main__\":\n    Deployment.build_from_flow(\n        my_flow,\n        name=\"my-deployment\",\n        storage=GitHub.load(\"demo-repo\"),\n        entrypoint=\"example.py:my_flow\",\n        infrastructure=KubernetesJob.load(\"my-k8s-job\"),\n        infra_overrides=dict(pull_policy=\"Never\"),\n        parameters=dict(name=\"Marvin\"),\n    )\n

    The equivalent deployment code using flow.deploy would look like this:

    from prefect import flow\nfrom prefect.filesystems import GitHub\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        source=GitHub.load(\"demo-repo\"),\n        entrypoint=\"example.py:my_flow\"\n    ).deploy(\n        name=\"my-deployment\",\n        work_pool_name=\"my-k8s-job\",\n        job_variables=dict(pull_policy=\"Never\"),\n        parameters=dict(name=\"Marvin\"),\n    )\n

    Note that when using flow.from_source(...).deploy(...), the flow you're deploying does not need to be available locally before running your script.

    ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#deploying-via-a-docker-image","title":"Deploying via a Docker image","text":"

    If you currently bake your flow code into a Docker image before deploying, you can use the image argument of flow.deploy to build a Docker image as part of your deployment process:

    from prefect import flow\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n    print(f\"Hello {name}! I'm a flow from a Docker image!\")\n\n\nif __name__ == \"__main__\":\n    my_flow.deploy(\n        name=\"my-deployment\",\n        image=\"my-repo/my-image:latest\",\n        work_pool_name=\"my-k8s-job\",\n        job_variables=dict(pull_policy=\"Never\"),\n        parameters=dict(name=\"Marvin\"),\n    )\n

    You can skip a flow.from_source call when building an image with flow.deploy. Prefect will keep track of the flow's source code location in the image and load it from that location when the flow is executed.

    ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#using-prefect-deploy","title":"Using prefect deploy","text":"

    Always run prefect deploy commands from the root level of your repo!

    With agents, you might have had multiple deployment.yaml files, but under worker deployment patterns, each repo will have a single prefect.yaml file located at the root of the repo that contains deployment configuration for all flows in that repo.

    To set up a new prefect.yaml file for your deployments, run the following command from the root level of your repo:

    prefect deploy\n

    This will start a wizard that will guide you through setting up your deployment.

    For step 4, select y on the last prompt to save the configuration for the deployment.

    Saving the configuration for your deployment will result in a prefect.yaml file populated with your first deployment. You can use this YAML file to edit and define multiple deployments for this repo.

    You can add more deployments to the deployments list in your prefect.yaml file and/or by continuing to use the deployment creation wizard.

    For more information on deployments, check out our in-depth guide for deploying flows to work pools.

    ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/using-the-client/","title":"Using the Prefect Orchestration Client","text":"","tags":["client","API","filters","orchestration"]},{"location":"guides/using-the-client/#overview","title":"Overview","text":"

    In the API reference for the PrefectClient, you can find many useful client methods that make it simpler to do things such as:

    • reschedule late flow runs
    • get the last N completed flow runs from my workspace

    The PrefectClient is an async context manager, so you can use it like this:

    import asyncio\n\nfrom prefect import get_client\n\nasync def main():\n    async with get_client() as client:\n        response = await client.hello()\n        print(response.json()) # \ud83d\udc4b\n\nasyncio.run(main())\n
    ","tags":["client","API","filters","orchestration"]},{"location":"guides/using-the-client/#examples","title":"Examples","text":"","tags":["client","API","filters","orchestration"]},{"location":"guides/using-the-client/#rescheduling-late-flow-runs","title":"Rescheduling late flow runs","text":"

    Sometimes, you may need to bulk reschedule flow runs that are late - for example, if you've accidentally scheduled many flow runs of a deployment to an inactive work pool.

    To do this, we can delete late flow runs and create new ones in a Scheduled state with a delay.

    This example reschedules the last 3 late flow runs of a deployment named healthcheck-storage-test to run 6 hours later than their original expected start time. It also deletes any remaining late flow runs of that deployment.

    import asyncio\nfrom datetime import datetime, timedelta, timezone\nfrom typing import Optional\n\nfrom prefect import get_client\nfrom prefect.client.schemas.filters import (\n    DeploymentFilter, FlowRunFilter\n)\nfrom prefect.client.schemas.objects import FlowRun\nfrom prefect.client.schemas.sorting import FlowRunSort\nfrom prefect.states import Scheduled\n\nasync def reschedule_late_flow_runs(\n    deployment_name: str,\n    delay: timedelta,\n    most_recent_n: int,\n    delete_remaining: bool = True,\n    states: Optional[list[str]] = None\n) -> list[FlowRun]:\n    if not states:\n        states = [\"Late\"]\n\n    async with get_client() as client:\n        flow_runs = await client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=dict(name=dict(any_=states)),\n                expected_start_time=dict(\n                    before_=datetime.now(timezone.utc)\n                ),\n            ),\n            deployment_filter=DeploymentFilter(\n                name={'like_': deployment_name}\n            ),\n            sort=FlowRunSort.START_TIME_DESC,\n            limit=most_recent_n if not delete_remaining else None\n        )\n\n        if not flow_runs:\n            print(f\"No flow runs found in states: {states!r}\")\n            return []\n\n        rescheduled_flow_runs = []\n        for i, run in enumerate(flow_runs):\n            await client.delete_flow_run(flow_run_id=run.id)\n            if i < most_recent_n:\n                new_run = await client.create_flow_run_from_deployment(\n                    deployment_id=run.deployment_id,\n                    state=Scheduled(\n                        scheduled_time=run.expected_start_time + delay\n                    ),\n                )\n                rescheduled_flow_runs.append(new_run)\n\n        return rescheduled_flow_runs\n\nif __name__ == \"__main__\":\n    rescheduled_flow_runs = asyncio.run(\n        reschedule_late_flow_runs(\n            deployment_name=\"healthcheck-storage-test\",\n            delay=timedelta(hours=6),\n            most_recent_n=3,\n        )\n    )\n\n    print(f\"Rescheduled {len(rescheduled_flow_runs)} flow runs\")\n\n    assert all(\n        run.state.is_scheduled() for run in rescheduled_flow_runs\n    )\n    assert all(\n        run.expected_start_time > datetime.now(timezone.utc)\n        for run in rescheduled_flow_runs\n    )\n
    ","tags":["client","API","filters","orchestration"]},{"location":"guides/using-the-client/#get-the-last-n-completed-flow-runs-from-my-workspace","title":"Get the last N completed flow runs from my workspace","text":"

    To get the last N completed flow runs from our workspace, we can make use of read_flow_runs and prefect.client.schemas.

    This example gets the last three completed flow runs from our workspace:

    import asyncio\nfrom typing import Optional\n\nfrom prefect import get_client\nfrom prefect.client.schemas.filters import FlowRunFilter\nfrom prefect.client.schemas.objects import FlowRun\nfrom prefect.client.schemas.sorting import FlowRunSort\n\nasync def get_most_recent_flow_runs(\n    n: int = 3,\n    states: Optional[list[str]] = None\n) -> list[FlowRun]:\n    if not states:\n        states = [\"COMPLETED\"]\n\n    async with get_client() as client:\n        return await client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state={'type': {'any_': states}}\n            ),\n            sort=FlowRunSort.END_TIME_DESC,\n            limit=n,\n        )\n\nif __name__ == \"__main__\":\n    last_3_flow_runs: list[FlowRun] = asyncio.run(\n        get_most_recent_flow_runs()\n    )\n    print(last_3_flow_runs)\n\n    assert all(\n        run.state.is_completed() for run in last_3_flow_runs\n    )\n    assert (\n        end_times := [run.end_time for run in last_3_flow_runs]\n    ) == sorted(end_times, reverse=True)\n

    Instead of the last three from the whole workspace, you could also use the DeploymentFilter like the previous example to get the last three completed flow runs of a specific deployment.
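
    For example, here is a hedged sketch of how the same query could be scoped to a single deployment by adding a DeploymentFilter, reusing the deployment name from the rescheduling example above as a placeholder:

    import asyncio\n\nfrom prefect import get_client\nfrom prefect.client.schemas.filters import DeploymentFilter, FlowRunFilter\nfrom prefect.client.schemas.objects import FlowRun\nfrom prefect.client.schemas.sorting import FlowRunSort\n\nasync def get_recent_completed_runs_for_deployment(\n    deployment_name: str, n: int = 3\n) -> list[FlowRun]:\n    async with get_client() as client:\n        # same query as above, narrowed to one deployment by name\n        return await client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state={'type': {'any_': ['COMPLETED']}}\n            ),\n            deployment_filter=DeploymentFilter(\n                name={'like_': deployment_name}\n            ),\n            sort=FlowRunSort.END_TIME_DESC,\n            limit=n,\n        )\n\nif __name__ == \"__main__\":\n    print(\n        asyncio.run(\n            get_recent_completed_runs_for_deployment(\"healthcheck-storage-test\")\n        )\n    )\n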

    ","tags":["client","API","filters","orchestration"]},{"location":"guides/using-the-client/#transition-all-running-flows-to-cancelled-via-the-client","title":"Transition all running flows to cancelled via the Client","text":"

    It can be cumbersome to cancel many flow runs through the UI. You can use get_client to set multiple runs to a Cancelled state. The code below will cancel all flow runs that are in Pending, Running, Scheduled, or Late states when the script is run.

    import anyio\n\nfrom prefect import get_client\nfrom prefect.client.schemas.filters import FlowRunFilter, FlowRunFilterState, FlowRunFilterStateName\nfrom prefect.client.schemas.objects import StateType\n\nasync def list_flow_runs_with_states(states: list[str]):\n    async with get_client() as client:\n        flow_runs = await client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=FlowRunFilterState(\n                    name=FlowRunFilterStateName(any_=states)\n                )\n            )\n        )\n    return flow_runs\n\n\nasync def cancel_flow_runs(flow_runs):\n    async with get_client() as client:\n        for idx, flow_run in enumerate(flow_runs):\n            print(f\"[{idx + 1}] Cancelling flow run '{flow_run.name}' with ID '{flow_run.id}'\")\n            state_updates = {}\n            state_updates.setdefault(\"name\", \"Cancelled\")\n            state_updates.setdefault(\"type\", StateType.CANCELLED)\n            state = flow_run.state.copy(update=state_updates)\n            await client.set_flow_run_state(flow_run.id, state, force=True)\n\n\nasync def bulk_cancel_flow_runs():\n    states = [\"Pending\", \"Running\", \"Scheduled\", \"Late\"]\n    flow_runs = await list_flow_runs_with_states(states)\n\n    while len(flow_runs) > 0:\n        print(f\"Cancelling {len(flow_runs)} flow runs\\n\")\n        await cancel_flow_runs(flow_runs)\n        flow_runs = await list_flow_runs_with_states(states)\n    print(\"Done!\")\n\n\nif __name__ == \"__main__\":\n    anyio.run(bulk_cancel_flow_runs)\n

    There are other ways to filter objects like flow runs

    See the filters API reference for more ways to filter flow runs and other objects in your Prefect ecosystem.

    ","tags":["client","API","filters","orchestration"]},{"location":"guides/variables/","title":"Variables","text":"

    Variables enable you to store and reuse non-sensitive bits of data, such as configuration information. Variables are named, mutable string values, much like environment variables. Variables are scoped to a Prefect server instance or a single workspace in Prefect Cloud.

    Variables can be created or modified at any time, but are intended for values with infrequent writes and frequent reads. Variable values may be cached for quicker retrieval.

    While variable values are most commonly loaded during flow runtime, they can be loaded in other contexts at any time, so they can also be used to pass configuration information to Prefect configuration files, such as deployment steps.

    Variables are not Encrypted

    Using variables to store sensitive information, such as credentials, is not recommended. Instead, use Secret blocks to store and access sensitive information.

    ","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#managing-variables","title":"Managing variables","text":"

    You can create, read, edit and delete variables via the Prefect UI, API, and CLI. Names must adhere to traditional variable naming conventions:

    • Have no more than 255 characters.
    • Only contain lowercase alphanumeric characters ([a-z], [0-9]) or underscores (_). Spaces are not allowed.
    • Be unique.

    Values must:

    • Have no more than 5000 characters.

    Optionally, you can add tags to the variable.

    ","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#via-the-prefect-ui","title":"Via the Prefect UI","text":"

    You can see all the variables in your Prefect server instance or Prefect Cloud workspace on the Variables page of the Prefect UI. Both the name and value of all variables are visible to anyone with access to the server or workspace.

    To create a new variable, select the + button next to the header of the Variables page. Enter the name and value of the variable.

    ","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#via-the-rest-api","title":"Via the REST API","text":"

    Variables can be created and deleted via the REST API. You can also set and get variables via the API with either the variable name or ID. See the REST reference for more information.

    ","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#via-the-cli","title":"Via the CLI","text":"

    You can list, inspect, and delete variables via the command line interface with the prefect variable ls, prefect variable inspect <name>, and prefect variable delete <name> commands, respectively.

    ","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#accessing-variables","title":"Accessing variables","text":"

    In addition to the UI and API, variables can be referenced in code and in certain Prefect configuration files.

    ","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#in-python-code","title":"In Python code","text":"

    You can access any variable via the Python SDK with the Variable.get() method. If you attempt to reference a variable that does not exist, the method will return None. You can create variables via the Python SDK with the Variable.set() method. Note that if a variable of the same name already exists, you'll need to pass overwrite=True.

    from prefect.variables import Variable\n\n# setting the variable\nvariable = Variable.set(name=\"the_answer\", value=\"42\")\n\n# getting from a synchronous context\nanswer = Variable.get('the_answer')\nprint(answer.value)\n# 42\n\n# getting from an asynchronous context\nanswer = await Variable.get('the_answer')\nprint(answer.value)\n# 42\n\n# getting without a default value\nanswer = Variable.get('not_the_answer')\nprint(answer.value)\n# None\n\n# getting with a default value\nanswer = Variable.get('not_the_answer', default='42')\nprint(answer.value)\n# 42\n\n# using `overwrite=True`\nanswer = Variable.get('the_answer')\nprint(answer.value)\n#42\nanswer = Variable.set(name=\"the_answer\", value=\"43\", overwrite=True)\nprint(answer.value)\n#43\n
    ","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#in-prefectyaml-deployment-steps","title":"In prefect.yaml deployment steps","text":"

    In .yaml files, variables are denoted by quotes and double curly brackets, like so: \"{{ prefect.variables.my_variable }}\". You can use variables to templatize deployment steps by referencing them in the prefect.yaml file used to create deployments. For example, you could pass a variable in to specify a branch for a git repo in a deployment pull step:

    pull:\n- prefect.deployments.steps.git_clone:\n    repository: https://github.com/PrefectHQ/hello-projects.git\n    branch: \"{{ prefect.variables.deployment_branch }}\"\n

    The deployment_branch variable will be evaluated at runtime for the deployed flow, allowing changes to be made to variables used in a pull action without updating a deployment directly.
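
    For instance, here is a minimal sketch of setting the variable referenced above with the Python SDK; the branch value is just an assumed example:

    from prefect.variables import Variable\n\n# 'dev' is an example value; the pull step above will read whatever is stored here\nVariable.set(name=\"deployment_branch\", value=\"dev\", overwrite=True)\n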

    ","tags":["variables","blocks"],"boost":2},{"location":"guides/webhooks/","title":"Webhooks","text":"

    Use webhooks in your Prefect Cloud workspace to receive, observe, and react to events from other systems in your ecosystem. Each webhook exposes a unique URL endpoint to receive events from other systems and transforms them into Prefect events for use in automations.

    Webhooks are defined by two essential components: a unique URL and a template that translates incoming web requests to a Prefect event.

    ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#configuring-webhooks","title":"Configuring webhooks","text":"","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#via-the-prefect-cloud-api","title":"Via the Prefect Cloud API","text":"

    Webhooks are managed via the Webhooks API endpoints. This is a Prefect Cloud-only feature. You authenticate API calls using the standard authentication methods you use with Prefect Cloud.

    ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#via-prefect-cloud","title":"Via Prefect Cloud","text":"

    Webhooks can be created and managed from the Prefect Cloud UI.

    ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#via-the-prefect-cli","title":"Via the Prefect CLI","text":"

    Webhooks can be managed and interacted with via the prefect cloud webhook command group.

    prefect cloud webhook --help\n

    You can create your first webhook by invoking create:

    prefect cloud webhook create your-webhook-name \\\n    --description \"Receives webhooks from your system\" \\\n    --template '{ \"event\": \"your.event.name\", \"resource\": { \"prefect.resource.id\": \"your.resource.id\" } }'\n

    Note the template string, which is discussed in greater detail below.

    You can retrieve details for a specific webhook by ID using get, or optionally query all webhooks in your workspace via ls:

    # get webhook by ID\nprefect cloud webhook get <webhook-id>\n\n# list all configured webhooks in your workspace\n\nprefect cloud webhook ls\n

    If you need to disable an existing webhook without deleting it, use toggle:

    prefect cloud webhook toggle <webhook-id>\nWebhook is now disabled\n\nprefect cloud webhook toggle <webhook-id>\nWebhook is now enabled\n

    If you are concerned that your webhook endpoint may have been compromised, use rotate to generate a new, random endpoint:

    prefect cloud webhook rotate <webhook-url-slug>\n
    ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#webhook-endpoints","title":"Webhook endpoints","text":"

    The webhook endpoints have randomly generated opaque URLs that do not divulge any information about your Prefect Cloud workspace. They are rooted at https://api.prefect.cloud/hooks/. For example: https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ. Prefect Cloud assigns this URL when you create a webhook; it cannot be set via the API. You may rotate your webhook URL at any time without losing the associated configuration.

    All webhooks may accept requests via the most common HTTP methods:

    • GET, HEAD, and DELETE may be used for webhooks that define a static event template, or a template that does not depend on the body of the HTTP request. The headers of the request will be available for templates.
    • POST, PUT, and PATCH may be used when the webhook request will include a body. See How HTTP request components are handled for more details on how the body is parsed.

    Prefect Cloud webhooks are deliberately quiet to the outside world, returning only a 204 No Content response when they are successful and a 400 Bad Request error when there is any error interpreting the request. For more visibility when your webhooks fail, see the Troubleshooting section below.
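
    For example, here is a minimal sketch of invoking a webhook and checking for those two responses using only the Python standard library; the webhook URL is a placeholder:

    from urllib.error import HTTPError\nfrom urllib.request import Request, urlopen\n\n# placeholder endpoint; substitute your own webhook URL\nWEBHOOK_URL = \"https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ\"\n\ntry:\n    with urlopen(Request(WEBHOOK_URL)) as response:\n        print(response.status)  # 204 No Content on success\nexcept HTTPError as error:\n    print(error.code)  # 400 Bad Request if the request could not be interpreted\n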

    ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#webhook-templates","title":"Webhook templates","text":"

    The purpose of a webhook is to accept an HTTP request from another system and produce a Prefect event from it. You may find that you often have little influence or control over the format of those requests, so Prefect's webhook system gives you full control over how you turn those notifications from other systems into meaningful events in your Prefect Cloud workspace. The template you define for each webhook will determine how individual components of the incoming HTTP request become the event name and resource labels of the resulting Prefect event.

    As with the templates available in Prefect Cloud Automation for defining notifications and other parameters, you will write templates in Jinja2. All of the built-in Jinja2 blocks and filters are available, as well as the filters from the jinja2-humanize-extensions package.

    Your goal when defining your event template is to produce a valid JSON object that defines (at minimum) the event name and the resource[\"prefect.resource.id\"], which are required of all events. The simplest template is one in which these are statically defined.

    ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#static-webhook-events","title":"Static webhook events","text":"

    Let's see a static webhook template example. Say you want to configure a webhook that will notify Prefect when your recommendations machine learning model has been updated, so you can then send a Slack notification to your team and run a few subsequent deployments. Those models are produced on a daily schedule by another team that is using cron for scheduling. They aren't able to use Prefect for their flows (yet!), but they are happy to add a curl to the end of their daily script to notify you. Because this webhook will only be used for a single event from a single resource, your template can be entirely static:

    {\n    \"event\": \"model.refreshed\",\n    \"resource\": {\n        \"prefect.resource.id\": \"product.models.recommendations\",\n        \"prefect.resource.name\": \"Recommendations [Products]\",\n        \"producing-team\": \"Data Science\"\n    }\n}\n

    Make sure to produce valid JSON

    The output of your template, when rendered, should be a string that parses as valid JSON, for example with json.loads.
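
    One quick way to sanity-check a template before saving it is to render it locally with Jinja2 and parse the result; this is a minimal sketch (assuming the jinja2 package is available in your environment), not a feature of Prefect itself:

    import json\n\nfrom jinja2 import Template\n\ntemplate = \"\"\"\n{\n    \"event\": \"model.refreshed\",\n    \"resource\": {\n        \"prefect.resource.id\": \"product.models.{{ body.model }}\"\n    }\n}\n\"\"\"\n\n# render with a sample request body, then confirm the output parses as JSON\nrendered = Template(template).render(body={\"model\": \"recommendations\"})\nevent = json.loads(rendered)\nprint(event[\"resource\"][\"prefect.resource.id\"])  # product.models.recommendations\n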

    A webhook with this template may be invoked via any of the HTTP methods, including a GET request with no body, so the team you are integrating with can include this line at the end of their daily script:

    curl https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ\n

    Each time the script hits the webhook, the webhook will produce a single Prefect event with that name and resource in your workspace.

    ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#event-fields-that-prefect-cloud-populates-for-you","title":"Event fields that Prefect Cloud populates for you","text":"

    You may notice that you only had to provide the event and resource definition, which is not a completely fleshed out event. Prefect Cloud will set default values for any missing fields, such as occurred and id, so you don't need to set them in your template. Additionally, Prefect Cloud will add the webhook itself as a related resource on all of the events it produces.

    If your template does not produce a payload field, the payload will default to a standard set of debugging information, including the HTTP method, headers, and body.

    ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#dynamic-webhook-events","title":"Dynamic webhook events","text":"

    Now let's say that after a few days you and the Data Science team are getting a lot of value from the automations you have set up with the static webhook. You've agreed to upgrade this webhook to handle all of the various models that the team produces. It's time to add some dynamic information to your webhook template.

    Your colleagues on the team have adjusted their daily cron scripts to POST a small body that includes the ID and name of the model that was updated:

    curl \\\n    -d \"model=recommendations\" \\\n    -d \"friendly_name=Recommendations%20[Products]\" \\\n    -X POST https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ\n

    This script will send a POST request and the body will include a traditional URL-encoded form with two fields describing the model that was updated: model and friendly_name. Here's a webhook template that uses Jinja2 to receive these values and produce different events for the different models:

    {\n    \"event\": \"model.refreshed\",\n    \"resource\": {\n        \"prefect.resource.id\": \"product.models.{{ body.model }}\",\n        \"prefect.resource.name\": \"{{ body.friendly_name }}\",\n        \"producing-team\": \"Data Science\"\n    }\n}\n

    All subsequent POST requests will produce events with those variable resource IDs and names. The other statically-defined parts, such as the event name or the producing-team label you included earlier, will still be used.

    Use Jinja2's default filter to handle missing values

    Jinja2 has a helpful default filter that can compensate for missing values in the request. In this example, you may want to use the model's ID in place of the friendly name when the friendly name is not provided: {{ body.friendly_name|default(body.model) }}.

    ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#how-http-request-components-are-handled","title":"How HTTP request components are handled","text":"

    The Jinja2 template context includes the three parts of the incoming HTTP request:

    • method is the uppercased string of the HTTP method, like GET or POST.
    • headers is a case-insensitive dictionary of the HTTP headers included with the request. To prevent accidental disclosures, the Authorization header is removed.
    • body represents the body that was posted to the webhook, with a best-effort approach to parse it into an object you can access.

    HTTP headers are available without any alteration as a dict-like object, but you may access them with header names in any case. For example, these template expressions all return the value of the Content-Length header:

    {{ headers['Content-Length'] }}\n\n{{ headers['content-length'] }}\n\n{{ headers['CoNtEnt-LeNgTh'] }}\n

    The HTTP request body goes through some light preprocessing to make it more useful in templates. If the Content-Type of the request is application/json, the body will be parsed as a JSON object and made available to the webhook templates. If the Content-Type is application/x-www-form-urlencoded (as in our example above), the body is parsed into a flat dict-like object of key-value pairs. Jinja2 supports both index and attribute access to the fields of these objects, so the following two expressions are equivalent:

    {{ body['friendly_name'] }}\n\n{{ body.friendly_name }}\n

    Only for Python identifiers

    Jinja2's syntax only allows attribute-like access if the key is a valid Python identifier, so body.friendly-name will not work. Use body['friendly-name'] in those cases.

    You may not have much control over the client invoking your webhook, but may still want bodies that look like JSON to be parsed as such. Prefect Cloud will attempt to parse any other content type (like text/plain) as if it were JSON first. In any case where the body cannot be transformed into JSON, it will be made available to your templates as a Python str.
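
    To make that behavior concrete, here is a small illustrative sketch of this kind of best-effort parsing. It is not Prefect Cloud's actual implementation, just a model of the rules described above:

    import json\nfrom urllib.parse import parse_qsl\n\ndef parse_body(raw: str, content_type: str):\n    \"\"\"Best-effort body parsing, mirroring the rules above (illustrative only).\"\"\"\n    if content_type.startswith('application/x-www-form-urlencoded'):\n        # form posts become a flat dict of key-value pairs\n        return dict(parse_qsl(raw))\n    try:\n        # JSON, and anything that looks like JSON, is parsed into an object\n        return json.loads(raw)\n    except json.JSONDecodeError:\n        # anything else is handed to templates as a plain string\n        return raw\n\nprint(parse_body('model=recommendations', 'application/x-www-form-urlencoded'))\nprint(parse_body('{\"model\": \"recommendations\"}', 'text/plain'))\nprint(parse_body('not json at all', 'text/plain'))\n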

    ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#accepting-prefect-events-directly","title":"Accepting Prefect events directly","text":"

    In cases where you have more control over the client, your webhook can accept Prefect events directly with a simple pass-through template:

    {{ body|tojson }}\n

    This template accepts the incoming body (assuming it was in JSON format) and just passes it through unmodified. This allows a POST of a partial Prefect event as in this example:

    POST /hooks/AERylZ_uewzpDx-8fcweHQ HTTP/1.1\nHost: api.prefect.cloud\nContent-Type: application/json\nContent-Length: 228\n\n{\n    \"event\": \"model.refreshed\",\n    \"resource\": {\n        \"prefect.resource.id\": \"product.models.recommendations\",\n        \"prefect.resource.name\": \"Recommendations [Products]\",\n        \"producing-team\": \"Data Science\"\n    }\n}\n

    The resulting event will be filled out with the default values for occurred, id, and other fields as described above.

    ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#accepting-cloudevents","title":"Accepting CloudEvents","text":"

    The Cloud Native Computing Foundation has standardized CloudEvents for use by systems to exchange event information in a common format. These events are supported by major cloud providers and a growing number of cloud-native systems. Prefect Cloud can interpret a webhook containing a CloudEvent natively with the following template:

    {{ body|from_cloud_event(headers) }}\n

    The resulting event will use the CloudEvent's subject as the resource (or the source if no subject is available). The CloudEvent's data attribute will become the Prefect event's payload['data'], and the other CloudEvent metadata will be at payload['cloudevents']. If you would like to handle CloudEvents in a more specific way tailored to your use case, use a dynamic template to interpret the incoming body.
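
    For illustration, here is a hedged sketch of posting a CloudEvent to such a webhook with only the Python standard library; the attribute names come from the CloudEvents 1.0 spec, while the values and the webhook URL are placeholders:

    import json\nfrom urllib.request import Request, urlopen\n\n# attribute names follow the CloudEvents 1.0 spec; the values are hypothetical\ncloud_event = {\n    'specversion': '1.0',\n    'id': 'example-event-0001',\n    'source': 'https://ci.example.com/pipelines',\n    'type': 'com.example.model.refreshed',\n    'subject': 'product.models.recommendations',\n    'data': {'friendly_name': 'Recommendations [Products]'},\n}\n\n# placeholder webhook URL; substitute your own endpoint\nrequest = Request(\n    'https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ',\n    data=json.dumps(cloud_event).encode(),\n    headers={'Content-Type': 'application/cloudevents+json'},\n    method='POST',\n)\nwith urlopen(request) as response:\n    print(response.status)  # 204 No Content on success\n

    With the template above, the resulting Prefect event would use the CloudEvent's subject as its resource and expose the data attribute under payload['data'].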

    ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#troubleshooting","title":"Troubleshooting","text":"

    The initial configuration of your webhook may require some trial and error as you get the sender and your receiving webhook speaking a compatible language. While you are in this phase, you may find the Event Feed in the UI to be indispensable for seeing the events as they are happening.

    When Prefect Cloud encounters an error during receipt of a webhook, it will produce a prefect-cloud.webhook.failed event in your workspace. This event will include critical information about the HTTP method, headers, and body it received, as well as what the template rendered. Keep an eye out for these events when something goes wrong.

    ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/deployment/aci/","title":"Run an Agent with Azure Container Instances","text":"

    Microsoft Azure Container Instances (ACI) provides a convenient and simple service for quickly spinning up a Docker container that can host a Prefect Agent and execute flow runs.

    ","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#prerequisites","title":"Prerequisites","text":"

    To follow this quickstart, you'll need the following:

    • A Prefect Cloud account
    • A Prefect Cloud API key (Prefect Cloud Pro and Custom tier accounts can use a service account API key)
    • A Microsoft Azure account
    • Azure CLI installed and authenticated
    ","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#create-a-resource-group","title":"Create a resource group","text":"

    Like most Azure resources, ACI applications must live in a resource group. If you don\u2019t already have a resource group you\u2019d like to use, create a new one by running the az group create command. For example, the following command creates a resource group called prefect-agents in the eastus region:

    az group create --name prefect-agents --location eastus\n

    Feel free to change the group name or location to match your use case. You can also run az account list-locations -o table to see all available resource group locations for your account.

    ","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#create-the-container-instance","title":"Create the container instance","text":"

    Prefect provides pre-configured Docker images you can use to quickly stand up a container instance. These Docker images include Python and Prefect. For example, the image prefecthq/prefect:2-python3.10 includes the latest release version of Prefect and Python 3.10.

    To create the container instance, use the az container create command. This example shows the syntax, but you'll need to provide the correct values for [ACCOUNT-ID], [WORKSPACE-ID], [API-KEY], and any dependencies you need to pip install on the instance. These options are discussed below.

    az container create \\\n--resource-group prefect-agents \\\n--name prefect-agent-example \\\n--image prefecthq/prefect:2-python3.10 \\\n--secure-environment-variables PREFECT_API_URL='https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]' PREFECT_API_KEY='[API-KEY]' \\\n--command-line \"/bin/bash -c 'pip install adlfs s3fs requests pandas; prefect agent start -p default-agent-pool -q test'\"\n

    When the container instance is running, go to Prefect Cloud and select the Work Pools page. Select default-agent-pool, then select the Queues tab to see work queues configured on this work pool. When the container instance is running and the agent has started, the test work queue displays \"Healthy\" status. This work queue and agent are ready to execute deployments configured to run on the test queue.

    Agents and queues

    The agent running in this container instance can now pick up and execute flow runs for any deployment configured to use the test queue on the default-agent-pool work pool.

    ","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#container-create-options","title":"Container create options","text":"

    Let's break down the details of the az container create command used here.

    The az container create command creates a new ACI container.

    --resource-group prefect-agents tells Azure which resource group the new container is created in. Here, the example uses the prefect-agents resource group created earlier.

    --name prefect-agent-example determines the container name you will see in the Azure Portal. You can set any name you\u2019d like here to suit your use case, but container instance names must be unique in your resource group.

    --image prefecthq/prefect:2-python3.10 tells ACI which Docker image to run. The script above pulls a public Prefect image from Docker Hub. You can also build custom images and push them to a public container registry so ACI can access them. Or you can push your image to a private Azure Container Registry and use it to create a container instance.

    --secure-environment-variables sets environment variables that are only visible from inside the container. They do not show up when viewing the container\u2019s metadata. You'll populate these environment variables with a few pieces of information to configure the execution environment of the container instance so it can communicate with your Prefect Cloud workspace:

    • A Prefect Cloud PREFECT_API_KEY value specifying the API key used to authenticate with your Prefect Cloud workspace. (Pro and Custom tier accounts can use a service account API key.)
    • The PREFECT_API_URL value specifying the API endpoint of your Prefect Cloud workspace.

    --command-line lets you override the container\u2019s normal entry point and run a command instead. The script above uses this section to install the adlfs pip package so it can read flow code from Azure Blob Storage, along with s3fs, pandas, and requests. It then runs the Prefect agent, in this case using the default work pool and a test work queue. If you want to use a different work pool or queue, make sure to change these values appropriately.

    ","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#create-a-deployment","title":"Create a deployment","text":"

    Following the example of the Flow deployments tutorial, let's create a deployment that can be executed by the agent on this container instance.

    In an environment where you have installed Prefect, create a new folder called health_test, and within it create a new file called health_flow.py containing the following code.

    import prefect\nfrom prefect import task, flow\nfrom prefect import get_run_logger\n\n\n@task\ndef say_hi():\n    logger = get_run_logger()\n    logger.info(\"Hello from the Health Check Flow! \ud83d\udc4b\")\n\n\n@task\ndef log_platform_info():\n    import platform\n    import sys\n    from prefect.server.api.server import SERVER_API_VERSION\n\n    logger = get_run_logger()\n    logger.info(\"Host's network name = %s\", platform.node())\n    logger.info(\"Python version = %s\", platform.python_version())\n    logger.info(\"Platform information (instance type) = %s \", platform.platform())\n    logger.info(\"OS/Arch = %s/%s\", sys.platform, platform.machine())\n    logger.info(\"Prefect Version = %s \ud83d\ude80\", prefect.__version__)\n    logger.info(\"Prefect API Version = %s\", SERVER_API_VERSION)\n\n\n@flow(name=\"Health Check Flow\")\ndef health_check_flow():\n    hi = say_hi()\n    log_platform_info(wait_for=[hi])\n

    Now create a deployment for this flow script, making sure that it's configured to use the test queue on the default-agent-pool work pool.

    prefect deployment build --infra process --storage-block azure/flowsville/health_test --name health-test --pool default-agent-pool --work-queue test --apply health_flow.py:health_check_flow\n

    Once created, any flow runs for this deployment will be picked up by the agent running on this container instance.

    Infrastructure and storage

    This Prefect deployment example was built using the Process infrastructure type and Azure Blob Storage.

    You might wonder why your deployment needs process infrastructure rather than DockerContainer infrastructure when you are deploying a Docker image to ACI.

    A Prefect deployment\u2019s infrastructure type describes how you want Prefect agents to run flows for the deployment. With DockerContainer infrastructure, the agent will try to use Docker to spin up a new container for each flow run. Since you\u2019ll be starting your own container on ACI, you don\u2019t need Prefect to do it for you. Specifying process infrastructure on the deployment tells Prefect you want the agent to run flows by starting a process in your ACI container.

    You can use any storage type as long as you've configured a block for it before creating the deployment.

    ","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#cleaning-up","title":"Cleaning up","text":"

    Note that ACI instances may incur usage charges while running, but must be running for the agent to pick up and execute flow runs.

    To stop a container, use the az container stop command:

    az container stop --resource-group prefect-agents --name prefect-agent-example\n

    To delete a container, use the az container delete command:

    az container delete --resource-group prefect-agents --name prefect-agent-example\n
    ","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/daemonize/","title":"Daemonize Processes for Prefect Deployments","text":"

    When running workflow applications, it can be helpful to create long-running processes that run at startup and are robust to failure. In this guide you'll learn how to set up a systemd service to create long-running Prefect processes that poll for scheduled flow runs.

    A systemd service is ideal for running a long-lived process on a Linux VM or physical Linux server. We will leverage systemd and see how to automatically start a Prefect worker or long-lived serve process when Linux starts. This approach provides resilience by automatically restarting the process if it crashes.

    In this guide we will:

    • Create a Linux user
    • Install and configure Prefect
    • Set up a systemd service for the Prefect worker or .serve process
    ","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#prerequisites","title":"Prerequisites","text":"
    • An environment with a Linux operating system that includes systemd and Python 3.8 or later.
    • A superuser account (you can run sudo commands).
    • A Prefect Cloud account, or a local instance of a Prefect server running on your network.
    • If daemonizing a worker, you'll need a Prefect deployment with a work pool your worker can connect to.

    If using an AWS t2-micro EC2 instance with an AWS Linux image, you can install Python and pip with sudo yum install -y python3 python3-pip.

    ","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#step-1-add-a-user","title":"Step 1: Add a user","text":"

    Create a user account on your Linux system for the Prefect process. While you can run a worker or serve process as root, it's good security practice to avoid doing so unless you are sure you need to.

    In a terminal, run:

    sudo useradd -m prefect\nsudo passwd prefect\n

    When prompted, enter a password for the prefect account.

    Next, log in to the prefect account by running:

    sudo su prefect\n
    ","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#step-2-install-prefect","title":"Step 2: Install Prefect","text":"

    Run:

    pip3 install prefect\n

    This guide assumes you are installing Prefect globally, not in a virtual environment. If running a systemd service in a virtual environment, you'll just need to change the ExecStart path. For example, if using venv, point ExecStart at the prefect application in the bin subdirectory of your virtual environment (for example, a path such as /home/prefect/my-venv/bin/prefect).

    Next, set up your environment so that the Prefect client will know which server to connect to.

    If connecting to Prefect Cloud, follow the instructions to obtain an API key and then run the following:

    prefect cloud login -k YOUR_API_KEY\n

    When prompted, choose the Prefect workspace you'd like to log in to.

    If connecting to a self-hosted Prefect server instance instead of Prefect Cloud, run the following and substitute the IP address of your server:

    prefect config set PREFECT_API_URL=http://your-prefect-server-IP:4200\n

    Finally, run the exit command to sign out of the prefect Linux account. This command switches you back to your sudo-enabled account so you can run the commands in the next section.

    ","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#step-3-set-up-a-systemd-service","title":"Step 3: Set up a systemd service","text":"

    See the section below if you are setting up a Prefect worker. Skip to the next section if you are setting up a Prefect .serve process.

    ","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#setting-up-a-systemd-service-for-a-prefect-worker","title":"Setting up a systemd service for a Prefect worker","text":"

    Move into the /etc/systemd/system folder and open a file for editing. We use the Vim text editor below.

    cd /etc/systemd/system\nsudo vim my-prefect-service.service\n
    my-prefect-service.service
    [Unit]\nDescription=Prefect worker\n\n[Service]\nUser=prefect\nWorkingDirectory=/home\nExecStart=prefect worker start --pool YOUR_WORK_POOL_NAME\nRestart=always\n\n[Install]\nWantedBy=multi-user.target\n

    Make sure you substitute your own work pool name.

    ","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#setting-up-a-systemd-service-for-serve","title":"Setting up a systemd service for .serve","text":"

    Copy your flow entrypoint Python file and any other files needed for your flow to run into the /home directory (or the directory of your choice).

    Here's a basic example flow:

    my_file.py
    from prefect import flow\n\n\n@flow(log_prints=True)\ndef say_hi():\n    print(\"Hello!\")\n\nif __name__==\"__main__\":\n    say_hi.serve(name=\"Greeting from daemonized .serve\")\n

    If you want to make changes to your flow code without restarting your process, you can push your code to git-based cloud storage (GitHub, BitBucket, GitLab) and use flow.from_source().serve(), as in the example below.

    my_remote_flow_code_file.py
    from prefect import flow\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        source=\"https://github.com/org/repo.git\",\n        entrypoint=\"path/to/my_remote_flow_code_file.py:say_hi\",\n    ).serve(name=\"deployment-with-github-storage\")\n

    Make sure you substitute your own flow code entrypoint path.

    Note that if you change the flow entrypoint parameters, you will need to restart the process.

    Move into the /etc/systemd/system folder and open a file for editing. We use the Vim text editor below.

    cd /etc/systemd/system\nsudo vim my-prefect-service.service\n
    my-prefect-service.service
    [Unit]\nDescription=Prefect serve \n\n[Service]\nUser=prefect\nWorkingDirectory=/home\nExecStart=python3 my_file.py\nRestart=always\n\n[Install]\nWantedBy=multi-user.target\n
    ","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#save-enable-and-start-the-service","title":"Save, enable, and start the service","text":"

    To save the file and exit Vim, hit the escape key, type :wq!, then press the return key.

    Next, run sudo systemctl daemon-reload to make systemd aware of your new service.

    Then, run sudo systemctl enable my-prefect-service to enable the service. This command will ensure it runs when your system boots.

    Next, run sudo systemctl start my-prefect-service to start the service.

    Run your deployment from the UI and check out the logs on the Flow Runs page.

    You can see if your daemonized Prefect worker or serve process is running and see the Prefect logs with systemctl status my-prefect-service.

    That's it! You now have a systemd service that starts when your system boots, and will restart if it ever crashes.

    ","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#next-steps","title":"Next steps","text":"

    If you want to set up a long-lived process on a Windows machine the pattern is similar. Instead of systemd, you can use NSSM.

    Check out other Prefect guides to see what else you can do with Prefect!

    ","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/","title":"Developing a New Worker Type","text":"

    Advanced Topic

    This tutorial is for users who want to extend the Prefect framework; completing it successfully will require deep knowledge of Prefect concepts. For standard use cases, we recommend using one of the available workers instead.

    Prefect workers are responsible for setting up execution infrastructure and starting flow runs on that infrastructure.

    A list of available workers can be found here. What if you want to execute your flow runs on infrastructure that doesn't have an available worker type? This tutorial will walk you through creating a custom worker that can run your flows on your chosen infrastructure.

    ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#worker-configuration","title":"Worker configuration","text":"

    When setting up an execution environment for a flow run, a worker receives configuration for the infrastructure it is designed to work with. Examples of configuration values include memory allocation, CPU allocation, credentials, image name, etc. The worker then uses this configuration to create the execution environment and start the flow run.

    How are the configuration values populated?

    The work pool that a worker polls for flow runs has a base job template associated with it. The template is the contract for how configuration values populate for each flow run.

    The keys in the job_configuration section of this base job template match the worker's configuration class attributes. The values in the job_configuration section of the base job template are used to populate the attributes of the worker's configuration class.

    The work pool creator gets to decide how they want to populate the values in the job_configuration section of the base job template. The values can be hard-coded, templated using placeholders, or a mix of these two approaches. Because you, as the worker developer, don't know how the work pool creator will populate the values, you should set sensible defaults for your configuration class attributes as a matter of best practice.

    ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#implementing-a-basejobconfiguration-subclass","title":"Implementing a BaseJobConfiguration subclass","text":"

    A worker developer defines their worker's configuration with a class that extends BaseJobConfiguration.

    BaseJobConfiguration has attributes that are common to all workers:

    • name: The name to assign to the created execution environment.
    • env: Environment variables to set in the created execution environment.
    • labels: The labels assigned to the created execution environment for metadata purposes.
    • command: The command to use when starting a flow run.

    Prefect sets values for each attribute before giving the configuration to the worker. If you want to customize the values of these attributes, use the prepare_for_flow_run method.

    Here's an example prepare_for_flow_run method that adds a label to the execution environment:

    def prepare_for_flow_run(\n    self, flow_run, deployment = None, flow = None,\n):  \n    super().prepare_for_flow_run(flow_run, deployment, flow)  \n    self.labels.append(\"my-custom-label\")\n

    A worker configuration class is a Pydantic model, so you can add additional attributes to your configuration class as Pydantic fields. For example, if you want to allow memory and CPU requests for your worker, you can do so like this:

    from pydantic import Field\nfrom prefect.workers.base import BaseJobConfiguration\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n    memory: int = Field(\n            default=1024,\n            description=\"Memory allocation for the execution environment.\"\n        )\n    cpu: int = Field(\n            default=500, \n            description=\"CPU allocation for the execution environment.\"\n        )\n

    This configuration class will populate the job_configuration section of the resulting base job template.

    For this example, the base job template would look like this:

    job_configuration:\n    name: \"{{ name }}\"\n    env: \"{{ env }}\"\n    labels: \"{{ labels }}\"\n    command: \"{{ command }}\"\n    memory: \"{{ memory }}\"\n    cpu: \"{{ cpu }}\"\nvariables:\n    type: object\n    properties:\n        name:\n          title: Name\n          description: Name given to infrastructure created by a worker.\n          type: string\n        env:\n          title: Environment Variables\n          description: Environment variables to set when starting a flow run.\n          type: object\n          additionalProperties:\n            type: string\n        labels:\n          title: Labels\n          description: Labels applied to infrastructure created by a worker.\n          type: object\n          additionalProperties:\n            type: string\n        command:\n          title: Command\n          description: The command to use when starting a flow run. In most cases,\n            this should be left blank and the command will be automatically generated\n            by the worker.\n          type: string\n        memory:\n            title: Memory\n            description: Memory allocation for the execution environment.\n            type: integer\n            default: 1024\n        cpu:\n            title: CPU\n            description: CPU allocation for the execution environment.\n            type: integer\n            default: 500\n

    This base job template defines what values can be provided by deployment creators on a per-deployment basis and how those provided values will be translated into the configuration values that the worker will use to create the execution environment.

    Notice that each attribute for the class was added in the job_configuration section with placeholders whose name matches the attribute name. The variables section was also populated with the OpenAPI schema for each attribute. If a configuration class is used without explicitly declaring any template variables, the template variables will be inferred from the configuration class attributes.

    ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#customizing-configuration-attribute-templates","title":"Customizing Configuration Attribute Templates","text":"

    You can customize the template for each attribute for situations where the configuration values should use more sophisticated templating. For example, if you want to add units for the memory attribute, you can do so like this:

    from pydantic import Field\nfrom prefect.workers.base import BaseJobConfiguration\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n    memory: str = Field(\n        default=\"1024Mi\",\n        description=\"Memory allocation for the execution environment.\",\n        template=\"{{ memory_request }}Mi\",\n    )\n    cpu: str = Field(\n        default=\"500m\",\n        description=\"CPU allocation for the execution environment.\",\n        template=\"{{ cpu_request }}m\",\n    )\n

    Notice that we changed the type of each attribute to str to accommodate the units, and we added a new template attribute to each attribute. The template attribute is used to populate the job_configuration section of the resulting base job template.

    For this example, the job_configuration section of the resulting base job template would look like this:

    job_configuration:\n    name: \"{{ name }}\"\n    env: \"{{ env }}\"\n    labels: \"{{ labels }}\"\n    command: \"{{ command }}\"\n    memory: \"{{ memory_request }}Mi\"\n    cpu: \"{{ cpu_request }}m\"\n

    Note that to use custom templates, you will need to declare the template variables used in the template because the names of those variables can no longer be inferred from the configuration class attributes. We will cover how to declare the default variable schema in the Worker Template Variables section.

    ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#rules-for-template-variable-interpolation","title":"Rules for template variable interpolation","text":"

    When defining a job configuration model, it's useful to understand how template variables are interpolated into the job configuration. The templating engine follows a few simple rules:

    1. If a template variable is the only value for a key in the job_configuration section, the key will be replaced with the value of the template variable.
    2. If a template variable is part of a string (i.e., there is text before or after the template variable), the value of the template variable will be interpolated into the string.
    3. If a template variable is the only value for a key in the job_configuration section and no value is provided for the template variable, the key will be removed from the job_configuration section.

    These rules allow worker developers and work pool maintainers to define template variables that can be complex types like dictionaries and lists. These rules also mean that worker developers should give reasonable default values to job configuration fields whenever possible because values are not guaranteed to be provided if template variables are unset.
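
    The following is a small illustrative sketch of those three rules; it is not Prefect's actual templating engine, just a way to see how a job_configuration section might be resolved:

    def resolve_job_configuration(job_configuration: dict, values: dict) -> dict:\n    \"\"\"Illustrative sketch of the three interpolation rules (not Prefect's real engine).\"\"\"\n    resolved = {}\n    for key, template in job_configuration.items():\n        if not isinstance(template, str):\n            resolved[key] = template\n            continue\n        stripped = template.strip()\n        if stripped.startswith('{{') and stripped.endswith('}}') and stripped.count('{{') == 1:\n            # rules 1 and 3: the template variable is the entire value for this key\n            name = stripped[2:-2].strip()\n            if name in values:\n                resolved[key] = values[name]  # the value keeps its type (dict, list, int, ...)\n            # if no value was provided, the key is dropped from the configuration\n        else:\n            # rule 2: the variable sits inside a longer string, so interpolate as text\n            rendered = template\n            for name, value in values.items():\n                rendered = rendered.replace('{{ ' + name + ' }}', str(value))\n            resolved[key] = rendered\n    return resolved\n\nprint(resolve_job_configuration(\n    {'memory': '{{ memory }}', 'cpu': '{{ cpu_request }}m', 'name': '{{ name }}'},\n    {'memory': 1024, 'cpu_request': 500},\n))\n# {'memory': 1024, 'cpu': '500m'} ('name' is dropped because no value was provided)\n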

    ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#template-variable-usage-strategies","title":"Template variable usage strategies","text":"

    Template variables define the interface that deployment creators interact with to configure the execution environments of their deployments. The complexity of this interface can be controlled via the template variables that are defined for a base job template. This control allows work pool maintainers to find a point along the spectrum of flexibility and simplicity appropriate for their organization.

    There are two patterns that are represented in current worker implementations:

    ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#pass-through","title":"Pass-through","text":"

    In the pass-through pattern, template variables are passed through to the job configuration with little change. This pattern exposes complete control to deployment creators but also requires them to understand the details of the execution environment.

    This pattern is useful when the execution environment is simple, and the deployment creators are expected to have high technical knowledge.

    The Docker worker is an example of a worker that uses this pattern.

    ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#infrastructure-as-code-templating","title":"Infrastructure as code templating","text":"

    Depending on the infrastructure they interact with, workers can sometimes employ a declarative infrastructure syntax (i.e., infrastructure as code) to create execution environments (e.g., a Kubernetes manifest or an ECS task definition).

    In the IaC pattern, it's often useful to use template variables to template portions of the declarative syntax, which is then rendered into its final form.

    This approach allows work pool creators to provide a simpler interface to deployment creators while also controlling which portions of infrastructure are configurable by deployment creators.

    The Kubernetes worker is an example of a worker that uses this pattern.

    ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#configuring-credentials","title":"Configuring credentials","text":"

    When executing flow runs within cloud services, workers will often need credentials to authenticate with those services. For example, a worker that executes flow runs in AWS Fargate will need AWS credentials. As a worker developer, you can use blocks to accept credentials configuration from the user.

    For example, if you want to allow the user to configure AWS credentials, you can do so like this:

    from typing import Optional\n\nfrom pydantic import Field\nfrom prefect_aws import AwsCredentials\n\nfrom prefect.workers.base import BaseJobConfiguration\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n    aws_credentials: Optional[AwsCredentials] = Field(\n        default=None,\n        description=\"AWS credentials to use when creating AWS resources.\"\n    )\n

    Users can create and assign a block to the aws_credentials attribute in the UI and the worker will use these credentials when interacting with AWS resources.

    ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#worker-template-variables","title":"Worker template variables","text":"

    Providing template variables for a base job template defines the fields that deployment creators can override per deployment. The work pool creator ultimately defines the template variables for a base job template, but the worker developer is able to define default template variables for the worker to make it easier to use.

    Default template variables for a worker are defined by implementing the BaseVariables class. Like the BaseJobConfiguration class, the BaseVariables class has attributes that are common to all workers:

    • name: The name to assign to the created execution environment.
    • env: Environment variables to set in the created execution environment.
    • labels: The labels assigned to the created execution environment for metadata purposes.
    • command: The command to use when starting a flow run.

    Additional attributes can be added to the BaseVariables class to define additional template variables. For example, if you want to allow memory and CPU requests for your worker, you can do so like this:

    from pydantic import Field\nfrom prefect.workers.base import BaseVariables\n\nclass MyWorkerTemplateVariables(BaseVariables):\n    memory_request: int = Field(\n            default=1024,\n            description=\"Memory allocation for the execution environment.\"\n        )\n    cpu_request: int = Field(\n            default=500, \n            description=\"CPU allocation for the execution environment.\"\n        )\n

    When MyWorkerTemplateVariables is used in conjunction with MyWorkerConfiguration from the Customizing Configuration Attribute Templates section, the resulting base job template will look like this:

    job_configuration:\n    name: \"{{ name }}\"\n    env: \"{{ env }}\"\n    labels: \"{{ labels }}\"\n    command: \"{{ command }}\"\n    memory: \"{{ memory_request }}Mi\"\n    cpu: \"{{ cpu_request }}m\"\nvariables:\n    type: object\n    properties:\n        name:\n          title: Name\n          description: Name given to infrastructure created by a worker.\n          type: string\n        env:\n          title: Environment Variables\n          description: Environment variables to set when starting a flow run.\n          type: object\n          additionalProperties:\n            type: string\n        labels:\n          title: Labels\n          description: Labels applied to infrastructure created by a worker.\n          type: object\n          additionalProperties:\n            type: string\n        command:\n          title: Command\n          description: The command to use when starting a flow run. In most cases,\n            this should be left blank and the command will be automatically generated\n            by the worker.\n          type: string\n        memory_request:\n            title: Memory Request\n            description: Memory allocation for the execution environment.\n            type: integer\n            default: 1024\n        cpu_request:\n            title: CPU Request\n            description: CPU allocation for the execution environment.\n            type: integer\n            default: 500\n

    Note that template variable classes are never used directly. Instead, they are used to generate a schema that is used to populate the variables section of a base job template and validate the template variables provided by the user.

    We don't recommend using template variable classes within your worker implementation for validation purposes because the work pool creator ultimately defines the template variables. The configuration class should handle any necessary run-time validation.
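
    For example, run-time validation can live on the configuration class itself. The following is a minimal sketch only; the validate_for_runtime method, its name, and the memory rule are illustrative and not part of Prefect's API. A worker could call this helper at the start of its run method before creating infrastructure:

    from prefect.workers.base import BaseJobConfiguration\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n    memory: str = \"1024Mi\"\n\n    def validate_for_runtime(self) -> None:\n        # Illustrative run-time check performed by the configuration class\n        if not self.memory.endswith((\"Mi\", \"Gi\")):\n            raise ValueError(\"memory must end in 'Mi' or 'Gi'\")\n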

    ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#worker-implementation","title":"Worker implementation","text":"

    Workers set up execution environments using the provided configuration. Workers also observe the execution environment as the flow run executes and report any crashes to the Prefect API.

    ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#attributes","title":"Attributes","text":"

    To implement a worker, create a subclass of the BaseWorker class and provide it with the following attributes:

    • type - The type of the worker. (Required)
    • job_configuration - The configuration class for the worker. (Required)
    • job_configuration_variables - The template variables class for the worker. (Optional)
    • _documentation_url - Link to documentation for the worker. (Optional)
    • _logo_url - Link to a logo for the worker. (Optional)
    • _description - A description of the worker. (Optional)
    ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#methods","title":"Methods","text":"","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#run","title":"run","text":"

    In addition to the attributes above, you must also implement a run method. The run method is called for each flow run the worker receives for execution from the work pool.

    The run method has the following signature:

     async def run(\n        self, flow_run: FlowRun, configuration: BaseJobConfiguration, task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> BaseWorkerResult:\n        ...\n

    The run method is passed the flow run to execute, the execution environment configuration for the flow run, and a task status object that lets the worker signal whether the flow run was submitted successfully.

    The run method must also return a BaseWorkerResult object, which contains information about the flow run execution. In most cases, you can subclass BaseWorkerResult with no modifications, like so:

    from prefect.workers.base import BaseWorkerResult\n\nclass MyWorkerResult(BaseWorkerResult):\n    \"\"\"Result returned by the MyWorker.\"\"\"\n

    If you would like to return more information about a flow run, you can add additional attributes to your BaseWorkerResult subclass.
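
    For example, a sketch of a result class with extra fields might look like the following; the job_name and exit_reason fields are illustrative and not part of Prefect's API:

    from typing import Optional\n\nfrom prefect.workers.base import BaseWorkerResult\n\nclass MyWorkerResult(BaseWorkerResult):\n    \"\"\"Result returned by MyWorker, extended with illustrative metadata.\"\"\"\n    job_name: Optional[str] = None     # name of the job that executed the flow run (hypothetical)\n    exit_reason: Optional[str] = None  # reason reported by the infrastructure (hypothetical)\n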

    ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#kill_infrastructure","title":"kill_infrastructure","text":"

    Workers must implement a kill_infrastructure method to support flow run cancellation. The kill_infrastructure method is called when a flow run is canceled and is passed an identifier for the infrastructure to tear down and the execution environment configuration for the flow run.

    The infrastructure_pid passed to the kill_infrastructure method is the same identifier used to mark a flow run execution as started in the run method. The infrastructure_pid must be a string, but it can take on any format you choose.

    The infrastructure_pid should contain enough information to uniquely identify the infrastructure created for a flow run when used with the job_configuration passed to the kill_infrastructure method. Examples of useful information include: the cluster name, the hostname, the process ID, the container ID, etc.
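
    For example, a worker that creates jobs in a cluster might pack those values into a single string. This is a minimal sketch; the helper names and the cluster/namespace/job layout are illustrative:

    def _get_infrastructure_pid(cluster_uid: str, namespace: str, job_name: str) -> str:\n    # Combine enough information to uniquely identify the created infrastructure\n    return f\"{cluster_uid}:{namespace}:{job_name}\"\n\ndef _parse_infrastructure_pid(infrastructure_pid: str) -> tuple:\n    # Split the identifier back into its components for teardown\n    cluster_uid, namespace, job_name = infrastructure_pid.split(\":\", 2)\n    return cluster_uid, namespace, job_name\n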

    If a worker cannot tear down infrastructure created for a flow run, the kill_infrastructure method should raise an InfrastructureNotFound or InfrastructureNotAvailable exception.
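
    For example, a kill_infrastructure method might look like the following sketch; the _find_job and _kill_job helpers are illustrative placeholders for your own infrastructure calls:

    from prefect.exceptions import InfrastructureNotFound\nfrom prefect.workers.base import BaseJobConfiguration\n\n# Method sketch for a worker class such as the MyWorker example below\nasync def kill_infrastructure(\n    self, infrastructure_pid: str, configuration: BaseJobConfiguration\n) -> None:\n    cluster_uid, namespace, job_name = infrastructure_pid.split(\":\", 2)\n    job = await self._find_job(namespace, job_name)  # illustrative helper\n    if job is None:\n        raise InfrastructureNotFound(\n            f\"Could not find infrastructure for {infrastructure_pid!r}; it may have already been removed.\"\n        )\n    await self._kill_job(job, configuration)  # illustrative helper\n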

    ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#worker-implementation-example","title":"Worker implementation example","text":"

    Below is an example of a worker implementation. This example is not intended to be a complete implementation but to illustrate the aforementioned concepts.

    from typing import Optional\n\nimport anyio.abc\nfrom pydantic import Field\n\nfrom prefect.client.schemas import FlowRun\nfrom prefect.workers.base import BaseWorker, BaseWorkerResult, BaseJobConfiguration, BaseVariables\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n    memory: str = Field(\n            default=\"1024Mi\",\n            description=\"Memory allocation for the execution environment.\",\n            template=\"{{ memory_request }}Mi\"\n        )\n    cpu: str = Field(\n            default=\"500m\", \n            description=\"CPU allocation for the execution environment.\",\n            template=\"{{ cpu_request }}m\"\n        )\n\nclass MyWorkerTemplateVariables(BaseVariables):\n    memory_request: int = Field(\n            default=1024,\n            description=\"Memory allocation for the execution environment.\"\n        )\n    cpu_request: int = Field(\n            default=500, \n            description=\"CPU allocation for the execution environment.\"\n        )\n\nclass MyWorkerResult(BaseWorkerResult):\n    \"\"\"Result returned by the MyWorker.\"\"\"\n\nclass MyWorker(BaseWorker):\n    type = \"my-worker\"\n    job_configuration = MyWorkerConfiguration\n    job_configuration_variables = MyWorkerTemplateVariables\n    _documentation_url = \"https://example.com/docs\"\n    _logo_url = \"https://example.com/logo\"\n    _description = \"My worker description.\"\n\n    async def run(\n        self, flow_run: FlowRun, configuration: BaseJobConfiguration, task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> BaseWorkerResult:\n        # Create the execution environment and start execution\n        job = await self._create_and_start_job(configuration)\n\n        if task_status:\n            # Use a unique ID to mark the run as started. This ID is later used to tear down infrastructure\n            # if the flow run is cancelled.\n            task_status.started(job.id) \n\n        # Monitor the execution\n        job_status = await self._watch_job(job, configuration)\n\n        exit_code = job_status.exit_code if job_status else -1 # Get result of execution for reporting\n        return MyWorkerResult(\n            status_code=exit_code,\n            identifier=job.id,\n        )\n\n    async def kill_infrastructure(self, infrastructure_pid: str, configuration: BaseJobConfiguration) -> None:\n        # Tear down the execution environment\n        await self._kill_job(infrastructure_pid, configuration)\n

    Most of the execution logic is omitted from the example above, but it shows that the typical order of operations in the run method is:

    1. Create the execution environment and start the flow run execution
    2. Mark the flow run as started via the passed task_status object
    3. Monitor the execution
    4. Get the execution's final status from the infrastructure and return a BaseWorkerResult object

    To see other examples of worker implementations, see the ProcessWorker and KubernetesWorker implementations.

    ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#integrating-with-the-prefect-cli","title":"Integrating with the Prefect CLI","text":"

    Workers can be started via the Prefect CLI by providing the --type option to the prefect worker start CLI command. To make your worker type available via the CLI, it must be available at import time.

    If your worker is in a package, you can add an entry point to your setup file in the following format:

    entry_points={\n    \"prefect.collections\": [\n        \"my_package_name = my_worker_module\",\n    ]\n},\n

    Prefect will discover this entry point and load the specified module, making your worker type available via the Prefect CLI.
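
    For example, the module referenced by the entry point only needs to import your worker class at module level so it is registered when Prefect loads the collection. A minimal sketch, assuming an illustrative package layout:

    # my_worker_module/__init__.py (illustrative layout)\n# Importing the worker class here registers the \"my-worker\" type when the collection is loaded.\nfrom .worker import MyWorker\n\n__all__ = [\"MyWorker\"]\n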

    ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/kubernetes/","title":"Running Flows with Kubernetes","text":"

    This guide will walk you through running your flows on Kubernetes. Though much of the guide applies to any Kubernetes cluster, there are differences among the managed Kubernetes offerings of the cloud providers, especially when it comes to container registries and access management. We'll focus on Amazon Elastic Kubernetes Service (EKS).

    ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#prerequisites","title":"Prerequisites","text":"

    Before we begin, there are a few prerequisites:

    1. A Prefect Cloud account
    2. A cloud provider (AWS, GCP, or Azure) account
    3. Install Python and Prefect
    4. Install Helm
    5. Install the Kubernetes CLI (kubectl)

    Prefect is tested against Kubernetes 1.26.0 and newer minor versions.

    Administrator Access

    Though not strictly necessary, you may want to ensure you have admin access, both in Prefect Cloud and in your cloud provider. Admin access is only necessary during the initial setup and can be downgraded after.

    ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-cluster","title":"Create a cluster","text":"

    Let's start by creating a new cluster. If you already have one, skip ahead to the next section.

    AWSGCPAzure

    One easy way to get set up with a cluster in EKS is with eksctl. Node pools can be backed by either EC2 instances or AWS Fargate. Let's choose Fargate so there's less to manage. The following command takes around 15 minutes and must not be interrupted:

    # Replace the cluster name with your own value\neksctl create cluster --fargate --name <CLUSTER-NAME>\n\n# Authenticate to the cluster.\naws eks update-kubeconfig --name <CLUSTER-NAME>\n

    You can get a GKE cluster up and running with a few commands using the gcloud CLI. We'll build a bare-bones cluster that is accessible over the open internet - this should not be used in a production environment. To deploy the cluster, your project must have a VPC network configured.

    First, authenticate to GCP by setting the following configuration options.

    # Authenticate to gcloud\ngcloud auth login\n\n# Specify the project & zone to deploy the cluster to\n# Replace the project name with your GCP project name\ngcloud config set project <GCP-PROJECT-NAME>\ngcloud config set compute/zone <AVAILABILITY-ZONE>\n

    Next, deploy the cluster - this command will take ~15 minutes to complete. Once the cluster has been created, authenticate to the cluster.

    # Create cluster\n# Replace the cluster name with your own value\ngcloud container clusters create <CLUSTER-NAME> --num-nodes=1 \\\n--machine-type=n1-standard-2\n\n# Authenticate to the cluster\ngcloud container clusters get-credentials <CLUSTER-NAME> --zone <AVAILABILITY-ZONE>\n

    GCP Gotchas

    • You'll need to enable the default service account in the IAM console, or specify a different service account with the appropriate permissions.
    ERROR: (gcloud.container.clusters.create) ResponseError: code=400, message=Service account \"000000000000-compute@developer.gserviceaccount.com\" is disabled.\n
    • Organization policy blocks creation of external (public) IPs. You can override this policy (if you have the appropriate permissions) under the Organizational Policy page within IAM.
    creation failed: Constraint constraints/compute.vmExternalIpAccess violated for project 000000000000. Add instance projects/<GCP-PROJECT-NAME>/zones/us-east1-b/instances/gke-gke-guide-1-default-pool-c369c84d-wcfl to the constraint to use external IP with it.\"\n

    You can quickly create an AKS cluster using the Azure CLI, or use the Cloud Shell directly from the Azure portal shell.azure.com.

    First, authenticate to Azure if not already done.

      az login\n

    Next, deploy the cluster - this command will take ~4 minutes to complete. Once the cluster has been created, authenticate to the cluster.

      # Create a Resource Group at the desired location, e.g. westus\n  az group create --name <RESOURCE-GROUP-NAME> --location <LOCATION>\n\n  # Create a kubernetes cluster with default kubernetes version, default SKU load balancer (Standard) and default vm set type (VirtualMachineScaleSets)\n  az aks create --resource-group <RESOURCE-GROUP-NAME> --name <CLUSTER-NAME>\n\n  # Configure kubectl to connect to your Kubernetes cluster\n  az aks get-credentials --resource-group <RESOURCE-GROUP-NAME> --name <CLUSTER-NAME>\n\n  # Verify the connection by listing the cluster nodes\n  kubectl get nodes\n
    ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-container-registry","title":"Create a container registry","text":"

    Besides a cluster, the other critical resource we'll need is a container registry. A registry is not strictly required, but in most cases you'll want to use custom images and/or have more control over where images are stored. If you already have a registry, skip ahead to the next section.

    AWSGCPAzure

    Let's create a registry using the AWS CLI and authenticate the docker daemon to said registry:

    # Replace the image name with your own value\naws ecr create-repository --repository-name <IMAGE-NAME>\n\n# Login to ECR\n# Replace the region and account ID with your own values\naws ecr get-login-password --region <REGION> | docker login \\\n  --username AWS --password-stdin <AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com\n

    Let's create a registry using the gcloud CLI and authenticate the docker daemon to said registry:

    # Create artifact registry repository to host your custom image\n# Replace the repository name with your own value; it can be the \n# same name as your image\ngcloud artifacts repositories create <REPOSITORY-NAME> \\\n--repository-format=docker --location=us\n\n# Authenticate to artifact registry\ngcloud auth configure-docker us-docker.pkg.dev\n

    Let's create a registry using the Azure CLI and authenticate the docker daemon to said registry:

    # Name must be a lower-case alphanumeric\n# Tier SKU can easily be updated later, e.g. az acr update --name <REPOSITORY-NAME> --sku Standard\naz acr create --resource-group <RESOURCE-GROUP-NAME> \\\n  --name <REPOSITORY-NAME> \\\n  --sku Basic\n\n# Attach ACR to AKS cluster\n# You need Owner, Account Administrator, or Co-Administrator role on your Azure subscription as per Azure docs\naz aks update --resource-group <RESOURCE-GROUP-NAME> --name <CLUSTER-NAME> --attach-acr <REPOSITORY-NAME>\n\n# You can verify AKS can now reach ACR\naz aks check-acr --resource-group <RESOURCE-GROUP-NAME> --name <CLUSTER-NAME> --acr <REPOSITORY-NAME>.azurecr.io\n
    ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-kubernetes-work-pool","title":"Create a Kubernetes work pool","text":"

    Work pools allow you to manage deployment infrastructure. We'll configure the default values for our Kubernetes base job template. Note that these values can be overridden by individual deployments.

    Let's switch to the Prefect Cloud UI, where we'll create a new Kubernetes work pool (alternatively, you could use the Prefect CLI to create a work pool).

    1. Click on the Work Pools tab on the left sidebar
    2. Click the + button at the top of the page
    3. Select Kubernetes as the work pool type
    4. Click Next to configure the work pool settings

    Let's look at a few popular configuration options.

    Environment Variables

    Add environment variables to set when starting a flow run. So long as you are using a Prefect-maintained image and haven't overwritten the image's entrypoint, you can specify Python packages to install at runtime with {\"EXTRA_PIP_PACKAGES\":\"my_package\"}. For example {\"EXTRA_PIP_PACKAGES\":\"pandas==1.2.3\"} will install pandas version 1.2.3. Alternatively, you can specify package installation in a custom Dockerfile, which can allow you to take advantage of image caching. As we'll see below, Prefect can help us create a Dockerfile with our flow code and the packages specified in a requirements.txt file baked in.

    Namespace

    Set the Kubernetes namespace to create jobs within, such as prefect. By default, this is set to default.

    Image

    Specify the Docker container image for created jobs. If not set, the latest Prefect 2 image will be used (i.e. prefecthq/prefect:2-latest). Note that you can override this on each deployment through job_variables.

    Image Pull Policy

    Select from the dropdown options to specify when to pull the image. When using the IfNotPresent policy, make sure to use unique image tags, as otherwise old images could get cached on your nodes.

    Finished Job TTL

    Number of seconds before finished jobs are automatically cleaned up by Kubernetes' controller. You may want to set to 60 so that completed flow runs are cleaned up after a minute.

    Pod Watch Timeout Seconds

    Number of seconds for pod creation to complete before timing out. Consider setting to 300, especially if using a serverless type node pool, as these tend to have longer startup times.

    Kubernetes Cluster Config

    You can configure the Kubernetes cluster to use for job creation by specifying a KubernetesClusterConfig block. Generally you should leave the cluster config blank as the worker should be provisioned with appropriate access and permissions. Typically this setting is used when a worker is deployed to a cluster that is different from the cluster where flow runs are executed.
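
    If you do need one, a minimal sketch of creating and saving a KubernetesClusterConfig block from Python might look like this; it assumes the from_file classmethod reads your local ~/.kube/config, and the block name is illustrative:

    from prefect.blocks.kubernetes import KubernetesClusterConfig\n\n# Load the active context from the local kubeconfig and save it as a reusable block\ncluster_config = KubernetesClusterConfig.from_file()\ncluster_config.save(\"my-cluster-config\")\n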

    Advanced Settings

    Want to modify the default base job template to add other fields or delete existing fields?

    Select the Advanced tab and edit the JSON representation of the base job template.

    For example, to set a CPU request, add the following section under variables:

    \"cpu_request\": {\n  \"title\": \"CPU Request\",\n  \"description\": \"The CPU allocation to request for this pod.\",\n  \"default\": \"default\",\n  \"type\": \"string\"\n},\n

    Next add the following to the first containers item under job_configuration:

    ...\n\"containers\": [\n  {\n    ...,\n    \"resources\": {\n      \"requests\": {\n        \"cpu\": \"{{ cpu_request }}\"\n      }\n    }\n  }\n],\n...\n

    Running deployments with this work pool will now request the specified CPU.

    After configuring the work pool settings, move to the next screen.

    Give the work pool a name and save.

    Our new Kubernetes work pool should now appear in the list of work pools.

    ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-prefect-cloud-api-key","title":"Create a Prefect Cloud API key","text":"

    While in the Prefect Cloud UI, create a Prefect Cloud API key if you don't already have one. Click your profile avatar, click your name to go to your profile settings, select API Keys, and click the plus button to create a new API key. Make sure to store it safely along with your other passwords, ideally in a password manager.

    ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#deploy-a-worker-using-helm","title":"Deploy a worker using Helm","text":"

    With our cluster and work pool created, it's time to deploy a worker, which will set up Kubernetes infrastructure to run our flows. The best way to deploy a worker is using the Prefect Helm Chart.

    ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#add-the-prefect-helm-repository","title":"Add the Prefect Helm repository","text":"

    Add the Prefect Helm repository to your Helm client:

    helm repo add prefect https://prefecthq.github.io/prefect-helm\nhelm repo update\n
    ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-namespace","title":"Create a namespace","text":"

    Create a new namespace in your Kubernetes cluster to deploy the Prefect worker:

    kubectl create namespace prefect\n
    ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-kubernetes-secret-for-the-prefect-api-key","title":"Create a Kubernetes secret for the Prefect API key","text":"
    kubectl create secret generic prefect-api-key \\\n--namespace=prefect --from-literal=key=your-prefect-cloud-api-key\n
    ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#configure-helm-chart-values","title":"Configure Helm chart values","text":"

    Create a values.yaml file to customize the Prefect worker configuration. Add the following contents to the file:

    worker:\n  cloudApiConfig:\n    accountId: <target account ID>\n    workspaceId: <target workspace ID>\n  config:\n    workPool: <target work pool name>\n

    These settings will ensure that the worker connects to the proper account, workspace, and work pool.

    View your Account ID and Workspace ID in your browser URL when logged into Prefect Cloud. For example: https://app.prefect.cloud/account/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here.

    ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-helm-release","title":"Create a Helm release","text":"

    Let's install the Prefect worker using the Helm chart with your custom values.yaml file:

    helm install prefect-worker prefect/prefect-worker \\\n  --namespace=prefect \\\n  -f values.yaml\n
    ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#verify-deployment","title":"Verify deployment","text":"

    Check the status of your Prefect worker deployment:

    kubectl get pods -n prefect\n
    ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#define-a-flow","title":"Define a flow","text":"

    Let's start simple with a flow that just logs a message. In a directory named flows, create a file named hello.py with the following contents:

    from prefect import flow, get_run_logger, tags\n\n@flow\ndef hello(name: str = \"Marvin\"):\n    logger = get_run_logger()\n    logger.info(f\"Hello, {name}!\")\n\nif __name__ == \"__main__\":\n    with tags(\"local\"):\n        hello()\n

    Run the flow locally with python hello.py to verify that it works. Note that we use the tags context manager to tag the flow run as local. This step is not required, but does add some helpful metadata.

    ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#define-a-prefect-deployment","title":"Define a Prefect deployment","text":"

    Prefect has two recommended options for creating a deployment with dynamic infrastructure. You can define a deployment in a Python script using the flow.deploy mechanics or in a prefect.yaml definition file. The prefect.yaml file currently allows for more customization in terms of push and pull steps. Kubernetes objects are defined in YAML, so we expect many teams using Kubernetes work pools to create their deployments with YAML as well. To learn about the Python deployment creation method with flow.deploy refer to the Workers & Work Pools tutorial page.
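
    For reference, a minimal sketch of the Python alternative for this guide's setup might look like the following; the import path and registry URL are illustrative, and the rest of this guide uses the prefect.yaml approach instead:

    # deploy.py - illustrative sketch of the flow.deploy() alternative\nfrom flows.hello import hello\n\nif __name__ == \"__main__\":\n    hello.deploy(\n        name=\"default\",\n        work_pool_name=\"kubernetes\",\n        image=\"<YOUR-REGISTRY>/hello:latest\",  # pushed to the registry created earlier\n    )\n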

    The prefect.yaml file is used by the prefect deploy command to deploy our flows. As a part of that process it will also build and push our image. Create a new file named prefect.yaml with the following contents:

    # Generic metadata about this project\nname: flows\nprefect-version: 2.13.8\n\n# build section allows you to manage and build docker images\nbuild:\n- prefect_docker.deployments.steps.build_docker_image:\n    id: build-image\n    requires: prefect-docker>=0.4.0\n    image_name: \"{{ $PREFECT_IMAGE_NAME }}\"\n    tag: latest\n    dockerfile: auto\n    platform: \"linux/amd64\"\n\n# push section allows you to manage if and how this project is uploaded to remote locations\npush:\n- prefect_docker.deployments.steps.push_docker_image:\n    requires: prefect-docker>=0.4.0\n    image_name: \"{{ build-image.image_name }}\"\n    tag: \"{{ build-image.tag }}\"\n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect.deployments.steps.set_working_directory:\n    directory: /opt/prefect/flows\n\n# the definitions section allows you to define reusable components for your deployments\ndefinitions:\n  tags: &common_tags\n    - \"eks\"\n  work_pool: &common_work_pool\n    name: \"kubernetes\"\n    job_variables:\n      image: \"{{ build-image.image }}\"\n\n# the deployments section allows you to provide configuration for deploying flows\ndeployments:\n- name: \"default\"\n  tags: *common_tags\n  schedule: null\n  entrypoint: \"flows/hello.py:hello\"\n  work_pool: *common_work_pool\n\n- name: \"arthur\"\n  tags: *common_tags\n  schedule: null\n  entrypoint: \"flows/hello.py:hello\"\n  parameters:\n    name: \"Arthur\"\n  work_pool: *common_work_pool\n

    We define two deployments of the hello flow: default and arthur. Note that by specifying dockerfile: auto, Prefect will automatically create a dockerfile that installs any requirements.txt and copies over the current directory. You can pass a custom Dockerfile instead with dockerfile: Dockerfile or dockerfile: path/to/Dockerfile. Also note that we are specifically building for the linux/amd64 platform. This specification is often necessary when images are built on Macs with M series chips but run on cloud provider instances.

    Deployment specific build, push, and pull

    The build, push, and pull steps can be overridden for each deployment. This allows for more custom behavior, such as specifying a different image for each deployment.

    Let's make sure we define our requirements in a requirements.txt file:

    prefect>=2.13.8\nprefect-docker>=0.4.0\nprefect-kubernetes>=0.3.1\n

    The directory should now look something like this:

    .\n\u251c\u2500\u2500 prefect.yaml\n\u2514\u2500\u2500 flows\n    \u251c\u2500\u2500 requirements.txt\n    \u2514\u2500\u2500 hello.py\n
    ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#tag-images-with-a-git-sha","title":"Tag images with a Git SHA","text":"

    If your code is stored in a GitHub repository, it's good practice to tag your images with the Git SHA of the code used to build it. This can be done in the prefect.yaml file with a few minor modifications, and isn't yet an option with the Python deployment creation method. Let's use the run_shell_script command to grab the SHA and pass it to the tag parameter of build_docker_image:

    build:\n- prefect.deployments.steps.run_shell_script:\n    id: get-commit-hash\n    script: git rev-parse --short HEAD\n    stream_output: false\n- prefect_docker.deployments.steps.build_docker_image:\n    id: build-image\n    requires: prefect-docker>=0.4.0\n    image_name: \"{{ $PREFECT_IMAGE_NAME }}\"\n    tag: \"{{ get-commit-hash.stdout }}\"\n    dockerfile: auto\n    platform: \"linux/amd64\"\n

    Let's also set the SHA as a tag for easy identification in the UI:

    definitions:\n  tags: &common_tags\n    - \"eks\"\n    - \"{{ get-commit-hash.stdout }}\"\n  work_pool: &common_work_pool\n    name: \"kubernetes\"\n    job_variables:\n      image: \"{{ build-image.image }}\"\n
    ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#authenticate-to-prefect","title":"Authenticate to Prefect","text":"

    Before we deploy the flows to Prefect, we will need to authenticate via the Prefect CLI. We will also need to ensure that all of our flow's dependencies are present at deploy time.

    This example uses a virtual environment to ensure consistency across environments.

    # Create a virtualenv & activate it\nvirtualenv prefect-demo\nsource prefect-demo/bin/activate\n\n# Install dependencies of your flow\nprefect-demo/bin/pip install -r requirements.txt\n\n# Authenticate to Prefect & select the appropriate \n# workspace to deploy your flows to\nprefect-demo/bin/prefect cloud login\n
    ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#deploy-the-flows","title":"Deploy the flows","text":"

    Now we're ready to deploy our flows, which will build our images. The image name determines which registry the image will end up in. We have configured our prefect.yaml file to get the image name from the PREFECT_IMAGE_NAME environment variable, so let's set that first:

    AWSGCPAzure
    export PREFECT_IMAGE_NAME=<AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/<IMAGE-NAME>\n
    export PREFECT_IMAGE_NAME=us-docker.pkg.dev/<GCP-PROJECT-NAME>/<REPOSITORY-NAME>/<IMAGE-NAME>\n
    export PREFECT_IMAGE_NAME=<REPOSITORY-NAME>.azurecr.io/<IMAGE-NAME>\n

    To deploy your flows, ensure your Docker daemon is running first. Deploy all the flows with prefect deploy --all or deploy them individually by name: prefect deploy -n hello/default or prefect deploy -n hello/arthur.

    ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#run-the-flows","title":"Run the flows","text":"

    Once the deployments are successfully created, we can run them from the UI or the CLI:

    prefect deployment run hello/default\nprefect deployment run hello/arthur\n

    Congratulations! You just ran two deployments in Kubernetes. Head over to the UI to check their status!

    ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/overriding-job-variables/","title":"Deeper Dive: Overriding Work Pool Job Variables","text":"

    As described in the Deploying Flows to Work Pools and Workers guide, there are two ways to deploy flows to work pools: using a prefect.yaml file or using the .deploy() method.

    In both cases, you can override job variables on a work pool for a given deployment.

    While exactly which job variables are available to be overridden depends on the type of work pool you're using at a given time, this guide will explore some common patterns for overriding job variables in both deployment methods.

    ","tags":["deployments","flow runs","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#background","title":"Background","text":"

    First of all, what are \"job variables\"?

    Job variables are infrastructure-related values that are configurable on a work pool and that may be relevant to how your flow run executes on your infrastructure. Job variables can be overridden on a per-deployment or per-flow run basis, allowing you to dynamically change infrastructure from the work pool's defaults depending on your needs.

    Let's use env - the only job variable that is configurable for all work pool types - as an example.

    When you create or edit a work pool, you can specify a set of environment variables that will be set in the runtime environment of the flow run.

    For example, you might want a certain deployment to have the following environment variables available:

    {\n  \"EXECUTION_ENV\": \"staging\",\n  \"MY_NOT_SO_SECRET_CONFIG\": \"plumbus\",\n}\n

    Rather than hardcoding these values into your work pool in the UI and making them available to all deployments associated with that work pool, you can override these values on a per-deployment basis.

    Let's look at how to do that.

    ","tags":["deployments","flow runs","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#how-to-override-job-variables-on-a-deployment","title":"How to override job variables on a deployment","text":"

    Say we have the following repo structure:

    \u00bb tree\n.\n\u251c\u2500\u2500 README.md\n\u251c\u2500\u2500 requirements.txt\n\u251c\u2500\u2500 demo_project\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 daily_flow.py\n

    ... and we have some demo_flow.py file like this:

    import os\nfrom prefect import flow, task\n\n@task\ndef do_something_important(not_so_secret_value: str) -> None:\n    print(f\"Doing something important with {not_so_secret_value}!\")\n\n@flow(log_prints=True)\ndef some_work():\n    environment = os.environ.get(\"EXECUTION_ENVIRONMENT\", \"local\")\n\n    print(f\"Coming to you live from {environment}!\")\n\n    not_so_secret_value = os.environ.get(\"MY_NOT_SO_SECRET_CONFIG\")\n\n    if not_so_secret_value is None:\n        raise ValueError(\"You forgot to set MY_NOT_SO_SECRET_CONFIG!\")\n\n    do_something_important(not_so_secret_value)\n
    ","tags":["deployments","flow runs","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#using-a-prefectyaml-file","title":"Using a prefect.yaml file","text":"

    In this case, let's also say we have the following deployment definition in a prefect.yaml file at the root of our repository:

    deployments:\n- name: demo-deployment\n  entrypoint: demo_project/demo_flow.py:some_work\n  work_pool:\n    name: local\n  schedule: null\n

    Note

    While not the focus of this guide, note that this deployment definition uses a default \"global\" pull step, because one is not explicitly defined on the deployment. For reference, here's what that would look like at the top of the prefect.yaml file:

    pull:\n- prefect.deployments.steps.git_clone: &clone_repo\n    repository: https://github.com/some-user/prefect-monorepo\n    branch: main\n

    ","tags":["deployments","flow runs","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#hard-coded-job-variables","title":"Hard-coded job variables","text":"

    To provide the EXECUTION_ENVIRONMENT and MY_NOT_SO_SECRET_CONFIG environment variables to this deployment, we can add a job_variables section to our deployment definition in the prefect.yaml file:

    deployments:\n- name: demo-deployment\n  entrypoint: demo_project/demo_flow.py:some_work\n  work_pool:\n    name: local\n    job_variables:\n        env:\n            EXECUTION_ENVIRONMENT: staging\n            MY_NOT_SO_SECRET_CONFIG: plumbus\n  schedule: null\n

    ... and then run prefect deploy -n demo-deployment to deploy the flow with these job variables.

    We should then be able to see the job variables in the Configuration tab of the deployment in the UI:

    ","tags":["deployments","flow runs","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#using-existing-environment-variables","title":"Using existing environment variables","text":"

    If you want to use environment variables that are already set in your local environment, you can template these in the prefect.yaml file using the {{ $ENV_VAR_NAME }} syntax:

    deployments:\n- name: demo-deployment\n  entrypoint: demo_project/demo_flow.py:some_work\n  work_pool:\n    name: local\n    job_variables:\n        env:\n            EXECUTION_ENVIRONMENT: \"{{ $EXECUTION_ENVIRONMENT }}\"\n            MY_NOT_SO_SECRET_CONFIG: \"{{ $MY_NOT_SO_SECRET_CONFIG }}\"\n  schedule: null\n

    Note

    This assumes that the machine where prefect deploy is run has these environment variables set.

    export EXECUTION_ENVIRONMENT=staging\nexport MY_NOT_SO_SECRET_CONFIG=plumbus\n

    As before, run prefect deploy -n demo-deployment to deploy the flow with these job variables, and you should see them in the UI under the Configuration tab.

    ","tags":["deployments","flow runs","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#using-the-deploy-method","title":"Using the .deploy() method","text":"

    If you're using the .deploy() method to deploy your flow, the process is similar, but instead of having your prefect.yaml file define the job variables, you can pass them as a dictionary to the job_variables argument of the .deploy() method.

    We could add the following block to our demo_project/daily_flow.py file from the setup section:

    if __name__ == \"__main__\":\n    flow.from_source(\n        source=\"https://github.com/zzstoatzz/prefect-monorepo.git\",\n        entrypoint=\"src/demo_project/demo_flow.py:some_work\"\n    ).deploy(\n        name=\"demo-deployment\",\n        work_pool_name=\"local\", # can only .deploy() to a local work pool in prefect>=2.15.1\n        job_variables={\n            \"env\": {\n                \"EXECUTION_ENVIRONMENT\": os.environ.get(\"EXECUTION_ENVIRONMENT\", \"local\"),\n                \"MY_NOT_SO_SECRET_CONFIG\": os.environ.get(\"MY_NOT_SO_SECRET_CONFIG\")\n            }\n        }\n    )\n

    Note

    The above example works assuming a couple of things:

    • The machine where this script is run has these environment variables set:

    export EXECUTION_ENVIRONMENT=staging\nexport MY_NOT_SO_SECRET_CONFIG=plumbus\n

    • demo_project/daily_flow.py already exists in the repository at the specified path

    Running this script with something like:

    python demo_project/daily_flow.py\n

    ... will deploy the flow with the specified job variables, which should then be visible in the UI under the Configuration tab.

    ","tags":["deployments","flow runs","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#how-to-override-job-variables-on-a-flow-run","title":"How to override job variables on a flow run","text":"

    When running flows, you can pass in job variables that override any values set on the work pool or deployment. Any interface that runs deployments can accept job variables.
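
    For example, when triggering a run from Python you can pass job variables to run_deployment. This is a sketch that assumes your Prefect version accepts the job_variables argument and reuses the deployment name from the example above:

    from prefect.deployments import run_deployment\n\nflow_run = run_deployment(\n    name=\"some-work/demo-deployment\",\n    job_variables={\"env\": {\"MY_NEW_ENV_VAR\": \"42\", \"HELLO\": \"THERE\"}},\n)\n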

    ","tags":["deployments","flow runs","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#using-the-custom-run-form-in-the-ui","title":"Using the custom run form in the UI","text":"

    Custom runs allow you to pass a dictionary of variables into your flow run infrastructure. Using the same env example from above, we could do the following:

    ","tags":["deployments","flow runs","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#using-the-cli","title":"Using the CLI","text":"

    Similarly, runs kicked off via CLI accept job variables with the -jv or --job-variable flag.

    prefect deployment run \\\n  --id \"fb8e3073-c449-474b-b993-851fe5e80e53\" \\\n  --job-variable MY_NEW_ENV_VAR=42 \\\n  --job-variable HELLO=THERE\n
    ","tags":["deployments","flow runs","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#using-job-variables-in-automations","title":"Using job variables in automations","text":"

    Additionally, runs kicked off via automation actions can use job variables, including ones rendered from Jinja templates.

    ","tags":["deployments","flow runs","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/push-work-pools/","title":"Push Work to Serverless Computing Infrastructure","text":"

    Push work pools are a special type of work pool that allows Prefect Cloud to submit flow runs for execution to serverless computing infrastructure without running a worker. Push work pools currently support execution in AWS ECS tasks, Azure Container Instances, Google Cloud Run jobs, and Modal.

    In this guide you will:

    • Create a push work pool that sends work to Amazon Elastic Container Service (AWS ECS), Azure Container Instances (ACI), Google Cloud Run, or Modal
    • Deploy a flow to that work pool
    • Execute a flow without having to run a worker or agent process to poll for flow runs

    You can automatically provision infrastructure and create your push work pool using the prefect work-pool create CLI command with the --provision-infra flag. This approach greatly simplifies the setup process.

    Let's explore automatic infrastructure provisioning for push work pools first, and then we'll cover how to manually set up your push work pool.

    ","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#automatic-infrastructure-provisioning","title":"Automatic infrastructure provisioning","text":"

    With Prefect Cloud you can provision infrastructure for use with an AWS ECS, Google Cloud Run, or ACI push work pool. Push work pools in Prefect Cloud simplify the setup and management of the infrastructure necessary to run your flows. However, setting up infrastructure on your cloud provider can still be a time-consuming process. Prefect can dramatically simplify this process by automatically provisioning the necessary infrastructure for you.

    We'll use the prefect work-pool create CLI command with the --provision-infra flag to automatically provision your serverless cloud resources and set up your Prefect workspace to use a new push pool.

    ","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#prerequisites","title":"Prerequisites","text":"

    To use automatic infrastructure provisioning, you'll need to have the relevant cloud CLI library installed and to have authenticated with your cloud provider.

    AWS ECSAzure Container InstancesGoogle Cloud RunModal

    Install the AWS CLI, authenticate with your AWS account, and set a default region.

    If you already have the AWS CLI installed, be sure to update to the latest version.

    You will need the following permissions in your authenticated AWS account:

    IAM Permissions:

    • iam:CreatePolicy
    • iam:GetPolicy
    • iam:ListPolicies
    • iam:CreateUser
    • iam:GetUser
    • iam:AttachUserPolicy
    • iam:CreateRole
    • iam:GetRole
    • iam:AttachRolePolicy
    • iam:ListRoles
    • iam:PassRole

    Amazon ECS Permissions:

    • ecs:CreateCluster
    • ecs:DescribeClusters

    Amazon EC2 Permissions:

    • ec2:CreateVpc
    • ec2:DescribeVpcs
    • ec2:CreateInternetGateway
    • ec2:AttachInternetGateway
    • ec2:CreateRouteTable
    • ec2:CreateRoute
    • ec2:CreateSecurityGroup
    • ec2:DescribeSubnets
    • ec2:CreateSubnet
    • ec2:DescribeAvailabilityZones
    • ec2:AuthorizeSecurityGroupIngress
    • ec2:AuthorizeSecurityGroupEgress

    Amazon ECR Permissions:

    • ecr:CreateRepository
    • ecr:DescribeRepositories
    • ecr:GetAuthorizationToken

    If you want to use AWS managed policies, you can use the following:

    • AmazonECS_FullAccess
    • AmazonEC2FullAccess
    • IAMFullAccess
    • AmazonEC2ContainerRegistryFullAccess

    Note that the above policies will give you all the permissions needed, but are more permissive than necessary.

    Docker is also required to build and push images to your registry. You can install Docker here.

    Install the Azure CLI and authenticate with your Azure account.

    If you already have the Azure CLI installed, be sure to update to the latest version with az upgrade.

    You will also need the following roles in your Azure subscription:

    • Contributor
    • User Access Administrator
    • Application Administrator
    • Managed Identity Operator
    • Azure Container Registry Contributor

    Docker is also required to build and push images to your registry. You can install Docker here.

    Install the gcloud CLI and authenticate with your GCP project.

    If you already have the gcloud CLI installed, be sure to update to the latest version with gcloud components update.

    You will also need the following permissions in your GCP project:

    • resourcemanager.projects.list
    • serviceusage.services.enable
    • iam.serviceAccounts.create
    • iam.serviceAccountKeys.create
    • resourcemanager.projects.setIamPolicy
    • artifactregistry.repositories.create

    Docker is also required to build and push images to your registry. You can install Docker here.

    Install Modal by running:

    pip install modal\n

    Create a Modal API token by running:

    modal token new\n

    ","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#automatically-creating-a-new-push-work-pool-and-provisioning-infrastructure","title":"Automatically creating a new push work pool and provisioning infrastructure","text":"

    Here's the command to create a new push work pool and configure the necessary infrastructure.

    AWS ECSAzure Container InstancesGoogle Cloud RunModal
    prefect work-pool create --type ecs:push --provision-infra my-ecs-pool\n

    Using the --provision-infra flag will automatically set up your default AWS account to be ready to execute flows via ECS tasks. In your AWS account, this command will create a new IAM user, IAM policy, ECS cluster that uses AWS Fargate, VPC, and ECR repository if they don't already exist. In your Prefect workspace, this command will create an AWSCredentials block for storing the generated credentials.

    Here's an abbreviated example output from running the command:

    \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Provisioning infrastructure for your work pool my-ecs-pool will require:                                          \u2502\n\u2502                                                                                                                   \u2502\n\u2502          - Creating an IAM user for managing ECS tasks: prefect-ecs-user                                          \u2502\n\u2502          - Creating and attaching an IAM policy for managing ECS tasks: prefect-ecs-policy                        \u2502\n\u2502          - Storing generated AWS credentials in a block                                                           \u2502\n\u2502          - Creating an ECS cluster for running Prefect flows: prefect-ecs-cluster                                 \u2502\n\u2502          - Creating a VPC with CIDR 172.31.0.0/16 for running ECS tasks: prefect-ecs-vpc                          \u2502\n\u2502          - Creating an ECR repository for storing Prefect images: prefect-flows                                   \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\nProceed with infrastructure provisioning? [y/n]: y\nProvisioning IAM user\nCreating IAM policy\nGenerating AWS credentials\nCreating AWS credentials block\nProvisioning ECS cluster\nProvisioning VPC\nCreating internet gateway\nSetting up subnets\nSetting up security group\nProvisioning ECR repository\nAuthenticating with ECR\nSetting default Docker build namespace\nProvisioning Infrastructure \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 100% 0:00:00\nInfrastructure successfully provisioned!\nCreated work pool 'my-ecs-pool'!\n

    Default Docker build namespace

    After infrastructure provisioning completes, you will be logged into your new ECR repository and the default Docker build namespace will be set to the URL of the registry.

    While the default namespace is set, you will not need to provide the registry URL when building images as part of your deployment process.

    To take advantage of this, you can write your deploy scripts like this:

    example_deploy_script.py
    from prefect import flow\nfrom prefect.deployments import DeploymentImage\n\n@flow(log_prints=True)            \ndef my_flow(name: str = \"world\"):                          \n    print(f\"Hello {name}! I'm a flow running in a ECS task!\") \n\n\nif __name__ == \"__main__\":\n    my_flow.deploy(\n        name=\"my-deployment\", \n        work_pool_name=\"my-work-pool\",\n        image=DeploymentImage(                                                 \n            name=\"my-repository:latest\",\n            platform=\"linux/amd64\",\n        )                                                                      \n    )       \n

    This will build an image with the tag <ecr-registry-url>/my-repository:latest and push it to the registry.

    Your image name will need to match the name of the repository created with your work pool. You can create new repositories in the ECR console.

    prefect work-pool create --type azure-container-instance:push --provision-infra my-aci-pool\n

    Using the --provision-infra flag will automatically set up your default Azure account to be ready to execute flows via Azure Container Instances. In your Azure account, this command will create a resource group, an app registration, and a service principal with the necessary permissions, generate a secret for the app registration, and create an Azure Container Registry, if they don't already exist. In your Prefect workspace, this command will create an AzureContainerInstanceCredentials block for storing the client secret value from the generated secret.

    Here's an abbreviated example output from running the command:

    \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Provisioning infrastructure for your work pool my-aci-work-pool will require:                     \u2502\n\u2502                                                                                           \u2502\n\u2502     Updates in subscription Azure subscription 1                                          \u2502\n\u2502                                                                                           \u2502\n\u2502         - Create a resource group in location eastus                                      \u2502\n\u2502         - Create an app registration in Azure AD prefect-aci-push-pool-app                \u2502\n\u2502         - Create/use a service principal for app registration                             \u2502\n\u2502         - Generate a secret for app registration                                          \u2502\n\u2502         - Create an Azure Container Registry with prefix prefect                          \u2502\n\u2502         - Create an identity prefect-acr-identity to allow access to the created registry \u2502\n\u2502         - Assign Contributor role to service account                                      \u2502\n\u2502         - Create an ACR registry for image hosting                                        \u2502\n\u2502         - Create an identity for Azure Container Instance to allow access to the registry \u2502\n\u2502                                                                                           \u2502\n\u2502     Updates in Prefect workspace                                                          \u2502\n\u2502                                                                                           \u2502\n\u2502         - Create Azure Container Instance credentials block aci-push-pool-credentials     \u2502\n\u2502                                                                                           \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\nProceed with infrastructure provisioning? [y/n]:     \nCreating resource group\nCreating app registration\nGenerating secret for app registration\nCreating ACI credentials block\nACI credentials block 'aci-push-pool-credentials' created in Prefect Cloud\nAssigning Contributor role to service account\nCreating Azure Container Registry\nCreating identity\nProvisioning infrastructure... 
\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 100% 0:00:00\nInfrastructure successfully provisioned for 'my-aci-work-pool' work pool!\nCreated work pool 'my-aci-work-pool'!\n

    Default Docker build namespace

    After infrastructure provisioning completes, you will be logged into your new Azure Container Registry and the default Docker build namespace will be set to the URL of the registry.

    While the default namespace is set, any images you build without specifying a registry or username/organization will be pushed to the registry.

    To take advantage of this functionality, you can write your deploy scripts like this:

    example_deploy_script.py
    from prefect import flow                                                       \nfrom prefect.deployments import DeploymentImage                                \n\n\n@flow(log_prints=True)                                                         \ndef my_flow(name: str = \"world\"):                                              \n    print(f\"Hello {name}! I'm a flow running on an Azure Container Instance!\") \n\n\nif __name__ == \"__main__\":                                                     \n    my_flow.deploy(                                                            \n        name=\"my-deployment\",\n        work_pool_name=\"my-work-pool\",                                                \n        image=DeploymentImage(                                                 \n            name=\"my-image:latest\",                                            \n            platform=\"linux/amd64\",                                            \n        )                                                                      \n    )       \n

    This will build an image with the tag <acr-registry-url>/my-image:latest and push it to the registry.

    prefect work-pool create --type cloud-run:push --provision-infra my-cloud-run-pool

    Using the --provision-infra flag will allow you to select a GCP project to use for your work pool and automatically configure it to be ready to execute flows via Cloud Run. In your GCP project, this command will activate the Cloud Run API, create a service account, and create a key for the service account, if they don't already exist. In your Prefect workspace, this command will create a GCPCredentials block for storing the service account key.

    Here's an abbreviated example output from running the command:

    Provisioning infrastructure for your work pool my-cloud-run-pool will require:

        Updates in GCP project central-kit-405415 in region us-central1

            - Activate the Cloud Run API for your project
            - Activate the Artifact Registry API for your project
            - Create an Artifact Registry repository named prefect-images
            - Create a service account for managing Cloud Run jobs: prefect-cloud-run
                - Service account will be granted the following roles:
                    - Service Account User
                    - Cloud Run Developer
            - Create a key for service account prefect-cloud-run

        Updates in Prefect workspace

            - Create GCP credentials block my--pool-push-pool-credentials to store the service account key

    Proceed with infrastructure provisioning? [y/n]: y
    Activating Cloud Run API
    Activating Artifact Registry API
    Creating Artifact Registry repository
    Configuring authentication to Artifact Registry
    Setting default Docker build namespace
    Creating service account
    Assigning roles to service account
    Creating service account key
    Creating GCP credentials block
    Provisioning Infrastructure 100% 0:00:00
    Infrastructure successfully provisioned!
    Created work pool 'my-cloud-run-pool'!

    Default Docker build namespace

    After infrastructure provisioning completes, you will be logged into your new Artifact Registry repository and the default Docker build namespace will be set to the URL of the repository.

    While the default namespace is set, any images you build without specifying a registry or username/organization will be pushed to the repository.

    To take advantage of this functionality, you can write your deploy scripts like this:

    example_deploy_script.py
    from prefect import flow
    from prefect.deployments import DeploymentImage


    @flow(log_prints=True)
    def my_flow(name: str = "world"):
        print(f"Hello {name}! I'm a flow running on Cloud Run!")


    if __name__ == "__main__":
        my_flow.deploy(
            name="my-deployment",
            work_pool_name="above-ground",
            image=DeploymentImage(
                name="my-image:latest",
                platform="linux/amd64",
            )
        )

    This will build an image with the tag <region>-docker.pkg.dev/<project>/<repository-name>/my-image:latest and push it to the repository.

    prefect work-pool create --type modal:push --provision-infra my-modal-pool

    Using the --provision-infra flag will trigger the creation of a ModalCredentials block in your Prefect Cloud workspace. This block will store your Modal API token, which is used to authenticate with Modal's API. By default, the token for your current Modal profile will be used for the new ModalCredentials block. If Prefect is unable to discover a Modal API token for your current profile, you will be prompted to create a new one.

    That's it! You're ready to create and schedule deployments that use your new push work pool. Remember that no worker is needed to run flows with a push work pool.

    Using existing resources with automatic infrastructure provisioning

    If you already have the necessary infrastructure set up, Prefect detects that during work pool creation and skips provisioning for those resources.

    For example, here's how prefect work-pool create my-work-pool --provision-infra looks when existing Azure resources are detected:

    Proceed with infrastructure provisioning? [y/n]: y
    Creating resource group
    Resource group 'prefect-aci-push-pool-rg' already exists in location 'eastus'.
    Creating app registration
    App registration 'prefect-aci-push-pool-app' already exists.
    Generating secret for app registration
    Provisioning infrastructure
    ACI credentials block 'bb-push-pool-credentials' created
    Assigning Contributor role to service account...
    Service principal with object ID '4be6fed7-...' already has the 'Contributor' role assigned in
    '/subscriptions/.../'
    Creating Azure Container Instance
    Container instance 'prefect-aci-push-pool-container' already exists.
    Creating Azure Container Instance credentials block
    Provisioning infrastructure... 100% 0:00:00
    Infrastructure successfully provisioned!
    Created work pool 'my-work-pool'!

    Provisioning infrastructure for an existing push work pool

    If you already have a push work pool set up, but haven't configured the necessary infrastructure, you can use the provision-infra sub-command to provision the infrastructure for that work pool. For example, you can run the following command if you have a work pool named \"my-work-pool\".

    prefect work-pool provision-infra my-work-pool

    Prefect will create the necessary infrastructure for the my-work-pool work pool and provide you with a summary of the changes to be made:

    Provisioning infrastructure for your work pool my-work-pool will require:

        Updates in subscription Azure subscription 1

            - Create a resource group in location eastus
            - Create an app registration in Azure AD prefect-aci-push-pool-app
            - Create/use a service principal for app registration
            - Generate a secret for app registration
            - Assign Contributor role to service account
            - Create Azure Container Instance 'aci-push-pool-container' in resource group prefect-aci-push-pool-rg

        Updates in Prefect workspace

            - Create Azure Container Instance credentials block aci-push-pool-credentials

    Proceed with infrastructure provisioning? [y/n]: y

    This command can speed up your infrastructure setup process.

    As with the examples above, you will need to have the related cloud CLI library installed and be authenticated with your cloud provider.

    Manual infrastructure provisioning

    If you prefer to set up your infrastructure manually, don't include the --provision-infra flag in the CLI command. In the examples below, we'll create a push work pool via the Prefect Cloud UI.

    AWS ECS | Azure Container Instances | Google Cloud Run | Modal

    To push work to ECS, AWS credentials are required.

    Create a user and attach the AmazonECS_FullAccess policy.

    From that user's page, create credentials and store them somewhere safe for use in the next section.

    To push work to Azure, an Azure subscription, resource group and tenant secret are required.

    Create Subscription and Resource Group

    1. In the Azure portal, create a subscription.
    2. Create a resource group within your subscription.

    Create App Registration

    1. In the Azure portal, create an app registration.
    2. In the app registration, create a client secret. Copy the value and store it somewhere safe.

    Add App Registration to Resource Group

    1. Navigate to the resource group you created earlier.
    2. Choose the \"Access control (IAM)\" blade in the left-hand side menu. Click \"+ Add\" button at the top, then \"Add role assignment\".
    3. Go to the \"Privileged administrator roles\" tab, click on \"Contributor\", then click \"Next\" at the bottom of the page.
    4. Click on \"+ Select members\". Type the name of the app registration (otherwise it may not autopopulate) and click to add it. Then hit \"Select\" and click \"Next\". The default permissions associated with a role like \"Contributor\" might not always be sufficient for all operations related to Azure Container Instances (ACI). The specific permissions required can depend on the operations you need to perform (like creating, running, and deleting ACI container groups) and your organization's security policies. In some cases, additional permissions or custom roles might be necessary.
    5. Click \"Review + assign\" to finish.

    A GCP service account and an API key are required to push work to Cloud Run.

    Create a service account by navigating to the service accounts page and clicking Create. Name and describe your service account, and click continue to configure permissions.

    The service account must have two roles at a minimum: Cloud Run Developer and Service Account User.

    Once the service account is created, navigate to its Keys page to add an API key. Create a JSON type key, download it, and store it somewhere safe for use in the next section.

    A Modal API token is required to push work to Modal.

    Create a Modal API token by navigating to Settings in the Modal UI. In the API Tokens section of the Settings page, click New Token.

    Copy the token ID and token secret and store them somewhere safe for use in the next section.

    Work pool configuration

    Our push work pool will store information about what type of infrastructure our flow will run on, what default values to provide to compute jobs, and other important execution environment parameters. Because our push work pool needs to integrate securely with your serverless infrastructure, we need to start by storing our credentials in Prefect Cloud, which we'll do by making a block.

    Creating a Credentials block

    AWS ECS | Azure Container Instances | Google Cloud Run | Modal

    Navigate to the blocks page, click create new block, and select AWS Credentials for the type.

    For use in a push work pool, region, access key, and access key secret must be set.

    Provide any other optional information and create your block.
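
    If you prefer to create this block in code rather than in the UI, a minimal sketch, assuming the prefect-aws library is installed (the key values and block name are placeholders):

    from prefect_aws import AwsCredentials

    # Store the IAM user's access keys and region for the push work pool to use
    AwsCredentials(
        aws_access_key_id="PLACEHOLDER",
        aws_secret_access_key="PLACEHOLDER",
        region_name="us-east-1",
    ).save("my-aws-credentials")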

    Navigate to the blocks page and click the \"+\" at the top to create a new block. Find the Azure Container Instance Credentials block and click \"Add +\".

    Locate the client ID and tenant ID on your app registration and use the client secret you saved earlier. Be sure to use the value of the secret, not the secret ID!

    Provide any other optional information and click \"Create\".
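
    The same block can also be created in code. A minimal sketch, assuming the prefect-azure library is installed and exposes the AzureContainerInstanceCredentials block with these field names (the IDs, secret value, and block name are placeholders):

    from prefect_azure.credentials import AzureContainerInstanceCredentials

    # Store the app registration's client ID, tenant ID, and client secret value
    AzureContainerInstanceCredentials(
        client_id="PLACEHOLDER",
        tenant_id="PLACEHOLDER",
        client_secret="PLACEHOLDER",
    ).save("my-aci-credentials")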

    Navigate to the blocks page, click create new block, and select GCP Credentials for the type.

    For use in a push work pool, this block must have the contents of the JSON key stored in the Service Account Info field.
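
    If you create the block in code rather than pasting the key into the UI, a minimal sketch, assuming the prefect-gcp library is installed (the key file path and block name are placeholders):

    import json

    from prefect_gcp import GcpCredentials

    # Load the downloaded service account key and store its contents in the block
    with open("service-account-key.json") as f:
        GcpCredentials(service_account_info=json.load(f)).save("my-gcp-credentials")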

    Provide any other optional information and create your block.

    Navigate to the blocks page, click create new block, and select Modal Credentials for the type.

    For use in a push work pool, this block must have the token ID and token secret stored in the Token ID and Token Secret fields, respectively.

    Creating a push work pool

    Now navigate to the work pools page. Click Create to start configuring your push work pool by selecting a push option in the infrastructure type step.

    AWS ECS | Azure Container Instances | Google Cloud Run | Modal

    Each step has several optional fields that are detailed in the work pools documentation. Select the block you created under the AWS Credentials field. This will allow Prefect Cloud to securely interact with your ECS cluster.

    Fill in the subscription ID and resource group name from the resource group you created. Add the Azure Container Instance Credentials block you created in the step above.

    Each step has several optional fields that are detailed in the work pools documentation. Select the block you created under the GCP Credentials field. This will allow Prefect Cloud to securely interact with your GCP project.

    Each step has several optional fields that are detailed in the work pools documentation. Select the block you created under the Modal Credentials field. This will allow Prefect Cloud to securely interact with your Modal account.

    Create your pool and you are ready to deploy flows to your Push work pool.

    Deployment

    Deployment details are described in the deployments concept section. Your deployment needs to be configured to send flow runs to our push work pool. For example, if you create a deployment through the interactive command line experience, choose the work pool you just created. If you are deploying an existing prefect.yaml file, the deployment would contain:

      work_pool:
        name: my-push-pool

    Deploying your flow to the my-push-pool work pool ensures that runs ready for execution are submitted immediately, without a worker needing to poll for them.
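
    If you prefer to configure the deployment in Python instead of a prefect.yaml file, a minimal sketch that targets the same pool (the flow, image name, and deployment name are placeholders):

    from prefect import flow
    from prefect.deployments import DeploymentImage


    @flow(log_prints=True)
    def my_flow():
        print("Hello from my push work pool!")


    if __name__ == "__main__":
        # Sends runs of this deployment to the my-push-pool push work pool
        my_flow.deploy(
            name="my-deployment",
            work_pool_name="my-push-pool",
            image=DeploymentImage(name="my-image:latest", platform="linux/amd64"),
        )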

    Serverless infrastructure may require a certain image architecture

    Note that serverless infrastructure may assume a certain Docker image architecture; for example, Google Cloud Run will fail to run images built with linux/arm64 architecture. If using Prefect to build your image, you can change the image architecture through the platform keyword (e.g., platform=\"linux/amd64\").

    Putting it all together

    With your deployment created, navigate to its detail page and create a new flow run. You'll see the flow start running without ever having to poll the work pool, because Prefect Cloud securely connected to your serverless infrastructure, created a job, ran the job, and began reporting on its execution.

    Next steps

    Learn more about workers and work pools in the Prefect concept documentation.

    Learn about installing dependencies at runtime or baking them into your Docker image in the Deploying Flows to Work Pools and Workers guide.

    Run Deployments on Serverless Infrastructure with Prefect Workers

    Prefect provides hybrid work pools for workers to run flows on the serverless platforms of major cloud providers. The following options are available:

    • AWS Elastic Container Service (ECS)
    • Azure Container Instances (ACI)
    • Google Cloud Run
    • Google Cloud Run V2
    • Google Vertex AI

    In this guide you will:

    • Create a work pool that sends work to your chosen serverless infrastructure
    • Deploy a flow to that work pool
    • Start a worker in your serverless cloud provider that will poll its matched work pool for scheduled runs
    • Schedule a deployment run that a worker will pick up from the work pool and run on your serverless infrastructure

    Push work pools don't require a worker

    Options for push work pool versions of AWS ECS, Azure Container Instances, and Google Cloud Run that do not require a worker are available with Prefect Cloud. These push work pool options require connection configuration information to be stored on Prefect Cloud. Read more in the Serverless Push Work Pool Guide.

    This is a brief overview of the options to run workflows on serverless infrastructure. For in-depth guides, see the Prefect integration libraries:

    • AWS ECS guide in the prefect-aws docs
    • Azure Container Instances guide
    • Google Cloud Run guide in the prefect-gcp docs.
    • For Google Vertex AI, follow the Cloud Run guide, substituting Google Vertex AI where Google Cloud Run is mentioned.

    Choosing between Google Cloud Run and Google Vertex AI

    Google Vertex AI is well-suited for machine learning model training applications in which GPUs or TPUs and high resource levels are desired.

    Steps
    1. Make sure you have a user or service account on your chosen cloud provider with the necessary permissions to run serverless jobs
    2. Create the appropriate serverless work pool that uses a worker in the Prefect UI
    3. Create a deployment that references the work pool
    4. Start a worker in your chosen serverless cloud provider infrastructure
    5. Run the deployment

    Next steps

    Options for push versions on AWS ECS, Azure Container Instances, and Google Cloud Run work pools that do not require a worker are available with Prefect Cloud. Read more in the Serverless Push Work Pool Guide.

    Learn more about workers and work pools in the Prefect concept documentation.

    Learn about installing dependencies at runtime or baking them into your Docker image in the Deploying Flows to Work Pools and Workers guide.

    Where to Store Your Flow Code

    When a flow runs, the execution environment needs access to its code. Flow code is not stored in a Prefect server database instance or Prefect Cloud. When deploying a flow, you have several flow code storage options.

    This guide discusses storage options with a focus on deployments created with the interactive CLI experience or a prefect.yaml file. If you'd like to create your deployments using Python code, see the discussion of flow code storage on the .deploy tab of the Deploying Flows to Work Pools and Workers guide.

    Option 1: Local storage

    Local flow code storage is often used with a Local Subprocess work pool for initial experimentation.

    To create a deployment with local storage and a Local Subprocess work pool, do the following:

    1. Run prefect deploy from the root of the directory containing your flow code.
    2. Select that you want to create a new deployment, select the flow code entrypoint, and name your deployment.
    3. Select a process work pool.

    You are then shown the location that your flow code will be fetched from when a flow is run. For example:

    Your Prefect workers will attempt to load your flow from:
    /my-path/my-flow-file.py. To see more options for managing your flow's code, run:

        $ prefect init

    When deploying a flow to production, you most likely want code to run with infrastructure-specific configuration. The flow code storage options shown below are recommended for production deployments.

    Option 2: Git-based storage

    Git-based version control platforms are popular locations for code storage. They provide redundancy, version control, and easier collaboration.

    GitHub is the most popular cloud-based repository hosting provider. GitLab and Bitbucket are other popular options. Prefect supports each of these platforms.

    Creating a deployment with git-based storage

    Run prefect deploy from the root directory of the git repository and create a new deployment. You will see a series of prompts. Select that you want to create a new deployment, select the flow code entrypoint, and name your deployment.

    Prefect detects that you are in a git repository and asks if you want to store your flow code in a git repository. Select \"y\" and you will be prompted to confirm the URL of your git repository and the branch name, as in the example below:

    ? Your Prefect workers will need access to this flow's code in order to run it.
    Would you like your workers to pull your flow code from its remote repository when running this flow? [y/n] (y):
    ? Is https://github.com/my_username/my_repo.git the correct URL to pull your flow code from? [y/n] (y):
    ? Is main the correct branch to pull your flow code from? [y/n] (y):
    ? Is this a private repository? [y/n]: y

    In this example, the git repository is hosted on GitHub. If you are using Bitbucket or GitLab, the URL will match your provider. If the repository is public, enter \"n\" and you are on your way.

    If the repository is private, you can enter a token to access your private repository. This token will be saved in an encrypted Prefect Secret block.

    ? Please enter a token that can be used to access your private repository. This token will be saved as a Secret block via the Prefect API: "123_abc_this_is_my_token"

    Verify that you have a new Secret block in your active workspace named in the format \"deployment-my-deployment-my-flow-name-repo-token\".

    Creating access tokens differs for each provider.

    GitHub | Bitbucket | GitLab

    We recommend using HTTPS with fine-grained Personal Access Tokens so that you can limit access by repository. See the GitHub docs for Personal Access Tokens (PATs).

    Under Your Profile->Developer Settings->Personal access tokens->Fine-grained token choose Generate New Token and fill in the required fields. Under Repository access choose Only select repositories and grant the token permissions for Contents.

    We recommend using HTTPS with Repository, Project, or Workspace Access Tokens.

    You can create a Repository Access Token with Scopes->Repositories->Read.

    Bitbucket requires you to prepend the token string with x-token-auth:, so the full string looks like x-token-auth:abc_123_this_is_my_token.

    We recommend using HTTPS with Project Access Tokens.

    In your repository in the GitLab UI, select Settings->Repository->Project Access Tokens and check read_repository under Select scopes.

    If you want to configure a Secret block ahead of time, create the block via code or the Prefect UI and reference it in your prefect.yaml file.

    pull:
        - prefect.deployments.steps.git_clone:
            repository: https://bitbucket.org/org/my-private-repo.git
            access_token: "{{ prefect.blocks.secret.my-block-name }}"
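
    To configure that Secret block ahead of time in code, a minimal sketch (the token value is a placeholder; the block name must match the one referenced in prefect.yaml):

    from prefect.blocks.system import Secret

    # Save the access token so prefect.blocks.secret.my-block-name resolves at pull time
    Secret(value="PLACEHOLDER-TOKEN").save("my-block-name")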

    Alternatively, you can create a Credentials block ahead of time and reference it in the prefect.yaml pull step.

    GitHub | Bitbucket | GitLab
    1. Install the Prefect-Github library with pip install -U prefect-github
    2. Register the blocks in that library to make them available on the server with prefect block register -m prefect_github.
    3. Create a GitHub Credentials block via code or the Prefect UI and reference it as shown above.
    pull:
        - prefect.deployments.steps.git_clone:
            repository: https://github.com/discdiver/my-private-repo.git
            credentials: "{{ prefect.blocks.github-credentials.my-block-name }}"
    1. Install the relevant library with pip install -U prefect-bitbucket
    2. Register the blocks in that library with prefect block register -m prefect_bitbucket
    3. Create a Bitbucket credentials block via code or the Prefect UI and reference it as shown above.
    pull:
        - prefect.deployments.steps.git_clone:
            repository: https://bitbucket.org/org/my-private-repo.git
            credentials: "{{ prefect.blocks.bitbucket-credentials.my-block-name }}"
    1. Install the relevant library with pip install -U prefect-gitlab
    2. Register the blocks in that library with prefect block register -m prefect_gitlab
    3. Create a GitLab credentials block via code or the Prefect UI and reference it as shown above.
    pull:
        - prefect.deployments.steps.git_clone:
            repository: https://gitlab.com/org/my-private-repo.git
            credentials: "{{ prefect.blocks.gitlab-credentials.my-block-name }}"

    Push your code

    When you make a change to your code, Prefect does not push your code to your git-based version control platform. You need to push your code manually or as part of your CI/CD pipeline. This design decision is an intentional one to avoid confusion about the git history and push process.

    Option 3: Docker-based storage

    Another popular way to store your flow code is to include it in a Docker image. The following work pools use Docker containers, so the flow code can be directly baked into the image:

    • Docker
    • Kubernetes
    • Serverless cloud-based options
      • AWS Elastic Container Service
      • Azure Container Instances
      • Google Cloud Run
    • Push-based serverless cloud-based options (no worker required)

      • AWS Elastic Container Service - Push
      • Azure Container Instances - Push
      • Google Cloud Run - Push
    • Run prefect init in the root of your repository, choose docker, and answer the prompts to create a prefect.yaml file with a build step that creates a Docker image with the flow code built in. See the Workers and Work Pools page of the tutorial for more info.

    • Run prefect deploy from the root of your repository to create a deployment.
    • When a deployment run is executed, the worker pulls the Docker image and spins up a container.
    • The flow code baked into the image will run inside the container.

    CI/CD may not require push or pull steps

    You don't need push or pull steps in the prefect.yaml file if using CI/CD to build a Docker image outside of Prefect. Instead, the work pool can reference the image directly.
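
    For example, a minimal sketch of a Python deployment that references an image built elsewhere, assuming the deploy method's build and push options (the image tag, pool name, and deployment name are placeholders):

    from prefect import flow


    @flow(log_prints=True)
    def my_flow():
        print("Hello from a prebuilt image!")


    if __name__ == "__main__":
        # Skip building and pushing; the CI/CD pipeline already published this image
        my_flow.deploy(
            name="my-deployment",
            work_pool_name="my-docker-pool",
            image="my-registry.example.com/my-image:latest",
            build=False,
            push=False,
        )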

    Option 4: Cloud-provider storage

    You can store your code in an AWS S3 bucket, Azure Blob Storage container, or GCP GCS bucket and specify the destination directly in the push and pull steps of your prefect.yaml file.

    To create a templated prefect.yaml file run prefect init and select the recipe for the applicable cloud-provider storage. Below are the recipe options and the relevant portions of the prefect.yaml file.

    AWS S3 bucket | Azure Blob Storage container | GCP GCS bucket

    Choose s3 as the recipe and enter the bucket name when prompted.

    # push section allows you to manage if and how this project is uploaded to remote locations
    push:
    - prefect_aws.deployments.steps.push_to_s3:
        id: push_code
        requires: prefect-aws>=0.3.4
        bucket: my-bucket
        folder: my-folder
        credentials: "{{ prefect.blocks.aws-credentials.my-credentials-block }}" # if private

    # pull section allows you to provide instructions for cloning this project in remote locations
    pull:
    - prefect_aws.deployments.steps.pull_from_s3:
        id: pull_code
        requires: prefect-aws>=0.3.4
        bucket: '{{ push_code.bucket }}'
        folder: '{{ push_code.folder }}'
        credentials: "{{ prefect.blocks.aws-credentials.my-credentials-block }}" # if private

    If the bucket requires authentication to access it, you can do the following:

    1. Install the Prefect-AWS library with pip install -U prefect-aws
    2. Register the blocks in Prefect-AWS with prefect block register -m prefect_aws
    3. Create a user with a role with read and write permissions to access the bucket. If using the UI, create an access key pair with IAM->Users->Security credentials->Access keys->Create access key. Choose Use case->Other and then copy the Access key and Secret access key values.
    4. Create an AWS Credentials block via code or the Prefect UI. In addition to the block name, most users will fill in the AWS Access Key ID and AWS Access Key Secret fields.
    5. Reference the block as shown in the push and pull steps above.

    Choose azure as the recipe and enter the container name when prompted.

    # push section allows you to manage if and how this project is uploaded to remote locations
    push:
    - prefect_azure.deployments.steps.push_to_azure_blob_storage:
        id: push_code
        requires: prefect-azure>=0.2.8
        container: my-prefect-azure-container
        folder: my-folder
        credentials: "{{ prefect.blocks.azure-blob-storage-credentials.my-credentials-block }}" # if private

    # pull section allows you to provide instructions for cloning this project in remote locations
    pull:
    - prefect_azure.deployments.steps.pull_from_azure_blob_storage:
        id: pull_code
        requires: prefect-azure>=0.2.8
        container: '{{ push_code.container }}'
        folder: '{{ push_code.folder }}'
        credentials: "{{ prefect.blocks.azure-blob-storage-credentials.my-credentials-block }}" # if private

    If the blob requires authentication to access it, you can do the following:

    1. Install the Prefect-Azure library with pip install -U prefect-azure
    2. Register the blocks in Prefect-Azure with prefect block register -m prefect_azure
    3. Create an access key for a role with sufficient (read and write) permissions to access the blob. A connection string that will contain all needed information can be created in the UI under Storage Account->Access keys.
    4. Create an Azure Blob Storage Credentials block via code (a code sketch follows this list) or the Prefect UI. Enter a name for the block and paste the connection string into the Connection String field.
    5. Reference the block as shown in the push and pull steps above.
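
    As a sketch of step 4 in code, assuming the prefect-azure library is installed (the connection string value is a placeholder; the block name matches the one referenced in the push and pull steps):

    from prefect_azure.credentials import AzureBlobStorageCredentials

    # Store the storage account connection string for the push and pull steps to use
    AzureBlobStorageCredentials(
        connection_string="PLACEHOLDER-CONNECTION-STRING",
    ).save("my-credentials-block")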

    Choose gcs as the recipe and enter the bucket name when prompted.

    # push section allows you to manage if and how this project is uploaded to remote locations
    push:
    - prefect_gcp.deployment.steps.push_to_gcs:
        id: push_code
        requires: prefect-gcp>=0.4.3
        bucket: my-bucket
        folder: my-folder
        credentials: "{{ prefect.blocks.gcp-credentials.my-credentials-block }}" # if private

    # pull section allows you to provide instructions for cloning this project in remote locations
    pull:
    - prefect_gcp.deployment.steps.pull_from_gcs:
        id: pull_code
        requires: prefect-gcp>=0.4.3
        bucket: '{{ push_code.bucket }}'
        folder: '{{ push_code.folder }}'
        credentials: "{{ prefect.blocks.gcp-credentials.my-credentials-block }}" # if private

    If the bucket requires authentication to access it, you can do the following:

    1. Install the Prefect-GCP library with pip install -U prefect-gcp
    2. Register the blocks in Prefect-GCP with prefect block register -m prefect_gcp
    3. Create a service account in GCP for a role with read and write permissions to access the bucket contents. If using the GCP console, go to IAM & Admin->Service accounts->Create service account. After choosing a role with the required permissions, see your service account and click on the three dot menu in the Actions column. Select Manage Keys->ADD KEY->Create new key->JSON. Download the JSON file.
    4. Create a GCP Credentials block via code or the Prefect UI. Enter a name for the block and paste the entire contents of the JSON key file into the Service Account Info field.
    5. Reference the block as shown in the push and pull steps above.

    Another option for authentication is for the worker to have access to the storage location at runtime via SSH keys.

    Alternatively, you can inject environment variables into your deployment like this example that uses an environment variable named CUSTOM_FOLDER:

     push:
        - prefect_gcp.deployment.steps.push_to_gcs:
            id: push_code
            requires: prefect-gcp>=0.4.3
            bucket: my-bucket
            folder: '{{ $CUSTOM_FOLDER }}'

    Including and excluding files from storage

    By default, Prefect uploads all files in the current folder to the configured storage location when you create a deployment.

    When using a git repository, Docker image, or cloud-provider storage location, you may want to exclude certain files or directories.

    • If you are familiar with git you are likely familiar with the .gitignore file.
    • If you are familiar with Docker you are likely familiar with the .dockerignore file.
    • For cloud-provider storage, the .prefectignore file serves the same purpose and follows a similar syntax as those files. An entry of *.pyc, for example, will exclude all .pyc files from upload.

    Other code storage creation methods

    In earlier versions of Prefect, storage blocks were the recommended way to store flow code. Storage blocks are still supported, but not recommended.

    As shown above, repositories can be referenced directly through interactive prompts with prefect deploy or in a prefect.yaml. When authentication is needed, Secret or Credential blocks can be referenced, and in some cases created automatically through interactive deployment creation prompts.

    Next steps

    You've seen options for where to store your flow code.

    We recommend using Docker-based storage or git-based storage for your production deployments.

    Check out more guides to reach your goals with Prefect.

    Integrations

    You can install the following integrations of pre-built tasks, flows, blocks and more as PyPI packages:

    Alert

    Maintained by Khuyen Tran

    AWS

    Maintained by Prefect

    Azure

    Maintained by Prefect

    Bitbucket

    Maintained by Prefect

    Coiled

    Maintained by Coiled

    CubeJS

    Maintained by Alessandro Lollo

    Dask

    Maintained by Prefect

    Databricks

    Maintained by Prefect

    dbt

    Maintained by Prefect

    Docker

    Maintained by Prefect

    Earthdata

    Maintained by Giorgio Basile

    Email

    Maintained by Prefect

    Fivetran

    Maintained by Fivetran

    Fugue

    Maintained by The Fugue Development Team

    GCP

    Maintained by Prefect

    GitHub

    Maintained by Prefect

    GitLab

    Maintained by Prefect

    Google Sheets

    Maintained by Stefano Cascavilla

    HashiCorp Vault

    Maintained by Pavel Chekin

    Kubernetes

    Maintained by Prefect

    KV

    Maintained by Zanie Blue

    MetricFlow

    Maintained by Alessandro Lollo

    Planetary Computer

    Maintained by Giorgio Basile

    Ray

    Maintained by Prefect

    Shell

    Maintained by Prefect

    Sifflet

    Maintained by Sifflet and Alessandro Lollo

    Slack

    Maintained by Prefect

    Snowflake

    Maintained by Prefect

    Soda Cloud

    Maintained by Alessandro Lollo

    Soda Core

    Maintained by Soda and Alessandro Lollo

    Spark on Kubernetes

    Maintained by Manoj Babu Katragadda

    SQLAlchemy

    Maintained by Prefect

    Stitch

    Maintained by Alessandro Lollo

    Transform

    Maintained by Alessandro Lollo

    Contribute

    We welcome contributors! You can help contribute blocks and integrations by following these steps.

    Contributing Blocks

    Building your own custom block is simple!

    1. Subclass from Block.
    2. Add a description alongside an Attributes and Example section in the docstring.
    3. Set a _logo_url to point to a relevant image.
    4. Create the pydantic.Fields of the block with a type annotation, default or default_factory, and a short description about the field.
    5. Define the methods of the block.

    For example, this is how the Secret block is implemented:

    from pydantic import Field, SecretStr
    from prefect.blocks.core import Block

    class Secret(Block):
        """
        A block that represents a secret value. The value stored in this block will be obfuscated when
        this block is logged or shown in the UI.

        Attributes:
            value: A string value that should be kept secret.

        Example:
            ```python
            from prefect.blocks.system import Secret
            secret_block = Secret.load("BLOCK_NAME")

            # Access the stored secret
            secret_block.get()
            ```
        """

        _logo_url = "https://example.com/logo.png"

        value: SecretStr = Field(
            default=..., description="A string value that should be kept secret."
        )  # ... indicates it's a required field

        def get(self):
            return self.value.get_secret_value()

    To view in Prefect Cloud or the Prefect server UI, register the block.

    Contributing Integrations

    Anyone can create and share a Prefect Integration and we encourage anyone interested in creating an integration to do so!

    Generate a project

    To help you get started with your integration, we've created a template that gives you the tools you need to create and publish your integration.

    Use the Prefect Integration template to get started creating an integration with a bootstrapped project!

    List a project in the Integrations Catalog

    To list your integration in the Prefect Integrations Catalog, submit a PR to the Prefect repository adding a file to the docs/integrations/catalog directory with details about your integration. Please use TEMPLATE.yaml in that folder as a guide.

    Contribute fixes or enhancements to Integrations

    If you'd like to help contribute to fix an issue or add a feature to any of our Integrations, please propose changes through a pull request from a fork of the repository.

    1. Fork the repository
    2. Clone the forked repository
    3. Install the repository and its dependencies:
      pip install -e ".[dev]"
    4. Make desired changes
    5. Add tests
    6. Insert an entry to the Integration's CHANGELOG.md
    7. Install pre-commit to perform quality checks prior to commit:
      pre-commit install
    8. git commit, git push, and create a pull request

    Using Integrations

    Installing an Integration

    Install the Integration via pip.

    For example, to use prefect-aws:

    pip install prefect-aws

    Registering Blocks from an Integration

    Once the Prefect Integration is installed, register the blocks within the integration to view them in the Prefect Cloud UI:

    For example, to register the blocks available in prefect-aws:

    prefect block register -m prefect_aws

    Updating blocks from an integration

    If you install an updated Prefect integration that adds fields to a block type, you will need to re-register that block type.

    Loading a block in code

    To use the load method on a Block, you must already have a block document saved either through code or through the Prefect UI.
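
    A minimal sketch of saving a block document and then loading it, using the built-in Secret block as an example (the value and block name are placeholders):

    from prefect.blocks.system import Secret

    # Save a block document first...
    Secret(value="PLACEHOLDER").save("my-secret", overwrite=True)

    # ...then it can be loaded by name elsewhere
    secret = Secret.load("my-secret")
    print(secret.get())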

    Learn more about Blocks here!

    Using Tasks and Flows from an Integration

    Integrations also contain pre-built tasks and flows that can be imported and called within your code.

    As an example, to read a secret from AWS Secrets Manager with the read_secret task:

    from prefect import flow
    from prefect_aws import AwsCredentials
    from prefect_aws.secrets_manager import read_secret

    @flow
    def connect_to_database():
        aws_credentials = AwsCredentials.load("MY_BLOCK_NAME")
        secret_value = read_secret(
            secret_name="db_password",
            aws_credentials=aws_credentials
        )

        # Use secret_value to connect to a database

    Customizing Tasks and Flows from an Integration

    To customize the settings of a task or flow pre-configured in a collection, use with_options:

    from prefect import flow
    from prefect_dbt.cloud import DbtCloudCredentials
    from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run_and_wait_for_completion

    custom_run_dbt_cloud_job = trigger_dbt_cloud_job_run_and_wait_for_completion.with_options(
        name="Run My DBT Cloud Job",
        retries=2,
        retry_delay_seconds=10
    )

    @flow
    def run_dbt_job_flow():
        run_result = custom_run_dbt_cloud_job(
            dbt_cloud_credentials=DbtCloudCredentials.load("my-dbt-cloud-credentials"),
            job_id=1
        )

    run_dbt_job_flow()

    Recipes and Tutorials

    To learn more about how to use Integrations, check out Prefect recipes on GitHub. These recipes provide examples of how Integrations can be used in various scenarios.

    prefect-aws

    Welcome

    prefect-aws makes it easy to leverage the capabilities of AWS in your workflows.

    Getting started

    Installation

    Prefect requires Python 3.8 or newer.

    We recommend using a Python virtual environment manager such as pipenv, conda, or virtualenv.

    Install prefect-aws

    pip install prefect-aws

    Registering blocks

    Register blocks in this module to make them available for use.

    prefect block register -m prefect_aws

    A list of available blocks in prefect-aws and their setup instructions can be found here.

    Saving credentials to a block

    You will need an AWS account and credentials to use prefect-aws.

    1. Refer to the AWS Configuration documentation on how to retrieve your access key ID and secret access key
    2. Copy the access key ID and secret access key
    3. Create an AwsCredentials block in the Prefect UI or use a Python script like the one below, replacing the placeholders with your credential information and desired block name:
    from prefect_aws import AwsCredentials
    AwsCredentials(
        aws_access_key_id="PLACEHOLDER",
        aws_secret_access_key="PLACEHOLDER",
        aws_session_token=None,  # replace this with token if necessary
        region_name="us-east-2"
    ).save("BLOCK-NAME-PLACEHOLDER")

    Congrats! You can now load the saved block to use your credentials in your Python code:

    from prefect_aws import AwsCredentials
    AwsCredentials.load("BLOCK-NAME-PLACEHOLDER")

    Using Prefect with AWS S3

    prefect_aws allows you to read and write objects with AWS S3 within your Prefect flows.

    The provided code snippet shows how you can use prefect_aws to upload a file to an AWS S3 bucket and download the same file under a different file name.

    Note that the following code assumes the bucket already exists.

    from pathlib import Path
    from prefect import flow
    from prefect_aws import AwsCredentials, S3Bucket

    @flow
    def s3_flow():
        # create a dummy file to upload
        file_path = Path("test-example.txt")
        file_path.write_text("Hello, Prefect!")

        aws_credentials = AwsCredentials.load("BLOCK-NAME-PLACEHOLDER")
        s3_bucket = S3Bucket(
            bucket_name="BUCKET-NAME-PLACEHOLDER",
            credentials=aws_credentials
        )

        s3_bucket_path = s3_bucket.upload_from_path(file_path)
        downloaded_file_path = s3_bucket.download_object_to_path(
            s3_bucket_path, "downloaded-test-example.txt"
        )
        return downloaded_file_path.read_text()

    s3_flow()

    Using Prefect with AWS Secrets Manager

    prefect_aws allows you to read and write secrets with AWS Secrets Manager within your Prefect flows.

    The following code snippet shows how to use prefect_aws to write a secret to AWS Secrets Manager, read the secret data, delete the secret, and finally return the secret data.

    from prefect import flow\nfrom prefect_aws import AwsCredentials, AwsSecret\n\n@flow\ndef secrets_manager_flow():\n    aws_credentials = AwsCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n    aws_secret = AwsSecret(secret_name=\"test-example\", aws_credentials=aws_credentials)\n    aws_secret.write_secret(secret_data=b\"Hello, Prefect!\")\n    secret_data = aws_secret.read_secret()\n    aws_secret.delete_secret()\n    return secret_data\n\nsecrets_manager_flow()\n
    "},{"location":"integrations/prefect-aws/#using-prefect-with-aws-ecs","title":"Using Prefect with AWS ECS","text":"

    prefect_aws allows you to use AWS ECS as infrastructure for your deployments. Using ECS for scheduled flow runs enables dynamic provisioning of container infrastructure and unlocks greater scalability. This setup gives you all of the observability and orchestration benefits of Prefect, along with the scalability of ECS.

    See the ECS guide for a full walkthrough.
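
    As a minimal sketch (the work pool name, worker commands, image, and flow below are illustrative placeholders, not taken from the guide), deploying a flow to an ECS work pool might look like this:

    # Assumes an ECS work pool and worker already exist, e.g.:\n#   prefect work-pool create my-ecs-pool --type ecs\n#   prefect worker start --pool my-ecs-pool\nfrom prefect import flow\n\n@flow\ndef my_ecs_flow():\n    print(\"Hello from ECS!\")\n\nif __name__ == \"__main__\":\n    my_ecs_flow.deploy(\n        name=\"my-ecs-deployment\",\n        work_pool_name=\"my-ecs-pool\",\n        image=\"my-registry/my-image:tag\",  # placeholder image\n    )\n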

    "},{"location":"integrations/prefect-aws/#resources","title":"Resources","text":"

    Refer to the API documentation in the sidebar to explore all the capabilities of prefect-aws!

    For more tips on how to use blocks and tasks in Prefect integration libraries, check out the docs!

    For more information about how to use Prefect, please refer to the Prefect documentation.

    If you encounter any bugs while using prefect-aws, feel free to open an issue in the prefect repository.

    If you have any questions or issues while using prefect-aws, you can find help in the Prefect Slack community.

    "},{"location":"integrations/prefect-aws/batch/","title":"Batch","text":""},{"location":"integrations/prefect-aws/batch/#prefect_aws.batch","title":"prefect_aws.batch","text":"

    Tasks for interacting with AWS Batch

    "},{"location":"integrations/prefect-aws/batch/#prefect_aws.batch.batch_submit","title":"batch_submit async","text":"

    Submit a job to the AWS Batch job service.

    Parameters:

    Name Type Description Default job_name str

    The AWS batch job name.

    required job_queue str

    Name of the AWS batch job queue.

    required job_definition str

    The AWS batch job definition.

    required aws_credentials AwsCredentials

    Credentials to use for authentication with AWS.

    required **batch_kwargs Optional[Dict[str, Any]]

    Additional keyword arguments to pass to the boto3 submit_job function. See the documentation for submit_job for more details.

    {}

    Returns:

    Type Description str

    The id corresponding to the job.

    Example

    Submits a job to batch.

    from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.batch import batch_submit\n\n\n@flow\ndef example_batch_submit_flow():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"acccess_key_id\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n    job_id = batch_submit(\n        \"job_name\",\n        \"job_queue\",\n        \"job_definition\",\n        aws_credentials\n    )\n    return job_id\n\nexample_batch_submit_flow()\n
    Source code in prefect_aws/batch.py
    @task\nasync def batch_submit(\n    job_name: str,\n    job_queue: str,\n    job_definition: str,\n    aws_credentials: AwsCredentials,\n    **batch_kwargs: Optional[Dict[str, Any]],\n) -> str:\n    \"\"\"\n    Submit a job to the AWS Batch job service.\n\n    Args:\n        job_name: The AWS batch job name.\n        job_queue: Name of the AWS batch job queue.\n        job_definition: The AWS batch job definition.\n        aws_credentials: Credentials to use for authentication with AWS.\n        **batch_kwargs: Additional keyword arguments to pass to the boto3\n            `submit_job` function. See the documentation for\n            [submit_job](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/batch.html#Batch.Client.submit_job)\n            for more details.\n\n    Returns:\n        The id corresponding to the job.\n\n    Example:\n        Submits a job to batch.\n\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.batch import batch_submit\n\n\n        @flow\n        def example_batch_submit_flow():\n            aws_credentials = AwsCredentials(\n                aws_access_key_id=\"acccess_key_id\",\n                aws_secret_access_key=\"secret_access_key\"\n            )\n            job_id = batch_submit(\n                \"job_name\",\n                \"job_queue\",\n                \"job_definition\",\n                aws_credentials\n            )\n            return job_id\n\n        example_batch_submit_flow()\n        ```\n\n    \"\"\"  # noqa\n    logger = get_run_logger()\n    logger.info(\"Preparing to submit %s job to %s job queue\", job_name, job_queue)\n\n    batch_client = aws_credentials.get_boto3_session().client(\"batch\")\n\n    response = await run_sync_in_worker_thread(\n        batch_client.submit_job,\n        jobName=job_name,\n        jobQueue=job_queue,\n        jobDefinition=job_definition,\n        **batch_kwargs,\n    )\n    return response[\"jobId\"]\n
    "},{"location":"integrations/prefect-aws/client_waiter/","title":"Client Waiter","text":""},{"location":"integrations/prefect-aws/client_waiter/#prefect_aws.client_waiter","title":"prefect_aws.client_waiter","text":"

    Task for waiting on a long-running AWS job

    "},{"location":"integrations/prefect-aws/client_waiter/#prefect_aws.client_waiter.client_waiter","title":"client_waiter async","text":"

    Uses the underlying boto3 waiter functionality.

    Parameters:

    Name Type Description Default client str

    The AWS client on which to wait (e.g., 'client_wait', 'ec2', etc).

    required waiter_name str

    The name of the waiter to instantiate. You may also use a custom waiter name, if you supply an accompanying waiter definition dict.

    required aws_credentials AwsCredentials

    Credentials to use for authentication with AWS.

    required waiter_definition Optional[Dict[str, Any]]

    A valid custom waiter model, as a dict. Note that if you supply a custom definition, it is assumed that the provided 'waiter_name' is contained within the waiter definition dict.

    None **waiter_kwargs Optional[Dict[str, Any]]

    Arguments to pass to the waiter.wait(...) method. Will depend upon the specific waiter being called.

    {} Example

    Run an ec2 waiter until instance_exists.

    from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.client_waiter import client_waiter\n\n@flow\ndef example_client_wait_flow():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"acccess_key_id\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n\n    waiter = client_waiter(\n        \"ec2\",\n        \"instance_exists\",\n        aws_credentials\n    )\n\n    return waiter\nexample_client_wait_flow()\n

    Source code in prefect_aws/client_waiter.py
    @task\nasync def client_waiter(\n    client: str,\n    waiter_name: str,\n    aws_credentials: AwsCredentials,\n    waiter_definition: Optional[Dict[str, Any]] = None,\n    **waiter_kwargs: Optional[Dict[str, Any]],\n):\n    \"\"\"\n    Uses the underlying boto3 waiter functionality.\n\n    Args:\n        client: The AWS client on which to wait (e.g., 'client_wait', 'ec2', etc).\n        waiter_name: The name of the waiter to instantiate.\n            You may also use a custom waiter name, if you supply\n            an accompanying waiter definition dict.\n        aws_credentials: Credentials to use for authentication with AWS.\n        waiter_definition: A valid custom waiter model, as a dict. Note that if\n            you supply a custom definition, it is assumed that the provided\n            'waiter_name' is contained within the waiter definition dict.\n        **waiter_kwargs: Arguments to pass to the `waiter.wait(...)` method. Will\n            depend upon the specific waiter being called.\n\n    Example:\n        Run an ec2 waiter until instance_exists.\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.client_wait import client_waiter\n\n        @flow\n        def example_client_wait_flow():\n            aws_credentials = AwsCredentials(\n                aws_access_key_id=\"acccess_key_id\",\n                aws_secret_access_key=\"secret_access_key\"\n            )\n\n            waiter = client_waiter(\n                \"ec2\",\n                \"instance_exists\",\n                aws_credentials\n            )\n\n            return waiter\n        example_client_wait_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Waiting on %s job\", client)\n\n    boto_client = aws_credentials.get_boto3_session().client(client)\n\n    if waiter_definition is not None:\n        # Use user-provided waiter definition\n        waiter_model = WaiterModel(waiter_definition)\n        waiter = create_waiter_with_client(waiter_name, waiter_model, boto_client)\n    elif waiter_name in boto_client.waiter_names:\n        waiter = boto_client.get_waiter(waiter_name)\n    else:\n        raise ValueError(\n            f\"The waiter name, {waiter_name}, is not a valid boto waiter; \"\n            \"if using a custom waiter, you must provide a waiter definition\"\n        )\n\n    await run_sync_in_worker_thread(waiter.wait, **waiter_kwargs)\n
    "},{"location":"integrations/prefect-aws/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials","title":"prefect_aws.credentials","text":"

    Module handling AWS credentials

    "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.AwsCredentials","title":"AwsCredentials","text":"

    Bases: CredentialsBlock

    Block used to manage authentication with AWS. AWS authentication is handled via the boto3 module. Refer to the boto3 docs for more info about the possible credential configurations.

    Example

    Load stored AWS credentials:

    from prefect_aws import AwsCredentials\n\naws_credentials_block = AwsCredentials.load(\"BLOCK_NAME\")\n

    Source code in prefect_aws/credentials.py
    class AwsCredentials(CredentialsBlock):\n    \"\"\"\n    Block used to manage authentication with AWS. AWS authentication is\n    handled via the `boto3` module. Refer to the\n    [boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html)\n    for more info about the possible credential configurations.\n\n    Example:\n        Load stored AWS credentials:\n        ```python\n        from prefect_aws import AwsCredentials\n\n        aws_credentials_block = AwsCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"  # noqa E501\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/d74b16fe84ce626345adf235a47008fea2869a60-225x225.png\"  # noqa\n    _block_type_name = \"AWS Credentials\"\n    _documentation_url = \"https://prefecthq.github.io/prefect-aws/credentials/#prefect_aws.credentials.AwsCredentials\"  # noqa\n\n    aws_access_key_id: Optional[str] = Field(\n        default=None,\n        description=\"A specific AWS access key ID.\",\n        title=\"AWS Access Key ID\",\n    )\n    aws_secret_access_key: Optional[SecretStr] = Field(\n        default=None,\n        description=\"A specific AWS secret access key.\",\n        title=\"AWS Access Key Secret\",\n    )\n    aws_session_token: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The session key for your AWS account. \"\n            \"This is only needed when you are using temporary credentials.\"\n        ),\n        title=\"AWS Session Token\",\n    )\n    profile_name: Optional[str] = Field(\n        default=None, description=\"The profile to use when creating your session.\"\n    )\n    region_name: Optional[str] = Field(\n        default=None,\n        description=\"The AWS Region where you want to create new connections.\",\n    )\n    aws_client_parameters: AwsClientParameters = Field(\n        default_factory=AwsClientParameters,\n        description=\"Extra parameters to initialize the Client.\",\n        title=\"AWS Client Parameters\",\n    )\n\n    class Config:\n        \"\"\"Config class for pydantic model.\"\"\"\n\n        arbitrary_types_allowed = True\n\n    def __hash__(self):\n        field_hashes = (\n            hash(self.aws_access_key_id),\n            hash(self.aws_secret_access_key),\n            hash(self.aws_session_token),\n            hash(self.profile_name),\n            hash(self.region_name),\n            hash(self.aws_client_parameters),\n        )\n        return hash(field_hashes)\n\n    def get_boto3_session(self) -> boto3.Session:\n        \"\"\"\n        Returns an authenticated boto3 session that can be used to create clients\n        for AWS services\n\n        Example:\n            Create an S3 client from an authorized boto3 session:\n            ```python\n            aws_credentials = AwsCredentials(\n                aws_access_key_id = \"access_key_id\",\n                aws_secret_access_key = \"secret_access_key\"\n                )\n            s3_client = aws_credentials.get_boto3_session().client(\"s3\")\n            ```\n        \"\"\"\n\n        if self.aws_secret_access_key:\n            aws_secret_access_key = self.aws_secret_access_key.get_secret_value()\n        else:\n            aws_secret_access_key = None\n\n        return boto3.Session(\n            aws_access_key_id=self.aws_access_key_id,\n            aws_secret_access_key=aws_secret_access_key,\n            aws_session_token=self.aws_session_token,\n            profile_name=self.profile_name,\n            
region_name=self.region_name,\n        )\n\n    def get_client(self, client_type: Union[str, ClientType]):\n        \"\"\"\n        Helper method to dynamically get a client type.\n\n        Args:\n            client_type: The client's service name.\n\n        Returns:\n            An authenticated client.\n\n        Raises:\n            ValueError: if the client is not supported.\n        \"\"\"\n        if isinstance(client_type, ClientType):\n            client_type = client_type.value\n\n        return _get_client_cached(ctx=self, client_type=client_type)\n\n    def get_s3_client(self) -> S3Client:\n        \"\"\"\n        Gets an authenticated S3 client.\n\n        Returns:\n            An authenticated S3 client.\n        \"\"\"\n        return self.get_client(client_type=ClientType.S3)\n\n    def get_secrets_manager_client(self) -> SecretsManagerClient:\n        \"\"\"\n        Gets an authenticated Secrets Manager client.\n\n        Returns:\n            An authenticated Secrets Manager client.\n        \"\"\"\n        return self.get_client(client_type=ClientType.SECRETS_MANAGER)\n
    "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.AwsCredentials.Config","title":"Config","text":"

    Config class for pydantic model.

    Source code in prefect_aws/credentials.py
    class Config:\n    \"\"\"Config class for pydantic model.\"\"\"\n\n    arbitrary_types_allowed = True\n
    "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.AwsCredentials.get_boto3_session","title":"get_boto3_session","text":"

    Returns an authenticated boto3 session that can be used to create clients for AWS services

    Example

    Create an S3 client from an authorized boto3 session:

    aws_credentials = AwsCredentials(\n    aws_access_key_id = \"access_key_id\",\n    aws_secret_access_key = \"secret_access_key\"\n    )\ns3_client = aws_credentials.get_boto3_session().client(\"s3\")\n

    Source code in prefect_aws/credentials.py
    def get_boto3_session(self) -> boto3.Session:\n    \"\"\"\n    Returns an authenticated boto3 session that can be used to create clients\n    for AWS services\n\n    Example:\n        Create an S3 client from an authorized boto3 session:\n        ```python\n        aws_credentials = AwsCredentials(\n            aws_access_key_id = \"access_key_id\",\n            aws_secret_access_key = \"secret_access_key\"\n            )\n        s3_client = aws_credentials.get_boto3_session().client(\"s3\")\n        ```\n    \"\"\"\n\n    if self.aws_secret_access_key:\n        aws_secret_access_key = self.aws_secret_access_key.get_secret_value()\n    else:\n        aws_secret_access_key = None\n\n    return boto3.Session(\n        aws_access_key_id=self.aws_access_key_id,\n        aws_secret_access_key=aws_secret_access_key,\n        aws_session_token=self.aws_session_token,\n        profile_name=self.profile_name,\n        region_name=self.region_name,\n    )\n
    "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.AwsCredentials.get_client","title":"get_client","text":"

    Helper method to dynamically get a client type.

    Parameters:

    Name Type Description Default client_type Union[str, ClientType]

    The client's service name.

    required

    Returns:

    Type Description

    An authenticated client.

    Raises:

    Type Description ValueError

    if the client is not supported.
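
    A brief usage sketch (the block name is a placeholder): either a service name string or a ClientType member may be passed.

    from prefect_aws import AwsCredentials\nfrom prefect_aws.credentials import ClientType\n\naws_credentials = AwsCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n\n# Equivalent ways to get an authenticated boto3 S3 client\ns3_client = aws_credentials.get_client(\"s3\")\ns3_client = aws_credentials.get_client(ClientType.S3)\n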

    Source code in prefect_aws/credentials.py
    def get_client(self, client_type: Union[str, ClientType]):\n    \"\"\"\n    Helper method to dynamically get a client type.\n\n    Args:\n        client_type: The client's service name.\n\n    Returns:\n        An authenticated client.\n\n    Raises:\n        ValueError: if the client is not supported.\n    \"\"\"\n    if isinstance(client_type, ClientType):\n        client_type = client_type.value\n\n    return _get_client_cached(ctx=self, client_type=client_type)\n
    "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.AwsCredentials.get_s3_client","title":"get_s3_client","text":"

    Gets an authenticated S3 client.

    Returns:

    Type Description S3Client

    An authenticated S3 client.
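
    For example (block and bucket names are placeholders), the returned object is a standard boto3 S3 client:

    from prefect_aws import AwsCredentials\n\naws_credentials = AwsCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\ns3_client = aws_credentials.get_s3_client()\ns3_client.list_objects_v2(Bucket=\"BUCKET-NAME-PLACEHOLDER\")\n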

    Source code in prefect_aws/credentials.py
    def get_s3_client(self) -> S3Client:\n    \"\"\"\n    Gets an authenticated S3 client.\n\n    Returns:\n        An authenticated S3 client.\n    \"\"\"\n    return self.get_client(client_type=ClientType.S3)\n
    "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.AwsCredentials.get_secrets_manager_client","title":"get_secrets_manager_client","text":"

    Gets an authenticated Secrets Manager client.

    Returns:

    Type Description SecretsManagerClient

    An authenticated Secrets Manager client.
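
    For example (block and secret names are placeholders), the returned object is a standard boto3 Secrets Manager client:

    from prefect_aws import AwsCredentials\n\naws_credentials = AwsCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\nsecrets_client = aws_credentials.get_secrets_manager_client()\nresponse = secrets_client.get_secret_value(SecretId=\"SECRET-NAME-PLACEHOLDER\")\n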

    Source code in prefect_aws/credentials.py
    def get_secrets_manager_client(self) -> SecretsManagerClient:\n    \"\"\"\n    Gets an authenticated Secrets Manager client.\n\n    Returns:\n        An authenticated Secrets Manager client.\n    \"\"\"\n    return self.get_client(client_type=ClientType.SECRETS_MANAGER)\n
    "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.ClientType","title":"ClientType","text":"

    Bases: Enum

    The supported boto3 clients.

    Source code in prefect_aws/credentials.py
    class ClientType(Enum):\n    \"\"\"The supported boto3 clients.\"\"\"\n\n    S3 = \"s3\"\n    ECS = \"ecs\"\n    BATCH = \"batch\"\n    SECRETS_MANAGER = \"secretsmanager\"\n
    "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.MinIOCredentials","title":"MinIOCredentials","text":"

    Bases: CredentialsBlock

    Block used to manage authentication with MinIO. Refer to the MinIO docs for more info about the possible credential configurations.

    Attributes:

    Name Type Description minio_root_user str

    Admin or root user.

    minio_root_password SecretStr

    Admin or root password.

    region_name Optional[str]

    Location of server, e.g. \"us-east-1\".

    Example

    Load stored MinIO credentials:

    from prefect_aws import MinIOCredentials\n\nminio_credentials_block = MinIOCredentials.load(\"BLOCK_NAME\")\n

    Source code in prefect_aws/credentials.py
    class MinIOCredentials(CredentialsBlock):\n    \"\"\"\n    Block used to manage authentication with MinIO. Refer to the\n    [MinIO docs](https://docs.min.io/docs/minio-server-configuration-guide.html)\n    for more info about the possible credential configurations.\n\n    Attributes:\n        minio_root_user: Admin or root user.\n        minio_root_password: Admin or root password.\n        region_name: Location of server, e.g. \"us-east-1\".\n\n    Example:\n        Load stored MinIO credentials:\n        ```python\n        from prefect_aws import MinIOCredentials\n\n        minio_credentials_block = MinIOCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"  # noqa E501\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/676cb17bcbdff601f97e0a02ff8bcb480e91ff40-250x250.png\"  # noqa\n    _block_type_name = \"MinIO Credentials\"\n    _description = (\n        \"Block used to manage authentication with MinIO. Refer to the MinIO \"\n        \"docs: https://docs.min.io/docs/minio-server-configuration-guide.html \"\n        \"for more info about the possible credential configurations.\"\n    )\n    _documentation_url = \"https://prefecthq.github.io/prefect-aws/credentials/#prefect_aws.credentials.MinIOCredentials\"  # noqa\n\n    minio_root_user: str = Field(default=..., description=\"Admin or root user.\")\n    minio_root_password: SecretStr = Field(\n        default=..., description=\"Admin or root password.\"\n    )\n    region_name: Optional[str] = Field(\n        default=None,\n        description=\"The AWS Region where you want to create new connections.\",\n    )\n    aws_client_parameters: AwsClientParameters = Field(\n        default_factory=AwsClientParameters,\n        description=\"Extra parameters to initialize the Client.\",\n    )\n\n    class Config:\n        \"\"\"Config class for pydantic model.\"\"\"\n\n        arbitrary_types_allowed = True\n\n    def __hash__(self):\n        return hash(\n            (\n                hash(self.minio_root_user),\n                hash(self.minio_root_password),\n                hash(self.region_name),\n                hash(frozenset(self.aws_client_parameters.dict().items())),\n            )\n        )\n\n    def get_boto3_session(self) -> boto3.Session:\n        \"\"\"\n        Returns an authenticated boto3 session that can be used to create clients\n        and perform object operations on MinIO server.\n\n        Example:\n            Create an S3 client from an authorized boto3 session\n\n            ```python\n            minio_credentials = MinIOCredentials(\n                minio_root_user = \"minio_root_user\",\n                minio_root_password = \"minio_root_password\"\n            )\n            s3_client = minio_credentials.get_boto3_session().client(\n                service=\"s3\",\n                endpoint_url=\"http://localhost:9000\"\n            )\n            ```\n        \"\"\"\n\n        minio_root_password = (\n            self.minio_root_password.get_secret_value()\n            if self.minio_root_password\n            else None\n        )\n\n        return boto3.Session(\n            aws_access_key_id=self.minio_root_user,\n            aws_secret_access_key=minio_root_password,\n            region_name=self.region_name,\n        )\n\n    def get_client(self, client_type: Union[str, ClientType]):\n        \"\"\"\n        Helper method to dynamically get a client type.\n\n        Args:\n            client_type: The client's service name.\n\n        Returns:\n            An authenticated 
client.\n\n        Raises:\n            ValueError: if the client is not supported.\n        \"\"\"\n        if isinstance(client_type, ClientType):\n            client_type = client_type.value\n\n        return _get_client_cached(ctx=self, client_type=client_type)\n\n    def get_s3_client(self) -> S3Client:\n        \"\"\"\n        Gets an authenticated S3 client.\n\n        Returns:\n            An authenticated S3 client.\n        \"\"\"\n        return self.get_client(client_type=ClientType.S3)\n
    "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.MinIOCredentials.Config","title":"Config","text":"

    Config class for pydantic model.

    Source code in prefect_aws/credentials.py
    class Config:\n    \"\"\"Config class for pydantic model.\"\"\"\n\n    arbitrary_types_allowed = True\n
    "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.MinIOCredentials.get_boto3_session","title":"get_boto3_session","text":"

    Returns an authenticated boto3 session that can be used to create clients and perform object operations on MinIO server.

    Example

    Create an S3 client from an authorized boto3 session

    minio_credentials = MinIOCredentials(\n    minio_root_user = \"minio_root_user\",\n    minio_root_password = \"minio_root_password\"\n)\ns3_client = minio_credentials.get_boto3_session().client(\n    service=\"s3\",\n    endpoint_url=\"http://localhost:9000\"\n)\n
    Source code in prefect_aws/credentials.py
    def get_boto3_session(self) -> boto3.Session:\n    \"\"\"\n    Returns an authenticated boto3 session that can be used to create clients\n    and perform object operations on MinIO server.\n\n    Example:\n        Create an S3 client from an authorized boto3 session\n\n        ```python\n        minio_credentials = MinIOCredentials(\n            minio_root_user = \"minio_root_user\",\n            minio_root_password = \"minio_root_password\"\n        )\n        s3_client = minio_credentials.get_boto3_session().client(\n            service=\"s3\",\n            endpoint_url=\"http://localhost:9000\"\n        )\n        ```\n    \"\"\"\n\n    minio_root_password = (\n        self.minio_root_password.get_secret_value()\n        if self.minio_root_password\n        else None\n    )\n\n    return boto3.Session(\n        aws_access_key_id=self.minio_root_user,\n        aws_secret_access_key=minio_root_password,\n        region_name=self.region_name,\n    )\n
    "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.MinIOCredentials.get_client","title":"get_client","text":"

    Helper method to dynamically get a client type.

    Parameters:

    Name Type Description Default client_type Union[str, ClientType]

    The client's service name.

    required

    Returns:

    Type Description

    An authenticated client.

    Raises:

    Type Description ValueError

    if the client is not supported.

    Source code in prefect_aws/credentials.py
    def get_client(self, client_type: Union[str, ClientType]):\n    \"\"\"\n    Helper method to dynamically get a client type.\n\n    Args:\n        client_type: The client's service name.\n\n    Returns:\n        An authenticated client.\n\n    Raises:\n        ValueError: if the client is not supported.\n    \"\"\"\n    if isinstance(client_type, ClientType):\n        client_type = client_type.value\n\n    return _get_client_cached(ctx=self, client_type=client_type)\n
    "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.MinIOCredentials.get_s3_client","title":"get_s3_client","text":"

    Gets an authenticated S3 client.

    Returns:

    Type Description S3Client

    An authenticated S3 client.

    Source code in prefect_aws/credentials.py
    def get_s3_client(self) -> S3Client:\n    \"\"\"\n    Gets an authenticated S3 client.\n\n    Returns:\n        An authenticated S3 client.\n    \"\"\"\n    return self.get_client(client_type=ClientType.S3)\n
    "},{"location":"integrations/prefect-aws/ecs/","title":"ECS (deprecated)","text":""},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs","title":"prefect_aws.ecs","text":"

    DEPRECATION WARNING:

    This module is deprecated as of March 2024 and will not be available after September 2024. It has been replaced by the ECS worker, which offers enhanced functionality and better performance.

    For upgrade instructions, see https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.

    Integrations with the Amazon Elastic Container Service.

    Examples:

    Run a task using ECS Fargate\n```python\nECSTask(command=[\"echo\", \"hello world\"]).run()\n```\n\nRun a task using ECS Fargate with a spot container instance\n```python\nECSTask(command=[\"echo\", \"hello world\"], launch_type=\"FARGATE_SPOT\").run()\n```\n\nRun a task using ECS with an EC2 container instance\n```python\nECSTask(command=[\"echo\", \"hello world\"], launch_type=\"EC2\").run()\n```\n\nRun a task on a specific VPC using ECS Fargate\n```python\nECSTask(command=[\"echo\", \"hello world\"], vpc_id=\"vpc-01abcdf123456789a\").run()\n```\n\nRun a task and stream the container's output to the local terminal. Note an\nexecution role must be provided with permissions: logs:CreateLogStream,\nlogs:CreateLogGroup, and logs:PutLogEvents.\n```python\nECSTask(\n    command=[\"echo\", \"hello world\"],\n    stream_output=True,\n    execution_role_arn=\"...\"\n)\n```\n\nRun a task using an existing task definition as a base\n```python\nECSTask(command=[\"echo\", \"hello world\"], task_definition_arn=\"arn:aws:ecs:...\")\n```\n\nRun a task with a specific image\n```python\nECSTask(command=[\"echo\", \"hello world\"], image=\"alpine:latest\")\n```\n\nRun a task with custom memory and CPU requirements\n```python\nECSTask(command=[\"echo\", \"hello world\"], memory=4096, cpu=2048)\n```\n\nRun a task with custom environment variables\n```python\nECSTask(command=[\"echo\", \"hello $PLANET\"], env={\"PLANET\": \"earth\"})\n```\n\nRun a task in a specific ECS cluster\n```python\nECSTask(command=[\"echo\", \"hello world\"], cluster=\"my-cluster-name\")\n```\n\nRun a task with custom VPC subnets\n```python\nECSTask(\n    command=[\"echo\", \"hello world\"],\n    task_customizations=[\n        {\n            \"op\": \"add\",\n            \"path\": \"/networkConfiguration/awsvpcConfiguration/subnets\",\n            \"value\": [\"subnet-80b6fbcd\", \"subnet-42a6fdgd\"],\n        },\n    ]\n)\n```\n\nRun a task without a public IP assigned\n```python\nECSTask(\n    command=[\"echo\", \"hello world\"],\n    vpc_id=\"vpc-01abcdf123456789a\",\n    task_customizations=[\n        {\n            \"op\": \"replace\",\n            \"path\": \"/networkConfiguration/awsvpcConfiguration/assignPublicIp\",\n            \"value\": \"DISABLED\",\n        },\n    ]\n)\n```\n\nRun a task with custom VPC security groups\n```python\nECSTask(\n    command=[\"echo\", \"hello world\"],\n    vpc_id=\"vpc-01abcdf123456789a\",\n    task_customizations=[\n        {\n            \"op\": \"add\",\n            \"path\": \"/networkConfiguration/awsvpcConfiguration/securityGroups\",\n            \"value\": [\"sg-d72e9599956a084f5\"],\n        },\n    ],\n)\n```\n
    "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask","title":"ECSTask","text":"

    Bases: Infrastructure

    Run a command as an ECS task.

    Attributes:

    Name Type Description type Literal['ecs-task']

    The slug for this task type with a default value of \"ecs-task\".

    aws_credentials AwsCredentials

    The AWS credentials to use to connect to ECS with a default factory of AwsCredentials.

    task_definition_arn Optional[str]

    An optional identifier for an existing task definition to use. If fields are set on the ECSTask that conflict with the task definition, a new copy will be registered with the required values. Cannot be used with task_definition. If not provided, Prefect will generate and register a minimal task definition.

    task_definition Optional[dict]

    An optional ECS task definition to use. Prefect may set defaults or override fields on this task definition to match other ECSTask fields. Cannot be used with task_definition_arn. If not provided, Prefect will generate and register a minimal task definition.

    family Optional[str]

    An optional family for the task definition. If not provided, it will be inferred from the task definition. If the task definition does not have a family, the name will be generated. When flow and deployment metadata is available, the generated name will include their names. Values for this field will be slugified to match AWS character requirements.

    image Optional[str]

    An optional image to use for the Prefect container in the task. If this value is not null, it will override the value in the task definition. This value defaults to a Prefect base image matching your local versions.

    auto_deregister_task_definition bool

    A boolean that controls if any task definitions that are created by this block will be deregistered or not. Existing task definitions linked by ARN will never be deregistered. Deregistering a task definition does not remove it from your AWS account, instead it will be marked as INACTIVE.

    cpu int

    The amount of CPU to provide to the ECS task. Valid amounts are specified in the AWS documentation. If not provided, a default value of ECS_DEFAULT_CPU will be used unless present on the task definition.

    memory int

    The amount of memory to provide to the ECS task. Valid amounts are specified in the AWS documentation. If not provided, a default value of ECS_DEFAULT_MEMORY will be used unless present on the task definition.

    execution_role_arn str

    An execution role to use for the task. This controls the permissions of the task when it is launching. If this value is not null, it will override the value in the task definition. An execution role must be provided to capture logs from the container.

    configure_cloudwatch_logs bool

    A boolean that controls if the Prefect container will be configured to send its output to the AWS CloudWatch logs service or not. This functionality requires an execution role with permissions to create log streams and groups.

    cloudwatch_logs_options Dict[str, str]

    A dictionary of options to pass to the CloudWatch logs configuration.

    stream_output bool

    A boolean indicating whether logs will be streamed from the Prefect container to the local console.

    launch_type Optional[Literal['FARGATE', 'EC2', 'EXTERNAL', 'FARGATE_SPOT']]

    An optional launch type for the ECS task run infrastructure.

    vpc_id Optional[str]

    An optional VPC ID to link the task run to. This is only applicable when using the 'awsvpc' network mode for your task.

    cluster Optional[str]

    An optional ECS cluster to run the task in. The ARN or name may be provided. If not provided, the default cluster will be used.

    env Dict[str, Optional[str]]

    A dictionary of environment variables to provide to the task run. These variables are set on the Prefect container at task runtime.

    task_role_arn str

    An optional role to attach to the task run. This controls the permissions of the task while it is running.

    task_customizations JsonPatch

    A list of JSON 6902 patches to apply to the task run request. If a string is given, it will be parsed as a JSON expression.

    task_start_timeout_seconds int

    The amount of time to watch for the start of the ECS task before marking it as failed. The task must enter a RUNNING state to be considered started.

    task_watch_poll_interval float

    The amount of time to wait between AWS API calls while monitoring the state of an ECS task.

    Source code in prefect_aws/ecs.py
    @deprecated_class(\n    start_date=\"Mar 2024\",\n    help=(\n        \"Use the ECS worker instead.\"\n        \" Refer to the upgrade guide for more information:\"\n        \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\"\n    ),\n)\nclass ECSTask(Infrastructure):\n    \"\"\"\n    Run a command as an ECS task.\n\n    Attributes:\n        type: The slug for this task type with a default value of \"ecs-task\".\n        aws_credentials: The AWS credentials to use to connect to ECS with a\n            default factory of AwsCredentials.\n        task_definition_arn: An optional identifier for an existing task definition\n            to use. If fields are set on the ECSTask that conflict with the task\n            definition, a new copy will be registered with the required values.\n            Cannot be used with task_definition. If not provided, Prefect will\n            generate and register a minimal task definition.\n        task_definition: An optional ECS task definition to use. Prefect may set\n            defaults or override fields on this task definition to match other\n            ECSTask fields. Cannot be used with task_definition_arn.\n            If not provided, Prefect will generate and register\n            a minimal task definition.\n        family: An optional family for the task definition. If not provided,\n            it will be inferred from the task definition. If the task definition\n            does not have a family, the name will be generated. When flow and\n            deployment metadata is available, the generated name will include\n            their names. Values for this field will be slugified to match\n            AWS character requirements.\n        image: An optional image to use for the Prefect container in the task.\n            If this value is not null, it will override the value in the task\n            definition. This value defaults to a Prefect base image matching\n            your local versions.\n        auto_deregister_task_definition: A boolean that controls if any task\n            definitions that are created by this block will be deregistered\n            or not. Existing task definitions linked by ARN will never be\n            deregistered. Deregistering a task definition does not remove\n            it from your AWS account, instead it will be marked as INACTIVE.\n        cpu: The amount of CPU to provide to the ECS task. Valid amounts are\n            specified in the AWS documentation. If not provided, a default\n            value of ECS_DEFAULT_CPU will be used unless present on\n            the task definition.\n        memory: The amount of memory to provide to the ECS task.\n            Valid amounts are specified in the AWS documentation.\n            If not provided, a default value of ECS_DEFAULT_MEMORY\n            will be used unless present on the task definition.\n        execution_role_arn: An execution role to use for the task.\n            This controls the permissions of the task when it is launching.\n            If this value is not null, it will override the value in the task\n            definition. An execution role must be provided to capture logs\n            from the container.\n        configure_cloudwatch_logs: A boolean that controls if the Prefect\n            container will be configured to send its output to the\n            AWS CloudWatch logs service or not. 
This functionality requires\n            an execution role with permissions to create log streams and groups.\n        cloudwatch_logs_options: A dictionary of options to pass to\n            the CloudWatch logs configuration.\n        stream_output: A boolean indicating whether logs will be\n            streamed from the Prefect container to the local console.\n        launch_type: An optional launch type for the ECS task run infrastructure.\n        vpc_id: An optional VPC ID to link the task run to.\n            This is only applicable when using the 'awsvpc' network mode for your task.\n        cluster: An optional ECS cluster to run the task in.\n            The ARN or name may be provided. If not provided,\n            the default cluster will be used.\n        env: A dictionary of environment variables to provide to\n            the task run. These variables are set on the\n            Prefect container at task runtime.\n        task_role_arn: An optional role to attach to the task run.\n            This controls the permissions of the task while it is running.\n        task_customizations: A list of JSON 6902 patches to apply to the task\n            run request. If a string is given, it will parsed as a JSON expression.\n        task_start_timeout_seconds: The amount of time to watch for the\n            start of the ECS task before marking it as failed. The task must\n            enter a RUNNING state to be considered started.\n        task_watch_poll_interval: The amount of time to wait between AWS API\n            calls while monitoring the state of an ECS task.\n    \"\"\"\n\n    _block_type_slug = \"ecs-task\"\n    _block_type_name = \"ECS Task\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/d74b16fe84ce626345adf235a47008fea2869a60-225x225.png\"  # noqa\n    _description = \"Run a command as an ECS task.\"  # noqa\n    _documentation_url = (\n        \"https://prefecthq.github.io/prefect-aws/ecs/#prefect_aws.ecs.ECSTask\"  # noqa\n    )\n\n    type: Literal[\"ecs-task\"] = Field(\n        \"ecs-task\", description=\"The slug for this task type.\"\n    )\n\n    aws_credentials: AwsCredentials = Field(\n        title=\"AWS Credentials\",\n        default_factory=AwsCredentials,\n        description=\"The AWS credentials to use to connect to ECS.\",\n    )\n\n    # Task definition settings\n    task_definition_arn: Optional[str] = Field(\n        default=None,\n        description=(\n            \"An identifier for an existing task definition to use. If fields are set \"\n            \"on the `ECSTask` that conflict with the task definition, a new copy \"\n            \"will be registered with the required values. \"\n            \"Cannot be used with `task_definition`. If not provided, Prefect will \"\n            \"generate and register a minimal task definition.\"\n        ),\n    )\n    task_definition: Optional[dict] = Field(\n        default=None,\n        description=(\n            \"An ECS task definition to use. Prefect may set defaults or override \"\n            \"fields on this task definition to match other `ECSTask` fields. \"\n            \"Cannot be used with `task_definition_arn`. If not provided, Prefect will \"\n            \"generate and register a minimal task definition.\"\n        ),\n    )\n    family: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A family for the task definition. If not provided, it will be inferred \"\n            \"from the task definition. 
If the task definition does not have a family, \"\n            \"the name will be generated. When flow and deployment metadata is \"\n            \"available, the generated name will include their names. Values for this \"\n            \"field will be slugified to match AWS character requirements.\"\n        ),\n    )\n    image: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The image to use for the Prefect container in the task. If this value is \"\n            \"not null, it will override the value in the task definition. This value \"\n            \"defaults to a Prefect base image matching your local versions.\"\n        ),\n    )\n    auto_deregister_task_definition: bool = Field(\n        default=True,\n        description=(\n            \"If set, any task definitions that are created by this block will be \"\n            \"deregistered. Existing task definitions linked by ARN will never be \"\n            \"deregistered. Deregistering a task definition does not remove it from \"\n            \"your AWS account, instead it will be marked as INACTIVE.\"\n        ),\n    )\n\n    # Mixed task definition / run settings\n    cpu: int = Field(\n        title=\"CPU\",\n        default=None,\n        description=(\n            \"The amount of CPU to provide to the ECS task. Valid amounts are \"\n            \"specified in the AWS documentation. If not provided, a default value of \"\n            f\"{ECS_DEFAULT_CPU} will be used unless present on the task definition.\"\n        ),\n    )\n    memory: int = Field(\n        default=None,\n        description=(\n            \"The amount of memory to provide to the ECS task. Valid amounts are \"\n            \"specified in the AWS documentation. If not provided, a default value of \"\n            f\"{ECS_DEFAULT_MEMORY} will be used unless present on the task definition.\"\n        ),\n    )\n    execution_role_arn: str = Field(\n        title=\"Execution Role ARN\",\n        default=None,\n        description=(\n            \"An execution role to use for the task. This controls the permissions of \"\n            \"the task when it is launching. If this value is not null, it will \"\n            \"override the value in the task definition. An execution role must be \"\n            \"provided to capture logs from the container.\"\n        ),\n    )\n    configure_cloudwatch_logs: bool = Field(\n        default=None,\n        description=(\n            \"If `True`, the Prefect container will be configured to send its output \"\n            \"to the AWS CloudWatch logs service. This functionality requires an \"\n            \"execution role with logs:CreateLogStream, logs:CreateLogGroup, and \"\n            \"logs:PutLogEvents permissions. The default for this field is `False` \"\n            \"unless `stream_output` is set.\"\n        ),\n    )\n    cloudwatch_logs_options: Dict[str, str] = Field(\n        default_factory=dict,\n        description=(\n            \"When `configure_cloudwatch_logs` is enabled, this setting may be used to \"\n            \"pass additional options to the CloudWatch logs configuration or override \"\n            \"the default options. See the AWS documentation for available options. 
\"\n            \"https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html#create_awslogs_logdriver_options.\"  # noqa\n        ),\n    )\n    stream_output: bool = Field(\n        default=None,\n        description=(\n            \"If `True`, logs will be streamed from the Prefect container to the local \"\n            \"console. Unless you have configured AWS CloudWatch logs manually on your \"\n            \"task definition, this requires the same prerequisites outlined in \"\n            \"`configure_cloudwatch_logs`.\"\n        ),\n    )\n\n    # Task run settings\n    launch_type: Optional[\n        Literal[\"FARGATE\", \"EC2\", \"EXTERNAL\", \"FARGATE_SPOT\"]\n    ] = Field(\n        default=\"FARGATE\",\n        description=(\n            \"The type of ECS task run infrastructure that should be used. Note that\"\n            \" 'FARGATE_SPOT' is not a formal ECS launch type, but we will configure\"\n            \" the proper capacity provider strategy if set here.\"\n        ),\n    )\n    vpc_id: Optional[str] = Field(\n        title=\"VPC ID\",\n        default=None,\n        description=(\n            \"The AWS VPC to link the task run to. This is only applicable when using \"\n            \"the 'awsvpc' network mode for your task. FARGATE tasks require this \"\n            \"network  mode, but for EC2 tasks the default network mode is 'bridge'. \"\n            \"If using the 'awsvpc' network mode and this field is null, your default \"\n            \"VPC will be used. If no default VPC can be found, the task run will fail.\"\n        ),\n    )\n    cluster: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The ECS cluster to run the task in. The ARN or name may be provided. If \"\n            \"not provided, the default cluster will be used.\"\n        ),\n    )\n    env: Dict[str, Optional[str]] = Field(\n        title=\"Environment Variables\",\n        default_factory=dict,\n        description=(\n            \"Environment variables to provide to the task run. These variables are set \"\n            \"on the Prefect container at task runtime. These will not be set on the \"\n            \"task definition.\"\n        ),\n    )\n    task_role_arn: str = Field(\n        title=\"Task Role ARN\",\n        default=None,\n        description=(\n            \"A role to attach to the task run. This controls the permissions of the \"\n            \"task while it is running.\"\n        ),\n    )\n    task_customizations: JsonPatch = Field(\n        default_factory=lambda: JsonPatch([]),\n        description=(\n            \"A list of JSON 6902 patches to apply to the task run request. \"\n            \"If a string is given, it will parsed as a JSON expression.\"\n        ),\n    )\n\n    # Execution settings\n    task_start_timeout_seconds: int = Field(\n        default=120,\n        description=(\n            \"The amount of time to watch for the start of the ECS task \"\n            \"before marking it as failed. 
The task must enter a RUNNING state to be \"\n            \"considered started.\"\n        ),\n    )\n    task_watch_poll_interval: float = Field(\n        default=5.0,\n        description=(\n            \"The amount of time to wait between AWS API calls while monitoring the \"\n            \"state of an ECS task.\"\n        ),\n    )\n\n    @root_validator(pre=True)\n    def set_default_configure_cloudwatch_logs(cls, values: dict) -> dict:\n        \"\"\"\n        Streaming output generally requires CloudWatch logs to be configured.\n\n        To avoid entangled arguments in the simple case, `configure_cloudwatch_logs`\n        defaults to matching the value of `stream_output`.\n        \"\"\"\n        configure_cloudwatch_logs = values.get(\"configure_cloudwatch_logs\")\n        if configure_cloudwatch_logs is None:\n            values[\"configure_cloudwatch_logs\"] = values.get(\"stream_output\")\n        return values\n\n    @root_validator\n    def configure_cloudwatch_logs_requires_execution_role_arn(\n        cls, values: dict\n    ) -> dict:\n        \"\"\"\n        Enforces that an execution role arn is provided (or could be provided by a\n        runtime task definition) when configuring logging.\n        \"\"\"\n        if (\n            values.get(\"configure_cloudwatch_logs\")\n            and not values.get(\"execution_role_arn\")\n            # Do not raise if they've linked to another task definition or provided\n            # it without using our shortcuts\n            and not values.get(\"task_definition_arn\")\n            and not (values.get(\"task_definition\") or {}).get(\"executionRoleArn\")\n        ):\n            raise ValueError(\n                \"An `execution_role_arn` must be provided to use \"\n                \"`configure_cloudwatch_logs` or `stream_logs`.\"\n            )\n        return values\n\n    @root_validator\n    def cloudwatch_logs_options_requires_configure_cloudwatch_logs(\n        cls, values: dict\n    ) -> dict:\n        \"\"\"\n        Enforces that an execution role arn is provided (or could be provided by a\n        runtime task definition) when configuring logging.\n        \"\"\"\n        if values.get(\"cloudwatch_logs_options\") and not values.get(\n            \"configure_cloudwatch_logs\"\n        ):\n            raise ValueError(\n                \"`configure_cloudwatch_log` must be enabled to use \"\n                \"`cloudwatch_logs_options`.\"\n            )\n        return values\n\n    @root_validator(pre=True)\n    def image_is_required(cls, values: dict) -> dict:\n        \"\"\"\n        Enforces that an image is available if image is `None`.\n        \"\"\"\n        has_image = bool(values.get(\"image\"))\n        has_task_definition_arn = bool(values.get(\"task_definition_arn\"))\n\n        # The image can only be null when the task_definition_arn is set\n        if has_image or has_task_definition_arn:\n            return values\n\n        prefect_container = (\n            get_prefect_container(\n                (values.get(\"task_definition\") or {}).get(\"containerDefinitions\", [])\n            )\n            or {}\n        )\n        image_in_task_definition = prefect_container.get(\"image\")\n\n        # If a task_definition is given with a prefect container image, use that value\n        if image_in_task_definition:\n            values[\"image\"] = image_in_task_definition\n        # Otherwise, it should default to the Prefect base image\n        else:\n            values[\"image\"] = get_prefect_image_name()\n   
     return values\n\n    @validator(\"task_customizations\", pre=True)\n    def cast_customizations_to_a_json_patch(\n        cls, value: Union[List[Dict], JsonPatch, str]\n    ) -> JsonPatch:\n        \"\"\"\n        Casts lists to JsonPatch instances.\n        \"\"\"\n        if isinstance(value, str):\n            value = json.loads(value)\n        if isinstance(value, list):\n            return JsonPatch(value)\n        return value  # type: ignore\n\n    class Config:\n        \"\"\"Configuration of pydantic.\"\"\"\n\n        # Support serialization of the 'JsonPatch' type\n        arbitrary_types_allowed = True\n        json_encoders = {JsonPatch: lambda p: p.patch}\n\n    def dict(self, *args, **kwargs) -> Dict:\n        \"\"\"\n        Convert to a dictionary.\n        \"\"\"\n        # Support serialization of the 'JsonPatch' type\n        d = super().dict(*args, **kwargs)\n        d[\"task_customizations\"] = self.task_customizations.patch\n        return d\n\n    def prepare_for_flow_run(\n        self: Self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"Deployment\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ) -> Self:\n        \"\"\"\n        Return an copy of the block that is prepared to execute a flow run.\n        \"\"\"\n        new_family = None\n\n        # Update the family if not specified elsewhere\n        if (\n            not self.family\n            and not self.task_definition_arn\n            and not (self.task_definition and self.task_definition.get(\"family\"))\n        ):\n            if flow and deployment:\n                new_family = f\"{ECS_DEFAULT_FAMILY}__{flow.name}__{deployment.name}\"\n            elif flow and not deployment:\n                new_family = f\"{ECS_DEFAULT_FAMILY}__{flow.name}\"\n            elif deployment and not flow:\n                # This is a weird case and should not be see in the wild\n                new_family = f\"{ECS_DEFAULT_FAMILY}__unknown-flow__{deployment.name}\"\n\n        new = super().prepare_for_flow_run(flow_run, deployment=deployment, flow=flow)\n\n        if new_family:\n            return new.copy(update={\"family\": new_family})\n        else:\n            # Avoid an extra copy if not needed\n            return new\n\n    @sync_compatible\n    async def run(self, task_status: Optional[TaskStatus] = None) -> ECSTaskResult:\n        \"\"\"\n        Run the configured task on ECS.\n        \"\"\"\n        boto_session, ecs_client = await run_sync_in_worker_thread(\n            self._get_session_and_client\n        )\n\n        (\n            task_arn,\n            cluster_arn,\n            task_definition,\n            is_new_task_definition,\n        ) = await run_sync_in_worker_thread(\n            self._create_task_and_wait_for_start, boto_session, ecs_client\n        )\n\n        # Display a nice message indicating the command and image\n        command = self.command or get_prefect_container(\n            task_definition[\"containerDefinitions\"]\n        ).get(\"command\", [])\n        self.logger.info(\n            f\"{self._log_prefix}: Running command {' '.join(command)!r} \"\n            f\"in container {PREFECT_ECS_CONTAINER_NAME!r} ({self.image})...\"\n        )\n\n        # The task identifier is \"{cluster}::{task}\" where we use the configured cluster\n        # if set to preserve matching by name rather than arn\n        # Note \"::\" is used despite the Prefect standard being \":\" because ARNs contain\n        # single colons.\n        identifier = 
(self.cluster if self.cluster else cluster_arn) + \"::\" + task_arn\n\n        if task_status:\n            task_status.started(identifier)\n\n        status_code = await run_sync_in_worker_thread(\n            self._watch_task_and_get_exit_code,\n            task_arn,\n            cluster_arn,\n            task_definition,\n            is_new_task_definition and self.auto_deregister_task_definition,\n            boto_session,\n            ecs_client,\n        )\n\n        return ECSTaskResult(\n            identifier=identifier,\n            # If the container does not start the exit code can be null but we must\n            # still report a status code. We use a -1 to indicate a special code.\n            status_code=status_code if status_code is not None else -1,\n        )\n\n    @sync_compatible\n    async def kill(self, identifier: str, grace_seconds: int = 30) -> None:\n        \"\"\"\n        Kill a task running on ECS.\n\n        Args:\n            identifier: A cluster and task arn combination. This should match a value\n                yielded by `ECSTask.run`.\n        \"\"\"\n        if grace_seconds != 30:\n            self.logger.warning(\n                f\"Kill grace period of {grace_seconds}s requested, but AWS does not \"\n                \"support dynamic grace period configuration so 30s will be used. \"\n                \"See https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html for configuration of grace periods.\"  # noqa\n            )\n        cluster, task = parse_task_identifier(identifier)\n        await run_sync_in_worker_thread(self._stop_task, cluster, task)\n\n    @staticmethod\n    def get_corresponding_worker_type() -> str:\n        \"\"\"Return the corresponding worker type for this infrastructure block.\"\"\"\n        return ECSWorker.type\n\n    async def generate_work_pool_base_job_template(self) -> dict:\n        \"\"\"\n        Generate a base job template for a cloud-run work pool with the same\n        configuration as this block.\n\n        Returns:\n            - dict: a base job template for a cloud-run work pool\n        \"\"\"\n        base_job_template = copy.deepcopy(ECSWorker.get_default_base_job_template())\n        for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n            if key == \"command\":\n                base_job_template[\"variables\"][\"properties\"][\"command\"][\n                    \"default\"\n                ] = shlex.join(value)\n            elif key in [\n                \"type\",\n                \"block_type_slug\",\n                \"_block_document_id\",\n                \"_block_document_name\",\n                \"_is_anonymous\",\n                \"task_customizations\",\n            ]:\n                continue\n            elif key == \"aws_credentials\":\n                if not self.aws_credentials._block_document_id:\n                    raise BlockNotSavedError(\n                        \"It looks like you are trying to use a block that\"\n                        \" has not been saved. 
Please call `.save` on your block\"\n                        \" before publishing it as a work pool.\"\n                    )\n                base_job_template[\"variables\"][\"properties\"][\"aws_credentials\"][\n                    \"default\"\n                ] = {\n                    \"$ref\": {\n                        \"block_document_id\": str(\n                            self.aws_credentials._block_document_id\n                        )\n                    }\n                }\n            elif key == \"task_definition\":\n                base_job_template[\"job_configuration\"][\"task_definition\"] = value\n            elif key in base_job_template[\"variables\"][\"properties\"]:\n                base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n            else:\n                self.logger.warning(\n                    f\"Variable {key!r} is not supported by Cloud Run work pools.\"\n                    \" Skipping.\"\n                )\n\n        if self.task_customizations:\n            network_config_patches = JsonPatch(\n                [\n                    patch\n                    for patch in self.task_customizations\n                    if \"networkConfiguration\" in patch[\"path\"]\n                ]\n            )\n            minimal_network_config = assemble_document_for_patches(\n                network_config_patches\n            )\n            if minimal_network_config:\n                minimal_network_config_with_patches = network_config_patches.apply(\n                    minimal_network_config\n                )\n                base_job_template[\"variables\"][\"properties\"][\"network_configuration\"][\n                    \"default\"\n                ] = minimal_network_config_with_patches[\"networkConfiguration\"]\n            try:\n                base_job_template[\"job_configuration\"][\n                    \"task_run_request\"\n                ] = self.task_customizations.apply(\n                    base_job_template[\"job_configuration\"][\"task_run_request\"]\n                )\n            except JsonPointerException:\n                self.logger.warning(\n                    \"Unable to apply task customizations to the base job template.\"\n                    \"You may need to update the template manually.\"\n                )\n\n        return base_job_template\n\n    def _stop_task(self, cluster: str, task: str) -> None:\n        \"\"\"\n        Stop a running ECS task.\n        \"\"\"\n        if self.cluster is not None and cluster != self.cluster:\n            raise InfrastructureNotAvailable(\n                \"Cannot stop ECS task: this infrastructure block has access to \"\n                f\"cluster {self.cluster!r} but the task is running in cluster \"\n                f\"{cluster!r}.\"\n            )\n\n        _, ecs_client = self._get_session_and_client()\n        try:\n            ecs_client.stop_task(cluster=cluster, task=task)\n        except Exception as exc:\n            # Raise a special exception if the task does not exist\n            if \"ClusterNotFound\" in str(exc):\n                raise InfrastructureNotFound(\n                    f\"Cannot stop ECS task: the cluster {cluster!r} could not be found.\"\n                ) from exc\n            if \"not find task\" in str(exc) or \"referenced task was not found\" in str(\n                exc\n            ):\n                raise InfrastructureNotFound(\n                    f\"Cannot stop ECS task: the task {task!r} could not be found in \"\n   
                 f\"cluster {cluster!r}.\"\n                ) from exc\n            if \"no registered tasks\" in str(exc):\n                raise InfrastructureNotFound(\n                    f\"Cannot stop ECS task: the cluster {cluster!r} has no tasks.\"\n                ) from exc\n\n            # Reraise unknown exceptions\n            raise\n\n    @property\n    def _log_prefix(self) -> str:\n        \"\"\"\n        Internal property for generating a prefix for logs where `name` may be null\n        \"\"\"\n        if self.name is not None:\n            return f\"ECSTask {self.name!r}\"\n        else:\n            return \"ECSTask\"\n\n    def _get_session_and_client(self) -> Tuple[boto3.Session, _ECSClient]:\n        \"\"\"\n        Retrieve a boto3 session and ECS client\n        \"\"\"\n        boto_session = self.aws_credentials.get_boto3_session()\n        ecs_client = boto_session.client(\"ecs\")\n        return boto_session, ecs_client\n\n    def _create_task_and_wait_for_start(\n        self, boto_session: boto3.Session, ecs_client: _ECSClient\n    ) -> Tuple[str, str, dict, bool]:\n        \"\"\"\n        Register the task definition, create the task run, and wait for it to start.\n\n        Returns a tuple of\n        - The task ARN\n        - The task's cluster ARN\n        - The task definition\n        - A bool indicating if the task definition is newly registered\n        \"\"\"\n        new_task_definition_registered = False\n        requested_task_definition = (\n            self._retrieve_task_definition(ecs_client, self.task_definition_arn)\n            if self.task_definition_arn\n            else self.task_definition\n        ) or {}\n        task_definition_arn = requested_task_definition.get(\"taskDefinitionArn\", None)\n\n        task_definition = self._prepare_task_definition(\n            requested_task_definition, region=ecs_client.meta.region_name\n        )\n\n        # We must register the task definition if the arn is null or changes were made\n        if task_definition != requested_task_definition or not task_definition_arn:\n            # Before registering, check if the latest task definition in the family\n            # can be used\n            latest_task_definition = self._retrieve_latest_task_definition(\n                ecs_client, task_definition[\"family\"]\n            )\n            if self._task_definitions_equal(latest_task_definition, task_definition):\n                self.logger.debug(\n                    f\"{self._log_prefix}: The latest task definition matches the \"\n                    \"required task definition; using that instead of registering a new \"\n                    \" one.\"\n                )\n                task_definition_arn = latest_task_definition[\"taskDefinitionArn\"]\n            else:\n                if task_definition_arn:\n                    self.logger.warning(\n                        f\"{self._log_prefix}: Settings require changes to the linked \"\n                        \"task definition. A new task definition will be registered. 
\"\n                        + (\n                            \"Enable DEBUG level logs to see the difference.\"\n                            if self.logger.level > logging.DEBUG\n                            else \"\"\n                        )\n                    )\n                    self.logger.debug(\n                        f\"{self._log_prefix}: Diff for requested task definition\"\n                        + _pretty_diff(requested_task_definition, task_definition)\n                    )\n                else:\n                    self.logger.info(\n                        f\"{self._log_prefix}: Registering task definition...\"\n                    )\n                    self.logger.debug(\n                        \"Task definition payload\\n\" + yaml.dump(task_definition)\n                    )\n\n                task_definition_arn = self._register_task_definition(\n                    ecs_client, task_definition\n                )\n                new_task_definition_registered = True\n\n        if task_definition.get(\"networkMode\") == \"awsvpc\":\n            network_config = self._load_vpc_network_config(self.vpc_id, boto_session)\n        else:\n            network_config = None\n\n        task_run = self._prepare_task_run(\n            network_config=network_config,\n            task_definition_arn=task_definition_arn,\n        )\n        self.logger.info(f\"{self._log_prefix}: Creating task run...\")\n        self.logger.debug(\"Task run payload\\n\" + yaml.dump(task_run))\n\n        try:\n            task = self._run_task(ecs_client, task_run)\n            task_arn = task[\"taskArn\"]\n            cluster_arn = task[\"clusterArn\"]\n        except Exception as exc:\n            self._report_task_run_creation_failure(task_run, exc)\n\n        # Raises an exception if the task does not start\n        self.logger.info(f\"{self._log_prefix}: Waiting for task run to start...\")\n        self._wait_for_task_start(\n            task_arn, cluster_arn, ecs_client, timeout=self.task_start_timeout_seconds\n        )\n\n        return task_arn, cluster_arn, task_definition, new_task_definition_registered\n\n    def _watch_task_and_get_exit_code(\n        self,\n        task_arn: str,\n        cluster_arn: str,\n        task_definition: dict,\n        deregister_task_definition: bool,\n        boto_session: boto3.Session,\n        ecs_client: _ECSClient,\n    ) -> Optional[int]:\n        \"\"\"\n        Wait for the task run to complete and retrieve the exit code of the Prefect\n        container.\n        \"\"\"\n\n        # Wait for completion and stream logs\n        task = self._wait_for_task_finish(\n            task_arn, cluster_arn, task_definition, ecs_client, boto_session\n        )\n\n        if deregister_task_definition:\n            ecs_client.deregister_task_definition(\n                taskDefinition=task[\"taskDefinitionArn\"]\n            )\n\n        # Check the status code of the Prefect container\n        prefect_container = get_prefect_container(task[\"containers\"])\n        assert (\n            prefect_container is not None\n        ), f\"'prefect' container missing from task: {task}\"\n        status_code = prefect_container.get(\"exitCode\")\n        self._report_container_status_code(PREFECT_ECS_CONTAINER_NAME, status_code)\n\n        return status_code\n\n    def _task_definitions_equal(self, taskdef_1, taskdef_2) -> bool:\n        \"\"\"\n        Compare two task definitions.\n\n        Since one may come from the AWS API and have populated defaults, we do 
our best\n        to homogenize the definitions without changing their meaning.\n        \"\"\"\n        if taskdef_1 == taskdef_2:\n            return True\n\n        if taskdef_1 is None or taskdef_2 is None:\n            return False\n\n        taskdef_1 = copy.deepcopy(taskdef_1)\n        taskdef_2 = copy.deepcopy(taskdef_2)\n\n        def _set_aws_defaults(taskdef):\n            \"\"\"Set defaults that AWS would set after registration\"\"\"\n            container_definitions = taskdef.get(\"containerDefinitions\", [])\n            essential = any(\n                container.get(\"essential\") for container in container_definitions\n            )\n            if not essential:\n                container_definitions[0].setdefault(\"essential\", True)\n\n            taskdef.setdefault(\"networkMode\", \"bridge\")\n\n        _set_aws_defaults(taskdef_1)\n        _set_aws_defaults(taskdef_2)\n\n        def _drop_empty_keys(dict_):\n            \"\"\"Recursively drop keys with 'empty' values\"\"\"\n            for key, value in tuple(dict_.items()):\n                if not value:\n                    dict_.pop(key)\n                if isinstance(value, dict):\n                    _drop_empty_keys(value)\n                if isinstance(value, list):\n                    for v in value:\n                        if isinstance(v, dict):\n                            _drop_empty_keys(v)\n\n        _drop_empty_keys(taskdef_1)\n        _drop_empty_keys(taskdef_2)\n\n        # Clear fields that change on registration for comparison\n        for field in POST_REGISTRATION_FIELDS:\n            taskdef_1.pop(field, None)\n            taskdef_2.pop(field, None)\n\n        return taskdef_1 == taskdef_2\n\n    def preview(self) -> str:\n        \"\"\"\n        Generate a preview of the task definition and task run that will be sent to AWS.\n        \"\"\"\n        preview = \"\"\n\n        task_definition_arn = self.task_definition_arn or \"<registered at runtime>\"\n\n        if self.task_definition or not self.task_definition_arn:\n            task_definition = self._prepare_task_definition(\n                self.task_definition or {},\n                region=self.aws_credentials.region_name\n                or \"<loaded from client at runtime>\",\n            )\n            preview += \"---\\n# Task definition\\n\"\n            preview += yaml.dump(task_definition)\n            preview += \"\\n\"\n        else:\n            task_definition = None\n\n        if task_definition and task_definition.get(\"networkMode\") == \"awsvpc\":\n            vpc = \"the default VPC\" if not self.vpc_id else self.vpc_id\n            network_config = {\n                \"awsvpcConfiguration\": {\n                    \"subnets\": f\"<loaded from {vpc} at runtime>\",\n                    \"assignPublicIp\": \"ENABLED\",\n                }\n            }\n        else:\n            network_config = None\n\n        task_run = self._prepare_task_run(network_config, task_definition_arn)\n        preview += \"---\\n# Task run request\\n\"\n        preview += yaml.dump(task_run)\n\n        return preview\n\n    def _report_container_status_code(\n        self, name: str, status_code: Optional[int]\n    ) -> None:\n        \"\"\"\n        Display a log for the given container status code.\n        \"\"\"\n        if status_code is None:\n            self.logger.error(\n                f\"{self._log_prefix}: Task exited without reporting an exit status \"\n                f\"for container {name!r}.\"\n            )\n        elif 
status_code == 0:\n            self.logger.info(\n                f\"{self._log_prefix}: Container {name!r} exited successfully.\"\n            )\n        else:\n            self.logger.warning(\n                f\"{self._log_prefix}: Container {name!r} exited with non-zero exit \"\n                f\"code {status_code}.\"\n            )\n\n    def _report_task_run_creation_failure(self, task_run: dict, exc: Exception) -> None:\n        \"\"\"\n        Wrap common AWS task run creation failures with nicer user-facing messages.\n        \"\"\"\n        # AWS generates exception types at runtime so they must be captured a bit\n        # differently than normal.\n        if \"ClusterNotFoundException\" in str(exc):\n            cluster = task_run.get(\"cluster\", \"default\")\n            raise RuntimeError(\n                f\"Failed to run ECS task, cluster {cluster!r} not found. \"\n                \"Confirm that the cluster is configured in your region.\"\n            ) from exc\n        elif \"No Container Instances\" in str(exc) and self.launch_type == \"EC2\":\n            cluster = task_run.get(\"cluster\", \"default\")\n            raise RuntimeError(\n                f\"Failed to run ECS task, cluster {cluster!r} does not appear to \"\n                \"have any container instances associated with it. Confirm that you \"\n                \"have EC2 container instances available.\"\n            ) from exc\n        elif (\n            \"failed to validate logger args\" in str(exc)\n            and \"AccessDeniedException\" in str(exc)\n            and self.configure_cloudwatch_logs\n        ):\n            raise RuntimeError(\n                \"Failed to run ECS task, the attached execution role does not appear \"\n                \"to have sufficient permissions. Ensure that the execution role \"\n                f\"{self.execution_role!r} has permissions logs:CreateLogStream, \"\n                \"logs:CreateLogGroup, and logs:PutLogEvents.\"\n            )\n        else:\n            raise\n\n    def _watch_task_run(\n        self,\n        task_arn: str,\n        cluster_arn: str,\n        ecs_client: _ECSClient,\n        current_status: str = \"UNKNOWN\",\n        until_status: str = None,\n        timeout: int = None,\n    ) -> Generator[None, None, dict]:\n        \"\"\"\n        Watches an ECS task run by querying every `poll_interval` seconds. After each\n        query, the retrieved task is yielded. This function returns when the task run\n        reaches a STOPPED status or the provided `until_status`.\n\n        Emits a log each time the status changes.\n        \"\"\"\n        last_status = status = current_status\n        t0 = time.time()\n        while status != until_status:\n            tasks = ecs_client.describe_tasks(\n                tasks=[task_arn], cluster=cluster_arn, include=[\"TAGS\"]\n            )[\"tasks\"]\n\n            if tasks:\n                task = tasks[0]\n\n                status = task[\"lastStatus\"]\n                if status != last_status:\n                    self.logger.info(f\"{self._log_prefix}: Status is {status}.\")\n\n                yield task\n\n                # No point in continuing if the status is final\n                if status == \"STOPPED\":\n                    break\n\n                last_status = status\n\n            else:\n                # Intermittently, the task will not be described. 
We wat to respect the\n                # watch timeout though.\n                self.logger.debug(f\"{self._log_prefix}: Task not found.\")\n\n            elapsed_time = time.time() - t0\n            if timeout is not None and elapsed_time > timeout:\n                raise RuntimeError(\n                    f\"Timed out after {elapsed_time}s while watching task for status \"\n                    f\"{until_status or 'STOPPED'}\"\n                )\n            time.sleep(self.task_watch_poll_interval)\n\n    def _wait_for_task_start(\n        self, task_arn: str, cluster_arn: str, ecs_client: _ECSClient, timeout: int\n    ) -> dict:\n        \"\"\"\n        Waits for an ECS task run to reach a RUNNING status.\n\n        If a STOPPED status is reached instead, an exception is raised indicating the\n        reason that the task run did not start.\n        \"\"\"\n        for task in self._watch_task_run(\n            task_arn, cluster_arn, ecs_client, until_status=\"RUNNING\", timeout=timeout\n        ):\n            # TODO: It is possible that the task has passed _through_ a RUNNING\n            #       status during the polling interval. In this case, there is not an\n            #       exception to raise.\n            if task[\"lastStatus\"] == \"STOPPED\":\n                code = task.get(\"stopCode\")\n                reason = task.get(\"stoppedReason\")\n                # Generate a dynamic exception type from the AWS name\n                raise type(code, (RuntimeError,), {})(reason)\n\n        return task\n\n    def _wait_for_task_finish(\n        self,\n        task_arn: str,\n        cluster_arn: str,\n        task_definition: dict,\n        ecs_client: _ECSClient,\n        boto_session: boto3.Session,\n    ):\n        \"\"\"\n        Watch an ECS task until it reaches a STOPPED status.\n\n        If configured, logs from the Prefect container are streamed to stderr.\n\n        Returns a description of the task on completion.\n        \"\"\"\n        can_stream_output = False\n\n        if self.stream_output:\n            container_def = get_prefect_container(\n                task_definition[\"containerDefinitions\"]\n            )\n            if not container_def:\n                self.logger.warning(\n                    f\"{self._log_prefix}: Prefect container definition not found in \"\n                    \"task definition. Output cannot be streamed.\"\n                )\n            elif not container_def.get(\"logConfiguration\"):\n                self.logger.warning(\n                    f\"{self._log_prefix}: Logging configuration not found on task. \"\n                    \"Output cannot be streamed.\"\n                )\n            elif not container_def[\"logConfiguration\"].get(\"logDriver\") == \"awslogs\":\n                self.logger.warning(\n                    f\"{self._log_prefix}: Logging configuration uses unsupported \"\n                    \" driver {container_def['logConfiguration'].get('logDriver')!r}. 
\"\n                    \"Output cannot be streamed.\"\n                )\n            else:\n                # Prepare to stream the output\n                log_config = container_def[\"logConfiguration\"][\"options\"]\n                logs_client = boto_session.client(\"logs\")\n                can_stream_output = True\n                # Track the last log timestamp to prevent double display\n                last_log_timestamp: Optional[int] = None\n                # Determine the name of the stream as \"prefix/container/run-id\"\n                stream_name = \"/\".join(\n                    [\n                        log_config[\"awslogs-stream-prefix\"],\n                        PREFECT_ECS_CONTAINER_NAME,\n                        task_arn.rsplit(\"/\")[-1],\n                    ]\n                )\n                self.logger.info(\n                    f\"{self._log_prefix}: Streaming output from container \"\n                    f\"{PREFECT_ECS_CONTAINER_NAME!r}...\"\n                )\n\n        for task in self._watch_task_run(\n            task_arn, cluster_arn, ecs_client, current_status=\"RUNNING\"\n        ):\n            if self.stream_output and can_stream_output:\n                # On each poll for task run status, also retrieve available logs\n                last_log_timestamp = self._stream_available_logs(\n                    logs_client,\n                    log_group=log_config[\"awslogs-group\"],\n                    log_stream=stream_name,\n                    last_log_timestamp=last_log_timestamp,\n                )\n\n        return task\n\n    def _stream_available_logs(\n        self,\n        logs_client: Any,\n        log_group: str,\n        log_stream: str,\n        last_log_timestamp: Optional[int] = None,\n    ) -> Optional[int]:\n        \"\"\"\n        Stream logs from the given log group and stream since the last log timestamp.\n\n        Will continue on paginated responses until all logs are returned.\n\n        Returns the last log timestamp which can be used to call this method in the\n        future.\n        \"\"\"\n        last_log_stream_token = \"NO-TOKEN\"\n        next_log_stream_token = None\n\n        # AWS will return the same token that we send once the end of the paginated\n        # response is reached\n        while last_log_stream_token != next_log_stream_token:\n            last_log_stream_token = next_log_stream_token\n\n            request = {\n                \"logGroupName\": log_group,\n                \"logStreamName\": log_stream,\n            }\n\n            if last_log_stream_token is not None:\n                request[\"nextToken\"] = last_log_stream_token\n\n            if last_log_timestamp is not None:\n                # Bump the timestamp by one ms to avoid retrieving the last log again\n                request[\"startTime\"] = last_log_timestamp + 1\n\n            try:\n                response = logs_client.get_log_events(**request)\n            except Exception:\n                self.logger.error(\n                    (\n                        f\"{self._log_prefix}: Failed to read log events with request \"\n                        f\"{request}\"\n                    ),\n                    exc_info=True,\n                )\n                return last_log_timestamp\n\n            log_events = response[\"events\"]\n            for log_event in log_events:\n                # TODO: This doesn't forward to the local logger, which can be\n                #       bad for customizing handling and understanding where 
the\n                #       log is coming from, but it avoid nesting logger information\n                #       when the content is output from a Prefect logger on the\n                #       running infrastructure\n                print(log_event[\"message\"], file=sys.stderr)\n\n                if (\n                    last_log_timestamp is None\n                    or log_event[\"timestamp\"] > last_log_timestamp\n                ):\n                    last_log_timestamp = log_event[\"timestamp\"]\n\n            next_log_stream_token = response.get(\"nextForwardToken\")\n            if not log_events:\n                # Stop reading pages if there was no data\n                break\n\n        return last_log_timestamp\n\n    def _retrieve_latest_task_definition(\n        self, ecs_client: _ECSClient, task_definition_family: str\n    ) -> Optional[dict]:\n        try:\n            latest_task_definition = self._retrieve_task_definition(\n                ecs_client, task_definition_family\n            )\n        except Exception:\n            # The family does not exist...\n            return None\n\n        return latest_task_definition\n\n    def _retrieve_task_definition(\n        self, ecs_client: _ECSClient, task_definition_arn: str\n    ):\n        \"\"\"\n        Retrieve an existing task definition from AWS.\n        \"\"\"\n        self.logger.info(\n            f\"{self._log_prefix}: Retrieving task definition {task_definition_arn!r}...\"\n        )\n        response = ecs_client.describe_task_definition(\n            taskDefinition=task_definition_arn\n        )\n        return response[\"taskDefinition\"]\n\n    def _register_task_definition(\n        self, ecs_client: _ECSClient, task_definition: dict\n    ) -> str:\n        \"\"\"\n        Register a new task definition with AWS.\n        \"\"\"\n        # TODO: Consider including a global cache for this task definition since\n        #       registration of task definitions is frequently rate limited\n        task_definition_request = copy.deepcopy(task_definition)\n\n        # We need to remove some fields here if copying an existing task definition\n        for field in POST_REGISTRATION_FIELDS:\n            task_definition_request.pop(field, None)\n\n        response = ecs_client.register_task_definition(**task_definition_request)\n        return response[\"taskDefinition\"][\"taskDefinitionArn\"]\n\n    def _prepare_task_definition(self, task_definition: dict, region: str) -> dict:\n        \"\"\"\n        Prepare a task definition by inferring any defaults and merging overrides.\n        \"\"\"\n        task_definition = copy.deepcopy(task_definition)\n\n        # Configure the Prefect runtime container\n        task_definition.setdefault(\"containerDefinitions\", [])\n        container = get_prefect_container(task_definition[\"containerDefinitions\"])\n        if container is None:\n            container = {\"name\": PREFECT_ECS_CONTAINER_NAME}\n            task_definition[\"containerDefinitions\"].append(container)\n\n        if self.image:\n            container[\"image\"] = self.image\n\n        # Remove any keys that have been explicitly \"unset\"\n        unset_keys = {key for key, value in self.env.items() if value is None}\n        for item in tuple(container.get(\"environment\", [])):\n            if item[\"name\"] in unset_keys:\n                container[\"environment\"].remove(item)\n\n        if self.configure_cloudwatch_logs:\n            container[\"logConfiguration\"] = {\n                
\"logDriver\": \"awslogs\",\n                \"options\": {\n                    \"awslogs-create-group\": \"true\",\n                    \"awslogs-group\": \"prefect\",\n                    \"awslogs-region\": region,\n                    \"awslogs-stream-prefix\": self.name or \"prefect\",\n                    **self.cloudwatch_logs_options,\n                },\n            }\n\n        family = self.family or task_definition.get(\"family\") or ECS_DEFAULT_FAMILY\n        task_definition[\"family\"] = slugify(\n            family,\n            max_length=255,\n            regex_pattern=r\"[^a-zA-Z0-9-_]+\",\n        )\n\n        # CPU and memory are required in some cases, retrieve the value to use\n        cpu = self.cpu or task_definition.get(\"cpu\") or ECS_DEFAULT_CPU\n        memory = self.memory or task_definition.get(\"memory\") or ECS_DEFAULT_MEMORY\n\n        if self.launch_type == \"FARGATE\" or self.launch_type == \"FARGATE_SPOT\":\n            # Task level memory and cpu are required when using fargate\n            task_definition[\"cpu\"] = str(cpu)\n            task_definition[\"memory\"] = str(memory)\n\n            # The FARGATE compatibility is required if it will be used as as launch type\n            requires_compatibilities = task_definition.setdefault(\n                \"requiresCompatibilities\", []\n            )\n            if \"FARGATE\" not in requires_compatibilities:\n                task_definition[\"requiresCompatibilities\"].append(\"FARGATE\")\n\n            # Only the 'awsvpc' network mode is supported when using FARGATE\n            # However, we will not enforce that here if the user has set it\n            network_mode = task_definition.setdefault(\"networkMode\", \"awsvpc\")\n\n            if network_mode != \"awsvpc\":\n                warnings.warn(\n                    f\"Found network mode {network_mode!r} which is not compatible with \"\n                    f\"launch type {self.launch_type!r}. 
Use either the 'EC2' launch \"\n                    \"type or the 'awsvpc' network mode.\"\n                )\n\n        elif self.launch_type == \"EC2\":\n            # Container level memory and cpu are required when using ec2\n            container.setdefault(\"cpu\", int(cpu))\n            container.setdefault(\"memory\", int(memory))\n\n        if self.execution_role_arn and not self.task_definition_arn:\n            task_definition[\"executionRoleArn\"] = self.execution_role_arn\n\n        if self.configure_cloudwatch_logs and not task_definition.get(\n            \"executionRoleArn\"\n        ):\n            raise ValueError(\n                \"An execution role arn must be set on the task definition to use \"\n                \"`configure_cloudwatch_logs` or `stream_logs` but no execution role \"\n                \"was found on the task definition.\"\n            )\n\n        return task_definition\n\n    def _prepare_task_run_overrides(self) -> dict:\n        \"\"\"\n        Prepare the 'overrides' payload for a task run request.\n        \"\"\"\n        overrides = {\n            \"containerOverrides\": [\n                {\n                    \"name\": PREFECT_ECS_CONTAINER_NAME,\n                    \"environment\": [\n                        {\"name\": key, \"value\": value}\n                        for key, value in {\n                            **self._base_environment(),\n                            **self.env,\n                        }.items()\n                        if value is not None\n                    ],\n                }\n            ],\n        }\n\n        prefect_container_overrides = overrides[\"containerOverrides\"][0]\n\n        if self.command:\n            prefect_container_overrides[\"command\"] = self.command\n\n        if self.execution_role_arn:\n            overrides[\"executionRoleArn\"] = self.execution_role_arn\n\n        if self.task_role_arn:\n            overrides[\"taskRoleArn\"] = self.task_role_arn\n\n        if self.memory:\n            overrides[\"memory\"] = str(self.memory)\n            prefect_container_overrides.setdefault(\"memory\", self.memory)\n\n        if self.cpu:\n            overrides[\"cpu\"] = str(self.cpu)\n            prefect_container_overrides.setdefault(\"cpu\", self.cpu)\n\n        return overrides\n\n    def _load_vpc_network_config(\n        self, vpc_id: Optional[str], boto_session: boto3.Session\n    ) -> dict:\n        \"\"\"\n        Load settings from a specific VPC or the default VPC and generate a task\n        run request's network configuration.\n        \"\"\"\n        ec2_client = boto_session.client(\"ec2\")\n        vpc_message = \"the default VPC\" if not vpc_id else f\"VPC with ID {vpc_id}\"\n\n        if not vpc_id:\n            # Retrieve the default VPC\n            describe = {\"Filters\": [{\"Name\": \"isDefault\", \"Values\": [\"true\"]}]}\n        else:\n            describe = {\"VpcIds\": [vpc_id]}\n\n        vpcs = ec2_client.describe_vpcs(**describe)[\"Vpcs\"]\n        if not vpcs:\n            help_message = (\n                \"Pass an explicit `vpc_id` or configure a default VPC.\"\n                if not vpc_id\n                else \"Check that the VPC exists in the current region.\"\n            )\n            raise ValueError(\n                f\"Failed to find {vpc_message}. \"\n                \"Network configuration cannot be inferred. 
\" + help_message\n            )\n\n        vpc_id = vpcs[0][\"VpcId\"]\n        subnets = ec2_client.describe_subnets(\n            Filters=[{\"Name\": \"vpc-id\", \"Values\": [vpc_id]}]\n        )[\"Subnets\"]\n        if not subnets:\n            raise ValueError(\n                f\"Failed to find subnets for {vpc_message}. \"\n                \"Network configuration cannot be inferred.\"\n            )\n\n        return {\n            \"awsvpcConfiguration\": {\n                \"subnets\": [s[\"SubnetId\"] for s in subnets],\n                \"assignPublicIp\": \"ENABLED\",\n                \"securityGroups\": [],\n            }\n        }\n\n    def _prepare_task_run(\n        self,\n        network_config: Optional[dict],\n        task_definition_arn: str,\n    ) -> dict:\n        \"\"\"\n        Prepare a task run request payload.\n        \"\"\"\n        task_run = {\n            \"overrides\": self._prepare_task_run_overrides(),\n            \"tags\": [\n                {\n                    \"key\": slugify(\n                        key,\n                        regex_pattern=_TAG_REGEX,\n                        allow_unicode=True,\n                        lowercase=False,\n                    ),\n                    \"value\": slugify(\n                        value,\n                        regex_pattern=_TAG_REGEX,\n                        allow_unicode=True,\n                        lowercase=False,\n                    ),\n                }\n                for key, value in self.labels.items()\n            ],\n            \"taskDefinition\": task_definition_arn,\n        }\n\n        if self.cluster:\n            task_run[\"cluster\"] = self.cluster\n\n        if self.launch_type:\n            if self.launch_type == \"FARGATE_SPOT\":\n                task_run[\"capacityProviderStrategy\"] = [\n                    {\"capacityProvider\": \"FARGATE_SPOT\", \"weight\": 1}\n                ]\n            else:\n                task_run[\"launchType\"] = self.launch_type\n\n        if network_config:\n            task_run[\"networkConfiguration\"] = network_config\n\n        task_run = self.task_customizations.apply(task_run)\n        return task_run\n\n    def _run_task(self, ecs_client: _ECSClient, task_run: dict):\n        \"\"\"\n        Run the task using the ECS client.\n\n        This is isolated as a separate method for testing purposes.\n        \"\"\"\n        return ecs_client.run_task(**task_run)[\"tasks\"][0]\n
    "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.Config","title":"Config","text":"

    Configuration of pydantic.

    Source code in prefect_aws/ecs.py
    class Config:\n    \"\"\"Configuration of pydantic.\"\"\"\n\n    # Support serialization of the 'JsonPatch' type\n    arbitrary_types_allowed = True\n    json_encoders = {JsonPatch: lambda p: p.patch}\n
    "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.cast_customizations_to_a_json_patch","title":"cast_customizations_to_a_json_patch","text":"

    Casts lists to JsonPatch instances.

    Source code in prefect_aws/ecs.py
    @validator(\"task_customizations\", pre=True)\ndef cast_customizations_to_a_json_patch(\n    cls, value: Union[List[Dict], JsonPatch, str]\n) -> JsonPatch:\n    \"\"\"\n    Casts lists to JsonPatch instances.\n    \"\"\"\n    if isinstance(value, str):\n        value = json.loads(value)\n    if isinstance(value, list):\n        return JsonPatch(value)\n    return value  # type: ignore\n
    "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.cloudwatch_logs_options_requires_configure_cloudwatch_logs","title":"cloudwatch_logs_options_requires_configure_cloudwatch_logs","text":"

    Enforces that configure_cloudwatch_logs is enabled when cloudwatch_logs_options are provided.

    Source code in prefect_aws/ecs.py
    @root_validator\ndef cloudwatch_logs_options_requires_configure_cloudwatch_logs(\n    cls, values: dict\n) -> dict:\n    \"\"\"\n    Enforces that an execution role arn is provided (or could be provided by a\n    runtime task definition) when configuring logging.\n    \"\"\"\n    if values.get(\"cloudwatch_logs_options\") and not values.get(\n        \"configure_cloudwatch_logs\"\n    ):\n        raise ValueError(\n            \"`configure_cloudwatch_log` must be enabled to use \"\n            \"`cloudwatch_logs_options`.\"\n        )\n    return values\n
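    A minimal sketch of what this validator enforces, assuming prefect-aws is installed; pydantic surfaces the failure as a ValidationError (a ValueError subclass), and the log option shown is illustrative:

    from prefect_aws.ecs import ECSTask

    try:
        ECSTask(
            configure_cloudwatch_logs=False,
            cloudwatch_logs_options={"awslogs-stream-prefix": "my-flows"},  # illustrative option
        )
    except ValueError as exc:
        print(exc)  # explains that `configure_cloudwatch_log` must be enabled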
    "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.configure_cloudwatch_logs_requires_execution_role_arn","title":"configure_cloudwatch_logs_requires_execution_role_arn","text":"

    Enforces that an execution role arn is provided (or could be provided by a runtime task definition) when configuring logging.

    Source code in prefect_aws/ecs.py
    @root_validator\ndef configure_cloudwatch_logs_requires_execution_role_arn(\n    cls, values: dict\n) -> dict:\n    \"\"\"\n    Enforces that an execution role arn is provided (or could be provided by a\n    runtime task definition) when configuring logging.\n    \"\"\"\n    if (\n        values.get(\"configure_cloudwatch_logs\")\n        and not values.get(\"execution_role_arn\")\n        # Do not raise if they've linked to another task definition or provided\n        # it without using our shortcuts\n        and not values.get(\"task_definition_arn\")\n        and not (values.get(\"task_definition\") or {}).get(\"executionRoleArn\")\n    ):\n        raise ValueError(\n            \"An `execution_role_arn` must be provided to use \"\n            \"`configure_cloudwatch_logs` or `stream_logs`.\"\n        )\n    return values\n
    "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.dict","title":"dict","text":"

    Convert to a dictionary.

    Source code in prefect_aws/ecs.py
    def dict(self, *args, **kwargs) -> Dict:\n    \"\"\"\n    Convert to a dictionary.\n    \"\"\"\n    # Support serialization of the 'JsonPatch' type\n    d = super().dict(*args, **kwargs)\n    d[\"task_customizations\"] = self.task_customizations.patch\n    return d\n
    "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.generate_work_pool_base_job_template","title":"generate_work_pool_base_job_template async","text":"

    Generate a base job template for a cloud-run work pool with the same configuration as this block.

    Returns:

    dict: a base job template for a cloud-run work pool
    Source code in prefect_aws/ecs.py
    async def generate_work_pool_base_job_template(self) -> dict:\n    \"\"\"\n    Generate a base job template for a cloud-run work pool with the same\n    configuration as this block.\n\n    Returns:\n        - dict: a base job template for a cloud-run work pool\n    \"\"\"\n    base_job_template = copy.deepcopy(ECSWorker.get_default_base_job_template())\n    for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n        if key == \"command\":\n            base_job_template[\"variables\"][\"properties\"][\"command\"][\n                \"default\"\n            ] = shlex.join(value)\n        elif key in [\n            \"type\",\n            \"block_type_slug\",\n            \"_block_document_id\",\n            \"_block_document_name\",\n            \"_is_anonymous\",\n            \"task_customizations\",\n        ]:\n            continue\n        elif key == \"aws_credentials\":\n            if not self.aws_credentials._block_document_id:\n                raise BlockNotSavedError(\n                    \"It looks like you are trying to use a block that\"\n                    \" has not been saved. Please call `.save` on your block\"\n                    \" before publishing it as a work pool.\"\n                )\n            base_job_template[\"variables\"][\"properties\"][\"aws_credentials\"][\n                \"default\"\n            ] = {\n                \"$ref\": {\n                    \"block_document_id\": str(\n                        self.aws_credentials._block_document_id\n                    )\n                }\n            }\n        elif key == \"task_definition\":\n            base_job_template[\"job_configuration\"][\"task_definition\"] = value\n        elif key in base_job_template[\"variables\"][\"properties\"]:\n            base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n        else:\n            self.logger.warning(\n                f\"Variable {key!r} is not supported by Cloud Run work pools.\"\n                \" Skipping.\"\n            )\n\n    if self.task_customizations:\n        network_config_patches = JsonPatch(\n            [\n                patch\n                for patch in self.task_customizations\n                if \"networkConfiguration\" in patch[\"path\"]\n            ]\n        )\n        minimal_network_config = assemble_document_for_patches(\n            network_config_patches\n        )\n        if minimal_network_config:\n            minimal_network_config_with_patches = network_config_patches.apply(\n                minimal_network_config\n            )\n            base_job_template[\"variables\"][\"properties\"][\"network_configuration\"][\n                \"default\"\n            ] = minimal_network_config_with_patches[\"networkConfiguration\"]\n        try:\n            base_job_template[\"job_configuration\"][\n                \"task_run_request\"\n            ] = self.task_customizations.apply(\n                base_job_template[\"job_configuration\"][\"task_run_request\"]\n            )\n        except JsonPointerException:\n            self.logger.warning(\n                \"Unable to apply task customizations to the base job template.\"\n                \"You may need to update the template manually.\"\n            )\n\n    return base_job_template\n
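    A hypothetical usage sketch: load a previously saved block (the block name `my-ecs-task` is illustrative) and generate a base job template that could seed an ECS work pool.

    import asyncio

    from prefect_aws.ecs import ECSTask

    async def main():
        block = await ECSTask.load("my-ecs-task")          # assumes the block was saved earlier
        template = await block.generate_work_pool_base_job_template()
        print(list(template["variables"]["properties"]))   # variables exposed by the work pool

    asyncio.run(main())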
    "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.get_corresponding_worker_type","title":"get_corresponding_worker_type staticmethod","text":"

    Return the corresponding worker type for this infrastructure block.

    Source code in prefect_aws/ecs.py
    @staticmethod\ndef get_corresponding_worker_type() -> str:\n    \"\"\"Return the corresponding worker type for this infrastructure block.\"\"\"\n    return ECSWorker.type\n
    "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.image_is_required","title":"image_is_required","text":"

    Enforces that an image is available if image is None.

    Source code in prefect_aws/ecs.py
    @root_validator(pre=True)\ndef image_is_required(cls, values: dict) -> dict:\n    \"\"\"\n    Enforces that an image is available if image is `None`.\n    \"\"\"\n    has_image = bool(values.get(\"image\"))\n    has_task_definition_arn = bool(values.get(\"task_definition_arn\"))\n\n    # The image can only be null when the task_definition_arn is set\n    if has_image or has_task_definition_arn:\n        return values\n\n    prefect_container = (\n        get_prefect_container(\n            (values.get(\"task_definition\") or {}).get(\"containerDefinitions\", [])\n        )\n        or {}\n    )\n    image_in_task_definition = prefect_container.get(\"image\")\n\n    # If a task_definition is given with a prefect container image, use that value\n    if image_in_task_definition:\n        values[\"image\"] = image_in_task_definition\n    # Otherwise, it should default to the Prefect base image\n    else:\n        values[\"image\"] = get_prefect_image_name()\n    return values\n
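    For illustration, when neither an image nor a task definition ARN is provided, `image` falls back to the Prefect base image matching the local Prefect and Python versions; the exact tag printed below depends on your environment.

    from prefect_aws.ecs import ECSTask

    print(ECSTask().image)  # e.g. "prefecthq/prefect:2-latest"-style tag, version-dependent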
    "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.kill","title":"kill async","text":"

    Kill a task running on ECS.

    Parameters:

    identifier (str, required)

    A cluster and task arn combination. This should match a value yielded by ECSTask.run.

    Source code in prefect_aws/ecs.py
    @sync_compatible\nasync def kill(self, identifier: str, grace_seconds: int = 30) -> None:\n    \"\"\"\n    Kill a task running on ECS.\n\n    Args:\n        identifier: A cluster and task arn combination. This should match a value\n            yielded by `ECSTask.run`.\n    \"\"\"\n    if grace_seconds != 30:\n        self.logger.warning(\n            f\"Kill grace period of {grace_seconds}s requested, but AWS does not \"\n            \"support dynamic grace period configuration so 30s will be used. \"\n            \"See https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html for configuration of grace periods.\"  # noqa\n        )\n    cluster, task = parse_task_identifier(identifier)\n    await run_sync_in_worker_thread(self._stop_task, cluster, task)\n
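    A minimal sketch of stopping a task, assuming the identifier was previously returned by `ECSTask.run`; the cluster name and ARN below are placeholders.

    from prefect_aws.ecs import ECSTask

    ecs_task = ECSTask(cluster="my-cluster")  # placeholder cluster name
    identifier = "my-cluster::arn:aws:ecs:us-east-1:123456789012:task/my-cluster/abc123"  # placeholder
    ecs_task.kill(identifier)  # grace periods other than 30s are not honored by ECS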
    "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.prepare_for_flow_run","title":"prepare_for_flow_run","text":"

    Return a copy of the block that is prepared to execute a flow run.

    Source code in prefect_aws/ecs.py
    def prepare_for_flow_run(\n    self: Self,\n    flow_run: \"FlowRun\",\n    deployment: Optional[\"Deployment\"] = None,\n    flow: Optional[\"Flow\"] = None,\n) -> Self:\n    \"\"\"\n    Return an copy of the block that is prepared to execute a flow run.\n    \"\"\"\n    new_family = None\n\n    # Update the family if not specified elsewhere\n    if (\n        not self.family\n        and not self.task_definition_arn\n        and not (self.task_definition and self.task_definition.get(\"family\"))\n    ):\n        if flow and deployment:\n            new_family = f\"{ECS_DEFAULT_FAMILY}__{flow.name}__{deployment.name}\"\n        elif flow and not deployment:\n            new_family = f\"{ECS_DEFAULT_FAMILY}__{flow.name}\"\n        elif deployment and not flow:\n            # This is a weird case and should not be see in the wild\n            new_family = f\"{ECS_DEFAULT_FAMILY}__unknown-flow__{deployment.name}\"\n\n    new = super().prepare_for_flow_run(flow_run, deployment=deployment, flow=flow)\n\n    if new_family:\n        return new.copy(update={\"family\": new_family})\n    else:\n        # Avoid an extra copy if not needed\n        return new\n
    "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.preview","title":"preview","text":"

    Generate a preview of the task definition and task run that will be sent to AWS.

    Source code in prefect_aws/ecs.py
    def preview(self) -> str:\n    \"\"\"\n    Generate a preview of the task definition and task run that will be sent to AWS.\n    \"\"\"\n    preview = \"\"\n\n    task_definition_arn = self.task_definition_arn or \"<registered at runtime>\"\n\n    if self.task_definition or not self.task_definition_arn:\n        task_definition = self._prepare_task_definition(\n            self.task_definition or {},\n            region=self.aws_credentials.region_name\n            or \"<loaded from client at runtime>\",\n        )\n        preview += \"---\\n# Task definition\\n\"\n        preview += yaml.dump(task_definition)\n        preview += \"\\n\"\n    else:\n        task_definition = None\n\n    if task_definition and task_definition.get(\"networkMode\") == \"awsvpc\":\n        vpc = \"the default VPC\" if not self.vpc_id else self.vpc_id\n        network_config = {\n            \"awsvpcConfiguration\": {\n                \"subnets\": f\"<loaded from {vpc} at runtime>\",\n                \"assignPublicIp\": \"ENABLED\",\n            }\n        }\n    else:\n        network_config = None\n\n    task_run = self._prepare_task_run(network_config, task_definition_arn)\n    preview += \"---\\n# Task run request\\n\"\n    preview += yaml.dump(task_run)\n\n    return preview\n
    "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.run","title":"run async","text":"

    Run the configured task on ECS.

    Source code in prefect_aws/ecs.py
    @sync_compatible\nasync def run(self, task_status: Optional[TaskStatus] = None) -> ECSTaskResult:\n    \"\"\"\n    Run the configured task on ECS.\n    \"\"\"\n    boto_session, ecs_client = await run_sync_in_worker_thread(\n        self._get_session_and_client\n    )\n\n    (\n        task_arn,\n        cluster_arn,\n        task_definition,\n        is_new_task_definition,\n    ) = await run_sync_in_worker_thread(\n        self._create_task_and_wait_for_start, boto_session, ecs_client\n    )\n\n    # Display a nice message indicating the command and image\n    command = self.command or get_prefect_container(\n        task_definition[\"containerDefinitions\"]\n    ).get(\"command\", [])\n    self.logger.info(\n        f\"{self._log_prefix}: Running command {' '.join(command)!r} \"\n        f\"in container {PREFECT_ECS_CONTAINER_NAME!r} ({self.image})...\"\n    )\n\n    # The task identifier is \"{cluster}::{task}\" where we use the configured cluster\n    # if set to preserve matching by name rather than arn\n    # Note \"::\" is used despite the Prefect standard being \":\" because ARNs contain\n    # single colons.\n    identifier = (self.cluster if self.cluster else cluster_arn) + \"::\" + task_arn\n\n    if task_status:\n        task_status.started(identifier)\n\n    status_code = await run_sync_in_worker_thread(\n        self._watch_task_and_get_exit_code,\n        task_arn,\n        cluster_arn,\n        task_definition,\n        is_new_task_definition and self.auto_deregister_task_definition,\n        boto_session,\n        ecs_client,\n    )\n\n    return ECSTaskResult(\n        identifier=identifier,\n        # If the container does not start the exit code can be null but we must\n        # still report a status code. We use a -1 to indicate a special code.\n        status_code=status_code if status_code is not None else -1,\n    )\n
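    A hedged, minimal example of running the block directly; it assumes valid AWS credentials and a reachable ECS cluster, and the image and command are illustrative only.

    from prefect_aws.ecs import ECSTask

    ecs_task = ECSTask(
        image="prefecthq/prefect:2-latest",   # illustrative image
        command=["echo", "hello from ECS"],
        cpu=256,
        memory=512,
    )
    result = ecs_task.run()                   # sync-compatible; blocks until the task stops
    print(result.identifier, result.status_code)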
    "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.set_default_configure_cloudwatch_logs","title":"set_default_configure_cloudwatch_logs","text":"

    Streaming output generally requires CloudWatch logs to be configured.

    To avoid entangled arguments in the simple case, configure_cloudwatch_logs defaults to matching the value of stream_output.

    Source code in prefect_aws/ecs.py
    @root_validator(pre=True)\ndef set_default_configure_cloudwatch_logs(cls, values: dict) -> dict:\n    \"\"\"\n    Streaming output generally requires CloudWatch logs to be configured.\n\n    To avoid entangled arguments in the simple case, `configure_cloudwatch_logs`\n    defaults to matching the value of `stream_output`.\n    \"\"\"\n    configure_cloudwatch_logs = values.get(\"configure_cloudwatch_logs\")\n    if configure_cloudwatch_logs is None:\n        values[\"configure_cloudwatch_logs\"] = values.get(\"stream_output\")\n    return values\n
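    A brief illustration of the default, using a placeholder execution role ARN (required once CloudWatch logging is enabled):

    from prefect_aws.ecs import ECSTask

    task = ECSTask(
        stream_output=True,
        execution_role_arn="arn:aws:iam::123456789012:role/prefect-execution-role",  # placeholder
    )
    print(task.configure_cloudwatch_logs)  # True, mirrored from stream_output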
    "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTaskResult","title":"ECSTaskResult","text":"

    Bases: InfrastructureResult

    The result of a run of an ECS task

    Source code in prefect_aws/ecs.py
    class ECSTaskResult(InfrastructureResult):\n    \"\"\"The result of a run of an ECS task\"\"\"\n
    "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.get_container","title":"get_container","text":"

    Extract a container from a list of containers or container definitions. If not found, None is returned.

    Source code in prefect_aws/ecs.py
    def get_container(containers: List[dict], name: str) -> Optional[dict]:\n    \"\"\"\n    Extract a container from a list of containers or container definitions.\n    If not found, `None` is returned.\n    \"\"\"\n    for container in containers:\n        if container.get(\"name\") == name:\n            return container\n    return None\n
    "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.get_prefect_container","title":"get_prefect_container","text":"

    Extract the Prefect container from a list of containers or container definitions. If not found, None is returned.

    Source code in prefect_aws/ecs.py
    def get_prefect_container(containers: List[dict]) -> Optional[dict]:\n    \"\"\"\n    Extract the Prefect container from a list of containers or container definitions.\n    If not found, `None` is returned.\n    \"\"\"\n    return get_container(containers, PREFECT_ECS_CONTAINER_NAME)\n
    "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.parse_task_identifier","title":"parse_task_identifier","text":"

    Splits identifier into its cluster and task components, e.g. input \"cluster_name::task_arn\" outputs (\"cluster_name\", \"task_arn\").

    Source code in prefect_aws/ecs.py
    def parse_task_identifier(identifier: str) -> Tuple[str, str]:\n    \"\"\"\n    Splits identifier into its cluster and task components, e.g.\n    input \"cluster_name::task_arn\" outputs (\"cluster_name\", \"task_arn\").\n    \"\"\"\n    cluster, task = identifier.split(\"::\", maxsplit=1)\n    return cluster, task\n
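    For example, splitting a "{cluster}::{task}" identifier (the ARN below is a placeholder):

    from prefect_aws.ecs import parse_task_identifier

    cluster, task = parse_task_identifier(
        "my-cluster::arn:aws:ecs:us-east-1:123456789012:task/my-cluster/abc123"
    )
    print(cluster)  # "my-cluster"
    print(task)     # the task ARN portion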
    "},{"location":"integrations/prefect-aws/ecs_guide/","title":"ECS Worker Guide","text":""},{"location":"integrations/prefect-aws/ecs_guide/#why-use-ecs-for-flow-run-execution","title":"Why use ECS for flow run execution?","text":"

    ECS (Elastic Container Service) tasks are a good option for executing Prefect flow runs for several reasons:

    1. Scalability: ECS scales your infrastructure in response to demand, effectively managing Prefect flow runs. ECS automatically distributes containers across multiple instances based on demand.
    2. Flexibility: ECS lets you choose between AWS Fargate and Amazon EC2 for container operation. Fargate abstracts the underlying infrastructure, while EC2 has faster job start times and offers additional control over instance management and configuration.
    3. AWS Integration: Easily connect with other AWS services, such as AWS IAM and CloudWatch.
    4. Containerization: ECS supports Docker containers and offers managed execution. Containerization encourages reproducible deployments.
    "},{"location":"integrations/prefect-aws/ecs_guide/#ecs-flow-run-execution","title":"ECS flow run execution","text":"

    Prefect enables remote flow execution via workers and work pools. To learn more about these concepts, please see our deployment tutorial.

    For details on how workers and work pools are implemented for ECS, see the diagram below.

    %%{\n  init: {\n    'theme': 'base',\n    'themeVariables': {\n      'primaryColor': '#2D6DF6',\n      'primaryTextColor': '#fff',\n      'lineColor': '#FE5A14',\n      'secondaryColor': '#E04BF0',\n      'tertiaryColor': '#fff'\n    }\n  }\n}%%\ngraph TB\n\n  subgraph ecs_cluster[ECS cluster]\n    subgraph ecs_service[ECS service]\n      td_worker[Worker task definition] --> |defines| prefect_worker((Prefect worker))\n    end\n    prefect_worker -->|kicks off| ecs_task\n    fr_task_definition[Flow run task definition]\n\n\n    subgraph ecs_task[\"ECS task execution\"]\n    style ecs_task text-align:center,display:flex\n\n\n    flow_run((Flow run))\n\n    end\n    fr_task_definition -->|defines| ecs_task\n  end\n\n  subgraph prefect_cloud[Prefect Cloud]\n    subgraph prefect_workpool[ECS work pool]\n      workqueue[Work queue]\n    end\n  end\n\n  subgraph github[\"ECR\"]\n    flow_code{{\"Flow code\"}}\n  end\n  flow_code --> |pulls| ecs_task\n  prefect_worker -->|polls| workqueue\n  prefect_workpool -->|configures| fr_task_definition
    "},{"location":"integrations/prefect-aws/ecs_guide/#ecs-and-prefect","title":"ECS and Prefect","text":"

    ECS tasks != Prefect tasks

    An ECS task is not the same thing as a Prefect task.

    ECS tasks are groupings of containers that run within an ECS Cluster. An ECS task's behavior is determined by its task definition.

    An ECS task definition is the blueprint for the ECS task. It describes which Docker containers to run and what you want to have happen inside these containers.

    ECS tasks are instances of a task definition. A Task Execution launches container(s) as defined in the task definition until they are stopped or exit on their own. This setup is ideal for ephemeral processes such as a Prefect flow run.

    The ECS task running the Prefect worker should be run as an ECS service, given its long-running nature and need for auto-recovery in case of failure. An ECS service automatically replaces any task that fails, which is ideal for managing a long-running process such as a Prefect worker.

    When a Prefect flow is scheduled to run, it goes into the work pool specified in the flow's deployment. Work pools are typed according to the infrastructure the flow will run on. Flow runs scheduled in an ecs-typed work pool are executed as ECS tasks, and only Prefect ECS workers can poll an ecs-typed work pool.

    When the ECS worker receives a scheduled flow run from the ECS work pool it is polling, it spins up the specified infrastructure on AWS ECS. The worker knows to build an ECS task definition for each flow run based on the configuration specified in the work pool.

    Once the flow run completes, the ECS containers of the cluster are spun down to a single container that continues to run the Prefect worker. This worker continues polling for work from the Prefect work pool.

    If you specify a task definition ARN (Amazon Resource Name) in the work pool, the worker will use that ARN when spinning up the ECS Task, rather than creating a task definition from the fields supplied in the work pool configuration.

    You can use either EC2 or Fargate as the capacity provider. Fargate simplifies initiation, but lengthens infrastructure setup time for each flow run. Using EC2 for the ECS cluster can reduce setup time. In this example, we will show how to use Fargate.

    Tip

    If you prefer infrastructure as code, check out this Terraform module to provision an ECS cluster with a worker.

    "},{"location":"integrations/prefect-aws/ecs_guide/#prerequisites","title":"Prerequisites","text":"
    • An AWS account with permissions to create ECS services and IAM roles.
    • The AWS CLI installed on your local machine. You can download it from the AWS website.
    • An ECS Cluster to host both the worker and the flow runs it submits. This guide uses the default cluster. To create your own follow this guide.
    • A VPC configured for your ECS tasks. This guide uses the default VPC.
    • Prefect Cloud account or Prefect self-managed instance.
    "},{"location":"integrations/prefect-aws/ecs_guide/#step-1-set-up-an-ecs-work-pool","title":"Step 1: Set up an ECS work pool","text":"

    Before setting up the worker, create a work pool of type ECS for the worker to pull work from. If doing so from the CLI, be sure to authenticate with Prefect Cloud.

    Create a work pool from the CLI:

    prefect work-pool create --type ecs my-ecs-pool\n

    Or from the Prefect UI:

    Because this guide uses Fargate as the capacity provider and the default VPC and ECS cluster, no further configuration is needed.

    Next, set up a Prefect ECS worker that will discover and pull work from this work pool.

    "},{"location":"integrations/prefect-aws/ecs_guide/#step-2-start-a-prefect-worker-in-your-ecs-cluster","title":"Step 2: Start a Prefect worker in your ECS cluster","text":"

    Start by creating the IAM role required for your worker and flows to run. The sample flow in this guide doesn't interact with many other AWS services, so you will only create one role, ecsTaskExecutionRole. To create an IAM role for the ECS task using the AWS CLI, follow these steps:

    "},{"location":"integrations/prefect-aws/ecs_guide/#1-create-a-trust-policy","title":"1. Create a trust policy","text":"

    The trust policy will specify that the ECS service containing the Prefect worker will be able to assume the role required for calling other AWS services.

    Save this policy to a file, such as ecs-trust-policy.json:

    {\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n        {\n            \"Effect\": \"Allow\",\n            \"Principal\": {\n                \"Service\": \"ecs-tasks.amazonaws.com\"\n            },\n            \"Action\": \"sts:AssumeRole\"\n        }\n    ]\n}\n
    "},{"location":"integrations/prefect-aws/ecs_guide/#2-create-the-iam-roles","title":"2. Create the IAM roles","text":"

    Use the aws iam create-role command to create the role that you will use. For this guide, ecsTaskExecutionRole will be used by the worker to start ECS tasks, and it will also be the role assigned to the ECS tasks running your Prefect flows.

        aws iam create-role \\\n    --role-name ecsTaskExecutionRole \\\n    --assume-role-policy-document file://ecs-trust-policy.json\n

    Tip

    Depending on the requirements of your flows, consider creating a second role for your ECS tasks. This role should contain the permissions required by the ECS tasks in which your flows will run. For example, if your workflow loads data into an S3 bucket, the role needs additional permissions to access S3.
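
    If you take that route, the snippet below is a minimal boto3 sketch of creating and granting permissions to such a role; the prefectFlowTaskRole name and the AmazonS3ReadOnlyAccess policy are hypothetical choices for illustration only.

    import json\nimport boto3\n\n# Hypothetical example: a second role assumed by the ECS tasks that run your flows.\n# Adjust the role name and attached policies to match what your flows actually need.\niam = boto3.client(\"iam\")\n\ntrust_policy = {\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n        {\n            \"Effect\": \"Allow\",\n            \"Principal\": {\"Service\": \"ecs-tasks.amazonaws.com\"},\n            \"Action\": \"sts:AssumeRole\",\n        }\n    ],\n}\n\niam.create_role(\n    RoleName=\"prefectFlowTaskRole\",\n    AssumeRolePolicyDocument=json.dumps(trust_policy),\n)\n\n# Grant only what your flows need; read-only S3 access is shown purely as an example.\niam.attach_role_policy(\n    RoleName=\"prefectFlowTaskRole\",\n    PolicyArn=\"arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess\",\n)\n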

    "},{"location":"integrations/prefect-aws/ecs_guide/#3-attach-the-policy-to-the-role","title":"3. Attach the policy to the role","text":"

    For this guide, the ECS worker requires permissions to pull images from ECR and publish logs to CloudWatch. Amazon has a managed policy named AmazonECSTaskExecutionRolePolicy that grants the permissions necessary for starting ECS tasks. See here for other common execution role permissions. Attach this policy to your task execution role:

        aws iam attach-role-policy \\\n    --role-name ecsTaskExecutionRole \\\n    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy\n

    Remember to replace the --role-name and --policy-arn with the actual role name and policy Amazon Resource Name (ARN) you want to use.

    "},{"location":"integrations/prefect-aws/ecs_guide/#step-3-creating-an-ecs-worker-service","title":"Step 3: Creating an ECS worker service","text":""},{"location":"integrations/prefect-aws/ecs_guide/#1-launch-an-ecs-service-to-host-the-worker","title":"1. Launch an ECS Service to host the worker","text":"

    Next, create an ECS task definition that specifies the Docker image for the Prefect worker, the resources it requires, and the command it should run. In this example, the command to start the worker is prefect worker start --pool my-ecs-pool.

    Create a JSON file with the following contents:

    {\n    \"family\": \"prefect-worker-task\",\n    \"networkMode\": \"awsvpc\",\n    \"requiresCompatibilities\": [\n        \"FARGATE\"\n    ],\n    \"cpu\": \"512\",\n    \"memory\": \"1024\",\n    \"executionRoleArn\": \"<ecs-task-role-arn>\",\n    \"taskRoleArn\": \"<ecs-task-role-arn>\",\n    \"containerDefinitions\": [\n        {\n            \"name\": \"prefect-worker\",\n            \"image\": \"prefecthq/prefect:2-latest\",\n            \"cpu\": 512,\n            \"memory\": 1024,\n            \"essential\": true,\n            \"command\": [\n                \"/bin/sh\",\n                \"-c\",\n                \"pip install prefect-aws && prefect worker start --pool my-ecs-pool --type ecs\"\n            ],\n            \"environment\": [\n                {\n                    \"name\": \"PREFECT_API_URL\",\n                    \"value\": \"prefect-api-url>\"\n                },\n                {\n                    \"name\": \"PREFECT_API_KEY\",\n                    \"value\": \"<prefect-api-key>\"\n                }\n            ]\n        }\n    ]\n}\n
    • Use prefect config view to view the PREFECT_API_URL for your current Prefect profile. Use this to replace <prefect-api-url>.

    • For the PREFECT_API_KEY, if you are on a paid plan, you can create a service account for the worker. If you are on a free plan, you can pass a user\u2019s API key.

    • Replace both instances of <ecs-task-role-arn> with the ARN of the IAM role you created in Step 2. You can grab this by running:

    aws iam get-role --role-name ecsTaskExecutionRole --query 'Role.[RoleName, Arn]' --output text\n

    • Notice that the CPU and Memory allocations are relatively small. The worker's main responsibility is to submit work through API calls to AWS, not to execute your Prefect flow code.
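
    If you prefer to script these replacements instead of editing the file by hand, the following sketch substitutes the placeholders in task-definition.json and checks that the result is still valid JSON; the example values are placeholders, not real credentials.

    import json\n\n# Hypothetical values; substitute your own API URL, API key, and role ARN.\nreplacements = {\n    \"<prefect-api-url>\": \"https://api.prefect.cloud/api/accounts/abc/workspaces/def\",\n    \"<prefect-api-key>\": \"pnu_example_key\",\n    \"<ecs-task-role-arn>\": \"arn:aws:iam::123456789012:role/ecsTaskExecutionRole\",\n}\n\nwith open(\"task-definition.json\") as f:\n    raw = f.read()\n\nfor placeholder, value in replacements.items():\n    raw = raw.replace(placeholder, value)\n\n# json.loads raises if the edited document is no longer valid JSON.\njson.loads(raw)\n\nwith open(\"task-definition.json\", \"w\") as f:\n    f.write(raw)\n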

    Tip

    To avoid hardcoding your API key into the task definition JSON, see how to add sensitive data to the container definition using AWS Secrets Manager.
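
    As a rough sketch of what that looks like, the container definition gains a secrets entry referencing a Secrets Manager secret instead of a plain environment variable; the ARN below is a placeholder, and the execution role must be allowed to read that secret.

    # Sketch of a container definition fragment that pulls PREFECT_API_KEY from\n# AWS Secrets Manager instead of hardcoding it. The secret ARN is a placeholder,\n# and the execution role needs secretsmanager:GetSecretValue on that secret.\ncontainer_definition_fragment = {\n    \"name\": \"prefect-worker\",\n    \"secrets\": [\n        {\n            \"name\": \"PREFECT_API_KEY\",\n            \"valueFrom\": \"arn:aws:secretsmanager:us-east-1:123456789012:secret:prefect-api-key\",\n        }\n    ],\n}\n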

    "},{"location":"integrations/prefect-aws/ecs_guide/#2-register-the-task-definition","title":"2. Register the task definition","text":"

    Before creating a service, you first need to register a task definition. You can do that using the register-task-definition command in the AWS CLI. Here is an example:

    aws ecs register-task-definition --cli-input-json file://task-definition.json\n

    Replace task-definition.json with the name of your JSON file.

    "},{"location":"integrations/prefect-aws/ecs_guide/#3-create-an-ecs-service-to-host-your-worker","title":"3. Create an ECS service to host your worker","text":"

    Finally, create a service that will manage your Prefect worker:

    Open a terminal window and run the following command to create an ECS Fargate service:

    aws ecs create-service \\\n    --service-name prefect-worker-service \\\n    --cluster <ecs-cluster> \\\n    --task-definition <task-definition-arn> \\\n    --launch-type FARGATE \\\n    --desired-count 1 \\\n    --network-configuration \"awsvpcConfiguration={subnets=[<subnet-ids>],securityGroups=[<security-group-ids>],assignPublicIp='ENABLED'}\"\n
    • Replace <ecs-cluster> with the name of your ECS cluster.
    • Replace <task-definition-arn> with the ARN of the task definition you just registered.
    • Replace <subnet-ids> with a comma-separated list of your VPC subnet IDs. Ensure that these subnets belong to the VPC used by the work pool in step 1. You can view subnet IDs with the following command: aws ec2 describe-subnets --filters Name=vpc-id,Values=<vpc-id>
    • Replace <security-group-ids> with a comma-separated list of your VPC security group IDs.

    Sanity check

    The work pool page in the Prefect UI allows you to check the health of your workers - make sure your new worker is live! Note that it can take a few minutes for an ECS service to come online. If your worker does not come online and you are using the command from this guide, you may not be using the default VPC. For connectivity issues, check your VPC's configuration and refer to the ECS outbound networking guide.
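
    If you would rather verify from code than from the UI, a rough boto3 sketch like the following confirms that the service has a running task; the default cluster and the prefect-worker-service name match the values used earlier in this guide.

    import boto3\n\n# Check that the worker service has reached its desired count of running tasks.\n# \"default\" and \"prefect-worker-service\" match the values used earlier in this guide.\necs = boto3.client(\"ecs\")\n\nresponse = ecs.describe_services(\n    cluster=\"default\",\n    services=[\"prefect-worker-service\"],\n)\n\nservice = response[\"services\"][0]\nprint(service[\"status\"], service[\"runningCount\"], \"/\", service[\"desiredCount\"])\n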

    "},{"location":"integrations/prefect-aws/ecs_guide/#step-4-pick-up-a-flow-run-with-your-new-worker","title":"Step 4: Pick up a flow run with your new worker","text":"

    This guide uses ECR to store a Docker image containing your flow code. To do this, we will write a flow, then deploy it using build and push steps that copy flow code into a Docker image and push that image to an ECR repository.

    "},{"location":"integrations/prefect-aws/ecs_guide/#1-write-a-simple-test-flow","title":"1. Write a simple test flow","text":"

    my_flow.py

    from prefect import flow, get_run_logger\n\n@flow\ndef my_flow():\n    logger = get_run_logger()\n    logger.info(\"Hello from ECS!!\")\n\nif __name__ == \"__main__\":\n    my_flow()\n
    "},{"location":"integrations/prefect-aws/ecs_guide/#2-create-an-ecr-repository","title":"2. Create an ECR repository","text":"

    Use the following AWS CLI command to create an ECR repository. The name you choose for your repository will be reused in the next step when defining your Prefect deployment.

    aws ecr create-repository \\\n--repository-name <my-ecr-repo> \\\n--region <region>\n
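
    When you reference the repository in the next step, you will typically want its full URI (registry included) rather than just the name. Here is a small boto3 sketch for looking it up, assuming the repository name chosen above:

    import boto3\n\n# Look up the full URI of the repository created above; replace \"my-ecr-repo\"\n# and the region with the values you used.\necr = boto3.client(\"ecr\", region_name=\"us-east-1\")\n\nrepo = ecr.describe_repositories(repositoryNames=[\"my-ecr-repo\"])[\"repositories\"][0]\nprint(repo[\"repositoryUri\"])  # e.g. 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-ecr-repo\n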
    "},{"location":"integrations/prefect-aws/ecs_guide/#3-create-a-prefectyaml-file","title":"3. Create a prefect.yaml file","text":"

    To have Prefect build your image when deploying your flow, create a prefect.yaml file with the following specification:

    name: ecs-worker-guide\n# this is pre-populated by running prefect init\nprefect-version: 2.14.20\n\n# build section allows you to manage and build docker images\nbuild:\n- prefect_docker.deployments.steps.build_docker_image:\n    id: build_image\n    requires: prefect-docker>=0.3.1\n    image_name: <my-ecr-repo>\n    tag: latest\n    dockerfile: auto\n\n# push section allows you to manage if and how this project is uploaded to remote locations\npush:\n- prefect_docker.deployments.steps.push_docker_image:\n    requires: prefect-docker>=0.3.1\n    image_name: '{{ build_image.image_name }}'\n    tag: '{{ build_image.tag }}'\n\n# the deployments section allows you to provide configuration for deploying flows\ndeployments:\n- name: my_ecs_deployment\n  version:\n  tags: []\n  description:\n  entrypoint: my_flow.py:my_flow\n  parameters: {}\n  work_pool:\n    name: my-ecs-pool\n    work_queue_name:\n    job_variables:\n      image: '{{ build_image.image }}'\n  schedules: []\npull:\n    - prefect.deployments.steps.set_working_directory:\n        directory: /opt/prefect/ecs-worker-guide\n
    "},{"location":"integrations/prefect-aws/ecs_guide/#4-deploy-the-flow-to-the-prefect-cloud-or-your-self-managed-server-instance-specifying-the-ecs-work-pool-when-prompted","title":"4. Deploy the flow to the Prefect Cloud or your self-managed server instance, specifying the ECS work pool when prompted","text":"
    prefect deploy my_flow.py:my_flow\n
    "},{"location":"integrations/prefect-aws/ecs_guide/#5-find-the-deployment-in-the-ui-and-click-the-quick-run-button","title":"5. Find the deployment in the UI and click the Quick Run button!","text":""},{"location":"integrations/prefect-aws/ecs_guide/#optional-next-steps","title":"Optional next steps","text":"
    1. Now that you are confident your ECS worker is healthy, you can experiment with different work pool configurations.

      • Do your flow runs require higher CPU?
      • Would an EC2 Launch Type speed up your flow run execution?

    These infrastructure configuration values can be set on your ECS work pool, or they can be overridden at the deployment level through job_variables if desired, as in the sketch below.
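
    As a rough sketch, and assuming Prefect 2.14+ where Flow.deploy accepts job_variables, a deployment-level override might look like this; the CPU, memory, and image values are illustrative only.

    from prefect import flow\n\n\n@flow(log_prints=True)\ndef my_flow():\n    print(\"Hello from ECS!!\")\n\n\nif __name__ == \"__main__\":\n    # Sketch only: override the ECS work pool defaults for this deployment.\n    # The cpu/memory values and the image URI are placeholders, not recommendations.\n    my_flow.deploy(\n        name=\"my_ecs_deployment\",\n        work_pool_name=\"my-ecs-pool\",\n        image=\"123456789012.dkr.ecr.us-east-1.amazonaws.com/my-ecr-repo:latest\",\n        build=False,  # assumes the image was already built and pushed\n        push=False,\n        job_variables={\"cpu\": 1024, \"memory\": 2048},\n    )\n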

    "},{"location":"integrations/prefect-aws/ecs_worker/","title":"ECS Worker","text":""},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker","title":"prefect_aws.workers.ecs_worker","text":"

    Prefect worker for executing flow runs as ECS tasks.

    Get started by creating a work pool:

    $ prefect work-pool create --type ecs my-ecs-pool\n

    Then, you can start a worker for the pool:

    $ prefect worker start --pool my-ecs-pool\n

    It's common to deploy the worker as an ECS task as well. However, you can run the worker locally to get started.

    The worker may work without any additional configuration, but this depends on your specific AWS setup. We recommend opening the work pool editor in the UI to see the available options.

    By default, the worker will register a task definition for each flow run and run a task in your default ECS cluster using AWS Fargate. Fargate requires tasks to configure subnets, which we will infer from your default VPC. If you do not have a default VPC, you must provide a VPC ID or manually set up the network configuration for your tasks.

    Note that the worker caches task definitions for each deployment to avoid excessive registration. The worker checks that the cached task definition is compatible with your configuration before using it.

    The launch type option can be used to run your tasks in different modes. For example, FARGATE_SPOT runs your Fargate tasks on spot instances, and EC2 runs your tasks on a cluster backed by EC2 instances.

    Generally, it is very useful to enable CloudWatch logging for your ECS tasks; this can help you debug task failures. To enable CloudWatch logging, you must provide an execution role ARN with permissions to create and write to log streams. See the configure_cloudwatch_logs field documentation for details.

    The worker can be configured to use an existing task definition by setting the task definition arn variable or by providing a \"taskDefinition\" in the task run request. When a task definition is provided, the worker will never create a new task definition, which may result in variables that are templated into the task definition payload being ignored.
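
    Putting a few of these options together, a hypothetical set of job variables for a deployment targeting an ECS work pool might look like the following; every ARN and ID is a placeholder, and FARGATE_SPOT and CloudWatch logging are only appropriate if your account and roles support them.

    # Hypothetical job variables for a deployment targeting an ECS work pool.\n# Every ARN and ID below is a placeholder; adjust them to your account.\njob_variables = {\n    \"launch_type\": \"FARGATE_SPOT\",  # translated into a capacity provider strategy by the worker\n    \"vpc_id\": \"vpc-0123456789abcdef0\",\n    \"configure_cloudwatch_logs\": True,  # requires an execution role with log permissions\n    \"execution_role_arn\": \"arn:aws:iam::123456789012:role/ecsTaskExecutionRole\",\n    \"cpu\": 1024,\n    \"memory\": 2048,\n}\n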

    "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.CapacityProvider","title":"CapacityProvider","text":"

    Bases: BaseModel

    The capacity provider strategy to use when running the task.

    Source code in prefect_aws/workers/ecs_worker.py
    class CapacityProvider(BaseModel):\n    \"\"\"\n    The capacity provider strategy to use when running the task.\n    \"\"\"\n\n    capacityProvider: str\n    weight: int\n    base: int\n
    "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSIdentifier","title":"ECSIdentifier","text":"

    Bases: NamedTuple

    The identifier for a running ECS task.

    Source code in prefect_aws/workers/ecs_worker.py
    class ECSIdentifier(NamedTuple):\n    \"\"\"\n    The identifier for a running ECS task.\n    \"\"\"\n\n    cluster: str\n    task_arn: str\n
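
    As a small sketch of how this tuple relates to the cluster::task identifier string used elsewhere in this module (the task ARN below is a placeholder):

    from prefect_aws.workers.ecs_worker import ECSIdentifier\n\n# The worker reports identifiers as \"cluster::task\"; split one and wrap it in the\n# NamedTuple shown above. The task ARN is a placeholder.\nidentifier = \"default::arn:aws:ecs:us-east-1:123456789012:task/default/abc123\"\n\ncluster, task_arn = identifier.split(\"::\", maxsplit=1)\necs_identifier = ECSIdentifier(cluster=cluster, task_arn=task_arn)\nprint(ecs_identifier.cluster, ecs_identifier.task_arn)\n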
    "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSJobConfiguration","title":"ECSJobConfiguration","text":"

    Bases: BaseJobConfiguration

    Job configuration for an ECS worker.

    Source code in prefect_aws/workers/ecs_worker.py
    class ECSJobConfiguration(BaseJobConfiguration):\n    \"\"\"\n    Job configuration for an ECS worker.\n    \"\"\"\n\n    aws_credentials: Optional[AwsCredentials] = Field(default_factory=AwsCredentials)\n    task_definition: Optional[Dict[str, Any]] = Field(\n        template=_default_task_definition_template()\n    )\n    task_run_request: Dict[str, Any] = Field(\n        template=_default_task_run_request_template()\n    )\n    configure_cloudwatch_logs: Optional[bool] = Field(default=None)\n    cloudwatch_logs_options: Dict[str, str] = Field(default_factory=dict)\n    cloudwatch_logs_prefix: Optional[str] = Field(default=None)\n    network_configuration: Dict[str, Any] = Field(default_factory=dict)\n    stream_output: Optional[bool] = Field(default=None)\n    task_start_timeout_seconds: int = Field(default=300)\n    task_watch_poll_interval: float = Field(default=5.0)\n    auto_deregister_task_definition: bool = Field(default=False)\n    vpc_id: Optional[str] = Field(default=None)\n    container_name: Optional[str] = Field(default=None)\n    cluster: Optional[str] = Field(default=None)\n    match_latest_revision_in_family: bool = Field(default=False)\n\n    @root_validator\n    def task_run_request_requires_arn_if_no_task_definition_given(cls, values) -> dict:\n        \"\"\"\n        If no task definition is provided, a task definition ARN must be present on the\n        task run request.\n        \"\"\"\n        if not values.get(\"task_run_request\", {}).get(\n            \"taskDefinition\"\n        ) and not values.get(\"task_definition\"):\n            raise ValueError(\n                \"A task definition must be provided if a task definition ARN is not \"\n                \"present on the task run request.\"\n            )\n        return values\n\n    @root_validator\n    def container_name_default_from_task_definition(cls, values) -> dict:\n        \"\"\"\n        Infers the container name from the task definition if not provided.\n        \"\"\"\n        if values.get(\"container_name\") is None:\n            values[\"container_name\"] = _container_name_from_task_definition(\n                values.get(\"task_definition\")\n            )\n\n            # We may not have a name here still; for example if someone is using a task\n            # definition arn. 
In that case, we'll perform similar logic later to find\n            # the name to treat as the \"orchestration\" container.\n\n        return values\n\n    @root_validator(pre=True)\n    def set_default_configure_cloudwatch_logs(cls, values: dict) -> dict:\n        \"\"\"\n        Streaming output generally requires CloudWatch logs to be configured.\n\n        To avoid entangled arguments in the simple case, `configure_cloudwatch_logs`\n        defaults to matching the value of `stream_output`.\n        \"\"\"\n        configure_cloudwatch_logs = values.get(\"configure_cloudwatch_logs\")\n        if configure_cloudwatch_logs is None:\n            values[\"configure_cloudwatch_logs\"] = values.get(\"stream_output\")\n        return values\n\n    @root_validator\n    def configure_cloudwatch_logs_requires_execution_role_arn(\n        cls, values: dict\n    ) -> dict:\n        \"\"\"\n        Enforces that an execution role arn is provided (or could be provided by a\n        runtime task definition) when configuring logging.\n        \"\"\"\n        if (\n            values.get(\"configure_cloudwatch_logs\")\n            and not values.get(\"execution_role_arn\")\n            # TODO: Does not match\n            # Do not raise if they've linked to another task definition or provided\n            # it without using our shortcuts\n            and not values.get(\"task_run_request\", {}).get(\"taskDefinition\")\n            and not (values.get(\"task_definition\") or {}).get(\"executionRoleArn\")\n        ):\n            raise ValueError(\n                \"An `execution_role_arn` must be provided to use \"\n                \"`configure_cloudwatch_logs` or `stream_logs`.\"\n            )\n        return values\n\n    @root_validator\n    def cloudwatch_logs_options_requires_configure_cloudwatch_logs(\n        cls, values: dict\n    ) -> dict:\n        \"\"\"\n        Enforces that an execution role arn is provided (or could be provided by a\n        runtime task definition) when configuring logging.\n        \"\"\"\n        if values.get(\"cloudwatch_logs_options\") and not values.get(\n            \"configure_cloudwatch_logs\"\n        ):\n            raise ValueError(\n                \"`configure_cloudwatch_log` must be enabled to use \"\n                \"`cloudwatch_logs_options`.\"\n            )\n        return values\n\n    @root_validator\n    def network_configuration_requires_vpc_id(cls, values: dict) -> dict:\n        \"\"\"\n        Enforces a `vpc_id` is provided when custom network configuration mode is\n        enabled for network settings.\n        \"\"\"\n        if values.get(\"network_configuration\") and not values.get(\"vpc_id\"):\n            raise ValueError(\n                \"You must provide a `vpc_id` to enable custom `network_configuration`.\"\n            )\n        return values\n
    "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSJobConfiguration.cloudwatch_logs_options_requires_configure_cloudwatch_logs","title":"cloudwatch_logs_options_requires_configure_cloudwatch_logs","text":"

    Enforces that an execution role arn is provided (or could be provided by a runtime task definition) when configuring logging.

    Source code in prefect_aws/workers/ecs_worker.py
    @root_validator\ndef cloudwatch_logs_options_requires_configure_cloudwatch_logs(\n    cls, values: dict\n) -> dict:\n    \"\"\"\n    Enforces that an execution role arn is provided (or could be provided by a\n    runtime task definition) when configuring logging.\n    \"\"\"\n    if values.get(\"cloudwatch_logs_options\") and not values.get(\n        \"configure_cloudwatch_logs\"\n    ):\n        raise ValueError(\n            \"`configure_cloudwatch_log` must be enabled to use \"\n            \"`cloudwatch_logs_options`.\"\n        )\n    return values\n
    "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSJobConfiguration.configure_cloudwatch_logs_requires_execution_role_arn","title":"configure_cloudwatch_logs_requires_execution_role_arn","text":"

    Enforces that an execution role arn is provided (or could be provided by a runtime task definition) when configuring logging.

    Source code in prefect_aws/workers/ecs_worker.py
    @root_validator\ndef configure_cloudwatch_logs_requires_execution_role_arn(\n    cls, values: dict\n) -> dict:\n    \"\"\"\n    Enforces that an execution role arn is provided (or could be provided by a\n    runtime task definition) when configuring logging.\n    \"\"\"\n    if (\n        values.get(\"configure_cloudwatch_logs\")\n        and not values.get(\"execution_role_arn\")\n        # TODO: Does not match\n        # Do not raise if they've linked to another task definition or provided\n        # it without using our shortcuts\n        and not values.get(\"task_run_request\", {}).get(\"taskDefinition\")\n        and not (values.get(\"task_definition\") or {}).get(\"executionRoleArn\")\n    ):\n        raise ValueError(\n            \"An `execution_role_arn` must be provided to use \"\n            \"`configure_cloudwatch_logs` or `stream_logs`.\"\n        )\n    return values\n
    "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSJobConfiguration.container_name_default_from_task_definition","title":"container_name_default_from_task_definition","text":"

    Infers the container name from the task definition if not provided.

    Source code in prefect_aws/workers/ecs_worker.py
    @root_validator\ndef container_name_default_from_task_definition(cls, values) -> dict:\n    \"\"\"\n    Infers the container name from the task definition if not provided.\n    \"\"\"\n    if values.get(\"container_name\") is None:\n        values[\"container_name\"] = _container_name_from_task_definition(\n            values.get(\"task_definition\")\n        )\n\n        # We may not have a name here still; for example if someone is using a task\n        # definition arn. In that case, we'll perform similar logic later to find\n        # the name to treat as the \"orchestration\" container.\n\n    return values\n
    "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSJobConfiguration.network_configuration_requires_vpc_id","title":"network_configuration_requires_vpc_id","text":"

    Enforces a vpc_id is provided when custom network configuration mode is enabled for network settings.

    Source code in prefect_aws/workers/ecs_worker.py
    @root_validator\ndef network_configuration_requires_vpc_id(cls, values: dict) -> dict:\n    \"\"\"\n    Enforces a `vpc_id` is provided when custom network configuration mode is\n    enabled for network settings.\n    \"\"\"\n    if values.get(\"network_configuration\") and not values.get(\"vpc_id\"):\n        raise ValueError(\n            \"You must provide a `vpc_id` to enable custom `network_configuration`.\"\n        )\n    return values\n
    "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSJobConfiguration.set_default_configure_cloudwatch_logs","title":"set_default_configure_cloudwatch_logs","text":"

    Streaming output generally requires CloudWatch logs to be configured.

    To avoid entangled arguments in the simple case, configure_cloudwatch_logs defaults to matching the value of stream_output.

    Source code in prefect_aws/workers/ecs_worker.py
    @root_validator(pre=True)\ndef set_default_configure_cloudwatch_logs(cls, values: dict) -> dict:\n    \"\"\"\n    Streaming output generally requires CloudWatch logs to be configured.\n\n    To avoid entangled arguments in the simple case, `configure_cloudwatch_logs`\n    defaults to matching the value of `stream_output`.\n    \"\"\"\n    configure_cloudwatch_logs = values.get(\"configure_cloudwatch_logs\")\n    if configure_cloudwatch_logs is None:\n        values[\"configure_cloudwatch_logs\"] = values.get(\"stream_output\")\n    return values\n
    "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSJobConfiguration.task_run_request_requires_arn_if_no_task_definition_given","title":"task_run_request_requires_arn_if_no_task_definition_given","text":"

    If no task definition is provided, a task definition ARN must be present on the task run request.

    Source code in prefect_aws/workers/ecs_worker.py
    @root_validator\ndef task_run_request_requires_arn_if_no_task_definition_given(cls, values) -> dict:\n    \"\"\"\n    If no task definition is provided, a task definition ARN must be present on the\n    task run request.\n    \"\"\"\n    if not values.get(\"task_run_request\", {}).get(\n        \"taskDefinition\"\n    ) and not values.get(\"task_definition\"):\n        raise ValueError(\n            \"A task definition must be provided if a task definition ARN is not \"\n            \"present on the task run request.\"\n        )\n    return values\n
    "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSVariables","title":"ECSVariables","text":"

    Bases: BaseVariables

    Variables for templating an ECS job.

    Source code in prefect_aws/workers/ecs_worker.py
    class ECSVariables(BaseVariables):\n    \"\"\"\n    Variables for templating an ECS job.\n    \"\"\"\n\n    task_definition_arn: Optional[str] = Field(\n        default=None,\n        description=(\n            \"An identifier for an existing task definition to use. If set, options that\"\n            \" require changes to the task definition will be ignored. All contents of \"\n            \"the task definition in the job configuration will be ignored.\"\n        ),\n    )\n    env: Dict[str, Optional[str]] = Field(\n        title=\"Environment Variables\",\n        default_factory=dict,\n        description=(\n            \"Environment variables to provide to the task run. These variables are set \"\n            \"on the Prefect container at task runtime. These will not be set on the \"\n            \"task definition.\"\n        ),\n    )\n    aws_credentials: AwsCredentials = Field(\n        title=\"AWS Credentials\",\n        default_factory=AwsCredentials,\n        description=(\n            \"The AWS credentials to use to connect to ECS. If not provided, credentials\"\n            \" will be inferred from the local environment following AWS's boto client's\"\n            \" rules.\"\n        ),\n    )\n    cluster: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The ECS cluster to run the task in. An ARN or name may be provided. If \"\n            \"not provided, the default cluster will be used.\"\n        ),\n    )\n    family: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A family for the task definition. If not provided, it will be inferred \"\n            \"from the task definition. If the task definition does not have a family, \"\n            \"the name will be generated. When flow and deployment metadata is \"\n            \"available, the generated name will include their names. Values for this \"\n            \"field will be slugified to match AWS character requirements.\"\n        ),\n    )\n    launch_type: Optional[\n        Literal[\"FARGATE\", \"EC2\", \"EXTERNAL\", \"FARGATE_SPOT\"]\n    ] = Field(\n        default=ECS_DEFAULT_LAUNCH_TYPE,\n        description=(\n            \"The type of ECS task run infrastructure that should be used. Note that\"\n            \" 'FARGATE_SPOT' is not a formal ECS launch type, but we will configure\"\n            \" the proper capacity provider strategy if set here.\"\n        ),\n    )\n    capacity_provider_strategy: Optional[List[CapacityProvider]] = Field(\n        default_factory=list,\n        description=(\n            \"The capacity provider strategy to use when running the task. \"\n            \"If a capacity provider strategy is specified, the selected launch\"\n            \" type will be ignored.\"\n        ),\n    )\n    image: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The image to use for the Prefect container in the task. If this value is \"\n            \"not null, it will override the value in the task definition. This value \"\n            \"defaults to a Prefect base image matching your local versions.\"\n        ),\n    )\n    cpu: int = Field(\n        title=\"CPU\",\n        default=None,\n        description=(\n            \"The amount of CPU to provide to the ECS task. Valid amounts are \"\n            \"specified in the AWS documentation. 
If not provided, a default value of \"\n            f\"{ECS_DEFAULT_CPU} will be used unless present on the task definition.\"\n        ),\n    )\n    memory: int = Field(\n        default=None,\n        description=(\n            \"The amount of memory to provide to the ECS task. Valid amounts are \"\n            \"specified in the AWS documentation. If not provided, a default value of \"\n            f\"{ECS_DEFAULT_MEMORY} will be used unless present on the task definition.\"\n        ),\n    )\n    container_name: str = Field(\n        default=None,\n        description=(\n            \"The name of the container flow run orchestration will occur in. If not \"\n            f\"specified, a default value of {ECS_DEFAULT_CONTAINER_NAME} will be used \"\n            \"and if that is not found in the task definition the first container will \"\n            \"be used.\"\n        ),\n    )\n    task_role_arn: str = Field(\n        title=\"Task Role ARN\",\n        default=None,\n        description=(\n            \"A role to attach to the task run. This controls the permissions of the \"\n            \"task while it is running.\"\n        ),\n    )\n    execution_role_arn: str = Field(\n        title=\"Execution Role ARN\",\n        default=None,\n        description=(\n            \"An execution role to use for the task. This controls the permissions of \"\n            \"the task when it is launching. If this value is not null, it will \"\n            \"override the value in the task definition. An execution role must be \"\n            \"provided to capture logs from the container.\"\n        ),\n    )\n    vpc_id: Optional[str] = Field(\n        title=\"VPC ID\",\n        default=None,\n        description=(\n            \"The AWS VPC to link the task run to. This is only applicable when using \"\n            \"the 'awsvpc' network mode for your task. FARGATE tasks require this \"\n            \"network  mode, but for EC2 tasks the default network mode is 'bridge'. \"\n            \"If using the 'awsvpc' network mode and this field is null, your default \"\n            \"VPC will be used. If no default VPC can be found, the task run will fail.\"\n        ),\n    )\n    configure_cloudwatch_logs: bool = Field(\n        default=None,\n        description=(\n            \"If enabled, the Prefect container will be configured to send its output \"\n            \"to the AWS CloudWatch logs service. This functionality requires an \"\n            \"execution role with logs:CreateLogStream, logs:CreateLogGroup, and \"\n            \"logs:PutLogEvents permissions. The default for this field is `False` \"\n            \"unless `stream_output` is set.\"\n        ),\n    )\n    cloudwatch_logs_options: Dict[str, str] = Field(\n        default_factory=dict,\n        description=(\n            \"When `configure_cloudwatch_logs` is enabled, this setting may be used to\"\n            \" pass additional options to the CloudWatch logs configuration or override\"\n            \" the default options. See the [AWS\"\n            \" documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html#create_awslogs_logdriver_options)\"  # noqa\n            \" for available options. \"\n        ),\n    )\n    cloudwatch_logs_prefix: Optional[str] = Field(\n        default=None,\n        description=(\n            \"When `configure_cloudwatch_logs` is enabled, this setting may be used to\"\n            \" set a prefix for the log group. 
If not provided, the default prefix will\"\n            \" be `prefect-logs_<work_pool_name>_<deployment_id>`. If\"\n            \" `awslogs-stream-prefix` is present in `Cloudwatch logs options` this\"\n            \" setting will be ignored.\"\n        ),\n    )\n\n    network_configuration: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=(\n            \"When `network_configuration` is supplied it will override ECS Worker's\"\n            \"awsvpcConfiguration that defined in the ECS task executing your workload. \"\n            \"See the [AWS documentation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecs-service-awsvpcconfiguration.html)\"  # noqa\n            \" for available options.\"\n        ),\n    )\n\n    stream_output: bool = Field(\n        default=None,\n        description=(\n            \"If enabled, logs will be streamed from the Prefect container to the local \"\n            \"console. Unless you have configured AWS CloudWatch logs manually on your \"\n            \"task definition, this requires the same prerequisites outlined in \"\n            \"`configure_cloudwatch_logs`.\"\n        ),\n    )\n    task_start_timeout_seconds: int = Field(\n        default=300,\n        description=(\n            \"The amount of time to watch for the start of the ECS task \"\n            \"before marking it as failed. The task must enter a RUNNING state to be \"\n            \"considered started.\"\n        ),\n    )\n    task_watch_poll_interval: float = Field(\n        default=5.0,\n        description=(\n            \"The amount of time to wait between AWS API calls while monitoring the \"\n            \"state of an ECS task.\"\n        ),\n    )\n    auto_deregister_task_definition: bool = Field(\n        default=False,\n        description=(\n            \"If enabled, any task definitions that are created by this block will be \"\n            \"deregistered. Existing task definitions linked by ARN will never be \"\n            \"deregistered. Deregistering a task definition does not remove it from \"\n            \"your AWS account, instead it will be marked as INACTIVE.\"\n        ),\n    )\n    match_latest_revision_in_family: bool = Field(\n        default=False,\n        description=(\n            \"If enabled, the most recent active revision in the task definition \"\n            \"family will be compared against the desired ECS task configuration. \"\n            \"If they are equal, the existing task definition will be used instead \"\n            \"of registering a new one. If no family is specified the default family \"\n            f'\"{ECS_DEFAULT_FAMILY}\" will be used.'\n        ),\n    )\n
    "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSWorker","title":"ECSWorker","text":"

    Bases: BaseWorker

    A Prefect worker to run flow runs as ECS tasks.

    Source code in prefect_aws/workers/ecs_worker.py
    class ECSWorker(BaseWorker):\n    \"\"\"\n    A Prefect worker to run flow runs as ECS tasks.\n    \"\"\"\n\n    type = \"ecs\"\n    job_configuration = ECSJobConfiguration\n    job_configuration_variables = ECSVariables\n    _description = (\n        \"Execute flow runs within containers on AWS ECS. Works with EC2 \"\n        \"and Fargate clusters. Requires an AWS account.\"\n    )\n    _display_name = \"AWS Elastic Container Service\"\n    _documentation_url = \"https://prefecthq.github.io/prefect-aws/ecs_worker/\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/d74b16fe84ce626345adf235a47008fea2869a60-225x225.png\"  # noqa\n\n    async def run(\n        self,\n        flow_run: \"FlowRun\",\n        configuration: ECSJobConfiguration,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> ECSWorkerResult:\n        \"\"\"\n        Runs a given flow run on the current worker.\n        \"\"\"\n        ecs_client = await run_sync_in_worker_thread(\n            self._get_client, configuration, \"ecs\"\n        )\n\n        logger = self.get_flow_run_logger(flow_run)\n\n        (\n            task_arn,\n            cluster_arn,\n            task_definition,\n            is_new_task_definition,\n        ) = await run_sync_in_worker_thread(\n            self._create_task_and_wait_for_start,\n            logger,\n            ecs_client,\n            configuration,\n            flow_run,\n        )\n\n        # The task identifier is \"{cluster}::{task}\" where we use the configured cluster\n        # if set to preserve matching by name rather than arn\n        # Note \"::\" is used despite the Prefect standard being \":\" because ARNs contain\n        # single colons.\n        identifier = (\n            (configuration.cluster if configuration.cluster else cluster_arn)\n            + \"::\"\n            + task_arn\n        )\n\n        if task_status:\n            task_status.started(identifier)\n\n        status_code = await run_sync_in_worker_thread(\n            self._watch_task_and_get_exit_code,\n            logger,\n            configuration,\n            task_arn,\n            cluster_arn,\n            task_definition,\n            is_new_task_definition and configuration.auto_deregister_task_definition,\n            ecs_client,\n        )\n\n        return ECSWorkerResult(\n            identifier=identifier,\n            # If the container does not start the exit code can be null but we must\n            # still report a status code. We use a -1 to indicate a special code.\n            status_code=status_code if status_code is not None else -1,\n        )\n\n    def _get_client(\n        self, configuration: ECSJobConfiguration, client_type: Union[str, ClientType]\n    ) -> _ECSClient:\n        \"\"\"\n        Get a boto3 client of client_type. 
Will use a cached client if one exists.\n        \"\"\"\n        return configuration.aws_credentials.get_client(client_type)\n\n    def _create_task_and_wait_for_start(\n        self,\n        logger: logging.Logger,\n        ecs_client: _ECSClient,\n        configuration: ECSJobConfiguration,\n        flow_run: FlowRun,\n    ) -> Tuple[str, str, dict, bool]:\n        \"\"\"\n        Register the task definition, create the task run, and wait for it to start.\n\n        Returns a tuple of\n        - The task ARN\n        - The task's cluster ARN\n        - The task definition\n        - A bool indicating if the task definition is newly registered\n        \"\"\"\n        task_definition_arn = configuration.task_run_request.get(\"taskDefinition\")\n        new_task_definition_registered = False\n\n        if not task_definition_arn:\n            task_definition = self._prepare_task_definition(\n                configuration, region=ecs_client.meta.region_name, flow_run=flow_run\n            )\n            (\n                task_definition_arn,\n                new_task_definition_registered,\n            ) = self._get_or_register_task_definition(\n                logger, ecs_client, configuration, flow_run, task_definition\n            )\n        else:\n            task_definition = self._retrieve_task_definition(\n                logger, ecs_client, task_definition_arn\n            )\n            if configuration.task_definition:\n                logger.warning(\n                    \"Ignoring task definition in configuration since task definition\"\n                    \" ARN is provided on the task run request.\"\n                )\n\n        self._validate_task_definition(task_definition, configuration)\n\n        _TASK_DEFINITION_CACHE[flow_run.deployment_id] = task_definition_arn\n\n        logger.info(f\"Using ECS task definition {task_definition_arn!r}...\")\n        logger.debug(\n            f\"Task definition {json.dumps(task_definition, indent=2, default=str)}\"\n        )\n\n        task_run_request = self._prepare_task_run_request(\n            configuration,\n            task_definition,\n            task_definition_arn,\n        )\n\n        logger.info(\"Creating ECS task run...\")\n        logger.debug(\n            \"Task run request\"\n            f\"{json.dumps(mask_api_key(task_run_request), indent=2, default=str)}\"\n        )\n\n        try:\n            task = self._create_task_run(ecs_client, task_run_request)\n            task_arn = task[\"taskArn\"]\n            cluster_arn = task[\"clusterArn\"]\n        except Exception as exc:\n            self._report_task_run_creation_failure(configuration, task_run_request, exc)\n            raise\n\n        logger.info(\"Waiting for ECS task run to start...\")\n        self._wait_for_task_start(\n            logger,\n            configuration,\n            task_arn,\n            cluster_arn,\n            ecs_client,\n            timeout=configuration.task_start_timeout_seconds,\n        )\n\n        return task_arn, cluster_arn, task_definition, new_task_definition_registered\n\n    def _get_or_register_task_definition(\n        self,\n        logger: logging.Logger,\n        ecs_client: _ECSClient,\n        configuration: ECSJobConfiguration,\n        flow_run: FlowRun,\n        task_definition: dict,\n    ) -> Tuple[str, bool]:\n        \"\"\"Get or register a task definition for the given flow run.\n\n        Returns a tuple of the task definition ARN and a bool indicating if the task\n        definition is newly 
registered.\n        \"\"\"\n\n        cached_task_definition_arn = _TASK_DEFINITION_CACHE.get(flow_run.deployment_id)\n        new_task_definition_registered = False\n\n        if cached_task_definition_arn:\n            try:\n                cached_task_definition = self._retrieve_task_definition(\n                    logger, ecs_client, cached_task_definition_arn\n                )\n                if not cached_task_definition[\n                    \"status\"\n                ] == \"ACTIVE\" or not self._task_definitions_equal(\n                    task_definition, cached_task_definition\n                ):\n                    cached_task_definition_arn = None\n            except Exception:\n                cached_task_definition_arn = None\n\n        if (\n            not cached_task_definition_arn\n            and configuration.match_latest_revision_in_family\n        ):\n            family_name = task_definition.get(\"family\", ECS_DEFAULT_FAMILY)\n            try:\n                task_definition_from_family = self._retrieve_task_definition(\n                    logger, ecs_client, family_name\n                )\n                if task_definition_from_family and self._task_definitions_equal(\n                    task_definition, task_definition_from_family\n                ):\n                    cached_task_definition_arn = task_definition_from_family[\n                        \"taskDefinitionArn\"\n                    ]\n            except Exception:\n                cached_task_definition_arn = None\n\n        if not cached_task_definition_arn:\n            task_definition_arn = self._register_task_definition(\n                logger, ecs_client, task_definition\n            )\n            new_task_definition_registered = True\n        else:\n            task_definition_arn = cached_task_definition_arn\n\n        return task_definition_arn, new_task_definition_registered\n\n    def _watch_task_and_get_exit_code(\n        self,\n        logger: logging.Logger,\n        configuration: ECSJobConfiguration,\n        task_arn: str,\n        cluster_arn: str,\n        task_definition: dict,\n        deregister_task_definition: bool,\n        ecs_client: _ECSClient,\n    ) -> Optional[int]:\n        \"\"\"\n        Wait for the task run to complete and retrieve the exit code of the Prefect\n        container.\n        \"\"\"\n\n        # Wait for completion and stream logs\n        task = self._wait_for_task_finish(\n            logger,\n            configuration,\n            task_arn,\n            cluster_arn,\n            task_definition,\n            ecs_client,\n        )\n\n        if deregister_task_definition:\n            ecs_client.deregister_task_definition(\n                taskDefinition=task[\"taskDefinitionArn\"]\n            )\n\n        container_name = (\n            configuration.container_name\n            or _container_name_from_task_definition(task_definition)\n            or ECS_DEFAULT_CONTAINER_NAME\n        )\n\n        # Check the status code of the Prefect container\n        container = _get_container(task[\"containers\"], container_name)\n        assert (\n            container is not None\n        ), f\"'{container_name}' container missing from task: {task}\"\n        status_code = container.get(\"exitCode\")\n        self._report_container_status_code(logger, container_name, status_code)\n\n        return status_code\n\n    def _report_container_status_code(\n        self, logger: logging.Logger, name: str, status_code: Optional[int]\n    ) -> None:\n   
     \"\"\"\n        Display a log for the given container status code.\n        \"\"\"\n        if status_code is None:\n            logger.error(\n                f\"Task exited without reporting an exit status for container {name!r}.\"\n            )\n        elif status_code == 0:\n            logger.info(f\"Container {name!r} exited successfully.\")\n        else:\n            logger.warning(\n                f\"Container {name!r} exited with non-zero exit code {status_code}.\"\n            )\n\n    def _report_task_run_creation_failure(\n        self, configuration: ECSJobConfiguration, task_run: dict, exc: Exception\n    ) -> None:\n        \"\"\"\n        Wrap common AWS task run creation failures with nicer user-facing messages.\n        \"\"\"\n        # AWS generates exception types at runtime so they must be captured a bit\n        # differently than normal.\n        if \"ClusterNotFoundException\" in str(exc):\n            cluster = task_run.get(\"cluster\", \"default\")\n            raise RuntimeError(\n                f\"Failed to run ECS task, cluster {cluster!r} not found. \"\n                \"Confirm that the cluster is configured in your region.\"\n            ) from exc\n        elif (\n            \"No Container Instances\" in str(exc) and task_run.get(\"launchType\") == \"EC2\"\n        ):\n            cluster = task_run.get(\"cluster\", \"default\")\n            raise RuntimeError(\n                f\"Failed to run ECS task, cluster {cluster!r} does not appear to \"\n                \"have any container instances associated with it. Confirm that you \"\n                \"have EC2 container instances available.\"\n            ) from exc\n        elif (\n            \"failed to validate logger args\" in str(exc)\n            and \"AccessDeniedException\" in str(exc)\n            and configuration.configure_cloudwatch_logs\n        ):\n            raise RuntimeError(\n                \"Failed to run ECS task, the attached execution role does not appear\"\n                \" to have sufficient permissions. Ensure that the execution role\"\n                f\" {configuration.execution_role!r} has permissions\"\n                \" logs:CreateLogStream, logs:CreateLogGroup, and logs:PutLogEvents.\"\n            )\n        else:\n            raise\n\n    def _validate_task_definition(\n        self, task_definition: dict, configuration: ECSJobConfiguration\n    ) -> None:\n        \"\"\"\n        Ensure that the task definition is compatible with the configuration.\n\n        Raises `ValueError` on incompatibility. Returns `None` on success.\n        \"\"\"\n        launch_type = configuration.task_run_request.get(\n            \"launchType\", ECS_DEFAULT_LAUNCH_TYPE\n        )\n        if (\n            launch_type != \"EC2\"\n            and \"FARGATE\" not in task_definition[\"requiresCompatibilities\"]\n        ):\n            raise ValueError(\n                \"Task definition does not have 'FARGATE' in 'requiresCompatibilities'\"\n                f\" and cannot be used with launch type {launch_type!r}\"\n            )\n\n        if launch_type == \"FARGATE\" or launch_type == \"FARGATE_SPOT\":\n            # Only the 'awsvpc' network mode is supported when using FARGATE\n            network_mode = task_definition.get(\"networkMode\")\n            if network_mode != \"awsvpc\":\n                raise ValueError(\n                    f\"Found network mode {network_mode!r} which is not compatible with \"\n                    f\"launch type {launch_type!r}. 
Use either the 'EC2' launch \"\n                    \"type or the 'awsvpc' network mode.\"\n                )\n\n        if configuration.configure_cloudwatch_logs and not task_definition.get(\n            \"executionRoleArn\"\n        ):\n            raise ValueError(\n                \"An execution role arn must be set on the task definition to use \"\n                \"`configure_cloudwatch_logs` or `stream_logs` but no execution role \"\n                \"was found on the task definition.\"\n            )\n\n    def _register_task_definition(\n        self,\n        logger: logging.Logger,\n        ecs_client: _ECSClient,\n        task_definition: dict,\n    ) -> str:\n        \"\"\"\n        Register a new task definition with AWS.\n\n        Returns the ARN.\n        \"\"\"\n        logger.info(\"Registering ECS task definition...\")\n        logger.debug(\n            \"Task definition request\"\n            f\"{json.dumps(task_definition, indent=2, default=str)}\"\n        )\n        response = ecs_client.register_task_definition(**task_definition)\n        return response[\"taskDefinition\"][\"taskDefinitionArn\"]\n\n    def _retrieve_task_definition(\n        self,\n        logger: logging.Logger,\n        ecs_client: _ECSClient,\n        task_definition: str,\n    ):\n        \"\"\"\n        Retrieve an existing task definition from AWS.\n        \"\"\"\n        if task_definition.startswith(\"arn:aws:ecs:\"):\n            logger.info(f\"Retrieving ECS task definition {task_definition!r}...\")\n        else:\n            logger.info(\n                \"Retrieving most recent active revision from \"\n                f\"ECS task family {task_definition!r}...\"\n            )\n        response = ecs_client.describe_task_definition(taskDefinition=task_definition)\n        return response[\"taskDefinition\"]\n\n    def _wait_for_task_start(\n        self,\n        logger: logging.Logger,\n        configuration: ECSJobConfiguration,\n        task_arn: str,\n        cluster_arn: str,\n        ecs_client: _ECSClient,\n        timeout: int,\n    ) -> dict:\n        \"\"\"\n        Waits for an ECS task run to reach a RUNNING status.\n\n        If a STOPPED status is reached instead, an exception is raised indicating the\n        reason that the task run did not start.\n        \"\"\"\n        for task in self._watch_task_run(\n            logger,\n            configuration,\n            task_arn,\n            cluster_arn,\n            ecs_client,\n            until_status=\"RUNNING\",\n            timeout=timeout,\n        ):\n            # TODO: It is possible that the task has passed _through_ a RUNNING\n            #       status during the polling interval. 
In this case, there is not an\n            #       exception to raise.\n            if task[\"lastStatus\"] == \"STOPPED\":\n                code = task.get(\"stopCode\")\n                reason = task.get(\"stoppedReason\")\n                # Generate a dynamic exception type from the AWS name\n                raise type(code, (RuntimeError,), {})(reason)\n\n        return task\n\n    def _wait_for_task_finish(\n        self,\n        logger: logging.Logger,\n        configuration: ECSJobConfiguration,\n        task_arn: str,\n        cluster_arn: str,\n        task_definition: dict,\n        ecs_client: _ECSClient,\n    ):\n        \"\"\"\n        Watch an ECS task until it reaches a STOPPED status.\n\n        If configured, logs from the Prefect container are streamed to stderr.\n\n        Returns a description of the task on completion.\n        \"\"\"\n        can_stream_output = False\n        container_name = (\n            configuration.container_name\n            or _container_name_from_task_definition(task_definition)\n            or ECS_DEFAULT_CONTAINER_NAME\n        )\n\n        if configuration.stream_output:\n            container_def = _get_container(\n                task_definition[\"containerDefinitions\"], container_name\n            )\n            if not container_def:\n                logger.warning(\n                    \"Prefect container definition not found in \"\n                    \"task definition. Output cannot be streamed.\"\n                )\n            elif not container_def.get(\"logConfiguration\"):\n                logger.warning(\n                    \"Logging configuration not found on task. \"\n                    \"Output cannot be streamed.\"\n                )\n            elif not container_def[\"logConfiguration\"].get(\"logDriver\") == \"awslogs\":\n                logger.warning(\n                    \"Logging configuration uses unsupported \"\n                    \" driver {container_def['logConfiguration'].get('logDriver')!r}. 
\"\n                    \"Output cannot be streamed.\"\n                )\n            else:\n                # Prepare to stream the output\n                log_config = container_def[\"logConfiguration\"][\"options\"]\n                logs_client = self._get_client(configuration, \"logs\")\n                can_stream_output = True\n                # Track the last log timestamp to prevent double display\n                last_log_timestamp: Optional[int] = None\n                # Determine the name of the stream as \"prefix/container/run-id\"\n                stream_name = \"/\".join(\n                    [\n                        log_config[\"awslogs-stream-prefix\"],\n                        container_name,\n                        task_arn.rsplit(\"/\")[-1],\n                    ]\n                )\n                self._logger.info(\n                    f\"Streaming output from container {container_name!r}...\"\n                )\n\n        for task in self._watch_task_run(\n            logger,\n            configuration,\n            task_arn,\n            cluster_arn,\n            ecs_client,\n            current_status=\"RUNNING\",\n        ):\n            if configuration.stream_output and can_stream_output:\n                # On each poll for task run status, also retrieve available logs\n                last_log_timestamp = self._stream_available_logs(\n                    logger,\n                    logs_client,\n                    log_group=log_config[\"awslogs-group\"],\n                    log_stream=stream_name,\n                    last_log_timestamp=last_log_timestamp,\n                )\n\n        return task\n\n    def _stream_available_logs(\n        self,\n        logger: logging.Logger,\n        logs_client: Any,\n        log_group: str,\n        log_stream: str,\n        last_log_timestamp: Optional[int] = None,\n    ) -> Optional[int]:\n        \"\"\"\n        Stream logs from the given log group and stream since the last log timestamp.\n\n        Will continue on paginated responses until all logs are returned.\n\n        Returns the last log timestamp which can be used to call this method in the\n        future.\n        \"\"\"\n        last_log_stream_token = \"NO-TOKEN\"\n        next_log_stream_token = None\n\n        # AWS will return the same token that we send once the end of the paginated\n        # response is reached\n        while last_log_stream_token != next_log_stream_token:\n            last_log_stream_token = next_log_stream_token\n\n            request = {\n                \"logGroupName\": log_group,\n                \"logStreamName\": log_stream,\n            }\n\n            if last_log_stream_token is not None:\n                request[\"nextToken\"] = last_log_stream_token\n\n            if last_log_timestamp is not None:\n                # Bump the timestamp by one ms to avoid retrieving the last log again\n                request[\"startTime\"] = last_log_timestamp + 1\n\n            try:\n                response = logs_client.get_log_events(**request)\n            except Exception:\n                logger.error(\n                    f\"Failed to read log events with request {request}\",\n                    exc_info=True,\n                )\n                return last_log_timestamp\n\n            log_events = response[\"events\"]\n            for log_event in log_events:\n                # TODO: This doesn't forward to the local logger, which can be\n                #       bad for customizing handling and understanding where the\n   
             #       log is coming from, but it avoid nesting logger information\n                #       when the content is output from a Prefect logger on the\n                #       running infrastructure\n                print(log_event[\"message\"], file=sys.stderr)\n\n                if (\n                    last_log_timestamp is None\n                    or log_event[\"timestamp\"] > last_log_timestamp\n                ):\n                    last_log_timestamp = log_event[\"timestamp\"]\n\n            next_log_stream_token = response.get(\"nextForwardToken\")\n            if not log_events:\n                # Stop reading pages if there was no data\n                break\n\n        return last_log_timestamp\n\n    def _watch_task_run(\n        self,\n        logger: logging.Logger,\n        configuration: ECSJobConfiguration,\n        task_arn: str,\n        cluster_arn: str,\n        ecs_client: _ECSClient,\n        current_status: str = \"UNKNOWN\",\n        until_status: str = None,\n        timeout: int = None,\n    ) -> Generator[None, None, dict]:\n        \"\"\"\n        Watches an ECS task run by querying every `poll_interval` seconds. After each\n        query, the retrieved task is yielded. This function returns when the task run\n        reaches a STOPPED status or the provided `until_status`.\n\n        Emits a log each time the status changes.\n        \"\"\"\n        last_status = status = current_status\n        t0 = time.time()\n        while status != until_status:\n            tasks = ecs_client.describe_tasks(\n                tasks=[task_arn], cluster=cluster_arn, include=[\"TAGS\"]\n            )[\"tasks\"]\n\n            if tasks:\n                task = tasks[0]\n\n                status = task[\"lastStatus\"]\n                if status != last_status:\n                    logger.info(f\"ECS task status is {status}.\")\n\n                yield task\n\n                # No point in continuing if the status is final\n                if status == \"STOPPED\":\n                    break\n\n                last_status = status\n\n            else:\n                # Intermittently, the task will not be described. 
We wat to respect the\n                # watch timeout though.\n                logger.debug(\"Task not found.\")\n\n            elapsed_time = time.time() - t0\n            if timeout is not None and elapsed_time > timeout:\n                raise RuntimeError(\n                    f\"Timed out after {elapsed_time}s while watching task for status \"\n                    f\"{until_status or 'STOPPED'}.\"\n                )\n            time.sleep(configuration.task_watch_poll_interval)\n\n    def _get_or_generate_family(self, task_definition: dict, flow_run: FlowRun) -> str:\n        \"\"\"\n        Gets or generate a family for the task definition.\n        \"\"\"\n        family = task_definition.get(\"family\")\n        if not family:\n            assert self._work_pool_name and flow_run.deployment_id\n            family = (\n                f\"{ECS_DEFAULT_FAMILY}_{self._work_pool_name}_{flow_run.deployment_id}\"\n            )\n        slugify(\n            family,\n            max_length=255,\n            regex_pattern=r\"[^a-zA-Z0-9-_]+\",\n        )\n        return family\n\n    def _prepare_task_definition(\n        self,\n        configuration: ECSJobConfiguration,\n        region: str,\n        flow_run: FlowRun,\n    ) -> dict:\n        \"\"\"\n        Prepare a task definition by inferring any defaults and merging overrides.\n        \"\"\"\n        task_definition = copy.deepcopy(configuration.task_definition)\n\n        # Configure the Prefect runtime container\n        task_definition.setdefault(\"containerDefinitions\", [])\n\n        # Remove empty container definitions\n        task_definition[\"containerDefinitions\"] = [\n            d for d in task_definition[\"containerDefinitions\"] if d\n        ]\n\n        container_name = configuration.container_name\n        if not container_name:\n            container_name = (\n                _container_name_from_task_definition(task_definition)\n                or ECS_DEFAULT_CONTAINER_NAME\n            )\n\n        container = _get_container(\n            task_definition[\"containerDefinitions\"], container_name\n        )\n        if container is None:\n            if container_name != ECS_DEFAULT_CONTAINER_NAME:\n                raise ValueError(\n                    f\"Container {container_name!r} not found in task definition.\"\n                )\n\n            # Look for a container without a name\n            for container in task_definition[\"containerDefinitions\"]:\n                if \"name\" not in container:\n                    container[\"name\"] = container_name\n                    break\n            else:\n                container = {\"name\": container_name}\n                task_definition[\"containerDefinitions\"].append(container)\n\n        # Image is required so make sure it's present\n        container.setdefault(\"image\", get_prefect_image_name())\n\n        # Remove any keys that have been explicitly \"unset\"\n        unset_keys = {key for key, value in configuration.env.items() if value is None}\n        for item in tuple(container.get(\"environment\", [])):\n            if item[\"name\"] in unset_keys or item[\"value\"] is None:\n                container[\"environment\"].remove(item)\n\n        if configuration.configure_cloudwatch_logs:\n            prefix = f\"prefect-logs_{self._work_pool_name}_{flow_run.deployment_id}\"\n            container[\"logConfiguration\"] = {\n                \"logDriver\": \"awslogs\",\n                \"options\": {\n                    
\"awslogs-create-group\": \"true\",\n                    \"awslogs-group\": \"prefect\",\n                    \"awslogs-region\": region,\n                    \"awslogs-stream-prefix\": (\n                        configuration.cloudwatch_logs_prefix or prefix\n                    ),\n                    **configuration.cloudwatch_logs_options,\n                },\n            }\n\n        task_definition[\"family\"] = self._get_or_generate_family(\n            task_definition, flow_run\n        )\n        # CPU and memory are required in some cases, retrieve the value to use\n        cpu = task_definition.get(\"cpu\") or ECS_DEFAULT_CPU\n        memory = task_definition.get(\"memory\") or ECS_DEFAULT_MEMORY\n\n        launch_type = configuration.task_run_request.get(\n            \"launchType\", ECS_DEFAULT_LAUNCH_TYPE\n        )\n\n        if launch_type == \"FARGATE\" or launch_type == \"FARGATE_SPOT\":\n            # Task level memory and cpu are required when using fargate\n            task_definition[\"cpu\"] = str(cpu)\n            task_definition[\"memory\"] = str(memory)\n\n            # The FARGATE compatibility is required if it will be used as as launch type\n            requires_compatibilities = task_definition.setdefault(\n                \"requiresCompatibilities\", []\n            )\n            if \"FARGATE\" not in requires_compatibilities:\n                task_definition[\"requiresCompatibilities\"].append(\"FARGATE\")\n\n            # Only the 'awsvpc' network mode is supported when using FARGATE\n            # However, we will not enforce that here if the user has set it\n            task_definition.setdefault(\"networkMode\", \"awsvpc\")\n\n        elif launch_type == \"EC2\":\n            # Container level memory and cpu are required when using ec2\n            container.setdefault(\"cpu\", cpu)\n            container.setdefault(\"memory\", memory)\n\n            # Ensure set values are cast to integers\n            container[\"cpu\"] = int(container[\"cpu\"])\n            container[\"memory\"] = int(container[\"memory\"])\n\n        # Ensure set values are cast to strings\n        if task_definition.get(\"cpu\"):\n            task_definition[\"cpu\"] = str(task_definition[\"cpu\"])\n        if task_definition.get(\"memory\"):\n            task_definition[\"memory\"] = str(task_definition[\"memory\"])\n\n        return task_definition\n\n    def _load_network_configuration(\n        self, vpc_id: Optional[str], configuration: ECSJobConfiguration\n    ) -> dict:\n        \"\"\"\n        Load settings from a specific VPC or the default VPC and generate a task\n        run request's network configuration.\n        \"\"\"\n        ec2_client = self._get_client(configuration, \"ec2\")\n        vpc_message = \"the default VPC\" if not vpc_id else f\"VPC with ID {vpc_id}\"\n\n        if not vpc_id:\n            # Retrieve the default VPC\n            describe = {\"Filters\": [{\"Name\": \"isDefault\", \"Values\": [\"true\"]}]}\n        else:\n            describe = {\"VpcIds\": [vpc_id]}\n\n        vpcs = ec2_client.describe_vpcs(**describe)[\"Vpcs\"]\n        if not vpcs:\n            help_message = (\n                \"Pass an explicit `vpc_id` or configure a default VPC.\"\n                if not vpc_id\n                else \"Check that the VPC exists in the current region.\"\n            )\n            raise ValueError(\n                f\"Failed to find {vpc_message}. \"\n                \"Network configuration cannot be inferred. 
\" + help_message\n            )\n\n        vpc_id = vpcs[0][\"VpcId\"]\n        subnets = ec2_client.describe_subnets(\n            Filters=[{\"Name\": \"vpc-id\", \"Values\": [vpc_id]}]\n        )[\"Subnets\"]\n        if not subnets:\n            raise ValueError(\n                f\"Failed to find subnets for {vpc_message}. \"\n                \"Network configuration cannot be inferred.\"\n            )\n\n        return {\n            \"awsvpcConfiguration\": {\n                \"subnets\": [s[\"SubnetId\"] for s in subnets],\n                \"assignPublicIp\": \"ENABLED\",\n                \"securityGroups\": [],\n            }\n        }\n\n    def _custom_network_configuration(\n        self,\n        vpc_id: str,\n        network_configuration: dict,\n        configuration: ECSJobConfiguration,\n    ) -> dict:\n        \"\"\"\n        Load settings from a specific VPC or the default VPC and generate a task\n        run request's network configuration.\n        \"\"\"\n        ec2_client = self._get_client(configuration, \"ec2\")\n        vpc_message = f\"VPC with ID {vpc_id}\"\n\n        vpcs = ec2_client.describe_vpcs(VpcIds=[vpc_id]).get(\"Vpcs\")\n\n        if not vpcs:\n            raise ValueError(\n                f\"Failed to find {vpc_message}. \"\n                + \"Network configuration cannot be inferred. \"\n                + \"Pass an explicit `vpc_id`.\"\n            )\n\n        vpc_id = vpcs[0][\"VpcId\"]\n        subnets = ec2_client.describe_subnets(\n            Filters=[{\"Name\": \"vpc-id\", \"Values\": [vpc_id]}]\n        )[\"Subnets\"]\n\n        if not subnets:\n            raise ValueError(\n                f\"Failed to find subnets for {vpc_message}. \"\n                + \"Network configuration cannot be inferred.\"\n            )\n\n        subnet_ids = [subnet[\"SubnetId\"] for subnet in subnets]\n\n        config_subnets = network_configuration.get(\"subnets\", [])\n        if not all(conf_sn in subnet_ids for conf_sn in config_subnets):\n            raise ValueError(\n                f\"Subnets {config_subnets} not found within {vpc_message}.\"\n                + \"Please check that VPC is associated with supplied subnets.\"\n            )\n\n        return {\"awsvpcConfiguration\": network_configuration}\n\n    def _prepare_task_run_request(\n        self,\n        configuration: ECSJobConfiguration,\n        task_definition: dict,\n        task_definition_arn: str,\n    ) -> dict:\n        \"\"\"\n        Prepare a task run request payload.\n        \"\"\"\n        task_run_request = deepcopy(configuration.task_run_request)\n\n        task_run_request.setdefault(\"taskDefinition\", task_definition_arn)\n        assert task_run_request[\"taskDefinition\"] == task_definition_arn\n        capacityProviderStrategy = task_run_request.get(\"capacityProviderStrategy\")\n\n        if capacityProviderStrategy:\n            # Should not be provided at all if capacityProviderStrategy is set, see https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RunTask.html#ECS-RunTask-request-capacityProviderStrategy  # noqa\n            self._logger.warning(\n                \"Found capacityProviderStrategy. 
\"\n                \"Removing launchType from task run request.\"\n            )\n            task_run_request.pop(\"launchType\", None)\n\n        elif task_run_request.get(\"launchType\") == \"FARGATE_SPOT\":\n            # Should not be provided at all for FARGATE SPOT\n            task_run_request.pop(\"launchType\", None)\n\n            # A capacity provider strategy is required for FARGATE SPOT\n            task_run_request[\"capacityProviderStrategy\"] = [\n                {\"capacityProvider\": \"FARGATE_SPOT\", \"weight\": 1}\n            ]\n        overrides = task_run_request.get(\"overrides\", {})\n        container_overrides = overrides.get(\"containerOverrides\", [])\n\n        # Ensure the network configuration is present if using awsvpc for network mode\n        if (\n            task_definition.get(\"networkMode\") == \"awsvpc\"\n            and not task_run_request.get(\"networkConfiguration\")\n            and not configuration.network_configuration\n        ):\n            task_run_request[\"networkConfiguration\"] = self._load_network_configuration(\n                configuration.vpc_id, configuration\n            )\n\n        # Use networkConfiguration if supplied by user\n        if (\n            task_definition.get(\"networkMode\") == \"awsvpc\"\n            and configuration.network_configuration\n            and configuration.vpc_id\n        ):\n            task_run_request[\n                \"networkConfiguration\"\n            ] = self._custom_network_configuration(\n                configuration.vpc_id,\n                configuration.network_configuration,\n                configuration,\n            )\n\n        # Ensure the container name is set if not provided at template time\n\n        container_name = (\n            configuration.container_name\n            or _container_name_from_task_definition(task_definition)\n            or ECS_DEFAULT_CONTAINER_NAME\n        )\n\n        if container_overrides and not container_overrides[0].get(\"name\"):\n            container_overrides[0][\"name\"] = container_name\n\n        # Ensure configuration command is respected post-templating\n\n        orchestration_container = _get_container(container_overrides, container_name)\n\n        if orchestration_container:\n            # Override the command if given on the configuration\n            if configuration.command:\n                orchestration_container[\"command\"] = configuration.command\n\n        # Clean up templated variable formatting\n\n        for container in container_overrides:\n            if isinstance(container.get(\"command\"), str):\n                container[\"command\"] = shlex.split(container[\"command\"])\n            if isinstance(container.get(\"environment\"), dict):\n                container[\"environment\"] = [\n                    {\"name\": k, \"value\": v} for k, v in container[\"environment\"].items()\n                ]\n\n            # Remove null values \u2014 they're not allowed by AWS\n            container[\"environment\"] = [\n                item\n                for item in container.get(\"environment\", [])\n                if item[\"value\"] is not None\n            ]\n\n        if isinstance(task_run_request.get(\"tags\"), dict):\n            task_run_request[\"tags\"] = [\n                {\"key\": k, \"value\": v} for k, v in task_run_request[\"tags\"].items()\n            ]\n\n        if overrides.get(\"cpu\"):\n            overrides[\"cpu\"] = str(overrides[\"cpu\"])\n\n        if overrides.get(\"memory\"):\n          
  overrides[\"memory\"] = str(overrides[\"memory\"])\n\n        # Ensure configuration tags and env are respected post-templating\n\n        tags = [\n            item\n            for item in task_run_request.get(\"tags\", [])\n            if item[\"key\"] not in configuration.labels.keys()\n        ] + [\n            {\"key\": k, \"value\": v}\n            for k, v in configuration.labels.items()\n            if v is not None\n        ]\n\n        # Slugify tags keys and values\n        tags = [\n            {\n                \"key\": slugify(\n                    item[\"key\"],\n                    regex_pattern=_TAG_REGEX,\n                    allow_unicode=True,\n                    lowercase=False,\n                ),\n                \"value\": slugify(\n                    item[\"value\"],\n                    regex_pattern=_TAG_REGEX,\n                    allow_unicode=True,\n                    lowercase=False,\n                ),\n            }\n            for item in tags\n        ]\n\n        if tags:\n            task_run_request[\"tags\"] = tags\n\n        if orchestration_container:\n            environment = [\n                item\n                for item in orchestration_container.get(\"environment\", [])\n                if item[\"name\"] not in configuration.env.keys()\n            ] + [\n                {\"name\": k, \"value\": v}\n                for k, v in configuration.env.items()\n                if v is not None\n            ]\n            if environment:\n                orchestration_container[\"environment\"] = environment\n\n        # Remove empty container overrides\n\n        overrides[\"containerOverrides\"] = [v for v in container_overrides if v]\n\n        return task_run_request\n\n    @retry(\n        stop=stop_after_attempt(MAX_CREATE_TASK_RUN_ATTEMPTS),\n        wait=wait_fixed(CREATE_TASK_RUN_MIN_DELAY_SECONDS)\n        + wait_random(\n            CREATE_TASK_RUN_MIN_DELAY_JITTER_SECONDS,\n            CREATE_TASK_RUN_MAX_DELAY_JITTER_SECONDS,\n        ),\n        reraise=True,\n    )\n    def _create_task_run(self, ecs_client: _ECSClient, task_run_request: dict) -> str:\n        \"\"\"\n        Create a run of a task definition.\n\n        Returns the task run ARN.\n        \"\"\"\n        task = ecs_client.run_task(**task_run_request)\n        if task[\"failures\"]:\n            raise RuntimeError(\n                f\"Failed to run ECS task: {task['failures'][0]['reason']}\"\n            )\n        elif not task[\"tasks\"]:\n            raise RuntimeError(\n                \"Failed to run ECS task: no tasks or failures were returned.\"\n            )\n        return task[\"tasks\"][0]\n\n    def _task_definitions_equal(self, taskdef_1, taskdef_2) -> bool:\n        \"\"\"\n        Compare two task definitions.\n\n        Since one may come from the AWS API and have populated defaults, we do our best\n        to homogenize the definitions without changing their meaning.\n        \"\"\"\n        if taskdef_1 == taskdef_2:\n            return True\n\n        if taskdef_1 is None or taskdef_2 is None:\n            return False\n\n        taskdef_1 = copy.deepcopy(taskdef_1)\n        taskdef_2 = copy.deepcopy(taskdef_2)\n\n        for taskdef in (taskdef_1, taskdef_2):\n            # Set defaults that AWS would set after registration\n            container_definitions = taskdef.get(\"containerDefinitions\", [])\n            essential = any(\n                container.get(\"essential\") for container in container_definitions\n            )\n           
 if not essential:\n                container_definitions[0].setdefault(\"essential\", True)\n\n            taskdef.setdefault(\"networkMode\", \"bridge\")\n\n        _drop_empty_keys_from_task_definition(taskdef_1)\n        _drop_empty_keys_from_task_definition(taskdef_2)\n\n        # Clear fields that change on registration for comparison\n        for field in ECS_POST_REGISTRATION_FIELDS:\n            taskdef_1.pop(field, None)\n            taskdef_2.pop(field, None)\n\n        return taskdef_1 == taskdef_2\n\n    async def kill_infrastructure(\n        self,\n        configuration: ECSJobConfiguration,\n        infrastructure_pid: str,\n        grace_seconds: int = 30,\n    ) -> None:\n        \"\"\"\n        Kill a task running on ECS.\n\n        Args:\n            infrastructure_pid: A cluster and task arn combination. This should match a\n                value yielded by `ECSWorker.run`.\n        \"\"\"\n        if grace_seconds != 30:\n            self._logger.warning(\n                f\"Kill grace period of {grace_seconds}s requested, but AWS does not \"\n                \"support dynamic grace period configuration so 30s will be used. \"\n                \"See https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html for configuration of grace periods.\"  # noqa\n            )\n        cluster, task = parse_identifier(infrastructure_pid)\n        await run_sync_in_worker_thread(self._stop_task, configuration, cluster, task)\n\n    def _stop_task(\n        self, configuration: ECSJobConfiguration, cluster: str, task: str\n    ) -> None:\n        \"\"\"\n        Stop a running ECS task.\n        \"\"\"\n        if configuration.cluster is not None and cluster != configuration.cluster:\n            raise InfrastructureNotAvailable(\n                \"Cannot stop ECS task: this infrastructure block has access to \"\n                f\"cluster {configuration.cluster!r} but the task is running in cluster \"\n                f\"{cluster!r}.\"\n            )\n\n        ecs_client = self._get_client(configuration, \"ecs\")\n        try:\n            ecs_client.stop_task(cluster=cluster, task=task)\n        except Exception as exc:\n            # Raise a special exception if the task does not exist\n            if \"ClusterNotFound\" in str(exc):\n                raise InfrastructureNotFound(\n                    f\"Cannot stop ECS task: the cluster {cluster!r} could not be found.\"\n                ) from exc\n            if \"not find task\" in str(exc) or \"referenced task was not found\" in str(\n                exc\n            ):\n                raise InfrastructureNotFound(\n                    f\"Cannot stop ECS task: the task {task!r} could not be found in \"\n                    f\"cluster {cluster!r}.\"\n                ) from exc\n            if \"no registered tasks\" in str(exc):\n                raise InfrastructureNotFound(\n                    f\"Cannot stop ECS task: the cluster {cluster!r} has no tasks.\"\n                ) from exc\n\n            # Reraise unknown exceptions\n            raise\n
    "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSWorker.kill_infrastructure","title":"kill_infrastructure async","text":"

    Kill a task running on ECS.

    Parameters:

    infrastructure_pid (str, required): A cluster and task arn combination. This should match a value yielded by ECSWorker.run.

    Source code in prefect_aws/workers/ecs_worker.py
    async def kill_infrastructure(\n    self,\n    configuration: ECSJobConfiguration,\n    infrastructure_pid: str,\n    grace_seconds: int = 30,\n) -> None:\n    \"\"\"\n    Kill a task running on ECS.\n\n    Args:\n        infrastructure_pid: A cluster and task arn combination. This should match a\n            value yielded by `ECSWorker.run`.\n    \"\"\"\n    if grace_seconds != 30:\n        self._logger.warning(\n            f\"Kill grace period of {grace_seconds}s requested, but AWS does not \"\n            \"support dynamic grace period configuration so 30s will be used. \"\n            \"See https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html for configuration of grace periods.\"  # noqa\n        )\n    cluster, task = parse_identifier(infrastructure_pid)\n    await run_sync_in_worker_thread(self._stop_task, configuration, cluster, task)\n
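    A rough, hedged sketch of calling this method directly; the work pool name, identifier, and configuration object below are placeholders (assumptions, not values from this documentation), and in normal operation cancellation is driven by the Prefect worker itself:

    ```python
    from prefect_aws.workers.ecs_worker import ECSJobConfiguration, ECSWorker

    async def cancel_ecs_task(identifier: str, configuration: ECSJobConfiguration) -> None:
        # identifier is the "<cluster>::<task ARN>" string yielded by ECSWorker.run;
        # "my-ecs-pool" is a placeholder work pool name.
        async with ECSWorker(work_pool_name="my-ecs-pool") as worker:
            await worker.kill_infrastructure(
                configuration=configuration,
                infrastructure_pid=identifier,
                grace_seconds=30,  # AWS always applies 30s; other values only log a warning
            )
    ```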
    "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSWorker.run","title":"run async","text":"

    Runs a given flow run on the current worker.

    Source code in prefect_aws/workers/ecs_worker.py
    async def run(\n    self,\n    flow_run: \"FlowRun\",\n    configuration: ECSJobConfiguration,\n    task_status: Optional[anyio.abc.TaskStatus] = None,\n) -> ECSWorkerResult:\n    \"\"\"\n    Runs a given flow run on the current worker.\n    \"\"\"\n    ecs_client = await run_sync_in_worker_thread(\n        self._get_client, configuration, \"ecs\"\n    )\n\n    logger = self.get_flow_run_logger(flow_run)\n\n    (\n        task_arn,\n        cluster_arn,\n        task_definition,\n        is_new_task_definition,\n    ) = await run_sync_in_worker_thread(\n        self._create_task_and_wait_for_start,\n        logger,\n        ecs_client,\n        configuration,\n        flow_run,\n    )\n\n    # The task identifier is \"{cluster}::{task}\" where we use the configured cluster\n    # if set to preserve matching by name rather than arn\n    # Note \"::\" is used despite the Prefect standard being \":\" because ARNs contain\n    # single colons.\n    identifier = (\n        (configuration.cluster if configuration.cluster else cluster_arn)\n        + \"::\"\n        + task_arn\n    )\n\n    if task_status:\n        task_status.started(identifier)\n\n    status_code = await run_sync_in_worker_thread(\n        self._watch_task_and_get_exit_code,\n        logger,\n        configuration,\n        task_arn,\n        cluster_arn,\n        task_definition,\n        is_new_task_definition and configuration.auto_deregister_task_definition,\n        ecs_client,\n    )\n\n    return ECSWorkerResult(\n        identifier=identifier,\n        # If the container does not start the exit code can be null but we must\n        # still report a status code. We use a -1 to indicate a special code.\n        status_code=status_code if status_code is not None else -1,\n    )\n
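    The identifier and status code conventions above lend themselves to a small helper. The sketch below is illustrative only; the summarize function is hypothetical (not part of prefect-aws) and assumes a result produced by the worker machinery:

    ```python
    from prefect_aws.workers.ecs_worker import ECSWorkerResult, parse_identifier

    def summarize(result: ECSWorkerResult) -> str:
        """Summarize an ECSWorkerResult yielded by ECSWorker.run (hypothetical helper)."""
        # result.identifier is "<cluster>::<task ARN>"; parse_identifier splits it.
        cluster, task_arn = parse_identifier(result.identifier)
        if result.status_code == -1:
            # run() substitutes -1 when the container never reported an exit code.
            return f"task {task_arn} in {cluster} never reported an exit code"
        return f"task {task_arn} in {cluster} exited with status {result.status_code}"
    ```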
    "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSWorkerResult","title":"ECSWorkerResult","text":"

    Bases: BaseWorkerResult

    The result of an ECS job.

    Source code in prefect_aws/workers/ecs_worker.py
    class ECSWorkerResult(BaseWorkerResult):\n    \"\"\"\n    The result of an ECS job.\n    \"\"\"\n
    "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.parse_identifier","title":"parse_identifier","text":"

    Splits identifier into its cluster and task components, e.g. input \"cluster_name::task_arn\" outputs (\"cluster_name\", \"task_arn\").

    Source code in prefect_aws/workers/ecs_worker.py
    def parse_identifier(identifier: str) -> ECSIdentifier:\n    \"\"\"\n    Splits identifier into its cluster and task components, e.g.\n    input \"cluster_name::task_arn\" outputs (\"cluster_name\", \"task_arn\").\n    \"\"\"\n    cluster, task = identifier.split(\"::\", maxsplit=1)\n    return ECSIdentifier(cluster, task)\n
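    A small, hedged usage sketch, assuming prefect-aws is installed; the cluster name and task ARN below are placeholders, not real resources:

    ```python
    from prefect_aws.workers.ecs_worker import parse_identifier

    # The identifier format emitted by ECSWorker.run is "<cluster>::<task ARN>";
    # "::" is used because ARNs themselves contain single colons.
    cluster, task = parse_identifier(
        "my-ecs-cluster::arn:aws:ecs:us-east-1:123456789012:task/my-ecs-cluster/abc123"
    )
    print(cluster)  # "my-ecs-cluster"
    print(task)     # the task ARN portion
    ```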
    "},{"location":"integrations/prefect-aws/glue_job/","title":"Glue Job","text":""},{"location":"integrations/prefect-aws/glue_job/#prefect_aws.glue_job","title":"prefect_aws.glue_job","text":"

    Integrations with the AWS Glue Job.

    "},{"location":"integrations/prefect-aws/glue_job/#prefect_aws.glue_job.GlueJobBlock","title":"GlueJobBlock","text":"

    Bases: JobBlock

    Execute a job using the AWS Glue Job service.

    Attributes:

    job_name (str): The name of the job definition to use.

    arguments (Optional[dict]): The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself. You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes. Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job. doc

    job_watch_poll_interval (float): The amount of time to wait between AWS API calls while monitoring the state of a Glue Job. The default is 60s because jobs that use AWS Glue versions 2.0 and later have a 1-minute minimum. AWS Glue Pricing

    Example

    Start a job on AWS Glue.

    from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.glue_job import GlueJobBlock\n\n\n@flow\ndef example_run_glue_job():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"your_access_key_id\",\n        aws_secret_access_key=\"your_secret_access_key\"\n    )\n    glue_job_run = GlueJobBlock(\n        job_name=\"your_glue_job_name\",\n        arguments={\"--YOUR_EXTRA_ARGUMENT\": \"YOUR_EXTRA_ARGUMENT_VALUE\"},\n    ).trigger()\n\n    return glue_job_run.wait_for_completion()\n\n\nexample_run_glue_job()\n

    Source code in prefect_aws/glue_job.py
    class GlueJobBlock(JobBlock):\n    \"\"\"Execute a job to the AWS Glue Job service.\n\n    Attributes:\n        job_name: The name of the job definition to use.\n        arguments: The job arguments associated with this run.\n            For this job run, they replace the default arguments set in the job\n            definition itself.\n            You can specify arguments here that your own job-execution script consumes,\n            as well as arguments that Glue itself consumes.\n            Job arguments may be logged. Do not pass plaintext secrets as arguments.\n            Retrieve secrets from a Glue Connection, Secrets Manager or other secret\n            management mechanism if you intend to keep them within the Job.\n            [doc](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-glue-arguments.html)\n        job_watch_poll_interval: The amount of time to wait between AWS API\n            calls while monitoring the state of a Glue Job.\n            default is 60s because of jobs that use AWS Glue versions 2.0 and later\n            have a 1-minute minimum.\n            [AWS Glue Pricing](https://aws.amazon.com/glue/pricing/?nc1=h_ls)\n\n    Example:\n        Start a job to AWS Glue Job.\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.glue_job import GlueJobBlock\n\n\n        @flow\n        def example_run_glue_job():\n            aws_credentials = AwsCredentials(\n                aws_access_key_id=\"your_access_key_id\",\n                aws_secret_access_key=\"your_secret_access_key\"\n            )\n            glue_job_run = GlueJobBlock(\n                job_name=\"your_glue_job_name\",\n                arguments={\"--YOUR_EXTRA_ARGUMENT\": \"YOUR_EXTRA_ARGUMENT_VALUE\"},\n            ).trigger()\n\n            return glue_job_run.wait_for_completion()\n\n\n        example_run_glue_job()\n        ```\n    \"\"\"\n\n    job_name: str = Field(\n        ...,\n        title=\"AWS Glue Job Name\",\n        description=\"The name of the job definition to use.\",\n    )\n\n    arguments: Optional[dict] = Field(\n        default=None,\n        title=\"AWS Glue Job Arguments\",\n        description=\"The job arguments associated with this run.\",\n    )\n    job_watch_poll_interval: float = Field(\n        default=60.0,\n        description=(\n            \"The amount of time to wait between AWS API calls while monitoring the \"\n            \"state of an Glue Job.\"\n        ),\n    )\n\n    aws_credentials: AwsCredentials = Field(\n        title=\"AWS Credentials\",\n        default_factory=AwsCredentials,\n        description=\"The AWS credentials to use to connect to Glue.\",\n    )\n\n    async def trigger(self) -> GlueJobRun:\n        \"\"\"trigger for GlueJobRun\"\"\"\n        client = self._get_client()\n        job_run_id = self._start_job(client)\n        return GlueJobRun(\n            job_name=self.job_name,\n            job_id=job_run_id,\n            job_watch_poll_interval=self.job_watch_poll_interval,\n        )\n\n    def _start_job(self, client: _GlueJobClient) -> str:\n        \"\"\"\n        Start the AWS Glue Job\n        [doc](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/glue/client/start_job_run.html)\n        \"\"\"\n        self.logger.info(\n            f\"starting job {self.job_name} with arguments {self.arguments}\"\n        )\n        try:\n            response = client.start_job_run(\n                
JobName=self.job_name,\n                Arguments=self.arguments,\n            )\n            job_run_id = str(response[\"JobRunId\"])\n            self.logger.info(f\"job started with job run id: {job_run_id}\")\n            return job_run_id\n        except Exception as e:\n            self.logger.error(f\"failed to start job: {e}\")\n            raise RuntimeError\n\n    def _get_client(self) -> _GlueJobClient:\n        \"\"\"\n        Retrieve a Glue Job Client\n        \"\"\"\n        boto_session = self.aws_credentials.get_boto3_session()\n        return boto_session.client(\"glue\")\n
    "},{"location":"integrations/prefect-aws/glue_job/#prefect_aws.glue_job.GlueJobBlock.trigger","title":"trigger async","text":"

    trigger for GlueJobRun

    Source code in prefect_aws/glue_job.py
    async def trigger(self) -> GlueJobRun:\n    \"\"\"trigger for GlueJobRun\"\"\"\n    client = self._get_client()\n    job_run_id = self._start_job(client)\n    return GlueJobRun(\n        job_name=self.job_name,\n        job_id=job_run_id,\n        job_watch_poll_interval=self.job_watch_poll_interval,\n    )\n
    "},{"location":"integrations/prefect-aws/glue_job/#prefect_aws.glue_job.GlueJobRun","title":"GlueJobRun","text":"

    Bases: JobRun, BaseModel

    Execute a Glue Job

    Source code in prefect_aws/glue_job.py
    class GlueJobRun(JobRun, BaseModel):\n    \"\"\"Execute a Glue Job\"\"\"\n\n    job_name: str = Field(\n        ...,\n        title=\"AWS Glue Job Name\",\n        description=\"The name of the job definition to use.\",\n    )\n\n    job_id: str = Field(\n        ...,\n        title=\"AWS Glue Job ID\",\n        description=\"The ID of the job run.\",\n    )\n\n    job_watch_poll_interval: float = Field(\n        default=60.0,\n        description=(\n            \"The amount of time to wait between AWS API calls while monitoring the \"\n            \"state of an Glue Job.\"\n        ),\n    )\n\n    _error_states = [\"FAILED\", \"STOPPED\", \"ERROR\", \"TIMEOUT\"]\n\n    aws_credentials: AwsCredentials = Field(\n        title=\"AWS Credentials\",\n        default_factory=AwsCredentials,\n        description=\"The AWS credentials to use to connect to Glue.\",\n    )\n\n    client: _GlueJobClient = Field(default=None, description=\"\")\n\n    async def fetch_result(self) -> str:\n        \"\"\"fetch glue job state\"\"\"\n        job = self._get_job_run()\n        return job[\"JobRun\"][\"JobRunState\"]\n\n    def wait_for_completion(self) -> None:\n        \"\"\"\n        Wait for the job run to complete and get exit code\n        \"\"\"\n        self.logger.info(f\"watching job {self.job_name} with run id {self.job_id}\")\n        while True:\n            job = self._get_job_run()\n            job_state = job[\"JobRun\"][\"JobRunState\"]\n            if job_state in self._error_states:\n                # Generate a dynamic exception type from the AWS name\n                self.logger.error(f\"job failed: {job['JobRun']['ErrorMessage']}\")\n                raise RuntimeError(job[\"JobRun\"][\"ErrorMessage\"])\n            elif job_state == \"SUCCEEDED\":\n                self.logger.info(f\"job succeeded: {self.job_id}\")\n                break\n\n            time.sleep(self.job_watch_poll_interval)\n\n    def _get_job_run(self):\n        \"\"\"get glue job\"\"\"\n        return self.client.get_job_run(JobName=self.job_name, RunId=self.job_id)\n
    "},{"location":"integrations/prefect-aws/glue_job/#prefect_aws.glue_job.GlueJobRun.fetch_result","title":"fetch_result async","text":"

    fetch glue job state

    Source code in prefect_aws/glue_job.py
    async def fetch_result(self) -> str:\n    \"\"\"fetch glue job state\"\"\"\n    job = self._get_job_run()\n    return job[\"JobRun\"][\"JobRunState\"]\n
    "},{"location":"integrations/prefect-aws/glue_job/#prefect_aws.glue_job.GlueJobRun.wait_for_completion","title":"wait_for_completion","text":"

    Wait for the job run to complete and get exit code

    Source code in prefect_aws/glue_job.py
    def wait_for_completion(self) -> None:\n    \"\"\"\n    Wait for the job run to complete and get exit code\n    \"\"\"\n    self.logger.info(f\"watching job {self.job_name} with run id {self.job_id}\")\n    while True:\n        job = self._get_job_run()\n        job_state = job[\"JobRun\"][\"JobRunState\"]\n        if job_state in self._error_states:\n            # Generate a dynamic exception type from the AWS name\n            self.logger.error(f\"job failed: {job['JobRun']['ErrorMessage']}\")\n            raise RuntimeError(job[\"JobRun\"][\"ErrorMessage\"])\n        elif job_state == \"SUCCEEDED\":\n            self.logger.info(f\"job succeeded: {self.job_id}\")\n            break\n\n        time.sleep(self.job_watch_poll_interval)\n
    "},{"location":"integrations/prefect-aws/lambda_function/","title":"Lambda","text":""},{"location":"integrations/prefect-aws/lambda_function/#prefect_aws.lambda_function","title":"prefect_aws.lambda_function","text":"

    Integrations with AWS Lambda.

    Examples:

    Run a lambda function with a payload\n\n```python\nLambdaFunction(\n    function_name=\"test-function\",\n    aws_credentials=aws_credentials,\n).invoke(payload={\"foo\": \"bar\"})\n```\n\nSpecify a version of a lambda function\n\n```python\nLambdaFunction(\n    function_name=\"test-function\",\n    qualifier=\"1\",\n    aws_credentials=aws_credentials,\n).invoke()\n```\n\nInvoke a lambda function asynchronously\n\n```python\nLambdaFunction(\n    function_name=\"test-function\",\n    aws_credentials=aws_credentials,\n).invoke(invocation_type=\"Event\")\n```\n\nInvoke a lambda function and return the last 4 KB of logs\n\n```python\nLambdaFunction(\n    function_name=\"test-function\",\n    aws_credentials=aws_credentials,\n).invoke(tail=True)\n```\n\nInvoke a lambda function with a client context\n\n```python\nLambdaFunction(\n    function_name=\"test-function\",\n    aws_credentials=aws_credentials,\n).invoke(client_context={\"bar\": \"foo\"})\n```\n
    "},{"location":"integrations/prefect-aws/lambda_function/#prefect_aws.lambda_function.LambdaFunction","title":"LambdaFunction","text":"

    Bases: Block

    Invoke a Lambda function. This block is part of the prefect-aws collection. Install prefect-aws with pip install prefect-aws to use this block.

    Attributes:

    function_name (str): The name, ARN, or partial ARN of the Lambda function to run. This must be the name of a function that is already deployed to AWS Lambda.

    qualifier (Optional[str]): The version or alias of the Lambda function to use when invoked. If not specified, the latest (unqualified) version of the Lambda function will be used.

    aws_credentials (AwsCredentials): The AWS credentials to use to connect to AWS Lambda, with a default factory of AwsCredentials.

    Source code in prefect_aws/lambda_function.py
    class LambdaFunction(Block):\n    \"\"\"Invoke a Lambda function. This block is part of the prefect-aws\n    collection. Install prefect-aws with `pip install prefect-aws` to use this\n    block.\n\n    Attributes:\n        function_name: The name, ARN, or partial ARN of the Lambda function to\n            run. This must be the name of a function that is already deployed\n            to AWS Lambda.\n        qualifier: The version or alias of the Lambda function to use when\n            invoked. If not specified, the latest (unqualified) version of the\n            Lambda function will be used.\n        aws_credentials: The AWS credentials to use to connect to AWS Lambda\n            with a default factory of AwsCredentials.\n\n    \"\"\"\n\n    _block_type_name = \"Lambda Function\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/d74b16fe84ce626345adf235a47008fea2869a60-225x225.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-aws/s3/#prefect_aws.lambda_function.LambdaFunction\"  # noqa\n\n    function_name: str = Field(\n        title=\"Function Name\",\n        description=(\n            \"The name, ARN, or partial ARN of the Lambda function to run. This\"\n            \" must be the name of a function that is already deployed to AWS\"\n            \" Lambda.\"\n        ),\n    )\n    qualifier: Optional[str] = Field(\n        default=None,\n        title=\"Qualifier\",\n        description=(\n            \"The version or alias of the Lambda function to use when invoked. \"\n            \"If not specified, the latest (unqualified) version of the Lambda \"\n            \"function will be used.\"\n        ),\n    )\n    aws_credentials: AwsCredentials = Field(\n        title=\"AWS Credentials\",\n        default_factory=AwsCredentials,\n        description=\"The AWS credentials to invoke the Lambda with.\",\n    )\n\n    class Config:\n        \"\"\"Lambda's pydantic configuration.\"\"\"\n\n        smart_union = True\n\n    def _get_lambda_client(self):\n        \"\"\"\n        Retrieve a boto3 session and Lambda client\n        \"\"\"\n        boto_session = self.aws_credentials.get_boto3_session()\n        lambda_client = boto_session.client(\"lambda\")\n        return lambda_client\n\n    @sync_compatible\n    async def invoke(\n        self,\n        payload: dict = None,\n        invocation_type: Literal[\n            \"RequestResponse\", \"Event\", \"DryRun\"\n        ] = \"RequestResponse\",\n        tail: bool = False,\n        client_context: Optional[dict] = None,\n    ) -> dict:\n        \"\"\"\n        [Invoke](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda/client/invoke.html)\n        the Lambda function with the given payload.\n\n        Args:\n            payload: The payload to send to the Lambda function.\n            invocation_type: The invocation type of the Lambda function. This\n                can be one of \"RequestResponse\", \"Event\", or \"DryRun\". 
Uses\n                \"RequestResponse\" by default.\n            tail: If True, the response will include the base64-encoded last 4\n                KB of log data produced by the Lambda function.\n            client_context: The client context to send to the Lambda function.\n                Limited to 3583 bytes.\n\n        Returns:\n            The response from the Lambda function.\n\n        Examples:\n\n            ```python\n            from prefect_aws.lambda_function import LambdaFunction\n            from prefect_aws.credentials import AwsCredentials\n\n            credentials = AwsCredentials()\n            lambda_function = LambdaFunction(\n                function_name=\"test_lambda_function\",\n                aws_credentials=credentials,\n            )\n            response = lambda_function.invoke(\n                payload={\"foo\": \"bar\"},\n                invocation_type=\"RequestResponse\",\n            )\n            response[\"Payload\"].read()\n            ```\n            ```txt\n            b'{\"foo\": \"bar\"}'\n            ```\n\n        \"\"\"\n        # Add invocation arguments\n        kwargs = dict(FunctionName=self.function_name)\n\n        if payload:\n            kwargs[\"Payload\"] = json.dumps(payload).encode()\n\n        # Let boto handle invalid invocation types\n        kwargs[\"InvocationType\"] = invocation_type\n\n        if self.qualifier is not None:\n            kwargs[\"Qualifier\"] = self.qualifier\n\n        if tail:\n            kwargs[\"LogType\"] = \"Tail\"\n\n        if client_context is not None:\n            # For some reason this is string, but payload is bytes\n            kwargs[\"ClientContext\"] = json.dumps(client_context)\n\n        # Get client and invoke\n        lambda_client = await run_sync_in_worker_thread(self._get_lambda_client)\n        return await run_sync_in_worker_thread(lambda_client.invoke, **kwargs)\n
    "},{"location":"integrations/prefect-aws/lambda_function/#prefect_aws.lambda_function.LambdaFunction.Config","title":"Config","text":"

    Lambda's pydantic configuration.

    Source code in prefect_aws/lambda_function.py
    class Config:\n    \"\"\"Lambda's pydantic configuration.\"\"\"\n\n    smart_union = True\n
    "},{"location":"integrations/prefect-aws/lambda_function/#prefect_aws.lambda_function.LambdaFunction.invoke","title":"invoke async","text":"

    Invoke the Lambda function with the given payload.

    Parameters:

    payload (dict, default None): The payload to send to the Lambda function.

    invocation_type (Literal['RequestResponse', 'Event', 'DryRun'], default 'RequestResponse'): The invocation type of the Lambda function. This can be one of "RequestResponse", "Event", or "DryRun". Uses "RequestResponse" by default.

    tail (bool, default False): If True, the response will include the base64-encoded last 4 KB of log data produced by the Lambda function.

    client_context (Optional[dict], default None): The client context to send to the Lambda function. Limited to 3583 bytes.

    Returns:

    dict: The response from the Lambda function.

    ```python\nfrom prefect_aws.lambda_function import LambdaFunction\nfrom prefect_aws.credentials import AwsCredentials\n\ncredentials = AwsCredentials()\nlambda_function = LambdaFunction(\n    function_name=\"test_lambda_function\",\n    aws_credentials=credentials,\n)\nresponse = lambda_function.invoke(\n    payload={\"foo\": \"bar\"},\n    invocation_type=\"RequestResponse\",\n)\nresponse[\"Payload\"].read()\n```\n```txt\nb'{\"foo\": \"bar\"}'\n```\n
    Source code in prefect_aws/lambda_function.py
    @sync_compatible\nasync def invoke(\n    self,\n    payload: dict = None,\n    invocation_type: Literal[\n        \"RequestResponse\", \"Event\", \"DryRun\"\n    ] = \"RequestResponse\",\n    tail: bool = False,\n    client_context: Optional[dict] = None,\n) -> dict:\n    \"\"\"\n    [Invoke](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda/client/invoke.html)\n    the Lambda function with the given payload.\n\n    Args:\n        payload: The payload to send to the Lambda function.\n        invocation_type: The invocation type of the Lambda function. This\n            can be one of \"RequestResponse\", \"Event\", or \"DryRun\". Uses\n            \"RequestResponse\" by default.\n        tail: If True, the response will include the base64-encoded last 4\n            KB of log data produced by the Lambda function.\n        client_context: The client context to send to the Lambda function.\n            Limited to 3583 bytes.\n\n    Returns:\n        The response from the Lambda function.\n\n    Examples:\n\n        ```python\n        from prefect_aws.lambda_function import LambdaFunction\n        from prefect_aws.credentials import AwsCredentials\n\n        credentials = AwsCredentials()\n        lambda_function = LambdaFunction(\n            function_name=\"test_lambda_function\",\n            aws_credentials=credentials,\n        )\n        response = lambda_function.invoke(\n            payload={\"foo\": \"bar\"},\n            invocation_type=\"RequestResponse\",\n        )\n        response[\"Payload\"].read()\n        ```\n        ```txt\n        b'{\"foo\": \"bar\"}'\n        ```\n\n    \"\"\"\n    # Add invocation arguments\n    kwargs = dict(FunctionName=self.function_name)\n\n    if payload:\n        kwargs[\"Payload\"] = json.dumps(payload).encode()\n\n    # Let boto handle invalid invocation types\n    kwargs[\"InvocationType\"] = invocation_type\n\n    if self.qualifier is not None:\n        kwargs[\"Qualifier\"] = self.qualifier\n\n    if tail:\n        kwargs[\"LogType\"] = \"Tail\"\n\n    if client_context is not None:\n        # For some reason this is string, but payload is bytes\n        kwargs[\"ClientContext\"] = json.dumps(client_context)\n\n    # Get client and invoke\n    lambda_client = await run_sync_in_worker_thread(self._get_lambda_client)\n    return await run_sync_in_worker_thread(lambda_client.invoke, **kwargs)\n
    "},{"location":"integrations/prefect-aws/s3/","title":"S3","text":""},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3","title":"prefect_aws.s3","text":"

    Tasks for interacting with AWS S3

    "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket","title":"S3Bucket","text":"

    Bases: WritableFileSystem, WritableDeploymentStorage, ObjectStorageBlock

    Block used to store data using AWS S3 or S3-compatible object storage like MinIO.

    Attributes:

    bucket_name (str): Name of your bucket.

    credentials (Union[MinIOCredentials, AwsCredentials]): A block containing your credentials to AWS or MinIO.

    bucket_folder (str): A default path to a folder within the S3 bucket to use for reading and writing objects.
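    Before the full source below, a short hedged sketch of constructing the block and round-tripping a small payload; the credential values, bucket name, and folder are placeholders:

    ```python
    from prefect_aws import AwsCredentials
    from prefect_aws.s3 import S3Bucket

    # Placeholder credentials; a saved AwsCredentials block or ambient AWS config works too.
    aws_creds = AwsCredentials(
        aws_access_key_id="AWS_ACCESS_KEY_ID",
        aws_secret_access_key="AWS_SECRET_ACCESS_KEY",
    )

    s3_bucket = S3Bucket(
        bucket_name="my-bucket",    # placeholder bucket name
        credentials=aws_creds,
        bucket_folder="subfolder",  # optional default prefix for reads and writes
    )

    # Paths are resolved relative to bucket_folder, so this writes "subfolder/file1".
    s3_bucket.write_path("file1", content=b"hello from prefect-aws")
    assert s3_bucket.read_path("file1") == b"hello from prefect-aws"
    ```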

    Source code in prefect_aws/s3.py
    class S3Bucket(WritableFileSystem, WritableDeploymentStorage, ObjectStorageBlock):\n\n    \"\"\"\n    Block used to store data using AWS S3 or S3-compatible object storage like MinIO.\n\n    Attributes:\n        bucket_name: Name of your bucket.\n        credentials: A block containing your credentials to AWS or MinIO.\n        bucket_folder: A default path to a folder within the S3 bucket to use\n            for reading and writing objects.\n    \"\"\"\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/d74b16fe84ce626345adf235a47008fea2869a60-225x225.png\"  # noqa\n    _block_type_name = \"S3 Bucket\"\n    _documentation_url = (\n        \"https://prefecthq.github.io/prefect-aws/s3/#prefect_aws.s3.S3Bucket\"  # noqa\n    )\n\n    bucket_name: str = Field(default=..., description=\"Name of your bucket.\")\n\n    credentials: Union[MinIOCredentials, AwsCredentials] = Field(\n        default_factory=AwsCredentials,\n        description=\"A block containing your credentials to AWS or MinIO.\",\n    )\n\n    bucket_folder: str = Field(\n        default=\"\",\n        description=(\n            \"A default path to a folder within the S3 bucket to use \"\n            \"for reading and writing objects.\"\n        ),\n    )\n\n    # Property to maintain compatibility with storage block based deployments\n    @property\n    def basepath(self) -> str:\n        \"\"\"\n        The base path of the S3 bucket.\n\n        Returns:\n            str: The base path of the S3 bucket.\n        \"\"\"\n        return self.bucket_folder\n\n    @basepath.setter\n    def basepath(self, value: str) -> None:\n        self.bucket_folder = value\n\n    def _resolve_path(self, path: str) -> str:\n        \"\"\"\n        A helper function used in write_path to join `self.basepath` and `path`.\n\n        Args:\n\n            path: Name of the key, e.g. \"file1\". Each object in your\n                bucket has a unique key (or key name).\n\n        \"\"\"\n        # If bucket_folder provided, it means we won't write to the root dir of\n        # the bucket. 
So we need to add it on the front of the path.\n        #\n        # AWS object key naming guidelines require '/' for bucket folders.\n        # Get POSIX path to prevent `pathlib` from inferring '\\' on Windows OS\n        path = (\n            (Path(self.bucket_folder) / path).as_posix() if self.bucket_folder else path\n        )\n\n        return path\n\n    def _get_s3_client(self) -> boto3.client:\n        \"\"\"\n        Authenticate MinIO credentials or AWS credentials and return an S3 client.\n        This is a helper function called by read_path() or write_path().\n        \"\"\"\n        return self.credentials.get_client(\"s3\")\n\n    def _get_bucket_resource(self) -> boto3.resource:\n        \"\"\"\n        Retrieves boto3 resource object for the configured bucket\n        \"\"\"\n        params_override = self.credentials.aws_client_parameters.get_params_override()\n        bucket = (\n            self.credentials.get_boto3_session()\n            .resource(\"s3\", **params_override)\n            .Bucket(self.bucket_name)\n        )\n        return bucket\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> None:\n        \"\"\"\n        Copies a folder from the configured S3 bucket to a local directory.\n\n        Defaults to copying the entire contents of the block's basepath to the current\n        working directory.\n\n        Args:\n            from_path: Path in S3 bucket to download from. Defaults to the block's\n                configured basepath.\n            local_path: Local path to download S3 contents to. Defaults to the current\n                working directory.\n        \"\"\"\n        bucket_folder = self.bucket_folder\n        if from_path is None:\n            from_path = str(bucket_folder) if bucket_folder else \"\"\n\n        if local_path is None:\n            local_path = str(Path(\".\").absolute())\n        else:\n            local_path = str(Path(local_path).expanduser())\n\n        bucket = self._get_bucket_resource()\n        for obj in bucket.objects.filter(Prefix=from_path):\n            if obj.key[-1] == \"/\":\n                # object is a folder and will be created if it contains any objects\n                continue\n            target = os.path.join(\n                local_path,\n                os.path.relpath(obj.key, from_path),\n            )\n            os.makedirs(os.path.dirname(target), exist_ok=True)\n            bucket.download_file(obj.key, target)\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n    ) -> int:\n        \"\"\"\n        Uploads a directory from a given local path to the configured S3 bucket in a\n        given folder.\n\n        Defaults to uploading the entire contents the current working directory to the\n        block's basepath.\n\n        Args:\n            local_path: Path to local directory to upload from.\n            to_path: Path in S3 bucket to upload to. 
Defaults to block's configured\n                basepath.\n            ignore_file: Path to file containing gitignore style expressions for\n                filepaths to ignore.\n\n        \"\"\"\n        to_path = \"\" if to_path is None else to_path\n\n        if local_path is None:\n            local_path = \".\"\n\n        included_files = None\n        if ignore_file:\n            with open(ignore_file, \"r\") as f:\n                ignore_patterns = f.readlines()\n\n            included_files = filter_files(local_path, ignore_patterns)\n\n        uploaded_file_count = 0\n        for local_file_path in Path(local_path).expanduser().rglob(\"*\"):\n            if (\n                included_files is not None\n                and str(local_file_path.relative_to(local_path)) not in included_files\n            ):\n                continue\n            elif not local_file_path.is_dir():\n                remote_file_path = Path(to_path) / local_file_path.relative_to(\n                    local_path\n                )\n                with open(local_file_path, \"rb\") as local_file:\n                    local_file_content = local_file.read()\n\n                await self.write_path(\n                    remote_file_path.as_posix(), content=local_file_content\n                )\n                uploaded_file_count += 1\n\n        return uploaded_file_count\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        \"\"\"\n        Read specified path from S3 and return contents. Provide the entire\n        path to the key in S3.\n\n        Args:\n            path: Entire path to (and including) the key.\n\n        Example:\n            Read \"subfolder/file1\" contents from an S3 bucket named \"bucket\":\n            ```python\n            from prefect_aws import AwsCredentials\n            from prefect_aws.s3 import S3Bucket\n\n            aws_creds = AwsCredentials(\n                aws_access_key_id=AWS_ACCESS_KEY_ID,\n                aws_secret_access_key=AWS_SECRET_ACCESS_KEY\n            )\n\n            s3_bucket_block = S3Bucket(\n                bucket_name=\"bucket\",\n                credentials=aws_creds,\n                bucket_folder=\"subfolder\"\n            )\n\n            key_contents = s3_bucket_block.read_path(path=\"subfolder/file1\")\n            ```\n        \"\"\"\n        path = self._resolve_path(path)\n\n        return await run_sync_in_worker_thread(self._read_sync, path)\n\n    def _read_sync(self, key: str) -> bytes:\n        \"\"\"\n        Called by read_path(). Creates an S3 client and retrieves the\n        contents from  a specified path.\n        \"\"\"\n\n        s3_client = self._get_s3_client()\n\n        with io.BytesIO() as stream:\n            s3_client.download_fileobj(Bucket=self.bucket_name, Key=key, Fileobj=stream)\n            stream.seek(0)\n            output = stream.read()\n            return output\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        \"\"\"\n        Writes to an S3 bucket.\n\n        Args:\n\n            path: The key name. 
Each object in your bucket has a unique\n                key (or key name).\n            content: What you are uploading to S3.\n\n        Example:\n\n            Write data to the path `dogs/small_dogs/havanese` in an S3 Bucket:\n            ```python\n            from prefect_aws import MinioCredentials\n            from prefect_aws.s3 import S3Bucket\n\n            minio_creds = MinIOCredentials(\n                minio_root_user = \"minioadmin\",\n                minio_root_password = \"minioadmin\",\n            )\n\n            s3_bucket_block = S3Bucket(\n                bucket_name=\"bucket\",\n                minio_credentials=minio_creds,\n                bucket_folder=\"dogs/smalldogs\",\n                endpoint_url=\"http://localhost:9000\",\n            )\n            s3_havanese_path = s3_bucket_block.write_path(path=\"havanese\", content=data)\n            ```\n        \"\"\"\n\n        path = self._resolve_path(path)\n\n        await run_sync_in_worker_thread(self._write_sync, path, content)\n\n        return path\n\n    def _write_sync(self, key: str, data: bytes) -> None:\n        \"\"\"\n        Called by write_path(). Creates an S3 client and uploads a file\n        object.\n        \"\"\"\n\n        s3_client = self._get_s3_client()\n\n        with io.BytesIO(data) as stream:\n            s3_client.upload_fileobj(Fileobj=stream, Bucket=self.bucket_name, Key=key)\n\n    # NEW BLOCK INTERFACE METHODS BELOW\n    @staticmethod\n    def _list_objects_sync(page_iterator: PageIterator) -> List[Dict[str, Any]]:\n        \"\"\"\n        Synchronous method to collect S3 objects into a list\n\n        Args:\n            page_iterator: AWS Paginator for S3 objects\n\n        Returns:\n            List[Dict]: List of object information\n        \"\"\"\n        return [\n            content for page in page_iterator for content in page.get(\"Contents\", [])\n        ]\n\n    def _join_bucket_folder(self, bucket_path: str = \"\") -> str:\n        \"\"\"\n        Joins the base bucket folder to the bucket path.\n        NOTE: If a method reuses another method in this class, be careful to not\n        call this  twice because it'll join the bucket folder twice.\n        See https://github.com/PrefectHQ/prefect-aws/issues/141 for a past issue.\n        \"\"\"\n        if not self.bucket_folder and not bucket_path:\n            # there's a difference between \".\" and \"\", at least in the tests\n            return \"\"\n\n        bucket_path = str(bucket_path)\n        if self.bucket_folder != \"\" and bucket_path.startswith(self.bucket_folder):\n            self.logger.info(\n                f\"Bucket path {bucket_path!r} is already prefixed with \"\n                f\"bucket folder {self.bucket_folder!r}; is this intentional?\"\n            )\n\n        return (Path(self.bucket_folder) / bucket_path).as_posix() + (\n            \"\" if not bucket_path.endswith(\"/\") else \"/\"\n        )\n\n    @sync_compatible\n    async def list_objects(\n        self,\n        folder: str = \"\",\n        delimiter: str = \"\",\n        page_size: Optional[int] = None,\n        max_items: Optional[int] = None,\n        jmespath_query: Optional[str] = None,\n    ) -> List[Dict[str, Any]]:\n        \"\"\"\n        Args:\n            folder: Folder to list objects from.\n            delimiter: Character used to group keys of listed objects.\n            page_size: Number of objects to return in each request to the AWS API.\n            max_items: Maximum number of objects that to be returned by 
task.\n            jmespath_query: Query used to filter objects based on object attributes refer to\n                the [boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/paginators.html#filtering-results-with-jmespath)\n                for more information on how to construct queries.\n\n        Returns:\n            List of objects and their metadata in the bucket.\n\n        Examples:\n            List objects under the `base_folder`.\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            s3_bucket.list_objects(\"base_folder\")\n            ```\n        \"\"\"  # noqa: E501\n        bucket_path = self._join_bucket_folder(folder)\n        client = self.credentials.get_s3_client()\n        paginator = client.get_paginator(\"list_objects_v2\")\n        page_iterator = paginator.paginate(\n            Bucket=self.bucket_name,\n            Prefix=bucket_path,\n            Delimiter=delimiter,\n            PaginationConfig={\"PageSize\": page_size, \"MaxItems\": max_items},\n        )\n        if jmespath_query:\n            page_iterator = page_iterator.search(f\"{jmespath_query} | {{Contents: @}}\")\n\n        self.logger.info(f\"Listing objects in bucket {bucket_path}.\")\n        objects = await run_sync_in_worker_thread(\n            self._list_objects_sync, page_iterator\n        )\n        return objects\n\n    @sync_compatible\n    async def download_object_to_path(\n        self,\n        from_path: str,\n        to_path: Optional[Union[str, Path]],\n        **download_kwargs: Dict[str, Any],\n    ) -> Path:\n        \"\"\"\n        Downloads an object from the S3 bucket to a path.\n\n        Args:\n            from_path: The path to the object to download; this gets prefixed\n                with the bucket_folder.\n            to_path: The path to download the object to. 
If not provided, the\n                object's name will be used.\n            **download_kwargs: Additional keyword arguments to pass to\n                `Client.download_file`.\n\n        Returns:\n            The absolute path that the object was downloaded to.\n\n        Examples:\n            Download my_folder/notes.txt object to notes.txt.\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            s3_bucket.download_object_to_path(\"my_folder/notes.txt\", \"notes.txt\")\n            ```\n        \"\"\"\n        if to_path is None:\n            to_path = Path(from_path).name\n\n        # making path absolute, but converting back to str here\n        # since !r looks nicer that way and filename arg expects str\n        to_path = str(Path(to_path).absolute())\n        bucket_path = self._join_bucket_folder(from_path)\n        client = self.credentials.get_s3_client()\n\n        self.logger.debug(\n            f\"Preparing to download object from bucket {self.bucket_name!r} \"\n            f\"path {bucket_path!r} to {to_path!r}.\"\n        )\n        await run_sync_in_worker_thread(\n            client.download_file,\n            Bucket=self.bucket_name,\n            Key=from_path,\n            Filename=to_path,\n            **download_kwargs,\n        )\n        self.logger.info(\n            f\"Downloaded object from bucket {self.bucket_name!r} path {bucket_path!r} \"\n            f\"to {to_path!r}.\"\n        )\n        return Path(to_path)\n\n    @sync_compatible\n    async def download_object_to_file_object(\n        self,\n        from_path: str,\n        to_file_object: BinaryIO,\n        **download_kwargs: Dict[str, Any],\n    ) -> BinaryIO:\n        \"\"\"\n        Downloads an object from the object storage service to a file-like object,\n        which can be a BytesIO object or a BufferedWriter.\n\n        Args:\n            from_path: The path to the object to download from; this gets prefixed\n                with the bucket_folder.\n            to_file_object: The file-like object to download the object to.\n            **download_kwargs: Additional keyword arguments to pass to\n                `Client.download_fileobj`.\n\n        Returns:\n            The file-like object that the object was downloaded to.\n\n        Examples:\n            Download my_folder/notes.txt object to a BytesIO object.\n            ```python\n            from io import BytesIO\n\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            with BytesIO() as buf:\n                s3_bucket.download_object_to_file_object(\"my_folder/notes.txt\", buf)\n            ```\n\n            Download my_folder/notes.txt object to a BufferedWriter.\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            with open(\"notes.txt\", \"wb\") as f:\n                s3_bucket.download_object_to_file_object(\"my_folder/notes.txt\", f)\n            ```\n        \"\"\"\n        client = self.credentials.get_s3_client()\n        bucket_path = self._join_bucket_folder(from_path)\n\n        self.logger.debug(\n            f\"Preparing to download object from bucket {self.bucket_name!r} \"\n            f\"path {bucket_path!r} to file object.\"\n        )\n        await run_sync_in_worker_thread(\n            client.download_fileobj,\n            Bucket=self.bucket_name,\n            Key=bucket_path,\n   
         Fileobj=to_file_object,\n            **download_kwargs,\n        )\n        self.logger.info(\n            f\"Downloaded object from bucket {self.bucket_name!r} path {bucket_path!r} \"\n            \"to file object.\"\n        )\n        return to_file_object\n\n    @sync_compatible\n    async def download_folder_to_path(\n        self,\n        from_folder: str,\n        to_folder: Optional[Union[str, Path]] = None,\n        **download_kwargs: Dict[str, Any],\n    ) -> Path:\n        \"\"\"\n        Downloads objects *within* a folder (excluding the folder itself)\n        from the S3 bucket to a folder.\n\n        Args:\n            from_folder: The path to the folder to download from.\n            to_folder: The path to download the folder to.\n            **download_kwargs: Additional keyword arguments to pass to\n                `Client.download_file`.\n\n        Returns:\n            The absolute path that the folder was downloaded to.\n\n        Examples:\n            Download my_folder to a local folder named my_folder.\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            s3_bucket.download_folder_to_path(\"my_folder\", \"my_folder\")\n            ```\n        \"\"\"\n        if to_folder is None:\n            to_folder = \"\"\n        to_folder = Path(to_folder).absolute()\n\n        client = self.credentials.get_s3_client()\n        objects = await self.list_objects(folder=from_folder)\n\n        # do not call self._join_bucket_folder for filter\n        # because it's built-in to that method already!\n        # however, we still need to do it because we're using relative_to\n        bucket_folder = self._join_bucket_folder(from_folder)\n\n        async_coros = []\n        for object in objects:\n            bucket_path = Path(object[\"Key\"]).relative_to(bucket_folder)\n            # this skips the actual directory itself, e.g.\n            # `my_folder/` will be skipped\n            # `my_folder/notes.txt` will be downloaded\n            if bucket_path.is_dir():\n                continue\n            to_path = to_folder / bucket_path\n            to_path.parent.mkdir(parents=True, exist_ok=True)\n            to_path = str(to_path)  # must be string\n            self.logger.info(\n                f\"Downloading object from bucket {self.bucket_name!r} path \"\n                f\"{bucket_path.as_posix()!r} to {to_path!r}.\"\n            )\n            async_coros.append(\n                run_sync_in_worker_thread(\n                    client.download_file,\n                    Bucket=self.bucket_name,\n                    Key=object[\"Key\"],\n                    Filename=to_path,\n                    **download_kwargs,\n                )\n            )\n        await asyncio.gather(*async_coros)\n\n        return Path(to_folder)\n\n    @sync_compatible\n    async def stream_from(\n        self,\n        bucket: \"S3Bucket\",\n        from_path: str,\n        to_path: Optional[str] = None,\n        **upload_kwargs: Dict[str, Any],\n    ) -> str:\n        \"\"\"Streams an object from another bucket to this bucket. Requires the\n        object to be downloaded and uploaded in chunks. If `self`'s credentials\n        allow for writes to the other bucket, try using `S3Bucket.copy_object`.\n\n        Args:\n            bucket: The bucket to stream from.\n            from_path: The path of the object to stream.\n            to_path: The path to stream the object to. 
Defaults to the object's name.\n            **upload_kwargs: Additional keyword arguments to pass to\n                `Client.upload_fileobj`.\n\n        Returns:\n            The path that the object was uploaded to.\n\n        Examples:\n            Stream notes.txt from your-bucket/notes.txt to my-bucket/landed/notes.txt.\n\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            your_s3_bucket = S3Bucket.load(\"your-bucket\")\n            my_s3_bucket = S3Bucket.load(\"my-bucket\")\n\n            my_s3_bucket.stream_from(\n                your_s3_bucket,\n                \"notes.txt\",\n                to_path=\"landed/notes.txt\"\n            )\n            ```\n\n        \"\"\"\n        if to_path is None:\n            to_path = Path(from_path).name\n\n        # Get the source object's StreamingBody\n        from_path: str = bucket._join_bucket_folder(from_path)\n        from_client = bucket.credentials.get_s3_client()\n        obj = await run_sync_in_worker_thread(\n            from_client.get_object, Bucket=bucket.bucket_name, Key=from_path\n        )\n        body: StreamingBody = obj[\"Body\"]\n\n        # Upload the StreamingBody to this bucket\n        bucket_path = str(self._join_bucket_folder(to_path))\n        to_client = self.credentials.get_s3_client()\n        await run_sync_in_worker_thread(\n            to_client.upload_fileobj,\n            Fileobj=body,\n            Bucket=self.bucket_name,\n            Key=bucket_path,\n            **upload_kwargs,\n        )\n        self.logger.info(\n            f\"Streamed s3://{bucket.bucket_name}/{from_path} to the bucket \"\n            f\"{self.bucket_name!r} path {bucket_path!r}.\"\n        )\n        return bucket_path\n\n    @sync_compatible\n    async def upload_from_path(\n        self,\n        from_path: Union[str, Path],\n        to_path: Optional[str] = None,\n        **upload_kwargs: Dict[str, Any],\n    ) -> str:\n        \"\"\"\n        Uploads an object from a path to the S3 bucket.\n\n        Args:\n            from_path: The path to the file to upload from.\n            to_path: The path to upload the file to.\n            **upload_kwargs: Additional keyword arguments to pass to `Client.upload`.\n\n        Returns:\n            The path that the object was uploaded to.\n\n        Examples:\n            Upload notes.txt to my_folder/notes.txt.\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            s3_bucket.upload_from_path(\"notes.txt\", \"my_folder/notes.txt\")\n            ```\n        \"\"\"\n        from_path = str(Path(from_path).absolute())\n        if to_path is None:\n            to_path = Path(from_path).name\n\n        bucket_path = str(self._join_bucket_folder(to_path))\n        client = self.credentials.get_s3_client()\n\n        await run_sync_in_worker_thread(\n            client.upload_file,\n            Filename=from_path,\n            Bucket=self.bucket_name,\n            Key=bucket_path,\n            **upload_kwargs,\n        )\n        self.logger.info(\n            f\"Uploaded from {from_path!r} to the bucket \"\n            f\"{self.bucket_name!r} path {bucket_path!r}.\"\n        )\n        return bucket_path\n\n    @sync_compatible\n    async def upload_from_file_object(\n        self, from_file_object: BinaryIO, to_path: str, **upload_kwargs: Dict[str, Any]\n    ) -> str:\n        \"\"\"\n        Uploads an object to the S3 bucket from a file-like object,\n        which can 
be a BytesIO object or a BufferedReader.\n\n        Args:\n            from_file_object: The file-like object to upload from.\n            to_path: The path to upload the object to.\n            **upload_kwargs: Additional keyword arguments to pass to\n                `Client.upload_fileobj`.\n\n        Returns:\n            The path that the object was uploaded to.\n\n        Examples:\n            Upload BytesIO object to my_folder/notes.txt.\n            ```python\n            from io import BytesIO\n\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            with open(\"notes.txt\", \"rb\") as f:\n                s3_bucket.upload_from_file_object(f, \"my_folder/notes.txt\")\n            ```\n\n            Upload BufferedReader object to my_folder/notes.txt.\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            with open(\"notes.txt\", \"rb\") as f:\n                s3_bucket.upload_from_file_object(\n                    f, \"my_folder/notes.txt\"\n                )\n            ```\n        \"\"\"\n        bucket_path = str(self._join_bucket_folder(to_path))\n        client = self.credentials.get_s3_client()\n        await run_sync_in_worker_thread(\n            client.upload_fileobj,\n            Fileobj=from_file_object,\n            Bucket=self.bucket_name,\n            Key=bucket_path,\n            **upload_kwargs,\n        )\n        self.logger.info(\n            \"Uploaded from file object to the bucket \"\n            f\"{self.bucket_name!r} path {bucket_path!r}.\"\n        )\n        return bucket_path\n\n    @sync_compatible\n    async def upload_from_folder(\n        self,\n        from_folder: Union[str, Path],\n        to_folder: Optional[str] = None,\n        **upload_kwargs: Dict[str, Any],\n    ) -> str:\n        \"\"\"\n        Uploads files *within* a folder (excluding the folder itself)\n        to the object storage service folder.\n\n        Args:\n            from_folder: The path to the folder to upload from.\n            to_folder: The path to upload the folder to.\n            **upload_kwargs: Additional keyword arguments to pass to\n                `Client.upload_fileobj`.\n\n        Returns:\n            The path that the folder was uploaded to.\n\n        Examples:\n            Upload contents from my_folder to new_folder.\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            s3_bucket.upload_from_folder(\"my_folder\", \"new_folder\")\n            ```\n        \"\"\"\n        from_folder = Path(from_folder)\n        bucket_folder = self._join_bucket_folder(to_folder or \"\")\n\n        num_uploaded = 0\n        client = self.credentials.get_s3_client()\n\n        async_coros = []\n        for from_path in from_folder.rglob(\"**/*\"):\n            # this skips the actual directory itself, e.g.\n            # `my_folder/` will be skipped\n            # `my_folder/notes.txt` will be uploaded\n            if from_path.is_dir():\n                continue\n            bucket_path = (\n                Path(bucket_folder) / from_path.relative_to(from_folder)\n            ).as_posix()\n            self.logger.info(\n                f\"Uploading from {str(from_path)!r} to the bucket \"\n                f\"{self.bucket_name!r} path {bucket_path!r}.\"\n            )\n            async_coros.append(\n                
run_sync_in_worker_thread(\n                    client.upload_file,\n                    Filename=str(from_path),\n                    Bucket=self.bucket_name,\n                    Key=bucket_path,\n                    **upload_kwargs,\n                )\n            )\n            num_uploaded += 1\n        await asyncio.gather(*async_coros)\n\n        if num_uploaded == 0:\n            self.logger.warning(f\"No files were uploaded from {str(from_folder)!r}.\")\n        else:\n            self.logger.info(\n                f\"Uploaded {num_uploaded} files from {str(from_folder)!r} to \"\n                f\"the bucket {self.bucket_name!r} path {bucket_path!r}\"\n            )\n\n        return to_folder\n\n    @sync_compatible\n    async def copy_object(\n        self,\n        from_path: Union[str, Path],\n        to_path: Union[str, Path],\n        to_bucket: Optional[Union[\"S3Bucket\", str]] = None,\n        **copy_kwargs,\n    ) -> str:\n        \"\"\"Uses S3's internal\n        [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html)\n        to copy objects within or between buckets. To copy objects between buckets,\n        `self`'s credentials must have permission to read the source object and write\n        to the target object. If the credentials do not have those permissions, try\n        using `S3Bucket.stream_from`.\n\n        Args:\n            from_path: The path of the object to copy.\n            to_path: The path to copy the object to.\n            to_bucket: The bucket to copy to. Defaults to the current bucket.\n            **copy_kwargs: Additional keyword arguments to pass to\n                `S3Client.copy_object`.\n\n        Returns:\n            The path that the object was copied to. Excludes the bucket name.\n\n        Examples:\n\n            Copy notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt.\n\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            s3_bucket.copy_object(\"my_folder/notes.txt\", \"my_folder/notes_copy.txt\")\n            ```\n\n            Copy notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt in\n            another bucket.\n\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            s3_bucket.copy_object(\n                \"my_folder/notes.txt\",\n                \"my_folder/notes_copy.txt\",\n                to_bucket=\"other-bucket\"\n            )\n            ```\n        \"\"\"\n        s3_client = self.credentials.get_s3_client()\n\n        source_bucket_name = self.bucket_name\n        source_path = self._resolve_path(Path(from_path).as_posix())\n\n        # Default to copying within the same bucket\n        to_bucket = to_bucket or self\n\n        target_bucket_name: str\n        target_path: str\n        if isinstance(to_bucket, S3Bucket):\n            target_bucket_name = to_bucket.bucket_name\n            target_path = to_bucket._resolve_path(Path(to_path).as_posix())\n        elif isinstance(to_bucket, str):\n            target_bucket_name = to_bucket\n            target_path = Path(to_path).as_posix()\n        else:\n            raise TypeError(\n                f\"to_bucket must be a string or S3Bucket, not {type(to_bucket)}\"\n            )\n\n        self.logger.info(\n            \"Copying object from bucket %s with key %s to bucket %s with key %s\",\n            source_bucket_name,\n            
source_path,\n            target_bucket_name,\n            target_path,\n        )\n\n        s3_client.copy_object(\n            CopySource={\"Bucket\": source_bucket_name, \"Key\": source_path},\n            Bucket=target_bucket_name,\n            Key=target_path,\n            **copy_kwargs,\n        )\n\n        return target_path\n\n    @sync_compatible\n    async def move_object(\n        self,\n        from_path: Union[str, Path],\n        to_path: Union[str, Path],\n        to_bucket: Optional[Union[\"S3Bucket\", str]] = None,\n    ) -> str:\n        \"\"\"Uses S3's internal CopyObject and DeleteObject to move objects within or\n        between buckets. To move objects between buckets, `self`'s credentials must\n        have permission to read and delete the source object and write to the target\n        object. If the credentials do not have those permissions, this method will\n        raise an error. If the credentials have permission to read the source object\n        but not delete it, the object will be copied but not deleted.\n\n        Args:\n            from_path: The path of the object to move.\n            to_path: The path to move the object to.\n            to_bucket: The bucket to move to. Defaults to the current bucket.\n\n        Returns:\n            The path that the object was moved to. Excludes the bucket name.\n\n        Examples:\n\n            Move notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt.\n\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            s3_bucket.move_object(\"my_folder/notes.txt\", \"my_folder/notes_copy.txt\")\n            ```\n\n            Move notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt in\n            another bucket.\n\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            s3_bucket.move_object(\n                \"my_folder/notes.txt\",\n                \"my_folder/notes_copy.txt\",\n                to_bucket=\"other-bucket\"\n            )\n            ```\n        \"\"\"\n        s3_client = self.credentials.get_s3_client()\n\n        source_bucket_name = self.bucket_name\n        source_path = self._resolve_path(Path(from_path).as_posix())\n\n        # Default to moving within the same bucket\n        to_bucket = to_bucket or self\n\n        target_bucket_name: str\n        target_path: str\n        if isinstance(to_bucket, S3Bucket):\n            target_bucket_name = to_bucket.bucket_name\n            target_path = to_bucket._resolve_path(Path(to_path).as_posix())\n        elif isinstance(to_bucket, str):\n            target_bucket_name = to_bucket\n            target_path = Path(to_path).as_posix()\n        else:\n            raise TypeError(\n                f\"to_bucket must be a string or S3Bucket, not {type(to_bucket)}\"\n            )\n\n        self.logger.info(\n            \"Moving object from s3://%s/%s to s3://%s/%s\",\n            source_bucket_name,\n            source_path,\n            target_bucket_name,\n            target_path,\n        )\n\n        # If invalid, should error and prevent next operation\n        s3_client.copy(\n            CopySource={\"Bucket\": source_bucket_name, \"Key\": source_path},\n            Bucket=target_bucket_name,\n            Key=target_path,\n        )\n        s3_client.delete_object(Bucket=source_bucket_name, Key=source_path)\n        return target_path\n
    "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.basepath","title":"basepath: str property writable","text":"

    The base path of the S3 bucket.

    Returns:

    - `str`: The base path of the S3 bucket.
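
    A minimal usage sketch, assuming a previously saved block named "my-bucket" (a hypothetical name); since the property is marked writable above, it can be read or reassigned on a loaded block:

```python
from prefect_aws.s3 import S3Bucket

s3_bucket = S3Bucket.load("my-bucket")  # hypothetical saved block name

# Read the configured base path
print(s3_bucket.basepath)

# The property is writable, so it can be reassigned as well
s3_bucket.basepath = "archive/2024"
```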

    "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.copy_object","title":"copy_object async","text":"

    Uses S3's internal CopyObject to copy objects within or between buckets. To copy objects between buckets, self's credentials must have permission to read the source object and write to the target object. If the credentials do not have those permissions, try using S3Bucket.stream_from.

    Parameters:

    - `from_path` (`Union[str, Path]`, required): The path of the object to copy.
    - `to_path` (`Union[str, Path]`, required): The path to copy the object to.
    - `to_bucket` (`Optional[Union[S3Bucket, str]]`, default `None`): The bucket to copy to. Defaults to the current bucket.
    - `**copy_kwargs` (default `{}`): Additional keyword arguments to pass to `S3Client.copy_object`.

    Returns:

    - `str`: The path that the object was copied to. Excludes the bucket name.

    Copy notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt.\n\n```python\nfrom prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\ns3_bucket.copy_object(\"my_folder/notes.txt\", \"my_folder/notes_copy.txt\")\n```\n\nCopy notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt in\nanother bucket.\n\n```python\nfrom prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\ns3_bucket.copy_object(\n    \"my_folder/notes.txt\",\n    \"my_folder/notes_copy.txt\",\n    to_bucket=\"other-bucket\"\n)\n```\n
    Source code in prefect_aws/s3.py
    @sync_compatible\nasync def copy_object(\n    self,\n    from_path: Union[str, Path],\n    to_path: Union[str, Path],\n    to_bucket: Optional[Union[\"S3Bucket\", str]] = None,\n    **copy_kwargs,\n) -> str:\n    \"\"\"Uses S3's internal\n    [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html)\n    to copy objects within or between buckets. To copy objects between buckets,\n    `self`'s credentials must have permission to read the source object and write\n    to the target object. If the credentials do not have those permissions, try\n    using `S3Bucket.stream_from`.\n\n    Args:\n        from_path: The path of the object to copy.\n        to_path: The path to copy the object to.\n        to_bucket: The bucket to copy to. Defaults to the current bucket.\n        **copy_kwargs: Additional keyword arguments to pass to\n            `S3Client.copy_object`.\n\n    Returns:\n        The path that the object was copied to. Excludes the bucket name.\n\n    Examples:\n\n        Copy notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt.\n\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        s3_bucket.copy_object(\"my_folder/notes.txt\", \"my_folder/notes_copy.txt\")\n        ```\n\n        Copy notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt in\n        another bucket.\n\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        s3_bucket.copy_object(\n            \"my_folder/notes.txt\",\n            \"my_folder/notes_copy.txt\",\n            to_bucket=\"other-bucket\"\n        )\n        ```\n    \"\"\"\n    s3_client = self.credentials.get_s3_client()\n\n    source_bucket_name = self.bucket_name\n    source_path = self._resolve_path(Path(from_path).as_posix())\n\n    # Default to copying within the same bucket\n    to_bucket = to_bucket or self\n\n    target_bucket_name: str\n    target_path: str\n    if isinstance(to_bucket, S3Bucket):\n        target_bucket_name = to_bucket.bucket_name\n        target_path = to_bucket._resolve_path(Path(to_path).as_posix())\n    elif isinstance(to_bucket, str):\n        target_bucket_name = to_bucket\n        target_path = Path(to_path).as_posix()\n    else:\n        raise TypeError(\n            f\"to_bucket must be a string or S3Bucket, not {type(to_bucket)}\"\n        )\n\n    self.logger.info(\n        \"Copying object from bucket %s with key %s to bucket %s with key %s\",\n        source_bucket_name,\n        source_path,\n        target_bucket_name,\n        target_path,\n    )\n\n    s3_client.copy_object(\n        CopySource={\"Bucket\": source_bucket_name, \"Key\": source_path},\n        Bucket=target_bucket_name,\n        Key=target_path,\n        **copy_kwargs,\n    )\n\n    return target_path\n
    "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.download_folder_to_path","title":"download_folder_to_path async","text":"

    Downloads objects within a folder (excluding the folder itself) from the S3 bucket to a folder.

    Parameters:

    - `from_folder` (`str`, required): The path to the folder to download from.
    - `to_folder` (`Optional[Union[str, Path]]`, default `None`): The path to download the folder to.
    - `**download_kwargs` (`Dict[str, Any]`, default `{}`): Additional keyword arguments to pass to `Client.download_file`.

    Returns:

    - `Path`: The absolute path that the folder was downloaded to.

    Examples:

    Download my_folder to a local folder named my_folder.

    from prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\ns3_bucket.download_folder_to_path(\"my_folder\", \"my_folder\")\n

    Source code in prefect_aws/s3.py
    @sync_compatible\nasync def download_folder_to_path(\n    self,\n    from_folder: str,\n    to_folder: Optional[Union[str, Path]] = None,\n    **download_kwargs: Dict[str, Any],\n) -> Path:\n    \"\"\"\n    Downloads objects *within* a folder (excluding the folder itself)\n    from the S3 bucket to a folder.\n\n    Args:\n        from_folder: The path to the folder to download from.\n        to_folder: The path to download the folder to.\n        **download_kwargs: Additional keyword arguments to pass to\n            `Client.download_file`.\n\n    Returns:\n        The absolute path that the folder was downloaded to.\n\n    Examples:\n        Download my_folder to a local folder named my_folder.\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        s3_bucket.download_folder_to_path(\"my_folder\", \"my_folder\")\n        ```\n    \"\"\"\n    if to_folder is None:\n        to_folder = \"\"\n    to_folder = Path(to_folder).absolute()\n\n    client = self.credentials.get_s3_client()\n    objects = await self.list_objects(folder=from_folder)\n\n    # do not call self._join_bucket_folder for filter\n    # because it's built-in to that method already!\n    # however, we still need to do it because we're using relative_to\n    bucket_folder = self._join_bucket_folder(from_folder)\n\n    async_coros = []\n    for object in objects:\n        bucket_path = Path(object[\"Key\"]).relative_to(bucket_folder)\n        # this skips the actual directory itself, e.g.\n        # `my_folder/` will be skipped\n        # `my_folder/notes.txt` will be downloaded\n        if bucket_path.is_dir():\n            continue\n        to_path = to_folder / bucket_path\n        to_path.parent.mkdir(parents=True, exist_ok=True)\n        to_path = str(to_path)  # must be string\n        self.logger.info(\n            f\"Downloading object from bucket {self.bucket_name!r} path \"\n            f\"{bucket_path.as_posix()!r} to {to_path!r}.\"\n        )\n        async_coros.append(\n            run_sync_in_worker_thread(\n                client.download_file,\n                Bucket=self.bucket_name,\n                Key=object[\"Key\"],\n                Filename=to_path,\n                **download_kwargs,\n            )\n        )\n    await asyncio.gather(*async_coros)\n\n    return Path(to_folder)\n
    "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.download_object_to_file_object","title":"download_object_to_file_object async","text":"

    Downloads an object from the object storage service to a file-like object, which can be a BytesIO object or a BufferedWriter.

    Parameters:

    - `from_path` (`str`, required): The path to the object to download from; this gets prefixed with the `bucket_folder`.
    - `to_file_object` (`BinaryIO`, required): The file-like object to download the object to.
    - `**download_kwargs` (`Dict[str, Any]`, default `{}`): Additional keyword arguments to pass to `Client.download_fileobj`.

    Returns:

    - `BinaryIO`: The file-like object that the object was downloaded to.

    Examples:

    Download my_folder/notes.txt object to a BytesIO object.

    from io import BytesIO\n\nfrom prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\nwith BytesIO() as buf:\n    s3_bucket.download_object_to_file_object(\"my_folder/notes.txt\", buf)\n

    Download my_folder/notes.txt object to a BufferedWriter.

    from prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\nwith open(\"notes.txt\", \"wb\") as f:\n    s3_bucket.download_object_to_file_object(\"my_folder/notes.txt\", f)\n

    Source code in prefect_aws/s3.py
    @sync_compatible\nasync def download_object_to_file_object(\n    self,\n    from_path: str,\n    to_file_object: BinaryIO,\n    **download_kwargs: Dict[str, Any],\n) -> BinaryIO:\n    \"\"\"\n    Downloads an object from the object storage service to a file-like object,\n    which can be a BytesIO object or a BufferedWriter.\n\n    Args:\n        from_path: The path to the object to download from; this gets prefixed\n            with the bucket_folder.\n        to_file_object: The file-like object to download the object to.\n        **download_kwargs: Additional keyword arguments to pass to\n            `Client.download_fileobj`.\n\n    Returns:\n        The file-like object that the object was downloaded to.\n\n    Examples:\n        Download my_folder/notes.txt object to a BytesIO object.\n        ```python\n        from io import BytesIO\n\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        with BytesIO() as buf:\n            s3_bucket.download_object_to_file_object(\"my_folder/notes.txt\", buf)\n        ```\n\n        Download my_folder/notes.txt object to a BufferedWriter.\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        with open(\"notes.txt\", \"wb\") as f:\n            s3_bucket.download_object_to_file_object(\"my_folder/notes.txt\", f)\n        ```\n    \"\"\"\n    client = self.credentials.get_s3_client()\n    bucket_path = self._join_bucket_folder(from_path)\n\n    self.logger.debug(\n        f\"Preparing to download object from bucket {self.bucket_name!r} \"\n        f\"path {bucket_path!r} to file object.\"\n    )\n    await run_sync_in_worker_thread(\n        client.download_fileobj,\n        Bucket=self.bucket_name,\n        Key=bucket_path,\n        Fileobj=to_file_object,\n        **download_kwargs,\n    )\n    self.logger.info(\n        f\"Downloaded object from bucket {self.bucket_name!r} path {bucket_path!r} \"\n        \"to file object.\"\n    )\n    return to_file_object\n
    "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.download_object_to_path","title":"download_object_to_path async","text":"

    Downloads an object from the S3 bucket to a path.

    Parameters:

    - `from_path` (`str`, required): The path to the object to download; this gets prefixed with the `bucket_folder`.
    - `to_path` (`Optional[Union[str, Path]]`, required): The path to download the object to. If not provided, the object's name will be used.
    - `**download_kwargs` (`Dict[str, Any]`, default `{}`): Additional keyword arguments to pass to `Client.download_file`.

    Returns:

    - `Path`: The absolute path that the object was downloaded to.

    Examples:

    Download my_folder/notes.txt object to notes.txt.

    from prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\ns3_bucket.download_object_to_path(\"my_folder/notes.txt\", \"notes.txt\")\n

    Source code in prefect_aws/s3.py
    @sync_compatible\nasync def download_object_to_path(\n    self,\n    from_path: str,\n    to_path: Optional[Union[str, Path]],\n    **download_kwargs: Dict[str, Any],\n) -> Path:\n    \"\"\"\n    Downloads an object from the S3 bucket to a path.\n\n    Args:\n        from_path: The path to the object to download; this gets prefixed\n            with the bucket_folder.\n        to_path: The path to download the object to. If not provided, the\n            object's name will be used.\n        **download_kwargs: Additional keyword arguments to pass to\n            `Client.download_file`.\n\n    Returns:\n        The absolute path that the object was downloaded to.\n\n    Examples:\n        Download my_folder/notes.txt object to notes.txt.\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        s3_bucket.download_object_to_path(\"my_folder/notes.txt\", \"notes.txt\")\n        ```\n    \"\"\"\n    if to_path is None:\n        to_path = Path(from_path).name\n\n    # making path absolute, but converting back to str here\n    # since !r looks nicer that way and filename arg expects str\n    to_path = str(Path(to_path).absolute())\n    bucket_path = self._join_bucket_folder(from_path)\n    client = self.credentials.get_s3_client()\n\n    self.logger.debug(\n        f\"Preparing to download object from bucket {self.bucket_name!r} \"\n        f\"path {bucket_path!r} to {to_path!r}.\"\n    )\n    await run_sync_in_worker_thread(\n        client.download_file,\n        Bucket=self.bucket_name,\n        Key=from_path,\n        Filename=to_path,\n        **download_kwargs,\n    )\n    self.logger.info(\n        f\"Downloaded object from bucket {self.bucket_name!r} path {bucket_path!r} \"\n        f\"to {to_path!r}.\"\n    )\n    return Path(to_path)\n
    "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.get_directory","title":"get_directory async","text":"

    Copies a folder from the configured S3 bucket to a local directory.

    Defaults to copying the entire contents of the block's basepath to the current working directory.

    Parameters:

    - `from_path` (`Optional[str]`, default `None`): Path in S3 bucket to download from. Defaults to the block's configured basepath.
    - `local_path` (`Optional[str]`, default `None`): Local path to download S3 contents to. Defaults to the current working directory.

    Source code in prefect_aws/s3.py
    @sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> None:\n    \"\"\"\n    Copies a folder from the configured S3 bucket to a local directory.\n\n    Defaults to copying the entire contents of the block's basepath to the current\n    working directory.\n\n    Args:\n        from_path: Path in S3 bucket to download from. Defaults to the block's\n            configured basepath.\n        local_path: Local path to download S3 contents to. Defaults to the current\n            working directory.\n    \"\"\"\n    bucket_folder = self.bucket_folder\n    if from_path is None:\n        from_path = str(bucket_folder) if bucket_folder else \"\"\n\n    if local_path is None:\n        local_path = str(Path(\".\").absolute())\n    else:\n        local_path = str(Path(local_path).expanduser())\n\n    bucket = self._get_bucket_resource()\n    for obj in bucket.objects.filter(Prefix=from_path):\n        if obj.key[-1] == \"/\":\n            # object is a folder and will be created if it contains any objects\n            continue\n        target = os.path.join(\n            local_path,\n            os.path.relpath(obj.key, from_path),\n        )\n        os.makedirs(os.path.dirname(target), exist_ok=True)\n        bucket.download_file(obj.key, target)\n
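
    A minimal usage sketch, assuming a previously saved block named "my-bucket" (hypothetical); it copies everything under the block's configured basepath into a local `data` directory:

```python
from prefect_aws.s3 import S3Bucket

s3_bucket = S3Bucket.load("my-bucket")  # hypothetical saved block name

# Copy the entire contents of the block's basepath into ./data
s3_bucket.get_directory(local_path="data")
```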
    "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.list_objects","title":"list_objects async","text":"

    Parameters:

    - `folder` (`str`, default `''`): Folder to list objects from.
    - `delimiter` (`str`, default `''`): Character used to group keys of listed objects.
    - `page_size` (`Optional[int]`, default `None`): Number of objects to return in each request to the AWS API.
    - `max_items` (`Optional[int]`, default `None`): Maximum number of objects to be returned.
    - `jmespath_query` (`Optional[str]`, default `None`): Query used to filter objects based on object attributes; refer to the boto3 docs for more information on how to construct queries.

    Returns:

    - `List[Dict[str, Any]]`: List of objects and their metadata in the bucket.

    Examples:

    List objects under the base_folder.

    from prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\ns3_bucket.list_objects(\"base_folder\")\n

    Source code in prefect_aws/s3.py
    @sync_compatible\nasync def list_objects(\n    self,\n    folder: str = \"\",\n    delimiter: str = \"\",\n    page_size: Optional[int] = None,\n    max_items: Optional[int] = None,\n    jmespath_query: Optional[str] = None,\n) -> List[Dict[str, Any]]:\n    \"\"\"\n    Args:\n        folder: Folder to list objects from.\n        delimiter: Character used to group keys of listed objects.\n        page_size: Number of objects to return in each request to the AWS API.\n        max_items: Maximum number of objects that to be returned by task.\n        jmespath_query: Query used to filter objects based on object attributes refer to\n            the [boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/paginators.html#filtering-results-with-jmespath)\n            for more information on how to construct queries.\n\n    Returns:\n        List of objects and their metadata in the bucket.\n\n    Examples:\n        List objects under the `base_folder`.\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        s3_bucket.list_objects(\"base_folder\")\n        ```\n    \"\"\"  # noqa: E501\n    bucket_path = self._join_bucket_folder(folder)\n    client = self.credentials.get_s3_client()\n    paginator = client.get_paginator(\"list_objects_v2\")\n    page_iterator = paginator.paginate(\n        Bucket=self.bucket_name,\n        Prefix=bucket_path,\n        Delimiter=delimiter,\n        PaginationConfig={\"PageSize\": page_size, \"MaxItems\": max_items},\n    )\n    if jmespath_query:\n        page_iterator = page_iterator.search(f\"{jmespath_query} | {{Contents: @}}\")\n\n    self.logger.info(f\"Listing objects in bucket {bucket_path}.\")\n    objects = await run_sync_in_worker_thread(\n        self._list_objects_sync, page_iterator\n    )\n    return objects\n
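
    The built-in example above lists every object under a folder; a sketch of the `jmespath_query` parameter is shown below, assuming a saved block named "my-bucket" (hypothetical) and using a filter expression in the boto3 JMESPath style linked above:

```python
from prefect_aws.s3 import S3Bucket

s3_bucket = S3Bucket.load("my-bucket")  # hypothetical saved block name

# Keep only listed objects larger than 100 bytes (JMESPath filter over Contents)
large_objects = s3_bucket.list_objects(
    "base_folder",
    jmespath_query="Contents[?Size > `100`][]",
)
```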
    "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.move_object","title":"move_object async","text":"

    Uses S3's internal CopyObject and DeleteObject to move objects within or between buckets. To move objects between buckets, self's credentials must have permission to read and delete the source object and write to the target object. If the credentials do not have those permissions, this method will raise an error. If the credentials have permission to read the source object but not delete it, the object will be copied but not deleted.

    Parameters:

    - `from_path` (`Union[str, Path]`, required): The path of the object to move.
    - `to_path` (`Union[str, Path]`, required): The path to move the object to.
    - `to_bucket` (`Optional[Union[S3Bucket, str]]`, default `None`): The bucket to move to. Defaults to the current bucket.

    Returns:

    - `str`: The path that the object was moved to. Excludes the bucket name.

    Move notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt.\n\n```python\nfrom prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\ns3_bucket.move_object(\"my_folder/notes.txt\", \"my_folder/notes_copy.txt\")\n```\n\nMove notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt in\nanother bucket.\n\n```python\nfrom prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\ns3_bucket.move_object(\n    \"my_folder/notes.txt\",\n    \"my_folder/notes_copy.txt\",\n    to_bucket=\"other-bucket\"\n)\n```\n
    Source code in prefect_aws/s3.py
    @sync_compatible\nasync def move_object(\n    self,\n    from_path: Union[str, Path],\n    to_path: Union[str, Path],\n    to_bucket: Optional[Union[\"S3Bucket\", str]] = None,\n) -> str:\n    \"\"\"Uses S3's internal CopyObject and DeleteObject to move objects within or\n    between buckets. To move objects between buckets, `self`'s credentials must\n    have permission to read and delete the source object and write to the target\n    object. If the credentials do not have those permissions, this method will\n    raise an error. If the credentials have permission to read the source object\n    but not delete it, the object will be copied but not deleted.\n\n    Args:\n        from_path: The path of the object to move.\n        to_path: The path to move the object to.\n        to_bucket: The bucket to move to. Defaults to the current bucket.\n\n    Returns:\n        The path that the object was moved to. Excludes the bucket name.\n\n    Examples:\n\n        Move notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt.\n\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        s3_bucket.move_object(\"my_folder/notes.txt\", \"my_folder/notes_copy.txt\")\n        ```\n\n        Move notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt in\n        another bucket.\n\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        s3_bucket.move_object(\n            \"my_folder/notes.txt\",\n            \"my_folder/notes_copy.txt\",\n            to_bucket=\"other-bucket\"\n        )\n        ```\n    \"\"\"\n    s3_client = self.credentials.get_s3_client()\n\n    source_bucket_name = self.bucket_name\n    source_path = self._resolve_path(Path(from_path).as_posix())\n\n    # Default to moving within the same bucket\n    to_bucket = to_bucket or self\n\n    target_bucket_name: str\n    target_path: str\n    if isinstance(to_bucket, S3Bucket):\n        target_bucket_name = to_bucket.bucket_name\n        target_path = to_bucket._resolve_path(Path(to_path).as_posix())\n    elif isinstance(to_bucket, str):\n        target_bucket_name = to_bucket\n        target_path = Path(to_path).as_posix()\n    else:\n        raise TypeError(\n            f\"to_bucket must be a string or S3Bucket, not {type(to_bucket)}\"\n        )\n\n    self.logger.info(\n        \"Moving object from s3://%s/%s to s3://%s/%s\",\n        source_bucket_name,\n        source_path,\n        target_bucket_name,\n        target_path,\n    )\n\n    # If invalid, should error and prevent next operation\n    s3_client.copy(\n        CopySource={\"Bucket\": source_bucket_name, \"Key\": source_path},\n        Bucket=target_bucket_name,\n        Key=target_path,\n    )\n    s3_client.delete_object(Bucket=source_bucket_name, Key=source_path)\n    return target_path\n
    "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.put_directory","title":"put_directory async","text":"

    Uploads a directory from a given local path to the configured S3 bucket in a given folder.

    Defaults to uploading the entire contents of the current working directory to the block's basepath.

    Parameters:

    - `local_path` (`Optional[str]`, default `None`): Path to the local directory to upload from.
    - `to_path` (`Optional[str]`, default `None`): Path in the S3 bucket to upload to. Defaults to the block's configured basepath.
    - `ignore_file` (`Optional[str]`, default `None`): Path to a file containing gitignore-style expressions for filepaths to ignore.

    Source code in prefect_aws/s3.py
    @sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n) -> int:\n    \"\"\"\n    Uploads a directory from a given local path to the configured S3 bucket in a\n    given folder.\n\n    Defaults to uploading the entire contents the current working directory to the\n    block's basepath.\n\n    Args:\n        local_path: Path to local directory to upload from.\n        to_path: Path in S3 bucket to upload to. Defaults to block's configured\n            basepath.\n        ignore_file: Path to file containing gitignore style expressions for\n            filepaths to ignore.\n\n    \"\"\"\n    to_path = \"\" if to_path is None else to_path\n\n    if local_path is None:\n        local_path = \".\"\n\n    included_files = None\n    if ignore_file:\n        with open(ignore_file, \"r\") as f:\n            ignore_patterns = f.readlines()\n\n        included_files = filter_files(local_path, ignore_patterns)\n\n    uploaded_file_count = 0\n    for local_file_path in Path(local_path).expanduser().rglob(\"*\"):\n        if (\n            included_files is not None\n            and str(local_file_path.relative_to(local_path)) not in included_files\n        ):\n            continue\n        elif not local_file_path.is_dir():\n            remote_file_path = Path(to_path) / local_file_path.relative_to(\n                local_path\n            )\n            with open(local_file_path, \"rb\") as local_file:\n                local_file_content = local_file.read()\n\n            await self.write_path(\n                remote_file_path.as_posix(), content=local_file_content\n            )\n            uploaded_file_count += 1\n\n    return uploaded_file_count\n
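
    A minimal usage sketch, assuming a previously saved block named "my-bucket" and a local `dist` directory (both hypothetical); per the source above, the method returns the number of files uploaded:

```python
from prefect_aws.s3 import S3Bucket

s3_bucket = S3Bucket.load("my-bucket")  # hypothetical saved block name

# Upload the contents of ./dist to the "releases" folder of the bucket
uploaded_count = s3_bucket.put_directory(local_path="dist", to_path="releases")
print(f"Uploaded {uploaded_count} files")
```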
    "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.read_path","title":"read_path async","text":"

    Read specified path from S3 and return contents. Provide the entire path to the key in S3.

    Parameters:

    - `path` (`str`, required): Entire path to (and including) the key.

    Example:

    Read \"subfolder/file1\" contents from an S3 bucket named \"bucket\":

    from prefect_aws import AwsCredentials\nfrom prefect_aws.s3 import S3Bucket\n\naws_creds = AwsCredentials(\n    aws_access_key_id=AWS_ACCESS_KEY_ID,\n    aws_secret_access_key=AWS_SECRET_ACCESS_KEY\n)\n\ns3_bucket_block = S3Bucket(\n    bucket_name=\"bucket\",\n    credentials=aws_creds,\n    bucket_folder=\"subfolder\"\n)\n\nkey_contents = s3_bucket_block.read_path(path=\"subfolder/file1\")\n

    Source code in prefect_aws/s3.py
    @sync_compatible\nasync def read_path(self, path: str) -> bytes:\n    \"\"\"\n    Read specified path from S3 and return contents. Provide the entire\n    path to the key in S3.\n\n    Args:\n        path: Entire path to (and including) the key.\n\n    Example:\n        Read \"subfolder/file1\" contents from an S3 bucket named \"bucket\":\n        ```python\n        from prefect_aws import AwsCredentials\n        from prefect_aws.s3 import S3Bucket\n\n        aws_creds = AwsCredentials(\n            aws_access_key_id=AWS_ACCESS_KEY_ID,\n            aws_secret_access_key=AWS_SECRET_ACCESS_KEY\n        )\n\n        s3_bucket_block = S3Bucket(\n            bucket_name=\"bucket\",\n            credentials=aws_creds,\n            bucket_folder=\"subfolder\"\n        )\n\n        key_contents = s3_bucket_block.read_path(path=\"subfolder/file1\")\n        ```\n    \"\"\"\n    path = self._resolve_path(path)\n\n    return await run_sync_in_worker_thread(self._read_sync, path)\n
    "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.stream_from","title":"stream_from async","text":"

    Streams an object from another bucket to this bucket. Requires the object to be downloaded and uploaded in chunks. If self's credentials allow for writes to the other bucket, try using S3Bucket.copy_object.

    Parameters:

    - `bucket` (`S3Bucket`, required): The bucket to stream from.
    - `from_path` (`str`, required): The path of the object to stream.
    - `to_path` (`Optional[str]`, default `None`): The path to stream the object to. Defaults to the object's name.
    - `**upload_kwargs` (`Dict[str, Any]`, default `{}`): Additional keyword arguments to pass to `Client.upload_fileobj`.

    Returns:

    - `str`: The path that the object was uploaded to.

    Examples:

    Stream notes.txt from your-bucket/notes.txt to my-bucket/landed/notes.txt.

    from prefect_aws.s3 import S3Bucket\n\nyour_s3_bucket = S3Bucket.load(\"your-bucket\")\nmy_s3_bucket = S3Bucket.load(\"my-bucket\")\n\nmy_s3_bucket.stream_from(\n    your_s3_bucket,\n    \"notes.txt\",\n    to_path=\"landed/notes.txt\"\n)\n
    Source code in prefect_aws/s3.py
    @sync_compatible\nasync def stream_from(\n    self,\n    bucket: \"S3Bucket\",\n    from_path: str,\n    to_path: Optional[str] = None,\n    **upload_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"Streams an object from another bucket to this bucket. Requires the\n    object to be downloaded and uploaded in chunks. If `self`'s credentials\n    allow for writes to the other bucket, try using `S3Bucket.copy_object`.\n\n    Args:\n        bucket: The bucket to stream from.\n        from_path: The path of the object to stream.\n        to_path: The path to stream the object to. Defaults to the object's name.\n        **upload_kwargs: Additional keyword arguments to pass to\n            `Client.upload_fileobj`.\n\n    Returns:\n        The path that the object was uploaded to.\n\n    Examples:\n        Stream notes.txt from your-bucket/notes.txt to my-bucket/landed/notes.txt.\n\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        your_s3_bucket = S3Bucket.load(\"your-bucket\")\n        my_s3_bucket = S3Bucket.load(\"my-bucket\")\n\n        my_s3_bucket.stream_from(\n            your_s3_bucket,\n            \"notes.txt\",\n            to_path=\"landed/notes.txt\"\n        )\n        ```\n\n    \"\"\"\n    if to_path is None:\n        to_path = Path(from_path).name\n\n    # Get the source object's StreamingBody\n    from_path: str = bucket._join_bucket_folder(from_path)\n    from_client = bucket.credentials.get_s3_client()\n    obj = await run_sync_in_worker_thread(\n        from_client.get_object, Bucket=bucket.bucket_name, Key=from_path\n    )\n    body: StreamingBody = obj[\"Body\"]\n\n    # Upload the StreamingBody to this bucket\n    bucket_path = str(self._join_bucket_folder(to_path))\n    to_client = self.credentials.get_s3_client()\n    await run_sync_in_worker_thread(\n        to_client.upload_fileobj,\n        Fileobj=body,\n        Bucket=self.bucket_name,\n        Key=bucket_path,\n        **upload_kwargs,\n    )\n    self.logger.info(\n        f\"Streamed s3://{bucket.bucket_name}/{from_path} to the bucket \"\n        f\"{self.bucket_name!r} path {bucket_path!r}.\"\n    )\n    return bucket_path\n
    "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.upload_from_file_object","title":"upload_from_file_object async","text":"

    Uploads an object to the S3 bucket from a file-like object, which can be a BytesIO object or a BufferedReader.

    Parameters:

    Name Type Description Default from_file_object BinaryIO

    The file-like object to upload from.

    required to_path str

    The path to upload the object to.

    required **upload_kwargs Dict[str, Any]

    Additional keyword arguments to pass to Client.upload_fileobj.

    {}

    Returns:

    Type Description str

    The path that the object was uploaded to.

    Examples:

    Upload BytesIO object to my_folder/notes.txt.

    from io import BytesIO\n\nfrom prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\nwith open(\"notes.txt\", \"rb\") as f:\n    s3_bucket.upload_from_file_object(f, \"my_folder/notes.txt\")\n

    Upload BufferedReader object to my_folder/notes.txt.

    from prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\nwith open(\"notes.txt\", \"rb\") as f:\n    s3_bucket.upload_from_file_object(\n        f, \"my_folder/notes.txt\"\n    )\n

    Source code in prefect_aws/s3.py
    @sync_compatible\nasync def upload_from_file_object(\n    self, from_file_object: BinaryIO, to_path: str, **upload_kwargs: Dict[str, Any]\n) -> str:\n    \"\"\"\n    Uploads an object to the S3 bucket from a file-like object,\n    which can be a BytesIO object or a BufferedReader.\n\n    Args:\n        from_file_object: The file-like object to upload from.\n        to_path: The path to upload the object to.\n        **upload_kwargs: Additional keyword arguments to pass to\n            `Client.upload_fileobj`.\n\n    Returns:\n        The path that the object was uploaded to.\n\n    Examples:\n        Upload BytesIO object to my_folder/notes.txt.\n        ```python\n        from io import BytesIO\n\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        with open(\"notes.txt\", \"rb\") as f:\n            s3_bucket.upload_from_file_object(f, \"my_folder/notes.txt\")\n        ```\n\n        Upload BufferedReader object to my_folder/notes.txt.\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        with open(\"notes.txt\", \"rb\") as f:\n            s3_bucket.upload_from_file_object(\n                f, \"my_folder/notes.txt\"\n            )\n        ```\n    \"\"\"\n    bucket_path = str(self._join_bucket_folder(to_path))\n    client = self.credentials.get_s3_client()\n    await run_sync_in_worker_thread(\n        client.upload_fileobj,\n        Fileobj=from_file_object,\n        Bucket=self.bucket_name,\n        Key=bucket_path,\n        **upload_kwargs,\n    )\n    self.logger.info(\n        \"Uploaded from file object to the bucket \"\n        f\"{self.bucket_name!r} path {bucket_path!r}.\"\n    )\n    return bucket_path\n
    "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.upload_from_folder","title":"upload_from_folder async","text":"

    Uploads files within a folder (excluding the folder itself) to the object storage service folder.

    Parameters:

    Name Type Description Default from_folder Union[str, Path]

    The path to the folder to upload from.

    required to_folder Optional[str]

    The path to upload the folder to.

    None **upload_kwargs Dict[str, Any]

    Additional keyword arguments to pass to Client.upload_fileobj.

    {}

    Returns:

    Type Description str

    The path that the folder was uploaded to.

    Examples:

    Upload contents from my_folder to new_folder.

    from prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\ns3_bucket.upload_from_folder(\"my_folder\", \"new_folder\")\n

    Source code in prefect_aws/s3.py
    @sync_compatible\nasync def upload_from_folder(\n    self,\n    from_folder: Union[str, Path],\n    to_folder: Optional[str] = None,\n    **upload_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"\n    Uploads files *within* a folder (excluding the folder itself)\n    to the object storage service folder.\n\n    Args:\n        from_folder: The path to the folder to upload from.\n        to_folder: The path to upload the folder to.\n        **upload_kwargs: Additional keyword arguments to pass to\n            `Client.upload_fileobj`.\n\n    Returns:\n        The path that the folder was uploaded to.\n\n    Examples:\n        Upload contents from my_folder to new_folder.\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        s3_bucket.upload_from_folder(\"my_folder\", \"new_folder\")\n        ```\n    \"\"\"\n    from_folder = Path(from_folder)\n    bucket_folder = self._join_bucket_folder(to_folder or \"\")\n\n    num_uploaded = 0\n    client = self.credentials.get_s3_client()\n\n    async_coros = []\n    for from_path in from_folder.rglob(\"**/*\"):\n        # this skips the actual directory itself, e.g.\n        # `my_folder/` will be skipped\n        # `my_folder/notes.txt` will be uploaded\n        if from_path.is_dir():\n            continue\n        bucket_path = (\n            Path(bucket_folder) / from_path.relative_to(from_folder)\n        ).as_posix()\n        self.logger.info(\n            f\"Uploading from {str(from_path)!r} to the bucket \"\n            f\"{self.bucket_name!r} path {bucket_path!r}.\"\n        )\n        async_coros.append(\n            run_sync_in_worker_thread(\n                client.upload_file,\n                Filename=str(from_path),\n                Bucket=self.bucket_name,\n                Key=bucket_path,\n                **upload_kwargs,\n            )\n        )\n        num_uploaded += 1\n    await asyncio.gather(*async_coros)\n\n    if num_uploaded == 0:\n        self.logger.warning(f\"No files were uploaded from {str(from_folder)!r}.\")\n    else:\n        self.logger.info(\n            f\"Uploaded {num_uploaded} files from {str(from_folder)!r} to \"\n            f\"the bucket {self.bucket_name!r} path {bucket_path!r}\"\n        )\n\n    return to_folder\n
    "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.upload_from_path","title":"upload_from_path async","text":"

    Uploads an object from a path to the S3 bucket.

    Parameters:

    Name Type Description Default from_path Union[str, Path]

    The path to the file to upload from.

    required to_path Optional[str]

    The path to upload the file to.

    None **upload_kwargs Dict[str, Any]

    Additional keyword arguments to pass to Client.upload.

    {}

    Returns:

    Type Description str

    The path that the object was uploaded to.

    Examples:

    Upload notes.txt to my_folder/notes.txt.

    from prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\ns3_bucket.upload_from_path(\"notes.txt\", \"my_folder/notes.txt\")\n

    Source code in prefect_aws/s3.py
    @sync_compatible\nasync def upload_from_path(\n    self,\n    from_path: Union[str, Path],\n    to_path: Optional[str] = None,\n    **upload_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"\n    Uploads an object from a path to the S3 bucket.\n\n    Args:\n        from_path: The path to the file to upload from.\n        to_path: The path to upload the file to.\n        **upload_kwargs: Additional keyword arguments to pass to `Client.upload`.\n\n    Returns:\n        The path that the object was uploaded to.\n\n    Examples:\n        Upload notes.txt to my_folder/notes.txt.\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        s3_bucket.upload_from_path(\"notes.txt\", \"my_folder/notes.txt\")\n        ```\n    \"\"\"\n    from_path = str(Path(from_path).absolute())\n    if to_path is None:\n        to_path = Path(from_path).name\n\n    bucket_path = str(self._join_bucket_folder(to_path))\n    client = self.credentials.get_s3_client()\n\n    await run_sync_in_worker_thread(\n        client.upload_file,\n        Filename=from_path,\n        Bucket=self.bucket_name,\n        Key=bucket_path,\n        **upload_kwargs,\n    )\n    self.logger.info(\n        f\"Uploaded from {from_path!r} to the bucket \"\n        f\"{self.bucket_name!r} path {bucket_path!r}.\"\n    )\n    return bucket_path\n
    "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.write_path","title":"write_path async","text":"

    Writes to an S3 bucket.

    Args:

    path: The key name. Each object in your bucket has a unique\n    key (or key name).\ncontent: What you are uploading to S3.\n

    Example:

    Write data to the path `dogs/small_dogs/havanese` in an S3 Bucket:\n```python\nfrom prefect_aws import MinIOCredentials\nfrom prefect_aws.s3 import S3Bucket\n\nminio_creds = MinIOCredentials(\n    minio_root_user = \"minioadmin\",\n    minio_root_password = \"minioadmin\",\n)\n\ns3_bucket_block = S3Bucket(\n    bucket_name=\"bucket\",\n    minio_credentials=minio_creds,\n    bucket_folder=\"dogs/smalldogs\",\n    endpoint_url=\"http://localhost:9000\",\n)\ns3_havanese_path = s3_bucket_block.write_path(path=\"havanese\", content=data)\n```\n
    Source code in prefect_aws/s3.py
    @sync_compatible\nasync def write_path(self, path: str, content: bytes) -> str:\n    \"\"\"\n    Writes to an S3 bucket.\n\n    Args:\n\n        path: The key name. Each object in your bucket has a unique\n            key (or key name).\n        content: What you are uploading to S3.\n\n    Example:\n\n        Write data to the path `dogs/small_dogs/havanese` in an S3 Bucket:\n        ```python\n        from prefect_aws import MinIOCredentials\n        from prefect_aws.s3 import S3Bucket\n\n        minio_creds = MinIOCredentials(\n            minio_root_user = \"minioadmin\",\n            minio_root_password = \"minioadmin\",\n        )\n\n        s3_bucket_block = S3Bucket(\n            bucket_name=\"bucket\",\n            minio_credentials=minio_creds,\n            bucket_folder=\"dogs/smalldogs\",\n            endpoint_url=\"http://localhost:9000\",\n        )\n        s3_havanese_path = s3_bucket_block.write_path(path=\"havanese\", content=data)\n        ```\n    \"\"\"\n\n    path = self._resolve_path(path)\n\n    await run_sync_in_worker_thread(self._write_sync, path, content)\n\n    return path\n
    "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.s3_copy","title":"s3_copy async","text":"

    Uses S3's internal CopyObject to copy objects within or between buckets. To copy objects between buckets, the credentials must have permission to read the source object and write to the target object. If the credentials do not have those permissions, try using S3Bucket.stream_from.

    Parameters:

    Name Type Description Default source_path str

    The path to the object to copy. Can be a string or Path.

    required target_path str

    The path to copy the object to. Can be a string or Path.

    required source_bucket_name str

    The bucket to copy the object from.

    required aws_credentials AwsCredentials

    Credentials to use for authentication with AWS.

    required target_bucket_name Optional[str]

    The bucket to copy the object to. If not provided, defaults to the source bucket.

    None **copy_kwargs

    Additional keyword arguments to pass to S3Client.copy_object.

    {}

    Returns:

    Type Description str

    The path that the object was copied to. Excludes the bucket name.

    Copy notes.txt from s3://my-bucket/my_folder/notes.txt to\ns3://my-bucket/my_folder/notes_copy.txt.\n\n```python\nfrom prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.s3 import s3_copy\n\naws_credentials = AwsCredentials.load(\"my-creds\")\n\n@flow\nasync def example_copy_flow():\n    await s3_copy(\n        source_path=\"my_folder/notes.txt\",\n        target_path=\"my_folder/notes_copy.txt\",\n        source_bucket_name=\"my-bucket\",\n        aws_credentials=aws_credentials,\n    )\n\nexample_copy_flow()\n```\n\nCopy notes.txt from s3://my-bucket/my_folder/notes.txt to\ns3://other-bucket/notes_copy.txt.\n\n```python\nfrom prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.s3 import s3_copy\n\naws_credentials = AwsCredentials.load(\"shared-creds\")\n\n@flow\nasync def example_copy_flow():\n    await s3_copy(\n        source_path=\"my_folder/notes.txt\",\n        target_path=\"notes_copy.txt\",\n        source_bucket_name=\"my-bucket\",\n        aws_credentials=aws_credentials,\n        target_bucket_name=\"other-bucket\",\n    )\n\nexample_copy_flow()\n```\n
    Source code in prefect_aws/s3.py
    @task\nasync def s3_copy(\n    source_path: str,\n    target_path: str,\n    source_bucket_name: str,\n    aws_credentials: AwsCredentials,\n    target_bucket_name: Optional[str] = None,\n    **copy_kwargs,\n) -> str:\n    \"\"\"Uses S3's internal\n    [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html)\n    to copy objects within or between buckets. To copy objects between buckets, the\n    credentials must have permission to read the source object and write to the target\n    object. If the credentials do not have those permissions, try using\n    `S3Bucket.stream_from`.\n\n    Args:\n        source_path: The path to the object to copy. Can be a string or `Path`.\n        target_path: The path to copy the object to. Can be a string or `Path`.\n        source_bucket_name: The bucket to copy the object from.\n        aws_credentials: Credentials to use for authentication with AWS.\n        target_bucket_name: The bucket to copy the object to. If not provided, defaults\n            to `source_bucket`.\n        **copy_kwargs: Additional keyword arguments to pass to `S3Client.copy_object`.\n\n    Returns:\n        The path that the object was copied to. Excludes the bucket name.\n\n    Examples:\n\n        Copy notes.txt from s3://my-bucket/my_folder/notes.txt to\n        s3://my-bucket/my_folder/notes_copy.txt.\n\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.s3 import s3_copy\n\n        aws_credentials = AwsCredentials.load(\"my-creds\")\n\n        @flow\n        async def example_copy_flow():\n            await s3_copy(\n                source_path=\"my_folder/notes.txt\",\n                target_path=\"my_folder/notes_copy.txt\",\n                source_bucket_name=\"my-bucket\",\n                aws_credentials=aws_credentials,\n            )\n\n        example_copy_flow()\n        ```\n\n        Copy notes.txt from s3://my-bucket/my_folder/notes.txt to\n        s3://other-bucket/notes_copy.txt.\n\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.s3 import s3_copy\n\n        aws_credentials = AwsCredentials.load(\"shared-creds\")\n\n        @flow\n        async def example_copy_flow():\n            await s3_copy(\n                source_path=\"my_folder/notes.txt\",\n                target_path=\"notes_copy.txt\",\n                source_bucket_name=\"my-bucket\",\n                aws_credentials=aws_credentials,\n                target_bucket_name=\"other-bucket\",\n            )\n\n        example_copy_flow()\n        ```\n\n    \"\"\"\n    logger = get_run_logger()\n\n    s3_client = aws_credentials.get_s3_client()\n\n    target_bucket_name = target_bucket_name or source_bucket_name\n\n    logger.info(\n        \"Copying object from bucket %s with key %s to bucket %s with key %s\",\n        source_bucket_name,\n        source_path,\n        target_bucket_name,\n        target_path,\n    )\n\n    s3_client.copy_object(\n        CopySource={\"Bucket\": source_bucket_name, \"Key\": source_path},\n        Bucket=target_bucket_name,\n        Key=target_path,\n        **copy_kwargs,\n    )\n\n    return target_path\n
    "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.s3_download","title":"s3_download async","text":"

    Downloads an object with a given key from a given S3 bucket.

    Parameters:

    Name Type Description Default bucket str

    Name of bucket to download object from. Required if a default value was not supplied when creating the task.

    required key str

    Key of object to download. Required if a default value was not supplied when creating the task.

    required aws_credentials AwsCredentials

    Credentials to use for authentication with AWS.

    required aws_client_parameters AwsClientParameters

    Custom parameter for the boto3 client initialization.

    AwsClientParameters()

    Returns:

    Type Description bytes

    A bytes representation of the downloaded object.

    Example

    Download a file from an S3 bucket:

    from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.s3 import s3_download\n\n\n@flow\nasync def example_s3_download_flow():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"access_key_id\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n    data = await s3_download(\n        bucket=\"bucket\",\n        key=\"key\",\n        aws_credentials=aws_credentials,\n    )\n\nexample_s3_download_flow()\n
    Source code in prefect_aws/s3.py
    @task\nasync def s3_download(\n    bucket: str,\n    key: str,\n    aws_credentials: AwsCredentials,\n    aws_client_parameters: AwsClientParameters = AwsClientParameters(),\n) -> bytes:\n    \"\"\"\n    Downloads an object with a given key from a given S3 bucket.\n\n    Args:\n        bucket: Name of bucket to download object from. Required if a default value was\n            not supplied when creating the task.\n        key: Key of object to download. Required if a default value was not supplied\n            when creating the task.\n        aws_credentials: Credentials to use for authentication with AWS.\n        aws_client_parameters: Custom parameter for the boto3 client initialization.\n\n\n    Returns:\n        A `bytes` representation of the downloaded object.\n\n    Example:\n        Download a file from an S3 bucket:\n\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.s3 import s3_download\n\n\n        @flow\n        async def example_s3_download_flow():\n            aws_credentials = AwsCredentials(\n                aws_access_key_id=\"acccess_key_id\",\n                aws_secret_access_key=\"secret_access_key\"\n            )\n            data = await s3_download(\n                bucket=\"bucket\",\n                key=\"key\",\n                aws_credentials=aws_credentials,\n            )\n\n        example_s3_download_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Downloading object from bucket %s with key %s\", bucket, key)\n\n    s3_client = aws_credentials.get_boto3_session().client(\n        \"s3\", **aws_client_parameters.get_params_override()\n    )\n    stream = io.BytesIO()\n    await run_sync_in_worker_thread(\n        s3_client.download_fileobj, Bucket=bucket, Key=key, Fileobj=stream\n    )\n    stream.seek(0)\n    output = stream.read()\n\n    return output\n
    "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.s3_list_objects","title":"s3_list_objects async","text":"

    Lists details of objects in a given S3 bucket.

    Parameters:

    Name Type Description Default bucket str

    Name of bucket to list items from. Required if a default value was not supplied when creating the task.

    required aws_credentials AwsCredentials

    Credentials to use for authentication with AWS.

    required aws_client_parameters AwsClientParameters

    Custom parameter for the boto3 client initialization.

    AwsClientParameters() prefix str

    Used to filter objects with keys starting with the specified prefix.

    '' delimiter str

    Character used to group keys of listed objects.

    '' page_size Optional[int]

    Number of objects to return in each request to the AWS API.

    None max_items Optional[int]

    Maximum number of objects to be returned by the task.

    None jmespath_query Optional[str]

    Query used to filter objects based on object attributes. Refer to the boto3 docs for more information on how to construct queries.

    None

    Returns:

    Type Description List[Dict[str, Any]]

    A list of dictionaries containing information about the objects retrieved. Refer to the boto3 docs for an example response.

    Example

    List all objects in a bucket:

    from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.s3 import s3_list_objects\n\n\n@flow\nasync def example_s3_list_objects_flow():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"access_key_id\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n    objects = await s3_list_objects(\n        bucket=\"data_bucket\",\n        aws_credentials=aws_credentials\n    )\n\nexample_s3_list_objects_flow()\n
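
    A minimal sketch of the jmespath_query parameter, which the example above does not use; the bucket name, prefix, and query string are placeholders, and the "my-creds" credentials block is assumed to exist:

    from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.s3 import s3_list_objects\n\n\n@flow\nasync def example_s3_list_large_objects_flow():\n    # assumed: an AwsCredentials block saved as \"my-creds\"\n    aws_credentials = AwsCredentials.load(\"my-creds\")\n    # keep only objects under the \"logs/\" prefix whose Size exceeds 1024 bytes\n    objects = await s3_list_objects(\n        bucket=\"data_bucket\",\n        prefix=\"logs/\",\n        jmespath_query=\"Contents[?Size > `1024`]\",\n        aws_credentials=aws_credentials,\n    )\n\nexample_s3_list_large_objects_flow()\n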
    Source code in prefect_aws/s3.py
    @task\nasync def s3_list_objects(\n    bucket: str,\n    aws_credentials: AwsCredentials,\n    aws_client_parameters: AwsClientParameters = AwsClientParameters(),\n    prefix: str = \"\",\n    delimiter: str = \"\",\n    page_size: Optional[int] = None,\n    max_items: Optional[int] = None,\n    jmespath_query: Optional[str] = None,\n) -> List[Dict[str, Any]]:\n    \"\"\"\n    Lists details of objects in a given S3 bucket.\n\n    Args:\n        bucket: Name of bucket to list items from. Required if a default value was not\n            supplied when creating the task.\n        aws_credentials: Credentials to use for authentication with AWS.\n        aws_client_parameters: Custom parameter for the boto3 client initialization..\n        prefix: Used to filter objects with keys starting with the specified prefix.\n        delimiter: Character used to group keys of listed objects.\n        page_size: Number of objects to return in each request to the AWS API.\n        max_items: Maximum number of objects that to be returned by task.\n        jmespath_query: Query used to filter objects based on object attributes refer to\n            the [boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/paginators.html#filtering-results-with-jmespath)\n            for more information on how to construct queries.\n\n    Returns:\n        A list of dictionaries containing information about the objects retrieved. Refer\n            to the boto3 docs for an example response.\n\n    Example:\n        List all objects in a bucket:\n\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.s3 import s3_list_objects\n\n\n        @flow\n        async def example_s3_list_objects_flow():\n            aws_credentials = AwsCredentials(\n                aws_access_key_id=\"acccess_key_id\",\n                aws_secret_access_key=\"secret_access_key\"\n            )\n            objects = await s3_list_objects(\n                bucket=\"data_bucket\",\n                aws_credentials=aws_credentials\n            )\n\n        example_s3_list_objects_flow()\n        ```\n    \"\"\"  # noqa E501\n    logger = get_run_logger()\n    logger.info(\"Listing objects in bucket %s with prefix %s\", bucket, prefix)\n\n    s3_client = aws_credentials.get_boto3_session().client(\n        \"s3\", **aws_client_parameters.get_params_override()\n    )\n    paginator = s3_client.get_paginator(\"list_objects_v2\")\n    page_iterator = paginator.paginate(\n        Bucket=bucket,\n        Prefix=prefix,\n        Delimiter=delimiter,\n        PaginationConfig={\"PageSize\": page_size, \"MaxItems\": max_items},\n    )\n    if jmespath_query:\n        page_iterator = page_iterator.search(f\"{jmespath_query} | {{Contents: @}}\")\n\n    return await run_sync_in_worker_thread(_list_objects_sync, page_iterator)\n
    "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.s3_move","title":"s3_move async","text":"

    Move an object from one S3 location to another. To move objects between buckets, the credentials must have permission to read and delete the source object and write to the target object. If the credentials do not have those permissions, this method will raise an error. If the credentials have permission to read the source object but not delete it, the object will be copied but not deleted.

    Parameters:

    Name Type Description Default source_path str

    The path of the object to move

    required target_path str

    The path to move the object to

    required source_bucket_name str

    The name of the bucket containing the source object

    required aws_credentials AwsCredentials

    Credentials to use for authentication with AWS.

    required target_bucket_name Optional[str]

    The bucket to copy the object to. If not provided, defaults to the source bucket.

    None

    Returns:

    Type Description str

    The path that the object was moved to. Excludes the bucket name.
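
    A minimal sketch of calling the task inside a flow, assuming a saved AwsCredentials block named "my-creds"; the bucket name and object keys are placeholders:

    from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.s3 import s3_move\n\n\n@flow\nasync def example_s3_move_flow():\n    # assumed: an AwsCredentials block saved as \"my-creds\"\n    aws_credentials = AwsCredentials.load(\"my-creds\")\n    # copies the object to the target key, then deletes the original\n    await s3_move(\n        source_path=\"my_folder/notes.txt\",\n        target_path=\"archive/notes.txt\",\n        source_bucket_name=\"my-bucket\",\n        aws_credentials=aws_credentials,\n    )\n\nexample_s3_move_flow()\n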

    Source code in prefect_aws/s3.py
    @task\nasync def s3_move(\n    source_path: str,\n    target_path: str,\n    source_bucket_name: str,\n    aws_credentials: AwsCredentials,\n    target_bucket_name: Optional[str] = None,\n) -> str:\n    \"\"\"\n    Move an object from one S3 location to another. To move objects between buckets,\n    the credentials must have permission to read and delete the source object and write\n    to the target object. If the credentials do not have those permissions, this method\n    will raise an error. If the credentials have permission to read the source object\n    but not delete it, the object will be copied but not deleted.\n\n    Args:\n        source_path: The path of the object to move\n        target_path: The path to move the object to\n        source_bucket_name: The name of the bucket containing the source object\n        aws_credentials: Credentials to use for authentication with AWS.\n        target_bucket_name: The bucket to copy the object to. If not provided, defaults\n            to `source_bucket`.\n\n    Returns:\n        The path that the object was moved to. Excludes the bucket name.\n    \"\"\"\n    logger = get_run_logger()\n\n    s3_client = aws_credentials.get_s3_client()\n\n    # If target bucket is not provided, assume it's the same as the source bucket\n    target_bucket_name = target_bucket_name or source_bucket_name\n\n    logger.info(\n        \"Moving object from s3://%s/%s s3://%s/%s\",\n        source_bucket_name,\n        source_path,\n        target_bucket_name,\n        target_path,\n    )\n\n    # Copy the object to the new location\n    s3_client.copy_object(\n        Bucket=target_bucket_name,\n        CopySource={\"Bucket\": source_bucket_name, \"Key\": source_path},\n        Key=target_path,\n    )\n\n    # Delete the original object\n    s3_client.delete_object(Bucket=source_bucket_name, Key=source_path)\n\n    return target_path\n
    "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.s3_upload","title":"s3_upload async","text":"

    Uploads data to an S3 bucket.

    Parameters:

    Name Type Description Default data bytes

    Bytes representation of data to upload to S3.

    required bucket str

    Name of bucket to upload data to. Required if a default value was not supplied when creating the task.

    required aws_credentials AwsCredentials

    Credentials to use for authentication with AWS.

    required aws_client_parameters AwsClientParameters

    Custom parameter for the boto3 client initialization.

    AwsClientParameters() key Optional[str]

    Key of the object to upload. Defaults to a UUID string.

    None

    Returns:

    Type Description str

    The key of the uploaded object

    Example

    Read and upload a file to an S3 bucket:

    from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.s3 import s3_upload\n\n\n@flow\nasync def example_s3_upload_flow():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"access_key_id\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n    with open(\"data.csv\", \"rb\") as file:\n        key = await s3_upload(\n            bucket=\"bucket\",\n            key=\"data.csv\",\n            data=file.read(),\n            aws_credentials=aws_credentials,\n        )\n\nexample_s3_upload_flow()\n
    Source code in prefect_aws/s3.py
    @task\nasync def s3_upload(\n    data: bytes,\n    bucket: str,\n    aws_credentials: AwsCredentials,\n    aws_client_parameters: AwsClientParameters = AwsClientParameters(),\n    key: Optional[str] = None,\n) -> str:\n    \"\"\"\n    Uploads data to an S3 bucket.\n\n    Args:\n        data: Bytes representation of data to upload to S3.\n        bucket: Name of bucket to upload data to. Required if a default value was not\n            supplied when creating the task.\n        aws_credentials: Credentials to use for authentication with AWS.\n        aws_client_parameters: Custom parameter for the boto3 client initialization..\n        key: Key of object to download. Defaults to a UUID string.\n\n    Returns:\n        The key of the uploaded object\n\n    Example:\n        Read and upload a file to an S3 bucket:\n\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.s3 import s3_upload\n\n\n        @flow\n        async def example_s3_upload_flow():\n            aws_credentials = AwsCredentials(\n                aws_access_key_id=\"acccess_key_id\",\n                aws_secret_access_key=\"secret_access_key\"\n            )\n            with open(\"data.csv\", \"rb\") as file:\n                key = await s3_upload(\n                    bucket=\"bucket\",\n                    key=\"data.csv\",\n                    data=file.read(),\n                    aws_credentials=aws_credentials,\n                )\n\n        example_s3_upload_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n\n    key = key or str(uuid.uuid4())\n\n    logger.info(\"Uploading object to bucket %s with key %s\", bucket, key)\n\n    s3_client = aws_credentials.get_boto3_session().client(\n        \"s3\", **aws_client_parameters.get_params_override()\n    )\n    stream = io.BytesIO(data)\n    await run_sync_in_worker_thread(\n        s3_client.upload_fileobj, stream, Bucket=bucket, Key=key\n    )\n\n    return key\n
    "},{"location":"integrations/prefect-aws/secrets_manager/","title":"Secrets Manager","text":""},{"location":"integrations/prefect-aws/secrets_manager/#prefect_aws.secrets_manager","title":"prefect_aws.secrets_manager","text":"

    Tasks for interacting with AWS Secrets Manager

    "},{"location":"integrations/prefect-aws/secrets_manager/#prefect_aws.secrets_manager.AwsSecret","title":"AwsSecret","text":"

    Bases: SecretBlock

    Manages a secret in AWS's Secrets Manager.

    Attributes:

    Name Type Description aws_credentials AwsCredentials

    The credentials to use for authentication with AWS.

    secret_name str

    The name of the secret.
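
    A minimal usage sketch, assuming a saved AwsCredentials block named "my-creds" and an existing secret called "db_password":

    from prefect_aws import AwsCredentials\nfrom prefect_aws.secrets_manager import AwsSecret\n\n# assumed: an AwsCredentials block saved as \"my-creds\"\naws_secret = AwsSecret(\n    aws_credentials=AwsCredentials.load(\"my-creds\"),\n    secret_name=\"db_password\",\n)\n\n# read_secret is sync-compatible, so it can be called without await here\nsecret_value = aws_secret.read_secret()\n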

    Source code in prefect_aws/secrets_manager.py
    class AwsSecret(SecretBlock):\n    \"\"\"\n    Manages a secret in AWS's Secrets Manager.\n\n    Attributes:\n        aws_credentials: The credentials to use for authentication with AWS.\n        secret_name: The name of the secret.\n    \"\"\"\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/d74b16fe84ce626345adf235a47008fea2869a60-225x225.png\"  # noqa\n    _block_type_name = \"AWS Secret\"\n    _documentation_url = \"https://prefecthq.github.io/prefect-aws/secrets_manager/#prefect_aws.secrets_manager.AwsSecret\"  # noqa\n\n    aws_credentials: AwsCredentials\n    secret_name: str = Field(default=..., description=\"The name of the secret.\")\n\n    @sync_compatible\n    async def read_secret(\n        self,\n        version_id: str = None,\n        version_stage: str = None,\n        **read_kwargs: Dict[str, Any],\n    ) -> bytes:\n        \"\"\"\n        Reads the secret from the secret storage service.\n\n        Args:\n            version_id: The version of the secret to read. If not provided, the latest\n                version will be read.\n            version_stage: The version stage of the secret to read. If not provided,\n                the latest version will be read.\n            read_kwargs: Additional keyword arguments to pass to the\n                `get_secret_value` method of the boto3 client.\n\n        Returns:\n            The secret data.\n\n        Examples:\n            Reads a secret.\n            ```python\n            secrets_manager = SecretsManager.load(\"MY_BLOCK\")\n            secrets_manager.read_secret()\n            ```\n        \"\"\"\n        client = self.aws_credentials.get_secrets_manager_client()\n        if version_id is not None:\n            read_kwargs[\"VersionId\"] = version_id\n        if version_stage is not None:\n            read_kwargs[\"VersionStage\"] = version_stage\n        response = await run_sync_in_worker_thread(\n            client.get_secret_value, SecretId=self.secret_name, **read_kwargs\n        )\n        if \"SecretBinary\" in response:\n            secret = response[\"SecretBinary\"]\n        elif \"SecretString\" in response:\n            secret = response[\"SecretString\"]\n        arn = response[\"ARN\"]\n        self.logger.info(f\"The secret {arn!r} data was successfully read.\")\n        return secret\n\n    @sync_compatible\n    async def write_secret(\n        self, secret_data: bytes, **put_or_create_secret_kwargs: Dict[str, Any]\n    ) -> str:\n        \"\"\"\n        Writes the secret to the secret storage service as a SecretBinary;\n        if it doesn't exist, it will be created.\n\n        Args:\n            secret_data: The secret data to write.\n            **put_or_create_secret_kwargs: Additional keyword arguments to pass to\n                put_secret_value or create_secret method of the boto3 client.\n\n        Returns:\n            The path that the secret was written to.\n\n        Examples:\n            Write some secret data.\n            ```python\n            secrets_manager = SecretsManager.load(\"MY_BLOCK\")\n            secrets_manager.write_secret(b\"my_secret_data\")\n            ```\n        \"\"\"\n        client = self.aws_credentials.get_secrets_manager_client()\n        try:\n            response = await run_sync_in_worker_thread(\n                client.put_secret_value,\n                SecretId=self.secret_name,\n                SecretBinary=secret_data,\n                **put_or_create_secret_kwargs,\n            )\n        except 
client.exceptions.ResourceNotFoundException:\n            self.logger.info(\n                f\"The secret {self.secret_name!r} does not exist yet, creating it now.\"\n            )\n            response = await run_sync_in_worker_thread(\n                client.create_secret,\n                Name=self.secret_name,\n                SecretBinary=secret_data,\n                **put_or_create_secret_kwargs,\n            )\n        arn = response[\"ARN\"]\n        self.logger.info(f\"The secret data was written successfully to {arn!r}.\")\n        return arn\n\n    @sync_compatible\n    async def delete_secret(\n        self,\n        recovery_window_in_days: int = 30,\n        force_delete_without_recovery: bool = False,\n        **delete_kwargs: Dict[str, Any],\n    ) -> str:\n        \"\"\"\n        Deletes the secret from the secret storage service.\n\n        Args:\n            recovery_window_in_days: The number of days to wait before permanently\n                deleting the secret. Must be between 7 and 30 days.\n            force_delete_without_recovery: If True, the secret will be deleted\n                immediately without a recovery window.\n            **delete_kwargs: Additional keyword arguments to pass to the\n                delete_secret method of the boto3 client.\n\n        Returns:\n            The path that the secret was deleted from.\n\n        Examples:\n            Deletes the secret with a recovery window of 15 days.\n            ```python\n            secrets_manager = SecretsManager.load(\"MY_BLOCK\")\n            secrets_manager.delete_secret(recovery_window_in_days=15)\n            ```\n        \"\"\"\n        if force_delete_without_recovery and recovery_window_in_days:\n            raise ValueError(\n                \"Cannot specify recovery window and force delete without recovery.\"\n            )\n        elif not (7 <= recovery_window_in_days <= 30):\n            raise ValueError(\n                \"Recovery window must be between 7 and 30 days, got \"\n                f\"{recovery_window_in_days}.\"\n            )\n\n        client = self.aws_credentials.get_secrets_manager_client()\n        response = await run_sync_in_worker_thread(\n            client.delete_secret,\n            SecretId=self.secret_name,\n            RecoveryWindowInDays=recovery_window_in_days,\n            ForceDeleteWithoutRecovery=force_delete_without_recovery,\n            **delete_kwargs,\n        )\n        arn = response[\"ARN\"]\n        self.logger.info(f\"The secret {arn} was deleted successfully.\")\n        return arn\n
    "},{"location":"integrations/prefect-aws/secrets_manager/#prefect_aws.secrets_manager.AwsSecret.delete_secret","title":"delete_secret async","text":"

    Deletes the secret from the secret storage service.

    Parameters:

    Name Type Description Default recovery_window_in_days int

    The number of days to wait before permanently deleting the secret. Must be between 7 and 30 days.

    30 force_delete_without_recovery bool

    If True, the secret will be deleted immediately without a recovery window.

    False **delete_kwargs Dict[str, Any]

    Additional keyword arguments to pass to the delete_secret method of the boto3 client.

    {}

    Returns:

    Type Description str

    The path that the secret was deleted from.

    Examples:

    Deletes the secret with a recovery window of 15 days.

    secrets_manager = SecretsManager.load(\"MY_BLOCK\")\nsecrets_manager.delete_secret(recovery_window_in_days=15)\n

    Source code in prefect_aws/secrets_manager.py
    @sync_compatible\nasync def delete_secret(\n    self,\n    recovery_window_in_days: int = 30,\n    force_delete_without_recovery: bool = False,\n    **delete_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"\n    Deletes the secret from the secret storage service.\n\n    Args:\n        recovery_window_in_days: The number of days to wait before permanently\n            deleting the secret. Must be between 7 and 30 days.\n        force_delete_without_recovery: If True, the secret will be deleted\n            immediately without a recovery window.\n        **delete_kwargs: Additional keyword arguments to pass to the\n            delete_secret method of the boto3 client.\n\n    Returns:\n        The path that the secret was deleted from.\n\n    Examples:\n        Deletes the secret with a recovery window of 15 days.\n        ```python\n        secrets_manager = SecretsManager.load(\"MY_BLOCK\")\n        secrets_manager.delete_secret(recovery_window_in_days=15)\n        ```\n    \"\"\"\n    if force_delete_without_recovery and recovery_window_in_days:\n        raise ValueError(\n            \"Cannot specify recovery window and force delete without recovery.\"\n        )\n    elif not (7 <= recovery_window_in_days <= 30):\n        raise ValueError(\n            \"Recovery window must be between 7 and 30 days, got \"\n            f\"{recovery_window_in_days}.\"\n        )\n\n    client = self.aws_credentials.get_secrets_manager_client()\n    response = await run_sync_in_worker_thread(\n        client.delete_secret,\n        SecretId=self.secret_name,\n        RecoveryWindowInDays=recovery_window_in_days,\n        ForceDeleteWithoutRecovery=force_delete_without_recovery,\n        **delete_kwargs,\n    )\n    arn = response[\"ARN\"]\n    self.logger.info(f\"The secret {arn} was deleted successfully.\")\n    return arn\n
    "},{"location":"integrations/prefect-aws/secrets_manager/#prefect_aws.secrets_manager.AwsSecret.read_secret","title":"read_secret async","text":"

    Reads the secret from the secret storage service.

    Parameters:

    Name Type Description Default version_id str

    The version of the secret to read. If not provided, the latest version will be read.

    None version_stage str

    The version stage of the secret to read. If not provided, the latest version will be read.

    None read_kwargs Dict[str, Any]

    Additional keyword arguments to pass to the get_secret_value method of the boto3 client.

    {}

    Returns:

    Type Description bytes

    The secret data.

    Examples:

    Reads a secret.

    secrets_manager = SecretsManager.load(\"MY_BLOCK\")\nsecrets_manager.read_secret()\n

    Source code in prefect_aws/secrets_manager.py
    @sync_compatible\nasync def read_secret(\n    self,\n    version_id: str = None,\n    version_stage: str = None,\n    **read_kwargs: Dict[str, Any],\n) -> bytes:\n    \"\"\"\n    Reads the secret from the secret storage service.\n\n    Args:\n        version_id: The version of the secret to read. If not provided, the latest\n            version will be read.\n        version_stage: The version stage of the secret to read. If not provided,\n            the latest version will be read.\n        read_kwargs: Additional keyword arguments to pass to the\n            `get_secret_value` method of the boto3 client.\n\n    Returns:\n        The secret data.\n\n    Examples:\n        Reads a secret.\n        ```python\n        secrets_manager = SecretsManager.load(\"MY_BLOCK\")\n        secrets_manager.read_secret()\n        ```\n    \"\"\"\n    client = self.aws_credentials.get_secrets_manager_client()\n    if version_id is not None:\n        read_kwargs[\"VersionId\"] = version_id\n    if version_stage is not None:\n        read_kwargs[\"VersionStage\"] = version_stage\n    response = await run_sync_in_worker_thread(\n        client.get_secret_value, SecretId=self.secret_name, **read_kwargs\n    )\n    if \"SecretBinary\" in response:\n        secret = response[\"SecretBinary\"]\n    elif \"SecretString\" in response:\n        secret = response[\"SecretString\"]\n    arn = response[\"ARN\"]\n    self.logger.info(f\"The secret {arn!r} data was successfully read.\")\n    return secret\n
    "},{"location":"integrations/prefect-aws/secrets_manager/#prefect_aws.secrets_manager.AwsSecret.write_secret","title":"write_secret async","text":"

    Writes the secret to the secret storage service as a SecretBinary; if it doesn't exist, it will be created.

    Parameters:

    Name Type Description Default secret_data bytes

    The secret data to write.

    required **put_or_create_secret_kwargs Dict[str, Any]

    Additional keyword arguments to pass to put_secret_value or create_secret method of the boto3 client.

    {}

    Returns:

    Type Description str

    The path that the secret was written to.

    Examples:

    Write some secret data.

    secrets_manager = SecretsManager.load(\"MY_BLOCK\")\nsecrets_manager.write_secret(b\"my_secret_data\")\n

    Source code in prefect_aws/secrets_manager.py
    @sync_compatible\nasync def write_secret(\n    self, secret_data: bytes, **put_or_create_secret_kwargs: Dict[str, Any]\n) -> str:\n    \"\"\"\n    Writes the secret to the secret storage service as a SecretBinary;\n    if it doesn't exist, it will be created.\n\n    Args:\n        secret_data: The secret data to write.\n        **put_or_create_secret_kwargs: Additional keyword arguments to pass to\n            put_secret_value or create_secret method of the boto3 client.\n\n    Returns:\n        The path that the secret was written to.\n\n    Examples:\n        Write some secret data.\n        ```python\n        secrets_manager = SecretsManager.load(\"MY_BLOCK\")\n        secrets_manager.write_secret(b\"my_secret_data\")\n        ```\n    \"\"\"\n    client = self.aws_credentials.get_secrets_manager_client()\n    try:\n        response = await run_sync_in_worker_thread(\n            client.put_secret_value,\n            SecretId=self.secret_name,\n            SecretBinary=secret_data,\n            **put_or_create_secret_kwargs,\n        )\n    except client.exceptions.ResourceNotFoundException:\n        self.logger.info(\n            f\"The secret {self.secret_name!r} does not exist yet, creating it now.\"\n        )\n        response = await run_sync_in_worker_thread(\n            client.create_secret,\n            Name=self.secret_name,\n            SecretBinary=secret_data,\n            **put_or_create_secret_kwargs,\n        )\n    arn = response[\"ARN\"]\n    self.logger.info(f\"The secret data was written successfully to {arn!r}.\")\n    return arn\n
    "},{"location":"integrations/prefect-aws/secrets_manager/#prefect_aws.secrets_manager.create_secret","title":"create_secret async","text":"

    Creates a secret in AWS Secrets Manager.

    Parameters:

    Name Type Description Default secret_name str

    The name of the secret to create.

    required secret_value Union[str, bytes]

    The value to store in the created secret.

    required aws_credentials AwsCredentials

    Credentials to use for authentication with AWS.

    required description Optional[str]

    A description for the created secret.

    None tags Optional[List[Dict[str, str]]]

    A list of tags to attach to the secret. Each tag should be specified as a dictionary in the following format:

    {\n    \"Key\": str,\n    \"Value\": str\n}\n

    None

    Returns:

    Type Description Dict[str, str]

    A dict containing the secret ARN (Amazon Resource Name), name, and current version ID.

    {\n    \"ARN\": str,\n    \"Name\": str,\n    \"VersionId\": str\n}\n

    ```python\nfrom prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.secrets_manager import create_secret\n\n@flow\ndef example_create_secret():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"access_key_id\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n    create_secret(\n        secret_name=\"life_the_universe_and_everything\",\n        secret_value=\"42\",\n        aws_credentials=aws_credentials\n    )\n\nexample_create_secret()\n```\n
    Source code in prefect_aws/secrets_manager.py
    @task\nasync def create_secret(\n    secret_name: str,\n    secret_value: Union[str, bytes],\n    aws_credentials: AwsCredentials,\n    description: Optional[str] = None,\n    tags: Optional[List[Dict[str, str]]] = None,\n) -> Dict[str, str]:\n    \"\"\"\n    Creates a secret in AWS Secrets Manager.\n\n    Args:\n        secret_name: The name of the secret to create.\n        secret_value: The value to store in the created secret.\n        aws_credentials: Credentials to use for authentication with AWS.\n        description: A description for the created secret.\n        tags: A list of tags to attach to the secret. Each tag should be specified as a\n            dictionary in the following format:\n            ```python\n            {\n                \"Key\": str,\n                \"Value\": str\n            }\n            ```\n\n    Returns:\n        A dict containing the secret ARN (Amazon Resource Name),\n            name, and current version ID.\n            ```python\n            {\n                \"ARN\": str,\n                \"Name\": str,\n                \"VersionId\": str\n            }\n            ```\n    Example:\n        Create a secret:\n\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.secrets_manager import create_secret\n\n        @flow\n        def example_create_secret():\n            aws_credentials = AwsCredentials(\n                aws_access_key_id=\"access_key_id\",\n                aws_secret_access_key=\"secret_access_key\"\n            )\n            create_secret(\n                secret_name=\"life_the_universe_and_everything\",\n                secret_value=\"42\",\n                aws_credentials=aws_credentials\n            )\n\n        example_create_secret()\n        ```\n\n\n    \"\"\"\n    create_secret_kwargs: Dict[str, Union[str, bytes, List[Dict[str, str]]]] = dict(\n        Name=secret_name\n    )\n    if description is not None:\n        create_secret_kwargs[\"Description\"] = description\n    if tags is not None:\n        create_secret_kwargs[\"Tags\"] = tags\n    if isinstance(secret_value, bytes):\n        create_secret_kwargs[\"SecretBinary\"] = secret_value\n    elif isinstance(secret_value, str):\n        create_secret_kwargs[\"SecretString\"] = secret_value\n    else:\n        raise ValueError(\"Please provide a bytes or str value for secret_value\")\n\n    logger = get_run_logger()\n    logger.info(\"Creating secret named %s\", secret_name)\n\n    client = aws_credentials.get_boto3_session().client(\"secretsmanager\")\n\n    try:\n        response = await run_sync_in_worker_thread(\n            client.create_secret, **create_secret_kwargs\n        )\n        print(response.pop(\"ResponseMetadata\", None))\n        return response\n    except ClientError:\n        logger.exception(\"Unable to create secret %s\", secret_name)\n        raise\n
    "},{"location":"integrations/prefect-aws/secrets_manager/#prefect_aws.secrets_manager.delete_secret","title":"delete_secret async","text":"

    Deletes a secret from AWS Secrets Manager.

    Secrets can be deleted immediately by setting force_delete_without_recovery to True. Otherwise, secrets are marked for deletion and remain recoverable for the number of days specified in recovery_window_in_days.

    Parameters:

    Name Type Description Default secret_name str

    Name of the secret to be deleted.

    required aws_credentials AwsCredentials

    Credentials to use for authentication with AWS.

    required recovery_window_in_days int

    Number of days a secret should be recoverable for before permanent deletion. Minimum window is 7 days and maximum window is 30 days. If force_delete_without_recovery is set to True, this value will be ignored.

    30 force_delete_without_recovery bool

    If True, the secret will be immediately deleted and will not be recoverable.

    False

    Returns:

    Type Description Dict[str, str]

    A dict containing the secret ARN (Amazon Resource Name), name, and deletion date of the secret. DeletionDate is the date and time of the delete request plus the number of days in recovery_window_in_days.

    {\n    \"ARN\": str,\n    \"Name\": str,\n    \"DeletionDate\": datetime.datetime\n}\n

    Examples:

    Delete a secret immediately:

    from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.secrets_manager import delete_secret\n\n@flow\ndef example_delete_secret_immediately():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"access_key_id\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n    delete_secret(\n        secret_name=\"life_the_universe_and_everything\",\n        aws_credentials=aws_credentials,\n        force_delete_without_recovery=True\n    )\n\nexample_delete_secret_immediately()\n

    Delete a secret with a 15-day recovery window:

    from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.secrets_manager import delete_secret\n\n@flow\ndef example_delete_secret_with_recovery_window():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"access_key_id\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n    delete_secret(\n        secret_name=\"life_the_universe_and_everything\",\n        aws_credentials=aws_credentials,\n        recovery_window_in_days=15\n    )\n\nexample_delete_secret_with_recovery_window()\n
    Source code in prefect_aws/secrets_manager.py
    @task\nasync def delete_secret(\n    secret_name: str,\n    aws_credentials: AwsCredentials,\n    recovery_window_in_days: int = 30,\n    force_delete_without_recovery: bool = False,\n) -> Dict[str, str]:\n    \"\"\"\n    Deletes a secret from AWS Secrets Manager.\n\n    Secrets can either be deleted immediately by setting `force_delete_without_recovery`\n    equal to `True`. Otherwise, secrets will be marked for deletion and available for\n    recovery for the number of days specified in `recovery_window_in_days`\n\n    Args:\n        secret_name: Name of the secret to be deleted.\n        aws_credentials: Credentials to use for authentication with AWS.\n        recovery_window_in_days: Number of days a secret should be recoverable for\n            before permanent deletion. Minimum window is 7 days and maximum window\n            is 30 days. If `force_delete_without_recovery` is set to `True`, this\n            value will be ignored.\n        force_delete_without_recovery: If `True`, the secret will be immediately\n            deleted and will not be recoverable.\n\n    Returns:\n        A dict containing the secret ARN (Amazon Resource Name),\n            name, and deletion date of the secret. DeletionDate is the date and\n            time of the delete request plus the number of days in\n            `recovery_window_in_days`.\n            ```python\n            {\n                \"ARN\": str,\n                \"Name\": str,\n                \"DeletionDate\": datetime.datetime\n            }\n            ```\n\n    Examples:\n        Delete a secret immediately:\n\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.secrets_manager import delete_secret\n\n        @flow\n        def example_delete_secret_immediately():\n            aws_credentials = AwsCredentials(\n                aws_access_key_id=\"access_key_id\",\n                aws_secret_access_key=\"secret_access_key\"\n            )\n            delete_secret(\n                secret_name=\"life_the_universe_and_everything\",\n                aws_credentials=aws_credentials,\n                force_delete_without_recovery: True\n            )\n\n        example_delete_secret_immediately()\n        ```\n\n        Delete a secret with a 90 day recovery window:\n\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.secrets_manager import delete_secret\n\n        @flow\n        def example_delete_secret_with_recovery_window():\n            aws_credentials = AwsCredentials(\n                aws_access_key_id=\"access_key_id\",\n                aws_secret_access_key=\"secret_access_key\"\n            )\n            delete_secret(\n                secret_name=\"life_the_universe_and_everything\",\n                aws_credentials=aws_credentials,\n                recovery_window_in_days=90\n            )\n\n        example_delete_secret_with_recovery_window()\n        ```\n\n\n    \"\"\"\n    if not force_delete_without_recovery and not (7 <= recovery_window_in_days <= 30):\n        raise ValueError(\"Recovery window must be between 7 and 30 days.\")\n\n    delete_secret_kwargs: Dict[str, Union[str, int, bool]] = dict(SecretId=secret_name)\n    if force_delete_without_recovery:\n        delete_secret_kwargs[\n            \"ForceDeleteWithoutRecovery\"\n        ] = force_delete_without_recovery\n    else:\n        delete_secret_kwargs[\"RecoveryWindowInDays\"] = 
recovery_window_in_days\n\n    logger = get_run_logger()\n    logger.info(\"Deleting secret %s\", secret_name)\n\n    client = aws_credentials.get_boto3_session().client(\"secretsmanager\")\n\n    try:\n        response = await run_sync_in_worker_thread(\n            client.delete_secret, **delete_secret_kwargs\n        )\n        response.pop(\"ResponseMetadata\", None)\n        return response\n    except ClientError:\n        logger.exception(\"Unable to delete secret %s\", secret_name)\n        raise\n
    "},{"location":"integrations/prefect-aws/secrets_manager/#prefect_aws.secrets_manager.read_secret","title":"read_secret async","text":"

    Reads the value of a given secret from AWS Secrets Manager.

    Parameters:

    Name Type Description Default secret_name str

    Name of stored secret.

    required aws_credentials AwsCredentials

    Credentials to use for authentication with AWS.

    required version_id Optional[str]

    Specifies version of secret to read. Defaults to the most recent version if not given.

    None version_stage Optional[str]

    Specifies the version stage of the secret to read. Defaults to AWS_CURRENT if not given.

    None

    Returns:

    Type Description Union[str, bytes]

    The secret value as a str or bytes, depending on the format in which the secret was stored.

    Example

    Read a secret value:

    from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.secrets_manager import read_secret\n\n@flow\ndef example_read_secret():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"access_key_id\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n    secret_value = read_secret(\n        secret_name=\"db_password\",\n        aws_credentials=aws_credentials\n    )\n\nexample_read_secret()\n
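
    Read a specific revision of a secret. This is a minimal sketch assuming the secret has an AWSPREVIOUS staging label; the credentials and secret name are placeholders, as above:

    from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.secrets_manager import read_secret\n\n@flow\ndef example_read_previous_secret_version():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"access_key_id\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n    # Pin the read to the AWSPREVIOUS staging label instead of the default AWSCURRENT\n    secret_value = read_secret(\n        secret_name=\"db_password\",\n        aws_credentials=aws_credentials,\n        version_stage=\"AWSPREVIOUS\"\n    )\n\nexample_read_previous_secret_version()\n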
    Source code in prefect_aws/secrets_manager.py
    @task\nasync def read_secret(\n    secret_name: str,\n    aws_credentials: AwsCredentials,\n    version_id: Optional[str] = None,\n    version_stage: Optional[str] = None,\n) -> Union[str, bytes]:\n    \"\"\"\n    Reads the value of a given secret from AWS Secrets Manager.\n\n    Args:\n        secret_name: Name of stored secret.\n        aws_credentials: Credentials to use for authentication with AWS.\n        version_id: Specifies version of secret to read. Defaults to the most recent\n            version if not given.\n        version_stage: Specifies the version stage of the secret to read. Defaults to\n            AWS_CURRENT if not given.\n\n    Returns:\n        The secret values as a `str` or `bytes` depending on the format in which the\n            secret was stored.\n\n    Example:\n        Read a secret value:\n\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.secrets_manager import read_secret\n\n        @flow\n        def example_read_secret():\n            aws_credentials = AwsCredentials(\n                aws_access_key_id=\"access_key_id\",\n                aws_secret_access_key=\"secret_access_key\"\n            )\n            secret_value = read_secret(\n                secret_name=\"db_password\",\n                aws_credentials=aws_credentials\n            )\n\n        example_read_secret()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Getting value for secret %s\", secret_name)\n\n    client = aws_credentials.get_boto3_session().client(\"secretsmanager\")\n\n    get_secret_value_kwargs = dict(SecretId=secret_name)\n    if version_id is not None:\n        get_secret_value_kwargs[\"VersionId\"] = version_id\n    if version_stage is not None:\n        get_secret_value_kwargs[\"VersionStage\"] = version_stage\n\n    try:\n        response = await run_sync_in_worker_thread(\n            client.get_secret_value, **get_secret_value_kwargs\n        )\n    except ClientError:\n        logger.exception(\"Unable to get value for secret %s\", secret_name)\n        raise\n    else:\n        return response.get(\"SecretString\") or response.get(\"SecretBinary\")\n
    "},{"location":"integrations/prefect-aws/secrets_manager/#prefect_aws.secrets_manager.update_secret","title":"update_secret async","text":"

    Updates the value of a given secret in AWS Secrets Manager.

    Parameters:

    Name Type Description Default secret_name str

    Name of secret to update.

    required secret_value Union[str, bytes]

    Desired value of the secret. Can be either str or bytes.

    required aws_credentials AwsCredentials

    Credentials to use for authentication with AWS.

    required description Optional[str]

    Desired description of the secret.

    None

    Returns:

    Type Description Dict[str, str]

    A dict containing the secret ARN (Amazon Resource Name), name, and current version ID.

    {\n    \"ARN\": str,\n    \"Name\": str,\n    \"VersionId\": str\n}\n

    Example

    Update a secret value:

    from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.secrets_manager import update_secret\n\n@flow\ndef example_update_secret():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"access_key_id\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n    update_secret(\n        secret_name=\"life_the_universe_and_everything\",\n        secret_value=\"42\",\n        aws_credentials=aws_credentials\n    )\n\nexample_update_secret()\n
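
    Update a secret's binary value and description in one call. A minimal sketch; the secret name, value, and description are placeholders:

    from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.secrets_manager import update_secret\n\n@flow\ndef example_update_secret_with_description():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"access_key_id\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n    # Passing bytes stores the value as SecretBinary; str is stored as SecretString\n    update_secret(\n        secret_name=\"life_the_universe_and_everything\",\n        secret_value=b\"42\",\n        aws_credentials=aws_credentials,\n        description=\"The answer, stored as binary\"\n    )\n\nexample_update_secret_with_description()\n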
    Source code in prefect_aws/secrets_manager.py
    @task\nasync def update_secret(\n    secret_name: str,\n    secret_value: Union[str, bytes],\n    aws_credentials: AwsCredentials,\n    description: Optional[str] = None,\n) -> Dict[str, str]:\n    \"\"\"\n    Updates the value of a given secret in AWS Secrets Manager.\n\n    Args:\n        secret_name: Name of secret to update.\n        secret_value: Desired value of the secret. Can be either `str` or `bytes`.\n        aws_credentials: Credentials to use for authentication with AWS.\n        description: Desired description of the secret.\n\n    Returns:\n        A dict containing the secret ARN (Amazon Resource Name),\n            name, and current version ID.\n            ```python\n            {\n                \"ARN\": str,\n                \"Name\": str,\n                \"VersionId\": str\n            }\n            ```\n\n    Example:\n        Update a secret value:\n\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.secrets_manager import update_secret\n\n        @flow\n        def example_update_secret():\n            aws_credentials = AwsCredentials(\n                aws_access_key_id=\"access_key_id\",\n                aws_secret_access_key=\"secret_access_key\"\n            )\n            update_secret(\n                secret_name=\"life_the_universe_and_everything\",\n                secret_value=\"42\",\n                aws_credentials=aws_credentials\n            )\n\n        example_update_secret()\n        ```\n\n    \"\"\"\n    update_secret_kwargs: Dict[str, Union[str, bytes]] = dict(SecretId=secret_name)\n    if description is not None:\n        update_secret_kwargs[\"Description\"] = description\n    if isinstance(secret_value, bytes):\n        update_secret_kwargs[\"SecretBinary\"] = secret_value\n    elif isinstance(secret_value, str):\n        update_secret_kwargs[\"SecretString\"] = secret_value\n    else:\n        raise ValueError(\"Please provide a bytes or str value for secret_value\")\n\n    logger = get_run_logger()\n    logger.info(\"Updating value for secret %s\", secret_name)\n\n    client = aws_credentials.get_boto3_session().client(\"secretsmanager\")\n\n    try:\n        response = await run_sync_in_worker_thread(\n            client.update_secret, **update_secret_kwargs\n        )\n        response.pop(\"ResponseMetadata\", None)\n        return response\n    except ClientError:\n        logger.exception(\"Unable to update secret %s\", secret_name)\n        raise\n
    "},{"location":"integrations/prefect-aws/deployments/steps/","title":"Steps","text":""},{"location":"integrations/prefect-aws/deployments/steps/#prefect_aws.deployments.steps","title":"prefect_aws.deployments.steps","text":"

    Prefect deployment steps for code storage and retrieval in S3 and S3-compatible services.

    "},{"location":"integrations/prefect-aws/deployments/steps/#prefect_aws.deployments.steps.PullFromS3Output","title":"PullFromS3Output","text":"

    Bases: TypedDict

    The output of the pull_from_s3 step.

    Source code in prefect_aws/deployments/steps.py
    class PullFromS3Output(TypedDict):\n    \"\"\"\n    The output of the `pull_from_s3` step.\n    \"\"\"\n\n    bucket: str\n    folder: str\n    directory: str\n
    "},{"location":"integrations/prefect-aws/deployments/steps/#prefect_aws.deployments.steps.PullProjectFromS3Output","title":"PullProjectFromS3Output","text":"

    Bases: PullFromS3Output

    Deprecated. Use PullFromS3Output instead.

    Source code in prefect_aws/deployments/steps.py
    @deprecated_callable(start_date=\"Jun 2023\", help=\"Use `PullFromS3Output` instead.\")\nclass PullProjectFromS3Output(PullFromS3Output):\n    \"\"\"Deprecated. Use `PullFromS3Output` instead..\"\"\"\n
    "},{"location":"integrations/prefect-aws/deployments/steps/#prefect_aws.deployments.steps.PushProjectToS3Output","title":"PushProjectToS3Output","text":"

    Bases: PushToS3Output

    Deprecated. Use PushToS3Output instead.

    Source code in prefect_aws/deployments/steps.py
    @deprecated_callable(start_date=\"Jun 2023\", help=\"Use `PushToS3Output` instead.\")\nclass PushProjectToS3Output(PushToS3Output):\n    \"\"\"Deprecated. Use `PushToS3Output` instead.\"\"\"\n
    "},{"location":"integrations/prefect-aws/deployments/steps/#prefect_aws.deployments.steps.PushToS3Output","title":"PushToS3Output","text":"

    Bases: TypedDict

    The output of the push_to_s3 step.

    Source code in prefect_aws/deployments/steps.py
    class PushToS3Output(TypedDict):\n    \"\"\"\n    The output of the `push_to_s3` step.\n    \"\"\"\n\n    bucket: str\n    folder: str\n
    "},{"location":"integrations/prefect-aws/deployments/steps/#prefect_aws.deployments.steps.pull_from_s3","title":"pull_from_s3","text":"

    Pulls the contents of an S3 bucket folder to the current working directory.

    Parameters:

    Name Type Description Default bucket str

    The name of the S3 bucket where files are stored.

    required folder str

    The folder in the S3 bucket where files are stored.

    required credentials Optional[Dict]

    A dictionary of AWS credentials (aws_access_key_id, aws_secret_access_key, aws_session_token) or MinIO credentials (minio_root_user, minio_root_password).

    None client_parameters Optional[Dict]

    A dictionary of additional parameters to pass to the boto3 client.

    None

    Returns:

    Type Description PullFromS3Output

    A dictionary containing the bucket, folder, and local directory where files were downloaded.

    Examples:

    Pull files from S3 using the default credentials and client parameters:

    pull:\n    - prefect_aws.deployments.steps.pull_from_s3:\n        requires: prefect-aws\n        bucket: my-bucket\n        folder: my-project\n

    Pull files from S3 using credentials stored in a block:

    pull:\n    - prefect_aws.deployments.steps.pull_from_s3:\n        requires: prefect-aws\n        bucket: my-bucket\n        folder: my-project\n        credentials: \"{{ prefect.blocks.aws-credentials.dev-credentials }}\"\n
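
    The step is an ordinary Python function, so it can also be called directly, for example against an S3-compatible service such as MinIO. A minimal sketch; the endpoint URL, credentials, and bucket names are placeholders, and it assumes the boto3 client accepts an endpoint_url parameter:

    from prefect_aws.deployments.steps import pull_from_s3\n\n# Placeholder MinIO-style credentials and endpoint; replace with your own values\noutput = pull_from_s3(\n    bucket=\"my-bucket\",\n    folder=\"my-project\",\n    credentials={\n        \"minio_root_user\": \"minioadmin\",\n        \"minio_root_password\": \"minioadmin\",\n    },\n    client_parameters={\"endpoint_url\": \"http://localhost:9000\"},\n)\n\n# The step reports the bucket, folder, and the local directory it downloaded into\nprint(output[\"directory\"])\n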

    Source code in prefect_aws/deployments/steps.py
    def pull_from_s3(\n    bucket: str,\n    folder: str,\n    credentials: Optional[Dict] = None,\n    client_parameters: Optional[Dict] = None,\n) -> PullFromS3Output:\n    \"\"\"\n    Pulls the contents of an S3 bucket folder to the current working directory.\n\n    Args:\n        bucket: The name of the S3 bucket where files are stored.\n        folder: The folder in the S3 bucket where files are stored.\n        credentials: A dictionary of AWS credentials (aws_access_key_id,\n            aws_secret_access_key, aws_session_token) or MinIO credentials\n            (minio_root_user, minio_root_password).\n        client_parameters: A dictionary of additional parameters to pass to the\n            boto3 client.\n\n    Returns:\n        A dictionary containing the bucket, folder, and local directory where\n            files were downloaded.\n\n    Examples:\n        Pull files from S3 using the default credentials and client parameters:\n        ```yaml\n        pull:\n            - prefect_aws.deployments.steps.pull_from_s3:\n                requires: prefect-aws\n                bucket: my-bucket\n                folder: my-project\n        ```\n\n        Pull files from S3 using credentials stored in a block:\n        ```yaml\n        pull:\n            - prefect_aws.deployments.steps.pull_from_s3:\n                requires: prefect-aws\n                bucket: my-bucket\n                folder: my-project\n                credentials: \"{{ prefect.blocks.aws-credentials.dev-credentials }}\"\n        ```\n    \"\"\"\n    s3 = get_s3_client(credentials=credentials, client_parameters=client_parameters)\n\n    local_path = Path.cwd()\n\n    paginator = s3.get_paginator(\"list_objects_v2\")\n    for result in paginator.paginate(Bucket=bucket, Prefix=folder):\n        for obj in result.get(\"Contents\", []):\n            remote_key = obj[\"Key\"]\n\n            if remote_key[-1] == \"/\":\n                # object is a folder and will be created if it contains any objects\n                continue\n\n            target = PurePosixPath(\n                local_path\n                / relative_path_to_current_platform(remote_key).relative_to(folder)\n            )\n            Path.mkdir(Path(target.parent), parents=True, exist_ok=True)\n            s3.download_file(bucket, remote_key, str(target))\n\n    return {\n        \"bucket\": bucket,\n        \"folder\": folder,\n        \"directory\": str(local_path),\n    }\n
    "},{"location":"integrations/prefect-aws/deployments/steps/#prefect_aws.deployments.steps.pull_project_from_s3","title":"pull_project_from_s3","text":"

    Deprecated. Use pull_from_s3 instead.

    Source code in prefect_aws/deployments/steps.py
    @deprecated_callable(start_date=\"Jun 2023\", help=\"Use `pull_from_s3` instead.\")\ndef pull_project_from_s3(*args, **kwargs):\n    \"\"\"Deprecated. Use `pull_from_s3` instead.\"\"\"\n    pull_from_s3(*args, **kwargs)\n
    "},{"location":"integrations/prefect-aws/deployments/steps/#prefect_aws.deployments.steps.push_project_to_s3","title":"push_project_to_s3","text":"

    Deprecated. Use push_to_s3 instead.

    Source code in prefect_aws/deployments/steps.py
    @deprecated_callable(start_date=\"Jun 2023\", help=\"Use `push_to_s3` instead.\")\ndef push_project_to_s3(*args, **kwargs):\n    \"\"\"Deprecated. Use `push_to_s3` instead.\"\"\"\n    push_to_s3(*args, **kwargs)\n
    "},{"location":"integrations/prefect-aws/deployments/steps/#prefect_aws.deployments.steps.push_to_s3","title":"push_to_s3","text":"

    Pushes the contents of the current working directory to an S3 bucket, excluding files and folders specified in the ignore_file.

    Parameters:

    Name Type Description Default bucket str

    The name of the S3 bucket where files will be uploaded.

    required folder str

    The folder in the S3 bucket where files will be uploaded.

    required credentials Optional[Dict]

    A dictionary of AWS credentials (aws_access_key_id, aws_secret_access_key, aws_session_token) or MinIO credentials (minio_root_user, minio_root_password).

    None client_parameters Optional[Dict]

    A dictionary of additional parameters to pass to the boto3 client.

    None ignore_file Optional[str]

    The name of the file containing ignore patterns.

    '.prefectignore'

    Returns:

    Type Description PushToS3Output

    A dictionary containing the bucket and folder where files were uploaded.

    Examples:

    Push files to an S3 bucket:

    push:\n    - prefect_aws.deployments.steps.push_to_s3:\n        requires: prefect-aws\n        bucket: my-bucket\n        folder: my-project\n

    Push files to an S3 bucket using credentials stored in a block:

    push:\n    - prefect_aws.deployments.steps.push_to_s3:\n        requires: prefect-aws\n        bucket: my-bucket\n        folder: my-project\n        credentials: \"{{ prefect.blocks.aws-credentials.dev-credentials }}\"\n
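
    The step can likewise be called directly from Python, for example to use an ignore file other than the default .prefectignore. A minimal sketch; the bucket, folder, and ignore file names are placeholders, and the default AWS credential chain is assumed:

    from prefect_aws.deployments.steps import push_to_s3\n\n# Upload the current working directory, skipping anything matched by .dockerignore\noutput = push_to_s3(\n    bucket=\"my-bucket\",\n    folder=\"my-project\",\n    ignore_file=\".dockerignore\",\n)\n\nprint(output[\"bucket\"], output[\"folder\"])\n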

    Source code in prefect_aws/deployments/steps.py
    def push_to_s3(\n    bucket: str,\n    folder: str,\n    credentials: Optional[Dict] = None,\n    client_parameters: Optional[Dict] = None,\n    ignore_file: Optional[str] = \".prefectignore\",\n) -> PushToS3Output:\n    \"\"\"\n    Pushes the contents of the current working directory to an S3 bucket,\n    excluding files and folders specified in the ignore_file.\n\n    Args:\n        bucket: The name of the S3 bucket where files will be uploaded.\n        folder: The folder in the S3 bucket where files will be uploaded.\n        credentials: A dictionary of AWS credentials (aws_access_key_id,\n            aws_secret_access_key, aws_session_token) or MinIO credentials\n            (minio_root_user, minio_root_password).\n        client_parameters: A dictionary of additional parameters to pass to the boto3\n            client.\n        ignore_file: The name of the file containing ignore patterns.\n\n    Returns:\n        A dictionary containing the bucket and folder where files were uploaded.\n\n    Examples:\n        Push files to an S3 bucket:\n        ```yaml\n        push:\n            - prefect_aws.deployments.steps.push_to_s3:\n                requires: prefect-aws\n                bucket: my-bucket\n                folder: my-project\n        ```\n\n        Push files to an S3 bucket using credentials stored in a block:\n        ```yaml\n        push:\n            - prefect_aws.deployments.steps.push_to_s3:\n                requires: prefect-aws\n                bucket: my-bucket\n                folder: my-project\n                credentials: \"{{ prefect.blocks.aws-credentials.dev-credentials }}\"\n        ```\n\n    \"\"\"\n    s3 = get_s3_client(credentials=credentials, client_parameters=client_parameters)\n\n    local_path = Path.cwd()\n\n    included_files = None\n    if ignore_file and Path(ignore_file).exists():\n        with open(ignore_file, \"r\") as f:\n            ignore_patterns = f.readlines()\n\n        included_files = filter_files(str(local_path), ignore_patterns)\n\n    for local_file_path in local_path.expanduser().rglob(\"*\"):\n        if (\n            included_files is not None\n            and str(local_file_path.relative_to(local_path)) not in included_files\n        ):\n            continue\n        elif not local_file_path.is_dir():\n            remote_file_path = Path(folder) / local_file_path.relative_to(local_path)\n            s3.upload_file(\n                str(local_file_path), bucket, str(remote_file_path.as_posix())\n            )\n\n    return {\n        \"bucket\": bucket,\n        \"folder\": folder,\n    }\n
    "},{"location":"integrations/prefect-azure/","title":"prefect-azure","text":"

    prefect-azure is a collection of Prefect integrations for orchestrating workflows with Azure.

    "},{"location":"integrations/prefect-azure/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-azure/#installation","title":"Installation","text":"

    Install prefect-azure with pip:

    pip install prefect-azure\n

    To use Blob Storage:

    pip install \"prefect-azure[blob_storage]\"\n

    To use Cosmos DB:

    pip install \"prefect-azure[cosmos_db]\"\n

    To use ML Datastore:

    pip install \"prefect-azure[ml_datastore]\"\n

    "},{"location":"integrations/prefect-azure/#examples","title":"Examples","text":""},{"location":"integrations/prefect-azure/#download-a-blob","title":"Download a blob","text":"
    from prefect import flow\n\nfrom prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import blob_storage_download\n\n@flow\ndef example_blob_storage_download_flow():\n    connection_string = \"connection_string\"\n    blob_storage_credentials = AzureBlobStorageCredentials(\n        connection_string=connection_string,\n    )\n    data = blob_storage_download(\n        blob=\"prefect.txt\",\n        container=\"prefect\",\n        azure_credentials=blob_storage_credentials,\n    )\n    return data\n\nexample_blob_storage_download_flow()\n

    Use with_options to customize options on any existing task or flow:

    custom_blob_storage_download_flow = example_blob_storage_download_flow.with_options(\n    name=\"My custom task name\",\n    retries=2,\n    retry_delay_seconds=10,\n)\n
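
    The same pattern works on the tasks themselves. A minimal sketch adding retries to blob_storage_download before calling it inside a flow; the connection string, blob, and container names are placeholders:

    from prefect import flow\nfrom prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import blob_storage_download\n\n# Create a copy of the task with retry behavior attached\nblob_storage_download_with_retries = blob_storage_download.with_options(\n    retries=2,\n    retry_delay_seconds=10,\n)\n\n@flow\ndef example_download_with_retries():\n    blob_storage_credentials = AzureBlobStorageCredentials(\n        connection_string=\"connection_string\",\n    )\n    return blob_storage_download_with_retries(\n        blob=\"prefect.txt\",\n        container=\"prefect\",\n        azure_credentials=blob_storage_credentials,\n    )\n\nexample_download_with_retries()\n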

    "},{"location":"integrations/prefect-azure/#run-a-command-on-an-azure-container-instance","title":"Run a command on an Azure container instance","text":"
    from prefect import flow\nfrom prefect_azure import AzureContainerInstanceCredentials\nfrom prefect_azure.container_instance import AzureContainerInstanceJob\n\n\n@flow\ndef container_instance_job_flow():\n    aci_credentials = AzureContainerInstanceCredentials.load(\"MY_BLOCK_NAME\")\n    container_instance_job = AzureContainerInstanceJob(\n        aci_credentials=aci_credentials,\n        resource_group_name=\"azure_resource_group.example.name\",\n        subscription_id=\"<MY_AZURE_SUBSCRIPTION_ID>\",\n        command=[\"echo\", \"hello world\"],\n    )\n    return container_instance_job.run()\n
    "},{"location":"integrations/prefect-azure/#use-azure-container-instance-as-infrastructure","title":"Use Azure Container Instance as infrastructure","text":"

    If we have a_flow_module.py:

    from prefect import flow, get_run_logger\n\n@flow\ndef log_hello_flow(name=\"Marvin\"):\n    logger = get_run_logger()\n    logger.info(f\"{name} said hello!\")\n\nif __name__ == \"__main__\":\n    log_hello_flow()\n

    We can run that flow using an Azure Container Instance, but first create the infrastructure block:

    from prefect_azure import AzureContainerInstanceCredentials\nfrom prefect_azure.container_instance import AzureContainerInstanceJob\n\ncontainer_instance_job = AzureContainerInstanceJob(\n    aci_credentials=AzureContainerInstanceCredentials.load(\"MY_BLOCK_NAME\"),\n    resource_group_name=\"azure_resource_group.example.name\",\n    subscription_id=\"<MY_AZURE_SUBSCRIPTION_ID>\",\n)\ncontainer_instance_job.save(\"aci-dev\")\n

    Then, create the deployment either in the UI or through the CLI:

    prefect deployment build a_flow_module.py:log_hello_flow --name aci-dev -ib container-instance-job/aci-dev\n

    Visit Prefect Deployments for more information about deployments.

    "},{"location":"integrations/prefect-azure/#azure-container-instance-worker","title":"Azure Container Instance Worker","text":"

    The Azure Container Instance worker is an excellent way to run your workflows on Azure.

    To get started, create a work pool of type Azure Container Instance:

    prefect work-pool create -t azure-container-instance my-aci-work-pool\n

    Then, run a worker that pulls jobs from the work pool:

    prefect worker start -n my-aci-worker -p my-aci-work-pool\n

    The worker automatically detects the work pool's type and starts as an Azure Container Instance worker.

    "},{"location":"integrations/prefect-azure/aci_worker/","title":"Azure Container Instances Worker Guide","text":""},{"location":"integrations/prefect-azure/aci_worker/#why-use-aci-for-flow-run-execution","title":"Why use ACI for flow run execution?","text":"

    ACI (Azure Container Instances) is a fully managed compute platform that streamlines running your Prefect flows on scalable, on-demand infrastructure on Azure.

    "},{"location":"integrations/prefect-azure/aci_worker/#prerequisites","title":"Prerequisites","text":"

    Before starting this guide, make sure you have:

    • An Azure account and user permissions for provisioning resource groups and container instances.
    • The Azure CLI installed on your local machine. You can follow Microsoft's installation guide.
    • Docker installed on your local machine.
    "},{"location":"integrations/prefect-azure/aci_worker/#step-1-create-a-resource-group","title":"Step 1. Create a resource group","text":"

    Azure resource groups serve as containers for managing groupings of Azure resources.

    Replace <resource-group-name> with the name of your choosing, and <location> with a valid Azure location name, such as eastus.

    export RG_NAME=<resource-group-name> && \\\naz group create --name $RG_NAME --location <location>\n

    Throughout the rest of the guide, we'll need to refer to the scope of the created resource group, which is a string describing where the resource group lives in the hierarchy of your Azure account. To save the scope of your resource group as an environment variable, run the following command:

    RG_SCOPE=$(az group show --name $RG_NAME --query id --output tsv)\n

    You can check that the scope is correct before moving on by running echo $RG_SCOPE in your terminal. It should be formatted as follows:

    /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>\n
    "},{"location":"integrations/prefect-azure/aci_worker/#step-2-prepare-aci-permissions","title":"Step 2. Prepare ACI permissions","text":"

    For the worker to create, monitor, and delete the other container instances in which flows will run, we'll need to create a custom role and an identity, and then bind the role to the identity with a role assignment. When we start our worker, we'll assign that identity to the container instance it runs in.

    "},{"location":"integrations/prefect-azure/aci_worker/#1-create-a-role","title":"1. Create a role","text":"

    The custom Container Instances Contributor role has all the permissions your worker will need to run flows in other container instances. Create it by running the following command:

    az role definition create --role-definition '{\n  \"Name\": \"Container Instances Contributor\",\n  \"IsCustom\": true,\n  \"Description\": \"Can create, delete, and monitor container instances.\",\n  \"Actions\": [\n    \"Microsoft.ManagedIdentity/userAssignedIdentities/assign/action\",\n    \"Microsoft.Resources/deployments/*\",\n    \"Microsoft.ContainerInstance/containerGroups/*\"\n  ],\n  \"NotActions\": [\n  ],\n  \"AssignableScopes\": [\n    '\"\\\"$RG_SCOPE\\\"\"'\n  ]\n}'\n
    "},{"location":"integrations/prefect-azure/aci_worker/#2-create-an-identity","title":"2. Create an identity","text":"

    Create a user-managed identity with the following command, replacing <identity-name> with the name you'd like to use for the identity:

    export IDENTITY_NAME=<identity-name> && \\\naz identity create -g $RG_NAME -n $IDENTITY_NAME\n

    We'll also need to save the principal ID and full object ID of the identity for the role assignment and container creation steps, respectively:

    IDENTITY_PRINCIPAL_ID=$(az identity list --query \"[?name=='$IDENTITY_NAME'].principalId\" --output tsv) && \\\nIDENTITY_ID=$(az identity list --query \"[?name=='$IDENTITY_NAME'].id\" --output tsv)\n
    "},{"location":"integrations/prefect-azure/aci_worker/#3-assign-roles-to-the-identity","title":"3. Assign roles to the identity","text":"

    Now let's assign the Container Instances Contributor role we created earlier to the new identity:

    az role assignment create \\\n    --assignee $IDENTITY_PRINCIPAL_ID \\\n    --role \"Container Instances Contributor\" \\\n    --scope $RG_SCOPE\n

    Since we'll be using ACR to host a custom Docker image containing a Prefect flow later in the guide, let's also assign the built-in AcrPull role to the identity:

    az role assignment create \\\n    --assignee $IDENTITY_PRINCIPAL_ID \\\n    --role \"AcrPull\" \\\n    --scope $RG_SCOPE\n
    "},{"location":"integrations/prefect-azure/aci_worker/#step-3-create-the-worker-container-instance","title":"Step 3. Create the worker container instance","text":"

    Before running this command, set your PREFECT_API_URL and PREFECT_API_KEY as environment variables:

    export PREFECT_API_URL=<PREFECT_API_URL_HERE> PREFECT_API_KEY=<PREFECT_API_KEY_HERE>\n

    Running the following command will create a container instance in your Azure resource group that will start a Prefect ACI worker. If there is not already a work pool in Prefect with the name you chose, a work pool will also be created.

    Replace <work-pool-name> with the name of the ACI work pool you want to create in Prefect. Here we're using the work pool name as the name of the container instance in Azure as well, but you may name it something else if you prefer.

    az container create \\\n    --name <work-pool-name> \\\n    --resource-group $RG_NAME \\\n    --assign-identity $IDENTITY_ID \\\n    --image \"prefecthq/prefect:2-python3.11\" \\\n    --secure-environment-variables PREFECT_API_URL=$PREFECT_API_URL PREFECT_API_KEY=$PREFECT_API_KEY \\\n    --command-line \"/bin/bash -c 'pip install prefect-azure && prefect worker start --pool <work-pool-name> --type azure-container-instance'\" \n

    This container instance uses default networking and security settings. For advanced configuration, refer to the az container create CLI reference.

    "},{"location":"integrations/prefect-azure/aci_worker/#step-4-create-an-acr-registry","title":"Step 4. Create an ACR registry","text":"

    In order to build and push images containing flow code to Azure, we'll need a container registry. Create one with the following command, replacing <registry-name> with the registry name of your choosing:

    export REGISTRY_NAME=<registry-name> && \\\naz acr create --resource-group $RG_NAME \\\n  --name $REGISTRY_NAME --sku Basic\n
    "},{"location":"integrations/prefect-azure/aci_worker/#step-5-update-your-aci-work-pool-configuration","title":"Step 5. Update your ACI work pool configuration","text":"

    Once your work pool is created, navigate to the Edit page of your ACI work pool. You will need to update the following fields:

    "},{"location":"integrations/prefect-azure/aci_worker/#identities","title":"Identities","text":"

    This will be your IDENTITY_ID. You can get it from your terminal by running echo $IDENTITY_ID. When adding it to your work pool, it should be formatted as a JSON array:

    [\"/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>\"]\n

    "},{"location":"integrations/prefect-azure/aci_worker/#acrmanagedidentity","title":"ACRManagedIdentity","text":"

    ACRManagedIdentity is required for your flow code containers to be pulled from ACR. It consists of the following:

    • Identity: the same IDENTITY_ID as above, as a string
      /subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>\n
    • Registry URL: your <registry-name>, followed by .azurecr.io
      <registry-name>.azurecr.io\n

    "},{"location":"integrations/prefect-azure/aci_worker/#subscription-id-and-resource-group-name","title":"Subscription ID and resource group name","text":"

    Both the subscription ID and resource group name can be found in the RG_SCOPE environment variable created earlier in the guide. View their values by running echo $RG_SCOPE:

    /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>\n

    Then click Save.

    "},{"location":"integrations/prefect-azure/aci_worker/#step-6-pick-up-a-flow-run-with-your-new-worker","title":"Step 6. Pick up a flow run with your new worker","text":"

    This guide uses ACR to store a Docker image containing your flow code. Write a flow, then deploy it using flow.deploy(), which builds your flow code into a Docker image and pushes that image to your ACR registry.

    "},{"location":"integrations/prefect-azure/aci_worker/#1-log-in-to-acr","title":"1. Log in to ACR","text":"

    Use the following commands to log in to ACR:

    TOKEN=$(az acr login --name $REGISTRY_NAME --expose-token --output tsv --query accessToken)\n
    docker login $REGISTRY_NAME.azurecr.io --username 00000000-0000-0000-0000-000000000000 --password-stdin <<< $TOKEN\n
    "},{"location":"integrations/prefect-azure/aci_worker/#2-write-and-deploy-a-simple-test-flow","title":"2. Write and deploy a simple test flow","text":"

    Create and run the following script to deploy your flow. Be sure to replace <registry-name> and <work-pool-name> with the appropriate values.

    my_flow.py

    from prefect import flow, get_run_logger\nfrom prefect.deployments import DeploymentImage\n\n@flow\ndef my_flow():\n    logger = get_run_logger()\n    logger.info(\"Hello from ACI!\")\n\nif __name__ == \"__main__\":\n    my_flow.deploy(\n        name=\"aci-deployment\",\n        image=DeploymentImage(\n            name=\"<registry-name>.azurecr.io/example:latest\",\n            platform=\"linux/amd64\",\n        ),\n        work_pool_name=\"<work-pool-name>\",\n    )\n
    "},{"location":"integrations/prefect-azure/aci_worker/#3-find-the-deployment-in-the-ui-and-click-the-quick-run-button","title":"3. Find the deployment in the UI and click the Quick Run button!","text":""},{"location":"integrations/prefect-azure/blob_storage/","title":"Blob Storage","text":""},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage","title":"prefect_azure.blob_storage","text":"

    Integrations for interacting with Azure Blob Storage

    "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.AzureBlobStorageContainer","title":"AzureBlobStorageContainer","text":"

    Bases: ObjectStorageBlock, WritableFileSystem, WritableDeploymentStorage

    Represents a container in Azure Blob Storage.

    This class provides methods for downloading and uploading files and folders to and from the Azure Blob Storage container.

    Attributes:

    Name Type Description container_name str

    The name of the Azure Blob Storage container.

    credentials AzureBlobStorageCredentials

    The credentials to use for authentication with Azure.

    base_folder Optional[str]

    A base path to a folder within the container to use for reading and writing objects.
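
    For example, a block scoped to a base folder can be created and used like this. A minimal sketch; the connection string, container name, base folder, and paths are placeholders:

    from prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import AzureBlobStorageContainer\n\ncredentials = AzureBlobStorageCredentials(\n    connection_string=\"connection_string\",\n)\n\n# All reads and writes below are rooted at the `projects` folder in the container\nblock = AzureBlobStorageContainer(\n    container_name=\"container\",\n    credentials=credentials,\n    base_folder=\"projects\",\n)\n\n# Upload a local file, then read it back as bytes\nblock.upload_from_path(from_path=\"file.txt\", to_path=\"object\")\ncontents = block.read_path(\"object\")\n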

    Source code in prefect_azure/blob_storage.py
    class AzureBlobStorageContainer(\n    ObjectStorageBlock, WritableFileSystem, WritableDeploymentStorage\n):\n    \"\"\"\n    Represents a container in Azure Blob Storage.\n\n    This class provides methods for downloading and uploading files and folders\n    to and from the Azure Blob Storage container.\n\n    Attributes:\n        container_name: The name of the Azure Blob Storage container.\n        credentials: The credentials to use for authentication with Azure.\n        base_folder: A base path to a folder within the container to use\n            for reading and writing objects.\n    \"\"\"\n\n    _block_type_name = \"Azure Blob Storage Container\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/54e3fa7e00197a4fbd1d82ed62494cb58d08c96a-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-azure/blob_storage/#prefect_azure.blob_storabe.AzureBlobStorageContainer\"  # noqa\n\n    container_name: str = Field(\n        default=..., description=\"The name of a Azure Blob Storage container.\"\n    )\n    credentials: AzureBlobStorageCredentials = Field(\n        default_factory=AzureBlobStorageCredentials,\n        description=\"The credentials to use for authentication with Azure.\",\n    )\n    base_folder: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A base path to a folder within the container to use \"\n            \"for reading and writing objects.\"\n        ),\n    )\n\n    def _get_path_relative_to_base_folder(self, path: Optional[str] = None) -> str:\n        if path is None and self.base_folder is None:\n            return \"\"\n        if path is None:\n            return self.base_folder\n        if self.base_folder is None:\n            return path\n        return (Path(self.base_folder) / Path(path)).as_posix()\n\n    @sync_compatible\n    async def download_folder_to_path(\n        self,\n        from_folder: str,\n        to_folder: Union[str, Path],\n        **download_kwargs: Dict[str, Any],\n    ) -> Coroutine[Any, Any, Path]:\n        \"\"\"Download a folder from the container to a local path.\n\n        Args:\n            from_folder: The folder path in the container to download.\n            to_folder: The local path to download the folder to.\n            **download_kwargs: Additional keyword arguments passed into\n                `BlobClient.download_blob`.\n\n        Returns:\n            The local path where the folder was downloaded.\n\n        Example:\n            Download the contents of container folder `folder` from the container\n                to the local folder `local_folder`:\n\n            ```python\n            from prefect_azure import AzureBlobStorageCredentials\n            from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n            credentials = AzureBlobStorageCredentials(\n                connection_string=\"connection_string\",\n            )\n            block = AzureBlobStorageContainer(\n                container_name=\"container\",\n                credentials=credentials,\n            )\n            block.download_folder_to_path(\n                from_folder=\"folder\",\n                to_folder=\"local_folder\"\n            )\n            ```\n        \"\"\"\n        self.logger.info(\n            \"Downloading folder from container %s to path %s\",\n            self.container_name,\n            to_folder,\n        )\n        full_container_path = self._get_path_relative_to_base_folder(from_folder)\n        async with 
self.credentials.get_container_client(\n            self.container_name\n        ) as container_client:\n            try:\n                async for blob in container_client.list_blobs(\n                    name_starts_with=full_container_path\n                ):\n                    blob_path = blob.name\n                    local_path = Path(to_folder) / Path(blob_path).relative_to(\n                        full_container_path\n                    )\n                    local_path.parent.mkdir(parents=True, exist_ok=True)\n                    async with container_client.get_blob_client(\n                        blob_path\n                    ) as blob_client:\n                        blob_obj = await blob_client.download_blob(**download_kwargs)\n\n                    with local_path.open(mode=\"wb\") as to_file:\n                        await blob_obj.readinto(to_file)\n            except ResourceNotFoundError as exc:\n                raise RuntimeError(\n                    \"An error occurred when attempting to download from container\"\n                    f\" {self.container_name}: {exc.reason}\"\n                ) from exc\n\n        return Path(to_folder)\n\n    @sync_compatible\n    async def download_object_to_file_object(\n        self,\n        from_path: str,\n        to_file_object: BinaryIO,\n        **download_kwargs: Dict[str, Any],\n    ) -> Coroutine[Any, Any, BinaryIO]:\n        \"\"\"\n        Downloads an object from the container to a file object.\n\n        Args:\n            from_path : The path of the object to download within the container.\n            to_file_object: The file object to download the object to.\n            **download_kwargs: Additional keyword arguments for the download\n                operation.\n\n        Returns:\n            The file object that the object was downloaded to.\n\n        Example:\n            Download the object `object` from the container to a file object:\n\n            ```python\n            from prefect_azure import AzureBlobStorageCredentials\n            from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n            credentials = AzureBlobStorageCredentials(\n                connection_string=\"connection_string\",\n            )\n            block = AzureBlobStorageContainer(\n                container_name=\"container\",\n                credentials=credentials,\n            )\n            with open(\"file.txt\", \"wb\") as f:\n                block.download_object_to_file_object(\n                    from_path=\"object\",\n                    to_file_object=f\n                )\n            ```\n        \"\"\"\n        self.logger.info(\n            \"Downloading object from container %s to file object\", self.container_name\n        )\n        full_container_path = self._get_path_relative_to_base_folder(from_path)\n        async with self.credentials.get_blob_client(\n            self.container_name, full_container_path\n        ) as blob_client:\n            try:\n                blob_obj = await blob_client.download_blob(**download_kwargs)\n                await blob_obj.download_to_stream(to_file_object)\n            except ResourceNotFoundError as exc:\n                raise RuntimeError(\n                    \"An error occurred when attempting to download from container\"\n                    f\" {self.container_name}: {exc.reason}\"\n                ) from exc\n\n        return to_file_object\n\n    @sync_compatible\n    async def download_object_to_path(\n        self,\n        from_path: 
str,\n        to_path: Union[str, Path],\n        **download_kwargs: Dict[str, Any],\n    ) -> Coroutine[Any, Any, Path]:\n        \"\"\"\n        Downloads an object from a container to a specified path.\n\n        Args:\n            from_path: The path of the object in the container.\n            to_path: The path where the object will be downloaded to.\n            **download_kwargs (Dict[str, Any]): Additional keyword arguments\n                for the download operation.\n\n        Returns:\n            The path where the object was downloaded to.\n\n        Example:\n            Download the object `object` from the container to the local path\n                `file.txt`:\n\n            ```python\n            from prefect_azure import AzureBlobStorageCredentials\n            from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n            credentials = AzureBlobStorageCredentials(\n                connection_string=\"connection_string\",\n            )\n            block = AzureBlobStorageContainer(\n                container_name=\"container\",\n                credentials=credentials,\n            )\n            block.download_object_to_path(\n                from_path=\"object\",\n                to_path=\"file.txt\"\n            )\n            ```\n        \"\"\"\n        self.logger.info(\n            \"Downloading object from container %s to path %s\",\n            self.container_name,\n            to_path,\n        )\n        full_container_path = self._get_path_relative_to_base_folder(from_path)\n        async with self.credentials.get_blob_client(\n            self.container_name, full_container_path\n        ) as blob_client:\n            try:\n                blob_obj = await blob_client.download_blob(**download_kwargs)\n\n                path = Path(to_path)\n\n                path.parent.mkdir(parents=True, exist_ok=True)\n\n                with path.open(mode=\"wb\") as to_file:\n                    await blob_obj.readinto(to_file)\n            except ResourceNotFoundError as exc:\n                raise RuntimeError(\n                    \"An error occurred when attempting to download from container\"\n                    f\" {self.container_name}: {exc.reason}\"\n                ) from exc\n        return Path(to_path)\n\n    @sync_compatible\n    async def upload_from_file_object(\n        self, from_file_object: BinaryIO, to_path: str, **upload_kwargs: Dict[str, Any]\n    ) -> Coroutine[Any, Any, str]:\n        \"\"\"\n        Uploads an object from a file object to the specified path in the blob\n            storage container.\n\n        Args:\n            from_file_object: The file object to upload.\n            to_path: The path in the blob storage container to upload the\n                object to.\n            **upload_kwargs: Additional keyword arguments to pass to the\n                upload_blob method.\n\n        Returns:\n            The path where the object was uploaded to.\n\n        Example:\n            Upload a file object to the container at the path `object`:\n\n            ```python\n            from prefect_azure import AzureBlobStorageCredentials\n            from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n            credentials = AzureBlobStorageCredentials(\n                connection_string=\"connection_string\",\n            )\n            block = AzureBlobStorageContainer(\n                container_name=\"container\",\n                credentials=credentials,\n            )\n            with 
open(\"file.txt\", \"rb\") as f:\n                block.upload_from_file_object(\n                    from_file_object=f,\n                    to_path=\"object\"\n                )\n            ```\n        \"\"\"\n        self.logger.info(\n            \"Uploading object to container %s with key %s\", self.container_name, to_path\n        )\n        full_container_path = self._get_path_relative_to_base_folder(to_path)\n        async with self.credentials.get_blob_client(\n            self.container_name, full_container_path\n        ) as blob_client:\n            try:\n                await blob_client.upload_blob(from_file_object, **upload_kwargs)\n            except ResourceNotFoundError as exc:\n                raise RuntimeError(\n                    \"An error occurred when attempting to upload from container\"\n                    f\" {self.container_name}: {exc.reason}\"\n                ) from exc\n\n        return to_path\n\n    @sync_compatible\n    async def upload_from_path(\n        self, from_path: Union[str, Path], to_path: str, **upload_kwargs: Dict[str, Any]\n    ) -> Coroutine[Any, Any, str]:\n        \"\"\"\n        Uploads an object from a local path to the specified destination path in the\n            blob storage container.\n\n        Args:\n            from_path: The local path of the object to upload.\n            to_path: The destination path in the blob storage container.\n            **upload_kwargs: Additional keyword arguments to pass to the\n                `upload_blob` method.\n\n        Returns:\n            The destination path in the blob storage container.\n\n        Example:\n            Upload a file from the local path `file.txt` to the container\n                at the path `object`:\n\n            ```python\n            from prefect_azure import AzureBlobStorageCredentials\n            from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n            credentials = AzureBlobStorageCredentials(\n                connection_string=\"connection_string\",\n            )\n            block = AzureBlobStorageContainer(\n                container_name=\"container\",\n                credentials=credentials,\n            )\n            block.upload_from_path(\n                from_path=\"file.txt\",\n                to_path=\"object\"\n            )\n            ```\n        \"\"\"\n        self.logger.info(\n            \"Uploading object to container %s with key %s\", self.container_name, to_path\n        )\n        full_container_path = self._get_path_relative_to_base_folder(to_path)\n        async with self.credentials.get_blob_client(\n            self.container_name, full_container_path\n        ) as blob_client:\n            try:\n                with open(from_path, \"rb\") as f:\n                    await blob_client.upload_blob(f, **upload_kwargs)\n            except ResourceNotFoundError as exc:\n                raise RuntimeError(\n                    \"An error occurred when attempting to upload to container\"\n                    f\" {self.container_name}: {exc.reason}\"\n                ) from exc\n\n        return to_path\n\n    @sync_compatible\n    async def upload_from_folder(\n        self,\n        from_folder: Union[str, Path],\n        to_folder: str,\n        **upload_kwargs: Dict[str, Any],\n    ) -> Coroutine[Any, Any, str]:\n        \"\"\"\n        Uploads files from a local folder to a specified folder in the Azure\n            Blob Storage container.\n\n        Args:\n            from_folder: The path to the local 
folder containing the files to upload.\n            to_folder: The destination folder in the Azure Blob Storage container.\n            **upload_kwargs: Additional keyword arguments to pass to the\n                `upload_blob` method.\n\n        Returns:\n            The full path of the destination folder in the container.\n\n        Example:\n            Upload the contents of the local folder `local_folder` to the container\n                folder `folder`:\n\n            ```python\n            from prefect_azure import AzureBlobStorageCredentials\n            from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n            credentials = AzureBlobStorageCredentials(\n                connection_string=\"connection_string\",\n            )\n            block = AzureBlobStorageContainer(\n                container_name=\"container\",\n                credentials=credentials,\n            )\n            block.upload_from_folder(\n                from_folder=\"local_folder\",\n                to_folder=\"folder\"\n            )\n            ```\n        \"\"\"\n        self.logger.info(\n            \"Uploading folder to container %s with key %s\",\n            self.container_name,\n            to_folder,\n        )\n        full_container_path = self._get_path_relative_to_base_folder(to_folder)\n        async with self.credentials.get_container_client(\n            self.container_name\n        ) as container_client:\n            if not Path(from_folder).is_dir():\n                raise ValueError(f\"{from_folder} is not a directory\")\n            for path in Path(from_folder).rglob(\"*\"):\n                if path.is_file():\n                    blob_path = Path(full_container_path) / path.relative_to(\n                        from_folder\n                    )\n                    async with container_client.get_blob_client(\n                        blob_path.as_posix()\n                    ) as blob_client:\n                        try:\n                            await blob_client.upload_blob(\n                                path.read_bytes(), **upload_kwargs\n                            )\n                        except ResourceNotFoundError as exc:\n                            raise RuntimeError(\n                                \"An error occurred when attempting to upload to \"\n                                f\"container {self.container_name}: {exc.reason}\"\n                            ) from exc\n        return full_container_path\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: str = None, local_path: str = None\n    ) -> None:\n        \"\"\"\n        Downloads the contents of a direry from the blob storage to a local path.\n\n        Used to enable flow code storage for deployments.\n\n        Args:\n            from_path: The path of the directory in the blob storage.\n            local_path: The local path where the directory will be downloaded.\n        \"\"\"\n        await self.download_folder_to_path(from_path, local_path)\n\n    @sync_compatible\n    async def put_directory(\n        self, local_path: str = None, to_path: str = None, ignore_file: str = None\n    ) -> None:\n        \"\"\"\n        Uploads a directory to the blob storage.\n\n        Used to enable flow code storage for deployments.\n\n        Args:\n            local_path: The local path of the directory to upload. Defaults to\n                current directory.\n            to_path: The destination path in the blob storage. 
Defaults to\n                root directory.\n            ignore_file: The path to a file containing patterns to ignore\n                during upload.\n        \"\"\"\n        to_path = \"\" if to_path is None else to_path\n\n        if local_path is None:\n            local_path = \".\"\n\n        included_files = None\n        if ignore_file:\n            with open(ignore_file, \"r\") as f:\n                ignore_patterns = f.readlines()\n\n            included_files = filter_files(local_path, ignore_patterns)\n\n        for local_file_path in Path(local_path).expanduser().rglob(\"*\"):\n            if (\n                included_files is not None\n                and str(local_file_path.relative_to(local_path)) not in included_files\n            ):\n                continue\n            elif not local_file_path.is_dir():\n                remote_file_path = Path(to_path) / local_file_path.relative_to(\n                    local_path\n                )\n                with open(local_file_path, \"rb\") as local_file:\n                    local_file_content = local_file.read()\n\n                await self.write_path(\n                    remote_file_path.as_posix(), content=local_file_content\n                )\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        \"\"\"\n        Reads the contents of a file at the specified path and returns it as bytes.\n\n        Used to enable results storage.\n\n        Args:\n            path: The path of the file to read.\n\n        Returns:\n            The contents of the file as bytes.\n        \"\"\"\n        file_obj = BytesIO()\n        await self.download_object_to_file_object(path, file_obj)\n        return file_obj.getvalue()\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> None:\n        \"\"\"\n        Writes the content to the specified path in the blob storage.\n\n        Used to enable results storage.\n\n        Args:\n            path: The path where the content will be written.\n            content: The content to be written.\n        \"\"\"\n        await self.upload_from_file_object(BytesIO(content), path)\n
    "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.AzureBlobStorageContainer.download_folder_to_path","title":"download_folder_to_path async","text":"

    Download a folder from the container to a local path.

    Parameters:

    Name Type Description Default from_folder str

    The folder path in the container to download.

    required to_folder Union[str, Path]

    The local path to download the folder to.

    required **download_kwargs Dict[str, Any]

    Additional keyword arguments passed into BlobClient.download_blob.

    {}

    Returns:

    Type Description Coroutine[Any, Any, Path]

    The local path where the folder was downloaded.

    Example

    Download the contents of container folder folder from the container to the local folder local_folder:

    from prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import AzureBlobStorageContainer\n\ncredentials = AzureBlobStorageCredentials(\n    connection_string=\"connection_string\",\n)\nblock = AzureBlobStorageContainer(\n    container_name=\"container\",\n    credentials=credentials,\n)\nblock.download_folder_to_path(\n    from_folder=\"folder\",\n    to_folder=\"local_folder\"\n)\n
    Source code in prefect_azure/blob_storage.py
    @sync_compatible\nasync def download_folder_to_path(\n    self,\n    from_folder: str,\n    to_folder: Union[str, Path],\n    **download_kwargs: Dict[str, Any],\n) -> Coroutine[Any, Any, Path]:\n    \"\"\"Download a folder from the container to a local path.\n\n    Args:\n        from_folder: The folder path in the container to download.\n        to_folder: The local path to download the folder to.\n        **download_kwargs: Additional keyword arguments passed into\n            `BlobClient.download_blob`.\n\n    Returns:\n        The local path where the folder was downloaded.\n\n    Example:\n        Download the contents of container folder `folder` from the container\n            to the local folder `local_folder`:\n\n        ```python\n        from prefect_azure import AzureBlobStorageCredentials\n        from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n        credentials = AzureBlobStorageCredentials(\n            connection_string=\"connection_string\",\n        )\n        block = AzureBlobStorageContainer(\n            container_name=\"container\",\n            credentials=credentials,\n        )\n        block.download_folder_to_path(\n            from_folder=\"folder\",\n            to_folder=\"local_folder\"\n        )\n        ```\n    \"\"\"\n    self.logger.info(\n        \"Downloading folder from container %s to path %s\",\n        self.container_name,\n        to_folder,\n    )\n    full_container_path = self._get_path_relative_to_base_folder(from_folder)\n    async with self.credentials.get_container_client(\n        self.container_name\n    ) as container_client:\n        try:\n            async for blob in container_client.list_blobs(\n                name_starts_with=full_container_path\n            ):\n                blob_path = blob.name\n                local_path = Path(to_folder) / Path(blob_path).relative_to(\n                    full_container_path\n                )\n                local_path.parent.mkdir(parents=True, exist_ok=True)\n                async with container_client.get_blob_client(\n                    blob_path\n                ) as blob_client:\n                    blob_obj = await blob_client.download_blob(**download_kwargs)\n\n                with local_path.open(mode=\"wb\") as to_file:\n                    await blob_obj.readinto(to_file)\n        except ResourceNotFoundError as exc:\n            raise RuntimeError(\n                \"An error occurred when attempting to download from container\"\n                f\" {self.container_name}: {exc.reason}\"\n            ) from exc\n\n    return Path(to_folder)\n
    "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.AzureBlobStorageContainer.download_object_to_file_object","title":"download_object_to_file_object async","text":"

    Downloads an object from the container to a file object.

    Parameters:

    Name Type Description Default from_path str

    The path of the object to download within the container.

    required to_file_object BinaryIO

    The file object to download the object to.

    required **download_kwargs Dict[str, Any]

    Additional keyword arguments for the download operation.

    {}

    Returns:

    Type Description Coroutine[Any, Any, BinaryIO]

    The file object that the object was downloaded to.

    Example

    Download the object object from the container to a file object:

    from prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import AzureBlobStorageContainer\n\ncredentials = AzureBlobStorageCredentials(\n    connection_string=\"connection_string\",\n)\nblock = AzureBlobStorageContainer(\n    container_name=\"container\",\n    credentials=credentials,\n)\nwith open(\"file.txt\", \"wb\") as f:\n    block.download_object_to_file_object(\n        from_path=\"object\",\n        to_file_object=f\n    )\n
    Source code in prefect_azure/blob_storage.py
    @sync_compatible\nasync def download_object_to_file_object(\n    self,\n    from_path: str,\n    to_file_object: BinaryIO,\n    **download_kwargs: Dict[str, Any],\n) -> Coroutine[Any, Any, BinaryIO]:\n    \"\"\"\n    Downloads an object from the container to a file object.\n\n    Args:\n        from_path : The path of the object to download within the container.\n        to_file_object: The file object to download the object to.\n        **download_kwargs: Additional keyword arguments for the download\n            operation.\n\n    Returns:\n        The file object that the object was downloaded to.\n\n    Example:\n        Download the object `object` from the container to a file object:\n\n        ```python\n        from prefect_azure import AzureBlobStorageCredentials\n        from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n        credentials = AzureBlobStorageCredentials(\n            connection_string=\"connection_string\",\n        )\n        block = AzureBlobStorageContainer(\n            container_name=\"container\",\n            credentials=credentials,\n        )\n        with open(\"file.txt\", \"wb\") as f:\n            block.download_object_to_file_object(\n                from_path=\"object\",\n                to_file_object=f\n            )\n        ```\n    \"\"\"\n    self.logger.info(\n        \"Downloading object from container %s to file object\", self.container_name\n    )\n    full_container_path = self._get_path_relative_to_base_folder(from_path)\n    async with self.credentials.get_blob_client(\n        self.container_name, full_container_path\n    ) as blob_client:\n        try:\n            blob_obj = await blob_client.download_blob(**download_kwargs)\n            await blob_obj.download_to_stream(to_file_object)\n        except ResourceNotFoundError as exc:\n            raise RuntimeError(\n                \"An error occurred when attempting to download from container\"\n                f\" {self.container_name}: {exc.reason}\"\n            ) from exc\n\n    return to_file_object\n
    "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.AzureBlobStorageContainer.download_object_to_path","title":"download_object_to_path async","text":"

    Downloads an object from a container to a specified path.

    Parameters:

    Name Type Description Default from_path str

    The path of the object in the container.

    required to_path Union[str, Path]

    The path where the object will be downloaded to.

    required **download_kwargs Dict[str, Any]

    Additional keyword arguments for the download operation.

    {}

    Returns:

    Type Description Coroutine[Any, Any, Path]

    The path where the object was downloaded to.

    Example

    Download the object object from the container to the local path file.txt:

    from prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import AzureBlobStorageContainer\n\ncredentials = AzureBlobStorageCredentials(\n    connection_string=\"connection_string\",\n)\nblock = AzureBlobStorageContainer(\n    container_name=\"container\",\n    credentials=credentials,\n)\nblock.download_object_to_path(\n    from_path=\"object\",\n    to_path=\"file.txt\"\n)\n
    Source code in prefect_azure/blob_storage.py
    @sync_compatible\nasync def download_object_to_path(\n    self,\n    from_path: str,\n    to_path: Union[str, Path],\n    **download_kwargs: Dict[str, Any],\n) -> Coroutine[Any, Any, Path]:\n    \"\"\"\n    Downloads an object from a container to a specified path.\n\n    Args:\n        from_path: The path of the object in the container.\n        to_path: The path where the object will be downloaded to.\n        **download_kwargs (Dict[str, Any]): Additional keyword arguments\n            for the download operation.\n\n    Returns:\n        The path where the object was downloaded to.\n\n    Example:\n        Download the object `object` from the container to the local path\n            `file.txt`:\n\n        ```python\n        from prefect_azure import AzureBlobStorageCredentials\n        from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n        credentials = AzureBlobStorageCredentials(\n            connection_string=\"connection_string\",\n        )\n        block = AzureBlobStorageContainer(\n            container_name=\"container\",\n            credentials=credentials,\n        )\n        block.download_object_to_path(\n            from_path=\"object\",\n            to_path=\"file.txt\"\n        )\n        ```\n    \"\"\"\n    self.logger.info(\n        \"Downloading object from container %s to path %s\",\n        self.container_name,\n        to_path,\n    )\n    full_container_path = self._get_path_relative_to_base_folder(from_path)\n    async with self.credentials.get_blob_client(\n        self.container_name, full_container_path\n    ) as blob_client:\n        try:\n            blob_obj = await blob_client.download_blob(**download_kwargs)\n\n            path = Path(to_path)\n\n            path.parent.mkdir(parents=True, exist_ok=True)\n\n            with path.open(mode=\"wb\") as to_file:\n                await blob_obj.readinto(to_file)\n        except ResourceNotFoundError as exc:\n            raise RuntimeError(\n                \"An error occurred when attempting to download from container\"\n                f\" {self.container_name}: {exc.reason}\"\n            ) from exc\n    return Path(to_path)\n
    "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.AzureBlobStorageContainer.get_directory","title":"get_directory async","text":"

    Downloads the contents of a directory from the blob storage to a local path.

    Used to enable flow code storage for deployments.

    Parameters:

    Name Type Description Default from_path str

    The path of the directory in the blob storage.

    None local_path str

    The local path where the directory will be downloaded.

    None Source code in prefect_azure/blob_storage.py
    @sync_compatible\nasync def get_directory(\n    self, from_path: str = None, local_path: str = None\n) -> None:\n    \"\"\"\n    Downloads the contents of a directory from the blob storage to a local path.\n\n    Used to enable flow code storage for deployments.\n\n    Args:\n        from_path: The path of the directory in the blob storage.\n        local_path: The local path where the directory will be downloaded.\n    \"\"\"\n    await self.download_folder_to_path(from_path, local_path)\n
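
    Example

    Download the contents of the container folder flows to the local folder local_flows (a minimal sketch; the connection string, container, and folder names are placeholders):

    from prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import AzureBlobStorageContainer\n\ncredentials = AzureBlobStorageCredentials(\n    connection_string=\"connection_string\",\n)\nblock = AzureBlobStorageContainer(\n    container_name=\"container\",\n    credentials=credentials,\n)\n# get_directory delegates to download_folder_to_path under the hood\nblock.get_directory(\n    from_path=\"flows\",\n    local_path=\"local_flows\"\n)\n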
    "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.AzureBlobStorageContainer.put_directory","title":"put_directory async","text":"

    Uploads a directory to the blob storage.

    Used to enable flow code storage for deployments.

    Parameters:

    Name Type Description Default local_path str

    The local path of the directory to upload. Defaults to current directory.

    None to_path str

    The destination path in the blob storage. Defaults to root directory.

    None ignore_file str

    The path to a file containing patterns to ignore during upload.

    None Source code in prefect_azure/blob_storage.py
    @sync_compatible\nasync def put_directory(\n    self, local_path: str = None, to_path: str = None, ignore_file: str = None\n) -> None:\n    \"\"\"\n    Uploads a directory to the blob storage.\n\n    Used to enable flow code storage for deployments.\n\n    Args:\n        local_path: The local path of the directory to upload. Defaults to\n            current directory.\n        to_path: The destination path in the blob storage. Defaults to\n            root directory.\n        ignore_file: The path to a file containing patterns to ignore\n            during upload.\n    \"\"\"\n    to_path = \"\" if to_path is None else to_path\n\n    if local_path is None:\n        local_path = \".\"\n\n    included_files = None\n    if ignore_file:\n        with open(ignore_file, \"r\") as f:\n            ignore_patterns = f.readlines()\n\n        included_files = filter_files(local_path, ignore_patterns)\n\n    for local_file_path in Path(local_path).expanduser().rglob(\"*\"):\n        if (\n            included_files is not None\n            and str(local_file_path.relative_to(local_path)) not in included_files\n        ):\n            continue\n        elif not local_file_path.is_dir():\n            remote_file_path = Path(to_path) / local_file_path.relative_to(\n                local_path\n            )\n            with open(local_file_path, \"rb\") as local_file:\n                local_file_content = local_file.read()\n\n            await self.write_path(\n                remote_file_path.as_posix(), content=local_file_content\n            )\n
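
    Example

    Upload the local folder local_flows to the container folder flows (a minimal sketch; the connection string, container, and folder names are placeholders):

    from prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import AzureBlobStorageContainer\n\ncredentials = AzureBlobStorageCredentials(\n    connection_string=\"connection_string\",\n)\nblock = AzureBlobStorageContainer(\n    container_name=\"container\",\n    credentials=credentials,\n)\n# local_path defaults to the current directory when omitted\nblock.put_directory(\n    local_path=\"local_flows\",\n    to_path=\"flows\"\n)\n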
    "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.AzureBlobStorageContainer.read_path","title":"read_path async","text":"

    Reads the contents of a file at the specified path and returns it as bytes.

    Used to enable results storage.

    Parameters:

    Name Type Description Default path str

    The path of the file to read.

    required

    Returns:

    Type Description bytes

    The contents of the file as bytes.

    Source code in prefect_azure/blob_storage.py
    @sync_compatible\nasync def read_path(self, path: str) -> bytes:\n    \"\"\"\n    Reads the contents of a file at the specified path and returns it as bytes.\n\n    Used to enable results storage.\n\n    Args:\n        path: The path of the file to read.\n\n    Returns:\n        The contents of the file as bytes.\n    \"\"\"\n    file_obj = BytesIO()\n    await self.download_object_to_file_object(path, file_obj)\n    return file_obj.getvalue()\n
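
    Example

    Read the object results/output.txt from the container as bytes (a minimal sketch; the connection string, container, and path are placeholders):

    from prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import AzureBlobStorageContainer\n\ncredentials = AzureBlobStorageCredentials(\n    connection_string=\"connection_string\",\n)\nblock = AzureBlobStorageContainer(\n    container_name=\"container\",\n    credentials=credentials,\n)\ncontent = block.read_path(\"results/output.txt\")\n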
    "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.AzureBlobStorageContainer.upload_from_file_object","title":"upload_from_file_object async","text":"

    Uploads an object from a file object to the specified path in the blob storage container.

    Parameters:

    Name Type Description Default from_file_object BinaryIO

    The file object to upload.

    required to_path str

    The path in the blob storage container to upload the object to.

    required **upload_kwargs Dict[str, Any]

    Additional keyword arguments to pass to the upload_blob method.

    {}

    Returns:

    Type Description Coroutine[Any, Any, str]

    The path where the object was uploaded to.

    Example

    Upload a file object to the container at the path object:

    from prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import AzureBlobStorageContainer\n\ncredentials = AzureBlobStorageCredentials(\n    connection_string=\"connection_string\",\n)\nblock = AzureBlobStorageContainer(\n    container_name=\"container\",\n    credentials=credentials,\n)\nwith open(\"file.txt\", \"rb\") as f:\n    block.upload_from_file_object(\n        from_file_object=f,\n        to_path=\"object\"\n    )\n
    Source code in prefect_azure/blob_storage.py
    @sync_compatible\nasync def upload_from_file_object(\n    self, from_file_object: BinaryIO, to_path: str, **upload_kwargs: Dict[str, Any]\n) -> Coroutine[Any, Any, str]:\n    \"\"\"\n    Uploads an object from a file object to the specified path in the blob\n        storage container.\n\n    Args:\n        from_file_object: The file object to upload.\n        to_path: The path in the blob storage container to upload the\n            object to.\n        **upload_kwargs: Additional keyword arguments to pass to the\n            upload_blob method.\n\n    Returns:\n        The path where the object was uploaded to.\n\n    Example:\n        Upload a file object to the container at the path `object`:\n\n        ```python\n        from prefect_azure import AzureBlobStorageCredentials\n        from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n        credentials = AzureBlobStorageCredentials(\n            connection_string=\"connection_string\",\n        )\n        block = AzureBlobStorageContainer(\n            container_name=\"container\",\n            credentials=credentials,\n        )\n        with open(\"file.txt\", \"rb\") as f:\n            block.upload_from_file_object(\n                from_file_object=f,\n                to_path=\"object\"\n            )\n        ```\n    \"\"\"\n    self.logger.info(\n        \"Uploading object to container %s with key %s\", self.container_name, to_path\n    )\n    full_container_path = self._get_path_relative_to_base_folder(to_path)\n    async with self.credentials.get_blob_client(\n        self.container_name, full_container_path\n    ) as blob_client:\n        try:\n            await blob_client.upload_blob(from_file_object, **upload_kwargs)\n        except ResourceNotFoundError as exc:\n            raise RuntimeError(\n                \"An error occurred when attempting to upload from container\"\n                f\" {self.container_name}: {exc.reason}\"\n            ) from exc\n\n    return to_path\n
    "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.AzureBlobStorageContainer.upload_from_folder","title":"upload_from_folder async","text":"

    Uploads files from a local folder to a specified folder in the Azure Blob Storage container.

    Parameters:

    Name Type Description Default from_folder Union[str, Path]

    The path to the local folder containing the files to upload.

    required to_folder str

    The destination folder in the Azure Blob Storage container.

    required **upload_kwargs Dict[str, Any]

    Additional keyword arguments to pass to the upload_blob method.

    {}

    Returns:

    Type Description Coroutine[Any, Any, str]

    The full path of the destination folder in the container.

    Example

    Upload the contents of the local folder local_folder to the container folder folder:

    from prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import AzureBlobStorageContainer\n\ncredentials = AzureBlobStorageCredentials(\n    connection_string=\"connection_string\",\n)\nblock = AzureBlobStorageContainer(\n    container_name=\"container\",\n    credentials=credentials,\n)\nblock.upload_from_folder(\n    from_folder=\"local_folder\",\n    to_folder=\"folder\"\n)\n
    Source code in prefect_azure/blob_storage.py
    @sync_compatible\nasync def upload_from_folder(\n    self,\n    from_folder: Union[str, Path],\n    to_folder: str,\n    **upload_kwargs: Dict[str, Any],\n) -> Coroutine[Any, Any, str]:\n    \"\"\"\n    Uploads files from a local folder to a specified folder in the Azure\n        Blob Storage container.\n\n    Args:\n        from_folder: The path to the local folder containing the files to upload.\n        to_folder: The destination folder in the Azure Blob Storage container.\n        **upload_kwargs: Additional keyword arguments to pass to the\n            `upload_blob` method.\n\n    Returns:\n        The full path of the destination folder in the container.\n\n    Example:\n        Upload the contents of the local folder `local_folder` to the container\n            folder `folder`:\n\n        ```python\n        from prefect_azure import AzureBlobStorageCredentials\n        from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n        credentials = AzureBlobStorageCredentials(\n            connection_string=\"connection_string\",\n        )\n        block = AzureBlobStorageContainer(\n            container_name=\"container\",\n            credentials=credentials,\n        )\n        block.upload_from_folder(\n            from_folder=\"local_folder\",\n            to_folder=\"folder\"\n        )\n        ```\n    \"\"\"\n    self.logger.info(\n        \"Uploading folder to container %s with key %s\",\n        self.container_name,\n        to_folder,\n    )\n    full_container_path = self._get_path_relative_to_base_folder(to_folder)\n    async with self.credentials.get_container_client(\n        self.container_name\n    ) as container_client:\n        if not Path(from_folder).is_dir():\n            raise ValueError(f\"{from_folder} is not a directory\")\n        for path in Path(from_folder).rglob(\"*\"):\n            if path.is_file():\n                blob_path = Path(full_container_path) / path.relative_to(\n                    from_folder\n                )\n                async with container_client.get_blob_client(\n                    blob_path.as_posix()\n                ) as blob_client:\n                    try:\n                        await blob_client.upload_blob(\n                            path.read_bytes(), **upload_kwargs\n                        )\n                    except ResourceNotFoundError as exc:\n                        raise RuntimeError(\n                            \"An error occurred when attempting to upload to \"\n                            f\"container {self.container_name}: {exc.reason}\"\n                        ) from exc\n    return full_container_path\n
    "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.AzureBlobStorageContainer.upload_from_path","title":"upload_from_path async","text":"

    Uploads an object from a local path to the specified destination path in the blob storage container.

    Parameters:

    Name Type Description Default from_path Union[str, Path]

    The local path of the object to upload.

    required to_path str

    The destination path in the blob storage container.

    required **upload_kwargs Dict[str, Any]

    Additional keyword arguments to pass to the upload_blob method.

    {}

    Returns:

    Type Description Coroutine[Any, Any, str]

    The destination path in the blob storage container.

    Example

    Upload a file from the local path file.txt to the container at the path object:

    from prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import AzureBlobStorageContainer\n\ncredentials = AzureBlobStorageCredentials(\n    connection_string=\"connection_string\",\n)\nblock = AzureBlobStorageContainer(\n    container_name=\"container\",\n    credentials=credentials,\n)\nblock.upload_from_path(\n    from_path=\"file.txt\",\n    to_path=\"object\"\n)\n
    Source code in prefect_azure/blob_storage.py
    @sync_compatible\nasync def upload_from_path(\n    self, from_path: Union[str, Path], to_path: str, **upload_kwargs: Dict[str, Any]\n) -> Coroutine[Any, Any, str]:\n    \"\"\"\n    Uploads an object from a local path to the specified destination path in the\n        blob storage container.\n\n    Args:\n        from_path: The local path of the object to upload.\n        to_path: The destination path in the blob storage container.\n        **upload_kwargs: Additional keyword arguments to pass to the\n            `upload_blob` method.\n\n    Returns:\n        The destination path in the blob storage container.\n\n    Example:\n        Upload a file from the local path `file.txt` to the container\n            at the path `object`:\n\n        ```python\n        from prefect_azure import AzureBlobStorageCredentials\n        from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n        credentials = AzureBlobStorageCredentials(\n            connection_string=\"connection_string\",\n        )\n        block = AzureBlobStorageContainer(\n            container_name=\"container\",\n            credentials=credentials,\n        )\n        block.upload_from_path(\n            from_path=\"file.txt\",\n            to_path=\"object\"\n        )\n        ```\n    \"\"\"\n    self.logger.info(\n        \"Uploading object to container %s with key %s\", self.container_name, to_path\n    )\n    full_container_path = self._get_path_relative_to_base_folder(to_path)\n    async with self.credentials.get_blob_client(\n        self.container_name, full_container_path\n    ) as blob_client:\n        try:\n            with open(from_path, \"rb\") as f:\n                await blob_client.upload_blob(f, **upload_kwargs)\n        except ResourceNotFoundError as exc:\n            raise RuntimeError(\n                \"An error occurred when attempting to upload to container\"\n                f\" {self.container_name}: {exc.reason}\"\n            ) from exc\n\n    return to_path\n
    "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.AzureBlobStorageContainer.write_path","title":"write_path async","text":"

    Writes the content to the specified path in the blob storage.

    Used to enable results storage.

    Parameters:

    Name Type Description Default path str

    The path where the content will be written.

    required content bytes

    The content to be written.

    required Source code in prefect_azure/blob_storage.py
    @sync_compatible\nasync def write_path(self, path: str, content: bytes) -> None:\n    \"\"\"\n    Writes the content to the specified path in the blob storage.\n\n    Used to enable results storage.\n\n    Args:\n        path: The path where the content will be written.\n        content: The content to be written.\n    \"\"\"\n    await self.upload_from_file_object(BytesIO(content), path)\n
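
    Example

    Write bytes to the object results/output.txt in the container (a minimal sketch; the connection string, container, and path are placeholders):

    from prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import AzureBlobStorageContainer\n\ncredentials = AzureBlobStorageCredentials(\n    connection_string=\"connection_string\",\n)\nblock = AzureBlobStorageContainer(\n    container_name=\"container\",\n    credentials=credentials,\n)\nblock.write_path(\"results/output.txt\", content=b\"hello, world\")\n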
    "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.blob_storage_download","title":"blob_storage_download async","text":"

    Downloads a blob with a given key from a given Blob Storage container.

    Parameters:

    blob: Name of the blob within this container to retrieve.

    container: Name of the Blob Storage container to retrieve from.

    blob_storage_credentials: Credentials to use for authentication with Azure.

    Returns:

    A bytes representation of the downloaded blob.

    Example

    Download a file from a Blob Storage container:

    from prefect import flow\n\nfrom prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import blob_storage_download\n\n@flow\ndef example_blob_storage_download_flow():\n    connection_string = \"connection_string\"\n    blob_storage_credentials = AzureBlobStorageCredentials(\n        connection_string=connection_string,\n    )\n    data = blob_storage_download(\n        container=\"prefect\",\n        blob=\"prefect.txt\",\n        blob_storage_credentials=blob_storage_credentials,\n    )\n    return data\n\nexample_blob_storage_download_flow()\n

    Source code in prefect_azure/blob_storage.py
    @task\nasync def blob_storage_download(\n    container: str,\n    blob: str,\n    blob_storage_credentials: \"AzureBlobStorageCredentials\",\n) -> bytes:\n    \"\"\"\n    Downloads a blob with a given key from a given Blob Storage container.\n    Args:\n        blob: Name of the blob within this container to retrieve.\n        container: Name of the Blob Storage container to retrieve from.\n        blob_storage_credentials: Credentials to use for authentication with Azure.\n    Returns:\n        A `bytes` representation of the downloaded blob.\n    Example:\n        Download a file from a Blob Storage container\n        ```python\n        from prefect import flow\n\n        from prefect_azure import AzureBlobStorageCredentials\n        from prefect_azure.blob_storage import blob_storage_download\n\n        @flow\n        def example_blob_storage_download_flow():\n            connection_string = \"connection_string\"\n            blob_storage_credentials = AzureBlobStorageCredentials(\n                connection_string=connection_string,\n            )\n            data = blob_storage_download(\n                container=\"prefect\",\n                blob=\"prefect.txt\",\n                blob_storage_credentials=blob_storage_credentials,\n            )\n            return data\n\n        example_blob_storage_download_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Downloading blob from container %s with key %s\", container, blob)\n\n    async with blob_storage_credentials.get_blob_client(container, blob) as blob_client:\n        blob_obj = await blob_client.download_blob()\n        output = await blob_obj.content_as_bytes()\n\n    return output\n
    "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.blob_storage_list","title":"blob_storage_list async","text":"

    List objects from a given Blob Storage container.

    Parameters:

    container: Name of the Blob Storage container to retrieve from.

    blob_storage_credentials: Credentials to use for authentication with Azure.

    name_starts_with: Filters the results to return only blobs whose names begin with the specified prefix.

    include: Specifies one or more additional datasets to include in the response. Options include: 'snapshots', 'metadata', 'uncommittedblobs', 'copy', 'deleted', 'deletedwithversions', 'tags', 'versions', 'immutabilitypolicy', 'legalhold'.

    **kwargs: Additional kwargs passed to ContainerClient.list_blobs().

    Returns:

    A list of dicts containing metadata about the blob.

    Example

    from prefect import flow\n\nfrom prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import blob_storage_list\n\n@flow\ndef example_blob_storage_list_flow():\n    connection_string = \"connection_string\"\n    blob_storage_credentials = AzureBlobStorageCredentials(\n        connection_string=\"connection_string\",\n    )\n    data = blob_storage_list(\n        container=\"container\",\n        blob_storage_credentials=blob_storage_credentials,\n    )\n    return data\n\nexample_blob_storage_list_flow()\n

    Source code in prefect_azure/blob_storage.py
    @task\nasync def blob_storage_list(\n    container: str,\n    blob_storage_credentials: \"AzureBlobStorageCredentials\",\n    name_starts_with: str = None,\n    include: Union[str, List[str]] = None,\n    **kwargs,\n) -> List[\"BlobProperties\"]:\n    \"\"\"\n    List objects from a given Blob Storage container.\n    Args:\n        container: Name of the Blob Storage container to retrieve from.\n        blob_storage_credentials: Credentials to use for authentication with Azure.\n        name_starts_with: Filters the results to return only blobs whose names\n            begin with the specified prefix.\n        include: Specifies one or more additional datasets to include in the response.\n            Options include: 'snapshots', 'metadata', 'uncommittedblobs', 'copy',\n            'deleted', 'deletedwithversions', 'tags', 'versions', 'immutabilitypolicy',\n            'legalhold'.\n        **kwargs: Additional kwargs passed to `ContainerClient.list_blobs()`\n    Returns:\n        A `list` of `dict`s containing metadata about the blob.\n    Example:\n        ```python\n        from prefect import flow\n\n        from prefect_azure import AzureBlobStorageCredentials\n        from prefect_azure.blob_storage import blob_storage_list\n\n        @flow\n        def example_blob_storage_list_flow():\n            connection_string = \"connection_string\"\n            blob_storage_credentials = AzureBlobStorageCredentials(\n                connection_string=\"connection_string\",\n            )\n            data = blob_storage_list(\n                container=\"container\",\n                blob_storage_credentials=blob_storage_credentials,\n            )\n            return data\n\n        example_blob_storage_list_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Listing blobs from container %s\", container)\n\n    async with blob_storage_credentials.get_container_client(\n        container\n    ) as container_client:\n        blobs = [\n            blob\n            async for blob in container_client.list_blobs(\n                name_starts_with=name_starts_with, include=include, **kwargs\n            )\n        ]\n\n    return blobs\n
    "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.blob_storage_upload","title":"blob_storage_upload async","text":"

    Uploads data to a Blob Storage container.

    Parameters:

    data: Bytes representation of data to upload to Blob Storage.

    container: Name of the Blob Storage container to upload to.

    blob_storage_credentials: Credentials to use for authentication with Azure.

    blob: Name of the blob to create within this container; a random UUID is used if not provided.

    overwrite: If True, an existing blob with the same name will be overwritten. Defaults to False, and an error will be thrown if the blob already exists.

    Returns:

    The blob name of the uploaded object.

    Example

    Read and upload a file to a Blob Storage container:

    from prefect import flow\n\nfrom prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import blob_storage_upload\n\n@flow\ndef example_blob_storage_upload_flow():\n    connection_string = \"connection_string\"\n    blob_storage_credentials = AzureBlobStorageCredentials(\n        connection_string=connection_string,\n    )\n    with open(\"data.csv\", \"rb\") as f:\n        blob = blob_storage_upload(\n            data=f.read(),\n            container=\"container\",\n            blob=\"data.csv\",\n            blob_storage_credentials=blob_storage_credentials,\n            overwrite=False,\n        )\n    return blob\n\nexample_blob_storage_upload_flow()\n

    Source code in prefect_azure/blob_storage.py
    @task\nasync def blob_storage_upload(\n    data: bytes,\n    container: str,\n    blob_storage_credentials: \"AzureBlobStorageCredentials\",\n    blob: str = None,\n    overwrite: bool = False,\n) -> str:\n    \"\"\"\n    Uploads data to an Blob Storage container.\n    Args:\n        data: Bytes representation of data to upload to Blob Storage.\n        container: Name of the Blob Storage container to upload to.\n        blob_storage_credentials: Credentials to use for authentication with Azure.\n        blob: Name of the blob within this container to retrieve.\n        overwrite: If `True`, an existing blob with the same name will be overwritten.\n            Defaults to `False` and an error will be thrown if the blob already exists.\n    Returns:\n        The blob name of the uploaded object\n    Example:\n        Read and upload a file to a Blob Storage container\n        ```python\n        from prefect import flow\n\n        from prefect_azure import AzureBlobStorageCredentials\n        from prefect_azure.blob_storage import blob_storage_upload\n\n        @flow\n        def example_blob_storage_upload_flow():\n            connection_string = \"connection_string\"\n            blob_storage_credentials = AzureBlobStorageCredentials(\n                connection_string=connection_string,\n            )\n            with open(\"data.csv\", \"rb\") as f:\n                blob = blob_storage_upload(\n                    data=f.read(),\n                    container=\"container\",\n                    blob=\"data.csv\",\n                    blob_storage_credentials=blob_storage_credentials,\n                    overwrite=False,\n                )\n            return blob\n\n        example_blob_storage_upload_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Uploading blob to container %s with key %s\", container, blob)\n\n    # create key if not provided\n    if blob is None:\n        blob = str(uuid.uuid4())\n\n    async with blob_storage_credentials.get_blob_client(container, blob) as blob_client:\n        await blob_client.upload_blob(data, overwrite=overwrite)\n\n    return blob\n
    "},{"location":"integrations/prefect-azure/container_instance/","title":"Container Instance Block","text":""},{"location":"integrations/prefect-azure/container_instance/#prefect_azure.container_instance","title":"prefect_azure.container_instance","text":"

    Integrations with the Azure Container Instances service. Note this module is experimental. The interfaces within may change without notice.

    The AzureContainerInstanceJob infrastructure block in this module is ideally configured via the Prefect UI and run via a Prefect agent, but it can be called directly as demonstrated in the following examples.

    Examples:

    Run a command using an Azure Container Instances container.

    AzureContainerInstanceJob(command=[\"echo\", \"hello world\"]).run()\n

    Run a command and stream the container's output to the local terminal.

    AzureContainerInstanceJob(\n    command=[\"echo\", \"hello world\"],\n    stream_output=True,\n)\n

    Run a command with a specific image

    AzureContainerInstanceJob(command=[\"echo\", \"hello world\"], image=\"alpine:latest\")\n

    Run a task with custom memory and CPU requirements

    AzureContainerInstanceJob(command=[\"echo\", \"hello world\"], memory=1.0, cpu=1.0)\n

    Run a task with custom memory, CPU, and GPU requirements

    AzureContainerInstanceJob(command=[\"echo\", \"hello world\"], memory=1.0, cpu=1.0,\ngpu_count=1, gpu_sku=\"V100\")\n

    Run a task with custom environment variables

    AzureContainerInstanceJob(\n    command=[\"echo\", \"hello $PLANET\"],\n    env={\"PLANET\": \"earth\"}\n)\n

    Run a task that uses a private ACR registry with a managed identity

    AzureContainerInstanceJob(\n    command=[\"echo\", \"hello $PLANET\"],\n    image=\"my-registry.azurecr.io/my-image\",\n    image_registry=ACRManagedIdentity(\n        registry_url=\"my-registry.azurecr.io\",\n        identity=\"/my/managed/identity/123abc\"\n    )\n)\n

    "},{"location":"integrations/prefect-azure/container_instance/#prefect_azure.container_instance.ACRManagedIdentity","title":"ACRManagedIdentity","text":"

    Bases: BaseModel

    Use a Managed Identity to access Azure Container registry. Requires the user-assigned managed identity be available to the ACI container group.
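
    Example

    Configure access to a private Azure Container Registry with a user-assigned managed identity (a minimal sketch; the registry URL and identity resource ID are placeholders):

    from prefect_azure.container_instance import ACRManagedIdentity\n\nimage_registry = ACRManagedIdentity(\n    registry_url=\"my-registry.azurecr.io\",\n    identity=\"/my/managed/identity/123abc\",\n)\n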

    Source code in prefect_azure/container_instance.py
    class ACRManagedIdentity(BaseModel):\n    \"\"\"\n    Use a Managed Identity to access Azure Container registry. Requires the\n    user-assigned managed identity be available to the ACI container group.\n    \"\"\"\n\n    registry_url: str = Field(\n        default=...,\n        title=\"Registry URL\",\n        description=(\n            \"The URL to the registry, such as myregistry.azurecr.io. Generally, 'http' \"\n            \"or 'https' can be omitted.\"\n        ),\n    )\n    identity: str = Field(\n        default=...,\n        description=(\n            \"The user-assigned Azure managed identity for the private registry.\"\n        ),\n    )\n
    "},{"location":"integrations/prefect-azure/container_instance/#prefect_azure.container_instance.AzureContainerInstanceJob","title":"AzureContainerInstanceJob","text":"

    Bases: Infrastructure

    Run a command using a container on Azure Container Instances. Note this block is experimental. The interface may change without notice.
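
    Example

    Run a command in an ACI container (a minimal sketch; the resource group name and subscription ID are placeholders, and aci_credentials falls back to DefaultAzureCredentials when omitted):

    from prefect_azure.container_instance import AzureContainerInstanceJob\n\ncontainer_instance_job = AzureContainerInstanceJob(\n    resource_group_name=\"my-resource-group\",\n    subscription_id=\"subscription_id\",\n    command=[\"echo\", \"hello world\"],\n)\n# run() is sync-compatible, so it can be called directly\ncontainer_instance_job.run()\n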

    Source code in prefect_azure/container_instance.py
    class AzureContainerInstanceJob(Infrastructure):\n    \"\"\"\n    Run a command using a container on Azure Container Instances.\n    Note this block is experimental. The interface may change without notice.\n    \"\"\"\n\n    _block_type_name = \"Azure Container Instance Job\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/54e3fa7e00197a4fbd1d82ed62494cb58d08c96a-250x250.png\"  # noqa\n    _description = \"Run tasks using Azure Container Instances. Note this block is experimental. The interface may change without notice.\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-azure/container_instance/#prefect_azure.container_instance.AzureContainerInstanceJob\"  # noqa\n\n    type: Literal[\"container-instance-job\"] = Field(\n        default=\"container-instance-job\", description=\"The slug for this task type.\"\n    )\n    aci_credentials: AzureContainerInstanceCredentials = Field(\n        default_factory=AzureContainerInstanceCredentials,\n        description=(\n            \"Credentials for Azure Container Instances; \"\n            \"if not provided will attempt to use DefaultAzureCredentials.\"\n        ),\n    )\n    resource_group_name: str = Field(\n        default=...,\n        title=\"Azure Resource Group Name\",\n        description=(\n            \"The name of the Azure Resource Group in which to run Prefect ACI tasks.\"\n        ),\n    )\n    subscription_id: SecretStr = Field(\n        default=...,\n        title=\"Azure Subscription ID\",\n        description=\"The ID of the Azure subscription to create containers under.\",\n    )\n    identities: Optional[List[str]] = Field(\n        title=\"Identities\",\n        default=None,\n        description=(\n            \"A list of user-assigned identities to associate with the container group. \"\n            \"The identities should be an ARM resource IDs in the form: \"\n            \"'/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identityName}'.\"  # noqa\n        ),\n    )\n    image: Optional[str] = Field(\n        default_factory=get_prefect_image_name,\n        description=(\n            \"The image to use for the Prefect container in the task. This value \"\n            \"defaults to a Prefect base image matching your local versions.\"\n        ),\n    )\n    entrypoint: Optional[str] = Field(\n        default=DEFAULT_CONTAINER_ENTRYPOINT,\n        description=(\n            \"The entrypoint of the container you wish you run. This value \"\n            \"defaults to the entrypoint used by Prefect images and should only be \"\n            \"changed when using a custom image that is not based on an official \"\n            \"Prefect image. Any commands set on deployments will be passed \"\n            \"to the entrypoint as parameters.\"\n        ),\n    )\n    image_registry: Optional[\n        Union[\n            prefect.infrastructure.container.DockerRegistry,\n            ACRManagedIdentity,\n        ]\n    ] = Field(\n        default=None,\n        title=\"Image Registry (Optional)\",\n        description=(\n            \"To use any private container registry with a username and password, \"\n            \"choose DockerRegistry. 
To use a private Azure Container Registry \"\n            \"with a managed identity, choose ACRManagedIdentity.\"\n        ),\n    )\n    cpu: float = Field(\n        title=\"CPU\",\n        default=ACI_DEFAULT_CPU,\n        description=(\n            \"The number of virtual CPUs to assign to the task container. \"\n            f\"If not provided, a default value of {ACI_DEFAULT_CPU} will be used.\"\n        ),\n    )\n    gpu_count: Optional[int] = Field(\n        title=\"GPU Count\",\n        default=None,\n        description=(\n            \"The number of GPUs to assign to the task container. \"\n            \"If not provided, no GPU will be used.\"\n        ),\n    )\n    gpu_sku: Optional[str] = Field(\n        title=\"GPU SKU\",\n        default=None,\n        description=(\n            \"The Azure GPU SKU to use. See the ACI documentation for a list of \"\n            \"GPU SKUs available in each Azure region.\"\n        ),\n    )\n    memory: float = Field(\n        default=ACI_DEFAULT_MEMORY,\n        description=(\n            \"The amount of memory in gigabytes to provide to the ACI task. Valid \"\n            \"amounts are specified in the Azure documentation. If not provided, a \"\n            f\"default value of  {ACI_DEFAULT_MEMORY} will be used unless present \"\n            \"on the task definition.\"\n        ),\n    )\n    subnet_ids: Optional[List[str]] = Field(\n        default=None,\n        title=\"Subnet IDs\",\n        description=\"A list of Azure subnet IDs the container should be connected to.\",\n    )\n    dns_servers: Optional[List[str]] = Field(\n        default=None,\n        title=\"DNS Servers\",\n        description=\"A list of custom DNS Servers the container should use.\",\n    )\n    stream_output: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"If `True`, logs will be streamed from the Prefect container to the local \"\n            \"console.\"\n        ),\n    )\n    env: Dict[str, Optional[str]] = Field(\n        title=\"Environment Variables\",\n        default_factory=dict,\n        description=(\n            \"Environment variables to provide to the task run. These variables are set \"\n            \"on the Prefect container at task runtime. These will not be set on the \"\n            \"task definition.\"\n        ),\n    )\n    # Execution settings\n    task_start_timeout_seconds: int = Field(\n        default=240,\n        description=(\n            \"The amount of time to watch for the start of the ACI container. 
\"\n            \"before marking it as failed.\"\n        ),\n    )\n    task_watch_poll_interval: float = Field(\n        default=5.0,\n        description=(\n            \"The number of seconds to wait between Azure API calls while monitoring \"\n            \"the state of an Azure Container Instances task.\"\n        ),\n    )\n\n    @sync_compatible\n    async def run(\n        self, task_status: Optional[TaskStatus] = None\n    ) -> AzureContainerInstanceJobResult:\n        \"\"\"\n        Runs the configured task using an ACI container.\n\n        Args:\n            task_status: An optional `TaskStatus` to update when the container starts.\n\n        Returns:\n            An `AzureContainerInstanceJobResult` with the container's exit code.\n        \"\"\"\n\n        run_start_time = datetime.datetime.now(datetime.timezone.utc)\n\n        container = self._configure_container()\n        container_group = self._configure_container_group(container)\n        created_container_group = None\n\n        aci_client = self.aci_credentials.get_container_client(\n            self.subscription_id.get_secret_value()\n        )\n\n        self.logger.info(\n            f\"{self._log_prefix}: Preparing to run command {' '.join(self.command)!r} \"\n            f\"in container {container.name!r} ({self.image})...\"\n        )\n        try:\n            self.logger.info(f\"{self._log_prefix}: Waiting for container creation...\")\n            # Create the container group and wait for it to start\n            creation_status_poller = await run_sync_in_worker_thread(\n                aci_client.container_groups.begin_create_or_update,\n                self.resource_group_name,\n                container.name,\n                container_group,\n            )\n            created_container_group = await run_sync_in_worker_thread(\n                self._wait_for_task_container_start, creation_status_poller\n            )\n\n            # If creation succeeded, group provisioning state should be 'Succeeded'\n            # and the group should have a single container\n            if self._provisioning_succeeded(created_container_group):\n                self.logger.info(f\"{self._log_prefix}: Running command...\")\n                if task_status:\n                    task_status.started(value=created_container_group.name)\n                status_code = await run_sync_in_worker_thread(\n                    self._watch_task_and_get_exit_code,\n                    aci_client,\n                    created_container_group,\n                    run_start_time,\n                )\n                self.logger.info(f\"{self._log_prefix}: Completed command run.\")\n            else:\n                raise RuntimeError(f\"{self._log_prefix}: Container creation failed.\")\n\n        finally:\n            if created_container_group:\n                await self._wait_for_container_group_deletion(\n                    aci_client, created_container_group\n                )\n\n        return AzureContainerInstanceJobResult(\n            identifier=created_container_group.name, status_code=status_code\n        )\n\n    async def kill(\n        self,\n        container_group_name: str,\n        grace_seconds: int = CONTAINER_GROUP_DELETION_TIMEOUT_SECONDS,\n    ):\n        \"\"\"\n        Kill a flow running in an ACI container group.\n\n        Args:\n            container_group_name: The container group name yielded by\n                `AzureContainerInstanceJob.run`.\n        \"\"\"\n        # ACI does not provide a way to 
specify grace period, but it gives\n        # applications ~30 seconds to gracefully terminate before killing\n        # a container group.\n        if grace_seconds != CONTAINER_GROUP_DELETION_TIMEOUT_SECONDS:\n            self.logger.warning(\n                f\"{self._log_prefix}: Kill grace period of {grace_seconds}s requested, \"\n                f\"but ACI does not support grace period configuration.\"\n            )\n\n        aci_client = self.aci_credentials.get_container_client(\n            self.subscription_id.get_secret_value()\n        )\n\n        # get the container group to check that it still exists\n        try:\n            container_group = aci_client.container_groups.get(\n                resource_group_name=self.resource_group_name,\n                container_group_name=container_group_name,\n            )\n        except ResourceNotFoundError as exc:\n            # the container group no longer exists, so there's nothing to cancel\n            raise InfrastructureNotFound(\n                f\"Cannot stop ACI job: container group {container_group_name} \"\n                \"no longer exists.\"\n            ) from exc\n\n        # get the container state to check if the container has terminated\n        container = self._get_container(container_group)\n        container_state = container.instance_view.current_state.state\n\n        # the container group needs to be deleted regardless of whether the container\n        # already terminated\n        await self._wait_for_container_group_deletion(aci_client, container_group)\n\n        # if the container had already terminated, raise an exception to let the agent\n        # know the flow was not cancelled\n        if container_state == ContainerRunState.TERMINATED:\n            raise InfrastructureNotAvailable(\n                f\"Cannot stop ACI job: container group {container_group.name} exists, \"\n                f\"but container {container.name} has already terminated.\"\n            )\n\n    def preview(self) -> str:\n        \"\"\"\n        Provides a summary of how the container will be created when `run` is called.\n\n        Returns:\n           A string containing the summary.\n        \"\"\"\n        preview = {\n            \"container_name\": \"<generated when run>\",\n            \"resource_group_name\": self.resource_group_name,\n            \"memory\": self.memory,\n            \"cpu\": self.cpu,\n            \"gpu_count\": self.gpu_count,\n            \"gpu_sku\": self.gpu_sku,\n            \"env\": self._get_environment(),\n        }\n\n        return json.dumps(preview)\n\n    def get_corresponding_worker_type(self) -> str:\n        \"\"\"Return the corresponding worker type for this infrastructure block.\"\"\"\n        from prefect_azure.workers.container_instance import AzureContainerWorker\n\n        return AzureContainerWorker.type\n\n    async def generate_work_pool_base_job_template(self) -> dict:\n        \"\"\"\n        Generate a base job template for an `Azure Container Instance` work pool\n        with the same configuration as this block.\n\n        Returns:\n            - dict: a base job template for an `Azure Container Instance` work pool\n        \"\"\"\n        from prefect_azure.workers.container_instance import AzureContainerWorker\n\n        base_job_template = deepcopy(\n            AzureContainerWorker.get_default_base_job_template()\n        )\n        for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n            if key == \"command\":\n              
  base_job_template[\"variables\"][\"properties\"][\"command\"][\n                    \"default\"\n                ] = shlex.join(value)\n            elif key in [\n                \"type\",\n                \"block_type_slug\",\n                \"_block_document_id\",\n                \"_block_document_name\",\n                \"_is_anonymous\",\n            ]:\n                continue\n            elif key == \"subscription_id\":\n                base_job_template[\"variables\"][\"properties\"][\"subscription_id\"][\n                    \"default\"\n                ] = value.get_secret_value()\n            elif key == \"aci_credentials\":\n                if not self.aci_credentials._block_document_id:\n                    raise BlockNotSavedError(\n                        \"It looks like you are trying to use a block that\"\n                        \" has not been saved. Please call `.save` on your block\"\n                        \" before publishing it as a work pool.\"\n                    )\n                base_job_template[\"variables\"][\"properties\"][\"aci_credentials\"][\n                    \"default\"\n                ] = {\n                    \"$ref\": {\n                        \"block_document_id\": str(\n                            self.aci_credentials._block_document_id\n                        )\n                    }\n                }\n            elif key == \"image_registry\":\n                if not self.image_registry._block_document_id:\n                    raise BlockNotSavedError(\n                        \"It looks like you are trying to use a block that\"\n                        \" has not been saved. Please call `.save` on your block\"\n                        \" before publishing it as a work pool.\"\n                    )\n                base_job_template[\"variables\"][\"properties\"][\"image_registry\"][\n                    \"default\"\n                ] = {\n                    \"$ref\": {\n                        \"block_document_id\": str(self.image_registry._block_document_id)\n                    }\n                }\n            elif key in base_job_template[\"variables\"][\"properties\"]:\n                base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n            else:\n                self.logger.warning(\n                    f\"Variable {key!r} is not supported by `Azure Container Instance`\"\n                    \" work pools. 
Skipping.\"\n                )\n\n        return base_job_template\n\n    def _configure_container(self) -> Container:\n        \"\"\"\n        Configures an Azure `Container` using data from the block's fields.\n\n        Returns:\n            An instance of `Container` ready to submit to Azure.\n        \"\"\"\n\n        # setup container environment variables\n        environment = [\n            EnvironmentVariable(name=k, secure_value=v)\n            if k in ENV_SECRETS\n            else EnvironmentVariable(name=k, value=v)\n            for (k, v) in self._get_environment().items()\n        ]\n\n        # all container names in a resource group must be unique\n        if self.name:\n            slugified_name = slugify(\n                self.name,\n                max_length=52,\n                regex_pattern=r\"[^a-zA-Z0-9-]+\",\n            )\n            random_suffix = \"\".join(\n                random.choices(string.ascii_lowercase + string.digits, k=10)\n            )\n            container_name = slugified_name + \"-\" + random_suffix\n        else:\n            container_name = str(uuid.uuid4())\n\n        container_resource_requirements = self._configure_container_resources()\n\n        # add the entrypoint if provided, because creating an ACI container with a\n        # command overrides the container's built-in entrypoint.\n        if self.entrypoint:\n            self.command.insert(0, self.entrypoint)\n\n        return Container(\n            name=container_name,\n            image=self.image,\n            command=self.command,\n            resources=container_resource_requirements,\n            environment_variables=environment,\n        )\n\n    def _configure_container_resources(self) -> ResourceRequirements:\n        \"\"\"\n        Configures the container's memory, CPU, and GPU resources.\n\n        Returns:\n            A `ResourceRequirements` instance initialized with data from this\n            `AzureContainerInstanceJob` block.\n        \"\"\"\n\n        gpu_resource = (\n            GpuResource(count=self.gpu_count, sku=self.gpu_sku)\n            if self.gpu_count and self.gpu_sku\n            else None\n        )\n        container_resource_requests = ResourceRequests(\n            memory_in_gb=self.memory, cpu=self.cpu, gpu=gpu_resource\n        )\n\n        return ResourceRequirements(requests=container_resource_requests)\n\n    def _configure_container_group(self, container: Container) -> ContainerGroup:\n        \"\"\"\n        Configures the container group needed to start a container on ACI.\n\n        Args:\n            container: An initialized instance of `Container`.\n\n        Returns:\n            An initialized `ContainerGroup` ready to submit to Azure.\n        \"\"\"\n\n        # Load the resource group, so we can set the container group location\n        # correctly.\n\n        resource_group_client = self.aci_credentials.get_resource_client(\n            self.subscription_id.get_secret_value()\n        )\n\n        resource_group = resource_group_client.resource_groups.get(\n            self.resource_group_name\n        )\n\n        image_registry_credentials = self._create_image_registry_credentials(\n            self.image_registry\n        )\n\n        identity = (\n            ContainerGroupIdentity(\n                type=\"UserAssigned\",\n                # The Azure API only uses the dict keys and ignores values when\n                # creating a container group. 
Using empty `UserAssignedIdentities`\n                # instances as dict values satisfies Python type checkers.\n                user_assigned_identities={\n                    identity: UserAssignedIdentities() for identity in self.identities\n                },\n            )\n            if self.identities\n            else None\n        )\n\n        subnet_ids = (\n            [ContainerGroupSubnetId(id=subnet_id) for subnet_id in self.subnet_ids]\n            if self.subnet_ids\n            else None\n        )\n\n        dns_config = (\n            DnsConfiguration(name_servers=self.dns_servers)\n            if self.dns_servers\n            else None\n        )\n\n        return ContainerGroup(\n            location=resource_group.location,\n            identity=identity,\n            containers=[container],\n            os_type=OperatingSystemTypes.linux,\n            restart_policy=ContainerGroupRestartPolicy.never,\n            image_registry_credentials=image_registry_credentials,\n            subnet_ids=subnet_ids,\n            dns_config=dns_config,\n        )\n\n    @staticmethod\n    def _create_image_registry_credentials(\n        image_registry: Union[\n            prefect.infrastructure.container.DockerRegistry,\n            ACRManagedIdentity,\n            None,\n        ],\n    ):\n        \"\"\"\n        Create image registry credentials based on the type of image_registry provided.\n\n        Args:\n            image_registry: An instance of a DockerRegistry or\n            ACRManagedIdentity object.\n\n        Returns:\n            A list containing an ImageRegistryCredential object if the input is a\n            `DockerRegistry` or `ACRManagedIdentity`, or None if the\n            input doesn't match any of the expected types.\n        \"\"\"\n        if image_registry and isinstance(\n            image_registry, prefect.infrastructure.container.DockerRegistry\n        ):\n            return [\n                ImageRegistryCredential(\n                    server=image_registry.registry_url,\n                    username=image_registry.username,\n                    password=image_registry.password.get_secret_value(),\n                )\n            ]\n        elif image_registry and isinstance(image_registry, ACRManagedIdentity):\n            return [\n                ImageRegistryCredential(\n                    server=image_registry.registry_url,\n                    identity=image_registry.identity,\n                )\n            ]\n        else:\n            return None\n\n    def _wait_for_task_container_start(\n        self, creation_status_poller: LROPoller[ContainerGroup]\n    ) -> ContainerGroup:\n        \"\"\"\n        Wait for the result of group and container creation.\n\n        Args:\n            creation_status_poller: Poller returned by the Azure SDK.\n\n        Raises:\n            RuntimeError: Raised if the timeout limit is exceeded before the\n            container starts.\n\n        Returns:\n            A `ContainerGroup` representing the current status of the group being\n            watched.\n        \"\"\"\n\n        t0 = time.time()\n        timeout = self.task_start_timeout_seconds\n\n        while not creation_status_poller.done():\n            elapsed_time = time.time() - t0\n\n            if timeout and elapsed_time > timeout:\n                raise RuntimeError(\n                    (\n                        f\"Timed out after {elapsed_time}s while watching waiting for \"\n                        \"container start.\"\n               
     )\n                )\n            time.sleep(self.task_watch_poll_interval)\n\n        return creation_status_poller.result()\n\n    def _watch_task_and_get_exit_code(\n        self,\n        client: ContainerInstanceManagementClient,\n        container_group: ContainerGroup,\n        run_start_time: datetime.datetime,\n    ) -> int:\n        \"\"\"\n        Waits until the container finishes running and obtains its exit code.\n\n        Args:\n            client: An initialized Azure `ContainerInstanceManagementClient`\n            container_group: The `ContainerGroup` in which the container resides.\n\n        Returns:\n            An `int` representing the container's exit code.\n        \"\"\"\n\n        status_code = -1\n        running_container = self._get_container(container_group)\n        current_state = running_container.instance_view.current_state.state\n\n        # get any logs the container has already generated\n        last_log_time = run_start_time\n        if self.stream_output:\n            last_log_time = self._get_and_stream_output(\n                client, container_group, last_log_time\n            )\n\n        # set exit code if flow run already finished:\n        if current_state == ContainerRunState.TERMINATED:\n            status_code = running_container.instance_view.current_state.exit_code\n\n        while current_state != ContainerRunState.TERMINATED:\n            try:\n                container_group = client.container_groups.get(\n                    resource_group_name=self.resource_group_name,\n                    container_group_name=container_group.name,\n                )\n            except ResourceNotFoundError:\n                self.logger.exception(\n                    f\"{self._log_prefix}: Container group was deleted before flow run \"\n                    \"completed, likely due to flow cancellation.\"\n                )\n\n                # since the flow was cancelled, exit early instead of raising an\n                # exception\n                return status_code\n\n            container = self._get_container(container_group)\n            current_state = container.instance_view.current_state.state\n\n            if current_state == ContainerRunState.TERMINATED:\n                status_code = container.instance_view.current_state.exit_code\n                # break instead of waiting for next loop iteration because\n                # trying to read logs from a terminated container raises an exception\n                break\n\n            if self.stream_output:\n                last_log_time = self._get_and_stream_output(\n                    client, container_group, last_log_time\n                )\n\n            time.sleep(self.task_watch_poll_interval)\n\n        return status_code\n\n    async def _wait_for_container_group_deletion(\n        self,\n        aci_client: ContainerInstanceManagementClient,\n        container_group: ContainerGroup,\n    ):\n        self.logger.info(f\"{self._log_prefix}: Deleting container...\")\n\n        deletion_status_poller = await run_sync_in_worker_thread(\n            aci_client.container_groups.begin_delete,\n            resource_group_name=self.resource_group_name,\n            container_group_name=container_group.name,\n        )\n\n        t0 = time.time()\n        timeout = CONTAINER_GROUP_DELETION_TIMEOUT_SECONDS\n\n        while not deletion_status_poller.done():\n            elapsed_time = time.time() - t0\n\n            if timeout and elapsed_time > timeout:\n                raise 
RuntimeError(\n                    (\n                        f\"Timed out after {elapsed_time}s while waiting for deletion of\"\n                        f\" container group {container_group.name}. To verify the group \"\n                        \"has been deleted, check the Azure Portal or run \"\n                        f\"az container show --name {container_group.name} --resource-group {self.resource_group_name}\"  # noqa\n                    )\n                )\n            await anyio.sleep(self.task_watch_poll_interval)\n\n        self.logger.info(f\"{self._log_prefix}: Container deleted.\")\n\n    def _get_container(self, container_group: ContainerGroup) -> Container:\n        \"\"\"\n        Extracts the job container from a container group.\n        \"\"\"\n        return container_group.containers[0]\n\n    def _get_and_stream_output(\n        self,\n        client: ContainerInstanceManagementClient,\n        container_group: ContainerGroup,\n        last_log_time: datetime.datetime,\n    ) -> datetime.datetime:\n        \"\"\"\n        Fetches logs output from the job container and writes all entries after\n        a given time to stderr.\n\n        Args:\n            client: An initialized `ContainerInstanceManagementClient`\n            container_group: The container group that holds the job container.\n            last_log_time: The timestamp of the last output line already streamed.\n\n        Returns:\n            The time of the most recent output line written by this call.\n        \"\"\"\n        logs = self._get_logs(client, container_group)\n        return self._stream_output(logs, last_log_time)\n\n    def _get_logs(\n        self,\n        client: ContainerInstanceManagementClient,\n        container_group: ContainerGroup,\n        max_lines: int = 100,\n    ) -> str:\n        \"\"\"\n        Gets the most container logs up to a given maximum.\n\n        Args:\n            client: An initialized `ContainerInstanceManagementClient`\n            container_group: The container group that holds the job container.\n            max_lines: The number of log lines to pull. Defaults to 100.\n\n        Returns:\n            A string containing the requested log entries, one per line.\n        \"\"\"\n        container = self._get_container(container_group)\n\n        logs: Union[Logs, None] = None\n        try:\n            logs = client.containers.list_logs(\n                resource_group_name=self.resource_group_name,\n                container_group_name=container_group.name,\n                container_name=container.name,\n                tail=max_lines,\n                timestamps=True,\n            )\n        except HttpResponseError:\n            # Trying to get logs when the container is under heavy CPU load sometimes\n            # results in an error, but we won't want to raise an exception and stop\n            # monitoring the flow. Instead, log the error and carry on so we can try to\n            # get all missed logs on the next check.\n            self.logger.warning(\n                f\"{self._log_prefix}: Unable to retrieve logs from container \"\n                f\"{container.name}. 
Trying again in {self.task_watch_poll_interval}s\"\n            )\n\n        return logs.content if logs else \"\"\n\n    def _stream_output(\n        self, log_content: Union[str, None], last_log_time: datetime.datetime\n    ) -> datetime.datetime:\n        \"\"\"\n        Writes each entry from a string of log lines to stderr.\n\n        Args:\n            log_content: A string containing Azure container logs.\n            last_log_time: The timestamp of the last output line already streamed.\n\n        Returns:\n            The time of the most recent output line written by this call.\n        \"\"\"\n        if not log_content:\n            # nothing to stream\n            return last_log_time\n\n        log_lines = log_content.split(\"\\n\")\n\n        last_written_time = last_log_time\n\n        for log_line in log_lines:\n            # skip if the line is blank or whitespace\n            if not log_line.strip():\n                continue\n\n            line_parts = log_line.split(\" \")\n            # timestamp should always be before first space in line\n            line_timestamp = line_parts[0]\n            line = \" \".join(line_parts[1:])\n\n            try:\n                line_time = dateutil.parser.parse(line_timestamp)\n                if line_time > last_written_time:\n                    self._write_output_line(line)\n                    last_written_time = line_time\n            except dateutil.parser.ParserError as e:\n                self.logger.debug(\n                    (\n                        f\"{self._log_prefix}: Unable to parse timestamp from Azure \"\n                        \"log line: %s\"\n                    ),\n                    log_line,\n                    exc_info=e,\n                )\n\n        return last_written_time\n\n    def _get_environment(self):\n        \"\"\"\n        Generates a dictionary of all environment variables to send to the\n        ACI container.\n        \"\"\"\n        return {**self._base_environment(), **self.env}\n\n    @property\n    def _log_prefix(self) -> str:\n        \"\"\"\n        Internal property for generating a prefix for logs where `name` may be null\n        \"\"\"\n        if self.name is not None:\n            return f\"AzureContainerInstanceJob {self.name!r}\"\n        else:\n            return \"AzureContainerInstanceJob\"\n\n    @staticmethod\n    def _provisioning_succeeded(container_group: ContainerGroup) -> bool:\n        \"\"\"\n        Determines whether ACI container group provisioning was successful.\n\n        Args:\n            container_group: a container group returned by the Azure SDK.\n\n        Returns:\n            True if provisioning was successful, False otherwise.\n        \"\"\"\n        if not container_group:\n            return False\n\n        return (\n            container_group.provisioning_state\n            == ContainerGroupProvisioningState.SUCCEEDED\n            and len(container_group.containers) == 1\n        )\n\n    @staticmethod\n    def _write_output_line(line: str):\n        \"\"\"\n        Writes a line of output to stderr.\n        \"\"\"\n        print(line, file=sys.stderr)\n
    "},{"location":"integrations/prefect-azure/container_instance/#prefect_azure.container_instance.AzureContainerInstanceJob.generate_work_pool_base_job_template","title":"generate_work_pool_base_job_template async","text":"

    Generate a base job template for an Azure Container Instance work pool with the same configuration as this block.

    Returns:

    dict: a base job template for an Azure Container Instance work pool
    Source code in prefect_azure/container_instance.py
    async def generate_work_pool_base_job_template(self) -> dict:\n    \"\"\"\n    Generate a base job template for an `Azure Container Instance` work pool\n    with the same configuration as this block.\n\n    Returns:\n        - dict: a base job template for an `Azure Container Instance` work pool\n    \"\"\"\n    from prefect_azure.workers.container_instance import AzureContainerWorker\n\n    base_job_template = deepcopy(\n        AzureContainerWorker.get_default_base_job_template()\n    )\n    for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n        if key == \"command\":\n            base_job_template[\"variables\"][\"properties\"][\"command\"][\n                \"default\"\n            ] = shlex.join(value)\n        elif key in [\n            \"type\",\n            \"block_type_slug\",\n            \"_block_document_id\",\n            \"_block_document_name\",\n            \"_is_anonymous\",\n        ]:\n            continue\n        elif key == \"subscription_id\":\n            base_job_template[\"variables\"][\"properties\"][\"subscription_id\"][\n                \"default\"\n            ] = value.get_secret_value()\n        elif key == \"aci_credentials\":\n            if not self.aci_credentials._block_document_id:\n                raise BlockNotSavedError(\n                    \"It looks like you are trying to use a block that\"\n                    \" has not been saved. Please call `.save` on your block\"\n                    \" before publishing it as a work pool.\"\n                )\n            base_job_template[\"variables\"][\"properties\"][\"aci_credentials\"][\n                \"default\"\n            ] = {\n                \"$ref\": {\n                    \"block_document_id\": str(\n                        self.aci_credentials._block_document_id\n                    )\n                }\n            }\n        elif key == \"image_registry\":\n            if not self.image_registry._block_document_id:\n                raise BlockNotSavedError(\n                    \"It looks like you are trying to use a block that\"\n                    \" has not been saved. Please call `.save` on your block\"\n                    \" before publishing it as a work pool.\"\n                )\n            base_job_template[\"variables\"][\"properties\"][\"image_registry\"][\n                \"default\"\n            ] = {\n                \"$ref\": {\n                    \"block_document_id\": str(self.image_registry._block_document_id)\n                }\n            }\n        elif key in base_job_template[\"variables\"][\"properties\"]:\n            base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n        else:\n            self.logger.warning(\n                f\"Variable {key!r} is not supported by `Azure Container Instance`\"\n                \" work pools. Skipping.\"\n            )\n\n    return base_job_template\n
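    As a minimal sketch of how this method might be used, the following loads a previously saved block and prints the generated template for inspection; the block name "my-aci-job" is a hypothetical placeholder for a block you have already saved.

    import asyncio
    import json

    from prefect_azure.container_instance import AzureContainerInstanceJob


    async def main():
        # "my-aci-job" is a hypothetical, previously saved block document name.
        block = await AzureContainerInstanceJob.load("my-aci-job")
        template = await block.generate_work_pool_base_job_template()
        # Inspect the generated template before using it to configure a work pool.
        print(json.dumps(template, indent=2, default=str))


    asyncio.run(main())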
    "},{"location":"integrations/prefect-azure/container_instance/#prefect_azure.container_instance.AzureContainerInstanceJob.get_corresponding_worker_type","title":"get_corresponding_worker_type","text":"

    Return the corresponding worker type for this infrastructure block.

    Source code in prefect_azure/container_instance.py
    def get_corresponding_worker_type(self) -> str:\n    \"\"\"Return the corresponding worker type for this infrastructure block.\"\"\"\n    from prefect_azure.workers.container_instance import AzureContainerWorker\n\n    return AzureContainerWorker.type\n
    "},{"location":"integrations/prefect-azure/container_instance/#prefect_azure.container_instance.AzureContainerInstanceJob.kill","title":"kill async","text":"

    Kill a flow running in an ACI container group.

    Parameters:

    container_group_name (str, required): The container group name yielded by AzureContainerInstanceJob.run.

    Source code in prefect_azure/container_instance.py
    async def kill(\n    self,\n    container_group_name: str,\n    grace_seconds: int = CONTAINER_GROUP_DELETION_TIMEOUT_SECONDS,\n):\n    \"\"\"\n    Kill a flow running in an ACI container group.\n\n    Args:\n        container_group_name: The container group name yielded by\n            `AzureContainerInstanceJob.run`.\n    \"\"\"\n    # ACI does not provide a way to specify grace period, but it gives\n    # applications ~30 seconds to gracefully terminate before killing\n    # a container group.\n    if grace_seconds != CONTAINER_GROUP_DELETION_TIMEOUT_SECONDS:\n        self.logger.warning(\n            f\"{self._log_prefix}: Kill grace period of {grace_seconds}s requested, \"\n            f\"but ACI does not support grace period configuration.\"\n        )\n\n    aci_client = self.aci_credentials.get_container_client(\n        self.subscription_id.get_secret_value()\n    )\n\n    # get the container group to check that it still exists\n    try:\n        container_group = aci_client.container_groups.get(\n            resource_group_name=self.resource_group_name,\n            container_group_name=container_group_name,\n        )\n    except ResourceNotFoundError as exc:\n        # the container group no longer exists, so there's nothing to cancel\n        raise InfrastructureNotFound(\n            f\"Cannot stop ACI job: container group {container_group_name} \"\n            \"no longer exists.\"\n        ) from exc\n\n    # get the container state to check if the container has terminated\n    container = self._get_container(container_group)\n    container_state = container.instance_view.current_state.state\n\n    # the container group needs to be deleted regardless of whether the container\n    # already terminated\n    await self._wait_for_container_group_deletion(aci_client, container_group)\n\n    # if the container had already terminated, raise an exception to let the agent\n    # know the flow was not cancelled\n    if container_state == ContainerRunState.TERMINATED:\n        raise InfrastructureNotAvailable(\n            f\"Cannot stop ACI job: container group {container_group.name} exists, \"\n            f\"but container {container.name} has already terminated.\"\n        )\n
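    A minimal sketch of calling this method directly (it is normally invoked by the agent when a flow run is cancelled); the block name and container group name below are hypothetical placeholders.

    import asyncio

    from prefect_azure.container_instance import AzureContainerInstanceJob


    async def main():
        # Hypothetical, previously saved block document name.
        block = await AzureContainerInstanceJob.load("my-aci-job")
        # The container group name is the value yielded by `run` when the container starts.
        await block.kill("my-flow-container-group")


    asyncio.run(main())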
    "},{"location":"integrations/prefect-azure/container_instance/#prefect_azure.container_instance.AzureContainerInstanceJob.preview","title":"preview","text":"

    Provides a summary of how the container will be created when run is called.

    Returns:

    str: A string containing the summary.

    Source code in prefect_azure/container_instance.py
    def preview(self) -> str:\n    \"\"\"\n    Provides a summary of how the container will be created when `run` is called.\n\n    Returns:\n       A string containing the summary.\n    \"\"\"\n    preview = {\n        \"container_name\": \"<generated when run>\",\n        \"resource_group_name\": self.resource_group_name,\n        \"memory\": self.memory,\n        \"cpu\": self.cpu,\n        \"gpu_count\": self.gpu_count,\n        \"gpu_sku\": self.gpu_sku,\n        \"env\": self._get_environment(),\n    }\n\n    return json.dumps(preview)\n
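    A minimal sketch of inspecting the preview before running a job; the resource group and subscription values are placeholders, and your block may require additional configuration.

    import json

    from prefect_azure.container_instance import AzureContainerInstanceJob
    from prefect_azure.credentials import AzureContainerInstanceCredentials

    job = AzureContainerInstanceJob(
        command=["python", "-m", "prefect.engine"],
        resource_group_name="my-resource-group",                  # placeholder
        subscription_id="00000000-0000-0000-0000-000000000000",   # placeholder
        aci_credentials=AzureContainerInstanceCredentials(),
    )

    # `preview` returns a JSON string summarizing the container that would be created.
    print(json.dumps(json.loads(job.preview()), indent=2))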
    "},{"location":"integrations/prefect-azure/container_instance/#prefect_azure.container_instance.AzureContainerInstanceJob.run","title":"run async","text":"

    Runs the configured task using an ACI container.

    Parameters:

    task_status (Optional[TaskStatus], default None): An optional TaskStatus to update when the container starts.

    Returns:

    AzureContainerInstanceJobResult: An AzureContainerInstanceJobResult with the container's exit code.

    Source code in prefect_azure/container_instance.py
    @sync_compatible\nasync def run(\n    self, task_status: Optional[TaskStatus] = None\n) -> AzureContainerInstanceJobResult:\n    \"\"\"\n    Runs the configured task using an ACI container.\n\n    Args:\n        task_status: An optional `TaskStatus` to update when the container starts.\n\n    Returns:\n        An `AzureContainerInstanceJobResult` with the container's exit code.\n    \"\"\"\n\n    run_start_time = datetime.datetime.now(datetime.timezone.utc)\n\n    container = self._configure_container()\n    container_group = self._configure_container_group(container)\n    created_container_group = None\n\n    aci_client = self.aci_credentials.get_container_client(\n        self.subscription_id.get_secret_value()\n    )\n\n    self.logger.info(\n        f\"{self._log_prefix}: Preparing to run command {' '.join(self.command)!r} \"\n        f\"in container {container.name!r} ({self.image})...\"\n    )\n    try:\n        self.logger.info(f\"{self._log_prefix}: Waiting for container creation...\")\n        # Create the container group and wait for it to start\n        creation_status_poller = await run_sync_in_worker_thread(\n            aci_client.container_groups.begin_create_or_update,\n            self.resource_group_name,\n            container.name,\n            container_group,\n        )\n        created_container_group = await run_sync_in_worker_thread(\n            self._wait_for_task_container_start, creation_status_poller\n        )\n\n        # If creation succeeded, group provisioning state should be 'Succeeded'\n        # and the group should have a single container\n        if self._provisioning_succeeded(created_container_group):\n            self.logger.info(f\"{self._log_prefix}: Running command...\")\n            if task_status:\n                task_status.started(value=created_container_group.name)\n            status_code = await run_sync_in_worker_thread(\n                self._watch_task_and_get_exit_code,\n                aci_client,\n                created_container_group,\n                run_start_time,\n            )\n            self.logger.info(f\"{self._log_prefix}: Completed command run.\")\n        else:\n            raise RuntimeError(f\"{self._log_prefix}: Container creation failed.\")\n\n    finally:\n        if created_container_group:\n            await self._wait_for_container_group_deletion(\n                aci_client, created_container_group\n            )\n\n    return AzureContainerInstanceJobResult(\n        identifier=created_container_group.name, status_code=status_code\n    )\n
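    A minimal sketch of calling `run` directly (in practice it is usually invoked on your behalf for a deployment); all identifiers below are placeholders.

    import asyncio

    from prefect_azure.container_instance import AzureContainerInstanceJob
    from prefect_azure.credentials import AzureContainerInstanceCredentials


    async def main():
        job = AzureContainerInstanceJob(
            command=["echo", "hello from ACI"],
            image="prefecthq/prefect:2-latest",
            resource_group_name="my-resource-group",                  # placeholder
            subscription_id="00000000-0000-0000-0000-000000000000",   # placeholder
            aci_credentials=AzureContainerInstanceCredentials(),
            stream_output=True,
        )
        result = await job.run()
        print(result.status_code)


    asyncio.run(main())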
    "},{"location":"integrations/prefect-azure/container_instance/#prefect_azure.container_instance.AzureContainerInstanceJobResult","title":"AzureContainerInstanceJobResult","text":"

    Bases: InfrastructureResult

    The result of an AzureContainerInstanceJob run.

    Source code in prefect_azure/container_instance.py
    class AzureContainerInstanceJobResult(InfrastructureResult):\n    \"\"\"\n    The result of an `AzureContainerInstanceJob` run.\n    \"\"\"\n
    "},{"location":"integrations/prefect-azure/container_instance/#prefect_azure.container_instance.ContainerGroupProvisioningState","title":"ContainerGroupProvisioningState","text":"

    Bases: str, Enum

    Terminal provisioning states for ACI container groups. Per the Azure docs, the states in this Enum are the only ones that can be relied on as dependencies.

    Source code in prefect_azure/container_instance.py
    class ContainerGroupProvisioningState(str, Enum):\n    \"\"\"\n    Terminal provisioning states for ACI container groups. Per the Azure docs,\n    the states in this Enum are the only ones that can be relied on as dependencies.\n    \"\"\"\n\n    SUCCEEDED = \"Succeeded\"\n    FAILED = \"Failed\"\n
    "},{"location":"integrations/prefect-azure/container_instance/#prefect_azure.container_instance.ContainerRunState","title":"ContainerRunState","text":"

    Bases: str, Enum

    Terminal run states for ACI containers.

    Source code in prefect_azure/container_instance.py
    class ContainerRunState(str, Enum):\n    \"\"\"\n    Terminal run states for ACI containers.\n    \"\"\"\n\n    RUNNING = \"Running\"\n    TERMINATED = \"Terminated\"\n
    "},{"location":"integrations/prefect-azure/container_instance_worker/","title":"Container Instance Worker","text":""},{"location":"integrations/prefect-azure/container_instance_worker/#prefect_azure.workers.container_instance","title":"prefect_azure.workers.container_instance","text":"

    Module containing the Azure Container Instances worker used to execute flow runs in ACI containers.

    To start an ACI worker, run the following command:

    prefect worker start --pool 'my-work-pool' --type azure-container-instance\n

    Replace my-work-pool with the name of the work pool you want the worker to poll for flow runs.
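    If the work pool does not exist yet, one way to create it is with the Prefect CLI (shown as a sketch; verify the flags against your Prefect version):

    prefect work-pool create 'my-work-pool' --type azure-container-instance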

    Using a custom ARM template

    To facilitate customization, the Azure Container worker provisions a container group using an ARM template. The default ARM template is represented in YAML as follows:

    ---\narm_template:\n  \"$schema\": https://schema.management.azure.com/schemas/2019-08-01/deploymentTemplate.json#\n  contentVersion: 1.0.0.0\n  parameters:\n    location:\n      type: string\n      defaultValue: \"[resourceGroup().location]\"\n      metadata:\n        description: Location for all resources.\n    container_group_name:\n      type: string\n      defaultValue: \"[uniqueString(resourceGroup().id)]\"\n      metadata:\n        description: The name of the container group to create.\n    container_name:\n      type: string\n      defaultValue: \"[uniqueString(resourceGroup().id)]\"\n      metadata:\n        description: The name of the container to create.\n  resources:\n  - type: Microsoft.ContainerInstance/containerGroups\n    apiVersion: '2022-09-01'\n    name: \"[parameters('container_group_name')]\"\n    location: \"[parameters('location')]\"\n    properties:\n      containers:\n      - name: \"[parameters('container_name')]\"\n        properties:\n          image: rpeden/my-aci-flow:latest\n          command: \"{{ command }}\"\n          resources:\n            requests:\n              cpu: \"{{ cpu }}\"\n              memoryInGB: \"{{ memory }}\"\n          environmentVariables: []\n      osType: Linux\n      restartPolicy: Never\n

    Each value enclosed in {{ }} is a placeholder that will be replaced with a value at runtime. The values that can be used as placeholders are defined by the variables schema in the base job template.

    The default job manifest and available variables can be customized on a per-work-pool basis. You can make these customizations in the Prefect UI when creating or editing a work pool.

    Using an ARM template makes the worker flexible; you're not limited to using the features the worker provides out of the box. Instead, you can modify the ARM template to use any features available in Azure Container Instances.
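    As a purely illustrative sketch (not the worker's actual implementation), placeholder substitution can be thought of as replacing each {{ name }} token in the template text with the corresponding work pool variable:

    import re


    def render(template_text: str, variables: dict) -> str:
        # Replace each "{{ name }}" token with the matching variable value.
        return re.sub(
            r"\{\{\s*(\w+)\s*\}\}",
            lambda match: str(variables[match.group(1)]),
            template_text,
        )


    print(render('cpu: "{{ cpu }}"', {"cpu": 1.0}))  # prints: cpu: "1.0"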

    "},{"location":"integrations/prefect-azure/container_instance_worker/#prefect_azure.workers.container_instance.AzureContainerJobConfiguration","title":"AzureContainerJobConfiguration","text":"

    Bases: BaseJobConfiguration

    Configuration for an Azure Container Instance flow run.

    Source code in prefect_azure/workers/container_instance.py
    class AzureContainerJobConfiguration(BaseJobConfiguration):\n    \"\"\"\n    Configuration for an Azure Container Instance flow run.\n    \"\"\"\n\n    image: str = Field(default_factory=get_prefect_image_name)\n    resource_group_name: str = Field(default=...)\n    subscription_id: SecretStr = Field(default=...)\n    identities: Optional[List[str]] = Field(default=None)\n    entrypoint: Optional[str] = Field(default=DEFAULT_CONTAINER_ENTRYPOINT)\n    image_registry: Optional[\n        Union[\n            prefect.infrastructure.container.DockerRegistry,\n            ACRManagedIdentity,\n        ]\n    ] = Field(default=None)\n    cpu: float = Field(default=ACI_DEFAULT_CPU)\n    gpu_count: Optional[int] = Field(default=None)\n    gpu_sku: Optional[str] = Field(default=None)\n    memory: float = Field(default=ACI_DEFAULT_MEMORY)\n    subnet_ids: Optional[List[str]] = Field(default=None)\n    dns_servers: Optional[List[str]] = Field(default=None)\n    stream_output: bool = Field(default=False)\n    aci_credentials: AzureContainerInstanceCredentials = Field(\n        # default to an empty credentials object that will use\n        # `DefaultAzureCredential` to authenticate.\n        default_factory=AzureContainerInstanceCredentials\n    )\n    # Execution settings\n    task_start_timeout_seconds: int = Field(default=240)\n    task_watch_poll_interval: float = Field(default=5.0)\n    arm_template: Dict[str, Any] = Field(template=_get_default_arm_template())\n    keep_container_group: bool = Field(default=False)\n\n    def prepare_for_flow_run(\n        self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"DeploymentResponse\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ):\n        \"\"\"\n        Prepares the job configuration for a flow run.\n        \"\"\"\n        super().prepare_for_flow_run(flow_run, deployment, flow)\n\n        # expectations:\n        # - the first resource in the template is the container group\n        # - the container group has a single container\n        container_group = self.arm_template[\"resources\"][0]\n        container = container_group[\"properties\"][\"containers\"][0]\n\n        # set the container's environment variables\n        container[\"properties\"][\"environmentVariables\"] = self._get_arm_environment()\n\n        # convert the command from a string to a list, because that's what ACI expects\n        if self.command:\n            container[\"properties\"][\"command\"] = self.command.split(\" \")\n\n        self._add_image()\n\n        # Add the entrypoint if provided. Creating an ACI container with a\n        # command overrides the container's built-in entrypoint. 
Prefect base images\n        # use entrypoint.sh as the entrypoint, so we need to add to the beginning of\n        # the command list to avoid breaking EXTRA_PIP_PACKAGES installation on\n        # container startup.\n        if self.entrypoint:\n            container[\"properties\"][\"command\"].insert(0, self.entrypoint)\n\n        if self.image_registry:\n            self._add_image_registry_credentials(self.image_registry)\n\n        if self.identities:\n            self._add_identities(self.identities)\n\n        if self.subnet_ids:\n            self._add_subnets(self.subnet_ids)\n\n        if self.dns_servers:\n            self._add_dns_servers(self.dns_servers)\n\n    def _add_image(self):\n        \"\"\"\n        Add the image to the arm template.\n        \"\"\"\n        try:\n            self.arm_template[\"resources\"][0][\"properties\"][\"containers\"][0][\n                \"properties\"\n            ][\"image\"] = self.image\n        except KeyError:\n            raise ValueError(\"Unable to add image due to invalid job ARM template.\")\n\n    def _add_image_registry_credentials(\n        self,\n        image_registry: Union[\n            prefect.infrastructure.container.DockerRegistry,\n            ACRManagedIdentity,\n            None,\n        ],\n    ):\n        \"\"\"\n        Create image registry credentials based on the type of image_registry provided.\n\n        Args:\n            image_registry: An instance of a DockerRegistry or\n            ACRManagedIdentity object.\n        \"\"\"\n        if image_registry and isinstance(\n            image_registry, prefect.infrastructure.container.DockerRegistry\n        ):\n            self.arm_template[\"resources\"][0][\"properties\"][\n                \"imageRegistryCredentials\"\n            ] = [\n                {\n                    \"server\": image_registry.registry_url,\n                    \"username\": image_registry.username,\n                    \"password\": image_registry.password.get_secret_value(),\n                }\n            ]\n        elif image_registry and isinstance(image_registry, ACRManagedIdentity):\n            self.arm_template[\"resources\"][0][\"properties\"][\n                \"imageRegistryCredentials\"\n            ] = [\n                {\n                    \"server\": image_registry.registry_url,\n                    \"identity\": image_registry.identity,\n                }\n            ]\n\n    def _add_identities(self, identities: List[str]):\n        \"\"\"\n        Add identities to the container group.\n\n        Args:\n            identities: A list of user-assigned identities to add to\n            the container group.\n        \"\"\"\n        self.arm_template[\"resources\"][0][\"identity\"] = {\n            \"type\": \"UserAssigned\",\n            \"userAssignedIdentities\": {\n                # note: For user-assigned identities, the key is the resource ID\n                # of the identity and the value is an empty object. 
See:\n                # https://docs.microsoft.com/en-us/azure/templates/microsoft.containerinstance/containergroups?tabs=bicep#identity-object # noqa\n                identity: {}\n                for identity in identities\n            },\n        }\n\n    def _add_subnets(self, subnet_ids: List[str]):\n        \"\"\"\n        Add subnets to the container group.\n\n        Args:\n            subnet_ids: A list of subnet ids to add to the container group.\n        \"\"\"\n        self.arm_template[\"resources\"][0][\"properties\"][\"subnetIds\"] = [\n            {\"id\": subnet_id} for subnet_id in subnet_ids\n        ]\n\n    def _add_dns_servers(self, dns_servers: List[str]):\n        \"\"\"\n        Add dns servers to the container group.\n\n        Args:\n            dns_servers: A list of dns servers to add to the container group.\n        \"\"\"\n        self.arm_template[\"resources\"][0][\"properties\"][\"dnsConfig\"] = {\n            \"nameServers\": dns_servers\n        }\n\n    def _get_arm_environment(self):\n        \"\"\"\n        Returns the environment variables to pass to the ARM template.\n        \"\"\"\n        env = {**self._base_environment(), **self.env}\n\n        azure_env = [\n            {\"name\": key, \"secureValue\": value}\n            if key in ENV_SECRETS\n            else {\"name\": key, \"value\": value}\n            for key, value in env.items()\n        ]\n        return azure_env\n
    "},{"location":"integrations/prefect-azure/container_instance_worker/#prefect_azure.workers.container_instance.AzureContainerJobConfiguration.prepare_for_flow_run","title":"prepare_for_flow_run","text":"

    Prepares the job configuration for a flow run.

    Source code in prefect_azure/workers/container_instance.py
    def prepare_for_flow_run(\n    self,\n    flow_run: \"FlowRun\",\n    deployment: Optional[\"DeploymentResponse\"] = None,\n    flow: Optional[\"Flow\"] = None,\n):\n    \"\"\"\n    Prepares the job configuration for a flow run.\n    \"\"\"\n    super().prepare_for_flow_run(flow_run, deployment, flow)\n\n    # expectations:\n    # - the first resource in the template is the container group\n    # - the container group has a single container\n    container_group = self.arm_template[\"resources\"][0]\n    container = container_group[\"properties\"][\"containers\"][0]\n\n    # set the container's environment variables\n    container[\"properties\"][\"environmentVariables\"] = self._get_arm_environment()\n\n    # convert the command from a string to a list, because that's what ACI expects\n    if self.command:\n        container[\"properties\"][\"command\"] = self.command.split(\" \")\n\n    self._add_image()\n\n    # Add the entrypoint if provided. Creating an ACI container with a\n    # command overrides the container's built-in entrypoint. Prefect base images\n    # use entrypoint.sh as the entrypoint, so we need to add to the beginning of\n    # the command list to avoid breaking EXTRA_PIP_PACKAGES installation on\n    # container startup.\n    if self.entrypoint:\n        container[\"properties\"][\"command\"].insert(0, self.entrypoint)\n\n    if self.image_registry:\n        self._add_image_registry_credentials(self.image_registry)\n\n    if self.identities:\n        self._add_identities(self.identities)\n\n    if self.subnet_ids:\n        self._add_subnets(self.subnet_ids)\n\n    if self.dns_servers:\n        self._add_dns_servers(self.dns_servers)\n
    "},{"location":"integrations/prefect-azure/container_instance_worker/#prefect_azure.workers.container_instance.AzureContainerVariables","title":"AzureContainerVariables","text":"

    Bases: BaseVariables

    Variables for an Azure Container Instance flow run.

    Source code in prefect_azure/workers/container_instance.py
    class AzureContainerVariables(BaseVariables):\n    \"\"\"\n    Variables for an Azure Container Instance flow run.\n    \"\"\"\n\n    image: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The image to use for the Prefect container in the task. This value \"\n            \"defaults to a Prefect base image matching your local versions.\"\n        ),\n    )\n    resource_group_name: str = Field(\n        default=...,\n        title=\"Azure Resource Group Name\",\n        description=(\n            \"The name of the Azure Resource Group in which to run Prefect ACI tasks.\"\n        ),\n    )\n    subscription_id: SecretStr = Field(\n        default=...,\n        title=\"Azure Subscription ID\",\n        description=\"The ID of the Azure subscription to create containers under.\",\n    )\n    identities: Optional[List[str]] = Field(\n        title=\"Identities\",\n        default=None,\n        description=(\n            \"A list of user-assigned identities to associate with the container group. \"\n            \"The identities should be an ARM resource IDs in the form: \"\n            \"'/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identityName}'.\"  # noqa\n        ),\n    )\n    entrypoint: Optional[str] = Field(\n        default=DEFAULT_CONTAINER_ENTRYPOINT,\n        description=(\n            \"The entrypoint of the container you wish you run. This value \"\n            \"defaults to the entrypoint used by Prefect images and should only be \"\n            \"changed when using a custom image that is not based on an official \"\n            \"Prefect image. Any commands set on deployments will be passed \"\n            \"to the entrypoint as parameters.\"\n        ),\n    )\n    image_registry: Optional[\n        Union[\n            prefect.infrastructure.container.DockerRegistry,\n            ACRManagedIdentity,\n        ]\n    ] = Field(\n        default=None,\n        title=\"Image Registry (Optional)\",\n        description=(\n            \"To use any private container registry with a username and password, \"\n            \"choose DockerRegistry. To use a private Azure Container Registry \"\n            \"with a managed identity, choose ACRManagedIdentity.\"\n        ),\n    )\n    cpu: float = Field(\n        title=\"CPU\",\n        default=ACI_DEFAULT_CPU,\n        description=(\n            \"The number of virtual CPUs to assign to the task container. \"\n            f\"If not provided, a default value of {ACI_DEFAULT_CPU} will be used.\"\n        ),\n    )\n    gpu_count: Optional[int] = Field(\n        title=\"GPU Count\",\n        default=None,\n        description=(\n            \"The number of GPUs to assign to the task container. \"\n            \"If not provided, no GPU will be used.\"\n        ),\n    )\n    gpu_sku: Optional[str] = Field(\n        title=\"GPU SKU\",\n        default=None,\n        description=(\n            \"The Azure GPU SKU to use. See the ACI documentation for a list of \"\n            \"GPU SKUs available in each Azure region.\"\n        ),\n    )\n    memory: float = Field(\n        default=ACI_DEFAULT_MEMORY,\n        description=(\n            \"The amount of memory in gigabytes to provide to the ACI task. Valid \"\n            \"amounts are specified in the Azure documentation. 
If not provided, a \"\n            f\"default value of  {ACI_DEFAULT_MEMORY} will be used unless present \"\n            \"on the task definition.\"\n        ),\n    )\n    subnet_ids: Optional[List[str]] = Field(\n        title=\"Subnet IDs\",\n        default=None,\n        description=(\"A list of subnet IDs to associate with the container group. \"),\n    )\n    dns_servers: Optional[List[str]] = Field(\n        title=\"DNS Servers\",\n        default=None,\n        description=(\"A list of DNS servers to associate with the container group.\"),\n    )\n    aci_credentials: AzureContainerInstanceCredentials = Field(\n        default_factory=AzureContainerInstanceCredentials,\n        description=(\"The credentials to use to authenticate with Azure.\"),\n    )\n    stream_output: bool = Field(\n        default=False,\n        description=(\n            \"If `True`, logs will be streamed from the Prefect container to the local \"\n            \"console.\"\n        ),\n    )\n    # Execution settings\n    task_start_timeout_seconds: int = Field(\n        default=240,\n        description=(\n            \"The amount of time to watch for the start of the ACI container. \"\n            \"before marking it as failed.\"\n        ),\n    )\n    task_watch_poll_interval: float = Field(\n        default=5.0,\n        description=(\n            \"The number of seconds to wait between Azure API calls while monitoring \"\n            \"the state of an Azure Container Instances task.\"\n        ),\n    )\n    keep_container_group: bool = Field(\n        default=False,\n        title=\"Keep Container Group After Completion\",\n        description=\"Keep the completed container group on Azure.\",\n    )\n
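    When deploying a flow to an Azure Container Instance work pool, these variables can typically be overridden per deployment. The following is a sketch that assumes `flow.deploy` with `job_variables` is available in your Prefect version; the flow, pool, and image names are placeholders.

    from prefect import flow


    @flow(log_prints=True)
    def my_flow():
        print("Hello from ACI!")


    if __name__ == "__main__":
        my_flow.deploy(
            name="aci-example",
            work_pool_name="my-aci-pool",          # hypothetical ACI work pool
            image="prefecthq/prefect:2-latest",    # use an existing image
            build=False,                           # don't build a new image
            push=False,                            # don't push to a registry
            job_variables={"cpu": 2.0, "memory": 4.0},
        )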
    "},{"location":"integrations/prefect-azure/container_instance_worker/#prefect_azure.workers.container_instance.AzureContainerWorker","title":"AzureContainerWorker","text":"

    Bases: BaseWorker

    A Prefect worker that runs flows in an Azure Container Instance.

    Source code in prefect_azure/workers/container_instance.py
    class AzureContainerWorker(BaseWorker):\n    \"\"\"\n    A Prefect worker that runs flows in an Azure Container Instance.\n    \"\"\"\n\n    type = \"azure-container-instance\"\n    job_configuration = AzureContainerJobConfiguration\n    job_configuration_variables = AzureContainerVariables\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/54e3fa7e00197a4fbd1d82ed62494cb58d08c96a-250x250.png\"  # noqa\n    _display_name = \"Azure Container Instances\"\n    _description = (\n        \"Execute flow runs within containers on Azure's Container Instances \"\n        \"service. Requires an Azure account.\"\n    )\n    _documentation_url = (\n        \"https://prefecthq.github.io/prefect-azure/container_instance_worker/\"\n    )\n\n    async def run(\n        self,\n        flow_run: FlowRun,\n        configuration: AzureContainerJobConfiguration,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ):\n        \"\"\"\n        Run a flow in an Azure Container Instance.\n        Args:\n            flow_run: The flow run to run.\n            configuration: The configuration for the flow run.\n            task_status: The task status object for the current task. Used\n            to provide an identifier that can be used to cancel the task.\n\n        Returns:\n            The result of the flow run.\n        \"\"\"\n        run_start_time = datetime.datetime.now(datetime.timezone.utc)\n        prefect_client = get_client()\n\n        # Get the flow, so we can use its name in the container group name\n        # to make it easier to identify and debug.\n        flow = await prefect_client.read_flow(flow_run.flow_id)\n        container_group_name = f\"{flow.name}-{flow_run.id}\"\n\n        # Slugify flow.name if the generated name will be too long for the\n        # max deployment name length (64) including \"prefect-\"\n        if len(container_group_name) > 55:\n            slugified_flow_name = slugify(\n                flow.name,\n                max_length=55 - len(str(flow_run.id)),\n                regex_pattern=r\"[^a-zA-Z0-9-]+\",\n            )\n            container_group_name = f\"{slugified_flow_name}-{flow_run.id}\"\n\n        self._logger.info(\n            f\"{self._log_prefix}: Preparing to run command {configuration.command} \"\n            f\"in container  {configuration.image})...\"\n        )\n\n        aci_client = configuration.aci_credentials.get_container_client(\n            configuration.subscription_id.get_secret_value()\n        )\n        resource_client = configuration.aci_credentials.get_resource_client(\n            configuration.subscription_id.get_secret_value()\n        )\n\n        created_container_group: Union[ContainerGroup, None] = None\n        try:\n            self._logger.info(f\"{self._log_prefix}: Creating container group...\")\n\n            created_container_group = await self._provision_container_group(\n                aci_client,\n                resource_client,\n                configuration,\n                container_group_name,\n            )\n            # Both the flow ID and container group name will be needed to\n            # cancel the flow run if needed.\n            identifier = f\"{flow_run.id}:{container_group_name}\"\n\n            if self._provisioning_succeeded(created_container_group):\n                self._logger.info(f\"{self._log_prefix}: Running command...\")\n                if task_status is not None:\n                    task_status.started(value=identifier)\n\n                status_code 
= await run_sync_in_worker_thread(\n                    self._watch_task_and_get_exit_code,\n                    aci_client,\n                    configuration,\n                    created_container_group,\n                    run_start_time,\n                )\n\n                self._logger.info(f\"{self._log_prefix}: Completed command run.\")\n\n            else:\n                raise RuntimeError(f\"{self._log_prefix}: Container creation failed.\")\n\n        finally:\n            if configuration.keep_container_group:\n                self._logger.info(f\"{self._log_prefix}: Stopping container group...\")\n                aci_client.container_groups.stop(\n                    resource_group_name=configuration.resource_group_name,\n                    container_group_name=container_group_name,\n                )\n            else:\n                await self._wait_for_container_group_deletion(\n                    aci_client, configuration, container_group_name\n                )\n\n        return AzureContainerWorkerResult(\n            identifier=created_container_group.name, status_code=status_code\n        )\n\n    async def kill_infrastructure(\n        self,\n        infrastructure_pid: str,\n        configuration: AzureContainerJobConfiguration,\n    ):\n        \"\"\"\n        Kill a flow running in an ACI container group.\n\n        Args:\n            infrastructure_pid: The container group identification data yielded by\n                `AzureContainerInstanceJob.run`.\n            configuration: The job configuration.\n        \"\"\"\n        (flow_run_id, container_group_name) = infrastructure_pid.split(\":\")\n\n        aci_client = configuration.aci_credentials.get_container_client(\n            configuration.subscription_id.get_secret_value()\n        )\n\n        # get the container group to check that it still exists\n        try:\n            container_group = aci_client.container_groups.get(\n                resource_group_name=configuration.resource_group_name,\n                container_group_name=container_group_name,\n            )\n        except ResourceNotFoundError as exc:\n            # the container group no longer exists, so there's nothing to cancel\n            raise InfrastructureNotFound(\n                f\"Cannot stop ACI job: container group \"\n                f\"{container_group_name} no longer exists.\"\n            ) from exc\n\n        # get the container state to check if the container has terminated\n        container = self._get_container(container_group)\n        container_state = container.instance_view.current_state.state\n\n        # the container group needs to be deleted regardless of whether the container\n        # already terminated\n        await self._wait_for_container_group_deletion(\n            aci_client, configuration, container_group_name\n        )\n\n        # if the container has already terminated, raise an exception to let the agent\n        # know the flow was not cancelled\n        if container_state == ContainerRunState.TERMINATED:\n            raise InfrastructureNotAvailable(\n                f\"Cannot stop ACI job: container group {container_group.name} exists, \"\n                f\"but container {container.name} has already terminated.\"\n            )\n\n    def _wait_for_task_container_start(\n        self,\n        client: ContainerInstanceManagementClient,\n        configuration: AzureContainerJobConfiguration,\n        container_group_name: str,\n        creation_status_poller: 
LROPoller[DeploymentExtended],\n    ) -> Optional[ContainerGroup]:\n        \"\"\"\n        Wait for the result of group and container creation.\n\n        Args:\n            creation_status_poller: Poller returned by the Azure SDK.\n\n        Raises:\n            RuntimeError: Raised if the timeout limit is exceeded before the\n            container starts.\n\n        Returns:\n            A `ContainerGroup` representing the current status of the group being\n            watched, or None if creation failed.\n        \"\"\"\n        t0 = time.time()\n        timeout = configuration.task_start_timeout_seconds\n\n        while not creation_status_poller.done():\n            elapsed_time = time.time() - t0\n\n            if timeout and elapsed_time > timeout:\n                raise RuntimeError(\n                    (\n                        f\"Timed out after {elapsed_time}s while watching waiting for \"\n                        \"container start.\"\n                    )\n                )\n            time.sleep(configuration.task_watch_poll_interval)\n\n        deployment = creation_status_poller.result()\n\n        provisioning_succeeded = (\n            deployment.properties.provisioning_state\n            == ContainerGroupProvisioningState.SUCCEEDED\n        )\n\n        if provisioning_succeeded:\n            return self._get_container_group(\n                client, configuration.resource_group_name, container_group_name\n            )\n        else:\n            return None\n\n    async def _provision_container_group(\n        self,\n        aci_client: ContainerInstanceManagementClient,\n        resource_client: ResourceManagementClient,\n        configuration: AzureContainerJobConfiguration,\n        container_group_name: str,\n    ):\n        \"\"\"\n        Create a container group and wait for it to start.\n        Args:\n            aci_client: An authenticated ACI client.\n            resource_client: An authenticated resource client.\n            configuration: The job configuration.\n            container_group_name: The name of the container group to create.\n\n        Returns:\n            A `ContainerGroup` representing the container group that was created.\n        \"\"\"\n        properties = DeploymentProperties(\n            mode=DeploymentMode.INCREMENTAL,\n            template=configuration.arm_template,\n            parameters={\"container_group_name\": {\"value\": container_group_name}},\n        )\n        deployment = Deployment(properties=properties)\n\n        creation_status_poller = await run_sync_in_worker_thread(\n            resource_client.deployments.begin_create_or_update,\n            resource_group_name=configuration.resource_group_name,\n            deployment_name=f\"prefect-{container_group_name}\",\n            parameters=deployment,\n        )\n\n        created_container_group = await run_sync_in_worker_thread(\n            self._wait_for_task_container_start,\n            aci_client,\n            configuration,\n            container_group_name,\n            creation_status_poller,\n        )\n\n        return created_container_group\n\n    def _watch_task_and_get_exit_code(\n        self,\n        client: ContainerInstanceManagementClient,\n        configuration: AzureContainerJobConfiguration,\n        container_group: ContainerGroup,\n        run_start_time: datetime.datetime,\n    ) -> int:\n        \"\"\"\n        Waits until the container finishes running and obtains its exit code.\n\n        Args:\n            client: An initialized 
Azure `ContainerInstanceManagementClient`\n            container_group: The `ContainerGroup` in which the container resides.\n\n        Returns:\n            An `int` representing the container's exit code.\n        \"\"\"\n        status_code = -1\n        running_container = self._get_container(container_group)\n        current_state = running_container.instance_view.current_state.state\n\n        # get any logs the container has already generated\n        last_log_time = run_start_time\n        if configuration.stream_output:\n            last_log_time = self._get_and_stream_output(\n                client=client,\n                configuration=configuration,\n                container_group=container_group,\n                last_log_time=last_log_time,\n            )\n\n        # set exit code if flow run already finished:\n        if current_state == ContainerRunState.TERMINATED:\n            status_code = running_container.instance_view.current_state.exit_code\n\n        while current_state != ContainerRunState.TERMINATED:\n            try:\n                container_group = self._get_container_group(\n                    client,\n                    configuration.resource_group_name,\n                    container_group.name,\n                )\n            except ResourceNotFoundError:\n                self._logger.exception(\n                    f\"{self._log_prefix}: Container group was deleted before flow run \"\n                    \"completed, likely due to flow cancellation.\"\n                )\n\n                # since the flow was cancelled, exit early instead of raising an\n                # exception\n                return status_code\n\n            container = self._get_container(container_group)\n            current_state = container.instance_view.current_state.state\n\n            if current_state == ContainerRunState.TERMINATED:\n                status_code = container.instance_view.current_state.exit_code\n                # break instead of waiting for next loop iteration because\n                # trying to read logs from a terminated container raises an exception\n                break\n\n            if configuration.stream_output:\n                last_log_time = self._get_and_stream_output(\n                    client=client,\n                    configuration=configuration,\n                    container_group=container_group,\n                    last_log_time=last_log_time,\n                )\n\n            time.sleep(configuration.task_watch_poll_interval)\n\n        return status_code\n\n    async def _wait_for_container_group_deletion(\n        self,\n        aci_client: ContainerInstanceManagementClient,\n        configuration: AzureContainerJobConfiguration,\n        container_group_name: str,\n    ):\n        \"\"\"\n        Wait for the container group to be deleted.\n        Args:\n            aci_client: An authenticated ACI client.\n            configuration: The job configuration.\n            container_group_name: The name of the container group to delete.\n        \"\"\"\n        self._logger.info(f\"{self._log_prefix}: Deleting container...\")\n\n        deletion_status_poller = await run_sync_in_worker_thread(\n            aci_client.container_groups.begin_delete,\n            resource_group_name=configuration.resource_group_name,\n            container_group_name=container_group_name,\n        )\n\n        t0 = time.time()\n        timeout = CONTAINER_GROUP_DELETION_TIMEOUT_SECONDS\n\n        while not deletion_status_poller.done():\n        
    elapsed_time = time.time() - t0\n\n            if timeout and elapsed_time > timeout:\n                raise RuntimeError(\n                    (\n                        f\"Timed out after {elapsed_time}s while waiting for deletion of\"\n                        f\" container group {container_group_name}. To verify the group \"\n                        \"has been deleted, check the Azure Portal or run \"\n                        f\"az container show --name {container_group_name} --resource-group {configuration.resource_group_name}\"  # noqa\n                    )\n                )\n            await anyio.sleep(configuration.task_watch_poll_interval)\n\n        self._logger.info(f\"{self._log_prefix}: Container deleted.\")\n\n    def _get_container(self, container_group: ContainerGroup) -> Container:\n        \"\"\"\n        Extracts the job container from a container group.\n        \"\"\"\n        return container_group.containers[0]\n\n    @staticmethod\n    def _get_container_group(\n        client: ContainerInstanceManagementClient,\n        resource_group_name: str,\n        container_group_name: str,\n    ) -> ContainerGroup:\n        \"\"\"\n        Gets the container group from Azure.\n        \"\"\"\n        return client.container_groups.get(\n            resource_group_name=resource_group_name,\n            container_group_name=container_group_name,\n        )\n\n    def _get_and_stream_output(\n        self,\n        client: ContainerInstanceManagementClient,\n        configuration: AzureContainerJobConfiguration,\n        container_group: ContainerGroup,\n        last_log_time: datetime.datetime,\n    ) -> datetime.datetime:\n        \"\"\"\n        Fetches logs output from the job container and writes all entries after\n        a given time to stderr.\n\n        Args:\n            client: An initialized `ContainerInstanceManagementClient`\n            container_group: The container group that holds the job container.\n            last_log_time: The timestamp of the last output line already streamed.\n\n        Returns:\n            The time of the most recent output line written by this call.\n        \"\"\"\n        logs = self._get_logs(\n            client=client, configuration=configuration, container_group=container_group\n        )\n        return self._stream_output(logs, last_log_time)\n\n    def _get_logs(\n        self,\n        client: ContainerInstanceManagementClient,\n        configuration: AzureContainerJobConfiguration,\n        container_group: ContainerGroup,\n        max_lines: int = 100,\n    ) -> str:\n        \"\"\"\n        Gets the most container logs up to a given maximum.\n\n        Args:\n            client: An initialized `ContainerInstanceManagementClient`\n            container_group: The container group that holds the job container.\n            max_lines: The number of log lines to pull. 
Defaults to 100.\n\n        Returns:\n            A string containing the requested log entries, one per line.\n        \"\"\"\n        container = self._get_container(container_group)\n\n        logs: Union[Logs, None] = None\n        try:\n            logs = client.containers.list_logs(\n                resource_group_name=configuration.resource_group_name,\n                container_group_name=container_group.name,\n                container_name=container.name,\n                tail=max_lines,\n                timestamps=True,\n            )\n        except HttpResponseError:\n            # Trying to get logs when the container is under heavy CPU load sometimes\n            # results in an error, but we won't want to raise an exception and stop\n            # monitoring the flow. Instead, log the error and carry on so we can try to\n            # get all missed logs on the next check.\n            self._logger.warning(\n                f\"{self._log_prefix}: Unable to retrieve logs from container \"\n                f\"{container.name}. Trying again in \"\n                f\"{configuration.task_watch_poll_interval}s\"\n            )\n\n        return logs.content if logs else \"\"\n\n    def _stream_output(\n        self, log_content: Union[str, None], last_log_time: datetime.datetime\n    ) -> datetime.datetime:\n        \"\"\"\n        Writes each entry from a string of log lines to stderr.\n\n        Args:\n            log_content: A string containing Azure container logs.\n            last_log_time: The timestamp of the last output line already streamed.\n\n        Returns:\n            The time of the most recent output line written by this call.\n        \"\"\"\n        if not log_content:\n            # nothing to stream\n            return last_log_time\n\n        log_lines = log_content.split(\"\\n\")\n\n        last_written_time = last_log_time\n\n        for log_line in log_lines:\n            # skip if the line is blank or whitespace\n            if not log_line.strip():\n                continue\n\n            line_parts = log_line.split(\" \")\n            # timestamp should always be before first space in line\n            line_timestamp = line_parts[0]\n            line = \" \".join(line_parts[1:])\n\n            try:\n                line_time = dateutil.parser.parse(line_timestamp)\n                if line_time > last_written_time:\n                    self._write_output_line(line)\n                    last_written_time = line_time\n            except dateutil.parser.ParserError as e:\n                self._logger.debug(\n                    (\n                        f\"{self._log_prefix}: Unable to parse timestamp from Azure \"\n                        \"log line: %s\"\n                    ),\n                    log_line,\n                    exc_info=e,\n                )\n\n        return last_written_time\n\n    @property\n    def _log_prefix(self) -> str:\n        \"\"\"\n        Internal property for generating a prefix for logs where `name` may be null\n        \"\"\"\n        if self.name is not None:\n            return f\"AzureContainerInstanceJob {self.name!r}\"\n        else:\n            return \"AzureContainerInstanceJob\"\n\n    @staticmethod\n    def _provisioning_succeeded(container_group: Union[ContainerGroup, None]) -> bool:\n        \"\"\"\n        Determines whether ACI container group provisioning was successful.\n\n        Args:\n            container_group: a container group returned by the Azure SDK.\n\n        Returns:\n            True if 
provisioning was successful, False otherwise.\n        \"\"\"\n        if not container_group:\n            return False\n\n        return (\n            container_group.provisioning_state\n            == ContainerGroupProvisioningState.SUCCEEDED\n            and len(container_group.containers) == 1\n        )\n\n    @staticmethod\n    def _write_output_line(line: str):\n        \"\"\"\n        Writes a line of output to stderr.\n        \"\"\"\n        print(line, file=sys.stderr)\n
    "},{"location":"integrations/prefect-azure/container_instance_worker/#prefect_azure.workers.container_instance.AzureContainerWorker.kill_infrastructure","title":"kill_infrastructure async","text":"

    Kill a flow running in an ACI container group.

    Parameters:

    | Name | Type | Description | Default |
    | --- | --- | --- | --- |
    | infrastructure_pid | str | The container group identification data yielded by AzureContainerInstanceJob.run. | required |
    | configuration | AzureContainerJobConfiguration | The job configuration. | required |

    Source code in prefect_azure/workers/container_instance.py
    async def kill_infrastructure(\n    self,\n    infrastructure_pid: str,\n    configuration: AzureContainerJobConfiguration,\n):\n    \"\"\"\n    Kill a flow running in an ACI container group.\n\n    Args:\n        infrastructure_pid: The container group identification data yielded by\n            `AzureContainerInstanceJob.run`.\n        configuration: The job configuration.\n    \"\"\"\n    (flow_run_id, container_group_name) = infrastructure_pid.split(\":\")\n\n    aci_client = configuration.aci_credentials.get_container_client(\n        configuration.subscription_id.get_secret_value()\n    )\n\n    # get the container group to check that it still exists\n    try:\n        container_group = aci_client.container_groups.get(\n            resource_group_name=configuration.resource_group_name,\n            container_group_name=container_group_name,\n        )\n    except ResourceNotFoundError as exc:\n        # the container group no longer exists, so there's nothing to cancel\n        raise InfrastructureNotFound(\n            f\"Cannot stop ACI job: container group \"\n            f\"{container_group_name} no longer exists.\"\n        ) from exc\n\n    # get the container state to check if the container has terminated\n    container = self._get_container(container_group)\n    container_state = container.instance_view.current_state.state\n\n    # the container group needs to be deleted regardless of whether the container\n    # already terminated\n    await self._wait_for_container_group_deletion(\n        aci_client, configuration, container_group_name\n    )\n\n    # if the container has already terminated, raise an exception to let the agent\n    # know the flow was not cancelled\n    if container_state == ContainerRunState.TERMINATED:\n        raise InfrastructureNotAvailable(\n            f\"Cannot stop ACI job: container group {container_group.name} exists, \"\n            f\"but container {container.name} has already terminated.\"\n        )\n
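    Example

    A minimal cancellation sketch, assuming an existing AzureContainerJobConfiguration and a work pool named my-aci-pool (both placeholders). The infrastructure_pid is the flow run ID and the container group name joined by a colon, matching the identifier reported by run.

    ```python
    import anyio

    from prefect_azure.workers.container_instance import AzureContainerWorker


    async def cancel_aci_flow_run(configuration, flow_run_id: str, container_group_name: str):
        # The worker expects "<flow_run_id>:<container_group_name>".
        infrastructure_pid = f"{flow_run_id}:{container_group_name}"
        async with AzureContainerWorker(work_pool_name="my-aci-pool") as worker:
            await worker.kill_infrastructure(
                infrastructure_pid=infrastructure_pid,
                configuration=configuration,
            )


    # anyio.run(cancel_aci_flow_run, configuration, flow_run_id, container_group_name)
    ```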
    "},{"location":"integrations/prefect-azure/container_instance_worker/#prefect_azure.workers.container_instance.AzureContainerWorker.run","title":"run async","text":"

    Run a flow in an Azure Container Instance.

    Parameters:

    | Name | Type | Description | Default |
    | --- | --- | --- | --- |
    | flow_run | FlowRun | The flow run to run. | required |
    | configuration | AzureContainerJobConfiguration | The configuration for the flow run. | required |
    | task_status | Optional[anyio.abc.TaskStatus] | The task status object for the current task. Used to provide an identifier that can be used to cancel the task. | None |

    Returns:

    | Type | Description |
    | --- | --- |
    |  | The result of the flow run. |

    Source code in prefect_azure/workers/container_instance.py
    async def run(\n    self,\n    flow_run: FlowRun,\n    configuration: AzureContainerJobConfiguration,\n    task_status: Optional[anyio.abc.TaskStatus] = None,\n):\n    \"\"\"\n    Run a flow in an Azure Container Instance.\n    Args:\n        flow_run: The flow run to run.\n        configuration: The configuration for the flow run.\n        task_status: The task status object for the current task. Used\n        to provide an identifier that can be used to cancel the task.\n\n    Returns:\n        The result of the flow run.\n    \"\"\"\n    run_start_time = datetime.datetime.now(datetime.timezone.utc)\n    prefect_client = get_client()\n\n    # Get the flow, so we can use its name in the container group name\n    # to make it easier to identify and debug.\n    flow = await prefect_client.read_flow(flow_run.flow_id)\n    container_group_name = f\"{flow.name}-{flow_run.id}\"\n\n    # Slugify flow.name if the generated name will be too long for the\n    # max deployment name length (64) including \"prefect-\"\n    if len(container_group_name) > 55:\n        slugified_flow_name = slugify(\n            flow.name,\n            max_length=55 - len(str(flow_run.id)),\n            regex_pattern=r\"[^a-zA-Z0-9-]+\",\n        )\n        container_group_name = f\"{slugified_flow_name}-{flow_run.id}\"\n\n    self._logger.info(\n        f\"{self._log_prefix}: Preparing to run command {configuration.command} \"\n        f\"in container  {configuration.image})...\"\n    )\n\n    aci_client = configuration.aci_credentials.get_container_client(\n        configuration.subscription_id.get_secret_value()\n    )\n    resource_client = configuration.aci_credentials.get_resource_client(\n        configuration.subscription_id.get_secret_value()\n    )\n\n    created_container_group: Union[ContainerGroup, None] = None\n    try:\n        self._logger.info(f\"{self._log_prefix}: Creating container group...\")\n\n        created_container_group = await self._provision_container_group(\n            aci_client,\n            resource_client,\n            configuration,\n            container_group_name,\n        )\n        # Both the flow ID and container group name will be needed to\n        # cancel the flow run if needed.\n        identifier = f\"{flow_run.id}:{container_group_name}\"\n\n        if self._provisioning_succeeded(created_container_group):\n            self._logger.info(f\"{self._log_prefix}: Running command...\")\n            if task_status is not None:\n                task_status.started(value=identifier)\n\n            status_code = await run_sync_in_worker_thread(\n                self._watch_task_and_get_exit_code,\n                aci_client,\n                configuration,\n                created_container_group,\n                run_start_time,\n            )\n\n            self._logger.info(f\"{self._log_prefix}: Completed command run.\")\n\n        else:\n            raise RuntimeError(f\"{self._log_prefix}: Container creation failed.\")\n\n    finally:\n        if configuration.keep_container_group:\n            self._logger.info(f\"{self._log_prefix}: Stopping container group...\")\n            aci_client.container_groups.stop(\n                resource_group_name=configuration.resource_group_name,\n                container_group_name=container_group_name,\n            )\n        else:\n            await self._wait_for_container_group_deletion(\n                aci_client, configuration, container_group_name\n            )\n\n    return AzureContainerWorkerResult(\n        
identifier=created_container_group.name, status_code=status_code\n    )\n
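    Example

    A short sketch of consuming the worker's return value, assuming an already constructed worker, flow run, and job configuration (all placeholders). The result's identifier is the created container group's name and status_code is the container's exit code.

    ```python
    from prefect_azure.workers.container_instance import (
        AzureContainerJobConfiguration,
        AzureContainerWorker,
        AzureContainerWorkerResult,
    )


    async def run_and_inspect(
        worker: AzureContainerWorker,
        flow_run,
        configuration: AzureContainerJobConfiguration,
    ) -> AzureContainerWorkerResult:
        # Run the flow run in ACI and inspect the result returned by the worker.
        result = await worker.run(flow_run=flow_run, configuration=configuration)
        print(result.identifier, result.status_code)
        return result
    ```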
    "},{"location":"integrations/prefect-azure/container_instance_worker/#prefect_azure.workers.container_instance.AzureContainerWorkerResult","title":"AzureContainerWorkerResult","text":"

    Bases: BaseWorkerResult

    Contains information about the final state of a completed process

    Source code in prefect_azure/workers/container_instance.py
    class AzureContainerWorkerResult(BaseWorkerResult):\n    \"\"\"Contains information about the final state of a completed process\"\"\"\n
    "},{"location":"integrations/prefect-azure/container_instance_worker/#prefect_azure.workers.container_instance.ContainerGroupProvisioningState","title":"ContainerGroupProvisioningState","text":"

    Bases: str, Enum

    Terminal provisioning states for ACI container groups. Per the Azure docs, the states in this Enum are the only ones that can be relied on as dependencies.

    Source code in prefect_azure/workers/container_instance.py
    class ContainerGroupProvisioningState(str, Enum):\n    \"\"\"\n    Terminal provisioning states for ACI container groups. Per the Azure docs,\n    the states in this Enum are the only ones that can be relied on as dependencies.\n    \"\"\"\n\n    SUCCEEDED = \"Succeeded\"\n    FAILED = \"Failed\"\n
    "},{"location":"integrations/prefect-azure/container_instance_worker/#prefect_azure.workers.container_instance.ContainerRunState","title":"ContainerRunState","text":"

    Bases: str, Enum

    Terminal run states for ACI containers.

    Source code in prefect_azure/workers/container_instance.py
    class ContainerRunState(str, Enum):\n    \"\"\"\n    Terminal run states for ACI containers.\n    \"\"\"\n\n    RUNNING = \"Running\"\n    TERMINATED = \"Terminated\"\n
    "},{"location":"integrations/prefect-azure/cosmos_db/","title":"Cosmos DB","text":""},{"location":"integrations/prefect-azure/cosmos_db/#prefect_azure.cosmos_db","title":"prefect_azure.cosmos_db","text":"

    Tasks for interacting with Azure Cosmos DB

    "},{"location":"integrations/prefect-azure/cosmos_db/#prefect_azure.cosmos_db.cosmos_db_create_item","title":"cosmos_db_create_item async","text":"

    Create an item in the container.

    To update or replace an existing item, use the upsert_item method.

    Parameters:

    | Name | Type | Description | Default |
    | --- | --- | --- | --- |
    | body | Dict[str, Any] | A dict-like object representing the item to create. | required |
    | container | Union[str, ContainerProxy, Dict[str, Any]] | The ID (name) of the container, a ContainerProxy instance, or a dict representing the properties of the container to be retrieved. | required |
    | database | Union[str, DatabaseProxy, Dict[str, Any]] | The ID (name), dict representing the properties or DatabaseProxy instance of the database to read. | required |
    | cosmos_db_credentials | AzureCosmosDbCredentials | Credentials to use for authentication with Azure. | required |
    | **kwargs | Any | Additional keyword arguments to pass. | {} |

    Returns:

    | Type | Description |
    | --- | --- |
    | Dict[str, Any] | A dict representing the new item. |

    Example

    Create an item in the container.

    To update or replace an existing item, use the upsert_item method.

    import uuid\n\nfrom prefect import flow\n\nfrom prefect_azure import AzureCosmosDbCredentials\nfrom prefect_azure.cosmos_db import cosmos_db_create_item\n\n@flow\ndef example_cosmos_db_create_item_flow():\n    connection_string = \"connection_string\"\n    cosmos_db_credentials = AzureCosmosDbCredentials(connection_string)\n\n    body = {\n        \"firstname\": \"Olivia\",\n        \"age\": 3,\n        \"id\": str(uuid.uuid4())\n    }\n    container = \"Persons\"\n    database = \"SampleDB\"\n\n    result = cosmos_db_create_item(\n        body,\n        container,\n        database,\n        cosmos_db_credentials\n    )\n    return result\n\nexample_cosmos_db_create_item_flow()\n

    Source code in prefect_azure/cosmos_db.py
    @task\nasync def cosmos_db_create_item(\n    body: Dict[str, Any],\n    container: Union[str, \"ContainerProxy\", Dict[str, Any]],\n    database: Union[str, \"DatabaseProxy\", Dict[str, Any]],\n    cosmos_db_credentials: AzureCosmosDbCredentials,\n    **kwargs: Any,\n) -> Dict[str, Any]:\n    \"\"\"\n    Create an item in the container.\n\n    To update or replace an existing item, use the upsert_item method.\n\n    Args:\n        body: A dict-like object representing the item to create.\n        container: The ID (name) of the container, a ContainerProxy instance,\n            or a dict representing the properties of the container to be retrieved.\n        database: The ID (name), dict representing the properties\n            or DatabaseProxy instance of the database to read.\n        cosmos_db_credentials: Credentials to use for authentication with Azure.\n        **kwargs: Additional keyword arguments to pass.\n\n    Returns:\n        A dict representing the new item.\n\n    Example:\n        Create an item in the container.\n\n        To update or replace an existing item, use the upsert_item method.\n        ```python\n        import uuid\n\n        from prefect import flow\n\n        from prefect_azure import AzureCosmosDbCredentials\n        from prefect_azure.cosmos_db import cosmos_db_create_item\n\n        @flow\n        def example_cosmos_db_create_item_flow():\n            connection_string = \"connection_string\"\n            cosmos_db_credentials = AzureCosmosDbCredentials(connection_string)\n\n            body = {\n                \"firstname\": \"Olivia\",\n                \"age\": 3,\n                \"id\": str(uuid.uuid4())\n            }\n            container = \"Persons\"\n            database = \"SampleDB\"\n\n            result = cosmos_db_create_item(\n                body,\n                container,\n                database,\n                cosmos_db_credentials\n            )\n            return result\n\n        example_cosmos_db_create_item_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\n        \"Creating the item within container %s under %s database\",\n        container,\n        database,\n    )\n\n    container_client = cosmos_db_credentials.get_container_client(container, database)\n    create_item = partial(container_client.create_item, body, **kwargs)\n    result = await to_thread.run_sync(create_item)\n    return result\n
    "},{"location":"integrations/prefect-azure/cosmos_db/#prefect_azure.cosmos_db.cosmos_db_query_items","title":"cosmos_db_query_items async","text":"

    Return all results matching the given query.

    You can use any value for the container name in the FROM clause; it does not need to match the actual container name. In the example below, the container \"Persons\" is queried and the FROM clause uses \"c\" as an alias for easier referencing in the WHERE clause.

    Parameters:

    | Name | Type | Description | Default |
    | --- | --- | --- | --- |
    | query | str | The Azure Cosmos DB SQL query to execute. | required |
    | container | Union[str, ContainerProxy, Dict[str, Any]] | The ID (name) of the container, a ContainerProxy instance, or a dict representing the properties of the container to be retrieved. | required |
    | database | Union[str, DatabaseProxy, Dict[str, Any]] | The ID (name), dict representing the properties or DatabaseProxy instance of the database to read. | required |
    | cosmos_db_credentials | AzureCosmosDbCredentials | Credentials to use for authentication with Azure. | required |
    | parameters | Optional[List[Dict[str, object]]] | Optional array of parameters to the query. Each parameter is a dict() with 'name' and 'value' keys. | None |
    | partition_key | Optional[Any] | Partition key for the item to retrieve. | None |
    | **kwargs | Any | Additional keyword arguments to pass. | {} |

    Returns:

    | Type | Description |
    | --- | --- |
    | List[Union[str, dict]] | A list of results. |

    Example

    Query SampleDB Persons container where age >= 44

    from prefect import flow\n\nfrom prefect_azure import AzureCosmosDbCredentials\nfrom prefect_azure.cosmos_db import cosmos_db_query_items\n\n@flow\ndef example_cosmos_db_query_items_flow():\n    connection_string = \"connection_string\"\n    cosmos_db_credentials = AzureCosmosDbCredentials(connection_string)\n\n    query = \"SELECT * FROM c where c.age >= @age\"\n    container = \"Persons\"\n    database = \"SampleDB\"\n    parameters = [dict(name=\"@age\", value=44)]\n\n    results = cosmos_db_query_items(\n        query,\n        container,\n        database,\n        cosmos_db_credentials,\n        parameters=parameters,\n        enable_cross_partition_query=True,\n    )\n    return results\n\nexample_cosmos_db_query_items_flow()\n

    Source code in prefect_azure/cosmos_db.py
    @task\nasync def cosmos_db_query_items(\n    query: str,\n    container: Union[str, \"ContainerProxy\", Dict[str, Any]],\n    database: Union[str, \"DatabaseProxy\", Dict[str, Any]],\n    cosmos_db_credentials: AzureCosmosDbCredentials,\n    parameters: Optional[List[Dict[str, object]]] = None,\n    partition_key: Optional[Any] = None,\n    **kwargs: Any,\n) -> List[Union[str, dict]]:\n    \"\"\"\n    Return all results matching the given query.\n\n    You can use any value for the container name in the FROM clause,\n    but often the container name is used.\n    In the examples below, the container name is \"products,\"\n    and is aliased as \"p\" for easier referencing in the WHERE clause.\n\n    Args:\n        query: The Azure Cosmos DB SQL query to execute.\n        container: The ID (name) of the container, a ContainerProxy instance,\n            or a dict representing the properties of the container to be retrieved.\n        database: The ID (name), dict representing the properties\n            or DatabaseProxy instance of the database to read.\n        cosmos_db_credentials: Credentials to use for authentication with Azure.\n        parameters: Optional array of parameters to the query.\n            Each parameter is a dict() with 'name' and 'value' keys.\n        partition_key: Partition key for the item to retrieve.\n        **kwargs: Additional keyword arguments to pass.\n\n    Returns:\n        An `list` of results.\n\n    Example:\n        Query SampleDB Persons container where age >= 44\n        ```python\n        from prefect import flow\n\n        from prefect_azure import AzureCosmosDbCredentials\n        from prefect_azure.cosmos_db import cosmos_db_query_items\n\n        @flow\n        def example_cosmos_db_query_items_flow():\n            connection_string = \"connection_string\"\n            cosmos_db_credentials = AzureCosmosDbCredentials(connection_string)\n\n            query = \"SELECT * FROM c where c.age >= @age\"\n            container = \"Persons\"\n            database = \"SampleDB\"\n            parameters = [dict(name=\"@age\", value=44)]\n\n            results = cosmos_db_query_items(\n                query,\n                container,\n                database,\n                cosmos_db_credentials,\n                parameters=parameters,\n                enable_cross_partition_query=True,\n            )\n            return results\n\n        example_cosmos_db_query_items_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Running query from container %s in %s database\", container, database)\n\n    container_client = cosmos_db_credentials.get_container_client(container, database)\n    partial_query_items = partial(\n        container_client.query_items,\n        query,\n        parameters=parameters,\n        partition_key=partition_key,\n        **kwargs,\n    )\n    results = await to_thread.run_sync(partial_query_items)\n    return results\n
    "},{"location":"integrations/prefect-azure/cosmos_db/#prefect_azure.cosmos_db.cosmos_db_read_item","title":"cosmos_db_read_item async","text":"

    Get the item identified by item.

    Parameters:

    | Name | Type | Description | Default |
    | --- | --- | --- | --- |
    | item | Union[str, Dict[str, Any]] | The ID (name) or dict representing the item to retrieve. | required |
    | partition_key | Any | Partition key for the item to retrieve. | required |
    | container | Union[str, ContainerProxy, Dict[str, Any]] | The ID (name) of the container, a ContainerProxy instance, or a dict representing the properties of the container to be retrieved. | required |
    | database | Union[str, DatabaseProxy, Dict[str, Any]] | The ID (name), dict representing the properties or DatabaseProxy instance of the database to read. | required |
    | cosmos_db_credentials | AzureCosmosDbCredentials | Credentials to use for authentication with Azure. | required |
    | **kwargs | Any | Additional keyword arguments to pass. | {} |

    Returns:

    | Type | Description |
    | --- | --- |
    | List[Union[str, dict]] | Dict representing the item to be retrieved. |

    Example

    Read an item using a partition key from Cosmos DB.

    from prefect import flow\n\nfrom prefect_azure import AzureCosmosDbCredentials\nfrom prefect_azure.cosmos_db import cosmos_db_read_item\n\n@flow\ndef example_cosmos_db_read_item_flow():\n    connection_string = \"connection_string\"\n    cosmos_db_credentials = AzureCosmosDbCredentials(connection_string)\n\n    item = \"item\"\n    partition_key = \"partition_key\"\n    container = \"container\"\n    database = \"database\"\n\n    result = cosmos_db_read_item(\n        item,\n        partition_key,\n        container,\n        database,\n        cosmos_db_credentials\n    )\n    return result\n\nexample_cosmos_db_read_item_flow()\n

    Source code in prefect_azure/cosmos_db.py
    @task\nasync def cosmos_db_read_item(\n    item: Union[str, Dict[str, Any]],\n    partition_key: Any,\n    container: Union[str, \"ContainerProxy\", Dict[str, Any]],\n    database: Union[str, \"DatabaseProxy\", Dict[str, Any]],\n    cosmos_db_credentials: AzureCosmosDbCredentials,\n    **kwargs: Any,\n) -> List[Union[str, dict]]:\n    \"\"\"\n    Get the item identified by item.\n\n    Args:\n        item: The ID (name) or dict representing item to retrieve.\n        partition_key: Partition key for the item to retrieve.\n        container: The ID (name) of the container, a ContainerProxy instance,\n            or a dict representing the properties of the container to be retrieved.\n        database: The ID (name), dict representing the properties\n            or DatabaseProxy instance of the database to read.\n        cosmos_db_credentials: Credentials to use for authentication with Azure.\n        **kwargs: Additional keyword arguments to pass.\n\n    Returns:\n        Dict representing the item to be retrieved.\n\n    Example:\n        Read an item using a partition key from Cosmos DB.\n        ```python\n        from prefect import flow\n\n        from prefect_azure import AzureCosmosDbCredentials\n        from prefect_azure.cosmos_db import cosmos_db_read_item\n\n        @flow\n        def example_cosmos_db_read_item_flow():\n            connection_string = \"connection_string\"\n            cosmos_db_credentials = AzureCosmosDbCredentials(connection_string)\n\n            item = \"item\"\n            partition_key = \"partition_key\"\n            container = \"container\"\n            database = \"database\"\n\n            result = cosmos_db_read_item(\n                item,\n                partition_key,\n                container,\n                database,\n                cosmos_db_credentials\n            )\n            return result\n\n        example_cosmos_db_read_item_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\n        \"Reading item %s with partition_key %s from container %s in %s database\",\n        item,\n        partition_key,\n        container,\n        database,\n    )\n\n    container_client = cosmos_db_credentials.get_container_client(container, database)\n    read_item = partial(container_client.read_item, item, partition_key, **kwargs)\n    result = await to_thread.run_sync(read_item)\n    return result\n
    "},{"location":"integrations/prefect-azure/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials","title":"prefect_azure.credentials","text":"

    Credential classes used to perform authenticated interactions with Azure

    "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureBlobStorageCredentials","title":"AzureBlobStorageCredentials","text":"

    Bases: Block

    Stores credentials for authenticating with Azure Blob Storage.

    Parameters:

    | Name | Type | Description | Default |
    | --- | --- | --- | --- |
    | account_url |  | The URL for your Azure storage account. If provided, the account URL will be used to authenticate with the discovered default Azure credentials. | required |
    | connection_string |  | The connection string to your Azure storage account. If provided, the connection string will take precedence over the account URL. | required |

    Example

    Load stored Azure Blob Storage credentials and retrieve a blob service client:

    from prefect_azure import AzureBlobStorageCredentials\n\nazure_credentials_block = AzureBlobStorageCredentials.load(\"BLOCK_NAME\")\n\nblob_service_client = azure_credentials_block.get_blob_client()\n

    Source code in prefect_azure/credentials.py
    class AzureBlobStorageCredentials(Block):\n    \"\"\"\n    Stores credentials for authenticating with Azure Blob Storage.\n\n    Args:\n        account_url: The URL for your Azure storage account. If provided, the account\n            URL will be used to authenticate with the discovered default Azure\n            credentials.\n        connection_string: The connection string to your Azure storage account. If\n            provided, the connection string will take precedence over the account URL.\n\n    Example:\n        Load stored Azure Blob Storage credentials and retrieve a blob service client:\n        ```python\n        from prefect_azure import AzureBlobStorageCredentials\n\n        azure_credentials_block = AzureBlobStorageCredentials.load(\"BLOCK_NAME\")\n\n        blob_service_client = azure_credentials_block.get_blob_client()\n        ```\n    \"\"\"\n\n    _block_type_name = \"Azure Blob Storage Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/54e3fa7e00197a4fbd1d82ed62494cb58d08c96a-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-azure/credentials/#prefect_azure.credentials.AzureBlobStorageCredentials\"  # noqa\n\n    connection_string: Optional[SecretStr] = Field(\n        default=None,\n        description=(\n            \"The connection string to your Azure storage account. If provided, the \"\n            \"connection string will take precedence over the account URL.\"\n        ),\n    )\n    account_url: Optional[str] = Field(\n        default=None,\n        title=\"Account URL\",\n        description=(\n            \"The URL for your Azure storage account. If provided, the account \"\n            \"URL will be used to authenticate with the discovered default \"\n            \"Azure credentials.\"\n        ),\n    )\n\n    @root_validator\n    def check_connection_string_or_account_url(\n        cls, values: Dict[str, Any]\n    ) -> Dict[str, Any]:\n        \"\"\"\n        Checks that either a connection string or account URL is provided, not both.\n        \"\"\"\n        has_account_url = values.get(\"account_url\") is not None\n        has_conn_str = values.get(\"connection_string\") is not None\n        if not has_account_url and not has_conn_str:\n            raise ValueError(\n                \"Must provide either a connection string or an account URL.\"\n            )\n        if has_account_url and has_conn_str:\n            raise ValueError(\n                \"Must provide either a connection string or account URL, but not both.\"\n            )\n        return values\n\n    @_raise_help_msg(\"blob_storage\")\n    def get_client(self) -> \"BlobServiceClient\":\n        \"\"\"\n        Returns an authenticated base Blob Service client that can be used to create\n        other clients for Azure services.\n\n        Example:\n            Create an authorized Blob Service session\n            ```python\n            import os\n            import asyncio\n            from prefect import flow\n            from prefect_azure import AzureBlobStorageCredentials\n\n            @flow\n            async def example_get_client_flow():\n                connection_string = os.getenv(\"AZURE_STORAGE_CONNECTION_STRING\")\n                azure_credentials = AzureBlobStorageCredentials(\n                    connection_string=connection_string,\n                )\n                async with azure_credentials.get_client() as blob_service_client:\n                    # run other code here\n                    
pass\n\n            asyncio.run(example_get_client_flow())\n            ```\n        \"\"\"\n        if self.connection_string is None:\n            return BlobServiceClient(\n                account_url=self.account_url,\n                credential=ADefaultAzureCredential(),\n            )\n\n        return BlobServiceClient.from_connection_string(\n            self.connection_string.get_secret_value()\n        )\n\n    @_raise_help_msg(\"blob_storage\")\n    def get_blob_client(self, container, blob) -> \"BlobClient\":\n        \"\"\"\n        Returns an authenticated Blob client that can be used to\n        download and upload blobs.\n\n        Args:\n            container: Name of the Blob Storage container to retrieve from.\n            blob: Name of the blob within this container to retrieve.\n\n        Example:\n            Create an authorized Blob session\n            ```python\n            import os\n            import asyncio\n            from prefect import flow\n            from prefect_azure import AzureBlobStorageCredentials\n\n            @flow\n            async def example_get_blob_client_flow():\n                connection_string = os.getenv(\"AZURE_STORAGE_CONNECTION_STRING\")\n                azure_credentials = AzureBlobStorageCredentials(\n                    connection_string=connection_string,\n                )\n                async with azure_credentials.get_blob_client(\n                    \"container\", \"blob\"\n                ) as blob_client:\n                    # run other code here\n                    pass\n\n            asyncio.run(example_get_blob_client_flow())\n            ```\n        \"\"\"\n        if self.connection_string is None:\n            return BlobClient(\n                account_url=self.account_url,\n                container_name=container,\n                credential=ADefaultAzureCredential(),\n                blob_name=blob,\n            )\n\n        blob_client = BlobClient.from_connection_string(\n            self.connection_string.get_secret_value(), container, blob\n        )\n        return blob_client\n\n    @_raise_help_msg(\"blob_storage\")\n    def get_container_client(self, container) -> \"ContainerClient\":\n        \"\"\"\n        Returns an authenticated Container client that can be used to create clients\n        for Azure services.\n\n        Args:\n            container: Name of the Blob Storage container to retrieve from.\n\n        Example:\n            Create an authorized Container session\n            ```python\n            import os\n            import asyncio\n            from prefect import flow\n            from prefect_azure import AzureBlobStorageCredentials\n\n            @flow\n            async def example_get_container_client_flow():\n                connection_string = os.getenv(\"AZURE_STORAGE_CONNECTION_STRING\")\n                azure_credentials = AzureBlobStorageCredentials(\n                    connection_string=connection_string,\n                )\n                async with azure_credentials.get_container_client(\n                    \"container\"\n                ) as container_client:\n                    # run other code here\n                    pass\n\n            asyncio.run(example_get_container_client_flow())\n            ```\n        \"\"\"\n        if self.connection_string is None:\n            return ContainerClient(\n                account_url=self.account_url,\n                container_name=container,\n                credential=ADefaultAzureCredential(),\n            )\n\n  
      container_client = ContainerClient.from_connection_string(\n            self.connection_string.get_secret_value(), container\n        )\n        return container_client\n
    "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureBlobStorageCredentials.check_connection_string_or_account_url","title":"check_connection_string_or_account_url","text":"

    Checks that either a connection string or account URL is provided, not both.

    Source code in prefect_azure/credentials.py
    @root_validator\ndef check_connection_string_or_account_url(\n    cls, values: Dict[str, Any]\n) -> Dict[str, Any]:\n    \"\"\"\n    Checks that either a connection string or account URL is provided, not both.\n    \"\"\"\n    has_account_url = values.get(\"account_url\") is not None\n    has_conn_str = values.get(\"connection_string\") is not None\n    if not has_account_url and not has_conn_str:\n        raise ValueError(\n            \"Must provide either a connection string or an account URL.\"\n        )\n    if has_account_url and has_conn_str:\n        raise ValueError(\n            \"Must provide either a connection string or account URL, but not both.\"\n        )\n    return values\n
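    Example

    A small sketch of the validation behavior, using placeholder values: exactly one of account_url or connection_string must be set, and providing both (or neither) raises a ValueError during block construction.

    ```python
    from prefect_azure.credentials import AzureBlobStorageCredentials

    # Passing exactly one of the two fields passes validation.
    credentials = AzureBlobStorageCredentials(
        account_url="https://myaccount.blob.core.windows.net"
    )

    # Passing both fields fails validation; pydantic surfaces the validator's
    # ValueError as a ValidationError (a ValueError subclass).
    try:
        AzureBlobStorageCredentials(
            account_url="https://myaccount.blob.core.windows.net",
            connection_string="placeholder-connection-string",
        )
    except ValueError as exc:
        print(exc)
    ```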
    "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureBlobStorageCredentials.get_blob_client","title":"get_blob_client","text":"

    Returns an authenticated Blob client that can be used to download and upload blobs.

    Parameters:

    | Name | Type | Description | Default |
    | --- | --- | --- | --- |
    | container |  | Name of the Blob Storage container to retrieve from. | required |
    | blob |  | Name of the blob within this container to retrieve. | required |

    Example

    Create an authorized Blob session

    import os\nimport asyncio\nfrom prefect import flow\nfrom prefect_azure import AzureBlobStorageCredentials\n\n@flow\nasync def example_get_blob_client_flow():\n    connection_string = os.getenv(\"AZURE_STORAGE_CONNECTION_STRING\")\n    azure_credentials = AzureBlobStorageCredentials(\n        connection_string=connection_string,\n    )\n    async with azure_credentials.get_blob_client(\n        \"container\", \"blob\"\n    ) as blob_client:\n        # run other code here\n        pass\n\nasyncio.run(example_get_blob_client_flow())\n

    Source code in prefect_azure/credentials.py
    @_raise_help_msg(\"blob_storage\")\ndef get_blob_client(self, container, blob) -> \"BlobClient\":\n    \"\"\"\n    Returns an authenticated Blob client that can be used to\n    download and upload blobs.\n\n    Args:\n        container: Name of the Blob Storage container to retrieve from.\n        blob: Name of the blob within this container to retrieve.\n\n    Example:\n        Create an authorized Blob session\n        ```python\n        import os\n        import asyncio\n        from prefect import flow\n        from prefect_azure import AzureBlobStorageCredentials\n\n        @flow\n        async def example_get_blob_client_flow():\n            connection_string = os.getenv(\"AZURE_STORAGE_CONNECTION_STRING\")\n            azure_credentials = AzureBlobStorageCredentials(\n                connection_string=connection_string,\n            )\n            async with azure_credentials.get_blob_client(\n                \"container\", \"blob\"\n            ) as blob_client:\n                # run other code here\n                pass\n\n        asyncio.run(example_get_blob_client_flow())\n        ```\n    \"\"\"\n    if self.connection_string is None:\n        return BlobClient(\n            account_url=self.account_url,\n            container_name=container,\n            credential=ADefaultAzureCredential(),\n            blob_name=blob,\n        )\n\n    blob_client = BlobClient.from_connection_string(\n        self.connection_string.get_secret_value(), container, blob\n    )\n    return blob_client\n
    "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureBlobStorageCredentials.get_client","title":"get_client","text":"

    Returns an authenticated base Blob Service client that can be used to create other clients for Azure services.

    Example

    Create an authorized Blob Service session

    import os\nimport asyncio\nfrom prefect import flow\nfrom prefect_azure import AzureBlobStorageCredentials\n\n@flow\nasync def example_get_client_flow():\n    connection_string = os.getenv(\"AZURE_STORAGE_CONNECTION_STRING\")\n    azure_credentials = AzureBlobStorageCredentials(\n        connection_string=connection_string,\n    )\n    async with azure_credentials.get_client() as blob_service_client:\n        # run other code here\n        pass\n\nasyncio.run(example_get_client_flow())\n

    Source code in prefect_azure/credentials.py
    @_raise_help_msg(\"blob_storage\")\ndef get_client(self) -> \"BlobServiceClient\":\n    \"\"\"\n    Returns an authenticated base Blob Service client that can be used to create\n    other clients for Azure services.\n\n    Example:\n        Create an authorized Blob Service session\n        ```python\n        import os\n        import asyncio\n        from prefect import flow\n        from prefect_azure import AzureBlobStorageCredentials\n\n        @flow\n        async def example_get_client_flow():\n            connection_string = os.getenv(\"AZURE_STORAGE_CONNECTION_STRING\")\n            azure_credentials = AzureBlobStorageCredentials(\n                connection_string=connection_string,\n            )\n            async with azure_credentials.get_client() as blob_service_client:\n                # run other code here\n                pass\n\n        asyncio.run(example_get_client_flow())\n        ```\n    \"\"\"\n    if self.connection_string is None:\n        return BlobServiceClient(\n            account_url=self.account_url,\n            credential=ADefaultAzureCredential(),\n        )\n\n    return BlobServiceClient.from_connection_string(\n        self.connection_string.get_secret_value()\n    )\n
    "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureBlobStorageCredentials.get_container_client","title":"get_container_client","text":"

    Returns an authenticated Container client that can be used to create clients for Azure services.

    Parameters:

    | Name | Type | Description | Default |
    | --- | --- | --- | --- |
    | container |  | Name of the Blob Storage container to retrieve from. | required |

    Example

    Create an authorized Container session

    import os\nimport asyncio\nfrom prefect import flow\nfrom prefect_azure import AzureBlobStorageCredentials\n\n@flow\nasync def example_get_container_client_flow():\n    connection_string = os.getenv(\"AZURE_STORAGE_CONNECTION_STRING\")\n    azure_credentials = AzureBlobStorageCredentials(\n        connection_string=connection_string,\n    )\n    async with azure_credentials.get_container_client(\n        \"container\"\n    ) as container_client:\n        # run other code here\n        pass\n\nasyncio.run(example_get_container_client_flow())\n

    Source code in prefect_azure/credentials.py
    @_raise_help_msg(\"blob_storage\")\ndef get_container_client(self, container) -> \"ContainerClient\":\n    \"\"\"\n    Returns an authenticated Container client that can be used to create clients\n    for Azure services.\n\n    Args:\n        container: Name of the Blob Storage container to retrieve from.\n\n    Example:\n        Create an authorized Container session\n        ```python\n        import os\n        import asyncio\n        from prefect import flow\n        from prefect_azure import AzureBlobStorageCredentials\n\n        @flow\n        async def example_get_container_client_flow():\n            connection_string = os.getenv(\"AZURE_STORAGE_CONNECTION_STRING\")\n            azure_credentials = AzureBlobStorageCredentials(\n                connection_string=connection_string,\n            )\n            async with azure_credentials.get_container_client(\n                \"container\"\n            ) as container_client:\n                # run other code here\n                pass\n\n        asyncio.run(example_get_container_client_flow())\n        ```\n    \"\"\"\n    if self.connection_string is None:\n        return ContainerClient(\n            account_url=self.account_url,\n            container_name=container,\n            credential=ADefaultAzureCredential(),\n        )\n\n    container_client = ContainerClient.from_connection_string(\n        self.connection_string.get_secret_value(), container\n    )\n    return container_client\n
    "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureContainerInstanceCredentials","title":"AzureContainerInstanceCredentials","text":"

    Bases: Block

    Block used to manage Azure Container Instances authentication. Stores Azure Service Principal authentication data.
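
    Example

    A usage sketch with placeholder values and block name: provide client_id, tenant_id, and client_secret together for service principal authentication, leave all three unset to fall back to DefaultAzureCredential, or load a saved block.

    ```python
    from prefect_azure.credentials import AzureContainerInstanceCredentials

    # Service principal authentication: all three values must be provided together.
    aci_credentials = AzureContainerInstanceCredentials(
        client_id="my-client-id",
        tenant_id="my-tenant-id",
        client_secret="my-client-secret",
    )

    # With no fields set, the block falls back to DefaultAzureCredential.
    default_aci_credentials = AzureContainerInstanceCredentials()

    # Load a previously saved block.
    loaded_credentials = AzureContainerInstanceCredentials.load("BLOCK_NAME")
    ```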

    Source code in prefect_azure/credentials.py
    class AzureContainerInstanceCredentials(Block):\n    \"\"\"\n    Block used to manage Azure Container Instances authentication. Stores Azure Service\n    Principal authentication data.\n    \"\"\"\n\n    _block_type_name = \"Azure Container Instance Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/54e3fa7e00197a4fbd1d82ed62494cb58d08c96a-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-azure/credentials/#prefect_azure.credentials.AzureContainerInstanceCredentials\"  # noqa\n\n    client_id: Optional[str] = Field(\n        default=None,\n        title=\"Client ID\",\n        description=(\n            \"The service principal client ID. \"\n            \"If none of client_id, tenant_id, and client_secret are provided, \"\n            \"will use DefaultAzureCredential; else will need to provide all three to \"\n            \"use ClientSecretCredential.\"\n        ),\n    )\n    tenant_id: Optional[str] = Field(\n        default=None,\n        title=\"Tenant ID\",\n        description=(\n            \"The service principal tenant ID.\"\n            \"If none of client_id, tenant_id, and client_secret are provided, \"\n            \"will use DefaultAzureCredential; else will need to provide all three to \"\n            \"use ClientSecretCredential.\"\n        ),\n    )\n    client_secret: Optional[SecretStr] = Field(\n        default=None,\n        description=(\n            \"The service principal client secret.\"\n            \"If none of client_id, tenant_id, and client_secret are provided, \"\n            \"will use DefaultAzureCredential; else will need to provide all three to \"\n            \"use ClientSecretCredential.\"\n        ),\n    )\n    credential_kwargs: Dict[str, Any] = Field(\n        default_factory=dict,\n        title=\"Additional Credential Keyword Arguments\",\n        description=(\n            \"Additional keyword arguments to pass to \"\n            \"`ClientSecretCredential` or `DefaultAzureCredential`.\"\n        ),\n    )\n\n    @root_validator\n    def validate_credential_kwargs(cls, values):\n        \"\"\"\n        Validates that if any of `client_id`, `tenant_id`, or `client_secret` are\n        provided, all must be provided.\n        \"\"\"\n        auth_args = (\"client_id\", \"tenant_id\", \"client_secret\")\n        has_any = any(values.get(key) is not None for key in auth_args)\n        has_all = all(values.get(key) is not None for key in auth_args)\n        if has_any and not has_all:\n            raise ValueError(\n                \"If any of `client_id`, `tenant_id`, or `client_secret` are provided, \"\n                \"all must be provided.\"\n            )\n        return values\n\n    def get_container_client(self, subscription_id: str):\n        \"\"\"\n        Creates an Azure Container Instances client initialized with data from\n        this block's fields and a provided Azure subscription ID.\n\n        Args:\n            subscription_id: A valid Azure subscription ID.\n\n        Returns:\n            An initialized `ContainerInstanceManagementClient`\n        \"\"\"\n\n        return ContainerInstanceManagementClient(\n            credential=self._create_credential(),\n            subscription_id=subscription_id,\n        )\n\n    def get_resource_client(self, subscription_id: str):\n        \"\"\"\n        Creates an Azure resource management client initialized with data from\n        this block's fields and a provided Azure subscription ID.\n\n        Args:\n          
  subscription_id: A valid Azure subscription ID.\n\n        Returns:\n            An initialized `ResourceManagementClient`\n        \"\"\"\n\n        return ResourceManagementClient(\n            credential=self._create_credential(),\n            subscription_id=subscription_id,\n        )\n\n    def _create_credential(self):\n        \"\"\"\n        Creates an Azure credential initialized with data from this block's fields.\n\n        Returns:\n            An initialized Azure `TokenCredential` ready to use with Azure SDK client\n            classes.\n        \"\"\"\n        auth_args = (self.client_id, self.tenant_id, self.client_secret)\n        if auth_args == (None, None, None):\n            return DefaultAzureCredential(**self.credential_kwargs)\n\n        return ClientSecretCredential(\n            tenant_id=self.tenant_id,\n            client_id=self.client_id,\n            client_secret=self.client_secret.get_secret_value(),\n            **self.credential_kwargs,\n        )\n
    "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureContainerInstanceCredentials.get_container_client","title":"get_container_client","text":"

    Creates an Azure Container Instances client initialized with data from this block's fields and a provided Azure subscription ID.

    Parameters:

    | Name | Type | Description | Default |
    | --- | --- | --- | --- |
    | subscription_id | str | A valid Azure subscription ID. | required |

    Returns:

    | Type | Description |
    | --- | --- |
    |  | An initialized ContainerInstanceManagementClient |

    Source code in prefect_azure/credentials.py
    def get_container_client(self, subscription_id: str):\n    \"\"\"\n    Creates an Azure Container Instances client initialized with data from\n    this block's fields and a provided Azure subscription ID.\n\n    Args:\n        subscription_id: A valid Azure subscription ID.\n\n    Returns:\n        An initialized `ContainerInstanceManagementClient`\n    \"\"\"\n\n    return ContainerInstanceManagementClient(\n        credential=self._create_credential(),\n        subscription_id=subscription_id,\n    )\n
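    Example

    A brief sketch using placeholder block and subscription names; the returned client is the Azure SDK ContainerInstanceManagementClient, so any of its operations groups can be used, for example listing container groups in a resource group.

    ```python
    from prefect_azure.credentials import AzureContainerInstanceCredentials

    aci_credentials = AzureContainerInstanceCredentials.load("BLOCK_NAME")
    client = aci_credentials.get_container_client("my-subscription-id")

    # List container groups in a resource group with the returned client.
    for group in client.container_groups.list_by_resource_group("my-resource-group"):
        print(group.name)
    ```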
    "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureContainerInstanceCredentials.get_resource_client","title":"get_resource_client","text":"

    Creates an Azure resource management client initialized with data from this block's fields and a provided Azure subscription ID.

    Parameters:

    | Name | Type | Description | Default |
    | --- | --- | --- | --- |
    | subscription_id | str | A valid Azure subscription ID. | required |

    Returns:

    | Type | Description |
    | --- | --- |
    |  | An initialized ResourceManagementClient |

    Source code in prefect_azure/credentials.py
    def get_resource_client(self, subscription_id: str):\n    \"\"\"\n    Creates an Azure resource management client initialized with data from\n    this block's fields and a provided Azure subscription ID.\n\n    Args:\n        subscription_id: A valid Azure subscription ID.\n\n    Returns:\n        An initialized `ResourceManagementClient`\n    \"\"\"\n\n    return ResourceManagementClient(\n        credential=self._create_credential(),\n        subscription_id=subscription_id,\n    )\n
    "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureContainerInstanceCredentials.validate_credential_kwargs","title":"validate_credential_kwargs","text":"

    Validates that if any of client_id, tenant_id, or client_secret are provided, all must be provided.

    Source code in prefect_azure/credentials.py
    @root_validator\ndef validate_credential_kwargs(cls, values):\n    \"\"\"\n    Validates that if any of `client_id`, `tenant_id`, or `client_secret` are\n    provided, all must be provided.\n    \"\"\"\n    auth_args = (\"client_id\", \"tenant_id\", \"client_secret\")\n    has_any = any(values.get(key) is not None for key in auth_args)\n    has_all = all(values.get(key) is not None for key in auth_args)\n    if has_any and not has_all:\n        raise ValueError(\n            \"If any of `client_id`, `tenant_id`, or `client_secret` are provided, \"\n            \"all must be provided.\"\n        )\n    return values\n
    "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureCosmosDbCredentials","title":"AzureCosmosDbCredentials","text":"

    Bases: Block

    Block used to manage Cosmos DB authentication with Azure. Azure authentication is handled via the azure module through a connection string.

    Parameters:

    | Name | Type | Description | Default |
    | --- | --- | --- | --- |
    | connection_string |  | Includes the authorization information required. | required |

    Example

    Load stored Azure Cosmos DB credentials:

    from prefect_azure import AzureCosmosDbCredentials\nazure_credentials_block = AzureCosmosDbCredentials.load(\"BLOCK_NAME\")\n

    Source code in prefect_azure/credentials.py
    class AzureCosmosDbCredentials(Block):\n    \"\"\"\n    Block used to manage Cosmos DB authentication with Azure.\n    Azure authentication is handled via the `azure` module through\n    a connection string.\n\n    Args:\n        connection_string: Includes the authorization information required.\n\n    Example:\n        Load stored Azure Cosmos DB credentials:\n        ```python\n        from prefect_azure import AzureCosmosDbCredentials\n        azure_credentials_block = AzureCosmosDbCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Azure Cosmos DB Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/54e3fa7e00197a4fbd1d82ed62494cb58d08c96a-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-azure/credentials/#prefect_azure.credentials.AzureCosmosDbCredentials\"  # noqa\n\n    connection_string: SecretStr = Field(\n        default=..., description=\"Includes the authorization information required.\"\n    )\n\n    @_raise_help_msg(\"cosmos_db\")\n    def get_client(self) -> \"CosmosClient\":\n        \"\"\"\n        Returns an authenticated Cosmos client that can be used to create\n        other clients for Azure services.\n\n        Example:\n            Create an authorized Cosmos session\n            ```python\n            import os\n            from prefect import flow\n            from prefect_azure import AzureCosmosDbCredentials\n\n            @flow\n            def example_get_client_flow():\n                connection_string = os.getenv(\"AZURE_COSMOS_CONNECTION_STRING\")\n                azure_credentials = AzureCosmosDbCredentials(\n                    connection_string=connection_string,\n                )\n                cosmos_client = azure_credentials.get_client()\n                return cosmos_client\n\n            example_get_client_flow()\n            ```\n        \"\"\"\n        return CosmosClient.from_connection_string(\n            self.connection_string.get_secret_value()\n        )\n\n    def get_database_client(self, database: str) -> \"DatabaseProxy\":\n        \"\"\"\n        Returns an authenticated Database client.\n\n        Args:\n            database: Name of the database.\n\n        Example:\n            Create an authorized Cosmos session\n            ```python\n            import os\n            from prefect import flow\n            from prefect_azure import AzureCosmosDbCredentials\n\n            @flow\n            def example_get_client_flow():\n                connection_string = os.getenv(\"AZURE_COSMOS_CONNECTION_STRING\")\n                azure_credentials = AzureCosmosDbCredentials(\n                    connection_string=connection_string,\n                )\n                cosmos_client = azure_credentials.get_database_client()\n                return cosmos_client\n\n            example_get_database_client_flow()\n            ```\n        \"\"\"\n        cosmos_client = self.get_client()\n        database_client = cosmos_client.get_database_client(database=database)\n        return database_client\n\n    def get_container_client(self, container: str, database: str) -> \"ContainerProxy\":\n        \"\"\"\n        Returns an authenticated Container client used for querying.\n\n        Args:\n            container: Name of the Cosmos DB container to retrieve from.\n            database: Name of the Cosmos DB database.\n\n        Example:\n            Create an authorized Container session\n            ```python\n            import os\n          
  from prefect import flow\n            from prefect_azure import AzureBlobStorageCredentials\n\n            @flow\n            def example_get_container_client_flow():\n                connection_string = os.getenv(\"AZURE_COSMOS_CONNECTION_STRING\")\n                azure_credentials = AzureCosmosDbCredentials(\n                    connection_string=connection_string,\n                )\n                container_client = azure_credentials.get_container_client(container)\n                return container_client\n\n            example_get_container_client_flow()\n            ```\n        \"\"\"\n        database_client = self.get_database_client(database)\n        container_client = database_client.get_container_client(container=container)\n        return container_client\n
    "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureCosmosDbCredentials.get_client","title":"get_client","text":"

    Returns an authenticated Cosmos client that can be used to create other clients for Azure services.

    Example

    Create an authorized Cosmos session

    import os\nfrom prefect import flow\nfrom prefect_azure import AzureCosmosDbCredentials\n\n@flow\ndef example_get_client_flow():\n    connection_string = os.getenv(\"AZURE_COSMOS_CONNECTION_STRING\")\n    azure_credentials = AzureCosmosDbCredentials(\n        connection_string=connection_string,\n    )\n    cosmos_client = azure_credentials.get_client()\n    return cosmos_client\n\nexample_get_client_flow()\n

    Source code in prefect_azure/credentials.py
    @_raise_help_msg(\"cosmos_db\")\ndef get_client(self) -> \"CosmosClient\":\n    \"\"\"\n    Returns an authenticated Cosmos client that can be used to create\n    other clients for Azure services.\n\n    Example:\n        Create an authorized Cosmos session\n        ```python\n        import os\n        from prefect import flow\n        from prefect_azure import AzureCosmosDbCredentials\n\n        @flow\n        def example_get_client_flow():\n            connection_string = os.getenv(\"AZURE_COSMOS_CONNECTION_STRING\")\n            azure_credentials = AzureCosmosDbCredentials(\n                connection_string=connection_string,\n            )\n            cosmos_client = azure_credentials.get_client()\n            return cosmos_client\n\n        example_get_client_flow()\n        ```\n    \"\"\"\n    return CosmosClient.from_connection_string(\n        self.connection_string.get_secret_value()\n    )\n
    "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureCosmosDbCredentials.get_container_client","title":"get_container_client","text":"

    Returns an authenticated Container client used for querying.

    Parameters:

    Name Type Description Default container str

    Name of the Cosmos DB container to retrieve from.

    required database str

    Name of the Cosmos DB database.

    required Example

    Create an authorized Container session

    import os\nfrom prefect import flow\nfrom prefect_azure import AzureCosmosDbCredentials\n\n@flow\ndef example_get_container_client_flow():\n    connection_string = os.getenv(\"AZURE_COSMOS_CONNECTION_STRING\")\n    azure_credentials = AzureCosmosDbCredentials(\n        connection_string=connection_string,\n    )\n    container_client = azure_credentials.get_container_client(\"my-container\", \"my-database\")\n    return container_client\n\nexample_get_container_client_flow()\n

    Source code in prefect_azure/credentials.py
    def get_container_client(self, container: str, database: str) -> \"ContainerProxy\":\n    \"\"\"\n    Returns an authenticated Container client used for querying.\n\n    Args:\n        container: Name of the Cosmos DB container to retrieve from.\n        database: Name of the Cosmos DB database.\n\n    Example:\n        Create an authorized Container session\n        ```python\n        import os\n        from prefect import flow\n        from prefect_azure import AzureCosmosDbCredentials\n\n        @flow\n        def example_get_container_client_flow():\n            connection_string = os.getenv(\"AZURE_COSMOS_CONNECTION_STRING\")\n            azure_credentials = AzureCosmosDbCredentials(\n                connection_string=connection_string,\n            )\n            container_client = azure_credentials.get_container_client(\"my-container\", \"my-database\")\n            return container_client\n\n        example_get_container_client_flow()\n        ```\n    \"\"\"\n    database_client = self.get_database_client(database)\n    container_client = database_client.get_container_client(container=container)\n    return container_client\n
    "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureCosmosDbCredentials.get_database_client","title":"get_database_client","text":"

    Returns an authenticated Database client.

    Parameters:

    Name Type Description Default database str

    Name of the database.

    required Example

    Create an authorized Cosmos session

    import os\nfrom prefect import flow\nfrom prefect_azure import AzureCosmosDbCredentials\n\n@flow\ndef example_get_database_client_flow():\n    connection_string = os.getenv(\"AZURE_COSMOS_CONNECTION_STRING\")\n    azure_credentials = AzureCosmosDbCredentials(\n        connection_string=connection_string,\n    )\n    database_client = azure_credentials.get_database_client(\"my-database\")\n    return database_client\n\nexample_get_database_client_flow()\n

    Source code in prefect_azure/credentials.py
    def get_database_client(self, database: str) -> \"DatabaseProxy\":\n    \"\"\"\n    Returns an authenticated Database client.\n\n    Args:\n        database: Name of the database.\n\n    Example:\n        Create an authorized Cosmos session\n        ```python\n        import os\n        from prefect import flow\n        from prefect_azure import AzureCosmosDbCredentials\n\n        @flow\n        def example_get_database_client_flow():\n            connection_string = os.getenv(\"AZURE_COSMOS_CONNECTION_STRING\")\n            azure_credentials = AzureCosmosDbCredentials(\n                connection_string=connection_string,\n            )\n            database_client = azure_credentials.get_database_client(\"my-database\")\n            return database_client\n\n        example_get_database_client_flow()\n        ```\n    \"\"\"\n    cosmos_client = self.get_client()\n    database_client = cosmos_client.get_database_client(database=database)\n    return database_client\n
    "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureMlCredentials","title":"AzureMlCredentials","text":"

    Bases: Block

    Block used to manage authentication with AzureML. Azure authentication is handled via the azure module.

    Parameters:

    Name Type Description Default tenant_id

    The active directory tenant that the service identity belongs to.

    required service_principal_id

    The service principal ID.

    required service_principal_password

    The service principal password/key.

    required subscription_id

    The Azure subscription ID containing the workspace.

    required resource_group

    The resource group containing the workspace.

    required workspace_name

    The existing workspace name.

    required Example

    Load stored AzureML credentials:

    from prefect_azure import AzureMlCredentials\nazure_ml_credentials_block = AzureMlCredentials.load(\"BLOCK_NAME\")\n

    Source code in prefect_azure/credentials.py
    class AzureMlCredentials(Block):\n    \"\"\"\n    Block used to manage authentication with AzureML. Azure authentication is\n    handled via the `azure` module.\n\n    Args:\n        tenant_id: The active directory tenant that the service identity belongs to.\n        service_principal_id: The service principal ID.\n        service_principal_password: The service principal password/key.\n        subscription_id: The Azure subscription ID containing the workspace.\n        resource_group: The resource group containing the workspace.\n        workspace_name: The existing workspace name.\n\n    Example:\n        Load stored AzureML credentials:\n        ```python\n        from prefect_azure import AzureMlCredentials\n        azure_ml_credentials_block = AzureMlCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"AzureML Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/54e3fa7e00197a4fbd1d82ed62494cb58d08c96a-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-azure/credentials/#prefect_azure.credentials.AzureMlCredentials\"  # noqa\n\n    tenant_id: str = Field(\n        default=...,\n        description=\"The active directory tenant that the service identity belongs to.\",\n    )\n    service_principal_id: str = Field(\n        default=..., description=\"The service principal ID.\"\n    )\n    service_principal_password: SecretStr = Field(\n        default=..., description=\"The service principal password/key.\"\n    )\n    subscription_id: str = Field(\n        default=...,\n        description=\"The Azure subscription ID containing the workspace, in format: '00000000-0000-0000-0000-000000000000'.\",  # noqa\n    )\n    resource_group: str = Field(\n        default=..., description=\"The resource group containing the workspace.\"\n    )\n    workspace_name: str = Field(default=..., description=\"The existing workspace name.\")\n\n    @_raise_help_msg(\"ml_datastore\")\n    def get_workspace(self) -> \"Workspace\":\n        \"\"\"\n        Returns an authenticated base Workspace that can be used in\n        Azure's Datasets and Datastores.\n\n        Example:\n            Create an authorized workspace\n            ```python\n            import os\n            from prefect import flow\n            from prefect_azure import AzureMlCredentials\n            @flow\n            def example_get_workspace_flow():\n                azure_credentials = AzureMlCredentials(\n                    tenant_id=\"tenant_id\",\n                    service_principal_id=\"service_principal_id\",\n                    service_principal_password=\"service_principal_password\",\n                    subscription_id=\"subscription_id\",\n                    resource_group=\"resource_group\",\n                    workspace_name=\"workspace_name\"\n                )\n                workspace_client = azure_credentials.get_workspace()\n                return workspace_client\n            example_get_workspace_flow()\n            ```\n        \"\"\"\n        service_principal_password = self.service_principal_password.get_secret_value()\n        service_principal_authentication = ServicePrincipalAuthentication(\n            tenant_id=self.tenant_id,\n            service_principal_id=self.service_principal_id,\n            service_principal_password=service_principal_password,\n        )\n\n        workspace = Workspace(\n            subscription_id=self.subscription_id,\n            resource_group=self.resource_group,\n 
           workspace_name=self.workspace_name,\n            auth=service_principal_authentication,\n        )\n\n        return workspace\n
    "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureMlCredentials.get_workspace","title":"get_workspace","text":"

    Returns an authenticated base Workspace that can be used in Azure's Datasets and Datastores.

    Example

    Create an authorized workspace

    import os\nfrom prefect import flow\nfrom prefect_azure import AzureMlCredentials\n@flow\ndef example_get_workspace_flow():\n    azure_credentials = AzureMlCredentials(\n        tenant_id=\"tenant_id\",\n        service_principal_id=\"service_principal_id\",\n        service_principal_password=\"service_principal_password\",\n        subscription_id=\"subscription_id\",\n        resource_group=\"resource_group\",\n        workspace_name=\"workspace_name\"\n    )\n    workspace_client = azure_credentials.get_workspace()\n    return workspace_client\nexample_get_workspace_flow()\n

    Source code in prefect_azure/credentials.py
    @_raise_help_msg(\"ml_datastore\")\ndef get_workspace(self) -> \"Workspace\":\n    \"\"\"\n    Returns an authenticated base Workspace that can be used in\n    Azure's Datasets and Datastores.\n\n    Example:\n        Create an authorized workspace\n        ```python\n        import os\n        from prefect import flow\n        from prefect_azure import AzureMlCredentials\n        @flow\n        def example_get_workspace_flow():\n            azure_credentials = AzureMlCredentials(\n                tenant_id=\"tenant_id\",\n                service_principal_id=\"service_principal_id\",\n                service_principal_password=\"service_principal_password\",\n                subscription_id=\"subscription_id\",\n                resource_group=\"resource_group\",\n                workspace_name=\"workspace_name\"\n            )\n            workspace_client = azure_credentials.get_workspace()\n            return workspace_client\n        example_get_workspace_flow()\n        ```\n    \"\"\"\n    service_principal_password = self.service_principal_password.get_secret_value()\n    service_principal_authentication = ServicePrincipalAuthentication(\n        tenant_id=self.tenant_id,\n        service_principal_id=self.service_principal_id,\n        service_principal_password=service_principal_password,\n    )\n\n    workspace = Workspace(\n        subscription_id=self.subscription_id,\n        resource_group=self.resource_group,\n        workspace_name=self.workspace_name,\n        auth=service_principal_authentication,\n    )\n\n    return workspace\n
    "},{"location":"integrations/prefect-azure/ml_datastore/","title":"ML Datastore","text":""},{"location":"integrations/prefect-azure/ml_datastore/#prefect_azure.ml_datastore","title":"prefect_azure.ml_datastore","text":"

    Tasks for interacting with Azure ML Datastore

    "},{"location":"integrations/prefect-azure/ml_datastore/#prefect_azure.ml_datastore.ml_get_datastore","title":"ml_get_datastore async","text":"

    Gets the Datastore within the Workspace.

    Parameters:

    Name Type Description Default ml_credentials AzureMlCredentials

    Credentials to use for authentication with Azure.

    required datastore_name str

    The name of the Datastore. If None, then the default Datastore of the Workspace is returned.

    None Example

    Get Datastore object

    from prefect import flow\nfrom prefect_azure import AzureMlCredentials\nfrom prefect_azure.ml_datastore import ml_get_datastore\n\n@flow\ndef example_ml_get_datastore_flow():\n    ml_credentials = AzureMlCredentials(\n        tenant_id=\"tenant_id\",\n        service_principal_id=\"service_principal_id\",\n        service_principal_password=\"service_principal_password\",\n        subscription_id=\"subscription_id\",\n        resource_group=\"resource_group\",\n        workspace_name=\"workspace_name\",\n    )\n    results = ml_get_datastore(ml_credentials, datastore_name=\"datastore_name\")\n    return results\n

    Source code in prefect_azure/ml_datastore.py
    @task\nasync def ml_get_datastore(\n    ml_credentials: \"AzureMlCredentials\", datastore_name: str = None\n) -> Datastore:\n    \"\"\"\n    Gets the Datastore within the Workspace.\n\n    Args:\n        ml_credentials: Credentials to use for authentication with Azure.\n        datastore_name: The name of the Datastore. If `None`, then the\n            default Datastore of the Workspace is returned.\n\n    Example:\n        Get Datastore object\n        ```python\n        from prefect import flow\n        from prefect_azure import AzureMlCredentials\n        from prefect_azure.ml_datastore import ml_get_datastore\n\n        @flow\n        def example_ml_get_datastore_flow():\n            ml_credentials = AzureMlCredentials(\n                tenant_id=\"tenant_id\",\n                service_principal_id=\"service_principal_id\",\n                service_principal_password=\"service_principal_password\",\n                subscription_id=\"subscription_id\",\n                resource_group=\"resource_group\",\n                workspace_name=\"workspace_name\",\n            )\n            results = ml_get_datastore(ml_credentials, datastore_name=\"datastore_name\")\n            return results\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Getting datastore %s\", datastore_name)\n\n    result = await _get_datastore(ml_credentials, datastore_name)\n    return result\n
    "},{"location":"integrations/prefect-azure/ml_datastore/#prefect_azure.ml_datastore.ml_list_datastores","title":"ml_list_datastores","text":"

    Lists the Datastores in the Workspace.

    Parameters:

    Name Type Description Default ml_credentials AzureMlCredentials

    Credentials to use for authentication with Azure.

    required Example

    List Datastore objects

    from prefect import flow\nfrom prefect_azure import AzureMlCredentials\nfrom prefect_azure.ml_datastore import ml_list_datastores\n\n@flow\ndef example_ml_list_datastores_flow():\n    ml_credentials = AzureMlCredentials(\n        tenant_id=\"tenant_id\",\n        service_principal_id=\"service_principal_id\",\n        service_principal_password=\"service_principal_password\",\n        subscription_id=\"subscription_id\",\n        resource_group=\"resource_group\",\n        workspace_name=\"workspace_name\",\n    )\n    results = ml_list_datastores(ml_credentials)\n    return results\n

    Source code in prefect_azure/ml_datastore.py
    @task\ndef ml_list_datastores(ml_credentials: \"AzureMlCredentials\") -> Dict:\n    \"\"\"\n    Lists the Datastores in the Workspace.\n\n    Args:\n        ml_credentials: Credentials to use for authentication with Azure.\n\n    Example:\n        List Datastore objects\n        ```python\n        from prefect import flow\n        from prefect_azure import AzureMlCredentials\n        from prefect_azure.ml_datastore import ml_list_datastores\n\n        @flow\n        def example_ml_list_datastores_flow():\n            ml_credentials = AzureMlCredentials(\n                tenant_id=\"tenant_id\",\n                service_principal_id=\"service_principal_id\",\n                service_principal_password=\"service_principal_password\",\n                subscription_id=\"subscription_id\",\n                resource_group=\"resource_group\",\n                workspace_name=\"workspace_name\",\n            )\n            results = ml_list_datastores(ml_credentials)\n            return results\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Listing datastores\")\n\n    workspace = ml_credentials.get_workspace()\n    results = workspace.datastores\n    return results\n
    "},{"location":"integrations/prefect-azure/ml_datastore/#prefect_azure.ml_datastore.ml_register_datastore_blob_container","title":"ml_register_datastore_blob_container async","text":"

    Registers an Azure Blob Storage container as a Datastore in an Azure ML service Workspace.

    Parameters:

    Name Type Description Default container_name str

    The name of the container.

    required ml_credentials AzureMlCredentials

    Credentials to use for authentication with Azure ML.

    required blob_storage_credentials AzureBlobStorageCredentials

    Credentials to use for authentication with Azure Blob Storage.

    required datastore_name str

    The name of the datastore. If not defined, the container name will be used.

    None create_container_if_not_exists bool

    Create a container, if one does not exist with the given name.

    False overwrite bool

    Overwrite an existing datastore. If the datastore does not exist, it will be created.

    False set_as_default bool

    Set the created Datastore as the default datastore for the Workspace.

    False Example

    Upload Datastore object

    from prefect import flow\nfrom prefect_azure import AzureBlobStorageCredentials, AzureMlCredentials\nfrom prefect_azure.ml_datastore import ml_register_datastore_blob_container\n\n@flow\ndef example_ml_register_datastore_blob_container_flow():\n    ml_credentials = AzureMlCredentials(\n        tenant_id=\"tenant_id\",\n        service_principal_id=\"service_principal_id\",\n        service_principal_password=\"service_principal_password\",\n        subscription_id=\"subscription_id\",\n        resource_group=\"resource_group\",\n        workspace_name=\"workspace_name\",\n    )\n    blob_storage_credentials = AzureBlobStorageCredentials(\n        connection_string=\"connection_string\"\n    )\n    result = ml_register_datastore_blob_container(\n        \"container\",\n        ml_credentials,\n        blob_storage_credentials,\n        datastore_name=\"datastore_name\"\n    )\n    return result\n

    Source code in prefect_azure/ml_datastore.py
    @task\nasync def ml_register_datastore_blob_container(\n    container_name: str,\n    ml_credentials: \"AzureMlCredentials\",\n    blob_storage_credentials: \"AzureBlobStorageCredentials\",\n    datastore_name: str = None,\n    create_container_if_not_exists: bool = False,\n    overwrite: bool = False,\n    set_as_default: bool = False,\n) -> \"AzureBlobDatastore\":\n    \"\"\"\n    Registers a Azure Blob Storage container as a\n    Datastore in a Azure ML service Workspace.\n\n    Args:\n        container_name: The name of the container.\n        ml_credentials: Credentials to use for authentication with Azure ML.\n        blob_storage_credentials: Credentials to use for authentication\n            with Azure Blob Storage.\n        datastore_name: The name of the datastore. If not defined, the\n            container name will be used.\n        create_container_if_not_exists: Create a container, if one does not\n            exist with the given name.\n        overwrite: Overwrite an existing datastore. If\n            the datastore does not exist, it will be created.\n        set_as_default: Set the created Datastore as the default datastore\n            for the Workspace.\n\n    Example:\n        Upload Datastore object\n        ```python\n        from prefect import flow\n        from prefect_azure import AzureMlCredentials\n        from prefect_azure.ml_datastore import ml_register_datastore_blob_container\n\n        @flow\n        def example_ml_register_datastore_blob_container_flow():\n            ml_credentials = AzureMlCredentials(\n                tenant_id=\"tenant_id\",\n                service_principal_id=\"service_principal_id\",\n                service_principal_password=\"service_principal_password\",\n                subscription_id=\"subscription_id\",\n                resource_group=\"resource_group\",\n                workspace_name=\"workspace_name\",\n            )\n            blob_storage_credentials = AzureBlobStorageCredentials(\"connection_string\")\n            result = ml_register_datastore_blob_container(\n                \"container\",\n                ml_credentials,\n                blob_storage_credentials,\n                datastore_name=\"datastore_name\"\n            )\n            return result\n        ```\n    \"\"\"\n    logger = get_run_logger()\n\n    if datastore_name is None:\n        datastore_name = container_name\n\n    logger.info(\n        \"Registering %s container into %s datastore\", container_name, datastore_name\n    )\n\n    workspace = ml_credentials.get_workspace()\n    async with blob_storage_credentials.get_client() as blob_service_client:\n        credential = blob_service_client.credential\n        account_name = credential.account_name\n        account_key = credential.account_key\n\n    partial_register = partial(\n        Datastore.register_azure_blob_container,\n        workspace=workspace,\n        datastore_name=datastore_name,\n        container_name=container_name,\n        account_name=account_name,\n        account_key=account_key,\n        overwrite=overwrite,\n        create_if_not_exists=create_container_if_not_exists,\n    )\n    result = await to_thread.run_sync(partial_register)\n\n    if set_as_default:\n        result.set_as_default()\n\n    return result\n
    "},{"location":"integrations/prefect-azure/ml_datastore/#prefect_azure.ml_datastore.ml_upload_datastore","title":"ml_upload_datastore async","text":"

    Uploads local files to a Datastore.

    Parameters:

    Name Type Description Default path Union[str, Path, List[Union[str, Path]]]

    The path to a single file, single directory, or a list of paths to files to be uploaded.

    required ml_credentials AzureMlCredentials

    Credentials to use for authentication with Azure.

    required target_path Union[str, Path]

    The location in the blob container to upload to. If None, then upload to root.

    None relative_root Union[str, Path]

    The root used to determine the path of the files in the blob. For example, if we upload /path/to/file.txt and define the base path to be /path, then when file.txt is uploaded to blob storage it will have the path /to/file.txt.

    None datastore_name str

    The name of the Datastore. If None, then the default Datastore of the Workspace is returned.

    None overwrite bool

    Overwrite existing file(s).

    False Example

    Upload Datastore object

    from prefect import flow\nfrom prefect_azure import AzureMlCredentials\nfrom prefect_azure.ml_datastore import ml_upload_datastore\n\n@flow\ndef example_ml_upload_datastore_flow():\n    ml_credentials = AzureMlCredentials(\n        tenant_id=\"tenant_id\",\n        service_principal_id=\"service_principal_id\",\n        service_principal_password=\"service_principal_password\",\n        subscription_id=\"subscription_id\",\n        resource_group=\"resource_group\",\n        workspace_name=\"workspace_name\",\n    )\n    result = ml_upload_datastore(\n        \"path/to/dir/or/file\",\n        ml_credentials,\n        datastore_name=\"datastore_name\"\n    )\n    return result\n

    Source code in prefect_azure/ml_datastore.py
    @task\nasync def ml_upload_datastore(\n    path: Union[str, Path, List[Union[str, Path]]],\n    ml_credentials: \"AzureMlCredentials\",\n    target_path: Union[str, Path] = None,\n    relative_root: Union[str, Path] = None,\n    datastore_name: str = None,\n    overwrite: bool = False,\n) -> \"DataReference\":\n    \"\"\"\n    Uploads local files to a Datastore.\n\n    Args:\n        path: The path to a single file, single directory,\n            or a list of path to files to be uploaded.\n        ml_credentials: Credentials to use for authentication with Azure.\n        target_path: The location in the blob container to upload to. If\n            None, then upload to root.\n        relative_root: The root from which is used to determine the path of\n            the files in the blob. For example, if we upload /path/to/file.txt,\n            and we define base path to be /path, when file.txt is uploaded\n            to the blob storage, it will have the path of /to/file.txt.\n        datastore_name: The name of the Datastore. If `None`, then the\n            default Datastore of the Workspace is returned.\n        overwrite: Overwrite existing file(s).\n\n    Example:\n        Upload Datastore object\n        ```python\n        from prefect import flow\n        from prefect_azure import AzureMlCredentials\n        from prefect_azure.ml_datastore import ml_upload_datastore\n\n        @flow\n        def example_ml_upload_datastore_flow():\n            ml_credentials = AzureMlCredentials(\n                tenant_id=\"tenant_id\",\n                service_principal_id=\"service_principal_id\",\n                service_principal_password=\"service_principal_password\",\n                subscription_id=\"subscription_id\",\n                resource_group=\"resource_group\",\n                workspace_name=\"workspace_name\",\n            )\n            result = ml_upload_datastore(\n                \"path/to/dir/or/file\",\n                ml_credentials,\n                datastore_name=\"datastore_name\"\n            )\n            return result\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Uploading %s into %s datastore\", path, datastore_name)\n\n    datastore = await _get_datastore(ml_credentials, datastore_name)\n\n    if isinstance(path, Path):\n        path = str(path)\n    elif isinstance(path, list) and isinstance(path[0], Path):\n        path = [str(p) for p in path]\n\n    if isinstance(target_path, Path):\n        target_path = str(target_path)\n\n    if isinstance(relative_root, Path):\n        relative_root = str(relative_root)\n\n    if isinstance(path, str) and os.path.isdir(path):\n        partial_upload = partial(\n            datastore.upload,\n            src_dir=path,\n            target_path=target_path,\n            overwrite=overwrite,\n            show_progress=False,\n        )\n    else:\n        partial_upload = partial(\n            datastore.upload_files,\n            files=path if isinstance(path, list) else [path],\n            relative_root=relative_root,\n            target_path=target_path,\n            overwrite=overwrite,\n            show_progress=False,\n        )\n\n    result = await to_thread.run_sync(partial_upload)\n    return result\n
    "},{"location":"integrations/prefect-azure/deployments/steps/","title":"Steps","text":""},{"location":"integrations/prefect-azure/deployments/steps/#prefect_azure.deployments.steps","title":"prefect_azure.deployments.steps","text":"

    Prefect deployment steps for code storage and retrieval in Azure Blob Storage.

    These steps can be used in a prefect.yaml file to define the default push and pull steps for a group of deployments, or they can be used to define the push and pull steps for a specific deployment.

    Example

    Sample prefect.yaml file that is configured to push and pull to and from an Azure Blob Storage container:

    prefect_version: ...\nname: ...\n\npush:\n    - prefect_azure.deployments.steps.push_to_azure_blob_storage:\n        requires: prefect-azure[blob_storage]\n        container: my-container\n        folder: my-folder\n        credentials: \"{{ prefect.blocks.azure-blob-storage-credentials.dev-credentials }}\"\n\npull:\n    - prefect_azure.deployments.steps.pull_from_azure_blob_storage:\n        requires: prefect-azure[blob_storage]\n        container: \"{{ container }}\"\n        folder: \"{{ folder }}\"\n        credentials: \"{{ prefect.blocks.azure-blob-storage-credentials.dev-credentials }}\"\n

    Note

    Azure Storage account needs to have Hierarchical Namespace disabled.

    For more information about using deployment steps, check out the Prefect docs.

    "},{"location":"integrations/prefect-azure/deployments/steps/#prefect_azure.deployments.steps.pull_from_azure_blob_storage","title":"pull_from_azure_blob_storage","text":"

    Pulls from an Azure Blob Storage container.

    Parameters:

    Name Type Description Default container str

    The name of the container to pull files from

    required folder str

    The folder within the container to pull from

    required credentials Dict[str, str]

    A dictionary of credentials with keys connection_string or account_url and values of the corresponding connection string or account url. If both are provided, connection_string will be used.

    required Note

    Azure Storage account needs to have Hierarchical Namespace disabled.

    Example

    Pull from an Azure Blob Storage container using credentials stored in a block:

    pull:\n    - prefect_azure.deployments.steps.pull_from_azure_blob_storage:\n        requires: prefect-azure[blob_storage]\n        container: my-container\n        folder: my-folder\n        credentials: \"{{ prefect.blocks.azure-blob-storage-credentials.dev-credentials }}\"\n

    Pull from an Azure Blob Storage container using an account URL and default credentials:

    pull:\n    - prefect_azure.deployments.steps.pull_from_azure_blob_storage:\n        requires: prefect-azure[blob_storage]\n        container: my-container\n        folder: my-folder\n        credentials:\n            account_url: https://myaccount.blob.core.windows.net/\n

    Source code in prefect_azure/deployments/steps.py
    def pull_from_azure_blob_storage(\n    container: str,\n    folder: str,\n    credentials: Dict[str, str],\n):\n    \"\"\"\n    Pulls from an Azure Blob Storage container.\n\n    Args:\n        container: The name of the container to pull files from\n        folder: The folder within the container to pull from\n        credentials: A dictionary of credentials with keys `connection_string` or\n            `account_url` and values of the corresponding connection string or\n            account url. If both are provided, `connection_string` will be used.\n\n    Note:\n        Azure Storage account needs to have Hierarchical Namespace disabled.\n\n    Example:\n        Pull from an Azure Blob Storage container using credentials stored in\n        a block:\n        ```yaml\n        pull:\n            - prefect_azure.deployments.steps.pull_from_azure_blob_storage:\n                requires: prefect-azure[blob_storage]\n                container: my-container\n                folder: my-folder\n                credentials: \"{{ prefect.blocks.azure-blob-storage-credentials.dev-credentials }}\"\n        ```\n\n        Pull from an Azure Blob Storage container using an account URL and\n        default credentials:\n        ```yaml\n        pull:\n            - prefect_azure.deployments.steps.pull_from_azure_blob_storage:\n                requires: prefect-azure[blob_storage]\n                container: my-container\n                folder: my-folder\n                credentials:\n                    account_url: https://myaccount.blob.core.windows.net/\n        ```\n    \"\"\"  # noqa\n    local_path = Path.cwd()\n    if credentials.get(\"connection_string\") is not None:\n        container_client = ContainerClient.from_connection_string(\n            credentials[\"connection_string\"], container_name=container\n        )\n    elif credentials.get(\"account_url\") is not None:\n        container_client = ContainerClient(\n            account_url=credentials[\"account_url\"],\n            container_name=container,\n            credential=DefaultAzureCredential(),\n        )\n    else:\n        raise ValueError(\n            \"Credentials must contain either connection_string or account_url\"\n        )\n\n    with container_client as client:\n        for blob in client.list_blobs(name_starts_with=folder):\n            target = PurePosixPath(\n                local_path\n                / relative_path_to_current_platform(blob.name).relative_to(folder)\n            )\n            Path.mkdir(Path(target.parent), parents=True, exist_ok=True)\n            with open(target, \"wb\") as f:\n                client.download_blob(blob).readinto(f)\n\n    return {\n        \"container\": container,\n        \"folder\": folder,\n        \"directory\": local_path,\n    }\n
    "},{"location":"integrations/prefect-azure/deployments/steps/#prefect_azure.deployments.steps.push_to_azure_blob_storage","title":"push_to_azure_blob_storage","text":"

    Pushes to an Azure Blob Storage container.

    Parameters:

    Name Type Description Default container str

    The name of the container to push files to

    required folder str

    The folder within the container to push to

    required credentials Dict[str, str]

    A dictionary of credentials with keys connection_string or account_url and values of the corresponding connection string or account url. If both are provided, connection_string will be used.

    required ignore_file Optional[str]

    The path to a file containing patterns of files to ignore when pushing to Azure Blob Storage. If not provided, the default .prefectignore file will be used.

    '.prefectignore' Example

    Push to an Azure Blob Storage container using credentials stored in a block:

    push:\n    - prefect_azure.deployments.steps.push_to_azure_blob_storage:\n        requires: prefect-azure[blob_storage]\n        container: my-container\n        folder: my-folder\n        credentials: \"{{ prefect.blocks.azure-blob-storage-credentials.dev-credentials }}\"\n

    Push to an Azure Blob Storage container using an account URL and default credentials:

    push:\n    - prefect_azure.deployments.steps.push_to_azure_blob_storage:\n        requires: prefect-azure[blob_storage]\n        container: my-container\n        folder: my-folder\n        credentials:\n            account_url: https://myaccount.blob.core.windows.net/\n

    Source code in prefect_azure/deployments/steps.py
    def push_to_azure_blob_storage(\n    container: str,\n    folder: str,\n    credentials: Dict[str, str],\n    ignore_file: Optional[str] = \".prefectignore\",\n):\n    \"\"\"\n    Pushes to an Azure Blob Storage container.\n\n    Args:\n        container: The name of the container to push files to\n        folder: The folder within the container to push to\n        credentials: A dictionary of credentials with keys `connection_string` or\n            `account_url` and values of the corresponding connection string or\n            account url. If both are provided, `connection_string` will be used.\n        ignore_file: The path to a file containing patterns of files to ignore when\n            pushing to Azure Blob Storage. If not provided, the default `.prefectignore`\n            file will be used.\n\n    Example:\n        Push to an Azure Blob Storage container using credentials stored in a\n        block:\n        ```yaml\n        push:\n            - prefect_azure.deployments.steps.push_to_azure_blob_storage:\n                requires: prefect-azure[blob_storage]\n                container: my-container\n                folder: my-folder\n                credentials: \"{{ prefect.blocks.azure-blob-storage-credentials.dev-credentials }}\"\n        ```\n\n        Push to an Azure Blob Storage container using an account URL and\n        default credentials:\n        ```yaml\n        push:\n            - prefect_azure.deployments.steps.push_to_azure_blob_storage:\n                requires: prefect-azure[blob_storage]\n                container: my-container\n                folder: my-folder\n                credentials:\n                    account_url: https://myaccount.blob.core.windows.net/\n        ```\n    \"\"\"  # noqa\n    local_path = Path.cwd()\n    if credentials.get(\"connection_string\") is not None:\n        container_client = ContainerClient.from_connection_string(\n            credentials[\"connection_string\"], container_name=container\n        )\n    elif credentials.get(\"account_url\") is not None:\n        container_client = ContainerClient(\n            account_url=credentials[\"account_url\"],\n            container_name=container,\n            credential=DefaultAzureCredential(),\n        )\n    else:\n        raise ValueError(\n            \"Credentials must contain either connection_string or account_url\"\n        )\n\n    included_files = None\n    if ignore_file and Path(ignore_file).exists():\n        with open(ignore_file, \"r\") as f:\n            ignore_patterns = f.readlines()\n\n        included_files = filter_files(str(local_path), ignore_patterns)\n\n    with container_client as client:\n        for local_file_path in local_path.expanduser().rglob(\"*\"):\n            if (\n                included_files is not None\n                and str(local_file_path.relative_to(local_path)) not in included_files\n            ):\n                continue\n            elif not local_file_path.is_dir():\n                remote_file_path = Path(folder) / local_file_path.relative_to(\n                    local_path\n                )\n                with open(local_file_path, \"rb\") as f:\n                    client.upload_blob(str(remote_file_path), f, overwrite=True)\n\n    return {\n        \"container\": container,\n        \"folder\": folder,\n    }\n
    "},{"location":"integrations/prefect-bitbucket/","title":"prefect-bitbucket","text":""},{"location":"integrations/prefect-bitbucket/#welcome","title":"Welcome!","text":"

    Prefect integrations for working with Bitbucket repositories.

    "},{"location":"integrations/prefect-bitbucket/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-bitbucket/#python-setup","title":"Python setup","text":"

    Requires an installation of Python 3.8+.

    We recommend using a Python virtual environment manager such as pipenv, conda or virtualenv.

    These tasks are designed to work with Prefect 2.0. For more information about how to use Prefect, please refer to the Prefect documentation.

    "},{"location":"integrations/prefect-bitbucket/#installation","title":"Installation","text":"

    Install prefect-bitbucket with pip:

    pip install prefect-bitbucket\n

    Then, register the blocks in this collection to view them in Prefect Cloud:

    prefect block register -m prefect_bitbucket\n

    Note: to use the load method on blocks, you must already have a block document saved, either through code or through the UI.
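
    For example, a minimal sketch of saving a block document in code so that it can later be loaded (the token value and block name are illustrative placeholders):

    from prefect_bitbucket import BitBucketCredentials\n\n# Save a block document so that later calls to .load() can find it;\n# the token and block name here are placeholders.\nBitBucketCredentials(token=\"x-token-auth:my-token\").save(\"my-bitbucket-credentials\")\n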

    "},{"location":"integrations/prefect-bitbucket/#write-and-run-a-flow","title":"Write and run a flow","text":""},{"location":"integrations/prefect-bitbucket/#load-a-pre-existing-bitbucketcredentials-block","title":"Load a pre-existing BitBucketCredentials block","text":"
    from prefect import flow\nfrom prefect_bitbucket.credentials import BitBucketCredentials\n\n@flow\ndef use_stored_bitbucket_creds_flow():\n    bitbucket_credentials_block = BitBucketCredentials.load(\"BLOCK_NAME\")\n\n    return bitbucket_credentials_block\n\nuse_stored_bitbucket_creds_flow()\n
    "},{"location":"integrations/prefect-bitbucket/#create-a-new-bitbucketcredentials-block-in-a-flow","title":"Create a new BitBucketCredentials block in a flow","text":"
    from prefect import flow\nfrom prefect_bitbucket.credentials import BitBucketCredentials\n\n@flow\ndef create_new_bitbucket_creds_flow():\n    bitbucket_credentials_block = BitBucketCredentials(\n        token=\"my-token\",\n        username=\"my-username\"\n    )\n\ncreate_new_bitbucket_creds_flow()\n
    "},{"location":"integrations/prefect-bitbucket/#create-a-bitbucketrepository-block-for-a-public-repo","title":"Create a BitBucketRepository block for a public repo","text":"
    from prefect_bitbucket import BitBucketRepository\n\npublic_repo = \"https://bitbucket.org/my-workspace/my-repository.git\"\n\n# Creates a public BitBucket repository BitBucketRepository block\npublic_bitbucket_block = BitBucketRepository(\n    repository=public_repo\n)\n\n# Saves the BitBucketRepository block to your Prefect workspace (in the Blocks tab)\npublic_bitbucket_block.save(\"my-bitbucket-block\")\n
    "},{"location":"integrations/prefect-bitbucket/#create-a-bitbucketrepository-block-for-a-public-repo-at-a-specific-branch-or-tag","title":"Create a BitBucketRepository block for a public repo at a specific branch or tag","text":"
    from prefect_bitbucket import BitBucketRepository\n\npublic_repo = \"https://bitbucket.org/my-workspace/my-repository.git\"\n\n# Creates a public BitBucket repository BitBucketRepository block\nbranch_bitbucket_block = BitBucketRepository(\n    reference=\"my-branch-or-tag\",  # e.g \"master\"\n    repository=public_repo\n)\n\n# Saves the BitBucketRepository block to your Prefect workspace (in the Blocks tab)\nbranch_bitbucket_block.save(\"my-bitbucket-branch-block\")\n
    "},{"location":"integrations/prefect-bitbucket/#create-a-new-bitbucketcredentials-block-and-a-bitbucketrepository-block-for-a-private-repo","title":"Create a new BitBucketCredentials block and a BitBucketRepository block for a private repo","text":"
    from prefect_bitbucket import BitBucketCredentials, BitBucketRepository\n\n# For a private repo, we need credentials to access it\nbitbucket_credentials_block = BitBucketCredentials(\n    token=\"my-token\",\n    username=\"my-username\"  # optional\n)\n\n# Saves the BitBucketCredentials block to your Prefect workspace (in the Blocks tab)\nbitbucket_credentials_block.save(name=\"my-bitbucket-credentials-block\")\n\n\n# Creates a private BitBucket repository BitBucketRepository block\nprivate_repo = \"https://bitbucket.org/my-workspace/my-repository.git\"\nprivate_bitbucket_block = BitBucketRepository(\n    repository=private_repo,\n    bitbucket_credentials=bitbucket_credentials_block\n)\n\n# Saves the BitBucketRepository block to your Prefect workspace (in the Blocks tab)\nprivate_bitbucket_block.save(name=\"my-private-bitbucket-block\")\n
    "},{"location":"integrations/prefect-bitbucket/#use-a-preexisting-bitbucketcredentials-block-to-create-a-bitbucketrepository-block-for-a-private-repo","title":"Use a preexisting BitBucketCredentials block to create a BitBucketRepository block for a private repo","text":"
    from prefect_bitbucket import BitBucketCredentials, BitBucketRepository\n\n# Loads a preexisting BitBucketCredentials block\nBitBucketCredentials.load(\"my-bitbucket-credentials-block\")\n\n# Creates a private BitBucket repository BitBucketRepository block\nprivate_repo = \"https://bitbucket.org/my-workspace/my-repository.git\"\nprivate_bitbucket_block = BitBucketRepository(\n    repository=private_repo,\n    bitbucket_credentials=bitbucket_credentials_block\n)\n\n# Saves the BitBucketRepository block to your Prefect workspace (in the Blocks tab)\nprivate_bitbucket_block.save(name=\"my-private-bitbucket-block\")\n

    Differences between Bitbucket Server and Bitbucket Cloud

    For Bitbucket Cloud, only set the token to authenticate. For Bitbucket Server, set both the token and the username.
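
    As a rough sketch (the token, username, and server URL below are placeholders), the two setups differ only in which fields are provided:

    from prefect_bitbucket import BitBucketCredentials\n\n# Bitbucket Cloud: a token alone is enough to authenticate\ncloud_credentials = BitBucketCredentials(token=\"x-token-auth:my-token\")\n\n# Bitbucket Server: set both token and username, and point url at your server\nserver_credentials = BitBucketCredentials(\n    token=\"my-token\",\n    username=\"my-username\",\n    url=\"https://bitbucket.mycompany.com\",\n)\n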

    "},{"location":"integrations/prefect-bitbucket/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-bitbucket/credentials/#prefect_bitbucket.credentials","title":"prefect_bitbucket.credentials","text":"

    Module to enable authenticated interactions with BitBucket.

    "},{"location":"integrations/prefect-bitbucket/credentials/#prefect_bitbucket.credentials.BitBucketCredentials","title":"BitBucketCredentials","text":"

    Bases: CredentialsBlock

    Store BitBucket credentials to interact with private BitBucket repositories.

    Attributes:

    Name Type Description token Optional[SecretStr]

    An access token to authenticate with BitBucket. This is required for accessing private repositories.

    username Optional[str]

    Identification name unique across entire BitBucket site.

    password Optional[SecretStr]

    The password to authenticate to BitBucket.

    url str

    The base URL of your BitBucket instance.

    Examples:

    Load stored BitBucket credentials:

    from prefect_bitbucket import BitBucketCredentials\nbitbucket_credentials_block = BitBucketCredentials.load(\"BLOCK_NAME\")\n

    Source code in prefect_bitbucket/credentials.py
    class BitBucketCredentials(CredentialsBlock):\n    \"\"\"Store BitBucket credentials to interact with private BitBucket repositories.\n\n    Attributes:\n        token: An access token to authenticate with BitBucket. This is required\n            for accessing private repositories.\n        username: Identification name unique across entire BitBucket site.\n        password: The password to authenticate to BitBucket.\n        url: The base URL of your BitBucket instance.\n\n\n    Examples:\n        Load stored BitBucket credentials:\n        ```python\n        from prefect_bitbucket import BitBucketCredentials\n        bitbucket_credentials_block = BitBucketCredentials.load(\"BLOCK_NAME\")\n        ```\n\n\n    \"\"\"\n\n    _block_type_name = \"BitBucket Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/5d729f7355fb6828c4b605268ded9cfafab3ae4f-250x250.png\"  # noqa\n    token: Optional[SecretStr] = Field(\n        name=\"Personal Access Token\",\n        default=None,\n        description=(\n            \"A BitBucket Personal Access Token - required for private repositories.\"\n        ),\n        example=\"x-token-auth:my-token\",\n    )\n    username: Optional[str] = Field(\n        default=None,\n        description=\"Identification name unique across entire BitBucket site.\",\n    )\n    password: Optional[SecretStr] = Field(\n        default=None, description=\"The password to authenticate to BitBucket.\"\n    )\n    url: str = Field(\n        default=\"https://api.bitbucket.org/\",\n        description=\"The base URL of your BitBucket instance.\",\n        title=\"URL\",\n    )\n\n    @validator(\"username\")\n    def _validate_username(cls, value: str) -> str:\n        \"\"\"When username provided, will validate it.\"\"\"\n        pattern = \"^[A-Za-z0-9_-]*$\"\n\n        if not re.match(pattern, value):\n            raise ValueError(\n                \"Username must be alpha, num, dash and/or underscore only.\"\n            )\n        if not len(value) <= 30:\n            raise ValueError(\"Username cannot be longer than 30 chars.\")\n        return value\n\n    def get_client(\n        self, client_type: Union[str, ClientType], **client_kwargs\n    ) -> Union[Cloud, Bitbucket]:\n        \"\"\"Get an authenticated local or cloud Bitbucket client.\n\n        Args:\n            client_type: Whether to use a local or cloud client.\n\n        Returns:\n            An authenticated Bitbucket client.\n\n        \"\"\"\n        # ref: https://atlassian-python-api.readthedocs.io/\n        if isinstance(client_type, str):\n            client_type = ClientType(client_type.lower())\n\n        password = self.password.get_secret_value()\n        input_client_kwargs = dict(\n            url=self.url, username=self.username, password=password\n        )\n        input_client_kwargs.update(**client_kwargs)\n\n        if client_type == ClientType.CLOUD:\n            client = Cloud(**input_client_kwargs)\n        else:\n            client = Bitbucket(**input_client_kwargs)\n        return client\n
    "},{"location":"integrations/prefect-bitbucket/credentials/#prefect_bitbucket.credentials.BitBucketCredentials.get_client","title":"get_client","text":"

    Get an authenticated local or cloud Bitbucket client.

    Parameters:

    Name Type Description Default client_type Union[str, ClientType]

    Whether to use a local or cloud client.

    required

    Returns:

    Type Description Union[Cloud, Bitbucket]

    An authenticated Bitbucket client.

    Source code in prefect_bitbucket/credentials.py
    def get_client(\n    self, client_type: Union[str, ClientType], **client_kwargs\n) -> Union[Cloud, Bitbucket]:\n    \"\"\"Get an authenticated local or cloud Bitbucket client.\n\n    Args:\n        client_type: Whether to use a local or cloud client.\n\n    Returns:\n        An authenticated Bitbucket client.\n\n    \"\"\"\n    # ref: https://atlassian-python-api.readthedocs.io/\n    if isinstance(client_type, str):\n        client_type = ClientType(client_type.lower())\n\n    password = self.password.get_secret_value()\n    input_client_kwargs = dict(\n        url=self.url, username=self.username, password=password\n    )\n    input_client_kwargs.update(**client_kwargs)\n\n    if client_type == ClientType.CLOUD:\n        client = Cloud(**input_client_kwargs)\n    else:\n        client = Bitbucket(**input_client_kwargs)\n    return client\n
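
    A minimal usage sketch (assuming a saved block document named my-bitbucket-credentials-block with username and password set):

    from prefect_bitbucket import BitBucketCredentials\n\n# Load stored credentials and build an authenticated client;\n# get_client reads username and password from the block, so both should be set,\n# and the client_type string maps to the ClientType enum (cloud or local).\ncredentials = BitBucketCredentials.load(\"my-bitbucket-credentials-block\")\nbitbucket_client = credentials.get_client(client_type=\"cloud\")\n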
    "},{"location":"integrations/prefect-bitbucket/credentials/#prefect_bitbucket.credentials.ClientType","title":"ClientType","text":"

    Bases: Enum

    The client type to use.

    Source code in prefect_bitbucket/credentials.py
    class ClientType(Enum):\n    \"\"\"The client type to use.\"\"\"\n\n    LOCAL = \"local\"\n    CLOUD = \"cloud\"\n
    "},{"location":"integrations/prefect-bitbucket/repository/","title":"Repository","text":""},{"location":"integrations/prefect-bitbucket/repository/#prefect_bitbucket.repository","title":"prefect_bitbucket.repository","text":"

    Allows for interaction with a BitBucket repository.

    The BitBucketRepository class in this collection is a storage block that lets Prefect agents pull Prefect flow code from BitBucket repositories.

    The BitBucketRepository block is ideally configured via the Prefect UI, but can also be used in Python as the following examples demonstrate.

    Examples: from prefect_bitbucket.repository import BitBucketRepository

    "},{"location":"integrations/prefect-bitbucket/repository/#prefect_bitbucket.repository--public-bitbucket-repository","title":"public BitBucket repository","text":"

    public_bitbucket_block = BitBucketRepository( repository=\"https://bitbucket.com/my-project/my-repository.git\" )

    public_bitbucket_block.save(name=\"my-bitbucket-block\")

    "},{"location":"integrations/prefect-bitbucket/repository/#prefect_bitbucket.repository--specific-branch-or-tag","title":"specific branch or tag","text":"

    branch_bitbucket_block = BitBucketRepository( reference=\"branch-or-tag-name\", repository=\"https://bitbucket.com/my-project/my-repository.git\" )

    branch_bitbucket_block.save(name=\"my-bitbucket-block\")

    "},{"location":"integrations/prefect-bitbucket/repository/#prefect_bitbucket.repository--private-bitbucket-repository","title":"private BitBucket repository","text":"

    private_bitbucket_block = BitBucketRepository( repository=\"https://bitbucket.com/my-project/my-repository.git\", bitbucket_credentials=BitBucketCredentials.load(\"my-bitbucket-credentials-block\") )

    private_bitbucket_block.save(name=\"my-private-bitbucket-block\")

    "},{"location":"integrations/prefect-bitbucket/repository/#prefect_bitbucket.repository.BitBucketRepository","title":"BitBucketRepository","text":"

    Bases: ReadableDeploymentStorage

    Interact with files stored in BitBucket repositories.

    An accessible installation of git is required for this block to function properly.

    Source code in prefect_bitbucket/repository.py
    class BitBucketRepository(ReadableDeploymentStorage):\n    \"\"\"Interact with files stored in BitBucket repositories.\n\n    An accessible installation of git is required for this block to function\n    properly.\n    \"\"\"\n\n    _block_type_name = \"BitBucket Repository\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/5d729f7355fb6828c4b605268ded9cfafab3ae4f-250x250.png\"  # noqa\n    _description = \"Interact with files stored in BitBucket repositories.\"\n\n    repository: str = Field(\n        default=...,\n        description=\"The URL of a BitBucket repository to read from in HTTPS format\",\n    )\n    reference: Optional[str] = Field(\n        default=None,\n        description=\"An optional reference to pin to; can be a branch or tag.\",\n    )\n    bitbucket_credentials: Optional[BitBucketCredentials] = Field(\n        default=None,\n        description=(\n            \"An optional BitBucketCredentials block for authenticating with \"\n            \"private BitBucket repos.\"\n        ),\n    )\n\n    @validator(\"bitbucket_credentials\")\n    def _ensure_credentials_go_with_https(cls, v: str, values: dict) -> str:\n        \"\"\"Ensure that credentials are not provided with 'SSH' formatted BitBucket URLs.\n\n        Validators are by default only called on provided arguments.\n\n        Note: validates `credentials` specifically so that it only fires when private\n        repositories are used.\n        \"\"\"\n        if v is not None:\n            if urlparse(values[\"repository\"]).scheme != \"https\":\n                raise InvalidRepositoryURLError(\n                    (\n                        \"Credentials can only be used with BitBucket repositories \"\n                        \"using the 'HTTPS' format. 
You must either remove the \"\n                        \"credential if you wish to use the 'SSH' format and are not \"\n                        \"using a private repository, or you must change the repository \"\n                        \"URL to the 'HTTPS' format.\"\n                    )\n                )\n\n        return v\n\n    def _create_repo_url(self) -> str:\n        \"\"\"Format the URL provided to the `git clone` command.\n\n        For private repos in the cloud:\n        https://x-token-auth:<access-token>@bitbucket.org/<user>/<repo>.git\n        For private repos with a local bitbucket server:\n        https://<username>:<access-token>@<server>/scm/<project>/<repo>.git\n\n        All other repos should be the same as `self.repository`.\n        \"\"\"\n        url_components = urlparse(self.repository)\n        token_is_set = (\n            self.bitbucket_credentials is not None and self.bitbucket_credentials.token\n        )\n\n        # Need a token for private repos\n        if url_components.scheme == \"https\" and token_is_set:\n            token = self.bitbucket_credentials.token.get_secret_value()\n            username = self.bitbucket_credentials.username\n            if username is None:\n                username = \"x-token-auth\"\n            updated_components = url_components._replace(\n                netloc=f\"{username}:{token}@{url_components.netloc}\"\n            )\n            full_url = urlunparse(updated_components)\n        else:\n            full_url = self.repository\n\n        return full_url\n\n    @staticmethod\n    def _get_paths(\n        dst_dir: Union[str, None], src_dir: str, sub_directory: Optional[str]\n    ) -> Tuple[str, str]:\n        \"\"\"Return the fully formed paths for BitBucketRepository contents.\n\n        Return will take the form of (content_source, content_destination).\n\n        \"\"\"\n        if dst_dir is None:\n            content_destination = Path(\".\").absolute()\n        else:\n            content_destination = Path(dst_dir)\n\n        content_source = Path(src_dir)\n\n        if sub_directory:\n            content_destination = content_destination.joinpath(sub_directory)\n            content_source = content_source.joinpath(sub_directory)\n\n        return str(content_source), str(content_destination)\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> None:\n        \"\"\"Clones a BitBucket project within `from_path` to the provided `local_path`.\n\n        This defaults to cloning the repository reference configured on the\n        Block to the present working directory.\n\n        Args:\n            from_path: If provided, interpreted as a subdirectory of the underlying\n                repository that will be copied to the provided local path.\n            local_path: A local path to clone to; defaults to present working directory.\n\n        \"\"\"\n        # Construct command\n        cmd = [\"git\", \"clone\", self._create_repo_url()]\n        if self.reference:\n            cmd += [\"-b\", self.reference]\n\n        # Limit git history\n        cmd += [\"--depth\", \"1\"]\n\n        # Clone to a temporary directory and move the subdirectory over\n        with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n            cmd.append(tmp_dir)\n\n            err_stream = io.StringIO()\n            out_stream = io.StringIO()\n            process = await run_process(cmd, stream_output=(out_stream, err_stream))\n         
   if process.returncode != 0:\n                err_stream.seek(0)\n                raise OSError(f\"Failed to pull from remote:\\n {err_stream.read()}\")\n\n            content_source, content_destination = self._get_paths(\n                dst_dir=local_path, src_dir=tmp_dir, sub_directory=from_path\n            )\n\n            copy_tree(src=content_source, dst=content_destination)\n
    "},{"location":"integrations/prefect-bitbucket/repository/#prefect_bitbucket.repository.BitBucketRepository.get_directory","title":"get_directory async","text":"

    Clones a BitBucket project within from_path to the provided local_path.

    This defaults to cloning the repository reference configured on the Block to the present working directory.

    Parameters:

    from_path (Optional[str], default None): If provided, interpreted as a subdirectory of the underlying repository that will be copied to the provided local path.

    local_path (Optional[str], default None): A local path to clone to; defaults to present working directory.

    Source code in prefect_bitbucket/repository.py
    @sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> None:\n    \"\"\"Clones a BitBucket project within `from_path` to the provided `local_path`.\n\n    This defaults to cloning the repository reference configured on the\n    Block to the present working directory.\n\n    Args:\n        from_path: If provided, interpreted as a subdirectory of the underlying\n            repository that will be copied to the provided local path.\n        local_path: A local path to clone to; defaults to present working directory.\n\n    \"\"\"\n    # Construct command\n    cmd = [\"git\", \"clone\", self._create_repo_url()]\n    if self.reference:\n        cmd += [\"-b\", self.reference]\n\n    # Limit git history\n    cmd += [\"--depth\", \"1\"]\n\n    # Clone to a temporary directory and move the subdirectory over\n    with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n        cmd.append(tmp_dir)\n\n        err_stream = io.StringIO()\n        out_stream = io.StringIO()\n        process = await run_process(cmd, stream_output=(out_stream, err_stream))\n        if process.returncode != 0:\n            err_stream.seek(0)\n            raise OSError(f\"Failed to pull from remote:\\n {err_stream.read()}\")\n\n        content_source, content_destination = self._get_paths(\n            dst_dir=local_path, src_dir=tmp_dir, sub_directory=from_path\n        )\n\n        copy_tree(src=content_source, dst=content_destination)\n
    "},{"location":"integrations/prefect-dask/","title":"prefect-dask","text":"

    The prefect-dask collection makes it easy to include distributed processing for your flows. Check out the examples below to get started!

    "},{"location":"integrations/prefect-dask/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-dask/#integrate-with-prefect-flows","title":"Integrate with Prefect flows","text":"

    Perhaps you're already working with Prefect flows. Say your flow downloads many images to train your machine learning model. Unfortunately, it takes a long time to download the images because your code runs sequentially.

    After installing prefect-dask you can parallelize your flow in three simple steps:

    1. Add the import: from prefect_dask import DaskTaskRunner
    2. Specify the task runner in the flow decorator: @flow(task_runner=DaskTaskRunner)
    3. Submit tasks to the flow's task runner: a_task.submit(*args, **kwargs)

    The parallelized code runs in about 1/3 of the time in our test! And that's without distributing the workload over multiple machines. Here's the before and after!

    # Completed in 15.2 seconds\n\nfrom typing import List\nfrom pathlib import Path\n\nimport httpx\nfrom prefect import flow, task\n\nURL_FORMAT = (\n    \"https://www.cpc.ncep.noaa.gov/products/NMME/archive/\"\n    \"{year:04d}{month:02d}0800/current/images/nino34.rescaling.ENSMEAN.png\"\n)\n\n@task\ndef download_image(year: int, month: int, directory: Path) -> Path:\n    # download image from URL\n    url = URL_FORMAT.format(year=year, month=month)\n    resp = httpx.get(url)\n\n    # save content to directory/YYYYMM.png\n    file_path = (directory / url.split(\"/\")[-1]).with_stem(f\"{year:04d}{month:02d}\")\n    file_path.write_bytes(resp.content)\n    return file_path\n\n@flow\ndef download_nino_34_plumes_from_year(year: int) -> List[Path]:\n    # create a directory to hold images\n    directory = Path(\"data\")\n    directory.mkdir(exist_ok=True)\n\n    # download all images\n    file_paths = []\n    for month in range(1, 12 + 1):\n        file_path = download_image(year, month, directory)\n        file_paths.append(file_path)\n    return file_paths\n\nif __name__ == \"__main__\":\n    download_nino_34_plumes_from_year(2022)\n
    # Completed in 5.7 seconds\n\nfrom typing import List\nfrom pathlib import Path\n\nimport httpx\nfrom prefect import flow, task\nfrom prefect_dask import DaskTaskRunner\n\nURL_FORMAT = (\n    \"https://www.cpc.ncep.noaa.gov/products/NMME/archive/\"\n    \"{year:04d}{month:02d}0800/current/images/nino34.rescaling.ENSMEAN.png\"\n)\n\n@task\ndef download_image(year: int, month: int, directory: Path) -> Path:\n    # download image from URL\n    url = URL_FORMAT.format(year=year, month=month)\n    resp = httpx.get(url)\n\n    # save content to directory/YYYYMM.png\n    file_path = (directory / url.split(\"/\")[-1]).with_stem(f\"{year:04d}{month:02d}\")\n    file_path.write_bytes(resp.content)\n    return file_path\n\n@flow(task_runner=DaskTaskRunner(cluster_kwargs={\"processes\": False}))\ndef download_nino_34_plumes_from_year(year: int) -> List[Path]:\n    # create a directory to hold images\n    directory = Path(\"data\")\n    directory.mkdir(exist_ok=True)\n\n    # download all images\n    file_paths = []\n    for month in range(1, 12 + 1):\n        file_path = download_image.submit(year, month, directory)\n        file_paths.append(file_path)\n    return file_paths\n\nif __name__ == \"__main__\":\n    download_nino_34_plumes_from_year(2022)\n

    The original flow completes in 15.2 seconds.

    However, with just a few minor tweaks, we reduced the runtime nearly threefold, down to just 5.7 seconds!

    "},{"location":"integrations/prefect-dask/#integrate-with-dask-clientcluster-and-collections","title":"Integrate with Dask client/cluster and collections","text":"

    Suppose you have an existing Dask client/cluster and collection, like a dask.dataframe.DataFrame, and you want to add observability.

    With prefect-dask, there's no major overhaul necessary because Prefect was designed with incremental adoption in mind! It's as easy as:

    1. Adding the imports
    2. Sprinkling a few task and flow decorators
    3. Using the get_dask_client context manager on collections to distribute work across workers
    4. Specifying the task runner and client's address in the flow decorator
    5. Submitting the tasks to the flow's task runner
    Before and after:
    import dask.dataframe\nimport dask.distributed\n\n\n\nclient = dask.distributed.Client()\n\n\ndef read_data(start: str, end: str) -> dask.dataframe.DataFrame:\n    df = dask.datasets.timeseries(start, end, partition_freq=\"4w\")\n    return df\n\n\ndef process_data(df: dask.dataframe.DataFrame) -> dask.dataframe.DataFrame:\n\n    df_yearly_avg = df.groupby(df.index.year).mean()\n    return df_yearly_avg.compute()\n\n\ndef dask_pipeline():\n    df = read_data(\"1988\", \"2022\")\n    df_yearly_average = process_data(df)\n    return df_yearly_average\n\ndask_pipeline()\n
    import dask.dataframe\nimport dask.distributed\nfrom prefect import flow, task\nfrom prefect_dask import DaskTaskRunner, get_dask_client\n\nclient = dask.distributed.Client()\n\n@task\ndef read_data(start: str, end: str) -> dask.dataframe.DataFrame:\n    df = dask.datasets.timeseries(start, end, partition_freq=\"4w\")\n    return df\n\n@task\ndef process_data(df: dask.dataframe.DataFrame) -> dask.dataframe.DataFrame:\n    with get_dask_client():\n        df_yearly_avg = df.groupby(df.index.year).mean()\n        return df_yearly_avg.compute()\n\n@flow(task_runner=DaskTaskRunner(address=client.scheduler.address))\ndef dask_pipeline():\n    df = read_data.submit(\"1988\", \"2022\")\n    df_yearly_average = process_data.submit(df)\n    return df_yearly_average\n\ndask_pipeline()\n

    Now, you can conveniently see when each task completed, both in the terminal and the UI!

    14:10:09.845 | INFO    | prefect.engine - Created flow run 'chocolate-pony' for flow 'dask-flow'\n14:10:09.847 | INFO    | prefect.task_runner.dask - Connecting to an existing Dask cluster at tcp://127.0.0.1:59255\n14:10:09.857 | INFO    | distributed.scheduler - Receive client connection: Client-8c1e0f24-9133-11ed-800e-86f2469c4e7a\n14:10:09.859 | INFO    | distributed.core - Starting established connection to tcp://127.0.0.1:59516\n14:10:09.862 | INFO    | prefect.task_runner.dask - The Dask dashboard is available at http://127.0.0.1:8787/status\n14:10:11.344 | INFO    | Flow run 'chocolate-pony' - Created task run 'read_data-5bc97744-0' for task 'read_data'\n14:10:11.626 | INFO    | Flow run 'chocolate-pony' - Submitted task run 'read_data-5bc97744-0' for execution.\n14:10:11.795 | INFO    | Flow run 'chocolate-pony' - Created task run 'process_data-090555ba-0' for task 'process_data'\n14:10:11.798 | INFO    | Flow run 'chocolate-pony' - Submitted task run 'process_data-090555ba-0' for execution.\n14:10:13.279 | INFO    | Task run 'read_data-5bc97744-0' - Finished in state Completed()\n14:11:43.539 | INFO    | Task run 'process_data-090555ba-0' - Finished in state Completed()\n14:11:43.883 | INFO    | Flow run 'chocolate-pony' - Finished in state Completed('All states completed.')\n
    "},{"location":"integrations/prefect-dask/#resources","title":"Resources","text":"

    For additional examples, check out the Usage Guide!

    "},{"location":"integrations/prefect-dask/#installation","title":"Installation","text":"

    Get started by installing prefect-dask!

    With pip or conda:
    pip install -U prefect-dask\n
    conda install -c conda-forge prefect-dask\n

    Requires an installation of Python 3.7+.

    We recommend using a Python virtual environment manager such as pipenv, conda, or virtualenv.

    These tasks are designed to work with Prefect 2. For more information about how to use Prefect, please refer to the Prefect documentation.

    "},{"location":"integrations/prefect-dask/#feedback","title":"Feedback","text":"

    If you encounter any bugs while using prefect-dask, feel free to open an issue in the prefect repository.

    If you have any questions or issues while using prefect-dask, you can find help in either the Prefect Discourse forum or the Prefect Slack community.

    "},{"location":"integrations/prefect-dask/#contributing","title":"Contributing","text":"

    If you'd like to contribute a fix for an issue or add a feature to prefect-dask, please propose changes through a pull request from a fork of the repository.

    Here are the steps:

    1. Fork the repository
    2. Clone the forked repository
    3. Install the repository and its dependencies:
      pip install -e \".[dev]\"\n
    4. Make desired changes
    5. Add tests
    6. Install pre-commit to perform quality checks prior to commit:
      pre-commit install\n
    7. git commit, git push, and create a pull request
    "},{"location":"integrations/prefect-dask/task_runners/","title":"Task Runners","text":""},{"location":"integrations/prefect-dask/task_runners/#prefect_dask.task_runners","title":"prefect_dask.task_runners","text":"

    Interface and implementations of the Dask Task Runner. Task Runners in Prefect are responsible for managing the execution of Prefect task runs. Generally speaking, users are not expected to interact with task runners outside of configuring and initializing them for a flow.

    Example
    import time\n\nfrom prefect import flow, task\n\n@task\ndef shout(number):\n    time.sleep(0.5)\n    print(f\"#{number}\")\n\n@flow\ndef count_to(highest_number):\n    for number in range(highest_number):\n        shout.submit(number)\n\nif __name__ == \"__main__\":\n    count_to(10)\n\n# outputs\n#0\n#1\n#2\n#3\n#4\n#5\n#6\n#7\n#8\n#9\n

    Switching to a DaskTaskRunner:

    import time\n\nfrom prefect import flow, task\nfrom prefect_dask import DaskTaskRunner\n\n@task\ndef shout(number):\n    time.sleep(0.5)\n    print(f\"#{number}\")\n\n@flow(task_runner=DaskTaskRunner)\ndef count_to(highest_number):\n    for number in range(highest_number):\n        shout.submit(number)\n\nif __name__ == \"__main__\":\n    count_to(10)\n\n# outputs\n#3\n#7\n#2\n#6\n#4\n#0\n#1\n#5\n#8\n#9\n

    "},{"location":"integrations/prefect-dask/task_runners/#prefect_dask.task_runners.DaskTaskRunner","title":"DaskTaskRunner","text":"

    Bases: BaseTaskRunner

    A parallel task_runner that submits tasks to the dask.distributed scheduler. By default a temporary distributed.LocalCluster is created (and subsequently torn down) within the start() contextmanager. To use a different cluster class (e.g. dask_kubernetes.KubeCluster), you can specify cluster_class/cluster_kwargs.

    Alternatively, if you already have a dask cluster running, you can provide the cluster object via the cluster kwarg or the address of the scheduler via the address kwarg.

    Multiprocessing safety

    Note that, because the DaskTaskRunner uses multiprocessing, calls to flows in scripts must be guarded with if __name__ == \"__main__\": or warnings will be displayed.

    Parameters:

    cluster (Cluster, default None): Currently running dask cluster; if one is not provided (or specified via the address kwarg), a temporary cluster will be created in DaskTaskRunner.start(). Defaults to None.

    address (string, default None): Address of a currently running dask scheduler. Defaults to None.

    cluster_class (string or callable, default None): The cluster class to use when creating a temporary dask cluster. Can be either the full class name (e.g. "distributed.LocalCluster"), or the class itself.

    cluster_kwargs (dict, default None): Additional kwargs to pass to the cluster_class when creating a temporary dask cluster.

    adapt_kwargs (dict, default None): Additional kwargs to pass to cluster.adapt when creating a temporary dask cluster. Note that adaptive scaling is only enabled if adapt_kwargs are provided.

    client_kwargs (dict, default None): Additional kwargs to use when creating a dask.distributed.Client.

    Examples:

    Using a temporary local dask cluster:

    from prefect import flow\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@flow(task_runner=DaskTaskRunner)\ndef my_flow():\n    ...\n

    Using a temporary cluster running elsewhere. Any Dask cluster class should work, here we use dask-cloudprovider:

    DaskTaskRunner(\n    cluster_class=\"dask_cloudprovider.FargateCluster\",\n    cluster_kwargs={\n        \"image\": \"prefecthq/prefect:latest\",\n        \"n_workers\": 5,\n    },\n)\n

    Connecting to an existing dask cluster:

    DaskTaskRunner(address=\"192.0.2.255:8786\")\n

    Source code in prefect_dask/task_runners.py
    class DaskTaskRunner(BaseTaskRunner):\n    \"\"\"\n    A parallel task_runner that submits tasks to the `dask.distributed` scheduler.\n    By default a temporary `distributed.LocalCluster` is created (and\n    subsequently torn down) within the `start()` contextmanager. To use a\n    different cluster class (e.g.\n    [`dask_kubernetes.KubeCluster`](https://kubernetes.dask.org/)), you can\n    specify `cluster_class`/`cluster_kwargs`.\n\n    Alternatively, if you already have a dask cluster running, you can provide\n    the cluster object via the `cluster` kwarg or the address of the scheduler\n    via the `address` kwarg.\n    !!! warning \"Multiprocessing safety\"\n        Note that, because the `DaskTaskRunner` uses multiprocessing, calls to flows\n        in scripts must be guarded with `if __name__ == \"__main__\":` or warnings will\n        be displayed.\n\n    Args:\n        cluster (distributed.deploy.Cluster, optional): Currently running dask cluster;\n            if one is not provider (or specified via `address` kwarg), a temporary\n            cluster will be created in `DaskTaskRunner.start()`. Defaults to `None`.\n        address (string, optional): Address of a currently running dask\n            scheduler. Defaults to `None`.\n        cluster_class (string or callable, optional): The cluster class to use\n            when creating a temporary dask cluster. Can be either the full\n            class name (e.g. `\"distributed.LocalCluster\"`), or the class itself.\n        cluster_kwargs (dict, optional): Additional kwargs to pass to the\n            `cluster_class` when creating a temporary dask cluster.\n        adapt_kwargs (dict, optional): Additional kwargs to pass to `cluster.adapt`\n            when creating a temporary dask cluster. Note that adaptive scaling\n            is only enabled if `adapt_kwargs` are provided.\n        client_kwargs (dict, optional): Additional kwargs to use when creating a\n            [`dask.distributed.Client`](https://distributed.dask.org/en/latest/api.html#client).\n\n    Examples:\n        Using a temporary local dask cluster:\n        ```python\n        from prefect import flow\n        from prefect_dask.task_runners import DaskTaskRunner\n\n        @flow(task_runner=DaskTaskRunner)\n        def my_flow():\n            ...\n        ```\n\n        Using a temporary cluster running elsewhere. 
Any Dask cluster class should\n        work, here we use [dask-cloudprovider](https://cloudprovider.dask.org):\n        ```python\n        DaskTaskRunner(\n            cluster_class=\"dask_cloudprovider.FargateCluster\",\n            cluster_kwargs={\n                \"image\": \"prefecthq/prefect:latest\",\n                \"n_workers\": 5,\n            },\n        )\n        ```\n\n        Connecting to an existing dask cluster:\n        ```python\n        DaskTaskRunner(address=\"192.0.2.255:8786\")\n        ```\n    \"\"\"\n\n    def __init__(\n        self,\n        cluster: Optional[distributed.deploy.Cluster] = None,\n        address: str = None,\n        cluster_class: Union[str, Callable] = None,\n        cluster_kwargs: dict = None,\n        adapt_kwargs: dict = None,\n        client_kwargs: dict = None,\n    ):\n        # Validate settings and infer defaults\n        if address:\n            if cluster or cluster_class or cluster_kwargs or adapt_kwargs:\n                raise ValueError(\n                    \"Cannot specify `address` and \"\n                    \"`cluster`/`cluster_class`/`cluster_kwargs`/`adapt_kwargs`\"\n                )\n        elif cluster:\n            if cluster_class or cluster_kwargs:\n                raise ValueError(\n                    \"Cannot specify `cluster` and `cluster_class`/`cluster_kwargs`\"\n                )\n            if not cluster.asynchronous:\n                raise ValueError(\n                    \"The cluster must have `asynchronous=True` to be \"\n                    \"used with `DaskTaskRunner`.\"\n                )\n        else:\n            if isinstance(cluster_class, str):\n                cluster_class = from_qualified_name(cluster_class)\n            else:\n                cluster_class = cluster_class\n\n        # Create a copies of incoming kwargs since we may mutate them\n        cluster_kwargs = cluster_kwargs.copy() if cluster_kwargs else {}\n        adapt_kwargs = adapt_kwargs.copy() if adapt_kwargs else {}\n        client_kwargs = client_kwargs.copy() if client_kwargs else {}\n\n        # Update kwargs defaults\n        client_kwargs.setdefault(\"set_as_default\", False)\n\n        # The user cannot specify async/sync themselves\n        if \"asynchronous\" in client_kwargs:\n            raise ValueError(\n                \"`client_kwargs` cannot set `asynchronous`. \"\n                \"This option is managed by Prefect.\"\n            )\n        if \"asynchronous\" in cluster_kwargs:\n            raise ValueError(\n                \"`cluster_kwargs` cannot set `asynchronous`. 
\"\n                \"This option is managed by Prefect.\"\n            )\n\n        # Store settings\n        self.address = address\n        self.cluster_class = cluster_class\n        self.cluster_kwargs = cluster_kwargs\n        self.adapt_kwargs = adapt_kwargs\n        self.client_kwargs = client_kwargs\n\n        # Runtime attributes\n        self._client: \"distributed.Client\" = None\n        self._cluster: \"distributed.deploy.Cluster\" = cluster\n        self._dask_futures: Dict[str, \"distributed.Future\"] = {}\n\n        super().__init__()\n\n    @property\n    def concurrency_type(self) -> TaskConcurrencyType:\n        return (\n            TaskConcurrencyType.PARALLEL\n            if self.cluster_kwargs.get(\"processes\")\n            else TaskConcurrencyType.CONCURRENT\n        )\n\n    def duplicate(self):\n        \"\"\"\n        Create a new instance of the task runner with the same settings.\n        \"\"\"\n        return type(self)(\n            address=self.address,\n            cluster_class=self.cluster_class,\n            cluster_kwargs=self.cluster_kwargs,\n            adapt_kwargs=self.adapt_kwargs,\n            client_kwargs=self.client_kwargs,\n        )\n\n    def __eq__(self, other: object) -> bool:\n        \"\"\"\n        Check if an instance has the same settings as this task runner.\n        \"\"\"\n        if type(self) == type(other):\n            return (\n                self.address == other.address\n                and self.cluster_class == other.cluster_class\n                and self.cluster_kwargs == other.cluster_kwargs\n                and self.adapt_kwargs == other.adapt_kwargs\n                and self.client_kwargs == other.client_kwargs\n            )\n        else:\n            return NotImplemented\n\n    async def submit(\n        self,\n        key: UUID,\n        call: Callable[..., Awaitable[State[R]]],\n    ) -> None:\n        if not self._started:\n            raise RuntimeError(\n                \"The task runner must be started before submitting work.\"\n            )\n\n        # unpack the upstream call in order to cast Prefect futures to Dask futures\n        # where possible to optimize Dask task scheduling\n        call_kwargs = self._optimize_futures(call.keywords)\n\n        if \"task_run\" in call_kwargs:\n            task_run = call_kwargs[\"task_run\"]\n            flow_run = FlowRunContext.get().flow_run\n            # Dask displays the text up to the first '-' as the name; the task run key\n            # should include the task run name for readability in the Dask console.\n            # For cases where the task run fails and reruns for a retried flow run,\n            # the flow run count is included so that the new key will not match\n            # the failed run's key, therefore not retrieving from the Dask cache.\n            dask_key = f\"{task_run.name}-{task_run.id.hex}-{flow_run.run_count}\"\n        else:\n            dask_key = str(key)\n\n        self._dask_futures[key] = self._client.submit(\n            call.func,\n            key=dask_key,\n            # Dask defaults to treating functions are pure, but we set this here for\n            # explicit expectations. If this task run is submitted to Dask twice, the\n            # result of the first run should be returned. 
Subsequent runs would return\n            # `Abort` exceptions if they were submitted again.\n            pure=True,\n            **call_kwargs,\n        )\n\n    def _get_dask_future(self, key: UUID) -> \"distributed.Future\":\n        \"\"\"\n        Retrieve the dask future corresponding to a Prefect future.\n        The Dask future is for the `run_fn`, which should return a `State`.\n        \"\"\"\n        return self._dask_futures[key]\n\n    def _optimize_futures(self, expr):\n        def visit_fn(expr):\n            if isinstance(expr, PrefectFuture):\n                dask_future = self._dask_futures.get(expr.key)\n                if dask_future is not None:\n                    return dask_future\n            # Fallback to return the expression unaltered\n            return expr\n\n        return visit_collection(expr, visit_fn=visit_fn, return_data=True)\n\n    async def wait(self, key: UUID, timeout: float = None) -> Optional[State]:\n        future = self._get_dask_future(key)\n        try:\n            return await future.result(timeout=timeout)\n        except distributed.TimeoutError:\n            return None\n        except BaseException as exc:\n            return await exception_to_crashed_state(exc)\n\n    async def _start(self, exit_stack: AsyncExitStack):\n        \"\"\"\n        Start the task runner and prep for context exit.\n        - Creates a cluster if an external address is not set.\n        - Creates a client to connect to the cluster.\n        - Pushes a call to wait for all running futures to complete on exit.\n        \"\"\"\n\n        if self._cluster:\n            self.logger.info(f\"Connecting to existing Dask cluster {self._cluster}\")\n            self._connect_to = self._cluster\n            if self.adapt_kwargs:\n                self._cluster.adapt(**self.adapt_kwargs)\n        elif self.address:\n            self.logger.info(\n                f\"Connecting to an existing Dask cluster at {self.address}\"\n            )\n            self._connect_to = self.address\n        else:\n            self.cluster_class = self.cluster_class or distributed.LocalCluster\n\n            self.logger.info(\n                f\"Creating a new Dask cluster with \"\n                f\"`{to_qualified_name(self.cluster_class)}`\"\n            )\n            self._connect_to = self._cluster = await exit_stack.enter_async_context(\n                self.cluster_class(asynchronous=True, **self.cluster_kwargs)\n            )\n            if self.adapt_kwargs:\n                adapt_response = self._cluster.adapt(**self.adapt_kwargs)\n                if inspect.isawaitable(adapt_response):\n                    await adapt_response\n\n        self._client = await exit_stack.enter_async_context(\n            distributed.Client(\n                self._connect_to, asynchronous=True, **self.client_kwargs\n            )\n        )\n\n        if self._client.dashboard_link:\n            self.logger.info(\n                f\"The Dask dashboard is available at {self._client.dashboard_link}\",\n            )\n\n    def __getstate__(self):\n        \"\"\"\n        Allow the `DaskTaskRunner` to be serialized by dropping\n        the `distributed.Client`, which contains locks.\n        Must be deserialized on a dask worker.\n        \"\"\"\n        data = self.__dict__.copy()\n        data.update({k: None for k in {\"_client\", \"_cluster\", \"_connect_to\"}})\n        return data\n\n    def __setstate__(self, data: dict):\n        \"\"\"\n        Restore the `distributed.Client` by loading 
the client on a dask worker.\n        \"\"\"\n        self.__dict__.update(data)\n        self._client = distributed.get_client()\n
    "},{"location":"integrations/prefect-dask/task_runners/#prefect_dask.task_runners.DaskTaskRunner.duplicate","title":"duplicate","text":"

    Create a new instance of the task runner with the same settings.

    Source code in prefect_dask/task_runners.py
    def duplicate(self):\n    \"\"\"\n    Create a new instance of the task runner with the same settings.\n    \"\"\"\n    return type(self)(\n        address=self.address,\n        cluster_class=self.cluster_class,\n        cluster_kwargs=self.cluster_kwargs,\n        adapt_kwargs=self.adapt_kwargs,\n        client_kwargs=self.client_kwargs,\n    )\n
    "},{"location":"integrations/prefect-dask/usage_guide/","title":"Usage Guide","text":"

    Below is a guide on how to use prefect-dask effectively.

    "},{"location":"integrations/prefect-dask/usage_guide/#running-tasks-on-dask","title":"Running tasks on Dask","text":"

    The DaskTaskRunner is a parallel task runner that submits tasks to the dask.distributed scheduler.

    By default, a temporary Dask cluster is created for the duration of the flow run.

    For example, this flow counts up to 10 in parallel (note that the output is not sequential).

    import time\n\nfrom prefect import flow, task\nfrom prefect_dask import DaskTaskRunner\n\n@task\ndef shout(number):\n    time.sleep(0.5)\n    print(f\"#{number}\")\n\n@flow(task_runner=DaskTaskRunner)\ndef count_to(highest_number):\n    for number in range(highest_number):\n        shout.submit(number)\n\nif __name__ == \"__main__\":\n    count_to(10)\n\n# outputs\n#3\n#7\n#2\n#6\n#4\n#0\n#1\n#5\n#8\n#9\n

    If you already have a Dask cluster running, either local or cloud hosted, you can provide the connection URL via an address argument.

    To configure your flow to use the DaskTaskRunner:

    1. Make sure the prefect-dask collection is installed as described earlier: pip install prefect-dask.
    2. In your flow code, import DaskTaskRunner from prefect_dask.task_runners.
    3. Assign it as the task runner when the flow is defined using the task_runner=DaskTaskRunner argument.

    For example, this flow uses the DaskTaskRunner configured to access an existing Dask cluster at http://my-dask-cluster.

    from prefect import flow\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@flow(task_runner=DaskTaskRunner(address=\"http://my-dask-cluster\"))\ndef my_flow():\n    ...\n

    DaskTaskRunner accepts the following optional parameters:

    • address: Address of a currently running Dask scheduler.
    • cluster_class: The cluster class to use when creating a temporary Dask cluster. It can be either the full class name (for example, "distributed.LocalCluster"), or the class itself.
    • cluster_kwargs: Additional kwargs to pass to the cluster_class when creating a temporary Dask cluster.
    • adapt_kwargs: Additional kwargs to pass to cluster.adapt when creating a temporary Dask cluster. Note that adaptive scaling is only enabled if adapt_kwargs are provided.
    • client_kwargs: Additional kwargs to use when creating a dask.distributed.Client.

    Multiprocessing safety

    Note that, because the DaskTaskRunner uses multiprocessing, calls to flows in scripts must be guarded with if __name__ == \"__main__\": or you will encounter warnings and errors.
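
    For example, a minimal sketch of the guard (the flow name here is a placeholder):

    from prefect import flow
    from prefect_dask import DaskTaskRunner

    @flow(task_runner=DaskTaskRunner())
    def my_flow():
        ...

    if __name__ == "__main__":
        # Guarding the call lets Dask's worker processes re-import this module safely
        my_flow()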

    If you don't provide the address of a Dask scheduler, Prefect creates a temporary local cluster automatically. The number of workers used is based on the number of cores available to your execution environment. The default provides a mix of processes and threads that should work well for most workloads. If you want to specify this explicitly, you can pass values for n_workers or threads_per_worker to cluster_kwargs.

    # Use 4 worker processes, each with 2 threads\nDaskTaskRunner(\n    cluster_kwargs={\"n_workers\": 4, \"threads_per_worker\": 2}\n)\n
    "},{"location":"integrations/prefect-dask/usage_guide/#distributing-dask-collections-across-workers","title":"Distributing Dask collections across workers","text":"

    If you use a Dask collection, such as a dask.DataFrame or dask.Bag, to distribute the work across workers and achieve parallel computations, use one of the context managers get_dask_client or get_async_dask_client:

    import dask\nfrom prefect import flow, task\nfrom prefect_dask import DaskTaskRunner, get_dask_client\n\n@task\ndef compute_task():\n    with get_dask_client() as client:\n        df = dask.datasets.timeseries(\"2000\", \"2001\", partition_freq=\"4w\")\n        summary_df = df.describe().compute()\n    return summary_df\n\n@flow(task_runner=DaskTaskRunner())\ndef dask_flow():\n    prefect_future = compute_task.submit()\n    return prefect_future.result()\n\ndask_flow()\n

    The context managers can be used the same way in both flow run contexts and task run contexts.
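
    For instance, a minimal sketch, mirroring the task example above, that opens get_dask_client directly inside the flow run context:

    import dask
    from prefect import flow
    from prefect_dask import DaskTaskRunner, get_dask_client

    @flow(task_runner=DaskTaskRunner())
    def dask_flow():
        # The same context manager works inside the flow run context
        with get_dask_client() as client:
            df = dask.datasets.timeseries("2000", "2001", partition_freq="4w")
            summary_df = df.describe().compute()
        return summary_df

    dask_flow()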

    Resolving futures in sync client

    Note that, by default, dask_collection.compute() returns concrete values while client.compute(dask_collection) returns Dask Futures. Therefore, if you call client.compute, you must resolve all futures before exiting the context manager by either:

    1. setting sync=True

      with get_dask_client() as client:\n    df = dask.datasets.timeseries(\"2000\", \"2001\", partition_freq=\"4w\")\n    summary_df = client.compute(df.describe(), sync=True)\n

    2. calling result()

      with get_dask_client() as client:\n    df = dask.datasets.timeseries(\"2000\", \"2001\", partition_freq=\"4w\")\n    summary_df = client.compute(df.describe()).result()\n
      For more information, visit the docs on Waiting on Futures.

    There is also an equivalent context manager for asynchronous tasks and flows: get_async_dask_client.

    import asyncio\n\nimport dask\nfrom prefect import flow, task\nfrom prefect_dask import DaskTaskRunner, get_async_dask_client\n\n@task\nasync def compute_task():\n    async with get_async_dask_client() as client:\n        df = dask.datasets.timeseries(\"2000\", \"2001\", partition_freq=\"4w\")\n        summary_df = await client.compute(df.describe())\n    return summary_df\n\n@flow(task_runner=DaskTaskRunner())\nasync def dask_flow():\n    prefect_future = await compute_task.submit()\n    return await prefect_future.result()\n\nasyncio.run(dask_flow())\n

    Resolving futures in async client

    With the async client, you do not need to set sync=True or call result().

    However, you must await client.compute(dask_collection) before exiting the context manager.

    To invoke compute from the Dask collection itself, set sync=False and await the result before exiting the context manager: await dask_collection.compute(sync=False).

    "},{"location":"integrations/prefect-dask/usage_guide/#using-a-temporary-cluster","title":"Using a temporary cluster","text":"

    The DaskTaskRunner is capable of creating a temporary cluster using any of Dask's cluster-manager options. This can be useful when you want each flow run to have its own Dask cluster, allowing for per-flow adaptive scaling.

    To configure, you need to provide a cluster_class. This can be:

    • A string specifying the import path to the cluster class (for example, \"dask_cloudprovider.aws.FargateCluster\")
    • The cluster class itself
    • A function for creating a custom cluster

    You can also configure cluster_kwargs, which takes a dictionary of keyword arguments to pass to cluster_class when starting the flow run.

    For example, to configure a flow to use a temporary dask_cloudprovider.aws.FargateCluster with 4 workers running with an image named my-prefect-image:

    DaskTaskRunner(\n    cluster_class=\"dask_cloudprovider.aws.FargateCluster\",\n    cluster_kwargs={\"n_workers\": 4, \"image\": \"my-prefect-image\"},\n)\n
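
    The cluster_class can also be passed as the class itself or as a callable that builds a cluster. A minimal sketch, assuming distributed.LocalCluster and a hypothetical make_cluster factory:

    from distributed import LocalCluster
    from prefect_dask import DaskTaskRunner

    # Pass the cluster class object directly instead of its import path
    runner = DaskTaskRunner(
        cluster_class=LocalCluster,
        cluster_kwargs={"n_workers": 2},
    )

    # Or pass a callable that returns a cluster (make_cluster is a hypothetical factory)
    def make_cluster(**kwargs):
        return LocalCluster(n_workers=2, **kwargs)

    runner = DaskTaskRunner(cluster_class=make_cluster)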
    "},{"location":"integrations/prefect-dask/usage_guide/#connecting-to-an-existing-cluster","title":"Connecting to an existing cluster","text":"

    Multiple Prefect flow runs can all use the same existing Dask cluster. You might manage a single long-running Dask cluster (maybe using the Dask Helm Chart) and configure flows to connect to it during execution. This has a few downsides when compared to using a temporary cluster (as described above):

    • All workers in the cluster must have dependencies installed for all flows you intend to run.
    • Multiple flow runs may compete for resources. Dask tries to do a good job sharing resources between tasks, but you may still run into issues.

    That said, you may prefer managing a single long-running cluster.

    To configure a DaskTaskRunner to connect to an existing cluster, pass in the address of the scheduler to the address argument:

    # Connect to an existing cluster running at a specified address\nDaskTaskRunner(address=\"tcp://...\")\n
    "},{"location":"integrations/prefect-dask/usage_guide/#adaptive-scaling","title":"Adaptive scaling","text":"

    One nice feature of using a DaskTaskRunner is the ability to scale adaptively to the workload. Instead of specifying n_workers as a fixed number, this lets you specify a minimum and maximum number of workers to use, and the dask cluster will scale up and down as needed.

    To do this, you can pass adapt_kwargs to DaskTaskRunner. This takes the following fields:

    • maximum (int or None, optional): the maximum number of workers to scale to. Set to None for no maximum.
    • minimum (int or None, optional): the minimum number of workers to scale to. Set to None for no minimum.

    For example, here we configure a flow to run on a FargateCluster scaling up to at most 10 workers.

    DaskTaskRunner(\n    cluster_class=\"dask_cloudprovider.aws.FargateCluster\",\n    adapt_kwargs={\"maximum\": 10}\n)\n
    "},{"location":"integrations/prefect-dask/usage_guide/#dask-annotations","title":"Dask annotations","text":"

    Dask annotations can be used to further control the behavior of tasks.

    For example, we can set the priority of tasks in the Dask scheduler:

    import dask\nfrom prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef show(x):\n    print(x)\n\n\n@flow(task_runner=DaskTaskRunner())\ndef my_flow():\n    with dask.annotate(priority=-10):\n        future = show(1)  # low priority task\n\n    with dask.annotate(priority=10):\n        future = show(2)  # high priority task\n

    Another common use case is resource annotations:

    import dask\nfrom prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef show(x):\n    print(x)\n\n# Create a `LocalCluster` with some resource annotations\n# Annotations are abstract in dask and not inferred from your system.\n# Here, we claim that our system has 1 GPU and 1 process available per worker\n@flow(\n    task_runner=DaskTaskRunner(\n        cluster_kwargs={\"n_workers\": 1, \"resources\": {\"GPU\": 1, \"process\": 1}}\n    )\n)\ndef my_flow():\n    with dask.annotate(resources={'GPU': 1}):\n        future = show(0)  # this task requires 1 GPU resource on a worker\n\n    with dask.annotate(resources={'process': 1}):\n        # These tasks each require 1 process on a worker; because we've \n        # specified that our cluster has 1 process per worker and 1 worker,\n        # these tasks will run sequentially\n        future = show(1)\n        future = show(2)\n        future = show(3)\n

    For more tips on how to use tasks and flows in a Collection, check out Using Collections!

    "},{"location":"integrations/prefect-dask/utils/","title":"Utils","text":""},{"location":"integrations/prefect-dask/utils/#prefect_dask.utils","title":"prefect_dask.utils","text":"

    Utils to use alongside prefect-dask.

    "},{"location":"integrations/prefect-dask/utils/#prefect_dask.utils.get_async_dask_client","title":"get_async_dask_client async","text":"

    Yields a temporary asynchronous dask client; this is useful for parallelizing operations on dask collections, such as a dask.DataFrame or dask.Bag.

    Without invoking this, workers do not automatically get a client to connect to the full cluster. Therefore, it will attempt to perform work within the worker itself serially, potentially overwhelming the single worker.

    Parameters:

    timeout (Optional[Union[int, float, str, timedelta]], default None): Timeout after which to error out; has no effect in flow run contexts because the client has already started. Defaults to the distributed.comm.timeouts.connect configuration value.

    client_kwargs (Dict[str, Any], default {}): Additional keyword arguments to pass to distributed.Client; overwrites inherited keyword arguments from the task runner, if any.

    Yields:

    AsyncGenerator[Client, None]: A temporary asynchronous dask client.

    Examples:

    Use get_async_dask_client to distribute work across workers.

    import dask\nfrom prefect import flow, task\nfrom prefect_dask import DaskTaskRunner, get_async_dask_client\n\n@task\nasync def compute_task():\n    async with get_async_dask_client(timeout=\"120s\") as client:\n        df = dask.datasets.timeseries(\"2000\", \"2001\", partition_freq=\"4w\")\n        summary_df = await client.compute(df.describe())\n    return summary_df\n\n@flow(task_runner=DaskTaskRunner())\nasync def dask_flow():\n    prefect_future = await compute_task.submit()\n    return await prefect_future.result()\n\nasyncio.run(dask_flow())\n

    Source code in prefect_dask/utils.py
    @asynccontextmanager\nasync def get_async_dask_client(\n    timeout: Optional[Union[int, float, str, timedelta]] = None,\n    **client_kwargs: Dict[str, Any],\n) -> AsyncGenerator[Client, None]:\n    \"\"\"\n    Yields a temporary asynchronous dask client; this is useful\n    for parallelizing operations on dask collections,\n    such as a `dask.DataFrame` or `dask.Bag`.\n\n    Without invoking this, workers do not automatically get a client to connect\n    to the full cluster. Therefore, it will attempt perform work within the\n    worker itself serially, and potentially overwhelming the single worker.\n\n    Args:\n        timeout: Timeout after which to error out; has no effect in\n            flow run contexts because the client has already started;\n            Defaults to the `distributed.comm.timeouts.connect`\n            configuration value.\n        client_kwargs: Additional keyword arguments to pass to\n            `distributed.Client`, and overwrites inherited keyword arguments\n            from the task runner, if any.\n\n    Yields:\n        A temporary asynchronous dask client.\n\n    Examples:\n        Use `get_async_dask_client` to distribute work across workers.\n        ```python\n        import dask\n        from prefect import flow, task\n        from prefect_dask import DaskTaskRunner, get_async_dask_client\n\n        @task\n        async def compute_task():\n            async with get_async_dask_client(timeout=\"120s\") as client:\n                df = dask.datasets.timeseries(\"2000\", \"2001\", partition_freq=\"4w\")\n                summary_df = await client.compute(df.describe())\n            return summary_df\n\n        @flow(task_runner=DaskTaskRunner())\n        async def dask_flow():\n            prefect_future = await compute_task.submit()\n            return await prefect_future.result()\n\n        asyncio.run(dask_flow())\n        ```\n    \"\"\"\n    client_kwargs = _generate_client_kwargs(\n        async_client=True, timeout=timeout, **client_kwargs\n    )\n    async with Client(**client_kwargs) as client:\n        yield client\n
    "},{"location":"integrations/prefect-dask/utils/#prefect_dask.utils.get_dask_client","title":"get_dask_client","text":"

    Yields a temporary synchronous dask client; this is useful for parallelizing operations on dask collections, such as a dask.DataFrame or dask.Bag.

    Without invoking this, workers do not automatically get a client to connect to the full cluster. Therefore, it will attempt to perform work within the worker itself serially, potentially overwhelming the single worker.

    When in an async context, we recommend using get_async_dask_client instead.

    Parameters:

    timeout (Optional[Union[int, float, str, timedelta]], default None): Timeout after which to error out; has no effect in flow run contexts because the client has already started. Defaults to the distributed.comm.timeouts.connect configuration value.

    client_kwargs (Dict[str, Any], default {}): Additional keyword arguments to pass to distributed.Client; overwrites inherited keyword arguments from the task runner, if any.

    Yields:

    Client: A temporary synchronous dask client.

    Examples:

    Use get_dask_client to distribute work across workers.

    import dask\nfrom prefect import flow, task\nfrom prefect_dask import DaskTaskRunner, get_dask_client\n\n@task\ndef compute_task():\n    with get_dask_client(timeout=\"120s\") as client:\n        df = dask.datasets.timeseries(\"2000\", \"2001\", partition_freq=\"4w\")\n        summary_df = client.compute(df.describe()).result()\n    return summary_df\n\n@flow(task_runner=DaskTaskRunner())\ndef dask_flow():\n    prefect_future = compute_task.submit()\n    return prefect_future.result()\n\ndask_flow()\n

    Source code in prefect_dask/utils.py
    @contextmanager\ndef get_dask_client(\n    timeout: Optional[Union[int, float, str, timedelta]] = None,\n    **client_kwargs: Dict[str, Any],\n) -> Generator[Client, None, None]:\n    \"\"\"\n    Yields a temporary synchronous dask client; this is useful\n    for parallelizing operations on dask collections,\n    such as a `dask.DataFrame` or `dask.Bag`.\n\n    Without invoking this, workers do not automatically get a client to connect\n    to the full cluster. Therefore, it will attempt perform work within the\n    worker itself serially, and potentially overwhelming the single worker.\n\n    When in an async context, we recommend using `get_async_dask_client` instead.\n\n    Args:\n        timeout: Timeout after which to error out; has no effect in\n            flow run contexts because the client has already started;\n            Defaults to the `distributed.comm.timeouts.connect`\n            configuration value.\n        client_kwargs: Additional keyword arguments to pass to\n            `distributed.Client`, and overwrites inherited keyword arguments\n            from the task runner, if any.\n\n    Yields:\n        A temporary synchronous dask client.\n\n    Examples:\n        Use `get_dask_client` to distribute work across workers.\n        ```python\n        import dask\n        from prefect import flow, task\n        from prefect_dask import DaskTaskRunner, get_dask_client\n\n        @task\n        def compute_task():\n            with get_dask_client(timeout=\"120s\") as client:\n                df = dask.datasets.timeseries(\"2000\", \"2001\", partition_freq=\"4w\")\n                summary_df = client.compute(df.describe()).result()\n            return summary_df\n\n        @flow(task_runner=DaskTaskRunner())\n        def dask_flow():\n            prefect_future = compute_task.submit()\n            return prefect_future.result()\n\n        dask_flow()\n        ```\n    \"\"\"\n    client_kwargs = _generate_client_kwargs(\n        async_client=False, timeout=timeout, **client_kwargs\n    )\n    with Client(**client_kwargs) as client:\n        yield client\n
    "},{"location":"integrations/prefect-databricks/","title":"prefect-databricks","text":"

    "},{"location":"integrations/prefect-databricks/#welcome","title":"Welcome!","text":"

    Prefect integrations for interacting with Databricks

    The tasks within this collection were created by a code generator using the service's OpenAPI spec.

    The service's REST API documentation can be found here.

    "},{"location":"integrations/prefect-databricks/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-databricks/#python-setup","title":"Python setup","text":"

    Requires an installation of Python 3.8+.

    We recommend using a Python virtual environment manager such as pipenv, conda or virtualenv.

    These tasks are designed to work with Prefect 2. For more information about how to use Prefect, please refer to the Prefect documentation.

    "},{"location":"integrations/prefect-databricks/#installation","title":"Installation","text":"

    Install prefect-databricks with pip:

    pip install prefect-databricks\n
    "},{"location":"integrations/prefect-databricks/#lists-jobs-on-the-databricks-instance","title":"Lists jobs on the Databricks instance","text":"
    from prefect import flow\nfrom prefect_databricks import DatabricksCredentials\nfrom prefect_databricks.jobs import jobs_list\n\n\n@flow\ndef example_execute_endpoint_flow():\n    databricks_credentials = DatabricksCredentials.load(\"my-block\")\n    jobs = jobs_list(\n        databricks_credentials,\n        limit=5\n    )\n    return jobs\n\nexample_execute_endpoint_flow()\n
    "},{"location":"integrations/prefect-databricks/#use-with_options-to-customize-options-on-any-existing-task-or-flow","title":"Use with_options to customize options on any existing task or flow","text":"
    custom_example_execute_endpoint_flow = example_execute_endpoint_flow.with_options(\n    name=\"My custom flow name\",\n    retries=2,\n    retry_delay_seconds=10,\n)\n
    "},{"location":"integrations/prefect-databricks/#launch-a-new-cluster-and-run-a-databricks-notebook","title":"Launch a new cluster and run a Databricks notebook","text":"

    A notebook named example.ipynb on Databricks that accepts a name parameter:

    name = dbutils.widgets.get(\"name\")\nmessage = f\"Don't worry {name}, I got your request! Welcome to prefect-databricks!\"\nprint(message)\n

    Prefect flow that launches a new cluster to run example.ipynb:

    from prefect import flow\nfrom prefect_databricks import DatabricksCredentials\nfrom prefect_databricks.jobs import jobs_runs_submit\nfrom prefect_databricks.models.jobs import (\n    AutoScale,\n    AwsAttributes,\n    JobTaskSettings,\n    NotebookTask,\n    NewCluster,\n)\n\n\n@flow\ndef jobs_runs_submit_flow(notebook_path, **base_parameters):\n    databricks_credentials = DatabricksCredentials.load(\"my-block\")\n\n    # specify new cluster settings\n    aws_attributes = AwsAttributes(\n        availability=\"SPOT\",\n        zone_id=\"us-west-2a\",\n        ebs_volume_type=\"GENERAL_PURPOSE_SSD\",\n        ebs_volume_count=3,\n        ebs_volume_size=100,\n    )\n    auto_scale = AutoScale(min_workers=1, max_workers=2)\n    new_cluster = NewCluster(\n        aws_attributes=aws_attributes,\n        autoscale=auto_scale,\n        node_type_id=\"m4.large\",\n        spark_version=\"10.4.x-scala2.12\",\n        spark_conf={\"spark.speculation\": True},\n    )\n\n    # specify notebook to use and parameters to pass\n    notebook_task = NotebookTask(\n        notebook_path=notebook_path,\n        base_parameters=base_parameters,\n    )\n\n    # compile job task settings\n    job_task_settings = JobTaskSettings(\n        new_cluster=new_cluster,\n        notebook_task=notebook_task,\n        task_key=\"prefect-task\"\n    )\n\n    run = jobs_runs_submit(\n        databricks_credentials=databricks_credentials,\n        run_name=\"prefect-job\",\n        tasks=[job_task_settings]\n    )\n\n    return run\n\n\njobs_runs_submit_flow(\"/Users/username@gmail.com/example.ipynb\", name=\"Marvin\")\n

    Note that instead of using the built-in models, you may also pass valid JSON. For example, AutoScale(min_workers=1, max_workers=2) is equivalent to {"min_workers": 1, "max_workers": 2}.
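
    As a minimal sketch, the new-cluster settings from the flow above could equally be written as plain dictionaries (the values are the same placeholders used earlier):

    new_cluster = {
        "aws_attributes": {"availability": "SPOT", "zone_id": "us-west-2a"},
        "autoscale": {"min_workers": 1, "max_workers": 2},
        "node_type_id": "m4.large",
        "spark_version": "10.4.x-scala2.12",
    }

    job_task_settings = {
        "new_cluster": new_cluster,
        "notebook_task": {"notebook_path": "/Users/username@gmail.com/example.ipynb"},
        "task_key": "prefect-task",
    }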

    "},{"location":"integrations/prefect-databricks/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-databricks/credentials/#prefect_databricks.credentials","title":"prefect_databricks.credentials","text":"

    Credential classes used to perform authenticated interactions with Databricks

    "},{"location":"integrations/prefect-databricks/credentials/#prefect_databricks.credentials.DatabricksCredentials","title":"DatabricksCredentials","text":"

    Bases: Block

    Block used to manage Databricks authentication.

    Attributes:

    databricks_instance (str): Databricks instance used in formatting the endpoint URL.

    token (SecretStr): The token to authenticate with Databricks.

    client_kwargs (Optional[Dict[str, Any]]): Additional keyword arguments to pass to AsyncClient.

    Examples:

    Load stored Databricks credentials:

    from prefect_databricks import DatabricksCredentials\ndatabricks_credentials_block = DatabricksCredentials.load(\"BLOCK_NAME\")\n
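
    A minimal sketch of creating and saving the block in the first place (the instance URL, token, and block name are placeholders):

    from prefect_databricks import DatabricksCredentials

    credentials = DatabricksCredentials(
        databricks_instance="dbc-1234abcd-5678.cloud.databricks.com",  # placeholder instance
        token="my-databricks-token",  # placeholder token
    )
    credentials.save("BLOCK_NAME", overwrite=True)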

    Source code in prefect_databricks/credentials.py
    class DatabricksCredentials(Block):\n    \"\"\"\n    Block used to manage Databricks authentication.\n\n    Attributes:\n        databricks_instance:\n            Databricks instance used in formatting the endpoint URL.\n        token: The token to authenticate with Databricks.\n        client_kwargs: Additional keyword arguments to pass to AsyncClient.\n\n    Examples:\n        Load stored Databricks credentials:\n        ```python\n        from prefect_databricks import DatabricksCredentials\n        databricks_credentials_block = DatabricksCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Databricks Credentials\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5GTHI1PH2dTiantfps6Fnc/1c750fab7f4c14ea1b93a62b9fea6a94/databricks_logo_icon_170295.png?h=250\"  # noqa\n\n    databricks_instance: str = Field(\n        default=...,\n        description=\"Databricks instance used in formatting the endpoint URL.\",\n    )\n    token: SecretStr = Field(\n        default=..., description=\"The token to authenticate with Databricks.\"\n    )\n    client_kwargs: Optional[Dict[str, Any]] = Field(\n        default=None, description=\"Additional keyword arguments to pass to AsyncClient.\"\n    )\n\n    def get_client(self) -> AsyncClient:\n        \"\"\"\n        Gets an Databricks REST AsyncClient.\n\n        Returns:\n            An Databricks REST AsyncClient.\n\n        Example:\n            Gets a Databricks REST AsyncClient.\n            ```python\n            from prefect import flow\n            from prefect_databricks import DatabricksCredentials\n\n            @flow\n            def example_get_client_flow():\n                token = \"consumer_key\"\n                databricks_credentials = DatabricksCredentials(token=token)\n                client = databricks_credentials.get_client()\n                return client\n\n            example_get_client_flow()\n            ```\n        \"\"\"\n        base_url = f\"https://{self.databricks_instance}/api/\"\n\n        client_kwargs = self.client_kwargs or {}\n        client_kwargs[\"headers\"] = {\n            \"Authorization\": f\"Bearer {self.token.get_secret_value()}\"\n        }\n        client = AsyncClient(base_url=base_url, **client_kwargs)\n        return client\n
    "},{"location":"integrations/prefect-databricks/credentials/#prefect_databricks.credentials.DatabricksCredentials.get_client","title":"get_client","text":"

    Gets a Databricks REST AsyncClient.

    Returns:

    AsyncClient: A Databricks REST AsyncClient.

    Example

    Gets a Databricks REST AsyncClient.

    from prefect import flow\nfrom prefect_databricks import DatabricksCredentials\n\n@flow\ndef example_get_client_flow():\n    token = \"consumer_key\"\n    databricks_credentials = DatabricksCredentials(token=token)\n    client = databricks_credentials.get_client()\n    return client\n\nexample_get_client_flow()\n

    Source code in prefect_databricks/credentials.py
    def get_client(self) -> AsyncClient:\n    \"\"\"\n    Gets an Databricks REST AsyncClient.\n\n    Returns:\n        An Databricks REST AsyncClient.\n\n    Example:\n        Gets a Databricks REST AsyncClient.\n        ```python\n        from prefect import flow\n        from prefect_databricks import DatabricksCredentials\n\n        @flow\n        def example_get_client_flow():\n            token = \"consumer_key\"\n            databricks_credentials = DatabricksCredentials(token=token)\n            client = databricks_credentials.get_client()\n            return client\n\n        example_get_client_flow()\n        ```\n    \"\"\"\n    base_url = f\"https://{self.databricks_instance}/api/\"\n\n    client_kwargs = self.client_kwargs or {}\n    client_kwargs[\"headers\"] = {\n        \"Authorization\": f\"Bearer {self.token.get_secret_value()}\"\n    }\n    client = AsyncClient(base_url=base_url, **client_kwargs)\n    return client\n
    "},{"location":"integrations/prefect-databricks/flows/","title":"Flows","text":""},{"location":"integrations/prefect-databricks/flows/#prefect_databricks.flows","title":"prefect_databricks.flows","text":"

    Module containing flows for interacting with Databricks

    "},{"location":"integrations/prefect-databricks/flows/#prefect_databricks.flows.DatabricksJobInternalError","title":"DatabricksJobInternalError","text":"

    Bases: Exception

    Raised when Databricks jobs runs submit encounters internal error

    Source code in prefect_databricks/flows.py
    class DatabricksJobInternalError(Exception):\n    \"\"\"Raised when Databricks jobs runs submit encounters internal error\"\"\"\n
    "},{"location":"integrations/prefect-databricks/flows/#prefect_databricks.flows.DatabricksJobRunTimedOut","title":"DatabricksJobRunTimedOut","text":"

    Bases: Exception

    Raised when a Databricks jobs run does not complete within the configured max wait seconds

    Source code in prefect_databricks/flows.py
    class DatabricksJobRunTimedOut(Exception):\n    \"\"\"\n    Raised when Databricks jobs runs does not complete in the configured max\n    wait seconds\n    \"\"\"\n
    "},{"location":"integrations/prefect-databricks/flows/#prefect_databricks.flows.DatabricksJobSkipped","title":"DatabricksJobSkipped","text":"

    Bases: Exception

    Raised when a Databricks jobs runs submit is skipped

    Source code in prefect_databricks/flows.py
    class DatabricksJobSkipped(Exception):\n    \"\"\"Raised when Databricks jobs runs submit skips\"\"\"\n
    "},{"location":"integrations/prefect-databricks/flows/#prefect_databricks.flows.DatabricksJobTerminated","title":"DatabricksJobTerminated","text":"

    Bases: Exception

    Raised when a Databricks jobs runs submit terminates with a non-successful result state

    Source code in prefect_databricks/flows.py
    class DatabricksJobTerminated(Exception):\n    \"\"\"Raised when Databricks jobs runs submit terminates\"\"\"\n
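
    Example (illustrative sketch, not from the library docs): catch these exceptions around jobs_runs_submit_and_wait_for_completion. The flow name, block name, and handling choices below are placeholders.

    from prefect import flow\nfrom prefect_databricks import DatabricksCredentials\nfrom prefect_databricks.flows import (\n    DatabricksJobInternalError,\n    DatabricksJobRunTimedOut,\n    DatabricksJobSkipped,\n    DatabricksJobTerminated,\n    jobs_runs_submit_and_wait_for_completion,\n)\n\n@flow\nasync def handle_databricks_job_states(tasks):\n    databricks_credentials = await DatabricksCredentials.load(\"BLOCK_NAME\")\n    try:\n        return await jobs_runs_submit_and_wait_for_completion(\n            databricks_credentials=databricks_credentials,\n            tasks=tasks,\n        )\n    except DatabricksJobSkipped:\n        # the submitted run was skipped; treat it as a no-op here\n        return None\n    except (DatabricksJobTerminated, DatabricksJobInternalError, DatabricksJobRunTimedOut):\n        # surface terminated, errored, or timed-out runs to the caller\n        raise\n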
    "},{"location":"integrations/prefect-databricks/flows/#prefect_databricks.flows.jobs_runs_submit_and_wait_for_completion","title":"jobs_runs_submit_and_wait_for_completion async","text":"

    Flow that triggers a job run and waits for the triggered run to complete.

    Parameters:

    Name Type Description Default databricks_credentials DatabricksCredentials

    Credentials to use for authentication with Databricks.

    required tasks List[RunSubmitTaskSettings]

    Tasks to run, e.g.

    [\n    {\n        \"task_key\": \"Sessionize\",\n        \"description\": \"Extracts session data from events\",\n        \"depends_on\": [],\n        \"existing_cluster_id\": \"0923-164208-meows279\",\n        \"spark_jar_task\": {\n            \"main_class_name\": \"com.databricks.Sessionize\",\n            \"parameters\": [\"--data\", \"dbfs:/path/to/data.json\"],\n        },\n        \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}],\n        \"timeout_seconds\": 86400,\n    },\n    {\n        \"task_key\": \"Orders_Ingest\",\n        \"description\": \"Ingests order data\",\n        \"depends_on\": [],\n        \"existing_cluster_id\": \"0923-164208-meows279\",\n        \"spark_jar_task\": {\n            \"main_class_name\": \"com.databricks.OrdersIngest\",\n            \"parameters\": [\"--data\", \"dbfs:/path/to/order-data.json\"],\n        },\n        \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}],\n        \"timeout_seconds\": 86400,\n    },\n    {\n        \"task_key\": \"Match\",\n        \"description\": \"Matches orders with user sessions\",\n        \"depends_on\": [\n            {\"task_key\": \"Orders_Ingest\"},\n            {\"task_key\": \"Sessionize\"},\n        ],\n        \"new_cluster\": {\n            \"spark_version\": \"7.3.x-scala2.12\",\n            \"node_type_id\": \"i3.xlarge\",\n            \"spark_conf\": {\"spark.speculation\": True},\n            \"aws_attributes\": {\n                \"availability\": \"SPOT\",\n                \"zone_id\": \"us-west-2a\",\n            },\n            \"autoscale\": {\"min_workers\": 2, \"max_workers\": 16},\n        },\n        \"notebook_task\": {\n            \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n            \"base_parameters\": {\"name\": \"John Doe\", \"age\": \"35\"},\n        },\n        \"timeout_seconds\": 86400,\n    },\n]\n

    None run_name Optional[str]

    An optional name for the run. The default value is Untitled, e.g. A multitask job run.

    None git_source Optional[GitSource]

    This functionality is in Public Preview. An optional specification for a remote repository containing the notebooks used by this job's notebook tasks. Key-values: - git_url: URL of the repository to be cloned by this job. The maximum length is 300 characters, e.g. https://github.com/databricks/databricks-cli. - git_provider: Unique identifier of the service used to host the Git repository. The value is case insensitive, e.g. github. - git_branch: Name of the branch to be checked out and used by this job. This field cannot be specified in conjunction with git_tag or git_commit. The maximum length is 255 characters, e.g. main. - git_tag: Name of the tag to be checked out and used by this job. This field cannot be specified in conjunction with git_branch or git_commit. The maximum length is 255 characters, e.g. release-1.0.0. - git_commit: Commit to be checked out and used by this job. This field cannot be specified in conjunction with git_branch or git_tag. The maximum length is 64 characters, e.g. e0056d01. - git_snapshot: Read-only state of the remote repository at the time the job was run. This field is only included on job runs.

    None timeout_seconds Optional[int]

    An optional timeout applied to each run of this job. The default behavior is to have no timeout, e.g. 86400.

    None idempotency_token Optional[str]

    An optional token that can be used to guarantee the idempotency of job run requests. If a run with the provided token already exists, the request does not create a new run but returns the ID of the existing run instead. If a run with the provided token is deleted, an error is returned. If you specify the idempotency token, upon failure you can retry until the request succeeds. Databricks guarantees that exactly one run is launched with that idempotency token. This token must have at most 64 characters. For more information, see How to ensure idempotency for jobs, e.g. 8f018174-4792-40d5-bcbc-3e6a527352c8.

    None access_control_list Optional[List[AccessControlRequest]]

    List of permissions to set on the job.

    None max_wait_seconds int

    Maximum number of seconds to wait for the entire flow to complete.

    900 poll_frequency_seconds int

    Number of seconds to wait in between checks for run completion.

    10 return_metadata bool

    When True, the method returns a tuple of notebook output and job run metadata; by default, only the notebook output is returned

    False **jobs_runs_submit_kwargs Dict[str, Any]

    Additional keyword arguments to pass to jobs_runs_submit.

    {}

    Returns:

    Type Description Union[NotebookOutput, Tuple[NotebookOutput, JobMetadata]]

    Either a dict or a tuple (depending on return_metadata) consisting of:

    • task_notebook_outputs: dictionary mapping task keys to their corresponding notebook output; this is the only object returned by default from this method
    • jobs_runs_metadata: dictionary containing IDs of the jobs runs tasks; this is only returned if return_metadata=True.

    Examples:

    Submit jobs runs and wait.

    from prefect import flow\nfrom prefect_databricks import DatabricksCredentials\nfrom prefect_databricks.flows import jobs_runs_submit_and_wait_for_completion\nfrom prefect_databricks.models.jobs import (\n    AutoScale,\n    AwsAttributes,\n    JobTaskSettings,\n    NotebookTask,\n    NewCluster,\n)\n\n@flow\nasync def jobs_runs_submit_and_wait_for_completion_flow(notebook_path, **base_parameters):\n    databricks_credentials = await DatabricksCredentials.load(\"BLOCK_NAME\")\n\n    # specify new cluster settings\n    aws_attributes = AwsAttributes(\n        availability=\"SPOT\",\n        zone_id=\"us-west-2a\",\n        ebs_volume_type=\"GENERAL_PURPOSE_SSD\",\n        ebs_volume_count=3,\n        ebs_volume_size=100,\n    )\n    auto_scale = AutoScale(min_workers=1, max_workers=2)\n    new_cluster = NewCluster(\n        aws_attributes=aws_attributes,\n        autoscale=auto_scale,\n        node_type_id=\"m4.large\",\n        spark_version=\"10.4.x-scala2.12\",\n        spark_conf={\"spark.speculation\": True},\n    )\n\n    # specify notebook to use and parameters to pass\n    notebook_task = NotebookTask(\n        notebook_path=notebook_path,\n        base_parameters=base_parameters,\n    )\n\n    # compile job task settings\n    job_task_settings = JobTaskSettings(\n        new_cluster=new_cluster,\n        notebook_task=notebook_task,\n        task_key=\"prefect-task\"\n    )\n\n    multi_task_runs = await jobs_runs_submit_and_wait_for_completion(\n        databricks_credentials=databricks_credentials,\n        run_name=\"prefect-job\",\n        tasks=[job_task_settings]\n    )\n\n    return multi_task_runs\n
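
    Example (illustrative sketch, not from the library docs): request run metadata as well. With return_metadata=True the flow returns a (notebook outputs, run metadata) tuple; the idempotency_token reuses the placeholder value from the parameter description above, and the flow and block names are illustrative.

    from typing import Dict, List\n\nfrom prefect import flow\nfrom prefect_databricks import DatabricksCredentials\nfrom prefect_databricks.flows import jobs_runs_submit_and_wait_for_completion\n\n@flow\nasync def submit_with_metadata_flow(tasks: List[Dict]):\n    databricks_credentials = await DatabricksCredentials.load(\"BLOCK_NAME\")\n    # return_metadata=True yields both notebook outputs and jobs runs metadata\n    notebook_outputs, runs_metadata = await jobs_runs_submit_and_wait_for_completion(\n        databricks_credentials=databricks_credentials,\n        run_name=\"prefect-job\",\n        tasks=tasks,\n        idempotency_token=\"8f018174-4792-40d5-bcbc-3e6a527352c8\",\n        return_metadata=True,\n    )\n    return notebook_outputs, runs_metadata\n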

    Source code in prefect_databricks/flows.py
    @flow(\n    name=\"Submit jobs runs and wait for completion\",\n    description=(\n        \"Triggers a Databricks jobs runs and waits for the \"\n        \"triggered runs to complete.\"\n    ),\n)\nasync def jobs_runs_submit_and_wait_for_completion(\n    databricks_credentials: DatabricksCredentials,\n    tasks: List[RunSubmitTaskSettings] = None,\n    run_name: Optional[str] = None,\n    max_wait_seconds: int = 900,\n    poll_frequency_seconds: int = 10,\n    git_source: Optional[GitSource] = None,\n    timeout_seconds: Optional[int] = None,\n    idempotency_token: Optional[str] = None,\n    access_control_list: Optional[List[AccessControlRequest]] = None,\n    return_metadata: bool = False,\n    **jobs_runs_submit_kwargs: Dict[str, Any],\n) -> Union[NotebookOutput, Tuple[NotebookOutput, JobMetadata]]:\n    \"\"\"\n    Flow that triggers a job run and waits for the triggered run to complete.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        tasks: Tasks to run, e.g.\n            ```\n            [\n                {\n                    \"task_key\": \"Sessionize\",\n                    \"description\": \"Extracts session data from events\",\n                    \"depends_on\": [],\n                    \"existing_cluster_id\": \"0923-164208-meows279\",\n                    \"spark_jar_task\": {\n                        \"main_class_name\": \"com.databricks.Sessionize\",\n                        \"parameters\": [\"--data\", \"dbfs:/path/to/data.json\"],\n                    },\n                    \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}],\n                    \"timeout_seconds\": 86400,\n                },\n                {\n                    \"task_key\": \"Orders_Ingest\",\n                    \"description\": \"Ingests order data\",\n                    \"depends_on\": [],\n                    \"existing_cluster_id\": \"0923-164208-meows279\",\n                    \"spark_jar_task\": {\n                        \"main_class_name\": \"com.databricks.OrdersIngest\",\n                        \"parameters\": [\"--data\", \"dbfs:/path/to/order-data.json\"],\n                    },\n                    \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}],\n                    \"timeout_seconds\": 86400,\n                },\n                {\n                    \"task_key\": \"Match\",\n                    \"description\": \"Matches orders with user sessions\",\n                    \"depends_on\": [\n                        {\"task_key\": \"Orders_Ingest\"},\n                        {\"task_key\": \"Sessionize\"},\n                    ],\n                    \"new_cluster\": {\n                        \"spark_version\": \"7.3.x-scala2.12\",\n                        \"node_type_id\": \"i3.xlarge\",\n                        \"spark_conf\": {\"spark.speculation\": True},\n                        \"aws_attributes\": {\n                            \"availability\": \"SPOT\",\n                            \"zone_id\": \"us-west-2a\",\n                        },\n                        \"autoscale\": {\"min_workers\": 2, \"max_workers\": 16},\n                    },\n                    \"notebook_task\": {\n                        \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n                        \"base_parameters\": {\"name\": \"John Doe\", \"age\": \"35\"},\n                    },\n                    \"timeout_seconds\": 86400,\n                },\n      
      ]\n            ```\n        run_name:\n            An optional name for the run. The default value is `Untitled`, e.g. `A\n            multitask job run`.\n        git_source:\n            This functionality is in Public Preview.  An optional specification for\n            a remote repository containing the notebooks used by this\n            job's notebook tasks. Key-values:\n            - git_url:\n                URL of the repository to be cloned by this job. The maximum\n                length is 300 characters, e.g.\n                `https://github.com/databricks/databricks-cli`.\n            - git_provider:\n                Unique identifier of the service used to host the Git\n                repository. The value is case insensitive, e.g. `github`.\n            - git_branch:\n                Name of the branch to be checked out and used by this job.\n                This field cannot be specified in conjunction with git_tag\n                or git_commit. The maximum length is 255 characters, e.g.\n                `main`.\n            - git_tag:\n                Name of the tag to be checked out and used by this job. This\n                field cannot be specified in conjunction with git_branch or\n                git_commit. The maximum length is 255 characters, e.g.\n                `release-1.0.0`.\n            - git_commit:\n                Commit to be checked out and used by this job. This field\n                cannot be specified in conjunction with git_branch or\n                git_tag. The maximum length is 64 characters, e.g.\n                `e0056d01`.\n            - git_snapshot:\n                Read-only state of the remote repository at the time the job was run.\n                            This field is only included on job runs.\n        timeout_seconds:\n            An optional timeout applied to each run of this job. The default\n            behavior is to have no timeout, e.g. `86400`.\n        idempotency_token:\n            An optional token that can be used to guarantee the idempotency of job\n            run requests. If a run with the provided token already\n            exists, the request does not create a new run but returns\n            the ID of the existing run instead. If a run with the\n            provided token is deleted, an error is returned.  If you\n            specify the idempotency token, upon failure you can retry\n            until the request succeeds. Databricks guarantees that\n            exactly one run is launched with that idempotency token.\n            This token must have at most 64 characters.  For more\n            information, see [How to ensure idempotency for\n            jobs](https://kb.databricks.com/jobs/jobs-idempotency.html),\n            e.g. 
`8f018174-4792-40d5-bcbc-3e6a527352c8`.\n        access_control_list:\n            List of permissions to set on the job.\n        max_wait_seconds: Maximum number of seconds to wait for the entire flow to complete.\n        poll_frequency_seconds: Number of seconds to wait in between checks for\n            run completion.\n        return_metadata: When True, method will return a tuple of notebook output as well as\n            job run metadata; by default though, the method only returns notebook output\n        **jobs_runs_submit_kwargs: Additional keyword arguments to pass to `jobs_runs_submit`.\n\n    Returns:\n        Either a dict or a tuple (depends on `return_metadata`) comprised of\n        * task_notebook_outputs: dictionary of task keys to its corresponding notebook output;\n          this is the only object returned by default from this method\n        * jobs_runs_metadata: dictionary containing IDs of the jobs runs tasks; this is only\n          returned if `return_metadata=True`.\n\n    Examples:\n        Submit jobs runs and wait.\n        ```python\n        from prefect import flow\n        from prefect_databricks import DatabricksCredentials\n        from prefect_databricks.flows import jobs_runs_submit_and_wait_for_completion\n        from prefect_databricks.models.jobs import (\n            AutoScale,\n            AwsAttributes,\n            JobTaskSettings,\n            NotebookTask,\n            NewCluster,\n        )\n\n        @flow\n        def jobs_runs_submit_and_wait_for_completion_flow(notebook_path, **base_parameters):\n            databricks_credentials = await DatabricksCredentials.load(\"BLOCK_NAME\")\n\n            # specify new cluster settings\n            aws_attributes = AwsAttributes(\n                availability=\"SPOT\",\n                zone_id=\"us-west-2a\",\n                ebs_volume_type=\"GENERAL_PURPOSE_SSD\",\n                ebs_volume_count=3,\n                ebs_volume_size=100,\n            )\n            auto_scale = AutoScale(min_workers=1, max_workers=2)\n            new_cluster = NewCluster(\n                aws_attributes=aws_attributes,\n                autoscale=auto_scale,\n                node_type_id=\"m4.large\",\n                spark_version=\"10.4.x-scala2.12\",\n                spark_conf={\"spark.speculation\": True},\n            )\n\n            # specify notebook to use and parameters to pass\n            notebook_task = NotebookTask(\n                notebook_path=notebook_path,\n                base_parameters=base_parameters,\n            )\n\n            # compile job task settings\n            job_task_settings = JobTaskSettings(\n                new_cluster=new_cluster,\n                notebook_task=notebook_task,\n                task_key=\"prefect-task\"\n            )\n\n            multi_task_runs = jobs_runs_submit_and_wait_for_completion(\n                databricks_credentials=databricks_credentials,\n                run_name=\"prefect-job\",\n                tasks=[job_task_settings]\n            )\n\n            return multi_task_runs\n        ```\n    \"\"\"  # noqa\n    logger = get_run_logger()\n\n    # submit the jobs runs\n    multi_task_jobs_runs_future = await jobs_runs_submit.submit(\n        databricks_credentials=databricks_credentials,\n        tasks=tasks,\n        run_name=run_name,\n        git_source=git_source,\n        timeout_seconds=timeout_seconds,\n        idempotency_token=idempotency_token,\n        access_control_list=access_control_list,\n        **jobs_runs_submit_kwargs,\n   
 )\n    multi_task_jobs_runs = await multi_task_jobs_runs_future.result()\n    multi_task_jobs_runs_id = multi_task_jobs_runs[\"run_id\"]\n\n    # wait for all the jobs runs to complete in a separate flow\n    # for a cleaner radar interface\n    jobs_runs_state, jobs_runs_metadata = await jobs_runs_wait_for_completion(\n        multi_task_jobs_runs_id=multi_task_jobs_runs_id,\n        databricks_credentials=databricks_credentials,\n        run_name=run_name,\n        max_wait_seconds=max_wait_seconds,\n        poll_frequency_seconds=poll_frequency_seconds,\n    )\n\n    # fetch the state results\n    jobs_runs_life_cycle_state = jobs_runs_state[\"life_cycle_state\"]\n    jobs_runs_state_message = jobs_runs_state[\"state_message\"]\n\n    # return results or raise error\n    if jobs_runs_life_cycle_state == RunLifeCycleState.terminated.value:\n        jobs_runs_result_state = jobs_runs_state.get(\"result_state\", None)\n        if jobs_runs_result_state == RunResultState.success.value:\n            task_notebook_outputs = {}\n            for task in jobs_runs_metadata[\"tasks\"]:\n                task_key = task[\"task_key\"]\n                task_run_id = task[\"run_id\"]\n                task_run_output_future = await jobs_runs_get_output.submit(\n                    run_id=task_run_id,\n                    databricks_credentials=databricks_credentials,\n                )\n                task_run_output = await task_run_output_future.result()\n                task_run_notebook_output = task_run_output.get(\"notebook_output\", {})\n                task_notebook_outputs[task_key] = task_run_notebook_output\n            logger.info(\n                \"Databricks Jobs Runs Submit (%s ID %s) completed successfully!\",\n                run_name,\n                multi_task_jobs_runs_id,\n            )\n            if return_metadata:\n                return task_notebook_outputs, jobs_runs_metadata\n            return task_notebook_outputs\n        else:\n            raise DatabricksJobTerminated(\n                f\"Databricks Jobs Runs Submit \"\n                f\"({run_name} ID {multi_task_jobs_runs_id}) \"\n                f\"terminated with result state, {jobs_runs_result_state}: \"\n                f\"{jobs_runs_state_message}\"\n            )\n    elif jobs_runs_life_cycle_state == RunLifeCycleState.skipped.value:\n        raise DatabricksJobSkipped(\n            f\"Databricks Jobs Runs Submit ({run_name} ID \"\n            f\"{multi_task_jobs_runs_id}) was skipped: {jobs_runs_state_message}.\",\n        )\n    elif jobs_runs_life_cycle_state == RunLifeCycleState.internalerror.value:\n        raise DatabricksJobInternalError(\n            f\"Databricks Jobs Runs Submit ({run_name} ID \"\n            f\"{multi_task_jobs_runs_id}) \"\n            f\"encountered an internal error: {jobs_runs_state_message}.\",\n        )\n
    "},{"location":"integrations/prefect-databricks/flows/#prefect_databricks.flows.jobs_runs_wait_for_completion","title":"jobs_runs_wait_for_completion async","text":"

    Flow that waits for a triggered job run to complete.

    Parameters:

    Name Type Description Default run_name Optional[str]

    The name of the jobs runs task.

    None multi_task_jobs_runs_id

    The ID of the jobs runs task to watch.

    required databricks_credentials DatabricksCredentials

    Credentials to use for authentication with Databricks.

    required max_wait_seconds int

    Maximum number of seconds to wait for the entire flow to complete.

    900 poll_frequency_seconds int

    Number of seconds to wait in between checks for run completion.

    10

    Returns:

    Name Type Description jobs_runs_state

    A dict containing the jobs runs life cycle state and message.

    jobs_runs_metadata

    A dict containing IDs of the jobs runs tasks.

    Example

    Waits for completion on jobs runs.

    from prefect import flow\nfrom prefect_databricks import DatabricksCredentials\nfrom prefect_databricks.flows import jobs_runs_wait_for_completion\n\n@flow\nasync def jobs_runs_wait_for_completion_flow():\n    databricks_credentials = await DatabricksCredentials.load(\"BLOCK_NAME\")\n    return await jobs_runs_wait_for_completion(\n        multi_task_jobs_runs_id=45429,\n        databricks_credentials=databricks_credentials,\n        run_name=\"my_run_name\",\n        max_wait_seconds=1800,  # 30 minutes\n        poll_frequency_seconds=120,  # 2 minutes\n    )\n

    Source code in prefect_databricks/flows.py
    @flow(\n    name=\"Wait for completion of jobs runs\",\n    description=\"Waits for the jobs runs to finish running\",\n)\nasync def jobs_runs_wait_for_completion(\n    multi_task_jobs_runs_id: int,\n    databricks_credentials: DatabricksCredentials,\n    run_name: Optional[str] = None,\n    max_wait_seconds: int = 900,\n    poll_frequency_seconds: int = 10,\n):\n    \"\"\"\n    Flow that triggers a job run and waits for the triggered run to complete.\n\n    Args:\n        run_name: The name of the jobs runs task.\n        multi_task_jobs_run_id: The ID of the jobs runs task to watch.\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        max_wait_seconds:\n            Maximum number of seconds to wait for the entire flow to complete.\n        poll_frequency_seconds: Number of seconds to wait in between checks for\n            run completion.\n\n    Returns:\n        jobs_runs_state: A dict containing the jobs runs life cycle state and message.\n        jobs_runs_metadata: A dict containing IDs of the jobs runs tasks.\n\n    Example:\n        Waits for completion on jobs runs.\n        ```python\n        from prefect import flow\n        from prefect_databricks import DatabricksCredentials\n        from prefect_databricks.flows import jobs_runs_wait_for_completion\n\n        @flow\n        def jobs_runs_wait_for_completion_flow():\n            databricks_credentials = DatabricksCredentials.load(\"BLOCK_NAME\")\n            return jobs_runs_wait_for_completion(\n                multi_task_jobs_run_id=45429,\n                databricks_credentials=databricks_credentials,\n                run_name=\"my_run_name\",\n                max_wait_seconds=1800,  # 30 minutes\n                poll_frequency_seconds=120,  # 2 minutes\n            )\n        ```\n    \"\"\"\n    logger = get_run_logger()\n\n    seconds_waited_for_run_completion = 0\n    wait_for = []\n\n    jobs_status = {}\n    tasks_status = {}\n    while seconds_waited_for_run_completion <= max_wait_seconds:\n        jobs_runs_metadata_future = await jobs_runs_get.submit(\n            run_id=multi_task_jobs_runs_id,\n            databricks_credentials=databricks_credentials,\n            wait_for=wait_for,\n        )\n        wait_for = [jobs_runs_metadata_future]\n\n        jobs_runs_metadata = await jobs_runs_metadata_future.result()\n        jobs_status = _update_and_log_state_changes(\n            jobs_status, jobs_runs_metadata, logger, \"Job\"\n        )\n        jobs_runs_metadata_tasks = jobs_runs_metadata.get(\"tasks\", [])\n        for task_metadata in jobs_runs_metadata_tasks:\n            tasks_status = _update_and_log_state_changes(\n                tasks_status, task_metadata, logger, \"Task\"\n            )\n\n        jobs_runs_state = jobs_runs_metadata.get(\"state\", {})\n        jobs_runs_life_cycle_state = jobs_runs_state[\"life_cycle_state\"]\n        if jobs_runs_life_cycle_state in TERMINAL_STATUS_CODES:\n            return jobs_runs_state, jobs_runs_metadata\n\n        logger.info(\"Waiting for %s seconds.\", poll_frequency_seconds)\n        await asyncio.sleep(poll_frequency_seconds)\n        seconds_waited_for_run_completion += poll_frequency_seconds\n\n    raise DatabricksJobRunTimedOut(\n        f\"Max wait time of {max_wait_seconds} seconds exceeded while waiting \"\n        f\"for job run ({run_name} ID {multi_task_jobs_runs_id})\"\n    )\n
    "},{"location":"integrations/prefect-databricks/jobs/","title":"Jobs","text":""},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs","title":"prefect_databricks.jobs","text":"

    Module containing tasks for interacting with Databricks jobs.

    "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_create","title":"jobs_create async","text":"

    Create a new job.

    Parameters:

    Name Type Description Default databricks_credentials DatabricksCredentials

    Credentials to use for authentication with Databricks.

    required name str

    An optional name for the job, e.g. A multitask job.

    'Untitled' tags Dict

    A map of tags associated with the job. These are forwarded to the cluster as cluster tags for jobs clusters, and are subject to the same limitations as cluster tags. A maximum of 25 tags can be added to the job, e.g.

    {\"cost-center\": \"engineering\", \"team\": \"jobs\"}\n

    None tasks Optional[List[JobTaskSettings]]

    A list of task specifications to be executed by this job, e.g.

    [\n    {\n        \"task_key\": \"Sessionize\",\n        \"description\": \"Extracts session data from events\",\n        \"depends_on\": [],\n        \"existing_cluster_id\": \"0923-164208-meows279\",\n        \"spark_jar_task\": {\n            \"main_class_name\": \"com.databricks.Sessionize\",\n            \"parameters\": [\"--data\", \"dbfs:/path/to/data.json\"],\n        },\n        \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}],\n        \"timeout_seconds\": 86400,\n        \"max_retries\": 3,\n        \"min_retry_interval_millis\": 2000,\n        \"retry_on_timeout\": False,\n    },\n    {\n        \"task_key\": \"Orders_Ingest\",\n        \"description\": \"Ingests order data\",\n        \"depends_on\": [],\n        \"job_cluster_key\": \"auto_scaling_cluster\",\n        \"spark_jar_task\": {\n            \"main_class_name\": \"com.databricks.OrdersIngest\",\n            \"parameters\": [\"--data\", \"dbfs:/path/to/order-data.json\"],\n        },\n        \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}],\n        \"timeout_seconds\": 86400,\n        \"max_retries\": 3,\n        \"min_retry_interval_millis\": 2000,\n        \"retry_on_timeout\": False,\n    },\n    {\n        \"task_key\": \"Match\",\n        \"description\": \"Matches orders with user sessions\",\n        \"depends_on\": [\n            {\"task_key\": \"Orders_Ingest\"},\n            {\"task_key\": \"Sessionize\"},\n        ],\n        \"new_cluster\": {\n            \"spark_version\": \"7.3.x-scala2.12\",\n            \"node_type_id\": \"i3.xlarge\",\n            \"spark_conf\": {\"spark.speculation\": True},\n            \"aws_attributes\": {\n                \"availability\": \"SPOT\",\n                \"zone_id\": \"us-west-2a\",\n            },\n            \"autoscale\": {\"min_workers\": 2, \"max_workers\": 16},\n        },\n        \"notebook_task\": {\n            \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n            \"source\": \"WORKSPACE\",\n            \"base_parameters\": {\"name\": \"John Doe\", \"age\": \"35\"},\n        },\n        \"timeout_seconds\": 86400,\n        \"max_retries\": 3,\n        \"min_retry_interval_millis\": 2000,\n        \"retry_on_timeout\": False,\n    },\n]\n

    None job_clusters Optional[List[JobCluster]]

    A list of job cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster. You must declare dependent libraries in task settings, e.g.

    [\n    {\n        \"job_cluster_key\": \"auto_scaling_cluster\",\n        \"new_cluster\": {\n            \"spark_version\": \"7.3.x-scala2.12\",\n            \"node_type_id\": \"i3.xlarge\",\n            \"spark_conf\": {\"spark.speculation\": True},\n            \"aws_attributes\": {\n                \"availability\": \"SPOT\",\n                \"zone_id\": \"us-west-2a\",\n            },\n            \"autoscale\": {\"min_workers\": 2, \"max_workers\": 16},\n        },\n    }\n]\n

    None email_notifications JobEmailNotifications

    An optional set of email addresses that is notified when runs of this job begin or complete as well as when this job is deleted. The default behavior is to not send any emails. Key-values: - on_start: A list of email addresses to be notified when a run begins. If not specified on job creation, reset, or update, the list is empty, and notifications are not sent, e.g.

    [\"user.name@databricks.com\"]\n
    - on_success: A list of email addresses to be notified when a run successfully completes. A run is considered to have completed successfully if it ends with a TERMINATED life_cycle_state and a SUCCESSFUL result_state. If not specified on job creation, reset, or update, the list is empty, and notifications are not sent, e.g.
    [\"user.name@databricks.com\"]\n
    - on_failure: A list of email addresses to notify when a run completes unsuccessfully. A run is considered unsuccessful if it ends with an INTERNAL_ERROR life_cycle_state or a SKIPPED, FAILED, or TIMED_OUT result_state. If not specified on job creation, reset, or update, or the list is empty, then notifications are not sent. Job-level failure notifications are sent only once after the entire job run (including all of its retries) has failed. Notifications are not sent when failed job runs are retried. To receive a failure notification after every failed task (including every failed retry), use task-level notifications instead, e.g.
    [\"user.name@databricks.com\"]\n
    - no_alert_for_skipped_runs: If true, do not send email to recipients specified in on_failure if the run is skipped.

    None webhook_notifications WebhookNotifications

    A collection of system notification IDs to notify when runs of this job begin or complete. The default behavior is to not send any system notifications. Key-values: - on_start: An optional list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified for the on_start property, e.g.

    [\n    {\"id\": \"03dd86e4-57ef-4818-a950-78e41a1d71ab\"},\n    {\"id\": \"0481e838-0a59-4eff-9541-a4ca6f149574\"},\n]\n
    - on_success: An optional list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified for the on_success property, e.g.
    [{\"id\": \"03dd86e4-57ef-4818-a950-78e41a1d71ab\"}]\n
    - on_failure: An optional list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified for the on_failure property, e.g.
    [{\"id\": \"0481e838-0a59-4eff-9541-a4ca6f149574\"}]\n

    None timeout_seconds Optional[int]

    An optional timeout applied to each run of this job. The default behavior is to have no timeout, e.g. 86400.

    None schedule CronSchedule

    An optional periodic schedule for this job. The default behavior is that the job only runs when triggered by clicking \u201cRun Now\u201d in the Jobs UI or sending an API request to runNow. Key-values: - quartz_cron_expression: A Cron expression using Quartz syntax that describes the schedule for a job. See Cron Trigger for details. This field is required, e.g. 20 30 * * * ?. - timezone_id: A Java timezone ID. The schedule for a job is resolved with respect to this timezone. See Java TimeZone for details. This field is required, e.g. Europe/London. - pause_status: Indicate whether this schedule is paused or not, e.g. PAUSED.

    None max_concurrent_runs Optional[int]

    An optional maximum allowed number of concurrent runs of the job. Set this value if you want to be able to execute multiple runs of the same job concurrently. This is useful for example if you trigger your job on a frequent schedule and want to allow consecutive runs to overlap with each other, or if you want to trigger multiple runs which differ by their input parameters. This setting affects only new runs. For example, suppose the job\u2019s concurrency is 4 and there are 4 concurrent active runs. Then setting the concurrency to 3 won\u2019t kill any of the active runs. However, from then on, new runs are skipped unless there are fewer than 3 active runs. This value cannot exceed 1000. Setting this value to 0 causes all new runs to be skipped. The default behavior is to allow only 1 concurrent run, e.g. 10.

    None git_source GitSource

    This functionality is in Public Preview. An optional specification for a remote repository containing the notebooks used by this job's notebook tasks, e.g. { \"git_url\": \"https://github.com/databricks/databricks-cli\", \"git_branch\": \"main\", \"git_provider\": \"gitHub\", } Key-values: - git_url: URL of the repository to be cloned by this job. The maximum length is 300 characters, e.g. https://github.com/databricks/databricks-cli. - git_provider: Unique identifier of the service used to host the Git repository. The value is case insensitive, e.g. github. - git_branch: Name of the branch to be checked out and used by this job. This field cannot be specified in conjunction with git_tag or git_commit. The maximum length is 255 characters, e.g. main. - git_tag: Name of the tag to be checked out and used by this job. This field cannot be specified in conjunction with git_branch or git_commit. The maximum length is 255 characters, e.g. release-1.0.0. - git_commit: Commit to be checked out and used by this job. This field cannot be specified in conjunction with git_branch or git_tag. The maximum length is 64 characters, e.g. e0056d01. - git_snapshot: Read-only state of the remote repository at the time the job was run. This field is only included on job runs.

    None format Optional[str]

    Indicates the format of the job. This field is ignored in Create/Update/Reset calls. When using the Jobs API 2.1, this value is always set to 'MULTI_TASK', e.g. MULTI_TASK.

    None access_control_list Optional[List[AccessControlRequest]]

    List of permissions to set on the job.

    None

    Returns:

    Type Description Dict[str, Any]

    Upon success, a dict of the response. - job_id: int

    API Endpoint:

    /2.1/jobs/create

    API Responses:

    Response Description
    200 Job was created successfully.
    400 The request was malformed. See JSON response for error details.
    401 The request was unauthorized.
    500 The request was not handled correctly due to a server error.

    Source code in prefect_databricks/jobs.py
    @task\nasync def jobs_create(\n    databricks_credentials: \"DatabricksCredentials\",\n    name: str = \"Untitled\",\n    tags: Dict = None,\n    tasks: Optional[List[\"models.JobTaskSettings\"]] = None,\n    job_clusters: Optional[List[\"models.JobCluster\"]] = None,\n    email_notifications: \"models.JobEmailNotifications\" = None,\n    webhook_notifications: \"models.WebhookNotifications\" = None,\n    timeout_seconds: Optional[int] = None,\n    schedule: \"models.CronSchedule\" = None,\n    max_concurrent_runs: Optional[int] = None,\n    git_source: \"models.GitSource\" = None,\n    format: Optional[str] = None,\n    access_control_list: Optional[List[\"models.AccessControlRequest\"]] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Create a new job.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        name:\n            An optional name for the job, e.g. `A multitask job`.\n        tags:\n            A map of tags associated with the job. These are forwarded to the\n            cluster as cluster tags for jobs clusters, and are subject\n            to the same limitations as cluster tags. A maximum of 25\n            tags can be added to the job, e.g.\n            ```\n            {\"cost-center\": \"engineering\", \"team\": \"jobs\"}\n            ```\n        tasks:\n            A list of task specifications to be executed by this job, e.g.\n            ```\n            [\n                {\n                    \"task_key\": \"Sessionize\",\n                    \"description\": \"Extracts session data from events\",\n                    \"depends_on\": [],\n                    \"existing_cluster_id\": \"0923-164208-meows279\",\n                    \"spark_jar_task\": {\n                        \"main_class_name\": \"com.databricks.Sessionize\",\n                        \"parameters\": [\"--data\", \"dbfs:/path/to/data.json\"],\n                    },\n                    \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}],\n                    \"timeout_seconds\": 86400,\n                    \"max_retries\": 3,\n                    \"min_retry_interval_millis\": 2000,\n                    \"retry_on_timeout\": False,\n                },\n                {\n                    \"task_key\": \"Orders_Ingest\",\n                    \"description\": \"Ingests order data\",\n                    \"depends_on\": [],\n                    \"job_cluster_key\": \"auto_scaling_cluster\",\n                    \"spark_jar_task\": {\n                        \"main_class_name\": \"com.databricks.OrdersIngest\",\n                        \"parameters\": [\"--data\", \"dbfs:/path/to/order-data.json\"],\n                    },\n                    \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}],\n                    \"timeout_seconds\": 86400,\n                    \"max_retries\": 3,\n                    \"min_retry_interval_millis\": 2000,\n                    \"retry_on_timeout\": False,\n                },\n                {\n                    \"task_key\": \"Match\",\n                    \"description\": \"Matches orders with user sessions\",\n                    \"depends_on\": [\n                        {\"task_key\": \"Orders_Ingest\"},\n                        {\"task_key\": \"Sessionize\"},\n                    ],\n                    \"new_cluster\": {\n                        \"spark_version\": \"7.3.x-scala2.12\",\n                        \"node_type_id\": 
\"i3.xlarge\",\n                        \"spark_conf\": {\"spark.speculation\": True},\n                        \"aws_attributes\": {\n                            \"availability\": \"SPOT\",\n                            \"zone_id\": \"us-west-2a\",\n                        },\n                        \"autoscale\": {\"min_workers\": 2, \"max_workers\": 16},\n                    },\n                    \"notebook_task\": {\n                        \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n                        \"source\": \"WORKSPACE\",\n                        \"base_parameters\": {\"name\": \"John Doe\", \"age\": \"35\"},\n                    },\n                    \"timeout_seconds\": 86400,\n                    \"max_retries\": 3,\n                    \"min_retry_interval_millis\": 2000,\n                    \"retry_on_timeout\": False,\n                },\n            ]\n            ```\n        job_clusters:\n            A list of job cluster specifications that can be shared and reused by\n            tasks of this job. Libraries cannot be declared in a shared\n            job cluster. You must declare dependent libraries in task\n            settings, e.g.\n            ```\n            [\n                {\n                    \"job_cluster_key\": \"auto_scaling_cluster\",\n                    \"new_cluster\": {\n                        \"spark_version\": \"7.3.x-scala2.12\",\n                        \"node_type_id\": \"i3.xlarge\",\n                        \"spark_conf\": {\"spark.speculation\": True},\n                        \"aws_attributes\": {\n                            \"availability\": \"SPOT\",\n                            \"zone_id\": \"us-west-2a\",\n                        },\n                        \"autoscale\": {\"min_workers\": 2, \"max_workers\": 16},\n                    },\n                }\n            ]\n            ```\n        email_notifications:\n            An optional set of email addresses that is notified when runs of this\n            job begin or complete as well as when this job is deleted.\n            The default behavior is to not send any emails. Key-values:\n            - on_start:\n                A list of email addresses to be notified when a run begins.\n                If not specified on job creation, reset, or update, the list\n                is empty, and notifications are not sent, e.g.\n                ```\n                [\"user.name@databricks.com\"]\n                ```\n            - on_success:\n                A list of email addresses to be notified when a run\n                successfully completes. A run is considered to have\n                completed successfully if it ends with a `TERMINATED`\n                `life_cycle_state` and a `SUCCESSFUL` result_state. If not\n                specified on job creation, reset, or update, the list is\n                empty, and notifications are not sent, e.g.\n                ```\n                [\"user.name@databricks.com\"]\n                ```\n            - on_failure:\n                A list of email addresses to notify when a run completes\n                unsuccessfully. A run is considered unsuccessful if it ends\n                with an `INTERNAL_ERROR` `life_cycle_state` or a `SKIPPED`,\n                `FAILED`, or `TIMED_OUT` `result_state`. If not specified on\n                job creation, reset, or update, or the list is empty, then\n                notifications are not sent. 
Job-level failure notifications\n                are sent only once after the entire job run (including all\n                of its retries) has failed. Notifications are not sent when\n                failed job runs are retried. To receive a failure\n                notification after every failed task (including every failed\n                retry), use task-level notifications instead, e.g.\n                ```\n                [\"user.name@databricks.com\"]\n                ```\n            - no_alert_for_skipped_runs:\n                If true, do not send email to recipients specified in\n                `on_failure` if the run is skipped.\n        webhook_notifications:\n            A collection of system notification IDs to notify when runs of this job\n            begin or complete. The default behavior is to not send any\n            system notifications. Key-values:\n            - on_start:\n                An optional list of notification IDs to call when the run\n                starts. A maximum of 3 destinations can be specified for the\n                `on_start` property, e.g.\n                ```\n                [\n                    {\"id\": \"03dd86e4-57ef-4818-a950-78e41a1d71ab\"},\n                    {\"id\": \"0481e838-0a59-4eff-9541-a4ca6f149574\"},\n                ]\n                ```\n            - on_success:\n                An optional list of notification IDs to call when the run\n                completes successfully. A maximum of 3 destinations can be\n                specified for the `on_success` property, e.g.\n                ```\n                [{\"id\": \"03dd86e4-57ef-4818-a950-78e41a1d71ab\"}]\n                ```\n            - on_failure:\n                An optional list of notification IDs to call when the run\n                fails. A maximum of 3 destinations can be specified for the\n                `on_failure` property, e.g.\n                ```\n                [{\"id\": \"0481e838-0a59-4eff-9541-a4ca6f149574\"}]\n                ```\n        timeout_seconds:\n            An optional timeout applied to each run of this job. The default\n            behavior is to have no timeout, e.g. `86400`.\n        schedule:\n            An optional periodic schedule for this job. The default behavior is that\n            the job only runs when triggered by clicking \u201cRun Now\u201d in\n            the Jobs UI or sending an API request to `runNow`. Key-values:\n            - quartz_cron_expression:\n                A Cron expression using Quartz syntax that describes the\n                schedule for a job. See [Cron Trigger](http://www.quartz-\n                scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html)\n                for details. This field is required, e.g. `20 30 * * * ?`.\n            - timezone_id:\n                A Java timezone ID. The schedule for a job is resolved with\n                respect to this timezone. See [Java\n                TimeZone](https://docs.oracle.com/javase/7/docs/api/java/util/TimeZone.html)\n                for details. This field is required, e.g. `Europe/London`.\n            - pause_status:\n                Indicate whether this schedule is paused or not, e.g.\n                `PAUSED`.\n        max_concurrent_runs:\n            An optional maximum allowed number of concurrent runs of the job.  Set\n            this value if you want to be able to execute multiple runs\n            of the same job concurrently. 
This is useful for example if\n            you trigger your job on a frequent schedule and want to\n            allow consecutive runs to overlap with each other, or if you\n            want to trigger multiple runs which differ by their input\n            parameters.  This setting affects only new runs. For\n            example, suppose the job\u2019s concurrency is 4 and there are 4\n            concurrent active runs. Then setting the concurrency to 3\n            won\u2019t kill any of the active runs. However, from then on,\n            new runs are skipped unless there are fewer than 3 active\n            runs.  This value cannot exceed 1000\\. Setting this value to\n            0 causes all new runs to be skipped. The default behavior is\n            to allow only 1 concurrent run, e.g. `10`.\n        git_source:\n            This functionality is in Public Preview.  An optional specification for\n            a remote repository containing the notebooks used by this\n            job's notebook tasks, e.g.\n            ```\n            {\n                \"git_url\": \"https://github.com/databricks/databricks-cli\",\n                \"git_branch\": \"main\",\n                \"git_provider\": \"gitHub\",\n            }\n            ``` Key-values:\n            - git_url:\n                URL of the repository to be cloned by this job. The maximum\n                length is 300 characters, e.g.\n                `https://github.com/databricks/databricks-cli`.\n            - git_provider:\n                Unique identifier of the service used to host the Git\n                repository. The value is case insensitive, e.g. `github`.\n            - git_branch:\n                Name of the branch to be checked out and used by this job.\n                This field cannot be specified in conjunction with git_tag\n                or git_commit. The maximum length is 255 characters, e.g.\n                `main`.\n            - git_tag:\n                Name of the tag to be checked out and used by this job. This\n                field cannot be specified in conjunction with git_branch or\n                git_commit. The maximum length is 255 characters, e.g.\n                `release-1.0.0`.\n            - git_commit:\n                Commit to be checked out and used by this job. This field\n                cannot be specified in conjunction with git_branch or\n                git_tag. The maximum length is 64 characters, e.g.\n                `e0056d01`.\n            - git_snapshot:\n                Read-only state of the remote repository at the time the job was run.\n                            This field is only included on job runs.\n        format:\n            Used to tell what is the format of the job. This field is ignored in\n            Create/Update/Reset calls. When using the Jobs API 2.1 this\n            value is always set to `'MULTI_TASK'`, e.g. `MULTI_TASK`.\n        access_control_list:\n            List of permissions to set on the job.\n\n    Returns:\n        Upon success, a dict of the response. </br>- `job_id: int`</br>\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/create`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Job was created successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. 
|\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/create\"  # noqa\n\n    responses = {\n        200: \"Job was created successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    json_payload = {\n        \"name\": name,\n        \"tags\": tags,\n        \"tasks\": tasks,\n        \"job_clusters\": job_clusters,\n        \"email_notifications\": email_notifications,\n        \"webhook_notifications\": webhook_notifications,\n        \"timeout_seconds\": timeout_seconds,\n        \"schedule\": schedule,\n        \"max_concurrent_runs\": max_concurrent_runs,\n        \"git_source\": git_source,\n        \"format\": format,\n        \"access_control_list\": access_control_list,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.POST,\n        json=json_payload,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
    "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_delete","title":"jobs_delete async","text":"

    Deletes a job.

    Parameters:

    Name Type Description Default databricks_credentials DatabricksCredentials

    Credentials to use for authentication with Databricks.

    required job_id Optional[int]

    The canonical identifier of the job to delete. This field is required, e.g. 11223344.

    None

    Returns:

    Type Description Dict[str, Any]

    Upon success, an empty dict.

    API Endpoint:

    /2.1/jobs/delete

    API Responses:

    Response Description
    200 Job was deleted successfully.
    400 The request was malformed. See JSON response for error details.
    401 The request was unauthorized.
    500 The request was not handled correctly due to a server error.

    Source code in prefect_databricks/jobs.py
    @task\nasync def jobs_delete(\n    databricks_credentials: \"DatabricksCredentials\",\n    job_id: Optional[int] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Deletes a job.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        job_id:\n            The canonical identifier of the job to delete. This field is required,\n            e.g. `11223344`.\n\n    Returns:\n        Upon success, an empty dict.\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/delete`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Job was deleted successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/delete\"  # noqa\n\n    responses = {\n        200: \"Job was deleted successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    json_payload = {\n        \"job_id\": job_id,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.POST,\n        json=json_payload,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
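
    Example (illustrative sketch, not from the library docs): delete a job from a flow. The job ID reuses the placeholder from the parameter description, and the block name is illustrative.

    from prefect import flow\nfrom prefect_databricks import DatabricksCredentials\nfrom prefect_databricks.jobs import jobs_delete\n\n@flow\nasync def jobs_delete_flow(job_id: int = 11223344):\n    databricks_credentials = await DatabricksCredentials.load(\"BLOCK_NAME\")\n    # an empty dict is returned upon success\n    return await jobs_delete(\n        databricks_credentials=databricks_credentials,\n        job_id=job_id,\n    )\n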
    "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_get","title":"jobs_get async","text":"

    Retrieves the details for a single job.

    Parameters:

    Name Type Description Default job_id int

    The canonical identifier of the job to retrieve information about. This field is required.

    required databricks_credentials DatabricksCredentials

    Credentials to use for authentication with Databricks.

    required

    Returns:

    Type Description Dict[str, Any]

    Upon success, a dict of the response: job_id: int, creator_user_name: str, run_as_user_name: str, settings: \"models.JobSettings\", created_time: int.

    API Endpoint:

    /2.1/jobs/get

    API Responses:

    Response Description
    200 Job was retrieved successfully.
    400 The request was malformed. See JSON response for error details.
    401 The request was unauthorized.
    500 The request was not handled correctly due to a server error.

    Source code in prefect_databricks/jobs.py
    @task\nasync def jobs_get(\n    job_id: int,\n    databricks_credentials: \"DatabricksCredentials\",\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Retrieves the details for a single job.\n\n    Args:\n        job_id:\n            The canonical identifier of the job to retrieve information about. This\n            field is required.\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n\n    Returns:\n        Upon success, a dict of the response. </br>- `job_id: int`</br>- `creator_user_name: str`</br>- `run_as_user_name: str`</br>- `settings: \"models.JobSettings\"`</br>- `created_time: int`</br>\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/get`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Job was retrieved successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/get\"  # noqa\n\n    responses = {\n        200: \"Job was retrieved successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    params = {\n        \"job_id\": job_id,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.GET,\n        params=params,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
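
    Example (illustrative sketch, not from the library docs): retrieve a job's details from a flow. The job ID and block name are illustrative placeholders.

    from prefect import flow\nfrom prefect_databricks import DatabricksCredentials\nfrom prefect_databricks.jobs import jobs_get\n\n@flow\nasync def jobs_get_flow(job_id: int = 11223344):\n    databricks_credentials = await DatabricksCredentials.load(\"BLOCK_NAME\")\n    job = await jobs_get(\n        job_id=job_id,\n        databricks_credentials=databricks_credentials,\n    )\n    # the response includes the job's current settings\n    return job[\"settings\"]\n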
    "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_list","title":"jobs_list async","text":"

    Retrieves a list of jobs.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| databricks_credentials | DatabricksCredentials | Credentials to use for authentication with Databricks. | required |
| limit | int | The number of jobs to return. This value must be greater than 0 and less than or equal to 25. The default value is 20. | 20 |
| offset | int | The offset of the first job to return, relative to the most recently created job. | 0 |
| name | Optional[str] | A filter on the list based on the exact (case insensitive) job name. | None |
| expand_tasks | bool | Whether to include task and cluster details in the response. | False |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | Upon success, a dict of the response: jobs: List["models.Job"], has_more: bool |

API Endpoint:

/2.1/jobs/list

API Responses:

| Response | Description |
| --- | --- |
| 200 | List of jobs was retrieved successfully. |
| 400 | The request was malformed. See JSON response for error details. |
| 401 | The request was unauthorized. |
| 500 | The request was not handled correctly due to a server error. |

Source code in prefect_databricks/jobs.py
    @task\nasync def jobs_list(\n    databricks_credentials: \"DatabricksCredentials\",\n    limit: int = 20,\n    offset: int = 0,\n    name: Optional[str] = None,\n    expand_tasks: bool = False,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Retrieves a list of jobs.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        limit:\n            The number of jobs to return. This value must be greater than 0 and less\n            or equal to 25. The default value is 20.\n        offset:\n            The offset of the first job to return, relative to the most recently\n            created job.\n        name:\n            A filter on the list based on the exact (case insensitive) job name.\n        expand_tasks:\n            Whether to include task and cluster details in the response.\n\n    Returns:\n        Upon success, a dict of the response. </br>- `jobs: List[\"models.Job\"]`</br>- `has_more: bool`</br>\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/list`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | List of jobs was retrieved successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/list\"  # noqa\n\n    responses = {\n        200: \"List of jobs was retrieved successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    params = {\n        \"limit\": limit,\n        \"offset\": offset,\n        \"name\": name,\n        \"expand_tasks\": expand_tasks,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.GET,\n        params=params,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
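A short sketch of listing jobs by exact name (the credentials block name and job name are hypothetical):

```python
from prefect import flow
from prefect_databricks import DatabricksCredentials
from prefect_databricks.jobs import jobs_list


@flow
async def list_etl_jobs():
    databricks_credentials = await DatabricksCredentials.load("my-databricks")
    # Return up to 25 jobs whose name is exactly "etl" (case insensitive)
    page = await jobs_list(
        databricks_credentials=databricks_credentials,
        limit=25,
        name="etl",
        expand_tasks=False,
    )
    return page.get("jobs", [])
```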
    "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_reset","title":"jobs_reset async","text":"

    Overwrites all the settings for a specific job. Use the Update endpoint to update job settings partially.

    Parameters:

    Name Type Description Default databricks_credentials DatabricksCredentials

    Credentials to use for authentication with Databricks.

    required job_id Optional[int]

    The canonical identifier of the job to reset. This field is required, e.g. 11223344.

    None new_settings JobSettings

    The new settings of the job. These settings completely replace the old settings. Changes to the field JobSettings.timeout_seconds are applied to active runs. Changes to other fields are applied to future runs only. Key-values: - name: An optional name for the job, e.g. A multitask job. - tags: A map of tags associated with the job. These are forwarded to the cluster as cluster tags for jobs clusters, and are subject to the same limitations as cluster tags. A maximum of 25 tags can be added to the job, e.g.

    {\"cost-center\": \"engineering\", \"team\": \"jobs\"}\n
    - tasks: A list of task specifications to be executed by this job, e.g.
    [\n    {\n        \"task_key\": \"Sessionize\",\n        \"description\": \"Extracts session data from events\",\n        \"depends_on\": [],\n        \"existing_cluster_id\": \"0923-164208-meows279\",\n        \"spark_jar_task\": {\n            \"main_class_name\": \"com.databricks.Sessionize\",\n            \"parameters\": [\n                \"--data\",\n                \"dbfs:/path/to/data.json\",\n            ],\n        },\n        \"libraries\": [\n            {\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}\n        ],\n        \"timeout_seconds\": 86400,\n        \"max_retries\": 3,\n        \"min_retry_interval_millis\": 2000,\n        \"retry_on_timeout\": False,\n    },\n    {\n        \"task_key\": \"Orders_Ingest\",\n        \"description\": \"Ingests order data\",\n        \"depends_on\": [],\n        \"job_cluster_key\": \"auto_scaling_cluster\",\n        \"spark_jar_task\": {\n            \"main_class_name\": \"com.databricks.OrdersIngest\",\n            \"parameters\": [\n                \"--data\",\n                \"dbfs:/path/to/order-data.json\",\n            ],\n        },\n        \"libraries\": [\n            {\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}\n        ],\n        \"timeout_seconds\": 86400,\n        \"max_retries\": 3,\n        \"min_retry_interval_millis\": 2000,\n        \"retry_on_timeout\": False,\n    },\n    {\n        \"task_key\": \"Match\",\n        \"description\": \"Matches orders with user sessions\",\n        \"depends_on\": [\n            {\"task_key\": \"Orders_Ingest\"},\n            {\"task_key\": \"Sessionize\"},\n        ],\n        \"new_cluster\": {\n            \"spark_version\": \"7.3.x-scala2.12\",\n            \"node_type_id\": \"i3.xlarge\",\n            \"spark_conf\": {\"spark.speculation\": True},\n            \"aws_attributes\": {\n                \"availability\": \"SPOT\",\n                \"zone_id\": \"us-west-2a\",\n            },\n            \"autoscale\": {\n                \"min_workers\": 2,\n                \"max_workers\": 16,\n            },\n        },\n        \"notebook_task\": {\n            \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n            \"source\": \"WORKSPACE\",\n            \"base_parameters\": {\n                \"name\": \"John Doe\",\n                \"age\": \"35\",\n            },\n        },\n        \"timeout_seconds\": 86400,\n        \"max_retries\": 3,\n        \"min_retry_interval_millis\": 2000,\n        \"retry_on_timeout\": False,\n    },\n]\n
    - job_clusters: A list of job cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster. You must declare dependent libraries in task settings, e.g.
    [\n    {\n        \"job_cluster_key\": \"auto_scaling_cluster\",\n        \"new_cluster\": {\n            \"spark_version\": \"7.3.x-scala2.12\",\n            \"node_type_id\": \"i3.xlarge\",\n            \"spark_conf\": {\"spark.speculation\": True},\n            \"aws_attributes\": {\n                \"availability\": \"SPOT\",\n                \"zone_id\": \"us-west-2a\",\n            },\n            \"autoscale\": {\n                \"min_workers\": 2,\n                \"max_workers\": 16,\n            },\n        },\n    }\n]\n
    - email_notifications: An optional set of email addresses that is notified when runs of this job begin or complete as well as when this job is deleted. The default behavior is to not send any emails. - webhook_notifications: A collection of system notification IDs to notify when runs of this job begin or complete. The default behavior is to not send any system notifications. - timeout_seconds: An optional timeout applied to each run of this job. The default behavior is to have no timeout, e.g. 86400. - schedule: An optional periodic schedule for this job. The default behavior is that the job only runs when triggered by clicking \u201cRun Now\u201d in the Jobs UI or sending an API request to runNow. - max_concurrent_runs: An optional maximum allowed number of concurrent runs of the job. Set this value if you want to be able to execute multiple runs of the same job concurrently. This is useful for example if you trigger your job on a frequent schedule and want to allow consecutive runs to overlap with each other, or if you want to trigger multiple runs which differ by their input parameters. This setting affects only new runs. For example, suppose the job\u2019s concurrency is 4 and there are 4 concurrent active runs. Then setting the concurrency to 3 won\u2019t kill any of the active runs. However, from then on, new runs are skipped unless there are fewer than 3 active runs. This value cannot exceed 1000. Setting this value to 0 causes all new runs to be skipped. The default behavior is to allow only 1 concurrent run, e.g. 10. - git_source: This functionality is in Public Preview. An optional specification for a remote repository containing the notebooks used by this job's notebook tasks, e.g.
    {\n    \"git_url\": \"https://github.com/databricks/databricks-cli\",\n    \"git_branch\": \"main\",\n    \"git_provider\": \"gitHub\",\n}\n
    - format: Used to tell what is the format of the job. This field is ignored in Create/Update/Reset calls. When using the Jobs API 2.1 this value is always set to 'MULTI_TASK', e.g. MULTI_TASK.

    None

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | Upon success, an empty dict. |

API Endpoint:

/2.1/jobs/reset

API Responses:

| Response | Description |
| --- | --- |
| 200 | Job was overwritten successfully. |
| 400 | The request was malformed. See JSON response for error details. |
| 401 | The request was unauthorized. |
| 500 | The request was not handled correctly due to a server error. |

Source code in prefect_databricks/jobs.py
    @task\nasync def jobs_reset(\n    databricks_credentials: \"DatabricksCredentials\",\n    job_id: Optional[int] = None,\n    new_settings: \"models.JobSettings\" = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Overwrites all the settings for a specific job. Use the Update endpoint to\n    update job settings partially.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        job_id:\n            The canonical identifier of the job to reset. This field is required,\n            e.g. `11223344`.\n        new_settings:\n            The new settings of the job. These settings completely replace the old\n            settings.  Changes to the field\n            `JobSettings.timeout_seconds` are applied to active runs.\n            Changes to other fields are applied to future runs only. Key-values:\n            - name:\n                An optional name for the job, e.g. `A multitask job`.\n            - tags:\n                A map of tags associated with the job. These are forwarded\n                to the cluster as cluster tags for jobs clusters, and are\n                subject to the same limitations as cluster tags. A maximum\n                of 25 tags can be added to the job, e.g.\n                ```\n                {\"cost-center\": \"engineering\", \"team\": \"jobs\"}\n                ```\n            - tasks:\n                A list of task specifications to be executed by this job, e.g.\n                ```\n                [\n                    {\n                        \"task_key\": \"Sessionize\",\n                        \"description\": \"Extracts session data from events\",\n                        \"depends_on\": [],\n                        \"existing_cluster_id\": \"0923-164208-meows279\",\n                        \"spark_jar_task\": {\n                            \"main_class_name\": \"com.databricks.Sessionize\",\n                            \"parameters\": [\n                                \"--data\",\n                                \"dbfs:/path/to/data.json\",\n                            ],\n                        },\n                        \"libraries\": [\n                            {\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}\n                        ],\n                        \"timeout_seconds\": 86400,\n                        \"max_retries\": 3,\n                        \"min_retry_interval_millis\": 2000,\n                        \"retry_on_timeout\": False,\n                    },\n                    {\n                        \"task_key\": \"Orders_Ingest\",\n                        \"description\": \"Ingests order data\",\n                        \"depends_on\": [],\n                        \"job_cluster_key\": \"auto_scaling_cluster\",\n                        \"spark_jar_task\": {\n                            \"main_class_name\": \"com.databricks.OrdersIngest\",\n                            \"parameters\": [\n                                \"--data\",\n                                \"dbfs:/path/to/order-data.json\",\n                            ],\n                        },\n                        \"libraries\": [\n                            {\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}\n                        ],\n                        \"timeout_seconds\": 86400,\n                        \"max_retries\": 3,\n                        \"min_retry_interval_millis\": 2000,\n                        \"retry_on_timeout\": False,\n                  
  },\n                    {\n                        \"task_key\": \"Match\",\n                        \"description\": \"Matches orders with user sessions\",\n                        \"depends_on\": [\n                            {\"task_key\": \"Orders_Ingest\"},\n                            {\"task_key\": \"Sessionize\"},\n                        ],\n                        \"new_cluster\": {\n                            \"spark_version\": \"7.3.x-scala2.12\",\n                            \"node_type_id\": \"i3.xlarge\",\n                            \"spark_conf\": {\"spark.speculation\": True},\n                            \"aws_attributes\": {\n                                \"availability\": \"SPOT\",\n                                \"zone_id\": \"us-west-2a\",\n                            },\n                            \"autoscale\": {\n                                \"min_workers\": 2,\n                                \"max_workers\": 16,\n                            },\n                        },\n                        \"notebook_task\": {\n                            \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n                            \"source\": \"WORKSPACE\",\n                            \"base_parameters\": {\n                                \"name\": \"John Doe\",\n                                \"age\": \"35\",\n                            },\n                        },\n                        \"timeout_seconds\": 86400,\n                        \"max_retries\": 3,\n                        \"min_retry_interval_millis\": 2000,\n                        \"retry_on_timeout\": False,\n                    },\n                ]\n                ```\n            - job_clusters:\n                A list of job cluster specifications that can be shared and\n                reused by tasks of this job. Libraries cannot be declared in\n                a shared job cluster. You must declare dependent libraries\n                in task settings, e.g.\n                ```\n                [\n                    {\n                        \"job_cluster_key\": \"auto_scaling_cluster\",\n                        \"new_cluster\": {\n                            \"spark_version\": \"7.3.x-scala2.12\",\n                            \"node_type_id\": \"i3.xlarge\",\n                            \"spark_conf\": {\"spark.speculation\": True},\n                            \"aws_attributes\": {\n                                \"availability\": \"SPOT\",\n                                \"zone_id\": \"us-west-2a\",\n                            },\n                            \"autoscale\": {\n                                \"min_workers\": 2,\n                                \"max_workers\": 16,\n                            },\n                        },\n                    }\n                ]\n                ```\n            - email_notifications:\n                An optional set of email addresses that is notified when\n                runs of this job begin or complete as well as when this job\n                is deleted. The default behavior is to not send any emails.\n            - webhook_notifications:\n                A collection of system notification IDs to notify when runs\n                of this job begin or complete. The default behavior is to\n                not send any system notifications.\n            - timeout_seconds:\n                An optional timeout applied to each run of this job. 
The\n                default behavior is to have no timeout, e.g. `86400`.\n            - schedule:\n                An optional periodic schedule for this job. The default\n                behavior is that the job only runs when triggered by\n                clicking \u201cRun Now\u201d in the Jobs UI or sending an API request\n                to `runNow`.\n            - max_concurrent_runs:\n                An optional maximum allowed number of concurrent runs of the\n                job.  Set this value if you want to be able to execute\n                multiple runs of the same job concurrently. This is useful\n                for example if you trigger your job on a frequent schedule\n                and want to allow consecutive runs to overlap with each\n                other, or if you want to trigger multiple runs which differ\n                by their input parameters.  This setting affects only new\n                runs. For example, suppose the job\u2019s concurrency is 4 and\n                there are 4 concurrent active runs. Then setting the\n                concurrency to 3 won\u2019t kill any of the active runs. However,\n                from then on, new runs are skipped unless there are fewer\n                than 3 active runs.  This value cannot exceed 1000\\. Setting\n                this value to 0 causes all new runs to be skipped. The\n                default behavior is to allow only 1 concurrent run, e.g.\n                `10`.\n            - git_source:\n                This functionality is in Public Preview.  An optional\n                specification for a remote repository containing the\n                notebooks used by this job's notebook tasks, e.g.\n                ```\n                {\n                    \"git_url\": \"https://github.com/databricks/databricks-cli\",\n                    \"git_branch\": \"main\",\n                    \"git_provider\": \"gitHub\",\n                }\n                ```\n            - format:\n                Used to tell what is the format of the job. This field is\n                ignored in Create/Update/Reset calls. When using the Jobs\n                API 2.1 this value is always set to `'MULTI_TASK'`, e.g.\n                `MULTI_TASK`.\n\n    Returns:\n        Upon success, an empty dict.\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/reset`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Job was overwritten successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/reset\"  # noqa\n\n    responses = {\n        200: \"Job was overwritten successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    json_payload = {\n        \"job_id\": job_id,\n        \"new_settings\": new_settings,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.POST,\n        json=json_payload,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
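A hedged sketch of a reset call. The new settings are shown as a plain mapping for brevity, although the task is typed to accept models.JobSettings; the credentials block name, job ID, cluster ID, and notebook path reuse the illustrative values above:

```python
from prefect import flow
from prefect_databricks import DatabricksCredentials
from prefect_databricks.jobs import jobs_reset


@flow
async def overwrite_job_settings():
    databricks_credentials = await DatabricksCredentials.load("my-databricks")
    # Completely replaces the job's settings; use the Update endpoint for partial changes
    return await jobs_reset(
        databricks_credentials=databricks_credentials,
        job_id=11223344,
        new_settings={
            "name": "A multitask job",
            "max_concurrent_runs": 1,
            "tasks": [
                {
                    "task_key": "Sessionize",
                    "existing_cluster_id": "0923-164208-meows279",
                    "notebook_task": {"notebook_path": "/Users/user.name@databricks.com/Match"},
                }
            ],
        },
    )
```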
    "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_run_now","title":"jobs_run_now async","text":"

    Run a job and return the run_id of the triggered run.

    Parameters:

    Name Type Description Default databricks_credentials DatabricksCredentials

    Credentials to use for authentication with Databricks.

    required job_id Optional[int]

    The ID of the job to be executed, e.g. 11223344.

    None idempotency_token Optional[str]

    An optional token to guarantee the idempotency of job run requests. If a run with the provided token already exists, the request does not create a new run but returns the ID of the existing run instead. If a run with the provided token is deleted, an error is returned. If you specify the idempotency token, upon failure you can retry until the request succeeds. Databricks guarantees that exactly one run is launched with that idempotency token. This token must have at most 64 characters. For more information, see How to ensure idempotency for jobs, e.g. 8f018174-4792-40d5-bcbc-3e6a527352c8.

    None jar_params Optional[List[str]]

    A list of parameters for jobs with Spark JAR tasks, for example 'jar_params': ['john doe', '35']. The parameters are used to invoke the main function of the main class specified in the Spark JAR task. If not specified upon run-now, it defaults to an empty list. jar_params cannot be specified in conjunction with notebook_params. The JSON representation of this field (for example {'jar_params':['john doe','35']}) cannot exceed 10,000 bytes. Use Task parameter variables to set parameters containing information about job runs, e.g.

    [\"john\", \"doe\", \"35\"]\n

    None notebook_params Optional[Dict]

    A map from keys to values for jobs with notebook task, for example 'notebook_params': {'name': 'john doe', 'age': '35'}. The map is passed to the notebook and is accessible through the dbutils.widgets.get function. If not specified upon run-now, the triggered run uses the job\u2019s base parameters. notebook_params cannot be specified in conjunction with jar_params. Use Task parameter variables to set parameters containing information about job runs. The JSON representation of this field (for example {'notebook_params':{'name':'john doe','age':'35'}}) cannot exceed 10,000 bytes, e.g.

    {\"name\": \"john doe\", \"age\": \"35\"}\n

    None python_params Optional[List[str]]

    A list of parameters for jobs with Python tasks, for example 'python_params': ['john doe', '35']. The parameters are passed to Python file as command-line parameters. If specified upon run-now, it would overwrite the parameters specified in job setting. The JSON representation of this field (for example {'python_params':['john doe','35']}) cannot exceed 10,000 bytes. Use Task parameter variables to set parameters containing information about job runs. Important These parameters accept only Latin characters (ASCII character set). Using non-ASCII characters returns an error. Examples of invalid, non-ASCII characters are Chinese, Japanese kanjis, and emojis, e.g.

    [\"john doe\", \"35\"]\n

    None spark_submit_params Optional[List[str]]

    A list of parameters for jobs with spark submit task, for example 'spark_submit_params': ['--class', 'org.apache.spark.examples.SparkPi']. The parameters are passed to spark-submit script as command-line parameters. If specified upon run-now, it would overwrite the parameters specified in job setting. The JSON representation of this field (for example {'python_params':['john doe','35']}) cannot exceed 10,000 bytes. Use Task parameter variables to set parameters containing information about job runs. Important These parameters accept only Latin characters (ASCII character set). Using non-ASCII characters returns an error. Examples of invalid, non-ASCII characters are Chinese, Japanese kanjis, and emojis, e.g.

    [\"--class\", \"org.apache.spark.examples.SparkPi\"]\n

    None python_named_params Optional[Dict]

    A map from keys to values for jobs with Python wheel task, for example 'python_named_params': {'name': 'task', 'data': 'dbfs:/path/to/data.json'}, e.g.

    {\"name\": \"task\", \"data\": \"dbfs:/path/to/data.json\"}\n

    None pipeline_params Optional[str] None sql_params Optional[Dict]

    A map from keys to values for SQL tasks, for example 'sql_params': {'name': 'john doe', 'age': '35'}. The SQL alert task does not support custom parameters, e.g.

    {\"name\": \"john doe\", \"age\": \"35\"}\n

    None dbt_commands Optional[List]

    An array of commands to execute for jobs with the dbt task, for example 'dbt_commands': ['dbt deps', 'dbt seed', 'dbt run'], e.g.

    [\"dbt deps\", \"dbt seed\", \"dbt run\"]\n

    None

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | Upon success, a dict of the response: run_id: int, number_in_job: int |

API Endpoint:

/2.1/jobs/run-now

API Responses:

| Response | Description |
| --- | --- |
| 200 | Run was started successfully. |
| 400 | The request was malformed. See JSON response for error details. |
| 401 | The request was unauthorized. |
| 500 | The request was not handled correctly due to a server error. |

Source code in prefect_databricks/jobs.py
    @task\nasync def jobs_run_now(\n    databricks_credentials: \"DatabricksCredentials\",\n    job_id: Optional[int] = None,\n    idempotency_token: Optional[str] = None,\n    jar_params: Optional[List[str]] = None,\n    notebook_params: Optional[Dict] = None,\n    python_params: Optional[List[str]] = None,\n    spark_submit_params: Optional[List[str]] = None,\n    python_named_params: Optional[Dict] = None,\n    pipeline_params: Optional[str] = None,\n    sql_params: Optional[Dict] = None,\n    dbt_commands: Optional[List] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Run a job and return the `run_id` of the triggered run.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        job_id:\n            The ID of the job to be executed, e.g. `11223344`.\n        idempotency_token:\n            An optional token to guarantee the idempotency of job run requests. If a\n            run with the provided token already exists, the request does\n            not create a new run but returns the ID of the existing run\n            instead. If a run with the provided token is deleted, an\n            error is returned.  If you specify the idempotency token,\n            upon failure you can retry until the request succeeds.\n            Databricks guarantees that exactly one run is launched with\n            that idempotency token.  This token must have at most 64\n            characters.  For more information, see [How to ensure\n            idempotency for jobs](https://kb.databricks.com/jobs/jobs-\n            idempotency.html), e.g.\n            `8f018174-4792-40d5-bcbc-3e6a527352c8`.\n        jar_params:\n            A list of parameters for jobs with Spark JAR tasks, for example\n            `'jar_params': ['john doe', '35']`. The parameters are used\n            to invoke the main function of the main class specified in\n            the Spark JAR task. If not specified upon `run-now`, it\n            defaults to an empty list. jar_params cannot be specified in\n            conjunction with notebook_params. The JSON representation of\n            this field (for example `{'jar_params':['john doe','35']}`)\n            cannot exceed 10,000 bytes.  Use [Task parameter\n            variables](https://docs.databricks.com/jobs.html\n            parameter-variables) to set parameters containing\n            information about job runs, e.g.\n            ```\n            [\"john\", \"doe\", \"35\"]\n            ```\n        notebook_params:\n            A map from keys to values for jobs with notebook task, for example\n            `'notebook_params': {'name': 'john doe', 'age': '35'}`. The\n            map is passed to the notebook and is accessible through the\n            [dbutils.widgets.get](https://docs.databricks.com/dev-\n            tools/databricks-utils.html\n            dbutils-widgets) function.  If not specified upon `run-now`,\n            the triggered run uses the job\u2019s base parameters.\n            notebook_params cannot be specified in conjunction with\n            jar_params.  Use [Task parameter\n            variables](https://docs.databricks.com/jobs.html\n            parameter-variables) to set parameters containing\n            information about job runs.  
The JSON representation of this\n            field (for example `{'notebook_params':{'name':'john\n            doe','age':'35'}}`) cannot exceed 10,000 bytes, e.g.\n            ```\n            {\"name\": \"john doe\", \"age\": \"35\"}\n            ```\n        python_params:\n            A list of parameters for jobs with Python tasks, for example\n            `'python_params': ['john doe', '35']`. The parameters are\n            passed to Python file as command-line parameters. If\n            specified upon `run-now`, it would overwrite the parameters\n            specified in job setting. The JSON representation of this\n            field (for example `{'python_params':['john doe','35']}`)\n            cannot exceed 10,000 bytes.  Use [Task parameter\n            variables](https://docs.databricks.com/jobs.html\n            parameter-variables) to set parameters containing\n            information about job runs.  Important  These parameters\n            accept only Latin characters (ASCII character set). Using\n            non-ASCII characters returns an error. Examples of invalid,\n            non-ASCII characters are Chinese, Japanese kanjis, and\n            emojis, e.g.\n            ```\n            [\"john doe\", \"35\"]\n            ```\n        spark_submit_params:\n            A list of parameters for jobs with spark submit task, for example\n            `'spark_submit_params': ['--class',\n            'org.apache.spark.examples.SparkPi']`. The parameters are\n            passed to spark-submit script as command-line parameters. If\n            specified upon `run-now`, it would overwrite the parameters\n            specified in job setting. The JSON representation of this\n            field (for example `{'python_params':['john doe','35']}`)\n            cannot exceed 10,000 bytes.  Use [Task parameter\n            variables](https://docs.databricks.com/jobs.html\n            parameter-variables) to set parameters containing\n            information about job runs.  Important  These parameters\n            accept only Latin characters (ASCII character set). Using\n            non-ASCII characters returns an error. Examples of invalid,\n            non-ASCII characters are Chinese, Japanese kanjis, and\n            emojis, e.g.\n            ```\n            [\"--class\", \"org.apache.spark.examples.SparkPi\"]\n            ```\n        python_named_params:\n            A map from keys to values for jobs with Python wheel task, for example\n            `'python_named_params': {'name': 'task', 'data':\n            'dbfs:/path/to/data.json'}`, e.g.\n            ```\n            {\"name\": \"task\", \"data\": \"dbfs:/path/to/data.json\"}\n            ```\n        pipeline_params:\n\n        sql_params:\n            A map from keys to values for SQL tasks, for example `'sql_params':\n            {'name': 'john doe', 'age': '35'}`. The SQL alert task does\n            not support custom parameters, e.g.\n            ```\n            {\"name\": \"john doe\", \"age\": \"35\"}\n            ```\n        dbt_commands:\n            An array of commands to execute for jobs with the dbt task, for example\n            `'dbt_commands': ['dbt deps', 'dbt seed', 'dbt run']`, e.g.\n            ```\n            [\"dbt deps\", \"dbt seed\", \"dbt run\"]\n            ```\n\n    Returns:\n        Upon success, a dict of the response. 
</br>- `run_id: int`</br>- `number_in_job: int`</br>\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/run-now`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Run was started successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/run-now\"  # noqa\n\n    responses = {\n        200: \"Run was started successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    json_payload = {\n        \"job_id\": job_id,\n        \"idempotency_token\": idempotency_token,\n        \"jar_params\": jar_params,\n        \"notebook_params\": notebook_params,\n        \"python_params\": python_params,\n        \"spark_submit_params\": spark_submit_params,\n        \"python_named_params\": python_named_params,\n        \"pipeline_params\": pipeline_params,\n        \"sql_params\": sql_params,\n        \"dbt_commands\": dbt_commands,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.POST,\n        json=json_payload,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
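A sketch of triggering a run with overridden notebook parameters; the credentials block name is hypothetical and the job ID, parameters, and idempotency token reuse the docstring's example values:

```python
from prefect import flow
from prefect_databricks import DatabricksCredentials
from prefect_databricks.jobs import jobs_run_now


@flow
async def trigger_job_run():
    databricks_credentials = await DatabricksCredentials.load("my-databricks")
    # Trigger job 11223344, overriding its notebook base parameters for this run only
    run = await jobs_run_now(
        databricks_credentials=databricks_credentials,
        job_id=11223344,
        notebook_params={"name": "john doe", "age": "35"},
        idempotency_token="8f018174-4792-40d5-bcbc-3e6a527352c8",
    )
    return run["run_id"]
```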
    "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_runs_cancel","title":"jobs_runs_cancel async","text":"

    Cancels a job run. The run is canceled asynchronously, so it may still be running when this request completes.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| databricks_credentials | DatabricksCredentials | Credentials to use for authentication with Databricks. | required |
| run_id | Optional[int] | This field is required, e.g. 455644833. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | Upon success, an empty dict. |

API Endpoint:

/2.1/jobs/runs/cancel

API Responses:

| Response | Description |
| --- | --- |
| 200 | Run was cancelled successfully. |
| 400 | The request was malformed. See JSON response for error details. |
| 401 | The request was unauthorized. |
| 500 | The request was not handled correctly due to a server error. |

Source code in prefect_databricks/jobs.py
    @task\nasync def jobs_runs_cancel(\n    databricks_credentials: \"DatabricksCredentials\",\n    run_id: Optional[int] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Cancels a job run. The run is canceled asynchronously, so it may still be\n    running when this request completes.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        run_id:\n            This field is required, e.g. `455644833`.\n\n    Returns:\n        Upon success, an empty dict.\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/runs/cancel`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Run was cancelled successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/runs/cancel\"  # noqa\n\n    responses = {\n        200: \"Run was cancelled successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    json_payload = {\n        \"run_id\": run_id,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.POST,\n        json=json_payload,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
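A minimal cancellation sketch, assuming the hypothetical "my-databricks" credentials block and the example run ID from the docstring:

```python
from prefect import flow
from prefect_databricks import DatabricksCredentials
from prefect_databricks.jobs import jobs_runs_cancel


@flow
async def cancel_run():
    databricks_credentials = await DatabricksCredentials.load("my-databricks")
    # Cancellation is asynchronous; the run may still be terminating when this returns
    await jobs_runs_cancel(databricks_credentials=databricks_credentials, run_id=455644833)
```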
    "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_runs_cancel_all","title":"jobs_runs_cancel_all async","text":"

    Cancels all active runs of a job. The runs are canceled asynchronously, so it doesn't prevent new runs from being started.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| databricks_credentials | DatabricksCredentials | Credentials to use for authentication with Databricks. | required |
| job_id | Optional[int] | The canonical identifier of the job to cancel all runs of. This field is required, e.g. 11223344. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | Upon success, an empty dict. |

API Endpoint:

/2.1/jobs/runs/cancel-all

API Responses:

| Response | Description |
| --- | --- |
| 200 | All runs were cancelled successfully. |
| 400 | The request was malformed. See JSON response for error details. |
| 401 | The request was unauthorized. |
| 500 | The request was not handled correctly due to a server error. |

Source code in prefect_databricks/jobs.py
    @task\nasync def jobs_runs_cancel_all(\n    databricks_credentials: \"DatabricksCredentials\",\n    job_id: Optional[int] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Cancels all active runs of a job. The runs are canceled asynchronously, so it\n    doesn't prevent new runs from being started.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        job_id:\n            The canonical identifier of the job to cancel all runs of. This field is\n            required, e.g. `11223344`.\n\n    Returns:\n        Upon success, an empty dict.\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/runs/cancel-all`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | All runs were cancelled successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/runs/cancel-all\"  # noqa\n\n    responses = {\n        200: \"All runs were cancelled successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    json_payload = {\n        \"job_id\": job_id,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.POST,\n        json=json_payload,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
    "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_runs_delete","title":"jobs_runs_delete async","text":"

    Deletes a non-active run. Returns an error if the run is active.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| databricks_credentials | DatabricksCredentials | Credentials to use for authentication with Databricks. | required |
| run_id | Optional[int] | The canonical identifier of the run for which to retrieve the metadata, e.g. 455644833. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | Upon success, an empty dict. |

API Endpoint:

/2.1/jobs/runs/delete

API Responses:

| Response | Description |
| --- | --- |
| 200 | Run was deleted successfully. |
| 400 | The request was malformed. See JSON response for error details. |
| 401 | The request was unauthorized. |
| 500 | The request was not handled correctly due to a server error. |

Source code in prefect_databricks/jobs.py
    @task\nasync def jobs_runs_delete(\n    databricks_credentials: \"DatabricksCredentials\",\n    run_id: Optional[int] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Deletes a non-active run. Returns an error if the run is active.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        run_id:\n            The canonical identifier of the run for which to retrieve the metadata,\n            e.g. `455644833`.\n\n    Returns:\n        Upon success, an empty dict.\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/runs/delete`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Run was deleted successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/runs/delete\"  # noqa\n\n    responses = {\n        200: \"Run was deleted successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    json_payload = {\n        \"run_id\": run_id,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.POST,\n        json=json_payload,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
    "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_runs_export","title":"jobs_runs_export async","text":"

    Export and retrieve the job run task.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| run_id | int | The canonical identifier for the run. This field is required. | required |
| databricks_credentials | DatabricksCredentials | Credentials to use for authentication with Databricks. | required |
| views_to_export | Optional[ViewsToExport] | Which views to export (CODE, DASHBOARDS, or ALL). Defaults to CODE. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | Upon success, a dict of the response: views: List["models.ViewItem"] |

API Endpoint:

/2.0/jobs/runs/export

API Responses:

| Response | Description |
| --- | --- |
| 200 | Run was exported successfully. |
| 400 | The request was malformed. See JSON response for error details. |
| 401 | The request was unauthorized. |
| 500 | The request was not handled correctly due to a server error. |

Source code in prefect_databricks/jobs.py
    @task\nasync def jobs_runs_export(\n    run_id: int,\n    databricks_credentials: \"DatabricksCredentials\",\n    views_to_export: Optional[\"models.ViewsToExport\"] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Export and retrieve the job run task.\n\n    Args:\n        run_id:\n            The canonical identifier for the run. This field is required.\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        views_to_export:\n            Which views to export (CODE, DASHBOARDS, or ALL). Defaults to CODE.\n\n    Returns:\n        Upon success, a dict of the response. </br>- `views: List[\"models.ViewItem\"]`</br>\n\n    <h4>API Endpoint:</h4>\n    `/2.0/jobs/runs/export`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Run was exported successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.0/jobs/runs/export\"  # noqa\n\n    responses = {\n        200: \"Run was exported successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    params = {\n        \"run_id\": run_id,\n        \"views_to_export\": views_to_export,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.GET,\n        params=params,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
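A sketch of exporting a run's views, assuming the hypothetical "my-databricks" credentials block; views_to_export is omitted so the default CODE views are exported:

```python
from prefect import flow
from prefect_databricks import DatabricksCredentials
from prefect_databricks.jobs import jobs_runs_export


@flow
async def export_run_views():
    databricks_credentials = await DatabricksCredentials.load("my-databricks")
    export = await jobs_runs_export(run_id=455644833, databricks_credentials=databricks_credentials)
    return export.get("views", [])
```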
    "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_runs_get","title":"jobs_runs_get async","text":"

    Retrieve the metadata of a run.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| run_id | int | The canonical identifier of the run for which to retrieve the metadata. This field is required. | required |
| databricks_credentials | DatabricksCredentials | Credentials to use for authentication with Databricks. | required |
| include_history | Optional[bool] | Whether to include the repair history in the response. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | Upon success, a dict of the response: job_id: int, run_id: int, number_in_job: int, creator_user_name: str, original_attempt_run_id: int, state: "models.RunState", schedule: "models.CronSchedule", tasks: List["models.RunTask"], job_clusters: List["models.JobCluster"], cluster_spec: "models.ClusterSpec", cluster_instance: "models.ClusterInstance", git_source: "models.GitSource", overriding_parameters: "models.RunParameters", start_time: int, setup_duration: int, execution_duration: int, cleanup_duration: int, end_time: int, trigger: "models.TriggerType", run_name: str, run_page_url: str, run_type: "models.RunType", attempt_number: int, repair_history: List["models.RepairHistoryItem"] |

API Endpoint:

/2.1/jobs/runs/get

API Responses:

| Response | Description |
| --- | --- |
| 200 | Run was retrieved successfully. |
| 400 | The request was malformed. See JSON response for error details. |
| 401 | The request was unauthorized. |
| 500 | The request was not handled correctly due to a server error. |

Source code in prefect_databricks/jobs.py
    @task\nasync def jobs_runs_get(\n    run_id: int,\n    databricks_credentials: \"DatabricksCredentials\",\n    include_history: Optional[bool] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Retrieve the metadata of a run.\n\n    Args:\n        run_id:\n            The canonical identifier of the run for which to retrieve the metadata.\n            This field is required.\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        include_history:\n            Whether to include the repair history in the response.\n\n    Returns:\n        Upon success, a dict of the response. </br>- `job_id: int`</br>- `run_id: int`</br>- `number_in_job: int`</br>- `creator_user_name: str`</br>- `original_attempt_run_id: int`</br>- `state: \"models.RunState\"`</br>- `schedule: \"models.CronSchedule\"`</br>- `tasks: List[\"models.RunTask\"]`</br>- `job_clusters: List[\"models.JobCluster\"]`</br>- `cluster_spec: \"models.ClusterSpec\"`</br>- `cluster_instance: \"models.ClusterInstance\"`</br>- `git_source: \"models.GitSource\"`</br>- `overriding_parameters: \"models.RunParameters\"`</br>- `start_time: int`</br>- `setup_duration: int`</br>- `execution_duration: int`</br>- `cleanup_duration: int`</br>- `end_time: int`</br>- `trigger: \"models.TriggerType\"`</br>- `run_name: str`</br>- `run_page_url: str`</br>- `run_type: \"models.RunType\"`</br>- `attempt_number: int`</br>- `repair_history: List[\"models.RepairHistoryItem\"]`</br>\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/runs/get`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Run was retrieved successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/runs/get\"  # noqa\n\n    responses = {\n        200: \"Run was retrieved successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    params = {\n        \"run_id\": run_id,\n        \"include_history\": include_history,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.GET,\n        params=params,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
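A sketch of polling a run's state. The credentials block name is hypothetical, and the nested "state"/"life_cycle_state" keys reflect the Databricks runs/get response shape:

```python
from prefect import flow
from prefect_databricks import DatabricksCredentials
from prefect_databricks.jobs import jobs_runs_get


@flow
async def check_run_state():
    databricks_credentials = await DatabricksCredentials.load("my-databricks")
    run = await jobs_runs_get(run_id=455644833, databricks_credentials=databricks_credentials)
    # Lifecycle information is nested under the "state" key of the response
    return run["state"]["life_cycle_state"]
```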
    "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_runs_get_output","title":"jobs_runs_get_output async","text":"

Retrieve the output and metadata of a single task run. When a notebook task returns a value through the dbutils.notebook.exit() call, you can use this endpoint to retrieve that value. Databricks restricts this API to return the first 5 MB of the output. To return a larger result, you can store job results in a cloud storage service. This endpoint validates that the run_id parameter is valid and returns an HTTP status code 400 if the run_id parameter is invalid. Runs are automatically removed after 60 days. If you want to reference them beyond 60 days, you must save old run results before they expire. To export using the UI, see Export job run results. To export using the Jobs API, see Runs export.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| run_id | int | The canonical identifier for the run. This field is required. | required |
| databricks_credentials | DatabricksCredentials | Credentials to use for authentication with Databricks. | required |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | Upon success, a dict of the response: notebook_output: "models.NotebookOutput", sql_output: "models.SqlOutput", dbt_output: "models.DbtOutput", logs: str, logs_truncated: bool, error: str, error_trace: str, metadata: "models.Run" |

API Endpoint:

/2.1/jobs/runs/get-output

API Responses:

| Response | Description |
| --- | --- |
| 200 | Run output was retrieved successfully. |
| 400 | A job run with multiple tasks was provided. |
| 401 | The request was unauthorized. |
| 500 | The request was not handled correctly due to a server error. |

Source code in prefect_databricks/jobs.py
    @task\nasync def jobs_runs_get_output(\n    run_id: int,\n    databricks_credentials: \"DatabricksCredentials\",\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Retrieve the output and metadata of a single task run. When a notebook task\n    returns a value through the dbutils.notebook.exit() call, you can use this\n    endpoint to retrieve that value. Databricks restricts this API to return the\n    first 5 MB of the output. To return a larger result, you can store job\n    results in a cloud storage service. This endpoint validates that the run_id\n    parameter is valid and returns an HTTP status code 400 if the run_id\n    parameter is invalid. Runs are automatically removed after 60 days. If you\n    to want to reference them beyond 60 days, you must save old run results\n    before they expire. To export using the UI, see Export job run results. To\n    export using the Jobs API, see Runs export.\n\n    Args:\n        run_id:\n            The canonical identifier for the run. This field is required.\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n\n    Returns:\n        Upon success, a dict of the response. </br>- `notebook_output: \"models.NotebookOutput\"`</br>- `sql_output: \"models.SqlOutput\"`</br>- `dbt_output: \"models.DbtOutput\"`</br>- `logs: str`</br>- `logs_truncated: bool`</br>- `error: str`</br>- `error_trace: str`</br>- `metadata: \"models.Run\"`</br>\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/runs/get-output`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Run output was retrieved successfully. |\n    | 400 | A job run with multiple tasks was provided. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/runs/get-output\"  # noqa\n\n    responses = {\n        200: \"Run output was retrieved successfully.\",  # noqa\n        400: \"A job run with multiple tasks was provided.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    params = {\n        \"run_id\": run_id,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.GET,\n        params=params,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
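A sketch of retrieving a single task run's notebook output, assuming the hypothetical "my-databricks" credentials block:

```python
from prefect import flow
from prefect_databricks import DatabricksCredentials
from prefect_databricks.jobs import jobs_runs_get_output


@flow
async def fetch_notebook_result():
    databricks_credentials = await DatabricksCredentials.load("my-databricks")
    output = await jobs_runs_get_output(run_id=455644833, databricks_credentials=databricks_credentials)
    # "notebook_output" holds the value passed to dbutils.notebook.exit(), if any
    return output.get("notebook_output")
```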
    "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_runs_list","title":"jobs_runs_list async","text":"

    List runs in descending order by start time.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| databricks_credentials | DatabricksCredentials | Credentials to use for authentication with Databricks. | required |
| active_only | bool | If active_only is true, only active runs are included in the results; otherwise, lists both active and completed runs. An active run is a run in the PENDING, RUNNING, or TERMINATING state. This field cannot be true when completed_only is true. | False |
| completed_only | bool | If completed_only is true, only completed runs are included in the results; otherwise, lists both active and completed runs. This field cannot be true when active_only is true. | False |
| job_id | Optional[int] | The job for which to list runs. If omitted, the Jobs service lists runs from all jobs. | None |
| offset | int | The offset of the first run to return, relative to the most recent run. | 0 |
| limit | int | The number of runs to return. This value must be greater than 0 and less than 25. The default value is 25. If a request specifies a limit of 0, the service instead uses the maximum limit. | 25 |
| run_type | Optional[str] | The type of runs to return. For a description of run types, see Run. | None |
| expand_tasks | bool | Whether to include task and cluster details in the response. | False |
| start_time_from | Optional[int] | Show runs that started at or after this value. The value must be a UTC timestamp in milliseconds. Can be combined with start_time_to to filter by a time range. | None |
| start_time_to | Optional[int] | Show runs that started at or before this value. The value must be a UTC timestamp in milliseconds. Can be combined with start_time_from to filter by a time range. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | Upon success, a dict of the response: runs: List["models.Run"], has_more: bool |

API Endpoint:

/2.1/jobs/runs/list

API Responses:

| Response | Description |
| --- | --- |
| 200 | List of runs was retrieved successfully. |
| 400 | The request was malformed. See JSON response for error details. |
| 401 | The request was unauthorized. |
| 500 | The request was not handled correctly due to a server error. |

Source code in prefect_databricks/jobs.py
    @task\nasync def jobs_runs_list(\n    databricks_credentials: \"DatabricksCredentials\",\n    active_only: bool = False,\n    completed_only: bool = False,\n    job_id: Optional[int] = None,\n    offset: int = 0,\n    limit: int = 25,\n    run_type: Optional[str] = None,\n    expand_tasks: bool = False,\n    start_time_from: Optional[int] = None,\n    start_time_to: Optional[int] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List runs in descending order by start time.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        active_only:\n            If active_only is `true`, only active runs are included in the results;\n            otherwise, lists both active and completed runs. An active\n            run is a run in the `PENDING`, `RUNNING`, or `TERMINATING`.\n            This field cannot be `true` when completed_only is `true`.\n        completed_only:\n            If completed_only is `true`, only completed runs are included in the\n            results; otherwise, lists both active and completed runs.\n            This field cannot be `true` when active_only is `true`.\n        job_id:\n            The job for which to list runs. If omitted, the Jobs service lists runs\n            from all jobs.\n        offset:\n            The offset of the first run to return, relative to the most recent run.\n        limit:\n            The number of runs to return. This value must be greater than 0 and less\n            than 25\\. The default value is 25\\. If a request specifies a\n            limit of 0, the service instead uses the maximum limit.\n        run_type:\n            The type of runs to return. For a description of run types, see\n            [Run](https://docs.databricks.com/dev-\n            tools/api/latest/jobs.html\n            operation/JobsRunsGet).\n        expand_tasks:\n            Whether to include task and cluster details in the response.\n        start_time_from:\n            Show runs that started _at or after_ this value. The value must be a UTC\n            timestamp in milliseconds. Can be combined with\n            _start_time_to_ to filter by a time range.\n        start_time_to:\n            Show runs that started _at or before_ this value. The value must be a\n            UTC timestamp in milliseconds. Can be combined with\n            _start_time_from_ to filter by a time range.\n\n    Returns:\n        Upon success, a dict of the response. </br>- `runs: List[\"models.Run\"]`</br>- `has_more: bool`</br>\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/runs/list`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | List of runs was retrieved successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/runs/list\"  # noqa\n\n    responses = {\n        200: \"List of runs was retrieved successfully.\",  # noqa\n        400: \"The request was malformed. 
See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    params = {\n        \"active_only\": active_only,\n        \"completed_only\": completed_only,\n        \"job_id\": job_id,\n        \"offset\": offset,\n        \"limit\": limit,\n        \"run_type\": run_type,\n        \"expand_tasks\": expand_tasks,\n        \"start_time_from\": start_time_from,\n        \"start_time_to\": start_time_to,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.GET,\n        params=params,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
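For orientation, here is a minimal sketch of calling this task from a flow. It assumes a `DatabricksCredentials` block saved under the placeholder name `"my-block"` and uses only the parameters documented above:

```python
from prefect import flow
from prefect_databricks import DatabricksCredentials
from prefect_databricks.jobs import jobs_runs_list


@flow
async def list_recent_runs():
    # "my-block" is a placeholder for your saved DatabricksCredentials block name
    databricks_credentials = await DatabricksCredentials.load("my-block")
    contents = await jobs_runs_list(
        databricks_credentials=databricks_credentials,
        limit=5,
        expand_tasks=True,
    )
    # per the Returns section above, the response dict contains "runs" and "has_more"
    return contents.get("runs", [])
```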
    "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_runs_repair","title":"jobs_runs_repair async","text":"

    Re-run one or more tasks. Tasks are re-run as part of the original job run, use the current job and task settings, and can be viewed in the history for the original job run.

    Parameters:

    Name Type Description Default databricks_credentials DatabricksCredentials

    Credentials to use for authentication with Databricks.

    required run_id Optional[int]

    The job run ID of the run to repair. The run must not be in progress, e.g. 455644833.

    None rerun_tasks Optional[List[str]]

    The task keys of the task runs to repair, e.g.

    [\"task0\", \"task1\"]\n

    None latest_repair_id Optional[int]

    The ID of the latest repair. This parameter is not required when repairing a run for the first time, but must be provided on subsequent requests to repair the same run, e.g. 734650698524280.

    None rerun_all_failed_tasks bool

    If true, repair all failed tasks. Only one of rerun_tasks or rerun_all_failed_tasks can be used.

    False jar_params Optional[List[str]]

    A list of parameters for jobs with Spark JAR tasks, for example 'jar_params': ['john doe', '35']. The parameters are used to invoke the main function of the main class specified in the Spark JAR task. If not specified upon run-now, it defaults to an empty list. jar_params cannot be specified in conjunction with notebook_params. The JSON representation of this field (for example {'jar_params':['john doe','35']}) cannot exceed 10,000 bytes. Use Task parameter variables to set parameters containing information about job runs, e.g.

    [\"john\", \"doe\", \"35\"]\n

    None notebook_params Optional[Dict]

    A map from keys to values for jobs with notebook task, for example 'notebook_params': {'name': 'john doe', 'age': '35'}. The map is passed to the notebook and is accessible through the dbutils.widgets.get function. If not specified upon run-now, the triggered run uses the job\u2019s base parameters. notebook_params cannot be specified in conjunction with jar_params. Use Task parameter variables to set parameters containing information about job runs. The JSON representation of this field (for example {'notebook_params':{'name':'john doe','age':'35'}}) cannot exceed 10,000 bytes, e.g.

    {\"name\": \"john doe\", \"age\": \"35\"}\n

    None python_params Optional[List[str]]

    A list of parameters for jobs with Python tasks, for example 'python_params': ['john doe', '35']. The parameters are passed to Python file as command-line parameters. If specified upon run-now, it would overwrite the parameters specified in job setting. The JSON representation of this field (for example {'python_params':['john doe','35']}) cannot exceed 10,000 bytes. Use Task parameter variables to set parameters containing information about job runs. Important These parameters accept only Latin characters (ASCII character set). Using non-ASCII characters returns an error. Examples of invalid, non-ASCII characters are Chinese, Japanese kanjis, and emojis, e.g.

    [\"john doe\", \"35\"]\n

    None spark_submit_params Optional[List[str]]

    A list of parameters for jobs with spark submit task, for example 'spark_submit_params': ['--class', 'org.apache.spark.examples.SparkPi']. The parameters are passed to spark-submit script as command-line parameters. If specified upon run-now, it would overwrite the parameters specified in job setting. The JSON representation of this field (for example {'python_params':['john doe','35']}) cannot exceed 10,000 bytes. Use Task parameter variables to set parameters containing information about job runs. Important These parameters accept only Latin characters (ASCII character set). Using non-ASCII characters returns an error. Examples of invalid, non-ASCII characters are Chinese, Japanese kanjis, and emojis, e.g.

    [\"--class\", \"org.apache.spark.examples.SparkPi\"]\n

    None python_named_params Optional[Dict]

    A map from keys to values for jobs with Python wheel task, for example 'python_named_params': {'name': 'task', 'data': 'dbfs:/path/to/data.json'}, e.g.

    {\"name\": \"task\", \"data\": \"dbfs:/path/to/data.json\"}\n

    None pipeline_params Optional[str] None sql_params Optional[Dict]

    A map from keys to values for SQL tasks, for example 'sql_params': {'name': 'john doe', 'age': '35'}. The SQL alert task does not support custom parameters, e.g.

    {\"name\": \"john doe\", \"age\": \"35\"}\n

    None dbt_commands Optional[List]

    An array of commands to execute for jobs with the dbt task, for example 'dbt_commands': ['dbt deps', 'dbt seed', 'dbt run'], e.g.

    [\"dbt deps\", \"dbt seed\", \"dbt run\"]\n

    None

    Returns:

    Type Description Dict[str, Any]

    Upon success, a dict of the response. - repair_id: int

    API Endpoint:

    /2.1/jobs/runs/repair

    API Responses: Response Description 200 Run repair was initiated. 400 The request was malformed. See JSON response for error details. 401 The request was unauthorized. 500 The request was not handled correctly due to a server error. Source code in prefect_databricks/jobs.py
    @task\nasync def jobs_runs_repair(\n    databricks_credentials: \"DatabricksCredentials\",\n    run_id: Optional[int] = None,\n    rerun_tasks: Optional[List[str]] = None,\n    latest_repair_id: Optional[int] = None,\n    rerun_all_failed_tasks: bool = False,\n    jar_params: Optional[List[str]] = None,\n    notebook_params: Optional[Dict] = None,\n    python_params: Optional[List[str]] = None,\n    spark_submit_params: Optional[List[str]] = None,\n    python_named_params: Optional[Dict] = None,\n    pipeline_params: Optional[str] = None,\n    sql_params: Optional[Dict] = None,\n    dbt_commands: Optional[List] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Re-run one or more tasks. Tasks are re-run as part of the original job run, use\n    the current job and task settings, and can be viewed in the history for the\n    original job run.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        run_id:\n            The job run ID of the run to repair. The run must not be in progress,\n            e.g. `455644833`.\n        rerun_tasks:\n            The task keys of the task runs to repair, e.g.\n            ```\n            [\"task0\", \"task1\"]\n            ```\n        latest_repair_id:\n            The ID of the latest repair. This parameter is not required when\n            repairing a run for the first time, but must be provided on\n            subsequent requests to repair the same run, e.g.\n            `734650698524280`.\n        rerun_all_failed_tasks:\n            If true, repair all failed tasks. Only one of rerun_tasks or\n            rerun_all_failed_tasks can be used.\n        jar_params:\n            A list of parameters for jobs with Spark JAR tasks, for example\n            `'jar_params': ['john doe', '35']`. The parameters are used\n            to invoke the main function of the main class specified in\n            the Spark JAR task. If not specified upon `run-now`, it\n            defaults to an empty list. jar_params cannot be specified in\n            conjunction with notebook_params. The JSON representation of\n            this field (for example `{'jar_params':['john doe','35']}`)\n            cannot exceed 10,000 bytes.  Use [Task parameter\n            variables](https://docs.databricks.com/jobs.html\n            parameter-variables) to set parameters containing\n            information about job runs, e.g.\n            ```\n            [\"john\", \"doe\", \"35\"]\n            ```\n        notebook_params:\n            A map from keys to values for jobs with notebook task, for example\n            `'notebook_params': {'name': 'john doe', 'age': '35'}`. The\n            map is passed to the notebook and is accessible through the\n            [dbutils.widgets.get](https://docs.databricks.com/dev-\n            tools/databricks-utils.html\n            dbutils-widgets) function.  If not specified upon `run-now`,\n            the triggered run uses the job\u2019s base parameters.\n            notebook_params cannot be specified in conjunction with\n            jar_params.  Use [Task parameter\n            variables](https://docs.databricks.com/jobs.html\n            parameter-variables) to set parameters containing\n            information about job runs.  
The JSON representation of this\n            field (for example `{'notebook_params':{'name':'john\n            doe','age':'35'}}`) cannot exceed 10,000 bytes, e.g.\n            ```\n            {\"name\": \"john doe\", \"age\": \"35\"}\n            ```\n        python_params:\n            A list of parameters for jobs with Python tasks, for example\n            `'python_params': ['john doe', '35']`. The parameters are\n            passed to Python file as command-line parameters. If\n            specified upon `run-now`, it would overwrite the parameters\n            specified in job setting. The JSON representation of this\n            field (for example `{'python_params':['john doe','35']}`)\n            cannot exceed 10,000 bytes.  Use [Task parameter\n            variables](https://docs.databricks.com/jobs.html\n            parameter-variables) to set parameters containing\n            information about job runs.  Important  These parameters\n            accept only Latin characters (ASCII character set). Using\n            non-ASCII characters returns an error. Examples of invalid,\n            non-ASCII characters are Chinese, Japanese kanjis, and\n            emojis, e.g.\n            ```\n            [\"john doe\", \"35\"]\n            ```\n        spark_submit_params:\n            A list of parameters for jobs with spark submit task, for example\n            `'spark_submit_params': ['--class',\n            'org.apache.spark.examples.SparkPi']`. The parameters are\n            passed to spark-submit script as command-line parameters. If\n            specified upon `run-now`, it would overwrite the parameters\n            specified in job setting. The JSON representation of this\n            field (for example `{'python_params':['john doe','35']}`)\n            cannot exceed 10,000 bytes.  Use [Task parameter\n            variables](https://docs.databricks.com/jobs.html\n            parameter-variables) to set parameters containing\n            information about job runs.  Important  These parameters\n            accept only Latin characters (ASCII character set). Using\n            non-ASCII characters returns an error. Examples of invalid,\n            non-ASCII characters are Chinese, Japanese kanjis, and\n            emojis, e.g.\n            ```\n            [\"--class\", \"org.apache.spark.examples.SparkPi\"]\n            ```\n        python_named_params:\n            A map from keys to values for jobs with Python wheel task, for example\n            `'python_named_params': {'name': 'task', 'data':\n            'dbfs:/path/to/data.json'}`, e.g.\n            ```\n            {\"name\": \"task\", \"data\": \"dbfs:/path/to/data.json\"}\n            ```\n        pipeline_params:\n\n        sql_params:\n            A map from keys to values for SQL tasks, for example `'sql_params':\n            {'name': 'john doe', 'age': '35'}`. The SQL alert task does\n            not support custom parameters, e.g.\n            ```\n            {\"name\": \"john doe\", \"age\": \"35\"}\n            ```\n        dbt_commands:\n            An array of commands to execute for jobs with the dbt task, for example\n            `'dbt_commands': ['dbt deps', 'dbt seed', 'dbt run']`, e.g.\n            ```\n            [\"dbt deps\", \"dbt seed\", \"dbt run\"]\n            ```\n\n    Returns:\n        Upon success, a dict of the response. 
</br>- `repair_id: int`</br>\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/runs/repair`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Run repair was initiated. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/runs/repair\"  # noqa\n\n    responses = {\n        200: \"Run repair was initiated.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    json_payload = {\n        \"run_id\": run_id,\n        \"rerun_tasks\": rerun_tasks,\n        \"latest_repair_id\": latest_repair_id,\n        \"rerun_all_failed_tasks\": rerun_all_failed_tasks,\n        \"jar_params\": jar_params,\n        \"notebook_params\": notebook_params,\n        \"python_params\": python_params,\n        \"spark_submit_params\": spark_submit_params,\n        \"python_named_params\": python_named_params,\n        \"pipeline_params\": pipeline_params,\n        \"sql_params\": sql_params,\n        \"dbt_commands\": dbt_commands,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.POST,\n        json=json_payload,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
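A minimal sketch of repairing a run by re-running all of its failed tasks; the credentials block name is a placeholder and the run ID is supplied by the caller:

```python
from prefect import flow
from prefect_databricks import DatabricksCredentials
from prefect_databricks.jobs import jobs_runs_repair


@flow
async def repair_failed_tasks(run_id: int):
    # "my-block" is a placeholder for your saved DatabricksCredentials block name
    databricks_credentials = await DatabricksCredentials.load("my-block")
    contents = await jobs_runs_repair(
        databricks_credentials=databricks_credentials,
        run_id=run_id,
        rerun_all_failed_tasks=True,
    )
    # per the Returns section above, a successful response includes "repair_id"
    return contents.get("repair_id")
```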
    "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_runs_submit","title":"jobs_runs_submit async","text":"

    Submit a one-time run. This endpoint allows you to submit a workload directly without creating a job. Use the jobs/runs/get API to check the run state after the job is submitted.

    Parameters:

    Name Type Description Default databricks_credentials DatabricksCredentials

    Credentials to use for authentication with Databricks.

    required tasks Optional[List[RunSubmitTaskSettings]]

    , e.g.

    [\n    {\n        \"task_key\": \"Sessionize\",\n        \"description\": \"Extracts session data from events\",\n        \"depends_on\": [],\n        \"existing_cluster_id\": \"0923-164208-meows279\",\n        \"spark_jar_task\": {\n            \"main_class_name\": \"com.databricks.Sessionize\",\n            \"parameters\": [\"--data\", \"dbfs:/path/to/data.json\"],\n        },\n        \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}],\n        \"timeout_seconds\": 86400,\n    },\n    {\n        \"task_key\": \"Orders_Ingest\",\n        \"description\": \"Ingests order data\",\n        \"depends_on\": [],\n        \"existing_cluster_id\": \"0923-164208-meows279\",\n        \"spark_jar_task\": {\n            \"main_class_name\": \"com.databricks.OrdersIngest\",\n            \"parameters\": [\"--data\", \"dbfs:/path/to/order-data.json\"],\n        },\n        \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}],\n        \"timeout_seconds\": 86400,\n    },\n    {\n        \"task_key\": \"Match\",\n        \"description\": \"Matches orders with user sessions\",\n        \"depends_on\": [\n            {\"task_key\": \"Orders_Ingest\"},\n            {\"task_key\": \"Sessionize\"},\n        ],\n        \"new_cluster\": {\n            \"spark_version\": \"7.3.x-scala2.12\",\n            \"node_type_id\": \"i3.xlarge\",\n            \"spark_conf\": {\"spark.speculation\": True},\n            \"aws_attributes\": {\n                \"availability\": \"SPOT\",\n                \"zone_id\": \"us-west-2a\",\n            },\n            \"autoscale\": {\"min_workers\": 2, \"max_workers\": 16},\n        },\n        \"notebook_task\": {\n            \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n            \"source\": \"WORKSPACE\",\n            \"base_parameters\": {\"name\": \"John Doe\", \"age\": \"35\"},\n        },\n        \"timeout_seconds\": 86400,\n    },\n]\n

    None run_name Optional[str]

    An optional name for the run. The default value is Untitled, e.g. A multitask job run.

    None webhook_notifications WebhookNotifications

    A collection of system notification IDs to notify when runs of this job begin or complete. The default behavior is to not send any system notifications. Key-values: - on_start: An optional list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified for the on_start property, e.g.

    [\n    {\"id\": \"03dd86e4-57ef-4818-a950-78e41a1d71ab\"},\n    {\"id\": \"0481e838-0a59-4eff-9541-a4ca6f149574\"},\n]\n
    - on_success: An optional list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified for the on_success property, e.g.
    [{\"id\": \"03dd86e4-57ef-4818-a950-78e41a1d71ab\"}]\n
    - on_failure: An optional list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified for the on_failure property, e.g.
    [{\"id\": \"0481e838-0a59-4eff-9541-a4ca6f149574\"}]\n

    None git_source GitSource

    This functionality is in Public Preview. An optional specification for a remote repository containing the notebooks used by this job's notebook tasks, e.g. { \"git_url\": \"https://github.com/databricks/databricks-cli\", \"git_branch\": \"main\", \"git_provider\": \"gitHub\", } Key-values: - git_url: URL of the repository to be cloned by this job. The maximum length is 300 characters, e.g. https://github.com/databricks/databricks-cli. - git_provider: Unique identifier of the service used to host the Git repository. The value is case insensitive, e.g. github. - git_branch: Name of the branch to be checked out and used by this job. This field cannot be specified in conjunction with git_tag or git_commit. The maximum length is 255 characters, e.g. main. - git_tag: Name of the tag to be checked out and used by this job. This field cannot be specified in conjunction with git_branch or git_commit. The maximum length is 255 characters, e.g. release-1.0.0. - git_commit: Commit to be checked out and used by this job. This field cannot be specified in conjunction with git_branch or git_tag. The maximum length is 64 characters, e.g. e0056d01. - git_snapshot: Read-only state of the remote repository at the time the job was run. This field is only included on job runs.

    None timeout_seconds Optional[int]

    An optional timeout applied to each run of this job. The default behavior is to have no timeout, e.g. 86400.

    None idempotency_token Optional[str]

    An optional token that can be used to guarantee the idempotency of job run requests. If a run with the provided token already exists, the request does not create a new run but returns the ID of the existing run instead. If a run with the provided token is deleted, an error is returned. If you specify the idempotency token, upon failure you can retry until the request succeeds. Databricks guarantees that exactly one run is launched with that idempotency token. This token must have at most 64 characters. For more information, see How to ensure idempotency for jobs, e.g. 8f018174-4792-40d5-bcbc-3e6a527352c8.

    None access_control_list Optional[List[AccessControlRequest]]

    List of permissions to set on the job.

    None

    Returns:

    Type Description Dict[str, Any]

    Upon success, a dict of the response. - run_id: int

    API Endpoint:

    /2.1/jobs/runs/submit

    API Responses: Response Description 200 Run was created and started successfully. 400 The request was malformed. See JSON response for error details. 401 The request was unauthorized. 500 The request was not handled correctly due to a server error. Source code in prefect_databricks/jobs.py
    @task\nasync def jobs_runs_submit(\n    databricks_credentials: \"DatabricksCredentials\",\n    tasks: Optional[List[\"models.RunSubmitTaskSettings\"]] = None,\n    run_name: Optional[str] = None,\n    webhook_notifications: \"models.WebhookNotifications\" = None,\n    git_source: \"models.GitSource\" = None,\n    timeout_seconds: Optional[int] = None,\n    idempotency_token: Optional[str] = None,\n    access_control_list: Optional[List[\"models.AccessControlRequest\"]] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Submit a one-time run. This endpoint allows you to submit a workload directly\n    without creating a job. Use the `jobs/runs/get` API to check the run state\n    after the job is submitted.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        tasks:\n            , e.g.\n            ```\n            [\n                {\n                    \"task_key\": \"Sessionize\",\n                    \"description\": \"Extracts session data from events\",\n                    \"depends_on\": [],\n                    \"existing_cluster_id\": \"0923-164208-meows279\",\n                    \"spark_jar_task\": {\n                        \"main_class_name\": \"com.databricks.Sessionize\",\n                        \"parameters\": [\"--data\", \"dbfs:/path/to/data.json\"],\n                    },\n                    \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}],\n                    \"timeout_seconds\": 86400,\n                },\n                {\n                    \"task_key\": \"Orders_Ingest\",\n                    \"description\": \"Ingests order data\",\n                    \"depends_on\": [],\n                    \"existing_cluster_id\": \"0923-164208-meows279\",\n                    \"spark_jar_task\": {\n                        \"main_class_name\": \"com.databricks.OrdersIngest\",\n                        \"parameters\": [\"--data\", \"dbfs:/path/to/order-data.json\"],\n                    },\n                    \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}],\n                    \"timeout_seconds\": 86400,\n                },\n                {\n                    \"task_key\": \"Match\",\n                    \"description\": \"Matches orders with user sessions\",\n                    \"depends_on\": [\n                        {\"task_key\": \"Orders_Ingest\"},\n                        {\"task_key\": \"Sessionize\"},\n                    ],\n                    \"new_cluster\": {\n                        \"spark_version\": \"7.3.x-scala2.12\",\n                        \"node_type_id\": \"i3.xlarge\",\n                        \"spark_conf\": {\"spark.speculation\": True},\n                        \"aws_attributes\": {\n                            \"availability\": \"SPOT\",\n                            \"zone_id\": \"us-west-2a\",\n                        },\n                        \"autoscale\": {\"min_workers\": 2, \"max_workers\": 16},\n                    },\n                    \"notebook_task\": {\n                        \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n                        \"source\": \"WORKSPACE\",\n                        \"base_parameters\": {\"name\": \"John Doe\", \"age\": \"35\"},\n                    },\n                    \"timeout_seconds\": 86400,\n                },\n            ]\n            ```\n        run_name:\n            An optional name for the run. 
The default value is `Untitled`, e.g. `A\n            multitask job run`.\n        webhook_notifications:\n            A collection of system notification IDs to notify when runs of this job\n            begin or complete. The default behavior is to not send any\n            system notifications. Key-values:\n            - on_start:\n                An optional list of notification IDs to call when the run\n                starts. A maximum of 3 destinations can be specified for the\n                `on_start` property, e.g.\n                ```\n                [\n                    {\"id\": \"03dd86e4-57ef-4818-a950-78e41a1d71ab\"},\n                    {\"id\": \"0481e838-0a59-4eff-9541-a4ca6f149574\"},\n                ]\n                ```\n            - on_success:\n                An optional list of notification IDs to call when the run\n                completes successfully. A maximum of 3 destinations can be\n                specified for the `on_success` property, e.g.\n                ```\n                [{\"id\": \"03dd86e4-57ef-4818-a950-78e41a1d71ab\"}]\n                ```\n            - on_failure:\n                An optional list of notification IDs to call when the run\n                fails. A maximum of 3 destinations can be specified for the\n                `on_failure` property, e.g.\n                ```\n                [{\"id\": \"0481e838-0a59-4eff-9541-a4ca6f149574\"}]\n                ```\n        git_source:\n            This functionality is in Public Preview.  An optional specification for\n            a remote repository containing the notebooks used by this\n            job's notebook tasks, e.g.\n            ```\n            {\n                \"git_url\": \"https://github.com/databricks/databricks-cli\",\n                \"git_branch\": \"main\",\n                \"git_provider\": \"gitHub\",\n            }\n            ``` Key-values:\n            - git_url:\n                URL of the repository to be cloned by this job. The maximum\n                length is 300 characters, e.g.\n                `https://github.com/databricks/databricks-cli`.\n            - git_provider:\n                Unique identifier of the service used to host the Git\n                repository. The value is case insensitive, e.g. `github`.\n            - git_branch:\n                Name of the branch to be checked out and used by this job.\n                This field cannot be specified in conjunction with git_tag\n                or git_commit. The maximum length is 255 characters, e.g.\n                `main`.\n            - git_tag:\n                Name of the tag to be checked out and used by this job. This\n                field cannot be specified in conjunction with git_branch or\n                git_commit. The maximum length is 255 characters, e.g.\n                `release-1.0.0`.\n            - git_commit:\n                Commit to be checked out and used by this job. This field\n                cannot be specified in conjunction with git_branch or\n                git_tag. The maximum length is 64 characters, e.g.\n                `e0056d01`.\n            - git_snapshot:\n                Read-only state of the remote repository at the time the job was run.\n                            This field is only included on job runs.\n        timeout_seconds:\n            An optional timeout applied to each run of this job. The default\n            behavior is to have no timeout, e.g. 
`86400`.\n        idempotency_token:\n            An optional token that can be used to guarantee the idempotency of job\n            run requests. If a run with the provided token already\n            exists, the request does not create a new run but returns\n            the ID of the existing run instead. If a run with the\n            provided token is deleted, an error is returned.  If you\n            specify the idempotency token, upon failure you can retry\n            until the request succeeds. Databricks guarantees that\n            exactly one run is launched with that idempotency token.\n            This token must have at most 64 characters.  For more\n            information, see [How to ensure idempotency for\n            jobs](https://kb.databricks.com/jobs/jobs-idempotency.html),\n            e.g. `8f018174-4792-40d5-bcbc-3e6a527352c8`.\n        access_control_list:\n            List of permissions to set on the job.\n\n    Returns:\n        Upon success, a dict of the response. </br>- `run_id: int`</br>\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/runs/submit`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Run was created and started successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/runs/submit\"  # noqa\n\n    responses = {\n        200: \"Run was created and started successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    json_payload = {\n        \"tasks\": tasks,\n        \"run_name\": run_name,\n        \"webhook_notifications\": webhook_notifications,\n        \"git_source\": git_source,\n        \"timeout_seconds\": timeout_seconds,\n        \"idempotency_token\": idempotency_token,\n        \"access_control_list\": access_control_list,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.POST,\n        json=json_payload,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
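A minimal sketch of submitting a one-time notebook run; the credentials block name, cluster ID, and notebook path are placeholders, and the task settings are passed as a plain dict mirroring the example payload above (a `models.RunSubmitTaskSettings` object would also work):

```python
from prefect import flow
from prefect_databricks import DatabricksCredentials
from prefect_databricks.jobs import jobs_runs_submit


@flow
async def submit_one_time_run():
    # "my-block" is a placeholder for your saved DatabricksCredentials block name
    databricks_credentials = await DatabricksCredentials.load("my-block")
    contents = await jobs_runs_submit(
        databricks_credentials=databricks_credentials,
        run_name="ad-hoc notebook run",
        tasks=[
            {
                "task_key": "example_task",
                "existing_cluster_id": "0923-164208-meows279",  # placeholder cluster ID
                "notebook_task": {
                    # placeholder notebook path
                    "notebook_path": "/Users/user.name@databricks.com/example"
                },
            }
        ],
    )
    # per the Returns section above, a successful response includes "run_id"
    return contents.get("run_id")
```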
    "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_update","title":"jobs_update async","text":"

    Add, update, or remove specific settings of an existing job. Use the Reset endpoint to overwrite all job settings.

    Parameters:

    Name Type Description Default databricks_credentials DatabricksCredentials

    Credentials to use for authentication with Databricks.

    required job_id Optional[int]

    The canonical identifier of the job to update. This field is required, e.g. 11223344.

    None new_settings JobSettings

    The new settings for the job. Any top-level fields specified in new_settings are completely replaced. Partially updating nested fields is not supported. Changes to the field JobSettings.timeout_seconds are applied to active runs. Changes to other fields are applied to future runs only. Key-values: - name: An optional name for the job, e.g. A multitask job. - tags: A map of tags associated with the job. These are forwarded to the cluster as cluster tags for jobs clusters, and are subject to the same limitations as cluster tags. A maximum of 25 tags can be added to the job, e.g.

    {\"cost-center\": \"engineering\", \"team\": \"jobs\"}\n
    - tasks: A list of task specifications to be executed by this job, e.g.
    [\n    {\n        \"task_key\": \"Sessionize\",\n        \"description\": \"Extracts session data from events\",\n        \"depends_on\": [],\n        \"existing_cluster_id\": \"0923-164208-meows279\",\n        \"spark_jar_task\": {\n            \"main_class_name\": \"com.databricks.Sessionize\",\n            \"parameters\": [\n                \"--data\",\n                \"dbfs:/path/to/data.json\",\n            ],\n        },\n        \"libraries\": [\n            {\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}\n        ],\n        \"timeout_seconds\": 86400,\n        \"max_retries\": 3,\n        \"min_retry_interval_millis\": 2000,\n        \"retry_on_timeout\": False,\n    },\n    {\n        \"task_key\": \"Orders_Ingest\",\n        \"description\": \"Ingests order data\",\n        \"depends_on\": [],\n        \"job_cluster_key\": \"auto_scaling_cluster\",\n        \"spark_jar_task\": {\n            \"main_class_name\": \"com.databricks.OrdersIngest\",\n            \"parameters\": [\n                \"--data\",\n                \"dbfs:/path/to/order-data.json\",\n            ],\n        },\n        \"libraries\": [\n            {\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}\n        ],\n        \"timeout_seconds\": 86400,\n        \"max_retries\": 3,\n        \"min_retry_interval_millis\": 2000,\n        \"retry_on_timeout\": False,\n    },\n    {\n        \"task_key\": \"Match\",\n        \"description\": \"Matches orders with user sessions\",\n        \"depends_on\": [\n            {\"task_key\": \"Orders_Ingest\"},\n            {\"task_key\": \"Sessionize\"},\n        ],\n        \"new_cluster\": {\n            \"spark_version\": \"7.3.x-scala2.12\",\n            \"node_type_id\": \"i3.xlarge\",\n            \"spark_conf\": {\"spark.speculation\": True},\n            \"aws_attributes\": {\n                \"availability\": \"SPOT\",\n                \"zone_id\": \"us-west-2a\",\n            },\n            \"autoscale\": {\n                \"min_workers\": 2,\n                \"max_workers\": 16,\n            },\n        },\n        \"notebook_task\": {\n            \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n            \"source\": \"WORKSPACE\",\n            \"base_parameters\": {\n                \"name\": \"John Doe\",\n                \"age\": \"35\",\n            },\n        },\n        \"timeout_seconds\": 86400,\n        \"max_retries\": 3,\n        \"min_retry_interval_millis\": 2000,\n        \"retry_on_timeout\": False,\n    },\n]\n
    - job_clusters: A list of job cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster. You must declare dependent libraries in task settings, e.g.
    [\n    {\n        \"job_cluster_key\": \"auto_scaling_cluster\",\n        \"new_cluster\": {\n            \"spark_version\": \"7.3.x-scala2.12\",\n            \"node_type_id\": \"i3.xlarge\",\n            \"spark_conf\": {\"spark.speculation\": True},\n            \"aws_attributes\": {\n                \"availability\": \"SPOT\",\n                \"zone_id\": \"us-west-2a\",\n            },\n            \"autoscale\": {\n                \"min_workers\": 2,\n                \"max_workers\": 16,\n            },\n        },\n    }\n]\n
    - email_notifications: An optional set of email addresses that is notified when runs of this job begin or complete as well as when this job is deleted. The default behavior is to not send any emails. - webhook_notifications: A collection of system notification IDs to notify when runs of this job begin or complete. The default behavior is to not send any system notifications. - timeout_seconds: An optional timeout applied to each run of this job. The default behavior is to have no timeout, e.g. 86400. - schedule: An optional periodic schedule for this job. The default behavior is that the job only runs when triggered by clicking \u201cRun Now\u201d in the Jobs UI or sending an API request to runNow. - max_concurrent_runs: An optional maximum allowed number of concurrent runs of the job. Set this value if you want to be able to execute multiple runs of the same job concurrently. This is useful for example if you trigger your job on a frequent schedule and want to allow consecutive runs to overlap with each other, or if you want to trigger multiple runs which differ by their input parameters. This setting affects only new runs. For example, suppose the job\u2019s concurrency is 4 and there are 4 concurrent active runs. Then setting the concurrency to 3 won\u2019t kill any of the active runs. However, from then on, new runs are skipped unless there are fewer than 3 active runs. This value cannot exceed 1000. Setting this value to 0 causes all new runs to be skipped. The default behavior is to allow only 1 concurrent run, e.g. 10. - git_source: This functionality is in Public Preview. An optional specification for a remote repository containing the notebooks used by this job's notebook tasks, e.g.
    {\n    \"git_url\": \"https://github.com/databricks/databricks-cli\",\n    \"git_branch\": \"main\",\n    \"git_provider\": \"gitHub\",\n}\n
    - format: Indicates the format of the job. This field is ignored in Create/Update/Reset calls. When using the Jobs API 2.1, this value is always set to 'MULTI_TASK', e.g. MULTI_TASK.

    None fields_to_remove Optional[List[str]]

    Remove top-level fields in the job settings. Removing nested fields is not supported. This field is optional, e.g.

    [\"libraries\", \"schedule\"]\n

    None

    Returns:

    Type Description Dict[str, Any]

    Upon success, an empty dict.

    API Endpoint:

    /2.1/jobs/update

    API Responses: Response Description 200 Job was updated successfully. 400 The request was malformed. See JSON response for error details. 401 The request was unauthorized. 500 The request was not handled correctly due to a server error. Source code in prefect_databricks/jobs.py
    @task\nasync def jobs_update(\n    databricks_credentials: \"DatabricksCredentials\",\n    job_id: Optional[int] = None,\n    new_settings: \"models.JobSettings\" = None,\n    fields_to_remove: Optional[List[str]] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Add, update, or remove specific settings of an existing job. Use the Reset\n    endpoint to overwrite all job settings.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        job_id:\n            The canonical identifier of the job to update. This field is required,\n            e.g. `11223344`.\n        new_settings:\n            The new settings for the job. Any top-level fields specified in\n            `new_settings` are completely replaced. Partially updating\n            nested fields is not supported.  Changes to the field\n            `JobSettings.timeout_seconds` are applied to active runs.\n            Changes to other fields are applied to future runs only. Key-values:\n            - name:\n                An optional name for the job, e.g. `A multitask job`.\n            - tags:\n                A map of tags associated with the job. These are forwarded\n                to the cluster as cluster tags for jobs clusters, and are\n                subject to the same limitations as cluster tags. A maximum\n                of 25 tags can be added to the job, e.g.\n                ```\n                {\"cost-center\": \"engineering\", \"team\": \"jobs\"}\n                ```\n            - tasks:\n                A list of task specifications to be executed by this job, e.g.\n                ```\n                [\n                    {\n                        \"task_key\": \"Sessionize\",\n                        \"description\": \"Extracts session data from events\",\n                        \"depends_on\": [],\n                        \"existing_cluster_id\": \"0923-164208-meows279\",\n                        \"spark_jar_task\": {\n                            \"main_class_name\": \"com.databricks.Sessionize\",\n                            \"parameters\": [\n                                \"--data\",\n                                \"dbfs:/path/to/data.json\",\n                            ],\n                        },\n                        \"libraries\": [\n                            {\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}\n                        ],\n                        \"timeout_seconds\": 86400,\n                        \"max_retries\": 3,\n                        \"min_retry_interval_millis\": 2000,\n                        \"retry_on_timeout\": False,\n                    },\n                    {\n                        \"task_key\": \"Orders_Ingest\",\n                        \"description\": \"Ingests order data\",\n                        \"depends_on\": [],\n                        \"job_cluster_key\": \"auto_scaling_cluster\",\n                        \"spark_jar_task\": {\n                            \"main_class_name\": \"com.databricks.OrdersIngest\",\n                            \"parameters\": [\n                                \"--data\",\n                                \"dbfs:/path/to/order-data.json\",\n                            ],\n                        },\n                        \"libraries\": [\n                            {\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}\n                        ],\n                        \"timeout_seconds\": 86400,\n                        
\"max_retries\": 3,\n                        \"min_retry_interval_millis\": 2000,\n                        \"retry_on_timeout\": False,\n                    },\n                    {\n                        \"task_key\": \"Match\",\n                        \"description\": \"Matches orders with user sessions\",\n                        \"depends_on\": [\n                            {\"task_key\": \"Orders_Ingest\"},\n                            {\"task_key\": \"Sessionize\"},\n                        ],\n                        \"new_cluster\": {\n                            \"spark_version\": \"7.3.x-scala2.12\",\n                            \"node_type_id\": \"i3.xlarge\",\n                            \"spark_conf\": {\"spark.speculation\": True},\n                            \"aws_attributes\": {\n                                \"availability\": \"SPOT\",\n                                \"zone_id\": \"us-west-2a\",\n                            },\n                            \"autoscale\": {\n                                \"min_workers\": 2,\n                                \"max_workers\": 16,\n                            },\n                        },\n                        \"notebook_task\": {\n                            \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n                            \"source\": \"WORKSPACE\",\n                            \"base_parameters\": {\n                                \"name\": \"John Doe\",\n                                \"age\": \"35\",\n                            },\n                        },\n                        \"timeout_seconds\": 86400,\n                        \"max_retries\": 3,\n                        \"min_retry_interval_millis\": 2000,\n                        \"retry_on_timeout\": False,\n                    },\n                ]\n                ```\n            - job_clusters:\n                A list of job cluster specifications that can be shared and\n                reused by tasks of this job. Libraries cannot be declared in\n                a shared job cluster. You must declare dependent libraries\n                in task settings, e.g.\n                ```\n                [\n                    {\n                        \"job_cluster_key\": \"auto_scaling_cluster\",\n                        \"new_cluster\": {\n                            \"spark_version\": \"7.3.x-scala2.12\",\n                            \"node_type_id\": \"i3.xlarge\",\n                            \"spark_conf\": {\"spark.speculation\": True},\n                            \"aws_attributes\": {\n                                \"availability\": \"SPOT\",\n                                \"zone_id\": \"us-west-2a\",\n                            },\n                            \"autoscale\": {\n                                \"min_workers\": 2,\n                                \"max_workers\": 16,\n                            },\n                        },\n                    }\n                ]\n                ```\n            - email_notifications:\n                An optional set of email addresses that is notified when\n                runs of this job begin or complete as well as when this job\n                is deleted. The default behavior is to not send any emails.\n            - webhook_notifications:\n                A collection of system notification IDs to notify when runs\n                of this job begin or complete. 
The default behavior is to\n                not send any system notifications.\n            - timeout_seconds:\n                An optional timeout applied to each run of this job. The\n                default behavior is to have no timeout, e.g. `86400`.\n            - schedule:\n                An optional periodic schedule for this job. The default\n                behavior is that the job only runs when triggered by\n                clicking \u201cRun Now\u201d in the Jobs UI or sending an API request\n                to `runNow`.\n            - max_concurrent_runs:\n                An optional maximum allowed number of concurrent runs of the\n                job.  Set this value if you want to be able to execute\n                multiple runs of the same job concurrently. This is useful\n                for example if you trigger your job on a frequent schedule\n                and want to allow consecutive runs to overlap with each\n                other, or if you want to trigger multiple runs which differ\n                by their input parameters.  This setting affects only new\n                runs. For example, suppose the job\u2019s concurrency is 4 and\n                there are 4 concurrent active runs. Then setting the\n                concurrency to 3 won\u2019t kill any of the active runs. However,\n                from then on, new runs are skipped unless there are fewer\n                than 3 active runs.  This value cannot exceed 1000\\. Setting\n                this value to 0 causes all new runs to be skipped. The\n                default behavior is to allow only 1 concurrent run, e.g.\n                `10`.\n            - git_source:\n                This functionality is in Public Preview.  An optional\n                specification for a remote repository containing the\n                notebooks used by this job's notebook tasks, e.g.\n                ```\n                {\n                    \"git_url\": \"https://github.com/databricks/databricks-cli\",\n                    \"git_branch\": \"main\",\n                    \"git_provider\": \"gitHub\",\n                }\n                ```\n            - format:\n                Used to tell what is the format of the job. This field is\n                ignored in Create/Update/Reset calls. When using the Jobs\n                API 2.1 this value is always set to `'MULTI_TASK'`, e.g.\n                `MULTI_TASK`.\n        fields_to_remove:\n            Remove top-level fields in the job settings. Removing nested fields is\n            not supported. This field is optional, e.g.\n            ```\n            [\"libraries\", \"schedule\"]\n            ```\n\n    Returns:\n        Upon success, an empty dict.\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/update`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Job was updated successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/update\"  # noqa\n\n    responses = {\n        200: \"Job was updated successfully.\",  # noqa\n        400: \"The request was malformed. 
See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    json_payload = {\n        \"job_id\": job_id,\n        \"new_settings\": new_settings,\n        \"fields_to_remove\": fields_to_remove,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.POST,\n        json=json_payload,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
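A minimal sketch of patching a single top-level setting on an existing job; the credentials block name is a placeholder, and `new_settings` is passed as a plain dict here (a `models.JobSettings` object would also work):

```python
from prefect import flow
from prefect_databricks import DatabricksCredentials
from prefect_databricks.jobs import jobs_update


@flow
async def rename_job(job_id: int):
    # "my-block" is a placeholder for your saved DatabricksCredentials block name
    databricks_credentials = await DatabricksCredentials.load("my-block")
    # only the top-level fields present in new_settings are replaced; here just the name
    await jobs_update(
        databricks_credentials=databricks_credentials,
        job_id=job_id,
        new_settings={"name": "A multitask job"},
    )
```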
    "},{"location":"integrations/prefect-databricks/rest/","title":"Rest","text":""},{"location":"integrations/prefect-databricks/rest/#prefect_databricks.rest","title":"prefect_databricks.rest","text":"

    This module contains generic REST tasks.

    "},{"location":"integrations/prefect-databricks/rest/#prefect_databricks.rest.HTTPMethod","title":"HTTPMethod","text":"

    Bases: Enum

    Available HTTP request methods.

    Source code in prefect_databricks/rest.py
    class HTTPMethod(Enum):\n    \"\"\"\n    Available HTTP request methods.\n    \"\"\"\n\n    GET = \"get\"\n    POST = \"post\"\n    PUT = \"put\"\n    DELETE = \"delete\"\n    PATCH = \"patch\"\n
    "},{"location":"integrations/prefect-databricks/rest/#prefect_databricks.rest.execute_endpoint","title":"execute_endpoint async","text":"

    Generic function for executing REST endpoints.

    Parameters:

    Name Type Description Default endpoint str

    The endpoint route.

    required databricks_credentials DatabricksCredentials

    Credentials to use for authentication with Databricks.

    required http_method HTTPMethod

    Either GET, POST, PUT, DELETE, or PATCH.

    GET params Dict[str, Any]

    URL query parameters in the request.

    None json Dict[str, Any]

    JSON serializable object to include in the body of the request.

    None **kwargs Dict[str, Any]

    Additional keyword arguments to pass.

    {}

    Returns:

    Type Description Response

    The httpx.Response from interacting with the endpoint.

    Examples:

    Lists jobs on the Databricks instance.

    from prefect import flow\nfrom prefect_databricks import DatabricksCredentials\nfrom prefect_databricks.rest import execute_endpoint\n@flow\ndef example_execute_endpoint_flow():\n    endpoint = \"/2.1/jobs/list\"\n    databricks_credentials = DatabricksCredentials.load(\"my-block\")\n    params = {\n        \"limit\": 5,\n        \"offset\": None,\n        \"expand_tasks\": True,\n    }\n    response = execute_endpoint(\n        endpoint,\n        databricks_credentials,\n        params=params\n    )\n    return response.json()\n

    Source code in prefect_databricks/rest.py
    @task\nasync def execute_endpoint(\n    endpoint: str,\n    databricks_credentials: \"DatabricksCredentials\",\n    http_method: HTTPMethod = HTTPMethod.GET,\n    params: Dict[str, Any] = None,\n    json: Dict[str, Any] = None,\n    **kwargs: Dict[str, Any],\n) -> httpx.Response:\n    \"\"\"\n    Generic function for executing REST endpoints.\n\n    Args:\n        endpoint: The endpoint route.\n        databricks_credentials: Credentials to use for authentication with Databricks.\n        http_method: Either GET, POST, PUT, DELETE, or PATCH.\n        params: URL query parameters in the request.\n        json: JSON serializable object to include in the body of the request.\n        **kwargs: Additional keyword arguments to pass.\n\n    Returns:\n        The httpx.Response from interacting with the endpoint.\n\n    Examples:\n        Lists jobs on the Databricks instance.\n        ```python\n        from prefect import flow\n        from prefect_databricks import DatabricksCredentials\n        from prefect_databricks.rest import execute_endpoint\n        @flow\n        def example_execute_endpoint_flow():\n            endpoint = \"/2.1/jobs/list\"\n            databricks_credentials = DatabricksCredentials.load(\"my-block\")\n            params = {\n                \"limit\": 5,\n                \"offset\": None,\n                \"expand_tasks\": True,\n            }\n            response = execute_endpoint(\n                endpoint,\n                databricks_credentials,\n                params=params\n            )\n            return response.json()\n        ```\n    \"\"\"\n    if isinstance(http_method, HTTPMethod):\n        http_method = http_method.value\n\n    if params is not None:\n        stripped_params = strip_kwargs(**params)\n    else:\n        stripped_params = None\n\n    if json is not None:\n        kwargs[\"json\"] = strip_kwargs(**json)\n\n    async with databricks_credentials.get_client() as client:\n        response = await getattr(client, http_method)(\n            endpoint, params=stripped_params, **kwargs\n        )\n\n    return response\n
    "},{"location":"integrations/prefect-databricks/rest/#prefect_databricks.rest.serialize_model","title":"serialize_model","text":"

    Recursively serializes pydantic.BaseModel instances into JSON-compatible objects; returns the original object if it is not a BaseModel.

    Parameters:

    Name Type Description Default obj Any

    Input object to serialize.

    required

    Returns:

    Type Description Any

    Serialized version of object.

    Source code in prefect_databricks/rest.py
    def serialize_model(obj: Any) -> Any:\n    \"\"\"\n    Recursively serializes `pydantic.BaseModel` into JSON;\n    returns original obj if not a `BaseModel`.\n\n    Args:\n        obj: Input object to serialize.\n\n    Returns:\n        Serialized version of object.\n    \"\"\"\n    if isinstance(obj, list):\n        return [serialize_model(o) for o in obj]\n    elif isinstance(obj, Dict):\n        return {k: serialize_model(v) for k, v in obj.items()}\n\n    if isinstance(obj, MODELS):\n        return model_dump(obj, mode=\"json\")\n    elif isinstance(obj, Enum):\n        return obj.value\n    return obj\n
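A small sketch of what the serialization does for nested containers and enums, based on the source above (expected output shown as a comment):

```python
from prefect_databricks.rest import HTTPMethod, serialize_model

serialize_model([HTTPMethod.GET, {"method": HTTPMethod.POST}])
# -> ["get", {"method": "post"}]  (enum members are replaced by their values)
```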
    "},{"location":"integrations/prefect-databricks/rest/#prefect_databricks.rest.strip_kwargs","title":"strip_kwargs","text":"

    Recursively drops keyword arguments whose value is None, and serializes any pydantic.BaseModel types.

    Parameters:

    Name Type Description Default **kwargs Dict

    Input keyword arguments.

    {}

    Returns:

    Type Description Dict

    Stripped version of kwargs.

    Source code in prefect_databricks/rest.py
    def strip_kwargs(**kwargs: Dict) -> Dict:\n    \"\"\"\n    Recursively drops keyword arguments if value is None,\n    and serializes any `pydantic.BaseModel` types.\n\n    Args:\n        **kwargs: Input keyword arguments.\n\n    Returns:\n        Stripped version of kwargs.\n    \"\"\"\n    stripped_dict = {}\n    for k, v in kwargs.items():\n        v = serialize_model(v)\n        if isinstance(v, dict):\n            v = strip_kwargs(**v)\n        if v is not None:\n            stripped_dict[k] = v\n    return stripped_dict or {}\n
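A small sketch showing that None-valued keyword arguments are dropped, including inside nested dicts (expected output shown as a comment):

```python
from prefect_databricks.rest import strip_kwargs

strip_kwargs(limit=5, offset=None, settings={"name": "job", "tags": None})
# -> {"limit": 5, "settings": {"name": "job"}}
```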
    "},{"location":"integrations/prefect-databricks/models/jobs/","title":"Jobs","text":""},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs","title":"prefect_databricks.models.jobs","text":""},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.AccessControlList","title":"AccessControlList","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class AccessControlList(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    access_control_list: Optional[List[AccessControlRequest]] = Field(\n        None, description=\"List of permissions to set on the job.\"\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.AccessControlRequest","title":"AccessControlRequest","text":"

    Bases: AccessControlRequestForUser, AccessControlRequestForGroup

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class AccessControlRequest(AccessControlRequestForUser, AccessControlRequestForGroup):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.AccessControlRequestForGroup","title":"AccessControlRequestForGroup","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class AccessControlRequestForGroup(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    group_name: Optional[GroupName] = None\n    permission_level: Optional[PermissionLevelForGroup] = None\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.AccessControlRequestForServicePrincipal","title":"AccessControlRequestForServicePrincipal","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class AccessControlRequestForServicePrincipal(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    permission_level: Optional[PermissionLevel] = None\n    service_principal_name: Optional[ServicePrincipalName] = None\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.AccessControlRequestForUser","title":"AccessControlRequestForUser","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class AccessControlRequestForUser(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    permission_level: Optional[PermissionLevel] = None\n    user_name: Optional[UserName] = None\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.AutoScale","title":"AutoScale","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class AutoScale(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    max_workers: Optional[int] = Field(\n        None,\n        description=(\n            \"The maximum number of workers to which the cluster can scale up when\"\n            \" overloaded. max_workers must be strictly greater than min_workers.\"\n        ),\n    )\n    min_workers: Optional[int] = Field(\n        None,\n        description=(\n            \"The minimum number of workers to which the cluster can scale down when\"\n            \" underutilized. It is also the initial number of workers the cluster has\"\n            \" after creation.\"\n        ),\n    )\n
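An illustrative construction of the model (field values are placeholders):

```python
# Sketch: an autoscaling range where max_workers must be strictly greater
# than min_workers, as described above.
from prefect_databricks.models.jobs import AutoScale

autoscale = AutoScale(min_workers=2, max_workers=8)
```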
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.AwsAttributes","title":"AwsAttributes","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class AwsAttributes(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    availability: Optional[Literal[\"SPOT\", \"ON_DEMAND\", \"SPOT_WITH_FALLBACK\"]] = Field(\n        None,\n        description=(\n            \"Availability type used for all subsequent nodes past the `first_on_demand`\"\n            \" ones. **Note:** If `first_on_demand` is zero, this availability type is\"\n            \" used for the entire cluster.\\n\\n`SPOT`: use spot instances.\\n`ON_DEMAND`:\"\n            \" use on-demand instances.\\n`SPOT_WITH_FALLBACK`: preferably use spot\"\n            \" instances, but fall back to on-demand instances if spot instances cannot\"\n            \" be acquired (for example, if AWS spot prices are too high).\"\n        ),\n    )\n    ebs_volume_count: Optional[int] = Field(\n        None,\n        description=(\n            \"The number of volumes launched for each instance. You can choose up to 10\"\n            \" volumes. This feature is only enabled for supported node types. Legacy\"\n            \" node types cannot specify custom EBS volumes. For node types with no\"\n            \" instance store, at least one EBS volume needs to be specified; otherwise,\"\n            \" cluster creation fails.\\n\\nThese EBS volumes are mounted at `/ebs0`,\"\n            \" `/ebs1`, and etc. Instance store volumes are mounted at `/local_disk0`,\"\n            \" `/local_disk1`, and etc.\\n\\nIf EBS volumes are attached, Databricks\"\n            \" configures Spark to use only the EBS volumes for scratch storage because\"\n            \" heterogeneously sized scratch devices can lead to inefficient disk\"\n            \" utilization. If no EBS volumes are attached, Databricks configures Spark\"\n            \" to use instance store volumes.\\n\\nIf EBS volumes are specified, then the\"\n            \" Spark configuration `spark.local.dir` is overridden.\"\n        ),\n    )\n    ebs_volume_iops: Optional[int] = Field(\n        None,\n        description=(\n            \"The number of IOPS per EBS gp3 volume.\\n\\nThis value must be between 3000\"\n            \" and 16000.\\n\\nThe value of IOPS and throughput is calculated based on AWS\"\n            \" documentation to match the maximum performance of a gp2 volume with the\"\n            \" same volume size.\\n\\nFor more information, see the [EBS volume limit\"\n            \" calculator](https://github.com/awslabs/aws-support-tools/tree/master/EBS/VolumeLimitCalculator).\"\n        ),\n    )\n    ebs_volume_size: Optional[int] = Field(\n        None,\n        description=(\n            \"The size of each EBS volume (in GiB) launched for each instance. For\"\n            \" general purpose SSD, this value must be within the range 100 - 4096\\\\.\"\n            \" For throughput optimized HDD, this value must be within the range 500 -\"\n            \" 4096\\\\. 
Custom EBS volumes cannot be specified for the legacy node types\"\n            \" (_memory-optimized_ and _compute-optimized_).\"\n        ),\n    )\n    ebs_volume_throughput: Optional[int] = Field(\n        None,\n        description=(\n            \"The throughput per EBS gp3 volume, in MiB per second.\\n\\nThis value must\"\n            \" be between 125 and 1000.\"\n        ),\n    )\n    ebs_volume_type: Optional[\n        Literal[\"GENERAL_PURPOSE_SSD\", \"THROUGHPUT_OPTIMIZED_HDD\"]\n    ] = Field(\n        None,\n        description=(\n            \"The type of EBS volume that is launched with this\"\n            \" cluster.\\n\\n`GENERAL_PURPOSE_SSD`: provision extra storage using AWS gp2\"\n            \" EBS volumes.\\n`THROUGHPUT_OPTIMIZED_HDD`: provision extra storage using\"\n            \" AWS st1 volumes.\"\n        ),\n    )\n    first_on_demand: Optional[int] = Field(\n        None,\n        description=(\n            \"The first first_on_demand nodes of the cluster are placed on on-demand\"\n            \" instances. If this value is greater than 0, the cluster driver node is\"\n            \" placed on an on-demand instance. If this value is greater than or equal\"\n            \" to the current cluster size, all nodes are placed on on-demand instances.\"\n            \" If this value is less than the current cluster size, first_on_demand\"\n            \" nodes are placed on on-demand instances and the remainder are placed on\"\n            \" `availability` instances. This value does not affect cluster size and\"\n            \" cannot be mutated over the lifetime of a cluster.\"\n        ),\n    )\n    instance_profile_arn: Optional[str] = Field(\n        None,\n        description=(\n            \"Nodes for this cluster are only be placed on AWS instances with this\"\n            \" instance profile. If omitted, nodes are placed on instances without an\"\n            \" instance profile. The instance profile must have previously been added to\"\n            \" the Databricks environment by an account administrator.\\n\\nThis feature\"\n            \" may only be available to certain customer plans.\"\n        ),\n    )\n    spot_bid_price_percent: Optional[int] = Field(\n        None,\n        description=(\n            \"The max price for AWS spot instances, as a percentage of the corresponding\"\n            \" instance type\u2019s on-demand price. For example, if this field is set to 50,\"\n            \" and the cluster needs a new `i3.xlarge` spot instance, then the max price\"\n            \" is half of the price of on-demand `i3.xlarge` instances. Similarly, if\"\n            \" this field is set to 200, the max price is twice the price of on-demand\"\n            \" `i3.xlarge` instances. If not specified, the default value is 100\\\\. When\"\n            \" spot instances are requested for this cluster, only spot instances whose\"\n            \" max price percentage matches this field is considered. For safety, we\"\n            \" enforce this field to be no more than 10000.\"\n        ),\n    )\n    zone_id: Optional[str] = Field(\n        None,\n        description=(\n            \"Identifier for the availability zone/datacenter in which the cluster\"\n            \" resides. You have three options:\\n\\n**Specify an availability zone as a\"\n            \" string**, for example: \u201cus-west-2a\u201d. The provided availability zone must\"\n            \" be in the same region as the Databricks deployment. 
For example,\"\n            \" \u201cus-west-2a\u201d is not a valid zone ID if the Databricks deployment resides\"\n            \" in the \u201cus-east-1\u201d region.\\n\\n**Enable automatic availability zone\"\n            \" selection (\u201cAuto-AZ\u201d)**, by setting the value \u201cauto\u201d. Databricks selects\"\n            \" the AZ based on available IPs in the workspace subnets and retries in\"\n            \" other availability zones if AWS returns insufficient capacity\"\n            \" errors.\\n\\n**Do not specify a value**. If not specified, a default zone\"\n            \" is used.\\n\\nThe list of available zones as well as the default value can\"\n            \" be found by using the [List\"\n            \" zones](https://docs.databricks.com/dev-tools/api/latest/clusters.html#list-zones)\"\n            \" API.\"\n        ),\n    )\n
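An illustrative `AwsAttributes` for a mostly-spot cluster (values are placeholders chosen to match the field descriptions above):

```python
from prefect_databricks.models.jobs import AwsAttributes

aws_attributes = AwsAttributes(
    availability="SPOT_WITH_FALLBACK",      # fall back to on-demand if spot cannot be acquired
    first_on_demand=1,                      # keep the driver on an on-demand instance
    zone_id="auto",                         # let Databricks pick the availability zone
    ebs_volume_type="GENERAL_PURPOSE_SSD",
    ebs_volume_count=1,
    ebs_volume_size=100,                    # GiB, within the general purpose SSD range above
)
```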
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.CanManage","title":"CanManage","text":"

    Bases: str, Enum

    Permission to manage the job.

    Source code in prefect_databricks/models/jobs.py
    class CanManage(str, Enum):\n    \"\"\"\n    Permission to manage the job.\n    \"\"\"\n\n    canmanage = \"CAN_MANAGE\"\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.CanManageRun","title":"CanManageRun","text":"

    Bases: str, Enum

    Permission to run and/or manage runs for the job.

    Source code in prefect_databricks/models/jobs.py
    class CanManageRun(str, Enum):\n    \"\"\"\n    Permission to run and/or manage runs for the job.\n    \"\"\"\n\n    canmanagerun = \"CAN_MANAGE_RUN\"\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.CanView","title":"CanView","text":"

    Bases: str, Enum

    Permission to view the settings of the job.

    Source code in prefect_databricks/models/jobs.py
    class CanView(str, Enum):\n    \"\"\"\n    Permission to view the settings of the job.\n    \"\"\"\n\n    canview = \"CAN_VIEW\"\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterAttributes","title":"ClusterAttributes","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class ClusterAttributes(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    autotermination_minutes: Optional[int] = Field(\n        None,\n        description=(\n            \"Automatically terminates the cluster after it is inactive for this time in\"\n            \" minutes. If not set, this cluster is not be automatically terminated. If\"\n            \" specified, the threshold must be between 10 and 10000 minutes. You can\"\n            \" also set this value to 0 to explicitly disable automatic termination.\"\n        ),\n    )\n    aws_attributes: Optional[AwsAttributes] = Field(\n        None,\n        description=(\n            \"Attributes related to clusters running on Amazon Web Services. If not\"\n            \" specified at cluster creation, a set of default values are used.\"\n        ),\n    )\n    cluster_log_conf: Optional[ClusterLogConf] = Field(\n        None,\n        description=(\n            \"The configuration for delivering Spark logs to a long-term storage\"\n            \" destination. Only one destination can be specified for one cluster. If\"\n            \" the conf is given, the logs is delivered to the destination every `5\"\n            \" mins`. The destination of driver logs is\"\n            \" `<destination>/<cluster-ID>/driver`, while the destination of executor\"\n            \" logs is `<destination>/<cluster-ID>/executor`.\"\n        ),\n    )\n    cluster_name: Optional[str] = Field(\n        None,\n        description=(\n            \"Cluster name requested by the user. This doesn\u2019t have to be unique. If not\"\n            \" specified at creation, the cluster name is an empty string.\"\n        ),\n    )\n    cluster_source: Optional[ClusterSource] = Field(\n        None,\n        description=(\n            \"Determines whether the cluster was created by a user through the UI,\"\n            \" created by the Databricks Jobs scheduler, or through an API request.\"\n        ),\n    )\n    custom_tags: Optional[ClusterTag] = Field(\n        None,\n        description=(\n            \"An object containing a set of tags for cluster resources. Databricks tags\"\n            \" all cluster resources (such as AWS instances and EBS volumes) with these\"\n            \" tags in addition to default_tags.\\n\\n**Note**:\\n\\n* Tags are not\"\n            \" supported on legacy node types such as compute-optimized and\"\n            \" memory-optimized\\n* Databricks allows at most 45 custom tags\"\n        ),\n    )\n    docker_image: Optional[DockerImage] = Field(\n        None,\n        description=(\n            \"Docker image for a [custom\"\n            \" container](https://docs.databricks.com/clusters/custom-containers.html).\"\n        ),\n    )\n    driver_node_type_id: Optional[str] = Field(\n        None,\n        description=(\n            \"The node type of the Spark driver. This field is optional; if unset, the\"\n            \" driver node type is set as the same value as `node_type_id` defined\"\n            \" above.\"\n        ),\n    )\n    enable_elastic_disk: Optional[bool] = Field(\n        None,\n        description=(\n            \"Autoscaling Local Storage: when enabled, this cluster dynamically acquires\"\n            \" additional disk space when its Spark workers are running low on disk\"\n            \" space. This feature requires specific AWS permissions to function\"\n            \" correctly. 
Refer to [Autoscaling local\"\n            \" storage](https://docs.databricks.com/clusters/configure.html#autoscaling-local-storage)\"\n            \" for details.\"\n        ),\n    )\n    enable_local_disk_encryption: Optional[bool] = Field(\n        None,\n        description=(\n            \"Determines whether encryption of the disks attached to the cluster locally\"\n            \" is enabled.\"\n        ),\n    )\n    init_scripts: Optional[List[InitScriptInfo]] = Field(\n        None,\n        description=(\n            \"The configuration for storing init scripts. Any number of destinations can\"\n            \" be specified. The scripts are executed sequentially in the order\"\n            \" provided. If `cluster_log_conf` is specified, init script logs are sent\"\n            \" to `<destination>/<cluster-ID>/init_scripts`.\"\n        ),\n    )\n    instance_pool_id: Optional[str] = Field(\n        None,\n        description=(\n            \"The optional ID of the instance pool to which the cluster belongs. Refer\"\n            \" to [Pools](https://docs.databricks.com/clusters/instance-pools/index.html)\"\n            \" for details.\"\n        ),\n    )\n    node_type_id: Optional[str] = Field(\n        None,\n        description=(\n            \"This field encodes, through a single value, the resources available to\"\n            \" each of the Spark nodes in this cluster. For example, the Spark nodes can\"\n            \" be provisioned and optimized for memory or compute intensive workloads A\"\n            \" list of available node types can be retrieved by using the [List node\"\n            \" types](https://docs.databricks.com/dev-tools/api/latest/clusters.html#list-node-types)\"\n            \" API call.\"\n        ),\n    )\n    policy_id: Optional[str] = Field(\n        None,\n        description=(\n            \"A [cluster\"\n            \" policy](https://docs.databricks.com/dev-tools/api/latest/policies.html) ID.\"\n        ),\n    )\n    spark_conf: Optional[SparkConfPair] = Field(\n        None,\n        description=(\n            \"An object containing a set of optional, user-specified Spark configuration\"\n            \" key-value pairs. You can also pass in a string of extra JVM options to\"\n            \" the driver and the executors via `spark.driver.extraJavaOptions` and\"\n            \" `spark.executor.extraJavaOptions` respectively.\\n\\nExample Spark confs:\"\n            ' `{\"spark.speculation\": true, \"spark.streaming.ui.retainedBatches\": 5}` or'\n            ' `{\"spark.driver.extraJavaOptions\": \"-verbose:gc -XX:+PrintGCDetails\"}`'\n        ),\n    )\n    spark_env_vars: Optional[SparkEnvPair] = Field(\n        None,\n        description=(\n            \"An object containing a set of optional, user-specified environment\"\n            \" variable key-value pairs. Key-value pairs of the form (X,Y) are exported\"\n            \" as is (that is, `export X='Y'`) while launching the driver and\"\n            \" workers.\\n\\nIn order to specify an additional set of\"\n            \" `SPARK_DAEMON_JAVA_OPTS`, we recommend appending them to\"\n            \" `$SPARK_DAEMON_JAVA_OPTS` as shown in the following example. 
This ensures\"\n            \" that all default databricks managed environmental variables are included\"\n            ' as well.\\n\\nExample Spark environment variables: `{\"SPARK_WORKER_MEMORY\":'\n            ' \"28000m\", \"SPARK_LOCAL_DIRS\": \"/local_disk0\"}` or'\n            ' `{\"SPARK_DAEMON_JAVA_OPTS\": \"$SPARK_DAEMON_JAVA_OPTS'\n            ' -Dspark.shuffle.service.enabled=true\"}`'\n        ),\n    )\n    spark_version: Optional[str] = Field(\n        None,\n        description=(\n            \"The runtime version of the cluster, for example \u201c5.0.x-scala2.11\u201d. You can\"\n            \" retrieve a list of available runtime versions by using the [Runtime\"\n            \" versions](https://docs.databricks.com/dev-tools/api/latest/clusters.html#runtime-versions)\"\n            \" API call.\"\n        ),\n    )\n    ssh_public_keys: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"SSH public key contents that is added to each Spark node in this cluster.\"\n            \" The corresponding private keys can be used to login with the user name\"\n            \" `ubuntu` on port `2200`. Up to 10 keys can be specified.\"\n        ),\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterCloudProviderNodeInfo","title":"ClusterCloudProviderNodeInfo","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class ClusterCloudProviderNodeInfo(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    available_core_quota: Optional[int] = Field(\n        None, description=\"Available CPU core quota.\"\n    )\n    status: Optional[ClusterCloudProviderNodeStatus] = Field(\n        None, description=\"Status as reported by the cloud provider.\"\n    )\n    total_core_quota: Optional[int] = Field(None, description=\"Total CPU core quota.\")\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterCloudProviderNodeStatus","title":"ClusterCloudProviderNodeStatus","text":"

    Bases: str, Enum

    * NotEnabledOnSubscription: Node type not available for subscription.
    * NotAvailableInRegion: Node type not available in region.
    Source code in prefect_databricks/models/jobs.py
    class ClusterCloudProviderNodeStatus(str, Enum):\n    \"\"\"\n        * NotEnabledOnSubscription: Node type not available for subscription.\n    * NotAvailableInRegion: Node type not available in region.\n\n    \"\"\"\n\n    not_enabled_on_subscription = \"NotEnabledOnSubscription\"\n    not_available_in_region = \"NotAvailableInRegion\"\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterEvent","title":"ClusterEvent","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class ClusterEvent(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    cluster_id: str = Field(\n        ..., description=\"Canonical identifier for the cluster. This field is required.\"\n    )\n    details: EventDetails = Field(\n        ..., description=\"The event details. This field is required.\"\n    )\n    timestamp: Optional[int] = Field(\n        None,\n        description=(\n            \"The timestamp when the event occurred, stored as the number of\"\n            \" milliseconds since the unix epoch. Assigned by the Timeline service.\"\n        ),\n    )\n    type: ClusterEventType = Field(\n        ..., description=\"The event type. This field is required.\"\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterEventType","title":"ClusterEventType","text":"

    Bases: str, Enum

    * `CREATING`: Indicates that the cluster is being created.
    * `DID_NOT_EXPAND_DISK`: Indicates that a disk is low on space, but adding disks would put it over the max capacity.
    * `EXPANDED_DISK`: Indicates that a disk was low on space and the disks were expanded.
    * `FAILED_TO_EXPAND_DISK`: Indicates that a disk was low on space and disk space could not be expanded.
    * `INIT_SCRIPTS_STARTING`: Indicates that the cluster scoped init script has started.
    * `INIT_SCRIPTS_FINISHED`: Indicates that the cluster scoped init script has finished.
    * `STARTING`: Indicates that the cluster is being started.
    * `RESTARTING`: Indicates that the cluster is being started.
    * `TERMINATING`: Indicates that the cluster is being terminated.
    * `EDITED`: Indicates that the cluster has been edited.
    * `RUNNING`: Indicates the cluster has finished being created. Includes the number of nodes in the cluster and a failure reason if some nodes could not be acquired.
    * `RESIZING`: Indicates a change in the target size of the cluster (upsize or downsize).
    * `UPSIZE_COMPLETED`: Indicates that nodes finished being added to the cluster. Includes the number of nodes in the cluster and a failure reason if some nodes could not be acquired.
    * `NODES_LOST`: Indicates that some nodes were lost from the cluster.
    * `DRIVER_HEALTHY`: Indicates that the driver is healthy and the cluster is ready for use.
    * `DRIVER_UNAVAILABLE`: Indicates that the driver is unavailable.
    * `SPARK_EXCEPTION`: Indicates that a Spark exception was thrown from the driver.
    * `DRIVER_NOT_RESPONDING`: Indicates that the driver is up but is not responsive, likely due to GC.
    * `DBFS_DOWN`: Indicates that the driver is up but DBFS is down.
    * `METASTORE_DOWN`: Indicates that the driver is up but the metastore is down.
    * `NODE_BLACKLISTED`: Indicates that a node is not allowed by Spark.
    * `PINNED`: Indicates that the cluster was pinned.
    * `UNPINNED`: Indicates that the cluster was unpinned.
    Source code in prefect_databricks/models/jobs.py
    class ClusterEventType(str, Enum):\n    \"\"\"\n        * `CREATING`: Indicates that the cluster is being created.\n    * `DID_NOT_EXPAND_DISK`: Indicates that a disk is low on space, but adding disks would put it over the max capacity.\n    * `EXPANDED_DISK`: Indicates that a disk was low on space and the disks were expanded.\n    * `FAILED_TO_EXPAND_DISK`: Indicates that a disk was low on space and disk space could not be expanded.\n    * `INIT_SCRIPTS_STARTING`: Indicates that the cluster scoped init script has started.\n    * `INIT_SCRIPTS_FINISHED`: Indicates that the cluster scoped init script has finished.\n    * `STARTING`: Indicates that the cluster is being started.\n    * `RESTARTING`: Indicates that the cluster is being started.\n    * `TERMINATING`: Indicates that the cluster is being terminated.\n    * `EDITED`: Indicates that the cluster has been edited.\n    * `RUNNING`: Indicates the cluster has finished being created. Includes the number of nodes in the cluster and a failure reason if some nodes could not be acquired.\n    * `RESIZING`: Indicates a change in the target size of the cluster (upsize or downsize).\n    * `UPSIZE_COMPLETED`: Indicates that nodes finished being added to the cluster. Includes the number of nodes in the cluster and a failure reason if some nodes could not be acquired.\n    * `NODES_LOST`: Indicates that some nodes were lost from the cluster.\n    * `DRIVER_HEALTHY`: Indicates that the driver is healthy and the cluster is ready for use.\n    * `DRIVER_UNAVAILABLE`: Indicates that the driver is unavailable.\n    * `SPARK_EXCEPTION`: Indicates that a Spark exception was thrown from the driver.\n    * `DRIVER_NOT_RESPONDING`: Indicates that the driver is up but is not responsive, likely due to GC.\n    * `DBFS_DOWN`: Indicates that the driver is up but DBFS is down.\n    * `METASTORE_DOWN`: Indicates that the driver is up but the metastore is down.\n    * `NODE_BLACKLISTED`: Indicates that a node is not allowed by Spark.\n    * `PINNED`: Indicates that the cluster was pinned.\n    * `UNPINNED`: Indicates that the cluster was unpinned.\n    \"\"\"\n\n    creating = \"CREATING\"\n    didnotexpanddisk = \"DID_NOT_EXPAND_DISK\"\n    expandeddisk = \"EXPANDED_DISK\"\n    failedtoexpanddisk = \"FAILED_TO_EXPAND_DISK\"\n    initscriptsstarting = \"INIT_SCRIPTS_STARTING\"\n    initscriptsfinished = \"INIT_SCRIPTS_FINISHED\"\n    starting = \"STARTING\"\n    restarting = \"RESTARTING\"\n    terminating = \"TERMINATING\"\n    edited = \"EDITED\"\n    running = \"RUNNING\"\n    resizing = \"RESIZING\"\n    upsizecompleted = \"UPSIZE_COMPLETED\"\n    nodeslost = \"NODES_LOST\"\n    driverhealthy = \"DRIVER_HEALTHY\"\n    driverunavailable = \"DRIVER_UNAVAILABLE\"\n    sparkexception = \"SPARK_EXCEPTION\"\n    drivernotresponding = \"DRIVER_NOT_RESPONDING\"\n    dbfsdown = \"DBFS_DOWN\"\n    metastoredown = \"METASTORE_DOWN\"\n    nodeblacklisted = \"NODE_BLACKLISTED\"\n    pinned = \"PINNED\"\n    unpinned = \"UNPINNED\"\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterInfo","title":"ClusterInfo","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class ClusterInfo(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    autoscale: Optional[AutoScale] = Field(\n        None,\n        description=(\n            \"If autoscale, parameters needed in order to automatically scale clusters\"\n            \" up and down based on load.\"\n        ),\n    )\n    autotermination_minutes: Optional[int] = Field(\n        None,\n        description=(\n            \"Automatically terminates the cluster after it is inactive for this time in\"\n            \" minutes. If not set, this cluster is not be automatically terminated. If\"\n            \" specified, the threshold must be between 10 and 10000 minutes. You can\"\n            \" also set this value to 0 to explicitly disable automatic termination.\"\n        ),\n    )\n    aws_attributes: Optional[AwsAttributes] = Field(\n        None,\n        description=(\n            \"Attributes related to clusters running on Amazon Web Services. If not\"\n            \" specified at cluster creation, a set of default values is used.\"\n        ),\n    )\n    cluster_cores: Optional[float] = Field(\n        None,\n        description=(\n            \"Number of CPU cores available for this cluster. This can be fractional\"\n            \" since certain node types are configured to share cores between Spark\"\n            \" nodes on the same instance.\"\n        ),\n    )\n    cluster_id: Optional[str] = Field(\n        None,\n        description=(\n            \"Canonical identifier for the cluster. This ID is retained during cluster\"\n            \" restarts and resizes, while each new cluster has a globally unique ID.\"\n        ),\n    )\n    cluster_log_conf: Optional[ClusterLogConf] = Field(\n        None,\n        description=(\n            \"The configuration for delivering Spark logs to a long-term storage\"\n            \" destination. Only one destination can be specified for one cluster. If\"\n            \" the conf is given, the logs are delivered to the destination every `5\"\n            \" mins`. The destination of driver logs is\"\n            \" `<destination>/<cluster-ID>/driver`, while the destination of executor\"\n            \" logs is `<destination>/<cluster-ID>/executor`.\"\n        ),\n    )\n    cluster_log_status: Optional[LogSyncStatus] = Field(\n        None, description=\"Cluster log delivery status.\"\n    )\n    cluster_memory_mb: Optional[int] = Field(\n        None, description=\"Total amount of cluster memory, in megabytes.\"\n    )\n    cluster_name: Optional[str] = Field(\n        None,\n        description=(\n            \"Cluster name requested by the user. This doesn\u2019t have to be unique. If not\"\n            \" specified at creation, the cluster name is an empty string.\"\n        ),\n    )\n    cluster_source: Optional[ClusterSource] = Field(\n        None,\n        description=(\n            \"Determines whether the cluster was created by a user through the UI, by\"\n            \" the Databricks Jobs scheduler, or through an API request.\"\n        ),\n    )\n    creator_user_name: Optional[str] = Field(\n        None,\n        description=(\n            \"Creator user name. 
The field won\u2019t be included in the response if the user\"\n            \" has already been deleted.\"\n        ),\n    )\n    custom_tags: Optional[List[ClusterTag]] = Field(\n        None,\n        description=(\n            \"An object containing a set of tags for cluster resources. Databricks tags\"\n            \" all cluster resources (such as AWS instances and EBS volumes) with these\"\n            \" tags in addition to default_tags.\\n\\n**Note**:\\n\\n* Tags are not\"\n            \" supported on legacy node types such as compute-optimized and\"\n            \" memory-optimized\\n* Databricks allows at most 45 custom tags\"\n        ),\n    )\n    default_tags: Optional[ClusterTag] = Field(\n        None,\n        description=(\n            \"An object containing a set of tags that are added by Databricks regardless\"\n            \" of any custom_tags, including:\\n\\n* Vendor: Databricks\\n* Creator:\"\n            \" <username-of-creator>\\n* ClusterName: <name-of-cluster>\\n* ClusterId:\"\n            \" <id-of-cluster>\\n* Name: <Databricks internal use>  \\nOn job clusters:\\n*\"\n            \" RunName: <name-of-job>\\n* JobId: <id-of-job>  \\nOn resources used by\"\n            \" Databricks SQL:\\n* SqlEndpointId: <id-of-endpoint>\"\n        ),\n    )\n    docker_image: Optional[DockerImage] = Field(\n        None,\n        description=(\n            \"Docker image for a [custom\"\n            \" container](https://docs.databricks.com/clusters/custom-containers.html).\"\n        ),\n    )\n    driver: Optional[SparkNode] = Field(\n        None,\n        description=(\n            \"Node on which the Spark driver resides. The driver node contains the Spark\"\n            \" master and the Databricks application that manages the per-notebook Spark\"\n            \" REPLs.\"\n        ),\n    )\n    driver_node_type_id: Optional[str] = Field(\n        None,\n        description=(\n            \"The node type of the Spark driver. This field is optional; if unset, the\"\n            \" driver node type is set as the same value as `node_type_id` defined\"\n            \" above.\"\n        ),\n    )\n    enable_elastic_disk: Optional[bool] = Field(\n        None,\n        description=(\n            \"Autoscaling Local Storage: when enabled, this cluster dynamically acquires\"\n            \" additional disk space when its Spark workers are running low on disk\"\n            \" space. This feature requires specific AWS permissions to function\"\n            \" correctly - refer to [Autoscaling local\"\n            \" storage](https://docs.databricks.com/clusters/configure.html#autoscaling-local-storage)\"\n            \" for details.\"\n        ),\n    )\n    executors: Optional[List[SparkNode]] = Field(\n        None, description=\"Nodes on which the Spark executors reside.\"\n    )\n    init_scripts: Optional[List[InitScriptInfo]] = Field(\n        None,\n        description=(\n            \"The configuration for storing init scripts. Any number of destinations can\"\n            \" be specified. The scripts are executed sequentially in the order\"\n            \" provided. If `cluster_log_conf` is specified, init script logs are sent\"\n            \" to `<destination>/<cluster-ID>/init_scripts`.\"\n        ),\n    )\n    instance_pool_id: Optional[str] = Field(\n        None,\n        description=(\n            \"The optional ID of the instance pool to which the cluster belongs. 
Refer\"\n            \" to [Pools](https://docs.databricks.com/clusters/instance-pools/index.html)\"\n            \" for details.\"\n        ),\n    )\n    jdbc_port: Optional[int] = Field(\n        None,\n        description=(\n            \"Port on which Spark JDBC server is listening in the driver node. No\"\n            \" service listens on this port in executor nodes.\"\n        ),\n    )\n    last_activity_time: Optional[int] = Field(\n        None,\n        description=(\n            \"Time (in epoch milliseconds) when the cluster was last active. A cluster\"\n            \" is active if there is at least one command that has not finished on the\"\n            \" cluster. This field is available after the cluster has reached a\"\n            \" `RUNNING` state. Updates to this field are made as best-effort attempts.\"\n            \" Certain versions of Spark do not support reporting of cluster activity.\"\n            \" Refer to [Automatic\"\n            \" termination](https://docs.databricks.com/clusters/clusters-manage.html#automatic-termination)\"\n            \" for details.\"\n        ),\n    )\n    last_state_loss_time: Optional[int] = Field(\n        None,\n        description=(\n            \"Time when the cluster driver last lost its state (due to a restart or\"\n            \" driver failure).\"\n        ),\n    )\n    node_type_id: Optional[str] = Field(\n        None,\n        description=(\n            \"This field encodes, through a single value, the resources available to\"\n            \" each of the Spark nodes in this cluster. For example, the Spark nodes can\"\n            \" be provisioned and optimized for memory or compute intensive workloads. A\"\n            \" list of available node types can be retrieved by using the [List node\"\n            \" types](https://docs.databricks.com/dev-tools/api/latest/clusters.html#list-node-types)\"\n            \" API call.\"\n        ),\n    )\n    num_workers: Optional[int] = Field(\n        None,\n        description=(\n            \"If num_workers, number of worker nodes that this cluster must have. A\"\n            \" cluster has one Spark driver and num_workers executors for a total of\"\n            \" num_workers + 1 Spark nodes. **Note:** When reading the properties of a\"\n            \" cluster, this field reflects the desired number of workers rather than\"\n            \" the actual number of workers. For instance, if a cluster is resized from\"\n            \" 5 to 10 workers, this field is immediately updated to reflect the target\"\n            \" size of 10 workers, whereas the workers listed in `executors` gradually\"\n            \" increase from 5 to 10 as the new nodes are provisioned.\"\n        ),\n    )\n    spark_conf: Optional[SparkConfPair] = Field(\n        None,\n        description=(\n            \"An object containing a set of optional, user-specified Spark configuration\"\n            \" key-value pairs. You can also pass in a string of extra JVM options to\"\n            \" the driver and the executors via `spark.driver.extraJavaOptions` and\"\n            \" `spark.executor.extraJavaOptions` respectively.\\n\\nExample Spark confs:\"\n            ' `{\"spark.speculation\": true, \"spark.streaming.ui.retainedBatches\": 5}` or'\n            ' `{\"spark.driver.extraJavaOptions\": \"-verbose:gc -XX:+PrintGCDetails\"}`'\n        ),\n    )\n    spark_context_id: Optional[int] = Field(\n        None,\n        description=(\n            \"A canonical SparkContext identifier. 
This value _does_ change when the\"\n            \" Spark driver restarts. The pair `(cluster_id, spark_context_id)` is a\"\n            \" globally unique identifier over all Spark contexts.\"\n        ),\n    )\n    spark_env_vars: Optional[SparkEnvPair] = Field(\n        None,\n        description=(\n            \"An object containing a set of optional, user-specified environment\"\n            \" variable key-value pairs. Key-value pairs of the form (X,Y) are exported\"\n            \" as is (that is, `export X='Y'`) while launching the driver and\"\n            \" workers.\\n\\nTo specify an additional set of `SPARK_DAEMON_JAVA_OPTS`, we\"\n            \" recommend appending them to `$SPARK_DAEMON_JAVA_OPTS` as shown in the\"\n            \" following example. This ensures that all default databricks managed\"\n            \" environmental variables are included as well.\\n\\nExample Spark\"\n            ' environment variables: `{\"SPARK_WORKER_MEMORY\": \"28000m\",'\n            ' \"SPARK_LOCAL_DIRS\": \"/local_disk0\"}` or `{\"SPARK_DAEMON_JAVA_OPTS\":'\n            ' \"$SPARK_DAEMON_JAVA_OPTS -Dspark.shuffle.service.enabled=true\"}`'\n        ),\n    )\n    spark_version: Optional[str] = Field(\n        None,\n        description=(\n            \"The runtime version of the cluster. You can retrieve a list of available\"\n            \" runtime versions by using the [Runtime\"\n            \" versions](https://docs.databricks.com/dev-tools/api/latest/clusters.html#runtime-versions)\"\n            \" API call.\"\n        ),\n    )\n    ssh_public_keys: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"SSH public key contents that are added to each Spark node in this cluster.\"\n            \" The corresponding private keys can be used to login with the user name\"\n            \" `ubuntu` on port `2200`. Up to 10 keys can be specified.\"\n        ),\n    )\n    start_time: Optional[int] = Field(\n        None,\n        description=(\n            \"Time (in epoch milliseconds) when the cluster creation request was\"\n            \" received (when the cluster entered a `PENDING` state).\"\n        ),\n    )\n    state: Optional[ClusterState] = Field(None, description=\"State of the cluster.\")\n    state_message: Optional[str] = Field(\n        None,\n        description=(\n            \"A message associated with the most recent state transition (for example,\"\n            \" the reason why the cluster entered a `TERMINATED` state). This field is\"\n            \" unstructured, and its exact format is subject to change.\"\n        ),\n    )\n    terminated_time: Optional[int] = Field(\n        None,\n        description=(\n            \"Time (in epoch milliseconds) when the cluster was terminated, if\"\n            \" applicable.\"\n        ),\n    )\n    termination_reason: Optional[TerminationReason] = Field(\n        None,\n        description=(\n            \"Information about why the cluster was terminated. This field only appears\"\n            \" when the cluster is in a `TERMINATING` or `TERMINATED` state.\"\n        ),\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterInstance","title":"ClusterInstance","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class ClusterInstance(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    cluster_id: Optional[str] = Field(\n        None,\n        description=(\n            \"The canonical identifier for the cluster used by a run. This field is\"\n            \" always available for runs on existing clusters. For runs on new clusters,\"\n            \" it becomes available once the cluster is created. This value can be used\"\n            \" to view logs by browsing to `/#setting/sparkui/$cluster_id/driver-logs`.\"\n            \" The logs continue to be available after the run completes.\\n\\nThe\"\n            \" response won\u2019t include this field if the identifier is not available yet.\"\n        ),\n        example=\"0923-164208-meows279\",\n    )\n    spark_context_id: Optional[str] = Field(\n        None,\n        description=(\n            \"The canonical identifier for the Spark context used by a run. This field\"\n            \" is filled in once the run begins execution. This value can be used to\"\n            \" view the Spark UI by browsing to\"\n            \" `/#setting/sparkui/$cluster_id/$spark_context_id`. The Spark UI continues\"\n            \" to be available after the run has completed.\\n\\nThe response won\u2019t\"\n            \" include this field if the identifier is not available yet.\"\n        ),\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterLibraryStatuses","title":"ClusterLibraryStatuses","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class ClusterLibraryStatuses(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    cluster_id: Optional[str] = Field(\n        None, description=\"Unique identifier for the cluster.\"\n    )\n    library_statuses: Optional[List[LibraryFullStatus]] = Field(\n        None, description=\"Status of all libraries on the cluster.\"\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterLogConf","title":"ClusterLogConf","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class ClusterLogConf(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    dbfs: Optional[DbfsStorageInfo] = Field(\n        None,\n        description=(\n            \"DBFS location of cluster log. Destination must be provided. For example,\"\n            ' `{ \"dbfs\" : { \"destination\" : \"dbfs:/home/cluster_log\" } }`'\n        ),\n    )\n    s3: Optional[S3StorageInfo] = Field(\n        None,\n        description=(\n            \"S3 location of cluster log. `destination` and either `region` or\"\n            ' `endpoint` must be provided. For example, `{ \"s3\": { \"destination\" :'\n            ' \"s3://cluster_log_bucket/prefix\", \"region\" : \"us-west-2\" } }`'\n        ),\n    )\n
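An illustrative log delivery configuration mirroring the DBFS example in the field description (the destination path is a placeholder):

```python
from prefect_databricks.models.jobs import ClusterLogConf, DbfsStorageInfo

cluster_log_conf = ClusterLogConf(
    dbfs=DbfsStorageInfo(destination="dbfs:/home/cluster_log")
)
```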
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterSize","title":"ClusterSize","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class ClusterSize(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    autoscale: Optional[AutoScale] = Field(\n        None,\n        description=(\n            \"If autoscale, parameters needed in order to automatically scale clusters\"\n            \" up and down based on load.\"\n        ),\n    )\n    num_workers: Optional[int] = Field(\n        None,\n        description=(\n            \"If num_workers, number of worker nodes that this cluster must have. A\"\n            \" cluster has one Spark driver and num_workers executors for a total of\"\n            \" num_workers + 1 Spark nodes. When reading the properties of a cluster,\"\n            \" this field reflects the desired number of workers rather than the actual\"\n            \" number of workers. For instance, if a cluster is resized from 5 to 10\"\n            \" workers, this field is updated to reflect the target size of 10 workers,\"\n            \" whereas the workers listed in executors gradually increase from 5 to 10\"\n            \" as the new nodes are provisioned.\"\n        ),\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterSource","title":"ClusterSource","text":"

    Bases: str, Enum

    * UI: Cluster created through the UI.
    * JOB: Cluster created by the Databricks job scheduler.
    * API: Cluster created through an API call.
    Source code in prefect_databricks/models/jobs.py
    class ClusterSource(str, Enum):\n    \"\"\"\n        * UI: Cluster created through the UI.\n    * JOB: Cluster created by the Databricks job scheduler.\n    * API: Cluster created through an API call.\n\n    \"\"\"\n\n    ui = \"UI\"\n    job = \"JOB\"\n    api = \"API\"\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterSpec","title":"ClusterSpec","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class ClusterSpec(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    existing_cluster_id: Optional[str] = Field(\n        None,\n        description=(\n            \"If existing_cluster_id, the ID of an existing cluster that is used for all\"\n            \" runs of this job. When running jobs on an existing cluster, you may need\"\n            \" to manually restart the cluster if it stops responding. We suggest\"\n            \" running jobs on new clusters for greater reliability.\"\n        ),\n        example=\"0923-164208-meows279\",\n    )\n    libraries: Optional[List[Library]] = Field(\n        None,\n        description=(\n            \"An optional list of libraries to be installed on the cluster that executes\"\n            \" the job. The default value is an empty list.\"\n        ),\n    )\n    new_cluster: Optional[NewCluster] = Field(\n        None,\n        description=(\n            \"If new_cluster, a description of a cluster that is created for each run.\"\n        ),\n    )\n
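An illustrative `ClusterSpec` that reuses an existing cluster (the cluster ID is the placeholder value from the field's example):

```python
from prefect_databricks.models.jobs import ClusterSpec

cluster_spec = ClusterSpec(existing_cluster_id="0923-164208-meows279")
```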
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterState","title":"ClusterState","text":"

    Bases: str, Enum

    * PENDING: Indicates that a cluster is in the process of being created.
    * RUNNING: Indicates that a cluster has been started and is ready for use.
    * RESTARTING: Indicates that a cluster is in the process of restarting.
    * RESIZING: Indicates that a cluster is in the process of adding or removing nodes.
    * TERMINATING: Indicates that a cluster is in the process of being destroyed.
    * TERMINATED: Indicates that a cluster has been successfully destroyed.
    * ERROR: This state is no longer used. It was used to indicate a cluster that failed to be created. `TERMINATING` and `TERMINATED` are used instead.
    * UNKNOWN: Indicates that a cluster is in an unknown state. A cluster should never be in this state.
    Source code in prefect_databricks/models/jobs.py
    class ClusterState(str, Enum):\n    \"\"\"\n        * PENDING: Indicates that a cluster is in the process of being created.\n    * RUNNING: Indicates that a cluster has been started and is ready for use.\n    * RESTARTING: Indicates that a cluster is in the process of restarting.\n    * RESIZING: Indicates that a cluster is in the process of adding or removing nodes.\n    * TERMINATING: Indicates that a cluster is in the process of being destroyed.\n    * TERMINATED: Indicates that a cluster has been successfully destroyed.\n    * ERROR: This state is no longer used. It was used to indicate a cluster that failed to be created. `TERMINATING` and `TERMINATED` are used instead.\n    * UNKNOWN: Indicates that a cluster is in an unknown state. A cluster should never be in this state.\n\n    \"\"\"\n\n    pending = \"PENDING\"\n    running = \"RUNNING\"\n    restarting = \"RESTARTING\"\n    resizing = \"RESIZING\"\n    terminating = \"TERMINATING\"\n    terminated = \"TERMINATED\"\n    error = \"ERROR\"\n    unknown = \"UNKNOWN\"\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterTag","title":"ClusterTag","text":"

    Bases: BaseModel

    See source code for the fields' description.

    An object with key value pairs. The key length must be between 1 and 127 UTF-8 characters, inclusive. The value length must be less than or equal to 255 UTF-8 characters. For a list of all restrictions, see AWS Tag Restrictions: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#tag-restrictions

    Source code in prefect_databricks/models/jobs.py
    class ClusterTag(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n\n    An object with key value pairs. The key length must be between 1 and 127 UTF-8 characters, inclusive. The value length must be less than or equal to 255 UTF-8 characters. For a list of all restrictions, see AWS Tag Restrictions: <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#tag-restrictions>\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n\n        allow_mutation = False\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.CronSchedule","title":"CronSchedule","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class CronSchedule(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    pause_status: Optional[Literal[\"PAUSED\", \"UNPAUSED\"]] = Field(\n        None,\n        description=\"Indicate whether this schedule is paused or not.\",\n        example=\"PAUSED\",\n    )\n    quartz_cron_expression: str = Field(\n        ...,\n        description=(\n            \"A Cron expression using Quartz syntax that describes the schedule for a\"\n            \" job. See [Cron\"\n            \" Trigger](http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html)\"\n            \" for details. This field is required.\"\n        ),\n        example=\"20 30 * * * ?\",\n    )\n    timezone_id: str = Field(\n        ...,\n        description=(\n            \"A Java timezone ID. The schedule for a job is resolved with respect to\"\n            \" this timezone. See [Java\"\n            \" TimeZone](https://docs.oracle.com/javase/7/docs/api/java/util/TimeZone.html)\"\n            \" for details. This field is required.\"\n        ),\n        example=\"Europe/London\",\n    )\n
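An illustrative schedule built from the example values shown above:

```python
from prefect_databricks.models.jobs import CronSchedule

schedule = CronSchedule(
    quartz_cron_expression="20 30 * * * ?",  # 30 min 20 s past every hour (Quartz syntax)
    timezone_id="Europe/London",
    pause_status="UNPAUSED",
)
```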
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.DbfsStorageInfo","title":"DbfsStorageInfo","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class DbfsStorageInfo(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    destination: Optional[str] = Field(\n        None, description=\"DBFS destination. Example: `dbfs:/my/path`\"\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.DbtOutput","title":"DbtOutput","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class DbtOutput(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    artifacts_headers: Optional[Dict[str, Any]] = Field(\n        None,\n        description=(\n            \"An optional map of headers to send when retrieving the artifact from the\"\n            \" `artifacts_link`.\"\n        ),\n    )\n    artifacts_link: Optional[str] = Field(\n        None,\n        description=(\n            \"A pre-signed URL to download the (compressed) dbt artifacts. This link is\"\n            \" valid for a limited time (30 minutes). This information is only available\"\n            \" after the run has finished.\"\n        ),\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.DbtTask","title":"DbtTask","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class DbtTask(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    catalog: Optional[str] = Field(\n        None,\n        description=(\n            \"Optional name of the catalog to use. The value is the top level in the\"\n            \" 3-level namespace of Unity Catalog (catalog / schema / relation). The\"\n            \" catalog value can only be specified if a warehouse_id is specified.\"\n            \" Requires dbt-databricks >= 1.1.1.\"\n        ),\n        example=\"main\",\n    )\n    commands: List = Field(\n        ...,\n        description=(\n            \"A list of dbt commands to execute. All commands must start with `dbt`.\"\n            \" This parameter must not be empty. A maximum of up to 10 commands can be\"\n            \" provided.\"\n        ),\n        example=[\"dbt deps\", \"dbt seed\", \"dbt run --models 123\"],\n    )\n    profiles_directory: Optional[str] = Field(\n        None,\n        description=(\n            \"Optional (relative) path to the profiles directory. Can only be specified\"\n            \" if no warehouse_id is specified. If no warehouse_id is specified and this\"\n            \" folder is unset, the root directory is used.\"\n        ),\n    )\n    project_directory: Optional[str] = Field(\n        None,\n        description=(\n            \"Optional (relative) path to the project directory, if no value is\"\n            \" provided, the root of the git repository is used.\"\n        ),\n    )\n    schema_: Optional[str] = Field(\n        None,\n        alias=\"schema\",\n        description=(\n            \"Optional schema to write to. This parameter is only used when a\"\n            \" warehouse_id is also provided. If not provided, the `default` schema is\"\n            \" used.\"\n        ),\n    )\n    warehouse_id: Optional[str] = Field(\n        None,\n        description=(\n            \"ID of the SQL warehouse to connect to. If provided, we automatically\"\n            \" generate and provide the profile and connection details to dbt. It can be\"\n            \" overridden on a per-command basis by using the `--profiles-dir` command\"\n            \" line argument.\"\n        ),\n        example=\"30dade0507d960d1\",\n    )\n
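An illustrative `DbtTask` targeting a SQL warehouse (IDs and names are placeholders taken from the field examples):

```python
from prefect_databricks.models.jobs import DbtTask

dbt_task = DbtTask(
    commands=["dbt deps", "dbt seed", "dbt run"],  # every command must start with `dbt`
    warehouse_id="30dade0507d960d1",
    catalog="main",
    schema="analytics",  # populated via the `schema` alias of the `schema_` field
)
```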
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.DockerBasicAuth","title":"DockerBasicAuth","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class DockerBasicAuth(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    password: Optional[str] = Field(\n        None, description=\"Password for the Docker repository.\"\n    )\n    username: Optional[str] = Field(\n        None, description=\"User name for the Docker repository.\"\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.DockerImage","title":"DockerImage","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class DockerImage(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    basic_auth: Optional[DockerBasicAuth] = Field(\n        None, description=\"Basic authentication information for Docker repository.\"\n    )\n    url: Optional[str] = Field(None, description=\"URL for the Docker image.\")\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.Error","title":"Error","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class Error(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    error_code: Optional[str] = Field(\n        None, description=\"Error code\", example=\"INTERNAL_ERROR\"\n    )\n    message: Optional[str] = Field(\n        None,\n        description=(\n            \"Human-readable error message that describes the cause of the error.\"\n        ),\n        example=\"Unexpected error.\",\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.EventDetails","title":"EventDetails","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class EventDetails(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    attributes: Optional[AwsAttributes] = Field(\n        None,\n        description=(\n            \"* For created clusters, the attributes of the cluster.\\n* For edited\"\n            \" clusters, the new attributes of the cluster.\"\n        ),\n    )\n    cause: Optional[ResizeCause] = Field(\n        None, description=\"The cause of a change in target size.\"\n    )\n    cluster_size: Optional[ClusterSize] = Field(\n        None,\n        description=\"The cluster size that was set in the cluster creation or edit.\",\n    )\n    current_num_workers: Optional[int] = Field(\n        None, description=\"The number of nodes in the cluster.\"\n    )\n    previous_attributes: Optional[AwsAttributes] = Field(\n        None, description=\"The cluster attributes before a cluster was edited.\"\n    )\n    previous_cluster_size: Optional[ClusterSize] = Field(\n        None, description=\"The size of the cluster before an edit or resize.\"\n    )\n    reason: Optional[TerminationReason] = Field(\n        None,\n        description=(\n            \"A termination reason:\\n\\n* On a `TERMINATED` event, the reason for the\"\n            \" termination.\\n* On a `RESIZE_COMPLETE` event, indicates the reason that\"\n            \" we failed to acquire some nodes.\"\n        ),\n    )\n    target_num_workers: Optional[int] = Field(\n        None, description=\"The targeted number of nodes in the cluster.\"\n    )\n    user: Optional[str] = Field(\n        None,\n        description=(\n            \"The user that caused the event to occur. (Empty if it was done by\"\n            \" Databricks.)\"\n        ),\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.FileStorageInfo","title":"FileStorageInfo","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class FileStorageInfo(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    destination: Optional[str] = Field(\n        None, description=\"File destination. Example: `file:/my/file.sh`\"\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.GitSnapshot","title":"GitSnapshot","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Read-only state of the remote repository at the time the job was run. This field is only included on job runs.

    Source code in prefect_databricks/models/jobs.py
    class GitSnapshot(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n\n    Read-only state of the remote repository at the time the job was run. This field is only included on job runs.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    used_commit: Optional[str] = Field(\n        None,\n        description=(\n            \"Commit that was used to execute the run. If git_branch was specified, this\"\n            \" points to the HEAD of the branch at the time of the run; if git_tag was\"\n            \" specified, this points to the commit the tag points to.\"\n        ),\n        example=\"4506fdf41e9fa98090570a34df7a5bce163ff15f\",\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.GitSource","title":"GitSource","text":"

    Bases: BaseModel

    See source code for the fields' description.

    This functionality is in Public Preview.

    An optional specification for a remote repository containing the notebooks used by this job's notebook tasks.

    Source code in prefect_databricks/models/jobs.py
    class GitSource(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n\n        This functionality is in Public Preview.\n\n    An optional specification for a remote repository containing the notebooks used by this job's notebook tasks.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    git_branch: Optional[str] = Field(\n        None,\n        description=(\n            \"Name of the branch to be checked out and used by this job. This field\"\n            \" cannot be specified in conjunction with git_tag or git_commit.\\nThe\"\n            \" maximum length is 255 characters.\"\n        ),\n        example=\"main\",\n    )\n    git_commit: Optional[str] = Field(\n        None,\n        description=(\n            \"Commit to be checked out and used by this job. This field cannot be\"\n            \" specified in conjunction with git_branch or git_tag.\\nThe maximum length\"\n            \" is 64 characters.\"\n        ),\n        example=\"e0056d01\",\n    )\n    git_provider: Optional[\n        Literal[\n            \"gitHub\",\n            \"bitbucketCloud\",\n            \"azureDevOpsServices\",\n            \"gitHubEnterprise\",\n            \"bitbucketServer\",\n            \"gitLab\",\n            \"gitLabEnterpriseEdition\",\n            \"awsCodeCommit\",\n        ]\n    ] = Field(\n        None,\n        description=(\n            \"Unique identifier of the service used to host the Git repository. The\"\n            \" value is case insensitive.\"\n        ),\n        example=\"github\",\n    )\n    git_snapshot: Optional[GitSnapshot] = None\n    git_tag: Optional[str] = Field(\n        None,\n        description=(\n            \"Name of the tag to be checked out and used by this job. This field cannot\"\n            \" be specified in conjunction with git_branch or git_commit.\\nThe maximum\"\n            \" length is 255 characters.\"\n        ),\n        example=\"release-1.0.0\",\n    )\n    git_url: Optional[str] = Field(\n        None,\n        description=(\n            \"URL of the repository to be cloned by this job.\\nThe maximum length is 300\"\n            \" characters.\"\n        ),\n        example=\"https://github.com/databricks/databricks-cli\",\n    )\n
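    A minimal sketch (illustrative, not from the source) of a GitSource built from the documented fields, using the example values shown above:

    ```python
    from prefect_databricks.models.jobs import GitSource

    git_source = GitSource(
        git_url="https://github.com/databricks/databricks-cli",
        git_provider="gitHub",  # case-insensitive identifier of the hosting service
        git_branch="main",      # cannot be combined with git_tag or git_commit
    )
    ```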
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.GitSource1","title":"GitSource1","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class GitSource1(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    __root__: Union[GitSource, Any, Any, Any] = Field(\n        ...,\n        description=(\n            \"This functionality is in Public Preview.\\n\\nAn optional specification for\"\n            \" a remote repository containing the notebooks used by this job's notebook\"\n            \" tasks.\"\n        ),\n        example={\n            \"git_branch\": \"main\",\n            \"git_provider\": \"gitHub\",\n            \"git_url\": \"https://github.com/databricks/databricks-cli\",\n        },\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.GroupName","title":"GroupName","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class GroupName(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    __root__: str = Field(\n        ...,\n        description=(\n            \"Group name. There are two built-in groups: `users` for all users, and\"\n            \" `admins` for administrators.\"\n        ),\n        example=\"users\",\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.InitScriptInfo","title":"InitScriptInfo","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class InitScriptInfo(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    s3: Optional[S3StorageInfo] = Field(\n        None,\n        alias=\"S3\",\n        description=(\n            \"S3 location of init script. Destination and either region or endpoint must\"\n            ' be provided. For example, `{ \"s3\": { \"destination\" :'\n            ' \"s3://init_script_bucket/prefix\", \"region\" : \"us-west-2\" } }`'\n        ),\n    )\n    dbfs: Optional[DbfsStorageInfo] = Field(\n        None,\n        description=(\n            \"DBFS location of init script. Destination must be provided. For example,\"\n            ' `{ \"dbfs\" : { \"destination\" : \"dbfs:/home/init_script\" } }`'\n        ),\n    )\n    file: Optional[FileStorageInfo] = Field(\n        None,\n        description=(\n            \"File location of init script. Destination must be provided. For example,\"\n            ' `{ \"file\" : { \"destination\" : \"file:/my/local/file.sh\" } }`'\n        ),\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.IsOwner","title":"IsOwner","text":"

    Bases: str, Enum

    Permission that represents ownership of the job.

    Source code in prefect_databricks/models/jobs.py
    class IsOwner(str, Enum):\n    \"\"\"\n    Permission that represents ownership of the job.\n    \"\"\"\n\n    isowner = \"IS_OWNER\"\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.Job","title":"Job","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class Job(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    created_time: Optional[int] = Field(\n        None,\n        description=(\n            \"The time at which this job was created in epoch milliseconds (milliseconds\"\n            \" since 1/1/1970 UTC).\"\n        ),\n        example=1601370337343,\n    )\n    creator_user_name: Optional[str] = Field(\n        None,\n        description=(\n            \"The creator user name. This field won\u2019t be included in the response if the\"\n            \" user has already been deleted.\"\n        ),\n        example=\"user.name@databricks.com\",\n    )\n    job_id: Optional[int] = Field(\n        None, description=\"The canonical identifier for this job.\", example=11223344\n    )\n    settings: Optional[JobSettings] = Field(\n        None,\n        description=(\n            \"Settings for this job and all of its runs. These settings can be updated\"\n            \" using the `resetJob` method.\"\n        ),\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.JobCluster","title":"JobCluster","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class JobCluster(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    job_cluster_key: str = Field(\n        ...,\n        description=(\n            \"A unique name for the job cluster. This field is required and must be\"\n            \" unique within the job.\\n`JobTaskSettings` may refer to this field to\"\n            \" determine which cluster to launch for the task execution.\"\n        ),\n        example=\"auto_scaling_cluster\",\n        max_length=100,\n        min_length=1,\n        regex=\"^[\\\\w\\\\-]+$\",\n    )\n    new_cluster: Optional[NewCluster] = None\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.JobEmailNotifications","title":"JobEmailNotifications","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class JobEmailNotifications(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    no_alert_for_skipped_runs: Optional[bool] = Field(\n        None,\n        description=(\n            \"If true, do not send email to recipients specified in `on_failure` if the\"\n            \" run is skipped.\"\n        ),\n        example=False,\n    )\n    on_failure: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"A list of email addresses to notify when a run completes unsuccessfully. A\"\n            \" run is considered unsuccessful if it ends with an `INTERNAL_ERROR`\"\n            \" `life_cycle_state` or a `SKIPPED`, `FAILED`, or `TIMED_OUT`\"\n            \" `result_state`. If not specified on job creation, reset, or update, or\"\n            \" the list is empty, then notifications are not sent. Job-level failure\"\n            \" notifications are sent only once after the entire job run (including all\"\n            \" of its retries) has failed. Notifications are not sent when failed job\"\n            \" runs are retried. To receive a failure notification after every failed\"\n            \" task (including every failed retry), use task-level notifications\"\n            \" instead.\"\n        ),\n        example=[\"user.name@databricks.com\"],\n    )\n    on_start: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"A list of email addresses to be notified when a run begins. If not\"\n            \" specified on job creation, reset, or update, the list is empty, and\"\n            \" notifications are not sent.\"\n        ),\n        example=[\"user.name@databricks.com\"],\n    )\n    on_success: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"A list of email addresses to be notified when a run successfully\"\n            \" completes. A run is considered to have completed successfully if it ends\"\n            \" with a `TERMINATED` `life_cycle_state` and a `SUCCESSFUL` result_state.\"\n            \" If not specified on job creation, reset, or update, the list is empty,\"\n            \" and notifications are not sent.\"\n        ),\n        example=[\"user.name@databricks.com\"],\n    )\n
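    A minimal sketch (illustrative, not from the source) of a notification configuration built from the fields above:

    ```python
    from prefect_databricks.models.jobs import JobEmailNotifications

    email_notifications = JobEmailNotifications(
        on_failure=["user.name@databricks.com"],  # notified once after the entire job run fails
        no_alert_for_skipped_runs=True,           # suppress failure emails when the run is skipped
    )
    ```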
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.JobSettings","title":"JobSettings","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class JobSettings(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    email_notifications: Optional[JobEmailNotifications] = Field(\n        None,\n        description=(\n            \"An optional set of email addresses that is notified when runs of this job\"\n            \" begin or complete as well as when this job is deleted. The default\"\n            \" behavior is to not send any emails.\"\n        ),\n    )\n    format: Optional[Literal[\"SINGLE_TASK\", \"MULTI_TASK\"]] = Field(\n        None,\n        description=(\n            \"Used to tell what is the format of the job. This field is ignored in\"\n            \" Create/Update/Reset calls. When using the Jobs API 2.1 this value is\"\n            ' always set to `\"MULTI_TASK\"`.'\n        ),\n        example=\"MULTI_TASK\",\n    )\n    git_source: Optional[GitSource1] = Field(\n        None,\n        description=(\n            \"This functionality is in Public Preview.\\n\\nAn optional specification for\"\n            \" a remote repository containing the notebooks used by this job's notebook\"\n            \" tasks.\"\n        ),\n        example={\n            \"git_branch\": \"main\",\n            \"git_provider\": \"gitHub\",\n            \"git_url\": \"https://github.com/databricks/databricks-cli\",\n        },\n    )\n    job_clusters: Optional[List[JobCluster]] = Field(\n        None,\n        description=(\n            \"A list of job cluster specifications that can be shared and reused by\"\n            \" tasks of this job. Libraries cannot be declared in a shared job cluster.\"\n            \" You must declare dependent libraries in task settings.\"\n        ),\n        example=[\n            {\n                \"job_cluster_key\": \"auto_scaling_cluster\",\n                \"new_cluster\": {\n                    \"autoscale\": {\"max_workers\": 16, \"min_workers\": 2},\n                    \"aws_attributes\": {\"availability\": \"SPOT\", \"zone_id\": \"us-west-2a\"},\n                    \"node_type_id\": \"i3.xlarge\",\n                    \"spark_conf\": {\"spark.speculation\": True},\n                    \"spark_version\": \"7.3.x-scala2.12\",\n                },\n            }\n        ],\n        max_items=100,\n    )\n    max_concurrent_runs: Optional[int] = Field(\n        None,\n        description=(\n            \"An optional maximum allowed number of concurrent runs of the job.\\n\\nSet\"\n            \" this value if you want to be able to execute multiple runs of the same\"\n            \" job concurrently. This is useful for example if you trigger your job on a\"\n            \" frequent schedule and want to allow consecutive runs to overlap with each\"\n            \" other, or if you want to trigger multiple runs which differ by their\"\n            \" input parameters.\\n\\nThis setting affects only new runs. For example,\"\n            \" suppose the job\u2019s concurrency is 4 and there are 4 concurrent active\"\n            \" runs. Then setting the concurrency to 3 won\u2019t kill any of the active\"\n            \" runs. However, from then on, new runs are skipped unless there are fewer\"\n            \" than 3 active runs.\\n\\nThis value cannot exceed 1000\\\\. Setting this\"\n            \" value to 0 causes all new runs to be skipped. 
The default behavior is to\"\n            \" allow only 1 concurrent run.\"\n        ),\n        example=10,\n    )\n    name: Optional[str] = Field(\n        \"Untitled\",\n        description=\"An optional name for the job.\",\n        example=\"A multitask job\",\n    )\n    schedule: Optional[CronSchedule] = Field(\n        None,\n        description=(\n            \"An optional periodic schedule for this job. The default behavior is that\"\n            \" the job only runs when triggered by clicking \u201cRun Now\u201d in the Jobs UI or\"\n            \" sending an API request to `runNow`.\"\n        ),\n    )\n    tags: Optional[Dict[str, Any]] = Field(\n        \"{}\",\n        description=(\n            \"A map of tags associated with the job. These are forwarded to the cluster\"\n            \" as cluster tags for jobs clusters, and are subject to the same\"\n            \" limitations as cluster tags. A maximum of 25 tags can be added to the\"\n            \" job.\"\n        ),\n        example={\"cost-center\": \"engineering\", \"team\": \"jobs\"},\n    )\n    tasks: Optional[List[JobTaskSettings]] = Field(\n        None,\n        description=\"A list of task specifications to be executed by this job.\",\n        example=[\n            {\n                \"depends_on\": [],\n                \"description\": \"Extracts session data from events\",\n                \"existing_cluster_id\": \"0923-164208-meows279\",\n                \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}],\n                \"max_retries\": 3,\n                \"min_retry_interval_millis\": 2000,\n                \"retry_on_timeout\": False,\n                \"spark_jar_task\": {\n                    \"main_class_name\": \"com.databricks.Sessionize\",\n                    \"parameters\": [\"--data\", \"dbfs:/path/to/data.json\"],\n                },\n                \"task_key\": \"Sessionize\",\n                \"timeout_seconds\": 86400,\n            },\n            {\n                \"depends_on\": [],\n                \"description\": \"Ingests order data\",\n                \"job_cluster_key\": \"auto_scaling_cluster\",\n                \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}],\n                \"max_retries\": 3,\n                \"min_retry_interval_millis\": 2000,\n                \"retry_on_timeout\": False,\n                \"spark_jar_task\": {\n                    \"main_class_name\": \"com.databricks.OrdersIngest\",\n                    \"parameters\": [\"--data\", \"dbfs:/path/to/order-data.json\"],\n                },\n                \"task_key\": \"Orders_Ingest\",\n                \"timeout_seconds\": 86400,\n            },\n            {\n                \"depends_on\": [\n                    {\"task_key\": \"Orders_Ingest\"},\n                    {\"task_key\": \"Sessionize\"},\n                ],\n                \"description\": \"Matches orders with user sessions\",\n                \"max_retries\": 3,\n                \"min_retry_interval_millis\": 2000,\n                \"new_cluster\": {\n                    \"autoscale\": {\"max_workers\": 16, \"min_workers\": 2},\n                    \"aws_attributes\": {\"availability\": \"SPOT\", \"zone_id\": \"us-west-2a\"},\n                    \"node_type_id\": \"i3.xlarge\",\n                    \"spark_conf\": {\"spark.speculation\": True},\n                    \"spark_version\": \"7.3.x-scala2.12\",\n                },\n                \"notebook_task\": {\n                    
\"base_parameters\": {\"age\": \"35\", \"name\": \"John Doe\"},\n                    \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n                    \"source\": \"WORKSPACE\",\n                },\n                \"retry_on_timeout\": False,\n                \"task_key\": \"Match\",\n                \"timeout_seconds\": 86400,\n            },\n        ],\n        max_items=100,\n    )\n    timeout_seconds: Optional[int] = Field(\n        None,\n        description=(\n            \"An optional timeout applied to each run of this job. The default behavior\"\n            \" is to have no timeout.\"\n        ),\n        example=86400,\n    )\n    webhook_notifications: Optional[WebhookNotifications] = Field(\n        None,\n        description=(\n            \"A collection of system notification IDs to notify when runs of this job\"\n            \" begin or complete. The default behavior is to not send any system\"\n            \" notifications.\"\n        ),\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.JobTask","title":"JobTask","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class JobTask(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    dbt_task: Optional[DbtTask] = Field(\n        None,\n        description=(\n            \"If dbt_task, indicates that this must execute a dbt task. It requires both\"\n            \" Databricks SQL and the ability to use a serverless or a pro SQL\"\n            \" warehouse.\"\n        ),\n    )\n    notebook_task: Optional[NotebookTask] = Field(\n        None,\n        description=(\n            \"If notebook_task, indicates that this job must run a notebook. This field\"\n            \" may not be specified in conjunction with spark_jar_task.\"\n        ),\n    )\n    pipeline_task: Optional[PipelineTask] = Field(\n        None,\n        description=(\n            \"If pipeline_task, indicates that this job must execute a Pipeline.\"\n        ),\n    )\n    python_wheel_task: Optional[PythonWheelTask] = Field(\n        None,\n        description=(\n            \"If python_wheel_task, indicates that this job must execute a PythonWheel.\"\n        ),\n    )\n    spark_jar_task: Optional[SparkJarTask] = Field(\n        None,\n        description=\"If spark_jar_task, indicates that this job must run a JAR.\",\n        example=\"\",\n    )\n    spark_python_task: Optional[SparkPythonTask] = Field(\n        None,\n        description=(\n            \"If spark_python_task, indicates that this job must run a Python file.\"\n        ),\n    )\n    spark_submit_task: Optional[SparkSubmitTask] = Field(\n        None,\n        description=(\n            \"If spark_submit_task, indicates that this job must be launched by the\"\n            \" spark submit script.\"\n        ),\n    )\n    sql_task: Optional[SqlTask] = Field(\n        None,\n        description=(\n            \"If sql_task, indicates that this job must execute a SQL task. It requires\"\n            \" both Databricks SQL and a serverless or a pro SQL warehouse.\"\n        ),\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.JobTaskSettings","title":"JobTaskSettings","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class JobTaskSettings(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    dbt_task: Optional[DbtTask] = Field(\n        None,\n        description=(\n            \"If dbt_task, indicates that this must execute a dbt task. It requires both\"\n            \" Databricks SQL and the ability to use a serverless or a pro SQL\"\n            \" warehouse.\"\n        ),\n    )\n    depends_on: Optional[TaskDependencies] = None\n    description: Optional[TaskDescription] = None\n    email_notifications: Optional[JobEmailNotifications] = Field(\n        None,\n        description=(\n            \"An optional set of email addresses that is notified when runs of this task\"\n            \" begin or complete as well as when this task is deleted. The default\"\n            \" behavior is to not send any emails.\"\n        ),\n    )\n    existing_cluster_id: Optional[str] = Field(\n        None,\n        description=(\n            \"If existing_cluster_id, the ID of an existing cluster that is used for all\"\n            \" runs of this task. When running tasks on an existing cluster, you may\"\n            \" need to manually restart the cluster if it stops responding. We suggest\"\n            \" running jobs on new clusters for greater reliability.\"\n        ),\n        example=\"0923-164208-meows279\",\n    )\n    job_cluster_key: Optional[str] = Field(\n        None,\n        description=(\n            \"If job_cluster_key, this task is executed reusing the cluster specified in\"\n            \" `job.settings.job_clusters`.\"\n        ),\n        max_length=100,\n        min_length=1,\n        regex=\"^[\\\\w\\\\-]+$\",\n    )\n    libraries: Optional[List[Library]] = Field(\n        None,\n        description=(\n            \"An optional list of libraries to be installed on the cluster that executes\"\n            \" the task. The default value is an empty list.\"\n        ),\n    )\n    max_retries: Optional[int] = Field(\n        None,\n        description=(\n            \"An optional maximum number of times to retry an unsuccessful run. A run is\"\n            \" considered to be unsuccessful if it completes with the `FAILED`\"\n            \" result_state or `INTERNAL_ERROR` `life_cycle_state`. The value -1 means\"\n            \" to retry indefinitely and the value 0 means to never retry. The default\"\n            \" behavior is to never retry.\"\n        ),\n        example=10,\n    )\n    min_retry_interval_millis: Optional[int] = Field(\n        None,\n        description=(\n            \"An optional minimal interval in milliseconds between the start of the\"\n            \" failed run and the subsequent retry run. The default behavior is that\"\n            \" unsuccessful runs are immediately retried.\"\n        ),\n        example=2000,\n    )\n    new_cluster: Optional[NewCluster] = Field(\n        None,\n        description=(\n            \"If new_cluster, a description of a cluster that is created for each run.\"\n        ),\n    )\n    notebook_task: Optional[NotebookTask] = Field(\n        None,\n        description=(\n            \"If notebook_task, indicates that this task must run a notebook. 
This field\"\n            \" may not be specified in conjunction with spark_jar_task.\"\n        ),\n    )\n    pipeline_task: Optional[PipelineTask] = Field(\n        None,\n        description=(\n            \"If pipeline_task, indicates that this task must execute a Pipeline.\"\n        ),\n    )\n    python_wheel_task: Optional[PythonWheelTask] = Field(\n        None,\n        description=(\n            \"If python_wheel_task, indicates that this job must execute a PythonWheel.\"\n        ),\n    )\n    retry_on_timeout: Optional[bool] = Field(\n        None,\n        description=(\n            \"An optional policy to specify whether to retry a task when it times out.\"\n            \" The default behavior is to not retry on timeout.\"\n        ),\n        example=True,\n    )\n    spark_jar_task: Optional[SparkJarTask] = Field(\n        None, description=\"If spark_jar_task, indicates that this task must run a JAR.\"\n    )\n    spark_python_task: Optional[SparkPythonTask] = Field(\n        None,\n        description=(\n            \"If spark_python_task, indicates that this task must run a Python file.\"\n        ),\n    )\n    spark_submit_task: Optional[SparkSubmitTask] = Field(\n        None,\n        description=(\n            \"If spark_submit_task, indicates that this task must be launched by the\"\n            \" spark submit script.\"\n        ),\n    )\n    sql_task: Optional[SqlTask] = Field(\n        None,\n        description=(\n            \"If sql_task, indicates that this job must execute a SQL task. It requires\"\n            \" both Databricks SQL and a serverless or a pro SQL warehouse.\"\n        ),\n    )\n    task_key: TaskKey\n    timeout_seconds: Optional[int] = Field(\n        None,\n        description=(\n            \"An optional timeout applied to each run of this job task. The default\"\n            \" behavior is to have no timeout.\"\n        ),\n        example=86400,\n    )\n    webhook_notifications: Optional[WebhookNotifications] = Field(\n        None,\n        description=(\n            \"A collection of system notification IDs to notify when the run begins or\"\n            \" completes. The default behavior is to not send any system notifications.\"\n        ),\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.Library","title":"Library","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class Library(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    cran: Optional[RCranLibrary] = Field(\n        None, description=\"If cran, specification of a CRAN library to be installed.\"\n    )\n    egg: Optional[str] = Field(\n        None,\n        description=(\n            \"If egg, URI of the egg to be installed. DBFS and S3 URIs are supported.\"\n            ' For example: `{ \"egg\": \"dbfs:/my/egg\" }` or `{ \"egg\":'\n            ' \"s3://my-bucket/egg\" }`. If S3 is used, make sure the cluster has read'\n            \" access on the library. You may need to launch the cluster with an\"\n            \" instance profile to access the S3 URI.\"\n        ),\n        example=\"dbfs:/my/egg\",\n    )\n    jar: Optional[str] = Field(\n        None,\n        description=(\n            \"If jar, URI of the JAR to be installed. DBFS and S3 URIs are supported.\"\n            ' For example: `{ \"jar\": \"dbfs:/mnt/databricks/library.jar\" }` or `{ \"jar\":'\n            ' \"s3://my-bucket/library.jar\" }`. If S3 is used, make sure the cluster has'\n            \" read access on the library. You may need to launch the cluster with an\"\n            \" instance profile to access the S3 URI.\"\n        ),\n        example=\"dbfs:/my-jar.jar\",\n    )\n    maven: Optional[MavenLibrary] = Field(\n        None,\n        description=(\n            \"If maven, specification of a Maven library to be installed. For example:\"\n            ' `{ \"coordinates\": \"org.jsoup:jsoup:1.7.2\" }`'\n        ),\n    )\n    pypi: Optional[PythonPyPiLibrary] = Field(\n        None,\n        description=(\n            \"If pypi, specification of a PyPI library to be installed. Specifying the\"\n            \" `repo` field is optional and if not specified, the default pip index is\"\n            ' used. For example: `{ \"package\": \"simplejson\", \"repo\":'\n            ' \"https://my-repo.com\" }`'\n        ),\n    )\n    whl: Optional[str] = Field(\n        None,\n        description=(\n            \"If whl, URI of the wheel or zipped wheels to be installed. DBFS and S3\"\n            ' URIs are supported. For example: `{ \"whl\": \"dbfs:/my/whl\" }` or `{ \"whl\":'\n            ' \"s3://my-bucket/whl\" }`. If S3 is used, make sure the cluster has read'\n            \" access on the library. You may need to launch the cluster with an\"\n            \" instance profile to access the S3 URI. Also the wheel file name needs to\"\n            \" use the [correct\"\n            \" convention](https://www.python.org/dev/peps/pep-0427/#file-format). If\"\n            \" zipped wheels are to be installed, the file name suffix should be\"\n            \" `.wheelhouse.zip`.\"\n        ),\n        example=\"dbfs:/my/whl\",\n    )\n
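    A minimal sketch (illustrative, not from the source) showing one Library entry per artifact type, using the example values from the field descriptions above:

    ```python
    from prefect_databricks.models.jobs import Library, MavenLibrary

    libraries = [
        Library(whl="dbfs:/my/whl"),                                        # wheel stored on DBFS
        Library(maven=MavenLibrary(coordinates="org.jsoup:jsoup:1.7.2")),   # Maven package
    ]
    ```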
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.LibraryFullStatus","title":"LibraryFullStatus","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class LibraryFullStatus(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    is_library_for_all_clusters: Optional[bool] = Field(\n        None,\n        description=(\n            \"Whether the library was set to be installed on all clusters via the\"\n            \" libraries UI.\"\n        ),\n    )\n    library: Optional[Library] = Field(\n        None, description=\"Unique identifier for the library.\"\n    )\n    messages: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"All the info and warning messages that have occurred so far for this\"\n            \" library.\"\n        ),\n    )\n    status: Optional[LibraryInstallStatus] = Field(\n        None, description=\"Status of installing the library on the cluster.\"\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.LibraryInstallStatus","title":"LibraryInstallStatus","text":"

    Bases: str, Enum

    • PENDING: No action has yet been taken to install the library. This state should be very short lived.
    • RESOLVING: Metadata necessary to install the library is being retrieved from the provided repository. For Jar, Egg, and Whl libraries, this step is a no-op.
    • INSTALLING: The library is actively being installed, either by adding resources to Spark or executing system commands inside the Spark nodes.
    • INSTALLED: The library has been successfully installed.
    • SKIPPED: Installation on a Databricks Runtime 7.0 or above cluster was skipped due to Scala version incompatibility.
    • FAILED: Some step in installation failed. More information can be found in the messages field.
    • UNINSTALL_ON_RESTART: The library has been marked for removal. Libraries can be removed only when clusters are restarted, so libraries that enter this state remain until the cluster is restarted.
    Source code in prefect_databricks/models/jobs.py
    class LibraryInstallStatus(str, Enum):\n    \"\"\"\n        * `PENDING`: No action has yet been taken to install the library. This state should be very short lived.\n    * `RESOLVING`: Metadata necessary to install the library is being retrieved from the provided repository. For Jar, Egg, and Whl libraries, this step is a no-op.\n    * `INSTALLING`: The library is actively being installed, either by adding resources to Spark or executing system commands inside the Spark nodes.\n    * `INSTALLED`: The library has been successfully instally.\n    * `SKIPPED`: Installation on a Databricks Runtime 7.0 or above cluster was skipped due to Scala version incompatibility.\n    * `FAILED`: Some step in installation failed. More information can be found in the messages field.\n    * `UNINSTALL_ON_RESTART`: The library has been marked for removal. Libraries can be removed only when clusters are restarted, so libraries that enter this state remains until the cluster is restarted.\n    \"\"\"\n\n    pending = \"PENDING\"\n    resolving = \"RESOLVING\"\n    installing = \"INSTALLING\"\n    installed = \"INSTALLED\"\n    skipped = \"SKIPPED\"\n    failed = \"FAILED\"\n    uninstallonrestart = \"UNINSTALL_ON_RESTART\"\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ListOrder","title":"ListOrder","text":"

    Bases: str, Enum

    • DESC: Descending order.
    • ASC: Ascending order.
    Source code in prefect_databricks/models/jobs.py
    class ListOrder(str, Enum):\n    \"\"\"\n        * `DESC`: Descending order.\n    * `ASC`: Ascending order.\n    \"\"\"\n\n    desc = \"DESC\"\n    asc = \"ASC\"\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.LogSyncStatus","title":"LogSyncStatus","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class LogSyncStatus(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    last_attempted: Optional[int] = Field(\n        None,\n        description=(\n            \"The timestamp of last attempt. If the last attempt fails, last_exception\"\n            \" contains the exception in the last attempt.\"\n        ),\n    )\n    last_exception: Optional[str] = Field(\n        None,\n        description=(\n            \"The exception thrown in the last attempt, it would be null (omitted in the\"\n            \" response) if there is no exception in last attempted.\"\n        ),\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.MavenLibrary","title":"MavenLibrary","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class MavenLibrary(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    coordinates: str = Field(\n        ...,\n        description=(\n            \"Gradle-style Maven coordinates. For example: `org.jsoup:jsoup:1.7.2`. This\"\n            \" field is required.\"\n        ),\n        example=\"org.jsoup:jsoup:1.7.2\",\n    )\n    exclusions: Optional[List[str]] = Field(\n        None,\n        description=(\n            'List of dependences to exclude. For example: `[\"slf4j:slf4j\",'\n            ' \"*:hadoop-client\"]`.\\n\\nMaven dependency exclusions:'\n            \" <https://maven.apache.org/guides/introduction/introduction-to-optional-and-excludes-dependencies.html>.\"\n        ),\n        example=[\"slf4j:slf4j\", \"*:hadoop-client\"],\n    )\n    repo: Optional[str] = Field(\n        None,\n        description=(\n            \"Maven repo to install the Maven package from. If omitted, both Maven\"\n            \" Central Repository and Spark Packages are searched.\"\n        ),\n        example=\"https://my-repo.com\",\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.NewCluster","title":"NewCluster","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class NewCluster(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    autoscale: Optional[AutoScale] = Field(\n        None,\n        description=(\n            \"If autoscale, the required parameters to automatically scale clusters up\"\n            \" and down based on load.\"\n        ),\n    )\n    aws_attributes: Optional[AwsAttributes] = Field(\n        None,\n        description=(\n            \"Attributes related to clusters running on Amazon Web Services. If not\"\n            \" specified at cluster creation, a set of default values is used.\"\n        ),\n    )\n    cluster_log_conf: Optional[ClusterLogConf] = Field(\n        None,\n        description=(\n            \"The configuration for delivering Spark logs to a long-term storage\"\n            \" destination. Only one destination can be specified for one cluster. If\"\n            \" the conf is given, the logs are delivered to the destination every `5\"\n            \" mins`. The destination of driver logs is\"\n            \" `<destination>/<cluster-id>/driver`, while the destination of executor\"\n            \" logs is `<destination>/<cluster-id>/executor`.\"\n        ),\n    )\n    custom_tags: Optional[ClusterTag] = Field(\n        None,\n        description=(\n            \"An object containing a set of tags for cluster resources. Databricks tags\"\n            \" all cluster resources (such as AWS instances and EBS volumes) with these\"\n            \" tags in addition to default_tags.\\n\\n**Note**:\\n\\n* Tags are not\"\n            \" supported on legacy node types such as compute-optimized and\"\n            \" memory-optimized\\n* Databricks allows at most 45 custom tags\"\n        ),\n    )\n    driver_instance_pool_id: Optional[str] = Field(\n        None,\n        description=(\n            \"The optional ID of the instance pool to use for the driver node. You must\"\n            \" also specify `instance_pool_id`. Refer to [Instance Pools\"\n            \" API](https://docs.databricks.com/dev-tools/api/latest/instance-pools.html)\"\n            \" for details.\"\n        ),\n    )\n    driver_node_type_id: Optional[str] = Field(\n        None,\n        description=(\n            \"The node type of the Spark driver. This field is optional; if unset, the\"\n            \" driver node type is set as the same value as `node_type_id` defined\"\n            \" above.\"\n        ),\n    )\n    enable_elastic_disk: Optional[bool] = Field(\n        None,\n        description=(\n            \"Autoscaling Local Storage: when enabled, this cluster dynamically acquires\"\n            \" additional disk space when its Spark workers are running low on disk\"\n            \" space. This feature requires specific AWS permissions to function\"\n            \" correctly - refer to [Autoscaling local\"\n            \" storage](https://docs.databricks.com/clusters/configure.html#autoscaling-local-storage)\"\n            \" for details.\"\n        ),\n    )\n    enable_local_disk_encryption: Optional[bool] = Field(\n        None,\n        description=(\n            \"Determines whether encryption of disks locally attached to the cluster is\"\n            \" enabled.\"\n        ),\n    )\n    init_scripts: Optional[List[InitScriptInfo]] = Field(\n        None,\n        description=(\n            \"The configuration for storing init scripts. Any number of scripts can be\"\n            \" specified. 
The scripts are executed sequentially in the order provided.\"\n            \" If `cluster_log_conf` is specified, init script logs are sent to\"\n            \" `<destination>/<cluster-id>/init_scripts`.\"\n        ),\n    )\n    instance_pool_id: Optional[str] = Field(\n        None,\n        description=(\n            \"The optional ID of the instance pool to use for cluster nodes. If\"\n            \" `driver_instance_pool_id` is present, `instance_pool_id` is used for\"\n            \" worker nodes only. Otherwise, it is used for both the driver node and\"\n            \" worker nodes. Refer to [Instance Pools\"\n            \" API](https://docs.databricks.com/dev-tools/api/latest/instance-pools.html)\"\n            \" for details.\"\n        ),\n    )\n    node_type_id: Optional[str] = Field(\n        None,\n        description=(\n            \"This field encodes, through a single value, the resources available to\"\n            \" each of the Spark nodes in this cluster. For example, the Spark nodes can\"\n            \" be provisioned and optimized for memory or compute intensive workloads A\"\n            \" list of available node types can be retrieved by using the [List node\"\n            \" types](https://docs.databricks.com/dev-tools/api/latest/clusters.html#list-node-types)\"\n            \" API call.\"\n        ),\n    )\n    num_workers: Optional[int] = Field(\n        None,\n        description=(\n            \"If num_workers, number of worker nodes that this cluster must have. A\"\n            \" cluster has one Spark driver and num_workers executors for a total of\"\n            \" num_workers + 1 Spark nodes. When reading the properties of a cluster,\"\n            \" this field reflects the desired number of workers rather than the actual\"\n            \" current number of workers. For example, if a cluster is resized from 5 to\"\n            \" 10 workers, this field immediately updates to reflect the target size of\"\n            \" 10 workers, whereas the workers listed in `spark_info` gradually increase\"\n            \" from 5 to 10 as the new nodes are provisioned.\"\n        ),\n    )\n    policy_id: Optional[str] = Field(\n        None,\n        description=(\n            \"A [cluster\"\n            \" policy](https://docs.databricks.com/dev-tools/api/latest/policies.html)\"\n            \" ID. Either `node_type_id` or `instance_pool_id` must be specified in the\"\n            \" cluster policy if they are not specified in this job cluster object.\"\n        ),\n    )\n    spark_conf: Optional[SparkConfPair] = Field(\n        None,\n        description=(\n            \"An object containing a set of optional, user-specified Spark configuration\"\n            \" key-value pairs. You can also pass in a string of extra JVM options to\"\n            \" the driver and the executors via `spark.driver.extraJavaOptions` and\"\n            \" `spark.executor.extraJavaOptions` respectively.\\n\\nExample Spark confs:\"\n            ' `{\"spark.speculation\": true, \"spark.streaming.ui.retainedBatches\": 5}` or'\n            ' `{\"spark.driver.extraJavaOptions\": \"-verbose:gc -XX:+PrintGCDetails\"}`'\n        ),\n    )\n    spark_env_vars: Optional[SparkEnvPair] = Field(\n        None,\n        description=(\n            \"An object containing a set of optional, user-specified environment\"\n            \" variable key-value pairs. 
Key-value pair of the form (X,Y) are exported\"\n            \" as is (for example, `export X='Y'`) while launching the driver and\"\n            \" workers.\\n\\nTo specify an additional set of `SPARK_DAEMON_JAVA_OPTS`, we\"\n            \" recommend appending them to `$SPARK_DAEMON_JAVA_OPTS` as shown in the\"\n            \" following example. This ensures that all default databricks managed\"\n            \" environmental variables are included as well.\\n\\nExample Spark\"\n            ' environment variables: `{\"SPARK_WORKER_MEMORY\": \"28000m\",'\n            ' \"SPARK_LOCAL_DIRS\": \"/local_disk0\"}` or `{\"SPARK_DAEMON_JAVA_OPTS\":'\n            ' \"$SPARK_DAEMON_JAVA_OPTS -Dspark.shuffle.service.enabled=true\"}`'\n        ),\n    )\n    spark_version: str = Field(\n        ...,\n        description=(\n            \"The Spark version of the cluster. A list of available Spark versions can\"\n            \" be retrieved by using the [Runtime\"\n            \" versions](https://docs.databricks.com/dev-tools/api/latest/clusters.html#runtime-versions)\"\n            \" API call.\"\n        ),\n    )\n    ssh_public_keys: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"SSH public key contents that are added to each Spark node in this cluster.\"\n            \" The corresponding private keys can be used to login with the user name\"\n            \" `ubuntu` on port `2200`. Up to 10 keys can be specified.\"\n        ),\n    )\n
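    A minimal sketch (illustrative, not from the source) of a fixed-size cluster specification; only spark_version is required, and the other values are taken from the examples above:

    ```python
    from prefect_databricks.models.jobs import NewCluster

    new_cluster = NewCluster(
        spark_version="7.3.x-scala2.12",  # required
        node_type_id="i3.xlarge",
        num_workers=2,                    # one Spark driver plus two worker nodes
    )
    ```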
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.NodeType","title":"NodeType","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class NodeType(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    description: str = Field(\n        ...,\n        description=(\n            \"A string description associated with this node type. This field is\"\n            \" required.\"\n        ),\n    )\n    instance_type_id: str = Field(\n        ...,\n        description=(\n            \"An identifier for the type of hardware that this node runs on. This field\"\n            \" is required.\"\n        ),\n    )\n    is_deprecated: Optional[bool] = Field(\n        None,\n        description=(\n            \"Whether the node type is deprecated. Non-deprecated node types offer\"\n            \" greater performance.\"\n        ),\n    )\n    memory_mb: int = Field(\n        ...,\n        description=(\n            \"Memory (in MB) available for this node type. This field is required.\"\n        ),\n    )\n    node_info: Optional[ClusterCloudProviderNodeInfo] = Field(\n        None, description=\"Node type info reported by the cloud provider.\"\n    )\n    node_type_id: str = Field(\n        ..., description=\"Unique identifier for this node type. This field is required.\"\n    )\n    num_cores: Optional[float] = Field(\n        None,\n        description=(\n            \"Number of CPU cores available for this node type. This can be fractional\"\n            \" if the number of cores on a machine instance is not divisible by the\"\n            \" number of Spark nodes on that machine. This field is required.\"\n        ),\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.NotebookOutput","title":"NotebookOutput","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class NotebookOutput(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    result: Optional[str] = Field(\n        None,\n        description=(\n            \"The value passed to\"\n            \" [dbutils.notebook.exit()](https://docs.databricks.com/notebooks/notebook-workflows.html#notebook-workflows-exit).\"\n            \" Databricks restricts this API to return the first 5 MB of the value. For\"\n            \" a larger result, your job can store the results in a cloud storage\"\n            \" service. This field is absent if `dbutils.notebook.exit()` was never\"\n            \" called.\"\n        ),\n        example=\"An arbitrary string passed by calling dbutils.notebook.exit(...)\",\n    )\n    truncated: Optional[bool] = Field(\n        None, description=\"Whether or not the result was truncated.\", example=False\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.NotebookTask","title":"NotebookTask","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class NotebookTask(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    base_parameters: Optional[Dict[str, Any]] = Field(\n        None,\n        description=(\n            \"Base parameters to be used for each run of this job. If the run is\"\n            \" initiated by a call to\"\n            \" [`run-now`](https://docs.databricks.com/dev-tools/api/latest/jobs.html#operation/JobsRunNow)\"\n            \" with parameters specified, the two parameters maps are merged. If the\"\n            \" same key is specified in `base_parameters` and in `run-now`, the value\"\n            \" from `run-now` is used.\\n\\nUse [Task parameter\"\n            \" variables](https://docs.databricks.com/jobs.html#parameter-variables) to\"\n            \" set parameters containing information about job runs.\\n\\nIf the notebook\"\n            \" takes a parameter that is not specified in the job\u2019s `base_parameters` or\"\n            \" the `run-now` override parameters, the default value from the notebook is\"\n            \" used.\\n\\nRetrieve these parameters in a notebook using\"\n            \" [dbutils.widgets.get](https://docs.databricks.com/dev-tools/databricks-utils.html#dbutils-widgets).\"\n        ),\n        example={\"age\": 35, \"name\": \"John Doe\"},\n    )\n    notebook_path: str = Field(\n        ...,\n        description=(\n            \"The path of the notebook to be run in the Databricks workspace or remote\"\n            \" repository. For notebooks stored in the Databricks workspace, the path\"\n            \" must be absolute and begin with a slash. For notebooks stored in a remote\"\n            \" repository, the path must be relative. This field is required.\"\n        ),\n        example=\"/Users/user.name@databricks.com/notebook_to_run\",\n    )\n    source: Optional[Literal[\"WORKSPACE\", \"GIT\"]] = Field(\n        None,\n        description=(\n            \"Optional location type of the notebook. When set to `WORKSPACE`, the\"\n            \" notebook will be retrieved from the local Databricks workspace. When set\"\n            \" to `GIT`, the notebook will be retrieved from a Git repository defined in\"\n            \" `git_source`. If the value is empty, the task will use `GIT` if\"\n            \" `git_source` is defined and `WORKSPACE` otherwise.\"\n        ),\n        example=\"WORKSPACE\",\n    )\n
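    A minimal sketch (illustrative, not from the source) of a notebook task built from the fields above, using the documented example values:

    ```python
    from prefect_databricks.models.jobs import NotebookTask

    notebook_task = NotebookTask(
        notebook_path="/Users/user.name@databricks.com/notebook_to_run",  # required; absolute path for workspace notebooks
        base_parameters={"age": "35", "name": "John Doe"},                # overridden by matching run-now parameters
        source="WORKSPACE",
    )
    ```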
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.OnFailureItem","title":"OnFailureItem","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class OnFailureItem(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    id: Optional[str] = None\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.OnStartItem","title":"OnStartItem","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class OnStartItem(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    id: Optional[str] = None\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.OnSucces","title":"OnSucces","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class OnSucces(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    id: Optional[str] = None\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ParameterPair","title":"ParameterPair","text":"

    Bases: BaseModel

    See source code for the fields' description.

    An object with additional information about why a cluster was terminated. The object keys are one of TerminationParameter and the value is the termination information.

    Source code in prefect_databricks/models/jobs.py
    class ParameterPair(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n\n    An object with additional information about why a cluster was terminated. The object keys are one of `TerminationParameter` and the value is the termination information.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n\n        allow_mutation = False\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.PermissionLevel","title":"PermissionLevel","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class PermissionLevel(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    __root__: Union[CanManage, CanManageRun, CanView, IsOwner] = Field(\n        ..., description=\"Permission level to grant.\"\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.PermissionLevelForGroup","title":"PermissionLevelForGroup","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class PermissionLevelForGroup(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    __root__: Union[CanManage, CanManageRun, CanView] = Field(\n        ..., description=\"Permission level to grant.\"\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.PipelineParams","title":"PipelineParams","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class PipelineParams(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    full_refresh: Optional[bool] = Field(\n        None, description=\"If true, triggers a full refresh on the delta live table.\"\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.PipelineTask","title":"PipelineTask","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class PipelineTask(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    full_refresh: Optional[bool] = Field(\n        False,\n        description=(\n            \"If true, a full refresh will be triggered on the delta live table.\"\n        ),\n    )\n    pipeline_id: Optional[str] = Field(\n        None,\n        description=\"The full name of the pipeline task to execute.\",\n        example=\"a12cd3e4-0ab1-1abc-1a2b-1a2bcd3e4fg5\",\n    )\n
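    A minimal sketch, assuming the example pipeline ID above, of a pipeline task that forces a full refresh of the Delta Live Table:

    from prefect_databricks.models.jobs import PipelineTask

    # Hypothetical pipeline ID taken from the field example above.
    pipeline_task = PipelineTask(
        pipeline_id="a12cd3e4-0ab1-1abc-1a2b-1a2bcd3e4fg5",
        full_refresh=True,
    )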
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.PoolClusterTerminationCode","title":"PoolClusterTerminationCode","text":"

    Bases: str, Enum

    * INSTANCE_POOL_MAX_CAPACITY_FAILURE: The pool max capacity has been reached.
    * INSTANCE_POOL_NOT_FOUND_FAILURE: The pool specified by the cluster is no longer active or doesn’t exist.
    Source code in prefect_databricks/models/jobs.py
    class PoolClusterTerminationCode(str, Enum):\n    \"\"\"\n        * INSTANCE_POOL_MAX_CAPACITY_FAILURE: The pool max capacity has been reached.\n    * INSTANCE_POOL_NOT_FOUND_FAILURE: The pool specified by the cluster is no longer active or doesn\u2019t exist.\n    \"\"\"\n\n    instancepoolmaxcapacityfailure = \"INSTANCE_POOL_MAX_CAPACITY_FAILURE\"\n    instancepoolnotfoundfailure = \"INSTANCE_POOL_NOT_FOUND_FAILURE\"\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.PythonPyPiLibrary","title":"PythonPyPiLibrary","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class PythonPyPiLibrary(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    package: str = Field(\n        ...,\n        description=(\n            \"The name of the PyPI package to install. An optional exact version\"\n            \" specification is also supported. Examples: `simplejson` and\"\n            \" `simplejson==3.8.0`. This field is required.\"\n        ),\n        example=\"simplejson==3.8.0\",\n    )\n    repo: Optional[str] = Field(\n        None,\n        description=(\n            \"The repository where the package can be found. If not specified, the\"\n            \" default pip index is used.\"\n        ),\n        example=\"https://my-repo.com\",\n    )\n
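    A short sketch of declaring a pinned PyPI dependency; when repo is omitted the default pip index is used:

    from prefect_databricks.models.jobs import PythonPyPiLibrary

    # Hypothetical pin; the exact version specifier is optional.
    pypi_library = PythonPyPiLibrary(package="simplejson==3.8.0")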
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.PythonWheelTask","title":"PythonWheelTask","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class PythonWheelTask(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    entry_point: Optional[str] = Field(\n        None,\n        description=(\n            \"Named entry point to use, if it does not exist in the metadata of the\"\n            \" package it executes the function from the package directly using\"\n            \" `$packageName.$entryPoint()`\"\n        ),\n    )\n    named_parameters: Optional[Dict[str, Any]] = Field(\n        None,\n        description=(\n            \"Command-line parameters passed to Python wheel task in the form of\"\n            ' `[\"--name=task\", \"--data=dbfs:/path/to/data.json\"]`. Leave it empty if'\n            \" `parameters` is not null.\"\n        ),\n        example={\"data\": \"dbfs:/path/to/data.json\", \"name\": \"task\"},\n    )\n    package_name: Optional[str] = Field(\n        None, description=\"Name of the package to execute\"\n    )\n    parameters: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"Command-line parameters passed to Python wheel task. Leave it empty if\"\n            \" `named_parameters` is not null.\"\n        ),\n        example=[\"--name=task\", \"one\", \"two\"],\n    )\n
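    A minimal sketch (the package name and entry point are assumptions) of a wheel task driven by named parameters; leave parameters empty when named_parameters is used:

    from prefect_databricks.models.jobs import PythonWheelTask

    # Hypothetical package name and entry point.
    wheel_task = PythonWheelTask(
        package_name="my_package",
        entry_point="main",
        named_parameters={"name": "task", "data": "dbfs:/path/to/data.json"},
    )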
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RCranLibrary","title":"RCranLibrary","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class RCranLibrary(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    package: str = Field(\n        ...,\n        description=\"The name of the CRAN package to install. This field is required.\",\n        example=\"geojson\",\n    )\n    repo: Optional[str] = Field(\n        None,\n        description=(\n            \"The repository where the package can be found. If not specified, the\"\n            \" default CRAN repo is used.\"\n        ),\n        example=\"https://my-repo.com\",\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RepairHistory","title":"RepairHistory","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class RepairHistory(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    repair_history: Optional[List[RepairHistoryItem]] = Field(\n        None, description=\"The repair history of the run.\"\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RepairHistoryItem","title":"RepairHistoryItem","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class RepairHistoryItem(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    end_time: Optional[int] = Field(\n        None, description=\"The end time of the (repaired) run.\", example=1625060863413\n    )\n    id: Optional[int] = Field(\n        None,\n        description=(\n            \"The ID of the repair. Only returned for the items that represent a repair\"\n            \" in `repair_history`.\"\n        ),\n        example=734650698524280,\n    )\n    start_time: Optional[int] = Field(\n        None, description=\"The start time of the (repaired) run.\", example=1625060460483\n    )\n    state: Optional[RunState] = None\n    task_run_ids: Optional[List[int]] = Field(\n        None,\n        description=(\n            \"The run IDs of the task runs that ran as part of this repair history item.\"\n        ),\n        example=[1106460542112844, 988297789683452],\n    )\n    type: Optional[Literal[\"ORIGINAL\", \"REPAIR\"]] = Field(\n        None,\n        description=(\n            \"The repair history item type. Indicates whether a run is the original run\"\n            \" or a repair run.\"\n        ),\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RepairRunInput","title":"RepairRunInput","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class RepairRunInput(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    latest_repair_id: Optional[int] = Field(\n        None,\n        description=(\n            \"The ID of the latest repair. This parameter is not required when repairing\"\n            \" a run for the first time, but must be provided on subsequent requests to\"\n            \" repair the same run.\"\n        ),\n        example=734650698524280,\n    )\n    rerun_all_failed_tasks: Optional[bool] = Field(\n        False,\n        description=(\n            \"If true, repair all failed tasks. Only one of rerun_tasks or\"\n            \" rerun_all_failed_tasks can be used.\"\n        ),\n    )\n    rerun_tasks: Optional[List[str]] = Field(\n        None,\n        description=\"The task keys of the task runs to repair.\",\n        example=[\"task0\", \"task1\"],\n    )\n    run_id: Optional[int] = Field(\n        None,\n        description=(\n            \"The job run ID of the run to repair. The run must not be in progress.\"\n        ),\n        example=455644833,\n    )\n
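    An illustrative sketch, using the example run ID above, of a first repair request that reruns every failed task; latest_repair_id is only required on later repairs of the same run:

    from prefect_databricks.models.jobs import RepairRunInput

    # Only one of rerun_tasks or rerun_all_failed_tasks may be set.
    repair = RepairRunInput(
        run_id=455644833,
        rerun_all_failed_tasks=True,
    )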
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ResizeCause","title":"ResizeCause","text":"

    Bases: str, Enum

    * `AUTOSCALE`: Automatically resized based on load.
    * `USER_REQUEST`: User requested a new size.
    * `AUTORECOVERY`: Autorecovery monitor resized the cluster after it lost a node.
    Source code in prefect_databricks/models/jobs.py
    class ResizeCause(str, Enum):\n    \"\"\"\n        * `AUTOSCALE`: Automatically resized based on load.\n    * `USER_REQUEST`: User requested a new size.\n    * `AUTORECOVERY`: Autorecovery monitor resized the cluster after it lost a node.\n    \"\"\"\n\n    autoscale = \"AUTOSCALE\"\n    userrequest = \"USER_REQUEST\"\n    autorecovery = \"AUTORECOVERY\"\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.Run","title":"Run","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class Run(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    attempt_number: Optional[int] = Field(\n        None,\n        description=(\n            \"The sequence number of this run attempt for a triggered job run. The\"\n            \" initial attempt of a run has an attempt_number of 0\\\\. If the initial run\"\n            \" attempt fails, and the job has a retry policy (`max_retries` \\\\> 0),\"\n            \" subsequent runs are created with an `original_attempt_run_id` of the\"\n            \" original attempt\u2019s ID and an incrementing `attempt_number`. Runs are\"\n            \" retried only until they succeed, and the maximum `attempt_number` is the\"\n            \" same as the `max_retries` value for the job.\"\n        ),\n        example=0,\n    )\n    cleanup_duration: Optional[int] = Field(\n        None,\n        description=(\n            \"The time in milliseconds it took to terminate the cluster and clean up any\"\n            \" associated artifacts. The total duration of the run is the sum of the\"\n            \" setup_duration, the execution_duration, and the cleanup_duration.\"\n        ),\n        example=0,\n    )\n    cluster_instance: Optional[ClusterInstance] = Field(\n        None,\n        description=(\n            \"The cluster used for this run. If the run is specified to use a new\"\n            \" cluster, this field is set once the Jobs service has requested a cluster\"\n            \" for the run.\"\n        ),\n    )\n    cluster_spec: Optional[ClusterSpec] = Field(\n        None,\n        description=(\n            \"A snapshot of the job\u2019s cluster specification when this run was created.\"\n        ),\n    )\n    creator_user_name: Optional[str] = Field(\n        None,\n        description=(\n            \"The creator user name. This field won\u2019t be included in the response if the\"\n            \" user has already been deleted.\"\n        ),\n        example=\"user.name@databricks.com\",\n    )\n    end_time: Optional[int] = Field(\n        None,\n        description=(\n            \"The time at which this run ended in epoch milliseconds (milliseconds since\"\n            \" 1/1/1970 UTC). This field is set to 0 if the job is still running.\"\n        ),\n        example=1625060863413,\n    )\n    execution_duration: Optional[int] = Field(\n        None,\n        description=(\n            \"The time in milliseconds it took to execute the commands in the JAR or\"\n            \" notebook until they completed, failed, timed out, were cancelled, or\"\n            \" encountered an unexpected error.\"\n        ),\n        example=0,\n    )\n    git_source: Optional[GitSource1] = Field(\n        None,\n        description=(\n            \"This functionality is in Public Preview.\\n\\nAn optional specification for\"\n            \" a remote repository containing the notebooks used by this job's notebook\"\n            \" tasks.\"\n        ),\n        example={\n            \"git_branch\": \"main\",\n            \"git_provider\": \"gitHub\",\n            \"git_url\": \"https://github.com/databricks/databricks-cli\",\n        },\n    )\n    job_clusters: Optional[List[JobCluster]] = Field(\n        None,\n        description=(\n            \"A list of job cluster specifications that can be shared and reused by\"\n            \" tasks of this job. 
Libraries cannot be declared in a shared job cluster.\"\n            \" You must declare dependent libraries in task settings.\"\n        ),\n        example=[\n            {\n                \"job_cluster_key\": \"auto_scaling_cluster\",\n                \"new_cluster\": {\n                    \"autoscale\": {\"max_workers\": 16, \"min_workers\": 2},\n                    \"aws_attributes\": {\"availability\": \"SPOT\", \"zone_id\": \"us-west-2a\"},\n                    \"node_type_id\": \"i3.xlarge\",\n                    \"spark_conf\": {\"spark.speculation\": True},\n                    \"spark_version\": \"7.3.x-scala2.12\",\n                },\n            }\n        ],\n        max_items=100,\n    )\n    job_id: Optional[int] = Field(\n        None,\n        description=\"The canonical identifier of the job that contains this run.\",\n        example=11223344,\n    )\n    number_in_job: Optional[int] = Field(\n        None,\n        deprecated=True,\n        description=(\n            \"A unique identifier for this job run. This is set to the same value as\"\n            \" `run_id`.\"\n        ),\n        example=455644833,\n    )\n    original_attempt_run_id: Optional[int] = Field(\n        None,\n        description=(\n            \"If this run is a retry of a prior run attempt, this field contains the\"\n            \" run_id of the original attempt; otherwise, it is the same as the run_id.\"\n        ),\n        example=455644833,\n    )\n    overriding_parameters: Optional[RunParameters] = Field(\n        None, description=\"The parameters used for this run.\"\n    )\n    run_id: Optional[int] = Field(\n        None,\n        description=(\n            \"The canonical identifier of the run. This ID is unique across all runs of\"\n            \" all jobs.\"\n        ),\n        example=455644833,\n    )\n    run_name: Optional[str] = Field(\n        \"Untitled\",\n        description=(\n            \"An optional name for the run. The maximum allowed length is 4096 bytes in\"\n            \" UTF-8 encoding.\"\n        ),\n        example=\"A multitask job run\",\n    )\n    run_page_url: Optional[str] = Field(\n        None,\n        description=\"The URL to the detail page of the run.\",\n        example=\"https://my-workspace.cloud.databricks.com/#job/11223344/run/123\",\n    )\n    run_type: Optional[RunType] = None\n    schedule: Optional[CronSchedule] = Field(\n        None,\n        description=(\n            \"The cron schedule that triggered this run if it was triggered by the\"\n            \" periodic scheduler.\"\n        ),\n    )\n    setup_duration: Optional[int] = Field(\n        None,\n        description=(\n            \"The time it took to set up the cluster in milliseconds. For runs that run\"\n            \" on new clusters this is the cluster creation time, for runs that run on\"\n            \" existing clusters this time should be very short.\"\n        ),\n        example=0,\n    )\n    start_time: Optional[int] = Field(\n        None,\n        description=(\n            \"The time at which this run was started in epoch milliseconds (milliseconds\"\n            \" since 1/1/1970 UTC). 
This may not be the time when the job task starts\"\n            \" executing, for example, if the job is scheduled to run on a new cluster,\"\n            \" this is the time the cluster creation call is issued.\"\n        ),\n        example=1625060460483,\n    )\n    state: Optional[RunState] = Field(\n        None, description=\"The result and lifecycle states of the run.\"\n    )\n    tasks: Optional[List[RunTask]] = Field(\n        None,\n        description=(\n            \"The list of tasks performed by the run. Each task has its own `run_id`\"\n            \" which you can use to call `JobsGetOutput` to retrieve the run resutls.\"\n        ),\n        example=[\n            {\n                \"attempt_number\": 0,\n                \"cleanup_duration\": 0,\n                \"cluster_instance\": {\n                    \"cluster_id\": \"0923-164208-meows279\",\n                    \"spark_context_id\": \"4348585301701786933\",\n                },\n                \"description\": \"Ingests order data\",\n                \"end_time\": 1629989930171,\n                \"execution_duration\": 0,\n                \"job_cluster_key\": \"auto_scaling_cluster\",\n                \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}],\n                \"run_id\": 2112892,\n                \"run_page_url\": (\n                    \"https://my-workspace.cloud.databricks.com/#job/39832/run/20\"\n                ),\n                \"setup_duration\": 0,\n                \"spark_jar_task\": {\"main_class_name\": \"com.databricks.OrdersIngest\"},\n                \"start_time\": 1629989929660,\n                \"state\": {\n                    \"life_cycle_state\": \"INTERNAL_ERROR\",\n                    \"result_state\": \"FAILED\",\n                    \"state_message\": (\n                        \"Library installation failed for library due to user error.\"\n                        \" Error messages:\\n'Manage' permissions are required to install\"\n                        \" libraries on a cluster\"\n                    ),\n                    \"user_cancelled_or_timedout\": False,\n                },\n                \"task_key\": \"Orders_Ingest\",\n            },\n            {\n                \"attempt_number\": 0,\n                \"cleanup_duration\": 0,\n                \"cluster_instance\": {\"cluster_id\": \"0923-164208-meows279\"},\n                \"depends_on\": [\n                    {\"task_key\": \"Orders_Ingest\"},\n                    {\"task_key\": \"Sessionize\"},\n                ],\n                \"description\": \"Matches orders with user sessions\",\n                \"end_time\": 1629989930238,\n                \"execution_duration\": 0,\n                \"new_cluster\": {\n                    \"autoscale\": {\"max_workers\": 16, \"min_workers\": 2},\n                    \"aws_attributes\": {\"availability\": \"SPOT\", \"zone_id\": \"us-west-2a\"},\n                    \"node_type_id\": \"i3.xlarge\",\n                    \"spark_conf\": {\"spark.speculation\": True},\n                    \"spark_version\": \"7.3.x-scala2.12\",\n                },\n                \"notebook_task\": {\n                    \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n                    \"source\": \"WORKSPACE\",\n                },\n                \"run_id\": 2112897,\n                \"run_page_url\": (\n                    \"https://my-workspace.cloud.databricks.com/#job/39832/run/21\"\n                ),\n                \"setup_duration\": 0,\n  
              \"start_time\": 0,\n                \"state\": {\n                    \"life_cycle_state\": \"SKIPPED\",\n                    \"state_message\": \"An upstream task failed.\",\n                    \"user_cancelled_or_timedout\": False,\n                },\n                \"task_key\": \"Match\",\n            },\n            {\n                \"attempt_number\": 0,\n                \"cleanup_duration\": 0,\n                \"cluster_instance\": {\n                    \"cluster_id\": \"0923-164208-meows279\",\n                    \"spark_context_id\": \"4348585301701786933\",\n                },\n                \"description\": \"Extracts session data from events\",\n                \"end_time\": 1629989930144,\n                \"execution_duration\": 0,\n                \"existing_cluster_id\": \"0923-164208-meows279\",\n                \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}],\n                \"run_id\": 2112902,\n                \"run_page_url\": (\n                    \"https://my-workspace.cloud.databricks.com/#job/39832/run/22\"\n                ),\n                \"setup_duration\": 0,\n                \"spark_jar_task\": {\"main_class_name\": \"com.databricks.Sessionize\"},\n                \"start_time\": 1629989929668,\n                \"state\": {\n                    \"life_cycle_state\": \"INTERNAL_ERROR\",\n                    \"result_state\": \"FAILED\",\n                    \"state_message\": (\n                        \"Library installation failed for library due to user error.\"\n                        \" Error messages:\\n'Manage' permissions are required to install\"\n                        \" libraries on a cluster\"\n                    ),\n                    \"user_cancelled_or_timedout\": False,\n                },\n                \"task_key\": \"Sessionize\",\n            },\n        ],\n        max_items=100,\n    )\n    trigger: Optional[TriggerType] = Field(\n        None, description=\"The type of trigger that fired this run.\"\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RunLifeCycleState","title":"RunLifeCycleState","text":"

    Bases: str, Enum

    * `PENDING`: The run has been triggered. If there is not already an active run of the same job, the cluster and execution context are being prepared. If there is already an active run of the same job, the run immediately transitions into the `SKIPPED` state without preparing any resources.
    * `RUNNING`: The task of this run is being executed.
    * `TERMINATING`: The task of this run has completed, and the cluster and execution context are being cleaned up.
    * `TERMINATED`: The task of this run has completed, and the cluster and execution context have been cleaned up. This state is terminal.
    * `SKIPPED`: This run was aborted because a previous run of the same job was already active. This state is terminal.
    * `INTERNAL_ERROR`: An exceptional state that indicates a failure in the Jobs service, such as network failure over a long period. If a run on a new cluster ends in the `INTERNAL_ERROR` state, the Jobs service terminates the cluster as soon as possible. This state is terminal.
    * `BLOCKED`: The run is blocked on an upstream dependency.
    * `WAITING_FOR_RETRY`: The run is waiting for a retry.
    Source code in prefect_databricks/models/jobs.py
    class RunLifeCycleState(str, Enum):\n    \"\"\"\n        * `PENDING`: The run has been triggered. If there is not already an active run of the same job, the cluster and execution context are being prepared. If there is already an active run of the same job, the run immediately transitions into the `SKIPPED` state without preparing any resources.\n    * `RUNNING`: The task of this run is being executed.\n    * `TERMINATING`: The task of this run has completed, and the cluster and execution context are being cleaned up.\n    * `TERMINATED`: The task of this run has completed, and the cluster and execution context have been cleaned up. This state is terminal.\n    * `SKIPPED`: This run was aborted because a previous run of the same job was already active. This state is terminal.\n    * `INTERNAL_ERROR`: An exceptional state that indicates a failure in the Jobs service, such as network failure over a long period. If a run on a new cluster ends in the `INTERNAL_ERROR` state, the Jobs service terminates the cluster as soon as possible. This state is terminal.\n    * `BLOCKED`: The run is blocked on an upstream dependency.\n    * `WAITING_FOR_RETRY`: The run is waiting for a retry.\n    \"\"\"\n\n    terminated = \"TERMINATED\"\n    pending = \"PENDING\"\n    running = \"RUNNING\"\n    terminating = \"TERMINATING\"\n    skipped = \"SKIPPED\"\n    internalerror = \"INTERNAL_ERROR\"\n    blocked = \"BLOCKED\"\n    waitingforretry = \"WAITING_FOR_RETRY\"\n
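    A minimal sketch of a helper, based on the terminal states listed above, for deciding when to stop polling a run:

    from prefect_databricks.models.jobs import RunLifeCycleState

    # TERMINATED, SKIPPED, and INTERNAL_ERROR are the terminal lifecycle states.
    TERMINAL_STATES = {
        RunLifeCycleState.terminated,
        RunLifeCycleState.skipped,
        RunLifeCycleState.internalerror,
    }

    def is_finished(state: RunLifeCycleState) -> bool:
        """Return True once the run can no longer change lifecycle state."""
        return state in TERMINAL_STATES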
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RunNowInput","title":"RunNowInput","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class RunNowInput(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    idempotency_token: Optional[str] = Field(\n        None,\n        description=(\n            \"An optional token to guarantee the idempotency of job run requests. If a\"\n            \" run with the provided token already exists, the request does not create a\"\n            \" new run but returns the ID of the existing run instead. If a run with the\"\n            \" provided token is deleted, an error is returned.\\n\\nIf you specify the\"\n            \" idempotency token, upon failure you can retry until the request succeeds.\"\n            \" Databricks guarantees that exactly one run is launched with that\"\n            \" idempotency token.\\n\\nThis token must have at most 64 characters.\\n\\nFor\"\n            \" more information, see [How to ensure idempotency for\"\n            \" jobs](https://kb.databricks.com/jobs/jobs-idempotency.html).\"\n        ),\n        example=\"8f018174-4792-40d5-bcbc-3e6a527352c8\",\n    )\n    job_id: Optional[int] = Field(\n        None, description=\"The ID of the job to be executed\", example=11223344\n    )\n
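    A short sketch, reusing the example values above, of a run-now request whose idempotency token guarantees that a retried request never launches a duplicate run:

    from prefect_databricks.models.jobs import RunNowInput

    run_now = RunNowInput(
        job_id=11223344,
        idempotency_token="8f018174-4792-40d5-bcbc-3e6a527352c8",
    )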
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RunParameters","title":"RunParameters","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class RunParameters(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    dbt_commands: Optional[List] = Field(\n        None,\n        description=(\n            \"An array of commands to execute for jobs with the dbt task, for example\"\n            ' `\"dbt_commands\": [\"dbt deps\", \"dbt seed\", \"dbt run\"]`'\n        ),\n        example=[\"dbt deps\", \"dbt seed\", \"dbt run\"],\n    )\n    jar_params: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"A list of parameters for jobs with Spark JAR tasks, for example\"\n            ' `\"jar_params\": [\"john doe\", \"35\"]`. The parameters are used to invoke the'\n            \" main function of the main class specified in the Spark JAR task. If not\"\n            \" specified upon `run-now`, it defaults to an empty list. jar_params cannot\"\n            \" be specified in conjunction with notebook_params. The JSON representation\"\n            ' of this field (for example `{\"jar_params\":[\"john doe\",\"35\"]}`) cannot'\n            \" exceed 10,000 bytes.\\n\\nUse [Task parameter\"\n            \" variables](https://docs.databricks.com/jobs.html#parameter-variables) to\"\n            \" set parameters containing information about job runs.\"\n        ),\n        example=[\"john\", \"doe\", \"35\"],\n    )\n    notebook_params: Optional[Dict[str, Any]] = Field(\n        None,\n        description=(\n            \"A map from keys to values for jobs with notebook task, for example\"\n            ' `\"notebook_params\": {\"name\": \"john doe\", \"age\": \"35\"}`. The map is passed'\n            \" to the notebook and is accessible through the\"\n            \" [dbutils.widgets.get](https://docs.databricks.com/dev-tools/databricks-utils.html#dbutils-widgets)\"\n            \" function.\\n\\nIf not specified upon `run-now`, the triggered run uses the\"\n            \" job\u2019s base parameters.\\n\\nnotebook_params cannot be specified in\"\n            \" conjunction with jar_params.\\n\\nUse [Task parameter\"\n            \" variables](https://docs.databricks.com/jobs.html#parameter-variables) to\"\n            \" set parameters containing information about job runs.\\n\\nThe JSON\"\n            \" representation of this field (for example\"\n            ' `{\"notebook_params\":{\"name\":\"john doe\",\"age\":\"35\"}}`) cannot exceed'\n            \" 10,000 bytes.\"\n        ),\n        example={\"age\": \"35\", \"name\": \"john doe\"},\n    )\n    pipeline_params: Optional[PipelineParams] = None\n    python_named_params: Optional[Dict[str, Any]] = Field(\n        None,\n        description=(\n            \"A map from keys to values for jobs with Python wheel task, for example\"\n            ' `\"python_named_params\": {\"name\": \"task\", \"data\":'\n            ' \"dbfs:/path/to/data.json\"}`.'\n        ),\n        example={\"data\": \"dbfs:/path/to/data.json\", \"name\": \"task\"},\n    )\n    python_params: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"A list of parameters for jobs with Python tasks, for example\"\n            ' `\"python_params\": [\"john doe\", \"35\"]`. The parameters are passed to'\n            \" Python file as command-line parameters. If specified upon `run-now`, it\"\n            \" would overwrite the parameters specified in job setting. 
The JSON\"\n            ' representation of this field (for example `{\"python_params\":[\"john'\n            ' doe\",\"35\"]}`) cannot exceed 10,000 bytes.\\n\\nUse [Task parameter'\n            \" variables](https://docs.databricks.com/jobs.html#parameter-variables) to\"\n            \" set parameters containing information about job\"\n            \" runs.\\n\\nImportant\\n\\nThese parameters accept only Latin characters\"\n            \" (ASCII character set). Using non-ASCII characters returns an error.\"\n            \" Examples of invalid, non-ASCII characters are Chinese, Japanese kanjis,\"\n            \" and emojis.\"\n        ),\n        example=[\"john doe\", \"35\"],\n    )\n    spark_submit_params: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"A list of parameters for jobs with spark submit task, for example\"\n            ' `\"spark_submit_params\": [\"--class\",'\n            ' \"org.apache.spark.examples.SparkPi\"]`. The parameters are passed to'\n            \" spark-submit script as command-line parameters. If specified upon\"\n            \" `run-now`, it would overwrite the parameters specified in job setting.\"\n            \" The JSON representation of this field (for example\"\n            ' `{\"python_params\":[\"john doe\",\"35\"]}`) cannot exceed 10,000 bytes.\\n\\nUse'\n            \" [Task parameter\"\n            \" variables](https://docs.databricks.com/jobs.html#parameter-variables) to\"\n            \" set parameters containing information about job\"\n            \" runs.\\n\\nImportant\\n\\nThese parameters accept only Latin characters\"\n            \" (ASCII character set). Using non-ASCII characters returns an error.\"\n            \" Examples of invalid, non-ASCII characters are Chinese, Japanese kanjis,\"\n            \" and emojis.\"\n        ),\n        example=[\"--class\", \"org.apache.spark.examples.SparkPi\"],\n    )\n    sql_params: Optional[Dict[str, Any]] = Field(\n        None,\n        description=(\n            'A map from keys to values for SQL tasks, for example `\"sql_params\":'\n            ' {\"name\": \"john doe\", \"age\": \"35\"}`. The SQL alert task does not support'\n            \" custom parameters.\"\n        ),\n        example={\"age\": \"35\", \"name\": \"john doe\"},\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RunResultState","title":"RunResultState","text":"

    Bases: str, Enum

    * `SUCCESS`: The task completed successfully.
    * `FAILED`: The task completed with an error.
    * `TIMEDOUT`: The run was stopped after reaching the timeout.
    * `CANCELED`: The run was canceled at user request.
    Source code in prefect_databricks/models/jobs.py
    class RunResultState(str, Enum):\n    \"\"\"\n        * `SUCCESS`: The task completed successfully.\n    * `FAILED`: The task completed with an error.\n    * `TIMEDOUT`: The run was stopped after reaching the timeout.\n    * `CANCELED`: The run was canceled at user request.\n    \"\"\"\n\n    success = \"SUCCESS\"\n    failed = \"FAILED\"\n    timedout = \"TIMEDOUT\"\n    canceled = \"CANCELED\"\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RunState","title":"RunState","text":"

    Bases: BaseModel

    See source code for the fields' description.

    The result and lifecycle state of the run.

    Source code in prefect_databricks/models/jobs.py
    class RunState(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n\n    The result and lifecycle state of the run.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    life_cycle_state: Optional[RunLifeCycleState] = Field(\n        None,\n        description=(\n            \"A description of a run\u2019s current location in the run lifecycle. This field\"\n            \" is always available in the response.\"\n        ),\n    )\n    result_state: Optional[RunResultState] = None\n    state_message: Optional[str] = Field(\n        None,\n        description=(\n            \"A descriptive message for the current state. This field is unstructured,\"\n            \" and its exact format is subject to change.\"\n        ),\n        example=\"\",\n    )\n    user_cancelled_or_timedout: Optional[bool] = Field(\n        None,\n        description=(\n            \"Whether a run was canceled manually by a user or by the scheduler because\"\n            \" the run timed out.\"\n        ),\n        example=False,\n    )\n
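    An illustrative sketch of interpreting the state of a finished run using the enums documented above:

    from prefect_databricks.models.jobs import RunLifeCycleState, RunResultState, RunState

    # Hypothetical state for a run that terminated successfully.
    state = RunState(
        life_cycle_state=RunLifeCycleState.terminated,
        result_state=RunResultState.success,
        user_cancelled_or_timedout=False,
    )
    succeeded = state.result_state == RunResultState.success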
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RunSubmitSettings","title":"RunSubmitSettings","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class RunSubmitSettings(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    git_source: Optional[GitSource1] = Field(\n        None,\n        description=(\n            \"This functionality is in Public Preview.\\n\\nAn optional specification for\"\n            \" a remote repository containing the notebooks used by this job's notebook\"\n            \" tasks.\"\n        ),\n        example={\n            \"git_branch\": \"main\",\n            \"git_provider\": \"gitHub\",\n            \"git_url\": \"https://github.com/databricks/databricks-cli\",\n        },\n    )\n    idempotency_token: Optional[str] = Field(\n        None,\n        description=(\n            \"An optional token that can be used to guarantee the idempotency of job run\"\n            \" requests. If a run with the provided token already exists, the request\"\n            \" does not create a new run but returns the ID of the existing run instead.\"\n            \" If a run with the provided token is deleted, an error is returned.\\n\\nIf\"\n            \" you specify the idempotency token, upon failure you can retry until the\"\n            \" request succeeds. Databricks guarantees that exactly one run is launched\"\n            \" with that idempotency token.\\n\\nThis token must have at most 64\"\n            \" characters.\\n\\nFor more information, see [How to ensure idempotency for\"\n            \" jobs](https://kb.databricks.com/jobs/jobs-idempotency.html).\"\n        ),\n        example=\"8f018174-4792-40d5-bcbc-3e6a527352c8\",\n    )\n    run_name: Optional[str] = Field(\n        None,\n        description=\"An optional name for the run. The default value is `Untitled`.\",\n        example=\"A multitask job run\",\n    )\n    tasks: Optional[List[RunSubmitTaskSettings]] = Field(\n        None,\n        example=[\n            {\n                \"depends_on\": [],\n                \"description\": \"Extracts session data from events\",\n                \"existing_cluster_id\": \"0923-164208-meows279\",\n                \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}],\n                \"spark_jar_task\": {\n                    \"main_class_name\": \"com.databricks.Sessionize\",\n                    \"parameters\": [\"--data\", \"dbfs:/path/to/data.json\"],\n                },\n                \"task_key\": \"Sessionize\",\n                \"timeout_seconds\": 86400,\n            },\n            {\n                \"depends_on\": [],\n                \"description\": \"Ingests order data\",\n                \"existing_cluster_id\": \"0923-164208-meows279\",\n                \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}],\n                \"spark_jar_task\": {\n                    \"main_class_name\": \"com.databricks.OrdersIngest\",\n                    \"parameters\": [\"--data\", \"dbfs:/path/to/order-data.json\"],\n                },\n                \"task_key\": \"Orders_Ingest\",\n                \"timeout_seconds\": 86400,\n            },\n            {\n                \"depends_on\": [\n                    {\"task_key\": \"Orders_Ingest\"},\n                    {\"task_key\": \"Sessionize\"},\n                ],\n                \"description\": \"Matches orders with user sessions\",\n                \"new_cluster\": {\n                    \"autoscale\": {\"max_workers\": 16, \"min_workers\": 2},\n                    \"aws_attributes\": 
{\"availability\": \"SPOT\", \"zone_id\": \"us-west-2a\"},\n                    \"node_type_id\": \"i3.xlarge\",\n                    \"spark_conf\": {\"spark.speculation\": True},\n                    \"spark_version\": \"7.3.x-scala2.12\",\n                },\n                \"notebook_task\": {\n                    \"base_parameters\": {\"age\": \"35\", \"name\": \"John Doe\"},\n                    \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n                    \"source\": \"WORKSPACE\",\n                },\n                \"task_key\": \"Match\",\n                \"timeout_seconds\": 86400,\n            },\n        ],\n        max_items=100,\n    )\n    timeout_seconds: Optional[int] = Field(\n        None,\n        description=(\n            \"An optional timeout applied to each run of this job. The default behavior\"\n            \" is to have no timeout.\"\n        ),\n        example=86400,\n    )\n    webhook_notifications: Optional[WebhookNotifications] = Field(\n        None,\n        description=(\n            \"A collection of system notification IDs to notify when runs of this job\"\n            \" begin or complete. The default behavior is to not send any system\"\n            \" notifications.\"\n        ),\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RunSubmitTaskSettings","title":"RunSubmitTaskSettings","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class RunSubmitTaskSettings(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    dbt_task: Optional[DbtTask] = Field(\n        None,\n        description=(\n            \"If dbt_task, indicates that this must execute a dbt task. It requires both\"\n            \" Databricks SQL and the ability to use a serverless or a pro SQL\"\n            \" warehouse.\"\n        ),\n    )\n    depends_on: Optional[TaskDependencies] = None\n    existing_cluster_id: Optional[str] = Field(\n        None,\n        description=(\n            \"If existing_cluster_id, the ID of an existing cluster that is used for all\"\n            \" runs of this task. When running tasks on an existing cluster, you may\"\n            \" need to manually restart the cluster if it stops responding. We suggest\"\n            \" running jobs on new clusters for greater reliability.\"\n        ),\n        example=\"0923-164208-meows279\",\n    )\n    libraries: Optional[List[Library]] = Field(\n        None,\n        description=(\n            \"An optional list of libraries to be installed on the cluster that executes\"\n            \" the task. The default value is an empty list.\"\n        ),\n    )\n    new_cluster: Optional[NewCluster] = Field(\n        None,\n        description=(\n            \"If new_cluster, a description of a cluster that is created for each run.\"\n        ),\n    )\n    notebook_task: Optional[NotebookTask] = Field(\n        None,\n        description=(\n            \"If notebook_task, indicates that this task must run a notebook. This field\"\n            \" may not be specified in conjunction with spark_jar_task.\"\n        ),\n    )\n    pipeline_task: Optional[PipelineTask] = Field(\n        None,\n        description=(\n            \"If pipeline_task, indicates that this task must execute a Pipeline.\"\n        ),\n    )\n    python_wheel_task: Optional[PythonWheelTask] = Field(\n        None,\n        description=(\n            \"If python_wheel_task, indicates that this job must execute a PythonWheel.\"\n        ),\n    )\n    spark_jar_task: Optional[SparkJarTask] = Field(\n        None, description=\"If spark_jar_task, indicates that this task must run a JAR.\"\n    )\n    spark_python_task: Optional[SparkPythonTask] = Field(\n        None,\n        description=(\n            \"If spark_python_task, indicates that this task must run a Python file.\"\n        ),\n    )\n    spark_submit_task: Optional[SparkSubmitTask] = Field(\n        None,\n        description=(\n            \"If spark_submit_task, indicates that this task must be launched by the\"\n            \" spark submit script.\"\n        ),\n    )\n    sql_task: Optional[SqlTask] = Field(\n        None,\n        description=(\n            \"If sql_task, indicates that this job must execute a SQL task. It requires\"\n            \" both Databricks SQL and a serverless or a pro SQL warehouse.\"\n        ),\n    )\n    task_key: TaskKey\n    timeout_seconds: Optional[int] = Field(\n        None,\n        description=(\n            \"An optional timeout applied to each run of this job task. The default\"\n            \" behavior is to have no timeout.\"\n        ),\n        example=86400,\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RunTask","title":"RunTask","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class RunTask(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    attempt_number: Optional[int] = Field(\n        None,\n        description=(\n            \"The sequence number of this run attempt for a triggered job run. The\"\n            \" initial attempt of a run has an attempt_number of 0\\\\. If the initial run\"\n            \" attempt fails, and the job has a retry policy (`max_retries` \\\\> 0),\"\n            \" subsequent runs are created with an `original_attempt_run_id` of the\"\n            \" original attempt\u2019s ID and an incrementing `attempt_number`. Runs are\"\n            \" retried only until they succeed, and the maximum `attempt_number` is the\"\n            \" same as the `max_retries` value for the job.\"\n        ),\n        example=0,\n    )\n    cleanup_duration: Optional[int] = Field(\n        None,\n        description=(\n            \"The time in milliseconds it took to terminate the cluster and clean up any\"\n            \" associated artifacts. The total duration of the run is the sum of the\"\n            \" setup_duration, the execution_duration, and the cleanup_duration.\"\n        ),\n        example=0,\n    )\n    cluster_instance: Optional[ClusterInstance] = Field(\n        None,\n        description=(\n            \"The cluster used for this run. If the run is specified to use a new\"\n            \" cluster, this field is set once the Jobs service has requested a cluster\"\n            \" for the run.\"\n        ),\n    )\n    dbt_task: Optional[DbtTask] = Field(\n        None,\n        description=(\n            \"If dbt_task, indicates that this must execute a dbt task. It requires both\"\n            \" Databricks SQL and the ability to use a serverless or a pro SQL\"\n            \" warehouse.\"\n        ),\n    )\n    depends_on: Optional[TaskDependencies] = None\n    description: Optional[TaskDescription] = None\n    end_time: Optional[int] = Field(\n        None,\n        description=(\n            \"The time at which this run ended in epoch milliseconds (milliseconds since\"\n            \" 1/1/1970 UTC). This field is set to 0 if the job is still running.\"\n        ),\n        example=1625060863413,\n    )\n    execution_duration: Optional[int] = Field(\n        None,\n        description=(\n            \"The time in milliseconds it took to execute the commands in the JAR or\"\n            \" notebook until they completed, failed, timed out, were cancelled, or\"\n            \" encountered an unexpected error.\"\n        ),\n        example=0,\n    )\n    existing_cluster_id: Optional[str] = Field(\n        None,\n        description=(\n            \"If existing_cluster_id, the ID of an existing cluster that is used for all\"\n            \" runs of this job. When running jobs on an existing cluster, you may need\"\n            \" to manually restart the cluster if it stops responding. 
We suggest\"\n            \" running jobs on new clusters for greater reliability.\"\n        ),\n    )\n    git_source: Optional[GitSource1] = Field(\n        None,\n        description=(\n            \"This functionality is in Public Preview.\\n\\nAn optional specification for\"\n            \" a remote repository containing the notebooks used by this job's notebook\"\n            \" tasks.\"\n        ),\n        example={\n            \"git_branch\": \"main\",\n            \"git_provider\": \"gitHub\",\n            \"git_url\": \"https://github.com/databricks/databricks-cli\",\n        },\n    )\n    libraries: Optional[List[Library]] = Field(\n        None,\n        description=(\n            \"An optional list of libraries to be installed on the cluster that executes\"\n            \" the job. The default value is an empty list.\"\n        ),\n    )\n    new_cluster: Optional[NewCluster] = Field(\n        None,\n        description=(\n            \"If new_cluster, a description of a cluster that is created for each run.\"\n        ),\n    )\n    notebook_task: Optional[NotebookTask] = Field(\n        None,\n        description=(\n            \"If notebook_task, indicates that this job must run a notebook. This field\"\n            \" may not be specified in conjunction with spark_jar_task.\"\n        ),\n    )\n    pipeline_task: Optional[PipelineTask] = Field(\n        None,\n        description=(\n            \"If pipeline_task, indicates that this job must execute a Pipeline.\"\n        ),\n    )\n    python_wheel_task: Optional[PythonWheelTask] = Field(\n        None,\n        description=(\n            \"If python_wheel_task, indicates that this job must execute a PythonWheel.\"\n        ),\n    )\n    run_id: Optional[int] = Field(\n        None, description=\"The ID of the task run.\", example=99887766\n    )\n    setup_duration: Optional[int] = Field(\n        None,\n        description=(\n            \"The time it took to set up the cluster in milliseconds. For runs that run\"\n            \" on new clusters this is the cluster creation time, for runs that run on\"\n            \" existing clusters this time should be very short.\"\n        ),\n        example=0,\n    )\n    spark_jar_task: Optional[SparkJarTask] = Field(\n        None, description=\"If spark_jar_task, indicates that this job must run a JAR.\"\n    )\n    spark_python_task: Optional[SparkPythonTask] = Field(\n        None,\n        description=(\n            \"If spark_python_task, indicates that this job must run a Python file.\"\n        ),\n    )\n    spark_submit_task: Optional[SparkSubmitTask] = Field(\n        None,\n        description=(\n            \"If spark_submit_task, indicates that this job must be launched by the\"\n            \" spark submit script.\"\n        ),\n    )\n    sql_task: Optional[SqlTask] = Field(\n        None,\n        description=(\n            \"If sql_task, indicates that this job must execute a SQL task. It requires\"\n            \" both Databricks SQL and a serverless or a pro SQL warehouse.\"\n        ),\n    )\n    start_time: Optional[int] = Field(\n        None,\n        description=(\n            \"The time at which this run was started in epoch milliseconds (milliseconds\"\n            \" since 1/1/1970 UTC). 
This may not be the time when the job task starts\"\n            \" executing, for example, if the job is scheduled to run on a new cluster,\"\n            \" this is the time the cluster creation call is issued.\"\n        ),\n        example=1625060460483,\n    )\n    state: Optional[RunState] = Field(\n        None, description=\"The result and lifecycle states of the run.\"\n    )\n    task_key: Optional[TaskKey] = None\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RunType","title":"RunType","text":"

    Bases: str, Enum

    The type of the run.
    * `JOB_RUN` - Normal job run. A run created with Run now.
    * `WORKFLOW_RUN` - Workflow run. A run created with dbutils.notebook.run.
    * `SUBMIT_RUN` - Submit run. A run created with Run Submit.
    Source code in prefect_databricks/models/jobs.py
    class RunType(str, Enum):\n    \"\"\"\n        The type of the run.\n    * `JOB_RUN` \\- Normal job run. A run created with [Run now](https://docs.databricks.com/dev-tools/api/latest/jobs.html#operation/JobsRunNow).\n    * `WORKFLOW_RUN` \\- Workflow run. A run created with [dbutils.notebook.run](https://docs.databricks.com/dev-tools/databricks-utils.html#dbutils-workflow).\n    * `SUBMIT_RUN` \\- Submit run. A run created with [Run Submit](https://docs.databricks.com/dev-tools/api/latest/jobs.html#operation/JobsRunsSubmit).\n    \"\"\"\n\n    jobrun = \"JOB_RUN\"\n    workflowrun = \"WORKFLOW_RUN\"\n    submitrun = \"SUBMIT_RUN\"\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.S3StorageInfo","title":"S3StorageInfo","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class S3StorageInfo(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    canned_acl: Optional[str] = Field(\n        None,\n        description=(\n            \"(Optional) Set canned access control list. For example:\"\n            \" `bucket-owner-full-control`. If canned_acl is set, the cluster instance\"\n            \" profile must have `s3:PutObjectAcl` permission on the destination bucket\"\n            \" and prefix. The full list of possible canned ACLs can be found at\"\n            \" <https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl>.\"\n            \" By default only the object owner gets full control. If you are using\"\n            \" cross account role for writing data, you may want to set\"\n            \" `bucket-owner-full-control` to make bucket owner able to read the logs.\"\n        ),\n    )\n    destination: Optional[str] = Field(\n        None,\n        description=(\n            \"S3 destination. For example: `s3://my-bucket/some-prefix` You must\"\n            \" configure the cluster with an instance profile and the instance profile\"\n            \" must have write access to the destination. You _cannot_ use AWS keys.\"\n        ),\n    )\n    enable_encryption: Optional[bool] = Field(\n        None, description=\"(Optional)Enable server side encryption, `false` by default.\"\n    )\n    encryption_type: Optional[str] = Field(\n        None,\n        description=(\n            \"(Optional) The encryption type, it could be `sse-s3` or `sse-kms`. It is\"\n            \" used only when encryption is enabled and the default type is `sse-s3`.\"\n        ),\n    )\n    endpoint: Optional[str] = Field(\n        None,\n        description=(\n            \"S3 endpoint. For example: `https://s3-us-west-2.amazonaws.com`. Either\"\n            \" region or endpoint must be set. If both are set, endpoint is used.\"\n        ),\n    )\n    kms_key: Optional[str] = Field(\n        None,\n        description=(\n            \"(Optional) KMS key used if encryption is enabled and encryption type is\"\n            \" set to `sse-kms`.\"\n        ),\n    )\n    region: Optional[str] = Field(\n        None,\n        description=(\n            \"S3 region. For example: `us-west-2`. Either region or endpoint must be\"\n            \" set. If both are set, endpoint is used.\"\n        ),\n    )\n
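    A minimal sketch (bucket name assumed) of an S3 log destination with server-side encryption; the cluster's instance profile must have write access to the destination:

    from prefect_databricks.models.jobs import S3StorageInfo

    log_destination = S3StorageInfo(
        destination="s3://my-bucket/some-prefix",
        region="us-west-2",
        enable_encryption=True,
        canned_acl="bucket-owner-full-control",
    )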
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ServicePrincipalName","title":"ServicePrincipalName","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class ServicePrincipalName(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    __root__: str = Field(\n        ...,\n        description=\"Name of an Azure service principal.\",\n        example=\"9f0621ee-b52b-11ea-b3de-0242ac130004\",\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SparkConfPair","title":"SparkConfPair","text":"

    Bases: BaseModel

    See source code for the fields' description.

    An arbitrary object where the object key is a configuration property name and the value is a configuration property value.

    Source code in prefect_databricks/models/jobs.py
    class SparkConfPair(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n\n    An arbitrary object where the object key is a configuration property name and the value is a configuration property value.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n\n        allow_mutation = False\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SparkEnvPair","title":"SparkEnvPair","text":"

    Bases: BaseModel

    See source code for the fields' description.

    An arbitrary object where the object key is an environment variable name and the value is an environment variable value.

    Source code in prefect_databricks/models/jobs.py
    class SparkEnvPair(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n\n    An arbitrary object where the object key is an environment variable name and the value is an environment variable value.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n\n        allow_mutation = False\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SparkJarTask","title":"SparkJarTask","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class SparkJarTask(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    jar_uri: Optional[str] = Field(\n        None,\n        deprecated=True,\n        description=(\n            \"Deprecated since 04/2016\\\\. Provide a `jar` through the `libraries` field\"\n            \" instead. For an example, see\"\n            \" [Create](https://docs.databricks.com/dev-tools/api/latest/jobs.html#operation/JobsCreate).\"\n        ),\n    )\n    main_class_name: Optional[str] = Field(\n        None,\n        description=(\n            \"The full name of the class containing the main method to be executed. This\"\n            \" class must be contained in a JAR provided as a library.\\n\\nThe code must\"\n            \" use `SparkContext.getOrCreate` to obtain a Spark context; otherwise, runs\"\n            \" of the job fail.\"\n        ),\n        example=\"com.databricks.ComputeModels\",\n    )\n    parameters: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"Parameters passed to the main method.\\n\\nUse [Task parameter\"\n            \" variables](https://docs.databricks.com/jobs.html#parameter-variables) to\"\n            \" set parameters containing information about job runs.\"\n        ),\n        example=[\"--data\", \"dbfs:/path/to/data.json\"],\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SparkNode","title":"SparkNode","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class SparkNode(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    host_private_ip: Optional[str] = Field(\n        None, description=\"The private IP address of the host instance.\"\n    )\n    instance_id: Optional[str] = Field(\n        None,\n        description=(\n            \"Globally unique identifier for the host instance from the cloud provider.\"\n        ),\n    )\n    node_aws_attributes: Optional[SparkNodeAwsAttributes] = Field(\n        None, description=\"Attributes specific to AWS for a Spark node.\"\n    )\n    node_id: Optional[str] = Field(\n        None, description=\"Globally unique identifier for this node.\"\n    )\n    private_ip: Optional[str] = Field(\n        None,\n        description=(\n            \"Private IP address (typically a 10.x.x.x address) of the Spark node. This\"\n            \" is different from the private IP address of the host instance.\"\n        ),\n    )\n    public_dns: Optional[str] = Field(\n        None,\n        description=(\n            \"Public DNS address of this node. This address can be used to access the\"\n            \" Spark JDBC server on the driver node. To communicate with the JDBC\"\n            \" server, traffic must be manually authorized by adding security group\"\n            \" rules to the \u201cworker-unmanaged\u201d security group via the AWS console.\"\n        ),\n    )\n    start_timestamp: Optional[int] = Field(\n        None,\n        description=\"The timestamp (in millisecond) when the Spark node is launched.\",\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SparkNodeAwsAttributes","title":"SparkNodeAwsAttributes","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class SparkNodeAwsAttributes(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    is_spot: Optional[bool] = Field(\n        None, description=\"Whether this node is on an Amazon spot instance.\"\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SparkPythonTask","title":"SparkPythonTask","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class SparkPythonTask(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    parameters: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"Command line parameters passed to the Python file.\\n\\nUse [Task parameter\"\n            \" variables](https://docs.databricks.com/jobs.html#parameter-variables) to\"\n            \" set parameters containing information about job runs.\"\n        ),\n        example=[\"--data\", \"dbfs:/path/to/data.json\"],\n    )\n    python_file: str = Field(\n        ...,\n        description=(\n            \"The Python file to be executed. Cloud file URIs (such as dbfs:/, s3:/,\"\n            \" adls:/, gcs:/) and workspace paths are supported. For python files stored\"\n            \" in the Databricks workspace, the path must be absolute and begin with\"\n            \" `/Repos`. This field is required.\"\n        ),\n        example=\"dbfs:/path/to/file.py\",\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SparkSubmitTask","title":"SparkSubmitTask","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class SparkSubmitTask(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    parameters: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"Command-line parameters passed to spark submit.\\n\\nUse [Task parameter\"\n            \" variables](https://docs.databricks.com/jobs.html#parameter-variables) to\"\n            \" set parameters containing information about job runs.\"\n        ),\n        example=[\n            \"--class\",\n            \"org.apache.spark.examples.SparkPi\",\n            \"dbfs:/path/to/examples.jar\",\n            \"10\",\n        ],\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SparkVersion","title":"SparkVersion","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class SparkVersion(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    key: Optional[str] = Field(\n        None,\n        description=(\n            \"[Databricks Runtime\"\n            \" version](https://docs.databricks.com/dev-tools/api/latest/index.html#programmatic-version)\"\n            \" key, for example `7.3.x-scala2.12`. The value that must be provided as\"\n            \" the `spark_version` when creating a new cluster. The exact runtime\"\n            \" version may change over time for a \u201cwildcard\u201d version (that is,\"\n            \" `7.3.x-scala2.12` is a \u201cwildcard\u201d version) with minor bug fixes.\"\n        ),\n    )\n    name: Optional[str] = Field(\n        None,\n        description=(\n            \"A descriptive name for the runtime version, for example \u201cDatabricks\"\n            \" Runtime 7.3 LTS\u201d.\"\n        ),\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SqlAlertOutput","title":"SqlAlertOutput","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class SqlAlertOutput(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    output_link: Optional[str] = Field(\n        None, description=\"The link to find the output results.\"\n    )\n    query_text: Optional[str] = Field(\n        None,\n        description=(\n            \"The text of the SQL query. Can Run permission of the SQL query associated\"\n            \" with the SQL alert is required to view this field.\"\n        ),\n    )\n    sql_statements: Optional[SqlStatementOutput] = Field(\n        None, description=\"Information about SQL statements executed in the run.\"\n    )\n    warehouse_id: Optional[str] = Field(\n        None, description=\"The canonical identifier of the SQL warehouse.\"\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SqlDashboardOutput","title":"SqlDashboardOutput","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class SqlDashboardOutput(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    widgets: Optional[SqlDashboardWidgetOutput] = Field(\n        None,\n        description=(\n            \"Widgets executed in the run. Only SQL query based widgets are listed.\"\n        ),\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SqlDashboardWidgetOutput","title":"SqlDashboardWidgetOutput","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class SqlDashboardWidgetOutput(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    end_time: Optional[int] = Field(\n        None,\n        description=(\n            \"Time (in epoch milliseconds) when execution of the SQL widget ends.\"\n        ),\n    )\n    error: Optional[SqlOutputError] = Field(\n        None, description=\"The information about the error when execution fails.\"\n    )\n    output_link: Optional[str] = Field(\n        None, description=\"The link to find the output results.\"\n    )\n    start_time: Optional[int] = Field(\n        None,\n        description=(\n            \"Time (in epoch milliseconds) when execution of the SQL widget starts.\"\n        ),\n    )\n    status: Optional[\n        Literal[\"PENDING\", \"RUNNING\", \"SUCCESS\", \"FAILED\", \"CANCELLED\"]\n    ] = Field(None, description=\"The execution status of the SQL widget.\")\n    widget_id: Optional[str] = Field(\n        None, description=\"The canonical identifier of the SQL widget.\"\n    )\n    widget_title: Optional[str] = Field(\n        None, description=\"The title of the SQL widget.\"\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SqlOutput","title":"SqlOutput","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class SqlOutput(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    alert_output: Optional[SqlAlertOutput] = Field(\n        None, description=\"The output of a SQL alert task, if available.\"\n    )\n    dashboard_output: Optional[SqlDashboardOutput] = Field(\n        None, description=\"The output of a SQL dashboard task, if available.\"\n    )\n    query_output: Optional[SqlQueryOutput] = Field(\n        None, description=\"The output of a SQL query task, if available.\"\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SqlOutputError","title":"SqlOutputError","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class SqlOutputError(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    message: Optional[str] = Field(\n        None, description=\"The error message when execution fails.\"\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SqlQueryOutput","title":"SqlQueryOutput","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class SqlQueryOutput(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    output_link: Optional[str] = Field(\n        None, description=\"The link to find the output results.\"\n    )\n    query_text: Optional[str] = Field(\n        None,\n        description=(\n            \"The text of the SQL query. Can Run permission of the SQL query is required\"\n            \" to view this field.\"\n        ),\n    )\n    sql_statements: Optional[SqlStatementOutput] = Field(\n        None, description=\"Information about SQL statements executed in the run.\"\n    )\n    warehouse_id: Optional[str] = Field(\n        None, description=\"The canonical identifier of the SQL warehouse.\"\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SqlStatementOutput","title":"SqlStatementOutput","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class SqlStatementOutput(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    lookup_key: Optional[str] = Field(\n        None, description=\"A key that can be used to look up query details.\"\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SqlTask","title":"SqlTask","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class SqlTask(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    alert: Optional[SqlTaskAlert] = Field(\n        None, description=\"If alert, indicates that this job must refresh a SQL alert.\"\n    )\n    dashboard: Optional[SqlTaskDashboard] = Field(\n        None,\n        description=(\n            \"If dashboard, indicates that this job must refresh a SQL dashboard.\"\n        ),\n    )\n    parameters: Optional[Dict[str, Any]] = Field(\n        None,\n        description=(\n            \"Parameters to be used for each run of this job. The SQL alert task does\"\n            \" not support custom parameters.\"\n        ),\n        example={\"age\": 35, \"name\": \"John Doe\"},\n    )\n    query: Optional[SqlTaskQuery] = Field(\n        None, description=\"If query, indicates that this job must execute a SQL query.\"\n    )\n    warehouse_id: str = Field(\n        ...,\n        description=(\n            \"The canonical identifier of the SQL warehouse. Only serverless and pro SQL\"\n            \" warehouses are supported.\"\n        ),\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SqlTaskAlert","title":"SqlTaskAlert","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class SqlTaskAlert(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    alert_id: str = Field(..., description=\"The canonical identifier of the SQL alert.\")\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SqlTaskDashboard","title":"SqlTaskDashboard","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class SqlTaskDashboard(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    dashboard_id: str = Field(\n        ..., description=\"The canonical identifier of the SQL dashboard.\"\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SqlTaskQuery","title":"SqlTaskQuery","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class SqlTaskQuery(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    query_id: str = Field(..., description=\"The canonical identifier of the SQL query.\")\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.TaskDependencies","title":"TaskDependencies","text":"

    Bases: BaseModel

    See source code for the fields' description.

An optional array of objects specifying the dependency graph of the task. All tasks specified in this field must complete successfully before executing this task.

    The key is task_key, and the value is the name assigned to the dependent task. This field is required when a job consists of more than one task.

    Source code in prefect_databricks/models/jobs.py
    class TaskDependencies(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n\n        An optional array of objects specifying the dependency graph of the task. All tasks specified in this field must complete successfully before executing this task.\n    The key is `task_key`, and the value is the name assigned to the dependent task.\n    This field is required when a job consists of more than one task.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    __root__: List[TaskDependency] = Field(\n        ...,\n        description=(\n            \"An optional array of objects specifying the dependency graph of the task.\"\n            \" All tasks specified in this field must complete successfully before\"\n            \" executing this task.\\nThe key is `task_key`, and the value is the name\"\n            \" assigned to the dependent task.\\nThis field is required when a job\"\n            \" consists of more than one task.\"\n        ),\n        example=[{\"task_key\": \"Previous_Task_Key\"}, {\"task_key\": \"Other_Task_Key\"}],\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.TaskDependency","title":"TaskDependency","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class TaskDependency(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    task_key: Optional[str] = None\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.TaskDescription","title":"TaskDescription","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class TaskDescription(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    __root__: str = Field(\n        ...,\n        description=(\n            \"An optional description for this task.\\nThe maximum length is 4096 bytes.\"\n        ),\n        example=\"This is the description for this task.\",\n        max_length=4096,\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.TaskKey","title":"TaskKey","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class TaskKey(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    __root__: str = Field(\n        ...,\n        description=(\n            \"A unique name for the task. This field is used to refer to this task from\"\n            \" other tasks.\\nThis field is required and must be unique within its parent\"\n            \" job.\\nOn Update or Reset, this field is used to reference the tasks to be\"\n            \" updated or reset.\\nThe maximum length is 100 characters.\"\n        ),\n        example=\"Task_Key\",\n        max_length=100,\n        min_length=1,\n        regex=\"^[\\\\w\\\\-]+$\",\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.TerminationCode","title":"TerminationCode","text":"

    Bases: str, Enum

• USER_REQUEST: A user terminated the cluster directly. Parameters should include a username field that indicates the specific user who terminated the cluster.
    • JOB_FINISHED: The cluster was launched by a job, and terminated when the job completed.
    • INACTIVITY: The cluster was terminated since it was idle.
    • CLOUD_PROVIDER_SHUTDOWN: The instance that hosted the Spark driver was terminated by the cloud provider. In AWS, for example, AWS may retire instances and directly shut them down. Parameters should include an aws_instance_state_reason field indicating the AWS-provided reason why the instance was terminated.
    • COMMUNICATION_LOST: Databricks lost connection to services on the driver instance. For example, this can happen when problems arise in cloud networking infrastructure, or when the instance itself becomes unhealthy.
    • CLOUD_PROVIDER_LAUNCH_FAILURE: Databricks experienced a cloud provider failure when requesting instances to launch clusters. For example, AWS limits the number of running instances and EBS volumes. If you ask Databricks to launch a cluster that requires instances or EBS volumes that exceed your AWS limit, the cluster fails with this status code. Parameters should include one of aws_api_error_code, aws_instance_state_reason, or aws_spot_request_status to indicate the AWS-provided reason why Databricks could not request the required instances for the cluster.
    • SPARK_STARTUP_FAILURE: The cluster failed to initialize. Possible reasons may include failure to create the environment for Spark or issues launching the Spark master and worker processes.
    • INVALID_ARGUMENT: Cannot launch the cluster because the user specified an invalid argument. For example, the user might specify an invalid runtime version for the cluster.
    • UNEXPECTED_LAUNCH_FAILURE: While launching this cluster, Databricks failed to complete critical setup steps, terminating the cluster.
    • INTERNAL_ERROR: Databricks encountered an unexpected error that forced the running cluster to be terminated. Contact Databricks support for additional details.
    • SPARK_ERROR: The Spark driver failed to start. Possible reasons may include incompatible libraries and initialization scripts that corrupted the Spark container.
    • METASTORE_COMPONENT_UNHEALTHY: The cluster failed to start because the external metastore could not be reached. Refer to Troubleshooting.
    • DBFS_COMPONENT_UNHEALTHY: The cluster failed to start because Databricks File System (DBFS) could not be reached.
    • DRIVER_UNREACHABLE: Databricks was not able to access the Spark driver, because it was not reachable.
    • DRIVER_UNRESPONSIVE: Databricks was not able to access the Spark driver, because it was unresponsive.
    • INSTANCE_UNREACHABLE: Databricks was not able to access instances in order to start the cluster. This can be a transient networking issue. If the problem persists, this usually indicates a networking environment misconfiguration.
    • CONTAINER_LAUNCH_FAILURE: Databricks was unable to launch containers on worker nodes for the cluster. Have your admin check your network configuration.
    • INSTANCE_POOL_CLUSTER_FAILURE: Pool backed cluster specific failure. Refer to Pools for details.
    • REQUEST_REJECTED: Databricks cannot handle the request at this moment. Try again later and contact Databricks if the problem persists.
    • INIT_SCRIPT_FAILURE: Databricks cannot load and run a cluster-scoped init script on one of the cluster\u2019s nodes, or the init script terminates with a non-zero exit code. Refer to Init script logs.
    • TRIAL_EXPIRED: The Databricks trial subscription expired.
    Source code in prefect_databricks/models/jobs.py
    class TerminationCode(str, Enum):\n    \"\"\"\n        * USER_REQUEST: A user terminated the cluster directly. Parameters should include a `username` field that indicates the specific user who terminated the cluster.\n    * JOB_FINISHED: The cluster was launched by a job, and terminated when the job completed.\n    * INACTIVITY: The cluster was terminated since it was idle.\n    * CLOUD_PROVIDER_SHUTDOWN: The instance that hosted the Spark driver was terminated by the cloud provider. In AWS, for example, AWS may retire instances and directly shut them down. Parameters should include an `aws_instance_state_reason` field indicating the AWS-provided reason why the instance was terminated.\n    * COMMUNICATION_LOST: Databricks lost connection to services on the driver instance. For example, this can happen when problems arise in cloud networking infrastructure, or when the instance itself becomes unhealthy.\n    * CLOUD_PROVIDER_LAUNCH_FAILURE: Databricks experienced a cloud provider failure when requesting instances to launch clusters. For example, AWS limits the number of running instances and EBS volumes. If you ask Databricks to launch a cluster that requires instances or EBS volumes that exceed your AWS limit, the cluster fails with this status code. Parameters should include one of `aws_api_error_code`, `aws_instance_state_reason`, or `aws_spot_request_status` to indicate the AWS-provided reason why Databricks could not request the required instances for the cluster.\n    * SPARK_STARTUP_FAILURE: The cluster failed to initialize. Possible reasons may include failure to create the environment for Spark or issues launching the Spark master and worker processes.\n    * INVALID_ARGUMENT: Cannot launch the cluster because the user specified an invalid argument. For example, the user might specify an invalid runtime version for the cluster.\n    * UNEXPECTED_LAUNCH_FAILURE: While launching this cluster, Databricks failed to complete critical setup steps, terminating the cluster.\n    * INTERNAL_ERROR: Databricks encountered an unexpected error that forced the running cluster to be terminated. Contact Databricks support for additional details.\n    * SPARK_ERROR: The Spark driver failed to start. Possible reasons may include incompatible libraries and initialization scripts that corrupted the Spark container.\n    * METASTORE_COMPONENT_UNHEALTHY: The cluster failed to start because the external metastore could not be reached. Refer to [Troubleshooting](https://docs.databricks.com/data/metastores/external-hive-metastore.html#troubleshooting).\n    * DBFS_COMPONENT_UNHEALTHY: The cluster failed to start because Databricks File System (DBFS) could not be reached.\n    * DRIVER_UNREACHABLE: Databricks was not able to access the Spark driver, because it was not reachable.\n    * DRIVER_UNRESPONSIVE: Databricks was not able to access the Spark driver, because it was unresponsive.\n    * INSTANCE_UNREACHABLE: Databricks was not able to access instances in order to start the cluster. This can be a transient networking issue. If the problem persists, this usually indicates a networking environment misconfiguration.\n    * CONTAINER_LAUNCH_FAILURE: Databricks was unable to launch containers on worker nodes for the cluster. Have your admin check your network configuration.\n    * INSTANCE_POOL_CLUSTER_FAILURE: Pool backed cluster specific failure. 
Refer to [Pools](https://docs.databricks.com/clusters/instance-pools/index.html) for details.\n    * REQUEST_REJECTED: Databricks cannot handle the request at this moment. Try again later and contact Databricks if the problem persists.\n    * INIT_SCRIPT_FAILURE: Databricks cannot load and run a cluster-scoped init script on one of the cluster\u2019s nodes, or the init script terminates with a non-zero exit code. Refer to [Init script logs](https://docs.databricks.com/clusters/init-scripts.html#init-script-log).\n    * TRIAL_EXPIRED: The Databricks trial subscription expired.\n    \"\"\"\n\n    userrequest = \"USER_REQUEST\"\n    jobfinished = \"JOB_FINISHED\"\n    inactivity = \"INACTIVITY\"\n    cloudprovidershutdown = \"CLOUD_PROVIDER_SHUTDOWN\"\n    communicationlost = \"COMMUNICATION_LOST\"\n    cloudproviderlaunchfailure = \"CLOUD_PROVIDER_LAUNCH_FAILURE\"\n    sparkstartupfailure = \"SPARK_STARTUP_FAILURE\"\n    invalidargument = \"INVALID_ARGUMENT\"\n    unexpectedlaunchfailure = \"UNEXPECTED_LAUNCH_FAILURE\"\n    internalerror = \"INTERNAL_ERROR\"\n    sparkerror = \"SPARK_ERROR\"\n    metastorecomponentunhealthy = \"METASTORE_COMPONENT_UNHEALTHY\"\n    dbfscomponentunhealthy = \"DBFS_COMPONENT_UNHEALTHY\"\n    driverunreachable = \"DRIVER_UNREACHABLE\"\n    driverunresponsive = \"DRIVER_UNRESPONSIVE\"\n    instanceunreachable = \"INSTANCE_UNREACHABLE\"\n    containerlaunchfailure = \"CONTAINER_LAUNCH_FAILURE\"\n    instancepoolclusterfailure = \"INSTANCE_POOL_CLUSTER_FAILURE\"\n    requestrejected = \"REQUEST_REJECTED\"\n    initscriptfailure = \"INIT_SCRIPT_FAILURE\"\n    trialexpired = \"TRIAL_EXPIRED\"\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.TerminationParameter","title":"TerminationParameter","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class TerminationParameter(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    aws_api_error_code: Optional[str] = Field(\n        None,\n        description=(\n            \"The AWS provided error code describing why cluster nodes could not be\"\n            \" provisioned. For example, `InstanceLimitExceeded` indicates that the\"\n            \" limit of EC2 instances for a specific instance type has been exceeded.\"\n            \" For reference, see:\"\n            \" <https://docs.aws.amazon.com/AWSEC2/latest/APIReference/query-api-troubleshooting.html>.\"\n        ),\n    )\n    aws_error_message: Optional[str] = Field(\n        None,\n        description=(\n            \"Human-readable context of various failures from AWS. This field is\"\n            \" unstructured, and its exact format is subject to change.\"\n        ),\n    )\n    aws_impaired_status_details: Optional[str] = Field(\n        None,\n        description=(\n            \"The AWS provided status check which failed and induced a node loss. This\"\n            \" status may correspond to a failed instance or system check. For\"\n            \" reference, see\"\n            \" <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-system-instance-status-check.html>.\"\n        ),\n    )\n    aws_instance_state_reason: Optional[str] = Field(\n        None,\n        description=(\n            \"The AWS provided state reason describing why the driver node was\"\n            \" terminated. For example, `Client.VolumeLimitExceeded` indicates that the\"\n            \" limit of EBS volumes or total EBS volume storage has been exceeded. For\"\n            \" reference, see\"\n            \" <https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_StateReason.html>.\"\n        ),\n    )\n    aws_instance_status_event: Optional[str] = Field(\n        None,\n        description=(\n            \"The AWS provided scheduled event (for example reboot) which induced a node\"\n            \" loss. For reference, see\"\n            \" <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-instances-status-check_sched.html>.\"\n        ),\n    )\n    aws_spot_request_fault_code: Optional[str] = Field(\n        None,\n        description=(\n            \"Provides additional details when a spot request fails. For example\"\n            \" `InsufficientFreeAddressesInSubnet` indicates the subnet does not have\"\n            \" free IP addresses to accommodate the new instance. For reference, see\"\n            \" <https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-spot-instance-requests.html>.\"\n        ),\n    )\n    aws_spot_request_status: Optional[str] = Field(\n        None,\n        description=(\n            \"Describes why a spot request could not be fulfilled. For example,\"\n            \" `price-too-low` indicates that the max price was lower than the current\"\n            \" spot price. 
For reference, see:\"\n            \" <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-bid-status.html#spot-instance-bid-status-understand>.\"\n        ),\n    )\n    databricks_error_message: Optional[str] = Field(\n        None,\n        description=(\n            \"Additional context that may explain the reason for cluster termination.\"\n            \" This field is unstructured, and its exact format is subject to change.\"\n        ),\n    )\n    inactivity_duration_min: Optional[str] = Field(\n        None,\n        description=(\n            \"An idle cluster was shut down after being inactive for this duration.\"\n        ),\n    )\n    instance_id: Optional[str] = Field(\n        None, description=\"The ID of the instance that was hosting the Spark driver.\"\n    )\n    instance_pool_error_code: Optional[str] = Field(\n        None,\n        description=(\n            \"The [error\"\n            \" code](https://docs.databricks.com/dev-tools/api/latest/clusters.html#clusterterminationreasonpoolclusterterminationcode)\"\n            \" for cluster failures specific to a pool.\"\n        ),\n    )\n    instance_pool_id: Optional[str] = Field(\n        None, description=\"The ID of the instance pool the cluster is using.\"\n    )\n    username: Optional[str] = Field(\n        None, description=\"The username of the user who terminated the cluster.\"\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.TerminationReason","title":"TerminationReason","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class TerminationReason(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    code: Optional[TerminationCode] = Field(\n        None, description=\"Status code indicating why a cluster was terminated.\"\n    )\n    parameters: Optional[ParameterPair] = Field(\n        None,\n        description=(\n            \"Object containing a set of parameters that provide information about why a\"\n            \" cluster was terminated.\"\n        ),\n    )\n    type: Optional[TerminationType] = Field(\n        None, description=\"Reason indicating why a cluster was terminated.\"\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.TerminationType","title":"TerminationType","text":"

    Bases: str, Enum

• SUCCESS: Termination succeeded.
    • CLIENT_ERROR: Non-retriable. Client must fix parameters before reattempting the cluster creation.
    • SERVICE_FAULT: Databricks service issue. Client can retry.
• CLOUD_FAILURE: Cloud provider infrastructure issue. Client can retry after the underlying issue is resolved.
    Source code in prefect_databricks/models/jobs.py
    class TerminationType(str, Enum):\n    \"\"\"\n        * SUCCESS: Termination succeeded.\n    * CLIENT_ERROR: Non-retriable. Client must fix parameters before reattempting the cluster creation.\n    * SERVICE_FAULT: Databricks service issue. Client can retry.\n    * CLOUD_FAILURECloud provider infrastructure issue. Client can retry after the underlying issue is resolved.\n\n    \"\"\"\n\n    success = \"SUCCESS\"\n    clienterror = \"CLIENT_ERROR\"\n    servicefault = \"SERVICE_FAULT\"\n    cloudfailure = \"CLOUD_FAILURE\"\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.TriggerType","title":"TriggerType","text":"

    Bases: str, Enum

• PERIODIC: Schedules that periodically trigger runs, such as a cron scheduler.
    • ONE_TIME: One-time triggers that fire a single run. This occurs when you trigger a single run on demand through the UI or the API.
    • RETRY: Indicates a run that is triggered as a retry of a previously failed run. This occurs when you request to re-run the job in case of failures.
    Source code in prefect_databricks/models/jobs.py
    class TriggerType(str, Enum):\n    \"\"\"\n        * `PERIODIC`: Schedules that periodically trigger runs, such as a cron scheduler.\n    * `ONE_TIME`: One time triggers that fire a single run. This occurs you triggered a single run on demand through the UI or the API.\n    * `RETRY`: Indicates a run that is triggered as a retry of a previously failed run. This occurs when you request to re-run the job in case of failures.\n    \"\"\"\n\n    periodic = \"PERIODIC\"\n    onetime = \"ONE_TIME\"\n    retry = \"RETRY\"\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.UserName","title":"UserName","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class UserName(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    __root__: str = Field(\n        ..., description=\"Email address for the user.\", example=\"jsmith@example.com\"\n    )\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ViewItem","title":"ViewItem","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class ViewItem(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    content: Optional[str] = Field(None, description=\"Content of the view.\")\n    name: Optional[str] = Field(\n        None,\n        description=(\n            \"Name of the view item. In the case of code view, it would be the\"\n            \" notebook\u2019s name. In the case of dashboard view, it would be the\"\n            \" dashboard\u2019s name.\"\n        ),\n    )\n    type: Optional[ViewType] = Field(None, description=\"Type of the view item.\")\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ViewType","title":"ViewType","text":"

    Bases: str, Enum

• NOTEBOOK: Notebook view item.
    • DASHBOARD: Dashboard view item.
    Source code in prefect_databricks/models/jobs.py
    class ViewType(str, Enum):\n    \"\"\"\n        * `NOTEBOOK`: Notebook view item.\n    * `DASHBOARD`: Dashboard view item.\n    \"\"\"\n\n    notebook = \"NOTEBOOK\"\n    dashboard = \"DASHBOARD\"\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ViewsToExport","title":"ViewsToExport","text":"

    Bases: str, Enum

• CODE: Code view of the notebook.
    • DASHBOARDS: All dashboard views of the notebook.
    • ALL: All views of the notebook.
    Source code in prefect_databricks/models/jobs.py
    class ViewsToExport(str, Enum):\n    \"\"\"\n        * `CODE`: Code view of the notebook.\n    * `DASHBOARDS`: All dashboard views of the notebook.\n    * `ALL`: All views of the notebook.\n    \"\"\"\n\n    code = \"CODE\"\n    dashboards = \"DASHBOARDS\"\n    all = \"ALL\"\n
    "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.WebhookNotifications","title":"WebhookNotifications","text":"

    Bases: BaseModel

    See source code for the fields' description.

    Source code in prefect_databricks/models/jobs.py
    class WebhookNotifications(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    on_failure: Optional[List[OnFailureItem]] = Field(\n        None,\n        description=(\n            \"An optional list of notification IDs to call when the run fails. A maximum\"\n            \" of 3 destinations can be specified for the `on_failure` property.\"\n        ),\n        example=[{\"id\": \"0481e838-0a59-4eff-9541-a4ca6f149574\"}],\n    )\n    on_start: Optional[List[OnStartItem]] = Field(\n        None,\n        description=(\n            \"An optional list of notification IDs to call when the run starts. A\"\n            \" maximum of 3 destinations can be specified for the `on_start` property.\"\n        ),\n        example=[\n            {\"id\": \"03dd86e4-57ef-4818-a950-78e41a1d71ab\"},\n            {\"id\": \"0481e838-0a59-4eff-9541-a4ca6f149574\"},\n        ],\n    )\n    on_success: Optional[List[OnSucces]] = Field(\n        None,\n        description=(\n            \"An optional list of notification IDs to call when the run completes\"\n            \" successfully. A maximum of 3 destinations can be specified for the\"\n            \" `on_success` property.\"\n        ),\n        example=[{\"id\": \"03dd86e4-57ef-4818-a950-78e41a1d71ab\"}],\n    )\n
    "},{"location":"integrations/prefect-dbt/","title":"prefect-dbt","text":"

    With prefect-dbt, you can trigger and observe dbt Cloud jobs, execute dbt Core CLI commands, and incorporate other tools, such as Snowflake, into your dbt runs. Prefect provides a global view of the state of your workflows and allows you to take action based on state changes.

    "},{"location":"integrations/prefect-dbt/#getting-started","title":"Getting started","text":"
    1. Install prefect-dbt
    2. Register newly installed block types (both steps are sketched below)
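
    A minimal sketch of these two steps from a shell, using the commands shown later in this guide:

    pip install prefect-dbt\nprefect block register -m prefect_dbt\n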

    Explore the examples below to learn how to use Prefect with dbt.

    "},{"location":"integrations/prefect-dbt/#integrate-dbt-cloud-jobs-with-prefect-flows","title":"Integrate dbt Cloud jobs with Prefect flows","text":"

    If you have an existing dbt Cloud job, you can use the pre-built flow run_dbt_cloud_job to trigger a job run and wait until the job run is finished.

    If some nodes fail, run_dbt_cloud_job efficiently retries the unsuccessful nodes.

    Prior to running this flow, save your dbt Cloud credentials to a DbtCloudCredentials block.
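
    For instance, a minimal save sketch, assuming placeholder values that you replace with your own API key, account ID, and block name:

    from prefect_dbt.cloud import DbtCloudCredentials\n\nDbtCloudCredentials(\n    api_key=\"API-KEY-PLACEHOLDER\",\n    account_id=\"ACCOUNT-ID-PLACEHOLDER\"\n).save(\"CREDENTIALS-BLOCK-NAME-PLACEHOLDER\")\n

    With the credentials block saved, trigger the job from a flow: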

    from prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudJob\nfrom prefect_dbt.cloud.jobs import run_dbt_cloud_job\n\n@flow\ndef run_dbt_job_flow():\n    result = run_dbt_cloud_job(\n        dbt_cloud_job=DbtCloudJob.load(\"my-block-name\"),\n        targeted_retries=5,\n    )\n    return result\n\nrun_dbt_job_flow()\n
    "},{"location":"integrations/prefect-dbt/#integrate-dbt-core-cli-commands-with-prefect-flows","title":"Integrate dbt Core CLI commands with Prefect flows","text":"

    prefect-dbt supports execution of dbt Core CLI commands.

    If you don't have a DbtCoreOperation block saved, create one and set the commands that you want to run.

    Optionally, specify the project_dir. If profiles_dir is not set, the DBT_PROFILES_DIR environment variable is used. If DBT_PROFILES_DIR is also not set, the default directory $HOME/.dbt/ is used.

    "},{"location":"integrations/prefect-dbt/#using-an-existing-profile","title":"Using an existing profile","text":"

    If you have an existing dbt profile, specify the profiles_dir where profiles.yml is located. You can use it in code like this:

    from prefect import flow\nfrom prefect_dbt.cli.commands import DbtCoreOperation\n\n@flow\ndef trigger_dbt_flow() -> str:\n    result = DbtCoreOperation(\n        commands=[\"pwd\", \"dbt debug\", \"dbt run\"],\n        project_dir=\"PROJECT-DIRECTORY-PLACEHOLDER\",\n        profiles_dir=\"PROFILES-DIRECTORY-PLACEHOLDER\"\n    ).run()\n    return result\n\ntrigger_dbt_flow()\n
    "},{"location":"integrations/prefect-dbt/#writing-a-new-profile","title":"Writing a new profile","text":"

    To set up a new profile, first save and load a DbtCliProfile block and use it in DbtCoreOperation.

    Then, specify profiles_dir where profiles.yml will be written. Here's example code with placeholders:

    from prefect import flow\nfrom prefect_dbt.cli import DbtCliProfile, DbtCoreOperation\n\n@flow\ndef trigger_dbt_flow():\n    dbt_cli_profile = DbtCliProfile.load(\"DBT-CORE-OPERATION-BLOCK-NAME-PLACEHOLDER\")\n    with DbtCoreOperation(\n        commands=[\"dbt debug\", \"dbt run\"],\n        project_dir=\"PROJECT-DIRECTORY-PLACEHOLDER\",\n        profiles_dir=\"PROFILES-DIRECTORY-PLACEHOLDER\",\n        dbt_cli_profile=dbt_cli_profile,\n    ) as dbt_operation:\n        dbt_process = dbt_operation.trigger()\n        # do other things before waiting for completion\n        dbt_process.wait_for_completion()\n        result = dbt_process.fetch_result()\n    return result\n\ntrigger_dbt_flow()\n
    "},{"location":"integrations/prefect-dbt/#resources","title":"Resources","text":"

    If you need help using dbt, consult the dbt documentation.

    "},{"location":"integrations/prefect-dbt/#installation","title":"Installation","text":"

    To install prefect-dbt for use with dbt Cloud:

    pip install prefect-dbt\n

    To install with additional functionality for dbt Core (CLI):

    pip install \"prefect-dbt[cli]\"\n

    To install with additional functionality for dbt Core and Snowflake profiles:

    pip install \"prefect-dbt[snowflake]\"\n

    To install with additional functionality for dbt Core and BigQuery profiles:

    pip install \"prefect-dbt[bigquery]\"\n

    To install with additional functionality for dbt Core and Postgres profiles:

    pip install \"prefect-dbt[postgres]\"\n

    Some dbt Core profiles require additional installation

    According to dbt's Databricks setup page, users must first install the adapter:

    pip install dbt-databricks\n

    Check out the desired profile setup page on the sidebar for others.

    prefect-dbt requires Python 3.8 or newer.

    We recommend using a Python virtual environment manager such as conda, venv, or pipenv.
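
    For example, a minimal sketch using the standard-library venv module on macOS or Linux:

    python -m venv .venv\nsource .venv/bin/activate\npip install prefect-dbt\n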

    "},{"location":"integrations/prefect-dbt/#registering-block-types","title":"Registering block types","text":"

    Register the block types in the prefect-dbt module to make them available for use.

    prefect block register -m prefect_dbt\n
    "},{"location":"integrations/prefect-dbt/#saving-credentials-to-a-block","title":"Saving credentials to a block","text":"

    Blocks can be created through code or through the UI.

    "},{"location":"integrations/prefect-dbt/#dbt-cloud","title":"dbt Cloud","text":"

    To create a dbt Cloud Credentials block, do the following:

    1. Go to your dbt Cloud profile.
    2. Log in to your dbt Cloud account.
    3. Scroll to API or click API Access on the sidebar.
    4. Copy the API Key.
    5. Click Projects on the sidebar.
    6. Copy the account ID from the URL: https://cloud.getdbt.com/settings/accounts/<ACCOUNT_ID>.
    7. Create and run the following script, replacing the placeholders.
    from prefect_dbt.cloud import DbtCloudCredentials\n\nDbtCloudCredentials(\n    api_key=\"API-KEY-PLACEHOLDER\",\n    account_id=\"ACCOUNT-ID-PLACEHOLDER\"\n).save(\"CREDENTIALS-BLOCK-NAME-PLACEHOLDER\")\n

    Then, to create a dbt Cloud job block, do the following:

    1. Head over to your dbt home page.
    2. On the top nav bar, click on Deploy -> Jobs.
    3. Select a job.
    4. Copy the job ID from the URL: https://cloud.getdbt.com/deploy/<ACCOUNT_ID>/projects/<PROJECT_ID>/jobs/<JOB_ID>
    5. Create and run the following script, replacing the placeholders.
    from prefect_dbt.cloud import DbtCloudCredentials, DbtCloudJob\n\ndbt_cloud_credentials = DbtCloudCredentials.load(\"CREDENTIALS-BLOCK-NAME-PLACEHOLDER\")\ndbt_cloud_job = DbtCloudJob(\n    dbt_cloud_credentials=dbt_cloud_credentials,\n    job_id=\"JOB-ID-PLACEHOLDER\"\n).save(\"JOB-BLOCK-NAME-PLACEHOLDER\")\n

    You can now load the saved block, which can access your credentials:

    from prefect_dbt.cloud import DbtCloudJob\n\nDbtCloudJob.load(\"JOB-BLOCK-NAME-PLACEHOLDER\")\n
    "},{"location":"integrations/prefect-dbt/#dbt-core-cli","title":"dbt Core CLI","text":"

    Available TargetConfigs blocks

    Visit the API Reference to see other built-in TargetConfigs blocks.

    If the desired service profile is not available, check out the Examples Catalog to see how you can build one from the generic TargetConfigs class.
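
    As a rough sketch (not a definitive recipe), assuming the generic TargetConfigs class accepts an adapter type, a schema, and adapter-specific settings through extras; the postgres keys below are hypothetical placeholders:

    from prefect_dbt.cli.configs import TargetConfigs\n\ntarget_configs = TargetConfigs(\n    type=\"postgres\",\n    schema=\"SCHEMA-NAME-PLACEHOLDER\",\n    extras={\"host\": \"HOST-PLACEHOLDER\", \"user\": \"USER-PLACEHOLDER\", \"dbname\": \"DATABASE-PLACEHOLDER\"},\n)\ntarget_configs.save(\"TARGET-CONFIGS-BLOCK-NAME-PLACEHOLDER\")\n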

    To create dbt Core target config and profile blocks for BigQuery:

    1. Save and load a GcpCredentials block.
    2. Determine the schema / dataset you want to use in BigQuery.
    3. Create a short script, replacing the placeholders.
    from prefect_gcp.credentials import GcpCredentials\nfrom prefect_dbt.cli import BigQueryTargetConfigs, DbtCliProfile\n\ncredentials = GcpCredentials.load(\"CREDENTIALS-BLOCK-NAME-PLACEHOLDER\")\ntarget_configs = BigQueryTargetConfigs(\n    schema=\"SCHEMA-NAME-PLACEHOLDER\",  # also known as dataset\n    credentials=credentials,\n)\ntarget_configs.save(\"TARGET-CONFIGS-BLOCK-NAME-PLACEHOLDER\")\n\ndbt_cli_profile = DbtCliProfile(\n    name=\"PROFILE-NAME-PLACEHOLDER\",\n    target=\"TARGET-NAME-placeholder\",\n    target_configs=target_configs,\n)\ndbt_cli_profile.save(\"DBT-CLI-PROFILE-BLOCK-NAME-PLACEHOLDER\")\n

    Then, to create a dbt Core operation block:

    1. Determine the dbt commands you want to run.
    2. Create a short script, replacing the placeholders.
    from prefect_dbt.cli import DbtCliProfile, DbtCoreOperation\n\ndbt_cli_profile = DbtCliProfile.load(\"DBT-CLI-PROFILE-BLOCK-NAME-PLACEHOLDER\")\ndbt_core_operation = DbtCoreOperation(\n    commands=[\"DBT-CLI-COMMANDS-PLACEHOLDER\"],\n    dbt_cli_profile=dbt_cli_profile,\n    overwrite_profiles=True,\n)\ndbt_core_operation.save(\"DBT-CORE-OPERATION-BLOCK-NAME-PLACEHOLDER\")\n

    Congrats! You can now easily load the saved block, which holds your credentials:

    from prefect_dbt.cli import DbtCoreOperation\n\nDbtCoreOperation.load(\"DBT-CORE-OPERATION-BLOCK-NAME-PLACEHOLDER\")\n
    "},{"location":"integrations/prefect-dbt/cli/commands/","title":"Commands","text":""},{"location":"integrations/prefect-dbt/cli/commands/#prefect_dbt.cli.commands","title":"prefect_dbt.cli.commands","text":"

    Module containing tasks and flows for interacting with dbt CLI

    "},{"location":"integrations/prefect-dbt/cli/commands/#prefect_dbt.cli.commands.DbtCoreOperation","title":"DbtCoreOperation","text":"

    Bases: ShellOperation

    A block representing a dbt operation, containing multiple dbt and shell commands.

    For long-lasting operations, use the trigger method and utilize the block as a context manager for automatic closure of processes when context is exited. If not, manually call the close method to close processes.

    For short-lasting operations, use the run method. Context is automatically managed with this method.

    Attributes:

    commands: A list of commands to execute sequentially.

    stream_output: Whether to stream output.

    env: A dictionary of environment variables to set for the shell operation.

    working_directory: The working directory context the commands will be executed within.

    shell: The shell to use to execute the commands.

    extension: The extension to use for the temporary file. If unset, defaults to .ps1 on Windows and .sh on other platforms.

    profiles_dir (Optional[Path]): The directory to search for the profiles.yml file. Setting this appends the --profiles-dir option to the dbt commands provided. If this is not set, will try using the DBT_PROFILES_DIR environment variable, but if that's also not set, will use the default directory $HOME/.dbt/.

    project_dir (Optional[Path]): The directory to search for the dbt_project.yml file. Default is the current working directory and its parents.

    overwrite_profiles (bool): Whether the existing profiles.yml file under profiles_dir should be overwritten with a new profile.

    dbt_cli_profile (Optional[DbtCliProfile]): Profiles class containing the profile written to profiles.yml. Note! This is optional and will raise an error if profiles.yml already exists under profile_dir and overwrite_profiles is set to False.

    Examples:

    Load a configured block.

    from prefect_dbt import DbtCoreOperation\n\ndbt_op = DbtCoreOperation.load(\"BLOCK_NAME\")\n

    Execute short-lasting dbt debug and list with a custom DbtCliProfile.

    from prefect_dbt import DbtCoreOperation, DbtCliProfile\nfrom prefect_dbt.cli.configs import SnowflakeTargetConfigs\nfrom prefect_snowflake import SnowflakeConnector\n\nsnowflake_connector = await SnowflakeConnector.load(\"snowflake-connector\")\ntarget_configs = SnowflakeTargetConfigs(connector=snowflake_connector)\ndbt_cli_profile = DbtCliProfile(\n    name=\"jaffle_shop\",\n    target=\"dev\",\n    target_configs=target_configs,\n)\ndbt_init = DbtCoreOperation(\n    commands=[\"dbt debug\", \"dbt list\"],\n    dbt_cli_profile=dbt_cli_profile,\n    overwrite_profiles=True\n)\ndbt_init.run()\n

    Execute a longer-lasting dbt run as a context manager.

    with DbtCoreOperation(commands=[\"dbt run\"]) as dbt_run:\n    dbt_process = dbt_run.trigger()\n    # do other things\n    dbt_process.wait_for_completion()\n    dbt_output = dbt_process.fetch_result()\n

    Source code in prefect_dbt/cli/commands.py
    class DbtCoreOperation(ShellOperation):\n    \"\"\"\n    A block representing a dbt operation, containing multiple dbt and shell commands.\n\n    For long-lasting operations, use the trigger method and utilize the block as a\n    context manager for automatic closure of processes when context is exited.\n    If not, manually call the close method to close processes.\n\n    For short-lasting operations, use the run method. Context is automatically managed\n    with this method.\n\n    Attributes:\n        commands: A list of commands to execute sequentially.\n        stream_output: Whether to stream output.\n        env: A dictionary of environment variables to set for the shell operation.\n        working_directory: The working directory context the commands\n            will be executed within.\n        shell: The shell to use to execute the commands.\n        extension: The extension to use for the temporary file.\n            if unset defaults to `.ps1` on Windows and `.sh` on other platforms.\n        profiles_dir: The directory to search for the profiles.yml file.\n            Setting this appends the `--profiles-dir` option to the dbt commands\n            provided. If this is not set, will try using the DBT_PROFILES_DIR\n            environment variable, but if that's also not\n            set, will use the default directory `$HOME/.dbt/`.\n        project_dir: The directory to search for the dbt_project.yml file.\n            Default is the current working directory and its parents.\n        overwrite_profiles: Whether the existing profiles.yml file under profiles_dir\n            should be overwritten with a new profile.\n        dbt_cli_profile: Profiles class containing the profile written to profiles.yml.\n            Note! This is optional and will raise an error if profiles.yml already\n            exists under profile_dir and overwrite_profiles is set to False.\n\n    Examples:\n        Load a configured block.\n        ```python\n        from prefect_dbt import DbtCoreOperation\n\n        dbt_op = DbtCoreOperation.load(\"BLOCK_NAME\")\n        ```\n\n        Execute short-lasting dbt debug and list with a custom DbtCliProfile.\n        ```python\n        from prefect_dbt import DbtCoreOperation, DbtCliProfile\n        from prefect_dbt.cli.configs import SnowflakeTargetConfigs\n        from prefect_snowflake import SnowflakeConnector\n\n        snowflake_connector = await SnowflakeConnector.load(\"snowflake-connector\")\n        target_configs = SnowflakeTargetConfigs(connector=snowflake_connector)\n        dbt_cli_profile = DbtCliProfile(\n            name=\"jaffle_shop\",\n            target=\"dev\",\n            target_configs=target_configs,\n        )\n        dbt_init = DbtCoreOperation(\n            commands=[\"dbt debug\", \"dbt list\"],\n            dbt_cli_profile=dbt_cli_profile,\n            overwrite_profiles=True\n        )\n        dbt_init.run()\n        ```\n\n        Execute a longer-lasting dbt run as a context manager.\n        ```python\n        with DbtCoreOperation(commands=[\"dbt run\"]) as dbt_run:\n            dbt_process = dbt_run.trigger()\n            # do other things\n            dbt_process.wait_for_completion()\n            dbt_output = dbt_process.fetch_result()\n        ```\n    \"\"\"\n\n    _block_type_name = \"dbt Core Operation\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5zE9lxfzBHjw3tnEup4wWL/9a001902ed43a84c6c96d23b24622e19/dbt-bit_tm.png?h=250\"  # noqa\n    _documentation_url = 
\"https://prefecthq.github.io/prefect-dbt/cli/commands/#prefect_dbt.cli.commands.DbtCoreOperation\"  # noqa\n\n    profiles_dir: Optional[Path] = Field(\n        default=None,\n        description=(\n            \"The directory to search for the profiles.yml file. \"\n            \"Setting this appends the `--profiles-dir` option to the dbt commands \"\n            \"provided. If this is not set, will try using the DBT_PROFILES_DIR \"\n            \"environment variable, but if that's also not \"\n            \"set, will use the default directory `$HOME/.dbt/`.\"\n        ),\n    )\n    project_dir: Optional[Path] = Field(\n        default=None,\n        description=(\n            \"The directory to search for the dbt_project.yml file. \"\n            \"Default is the current working directory and its parents.\"\n        ),\n    )\n    overwrite_profiles: bool = Field(\n        default=False,\n        description=(\n            \"Whether the existing profiles.yml file under profiles_dir \"\n            \"should be overwritten with a new profile.\"\n        ),\n    )\n    dbt_cli_profile: Optional[DbtCliProfile] = Field(\n        default=None,\n        description=(\n            \"Profiles class containing the profile written to profiles.yml. \"\n            \"Note! This is optional and will raise an error if profiles.yml already \"\n            \"exists under profile_dir and overwrite_profiles is set to False.\"\n        ),\n    )\n\n    @validator(\"commands\", always=True)\n    def _has_a_dbt_command(cls, commands):\n        \"\"\"\n        Check that the commands contain a dbt command.\n        \"\"\"\n        if not any(\"dbt \" in command for command in commands):\n            raise ValueError(\n                \"None of the commands are a valid dbt sub-command; see dbt --help, \"\n                \"or use prefect_shell.ShellOperation for non-dbt related \"\n                \"commands instead\"\n            )\n        return commands\n\n    def _find_valid_profiles_dir(self) -> PosixPath:\n        \"\"\"\n        Ensure that there is a profiles.yml available for use.\n        \"\"\"\n        profiles_dir = self.profiles_dir\n        if profiles_dir is None:\n            if self.env.get(\"DBT_PROFILES_DIR\") is not None:\n                # get DBT_PROFILES_DIR from the user input env\n                profiles_dir = self.env[\"DBT_PROFILES_DIR\"]\n            else:\n                # get DBT_PROFILES_DIR from the system env, or default to ~/.dbt\n                profiles_dir = os.getenv(\"DBT_PROFILES_DIR\", Path.home() / \".dbt\")\n        profiles_dir = relative_path_to_current_platform(\n            Path(profiles_dir).expanduser()\n        )\n\n        # https://docs.getdbt.com/dbt-cli/configure-your-profile\n        # Note that the file always needs to be called profiles.yml,\n        # regardless of which directory it is in.\n        profiles_path = profiles_dir / \"profiles.yml\"\n        overwrite_profiles = self.overwrite_profiles\n        dbt_cli_profile = self.dbt_cli_profile\n        if not profiles_path.exists() or overwrite_profiles:\n            if dbt_cli_profile is None:\n                raise ValueError(\n                    \"Since overwrite_profiles is True or profiles_path is empty, \"\n                    \"need `dbt_cli_profile` to write a profile\"\n                )\n            profile = dbt_cli_profile.get_profile()\n            profiles_dir.mkdir(exist_ok=True)\n            with open(profiles_path, \"w+\") as f:\n                yaml.dump(profile, f, 
default_flow_style=False)\n        elif dbt_cli_profile is not None:\n            raise ValueError(\n                f\"Since overwrite_profiles is False and profiles_path {profiles_path} \"\n                f\"already exists, the profile within dbt_cli_profile couldn't be used; \"\n                f\"if the existing profile is satisfactory, do not set dbt_cli_profile\"\n            )\n        return profiles_dir\n\n    def _append_dirs_to_commands(self, profiles_dir) -> List[str]:\n        \"\"\"\n        Append profiles_dir and project_dir options to dbt commands.\n        \"\"\"\n        project_dir = self.project_dir\n\n        commands = []\n        for command in self.commands:\n            command += f\" --profiles-dir {profiles_dir}\"\n            if project_dir is not None:\n                project_dir = Path(project_dir).expanduser()\n                command += f\" --project-dir {project_dir}\"\n            commands.append(command)\n        return commands\n\n    def _compile_kwargs(self, **open_kwargs: Dict[str, Any]) -> Dict[str, Any]:\n        \"\"\"\n        Helper method to compile the kwargs for `open_process` so it's not repeated\n        across the run and trigger methods.\n        \"\"\"\n        profiles_dir = self._find_valid_profiles_dir()\n        commands = self._append_dirs_to_commands(profiles_dir=profiles_dir)\n\n        # _compile_kwargs is called within trigger() and run(), prior to execution.\n        # However _compile_kwargs directly uses self.commands, but here we modified\n        # the commands without saving back to self.commands so we need to create a copy.\n        # was also thinking of using env vars but DBT_PROJECT_DIR is not supported yet.\n        modified_self = self.copy()\n        modified_self.commands = commands\n        return super(type(self), modified_self)._compile_kwargs(**open_kwargs)\n
    "},{"location":"integrations/prefect-dbt/cli/commands/#prefect_dbt.cli.commands.create_summary_markdown","title":"create_summary_markdown","text":"

    Creates a Prefect task artifact summarizing the results of the predefined prefect-dbt tasks.

    Source code in prefect_dbt/cli/commands.py
    def create_summary_markdown(results: dbtRunnerResult, command: str) -> UUID:\n    \"\"\"\n    Creates a Prefect task artifact summarizing the results\n    of the above predefined prefrect-dbt task.\n    \"\"\"\n    # Create Summary Markdown Artifact\n    run_statuses: Dict[str, List[str]] = {\n        \"successful\": [],\n        \"failed\": [],\n        \"skipped\": [],\n    }\n\n    for r in results.result.results:\n        if r.status == NodeStatus.Success or r.status == NodeStatus.Pass:\n            run_statuses[\"successful\"].append(r)\n        elif (\n            r.status == NodeStatus.Fail\n            or r.status == NodeStatus.Error\n            or r.status == NodeStatus.RuntimeErr\n        ):\n            run_statuses[\"failed\"].append(r)\n        elif r.status == NodeStatus.Skipped:\n            run_statuses[\"skipped\"].append(r)\n\n    markdown = f\"# dbt {command} Task Summary\"\n\n    if run_statuses[\"failed\"] != []:\n        failed_runs_str = \"\"\n        for r in run_statuses[\"failed\"]:\n            failed_runs_str += f\"**{r.node.name}**\\n \\\n                Node Type: {r.node.resource_type}\\n \\\n                Node Path: {r.node.original_file_path}\"\n            if r.message:\n                message = r.message.replace(\"\\n\", \".\")\n                failed_runs_str += f\"\\nError Message: {message}\\n\"\n        markdown += f\"\"\"\\n## Failed Runs \ud83d\udd34\\n\\n{failed_runs_str}\\n\\n\"\"\"\n\n    if run_statuses[\"successful\"] != []:\n        successful_runs_str = \"\\n\".join(\n            [f\"**{r.node.name}**\" for r in run_statuses[\"successful\"]]\n        )\n        markdown += f\"\"\"\\n## Successful Runs \u2705\\n\\n{successful_runs_str}\\n\\n\"\"\"\n\n    if run_statuses[\"skipped\"] != []:\n        skipped_runs_str = \"\\n\".join(\n            [f\"**{r.node.name}**\" for r in run_statuses[\"skipped\"]]\n        )\n        markdown += f\"\"\" ## Skipped Runs \ud83d\udeab\\n\\n{skipped_runs_str}\\n\\n\"\"\"\n\n    return markdown\n
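
    As an illustrative sketch (not from the library docs; it assumes dbt-core 1.5+, where dbtRunner is importable from dbt.cli.main, and uses a placeholder project directory), the helper can be applied to a dbtRunnerResult obtained from a programmatic dbt invocation:

    from dbt.cli.main import dbtRunner\nfrom prefect_dbt.cli.commands import create_summary_markdown\n\n# invoke dbt programmatically; the project path is a placeholder\nresult = dbtRunner().invoke([\"run\", \"--project-dir\", \"PROJECT-DIR-PLACEHOLDER\"])\n\n# build the markdown summary for the \"run\" command\nmarkdown = create_summary_markdown(result, \"run\")\nprint(markdown)\n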
    "},{"location":"integrations/prefect-dbt/cli/commands/#prefect_dbt.cli.commands.run_dbt_build","title":"run_dbt_build async","text":"

    Executes the 'dbt build' command within a Prefect task, and optionally creates a Prefect artifact summarizing the dbt build results.

    Parameters:

    Name Type Description Default profiles_dir Optional[Union[Path, str]]

    The directory to search for the profiles.yml file. Setting this appends the --profiles-dir option to the command provided. If this is not set, will try using the DBT_PROFILES_DIR env variable, but if that's also not set, will use the default directory $HOME/.dbt/.

    None project_dir Optional[Union[Path, str]]

    The directory to search for the dbt_project.yml file. Default is the current working directory and its parents.

    None overwrite_profiles bool

    Whether the existing profiles.yml file under profiles_dir should be overwritten with a new profile.

    False dbt_cli_profile Optional[DbtCliProfile]

    Profiles class containing the profile written to profiles.yml. Note: this is optional and will raise an error if profiles.yml already exists under profiles_dir and overwrite_profiles is set to False.

    None create_artifact bool

    If True, creates a Prefect artifact on the task run with the dbt build results using the specified artifact key. Defaults to True.

    True artifact_key str

    The key under which to store the dbt build results artifact in Prefect. Defaults to 'dbt-build-task-summary'.

    'dbt-build-task-summary'
        from prefect import flow\n    from prefect_dbt.cli.commands import run_dbt_build\n\n    @flow\n    def dbt_test_flow():\n        run_dbt_build(\n            project_dir=\"/Users/test/my_dbt_project_dir\"\n        )\n

    Raises:

    Type Description ValueError

    If required dbt_cli_profile is not provided when needed for profile writing.

    RuntimeError

    If the dbt build fails for any reason, it will be indicated by the exception raised.
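
    As a minimal sketch (the flow name and exception handling below are assumptions, not part of the library docs), a failed build surfaces in the calling flow as the raised exception:

    from prefect import flow\nfrom prefect_dbt.cli.commands import run_dbt_build\n\n@flow\nasync def dbt_build_flow():\n    try:\n        # the project path is a placeholder\n        return await run_dbt_build(project_dir=\"PROJECT-DIR-PLACEHOLDER\")\n    except Exception as exc:\n        # a failed dbt build is indicated by the exception raised here\n        print(f\"dbt build failed: {exc}\")\n        raise\n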

    Source code in prefect_dbt/cli/commands.py
    @task\nasync def run_dbt_build(\n    profiles_dir: Optional[Union[Path, str]] = None,\n    project_dir: Optional[Union[Path, str]] = None,\n    overwrite_profiles: bool = False,\n    dbt_cli_profile: Optional[DbtCliProfile] = None,\n    create_artifact: bool = True,\n    artifact_key: str = \"dbt-build-task-summary\",\n    **command_kwargs,\n):\n    \"\"\"\n    Executes the 'dbt build' command within a Prefect task,\n    and optionally creates a Prefect artifact summarizing the dbt build results.\n\n    Args:\n        profiles_dir: The directory to search for the profiles.yml file. Setting this\n            appends the `--profiles-dir` option to the command provided.\n            If this is not set, will try using the DBT_PROFILES_DIR env variable,\n            but if that's also not set, will use the default directory `$HOME/.dbt/`.\n        project_dir: The directory to search for the dbt_project.yml file.\n            Default is the current working directory and its parents.\n        overwrite_profiles: Whether the existing profiles.yml file under profiles_dir\n            should be overwritten with a new profile.\n        dbt_cli_profile: Profiles class containing the profile written to profiles.yml.\n            Note! This is optional and will raise an error\n            if profiles.yml already exists under profile_dir\n            and overwrite_profiles is set to False.\n        create_artifact: If True, creates a Prefect artifact on the task run\n            with the dbt build results using the specified artifact key.\n            Defaults to True.\n        artifact_key: The key under which to store\n            the dbt build results artifact in Prefect.\n            Defaults to 'dbt-build-task-summary'.\n\n    Example:\n    ```python\n        from prefect import flow\n        from prefect_dbt.cli.tasks import dbt_build_task\n\n        @flow\n        def dbt_test_flow():\n            dbt_build_task(\n                project_dir=\"/Users/test/my_dbt_project_dir\"\n            )\n    ```\n\n    Raises:\n        ValueError: If required dbt_cli_profile is not provided\n                    when needed for profile writing.\n        RuntimeError: If the dbt build fails for any reason,\n                    it will be indicated by the exception raised.\n    \"\"\"\n\n    results = await trigger_dbt_cli_command.fn(\n        command=\"build\",\n        profiles_dir=profiles_dir,\n        project_dir=project_dir,\n        overwrite_profiles=overwrite_profiles,\n        dbt_cli_profile=dbt_cli_profile,\n        create_artifact=create_artifact,\n        artifact_key=artifact_key,\n        **command_kwargs,\n    )\n    return results\n
    "},{"location":"integrations/prefect-dbt/cli/commands/#prefect_dbt.cli.commands.run_dbt_model","title":"run_dbt_model async","text":"

    Executes the 'dbt run' command within a Prefect task, and optionally creates a Prefect artifact summarizing the dbt run results.

    Parameters:

    Name Type Description Default profiles_dir Optional[Union[Path, str]]

    The directory to search for the profiles.yml file. Setting this appends the --profiles-dir option to the command provided. If this is not set, will try using the DBT_PROFILES_DIR env variable, but if that's also not set, will use the default directory $HOME/.dbt/.

    None project_dir Optional[Union[Path, str]]

    The directory to search for the dbt_project.yml file. Default is the current working directory and its parents.

    None overwrite_profiles bool

    Whether the existing profiles.yml file under profiles_dir should be overwritten with a new profile.

    False dbt_cli_profile Optional[DbtCliProfile]

    Profiles class containing the profile written to profiles.yml. Note: this is optional and will raise an error if profiles.yml already exists under profiles_dir and overwrite_profiles is set to False.

    None create_artifact bool

    If True, creates a Prefect artifact on the task run with the dbt run results using the specified artifact key. Defaults to True.

    True artifact_key str

    The key under which to store the dbt run results artifact in Prefect. Defaults to 'dbt-run-task-summary'.

    'dbt-run-task-summary'
        from prefect import flow\n    from prefect_dbt.cli.commands import run_dbt_model\n\n    @flow\n    def dbt_test_flow():\n        run_dbt_model(\n            project_dir=\"/Users/test/my_dbt_project_dir\"\n        )\n

    Raises:

    Type Description ValueError

    If required dbt_cli_profile is not provided when needed for profile writing.

    RuntimeError

    If the dbt run fails for any reason, it will be indicated by the exception raised.

    Source code in prefect_dbt/cli/commands.py
    @task\nasync def run_dbt_model(\n    profiles_dir: Optional[Union[Path, str]] = None,\n    project_dir: Optional[Union[Path, str]] = None,\n    overwrite_profiles: bool = False,\n    dbt_cli_profile: Optional[DbtCliProfile] = None,\n    create_artifact: bool = True,\n    artifact_key: str = \"dbt-run-task-summary\",\n    **command_kwargs,\n):\n    \"\"\"\n    Executes the 'dbt run' command within a Prefect task,\n    and optionally creates a Prefect artifact summarizing the dbt build results.\n\n    Args:\n        profiles_dir: The directory to search for the profiles.yml file. Setting this\n            appends the `--profiles-dir` option to the command provided.\n            If this is not set, will try using the DBT_PROFILES_DIR env variable,\n            but if that's also not set, will use the default directory `$HOME/.dbt/`.\n        project_dir: The directory to search for the dbt_project.yml file.\n            Default is the current working directory and its parents.\n        overwrite_profiles: Whether the existing profiles.yml file under profiles_dir\n            should be overwritten with a new profile.\n        dbt_cli_profile: Profiles class containing the profile written to profiles.yml.\n            Note! This is optional and will raise an error\n            if profiles.yml already exists under profile_dir\n            and overwrite_profiles is set to False.\n        create_artifact: If True, creates a Prefect artifact on the task run\n            with the dbt build results using the specified artifact key.\n            Defaults to True.\n        artifact_key: The key under which to store\n            the dbt run results artifact in Prefect.\n            Defaults to 'dbt-run-task-summary'.\n\n    Example:\n    ```python\n        from prefect import flow\n        from prefect_dbt.cli.tasks import dbt_run_task\n\n        @flow\n        def dbt_test_flow():\n            dbt_run_task(\n                project_dir=\"/Users/test/my_dbt_project_dir\"\n            )\n    ```\n\n    Raises:\n        ValueError: If required dbt_cli_profile is not provided\n                    when needed for profile writing.\n        RuntimeError: If the dbt build fails for any reason,\n                    it will be indicated by the exception raised.\n    \"\"\"\n\n    results = await trigger_dbt_cli_command.fn(\n        command=\"run\",\n        profiles_dir=profiles_dir,\n        project_dir=project_dir,\n        overwrite_profiles=overwrite_profiles,\n        dbt_cli_profile=dbt_cli_profile,\n        create_artifact=create_artifact,\n        artifact_key=artifact_key,\n        **command_kwargs,\n    )\n\n    return results\n
    "},{"location":"integrations/prefect-dbt/cli/commands/#prefect_dbt.cli.commands.run_dbt_seed","title":"run_dbt_seed async","text":"

    Executes the 'dbt seed' command within a Prefect task, and optionally creates a Prefect artifact summarizing the dbt seed results.

    Parameters:

    Name Type Description Default profiles_dir Optional[Union[Path, str]]

    The directory to search for the profiles.yml file. Setting this appends the --profiles-dir option to the command provided. If this is not set, will try using the DBT_PROFILES_DIR env variable, but if that's also not set, will use the default directory $HOME/.dbt/.

    None project_dir Optional[Union[Path, str]]

    The directory to search for the dbt_project.yml file. Default is the current working directory and its parents.

    None overwrite_profiles bool

    Whether the existing profiles.yml file under profiles_dir should be overwritten with a new profile.

    False dbt_cli_profile Optional[DbtCliProfile]

    Profiles class containing the profile written to profiles.yml. Note: this is optional and will raise an error if profiles.yml already exists under profiles_dir and overwrite_profiles is set to False.

    None create_artifact bool

    If True, creates a Prefect artifact on the task run with the dbt seed results using the specified artifact key. Defaults to True.

    True artifact_key str

    The key under which to store the dbt seed results artifact in Prefect. Defaults to 'dbt-seed-task-summary'.

    'dbt-seed-task-summary'
        from prefect import flow\n    from prefect_dbt.cli.commands import run_dbt_seed\n\n    @flow\n    def dbt_test_flow():\n        run_dbt_seed(\n            project_dir=\"/Users/test/my_dbt_project_dir\"\n        )\n

    Raises:

    Type Description ValueError

    If required dbt_cli_profile is not provided when needed for profile writing.

    RuntimeError

    If the dbt seed fails for any reason, it will be indicated by the exception raised.

    Source code in prefect_dbt/cli/commands.py
    @task\nasync def run_dbt_seed(\n    profiles_dir: Optional[Union[Path, str]] = None,\n    project_dir: Optional[Union[Path, str]] = None,\n    overwrite_profiles: bool = False,\n    dbt_cli_profile: Optional[DbtCliProfile] = None,\n    create_artifact: bool = True,\n    artifact_key: str = \"dbt-seed-task-summary\",\n    **command_kwargs,\n):\n    \"\"\"\n    Executes the 'dbt seed' command within a Prefect task,\n    and optionally creates a Prefect artifact summarizing the dbt build results.\n\n    Args:\n        profiles_dir: The directory to search for the profiles.yml file. Setting this\n            appends the `--profiles-dir` option to the command provided.\n            If this is not set, will try using the DBT_PROFILES_DIR env variable,\n            but if that's also not set, will use the default directory `$HOME/.dbt/`.\n        project_dir: The directory to search for the dbt_project.yml file.\n            Default is the current working directory and its parents.\n        overwrite_profiles: Whether the existing profiles.yml file under profiles_dir\n            should be overwritten with a new profile.\n        dbt_cli_profile: Profiles class containing the profile written to profiles.yml.\n            Note! This is optional and will raise an error\n            if profiles.yml already exists under profile_dir\n            and overwrite_profiles is set to False.\n        create_artifact: If True, creates a Prefect artifact on the task run\n            with the dbt build results using the specified artifact key.\n            Defaults to True.\n        artifact_key: The key under which to store\n            the dbt build results artifact in Prefect.\n            Defaults to 'dbt-seed-task-summary'.\n\n    Example:\n    ```python\n        from prefect import flow\n        from prefect_dbt.cli.tasks import dbt_seed_task\n\n        @flow\n        def dbt_test_flow():\n            dbt_seed_task(\n                project_dir=\"/Users/test/my_dbt_project_dir\"\n            )\n    ```\n\n    Raises:\n        ValueError: If required dbt_cli_profile is not provided\n                    when needed for profile writing.\n        RuntimeError: If the dbt build fails for any reason,\n                    it will be indicated by the exception raised.\n    \"\"\"\n\n    results = await trigger_dbt_cli_command.fn(\n        command=\"seed\",\n        profiles_dir=profiles_dir,\n        project_dir=project_dir,\n        overwrite_profiles=overwrite_profiles,\n        dbt_cli_profile=dbt_cli_profile,\n        create_artifact=create_artifact,\n        artifact_key=artifact_key,\n        **command_kwargs,\n    )\n\n    return results\n
    "},{"location":"integrations/prefect-dbt/cli/commands/#prefect_dbt.cli.commands.run_dbt_snapshot","title":"run_dbt_snapshot async","text":"

    Executes the 'dbt snapshot' command within a Prefect task, and optionally creates a Prefect artifact summarizing the dbt snapshot results.

    Parameters:

    Name Type Description Default profiles_dir Optional[Union[Path, str]]

    The directory to search for the profiles.yml file. Setting this appends the --profiles-dir option to the command provided. If this is not set, will try using the DBT_PROFILES_DIR env variable, but if that's also not set, will use the default directory $HOME/.dbt/.

    None project_dir Optional[Union[Path, str]]

    The directory to search for the dbt_project.yml file. Default is the current working directory and its parents.

    None overwrite_profiles bool

    Whether the existing profiles.yml file under profiles_dir should be overwritten with a new profile.

    False dbt_cli_profile Optional[DbtCliProfile]

    Profiles class containing the profile written to profiles.yml. Note: this is optional and will raise an error if profiles.yml already exists under profiles_dir and overwrite_profiles is set to False.

    None create_artifact bool

    If True, creates a Prefect artifact on the task run with the dbt snapshot results using the specified artifact key. Defaults to True.

    True artifact_key str

    The key under which to store the dbt snapshot results artifact in Prefect. Defaults to 'dbt-snapshot-task-summary'.

    'dbt-snapshot-task-summary'
        from prefect import flow\n    from prefect_dbt.cli.commands import run_dbt_snapshot\n\n    @flow\n    def dbt_test_flow():\n        run_dbt_snapshot(\n            project_dir=\"/Users/test/my_dbt_project_dir\"\n        )\n

    Raises:

    Type Description ValueError

    If required dbt_cli_profile is not provided when needed for profile writing.

    RuntimeError

    If the dbt snapshot fails for any reason, it will be indicated by the exception raised.

    Source code in prefect_dbt/cli/commands.py
    @task\nasync def run_dbt_snapshot(\n    profiles_dir: Optional[Union[Path, str]] = None,\n    project_dir: Optional[Union[Path, str]] = None,\n    overwrite_profiles: bool = False,\n    dbt_cli_profile: Optional[DbtCliProfile] = None,\n    create_artifact: bool = True,\n    artifact_key: str = \"dbt-snapshot-task-summary\",\n    **command_kwargs,\n):\n    \"\"\"\n    Executes the 'dbt snapshot' command within a Prefect task,\n    and optionally creates a Prefect artifact summarizing the dbt build results.\n\n    Args:\n        profiles_dir: The directory to search for the profiles.yml file. Setting this\n            appends the `--profiles-dir` option to the command provided.\n            If this is not set, will try using the DBT_PROFILES_DIR env variable,\n            but if that's also not set, will use the default directory `$HOME/.dbt/`.\n        project_dir: The directory to search for the dbt_project.yml file.\n            Default is the current working directory and its parents.\n        overwrite_profiles: Whether the existing profiles.yml file under profiles_dir\n            should be overwritten with a new profile.\n        dbt_cli_profile: Profiles class containing the profile written to profiles.yml.\n            Note! This is optional and will raise an error\n            if profiles.yml already exists under profile_dir\n            and overwrite_profiles is set to False.\n        create_artifact: If True, creates a Prefect artifact on the task run\n            with the dbt build results using the specified artifact key.\n            Defaults to True.\n        artifact_key: The key under which to store\n            the dbt build results artifact in Prefect.\n            Defaults to 'dbt-snapshot-task-summary'.\n\n    Example:\n    ```python\n        from prefect import flow\n        from prefect_dbt.cli.tasks import dbt_snapshot_task\n\n        @flow\n        def dbt_test_flow():\n            dbt_snapshot_task(\n                project_dir=\"/Users/test/my_dbt_project_dir\"\n            )\n    ```\n\n    Raises:\n        ValueError: If required dbt_cli_profile is not provided\n                    when needed for profile writing.\n        RuntimeError: If the dbt build fails for any reason,\n                    it will be indicated by the exception raised.\n    \"\"\"\n\n    results = await trigger_dbt_cli_command.fn(\n        command=\"snapshot\",\n        profiles_dir=profiles_dir,\n        project_dir=project_dir,\n        overwrite_profiles=overwrite_profiles,\n        dbt_cli_profile=dbt_cli_profile,\n        create_artifact=create_artifact,\n        artifact_key=artifact_key,\n        **command_kwargs,\n    )\n\n    return results\n
    "},{"location":"integrations/prefect-dbt/cli/commands/#prefect_dbt.cli.commands.run_dbt_test","title":"run_dbt_test async","text":"

    Executes the 'dbt test' command within a Prefect task, and optionally creates a Prefect artifact summarizing the dbt test results.

    Parameters:

    Name Type Description Default profiles_dir Optional[Union[Path, str]]

    The directory to search for the profiles.yml file. Setting this appends the --profiles-dir option to the command provided. If this is not set, will try using the DBT_PROFILES_DIR env variable, but if that's also not set, will use the default directory $HOME/.dbt/.

    None project_dir Optional[Union[Path, str]]

    The directory to search for the dbt_project.yml file. Default is the current working directory and its parents.

    None overwrite_profiles bool

    Whether the existing profiles.yml file under profiles_dir should be overwritten with a new profile.

    False dbt_cli_profile Optional[DbtCliProfile]

    Profiles class containing the profile written to profiles.yml. Note: this is optional and will raise an error if profiles.yml already exists under profiles_dir and overwrite_profiles is set to False.

    None create_artifact bool

    If True, creates a Prefect artifact on the task run with the dbt test results using the specified artifact key. Defaults to True.

    True artifact_key str

    The key under which to store the dbt test results artifact in Prefect. Defaults to 'dbt-test-task-summary'.

    'dbt-test-task-summary'
        from prefect import flow\n    from prefect_dbt.cli.commands import run_dbt_test\n\n    @flow\n    def dbt_test_flow():\n        run_dbt_test(\n            project_dir=\"/Users/test/my_dbt_project_dir\"\n        )\n

    Raises:

    Type Description ValueError

    If required dbt_cli_profile is not provided when needed for profile writing.

    RuntimeError

    If the dbt test fails for any reason, it will be indicated by the exception raised.

    Source code in prefect_dbt/cli/commands.py
    @task\nasync def run_dbt_test(\n    profiles_dir: Optional[Union[Path, str]] = None,\n    project_dir: Optional[Union[Path, str]] = None,\n    overwrite_profiles: bool = False,\n    dbt_cli_profile: Optional[DbtCliProfile] = None,\n    create_artifact: bool = True,\n    artifact_key: str = \"dbt-test-task-summary\",\n    **command_kwargs,\n):\n    \"\"\"\n    Executes the 'dbt test' command within a Prefect task,\n    and optionally creates a Prefect artifact summarizing the dbt build results.\n\n    Args:\n        profiles_dir: The directory to search for the profiles.yml file. Setting this\n            appends the `--profiles-dir` option to the command provided.\n            If this is not set, will try using the DBT_PROFILES_DIR env variable,\n            but if that's also not set, will use the default directory `$HOME/.dbt/`.\n        project_dir: The directory to search for the dbt_project.yml file.\n            Default is the current working directory and its parents.\n        overwrite_profiles: Whether the existing profiles.yml file under profiles_dir\n            should be overwritten with a new profile.\n        dbt_cli_profile: Profiles class containing the profile written to profiles.yml.\n            Note! This is optional and will raise an error\n            if profiles.yml already exists under profile_dir\n            and overwrite_profiles is set to False.\n        create_artifact: If True, creates a Prefect artifact on the task run\n            with the dbt build results using the specified artifact key.\n            Defaults to True.\n        artifact_key: The key under which to store\n            the dbt test results artifact in Prefect.\n            Defaults to 'dbt-test-task-summary'.\n\n    Example:\n    ```python\n        from prefect import flow\n        from prefect_dbt.cli.tasks import dbt_test_task\n\n        @flow\n        def dbt_test_flow():\n            dbt_test_task(\n                project_dir=\"/Users/test/my_dbt_project_dir\"\n            )\n    ```\n\n    Raises:\n        ValueError: If required dbt_cli_profile is not provided\n                    when needed for profile writing.\n        RuntimeError: If the dbt build fails for any reason,\n                    it will be indicated by the exception raised.\n    \"\"\"\n\n    results = await trigger_dbt_cli_command.fn(\n        command=\"test\",\n        profiles_dir=profiles_dir,\n        project_dir=project_dir,\n        overwrite_profiles=overwrite_profiles,\n        dbt_cli_profile=dbt_cli_profile,\n        create_artifact=create_artifact,\n        artifact_key=artifact_key,\n        **command_kwargs,\n    )\n\n    return results\n
    "},{"location":"integrations/prefect-dbt/cli/commands/#prefect_dbt.cli.commands.trigger_dbt_cli_command","title":"trigger_dbt_cli_command async","text":"

    Task for running dbt commands.

    If no profiles.yml file is found, or if the overwrite_profiles flag is set to True, this task first generates a profiles.yml file in the profiles_dir directory and then runs the dbt CLI command.

    Parameters:

    Name Type Description Default command str

    The dbt command to be executed.

    required profiles_dir Optional[Union[Path, str]]

    The directory to search for the profiles.yml file. Setting this appends the --profiles-dir option to the command provided. If this is not set, will try using the DBT_PROFILES_DIR environment variable, but if that's also not set, will use the default directory $HOME/.dbt/.

    None project_dir Optional[Union[Path, str]]

    The directory to search for the dbt_project.yml file. Default is the current working directory and its parents.

    None overwrite_profiles bool

    Whether the existing profiles.yml file under profiles_dir should be overwritten with a new profile.

    False dbt_cli_profile Optional[DbtCliProfile]

    Profiles class containing the profile written to profiles.yml. Note: this is optional and will raise an error if profiles.yml already exists under profiles_dir and overwrite_profiles is set to False.

    None **command_kwargs

    Additional keyword arguments passed through to the dbt CLI invocation.

    required

    Returns:

    Name Type Description result Optional[dbtRunnerResult]

    The dbtRunnerResult of the dbt invocation, indicating whether the command succeeded and carrying the individual node results.

    Examples:

    Execute dbt debug with a pre-populated profiles.yml.

    from prefect import flow\nfrom prefect_dbt.cli.commands import trigger_dbt_cli_command\n\n@flow\ndef trigger_dbt_cli_command_flow():\n    result = trigger_dbt_cli_command(\"dbt debug\")\n    return result\n\ntrigger_dbt_cli_command_flow()\n

    Execute dbt debug without a pre-populated profiles.yml.

    from prefect import flow\nfrom prefect_dbt.cli.credentials import DbtCliProfile\nfrom prefect_dbt.cli.commands import trigger_dbt_cli_command\nfrom prefect_dbt.cli.configs import SnowflakeTargetConfigs\nfrom prefect_snowflake.credentials import SnowflakeCredentials\nfrom prefect_snowflake.database import SnowflakeConnector\n\n@flow\ndef trigger_dbt_cli_command_flow():\n    credentials = SnowflakeCredentials(\n        user=\"user\",\n        password=\"password\",\n        account=\"account.region.aws\",\n        role=\"role\",\n    )\n    connector = SnowflakeConnector(\n        schema=\"public\",\n        database=\"database\",\n        warehouse=\"warehouse\",\n        credentials=credentials,\n    )\n    target_configs = SnowflakeTargetConfigs(\n        connector=connector\n    )\n    dbt_cli_profile = DbtCliProfile(\n        name=\"jaffle_shop\",\n        target=\"dev\",\n        target_configs=target_configs,\n    )\n    result = trigger_dbt_cli_command(\n        \"dbt debug\",\n        overwrite_profiles=True,\n        dbt_cli_profile=dbt_cli_profile\n    )\n    return result\n\ntrigger_dbt_cli_command_flow()\n
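
    Inspect the returned result (a sketch, not from the original docs; it assumes a pre-populated profiles.yml and that the dbtRunnerResult exposes success and result attributes, as used in the source below):

    from prefect import flow\nfrom prefect_dbt.cli.commands import trigger_dbt_cli_command\n\n@flow\ndef inspect_dbt_result_flow():\n    result = trigger_dbt_cli_command(\"dbt run\")\n    # print each node's status if the invocation produced run results\n    if result is not None and result.success:\n        for node_result in result.result.results:\n            print(node_result.node.name, node_result.status)\n    return result\n\ninspect_dbt_result_flow()\n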

    Source code in prefect_dbt/cli/commands.py
    @task\nasync def trigger_dbt_cli_command(\n    command: str,\n    profiles_dir: Optional[Union[Path, str]] = None,\n    project_dir: Optional[Union[Path, str]] = None,\n    overwrite_profiles: bool = False,\n    dbt_cli_profile: Optional[DbtCliProfile] = None,\n    create_artifact: bool = True,\n    artifact_key: str = \"dbt-cli-command-summary\",\n    **command_kwargs: Dict[str, Any],\n) -> Optional[dbtRunnerResult]:\n    \"\"\"\n    Task for running dbt commands.\n\n    If no profiles.yml file is found or if overwrite_profiles flag is set to True, this\n    will first generate a profiles.yml file in the profiles_dir directory. Then run the dbt\n    CLI shell command.\n\n    Args:\n        command: The dbt command to be executed.\n        profiles_dir: The directory to search for the profiles.yml file. Setting this\n            appends the `--profiles-dir` option to the command provided. If this is not set,\n            will try using the DBT_PROFILES_DIR environment variable, but if that's also not\n            set, will use the default directory `$HOME/.dbt/`.\n        project_dir: The directory to search for the dbt_project.yml file.\n            Default is the current working directory and its parents.\n        overwrite_profiles: Whether the existing profiles.yml file under profiles_dir\n            should be overwritten with a new profile.\n        dbt_cli_profile: Profiles class containing the profile written to profiles.yml.\n            Note! This is optional and will raise an error if profiles.yml already exists\n            under profile_dir and overwrite_profiles is set to False.\n        **shell_run_command_kwargs: Additional keyword arguments to pass to\n            [shell_run_command](https://prefecthq.github.io/prefect-shell/commands/#prefect_shell.commands.shell_run_command).\n\n    Returns:\n        last_line_cli_output (str): The last line of the CLI output will be returned\n            if `return_all` in `shell_run_command_kwargs` is False. 
This is the default\n            behavior.\n        full_cli_output (List[str]): Full CLI output will be returned if `return_all`\n            in `shell_run_command_kwargs` is True.\n\n    Examples:\n        Execute `dbt debug` with a pre-populated profiles.yml.\n        ```python\n        from prefect import flow\n        from prefect_dbt.cli.commands import trigger_dbt_cli_command\n\n        @flow\n        def trigger_dbt_cli_command_flow():\n            result = trigger_dbt_cli_command(\"dbt debug\")\n            return result\n\n        trigger_dbt_cli_command_flow()\n        ```\n\n        Execute `dbt debug` without a pre-populated profiles.yml.\n        ```python\n        from prefect import flow\n        from prefect_dbt.cli.credentials import DbtCliProfile\n        from prefect_dbt.cli.commands import trigger_dbt_cli_command\n        from prefect_dbt.cli.configs import SnowflakeTargetConfigs\n        from prefect_snowflake.credentials import SnowflakeCredentials\n\n        @flow\n        def trigger_dbt_cli_command_flow():\n            credentials = SnowflakeCredentials(\n                user=\"user\",\n                password=\"password\",\n                account=\"account.region.aws\",\n                role=\"role\",\n            )\n            connector = SnowflakeConnector(\n                schema=\"public\",\n                database=\"database\",\n                warehouse=\"warehouse\",\n                credentials=credentials,\n            )\n            target_configs = SnowflakeTargetConfigs(\n                connector=connector\n            )\n            dbt_cli_profile = DbtCliProfile(\n                name=\"jaffle_shop\",\n                target=\"dev\",\n                target_configs=target_configs,\n            )\n            result = trigger_dbt_cli_command(\n                \"dbt debug\",\n                overwrite_profiles=True,\n                dbt_cli_profile=dbt_cli_profile\n            )\n            return result\n\n        trigger_dbt_cli_command_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n\n    if profiles_dir is None:\n        profiles_dir = os.getenv(\"DBT_PROFILES_DIR\", str(Path.home()) + \"/.dbt\")\n\n    if command.startswith(\"dbt\"):\n        command = command.split(\" \", 1)[1]\n\n    # https://docs.getdbt.com/dbt-cli/configure-your-profile\n    # Note that the file always needs to be called profiles.yml,\n    # regardless of which directory it is in.\n    profiles_path = profiles_dir + \"/profiles.yml\"\n    logger.debug(f\"Using this profiles path: {profiles_path}\")\n\n    # write the profile if overwrite or no profiles exist\n    if overwrite_profiles or not Path(profiles_path).expanduser().exists():\n        if dbt_cli_profile is None:\n            raise ValueError(\"Provide `dbt_cli_profile` keyword for writing profiles\")\n        profile = dbt_cli_profile.get_profile()\n        Path(profiles_dir).expanduser().mkdir(exist_ok=True)\n        with open(profiles_path, \"w+\") as f:\n            yaml.dump(profile, f, default_flow_style=False)\n        logger.info(f\"Wrote profile to {profiles_path}\")\n    elif dbt_cli_profile is not None:\n        raise ValueError(\n            f\"Since overwrite_profiles is False and profiles_path ({profiles_path}) \"\n            f\"already exists, the profile within dbt_cli_profile could not be used; \"\n            f\"if the existing profile is satisfactory, do not pass dbt_cli_profile\"\n        )\n\n    # append the options\n    cli_args = [command]\n    
cli_args.append(\"--profiles-dir\")\n    cli_args.append(profiles_dir)\n    if project_dir is not None:\n        project_dir = Path(project_dir).expanduser()\n        cli_args.append(\"--project-dir\")\n        cli_args.append(project_dir)\n\n    if command_kwargs:\n        cli_args.append(command_kwargs)\n\n    # fix up empty shell_run_command_kwargs\n    dbt_runner_client = dbtRunner()\n    logger.info(f\"Running dbt command: {cli_args}\")\n    result: dbtRunnerResult = dbt_runner_client.invoke(cli_args)\n\n    if result.exception is not None:\n        logger.error(f\"dbt build task failed with exception: {result.exception}\")\n        raise result.exception\n\n    # Creating the dbt Summary Markdown if enabled\n    if create_artifact and isinstance(result.result, RunExecutionResult):\n        markdown = create_summary_markdown(result, command)\n        artifact_id = await create_markdown_artifact(\n            markdown=markdown,\n            key=artifact_key,\n        )\n        if not artifact_id:\n            logger.error(f\"Artifact was not created for dbt {command} task\")\n        else:\n            logger.info(\n                f\"dbt {command} task completed successfully with artifact {artifact_id}\"\n            )\n    else:\n        logger.debug(\n            f\"Artifact was not created for dbt {command} this task \\\n                     due to create_artifact=False or the dbt command did not \\\n                     return any RunExecutionResults. \\\n                     See https://docs.getdbt.com/reference/programmatic-invocations \\\n                     for more details on dbtRunnerResult.\"\n        )\n    return result\n
    "},{"location":"integrations/prefect-dbt/cli/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-dbt/cli/credentials/#prefect_dbt.cli.credentials","title":"prefect_dbt.cli.credentials","text":"

    Module containing credentials for interacting with dbt CLI

    "},{"location":"integrations/prefect-dbt/cli/credentials/#prefect_dbt.cli.credentials.DbtCliProfile","title":"DbtCliProfile","text":"

    Bases: Block

    Profile for use across dbt CLI tasks and flows.

    Attributes:

    Name Type Description name str

    Profile name used for populating profiles.yml.

    target str

    The default target your dbt project will use.

    target_configs TargetConfigs

    Target configs contain credentials and settings, specific to the warehouse you're connecting to. To find valid keys, head to the Available adapters page and click the desired adapter's \"Profile Setup\" hyperlink.

    global_configs GlobalConfigs

    Global configs control things like the visual output of logs, the manner in which dbt parses your project, and what to do when dbt finds a version mismatch or a failing model. Valid keys can be found at https://docs.getdbt.com/reference/global-configs.

    Examples:

    Load stored dbt CLI profile:

    from prefect_dbt.cli import DbtCliProfile\ndbt_cli_profile = DbtCliProfile.load(\"BLOCK_NAME\").get_profile()\n

    Get a dbt Snowflake profile from DbtCliProfile by using SnowflakeTargetConfigs:

    from prefect_dbt.cli import DbtCliProfile\nfrom prefect_dbt.cli.configs import SnowflakeTargetConfigs\nfrom prefect_snowflake.credentials import SnowflakeCredentials\nfrom prefect_snowflake.database import SnowflakeConnector\n\ncredentials = SnowflakeCredentials(\n    user=\"user\",\n    password=\"password\",\n    account=\"account.region.aws\",\n    role=\"role\",\n)\nconnector = SnowflakeConnector(\n    schema=\"public\",\n    database=\"database\",\n    warehouse=\"warehouse\",\n    credentials=credentials,\n)\ntarget_configs = SnowflakeTargetConfigs(\n    connector=connector\n)\ndbt_cli_profile = DbtCliProfile(\n    name=\"jaffle_shop\",\n    target=\"dev\",\n    target_configs=target_configs,\n)\nprofile = dbt_cli_profile.get_profile()\n

    Get a dbt Redshift profile from DbtCliProfile by using generic TargetConfigs:

    from prefect_dbt.cli import DbtCliProfile\nfrom prefect_dbt.cli.configs import GlobalConfigs, TargetConfigs\n\ntarget_configs_extras = dict(\n    host=\"hostname.region.redshift.amazonaws.com\",\n    user=\"username\",\n    password=\"password1\",\n    port=5439,\n    dbname=\"analytics\",\n)\ntarget_configs = TargetConfigs(\n    type=\"redshift\",\n    schema=\"schema\",\n    threads=4,\n    extras=target_configs_extras\n)\ndbt_cli_profile = DbtCliProfile(\n    name=\"jaffle_shop\",\n    target=\"dev\",\n    target_configs=target_configs,\n)\nprofile = dbt_cli_profile.get_profile()\n

    Source code in prefect_dbt/cli/credentials.py
    class DbtCliProfile(Block):\n    \"\"\"\n    Profile for use across dbt CLI tasks and flows.\n\n    Attributes:\n        name (str): Profile name used for populating profiles.yml.\n        target (str): The default target your dbt project will use.\n        target_configs (TargetConfigs): Target configs contain credentials and\n            settings, specific to the warehouse you're connecting to.\n            To find valid keys, head to the [Available adapters](\n            https://docs.getdbt.com/docs/available-adapters) page and\n            click the desired adapter's \"Profile Setup\" hyperlink.\n        global_configs (GlobalConfigs): Global configs control\n            things like the visual output of logs, the manner\n            in which dbt parses your project, and what to do when\n            dbt finds a version mismatch or a failing model.\n            Valid keys can be found [here](\n            https://docs.getdbt.com/reference/global-configs).\n\n    Examples:\n        Load stored dbt CLI profile:\n        ```python\n        from prefect_dbt.cli import DbtCliProfile\n        dbt_cli_profile = DbtCliProfile.load(\"BLOCK_NAME\").get_profile()\n        ```\n\n        Get a dbt Snowflake profile from DbtCliProfile by using SnowflakeTargetConfigs:\n        ```python\n        from prefect_dbt.cli import DbtCliProfile\n        from prefect_dbt.cli.configs import SnowflakeTargetConfigs\n        from prefect_snowflake.credentials import SnowflakeCredentials\n        from prefect_snowflake.database import SnowflakeConnector\n\n        credentials = SnowflakeCredentials(\n            user=\"user\",\n            password=\"password\",\n            account=\"account.region.aws\",\n            role=\"role\",\n        )\n        connector = SnowflakeConnector(\n            schema=\"public\",\n            database=\"database\",\n            warehouse=\"warehouse\",\n            credentials=credentials,\n        )\n        target_configs = SnowflakeTargetConfigs(\n            connector=connector\n        )\n        dbt_cli_profile = DbtCliProfile(\n            name=\"jaffle_shop\",\n            target=\"dev\",\n            target_configs=target_configs,\n        )\n        profile = dbt_cli_profile.get_profile()\n        ```\n\n        Get a dbt Redshift profile from DbtCliProfile by using generic TargetConfigs:\n        ```python\n        from prefect_dbt.cli import DbtCliProfile\n        from prefect_dbt.cli.configs import GlobalConfigs, TargetConfigs\n\n        target_configs_extras = dict(\n            host=\"hostname.region.redshift.amazonaws.com\",\n            user=\"username\",\n            password=\"password1\",\n            port=5439,\n            dbname=\"analytics\",\n        )\n        target_configs = TargetConfigs(\n            type=\"redshift\",\n            schema=\"schema\",\n            threads=4,\n            extras=target_configs_extras\n        )\n        dbt_cli_profile = DbtCliProfile(\n            name=\"jaffle_shop\",\n            target=\"dev\",\n            target_configs=target_configs,\n        )\n        profile = dbt_cli_profile.get_profile()\n        ```\n    \"\"\"\n\n    _block_type_name = \"dbt CLI Profile\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5zE9lxfzBHjw3tnEup4wWL/9a001902ed43a84c6c96d23b24622e19/dbt-bit_tm.png?h=250\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-dbt/cli/credentials/#prefect_dbt.cli.credentials.DbtCliProfile\"  # noqa\n\n    name: str = Field(\n        default=..., 
description=\"Profile name used for populating profiles.yml.\"\n    )\n    target: str = Field(\n        default=..., description=\"The default target your dbt project will use.\"\n    )\n    target_configs: Union[\n        SnowflakeTargetConfigs,\n        BigQueryTargetConfigs,\n        PostgresTargetConfigs,\n        TargetConfigs,\n    ] = Field(\n        default=...,\n        description=(\n            \"Target configs contain credentials and settings, specific to the \"\n            \"warehouse you're connecting to.\"\n        ),\n    )\n    global_configs: Optional[GlobalConfigs] = Field(\n        default=None,\n        description=(\n            \"Global configs control things like the visual output of logs, the manner \"\n            \"in which dbt parses your project, and what to do when dbt finds a version \"\n            \"mismatch or a failing model.\"\n        ),\n    )\n\n    def get_profile(self) -> Dict[str, Any]:\n        \"\"\"\n        Returns the dbt profile, likely used for writing to profiles.yml.\n\n        Returns:\n            A JSON compatible dictionary with the expected format of profiles.yml.\n        \"\"\"\n        profile = {\n            \"config\": self.global_configs.get_configs() if self.global_configs else {},\n            self.name: {\n                \"target\": self.target,\n                \"outputs\": {self.target: self.target_configs.get_configs()},\n            },\n        }\n        return profile\n
    "},{"location":"integrations/prefect-dbt/cli/credentials/#prefect_dbt.cli.credentials.DbtCliProfile.get_profile","title":"get_profile","text":"

    Returns the dbt profile, likely used for writing to profiles.yml.

    Returns:

    Type Description Dict[str, Any]

    A JSON compatible dictionary with the expected format of profiles.yml.

    Source code in prefect_dbt/cli/credentials.py
    def get_profile(self) -> Dict[str, Any]:\n    \"\"\"\n    Returns the dbt profile, likely used for writing to profiles.yml.\n\n    Returns:\n        A JSON compatible dictionary with the expected format of profiles.yml.\n    \"\"\"\n    profile = {\n        \"config\": self.global_configs.get_configs() if self.global_configs else {},\n        self.name: {\n            \"target\": self.target,\n            \"outputs\": {self.target: self.target_configs.get_configs()},\n        },\n    }\n    return profile\n
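
    For illustration only (the block name is a placeholder), the returned dictionary can be written to profiles.yml with PyYAML, mirroring what the CLI tasks do internally:

    import yaml\nfrom prefect_dbt.cli import DbtCliProfile\n\ndbt_cli_profile = DbtCliProfile.load(\"DBT-CLI-PROFILE-BLOCK-NAME-PLACEHOLDER\")\nprofile = dbt_cli_profile.get_profile()\n\n# write the profile dictionary in the expected profiles.yml format\nwith open(\"profiles.yml\", \"w\") as f:\n    yaml.dump(profile, f, default_flow_style=False)\n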
    "},{"location":"integrations/prefect-dbt/cli/configs/base/","title":"Base","text":""},{"location":"integrations/prefect-dbt/cli/configs/base/#prefect_dbt.cli.configs.base","title":"prefect_dbt.cli.configs.base","text":"

    Module containing models for base configs

    "},{"location":"integrations/prefect-dbt/cli/configs/base/#prefect_dbt.cli.configs.base.DbtConfigs","title":"DbtConfigs","text":"

    Bases: Block, ABC

    Abstract class for other dbt Configs.

    Attributes:

    Name Type Description extras Optional[Dict[str, Any]]

    Extra target configs' keywords, not yet exposed in prefect-dbt, but available in dbt; if there are duplicate keys between extras and TargetConfigs, an error will be raised.

    Source code in prefect_dbt/cli/configs/base.py
    class DbtConfigs(Block, abc.ABC):\n    \"\"\"\n    Abstract class for other dbt Configs.\n\n    Attributes:\n        extras: Extra target configs' keywords, not yet exposed\n            in prefect-dbt, but available in dbt; if there are\n            duplicate keys between extras and TargetConfigs,\n            an error will be raised.\n    \"\"\"\n\n    extras: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=(\n            \"Extra target configs' keywords, not yet exposed in prefect-dbt, \"\n            \"but available in dbt.\"\n        ),\n    )\n    allow_field_overrides: bool = Field(\n        default=False,\n        description=(\n            \"If enabled, fields from dbt target configs will override \"\n            \"fields provided in extras and credentials.\"\n        ),\n    )\n    _documentation_url = \"https://prefecthq.github.io/prefect-dbt/cli/configs/base/#prefect_dbt.cli.configs.base.DbtConfigs\"  # noqa\n\n    def _populate_configs_json(\n        self,\n        configs_json: Dict[str, Any],\n        fields: Dict[str, Any],\n        model: BaseModel = None,\n    ) -> Dict[str, Any]:\n        \"\"\"\n        Recursively populate configs_json.\n        \"\"\"\n        # if allow_field_overrides is True keys from TargetConfigs take precedence\n        override_configs_json = {}\n\n        for field_name, field in fields.items():\n            if model is not None:\n                # get actual value from model\n                try:\n                    field_value = getattr(model, field_name)\n                except AttributeError:\n                    field_value = getattr(model, field.alias)\n                # override the name with alias so dbt parser can recognize the keyword;\n                # e.g. 
schema_ -> schema, returns the original name if no alias is set\n                field_name = field.alias\n            else:\n                field_value = field\n\n            if field_value is None or field_name == \"allow_field_overrides\":\n                # do not add to configs json if no value or default is set\n                continue\n\n            if isinstance(field_value, BaseModel):\n                configs_json = self._populate_configs_json(\n                    configs_json, field_value.__fields__, model=field_value\n                )\n            elif field_name == \"extras\":\n                configs_json = self._populate_configs_json(\n                    configs_json,\n                    field_value,\n                )\n                override_configs_json.update(configs_json)\n            else:\n                if field_name in configs_json.keys() and not self.allow_field_overrides:\n                    raise ValueError(\n                        f\"The keyword, {field_name}, has already been provided in \"\n                        f\"TargetConfigs; remove duplicated keywords to continue\"\n                    )\n                if isinstance(field_value, SecretField):\n                    field_value = field_value.get_secret_value()\n                elif isinstance(field_value, Path):\n                    field_value = str(field_value)\n                configs_json[field_name] = field_value\n\n                if self.allow_field_overrides and model is self or model is None:\n                    override_configs_json[field_name] = field_value\n\n        configs_json.update(override_configs_json)\n        return configs_json\n\n    def get_configs(self) -> Dict[str, Any]:\n        \"\"\"\n        Returns the dbt configs, likely used eventually for writing to profiles.yml.\n\n        Returns:\n            A configs JSON.\n        \"\"\"\n        return self._populate_configs_json({}, self.__fields__, model=self)\n
    "},{"location":"integrations/prefect-dbt/cli/configs/base/#prefect_dbt.cli.configs.base.DbtConfigs.get_configs","title":"get_configs","text":"

    Returns the dbt configs, likely used eventually for writing to profiles.yml.

    Returns:

    Dict[str, Any]: A configs JSON.

    Source code in prefect_dbt/cli/configs/base.py
    def get_configs(self) -> Dict[str, Any]:\n    \"\"\"\n    Returns the dbt configs, likely used eventually for writing to profiles.yml.\n\n    Returns:\n        A configs JSON.\n    \"\"\"\n    return self._populate_configs_json({}, self.__fields__, model=self)\n
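    As a minimal sketch of what get_configs produces (assuming a TargetConfigs block named BLOCK_NAME was previously saved; the name is a placeholder):

    from prefect_dbt.cli.configs import TargetConfigs\n\n# load a previously saved block; \"BLOCK_NAME\" is a placeholder\ndbt_cli_target_configs = TargetConfigs.load(\"BLOCK_NAME\")\n\n# returns a plain dict of target keywords, e.g. type, schema, threads\nconfigs = dbt_cli_target_configs.get_configs()\nprint(configs)\n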
    "},{"location":"integrations/prefect-dbt/cli/configs/base/#prefect_dbt.cli.configs.base.GlobalConfigs","title":"GlobalConfigs","text":"

    Bases: DbtConfigs

    Global configs control things like the visual output of logs, the manner in which dbt parses your project, and what to do when dbt finds a version mismatch or a failing model. Docs can be found in the dbt global configs reference: https://docs.getdbt.com/reference/global-configs.

    Attributes:

    send_anonymous_usage_stats (Optional[bool]): Whether usage stats are sent to dbt.
    use_colors (Optional[bool]): Colorize the output dbt prints in your terminal.
    partial_parse (Optional[bool]): When partial parsing is enabled, dbt will use a stored internal manifest to determine which files have changed (if any) since it last parsed the project.
    printer_width (Optional[int]): Length of characters before starting a new line.
    write_json (Optional[bool]): Determines whether dbt writes JSON artifacts to the target/ directory.
    warn_error (Optional[bool]): Whether to convert dbt warnings into errors.
    log_format (Optional[str]): Specifies how dbt's logs should be formatted. If the value is json, dbt outputs fully structured logs in JSON format.
    debug (Optional[bool]): Whether to redirect dbt's debug logs to standard out.
    version_check (Optional[bool]): Whether to raise an error if a project's version is used with an incompatible dbt version.
    fail_fast (Optional[bool]): Make dbt exit immediately if a single resource fails to build.
    use_experimental_parser (Optional[bool]): Opt into the latest experimental version of the static parser.
    static_parser (Optional[bool]): Whether to use the static parser.

    Examples:

    Load stored GlobalConfigs:

    from prefect_dbt.cli.configs import GlobalConfigs\n\ndbt_cli_global_configs = GlobalConfigs.load(\"BLOCK_NAME\")\n

    Source code in prefect_dbt/cli/configs/base.py
    class GlobalConfigs(DbtConfigs):\n    \"\"\"\n    Global configs control things like the visual output\n    of logs, the manner in which dbt parses your project,\n    and what to do when dbt finds a version mismatch\n    or a failing model. Docs can be found [here](\n    https://docs.getdbt.com/reference/global-configs).\n\n    Attributes:\n        send_anonymous_usage_stats: Whether usage stats are sent to dbt.\n        use_colors: Colorize the output it prints in your terminal.\n        partial_parse: When partial parsing is enabled, dbt will use an\n            stored internal manifest to determine which files have been changed\n            (if any) since it last parsed the project.\n        printer_width: Length of characters before starting a new line.\n        write_json: Determines whether dbt writes JSON artifacts to\n            the target/ directory.\n        warn_error: Whether to convert dbt warnings into errors.\n        log_format: The LOG_FORMAT config specifies how dbt's logs should\n            be formatted. If the value of this config is json, dbt will\n            output fully structured logs in JSON format.\n        debug: Whether to redirect dbt's debug logs to standard out.\n        version_check: Whether to raise an error if a project's version\n            is used with an incompatible dbt version.\n        fail_fast: Make dbt exit immediately if a single resource fails to build.\n        use_experimental_parser: Opt into the latest experimental version\n            of the static parser.\n        static_parser: Whether to use the [static parser](\n            https://docs.getdbt.com/reference/parsing#static-parser).\n\n    Examples:\n        Load stored GlobalConfigs:\n        ```python\n        from prefect_dbt.cli.configs import GlobalConfigs\n\n        dbt_cli_global_configs = GlobalConfigs.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"dbt CLI Global Configs\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5zE9lxfzBHjw3tnEup4wWL/9a001902ed43a84c6c96d23b24622e19/dbt-bit_tm.png?h=250\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-dbt/cli/configs/base/#prefect_dbt.cli.configs.base.GlobalConfigs\"  # noqa\n\n    send_anonymous_usage_stats: Optional[bool] = Field(\n        default=None,\n        description=\"Whether usage stats are sent to dbt.\",\n    )\n    use_colors: Optional[bool] = Field(\n        default=None,\n        description=\"Colorize the output it prints in your terminal.\",\n    )\n    partial_parse: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"When partial parsing is enabled, dbt will use an \"\n            \"stored internal manifest to determine which files have been changed \"\n            \"(if any) since it last parsed the project.\"\n        ),\n    )\n    printer_width: Optional[int] = Field(\n        default=None,\n        description=\"Length of characters before starting a new line.\",\n    )\n    write_json: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Determines whether dbt writes JSON artifacts to \" \"the target/ directory.\"\n        ),\n    )\n    warn_error: Optional[bool] = Field(\n        default=None,\n        description=\"Whether to convert dbt warnings into errors.\",\n    )\n    log_format: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The LOG_FORMAT config specifies how dbt's logs should \"\n            \"be formatted. 
If the value of this config is json, dbt will \"\n            \"output fully structured logs in JSON format.\"\n        ),\n    )\n    debug: Optional[bool] = Field(\n        default=None,\n        description=\"Whether to redirect dbt's debug logs to standard out.\",\n    )\n    version_check: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Whether to raise an error if a project's version \"\n            \"is used with an incompatible dbt version.\"\n        ),\n    )\n    fail_fast: Optional[bool] = Field(\n        default=None,\n        description=(\"Make dbt exit immediately if a single resource fails to build.\"),\n    )\n    use_experimental_parser: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Opt into the latest experimental version \" \"of the static parser.\"\n        ),\n    )\n    static_parser: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Whether to use the [static parser](https://docs.getdbt.com/reference/parsing#static-parser).\"  # noqa\n        ),\n    )\n
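    For comparison, a hedged instantiation sketch (the values are placeholders; every field defaults to None, so only the configs you set are emitted):

    from prefect_dbt.cli.configs import GlobalConfigs\n\ndbt_cli_global_configs = GlobalConfigs(\n    log_format=\"json\",  # emit structured JSON logs\n    fail_fast=True,      # stop on the first failing resource\n    use_colors=False,\n)\n# optionally persist the block for reuse; \"BLOCK_NAME\" is a placeholder\ndbt_cli_global_configs.save(\"BLOCK_NAME\", overwrite=True)\n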
    "},{"location":"integrations/prefect-dbt/cli/configs/base/#prefect_dbt.cli.configs.base.TargetConfigs","title":"TargetConfigs","text":"

    Bases: BaseTargetConfigs

    Target configs contain credentials and settings, specific to the warehouse you're connecting to. To find valid keys, head to the Available adapters page and click the desired adapter's \"Profile Setup\" hyperlink.

    Attributes:

    type: The name of the database warehouse.
    schema: The schema that dbt will build objects into; in BigQuery, a schema is actually a dataset.
    threads: The number of threads representing the max number of paths through the graph dbt may work on at once.

    Examples:

    Load stored TargetConfigs:

    from prefect_dbt.cli.configs import TargetConfigs\n\ndbt_cli_target_configs = TargetConfigs.load(\"BLOCK_NAME\")\n

    Source code in prefect_dbt/cli/configs/base.py
    class TargetConfigs(BaseTargetConfigs):\n    \"\"\"\n    Target configs contain credentials and\n    settings, specific to the warehouse you're connecting to.\n    To find valid keys, head to the [Available adapters](\n    https://docs.getdbt.com/docs/available-adapters) page and\n    click the desired adapter's \"Profile Setup\" hyperlink.\n\n    Attributes:\n        type: The name of the database warehouse.\n        schema: The schema that dbt will build objects into;\n            in BigQuery, a schema is actually a dataset.\n        threads: The number of threads representing the max number\n            of paths through the graph dbt may work on at once.\n\n    Examples:\n        Load stored TargetConfigs:\n        ```python\n        from prefect_dbt.cli.configs import TargetConfigs\n\n        dbt_cli_target_configs = TargetConfigs.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"dbt CLI Target Configs\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5zE9lxfzBHjw3tnEup4wWL/9a001902ed43a84c6c96d23b24622e19/dbt-bit_tm.png?h=250\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-dbt/cli/configs/base/#prefect_dbt.cli.configs.base.TargetConfigs\"  # noqa\n
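    A hedged instantiation sketch (the adapter, schema, and extras values are illustrative placeholders; adapter-specific profile keys can be passed through extras):

    from prefect_dbt.cli.configs import TargetConfigs\n\ntarget_configs = TargetConfigs(\n    type=\"duckdb\",   # placeholder adapter name\n    schema=\"main\",\n    threads=4,\n    extras={\"path\": \"/tmp/example.duckdb\"},  # illustrative adapter-specific key\n)\n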
    "},{"location":"integrations/prefect-dbt/cli/configs/bigquery/","title":"BigQuery","text":""},{"location":"integrations/prefect-dbt/cli/configs/bigquery/#prefect_dbt.cli.configs.bigquery","title":"prefect_dbt.cli.configs.bigquery","text":"

    Module containing models for BigQuery configs

    "},{"location":"integrations/prefect-dbt/cli/configs/bigquery/#prefect_dbt.cli.configs.bigquery.BigQueryTargetConfigs","title":"BigQueryTargetConfigs","text":"

    Bases: BaseTargetConfigs

    Target configs contain credentials and settings, specific to BigQuery. To find valid keys, head to the BigQuery Profile page.

    Attributes:

    credentials (GcpCredentials): The credentials to use to authenticate; if there are duplicate keys between credentials and TargetConfigs, e.g. schema, an error will be raised.

    Examples:

    Load stored BigQueryTargetConfigs.

    from prefect_dbt.cli.configs import BigQueryTargetConfigs\n\nbigquery_target_configs = BigQueryTargetConfigs.load(\"BLOCK_NAME\")\n

    Instantiate BigQueryTargetConfigs.

    from prefect_dbt.cli.configs import BigQueryTargetConfigs\nfrom prefect_gcp.credentials import GcpCredentials\n\ncredentials = GcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\ntarget_configs = BigQueryTargetConfigs(\n    schema=\"schema\",  # also known as dataset\n    credentials=credentials,\n)\n

    Source code in prefect_dbt/cli/configs/bigquery.py
    class BigQueryTargetConfigs(BaseTargetConfigs):\n    \"\"\"\n    Target configs contain credentials and\n    settings, specific to BigQuery.\n    To find valid keys, head to the [BigQuery Profile](\n    https://docs.getdbt.com/reference/warehouse-profiles/bigquery-profile)\n    page.\n\n    Attributes:\n        credentials: The credentials to use to authenticate; if there are\n            duplicate keys between credentials and TargetConfigs,\n            e.g. schema, an error will be raised.\n\n    Examples:\n        Load stored BigQueryTargetConfigs.\n        ```python\n        from prefect_dbt.cli.configs import BigQueryTargetConfigs\n\n        bigquery_target_configs = BigQueryTargetConfigs.load(\"BLOCK_NAME\")\n        ```\n\n        Instantiate BigQueryTargetConfigs.\n        ```python\n        from prefect_dbt.cli.configs import BigQueryTargetConfigs\n        from prefect_gcp.credentials import GcpCredentials\n\n        credentials = GcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n        target_configs = BigQueryTargetConfigs(\n            schema=\"schema\",  # also known as dataset\n            credentials=credentials,\n        )\n        ```\n    \"\"\"\n\n    _block_type_name = \"dbt CLI BigQuery Target Configs\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5zE9lxfzBHjw3tnEup4wWL/9a001902ed43a84c6c96d23b24622e19/dbt-bit_tm.png?h=250\"  # noqa\n    _description = \"dbt CLI target configs containing credentials and settings, specific to BigQuery.\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-dbt/cli/configs/bigquery/#prefect_dbt.cli.configs.bigquery.BigQueryTargetConfigs\"  # noqa\n\n    type: Literal[\"bigquery\"] = Field(\n        default=\"bigquery\", description=\"The type of target.\"\n    )\n    project: Optional[str] = Field(default=None, description=\"The project to use.\")\n    credentials: GcpCredentials = Field(\n        default_factory=GcpCredentials,\n        description=\"The credentials to use to authenticate.\",\n    )\n\n    def get_configs(self) -> Dict[str, Any]:\n        \"\"\"\n        Returns the dbt configs specific to BigQuery profile.\n\n        Returns:\n            A configs JSON.\n        \"\"\"\n        # since GcpCredentials will always define a project\n        self_copy = self.copy()\n        if self_copy.project is not None:\n            self_copy.credentials.project = None\n        all_configs_json = self._populate_configs_json(\n            {}, self_copy.__fields__, model=self_copy\n        )\n\n        # decouple prefect-gcp from prefect-dbt\n        # by mapping all the keys dbt gcp accepts\n        # https://docs.getdbt.com/reference/warehouse-setups/bigquery-setup\n        rename_keys = {\n            # dbt\n            \"type\": \"type\",\n            \"schema\": \"schema\",\n            \"threads\": \"threads\",\n            # general\n            \"dataset\": \"schema\",\n            \"method\": \"method\",\n            \"project\": \"project\",\n            # service-account\n            \"service_account_file\": \"keyfile\",\n            # service-account json\n            \"service_account_info\": \"keyfile_json\",\n            # oauth secrets\n            \"refresh_token\": \"refresh_token\",\n            \"client_id\": \"client_id\",\n            \"client_secret\": \"client_secret\",\n            \"token_uri\": \"token_uri\",\n            # optional\n            \"priority\": \"priority\",\n            \"timeout_seconds\": \"timeout_seconds\",\n            \"location\": \"location\",\n   
         \"maximum_bytes_billed\": \"maximum_bytes_billed\",\n            \"scopes\": \"scopes\",\n            \"impersonate_service_account\": \"impersonate_service_account\",\n            \"execution_project\": \"execution_project\",\n        }\n        configs_json = {}\n        extras = self.extras or {}\n        for key in all_configs_json.keys():\n            if key not in rename_keys and key not in extras:\n                # skip invalid keys\n                continue\n            # rename key to something dbt profile expects\n            dbt_key = rename_keys.get(key) or key\n            configs_json[dbt_key] = all_configs_json[key]\n\n        if \"keyfile_json\" in configs_json:\n            configs_json[\"method\"] = \"service-account-json\"\n        elif \"keyfile\" in configs_json:\n            configs_json[\"method\"] = \"service-account\"\n            configs_json[\"keyfile\"] = str(configs_json[\"keyfile\"])\n        else:\n            configs_json[\"method\"] = \"oauth-secrets\"\n            # through gcloud application-default login\n            google_credentials = (\n                self_copy.credentials.get_credentials_from_service_account()\n            )\n            if hasattr(google_credentials, \"token\"):\n                request = Request()\n                google_credentials.refresh(request)\n                configs_json[\"token\"] = google_credentials.token\n            else:\n                for key in (\"refresh_token\", \"client_id\", \"client_secret\", \"token_uri\"):\n                    configs_json[key] = getattr(google_credentials, key)\n\n        if \"project\" not in configs_json:\n            raise ValueError(\n                \"The keyword, project, must be provided in either \"\n                \"GcpCredentials or BigQueryTargetConfigs\"\n            )\n        return configs_json\n
    "},{"location":"integrations/prefect-dbt/cli/configs/bigquery/#prefect_dbt.cli.configs.bigquery.BigQueryTargetConfigs.get_configs","title":"get_configs","text":"

    Returns the dbt configs specific to BigQuery profile.

    Returns:

    Dict[str, Any]: A configs JSON.

    Source code in prefect_dbt/cli/configs/bigquery.py
    def get_configs(self) -> Dict[str, Any]:\n    \"\"\"\n    Returns the dbt configs specific to BigQuery profile.\n\n    Returns:\n        A configs JSON.\n    \"\"\"\n    # since GcpCredentials will always define a project\n    self_copy = self.copy()\n    if self_copy.project is not None:\n        self_copy.credentials.project = None\n    all_configs_json = self._populate_configs_json(\n        {}, self_copy.__fields__, model=self_copy\n    )\n\n    # decouple prefect-gcp from prefect-dbt\n    # by mapping all the keys dbt gcp accepts\n    # https://docs.getdbt.com/reference/warehouse-setups/bigquery-setup\n    rename_keys = {\n        # dbt\n        \"type\": \"type\",\n        \"schema\": \"schema\",\n        \"threads\": \"threads\",\n        # general\n        \"dataset\": \"schema\",\n        \"method\": \"method\",\n        \"project\": \"project\",\n        # service-account\n        \"service_account_file\": \"keyfile\",\n        # service-account json\n        \"service_account_info\": \"keyfile_json\",\n        # oauth secrets\n        \"refresh_token\": \"refresh_token\",\n        \"client_id\": \"client_id\",\n        \"client_secret\": \"client_secret\",\n        \"token_uri\": \"token_uri\",\n        # optional\n        \"priority\": \"priority\",\n        \"timeout_seconds\": \"timeout_seconds\",\n        \"location\": \"location\",\n        \"maximum_bytes_billed\": \"maximum_bytes_billed\",\n        \"scopes\": \"scopes\",\n        \"impersonate_service_account\": \"impersonate_service_account\",\n        \"execution_project\": \"execution_project\",\n    }\n    configs_json = {}\n    extras = self.extras or {}\n    for key in all_configs_json.keys():\n        if key not in rename_keys and key not in extras:\n            # skip invalid keys\n            continue\n        # rename key to something dbt profile expects\n        dbt_key = rename_keys.get(key) or key\n        configs_json[dbt_key] = all_configs_json[key]\n\n    if \"keyfile_json\" in configs_json:\n        configs_json[\"method\"] = \"service-account-json\"\n    elif \"keyfile\" in configs_json:\n        configs_json[\"method\"] = \"service-account\"\n        configs_json[\"keyfile\"] = str(configs_json[\"keyfile\"])\n    else:\n        configs_json[\"method\"] = \"oauth-secrets\"\n        # through gcloud application-default login\n        google_credentials = (\n            self_copy.credentials.get_credentials_from_service_account()\n        )\n        if hasattr(google_credentials, \"token\"):\n            request = Request()\n            google_credentials.refresh(request)\n            configs_json[\"token\"] = google_credentials.token\n        else:\n            for key in (\"refresh_token\", \"client_id\", \"client_secret\", \"token_uri\"):\n                configs_json[key] = getattr(google_credentials, key)\n\n    if \"project\" not in configs_json:\n        raise ValueError(\n            \"The keyword, project, must be provided in either \"\n            \"GcpCredentials or BigQueryTargetConfigs\"\n        )\n    return configs_json\n
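    To show where these configs land, a hedged sketch that writes the generated keywords into a profiles.yml-style mapping (assumes PyYAML is installed; the block, profile, and target names are placeholders, and in practice a DbtCliProfile block can assemble this for you):

    import yaml\n\nfrom prefect_dbt.cli.configs import BigQueryTargetConfigs\n\ntarget_configs = BigQueryTargetConfigs.load(\"BLOCK_NAME\")  # placeholder block name\n\nprofiles = {\n    \"jaffle_shop\": {  # placeholder profile name\n        \"target\": \"prod\",  # placeholder target name\n        \"outputs\": {\"prod\": target_configs.get_configs()},\n    }\n}\nwith open(\"profiles.yml\", \"w\") as f:\n    yaml.safe_dump(profiles, f)\n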
    "},{"location":"integrations/prefect-dbt/cli/configs/postgres/","title":"Postgres","text":""},{"location":"integrations/prefect-dbt/cli/configs/postgres/#prefect_dbt.cli.configs.postgres","title":"prefect_dbt.cli.configs.postgres","text":"

    Module containing models for Postgres configs

    "},{"location":"integrations/prefect-dbt/cli/configs/postgres/#prefect_dbt.cli.configs.postgres.PostgresTargetConfigs","title":"PostgresTargetConfigs","text":"

    Bases: BaseTargetConfigs

    Target configs contain credentials and settings, specific to Postgres. To find valid keys, head to the Postgres Profile page.

    Attributes:

    credentials (Union[SqlAlchemyConnector, DatabaseCredentials]): The credentials to use to authenticate; if there are duplicate keys between credentials and TargetConfigs, e.g. schema, an error will be raised.

    Examples:

    Load stored PostgresTargetConfigs:

    from prefect_dbt.cli.configs import PostgresTargetConfigs\n\npostgres_target_configs = PostgresTargetConfigs.load(\"BLOCK_NAME\")\n

    Instantiate PostgresTargetConfigs with DatabaseCredentials.

    from prefect_dbt.cli.configs import PostgresTargetConfigs\nfrom prefect_sqlalchemy import DatabaseCredentials, SyncDriver\n\ncredentials = DatabaseCredentials(\n    driver=SyncDriver.POSTGRESQL_PSYCOPG2,\n    username=\"prefect\",\n    password=\"prefect_password\",\n    database=\"postgres\",\n    host=\"host\",\n    port=8080\n)\ntarget_configs = PostgresTargetConfigs(credentials=credentials, schema=\"schema\")\n

    Source code in prefect_dbt/cli/configs/postgres.py
    class PostgresTargetConfigs(BaseTargetConfigs):\n    \"\"\"\n    Target configs contain credentials and\n    settings, specific to Postgres.\n    To find valid keys, head to the [Postgres Profile](\n    https://docs.getdbt.com/reference/warehouse-profiles/postgres-profile)\n    page.\n\n    Attributes:\n        credentials: The credentials to use to authenticate; if there are\n            duplicate keys between credentials and TargetConfigs,\n            e.g. schema, an error will be raised.\n\n    Examples:\n        Load stored PostgresTargetConfigs:\n        ```python\n        from prefect_dbt.cli.configs import PostgresTargetConfigs\n\n        postgres_target_configs = PostgresTargetConfigs.load(\"BLOCK_NAME\")\n        ```\n\n        Instantiate PostgresTargetConfigs with DatabaseCredentials.\n        ```python\n        from prefect_dbt.cli.configs import PostgresTargetConfigs\n        from prefect_sqlalchemy import DatabaseCredentials, SyncDriver\n\n        credentials = DatabaseCredentials(\n            driver=SyncDriver.POSTGRESQL_PSYCOPG2,\n            username=\"prefect\",\n            password=\"prefect_password\",\n            database=\"postgres\",\n            host=\"host\",\n            port=8080\n        )\n        target_configs = PostgresTargetConfigs(credentials=credentials, schema=\"schema\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"dbt CLI Postgres Target Configs\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5zE9lxfzBHjw3tnEup4wWL/9a001902ed43a84c6c96d23b24622e19/dbt-bit_tm.png?h=250\"  # noqa\n    _description = \"dbt CLI target configs containing credentials and settings specific to Postgres.\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-dbt/cli/configs/postgres/#prefect_dbt.cli.configs.postgres.PostgresTargetConfigs\"  # noqa\n\n    type: Literal[\"postgres\"] = Field(\n        default=\"postgres\", description=\"The type of the target.\"\n    )\n    credentials: Union[SqlAlchemyConnector, DatabaseCredentials] = Field(\n        default=...,\n        description=(\n            \"The credentials to use to authenticate; if there are duplicate keys \"\n            \"between credentials and TargetConfigs, e.g. 
schema, \"\n            \"an error will be raised.\"\n        ),\n    )  # noqa\n\n    def get_configs(self) -> Dict[str, Any]:\n        \"\"\"\n        Returns the dbt configs specific to Postgres profile.\n\n        Returns:\n            A configs JSON.\n        \"\"\"\n        if isinstance(self.credentials, DatabaseCredentials):\n            warnings.warn(\n                \"Using DatabaseCredentials is deprecated and will be removed \"\n                \"on May 7th, 2023, use SqlAlchemyConnector instead.\",\n                DeprecationWarning,\n            )\n        all_configs_json = super().get_configs()\n\n        rename_keys = {\n            # dbt\n            \"type\": \"type\",\n            \"schema\": \"schema\",\n            \"threads\": \"threads\",\n            # general\n            \"host\": \"host\",\n            \"username\": \"user\",\n            \"password\": \"password\",\n            \"port\": \"port\",\n            \"database\": \"dbname\",\n            # optional\n            \"keepalives_idle\": \"keepalives_idle\",\n            \"connect_timeout\": \"connect_timeout\",\n            \"retries\": \"retries\",\n            \"search_path\": \"search_path\",\n            \"role\": \"role\",\n            \"sslmode\": \"sslmode\",\n        }\n\n        configs_json = {}\n        extras = self.extras or {}\n        for key in all_configs_json.keys():\n            if key not in rename_keys and key not in extras:\n                # skip invalid keys, like fetch_size + poll_frequency_s\n                continue\n            # rename key to something dbt profile expects\n            dbt_key = rename_keys.get(key) or key\n            configs_json[dbt_key] = all_configs_json[key]\n        port = configs_json.get(\"port\")\n        if port is not None:\n            configs_json[\"port\"] = int(port)\n        return configs_json\n
    "},{"location":"integrations/prefect-dbt/cli/configs/postgres/#prefect_dbt.cli.configs.postgres.PostgresTargetConfigs.get_configs","title":"get_configs","text":"

    Returns the dbt configs specific to Postgres profile.

    Returns:

    Dict[str, Any]: A configs JSON.

    Source code in prefect_dbt/cli/configs/postgres.py
    def get_configs(self) -> Dict[str, Any]:\n    \"\"\"\n    Returns the dbt configs specific to Postgres profile.\n\n    Returns:\n        A configs JSON.\n    \"\"\"\n    if isinstance(self.credentials, DatabaseCredentials):\n        warnings.warn(\n            \"Using DatabaseCredentials is deprecated and will be removed \"\n            \"on May 7th, 2023, use SqlAlchemyConnector instead.\",\n            DeprecationWarning,\n        )\n    all_configs_json = super().get_configs()\n\n    rename_keys = {\n        # dbt\n        \"type\": \"type\",\n        \"schema\": \"schema\",\n        \"threads\": \"threads\",\n        # general\n        \"host\": \"host\",\n        \"username\": \"user\",\n        \"password\": \"password\",\n        \"port\": \"port\",\n        \"database\": \"dbname\",\n        # optional\n        \"keepalives_idle\": \"keepalives_idle\",\n        \"connect_timeout\": \"connect_timeout\",\n        \"retries\": \"retries\",\n        \"search_path\": \"search_path\",\n        \"role\": \"role\",\n        \"sslmode\": \"sslmode\",\n    }\n\n    configs_json = {}\n    extras = self.extras or {}\n    for key in all_configs_json.keys():\n        if key not in rename_keys and key not in extras:\n            # skip invalid keys, like fetch_size + poll_frequency_s\n            continue\n        # rename key to something dbt profile expects\n        dbt_key = rename_keys.get(key) or key\n        configs_json[dbt_key] = all_configs_json[key]\n    port = configs_json.get(\"port\")\n    if port is not None:\n        configs_json[\"port\"] = int(port)\n    return configs_json\n
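    Because DatabaseCredentials is deprecated here, a hedged sketch of the SqlAlchemyConnector-based equivalent (connection values are placeholders):

    from prefect_dbt.cli.configs import PostgresTargetConfigs\nfrom prefect_sqlalchemy import ConnectionComponents, SqlAlchemyConnector, SyncDriver\n\nconnector = SqlAlchemyConnector(\n    connection_info=ConnectionComponents(\n        driver=SyncDriver.POSTGRESQL_PSYCOPG2,\n        username=\"prefect\",\n        password=\"prefect_password\",\n        database=\"postgres\",\n        host=\"host\",\n        port=8080,\n    )\n)\ntarget_configs = PostgresTargetConfigs(credentials=connector, schema=\"schema\")\n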
    "},{"location":"integrations/prefect-dbt/cli/configs/snowflake/","title":"Snowflake","text":""},{"location":"integrations/prefect-dbt/cli/configs/snowflake/#prefect_dbt.cli.configs.snowflake","title":"prefect_dbt.cli.configs.snowflake","text":"

    Module containing models for Snowflake configs

    "},{"location":"integrations/prefect-dbt/cli/configs/snowflake/#prefect_dbt.cli.configs.snowflake.SnowflakeTargetConfigs","title":"SnowflakeTargetConfigs","text":"

    Bases: BaseTargetConfigs

    Target configs contain credentials and settings, specific to Snowflake. To find valid keys, head to the Snowflake Profile page.

    Attributes:

    connector (SnowflakeConnector): The connector to use.

    Examples:

    Load stored SnowflakeTargetConfigs:

    from prefect_dbt.cli.configs import SnowflakeTargetConfigs\n\nsnowflake_target_configs = SnowflakeTargetConfigs.load(\"BLOCK_NAME\")\n

    Instantiate SnowflakeTargetConfigs.

    from prefect_dbt.cli.configs import SnowflakeTargetConfigs\nfrom prefect_snowflake.credentials import SnowflakeCredentials\nfrom prefect_snowflake.database import SnowflakeConnector\n\ncredentials = SnowflakeCredentials(\n    user=\"user\",\n    password=\"password\",\n    account=\"account.region.aws\",\n    role=\"role\",\n)\nconnector = SnowflakeConnector(\n    schema=\"public\",\n    database=\"database\",\n    warehouse=\"warehouse\",\n    credentials=credentials,\n)\ntarget_configs = SnowflakeTargetConfigs(\n    connector=connector,\n    extras={\"retry_on_database_errors\": True},\n)\n

    Source code in prefect_dbt/cli/configs/snowflake.py
    class SnowflakeTargetConfigs(BaseTargetConfigs):\n    \"\"\"\n    Target configs contain credentials and\n    settings, specific to Snowflake.\n    To find valid keys, head to the [Snowflake Profile](\n    https://docs.getdbt.com/reference/warehouse-profiles/snowflake-profile)\n    page.\n\n    Attributes:\n        connector: The connector to use.\n\n    Examples:\n        Load stored SnowflakeTargetConfigs:\n        ```python\n        from prefect_dbt.cli.configs import SnowflakeTargetConfigs\n\n        snowflake_target_configs = SnowflakeTargetConfigs.load(\"BLOCK_NAME\")\n        ```\n\n        Instantiate SnowflakeTargetConfigs.\n        ```python\n        from prefect_dbt.cli.configs import SnowflakeTargetConfigs\n        from prefect_snowflake.credentials import SnowflakeCredentials\n        from prefect_snowflake.database import SnowflakeConnector\n\n        credentials = SnowflakeCredentials(\n            user=\"user\",\n            password=\"password\",\n            account=\"account.region.aws\",\n            role=\"role\",\n        )\n        connector = SnowflakeConnector(\n            schema=\"public\",\n            database=\"database\",\n            warehouse=\"warehouse\",\n            credentials=credentials,\n        )\n        target_configs = SnowflakeTargetConfigs(\n            connector=connector,\n            extras={\"retry_on_database_errors\": True},\n        )\n        ```\n    \"\"\"\n\n    _block_type_name = \"dbt CLI Snowflake Target Configs\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5zE9lxfzBHjw3tnEup4wWL/9a001902ed43a84c6c96d23b24622e19/dbt-bit_tm.png?h=250\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-dbt/cli/configs/snowflake/#prefect_dbt.cli.configs.snowflake.SnowflakeTargetConfigs\"  # noqa\n\n    type: Literal[\"snowflake\"] = Field(\n        default=\"snowflake\", description=\"The type of the target configs.\"\n    )\n    schema_: Optional[str] = Field(\n        default=None,\n        alias=\"schema\",\n        description=\"The schema to use for the target configs.\",\n    )\n    connector: SnowflakeConnector = Field(\n        default=..., description=\"The connector to use.\"\n    )\n\n    def get_configs(self) -> Dict[str, Any]:\n        \"\"\"\n        Returns the dbt configs specific to Snowflake profile.\n\n        Returns:\n            A configs JSON.\n        \"\"\"\n        all_configs_json = super().get_configs()\n\n        # decouple prefect-snowflake from prefect-dbt\n        # by mapping all the keys dbt snowflake accepts\n        # https://docs.getdbt.com/reference/warehouse-setups/snowflake-setup\n        rename_keys = {\n            # dbt\n            \"type\": \"type\",\n            \"schema\": \"schema\",\n            \"threads\": \"threads\",\n            # general\n            \"account\": \"account\",\n            \"user\": \"user\",\n            \"role\": \"role\",\n            \"database\": \"database\",\n            \"warehouse\": \"warehouse\",\n            # user and password\n            \"password\": \"password\",\n            # duo mfa / sso\n            \"authenticator\": \"authenticator\",\n            # key pair\n            \"private_key_path\": \"private_key_path\",\n            \"private_key_passphrase\": \"private_key_passphrase\",\n            # optional\n            \"client_session_keep_alive\": \"client_session_keep_alive\",\n            \"query_tag\": \"query_tag\",\n            \"connect_retries\": \"connect_retries\",\n            \"connect_timeout\": 
\"connect_timeout\",\n            \"retry_on_database_errors\": \"retry_on_database_errors\",\n            \"retry_all\": \"retry_all\",\n        }\n        configs_json = {}\n        extras = self.extras or {}\n        for key in all_configs_json.keys():\n            if key not in rename_keys and key not in extras:\n                # skip invalid keys, like fetch_size + poll_frequency_s\n                continue\n            # rename key to something dbt profile expects\n            dbt_key = rename_keys.get(key) or key\n            configs_json[dbt_key] = all_configs_json[key]\n        return configs_json\n
    "},{"location":"integrations/prefect-dbt/cli/configs/snowflake/#prefect_dbt.cli.configs.snowflake.SnowflakeTargetConfigs.get_configs","title":"get_configs","text":"

    Returns the dbt configs specific to Snowflake profile.

    Returns:

    Dict[str, Any]: A configs JSON.

    Source code in prefect_dbt/cli/configs/snowflake.py
    def get_configs(self) -> Dict[str, Any]:\n    \"\"\"\n    Returns the dbt configs specific to Snowflake profile.\n\n    Returns:\n        A configs JSON.\n    \"\"\"\n    all_configs_json = super().get_configs()\n\n    # decouple prefect-snowflake from prefect-dbt\n    # by mapping all the keys dbt snowflake accepts\n    # https://docs.getdbt.com/reference/warehouse-setups/snowflake-setup\n    rename_keys = {\n        # dbt\n        \"type\": \"type\",\n        \"schema\": \"schema\",\n        \"threads\": \"threads\",\n        # general\n        \"account\": \"account\",\n        \"user\": \"user\",\n        \"role\": \"role\",\n        \"database\": \"database\",\n        \"warehouse\": \"warehouse\",\n        # user and password\n        \"password\": \"password\",\n        # duo mfa / sso\n        \"authenticator\": \"authenticator\",\n        # key pair\n        \"private_key_path\": \"private_key_path\",\n        \"private_key_passphrase\": \"private_key_passphrase\",\n        # optional\n        \"client_session_keep_alive\": \"client_session_keep_alive\",\n        \"query_tag\": \"query_tag\",\n        \"connect_retries\": \"connect_retries\",\n        \"connect_timeout\": \"connect_timeout\",\n        \"retry_on_database_errors\": \"retry_on_database_errors\",\n        \"retry_all\": \"retry_all\",\n    }\n    configs_json = {}\n    extras = self.extras or {}\n    for key in all_configs_json.keys():\n        if key not in rename_keys and key not in extras:\n            # skip invalid keys, like fetch_size + poll_frequency_s\n            continue\n        # rename key to something dbt profile expects\n        dbt_key = rename_keys.get(key) or key\n        configs_json[dbt_key] = all_configs_json[key]\n    return configs_json\n
    "},{"location":"integrations/prefect-dbt/cloud/clients/","title":"Clients","text":""},{"location":"integrations/prefect-dbt/cloud/clients/#prefect_dbt.cloud.clients","title":"prefect_dbt.cloud.clients","text":"

    Module containing clients for interacting with the dbt Cloud API

    "},{"location":"integrations/prefect-dbt/cloud/clients/#prefect_dbt.cloud.clients.DbtCloudAdministrativeClient","title":"DbtCloudAdministrativeClient","text":"

    Client for interacting with the dbt Cloud Administrative API.

    Parameters:

    api_key (str, required): API key to authenticate with the dbt Cloud administrative API.
    account_id (int, required): ID of dbt Cloud account with which to interact.
    domain (str, defaults to 'cloud.getdbt.com'): Domain at which the dbt Cloud API is hosted.

    Source code in prefect_dbt/cloud/clients.py
    class DbtCloudAdministrativeClient:\n    \"\"\"\n    Client for interacting with the dbt cloud Administrative API.\n\n    Args:\n        api_key: API key to authenticate with the dbt Cloud administrative API.\n        account_id: ID of dbt Cloud account with which to interact.\n        domain: Domain at which the dbt Cloud API is hosted.\n    \"\"\"\n\n    def __init__(self, api_key: str, account_id: int, domain: str = \"cloud.getdbt.com\"):\n        self._closed = False\n        self._started = False\n\n        self._admin_client = AsyncClient(\n            headers={\n                \"Authorization\": f\"Bearer {api_key}\",\n                \"user-agent\": f\"prefect-{prefect.__version__}\",\n                \"x-dbt-partner-source\": \"prefect\",\n            },\n            base_url=f\"https://{domain}/api/v2/accounts/{account_id}\",\n        )\n\n    async def call_endpoint(\n        self,\n        http_method: str,\n        path: str,\n        params: Optional[Dict[str, Any]] = None,\n        json: Optional[Dict[str, Any]] = None,\n    ) -> Response:\n        \"\"\"\n        Call an endpoint in the dbt Cloud API.\n\n        Args:\n            path: The partial path for the request (e.g. /projects/). Will be appended\n                onto the base URL as determined by the client configuration.\n            http_method: HTTP method to call on the endpoint.\n            params: Query parameters to include in the request.\n            json: JSON serializable body to send in the request.\n\n        Returns:\n            The response from the dbt Cloud administrative API.\n        \"\"\"\n        response = await self._admin_client.request(\n            method=http_method, url=path, params=params, json=json\n        )\n\n        response.raise_for_status()\n\n        return response\n\n    async def get_job(\n        self,\n        job_id: int,\n        order_by: Optional[str] = None,\n    ) -> Response:\n        \"\"\"\n        Return job details for a job on an account.\n\n        Args:\n            job_id: Numeric ID of the job.\n            order_by: Field to order the result by. 
Use - to indicate reverse order.\n\n        Returns:\n            The response from the dbt Cloud administrative API.\n        \"\"\"  # noqa\n        params = {\"order_by\": order_by} if order_by else None\n        return await self.call_endpoint(\n            path=f\"/jobs/{job_id}/\", http_method=\"GET\", params=params\n        )\n\n    async def trigger_job_run(\n        self, job_id: int, options: Optional[TriggerJobRunOptions] = None\n    ) -> Response:\n        \"\"\"\n        Sends a request to the [trigger job run endpoint](https://docs.getdbt.com/dbt-cloud/api-v2#tag/Jobs/operation/triggerRun)\n        to initiate a job run.\n\n        Args:\n            job_id: The ID of the job to trigger.\n            options: An optional TriggerJobRunOptions instance to specify overrides for the triggered job run.\n\n        Returns:\n            The response from the dbt Cloud administrative API.\n        \"\"\"  # noqa\n        if options is None:\n            options = TriggerJobRunOptions()\n\n        return await self.call_endpoint(\n            path=f\"/jobs/{job_id}/run/\",\n            http_method=\"POST\",\n            json=options.dict(exclude_none=True),\n        )\n\n    async def get_run(\n        self,\n        run_id: int,\n        include_related: Optional[\n            List[Literal[\"trigger\", \"job\", \"debug_logs\", \"run_steps\"]]\n        ] = None,\n    ) -> Response:\n        \"\"\"\n        Sends a request to the [get run endpoint](https://docs.getdbt.com/dbt-cloud/api-v2#tag/Runs/operation/getRunById)\n        to get details about a job run.\n\n        Args:\n            run_id: The ID of the run to get details for.\n            include_related: List of related fields to pull with the run.\n                Valid values are \"trigger\", \"job\", \"debug_logs\", and \"run_steps\".\n                If \"debug_logs\" is not provided in a request, then the included debug\n                logs will be truncated to the last 1,000 lines of the debug log output file.\n\n        Returns:\n            The response from the dbt Cloud administrative API.\n        \"\"\"  # noqa\n        params = {\"include_related\": include_related} if include_related else None\n        return await self.call_endpoint(\n            path=f\"/runs/{run_id}/\", http_method=\"GET\", params=params\n        )\n\n    async def list_run_artifacts(\n        self, run_id: int, step: Optional[int] = None\n    ) -> Response:\n        \"\"\"\n        Sends a request to the [list run artifacts endpoint](https://docs.getdbt.com/dbt-cloud/api-v2#tag/Runs/operation/listArtifactsByRunId)\n        to fetch a list of paths of artifacts generated for a completed run.\n\n        Args:\n            run_id: The ID of the run to list run artifacts for.\n            step: The index of the step in the run to query for artifacts. The\n                first step in the run has the index 1. 
If the step parameter is\n                omitted, then this method will return the artifacts compiled\n                for the last step in the run.\n\n        Returns:\n            The response from the dbt Cloud administrative API.\n        \"\"\"  # noqa\n        params = {\"step\": step} if step else None\n        return await self.call_endpoint(\n            path=f\"/runs/{run_id}/artifacts/\", http_method=\"GET\", params=params\n        )\n\n    async def get_run_artifact(\n        self, run_id: int, path: str, step: Optional[int] = None\n    ) -> Response:\n        \"\"\"\n        Sends a request to the [get run artifact endpoint](https://docs.getdbt.com/dbt-cloud/api-v2#tag/Runs/operation/getArtifactsByRunId)\n        to fetch an artifact generated for a completed run.\n\n        Args:\n            run_id: The ID of the run to list run artifacts for.\n            path: The relative path to the run artifact (e.g. manifest.json, catalog.json,\n                run_results.json)\n            step: The index of the step in the run to query for artifacts. The\n                first step in the run has the index 1. If the step parameter is\n                omitted, then this method will return the artifacts compiled\n                for the last step in the run.\n\n        Returns:\n            The response from the dbt Cloud administrative API.\n        \"\"\"  # noqa\n        params = {\"step\": step} if step else None\n        return await self.call_endpoint(\n            path=f\"/runs/{run_id}/artifacts/{path}\", http_method=\"GET\", params=params\n        )\n\n    async def __aenter__(self):\n        if self._closed:\n            raise RuntimeError(\n                \"The client cannot be started again after it has been closed.\"\n            )\n        if self._started:\n            raise RuntimeError(\"The client cannot be started more than once.\")\n\n        self._started = True\n\n        return self\n\n    async def __aexit__(self, *exc):\n        self._closed = True\n        await self._admin_client.__aexit__()\n
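    A hedged end-to-end sketch (the API key, account ID, and job ID are placeholders): the client is an async context manager and is usually created from DbtCloudCredentials:

    import asyncio\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\n\nasync def trigger_and_inspect():\n    credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n    async with credentials.get_administrative_client() as client:\n        run_response = await client.trigger_job_run(job_id=1)\n        # the v2 API wraps the created run in a top-level \"data\" key\n        run_id = run_response.json()[\"data\"][\"id\"]\n        return await client.get_run(run_id=run_id)\n\nasyncio.run(trigger_and_inspect())\n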
    "},{"location":"integrations/prefect-dbt/cloud/clients/#prefect_dbt.cloud.clients.DbtCloudAdministrativeClient.call_endpoint","title":"call_endpoint async","text":"

    Call an endpoint in the dbt Cloud API.

    Parameters:

    path (str, required): The partial path for the request (e.g. /projects/). Will be appended onto the base URL as determined by the client configuration.
    http_method (str, required): HTTP method to call on the endpoint.
    params (Optional[Dict[str, Any]], defaults to None): Query parameters to include in the request.
    json (Optional[Dict[str, Any]], defaults to None): JSON serializable body to send in the request.

    Returns:

    Response: The response from the dbt Cloud administrative API.

    Source code in prefect_dbt/cloud/clients.py
    async def call_endpoint(\n    self,\n    http_method: str,\n    path: str,\n    params: Optional[Dict[str, Any]] = None,\n    json: Optional[Dict[str, Any]] = None,\n) -> Response:\n    \"\"\"\n    Call an endpoint in the dbt Cloud API.\n\n    Args:\n        path: The partial path for the request (e.g. /projects/). Will be appended\n            onto the base URL as determined by the client configuration.\n        http_method: HTTP method to call on the endpoint.\n        params: Query parameters to include in the request.\n        json: JSON serializable body to send in the request.\n\n    Returns:\n        The response from the dbt Cloud administrative API.\n    \"\"\"\n    response = await self._admin_client.request(\n        method=http_method, url=path, params=params, json=json\n    )\n\n    response.raise_for_status()\n\n    return response\n
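    For endpoints without a dedicated helper, call_endpoint can be used directly; a hedged sketch (the /projects/ path follows the docstring's example and the block name is a placeholder):

    from prefect_dbt.cloud import DbtCloudCredentials\n\nasync def list_projects():\n    credentials = DbtCloudCredentials.load(\"BLOCK_NAME\")  # placeholder block name\n    async with credentials.get_administrative_client() as client:\n        # GET {base_url}/projects/ for the configured account\n        response = await client.call_endpoint(http_method=\"GET\", path=\"/projects/\")\n        return response.json()[\"data\"]\n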
    "},{"location":"integrations/prefect-dbt/cloud/clients/#prefect_dbt.cloud.clients.DbtCloudAdministrativeClient.get_job","title":"get_job async","text":"

    Return job details for a job on an account.

    Parameters:

    job_id (int, required): Numeric ID of the job.
    order_by (Optional[str], defaults to None): Field to order the result by. Use - to indicate reverse order.

    Returns:

    Response: The response from the dbt Cloud administrative API.

    Source code in prefect_dbt/cloud/clients.py
    async def get_job(\n    self,\n    job_id: int,\n    order_by: Optional[str] = None,\n) -> Response:\n    \"\"\"\n    Return job details for a job on an account.\n\n    Args:\n        job_id: Numeric ID of the job.\n        order_by: Field to order the result by. Use - to indicate reverse order.\n\n    Returns:\n        The response from the dbt Cloud administrative API.\n    \"\"\"  # noqa\n    params = {\"order_by\": order_by} if order_by else None\n    return await self.call_endpoint(\n        path=f\"/jobs/{job_id}/\", http_method=\"GET\", params=params\n    )\n
    "},{"location":"integrations/prefect-dbt/cloud/clients/#prefect_dbt.cloud.clients.DbtCloudAdministrativeClient.get_run","title":"get_run async","text":"

    Sends a request to the get run endpoint to get details about a job run.

    Parameters:

    run_id (int, required): The ID of the run to get details for.
    include_related (Optional[List[Literal['trigger', 'job', 'debug_logs', 'run_steps']]], defaults to None): List of related fields to pull with the run. Valid values are \"trigger\", \"job\", \"debug_logs\", and \"run_steps\". If \"debug_logs\" is not provided in a request, then the included debug logs will be truncated to the last 1,000 lines of the debug log output file.

    Returns:

    Response: The response from the dbt Cloud administrative API.

    Source code in prefect_dbt/cloud/clients.py
    async def get_run(\n    self,\n    run_id: int,\n    include_related: Optional[\n        List[Literal[\"trigger\", \"job\", \"debug_logs\", \"run_steps\"]]\n    ] = None,\n) -> Response:\n    \"\"\"\n    Sends a request to the [get run endpoint](https://docs.getdbt.com/dbt-cloud/api-v2#tag/Runs/operation/getRunById)\n    to get details about a job run.\n\n    Args:\n        run_id: The ID of the run to get details for.\n        include_related: List of related fields to pull with the run.\n            Valid values are \"trigger\", \"job\", \"debug_logs\", and \"run_steps\".\n            If \"debug_logs\" is not provided in a request, then the included debug\n            logs will be truncated to the last 1,000 lines of the debug log output file.\n\n    Returns:\n        The response from the dbt Cloud administrative API.\n    \"\"\"  # noqa\n    params = {\"include_related\": include_related} if include_related else None\n    return await self.call_endpoint(\n        path=f\"/runs/{run_id}/\", http_method=\"GET\", params=params\n    )\n
    "},{"location":"integrations/prefect-dbt/cloud/clients/#prefect_dbt.cloud.clients.DbtCloudAdministrativeClient.get_run_artifact","title":"get_run_artifact async","text":"

    Sends a request to the get run artifact endpoint to fetch an artifact generated for a completed run.

    Parameters:

    run_id (int, required): The ID of the run to list run artifacts for.
    path (str, required): The relative path to the run artifact (e.g. manifest.json, catalog.json, run_results.json).
    step (Optional[int], defaults to None): The index of the step in the run to query for artifacts. The first step in the run has the index 1. If the step parameter is omitted, then this method will return the artifacts compiled for the last step in the run.

    Returns:

    Response: The response from the dbt Cloud administrative API.

    Source code in prefect_dbt/cloud/clients.py
    async def get_run_artifact(\n    self, run_id: int, path: str, step: Optional[int] = None\n) -> Response:\n    \"\"\"\n    Sends a request to the [get run artifact endpoint](https://docs.getdbt.com/dbt-cloud/api-v2#tag/Runs/operation/getArtifactsByRunId)\n    to fetch an artifact generated for a completed run.\n\n    Args:\n        run_id: The ID of the run to list run artifacts for.\n        path: The relative path to the run artifact (e.g. manifest.json, catalog.json,\n            run_results.json)\n        step: The index of the step in the run to query for artifacts. The\n            first step in the run has the index 1. If the step parameter is\n            omitted, then this method will return the artifacts compiled\n            for the last step in the run.\n\n    Returns:\n        The response from the dbt Cloud administrative API.\n    \"\"\"  # noqa\n    params = {\"step\": step} if step else None\n    return await self.call_endpoint(\n        path=f\"/runs/{run_id}/artifacts/{path}\", http_method=\"GET\", params=params\n    )\n
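    A hedged sketch of pulling run_results.json for a finished run (the run ID and block name are placeholders):

    from prefect_dbt.cloud import DbtCloudCredentials\n\nasync def fetch_run_results(run_id: int):\n    credentials = DbtCloudCredentials.load(\"BLOCK_NAME\")  # placeholder block name\n    async with credentials.get_administrative_client() as client:\n        response = await client.get_run_artifact(run_id=run_id, path=\"run_results.json\")\n        return response.json()\n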
    "},{"location":"integrations/prefect-dbt/cloud/clients/#prefect_dbt.cloud.clients.DbtCloudAdministrativeClient.list_run_artifacts","title":"list_run_artifacts async","text":"

    Sends a request to the list run artifacts endpoint to fetch a list of paths of artifacts generated for a completed run.

    Parameters:

    run_id (int, required): The ID of the run to list run artifacts for.
    step (Optional[int], defaults to None): The index of the step in the run to query for artifacts. The first step in the run has the index 1. If the step parameter is omitted, then this method will return the artifacts compiled for the last step in the run.

    Returns:

    Response: The response from the dbt Cloud administrative API.

    Source code in prefect_dbt/cloud/clients.py
    async def list_run_artifacts(\n    self, run_id: int, step: Optional[int] = None\n) -> Response:\n    \"\"\"\n    Sends a request to the [list run artifacts endpoint](https://docs.getdbt.com/dbt-cloud/api-v2#tag/Runs/operation/listArtifactsByRunId)\n    to fetch a list of paths of artifacts generated for a completed run.\n\n    Args:\n        run_id: The ID of the run to list run artifacts for.\n        step: The index of the step in the run to query for artifacts. The\n            first step in the run has the index 1. If the step parameter is\n            omitted, then this method will return the artifacts compiled\n            for the last step in the run.\n\n    Returns:\n        The response from the dbt Cloud administrative API.\n    \"\"\"  # noqa\n    params = {\"step\": step} if step else None\n    return await self.call_endpoint(\n        path=f\"/runs/{run_id}/artifacts/\", http_method=\"GET\", params=params\n    )\n
    "},{"location":"integrations/prefect-dbt/cloud/clients/#prefect_dbt.cloud.clients.DbtCloudAdministrativeClient.trigger_job_run","title":"trigger_job_run async","text":"

    Sends a request to the trigger job run endpoint to initiate a job run.

    Parameters:

    job_id (int, required): The ID of the job to trigger.
    options (Optional[TriggerJobRunOptions], defaults to None): An optional TriggerJobRunOptions instance to specify overrides for the triggered job run.

    Returns:

    Response: The response from the dbt Cloud administrative API.

    Source code in prefect_dbt/cloud/clients.py
    async def trigger_job_run(\n    self, job_id: int, options: Optional[TriggerJobRunOptions] = None\n) -> Response:\n    \"\"\"\n    Sends a request to the [trigger job run endpoint](https://docs.getdbt.com/dbt-cloud/api-v2#tag/Jobs/operation/triggerRun)\n    to initiate a job run.\n\n    Args:\n        job_id: The ID of the job to trigger.\n        options: An optional TriggerJobRunOptions instance to specify overrides for the triggered job run.\n\n    Returns:\n        The response from the dbt Cloud administrative API.\n    \"\"\"  # noqa\n    if options is None:\n        options = TriggerJobRunOptions()\n\n    return await self.call_endpoint(\n        path=f\"/jobs/{job_id}/run/\",\n        http_method=\"POST\",\n        json=options.dict(exclude_none=True),\n    )\n
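    A hedged sketch of passing overrides when triggering (the cause and steps_override values are placeholders; other TriggerJobRunOptions fields follow the dbt Cloud trigger payload):

    from prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.models import TriggerJobRunOptions\n\nasync def trigger_with_overrides():\n    credentials = DbtCloudCredentials.load(\"BLOCK_NAME\")  # placeholder block name\n    options = TriggerJobRunOptions(\n        cause=\"Triggered from Prefect\",\n        steps_override=[\"dbt run --select my_model\"],  # illustrative dbt command\n    )\n    async with credentials.get_administrative_client() as client:\n        return await client.trigger_job_run(job_id=1, options=options)\n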
    "},{"location":"integrations/prefect-dbt/cloud/clients/#prefect_dbt.cloud.clients.DbtCloudMetadataClient","title":"DbtCloudMetadataClient","text":"

    Client for interacting with the dbt Cloud Metadata API.

    Parameters:

    api_key (str, required): API key to authenticate with the dbt Cloud metadata API.
    domain (str, defaults to 'metadata.cloud.getdbt.com'): Domain at which the dbt Cloud metadata API is hosted.

    Source code in prefect_dbt/cloud/clients.py
    class DbtCloudMetadataClient:\n    \"\"\"\n    Client for interacting with the dbt cloud Administrative API.\n\n    Args:\n        api_key: API key to authenticate with the dbt Cloud administrative API.\n        domain: Domain at which the dbt Cloud API is hosted.\n    \"\"\"\n\n    def __init__(self, api_key: str, domain: str = \"metadata.cloud.getdbt.com\"):\n        self._http_endpoint = HTTPEndpoint(\n            base_headers={\n                \"Authorization\": f\"Bearer {api_key}\",\n                \"user-agent\": f\"prefect-{prefect.__version__}\",\n                \"x-dbt-partner-source\": \"prefect\",\n                \"content-type\": \"application/json\",\n            },\n            url=f\"https://{domain}/graphql\",\n        )\n\n    def query(\n        self,\n        query: str,\n        variables: Optional[Dict] = None,\n        operation_name: Optional[str] = None,\n    ) -> Dict[str, Any]:\n        \"\"\"\n        Run a GraphQL query against the dbt Cloud metadata API.\n\n        Args:\n            query: The GraphQL query to run.\n            variables: The values of any variables defined in the GraphQL query.\n            operation_name: The name of the operation to run if multiple operations\n                are defined in the provided query.\n\n        Returns:\n            The result of the GraphQL query.\n        \"\"\"\n        return self._http_endpoint(\n            query=query, variables=variables, operation_name=operation_name\n        )\n
    "},{"location":"integrations/prefect-dbt/cloud/clients/#prefect_dbt.cloud.clients.DbtCloudMetadataClient.query","title":"query","text":"

    Run a GraphQL query against the dbt Cloud metadata API.

    Parameters:

    query (str, required): The GraphQL query to run.
    variables (Optional[Dict], defaults to None): The values of any variables defined in the GraphQL query.
    operation_name (Optional[str], defaults to None): The name of the operation to run if multiple operations are defined in the provided query.

    Returns:

    Dict[str, Any]: The result of the GraphQL query.

    Source code in prefect_dbt/cloud/clients.py
    def query(\n    self,\n    query: str,\n    variables: Optional[Dict] = None,\n    operation_name: Optional[str] = None,\n) -> Dict[str, Any]:\n    \"\"\"\n    Run a GraphQL query against the dbt Cloud metadata API.\n\n    Args:\n        query: The GraphQL query to run.\n        variables: The values of any variables defined in the GraphQL query.\n        operation_name: The name of the operation to run if multiple operations\n            are defined in the provided query.\n\n    Returns:\n        The result of the GraphQL query.\n    \"\"\"\n    return self._http_endpoint(\n        query=query, variables=variables, operation_name=operation_name\n    )\n
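    A hedged usage sketch (the GraphQL document and field names are illustrative; consult the dbt Cloud metadata API schema for what your account exposes):

    from prefect_dbt.cloud import DbtCloudCredentials\n\ncredentials = DbtCloudCredentials.load(\"BLOCK_NAME\")  # placeholder block name\nmetadata_client = credentials.get_metadata_client()\n\n# illustrative query and variables; adjust to the metadata API schema\nquery = \"\"\"\nquery ModelsByJob($jobId: Int!) {\n    models(jobId: $jobId) {\n        uniqueId\n        status\n    }\n}\n\"\"\"\nresult = metadata_client.query(query, variables={\"jobId\": 123})\n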
    "},{"location":"integrations/prefect-dbt/cloud/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-dbt/cloud/credentials/#prefect_dbt.cloud.credentials","title":"prefect_dbt.cloud.credentials","text":"

    Module containing credentials for interacting with dbt Cloud

    "},{"location":"integrations/prefect-dbt/cloud/credentials/#prefect_dbt.cloud.credentials.DbtCloudCredentials","title":"DbtCloudCredentials","text":"

    Bases: CredentialsBlock

    Credentials block for credential use across dbt Cloud tasks and flows.

    Attributes:

    api_key (SecretStr): API key to authenticate with the dbt Cloud administrative API. Refer to the Authentication docs for retrieving the API key.

    account_id (int): ID of dbt Cloud account with which to interact.

    domain (Optional[str]): Domain at which the dbt Cloud API is hosted.

    Examples:

    Load stored dbt Cloud credentials:

    from prefect_dbt.cloud import DbtCloudCredentials\n\ndbt_cloud_credentials = DbtCloudCredentials.load(\"BLOCK_NAME\")\n

    Use a DbtCloudCredentials instance to trigger a job run:

    from prefect_dbt.cloud import DbtCloudCredentials\n\ncredentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\nasync with credentials.get_administrative_client() as client:\n    await client.trigger_job_run(job_id=1)\n

    Load saved dbt Cloud credentials within a flow:

    from prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run\n\n\n@flow\ndef trigger_dbt_cloud_job_run_flow():\n    credentials = DbtCloudCredentials.load(\"my-dbt-credentials\")\n    trigger_dbt_cloud_job_run(dbt_cloud_credentials=credentials, job_id=1)\n\ntrigger_dbt_cloud_job_run_flow()\n

    Source code in prefect_dbt/cloud/credentials.py
    class DbtCloudCredentials(CredentialsBlock):\n    \"\"\"\n    Credentials block for credential use across dbt Cloud tasks and flows.\n\n    Attributes:\n        api_key (SecretStr): API key to authenticate with the dbt Cloud\n            administrative API. Refer to the [Authentication docs](\n            https://docs.getdbt.com/dbt-cloud/api-v2#section/Authentication)\n            for retrieving the API key.\n        account_id (int): ID of dbt Cloud account with which to interact.\n        domain (Optional[str]): Domain at which the dbt Cloud API is hosted.\n\n    Examples:\n        Load stored dbt Cloud credentials:\n        ```python\n        from prefect_dbt.cloud import DbtCloudCredentials\n\n        dbt_cloud_credentials = DbtCloudCredentials.load(\"BLOCK_NAME\")\n        ```\n\n        Use DbtCloudCredentials instance to trigger a job run:\n        ```python\n        from prefect_dbt.cloud import DbtCloudCredentials\n\n        credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n        async with dbt_cloud_credentials.get_administrative_client() as client:\n            client.trigger_job_run(job_id=1)\n        ```\n\n        Load saved dbt Cloud credentials within a flow:\n        ```python\n        from prefect import flow\n\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run\n\n\n        @flow\n        def trigger_dbt_cloud_job_run_flow():\n            credentials = DbtCloudCredentials.load(\"my-dbt-credentials\")\n            trigger_dbt_cloud_job_run(dbt_cloud_credentials=credentials, job_id=1)\n\n        trigger_dbt_cloud_job_run_flow()\n        ```\n    \"\"\"\n\n    _block_type_name = \"dbt Cloud Credentials\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5zE9lxfzBHjw3tnEup4wWL/9a001902ed43a84c6c96d23b24622e19/dbt-bit_tm.png?h=250\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-dbt/cloud/credentials/#prefect_dbt.cloud.credentials.DbtCloudCredentials\"  # noqa\n\n    api_key: SecretStr = Field(\n        default=...,\n        title=\"API Key\",\n        description=\"A dbt Cloud API key to use for authentication.\",\n    )\n    account_id: int = Field(\n        default=..., title=\"Account ID\", description=\"The ID of your dbt Cloud account.\"\n    )\n    domain: str = Field(\n        default=\"cloud.getdbt.com\",\n        description=\"The base domain of your dbt Cloud instance.\",\n    )\n\n    def get_administrative_client(self) -> DbtCloudAdministrativeClient:\n        \"\"\"\n        Returns a newly instantiated client for working with the dbt Cloud\n        administrative API.\n\n        Returns:\n            An authenticated dbt Cloud administrative API client.\n        \"\"\"\n        return DbtCloudAdministrativeClient(\n            api_key=self.api_key.get_secret_value(),\n            account_id=self.account_id,\n            domain=self.domain,\n        )\n\n    def get_metadata_client(self) -> DbtCloudMetadataClient:\n        \"\"\"\n        Returns a newly instantiated client for working with the dbt Cloud\n        metadata API.\n\n        Example:\n            Sending queries via the returned metadata client:\n            ```python\n            from prefect_dbt import DbtCloudCredentials\n\n            credentials_block = DbtCloudCredentials.load(\"test-account\")\n            metadata_client = credentials_block.get_metadata_client()\n            query = \\\"\\\"\\\"\n            {\n                metrics(jobId: 
123) {\n                    uniqueId\n                    name\n                    packageName\n                    tags\n                    label\n                    runId\n                    description\n                    type\n                    sql\n                    timestamp\n                    timeGrains\n                    dimensions\n                    meta\n                    resourceType\n                    filters {\n                        field\n                        operator\n                        value\n                    }\n                    model {\n                        name\n                    }\n                }\n            }\n            \\\"\\\"\\\"\n            metadata_client.query(query)\n            # Result:\n            # {\n            #   \"data\": {\n            #     \"metrics\": [\n            #       {\n            #         \"uniqueId\": \"metric.tpch.total_revenue\",\n            #         \"name\": \"total_revenue\",\n            #         \"packageName\": \"tpch\",\n            #         \"tags\": [],\n            #         \"label\": \"Total Revenue ($)\",\n            #         \"runId\": 108952046,\n            #         \"description\": \"\",\n            #         \"type\": \"sum\",\n            #         \"sql\": \"net_item_sales_amount\",\n            #         \"timestamp\": \"order_date\",\n            #         \"timeGrains\": [\"day\", \"week\", \"month\"],\n            #         \"dimensions\": [\"status_code\", \"priority_code\"],\n            #         \"meta\": {},\n            #         \"resourceType\": \"metric\",\n            #         \"filters\": [],\n            #         \"model\": { \"name\": \"fct_orders\" }\n            #       }\n            #     ]\n            #   }\n            # }\n            ```\n\n        Returns:\n            An authenticated dbt Cloud metadata API client.\n        \"\"\"\n        return DbtCloudMetadataClient(\n            api_key=self.api_key.get_secret_value(),\n            domain=f\"metadata.{self.domain}\",\n        )\n\n    def get_client(\n        self, client_type: Literal[\"administrative\", \"metadata\"]\n    ) -> Union[DbtCloudAdministrativeClient, DbtCloudMetadataClient]:\n        \"\"\"\n        Returns a newly instantiated client for working with the dbt Cloud API.\n\n        Args:\n            client_type: Type of client to return. Accepts either 'administrative'\n                or 'metadata'.\n\n        Returns:\n            The authenticated client of the requested type.\n        \"\"\"\n        get_client_method = getattr(self, f\"get_{client_type}_client\", None)\n        if get_client_method is None:\n            raise ValueError(f\"'{client_type}' is not a supported client type.\")\n        return get_client_method()\n
    "},{"location":"integrations/prefect-dbt/cloud/credentials/#prefect_dbt.cloud.credentials.DbtCloudCredentials.get_administrative_client","title":"get_administrative_client","text":"

    Returns a newly instantiated client for working with the dbt Cloud administrative API.

    Returns:

    DbtCloudAdministrativeClient: An authenticated dbt Cloud administrative API client.

    Source code in prefect_dbt/cloud/credentials.py
    def get_administrative_client(self) -> DbtCloudAdministrativeClient:\n    \"\"\"\n    Returns a newly instantiated client for working with the dbt Cloud\n    administrative API.\n\n    Returns:\n        An authenticated dbt Cloud administrative API client.\n    \"\"\"\n    return DbtCloudAdministrativeClient(\n        api_key=self.api_key.get_secret_value(),\n        account_id=self.account_id,\n        domain=self.domain,\n    )\n
    "},{"location":"integrations/prefect-dbt/cloud/credentials/#prefect_dbt.cloud.credentials.DbtCloudCredentials.get_client","title":"get_client","text":"

    Returns a newly instantiated client for working with the dbt Cloud API.

    Parameters:

    client_type (Literal['administrative', 'metadata']): Type of client to return. Accepts either 'administrative' or 'metadata'. Required.

    Returns:

    Union[DbtCloudAdministrativeClient, DbtCloudMetadataClient]: The authenticated client of the requested type.
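    A short sketch of selecting a client by type; the block name is a placeholder:

    from prefect_dbt.cloud import DbtCloudCredentials

    credentials = DbtCloudCredentials.load("my-dbt-credentials")  # placeholder block name
    admin_client = credentials.get_client("administrative")
    metadata_client = credentials.get_client("metadata")
    # Any other string raises ValueError: "'<type>' is not a supported client type."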

    Source code in prefect_dbt/cloud/credentials.py
    def get_client(\n    self, client_type: Literal[\"administrative\", \"metadata\"]\n) -> Union[DbtCloudAdministrativeClient, DbtCloudMetadataClient]:\n    \"\"\"\n    Returns a newly instantiated client for working with the dbt Cloud API.\n\n    Args:\n        client_type: Type of client to return. Accepts either 'administrative'\n            or 'metadata'.\n\n    Returns:\n        The authenticated client of the requested type.\n    \"\"\"\n    get_client_method = getattr(self, f\"get_{client_type}_client\", None)\n    if get_client_method is None:\n        raise ValueError(f\"'{client_type}' is not a supported client type.\")\n    return get_client_method()\n
    "},{"location":"integrations/prefect-dbt/cloud/credentials/#prefect_dbt.cloud.credentials.DbtCloudCredentials.get_metadata_client","title":"get_metadata_client","text":"

    Returns a newly instantiated client for working with the dbt Cloud metadata API.

    Example

    Sending queries via the returned metadata client:

    from prefect_dbt import DbtCloudCredentials\n\ncredentials_block = DbtCloudCredentials.load(\"test-account\")\nmetadata_client = credentials_block.get_metadata_client()\nquery = \"\"\"\n{\n    metrics(jobId: 123) {\n        uniqueId\n        name\n        packageName\n        tags\n        label\n        runId\n        description\n        type\n        sql\n        timestamp\n        timeGrains\n        dimensions\n        meta\n        resourceType\n        filters {\n            field\n            operator\n            value\n        }\n        model {\n            name\n        }\n    }\n}\n\"\"\"\nmetadata_client.query(query)\n# Result:\n# {\n#   \"data\": {\n#     \"metrics\": [\n#       {\n#         \"uniqueId\": \"metric.tpch.total_revenue\",\n#         \"name\": \"total_revenue\",\n#         \"packageName\": \"tpch\",\n#         \"tags\": [],\n#         \"label\": \"Total Revenue ($)\",\n#         \"runId\": 108952046,\n#         \"description\": \"\",\n#         \"type\": \"sum\",\n#         \"sql\": \"net_item_sales_amount\",\n#         \"timestamp\": \"order_date\",\n#         \"timeGrains\": [\"day\", \"week\", \"month\"],\n#         \"dimensions\": [\"status_code\", \"priority_code\"],\n#         \"meta\": {},\n#         \"resourceType\": \"metric\",\n#         \"filters\": [],\n#         \"model\": { \"name\": \"fct_orders\" }\n#       }\n#     ]\n#   }\n# }\n

    Returns:

    DbtCloudMetadataClient: An authenticated dbt Cloud metadata API client.

    Source code in prefect_dbt/cloud/credentials.py
    def get_metadata_client(self) -> DbtCloudMetadataClient:\n    \"\"\"\n    Returns a newly instantiated client for working with the dbt Cloud\n    metadata API.\n\n    Example:\n        Sending queries via the returned metadata client:\n        ```python\n        from prefect_dbt import DbtCloudCredentials\n\n        credentials_block = DbtCloudCredentials.load(\"test-account\")\n        metadata_client = credentials_block.get_metadata_client()\n        query = \\\"\\\"\\\"\n        {\n            metrics(jobId: 123) {\n                uniqueId\n                name\n                packageName\n                tags\n                label\n                runId\n                description\n                type\n                sql\n                timestamp\n                timeGrains\n                dimensions\n                meta\n                resourceType\n                filters {\n                    field\n                    operator\n                    value\n                }\n                model {\n                    name\n                }\n            }\n        }\n        \\\"\\\"\\\"\n        metadata_client.query(query)\n        # Result:\n        # {\n        #   \"data\": {\n        #     \"metrics\": [\n        #       {\n        #         \"uniqueId\": \"metric.tpch.total_revenue\",\n        #         \"name\": \"total_revenue\",\n        #         \"packageName\": \"tpch\",\n        #         \"tags\": [],\n        #         \"label\": \"Total Revenue ($)\",\n        #         \"runId\": 108952046,\n        #         \"description\": \"\",\n        #         \"type\": \"sum\",\n        #         \"sql\": \"net_item_sales_amount\",\n        #         \"timestamp\": \"order_date\",\n        #         \"timeGrains\": [\"day\", \"week\", \"month\"],\n        #         \"dimensions\": [\"status_code\", \"priority_code\"],\n        #         \"meta\": {},\n        #         \"resourceType\": \"metric\",\n        #         \"filters\": [],\n        #         \"model\": { \"name\": \"fct_orders\" }\n        #       }\n        #     ]\n        #   }\n        # }\n        ```\n\n    Returns:\n        An authenticated dbt Cloud metadata API client.\n    \"\"\"\n    return DbtCloudMetadataClient(\n        api_key=self.api_key.get_secret_value(),\n        domain=f\"metadata.{self.domain}\",\n    )\n
    "},{"location":"integrations/prefect-dbt/cloud/jobs/","title":"Jobs","text":""},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs","title":"prefect_dbt.cloud.jobs","text":"

    Module containing tasks and flows for interacting with dbt Cloud jobs

    "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.DbtCloudJob","title":"DbtCloudJob","text":"

    Bases: JobBlock

    Block that holds the information and methods to interact with a dbt Cloud job.

    Attributes:

    dbt_cloud_credentials (DbtCloudCredentials): The credentials to use to authenticate with dbt Cloud.

    job_id (int): The id of the dbt Cloud job.

    timeout_seconds (int): The number of seconds to wait for the job to complete.

    interval_seconds (int): The number of seconds to wait between polling for job completion.

    trigger_job_run_options (TriggerJobRunOptions): The options to use when triggering a job run.

    Examples:

    Load a configured dbt Cloud job block.

    from prefect_dbt.cloud import DbtCloudJob\n\ndbt_cloud_job = DbtCloudJob.load(\"BLOCK_NAME\")\n

    Trigger a dbt Cloud job, wait for completion, and fetch the results:

    from prefect import flow\nfrom prefect_dbt.cloud import DbtCloudCredentials, DbtCloudJob\n\n@flow\ndef dbt_cloud_job_flow():\n    dbt_cloud_credentials = DbtCloudCredentials.load(\"dbt-token\")\n    dbt_cloud_job = DbtCloudJob(\n        dbt_cloud_credentials=dbt_cloud_credentials,\n        job_id=154217\n    )\n    dbt_cloud_job_run = dbt_cloud_job.trigger()\n    dbt_cloud_job_run.wait_for_completion()\n    dbt_cloud_job_run.fetch_result()\n    return dbt_cloud_job_run\n\ndbt_cloud_job_flow()\n
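    If you need to tune polling behavior, a sketch like the following configures and saves a reusable job block; the block names and job ID are placeholders, and save() is the standard Prefect block method:

    from prefect_dbt.cloud import DbtCloudCredentials, DbtCloudJob

    dbt_cloud_job = DbtCloudJob(
        dbt_cloud_credentials=DbtCloudCredentials.load("dbt-token"),
        job_id=154217,
        timeout_seconds=3600,  # wait up to an hour for the run to finish
        interval_seconds=30,   # poll the run status every 30 seconds
    )
    dbt_cloud_job.save("nightly-dbt-job")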

    Source code in prefect_dbt/cloud/jobs.py
    class DbtCloudJob(JobBlock):\n    \"\"\"\n    Block that holds the information and methods to interact with a dbt Cloud job.\n\n    Attributes:\n        dbt_cloud_credentials: The credentials to use to authenticate with dbt Cloud.\n        job_id: The id of the dbt Cloud job.\n        timeout_seconds: The number of seconds to wait for the job to complete.\n        interval_seconds:\n            The number of seconds to wait between polling for job completion.\n        trigger_job_run_options: The options to use when triggering a job run.\n\n    Examples:\n        Load a configured dbt Cloud job block.\n        ```python\n        from prefect_dbt.cloud import DbtCloudJob\n\n        dbt_cloud_job = DbtCloudJob.load(\"BLOCK_NAME\")\n        ```\n\n        Triggers a dbt Cloud job, waits for completion, and fetches the results.\n        ```python\n        from prefect import flow\n        from prefect_dbt.cloud import DbtCloudCredentials, DbtCloudJob\n\n        @flow\n        def dbt_cloud_job_flow():\n            dbt_cloud_credentials = DbtCloudCredentials.load(\"dbt-token\")\n            dbt_cloud_job = DbtCloudJob.load(\n                dbt_cloud_credentials=dbt_cloud_credentials,\n                job_id=154217\n            )\n            dbt_cloud_job_run = dbt_cloud_job.trigger()\n            dbt_cloud_job_run.wait_for_completion()\n            dbt_cloud_job_run.fetch_result()\n            return dbt_cloud_job_run\n\n        dbt_cloud_job_flow()\n        ```\n    \"\"\"\n\n    _block_type_name = \"dbt Cloud Job\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5zE9lxfzBHjw3tnEup4wWL/9a001902ed43a84c6c96d23b24622e19/dbt-bit_tm.png?h=250\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.DbtCloudJob\"  # noqa\n\n    dbt_cloud_credentials: DbtCloudCredentials = Field(\n        default=...,\n        description=\"The dbt Cloud credentials to use to authenticate with dbt Cloud.\",\n    )  # noqa: E501\n    job_id: int = Field(\n        default=..., description=\"The id of the dbt Cloud job.\", title=\"Job ID\"\n    )\n    timeout_seconds: int = Field(\n        default=900,\n        description=\"The number of seconds to wait for the job to complete.\",\n    )\n    interval_seconds: int = Field(\n        default=10,\n        description=\"The number of seconds to wait between polling for job completion.\",\n    )\n    trigger_job_run_options: TriggerJobRunOptions = Field(\n        default_factory=TriggerJobRunOptions,\n        description=\"The options to use when triggering a job run.\",\n    )\n\n    @sync_compatible\n    async def get_job(self, order_by: Optional[str] = None) -> Dict[str, Any]:\n        \"\"\"\n        Retrieve information about a dbt Cloud job.\n\n        Args:\n            order_by: The field to order the results by.\n\n        Returns:\n            The job data.\n        \"\"\"\n        try:\n            async with self.dbt_cloud_credentials.get_administrative_client() as client:\n                response = await client.get_job(\n                    job_id=self.job_id,\n                    order_by=order_by,\n                )\n        except HTTPStatusError as ex:\n            raise DbtCloudGetJobFailed(extract_user_message(ex)) from ex\n        return response.json()[\"data\"]\n\n    @sync_compatible\n    async def trigger(\n        self, trigger_job_run_options: Optional[TriggerJobRunOptions] = None\n    ) -> DbtCloudJobRun:\n        \"\"\"\n        Triggers a dbt Cloud job.\n\n        
Returns:\n            A representation of the dbt Cloud job run.\n        \"\"\"\n        try:\n            trigger_job_run_options = (\n                trigger_job_run_options or self.trigger_job_run_options\n            )\n            async with self.dbt_cloud_credentials.get_administrative_client() as client:\n                response = await client.trigger_job_run(\n                    job_id=self.job_id, options=trigger_job_run_options\n                )\n        except HTTPStatusError as ex:\n            raise DbtCloudJobRunTriggerFailed(extract_user_message(ex)) from ex\n\n        run_data = response.json()[\"data\"]\n        run_id = run_data.get(\"id\")\n        run = DbtCloudJobRun(\n            dbt_cloud_job=self,\n            run_id=run_id,\n        )\n        self.logger.info(\n            f\"dbt Cloud job {self.job_id} run {run_id} successfully triggered. \"\n            f\"You can view the status of this run at \"\n            f\"https://{self.dbt_cloud_credentials.domain}/#/accounts/\"\n            f\"{self.dbt_cloud_credentials.account_id}/projects/\"\n            f\"{run_data['project_id']}/runs/{run_id}/\"\n        )\n        return run\n
    "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.DbtCloudJob.get_job","title":"get_job async","text":"

    Retrieve information about a dbt Cloud job.

    Parameters:

    order_by (Optional[str]): The field to order the results by. Default: None.

    Returns:

    Dict[str, Any]: The job data.
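    A short sketch, assuming a DbtCloudJob block saved under the placeholder name "my-dbt-job"; because get_job is sync-compatible, it can be called directly outside of async code:

    from prefect_dbt.cloud import DbtCloudJob

    dbt_cloud_job = DbtCloudJob.load("my-dbt-job")
    job_data = dbt_cloud_job.get_job()
    print(job_data["id"], job_data.get("name"))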

    Source code in prefect_dbt/cloud/jobs.py
    @sync_compatible\nasync def get_job(self, order_by: Optional[str] = None) -> Dict[str, Any]:\n    \"\"\"\n    Retrieve information about a dbt Cloud job.\n\n    Args:\n        order_by: The field to order the results by.\n\n    Returns:\n        The job data.\n    \"\"\"\n    try:\n        async with self.dbt_cloud_credentials.get_administrative_client() as client:\n            response = await client.get_job(\n                job_id=self.job_id,\n                order_by=order_by,\n            )\n    except HTTPStatusError as ex:\n        raise DbtCloudGetJobFailed(extract_user_message(ex)) from ex\n    return response.json()[\"data\"]\n
    "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.DbtCloudJob.trigger","title":"trigger async","text":"

    Triggers a dbt Cloud job.

    Returns:

    DbtCloudJobRun: A representation of the dbt Cloud job run.
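    A hedged sketch of overriding run options at trigger time; it assumes TriggerJobRunOptions is importable from prefect_dbt.cloud.models (verify the path in your installed version) and that a block named "my-dbt-job" exists:

    from prefect_dbt.cloud import DbtCloudJob
    from prefect_dbt.cloud.models import TriggerJobRunOptions

    dbt_cloud_job = DbtCloudJob.load("my-dbt-job")
    # steps_override replaces the job's configured steps for this run only.
    run = dbt_cloud_job.trigger(
        trigger_job_run_options=TriggerJobRunOptions(
            steps_override=["dbt build --select my_model"]
        )
    )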

    Source code in prefect_dbt/cloud/jobs.py
    @sync_compatible\nasync def trigger(\n    self, trigger_job_run_options: Optional[TriggerJobRunOptions] = None\n) -> DbtCloudJobRun:\n    \"\"\"\n    Triggers a dbt Cloud job.\n\n    Returns:\n        A representation of the dbt Cloud job run.\n    \"\"\"\n    try:\n        trigger_job_run_options = (\n            trigger_job_run_options or self.trigger_job_run_options\n        )\n        async with self.dbt_cloud_credentials.get_administrative_client() as client:\n            response = await client.trigger_job_run(\n                job_id=self.job_id, options=trigger_job_run_options\n            )\n    except HTTPStatusError as ex:\n        raise DbtCloudJobRunTriggerFailed(extract_user_message(ex)) from ex\n\n    run_data = response.json()[\"data\"]\n    run_id = run_data.get(\"id\")\n    run = DbtCloudJobRun(\n        dbt_cloud_job=self,\n        run_id=run_id,\n    )\n    self.logger.info(\n        f\"dbt Cloud job {self.job_id} run {run_id} successfully triggered. \"\n        f\"You can view the status of this run at \"\n        f\"https://{self.dbt_cloud_credentials.domain}/#/accounts/\"\n        f\"{self.dbt_cloud_credentials.account_id}/projects/\"\n        f\"{run_data['project_id']}/runs/{run_id}/\"\n    )\n    return run\n
    "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.DbtCloudJobRun","title":"DbtCloudJobRun","text":"

    Bases: JobRun

    Class that holds the information and methods to interact with the resulting run of a dbt Cloud job.

    Source code in prefect_dbt/cloud/jobs.py
    class DbtCloudJobRun(JobRun):  # NOT A BLOCK\n    \"\"\"\n    Class that holds the information and methods to interact\n    with the resulting run of a dbt Cloud job.\n    \"\"\"\n\n    def __init__(self, run_id: int, dbt_cloud_job: \"DbtCloudJob\"):\n        self.run_id = run_id\n        self._dbt_cloud_job = dbt_cloud_job\n        self._dbt_cloud_credentials = dbt_cloud_job.dbt_cloud_credentials\n\n    @property\n    def _log_prefix(self):\n        return f\"dbt Cloud job {self._dbt_cloud_job.job_id} run {self.run_id}.\"\n\n    async def _wait_until_state(\n        self,\n        in_final_state_fn: Awaitable[Callable],\n        get_state_fn: Awaitable[Callable],\n        log_state_fn: Callable = None,\n        timeout_seconds: int = 60,\n        interval_seconds: int = 1,\n    ):\n        \"\"\"\n        Wait until the job run reaches a specific state.\n\n        Args:\n            in_final_state_fn: An async function that accepts a run state\n                and returns a boolean indicating whether the job run is\n                in a final state.\n            get_state_fn: An async function that returns\n                the current state of the job run.\n            log_state_fn: A callable that accepts a run\n                state and makes it human readable.\n            timeout_seconds: The maximum amount of time, in seconds, to wait\n                for the job run to reach the final state.\n            interval_seconds: The number of seconds to wait between checks of\n                the job run's state.\n        \"\"\"\n        start_time = time.time()\n        last_state = run_state = None\n        while not in_final_state_fn(run_state):\n            run_state = await get_state_fn()\n            if run_state != last_state:\n                if self.logger is not None:\n                    self.logger.info(\n                        \"%s has new state: %s\",\n                        self._log_prefix,\n                        log_state_fn(run_state),\n                    )\n                last_state = run_state\n\n            elapsed_time_seconds = time.time() - start_time\n            if elapsed_time_seconds > timeout_seconds:\n                raise DbtCloudJobRunTimedOut(\n                    f\"Max wait time of {timeout_seconds} \"\n                    \"seconds exceeded while waiting\"\n                )\n            await asyncio.sleep(interval_seconds)\n\n    @sync_compatible\n    async def get_run(self) -> Dict[str, Any]:\n        \"\"\"\n        Makes a request to the dbt Cloud API to get the run data.\n\n        Returns:\n            The run data.\n        \"\"\"\n        try:\n            dbt_cloud_credentials = self._dbt_cloud_credentials\n            async with dbt_cloud_credentials.get_administrative_client() as client:\n                response = await client.get_run(self.run_id)\n        except HTTPStatusError as ex:\n            raise DbtCloudGetRunFailed(extract_user_message(ex)) from ex\n        run_data = response.json()[\"data\"]\n        return run_data\n\n    @sync_compatible\n    async def get_status_code(self) -> int:\n        \"\"\"\n        Makes a request to the dbt Cloud API to get the run status.\n\n        Returns:\n            The run status code.\n        \"\"\"\n        run_data = await self.get_run()\n        run_status_code = run_data.get(\"status\")\n        return run_status_code\n\n    @sync_compatible\n    async def wait_for_completion(self) -> None:\n        \"\"\"\n        Waits for the job run to reach a terminal state.\n        
\"\"\"\n        await self._wait_until_state(\n            in_final_state_fn=DbtCloudJobRunStatus.is_terminal_status_code,\n            get_state_fn=self.get_status_code,\n            log_state_fn=DbtCloudJobRunStatus,\n            timeout_seconds=self._dbt_cloud_job.timeout_seconds,\n            interval_seconds=self._dbt_cloud_job.interval_seconds,\n        )\n\n    @sync_compatible\n    async def fetch_result(self, step: Optional[int] = None) -> Dict[str, Any]:\n        \"\"\"\n        Gets the results from the job run. Since the results\n        may not be ready, use wait_for_completion before calling this method.\n\n        Args:\n            step: The index of the step in the run to query for artifacts. The\n                first step in the run has the index 1. If the step parameter is\n                omitted, then this method will return the artifacts compiled\n                for the last step in the run.\n        \"\"\"\n        run_data = await self.get_run()\n        run_status = DbtCloudJobRunStatus(run_data.get(\"status\"))\n        if run_status == DbtCloudJobRunStatus.SUCCESS:\n            try:\n                async with self._dbt_cloud_credentials.get_administrative_client() as client:  # noqa\n                    response = await client.list_run_artifacts(\n                        run_id=self.run_id, step=step\n                    )\n                run_data[\"artifact_paths\"] = response.json()[\"data\"]\n                self.logger.info(\"%s completed successfully!\", self._log_prefix)\n            except HTTPStatusError as ex:\n                raise DbtCloudListRunArtifactsFailed(extract_user_message(ex)) from ex\n            return run_data\n        elif run_status == DbtCloudJobRunStatus.CANCELLED:\n            raise DbtCloudJobRunCancelled(f\"{self._log_prefix} was cancelled.\")\n        elif run_status == DbtCloudJobRunStatus.FAILED:\n            raise DbtCloudJobRunFailed(f\"{self._log_prefix} has failed.\")\n        else:\n            raise DbtCloudJobRunIncomplete(\n                f\"{self._log_prefix} is still running; \"\n                \"use wait_for_completion() to wait until results are ready.\"\n            )\n\n    @sync_compatible\n    async def get_run_artifacts(\n        self,\n        path: Literal[\"manifest.json\", \"catalog.json\", \"run_results.json\"],\n        step: Optional[int] = None,\n    ) -> Union[Dict[str, Any], str]:\n        \"\"\"\n        Get an artifact generated for a completed run.\n\n        Args:\n            path: The relative path to the run artifact.\n            step: The index of the step in the run to query for artifacts. The\n                first step in the run has the index 1. If the step parameter is\n                omitted, then this method will return the artifacts compiled\n                for the last step in the run.\n\n        Returns:\n            The contents of the requested manifest. 
Returns a `Dict` if the\n                requested artifact is a JSON file and a `str` otherwise.\n        \"\"\"\n        try:\n            dbt_cloud_credentials = self._dbt_cloud_credentials\n            async with dbt_cloud_credentials.get_administrative_client() as client:\n                response = await client.get_run_artifact(\n                    run_id=self.run_id, path=path, step=step\n                )\n        except HTTPStatusError as ex:\n            raise DbtCloudGetRunArtifactFailed(extract_user_message(ex)) from ex\n\n        if path.endswith(\".json\"):\n            artifact_contents = response.json()\n        else:\n            artifact_contents = response.text\n        return artifact_contents\n\n    def _select_unsuccessful_commands(\n        self,\n        run_results: List[Dict[str, Any]],\n        command_components: List[str],\n        command: str,\n        exe_command: str,\n    ) -> List[str]:\n        \"\"\"\n        Select nodes that were not successful and rebuild a command.\n        \"\"\"\n        # note \"fail\" here instead of \"cancelled\" because\n        # nodes do not have a cancelled state\n        run_nodes = \" \".join(\n            run_result[\"unique_id\"].split(\".\")[2]\n            for run_result in run_results\n            if run_result[\"status\"] in (\"error\", \"skipped\", \"fail\")\n        )\n\n        select_arg = None\n        if \"-s\" in command_components:\n            select_arg = \"-s\"\n        elif \"--select\" in command_components:\n            select_arg = \"--select\"\n\n        # prevent duplicate --select/-s statements\n        if select_arg is not None:\n            # dbt --fail-fast run, -s, bad_mod --vars '{\"env\": \"prod\"}' to:\n            # dbt --fail-fast run -s other_mod bad_mod --vars '{\"env\": \"prod\"}'\n            command_start, select_arg, command_end = command.partition(select_arg)\n            modified_command = f\"{command_start} {select_arg} {run_nodes} {command_end}\"  # noqa\n        else:\n            # dbt --fail-fast, build, --vars '{\"env\": \"prod\"}' to:\n            # dbt --fail-fast build --select bad_model --vars '{\"env\": \"prod\"}'\n            dbt_global_args, exe_command, exe_args = command.partition(exe_command)\n            modified_command = (\n                f\"{dbt_global_args} {exe_command} -s {run_nodes} {exe_args}\"\n            )\n        return modified_command\n\n    async def _build_trigger_job_run_options(\n        self,\n        job: Dict[str, Any],\n        run: Dict[str, Any],\n    ) -> TriggerJobRunOptions:\n        \"\"\"\n        Compiles a list of steps (commands) to retry, then either build trigger job\n        run options from scratch if it does not exist, else overrides the existing.\n        \"\"\"\n        generate_docs = job.get(\"generate_docs\", False)\n        generate_sources = job.get(\"generate_sources\", False)\n\n        steps_override = []\n        for run_step in run[\"run_steps\"]:\n            status = run_step[\"status_humanized\"].lower()\n            # Skipping cloning, profile setup, and dbt deps - always the first three\n            # steps in any run, and note, index starts at 1 instead of 0\n            if run_step[\"index\"] <= 3 or status == \"success\":\n                continue\n            # get dbt build from \"Invoke dbt with `dbt build`\"\n            command = run_step[\"name\"].partition(\"`\")[2].partition(\"`\")[0]\n\n            # These steps will be re-run regardless if\n            # generate_docs or generate_sources are 
enabled for a given job\n            # so if we don't skip, it'll run twice\n            freshness_in_command = (\n                \"dbt source snapshot-freshness\" in command\n                or \"dbt source freshness\" in command\n            )\n            if \"dbt docs generate\" in command and generate_docs:\n                continue\n            elif freshness_in_command and generate_sources:\n                continue\n\n            # find an executable command like `build` or `run`\n            # search in a list so that there aren't false positives, like\n            # `\"run\" in \"dbt run-operation\"`, which is True; we actually want\n            # `\"run\" in [\"dbt\", \"run-operation\"]` which is False\n            command_components = shlex.split(command)\n            for exe_command in EXE_COMMANDS:\n                if exe_command in command_components:\n                    break\n            else:\n                exe_command = \"\"\n\n            is_exe_command = exe_command in EXE_COMMANDS\n            is_not_success = status in (\"error\", \"skipped\", \"cancelled\")\n            is_skipped = status == \"skipped\"\n            if (not is_exe_command and is_not_success) or (\n                is_exe_command and is_skipped\n            ):\n                # if no matches like `run-operation`, we will be rerunning entirely\n                # or if it's one of the expected commands and is skipped\n                steps_override.append(command)\n            else:\n                # errors and failures are when we need to inspect to figure\n                # out the point of failure\n                try:\n                    run_artifact = await self.get_run_artifacts(\n                        \"run_results.json\", run_step[\"index\"]\n                    )\n                except JSONDecodeError:\n                    # get the run results scoped to the step which had an error\n                    # an error here indicates that either:\n                    # 1) the fail-fast flag was set, in which case\n                    #    the run_results.json file was never created; or\n                    # 2) there was a problem on dbt Cloud's side saving\n                    #    this artifact\n                    steps_override.append(command)\n                else:\n                    # we only need to find the individual nodes\n                    # for those run commands\n                    run_results = run_artifact[\"results\"]\n                    modified_command = self._select_unsuccessful_commands(\n                        run_results=run_results,\n                        command_components=command_components,\n                        command=command,\n                        exe_command=exe_command,\n                    )\n                    steps_override.append(modified_command)\n\n        if self._dbt_cloud_job.trigger_job_run_options is None:\n            trigger_job_run_options_override = TriggerJobRunOptions(\n                steps_override=steps_override\n            )\n        else:\n            trigger_job_run_options_override = (\n                self._dbt_cloud_job.trigger_job_run_options.copy()\n            )\n            trigger_job_run_options_override.steps_override = steps_override\n        return trigger_job_run_options_override\n\n    @sync_compatible\n    async def retry_failed_steps(self) -> \"DbtCloudJobRun\":  # noqa: F821\n        \"\"\"\n        Retries steps that did not complete successfully in a run.\n\n        Returns:\n            A 
representation of the dbt Cloud job run.\n        \"\"\"\n        job = await self._dbt_cloud_job.get_job()\n        run = await self.get_run()\n\n        trigger_job_run_options_override = await self._build_trigger_job_run_options(\n            job=job, run=run\n        )\n\n        num_steps = len(trigger_job_run_options_override.steps_override)\n        if num_steps == 0:\n            self.logger.info(f\"{self._log_prefix} does not have any steps to retry.\")\n        else:\n            self.logger.info(f\"{self._log_prefix} has {num_steps} steps to retry.\")\n            run = await self._dbt_cloud_job.trigger(\n                trigger_job_run_options=trigger_job_run_options_override,\n            )\n        return run\n
    "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.DbtCloudJobRun.fetch_result","title":"fetch_result async","text":"

    Gets the results from the job run. Since the results may not be ready, use wait_for_completion before calling this method.

    Parameters:

    step (Optional[int]): The index of the step in the run to query for artifacts. The first step in the run has the index 1. If the step parameter is omitted, then this method will return the artifacts compiled for the last step in the run. Default: None.

    Source code in prefect_dbt/cloud/jobs.py
    @sync_compatible\nasync def fetch_result(self, step: Optional[int] = None) -> Dict[str, Any]:\n    \"\"\"\n    Gets the results from the job run. Since the results\n    may not be ready, use wait_for_completion before calling this method.\n\n    Args:\n        step: The index of the step in the run to query for artifacts. The\n            first step in the run has the index 1. If the step parameter is\n            omitted, then this method will return the artifacts compiled\n            for the last step in the run.\n    \"\"\"\n    run_data = await self.get_run()\n    run_status = DbtCloudJobRunStatus(run_data.get(\"status\"))\n    if run_status == DbtCloudJobRunStatus.SUCCESS:\n        try:\n            async with self._dbt_cloud_credentials.get_administrative_client() as client:  # noqa\n                response = await client.list_run_artifacts(\n                    run_id=self.run_id, step=step\n                )\n            run_data[\"artifact_paths\"] = response.json()[\"data\"]\n            self.logger.info(\"%s completed successfully!\", self._log_prefix)\n        except HTTPStatusError as ex:\n            raise DbtCloudListRunArtifactsFailed(extract_user_message(ex)) from ex\n        return run_data\n    elif run_status == DbtCloudJobRunStatus.CANCELLED:\n        raise DbtCloudJobRunCancelled(f\"{self._log_prefix} was cancelled.\")\n    elif run_status == DbtCloudJobRunStatus.FAILED:\n        raise DbtCloudJobRunFailed(f\"{self._log_prefix} has failed.\")\n    else:\n        raise DbtCloudJobRunIncomplete(\n            f\"{self._log_prefix} is still running; \"\n            \"use wait_for_completion() to wait until results are ready.\"\n        )\n
    "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.DbtCloudJobRun.get_run","title":"get_run async","text":"

    Makes a request to the dbt Cloud API to get the run data.

    Returns:

    Dict[str, Any]: The run data.

    Source code in prefect_dbt/cloud/jobs.py
    @sync_compatible\nasync def get_run(self) -> Dict[str, Any]:\n    \"\"\"\n    Makes a request to the dbt Cloud API to get the run data.\n\n    Returns:\n        The run data.\n    \"\"\"\n    try:\n        dbt_cloud_credentials = self._dbt_cloud_credentials\n        async with dbt_cloud_credentials.get_administrative_client() as client:\n            response = await client.get_run(self.run_id)\n    except HTTPStatusError as ex:\n        raise DbtCloudGetRunFailed(extract_user_message(ex)) from ex\n    run_data = response.json()[\"data\"]\n    return run_data\n
    "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.DbtCloudJobRun.get_run_artifacts","title":"get_run_artifacts async","text":"

    Get an artifact generated for a completed run.

    Parameters:

    path (Literal['manifest.json', 'catalog.json', 'run_results.json']): The relative path to the run artifact. Required.

    step (Optional[int]): The index of the step in the run to query for artifacts. The first step in the run has the index 1. If the step parameter is omitted, then this method will return the artifacts compiled for the last step in the run. Default: None.

    Returns:

    Union[Dict[str, Any], str]: The contents of the requested artifact. Returns a Dict if the requested artifact is a JSON file and a str otherwise.
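    A minimal sketch that fetches run_results.json for a completed run; the block name is a placeholder, and the keys read from the artifact follow dbt's run_results schema:

    from prefect_dbt.cloud import DbtCloudJob

    dbt_cloud_job = DbtCloudJob.load("my-dbt-job")
    run = dbt_cloud_job.trigger()
    run.wait_for_completion()

    # JSON artifacts come back as a dict; non-JSON artifacts would be returned as text.
    run_results = run.get_run_artifacts("run_results.json")
    for result in run_results["results"]:
        print(result["unique_id"], result["status"])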

    Source code in prefect_dbt/cloud/jobs.py
    @sync_compatible\nasync def get_run_artifacts(\n    self,\n    path: Literal[\"manifest.json\", \"catalog.json\", \"run_results.json\"],\n    step: Optional[int] = None,\n) -> Union[Dict[str, Any], str]:\n    \"\"\"\n    Get an artifact generated for a completed run.\n\n    Args:\n        path: The relative path to the run artifact.\n        step: The index of the step in the run to query for artifacts. The\n            first step in the run has the index 1. If the step parameter is\n            omitted, then this method will return the artifacts compiled\n            for the last step in the run.\n\n    Returns:\n        The contents of the requested manifest. Returns a `Dict` if the\n            requested artifact is a JSON file and a `str` otherwise.\n    \"\"\"\n    try:\n        dbt_cloud_credentials = self._dbt_cloud_credentials\n        async with dbt_cloud_credentials.get_administrative_client() as client:\n            response = await client.get_run_artifact(\n                run_id=self.run_id, path=path, step=step\n            )\n    except HTTPStatusError as ex:\n        raise DbtCloudGetRunArtifactFailed(extract_user_message(ex)) from ex\n\n    if path.endswith(\".json\"):\n        artifact_contents = response.json()\n    else:\n        artifact_contents = response.text\n    return artifact_contents\n
    "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.DbtCloudJobRun.get_status_code","title":"get_status_code async","text":"

    Makes a request to the dbt Cloud API to get the run status.

    Returns:

    int: The run status code.

    Source code in prefect_dbt/cloud/jobs.py
    @sync_compatible\nasync def get_status_code(self) -> int:\n    \"\"\"\n    Makes a request to the dbt Cloud API to get the run status.\n\n    Returns:\n        The run status code.\n    \"\"\"\n    run_data = await self.get_run()\n    run_status_code = run_data.get(\"status\")\n    return run_status_code\n
    "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.DbtCloudJobRun.retry_failed_steps","title":"retry_failed_steps async","text":"

    Retries steps that did not complete successfully in a run.

    Returns:

    DbtCloudJobRun: A representation of the dbt Cloud job run.
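    A hedged sketch of manual retry handling; it assumes DbtCloudJobRunFailed is importable from prefect_dbt.cloud.exceptions (verify the path in your installed version) and that a DbtCloudJob block is saved as "my-dbt-job":

    from prefect_dbt.cloud import DbtCloudJob
    from prefect_dbt.cloud.exceptions import DbtCloudJobRunFailed

    dbt_cloud_job = DbtCloudJob.load("my-dbt-job")
    run = dbt_cloud_job.trigger()
    try:
        run.wait_for_completion()
        result = run.fetch_result()
    except DbtCloudJobRunFailed:
        # Re-trigger only the steps (and, where possible, only the nodes) that did not succeed.
        retried_run = run.retry_failed_steps()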

    Source code in prefect_dbt/cloud/jobs.py
    @sync_compatible\nasync def retry_failed_steps(self) -> \"DbtCloudJobRun\":  # noqa: F821\n    \"\"\"\n    Retries steps that did not complete successfully in a run.\n\n    Returns:\n        A representation of the dbt Cloud job run.\n    \"\"\"\n    job = await self._dbt_cloud_job.get_job()\n    run = await self.get_run()\n\n    trigger_job_run_options_override = await self._build_trigger_job_run_options(\n        job=job, run=run\n    )\n\n    num_steps = len(trigger_job_run_options_override.steps_override)\n    if num_steps == 0:\n        self.logger.info(f\"{self._log_prefix} does not have any steps to retry.\")\n    else:\n        self.logger.info(f\"{self._log_prefix} has {num_steps} steps to retry.\")\n        run = await self._dbt_cloud_job.trigger(\n            trigger_job_run_options=trigger_job_run_options_override,\n        )\n    return run\n
    "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.DbtCloudJobRun.wait_for_completion","title":"wait_for_completion async","text":"

    Waits for the job run to reach a terminal state.

    Source code in prefect_dbt/cloud/jobs.py
    @sync_compatible\nasync def wait_for_completion(self) -> None:\n    \"\"\"\n    Waits for the job run to reach a terminal state.\n    \"\"\"\n    await self._wait_until_state(\n        in_final_state_fn=DbtCloudJobRunStatus.is_terminal_status_code,\n        get_state_fn=self.get_status_code,\n        log_state_fn=DbtCloudJobRunStatus,\n        timeout_seconds=self._dbt_cloud_job.timeout_seconds,\n        interval_seconds=self._dbt_cloud_job.interval_seconds,\n    )\n
    "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.get_dbt_cloud_job_info","title":"get_dbt_cloud_job_info async","text":"

    A task to retrieve information about a dbt Cloud job.

    Parameters:

    dbt_cloud_credentials (DbtCloudCredentials): Credentials for authenticating with dbt Cloud. Required.

    job_id (int): The ID of the job to get. Required.

    Returns:

    Dict: The job data returned by the dbt Cloud administrative API.

    Example

    Get details of a dbt Cloud job:

    from prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import get_dbt_cloud_job_info\n\n@flow\ndef get_job_flow():\n    credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n    return get_dbt_cloud_job_info(\n        dbt_cloud_credentials=credentials,\n        job_id=42\n    )\n\nget_job_flow()\n

    Source code in prefect_dbt/cloud/jobs.py
    @task(\n    name=\"Get dbt Cloud job details\",\n    description=\"Retrieves details of a dbt Cloud job \"\n    \"for the job with the given job_id.\",\n    retries=3,\n    retry_delay_seconds=10,\n)\nasync def get_dbt_cloud_job_info(\n    dbt_cloud_credentials: DbtCloudCredentials,\n    job_id: int,\n    order_by: Optional[str] = None,\n) -> Dict:\n    \"\"\"\n    A task to retrieve information about a dbt Cloud job.\n\n    Args:\n        dbt_cloud_credentials: Credentials for authenticating with dbt Cloud.\n        job_id: The ID of the job to get.\n\n    Returns:\n        The job data returned by the dbt Cloud administrative API.\n\n    Example:\n        Get status of a dbt Cloud job:\n        ```python\n        from prefect import flow\n\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import get_job\n\n        @flow\n        def get_job_flow():\n            credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n            return get_job(\n                dbt_cloud_credentials=credentials,\n                job_id=42\n            )\n\n        get_job_flow()\n        ```\n    \"\"\"  # noqa\n    try:\n        async with dbt_cloud_credentials.get_administrative_client() as client:\n            response = await client.get_job(\n                job_id=job_id,\n                order_by=order_by,\n            )\n    except HTTPStatusError as ex:\n        raise DbtCloudGetJobFailed(extract_user_message(ex)) from ex\n    return response.json()[\"data\"]\n
    "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.get_run_id","title":"get_run_id","text":"

    Task that extracts the run ID from a trigger job run API response.

    This task is mainly used to maintain dependency tracking between the trigger_dbt_cloud_job_run task and downstream tasks/flows that use the run ID.

    Parameters:

    obj (Dict): The JSON body from the trigger job run response. Required.

    Example
    from prefect import flow\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run, get_run_id\n\n\n@flow\ndef trigger_run_and_get_id():\n    dbt_cloud_credentials = DbtCloudCredentials(\n        api_key=\"my_api_key\",\n        account_id=123456789\n    )\n\n    triggered_run_data = trigger_dbt_cloud_job_run(\n        dbt_cloud_credentials=dbt_cloud_credentials,\n        job_id=1,\n    )\n    run_id = get_run_id.submit(triggered_run_data)\n    return run_id\n\ntrigger_run_and_get_id()\n
    Source code in prefect_dbt/cloud/jobs.py
    @task(\n    name=\"Get dbt Cloud job run ID\",\n    description=\"Extracts the run ID from a trigger job run API response\",\n)\ndef get_run_id(obj: Dict):\n    \"\"\"\n    Task that extracts the run ID from a trigger job run API response,\n\n    This task is mainly used to maintain dependency tracking between the\n    `trigger_dbt_cloud_job_run` task and downstream tasks/flows that use the run ID.\n\n    Args:\n        obj: The JSON body from the trigger job run response.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run, get_run_id\n\n\n        @flow\n        def trigger_run_and_get_id():\n            dbt_cloud_credentials=DbtCloudCredentials(\n                    api_key=\"my_api_key\",\n                    account_id=123456789\n                )\n\n            triggered_run_data = trigger_dbt_cloud_job_run(\n                dbt_cloud_credentials=dbt_cloud_credentials,\n                job_id=job_id,\n                options=trigger_job_run_options,\n            )\n            run_id = get_run_id.submit(triggered_run_data)\n            return run_id\n\n        trigger_run_and_get_id()\n        ```\n    \"\"\"\n    id = obj.get(\"id\")\n    if id is None:\n        raise RuntimeError(\"Unable to determine run ID for triggered job.\")\n    return id\n
    "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.retry_dbt_cloud_job_run_subset_and_wait_for_completion","title":"retry_dbt_cloud_job_run_subset_and_wait_for_completion async","text":"

    Flow that retries a subset of a dbt Cloud job run, filtered by select statuses, and waits for the triggered retry to complete.

    Parameters:

    dbt_cloud_credentials (DbtCloudCredentials): Credentials for authenticating with dbt Cloud. Required.

    run_id (int): The ID of the job run to retry. Required.

    trigger_job_run_options (Optional[TriggerJobRunOptions]): An optional TriggerJobRunOptions instance to specify overrides for the triggered job run. Default: None.

    max_wait_seconds (int): Maximum number of seconds to wait for the job to complete. Default: 900.

    poll_frequency_seconds (int): Number of seconds to wait in between checks for run completion. Default: 10.

    Raises:

    ValueError: If trigger_job_run_options.steps_override is set by the user.

    Returns:

    Dict: The run data returned by the dbt Cloud administrative API.

    Examples:

    Retry a subset of models in a dbt Cloud job run and wait for completion:

    from prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import retry_dbt_cloud_job_run_subset_and_wait_for_completion\n\n@flow\ndef retry_dbt_cloud_job_run_subset_and_wait_for_completion_flow():\n    credentials = DbtCloudCredentials.load(\"MY_BLOCK_NAME\")\n    retry_dbt_cloud_job_run_subset_and_wait_for_completion(\n        dbt_cloud_credentials=credentials,\n        run_id=88640123,\n    )\n\nretry_dbt_cloud_job_run_subset_and_wait_for_completion_flow()\n

    Source code in prefect_dbt/cloud/jobs.py
    @flow(\n    name=\"Retry subset of dbt Cloud job run and wait for completion\",\n    description=(\n        \"Retries a subset of dbt Cloud job run, filtered by select statuses, \"\n        \"and waits for the triggered retry to complete.\"\n    ),\n)\nasync def retry_dbt_cloud_job_run_subset_and_wait_for_completion(\n    dbt_cloud_credentials: DbtCloudCredentials,\n    run_id: int,\n    trigger_job_run_options: Optional[TriggerJobRunOptions] = None,\n    max_wait_seconds: int = 900,\n    poll_frequency_seconds: int = 10,\n) -> Dict:\n    \"\"\"\n    Flow that retrys a subset of dbt Cloud job run, filtered by select statuses,\n    and waits for the triggered retry to complete.\n\n    Args:\n        dbt_cloud_credentials: Credentials for authenticating with dbt Cloud.\n        trigger_job_run_options: An optional TriggerJobRunOptions instance to\n            specify overrides for the triggered job run.\n        max_wait_seconds: Maximum number of seconds to wait for job to complete\n        poll_frequency_seconds: Number of seconds to wait in between checks for\n            run completion.\n        run_id: The ID of the job run to retry.\n\n    Raises:\n        ValueError: If `trigger_job_run_options.steps_override` is set by the user.\n\n    Returns:\n        The run data returned by the dbt Cloud administrative API.\n\n    Examples:\n        Retry a subset of models in a dbt Cloud job run and wait for completion:\n        ```python\n        from prefect import flow\n\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import retry_dbt_cloud_job_run_subset_and_wait_for_completion\n\n        @flow\n        def retry_dbt_cloud_job_run_subset_and_wait_for_completion_flow():\n            credentials = DbtCloudCredentials.load(\"MY_BLOCK_NAME\")\n            retry_dbt_cloud_job_run_subset_and_wait_for_completion(\n                dbt_cloud_credentials=credentials,\n                run_id=88640123,\n            )\n\n        retry_dbt_cloud_job_run_subset_and_wait_for_completion_flow()\n        ```\n    \"\"\"  # noqa\n    if trigger_job_run_options and trigger_job_run_options.steps_override is not None:\n        raise ValueError(\n            \"Do not set `steps_override` in `trigger_job_run_options` \"\n            \"because this flow will automatically set it\"\n        )\n\n    run_info_future = await get_dbt_cloud_run_info.submit(\n        dbt_cloud_credentials=dbt_cloud_credentials,\n        run_id=run_id,\n        include_related=[\"run_steps\"],\n    )\n    run_info = await run_info_future.result()\n\n    job_id = run_info[\"job_id\"]\n    job_info_future = await get_dbt_cloud_job_info.submit(\n        dbt_cloud_credentials=dbt_cloud_credentials,\n        job_id=job_id,\n    )\n    job_info = await job_info_future.result()\n\n    trigger_job_run_options_override = await _build_trigger_job_run_options(\n        dbt_cloud_credentials=dbt_cloud_credentials,\n        trigger_job_run_options=trigger_job_run_options,\n        run_id=run_id,\n        run_info=run_info,\n        job_info=job_info,\n    )\n\n    # to circumvent `RuntimeError: The task runner is already started!`\n    flow_run_context = FlowRunContext.get()\n    task_runner_type = type(flow_run_context.task_runner)\n\n    run_data = await trigger_dbt_cloud_job_run_and_wait_for_completion.with_options(\n        task_runner=task_runner_type()\n    )(\n        dbt_cloud_credentials=dbt_cloud_credentials,\n        job_id=job_id,\n        retry_filtered_models_attempts=0,\n        
trigger_job_run_options=trigger_job_run_options_override,\n        max_wait_seconds=max_wait_seconds,\n        poll_frequency_seconds=poll_frequency_seconds,\n    )\n    return run_data\n
    "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.run_dbt_cloud_job","title":"run_dbt_cloud_job async","text":"

    Flow that triggers and waits for a dbt Cloud job run, retrying a subset of failed nodes if necessary.

    Parameters:

    dbt_cloud_job (DbtCloudJob, required): Block that holds the information and methods to interact with a dbt Cloud job.
    targeted_retries (int, default 3): The number of times to retry failed steps.

    Examples:

    from prefect import flow\nfrom prefect_dbt.cloud import DbtCloudCredentials, DbtCloudJob\nfrom prefect_dbt.cloud.jobs import run_dbt_cloud_job\n\n@flow\ndef run_dbt_cloud_job_flow():\n    dbt_cloud_credentials = DbtCloudCredentials.load(\"dbt-token\")\n    dbt_cloud_job = DbtCloudJob(\n        dbt_cloud_credentials=dbt_cloud_credentials, job_id=154217\n    )\n    return run_dbt_cloud_job(dbt_cloud_job=dbt_cloud_job)\n\nrun_dbt_cloud_job_flow()\n
    Source code in prefect_dbt/cloud/jobs.py
    @flow\nasync def run_dbt_cloud_job(\n    dbt_cloud_job: DbtCloudJob,\n    targeted_retries: int = 3,\n) -> Dict[str, Any]:\n    \"\"\"\n    Flow that triggers and waits for a dbt Cloud job run, retrying a\n    subset of failed nodes if necessary.\n\n    Args:\n        dbt_cloud_job: Block that holds the information and\n            methods to interact with a dbt Cloud job.\n        targeted_retries: The number of times to retry failed steps.\n\n    Examples:\n        ```python\n        from prefect import flow\n        from prefect_dbt.cloud import DbtCloudCredentials, DbtCloudJob\n        from prefect_dbt.cloud.jobs import run_dbt_cloud_job\n\n        @flow\n        def run_dbt_cloud_job_flow():\n            dbt_cloud_credentials = DbtCloudCredentials.load(\"dbt-token\")\n            dbt_cloud_job = DbtCloudJob(\n                dbt_cloud_credentials=dbt_cloud_credentials, job_id=154217\n            )\n            return run_dbt_cloud_job(dbt_cloud_job=dbt_cloud_job)\n\n        run_dbt_cloud_job_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n\n    run = await task(dbt_cloud_job.trigger.aio)(dbt_cloud_job)\n    while targeted_retries > 0:\n        try:\n            await task(run.wait_for_completion.aio)(run)\n            result = await task(run.fetch_result.aio)(run)\n            return result\n        except DbtCloudJobRunFailed:\n            logger.info(\n                f\"Retrying job run with ID: {run.run_id} \"\n                f\"{targeted_retries} more times\"\n            )\n            run = await task(run.retry_failed_steps.aio)(run)\n            targeted_retries -= 1\n
    "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.trigger_dbt_cloud_job_run","title":"trigger_dbt_cloud_job_run async","text":"

    A task to trigger a dbt Cloud job run.

    Parameters:

    dbt_cloud_credentials (DbtCloudCredentials, required): Credentials for authenticating with dbt Cloud.
    job_id (int, required): The ID of the job to trigger.
    options (Optional[TriggerJobRunOptions], default None): An optional TriggerJobRunOptions instance to specify overrides for the triggered job run.

    Returns:

    Dict: The run data returned from the dbt Cloud administrative API.

    Examples:

    Trigger a dbt Cloud job run:

    from prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run\n\n\n@flow\ndef trigger_dbt_cloud_job_run_flow():\n    credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n    trigger_dbt_cloud_job_run(dbt_cloud_credentials=credentials, job_id=1)\n\n\ntrigger_dbt_cloud_job_run_flow()\n

    Trigger a dbt Cloud job run with overrides:

    from prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run\nfrom prefect_dbt.cloud.models import TriggerJobRunOptions\n\n\n@flow\ndef trigger_dbt_cloud_job_run_flow():\n    credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n    trigger_dbt_cloud_job_run(\n        dbt_cloud_credentials=credentials,\n        job_id=1,\n        options=TriggerJobRunOptions(\n            git_branch=\"staging\",\n            schema_override=\"dbt_cloud_pr_123\",\n            dbt_version_override=\"0.18.0\",\n            target_name_override=\"staging\",\n            timeout_seconds_override=3000,\n            generate_docs_override=True,\n            threads_override=8,\n            steps_override=[\n                \"dbt seed\",\n                \"dbt run --fail-fast\",\n                \"dbt test --fail-fast\",\n            ],\n        ),\n    )\n\n\ntrigger_dbt_cloud_job_run_flow()\n

    Source code in prefect_dbt/cloud/jobs.py
    @task(\n    name=\"Trigger dbt Cloud job run\",\n    description=\"Triggers a dbt Cloud job run for the job \"\n    \"with the given job_id and optional overrides.\",\n    retries=3,\n    retry_delay_seconds=10,\n)\nasync def trigger_dbt_cloud_job_run(\n    dbt_cloud_credentials: DbtCloudCredentials,\n    job_id: int,\n    options: Optional[TriggerJobRunOptions] = None,\n) -> Dict:\n    \"\"\"\n    A task to trigger a dbt Cloud job run.\n\n    Args:\n        dbt_cloud_credentials: Credentials for authenticating with dbt Cloud.\n        job_id: The ID of the job to trigger.\n        options: An optional TriggerJobRunOptions instance to specify overrides\n            for the triggered job run.\n\n    Returns:\n        The run data returned from the dbt Cloud administrative API.\n\n    Examples:\n        Trigger a dbt Cloud job run:\n        ```python\n        from prefect import flow\n\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run\n\n\n        @flow\n        def trigger_dbt_cloud_job_run_flow():\n            credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n            trigger_dbt_cloud_job_run(dbt_cloud_credentials=credentials, job_id=1)\n\n\n        trigger_dbt_cloud_job_run_flow()\n        ```\n\n        Trigger a dbt Cloud job run with overrides:\n        ```python\n        from prefect import flow\n\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run\n        from prefect_dbt.cloud.models import TriggerJobRunOptions\n\n\n        @flow\n        def trigger_dbt_cloud_job_run_flow():\n            credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n            trigger_dbt_cloud_job_run(\n                dbt_cloud_credentials=credentials,\n                job_id=1,\n                options=TriggerJobRunOptions(\n                    git_branch=\"staging\",\n                    schema_override=\"dbt_cloud_pr_123\",\n                    dbt_version_override=\"0.18.0\",\n                    target_name_override=\"staging\",\n                    timeout_seconds_override=3000,\n                    generate_docs_override=True,\n                    threads_override=8,\n                    steps_override=[\n                        \"dbt seed\",\n                        \"dbt run --fail-fast\",\n                        \"dbt test --fail-fast\",\n                    ],\n                ),\n            )\n\n\n        trigger_dbt_cloud_job_run()\n        ```\n    \"\"\"  # noqa\n    logger = get_run_logger()\n\n    logger.info(f\"Triggering run for job with ID {job_id}\")\n\n    try:\n        async with dbt_cloud_credentials.get_administrative_client() as client:\n            response = await client.trigger_job_run(job_id=job_id, options=options)\n    except HTTPStatusError as ex:\n        raise DbtCloudJobRunTriggerFailed(extract_user_message(ex)) from ex\n\n    run_data = response.json()[\"data\"]\n\n    if \"project_id\" in run_data and \"id\" in run_data:\n        logger.info(\n            f\"Run successfully triggered for job with ID {job_id}. \"\n            \"You can view the status of this run at \"\n            f\"https://{dbt_cloud_credentials.domain}/#/accounts/\"\n            f\"{dbt_cloud_credentials.account_id}/projects/{run_data['project_id']}/\"\n            f\"runs/{run_data['id']}/\"\n        )\n\n    return run_data\n
    "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.trigger_dbt_cloud_job_run_and_wait_for_completion","title":"trigger_dbt_cloud_job_run_and_wait_for_completion async","text":"

    Flow that triggers a job run and waits for the triggered run to complete.

    Parameters:

    dbt_cloud_credentials (DbtCloudCredentials, required): Credentials for authenticating with dbt Cloud.
    job_id (int, required): The ID of the job to trigger.
    trigger_job_run_options (Optional[TriggerJobRunOptions], default None): An optional TriggerJobRunOptions instance to specify overrides for the triggered job run.
    max_wait_seconds (int, default 900): Maximum number of seconds to wait for the job to complete.
    poll_frequency_seconds (int, default 10): Number of seconds to wait in between checks for run completion.
    retry_filtered_models_attempts (int, default 3): Number of times to retry models selected by retry_status_filters.

    Raises:

    DbtCloudJobRunCancelled: The triggered dbt Cloud job run was cancelled.
    DbtCloudJobRunFailed: The triggered dbt Cloud job run failed.
    RuntimeError: The triggered dbt Cloud job run ended in an unexpected state.

    Returns:

    Dict: The run data returned by the dbt Cloud administrative API.

    Examples:

    Trigger a dbt Cloud job and wait for completion as a standalone flow:

    import asyncio\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run_and_wait_for_completion\n\nasyncio.run(\n    trigger_dbt_cloud_job_run_and_wait_for_completion(\n        dbt_cloud_credentials=DbtCloudCredentials(\n            api_key=\"my_api_key\",\n            account_id=123456789\n        ),\n        job_id=1\n    )\n)\n

    Trigger a dbt Cloud job and wait for completion as a sub-flow:

    from prefect import flow\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run_and_wait_for_completion\n\n@flow\ndef my_flow():\n    ...\n    run_result = trigger_dbt_cloud_job_run_and_wait_for_completion(\n        dbt_cloud_credentials=DbtCloudCredentials(\n            api_key=\"my_api_key\",\n            account_id=123456789\n        ),\n        job_id=1\n    )\n    ...\n\nmy_flow()\n

    Trigger a dbt Cloud job with overrides:

    import asyncio\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run_and_wait_for_completion\nfrom prefect_dbt.cloud.models import TriggerJobRunOptions\n\nasyncio.run(\n    trigger_dbt_cloud_job_run_and_wait_for_completion(\n        dbt_cloud_credentials=DbtCloudCredentials(\n            api_key=\"my_api_key\",\n            account_id=123456789\n        ),\n        job_id=1,\n        trigger_job_run_options=TriggerJobRunOptions(\n            git_branch=\"staging\",\n            schema_override=\"dbt_cloud_pr_123\",\n            dbt_version_override=\"0.18.0\",\n            target_name_override=\"staging\",\n            timeout_seconds_override=3000,\n            generate_docs_override=True,\n            threads_override=8,\n            steps_override=[\n                \"dbt seed\",\n                \"dbt run --fail-fast\",\n                \"dbt test --fail-fast\",\n            ],\n        ),\n    )\n)\n

    Source code in prefect_dbt/cloud/jobs.py
    @flow(\n    name=\"Trigger dbt Cloud job run and wait for completion\",\n    description=\"Triggers a dbt Cloud job run and waits for the\"\n    \"triggered run to complete.\",\n)\nasync def trigger_dbt_cloud_job_run_and_wait_for_completion(\n    dbt_cloud_credentials: DbtCloudCredentials,\n    job_id: int,\n    trigger_job_run_options: Optional[TriggerJobRunOptions] = None,\n    max_wait_seconds: int = 900,\n    poll_frequency_seconds: int = 10,\n    retry_filtered_models_attempts: int = 3,\n) -> Dict:\n    \"\"\"\n    Flow that triggers a job run and waits for the triggered run to complete.\n\n    Args:\n        dbt_cloud_credentials: Credentials for authenticating with dbt Cloud.\n        job_id: The ID of the job to trigger.\n        trigger_job_run_options: An optional TriggerJobRunOptions instance to\n            specify overrides for the triggered job run.\n        max_wait_seconds: Maximum number of seconds to wait for job to complete\n        poll_frequency_seconds: Number of seconds to wait in between checks for\n            run completion.\n        retry_filtered_models_attempts: Number of times to retry models selected by `retry_status_filters`.\n\n    Raises:\n        DbtCloudJobRunCancelled: The triggered dbt Cloud job run was cancelled.\n        DbtCloudJobRunFailed: The triggered dbt Cloud job run failed.\n        RuntimeError: The triggered dbt Cloud job run ended in an unexpected state.\n\n    Returns:\n        The run data returned by the dbt Cloud administrative API.\n\n    Examples:\n        Trigger a dbt Cloud job and wait for completion as a stand alone flow:\n        ```python\n        import asyncio\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run_and_wait_for_completion\n\n        asyncio.run(\n            trigger_dbt_cloud_job_run_and_wait_for_completion(\n                dbt_cloud_credentials=DbtCloudCredentials(\n                    api_key=\"my_api_key\",\n                    account_id=123456789\n                ),\n                job_id=1\n            )\n        )\n        ```\n\n        Trigger a dbt Cloud job and wait for completion as a sub-flow:\n        ```python\n        from prefect import flow\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run_and_wait_for_completion\n\n        @flow\n        def my_flow():\n            ...\n            run_result = trigger_dbt_cloud_job_run_and_wait_for_completion(\n                dbt_cloud_credentials=DbtCloudCredentials(\n                    api_key=\"my_api_key\",\n                    account_id=123456789\n                ),\n                job_id=1\n            )\n            ...\n\n        my_flow()\n        ```\n\n        Trigger a dbt Cloud job with overrides:\n        ```python\n        import asyncio\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run_and_wait_for_completion\n        from prefect_dbt.cloud.models import TriggerJobRunOptions\n\n        asyncio.run(\n            trigger_dbt_cloud_job_run_and_wait_for_completion(\n                dbt_cloud_credentials=DbtCloudCredentials(\n                    api_key=\"my_api_key\",\n                    account_id=123456789\n                ),\n                job_id=1,\n                trigger_job_run_options=TriggerJobRunOptions(\n                    git_branch=\"staging\",\n                    
schema_override=\"dbt_cloud_pr_123\",\n                    dbt_version_override=\"0.18.0\",\n                    target_name_override=\"staging\",\n                    timeout_seconds_override=3000,\n                    generate_docs_override=True,\n                    threads_override=8,\n                    steps_override=[\n                        \"dbt seed\",\n                        \"dbt run --fail-fast\",\n                        \"dbt test --fail fast\",\n                    ],\n                ),\n            )\n        )\n        ```\n    \"\"\"  # noqa\n    logger = get_run_logger()\n\n    triggered_run_data_future = await trigger_dbt_cloud_job_run.submit(\n        dbt_cloud_credentials=dbt_cloud_credentials,\n        job_id=job_id,\n        options=trigger_job_run_options,\n    )\n    run_id = (await triggered_run_data_future.result()).get(\"id\")\n    if run_id is None:\n        raise RuntimeError(\"Unable to determine run ID for triggered job.\")\n\n    final_run_status, run_data = await wait_for_dbt_cloud_job_run(\n        run_id=run_id,\n        dbt_cloud_credentials=dbt_cloud_credentials,\n        max_wait_seconds=max_wait_seconds,\n        poll_frequency_seconds=poll_frequency_seconds,\n    )\n\n    if final_run_status == DbtCloudJobRunStatus.SUCCESS:\n        try:\n            list_run_artifacts_future = await list_dbt_cloud_run_artifacts.submit(\n                dbt_cloud_credentials=dbt_cloud_credentials,\n                run_id=run_id,\n            )\n            run_data[\"artifact_paths\"] = await list_run_artifacts_future.result()\n        except DbtCloudListRunArtifactsFailed as ex:\n            logger.warning(\n                \"Unable to retrieve artifacts for job run with ID %s. Reason: %s\",\n                run_id,\n                ex,\n            )\n        logger.info(\n            \"dbt Cloud job run with ID %s completed successfully!\",\n            run_id,\n        )\n        return run_data\n    elif final_run_status == DbtCloudJobRunStatus.CANCELLED:\n        raise DbtCloudJobRunCancelled(\n            f\"Triggered job run with ID {run_id} was cancelled.\"\n        )\n    elif final_run_status == DbtCloudJobRunStatus.FAILED:\n        while retry_filtered_models_attempts > 0:\n            logger.info(\n                f\"Retrying job run with ID: {run_id} \"\n                f\"{retry_filtered_models_attempts} more times\"\n            )\n            try:\n                retry_filtered_models_attempts -= 1\n                run_data = await retry_dbt_cloud_job_run_subset_and_wait_for_completion(\n                    dbt_cloud_credentials=dbt_cloud_credentials,\n                    run_id=run_id,\n                    trigger_job_run_options=trigger_job_run_options,\n                    max_wait_seconds=max_wait_seconds,\n                    poll_frequency_seconds=poll_frequency_seconds,\n                )\n                return run_data\n            except Exception:\n                pass\n        else:\n            raise DbtCloudJobRunFailed(f\"Triggered job run with ID: {run_id} failed.\")\n    else:\n        raise RuntimeError(\n            f\"Triggered job run with ID: {run_id} ended with unexpected\"\n            f\"status {final_run_status.value}.\"\n        )\n
    "},{"location":"integrations/prefect-dbt/cloud/models/","title":"Models","text":""},{"location":"integrations/prefect-dbt/cloud/models/#prefect_dbt.cloud.models","title":"prefect_dbt.cloud.models","text":"

    Module containing models used for passing data to dbt Cloud

    "},{"location":"integrations/prefect-dbt/cloud/models/#prefect_dbt.cloud.models.TriggerJobRunOptions","title":"TriggerJobRunOptions","text":"

    Bases: BaseModel

    Defines options that can be set when triggering a dbt Cloud job run.

    Source code in prefect_dbt/cloud/models.py
    class TriggerJobRunOptions(BaseModel):\n    \"\"\"\n    Defines options that can be defined when triggering a dbt Cloud job run.\n    \"\"\"\n\n    cause: str = Field(\n        default_factory=default_cause_factory,\n        description=\"A text description of the reason for running this job.\",\n    )\n    git_sha: Optional[str] = Field(\n        default=None, description=\"The git sha to check out before running this job.\"\n    )\n    git_branch: Optional[str] = Field(\n        default=None, description=\"The git branch to check out before running this job.\"\n    )\n    schema_override: Optional[str] = Field(\n        default=None,\n        description=\"Override the destination schema in the configured \"\n        \"target for this job.\",\n    )\n    dbt_version_override: Optional[str] = Field(\n        default=None, description=\"Override the version of dbt used to run this job.\"\n    )\n    threads_override: Optional[int] = Field(\n        default=None, description=\"Override the number of threads used to run this job.\"\n    )\n    target_name_override: Optional[str] = Field(\n        default=None,\n        description=\"Override the target.name context variable used when \"\n        \"running this job\",\n    )\n    generate_docs_override: Optional[bool] = Field(\n        default=None,\n        description=\"Override whether or not this job generates docs \"\n        \"(true=yes, false=no).\",\n    )\n    timeout_seconds_override: Optional[int] = Field(\n        default=None, description=\"Override the timeout in seconds for this job.\"\n    )\n    steps_override: Optional[List[str]] = Field(\n        default=None, description=\"Override the list of steps for this job.\"\n    )\n
    "},{"location":"integrations/prefect-dbt/cloud/models/#prefect_dbt.cloud.models.default_cause_factory","title":"default_cause_factory","text":"

    Factory function to populate the default cause for a job run to include information from the Prefect run context.
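    As a small illustrative sketch (assuming it runs inside a Prefect flow run), the resulting default cause can be inspected through the TriggerJobRunOptions model, which uses this factory for its cause field:

    ```python
    from prefect import flow
    from prefect_dbt.cloud.models import TriggerJobRunOptions

    @flow
    def show_default_cause():
        # Inside a flow run, the factory resolves to something like
        # "Triggered via Prefect in flow run <flow-run-name>".
        print(TriggerJobRunOptions().cause)

    show_default_cause()
    ```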

    Source code in prefect_dbt/cloud/models.py
    def default_cause_factory():\n    \"\"\"\n    Factory function to populate the default cause for a job run to include information\n    from the Prefect run context.\n    \"\"\"\n    cause = \"Triggered via Prefect\"\n\n    try:\n        context = get_run_context()\n        if isinstance(context, FlowRunContext):\n            cause = f\"{cause} in flow run {context.flow_run.name}\"\n        elif isinstance(context, TaskRunContext):\n            cause = f\"{cause} in task run {context.task_run.name}\"\n    except RuntimeError:\n        pass\n\n    return cause\n
    "},{"location":"integrations/prefect-dbt/cloud/runs/","title":"Runs","text":""},{"location":"integrations/prefect-dbt/cloud/runs/#prefect_dbt.cloud.runs","title":"prefect_dbt.cloud.runs","text":"

    Module containing tasks and flows for interacting with dbt Cloud job runs

    "},{"location":"integrations/prefect-dbt/cloud/runs/#prefect_dbt.cloud.runs.DbtCloudJobRunStatus","title":"DbtCloudJobRunStatus","text":"

    Bases: Enum

    dbt Cloud Job statuses.

    Source code in prefect_dbt/cloud/runs.py
    class DbtCloudJobRunStatus(Enum):\n    \"\"\"dbt Cloud Job statuses.\"\"\"\n\n    QUEUED = 1\n    STARTING = 2\n    RUNNING = 3\n    SUCCESS = 10\n    FAILED = 20\n    CANCELLED = 30\n\n    @classmethod\n    def is_terminal_status_code(cls, status_code: Any) -> bool:\n        \"\"\"\n        Returns True if a status code is terminal for a job run.\n        Returns False otherwise.\n        \"\"\"\n        return status_code in [cls.SUCCESS.value, cls.FAILED.value, cls.CANCELLED.value]\n
    "},{"location":"integrations/prefect-dbt/cloud/runs/#prefect_dbt.cloud.runs.DbtCloudJobRunStatus.is_terminal_status_code","title":"is_terminal_status_code classmethod","text":"

    Returns True if a status code is terminal for a job run. Returns False otherwise.
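    For illustration, a short sketch of checking raw status codes against this classmethod (the numeric codes correspond to the enum values in the class definition above):

    ```python
    from prefect_dbt.cloud.runs import DbtCloudJobRunStatus

    # SUCCESS (10), FAILED (20), and CANCELLED (30) are terminal; RUNNING (3) is not.
    assert DbtCloudJobRunStatus.is_terminal_status_code(10)
    assert not DbtCloudJobRunStatus.is_terminal_status_code(3)

    # Convert a raw code into the enum for readable logging.
    print(DbtCloudJobRunStatus(20).name)  # FAILED
    ```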

    Source code in prefect_dbt/cloud/runs.py
    @classmethod\ndef is_terminal_status_code(cls, status_code: Any) -> bool:\n    \"\"\"\n    Returns True if a status code is terminal for a job run.\n    Returns False otherwise.\n    \"\"\"\n    return status_code in [cls.SUCCESS.value, cls.FAILED.value, cls.CANCELLED.value]\n
    "},{"location":"integrations/prefect-dbt/cloud/runs/#prefect_dbt.cloud.runs.get_dbt_cloud_run_artifact","title":"get_dbt_cloud_run_artifact async","text":"

    A task to get an artifact generated for a completed run. The requested artifact is saved to a file in the current working directory.

    Parameters:

    dbt_cloud_credentials (DbtCloudCredentials, required): Credentials for authenticating with dbt Cloud.
    run_id (int, required): The ID of the run to list run artifacts for.
    path (str, required): The relative path to the run artifact (e.g. manifest.json, catalog.json, run_results.json).
    step (Optional[int], default None): The index of the step in the run to query for artifacts. The first step in the run has the index 1. If the step parameter is omitted, then this method will return the artifacts compiled for the last step in the run.

    Returns:

    Union[Dict, str]: The contents of the requested manifest. Returns a Dict if the requested artifact is a JSON file and a str otherwise.

    Examples:

    Get an artifact of a dbt Cloud job run:

    from prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.runs import get_dbt_cloud_run_artifact\n\n@flow\ndef get_artifact_flow():\n    credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n    return get_dbt_cloud_run_artifact(\n        dbt_cloud_credentials=credentials,\n        run_id=42,\n        path=\"manifest.json\"\n    )\n\nget_artifact_flow()\n

    Get an artifact of a dbt Cloud job run and write it to a file:

    import json\n\nfrom prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.runs import get_dbt_cloud_run_artifact\n\n@flow\ndef get_artifact_flow():\n    credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n    get_run_artifact_result = get_dbt_cloud_run_artifact(\n        dbt_cloud_credentials=credentials,\n        run_id=42,\n        path=\"manifest.json\"\n    )\n\n    with open(\"manifest.json\", \"w\") as file:\n        json.dump(get_run_artifact_result, file)\n\nget_artifact_flow()\n

    Source code in prefect_dbt/cloud/runs.py
    @task(\n    name=\"Get dbt Cloud job artifact\",\n    description=\"Fetches an artifact from a completed run.\",\n    retries=3,\n    retry_delay_seconds=10,\n)\nasync def get_dbt_cloud_run_artifact(\n    dbt_cloud_credentials: DbtCloudCredentials,\n    run_id: int,\n    path: str,\n    step: Optional[int] = None,\n) -> Union[Dict, str]:\n    \"\"\"\n    A task to get an artifact generated for a completed run. The requested artifact\n    is saved to a file in the current working directory.\n\n    Args:\n        dbt_cloud_credentials: Credentials for authenticating with dbt Cloud.\n        run_id: The ID of the run to list run artifacts for.\n        path: The relative path to the run artifact (e.g. manifest.json, catalog.json,\n            run_results.json)\n        step: The index of the step in the run to query for artifacts. The\n            first step in the run has the index 1. If the step parameter is\n            omitted, then this method will return the artifacts compiled\n            for the last step in the run.\n\n    Returns:\n        The contents of the requested manifest. Returns a `Dict` if the\n            requested artifact is a JSON file and a `str` otherwise.\n\n    Examples:\n        Get an artifact of a dbt Cloud job run:\n        ```python\n        from prefect import flow\n\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.runs import get_dbt_cloud_run_artifact\n\n        @flow\n        def get_artifact_flow():\n            credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n            return get_dbt_cloud_run_artifact(\n                dbt_cloud_credentials=credentials,\n                run_id=42,\n                path=\"manifest.json\"\n            )\n\n        get_artifact_flow()\n        ```\n\n        Get an artifact of a dbt Cloud job run and write it to a file:\n        ```python\n        import json\n\n        from prefect import flow\n\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import get_dbt_cloud_run_artifact\n\n        @flow\n        def get_artifact_flow():\n            credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n            get_run_artifact_result = get_dbt_cloud_run_artifact(\n                dbt_cloud_credentials=credentials,\n                run_id=42,\n                path=\"manifest.json\"\n            )\n\n            with open(\"manifest.json\", \"w\") as file:\n                json.dump(get_run_artifact_result, file)\n\n        get_artifact_flow()\n        ```\n    \"\"\"  # noqa\n\n    try:\n        async with dbt_cloud_credentials.get_administrative_client() as client:\n            response = await client.get_run_artifact(\n                run_id=run_id, path=path, step=step\n            )\n    except HTTPStatusError as ex:\n        raise DbtCloudGetRunArtifactFailed(extract_user_message(ex)) from ex\n\n    if path.endswith(\".json\"):\n        artifact_contents = response.json()\n    else:\n        artifact_contents = response.text\n\n    return artifact_contents\n
    "},{"location":"integrations/prefect-dbt/cloud/runs/#prefect_dbt.cloud.runs.get_dbt_cloud_run_info","title":"get_dbt_cloud_run_info async","text":"

    A task to retrieve information about a dbt Cloud job run.

    Parameters:

    dbt_cloud_credentials (DbtCloudCredentials, required): Credentials for authenticating with dbt Cloud.
    run_id (int, required): The ID of the run to get details for.
    include_related (Optional[List[Literal['trigger', 'job', 'debug_logs', 'run_steps']]], default None): List of related fields to pull with the run. Valid values are \"trigger\", \"job\", \"debug_logs\", and \"run_steps\". If \"debug_logs\" is not provided in a request, then the included debug logs will be truncated to the last 1,000 lines of the debug log output file.

    Returns:

    Dict: The run data returned by the dbt Cloud administrative API.

    Example

    Get status of a dbt Cloud job run:

    from prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.runs import get_dbt_cloud_run_info\n\n@flow\ndef get_run_flow():\n    credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n    return get_dbt_cloud_run_info(\n        dbt_cloud_credentials=credentials,\n        run_id=42\n    )\n\nget_run_flow()\n

    Source code in prefect_dbt/cloud/runs.py
    @task(\n    name=\"Get dbt Cloud job run details\",\n    description=\"Retrieves details of a dbt Cloud job run \"\n    \"for the run with the given run_id.\",\n    retries=3,\n    retry_delay_seconds=10,\n)\nasync def get_dbt_cloud_run_info(\n    dbt_cloud_credentials: DbtCloudCredentials,\n    run_id: int,\n    include_related: Optional[\n        List[Literal[\"trigger\", \"job\", \"debug_logs\", \"run_steps\"]]\n    ] = None,\n) -> Dict:\n    \"\"\"\n    A task to retrieve information about a dbt Cloud job run.\n\n    Args:\n        dbt_cloud_credentials: Credentials for authenticating with dbt Cloud.\n        run_id: The ID of the job to trigger.\n        include_related: List of related fields to pull with the run.\n            Valid values are \"trigger\", \"job\", \"debug_logs\", and \"run_steps\".\n            If \"debug_logs\" is not provided in a request, then the included debug\n            logs will be truncated to the last 1,000 lines of the debug log output file.\n\n    Returns:\n        The run data returned by the dbt Cloud administrative API.\n\n    Example:\n        Get status of a dbt Cloud job run:\n        ```python\n        from prefect import flow\n\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import get_run\n\n        @flow\n        def get_run_flow():\n            credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n            return get_run(\n                dbt_cloud_credentials=credentials,\n                run_id=42\n            )\n\n        get_run_flow()\n        ```\n    \"\"\"  # noqa\n    try:\n        async with dbt_cloud_credentials.get_administrative_client() as client:\n            response = await client.get_run(\n                run_id=run_id, include_related=include_related\n            )\n    except HTTPStatusError as ex:\n        raise DbtCloudGetRunFailed(extract_user_message(ex)) from ex\n    return response.json()[\"data\"]\n
    "},{"location":"integrations/prefect-dbt/cloud/runs/#prefect_dbt.cloud.runs.list_dbt_cloud_run_artifacts","title":"list_dbt_cloud_run_artifacts async","text":"

    A task to list the artifact files generated for a completed run.

    Parameters:

    dbt_cloud_credentials (DbtCloudCredentials, required): Credentials for authenticating with dbt Cloud.
    run_id (int, required): The ID of the run to list run artifacts for.
    step (Optional[int], default None): The index of the step in the run to query for artifacts. The first step in the run has the index 1. If the step parameter is omitted, then this method will return the artifacts compiled for the last step in the run.

    Returns:

    List[str]: A list of paths to artifact files that can be used to retrieve the generated artifacts.

    Example

    List artifacts of a dbt Cloud job run:

    from prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.runs import list_dbt_cloud_run_artifacts\n\n@flow\ndef list_artifacts_flow():\n    credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n    return list_dbt_cloud_run_artifacts(\n        dbt_cloud_credentials=credentials,\n        run_id=42\n    )\n\nlist_artifacts_flow()\n

    Source code in prefect_dbt/cloud/runs.py
    @task(\n    name=\"List dbt Cloud job artifacts\",\n    description=\"Fetches a list of artifact files generated for a completed run.\",\n    retries=3,\n    retry_delay_seconds=10,\n)\nasync def list_dbt_cloud_run_artifacts(\n    dbt_cloud_credentials: DbtCloudCredentials, run_id: int, step: Optional[int] = None\n) -> List[str]:\n    \"\"\"\n    A task to list the artifact files generated for a completed run.\n\n    Args:\n        dbt_cloud_credentials: Credentials for authenticating with dbt Cloud.\n        run_id: The ID of the run to list run artifacts for.\n        step: The index of the step in the run to query for artifacts. The\n            first step in the run has the index 1. If the step parameter is\n            omitted, then this method will return the artifacts compiled\n            for the last step in the run.\n\n    Returns:\n        A list of paths to artifact files that can be used to retrieve the generated artifacts.\n\n    Example:\n        List artifacts of a dbt Cloud job run:\n        ```python\n        from prefect import flow\n\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import list_dbt_cloud_run_artifacts\n\n        @flow\n        def list_artifacts_flow():\n            credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n            return list_dbt_cloud_run_artifacts(\n                dbt_cloud_credentials=credentials,\n                run_id=42\n            )\n\n        list_artifacts_flow()\n        ```\n    \"\"\"  # noqa\n    try:\n        async with dbt_cloud_credentials.get_administrative_client() as client:\n            response = await client.list_run_artifacts(run_id=run_id, step=step)\n    except HTTPStatusError as ex:\n        raise DbtCloudListRunArtifactsFailed(extract_user_message(ex)) from ex\n    return response.json()[\"data\"]\n
    "},{"location":"integrations/prefect-dbt/cloud/runs/#prefect_dbt.cloud.runs.wait_for_dbt_cloud_job_run","title":"wait_for_dbt_cloud_job_run async","text":"

    Waits for the given dbt Cloud job run to finish running.

    Parameters:

    run_id (int, required): The ID of the run to wait for.
    dbt_cloud_credentials (DbtCloudCredentials, required): Credentials for authenticating with dbt Cloud.
    max_wait_seconds (int, default 900): Maximum number of seconds to wait for the job to complete.
    poll_frequency_seconds (int, default 10): Number of seconds to wait in between checks for run completion.

    Raises:

    DbtCloudJobRunTimedOut: When the elapsed wait time exceeds max_wait_seconds.

    Returns:

    run_status (DbtCloudJobRunStatus): An enum representing the final dbt Cloud job run status.
    run_data (Dict): A dictionary containing information about the run after completion.
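    This flow has no usage example above; as a minimal sketch (the credentials values and run ID below are placeholders), it can be run directly since it is itself a flow and returns a (status, run_data) tuple:

    ```python
    import asyncio

    from prefect_dbt.cloud import DbtCloudCredentials
    from prefect_dbt.cloud.runs import wait_for_dbt_cloud_job_run

    # Placeholder credentials and run ID.
    final_status, run_data = asyncio.run(
        wait_for_dbt_cloud_job_run(
            run_id=42,
            dbt_cloud_credentials=DbtCloudCredentials(
                api_key="my_api_key", account_id=123456789
            ),
            max_wait_seconds=1800,
            poll_frequency_seconds=30,
        )
    )
    print(final_status.name, run_data.get("status"))
    ```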

    Source code in prefect_dbt/cloud/runs.py
    @flow(\n    name=\"Wait for dbt Cloud job run\",\n    description=\"Waits for a dbt Cloud job run to finish running.\",\n)\nasync def wait_for_dbt_cloud_job_run(\n    run_id: int,\n    dbt_cloud_credentials: DbtCloudCredentials,\n    max_wait_seconds: int = 900,\n    poll_frequency_seconds: int = 10,\n) -> Tuple[DbtCloudJobRunStatus, Dict]:\n    \"\"\"\n    Waits for the given dbt Cloud job run to finish running.\n\n    Args:\n        run_id: The ID of the run to wait for.\n        dbt_cloud_credentials: Credentials for authenticating with dbt Cloud.\n        max_wait_seconds: Maximum number of seconds to wait for job to complete\n        poll_frequency_seconds: Number of seconds to wait in between checks for\n            run completion.\n\n    Raises:\n        DbtCloudJobRunTimedOut: When the elapsed wait time exceeds `max_wait_seconds`.\n\n    Returns:\n        run_status: An enum representing the final dbt Cloud job run status\n        run_data: A dictionary containing information about the run after completion.\n\n\n    Example:\n\n\n    \"\"\"\n    logger = get_run_logger()\n    seconds_waited_for_run_completion = 0\n    wait_for = []\n    while seconds_waited_for_run_completion <= max_wait_seconds:\n        run_data_future = await get_dbt_cloud_run_info.submit(\n            dbt_cloud_credentials=dbt_cloud_credentials,\n            run_id=run_id,\n            wait_for=wait_for,\n        )\n        run_data = await run_data_future.result()\n        run_status_code = run_data.get(\"status\")\n\n        if DbtCloudJobRunStatus.is_terminal_status_code(run_status_code):\n            return DbtCloudJobRunStatus(run_status_code), run_data\n\n        wait_for = [run_data_future]\n        logger.debug(\n            \"dbt Cloud job run with ID %i has status %s. Waiting for %i seconds.\",\n            run_id,\n            DbtCloudJobRunStatus(run_status_code).name,\n            poll_frequency_seconds,\n        )\n        await asyncio.sleep(poll_frequency_seconds)\n        seconds_waited_for_run_completion += poll_frequency_seconds\n\n    raise DbtCloudJobRunTimedOut(\n        f\"Max wait time of {max_wait_seconds} seconds exceeded while waiting \"\n        \"for job run with ID {run_id}\"\n    )\n
    "},{"location":"integrations/prefect-dbt/cloud/utils/","title":"Utils","text":""},{"location":"integrations/prefect-dbt/cloud/utils/#prefect_dbt.cloud.utils","title":"prefect_dbt.cloud.utils","text":"

    Utilities for common interactions with the dbt Cloud API

    "},{"location":"integrations/prefect-dbt/cloud/utils/#prefect_dbt.cloud.utils.DbtCloudAdministrativeApiCallFailed","title":"DbtCloudAdministrativeApiCallFailed","text":"

    Bases: Exception

    Raised when a call to dbt Cloud administrative API fails.

    Source code in prefect_dbt/cloud/utils.py
    class DbtCloudAdministrativeApiCallFailed(Exception):\n    \"\"\"Raised when a call to dbt Cloud administrative API fails.\"\"\"\n
    "},{"location":"integrations/prefect-dbt/cloud/utils/#prefect_dbt.cloud.utils.call_dbt_cloud_administrative_api_endpoint","title":"call_dbt_cloud_administrative_api_endpoint async","text":"

    Task that calls a specified endpoint in the dbt Cloud administrative API. Use this task if a prebuilt one is not yet available.

    Parameters:

    dbt_cloud_credentials (DbtCloudCredentials, required): Credentials for authenticating with dbt Cloud.
    path (str, required): The partial path for the request (e.g. /projects/). Will be appended onto the base URL as determined by the client configuration.
    http_method (str, required): HTTP method to call on the endpoint.
    params (Optional[Dict[str, Any]], default None): Query parameters to include in the request.
    json (Optional[Dict[str, Any]], default None): JSON serializable body to send in the request.

    Returns:

    Any: The body of the response. If the body is JSON serializable, then the result of json.loads with the body as the input will be returned. Otherwise, the body will be returned directly.

    Examples:

    List projects for an account:

    from prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.utils import call_dbt_cloud_administrative_api_endpoint\n\n@flow\ndef get_projects_flow():\n    credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n    result = call_dbt_cloud_administrative_api_endpoint(\n        dbt_cloud_credentials=credentials,\n        path=\"/projects/\",\n        http_method=\"GET\",\n    )\n    return result[\"data\"]\n\nget_projects_flow()\n

    Create a new job:

    from prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.utils import call_dbt_cloud_administrative_api_endpoint\n\n\n@flow\ndef create_job_flow():\n    credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n    result = call_dbt_cloud_administrative_api_endpoint(\n        dbt_cloud_credentials=credentials,\n        path=\"/jobs/\",\n        http_method=\"POST\",\n        json={\n            \"id\": None,\n            \"account_id\": 123456789,\n            \"project_id\": 100,\n            \"environment_id\": 10,\n            \"name\": \"Nightly run\",\n            \"dbt_version\": None,\n            \"triggers\": {\"github_webhook\": True, \"schedule\": True},\n            \"execute_steps\": [\"dbt run\", \"dbt test\", \"dbt source snapshot-freshness\"],\n            \"settings\": {\"threads\": 4, \"target_name\": \"prod\"},\n            \"state\": 1,\n            \"schedule\": {\n                \"date\": {\"type\": \"every_day\"},\n                \"time\": {\"type\": \"every_hour\", \"interval\": 1},\n            },\n        },\n    )\n    return result[\"data\"]\n\ncreate_job_flow()\n

    Source code in prefect_dbt/cloud/utils.py
    @task(\n    name=\"Call dbt Cloud administrative API endpoint\",\n    description=\"Calls a dbt Cloud administrative API endpoint\",\n    retries=3,\n    retry_delay_seconds=10,\n)\nasync def call_dbt_cloud_administrative_api_endpoint(\n    dbt_cloud_credentials: DbtCloudCredentials,\n    path: str,\n    http_method: str,\n    params: Optional[Dict[str, Any]] = None,\n    json: Optional[Dict[str, Any]] = None,\n) -> Any:\n    \"\"\"\n    Task that calls a specified endpoint in the dbt Cloud administrative API. Use this\n    task if a prebuilt one is not yet available.\n\n    Args:\n        dbt_cloud_credentials: Credentials for authenticating with dbt Cloud.\n        path: The partial path for the request (e.g. /projects/). Will be appended\n            onto the base URL as determined by the client configuration.\n        http_method: HTTP method to call on the endpoint.\n        params: Query parameters to include in the request.\n        json: JSON serializable body to send in the request.\n\n    Returns:\n        The body of the response. If the body is JSON serializable, then the result of\n            `json.loads` with the body as the input will be returned. Otherwise, the\n            body will be returned directly.\n\n    Examples:\n        List projects for an account:\n        ```python\n        from prefect import flow\n\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.utils import call_dbt_cloud_administrative_api_endpoint\n\n        @flow\n        def get_projects_flow():\n            credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n            result = call_dbt_cloud_administrative_api_endpoint(\n                dbt_cloud_credentials=credentials,\n                path=\"/projects/\",\n                http_method=\"GET\",\n            )\n            return result[\"data\"]\n\n        get_projects_flow()\n        ```\n\n        Create a new job:\n        ```python\n        from prefect import flow\n\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.utils import call_dbt_cloud_administrative_api_endpoint\n\n\n        @flow\n        def create_job_flow():\n            credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n            result = call_dbt_cloud_administrative_api_endpoint(\n                dbt_cloud_credentials=credentials,\n                path=\"/jobs/\",\n                http_method=\"POST\",\n                json={\n                    \"id\": None,\n                    \"account_id\": 123456789,\n                    \"project_id\": 100,\n                    \"environment_id\": 10,\n                    \"name\": \"Nightly run\",\n                    \"dbt_version\": None,\n                    \"triggers\": {\"github_webhook\": True, \"schedule\": True},\n                    \"execute_steps\": [\"dbt run\", \"dbt test\", \"dbt source snapshot-freshness\"],\n                    \"settings\": {\"threads\": 4, \"target_name\": \"prod\"},\n                    \"state\": 1,\n                    \"schedule\": {\n                        \"date\": {\"type\": \"every_day\"},\n                        \"time\": {\"type\": \"every_hour\", \"interval\": 1},\n                    },\n                },\n            )\n            return result[\"data\"]\n\n        create_job_flow()\n        ```\n    \"\"\"  # noqa\n    try:\n        async with dbt_cloud_credentials.get_administrative_client() as client:\n            response = await 
client.call_endpoint(\n                http_method=http_method, path=path, params=params, json=json\n            )\n    except HTTPStatusError as ex:\n        raise DbtCloudAdministrativeApiCallFailed(extract_developer_message(ex)) from ex\n    try:\n        return response.json()\n    except JSONDecodeError:\n        return response.text\n
    "},{"location":"integrations/prefect-dbt/cloud/utils/#prefect_dbt.cloud.utils.extract_developer_message","title":"extract_developer_message","text":"

    Extracts the developer message from an error response from the dbt Cloud administrative API.

    Parameters:

    ex (HTTPStatusError, required): An HTTPStatusError raised by httpx.

    Returns:

    Optional[str]: The developer_message from the dbt Cloud administrative API response, or None if a developer_message cannot be extracted.

    Source code in prefect_dbt/cloud/utils.py
    def extract_developer_message(ex: HTTPStatusError) -> Optional[str]:\n    \"\"\"\n    Extracts developer message from a error response from the dbt Cloud\n    administrative API.\n\n    Args:\n        ex: An HTTPStatusError raised by httpx\n\n    Returns:\n        developer_message from dbt Cloud administrative API response or None if a\n        developer_message cannot be extracted\n    \"\"\"\n    response_payload = ex.response.json()\n    status = response_payload.get(\"status\", {})\n    return status.get(\"developer_message\")\n
    "},{"location":"integrations/prefect-dbt/cloud/utils/#prefect_dbt.cloud.utils.extract_user_message","title":"extract_user_message","text":"

    Extracts the user message from an error response from the dbt Cloud administrative API.

    Parameters:

    ex (HTTPStatusError, required): An HTTPStatusError raised by httpx.

    Returns:

    Optional[str]: The user_message from the dbt Cloud administrative API response, or None if a user_message cannot be extracted.
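    For illustration only, a sketch of how this helper is typically applied around an administrative API call made with httpx (the call inside the try block is a placeholder):

    ```python
    from httpx import HTTPStatusError

    from prefect_dbt.cloud.utils import extract_user_message

    try:
        ...  # placeholder for an administrative API call made with httpx
    except HTTPStatusError as ex:
        # Fall back to the raw exception text if no user_message is present.
        print(extract_user_message(ex) or str(ex))
    ```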

    Source code in prefect_dbt/cloud/utils.py
    def extract_user_message(ex: HTTPStatusError) -> Optional[str]:\n    \"\"\"\n    Extracts user message from a error response from the dbt Cloud administrative API.\n\n    Args:\n        ex: An HTTPStatusError raised by httpx\n\n    Returns:\n        user_message from dbt Cloud administrative API response or None if a\n        user_message cannot be extracted\n    \"\"\"\n    response_payload = ex.response.json()\n    status = response_payload.get(\"status\", {})\n    return status.get(\"user_message\")\n
    "},{"location":"integrations/prefect-docker/","title":"prefect-docker","text":""},{"location":"integrations/prefect-docker/#welcome","title":"Welcome!","text":"

    Prefect integrations for working with Docker.

    Note! The DockerRegistryCredentials in prefect-docker is a unique block, separate from the DockerRegistry in prefect core. While DockerRegistry implements some of the functionality of both DockerHost and DockerRegistryCredentials for convenience, it does not allow much configuration for interacting with a Docker host.

    Do not use DockerRegistry with this collection. Instead, use DockerHost and DockerRegistryCredentials.
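    As a hedged sketch of that recommendation (the block names and values below are placeholders, and the DockerRegistryCredentials field names are assumptions about this collection's block schema):

    ```python
    from prefect_docker import DockerHost, DockerRegistryCredentials

    # Configure how to reach the Docker host (placeholder URL).
    docker_host = DockerHost(base_url="unix://var/run/docker.sock")
    docker_host.save("my-docker-host", overwrite=True)

    # Configure registry credentials separately (assumed field names).
    registry_credentials = DockerRegistryCredentials(
        registry_url="registry.hub.docker.com",
        username="my_username",
        password="my_password",
    )
    registry_credentials.save("my-registry-creds", overwrite=True)
    ```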

    "},{"location":"integrations/prefect-docker/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-docker/#python-setup","title":"Python setup","text":"

    Requires an installation of Python 3.8+.

    We recommend using a Python virtual environment manager such as pipenv, conda, or virtualenv.

    These tasks are designed to work with Prefect 2. For more information about how to use Prefect, please refer to the Prefect documentation.

    "},{"location":"integrations/prefect-docker/#installation","title":"Installation","text":"

    Install prefect-docker with pip:

    pip install prefect-docker\n

    Then, register the blocks in this collection to view them in Prefect Cloud:

    prefect block register -m prefect_docker\n

    Note: to use the load method on blocks, you must already have a block document saved, either through code or through the UI.
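    For example, a minimal sketch (assuming a DockerHost block document named "my-docker-host" was saved previously):

    ```python
    from prefect_docker import DockerHost

    # Assumes a block document was saved earlier, e.g.:
    # DockerHost(base_url="tcp://127.0.0.1:1234").save("my-docker-host")
    docker_host = DockerHost.load("my-docker-host")
    ```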

    "},{"location":"integrations/prefect-docker/#pull-image-and-create-start-log-stop-and-remove-docker-container","title":"Pull image, and create, start, log, stop, and remove Docker container","text":"
    from prefect import flow, get_run_logger\nfrom prefect_docker.images import pull_docker_image\nfrom prefect_docker.containers import (\n    create_docker_container,\n    start_docker_container,\n    get_docker_container_logs,\n    stop_docker_container,\n    remove_docker_container,\n)\n\n\n@flow\ndef docker_flow():\n    logger = get_run_logger()\n    pull_docker_image(\"prefecthq/prefect\", \"latest\")\n    container = create_docker_container(\n        image=\"prefecthq/prefect\", command=\"echo 'hello world!' && sleep 60\"\n    )\n    start_docker_container(container_id=container.id)\n    logs = get_docker_container_logs(container_id=container.id)\n    logger.info(logs)\n    stop_docker_container(container_id=container.id)\n    remove_docker_container(container_id=container.id)\n    return container\n
    "},{"location":"integrations/prefect-docker/#use-a-custom-docker-host-to-create-a-docker-container","title":"Use a custom Docker Host to create a Docker container","text":"
    from prefect import flow\nfrom prefect_docker import DockerHost\nfrom prefect_docker.containers import create_docker_container\n\n@flow\ndef create_docker_container_flow():\n    docker_host = DockerHost(\n        base_url=\"tcp://127.0.0.1:1234\",\n        max_pool_size=4\n    )\n    container = create_docker_container(\n        docker_host=docker_host,\n        image=\"prefecthq/prefect\",\n        command=\"echo 'hello world!'\"\n    )\n\ncreate_docker_container_flow()\n
    "},{"location":"integrations/prefect-docker/#resources","title":"Resources","text":"

    If you encounter any bugs while using prefect-docker, feel free to open an issue in the prefect repository.

    If you have any questions or issues while using prefect-docker, you can find help in the Prefect Slack community.

    "},{"location":"integrations/prefect-docker/#development","title":"Development","text":"

    If you'd like to install a version of prefect-docker for development, clone the repository and perform an editable install with pip:

    git clone https://github.com/PrefectHQ/prefect-docker.git\n\ncd prefect-docker/\n\npip install -e \".[dev]\"\n\n# Install linting pre-commit hooks\npre-commit install\n
    "},{"location":"integrations/prefect-docker/containers/","title":"Containers","text":""},{"location":"integrations/prefect-docker/containers/#prefect_docker.containers","title":"prefect_docker.containers","text":"

    Integrations with Docker Containers.

    "},{"location":"integrations/prefect-docker/containers/#prefect_docker.containers.create_docker_container","title":"create_docker_container async","text":"

    Create a container without starting it. Similar to docker create.

    Parameters:

    image (str, required): The image to run.
    command (Optional[Union[str, List[str]]], default None): The command(s) to run in the container.
    name (Optional[str], default None): The name for this container.
    detach (Optional[bool], default None): Run container in the background.
    docker_host (Optional[DockerHost], default None): Settings for interacting with a Docker host.
    entrypoint (Optional[Union[str, List[str]]], default None): The entrypoint for the container.
    environment (Optional[Union[Dict[str, str], List[str]]], default None): Environment variables to set inside the container, as a dictionary or a list of strings in the format [\"SOMEVARIABLE=xxx\"].
    **create_kwargs (Dict[str, Any], default {}): Additional keyword arguments to pass to client.containers.create.

    Returns:

    Container: A Docker Container object.

    Examples:

    Create a container with the Prefect image.

    from prefect import flow\nfrom prefect_docker.containers import create_docker_container\n\n@flow\ndef create_docker_container_flow():\n    container = create_docker_container(\n        image=\"prefecthq/prefect\",\n        command=\"echo 'hello world!'\"\n    )\n\ncreate_docker_container_flow()\n

    Source code in prefect_docker/containers.py
    @task\nasync def create_docker_container(\n    image: str,\n    command: Optional[Union[str, List[str]]] = None,\n    name: Optional[str] = None,\n    detach: Optional[bool] = None,\n    entrypoint: Optional[Union[str, List[str]]] = None,\n    environment: Optional[Union[Dict[str, str], List[str]]] = None,\n    docker_host: Optional[DockerHost] = None,\n    **create_kwargs: Dict[str, Any],\n) -> Container:\n    \"\"\"\n    Create a container without starting it. Similar to docker create.\n\n    Args:\n        image: The image to run.\n        command: The command(s) to run in the container.\n        name: The name for this container.\n        detach: Run container in the background.\n        docker_host: Settings for interacting with a Docker host.\n        entrypoint: The entrypoint for the container.\n        environment: Environment variables to set inside the container,\n            as a dictionary or a list of strings in the format [\"SOMEVARIABLE=xxx\"].\n        **create_kwargs: Additional keyword arguments to pass to\n            [`client.containers.create`](https://docker-py.readthedocs.io/en/stable/containers.html#docker.models.containers.ContainerCollection.create).\n\n    Returns:\n        A Docker Container object.\n\n    Examples:\n        Create a container with the Prefect image.\n        ```python\n        from prefect import flow\n        from prefect_docker.containers import create_docker_container\n\n        @flow\n        def create_docker_container_flow():\n            container = create_docker_container(\n                image=\"prefecthq/prefect\",\n                command=\"echo 'hello world!'\"\n            )\n\n        create_docker_container_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n\n    with (docker_host or DockerHost()).get_client() as client:\n        logger.info(f\"Creating container with {image!r} image.\")\n        container = await run_sync_in_worker_thread(\n            client.containers.create,\n            image=image,\n            command=command,\n            name=name,\n            detach=detach,\n            entrypoint=entrypoint,\n            environment=environment,\n            **create_kwargs,\n        )\n    return container\n
    "},{"location":"integrations/prefect-docker/containers/#prefect_docker.containers.get_docker_container_logs","title":"get_docker_container_logs async","text":"

    Get logs from this container. Similar to the docker logs command.

    Parameters:

    container_id (str, required): The container ID to pull logs from.
    docker_host (Optional[DockerHost], default None): Settings for interacting with a Docker host.
    **logs_kwargs (Dict[str, Any], default {}): Additional keyword arguments to pass to client.containers.get(container_id).logs.

    Returns:

    str: The Container's logs.

    Examples:

    Gets logs from a container with an ID that starts with \"c157\".

    from prefect import flow\nfrom prefect_docker.containers import get_docker_container_logs\n\n@flow\ndef get_docker_container_logs_flow():\n    logs = get_docker_container_logs(container_id=\"c157\")\n    return logs\n\nget_docker_container_logs_flow()\n

    Source code in prefect_docker/containers.py
    @task\nasync def get_docker_container_logs(\n    container_id: str,\n    docker_host: Optional[DockerHost] = None,\n    **logs_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"\n    Get logs from this container. Similar to the docker logs command.\n\n    Args:\n        container_id: The container ID to pull logs from.\n        docker_host: Settings for interacting with a Docker host.\n        **logs_kwargs: Additional keyword arguments to pass to\n            [`client.containers.get(container_id).logs`](https://docker-py.readthedocs.io/en/stable/containers.html#docker.models.containers.Container.logs).\n\n    Returns:\n        The Container's logs.\n\n    Examples:\n        Gets logs from a container with an ID that starts with \"c157\".\n        ```python\n        from prefect import flow\n        from prefect_docker.containers import get_docker_container_logs\n\n        @flow\n        def get_docker_container_logs_flow():\n            logs = get_docker_container_logs(container_id=\"c157\")\n            return logs\n\n        get_docker_container_logs_flow()\n        ```\n\n    \"\"\"\n    logger = get_run_logger()\n\n    with (docker_host or DockerHost()).get_client() as client:\n        container = await run_sync_in_worker_thread(client.containers.get, container_id)\n        logger.info(f\"Retrieving logs from {container.id!r} container.\")\n        logs = await run_sync_in_worker_thread(container.logs, **logs_kwargs)\n\n    return logs.decode()\n
    "},{"location":"integrations/prefect-docker/containers/#prefect_docker.containers.remove_docker_container","title":"remove_docker_container async","text":"

    Remove this container. Similar to the docker rm command.

    Parameters:

    Name Type Description Default container_id str

    The container ID to remove.

    required docker_host Optional[DockerHost]

    Settings for interacting with a Docker host.

    None **remove_kwargs Dict[str, Any]

    Additional keyword arguments to pass to client.containers.get(container_id).remove.

    {}

    Returns:

    Type Description Container

    The Docker Container object.

    Examples:

    Removes a container with an ID that starts with \"c157\".

    from prefect import flow\nfrom prefect_docker.containers import remove_docker_container\n\n@flow\ndef remove_docker_container_flow():\n    container = remove_docker_container(container_id=\"c157\")\n    return container\n\nremove_docker_container_flow()\n

    Source code in prefect_docker/containers.py
    @task\nasync def remove_docker_container(\n    container_id: str,\n    docker_host: Optional[DockerHost] = None,\n    **remove_kwargs: Dict[str, Any],\n) -> Container:\n    \"\"\"\n    Remove this container. Similar to the docker rm command.\n\n    Args:\n        container_id: The container ID to remove.\n        docker_host: Settings for interacting with a Docker host.\n        **remove_kwargs: Additional keyword arguments to pass to\n            [`client.containers.get(container_id).remove`](https://docker-py.readthedocs.io/en/stable/containers.html#docker.models.containers.Container.remove).\n\n    Returns:\n        The Docker Container object.\n\n    Examples:\n        Removes a container with an ID that starts with \"c157\".\n        ```python\n        from prefect import flow\n        from prefect_docker.containers import remove_docker_container\n\n        @flow\n        def remove_docker_container_flow():\n            container = remove_docker_container(container_id=\"c157\")\n            return container\n\n        remove_docker_container()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n\n    with (docker_host or DockerHost()).get_client() as client:\n        container = await run_sync_in_worker_thread(client.containers.get, container_id)\n        logger.info(f\"Removing container {container.id!r}.\")\n        await run_sync_in_worker_thread(container.remove, **remove_kwargs)\n\n    return container\n
    "},{"location":"integrations/prefect-docker/containers/#prefect_docker.containers.start_docker_container","title":"start_docker_container async","text":"

    Start this container. Similar to the docker start command.

    Parameters:

    Name Type Description Default container_id str

    The container ID to start.

    required docker_host Optional[DockerHost]

    Settings for interacting with a Docker host.

    None **start_kwargs Dict[str, Any]

    Additional keyword arguments to pass to client.containers.get(container_id).start.

    {}

    Returns:

    Type Description Container

    The Docker Container object.

    Examples:

    Start a container with an ID that starts with \"c157\".

    from prefect import flow\nfrom prefect_docker.containers import start_docker_container\n\n@flow\ndef start_docker_container_flow():\n    container = start_docker_container(container_id=\"c157\")\n    return container\n\nstart_docker_container_flow()\n

    Source code in prefect_docker/containers.py
    @task\nasync def start_docker_container(\n    container_id: str,\n    docker_host: Optional[DockerHost] = None,\n    **start_kwargs: Dict[str, Any],\n) -> Container:\n    \"\"\"\n    Start this container. Similar to the docker start command.\n\n    Args:\n        container_id: The container ID to start.\n        docker_host: Settings for interacting with a Docker host.\n        **start_kwargs: Additional keyword arguments to pass to\n            [`client.containers.get(container_id).start`](https://docker-py.readthedocs.io/en/stable/containers.html#docker.models.containers.Container.start).\n\n    Returns:\n        The Docker Container object.\n\n    Examples:\n        Start a container with an ID that starts with \"c157\".\n        ```python\n        from prefect import flow\n        from prefect_docker.containers import start_docker_container\n\n        @flow\n        def start_docker_container_flow():\n            container = start_docker_container(container_id=\"c157\")\n            return container\n\n        start_docker_container_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n\n    with (docker_host or DockerHost()).get_client() as client:\n        container = await run_sync_in_worker_thread(client.containers.get, container_id)\n        logger.info(f\"Starting container {container.id!r}.\")\n        await run_sync_in_worker_thread(container.start, **start_kwargs)\n\n    return container\n
    "},{"location":"integrations/prefect-docker/containers/#prefect_docker.containers.stop_docker_container","title":"stop_docker_container async","text":"

    Stops a container. Similar to the docker stop command.

    Parameters:

    Name Type Description Default container_id str

    The container ID to stop.

    required docker_host Optional[DockerHost]

    Settings for interacting with a Docker host.

    None **stop_kwargs Dict[str, Any]

    Additional keyword arguments to pass to client.containers.get(container_id).stop.

    {}

    Returns:

    Type Description Container

    The Docker Container object.

    Examples:

    Stop a container with an ID that starts with \"c157\".

    from prefect import flow\nfrom prefect_docker.containers import stop_docker_container\n\n@flow\ndef stop_docker_container_flow():\n    container = stop_docker_container(container_id=\"c157\")\n    return container\n\nstop_docker_container_flow()\n
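
    The container tasks in this module can also be chained into a single lifecycle flow. The following is a minimal sketch, assuming the echo command exits quickly so the logs are complete when they are fetched; it is illustrative rather than one of the collection's documented examples.

    from prefect import flow\nfrom prefect_docker.containers import (\n    create_docker_container,\n    get_docker_container_logs,\n    remove_docker_container,\n    start_docker_container,\n    stop_docker_container,\n)\n\n@flow\ndef container_lifecycle_flow():\n    # Create, start, read logs from, stop, and finally remove a container.\n    container = create_docker_container(\n        image=\"prefecthq/prefect\",\n        command=\"echo 'hello world!'\"\n    )\n    start_docker_container(container_id=container.id)\n    logs = get_docker_container_logs(container_id=container.id)\n    stop_docker_container(container_id=container.id)\n    remove_docker_container(container_id=container.id)\n    return logs\n\ncontainer_lifecycle_flow()\n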

    Source code in prefect_docker/containers.py
    @task\nasync def stop_docker_container(\n    container_id: str,\n    docker_host: Optional[DockerHost] = None,\n    **stop_kwargs: Dict[str, Any],\n) -> Container:\n    \"\"\"\n    Stops a container. Similar to the docker stop command.\n\n    Args:\n        container_id: The container ID to stop.\n        docker_host: Settings for interacting with a Docker host.\n        **stop_kwargs: Additional keyword arguments to pass to\n            [`client.containers.get(container_id).stop`](https://docker-py.readthedocs.io/en/stable/containers.html#docker.models.containers.Container.stop).\n\n    Returns:\n        The Docker Container object.\n\n    Examples:\n        Stop a container with an ID that starts with \"c157\".\n        ```python\n        from prefect import flow\n        from prefect_docker.containers import stop_docker_container\n\n        @flow\n        def stop_docker_container_flow():\n            container = stop_docker_container(container_id=\"c157\")\n            return container\n\n        stop_docker_container_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n\n    with (docker_host or DockerHost()).get_client() as client:\n        container = await run_sync_in_worker_thread(client.containers.get, container_id)\n        logger.info(f\"Stopping container {container.id!r}.\")\n        await run_sync_in_worker_thread(container.stop, **stop_kwargs)\n\n    return container\n
    "},{"location":"integrations/prefect-docker/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-docker/credentials/#prefect_docker.credentials","title":"prefect_docker.credentials","text":"

    Module containing docker credentials.

    "},{"location":"integrations/prefect-docker/credentials/#prefect_docker.credentials.DockerRegistryCredentials","title":"DockerRegistryCredentials","text":"

    Bases: Block

    Block used to manage credentials for interacting with a Docker Registry.

    Examples:

    Log into Docker Registry.

    from prefect_docker import DockerHost, DockerRegistryCredentials\n\ndocker_host = DockerHost()\ndocker_registry_credentials = DockerRegistryCredentials(\n    username=\"my_username\",\n    password=\"my_password\",\n    registry_url=\"registry.hub.docker.com\",\n)\nwith docker_host.get_client() as client:\n    docker_registry_credentials.login(client)\n

    Source code in prefect_docker/credentials.py
    class DockerRegistryCredentials(Block):\n    \"\"\"\n    Block used to manage credentials for interacting with a Docker Registry.\n\n    Examples:\n        Log into Docker Registry.\n        ```python\n        from prefect_docker import DockerHost, DockerRegistryCredentials\n\n        docker_host = DockerHost()\n        docker_registry_credentials = DockerRegistryCredentials(\n            username=\"my_username\",\n            password=\"my_password\",\n            registry_url=\"registry.hub.docker.com\",\n        )\n        with docker_host.get_client() as client:\n            docker_registry_credentials.login(client)\n        ```\n    \"\"\"\n\n    _block_type_name = \"Docker Registry Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/14a315b79990200db7341e42553e23650b34bb96-250x250.png\"  # noqa\n    _description = \"Store credentials for interacting with a Docker Registry.\"\n\n    username: str = Field(\n        default=..., description=\"The username to log into the registry with.\"\n    )\n    password: SecretStr = Field(\n        default=..., description=\"The password to log into the registry with.\"\n    )\n    registry_url: str = Field(\n        default=...,\n        description=(\n            'The URL to the registry. Generally, \"http\" or \"https\" can be omitted.'\n        ),\n        example=\"index.docker.io\",\n    )\n    reauth: bool = Field(\n        default=True,\n        description=\"Whether or not to reauthenticate on each interaction.\",\n    )\n\n    async def login(self, client: docker.DockerClient):\n        \"\"\"\n        Authenticates a given Docker client with the configured Docker registry.\n\n        Args:\n            client: A Docker Client.\n        \"\"\"\n        logger = get_run_logger()\n        logger.debug(f\"Logging into {self.registry_url}.\")\n        await run_sync_in_worker_thread(\n            client.login,\n            username=self.username,\n            password=self.password.get_secret_value(),\n            registry=self.registry_url,\n            # See https://github.com/docker/docker-py/issues/2256 for information on\n            # the default value for reauth.\n            reauth=self.reauth,\n        )\n
    "},{"location":"integrations/prefect-docker/credentials/#prefect_docker.credentials.DockerRegistryCredentials.login","title":"login async","text":"

    Authenticates a given Docker client with the configured Docker registry.

    Parameters:

    Name Type Description Default client DockerClient

    A Docker Client.

    required Source code in prefect_docker/credentials.py
    async def login(self, client: docker.DockerClient):\n    \"\"\"\n    Authenticates a given Docker client with the configured Docker registry.\n\n    Args:\n        client: A Docker Client.\n    \"\"\"\n    logger = get_run_logger()\n    logger.debug(f\"Logging into {self.registry_url}.\")\n    await run_sync_in_worker_thread(\n        client.login,\n        username=self.username,\n        password=self.password.get_secret_value(),\n        registry=self.registry_url,\n        # See https://github.com/docker/docker-py/issues/2256 for information on\n        # the default value for reauth.\n        reauth=self.reauth,\n    )\n
    "},{"location":"integrations/prefect-docker/host/","title":"Host","text":""},{"location":"integrations/prefect-docker/host/#prefect_docker.host","title":"prefect_docker.host","text":"

    Module containing Docker host settings.

    "},{"location":"integrations/prefect-docker/host/#prefect_docker.host.DockerHost","title":"DockerHost","text":"

    Bases: Block

    Block used to manage settings for interacting with a Docker host.

    Attributes:

    Name Type Description base_url Optional[str]

    URL to the Docker server, e.g. unix:///var/run/docker.sock or tcp://127.0.0.1:1234. If this is not set, the client will be configured from environment variables.

    version str

    The version of the API to use. Set to auto to automatically detect the server's version.

    timeout Optional[int]

    Default timeout for API calls, in seconds.

    max_pool_size Optional[int]

    The maximum number of connections to save in the pool.

    client_kwargs Dict[str, Any]

    Additional keyword arguments to pass to docker.from_env() or DockerClient.

    Examples:

    Get a Docker Host client.

    from prefect_docker import DockerHost\n\ndocker_host = DockerHost(\n    base_url=\"tcp://127.0.0.1:1234\",\n    max_pool_size=4\n)\nwith docker_host.get_client() as client:\n    ... # Use the client for Docker operations\n

    Source code in prefect_docker/host.py
    class DockerHost(Block):\n    \"\"\"\n    Block used to manage settings for interacting with a Docker host.\n\n    Attributes:\n        base_url: URL to the Docker server, e.g. `unix:///var/run/docker.sock`\n            or `tcp://127.0.0.1:1234`. If this is not set, the client will\n            be configured from environment variables.\n        version: The version of the API to use. Set to auto to\n            automatically detect the server's version.\n        timeout: Default timeout for API calls, in seconds.\n        max_pool_size: The maximum number of connections to save in the pool.\n        client_kwargs: Additional keyword arguments to pass to\n            `docker.from_env()` or `DockerClient`.\n\n    Examples:\n        Get a Docker Host client.\n        ```python\n        from prefect_docker import DockerHost\n\n        docker_host = DockerHost(\n        base_url=\"tcp://127.0.0.1:1234\",\n            max_pool_size=4\n        )\n        with docker_host.get_client() as client:\n            ... # Use the client for Docker operations\n        ```\n    \"\"\"\n\n    _block_type_name = \"Docker Host\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/14a315b79990200db7341e42553e23650b34bb96-250x250.png\"  # noqa\n    _description = \"Store settings for interacting with a Docker host.\"\n\n    base_url: Optional[str] = Field(\n        default=None,\n        description=\"URL to the Docker host.\",\n        title=\"Base URL\",\n        example=\"unix:///var/run/docker.sock\",\n    )\n    version: str = Field(default=\"auto\", description=\"The version of the API to use\")\n    timeout: Optional[int] = Field(\n        default=None, description=\"Default timeout for API calls, in seconds.\"\n    )\n    max_pool_size: Optional[int] = Field(\n        default=None,\n        description=\"The maximum number of connections to save in the pool.\",\n    )\n    client_kwargs: Dict[str, Any] = Field(\n        default_factory=dict,\n        title=\"Additional Configuration\",\n        description=(\n            \"Additional keyword arguments to pass to \"\n            \"`docker.from_env()` or `DockerClient`.\"\n        ),\n    )\n\n    def get_client(self) -> docker.DockerClient:\n        \"\"\"\n        Gets a Docker Client to communicate with a Docker host.\n\n        Returns:\n            A Docker Client.\n        \"\"\"\n        logger = get_run_logger()\n        client_kwargs = {\n            \"version\": self.version,\n            \"timeout\": self.timeout,\n            \"max_pool_size\": self.max_pool_size,\n            **self.client_kwargs,\n        }\n        client_kwargs = {\n            key: value for key, value in client_kwargs.items() if value is not None\n        }\n        if self.base_url is None:\n            logger.debug(\n                f\"Creating a Docker client from \"\n                f\"environment variables, using {self.version} version.\"\n            )\n            client = _ContextManageableDockerClient.from_env(**client_kwargs)\n        else:\n            logger.debug(\n                f\"Creating a Docker client to {self.base_url} \"\n                f\"using {self.version} version.\"\n            )\n            client = _ContextManageableDockerClient(\n                base_url=self.base_url, **client_kwargs\n            )\n        return client\n
    "},{"location":"integrations/prefect-docker/host/#prefect_docker.host.DockerHost.get_client","title":"get_client","text":"

    Gets a Docker Client to communicate with a Docker host.

    Returns:

    Type Description DockerClient

    A Docker Client.

    Source code in prefect_docker/host.py
    def get_client(self) -> docker.DockerClient:\n    \"\"\"\n    Gets a Docker Client to communicate with a Docker host.\n\n    Returns:\n        A Docker Client.\n    \"\"\"\n    logger = get_run_logger()\n    client_kwargs = {\n        \"version\": self.version,\n        \"timeout\": self.timeout,\n        \"max_pool_size\": self.max_pool_size,\n        **self.client_kwargs,\n    }\n    client_kwargs = {\n        key: value for key, value in client_kwargs.items() if value is not None\n    }\n    if self.base_url is None:\n        logger.debug(\n            f\"Creating a Docker client from \"\n            f\"environment variables, using {self.version} version.\"\n        )\n        client = _ContextManageableDockerClient.from_env(**client_kwargs)\n    else:\n        logger.debug(\n            f\"Creating a Docker client to {self.base_url} \"\n            f\"using {self.version} version.\"\n        )\n        client = _ContextManageableDockerClient(\n            base_url=self.base_url, **client_kwargs\n        )\n    return client\n
    "},{"location":"integrations/prefect-docker/images/","title":"Images","text":""},{"location":"integrations/prefect-docker/images/#prefect_docker.images","title":"prefect_docker.images","text":"

    Integrations with Docker Images.

    "},{"location":"integrations/prefect-docker/images/#prefect_docker.images.pull_docker_image","title":"pull_docker_image async","text":"

    Pull an image of the given name and return it. Similar to the docker pull command.

    If all_tags is set, the tag parameter is ignored and all image tags will be pulled.

    Parameters:

    Name Type Description Default repository str

    The repository to pull.

    required tag Optional[str]

    The tag to pull; if not provided, it is set to latest.

    None platform Optional[str]

    Platform in the format os[/arch[/variant]].

    None all_tags bool

    Pull all image tags, which will return a list of Images.

    False docker_host Optional[DockerHost]

    Settings for interacting with a Docker host; if not provided, a DockerHost is automatically instantiated from environment variables.

    None docker_registry_credentials Optional[DockerRegistryCredentials]

    Docker credentials used to log in to a registry before pulling the image.

    None **pull_kwargs Dict[str, Any]

    Additional keyword arguments to pass to client.images.pull.

    {}

    Returns:

    Type Description Union[Image, List[Image]]

    The image that has been pulled, or a list of images if all_tags is True.

    Examples:

    Pull prefecthq/prefect image with the tag latest-python3.10.

    from prefect import flow\nfrom prefect_docker.images import pull_docker_image\n\n@flow\ndef pull_docker_image_flow():\n    image = pull_docker_image(\n        repository=\"prefecthq/prefect\",\n        tag=\"latest-python3.10\"\n    )\n    return image\n\npull_docker_image_flow()\n
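
    Pull an image from a private registry by passing stored credentials. A minimal sketch with placeholder repository and credential values; substitute your own registry, username, password, and repository.

    from prefect import flow\nfrom prefect_docker import DockerRegistryCredentials\nfrom prefect_docker.images import pull_docker_image\n\n@flow\ndef pull_private_image_flow():\n    credentials = DockerRegistryCredentials(\n        username=\"my_username\",\n        password=\"my_password\",\n        registry_url=\"registry.hub.docker.com\",\n    )\n    image = pull_docker_image(\n        repository=\"my-org/my-private-image\",\n        tag=\"latest\",\n        docker_registry_credentials=credentials,\n    )\n    return image\n\npull_private_image_flow()\n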

    Source code in prefect_docker/images.py
    @task\nasync def pull_docker_image(\n    repository: str,\n    tag: Optional[str] = None,\n    platform: Optional[str] = None,\n    all_tags: bool = False,\n    docker_host: Optional[DockerHost] = None,\n    docker_registry_credentials: Optional[DockerRegistryCredentials] = None,\n    **pull_kwargs: Dict[str, Any],\n) -> Union[Image, List[Image]]:\n    \"\"\"\n    Pull an image of the given name and return it. Similar to the docker pull command.\n\n    If all_tags is set, the tag parameter is ignored and all image tags will be pulled.\n\n    Args:\n        repository: The repository to pull.\n        tag: The tag to pull; if not provided, it is set to latest.\n        platform: Platform in the format os[/arch[/variant]].\n        all_tags: Pull all image tags which will return a list of Images.\n        docker_host: Settings for interacting with a Docker host; if not\n            provided, will automatically instantiate a `DockerHost` from env.\n        docker_registry_credentials: Docker credentials used to log in to\n            a registry before pulling the image.\n        **pull_kwargs: Additional keyword arguments to pass to `client.images.pull`.\n\n    Returns:\n        The image that has been pulled, or a list of images if `all_tags` is `True`.\n\n    Examples:\n        Pull prefecthq/prefect image with the tag latest-python3.10.\n        ```python\n        from prefect import flow\n        from prefect_docker.images import pull_docker_image\n\n        @flow\n        def pull_docker_image_flow():\n            image = pull_docker_image(\n                repository=\"prefecthq/prefect\",\n                tag=\"latest-python3.10\"\n            )\n            return image\n\n        pull_docker_image_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    if tag and all_tags:\n        raise ValueError(\"Cannot pass `tags` and `all_tags` together\")\n\n    pull_kwargs = {\n        \"repository\": repository,\n        \"tag\": tag,\n        \"platform\": platform,\n        \"all_tags\": all_tags,\n        **pull_kwargs,\n    }\n    pull_kwargs = {\n        key: value for key, value in pull_kwargs.items() if value is not None\n    }\n\n    with (docker_host or DockerHost()).get_client() as client:\n        if docker_registry_credentials is not None:\n            await docker_registry_credentials.login(client=client)\n\n        if tag:\n            logger.info(f\"Pulling image: {repository}:{tag}.\")\n        elif all_tags:\n            logger.info(f\"Pulling all images from: {repository}\")\n\n        image = await run_sync_in_worker_thread(client.images.pull, **pull_kwargs)\n\n    return image\n
    "},{"location":"integrations/prefect-docker/worker/","title":"Worker","text":""},{"location":"integrations/prefect-docker/worker/#prefect_docker.worker","title":"prefect_docker.worker","text":"

    Module containing the Docker worker used for executing flow runs as Docker containers.

    To start a Docker worker, run the following command:

    prefect worker start --pool 'my-work-pool' --type docker\n

    Replace my-work-pool with the name of the work pool you want the worker to poll for flow runs.
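
    If the work pool does not exist yet, it can be created first with the Prefect CLI; a sketch, assuming the same pool name as above:

    prefect work-pool create 'my-work-pool' --type docker\n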

    For more information about work pools and workers, check out the Prefect docs.

    "},{"location":"integrations/prefect-docker/worker/#prefect_docker.worker.DockerWorker","title":"DockerWorker","text":"

    Bases: BaseWorker

    Prefect worker that executes flow runs within Docker containers.

    Source code in prefect_docker/worker.py
    class DockerWorker(BaseWorker):\n    \"\"\"Prefect worker that executes flow runs within Docker containers.\"\"\"\n\n    type = \"docker\"\n    job_configuration = DockerWorkerJobConfiguration\n    _description = (\n        \"Execute flow runs within Docker containers. Works well for managing flow \"\n        \"execution environments via Docker images. Requires access to a running \"\n        \"Docker daemon.\"\n    )\n    _display_name = \"Docker\"\n    _documentation_url = \"https://prefecthq.github.io/prefect-docker/worker/\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/2IfXXfMq66mrzJBDFFCHTp/6d8f320d9e4fc4393f045673d61ab612/Moby-logo.png?h=250\"  # noqa\n\n    def __init__(self, *args: Any, test_mode: bool = None, **kwargs: Any) -> None:\n        if test_mode is None:\n            self.test_mode = bool(os.getenv(\"PREFECT_DOCKER_TEST_MODE\", False))\n        else:\n            self.test_mode = test_mode\n        super().__init__(*args, **kwargs)\n\n    async def setup(self):\n        if not self.test_mode:\n            self._client = get_client()\n            if self._client.server_type == ServerType.EPHEMERAL:\n                raise RuntimeError(\n                    \"Docker worker cannot be used with an ephemeral server. Please set\"\n                    \" PREFECT_API_URL to the URL for your Prefect API instance. You\"\n                    \" can use a local Prefect API instance by running `prefect server\"\n                    \" start`.\"\n                )\n\n        return await super().setup()\n\n    async def run(\n        self,\n        flow_run: \"FlowRun\",\n        configuration: BaseJobConfiguration,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> BaseWorkerResult:\n        \"\"\"\n        Executes a flow run within a Docker container and waits for the flow run\n        to complete.\n        \"\"\"\n        # The `docker` library uses requests instead of an async http library so it must\n        # be run in a thread to avoid blocking the event loop.\n        container, created_event = await run_sync_in_worker_thread(\n            self._create_and_start_container, configuration\n        )\n        container_pid = self._get_infrastructure_pid(container_id=container.id)\n\n        # Mark as started and return the infrastructure id\n        if task_status:\n            task_status.started(container_pid)\n\n        # Monitor the container\n        container = await run_sync_in_worker_thread(\n            self._watch_container_safe, container, configuration, created_event\n        )\n\n        exit_code = container.attrs[\"State\"].get(\"ExitCode\")\n        return DockerWorkerResult(\n            status_code=exit_code if exit_code is not None else -1,\n            identifier=container_pid,\n        )\n\n    async def kill_infrastructure(\n        self,\n        infrastructure_pid: str,\n        configuration: DockerWorkerJobConfiguration,\n        grace_seconds: int = 30,\n    ):\n        \"\"\"\n        Stops a container for a cancelled flow run based on the provided infrastructure\n        PID.\n        \"\"\"\n        docker_client = self._get_client()\n\n        base_url, container_id = self._parse_infrastructure_pid(infrastructure_pid)\n        if docker_client.api.base_url != base_url:\n            raise InfrastructureNotAvailable(\n                \"\".join(\n                    [\n                        (\n                            f\"Unable to stop container {container_id!r}: the current\"\n                            \" 
Docker API \"\n                        ),\n                        (\n                            f\"URL {docker_client.api.base_url!r} does not match the\"\n                            \" expected \"\n                        ),\n                        f\"API base URL {base_url}.\",\n                    ]\n                )\n            )\n        await run_sync_in_worker_thread(\n            self._stop_container, container_id, docker_client, grace_seconds\n        )\n\n    def _stop_container(\n        self,\n        container_id: str,\n        client: \"DockerClient\",\n        grace_seconds: int = 30,\n    ):\n        try:\n            container = client.containers.get(container_id=container_id)\n        except docker.errors.NotFound:\n            raise InfrastructureNotFound(\n                f\"Unable to stop container {container_id!r}: The container was not\"\n                \" found.\"\n            )\n\n        container.stop(timeout=grace_seconds)\n\n    def _get_client(self):\n        \"\"\"Returns a docker client.\"\"\"\n        try:\n            with warnings.catch_warnings():\n                # Silence warnings due to use of deprecated methods within dockerpy\n                # See https://github.com/docker/docker-py/pull/2931\n                warnings.filterwarnings(\n                    \"ignore\",\n                    message=\"distutils Version classes are deprecated.*\",\n                    category=DeprecationWarning,\n                )\n\n                docker_client = docker.from_env()\n\n        except docker.errors.DockerException as exc:\n            raise RuntimeError(\"Could not connect to Docker.\") from exc\n\n        return docker_client\n\n    def _get_infrastructure_pid(self, container_id: str) -> str:\n        \"\"\"Generates a Docker infrastructure_pid string in the form of\n        `<docker_host_base_url>:<container_id>`.\n        \"\"\"\n        docker_client = self._get_client()\n        base_url = docker_client.api.base_url\n        docker_client.close()\n        return f\"{base_url}:{container_id}\"\n\n    def _parse_infrastructure_pid(self, infrastructure_pid: str) -> Tuple[str, str]:\n        \"\"\"Splits a Docker infrastructure_pid into its component parts\"\"\"\n\n        # base_url can contain `:` so we only want the last item of the split\n        base_url, container_id = infrastructure_pid.rsplit(\":\", 1)\n        return base_url, str(container_id)\n\n    def _build_container_settings(\n        self,\n        docker_client: \"DockerClient\",\n        configuration: DockerWorkerJobConfiguration,\n    ) -> Dict:\n        \"\"\"Builds a dictionary of container settings to pass to the Docker API.\"\"\"\n        network_mode = configuration.get_network_mode()\n        return dict(\n            image=configuration.image,\n            network=configuration.networks[0] if configuration.networks else None,\n            network_mode=network_mode,\n            command=configuration.command,\n            environment=configuration.env,\n            auto_remove=configuration.auto_remove,\n            labels=configuration.labels,\n            extra_hosts=configuration.get_extra_hosts(docker_client),\n            name=configuration.name,\n            volumes=configuration.volumes,\n            mem_limit=configuration.mem_limit,\n            memswap_limit=configuration.memswap_limit,\n            privileged=configuration.privileged,\n        )\n\n    def _create_and_start_container(\n        self, configuration: DockerWorkerJobConfiguration\n    ) -> 
Tuple[\"Container\", Event]:\n        \"\"\"Creates and starts a Docker container.\"\"\"\n        docker_client = self._get_client()\n        if configuration.registry_credentials:\n            self._logger.info(\"Logging into Docker registry...\")\n            docker_client.login(\n                username=configuration.registry_credentials.username,\n                password=configuration.registry_credentials.password.get_secret_value(),\n                registry=configuration.registry_credentials.registry_url,\n                reauth=configuration.registry_credentials.reauth,\n            )\n        container_settings = self._build_container_settings(\n            docker_client, configuration\n        )\n\n        if self._should_pull_image(docker_client, configuration=configuration):\n            self._logger.info(f\"Pulling image {configuration.image!r}...\")\n            self._pull_image(docker_client, configuration)\n\n        try:\n            container = self._create_container(docker_client, **container_settings)\n        except Exception as exc:\n            self._emit_container_creation_failed_event(configuration)\n            raise exc\n\n        created_event = self._emit_container_status_change_event(\n            container, configuration\n        )\n\n        # Add additional networks after the container is created; only one network can\n        # be attached at creation time\n        if len(configuration.networks) > 1:\n            for network_name in configuration.networks[1:]:\n                network = docker_client.networks.get(network_name)\n                network.connect(container)\n\n        # Start the container\n        container.start()\n\n        docker_client.close()\n\n        return container, created_event\n\n    def _watch_container_safe(\n        self,\n        container: \"Container\",\n        configuration: DockerWorkerJobConfiguration,\n        created_event: Event,\n    ) -> \"Container\":\n        \"\"\"Watches a container for completion, handling any errors that may occur.\"\"\"\n        # Monitor the container capturing the latest snapshot while capturing\n        # not found errors\n        docker_client = self._get_client()\n\n        try:\n            seen_statuses = {container.status}\n            last_event = created_event\n            for latest_container in self._watch_container(\n                docker_client, container.id, configuration\n            ):\n                container = latest_container\n                if container.status not in seen_statuses:\n                    seen_statuses.add(container.status)\n                    last_event = self._emit_container_status_change_event(\n                        container, configuration, last_event=last_event\n                    )\n\n        except docker.errors.NotFound:\n            # The container was removed during watching\n            self._logger.warning(\n                f\"Docker container {container.name} was removed before we could wait \"\n                \"for its completion.\"\n            )\n        finally:\n            docker_client.close()\n\n        return container\n\n    def _watch_container(\n        self,\n        docker_client: \"DockerClient\",\n        container_id: str,\n        configuration: DockerWorkerJobConfiguration,\n    ) -> Generator[None, None, \"Container\"]:\n        \"\"\"\n        Watches a container for completion, yielding the latest container\n        snapshot on each iteration.\n        \"\"\"\n        container: \"Container\" = 
docker_client.containers.get(container_id)\n\n        status = container.status\n        self._logger.info(\n            f\"Docker container {container.name!r} has status {container.status!r}\"\n        )\n        yield container\n\n        if configuration.stream_output:\n            try:\n                for log in container.logs(stream=True):\n                    log: bytes\n                    print(log.decode().rstrip())\n            except docker.errors.APIError as exc:\n                if \"marked for removal\" in str(exc):\n                    self._logger.warning(\n                        f\"Docker container {container.name} was marked for removal\"\n                        \" before logs could be retrieved. Output will not be\"\n                        \" streamed. \"\n                    )\n                else:\n                    self._logger.exception(\n                        \"An unexpected Docker API error occurred while streaming output \"\n                        f\"from container {container.name}.\"\n                    )\n\n            container.reload()\n            if container.status != status:\n                self._logger.info(\n                    f\"Docker container {container.name!r} has status\"\n                    f\" {container.status!r}\"\n                )\n            yield container\n\n        container.wait()\n        self._logger.info(\n            f\"Docker container {container.name!r} has status {container.status!r}\"\n        )\n        yield container\n\n    def _should_pull_image(\n        self, docker_client: \"DockerClient\", configuration: DockerWorkerJobConfiguration\n    ) -> bool:\n        \"\"\"\n        Decide whether we need to pull the Docker image.\n        \"\"\"\n        image_pull_policy = configuration._determine_image_pull_policy()\n\n        if image_pull_policy is ImagePullPolicy.ALWAYS:\n            return True\n        elif image_pull_policy is ImagePullPolicy.NEVER:\n            return False\n        elif image_pull_policy is ImagePullPolicy.IF_NOT_PRESENT:\n            try:\n                # NOTE: images.get() wants the tag included with the image\n                # name, while images.pull() wants them split.\n                docker_client.images.get(configuration.image)\n            except docker.errors.ImageNotFound:\n                self._logger.debug(\n                    f\"Could not find Docker image locally: {configuration.image}\"\n                )\n                return True\n        return False\n\n    def _pull_image(\n        self, docker_client: \"DockerClient\", configuration: DockerWorkerJobConfiguration\n    ):\n        \"\"\"\n        Pull the image we're going to use to create the container.\n        \"\"\"\n        image, tag = parse_image_tag(configuration.image)\n\n        return docker_client.images.pull(image, tag)\n\n    def _create_container(self, docker_client: \"DockerClient\", **kwargs) -> \"Container\":\n        \"\"\"\n        Create a docker container with retries on name conflicts.\n\n        If the container already exists with the given name, an incremented index is\n        added.\n        \"\"\"\n        # Create the container with retries on name conflicts (with an incremented idx)\n        index = 0\n        container = None\n        name = original_name = kwargs.pop(\"name\")\n\n        while not container:\n            try:\n                display_name = repr(name) if name else \"with auto-generated name\"\n                self._logger.info(f\"Creating Docker container 
{display_name}...\")\n                container = docker_client.containers.create(name=name, **kwargs)\n            except docker.errors.APIError as exc:\n                if \"Conflict\" in str(exc) and \"container name\" in str(exc):\n                    self._logger.info(\n                        f\"Docker container name {display_name} already exists; \"\n                        \"retrying...\"\n                    )\n                    index += 1\n                    name = f\"{original_name}-{index}\"\n                else:\n                    raise\n\n        self._logger.info(\n            f\"Docker container {container.name!r} has status {container.status!r}\"\n        )\n        return container\n\n    def _container_as_resource(self, container: \"Container\") -> Dict[str, str]:\n        \"\"\"Convert a container to a resource dictionary\"\"\"\n        return {\n            \"prefect.resource.id\": f\"prefect.docker.container.{container.id}\",\n            \"prefect.resource.name\": container.name,\n        }\n\n    def _emit_container_creation_failed_event(\n        self, configuration: DockerWorkerJobConfiguration\n    ) -> Event:\n        \"\"\"Emit a Prefect event when a docker container fails to be created.\"\"\"\n        return emit_event(\n            event=\"prefect.docker.container.creation-failed\",\n            resource=self._event_resource(),\n            related=self._event_related_resources(configuration=configuration),\n        )\n\n    def _emit_container_status_change_event(\n        self,\n        container: \"Container\",\n        configuration: DockerWorkerJobConfiguration,\n        last_event: Optional[Event] = None,\n    ) -> Event:\n        \"\"\"Emit a Prefect event for a Docker container event.\"\"\"\n        related = self._event_related_resources(configuration=configuration)\n\n        worker_resource = self._event_resource()\n        worker_resource[\"prefect.resource.role\"] = \"worker\"\n        worker_related_resource = RelatedResource(__root__=worker_resource)\n\n        return emit_event(\n            event=f\"prefect.docker.container.{container.status.lower()}\",\n            resource=self._container_as_resource(container),\n            related=related + [worker_related_resource],\n            follows=last_event,\n        )\n
    "},{"location":"integrations/prefect-docker/worker/#prefect_docker.worker.DockerWorker.kill_infrastructure","title":"kill_infrastructure async","text":"

    Stops a container for a cancelled flow run based on the provided infrastructure PID.

    Source code in prefect_docker/worker.py
    async def kill_infrastructure(\n    self,\n    infrastructure_pid: str,\n    configuration: DockerWorkerJobConfiguration,\n    grace_seconds: int = 30,\n):\n    \"\"\"\n    Stops a container for a cancelled flow run based on the provided infrastructure\n    PID.\n    \"\"\"\n    docker_client = self._get_client()\n\n    base_url, container_id = self._parse_infrastructure_pid(infrastructure_pid)\n    if docker_client.api.base_url != base_url:\n        raise InfrastructureNotAvailable(\n            \"\".join(\n                [\n                    (\n                        f\"Unable to stop container {container_id!r}: the current\"\n                        \" Docker API \"\n                    ),\n                    (\n                        f\"URL {docker_client.api.base_url!r} does not match the\"\n                        \" expected \"\n                    ),\n                    f\"API base URL {base_url}.\",\n                ]\n            )\n        )\n    await run_sync_in_worker_thread(\n        self._stop_container, container_id, docker_client, grace_seconds\n    )\n
    "},{"location":"integrations/prefect-docker/worker/#prefect_docker.worker.DockerWorker.run","title":"run async","text":"

    Executes a flow run within a Docker container and waits for the flow run to complete.

    Source code in prefect_docker/worker.py
    async def run(\n    self,\n    flow_run: \"FlowRun\",\n    configuration: BaseJobConfiguration,\n    task_status: Optional[anyio.abc.TaskStatus] = None,\n) -> BaseWorkerResult:\n    \"\"\"\n    Executes a flow run within a Docker container and waits for the flow run\n    to complete.\n    \"\"\"\n    # The `docker` library uses requests instead of an async http library so it must\n    # be run in a thread to avoid blocking the event loop.\n    container, created_event = await run_sync_in_worker_thread(\n        self._create_and_start_container, configuration\n    )\n    container_pid = self._get_infrastructure_pid(container_id=container.id)\n\n    # Mark as started and return the infrastructure id\n    if task_status:\n        task_status.started(container_pid)\n\n    # Monitor the container\n    container = await run_sync_in_worker_thread(\n        self._watch_container_safe, container, configuration, created_event\n    )\n\n    exit_code = container.attrs[\"State\"].get(\"ExitCode\")\n    return DockerWorkerResult(\n        status_code=exit_code if exit_code is not None else -1,\n        identifier=container_pid,\n    )\n
    "},{"location":"integrations/prefect-docker/worker/#prefect_docker.worker.DockerWorkerJobConfiguration","title":"DockerWorkerJobConfiguration","text":"

    Bases: BaseJobConfiguration

    Configuration class used by the Docker worker.

    An instance of this class is passed to the Docker worker's run method for each flow run. It contains all the information necessary to execute the flow run as a Docker container.

    Attributes:

    Name Type Description name

    The name to give to created Docker containers.

    command

    The command executed in created Docker containers to kick off flow run execution.

    env

    The environment variables to set in created Docker containers.

    labels

    The labels to set on created Docker containers.

    image str

    The image reference of a container image to use for created jobs. If not set, the latest Prefect image will be used.

    image_pull_policy Optional[Literal['IfNotPresent', 'Always', 'Never']]

    The image pull policy to use when pulling images.

    networks List[str]

    Docker networks that created containers should be connected to.

    network_mode Optional[str]

    The network mode for the created containers (e.g. host, bridge). If 'networks' is set, this cannot be set.

    auto_remove bool

    If set, containers will be deleted on completion.

    volumes List[str]

    Docker volumes that should be mounted in created containers.

    stream_output bool

    If set, the output from created containers will be streamed to local standard output.

    mem_limit Optional[str]

    Memory limit of created containers. Accepts a value with a unit identifier (e.g. 100000b, 1000k, 128m, 1g.) If a value is given without a unit, bytes are assumed.

    memswap_limit Optional[str]

    Total memory (memory + swap), -1 to disable swap. Should only be set if mem_limit is also set. If mem_limit is set, this defaults to allowing the container to use as much swap as memory. For example, if mem_limit is 300m and memswap_limit is not set, containers can use 600m in total of memory and swap.

    privileged bool

    Give extended privileges to created containers.
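
    For illustration, a configuration can be constructed directly with a few of these fields. This is a minimal sketch with placeholder values; in practice the values usually come from a work pool's base job template.

    from prefect_docker.worker import DockerWorkerJobConfiguration\n\n# Placeholder values for illustration only.\nconfig = DockerWorkerJobConfiguration(\n    image=\"docker.io/prefecthq/prefect:2-latest\",\n    volumes=[\"/my/local/path:/path/in/container\"],\n    mem_limit=\"512m\",\n    auto_remove=True,\n)\n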

    Source code in prefect_docker/worker.py
    class DockerWorkerJobConfiguration(BaseJobConfiguration):\n    \"\"\"\n    Configuration class used by the Docker worker.\n\n    An instance of this class is passed to the Docker worker's `run` method\n    for each flow run. It contains all the information necessary to execute the\n    flow run as a Docker container.\n\n    Attributes:\n        name: The name to give to created Docker containers.\n        command: The command executed in created Docker containers to kick off\n            flow run execution.\n        env: The environment variables to set in created Docker containers.\n        labels: The labels to set on created Docker containers.\n        image: The image reference of a container image to use for created jobs.\n            If not set, the latest Prefect image will be used.\n        image_pull_policy: The image pull policy to use when pulling images.\n        networks: Docker networks that created containers should be connected to.\n        network_mode: The network mode for the created containers (e.g. host, bridge).\n            If 'networks' is set, this cannot be set.\n        auto_remove: If set, containers will be deleted on completion.\n        volumes: Docker volumes that should be mounted in created containers.\n        stream_output: If set, the output from created containers will be streamed\n            to local standard output.\n        mem_limit: Memory limit of created containers. Accepts a value\n            with a unit identifier (e.g. 100000b, 1000k, 128m, 1g.) If a value is\n            given without a unit, bytes are assumed.\n        memswap_limit: Total memory (memory + swap), -1 to disable swap. Should only be\n            set if `mem_limit` is also set. If `mem_limit` is set, this defaults to\n            allowing the container to use as much swap as memory. For example, if\n            `mem_limit` is 300m and `memswap_limit` is not set, containers can use\n            600m in total of memory and swap.\n        privileged: Give extended privileges to created containers.\n    \"\"\"\n\n    image: str = Field(\n        default_factory=get_prefect_image_name,\n        description=\"The image reference of a container image to use for created jobs. \"\n        \"If not set, the latest Prefect image will be used.\",\n        example=\"docker.io/prefecthq/prefect:2-latest\",\n    )\n    registry_credentials: Optional[DockerRegistryCredentials] = Field(\n        default=None,\n        description=\"Credentials for logging into a Docker registry to pull\"\n        \" images from.\",\n    )\n    image_pull_policy: Optional[Literal[\"IfNotPresent\", \"Always\", \"Never\"]] = Field(\n        default=None,\n        description=\"The image pull policy to use when pulling images.\",\n    )\n    networks: List[str] = Field(\n        default_factory=list,\n        description=\"Docker networks that created containers should be connected to.\",\n    )\n    network_mode: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The network mode for the created containers (e.g. host, bridge). 
If\"\n            \" 'networks' is set, this cannot be set.\"\n        ),\n    )\n    auto_remove: bool = Field(\n        default=False,\n        description=\"If set, containers will be deleted on completion.\",\n    )\n    volumes: List[str] = Field(\n        default_factory=list,\n        description=\"A list of volume to mount into created containers.\",\n        example=[\"/my/local/path:/path/in/container\"],\n    )\n    stream_output: bool = Field(\n        default=True,\n        description=(\n            \"If set, the output from created containers will be streamed to local \"\n            \"standard output.\"\n        ),\n    )\n    mem_limit: Optional[str] = Field(\n        default=None,\n        title=\"Memory Limit\",\n        description=(\n            \"Memory limit of created containers. Accepts a value \"\n            \"with a unit identifier (e.g. 100000b, 1000k, 128m, 1g.) \"\n            \"If a value is given without a unit, bytes are assumed.\"\n        ),\n    )\n    memswap_limit: Optional[str] = Field(\n        default=None,\n        title=\"Memory Swap Limit\",\n        description=(\n            \"Total memory (memory + swap), -1 to disable swap. Should only be \"\n            \"set if `mem_limit` is also set. If `mem_limit` is set, this defaults to\"\n            \"allowing the container to use as much swap as memory. For example, if \"\n            \"`mem_limit` is 300m and `memswap_limit` is not set, containers can use \"\n            \"600m in total of memory and swap.\"\n        ),\n    )\n\n    privileged: bool = Field(\n        default=False,\n        description=\"Give extended privileges to created container.\",\n    )\n\n    @validator(\"volumes\")\n    def _validate_volume_format(cls, volumes):\n        \"\"\"Validates that provided volume strings are in the correct format.\"\"\"\n        for volume in volumes:\n            if \":\" not in volume:\n                raise ValueError(\n                    \"Invalid volume specification. 
\"\n                    f\"Expected format 'path:container_path', but got {volume!r}\"\n                )\n\n        return volumes\n\n    def _convert_labels_to_docker_format(self, labels: Dict[str, str]):\n        \"\"\"Converts labels to the format expected by Docker.\"\"\"\n        labels = labels or {}\n        new_labels = {}\n        for name, value in labels.items():\n            if \"/\" in name:\n                namespace, key = name.split(\"/\", maxsplit=1)\n                new_namespace = \".\".join(reversed(namespace.split(\".\")))\n                new_labels[f\"{new_namespace}.{key}\"] = value\n            else:\n                new_labels[name] = value\n        return new_labels\n\n    def _slugify_container_name(self) -> Optional[str]:\n        \"\"\"\n        Generates a container name to match the configured name, ensuring it is Docker\n        compatible.\n        \"\"\"\n        # Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+` in the end\n        if not self.name:\n            return None\n\n        return (\n            slugify(\n                self.name,\n                lowercase=False,\n                # Docker does not limit length but URL limits apply eventually so\n                # limit the length for safety\n                max_length=250,\n                # Docker allows these characters for container names\n                regex_pattern=r\"[^a-zA-Z0-9_.-]+\",\n            ).lstrip(\n                # Docker does not allow leading underscore, dash, or period\n                \"_-.\"\n            )\n            # Docker does not allow 0 character names so cast to null if the name is\n            # empty after slufification\n            or None\n        )\n\n    def _base_environment(self):\n        \"\"\"\n        If the API URL has been set update the value to ensure connectivity\n        when using a bridge network by updating local connections to use the\n        docker internal host unless the network mode is \"host\" where localhost\n        is available already.\n        \"\"\"\n\n        base_env = super()._base_environment()\n        network_mode = self.get_network_mode()\n        if (\n            \"PREFECT_API_URL\" in base_env\n            and base_env[\"PREFECT_API_URL\"] is not None\n            and network_mode != \"host\"\n        ):\n            base_env[\"PREFECT_API_URL\"] = (\n                base_env[\"PREFECT_API_URL\"]\n                .replace(\"localhost\", \"host.docker.internal\")\n                .replace(\"127.0.0.1\", \"host.docker.internal\")\n            )\n        return base_env\n\n    def prepare_for_flow_run(\n        self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"DeploymentResponse\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ):\n        \"\"\"\n        Prepares the agent for a flow run by setting the image, labels, and name\n        attributes.\n        \"\"\"\n        super().prepare_for_flow_run(flow_run, deployment, flow)\n\n        self.image = self.image or get_prefect_image_name()\n        self.labels = self._convert_labels_to_docker_format(\n            {**self.labels, **CONTAINER_LABELS}\n        )\n        self.name = self._slugify_container_name()\n\n    def get_network_mode(self) -> Optional[str]:\n        \"\"\"\n        Returns the network mode to use for the container based on the configured\n        options and the platform.\n        \"\"\"\n        # User's value takes precedence; this may collide with the incompatible options\n        # mentioned below.\n        if 
self.network_mode:\n            if sys.platform != \"linux\" and self.network_mode == \"host\":\n                warnings.warn(\n                    f\"{self.network_mode!r} network mode is not supported on platform \"\n                    f\"{sys.platform!r} and may not work as intended.\"\n                )\n            return self.network_mode\n\n        # Network mode is not compatible with networks or ports (we do not support ports\n        # yet though)\n        if self.networks:\n            return None\n\n        # Check for a local API connection\n        api_url = self.env.get(\"PREFECT_API_URL\", PREFECT_API_URL.value())\n\n        if api_url:\n            try:\n                _, netloc, _, _, _, _ = urllib.parse.urlparse(api_url)\n            except Exception as exc:\n                warnings.warn(\n                    f\"Failed to parse host from API URL {api_url!r} with exception: \"\n                    f\"{exc}\\nThe network mode will not be inferred.\"\n                )\n                return None\n\n            host = netloc.split(\":\")[0]\n\n            # If using a locally hosted API, use a host network on linux\n            if sys.platform == \"linux\" and (host == \"127.0.0.1\" or host == \"localhost\"):\n                return \"host\"\n\n        # Default to unset\n        return None\n\n    def get_extra_hosts(self, docker_client) -> Optional[Dict[str, str]]:\n        \"\"\"\n        A host.docker.internal -> host-gateway mapping is necessary for communicating\n        with the API on Linux machines. Docker Desktop on macOS will automatically\n        already have this mapping.\n        \"\"\"\n        if sys.platform == \"linux\" and (\n            # Do not warn if the user has specified a host manually that does not use\n            # a local address\n            \"PREFECT_API_URL\" not in self.env\n            or re.search(\n                \".*(localhost)|(127.0.0.1)|(host.docker.internal).*\",\n                self.env[\"PREFECT_API_URL\"],\n            )\n        ):\n            user_version = packaging.version.parse(\n                format_outlier_version_name(docker_client.version()[\"Version\"])\n            )\n            required_version = packaging.version.parse(\"20.10.0\")\n\n            if user_version < required_version:\n                warnings.warn(\n                    \"`host.docker.internal` could not be automatically resolved to\"\n                    \" your local ip address. This feature is not supported on Docker\"\n                    f\" Engine v{user_version}, upgrade to v{required_version}+ if you\"\n                    \" encounter issues.\"\n                )\n                return {}\n            else:\n                # Compatibility for linux -- https://github.com/docker/cli/issues/2290\n                # Only supported by Docker v20.10.0+ which is our minimum recommend\n                # version\n                return {\"host.docker.internal\": \"host-gateway\"}\n\n    def _determine_image_pull_policy(self) -> ImagePullPolicy:\n        \"\"\"\n        Determine the appropriate image pull policy.\n\n        1. If they specified an image pull policy, use that.\n\n        2. If they did not specify an image pull policy and gave us\n           the \"latest\" tag, use ImagePullPolicy.always.\n\n        3. If they did not specify an image pull policy and did not\n           specify a tag, use ImagePullPolicy.always.\n\n        4. 
If they did not specify an image pull policy and gave us\n           a tag other than \"latest\", use ImagePullPolicy.if_not_present.\n\n        This logic matches the behavior of Kubernetes.\n        See:https://kubernetes.io/docs/concepts/containers/images/#imagepullpolicy-defaulting\n        \"\"\"\n        if not self.image_pull_policy:\n            _, tag = parse_image_tag(self.image)\n            if tag == \"latest\" or not tag:\n                return ImagePullPolicy.ALWAYS\n            return ImagePullPolicy.IF_NOT_PRESENT\n        return ImagePullPolicy(self.image_pull_policy)\n
    "},{"location":"integrations/prefect-docker/worker/#prefect_docker.worker.DockerWorkerJobConfiguration.get_extra_hosts","title":"get_extra_hosts","text":"

    A host.docker.internal -> host-gateway mapping is necessary for communicating with the API on Linux machines. Docker Desktop on macOS already includes this mapping automatically.

    Source code in prefect_docker/worker.py
    def get_extra_hosts(self, docker_client) -> Optional[Dict[str, str]]:\n    \"\"\"\n    A host.docker.internal -> host-gateway mapping is necessary for communicating\n    with the API on Linux machines. Docker Desktop on macOS will automatically\n    already have this mapping.\n    \"\"\"\n    if sys.platform == \"linux\" and (\n        # Do not warn if the user has specified a host manually that does not use\n        # a local address\n        \"PREFECT_API_URL\" not in self.env\n        or re.search(\n            \".*(localhost)|(127.0.0.1)|(host.docker.internal).*\",\n            self.env[\"PREFECT_API_URL\"],\n        )\n    ):\n        user_version = packaging.version.parse(\n            format_outlier_version_name(docker_client.version()[\"Version\"])\n        )\n        required_version = packaging.version.parse(\"20.10.0\")\n\n        if user_version < required_version:\n            warnings.warn(\n                \"`host.docker.internal` could not be automatically resolved to\"\n                \" your local ip address. This feature is not supported on Docker\"\n                f\" Engine v{user_version}, upgrade to v{required_version}+ if you\"\n                \" encounter issues.\"\n            )\n            return {}\n        else:\n            # Compatibility for linux -- https://github.com/docker/cli/issues/2290\n            # Only supported by Docker v20.10.0+ which is our minimum recommend\n            # version\n            return {\"host.docker.internal\": \"host-gateway\"}\n
    "},{"location":"integrations/prefect-docker/worker/#prefect_docker.worker.DockerWorkerJobConfiguration.get_network_mode","title":"get_network_mode","text":"

    Returns the network mode to use for the container based on the configured options and the platform.

    Source code in prefect_docker/worker.py
    def get_network_mode(self) -> Optional[str]:\n    \"\"\"\n    Returns the network mode to use for the container based on the configured\n    options and the platform.\n    \"\"\"\n    # User's value takes precedence; this may collide with the incompatible options\n    # mentioned below.\n    if self.network_mode:\n        if sys.platform != \"linux\" and self.network_mode == \"host\":\n            warnings.warn(\n                f\"{self.network_mode!r} network mode is not supported on platform \"\n                f\"{sys.platform!r} and may not work as intended.\"\n            )\n        return self.network_mode\n\n    # Network mode is not compatible with networks or ports (we do not support ports\n    # yet though)\n    if self.networks:\n        return None\n\n    # Check for a local API connection\n    api_url = self.env.get(\"PREFECT_API_URL\", PREFECT_API_URL.value())\n\n    if api_url:\n        try:\n            _, netloc, _, _, _, _ = urllib.parse.urlparse(api_url)\n        except Exception as exc:\n            warnings.warn(\n                f\"Failed to parse host from API URL {api_url!r} with exception: \"\n                f\"{exc}\\nThe network mode will not be inferred.\"\n            )\n            return None\n\n        host = netloc.split(\":\")[0]\n\n        # If using a locally hosted API, use a host network on linux\n        if sys.platform == \"linux\" and (host == \"127.0.0.1\" or host == \"localhost\"):\n            return \"host\"\n\n    # Default to unset\n    return None\n
    "},{"location":"integrations/prefect-docker/worker/#prefect_docker.worker.DockerWorkerJobConfiguration.prepare_for_flow_run","title":"prepare_for_flow_run","text":"

    Prepares the agent for a flow run by setting the image, labels, and name attributes.

    Source code in prefect_docker/worker.py
    def prepare_for_flow_run(\n    self,\n    flow_run: \"FlowRun\",\n    deployment: Optional[\"DeploymentResponse\"] = None,\n    flow: Optional[\"Flow\"] = None,\n):\n    \"\"\"\n    Prepares the agent for a flow run by setting the image, labels, and name\n    attributes.\n    \"\"\"\n    super().prepare_for_flow_run(flow_run, deployment, flow)\n\n    self.image = self.image or get_prefect_image_name()\n    self.labels = self._convert_labels_to_docker_format(\n        {**self.labels, **CONTAINER_LABELS}\n    )\n    self.name = self._slugify_container_name()\n
    "},{"location":"integrations/prefect-docker/worker/#prefect_docker.worker.DockerWorkerResult","title":"DockerWorkerResult","text":"

    Bases: BaseWorkerResult

    Contains information about a completed Docker container

    Source code in prefect_docker/worker.py
    class DockerWorkerResult(BaseWorkerResult):\n    \"\"\"Contains information about a completed Docker container\"\"\"\n
    "},{"location":"integrations/prefect-docker/worker/#prefect_docker.worker.ImagePullPolicy","title":"ImagePullPolicy","text":"

    Bases: Enum

    Enum representing the image pull policy options for a Docker container.

    Source code in prefect_docker/worker.py
    class ImagePullPolicy(enum.Enum):\n    \"\"\"Enum representing the image pull policy options for a Docker container.\"\"\"\n\n    IF_NOT_PRESENT = \"IfNotPresent\"\n    ALWAYS = \"Always\"\n    NEVER = \"Never\"\n
    "},{"location":"integrations/prefect-email/","title":"prefect-email","text":"

    Visit the full docs here to see additional examples and the API reference.

    prefect-email is a collection of prebuilt Prefect integrations that can be used to interact with email services.

    "},{"location":"integrations/prefect-email/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-email/#integrate-with-prefect-flows","title":"Integrate with Prefect flows","text":"

    prefect-email makes sending emails effortless, giving you peace of mind that your emails are being sent as expected.

    First, install prefect-email and save your email credentials to a block to run the examples below!

    from prefect import flow\nfrom prefect_email import EmailServerCredentials, email_send_message\n\n@flow\ndef example_email_send_message_flow(email_addresses):\n    email_server_credentials = EmailServerCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n    for email_address in email_addresses:\n        subject = email_send_message.with_options(name=f\"email {email_address}\").submit(\n            email_server_credentials=email_server_credentials,\n            subject=\"Example Flow Notification using Gmail\",\n            msg=\"This proves email_send_message works!\",\n            email_to=email_address,\n        )\n\nexample_email_send_message_flow([\"EMAIL-ADDRESS-PLACEHOLDER\"])\n

    Outputs:

    16:58:27.646 | INFO    | prefect.engine - Created flow run 'busy-bat' for flow 'example-email-send-message-flow'\n16:58:29.225 | INFO    | Flow run 'busy-bat' - Created task run 'email someone@gmail.com-0' for task 'email someone@gmail.com'\n16:58:29.229 | INFO    | Flow run 'busy-bat' - Submitted task run 'email someone@gmail.com-0' for execution.\n16:58:31.523 | INFO    | Task run 'email someone@gmail.com-0' - Finished in state Completed()\n16:58:31.713 | INFO    | Flow run 'busy-bat' - Finished in state Completed('All states completed.')\n

    Note that many email services, like Gmail, require an App Password to send emails successfully. If you encounter an error similar to smtplib.SMTPAuthenticationError: (535, b'5.7.8 Username and Password not accepted..., it's likely you are not using an App Password.

    "},{"location":"integrations/prefect-email/#capture-exceptions-and-notify-by-email","title":"Capture exceptions and notify by email","text":"

    Perhaps you want an email notification with the details of the exception when your flow run fails.

    prefect-email can be wrapped in an except statement to do just that!

    from prefect import flow\nfrom prefect.context import get_run_context\nfrom prefect_email import EmailServerCredentials, email_send_message\n\ndef notify_exc_by_email(exc):\n    context = get_run_context()\n    flow_run_name = context.flow_run.name\n    email_server_credentials = EmailServerCredentials.load(\"email-server-credentials\")\n    email_send_message(\n        email_server_credentials=email_server_credentials,\n        subject=f\"Flow run {flow_run_name!r} failed\",\n        msg=f\"Flow run {flow_run_name!r} failed due to {exc}.\",\n        email_to=email_server_credentials.username,\n    )\n\n@flow\ndef example_flow():\n    try:\n        1 / 0\n    except Exception as exc:\n        notify_exc_by_email(exc)\n        raise\n\nexample_flow()\n
    "},{"location":"integrations/prefect-email/#resources","title":"Resources","text":"

    For more tips on how to use tasks and flows in a Collection, check out Using Collections!

    "},{"location":"integrations/prefect-email/#installation","title":"Installation","text":"

    Install prefect-email with pip:

    pip install prefect-email\n

    Then, register the blocks in this module to view them on Prefect Cloud:

    prefect block register -m prefect_email\n

    Note, to use the load method on Blocks, you must already have a block document saved through code or saved through the UI.

    Requires an installation of Python 3.8+.

    We recommend using a Python virtual environment manager such as pipenv, conda or virtualenv.

    These tasks are designed to work with Prefect 2. For more information about how to use Prefect, please refer to the Prefect documentation.

    "},{"location":"integrations/prefect-email/#saving-credentials-to-block","title":"Saving credentials to block","text":"

    Note, to use the load method on Blocks, you must already have a block document saved through code or saved through the UI.

    Below is a walkthrough on saving block documents through code.

    Create a short script, replacing the placeholders.

    from prefect_email import EmailServerCredentials\n\ncredentials = EmailServerCredentials(\n    username=\"EMAIL-ADDRESS-PLACEHOLDER\",\n    password=\"PASSWORD-PLACEHOLDER\",  # must be an app password\n)\ncredentials.save(\"BLOCK-NAME-PLACEHOLDER\")\n

    Congrats! You can now easily load the saved block, which holds your credentials:

    from prefect_email import EmailServerCredentials\n\nEmailServerCredentials.load(\"BLOCK_NAME_PLACEHOLDER\")\n

    Registering blocks

    Register blocks in this module to view and edit them on Prefect Cloud:

    prefect block register -m prefect_email\n
    "},{"location":"integrations/prefect-email/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-email/credentials/#prefect_email.credentials","title":"prefect_email.credentials","text":"

    Credential classes used to perform authenticated interactions with email services

    "},{"location":"integrations/prefect-email/credentials/#prefect_email.credentials.EmailServerCredentials","title":"EmailServerCredentials","text":"

    Bases: Block

    Block used to manage generic email server authentication. It is recommended you use a Google App Password if you use Gmail.

    Attributes:

    username (Optional[str]): The username to use for authentication to the server. Unnecessary if SMTP login is not required.

    password (SecretStr): The password to use for authentication to the server. Unnecessary if SMTP login is not required.

    smtp_server (Union[SMTPServer, str]): Either the hostname of the SMTP server, or one of the keys from the built-in SMTPServer Enum members, like \"gmail\".

    smtp_type (Union[SMTPType, str]): Either \"SSL\", \"STARTTLS\", or \"INSECURE\".

    smtp_port (Optional[int]): If provided, overrides the smtp_type's default port number.

    Example

    Load stored email server credentials:

    from prefect_email import EmailServerCredentials\nemail_credentials_block = EmailServerCredentials.load(\"BLOCK_NAME\")\n
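
    If you are not using one of the built-in SMTPServer presets, you can also construct the block with an explicit hostname, protocol, and port. Below is a minimal sketch; the hostname, credentials, and port are placeholders:

    from prefect_email import EmailServerCredentials\n\n# Placeholder values; replace with your own SMTP server details\ncredentials = EmailServerCredentials(\n    username=\"USERNAME-PLACEHOLDER\",\n    password=\"PASSWORD-PLACEHOLDER\",\n    smtp_server=\"smtp.example.com\",  # any hostname is accepted, not just the Enum keys\n    smtp_type=\"STARTTLS\",  # defaults to \"SSL\"\n    smtp_port=2525,  # optional; overrides the protocol's default port\n)\ncredentials.save(\"CUSTOM-SMTP-BLOCK-PLACEHOLDER\")\n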

    Source code in prefect_email/credentials.py
    class EmailServerCredentials(Block):\n    \"\"\"\n    Block used to manage generic email server authentication.\n    It is recommended you use a\n    [Google App Password](https://support.google.com/accounts/answer/185833)\n    if you use Gmail.\n\n    Attributes:\n        username: The username to use for authentication to the server.\n            Unnecessary if SMTP login is not required.\n        password: The password to use for authentication to the server.\n            Unnecessary if SMTP login is not required.\n        smtp_server: Either the hostname of the SMTP server, or one of the\n            keys from the built-in SMTPServer Enum members, like \"gmail\".\n        smtp_type: Either \"SSL\", \"STARTTLS\", or \"INSECURE\".\n        smtp_port: If provided, overrides the smtp_type's default port number.\n\n    Example:\n        Load stored email server credentials:\n        ```python\n        from prefect_email import EmailServerCredentials\n        email_credentials_block = EmailServerCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"  # noqa E501\n\n    _block_type_name = \"Email Server Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/82bc6ed16ca42a2252a5512c72233a253b8a58eb-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-email/credentials/#prefect_email.credentials.EmailServerCredentials\"  # noqa\n\n    username: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The username to use for authentication to the server. \"\n            \"Unnecessary if SMTP login is not required.\"\n        ),\n    )\n    password: SecretStr = Field(\n        default_factory=partial(SecretStr, \"\"),\n        description=(\n            \"The password to use for authentication to the server. 
\"\n            \"Unnecessary if SMTP login is not required.\"\n        ),\n    )\n    smtp_server: Union[SMTPServer, str] = Field(\n        default=SMTPServer.GMAIL,\n        description=(\n            \"Either the hostname of the SMTP server, or one of the \"\n            \"keys from the built-in SMTPServer Enum members, like 'gmail'.\"\n        ),\n        title=\"SMTP Server\",\n    )\n    smtp_type: Union[SMTPType, str] = Field(\n        default=SMTPType.SSL,\n        description=(\"Either 'SSL', 'STARTTLS', or 'INSECURE'.\"),\n        title=\"SMTP Type\",\n    )\n    smtp_port: Optional[int] = Field(\n        default=None,\n        description=(\"If provided, overrides the smtp_type's default port number.\"),\n        title=\"SMTP Port\",\n    )\n\n    @validator(\"smtp_server\", pre=True)\n    def _cast_smtp_server(cls, value):\n        \"\"\"\n        Cast the smtp_server to an SMTPServer Enum member, if valid.\n        \"\"\"\n        return _cast_to_enum(value, SMTPServer)\n\n    @validator(\"smtp_type\", pre=True)\n    def _cast_smtp_type(cls, value):\n        \"\"\"\n        Cast the smtp_type to an SMTPType Enum member, if valid.\n        \"\"\"\n        if isinstance(value, int):\n            return SMTPType(value)\n        return _cast_to_enum(value, SMTPType, restrict=True)\n\n    def get_server(self) -> SMTP:\n        \"\"\"\n        Gets an authenticated SMTP server.\n\n        Returns:\n            SMTP: An authenticated SMTP server.\n\n        Example:\n            Gets a GMail SMTP server through defaults.\n            ```python\n            from prefect import flow\n            from prefect_email import EmailServerCredentials\n\n            @flow\n            def example_get_server_flow():\n                email_server_credentials = EmailServerCredentials(\n                    username=\"username@gmail.com\",\n                    password=\"password\",\n                )\n                server = email_server_credentials.get_server()\n                return server\n\n            example_get_server_flow()\n            ```\n        \"\"\"\n        smtp_server = self.smtp_server\n        if isinstance(smtp_server, SMTPServer):\n            smtp_server = smtp_server.value\n\n        smtp_type = self.smtp_type\n        smtp_port = self.smtp_port\n        if smtp_port is None:\n            smtp_port = smtp_type.value\n\n        if smtp_type == SMTPType.INSECURE:\n            server = SMTP(smtp_server, smtp_port)\n        else:\n            context = ssl.create_default_context()\n            if smtp_type == SMTPType.SSL:\n                server = SMTP_SSL(smtp_server, smtp_port, context=context)\n            elif smtp_type == SMTPType.STARTTLS:\n                server = SMTP(smtp_server, smtp_port)\n                server.starttls(context=context)\n            if self.username is not None:\n                server.login(self.username, self.password.get_secret_value())\n\n        return server\n
    "},{"location":"integrations/prefect-email/credentials/#prefect_email.credentials.EmailServerCredentials.get_server","title":"get_server","text":"

    Gets an authenticated SMTP server.

    Returns:

    SMTP: An authenticated SMTP server.

    Example

    Gets a GMail SMTP server through defaults.

    from prefect import flow\nfrom prefect_email import EmailServerCredentials\n\n@flow\ndef example_get_server_flow():\n    email_server_credentials = EmailServerCredentials(\n        username=\"username@gmail.com\",\n        password=\"password\",\n    )\n    server = email_server_credentials.get_server()\n    return server\n\nexample_get_server_flow()\n

    Source code in prefect_email/credentials.py
    def get_server(self) -> SMTP:\n    \"\"\"\n    Gets an authenticated SMTP server.\n\n    Returns:\n        SMTP: An authenticated SMTP server.\n\n    Example:\n        Gets a GMail SMTP server through defaults.\n        ```python\n        from prefect import flow\n        from prefect_email import EmailServerCredentials\n\n        @flow\n        def example_get_server_flow():\n            email_server_credentials = EmailServerCredentials(\n                username=\"username@gmail.com\",\n                password=\"password\",\n            )\n            server = email_server_credentials.get_server()\n            return server\n\n        example_get_server_flow()\n        ```\n    \"\"\"\n    smtp_server = self.smtp_server\n    if isinstance(smtp_server, SMTPServer):\n        smtp_server = smtp_server.value\n\n    smtp_type = self.smtp_type\n    smtp_port = self.smtp_port\n    if smtp_port is None:\n        smtp_port = smtp_type.value\n\n    if smtp_type == SMTPType.INSECURE:\n        server = SMTP(smtp_server, smtp_port)\n    else:\n        context = ssl.create_default_context()\n        if smtp_type == SMTPType.SSL:\n            server = SMTP_SSL(smtp_server, smtp_port, context=context)\n        elif smtp_type == SMTPType.STARTTLS:\n            server = SMTP(smtp_server, smtp_port)\n            server.starttls(context=context)\n        if self.username is not None:\n            server.login(self.username, self.password.get_secret_value())\n\n    return server\n
    "},{"location":"integrations/prefect-email/credentials/#prefect_email.credentials.SMTPServer","title":"SMTPServer","text":"

    Bases: Enum

    Server used to send email.

    Source code in prefect_email/credentials.py
    class SMTPServer(Enum):\n    \"\"\"\n    Server used to send email.\n    \"\"\"\n\n    AOL = \"smtp.aol.com\"\n    ATT = \"smtp.mail.att.net\"\n    COMCAST = \"smtp.comcast.net\"\n    ICLOUD = \"smtp.mail.me.com\"\n    GMAIL = \"smtp.gmail.com\"\n    OUTLOOK = \"smtp-mail.outlook.com\"\n    YAHOO = \"smtp.mail.yahoo.com\"\n
    "},{"location":"integrations/prefect-email/credentials/#prefect_email.credentials.SMTPType","title":"SMTPType","text":"

    Bases: Enum

    Protocols used to secure email transmissions.

    Source code in prefect_email/credentials.py
    class SMTPType(Enum):\n    \"\"\"\n    Protocols used to secure email transmissions.\n    \"\"\"\n\n    SSL = 465\n    STARTTLS = 587\n    INSECURE = 25\n
    "},{"location":"integrations/prefect-email/message/","title":"Message","text":""},{"location":"integrations/prefect-email/message/#prefect_email.message","title":"prefect_email.message","text":"

    Tasks for interacting with email message services

    "},{"location":"integrations/prefect-email/message/#prefect_email.message.email_send_message","title":"email_send_message async","text":"

    Sends an email message from an authenticated email service over SMTP. Sending messages containing HTML code is supported - the default MIME type is set to text/html.

    Parameters:

    subject (str, required): The subject line of the email.

    msg (str, required): The contents of the email, added as HTML; can be used in combination with msg_plain.

    msg_plain (Optional[str], default None): The contents of the email as plain text; can be used in combination with msg.

    email_to (Optional[Union[str, List[str]]], default None): The email addresses to send the message to, separated by commas. If a list is provided, the items are joined with commas.

    email_to_cc (Optional[Union[str, List[str]]], default None): Additional email addresses to send the message to as cc, separated by commas. If a list is provided, the items are joined with commas.

    email_to_bcc (Optional[Union[str, List[str]]], default None): Additional email addresses to send the message to as bcc, separated by commas. If a list is provided, the items are joined with commas.

    attachments (Optional[List[str]], default None): Names of files that should be sent as attachments.

    Returns:

    MimeText: The MIME Multipart message of the email.

    Example

    Sends a notification email to someone@gmail.com.

    from prefect import flow\nfrom prefect_email import EmailServerCredentials, email_send_message\n\n@flow\ndef example_email_send_message_flow():\n    email_server_credentials = EmailServerCredentials(\n        username=\"username@email.com\",\n        password=\"password\",\n    )\n    subject = email_send_message(\n        email_server_credentials=email_server_credentials,\n        subject=\"Example Flow Notification\",\n        msg=\"This proves email_send_message works!\",\n        email_to=\"someone@email.com\",\n    )\n    return subject\n\nexample_email_send_message_flow()\n
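
    Since msg is sent as HTML, you can pair it with msg_plain as a plain-text fallback and attach files by path. The sketch below assumes a saved credentials block; the block name, addresses, and file path are placeholders:

    from prefect import flow\nfrom prefect_email import EmailServerCredentials, email_send_message\n\n@flow\ndef example_html_email_flow():\n    email_server_credentials = EmailServerCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n    email_send_message(\n        email_server_credentials=email_server_credentials,\n        subject=\"Report attached\",\n        msg=\"<h1>Hello!</h1><p>See the attached report.</p>\",\n        msg_plain=\"Hello! See the attached report.\",\n        email_to=\"someone@example.com\",\n        attachments=[\"report.csv\"],  # placeholder path; the file must exist locally\n    )\n\nexample_html_email_flow()\n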

    Source code in prefect_email/message.py
    @task\nasync def email_send_message(\n    subject: str,\n    msg: str,\n    email_server_credentials: \"EmailServerCredentials\",\n    msg_plain: Optional[str] = None,\n    email_from: Optional[str] = None,\n    email_to: Optional[Union[str, List[str]]] = None,\n    email_to_cc: Optional[Union[str, List[str]]] = None,\n    email_to_bcc: Optional[Union[str, List[str]]] = None,\n    attachments: Optional[List[str]] = None,\n):\n    \"\"\"\n    Sends an email message from an authenticated email service over SMTP.\n    Sending messages containing HTML code is supported - the default MIME\n    type is set to the text/html.\n\n    Args:\n        subject: The subject line of the email.\n        msg: The contents of the email, added as html; can be used in\n            combination with msg_plain.\n        msg_plain: The contents of the email as plain text,\n            can be used in combination with msg.\n        email_to: The email addresses to send the message to, separated by commas.\n            If a list is provided, will join the items, separated by commas.\n        email_to_cc: Additional email addresses to send the message to as cc,\n            separated by commas. If a list is provided, will join the items,\n            separated by commas.\n        email_to_bcc: Additional email addresses to send the message to as bcc,\n            separated by commas. If a list is provided, will join the items,\n            separated by commas.\n        attachments: Names of files that should be sent as attachment.\n\n    Returns:\n        MimeText: The MIME Multipart message of the email.\n\n    Example:\n        Sends a notification email to someone@gmail.com.\n        ```python\n        from prefect import flow\n        from prefect_email import EmailServerCredentials, email_send_message\n\n        @flow\n        def example_email_send_message_flow():\n            email_server_credentials = EmailServerCredentials(\n                username=\"username@email.com\",\n                password=\"password\",\n            )\n            subject = email_send_message(\n                email_server_credentials=email_server_credentials,\n                subject=\"Example Flow Notification\",\n                msg=\"This proves email_send_message works!\",\n                email_to=\"someone@email.com\",\n            )\n            return subject\n\n        example_email_send_message_flow()\n        ```\n    \"\"\"\n    message = MIMEMultipart()\n    message[\"Subject\"] = subject\n    message[\"From\"] = email_from or email_server_credentials.username\n\n    email_to_dict = {\"To\": email_to, \"Cc\": email_to_cc, \"Bcc\": email_to_bcc}\n    if all(val is None for val in email_to_dict.values()):\n        raise ValueError(\n            \"One of email_to, email_to_cc, or email_to_bcc must be specified\"\n        )\n\n    for key, val in email_to_dict.items():\n        if isinstance(val, list):\n            val = \", \".join(val)\n        message[key] = val\n\n    # First add the message in plain text, then the HTML version;\n    # email clients try to render the last part first\n    if msg_plain:\n        message.attach(MIMEText(msg_plain, \"plain\"))\n    if msg:\n        message.attach(MIMEText(msg, \"html\"))\n\n    for filepath in attachments or []:\n        with open(filepath, \"rb\") as attachment:\n            part = MIMEBase(\"application\", \"octet-stream\")\n            part.set_payload(attachment.read())\n\n        encoders.encode_base64(part)\n        filename = os.path.basename(filepath)\n        
part.add_header(\n            \"Content-Disposition\",\n            f\"attachment; filename= {filename}\",\n        )\n        message.attach(part)\n\n    with email_server_credentials.get_server() as server:\n        partial_send_message = partial(server.send_message, message)\n        await to_thread.run_sync(partial_send_message)\n\n    return message\n
    "},{"location":"integrations/prefect-gcp/","title":"prefect-gcp","text":"

    prefect-gcp makes it easy to leverage the capabilities of Google Cloud Platform (GCP) in your flows, featuring support for Vertex AI, Cloud Run, BigQuery, Cloud Storage, and Secret Manager.

    "},{"location":"integrations/prefect-gcp/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-gcp/#saving-credentials-to-a-block","title":"Saving credentials to a block","text":"

    To use prefect-gcp, first install it and authenticate with a service account.

    prefect-gcp can safely save and load your service account credentials so they can be reused across the collection. Simply follow the steps below.

    1. Refer to the GCP service account documentation on how to create and download a service account key file.
    2. Copy the JSON contents.
    3. Create a short script, replacing the placeholders with your information.
    from prefect_gcp import GcpCredentials\n\n# replace this PLACEHOLDER dict with your own service account info\nservice_account_info = {\n  \"type\": \"service_account\",\n  \"project_id\": \"PROJECT_ID\",\n  \"private_key_id\": \"KEY_ID\",\n  \"private_key\": \"-----BEGIN PRIVATE KEY-----\\nPRIVATE_KEY\\n-----END PRIVATE KEY-----\\n\",\n  \"client_email\": \"SERVICE_ACCOUNT_EMAIL\",\n  \"client_id\": \"CLIENT_ID\",\n  \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",\n  \"token_uri\": \"https://accounts.google.com/o/oauth2/token\",\n  \"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\n  \"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/SERVICE_ACCOUNT_EMAIL\"\n}\n\nGcpCredentials(\n    service_account_info=service_account_info\n).save(\"BLOCK-NAME-PLACEHOLDER\")\n

    service_account_info vs service_account_file

    The advantage of using service_account_info, instead of service_account_file, is that it is accessible across containers.

    If service_account_file is used, the provided file path must be available in the container executing the flow.
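
    If you prefer to reference a key file instead, a minimal sketch looks like this; the path is a placeholder and, as noted above, must be available wherever the flow executes:

    from prefect_gcp import GcpCredentials\n\n# Placeholder path; it must be readable inside the container executing the flow\nGcpCredentials(\n    service_account_file=\"/path/to/service-account-key.json\"\n).save(\"BLOCK-NAME-PLACEHOLDER\")\n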

    Congrats! You can now easily load the saved block, which holds your credentials:

    from prefect_gcp import GcpCredentials\nGcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n

    Registering blocks

    Register blocks in this module to view and edit them on Prefect Cloud:

    prefect block register -m prefect_gcp\n
    "},{"location":"integrations/prefect-gcp/#using-prefect-with-google-cloud-run","title":"Using Prefect with Google Cloud Run","text":"

    Is your local computer or server running out of memory or taking too long to complete a job?

    prefect_gcp offers a solution by enabling you to execute your Prefect flows remotely, on demand, through Google Cloud Run.

    The following code snippets demonstrate how prefect_gcp can be used to run a job on Cloud Run, either as part of a Prefect deployment's infrastructure or within a flow.

    "},{"location":"integrations/prefect-gcp/#as-infrastructure","title":"As Infrastructure","text":"

    Below is a simple walkthrough of how to use Google Cloud Run as infrastructure for a deployment.

    "},{"location":"integrations/prefect-gcp/#set-variables","title":"Set variables","text":"

    To expedite copy/paste without needing to update placeholders manually, update and execute the following.

    export CREDENTIALS_BLOCK_NAME=\"BLOCK-NAME-PLACEHOLDER\"\nexport CLOUD_RUN_JOB_BLOCK_NAME=\"cloud-run-job-example\"\nexport CLOUD_RUN_JOB_REGION=\"us-central1\"\nexport GCS_BUCKET_BLOCK_NAME=\"cloud-run-job-bucket-example\"\nexport GCP_PROJECT_ID=$(gcloud config get-value project)\n
    "},{"location":"integrations/prefect-gcp/#build-an-image","title":"Build an image","text":"

    First, find an existing image within the Google Artifact Registry. Ensure it has Python and prefect-gcp[cloud_storage] installed, or follow the instructions below to set one up.

    Create a Dockerfile.

    FROM prefecthq/prefect:2-python3.11\nRUN pip install \"prefect-gcp[cloud_storage]\"\n

    Then push to the Google Artifact Registry.

    gcloud artifacts repositories create test-example-repository --repository-format=docker --location=us\ngcloud auth configure-docker us-docker.pkg.dev\ndocker build -t us-docker.pkg.dev/${GCP_PROJECT_ID}/test-example-repository/prefect-gcp:2-python3.11 .\ndocker push us-docker.pkg.dev/${GCP_PROJECT_ID}/test-example-repository/prefect-gcp:2-python3.11\n
    "},{"location":"integrations/prefect-gcp/#save-an-infrastructure-and-storage-block","title":"Save an infrastructure and storage block","text":"

    Save a custom infrastructure and storage block by executing the following snippet.

    import os\nfrom prefect_gcp import GcpCredentials, CloudRunJob, GcsBucket\n\ngcp_credentials = GcpCredentials.load(os.environ[\"CREDENTIALS_BLOCK_NAME\"])\n\n# must be from GCR and have Python + Prefect\nimage = f\"us-docker.pkg.dev/{os.environ['GCP_PROJECT_ID']}/test-example-repository/prefect-gcp:2-python3.11\"  # noqa\n\ncloud_run_job = CloudRunJob(\n    image=image,\n    credentials=gcp_credentials,\n    region=os.environ[\"CLOUD_RUN_JOB_REGION\"],\n)\ncloud_run_job.save(os.environ[\"CLOUD_RUN_JOB_BLOCK_NAME\"], overwrite=True)\n\nbucket_name = \"cloud-run-job-bucket\"\ncloud_storage_client = gcp_credentials.get_cloud_storage_client()\ncloud_storage_client.create_bucket(bucket_name)\ngcs_bucket = GcsBucket(\n    bucket=bucket_name,\n    gcp_credentials=gcp_credentials,\n)\ngcs_bucket.save(os.environ[\"GCS_BUCKET_BLOCK_NAME\"], overwrite=True)\n
    "},{"location":"integrations/prefect-gcp/#write-a-flow","title":"Write a flow","text":"

    Then, create a deployment from an existing flow, or use the flow below if you don't have one handy.

    from prefect import flow\n\n@flow(log_prints=True)\ndef cloud_run_job_flow():\n    print(\"Hello, Prefect!\")\n\nif __name__ == \"__main__\":\n    cloud_run_job_flow()\n
    "},{"location":"integrations/prefect-gcp/#create-a-deployment","title":"Create a deployment","text":"

    If the script was named \"cloud_run_job_script.py\", build a deployment manifest with the following command.

    prefect deployment build cloud_run_job_script.py:cloud_run_job_flow \\\n    -n cloud-run-deployment \\\n    -ib cloud-run-job/${CLOUD_RUN_JOB_BLOCK_NAME} \\\n    -sb gcs-bucket/${GCS_BUCKET_BLOCK_NAME}\n

    Now apply the deployment!

    prefect deployment apply cloud_run_job_flow-deployment.yaml\n
    "},{"location":"integrations/prefect-gcp/#test-the-deployment","title":"Test the deployment","text":"

    Start up an agent in a separate terminal. The agent will poll the Prefect API for scheduled flow runs that are ready to run.

    prefect agent start -q 'default'\n

    Run the deployment once to test.

    prefect deployment run cloud-run-job-flow/cloud-run-deployment\n

    Once the flow run has completed, you will see Hello, Prefect! logged in the Prefect UI.

    No class found for dispatch key

    If you encounter an error message like KeyError: \"No class found for dispatch key 'cloud-run-job' in registry for type 'Block'.\", ensure prefect-gcp is installed in the environment in which your agent is running!

    "},{"location":"integrations/prefect-gcp/#within-flow","title":"Within Flow","text":"

    You can execute commands through Cloud Run Job directly within a Prefect flow.

    from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.cloud_run import CloudRunJob\n\n@flow\ndef cloud_run_job_flow():\n    cloud_run_job = CloudRunJob(\n        image=\"us-docker.pkg.dev/cloudrun/container/job:latest\",\n        credentials=GcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\"),\n        region=\"us-central1\",\n        command=[\"echo\", \"Hello, Prefect!\"],\n    )\n    return cloud_run_job.run()\n
    "},{"location":"integrations/prefect-gcp/#using-prefect-with-google-vertex-ai","title":"Using Prefect with Google Vertex AI","text":"

    prefect_gcp also enables you to execute your Prefect flows remotely, on demand, using Google Vertex AI!

    Be sure to additionally install the AI Platform extra!

    Setting up a Vertex AI job is very similar to setting up a Cloud Run Job; just replace the CloudRunJob block with the following snippet.

    from prefect_gcp import GcpCredentials, VertexAICustomTrainingJob, GcsBucket\n\ngcp_credentials = GcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n\nvertex_ai_job = VertexAICustomTrainingJob(\n    image=\"IMAGE-NAME-PLACEHOLDER\",  # must be from GCR and have Python + Prefect\n    credentials=gcp_credentials,\n    region=\"us-central1\",\n)\nvertex_ai_job.save(\"test-example\")\n

    Cloud Run Job vs Vertex AI

    With Vertex AI, you can allocate computational resources on-the-fly for your executions, much like Cloud Run.

    However, unlike Cloud Run, you have the flexibility to provision instances with higher CPU, GPU, TPU, and RAM capacities.

    Additionally, jobs can run for up to 7 days, which is significantly longer than the maximum duration allowed on Cloud Run.
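
    For instance, the VertexAICustomTrainingJob block exposes fields for the machine type, accelerators, and maximum run time, so a larger configuration might look like the following sketch; the image and accelerator values are placeholders to adjust for your project:

    import datetime\n\nfrom prefect_gcp import GcpCredentials, VertexAICustomTrainingJob\n\nvertex_ai_job = VertexAICustomTrainingJob(\n    image=\"IMAGE-NAME-PLACEHOLDER\",  # must be from GCR/Artifact Registry and have Python + Prefect\n    gcp_credentials=GcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\"),\n    region=\"us-central1\",\n    machine_type=\"n1-standard-8\",  # more CPU and RAM than the default n1-standard-4\n    accelerator_type=\"NVIDIA_TESLA_T4\",  # placeholder accelerator type\n    accelerator_count=1,\n    maximum_run_time=datetime.timedelta(days=2),  # jobs may run for up to 7 days\n)\nvertex_ai_job.save(\"vertex-ai-gpu-example\", overwrite=True)\n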

    "},{"location":"integrations/prefect-gcp/#using-prefect-with-google-bigquery","title":"Using Prefect with Google BigQuery","text":"

    Got big data in BigQuery? prefect_gcp allows you to steadily stream data from and write to Google BigQuery within your Prefect flows!

    Be sure to install prefect-gcp with the BigQuery extra!

    The provided code snippet shows how you can use prefect_gcp to create a new dataset in BigQuery, define a table, insert rows, and fetch data from the table.

    from prefect import flow\nfrom prefect_gcp.bigquery import GcpCredentials, BigQueryWarehouse\n\n@flow\ndef bigquery_flow():\n    all_rows = []\n    gcp_credentials = GcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n\n    client = gcp_credentials.get_bigquery_client()\n    client.create_dataset(\"test_example\", exists_ok=True)\n\n    with BigQueryWarehouse(gcp_credentials=gcp_credentials) as warehouse:\n        warehouse.execute(\n            \"CREATE TABLE IF NOT EXISTS test_example.customers (name STRING, address STRING);\"\n        )\n        warehouse.execute_many(\n            \"INSERT INTO test_example.customers (name, address) VALUES (%(name)s, %(address)s);\",\n            seq_of_parameters=[\n                {\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n                {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                {\"name\": \"Unknown\", \"address\": \"Highway 42\"},\n            ],\n        )\n        while True:\n            # Repeated fetch* calls using the same operation will\n            # skip re-executing and instead return the next set of results\n            new_rows = warehouse.fetch_many(\"SELECT * FROM test_example.customers\", size=2)\n            if len(new_rows) == 0:\n                break\n            all_rows.extend(new_rows)\n    return all_rows\n\nbigquery_flow()\n
    "},{"location":"integrations/prefect-gcp/#using-prefect-with-google-cloud-storage","title":"Using Prefect with Google Cloud Storage","text":"

    With prefect_gcp, your Prefect flows can seamlessly upload and download objects to and from Google Cloud Storage, and these actions are logged.

    Be sure to additionally install prefect-gcp with the Cloud Storage extra!

    The provided code snippet shows how you can use prefect_gcp to upload a file to a Google Cloud Storage bucket and download the same file under a different file name.

    from pathlib import Path\nfrom prefect import flow\nfrom prefect_gcp import GcpCredentials, GcsBucket\n\n\n@flow\ndef cloud_storage_flow():\n    # create a dummy file to upload\n    file_path = Path(\"test-example.txt\")\n    file_path.write_text(\"Hello, Prefect!\")\n\n    gcp_credentials = GcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n    gcs_bucket = GcsBucket(\n        bucket=\"BUCKET-NAME-PLACEHOLDER\",\n        gcp_credentials=gcp_credentials\n    )\n\n    gcs_bucket_path = gcs_bucket.upload_from_path(file_path)\n    downloaded_file_path = gcs_bucket.download_object_to_path(\n        gcs_bucket_path, \"downloaded-test-example.txt\"\n    )\n    return downloaded_file_path.read_text()\n\n\ncloud_storage_flow()\n

    Upload and download directories

    GcsBucket supports uploading and downloading entire directories. To view examples, check out the Examples Catalog!
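
    As a rough sketch of what that can look like, using the folder helpers on the GcsBucket block (the bucket, block, and folder names are placeholders):

    from prefect_gcp import GcpCredentials, GcsBucket\n\ngcs_bucket = GcsBucket(\n    bucket=\"BUCKET-NAME-PLACEHOLDER\",\n    gcp_credentials=GcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\"),\n)\n\n# Upload a local folder to the bucket, then download it back to a different local folder\ngcs_bucket.upload_from_folder(\"local-folder\", \"remote-folder\")\ngcs_bucket.download_folder_to_path(\"remote-folder\", \"downloaded-folder\")\n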

    "},{"location":"integrations/prefect-gcp/#using-prefect-with-google-secret-manager","title":"Using Prefect with Google Secret Manager","text":"

    Do you already have secrets available on Google Secret Manager? There's no need to migrate them!

    prefect_gcp allows you to read and write secrets with Google Secret Manager within your Prefect flows.

    Be sure to install prefect-gcp with the Secret Manager extra!

    The provided code snippet shows how you can use prefect_gcp to write a secret to the Secret Manager, read the secret data, delete the secret, and finally return the secret data.

    from prefect import flow\nfrom prefect_gcp import GcpCredentials, GcpSecret\n\n\n@flow\ndef secret_manager_flow():\n    gcp_credentials = GcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n    gcp_secret = GcpSecret(secret_name=\"test-example\", gcp_credentials=gcp_credentials)\n    gcp_secret.write_secret(secret_data=b\"Hello, Prefect!\")\n    secret_data = gcp_secret.read_secret()\n    gcp_secret.delete_secret()\n    return secret_data\n\nsecret_manager_flow()\n
    "},{"location":"integrations/prefect-gcp/#accessing-google-credentials-or-clients-from-gcpcredentials","title":"Accessing Google credentials or clients from GcpCredentials","text":"

    If prefect-gcp is missing a feature, feel free to submit an issue.

    In the meantime, you may want to access the underlying Google Cloud credentials or clients, which prefect-gcp exposes via the GcpCredentials block.

    The provided code snippet shows how you can use prefect_gcp to instantiate a Google Cloud client, like bigquery.Client.

    Note that a GcpCredentials object is NOT a valid input to the underlying BigQuery client; use the get_credentials_from_service_account method to access and pass an actual google.auth.Credentials object.

    from google.cloud import bigquery\nfrom prefect import flow\nfrom prefect_gcp import GcpCredentials\n\n@flow\ndef create_bigquery_client():\n    gcp_credentials = GcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n    google_auth_credentials = gcp_credentials.get_credentials_from_service_account()\n    bigquery_client = bigquery.Client(credentials=google_auth_credentials)\n

    If you simply want to access the underlying client, prefect-gcp exposes a get_client method from GcpCredentials.

    from prefect import flow\nfrom prefect_gcp import GcpCredentials\n\n@flow\ndef create_bigquery_client():\n    gcp_credentials = GcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n    bigquery_client = gcp_credentials.get_client(\"bigquery\")\n
    "},{"location":"integrations/prefect-gcp/#resources","title":"Resources","text":"

    For more tips on how to use tasks and flows in a Collection, check out Using Collections!

    "},{"location":"integrations/prefect-gcp/#installation","title":"Installation","text":"

    To use prefect-gcp and Cloud Run:

    pip install prefect-gcp\n

    To use Cloud Storage:

    pip install \"prefect-gcp[cloud_storage]\"\n

    To use BigQuery:

    pip install \"prefect-gcp[bigquery]\"\n

    To use Secret Manager:

    pip install \"prefect-gcp[secret_manager]\"\n

    To use Vertex AI:

    pip install \"prefect-gcp[aiplatform]\"\n

    A list of available blocks in prefect-gcp and their setup instructions can be found here.

    Requires an installation of Python 3.7+.

    We recommend using a Python virtual environment manager such as pipenv, conda or virtualenv.

    These tasks are designed to work with Prefect 2. For more information about how to use Prefect, please refer to the Prefect documentation.

    "},{"location":"integrations/prefect-gcp/#feedback","title":"Feedback","text":"

    If you encounter any bugs while using prefect-gcp, feel free to open an issue in the prefect-gcp repository.

    If you have any questions or issues while using prefect-gcp, you can find help in either the Prefect Discourse forum or the Prefect Slack community.

    Feel free to star or watch prefect-gcp for updates too!

    "},{"location":"integrations/prefect-gcp/aiplatform/","title":"AI Platform","text":""},{"location":"integrations/prefect-gcp/aiplatform/#prefect_gcp.aiplatform","title":"prefect_gcp.aiplatform","text":"

    DEPRECATION WARNING:

    This module is deprecated as of March 2024 and will not be available after September 2024. It has been replaced by the Vertex AI worker, which offers enhanced functionality and better performance.

    For upgrade instructions, see https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.

    Integrations with Google AI Platform.

    Examples:

    Run a job using Vertex AI Custom Training:\n```python\nfrom prefect_gcp.credentials import GcpCredentials\nfrom prefect_gcp.aiplatform import VertexAICustomTrainingJob\n\ngcp_credentials = GcpCredentials.load(\"BLOCK_NAME\")\njob = VertexAICustomTrainingJob(\n    region=\"us-east1\",\n    image=\"us-docker.pkg.dev/cloudrun/container/job:latest\",\n    gcp_credentials=gcp_credentials,\n)\njob.run()\n```\n\nRun a job that runs the command `echo hello world` using Google Cloud Run Jobs:\n```python\nfrom prefect_gcp.credentials import GcpCredentials\nfrom prefect_gcp.aiplatform import VertexAICustomTrainingJob\n\ngcp_credentials = GcpCredentials.load(\"BLOCK_NAME\")\njob = VertexAICustomTrainingJob(\n    command=[\"echo\", \"hello world\"],\n    region=\"us-east1\",\n    image=\"us-docker.pkg.dev/cloudrun/container/job:latest\",\n    gcp_credentials=gcp_credentials,\n)\njob.run()\n```\n\nPreview job specs:\n```python\nfrom prefect_gcp.credentials import GcpCredentials\nfrom prefect_gcp.aiplatform import VertexAICustomTrainingJob\n\ngcp_credentials = GcpCredentials.load(\"BLOCK_NAME\")\njob = VertexAICustomTrainingJob(\n    command=[\"echo\", \"hello world\"],\n    region=\"us-east1\",\n    image=\"us-docker.pkg.dev/cloudrun/container/job:latest\",\n    gcp_credentials=gcp_credentials,\n)\njob.preview()\n```\n
    "},{"location":"integrations/prefect-gcp/aiplatform/#prefect_gcp.aiplatform.VertexAICustomTrainingJob","title":"VertexAICustomTrainingJob","text":"

    Bases: Infrastructure

    Infrastructure block used to run Vertex AI custom training jobs.

    Source code in prefect_gcp/aiplatform.py
    @deprecated_class(\n    start_date=\"Mar 2024\",\n    help=(\n        \"Use the Vertex AI worker instead.\"\n        \" Refer to the upgrade guide for more information:\"\n        \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\"\n    ),\n)\nclass VertexAICustomTrainingJob(Infrastructure):\n    \"\"\"\n    Infrastructure block used to run Vertex AI custom training jobs.\n    \"\"\"\n\n    _block_type_name = \"Vertex AI Custom Training Job\"\n    _block_type_slug = \"vertex-ai-custom-training-job\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/10424e311932e31c477ac2b9ef3d53cefbaad708-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-gcp/aiplatform/#prefect_gcp.aiplatform.VertexAICustomTrainingJob\"  # noqa: E501\n\n    type: Literal[\"vertex-ai-custom-training-job\"] = Field(\n        \"vertex-ai-custom-training-job\", description=\"The slug for this task type.\"\n    )\n\n    gcp_credentials: GcpCredentials = Field(\n        default_factory=GcpCredentials,\n        description=(\n            \"GCP credentials to use when running the configured Vertex AI custom \"\n            \"training job. If not provided, credentials will be inferred from the \"\n            \"environment. See `GcpCredentials` for details.\"\n        ),\n    )\n    region: str = Field(\n        default=...,\n        description=\"The region where the Vertex AI custom training job resides.\",\n    )\n    image: str = Field(\n        default=...,\n        title=\"Image Name\",\n        description=(\n            \"The image to use for a new Vertex AI custom training job. This value must \"\n            \"refer to an image within either Google Container Registry \"\n            \"or Google Artifact Registry, like `gcr.io/<project_name>/<repo>/`.\"\n        ),\n    )\n    env: Dict[str, str] = Field(\n        default_factory=dict,\n        title=\"Environment Variables\",\n        description=\"Environment variables to be passed to your Cloud Run Job.\",\n    )\n    machine_type: str = Field(\n        default=\"n1-standard-4\",\n        description=\"The machine type to use for the run, which controls the available \"\n        \"CPU and memory.\",\n    )\n    accelerator_type: Optional[str] = Field(\n        default=None, description=\"The type of accelerator to attach to the machine.\"\n    )\n    accelerator_count: Optional[int] = Field(\n        default=None, description=\"The number of accelerators to attach to the machine.\"\n    )\n    boot_disk_type: str = Field(\n        default=\"pd-ssd\",\n        title=\"Boot Disk Type\",\n        description=\"The type of boot disk to attach to the machine.\",\n    )\n    boot_disk_size_gb: int = Field(\n        default=100,\n        title=\"Boot Disk Size\",\n        description=\"The size of the boot disk to attach to the machine, in gigabytes.\",\n    )\n    maximum_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(days=7), description=\"The maximum job running time.\"\n    )\n    network: Optional[str] = Field(\n        default=None,\n        description=\"The full name of the Compute Engine network\"\n        \"to which the Job should be peered. Private services access must \"\n        \"already be configured for the network. 
If left unspecified, the job \"\n        \"is not peered with any network.\",\n    )\n    reserved_ip_ranges: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of names for the reserved ip ranges under the VPC \"\n        \"network that can be used for this job. If set, we will deploy the job \"\n        \"within the provided ip ranges. Otherwise, the job will be deployed to \"\n        \"any ip ranges under the provided VPC network.\",\n    )\n    service_account: Optional[str] = Field(\n        default=None,\n        description=(\n            \"Specifies the service account to use \"\n            \"as the run-as account in Vertex AI. The agent submitting jobs must have \"\n            \"act-as permission on this run-as account. If unspecified, the AI \"\n            \"Platform Custom Code Service Agent for the CustomJob's project is \"\n            \"used. Takes precedence over the service account found in gcp_credentials, \"\n            \"and required if a service account cannot be detected in gcp_credentials.\"\n        ),\n    )\n    job_watch_poll_interval: float = Field(\n        default=5.0,\n        description=(\n            \"The amount of time to wait between GCP API calls while monitoring the \"\n            \"state of a Vertex AI Job.\"\n        ),\n    )\n\n    @property\n    def job_name(self):\n        \"\"\"\n        The name can be up to 128 characters long and can be consist of any UTF-8 characters. Reference:\n        https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.CustomJob#google_cloud_aiplatform_CustomJob_display_name\n        \"\"\"  # noqa\n        try:\n            base_name = self.name or self.image.split(\"/\")[2]\n            return f\"{base_name}-{uuid4().hex}\"\n        except IndexError:\n            raise ValueError(\n                \"The provided image must be from either Google Container Registry \"\n                \"or Google Artifact Registry\"\n            )\n\n    def _get_compatible_labels(self) -> Dict[str, str]:\n        \"\"\"\n        Ensures labels are compatible with GCP label requirements.\n        https://cloud.google.com/resource-manager/docs/creating-managing-labels\n\n        Ex: the Prefect provided key of prefect.io/flow-name -> prefect-io_flow-name\n        \"\"\"\n        compatible_labels = {}\n        for key, val in self.labels.items():\n            new_key = slugify(\n                key,\n                lowercase=True,\n                replacements=[(\"/\", \"_\"), (\".\", \"-\")],\n                max_length=63,\n                regex_pattern=_DISALLOWED_GCP_LABEL_CHARACTERS,\n            )\n            compatible_labels[new_key] = slugify(\n                val,\n                lowercase=True,\n                replacements=[(\"/\", \"_\"), (\".\", \"-\")],\n                max_length=63,\n                regex_pattern=_DISALLOWED_GCP_LABEL_CHARACTERS,\n            )\n        return compatible_labels\n\n    def preview(self) -> str:\n        \"\"\"Generate a preview of the job definition that will be sent to GCP.\"\"\"\n        job_spec = self._build_job_spec()\n        custom_job = CustomJob(\n            display_name=self.job_name,\n            job_spec=job_spec,\n            labels=self._get_compatible_labels(),\n        )\n        return str(custom_job)  # outputs a json string\n\n    def get_corresponding_worker_type(self) -> str:\n        \"\"\"Return the corresponding worker type for this infrastructure block.\"\"\"\n        return 
\"vertex-ai\"\n\n    async def generate_work_pool_base_job_template(self) -> dict:\n        \"\"\"\n        Generate a base job template for a `Vertex AI` work pool with the same\n        configuration as this block.\n        Returns:\n            - dict: a base job template for a `Vertex AI` work pool\n        \"\"\"\n        base_job_template = await get_default_base_job_template_for_infrastructure_type(\n            self.get_corresponding_worker_type(),\n        )\n        assert (\n            base_job_template is not None\n        ), \"Failed to generate default base job template for Cloud Run worker.\"\n        for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n            if key == \"command\":\n                base_job_template[\"variables\"][\"properties\"][\"command\"][\n                    \"default\"\n                ] = shlex.join(value)\n            elif key in [\n                \"type\",\n                \"block_type_slug\",\n                \"_block_document_id\",\n                \"_block_document_name\",\n                \"_is_anonymous\",\n            ]:\n                continue\n            elif key == \"gcp_credentials\":\n                if not self.gcp_credentials._block_document_id:\n                    raise BlockNotSavedError(\n                        \"It looks like you are trying to use a block that\"\n                        \" has not been saved. Please call `.save` on your block\"\n                        \" before publishing it as a work pool.\"\n                    )\n                base_job_template[\"variables\"][\"properties\"][\"credentials\"][\n                    \"default\"\n                ] = {\n                    \"$ref\": {\n                        \"block_document_id\": str(\n                            self.gcp_credentials._block_document_id\n                        )\n                    }\n                }\n            elif key == \"maximum_run_time\":\n                base_job_template[\"variables\"][\"properties\"][\"maximum_run_time_hours\"][\n                    \"default\"\n                ] = round(value.total_seconds() / 3600)\n            elif key == \"service_account\":\n                base_job_template[\"variables\"][\"properties\"][\"service_account_name\"][\n                    \"default\"\n                ] = value\n            elif key in base_job_template[\"variables\"][\"properties\"]:\n                base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n            else:\n                self.logger.warning(\n                    f\"Variable {key!r} is not supported by `Vertex AI` work pools.\"\n                    \" Skipping.\"\n                )\n\n        return base_job_template\n\n    def _build_job_spec(self) -> \"CustomJobSpec\":\n        \"\"\"\n        Builds a job spec by gathering details.\n        \"\"\"\n        # gather worker pool spec\n        env_list = [\n            {\"name\": name, \"value\": value}\n            for name, value in {\n                **self._base_environment(),\n                **self.env,\n            }.items()\n        ]\n        container_spec = ContainerSpec(\n            image_uri=self.image, command=self.command, args=[], env=env_list\n        )\n        machine_spec = MachineSpec(\n            machine_type=self.machine_type,\n            accelerator_type=self.accelerator_type,\n            accelerator_count=self.accelerator_count,\n        )\n        worker_pool_spec = WorkerPoolSpec(\n            
container_spec=container_spec,\n            machine_spec=machine_spec,\n            replica_count=1,\n            disk_spec=DiskSpec(\n                boot_disk_type=self.boot_disk_type,\n                boot_disk_size_gb=self.boot_disk_size_gb,\n            ),\n        )\n        # look for service account\n        service_account = (\n            self.service_account or self.gcp_credentials._service_account_email\n        )\n        if service_account is None:\n            raise ValueError(\n                \"A service account is required for the Vertex job. \"\n                \"A service account could not be detected in the attached credentials; \"\n                \"please set a service account explicitly, e.g. \"\n                '`VertexAICustomTrainingJob(service_acount=\"...\")`'\n            )\n\n        # build custom job specs\n        timeout = Duration().FromTimedelta(td=self.maximum_run_time)\n        scheduling = Scheduling(timeout=timeout)\n        job_spec = CustomJobSpec(\n            worker_pool_specs=[worker_pool_spec],\n            service_account=service_account,\n            scheduling=scheduling,\n            network=self.network,\n            reserved_ip_ranges=self.reserved_ip_ranges,\n        )\n        return job_spec\n\n    async def _create_and_begin_job(\n        self,\n        job_spec: \"CustomJobSpec\",\n        job_service_async_client: \"JobServiceAsyncClient\",\n    ) -> \"CustomJob\":\n        \"\"\"\n        Builds a custom job and begins running it.\n        \"\"\"\n        # create custom job\n        custom_job = CustomJob(\n            display_name=self.job_name,\n            job_spec=job_spec,\n            labels=self._get_compatible_labels(),\n        )\n\n        # run job\n        self.logger.info(\n            f\"{self._log_prefix}: Creating job {self.job_name!r} \"\n            f\"with command {' '.join(self.command)!r} in region \"\n            f\"{self.region!r} using image {self.image!r}\"\n        )\n\n        project = self.gcp_credentials.project\n        resource_name = f\"projects/{project}/locations/{self.region}\"\n\n        async for attempt in AsyncRetrying(\n            stop=stop_after_attempt(3), wait=wait_fixed(1) + wait_random(0, 3)\n        ):\n            with attempt:\n                custom_job_run = await job_service_async_client.create_custom_job(\n                    parent=resource_name,\n                    custom_job=custom_job,\n                )\n\n        self.logger.info(\n            f\"{self._log_prefix}: Job {self.job_name!r} created. \"\n            f\"The full job name is {custom_job_run.name!r}\"\n        )\n\n        return custom_job_run\n\n    async def _watch_job_run(\n        self,\n        full_job_name: str,  # different from self.job_name\n        job_service_async_client: \"JobServiceAsyncClient\",\n        current_state: \"JobState\",\n        until_states: Tuple[\"JobState\"],\n        timeout: int = None,\n    ) -> \"CustomJob\":\n        \"\"\"\n        Polls job run to see if status changed.\n\n        State changes reported by the Vertex AI API may sometimes be inaccurate\n        immediately upon startup, but should eventually report a correct running\n        and then terminal state. 
The minimum training duration for a custom job is\n        30 seconds, so short-lived jobs may be marked as successful some time\n        after a flow run has completed.\n        \"\"\"\n        state = JobState.JOB_STATE_UNSPECIFIED\n        last_state = current_state\n        t0 = time.time()\n\n        while state not in until_states:\n            job_run = await job_service_async_client.get_custom_job(\n                name=full_job_name,\n            )\n            state = job_run.state\n            if state != last_state:\n                state_label = (\n                    state.name.replace(\"_\", \" \")\n                    .lower()\n                    .replace(\"state\", \"state is now:\")\n                )\n                # results in \"New job state is now: succeeded\"\n                self.logger.debug(\n                    f\"{self._log_prefix}: {self.job_name} has new {state_label}\"\n                )\n                last_state = state\n            else:\n                # Intermittently, the job will not be described. We want to respect the\n                # watch timeout though.\n                self.logger.debug(f\"{self._log_prefix}: Job not found.\")\n\n            elapsed_time = time.time() - t0\n            if timeout is not None and elapsed_time > timeout:\n                raise RuntimeError(\n                    f\"Timed out after {elapsed_time}s while watching job for states \"\n                    \"{until_states!r}\"\n                )\n            await asyncio.sleep(self.job_watch_poll_interval)\n\n        return job_run\n\n    @sync_compatible\n    async def run(\n        self, task_status: Optional[\"TaskStatus\"] = None\n    ) -> VertexAICustomTrainingJobResult:\n        \"\"\"\n        Run the configured task on VertexAI.\n\n        Args:\n            task_status: An optional `TaskStatus` to update when the container starts.\n\n        Returns:\n            The `VertexAICustomTrainingJobResult`.\n        \"\"\"\n        client_options = ClientOptions(\n            api_endpoint=f\"{self.region}-aiplatform.googleapis.com\"\n        )\n\n        job_spec = self._build_job_spec()\n        job_service_async_client = self.gcp_credentials.get_job_service_async_client(\n            client_options=client_options\n        )\n        job_run = await self._create_and_begin_job(\n            job_spec,\n            job_service_async_client,\n        )\n\n        if task_status:\n            task_status.started(self.job_name)\n\n        final_job_run = await self._watch_job_run(\n            full_job_name=job_run.name,\n            job_service_async_client=job_service_async_client,\n            current_state=job_run.state,\n            until_states=(\n                JobState.JOB_STATE_SUCCEEDED,\n                JobState.JOB_STATE_FAILED,\n                JobState.JOB_STATE_CANCELLED,\n                JobState.JOB_STATE_EXPIRED,\n            ),\n            timeout=self.maximum_run_time.total_seconds(),\n        )\n\n        error_msg = final_job_run.error.message\n        if error_msg:\n            raise RuntimeError(f\"{self._log_prefix}: {error_msg}\")\n\n        status_code = 0 if final_job_run.state == JobState.JOB_STATE_SUCCEEDED else 1\n\n        return VertexAICustomTrainingJobResult(\n            identifier=final_job_run.display_name, status_code=status_code\n        )\n\n    @sync_compatible\n    async def kill(self, identifier: str, grace_seconds: int = 30) -> None:\n        \"\"\"\n        Kill a job running Cloud Run.\n\n        Args:\n            
identifier: The Vertex AI full job name, formatted like\n                \"projects/{project}/locations/{location}/customJobs/{custom_job}\".\n\n        Returns:\n            The `VertexAICustomTrainingJobResult`.\n        \"\"\"\n        client_options = ClientOptions(\n            api_endpoint=f\"{self.region}-aiplatform.googleapis.com\"\n        )\n        job_service_async_client = self.gcp_credentials.get_job_service_async_client(\n            client_options=client_options\n        )\n        await self._kill_job(\n            job_service_async_client=job_service_async_client,\n            full_job_name=identifier,\n        )\n        self.logger.info(f\"Requested to cancel {identifier}...\")\n\n    async def _kill_job(\n        self, job_service_async_client: \"JobServiceAsyncClient\", full_job_name: str\n    ) -> None:\n        \"\"\"\n        Thin wrapper around Job.delete, wrapping a try/except since\n        Job is an independent class that doesn't have knowledge of\n        CloudRunJob and its associated logic.\n        \"\"\"\n        cancel_custom_job_request = CancelCustomJobRequest(name=full_job_name)\n        try:\n            await job_service_async_client.cancel_custom_job(\n                request=cancel_custom_job_request,\n            )\n        except Exception as exc:\n            if \"does not exist\" in str(exc):\n                raise InfrastructureNotFound(\n                    f\"Cannot stop Vertex AI job; the job name {full_job_name!r} \"\n                    \"could not be found.\"\n                ) from exc\n            raise\n\n    @property\n    def _log_prefix(self) -> str:\n        \"\"\"\n        Internal property for generating a prefix for logs where `name` may be null\n        \"\"\"\n        if self.name is not None:\n            return f\"VertexAICustomTrainingJob {self.name!r}\"\n        else:\n            return \"VertexAICustomTrainingJob\"\n
    "},{"location":"integrations/prefect-gcp/aiplatform/#prefect_gcp.aiplatform.VertexAICustomTrainingJob.job_name","title":"job_name property","text":"

    The name can be up to 128 characters long and can consist of any UTF-8 characters. Reference: https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.CustomJob#google_cloud_aiplatform_CustomJob_display_name

    "},{"location":"integrations/prefect-gcp/aiplatform/#prefect_gcp.aiplatform.VertexAICustomTrainingJob.generate_work_pool_base_job_template","title":"generate_work_pool_base_job_template async","text":"

    Generate a base job template for a Vertex AI work pool with the same configuration as this block. Returns: a dict containing a base job template for a Vertex AI work pool.
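
    For illustration, a minimal sketch of generating a template from a previously saved block; the block name \"my-vertex-job\" is a hypothetical placeholder, and the coroutine is driven with asyncio.run:

    import asyncio\nfrom prefect_gcp.aiplatform import VertexAICustomTrainingJob\n\n# hypothetical saved block name\njob = VertexAICustomTrainingJob.load(\"my-vertex-job\")\n# the method is a coroutine, so run it on an event loop\ntemplate = asyncio.run(job.generate_work_pool_base_job_template())\nprint(sorted(template[\"variables\"][\"properties\"]))\n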

    Source code in prefect_gcp/aiplatform.py
    async def generate_work_pool_base_job_template(self) -> dict:\n    \"\"\"\n    Generate a base job template for a `Vertex AI` work pool with the same\n    configuration as this block.\n    Returns:\n        - dict: a base job template for a `Vertex AI` work pool\n    \"\"\"\n    base_job_template = await get_default_base_job_template_for_infrastructure_type(\n        self.get_corresponding_worker_type(),\n    )\n    assert (\n        base_job_template is not None\n    ), \"Failed to generate default base job template for Cloud Run worker.\"\n    for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n        if key == \"command\":\n            base_job_template[\"variables\"][\"properties\"][\"command\"][\n                \"default\"\n            ] = shlex.join(value)\n        elif key in [\n            \"type\",\n            \"block_type_slug\",\n            \"_block_document_id\",\n            \"_block_document_name\",\n            \"_is_anonymous\",\n        ]:\n            continue\n        elif key == \"gcp_credentials\":\n            if not self.gcp_credentials._block_document_id:\n                raise BlockNotSavedError(\n                    \"It looks like you are trying to use a block that\"\n                    \" has not been saved. Please call `.save` on your block\"\n                    \" before publishing it as a work pool.\"\n                )\n            base_job_template[\"variables\"][\"properties\"][\"credentials\"][\n                \"default\"\n            ] = {\n                \"$ref\": {\n                    \"block_document_id\": str(\n                        self.gcp_credentials._block_document_id\n                    )\n                }\n            }\n        elif key == \"maximum_run_time\":\n            base_job_template[\"variables\"][\"properties\"][\"maximum_run_time_hours\"][\n                \"default\"\n            ] = round(value.total_seconds() / 3600)\n        elif key == \"service_account\":\n            base_job_template[\"variables\"][\"properties\"][\"service_account_name\"][\n                \"default\"\n            ] = value\n        elif key in base_job_template[\"variables\"][\"properties\"]:\n            base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n        else:\n            self.logger.warning(\n                f\"Variable {key!r} is not supported by `Vertex AI` work pools.\"\n                \" Skipping.\"\n            )\n\n    return base_job_template\n
    "},{"location":"integrations/prefect-gcp/aiplatform/#prefect_gcp.aiplatform.VertexAICustomTrainingJob.get_corresponding_worker_type","title":"get_corresponding_worker_type","text":"

    Return the corresponding worker type for this infrastructure block.

    Source code in prefect_gcp/aiplatform.py
    def get_corresponding_worker_type(self) -> str:\n    \"\"\"Return the corresponding worker type for this infrastructure block.\"\"\"\n    return \"vertex-ai\"\n
    "},{"location":"integrations/prefect-gcp/aiplatform/#prefect_gcp.aiplatform.VertexAICustomTrainingJob.kill","title":"kill async","text":"

    Kill a job running on Vertex AI.

    Parameters:

    Name Type Description Default identifier str

    The Vertex AI full job name, formatted like \"projects/{project}/locations/{location}/customJobs/{custom_job}\".

    required

    Returns:

    Type Description None

    No result is returned; the method only requests cancellation of the Vertex AI job.
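
    For illustration, a minimal sketch of cancelling a job by its full Vertex AI job name; the block name and identifier below are hypothetical placeholders:

    from prefect_gcp.aiplatform import VertexAICustomTrainingJob\n\njob = VertexAICustomTrainingJob.load(\"my-vertex-job\")  # hypothetical block name\n# the identifier is the full job name returned when the custom job was created\njob.kill(\"projects/my-project/locations/us-central1/customJobs/1234567890\")\n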

    Source code in prefect_gcp/aiplatform.py
    @sync_compatible\nasync def kill(self, identifier: str, grace_seconds: int = 30) -> None:\n    \"\"\"\n    Kill a job running Cloud Run.\n\n    Args:\n        identifier: The Vertex AI full job name, formatted like\n            \"projects/{project}/locations/{location}/customJobs/{custom_job}\".\n\n    Returns:\n        The `VertexAICustomTrainingJobResult`.\n    \"\"\"\n    client_options = ClientOptions(\n        api_endpoint=f\"{self.region}-aiplatform.googleapis.com\"\n    )\n    job_service_async_client = self.gcp_credentials.get_job_service_async_client(\n        client_options=client_options\n    )\n    await self._kill_job(\n        job_service_async_client=job_service_async_client,\n        full_job_name=identifier,\n    )\n    self.logger.info(f\"Requested to cancel {identifier}...\")\n
    "},{"location":"integrations/prefect-gcp/aiplatform/#prefect_gcp.aiplatform.VertexAICustomTrainingJob.preview","title":"preview","text":"

    Generate a preview of the job definition that will be sent to GCP.
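
    For example, one might inspect the job spec before running it; the block name below is a hypothetical placeholder:

    from prefect_gcp.aiplatform import VertexAICustomTrainingJob\n\njob = VertexAICustomTrainingJob.load(\"my-vertex-job\")  # hypothetical block name\nprint(job.preview())  # prints the CustomJob definition that would be sent to GCP\n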

    Source code in prefect_gcp/aiplatform.py
    def preview(self) -> str:\n    \"\"\"Generate a preview of the job definition that will be sent to GCP.\"\"\"\n    job_spec = self._build_job_spec()\n    custom_job = CustomJob(\n        display_name=self.job_name,\n        job_spec=job_spec,\n        labels=self._get_compatible_labels(),\n    )\n    return str(custom_job)  # outputs a json string\n
    "},{"location":"integrations/prefect-gcp/aiplatform/#prefect_gcp.aiplatform.VertexAICustomTrainingJob.run","title":"run async","text":"

    Run the configured task on Vertex AI.

    Parameters:

    Name Type Description Default task_status Optional[TaskStatus]

    An optional TaskStatus to update when the container starts.

    None

    Returns:

    Type Description VertexAICustomTrainingJobResult

    The VertexAICustomTrainingJobResult.
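
    For illustration, a minimal sketch of configuring and running the block; all names below are hypothetical placeholders:

    from prefect_gcp import GcpCredentials\nfrom prefect_gcp.aiplatform import VertexAICustomTrainingJob\n\n# hypothetical placeholder values\njob = VertexAICustomTrainingJob(\n    image=\"us-docker.pkg.dev/my-project/my-repo/my-image:latest\",\n    region=\"us-central1\",\n    command=[\"echo\", \"hello\"],\n    gcp_credentials=GcpCredentials.load(\"my-gcp-creds\"),\n)\nresult = job.run()  # blocks until the job reaches a terminal state\nprint(result.identifier, result.status_code)\n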

    Source code in prefect_gcp/aiplatform.py
    @sync_compatible\nasync def run(\n    self, task_status: Optional[\"TaskStatus\"] = None\n) -> VertexAICustomTrainingJobResult:\n    \"\"\"\n    Run the configured task on VertexAI.\n\n    Args:\n        task_status: An optional `TaskStatus` to update when the container starts.\n\n    Returns:\n        The `VertexAICustomTrainingJobResult`.\n    \"\"\"\n    client_options = ClientOptions(\n        api_endpoint=f\"{self.region}-aiplatform.googleapis.com\"\n    )\n\n    job_spec = self._build_job_spec()\n    job_service_async_client = self.gcp_credentials.get_job_service_async_client(\n        client_options=client_options\n    )\n    job_run = await self._create_and_begin_job(\n        job_spec,\n        job_service_async_client,\n    )\n\n    if task_status:\n        task_status.started(self.job_name)\n\n    final_job_run = await self._watch_job_run(\n        full_job_name=job_run.name,\n        job_service_async_client=job_service_async_client,\n        current_state=job_run.state,\n        until_states=(\n            JobState.JOB_STATE_SUCCEEDED,\n            JobState.JOB_STATE_FAILED,\n            JobState.JOB_STATE_CANCELLED,\n            JobState.JOB_STATE_EXPIRED,\n        ),\n        timeout=self.maximum_run_time.total_seconds(),\n    )\n\n    error_msg = final_job_run.error.message\n    if error_msg:\n        raise RuntimeError(f\"{self._log_prefix}: {error_msg}\")\n\n    status_code = 0 if final_job_run.state == JobState.JOB_STATE_SUCCEEDED else 1\n\n    return VertexAICustomTrainingJobResult(\n        identifier=final_job_run.display_name, status_code=status_code\n    )\n
    "},{"location":"integrations/prefect-gcp/aiplatform/#prefect_gcp.aiplatform.VertexAICustomTrainingJobResult","title":"VertexAICustomTrainingJobResult","text":"

    Bases: InfrastructureResult

    Result from a Vertex AI custom training job.

    Source code in prefect_gcp/aiplatform.py
    class VertexAICustomTrainingJobResult(InfrastructureResult):\n    \"\"\"Result from a Vertex AI custom training job.\"\"\"\n
    "},{"location":"integrations/prefect-gcp/bigquery/","title":"BigQuery","text":""},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery","title":"prefect_gcp.bigquery","text":"

    Tasks for interacting with GCP BigQuery

    "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.BigQueryWarehouse","title":"BigQueryWarehouse","text":"

    Bases: DatabaseBlock

    A block for querying a database with BigQuery.

    Upon instantiation, a connection to BigQuery is established and maintained for the life of the object until the close method is called.

    It is recommended to use this block as a context manager, which will automatically close the connection and its cursors when the context is exited (see the sketch after the attribute list below).

    It is also recommended to load and consume this block within a single task or flow; if the block is passed across separate tasks and flows, the state of its connection and cursors could be lost.

    Attributes:

    Name Type Description gcp_credentials GcpCredentials

    The credentials to use to authenticate.

    fetch_size int

    The number of rows to fetch at a time when calling fetch_many. Note that this limit is applied on the client side and is not passed to the database. To limit on the server side, add the LIMIT clause, or the dialect's equivalent clause, like TOP, to the query.
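
    A minimal sketch of the recommended context-manager usage, following the block name used in the examples below:

    from prefect_gcp.bigquery import BigQueryWarehouse\n\n# the connection and any cursors are closed automatically on exit\nwith BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n    row = warehouse.fetch_one(\"SELECT 1\")\n    print(row)\n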

    Source code in prefect_gcp/bigquery.py
    class BigQueryWarehouse(DatabaseBlock):\n    \"\"\"\n    A block for querying a database with BigQuery.\n\n    Upon instantiating, a connection to BigQuery is established\n    and maintained for the life of the object until the close method is called.\n\n    It is recommended to use this block as a context manager, which will automatically\n    close the connection and its cursors when the context is exited.\n\n    It is also recommended that this block is loaded and consumed within a single task\n    or flow because if the block is passed across separate tasks and flows,\n    the state of the block's connection and cursor could be lost.\n\n    Attributes:\n        gcp_credentials: The credentials to use to authenticate.\n        fetch_size: The number of rows to fetch at a time when calling fetch_many.\n            Note, this parameter is executed on the client side and is not\n            passed to the database. To limit on the server side, add the `LIMIT`\n            clause, or the dialect's equivalent clause, like `TOP`, to the query.\n    \"\"\"  # noqa\n\n    _block_type_name = \"BigQuery Warehouse\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/10424e311932e31c477ac2b9ef3d53cefbaad708-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-gcp/bigquery/#prefect_gcp.bigquery.BigQueryWarehouse\"  # noqa: E501\n\n    gcp_credentials: GcpCredentials\n    fetch_size: int = Field(\n        default=1, description=\"The number of rows to fetch at a time.\"\n    )\n\n    _connection: Optional[\"Connection\"] = None\n    _unique_cursors: Dict[str, \"Cursor\"] = None\n\n    def _start_connection(self):\n        \"\"\"\n        Starts a connection.\n        \"\"\"\n        with self.gcp_credentials.get_bigquery_client() as client:\n            self._connection = Connection(client=client)\n\n    def block_initialization(self) -> None:\n        super().block_initialization()\n        if self._connection is None:\n            self._start_connection()\n\n        if self._unique_cursors is None:\n            self._unique_cursors = {}\n\n    def get_connection(self) -> \"Connection\":\n        \"\"\"\n        Get the opened connection to BigQuery.\n        \"\"\"\n        return self._connection\n\n    def _get_cursor(self, inputs: Dict[str, Any]) -> Tuple[bool, \"Cursor\"]:\n        \"\"\"\n        Get a BigQuery cursor.\n\n        Args:\n            inputs: The inputs to generate a unique hash, used to decide\n                whether a new cursor should be used.\n\n        Returns:\n            Whether a cursor is new and a BigQuery cursor.\n        \"\"\"\n        input_hash = hash_objects(inputs)\n        assert input_hash is not None, (\n            \"We were not able to hash your inputs, \"\n            \"which resulted in an unexpected data return; \"\n            \"please open an issue with a reproducible example.\"\n        )\n        if input_hash not in self._unique_cursors.keys():\n            new_cursor = self._connection.cursor()\n            self._unique_cursors[input_hash] = new_cursor\n            return True, new_cursor\n        else:\n            existing_cursor = self._unique_cursors[input_hash]\n            return False, existing_cursor\n\n    def reset_cursors(self) -> None:\n        \"\"\"\n        Tries to close all opened cursors.\n        \"\"\"\n        input_hashes = tuple(self._unique_cursors.keys())\n        for input_hash in input_hashes:\n            cursor = self._unique_cursors.pop(input_hash)\n            
try:\n                cursor.close()\n            except Exception as exc:\n                self.logger.warning(\n                    f\"Failed to close cursor for input hash {input_hash!r}: {exc}\"\n                )\n\n    @sync_compatible\n    async def fetch_one(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        **execution_options: Dict[str, Any],\n    ) -> \"Row\":\n        \"\"\"\n        Fetch a single result from the database.\n\n        Repeated calls using the same inputs to *any* of the fetch methods of this\n        block will skip executing the operation again, and instead,\n        return the next set of results from the previous execution,\n        until the reset_cursors method is called.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            **execution_options: Additional options to pass to `connection.execute`.\n\n        Returns:\n            A tuple containing the data returned by the database,\n                where each row is a tuple and each column is a value in the tuple.\n\n        Examples:\n            Execute operation with parameters, fetching one new row at a time:\n            ```python\n            from prefect_gcp.bigquery import BigQueryWarehouse\n\n            with BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n                operation = '''\n                    SELECT word, word_count\n                    FROM `bigquery-public-data.samples.shakespeare`\n                    WHERE corpus = %(corpus)s\n                    AND word_count >= %(min_word_count)s\n                    ORDER BY word_count DESC\n                    LIMIT 3;\n                '''\n                parameters = {\n                    \"corpus\": \"romeoandjuliet\",\n                    \"min_word_count\": 250,\n                }\n                for _ in range(0, 3):\n                    result = warehouse.fetch_one(operation, parameters=parameters)\n                    print(result)\n            ```\n        \"\"\"\n        inputs = dict(\n            operation=operation,\n            parameters=parameters,\n            **execution_options,\n        )\n        new, cursor = self._get_cursor(inputs)\n        if new:\n            await run_sync_in_worker_thread(cursor.execute, **inputs)\n\n        result = await run_sync_in_worker_thread(cursor.fetchone)\n        return result\n\n    @sync_compatible\n    async def fetch_many(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        size: Optional[int] = None,\n        **execution_options: Dict[str, Any],\n    ) -> List[\"Row\"]:\n        \"\"\"\n        Fetch a limited number of results from the database.\n\n        Repeated calls using the same inputs to *any* of the fetch methods of this\n        block will skip executing the operation again, and instead,\n        return the next set of results from the previous execution,\n        until the reset_cursors method is called.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            size: The number of results to return; if None or 0, uses the value of\n                `fetch_size` configured on the block.\n            **execution_options: Additional options to pass to `connection.execute`.\n\n        Returns:\n            A list of tuples containing the data returned by 
the database,\n                where each row is a tuple and each column is a value in the tuple.\n\n        Examples:\n            Execute operation with parameters, fetching two new rows at a time:\n            ```python\n            from prefect_gcp.bigquery import BigQueryWarehouse\n\n            with BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n                operation = '''\n                    SELECT word, word_count\n                    FROM `bigquery-public-data.samples.shakespeare`\n                    WHERE corpus = %(corpus)s\n                    AND word_count >= %(min_word_count)s\n                    ORDER BY word_count DESC\n                    LIMIT 6;\n                '''\n                parameters = {\n                    \"corpus\": \"romeoandjuliet\",\n                    \"min_word_count\": 250,\n                }\n                for _ in range(0, 3):\n                    result = warehouse.fetch_many(\n                        operation,\n                        parameters=parameters,\n                        size=2\n                    )\n                    print(result)\n            ```\n        \"\"\"\n        inputs = dict(\n            operation=operation,\n            parameters=parameters,\n            **execution_options,\n        )\n        new, cursor = self._get_cursor(inputs)\n        if new:\n            await run_sync_in_worker_thread(cursor.execute, **inputs)\n\n        size = size or self.fetch_size\n        result = await run_sync_in_worker_thread(cursor.fetchmany, size=size)\n        return result\n\n    @sync_compatible\n    async def fetch_all(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        **execution_options: Dict[str, Any],\n    ) -> List[\"Row\"]:\n        \"\"\"\n        Fetch all results from the database.\n\n        Repeated calls using the same inputs to *any* of the fetch methods of this\n        block will skip executing the operation again, and instead,\n        return the next set of results from the previous execution,\n        until the reset_cursors method is called.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            **execution_options: Additional options to pass to `connection.execute`.\n\n        Returns:\n            A list of tuples containing the data returned by the database,\n                where each row is a tuple and each column is a value in the tuple.\n\n        Examples:\n            Execute operation with parameters, fetching all rows:\n            ```python\n            from prefect_gcp.bigquery import BigQueryWarehouse\n\n            with BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n                operation = '''\n                    SELECT word, word_count\n                    FROM `bigquery-public-data.samples.shakespeare`\n                    WHERE corpus = %(corpus)s\n                    AND word_count >= %(min_word_count)s\n                    ORDER BY word_count DESC\n                    LIMIT 3;\n                '''\n                parameters = {\n                    \"corpus\": \"romeoandjuliet\",\n                    \"min_word_count\": 250,\n                }\n                result = warehouse.fetch_all(operation, parameters=parameters)\n            ```\n        \"\"\"\n        inputs = dict(\n            operation=operation,\n            parameters=parameters,\n            **execution_options,\n        )\n        
new, cursor = self._get_cursor(inputs)\n        if new:\n            await run_sync_in_worker_thread(cursor.execute, **inputs)\n\n        result = await run_sync_in_worker_thread(cursor.fetchall)\n        return result\n\n    @sync_compatible\n    async def execute(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        **execution_options: Dict[str, Any],\n    ) -> None:\n        \"\"\"\n        Executes an operation on the database. This method is intended to be used\n        for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n\n        Unlike the fetch methods, this method will always execute the operation\n        upon calling.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            **execution_options: Additional options to pass to `connection.execute`.\n\n        Examples:\n            Execute operation with parameters:\n            ```python\n            from prefect_gcp.bigquery import BigQueryWarehouse\n\n            with BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n                operation = '''\n                    CREATE TABLE mydataset.trips AS (\n                    SELECT\n                        bikeid,\n                        start_time,\n                        duration_minutes\n                    FROM\n                        bigquery-public-data.austin_bikeshare.bikeshare_trips\n                    LIMIT %(limit)s\n                    );\n                '''\n                warehouse.execute(operation, parameters={\"limit\": 5})\n            ```\n        \"\"\"\n        inputs = dict(\n            operation=operation,\n            parameters=parameters,\n            **execution_options,\n        )\n        cursor = self._get_cursor(inputs)[1]\n        await run_sync_in_worker_thread(cursor.execute, **inputs)\n\n    @sync_compatible\n    async def execute_many(\n        self,\n        operation: str,\n        seq_of_parameters: List[Dict[str, Any]],\n    ) -> None:\n        \"\"\"\n        Executes many operations on the database. 
This method is intended to be used\n        for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n\n        Unlike the fetch methods, this method will always execute the operations\n        upon calling.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            seq_of_parameters: The sequence of parameters for the operation.\n\n        Examples:\n            Create mytable in mydataset and insert two rows into it:\n            ```python\n            from prefect_gcp.bigquery import BigQueryWarehouse\n\n            with BigQueryWarehouse.load(\"bigquery\") as warehouse:\n                create_operation = '''\n                CREATE TABLE IF NOT EXISTS mydataset.mytable (\n                    col1 STRING,\n                    col2 INTEGER,\n                    col3 BOOLEAN\n                )\n                '''\n                warehouse.execute(create_operation)\n                insert_operation = '''\n                INSERT INTO mydataset.mytable (col1, col2, col3) VALUES (%s, %s, %s)\n                '''\n                seq_of_parameters = [\n                    (\"a\", 1, True),\n                    (\"b\", 2, False),\n                ]\n                warehouse.execute_many(\n                    insert_operation,\n                    seq_of_parameters=seq_of_parameters\n                )\n            ```\n        \"\"\"\n        inputs = dict(\n            operation=operation,\n            seq_of_parameters=seq_of_parameters,\n        )\n        cursor = self._get_cursor(inputs)[1]\n        await run_sync_in_worker_thread(cursor.executemany, **inputs)\n\n    def close(self):\n        \"\"\"\n        Closes connection and its cursors.\n        \"\"\"\n        try:\n            self.reset_cursors()\n        finally:\n            if self._connection is not None:\n                self._connection.close()\n                self._connection = None\n\n    def __enter__(self):\n        \"\"\"\n        Start a connection upon entry.\n        \"\"\"\n        return self\n\n    def __exit__(self, *args):\n        \"\"\"\n        Closes connection and its cursors upon exit.\n        \"\"\"\n        self.close()\n\n    def __getstate__(self):\n        \"\"\" \"\"\"\n        data = self.__dict__.copy()\n        data.update({k: None for k in {\"_connection\", \"_unique_cursors\"}})\n        return data\n\n    def __setstate__(self, data: dict):\n        \"\"\" \"\"\"\n        self.__dict__.update(data)\n        self._unique_cursors = {}\n        self._start_connection()\n
    "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.BigQueryWarehouse.close","title":"close","text":"

    Closes connection and its cursors.

    Source code in prefect_gcp/bigquery.py
    def close(self):\n    \"\"\"\n    Closes connection and its cursors.\n    \"\"\"\n    try:\n        self.reset_cursors()\n    finally:\n        if self._connection is not None:\n            self._connection.close()\n            self._connection = None\n
    "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.BigQueryWarehouse.execute","title":"execute async","text":"

    Executes an operation on the database. This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE.

    Unlike the fetch methods, this method will always execute the operation upon calling.

    Parameters:

    Name Type Description Default operation str

    The SQL query or other operation to be executed.

    required parameters Optional[Dict[str, Any]]

    The parameters for the operation.

    None **execution_options Dict[str, Any]

    Additional options to pass to connection.execute.

    {}

    Examples:

    Execute operation with parameters:

    from prefect_gcp.bigquery import BigQueryWarehouse\n\nwith BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n    operation = '''\n        CREATE TABLE mydataset.trips AS (\n        SELECT\n            bikeid,\n            start_time,\n            duration_minutes\n        FROM\n            bigquery-public-data.austin_bikeshare.bikeshare_trips\n        LIMIT %(limit)s\n        );\n    '''\n    warehouse.execute(operation, parameters={\"limit\": 5})\n

    Source code in prefect_gcp/bigquery.py
    @sync_compatible\nasync def execute(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    **execution_options: Dict[str, Any],\n) -> None:\n    \"\"\"\n    Executes an operation on the database. This method is intended to be used\n    for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n\n    Unlike the fetch methods, this method will always execute the operation\n    upon calling.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        **execution_options: Additional options to pass to `connection.execute`.\n\n    Examples:\n        Execute operation with parameters:\n        ```python\n        from prefect_gcp.bigquery import BigQueryWarehouse\n\n        with BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n            operation = '''\n                CREATE TABLE mydataset.trips AS (\n                SELECT\n                    bikeid,\n                    start_time,\n                    duration_minutes\n                FROM\n                    bigquery-public-data.austin_bikeshare.bikeshare_trips\n                LIMIT %(limit)s\n                );\n            '''\n            warehouse.execute(operation, parameters={\"limit\": 5})\n        ```\n    \"\"\"\n    inputs = dict(\n        operation=operation,\n        parameters=parameters,\n        **execution_options,\n    )\n    cursor = self._get_cursor(inputs)[1]\n    await run_sync_in_worker_thread(cursor.execute, **inputs)\n
    "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.BigQueryWarehouse.execute_many","title":"execute_many async","text":"

    Executes many operations on the database. This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE.

    Unlike the fetch methods, this method will always execute the operations upon calling.

    Parameters:

    Name Type Description Default operation str

    The SQL query or other operation to be executed.

    required seq_of_parameters List[Dict[str, Any]]

    The sequence of parameters for the operation.

    required

    Examples:

    Create mytable in mydataset and insert two rows into it:

    from prefect_gcp.bigquery import BigQueryWarehouse\n\nwith BigQueryWarehouse.load(\"bigquery\") as warehouse:\n    create_operation = '''\n    CREATE TABLE IF NOT EXISTS mydataset.mytable (\n        col1 STRING,\n        col2 INTEGER,\n        col3 BOOLEAN\n    )\n    '''\n    warehouse.execute(create_operation)\n    insert_operation = '''\n    INSERT INTO mydataset.mytable (col1, col2, col3) VALUES (%s, %s, %s)\n    '''\n    seq_of_parameters = [\n        (\"a\", 1, True),\n        (\"b\", 2, False),\n    ]\n    warehouse.execute_many(\n        insert_operation,\n        seq_of_parameters=seq_of_parameters\n    )\n

    Source code in prefect_gcp/bigquery.py
    @sync_compatible\nasync def execute_many(\n    self,\n    operation: str,\n    seq_of_parameters: List[Dict[str, Any]],\n) -> None:\n    \"\"\"\n    Executes many operations on the database. This method is intended to be used\n    for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n\n    Unlike the fetch methods, this method will always execute the operations\n    upon calling.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        seq_of_parameters: The sequence of parameters for the operation.\n\n    Examples:\n        Create mytable in mydataset and insert two rows into it:\n        ```python\n        from prefect_gcp.bigquery import BigQueryWarehouse\n\n        with BigQueryWarehouse.load(\"bigquery\") as warehouse:\n            create_operation = '''\n            CREATE TABLE IF NOT EXISTS mydataset.mytable (\n                col1 STRING,\n                col2 INTEGER,\n                col3 BOOLEAN\n            )\n            '''\n            warehouse.execute(create_operation)\n            insert_operation = '''\n            INSERT INTO mydataset.mytable (col1, col2, col3) VALUES (%s, %s, %s)\n            '''\n            seq_of_parameters = [\n                (\"a\", 1, True),\n                (\"b\", 2, False),\n            ]\n            warehouse.execute_many(\n                insert_operation,\n                seq_of_parameters=seq_of_parameters\n            )\n        ```\n    \"\"\"\n    inputs = dict(\n        operation=operation,\n        seq_of_parameters=seq_of_parameters,\n    )\n    cursor = self._get_cursor(inputs)[1]\n    await run_sync_in_worker_thread(cursor.executemany, **inputs)\n
    "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.BigQueryWarehouse.fetch_all","title":"fetch_all async","text":"

    Fetch all results from the database.

    Repeated calls using the same inputs to any of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset_cursors method is called.

    Parameters:

    Name Type Description Default operation str

    The SQL query or other operation to be executed.

    required parameters Optional[Dict[str, Any]]

    The parameters for the operation.

    None **execution_options Dict[str, Any]

    Additional options to pass to connection.execute.

    {}

    Returns:

    Type Description List[Row]

    A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple.

    Examples:

    Execute operation with parameters, fetching all rows:

    from prefect_gcp.bigquery import BigQueryWarehouse\n\nwith BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n    operation = '''\n        SELECT word, word_count\n        FROM `bigquery-public-data.samples.shakespeare`\n        WHERE corpus = %(corpus)s\n        AND word_count >= %(min_word_count)s\n        ORDER BY word_count DESC\n        LIMIT 3;\n    '''\n    parameters = {\n        \"corpus\": \"romeoandjuliet\",\n        \"min_word_count\": 250,\n    }\n    result = warehouse.fetch_all(operation, parameters=parameters)\n

    Source code in prefect_gcp/bigquery.py
    @sync_compatible\nasync def fetch_all(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    **execution_options: Dict[str, Any],\n) -> List[\"Row\"]:\n    \"\"\"\n    Fetch all results from the database.\n\n    Repeated calls using the same inputs to *any* of the fetch methods of this\n    block will skip executing the operation again, and instead,\n    return the next set of results from the previous execution,\n    until the reset_cursors method is called.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        **execution_options: Additional options to pass to `connection.execute`.\n\n    Returns:\n        A list of tuples containing the data returned by the database,\n            where each row is a tuple and each column is a value in the tuple.\n\n    Examples:\n        Execute operation with parameters, fetching all rows:\n        ```python\n        from prefect_gcp.bigquery import BigQueryWarehouse\n\n        with BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n            operation = '''\n                SELECT word, word_count\n                FROM `bigquery-public-data.samples.shakespeare`\n                WHERE corpus = %(corpus)s\n                AND word_count >= %(min_word_count)s\n                ORDER BY word_count DESC\n                LIMIT 3;\n            '''\n            parameters = {\n                \"corpus\": \"romeoandjuliet\",\n                \"min_word_count\": 250,\n            }\n            result = warehouse.fetch_all(operation, parameters=parameters)\n        ```\n    \"\"\"\n    inputs = dict(\n        operation=operation,\n        parameters=parameters,\n        **execution_options,\n    )\n    new, cursor = self._get_cursor(inputs)\n    if new:\n        await run_sync_in_worker_thread(cursor.execute, **inputs)\n\n    result = await run_sync_in_worker_thread(cursor.fetchall)\n    return result\n
    "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.BigQueryWarehouse.fetch_many","title":"fetch_many async","text":"

    Fetch a limited number of results from the database.

    Repeated calls using the same inputs to any of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset_cursors method is called.

    Parameters:

    Name Type Description Default operation str

    The SQL query or other operation to be executed.

    required parameters Optional[Dict[str, Any]]

    The parameters for the operation.

    None size Optional[int]

    The number of results to return; if None or 0, uses the value of fetch_size configured on the block.

    None **execution_options Dict[str, Any]

    Additional options to pass to connection.execute.

    {}

    Returns:

    Type Description List[Row]

    A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple.

    Examples:

    Execute operation with parameters, fetching two new rows at a time:

    from prefect_gcp.bigquery import BigQueryWarehouse\n\nwith BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n    operation = '''\n        SELECT word, word_count\n        FROM `bigquery-public-data.samples.shakespeare`\n        WHERE corpus = %(corpus)s\n        AND word_count >= %(min_word_count)s\n        ORDER BY word_count DESC\n        LIMIT 6;\n    '''\n    parameters = {\n        \"corpus\": \"romeoandjuliet\",\n        \"min_word_count\": 250,\n    }\n    for _ in range(0, 3):\n        result = warehouse.fetch_many(\n            operation,\n            parameters=parameters,\n            size=2\n        )\n        print(result)\n

    Source code in prefect_gcp/bigquery.py
    @sync_compatible\nasync def fetch_many(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    size: Optional[int] = None,\n    **execution_options: Dict[str, Any],\n) -> List[\"Row\"]:\n    \"\"\"\n    Fetch a limited number of results from the database.\n\n    Repeated calls using the same inputs to *any* of the fetch methods of this\n    block will skip executing the operation again, and instead,\n    return the next set of results from the previous execution,\n    until the reset_cursors method is called.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        size: The number of results to return; if None or 0, uses the value of\n            `fetch_size` configured on the block.\n        **execution_options: Additional options to pass to `connection.execute`.\n\n    Returns:\n        A list of tuples containing the data returned by the database,\n            where each row is a tuple and each column is a value in the tuple.\n\n    Examples:\n        Execute operation with parameters, fetching two new rows at a time:\n        ```python\n        from prefect_gcp.bigquery import BigQueryWarehouse\n\n        with BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n            operation = '''\n                SELECT word, word_count\n                FROM `bigquery-public-data.samples.shakespeare`\n                WHERE corpus = %(corpus)s\n                AND word_count >= %(min_word_count)s\n                ORDER BY word_count DESC\n                LIMIT 6;\n            '''\n            parameters = {\n                \"corpus\": \"romeoandjuliet\",\n                \"min_word_count\": 250,\n            }\n            for _ in range(0, 3):\n                result = warehouse.fetch_many(\n                    operation,\n                    parameters=parameters,\n                    size=2\n                )\n                print(result)\n        ```\n    \"\"\"\n    inputs = dict(\n        operation=operation,\n        parameters=parameters,\n        **execution_options,\n    )\n    new, cursor = self._get_cursor(inputs)\n    if new:\n        await run_sync_in_worker_thread(cursor.execute, **inputs)\n\n    size = size or self.fetch_size\n    result = await run_sync_in_worker_thread(cursor.fetchmany, size=size)\n    return result\n
    "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.BigQueryWarehouse.fetch_one","title":"fetch_one async","text":"

    Fetch a single result from the database.

    Repeated calls using the same inputs to any of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset_cursors method is called.

    Parameters:

    Name Type Description Default operation str

    The SQL query or other operation to be executed.

    required parameters Optional[Dict[str, Any]]

    The parameters for the operation.

    None **execution_options Dict[str, Any]

    Additional options to pass to connection.execute.

    {}

    Returns:

    Type Description Row

    A tuple containing the data for a single row returned by the database, where each column is a value in the tuple.

    Examples:

    Execute operation with parameters, fetching one new row at a time:

    from prefect_gcp.bigquery import BigQueryWarehouse\n\nwith BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n    operation = '''\n        SELECT word, word_count\n        FROM `bigquery-public-data.samples.shakespeare`\n        WHERE corpus = %(corpus)s\n        AND word_count >= %(min_word_count)s\n        ORDER BY word_count DESC\n        LIMIT 3;\n    '''\n    parameters = {\n        \"corpus\": \"romeoandjuliet\",\n        \"min_word_count\": 250,\n    }\n    for _ in range(0, 3):\n        result = warehouse.fetch_one(operation, parameters=parameters)\n        print(result)\n

    Source code in prefect_gcp/bigquery.py
    @sync_compatible\nasync def fetch_one(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    **execution_options: Dict[str, Any],\n) -> \"Row\":\n    \"\"\"\n    Fetch a single result from the database.\n\n    Repeated calls using the same inputs to *any* of the fetch methods of this\n    block will skip executing the operation again, and instead,\n    return the next set of results from the previous execution,\n    until the reset_cursors method is called.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        **execution_options: Additional options to pass to `connection.execute`.\n\n    Returns:\n        A tuple containing the data returned by the database,\n            where each row is a tuple and each column is a value in the tuple.\n\n    Examples:\n        Execute operation with parameters, fetching one new row at a time:\n        ```python\n        from prefect_gcp.bigquery import BigQueryWarehouse\n\n        with BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n            operation = '''\n                SELECT word, word_count\n                FROM `bigquery-public-data.samples.shakespeare`\n                WHERE corpus = %(corpus)s\n                AND word_count >= %(min_word_count)s\n                ORDER BY word_count DESC\n                LIMIT 3;\n            '''\n            parameters = {\n                \"corpus\": \"romeoandjuliet\",\n                \"min_word_count\": 250,\n            }\n            for _ in range(0, 3):\n                result = warehouse.fetch_one(operation, parameters=parameters)\n                print(result)\n        ```\n    \"\"\"\n    inputs = dict(\n        operation=operation,\n        parameters=parameters,\n        **execution_options,\n    )\n    new, cursor = self._get_cursor(inputs)\n    if new:\n        await run_sync_in_worker_thread(cursor.execute, **inputs)\n\n    result = await run_sync_in_worker_thread(cursor.fetchone)\n    return result\n
    "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.BigQueryWarehouse.get_connection","title":"get_connection","text":"

    Get the opened connection to BigQuery.

    Source code in prefect_gcp/bigquery.py
    def get_connection(self) -> \"Connection\":\n    \"\"\"\n    Get the opened connection to BigQuery.\n    \"\"\"\n    return self._connection\n
    "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.BigQueryWarehouse.reset_cursors","title":"reset_cursors","text":"

    Tries to close all opened cursors.
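
    For illustration, resetting cursors forces the next fetch with the same inputs to re-execute the operation instead of advancing the cached cursor; the block name is a hypothetical placeholder:

    from prefect_gcp.bigquery import BigQueryWarehouse\n\nwith BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n    first = warehouse.fetch_one(\"SELECT 1\")\n    warehouse.reset_cursors()  # discard cached cursors\n    again = warehouse.fetch_one(\"SELECT 1\")  # re-executes rather than advancing the old cursor\n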

    Source code in prefect_gcp/bigquery.py
    def reset_cursors(self) -> None:\n    \"\"\"\n    Tries to close all opened cursors.\n    \"\"\"\n    input_hashes = tuple(self._unique_cursors.keys())\n    for input_hash in input_hashes:\n        cursor = self._unique_cursors.pop(input_hash)\n        try:\n            cursor.close()\n        except Exception as exc:\n            self.logger.warning(\n                f\"Failed to close cursor for input hash {input_hash!r}: {exc}\"\n            )\n
    "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.bigquery_create_table","title":"bigquery_create_table async","text":"

    Creates a table in BigQuery. Args: dataset: Name of a dataset in which the table will be created. table: Name of a table to create. schema: Schema to use when creating the table. gcp_credentials: Credentials to use for authentication with GCP. clustering_fields: List of fields to cluster the table by. time_partitioning: bigquery.TimePartitioning object specifying a partitioning of the newly created table. project: Project to initialize the BigQuery Client with; if not provided, will default to the one inferred from your credentials. location: The location of the dataset that will be written to. external_config: The external data source. Returns: Table name. Example:

    from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.bigquery import bigquery_create_table\nfrom google.cloud.bigquery import SchemaField\n@flow\ndef example_bigquery_create_table_flow():\n    gcp_credentials = GcpCredentials(project=\"project\")\n    schema = [\n        SchemaField(\"number\", field_type=\"INTEGER\", mode=\"REQUIRED\"),\n        SchemaField(\"text\", field_type=\"STRING\", mode=\"REQUIRED\"),\n        SchemaField(\"bool\", field_type=\"BOOLEAN\")\n    ]\n    result = bigquery_create_table(\n        dataset=\"dataset\",\n        table=\"test_table\",\n        schema=schema,\n        gcp_credentials=gcp_credentials\n    )\n    return result\nexample_bigquery_create_table_flow()\n

    Source code in prefect_gcp/bigquery.py
    @task\nasync def bigquery_create_table(\n    dataset: str,\n    table: str,\n    gcp_credentials: GcpCredentials,\n    schema: Optional[List[\"SchemaField\"]] = None,\n    clustering_fields: List[str] = None,\n    time_partitioning: \"TimePartitioning\" = None,\n    project: Optional[str] = None,\n    location: str = \"US\",\n    external_config: Optional[\"ExternalConfig\"] = None,\n) -> str:\n    \"\"\"\n    Creates table in BigQuery.\n    Args:\n        dataset: Name of a dataset in that the table will be created.\n        table: Name of a table to create.\n        schema: Schema to use when creating the table.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        clustering_fields: List of fields to cluster the table by.\n        time_partitioning: `bigquery.TimePartitioning` object specifying a partitioning\n            of the newly created table\n        project: Project to initialize the BigQuery Client with; if\n            not provided, will default to the one inferred from your credentials.\n        location: The location of the dataset that will be written to.\n        external_config: The [external data source](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/bigquery_table#nested_external_data_configuration).  # noqa\n    Returns:\n        Table name.\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.bigquery import bigquery_create_table\n        from google.cloud.bigquery import SchemaField\n        @flow\n        def example_bigquery_create_table_flow():\n            gcp_credentials = GcpCredentials(project=\"project\")\n            schema = [\n                SchemaField(\"number\", field_type=\"INTEGER\", mode=\"REQUIRED\"),\n                SchemaField(\"text\", field_type=\"STRING\", mode=\"REQUIRED\"),\n                SchemaField(\"bool\", field_type=\"BOOLEAN\")\n            ]\n            result = bigquery_create_table(\n                dataset=\"dataset\",\n                table=\"test_table\",\n                schema=schema,\n                gcp_credentials=gcp_credentials\n            )\n            return result\n        example_bigquery_create_table_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Creating %s.%s\", dataset, table)\n\n    if not external_config and not schema:\n        raise ValueError(\"Either a schema or an external config must be provided.\")\n\n    client = gcp_credentials.get_bigquery_client(project=project, location=location)\n    try:\n        partial_get_dataset = partial(client.get_dataset, dataset)\n        dataset_ref = await to_thread.run_sync(partial_get_dataset)\n    except NotFound:\n        logger.debug(\"Dataset %s not found, creating\", dataset)\n        partial_create_dataset = partial(client.create_dataset, dataset)\n        dataset_ref = await to_thread.run_sync(partial_create_dataset)\n\n    table_ref = dataset_ref.table(table)\n    try:\n        partial_get_table = partial(client.get_table, table_ref)\n        await to_thread.run_sync(partial_get_table)\n        logger.info(\"%s.%s already exists\", dataset, table)\n    except NotFound:\n        logger.debug(\"Table %s not found, creating\", table)\n        table_obj = Table(table_ref, schema=schema)\n\n        # external data configuration\n        if external_config:\n            table_obj.external_data_configuration = external_config\n\n        # cluster for optimal data sorting/access\n 
       if clustering_fields:\n            table_obj.clustering_fields = clustering_fields\n\n        # partitioning\n        if time_partitioning:\n            table_obj.time_partitioning = time_partitioning\n\n        partial_create_table = partial(client.create_table, table_obj)\n        await to_thread.run_sync(partial_create_table)\n\n    return table\n
    "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.bigquery_insert_stream","title":"bigquery_insert_stream async","text":"

    Insert records in a Google BigQuery table via the streaming API.

    Parameters:

    Name Type Description Default dataset str

    Name of a dataset where the records will be written to.

    required table str

    Name of a table to write to.

    required records List[dict]

    The list of records to insert as rows into the BigQuery table; each item in the list should be a dictionary whose keys correspond to columns in the table.

    required gcp_credentials GcpCredentials

    Credentials to use for authentication with GCP.

    required project Optional[str]

    The project to initialize the BigQuery Client with; if not provided, will default to the one inferred from your credentials.

    None location str

    Location of the dataset that will be written to.

    'US'

    Returns:

    Type Description List

    List of inserted rows.

    Example
    from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.bigquery import bigquery_insert_stream\nfrom google.cloud.bigquery import SchemaField\n\n@flow\ndef example_bigquery_insert_stream_flow():\n    gcp_credentials = GcpCredentials(project=\"project\")\n    records = [\n        {\"number\": 1, \"text\": \"abc\", \"bool\": True},\n        {\"number\": 2, \"text\": \"def\", \"bool\": False},\n    ]\n    result = bigquery_insert_stream(\n        dataset=\"integrations\",\n        table=\"test_table\",\n        records=records,\n        gcp_credentials=gcp_credentials\n    )\n    return result\n\nexample_bigquery_insert_stream_flow()\n
    Source code in prefect_gcp/bigquery.py
    @task\nasync def bigquery_insert_stream(\n    dataset: str,\n    table: str,\n    records: List[dict],\n    gcp_credentials: GcpCredentials,\n    project: Optional[str] = None,\n    location: str = \"US\",\n) -> List:\n    \"\"\"\n    Insert records in a Google BigQuery table via the [streaming\n    API](https://cloud.google.com/bigquery/streaming-data-into-bigquery).\n\n    Args:\n        dataset: Name of a dataset where the records will be written to.\n        table: Name of a table to write to.\n        records: The list of records to insert as rows into the BigQuery table;\n            each item in the list should be a dictionary whose keys correspond to\n            columns in the table.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        project: The project to initialize the BigQuery Client with; if\n            not provided, will default to the one inferred from your credentials.\n        location: Location of the dataset that will be written to.\n\n    Returns:\n        List of inserted rows.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.bigquery import bigquery_insert_stream\n        from google.cloud.bigquery import SchemaField\n\n        @flow\n        def example_bigquery_insert_stream_flow():\n            gcp_credentials = GcpCredentials(project=\"project\")\n            records = [\n                {\"number\": 1, \"text\": \"abc\", \"bool\": True},\n                {\"number\": 2, \"text\": \"def\", \"bool\": False},\n            ]\n            result = bigquery_insert_stream(\n                dataset=\"integrations\",\n                table=\"test_table\",\n                records=records,\n                gcp_credentials=gcp_credentials\n            )\n            return result\n\n        example_bigquery_insert_stream_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Inserting into %s.%s as a stream\", dataset, table)\n\n    client = gcp_credentials.get_bigquery_client(project=project, location=location)\n    table_ref = client.dataset(dataset).table(table)\n    partial_insert = partial(\n        client.insert_rows_json, table=table_ref, json_rows=records\n    )\n    response = await to_thread.run_sync(partial_insert)\n\n    errors = []\n    output = []\n    for row in response:\n        output.append(row)\n        if \"errors\" in row:\n            errors.append(row[\"errors\"])\n\n    if errors:\n        raise ValueError(errors)\n\n    return output\n
    "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.bigquery_load_cloud_storage","title":"bigquery_load_cloud_storage async","text":"

    Loads data from Google Cloud Storage into a BigQuery table.

    Parameters:

    Name Type Description Default dataset str

    The ID of a destination dataset to write the records to.

    required table str

    The name of a destination table to write the records to.

    required uri str

    GCS path to load data from.

    required gcp_credentials GcpCredentials

    Credentials to use for authentication with GCP.

    required schema Optional[List[SchemaField]]

    The schema to use when creating the table.

    None job_config Optional[dict]

    Dictionary of job configuration parameters; note that the parameters provided here must be pickleable (e.g., dataset references will be rejected).

    None project Optional[str]

    The project to initialize the BigQuery Client with; if not provided, will default to the one inferred from your credentials.

    None location str

    Location of the dataset that will be written to.

    'US'

    Returns:

    Type Description LoadJob

    The response from load_table_from_uri.

    Example
    from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.bigquery import bigquery_load_cloud_storage\n\n@flow\ndef example_bigquery_load_cloud_storage_flow():\n    gcp_credentials = GcpCredentials(project=\"project\")\n    result = bigquery_load_cloud_storage(\n        dataset=\"dataset\",\n        table=\"test_table\",\n        uri=\"uri\",\n        gcp_credentials=gcp_credentials\n    )\n    return result\n\nexample_bigquery_load_cloud_storage_flow()\n
    Source code in prefect_gcp/bigquery.py
    @task\nasync def bigquery_load_cloud_storage(\n    dataset: str,\n    table: str,\n    uri: str,\n    gcp_credentials: GcpCredentials,\n    schema: Optional[List[\"SchemaField\"]] = None,\n    job_config: Optional[dict] = None,\n    project: Optional[str] = None,\n    location: str = \"US\",\n) -> \"LoadJob\":\n    \"\"\"\n    Run method for this Task.  Invoked by _calling_ this\n    Task within a Flow context, after initialization.\n    Args:\n        uri: GCS path to load data from.\n        dataset: The id of a destination dataset to write the records to.\n        table: The name of a destination table to write the records to.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        schema: The schema to use when creating the table.\n        job_config: Dictionary of job configuration parameters;\n            note that the parameters provided here must be pickleable\n            (e.g., dataset references will be rejected).\n        project: The project to initialize the BigQuery Client with; if\n            not provided, will default to the one inferred from your credentials.\n        location: Location of the dataset that will be written to.\n\n    Returns:\n        The response from `load_table_from_uri`.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.bigquery import bigquery_load_cloud_storage\n\n        @flow\n        def example_bigquery_load_cloud_storage_flow():\n            gcp_credentials = GcpCredentials(project=\"project\")\n            result = bigquery_load_cloud_storage(\n                dataset=\"dataset\",\n                table=\"test_table\",\n                uri=\"uri\",\n                gcp_credentials=gcp_credentials\n            )\n            return result\n\n        example_bigquery_load_cloud_storage_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Loading into %s.%s from cloud storage\", dataset, table)\n\n    client = gcp_credentials.get_bigquery_client(project=project, location=location)\n    table_ref = client.dataset(dataset).table(table)\n\n    job_config = job_config or {}\n    if \"autodetect\" not in job_config:\n        job_config[\"autodetect\"] = True\n    job_config = LoadJobConfig(**job_config)\n    if schema:\n        job_config.schema = schema\n\n    result = None\n    try:\n        partial_load = partial(\n            _result_sync,\n            client.load_table_from_uri,\n            uri,\n            table_ref,\n            job_config=job_config,\n        )\n        result = await to_thread.run_sync(partial_load)\n    except Exception as exception:\n        logger.exception(exception)\n        if result is not None and result.errors is not None:\n            for error in result.errors:\n                logger.exception(error)\n        raise\n\n    if result is not None:\n        # remove unpickleable attributes\n        result._client = None\n        result._completion_lock = None\n\n    return result\n
    "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.bigquery_load_file","title":"bigquery_load_file async","text":"

    Loads a file into BigQuery.

    Parameters:

    Name Type Description Default dataset str

    ID of a destination dataset to write the records to.

    required table str

    Name of a destination table to write the records to.

    required path Union[str, Path]

    A string or path-like object of the file to be loaded.

    required gcp_credentials GcpCredentials

    Credentials to use for authentication with GCP.

    required schema Optional[List[SchemaField]]

    Schema to use when creating the table.

    None job_config Optional[dict]

    An optional dictionary of job configuration parameters; note that the parameters provided here must be pickleable (e.g., dataset references will be rejected).

    None rewind bool

    If True, seek to the beginning of the file handle before reading the file.

    False size Optional[int]

    Number of bytes to read from the file handle. If size is None or large, resumable upload will be used. Otherwise, multipart upload will be used.

    None project Optional[str]

    Project to initialize the BigQuery Client with; if not provided, will default to the one inferred from your credentials.

    None location str

    Location of the dataset that will be written to.

    'US'

    Returns:

    Type Description LoadJob

    The response from load_table_from_file.

    Example
    from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.bigquery import bigquery_load_file\nfrom google.cloud.bigquery import SchemaField\n\n@flow\ndef example_bigquery_load_file_flow():\n    gcp_credentials = GcpCredentials(project=\"project\")\n    result = bigquery_load_file(\n        dataset=\"dataset\",\n        table=\"test_table\",\n        path=\"path\",\n        gcp_credentials=gcp_credentials\n    )\n    return result\n\nexample_bigquery_load_file_flow()\n
    Source code in prefect_gcp/bigquery.py
    @task\nasync def bigquery_load_file(\n    dataset: str,\n    table: str,\n    path: Union[str, Path],\n    gcp_credentials: GcpCredentials,\n    schema: Optional[List[\"SchemaField\"]] = None,\n    job_config: Optional[dict] = None,\n    rewind: bool = False,\n    size: Optional[int] = None,\n    project: Optional[str] = None,\n    location: str = \"US\",\n) -> \"LoadJob\":\n    \"\"\"\n    Loads file into BigQuery.\n\n    Args:\n        dataset: ID of a destination dataset to write the records to;\n            if not provided here, will default to the one provided at initialization.\n        table: Name of a destination table to write the records to;\n            if not provided here, will default to the one provided at initialization.\n        path: A string or path-like object of the file to be loaded.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        schema: Schema to use when creating the table.\n        job_config: An optional dictionary of job configuration parameters;\n            note that the parameters provided here must be pickleable\n            (e.g., dataset references will be rejected).\n        rewind: if True, seek to the beginning of the file handle\n            before reading the file.\n        size: Number of bytes to read from the file handle. If size is None or large,\n            resumable upload will be used. Otherwise, multipart upload will be used.\n        project: Project to initialize the BigQuery Client with; if\n            not provided, will default to the one inferred from your credentials.\n        location: location of the dataset that will be written to.\n\n    Returns:\n        The response from `load_table_from_file`.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.bigquery import bigquery_load_file\n        from google.cloud.bigquery import SchemaField\n\n        @flow\n        def example_bigquery_load_file_flow():\n            gcp_credentials = GcpCredentials(project=\"project\")\n            result = bigquery_load_file(\n                dataset=\"dataset\",\n                table=\"test_table\",\n                path=\"path\",\n                gcp_credentials=gcp_credentials\n            )\n            return result\n\n        example_bigquery_load_file_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Loading into %s.%s from file\", dataset, table)\n\n    if not os.path.exists(path):\n        raise ValueError(f\"{path} does not exist\")\n    elif not os.path.isfile(path):\n        raise ValueError(f\"{path} is not a file\")\n\n    client = gcp_credentials.get_bigquery_client(project=project)\n    table_ref = client.dataset(dataset).table(table)\n\n    job_config = job_config or {}\n    if \"autodetect\" not in job_config:\n        job_config[\"autodetect\"] = True\n        # TODO: test if autodetect is needed when schema is passed\n    job_config = LoadJobConfig(**job_config)\n    if schema:\n        # TODO: test if schema can be passed directly in job_config\n        job_config.schema = schema\n\n    try:\n        with open(path, \"rb\") as file_obj:\n            partial_load = partial(\n                _result_sync,\n                client.load_table_from_file,\n                file_obj,\n                table_ref,\n                rewind=rewind,\n                size=size,\n                location=location,\n                job_config=job_config,\n            )\n            result 
= await to_thread.run_sync(partial_load)\n    except IOError:\n        logger.exception(f\"Could not open and read from {path}\")\n        raise\n\n    if result is not None:\n        # remove unpickleable attributes\n        result._client = None\n        result._completion_lock = None\n\n    return result\n
    "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.bigquery_query","title":"bigquery_query async","text":"

    Runs a BigQuery query.

    Parameters:

    Name Type Description Default query str

    String of the query to execute.

    required gcp_credentials GcpCredentials

    Credentials to use for authentication with GCP.

    required query_params Optional[List[tuple]]

    List of 3-tuples specifying BigQuery query parameters; currently only scalar query parameters are supported. See the Google documentation for more details on how both the query and the query parameters should be formatted.

    None dry_run_max_bytes Optional[int]

    If provided, the maximum number of bytes the query is allowed to process; this is determined by executing a dry run and raising a RuntimeError if the maximum is exceeded.

    None dataset Optional[str]

    Name of a destination dataset to write the query results to, if you don't want them returned; if provided, table must also be provided.

    None table Optional[str]

    Name of a destination table to write the query results to, if you don't want them returned; if provided, dataset must also be provided.

    None to_dataframe bool

    If True, returns the results of the query as a pandas DataFrame instead of a list of bigquery.table.Row objects.

    False job_config Optional[dict]

    Dictionary of job configuration parameters; note that the parameters provided here must be pickleable (e.g., dataset references will be rejected).

    None project Optional[str]

    The project to initialize the BigQuery Client with; if not provided, will default to the one inferred from your credentials.

    None result_transformer Optional[Callable[[List[Row]], Any]]

    Function that can be passed to transform the result of a query before returning. The function will be passed the list of rows returned by BigQuery for the given query.

    None location str

    Location of the dataset that will be queried.

    'US'

    Returns:

    Type Description Any

    A list of rows, or a pandas DataFrame if to_dataframe is True, matching the query criteria.

    Example

    Queries the public Shakespeare samples dataset for word counts in Romeo and Juliet.

    from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.bigquery import bigquery_query\n\n@flow\ndef example_bigquery_query_flow():\n    gcp_credentials = GcpCredentials(\n        service_account_file=\"/path/to/service/account/keyfile.json\",\n        project=\"project\"\n    )\n    query = '''\n        SELECT word, word_count\n        FROM `bigquery-public-data.samples.shakespeare`\n        WHERE corpus = @corpus\n        AND word_count >= @min_word_count\n        ORDER BY word_count DESC;\n    '''\n    query_params = [\n        (\"corpus\", \"STRING\", \"romeoandjuliet\"),\n        (\"min_word_count\", \"INT64\", 250)\n    ]\n    result = bigquery_query(\n        query, gcp_credentials, query_params=query_params\n    )\n    return result\n\nexample_bigquery_query_flow()\n
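    A further sketch (not from the source) that adds a dry-run byte limit and a destination table; the dataset and table names below are placeholder values.
    from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.bigquery import bigquery_query\n\n@flow\ndef example_bigquery_query_dry_run_flow():\n    gcp_credentials = GcpCredentials(project=\"project\")\n    query = \"SELECT word, word_count FROM `bigquery-public-data.samples.shakespeare`\"\n    return bigquery_query(\n        query,\n        gcp_credentials,\n        dry_run_max_bytes=10_000_000,  # fail fast if the query would scan more than ~10 MB\n        dataset=\"my_dataset\",  # placeholder destination dataset\n        table=\"shakespeare_words\",  # placeholder destination table\n    )\n\nexample_bigquery_query_dry_run_flow()\n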

    Source code in prefect_gcp/bigquery.py
    @task\nasync def bigquery_query(\n    query: str,\n    gcp_credentials: GcpCredentials,\n    query_params: Optional[List[tuple]] = None,  # 3-tuples\n    dry_run_max_bytes: Optional[int] = None,\n    dataset: Optional[str] = None,\n    table: Optional[str] = None,\n    to_dataframe: bool = False,\n    job_config: Optional[dict] = None,\n    project: Optional[str] = None,\n    result_transformer: Optional[Callable[[List[\"Row\"]], Any]] = None,\n    location: str = \"US\",\n) -> Any:\n    \"\"\"\n    Runs a BigQuery query.\n\n    Args:\n        query: String of the query to execute.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        query_params: List of 3-tuples specifying BigQuery query parameters; currently\n            only scalar query parameters are supported.  See the\n            [Google documentation](https://cloud.google.com/bigquery/docs/parameterized-queries#bigquery-query-params-python)\n            for more details on how both the query and the query parameters should be formatted.\n        dry_run_max_bytes: If provided, the maximum number of bytes the query\n            is allowed to process; this will be determined by executing a dry run\n            and raising a `ValueError` if the maximum is exceeded.\n        dataset: Name of a destination dataset to write the query results to,\n            if you don't want them returned; if provided, `table` must also be provided.\n        table: Name of a destination table to write the query results to,\n            if you don't want them returned; if provided, `dataset` must also be provided.\n        to_dataframe: If provided, returns the results of the query as a pandas\n            dataframe instead of a list of `bigquery.table.Row` objects.\n        job_config: Dictionary of job configuration parameters;\n            note that the parameters provided here must be pickleable\n            (e.g., dataset references will be rejected).\n        project: The project to initialize the BigQuery Client with; if not\n            provided, will default to the one inferred from your credentials.\n        result_transformer: Function that can be passed to transform the result of a query before returning. 
The function will be passed the list of rows returned by BigQuery for the given query.\n        location: Location of the dataset that will be queried.\n\n    Returns:\n        A list of rows, or pandas DataFrame if to_dataframe,\n        matching the query criteria.\n\n    Example:\n        Queries the public names database, returning 10 results.\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.bigquery import bigquery_query\n\n        @flow\n        def example_bigquery_query_flow():\n            gcp_credentials = GcpCredentials(\n                service_account_file=\"/path/to/service/account/keyfile.json\",\n                project=\"project\"\n            )\n            query = '''\n                SELECT word, word_count\n                FROM `bigquery-public-data.samples.shakespeare`\n                WHERE corpus = @corpus\n                AND word_count >= @min_word_count\n                ORDER BY word_count DESC;\n            '''\n            query_params = [\n                (\"corpus\", \"STRING\", \"romeoandjuliet\"),\n                (\"min_word_count\", \"INT64\", 250)\n            ]\n            result = bigquery_query(\n                query, gcp_credentials, query_params=query_params\n            )\n            return result\n\n        example_bigquery_query_flow()\n        ```\n    \"\"\"  # noqa\n    logger = get_run_logger()\n    logger.info(\"Running BigQuery query\")\n\n    client = gcp_credentials.get_bigquery_client(project=project, location=location)\n\n    # setup job config\n    job_config = QueryJobConfig(**job_config or {})\n    if query_params is not None:\n        job_config.query_parameters = [ScalarQueryParameter(*qp) for qp in query_params]\n\n    # perform dry_run if requested\n    if dry_run_max_bytes is not None:\n        saved_info = dict(\n            dry_run=job_config.dry_run, use_query_cache=job_config.use_query_cache\n        )\n        job_config.dry_run = True\n        job_config.use_query_cache = False\n        partial_query = partial(client.query, query, job_config=job_config)\n        response = await to_thread.run_sync(partial_query)\n        total_bytes_processed = response.total_bytes_processed\n        if total_bytes_processed > dry_run_max_bytes:\n            raise RuntimeError(\n                f\"Query will process {total_bytes_processed} bytes which is above \"\n                f\"the set maximum of {dry_run_max_bytes} for this task.\"\n            )\n        job_config.dry_run = saved_info[\"dry_run\"]\n        job_config.use_query_cache = saved_info[\"use_query_cache\"]\n\n    # if writing to a destination table\n    if dataset is not None:\n        table_ref = client.dataset(dataset).table(table)\n        job_config.destination = table_ref\n\n    partial_query = partial(\n        _result_sync,\n        client.query,\n        query,\n        job_config=job_config,\n    )\n    result = await to_thread.run_sync(partial_query)\n\n    if to_dataframe:\n        return result.to_dataframe()\n    else:\n        if result_transformer:\n            return result_transformer(result)\n        else:\n            return list(result)\n
    "},{"location":"integrations/prefect-gcp/cloud_run/","title":"Cloud Run","text":""},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run","title":"prefect_gcp.cloud_run","text":"

    DEPRECATION WARNING:

    This module is deprecated as of March 2024 and will not be available after September 2024. It has been replaced by the Cloud Run and Cloud Run V2 workers, which offer enhanced functionality and better performance.

    For upgrade instructions, see https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.

    Integrations with Google Cloud Run Job.

    Examples:

    Run a job using Google Cloud Run Jobs:\n```python\nCloudRunJob(\n    image=\"gcr.io/my-project/my-image\",\n    region=\"us-east1\",\n    credentials=my_gcp_credentials\n).run()\n```\n\nRun a job that runs the command `echo hello world` using Google Cloud Run Jobs:\n```python\nCloudRunJob(\n    image=\"gcr.io/my-project/my-image\",\n    region=\"us-east1\",\n    credentials=my_gcp_credentials,\n    command=[\"echo\", \"hello world\"]\n).run()\n```\n
    "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.CloudRunJob","title":"CloudRunJob","text":"

    Bases: Infrastructure

    Infrastructure block used to run GCP Cloud Run Jobs.

    The project name is taken from the Credentials object, so jobs run in whichever GCP project that Credentials block is configured for.

    Note this block is experimental. The interface may change without notice.
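    A minimal configuration sketch (not from the source); the project, image, and region values are placeholders, and memory and memory_unit must be supplied together to set a non-default memory limit.
    from prefect_gcp import GcpCredentials\nfrom prefect_gcp.cloud_run import CloudRunJob\n\n# Placeholder credentials; point these at your own GCP project.\ngcp_credentials = GcpCredentials(project=\"my-project\")\n\ncloud_run_job = CloudRunJob(\n    image=\"gcr.io/my-project/my-image\",\n    region=\"us-east1\",\n    credentials=gcp_credentials,\n    memory=2,\n    memory_unit=\"Gi\",  # memory and memory_unit must both be set\n)\n\ncloud_run_job.run()\n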

    Source code in prefect_gcp/cloud_run.py
    @deprecated_class(\n    start_date=\"Mar 2024\",\n    help=(\n        \"Use the Cloud Run or Cloud Run v2 worker instead.\"\n        \" Refer to the upgrade guide for more information:\"\n        \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\"\n    ),\n)\nclass CloudRunJob(Infrastructure):\n    \"\"\"\n    <span class=\"badge-api experimental\"/>\n\n    Infrastructure block used to run GCP Cloud Run Jobs.\n\n    Project name information is provided by the Credentials object, and should always\n    be correct as long as the Credentials object is for the correct project.\n\n    Note this block is experimental. The interface may change without notice.\n    \"\"\"\n\n    _block_type_slug = \"cloud-run-job\"\n    _block_type_name = \"GCP Cloud Run Job\"\n    _description = \"Infrastructure block used to run GCP Cloud Run Jobs. Note this block is experimental. The interface may change without notice.\"  # noqa\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/10424e311932e31c477ac2b9ef3d53cefbaad708-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.CloudRunJob\"  # noqa: E501\n\n    type: Literal[\"cloud-run-job\"] = Field(\n        \"cloud-run-job\", description=\"The slug for this task type.\"\n    )\n    image: str = Field(\n        ...,\n        title=\"Image Name\",\n        description=(\n            \"The image to use for a new Cloud Run Job. This value must \"\n            \"refer to an image within either Google Container Registry \"\n            \"or Google Artifact Registry, like `gcr.io/<project_name>/<repo>/`.\"\n        ),\n    )\n    region: str = Field(..., description=\"The region where the Cloud Run Job resides.\")\n    credentials: GcpCredentials  # cannot be Field; else it shows as Json\n\n    # Job settings\n    cpu: Optional[int] = Field(\n        default=None,\n        title=\"CPU\",\n        description=(\n            \"The amount of compute allocated to the Cloud Run Job. \"\n            \"The int must be valid based on the rules specified at \"\n            \"https://cloud.google.com/run/docs/configuring/cpu#setting-jobs .\"\n        ),\n    )\n    memory: Optional[int] = Field(\n        default=None,\n        title=\"Memory\",\n        description=\"The amount of memory allocated to the Cloud Run Job.\",\n    )\n    memory_unit: Optional[Literal[\"G\", \"Gi\", \"M\", \"Mi\"]] = Field(\n        default=None,\n        title=\"Memory Units\",\n        description=(\n            \"The unit of memory. 
See \"\n            \"https://cloud.google.com/run/docs/configuring/memory-limits#setting \"\n            \"for additional details.\"\n        ),\n    )\n    vpc_connector_name: Optional[str] = Field(\n        default=None,\n        title=\"VPC Connector Name\",\n        description=\"The name of the VPC connector to use for the Cloud Run Job.\",\n    )\n    args: Optional[List[str]] = Field(\n        default=None,\n        description=(\n            \"Arguments to be passed to your Cloud Run Job's entrypoint command.\"\n        ),\n    )\n    env: Dict[str, str] = Field(\n        default_factory=dict,\n        description=\"Environment variables to be passed to your Cloud Run Job.\",\n    )\n\n    # Cleanup behavior\n    keep_job: Optional[bool] = Field(\n        default=False,\n        title=\"Keep Job After Completion\",\n        description=\"Keep the completed Cloud Run Job on Google Cloud Platform.\",\n    )\n    timeout: Optional[int] = Field(\n        default=600,\n        gt=0,\n        le=3600,\n        title=\"Job Timeout\",\n        description=(\n            \"The length of time that Prefect will wait for a Cloud Run Job to complete \"\n            \"before raising an exception.\"\n        ),\n    )\n    max_retries: Optional[int] = Field(\n        default=3,\n        ge=0,\n        le=10,\n        title=\"Max Retries\",\n        description=(\n            \"The maximum retries setting specifies the number of times a task is \"\n            \"allowed to restart in case of failure before being failed permanently.\"\n        ),\n    )\n    # For private use\n    _job_name: str = None\n    _execution: Optional[Execution] = None\n\n    @property\n    def job_name(self):\n        \"\"\"Create a unique and valid job name.\"\"\"\n\n        if self._job_name is None:\n            # get `repo` from `gcr.io/<project_name>/repo/other`\n            components = self.image.split(\"/\")\n            image_name = components[2]\n            # only alphanumeric and '-' allowed for a job name\n            modified_image_name = image_name.replace(\":\", \"-\").replace(\".\", \"-\")\n            # make 50 char limit for final job name, which will be '<name>-<uuid>'\n            if len(modified_image_name) > 17:\n                modified_image_name = modified_image_name[:17]\n            name = f\"{modified_image_name}-{uuid4().hex}\"\n            self._job_name = name\n\n        return self._job_name\n\n    @property\n    def memory_string(self):\n        \"\"\"Returns the string expected for memory resources argument.\"\"\"\n        if self.memory and self.memory_unit:\n            return str(self.memory) + self.memory_unit\n        return None\n\n    @validator(\"image\")\n    def _remove_image_spaces(cls, value):\n        \"\"\"Deal with spaces in image names.\"\"\"\n        if value is not None:\n            return value.strip()\n\n    @root_validator\n    def _check_valid_memory(cls, values):\n        \"\"\"Make sure memory conforms to expected values for API.\n        See: https://cloud.google.com/run/docs/configuring/memory-limits#setting\n        \"\"\"  # noqa\n        if (values.get(\"memory\") is not None and values.get(\"memory_unit\") is None) or (\n            values.get(\"memory_unit\") is not None and values.get(\"memory\") is None\n        ):\n            raise ValueError(\n                \"A memory value and unit must both be supplied to specify a memory\"\n                \" value other than the default memory value.\"\n            )\n        return values\n\n    def 
get_corresponding_worker_type(self) -> str:\n        \"\"\"Return the corresponding worker type for this infrastructure block.\"\"\"\n        return \"cloud-run\"\n\n    async def generate_work_pool_base_job_template(self) -> dict:\n        \"\"\"\n        Generate a base job template for a cloud-run work pool with the same\n        configuration as this block.\n\n        Returns:\n            - dict: a base job template for a cloud-run work pool\n        \"\"\"\n        base_job_template = await get_default_base_job_template_for_infrastructure_type(\n            self.get_corresponding_worker_type(),\n        )\n        assert (\n            base_job_template is not None\n        ), \"Failed to generate default base job template for Cloud Run worker.\"\n        for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n            if key == \"command\":\n                base_job_template[\"variables\"][\"properties\"][\"command\"][\n                    \"default\"\n                ] = shlex.join(value)\n            elif key in [\n                \"type\",\n                \"block_type_slug\",\n                \"_block_document_id\",\n                \"_block_document_name\",\n                \"_is_anonymous\",\n                \"memory_unit\",\n            ]:\n                continue\n            elif key == \"credentials\":\n                if not self.credentials._block_document_id:\n                    raise BlockNotSavedError(\n                        \"It looks like you are trying to use a block that\"\n                        \" has not been saved. Please call `.save` on your block\"\n                        \" before publishing it as a work pool.\"\n                    )\n                base_job_template[\"variables\"][\"properties\"][\"credentials\"][\n                    \"default\"\n                ] = {\n                    \"$ref\": {\n                        \"block_document_id\": str(self.credentials._block_document_id)\n                    }\n                }\n            elif key == \"memory\" and self.memory_string:\n                base_job_template[\"variables\"][\"properties\"][\"memory\"][\n                    \"default\"\n                ] = self.memory_string\n            elif key == \"cpu\" and self.cpu is not None:\n                base_job_template[\"variables\"][\"properties\"][\"cpu\"][\n                    \"default\"\n                ] = f\"{self.cpu * 1000}m\"\n            elif key == \"args\":\n                # Not a default variable, but we can add it to the template\n                base_job_template[\"variables\"][\"properties\"][\"args\"] = {\n                    \"title\": \"Arguments\",\n                    \"type\": \"string\",\n                    \"description\": \"Arguments to be passed to your Cloud Run Job's entrypoint command.\",  # noqa\n                    \"default\": value,\n                }\n                base_job_template[\"job_configuration\"][\"job_body\"][\"spec\"][\"template\"][\n                    \"spec\"\n                ][\"template\"][\"spec\"][\"containers\"][0][\"args\"] = \"{{ args }}\"\n            elif key in base_job_template[\"variables\"][\"properties\"]:\n                base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n            else:\n                self.logger.warning(\n                    f\"Variable {key!r} is not supported by Cloud Run work pools.\"\n                    \" Skipping.\"\n                )\n\n        return base_job_template\n\n    def 
_create_job_error(self, exc):\n        \"\"\"Provides a nicer error for 404s when trying to create a Cloud Run Job.\"\"\"\n        # TODO consider lookup table instead of the if/else,\n        # also check for documented errors\n        if exc.status_code == 404:\n            raise RuntimeError(\n                f\"Failed to find resources at {exc.uri}. Confirm that region\"\n                f\" '{self.region}' is the correct region for your Cloud Run Job and\"\n                f\" that {self.credentials.project} is the correct GCP project. If\"\n                f\" your project ID is not correct, you are using a Credentials block\"\n                f\" with permissions for the wrong project.\"\n            ) from exc\n        raise exc\n\n    def _job_run_submission_error(self, exc):\n        \"\"\"Provides a nicer error for 404s when submitting job runs.\"\"\"\n        if exc.status_code == 404:\n            pat1 = r\"The requested URL [^ ]+ was not found on this server\"\n            # pat2 = (\n            #     r\"Resource '[^ ]+' of kind 'JOB' in region '[\\w\\-0-9]+' \"\n            #     r\"in project '[\\w\\-0-9]+' does not exist\"\n            # )\n            if re.findall(pat1, str(exc)):\n                raise RuntimeError(\n                    f\"Failed to find resources at {exc.uri}. \"\n                    f\"Confirm that region '{self.region}' is \"\n                    f\"the correct region for your Cloud Run Job \"\n                    f\"and that '{self.credentials.project}' is the \"\n                    f\"correct GCP project. If your project ID is not \"\n                    f\"correct, you are using a Credentials \"\n                    f\"block with permissions for the wrong project.\"\n                ) from exc\n            else:\n                raise exc\n\n        raise exc\n\n    def _cpu_as_k8s_quantity(self) -> str:\n        \"\"\"Return the CPU integer in the format expected by GCP Cloud Run Jobs API.\n        See: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        See also: https://cloud.google.com/run/docs/configuring/cpu#setting-jobs\n        \"\"\"  # noqa\n        return str(self.cpu * 1000) + \"m\"\n\n    @sync_compatible\n    async def run(self, task_status: Optional[TaskStatus] = None):\n        \"\"\"Run the configured job on a Google Cloud Run Job.\"\"\"\n        with self._get_client() as client:\n            await run_sync_in_worker_thread(\n                self._create_job_and_wait_for_registration, client\n            )\n            job_execution = await run_sync_in_worker_thread(\n                self._begin_job_execution, client\n            )\n\n            if task_status:\n                task_status.started(self.job_name)\n\n            result = await run_sync_in_worker_thread(\n                self._watch_job_execution_and_get_result,\n                client,\n                job_execution,\n                5,\n            )\n            return result\n\n    @sync_compatible\n    async def kill(self, identifier: str, grace_seconds: int = 30) -> None:\n        \"\"\"\n        Kill a task running Cloud Run.\n\n        Args:\n            identifier: The Cloud Run Job name. This should match a\n                value yielded by CloudRunJob.run.\n        \"\"\"\n        if grace_seconds != 30:\n            self.logger.warning(\n                f\"Kill grace period of {grace_seconds}s requested, but GCP does not \"\n                \"support dynamic grace period configuration. 
See here for more info: \"\n                \"https://cloud.google.com/run/docs/reference/rest/v1/namespaces.jobs/delete\"  # noqa\n            )\n\n        with self._get_client() as client:\n            await run_sync_in_worker_thread(\n                self._kill_job,\n                client=client,\n                namespace=self.credentials.project,\n                job_name=identifier,\n            )\n\n    def _kill_job(self, client: Resource, namespace: str, job_name: str) -> None:\n        \"\"\"\n        Thin wrapper around Job.delete, wrapping a try/except since\n        Job is an independent class that doesn't have knowledge of\n        CloudRunJob and its associated logic.\n        \"\"\"\n        try:\n            Job.delete(client=client, namespace=namespace, job_name=job_name)\n        except Exception as exc:\n            if \"does not exist\" in str(exc):\n                raise InfrastructureNotFound(\n                    f\"Cannot stop Cloud Run Job; the job name {job_name!r} \"\n                    \"could not be found.\"\n                ) from exc\n            raise\n\n    def _create_job_and_wait_for_registration(self, client: Resource) -> None:\n        \"\"\"Create a new job wait for it to finish registering.\"\"\"\n        try:\n            self.logger.info(f\"Creating Cloud Run Job {self.job_name}\")\n            Job.create(\n                client=client,\n                namespace=self.credentials.project,\n                body=self._jobs_body(),\n            )\n        except googleapiclient.errors.HttpError as exc:\n            self._create_job_error(exc)\n\n        try:\n            self._wait_for_job_creation(client=client, timeout=self.timeout)\n        except Exception:\n            self.logger.exception(\n                \"Encountered an exception while waiting for job run creation\"\n            )\n            if not self.keep_job:\n                self.logger.info(\n                    f\"Deleting Cloud Run Job {self.job_name} from Google Cloud Run.\"\n                )\n                try:\n                    Job.delete(\n                        client=client,\n                        namespace=self.credentials.project,\n                        job_name=self.job_name,\n                    )\n                except Exception:\n                    self.logger.exception(\n                        \"Received an unexpected exception while attempting to delete\"\n                        f\" Cloud Run Job {self.job_name!r}\"\n                    )\n            raise\n\n    def _begin_job_execution(self, client: Resource) -> Execution:\n        \"\"\"Submit a job run for execution and return the execution object.\"\"\"\n        try:\n            self.logger.info(\n                f\"Submitting Cloud Run Job {self.job_name!r} for execution.\"\n            )\n            submission = Job.run(\n                client=client,\n                namespace=self.credentials.project,\n                job_name=self.job_name,\n            )\n\n            job_execution = Execution.get(\n                client=client,\n                namespace=submission[\"metadata\"][\"namespace\"],\n                execution_name=submission[\"metadata\"][\"name\"],\n            )\n\n            command = (\n                \" \".join(self.command) if self.command else \"default container command\"\n            )\n\n            self.logger.info(\n                f\"Cloud Run Job {self.job_name!r}: Running command {command!r}\"\n            )\n        except Exception as exc:\n           
 self._job_run_submission_error(exc)\n\n        return job_execution\n\n    def _watch_job_execution_and_get_result(\n        self, client: Resource, execution: Execution, poll_interval: int\n    ) -> CloudRunJobResult:\n        \"\"\"Wait for execution to complete and then return result.\"\"\"\n        try:\n            job_execution = self._watch_job_execution(\n                client=client,\n                job_execution=execution,\n                timeout=self.timeout,\n                poll_interval=poll_interval,\n            )\n        except Exception:\n            self.logger.exception(\n                \"Received an unexpected exception while monitoring Cloud Run Job \"\n                f\"{self.job_name!r}\"\n            )\n            raise\n\n        if job_execution.succeeded():\n            status_code = 0\n            self.logger.info(f\"Job Run {self.job_name} completed successfully\")\n        else:\n            status_code = 1\n            error_msg = job_execution.condition_after_completion()[\"message\"]\n            self.logger.error(\n                f\"Job Run {self.job_name} did not complete successfully. {error_msg}\"\n            )\n\n        self.logger.info(\n            f\"Job Run logs can be found on GCP at: {job_execution.log_uri}\"\n        )\n\n        if not self.keep_job:\n            self.logger.info(\n                f\"Deleting completed Cloud Run Job {self.job_name!r} from Google Cloud\"\n                \" Run...\"\n            )\n            try:\n                Job.delete(\n                    client=client,\n                    namespace=self.credentials.project,\n                    job_name=self.job_name,\n                )\n            except Exception:\n                self.logger.exception(\n                    \"Received an unexpected exception while attempting to delete Cloud\"\n                    f\" Run Job {self.job_name}\"\n                )\n\n        return CloudRunJobResult(identifier=self.job_name, status_code=status_code)\n\n    def _jobs_body(self) -> dict:\n        \"\"\"Create properly formatted body used for a Job CREATE request.\n        See: https://cloud.google.com/run/docs/reference/rest/v1/namespaces.jobs\n        \"\"\"\n        jobs_metadata = {\"name\": self.job_name}\n\n        annotations = {\n            # See: https://cloud.google.com/run/docs/troubleshooting#launch-stage-validation  # noqa\n            \"run.googleapis.com/launch-stage\": \"BETA\",\n        }\n        # add vpc connector if specified\n        if self.vpc_connector_name:\n            annotations[\n                \"run.googleapis.com/vpc-access-connector\"\n            ] = self.vpc_connector_name\n\n        # env and command here\n        containers = [self._add_container_settings({\"image\": self.image})]\n\n        # apply this timeout to each task\n        timeout_seconds = str(self.timeout)\n\n        body = {\n            \"apiVersion\": \"run.googleapis.com/v1\",\n            \"kind\": \"Job\",\n            \"metadata\": jobs_metadata,\n            \"spec\": {  # JobSpec\n                \"template\": {  # ExecutionTemplateSpec\n                    \"metadata\": {\"annotations\": annotations},\n                    \"spec\": {  # ExecutionSpec\n                        \"template\": {  # TaskTemplateSpec\n                            \"spec\": {\n                                \"containers\": containers,\n                                \"timeoutSeconds\": timeout_seconds,\n                                \"maxRetries\": self.max_retries,\n   
                         }  # TaskSpec\n                        }\n                    },\n                }\n            },\n        }\n        return body\n\n    def preview(self) -> str:\n        \"\"\"Generate a preview of the job definition that will be sent to GCP.\"\"\"\n        body = self._jobs_body()\n        container_settings = body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\n            \"containers\"\n        ][0][\"env\"]\n        body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\"containers\"][0][\"env\"] = [\n            container_setting\n            for container_setting in container_settings\n            if container_setting[\"name\"] != \"PREFECT_API_KEY\"\n        ]\n        return json.dumps(body, indent=2)\n\n    def _watch_job_execution(\n        self, client, job_execution: Execution, timeout: int, poll_interval: int = 5\n    ):\n        \"\"\"\n        Update job_execution status until it is no longer running or timeout is reached.\n        \"\"\"\n        t0 = time.time()\n        while job_execution.is_running():\n            job_execution = Execution.get(\n                client=client,\n                namespace=job_execution.namespace,\n                execution_name=job_execution.name,\n            )\n\n            elapsed_time = time.time() - t0\n            if timeout is not None and elapsed_time > timeout:\n                raise RuntimeError(\n                    f\"Timed out after {elapsed_time}s while waiting for Cloud Run Job \"\n                    \"execution to complete. Your job may still be running on GCP.\"\n                )\n\n            time.sleep(poll_interval)\n\n        return job_execution\n\n    def _wait_for_job_creation(\n        self, client: Resource, timeout: int, poll_interval: int = 5\n    ):\n        \"\"\"Give created job time to register.\"\"\"\n        job = Job.get(\n            client=client, namespace=self.credentials.project, job_name=self.job_name\n        )\n\n        t0 = time.time()\n        while not job.is_ready():\n            ready_condition = (\n                job.ready_condition\n                if job.ready_condition\n                else \"waiting for condition update\"\n            )\n            self.logger.info(\n                f\"Job is not yet ready... Current condition: {ready_condition}\"\n            )\n            job = Job.get(\n                client=client,\n                namespace=self.credentials.project,\n                job_name=self.job_name,\n            )\n\n            elapsed_time = time.time() - t0\n            if timeout is not None and elapsed_time > timeout:\n                raise RuntimeError(\n                    f\"Timed out after {elapsed_time}s while waiting for Cloud Run Job \"\n                    \"execution to complete. 
Your job may still be running on GCP.\"\n                )\n\n            time.sleep(poll_interval)\n\n    def _get_client(self) -> Resource:\n        \"\"\"Get the base client needed for interacting with GCP APIs.\"\"\"\n        # region needed for 'v1' API\n        api_endpoint = f\"https://{self.region}-run.googleapis.com\"\n        gcp_creds = self.credentials.get_credentials_from_service_account()\n        options = ClientOptions(api_endpoint=api_endpoint)\n\n        return discovery.build(\n            \"run\", \"v1\", client_options=options, credentials=gcp_creds\n        ).namespaces()\n\n    # CONTAINER SETTINGS\n    def _add_container_settings(self, base_settings: Dict[str, Any]) -> Dict[str, Any]:\n        \"\"\"\n        Add settings related to containers for Cloud Run Jobs to a dictionary.\n        Includes environment variables, entrypoint command, entrypoint arguments,\n        and cpu and memory limits.\n        See: https://cloud.google.com/run/docs/reference/rest/v1/Container\n        and https://cloud.google.com/run/docs/reference/rest/v1/Container#ResourceRequirements\n        \"\"\"  # noqa\n        container_settings = base_settings.copy()\n        container_settings.update(self._add_env())\n        container_settings.update(self._add_resources())\n        container_settings.update(self._add_command())\n        container_settings.update(self._add_args())\n        return container_settings\n\n    def _add_args(self) -> dict:\n        \"\"\"Set the arguments that will be passed to the entrypoint for a Cloud Run Job.\n        See: https://cloud.google.com/run/docs/reference/rest/v1/Container\n        \"\"\"  # noqa\n        return {\"args\": self.args} if self.args else {}\n\n    def _add_command(self) -> dict:\n        \"\"\"Set the command that a container will run for a Cloud Run Job.\n        See: https://cloud.google.com/run/docs/reference/rest/v1/Container\n        \"\"\"  # noqa\n        return {\"command\": self.command}\n\n    def _add_resources(self) -> dict:\n        \"\"\"Set specified resources limits for a Cloud Run Job.\n        See: https://cloud.google.com/run/docs/reference/rest/v1/Container#ResourceRequirements\n        See also: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        \"\"\"  # noqa\n        resources = {\"limits\": {}, \"requests\": {}}\n\n        if self.cpu is not None:\n            cpu = self._cpu_as_k8s_quantity()\n            resources[\"limits\"][\"cpu\"] = cpu\n            resources[\"requests\"][\"cpu\"] = cpu\n        if self.memory_string is not None:\n            resources[\"limits\"][\"memory\"] = self.memory_string\n            resources[\"requests\"][\"memory\"] = self.memory_string\n\n        return {\"resources\": resources} if resources[\"requests\"] else {}\n\n    def _add_env(self) -> dict:\n        \"\"\"Add environment variables for a Cloud Run Job.\n\n        Method `self._base_environment()` gets necessary Prefect environment variables\n        from the config.\n\n        See: https://cloud.google.com/run/docs/reference/rest/v1/Container#envvar for\n        how environment variables are specified for Cloud Run Jobs.\n        \"\"\"  # noqa\n        env = {**self._base_environment(), **self.env}\n        cloud_run_env = [{\"name\": k, \"value\": v} for k, v in env.items()]\n        return {\"env\": cloud_run_env}\n
    "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.CloudRunJob.job_name","title":"job_name property","text":"

    Create a unique and valid job name.

    "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.CloudRunJob.memory_string","title":"memory_string property","text":"

    Returns the string expected for memory resources argument.

    "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.CloudRunJob.generate_work_pool_base_job_template","title":"generate_work_pool_base_job_template async","text":"

    Generate a base job template for a cloud-run work pool with the same configuration as this block.

    Returns:

    Type Description dict
    A base job template for a cloud-run work pool.
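    A hedged usage sketch (not from the source); it assumes the block was previously saved under a placeholder name so that its credentials carry a block document ID.
    import asyncio\n\nfrom prefect_gcp.cloud_run import CloudRunJob\n\nasync def main():\n    # \"my-cloud-run-job\" is a placeholder for a previously saved block name.\n    cloud_run_job = await CloudRunJob.load(\"my-cloud-run-job\")\n    base_job_template = await cloud_run_job.generate_work_pool_base_job_template()\n    print(list(base_job_template[\"variables\"][\"properties\"]))\n\nasyncio.run(main())\n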
    Source code in prefect_gcp/cloud_run.py
    async def generate_work_pool_base_job_template(self) -> dict:\n    \"\"\"\n    Generate a base job template for a cloud-run work pool with the same\n    configuration as this block.\n\n    Returns:\n        - dict: a base job template for a cloud-run work pool\n    \"\"\"\n    base_job_template = await get_default_base_job_template_for_infrastructure_type(\n        self.get_corresponding_worker_type(),\n    )\n    assert (\n        base_job_template is not None\n    ), \"Failed to generate default base job template for Cloud Run worker.\"\n    for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n        if key == \"command\":\n            base_job_template[\"variables\"][\"properties\"][\"command\"][\n                \"default\"\n            ] = shlex.join(value)\n        elif key in [\n            \"type\",\n            \"block_type_slug\",\n            \"_block_document_id\",\n            \"_block_document_name\",\n            \"_is_anonymous\",\n            \"memory_unit\",\n        ]:\n            continue\n        elif key == \"credentials\":\n            if not self.credentials._block_document_id:\n                raise BlockNotSavedError(\n                    \"It looks like you are trying to use a block that\"\n                    \" has not been saved. Please call `.save` on your block\"\n                    \" before publishing it as a work pool.\"\n                )\n            base_job_template[\"variables\"][\"properties\"][\"credentials\"][\n                \"default\"\n            ] = {\n                \"$ref\": {\n                    \"block_document_id\": str(self.credentials._block_document_id)\n                }\n            }\n        elif key == \"memory\" and self.memory_string:\n            base_job_template[\"variables\"][\"properties\"][\"memory\"][\n                \"default\"\n            ] = self.memory_string\n        elif key == \"cpu\" and self.cpu is not None:\n            base_job_template[\"variables\"][\"properties\"][\"cpu\"][\n                \"default\"\n            ] = f\"{self.cpu * 1000}m\"\n        elif key == \"args\":\n            # Not a default variable, but we can add it to the template\n            base_job_template[\"variables\"][\"properties\"][\"args\"] = {\n                \"title\": \"Arguments\",\n                \"type\": \"string\",\n                \"description\": \"Arguments to be passed to your Cloud Run Job's entrypoint command.\",  # noqa\n                \"default\": value,\n            }\n            base_job_template[\"job_configuration\"][\"job_body\"][\"spec\"][\"template\"][\n                \"spec\"\n            ][\"template\"][\"spec\"][\"containers\"][0][\"args\"] = \"{{ args }}\"\n        elif key in base_job_template[\"variables\"][\"properties\"]:\n            base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n        else:\n            self.logger.warning(\n                f\"Variable {key!r} is not supported by Cloud Run work pools.\"\n                \" Skipping.\"\n            )\n\n    return base_job_template\n
    "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.CloudRunJob.get_corresponding_worker_type","title":"get_corresponding_worker_type","text":"

    Return the corresponding worker type for this infrastructure block.

    Source code in prefect_gcp/cloud_run.py
    def get_corresponding_worker_type(self) -> str:\n    \"\"\"Return the corresponding worker type for this infrastructure block.\"\"\"\n    return \"cloud-run\"\n
    "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.CloudRunJob.kill","title":"kill async","text":"

    Kill a task running on Cloud Run.

    Parameters:

    Name Type Description Default identifier str

    The Cloud Run Job name. This should match a value yielded by CloudRunJob.run.

    required

    Source code in prefect_gcp/cloud_run.py
    @sync_compatible\nasync def kill(self, identifier: str, grace_seconds: int = 30) -> None:\n    \"\"\"\n    Kill a task running Cloud Run.\n\n    Args:\n        identifier: The Cloud Run Job name. This should match a\n            value yielded by CloudRunJob.run.\n    \"\"\"\n    if grace_seconds != 30:\n        self.logger.warning(\n            f\"Kill grace period of {grace_seconds}s requested, but GCP does not \"\n            \"support dynamic grace period configuration. See here for more info: \"\n            \"https://cloud.google.com/run/docs/reference/rest/v1/namespaces.jobs/delete\"  # noqa\n        )\n\n    with self._get_client() as client:\n        await run_sync_in_worker_thread(\n            self._kill_job,\n            client=client,\n            namespace=self.credentials.project,\n            job_name=identifier,\n        )\n
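    A usage sketch (not from the source); the job name passed to kill is a placeholder standing in for an identifier previously reported by run.
    from prefect_gcp import GcpCredentials\nfrom prefect_gcp.cloud_run import CloudRunJob\n\ncloud_run_job = CloudRunJob(\n    image=\"gcr.io/my-project/my-image\",  # placeholder image\n    region=\"us-east1\",\n    credentials=GcpCredentials(project=\"my-project\"),\n)\n\n# \"my-image-abc123\" stands in for a job name yielded by a previous run.\ncloud_run_job.kill(\"my-image-abc123\", grace_seconds=30)\n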
    "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.CloudRunJob.preview","title":"preview","text":"

    Generate a preview of the job definition that will be sent to GCP.
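    A short sketch (not from the source) of inspecting the job body locally before running; all values are placeholders and no GCP call is made.
    from prefect_gcp import GcpCredentials\nfrom prefect_gcp.cloud_run import CloudRunJob\n\ncloud_run_job = CloudRunJob(\n    image=\"gcr.io/my-project/my-image\",  # placeholder\n    region=\"us-east1\",\n    credentials=GcpCredentials(project=\"my-project\"),\n    env={\"MY_SETTING\": \"value\"},\n)\n\n# Prints the JSON job body that would be sent to the Cloud Run v1 API,\n# with any PREFECT_API_KEY entry removed from the container env.\nprint(cloud_run_job.preview())\n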

    Source code in prefect_gcp/cloud_run.py
    def preview(self) -> str:\n    \"\"\"Generate a preview of the job definition that will be sent to GCP.\"\"\"\n    body = self._jobs_body()\n    container_settings = body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\n        \"containers\"\n    ][0][\"env\"]\n    body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\"containers\"][0][\"env\"] = [\n        container_setting\n        for container_setting in container_settings\n        if container_setting[\"name\"] != \"PREFECT_API_KEY\"\n    ]\n    return json.dumps(body, indent=2)\n
    "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.CloudRunJob.run","title":"run async","text":"

    Run the configured job on a Google Cloud Run Job.
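    A run sketch (not from the source); run is sync_compatible, so it can be called directly inside a flow, and the image, region, and project values are placeholders.
    from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.cloud_run import CloudRunJob\n\n@flow\ndef example_cloud_run_job_flow():\n    cloud_run_job = CloudRunJob(\n        image=\"gcr.io/my-project/my-image\",  # placeholder\n        region=\"us-east1\",\n        credentials=GcpCredentials(project=\"my-project\"),\n        command=[\"echo\", \"hello world\"],\n    )\n    return cloud_run_job.run()\n\nexample_cloud_run_job_flow()\n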

    Source code in prefect_gcp/cloud_run.py
    @sync_compatible\nasync def run(self, task_status: Optional[TaskStatus] = None):\n    \"\"\"Run the configured job on a Google Cloud Run Job.\"\"\"\n    with self._get_client() as client:\n        await run_sync_in_worker_thread(\n            self._create_job_and_wait_for_registration, client\n        )\n        job_execution = await run_sync_in_worker_thread(\n            self._begin_job_execution, client\n        )\n\n        if task_status:\n            task_status.started(self.job_name)\n\n        result = await run_sync_in_worker_thread(\n            self._watch_job_execution_and_get_result,\n            client,\n            job_execution,\n            5,\n        )\n        return result\n
    "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.CloudRunJobResult","title":"CloudRunJobResult","text":"

    Bases: InfrastructureResult

    Result from a Cloud Run Job.

    Source code in prefect_gcp/cloud_run.py
    class CloudRunJobResult(InfrastructureResult):\n    \"\"\"Result from a Cloud Run Job.\"\"\"\n
    "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Execution","title":"Execution","text":"

    Bases: BaseModel

    Utility class to call GCP executions API and interact with the returned objects.
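    An illustrative sketch (not from the source) of polling an execution with this class; region, project, and execution name are placeholders, and application default credentials are assumed for the client.
    import time\n\nfrom google.api_core.client_options import ClientOptions\nfrom googleapiclient import discovery\n\nfrom prefect_gcp.cloud_run import Execution\n\n# Namespaced Cloud Run v1 client, mirroring the block's private _get_client helper.\noptions = ClientOptions(api_endpoint=\"https://us-east1-run.googleapis.com\")\nclient = discovery.build(\"run\", \"v1\", client_options=options).namespaces()\n\nexecution = Execution.get(\n    client=client, namespace=\"my-project\", execution_name=\"my-job-abc123\"\n)\nwhile execution.is_running():\n    time.sleep(5)\n    execution = Execution.get(\n        client=client, namespace=\"my-project\", execution_name=\"my-job-abc123\"\n    )\nprint(execution.succeeded(), execution.log_uri)\n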

    Source code in prefect_gcp/cloud_run.py
    class Execution(BaseModel):\n    \"\"\"\n    Utility class to call GCP `executions` API and\n    interact with the returned objects.\n    \"\"\"\n\n    name: str\n    namespace: str\n    metadata: dict\n    spec: dict\n    status: dict\n    log_uri: str\n\n    def is_running(self) -> bool:\n        \"\"\"Returns True if Execution is not completed.\"\"\"\n        return self.status.get(\"completionTime\") is None\n\n    def condition_after_completion(self):\n        \"\"\"Returns Execution condition if Execution has completed.\"\"\"\n        for condition in self.status[\"conditions\"]:\n            if condition[\"type\"] == \"Completed\":\n                return condition\n\n    def succeeded(self):\n        \"\"\"Whether or not the Execution completed is a successful state.\"\"\"\n        completed_condition = self.condition_after_completion()\n        if completed_condition and completed_condition[\"status\"] == \"True\":\n            return True\n\n        return False\n\n    @classmethod\n    def get(cls, client: Resource, namespace: str, execution_name: str):\n        \"\"\"\n        Make a get request to the GCP executions API\n        and return an Execution instance.\n        \"\"\"\n        request = client.executions().get(\n            name=f\"namespaces/{namespace}/executions/{execution_name}\"\n        )\n        response = request.execute()\n\n        return cls(\n            name=response[\"metadata\"][\"name\"],\n            namespace=response[\"metadata\"][\"namespace\"],\n            metadata=response[\"metadata\"],\n            spec=response[\"spec\"],\n            status=response[\"status\"],\n            log_uri=response[\"status\"][\"logUri\"],\n        )\n
    "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Execution.condition_after_completion","title":"condition_after_completion","text":"

    Returns Execution condition if Execution has completed.

    Source code in prefect_gcp/cloud_run.py
    def condition_after_completion(self):\n    \"\"\"Returns Execution condition if Execution has completed.\"\"\"\n    for condition in self.status[\"conditions\"]:\n        if condition[\"type\"] == \"Completed\":\n            return condition\n
    "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Execution.get","title":"get classmethod","text":"

    Make a get request to the GCP executions API and return an Execution instance.

    Source code in prefect_gcp/cloud_run.py
    @classmethod\ndef get(cls, client: Resource, namespace: str, execution_name: str):\n    \"\"\"\n    Make a get request to the GCP executions API\n    and return an Execution instance.\n    \"\"\"\n    request = client.executions().get(\n        name=f\"namespaces/{namespace}/executions/{execution_name}\"\n    )\n    response = request.execute()\n\n    return cls(\n        name=response[\"metadata\"][\"name\"],\n        namespace=response[\"metadata\"][\"namespace\"],\n        metadata=response[\"metadata\"],\n        spec=response[\"spec\"],\n        status=response[\"status\"],\n        log_uri=response[\"status\"][\"logUri\"],\n    )\n
    "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Execution.is_running","title":"is_running","text":"

    Returns True if Execution is not completed.

    Source code in prefect_gcp/cloud_run.py
    def is_running(self) -> bool:\n    \"\"\"Returns True if Execution is not completed.\"\"\"\n    return self.status.get(\"completionTime\") is None\n
    "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Execution.succeeded","title":"succeeded","text":"

    Whether or not the Execution completed in a successful state.

    Source code in prefect_gcp/cloud_run.py
    def succeeded(self):\n    \"\"\"Whether or not the Execution completed is a successful state.\"\"\"\n    completed_condition = self.condition_after_completion()\n    if completed_condition and completed_condition[\"status\"] == \"True\":\n        return True\n\n    return False\n
    "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Job","title":"Job","text":"

    Bases: BaseModel

    Utility class to call GCP jobs API and interact with the returned objects.

    Source code in prefect_gcp/cloud_run.py
    class Job(BaseModel):\n    \"\"\"\n    Utility class to call GCP `jobs` API and\n    interact with the returned objects.\n    \"\"\"\n\n    metadata: dict\n    spec: dict\n    status: dict\n    name: str\n    ready_condition: dict\n    execution_status: dict\n\n    def _is_missing_container(self):\n        \"\"\"\n        Check if Job status is not ready because\n        the specified container cannot be found.\n        \"\"\"\n        if (\n            self.ready_condition.get(\"status\") == \"False\"\n            and self.ready_condition.get(\"reason\") == \"ContainerMissing\"\n        ):\n            return True\n        return False\n\n    def is_ready(self) -> bool:\n        \"\"\"Whether a job is finished registering and ready to be executed\"\"\"\n        if self._is_missing_container():\n            raise Exception(f\"{self.ready_condition['message']}\")\n        return self.ready_condition.get(\"status\") == \"True\"\n\n    def has_execution_in_progress(self) -> bool:\n        \"\"\"See if job has a run in progress.\"\"\"\n        return (\n            self.execution_status == {}\n            or self.execution_status.get(\"completionTimestamp\") is None\n        )\n\n    @staticmethod\n    def _get_ready_condition(job: dict) -> dict:\n        \"\"\"Utility to access JSON field containing ready condition.\"\"\"\n        if job[\"status\"].get(\"conditions\"):\n            for condition in job[\"status\"][\"conditions\"]:\n                if condition[\"type\"] == \"Ready\":\n                    return condition\n\n        return {}\n\n    @staticmethod\n    def _get_execution_status(job: dict):\n        \"\"\"Utility to access JSON field containing execution status.\"\"\"\n        if job[\"status\"].get(\"latestCreatedExecution\"):\n            return job[\"status\"][\"latestCreatedExecution\"]\n\n        return {}\n\n    @classmethod\n    def get(cls, client: Resource, namespace: str, job_name: str):\n        \"\"\"Make a get request to the GCP jobs API and return a Job instance.\"\"\"\n        request = client.jobs().get(name=f\"namespaces/{namespace}/jobs/{job_name}\")\n        response = request.execute()\n\n        return cls(\n            metadata=response[\"metadata\"],\n            spec=response[\"spec\"],\n            status=response[\"status\"],\n            name=response[\"metadata\"][\"name\"],\n            ready_condition=cls._get_ready_condition(response),\n            execution_status=cls._get_execution_status(response),\n        )\n\n    @staticmethod\n    def create(client: Resource, namespace: str, body: dict):\n        \"\"\"Make a create request to the GCP jobs API.\"\"\"\n        request = client.jobs().create(parent=f\"namespaces/{namespace}\", body=body)\n        response = request.execute()\n        return response\n\n    @staticmethod\n    def delete(client: Resource, namespace: str, job_name: str):\n        \"\"\"Make a delete request to the GCP jobs API.\"\"\"\n        request = client.jobs().delete(name=f\"namespaces/{namespace}/jobs/{job_name}\")\n        response = request.execute()\n        return response\n\n    @staticmethod\n    def run(client: Resource, namespace: str, job_name: str):\n        \"\"\"Make a run request to the GCP jobs API.\"\"\"\n        request = client.jobs().run(name=f\"namespaces/{namespace}/jobs/{job_name}\")\n        response = request.execute()\n        return response\n
    "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Job.create","title":"create staticmethod","text":"

    Make a create request to the GCP jobs API.

    Source code in prefect_gcp/cloud_run.py
    @staticmethod\ndef create(client: Resource, namespace: str, body: dict):\n    \"\"\"Make a create request to the GCP jobs API.\"\"\"\n    request = client.jobs().create(parent=f\"namespaces/{namespace}\", body=body)\n    response = request.execute()\n    return response\n
    "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Job.delete","title":"delete staticmethod","text":"

    Make a delete request to the GCP jobs API.

    Source code in prefect_gcp/cloud_run.py
    @staticmethod\ndef delete(client: Resource, namespace: str, job_name: str):\n    \"\"\"Make a delete request to the GCP jobs API.\"\"\"\n    request = client.jobs().delete(name=f\"namespaces/{namespace}/jobs/{job_name}\")\n    response = request.execute()\n    return response\n
    "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Job.get","title":"get classmethod","text":"

    Make a get request to the GCP jobs API and return a Job instance.

    Source code in prefect_gcp/cloud_run.py
    @classmethod\ndef get(cls, client: Resource, namespace: str, job_name: str):\n    \"\"\"Make a get request to the GCP jobs API and return a Job instance.\"\"\"\n    request = client.jobs().get(name=f\"namespaces/{namespace}/jobs/{job_name}\")\n    response = request.execute()\n\n    return cls(\n        metadata=response[\"metadata\"],\n        spec=response[\"spec\"],\n        status=response[\"status\"],\n        name=response[\"metadata\"][\"name\"],\n        ready_condition=cls._get_ready_condition(response),\n        execution_status=cls._get_execution_status(response),\n    )\n
    "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Job.has_execution_in_progress","title":"has_execution_in_progress","text":"

    See if job has a run in progress.

    Source code in prefect_gcp/cloud_run.py
    def has_execution_in_progress(self) -> bool:\n    \"\"\"See if job has a run in progress.\"\"\"\n    return (\n        self.execution_status == {}\n        or self.execution_status.get(\"completionTimestamp\") is None\n    )\n
    "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Job.is_ready","title":"is_ready","text":"

    Whether a job is finished registering and ready to be executed.

    Source code in prefect_gcp/cloud_run.py
    def is_ready(self) -> bool:\n    \"\"\"Whether a job is finished registering and ready to be executed\"\"\"\n    if self._is_missing_container():\n        raise Exception(f\"{self.ready_condition['message']}\")\n    return self.ready_condition.get(\"status\") == \"True\"\n
    "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Job.run","title":"run staticmethod","text":"

    Make a run request to the GCP jobs API.

    Source code in prefect_gcp/cloud_run.py
    @staticmethod\ndef run(client: Resource, namespace: str, job_name: str):\n    \"\"\"Make a run request to the GCP jobs API.\"\"\"\n    request = client.jobs().run(name=f\"namespaces/{namespace}/jobs/{job_name}\")\n    response = request.execute()\n    return response\n
    "},{"location":"integrations/prefect-gcp/cloud_run_worker/","title":"Cloud Run","text":""},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run","title":"prefect_gcp.workers.cloud_run","text":"

    Module containing the Cloud Run worker used for executing flow runs as Cloud Run jobs.

    Get started by creating a Cloud Run work pool:

    prefect work-pool create 'my-cloud-run-pool' --type cloud-run\n

    Then start a Cloud Run worker with the following command:

    prefect worker start --pool 'my-cloud-run-pool'\n
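
    As an illustration (a minimal, hypothetical sketch; the flow, deployment name, and image below are placeholders, not part of this module), a flow can then be deployed to the same work pool so the worker picks up its runs:

    from prefect import flow\n\n\n@flow(log_prints=True)\ndef my_flow():\n    print(\"Hello from Cloud Run!\")\n\n\nif __name__ == \"__main__\":\n    # Hypothetical deployment name and image; build/push behavior depends on your setup.\n    my_flow.deploy(\n        name=\"my-cloud-run-deployment\",\n        work_pool_name=\"my-cloud-run-pool\",\n        image=\"us-docker.pkg.dev/my-project/my-repo/prefect-flows:latest\",\n    )\n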
    "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run--configuration","title":"Configuration","text":"

    Read more about configuring work pools here.

    "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run--advanced-configuration","title":"Advanced Configuration","text":"

    Using a custom Cloud Run job template

    Below is the default job body template used by the Cloud Run Worker:

    {\n    \"apiVersion\": \"run.googleapis.com/v1\",\n    \"kind\": \"Job\",\n    \"metadata\":\n        {\n            \"name\": \"{{ name }}\",\n            \"annotations\":\n            {\n                \"run.googleapis.com/launch-stage\": \"BETA\",\n            }\n        },\n        \"spec\":\n        {\n            \"template\":\n            {\n                \"spec\":\n                {\n                    \"template\":\n                    {\n                        \"spec\":\n                        {\n                            \"containers\":\n                            [\n                                {\n                                    \"image\": \"{{ image }}\",\n                                    \"args\": \"{{ args }}\",\n                                    \"resources\":\n                                    {\n                                        \"limits\":\n                                        {\n                                            \"cpu\": \"{{ cpu }}\",\n                                            \"memory\": \"{{ memory }}\"\n                                        },\n                                        \"requests\":\n                                        {\n                                            \"cpu\": \"{{ cpu }}\",\n                                            \"memory\": \"{{ memory }}\"\n                                        }\n                                    }\n                                }\n                            ],\n                            \"timeoutSeconds\": \"{{ timeout }}\",\n                            \"serviceAccountName\": \"{{ service_account_name }}\"\n                        }\n                    }\n                }\n                }\n            },\n            \"metadata\":\n            {\n                \"annotations\":\n                {\n                    \"run.googleapis.com/vpc-access-connector\": \"{{ vpc_connector_name }}\"\n                }\n            }\n        },\n    },\n    \"timeout\": \"{{ timeout }}\",\n    \"keep_job\": \"{{ keep_job }}\"\n}\n
    Each value enclosed in {{ }} is a placeholder that will be replaced with a value at runtime on a per-deployment basis. The values that can be used as placeholders are defined by the variables schema defined in the base job template.

    The default job body template and the available variables can be customized on a per-work-pool basis (a CLI sketch for exporting and editing a pool's template follows the list below). By editing the default job body template, you can:

    • Add additional placeholders to the default job template
    • Remove placeholders from the default job template
    • Pass values to Cloud Run that are not defined in the variables schema
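
    One way to customize a pool's template (a sketch assuming these work-pool CLI flags are available in your Prefect version; the file name is arbitrary) is to export the default base job template, edit it, and create the work pool from the edited file:

    prefect work-pool get-default-base-job-template --type cloud-run > base-job-template.json\n# edit base-job-template.json as needed, then:\nprefect work-pool create 'my-cloud-run-pool' --type cloud-run --base-job-template base-job-template.json\n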
    "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run--adding-additional-placeholders","title":"Adding additional placeholders","text":"

    For example, to support a new annotation that is not described in the default job template, you can add the following:

    {\n    \"apiVersion\": \"run.googleapis.com/v1\",\n    \"kind\": \"Job\",\n    \"metadata\":\n    {\n        \"name\": \"{{ name }}\",\n        \"annotations\":\n        {\n            \"run.googleapis.com/my-custom-annotation\": \"{{ my_custom_annotation }}\",\n            \"run.googleapis.com/launch-stage\": \"BETA\",\n        },\n      ...\n    },\n  ...\n}\n
    my_custom_annotation can now be used as a placeholder in the job template and set on a per-deployment basis.

    # deployment.yaml\n...\ninfra_overrides: {\"my_custom_annotation\": \"my-custom-value\"}\n

    Additionally, fields can be hard-coded to prevent configuration at the deployment level. For example, to pin the vpc_connector_name field, the placeholder can be removed and replaced with an actual value. All deployments that point to this work pool will then use the same vpc_connector_name value.

    {\n    \"apiVersion\": \"run.googleapis.com/v1\",\n    \"kind\": \"Job\",\n    \"spec\":\n    {\n        \"template\":\n        {\n            \"metadata\":\n            {\n                \"annotations\":\n                {\n                    \"run.googleapis.com/vpc-access-connector\": \"my-vpc-connector\"\n                }\n            },\n            ...\n        },\n        ...\n    }\n}\n
    "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run.CloudRunWorker","title":"CloudRunWorker","text":"

    Bases: BaseWorker

    Prefect worker that executes flow runs within Cloud Run Jobs.

    Source code in prefect_gcp/workers/cloud_run.py
    class CloudRunWorker(BaseWorker):\n    \"\"\"Prefect worker that executes flow runs within Cloud Run Jobs.\"\"\"\n\n    type = \"cloud-run\"\n    job_configuration = CloudRunWorkerJobConfiguration\n    job_configuration_variables = CloudRunWorkerVariables\n    _description = (\n        \"Execute flow runs within containers on Google Cloud Run. Requires \"\n        \"a Google Cloud Platform account.\"\n    )\n    _display_name = \"Google Cloud Run\"\n    _documentation_url = \"https://prefecthq.github.io/prefect-gcp/cloud_run_worker/\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/10424e311932e31c477ac2b9ef3d53cefbaad708-250x250.png\"  # noqa\n\n    def _create_job_error(self, exc, configuration):\n        \"\"\"Provides a nicer error for 404s when trying to create a Cloud Run Job.\"\"\"\n        # TODO consider lookup table instead of the if/else,\n        # also check for documented errors\n        if exc.status_code == 404:\n            raise RuntimeError(\n                f\"Failed to find resources at {exc.uri}. Confirm that region\"\n                f\" '{self.region}' is the correct region for your Cloud Run Job and\"\n                f\" that {configuration.project} is the correct GCP project. If\"\n                f\" your project ID is not correct, you are using a Credentials block\"\n                f\" with permissions for the wrong project.\"\n            ) from exc\n        raise exc\n\n    def _job_run_submission_error(self, exc, configuration):\n        \"\"\"Provides a nicer error for 404s when submitting job runs.\"\"\"\n        if exc.status_code == 404:\n            pat1 = r\"The requested URL [^ ]+ was not found on this server\"\n            # pat2 = (\n            #     r\"Resource '[^ ]+' of kind 'JOB' in region '[\\w\\-0-9]+' \"\n            #     r\"in project '[\\w\\-0-9]+' does not exist\"\n            # )\n            if re.findall(pat1, str(exc)):\n                raise RuntimeError(\n                    f\"Failed to find resources at {exc.uri}. \"\n                    f\"Confirm that region '{self.region}' is \"\n                    f\"the correct region for your Cloud Run Job \"\n                    f\"and that '{configuration.project}' is the \"\n                    f\"correct GCP project. If your project ID is not \"\n                    f\"correct, you are using a Credentials \"\n                    f\"block with permissions for the wrong project.\"\n                ) from exc\n            else:\n                raise exc\n\n        raise exc\n\n    async def run(\n        self,\n        flow_run: \"FlowRun\",\n        configuration: CloudRunWorkerJobConfiguration,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> CloudRunWorkerResult:\n        \"\"\"\n        Executes a flow run within a Cloud Run Job and waits for the flow run\n        to complete.\n\n        Args:\n            flow_run: The flow run to execute\n            configuration: The configuration to use when executing the flow run.\n            task_status: The task status object for the current flow run. 
If provided,\n                the task will be marked as started.\n\n        Returns:\n            CloudRunWorkerResult: A result object containing information about the\n                final state of the flow run\n        \"\"\"\n\n        logger = self.get_flow_run_logger(flow_run)\n\n        with self._get_client(configuration) as client:\n            await run_sync_in_worker_thread(\n                self._create_job_and_wait_for_registration,\n                configuration,\n                client,\n                logger,\n            )\n            job_execution = await run_sync_in_worker_thread(\n                self._begin_job_execution, configuration, client, logger\n            )\n\n            if task_status:\n                task_status.started(configuration.job_name)\n\n            result = await run_sync_in_worker_thread(\n                self._watch_job_execution_and_get_result,\n                configuration,\n                client,\n                job_execution,\n                logger,\n            )\n            return result\n\n    def _get_client(self, configuration: CloudRunWorkerJobConfiguration) -> Resource:\n        \"\"\"Get the base client needed for interacting with GCP APIs.\"\"\"\n        # region needed for 'v1' API\n        api_endpoint = f\"https://{configuration.region}-run.googleapis.com\"\n        gcp_creds = configuration.credentials.get_credentials_from_service_account()\n        options = ClientOptions(api_endpoint=api_endpoint)\n\n        return discovery.build(\n            \"run\", \"v1\", client_options=options, credentials=gcp_creds\n        ).namespaces()\n\n    def _create_job_and_wait_for_registration(\n        self,\n        configuration: CloudRunWorkerJobConfiguration,\n        client: Resource,\n        logger: PrefectLogAdapter,\n    ) -> None:\n        \"\"\"Create a new job wait for it to finish registering.\"\"\"\n        try:\n            logger.info(f\"Creating Cloud Run Job {configuration.job_name}\")\n\n            Job.create(\n                client=client,\n                namespace=configuration.credentials.project,\n                body=configuration.job_body,\n            )\n        except googleapiclient.errors.HttpError as exc:\n            self._create_job_error(exc, configuration)\n\n        try:\n            self._wait_for_job_creation(\n                client=client, configuration=configuration, logger=logger\n            )\n        except Exception:\n            logger.exception(\n                \"Encountered an exception while waiting for job run creation\"\n            )\n            if not configuration.keep_job:\n                logger.info(\n                    f\"Deleting Cloud Run Job {configuration.job_name} from \"\n                    \"Google Cloud Run.\"\n                )\n                try:\n                    Job.delete(\n                        client=client,\n                        namespace=configuration.credentials.project,\n                        job_name=configuration.job_name,\n                    )\n                except Exception:\n                    logger.exception(\n                        \"Received an unexpected exception while attempting to delete\"\n                        f\" Cloud Run Job {configuration.job_name!r}\"\n                    )\n            raise\n\n    def _begin_job_execution(\n        self,\n        configuration: CloudRunWorkerJobConfiguration,\n        client: Resource,\n        logger: PrefectLogAdapter,\n    ) -> Execution:\n        \"\"\"Submit a job run 
for execution and return the execution object.\"\"\"\n        try:\n            logger.info(\n                f\"Submitting Cloud Run Job {configuration.job_name!r} for execution.\"\n            )\n            submission = Job.run(\n                client=client,\n                namespace=configuration.project,\n                job_name=configuration.job_name,\n            )\n\n            job_execution = Execution.get(\n                client=client,\n                namespace=submission[\"metadata\"][\"namespace\"],\n                execution_name=submission[\"metadata\"][\"name\"],\n            )\n        except Exception as exc:\n            self._job_run_submission_error(exc, configuration)\n\n        return job_execution\n\n    def _watch_job_execution_and_get_result(\n        self,\n        configuration: CloudRunWorkerJobConfiguration,\n        client: Resource,\n        execution: Execution,\n        logger: PrefectLogAdapter,\n        poll_interval: int = 5,\n    ) -> CloudRunWorkerResult:\n        \"\"\"Wait for execution to complete and then return result.\"\"\"\n        try:\n            job_execution = self._watch_job_execution(\n                client=client,\n                job_execution=execution,\n                timeout=configuration.timeout,\n                poll_interval=poll_interval,\n            )\n        except Exception:\n            logger.exception(\n                \"Received an unexpected exception while monitoring Cloud Run Job \"\n                f\"{configuration.job_name!r}\"\n            )\n            raise\n\n        if job_execution.succeeded():\n            status_code = 0\n            logger.info(f\"Job Run {configuration.job_name} completed successfully\")\n        else:\n            status_code = 1\n            error_msg = job_execution.condition_after_completion()[\"message\"]\n            logger.error(\n                \"Job Run {configuration.job_name} did not complete successfully. 
\"\n                f\"{error_msg}\"\n            )\n\n        logger.info(f\"Job Run logs can be found on GCP at: {job_execution.log_uri}\")\n\n        if not configuration.keep_job:\n            logger.info(\n                f\"Deleting completed Cloud Run Job {configuration.job_name!r} \"\n                \"from Google Cloud Run...\"\n            )\n            try:\n                Job.delete(\n                    client=client,\n                    namespace=configuration.project,\n                    job_name=configuration.job_name,\n                )\n            except Exception:\n                logger.exception(\n                    \"Received an unexpected exception while attempting to delete Cloud\"\n                    f\" Run Job {configuration.job_name}\"\n                )\n\n        return CloudRunWorkerResult(\n            identifier=configuration.job_name, status_code=status_code\n        )\n\n    def _watch_job_execution(\n        self, client, job_execution: Execution, timeout: int, poll_interval: int = 5\n    ):\n        \"\"\"\n        Update job_execution status until it is no longer running or timeout is reached.\n        \"\"\"\n        t0 = time.time()\n        while job_execution.is_running():\n            job_execution = Execution.get(\n                client=client,\n                namespace=job_execution.namespace,\n                execution_name=job_execution.name,\n            )\n\n            elapsed_time = time.time() - t0\n            if timeout is not None and elapsed_time > timeout:\n                raise RuntimeError(\n                    f\"Timed out after {elapsed_time}s while waiting for Cloud Run Job \"\n                    \"execution to complete. Your job may still be running on GCP.\"\n                )\n\n            time.sleep(poll_interval)\n\n        return job_execution\n\n    def _wait_for_job_creation(\n        self,\n        client: Resource,\n        configuration: CloudRunWorkerJobConfiguration,\n        logger: PrefectLogAdapter,\n        poll_interval: int = 5,\n    ):\n        \"\"\"Give created job time to register.\"\"\"\n        job = Job.get(\n            client=client,\n            namespace=configuration.project,\n            job_name=configuration.job_name,\n        )\n\n        t0 = time.time()\n        while not job.is_ready():\n            ready_condition = (\n                job.ready_condition\n                if job.ready_condition\n                else \"waiting for condition update\"\n            )\n            logger.info(f\"Job is not yet ready... Current condition: {ready_condition}\")\n            job = Job.get(\n                client=client,\n                namespace=configuration.project,\n                job_name=configuration.job_name,\n            )\n\n            elapsed_time = time.time() - t0\n            if (\n                configuration.timeout is not None\n                and elapsed_time > configuration.timeout\n            ):\n                raise RuntimeError(\n                    f\"Timed out after {elapsed_time}s while waiting for Cloud Run Job \"\n                    \"execution to complete. 
Your job may still be running on GCP.\"\n                )\n\n            time.sleep(poll_interval)\n\n    async def kill_infrastructure(\n        self,\n        infrastructure_pid: str,\n        configuration: CloudRunWorkerJobConfiguration,\n        grace_seconds: int = 30,\n    ):\n        \"\"\"\n        Stops a job for a cancelled flow run based on the provided infrastructure PID\n        and run configuration.\n        \"\"\"\n        if grace_seconds != 30:\n            self._logger.warning(\n                f\"Kill grace period of {grace_seconds}s requested, but GCP does not \"\n                \"support dynamic grace period configuration. See here for more info: \"\n                \"https://cloud.google.com/run/docs/reference/rest/v1/namespaces.jobs/delete\"  # noqa\n            )\n\n        with self._get_client(configuration) as client:\n            await run_sync_in_worker_thread(\n                self._stop_job,\n                client=client,\n                namespace=configuration.project,\n                job_name=infrastructure_pid,\n            )\n\n    def _stop_job(self, client: Resource, namespace: str, job_name: str):\n        try:\n            Job.delete(client=client, namespace=namespace, job_name=job_name)\n        except Exception as exc:\n            if \"does not exist\" in str(exc):\n                raise InfrastructureNotFound(\n                    f\"Cannot stop Cloud Run Job; the job name {job_name!r} \"\n                    \"could not be found.\"\n                ) from exc\n            raise\n
    "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run.CloudRunWorker.kill_infrastructure","title":"kill_infrastructure async","text":"

    Stops a job for a cancelled flow run based on the provided infrastructure PID and run configuration.

    Source code in prefect_gcp/workers/cloud_run.py
    async def kill_infrastructure(\n    self,\n    infrastructure_pid: str,\n    configuration: CloudRunWorkerJobConfiguration,\n    grace_seconds: int = 30,\n):\n    \"\"\"\n    Stops a job for a cancelled flow run based on the provided infrastructure PID\n    and run configuration.\n    \"\"\"\n    if grace_seconds != 30:\n        self._logger.warning(\n            f\"Kill grace period of {grace_seconds}s requested, but GCP does not \"\n            \"support dynamic grace period configuration. See here for more info: \"\n            \"https://cloud.google.com/run/docs/reference/rest/v1/namespaces.jobs/delete\"  # noqa\n        )\n\n    with self._get_client(configuration) as client:\n        await run_sync_in_worker_thread(\n            self._stop_job,\n            client=client,\n            namespace=configuration.project,\n            job_name=infrastructure_pid,\n        )\n
    "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run.CloudRunWorker.run","title":"run async","text":"

    Executes a flow run within a Cloud Run Job and waits for the flow run to complete.

    Parameters:

    Name Type Description Default flow_run FlowRun

    The flow run to execute

    required configuration CloudRunWorkerJobConfiguration

    The configuration to use when executing the flow run.

    required task_status Optional[TaskStatus]

    The task status object for the current flow run. If provided, the task will be marked as started.

    None

    Returns:

    Name Type Description CloudRunWorkerResult CloudRunWorkerResult

    A result object containing information about the final state of the flow run

    Source code in prefect_gcp/workers/cloud_run.py
    async def run(\n    self,\n    flow_run: \"FlowRun\",\n    configuration: CloudRunWorkerJobConfiguration,\n    task_status: Optional[anyio.abc.TaskStatus] = None,\n) -> CloudRunWorkerResult:\n    \"\"\"\n    Executes a flow run within a Cloud Run Job and waits for the flow run\n    to complete.\n\n    Args:\n        flow_run: The flow run to execute\n        configuration: The configuration to use when executing the flow run.\n        task_status: The task status object for the current flow run. If provided,\n            the task will be marked as started.\n\n    Returns:\n        CloudRunWorkerResult: A result object containing information about the\n            final state of the flow run\n    \"\"\"\n\n    logger = self.get_flow_run_logger(flow_run)\n\n    with self._get_client(configuration) as client:\n        await run_sync_in_worker_thread(\n            self._create_job_and_wait_for_registration,\n            configuration,\n            client,\n            logger,\n        )\n        job_execution = await run_sync_in_worker_thread(\n            self._begin_job_execution, configuration, client, logger\n        )\n\n        if task_status:\n            task_status.started(configuration.job_name)\n\n        result = await run_sync_in_worker_thread(\n            self._watch_job_execution_and_get_result,\n            configuration,\n            client,\n            job_execution,\n            logger,\n        )\n        return result\n
    "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run.CloudRunWorkerJobConfiguration","title":"CloudRunWorkerJobConfiguration","text":"

    Bases: BaseJobConfiguration

    Configuration class used by the Cloud Run Worker to create a Cloud Run Job.

    An instance of this class is passed to the Cloud Run worker's run method for each flow run. It contains all information necessary to execute the flow run as a Cloud Run Job.

    Attributes:

    Name Type Description region str

    The region where the Cloud Run Job resides.

    credentials Optional[GcpCredentials]

    The GCP Credentials used to connect to Cloud Run.

    job_body Dict[str, Any]

    The job body used to create the Cloud Run Job.

    timeout Optional[int]

    The length of time that Prefect will wait for a Cloud Run Job.

    keep_job Optional[bool]

    Whether to delete the Cloud Run Job after it completes.

    Source code in prefect_gcp/workers/cloud_run.py
    class CloudRunWorkerJobConfiguration(BaseJobConfiguration):\n    \"\"\"\n    Configuration class used by the Cloud Run Worker to create a Cloud Run Job.\n\n    An instance of this class is passed to the Cloud Run worker's `run` method\n    for each flow run. It contains all information necessary to execute\n    the flow run as a Cloud Run Job.\n\n    Attributes:\n        region: The region where the Cloud Run Job resides.\n        credentials: The GCP Credentials used to connect to Cloud Run.\n        job_body: The job body used to create the Cloud Run Job.\n        timeout: The length of time that Prefect will wait for a Cloud Run Job.\n        keep_job: Whether to delete the Cloud Run Job after it completes.\n    \"\"\"\n\n    region: str = Field(\n        default=\"us-central1\", description=\"The region where the Cloud Run Job resides.\"\n    )\n    credentials: Optional[GcpCredentials] = Field(\n        title=\"GCP Credentials\",\n        default_factory=GcpCredentials,\n        description=\"The GCP Credentials used to connect to Cloud Run. \"\n        \"If not provided credentials will be inferred from \"\n        \"the local environment.\",\n    )\n    job_body: Dict[str, Any] = Field(template=_get_default_job_body_template())\n    timeout: Optional[int] = Field(\n        default=600,\n        gt=0,\n        le=3600,\n        title=\"Job Timeout\",\n        description=(\n            \"The length of time that Prefect will wait for a Cloud Run Job to complete \"\n            \"before raising an exception.\"\n        ),\n    )\n    keep_job: Optional[bool] = Field(\n        default=False,\n        title=\"Keep Job After Completion\",\n        description=\"Keep the completed Cloud Run Job on Google Cloud Platform.\",\n    )\n\n    @property\n    def project(self) -> str:\n        \"\"\"property for accessing the project from the credentials.\"\"\"\n        return self.credentials.project\n\n    @property\n    def job_name(self) -> str:\n        \"\"\"property for accessing the name from the job metadata.\"\"\"\n        return self.job_body[\"metadata\"][\"name\"]\n\n    def prepare_for_flow_run(\n        self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"DeploymentResponse\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ):\n        \"\"\"\n        Prepares the job configuration for a flow run.\n\n        Ensures that necessary values are present in the job body and that the\n        job body is valid.\n\n        Args:\n            flow_run: The flow run to prepare the job configuration for\n            deployment: The deployment associated with the flow run used for\n                preparation.\n            flow: The flow associated with the flow run used for preparation.\n        \"\"\"\n        super().prepare_for_flow_run(flow_run, deployment, flow)\n\n        self._populate_envs()\n        self._populate_or_format_command()\n        self._format_args_if_present()\n        self._populate_image_if_not_present()\n        self._populate_name_if_not_present()\n\n    def _populate_envs(self):\n        \"\"\"Populate environment variables. BaseWorker.prepare_for_flow_run handles\n        putting the environment variables in the `env` attribute. 
This method\n        moves them into the jobs body\"\"\"\n        envs = [{\"name\": k, \"value\": v} for k, v in self.env.items()]\n        self.job_body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\"containers\"][0][\n            \"env\"\n        ] = envs\n\n    def _populate_name_if_not_present(self):\n        \"\"\"Adds the flow run name to the job if one is not already provided.\"\"\"\n        try:\n            if \"name\" not in self.job_body[\"metadata\"]:\n                base_job_name = slugify_name(self.name)\n                job_name = f\"{base_job_name}-{uuid4().hex}\"\n                self.job_body[\"metadata\"][\"name\"] = job_name\n        except KeyError:\n            raise ValueError(\"Unable to verify name due to invalid job body template.\")\n\n    def _populate_image_if_not_present(self):\n        \"\"\"Adds the latest prefect image to the job if one is not already provided.\"\"\"\n        try:\n            if (\n                \"image\"\n                not in self.job_body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\n                    \"containers\"\n                ][0]\n            ):\n                self.job_body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\n                    \"containers\"\n                ][0][\"image\"] = f\"docker.io/{get_prefect_image_name()}\"\n        except KeyError:\n            raise ValueError(\"Unable to verify image due to invalid job body template.\")\n\n    def _populate_or_format_command(self):\n        \"\"\"\n        Ensures that the command is present in the job manifest. Populates the command\n        with the `prefect -m prefect.engine` if a command is not present.\n        \"\"\"\n        try:\n            command = self.job_body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\n                \"containers\"\n            ][0].get(\"command\")\n            if command is None:\n                self.job_body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\n                    \"containers\"\n                ][0][\"command\"] = shlex.split(self._base_flow_run_command())\n            elif isinstance(command, str):\n                self.job_body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\n                    \"containers\"\n                ][0][\"command\"] = shlex.split(command)\n        except KeyError:\n            raise ValueError(\n                \"Unable to verify command due to invalid job body template.\"\n            )\n\n    def _format_args_if_present(self):\n        try:\n            args = self.job_body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\n                \"containers\"\n            ][0].get(\"args\")\n            if args is not None and isinstance(args, str):\n                self.job_body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\n                    \"containers\"\n                ][0][\"args\"] = shlex.split(args)\n        except KeyError:\n            raise ValueError(\"Unable to verify args due to invalid job body template.\")\n\n    @validator(\"job_body\")\n    def _ensure_job_includes_all_required_components(cls, value: Dict[str, Any]):\n        \"\"\"\n        Ensures that the job body includes all required components.\n        \"\"\"\n        patch = JsonPatch.from_diff(value, _get_base_job_body())\n        missing_paths = sorted([op[\"path\"] for op in patch if op[\"op\"] == \"add\"])\n        if missing_paths:\n            raise ValueError(\n                \"Job is missing 
required attributes at the following paths: \"\n                f\"{', '.join(missing_paths)}\"\n            )\n        return value\n\n    @validator(\"job_body\")\n    def _ensure_job_has_compatible_values(cls, value: Dict[str, Any]):\n        \"\"\"Ensure that the job body has compatible values.\"\"\"\n        patch = JsonPatch.from_diff(value, _get_base_job_body())\n        incompatible = sorted(\n            [\n                f\"{op['path']} must have value {op['value']!r}\"\n                for op in patch\n                if op[\"op\"] == \"replace\"\n            ]\n        )\n        if incompatible:\n            raise ValueError(\n                \"Job has incompatible values for the following attributes: \"\n                f\"{', '.join(incompatible)}\"\n            )\n        return value\n
    "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run.CloudRunWorkerJobConfiguration.job_name","title":"job_name: str property","text":"

    property for accessing the name from the job metadata.

    "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run.CloudRunWorkerJobConfiguration.project","title":"project: str property","text":"

    property for accessing the project from the credentials.

    "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run.CloudRunWorkerJobConfiguration.prepare_for_flow_run","title":"prepare_for_flow_run","text":"

    Prepares the job configuration for a flow run.

    Ensures that necessary values are present in the job body and that the job body is valid.

    Parameters:

    Name Type Description Default flow_run FlowRun

    The flow run to prepare the job configuration for

    required deployment Optional[DeploymentResponse]

    The deployment associated with the flow run used for preparation.

    None flow Optional[Flow]

    The flow associated with the flow run used for preparation.

    None Source code in prefect_gcp/workers/cloud_run.py
    def prepare_for_flow_run(\n    self,\n    flow_run: \"FlowRun\",\n    deployment: Optional[\"DeploymentResponse\"] = None,\n    flow: Optional[\"Flow\"] = None,\n):\n    \"\"\"\n    Prepares the job configuration for a flow run.\n\n    Ensures that necessary values are present in the job body and that the\n    job body is valid.\n\n    Args:\n        flow_run: The flow run to prepare the job configuration for\n        deployment: The deployment associated with the flow run used for\n            preparation.\n        flow: The flow associated with the flow run used for preparation.\n    \"\"\"\n    super().prepare_for_flow_run(flow_run, deployment, flow)\n\n    self._populate_envs()\n    self._populate_or_format_command()\n    self._format_args_if_present()\n    self._populate_image_if_not_present()\n    self._populate_name_if_not_present()\n
    "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run.CloudRunWorkerResult","title":"CloudRunWorkerResult","text":"

    Bases: BaseWorkerResult

    Contains information about the final state of a completed process.

    Source code in prefect_gcp/workers/cloud_run.py
    class CloudRunWorkerResult(BaseWorkerResult):\n    \"\"\"Contains information about the final state of a completed process\"\"\"\n
    "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run.CloudRunWorkerVariables","title":"CloudRunWorkerVariables","text":"

    Bases: BaseVariables

    Default variables for the Cloud Run worker.

    The schema for this class is used to populate the variables section of the default base job template.

    Source code in prefect_gcp/workers/cloud_run.py
    class CloudRunWorkerVariables(BaseVariables):\n    \"\"\"\n    Default variables for the Cloud Run worker.\n\n    The schema for this class is used to populate the `variables` section of the default\n    base job template.\n    \"\"\"\n\n    region: str = Field(\n        default=\"us-central1\",\n        description=\"The region where the Cloud Run Job resides.\",\n        example=\"us-central1\",\n    )\n    credentials: Optional[GcpCredentials] = Field(\n        title=\"GCP Credentials\",\n        default_factory=GcpCredentials,\n        description=\"The GCP Credentials used to initiate the \"\n        \"Cloud Run Job. If not provided credentials will be \"\n        \"inferred from the local environment.\",\n    )\n    image: Optional[str] = Field(\n        default=None,\n        title=\"Image Name\",\n        description=(\n            \"The image to use for a new Cloud Run Job. \"\n            \"If not set, the latest Prefect image will be used. \"\n            \"See https://cloud.google.com/run/docs/deploying#images.\"\n        ),\n        example=\"docker.io/prefecthq/prefect:2-latest\",\n    )\n    cpu: Optional[str] = Field(\n        default=None,\n        title=\"CPU\",\n        description=(\n            \"The amount of compute allocated to the Cloud Run Job. \"\n            \"(1000m = 1 CPU). See \"\n            \"https://cloud.google.com/run/docs/configuring/cpu#setting-jobs.\"\n        ),\n        example=\"1000m\",\n        regex=r\"^(\\d*000)m$\",\n    )\n    memory: Optional[str] = Field(\n        default=None,\n        title=\"Memory\",\n        description=(\n            \"The amount of memory allocated to the Cloud Run Job. \"\n            \"Must be specified in units of 'G', 'Gi', 'M', or 'Mi'. \"\n            \"See https://cloud.google.com/run/docs/configuring/memory-limits#setting.\"\n        ),\n        example=\"512Mi\",\n        regex=r\"^\\d+(?:G|Gi|M|Mi)$\",\n    )\n    vpc_connector_name: Optional[str] = Field(\n        default=None,\n        title=\"VPC Connector Name\",\n        description=\"The name of the VPC connector to use for the Cloud Run Job.\",\n    )\n    service_account_name: Optional[str] = Field(\n        default=None,\n        title=\"Service Account Name\",\n        description=\"The name of the service account to use for the task execution \"\n        \"of Cloud Run Job. By default Cloud Run jobs run as the default \"\n        \"Compute Engine Service Account. \",\n        example=\"service-account@example.iam.gserviceaccount.com\",\n    )\n    keep_job: Optional[bool] = Field(\n        default=False,\n        title=\"Keep Job After Completion\",\n        description=\"Keep the completed Cloud Run Job after it has run.\",\n    )\n    timeout: Optional[int] = Field(\n        default=600,\n        gt=0,\n        le=3600,\n        title=\"Job Timeout\",\n        description=(\n            \"The length of time that Prefect will wait for Cloud Run Job state changes.\"\n        ),\n    )\n
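
    For illustration (a hypothetical prefect.yaml excerpt; the deployment name, entrypoint, and values are placeholders), these variables can be overridden per deployment through the work pool's job_variables:

    # prefect.yaml (hypothetical excerpt)\ndeployments:\n  - name: my-cloud-run-deployment\n    entrypoint: flows/my_flow.py:my_flow\n    work_pool:\n      name: my-cloud-run-pool\n      job_variables:\n        region: europe-west1\n        cpu: \"2000m\"\n        memory: 1Gi\n        timeout: 1800\n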
    "},{"location":"integrations/prefect-gcp/cloud_run_worker_v2/","title":"Cloud run worker v2","text":""},{"location":"integrations/prefect-gcp/cloud_run_worker_v2/#prefect_gcp.workers.cloud_run_v2","title":"prefect_gcp.workers.cloud_run_v2","text":""},{"location":"integrations/prefect-gcp/cloud_run_worker_v2/#prefect_gcp.workers.cloud_run_v2.CloudRunWorkerJobV2Configuration","title":"CloudRunWorkerJobV2Configuration","text":"

    Bases: BaseJobConfiguration

    The configuration for the Cloud Run worker V2.

    The schema for this class is used to populate the job_body section of the default base job template.

    Source code in prefect_gcp/workers/cloud_run_v2.py
    class CloudRunWorkerJobV2Configuration(BaseJobConfiguration):\n    \"\"\"\n    The configuration for the Cloud Run worker V2.\n\n    The schema for this class is used to populate the `job_body` section of the\n    default base job template.\n    \"\"\"\n\n    credentials: GcpCredentials = Field(\n        title=\"GCP Credentials\",\n        default_factory=GcpCredentials,\n        description=(\n            \"The GCP Credentials used to connect to Cloud Run. \"\n            \"If not provided credentials will be inferred from \"\n            \"the local environment.\"\n        ),\n    )\n    job_body: Dict[str, Any] = Field(\n        template=_get_default_job_body_template(),\n    )\n    keep_job: bool = Field(\n        default=False,\n        title=\"Keep Job After Completion\",\n        description=\"Keep the completed Cloud run job on Google Cloud Platform.\",\n    )\n    region: str = Field(\n        default=\"us-central1\",\n        description=\"The region in which to run the Cloud Run job\",\n    )\n    timeout: int = Field(\n        default=600,\n        gt=0,\n        le=86400,\n        description=(\n            \"The length of time that Prefect will wait for a Cloud Run Job to \"\n            \"complete before raising an exception.\"\n        ),\n    )\n    _job_name: str = PrivateAttr(default=None)\n\n    @property\n    def project(self) -> str:\n        \"\"\"\n        Returns the GCP project associated with the credentials.\n\n        Returns:\n            str: The GCP project associated with the credentials.\n        \"\"\"\n        return self.credentials.project\n\n    @property\n    def job_name(self) -> str:\n        \"\"\"\n        Returns the name of the job.\n\n        Returns:\n            str: The name of the job.\n        \"\"\"\n        if self._job_name is None:\n            base_job_name = slugify_name(self.name)\n            job_name = f\"{base_job_name}-{uuid4().hex}\"\n            self._job_name = job_name\n\n        return self._job_name\n\n    def prepare_for_flow_run(\n        self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"DeploymentResponse\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ):\n        \"\"\"\n        Prepares the job configuration for a flow run.\n\n        Ensures that necessary values are present in the job body and that the\n        job body is valid.\n\n        Args:\n            flow_run: The flow run to prepare the job configuration for\n            deployment: The deployment associated with the flow run used for\n                preparation.\n            flow: The flow associated with the flow run used for preparation.\n        \"\"\"\n        super().prepare_for_flow_run(\n            flow_run=flow_run,\n            deployment=deployment,\n            flow=flow,\n        )\n\n        self._populate_env()\n        self._populate_or_format_command()\n        self._format_args_if_present()\n        self._populate_image_if_not_present()\n        self._populate_timeout()\n        self._remove_vpc_access_if_unset()\n\n    def _populate_timeout(self):\n        \"\"\"\n        Populates the job body with the timeout.\n        \"\"\"\n        self.job_body[\"template\"][\"template\"][\"timeout\"] = f\"{self.timeout}s\"\n\n    def _populate_env(self):\n        \"\"\"\n        Populates the job body with environment variables.\n        \"\"\"\n        envs = [{\"name\": k, \"value\": v} for k, v in self.env.items()]\n\n        self.job_body[\"template\"][\"template\"][\"containers\"][0][\"env\"] = envs\n\n    
def _populate_image_if_not_present(self):\n        \"\"\"\n        Populates the job body with the image if not present.\n        \"\"\"\n        if \"image\" not in self.job_body[\"template\"][\"template\"][\"containers\"][0]:\n            self.job_body[\"template\"][\"template\"][\"containers\"][0][\n                \"image\"\n            ] = f\"docker.io/{get_prefect_image_name()}\"\n\n    def _populate_or_format_command(self):\n        \"\"\"\n        Populates the job body with the command if not present.\n        \"\"\"\n        command = self.job_body[\"template\"][\"template\"][\"containers\"][0].get(\"command\")\n\n        if command is None:\n            self.job_body[\"template\"][\"template\"][\"containers\"][0][\n                \"command\"\n            ] = shlex.split(self._base_flow_run_command())\n        elif isinstance(command, str):\n            self.job_body[\"template\"][\"template\"][\"containers\"][0][\n                \"command\"\n            ] = shlex.split(command)\n\n    def _format_args_if_present(self):\n        \"\"\"\n        Formats the job body args if present.\n        \"\"\"\n        args = self.job_body[\"template\"][\"template\"][\"containers\"][0].get(\"args\")\n\n        if args is not None and isinstance(args, str):\n            self.job_body[\"template\"][\"template\"][\"containers\"][0][\n                \"args\"\n            ] = shlex.split(args)\n\n    def _remove_vpc_access_if_unset(self):\n        \"\"\"\n        Removes vpcAccess if unset.\n        \"\"\"\n\n        if \"vpcAccess\" not in self.job_body[\"template\"][\"template\"]:\n            return\n\n        vpc_access = self.job_body[\"template\"][\"template\"][\"vpcAccess\"]\n\n        # if vpcAccess is unset or connector is unset, remove the entire vpcAccess block\n        # otherwise leave the user provided value.\n        if not vpc_access or (\n            len(vpc_access) == 1\n            and \"connector\" in vpc_access\n            and vpc_access[\"connector\"] is None\n        ):\n            self.job_body[\"template\"][\"template\"].pop(\"vpcAccess\")\n\n    # noinspection PyMethodParameters\n    @validator(\"job_body\")\n    def _ensure_job_includes_all_required_components(cls, value: Dict[str, Any]):\n        \"\"\"\n        Ensures that the job body includes all required components.\n\n        Args:\n            value: The job body to validate.\n        Returns:\n            The validated job body.\n        \"\"\"\n        patch = JsonPatch.from_diff(value, _get_base_job_body())\n\n        missing_paths = sorted([op[\"path\"] for op in patch if op[\"op\"] == \"add\"])\n\n        if missing_paths:\n            raise ValueError(\n                f\"Job body is missing required components: {', '.join(missing_paths)}\"\n            )\n\n        return value\n\n    # noinspection PyMethodParameters\n    @validator(\"job_body\")\n    def _ensure_job_has_compatible_values(cls, value: Dict[str, Any]):\n        \"\"\"Ensure that the job body has compatible values.\"\"\"\n        patch = JsonPatch.from_diff(value, _get_base_job_body())\n        incompatible = sorted(\n            [\n                f\"{op['path']} must have value {op['value']!r}\"\n                for op in patch\n                if op[\"op\"] == \"replace\"\n            ]\n        )\n        if incompatible:\n            raise ValueError(\n                \"Job has incompatible values for the following attributes: \"\n                f\"{', '.join(incompatible)}\"\n            )\n        return value\n
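
    Because the V2 worker registers the work pool type cloud-run-v2 (see CloudRunWorkerV2 below), a V2 work pool and worker can be started the same way as for the V1 worker; the pool name here is only an example:

    prefect work-pool create 'my-cloud-run-v2-pool' --type cloud-run-v2\nprefect worker start --pool 'my-cloud-run-v2-pool'\n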
    "},{"location":"integrations/prefect-gcp/cloud_run_worker_v2/#prefect_gcp.workers.cloud_run_v2.CloudRunWorkerJobV2Configuration.job_name","title":"job_name: str property","text":"

    Returns the name of the job.

    Returns:

    Name Type Description str str

    The name of the job.

    "},{"location":"integrations/prefect-gcp/cloud_run_worker_v2/#prefect_gcp.workers.cloud_run_v2.CloudRunWorkerJobV2Configuration.project","title":"project: str property","text":"

    Returns the GCP project associated with the credentials.

    Returns:

    Name Type Description str str

    The GCP project associated with the credentials.

    "},{"location":"integrations/prefect-gcp/cloud_run_worker_v2/#prefect_gcp.workers.cloud_run_v2.CloudRunWorkerJobV2Configuration.prepare_for_flow_run","title":"prepare_for_flow_run","text":"

    Prepares the job configuration for a flow run.

    Ensures that necessary values are present in the job body and that the job body is valid.

    Parameters:

    Name Type Description Default flow_run FlowRun

    The flow run to prepare the job configuration for

    required deployment Optional[DeploymentResponse]

    The deployment associated with the flow run used for preparation.

    None flow Optional[Flow]

    The flow associated with the flow run used for preparation.

    None Source code in prefect_gcp/workers/cloud_run_v2.py
    def prepare_for_flow_run(\n    self,\n    flow_run: \"FlowRun\",\n    deployment: Optional[\"DeploymentResponse\"] = None,\n    flow: Optional[\"Flow\"] = None,\n):\n    \"\"\"\n    Prepares the job configuration for a flow run.\n\n    Ensures that necessary values are present in the job body and that the\n    job body is valid.\n\n    Args:\n        flow_run: The flow run to prepare the job configuration for\n        deployment: The deployment associated with the flow run used for\n            preparation.\n        flow: The flow associated with the flow run used for preparation.\n    \"\"\"\n    super().prepare_for_flow_run(\n        flow_run=flow_run,\n        deployment=deployment,\n        flow=flow,\n    )\n\n    self._populate_env()\n    self._populate_or_format_command()\n    self._format_args_if_present()\n    self._populate_image_if_not_present()\n    self._populate_timeout()\n    self._remove_vpc_access_if_unset()\n
    "},{"location":"integrations/prefect-gcp/cloud_run_worker_v2/#prefect_gcp.workers.cloud_run_v2.CloudRunWorkerV2","title":"CloudRunWorkerV2","text":"

    Bases: BaseWorker

    The Cloud Run worker V2.

    Source code in prefect_gcp/workers/cloud_run_v2.py
    class CloudRunWorkerV2(BaseWorker):\n    \"\"\"\n    The Cloud Run worker V2.\n    \"\"\"\n\n    type = \"cloud-run-v2\"\n    job_configuration = CloudRunWorkerJobV2Configuration\n    job_configuration_variables = CloudRunWorkerV2Variables\n    _description = \"Execute flow runs within containers on Google Cloud Run (V2 API). Requires a Google Cloud Platform account.\"  # noqa\n    _display_name = \"Google Cloud Run V2\"\n    _documentation_url = \"https://prefecthq.github.io/prefect-gcp/worker_v2/\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/4SpnOBvMYkHp6z939MDKP6/549a91bc1ce9afd4fb12c68db7b68106/social-icon-google-cloud-1200-630.png?h=250\"  # noqa\n\n    async def run(\n        self,\n        flow_run: \"FlowRun\",\n        configuration: CloudRunWorkerJobV2Configuration,\n        task_status: Optional[TaskStatus] = None,\n    ) -> CloudRunJobV2Result:\n        \"\"\"\n        Runs the flow run on Cloud Run and waits for it to complete.\n\n        Args:\n            flow_run: The flow run to run.\n            configuration: The configuration for the job.\n            task_status: The task status to update.\n\n        Returns:\n            The result of the job.\n        \"\"\"\n        logger = self.get_flow_run_logger(flow_run)\n\n        with self._get_client(configuration=configuration) as cr_client:\n            await run_sync_in_worker_thread(\n                self._create_job_and_wait_for_registration,\n                configuration=configuration,\n                cr_client=cr_client,\n                logger=logger,\n            )\n\n            execution = await run_sync_in_worker_thread(\n                self._begin_job_execution,\n                configuration=configuration,\n                cr_client=cr_client,\n                logger=logger,\n            )\n\n            if task_status:\n                task_status.started(configuration.job_name)\n\n            result = await run_sync_in_worker_thread(\n                self._watch_job_execution_and_get_result,\n                configuration=configuration,\n                cr_client=cr_client,\n                execution=execution,\n                logger=logger,\n            )\n\n            return result\n\n    async def kill_infrastructure(\n        self,\n        infrastructure_pid: str,\n        configuration: CloudRunWorkerJobV2Configuration,\n        grace_seconds: int = 30,\n    ):\n        \"\"\"\n        Stops the Cloud Run job.\n\n        Args:\n            infrastructure_pid: The ID of the infrastructure to stop.\n            configuration: The configuration for the job.\n            grace_seconds: The number of seconds to wait before stopping the job.\n        \"\"\"\n        if grace_seconds != 30:\n            self._logger.warning(\n                f\"Kill grace period of {grace_seconds}s requested, but GCP does not \"\n                \"support dynamic grace period configuration. 
See here for more info: \"\n                \"https://cloud.google.com/run/docs/reference/rest/v1/namespaces.jobs/delete\"  # noqa\n            )\n\n        with self._get_client(configuration=configuration) as cr_client:\n            await run_sync_in_worker_thread(\n                self._stop_job,\n                cr_client=cr_client,\n                configuration=configuration,\n                job_name=infrastructure_pid,\n            )\n\n    @staticmethod\n    def _get_client(\n        configuration: CloudRunWorkerJobV2Configuration,\n    ) -> ResourceWarning:\n        \"\"\"\n        Get the base client needed for interacting with GCP Cloud Run V2 API.\n\n        Returns:\n            Resource: The base client needed for interacting with GCP Cloud Run V2 API.\n        \"\"\"\n        api_endpoint = \"https://run.googleapis.com\"\n        gcp_creds = configuration.credentials.get_credentials_from_service_account()\n\n        options = ClientOptions(api_endpoint=api_endpoint)\n\n        return (\n            discovery.build(\n                \"run\",\n                \"v2\",\n                client_options=options,\n                credentials=gcp_creds,\n                num_retries=3,  # Set to 3 in case of intermittent/connection issues\n            )\n            .projects()\n            .locations()\n        )\n\n    def _create_job_and_wait_for_registration(\n        self,\n        configuration: CloudRunWorkerJobV2Configuration,\n        cr_client: Resource,\n        logger: PrefectLogAdapter,\n    ):\n        \"\"\"\n        Creates the Cloud Run job and waits for it to register.\n\n        Args:\n            configuration: The configuration for the job.\n            cr_client: The Cloud Run client.\n            logger: The logger to use.\n        \"\"\"\n        try:\n            logger.info(f\"Creating Cloud Run JobV2 {configuration.job_name}\")\n\n            JobV2.create(\n                cr_client=cr_client,\n                project=configuration.project,\n                location=configuration.region,\n                job_id=configuration.job_name,\n                body=configuration.job_body,\n            )\n        except HttpError as exc:\n            self._create_job_error(\n                exc=exc,\n                configuration=configuration,\n            )\n\n        try:\n            self._wait_for_job_creation(\n                cr_client=cr_client,\n                configuration=configuration,\n                logger=logger,\n            )\n        except Exception as exc:\n            logger.critical(\n                f\"Failed to create Cloud Run JobV2 {configuration.job_name}.\\n{exc}\"\n            )\n\n            if not configuration.keep_job:\n                try:\n                    JobV2.delete(\n                        cr_client=cr_client,\n                        project=configuration.project,\n                        location=configuration.region,\n                        job_name=configuration.job_name,\n                    )\n                except Exception as exc2:\n                    logger.critical(\n                        f\"Failed to delete Cloud Run JobV2 {configuration.job_name}.\"\n                        f\"\\n{exc2}\"\n                    )\n\n            raise\n\n    @staticmethod\n    def _wait_for_job_creation(\n        cr_client: Resource,\n        configuration: CloudRunWorkerJobV2Configuration,\n        logger: PrefectLogAdapter,\n        poll_interval: int = 5,\n    ):\n        \"\"\"\n        Waits for the Cloud Run job to be 
created.\n\n        Args:\n            cr_client: The Cloud Run client.\n            configuration: The configuration for the job.\n            logger: The logger to use.\n            poll_interval: The interval to poll the Cloud Run job, defaults to 5\n                seconds.\n        \"\"\"\n        job = JobV2.get(\n            cr_client=cr_client,\n            project=configuration.project,\n            location=configuration.region,\n            job_name=configuration.job_name,\n        )\n\n        t0 = time.time()\n\n        while not job.is_ready():\n            if not (ready_condition := job.get_ready_condition()):\n                ready_condition = \"waiting for condition update\"\n\n            logger.info(f\"Current Job Condition: {ready_condition}\")\n\n            job = JobV2.get(\n                cr_client=cr_client,\n                project=configuration.project,\n                location=configuration.region,\n                job_name=configuration.job_name,\n            )\n\n            elapsed_time = time.time() - t0\n\n            if elapsed_time > configuration.timeout:\n                raise RuntimeError(\n                    f\"Timeout of {configuration.timeout} seconds reached while \"\n                    f\"waiting for Cloud Run Job V2 {configuration.job_name} to be \"\n                    \"created.\"\n                )\n\n            time.sleep(poll_interval)\n\n    @staticmethod\n    def _create_job_error(\n        exc: HttpError,\n        configuration: CloudRunWorkerJobV2Configuration,\n    ):\n        \"\"\"\n        Creates a formatted error message for the Cloud Run V2 API errors\n        \"\"\"\n        # noinspection PyUnresolvedReferences\n        if exc.status_code == 404:\n            raise RuntimeError(\n                f\"Failed to find resources at {exc.uri}. Confirm that region\"\n                f\" '{configuration.region}' is the correct region for your Cloud\"\n                f\" Run Job and that {configuration.project} is the correct GCP \"\n                f\" project. 
If your project ID is not correct, you are using a \"\n                f\"Credentials block with permissions for the wrong project.\"\n            ) from exc\n\n        raise exc\n\n    def _begin_job_execution(\n        self,\n        cr_client: Resource,\n        configuration: CloudRunWorkerJobV2Configuration,\n        logger: PrefectLogAdapter,\n    ) -> ExecutionV2:\n        \"\"\"\n        Begins the Cloud Run job execution.\n\n        Args:\n            cr_client: The Cloud Run client.\n            configuration: The configuration for the job.\n            logger: The logger to use.\n\n        Returns:\n            The Cloud Run job execution.\n        \"\"\"\n        try:\n            logger.info(\n                f\"Submitting Cloud Run Job V2 {configuration.job_name} for execution...\"\n            )\n\n            submission = JobV2.run(\n                cr_client=cr_client,\n                project=configuration.project,\n                location=configuration.region,\n                job_name=configuration.job_name,\n            )\n\n            job_execution = ExecutionV2.get(\n                cr_client=cr_client,\n                execution_id=submission[\"metadata\"][\"name\"],\n            )\n\n            command = (\n                \" \".join(configuration.command)\n                if configuration.command\n                else \"default container command\"\n            )\n\n            logger.info(\n                f\"Cloud Run Job V2 {configuration.job_name} submitted for execution \"\n                f\"with command: {command}\"\n            )\n\n            return job_execution\n        except Exception as exc:\n            self._job_run_submission_error(\n                exc=exc,\n                configuration=configuration,\n            )\n            raise\n\n    def _watch_job_execution_and_get_result(\n        self,\n        cr_client: Resource,\n        configuration: CloudRunWorkerJobV2Configuration,\n        execution: ExecutionV2,\n        logger: PrefectLogAdapter,\n        poll_interval: int = 5,\n    ) -> CloudRunJobV2Result:\n        \"\"\"\n        Watch the job execution and get the result.\n\n        Args:\n            cr_client (Resource): The base client needed for interacting with GCP\n                Cloud Run V2 API.\n            configuration (CloudRunWorkerJobV2Configuration): The configuration for\n                the job.\n            execution (ExecutionV2): The execution to watch.\n            logger (PrefectLogAdapter): The logger to use.\n            poll_interval (int): The number of seconds to wait between polls.\n                Defaults to 5 seconds.\n\n        Returns:\n            The result of the job.\n        \"\"\"\n        try:\n            execution = self._watch_job_execution(\n                cr_client=cr_client,\n                configuration=configuration,\n                execution=execution,\n                poll_interval=poll_interval,\n            )\n        except Exception as exc:\n            logger.critical(\n                f\"Encountered an exception while waiting for job run completion - \"\n                f\"{exc}\"\n            )\n            raise\n\n        if execution.succeeded():\n            status_code = 0\n            logger.info(f\"Cloud Run Job V2 {configuration.job_name} succeeded\")\n        else:\n            status_code = 1\n            error_mg = execution.condition_after_completion().get(\"message\")\n            logger.error(\n                f\"Cloud Run Job V2 {configuration.job_name} 
failed - {error_mg}\"\n            )\n\n        logger.info(f\"Job run logs can be found on GCP at: {execution.logUri}\")\n\n        if not configuration.keep_job:\n            logger.info(\n                f\"Deleting completed Cloud Run Job {configuration.job_name!r} from \"\n                \"Google Cloud Run...\"\n            )\n\n            try:\n                JobV2.delete(\n                    cr_client=cr_client,\n                    project=configuration.project,\n                    location=configuration.region,\n                    job_name=configuration.job_name,\n                )\n            except Exception as exc:\n                logger.critical(\n                    \"Received an exception while deleting the Cloud Run Job V2 \"\n                    f\"- {configuration.job_name} - {exc}\"\n                )\n\n        return CloudRunJobV2Result(\n            identifier=configuration.job_name,\n            status_code=status_code,\n        )\n\n    # noinspection DuplicatedCode\n    @staticmethod\n    def _watch_job_execution(\n        cr_client: Resource,\n        configuration: CloudRunWorkerJobV2Configuration,\n        execution: ExecutionV2,\n        poll_interval: int,\n    ) -> ExecutionV2:\n        \"\"\"\n        Update execution status until it is no longer running or timeout is reached.\n\n        Args:\n            cr_client (Resource): The base client needed for interacting with GCP\n                Cloud Run V2 API.\n            configuration (CloudRunWorkerJobV2Configuration): The configuration for\n                the job.\n            execution (ExecutionV2): The execution to watch.\n            poll_interval (int): The number of seconds to wait between polls.\n\n        Returns:\n            The execution.\n        \"\"\"\n        t0 = time.time()\n\n        while execution.is_running():\n            execution = ExecutionV2.get(\n                cr_client=cr_client,\n                execution_id=execution.name,\n            )\n\n            elapsed_time = time.time() - t0\n\n            if elapsed_time > configuration.timeout:\n                raise RuntimeError(\n                    f\"Timeout of {configuration.timeout} seconds reached while \"\n                    f\"waiting for Cloud Run Job V2 {configuration.job_name} to \"\n                    \"complete.\"\n                )\n\n            time.sleep(poll_interval)\n\n        return execution\n\n    @staticmethod\n    def _job_run_submission_error(\n        exc: Exception,\n        configuration: CloudRunWorkerJobV2Configuration,\n    ):\n        \"\"\"\n        Creates a formatted error message for the Cloud Run V2 API errors\n\n        Args:\n            exc: The exception to format.\n            configuration: The configuration for the job.\n        \"\"\"\n        # noinspection PyUnresolvedReferences\n        if exc.status_code == 404:\n            pat1 = r\"The requested URL [^ ]+ was not found on this server\"\n\n            if re.findall(pat1, str(exc)):\n                # noinspection PyUnresolvedReferences\n                raise RuntimeError(\n                    f\"Failed to find resources at {exc.uri}. \"\n                    f\"Confirm that region '{configuration.region}' is \"\n                    f\"the correct region for your Cloud Run Job \"\n                    f\"and that '{configuration.project}' is the \"\n                    f\"correct GCP project. 
If your project ID is not \"\n                    f\"correct, you are using a Credentials \"\n                    f\"block with permissions for the wrong project.\"\n                ) from exc\n            else:\n                raise exc\n\n    @staticmethod\n    def _stop_job(\n        cr_client: Resource,\n        configuration: CloudRunWorkerJobV2Configuration,\n        job_name: str,\n    ):\n        \"\"\"\n        Stops/deletes the Cloud Run job.\n\n        Args:\n            cr_client: The Cloud Run client.\n            configuration: The configuration for the job.\n            job_name: The name of the job to stop.\n        \"\"\"\n        try:\n            JobV2.delete(\n                cr_client=cr_client,\n                project=configuration.project,\n                location=configuration.region,\n                job_name=job_name,\n            )\n        except Exception as exc:\n            if \"does not exist\" in str(exc):\n                raise InfrastructureNotFound(\n                    f\"Cannot stop Cloud Run Job; the job name {job_name!r} \"\n                    \"could not be found.\"\n                ) from exc\n            raise\n
    "},{"location":"integrations/prefect-gcp/cloud_run_worker_v2/#prefect_gcp.workers.cloud_run_v2.CloudRunWorkerV2.kill_infrastructure","title":"kill_infrastructure async","text":"

    Stops the Cloud Run job.

    Parameters:

    infrastructure_pid (str): The ID of the infrastructure to stop. Required.

    configuration (CloudRunWorkerJobV2Configuration): The configuration for the job. Required.

    grace_seconds (int): The number of seconds to wait before stopping the job. Defaults to 30.

    Source code in prefect_gcp/workers/cloud_run_v2.py
    async def kill_infrastructure(\n    self,\n    infrastructure_pid: str,\n    configuration: CloudRunWorkerJobV2Configuration,\n    grace_seconds: int = 30,\n):\n    \"\"\"\n    Stops the Cloud Run job.\n\n    Args:\n        infrastructure_pid: The ID of the infrastructure to stop.\n        configuration: The configuration for the job.\n        grace_seconds: The number of seconds to wait before stopping the job.\n    \"\"\"\n    if grace_seconds != 30:\n        self._logger.warning(\n            f\"Kill grace period of {grace_seconds}s requested, but GCP does not \"\n            \"support dynamic grace period configuration. See here for more info: \"\n            \"https://cloud.google.com/run/docs/reference/rest/v1/namespaces.jobs/delete\"  # noqa\n        )\n\n    with self._get_client(configuration=configuration) as cr_client:\n        await run_sync_in_worker_thread(\n            self._stop_job,\n            cr_client=cr_client,\n            configuration=configuration,\n            job_name=infrastructure_pid,\n        )\n
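    Prefect normally calls this method for you when a flow run is cancelled, but as a rough sketch of the call shape (assuming an existing CloudRunWorkerV2 instance and a prepared CloudRunWorkerJobV2Configuration; the job name below is hypothetical):

    ```python
    async def cancel_cloud_run_job(worker, job_configuration) -> None:
        # Sketch only: `worker` is an existing CloudRunWorkerV2 and
        # `job_configuration` an existing CloudRunWorkerJobV2Configuration.
        await worker.kill_infrastructure(
            infrastructure_pid="prefect-job-abc123",  # hypothetical Cloud Run job name
            configuration=job_configuration,
            grace_seconds=30,  # other values only emit a warning; GCP ignores them
        )
    ```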
    "},{"location":"integrations/prefect-gcp/cloud_run_worker_v2/#prefect_gcp.workers.cloud_run_v2.CloudRunWorkerV2.run","title":"run async","text":"

    Runs the flow run on Cloud Run and waits for it to complete.

    Parameters:

    flow_run (FlowRun): The flow run to run. Required.

    configuration (CloudRunWorkerJobV2Configuration): The configuration for the job. Required.

    task_status (Optional[TaskStatus]): The task status to update. Defaults to None.

    Returns:

    CloudRunJobV2Result: The result of the job.

    Source code in prefect_gcp/workers/cloud_run_v2.py
    async def run(\n    self,\n    flow_run: \"FlowRun\",\n    configuration: CloudRunWorkerJobV2Configuration,\n    task_status: Optional[TaskStatus] = None,\n) -> CloudRunJobV2Result:\n    \"\"\"\n    Runs the flow run on Cloud Run and waits for it to complete.\n\n    Args:\n        flow_run: The flow run to run.\n        configuration: The configuration for the job.\n        task_status: The task status to update.\n\n    Returns:\n        The result of the job.\n    \"\"\"\n    logger = self.get_flow_run_logger(flow_run)\n\n    with self._get_client(configuration=configuration) as cr_client:\n        await run_sync_in_worker_thread(\n            self._create_job_and_wait_for_registration,\n            configuration=configuration,\n            cr_client=cr_client,\n            logger=logger,\n        )\n\n        execution = await run_sync_in_worker_thread(\n            self._begin_job_execution,\n            configuration=configuration,\n            cr_client=cr_client,\n            logger=logger,\n        )\n\n        if task_status:\n            task_status.started(configuration.job_name)\n\n        result = await run_sync_in_worker_thread(\n            self._watch_job_execution_and_get_result,\n            configuration=configuration,\n            cr_client=cr_client,\n            execution=execution,\n            logger=logger,\n        )\n\n        return result\n
    "},{"location":"integrations/prefect-gcp/cloud_run_worker_v2/#prefect_gcp.workers.cloud_run_v2.CloudRunWorkerV2Result","title":"CloudRunWorkerV2Result","text":"

    Bases: BaseWorkerResult

    The result of a Cloud Run worker V2 job.

    Source code in prefect_gcp/workers/cloud_run_v2.py
    class CloudRunWorkerV2Result(BaseWorkerResult):\n    \"\"\"\n    The result of a Cloud Run worker V2 job.\n    \"\"\"\n
    "},{"location":"integrations/prefect-gcp/cloud_run_worker_v2/#prefect_gcp.workers.cloud_run_v2.CloudRunWorkerV2Variables","title":"CloudRunWorkerV2Variables","text":"

    Bases: BaseVariables

    Default variables for the Cloud Run worker V2.

    The schema for this class is used to populate the variables section of the default base job template.

    Source code in prefect_gcp/workers/cloud_run_v2.py
    class CloudRunWorkerV2Variables(BaseVariables):\n    \"\"\"\n    Default variables for the Cloud Run worker V2.\n\n    The schema for this class is used to populate the `variables` section of the\n    default base job template.\n    \"\"\"\n\n    credentials: GcpCredentials = Field(\n        title=\"GCP Credentials\",\n        default_factory=GcpCredentials,\n        description=(\n            \"The GCP Credentials used to connect to Cloud Run. \"\n            \"If not provided credentials will be inferred from \"\n            \"the local environment.\"\n        ),\n    )\n    region: str = Field(\n        default=\"us-central1\",\n        description=\"The region in which to run the Cloud Run job\",\n    )\n    image: Optional[str] = Field(\n        default=\"prefecthq/prefect:2-latest\",\n        title=\"Image Name\",\n        description=(\n            \"The image to use for the Cloud Run job. \"\n            \"If not provided the default Prefect image will be used.\"\n        ),\n    )\n    args: List[str] = Field(\n        default_factory=list,\n        description=(\n            \"The arguments to pass to the Cloud Run Job V2's entrypoint command.\"\n        ),\n    )\n    keep_job: bool = Field(\n        default=False,\n        title=\"Keep Job After Completion\",\n        description=\"Keep the completed Cloud run job on Google Cloud Platform.\",\n    )\n    launch_stage: Literal[\n        \"ALPHA\",\n        \"BETA\",\n        \"GA\",\n        \"DEPRECATED\",\n        \"EARLY_ACCESS\",\n        \"PRELAUNCH\",\n        \"UNIMPLEMENTED\",\n        \"LAUNCH_TAG_UNSPECIFIED\",\n    ] = Field(\n        \"BETA\",\n        description=(\n            \"The launch stage of the Cloud Run Job V2. \"\n            \"See https://cloud.google.com/run/docs/about-features-categories \"\n            \"for additional details.\"\n        ),\n    )\n    max_retries: int = Field(\n        default=0,\n        title=\"Max Retries\",\n        description=\"The number of times to retry the Cloud Run job.\",\n    )\n    cpu: str = Field(\n        default=\"1000m\",\n        title=\"CPU\",\n        description=\"The CPU to allocate to the Cloud Run job.\",\n    )\n    memory: str = Field(\n        default=\"512Mi\",\n        title=\"Memory\",\n        description=(\n            \"The memory to allocate to the Cloud Run job along with the units, which\"\n            \"could be: G, Gi, M, Mi.\"\n        ),\n        example=\"512Mi\",\n        pattern=r\"^\\d+(?:G|Gi|M|Mi)$\",\n    )\n    timeout: int = Field(\n        default=600,\n        gt=0,\n        le=86400,\n        title=\"Job Timeout\",\n        description=(\n            \"The length of time that Prefect will wait for a Cloud Run Job to \"\n            \"complete before raising an exception (maximum of 86400 seconds, 1 day).\"\n        ),\n    )\n    vpc_connector_name: Optional[str] = Field(\n        default=None,\n        title=\"VPC Connector Name\",\n        description=\"The name of the VPC connector to use for the Cloud Run job.\",\n    )\n    service_account_name: Optional[str] = Field(\n        default=None,\n        title=\"Service Account Name\",\n        description=(\n            \"The name of the service account to use for the task execution \"\n            \"of Cloud Run Job. By default Cloud Run jobs run as the default \"\n            \"Compute Engine Service Account.\"\n        ),\n        example=\"service-account@example.iam.gserviceaccount.com\",\n    )\n
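    Because these variables form a Pydantic model, one quick, non-authoritative way to inspect the values that would feed a work pool's base job template is to instantiate the class directly; the overrides below (region, memory, timeout) are purely illustrative:

    ```python
    from prefect_gcp.workers.cloud_run_v2 import CloudRunWorkerV2Variables

    # Illustrative overrides; all other fields keep their documented defaults.
    variables = CloudRunWorkerV2Variables(
        region="us-east1",  # default is "us-central1"
        memory="1Gi",       # must match the ^\d+(?:G|Gi|M|Mi)$ pattern
        timeout=1800,       # seconds; must be > 0 and <= 86400
    )
    print(variables.dict())  # .dict() on pydantic v1; use .model_dump() on pydantic v2
    ```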
    "},{"location":"integrations/prefect-gcp/cloud_storage/","title":"Cloud Storage","text":""},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage","title":"prefect_gcp.cloud_storage","text":"

    Tasks for interacting with GCP Cloud Storage.
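    A brief, hedged illustration of calling one of these tasks inside a flow; the task name and its parameters appear in this module, while the credentials block name, bucket, and blob path are placeholders:

    ```python
    import asyncio

    from prefect import flow
    from prefect_gcp import GcpCredentials
    from prefect_gcp.cloud_storage import cloud_storage_upload_blob_from_string

    @flow
    async def upload_greeting():
        # "my-gcp-creds", "my-bucket", and "greetings/hello.txt" are placeholder names.
        gcp_credentials = await GcpCredentials.load("my-gcp-creds")
        return await cloud_storage_upload_blob_from_string(
            data="hello, world",
            bucket="my-bucket",
            blob="greetings/hello.txt",
            gcp_credentials=gcp_credentials,
        )

    if __name__ == "__main__":
        asyncio.run(upload_greeting())
    ```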

    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.DataFrameSerializationFormat","title":"DataFrameSerializationFormat","text":"

    Bases: Enum

    An enumeration class representing the file formats and compression options for upload_from_dataframe

    Attributes:

    CSV: Representation for 'csv' file format with no compression and its related content type and suffix.

    CSV_GZIP: Representation for 'csv' file format with 'gzip' compression and its related content type and suffix.

    PARQUET: Representation for 'parquet' file format with no compression and its related content type and suffix.

    PARQUET_SNAPPY: Representation for 'parquet' file format with 'snappy' compression and its related content type and suffix.

    PARQUET_GZIP: Representation for 'parquet' file format with 'gzip' compression and its related content type and suffix.

    Source code in prefect_gcp/cloud_storage.py
    class DataFrameSerializationFormat(Enum):\n    \"\"\"\n    An enumeration class to represent different file formats,\n    compression options for upload_from_dataframe\n\n    Attributes:\n        CSV: Representation for 'csv' file format with no compression\n            and its related content type and suffix.\n\n        CSV_GZIP: Representation for 'csv' file format with 'gzip' compression\n            and its related content type and suffix.\n\n        PARQUET: Representation for 'parquet' file format with no compression\n            and its related content type and suffix.\n\n        PARQUET_SNAPPY: Representation for 'parquet' file format\n            with 'snappy' compression and its related content type and suffix.\n\n        PARQUET_GZIP: Representation for 'parquet' file format\n            with 'gzip' compression and its related content type and suffix.\n    \"\"\"\n\n    CSV = (\"csv\", None, \"text/csv\", \".csv\")\n    CSV_GZIP = (\"csv\", \"gzip\", \"application/x-gzip\", \".csv.gz\")\n    PARQUET = (\"parquet\", None, \"application/octet-stream\", \".parquet\")\n    PARQUET_SNAPPY = (\n        \"parquet\",\n        \"snappy\",\n        \"application/octet-stream\",\n        \".snappy.parquet\",\n    )\n    PARQUET_GZIP = (\"parquet\", \"gzip\", \"application/octet-stream\", \".gz.parquet\")\n\n    @property\n    def format(self) -> str:\n        \"\"\"The file format of the current instance.\"\"\"\n        return self.value[0]\n\n    @property\n    def compression(self) -> Union[str, None]:\n        \"\"\"The compression type of the current instance.\"\"\"\n        return self.value[1]\n\n    @property\n    def content_type(self) -> str:\n        \"\"\"The content type of the current instance.\"\"\"\n        return self.value[2]\n\n    @property\n    def suffix(self) -> str:\n        \"\"\"The suffix of the file format of the current instance.\"\"\"\n        return self.value[3]\n\n    def fix_extension_with(self, gcs_blob_path: str) -> str:\n        \"\"\"Fix the extension of a GCS blob.\n\n        Args:\n            gcs_blob_path: The path to the GCS blob to be modified.\n\n        Returns:\n            The modified path to the GCS blob with the new extension.\n        \"\"\"\n        gcs_blob_path = PurePosixPath(gcs_blob_path)\n        folder = gcs_blob_path.parent\n        filename = PurePosixPath(gcs_blob_path.stem).with_suffix(self.suffix)\n        return str(folder.joinpath(filename))\n
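    A small sketch of inspecting the enum directly; the printed values follow from the tuples in the source above:

    ```python
    from prefect_gcp.cloud_storage import DataFrameSerializationFormat

    fmt = DataFrameSerializationFormat.CSV_GZIP
    print(fmt.format)        # "csv"
    print(fmt.compression)   # "gzip"
    print(fmt.content_type)  # "application/x-gzip"
    print(fmt.suffix)        # ".csv.gz"

    # String lookup mirrors what upload_from_dataframe does internally:
    assert DataFrameSerializationFormat["csv_gzip".upper()] is fmt
    ```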
    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.DataFrameSerializationFormat.compression","title":"compression: Union[str, None] property","text":"

    The compression type of the current instance.

    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.DataFrameSerializationFormat.content_type","title":"content_type: str property","text":"

    The content type of the current instance.

    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.DataFrameSerializationFormat.format","title":"format: str property","text":"

    The file format of the current instance.

    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.DataFrameSerializationFormat.suffix","title":"suffix: str property","text":"

    The suffix of the file format of the current instance.

    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.DataFrameSerializationFormat.fix_extension_with","title":"fix_extension_with","text":"

    Fix the extension of a GCS blob.

    Parameters:

    gcs_blob_path (str): The path to the GCS blob to be modified. Required.

    Returns:

    str: The modified path to the GCS blob with the new extension.

    Source code in prefect_gcp/cloud_storage.py
    def fix_extension_with(self, gcs_blob_path: str) -> str:\n    \"\"\"Fix the extension of a GCS blob.\n\n    Args:\n        gcs_blob_path: The path to the GCS blob to be modified.\n\n    Returns:\n        The modified path to the GCS blob with the new extension.\n    \"\"\"\n    gcs_blob_path = PurePosixPath(gcs_blob_path)\n    folder = gcs_blob_path.parent\n    filename = PurePosixPath(gcs_blob_path.stem).with_suffix(self.suffix)\n    return str(folder.joinpath(filename))\n
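    Following the logic in the source above, a short hedged example of how the extension is swapped (the input blob path is hypothetical):

    ```python
    from prefect_gcp.cloud_storage import DataFrameSerializationFormat

    # "reports/daily.txt" is a hypothetical blob path.
    path = DataFrameSerializationFormat.PARQUET_GZIP.fix_extension_with("reports/daily.txt")
    print(path)  # expected: "reports/daily.gz.parquet"
    ```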
    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket","title":"GcsBucket","text":"

    Bases: WritableDeploymentStorage, WritableFileSystem, ObjectStorageBlock

    Block used to store data using GCP Cloud Storage Buckets.

    Note! GcsBucket in prefect-gcp is a unique block, separate from GCS in core Prefect. GcsBucket does not use gcsfs under the hood, instead using the google-cloud-storage package, and offers more configuration and functionality.

    Attributes:

    bucket (str): Name of the bucket.

    gcp_credentials (GcpCredentials): The credentials to authenticate with GCP.

    bucket_folder (str): A default path to a folder within the GCS bucket to use for reading and writing objects.

    Example

    Load stored GCP Cloud Storage Bucket:

    from prefect_gcp.cloud_storage import GcsBucket\ngcp_cloud_storage_bucket_block = GcsBucket.load(\"BLOCK_NAME\")\n

    Source code in prefect_gcp/cloud_storage.py
    class GcsBucket(WritableDeploymentStorage, WritableFileSystem, ObjectStorageBlock):\n    \"\"\"\n    Block used to store data using GCP Cloud Storage Buckets.\n\n    Note! `GcsBucket` in `prefect-gcp` is a unique block, separate from `GCS`\n    in core Prefect. `GcsBucket` does not use `gcsfs` under the hood,\n    instead using the `google-cloud-storage` package, and offers more configuration\n    and functionality.\n\n    Attributes:\n        bucket: Name of the bucket.\n        gcp_credentials: The credentials to authenticate with GCP.\n        bucket_folder: A default path to a folder within the GCS bucket to use\n            for reading and writing objects.\n\n    Example:\n        Load stored GCP Cloud Storage Bucket:\n        ```python\n        from prefect_gcp.cloud_storage import GcsBucket\n        gcp_cloud_storage_bucket_block = GcsBucket.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/10424e311932e31c477ac2b9ef3d53cefbaad708-250x250.png\"  # noqa\n    _block_type_name = \"GCS Bucket\"\n    _documentation_url = \"https://prefecthq.github.io/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket\"  # noqa: E501\n\n    bucket: str = Field(..., description=\"Name of the bucket.\")\n    gcp_credentials: GcpCredentials = Field(\n        default_factory=GcpCredentials,\n        description=\"The credentials to authenticate with GCP.\",\n    )\n    bucket_folder: str = Field(\n        default=\"\",\n        description=(\n            \"A default path to a folder within the GCS bucket to use \"\n            \"for reading and writing objects.\"\n        ),\n    )\n\n    @property\n    def basepath(self) -> str:\n        \"\"\"\n        Read-only property that mirrors the bucket folder.\n\n        Used for deployment.\n        \"\"\"\n        return self.bucket_folder\n\n    @validator(\"bucket_folder\", pre=True, always=True)\n    def _bucket_folder_suffix(cls, value):\n        \"\"\"\n        Ensures that the bucket folder is suffixed with a forward slash.\n        \"\"\"\n        if value != \"\" and not value.endswith(\"/\"):\n            value = f\"{value}/\"\n        return value\n\n    def _resolve_path(self, path: str) -> str:\n        \"\"\"\n        A helper function used in write_path to join `self.bucket_folder` and `path`.\n\n        Args:\n            path: Name of the key, e.g. \"file1\". Each object in your\n                bucket has a unique key (or key name).\n\n        Returns:\n            The joined path.\n        \"\"\"\n        # If bucket_folder provided, it means we won't write to the root dir of\n        # the bucket. So we need to add it on the front of the path.\n        path = (\n            str(PurePosixPath(self.bucket_folder, path)) if self.bucket_folder else path\n        )\n        if path in [\"\", \".\", \"/\"]:\n            # client.bucket.list_blobs(prefix=None) is the proper way\n            # of specifying the root folder of the bucket\n            path = None\n        return path\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> List[Union[str, Path]]:\n        \"\"\"\n        Copies a folder from the configured GCS bucket to a local directory.\n        Defaults to copying the entire contents of the block's bucket_folder\n        to the current working directory.\n\n        Args:\n            from_path: Path in GCS bucket to download from. 
Defaults to the block's\n                configured bucket_folder.\n            local_path: Local path to download GCS bucket contents to.\n                Defaults to the current working directory.\n\n        Returns:\n            A list of downloaded file paths.\n        \"\"\"\n        from_path = (\n            self.bucket_folder if from_path is None else self._resolve_path(from_path)\n        )\n\n        if local_path is None:\n            local_path = os.path.abspath(\".\")\n        else:\n            local_path = os.path.abspath(os.path.expanduser(local_path))\n\n        project = self.gcp_credentials.project\n        client = self.gcp_credentials.get_cloud_storage_client(project=project)\n\n        blobs = await run_sync_in_worker_thread(\n            client.list_blobs, self.bucket, prefix=from_path\n        )\n\n        file_paths = []\n        for blob in blobs:\n            blob_path = blob.name\n            if blob_path[-1] == \"/\":\n                # object is a folder and will be created if it contains any objects\n                continue\n            local_file_path = os.path.join(local_path, blob_path)\n            os.makedirs(os.path.dirname(local_file_path), exist_ok=True)\n\n            with disable_run_logger():\n                file_path = await cloud_storage_download_blob_to_file.fn(\n                    bucket=self.bucket,\n                    blob=blob_path,\n                    path=local_file_path,\n                    gcp_credentials=self.gcp_credentials,\n                )\n                file_paths.append(file_path)\n        return file_paths\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n    ) -> int:\n        \"\"\"\n        Uploads a directory from a given local path to the configured GCS bucket in a\n        given folder.\n\n        Defaults to uploading the entire contents the current working directory to the\n        block's bucket_folder.\n\n        Args:\n            local_path: Path to local directory to upload from.\n            to_path: Path in GCS bucket to upload to. 
Defaults to block's configured\n                bucket_folder.\n            ignore_file: Path to file containing gitignore style expressions for\n                filepaths to ignore.\n\n        Returns:\n            The number of files uploaded.\n        \"\"\"\n        if local_path is None:\n            local_path = os.path.abspath(\".\")\n        else:\n            local_path = os.path.expanduser(local_path)\n\n        to_path = self.bucket_folder if to_path is None else self._resolve_path(to_path)\n\n        included_files = None\n        if ignore_file:\n            with open(ignore_file, \"r\") as f:\n                ignore_patterns = f.readlines()\n            included_files = filter_files(local_path, ignore_patterns)\n\n        uploaded_file_count = 0\n        for local_file_path in Path(local_path).rglob(\"*\"):\n            if (\n                included_files is not None\n                and local_file_path.name not in included_files\n            ):\n                continue\n            elif not local_file_path.is_dir():\n                remote_file_path = str(\n                    PurePosixPath(to_path, local_file_path.relative_to(local_path))\n                )\n                local_file_content = local_file_path.read_bytes()\n                await self.write_path(remote_file_path, content=local_file_content)\n                uploaded_file_count += 1\n\n        return uploaded_file_count\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        \"\"\"\n        Read specified path from GCS and return contents. Provide the entire\n        path to the key in GCS.\n\n        Args:\n            path: Entire path to (and including) the key.\n\n        Returns:\n            A bytes or string representation of the blob object.\n        \"\"\"\n        path = self._resolve_path(path)\n        with disable_run_logger():\n            contents = await cloud_storage_download_blob_as_bytes.fn(\n                bucket=self.bucket, blob=path, gcp_credentials=self.gcp_credentials\n            )\n        return contents\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        \"\"\"\n        Writes to an GCS bucket.\n\n        Args:\n            path: The key name. 
Each object in your bucket has a unique\n                key (or key name).\n            content: What you are uploading to GCS Bucket.\n\n        Returns:\n            The path that the contents were written to.\n        \"\"\"\n        path = self._resolve_path(path)\n        with disable_run_logger():\n            await cloud_storage_upload_blob_from_string.fn(\n                data=content,\n                bucket=self.bucket,\n                blob=path,\n                gcp_credentials=self.gcp_credentials,\n            )\n        return path\n\n    # NEW BLOCK INTERFACE METHODS BELOW\n    def _join_bucket_folder(self, bucket_path: str = \"\") -> str:\n        \"\"\"\n        Joins the base bucket folder to the bucket path.\n\n        NOTE: If a method reuses another method in this class, be careful to not\n        call this  twice because it'll join the bucket folder twice.\n        See https://github.com/PrefectHQ/prefect-aws/issues/141 for a past issue.\n        \"\"\"\n        bucket_path = str(bucket_path)\n        if self.bucket_folder != \"\" and bucket_path.startswith(self.bucket_folder):\n            self.logger.info(\n                f\"Bucket path {bucket_path!r} is already prefixed with \"\n                f\"bucket folder {self.bucket_folder!r}; is this intentional?\"\n            )\n\n        bucket_path = str(PurePosixPath(self.bucket_folder) / bucket_path)\n        if bucket_path in [\"\", \".\", \"/\"]:\n            # client.bucket.list_blobs(prefix=None) is the proper way\n            # of specifying the root folder of the bucket\n            bucket_path = None\n        return bucket_path\n\n    @sync_compatible\n    async def create_bucket(\n        self, location: Optional[str] = None, **create_kwargs\n    ) -> \"Bucket\":\n        \"\"\"\n        Creates a bucket.\n\n        Args:\n            location: The location of the bucket.\n            **create_kwargs: Additional keyword arguments to pass to the\n                `create_bucket` method.\n\n        Returns:\n            The bucket object.\n\n        Examples:\n            Create a bucket.\n            ```python\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket(bucket=\"my-bucket\")\n            gcs_bucket.create_bucket()\n            ```\n        \"\"\"\n        self.logger.info(f\"Creating bucket {self.bucket!r}.\")\n        client = self.gcp_credentials.get_cloud_storage_client()\n        bucket = await run_sync_in_worker_thread(\n            client.create_bucket, self.bucket, location=location, **create_kwargs\n        )\n        return bucket\n\n    @sync_compatible\n    async def get_bucket(self) -> \"Bucket\":\n        \"\"\"\n        Returns the bucket object.\n\n        Returns:\n            The bucket object.\n\n        Examples:\n            Get the bucket object.\n            ```python\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            gcs_bucket.get_bucket()\n            ```\n        \"\"\"\n        self.logger.info(f\"Getting bucket {self.bucket!r}.\")\n        client = self.gcp_credentials.get_cloud_storage_client()\n        bucket = await run_sync_in_worker_thread(client.get_bucket, self.bucket)\n        return bucket\n\n    @sync_compatible\n    async def list_blobs(self, folder: str = \"\") -> List[\"Blob\"]:\n        \"\"\"\n        Lists all blobs in the bucket that are in a folder.\n        Folders are not included in the output.\n\n        Args:\n            
folder: The folder to list blobs from.\n\n        Returns:\n            A list of Blob objects.\n\n        Examples:\n            Get all blobs from a folder named \"prefect\".\n            ```python\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            gcs_bucket.list_blobs(\"prefect\")\n            ```\n        \"\"\"\n        client = self.gcp_credentials.get_cloud_storage_client()\n\n        bucket_path = self._join_bucket_folder(folder)\n        if bucket_path is None:\n            self.logger.info(f\"Listing blobs in bucket {self.bucket!r}.\")\n        else:\n            self.logger.info(\n                f\"Listing blobs in folder {bucket_path!r} in bucket {self.bucket!r}.\"\n            )\n        blobs = await run_sync_in_worker_thread(\n            client.list_blobs, self.bucket, prefix=bucket_path\n        )\n\n        # Ignore folders\n        return [blob for blob in blobs if not blob.name.endswith(\"/\")]\n\n    @sync_compatible\n    async def list_folders(self, folder: str = \"\") -> List[str]:\n        \"\"\"\n        Lists all folders and subfolders in the bucket.\n\n        Args:\n            folder: List all folders and subfolders inside given folder.\n\n        Returns:\n            A list of folders.\n\n        Examples:\n            Get all folders from a bucket named \"my-bucket\".\n            ```python\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            gcs_bucket.list_folders()\n            ```\n\n            Get all folders from a folder called years\n            ```python\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            gcs_bucket.list_folders(\"years\")\n            ```\n        \"\"\"\n\n        # Beware of calling _join_bucket_folder twice, see note in method.\n        # However, we just want to use it to check if we are listing the root folder\n        bucket_path = self._join_bucket_folder(folder)\n        if bucket_path is None:\n            self.logger.info(f\"Listing folders in bucket {self.bucket!r}.\")\n        else:\n            self.logger.info(\n                f\"Listing folders in {bucket_path!r} in bucket {self.bucket!r}.\"\n            )\n\n        blobs = await self.list_blobs(folder)\n        # gets all folders with full path\n        folders = {str(PurePosixPath(blob.name).parent) for blob in blobs}\n\n        return [folder for folder in folders if folder != \".\"]\n\n    @sync_compatible\n    async def download_object_to_path(\n        self,\n        from_path: str,\n        to_path: Optional[Union[str, Path]] = None,\n        **download_kwargs: Dict[str, Any],\n    ) -> Path:\n        \"\"\"\n        Downloads an object from the object storage service to a path.\n\n        Args:\n            from_path: The path to the blob to download; this gets prefixed\n                with the bucket_folder.\n            to_path: The path to download the blob to. 
If not provided, the\n                blob's name will be used.\n            **download_kwargs: Additional keyword arguments to pass to\n                `Blob.download_to_filename`.\n\n        Returns:\n            The absolute path that the object was downloaded to.\n\n        Examples:\n            Download my_folder/notes.txt object to notes.txt.\n            ```python\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            gcs_bucket.download_object_to_path(\"my_folder/notes.txt\", \"notes.txt\")\n            ```\n        \"\"\"\n        if to_path is None:\n            to_path = Path(from_path).name\n\n        # making path absolute, but converting back to str here\n        # since !r looks nicer that way and filename arg expects str\n        to_path = str(Path(to_path).absolute())\n\n        bucket = await self.get_bucket()\n        bucket_path = self._join_bucket_folder(from_path)\n        blob = bucket.blob(bucket_path)\n        self.logger.info(\n            f\"Downloading blob from bucket {self.bucket!r} path {bucket_path!r}\"\n            f\"to {to_path!r}.\"\n        )\n\n        await run_sync_in_worker_thread(\n            blob.download_to_filename, filename=to_path, **download_kwargs\n        )\n        return Path(to_path)\n\n    @sync_compatible\n    async def download_object_to_file_object(\n        self,\n        from_path: str,\n        to_file_object: BinaryIO,\n        **download_kwargs: Dict[str, Any],\n    ) -> BinaryIO:\n        \"\"\"\n        Downloads an object from the object storage service to a file-like object,\n        which can be a BytesIO object or a BufferedWriter.\n\n        Args:\n            from_path: The path to the blob to download from; this gets prefixed\n                with the bucket_folder.\n            to_file_object: The file-like object to download the blob to.\n            **download_kwargs: Additional keyword arguments to pass to\n                `Blob.download_to_file`.\n\n        Returns:\n            The file-like object that the object was downloaded to.\n\n        Examples:\n            Download my_folder/notes.txt object to a BytesIO object.\n            ```python\n            from io import BytesIO\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            with BytesIO() as buf:\n                gcs_bucket.download_object_to_file_object(\"my_folder/notes.txt\", buf)\n            ```\n\n            Download my_folder/notes.txt object to a BufferedWriter.\n            ```python\n                from prefect_gcp.cloud_storage import GcsBucket\n\n                gcs_bucket = GcsBucket.load(\"my-bucket\")\n                with open(\"notes.txt\", \"wb\") as f:\n                    gcs_bucket.download_object_to_file_object(\"my_folder/notes.txt\", f)\n            ```\n        \"\"\"\n        bucket = await self.get_bucket()\n\n        bucket_path = self._join_bucket_folder(from_path)\n        blob = bucket.blob(bucket_path)\n        self.logger.info(\n            f\"Downloading blob from bucket {self.bucket!r} path {bucket_path!r}\"\n            f\"to file object.\"\n        )\n\n        await run_sync_in_worker_thread(\n            blob.download_to_file, file_obj=to_file_object, **download_kwargs\n        )\n        return to_file_object\n\n    @sync_compatible\n    async def download_folder_to_path(\n        self,\n        from_folder: str,\n        to_folder: Optional[Union[str, Path]] 
= None,\n        **download_kwargs: Dict[str, Any],\n    ) -> Path:\n        \"\"\"\n        Downloads objects *within* a folder (excluding the folder itself)\n        from the object storage service to a folder.\n\n        Args:\n            from_folder: The path to the folder to download from; this gets prefixed\n                with the bucket_folder.\n            to_folder: The path to download the folder to. If not provided, will default\n                to the current directory.\n            **download_kwargs: Additional keyword arguments to pass to\n                `Blob.download_to_filename`.\n\n        Returns:\n            The absolute path that the folder was downloaded to.\n\n        Examples:\n            Download my_folder to a local folder named my_folder.\n            ```python\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            gcs_bucket.download_folder_to_path(\"my_folder\", \"my_folder\")\n            ```\n        \"\"\"\n        if to_folder is None:\n            to_folder = \"\"\n        to_folder = Path(to_folder).absolute()\n\n        blobs = await self.list_blobs(folder=from_folder)\n        if len(blobs) == 0:\n            self.logger.warning(\n                f\"No blobs were downloaded from \"\n                f\"bucket {self.bucket!r} path {from_folder!r}.\"\n            )\n            return to_folder\n\n        # do not call self._join_bucket_folder for list_blobs\n        # because it's built-in to that method already!\n        # however, we still need to do it because we're using relative_to\n        bucket_folder = self._join_bucket_folder(from_folder)\n\n        async_coros = []\n        for blob in blobs:\n            bucket_path = PurePosixPath(blob.name).relative_to(bucket_folder)\n            if str(bucket_path).endswith(\"/\"):\n                continue\n            to_path = to_folder / bucket_path\n            to_path.parent.mkdir(parents=True, exist_ok=True)\n            self.logger.info(\n                f\"Downloading blob from bucket {self.bucket!r} path \"\n                f\"{str(bucket_path)!r} to {to_path}.\"\n            )\n            async_coros.append(\n                run_sync_in_worker_thread(\n                    blob.download_to_filename, filename=str(to_path), **download_kwargs\n                )\n            )\n        await asyncio.gather(*async_coros)\n\n        return to_folder\n\n    @sync_compatible\n    async def upload_from_path(\n        self,\n        from_path: Union[str, Path],\n        to_path: Optional[str] = None,\n        **upload_kwargs: Dict[str, Any],\n    ) -> str:\n        \"\"\"\n        Uploads an object from a path to the object storage service.\n\n        Args:\n            from_path: The path to the file to upload from.\n            to_path: The path to upload the file to. 
If not provided, will use\n                the file name of from_path; this gets prefixed\n                with the bucket_folder.\n            **upload_kwargs: Additional keyword arguments to pass to\n                `Blob.upload_from_filename`.\n\n        Returns:\n            The path that the object was uploaded to.\n\n        Examples:\n            Upload notes.txt to my_folder/notes.txt.\n            ```python\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            gcs_bucket.upload_from_path(\"notes.txt\", \"my_folder/notes.txt\")\n            ```\n        \"\"\"\n        if to_path is None:\n            to_path = Path(from_path).name\n\n        bucket_path = self._join_bucket_folder(to_path)\n        bucket = await self.get_bucket()\n        blob = bucket.blob(bucket_path)\n        self.logger.info(\n            f\"Uploading from {from_path!r} to the bucket \"\n            f\"{self.bucket!r} path {bucket_path!r}.\"\n        )\n\n        await run_sync_in_worker_thread(\n            blob.upload_from_filename, filename=from_path, **upload_kwargs\n        )\n        return bucket_path\n\n    @sync_compatible\n    async def upload_from_file_object(\n        self, from_file_object: BinaryIO, to_path: str, **upload_kwargs\n    ) -> str:\n        \"\"\"\n        Uploads an object to the object storage service from a file-like object,\n        which can be a BytesIO object or a BufferedReader.\n\n        Args:\n            from_file_object: The file-like object to upload from.\n            to_path: The path to upload the object to; this gets prefixed\n                with the bucket_folder.\n            **upload_kwargs: Additional keyword arguments to pass to\n                `Blob.upload_from_file`.\n\n        Returns:\n            The path that the object was uploaded to.\n\n        Examples:\n            Upload my_folder/notes.txt object to a BytesIO object.\n            ```python\n            from io import BytesIO\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            with open(\"notes.txt\", \"rb\") as f:\n                gcs_bucket.upload_from_file_object(f, \"my_folder/notes.txt\")\n            ```\n\n            Upload BufferedReader object to my_folder/notes.txt.\n            ```python\n            from io import BufferedReader\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            with open(\"notes.txt\", \"rb\") as f:\n                gcs_bucket.upload_from_file_object(\n                    BufferedReader(f), \"my_folder/notes.txt\"\n                )\n            ```\n        \"\"\"\n        bucket = await self.get_bucket()\n\n        bucket_path = self._join_bucket_folder(to_path)\n        blob = bucket.blob(bucket_path)\n        self.logger.info(\n            f\"Uploading from file object to the bucket \"\n            f\"{self.bucket!r} path {bucket_path!r}.\"\n        )\n\n        await run_sync_in_worker_thread(\n            blob.upload_from_file, from_file_object, **upload_kwargs\n        )\n        return bucket_path\n\n    @sync_compatible\n    async def upload_from_folder(\n        self,\n        from_folder: Union[str, Path],\n        to_folder: Optional[str] = None,\n        **upload_kwargs: Dict[str, Any],\n    ) -> str:\n        \"\"\"\n        Uploads files *within* a folder (excluding the folder itself)\n        to the object storage 
service folder.\n\n        Args:\n            from_folder: The path to the folder to upload from.\n            to_folder: The path to upload the folder to. If not provided, will default\n                to bucket_folder or the base directory of the bucket.\n            **upload_kwargs: Additional keyword arguments to pass to\n                `Blob.upload_from_filename`.\n\n        Returns:\n            The path that the folder was uploaded to.\n\n        Examples:\n            Upload local folder my_folder to the bucket's folder my_folder.\n            ```python\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            gcs_bucket.upload_from_folder(\"my_folder\")\n            ```\n        \"\"\"\n        from_folder = Path(from_folder)\n        # join bucket folder expects string for the first input\n        # when it returns None, we need to convert it back to empty string\n        # so relative_to works\n        bucket_folder = self._join_bucket_folder(to_folder or \"\") or \"\"\n\n        num_uploaded = 0\n        bucket = await self.get_bucket()\n\n        async_coros = []\n        for from_path in from_folder.rglob(\"**/*\"):\n            if from_path.is_dir():\n                continue\n            bucket_path = str(Path(bucket_folder) / from_path.relative_to(from_folder))\n            self.logger.info(\n                f\"Uploading from {str(from_path)!r} to the bucket \"\n                f\"{self.bucket!r} path {bucket_path!r}.\"\n            )\n            blob = bucket.blob(bucket_path)\n            async_coros.append(\n                run_sync_in_worker_thread(\n                    blob.upload_from_filename, filename=from_path, **upload_kwargs\n                )\n            )\n            num_uploaded += 1\n        await asyncio.gather(*async_coros)\n        if num_uploaded == 0:\n            self.logger.warning(f\"No files were uploaded from {from_folder}.\")\n        return bucket_folder\n\n    @sync_compatible\n    async def upload_from_dataframe(\n        self,\n        df: \"DataFrame\",\n        to_path: str,\n        serialization_format: Union[\n            str, DataFrameSerializationFormat\n        ] = DataFrameSerializationFormat.CSV_GZIP,\n        **upload_kwargs: Dict[str, Any],\n    ) -> str:\n        \"\"\"Upload a Pandas DataFrame to Google Cloud Storage in various formats.\n\n        This function uploads the data in a Pandas DataFrame to Google Cloud Storage\n        in a specified format, such as .csv, .csv.gz, .parquet,\n        .parquet.snappy, and .parquet.gz.\n\n        Args:\n            df: The Pandas DataFrame to be uploaded.\n            to_path: The destination path for the uploaded DataFrame.\n            serialization_format: The format to serialize the DataFrame into.\n                When passed as a `str`, the valid options are:\n                'csv', 'csv_gzip',  'parquet', 'parquet_snappy', 'parquet_gzip'.\n                Defaults to `DataFrameSerializationFormat.CSV_GZIP`.\n            **upload_kwargs: Additional keyword arguments to pass to the underlying\n            `Blob.upload_from_dataframe` method.\n\n        Returns:\n            The path that the object was uploaded to.\n        \"\"\"\n        if isinstance(serialization_format, str):\n            serialization_format = DataFrameSerializationFormat[\n                serialization_format.upper()\n            ]\n\n        with BytesIO() as bytes_buffer:\n            if serialization_format.format == 
\"parquet\":\n                df.to_parquet(\n                    path=bytes_buffer,\n                    compression=serialization_format.compression,\n                    index=False,\n                )\n            elif serialization_format.format == \"csv\":\n                df.to_csv(\n                    path_or_buf=bytes_buffer,\n                    compression=serialization_format.compression,\n                    index=False,\n                )\n\n            bytes_buffer.seek(0)\n            to_path = serialization_format.fix_extension_with(gcs_blob_path=to_path)\n\n            return await self.upload_from_file_object(\n                from_file_object=bytes_buffer,\n                to_path=to_path,\n                **{\"content_type\": serialization_format.content_type, **upload_kwargs},\n            )\n
    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.basepath","title":"basepath: str property","text":"

    Read-only property that mirrors the bucket folder.

    Used for deployment.

    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.create_bucket","title":"create_bucket async","text":"

    Creates a bucket.

    Parameters:

    Name Type Description Default location Optional[str]

    The location of the bucket.

    None **create_kwargs

    Additional keyword arguments to pass to the create_bucket method.

    {}

    Returns:

    Type Description Bucket

    The bucket object.

    Examples:

    Create a bucket.

    from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket(bucket=\"my-bucket\")\ngcs_bucket.create_bucket()\n

    Source code in prefect_gcp/cloud_storage.py
    @sync_compatible\nasync def create_bucket(\n    self, location: Optional[str] = None, **create_kwargs\n) -> \"Bucket\":\n    \"\"\"\n    Creates a bucket.\n\n    Args:\n        location: The location of the bucket.\n        **create_kwargs: Additional keyword arguments to pass to the\n            `create_bucket` method.\n\n    Returns:\n        The bucket object.\n\n    Examples:\n        Create a bucket.\n        ```python\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket(bucket=\"my-bucket\")\n        gcs_bucket.create_bucket()\n        ```\n    \"\"\"\n    self.logger.info(f\"Creating bucket {self.bucket!r}.\")\n    client = self.gcp_credentials.get_cloud_storage_client()\n    bucket = await run_sync_in_worker_thread(\n        client.create_bucket, self.bucket, location=location, **create_kwargs\n    )\n    return bucket\n
    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.download_folder_to_path","title":"download_folder_to_path async","text":"

    Downloads objects within a folder (excluding the folder itself) from the object storage service to a folder.

    Parameters:

    Name Type Description Default from_folder str

    The path to the folder to download from; this gets prefixed with the bucket_folder.

    required to_folder Optional[Union[str, Path]]

    The path to download the folder to. If not provided, will default to the current directory.

    None **download_kwargs Dict[str, Any]

    Additional keyword arguments to pass to Blob.download_to_filename.

    {}

    Returns:

    Type Description Path

    The absolute path that the folder was downloaded to.

    Examples:

    Download my_folder to a local folder named my_folder.

    from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\ngcs_bucket.download_folder_to_path(\"my_folder\", \"my_folder\")\n

    Source code in prefect_gcp/cloud_storage.py
    @sync_compatible\nasync def download_folder_to_path(\n    self,\n    from_folder: str,\n    to_folder: Optional[Union[str, Path]] = None,\n    **download_kwargs: Dict[str, Any],\n) -> Path:\n    \"\"\"\n    Downloads objects *within* a folder (excluding the folder itself)\n    from the object storage service to a folder.\n\n    Args:\n        from_folder: The path to the folder to download from; this gets prefixed\n            with the bucket_folder.\n        to_folder: The path to download the folder to. If not provided, will default\n            to the current directory.\n        **download_kwargs: Additional keyword arguments to pass to\n            `Blob.download_to_filename`.\n\n    Returns:\n        The absolute path that the folder was downloaded to.\n\n    Examples:\n        Download my_folder to a local folder named my_folder.\n        ```python\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket.load(\"my-bucket\")\n        gcs_bucket.download_folder_to_path(\"my_folder\", \"my_folder\")\n        ```\n    \"\"\"\n    if to_folder is None:\n        to_folder = \"\"\n    to_folder = Path(to_folder).absolute()\n\n    blobs = await self.list_blobs(folder=from_folder)\n    if len(blobs) == 0:\n        self.logger.warning(\n            f\"No blobs were downloaded from \"\n            f\"bucket {self.bucket!r} path {from_folder!r}.\"\n        )\n        return to_folder\n\n    # do not call self._join_bucket_folder for list_blobs\n    # because it's built-in to that method already!\n    # however, we still need to do it because we're using relative_to\n    bucket_folder = self._join_bucket_folder(from_folder)\n\n    async_coros = []\n    for blob in blobs:\n        bucket_path = PurePosixPath(blob.name).relative_to(bucket_folder)\n        if str(bucket_path).endswith(\"/\"):\n            continue\n        to_path = to_folder / bucket_path\n        to_path.parent.mkdir(parents=True, exist_ok=True)\n        self.logger.info(\n            f\"Downloading blob from bucket {self.bucket!r} path \"\n            f\"{str(bucket_path)!r} to {to_path}.\"\n        )\n        async_coros.append(\n            run_sync_in_worker_thread(\n                blob.download_to_filename, filename=str(to_path), **download_kwargs\n            )\n        )\n    await asyncio.gather(*async_coros)\n\n    return to_folder\n
    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.download_object_to_file_object","title":"download_object_to_file_object async","text":"

    Downloads an object from the object storage service to a file-like object, which can be a BytesIO object or a BufferedWriter.

    Parameters:

    Name Type Description Default from_path str

    The path to the blob to download from; this gets prefixed with the bucket_folder.

    required to_file_object BinaryIO

    The file-like object to download the blob to.

    required **download_kwargs Dict[str, Any]

    Additional keyword arguments to pass to Blob.download_to_file.

    {}

    Returns:

    Type Description BinaryIO

    The file-like object that the object was downloaded to.

    Examples:

    Download my_folder/notes.txt object to a BytesIO object.

    from io import BytesIO\nfrom prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\nwith BytesIO() as buf:\n    gcs_bucket.download_object_to_file_object(\"my_folder/notes.txt\", buf)\n

    Download my_folder/notes.txt object to a BufferedWriter.

    from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\nwith open(\"notes.txt\", \"wb\") as f:\n    gcs_bucket.download_object_to_file_object(\"my_folder/notes.txt\", f)\n

    Source code in prefect_gcp/cloud_storage.py
    @sync_compatible\nasync def download_object_to_file_object(\n    self,\n    from_path: str,\n    to_file_object: BinaryIO,\n    **download_kwargs: Dict[str, Any],\n) -> BinaryIO:\n    \"\"\"\n    Downloads an object from the object storage service to a file-like object,\n    which can be a BytesIO object or a BufferedWriter.\n\n    Args:\n        from_path: The path to the blob to download from; this gets prefixed\n            with the bucket_folder.\n        to_file_object: The file-like object to download the blob to.\n        **download_kwargs: Additional keyword arguments to pass to\n            `Blob.download_to_file`.\n\n    Returns:\n        The file-like object that the object was downloaded to.\n\n    Examples:\n        Download my_folder/notes.txt object to a BytesIO object.\n        ```python\n        from io import BytesIO\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket.load(\"my-bucket\")\n        with BytesIO() as buf:\n            gcs_bucket.download_object_to_file_object(\"my_folder/notes.txt\", buf)\n        ```\n\n        Download my_folder/notes.txt object to a BufferedWriter.\n        ```python\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            with open(\"notes.txt\", \"wb\") as f:\n                gcs_bucket.download_object_to_file_object(\"my_folder/notes.txt\", f)\n        ```\n    \"\"\"\n    bucket = await self.get_bucket()\n\n    bucket_path = self._join_bucket_folder(from_path)\n    blob = bucket.blob(bucket_path)\n    self.logger.info(\n        f\"Downloading blob from bucket {self.bucket!r} path {bucket_path!r}\"\n        f\"to file object.\"\n    )\n\n    await run_sync_in_worker_thread(\n        blob.download_to_file, file_obj=to_file_object, **download_kwargs\n    )\n    return to_file_object\n
    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.download_object_to_path","title":"download_object_to_path async","text":"

    Downloads an object from the object storage service to a path.

    Parameters:

    Name Type Description Default from_path str

    The path to the blob to download; this gets prefixed with the bucket_folder.

    required to_path Optional[Union[str, Path]]

    The path to download the blob to. If not provided, the blob's name will be used.

    None **download_kwargs Dict[str, Any]

    Additional keyword arguments to pass to Blob.download_to_filename.

    {}

    Returns:

    Type Description Path

    The absolute path that the object was downloaded to.

    Examples:

    Download my_folder/notes.txt object to notes.txt.

    from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\ngcs_bucket.download_object_to_path(\"my_folder/notes.txt\", \"notes.txt\")\n

    Source code in prefect_gcp/cloud_storage.py
    @sync_compatible\nasync def download_object_to_path(\n    self,\n    from_path: str,\n    to_path: Optional[Union[str, Path]] = None,\n    **download_kwargs: Dict[str, Any],\n) -> Path:\n    \"\"\"\n    Downloads an object from the object storage service to a path.\n\n    Args:\n        from_path: The path to the blob to download; this gets prefixed\n            with the bucket_folder.\n        to_path: The path to download the blob to. If not provided, the\n            blob's name will be used.\n        **download_kwargs: Additional keyword arguments to pass to\n            `Blob.download_to_filename`.\n\n    Returns:\n        The absolute path that the object was downloaded to.\n\n    Examples:\n        Download my_folder/notes.txt object to notes.txt.\n        ```python\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket.load(\"my-bucket\")\n        gcs_bucket.download_object_to_path(\"my_folder/notes.txt\", \"notes.txt\")\n        ```\n    \"\"\"\n    if to_path is None:\n        to_path = Path(from_path).name\n\n    # making path absolute, but converting back to str here\n    # since !r looks nicer that way and filename arg expects str\n    to_path = str(Path(to_path).absolute())\n\n    bucket = await self.get_bucket()\n    bucket_path = self._join_bucket_folder(from_path)\n    blob = bucket.blob(bucket_path)\n    self.logger.info(\n        f\"Downloading blob from bucket {self.bucket!r} path {bucket_path!r}\"\n        f\"to {to_path!r}.\"\n    )\n\n    await run_sync_in_worker_thread(\n        blob.download_to_filename, filename=to_path, **download_kwargs\n    )\n    return Path(to_path)\n
    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.get_bucket","title":"get_bucket async","text":"

    Returns the bucket object.

    Returns:

    Type Description Bucket

    The bucket object.

    Examples:

    Get the bucket object.

    from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\ngcs_bucket.get_bucket()\n

    Source code in prefect_gcp/cloud_storage.py
    @sync_compatible\nasync def get_bucket(self) -> \"Bucket\":\n    \"\"\"\n    Returns the bucket object.\n\n    Returns:\n        The bucket object.\n\n    Examples:\n        Get the bucket object.\n        ```python\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket.load(\"my-bucket\")\n        gcs_bucket.get_bucket()\n        ```\n    \"\"\"\n    self.logger.info(f\"Getting bucket {self.bucket!r}.\")\n    client = self.gcp_credentials.get_cloud_storage_client()\n    bucket = await run_sync_in_worker_thread(client.get_bucket, self.bucket)\n    return bucket\n
    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.get_directory","title":"get_directory async","text":"

    Copies a folder from the configured GCS bucket to a local directory. Defaults to copying the entire contents of the block's bucket_folder to the current working directory.

    Parameters:

    Name Type Description Default from_path Optional[str]

    Path in GCS bucket to download from. Defaults to the block's configured bucket_folder.

    None local_path Optional[str]

    Local path to download GCS bucket contents to. Defaults to the current working directory.

    None

    Returns:

    Type Description List[Union[str, Path]]

    A list of downloaded file paths.
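
    Example (a minimal sketch, not from the library's docstrings; \"my-bucket\" and \"local_dir\" are illustrative names): download the block's configured bucket_folder to a local directory.

    from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\n# Copy everything under the configured bucket_folder into local_dir\ndownloaded_files = gcs_bucket.get_directory(local_path=\"local_dir\")\n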

    Source code in prefect_gcp/cloud_storage.py
    @sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> List[Union[str, Path]]:\n    \"\"\"\n    Copies a folder from the configured GCS bucket to a local directory.\n    Defaults to copying the entire contents of the block's bucket_folder\n    to the current working directory.\n\n    Args:\n        from_path: Path in GCS bucket to download from. Defaults to the block's\n            configured bucket_folder.\n        local_path: Local path to download GCS bucket contents to.\n            Defaults to the current working directory.\n\n    Returns:\n        A list of downloaded file paths.\n    \"\"\"\n    from_path = (\n        self.bucket_folder if from_path is None else self._resolve_path(from_path)\n    )\n\n    if local_path is None:\n        local_path = os.path.abspath(\".\")\n    else:\n        local_path = os.path.abspath(os.path.expanduser(local_path))\n\n    project = self.gcp_credentials.project\n    client = self.gcp_credentials.get_cloud_storage_client(project=project)\n\n    blobs = await run_sync_in_worker_thread(\n        client.list_blobs, self.bucket, prefix=from_path\n    )\n\n    file_paths = []\n    for blob in blobs:\n        blob_path = blob.name\n        if blob_path[-1] == \"/\":\n            # object is a folder and will be created if it contains any objects\n            continue\n        local_file_path = os.path.join(local_path, blob_path)\n        os.makedirs(os.path.dirname(local_file_path), exist_ok=True)\n\n        with disable_run_logger():\n            file_path = await cloud_storage_download_blob_to_file.fn(\n                bucket=self.bucket,\n                blob=blob_path,\n                path=local_file_path,\n                gcp_credentials=self.gcp_credentials,\n            )\n            file_paths.append(file_path)\n    return file_paths\n
    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.list_blobs","title":"list_blobs async","text":"

    Lists all blobs in the bucket that are in a folder. Folders are not included in the output.

    Parameters:

    Name Type Description Default folder str

    The folder to list blobs from.

    ''

    Returns:

    Type Description List[Blob]

    A list of Blob objects.

    Examples:

    Get all blobs from a folder named \"prefect\".

    from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\ngcs_bucket.list_blobs(\"prefect\")\n

    Source code in prefect_gcp/cloud_storage.py
    @sync_compatible\nasync def list_blobs(self, folder: str = \"\") -> List[\"Blob\"]:\n    \"\"\"\n    Lists all blobs in the bucket that are in a folder.\n    Folders are not included in the output.\n\n    Args:\n        folder: The folder to list blobs from.\n\n    Returns:\n        A list of Blob objects.\n\n    Examples:\n        Get all blobs from a folder named \"prefect\".\n        ```python\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket.load(\"my-bucket\")\n        gcs_bucket.list_blobs(\"prefect\")\n        ```\n    \"\"\"\n    client = self.gcp_credentials.get_cloud_storage_client()\n\n    bucket_path = self._join_bucket_folder(folder)\n    if bucket_path is None:\n        self.logger.info(f\"Listing blobs in bucket {self.bucket!r}.\")\n    else:\n        self.logger.info(\n            f\"Listing blobs in folder {bucket_path!r} in bucket {self.bucket!r}.\"\n        )\n    blobs = await run_sync_in_worker_thread(\n        client.list_blobs, self.bucket, prefix=bucket_path\n    )\n\n    # Ignore folders\n    return [blob for blob in blobs if not blob.name.endswith(\"/\")]\n
    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.list_folders","title":"list_folders async","text":"

    Lists all folders and subfolders in the bucket.

    Parameters:

    Name Type Description Default folder str

    List all folders and subfolders inside the given folder.

    ''

    Returns:

    Type Description List[str]

    A list of folders.

    Examples:

    Get all folders from a bucket named \"my-bucket\".

    from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\ngcs_bucket.list_folders()\n

    Get all folders from a folder called years.

    from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\ngcs_bucket.list_folders(\"years\")\n

    Source code in prefect_gcp/cloud_storage.py
    @sync_compatible\nasync def list_folders(self, folder: str = \"\") -> List[str]:\n    \"\"\"\n    Lists all folders and subfolders in the bucket.\n\n    Args:\n        folder: List all folders and subfolders inside given folder.\n\n    Returns:\n        A list of folders.\n\n    Examples:\n        Get all folders from a bucket named \"my-bucket\".\n        ```python\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket.load(\"my-bucket\")\n        gcs_bucket.list_folders()\n        ```\n\n        Get all folders from a folder called years\n        ```python\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket.load(\"my-bucket\")\n        gcs_bucket.list_folders(\"years\")\n        ```\n    \"\"\"\n\n    # Beware of calling _join_bucket_folder twice, see note in method.\n    # However, we just want to use it to check if we are listing the root folder\n    bucket_path = self._join_bucket_folder(folder)\n    if bucket_path is None:\n        self.logger.info(f\"Listing folders in bucket {self.bucket!r}.\")\n    else:\n        self.logger.info(\n            f\"Listing folders in {bucket_path!r} in bucket {self.bucket!r}.\"\n        )\n\n    blobs = await self.list_blobs(folder)\n    # gets all folders with full path\n    folders = {str(PurePosixPath(blob.name).parent) for blob in blobs}\n\n    return [folder for folder in folders if folder != \".\"]\n
    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.put_directory","title":"put_directory async","text":"

    Uploads a directory from a given local path to the configured GCS bucket in a given folder.

    Defaults to uploading the entire contents of the current working directory to the block's bucket_folder.

    Parameters:

    Name Type Description Default local_path Optional[str]

    Path to local directory to upload from.

    None to_path Optional[str]

    Path in GCS bucket to upload to. Defaults to block's configured bucket_folder.

    None ignore_file Optional[str]

    Path to file containing gitignore style expressions for filepaths to ignore.

    None

    Returns:

    Type Description int

    The number of files uploaded.
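
    Example (a minimal sketch, not from the library's docstrings; \"my-bucket\", \"local_dir\", and \"projects\" are illustrative names): upload a local directory into a folder in the bucket.

    from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\n# Upload the contents of local_dir to the bucket folder \"projects\"\nnum_uploaded = gcs_bucket.put_directory(local_path=\"local_dir\", to_path=\"projects\")\n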

    Source code in prefect_gcp/cloud_storage.py
    @sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n) -> int:\n    \"\"\"\n    Uploads a directory from a given local path to the configured GCS bucket in a\n    given folder.\n\n    Defaults to uploading the entire contents the current working directory to the\n    block's bucket_folder.\n\n    Args:\n        local_path: Path to local directory to upload from.\n        to_path: Path in GCS bucket to upload to. Defaults to block's configured\n            bucket_folder.\n        ignore_file: Path to file containing gitignore style expressions for\n            filepaths to ignore.\n\n    Returns:\n        The number of files uploaded.\n    \"\"\"\n    if local_path is None:\n        local_path = os.path.abspath(\".\")\n    else:\n        local_path = os.path.expanduser(local_path)\n\n    to_path = self.bucket_folder if to_path is None else self._resolve_path(to_path)\n\n    included_files = None\n    if ignore_file:\n        with open(ignore_file, \"r\") as f:\n            ignore_patterns = f.readlines()\n        included_files = filter_files(local_path, ignore_patterns)\n\n    uploaded_file_count = 0\n    for local_file_path in Path(local_path).rglob(\"*\"):\n        if (\n            included_files is not None\n            and local_file_path.name not in included_files\n        ):\n            continue\n        elif not local_file_path.is_dir():\n            remote_file_path = str(\n                PurePosixPath(to_path, local_file_path.relative_to(local_path))\n            )\n            local_file_content = local_file_path.read_bytes()\n            await self.write_path(remote_file_path, content=local_file_content)\n            uploaded_file_count += 1\n\n    return uploaded_file_count\n
    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.read_path","title":"read_path async","text":"

    Reads the specified path from GCS and returns its contents. Provide the entire path to the key in GCS.

    Parameters:

    Name Type Description Default path str

    Entire path to (and including) the key.

    required

    Returns:

    Type Description bytes

    The contents of the blob object as bytes.
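
    Example (a minimal sketch, not from the library's docstrings; the key \"folder/notes.txt\" is illustrative): read a blob's contents as bytes.

    from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\ncontents = gcs_bucket.read_path(\"folder/notes.txt\")\n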

    Source code in prefect_gcp/cloud_storage.py
    @sync_compatible\nasync def read_path(self, path: str) -> bytes:\n    \"\"\"\n    Read specified path from GCS and return contents. Provide the entire\n    path to the key in GCS.\n\n    Args:\n        path: Entire path to (and including) the key.\n\n    Returns:\n        A bytes or string representation of the blob object.\n    \"\"\"\n    path = self._resolve_path(path)\n    with disable_run_logger():\n        contents = await cloud_storage_download_blob_as_bytes.fn(\n            bucket=self.bucket, blob=path, gcp_credentials=self.gcp_credentials\n        )\n    return contents\n
    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.upload_from_dataframe","title":"upload_from_dataframe async","text":"

    Upload a Pandas DataFrame to Google Cloud Storage in various formats.

    This function uploads the data in a Pandas DataFrame to Google Cloud Storage in a specified format, such as .csv, .csv.gz, .parquet, .parquet.snappy, and .parquet.gz.

    Parameters:

    Name Type Description Default df DataFrame

    The Pandas DataFrame to be uploaded.

    required to_path str

    The destination path for the uploaded DataFrame.

    required serialization_format Union[str, DataFrameSerializationFormat]

    The format to serialize the DataFrame into. When passed as a str, the valid options are: 'csv', 'csv_gzip', 'parquet', 'parquet_snappy', 'parquet_gzip'. Defaults to DataFrameSerializationFormat.CSV_GZIP.

    CSV_GZIP **upload_kwargs Dict[str, Any]

    Additional keyword arguments to pass to the underlying upload_from_file_object call.

    {}

    Returns:

    Type Description str

    The path that the object was uploaded to.
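
    Example (a minimal sketch, not from the library's docstrings; the DataFrame contents and the path \"reports/data\" are illustrative): upload a DataFrame as gzip-compressed CSV by passing the serialization format as a string.

    import pandas as pd\nfrom prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\ndf = pd.DataFrame({\"id\": [1, 2], \"value\": [\"a\", \"b\"]})\n# The extension is adjusted to match the chosen format (for example .csv.gz)\npath = gcs_bucket.upload_from_dataframe(df, to_path=\"reports/data\", serialization_format=\"csv_gzip\")\n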

    Source code in prefect_gcp/cloud_storage.py
    @sync_compatible\nasync def upload_from_dataframe(\n    self,\n    df: \"DataFrame\",\n    to_path: str,\n    serialization_format: Union[\n        str, DataFrameSerializationFormat\n    ] = DataFrameSerializationFormat.CSV_GZIP,\n    **upload_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"Upload a Pandas DataFrame to Google Cloud Storage in various formats.\n\n    This function uploads the data in a Pandas DataFrame to Google Cloud Storage\n    in a specified format, such as .csv, .csv.gz, .parquet,\n    .parquet.snappy, and .parquet.gz.\n\n    Args:\n        df: The Pandas DataFrame to be uploaded.\n        to_path: The destination path for the uploaded DataFrame.\n        serialization_format: The format to serialize the DataFrame into.\n            When passed as a `str`, the valid options are:\n            'csv', 'csv_gzip',  'parquet', 'parquet_snappy', 'parquet_gzip'.\n            Defaults to `DataFrameSerializationFormat.CSV_GZIP`.\n        **upload_kwargs: Additional keyword arguments to pass to the underlying\n        `Blob.upload_from_dataframe` method.\n\n    Returns:\n        The path that the object was uploaded to.\n    \"\"\"\n    if isinstance(serialization_format, str):\n        serialization_format = DataFrameSerializationFormat[\n            serialization_format.upper()\n        ]\n\n    with BytesIO() as bytes_buffer:\n        if serialization_format.format == \"parquet\":\n            df.to_parquet(\n                path=bytes_buffer,\n                compression=serialization_format.compression,\n                index=False,\n            )\n        elif serialization_format.format == \"csv\":\n            df.to_csv(\n                path_or_buf=bytes_buffer,\n                compression=serialization_format.compression,\n                index=False,\n            )\n\n        bytes_buffer.seek(0)\n        to_path = serialization_format.fix_extension_with(gcs_blob_path=to_path)\n\n        return await self.upload_from_file_object(\n            from_file_object=bytes_buffer,\n            to_path=to_path,\n            **{\"content_type\": serialization_format.content_type, **upload_kwargs},\n        )\n
    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.upload_from_file_object","title":"upload_from_file_object async","text":"

    Uploads an object to the object storage service from a file-like object, which can be a BytesIO object or a BufferedReader.

    Parameters:

    Name Type Description Default from_file_object BinaryIO

    The file-like object to upload from.

    required to_path str

    The path to upload the object to; this gets prefixed with the bucket_folder.

    required **upload_kwargs

    Additional keyword arguments to pass to Blob.upload_from_file.

    {}

    Returns:

    Type Description str

    The path that the object was uploaded to.

    Examples:

    Upload from an open file object (notes.txt opened in binary mode) to my_folder/notes.txt.

    from io import BytesIO\nfrom prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\nwith open(\"notes.txt\", \"rb\") as f:\n    gcs_bucket.upload_from_file_object(f, \"my_folder/notes.txt\")\n

    Upload a BufferedReader object to my_folder/notes.txt.

    from io import BufferedReader\nfrom prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\nwith open(\"notes.txt\", \"rb\") as f:\n    gcs_bucket.upload_from_file_object(\n        BufferedReader(f), \"my_folder/notes.txt\"\n    )\n

    Source code in prefect_gcp/cloud_storage.py
    @sync_compatible\nasync def upload_from_file_object(\n    self, from_file_object: BinaryIO, to_path: str, **upload_kwargs\n) -> str:\n    \"\"\"\n    Uploads an object to the object storage service from a file-like object,\n    which can be a BytesIO object or a BufferedReader.\n\n    Args:\n        from_file_object: The file-like object to upload from.\n        to_path: The path to upload the object to; this gets prefixed\n            with the bucket_folder.\n        **upload_kwargs: Additional keyword arguments to pass to\n            `Blob.upload_from_file`.\n\n    Returns:\n        The path that the object was uploaded to.\n\n    Examples:\n        Upload my_folder/notes.txt object to a BytesIO object.\n        ```python\n        from io import BytesIO\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket.load(\"my-bucket\")\n        with open(\"notes.txt\", \"rb\") as f:\n            gcs_bucket.upload_from_file_object(f, \"my_folder/notes.txt\")\n        ```\n\n        Upload BufferedReader object to my_folder/notes.txt.\n        ```python\n        from io import BufferedReader\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket.load(\"my-bucket\")\n        with open(\"notes.txt\", \"rb\") as f:\n            gcs_bucket.upload_from_file_object(\n                BufferedReader(f), \"my_folder/notes.txt\"\n            )\n        ```\n    \"\"\"\n    bucket = await self.get_bucket()\n\n    bucket_path = self._join_bucket_folder(to_path)\n    blob = bucket.blob(bucket_path)\n    self.logger.info(\n        f\"Uploading from file object to the bucket \"\n        f\"{self.bucket!r} path {bucket_path!r}.\"\n    )\n\n    await run_sync_in_worker_thread(\n        blob.upload_from_file, from_file_object, **upload_kwargs\n    )\n    return bucket_path\n
    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.upload_from_folder","title":"upload_from_folder async","text":"

    Uploads files within a folder (excluding the folder itself) to the object storage service folder.

    Parameters:

    Name Type Description Default from_folder Union[str, Path]

    The path to the folder to upload from.

    required to_folder Optional[str]

    The path to upload the folder to. If not provided, will default to bucket_folder or the base directory of the bucket.

    None **upload_kwargs Dict[str, Any]

    Additional keyword arguments to pass to Blob.upload_from_filename.

    {}

    Returns:

    Type Description str

    The path that the folder was uploaded to.

    Examples:

    Upload local folder my_folder to the bucket's folder my_folder.

    from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\ngcs_bucket.upload_from_folder(\"my_folder\")\n

    Source code in prefect_gcp/cloud_storage.py
    @sync_compatible\nasync def upload_from_folder(\n    self,\n    from_folder: Union[str, Path],\n    to_folder: Optional[str] = None,\n    **upload_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"\n    Uploads files *within* a folder (excluding the folder itself)\n    to the object storage service folder.\n\n    Args:\n        from_folder: The path to the folder to upload from.\n        to_folder: The path to upload the folder to. If not provided, will default\n            to bucket_folder or the base directory of the bucket.\n        **upload_kwargs: Additional keyword arguments to pass to\n            `Blob.upload_from_filename`.\n\n    Returns:\n        The path that the folder was uploaded to.\n\n    Examples:\n        Upload local folder my_folder to the bucket's folder my_folder.\n        ```python\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket.load(\"my-bucket\")\n        gcs_bucket.upload_from_folder(\"my_folder\")\n        ```\n    \"\"\"\n    from_folder = Path(from_folder)\n    # join bucket folder expects string for the first input\n    # when it returns None, we need to convert it back to empty string\n    # so relative_to works\n    bucket_folder = self._join_bucket_folder(to_folder or \"\") or \"\"\n\n    num_uploaded = 0\n    bucket = await self.get_bucket()\n\n    async_coros = []\n    for from_path in from_folder.rglob(\"**/*\"):\n        if from_path.is_dir():\n            continue\n        bucket_path = str(Path(bucket_folder) / from_path.relative_to(from_folder))\n        self.logger.info(\n            f\"Uploading from {str(from_path)!r} to the bucket \"\n            f\"{self.bucket!r} path {bucket_path!r}.\"\n        )\n        blob = bucket.blob(bucket_path)\n        async_coros.append(\n            run_sync_in_worker_thread(\n                blob.upload_from_filename, filename=from_path, **upload_kwargs\n            )\n        )\n        num_uploaded += 1\n    await asyncio.gather(*async_coros)\n    if num_uploaded == 0:\n        self.logger.warning(f\"No files were uploaded from {from_folder}.\")\n    return bucket_folder\n
    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.upload_from_path","title":"upload_from_path async","text":"

    Uploads an object from a path to the object storage service.

    Parameters:

    Name Type Description Default from_path Union[str, Path]

    The path to the file to upload from.

    required to_path Optional[str]

    The path to upload the file to. If not provided, will use the file name of from_path; this gets prefixed with the bucket_folder.

    None **upload_kwargs Dict[str, Any]

    Additional keyword arguments to pass to Blob.upload_from_filename.

    {}

    Returns:

    Type Description str

    The path that the object was uploaded to.

    Examples:

    Upload notes.txt to my_folder/notes.txt.

    from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\ngcs_bucket.upload_from_path(\"notes.txt\", \"my_folder/notes.txt\")\n

    Source code in prefect_gcp/cloud_storage.py
    @sync_compatible\nasync def upload_from_path(\n    self,\n    from_path: Union[str, Path],\n    to_path: Optional[str] = None,\n    **upload_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"\n    Uploads an object from a path to the object storage service.\n\n    Args:\n        from_path: The path to the file to upload from.\n        to_path: The path to upload the file to. If not provided, will use\n            the file name of from_path; this gets prefixed\n            with the bucket_folder.\n        **upload_kwargs: Additional keyword arguments to pass to\n            `Blob.upload_from_filename`.\n\n    Returns:\n        The path that the object was uploaded to.\n\n    Examples:\n        Upload notes.txt to my_folder/notes.txt.\n        ```python\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket.load(\"my-bucket\")\n        gcs_bucket.upload_from_path(\"notes.txt\", \"my_folder/notes.txt\")\n        ```\n    \"\"\"\n    if to_path is None:\n        to_path = Path(from_path).name\n\n    bucket_path = self._join_bucket_folder(to_path)\n    bucket = await self.get_bucket()\n    blob = bucket.blob(bucket_path)\n    self.logger.info(\n        f\"Uploading from {from_path!r} to the bucket \"\n        f\"{self.bucket!r} path {bucket_path!r}.\"\n    )\n\n    await run_sync_in_worker_thread(\n        blob.upload_from_filename, filename=from_path, **upload_kwargs\n    )\n    return bucket_path\n
    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.write_path","title":"write_path async","text":"

    Writes to a GCS bucket.

    Parameters:

    Name Type Description Default path str

    The key name. Each object in your bucket has a unique key (or key name).

    required content bytes

    The content to upload to the GCS bucket.

    required

    Returns:

    Type Description str

    The path that the contents were written to.
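
    Example (a minimal sketch, not from the library's docstrings; the key \"folder/notes.txt\" is illustrative): write bytes to a key in the bucket.

    from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\ngcs_bucket.write_path(\"folder/notes.txt\", content=b\"hello\")\n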

    Source code in prefect_gcp/cloud_storage.py
    @sync_compatible\nasync def write_path(self, path: str, content: bytes) -> str:\n    \"\"\"\n    Writes to an GCS bucket.\n\n    Args:\n        path: The key name. Each object in your bucket has a unique\n            key (or key name).\n        content: What you are uploading to GCS Bucket.\n\n    Returns:\n        The path that the contents were written to.\n    \"\"\"\n    path = self._resolve_path(path)\n    with disable_run_logger():\n        await cloud_storage_upload_blob_from_string.fn(\n            data=content,\n            bucket=self.bucket,\n            blob=path,\n            gcp_credentials=self.gcp_credentials,\n        )\n    return path\n
    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.cloud_storage_copy_blob","title":"cloud_storage_copy_blob async","text":"

    Copies data from one Google Cloud Storage bucket to another, without downloading it locally.

    Parameters:

    Name Type Description Default source_bucket str

    Source bucket name.

    required dest_bucket str

    Destination bucket name.

    required source_blob str

    Source blob name.

    required gcp_credentials GcpCredentials

    Credentials to use for authentication with GCP.

    required dest_blob Optional[str]

    Destination blob name; if not provided, defaults to source_blob.

    None timeout Union[float, Tuple[float, float]]

    The number of seconds the transport should wait for the server response. Can also be passed as a tuple (connect_timeout, read_timeout).

    60 project Optional[str]

    Name of the project to use; overrides the gcp_credentials project if provided.

    None **copy_kwargs Dict[str, Any]

    Additional keyword arguments to pass to Bucket.copy_blob.

    {}

    Returns:

    Type Description str

    Destination blob name.

    Example

    Copies a blob from one bucket to another.

    from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.cloud_storage import cloud_storage_copy_blob\n\n@flow()\ndef example_cloud_storage_copy_blob_flow():\n    gcp_credentials = GcpCredentials(\n        service_account_file=\"/path/to/service/account/keyfile.json\")\n    blob = cloud_storage_copy_blob(\n        \"source_bucket\",\n        \"dest_bucket\",\n        \"source_blob\",\n        gcp_credentials\n    )\n    return blob\n\nexample_cloud_storage_copy_blob_flow()\n

    Source code in prefect_gcp/cloud_storage.py
    @task\nasync def cloud_storage_copy_blob(\n    source_bucket: str,\n    dest_bucket: str,\n    source_blob: str,\n    gcp_credentials: GcpCredentials,\n    dest_blob: Optional[str] = None,\n    timeout: Union[float, Tuple[float, float]] = 60,\n    project: Optional[str] = None,\n    **copy_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"\n    Copies data from one Google Cloud Storage bucket to another,\n    without downloading it locally.\n\n    Args:\n        source_bucket: Source bucket name.\n        dest_bucket: Destination bucket name.\n        source_blob: Source blob name.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        dest_blob: Destination blob name; if not provided, defaults to source_blob.\n        timeout: The number of seconds the transport should wait\n            for the server response. Can also be passed as a tuple\n            (connect_timeout, read_timeout).\n        project: Name of the project to use; overrides the\n            gcp_credentials project if provided.\n        **copy_kwargs: Additional keyword arguments to pass to\n            `Bucket.copy_blob`.\n\n    Returns:\n        Destination blob name.\n\n    Example:\n        Copies blob from one bucket to another.\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.cloud_storage import cloud_storage_copy_blob\n\n        @flow()\n        def example_cloud_storage_copy_blob_flow():\n            gcp_credentials = GcpCredentials(\n                service_account_file=\"/path/to/service/account/keyfile.json\")\n            blob = cloud_storage_copy_blob(\n                \"source_bucket\",\n                \"dest_bucket\",\n                \"source_blob\",\n                gcp_credentials\n            )\n            return blob\n\n        example_cloud_storage_copy_blob_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\n        \"Copying blob named %s from the %s bucket to the %s bucket\",\n        source_blob,\n        source_bucket,\n        dest_bucket,\n    )\n\n    source_bucket_obj = await _get_bucket(\n        source_bucket, gcp_credentials, project=project\n    )\n\n    dest_bucket_obj = await _get_bucket(dest_bucket, gcp_credentials, project=project)\n    if dest_blob is None:\n        dest_blob = source_blob\n\n    source_blob_obj = source_bucket_obj.blob(source_blob)\n    await run_sync_in_worker_thread(\n        source_bucket_obj.copy_blob,\n        blob=source_blob_obj,\n        destination_bucket=dest_bucket_obj,\n        new_name=dest_blob,\n        timeout=timeout,\n        **copy_kwargs,\n    )\n\n    return dest_blob\n
    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.cloud_storage_create_bucket","title":"cloud_storage_create_bucket async","text":"

    Creates a bucket.

    Parameters:

    Name Type Description Default bucket str

    Name of the bucket.

    required gcp_credentials GcpCredentials

    Credentials to use for authentication with GCP.

    required project Optional[str]

    Name of the project to use; overrides the gcp_credentials project if provided.

    None location Optional[str]

    Location of the bucket.

    None **create_kwargs Dict[str, Any]

    Additional keyword arguments to pass to client.create_bucket.

    {}

    Returns:

    Type Description str

    The bucket name.

    Example

    Creates a bucket named \"prefect\".

    from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.cloud_storage import cloud_storage_create_bucket\n\n@flow()\ndef example_cloud_storage_create_bucket_flow():\n    gcp_credentials = GcpCredentials(\n        service_account_file=\"/path/to/service/account/keyfile.json\")\n    bucket = cloud_storage_create_bucket(\"prefect\", gcp_credentials)\n\nexample_cloud_storage_create_bucket_flow()\n

    Source code in prefect_gcp/cloud_storage.py
    @task\nasync def cloud_storage_create_bucket(\n    bucket: str,\n    gcp_credentials: GcpCredentials,\n    project: Optional[str] = None,\n    location: Optional[str] = None,\n    **create_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"\n    Creates a bucket.\n\n    Args:\n        bucket: Name of the bucket.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        project: Name of the project to use; overrides the\n            gcp_credentials project if provided.\n        location: Location of the bucket.\n        **create_kwargs: Additional keyword arguments to pass to `client.create_bucket`.\n\n    Returns:\n        The bucket name.\n\n    Example:\n        Creates a bucket named \"prefect\".\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.cloud_storage import cloud_storage_create_bucket\n\n        @flow()\n        def example_cloud_storage_create_bucket_flow():\n            gcp_credentials = GcpCredentials(\n                service_account_file=\"/path/to/service/account/keyfile.json\")\n            bucket = cloud_storage_create_bucket(\"prefect\", gcp_credentials)\n\n        example_cloud_storage_create_bucket_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Creating %s bucket\", bucket)\n\n    client = gcp_credentials.get_cloud_storage_client(project=project)\n    await run_sync_in_worker_thread(\n        client.create_bucket, bucket, location=location, **create_kwargs\n    )\n    return bucket\n
    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.cloud_storage_download_blob_as_bytes","title":"cloud_storage_download_blob_as_bytes async","text":"

    Downloads a blob as bytes.

    Parameters:

    Name Type Description Default bucket str

    Name of the bucket.

    required blob str

    Name of the Cloud Storage blob.

    required gcp_credentials GcpCredentials

    Credentials to use for authentication with GCP.

    required chunk_size int

    The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification.

    None encryption_key Optional[str]

    An encryption key.

    None timeout Union[float, Tuple[float, float]]

    The number of seconds the transport should wait for the server response. Can also be passed as a tuple (connect_timeout, read_timeout).

    60 project Optional[str]

    Name of the project to use; overrides the gcp_credentials project if provided.

    None **download_kwargs Dict[str, Any]

    Additional keyword arguments to pass to Blob.download_as_bytes.

    {}

    Returns:

    Type Description bytes

    The contents of the blob object as bytes.

    Example

    Downloads a blob from a bucket.

    from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.cloud_storage import cloud_storage_download_blob_as_bytes\n\n@flow()\ndef example_cloud_storage_download_blob_flow():\n    gcp_credentials = GcpCredentials(\n        service_account_file=\"/path/to/service/account/keyfile.json\")\n    contents = cloud_storage_download_blob_as_bytes(\n        \"bucket\", \"blob\", gcp_credentials)\n    return contents\n\nexample_cloud_storage_download_blob_flow()\n

    Source code in prefect_gcp/cloud_storage.py
    @task\nasync def cloud_storage_download_blob_as_bytes(\n    bucket: str,\n    blob: str,\n    gcp_credentials: GcpCredentials,\n    chunk_size: Optional[int] = None,\n    encryption_key: Optional[str] = None,\n    timeout: Union[float, Tuple[float, float]] = 60,\n    project: Optional[str] = None,\n    **download_kwargs: Dict[str, Any],\n) -> bytes:\n    \"\"\"\n    Downloads a blob as bytes.\n\n    Args:\n        bucket: Name of the bucket.\n        blob: Name of the Cloud Storage blob.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        chunk_size (int, optional): The size of a chunk of data whenever\n            iterating (in bytes). This must be a multiple of 256 KB\n            per the API specification.\n        encryption_key: An encryption key.\n        timeout: The number of seconds the transport should wait\n            for the server response. Can also be passed as a tuple\n            (connect_timeout, read_timeout).\n        project: Name of the project to use; overrides the\n            gcp_credentials project if provided.\n        **download_kwargs: Additional keyword arguments to pass to\n            `Blob.download_as_bytes`.\n\n    Returns:\n        A bytes or string representation of the blob object.\n\n    Example:\n        Downloads blob from bucket.\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.cloud_storage import cloud_storage_download_blob_as_bytes\n\n        @flow()\n        def example_cloud_storage_download_blob_flow():\n            gcp_credentials = GcpCredentials(\n                service_account_file=\"/path/to/service/account/keyfile.json\")\n            contents = cloud_storage_download_blob_as_bytes(\n                \"bucket\", \"blob\", gcp_credentials)\n            return contents\n\n        example_cloud_storage_download_blob_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Downloading blob named %s from the %s bucket\", blob, bucket)\n\n    bucket_obj = await _get_bucket(bucket, gcp_credentials, project=project)\n    blob_obj = bucket_obj.blob(\n        blob, chunk_size=chunk_size, encryption_key=encryption_key\n    )\n\n    contents = await run_sync_in_worker_thread(\n        blob_obj.download_as_bytes, timeout=timeout, **download_kwargs\n    )\n    return contents\n
    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.cloud_storage_download_blob_to_file","title":"cloud_storage_download_blob_to_file async","text":"

    Downloads a blob to a file path.

    Parameters:

    Name Type Description Default bucket str

    Name of the bucket.

    required blob str

    Name of the Cloud Storage blob.

    required path Union[str, Path]

    Downloads the contents to the provided file path; if the path is a directory, automatically joins the blob name.

    required gcp_credentials GcpCredentials

    Credentials to use for authentication with GCP.

    required chunk_size int

    The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification.

    None encryption_key Optional[str]

    An encryption key.

    None timeout Union[float, Tuple[float, float]]

    The number of seconds the transport should wait for the server response. Can also be passed as a tuple (connect_timeout, read_timeout).

    60 project Optional[str]

    Name of the project to use; overrides the gcp_credentials project if provided.

    None **download_kwargs Dict[str, Any]

    Additional keyword arguments to pass to Blob.download_to_filename.

    {}

    Returns:

    Type Description Union[str, Path]

    The path to the blob object.

    Example

    Downloads a blob from a bucket.

    from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.cloud_storage import cloud_storage_download_blob_to_file\n\n@flow()\ndef example_cloud_storage_download_blob_flow():\n    gcp_credentials = GcpCredentials(\n        service_account_file=\"/path/to/service/account/keyfile.json\")\n    path = cloud_storage_download_blob_to_file(\n        \"bucket\", \"blob\", \"file_path\", gcp_credentials)\n    return path\n\nexample_cloud_storage_download_blob_flow()\n

    Source code in prefect_gcp/cloud_storage.py
    @task\nasync def cloud_storage_download_blob_to_file(\n    bucket: str,\n    blob: str,\n    path: Union[str, Path],\n    gcp_credentials: GcpCredentials,\n    chunk_size: Optional[int] = None,\n    encryption_key: Optional[str] = None,\n    timeout: Union[float, Tuple[float, float]] = 60,\n    project: Optional[str] = None,\n    **download_kwargs: Dict[str, Any],\n) -> Union[str, Path]:\n    \"\"\"\n    Downloads a blob to a file path.\n\n    Args:\n        bucket: Name of the bucket.\n        blob: Name of the Cloud Storage blob.\n        path: Downloads the contents to the provided file path;\n            if the path is a directory, automatically joins the blob name.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        chunk_size (int, optional): The size of a chunk of data whenever\n            iterating (in bytes). This must be a multiple of 256 KB\n            per the API specification.\n        encryption_key: An encryption key.\n        timeout: The number of seconds the transport should wait\n            for the server response. Can also be passed as a tuple\n            (connect_timeout, read_timeout).\n        project: Name of the project to use; overrides the\n            gcp_credentials project if provided.\n        **download_kwargs: Additional keyword arguments to pass to\n            `Blob.download_to_filename`.\n\n    Returns:\n        The path to the blob object.\n\n    Example:\n        Downloads blob from bucket.\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.cloud_storage import cloud_storage_download_blob_to_file\n\n        @flow()\n        def example_cloud_storage_download_blob_flow():\n            gcp_credentials = GcpCredentials(\n                service_account_file=\"/path/to/service/account/keyfile.json\")\n            path = cloud_storage_download_blob_to_file(\n                \"bucket\", \"blob\", \"file_path\", gcp_credentials)\n            return path\n\n        example_cloud_storage_download_blob_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\n        \"Downloading blob named %s from the %s bucket to %s\", blob, bucket, path\n    )\n\n    bucket_obj = await _get_bucket(bucket, gcp_credentials, project=project)\n    blob_obj = bucket_obj.blob(\n        blob, chunk_size=chunk_size, encryption_key=encryption_key\n    )\n\n    if os.path.isdir(path):\n        if isinstance(path, Path):\n            path = path.joinpath(blob)  # keep as Path if Path is passed\n        else:\n            path = os.path.join(path, blob)  # keep as str if a str is passed\n\n    await run_sync_in_worker_thread(\n        blob_obj.download_to_filename, path, timeout=timeout, **download_kwargs\n    )\n    return path\n
    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.cloud_storage_upload_blob_from_file","title":"cloud_storage_upload_blob_from_file async","text":"

    Uploads a blob from a file path or file-like object. Passing a file-like object is useful when the data was downloaded from the web; it bypasses writing to disk and uploads directly to Cloud Storage.

    Parameters:

    Name Type Description Default file Union[str, Path, BytesIO]

    Path to the data or a file-like object to upload.

    required bucket str

    Name of the bucket.

    required blob str

    Name of the Cloud Storage blob.

    required gcp_credentials GcpCredentials

    Credentials to use for authentication with GCP.

    required content_type Optional[str]

    Type of content being uploaded.

    None chunk_size Optional[int]

    The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification.

    None encryption_key Optional[str]

    An encryption key.

    None timeout Union[float, Tuple[float, float]]

    The number of seconds the transport should wait for the server response. Can also be passed as a tuple (connect_timeout, read_timeout).

    60 project Optional[str]

    Name of the project to use; overrides the gcp_credentials project if provided.

    None **upload_kwargs Dict[str, Any]

    Additional keyword arguments to pass to Blob.upload_from_file or Blob.upload_from_filename.

    {}

    Returns:

    Type Description str

    The blob name.

    Example

    Uploads a blob to a bucket.

    from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.cloud_storage import cloud_storage_upload_blob_from_file\n\n@flow()\ndef example_cloud_storage_upload_blob_from_file_flow():\n    gcp_credentials = GcpCredentials(\n        service_account_file=\"/path/to/service/account/keyfile.json\")\n    blob = cloud_storage_upload_blob_from_file(\n        \"/path/somewhere\", \"bucket\", \"blob\", gcp_credentials)\n    return blob\n\nexample_cloud_storage_upload_blob_from_file_flow()\n

    Source code in prefect_gcp/cloud_storage.py
    @task\nasync def cloud_storage_upload_blob_from_file(\n    file: Union[str, Path, BytesIO],\n    bucket: str,\n    blob: str,\n    gcp_credentials: GcpCredentials,\n    content_type: Optional[str] = None,\n    chunk_size: Optional[int] = None,\n    encryption_key: Optional[str] = None,\n    timeout: Union[float, Tuple[float, float]] = 60,\n    project: Optional[str] = None,\n    **upload_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"\n    Uploads a blob from file path or file-like object. Usage for passing in\n    file-like object is if the data was downloaded from the web;\n    can bypass writing to disk and directly upload to Cloud Storage.\n\n    Args:\n        file: Path to data or file like object to upload.\n        bucket: Name of the bucket.\n        blob: Name of the Cloud Storage blob.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        content_type: Type of content being uploaded.\n        chunk_size: The size of a chunk of data whenever\n            iterating (in bytes). This must be a multiple of 256 KB\n            per the API specification.\n        encryption_key: An encryption key.\n        timeout: The number of seconds the transport should wait\n            for the server response. Can also be passed as a tuple\n            (connect_timeout, read_timeout).\n        project: Name of the project to use; overrides the\n            gcp_credentials project if provided.\n        **upload_kwargs: Additional keyword arguments to pass to\n            `Blob.upload_from_file` or `Blob.upload_from_filename`.\n\n    Returns:\n        The blob name.\n\n    Example:\n        Uploads blob to bucket.\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.cloud_storage import cloud_storage_upload_blob_from_file\n\n        @flow()\n        def example_cloud_storage_upload_blob_from_file_flow():\n            gcp_credentials = GcpCredentials(\n                service_account_file=\"/path/to/service/account/keyfile.json\")\n            blob = cloud_storage_upload_blob_from_file(\n                \"/path/somewhere\", \"bucket\", \"blob\", gcp_credentials)\n            return blob\n\n        example_cloud_storage_upload_blob_from_file_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Uploading blob named %s to the %s bucket\", blob, bucket)\n\n    bucket_obj = await _get_bucket(bucket, gcp_credentials, project=project)\n    blob_obj = bucket_obj.blob(\n        blob, chunk_size=chunk_size, encryption_key=encryption_key\n    )\n\n    if isinstance(file, BytesIO):\n        await run_sync_in_worker_thread(\n            blob_obj.upload_from_file,\n            file,\n            content_type=content_type,\n            timeout=timeout,\n            **upload_kwargs,\n        )\n    else:\n        await run_sync_in_worker_thread(\n            blob_obj.upload_from_filename,\n            file,\n            content_type=content_type,\n            timeout=timeout,\n            **upload_kwargs,\n        )\n    return blob\n
    "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.cloud_storage_upload_blob_from_string","title":"cloud_storage_upload_blob_from_string async","text":"

    Uploads a blob from a string or bytes representation of data.

    Parameters:

    Name Type Description Default data Union[str, bytes]

    String or bytes representation of data to upload.

    required bucket str

    Name of the bucket.

    required blob str

    Name of the Cloud Storage blob.

    required gcp_credentials GcpCredentials

    Credentials to use for authentication with GCP.

    required content_type Optional[str]

    Type of content being uploaded.

    None chunk_size Optional[int]

    The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification.

    None encryption_key Optional[str]

    An encryption key.

    None timeout Union[float, Tuple[float, float]]

    The number of seconds the transport should wait for the server response. Can also be passed as a tuple (connect_timeout, read_timeout).

    60 project Optional[str]

    Name of the project to use; overrides the gcp_credentials project if provided.

    None **upload_kwargs Dict[str, Any]

    Additional keyword arguments to pass to Blob.upload_from_string.

    {}

    Returns:

    Type Description str

    The blob name.

    Example

    Uploads blob to bucket.

    from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.cloud_storage import cloud_storage_upload_blob_from_string\n\n@flow()\ndef example_cloud_storage_upload_blob_from_string_flow():\n    gcp_credentials = GcpCredentials(\n        service_account_file=\"/path/to/service/account/keyfile.json\")\n    blob = cloud_storage_upload_blob_from_string(\n        \"data\", \"bucket\", \"blob\", gcp_credentials)\n    return blob\n\nexample_cloud_storage_upload_blob_from_string_flow()\n

    Source code in prefect_gcp/cloud_storage.py
    @task\nasync def cloud_storage_upload_blob_from_string(\n    data: Union[str, bytes],\n    bucket: str,\n    blob: str,\n    gcp_credentials: GcpCredentials,\n    content_type: Optional[str] = None,\n    chunk_size: Optional[int] = None,\n    encryption_key: Optional[str] = None,\n    timeout: Union[float, Tuple[float, float]] = 60,\n    project: Optional[str] = None,\n    **upload_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"\n    Uploads a blob from a string or bytes representation of data.\n\n    Args:\n        data: String or bytes representation of data to upload.\n        bucket: Name of the bucket.\n        blob: Name of the Cloud Storage blob.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        content_type: Type of content being uploaded.\n        chunk_size: The size of a chunk of data whenever\n            iterating (in bytes). This must be a multiple of 256 KB\n            per the API specification.\n        encryption_key: An encryption key.\n        timeout: The number of seconds the transport should wait\n            for the server response. Can also be passed as a tuple\n            (connect_timeout, read_timeout).\n        project: Name of the project to use; overrides the\n            gcp_credentials project if provided.\n        **upload_kwargs: Additional keyword arguments to pass to\n            `Blob.upload_from_string`.\n\n    Returns:\n        The blob name.\n\n    Example:\n        Uploads blob to bucket.\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.cloud_storage import cloud_storage_upload_blob_from_string\n\n        @flow()\n        def example_cloud_storage_upload_blob_from_string_flow():\n            gcp_credentials = GcpCredentials(\n                service_account_file=\"/path/to/service/account/keyfile.json\")\n            blob = cloud_storage_upload_blob_from_string(\n                \"data\", \"bucket\", \"blob\", gcp_credentials)\n            return blob\n\n        example_cloud_storage_upload_blob_from_string_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Uploading blob named %s to the %s bucket\", blob, bucket)\n\n    bucket_obj = await _get_bucket(bucket, gcp_credentials, project=project)\n    blob_obj = bucket_obj.blob(\n        blob, chunk_size=chunk_size, encryption_key=encryption_key\n    )\n\n    await run_sync_in_worker_thread(\n        blob_obj.upload_from_string,\n        data,\n        content_type=content_type,\n        timeout=timeout,\n        **upload_kwargs,\n    )\n    return blob\n
    "},{"location":"integrations/prefect-gcp/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-gcp/credentials/#prefect_gcp.credentials","title":"prefect_gcp.credentials","text":"

    Module handling GCP credentials.

    "},{"location":"integrations/prefect-gcp/credentials/#prefect_gcp.credentials.GcpCredentials","title":"GcpCredentials","text":"

    Bases: CredentialsBlock

    Block used to manage authentication with GCP. Google authentication is handled via the google.oauth2 module or through the CLI. Specify either service_account_file or service_account_info; if neither is specified, the client will try to detect credentials following Google's Application Default Credentials. See Google's Authentication documentation for details on inference and recommended authentication patterns.

    Attributes:

    Name Type Description service_account_file Optional[Path]

    Path to the service account JSON keyfile.

    service_account_info Optional[SecretDict]

    The contents of the keyfile as a dict.

    Example

    Load GCP credentials stored in a GCP Credentials Block:

    from prefect_gcp import GcpCredentials\ngcp_credentials_block = GcpCredentials.load(\"BLOCK_NAME\")\n

    Source code in prefect_gcp/credentials.py
    class GcpCredentials(CredentialsBlock):\n    \"\"\"\n    Block used to manage authentication with GCP. Google authentication is\n    handled via the `google.oauth2` module or through the CLI.\n    Specify either one of service `account_file` or `service_account_info`; if both\n    are not specified, the client will try to detect the credentials following Google's\n    [Application Default Credentials](https://cloud.google.com/docs/authentication/application-default-credentials).\n    See Google's [Authentication documentation](https://cloud.google.com/docs/authentication#service-accounts)\n    for details on inference and recommended authentication patterns.\n\n    Attributes:\n        service_account_file: Path to the service account JSON keyfile.\n        service_account_info: The contents of the keyfile as a dict.\n\n    Example:\n        Load GCP credentials stored in a `GCP Credentials` Block:\n        ```python\n        from prefect_gcp import GcpCredentials\n        gcp_credentials_block = GcpCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"  # noqa\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/10424e311932e31c477ac2b9ef3d53cefbaad708-250x250.png\"  # noqa\n    _block_type_name = \"GCP Credentials\"\n    _documentation_url = \"https://prefecthq.github.io/prefect-gcp/credentials/#prefect_gcp.credentials.GcpCredentials\"  # noqa: E501\n\n    service_account_file: Optional[Path] = Field(\n        default=None, description=\"Path to the service account JSON keyfile.\"\n    )\n    service_account_info: Optional[SecretDict] = Field(\n        default=None, description=\"The contents of the keyfile as a dict.\"\n    )\n    project: Optional[str] = Field(\n        default=None, description=\"The GCP project to use for the client.\"\n    )\n\n    _service_account_email: Optional[str] = None\n\n    def __hash__(self):\n        return hash(\n            (\n                hash(self.service_account_file),\n                hash(frozenset(self.service_account_info.dict().items()))\n                if self.service_account_info\n                else None,\n                hash(self.project),\n                hash(self._service_account_email),\n            )\n        )\n\n    @root_validator\n    def _provide_one_service_account_source(cls, values):\n        \"\"\"\n        Ensure that only a service account file or service account info ias provided.\n        \"\"\"\n        both_service_account = (\n            values.get(\"service_account_info\") is not None\n            and values.get(\"service_account_file\") is not None\n        )\n        if both_service_account:\n            raise ValueError(\n                \"Only one of service_account_info or service_account_file \"\n                \"can be specified at once\"\n            )\n        return values\n\n    @validator(\"service_account_file\")\n    def _check_service_account_file(cls, file):\n        \"\"\"Get full path of provided file and make sure that it exists.\"\"\"\n        if not file:\n            return file\n\n        service_account_file = Path(file).expanduser()\n        if not service_account_file.exists():\n            raise ValueError(\"The provided path to the service account is invalid\")\n        return service_account_file\n\n    @validator(\"service_account_info\", pre=True)\n    def _convert_json_string_json_service_account_info(cls, value):\n        \"\"\"\n        Converts service account info provided as a json formatted string\n        to a dictionary\n        \"\"\"\n        if 
isinstance(value, str):\n            try:\n                service_account_info = json.loads(value)\n                return service_account_info\n            except Exception:\n                raise ValueError(\"Unable to decode service_account_info\")\n        else:\n            return value\n\n    def block_initialization(self):\n        credentials = self.get_credentials_from_service_account()\n        if self.project is None:\n            if self.service_account_info or self.service_account_file:\n                credentials_project = credentials.project_id\n            # google.auth.default using gcloud auth application-default login\n            elif credentials.quota_project_id:\n                credentials_project = credentials.quota_project_id\n            # compute-assigned service account via GCP metadata server\n            else:\n                _, credentials_project = google.auth.default()\n            self.project = credentials_project\n\n        if hasattr(credentials, \"service_account_email\"):\n            self._service_account_email = credentials.service_account_email\n\n    def get_credentials_from_service_account(self) -> Credentials:\n        \"\"\"\n        Helper method to serialize credentials by using either\n        service_account_file or service_account_info.\n        \"\"\"\n        if self.service_account_info:\n            credentials = Credentials.from_service_account_info(\n                self.service_account_info.get_secret_value(),\n                scopes=[\"https://www.googleapis.com/auth/cloud-platform\"],\n            )\n        elif self.service_account_file:\n            credentials = Credentials.from_service_account_file(\n                self.service_account_file,\n                scopes=[\"https://www.googleapis.com/auth/cloud-platform\"],\n            )\n        else:\n            credentials, _ = google.auth.default()\n        return credentials\n\n    @sync_compatible\n    async def get_access_token(self):\n        \"\"\"\n        See: https://stackoverflow.com/a/69107745\n        Also: https://www.jhanley.com/google-cloud-creating-oauth-access-tokens-for-rest-api-calls/\n        \"\"\"  # noqa\n        request = google.auth.transport.requests.Request()\n        credentials = self.get_credentials_from_service_account()\n        await run_sync_in_worker_thread(credentials.refresh, request)\n        return credentials.token\n\n    def get_client(\n        self,\n        client_type: Union[str, ClientType],\n        **get_client_kwargs: Dict[str, Any],\n    ) -> Any:\n        \"\"\"\n        Helper method to dynamically get a client type.\n\n        Args:\n            client_type: The name of the client to get.\n            **get_client_kwargs: Additional keyword arguments to pass to the\n                `get_*_client` method.\n\n        Returns:\n            An authenticated client.\n\n        Raises:\n            ValueError: if the client is not supported.\n        \"\"\"\n        if isinstance(client_type, str):\n            client_type = ClientType(client_type)\n        client_type = client_type.value\n        get_client_method = getattr(self, f\"get_{client_type}_client\")\n        return get_client_method(**get_client_kwargs)\n\n    @_raise_help_msg(\"cloud_storage\")\n    def get_cloud_storage_client(\n        self, project: Optional[str] = None\n    ) -> \"StorageClient\":\n        \"\"\"\n        Gets an authenticated Cloud Storage client.\n\n        Args:\n            project: Name of the project to use; overrides the base\n          
      class's project if provided.\n\n        Returns:\n            An authenticated Cloud Storage client.\n\n        Examples:\n            Gets a GCP Cloud Storage client from a path.\n            ```python\n            from prefect import flow\n            from prefect_gcp.credentials import GcpCredentials\n\n            @flow()\n            def example_get_client_flow():\n                service_account_file = \"~/.secrets/prefect-service-account.json\"\n                client = GcpCredentials(\n                    service_account_file=service_account_file\n                ).get_cloud_storage_client()\n            example_get_client_flow()\n            ```\n\n            Gets a GCP Cloud Storage client from a dictionary.\n            ```python\n            from prefect import flow\n            from prefect_gcp.credentials import GcpCredentials\n\n            @flow()\n            def example_get_client_flow():\n                service_account_info = {\n                    \"type\": \"service_account\",\n                    \"project_id\": \"project_id\",\n                    \"private_key_id\": \"private_key_id\",\n                    \"private_key\": \"private_key\",\n                    \"client_email\": \"client_email\",\n                    \"client_id\": \"client_id\",\n                    \"auth_uri\": \"auth_uri\",\n                    \"token_uri\": \"token_uri\",\n                    \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n                    \"client_x509_cert_url\": \"client_x509_cert_url\"\n                }\n                client = GcpCredentials(\n                    service_account_info=service_account_info\n                ).get_cloud_storage_client()\n            example_get_client_flow()\n            ```\n        \"\"\"\n        credentials = self.get_credentials_from_service_account()\n\n        # override class project if method project is provided\n        project = project or self.project\n        storage_client = StorageClient(credentials=credentials, project=project)\n        return storage_client\n\n    @_raise_help_msg(\"bigquery\")\n    def get_bigquery_client(\n        self, project: str = None, location: str = None\n    ) -> \"BigQueryClient\":\n        \"\"\"\n        Gets an authenticated BigQuery client.\n\n        Args:\n            project: Name of the project to use; overrides the base\n                class's project if provided.\n            location: Location to use.\n\n        Returns:\n            An authenticated BigQuery client.\n\n        Examples:\n            Gets a GCP BigQuery client from a path.\n            ```python\n            from prefect import flow\n            from prefect_gcp.credentials import GcpCredentials\n\n            @flow()\n            def example_get_client_flow():\n                service_account_file = \"~/.secrets/prefect-service-account.json\"\n                client = GcpCredentials(\n                    service_account_file=service_account_file\n                ).get_bigquery_client()\n            example_get_client_flow()\n            ```\n\n            Gets a GCP BigQuery client from a dictionary.\n            ```python\n            from prefect import flow\n            from prefect_gcp.credentials import GcpCredentials\n\n            @flow()\n            def example_get_client_flow():\n                service_account_info = {\n                    \"type\": \"service_account\",\n                    \"project_id\": \"project_id\",\n                    \"private_key_id\": 
\"private_key_id\",\n                    \"private_key\": \"private_key\",\n                    \"client_email\": \"client_email\",\n                    \"client_id\": \"client_id\",\n                    \"auth_uri\": \"auth_uri\",\n                    \"token_uri\": \"token_uri\",\n                    \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n                    \"client_x509_cert_url\": \"client_x509_cert_url\"\n                }\n                client = GcpCredentials(\n                    service_account_info=service_account_info\n                ).get_bigquery_client()\n\n            example_get_client_flow()\n            ```\n        \"\"\"\n        credentials = self.get_credentials_from_service_account()\n\n        # override class project if method project is provided\n        project = project or self.project\n        big_query_client = BigQueryClient(\n            credentials=credentials, project=project, location=location\n        )\n        return big_query_client\n\n    @_raise_help_msg(\"secret_manager\")\n    def get_secret_manager_client(self) -> \"SecretManagerServiceClient\":\n        \"\"\"\n        Gets an authenticated Secret Manager Service client.\n\n        Returns:\n            An authenticated Secret Manager Service client.\n\n        Examples:\n            Gets a GCP Secret Manager client from a path.\n            ```python\n            from prefect import flow\n            from prefect_gcp.credentials import GcpCredentials\n\n            @flow()\n            def example_get_client_flow():\n                service_account_file = \"~/.secrets/prefect-service-account.json\"\n                client = GcpCredentials(\n                    service_account_file=service_account_file\n                ).get_secret_manager_client()\n            example_get_client_flow()\n            ```\n\n            Gets a GCP Cloud Storage client from a dictionary.\n            ```python\n            from prefect import flow\n            from prefect_gcp.credentials import GcpCredentials\n\n            @flow()\n            def example_get_client_flow():\n                service_account_info = {\n                    \"type\": \"service_account\",\n                    \"project_id\": \"project_id\",\n                    \"private_key_id\": \"private_key_id\",\n                    \"private_key\": \"private_key\",\n                    \"client_email\": \"client_email\",\n                    \"client_id\": \"client_id\",\n                    \"auth_uri\": \"auth_uri\",\n                    \"token_uri\": \"token_uri\",\n                    \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n                    \"client_x509_cert_url\": \"client_x509_cert_url\"\n                }\n                client = GcpCredentials(\n                    service_account_info=service_account_info\n                ).get_secret_manager_client()\n            example_get_client_flow()\n            ```\n        \"\"\"\n        credentials = self.get_credentials_from_service_account()\n\n        # doesn't accept project; must pass in project in tasks\n        secret_manager_client = SecretManagerServiceClient(credentials=credentials)\n        return secret_manager_client\n\n    @_raise_help_msg(\"aiplatform\")\n    def get_job_service_client(\n        self, client_options: Union[Dict[str, Any], ClientOptions] = None\n    ) -> \"JobServiceClient\":\n        \"\"\"\n        Gets an authenticated Job Service client for Vertex AI.\n\n        Returns:\n            An 
authenticated Job Service client.\n\n        Examples:\n            Gets a GCP Job Service client from a path.\n            ```python\n            from prefect import flow\n            from prefect_gcp.credentials import GcpCredentials\n\n            @flow()\n            def example_get_client_flow():\n                service_account_file = \"~/.secrets/prefect-service-account.json\"\n                client = GcpCredentials(\n                    service_account_file=service_account_file\n                ).get_job_service_client()\n\n            example_get_client_flow()\n            ```\n\n            Gets a GCP Cloud Storage client from a dictionary.\n            ```python\n            from prefect import flow\n            from prefect_gcp.credentials import GcpCredentials\n\n            @flow()\n            def example_get_client_flow():\n                service_account_info = {\n                    \"type\": \"service_account\",\n                    \"project_id\": \"project_id\",\n                    \"private_key_id\": \"private_key_id\",\n                    \"private_key\": \"private_key\",\n                    \"client_email\": \"client_email\",\n                    \"client_id\": \"client_id\",\n                    \"auth_uri\": \"auth_uri\",\n                    \"token_uri\": \"token_uri\",\n                    \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n                    \"client_x509_cert_url\": \"client_x509_cert_url\"\n                }\n                client = GcpCredentials(\n                    service_account_info=service_account_info\n                ).get_job_service_client()\n\n            example_get_client_flow()\n            ```\n        \"\"\"\n        if isinstance(client_options, dict):\n            client_options = from_dict(client_options)\n\n        credentials = self.get_credentials_from_service_account()\n        return JobServiceClient(credentials=credentials, client_options=client_options)\n\n    @_raise_help_msg(\"aiplatform\")\n    def get_job_service_async_client(\n        self, client_options: Union[Dict[str, Any], ClientOptions] = None\n    ) -> \"JobServiceAsyncClient\":\n        \"\"\"\n        Gets an authenticated Job Service async client for Vertex AI.\n\n        Returns:\n            An authenticated Job Service async client.\n\n        Examples:\n            Gets a GCP Job Service client from a path.\n            ```python\n            from prefect import flow\n            from prefect_gcp.credentials import GcpCredentials\n\n            @flow()\n            def example_get_client_flow():\n                service_account_file = \"~/.secrets/prefect-service-account.json\"\n                client = GcpCredentials(\n                    service_account_file=service_account_file\n                ).get_job_service_async_client()\n\n            example_get_client_flow()\n            ```\n\n            Gets a GCP Cloud Storage client from a dictionary.\n            ```python\n            from prefect import flow\n            from prefect_gcp.credentials import GcpCredentials\n\n            @flow()\n            def example_get_client_flow():\n                service_account_info = {\n                    \"type\": \"service_account\",\n                    \"project_id\": \"project_id\",\n                    \"private_key_id\": \"private_key_id\",\n                    \"private_key\": \"private_key\",\n                    \"client_email\": \"client_email\",\n                    \"client_id\": \"client_id\",\n                    
\"auth_uri\": \"auth_uri\",\n                    \"token_uri\": \"token_uri\",\n                    \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n                    \"client_x509_cert_url\": \"client_x509_cert_url\"\n                }\n                client = GcpCredentials(\n                    service_account_info=service_account_info\n                ).get_job_service_async_client()\n\n            example_get_client_flow()\n            ```\n        \"\"\"\n        if isinstance(client_options, dict):\n            client_options = from_dict(client_options)\n\n        return _get_job_service_async_client_cached(\n            self, tuple(client_options.__dict__.items())\n        )\n
    "},{"location":"integrations/prefect-gcp/credentials/#prefect_gcp.credentials.GcpCredentials.get_access_token","title":"get_access_token async","text":"Source code in prefect_gcp/credentials.py
    @sync_compatible\nasync def get_access_token(self):\n    \"\"\"\n    See: https://stackoverflow.com/a/69107745\n    Also: https://www.jhanley.com/google-cloud-creating-oauth-access-tokens-for-rest-api-calls/\n    \"\"\"  # noqa\n    request = google.auth.transport.requests.Request()\n    credentials = self.get_credentials_from_service_account()\n    await run_sync_in_worker_thread(credentials.refresh, request)\n    return credentials.token\n
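
    Example

    A minimal sketch, assuming a GcpCredentials block named BLOCK_NAME has already been saved, of generating a short-lived OAuth access token for direct REST API calls:

    from prefect_gcp import GcpCredentials\n\n# \"BLOCK_NAME\" is a placeholder for a previously saved GcpCredentials block\ngcp_credentials = GcpCredentials.load(\"BLOCK_NAME\")\n\n# get_access_token is sync-compatible, so it can be called directly from synchronous code\ntoken = gcp_credentials.get_access_token()\n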
    "},{"location":"integrations/prefect-gcp/credentials/#prefect_gcp.credentials.GcpCredentials.get_bigquery_client","title":"get_bigquery_client","text":"

    Gets an authenticated BigQuery client.

    Parameters:

    Name Type Description Default project str

    Name of the project to use; overrides the base class's project if provided.

    None location str

    Location to use.

    None

    Returns:

    Type Description Client

    An authenticated BigQuery client.

    Examples:

    Gets a GCP BigQuery client from a path.

    from prefect import flow\nfrom prefect_gcp.credentials import GcpCredentials\n\n@flow()\ndef example_get_client_flow():\n    service_account_file = \"~/.secrets/prefect-service-account.json\"\n    client = GcpCredentials(\n        service_account_file=service_account_file\n    ).get_bigquery_client()\nexample_get_client_flow()\n

    Gets a GCP BigQuery client from a dictionary.

    from prefect import flow\nfrom prefect_gcp.credentials import GcpCredentials\n\n@flow()\ndef example_get_client_flow():\n    service_account_info = {\n        \"type\": \"service_account\",\n        \"project_id\": \"project_id\",\n        \"private_key_id\": \"private_key_id\",\n        \"private_key\": \"private_key\",\n        \"client_email\": \"client_email\",\n        \"client_id\": \"client_id\",\n        \"auth_uri\": \"auth_uri\",\n        \"token_uri\": \"token_uri\",\n        \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n        \"client_x509_cert_url\": \"client_x509_cert_url\"\n    }\n    client = GcpCredentials(\n        service_account_info=service_account_info\n    ).get_bigquery_client()\n\nexample_get_client_flow()\n

    Source code in prefect_gcp/credentials.py
    @_raise_help_msg(\"bigquery\")\ndef get_bigquery_client(\n    self, project: str = None, location: str = None\n) -> \"BigQueryClient\":\n    \"\"\"\n    Gets an authenticated BigQuery client.\n\n    Args:\n        project: Name of the project to use; overrides the base\n            class's project if provided.\n        location: Location to use.\n\n    Returns:\n        An authenticated BigQuery client.\n\n    Examples:\n        Gets a GCP BigQuery client from a path.\n        ```python\n        from prefect import flow\n        from prefect_gcp.credentials import GcpCredentials\n\n        @flow()\n        def example_get_client_flow():\n            service_account_file = \"~/.secrets/prefect-service-account.json\"\n            client = GcpCredentials(\n                service_account_file=service_account_file\n            ).get_bigquery_client()\n        example_get_client_flow()\n        ```\n\n        Gets a GCP BigQuery client from a dictionary.\n        ```python\n        from prefect import flow\n        from prefect_gcp.credentials import GcpCredentials\n\n        @flow()\n        def example_get_client_flow():\n            service_account_info = {\n                \"type\": \"service_account\",\n                \"project_id\": \"project_id\",\n                \"private_key_id\": \"private_key_id\",\n                \"private_key\": \"private_key\",\n                \"client_email\": \"client_email\",\n                \"client_id\": \"client_id\",\n                \"auth_uri\": \"auth_uri\",\n                \"token_uri\": \"token_uri\",\n                \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n                \"client_x509_cert_url\": \"client_x509_cert_url\"\n            }\n            client = GcpCredentials(\n                service_account_info=service_account_info\n            ).get_bigquery_client()\n\n        example_get_client_flow()\n        ```\n    \"\"\"\n    credentials = self.get_credentials_from_service_account()\n\n    # override class project if method project is provided\n    project = project or self.project\n    big_query_client = BigQueryClient(\n        credentials=credentials, project=project, location=location\n    )\n    return big_query_client\n
    "},{"location":"integrations/prefect-gcp/credentials/#prefect_gcp.credentials.GcpCredentials.get_client","title":"get_client","text":"

    Helper method to dynamically get a client type.

    Parameters:

    Name Type Description Default client_type Union[str, ClientType]

    The name of the client to get.

    required **get_client_kwargs Dict[str, Any]

    Additional keyword arguments to pass to the get_*_client method.

    {}

    Returns:

    Type Description Any

    An authenticated client.

    Raises:

    Type Description ValueError

    if the client is not supported.
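
    Example

    A brief sketch, not taken from the upstream docs, assuming a saved GcpCredentials block named BLOCK_NAME and that cloud_storage and bigquery are supported client type names:

    from prefect_gcp import GcpCredentials\n\n# \"BLOCK_NAME\" is a placeholder for a previously saved GcpCredentials block\ngcp_credentials = GcpCredentials.load(\"BLOCK_NAME\")\n\n# extra keyword arguments are forwarded to the matching get_*_client method\nstorage_client = gcp_credentials.get_client(\"cloud_storage\")\nbigquery_client = gcp_credentials.get_client(\"bigquery\", location=\"US\")\n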

    Source code in prefect_gcp/credentials.py
    def get_client(\n    self,\n    client_type: Union[str, ClientType],\n    **get_client_kwargs: Dict[str, Any],\n) -> Any:\n    \"\"\"\n    Helper method to dynamically get a client type.\n\n    Args:\n        client_type: The name of the client to get.\n        **get_client_kwargs: Additional keyword arguments to pass to the\n            `get_*_client` method.\n\n    Returns:\n        An authenticated client.\n\n    Raises:\n        ValueError: if the client is not supported.\n    \"\"\"\n    if isinstance(client_type, str):\n        client_type = ClientType(client_type)\n    client_type = client_type.value\n    get_client_method = getattr(self, f\"get_{client_type}_client\")\n    return get_client_method(**get_client_kwargs)\n
    "},{"location":"integrations/prefect-gcp/credentials/#prefect_gcp.credentials.GcpCredentials.get_cloud_storage_client","title":"get_cloud_storage_client","text":"

    Gets an authenticated Cloud Storage client.

    Parameters:

    Name Type Description Default project Optional[str]

    Name of the project to use; overrides the base class's project if provided.

    None

    Returns:

    Type Description Client

    An authenticated Cloud Storage client.

    Examples:

    Gets a GCP Cloud Storage client from a path.

    from prefect import flow\nfrom prefect_gcp.credentials import GcpCredentials\n\n@flow()\ndef example_get_client_flow():\n    service_account_file = \"~/.secrets/prefect-service-account.json\"\n    client = GcpCredentials(\n        service_account_file=service_account_file\n    ).get_cloud_storage_client()\nexample_get_client_flow()\n

    Gets a GCP Cloud Storage client from a dictionary.

    from prefect import flow\nfrom prefect_gcp.credentials import GcpCredentials\n\n@flow()\ndef example_get_client_flow():\n    service_account_info = {\n        \"type\": \"service_account\",\n        \"project_id\": \"project_id\",\n        \"private_key_id\": \"private_key_id\",\n        \"private_key\": \"private_key\",\n        \"client_email\": \"client_email\",\n        \"client_id\": \"client_id\",\n        \"auth_uri\": \"auth_uri\",\n        \"token_uri\": \"token_uri\",\n        \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n        \"client_x509_cert_url\": \"client_x509_cert_url\"\n    }\n    client = GcpCredentials(\n        service_account_info=service_account_info\n    ).get_cloud_storage_client()\nexample_get_client_flow()\n

    Source code in prefect_gcp/credentials.py
    @_raise_help_msg(\"cloud_storage\")\ndef get_cloud_storage_client(\n    self, project: Optional[str] = None\n) -> \"StorageClient\":\n    \"\"\"\n    Gets an authenticated Cloud Storage client.\n\n    Args:\n        project: Name of the project to use; overrides the base\n            class's project if provided.\n\n    Returns:\n        An authenticated Cloud Storage client.\n\n    Examples:\n        Gets a GCP Cloud Storage client from a path.\n        ```python\n        from prefect import flow\n        from prefect_gcp.credentials import GcpCredentials\n\n        @flow()\n        def example_get_client_flow():\n            service_account_file = \"~/.secrets/prefect-service-account.json\"\n            client = GcpCredentials(\n                service_account_file=service_account_file\n            ).get_cloud_storage_client()\n        example_get_client_flow()\n        ```\n\n        Gets a GCP Cloud Storage client from a dictionary.\n        ```python\n        from prefect import flow\n        from prefect_gcp.credentials import GcpCredentials\n\n        @flow()\n        def example_get_client_flow():\n            service_account_info = {\n                \"type\": \"service_account\",\n                \"project_id\": \"project_id\",\n                \"private_key_id\": \"private_key_id\",\n                \"private_key\": \"private_key\",\n                \"client_email\": \"client_email\",\n                \"client_id\": \"client_id\",\n                \"auth_uri\": \"auth_uri\",\n                \"token_uri\": \"token_uri\",\n                \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n                \"client_x509_cert_url\": \"client_x509_cert_url\"\n            }\n            client = GcpCredentials(\n                service_account_info=service_account_info\n            ).get_cloud_storage_client()\n        example_get_client_flow()\n        ```\n    \"\"\"\n    credentials = self.get_credentials_from_service_account()\n\n    # override class project if method project is provided\n    project = project or self.project\n    storage_client = StorageClient(credentials=credentials, project=project)\n    return storage_client\n
    "},{"location":"integrations/prefect-gcp/credentials/#prefect_gcp.credentials.GcpCredentials.get_credentials_from_service_account","title":"get_credentials_from_service_account","text":"

    Helper method to serialize credentials by using either service_account_file or service_account_info.
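
    Example

    A minimal sketch, assuming a saved GcpCredentials block named BLOCK_NAME, of retrieving the underlying google.auth credentials object for use with other Google client libraries:

    from prefect_gcp import GcpCredentials\n\n# \"BLOCK_NAME\" is a placeholder for a previously saved GcpCredentials block\ngcp_credentials = GcpCredentials.load(\"BLOCK_NAME\")\ngoogle_credentials = gcp_credentials.get_credentials_from_service_account()\n\n# the returned credentials can be passed to other Google clients, for example:\n# google.cloud.storage.Client(credentials=google_credentials)\n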

    Source code in prefect_gcp/credentials.py
    def get_credentials_from_service_account(self) -> Credentials:\n    \"\"\"\n    Helper method to serialize credentials by using either\n    service_account_file or service_account_info.\n    \"\"\"\n    if self.service_account_info:\n        credentials = Credentials.from_service_account_info(\n            self.service_account_info.get_secret_value(),\n            scopes=[\"https://www.googleapis.com/auth/cloud-platform\"],\n        )\n    elif self.service_account_file:\n        credentials = Credentials.from_service_account_file(\n            self.service_account_file,\n            scopes=[\"https://www.googleapis.com/auth/cloud-platform\"],\n        )\n    else:\n        credentials, _ = google.auth.default()\n    return credentials\n
    "},{"location":"integrations/prefect-gcp/credentials/#prefect_gcp.credentials.GcpCredentials.get_job_service_async_client","title":"get_job_service_async_client","text":"

    Gets an authenticated Job Service async client for Vertex AI.

    Returns:

    Type Description JobServiceAsyncClient

    An authenticated Job Service async client.

    Examples:

    Gets a GCP Job Service client from a path.

    from prefect import flow\nfrom prefect_gcp.credentials import GcpCredentials\n\n@flow()\ndef example_get_client_flow():\n    service_account_file = \"~/.secrets/prefect-service-account.json\"\n    client = GcpCredentials(\n        service_account_file=service_account_file\n    ).get_job_service_async_client()\n\nexample_get_client_flow()\n

    Gets a GCP Cloud Storage client from a dictionary.

    from prefect import flow\nfrom prefect_gcp.credentials import GcpCredentials\n\n@flow()\ndef example_get_client_flow():\n    service_account_info = {\n        \"type\": \"service_account\",\n        \"project_id\": \"project_id\",\n        \"private_key_id\": \"private_key_id\",\n        \"private_key\": \"private_key\",\n        \"client_email\": \"client_email\",\n        \"client_id\": \"client_id\",\n        \"auth_uri\": \"auth_uri\",\n        \"token_uri\": \"token_uri\",\n        \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n        \"client_x509_cert_url\": \"client_x509_cert_url\"\n    }\n    client = GcpCredentials(\n        service_account_info=service_account_info\n    ).get_job_service_async_client()\n\nexample_get_client_flow()\n

    Source code in prefect_gcp/credentials.py
    @_raise_help_msg(\"aiplatform\")\ndef get_job_service_async_client(\n    self, client_options: Union[Dict[str, Any], ClientOptions] = None\n) -> \"JobServiceAsyncClient\":\n    \"\"\"\n    Gets an authenticated Job Service async client for Vertex AI.\n\n    Returns:\n        An authenticated Job Service async client.\n\n    Examples:\n        Gets a GCP Job Service client from a path.\n        ```python\n        from prefect import flow\n        from prefect_gcp.credentials import GcpCredentials\n\n        @flow()\n        def example_get_client_flow():\n            service_account_file = \"~/.secrets/prefect-service-account.json\"\n            client = GcpCredentials(\n                service_account_file=service_account_file\n            ).get_job_service_async_client()\n\n        example_get_client_flow()\n        ```\n\n        Gets a GCP Cloud Storage client from a dictionary.\n        ```python\n        from prefect import flow\n        from prefect_gcp.credentials import GcpCredentials\n\n        @flow()\n        def example_get_client_flow():\n            service_account_info = {\n                \"type\": \"service_account\",\n                \"project_id\": \"project_id\",\n                \"private_key_id\": \"private_key_id\",\n                \"private_key\": \"private_key\",\n                \"client_email\": \"client_email\",\n                \"client_id\": \"client_id\",\n                \"auth_uri\": \"auth_uri\",\n                \"token_uri\": \"token_uri\",\n                \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n                \"client_x509_cert_url\": \"client_x509_cert_url\"\n            }\n            client = GcpCredentials(\n                service_account_info=service_account_info\n            ).get_job_service_async_client()\n\n        example_get_client_flow()\n        ```\n    \"\"\"\n    if isinstance(client_options, dict):\n        client_options = from_dict(client_options)\n\n    return _get_job_service_async_client_cached(\n        self, tuple(client_options.__dict__.items())\n    )\n
    "},{"location":"integrations/prefect-gcp/credentials/#prefect_gcp.credentials.GcpCredentials.get_job_service_client","title":"get_job_service_client","text":"

    Gets an authenticated Job Service client for Vertex AI.

    Returns:

    Type Description JobServiceClient

    An authenticated Job Service client.

    Examples:

    Gets a GCP Job Service client from a path.

    from prefect import flow\nfrom prefect_gcp.credentials import GcpCredentials\n\n@flow()\ndef example_get_client_flow():\n    service_account_file = \"~/.secrets/prefect-service-account.json\"\n    client = GcpCredentials(\n        service_account_file=service_account_file\n    ).get_job_service_client()\n\nexample_get_client_flow()\n

    Gets a GCP Cloud Storage client from a dictionary.

    from prefect import flow\nfrom prefect_gcp.credentials import GcpCredentials\n\n@flow()\ndef example_get_client_flow():\n    service_account_info = {\n        \"type\": \"service_account\",\n        \"project_id\": \"project_id\",\n        \"private_key_id\": \"private_key_id\",\n        \"private_key\": \"private_key\",\n        \"client_email\": \"client_email\",\n        \"client_id\": \"client_id\",\n        \"auth_uri\": \"auth_uri\",\n        \"token_uri\": \"token_uri\",\n        \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n        \"client_x509_cert_url\": \"client_x509_cert_url\"\n    }\n    client = GcpCredentials(\n        service_account_info=service_account_info\n    ).get_job_service_client()\n\nexample_get_client_flow()\n

    Source code in prefect_gcp/credentials.py
    @_raise_help_msg(\"aiplatform\")\ndef get_job_service_client(\n    self, client_options: Union[Dict[str, Any], ClientOptions] = None\n) -> \"JobServiceClient\":\n    \"\"\"\n    Gets an authenticated Job Service client for Vertex AI.\n\n    Returns:\n        An authenticated Job Service client.\n\n    Examples:\n        Gets a GCP Job Service client from a path.\n        ```python\n        from prefect import flow\n        from prefect_gcp.credentials import GcpCredentials\n\n        @flow()\n        def example_get_client_flow():\n            service_account_file = \"~/.secrets/prefect-service-account.json\"\n            client = GcpCredentials(\n                service_account_file=service_account_file\n            ).get_job_service_client()\n\n        example_get_client_flow()\n        ```\n\n        Gets a GCP Cloud Storage client from a dictionary.\n        ```python\n        from prefect import flow\n        from prefect_gcp.credentials import GcpCredentials\n\n        @flow()\n        def example_get_client_flow():\n            service_account_info = {\n                \"type\": \"service_account\",\n                \"project_id\": \"project_id\",\n                \"private_key_id\": \"private_key_id\",\n                \"private_key\": \"private_key\",\n                \"client_email\": \"client_email\",\n                \"client_id\": \"client_id\",\n                \"auth_uri\": \"auth_uri\",\n                \"token_uri\": \"token_uri\",\n                \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n                \"client_x509_cert_url\": \"client_x509_cert_url\"\n            }\n            client = GcpCredentials(\n                service_account_info=service_account_info\n            ).get_job_service_client()\n\n        example_get_client_flow()\n        ```\n    \"\"\"\n    if isinstance(client_options, dict):\n        client_options = from_dict(client_options)\n\n    credentials = self.get_credentials_from_service_account()\n    return JobServiceClient(credentials=credentials, client_options=client_options)\n
    "},{"location":"integrations/prefect-gcp/credentials/#prefect_gcp.credentials.GcpCredentials.get_secret_manager_client","title":"get_secret_manager_client","text":"

    Gets an authenticated Secret Manager Service client.

    Returns:

    Type Description SecretManagerServiceClient

    An authenticated Secret Manager Service client.

    Examples:

    Gets a GCP Secret Manager client from a path.

    from prefect import flow\nfrom prefect_gcp.credentials import GcpCredentials\n\n@flow()\ndef example_get_client_flow():\n    service_account_file = \"~/.secrets/prefect-service-account.json\"\n    client = GcpCredentials(\n        service_account_file=service_account_file\n    ).get_secret_manager_client()\nexample_get_client_flow()\n

    Gets a GCP Cloud Storage client from a dictionary.

    from prefect import flow\nfrom prefect_gcp.credentials import GcpCredentials\n\n@flow()\ndef example_get_client_flow():\n    service_account_info = {\n        \"type\": \"service_account\",\n        \"project_id\": \"project_id\",\n        \"private_key_id\": \"private_key_id\",\n        \"private_key\": \"private_key\",\n        \"client_email\": \"client_email\",\n        \"client_id\": \"client_id\",\n        \"auth_uri\": \"auth_uri\",\n        \"token_uri\": \"token_uri\",\n        \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n        \"client_x509_cert_url\": \"client_x509_cert_url\"\n    }\n    client = GcpCredentials(\n        service_account_info=service_account_info\n    ).get_secret_manager_client()\nexample_get_client_flow()\n

    Source code in prefect_gcp/credentials.py
    @_raise_help_msg(\"secret_manager\")\ndef get_secret_manager_client(self) -> \"SecretManagerServiceClient\":\n    \"\"\"\n    Gets an authenticated Secret Manager Service client.\n\n    Returns:\n        An authenticated Secret Manager Service client.\n\n    Examples:\n        Gets a GCP Secret Manager client from a path.\n        ```python\n        from prefect import flow\n        from prefect_gcp.credentials import GcpCredentials\n\n        @flow()\n        def example_get_client_flow():\n            service_account_file = \"~/.secrets/prefect-service-account.json\"\n            client = GcpCredentials(\n                service_account_file=service_account_file\n            ).get_secret_manager_client()\n        example_get_client_flow()\n        ```\n\n        Gets a GCP Cloud Storage client from a dictionary.\n        ```python\n        from prefect import flow\n        from prefect_gcp.credentials import GcpCredentials\n\n        @flow()\n        def example_get_client_flow():\n            service_account_info = {\n                \"type\": \"service_account\",\n                \"project_id\": \"project_id\",\n                \"private_key_id\": \"private_key_id\",\n                \"private_key\": \"private_key\",\n                \"client_email\": \"client_email\",\n                \"client_id\": \"client_id\",\n                \"auth_uri\": \"auth_uri\",\n                \"token_uri\": \"token_uri\",\n                \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n                \"client_x509_cert_url\": \"client_x509_cert_url\"\n            }\n            client = GcpCredentials(\n                service_account_info=service_account_info\n            ).get_secret_manager_client()\n        example_get_client_flow()\n        ```\n    \"\"\"\n    credentials = self.get_credentials_from_service_account()\n\n    # doesn't accept project; must pass in project in tasks\n    secret_manager_client = SecretManagerServiceClient(credentials=credentials)\n    return secret_manager_client\n
    "},{"location":"integrations/prefect-gcp/gcp-worker-guide/","title":"Google Cloud Run Worker Guide","text":""},{"location":"integrations/prefect-gcp/gcp-worker-guide/#why-use-google-cloud-run-for-flow-run-execution","title":"Why use Google Cloud Run for flow run execution?","text":"

    Google Cloud Run is a fully managed compute platform that automatically scales your containerized applications.

    1. Serverless architecture: Cloud Run follows a serverless architecture, which means you don't need to manage any underlying infrastructure. Google Cloud Run automatically handles the scaling and availability of your flow run infrastructure, allowing you to focus on developing and deploying your code.

    2. Scalability: Cloud Run can automatically scale your pipeline to handle varying workloads and traffic. It can quickly respond to increased demand and scale back down during low activity periods, ensuring efficient resource utilization.

    3. Integration with Google Cloud services: Google Cloud Run easily integrates with other Google Cloud services, such as Google Cloud Storage, Google Cloud Pub/Sub, and Google Cloud Build. This interoperability enables you to build end-to-end data pipelines that use a variety of services.

    4. Portability: Since Cloud Run uses container images, you can develop your pipelines locally using Docker and then deploy them on Google Cloud Run without significant modifications. This portability allows you to run the same pipeline in different environments.

    "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#google-cloud-run-guide","title":"Google Cloud Run guide","text":"

    After completing this guide, you will have:

    1. Created a Google Cloud Service Account
    2. Created a Prefect Work Pool
    3. Deployed a Prefect Worker as a Cloud Run Service
    4. Deployed a Flow
    5. Executed the Flow as a Google Cloud Run Job

    If you're looking for a general introduction to workers, work pools, and deployments, check out the workers and work pools tutorial.

    "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#prerequisites","title":"Prerequisites","text":"

    Before starting this guide, make sure you have:

    • A Google Cloud Platform (GCP) account.
    • A project on your GCP account where you have the necessary permissions to create Cloud Run Services and Service Accounts.
    • The gcloud CLI installed on your local machine. You can follow Google Cloud's installation guide. If you're using macOS (or a Linux system), you can also use Homebrew for installation.
    • Docker installed on your local machine.
    • A Prefect server instance. You can sign up for a forever free Prefect Cloud Account or, alternatively, self-host a Prefect server.
    "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#step-1-create-a-google-cloud-service-account","title":"Step 1. Create a Google Cloud service account","text":"

    First, open a terminal or command prompt on your local machine where gcloud is installed. If you haven't already authenticated with gcloud, run the following command and follow the instructions to log in to your GCP account.

    gcloud auth login\n

    Next, set the project where you'd like to create the service account. Use the following command and replace <PROJECT-ID> with your GCP project's ID.

    gcloud config set project <PROJECT-ID>\n

    For example, if your project's ID is prefect-project the command will look like this:

    gcloud config set project prefect-project\n

    Now you're ready to make the service account. To do so, you'll need to run this command:

    gcloud iam service-accounts create <SERVICE-ACCOUNT-NAME> --display-name=\"<DISPLAY-NAME>\"\n

    Here's an example of the command above that you can use, with the service account name and display name already provided. An additional option describing the service account has also been added:

    gcloud iam service-accounts create prefect-service-account \\\n    --description=\"service account to use for the prefect worker\" \\\n    --display-name=\"prefect-service-account\"\n

    The last step of this process is to make sure the service account has the proper permissions to execute flow runs as Cloud Run jobs. Run the following commands to grant the necessary permissions:

    gcloud projects add-iam-policy-binding <PROJECT-ID> \\\n    --member=\"serviceAccount:<SERVICE-ACCOUNT-NAME>@<PROJECT-ID>.iam.gserviceaccount.com\" \\\n    --role=\"roles/iam.serviceAccountUser\"\n
    gcloud projects add-iam-policy-binding <PROJECT-ID> \\\n    --member=\"serviceAccount:<SERVICE-ACCOUNT-NAME>@<PROJECT-ID>.iam.gserviceaccount.com\" \\\n    --role=\"roles/run.admin\"\n

    "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#step-2-create-a-cloud-run-work-pool","title":"Step 2. Create a Cloud Run work pool","text":"

    Let's walk through the process of creating a Cloud Run work pool.

    "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#fill-out-the-work-pool-base-job-template","title":"Fill out the work pool base job template","text":"

    You can create a new work pool using the Prefect UI or CLI. The following command creates a work pool of type cloud-run via the CLI (you'll want to replace the <WORK-POOL-NAME> with the name of your work pool):

    prefect work-pool create --type cloud-run <WORK-POOL-NAME>\n

    Once the work pool is created, find the work pool in the UI and edit it.

    There are many ways to customize the base job template for the work pool. Modifying the template influences the infrastructure configuration that the worker provisions for flow runs submitted to the work pool. For this guide we are going to modify just a few of the available fields.

    Specify the region for the Cloud Run job.

    Save the name of the service account created in the first step of this guide.

    Your work pool is now ready to receive scheduled flow runs!

    "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#step-3-deploy-a-cloud-run-worker","title":"Step 3. Deploy a Cloud Run worker","text":"

    Now you can launch a Cloud Run service to host the Cloud Run worker. This worker will poll the work pool that you created in the previous step.

    Navigate back to your terminal and run the following commands to set your Prefect API key and URL as environment variables. Be sure to replace <ACCOUNT-ID> and <WORKSPACE-ID> with your Prefect account and workspace IDs (both will be available in the URL of the UI when previewing the workspace dashboard). You'll want to replace <YOUR-API-KEY> with an active API key as well.

    export PREFECT_API_URL='https://api.prefect.cloud/api/accounts/<ACCOUNT-ID>/workspaces/<WORKSPACE-ID>'\nexport PREFECT_API_KEY='<YOUR-API-KEY>'\n

    Once those variables are set, run the following shell command to deploy your worker as a service. Don't forget to replace <YOUR-SERVICE-ACCOUNT-NAME> with the name of the service account you created in the first step of this guide, and replace <WORK-POOL-NAME> with the name of the work pool you created in the second step.

    gcloud run deploy prefect-worker --image=prefecthq/prefect:2-latest \\\n--set-env-vars PREFECT_API_URL=$PREFECT_API_URL,PREFECT_API_KEY=$PREFECT_API_KEY \\\n--service-account <YOUR-SERVICE-ACCOUNT-NAME> \\\n--no-cpu-throttling \\\n--min-instances 1 \\\n--args \"prefect\",\"worker\",\"start\",\"--install-policy\",\"always\",\"--with-healthcheck\",\"-p\",\"<WORK-POOL-NAME>\",\"-t\",\"cloud-run\"\n

    After running this command, you'll be prompted to specify a region. Choose the same region that you selected when creating the Cloud Run work pool in the second step of this guide. The next prompt will ask if you'd like to allow unauthenticated invocations to your worker. For this guide, you can select \"No\".

    After a few seconds, you'll be able to see your new prefect-worker service by navigating to the Cloud Run page of your Google Cloud console. Additionally, you should be able to see a record of this worker in the Prefect UI on the work pool's page by navigating to the Worker tab. Let's not leave our worker hanging; it's time to give it a job.

    "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#step-4-deploy-a-flow","title":"Step 4. Deploy a flow","text":"

    Let's prepare a flow to run as a Cloud Run job. In this section of the guide, we'll \"bake\" our code into a Docker image, and push that image to Google Artifact Registry.

    "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#create-a-registry","title":"Create a registry","text":"

    Let's create a Docker repository in your Google Artifact Registry to host your custom image. If you already have a registry and are authenticated to it, skip ahead to the Write a flow section.

    The following command creates a repository using the gcloud CLI. You'll want to replace <REPOSITORY-NAME> with your own value:

    gcloud artifacts repositories create <REPOSITORY-NAME> \\\n--repository-format=docker --location=us\n

    Now you can authenticate to Artifact Registry:

    gcloud auth configure-docker us-docker.pkg.dev\n

    "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#write-a-flow","title":"Write a flow","text":"

    First, create a new directory. This will serve as the root of your project's repository. Within the directory, create a sub-directory called flows. Navigate to the flows subdirectory and create a new file for your flow. Feel free to write your own flow, but here's a ready-made one for your convenience:

    import httpx\nfrom prefect import flow, task\nfrom prefect.artifacts import create_markdown_artifact\n\n@task\ndef mark_it_down(temp):\n    markdown_report = f\"\"\"# Weather Report\n## Recent weather\n\n| Time        | Temperature |\n|:--------------|-------:|\n| Now | {temp} |\n| In 1 hour       | {temp + 2} |\n\"\"\"\n    create_markdown_artifact(\n        key=\"weather-report\",\n        markdown=markdown_report,\n        description=\"Very scientific weather report\",\n    )\n\n\n@flow\ndef fetch_weather(lat: float, lon: float):\n    base_url = \"https://api.open-meteo.com/v1/forecast/\"\n    weather = httpx.get(\n        base_url,\n        params=dict(latitude=lat, longitude=lon, hourly=\"temperature_2m\"),\n    )\n    most_recent_temp = float(weather.json()[\"hourly\"][\"temperature_2m\"][0])\n    mark_it_down(most_recent_temp)\n\n\nif __name__ == \"__main__\":\n    fetch_weather(38.9, -77.0)\n

    In the remainder of this guide, this script will be referred to as weather_flow.py, but you can name yours whatever you'd like.

    "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#creating-a-prefectyaml-file","title":"Creating a prefect.yaml file","text":"

    Now we're ready to make a prefect.yaml file, which will be responsible for managing the deployments of this repository. Navigate back to the root of your directory, and run the following command to create a prefect.yaml file using Prefect's docker deployment recipe.

    prefect init --recipe docker\n

    You'll receive a prompt to enter values for the image name and tag. Since we will be pushing the image to Google Artifact Registry, the name of your image should be prefixed with the path to the Docker repository you created within the registry. For example: us-docker.pkg.dev/<PROJECT-ID>/<REPOSITORY-NAME>/. You'll want to replace <PROJECT-ID> with the ID of your project in GCP. This should match the ID of the project you used in the first step of this guide. Here is an example of what this could look like:

    image_name: us-docker.pkg.dev/prefect-project/my-artifact-registry/gcp-weather-image\ntag: latest\n

    At this point, there will be a new prefect.yaml file available at the root of your project. The contents will look similar to the example below; however, I've added a combination of YAML templating options and Prefect deployment actions to build out a simple CI/CD process. Feel free to copy the contents and paste them into your prefect.yaml:

    # Welcome to your prefect.yaml file! You can use this file for storing and managing\n# configuration for deploying your flows. We recommend committing this file to source\n# control along with your flow code.\n\n# Generic metadata about this project\nname: <WORKING-DIRECTORY>\nprefect-version: 2.13.4\n\n# build section allows you to manage and build docker image\nbuild:\n- prefect_docker.deployments.steps.build_docker_image:\n    id: build_image\n    requires: prefect-docker>=0.3.1\n    image_name: <PATH-TO-ARTIFACT-REGISTRY>/gcp-weather-image\n    tag: latest\n    dockerfile: auto\n    platform: linux/amd64\n\n# push section allows you to manage if and how this project is uploaded to remote locations\npush:\n- prefect_docker.deployments.steps.push_docker_image:\n    requires: prefect-docker>=0.3.1\n    image_name: '{{ build_image.image_name }}'\n    tag: '{{ build_image.tag }}'\n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect.deployments.steps.set_working_directory:\n    directory: /opt/prefect/<WORKING-DIRECTORY>\n\n# the deployments section allows you to provide configuration for deploying flows\ndeployments:\n- name: gcp-weather-deploy\n  version: null\n  tags: []\n  description: null\n  schedule: {}\n  flow_name: null\n  entrypoint: flows/weather_flow.py:fetch_weather\n  parameters:\n    lat: 14.5994\n    lon: 28.6731\n  work_pool:\n    name: my-cloud-run-pool\n    work_queue_name: default\n    job_variables:\n      image: '{{ build_image.image }}'\n

    Tip

    After copying the example above, don't forget to replace <WORKING-DIRECTORY> with the name of the directory where your flow folder and prefect.yaml live. You'll also need to replace <PATH-TO-ARTIFACT-REGISTRY> with the path to the Docker repository in your Google Artifact Registry.

    To get a better understanding of the different components of the prefect.yaml file above and what they do, feel free to read this next section. Otherwise, you can skip ahead to Flow Deployment.

    In the build section of the prefect.yaml file, the following step is executed at deployment build time:

    1. prefect_docker.deployments.steps.build_docker_image: automatically builds a Docker image using the name and tag chosen previously.

    Warning

    If you are using an ARM-based chip (such as an M1 or M2 Mac), add platform: linux/amd64 to your build_docker_image step to ensure that your Docker image uses an AMD64 architecture. For example:

    - prefect_docker.deployments.steps.build_docker_image:\n    id: build_image\n    requires: prefect-docker>=0.3.1\n    image_name: us-docker.pkg.dev/prefect-project/my-docker-repository/gcp-weather-image\n    tag: latest\n    dockerfile: auto\n    platform: linux/amd64\n

    The push section sends the Docker image to the Docker repository in your Google Artifact Registry, so that it can be easily accessed by the worker for flow run execution.

    The pull section sets the working directory for the process prior to importing your flow.

    In the deployments section of the prefect.yaml file above, you'll see that there is a deployment declaration named gcp-weather-deploy. Within the declaration, the entrypoint for the flow is specified along with some default parameters which will be passed to the flow at runtime. Last but not least, the name of the work pool that we created in step 2 of this guide is specified.

    "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#flow-deployment","title":"Flow deployment","text":"

    Once you're happy with the specifications in the prefect.yaml file, run the following command in the terminal to deploy your flow:

    prefect deploy --name gcp-weather-deploy\n

    Once the flow is deployed to Prefect Cloud or your self-hosted Prefect server, it's time to queue up a flow run!

    "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#step-5-flow-execution","title":"Step 5. Flow execution","text":"

    Find your deployment in the UI, and hit the Quick Run button. You have now successfully submitted a flow run to your Cloud Run worker! If you used the flow script provided in this guide, check the Artifacts tab for the flow run once it completes. You'll have a nice little weather report waiting for you there. Hope your day is a sunny one!
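
    If you'd rather kick off the run from Python instead of the UI, here's a minimal sketch; it assumes the default flow name that Prefect derives from the fetch_weather flow and the gcp-weather-deploy deployment name used above:

    from prefect.deployments import run_deployment\n\n# \"fetch-weather\" is the default name Prefect derives from the fetch_weather flow above,\n# and \"gcp-weather-deploy\" is the deployment name from the prefect.yaml\nrun_deployment(name=\"fetch-weather/gcp-weather-deploy\")\n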

    "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#recap-and-next-steps","title":"Recap and next steps","text":"

    Congratulations on completing this guide! Looking back on our journey, you have:

    1. Created a Google Cloud service account
    2. Created a Cloud Run work pool
    3. Deployed a Cloud Run worker
    4. Deployed a flow
    5. Executed a flow

    For next steps, you could:

    • Take a look at some of the other work pools Prefect has to offer
    • Do a deep dive on Prefect concepts
    • Try out another guide to explore new deployment patterns and recipes

    The world is your oyster \ud83e\uddaa\u2728.

    "},{"location":"integrations/prefect-gcp/secret_manager/","title":"Secret Manager","text":""},{"location":"integrations/prefect-gcp/secret_manager/#prefect_gcp.secret_manager","title":"prefect_gcp.secret_manager","text":""},{"location":"integrations/prefect-gcp/secret_manager/#prefect_gcp.secret_manager.GcpSecret","title":"GcpSecret","text":"

    Bases: SecretBlock

    Manages a secret in Google Cloud Platform's Secret Manager.

    Attributes:

    Name Type Description gcp_credentials GcpCredentials

    Credentials to use for authentication with GCP.

    secret_name str

    Name of the secret to manage.

    secret_version str

    Version number of the secret to use, or \"latest\".
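
    Example
    A minimal usage sketch (the project and secret names below are placeholders, not values from this reference): the block's sync-compatible methods write and then read secret bytes.
    from prefect_gcp import GcpCredentials\nfrom prefect_gcp.secret_manager import GcpSecret\n\n# Placeholder project and secret names\ngcp_secret = GcpSecret(\n    gcp_credentials=GcpCredentials(project=\"my-gcp-project\"),\n    secret_name=\"my-secret\",\n)\n\n# write_secret creates the secret if it does not exist yet, then adds a new version\ngcp_secret.write_secret(b\"my-secret-value\")\n\n# read_secret returns the secret payload as bytes\nprint(gcp_secret.read_secret())\n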

    Source code in prefect_gcp/secret_manager.py
    class GcpSecret(SecretBlock):\n    \"\"\"\n    Manages a secret in Google Cloud Platform's Secret Manager.\n\n    Attributes:\n        gcp_credentials: Credentials to use for authentication with GCP.\n        secret_name: Name of the secret to manage.\n        secret_version: Version number of the secret to use, or \"latest\".\n    \"\"\"\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/10424e311932e31c477ac2b9ef3d53cefbaad708-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-gcp/secret_manager/#prefect_gcp.secret_manager.GcpSecret\"  # noqa: E501\n\n    gcp_credentials: GcpCredentials\n    secret_name: str = Field(default=..., description=\"Name of the secret to manage.\")\n    secret_version: str = Field(\n        default=\"latest\", description=\"Version number of the secret to use.\"\n    )\n\n    @sync_compatible\n    async def read_secret(self) -> bytes:\n        \"\"\"\n        Reads the secret data from the secret storage service.\n\n        Returns:\n            The secret data as bytes.\n        \"\"\"\n        client = self.gcp_credentials.get_secret_manager_client()\n        project = self.gcp_credentials.project\n        name = f\"projects/{project}/secrets/{self.secret_name}/versions/{self.secret_version}\"  # noqa\n        request = AccessSecretVersionRequest(name=name)\n\n        self.logger.debug(f\"Preparing to read secret data from {name!r}.\")\n        response = await run_sync_in_worker_thread(\n            client.access_secret_version, request=request\n        )\n        secret = response.payload.data\n        self.logger.info(f\"The secret {name!r} data was successfully read.\")\n        return secret\n\n    @sync_compatible\n    async def write_secret(self, secret_data: bytes) -> str:\n        \"\"\"\n        Writes the secret data to the secret storage service; if it doesn't exist\n        it will be created.\n\n        Args:\n            secret_data: The secret to write.\n\n        Returns:\n            The path that the secret was written to.\n        \"\"\"\n        client = self.gcp_credentials.get_secret_manager_client()\n        project = self.gcp_credentials.project\n        parent = f\"projects/{project}/secrets/{self.secret_name}\"\n        payload = SecretPayload(data=secret_data)\n        add_request = AddSecretVersionRequest(parent=parent, payload=payload)\n\n        self.logger.debug(f\"Preparing to write secret data to {parent!r}.\")\n        try:\n            response = await run_sync_in_worker_thread(\n                client.add_secret_version, request=add_request\n            )\n        except NotFound:\n            self.logger.info(\n                f\"The secret {parent!r} does not exist yet, creating it now.\"\n            )\n            create_parent = f\"projects/{project}\"\n            secret_id = self.secret_name\n            secret = Secret(replication=Replication(automatic=Replication.Automatic()))\n            create_request = CreateSecretRequest(\n                parent=create_parent, secret_id=secret_id, secret=secret\n            )\n            await run_sync_in_worker_thread(\n                client.create_secret, request=create_request\n            )\n\n            self.logger.debug(f\"Preparing to write secret data to {parent!r} again.\")\n            response = await run_sync_in_worker_thread(\n                client.add_secret_version, request=add_request\n            )\n\n        self.logger.info(f\"The secret data was written successfully to {parent!r}.\")\n        
return response.name\n\n    @sync_compatible\n    async def delete_secret(self) -> str:\n        \"\"\"\n        Deletes the secret from the secret storage service.\n\n        Returns:\n            The path that the secret was deleted from.\n        \"\"\"\n        client = self.gcp_credentials.get_secret_manager_client()\n        project = self.gcp_credentials.project\n\n        name = f\"projects/{project}/secrets/{self.secret_name}\"\n        request = DeleteSecretRequest(name=name)\n\n        self.logger.debug(f\"Preparing to delete the secret {name!r}.\")\n        await run_sync_in_worker_thread(client.delete_secret, request=request)\n        self.logger.info(f\"The secret {name!r} was successfully deleted.\")\n        return name\n
    "},{"location":"integrations/prefect-gcp/secret_manager/#prefect_gcp.secret_manager.GcpSecret.delete_secret","title":"delete_secret async","text":"

    Deletes the secret from the secret storage service.

    Returns:

    Type Description str

    The path that the secret was deleted from.

    Source code in prefect_gcp/secret_manager.py
    @sync_compatible\nasync def delete_secret(self) -> str:\n    \"\"\"\n    Deletes the secret from the secret storage service.\n\n    Returns:\n        The path that the secret was deleted from.\n    \"\"\"\n    client = self.gcp_credentials.get_secret_manager_client()\n    project = self.gcp_credentials.project\n\n    name = f\"projects/{project}/secrets/{self.secret_name}\"\n    request = DeleteSecretRequest(name=name)\n\n    self.logger.debug(f\"Preparing to delete the secret {name!r}.\")\n    await run_sync_in_worker_thread(client.delete_secret, request=request)\n    self.logger.info(f\"The secret {name!r} was successfully deleted.\")\n    return name\n
    "},{"location":"integrations/prefect-gcp/secret_manager/#prefect_gcp.secret_manager.GcpSecret.read_secret","title":"read_secret async","text":"

    Reads the secret data from the secret storage service.

    Returns:

    Type Description bytes

    The secret data as bytes.

    Source code in prefect_gcp/secret_manager.py
    @sync_compatible\nasync def read_secret(self) -> bytes:\n    \"\"\"\n    Reads the secret data from the secret storage service.\n\n    Returns:\n        The secret data as bytes.\n    \"\"\"\n    client = self.gcp_credentials.get_secret_manager_client()\n    project = self.gcp_credentials.project\n    name = f\"projects/{project}/secrets/{self.secret_name}/versions/{self.secret_version}\"  # noqa\n    request = AccessSecretVersionRequest(name=name)\n\n    self.logger.debug(f\"Preparing to read secret data from {name!r}.\")\n    response = await run_sync_in_worker_thread(\n        client.access_secret_version, request=request\n    )\n    secret = response.payload.data\n    self.logger.info(f\"The secret {name!r} data was successfully read.\")\n    return secret\n
    "},{"location":"integrations/prefect-gcp/secret_manager/#prefect_gcp.secret_manager.GcpSecret.write_secret","title":"write_secret async","text":"

    Writes the secret data to the secret storage service; if it doesn't exist it will be created.

    Parameters:

    Name Type Description Default secret_data bytes

    The secret to write.

    required

    Returns:

    Type Description str

    The path that the secret was written to.

    Source code in prefect_gcp/secret_manager.py
    @sync_compatible\nasync def write_secret(self, secret_data: bytes) -> str:\n    \"\"\"\n    Writes the secret data to the secret storage service; if it doesn't exist\n    it will be created.\n\n    Args:\n        secret_data: The secret to write.\n\n    Returns:\n        The path that the secret was written to.\n    \"\"\"\n    client = self.gcp_credentials.get_secret_manager_client()\n    project = self.gcp_credentials.project\n    parent = f\"projects/{project}/secrets/{self.secret_name}\"\n    payload = SecretPayload(data=secret_data)\n    add_request = AddSecretVersionRequest(parent=parent, payload=payload)\n\n    self.logger.debug(f\"Preparing to write secret data to {parent!r}.\")\n    try:\n        response = await run_sync_in_worker_thread(\n            client.add_secret_version, request=add_request\n        )\n    except NotFound:\n        self.logger.info(\n            f\"The secret {parent!r} does not exist yet, creating it now.\"\n        )\n        create_parent = f\"projects/{project}\"\n        secret_id = self.secret_name\n        secret = Secret(replication=Replication(automatic=Replication.Automatic()))\n        create_request = CreateSecretRequest(\n            parent=create_parent, secret_id=secret_id, secret=secret\n        )\n        await run_sync_in_worker_thread(\n            client.create_secret, request=create_request\n        )\n\n        self.logger.debug(f\"Preparing to write secret data to {parent!r} again.\")\n        response = await run_sync_in_worker_thread(\n            client.add_secret_version, request=add_request\n        )\n\n    self.logger.info(f\"The secret data was written successfully to {parent!r}.\")\n    return response.name\n
    "},{"location":"integrations/prefect-gcp/secret_manager/#prefect_gcp.secret_manager.create_secret","title":"create_secret async","text":"

    Creates a secret in Google Cloud Platform's Secret Manager.

    Parameters:

    Name Type Description Default secret_name str

    Name of the secret to retrieve.

    required gcp_credentials GcpCredentials

    Credentials to use for authentication with GCP.

    required timeout float

    The number of seconds the transport should wait for the server response.

    60 project Optional[str]

    Name of the project to use; overrides the gcp_credentials project if provided.

    None

    Returns:

    Type Description str

    The path of the created secret.

    Example
    from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.secret_manager import create_secret\n\n@flow()\ndef example_cloud_storage_create_secret_flow():\n    gcp_credentials = GcpCredentials(project=\"project\")\n    secret_path = create_secret(\"secret_name\", gcp_credentials)\n    return secret_path\n\nexample_cloud_storage_create_secret_flow()\n
    Source code in prefect_gcp/secret_manager.py
    @task\nasync def create_secret(\n    secret_name: str,\n    gcp_credentials: \"GcpCredentials\",\n    timeout: float = 60,\n    project: Optional[str] = None,\n) -> str:\n    \"\"\"\n    Creates a secret in Google Cloud Platform's Secret Manager.\n\n    Args:\n        secret_name: Name of the secret to retrieve.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        timeout: The number of seconds the transport should wait\n            for the server response.\n        project: Name of the project to use; overrides the\n            gcp_credentials project if provided.\n\n    Returns:\n        The path of the created secret.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.secret_manager import create_secret\n\n        @flow()\n        def example_cloud_storage_create_secret_flow():\n            gcp_credentials = GcpCredentials(project=\"project\")\n            secret_path = create_secret(\"secret_name\", gcp_credentials)\n            return secret_path\n\n        example_cloud_storage_create_secret_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Creating the %s secret\", secret_name)\n\n    client = gcp_credentials.get_secret_manager_client()\n    project = project or gcp_credentials.project\n\n    parent = f\"projects/{project}\"\n    secret_settings = {\"replication\": {\"automatic\": {}}}\n\n    partial_create = partial(\n        client.create_secret,\n        parent=parent,\n        secret_id=secret_name,\n        secret=secret_settings,\n        timeout=timeout,\n    )\n    response = await to_thread.run_sync(partial_create)\n    return response.name\n
    "},{"location":"integrations/prefect-gcp/secret_manager/#prefect_gcp.secret_manager.delete_secret","title":"delete_secret async","text":"

    Deletes the specified secret from Google Cloud Platform's Secret Manager.

    Parameters:

    Name Type Description Default secret_name str

    Name of the secret to delete.

    required gcp_credentials GcpCredentials

    Credentials to use for authentication with GCP.

    required timeout float

    The number of seconds the transport should wait for the server response.

    60 project Optional[str]

    Name of the project to use; overrides the gcp_credentials project if provided.

    None

    Returns:

    Type Description str

    The path of the deleted secret.

    Example
    from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.secret_manager import delete_secret\n\n@flow()\ndef example_cloud_storage_delete_secret_flow():\n    gcp_credentials = GcpCredentials(project=\"project\")\n    secret_path = delete_secret(\"secret_name\", gcp_credentials)\n    return secret_path\n\nexample_cloud_storage_delete_secret_flow()\n
    Source code in prefect_gcp/secret_manager.py
    @task\nasync def delete_secret(\n    secret_name: str,\n    gcp_credentials: \"GcpCredentials\",\n    timeout: float = 60,\n    project: Optional[str] = None,\n) -> str:\n    \"\"\"\n    Deletes the specified secret from Google Cloud Platform's Secret Manager.\n\n    Args:\n        secret_name: Name of the secret to delete.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        timeout: The number of seconds the transport should wait\n            for the server response.\n        project: Name of the project to use; overrides the\n            gcp_credentials project if provided.\n\n    Returns:\n        The path of the deleted secret.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.secret_manager import delete_secret\n\n        @flow()\n        def example_cloud_storage_delete_secret_flow():\n            gcp_credentials = GcpCredentials(project=\"project\")\n            secret_path = delete_secret(\"secret_name\", gcp_credentials)\n            return secret_path\n\n        example_cloud_storage_delete_secret_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Deleting %s secret\", secret_name)\n\n    client = gcp_credentials.get_secret_manager_client()\n    project = project or gcp_credentials.project\n\n    name = f\"projects/{project}/secrets/{secret_name}/\"\n    partial_delete = partial(client.delete_secret, name=name, timeout=timeout)\n    await to_thread.run_sync(partial_delete)\n    return name\n
    "},{"location":"integrations/prefect-gcp/secret_manager/#prefect_gcp.secret_manager.delete_secret_version","title":"delete_secret_version async","text":"

    Deletes a version of a given secret from Google Cloud Platform's Secret Manager.

    Parameters:

    Name Type Description Default secret_name str

    Name of the secret to retrieve.

    required version_id int

    Version number of the secret to use; \"latest\" can NOT be used.

    required gcp_credentials GcpCredentials

    Credentials to use for authentication with GCP.

    required timeout float

    The number of seconds the transport should wait for the server response.

    60 project Optional[str]

    Name of the project to use; overrides the gcp_credentials project if provided.

    None

    Returns:

    Type Description str

    The path of the deleted secret version.

    Example
    from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.secret_manager import delete_secret_version\n\n@flow()\ndef example_cloud_storage_delete_secret_version_flow():\n    gcp_credentials = GcpCredentials(project=\"project\")\n    secret_value = delete_secret_version(\"secret_name\", 1, gcp_credentials)\n    return secret_value\n\nexample_cloud_storage_delete_secret_version_flow()\n
    Source code in prefect_gcp/secret_manager.py
    @task\nasync def delete_secret_version(\n    secret_name: str,\n    version_id: int,\n    gcp_credentials: \"GcpCredentials\",\n    timeout: float = 60,\n    project: Optional[str] = None,\n) -> str:\n    \"\"\"\n    Deletes a version of a given secret from Google Cloud Platform's Secret Manager.\n\n    Args:\n        secret_name: Name of the secret to retrieve.\n        version_id: Version number of the secret to use; \"latest\" can NOT be used.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        timeout: The number of seconds the transport should wait\n            for the server response.\n        project: Name of the project to use; overrides the\n            gcp_credentials project if provided.\n\n    Returns:\n        The path of the deleted secret version.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.secret_manager import delete_secret_version\n\n        @flow()\n        def example_cloud_storage_delete_secret_version_flow():\n            gcp_credentials = GcpCredentials(project=\"project\")\n            secret_value = delete_secret_version(\"secret_name\", 1, gcp_credentials)\n            return secret_value\n\n        example_cloud_storage_delete_secret_version_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Reading %s version of %s secret\", version_id, secret_name)\n\n    client = gcp_credentials.get_secret_manager_client()\n    project = project or gcp_credentials.project\n\n    if version_id == \"latest\":\n        raise ValueError(\"The version_id cannot be 'latest'\")\n\n    name = f\"projects/{project}/secrets/{secret_name}/versions/{version_id}\"\n    partial_destroy = partial(client.destroy_secret_version, name=name, timeout=timeout)\n    await to_thread.run_sync(partial_destroy)\n    return name\n
    "},{"location":"integrations/prefect-gcp/secret_manager/#prefect_gcp.secret_manager.read_secret","title":"read_secret async","text":"

    Reads the value of a given secret from Google Cloud Platform's Secret Manager.

    Parameters:

    Name Type Description Default secret_name str

    Name of the secret to retrieve.

    required gcp_credentials GcpCredentials

    Credentials to use for authentication with GCP.

    required timeout float

    The number of seconds the transport should wait for the server response.

    60 project Optional[str]

    Name of the project to use; overrides the gcp_credentials project if provided.

    None

    Returns:

    Type Description str

    Contents of the specified secret.

    Example
    from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.secret_manager import read_secret\n\n@flow()\ndef example_cloud_storage_read_secret_flow():\n    gcp_credentials = GcpCredentials(project=\"project\")\n    secret_value = read_secret(\"secret_name\", gcp_credentials, version_id=1)\n    return secret_value\n\nexample_cloud_storage_read_secret_flow()\n
    Source code in prefect_gcp/secret_manager.py
    @task\nasync def read_secret(\n    secret_name: str,\n    gcp_credentials: \"GcpCredentials\",\n    version_id: Union[str, int] = \"latest\",\n    timeout: float = 60,\n    project: Optional[str] = None,\n) -> str:\n    \"\"\"\n    Reads the value of a given secret from Google Cloud Platform's Secret Manager.\n\n    Args:\n        secret_name: Name of the secret to retrieve.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        timeout: The number of seconds the transport should wait\n            for the server response.\n        project: Name of the project to use; overrides the\n            gcp_credentials project if provided.\n\n    Returns:\n        Contents of the specified secret.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.secret_manager import read_secret\n\n        @flow()\n        def example_cloud_storage_read_secret_flow():\n            gcp_credentials = GcpCredentials(project=\"project\")\n            secret_value = read_secret(\"secret_name\", gcp_credentials, version_id=1)\n            return secret_value\n\n        example_cloud_storage_read_secret_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Reading %s version of %s secret\", version_id, secret_name)\n\n    client = gcp_credentials.get_secret_manager_client()\n    project = project or gcp_credentials.project\n\n    name = f\"projects/{project}/secrets/{secret_name}/versions/{version_id}\"\n    partial_access = partial(client.access_secret_version, name=name, timeout=timeout)\n    response = await to_thread.run_sync(partial_access)\n    secret = response.payload.data.decode(\"UTF-8\")\n    return secret\n
    "},{"location":"integrations/prefect-gcp/secret_manager/#prefect_gcp.secret_manager.update_secret","title":"update_secret async","text":"

    Updates a secret in Google Cloud Platform's Secret Manager.

    Parameters:

    Name Type Description Default secret_name str

    Name of the secret to retrieve.

    required secret_value Union[str, bytes]

    Desired value of the secret. Can be either str or bytes.

    required gcp_credentials GcpCredentials

    Credentials to use for authentication with GCP.

    required timeout float

    The number of seconds the transport should wait for the server response.

    60 project Optional[str]

    Name of the project to use; overrides the gcp_credentials project if provided.

    None

    Returns:

    Type Description str

    The path of the updated secret.

    Example
    from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.secret_manager import update_secret\n\n@flow()\ndef example_cloud_storage_update_secret_flow():\n    gcp_credentials = GcpCredentials(project=\"project\")\n    secret_path = update_secret(\"secret_name\", \"secret_value\", gcp_credentials)\n    return secret_path\n\nexample_cloud_storage_update_secret_flow()\n
    Source code in prefect_gcp/secret_manager.py
    @task\nasync def update_secret(\n    secret_name: str,\n    secret_value: Union[str, bytes],\n    gcp_credentials: \"GcpCredentials\",\n    timeout: float = 60,\n    project: Optional[str] = None,\n) -> str:\n    \"\"\"\n    Updates a secret in Google Cloud Platform's Secret Manager.\n\n    Args:\n        secret_name: Name of the secret to retrieve.\n        secret_value: Desired value of the secret. Can be either `str` or `bytes`.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        timeout: The number of seconds the transport should wait\n            for the server response.\n        project: Name of the project to use; overrides the\n            gcp_credentials project if provided.\n\n    Returns:\n        The path of the updated secret.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.secret_manager import update_secret\n\n        @flow()\n        def example_cloud_storage_update_secret_flow():\n            gcp_credentials = GcpCredentials(project=\"project\")\n            secret_path = update_secret(\"secret_name\", \"secret_value\", gcp_credentials)\n            return secret_path\n\n        example_cloud_storage_update_secret_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Updating the %s secret\", secret_name)\n\n    client = gcp_credentials.get_secret_manager_client()\n    project = project or gcp_credentials.project\n\n    parent = f\"projects/{project}/secrets/{secret_name}\"\n    if isinstance(secret_value, str):\n        secret_value = secret_value.encode(\"UTF-8\")\n    partial_add = partial(\n        client.add_secret_version,\n        parent=parent,\n        payload={\"data\": secret_value},\n        timeout=timeout,\n    )\n    response = await to_thread.run_sync(partial_add)\n    return response.name\n
    "},{"location":"integrations/prefect-gcp/vertex_worker/","title":"Vertex AI","text":""},{"location":"integrations/prefect-gcp/vertex_worker/#prefect_gcp.workers.vertex","title":"prefect_gcp.workers.vertex","text":"

    Module containing the custom worker used for executing flow runs as Vertex AI Custom Jobs.

    Get started by creating a Vertex AI work pool:

    prefect work-pool create 'my-vertex-pool' --type vertex-ai\n

    Then start a Vertex AI worker with the following command:

    prefect worker start --pool 'my-vertex-pool'\n
    "},{"location":"integrations/prefect-gcp/vertex_worker/#prefect_gcp.workers.vertex--configuration","title":"Configuration","text":"

    Read more about configuring work pools here.
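
    For illustration, a deployment can also target this work pool from Python. The sketch below is a rough example, assuming Prefect 2.x's Deployment.build_from_flow helper; the flow, deployment name, and image URI are placeholders:

    from prefect import flow\nfrom prefect.deployments import Deployment\n\n@flow\ndef my_vertex_flow():\n    print(\"Hello from Vertex AI\")\n\n# 'my-vertex-pool' is the work pool created above; the image URI is a placeholder\ndeployment = Deployment.build_from_flow(\n    flow=my_vertex_flow,\n    name=\"my-vertex-deployment\",\n    work_pool_name=\"my-vertex-pool\",\n    infra_overrides={\"image\": \"us-docker.pkg.dev/my-project/my-repo/my-image:latest\"},\n)\ndeployment.apply()\n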

    "},{"location":"integrations/prefect-gcp/vertex_worker/#prefect_gcp.workers.vertex.VertexAIWorker","title":"VertexAIWorker","text":"

    Bases: BaseWorker

    Prefect worker that executes flow runs within Vertex AI Jobs.

    Source code in prefect_gcp/workers/vertex.py
    class VertexAIWorker(BaseWorker):\n    \"\"\"Prefect worker that executes flow runs within Vertex AI Jobs.\"\"\"\n\n    type = \"vertex-ai\"\n    job_configuration = VertexAIWorkerJobConfiguration\n    job_configuration_variables = VertexAIWorkerVariables\n    _description = (\n        \"Execute flow runs within containers on Google Vertex AI. Requires \"\n        \"a Google Cloud Platform account.\"\n    )\n    _display_name = \"Google Vertex AI\"\n    _documentation_url = \"https://prefecthq.github.io/prefect-gcp/vertex_worker/\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/10424e311932e31c477ac2b9ef3d53cefbaad708-250x250.png\"  # noqa\n\n    async def run(\n        self,\n        flow_run: \"FlowRun\",\n        configuration: VertexAIWorkerJobConfiguration,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> VertexAIWorkerResult:\n        \"\"\"\n        Executes a flow run within a Vertex AI Job and waits for the flow run\n        to complete.\n\n        Args:\n            flow_run: The flow run to execute\n            configuration: The configuration to use when executing the flow run.\n            task_status: The task status object for the current flow run. If provided,\n                the task will be marked as started.\n\n        Returns:\n            VertexAIWorkerResult: A result object containing information about the\n                final state of the flow run\n        \"\"\"\n        logger = self.get_flow_run_logger(flow_run)\n\n        client_options = ClientOptions(\n            api_endpoint=f\"{configuration.region}-aiplatform.googleapis.com\"\n        )\n\n        job_name = configuration.job_name\n\n        job_spec = self._build_job_spec(configuration)\n        job_service_async_client = (\n            configuration.credentials.get_job_service_async_client(\n                client_options=client_options\n            )\n        )\n\n        job_run = await self._create_and_begin_job(\n            job_name,\n            job_spec,\n            job_service_async_client,\n            configuration,\n            logger,\n        )\n\n        if task_status:\n            task_status.started(job_run.name)\n\n        final_job_run = await self._watch_job_run(\n            job_name=job_name,\n            full_job_name=job_run.name,\n            job_service_async_client=job_service_async_client,\n            current_state=job_run.state,\n            until_states=(\n                JobState.JOB_STATE_SUCCEEDED,\n                JobState.JOB_STATE_FAILED,\n                JobState.JOB_STATE_CANCELLED,\n                JobState.JOB_STATE_EXPIRED,\n            ),\n            configuration=configuration,\n            logger=logger,\n            timeout=int(\n                datetime.timedelta(\n                    hours=configuration.job_spec[\"maximum_run_time_hours\"]\n                ).total_seconds()\n            ),\n        )\n\n        error_msg = final_job_run.error.message\n\n        # Vertex will include an error message upon valid\n        # flow cancellations, so we'll avoid raising an error in that case\n        if error_msg and \"CANCELED\" not in error_msg:\n            raise RuntimeError(error_msg)\n\n        status_code = 0 if final_job_run.state == JobState.JOB_STATE_SUCCEEDED else 1\n\n        return VertexAIWorkerResult(\n            identifier=final_job_run.display_name, status_code=status_code\n        )\n\n    def _build_job_spec(\n        self, configuration: VertexAIWorkerJobConfiguration\n    ) -> 
\"CustomJobSpec\":\n        \"\"\"\n        Builds a job spec by gathering details.\n        \"\"\"\n        # here, we extract the `worker_pool_specs` out of the job_spec\n        worker_pool_specs = [\n            WorkerPoolSpec(\n                container_spec=ContainerSpec(**spec[\"container_spec\"]),\n                machine_spec=MachineSpec(**spec[\"machine_spec\"]),\n                replica_count=spec[\"replica_count\"],\n                disk_spec=DiskSpec(**spec[\"disk_spec\"]),\n            )\n            for spec in configuration.job_spec.pop(\"worker_pool_specs\", [])\n        ]\n\n        timeout = Duration().FromTimedelta(\n            td=datetime.timedelta(\n                hours=configuration.job_spec[\"maximum_run_time_hours\"]\n            )\n        )\n        scheduling = Scheduling(timeout=timeout)\n\n        # construct the final job spec that we will provide to Vertex AI\n        job_spec = CustomJobSpec(\n            worker_pool_specs=worker_pool_specs,\n            scheduling=scheduling,\n            ignore_unknown_fields=True,\n            **configuration.job_spec,\n        )\n        return job_spec\n\n    async def _create_and_begin_job(\n        self,\n        job_name: str,\n        job_spec: \"CustomJobSpec\",\n        job_service_async_client: \"JobServiceAsyncClient\",\n        configuration: VertexAIWorkerJobConfiguration,\n        logger: PrefectLogAdapter,\n    ) -> \"CustomJob\":\n        \"\"\"\n        Builds a custom job and begins running it.\n        \"\"\"\n        # create custom job\n        custom_job = CustomJob(\n            display_name=job_name,\n            job_spec=job_spec,\n            labels=self._get_compatible_labels(configuration=configuration),\n        )\n\n        # run job\n        logger.info(f\"Creating job {job_name!r}\")\n\n        project = configuration.project\n        resource_name = f\"projects/{project}/locations/{configuration.region}\"\n\n        async for attempt in AsyncRetrying(\n            stop=stop_after_attempt(3), wait=wait_fixed(1) + wait_random(0, 3)\n        ):\n            with attempt:\n                custom_job_run = await job_service_async_client.create_custom_job(\n                    parent=resource_name,\n                    custom_job=custom_job,\n                )\n\n        logger.info(\n            f\"Job {job_name!r} created. \"\n            f\"The full job name is {custom_job_run.name!r}\"\n        )\n\n        return custom_job_run\n\n    async def _watch_job_run(\n        self,\n        job_name: str,\n        full_job_name: str,  # different from job_name\n        job_service_async_client: \"JobServiceAsyncClient\",\n        current_state: \"JobState\",\n        until_states: Tuple[\"JobState\"],\n        configuration: VertexAIWorkerJobConfiguration,\n        logger: PrefectLogAdapter,\n        timeout: int = None,\n    ) -> \"CustomJob\":\n        \"\"\"\n        Polls job run to see if status changed.\n\n        State changes reported by the Vertex AI API may sometimes be inaccurate\n        immediately upon startup, but should eventually report a correct running\n        and then terminal state. 
The minimum training duration for a custom job is\n        30 seconds, so short-lived jobs may be marked as successful some time\n        after a flow run has completed.\n        \"\"\"\n        state = JobState.JOB_STATE_UNSPECIFIED\n        last_state = current_state\n        t0 = time.time()\n\n        while state not in until_states:\n            job_run = await job_service_async_client.get_custom_job(\n                name=full_job_name,\n            )\n            state = job_run.state\n            if state != last_state:\n                state_label = (\n                    state.name.replace(\"_\", \" \")\n                    .lower()\n                    .replace(\"state\", \"state is now:\")\n                )\n                # results in \"New job state is now: succeeded\"\n                logger.debug(f\"{job_name} has new {state_label}\")\n                last_state = state\n            else:\n                # Intermittently, the job will not be described. We want to respect the\n                # watch timeout though.\n                logger.debug(f\"Job {job_name} not found.\")\n\n            elapsed_time = time.time() - t0\n            if timeout is not None and elapsed_time > timeout:\n                raise RuntimeError(\n                    f\"Timed out after {elapsed_time}s while watching job for states \"\n                    \"{until_states!r}\"\n                )\n            await asyncio.sleep(configuration.job_watch_poll_interval)\n\n        return job_run\n\n    def _get_compatible_labels(\n        self, configuration: VertexAIWorkerJobConfiguration\n    ) -> Dict[str, str]:\n        \"\"\"\n        Ensures labels are compatible with GCP label requirements.\n        https://cloud.google.com/resource-manager/docs/creating-managing-labels\n\n        Ex: the Prefect provided key of prefect.io/flow-name -> prefect-io_flow-name\n        \"\"\"\n        compatible_labels = {}\n        for key, val in configuration.labels.items():\n            new_key = slugify(\n                key,\n                lowercase=True,\n                replacements=[(\"/\", \"_\"), (\".\", \"-\")],\n                max_length=63,\n                regex_pattern=_DISALLOWED_GCP_LABEL_CHARACTERS,\n            )\n            compatible_labels[new_key] = slugify(\n                val,\n                lowercase=True,\n                replacements=[(\"/\", \"_\"), (\".\", \"-\")],\n                max_length=63,\n                regex_pattern=_DISALLOWED_GCP_LABEL_CHARACTERS,\n            )\n        return compatible_labels\n\n    async def kill_infrastructure(\n        self,\n        infrastructure_pid: str,\n        configuration: VertexAIWorkerJobConfiguration,\n        grace_seconds: int = 30,\n    ):\n        \"\"\"\n        Stops a job running in Vertex AI upon flow cancellation,\n        based on the provided infrastructure PID + run configuration.\n        \"\"\"\n        if grace_seconds != 30:\n            self._logger.warning(\n                f\"Kill grace period of {grace_seconds}s requested, but GCP does not \"\n                \"support dynamic grace period configuration. 
See here for more info: \"\n                \"https://cloud.google.com/vertex-ai/docs/reference/rest/v1/projects.locations.customJobs/cancel\"  # noqa\n            )\n\n        client_options = ClientOptions(\n            api_endpoint=f\"{configuration.region}-aiplatform.googleapis.com\"\n        )\n        job_service_async_client = (\n            configuration.credentials.get_job_service_async_client(\n                client_options=client_options\n            )\n        )\n        await self._stop_job(\n            client=job_service_async_client,\n            vertex_job_name=infrastructure_pid,\n        )\n\n    async def _stop_job(self, client: \"JobServiceAsyncClient\", vertex_job_name: str):\n        \"\"\"\n        Calls the `cancel_custom_job` method on the Vertex AI Job Service Client.\n        \"\"\"\n        cancel_custom_job_request = CancelCustomJobRequest(name=vertex_job_name)\n        try:\n            await client.cancel_custom_job(\n                request=cancel_custom_job_request,\n            )\n        except Exception as exc:\n            if \"does not exist\" in str(exc):\n                raise InfrastructureNotFound(\n                    f\"Cannot stop Vertex AI job; the job name {vertex_job_name!r} \"\n                    \"could not be found.\"\n                ) from exc\n            raise\n
    "},{"location":"integrations/prefect-gcp/vertex_worker/#prefect_gcp.workers.vertex.VertexAIWorker.kill_infrastructure","title":"kill_infrastructure async","text":"

    Stops a job running in Vertex AI upon flow cancellation, based on the provided infrastructure PID + run configuration.

    Source code in prefect_gcp/workers/vertex.py
    async def kill_infrastructure(\n    self,\n    infrastructure_pid: str,\n    configuration: VertexAIWorkerJobConfiguration,\n    grace_seconds: int = 30,\n):\n    \"\"\"\n    Stops a job running in Vertex AI upon flow cancellation,\n    based on the provided infrastructure PID + run configuration.\n    \"\"\"\n    if grace_seconds != 30:\n        self._logger.warning(\n            f\"Kill grace period of {grace_seconds}s requested, but GCP does not \"\n            \"support dynamic grace period configuration. See here for more info: \"\n            \"https://cloud.google.com/vertex-ai/docs/reference/rest/v1/projects.locations.customJobs/cancel\"  # noqa\n        )\n\n    client_options = ClientOptions(\n        api_endpoint=f\"{configuration.region}-aiplatform.googleapis.com\"\n    )\n    job_service_async_client = (\n        configuration.credentials.get_job_service_async_client(\n            client_options=client_options\n        )\n    )\n    await self._stop_job(\n        client=job_service_async_client,\n        vertex_job_name=infrastructure_pid,\n    )\n
    "},{"location":"integrations/prefect-gcp/vertex_worker/#prefect_gcp.workers.vertex.VertexAIWorker.run","title":"run async","text":"

    Executes a flow run within a Vertex AI Job and waits for the flow run to complete.

    Parameters:

    Name Type Description Default flow_run FlowRun

    The flow run to execute

    required configuration VertexAIWorkerJobConfiguration

    The configuration to use when executing the flow run.

    required task_status Optional[TaskStatus]

    The task status object for the current flow run. If provided, the task will be marked as started.

    None

    Returns:

    Name Type Description VertexAIWorkerResult VertexAIWorkerResult

    A result object containing information about the final state of the flow run

    Source code in prefect_gcp/workers/vertex.py
    async def run(\n    self,\n    flow_run: \"FlowRun\",\n    configuration: VertexAIWorkerJobConfiguration,\n    task_status: Optional[anyio.abc.TaskStatus] = None,\n) -> VertexAIWorkerResult:\n    \"\"\"\n    Executes a flow run within a Vertex AI Job and waits for the flow run\n    to complete.\n\n    Args:\n        flow_run: The flow run to execute\n        configuration: The configuration to use when executing the flow run.\n        task_status: The task status object for the current flow run. If provided,\n            the task will be marked as started.\n\n    Returns:\n        VertexAIWorkerResult: A result object containing information about the\n            final state of the flow run\n    \"\"\"\n    logger = self.get_flow_run_logger(flow_run)\n\n    client_options = ClientOptions(\n        api_endpoint=f\"{configuration.region}-aiplatform.googleapis.com\"\n    )\n\n    job_name = configuration.job_name\n\n    job_spec = self._build_job_spec(configuration)\n    job_service_async_client = (\n        configuration.credentials.get_job_service_async_client(\n            client_options=client_options\n        )\n    )\n\n    job_run = await self._create_and_begin_job(\n        job_name,\n        job_spec,\n        job_service_async_client,\n        configuration,\n        logger,\n    )\n\n    if task_status:\n        task_status.started(job_run.name)\n\n    final_job_run = await self._watch_job_run(\n        job_name=job_name,\n        full_job_name=job_run.name,\n        job_service_async_client=job_service_async_client,\n        current_state=job_run.state,\n        until_states=(\n            JobState.JOB_STATE_SUCCEEDED,\n            JobState.JOB_STATE_FAILED,\n            JobState.JOB_STATE_CANCELLED,\n            JobState.JOB_STATE_EXPIRED,\n        ),\n        configuration=configuration,\n        logger=logger,\n        timeout=int(\n            datetime.timedelta(\n                hours=configuration.job_spec[\"maximum_run_time_hours\"]\n            ).total_seconds()\n        ),\n    )\n\n    error_msg = final_job_run.error.message\n\n    # Vertex will include an error message upon valid\n    # flow cancellations, so we'll avoid raising an error in that case\n    if error_msg and \"CANCELED\" not in error_msg:\n        raise RuntimeError(error_msg)\n\n    status_code = 0 if final_job_run.state == JobState.JOB_STATE_SUCCEEDED else 1\n\n    return VertexAIWorkerResult(\n        identifier=final_job_run.display_name, status_code=status_code\n    )\n
    "},{"location":"integrations/prefect-gcp/vertex_worker/#prefect_gcp.workers.vertex.VertexAIWorkerJobConfiguration","title":"VertexAIWorkerJobConfiguration","text":"

    Bases: BaseJobConfiguration

    Configuration class used by the Vertex AI Worker to create a Job.

    An instance of this class is passed to the Vertex AI Worker's run method for each flow run. It contains all information necessary to execute the flow run as a Vertex AI Job.

    Attributes:

    Name Type Description region str

    The region where the Vertex AI Job resides.

    credentials Optional[GcpCredentials]

    The GCP Credentials used to connect to Vertex AI.

    job_spec Dict[str, Any]

    The Vertex AI Job spec used to create the Job.

    job_watch_poll_interval float

    The interval between GCP API calls to check Job state.

    Source code in prefect_gcp/workers/vertex.py
    class VertexAIWorkerJobConfiguration(BaseJobConfiguration):\n    \"\"\"\n    Configuration class used by the Vertex AI Worker to create a Job.\n\n    An instance of this class is passed to the Vertex AI Worker's `run` method\n    for each flow run. It contains all information necessary to execute\n    the flow run as a Vertex AI Job.\n\n    Attributes:\n        region: The region where the Vertex AI Job resides.\n        credentials: The GCP Credentials used to connect to Vertex AI.\n        job_spec: The Vertex AI Job spec used to create the Job.\n        job_watch_poll_interval: The interval between GCP API calls to check Job state.\n    \"\"\"\n\n    region: str = Field(\n        description=\"The region where the Vertex AI Job resides.\",\n        example=\"us-central1\",\n    )\n    credentials: Optional[GcpCredentials] = Field(\n        title=\"GCP Credentials\",\n        default_factory=GcpCredentials,\n        description=\"The GCP Credentials used to initiate the \"\n        \"Vertex AI Job. If not provided credentials will be \"\n        \"inferred from the local environment.\",\n    )\n\n    job_spec: Dict[str, Any] = Field(\n        template={\n            \"service_account_name\": \"{{ service_account_name }}\",\n            \"network\": \"{{ network }}\",\n            \"reserved_ip_ranges\": \"{{ reserved_ip_ranges }}\",\n            \"maximum_run_time_hours\": \"{{ maximum_run_time_hours }}\",\n            \"worker_pool_specs\": [\n                {\n                    \"replica_count\": 1,\n                    \"container_spec\": {\n                        \"image_uri\": \"{{ image }}\",\n                        \"command\": \"{{ command }}\",\n                        \"args\": [],\n                    },\n                    \"machine_spec\": {\n                        \"machine_type\": \"{{ machine_type }}\",\n                        \"accelerator_type\": \"{{ accelerator_type }}\",\n                        \"accelerator_count\": \"{{ accelerator_count }}\",\n                    },\n                    \"disk_spec\": {\n                        \"boot_disk_type\": \"{{ boot_disk_type }}\",\n                        \"boot_disk_size_gb\": \"{{ boot_disk_size_gb }}\",\n                    },\n                }\n            ],\n        }\n    )\n    job_watch_poll_interval: float = Field(\n        default=5.0,\n        title=\"Poll Interval (Seconds)\",\n        description=(\n            \"The amount of time to wait between GCP API calls while monitoring the \"\n            \"state of a Vertex AI Job.\"\n        ),\n    )\n\n    @property\n    def project(self) -> str:\n        \"\"\"property for accessing the project from the credentials.\"\"\"\n        return self.credentials.project\n\n    @property\n    def job_name(self) -> str:\n        \"\"\"\n        The name can be up to 128 characters long and can be consist of any UTF-8 characters. 
Reference:\n        https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.CustomJob#google_cloud_aiplatform_CustomJob_display_name\n        \"\"\"  # noqa\n        unique_suffix = uuid4().hex\n        job_name = f\"{self.name}-{unique_suffix}\"\n        return job_name\n\n    def prepare_for_flow_run(\n        self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"DeploymentResponse\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ):\n        super().prepare_for_flow_run(flow_run, deployment, flow)\n\n        self._inject_formatted_env_vars()\n        self._inject_formatted_command()\n        self._ensure_existence_of_service_account()\n\n    def _inject_formatted_env_vars(self):\n        \"\"\"Inject environment variables in the Vertex job_spec configuration,\n        in the correct format, which is sourced from the BaseJobConfiguration.\n        This method is invoked by `prepare_for_flow_run()`.\"\"\"\n        worker_pool_specs = self.job_spec[\"worker_pool_specs\"]\n        formatted_env_vars = [\n            {\"name\": key, \"value\": value} for key, value in self.env.items()\n        ]\n        worker_pool_specs[0][\"container_spec\"][\"env\"] = formatted_env_vars\n\n    def _inject_formatted_command(self):\n        \"\"\"Inject shell commands in the Vertex job_spec configuration,\n        in the correct format, which is sourced from the BaseJobConfiguration.\n        Here, we'll ensure that the default string format\n        is converted to a list of strings.\"\"\"\n        worker_pool_specs = self.job_spec[\"worker_pool_specs\"]\n\n        existing_command = worker_pool_specs[0][\"container_spec\"].get(\"command\")\n        if existing_command is None:\n            worker_pool_specs[0][\"container_spec\"][\"command\"] = shlex.split(\n                self._base_flow_run_command()\n            )\n        elif isinstance(existing_command, str):\n            worker_pool_specs[0][\"container_spec\"][\"command\"] = shlex.split(\n                existing_command\n            )\n\n    def _ensure_existence_of_service_account(self):\n        \"\"\"Verify that a service account was provided, either in the credentials\n        or as a standalone service account name override.\"\"\"\n\n        provided_service_account_name = self.job_spec.get(\"service_account_name\")\n        credential_service_account = self.credentials._service_account_email\n\n        service_account_to_use = (\n            provided_service_account_name or credential_service_account\n        )\n\n        if service_account_to_use is None:\n            raise ValueError(\n                \"A service account is required for the Vertex job. \"\n                \"A service account could not be detected in the attached credentials \"\n                \"or in the service_account_name input. 
\"\n                \"Please pass in valid GCP credentials or a valid service_account_name\"\n            )\n\n        self.job_spec[\"service_account_name\"] = service_account_to_use\n\n    @validator(\"job_spec\")\n    def _ensure_job_spec_includes_required_attributes(cls, value: Dict[str, Any]):\n        \"\"\"\n        Ensures that the job spec includes all required components.\n        \"\"\"\n        patch = JsonPatch.from_diff(value, _get_base_job_spec())\n        missing_paths = sorted([op[\"path\"] for op in patch if op[\"op\"] == \"add\"])\n        if missing_paths:\n            raise ValueError(\n                \"Job is missing required attributes at the following paths: \"\n                f\"{', '.join(missing_paths)}\"\n            )\n        return value\n
    "},{"location":"integrations/prefect-gcp/vertex_worker/#prefect_gcp.workers.vertex.VertexAIWorkerJobConfiguration.job_name","title":"job_name: str property","text":"

    The name can be up to 128 characters long and can consist of any UTF-8 characters. Reference: https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.CustomJob#google_cloud_aiplatform_CustomJob_display_name

    "},{"location":"integrations/prefect-gcp/vertex_worker/#prefect_gcp.workers.vertex.VertexAIWorkerJobConfiguration.project","title":"project: str property","text":"

    property for accessing the project from the credentials.

    "},{"location":"integrations/prefect-gcp/vertex_worker/#prefect_gcp.workers.vertex.VertexAIWorkerResult","title":"VertexAIWorkerResult","text":"

    Bases: BaseWorkerResult

    Contains information about the final state of a completed process

    Source code in prefect_gcp/workers/vertex.py
    class VertexAIWorkerResult(BaseWorkerResult):\n    \"\"\"Contains information about the final state of a completed process\"\"\"\n
    "},{"location":"integrations/prefect-gcp/vertex_worker/#prefect_gcp.workers.vertex.VertexAIWorkerVariables","title":"VertexAIWorkerVariables","text":"

    Bases: BaseVariables

    Default variables for the Vertex AI worker.

    The schema for this class is used to populate the variables section of the default base job template.

    Source code in prefect_gcp/workers/vertex.py
    class VertexAIWorkerVariables(BaseVariables):\n    \"\"\"\n    Default variables for the Vertex AI worker.\n\n    The schema for this class is used to populate the `variables` section of the default\n    base job template.\n    \"\"\"\n\n    region: str = Field(\n        description=\"The region where the Vertex AI Job resides.\",\n        example=\"us-central1\",\n    )\n    image: str = Field(\n        title=\"Image Name\",\n        description=(\n            \"The URI of a container image in the Container or Artifact Registry, \"\n            \"used to run your Vertex AI Job. Note that Vertex AI will need access\"\n            \"to the project and region where the container image is stored. See \"\n            \"https://cloud.google.com/vertex-ai/docs/training/create-custom-container\"\n        ),\n        example=\"gcr.io/your-project/your-repo:latest\",\n    )\n    credentials: Optional[GcpCredentials] = Field(\n        title=\"GCP Credentials\",\n        default_factory=GcpCredentials,\n        description=\"The GCP Credentials used to initiate the \"\n        \"Vertex AI Job. If not provided credentials will be \"\n        \"inferred from the local environment.\",\n    )\n    machine_type: str = Field(\n        title=\"Machine Type\",\n        description=(\n            \"The machine type to use for the run, which controls \"\n            \"the available CPU and memory. \"\n            \"See https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec\"\n        ),\n        default=\"n1-standard-4\",\n    )\n    accelerator_type: Optional[str] = Field(\n        title=\"Accelerator Type\",\n        description=(\n            \"The type of accelerator to attach to the machine. \"\n            \"See https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec\"\n        ),\n        example=\"NVIDIA_TESLA_K80\",\n        default=None,\n    )\n    accelerator_count: Optional[int] = Field(\n        title=\"Accelerator Count\",\n        description=(\n            \"The number of accelerators to attach to the machine. \"\n            \"See https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec\"\n        ),\n        example=1,\n        default=None,\n    )\n    boot_disk_type: str = Field(\n        title=\"Boot Disk Type\",\n        description=\"The type of boot disk to attach to the machine.\",\n        default=\"pd-ssd\",\n    )\n    boot_disk_size_gb: int = Field(\n        title=\"Boot Disk Size (GB)\",\n        description=\"The size of the boot disk to attach to the machine, in gigabytes.\",\n        default=100,\n    )\n    maximum_run_time_hours: int = Field(\n        default=1,\n        title=\"Maximum Run Time (Hours)\",\n        description=\"The maximum job running time, in hours\",\n    )\n    network: Optional[str] = Field(\n        default=None,\n        title=\"Network\",\n        description=\"The full name of the Compute Engine network\"\n        \"to which the Job should be peered. Private services access must \"\n        \"already be configured for the network. If left unspecified, the job \"\n        \"is not peered with any network. \"\n        \"For example: projects/12345/global/networks/myVPC\",\n    )\n    reserved_ip_ranges: Optional[List[str]] = Field(\n        default=None,\n        title=\"Reserved IP Ranges\",\n        description=\"A list of names for the reserved ip ranges under the VPC \"\n        \"network that can be used for this job. If set, we will deploy the job \"\n        \"within the provided ip ranges. 
Otherwise, the job will be deployed to \"\n        \"any ip ranges under the provided VPC network.\",\n    )\n    service_account_name: Optional[str] = Field(\n        default=None,\n        title=\"Service Account Name\",\n        description=(\n            \"Specifies the service account to use \"\n            \"as the run-as account in Vertex AI. The worker submitting jobs must have \"\n            \"act-as permission on this run-as account. If unspecified, the AI \"\n            \"Platform Custom Code Service Agent for the CustomJob's project is \"\n            \"used. Takes precedence over the service account found in GCP credentials, \"\n            \"and required if a service account cannot be detected in GCP credentials.\"\n        ),\n    )\n    job_watch_poll_interval: float = Field(\n        default=5.0,\n        title=\"Poll Interval (Seconds)\",\n        description=(\n            \"The amount of time to wait between GCP API calls while monitoring the \"\n            \"state of a Vertex AI Job.\"\n        ),\n    )\n
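
    A minimal deployment sketch showing how these variables can be overridden per deployment. The work pool name "my-vertex-pool", the pre-built image URI, and the use of job_variables with Flow.deploy are assumptions for illustration, not part of this reference:

    from prefect import flow


    @flow(log_prints=True)
    def train():
        print("Training on Vertex AI")


    if __name__ == "__main__":
        # Assumes a Vertex AI work pool named "my-vertex-pool" already exists and that
        # this Prefect version supports the `job_variables` keyword on Flow.deploy.
        train.deploy(
            name="vertex-training",
            work_pool_name="my-vertex-pool",
            image="gcr.io/your-project/your-repo:latest",  # pre-built image (see Image Name above)
            build=False,  # reuse the existing image rather than building a new one
            push=False,
            job_variables={
                "region": "us-central1",
                "machine_type": "n1-standard-8",
                "maximum_run_time_hours": 2,
            },
        )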
    "},{"location":"integrations/prefect-gcp/deployments/steps/","title":"Deployment Steps","text":""},{"location":"integrations/prefect-gcp/deployments/steps/#prefect_gcp.deployments.steps","title":"prefect_gcp.deployments.steps","text":"

    Prefect deployment steps for code storage in and retrieval from Google Cloud Storage.

    "},{"location":"integrations/prefect-gcp/deployments/steps/#prefect_gcp.deployments.steps.PullFromGcsOutput","title":"PullFromGcsOutput","text":"

    Bases: TypedDict

    The output of the pull_from_gcs step.

    Source code in prefect_gcp/deployments/steps.py
    class PullFromGcsOutput(TypedDict):\n    \"\"\"\n    The output of the `pull_from_gcs` step.\n    \"\"\"\n\n    bucket: str\n    folder: str\n    directory: str\n
    "},{"location":"integrations/prefect-gcp/deployments/steps/#prefect_gcp.deployments.steps.PullProjectFromGcsOutput","title":"PullProjectFromGcsOutput","text":"

    Bases: PullFromGcsOutput

    Deprecated. Use PullFromGcsOutput instead.

    Source code in prefect_gcp/deployments/steps.py
    @deprecated_callable(start_date=\"Jun 2023\", help=\"Use `PullFromGcsOutput` instead.\")\nclass PullProjectFromGcsOutput(PullFromGcsOutput):\n    \"\"\"Deprecated. Use `PullFromGcsOutput` instead.\"\"\"\n
    "},{"location":"integrations/prefect-gcp/deployments/steps/#prefect_gcp.deployments.steps.PushProjectToGcsOutput","title":"PushProjectToGcsOutput","text":"

    Bases: PushToGcsOutput

    Deprecated. Use PushToGcsOutput instead.

    Source code in prefect_gcp/deployments/steps.py
    @deprecated_callable(start_date=\"Jun 2023\", help=\"Use `PushToGcsOutput` instead.\")\nclass PushProjectToGcsOutput(PushToGcsOutput):\n    \"\"\"Deprecated. Use `PushToGcsOutput` instead.\"\"\"\n
    "},{"location":"integrations/prefect-gcp/deployments/steps/#prefect_gcp.deployments.steps.PushToGcsOutput","title":"PushToGcsOutput","text":"

    Bases: TypedDict

    The output of the push_to_gcs step.

    Source code in prefect_gcp/deployments/steps.py
    class PushToGcsOutput(TypedDict):\n    \"\"\"\n    The output of the `push_to_gcs` step.\n    \"\"\"\n\n    bucket: str\n    folder: str\n
    "},{"location":"integrations/prefect-gcp/deployments/steps/#prefect_gcp.deployments.steps.pull_from_gcs","title":"pull_from_gcs","text":"

    Pulls the contents of a project from a GCS bucket to the current working directory.

    Parameters:

    Name Type Description Default bucket str

    The name of the GCS bucket where files are stored.

    required folder str

    The folder in the GCS bucket where files are stored.

    required project Optional[str]

    The GCP project the bucket belongs to. If not provided, the project will be inferred from the credentials or the local environment.

    None credentials Optional[Dict]

    A dictionary containing the service account information and project used for authentication. If not provided, the application default credentials will be used.

    None

    Returns:

    Type Description PullProjectFromGcsOutput

    A dictionary containing the bucket, folder, and local directory where files were downloaded.

    Examples:

    Pull from GCS using the default environment credentials:

    build:\n    - prefect_gcp.deployments.steps.pull_from_gcs:\n        requires: prefect-gcp\n        bucket: my-bucket\n        folder: my-folder\n

    Pull from GCS using credentials stored in a block:

    build:\n    - prefect_gcp.deployments.steps.pull_from_gcs:\n        requires: prefect-gcp\n        bucket: my-bucket\n        folder: my-folder\n        credentials: \"{{ prefect.blocks.gcp-credentials.dev-credentials }}\"\n

    Pull from a GCS bucket using credentials stored in a service account file:

    build:\n    - prefect_gcp.deployments.steps.pull_from_gcs:\n        requires: prefect-gcp\n        bucket: my-bucket\n        folder: my-folder\n        credentials:\n            project: my-project\n            service_account_file: /path/to/service_account.json\n

    Source code in prefect_gcp/deployments/steps.py
    def pull_from_gcs(\n    bucket: str,\n    folder: str,\n    project: Optional[str] = None,\n    credentials: Optional[Dict] = None,\n) -> PullProjectFromGcsOutput:\n    \"\"\"\n    Pulls the contents of a project from an GCS bucket to the current working directory.\n\n    Args:\n        bucket: The name of the GCS bucket where files are stored.\n        folder: The folder in the GCS bucket where files are stored.\n        project: The GCP project the bucket belongs to. If not provided, the project will be\n            inferred from the credentials or the local environment.\n        credentials: A dictionary containing the service account information and project\n            used for authentication. If not provided, the application default\n            credentials will be used.\n\n    Returns:\n        A dictionary containing the bucket, folder, and local directory where files were downloaded.\n\n    Examples:\n        Pull from GCS using the default environment credentials:\n        ```yaml\n        build:\n            - prefect_gcp.deployments.steps.pull_from_gcs:\n                requires: prefect-gcp\n                bucket: my-bucket\n                folder: my-folder\n        ```\n\n        Pull from GCS using credentials stored in a block:\n        ```yaml\n        build:\n            - prefect_gcp.deployments.steps.pull_from_gcs:\n                requires: prefect-gcp\n                bucket: my-bucket\n                folder: my-folder\n                credentials: \"{{ prefect.blocks.gcp-credentials.dev-credentials }}\"\n        ```\n\n        Pull from to an GCS bucket using credentials stored in a service account file:\n        ```yaml\n        build:\n            - prefect_gcp.deployments.steps.pull_from_gcs:\n                requires: prefect-gcp\n                bucket: my-bucket\n                folder: my-folder\n                credentials:\n                    project: my-project\n                    service_account_file: /path/to/service_account.json\n        ```\n\n    \"\"\"  # noqa\n    local_path = Path.cwd()\n    project = credentials.get(\"project\") if credentials else None\n\n    gcp_creds = None\n    if credentials is not None:\n        if credentials.get(\"service_account_info\") is not None:\n            gcp_creds = Credentials.from_service_account_info(\n                credentials.get(\"service_account_info\"),\n                scopes=[\"https://www.googleapis.com/auth/cloud-platform\"],\n            )\n        elif credentials.get(\"service_account_file\") is not None:\n            gcp_creds = Credentials.from_service_account_file(\n                credentials.get(\"service_account_file\"),\n                scopes=[\"https://www.googleapis.com/auth/cloud-platform\"],\n            )\n\n    gcp_creds = gcp_creds or google.auth.default()[0]\n\n    storage_client = StorageClient(credentials=gcp_creds, project=project)\n\n    blobs = storage_client.list_blobs(bucket, prefix=folder)\n\n    for blob in blobs:\n        if blob.name.endswith(\"/\"):\n            # object is a folder and will be created if it contains any objects\n            continue\n        local_blob_download_path = PurePosixPath(\n            local_path\n            / relative_path_to_current_platform(blob.name).relative_to(folder)\n        )\n        Path.mkdir(Path(local_blob_download_path.parent), parents=True, exist_ok=True)\n\n        blob.download_to_filename(local_blob_download_path)\n\n    return {\n        \"bucket\": bucket,\n        \"folder\": folder,\n        \"directory\": 
str(local_path),\n    }\n
    "},{"location":"integrations/prefect-gcp/deployments/steps/#prefect_gcp.deployments.steps.pull_project_from_gcs","title":"pull_project_from_gcs","text":"

    Deprecated. Use pull_from_gcs instead.

    Source code in prefect_gcp/deployments/steps.py
    @deprecated_callable(start_date=\"Jun 2023\", help=\"Use `pull_from_gcs` instead.\")\ndef pull_project_from_gcs(*args, **kwargs) -> PullProjectFromGcsOutput:\n    \"\"\"\n    Deprecated. Use `pull_from_gcs` instead.\n    \"\"\"\n    return pull_from_gcs(*args, **kwargs)\n
    "},{"location":"integrations/prefect-gcp/deployments/steps/#prefect_gcp.deployments.steps.push_project_to_gcs","title":"push_project_to_gcs","text":"

    Deprecated. Use push_to_gcs instead.

    Source code in prefect_gcp/deployments/steps.py
    @deprecated_callable(start_date=\"Jun 2023\", help=\"Use `push_to_gcs` instead.\")\ndef push_project_to_gcs(*args, **kwargs) -> PushToGcsOutput:\n    \"\"\"\n    Deprecated. Use `push_to_gcs` instead.\n    \"\"\"\n    return push_to_gcs(*args, **kwargs)\n
    "},{"location":"integrations/prefect-gcp/deployments/steps/#prefect_gcp.deployments.steps.push_to_gcs","title":"push_to_gcs","text":"

    Pushes the contents of the current working directory to a GCS bucket, excluding files and folders specified in the ignore_file.

    Parameters:

    Name Type Description Default bucket str

    The name of the GCS bucket where files will be uploaded.

    required folder str

    The folder in the GCS bucket where files will be uploaded.

    required project Optional[str]

    The GCP project the bucket belongs to. If not provided, the project will be inferred from the credentials or the local environment.

    None credentials Optional[Dict]

    A dictionary containing the service account information and project used for authentication. If not provided, the application default credentials will be used.

    None ignore_file

    The name of the file containing ignore patterns.

    '.prefectignore'

    Returns:

    Type Description PushToGcsOutput

    A dictionary containing the bucket and folder where files were uploaded.

    Examples:

    Push to a GCS bucket:

    build:\n    - prefect_gcp.deployments.steps.push_to_gcs:\n        requires: prefect-gcp\n        bucket: my-bucket\n        folder: my-project\n

    Push to a GCS bucket using credentials stored in a block:

    build:\n    - prefect_gcp.deployments.steps.push_to_gcs:\n        requires: prefect-gcp\n        bucket: my-bucket\n        folder: my-folder\n        credentials: \"{{ prefect.blocks.gcp-credentials.dev-credentials }}\"\n

    Push to a GCS bucket using credentials stored in a service account file:

    build:\n    - prefect_gcp.deployments.steps.push_to_gcs:\n        requires: prefect-gcp\n        bucket: my-bucket\n        folder: my-folder\n        credentials:\n            project: my-project\n            service_account_file: /path/to/service_account.json\n

    Source code in prefect_gcp/deployments/steps.py
    def push_to_gcs(\n    bucket: str,\n    folder: str,\n    project: Optional[str] = None,\n    credentials: Optional[Dict] = None,\n    ignore_file=\".prefectignore\",\n) -> PushToGcsOutput:\n    \"\"\"\n    Pushes the contents of the current working directory to a GCS bucket,\n    excluding files and folders specified in the ignore_file.\n\n    Args:\n        bucket: The name of the GCS bucket where files will be uploaded.\n        folder: The folder in the GCS bucket where files will be uploaded.\n        project: The GCP project the bucket belongs to. If not provided, the project\n            will be inferred from the credentials or the local environment.\n        credentials: A dictionary containing the service account information and project\n            used for authentication. If not provided, the application default\n            credentials will be used.\n        ignore_file: The name of the file containing ignore patterns.\n\n    Returns:\n        A dictionary containing the bucket and folder where files were uploaded.\n\n    Examples:\n        Push to a GCS bucket:\n        ```yaml\n        build:\n            - prefect_gcp.deployments.steps.push_to_gcs:\n                requires: prefect-gcp\n                bucket: my-bucket\n                folder: my-project\n        ```\n\n        Push  to a GCS bucket using credentials stored in a block:\n        ```yaml\n        build:\n            - prefect_gcp.deployments.steps.push_to_gcs:\n                requires: prefect-gcp\n                bucket: my-bucket\n                folder: my-folder\n                credentials: \"{{ prefect.blocks.gcp-credentials.dev-credentials }}\"\n        ```\n\n        Push to a GCS bucket using credentials stored in a service account\n        file:\n        ```yaml\n        build:\n            - prefect_gcp.deployments.steps.push_to_gcs:\n                requires: prefect-gcp\n                bucket: my-bucket\n                folder: my-folder\n                credentials:\n                    project: my-project\n                    service_account_file: /path/to/service_account.json\n        ```\n\n    \"\"\"\n    project = credentials.get(\"project\") if credentials else None\n\n    gcp_creds = None\n    if credentials is not None:\n        if credentials.get(\"service_account_info\") is not None:\n            gcp_creds = Credentials.from_service_account_info(\n                credentials.get(\"service_account_info\"),\n                scopes=[\"https://www.googleapis.com/auth/cloud-platform\"],\n            )\n        elif credentials.get(\"service_account_file\") is not None:\n            gcp_creds = Credentials.from_service_account_file(\n                credentials.get(\"service_account_file\"),\n                scopes=[\"https://www.googleapis.com/auth/cloud-platform\"],\n            )\n\n    gcp_creds = gcp_creds or google.auth.default()[0]\n\n    storage_client = StorageClient(credentials=gcp_creds, project=project)\n    bucket_resource = storage_client.bucket(bucket)\n\n    local_path = Path.cwd()\n\n    included_files = None\n    if ignore_file and Path(ignore_file).exists():\n        with open(ignore_file, \"r\") as f:\n            ignore_patterns = f.readlines()\n        included_files = filter_files(str(local_path), ignore_patterns)\n\n    for local_file_path in local_path.expanduser().rglob(\"*\"):\n        relative_local_file_path = local_file_path.relative_to(local_path)\n        if (\n            included_files is not None\n            and str(relative_local_file_path) not in 
included_files\n        ):\n            continue\n        elif not local_file_path.is_dir():\n            remote_file_path = (folder / relative_local_file_path).as_posix()\n\n            blob_resource = bucket_resource.blob(remote_file_path)\n            blob_resource.upload_from_filename(local_file_path)\n\n    return {\n        \"bucket\": bucket,\n        \"folder\": folder,\n    }\n
    "},{"location":"integrations/prefect-github/","title":"prefect-github","text":""},{"location":"integrations/prefect-github/#welcome","title":"Welcome!","text":"

    Prefect integrations interacting with GitHub.

    The tasks within this collection were created by a code generator using the GitHub GraphQL schema.

    "},{"location":"integrations/prefect-github/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-github/#python-setup","title":"Python setup","text":"

    Requires an installation of Python 3.8 or newer.

    We recommend using a Python virtual environment manager such as pipenv, conda or virtualenv.

    These tasks are designed to work with Prefect 2. For more information about how to use Prefect, please refer to the Prefect documentation.

    "},{"location":"integrations/prefect-github/#installation","title":"Installation","text":"

    Install prefect-github with pip:

    pip install prefect-github\n

    Then, register the blocks in this module to view them in Prefect Cloud:

    prefect block register -m prefect_github\n

    Note: to use the load method on Blocks, you must already have a block document saved, either through code or through the UI.

    "},{"location":"integrations/prefect-github/#write-and-run-a-flow","title":"Write and run a flow","text":"
    from prefect import flow\nfrom prefect_github import GitHubCredentials\nfrom prefect_github.repository import query_repository\nfrom prefect_github.mutations import add_star_starrable\n\n\n@flow()\ndef github_add_star_flow():\n    github_credentials = GitHubCredentials.load(\"github-token\")\n    repository_id = query_repository(\n        \"PrefectHQ\",\n        \"Prefect\",\n        github_credentials=github_credentials,\n        return_fields=\"id\"\n    )[\"id\"]\n    starrable = add_star_starrable(\n        repository_id,\n        github_credentials\n    )\n    return starrable\n\n\ngithub_add_star_flow()\n
    "},{"location":"integrations/prefect-github/#resources","title":"Resources","text":"

    If you encounter any bugs while using prefect-github, feel free to open an issue in the prefect-github repository.

    If you have any questions or issues while using prefect-github, you can find help in the Prefect Slack community.

    Feel free to \u2b50\ufe0f or watch prefect-github for updates too!

    "},{"location":"integrations/prefect-github/#development","title":"Development","text":"

    If you'd like to install a version of prefect-github for development, clone the repository and perform an editable install with pip:

    git clone https://github.com/PrefectHQ/prefect-github.git\n\ncd prefect-github/\n\npip install -e \".[dev]\"\n\n# Install linting pre-commit hooks\npre-commit install\n
    "},{"location":"integrations/prefect-github/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-github/credentials/#prefect_github.credentials","title":"prefect_github.credentials","text":"

    Credential classes used to perform authenticated interactions with GitHub

    "},{"location":"integrations/prefect-github/credentials/#prefect_github.credentials.GitHubCredentials","title":"GitHubCredentials","text":"

    Bases: CredentialsBlock

    Block used to manage GitHub authentication.

    Attributes:

    Name Type Description token SecretStr

    The token used to authenticate with GitHub.

    Examples:

    Load stored GitHub credentials:

    from prefect_github import GitHubCredentials\ngithub_credentials_block = GitHubCredentials.load(\"BLOCK_NAME\")\n

    Source code in prefect_github/credentials.py
    class GitHubCredentials(CredentialsBlock):\n    \"\"\"\n    Block used to manage GitHub authentication.\n\n    Attributes:\n        token: the token to authenticate into GitHub.\n\n    Examples:\n        Load stored GitHub credentials:\n        ```python\n        from prefect_github import GitHubCredentials\n        github_credentials_block = GitHubCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"GitHub Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/41971cfecfea5f79ff334164f06ecb34d1038dd4-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-github/credentials/#prefect_github.credentials.GitHubCredentials\"  # noqa\n\n    token: SecretStr = Field(\n        default=None, description=\"A GitHub personal access token (PAT).\"\n    )\n\n    def get_client(self) -> HTTPEndpoint:\n        \"\"\"\n        Gets an authenticated GitHub GraphQL HTTPEndpoint client.\n\n        Returns:\n            An authenticated GitHub GraphQL HTTPEndpoint client.\n\n        Example:\n            Gets an authenticated GitHub GraphQL HTTPEndpoint client.\n            ```python\n            from prefect_github import GitHubCredentials\n\n            github_credentials = GitHubCredentials(token=token)\n            client = github_credentials.get_client()\n            ```\n        \"\"\"\n\n        if self.token is not None:\n            base_headers = {\"Authorization\": f\"Bearer {self.token.get_secret_value()}\"}\n        else:\n            base_headers = None\n\n        endpoint = HTTPEndpoint(\n            \"https://api.github.com/graphql\", base_headers=base_headers\n        )\n        return endpoint\n\n    def get_endpoint(self) -> HTTPEndpoint:\n        \"\"\"\n        Gets an authenticated GitHub GraphQL HTTPEndpoint.\n\n        Returns:\n            An authenticated GitHub GraphQL HTTPEndpoint\n\n        Example:\n            Gets an authenticated GitHub GraphQL HTTPEndpoint.\n            ```python\n            from prefect import flow\n            from prefect_github import GitHubCredentials\n\n            @flow\n            def example_get_endpoint_flow():\n                token = \"token_xxxxxxx\"\n                github_credentials = GitHubCredentials(token=token)\n                endpoint = github_credentials.get_endpoint()\n                return endpoint\n\n            example_get_endpoint_flow()\n            ```\n        \"\"\"\n        warnings.warn(\n            \"`get_endpoint` is deprecated and will be removed March 31st, 2023, \"\n            \"use `get_client` instead.\",\n            DeprecationWarning,\n        )\n        return self.get_client()\n
    "},{"location":"integrations/prefect-github/credentials/#prefect_github.credentials.GitHubCredentials.get_client","title":"get_client","text":"

    Gets an authenticated GitHub GraphQL HTTPEndpoint client.

    Returns:

    Type Description HTTPEndpoint

    An authenticated GitHub GraphQL HTTPEndpoint client.

    Example

    Gets an authenticated GitHub GraphQL HTTPEndpoint client.

    from prefect_github import GitHubCredentials\n\ngithub_credentials = GitHubCredentials(token=token)\nclient = github_credentials.get_client()\n

    Source code in prefect_github/credentials.py
    def get_client(self) -> HTTPEndpoint:\n    \"\"\"\n    Gets an authenticated GitHub GraphQL HTTPEndpoint client.\n\n    Returns:\n        An authenticated GitHub GraphQL HTTPEndpoint client.\n\n    Example:\n        Gets an authenticated GitHub GraphQL HTTPEndpoint client.\n        ```python\n        from prefect_github import GitHubCredentials\n\n        github_credentials = GitHubCredentials(token=token)\n        client = github_credentials.get_client()\n        ```\n    \"\"\"\n\n    if self.token is not None:\n        base_headers = {\"Authorization\": f\"Bearer {self.token.get_secret_value()}\"}\n    else:\n        base_headers = None\n\n    endpoint = HTTPEndpoint(\n        \"https://api.github.com/graphql\", base_headers=base_headers\n    )\n    return endpoint\n
    "},{"location":"integrations/prefect-github/credentials/#prefect_github.credentials.GitHubCredentials.get_endpoint","title":"get_endpoint","text":"

    Gets an authenticated GitHub GraphQL HTTPEndpoint.

    Returns:

    Type Description HTTPEndpoint

    An authenticated GitHub GraphQL HTTPEndpoint

    Example

    Gets an authenticated GitHub GraphQL HTTPEndpoint.

    from prefect import flow\nfrom prefect_github import GitHubCredentials\n\n@flow\ndef example_get_endpoint_flow():\n    token = \"token_xxxxxxx\"\n    github_credentials = GitHubCredentials(token=token)\n    endpoint = github_credentials.get_endpoint()\n    return endpoint\n\nexample_get_endpoint_flow()\n

    Source code in prefect_github/credentials.py
    def get_endpoint(self) -> HTTPEndpoint:\n    \"\"\"\n    Gets an authenticated GitHub GraphQL HTTPEndpoint.\n\n    Returns:\n        An authenticated GitHub GraphQL HTTPEndpoint\n\n    Example:\n        Gets an authenticated GitHub GraphQL HTTPEndpoint.\n        ```python\n        from prefect import flow\n        from prefect_github import GitHubCredentials\n\n        @flow\n        def example_get_endpoint_flow():\n            token = \"token_xxxxxxx\"\n            github_credentials = GitHubCredentials(token=token)\n            endpoint = github_credentials.get_endpoint()\n            return endpoint\n\n        example_get_endpoint_flow()\n        ```\n    \"\"\"\n    warnings.warn(\n        \"`get_endpoint` is deprecated and will be removed March 31st, 2023, \"\n        \"use `get_client` instead.\",\n        DeprecationWarning,\n    )\n    return self.get_client()\n
    "},{"location":"integrations/prefect-github/graphql/","title":"Graphql","text":""},{"location":"integrations/prefect-github/graphql/#prefect_github.graphql","title":"prefect_github.graphql","text":"

    This module contains generic GraphQL tasks.

    "},{"location":"integrations/prefect-github/graphql/#prefect_github.graphql.execute_graphql","title":"execute_graphql async","text":"

    Generic function for executing GraphQL operations.

    Parameters:

    Name Type Description Default op Union[Operation, str]

    The operation, either as a valid GraphQL string or sgqlc.Operation.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required error_key str

    The key name to look out for in the response that indicates an error has occurred with the request.

    'errors'

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Examples:

    Queries the last three issues from the Prefect repository using a string query.

    from prefect import flow\nfrom prefect_github import GitHubCredentials\nfrom prefect_github.graphql import execute_graphql\n\n@flow()\ndef example_execute_graphql_flow():\n    op = '''\n        query GitHubRepoIssues($owner: String!, $name: String!) {\n            repository(owner: $owner, name: $name) {\n                issues(last: 3) {\n                    nodes {\n                        number\n                        title\n                    }\n                }\n            }\n        }\n    '''\n    token = \"ghp_...\"\n    github_credentials = GitHubCredentials(token=token)\n    params = dict(owner=\"PrefectHQ\", name=\"Prefect\")\n    result = execute_graphql(op, github_credentials, **params)\n    return result\n\nexample_execute_graphql_flow()\n

    Queries the first three issues from the Prefect repository using an sgqlc.Operation.

    from prefect import flow\nfrom sgqlc.operation import Operation\nfrom prefect_github import GitHubCredentials\nfrom prefect_github.schemas import graphql_schema\nfrom prefect_github.graphql import execute_graphql\n\n@flow()\ndef example_execute_graphql_flow():\n    op = Operation(graphql_schema.Query)\n    op_settings = op.repository(\n        owner=\"PrefectHQ\", name=\"Prefect\"\n    ).issues(\n        first=3\n    ).nodes()\n    op_settings.__fields__(\"id\", \"title\")\n    token = \"ghp_...\"\n    github_credentials = GitHubCredentials(token=token)\n    result = execute_graphql(\n        op,\n        github_credentials,\n    )\n    return result\n\nexample_execute_graphql_flow()\n

    Source code in prefect_github/graphql.py
    @task\nasync def execute_graphql(\n    op: Union[Operation, str],\n    github_credentials: GitHubCredentials,\n    error_key: str = \"errors\",\n    **vars,\n) -> Dict[str, Any]:\n    # NOTE: Maintainers can update these examples to match their collection!\n    \"\"\"\n    Generic function for executing GraphQL operations.\n\n    Args:\n        op: The operation, either as a valid GraphQL string or sgqlc.Operation.\n        github_credentials: Credentials to use for authentication with GitHub.\n        error_key: The key name to look out for in the response\n            that indicates an error has occurred with the request.\n\n    Returns:\n        A dict of the returned fields.\n\n    Examples:\n        Queries the first three issues from the Prefect repository\n        using a string query.\n        ```python\n        from prefect import flow\n        from prefect_github import GitHubCredentials\n        from prefect_github.graphql import execute_graphql\n\n        @flow()\n        def example_execute_graphql_flow():\n            op = '''\n                query GitHubRepoIssues($owner: String!, $name: String!) {\n                    repository(owner: $owner, name: $name) {\n                        issues(last: 3) {\n                            nodes {\n                                number\n                                title\n                            }\n                        }\n                    }\n                }\n            '''\n            token = \"ghp_...\"\n            github_credentials = GitHubCredentials(token=token)\n            params = dict(owner=\"PrefectHQ\", name=\"Prefect\")\n            result = execute_graphql(op, github_credentials, **params)\n            return result\n\n        example_execute_graphql_flow()\n        ```\n\n        Queries the first three issues from Prefect repository\n        using a sgqlc.Operation.\n        ```python\n        from prefect import flow\n        from sgqlc.operation import Operation\n        from prefect_github import GitHubCredentials\n        from prefect_github.schemas import graphql_schema\n        from prefect_github.graphql import execute_graphql\n\n        @flow()\n        def example_execute_graphql_flow():\n            op = Operation(graphql_schema.Query)\n            op_settings = op.repository(\n                owner=\"PrefectHQ\", name=\"Prefect\"\n            ).issues(\n                first=3\n            ).nodes()\n            op_settings.__fields__(\"id\", \"title\")\n            token = \"ghp_...\"\n            github_credentials = GitHubCredentials(token=token)\n            result = execute_graphql(\n                op,\n                github_credentials,\n            )\n            return result\n\n        example_execute_graphql_flow()\n        ```\n    \"\"\"\n    result = await _execute_graphql_op(\n        op, github_credentials, error_key=error_key, **vars\n    )\n    return result\n
    "},{"location":"integrations/prefect-github/mutations/","title":"Mutations","text":""},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations","title":"prefect_github.mutations","text":"

    This module contains GitHub mutation tasks.

    "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.add_comment_subject","title":"add_comment_subject async","text":"

    Adds a comment to an Issue or Pull Request.

    Parameters:

    Name Type Description Default subject_id str

    The Node ID of the subject to modify.

    required body str

    The contents of the comment.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.
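
    Example: a minimal usage sketch, assuming a GitHubCredentials block saved as "github-token" and an issue or pull request node ID obtained elsewhere (both placeholders are illustrative):

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.mutations import add_comment_subject


    @flow()
    def example_add_comment_flow(subject_id: str):
        github_credentials = GitHubCredentials.load("github-token")  # hypothetical block name
        comment = add_comment_subject(
            subject_id,
            "Thanks for the report!",
            github_credentials,
        )
        return comment


    example_add_comment_flow("ISSUE_OR_PR_NODE_ID")  # placeholder node ID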

    Source code in prefect_github/mutations.py
    @task\nasync def add_comment_subject(  # noqa\n    subject_id: str,\n    body: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Adds a comment to an Issue or Pull Request.\n\n    Args:\n        subject_id: The Node ID of the subject to modify.\n        body: The contents of the comment.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.add_comment(\n        **strip_kwargs(\n            input=dict(\n                subject_id=subject_id,\n                body=body,\n            )\n        )\n    ).subject(**strip_kwargs())\n\n    op_stack = (\n        \"addComment\",\n        \"subject\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"addComment\"][\"subject\"]\n
    "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.add_pull_request_review","title":"add_pull_request_review async","text":"

    Adds a review to a Pull Request.

    Parameters:

    Name Type Description Default pull_request_id str

    The Node ID of the pull request to modify.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required commit_oid datetime

    The commit OID the review pertains to.

    None body str

    The contents of the review body comment.

    None event PullRequestReviewEvent

    The event to perform on the pull request review.

    None comments Iterable[DraftPullRequestReviewComment]

    The review line comments.

    None threads Iterable[DraftPullRequestReviewThread]

    The review line comment threads.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.
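
    Example: a minimal usage sketch, assuming a GitHubCredentials block saved as "github-token" and a pull request node ID obtained elsewhere (both placeholders are illustrative); without an event, the review is created as pending:

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.mutations import add_pull_request_review


    @flow()
    def example_add_pull_request_review_flow(pull_request_id: str):
        github_credentials = GitHubCredentials.load("github-token")  # hypothetical block name
        review = add_pull_request_review(
            pull_request_id,
            github_credentials,
            body="Thanks for the contribution!",
            # omitting `event` leaves the review in a pending state
        )
        return review


    example_add_pull_request_review_flow("PR_NODE_ID")  # placeholder node ID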

    Source code in prefect_github/mutations.py
    @task\nasync def add_pull_request_review(  # noqa\n    pull_request_id: str,\n    github_credentials: GitHubCredentials,\n    commit_oid: datetime = None,\n    body: str = None,\n    event: graphql_schema.PullRequestReviewEvent = None,\n    comments: Iterable[graphql_schema.DraftPullRequestReviewComment] = None,\n    threads: Iterable[graphql_schema.DraftPullRequestReviewThread] = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Adds a review to a Pull Request.\n\n    Args:\n        pull_request_id: The Node ID of the pull request to modify.\n        github_credentials: Credentials to use for authentication with GitHub.\n        commit_oid: The commit OID the review pertains to.\n        body: The contents of the review body comment.\n        event: The event to perform on the pull request review.\n        comments: The review line comments.\n        threads: The review line comment threads.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.add_pull_request_review(\n        **strip_kwargs(\n            input=dict(\n                pull_request_id=pull_request_id,\n                commit_oid=commit_oid,\n                body=body,\n                event=event,\n                comments=comments,\n                threads=threads,\n            )\n        )\n    ).pull_request_review(**strip_kwargs())\n\n    op_stack = (\n        \"addPullRequestReview\",\n        \"pullRequestReview\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"addPullRequestReview\"][\"pullRequestReview\"]\n
    "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.add_reaction","title":"add_reaction async","text":"

    Adds a reaction to a subject.

    Parameters:

    Name Type Description Default subject_id str

    The Node ID of the subject to modify.

    required content ReactionContent

    The name of the emoji to react with.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.
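
    Example: a minimal usage sketch, assuming a GitHubCredentials block saved as "github-token"; the node ID placeholder is illustrative, and the reaction is passed as its enum name string (an assumption that the string form is accepted here):

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.mutations import add_reaction


    @flow()
    def example_add_reaction_flow(subject_id: str):
        github_credentials = GitHubCredentials.load("github-token")  # hypothetical block name
        reaction = add_reaction(
            subject_id,
            "THUMBS_UP",  # a ReactionContent value, given here as its string name
            github_credentials,
        )
        return reaction


    example_add_reaction_flow("ISSUE_OR_COMMENT_NODE_ID")  # placeholder node ID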

    Source code in prefect_github/mutations.py
    @task\nasync def add_reaction(  # noqa\n    subject_id: str,\n    content: graphql_schema.ReactionContent,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Adds a reaction to a subject.\n\n    Args:\n        subject_id: The Node ID of the subject to modify.\n        content: The name of the emoji to react with.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.add_reaction(\n        **strip_kwargs(\n            input=dict(\n                subject_id=subject_id,\n                content=content,\n            )\n        )\n    ).reaction(**strip_kwargs())\n\n    op_stack = (\n        \"addReaction\",\n        \"reaction\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"addReaction\"][\"reaction\"]\n
    "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.add_reaction_subject","title":"add_reaction_subject async","text":"

    Adds a reaction to a subject.

    Parameters:

    Name Type Description Default subject_id str

    The Node ID of the subject to modify.

    required content ReactionContent

    The name of the emoji to react with.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/mutations.py
    @task\nasync def add_reaction_subject(  # noqa\n    subject_id: str,\n    content: graphql_schema.ReactionContent,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Adds a reaction to a subject.\n\n    Args:\n        subject_id: The Node ID of the subject to modify.\n        content: The name of the emoji to react with.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.add_reaction(\n        **strip_kwargs(\n            input=dict(\n                subject_id=subject_id,\n                content=content,\n            )\n        )\n    ).subject(**strip_kwargs())\n\n    op_stack = (\n        \"addReaction\",\n        \"subject\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"addReaction\"][\"subject\"]\n
    "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.add_star_starrable","title":"add_star_starrable async","text":"

    Adds a star to a Starrable.

    Parameters:

    Name Type Description Default starrable_id str

    The Starrable ID to star.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/mutations.py
    @task\nasync def add_star_starrable(  # noqa\n    starrable_id: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Adds a star to a Starrable.\n\n    Args:\n        starrable_id: The Starrable ID to star.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.add_star(\n        **strip_kwargs(\n            input=dict(\n                starrable_id=starrable_id,\n            )\n        )\n    ).starrable(**strip_kwargs())\n\n    op_stack = (\n        \"addStar\",\n        \"starrable\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"addStar\"][\"starrable\"]\n
    "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.close_issue","title":"close_issue async","text":"

    Close an issue.

    Parameters:

    Name Type Description Default issue_id str

    ID of the issue to be closed.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required state_reason IssueClosedStateReason

    The reason the issue is to be closed.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.
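
    Example: a minimal usage sketch, assuming a GitHubCredentials block saved as "github-token" and an issue node ID obtained elsewhere (both placeholders are illustrative):

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.mutations import close_issue


    @flow()
    def example_close_issue_flow(issue_id: str):
        github_credentials = GitHubCredentials.load("github-token")  # hypothetical block name
        issue = close_issue(
            issue_id,
            github_credentials,
            # state_reason is optional and omitted here
        )
        return issue


    example_close_issue_flow("ISSUE_NODE_ID")  # placeholder node ID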

    Source code in prefect_github/mutations.py
    @task\nasync def close_issue(  # noqa\n    issue_id: str,\n    github_credentials: GitHubCredentials,\n    state_reason: graphql_schema.IssueClosedStateReason = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Close an issue.\n\n    Args:\n        issue_id: ID of the issue to be closed.\n        github_credentials: Credentials to use for authentication with GitHub.\n        state_reason: The reason the issue is to be closed.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.close_issue(\n        **strip_kwargs(\n            input=dict(\n                issue_id=issue_id,\n                state_reason=state_reason,\n            )\n        )\n    ).issue(**strip_kwargs())\n\n    op_stack = (\n        \"closeIssue\",\n        \"issue\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"closeIssue\"][\"issue\"]\n
    "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.close_pull_request","title":"close_pull_request async","text":"

    Close a pull request.

    Parameters:

    Name Type Description Default pull_request_id str

    ID of the pull request to be closed.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/mutations.py
    @task\nasync def close_pull_request(  # noqa\n    pull_request_id: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Close a pull request.\n\n    Args:\n        pull_request_id: ID of the pull request to be closed.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.close_pull_request(\n        **strip_kwargs(\n            input=dict(\n                pull_request_id=pull_request_id,\n            )\n        )\n    ).pull_request(**strip_kwargs())\n\n    op_stack = (\n        \"closePullRequest\",\n        \"pullRequest\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"closePullRequest\"][\"pullRequest\"]\n
    "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.create_issue","title":"create_issue async","text":"

    Creates a new issue.

    Parameters:

    Name Type Description Default repository_id str

    The Node ID of the repository.

    required title str

    The title for the issue.

    required assignee_ids Iterable[str]

    The Node ID for the user assignee for this issue.

    required label_ids Iterable[str]

    An array of Node IDs of labels for this issue.

    required project_ids Iterable[str]

    An array of Node IDs for projects associated with this issue.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required body str

    The body for the issue description.

    None milestone_id str

    The Node ID of the milestone for this issue.

    None issue_template str

    The name of an issue template in the repository; assigns labels and assignees from the template to the issue.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.
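
    Example: a minimal usage sketch that looks up the repository ID first, assuming a GitHubCredentials block saved as "github-token"; the owner and repository names are illustrative, and the empty assignee, label, and project ID lists are illustrative choices for the required arguments:

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.repository import query_repository
    from prefect_github.mutations import create_issue


    @flow()
    def example_create_issue_flow():
        github_credentials = GitHubCredentials.load("github-token")  # hypothetical block name
        repository_id = query_repository(
            "my-org",   # hypothetical owner
            "my-repo",  # hypothetical repository
            github_credentials=github_credentials,
            return_fields="id",
        )["id"]
        issue = create_issue(
            repository_id,
            "Found a bug",
            [],  # assignee_ids: none in this sketch
            [],  # label_ids: none in this sketch
            [],  # project_ids: none in this sketch
            github_credentials,
            body="Steps to reproduce: ...",
        )
        return issue


    example_create_issue_flow()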

    Source code in prefect_github/mutations.py
    @task\nasync def create_issue(  # noqa\n    repository_id: str,\n    title: str,\n    assignee_ids: Iterable[str],\n    label_ids: Iterable[str],\n    project_ids: Iterable[str],\n    github_credentials: GitHubCredentials,\n    body: str = None,\n    milestone_id: str = None,\n    issue_template: str = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Creates a new issue.\n\n    Args:\n        repository_id: The Node ID of the repository.\n        title: The title for the issue.\n        assignee_ids: The Node ID for the user assignee for this issue.\n        label_ids: An array of Node IDs of labels for this issue.\n        project_ids: An array of Node IDs for projects associated with this\n            issue.\n        github_credentials: Credentials to use for authentication with GitHub.\n        body: The body for the issue description.\n        milestone_id: The Node ID of the milestone for this issue.\n        issue_template: The name of an issue template in the repository, assigns\n            labels and assignees from the template to the issue.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.create_issue(\n        **strip_kwargs(\n            input=dict(\n                repository_id=repository_id,\n                title=title,\n                assignee_ids=assignee_ids,\n                label_ids=label_ids,\n                project_ids=project_ids,\n                body=body,\n                milestone_id=milestone_id,\n                issue_template=issue_template,\n            )\n        )\n    ).issue(**strip_kwargs())\n\n    op_stack = (\n        \"createIssue\",\n        \"issue\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"createIssue\"][\"issue\"]\n
    "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.create_pull_request","title":"create_pull_request async","text":"

    Create a new pull request.

    Parameters:

    Name Type Description Default repository_id str

    The Node ID of the repository.

    required base_ref_name str

    The name of the branch you want your changes pulled into. This should be an existing branch on the current repository. You cannot update the base branch on a pull request to point to another repository.

    required head_ref_name str

    The name of the branch where your changes are implemented. For cross-repository pull requests in the same network, namespace head_ref_name with a user like this: username:branch.

    required title str

    The title of the pull request.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required body str

    The contents of the pull request.

    None maintainer_can_modify bool

    Indicates whether maintainers can modify the pull request.

    None draft bool

    Indicates whether this pull request should be a draft.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.
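
    Example: a minimal usage sketch that looks up the repository ID first, assuming a GitHubCredentials block saved as "github-token"; the owner, repository, and branch names are illustrative:

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.repository import query_repository
    from prefect_github.mutations import create_pull_request


    @flow()
    def example_create_pull_request_flow():
        github_credentials = GitHubCredentials.load("github-token")  # hypothetical block name
        repository_id = query_repository(
            "my-org",   # hypothetical owner
            "my-repo",  # hypothetical repository
            github_credentials=github_credentials,
            return_fields="id",
        )["id"]
        pull_request = create_pull_request(
            repository_id,
            "main",               # base_ref_name
            "my-feature-branch",  # head_ref_name (hypothetical)
            "Add a new feature",
            github_credentials,
            body="Summary of the changes.",
            draft=True,
        )
        return pull_request


    example_create_pull_request_flow()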

    Source code in prefect_github/mutations.py
    @task\nasync def create_pull_request(  # noqa\n    repository_id: str,\n    base_ref_name: str,\n    head_ref_name: str,\n    title: str,\n    github_credentials: GitHubCredentials,\n    body: str = None,\n    maintainer_can_modify: bool = None,\n    draft: bool = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Create a new pull request.\n\n    Args:\n        repository_id: The Node ID of the repository.\n        base_ref_name: The name of the branch you want your changes pulled into.\n            This should be an existing branch on the current repository.\n            You cannot update the base branch on a pull request to point\n            to another repository.\n        head_ref_name: The name of the branch where your changes are\n            implemented. For cross-repository pull requests in the same\n            network, namespace `head_ref_name` with a user like this:\n            `username:branch`.\n        title: The title of the pull request.\n        github_credentials: Credentials to use for authentication with GitHub.\n        body: The contents of the pull request.\n        maintainer_can_modify: Indicates whether maintainers can modify the pull\n            request.\n        draft: Indicates whether this pull request should be a draft.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.create_pull_request(\n        **strip_kwargs(\n            input=dict(\n                repository_id=repository_id,\n                base_ref_name=base_ref_name,\n                head_ref_name=head_ref_name,\n                title=title,\n                body=body,\n                maintainer_can_modify=maintainer_can_modify,\n                draft=draft,\n            )\n        )\n    ).pull_request(**strip_kwargs())\n\n    op_stack = (\n        \"createPullRequest\",\n        \"pullRequest\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"createPullRequest\"][\"pullRequest\"]\n
    "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.remove_reaction","title":"remove_reaction async","text":"

    Removes a reaction from a subject.

    Parameters:

    Name Type Description Default subject_id str

    The Node ID of the subject to modify.

    required content ReactionContent

    The name of the emoji reaction to remove.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/mutations.py
    @task\nasync def remove_reaction(  # noqa\n    subject_id: str,\n    content: graphql_schema.ReactionContent,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Removes a reaction from a subject.\n\n    Args:\n        subject_id: The Node ID of the subject to modify.\n        content: The name of the emoji reaction to remove.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.remove_reaction(\n        **strip_kwargs(\n            input=dict(\n                subject_id=subject_id,\n                content=content,\n            )\n        )\n    ).reaction(**strip_kwargs())\n\n    op_stack = (\n        \"removeReaction\",\n        \"reaction\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"removeReaction\"][\"reaction\"]\n
    "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.remove_reaction_subject","title":"remove_reaction_subject async","text":"

    Removes a reaction from a subject.

    Parameters:

    Name Type Description Default subject_id str

    The Node ID of the subject to modify.

    required content ReactionContent

    The name of the emoji reaction to remove.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/mutations.py
    @task\nasync def remove_reaction_subject(  # noqa\n    subject_id: str,\n    content: graphql_schema.ReactionContent,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Removes a reaction from a subject.\n\n    Args:\n        subject_id: The Node ID of the subject to modify.\n        content: The name of the emoji reaction to remove.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.remove_reaction(\n        **strip_kwargs(\n            input=dict(\n                subject_id=subject_id,\n                content=content,\n            )\n        )\n    ).subject(**strip_kwargs())\n\n    op_stack = (\n        \"removeReaction\",\n        \"subject\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"removeReaction\"][\"subject\"]\n
    "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.remove_star_starrable","title":"remove_star_starrable async","text":"

    Removes a star from a Starrable.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| starrable_id | str | The Starrable ID to unstar. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/mutations.py
    @task\nasync def remove_star_starrable(  # noqa\n    starrable_id: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Removes a star from a Starrable.\n\n    Args:\n        starrable_id: The Starrable ID to unstar.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.remove_star(\n        **strip_kwargs(\n            input=dict(\n                starrable_id=starrable_id,\n            )\n        )\n    ).starrable(**strip_kwargs())\n\n    op_stack = (\n        \"removeStar\",\n        \"starrable\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"removeStar\"][\"starrable\"]\n
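For illustration, a minimal sketch of calling this task from a flow. It assumes GitHubCredentials can be constructed with a personal access token (or loaded from a saved block); the token and repository node ID below are placeholders.

```python
import asyncio

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.mutations import remove_star_starrable


@flow
async def unstar_repository():
    # Placeholder token; a saved block could also be loaded with
    # GitHubCredentials.load("my-github-creds").
    credentials = GitHubCredentials(token="ghp_example_token")
    # "R_example" is a placeholder GraphQL node ID for a repository.
    starrable = await remove_star_starrable(
        starrable_id="R_example",
        github_credentials=credentials,
    )
    return starrable


if __name__ == "__main__":
    asyncio.run(unstar_repository())
```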
    "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.request_reviews","title":"request_reviews async","text":"

    Set review requests on a pull request.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| pull_request_id | str | The Node ID of the pull request to modify. | required |
| user_ids | Iterable[str] | The Node IDs of the user to request. | required |
| team_ids | Iterable[str] | The Node IDs of the team to request. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| union | bool | Add users to the set rather than replace. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/mutations.py
    @task\nasync def request_reviews(  # noqa\n    pull_request_id: str,\n    user_ids: Iterable[str],\n    team_ids: Iterable[str],\n    github_credentials: GitHubCredentials,\n    union: bool = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Set review requests on a pull request.\n\n    Args:\n        pull_request_id: The Node ID of the pull request to modify.\n        user_ids: The Node IDs of the user to request.\n        team_ids: The Node IDs of the team to request.\n        github_credentials: Credentials to use for authentication with GitHub.\n        union: Add users to the set rather than replace.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.request_reviews(\n        **strip_kwargs(\n            input=dict(\n                pull_request_id=pull_request_id,\n                user_ids=user_ids,\n                team_ids=team_ids,\n                union=union,\n            )\n        )\n    )\n\n    op_stack = (\"requestReviews\",)\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"requestReviews\"]\n
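A hedged usage sketch showing the optional union flag; the saved block name and all node IDs below are placeholders.

```python
import asyncio

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.mutations import request_reviews


@flow
async def add_reviewers():
    # Assumes a GitHubCredentials block was previously saved under this name.
    credentials = await GitHubCredentials.load("my-github-creds")
    result = await request_reviews(
        pull_request_id="PR_example",   # placeholder pull request node ID
        user_ids=["U_example"],         # placeholder user node ID
        team_ids=[],
        github_credentials=credentials,
        union=True,                     # add to existing reviewers instead of replacing them
    )
    return result


if __name__ == "__main__":
    asyncio.run(add_reviewers())
```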
    "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.request_reviews_pull_request","title":"request_reviews_pull_request async","text":"

    Set review requests on a pull request.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| pull_request_id | str | The Node ID of the pull request to modify. | required |
| user_ids | Iterable[str] | The Node IDs of the user to request. | required |
| team_ids | Iterable[str] | The Node IDs of the team to request. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| union | bool | Add users to the set rather than replace. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/mutations.py
    @task\nasync def request_reviews_pull_request(  # noqa\n    pull_request_id: str,\n    user_ids: Iterable[str],\n    team_ids: Iterable[str],\n    github_credentials: GitHubCredentials,\n    union: bool = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Set review requests on a pull request.\n\n    Args:\n        pull_request_id: The Node ID of the pull request to modify.\n        user_ids: The Node IDs of the user to request.\n        team_ids: The Node IDs of the team to request.\n        github_credentials: Credentials to use for authentication with GitHub.\n        union: Add users to the set rather than replace.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.request_reviews(\n        **strip_kwargs(\n            input=dict(\n                pull_request_id=pull_request_id,\n                user_ids=user_ids,\n                team_ids=team_ids,\n                union=union,\n            )\n        )\n    ).pull_request(**strip_kwargs())\n\n    op_stack = (\n        \"requestReviews\",\n        \"pullRequest\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"requestReviews\"][\"pullRequest\"]\n
    "},{"location":"integrations/prefect-github/organization/","title":"Organization","text":""},{"location":"integrations/prefect-github/organization/#prefect_github.organization","title":"prefect_github.organization","text":"

This module contains GitHub query_organization* tasks.

    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization","title":"query_organization async","text":"

    The query root of GitHub's GraphQL interface.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/organization.py
    @task\nasync def query_organization(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The query root of GitHub's GraphQL interface.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    )\n\n    op_stack = (\"organization\",)\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"]\n
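For example, a minimal sketch of running this task in a flow; the token is a placeholder and the return_fields values are illustrative snake_case field names.

```python
import asyncio

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization


@flow
async def show_org():
    credentials = GitHubCredentials(token="ghp_example_token")  # placeholder token
    # return_fields subsets the returned dict; these field names are illustrative.
    org = await query_organization(
        login="PrefectHQ",
        github_credentials=credentials,
        return_fields=["name", "description", "url"],
    )
    print(org)


if __name__ == "__main__":
    asyncio.run(show_org())
```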
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_audit_log","title":"query_organization_audit_log async","text":"

    Audit log entries of the organization.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| query | str | The query string to filter audit entries. | None |
| order_by | AuditLogOrder | Ordering options for the returned audit log entries. | {'field': 'CREATED_AT', 'direction': 'DESC'} |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_audit_log(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    query: str = None,\n    order_by: graphql_schema.AuditLogOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Audit log entries of the organization.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        query: The query string to filter audit entries.\n        order_by: Ordering options for the returned audit log entries.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).audit_log(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            query=query,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"auditLog\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"auditLog\"]\n
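A sketch of combining the cursor arguments with the default ordering; the organization login, token, and audit-log filter string are hypothetical.

```python
import asyncio

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization_audit_log


@flow
async def recent_audit_entries():
    credentials = GitHubCredentials(token="ghp_example_token")  # placeholder token
    # Fetch the 25 most recent entries; order_by defaults to
    # {"field": "CREATED_AT", "direction": "DESC"}, so newest entries come first.
    audit_log = await query_organization_audit_log(
        login="my-org",                  # placeholder organization login
        github_credentials=credentials,
        first=25,
        query="action:org.add_member",   # hypothetical audit-log filter string
    )
    return audit_log


if __name__ == "__main__":
    asyncio.run(recent_audit_entries())
```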
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_domains","title":"query_organization_domains async","text":"

    A list of domains owned by the organization.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| is_verified | bool | Filter by whether the domain is verified. | None |
| is_approved | bool | Filter by whether the domain is approved. | None |
| order_by | VerifiableDomainOrder | Ordering options for verifiable domains returned. | {'field': 'DOMAIN', 'direction': 'ASC'} |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_domains(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    is_verified: bool = None,\n    is_approved: bool = None,\n    order_by: graphql_schema.VerifiableDomainOrder = {\n        \"field\": \"DOMAIN\",\n        \"direction\": \"ASC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of domains owned by the organization.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        is_verified: Filter by if the domain is verified.\n        is_approved: Filter by if the domain is approved.\n        order_by: Ordering options for verifiable domains returned.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).domains(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            is_verified=is_verified,\n            is_approved=is_approved,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"domains\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"domains\"]\n
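A sketch showing the boolean filters together with an explicit order_by; the login and token are placeholders.

```python
import asyncio

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization_domains


@flow
async def verified_domains():
    credentials = GitHubCredentials(token="ghp_example_token")  # placeholder token
    domains = await query_organization_domains(
        login="my-org",            # placeholder organization login
        github_credentials=credentials,
        is_verified=True,          # only domains that passed verification
        first=10,
        order_by={"field": "DOMAIN", "direction": "ASC"},
    )
    return domains


if __name__ == "__main__":
    asyncio.run(verified_domains())
```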
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_enterprise_owners","title":"query_organization_enterprise_owners async","text":"

    A list of owners of the organization's enterprise account.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| query | str | The search string to look for. | None |
| organization_role | RoleInOrganization | The organization role to filter by. | None |
| order_by | OrgEnterpriseOwnerOrder | Ordering options for enterprise owners returned from the connection. | {'field': 'LOGIN', 'direction': 'ASC'} |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_enterprise_owners(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    query: str = None,\n    organization_role: graphql_schema.RoleInOrganization = None,\n    order_by: graphql_schema.OrgEnterpriseOwnerOrder = {\n        \"field\": \"LOGIN\",\n        \"direction\": \"ASC\",\n    },\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of owners of the organization's enterprise account.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        query: The search string to look for.\n        organization_role: The organization role to filter by.\n        order_by: Ordering options for enterprise owners\n            returned from the connection.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).enterprise_owners(\n        **strip_kwargs(\n            query=query,\n            organization_role=organization_role,\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"enterpriseOwners\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"enterpriseOwners\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_interaction_ability","title":"query_organization_interaction_ability async","text":"

    The interaction ability settings for this organization.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_interaction_ability(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The interaction ability settings for this organization.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).interaction_ability(**strip_kwargs())\n\n    op_stack = (\n        \"organization\",\n        \"interactionAbility\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"interactionAbility\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_ip_allow_list_entries","title":"query_organization_ip_allow_list_entries async","text":"

    The IP addresses that are allowed to access resources owned by the organization.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| order_by | IpAllowListEntryOrder | Ordering options for IP allow list entries returned. | {'field': 'ALLOW_LIST_VALUE', 'direction': 'ASC'} |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_ip_allow_list_entries(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.IpAllowListEntryOrder = {\n        \"field\": \"ALLOW_LIST_VALUE\",\n        \"direction\": \"ASC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The IP addresses that are allowed to access resources owned by the organization.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the list.\n        order_by: Ordering options for IP allow list\n            entries returned.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).ip_allow_list_entries(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"ipAllowListEntries\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"ipAllowListEntries\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_item_showcase","title":"query_organization_item_showcase async","text":"

    Showcases a selection of repositories and gists that the profile owner has either curated or that have been selected automatically based on popularity.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_item_showcase(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Showcases a selection of repositories and gists that the profile owner has\n    either curated or that have been selected automatically based on popularity.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).item_showcase(**strip_kwargs())\n\n    op_stack = (\n        \"organization\",\n        \"itemShowcase\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"itemShowcase\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_member_statuses","title":"query_organization_member_statuses async","text":"

    Get the status messages members of this entity have set that are either public or visible only to the organization.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| order_by | UserStatusOrder | Ordering options for user statuses returned from the connection. | {'field': 'UPDATED_AT', 'direction': 'DESC'} |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_member_statuses(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.UserStatusOrder = {\n        \"field\": \"UPDATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Get the status messages members of this entity have set that are either public\n    or visible only to the organization.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        order_by: Ordering options for user statuses returned\n            from the connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).member_statuses(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"memberStatuses\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"memberStatuses\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_members_with_role","title":"query_organization_members_with_role async","text":"

    A list of users who are members of this organization.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_members_with_role(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of users who are members of this organization.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).members_with_role(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"membersWithRole\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"membersWithRole\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_packages","title":"query_organization_packages async","text":"

    A list of packages under the owner.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| names | Iterable[str] | Find packages by their names. | None |
| repository_id | str | Find packages in a repository by ID. | None |
| package_type | PackageType | Filter registry package by type. | None |
| order_by | PackageOrder | Ordering of the returned packages. | {'field': 'CREATED_AT', 'direction': 'DESC'} |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_packages(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    names: Iterable[str] = None,\n    repository_id: str = None,\n    package_type: graphql_schema.PackageType = None,\n    order_by: graphql_schema.PackageOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of packages under the owner.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        names: Find packages by their names.\n        repository_id: Find packages in a repository by ID.\n        package_type: Filter registry package by type.\n        order_by: Ordering of the returned packages.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).packages(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            names=names,\n            repository_id=repository_id,\n            package_type=package_type,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"packages\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"packages\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_pending_members","title":"query_organization_pending_members async","text":"

    A list of users who have been invited to join this organization.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_pending_members(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of users who have been invited to join this organization.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).pending_members(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"pendingMembers\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"pendingMembers\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_pinnable_items","title":"query_organization_pinnable_items async","text":"

    A list of repositories and gists this profile owner can pin to their profile.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| types | Iterable[PinnableItemType] | Filter the types of pinnable items that are returned. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_pinnable_items(  # noqa\n    login: str,\n    types: Iterable[graphql_schema.PinnableItemType],\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories and gists this profile owner can pin to their profile.\n\n    Args:\n        login: The organization's login.\n        types: Filter the types of pinnable items that are\n            returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).pinnable_items(\n        **strip_kwargs(\n            types=types,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"pinnableItems\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"pinnableItems\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_pinned_items","title":"query_organization_pinned_items async","text":"

    A list of repositories and gists this profile owner has pinned to their profile.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| types | Iterable[PinnableItemType] | Filter the types of pinned items that are returned. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_pinned_items(  # noqa\n    login: str,\n    types: Iterable[graphql_schema.PinnableItemType],\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories and gists this profile owner has pinned to their profile.\n\n    Args:\n        login: The organization's login.\n        types: Filter the types of pinned items that are returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).pinned_items(\n        **strip_kwargs(\n            types=types,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"pinnedItems\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"pinnedItems\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_project","title":"query_organization_project async","text":"

    Find project by number.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| number | int | The project number to find. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_project(  # noqa\n    login: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find project by number.\n\n    Args:\n        login: The organization's login.\n        number: The project number to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).project(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"project\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"project\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_project_next","title":"query_organization_project_next async","text":"

    Find a project by project (beta) number.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| number | int | The project (beta) number. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_project_next(  # noqa\n    login: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find a project by project (beta) number.\n\n    Args:\n        login: The organization's login.\n        number: The project (beta) number.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).project_next(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"projectNext\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"projectNext\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_project_v2","title":"query_organization_project_v2 async","text":"

    Find a project by number.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| number | int | The project number. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_project_v2(  # noqa\n    login: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find a project by number.\n\n    Args:\n        login: The organization's login.\n        number: The project number.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).project_v2(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"projectV2\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"projectV2\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_projects","title":"query_organization_projects async","text":"

    A list of projects under the owner.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| states | Iterable[ProjectState] | A list of states to filter the projects by. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| order_by | ProjectOrder | Ordering options for projects returned from the connection. | None |
| search | str | Query to search projects by, currently only searching by name. | None |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_projects(  # noqa\n    login: str,\n    states: Iterable[graphql_schema.ProjectState],\n    github_credentials: GitHubCredentials,\n    order_by: graphql_schema.ProjectOrder = None,\n    search: str = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of projects under the owner.\n\n    Args:\n        login: The organization's login.\n        states: A list of states to filter the projects by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        order_by: Ordering options for projects returned from the\n            connection.\n        search: Query to search projects by, currently only searching\n            by name.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).projects(\n        **strip_kwargs(\n            states=states,\n            order_by=order_by,\n            search=search,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"projects\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"projects\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_projects_next","title":"query_organization_projects_next async","text":"

    A list of projects (beta) under the owner.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| query | str | A project (beta) to search for under the owner. | None |
| sort_by | ProjectNextOrderField | How to order the returned projects (beta). | 'TITLE' |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_projects_next(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    query: str = None,\n    sort_by: graphql_schema.ProjectNextOrderField = \"TITLE\",\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of projects (beta) under the owner.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        query: A project (beta) to search for under the the owner.\n        sort_by: How to order the returned projects (beta).\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).projects_next(\n        **strip_kwargs(\n            query=query,\n            sort_by=sort_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"projectsNext\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"projectsNext\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_projects_v2","title":"query_organization_projects_v2 async","text":"

    A list of projects under the owner.

    Parameters:

    Name Type Description Default login str

    The organization's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required query str

    A project to search for under the owner.

    None order_by ProjectV2Order

    How to order the returned projects.

    {'field': 'NUMBER', 'direction': 'DESC'} after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_projects_v2(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    query: str = None,\n    order_by: graphql_schema.ProjectV2Order = {\"field\": \"NUMBER\", \"direction\": \"DESC\"},\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of projects under the owner.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        query: A project to search for under the the owner.\n        order_by: How to order the returned projects.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).projects_v2(\n        **strip_kwargs(\n            query=query,\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"projectsV2\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"projectsV2\"]\n
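A minimal sketch of searching an organization's Projects (V2) by title and overriding the default ordering. The login, block name, and search string are placeholders; the `order_by` dict uses the same shape as the documented default.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization_projects_v2


@flow
async def find_roadmap_projects():
    creds = await GitHubCredentials.load("github-creds")  # hypothetical block name
    return await query_organization_projects_v2(
        login="my-org",                # placeholder organization login
        github_credentials=creds,
        query="roadmap",               # free-text project search
        order_by={"field": "NUMBER", "direction": "ASC"},  # override the DESC default
        first=5,
    )
```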
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_recent_projects","title":"query_organization_recent_projects async","text":"

    Recent projects that this user has modified in the context of the owner.

    Parameters:

    Name Type Description Default login str

    The organization's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_recent_projects(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Recent projects that this user has modified in the context of the owner.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).recent_projects(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"recentProjects\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"recentProjects\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_repositories","title":"query_organization_repositories async","text":"

    A list of repositories that the user owns.

    Parameters:

    Name Type Description Default login str

    The organization's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required privacy RepositoryPrivacy

    If non-null, filters repositories according to privacy.

    None order_by RepositoryOrder

    Ordering options for repositories returned from the connection.

    None affiliations Iterable[RepositoryAffiliation]

    Array of viewer's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the current viewer owns.

    None owner_affiliations Iterable[RepositoryAffiliation]

    Array of owner's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the organization or user being viewed owns.

    ('OWNER', 'COLLABORATOR') is_locked bool

    If non-null, filters repositories according to whether they have been locked.

    None after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None is_fork bool

    If non-null, filters repositories according to whether they are forks of another repository.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_repositories(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    privacy: graphql_schema.RepositoryPrivacy = None,\n    order_by: graphql_schema.RepositoryOrder = None,\n    affiliations: Iterable[graphql_schema.RepositoryAffiliation] = None,\n    owner_affiliations: Iterable[graphql_schema.RepositoryAffiliation] = (\n        \"OWNER\",\n        \"COLLABORATOR\",\n    ),\n    is_locked: bool = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    is_fork: bool = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories that the user owns.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        privacy: If non-null, filters repositories according to\n            privacy.\n        order_by: Ordering options for repositories returned from\n            the connection.\n        affiliations: Array of viewer's affiliation options for\n            repositories returned from the connection. For example,\n            OWNER will include only repositories that the current viewer\n            owns.\n        owner_affiliations: Array of owner's affiliation options\n            for repositories returned from the connection. For example,\n            OWNER will include only repositories that the organization\n            or user being viewed owns.\n        is_locked: If non-null, filters repositories according to\n            whether they have been locked.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        is_fork: If non-null, filters repositories according to\n            whether they are forks of another repository.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repositories(\n        **strip_kwargs(\n            privacy=privacy,\n            order_by=order_by,\n            affiliations=affiliations,\n            owner_affiliations=owner_affiliations,\n            is_locked=is_locked,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            is_fork=is_fork,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"repositories\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"repositories\"]\n
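A sketch of listing an organization's public, non-fork repositories. `"PUBLIC"` is assumed to be a valid `RepositoryPrivacy` value, and `"total_count"` is an assumed snake_case field name for `return_fields`; omit `return_fields` to fall back to the defaults in configs/query/*.json.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization_repositories


@flow
async def list_public_repos():
    creds = await GitHubCredentials.load("github-creds")  # hypothetical block name
    return await query_organization_repositories(
        login="my-org",                 # placeholder organization login
        github_credentials=creds,
        privacy="PUBLIC",               # assumed RepositoryPrivacy value
        is_fork=False,                  # exclude forks
        first=20,
        return_fields=["total_count"],  # assumed field name; omit to use the defaults
    )
```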
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_repository","title":"query_organization_repository async","text":"

    Find Repository.

    Parameters:

    Name Type Description Default login str

    The organization's login.

    required name str

    Name of Repository to find.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required follow_renames bool

    Follow repository renames. If disabled, a repository referenced by its old name will return an error.

    True return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_repository(  # noqa\n    login: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find Repository.\n\n    Args:\n        login: The organization's login.\n        name: Name of Repository to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repository(\n        **strip_kwargs(\n            name=name,\n            follow_renames=follow_renames,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"repository\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"repository\"]\n
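Looking up a single repository by name is simpler; the sketch below uses placeholder names and relies on the default `follow_renames=True`.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization_repository


@flow
async def get_repo_details():
    creds = await GitHubCredentials.load("github-creds")  # hypothetical block name
    return await query_organization_repository(
        login="my-org",    # placeholder organization login
        name="my-repo",    # placeholder repository name
        github_credentials=creds,
    )
```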
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_repository_discussion_comments","title":"query_organization_repository_discussion_comments async","text":"

    Discussion comments this user has authored.

    Parameters:

    Name Type Description Default login str

    The organization's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None repository_id str

    Filter discussion comments to only those in a specific repository.

    None only_answers bool

    Filter discussion comments to only those that were marked as the answer.

    False return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_repository_discussion_comments(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    repository_id: str = None,\n    only_answers: bool = False,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Discussion comments this user has authored.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list\n            that come after the specified cursor.\n        before: Returns the elements in the list\n            that come before the specified cursor.\n        first: Returns the first _n_ elements\n            from the list.\n        last: Returns the last _n_ elements from\n            the list.\n        repository_id: Filter discussion comments\n            to only those in a specific repository.\n        only_answers: Filter discussion comments\n            to only those that were marked as the answer.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repository_discussion_comments(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            repository_id=repository_id,\n            only_answers=only_answers,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"repositoryDiscussionComments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"repositoryDiscussionComments\"]\n
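A sketch of pulling only answer comments from one repository's discussions; the repository node ID would normally come from an earlier query, and the other values are placeholders.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import (
    query_organization_repository_discussion_comments,
)


@flow
async def list_accepted_answers(repository_id: str):
    creds = await GitHubCredentials.load("github-creds")  # hypothetical block name
    return await query_organization_repository_discussion_comments(
        login="my-org",               # placeholder organization login
        github_credentials=creds,
        repository_id=repository_id,  # GraphQL node ID from a previous repository query
        only_answers=True,            # keep only comments marked as the answer
        first=50,
    )
```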
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_repository_discussions","title":"query_organization_repository_discussions async","text":"

    Discussions this user has started.

    Parameters:

    Name Type Description Default login str

    The organization's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None order_by DiscussionOrder

    Ordering options for discussions returned from the connection.

    {'field': 'CREATED_AT', 'direction': 'DESC'} repository_id str

    Filter discussions to only those in a specific repository.

    None answered bool

    Filter discussions to only those that have been answered or not. Defaults to including both answered and unanswered discussions.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_repository_discussions(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.DiscussionOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    repository_id: str = None,\n    answered: bool = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Discussions this user has started.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the\n            list.\n        order_by: Ordering options for discussions\n            returned from the connection.\n        repository_id: Filter discussions to only those\n            in a specific repository.\n        answered: Filter discussions to only those that\n            have been answered or not. Defaults to including both\n            answered and unanswered discussions.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repository_discussions(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n            repository_id=repository_id,\n            answered=answered,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"repositoryDiscussions\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"repositoryDiscussions\"]\n
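A sketch of listing the most recently created answered discussions; the `order_by` value restates the documented default, and the login and block name are placeholders.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization_repository_discussions


@flow
async def list_answered_discussions():
    creds = await GitHubCredentials.load("github-creds")  # hypothetical block name
    return await query_organization_repository_discussions(
        login="my-org",   # placeholder organization login
        github_credentials=creds,
        answered=True,    # leave unset (None) to include both answered and unanswered
        order_by={"field": "CREATED_AT", "direction": "DESC"},  # the documented default
        first=25,
    )
```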
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_repository_migrations","title":"query_organization_repository_migrations async","text":"

    A list of all repository migrations for this organization.

    Parameters:

    Name Type Description Default login str

    The organization's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None state MigrationState

    Filter repository migrations by state.

    None repository_name str

    Filter repository migrations by repository name.

    None order_by RepositoryMigrationOrder

    Ordering options for repository migrations returned.

    {'field': 'CREATED_AT', 'direction': 'ASC'} return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_repository_migrations(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    state: graphql_schema.MigrationState = None,\n    repository_name: str = None,\n    order_by: graphql_schema.RepositoryMigrationOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"ASC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of all repository migrations for this organization.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the list.\n        state: Filter repository migrations by state.\n        repository_name: Filter repository migrations by\n            repository name.\n        order_by: Ordering options for repository\n            migrations returned.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repository_migrations(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            state=state,\n            repository_name=repository_name,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"repositoryMigrations\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"repositoryMigrations\"]\n
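A sketch of checking migrations for a single repository using the documented default ordering (oldest first); the names are placeholders and the `state` filter is left unset so migrations in every state are returned.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization_repository_migrations


@flow
async def check_migrations():
    creds = await GitHubCredentials.load("github-creds")  # hypothetical block name
    return await query_organization_repository_migrations(
        login="my-org",             # placeholder organization login
        github_credentials=creds,
        repository_name="my-repo",  # placeholder; omit to list all migrations
        first=10,
    )
```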
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_saml_identity_provider","title":"query_organization_saml_identity_provider async","text":"

    The Organization's SAML identity providers.

    Parameters:

    Name Type Description Default login str

    The organization's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_saml_identity_provider(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The Organization's SAML identity providers.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).saml_identity_provider(**strip_kwargs())\n\n    op_stack = (\n        \"organization\",\n        \"samlIdentityProvider\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"samlIdentityProvider\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_sponsoring","title":"query_organization_sponsoring async","text":"

    List of users and organizations this entity is sponsoring.

    Parameters:

    Name Type Description Default login str

    The organization's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None order_by SponsorOrder

    Ordering options for the users and organizations returned from the connection.

    {'field': 'RELEVANCE', 'direction': 'DESC'} return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_sponsoring(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.SponsorOrder = {\"field\": \"RELEVANCE\", \"direction\": \"DESC\"},\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of users and organizations this entity is sponsoring.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        order_by: Ordering options for the users and organizations\n            returned from the connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsoring(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"sponsoring\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"sponsoring\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_sponsors","title":"query_organization_sponsors async","text":"

    List of sponsors for this user or organization.

    Parameters:

    Name Type Description Default login str

    The organization's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None tier_id str

    If given, will filter for sponsors at the given tier. Will only return sponsors whose tier the viewer is permitted to see.

    None order_by SponsorOrder

    Ordering options for sponsors returned from the connection.

    {'field': 'RELEVANCE', 'direction': 'DESC'} return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_sponsors(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    tier_id: str = None,\n    order_by: graphql_schema.SponsorOrder = {\"field\": \"RELEVANCE\", \"direction\": \"DESC\"},\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of sponsors for this user or organization.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        tier_id: If given, will filter for sponsors at the given tier.\n            Will only return sponsors whose tier the viewer is permitted\n            to see.\n        order_by: Ordering options for sponsors returned from the\n            connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsors(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            tier_id=tier_id,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"sponsors\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"sponsors\"]\n
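A sketch of paging through an organization's sponsors; `first` and `after` implement cursor pagination, and the cursor passed in would be read from a previous page's result.

```python
from typing import Optional

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization_sponsors


@flow
async def list_sponsors(after_cursor: Optional[str] = None):
    creds = await GitHubCredentials.load("github-creds")  # hypothetical block name
    return await query_organization_sponsors(
        login="my-org",       # placeholder organization login
        github_credentials=creds,
        first=25,
        after=after_cursor,   # end cursor of the previous page, if any
    )
```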
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_sponsors_activities","title":"query_organization_sponsors_activities async","text":"

    Events involving this sponsorable, such as new sponsorships.

    Parameters:

    Name Type Description Default login str

    The organization's login.

    required actions Iterable[SponsorsActivityAction]

    Filter activities to only the specified actions.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None period SponsorsActivityPeriod

    Filter activities returned to only those that occurred in the most recent specified time period. Set to ALL to avoid filtering by when the activity occurred.

    'MONTH' order_by SponsorsActivityOrder

    Ordering options for activity returned from the connection.

    {'field': 'TIMESTAMP', 'direction': 'DESC'} return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_sponsors_activities(  # noqa\n    login: str,\n    actions: Iterable[graphql_schema.SponsorsActivityAction],\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    period: graphql_schema.SponsorsActivityPeriod = \"MONTH\",\n    order_by: graphql_schema.SponsorsActivityOrder = {\n        \"field\": \"TIMESTAMP\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Events involving this sponsorable, such as new sponsorships.\n\n    Args:\n        login: The organization's login.\n        actions: Filter activities to only the specified\n            actions.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        period: Filter activities returned to only those\n            that occurred in the most recent specified time period. Set\n            to ALL to avoid filtering by when the activity occurred.\n        order_by: Ordering options for activity returned\n            from the connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsors_activities(\n        **strip_kwargs(\n            actions=actions,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            period=period,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"sponsorsActivities\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"sponsorsActivities\"]\n
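A sketch of listing new-sponsorship events over all time; `"NEW_SPONSORSHIP"` is assumed to be a valid `SponsorsActivityAction` value, and `"ALL"` disables the default one-month window as described above.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization_sponsors_activities


@flow
async def list_new_sponsorships():
    creds = await GitHubCredentials.load("github-creds")  # hypothetical block name
    return await query_organization_sponsors_activities(
        login="my-org",               # placeholder organization login
        actions=["NEW_SPONSORSHIP"],  # assumed SponsorsActivityAction value
        github_credentials=creds,
        period="ALL",                 # do not restrict to the default MONTH window
        first=50,
    )
```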
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_sponsors_listing","title":"query_organization_sponsors_listing async","text":"

    The GitHub Sponsors listing for this user or organization.

    Parameters:

    Name Type Description Default login str

    The organization's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_sponsors_listing(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The GitHub Sponsors listing for this user or organization.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsors_listing(**strip_kwargs())\n\n    op_stack = (\n        \"organization\",\n        \"sponsorsListing\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"sponsorsListing\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_sponsorship_for_viewer_as_sponsor","title":"query_organization_sponsorship_for_viewer_as_sponsor async","text":"

    The sponsorship from the viewer to this user/organization; that is, the sponsorship where you're the sponsor. Only returns a sponsorship if it is active.

    Parameters:

    Name Type Description Default login str

    The organization's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_sponsorship_for_viewer_as_sponsor(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The sponsorship from the viewer to this user/organization; that is, the\n    sponsorship where you're the sponsor. Only returns a sponsorship if it is\n    active.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsorship_for_viewer_as_sponsor(**strip_kwargs())\n\n    op_stack = (\n        \"organization\",\n        \"sponsorshipForViewerAsSponsor\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"sponsorshipForViewerAsSponsor\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_sponsorship_for_viewer_as_sponsorable","title":"query_organization_sponsorship_for_viewer_as_sponsorable async","text":"

    The sponsorship from this user/organization to the viewer; that is, the sponsorship you're receiving. Only returns a sponsorship if it is active.

    Parameters:

    Name Type Description Default login str

    The organization's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_sponsorship_for_viewer_as_sponsorable(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The sponsorship from this user/organization to the viewer; that is, the\n    sponsorship you're receiving. Only returns a sponsorship if it is active.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsorship_for_viewer_as_sponsorable(**strip_kwargs())\n\n    op_stack = (\n        \"organization\",\n        \"sponsorshipForViewerAsSponsorable\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"sponsorshipForViewerAsSponsorable\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_sponsorship_newsletters","title":"query_organization_sponsorship_newsletters async","text":"

    List of sponsorship updates sent from this sponsorable to sponsors.

    Parameters:

    Name Type Description Default login str

    The organization's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None order_by SponsorshipNewsletterOrder

    Ordering options for sponsorship updates returned from the connection.

    {'field': 'CREATED_AT', 'direction': 'DESC'} return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_sponsorship_newsletters(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.SponsorshipNewsletterOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of sponsorship updates sent from this sponsorable to sponsors.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the\n            list.\n        order_by: Ordering options for sponsorship\n            updates returned from the connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsorship_newsletters(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"sponsorshipNewsletters\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"sponsorshipNewsletters\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_sponsorships_as_maintainer","title":"query_organization_sponsorships_as_maintainer async","text":"

    This object's sponsorships as the maintainer.

    Parameters:

    Name Type Description Default login str

    The organization's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None include_private bool

    Whether or not to include private sponsorships in the result set.

    False order_by SponsorshipOrder

    Ordering options for sponsorships returned from this connection. If left blank, the sponsorships will be ordered based on relevancy to the viewer.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_sponsorships_as_maintainer(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    include_private: bool = False,\n    order_by: graphql_schema.SponsorshipOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    This object's sponsorships as the maintainer.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from\n            the list.\n        last: Returns the last _n_ elements from the\n            list.\n        include_private: Whether or not to include\n            private sponsorships in the result set.\n        order_by: Ordering options for sponsorships\n            returned from this connection. If left blank, the\n            sponsorships will be ordered based on relevancy to the\n            viewer.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsorships_as_maintainer(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            include_private=include_private,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"sponsorshipsAsMaintainer\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"sponsorshipsAsMaintainer\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_sponsorships_as_sponsor","title":"query_organization_sponsorships_as_sponsor async","text":"

    This object's sponsorships as the sponsor.

    Parameters:

    Name Type Description Default login str

    The organization's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None order_by SponsorshipOrder

    Ordering options for sponsorships returned from this connection. If left blank, the sponsorships will be ordered based on relevancy to the viewer.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_sponsorships_as_sponsor(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.SponsorshipOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    This object's sponsorships as the sponsor.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the\n            list.\n        order_by: Ordering options for sponsorships\n            returned from this connection. If left blank, the\n            sponsorships will be ordered based on relevancy to the\n            viewer.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsorships_as_sponsor(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"sponsorshipsAsSponsor\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"sponsorshipsAsSponsor\"]\n
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_team","title":"query_organization_team async","text":"

    Find an organization's team by its slug.

    Parameters:

    Name Type Description Default login str

    The organization's login.

    required slug str

    The name or slug of the team to find.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_team(  # noqa\n    login: str,\n    slug: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find an organization's team by its slug.\n\n    Args:\n        login: The organization's login.\n        slug: The name or slug of the team to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).team(\n        **strip_kwargs(\n            slug=slug,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"team\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"team\"]\n
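A sketch of fetching a single team by slug; the organization login, team slug, and block name are placeholders.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization_team


@flow
async def get_platform_team():
    creds = await GitHubCredentials.load("github-creds")  # hypothetical block name
    return await query_organization_team(
        login="my-org",    # placeholder organization login
        slug="platform",   # placeholder team slug
        github_credentials=creds,
    )
```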
    "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_teams","title":"query_organization_teams async","text":"

    A list of teams in this organization.

    Parameters:

    Name Type Description Default login str

    The organization's login.

    required user_logins Iterable[str]

    User logins to filter by.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required privacy TeamPrivacy

    If non-null, filters teams according to privacy.

    None role TeamRole

    If non-null, filters teams according to whether the viewer is an admin or member on team.

    None query str

    If non-null, filters teams with query on team name and team slug.

    None order_by TeamOrder

    Ordering options for teams returned from the connection.

    None ldap_mapped bool

    If true, filters teams that are mapped to an LDAP Group (Enterprise only).

    None root_teams_only bool

    If true, restrict to only root teams.

    False after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/organization.py
    @task\nasync def query_organization_teams(  # noqa\n    login: str,\n    user_logins: Iterable[str],\n    github_credentials: GitHubCredentials,\n    privacy: graphql_schema.TeamPrivacy = None,\n    role: graphql_schema.TeamRole = None,\n    query: str = None,\n    order_by: graphql_schema.TeamOrder = None,\n    ldap_mapped: bool = None,\n    root_teams_only: bool = False,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of teams in this organization.\n\n    Args:\n        login: The organization's login.\n        user_logins: User logins to filter by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        privacy: If non-null, filters teams according to privacy.\n        role: If non-null, filters teams according to whether the viewer\n            is an admin or member on team.\n        query: If non-null, filters teams with query on team name and team\n            slug.\n        order_by: Ordering options for teams returned from the connection.\n        ldap_mapped: If true, filters teams that are mapped to an LDAP\n            Group (Enterprise only).\n        root_teams_only: If true, restrict to only root teams.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).teams(\n        **strip_kwargs(\n            user_logins=user_logins,\n            privacy=privacy,\n            role=role,\n            query=query,\n            order_by=order_by,\n            ldap_mapped=ldap_mapped,\n            root_teams_only=root_teams_only,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"teams\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"teams\"]\n
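A sketch of listing the root-level teams a given user belongs to; note that `user_logins` is a required argument of this task, and the logins shown are placeholders.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization_teams


@flow
async def list_user_root_teams():
    creds = await GitHubCredentials.load("github-creds")  # hypothetical block name
    return await query_organization_teams(
        login="my-org",           # placeholder organization login
        user_logins=["octocat"],  # placeholder user login(s) to filter by
        github_credentials=creds,
        root_teams_only=True,
        first=50,
    )
```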
    "},{"location":"integrations/prefect-github/repository/","title":"Repository","text":""},{"location":"integrations/prefect-github/repository/#prefect_github.repository","title":"prefect_github.repository","text":"

    This module contains GitHub query_repository* tasks and the GitHub storage block.

    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.GitHubRepository","title":"GitHubRepository","text":"

    Bases: ReadableDeploymentStorage

    Interact with files stored on GitHub repositories.

    Source code in prefect_github/repository.py
    class GitHubRepository(ReadableDeploymentStorage):\n    \"\"\"\n    Interact with files stored on GitHub repositories.\n    \"\"\"\n\n    _block_type_name = \"GitHub Repository\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/41971cfecfea5f79ff334164f06ecb34d1038dd4-250x250.png\"  # noqa: E501\n    _documentation_url = \"https://prefecthq.github.io/prefect-github/repository/#prefect_github.repository.GitHubRepository\"  # noqa\n\n    repository_url: str = Field(\n        default=...,\n        title=\"Repository URL\",\n        description=(\n            \"The URL of a GitHub repository to read from, in either HTTPS or SSH \"\n            \"format. If you are using a private repo, it must be in the HTTPS format.\"\n        ),\n    )\n    reference: Optional[str] = Field(\n        default=None,\n        description=\"An optional reference to pin to; can be a branch name or tag.\",\n    )\n    credentials: Optional[GitHubCredentials] = Field(\n        default=None,\n        description=\"An optional GitHubCredentials block for using private GitHub repos.\",  # noqa: E501\n    )\n\n    @validator(\"credentials\")\n    def _ensure_credentials_go_with_https(cls, v: str, values: dict):\n        \"\"\"Ensure that credentials are not provided with 'SSH' formatted GitHub URLs.\"\"\"\n        if v is not None:\n            if urlparse(values[\"repository_url\"]).scheme != \"https\":\n                raise InvalidRepositoryURLError(\n                    (\n                        \"Crendentials can only be used with GitHub repositories \"\n                        \"using the 'HTTPS' format. You must either remove the \"\n                        \"credential if you wish to use the 'SSH' format and are not \"\n                        \"using a private repository, or you must change the repository \"\n                        \"url to the 'HTTPS' format. 
\"\n                    )\n                )\n\n        return v\n\n    def _create_repo_url(self) -> str:\n        \"\"\"Format the URL provided to the `git clone` command.\n\n        For private repos: https://<oauth-key>@github.com/<username>/<repo>.git\n        All other repos should be the same as `self.repository`.\n        \"\"\"\n        url_components = urlparse(self.repository_url)\n        if url_components.scheme == \"https\" and self.credentials is not None:\n            token_value = self.credentials.token.get_secret_value()\n            updated_components = url_components._replace(\n                netloc=f\"{token_value}@{url_components.netloc}\"\n            )\n            full_url = urlunparse(updated_components)\n        else:\n            full_url = self.repository_url\n\n        return full_url\n\n    @staticmethod\n    def _get_paths(\n        dst_dir: Union[str, None], src_dir: str, sub_directory: str\n    ) -> Tuple[str, str]:\n        \"\"\"Returns the fully formed paths for GitHubRepository contents in the form\n        (content_source, content_destination).\n        \"\"\"\n        if dst_dir is None:\n            content_destination = Path(\".\").absolute()\n        else:\n            content_destination = Path(dst_dir)\n\n        content_source = Path(src_dir)\n\n        if sub_directory:\n            content_destination = content_destination.joinpath(sub_directory)\n            content_source = content_source.joinpath(sub_directory)\n\n        return str(content_source), str(content_destination)\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> None:\n        \"\"\"\n        Clones a GitHub project specified in `from_path` to the provided `local_path`;\n        defaults to cloning the repository reference configured on the Block to the\n        present working directory.\n\n        Args:\n            from_path: If provided, interpreted as a subdirectory of the underlying\n                repository that will be copied to the provided local path.\n            local_path: A local path to clone to; defaults to present working directory.\n        \"\"\"\n        # CONSTRUCT COMMAND\n        cmd = f\"git clone {self._create_repo_url()}\"\n        if self.reference:\n            cmd += f\" -b {self.reference}\"\n\n        # Limit git history\n        cmd += \" --depth 1\"\n\n        # Clone to a temporary directory and move the subdirectory over\n        with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n            tmp_path_str = tmp_dir\n            cmd += f\" {tmp_path_str}\"\n            cmd = shlex.split(cmd)\n\n            err_stream = io.StringIO()\n            out_stream = io.StringIO()\n            process = await run_process(cmd, stream_output=(out_stream, err_stream))\n            if process.returncode != 0:\n                err_stream.seek(0)\n                raise RuntimeError(f\"Failed to pull from remote:\\n {err_stream.read()}\")\n\n            content_source, content_destination = self._get_paths(\n                dst_dir=local_path, src_dir=tmp_path_str, sub_directory=from_path\n            )\n\n            copy_tree(src=content_source, dst=content_destination)\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.GitHubRepository.get_directory","title":"get_directory async","text":"

    Clones a GitHub project specified in from_path to the provided local_path; defaults to cloning the repository reference configured on the Block to the present working directory.

    Parameters:

    from_path (Optional[str]): If provided, interpreted as a subdirectory of the underlying repository that will be copied to the provided local path. Default: None.
    local_path (Optional[str]): A local path to clone to; defaults to present working directory. Default: None.

    Source code in prefect_github/repository.py
    @sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> None:\n    \"\"\"\n    Clones a GitHub project specified in `from_path` to the provided `local_path`;\n    defaults to cloning the repository reference configured on the Block to the\n    present working directory.\n\n    Args:\n        from_path: If provided, interpreted as a subdirectory of the underlying\n            repository that will be copied to the provided local path.\n        local_path: A local path to clone to; defaults to present working directory.\n    \"\"\"\n    # CONSTRUCT COMMAND\n    cmd = f\"git clone {self._create_repo_url()}\"\n    if self.reference:\n        cmd += f\" -b {self.reference}\"\n\n    # Limit git history\n    cmd += \" --depth 1\"\n\n    # Clone to a temporary directory and move the subdirectory over\n    with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n        tmp_path_str = tmp_dir\n        cmd += f\" {tmp_path_str}\"\n        cmd = shlex.split(cmd)\n\n        err_stream = io.StringIO()\n        out_stream = io.StringIO()\n        process = await run_process(cmd, stream_output=(out_stream, err_stream))\n        if process.returncode != 0:\n            err_stream.seek(0)\n            raise RuntimeError(f\"Failed to pull from remote:\\n {err_stream.read()}\")\n\n        content_source, content_destination = self._get_paths(\n            dst_dir=local_path, src_dir=tmp_path_str, sub_directory=from_path\n        )\n\n        copy_tree(src=content_source, dst=content_destination)\n
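    A minimal sketch of using get_directory to pull a repository subdirectory onto local disk. The block name, subdirectory, and destination path are placeholders; get_directory is sync_compatible, so it can be called directly in synchronous code or awaited in async code.

    from prefect_github.repository import GitHubRepository

    # Assumes a GitHubRepository block named "my-github-repo" was saved earlier.
    repo_block = GitHubRepository.load("my-github-repo")

    # Clone only the "docs" subdirectory of the pinned reference into ./downloaded-docs.
    repo_block.get_directory(from_path="docs", local_path="./downloaded-docs")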
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository","title":"query_repository async","text":"

    The query root of GitHub's GraphQL interface.

    Parameters:

    owner (str): The login field of a user or organization. Required.
    name (str): The name of the repository. Required.
    github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
    follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
    return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

    Returns:

    Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The query root of GitHub's GraphQL interface.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a repository\n            referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    )\n\n    op_stack = (\"repository\",)\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"]\n
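    A minimal sketch of calling query_repository in a flow. The credentials block name and the return_fields values ("name", "description", "stargazer_count") are assumptions used for illustration; any snake_case field of the GraphQL Repository object could be requested, and omitting return_fields falls back to the defaults in configs/query/*.json.

    import asyncio

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.repository import query_repository


    @flow
    async def repo_summary():
        github_credentials = await GitHubCredentials.load("github-token")  # assumed block name
        repo = await query_repository(
            "PrefectHQ",
            "prefect-github",
            github_credentials=github_credentials,
            return_fields=["name", "description", "stargazer_count"],
        )
        print(repo)
        return repo


    if __name__ == "__main__":
        asyncio.run(repo_summary())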
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_assignable_users","title":"query_repository_assignable_users async","text":"

    A list of users that can be assigned to issues in this repository.

    Parameters:

    owner (str): The login field of a user or organization. Required.
    name (str): The name of the repository. Required.
    github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
    follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
    query (str): Filters users with query on user name and login. Default: None.
    after (str): Returns the elements in the list that come after the specified cursor. Default: None.
    before (str): Returns the elements in the list that come before the specified cursor. Default: None.
    first (int): Returns the first n elements from the list. Default: None.
    last (int): Returns the last n elements from the list. Default: None.
    return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

    Returns:

    Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_assignable_users(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    query: str = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of users that can be assigned to issues in this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        query: Filters users with query on user name and login.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).assignable_users(\n        **strip_kwargs(\n            query=query,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"assignableUsers\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"assignableUsers\"]\n
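    A minimal sketch of filtering assignable users. The search string, page size, and credentials block name are placeholders; pagination beyond the first page would use the after cursor from the returned connection, whose exact shape depends on the default return fields.

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.repository import query_repository_assignable_users


    @flow
    async def find_assignees():
        github_credentials = await GitHubCredentials.load("github-token")  # assumed block name
        users = await query_repository_assignable_users(
            "PrefectHQ",
            "prefect-github",
            github_credentials=github_credentials,
            query="prefect",  # placeholder search over user name and login
            first=5,          # only the first five matches
        )
        print(users)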
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_branch_protection_rules","title":"query_repository_branch_protection_rules async","text":"

    A list of branch protection rules for this repository.

    Parameters:

    owner (str): The login field of a user or organization. Required.
    name (str): The name of the repository. Required.
    github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
    follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
    after (str): Returns the elements in the list that come after the specified cursor. Default: None.
    before (str): Returns the elements in the list that come before the specified cursor. Default: None.
    first (int): Returns the first n elements from the list. Default: None.
    last (int): Returns the last n elements from the list. Default: None.
    return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

    Returns:

    Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_branch_protection_rules(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of branch protection rules for this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the\n            list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).branch_protection_rules(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"branchProtectionRules\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"branchProtectionRules\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_code_of_conduct","title":"query_repository_code_of_conduct async","text":"

    Returns the code of conduct for this repository.

    Parameters:

    owner (str): The login field of a user or organization. Required.
    name (str): The name of the repository. Required.
    github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
    follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
    return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

    Returns:

    Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_code_of_conduct(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns the code of conduct for this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).code_of_conduct(**strip_kwargs())\n\n    op_stack = (\n        \"repository\",\n        \"codeOfConduct\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"codeOfConduct\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_collaborators","title":"query_repository_collaborators async","text":"

    A list of collaborators associated with the repository.

    Parameters:

    owner (str): The login field of a user or organization. Required.
    name (str): The name of the repository. Required.
    github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
    follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
    affiliation (CollaboratorAffiliation): Collaborators affiliation level with a repository. Default: None.
    query (str): Filters users with query on user name and login. Default: None.
    after (str): Returns the elements in the list that come after the specified cursor. Default: None.
    before (str): Returns the elements in the list that come before the specified cursor. Default: None.
    first (int): Returns the first n elements from the list. Default: None.
    last (int): Returns the last n elements from the list. Default: None.
    return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

    Returns:

    Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_collaborators(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    affiliation: graphql_schema.CollaboratorAffiliation = None,\n    query: str = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of collaborators associated with the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        affiliation: Collaborators affiliation level with a\n            repository.\n        query: Filters users with query on user name and login.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).collaborators(\n        **strip_kwargs(\n            affiliation=affiliation,\n            query=query,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"collaborators\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"collaborators\"]\n
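    A minimal sketch of listing direct collaborators. Passing the affiliation as the plain string "DIRECT" is an assumption that the underlying GraphQL enum accepts string values; the credentials block name is also a placeholder.

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.repository import query_repository_collaborators


    @flow
    async def direct_collaborators():
        github_credentials = await GitHubCredentials.load("github-token")  # assumed block name
        collaborators = await query_repository_collaborators(
            "PrefectHQ",
            "prefect-github",
            github_credentials=github_credentials,
            affiliation="DIRECT",  # assumed to be accepted as a CollaboratorAffiliation value
            first=20,
        )
        print(collaborators)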
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_commit_comments","title":"query_repository_commit_comments async","text":"

    A list of commit comments associated with the repository.

    Parameters:

    owner (str): The login field of a user or organization. Required.
    name (str): The name of the repository. Required.
    github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
    follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
    after (str): Returns the elements in the list that come after the specified cursor. Default: None.
    before (str): Returns the elements in the list that come before the specified cursor. Default: None.
    first (int): Returns the first n elements from the list. Default: None.
    last (int): Returns the last n elements from the list. Default: None.
    return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

    Returns:

    Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_commit_comments(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of commit comments associated with the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).commit_comments(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"commitComments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"commitComments\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_contact_links","title":"query_repository_contact_links async","text":"

    Returns a list of contact links associated to the repository.

    Parameters:

    owner (str): The login field of a user or organization. Required.
    name (str): The name of the repository. Required.
    github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
    follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
    return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

    Returns:

    Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_contact_links(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns a list of contact links associated to the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).contact_links(**strip_kwargs())\n\n    op_stack = (\n        \"repository\",\n        \"contactLinks\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"contactLinks\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_default_branch_ref","title":"query_repository_default_branch_ref async","text":"

    The Ref associated with the repository's default branch.

    Parameters:

    owner (str): The login field of a user or organization. Required.
    name (str): The name of the repository. Required.
    github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
    follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
    return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

    Returns:

    Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_default_branch_ref(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The Ref associated with the repository's default branch.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).default_branch_ref(**strip_kwargs())\n\n    op_stack = (\n        \"repository\",\n        \"defaultBranchRef\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"defaultBranchRef\"]\n
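    A minimal sketch of reading the default branch. The credentials block name is a placeholder, and printing the whole result avoids assuming which fields (for example, the ref name) are present in the default return fields.

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.repository import query_repository_default_branch_ref


    @flow
    async def default_branch():
        github_credentials = await GitHubCredentials.load("github-token")  # assumed block name
        ref = await query_repository_default_branch_ref(
            "PrefectHQ",
            "prefect-github",
            github_credentials=github_credentials,
        )
        print(ref)  # typically includes the ref name, e.g. "main"
        return ref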
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_deploy_keys","title":"query_repository_deploy_keys async","text":"

    A list of deploy keys that are on this repository.

    Parameters:

    owner (str): The login field of a user or organization. Required.
    name (str): The name of the repository. Required.
    github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
    follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
    after (str): Returns the elements in the list that come after the specified cursor. Default: None.
    before (str): Returns the elements in the list that come before the specified cursor. Default: None.
    first (int): Returns the first n elements from the list. Default: None.
    last (int): Returns the last n elements from the list. Default: None.
    return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

    Returns:

    Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_deploy_keys(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of deploy keys that are on this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).deploy_keys(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"deployKeys\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"deployKeys\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_deployments","title":"query_repository_deployments async","text":"

    Deployments associated with the repository.

    Parameters:

    owner (str): The login field of a user or organization. Required.
    name (str): The name of the repository. Required.
    environments (Iterable[str]): Environments to list deployments for. Required.
    github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
    follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
    order_by (DeploymentOrder): Ordering options for deployments returned from the connection. Default: {'field': 'CREATED_AT', 'direction': 'ASC'}.
    after (str): Returns the elements in the list that come after the specified cursor. Default: None.
    before (str): Returns the elements in the list that come before the specified cursor. Default: None.
    first (int): Returns the first n elements from the list. Default: None.
    last (int): Returns the last n elements from the list. Default: None.
    return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

    Returns:

    Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_deployments(  # noqa\n    owner: str,\n    name: str,\n    environments: Iterable[str],\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    order_by: graphql_schema.DeploymentOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"ASC\",\n    },\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Deployments associated with the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        environments: Environments to list deployments for.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        order_by: Ordering options for deployments returned from the\n            connection.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).deployments(\n        **strip_kwargs(\n            environments=environments,\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"deployments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"deployments\"]\n
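    A minimal sketch of listing deployments for specific environments, newest first. The environment name and credentials block name are placeholders; order_by overrides the default CREATED_AT/ASC ordering.

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.repository import query_repository_deployments


    @flow
    async def recent_deployments():
        github_credentials = await GitHubCredentials.load("github-token")  # assumed block name
        deployments = await query_repository_deployments(
            "PrefectHQ",
            "prefect-github",
            environments=["production"],  # placeholder environment name
            github_credentials=github_credentials,
            order_by={"field": "CREATED_AT", "direction": "DESC"},
            first=10,
        )
        print(deployments)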
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_discussion","title":"query_repository_discussion async","text":"

    Returns a single discussion from the current repository by number.

    Parameters:

    owner (str): The login field of a user or organization. Required.
    name (str): The name of the repository. Required.
    number (int): The number for the discussion to be returned. Required.
    github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
    follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
    return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

    Returns:

    Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_discussion(  # noqa\n    owner: str,\n    name: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns a single discussion from the current repository by number.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        number: The number for the discussion to be returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).discussion(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"discussion\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"discussion\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_discussion_categories","title":"query_repository_discussion_categories async","text":"

    A list of discussion categories that are available in the repository.

    Parameters:

    owner (str): The login field of a user or organization. Required.
    name (str): The name of the repository. Required.
    github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
    follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
    after (str): Returns the elements in the list that come after the specified cursor. Default: None.
    before (str): Returns the elements in the list that come before the specified cursor. Default: None.
    first (int): Returns the first n elements from the list. Default: None.
    last (int): Returns the last n elements from the list. Default: None.
    filter_by_assignable (bool): Filter by categories that are assignable by the viewer. Default: False.
    return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

    Returns:

    Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_discussion_categories(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    filter_by_assignable: bool = False,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of discussion categories that are available in the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the list.\n        filter_by_assignable: Filter by categories that\n            are assignable by the viewer.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).discussion_categories(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            filter_by_assignable=filter_by_assignable,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"discussionCategories\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"discussionCategories\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_discussion_category","title":"query_repository_discussion_category async","text":"

    A discussion category by slug.

    Parameters:

    owner (str): The login field of a user or organization. Required.
    name (str): The name of the repository. Required.
    slug (str): The slug of the discussion category to be returned. Required.
    github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
    follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
    return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

    Returns:

    Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_discussion_category(  # noqa\n    owner: str,\n    name: str,\n    slug: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A discussion category by slug.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        slug: The slug of the discussion category to be\n            returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).discussion_category(\n        **strip_kwargs(\n            slug=slug,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"discussionCategory\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"discussionCategory\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_discussions","title":"query_repository_discussions async","text":"

    A list of discussions that have been opened in the repository.

    Parameters:

    owner (str): The login field of a user or organization. Required.
    name (str): The name of the repository. Required.
    github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
    follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
    after (str): Returns the elements in the list that come after the specified cursor. Default: None.
    before (str): Returns the elements in the list that come before the specified cursor. Default: None.
    first (int): Returns the first n elements from the list. Default: None.
    last (int): Returns the last n elements from the list. Default: None.
    category_id (str): Only include discussions that belong to the category with this ID. Default: None.
    order_by (DiscussionOrder): Ordering options for discussions returned from the connection. Default: {'field': 'UPDATED_AT', 'direction': 'DESC'}.
    return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

    Returns:

    Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_discussions(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    category_id: str = None,\n    order_by: graphql_schema.DiscussionOrder = {\n        \"field\": \"UPDATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of discussions that have been opened in the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        category_id: Only include discussions that belong to the\n            category with this ID.\n        order_by: Ordering options for discussions returned from the\n            connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).discussions(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            category_id=category_id,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"discussions\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"discussions\"]\n
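    A minimal sketch of listing recent discussions. The credentials block name is a placeholder; category_id is omitted here (so all categories are included) and the default UPDATED_AT/DESC ordering is kept.

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.repository import query_repository_discussions


    @flow
    async def recent_discussions():
        github_credentials = await GitHubCredentials.load("github-token")  # assumed block name
        discussions = await query_repository_discussions(
            "PrefectHQ",
            "prefect-github",
            github_credentials=github_credentials,
            first=10,  # first ten discussions under the default UPDATED_AT / DESC ordering
        )
        print(discussions)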
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_environment","title":"query_repository_environment async","text":"

    Returns a single active environment from the current repository by name.

    Parameters:

    owner (str): The login field of a user or organization. Required.
    name (str): The name of the repository. Required.
    environment_name (str): The name of the environment to be returned. Required.
    github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
    follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
    return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

    Returns:

    Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_environment(  # noqa\n    owner: str,\n    name: str,\n    environment_name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns a single active environment from the current repository by name.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        environment_name: The name of the environment to be returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).environment(\n        **strip_kwargs(\n            name=environment_name,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"environment\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"environment\"]\n
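    A minimal sketch of fetching a single active environment by name. The environment name and the credentials block name are placeholders.

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.repository import query_repository_environment


    @flow
    async def production_environment():
        github_credentials = await GitHubCredentials.load("github-token")  # assumed block name
        environment = await query_repository_environment(
            "PrefectHQ",
            "prefect-github",
            environment_name="production",  # placeholder environment name
            github_credentials=github_credentials,
        )
        print(environment)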
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_environments","title":"query_repository_environments async","text":"

    A list of environments that are in this repository.

    Parameters:

    owner (str): The login field of a user or organization. Required.
    name (str): The name of the repository. Required.
    github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
    follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
    after (str): Returns the elements in the list that come after the specified cursor. Default: None.
    before (str): Returns the elements in the list that come before the specified cursor. Default: None.
    first (int): Returns the first n elements from the list. Default: None.
    last (int): Returns the last n elements from the list. Default: None.
    return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

    Returns:

    Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_environments(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of environments that are in this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).environments(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"environments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"environments\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_forks","title":"query_repository_forks async","text":"

    A list of direct forked repositories.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| privacy | RepositoryPrivacy | If non-null, filters repositories according to privacy. | None |
| order_by | RepositoryOrder | Ordering options for repositories returned from the connection. | None |
| affiliations | Iterable[RepositoryAffiliation] | Array of viewer's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the current viewer owns. | None |
| owner_affiliations | Iterable[RepositoryAffiliation] | Array of owner's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the organization or user being viewed owns. | ('OWNER', 'COLLABORATOR') |
| is_locked | bool | If non-null, filters repositories according to whether they have been locked. | None |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_forks(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    privacy: graphql_schema.RepositoryPrivacy = None,\n    order_by: graphql_schema.RepositoryOrder = None,\n    affiliations: Iterable[graphql_schema.RepositoryAffiliation] = None,\n    owner_affiliations: Iterable[graphql_schema.RepositoryAffiliation] = (\n        \"OWNER\",\n        \"COLLABORATOR\",\n    ),\n    is_locked: bool = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of direct forked repositories.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        privacy: If non-null, filters repositories according to privacy.\n        order_by: Ordering options for repositories returned from the\n            connection.\n        affiliations: Array of viewer's affiliation options for\n            repositories returned from the connection. For example,\n            OWNER will include only repositories that the current viewer\n            owns.\n        owner_affiliations: Array of owner's affiliation options for\n            repositories returned from the connection. For example,\n            OWNER will include only repositories that the organization\n            or user being viewed owns.\n        is_locked: If non-null, filters repositories according to whether\n            they have been locked.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).forks(\n        **strip_kwargs(\n            privacy=privacy,\n            order_by=order_by,\n            affiliations=affiliations,\n            owner_affiliations=owner_affiliations,\n            is_locked=is_locked,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"forks\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"forks\"]\n
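The filtering arguments map straight through to the GraphQL forks connection. A sketch under the assumption that plain enum strings and plain dicts are accepted for enum and input-object arguments (as the defaults above suggest); the block name and repository values are placeholders.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_forks


@flow
async def list_public_forks():
    creds = await GitHubCredentials.load("github-token")  # assumed block name
    forks = await query_repository_forks(
        owner="PrefectHQ",   # placeholder owner
        name="prefect",      # placeholder repository
        github_credentials=creds,
        privacy="PUBLIC",    # assumes plain enum strings are accepted
        order_by={"field": "STARGAZERS", "direction": "DESC"},
        first=5,
    )
    return forks
```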
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_funding_links","title":"query_repository_funding_links async","text":"

    The funding links for this repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_funding_links(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The funding links for this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).funding_links(**strip_kwargs())\n\n    op_stack = (\n        \"repository\",\n        \"fundingLinks\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"fundingLinks\"]\n
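Sub-selections that take no extra arguments, such as funding_links, interaction_ability, license_info, latest_release, and owner, all follow the same minimal call shape. An illustrative sketch (the block name and repository are assumptions):

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_funding_links


@flow
async def show_funding_links():
    creds = await GitHubCredentials.load("github-token")  # assumed block name
    return await query_repository_funding_links(
        owner="PrefectHQ",   # placeholder owner
        name="prefect",      # placeholder repository
        github_credentials=creds,
    )
```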
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_interaction_ability","title":"query_repository_interaction_ability async","text":"

    The interaction ability settings for this repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_interaction_ability(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The interaction ability settings for this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).interaction_ability(**strip_kwargs())\n\n    op_stack = (\n        \"repository\",\n        \"interactionAbility\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"interactionAbility\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_issue","title":"query_repository_issue async","text":"

    Returns a single issue from the current repository by number.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| number | int | The number for the issue to be returned. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_issue(  # noqa\n    owner: str,\n    name: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns a single issue from the current repository by number.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        number: The number for the issue to be returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).issue(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"issue\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"issue\"]\n
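Fetching a single issue only needs the repository coordinates and the issue number. A small sketch with placeholder values (the block name is an assumption):

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_issue


@flow
async def show_issue(number: int = 1):
    creds = await GitHubCredentials.load("github-token")  # assumed block name
    issue = await query_repository_issue(
        owner="PrefectHQ",   # placeholder owner
        name="prefect",      # placeholder repository
        number=number,
        github_credentials=creds,
    )
    return issue
```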
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_issue_or_pull_request","title":"query_repository_issue_or_pull_request async","text":"

    Returns a single issue-like object from the current repository by number.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| number | int | The number for the issue to be returned. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_issue_or_pull_request(  # noqa\n    owner: str,\n    name: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns a single issue-like object from the current repository by number.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        number: The number for the issue to be returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).issue_or_pull_request(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"issueOrPullRequest\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"issueOrPullRequest\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_issue_templates","title":"query_repository_issue_templates async","text":"

Returns a list of issue templates associated with the repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_issue_templates(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns a list of issue templates associated to the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).issue_templates(**strip_kwargs())\n\n    op_stack = (\n        \"repository\",\n        \"issueTemplates\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"issueTemplates\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_issues","title":"query_repository_issues async","text":"

    A list of issues that have been opened in the repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| labels | Iterable[str] | A list of label names to filter the pull requests by. | required |
| states | Iterable[IssueState] | A list of states to filter the issues by. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| order_by | IssueOrder | Ordering options for issues returned from the connection. | None |
| filter_by | IssueFilters | Filtering options for issues returned from the connection. | None |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_issues(  # noqa\n    owner: str,\n    name: str,\n    labels: Iterable[str],\n    states: Iterable[graphql_schema.IssueState],\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    order_by: graphql_schema.IssueOrder = None,\n    filter_by: graphql_schema.IssueFilters = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of issues that have been opened in the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        labels: A list of label names to filter the pull requests by.\n        states: A list of states to filter the issues by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        order_by: Ordering options for issues returned from the\n            connection.\n        filter_by: Filtering options for issues returned from the\n            connection.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).issues(\n        **strip_kwargs(\n            labels=labels,\n            states=states,\n            order_by=order_by,\n            filter_by=filter_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"issues\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"issues\"]\n
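A hedged example of combining the label and state filters with return_fields. The block name, label, and the total_count field name are assumptions (the valid field names come from configs/query/*.json), and plain enum strings are assumed to be accepted for states:

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_issues


@flow
async def list_open_bugs():
    creds = await GitHubCredentials.load("github-token")  # assumed block name
    issues = await query_repository_issues(
        owner="PrefectHQ",    # placeholder owner
        name="prefect",       # placeholder repository
        labels=["bug"],       # placeholder label
        states=["OPEN"],      # assumes plain enum strings are accepted
        github_credentials=creds,
        first=20,
        # Assumed snake_case field name; check configs/query/*.json for the real set.
        return_fields=["total_count"],
    )
    return issues
```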
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_label","title":"query_repository_label async","text":"

    Returns a single label by name.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| label_name | str | Label name. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_label(  # noqa\n    owner: str,\n    name: str,\n    label_name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns a single label by name.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        label_name: Label name.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).label(\n        **strip_kwargs(\n            name=label_name,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"label\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"label\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_labels","title":"query_repository_labels async","text":"

    A list of labels associated with the repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| order_by | LabelOrder | Ordering options for labels returned from the connection. | {'field': 'CREATED_AT', 'direction': 'ASC'} |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| query | str | If provided, searches labels by name and description. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_labels(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    order_by: graphql_schema.LabelOrder = {\"field\": \"CREATED_AT\", \"direction\": \"ASC\"},\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    query: str = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of labels associated with the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        order_by: Ordering options for labels returned from the\n            connection.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        query: If provided, searches labels by name and description.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).labels(\n        **strip_kwargs(\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            query=query,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"labels\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"labels\"]\n
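The order_by default shown above can be overridden with a plain dict of the same shape. A sketch with placeholder values, assuming dict inputs are accepted as in the default:

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_labels


@flow
async def search_labels():
    creds = await GitHubCredentials.load("github-token")  # assumed block name
    labels = await query_repository_labels(
        owner="PrefectHQ",   # placeholder owner
        name="prefect",      # placeholder repository
        github_credentials=creds,
        # Override the default CREATED_AT/ASC ordering shown above.
        order_by={"field": "NAME", "direction": "ASC"},
        query="bug",         # placeholder search string
        first=10,
    )
    return labels
```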
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_languages","title":"query_repository_languages async","text":"

    A list containing a breakdown of the language composition of the repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| order_by | LanguageOrder | Order for connection. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_languages(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.LanguageOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list containing a breakdown of the language composition of the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        order_by: Order for connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).languages(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"languages\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"languages\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_latest_release","title":"query_repository_latest_release async","text":"

    Get the latest release for the repository if one exists.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_latest_release(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Get the latest release for the repository if one exists.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).latest_release(**strip_kwargs())\n\n    op_stack = (\n        \"repository\",\n        \"latestRelease\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"latestRelease\"]\n
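return_fields can trim the selection down to just the fields you need. A sketch where the snake_case field names are assumptions drawn from the GitHub Release type rather than from this reference; the block name and repository are placeholders:

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_latest_release


@flow
async def latest_release_summary():
    creds = await GitHubCredentials.load("github-token")  # assumed block name
    release = await query_repository_latest_release(
        owner="PrefectHQ",   # placeholder owner
        name="prefect",      # placeholder repository
        github_credentials=creds,
        # Assumed field names; the valid set comes from configs/query/*.json.
        return_fields=["tag_name", "published_at"],
    )
    return release
```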
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_license_info","title":"query_repository_license_info async","text":"

    The license associated with the repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_license_info(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The license associated with the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).license_info(**strip_kwargs())\n\n    op_stack = (\n        \"repository\",\n        \"licenseInfo\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"licenseInfo\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_mentionable_users","title":"query_repository_mentionable_users async","text":"

    A list of Users that can be mentioned in the context of the repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| query | str | Filters users with query on user name and login. | None |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_mentionable_users(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    query: str = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of Users that can be mentioned in the context of the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        query: Filters users with query on user name and login.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).mentionable_users(\n        **strip_kwargs(\n            query=query,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"mentionableUsers\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"mentionableUsers\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_milestone","title":"query_repository_milestone async","text":"

    Returns a single milestone from the current repository by number.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| number | int | The number for the milestone to be returned. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_milestone(  # noqa\n    owner: str,\n    name: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns a single milestone from the current repository by number.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        number: The number for the milestone to be returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).milestone(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"milestone\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"milestone\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_milestones","title":"query_repository_milestones async","text":"

    A list of milestones associated with the repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| states | Iterable[MilestoneState] | Filter by the state of the milestones. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| order_by | MilestoneOrder | Ordering options for milestones. | None |
| query | str | Filters milestones with a query on the title. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_milestones(  # noqa\n    owner: str,\n    name: str,\n    states: Iterable[graphql_schema.MilestoneState],\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.MilestoneOrder = None,\n    query: str = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of milestones associated with the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        states: Filter by the state of the milestones.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        order_by: Ordering options for milestones.\n        query: Filters milestones with a query on the title.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).milestones(\n        **strip_kwargs(\n            states=states,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n            query=query,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"milestones\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"milestones\"]\n
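A sketch of filtering milestones by state and ordering them by due date. Enum strings and the ordering dict are assumed to be accepted (mirroring the defaults used elsewhere in this module), and the repository values are placeholders:

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_milestones


@flow
async def list_open_milestones():
    creds = await GitHubCredentials.load("github-token")  # assumed block name
    milestones = await query_repository_milestones(
        owner="PrefectHQ",   # placeholder owner
        name="prefect",      # placeholder repository
        states=["OPEN"],     # assumes plain enum strings are accepted
        github_credentials=creds,
        order_by={"field": "DUE_DATE", "direction": "ASC"},
        first=10,
    )
    return milestones
```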
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_object","title":"query_repository_object async","text":"

    A Git object in the repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| oid | datetime | The Git object ID. | None |
| expression | str | A Git revision expression suitable for rev-parse. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_object(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    oid: datetime = None,\n    expression: str = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A Git object in the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        oid: The Git object ID.\n        expression: A Git revision expression suitable for rev-parse.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).object(\n        **strip_kwargs(\n            oid=oid,\n            expression=expression,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"object\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"object\"]\n
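The expression argument accepts a rev-parse style "ref:path" expression as an alternative to passing an object ID. An illustrative sketch with placeholder values:

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_object


@flow
async def show_readme_object():
    creds = await GitHubCredentials.load("github-token")  # assumed block name
    obj = await query_repository_object(
        owner="PrefectHQ",   # placeholder owner
        name="prefect",      # placeholder repository
        github_credentials=creds,
        # A rev-parse style expression: <ref>:<path>; both parts are placeholders.
        expression="main:README.md",
    )
    return obj
```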
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_owner","title":"query_repository_owner async","text":"

    The User owner of the repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_owner(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The User owner of the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).owner(**strip_kwargs())\n\n    op_stack = (\n        \"repository\",\n        \"owner\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"owner\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_packages","title":"query_repository_packages async","text":"

    A list of packages under the owner.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| names | Iterable[str] | Find packages by their names. | None |
| repository_id | str | Find packages in a repository by ID. | None |
| package_type | PackageType | Filter registry package by type. | None |
| order_by | PackageOrder | Ordering of the returned packages. | {'field': 'CREATED_AT', 'direction': 'DESC'} |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_packages(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    names: Iterable[str] = None,\n    repository_id: str = None,\n    package_type: graphql_schema.PackageType = None,\n    order_by: graphql_schema.PackageOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of packages under the owner.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        names: Find packages by their names.\n        repository_id: Find packages in a repository by ID.\n        package_type: Filter registry package by type.\n        order_by: Ordering of the returned packages.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).packages(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            names=names,\n            repository_id=repository_id,\n            package_type=package_type,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"packages\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"packages\"]\n
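A sketch of listing packages of a single registry type; the enum string and repository values are placeholder assumptions, and the default CREATED_AT/DESC ordering shown above is left in place:

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_packages


@flow
async def list_container_packages():
    creds = await GitHubCredentials.load("github-token")  # assumed block name
    packages = await query_repository_packages(
        owner="PrefectHQ",       # placeholder owner
        name="prefect",          # placeholder repository
        github_credentials=creds,
        package_type="DOCKER",   # assumes plain enum strings are accepted
        first=10,
    )
    return packages
```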
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_pinned_discussions","title":"query_repository_pinned_discussions async","text":"

    A list of discussions that have been pinned in this repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_pinned_discussions(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of discussions that have been pinned in this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).pinned_discussions(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"pinnedDiscussions\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"pinnedDiscussions\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_pinned_issues","title":"query_repository_pinned_issues async","text":"

    A list of pinned issues for this repository.

    Parameters:

owner (str, required): The login field of a user or organization.
name (str, required): The name of the repository.
github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
after (str, default None): Returns the elements in the list that come after the specified cursor.
before (str, default None): Returns the elements in the list that come before the specified cursor.
first (int, default None): Returns the first n elements from the list.
last (int, default None): Returns the last n elements from the list.
return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_pinned_issues(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of pinned issues for this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).pinned_issues(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"pinnedIssues\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"pinnedIssues\"]\n
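A short sketch of fetching a handful of pinned issues inside an async flow (placeholder owner, repository, and token):

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_pinned_issues

@flow
async def show_pinned_issues():
    creds = GitHubCredentials(token="ghp_example")  # placeholder token
    return await query_repository_pinned_issues(
        owner="PrefectHQ",   # placeholder owner
        name="prefect",      # placeholder repository
        github_credentials=creds,
        first=3,             # only the first three pinned issues
    )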
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_primary_language","title":"query_repository_primary_language async","text":"

    The primary language of the repository's code.

    Parameters:

owner (str, required): The login field of a user or organization.
name (str, required): The name of the repository.
github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_primary_language(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The primary language of the repository's code.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).primary_language(**strip_kwargs())\n\n    op_stack = (\n        \"repository\",\n        \"primaryLanguage\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"primaryLanguage\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_project","title":"query_repository_project async","text":"

    Find project by number.

    Parameters:

owner (str, required): The login field of a user or organization.
name (str, required): The name of the repository.
number (int, required): The project number to find.
github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_project(  # noqa\n    owner: str,\n    name: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find project by number.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        number: The project number to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).project(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"project\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"project\"]\n
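For example, a classic project board can be looked up by its number; the values below are placeholders:

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_project

@flow
async def get_project_board():
    creds = GitHubCredentials(token="ghp_example")  # placeholder token
    return await query_repository_project(
        owner="PrefectHQ",   # placeholder owner
        name="prefect",      # placeholder repository
        number=1,            # placeholder project number from the GitHub UI
        github_credentials=creds,
    )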
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_project_next","title":"query_repository_project_next async","text":"

    Finds and returns the Project (beta) according to the provided Project (beta) number.

    Parameters:

owner (str, required): The login field of a user or organization.
name (str, required): The name of the repository.
number (int, required): The ProjectNext number.
github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_project_next(  # noqa\n    owner: str,\n    name: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Finds and returns the Project (beta) according to the provided Project (beta)\n    number.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        number: The ProjectNext number.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).project_next(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"projectNext\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"projectNext\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_project_v2","title":"query_repository_project_v2 async","text":"

    Finds and returns the Project according to the provided Project number.

    Parameters:

owner (str, required): The login field of a user or organization.
name (str, required): The name of the repository.
number (int, required): The Project number.
github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_project_v2(  # noqa\n    owner: str,\n    name: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Finds and returns the Project according to the provided Project number.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        number: The Project number.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).project_v2(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"projectV2\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"projectV2\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_projects","title":"query_repository_projects async","text":"

    A list of projects under the owner.

    Parameters:

owner (str, required): The login field of a user or organization.
name (str, required): The name of the repository.
states (Iterable[ProjectState], required): A list of states to filter the projects by.
github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
order_by (ProjectOrder, default None): Ordering options for projects returned from the connection.
search (str, default None): Query to search projects by, currently only searching by name.
after (str, default None): Returns the elements in the list that come after the specified cursor.
before (str, default None): Returns the elements in the list that come before the specified cursor.
first (int, default None): Returns the first n elements from the list.
last (int, default None): Returns the last n elements from the list.
return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_projects(  # noqa\n    owner: str,\n    name: str,\n    states: Iterable[graphql_schema.ProjectState],\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    order_by: graphql_schema.ProjectOrder = None,\n    search: str = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of projects under the owner.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        states: A list of states to filter the projects by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        order_by: Ordering options for projects returned from the\n            connection.\n        search: Query to search projects by, currently only searching\n            by name.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).projects(\n        **strip_kwargs(\n            states=states,\n            order_by=order_by,\n            search=search,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"projects\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"projects\"]\n
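A sketch of filtering classic projects by state, assuming the string form of the ProjectState enum ("OPEN") is accepted, as it is for the string and dict defaults shown elsewhere in this reference; owner, repository, and token are placeholders:

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_projects

@flow
async def open_projects():
    creds = GitHubCredentials(token="ghp_example")  # placeholder token
    return await query_repository_projects(
        owner="PrefectHQ",   # placeholder owner
        name="prefect",      # placeholder repository
        states=["OPEN"],     # assumed string form of ProjectState
        github_credentials=creds,
        first=20,
    )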
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_projects_next","title":"query_repository_projects_next async","text":"

    List of projects (beta) linked to this repository.

    Parameters:

owner (str, required): The login field of a user or organization.
name (str, required): The name of the repository.
github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
after (str, default None): Returns the elements in the list that come after the specified cursor.
before (str, default None): Returns the elements in the list that come before the specified cursor.
first (int, default None): Returns the first n elements from the list.
last (int, default None): Returns the last n elements from the list.
query (str, default None): A project (beta) to search for linked to the repo.
sort_by (ProjectNextOrderField, default 'TITLE'): How to order the returned project (beta) objects.
return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_projects_next(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    query: str = None,\n    sort_by: graphql_schema.ProjectNextOrderField = \"TITLE\",\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of projects (beta) linked to this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        query: A project (beta) to search for linked to the repo.\n        sort_by: How to order the returned project (beta) objects.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).projects_next(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            query=query,\n            sort_by=sort_by,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"projectsNext\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"projectsNext\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_projects_v2","title":"query_repository_projects_v2 async","text":"

    List of projects linked to this repository.

    Parameters:

owner (str, required): The login field of a user or organization.
name (str, required): The name of the repository.
github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
after (str, default None): Returns the elements in the list that come after the specified cursor.
before (str, default None): Returns the elements in the list that come before the specified cursor.
first (int, default None): Returns the first n elements from the list.
last (int, default None): Returns the last n elements from the list.
query (str, default None): A project to search for linked to the repo.
order_by (ProjectV2Order, default {'field': 'NUMBER', 'direction': 'DESC'}): How to order the returned projects.
return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_projects_v2(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    query: str = None,\n    order_by: graphql_schema.ProjectV2Order = {\"field\": \"NUMBER\", \"direction\": \"DESC\"},\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of projects linked to this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        query: A project to search for linked to the repo.\n        order_by: How to order the returned projects.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).projects_v2(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            query=query,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"projectsV2\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"projectsV2\"]\n
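A sketch that spells out the documented order_by default and pages through the first few Projects (v2); owner, repository, and token are placeholders:

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_projects_v2

@flow
async def newest_projects_v2():
    creds = GitHubCredentials(token="ghp_example")  # placeholder token
    return await query_repository_projects_v2(
        owner="PrefectHQ",   # placeholder owner
        name="prefect",      # placeholder repository
        github_credentials=creds,
        first=5,
        # same shape as the documented default, made explicit here
        order_by={"field": "NUMBER", "direction": "DESC"},
    )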
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_pull_request","title":"query_repository_pull_request async","text":"

    Returns a single pull request from the current repository by number.

    Parameters:

owner (str, required): The login field of a user or organization.
name (str, required): The name of the repository.
number (int, required): The number for the pull request to be returned.
github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_pull_request(  # noqa\n    owner: str,\n    name: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns a single pull request from the current repository by number.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        number: The number for the pull request to be returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).pull_request(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"pullRequest\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"pullRequest\"]\n
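A sketch of fetching one pull request and trimming the response with return_fields; the owner, repository, PR number, and token are placeholders, and the field names are assumed to be valid snake_case pull request fields:

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_pull_request

@flow
async def get_pull_request():
    creds = GitHubCredentials(token="ghp_example")  # placeholder token
    return await query_repository_pull_request(
        owner="PrefectHQ",                 # placeholder owner
        name="prefect",                    # placeholder repository
        number=1234,                       # placeholder pull request number
        github_credentials=creds,
        return_fields=["title", "state"],  # assumed snake_case field names
    )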
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_pull_request_templates","title":"query_repository_pull_request_templates async","text":"

    Returns a list of pull request templates associated with the repository.

    Parameters:

owner (str, required): The login field of a user or organization.
name (str, required): The name of the repository.
github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_pull_request_templates(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns a list of pull request templates associated to the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).pull_request_templates(**strip_kwargs())\n\n    op_stack = (\n        \"repository\",\n        \"pullRequestTemplates\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"pullRequestTemplates\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_pull_requests","title":"query_repository_pull_requests async","text":"

    A list of pull requests that have been opened in the repository.

    Parameters:

owner (str, required): The login field of a user or organization.
name (str, required): The name of the repository.
states (Iterable[PullRequestState], required): A list of states to filter the pull requests by.
labels (Iterable[str], required): A list of label names to filter the pull requests by.
github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
head_ref_name (str, default None): The head ref name to filter the pull requests by.
base_ref_name (str, default None): The base ref name to filter the pull requests by.
order_by (IssueOrder, default None): Ordering options for pull requests returned from the connection.
after (str, default None): Returns the elements in the list that come after the specified cursor.
before (str, default None): Returns the elements in the list that come before the specified cursor.
first (int, default None): Returns the first n elements from the list.
last (int, default None): Returns the last n elements from the list.
return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_pull_requests(  # noqa\n    owner: str,\n    name: str,\n    states: Iterable[graphql_schema.PullRequestState],\n    labels: Iterable[str],\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    head_ref_name: str = None,\n    base_ref_name: str = None,\n    order_by: graphql_schema.IssueOrder = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of pull requests that have been opened in the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        states: A list of states to filter the pull requests by.\n        labels: A list of label names to filter the pull requests\n            by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        head_ref_name: The head ref name to filter the pull\n            requests by.\n        base_ref_name: The base ref name to filter the pull\n            requests by.\n        order_by: Ordering options for pull requests returned from\n            the connection.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).pull_requests(\n        **strip_kwargs(\n            states=states,\n            labels=labels,\n            head_ref_name=head_ref_name,\n            base_ref_name=base_ref_name,\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"pullRequests\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"pullRequests\"]\n
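A sketch of listing open pull requests carrying a given label, assuming the string form of the PullRequestState enum is accepted; owner, repository, label, and token are placeholders:

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_pull_requests

@flow
async def open_bug_pull_requests():
    creds = GitHubCredentials(token="ghp_example")  # placeholder token
    return await query_repository_pull_requests(
        owner="PrefectHQ",   # placeholder owner
        name="prefect",      # placeholder repository
        states=["OPEN"],     # assumed string form of PullRequestState
        labels=["bug"],      # placeholder label name
        github_credentials=creds,
        first=50,
    )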
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_recent_projects","title":"query_repository_recent_projects async","text":"

    Recent projects that this user has modified in the context of the owner.

    Parameters:

owner (str, required): The login field of a user or organization.
name (str, required): The name of the repository.
github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
after (str, default None): Returns the elements in the list that come after the specified cursor.
before (str, default None): Returns the elements in the list that come before the specified cursor.
first (int, default None): Returns the first n elements from the list.
last (int, default None): Returns the last n elements from the list.
return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_recent_projects(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Recent projects that this user has modified in the context of the owner.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).recent_projects(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"recentProjects\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"recentProjects\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_ref","title":"query_repository_ref async","text":"

    Fetch a given ref from the repository.

    Parameters:

owner (str, required): The login field of a user or organization.
name (str, required): The name of the repository.
qualified_name (str, required): The ref to retrieve. Fully qualified matches are checked in order (refs/heads/master) before falling back onto checks for short name matches (master).
github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_ref(  # noqa\n    owner: str,\n    name: str,\n    qualified_name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Fetch a given ref from the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        qualified_name: The ref to retrieve. Fully qualified matches are\n            checked in order (`refs/heads/master`) before falling back\n            onto checks for short name matches (`master`).\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).ref(\n        **strip_kwargs(\n            qualified_name=qualified_name,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"ref\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"ref\"]\n
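A sketch of resolving a branch ref by its fully qualified name; owner, repository, branch, and token are placeholders:

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_ref

@flow
async def get_main_branch_ref():
    creds = GitHubCredentials(token="ghp_example")  # placeholder token
    return await query_repository_ref(
        owner="PrefectHQ",                 # placeholder owner
        name="prefect",                    # placeholder repository
        qualified_name="refs/heads/main",  # placeholder fully qualified ref
        github_credentials=creds,
    )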
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_refs","title":"query_repository_refs async","text":"

    Fetch a list of refs from the repository.

    Parameters:

owner (str, required): The login field of a user or organization.
name (str, required): The name of the repository.
ref_prefix (str, required): A ref name prefix like refs/heads/, refs/tags/, etc.
github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
query (str, default None): Filters refs with query on name.
after (str, default None): Returns the elements in the list that come after the specified cursor.
before (str, default None): Returns the elements in the list that come before the specified cursor.
first (int, default None): Returns the first n elements from the list.
last (int, default None): Returns the last n elements from the list.
direction (OrderDirection, default None): DEPRECATED: use orderBy. The ordering direction.
order_by (RefOrder, default None): Ordering options for refs returned from the connection.
return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_refs(  # noqa\n    owner: str,\n    name: str,\n    ref_prefix: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    query: str = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    direction: graphql_schema.OrderDirection = None,\n    order_by: graphql_schema.RefOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Fetch a list of refs from the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        ref_prefix: A ref name prefix like `refs/heads/`, `refs/tags/`,\n            etc.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        query: Filters refs with query on name.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        direction: DEPRECATED: use orderBy. The ordering direction.\n        order_by: Ordering options for refs returned from the connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).refs(\n        **strip_kwargs(\n            ref_prefix=ref_prefix,\n            query=query,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            direction=direction,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"refs\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"refs\"]\n
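A sketch of listing tag refs via ref_prefix; owner, repository, and token are placeholders:

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_refs

@flow
async def list_tag_refs():
    creds = GitHubCredentials(token="ghp_example")  # placeholder token
    return await query_repository_refs(
        owner="PrefectHQ",        # placeholder owner
        name="prefect",           # placeholder repository
        ref_prefix="refs/tags/",  # list tag refs rather than branch refs
        github_credentials=creds,
        first=10,
    )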
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_release","title":"query_repository_release async","text":"

    Look up a single release given various criteria.

    Parameters:

owner (str, required): The login field of a user or organization.
name (str, required): The name of the repository.
tag_name (str, required): The name of the Tag the Release was created from.
github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_release(  # noqa\n    owner: str,\n    name: str,\n    tag_name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Lookup a single release given various criteria.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        tag_name: The name of the Tag the Release was created from.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).release(\n        **strip_kwargs(\n            tag_name=tag_name,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"release\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"release\"]\n
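A sketch of looking up a release by the tag it was created from; owner, repository, tag, and token are placeholders:

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_release

@flow
async def get_release_for_tag():
    creds = GitHubCredentials(token="ghp_example")  # placeholder token
    return await query_repository_release(
        owner="PrefectHQ",   # placeholder owner
        name="prefect",      # placeholder repository
        tag_name="2.0.0",    # placeholder tag name
        github_credentials=creds,
    )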
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_releases","title":"query_repository_releases async","text":"

    List of releases which are dependent on this repository.

    Parameters:

owner (str, required): The login field of a user or organization.
name (str, required): The name of the repository.
github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
after (str, default None): Returns the elements in the list that come after the specified cursor.
before (str, default None): Returns the elements in the list that come before the specified cursor.
first (int, default None): Returns the first n elements from the list.
last (int, default None): Returns the last n elements from the list.
order_by (ReleaseOrder, default None): Order for connection.
return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_releases(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.ReleaseOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of releases which are dependent on this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        order_by: Order for connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).releases(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"releases\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"releases\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_repository_topics","title":"query_repository_repository_topics async","text":"

    A list of applied repository-topic associations for this repository.

    Parameters:

owner (str, required): The login field of a user or organization.
name (str, required): The name of the repository.
github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
after (str, default None): Returns the elements in the list that come after the specified cursor.
before (str, default None): Returns the elements in the list that come before the specified cursor.
first (int, default None): Returns the first n elements from the list.
last (int, default None): Returns the last n elements from the list.
return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_repository_topics(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of applied repository-topic associations for this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).repository_topics(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"repositoryTopics\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"repositoryTopics\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_stargazers","title":"query_repository_stargazers async","text":"

    A list of users who have starred this starrable.

    Parameters:

owner (str, required): The login field of a user or organization.
name (str, required): The name of the repository.
github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
after (str, default None): Returns the elements in the list that come after the specified cursor.
before (str, default None): Returns the elements in the list that come before the specified cursor.
first (int, default None): Returns the first n elements from the list.
last (int, default None): Returns the last n elements from the list.
order_by (StarOrder, default None): Order for connection.
return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_stargazers(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.StarOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of users who have starred this starrable.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        order_by: Order for connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).stargazers(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"stargazers\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"stargazers\"]\n
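A sketch of paging stargazers in star order, assuming STARRED_AT is a valid StarOrder field; owner, repository, and token are placeholders:

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_stargazers

@flow
async def most_recent_stargazers():
    creds = GitHubCredentials(token="ghp_example")  # placeholder token
    return await query_repository_stargazers(
        owner="PrefectHQ",   # placeholder owner
        name="prefect",      # placeholder repository
        github_credentials=creds,
        last=25,             # the 25 most recent stargazers
        order_by={"field": "STARRED_AT", "direction": "ASC"},  # assumed StarOrder field
    )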
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_submodules","title":"query_repository_submodules async","text":"

    Returns a list of all submodules in this repository parsed from the .gitmodules file as of the default branch's HEAD commit.

    Parameters:

    - `owner` (str, required): The login field of a user or organization.
    - `name` (str, required): The name of the repository.
    - `github_credentials` (GitHubCredentials, required): Credentials to use for authentication with GitHub.
    - `follow_renames` (bool, default `True`): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
    - `after` (str, default `None`): Returns the elements in the list that come after the specified cursor.
    - `before` (str, default `None`): Returns the elements in the list that come before the specified cursor.
    - `first` (int, default `None`): Returns the first n elements from the list.
    - `last` (int, default `None`): Returns the last n elements from the list.
    - `return_fields` (Iterable[str], default `None`): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    Returns:

    - `Dict[str, Any]`: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_submodules(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns a list of all submodules in this repository parsed from the .gitmodules\n    file as of the default branch's HEAD commit.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).submodules(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"submodules\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"submodules\"]\n
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_vulnerability_alerts","title":"query_repository_vulnerability_alerts async","text":"

    A list of vulnerability alerts that are on this repository.

    Parameters:

    - `owner` (str, required): The login field of a user or organization.
    - `name` (str, required): The name of the repository.
    - `states` (Iterable[RepositoryVulnerabilityAlertState], required): Filter by the state of the alert.
    - `dependency_scopes` (Iterable[RepositoryVulnerabilityAlertDependencyScope], required): Filter by the scope of the alert's dependency.
    - `github_credentials` (GitHubCredentials, required): Credentials to use for authentication with GitHub.
    - `follow_renames` (bool, default `True`): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
    - `after` (str, default `None`): Returns the elements in the list that come after the specified cursor.
    - `before` (str, default `None`): Returns the elements in the list that come before the specified cursor.
    - `first` (int, default `None`): Returns the first n elements from the list.
    - `last` (int, default `None`): Returns the last n elements from the list.
    - `return_fields` (Iterable[str], default `None`): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    Returns:

    - `Dict[str, Any]`: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_vulnerability_alerts(  # noqa\n    owner: str,\n    name: str,\n    states: Iterable[graphql_schema.RepositoryVulnerabilityAlertState],\n    dependency_scopes: Iterable[\n        graphql_schema.RepositoryVulnerabilityAlertDependencyScope\n    ],\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of vulnerability alerts that are on this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        states: Filter by the state of the alert.\n        dependency_scopes: Filter by the scope of the\n            alert's dependency.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).vulnerability_alerts(\n        **strip_kwargs(\n            states=states,\n            dependency_scopes=dependency_scopes,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"vulnerabilityAlerts\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"vulnerabilityAlerts\"]\n
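
    A minimal sketch of filtering alerts: the owner and repository names are hypothetical, and the `"OPEN"`/`"RUNTIME"` strings are assumed GitHub GraphQL enum names passed as plain strings:

    ```python
    import asyncio

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.repository import query_repository_vulnerability_alerts


    @flow
    async def open_runtime_alerts():
        creds = await GitHubCredentials.load("github-credentials")  # assumed block name
        # states and dependency_scopes are required filters.
        return await query_repository_vulnerability_alerts(
            owner="my-org",    # hypothetical owner
            name="my-repo",    # hypothetical repository
            states=["OPEN"],
            dependency_scopes=["RUNTIME"],
            github_credentials=creds,
            first=20,
        )


    if __name__ == "__main__":
        asyncio.run(open_runtime_alerts())
    ```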
    "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_watchers","title":"query_repository_watchers async","text":"

    A list of users watching the repository.

    Parameters:

    - `owner` (str, required): The login field of a user or organization.
    - `name` (str, required): The name of the repository.
    - `github_credentials` (GitHubCredentials, required): Credentials to use for authentication with GitHub.
    - `follow_renames` (bool, default `True`): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
    - `after` (str, default `None`): Returns the elements in the list that come after the specified cursor.
    - `before` (str, default `None`): Returns the elements in the list that come before the specified cursor.
    - `first` (int, default `None`): Returns the first n elements from the list.
    - `last` (int, default `None`): Returns the last n elements from the list.
    - `return_fields` (Iterable[str], default `None`): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    Returns:

    - `Dict[str, Any]`: A dict of the returned fields.

    Source code in prefect_github/repository.py
    @task\nasync def query_repository_watchers(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of users watching the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).watchers(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"watchers\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"watchers\"]\n
    "},{"location":"integrations/prefect-github/repository_owner/","title":"Repository Owner","text":""},{"location":"integrations/prefect-github/repository_owner/#prefect_github.repository_owner","title":"prefect_github.repository_owner","text":"

    This module contains GitHub query_repository_owner* tasks.

    "},{"location":"integrations/prefect-github/repository_owner/#prefect_github.repository_owner.query_repository_owner","title":"query_repository_owner async","text":"

    The query root of GitHub's GraphQL interface.

    Parameters:

    - `login` (str, required): The username to lookup the owner by.
    - `github_credentials` (GitHubCredentials, required): Credentials to use for authentication with GitHub.
    - `return_fields` (Iterable[str], default `None`): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    Returns:

    - `Dict[str, Any]`: A dict of the returned fields.

    Source code in prefect_github/repository_owner.py
    @task\nasync def query_repository_owner(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The query root of GitHub's GraphQL interface.\n\n    Args:\n        login: The username to lookup the owner by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository_owner(\n        **strip_kwargs(\n            login=login,\n        )\n    )\n\n    op_stack = (\"repositoryOwner\",)\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repositoryOwner\"]\n
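
    A short sketch of subsetting the returned fields; the block name is an assumption, and `"login"` is assumed to be one of the snake_case fields listed in configs/query/*.json:

    ```python
    import asyncio

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.repository_owner import query_repository_owner


    @flow
    async def owner_overview():
        creds = await GitHubCredentials.load("github-credentials")  # assumed block name
        # Only request the "login" field instead of the configured defaults.
        return await query_repository_owner(
            login="PrefectHQ",
            github_credentials=creds,
            return_fields=["login"],
        )


    if __name__ == "__main__":
        asyncio.run(owner_overview())
    ```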
    "},{"location":"integrations/prefect-github/repository_owner/#prefect_github.repository_owner.query_repository_owner_repositories","title":"query_repository_owner_repositories async","text":"

    A list of repositories that the user owns.

    Parameters:

    - `login` (str, required): The username to lookup the owner by.
    - `github_credentials` (GitHubCredentials, required): Credentials to use for authentication with GitHub.
    - `privacy` (RepositoryPrivacy, default `None`): If non-null, filters repositories according to privacy.
    - `order_by` (RepositoryOrder, default `None`): Ordering options for repositories returned from the connection.
    - `affiliations` (Iterable[RepositoryAffiliation], default `None`): Array of viewer's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the current viewer owns.
    - `owner_affiliations` (Iterable[RepositoryAffiliation], default `('OWNER', 'COLLABORATOR')`): Array of owner's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the organization or user being viewed owns.
    - `is_locked` (bool, default `None`): If non-null, filters repositories according to whether they have been locked.
    - `after` (str, default `None`): Returns the elements in the list that come after the specified cursor.
    - `before` (str, default `None`): Returns the elements in the list that come before the specified cursor.
    - `first` (int, default `None`): Returns the first n elements from the list.
    - `last` (int, default `None`): Returns the last n elements from the list.
    - `is_fork` (bool, default `None`): If non-null, filters repositories according to whether they are forks of another repository.
    - `return_fields` (Iterable[str], default `None`): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    Returns:

    - `Dict[str, Any]`: A dict of the returned fields.

    Source code in prefect_github/repository_owner.py
    @task\nasync def query_repository_owner_repositories(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    privacy: graphql_schema.RepositoryPrivacy = None,\n    order_by: graphql_schema.RepositoryOrder = None,\n    affiliations: Iterable[graphql_schema.RepositoryAffiliation] = None,\n    owner_affiliations: Iterable[graphql_schema.RepositoryAffiliation] = (\n        \"OWNER\",\n        \"COLLABORATOR\",\n    ),\n    is_locked: bool = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    is_fork: bool = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories that the user owns.\n\n    Args:\n        login: The username to lookup the owner by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        privacy: If non-null, filters repositories according to\n            privacy.\n        order_by: Ordering options for repositories returned from\n            the connection.\n        affiliations: Array of viewer's affiliation options for\n            repositories returned from the connection. For example,\n            OWNER will include only repositories that the current viewer\n            owns.\n        owner_affiliations: Array of owner's affiliation options\n            for repositories returned from the connection. For example,\n            OWNER will include only repositories that the organization\n            or user being viewed owns.\n        is_locked: If non-null, filters repositories according to\n            whether they have been locked.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        is_fork: If non-null, filters repositories according to\n            whether they are forks of another repository.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository_owner(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repositories(\n        **strip_kwargs(\n            privacy=privacy,\n            order_by=order_by,\n            affiliations=affiliations,\n            owner_affiliations=owner_affiliations,\n            is_locked=is_locked,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            is_fork=is_fork,\n        )\n    )\n\n    op_stack = (\n        \"repositoryOwner\",\n        \"repositories\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repositoryOwner\"][\"repositories\"]\n
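
    A sketch of ordering and filtering the listed repositories, under the assumption that `order_by` accepts a plain dict with the same shape as the documented default of query_user_packages elsewhere in this reference; the enum strings and flow name are illustrative:

    ```python
    import asyncio

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.repository_owner import query_repository_owner_repositories


    @flow
    async def recently_updated_public_repos():
        creds = await GitHubCredentials.load("github-credentials")  # assumed block name
        # Request the five most recently updated public repositories.
        return await query_repository_owner_repositories(
            login="PrefectHQ",
            github_credentials=creds,
            privacy="PUBLIC",
            order_by={"field": "UPDATED_AT", "direction": "DESC"},
            first=5,
        )


    if __name__ == "__main__":
        asyncio.run(recently_updated_public_repos())
    ```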
    "},{"location":"integrations/prefect-github/repository_owner/#prefect_github.repository_owner.query_repository_owner_repository","title":"query_repository_owner_repository async","text":"

    Find Repository.

    Parameters:

    - `login` (str, required): The username to lookup the owner by.
    - `name` (str, required): Name of Repository to find.
    - `github_credentials` (GitHubCredentials, required): Credentials to use for authentication with GitHub.
    - `follow_renames` (bool, default `True`): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
    - `return_fields` (Iterable[str], default `None`): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    Returns:

    - `Dict[str, Any]`: A dict of the returned fields.

    Source code in prefect_github/repository_owner.py
    @task\nasync def query_repository_owner_repository(  # noqa\n    login: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find Repository.\n\n    Args:\n        login: The username to lookup the owner by.\n        name: Name of Repository to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository_owner(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repository(\n        **strip_kwargs(\n            name=name,\n            follow_renames=follow_renames,\n        )\n    )\n\n    op_stack = (\n        \"repositoryOwner\",\n        \"repository\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repositoryOwner\"][\"repository\"]\n
    "},{"location":"integrations/prefect-github/user/","title":"User","text":""},{"location":"integrations/prefect-github/user/#prefect_github.user","title":"prefect_github.user","text":"

    This module contains GitHub query_user* tasks.

    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user","title":"query_user async","text":"

    The query root of GitHub's GraphQL interface.

    Parameters:

    - `login` (str, required): The user's login.
    - `github_credentials` (GitHubCredentials, required): Credentials to use for authentication with GitHub.
    - `return_fields` (Iterable[str], default `None`): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    Returns:

    - `Dict[str, Any]`: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The query root of GitHub's GraphQL interface.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    )\n\n    op_stack = (\"user\",)\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"]\n
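
    A minimal usage sketch; the flow name, credentials block name, and the "octocat" login are illustrative assumptions:

    ```python
    import asyncio

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.user import query_user


    @flow
    async def lookup_user():
        creds = await GitHubCredentials.load("github-credentials")  # assumed block name
        # Returns the default fields for the "octocat" user.
        return await query_user(login="octocat", github_credentials=creds)


    if __name__ == "__main__":
        asyncio.run(lookup_user())
    ```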
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_commit_comments","title":"query_user_commit_comments async","text":"

    A list of commit comments made by this user.

    Parameters:

    - `login` (str, required): The user's login.
    - `github_credentials` (GitHubCredentials, required): Credentials to use for authentication with GitHub.
    - `after` (str, default `None`): Returns the elements in the list that come after the specified cursor.
    - `before` (str, default `None`): Returns the elements in the list that come before the specified cursor.
    - `first` (int, default `None`): Returns the first n elements from the list.
    - `last` (int, default `None`): Returns the last n elements from the list.
    - `return_fields` (Iterable[str], default `None`): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    Returns:

    - `Dict[str, Any]`: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_commit_comments(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of commit comments made by this user.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).commit_comments(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"commitComments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"commitComments\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_contributions_collection","title":"query_user_contributions_collection async","text":"

    The collection of contributions this user has made to different repositories.

    Parameters:

    - `login` (str, required): The user's login.
    - `github_credentials` (GitHubCredentials, required): Credentials to use for authentication with GitHub.
    - `organization_id` (str, default `None`): The ID of the organization used to filter contributions.
    - `from_` (datetime, default `None`): Only contributions made at this time or later will be counted. If omitted, defaults to a year ago.
    - `to` (datetime, default `None`): Only contributions made before and up to (including) this time will be counted. If omitted, defaults to the current time or one year from the provided from argument.
    - `return_fields` (Iterable[str], default `None`): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    Returns:

    - `Dict[str, Any]`: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_contributions_collection(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    organization_id: str = None,\n    from_: datetime = None,\n    to: datetime = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The collection of contributions this user has made to different repositories.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        organization_id: The ID of the organization\n            used to filter contributions.\n        from_: Only contributions made at this time or\n            later will be counted. If omitted, defaults to a year ago.\n        to: Only contributions made before and up to\n            (including) this time will be counted. If omitted, defaults\n            to the current time or one year from the provided from\n            argument.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).contributions_collection(\n        **strip_kwargs(\n            organization_id=organization_id,\n            from_=from_,\n            to=to,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"contributionsCollection\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"contributionsCollection\"]\n
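
    A sketch of bounding the contribution window; the dates, login, and block name are illustrative assumptions. Note the trailing underscore on `from_`, which avoids the Python keyword:

    ```python
    import asyncio
    from datetime import datetime, timezone

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.user import query_user_contributions_collection


    @flow
    async def contributions_first_half_2024():
        creds = await GitHubCredentials.load("github-credentials")  # assumed block name
        # Count only contributions made between January and June 2024.
        return await query_user_contributions_collection(
            login="octocat",
            github_credentials=creds,
            from_=datetime(2024, 1, 1, tzinfo=timezone.utc),
            to=datetime(2024, 6, 30, tzinfo=timezone.utc),
        )


    if __name__ == "__main__":
        asyncio.run(contributions_first_half_2024())
    ```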
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_followers","title":"query_user_followers async","text":"

    A list of users the given user is followed by.

    Parameters:

    - `login` (str, required): The user's login.
    - `github_credentials` (GitHubCredentials, required): Credentials to use for authentication with GitHub.
    - `after` (str, default `None`): Returns the elements in the list that come after the specified cursor.
    - `before` (str, default `None`): Returns the elements in the list that come before the specified cursor.
    - `first` (int, default `None`): Returns the first n elements from the list.
    - `last` (int, default `None`): Returns the last n elements from the list.
    - `return_fields` (Iterable[str], default `None`): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    Returns:

    - `Dict[str, Any]`: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_followers(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of users the given user is followed by.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).followers(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"followers\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"followers\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_following","title":"query_user_following async","text":"

    A list of users the given user is following.

    Parameters:

    - `login` (str, required): The user's login.
    - `github_credentials` (GitHubCredentials, required): Credentials to use for authentication with GitHub.
    - `after` (str, default `None`): Returns the elements in the list that come after the specified cursor.
    - `before` (str, default `None`): Returns the elements in the list that come before the specified cursor.
    - `first` (int, default `None`): Returns the first n elements from the list.
    - `last` (int, default `None`): Returns the last n elements from the list.
    - `return_fields` (Iterable[str], default `None`): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    Returns:

    - `Dict[str, Any]`: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_following(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of users the given user is following.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).following(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"following\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"following\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_gist","title":"query_user_gist async","text":"

    Find gist by repo name.

    Parameters:

    - `login` (str, required): The user's login.
    - `name` (str, required): The gist name to find.
    - `github_credentials` (GitHubCredentials, required): Credentials to use for authentication with GitHub.
    - `return_fields` (Iterable[str], default `None`): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    Returns:

    - `Dict[str, Any]`: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_gist(  # noqa\n    login: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find gist by repo name.\n\n    Args:\n        login: The user's login.\n        name: The gist name to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).gist(\n        **strip_kwargs(\n            name=name,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"gist\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"gist\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_gist_comments","title":"query_user_gist_comments async","text":"

    A list of gist comments made by this user.

    Parameters:

    - `login` (str, required): The user's login.
    - `github_credentials` (GitHubCredentials, required): Credentials to use for authentication with GitHub.
    - `after` (str, default `None`): Returns the elements in the list that come after the specified cursor.
    - `before` (str, default `None`): Returns the elements in the list that come before the specified cursor.
    - `first` (int, default `None`): Returns the first n elements from the list.
    - `last` (int, default `None`): Returns the last n elements from the list.
    - `return_fields` (Iterable[str], default `None`): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    Returns:

    - `Dict[str, Any]`: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_gist_comments(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of gist comments made by this user.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).gist_comments(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"gistComments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"gistComments\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_gists","title":"query_user_gists async","text":"

    A list of the Gists the user has created.

    Parameters:

    - `login` (str, required): The user's login.
    - `github_credentials` (GitHubCredentials, required): Credentials to use for authentication with GitHub.
    - `privacy` (GistPrivacy, default `None`): Filters Gists according to privacy.
    - `order_by` (GistOrder, default `None`): Ordering options for gists returned from the connection.
    - `after` (str, default `None`): Returns the elements in the list that come after the specified cursor.
    - `before` (str, default `None`): Returns the elements in the list that come before the specified cursor.
    - `first` (int, default `None`): Returns the first n elements from the list.
    - `last` (int, default `None`): Returns the last n elements from the list.
    - `return_fields` (Iterable[str], default `None`): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    Returns:

    - `Dict[str, Any]`: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_gists(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    privacy: graphql_schema.GistPrivacy = None,\n    order_by: graphql_schema.GistOrder = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of the Gists the user has created.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        privacy: Filters Gists according to privacy.\n        order_by: Ordering options for gists returned from the connection.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).gists(\n        **strip_kwargs(\n            privacy=privacy,\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"gists\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"gists\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_interaction_ability","title":"query_user_interaction_ability async","text":"

    The interaction ability settings for this user.

    Parameters:

    - `login` (str, required): The user's login.
    - `github_credentials` (GitHubCredentials, required): Credentials to use for authentication with GitHub.
    - `return_fields` (Iterable[str], default `None`): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    Returns:

    - `Dict[str, Any]`: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_interaction_ability(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The interaction ability settings for this user.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).interaction_ability(**strip_kwargs())\n\n    op_stack = (\n        \"user\",\n        \"interactionAbility\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"interactionAbility\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_issue_comments","title":"query_user_issue_comments async","text":"

    A list of issue comments made by this user.

    Parameters:

    - `login` (str, required): The user's login.
    - `github_credentials` (GitHubCredentials, required): Credentials to use for authentication with GitHub.
    - `order_by` (IssueCommentOrder, default `None`): Ordering options for issue comments returned from the connection.
    - `after` (str, default `None`): Returns the elements in the list that come after the specified cursor.
    - `before` (str, default `None`): Returns the elements in the list that come before the specified cursor.
    - `first` (int, default `None`): Returns the first n elements from the list.
    - `last` (int, default `None`): Returns the last n elements from the list.
    - `return_fields` (Iterable[str], default `None`): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    Returns:

    - `Dict[str, Any]`: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_issue_comments(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    order_by: graphql_schema.IssueCommentOrder = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of issue comments made by this user.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        order_by: Ordering options for issue comments returned\n            from the connection.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).issue_comments(\n        **strip_kwargs(\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"issueComments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"issueComments\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_issues","title":"query_user_issues async","text":"

    A list of issues associated with this user.

    Parameters:

    - `login` (str, required): The user's login.
    - `labels` (Iterable[str], required): A list of label names to filter the pull requests by.
    - `states` (Iterable[IssueState], required): A list of states to filter the issues by.
    - `github_credentials` (GitHubCredentials, required): Credentials to use for authentication with GitHub.
    - `order_by` (IssueOrder, default `None`): Ordering options for issues returned from the connection.
    - `filter_by` (IssueFilters, default `None`): Filtering options for issues returned from the connection.
    - `after` (str, default `None`): Returns the elements in the list that come after the specified cursor.
    - `before` (str, default `None`): Returns the elements in the list that come before the specified cursor.
    - `first` (int, default `None`): Returns the first n elements from the list.
    - `last` (int, default `None`): Returns the last n elements from the list.
    - `return_fields` (Iterable[str], default `None`): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    Returns:

    - `Dict[str, Any]`: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_issues(  # noqa\n    login: str,\n    labels: Iterable[str],\n    states: Iterable[graphql_schema.IssueState],\n    github_credentials: GitHubCredentials,\n    order_by: graphql_schema.IssueOrder = None,\n    filter_by: graphql_schema.IssueFilters = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of issues associated with this user.\n\n    Args:\n        login: The user's login.\n        labels: A list of label names to filter the pull requests by.\n        states: A list of states to filter the issues by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        order_by: Ordering options for issues returned from the\n            connection.\n        filter_by: Filtering options for issues returned from the\n            connection.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).issues(\n        **strip_kwargs(\n            labels=labels,\n            states=states,\n            order_by=order_by,\n            filter_by=filter_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"issues\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"issues\"]\n
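
    A sketch of the required filters: `labels` and `states` must be passed. The "bug" label and the block name are hypothetical, and `"OPEN"` is assumed to be the GraphQL IssueState enum name:

    ```python
    import asyncio

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.user import query_user_issues


    @flow
    async def open_bug_issues():
        creds = await GitHubCredentials.load("github-credentials")  # assumed block name
        # Fetch up to 25 open issues associated with this user that carry the "bug" label.
        return await query_user_issues(
            login="octocat",
            labels=["bug"],
            states=["OPEN"],
            github_credentials=creds,
            first=25,
        )


    if __name__ == "__main__":
        asyncio.run(open_bug_issues())
    ```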
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_item_showcase","title":"query_user_item_showcase async","text":"

    Showcases a selection of repositories and gists that the profile owner has either curated or that have been selected automatically based on popularity.

    Parameters:

    - `login` (str, required): The user's login.
    - `github_credentials` (GitHubCredentials, required): Credentials to use for authentication with GitHub.
    - `return_fields` (Iterable[str], default `None`): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    Returns:

    - `Dict[str, Any]`: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_item_showcase(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Showcases a selection of repositories and gists that the profile owner has\n    either curated or that have been selected automatically based on popularity.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).item_showcase(**strip_kwargs())\n\n    op_stack = (\n        \"user\",\n        \"itemShowcase\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"itemShowcase\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_organization","title":"query_user_organization async","text":"

    Find an organization by its login that the user belongs to.

    Parameters:

    - `login` (str, required): The user's login.
    - `organization_login` (str, required): The login of the organization to find.
    - `github_credentials` (GitHubCredentials, required): Credentials to use for authentication with GitHub.
    - `return_fields` (Iterable[str], default `None`): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    Returns:

    - `Dict[str, Any]`: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_organization(  # noqa\n    login: str,\n    organization_login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find an organization by its login that the user belongs to.\n\n    Args:\n        login: The user's login.\n        organization_login: The login of the organization to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).organization(\n        **strip_kwargs(\n            login=organization_login,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"organization\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"organization\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_organizations","title":"query_user_organizations async","text":"

    A list of organizations the user belongs to.

    Parameters:

    - `login` (str, required): The user's login.
    - `github_credentials` (GitHubCredentials, required): Credentials to use for authentication with GitHub.
    - `after` (str, default `None`): Returns the elements in the list that come after the specified cursor.
    - `before` (str, default `None`): Returns the elements in the list that come before the specified cursor.
    - `first` (int, default `None`): Returns the first n elements from the list.
    - `last` (int, default `None`): Returns the last n elements from the list.
    - `return_fields` (Iterable[str], default `None`): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    Returns:

    - `Dict[str, Any]`: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_organizations(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of organizations the user belongs to.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).organizations(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"organizations\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"organizations\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_packages","title":"query_user_packages async","text":"

    A list of packages under the owner.

    Parameters:

    - `login` (str, required): The user's login.
    - `github_credentials` (GitHubCredentials, required): Credentials to use for authentication with GitHub.
    - `after` (str, default `None`): Returns the elements in the list that come after the specified cursor.
    - `before` (str, default `None`): Returns the elements in the list that come before the specified cursor.
    - `first` (int, default `None`): Returns the first n elements from the list.
    - `last` (int, default `None`): Returns the last n elements from the list.
    - `names` (Iterable[str], default `None`): Find packages by their names.
    - `repository_id` (str, default `None`): Find packages in a repository by ID.
    - `package_type` (PackageType, default `None`): Filter registry package by type.
    - `order_by` (PackageOrder, default `{'field': 'CREATED_AT', 'direction': 'DESC'}`): Ordering of the returned packages.
    - `return_fields` (Iterable[str], default `None`): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    Returns:

    - `Dict[str, Any]`: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_packages(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    names: Iterable[str] = None,\n    repository_id: str = None,\n    package_type: graphql_schema.PackageType = None,\n    order_by: graphql_schema.PackageOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of packages under the owner.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        names: Find packages by their names.\n        repository_id: Find packages in a repository by ID.\n        package_type: Filter registry package by type.\n        order_by: Ordering of the returned packages.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).packages(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            names=names,\n            repository_id=repository_id,\n            package_type=package_type,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"packages\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"packages\"]\n
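
    A sketch of overriding the documented `order_by` default and filtering by package type; `"CONTAINER"` is assumed to be one of GitHub's PackageType enum names, and the login and block name are illustrative:

    ```python
    import asyncio

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.user import query_user_packages


    @flow
    async def newest_container_packages():
        creds = await GitHubCredentials.load("github-credentials")  # assumed block name
        # The dict passed to order_by mirrors the shape of the documented default.
        return await query_user_packages(
            login="octocat",
            github_credentials=creds,
            package_type="CONTAINER",
            order_by={"field": "CREATED_AT", "direction": "DESC"},
            first=10,
        )


    if __name__ == "__main__":
        asyncio.run(newest_container_packages())
    ```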
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_pinnable_items","title":"query_user_pinnable_items async","text":"

    A list of repositories and gists this profile owner can pin to their profile.

Parameters:

- login (str, required): The user's login.
- types (Iterable[PinnableItemType], required): Filter the types of pinnable items that are returned.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_pinnable_items(  # noqa\n    login: str,\n    types: Iterable[graphql_schema.PinnableItemType],\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories and gists this profile owner can pin to their profile.\n\n    Args:\n        login: The user's login.\n        types: Filter the types of pinnable items that are\n            returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).pinnable_items(\n        **strip_kwargs(\n            types=types,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"pinnableItems\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"pinnableItems\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_pinned_items","title":"query_user_pinned_items async","text":"

    A list of repositories and gists this profile owner has pinned to their profile.

Parameters:

- login (str, required): The user's login.
- types (Iterable[PinnableItemType], required): Filter the types of pinned items that are returned.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_pinned_items(  # noqa\n    login: str,\n    types: Iterable[graphql_schema.PinnableItemType],\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories and gists this profile owner has pinned to their profile.\n\n    Args:\n        login: The user's login.\n        types: Filter the types of pinned items that are returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).pinned_items(\n        **strip_kwargs(\n            types=types,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"pinnedItems\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"pinnedItems\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_project","title":"query_user_project async","text":"

    Find project by number.

Parameters:

- login (str, required): The user's login.
- number (int, required): The project number to find.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_project(  # noqa\n    login: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find project by number.\n\n    Args:\n        login: The user's login.\n        number: The project number to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).project(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"project\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"project\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_project_next","title":"query_user_project_next async","text":"

    Find a project by project (beta) number.

Parameters:

- login (str, required): The user's login.
- number (int, required): The project (beta) number.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_project_next(  # noqa\n    login: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find a project by project (beta) number.\n\n    Args:\n        login: The user's login.\n        number: The project (beta) number.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).project_next(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"projectNext\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"projectNext\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_project_v2","title":"query_user_project_v2 async","text":"

    Find a project by number.

Parameters:

- login (str, required): The user's login.
- number (int, required): The project number.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_project_v2(  # noqa\n    login: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find a project by number.\n\n    Args:\n        login: The user's login.\n        number: The project number.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).project_v2(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"projectV2\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"projectV2\"]\n
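A minimal sketch of a single ProjectV2 lookup; the project number and the snake_case field names passed to return_fields are assumptions made for illustration only.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.user import query_user_project_v2


@flow
async def get_project_v2():
    creds = GitHubCredentials(token="<GITHUB_TOKEN>")  # placeholder token
    # Look up ProjectV2 number 1 and subset the response to a couple of fields.
    return await query_user_project_v2(
        login="octocat",
        number=1,
        github_credentials=creds,
        return_fields=["title", "url"],  # illustrative field names
    )
```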
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_projects","title":"query_user_projects async","text":"

    A list of projects under the owner.

Parameters:

- login (str, required): The user's login.
- states (Iterable[ProjectState], required): A list of states to filter the projects by.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- order_by (ProjectOrder, default None): Ordering options for projects returned from the connection.
- search (str, default None): Query to search projects by, currently only searching by name.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_projects(  # noqa\n    login: str,\n    states: Iterable[graphql_schema.ProjectState],\n    github_credentials: GitHubCredentials,\n    order_by: graphql_schema.ProjectOrder = None,\n    search: str = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of projects under the owner.\n\n    Args:\n        login: The user's login.\n        states: A list of states to filter the projects by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        order_by: Ordering options for projects returned from the\n            connection.\n        search: Query to search projects by, currently only searching\n            by name.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).projects(\n        **strip_kwargs(\n            states=states,\n            order_by=order_by,\n            search=search,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"projects\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"projects\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_projects_next","title":"query_user_projects_next async","text":"

    A list of projects (beta) under the owner.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- query (str, default None): A project (beta) to search for under the owner.
- sort_by (ProjectNextOrderField, default 'TITLE'): How to order the returned projects (beta).
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_projects_next(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    query: str = None,\n    sort_by: graphql_schema.ProjectNextOrderField = \"TITLE\",\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of projects (beta) under the owner.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        query: A project (beta) to search for under the the owner.\n        sort_by: How to order the returned projects (beta).\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).projects_next(\n        **strip_kwargs(\n            query=query,\n            sort_by=sort_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"projectsNext\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"projectsNext\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_projects_v2","title":"query_user_projects_v2 async","text":"

    A list of projects under the owner.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- query (str, default None): A project to search for under the owner.
- order_by (ProjectV2Order, default {'field': 'NUMBER', 'direction': 'DESC'}): How to order the returned projects.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_projects_v2(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    query: str = None,\n    order_by: graphql_schema.ProjectV2Order = {\"field\": \"NUMBER\", \"direction\": \"DESC\"},\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of projects under the owner.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        query: A project to search for under the the owner.\n        order_by: How to order the returned projects.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).projects_v2(\n        **strip_kwargs(\n            query=query,\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"projectsV2\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"projectsV2\"]\n
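For orientation, a hedged sketch of searching an owner's ProjectV2 boards; the login and the "roadmap" search text are illustrative, and the ordering simply relies on the NUMBER/DESC default shown above.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.user import query_user_projects_v2


@flow
async def search_projects_v2():
    creds = GitHubCredentials(token="<GITHUB_TOKEN>")  # placeholder token
    # Return up to 5 ProjectV2 boards whose titles match the search text.
    return await query_user_projects_v2(
        login="octocat",
        github_credentials=creds,
        query="roadmap",
        first=5,
    )
```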
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_public_keys","title":"query_user_public_keys async","text":"

    A list of public keys associated with this user.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_public_keys(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of public keys associated with this user.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).public_keys(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"publicKeys\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"publicKeys\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_pull_requests","title":"query_user_pull_requests async","text":"

    A list of pull requests associated with this user.

Parameters:

- login (str, required): The user's login.
- states (Iterable[PullRequestState], required): A list of states to filter the pull requests by.
- labels (Iterable[str], required): A list of label names to filter the pull requests by.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- head_ref_name (str, default None): The head ref name to filter the pull requests by.
- base_ref_name (str, default None): The base ref name to filter the pull requests by.
- order_by (IssueOrder, default None): Ordering options for pull requests returned from the connection.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_pull_requests(  # noqa\n    login: str,\n    states: Iterable[graphql_schema.PullRequestState],\n    labels: Iterable[str],\n    github_credentials: GitHubCredentials,\n    head_ref_name: str = None,\n    base_ref_name: str = None,\n    order_by: graphql_schema.IssueOrder = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of pull requests associated with this user.\n\n    Args:\n        login: The user's login.\n        states: A list of states to filter the pull requests by.\n        labels: A list of label names to filter the pull requests\n            by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        head_ref_name: The head ref name to filter the pull\n            requests by.\n        base_ref_name: The base ref name to filter the pull\n            requests by.\n        order_by: Ordering options for pull requests returned from\n            the connection.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).pull_requests(\n        **strip_kwargs(\n            states=states,\n            labels=labels,\n            head_ref_name=head_ref_name,\n            base_ref_name=base_ref_name,\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"pullRequests\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"pullRequests\"]\n
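A hedged sketch of the two required filter arguments; the "OPEN" state and the "bug" label are example values, not defaults from the reference.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.user import query_user_pull_requests


@flow
async def open_bug_prs():
    creds = GitHubCredentials(token="<GITHUB_TOKEN>")  # placeholder token
    # states and labels are required; both are iterables of plain strings
    # matching the GraphQL enum values and label names.
    return await query_user_pull_requests(
        login="octocat",
        states=["OPEN"],
        labels=["bug"],
        github_credentials=creds,
        first=20,
    )
```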
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_recent_projects","title":"query_user_recent_projects async","text":"

    Recent projects that this user has modified in the context of the owner.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_recent_projects(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Recent projects that this user has modified in the context of the owner.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).recent_projects(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"recentProjects\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"recentProjects\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_repositories","title":"query_user_repositories async","text":"

    A list of repositories that the user owns.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- privacy (RepositoryPrivacy, default None): If non-null, filters repositories according to privacy.
- order_by (RepositoryOrder, default None): Ordering options for repositories returned from the connection.
- affiliations (Iterable[RepositoryAffiliation], default None): Array of viewer's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the current viewer owns.
- owner_affiliations (Iterable[RepositoryAffiliation], default ('OWNER', 'COLLABORATOR')): Array of owner's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the organization or user being viewed owns.
- is_locked (bool, default None): If non-null, filters repositories according to whether they have been locked.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- is_fork (bool, default None): If non-null, filters repositories according to whether they are forks of another repository.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_repositories(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    privacy: graphql_schema.RepositoryPrivacy = None,\n    order_by: graphql_schema.RepositoryOrder = None,\n    affiliations: Iterable[graphql_schema.RepositoryAffiliation] = None,\n    owner_affiliations: Iterable[graphql_schema.RepositoryAffiliation] = (\n        \"OWNER\",\n        \"COLLABORATOR\",\n    ),\n    is_locked: bool = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    is_fork: bool = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories that the user owns.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        privacy: If non-null, filters repositories according to\n            privacy.\n        order_by: Ordering options for repositories returned from\n            the connection.\n        affiliations: Array of viewer's affiliation options for\n            repositories returned from the connection. For example,\n            OWNER will include only repositories that the current viewer\n            owns.\n        owner_affiliations: Array of owner's affiliation options\n            for repositories returned from the connection. For example,\n            OWNER will include only repositories that the organization\n            or user being viewed owns.\n        is_locked: If non-null, filters repositories according to\n            whether they have been locked.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        is_fork: If non-null, filters repositories according to\n            whether they are forks of another repository.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repositories(\n        **strip_kwargs(\n            privacy=privacy,\n            order_by=order_by,\n            affiliations=affiliations,\n            owner_affiliations=owner_affiliations,\n            is_locked=is_locked,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            is_fork=is_fork,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"repositories\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"repositories\"]\n
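A hedged sketch combining the privacy filter with explicit ordering; the login is illustrative, and the order_by dict simply mirrors the GitHub GraphQL RepositoryOrder input shape used elsewhere in this reference.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.user import query_user_repositories


@flow
async def newest_public_repos():
    creds = GitHubCredentials(token="<GITHUB_TOKEN>")  # placeholder token
    # Only public repositories, newest first, limited to the first 10 results.
    return await query_user_repositories(
        login="octocat",
        github_credentials=creds,
        privacy="PUBLIC",
        order_by={"field": "CREATED_AT", "direction": "DESC"},
        first=10,
    )
```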
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_repositories_contributed_to","title":"query_user_repositories_contributed_to async","text":"

    A list of repositories that the user recently contributed to.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- privacy (RepositoryPrivacy, default None): If non-null, filters repositories according to privacy.
- order_by (RepositoryOrder, default None): Ordering options for repositories returned from the connection.
- is_locked (bool, default None): If non-null, filters repositories according to whether they have been locked.
- include_user_repositories (bool, default None): If true, include user repositories.
- contribution_types (Iterable[RepositoryContributionType], default None): If non-null, include only the specified types of contributions. The GitHub.com UI uses [COMMIT, ISSUE, PULL_REQUEST, REPOSITORY].
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_repositories_contributed_to(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    privacy: graphql_schema.RepositoryPrivacy = None,\n    order_by: graphql_schema.RepositoryOrder = None,\n    is_locked: bool = None,\n    include_user_repositories: bool = None,\n    contribution_types: Iterable[graphql_schema.RepositoryContributionType] = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories that the user recently contributed to.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        privacy: If non-null, filters repositories\n            according to privacy.\n        order_by: Ordering options for repositories\n            returned from the connection.\n        is_locked: If non-null, filters repositories\n            according to whether they have been locked.\n        include_user_repositories: If true, include\n            user repositories.\n        contribution_types: If non-null, include\n            only the specified types of contributions. The GitHub.com UI\n            uses [COMMIT, ISSUE, PULL_REQUEST, REPOSITORY].\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list\n            that come before the specified cursor.\n        first: Returns the first _n_ elements from\n            the list.\n        last: Returns the last _n_ elements from the\n            list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repositories_contributed_to(\n        **strip_kwargs(\n            privacy=privacy,\n            order_by=order_by,\n            is_locked=is_locked,\n            include_user_repositories=include_user_repositories,\n            contribution_types=contribution_types,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"repositoriesContributedTo\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"repositoriesContributedTo\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_repository","title":"query_user_repository async","text":"

    Find Repository.

Parameters:

- login (str, required): The user's login.
- name (str, required): Name of Repository to find.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_repository(  # noqa\n    login: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find Repository.\n\n    Args:\n        login: The user's login.\n        name: Name of Repository to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repository(\n        **strip_kwargs(\n            name=name,\n            follow_renames=follow_renames,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"repository\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"repository\"]\n
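A minimal sketch of a single-repository lookup, assuming an illustrative owner and repository name.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.user import query_user_repository


@flow
async def get_repo():
    creds = GitHubCredentials(token="<GITHUB_TOKEN>")  # placeholder token
    # follow_renames defaults to True, so a repository referenced by an old
    # name is still resolved instead of raising an error.
    return await query_user_repository(
        login="octocat",
        name="hello-world",
        github_credentials=creds,
    )
```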
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_repository_discussion_comments","title":"query_user_repository_discussion_comments async","text":"

    Discussion comments this user has authored.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- repository_id (str, default None): Filter discussion comments to only those in a specific repository.
- only_answers (bool, default False): Filter discussion comments to only those that were marked as the answer.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_repository_discussion_comments(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    repository_id: str = None,\n    only_answers: bool = False,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Discussion comments this user has authored.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list\n            that come after the specified cursor.\n        before: Returns the elements in the list\n            that come before the specified cursor.\n        first: Returns the first _n_ elements\n            from the list.\n        last: Returns the last _n_ elements from\n            the list.\n        repository_id: Filter discussion comments\n            to only those in a specific repository.\n        only_answers: Filter discussion comments\n            to only those that were marked as the answer.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repository_discussion_comments(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            repository_id=repository_id,\n            only_answers=only_answers,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"repositoryDiscussionComments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"repositoryDiscussionComments\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_repository_discussions","title":"query_user_repository_discussions async","text":"

    Discussions this user has started.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- order_by (DiscussionOrder, default {'field': 'CREATED_AT', 'direction': 'DESC'}): Ordering options for discussions returned from the connection.
- repository_id (str, default None): Filter discussions to only those in a specific repository.
- answered (bool, default None): Filter discussions to only those that have been answered or not. Defaults to including both answered and unanswered discussions.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_repository_discussions(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.DiscussionOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    repository_id: str = None,\n    answered: bool = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Discussions this user has started.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the\n            list.\n        order_by: Ordering options for discussions\n            returned from the connection.\n        repository_id: Filter discussions to only those\n            in a specific repository.\n        answered: Filter discussions to only those that\n            have been answered or not. Defaults to including both\n            answered and unanswered discussions.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repository_discussions(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n            repository_id=repository_id,\n            answered=answered,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"repositoryDiscussions\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"repositoryDiscussions\"]\n
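A hedged sketch of narrowing discussions to unanswered ones; the login is illustrative, and the ordering is left at the CREATED_AT/DESC default noted above.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.user import query_user_repository_discussions


@flow
async def unanswered_discussions():
    creds = GitHubCredentials(token="<GITHUB_TOKEN>")  # placeholder token
    # answered=False limits the result to discussions still awaiting an answer.
    return await query_user_repository_discussions(
        login="octocat",
        github_credentials=creds,
        answered=False,
        first=10,
    )
```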
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_saved_replies","title":"query_user_saved_replies async","text":"

    Replies this user has saved.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- order_by (SavedReplyOrder, default {'field': 'UPDATED_AT', 'direction': 'DESC'}): The field to order saved replies by.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_saved_replies(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.SavedReplyOrder = {\n        \"field\": \"UPDATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Replies this user has saved.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        order_by: The field to order saved replies by.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).saved_replies(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"savedReplies\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"savedReplies\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_sponsoring","title":"query_user_sponsoring async","text":"

    List of users and organizations this entity is sponsoring.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- order_by (SponsorOrder, default {'field': 'RELEVANCE', 'direction': 'DESC'}): Ordering options for the users and organizations returned from the connection.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_sponsoring(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.SponsorOrder = {\"field\": \"RELEVANCE\", \"direction\": \"DESC\"},\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of users and organizations this entity is sponsoring.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        order_by: Ordering options for the users and organizations\n            returned from the connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsoring(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"sponsoring\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"sponsoring\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_sponsors","title":"query_user_sponsors async","text":"

    List of sponsors for this user or organization.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- tier_id (str, default None): If given, will filter for sponsors at the given tier. Will only return sponsors whose tier the viewer is permitted to see.
- order_by (SponsorOrder, default {'field': 'RELEVANCE', 'direction': 'DESC'}): Ordering options for sponsors returned from the connection.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_sponsors(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    tier_id: str = None,\n    order_by: graphql_schema.SponsorOrder = {\"field\": \"RELEVANCE\", \"direction\": \"DESC\"},\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of sponsors for this user or organization.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        tier_id: If given, will filter for sponsors at the given tier.\n            Will only return sponsors whose tier the viewer is permitted\n            to see.\n        order_by: Ordering options for sponsors returned from the\n            connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsors(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            tier_id=tier_id,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"sponsors\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"sponsors\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_sponsors_activities","title":"query_user_sponsors_activities async","text":"

    Events involving this sponsorable, such as new sponsorships.

    Parameters:

    Name Type Description Default login str

    The user's login.

    required actions Iterable[SponsorsActivityAction]

    Filter activities to only the specified actions.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None period SponsorsActivityPeriod

    Filter activities returned to only those that occurred in the most recent specified time period. Set to ALL to avoid filtering by when the activity occurred.

    'MONTH' order_by SponsorsActivityOrder

    Ordering options for activity returned from the connection.

    {'field': 'TIMESTAMP', 'direction': 'DESC'} return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_sponsors_activities(  # noqa\n    login: str,\n    actions: Iterable[graphql_schema.SponsorsActivityAction],\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    period: graphql_schema.SponsorsActivityPeriod = \"MONTH\",\n    order_by: graphql_schema.SponsorsActivityOrder = {\n        \"field\": \"TIMESTAMP\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Events involving this sponsorable, such as new sponsorships.\n\n    Args:\n        login: The user's login.\n        actions: Filter activities to only the specified\n            actions.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        period: Filter activities returned to only those\n            that occurred in the most recent specified time period. Set\n            to ALL to avoid filtering by when the activity occurred.\n        order_by: Ordering options for activity returned\n            from the connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsors_activities(\n        **strip_kwargs(\n            actions=actions,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            period=period,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"sponsorsActivities\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"sponsorsActivities\"]\n
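    A hedged example of querying recent sponsorship activity. The action names are values from GitHub's SponsorsActivityAction enum and the credentials block name is a placeholder:

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.user import query_user_sponsors_activities

    @flow
    async def recent_sponsor_activity(login: str):
        credentials = await GitHubCredentials.load("github-creds")  # placeholder block name
        # Look at new and cancelled sponsorships over all time instead of the default MONTH.
        return await query_user_sponsors_activities(
            login=login,
            actions=["NEW_SPONSORSHIP", "CANCELLED_SPONSORSHIP"],
            github_credentials=credentials,
            period="ALL",
            first=25,
        )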
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_sponsors_listing","title":"query_user_sponsors_listing async","text":"

    The GitHub Sponsors listing for this user or organization.

    Parameters:

    Name Type Description Default login str

    The user's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_sponsors_listing(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The GitHub Sponsors listing for this user or organization.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsors_listing(**strip_kwargs())\n\n    op_stack = (\n        \"user\",\n        \"sponsorsListing\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"sponsorsListing\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_sponsorship_for_viewer_as_sponsor","title":"query_user_sponsorship_for_viewer_as_sponsor async","text":"

    The sponsorship from the viewer to this user/organization; that is, the sponsorship where you're the sponsor. Only returns a sponsorship if it is active.

    Parameters:

    Name Type Description Default login str

    The user's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_sponsorship_for_viewer_as_sponsor(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The sponsorship from the viewer to this user/organization; that is, the\n    sponsorship where you're the sponsor. Only returns a sponsorship if it is\n    active.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsorship_for_viewer_as_sponsor(**strip_kwargs())\n\n    op_stack = (\n        \"user\",\n        \"sponsorshipForViewerAsSponsor\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"sponsorshipForViewerAsSponsor\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_sponsorship_for_viewer_as_sponsorable","title":"query_user_sponsorship_for_viewer_as_sponsorable async","text":"

    The sponsorship from this user/organization to the viewer; that is, the sponsorship you're receiving. Only returns a sponsorship if it is active.

    Parameters:

    Name Type Description Default login str

    The user's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_sponsorship_for_viewer_as_sponsorable(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The sponsorship from this user/organization to the viewer; that is, the\n    sponsorship you're receiving. Only returns a sponsorship if it is active.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsorship_for_viewer_as_sponsorable(**strip_kwargs())\n\n    op_stack = (\n        \"user\",\n        \"sponsorshipForViewerAsSponsorable\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"sponsorshipForViewerAsSponsorable\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_sponsorship_newsletters","title":"query_user_sponsorship_newsletters async","text":"

    List of sponsorship updates sent from this sponsorable to sponsors.

    Parameters:

    Name Type Description Default login str

    The user's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None order_by SponsorshipNewsletterOrder

    Ordering options for sponsorship updates returned from the connection.

    {'field': 'CREATED_AT', 'direction': 'DESC'} return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_sponsorship_newsletters(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.SponsorshipNewsletterOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of sponsorship updates sent from this sponsorable to sponsors.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the\n            list.\n        order_by: Ordering options for sponsorship\n            updates returned from the connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsorship_newsletters(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"sponsorshipNewsletters\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"sponsorshipNewsletters\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_sponsorships_as_maintainer","title":"query_user_sponsorships_as_maintainer async","text":"

    This object's sponsorships as the maintainer.

    Parameters:

    Name Type Description Default login str

    The user's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None include_private bool

    Whether or not to include private sponsorships in the result set.

    False order_by SponsorshipOrder

    Ordering options for sponsorships returned from this connection. If left blank, the sponsorships will be ordered based on relevancy to the viewer.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_sponsorships_as_maintainer(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    include_private: bool = False,\n    order_by: graphql_schema.SponsorshipOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    This object's sponsorships as the maintainer.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from\n            the list.\n        last: Returns the last _n_ elements from the\n            list.\n        include_private: Whether or not to include\n            private sponsorships in the result set.\n        order_by: Ordering options for sponsorships\n            returned from this connection. If left blank, the\n            sponsorships will be ordered based on relevancy to the\n            viewer.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsorships_as_maintainer(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            include_private=include_private,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"sponsorshipsAsMaintainer\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"sponsorshipsAsMaintainer\"]\n
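    A short sketch of listing the sponsorships a user receives, again assuming a pre-created credentials block with a placeholder name:

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.user import query_user_sponsorships_as_maintainer

    @flow
    async def sponsorships_received(login: str):
        credentials = await GitHubCredentials.load("github-creds")  # placeholder block name
        # Public sponsorships only; pass include_private=True if the token is permitted to see them.
        return await query_user_sponsorships_as_maintainer(
            login=login,
            github_credentials=credentials,
            include_private=False,
            first=50,
        )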
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_sponsorships_as_sponsor","title":"query_user_sponsorships_as_sponsor async","text":"

    This object's sponsorships as the sponsor.

    Parameters:

    Name Type Description Default login str

    The user's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None order_by SponsorshipOrder

    Ordering options for sponsorships returned from this connection. If left blank, the sponsorships will be ordered based on relevancy to the viewer.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_sponsorships_as_sponsor(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.SponsorshipOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    This object's sponsorships as the sponsor.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the\n            list.\n        order_by: Ordering options for sponsorships\n            returned from this connection. If left blank, the\n            sponsorships will be ordered based on relevancy to the\n            viewer.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsorships_as_sponsor(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"sponsorshipsAsSponsor\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"sponsorshipsAsSponsor\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_starred_repositories","title":"query_user_starred_repositories async","text":"

    Repositories the user has starred.

    Parameters:

    Name Type Description Default login str

    The user's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None owned_by_viewer bool

    Filters starred repositories to only return repositories owned by the viewer.

    None order_by StarOrder

    Order for connection.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_starred_repositories(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    owned_by_viewer: bool = None,\n    order_by: graphql_schema.StarOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Repositories the user has starred.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the list.\n        owned_by_viewer: Filters starred repositories to\n            only return repositories owned by the viewer.\n        order_by: Order for connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).starred_repositories(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            owned_by_viewer=owned_by_viewer,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"starredRepositories\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"starredRepositories\"]\n
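    A usage sketch that orders starred repositories by when they were starred; STARRED_AT comes from GitHub's StarOrder input and the block name is a placeholder:

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.user import query_user_starred_repositories

    @flow
    async def recent_stars(login: str):
        credentials = await GitHubCredentials.load("github-creds")  # placeholder block name
        # Most recently starred repositories first.
        return await query_user_starred_repositories(
            login=login,
            github_credentials=credentials,
            order_by={"field": "STARRED_AT", "direction": "DESC"},
            first=10,
        )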
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_status","title":"query_user_status async","text":"

    The user's description of what they're currently doing.

    Parameters:

    Name Type Description Default login str

    The user's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_status(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The user's description of what they're currently doing.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).status(**strip_kwargs())\n\n    op_stack = (\n        \"user\",\n        \"status\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"status\"]\n
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_top_repositories","title":"query_user_top_repositories async","text":"

    Repositories the user has contributed to, ordered by contribution rank, plus repositories the user has created.

    Parameters:

    Name Type Description Default login str

    The user's login.

    required order_by RepositoryOrder

    Ordering options for repositories returned from the connection.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None since datetime

    How far back in time to fetch contributed repositories.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_top_repositories(  # noqa\n    login: str,\n    order_by: graphql_schema.RepositoryOrder,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    since: datetime = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Repositories the user has contributed to, ordered by contribution rank, plus\n    repositories the user has created.\n\n    Args:\n        login: The user's login.\n        order_by: Ordering options for repositories returned\n            from the connection.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        since: How far back in time to fetch contributed\n            repositories.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).top_repositories(\n        **strip_kwargs(\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            since=since,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"topRepositories\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"topRepositories\"]\n
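    Because order_by is a required argument here, a sketch has to pass it explicitly; UPDATED_AT is one of GitHub's RepositoryOrderField values and the block name is a placeholder:

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.user import query_user_top_repositories

    @flow
    async def top_repositories(login: str):
        credentials = await GitHubCredentials.load("github-creds")  # placeholder block name
        return await query_user_top_repositories(
            login=login,
            order_by={"field": "UPDATED_AT", "direction": "DESC"},
            github_credentials=credentials,
            first=5,
        )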
    "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_watching","title":"query_user_watching async","text":"

    A list of repositories the given user is watching.

    Parameters:

    Name Type Description Default login str

    The user's login.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required privacy RepositoryPrivacy

    If non-null, filters repositories according to privacy.

    None order_by RepositoryOrder

    Ordering options for repositories returned from the connection.

    None affiliations Iterable[RepositoryAffiliation]

    Affiliation options for repositories returned from the connection. If none are specified, the results will include repositories for which the current viewer is an owner, collaborator, or member.

    None owner_affiliations Iterable[RepositoryAffiliation]

    Array of owner's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the organization or user being viewed owns.

    ('OWNER', 'COLLABORATOR') is_locked bool

    If non-null, filters repositories according to whether they have been locked.

    None after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/user.py
    @task\nasync def query_user_watching(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    privacy: graphql_schema.RepositoryPrivacy = None,\n    order_by: graphql_schema.RepositoryOrder = None,\n    affiliations: Iterable[graphql_schema.RepositoryAffiliation] = None,\n    owner_affiliations: Iterable[graphql_schema.RepositoryAffiliation] = (\n        \"OWNER\",\n        \"COLLABORATOR\",\n    ),\n    is_locked: bool = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories the given user is watching.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        privacy: If non-null, filters repositories according to privacy.\n        order_by: Ordering options for repositories returned from the\n            connection.\n        affiliations: Affiliation options for repositories returned\n            from the connection. If none specified, the results will\n            include repositories for which the current viewer is an\n            owner or collaborator, or member.\n        owner_affiliations: Array of owner's affiliation options for\n            repositories returned from the connection. For example,\n            OWNER will include only repositories that the organization\n            or user being viewed owns.\n        is_locked: If non-null, filters repositories according to\n            whether they have been locked.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).watching(\n        **strip_kwargs(\n            privacy=privacy,\n            order_by=order_by,\n            affiliations=affiliations,\n            owner_affiliations=owner_affiliations,\n            is_locked=is_locked,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"watching\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"watching\"]\n
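    A sketch restricted to public repositories the user is watching; PUBLIC is a RepositoryPrivacy value and the block name is a placeholder:

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.user import query_user_watching

    @flow
    async def watched_repositories(login: str):
        credentials = await GitHubCredentials.load("github-creds")  # placeholder block name
        return await query_user_watching(
            login=login,
            github_credentials=credentials,
            privacy="PUBLIC",
            first=20,
        )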
    "},{"location":"integrations/prefect-github/utils/","title":"Utils","text":""},{"location":"integrations/prefect-github/utils/#prefect_github.utils","title":"prefect_github.utils","text":"

    Utilities to assist with using generated collections.

    "},{"location":"integrations/prefect-github/utils/#prefect_github.utils.camel_to_snake_case","title":"camel_to_snake_case","text":"

    Converts CamelCase and lowerCamelCase to snake_case.

    Parameters:

    Name Type Description Default string str

    The string in CamelCase or lowerCamelCase to convert.

    required

    Returns:

    Type Description str

    A snake_case version of the string.

    Source code in prefect_github/utils.py
    def camel_to_snake_case(string: str) -> str:\n    \"\"\"\n    Converts CamelCase and lowerCamelCase to snake_case.\n    Args:\n        string: The string in CamelCase or lowerCamelCase to convert.\n    Returns:\n        A snake_case version of the string.\n    \"\"\"\n    string = SNAKE_CASE_REGEX1.sub(r\"\\1_\\2\", string)\n    return SNAKE_CASE_REGEX2.sub(r\"\\1_\\2\", string).lower()\n
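    A quick illustration with the kind of GraphQL field names this collection converts:

    from prefect_github.utils import camel_to_snake_case

    # lowerCamelCase GraphQL field names are mapped to snake_case:
    camel_to_snake_case("sponsorshipsAsMaintainer")  # "sponsorships_as_maintainer"
    camel_to_snake_case("commitComments")            # "commit_comments"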
    "},{"location":"integrations/prefect-github/utils/#prefect_github.utils.initialize_return_fields_defaults","title":"initialize_return_fields_defaults","text":"

    Reads config_path to parse out the desired default fields to return.

    Parameters:

    Name Type Description Default config_path Union[Path, str]

    The path to the config file.

    required

    Source code in prefect_github/utils.py
    def initialize_return_fields_defaults(config_path: Union[Path, str]) -> List:\n    \"\"\"\n    Reads config_path to parse out the desired default fields to return.\n    Args:\n        config_path: The path to the config file.\n    \"\"\"\n    with open(config_path, \"r\") as f:\n        config = json.load(f)\n\n    return_fields_defaults = defaultdict(lambda: [])\n    for op_type, sub_op_types in config.items():\n        for sub_op_type in sub_op_types:\n            if isinstance(sub_op_type, str):\n                return_fields_defaults[(op_type,)].append(\n                    camel_to_snake_case(sub_op_type)\n                )\n            elif isinstance(sub_op_type, dict):\n                sub_op_type_key = list(sub_op_type.keys())[0]\n                return_fields_defaults[(op_type, sub_op_type_key)] = [\n                    camel_to_snake_case(field) for field in sub_op_type[sub_op_type_key]\n                ]\n    return return_fields_defaults\n
    "},{"location":"integrations/prefect-github/utils/#prefect_github.utils.strip_kwargs","title":"strip_kwargs","text":"

    Drops keyword arguments whose value is None, because sgqlc.Operation errors out if a keyword argument is provided but set to None.

    Parameters:

    Name Type Description Default **kwargs Dict

    Input keyword arguments.

    {}

    Returns:

    Type Description Dict

    Stripped version of kwargs.

    Source code in prefect_github/utils.py
    def strip_kwargs(**kwargs: Dict) -> Dict:\n    \"\"\"\n    Drops keyword arguments if value is None because sgqlc.Operation\n    errors out if a keyword argument is provided, but set to None.\n\n    Args:\n        **kwargs: Input keyword arguments.\n\n    Returns:\n        Stripped version of kwargs.\n    \"\"\"\n    stripped_dict = {}\n    for k, v in kwargs.items():\n        if isinstance(v, dict):\n            v = strip_kwargs(**v)\n        if v is not None:\n            stripped_dict[k] = v\n    return stripped_dict or {}\n
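    A small example of the stripping behavior, including the recursive handling of nested dictionaries:

    from prefect_github.utils import strip_kwargs

    # None-valued arguments (including nested ones) are dropped before the
    # GraphQL operation is built:
    strip_kwargs(first=10, after=None, order_by={"field": "RELEVANCE", "direction": None})
    # -> {"first": 10, "order_by": {"field": "RELEVANCE"}}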
    "},{"location":"integrations/prefect-github/viewer/","title":"Viewer","text":""},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer","title":"prefect_github.viewer","text":"

    This module contains GitHub query_viewer* tasks.

    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer","title":"query_viewer async","text":"

    The query root of GitHub's GraphQL interface.

    Parameters:

    Name Type Description Default github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer(  # noqa\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The query root of GitHub's GraphQL interface.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs())\n\n    op_stack = (\"viewer\",)\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"]\n
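    A minimal sketch of querying the authenticated account; "github-creds" is a placeholder block name:

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.viewer import query_viewer

    @flow
    async def whoami():
        credentials = await GitHubCredentials.load("github-creds")  # placeholder block name
        # Returns the default viewer fields from configs/query/*.json for the account
        # the token belongs to.
        return await query_viewer(github_credentials=credentials)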
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_commit_comments","title":"query_viewer_commit_comments async","text":"

    A list of commit comments made by this user.

    Parameters:

    Name Type Description Default github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_commit_comments(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of commit comments made by this user.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).commit_comments(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"commitComments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"commitComments\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_contributions_collection","title":"query_viewer_contributions_collection async","text":"

    The collection of contributions this user has made to different repositories.

    Parameters:

    Name Type Description Default github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required organization_id str

    The ID of the organization used to filter contributions.

    None from_ datetime

    Only contributions made at this time or later will be counted. If omitted, defaults to a year ago.

    None to datetime

    Only contributions made up to and including this time will be counted. If omitted, defaults to the current time or one year from the provided from argument.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_contributions_collection(  # noqa\n    github_credentials: GitHubCredentials,\n    organization_id: str = None,\n    from_: datetime = None,\n    to: datetime = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The collection of contributions this user has made to different repositories.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        organization_id: The ID of the organization\n            used to filter contributions.\n        from_: Only contributions made at this time or\n            later will be counted. If omitted, defaults to a year ago.\n        to: Only contributions made before and up to\n            (including) this time will be counted. If omitted, defaults\n            to the current time or one year from the provided from\n            argument.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).contributions_collection(\n        **strip_kwargs(\n            organization_id=organization_id,\n            from_=from_,\n            to=to,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"contributionsCollection\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"contributionsCollection\"]\n
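    A sketch that narrows the contributions window with from_; to is left at its default (the current time), and the block name is a placeholder:

    from datetime import datetime, timedelta, timezone
    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.viewer import query_viewer_contributions_collection

    @flow
    async def last_month_contributions():
        credentials = await GitHubCredentials.load("github-creds")  # placeholder block name
        # Count only contributions from the last 30 days.
        return await query_viewer_contributions_collection(
            github_credentials=credentials,
            from_=datetime.now(timezone.utc) - timedelta(days=30),
        )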
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_followers","title":"query_viewer_followers async","text":"

    A list of users the given user is followed by.

    Parameters:

    Name Type Description Default github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_followers(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of users the given user is followed by.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).followers(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"followers\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"followers\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_following","title":"query_viewer_following async","text":"

    A list of users the given user is following.

    Parameters:

    Name Type Description Default github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_following(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of users the given user is following.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).following(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"following\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"following\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_gist","title":"query_viewer_gist async","text":"

    Find gist by repo name.

    Parameters:

    Name Type Description Default name str

    The gist name to find.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_gist(  # noqa\n    name: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find gist by repo name.\n\n    Args:\n        name: The gist name to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).gist(\n        **strip_kwargs(\n            name=name,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"gist\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"gist\"]\n
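    A sketch that looks up one of the viewer's gists; the gist name is passed in as a flow parameter and the block name is a placeholder:

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.viewer import query_viewer_gist

    @flow
    async def fetch_gist(gist_name: str):
        credentials = await GitHubCredentials.load("github-creds")  # placeholder block name
        # gist_name is the identifier portion of the gist URL.
        return await query_viewer_gist(
            name=gist_name,
            github_credentials=credentials,
        )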
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_gist_comments","title":"query_viewer_gist_comments async","text":"

    A list of gist comments made by this user.

    Parameters:

    Name Type Description Default github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_gist_comments(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of gist comments made by this user.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).gist_comments(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"gistComments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"gistComments\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_gists","title":"query_viewer_gists async","text":"

    A list of the Gists the user has created.

    Parameters:

    Name Type Description Default github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required privacy GistPrivacy

    Filters Gists according to privacy.

    None order_by GistOrder

    Ordering options for gists returned from the connection.

    None after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_gists(  # noqa\n    github_credentials: GitHubCredentials,\n    privacy: graphql_schema.GistPrivacy = None,\n    order_by: graphql_schema.GistOrder = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of the Gists the user has created.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        privacy: Filters Gists according to privacy.\n        order_by: Ordering options for gists returned from the connection.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).gists(\n        **strip_kwargs(\n            privacy=privacy,\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"gists\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"gists\"]\n
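    A sketch listing the viewer's newest public gists; PUBLIC is a GistPrivacy value, CREATED_AT a GistOrderField, and the block name is a placeholder:

    from prefect import flow
    from prefect_github import GitHubCredentials
    from prefect_github.viewer import query_viewer_gists

    @flow
    async def newest_public_gists():
        credentials = await GitHubCredentials.load("github-creds")  # placeholder block name
        return await query_viewer_gists(
            github_credentials=credentials,
            privacy="PUBLIC",
            order_by={"field": "CREATED_AT", "direction": "DESC"},
            first=10,
        )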
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_interaction_ability","title":"query_viewer_interaction_ability async","text":"

    The interaction ability settings for this user.

    Parameters:

    Name Type Description Default github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_interaction_ability(  # noqa\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The interaction ability settings for this user.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).interaction_ability(**strip_kwargs())\n\n    op_stack = (\n        \"viewer\",\n        \"interactionAbility\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"interactionAbility\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_issue_comments","title":"query_viewer_issue_comments async","text":"

    A list of issue comments made by this user.

    Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| order_by | IssueCommentOrder | Ordering options for issue comments returned from the connection. | None |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_issue_comments(  # noqa\n    github_credentials: GitHubCredentials,\n    order_by: graphql_schema.IssueCommentOrder = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of issue comments made by this user.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        order_by: Ordering options for issue comments returned\n            from the connection.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).issue_comments(\n        **strip_kwargs(\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"issueComments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"issueComments\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_issues","title":"query_viewer_issues async","text":"

    A list of issues associated with this user.

    Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| labels | Iterable[str] | A list of label names to filter the pull requests by. | required |
| states | Iterable[IssueState] | A list of states to filter the issues by. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| order_by | IssueOrder | Ordering options for issues returned from the connection. | None |
| filter_by | IssueFilters | Filtering options for issues returned from the connection. | None |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_issues(  # noqa\n    labels: Iterable[str],\n    states: Iterable[graphql_schema.IssueState],\n    github_credentials: GitHubCredentials,\n    order_by: graphql_schema.IssueOrder = None,\n    filter_by: graphql_schema.IssueFilters = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of issues associated with this user.\n\n    Args:\n        labels: A list of label names to filter the pull requests by.\n        states: A list of states to filter the issues by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        order_by: Ordering options for issues returned from the\n            connection.\n        filter_by: Filtering options for issues returned from the\n            connection.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).issues(\n        **strip_kwargs(\n            labels=labels,\n            states=states,\n            order_by=order_by,\n            filter_by=filter_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"issues\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"issues\"]\n
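For illustration, a hedged example that passes the two required arguments; the label name, state value, page size, and credentials block name are placeholders rather than values from the source.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_issues


@flow
async def open_bug_issues():
    # Hypothetical block name; load your own saved GitHubCredentials block.
    credentials = await GitHubCredentials.load("github-credentials")
    # labels and states are required; these filter values are illustrative.
    return await query_viewer_issues(
        labels=["bug"],
        states=["OPEN"],
        github_credentials=credentials,
        first=20,
    )
```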
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_item_showcase","title":"query_viewer_item_showcase async","text":"

    Showcases a selection of repositories and gists that the profile owner has either curated or that have been selected automatically based on popularity.

    Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_item_showcase(  # noqa\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Showcases a selection of repositories and gists that the profile owner has\n    either curated or that have been selected automatically based on popularity.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).item_showcase(**strip_kwargs())\n\n    op_stack = (\n        \"viewer\",\n        \"itemShowcase\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"itemShowcase\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_organization","title":"query_viewer_organization async","text":"

    Find an organization by its login that the user belongs to.

    Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The login of the organization to find. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_organization(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find an organization by its login that the user belongs to.\n\n    Args:\n        login: The login of the organization to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).organization(\n        **strip_kwargs(\n            login=login,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"organization\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"organization\"]\n
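A short usage sketch; "PrefectHQ" and the credentials block name are placeholder values, assuming the authenticated viewer belongs to that organization.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_organization


@flow
async def get_org():
    # Hypothetical block name; load your own saved GitHubCredentials block.
    credentials = await GitHubCredentials.load("github-credentials")
    # Example login; use an organization the viewer actually belongs to.
    return await query_viewer_organization(
        login="PrefectHQ",
        github_credentials=credentials,
    )
```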
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_organizations","title":"query_viewer_organizations async","text":"

    A list of organizations the user belongs to.

    Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_organizations(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of organizations the user belongs to.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).organizations(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"organizations\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"organizations\"]\n
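A sketch of cursor-based pagination over this connection. It assumes the selected return fields expose nodes and pageInfo, which is not guaranteed by the defaults; the block name and page size are also placeholders.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_organizations


@flow
async def all_viewer_orgs():
    # Hypothetical block name; load your own saved GitHubCredentials block.
    credentials = await GitHubCredentials.load("github-credentials")
    orgs, cursor = [], None
    while True:
        # Fetch one page of up to 10 organizations, starting after the last cursor.
        page = await query_viewer_organizations(
            github_credentials=credentials,
            first=10,
            after=cursor,
        )
        # Assumes the requested fields include nodes and pageInfo;
        # adjust these keys to match the fields you actually select.
        orgs.extend(page.get("nodes", []))
        page_info = page.get("pageInfo", {})
        if not page_info.get("hasNextPage"):
            break
        cursor = page_info.get("endCursor")
    return orgs
```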
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_packages","title":"query_viewer_packages async","text":"

    A list of packages under the owner.

    Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| names | Iterable[str] | Find packages by their names. | None |
| repository_id | str | Find packages in a repository by ID. | None |
| package_type | PackageType | Filter registry package by type. | None |
| order_by | PackageOrder | Ordering of the returned packages. | {'field': 'CREATED_AT', 'direction': 'DESC'} |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_packages(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    names: Iterable[str] = None,\n    repository_id: str = None,\n    package_type: graphql_schema.PackageType = None,\n    order_by: graphql_schema.PackageOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of packages under the owner.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        names: Find packages by their names.\n        repository_id: Find packages in a repository by ID.\n        package_type: Filter registry package by type.\n        order_by: Ordering of the returned packages.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).packages(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            names=names,\n            repository_id=repository_id,\n            package_type=package_type,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"packages\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"packages\"]\n
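An example call that mirrors the dict-style order_by default from the source; the NPM package type, page size, and credentials block name are assumptions for illustration.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_packages


@flow
async def newest_npm_packages():
    # Hypothetical block name; load your own saved GitHubCredentials block.
    credentials = await GitHubCredentials.load("github-credentials")
    # order_by follows the default shown in the source above.
    return await query_viewer_packages(
        github_credentials=credentials,
        package_type="NPM",
        order_by={"field": "CREATED_AT", "direction": "DESC"},
        first=5,
    )
```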
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_pinnable_items","title":"query_viewer_pinnable_items async","text":"

    A list of repositories and gists this profile owner can pin to their profile.

    Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| types | Iterable[PinnableItemType] | Filter the types of pinnable items that are returned. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_pinnable_items(  # noqa\n    types: Iterable[graphql_schema.PinnableItemType],\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories and gists this profile owner can pin to their profile.\n\n    Args:\n        types: Filter the types of pinnable items that are\n            returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).pinnable_items(\n        **strip_kwargs(\n            types=types,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"pinnableItems\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"pinnableItems\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_pinned_items","title":"query_viewer_pinned_items async","text":"

    A list of repositories and gists this profile owner has pinned to their profile.

    Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| types | Iterable[PinnableItemType] | Filter the types of pinned items that are returned. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_pinned_items(  # noqa\n    types: Iterable[graphql_schema.PinnableItemType],\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories and gists this profile owner has pinned to their profile.\n\n    Args:\n        types: Filter the types of pinned items that are returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).pinned_items(\n        **strip_kwargs(\n            types=types,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"pinnedItems\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"pinnedItems\"]\n
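A sketch passing the required types argument; "REPOSITORY" is one valid PinnableItemType value, and the block name and page size are placeholders.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_pinned_items


@flow
async def pinned_repositories():
    # Hypothetical block name; load your own saved GitHubCredentials block.
    credentials = await GitHubCredentials.load("github-credentials")
    # types is required; restrict the result to pinned repositories.
    return await query_viewer_pinned_items(
        types=["REPOSITORY"],
        github_credentials=credentials,
        first=6,
    )
```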
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_project","title":"query_viewer_project async","text":"

    Find project by number.

    Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| number | int | The project number to find. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_project(  # noqa\n    number: int,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find project by number.\n\n    Args:\n        number: The project number to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).project(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"project\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"project\"]\n
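A minimal example; the project number and credentials block name are placeholders.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_project


@flow
async def first_project():
    # Hypothetical block name; load your own saved GitHubCredentials block.
    credentials = await GitHubCredentials.load("github-credentials")
    # Look up project number 1 on the viewer's profile (example number).
    return await query_viewer_project(
        number=1,
        github_credentials=credentials,
    )
```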
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_project_next","title":"query_viewer_project_next async","text":"

    Find a project by project (beta) number.

    Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| number | int | The project (beta) number. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_project_next(  # noqa\n    number: int,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find a project by project (beta) number.\n\n    Args:\n        number: The project (beta) number.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).project_next(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"projectNext\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"projectNext\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_project_v2","title":"query_viewer_project_v2 async","text":"

    Find a project by number.

    Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| number | int | The project number. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_project_v2(  # noqa\n    number: int,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find a project by number.\n\n    Args:\n        number: The project number.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).project_v2(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"projectV2\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"projectV2\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_projects","title":"query_viewer_projects async","text":"

    A list of projects under the owner.

    Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| states | Iterable[ProjectState] | A list of states to filter the projects by. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| order_by | ProjectOrder | Ordering options for projects returned from the connection. | None |
| search | str | Query to search projects by, currently only searching by name. | None |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_projects(  # noqa\n    states: Iterable[graphql_schema.ProjectState],\n    github_credentials: GitHubCredentials,\n    order_by: graphql_schema.ProjectOrder = None,\n    search: str = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of projects under the owner.\n\n    Args:\n        states: A list of states to filter the projects by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        order_by: Ordering options for projects returned from the\n            connection.\n        search: Query to search projects by, currently only searching\n            by name.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).projects(\n        **strip_kwargs(\n            states=states,\n            order_by=order_by,\n            search=search,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"projects\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"projects\"]\n
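An illustrative call with the required states argument; the search string, page size, and block name are assumptions, not values from the source.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_projects


@flow
async def open_roadmap_projects():
    # Hypothetical block name; load your own saved GitHubCredentials block.
    credentials = await GitHubCredentials.load("github-credentials")
    # states is required; search matches project names only.
    return await query_viewer_projects(
        states=["OPEN"],
        github_credentials=credentials,
        search="roadmap",
        first=10,
    )
```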
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_projects_next","title":"query_viewer_projects_next async","text":"

    A list of projects (beta) under the owner.

    Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| query | str | A project (beta) to search for under the owner. | None |
| sort_by | ProjectNextOrderField | How to order the returned projects (beta). | 'TITLE' |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_projects_next(  # noqa\n    github_credentials: GitHubCredentials,\n    query: str = None,\n    sort_by: graphql_schema.ProjectNextOrderField = \"TITLE\",\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of projects (beta) under the owner.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        query: A project (beta) to search for under the the owner.\n        sort_by: How to order the returned projects (beta).\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).projects_next(\n        **strip_kwargs(\n            query=query,\n            sort_by=sort_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"projectsNext\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"projectsNext\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_projects_v2","title":"query_viewer_projects_v2 async","text":"

    A list of projects under the owner.

    Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| query | str | A project to search for under the owner. | None |
| order_by | ProjectV2Order | How to order the returned projects. | {'field': 'NUMBER', 'direction': 'DESC'} |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_projects_v2(  # noqa\n    github_credentials: GitHubCredentials,\n    query: str = None,\n    order_by: graphql_schema.ProjectV2Order = {\"field\": \"NUMBER\", \"direction\": \"DESC\"},\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of projects under the owner.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        query: A project to search for under the the owner.\n        order_by: How to order the returned projects.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).projects_v2(\n        **strip_kwargs(\n            query=query,\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"projectsV2\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"projectsV2\"]\n
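A sketch that reuses the dict-style order_by default from the source; the block name and page size are placeholders.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_projects_v2


@flow
async def latest_projects_v2():
    # Hypothetical block name; load your own saved GitHubCredentials block.
    credentials = await GitHubCredentials.load("github-credentials")
    # order_by mirrors the default shown in the source above.
    return await query_viewer_projects_v2(
        github_credentials=credentials,
        order_by={"field": "NUMBER", "direction": "DESC"},
        first=5,
    )
```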
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_public_keys","title":"query_viewer_public_keys async","text":"

    A list of public keys associated with this user.

    Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_public_keys(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of public keys associated with this user.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).public_keys(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"publicKeys\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"publicKeys\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_pull_requests","title":"query_viewer_pull_requests async","text":"

    A list of pull requests associated with this user.

    Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| states | Iterable[PullRequestState] | A list of states to filter the pull requests by. | required |
| labels | Iterable[str] | A list of label names to filter the pull requests by. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| head_ref_name | str | The head ref name to filter the pull requests by. | None |
| base_ref_name | str | The base ref name to filter the pull requests by. | None |
| order_by | IssueOrder | Ordering options for pull requests returned from the connection. | None |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_pull_requests(  # noqa\n    states: Iterable[graphql_schema.PullRequestState],\n    labels: Iterable[str],\n    github_credentials: GitHubCredentials,\n    head_ref_name: str = None,\n    base_ref_name: str = None,\n    order_by: graphql_schema.IssueOrder = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of pull requests associated with this user.\n\n    Args:\n        states: A list of states to filter the pull requests by.\n        labels: A list of label names to filter the pull requests\n            by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        head_ref_name: The head ref name to filter the pull\n            requests by.\n        base_ref_name: The base ref name to filter the pull\n            requests by.\n        order_by: Ordering options for pull requests returned from\n            the connection.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).pull_requests(\n        **strip_kwargs(\n            states=states,\n            labels=labels,\n            head_ref_name=head_ref_name,\n            base_ref_name=base_ref_name,\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"pullRequests\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"pullRequests\"]\n
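A hedged example with the required states and labels arguments; the label name, base branch, page size, and block name are illustrative values.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_pull_requests


@flow
async def merged_docs_prs():
    # Hypothetical block name; load your own saved GitHubCredentials block.
    credentials = await GitHubCredentials.load("github-credentials")
    # states and labels are required; base_ref_name narrows to one target branch.
    return await query_viewer_pull_requests(
        states=["MERGED"],
        labels=["docs"],
        github_credentials=credentials,
        base_ref_name="main",
        first=25,
    )
```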
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_recent_projects","title":"query_viewer_recent_projects async","text":"

    Recent projects that this user has modified in the context of the owner.

    Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_recent_projects(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Recent projects that this user has modified in the context of the owner.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).recent_projects(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"recentProjects\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"recentProjects\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_repositories","title":"query_viewer_repositories async","text":"

    A list of repositories that the user owns.

    Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| privacy | RepositoryPrivacy | If non-null, filters repositories according to privacy. | None |
| order_by | RepositoryOrder | Ordering options for repositories returned from the connection. | None |
| affiliations | Iterable[RepositoryAffiliation] | Array of viewer's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the current viewer owns. | None |
| owner_affiliations | Iterable[RepositoryAffiliation] | Array of owner's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the organization or user being viewed owns. | ('OWNER', 'COLLABORATOR') |
| is_locked | bool | If non-null, filters repositories according to whether they have been locked. | None |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| is_fork | bool | If non-null, filters repositories according to whether they are forks of another repository. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_repositories(  # noqa\n    github_credentials: GitHubCredentials,\n    privacy: graphql_schema.RepositoryPrivacy = None,\n    order_by: graphql_schema.RepositoryOrder = None,\n    affiliations: Iterable[graphql_schema.RepositoryAffiliation] = None,\n    owner_affiliations: Iterable[graphql_schema.RepositoryAffiliation] = (\n        \"OWNER\",\n        \"COLLABORATOR\",\n    ),\n    is_locked: bool = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    is_fork: bool = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories that the user owns.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        privacy: If non-null, filters repositories according to\n            privacy.\n        order_by: Ordering options for repositories returned from\n            the connection.\n        affiliations: Array of viewer's affiliation options for\n            repositories returned from the connection. For example,\n            OWNER will include only repositories that the current viewer\n            owns.\n        owner_affiliations: Array of owner's affiliation options\n            for repositories returned from the connection. For example,\n            OWNER will include only repositories that the organization\n            or user being viewed owns.\n        is_locked: If non-null, filters repositories according to\n            whether they have been locked.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        is_fork: If non-null, filters repositories according to\n            whether they are forks of another repository.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).repositories(\n        **strip_kwargs(\n            privacy=privacy,\n            order_by=order_by,\n            affiliations=affiliations,\n            owner_affiliations=owner_affiliations,\n            is_locked=is_locked,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            is_fork=is_fork,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"repositories\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"repositories\"]\n
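A sketch that keeps the OWNER/COLLABORATOR owner_affiliations default and filters by privacy and fork status; the specific filter values and block name are assumptions.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_repositories


@flow
async def public_source_repos():
    # Hypothetical block name; load your own saved GitHubCredentials block.
    credentials = await GitHubCredentials.load("github-credentials")
    # Exclude forks and private repositories; owner_affiliations keeps its default.
    return await query_viewer_repositories(
        github_credentials=credentials,
        privacy="PUBLIC",
        is_fork=False,
        first=50,
    )
```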
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_repositories_contributed_to","title":"query_viewer_repositories_contributed_to async","text":"

    A list of repositories that the user recently contributed to.

    Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| privacy | RepositoryPrivacy | If non-null, filters repositories according to privacy. | None |
| order_by | RepositoryOrder | Ordering options for repositories returned from the connection. | None |
| is_locked | bool | If non-null, filters repositories according to whether they have been locked. | None |
| include_user_repositories | bool | If true, include user repositories. | None |
| contribution_types | Iterable[RepositoryContributionType] | If non-null, include only the specified types of contributions. The GitHub.com UI uses [COMMIT, ISSUE, PULL_REQUEST, REPOSITORY]. | None |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_repositories_contributed_to(  # noqa\n    github_credentials: GitHubCredentials,\n    privacy: graphql_schema.RepositoryPrivacy = None,\n    order_by: graphql_schema.RepositoryOrder = None,\n    is_locked: bool = None,\n    include_user_repositories: bool = None,\n    contribution_types: Iterable[graphql_schema.RepositoryContributionType] = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories that the user recently contributed to.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        privacy: If non-null, filters repositories\n            according to privacy.\n        order_by: Ordering options for repositories\n            returned from the connection.\n        is_locked: If non-null, filters repositories\n            according to whether they have been locked.\n        include_user_repositories: If true, include\n            user repositories.\n        contribution_types: If non-null, include\n            only the specified types of contributions. The GitHub.com UI\n            uses [COMMIT, ISSUE, PULL_REQUEST, REPOSITORY].\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list\n            that come before the specified cursor.\n        first: Returns the first _n_ elements from\n            the list.\n        last: Returns the last _n_ elements from the\n            list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).repositories_contributed_to(\n        **strip_kwargs(\n            privacy=privacy,\n            order_by=order_by,\n            is_locked=is_locked,\n            include_user_repositories=include_user_repositories,\n            contribution_types=contribution_types,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"repositoriesContributedTo\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"repositoriesContributedTo\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_repository","title":"query_viewer_repository async","text":"

    Find Repository.

    Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | Name of Repository to find. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_repository(  # noqa\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find Repository.\n\n    Args:\n        name: Name of Repository to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).repository(\n        **strip_kwargs(\n            name=name,\n            follow_renames=follow_renames,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"repository\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"repository\"]\n
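A minimal example; "prefect" is a placeholder repository name owned by the viewer, and the credentials block name is an assumption.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_repository


@flow
async def get_repo():
    # Hypothetical block name; load your own saved GitHubCredentials block.
    credentials = await GitHubCredentials.load("github-credentials")
    # Renames are followed by default, matching the signature above.
    return await query_viewer_repository(
        name="prefect",
        github_credentials=credentials,
        follow_renames=True,
    )
```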
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_repository_discussion_comments","title":"query_viewer_repository_discussion_comments async","text":"

    Discussion comments this user has authored.

    Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| repository_id | str | Filter discussion comments to only those in a specific repository. | None |
| only_answers | bool | Filter discussion comments to only those that were marked as the answer. | False |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_repository_discussion_comments(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    repository_id: str = None,\n    only_answers: bool = False,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Discussion comments this user has authored.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list\n            that come after the specified cursor.\n        before: Returns the elements in the list\n            that come before the specified cursor.\n        first: Returns the first _n_ elements\n            from the list.\n        last: Returns the last _n_ elements from\n            the list.\n        repository_id: Filter discussion comments\n            to only those in a specific repository.\n        only_answers: Filter discussion comments\n            to only those that were marked as the answer.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).repository_discussion_comments(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            repository_id=repository_id,\n            only_answers=only_answers,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"repositoryDiscussionComments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"repositoryDiscussionComments\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_repository_discussions","title":"query_viewer_repository_discussions async","text":"

    Discussions this user has started.

    Parameters:

    Name Type Description Default github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None order_by DiscussionOrder

    Ordering options for discussions returned from the connection.

    {'field': 'CREATED_AT', 'direction': 'DESC'} repository_id str

    Filter discussions to only those in a specific repository.

    None answered bool

    Filter discussions to only those that have been answered or not. Defaults to including both answered and unanswered discussions.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_repository_discussions(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.DiscussionOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    repository_id: str = None,\n    answered: bool = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Discussions this user has started.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the\n            list.\n        order_by: Ordering options for discussions\n            returned from the connection.\n        repository_id: Filter discussions to only those\n            in a specific repository.\n        answered: Filter discussions to only those that\n            have been answered or not. Defaults to including both\n            answered and unanswered discussions.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).repository_discussions(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n            repository_id=repository_id,\n            answered=answered,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"repositoryDiscussions\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"repositoryDiscussions\"]\n
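
    As a hedged sketch, the default ordering can be overridden by passing a DiscussionOrder-shaped dict; the field and direction values follow the GitHub GraphQL schema, and the block name is illustrative:

    from prefect import flow\nfrom prefect_github import GitHubCredentials\nfrom prefect_github.viewer import query_viewer_repository_discussions\n\n\n@flow\nasync def oldest_discussions_flow():\n    github_credentials = await GitHubCredentials.load(\"my-github-creds\")  # hypothetical block name\n    # Ascending creation order instead of the DESC default\n    return await query_viewer_repository_discussions(\n        github_credentials=github_credentials,\n        first=5,\n        order_by={\"field\": \"CREATED_AT\", \"direction\": \"ASC\"},\n    )\n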
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_saved_replies","title":"query_viewer_saved_replies async","text":"

    Replies this user has saved.

    Parameters:

    Name Type Description Default github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None order_by SavedReplyOrder

    The field to order saved replies by.

    {'field': 'UPDATED_AT', 'direction': 'DESC'} return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_saved_replies(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.SavedReplyOrder = {\n        \"field\": \"UPDATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Replies this user has saved.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        order_by: The field to order saved replies by.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).saved_replies(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"savedReplies\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"savedReplies\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_sponsoring","title":"query_viewer_sponsoring async","text":"

    List of users and organizations this entity is sponsoring.

    Parameters:

    Name Type Description Default github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None order_by SponsorOrder

    Ordering options for the users and organizations returned from the connection.

    {'field': 'RELEVANCE', 'direction': 'DESC'} return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_sponsoring(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.SponsorOrder = {\"field\": \"RELEVANCE\", \"direction\": \"DESC\"},\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of users and organizations this entity is sponsoring.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        order_by: Ordering options for the users and organizations\n            returned from the connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).sponsoring(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"sponsoring\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"sponsoring\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_sponsors","title":"query_viewer_sponsors async","text":"

    List of sponsors for this user or organization.

    Parameters:

    Name Type Description Default github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None tier_id str

    If given, will filter for sponsors at the given tier. Will only return sponsors whose tier the viewer is permitted to see.

    None order_by SponsorOrder

    Ordering options for sponsors returned from the connection.

    {'field': 'RELEVANCE', 'direction': 'DESC'} return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_sponsors(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    tier_id: str = None,\n    order_by: graphql_schema.SponsorOrder = {\"field\": \"RELEVANCE\", \"direction\": \"DESC\"},\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of sponsors for this user or organization.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        tier_id: If given, will filter for sponsors at the given tier.\n            Will only return sponsors whose tier the viewer is permitted\n            to see.\n        order_by: Ordering options for sponsors returned from the\n            connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).sponsors(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            tier_id=tier_id,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"sponsors\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"sponsors\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_sponsors_activities","title":"query_viewer_sponsors_activities async","text":"

    Events involving this sponsorable, such as new sponsorships.

    Parameters:

    Name Type Description Default actions Iterable[SponsorsActivityAction]

    Filter activities to only the specified actions.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None period SponsorsActivityPeriod

    Filter activities returned to only those that occurred in the most recent specified time period. Set to ALL to avoid filtering by when the activity occurred.

    'MONTH' order_by SponsorsActivityOrder

    Ordering options for activity returned from the connection.

    {'field': 'TIMESTAMP', 'direction': 'DESC'} return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_sponsors_activities(  # noqa\n    actions: Iterable[graphql_schema.SponsorsActivityAction],\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    period: graphql_schema.SponsorsActivityPeriod = \"MONTH\",\n    order_by: graphql_schema.SponsorsActivityOrder = {\n        \"field\": \"TIMESTAMP\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Events involving this sponsorable, such as new sponsorships.\n\n    Args:\n        actions: Filter activities to only the specified\n            actions.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        period: Filter activities returned to only those\n            that occurred in the most recent specified time period. Set\n            to ALL to avoid filtering by when the activity occurred.\n        order_by: Ordering options for activity returned\n            from the connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).sponsors_activities(\n        **strip_kwargs(\n            actions=actions,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            period=period,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"sponsorsActivities\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"sponsorsActivities\"]\n
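
    A minimal sketch of supplying the required actions argument; the enum values follow the GitHub GraphQL SponsorsActivityAction and SponsorsActivityPeriod types, and the block name is illustrative:

    from prefect import flow\nfrom prefect_github import GitHubCredentials\nfrom prefect_github.viewer import query_viewer_sponsors_activities\n\n\n@flow\nasync def sponsorship_activity_flow():\n    github_credentials = await GitHubCredentials.load(\"my-github-creds\")  # hypothetical block name\n    # actions is required; NEW_SPONSORSHIP and CANCELLED_SPONSORSHIP are SponsorsActivityAction values\n    return await query_viewer_sponsors_activities(\n        actions=[\"NEW_SPONSORSHIP\", \"CANCELLED_SPONSORSHIP\"],\n        github_credentials=github_credentials,\n        period=\"YEAR\",\n    )\n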
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_sponsors_listing","title":"query_viewer_sponsors_listing async","text":"

    The GitHub Sponsors listing for this user or organization.

    Parameters:

    Name Type Description Default github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_sponsors_listing(  # noqa\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The GitHub Sponsors listing for this user or organization.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).sponsors_listing(**strip_kwargs())\n\n    op_stack = (\n        \"viewer\",\n        \"sponsorsListing\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"sponsorsListing\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_sponsorship_for_viewer_as_sponsor","title":"query_viewer_sponsorship_for_viewer_as_sponsor async","text":"

    The sponsorship from the viewer to this user/organization; that is, the sponsorship where you're the sponsor. Only returns a sponsorship if it is active.

    Parameters:

    Name Type Description Default github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_sponsorship_for_viewer_as_sponsor(  # noqa\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The sponsorship from the viewer to this user/organization; that is, the\n    sponsorship where you're the sponsor. Only returns a sponsorship if it is\n    active.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).sponsorship_for_viewer_as_sponsor(\n        **strip_kwargs()\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"sponsorshipForViewerAsSponsor\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"sponsorshipForViewerAsSponsor\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_sponsorship_for_viewer_as_sponsorable","title":"query_viewer_sponsorship_for_viewer_as_sponsorable async","text":"

    The sponsorship from this user/organization to the viewer; that is, the sponsorship you're receiving. Only returns a sponsorship if it is active.

    Parameters:

    Name Type Description Default github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_sponsorship_for_viewer_as_sponsorable(  # noqa\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The sponsorship from this user/organization to the viewer; that is, the\n    sponsorship you're receiving. Only returns a sponsorship if it is active.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).sponsorship_for_viewer_as_sponsorable(\n        **strip_kwargs()\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"sponsorshipForViewerAsSponsorable\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"sponsorshipForViewerAsSponsorable\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_sponsorship_newsletters","title":"query_viewer_sponsorship_newsletters async","text":"

    List of sponsorship updates sent from this sponsorable to sponsors.

    Parameters:

    Name Type Description Default github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None order_by SponsorshipNewsletterOrder

    Ordering options for sponsorship updates returned from the connection.

    {'field': 'CREATED_AT', 'direction': 'DESC'} return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_sponsorship_newsletters(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.SponsorshipNewsletterOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of sponsorship updates sent from this sponsorable to sponsors.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the\n            list.\n        order_by: Ordering options for sponsorship\n            updates returned from the connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).sponsorship_newsletters(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"sponsorshipNewsletters\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"sponsorshipNewsletters\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_sponsorships_as_maintainer","title":"query_viewer_sponsorships_as_maintainer async","text":"

    This object's sponsorships as the maintainer.

    Parameters:

    Name Type Description Default github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None include_private bool

    Whether or not to include private sponsorships in the result set.

    False order_by SponsorshipOrder

    Ordering options for sponsorships returned from this connection. If left blank, the sponsorships will be ordered based on relevancy to the viewer.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_sponsorships_as_maintainer(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    include_private: bool = False,\n    order_by: graphql_schema.SponsorshipOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    This object's sponsorships as the maintainer.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from\n            the list.\n        last: Returns the last _n_ elements from the\n            list.\n        include_private: Whether or not to include\n            private sponsorships in the result set.\n        order_by: Ordering options for sponsorships\n            returned from this connection. If left blank, the\n            sponsorships will be ordered based on relevancy to the\n            viewer.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).sponsorships_as_maintainer(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            include_private=include_private,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"sponsorshipsAsMaintainer\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"sponsorshipsAsMaintainer\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_sponsorships_as_sponsor","title":"query_viewer_sponsorships_as_sponsor async","text":"

    This object's sponsorships as the sponsor.

    Parameters:

    Name Type Description Default github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None order_by SponsorshipOrder

    Ordering options for sponsorships returned from this connection. If left blank, the sponsorships will be ordered based on relevancy to the viewer.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_sponsorships_as_sponsor(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.SponsorshipOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    This object's sponsorships as the sponsor.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the\n            list.\n        order_by: Ordering options for sponsorships\n            returned from this connection. If left blank, the\n            sponsorships will be ordered based on relevancy to the\n            viewer.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).sponsorships_as_sponsor(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"sponsorshipsAsSponsor\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"sponsorshipsAsSponsor\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_starred_repositories","title":"query_viewer_starred_repositories async","text":"

    Repositories the user has starred.

    Parameters:

    Name Type Description Default github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None owned_by_viewer bool

    Filters starred repositories to only return repositories owned by the viewer.

    None order_by StarOrder

    Order for connection.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_starred_repositories(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    owned_by_viewer: bool = None,\n    order_by: graphql_schema.StarOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Repositories the user has starred.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the list.\n        owned_by_viewer: Filters starred repositories to\n            only return repositories owned by the viewer.\n        order_by: Order for connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).starred_repositories(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            owned_by_viewer=owned_by_viewer,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"starredRepositories\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"starredRepositories\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_status","title":"query_viewer_status async","text":"

    The user's description of what they're currently doing.

    Parameters:

    Name Type Description Default github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_status(  # noqa\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The user's description of what they're currently doing.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).status(**strip_kwargs())\n\n    op_stack = (\n        \"viewer\",\n        \"status\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"status\"]\n
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_top_repositories","title":"query_viewer_top_repositories async","text":"

    Repositories the user has contributed to, ordered by contribution rank, plus repositories the user has created.

    Parameters:

    Name Type Description Default order_by RepositoryOrder

    Ordering options for repositories returned from the connection.

    required github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None since datetime

    How far back in time to fetch contributed repositories.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_top_repositories(  # noqa\n    order_by: graphql_schema.RepositoryOrder,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    since: datetime = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Repositories the user has contributed to, ordered by contribution rank, plus\n    repositories the user has created.\n\n    Args:\n        order_by: Ordering options for repositories returned\n            from the connection.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        since: How far back in time to fetch contributed\n            repositories.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).top_repositories(\n        **strip_kwargs(\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            since=since,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"topRepositories\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"topRepositories\"]\n
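
    Because order_by is required here, a minimal sketch passes a RepositoryOrder-shaped dict; the field and direction values follow the GitHub GraphQL schema, and the block name is illustrative:

    from prefect import flow\nfrom prefect_github import GitHubCredentials\nfrom prefect_github.viewer import query_viewer_top_repositories\n\n\n@flow\nasync def top_repositories_flow():\n    github_credentials = await GitHubCredentials.load(\"my-github-creds\")  # hypothetical block name\n    # Most recently pushed-to repositories first\n    return await query_viewer_top_repositories(\n        order_by={\"field\": \"PUSHED_AT\", \"direction\": \"DESC\"},\n        github_credentials=github_credentials,\n        first=5,\n    )\n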
    "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_watching","title":"query_viewer_watching async","text":"

    A list of repositories the given user is watching.

    Parameters:

    Name Type Description Default github_credentials GitHubCredentials

    Credentials to use for authentication with GitHub.

    required privacy RepositoryPrivacy

    If non-null, filters repositories according to privacy.

    None order_by RepositoryOrder

    Ordering options for repositories returned from the connection.

    None affiliations Iterable[RepositoryAffiliation]

    Affiliation options for repositories returned from the connection. If none specified, the results will include repositories for which the current viewer is an owner or collaborator, or member.

    None owner_affiliations Iterable[RepositoryAffiliation]

    Array of owner's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the organization or user being viewed owns.

    ('OWNER', 'COLLABORATOR') is_locked bool

    If non-null, filters repositories according to whether they have been locked.

    None after str

    Returns the elements in the list that come after the specified cursor.

    None before str

    Returns the elements in the list that come before the specified cursor.

    None first int

    Returns the first n elements from the list.

    None last int

    Returns the last n elements from the list.

    None return_fields Iterable[str]

    Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

    None

    Returns:

    Type Description Dict[str, Any]

    A dict of the returned fields.

    Source code in prefect_github/viewer.py
    @task\nasync def query_viewer_watching(  # noqa\n    github_credentials: GitHubCredentials,\n    privacy: graphql_schema.RepositoryPrivacy = None,\n    order_by: graphql_schema.RepositoryOrder = None,\n    affiliations: Iterable[graphql_schema.RepositoryAffiliation] = None,\n    owner_affiliations: Iterable[graphql_schema.RepositoryAffiliation] = (\n        \"OWNER\",\n        \"COLLABORATOR\",\n    ),\n    is_locked: bool = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories the given user is watching.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        privacy: If non-null, filters repositories according to privacy.\n        order_by: Ordering options for repositories returned from the\n            connection.\n        affiliations: Affiliation options for repositories returned\n            from the connection. If none specified, the results will\n            include repositories for which the current viewer is an\n            owner or collaborator, or member.\n        owner_affiliations: Array of owner's affiliation options for\n            repositories returned from the connection. For example,\n            OWNER will include only repositories that the organization\n            or user being viewed owns.\n        is_locked: If non-null, filters repositories according to\n            whether they have been locked.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).watching(\n        **strip_kwargs(\n            privacy=privacy,\n            order_by=order_by,\n            affiliations=affiliations,\n            owner_affiliations=owner_affiliations,\n            is_locked=is_locked,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"watching\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"watching\"]\n
    "},{"location":"integrations/prefect-gitlab/","title":"prefect-gitlab","text":""},{"location":"integrations/prefect-gitlab/#welcome","title":"Welcome!","text":"

    prefect-gitlab is a Prefect collection for working with GitLab repositories.

    "},{"location":"integrations/prefect-gitlab/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-gitlab/#python-setup","title":"Python setup","text":"

    Requires an installation of Python 3.8 or higher.

    We recommend using a Python virtual environment manager such as pipenv, conda, or virtualenv.

    This integration is designed to work with Prefect 2.3.0 or higher. For more information about how to use Prefect, please refer to the Prefect documentation.

    "},{"location":"integrations/prefect-gitlab/#installation","title":"Installation","text":"

    Install prefect-gitlab with pip:

    pip install prefect-gitlab\n

    Then, register the block types in this integration to view the storage block type on Prefect Cloud:

    prefect block register -m prefect_gitlab\n

    Note that to use the load method on a block, you must already have a block document saved.
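
    For example, once a block document named my-gitlab-block has been saved (as in the examples below), it can be loaded by name; this is a minimal sketch:

    from prefect_gitlab import GitLabRepository\n\n# Load a previously saved block document by name\ngitlab_block = GitLabRepository.load(\"my-gitlab-block\")\n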

    "},{"location":"integrations/prefect-gitlab/#creating-a-gitlab-storage-block","title":"Creating a GitLab storage block","text":""},{"location":"integrations/prefect-gitlab/#in-python","title":"In Python","text":"
    from prefect_gitlab import GitLabRepository\n\n# public GitLab repository\npublic_gitlab_block = GitLabRepository(\n    name=\"my-gitlab-block\",\n    repository=\"https://gitlab.com/testing/my-repository.git\"\n)\n\npublic_gitlab_block.save()\n\n\n# specific branch or tag of a GitLab repository\nbranch_gitlab_block = GitLabRepository(\n    name=\"my-gitlab-block\",\n    reference=\"branch-or-tag-name\",\n    repository=\"https://gitlab.com/testing/my-repository.git\"\n)\n\nbranch_gitlab_block.save()\n\n\n# Get all history of a specific branch or tag of a GitLab repository\nbranch_gitlab_block = GitLabRepository(\n    name=\"my-gitlab-block\",\n    reference=\"branch-or-tag-name\",\n    git_depth=None,\n    repository=\"https://gitlab.com/testing/my-repository.git\"\n)\n\nbranch_gitlab_block.save()\n\n# private GitLab repository\nprivate_gitlab_block = GitLabRepository(\n    name=\"my-private-gitlab-block\",\n    repository=\"https://gitlab.com/testing/my-repository.git\",\n    access_token=\"YOUR_GITLAB_PERSONAL_ACCESS_TOKEN\"\n)\n\nprivate_gitlab_block.save()\n
    "},{"location":"integrations/prefect-gitlab/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-gitlab/credentials/#prefect_gitlab.credentials","title":"prefect_gitlab.credentials","text":"

    Module used to enable authenticated interactions with GitLab.

    "},{"location":"integrations/prefect-gitlab/credentials/#prefect_gitlab.credentials.GitLabCredentials","title":"GitLabCredentials","text":"

    Bases: Block

    Store a GitLab personal access token to interact with private GitLab repositories.

    Attributes:

    Name Type Description token SecretStr

    The personal access token to authenticate with GitLab.

    url str

    URL to self-hosted GitLab instances.

    Examples:

    Load stored GitLab credentials:

    from prefect_gitlab import GitLabCredentials\ngitlab_credentials_block = GitLabCredentials.load(\"BLOCK_NAME\")\n

    Source code in prefect_gitlab/credentials.py
    class GitLabCredentials(Block):\n    \"\"\"\n    Store a GitLab personal access token to interact with private GitLab\n    repositories.\n\n    Attributes:\n        token: The personal access token to authenticate with GitLab.\n        url: URL to self-hosted GitLab instances.\n\n    Examples:\n        Load stored GitLab credentials:\n        ```python\n        from prefect_gitlab import GitLabCredentials\n        gitlab_credentials_block = GitLabCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"GitLab Credentials\"\n    _logo_url = HttpUrl(\n        url=\"https://images.ctfassets.net/gm98wzqotmnx/55edIimT4g9gbjhkh5a3Sp/dfdb9391d8f45c2e93e72e3a4d350771/gitlab-logo-500.png?h=250\",  # noqa\n        scheme=\"https\",\n    )\n\n    token: SecretStr = Field(\n        name=\"Personal Access Token\",\n        default=None,\n        description=\"A GitLab Personal Access Token with read_repository scope.\",\n    )\n    url: str = Field(\n        default=None, title=\"URL\", description=\"URL to self-hosted GitLab instances.\"\n    )\n\n    def get_client(self) -> Gitlab:\n        \"\"\"\n        Gets an authenticated GitLab client.\n\n        Returns:\n            An authenticated GitLab client.\n        \"\"\"\n        # ref: https://python-gitlab.readthedocs.io/en/stable/\n        gitlab = Gitlab(url=self.url, oauth_token=self.token.get_secret_value())\n        gitlab.auth()\n        return gitlab\n
    "},{"location":"integrations/prefect-gitlab/credentials/#prefect_gitlab.credentials.GitLabCredentials.get_client","title":"get_client","text":"

    Gets an authenticated GitLab client.

    Returns:

    Type Description Gitlab

    An authenticated GitLab client.

    Source code in prefect_gitlab/credentials.py
    def get_client(self) -> Gitlab:\n    \"\"\"\n    Gets an authenticated GitLab client.\n\n    Returns:\n        An authenticated GitLab client.\n    \"\"\"\n    # ref: https://python-gitlab.readthedocs.io/en/stable/\n    gitlab = Gitlab(url=self.url, oauth_token=self.token.get_secret_value())\n    gitlab.auth()\n    return gitlab\n
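
    A minimal usage sketch, assuming a GitLabCredentials block named my-gitlab-creds has been saved; the returned object is a python-gitlab Gitlab client:

    from prefect_gitlab import GitLabCredentials\n\n# Load stored credentials and build an authenticated python-gitlab client\ngitlab_credentials = GitLabCredentials.load(\"my-gitlab-creds\")\nclient = gitlab_credentials.get_client()\n\n# For example, list projects owned by the authenticated user\nprojects = client.projects.list(owned=True)\n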
    "},{"location":"integrations/prefect-gitlab/repositories/","title":"Repositories","text":""},{"location":"integrations/prefect-gitlab/repositories/#prefect_gitlab.repositories","title":"prefect_gitlab.repositories","text":"

    Integrations with GitLab.

    The GitLabRepository class in this collection is a storage block that lets Prefect agents pull Prefect flow code from GitLab repositories.

    The GitLabRepository block is ideally configured via the Prefect UI, but can also be used in Python, as the following examples demonstrate.

    Examples:

        from prefect_gitlab.repositories import GitLabRepository\n\n    # public GitLab repository\n    public_gitlab_block = GitLabRepository(\n        name=\"my-gitlab-block\",\n        repository=\"https://gitlab.com/testing/my-repository.git\"\n    )\n\n    public_gitlab_block.save()\n\n\n    # specific branch or tag of a GitLab repository\n    branch_gitlab_block = GitLabRepository(\n        name=\"my-gitlab-block\",\n        reference=\"branch-or-tag-name\",\n        repository=\"https://gitlab.com/testing/my-repository.git\"\n    )\n\n    branch_gitlab_block.save()\n\n\n    # private GitLab repository\n    private_gitlab_block = GitLabRepository(\n        name=\"my-private-gitlab-block\",\n        repository=\"https://gitlab.com/testing/my-repository.git\",\n        access_token=\"YOUR_GITLAB_PERSONAL_ACCESS_TOKEN\"\n    )\n\n    private_gitlab_block.save()\n
    "},{"location":"integrations/prefect-gitlab/repositories/#prefect_gitlab.repositories.GitLabRepository","title":"GitLabRepository","text":"

    Bases: ReadableDeploymentStorage

    Interact with files stored in GitLab repositories.

    An accessible installation of git is required for this block to function properly.

    Source code in prefect_gitlab/repositories.py
    class GitLabRepository(ReadableDeploymentStorage):\n    \"\"\"\n    Interact with files stored in GitLab repositories.\n\n    An accessible installation of git is required for this block to function\n    properly.\n    \"\"\"\n\n    _block_type_name = \"GitLab Repository\"\n    _logo_url = HttpUrl(\n        url=\"https://images.ctfassets.net/gm98wzqotmnx/55edIimT4g9gbjhkh5a3Sp/dfdb9391d8f45c2e93e72e3a4d350771/gitlab-logo-500.png?h=250\",  # noqa\n        scheme=\"https\",\n    )\n    _description = \"Interact with files stored in GitLab repositories.\"\n\n    repository: str = Field(\n        default=...,\n        description=(\n            \"The URL of a GitLab repository to read from, in either HTTP/HTTPS or SSH format.\"  # noqa\n        ),\n    )\n    reference: Optional[str] = Field(\n        default=None,\n        description=\"An optional reference to pin to; can be a branch name or tag.\",\n    )\n    git_depth: Optional[int] = Field(\n        default=1,\n        gte=1,\n        description=\"The number of commits that Git history is truncated to \"\n        \"during cloning. Set to None to fetch the entire history.\",\n    )\n    credentials: Optional[GitLabCredentials] = Field(\n        default=None,\n        description=\"An optional GitLab Credentials block for authenticating with \"\n        \"private GitLab repos.\",\n    )\n\n    @validator(\"credentials\")\n    def _ensure_credentials_go_with_http(cls, v: str, values: dict) -> str:\n        \"\"\"Ensure that credentials are not provided with 'SSH' formatted GitLub URLs.\n        Note: validates `access_token` specifically so that it only fires when\n        private repositories are used.\n        \"\"\"\n        if v is not None:\n            if urllib.parse.urlparse(values[\"repository\"]).scheme not in [\n                \"https\",\n                \"http\",\n            ]:\n                raise InvalidRepositoryURLError(\n                    (\n                        \"Credentials can only be used with GitLab repositories \"\n                        \"using the 'HTTPS'/'HTTP' format. 
You must either remove the \"\n                        \"credential if you wish to use the 'SSH' format and are not \"\n                        \"using a private repository, or you must change the repository \"\n                        \"URL to the 'HTTPS'/'HTTP' format.\"\n                    )\n                )\n\n        return v\n\n    def _create_repo_url(self) -> str:\n        \"\"\"Format the URL provided to the `git clone` command.\n        For private repos: https://<oauth-key>@gitlab.com/<username>/<repo>.git\n        All other repos should be the same as `self.repository`.\n        \"\"\"\n        url_components = urllib.parse.urlparse(self.repository)\n        if url_components.scheme in [\"https\", \"http\"] and self.credentials is not None:\n            token = self.credentials.token.get_secret_value()\n            updated_components = url_components._replace(\n                netloc=f\"oauth2:{token}@{url_components.netloc}\"\n            )\n            full_url = urllib.parse.urlunparse(updated_components)\n        else:\n            full_url = self.repository\n\n        return full_url\n\n    @staticmethod\n    def _get_paths(\n        dst_dir: Union[str, None], src_dir: str, sub_directory: Optional[str]\n    ) -> Tuple[str, str]:\n        \"\"\"Returns the fully formed paths for GitLabRepository contents in the form\n        (content_source, content_destination).\n        \"\"\"\n        if dst_dir is None:\n            content_destination = Path(\".\").absolute()\n        else:\n            content_destination = Path(dst_dir)\n\n        content_source = Path(src_dir)\n\n        if sub_directory:\n            content_destination = content_destination.joinpath(sub_directory)\n            content_source = content_source.joinpath(sub_directory)\n\n        return str(content_source), str(content_destination)\n\n    @sync_compatible\n    @retry(\n        stop=stop_after_attempt(MAX_CLONE_ATTEMPTS),\n        wait=wait_fixed(CLONE_RETRY_MIN_DELAY_SECONDS)\n        + wait_random(\n            CLONE_RETRY_MIN_DELAY_JITTER_SECONDS,\n            CLONE_RETRY_MAX_DELAY_JITTER_SECONDS,\n        ),\n        reraise=True,\n    )\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> None:\n        \"\"\"\n        Clones a GitLab project specified in `from_path` to the provided `local_path`;\n        defaults to cloning the repository reference configured on the Block to the\n        present working directory.\n        Args:\n            from_path: If provided, interpreted as a subdirectory of the underlying\n                repository that will be copied to the provided local path.\n            local_path: A local path to clone to; defaults to present working directory.\n        \"\"\"\n        # CONSTRUCT COMMAND\n        cmd = [\"git\", \"clone\", self._create_repo_url()]\n        if self.reference:\n            cmd += [\"-b\", self.reference]\n\n        # Limit git history\n        if self.git_depth is not None:\n            cmd += [\"--depth\", f\"{self.git_depth}\"]\n\n        # Clone to a temporary directory and move the subdirectory over\n        with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n            cmd.append(tmp_dir)\n\n            err_stream = io.StringIO()\n            out_stream = io.StringIO()\n            process = await run_process(cmd, stream_output=(out_stream, err_stream))\n            if process.returncode != 0:\n                err_stream.seek(0)\n                raise OSError(f\"Failed 
to pull from remote:\\n {err_stream.read()}\")\n\n            content_source, content_destination = self._get_paths(\n                dst_dir=local_path, src_dir=tmp_dir, sub_directory=from_path\n            )\n\n            copy_tree(src=content_source, dst=content_destination)\n
    "},{"location":"integrations/prefect-gitlab/repositories/#prefect_gitlab.repositories.GitLabRepository.get_directory","title":"get_directory async","text":"

    Clones a GitLab project specified in from_path to the provided local_path; defaults to cloning the repository reference configured on the Block to the present working directory.

    Parameters:

    Name Type Description Default from_path Optional[str]

    If provided, interpreted as a subdirectory of the underlying repository that will be copied to the provided local path.

    None local_path Optional[str]

    A local path to clone to; defaults to present working directory.

    None

    Source code in prefect_gitlab/repositories.py
    @sync_compatible\n@retry(\n    stop=stop_after_attempt(MAX_CLONE_ATTEMPTS),\n    wait=wait_fixed(CLONE_RETRY_MIN_DELAY_SECONDS)\n    + wait_random(\n        CLONE_RETRY_MIN_DELAY_JITTER_SECONDS,\n        CLONE_RETRY_MAX_DELAY_JITTER_SECONDS,\n    ),\n    reraise=True,\n)\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> None:\n    \"\"\"\n    Clones a GitLab project specified in `from_path` to the provided `local_path`;\n    defaults to cloning the repository reference configured on the Block to the\n    present working directory.\n    Args:\n        from_path: If provided, interpreted as a subdirectory of the underlying\n            repository that will be copied to the provided local path.\n        local_path: A local path to clone to; defaults to present working directory.\n    \"\"\"\n    # CONSTRUCT COMMAND\n    cmd = [\"git\", \"clone\", self._create_repo_url()]\n    if self.reference:\n        cmd += [\"-b\", self.reference]\n\n    # Limit git history\n    if self.git_depth is not None:\n        cmd += [\"--depth\", f\"{self.git_depth}\"]\n\n    # Clone to a temporary directory and move the subdirectory over\n    with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n        cmd.append(tmp_dir)\n\n        err_stream = io.StringIO()\n        out_stream = io.StringIO()\n        process = await run_process(cmd, stream_output=(out_stream, err_stream))\n        if process.returncode != 0:\n            err_stream.seek(0)\n            raise OSError(f\"Failed to pull from remote:\\n {err_stream.read()}\")\n\n        content_source, content_destination = self._get_paths(\n            dst_dir=local_path, src_dir=tmp_dir, sub_directory=from_path\n        )\n\n        copy_tree(src=content_source, dst=content_destination)\n
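
    A minimal usage sketch (the block name "my-gitlab-repo" is illustrative): because get_directory is sync_compatible, it can be called directly from synchronous code to clone the configured repository into a local directory.

    from prefect_gitlab.repositories import GitLabRepository\n\n# \"my-gitlab-repo\" is a hypothetical, previously saved block name\ngitlab_repo = GitLabRepository.load(\"my-gitlab-repo\")\n\n# Copy the \"src\" subdirectory of the configured reference into ./src\ngitlab_repo.get_directory(from_path=\"src\", local_path=\"./src\")\n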
    "},{"location":"integrations/prefect-kubernetes/","title":"prefect-kubernetes","text":""},{"location":"integrations/prefect-kubernetes/#welcome","title":"Welcome!","text":"

    prefect-kubernetes is a collection of Prefect tasks, flows, and blocks enabling orchestration, observation and management of Kubernetes resources.

    Jump to examples.

    "},{"location":"integrations/prefect-kubernetes/#resources","title":"Resources","text":"

    For more tips on how to use tasks and flows in a Collection, check out Using Collections!

    "},{"location":"integrations/prefect-kubernetes/#installation","title":"Installation","text":"

    Install prefect-kubernetes with pip:

     pip install prefect-kubernetes\n\nRequires an installation of Python 3.8+.\n\nWe recommend using a Python virtual environment manager such as pipenv, conda or virtualenv.\n\nThese tasks are designed to work with Prefect 2. For more information about how to use Prefect, please refer to the [Prefect documentation](https://docs.prefect.io/).\n\nThen, to register [blocks](https://docs.prefect.io/ui/blocks/) on Prefect Cloud:\n\nprefect block register -m prefect_kubernetes\n

    Note that to use the load method on Blocks, you must already have a block document saved through code or through the UI.
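
    For example, a minimal sketch (the block name "k8s-creds" is illustrative) of saving a block document once through code and loading it elsewhere:

    from prefect_kubernetes.credentials import KubernetesCredentials\n\n# Save a block document once (this can also be done through the UI);\n# \"k8s-creds\" is an illustrative name\nKubernetesCredentials().save(\"k8s-creds\", overwrite=True)\n\n# Later, load it anywhere prefect-kubernetes is installed\nkubernetes_credentials = KubernetesCredentials.load(\"k8s-creds\")\n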

    "},{"location":"integrations/prefect-kubernetes/#example-usage","title":"Example Usage","text":""},{"location":"integrations/prefect-kubernetes/#use-with_options-to-customize-options-on-any-existing-task-or-flow","title":"Use with_options to customize options on any existing task or flow","text":"
    from prefect_kubernetes.flows import run_namespaced_job\n\ncustomized_run_namespaced_job = run_namespaced_job.with_options(\n    name=\"My flow running a Kubernetes Job\",\n    retries=2,\n    retry_delay_seconds=10,\n) # this is now a new flow object that can be called\n

    For more tips on how to use tasks and flows in a Collection, check out Using Collections!

    "},{"location":"integrations/prefect-kubernetes/#specify-and-run-a-kubernetes-job-from-a-yaml-file","title":"Specify and run a Kubernetes Job from a yaml file","text":"
    from prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.flows import run_namespaced_job # this is a flow\nfrom prefect_kubernetes.jobs import KubernetesJob\n\nk8s_creds = KubernetesCredentials.load(\"k8s-creds\")\n\njob = KubernetesJob.from_yaml_file( # or create in the UI with a dict manifest\n    credentials=k8s_creds,\n    manifest_path=\"path/to/job.yaml\",\n)\n\njob.save(\"my-k8s-job\", overwrite=True)\n\nif __name__ == \"__main__\":\n    # run the flow\n    run_namespaced_job(job)\n
    "},{"location":"integrations/prefect-kubernetes/#generate-a-resource-specific-client-from-kubernetesclusterconfig","title":"Generate a resource-specific client from KubernetesClusterConfig","text":"
    # with minikube / docker desktop & a valid ~/.kube/config this should ~just work~\nfrom prefect.blocks.kubernetes import KubernetesClusterConfig\nfrom prefect_kubernetes.credentials import KubernetesCredentials\n\nk8s_config = KubernetesClusterConfig.from_file('~/.kube/config')\n\nk8s_credentials = KubernetesCredentials(cluster_config=k8s_config)\n\nwith k8s_credentials.get_client(\"core\") as v1_core_client:\n    for namespace in v1_core_client.list_namespace().items:\n        print(namespace.metadata.name)\n
    "},{"location":"integrations/prefect-kubernetes/#list-jobs-in-a-specific-namespace","title":"List jobs in a specific namespace","text":"
    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.jobs import list_namespaced_job\n\n@flow\ndef kubernetes_orchestrator():\n    v1_job_list = list_namespaced_job(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        namespace=\"my-namespace\",\n    )\n
    "},{"location":"integrations/prefect-kubernetes/#patch-an-existing-deployment","title":"Patch an existing deployment","text":"
    from kubernetes.client.models import V1Deployment\n\nfrom prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.deployments import patch_namespaced_deployment\nfrom prefect_kubernetes.utilities import convert_manifest_to_model\n\n@flow\ndef kubernetes_orchestrator():\n\n    v1_deployment_updates = convert_manifest_to_model(\n        manifest=\"path/to/manifest.yaml\",\n        v1_model_name=\"V1Deployment\",\n    )\n\n    v1_deployment = patch_namespaced_deployment(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        deployment_name=\"my-deployment\",\n        deployment_updates=v1_deployment_updates,\n        namespace=\"my-namespace\"\n    )\n
    "},{"location":"integrations/prefect-kubernetes/#feedback","title":"Feedback","text":"

    If you encounter any bugs while using prefect-kubernetes, feel free to open an issue in the prefect repository.

    If you have any questions or issues while using prefect-kubernetes, you can find help in either the Prefect Discourse forum or the Prefect Slack community.

    "},{"location":"integrations/prefect-kubernetes/#contributing","title":"Contributing","text":"

    If you'd like to contribute a fix for an issue or add a feature to prefect-kubernetes, please propose changes through a pull request from a fork of the repository.

    Here are the steps:

    1. Fork the repository
    2. Clone the forked repository
    3. Install the repository and its dependencies:
       pip install -e \".[dev]\"\n
    4. Make desired changes
    5. Add tests
    6. Install pre-commit to perform quality checks prior to commit: pre-commit install
    7. git commit, git push, and create a pull request
    "},{"location":"integrations/prefect-kubernetes/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-kubernetes/credentials/#prefect_kubernetes.credentials","title":"prefect_kubernetes.credentials","text":"

    Module for defining Kubernetes credential handling and client generation.

    "},{"location":"integrations/prefect-kubernetes/credentials/#prefect_kubernetes.credentials.KubernetesClusterConfig","title":"KubernetesClusterConfig","text":"

    Bases: Block

    Stores configuration for interaction with Kubernetes clusters.

    See from_file for creation.

    Attributes:

    Name Type Description config Dict

    The entire loaded YAML contents of a kubectl config file

    context_name str

    The name of the kubectl context to use

    Example

    Load a saved Kubernetes cluster config:

    from prefect_kubernetes.credentials import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n

    Source code in prefect_kubernetes/credentials.py
    class KubernetesClusterConfig(Block):\n    \"\"\"\n    Stores configuration for interaction with Kubernetes clusters.\n\n    See `from_file` for creation.\n\n    Attributes:\n        config: The entire loaded YAML contents of a kubectl config file\n        context_name: The name of the kubectl context to use\n\n    Example:\n        Load a saved Kubernetes cluster config:\n        ```python\n        from prefect_kubernetes.credentials import import KubernetesClusterConfig\n\n        cluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Kubernetes Cluster Config\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/2d0b896006ad463b49c28aaac14f31e00e32cfab-250x250.png\"\n    _documentation_url = \"https://prefecthq.github.io/prefect-kubernetes/credentials/#prefect_kubernetes.credentials.KubernetesClusterConfig\"  # noqa\n    config: Dict = Field(\n        default=..., description=\"The entire contents of a kubectl config file.\"\n    )\n    context_name: str = Field(\n        default=..., description=\"The name of the kubectl context to use.\"\n    )\n\n    @validator(\"config\", pre=True)\n    def parse_yaml_config(cls, value):\n        if isinstance(value, str):\n            return yaml.safe_load(value)\n        return value\n\n    @classmethod\n    def from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n        \"\"\"\n        Create a cluster config from the a Kubernetes config file.\n\n        By default, the current context in the default Kubernetes config file will be\n        used.\n\n        An alternative file or context may be specified.\n\n        The entire config file will be loaded and stored.\n        \"\"\"\n\n        path = Path(path or config.kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n        path = path.expanduser().resolve()\n\n        # Determine the context\n        (\n            existing_contexts,\n            current_context,\n        ) = config.kube_config.list_kube_config_contexts(config_file=str(path))\n        context_names = {ctx[\"name\"] for ctx in existing_contexts}\n        if context_name:\n            if context_name not in context_names:\n                raise ValueError(\n                    f\"Context {context_name!r} not found. \"\n                    f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n                )\n        else:\n            context_name = current_context[\"name\"]\n\n        # Load the entire config file\n        config_file_contents = path.read_text()\n        config_dict = yaml.safe_load(config_file_contents)\n\n        return cls(config=config_dict, context_name=context_name)\n\n    def get_api_client(self) -> \"ApiClient\":\n        \"\"\"\n        Returns a Kubernetes API client for this cluster config.\n        \"\"\"\n        return config.kube_config.new_client_from_config_dict(\n            config_dict=self.config, context=self.context_name\n        )\n\n    def configure_client(self) -> None:\n        \"\"\"\n        Activates this cluster configuration by loading the configuration into the\n        Kubernetes Python client. After calling this, Kubernetes API clients can use\n        this config's context.\n        \"\"\"\n        config.kube_config.load_kube_config_from_dict(\n            config_dict=self.config, context=self.context_name\n        )\n
    "},{"location":"integrations/prefect-kubernetes/credentials/#prefect_kubernetes.credentials.KubernetesClusterConfig.configure_client","title":"configure_client","text":"

    Activates this cluster configuration by loading the configuration into the Kubernetes Python client. After calling this, Kubernetes API clients can use this config's context.

    Source code in prefect_kubernetes/credentials.py
    def configure_client(self) -> None:\n    \"\"\"\n    Activates this cluster configuration by loading the configuration into the\n    Kubernetes Python client. After calling this, Kubernetes API clients can use\n    this config's context.\n    \"\"\"\n    config.kube_config.load_kube_config_from_dict(\n        config_dict=self.config, context=self.context_name\n    )\n
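
    As a rough sketch (assuming a KubernetesClusterConfig block named "my-cluster-config" was saved earlier), any Kubernetes Python client created after calling configure_client uses this config's context:

    from kubernetes import client\nfrom prefect_kubernetes.credentials import KubernetesClusterConfig\n\n# \"my-cluster-config\" is an illustrative block name\ncluster_config = KubernetesClusterConfig.load(\"my-cluster-config\")\ncluster_config.configure_client()\n\n# Clients created after configure_client() use the stored context\ncore_v1 = client.CoreV1Api()\nprint([ns.metadata.name for ns in core_v1.list_namespace().items])\n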
    "},{"location":"integrations/prefect-kubernetes/credentials/#prefect_kubernetes.credentials.KubernetesClusterConfig.from_file","title":"from_file classmethod","text":"

    Create a cluster config from a Kubernetes config file.

    By default, the current context in the default Kubernetes config file will be used.

    An alternative file or context may be specified.

    The entire config file will be loaded and stored.

    Source code in prefect_kubernetes/credentials.py
    @classmethod\ndef from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n    \"\"\"\n    Create a cluster config from the a Kubernetes config file.\n\n    By default, the current context in the default Kubernetes config file will be\n    used.\n\n    An alternative file or context may be specified.\n\n    The entire config file will be loaded and stored.\n    \"\"\"\n\n    path = Path(path or config.kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n    path = path.expanduser().resolve()\n\n    # Determine the context\n    (\n        existing_contexts,\n        current_context,\n    ) = config.kube_config.list_kube_config_contexts(config_file=str(path))\n    context_names = {ctx[\"name\"] for ctx in existing_contexts}\n    if context_name:\n        if context_name not in context_names:\n            raise ValueError(\n                f\"Context {context_name!r} not found. \"\n                f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n            )\n    else:\n        context_name = current_context[\"name\"]\n\n    # Load the entire config file\n    config_file_contents = path.read_text()\n    config_dict = yaml.safe_load(config_file_contents)\n\n    return cls(config=config_dict, context_name=context_name)\n
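
    A small usage sketch: the kubeconfig path and context name below are illustrative, and saving the block afterwards is optional:

    from prefect_kubernetes.credentials import KubernetesClusterConfig\n\n# Both the path and the context name are illustrative\ncluster_config = KubernetesClusterConfig.from_file(\n    path=\"~/.kube/staging-config\",\n    context_name=\"staging-cluster\",\n)\n\n# Optionally persist it as a block document for reuse\ncluster_config.save(\"my-cluster-config\", overwrite=True)\n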
    "},{"location":"integrations/prefect-kubernetes/credentials/#prefect_kubernetes.credentials.KubernetesClusterConfig.get_api_client","title":"get_api_client","text":"

    Returns a Kubernetes API client for this cluster config.

    Source code in prefect_kubernetes/credentials.py
    def get_api_client(self) -> \"ApiClient\":\n    \"\"\"\n    Returns a Kubernetes API client for this cluster config.\n    \"\"\"\n    return config.kube_config.new_client_from_config_dict(\n        config_dict=self.config, context=self.context_name\n    )\n
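
    For instance, a hedged sketch that builds a raw ApiClient from a saved cluster config block (the block name is illustrative) and hands it to a resource-specific client:

    from kubernetes import client\nfrom prefect_kubernetes.credentials import KubernetesClusterConfig\n\ncluster_config = KubernetesClusterConfig.load(\"my-cluster-config\")  # illustrative name\napi_client = cluster_config.get_api_client()\n\n# The returned ApiClient can back any resource-specific client\ncore_v1 = client.CoreV1Api(api_client=api_client)\nprint(core_v1.list_namespace().items)\n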
    "},{"location":"integrations/prefect-kubernetes/credentials/#prefect_kubernetes.credentials.KubernetesCredentials","title":"KubernetesCredentials","text":"

    Bases: Block

    Credentials block for generating configured Kubernetes API clients.

    Attributes:

    Name Type Description cluster_config Optional[KubernetesClusterConfig]

    A KubernetesClusterConfig block holding a JSON kube config for a specific Kubernetes context.

    Example

    Load stored Kubernetes credentials:

    from prefect_kubernetes.credentials import KubernetesCredentials\n\nkubernetes_credentials = KubernetesCredentials.load(\"BLOCK_NAME\")\n

    Source code in prefect_kubernetes/credentials.py
    class KubernetesCredentials(Block):\n    \"\"\"Credentials block for generating configured Kubernetes API clients.\n\n    Attributes:\n        cluster_config: A `KubernetesClusterConfig` block holding a JSON kube\n            config for a specific kubernetes context.\n\n    Example:\n        Load stored Kubernetes credentials:\n        ```python\n        from prefect_kubernetes.credentials import KubernetesCredentials\n\n        kubernetes_credentials = KubernetesCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Kubernetes Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/2d0b896006ad463b49c28aaac14f31e00e32cfab-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-kubernetes/credentials/#prefect_kubernetes.credentials.KubernetesCredentials\"  # noqa\n\n    cluster_config: Optional[KubernetesClusterConfig] = None\n\n    @contextmanager\n    def get_client(\n        self,\n        client_type: Literal[\"apps\", \"batch\", \"core\", \"custom_objects\"],\n        configuration: Optional[Configuration] = None,\n    ) -> Generator[KubernetesClient, None, None]:\n        \"\"\"Convenience method for retrieving a Kubernetes API client for deployment resources.\n\n        Args:\n            client_type: The resource-specific type of Kubernetes client to retrieve.\n\n        Yields:\n            An authenticated, resource-specific Kubernetes API client.\n\n        Example:\n            ```python\n            from prefect_kubernetes.credentials import KubernetesCredentials\n\n            with KubernetesCredentials.get_client(\"core\") as core_v1_client:\n                for pod in core_v1_client.list_namespaced_pod():\n                    print(pod.metadata.name)\n            ```\n        \"\"\"\n        client_config = configuration or Configuration()\n\n        with ApiClient(configuration=client_config) as generic_client:\n            try:\n                yield self.get_resource_specific_client(client_type)\n            finally:\n                generic_client.rest_client.pool_manager.clear()\n\n    def get_resource_specific_client(\n        self,\n        client_type: str,\n    ) -> Union[AppsV1Api, BatchV1Api, CoreV1Api]:\n        \"\"\"\n        Utility function for configuring a generic Kubernetes client.\n        It will attempt to connect to a Kubernetes cluster in three steps with\n        the first successful connection attempt becoming the mode of communication with\n        a cluster:\n\n        1. It will first attempt to use a `KubernetesCredentials` block's\n        `cluster_config` to configure a client using\n        `KubernetesClusterConfig.configure_client`.\n\n        2. Attempt in-cluster connection (will only work when running on a pod).\n\n        3. 
Attempt out-of-cluster connection using the default location for a\n        kube config file.\n\n        Args:\n            client_type: The Kubernetes API client type for interacting with specific\n                Kubernetes resources.\n\n        Returns:\n            KubernetesClient: An authenticated, resource-specific Kubernetes Client.\n\n        Raises:\n            ValueError: If `client_type` is not a valid Kubernetes API client type.\n        \"\"\"\n\n        if self.cluster_config:\n            self.cluster_config.configure_client()\n        else:\n            try:\n                config.load_incluster_config()\n            except ConfigException:\n                config.load_kube_config()\n\n        try:\n            return K8S_CLIENT_TYPES[client_type]()\n        except KeyError:\n            raise ValueError(\n                f\"Invalid client type provided '{client_type}'.\"\n                f\" Must be one of {listrepr(K8S_CLIENT_TYPES.keys())}.\"\n            )\n
    "},{"location":"integrations/prefect-kubernetes/credentials/#prefect_kubernetes.credentials.KubernetesCredentials.get_client","title":"get_client","text":"

    Convenience method for retrieving a Kubernetes API client for deployment resources.

    Parameters:

    Name Type Description Default client_type Literal['apps', 'batch', 'core', 'custom_objects']

    The resource-specific type of Kubernetes client to retrieve.

    required

    Yields:

    Type Description KubernetesClient

    An authenticated, resource-specific Kubernetes API client.

    Example
    from prefect_kubernetes.credentials import KubernetesCredentials\n\nkubernetes_credentials = KubernetesCredentials.load(\"BLOCK_NAME\")\n\nwith kubernetes_credentials.get_client(\"core\") as core_v1_client:\n    for pod in core_v1_client.list_namespaced_pod(namespace=\"default\").items:\n        print(pod.metadata.name)\n
    Source code in prefect_kubernetes/credentials.py
    @contextmanager\ndef get_client(\n    self,\n    client_type: Literal[\"apps\", \"batch\", \"core\", \"custom_objects\"],\n    configuration: Optional[Configuration] = None,\n) -> Generator[KubernetesClient, None, None]:\n    \"\"\"Convenience method for retrieving a Kubernetes API client for deployment resources.\n\n    Args:\n        client_type: The resource-specific type of Kubernetes client to retrieve.\n\n    Yields:\n        An authenticated, resource-specific Kubernetes API client.\n\n    Example:\n        ```python\n        from prefect_kubernetes.credentials import KubernetesCredentials\n\n        with KubernetesCredentials.get_client(\"core\") as core_v1_client:\n            for pod in core_v1_client.list_namespaced_pod():\n                print(pod.metadata.name)\n        ```\n    \"\"\"\n    client_config = configuration or Configuration()\n\n    with ApiClient(configuration=client_config) as generic_client:\n        try:\n            yield self.get_resource_specific_client(client_type)\n        finally:\n            generic_client.rest_client.pool_manager.clear()\n
    "},{"location":"integrations/prefect-kubernetes/credentials/#prefect_kubernetes.credentials.KubernetesCredentials.get_resource_specific_client","title":"get_resource_specific_client","text":"

    Utility function for configuring a generic Kubernetes client. It will attempt to connect to a Kubernetes cluster in three steps with the first successful connection attempt becoming the mode of communication with a cluster:

    1. It will first attempt to use a KubernetesCredentials block's cluster_config to configure a client using KubernetesClusterConfig.configure_client.

    2. Attempt in-cluster connection (will only work when running on a pod).

    3. Attempt out-of-cluster connection using the default location for a kube config file.

    Parameters:

    Name Type Description Default client_type str

    The Kubernetes API client type for interacting with specific Kubernetes resources.

    required

    Returns:

    Name Type Description KubernetesClient Union[AppsV1Api, BatchV1Api, CoreV1Api]

    An authenticated, resource-specific Kubernetes Client.

    Raises:

    Type Description ValueError

    If client_type is not a valid Kubernetes API client type.

    Source code in prefect_kubernetes/credentials.py
    def get_resource_specific_client(\n    self,\n    client_type: str,\n) -> Union[AppsV1Api, BatchV1Api, CoreV1Api]:\n    \"\"\"\n    Utility function for configuring a generic Kubernetes client.\n    It will attempt to connect to a Kubernetes cluster in three steps with\n    the first successful connection attempt becoming the mode of communication with\n    a cluster:\n\n    1. It will first attempt to use a `KubernetesCredentials` block's\n    `cluster_config` to configure a client using\n    `KubernetesClusterConfig.configure_client`.\n\n    2. Attempt in-cluster connection (will only work when running on a pod).\n\n    3. Attempt out-of-cluster connection using the default location for a\n    kube config file.\n\n    Args:\n        client_type: The Kubernetes API client type for interacting with specific\n            Kubernetes resources.\n\n    Returns:\n        KubernetesClient: An authenticated, resource-specific Kubernetes Client.\n\n    Raises:\n        ValueError: If `client_type` is not a valid Kubernetes API client type.\n    \"\"\"\n\n    if self.cluster_config:\n        self.cluster_config.configure_client()\n    else:\n        try:\n            config.load_incluster_config()\n        except ConfigException:\n            config.load_kube_config()\n\n    try:\n        return K8S_CLIENT_TYPES[client_type]()\n    except KeyError:\n        raise ValueError(\n            f\"Invalid client type provided '{client_type}'.\"\n            f\" Must be one of {listrepr(K8S_CLIENT_TYPES.keys())}.\"\n        )\n
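
    A minimal sketch (the block name "k8s-creds" is illustrative) of the fallback behavior described above: with no cluster_config set, the client is configured from in-cluster config first, then from the default kubeconfig location:

    from prefect_kubernetes.credentials import KubernetesCredentials\n\ncreds = KubernetesCredentials.load(\"k8s-creds\")  # illustrative block name\n\n# Returns a BatchV1Api client configured from cluster_config, in-cluster\n# config, or the default kubeconfig, in that order\nbatch_client = creds.get_resource_specific_client(\"batch\")\nprint(batch_client.list_namespaced_job(namespace=\"default\"))\n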
    "},{"location":"integrations/prefect-kubernetes/custom_objects/","title":"Custom Objects","text":""},{"location":"integrations/prefect-kubernetes/custom_objects/#prefect_kubernetes.custom_objects","title":"prefect_kubernetes.custom_objects","text":""},{"location":"integrations/prefect-kubernetes/custom_objects/#prefect_kubernetes.custom_objects.create_namespaced_custom_object","title":"create_namespaced_custom_object async","text":"

    Task for creating a namespaced custom object.

    Parameters:

    Name Type Description Default kubernetes_credentials KubernetesCredentials

    KubernetesCredentials block holding authentication needed to generate the required API client.

    required group str

    The custom resource object's group

    required version str

    The custom resource object's version

    required plural str

    The custom resource object's plural

    required body Dict[str, Any]

    A Dict containing the custom resource object's specification.

    required namespace Optional[str]

    The Kubernetes namespace to create the custom object in.

    'default' **kube_kwargs Dict[str, Any]

    Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

    {}

    Returns:

    Type Description object

    object containing the custom resource created by this task.

    Example

    Create a custom object in the default namespace:

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.custom_objects import create_namespaced_custom_object\n\n@flow\ndef kubernetes_orchestrator():\n    custom_object_metadata = create_namespaced_custom_object(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        group=\"crd-group\",\n        version=\"crd-version\",\n        plural=\"crd-plural\",\n        body={\n            'api': 'crd-version',\n            'kind': 'crd-kind',\n            'metadata': {\n                'name': 'crd-name',\n            },\n        },\n    )\n

    Source code in prefect_kubernetes/custom_objects.py
    @task\nasync def create_namespaced_custom_object(\n    kubernetes_credentials: KubernetesCredentials,\n    group: str,\n    version: str,\n    plural: str,\n    body: Dict[str, Any],\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> object:\n    \"\"\"Task for creating a namespaced custom object.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block\n            holding authentication needed to generate the required API client.\n        group: The custom resource object's group\n        version: The custom resource object's version\n        plural: The custom resource object's plural\n        body: A Dict containing the custom resource object's specification.\n        namespace: The Kubernetes namespace to create the custom object in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Returns:\n        object containing the custom resource created by this task.\n\n    Example:\n        Create a custom object in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.custom_objects import create_namespaced_custom_object\n\n        @flow\n        def kubernetes_orchestrator():\n            custom_object_metadata = create_namespaced_custom_object(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                group=\"crd-group\",\n                version=\"crd-version\",\n                plural=\"crd-plural\",\n                body={\n                    'api': 'crd-version',\n                    'kind': 'crd-kind',\n                    'metadata': {\n                        'name': 'crd-name',\n                    },\n                },\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"custom_objects\") as custom_objects_client:\n        return await run_sync_in_worker_thread(\n            custom_objects_client.create_namespaced_custom_object,\n            group=group,\n            version=version,\n            plural=plural,\n            body=body,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/custom_objects/#prefect_kubernetes.custom_objects.delete_namespaced_custom_object","title":"delete_namespaced_custom_object async","text":"

    Task for deleting a namespaced custom object.

    Parameters:

    Name Type Description Default kubernetes_credentials KubernetesCredentials

    KubernetesCredentials block holding authentication needed to generate the required API client.

    required group str

    The custom resource object's group

    required version str

    The custom resource object's version

    required plural str

    The custom resource object's plural

    required name str

    The name of a custom object to delete.

    required namespace Optional[str]

    The Kubernetes namespace to create this custom object in.

    'default' **kube_kwargs Dict[str, Any]

    Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

    {}

    Returns:

    Type Description object

    object containing the custom resource deleted by this task.

    Example

    Delete \"my-custom-object\" in the default namespace:

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.custom_objects import delete_namespaced_custom_object\n\n@flow\ndef kubernetes_orchestrator():\n    custom_object_metadata = delete_namespaced_custom_object(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        group=\"crd-group\",\n        version=\"crd-version\",\n        plural=\"crd-plural\",\n        name=\"my-custom-object\",\n    )\n

    Source code in prefect_kubernetes/custom_objects.py
    @task\nasync def delete_namespaced_custom_object(\n    kubernetes_credentials: KubernetesCredentials,\n    group: str,\n    version: str,\n    plural: str,\n    name: str,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> object:\n    \"\"\"Task for deleting a namespaced custom object.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block\n            holding authentication needed to generate the required API client.\n        group: The custom resource object's group\n        version: The custom resource object's version\n        plural: The custom resource object's plural\n        name: The name of a custom object to delete.\n        namespace: The Kubernetes namespace to create this custom object in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n\n    Returns:\n        object containing the custom resource deleted by this task.\n\n    Example:\n        Delete \"my-custom-object\" in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.custom_objects import delete_namespaced_custom_object\n\n        @flow\n        def kubernetes_orchestrator():\n            custom_object_metadata = delete_namespaced_custom_object(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                group=\"crd-group\",\n                version=\"crd-version\",\n                plural=\"crd-plural\",\n                name=\"my-custom-object\",\n            )\n        ```\n    \"\"\"\n\n    with kubernetes_credentials.get_client(\"custom_objects\") as custom_objects_client:\n        return await run_sync_in_worker_thread(\n            custom_objects_client.delete_namespaced_custom_object,\n            group=group,\n            version=version,\n            plural=plural,\n            name=name,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/custom_objects/#prefect_kubernetes.custom_objects.get_namespaced_custom_object","title":"get_namespaced_custom_object async","text":"

    Task for reading a namespaced Kubernetes custom object.

    Parameters:

    Name Type Description Default kubernetes_credentials KubernetesCredentials

    KubernetesCredentials block holding authentication needed to generate the required API client.

    required group str

    The custom resource object's group

    required version str

    The custom resource object's version

    required plural str

    The custom resource object's plural

    required name str

    The name of a custom object to read.

    required namespace Optional[str]

    The Kubernetes namespace the custom resource is in.

    'default' **kube_kwargs Dict[str, Any]

    Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

    {}

    Raises:

    Type Description ValueError

    if name is None.

    Returns:

    Type Description object

    object containing the custom resource specification.

    Example

    Read \"my-custom-object\" in the default namespace:

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.custom_objects import get_namespaced_custom_object\n\n@flow\ndef kubernetes_orchestrator():\n    custom_object_metadata = get_namespaced_custom_object(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        group=\"crd-group\",\n        version=\"crd-version\",\n        plural=\"crd-plural\",\n        name=\"my-custom-object\",\n    )\n

    Source code in prefect_kubernetes/custom_objects.py
    @task\nasync def get_namespaced_custom_object(\n    kubernetes_credentials: KubernetesCredentials,\n    group: str,\n    version: str,\n    plural: str,\n    name: str,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> object:\n    \"\"\"Task for reading a namespaced Kubernetes custom object.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block\n            holding authentication needed to generate the required API client.\n        group: The custom resource object's group\n        version: The custom resource object's version\n        plural: The custom resource object's plural\n        name: The name of a custom object to read.\n        namespace: The Kubernetes namespace the custom resource is in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Raises:\n        ValueError: if `name` is `None`.\n\n    Returns:\n        object containing the custom resource specification.\n\n    Example:\n        Read \"my-custom-object\" in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.custom_objects import get_namespaced_custom_object\n\n        @flow\n        def kubernetes_orchestrator():\n            custom_object_metadata = get_namespaced_custom_object(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                group=\"crd-group\",\n                version=\"crd-version\",\n                plural=\"crd-plural\",\n                name=\"my-custom-object\",\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"custom_objects\") as custom_objects_client:\n        return await run_sync_in_worker_thread(\n            custom_objects_client.get_namespaced_custom_object,\n            group=group,\n            version=version,\n            plural=plural,\n            name=name,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/custom_objects/#prefect_kubernetes.custom_objects.get_namespaced_custom_object_status","title":"get_namespaced_custom_object_status async","text":"

    Task for fetching status of a namespaced custom object.

    Parameters:

    Name Type Description Default kubernetes_credentials KubernetesCredentials

    KubernetesCredentials block holding authentication needed to generate the required API client.

    required group str

    The custom resource object's group

    required version str

    The custom resource object's version

    required plural str

    The custom resource object's plural

    required name str

    The name of a custom object to read.

    required namespace str

    The Kubernetes namespace the custom resource is in.

    'default' **kube_kwargs Dict[str, Any]

    Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

    {}

    Returns:

    Type Description object

    object containing the custom-object specification with status.

    Example

    Fetch status of \"my-custom-object\" in the default namespace:

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.custom_objects import (\n    get_namespaced_custom_object_status,\n)\n\n@flow\ndef kubernetes_orchestrator():\n    custom_object_metadata = get_namespaced_custom_object_status(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        group=\"crd-group\",\n        version=\"crd-version\",\n        plural=\"crd-plural\",\n        name=\"my-custom-object\",\n    )\n

    Source code in prefect_kubernetes/custom_objects.py
    @task\nasync def get_namespaced_custom_object_status(\n    kubernetes_credentials: KubernetesCredentials,\n    group: str,\n    version: str,\n    plural: str,\n    name: str,\n    namespace: str = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> object:\n    \"\"\"Task for fetching status of a namespaced custom object.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block\n            holding authentication needed to generate the required API client.\n        group: The custom resource object's group\n        version: The custom resource object's version\n        plural: The custom resource object's plural\n        name: The name of a custom object to read.\n        namespace: The Kubernetes namespace the custom resource is in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Returns:\n        object containing the custom-object specification with status.\n\n    Example:\n        Fetch status of \"my-custom-object\" in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.custom_objects import (\n            get_namespaced_custom_object_status,\n        )\n\n        @flow\n        def kubernetes_orchestrator():\n            custom_object_metadata = get_namespaced_custom_object_status(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                group=\"crd-group\",\n                version=\"crd-version\",\n                plural=\"crd-plural\",\n                name=\"my-custom-object\",\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"custom_objects\") as custom_objects_client:\n        return await run_sync_in_worker_thread(\n            custom_objects_client.get_namespaced_custom_object_status,\n            group=group,\n            version=version,\n            plural=plural,\n            name=name,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/custom_objects/#prefect_kubernetes.custom_objects.list_namespaced_custom_object","title":"list_namespaced_custom_object async","text":"

    Task for listing namespaced custom objects.

    Parameters:

    Name Type Description Default kubernetes_credentials KubernetesCredentials

    KubernetesCredentials block holding authentication needed to generate the required API client.

    required group str

    The custom resource object's group

    required version str

    The custom resource object's version

    required plural str

    The custom resource object's plural

    required namespace str

    The Kubernetes namespace to list custom resources for.

    'default' **kube_kwargs Dict[str, Any]

    Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

    {}

    Returns:

    Type Description object

    object containing a list of custom resources.

    Example

    List custom resources in \"my-namespace\":

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.custom_objects import list_namespaced_custom_object\n\n@flow\ndef kubernetes_orchestrator():\n    namespaced_custom_objects_list = list_namespaced_custom_object(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        group=\"crd-group\",\n        version=\"crd-version\",\n        plural=\"crd-plural\",\n        namespace=\"my-namespace\",\n    )\n

    Source code in prefect_kubernetes/custom_objects.py
    @task\nasync def list_namespaced_custom_object(\n    kubernetes_credentials: KubernetesCredentials,\n    group: str,\n    version: str,\n    plural: str,\n    namespace: str = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> object:\n    \"\"\"Task for listing namespaced custom objects.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block\n            holding authentication needed to generate the required API client.\n        group: The custom resource object's group\n        version: The custom resource object's version\n        plural: The custom resource object's plural\n        namespace: The Kubernetes namespace to list custom resources for.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Returns:\n        object containing a list of custom resources.\n\n    Example:\n        List custom resources in \"my-namespace\":\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.custom_objects import list_namespaced_custom_object\n\n        @flow\n        def kubernetes_orchestrator():\n            namespaced_custom_objects_list = list_namespaced_custom_object(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                group=\"crd-group\",\n                version=\"crd-version\",\n                plural=\"crd-plural\",\n                namespace=\"my-namespace\",\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"custom_objects\") as custom_objects_client:\n        return await run_sync_in_worker_thread(\n            custom_objects_client.list_namespaced_custom_object,\n            group=group,\n            version=version,\n            plural=plural,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/custom_objects/#prefect_kubernetes.custom_objects.patch_namespaced_custom_object","title":"patch_namespaced_custom_object async","text":"

    Task for patching a namespaced custom resource.

    Parameters:

    Name Type Description Default kubernetes_credentials KubernetesCredentials

    KubernetesCredentials block holding authentication needed to generate the required API client.

    required group str

    The custom resource object's group

    required version str

    The custom resource object's version

    required plural str

    The custom resource object's plural

    required name str

    The name of a custom object to patch.

    required body Dict[str, Any]

    A Dict containing the custom resource object's patch.

    required namespace str

    The custom resource's Kubernetes namespace.

    'default' **kube_kwargs Dict[str, Any]

    Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

    {}

    Raises:

    Type Description ValueError

    if body is None.

    Returns:

    Type Description object

    object containing the custom resource specification after the patch gets applied.

    Example

    Patch \"my-custom-object\" in the default namespace:

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.custom_objects import (\n    patch_namespaced_custom_object,\n)\n\n@flow\ndef kubernetes_orchestrator():\n    custom_object_metadata = patch_namespaced_custom_object(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        group=\"crd-group\",\n        version=\"crd-version\",\n        plural=\"crd-plural\",\n        name=\"my-custom-object\",\n        body={\n            'api': 'crd-version',\n            'kind': 'crd-kind',\n            'metadata': {\n                'name': 'my-custom-object',\n            },\n        },\n    )\n

    Source code in prefect_kubernetes/custom_objects.py
    @task\nasync def patch_namespaced_custom_object(\n    kubernetes_credentials: KubernetesCredentials,\n    group: str,\n    version: str,\n    plural: str,\n    name: str,\n    body: Dict[str, Any],\n    namespace: str = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> object:\n    \"\"\"Task for patching a namespaced custom resource.\n\n    Args:\n        kubernetes_credentials: KubernetesCredentials block\n            holding authentication needed to generate the required API client.\n        group: The custom resource object's group\n        version: The custom resource object's version\n        plural: The custom resource object's plural\n        name: The name of a custom object to patch.\n        body: A Dict containing the custom resource object's patch.\n        namespace: The custom resource's Kubernetes namespace.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Raises:\n        ValueError: if `body` is `None`.\n\n    Returns:\n        object containing the custom resource specification\n        after the patch gets applied.\n\n    Example:\n        Patch \"my-custom-object\" in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.custom_objects import (\n            patch_namespaced_custom_object,\n        )\n\n        @flow\n        def kubernetes_orchestrator():\n            custom_object_metadata = patch_namespaced_custom_object(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                group=\"crd-group\",\n                version=\"crd-version\",\n                plural=\"crd-plural\",\n                name=\"my-custom-object\",\n                body={\n                    'api': 'crd-version',\n                    'kind': 'crd-kind',\n                    'metadata': {\n                        'name': 'my-custom-object',\n                    },\n                },\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"custom_objects\") as custom_objects_client:\n        return await run_sync_in_worker_thread(\n            custom_objects_client.patch_namespaced_custom_object,\n            group=group,\n            version=version,\n            plural=plural,\n            name=name,\n            body=body,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/custom_objects/#prefect_kubernetes.custom_objects.replace_namespaced_custom_object","title":"replace_namespaced_custom_object async","text":"

    Task for replacing a namespaced custom resource.

    Parameters:

    Name Type Description Default kubernetes_credentials KubernetesCredentials

    KubernetesCredentials block holding authentication needed to generate the required API client.

    required group str

    The custom resource object's group

    required version str

    The custom resource object's version

    required plural str

    The custom resource object's plural

    required name str

    The name of a custom object to replace.

    required body Dict[str, Any]

    A Dict containing the custom resource object's specification.

    required namespace str

    The custom resource's Kubernetes namespace.

    'default' **kube_kwargs Dict[str, Any]

    Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

    {}

    Raises:

    Type Description ValueError

    if body is None.

    Returns:

    Type Description object

    object containing the custom resource specification after the replacement.

    Example

    Replace \"my-custom-object\" in the default namespace:

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.custom_objects import replace_namespaced_custom_object\n\n@flow\ndef kubernetes_orchestrator():\n    custom_object_metadata = replace_namespaced_custom_object(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        group=\"crd-group\",\n        version=\"crd-version\",\n        plural=\"crd-plural\",\n        name=\"my-custom-object\",\n        body={\n            'api': 'crd-version',\n            'kind': 'crd-kind',\n            'metadata': {\n                'name': 'my-custom-object',\n            },\n        },\n    )\n

    Source code in prefect_kubernetes/custom_objects.py
    @task\nasync def replace_namespaced_custom_object(\n    kubernetes_credentials: KubernetesCredentials,\n    group: str,\n    version: str,\n    plural: str,\n    name: str,\n    body: Dict[str, Any],\n    namespace: str = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> object:\n    \"\"\"Task for replacing a namespaced custom resource.\n\n    Args:\n        kubernetes_credentials: KubernetesCredentials block\n            holding authentication needed to generate the required API client.\n        group: The custom resource object's group\n        version: The custom resource object's version\n        plural: The custom resource object's plural\n        name: The name of a custom object to replace.\n        body: A Dict containing the custom resource object's specification.\n        namespace: The custom resource's Kubernetes namespace.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Raises:\n        ValueError: if `body` is `None`.\n\n    Returns:\n        object containing the custom resource specification after the replacement.\n\n    Example:\n        Replace \"my-custom-object\" in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.custom_objects import replace_namespaced_custom_object\n\n        @flow\n        def kubernetes_orchestrator():\n            custom_object_metadata = replace_namespaced_custom_object(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                group=\"crd-group\",\n                version=\"crd-version\",\n                plural=\"crd-plural\",\n                name=\"my-custom-object\",\n                body={\n                    'api': 'crd-version',\n                    'kind': 'crd-kind',\n                    'metadata': {\n                        'name': 'my-custom-object',\n                    },\n                },\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"custom_objects\") as custom_objects_client:\n        return await run_sync_in_worker_thread(\n            custom_objects_client.replace_namespaced_custom_object,\n            group=group,\n            version=version,\n            plural=plural,\n            name=name,\n            body=body,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/deployments/","title":"Deployments","text":""},{"location":"integrations/prefect-kubernetes/deployments/#prefect_kubernetes.deployments","title":"prefect_kubernetes.deployments","text":"

    Module for interacting with Kubernetes deployments from Prefect flows.

    "},{"location":"integrations/prefect-kubernetes/deployments/#prefect_kubernetes.deployments.create_namespaced_deployment","title":"create_namespaced_deployment async","text":"

    Create a Kubernetes deployment in a given namespace.

    Parameters:

    Name Type Description Default kubernetes_credentials KubernetesCredentials

    KubernetesCredentials block for creating authenticated Kubernetes API clients.

    required new_deployment V1Deployment

    A Kubernetes V1Deployment specification.

    required namespace Optional[str]

    The Kubernetes namespace to create this deployment in.

    'default' **kube_kwargs Dict[str, Any]

    Optional extra keyword arguments to pass to the Kubernetes API.

    {}

    Returns:

    Type Description V1Deployment

    A Kubernetes V1Deployment object.

    Example

    Create a deployment in the default namespace:

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.deployments import create_namespaced_deployment\nfrom kubernetes.client.models import V1Deployment\n\n@flow\ndef kubernetes_orchestrator():\n    v1_deployment_metadata = create_namespaced_deployment(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        new_deployment=V1Deployment(metadata={\"name\": \"test-deployment\"}),\n    )\n

    Source code in prefect_kubernetes/deployments.py
    @task\nasync def create_namespaced_deployment(\n    kubernetes_credentials: KubernetesCredentials,\n    new_deployment: V1Deployment,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Deployment:\n    \"\"\"Create a Kubernetes deployment in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        new_deployment: A Kubernetes `V1Deployment` specification.\n        namespace: The Kubernetes namespace to create this deployment in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1Deployment` object.\n\n    Example:\n        Create a deployment in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.deployments import create_namespaced_deployment\n        from kubernetes.client.models import V1Deployment\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_deployment_metadata = create_namespaced_deployment(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                new_deployment=V1Deployment(metadata={\"name\": \"test-deployment\"}),\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"apps\") as apps_v1_client:\n        return await run_sync_in_worker_thread(\n            apps_v1_client.create_namespaced_deployment,\n            namespace=namespace,\n            body=new_deployment,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/deployments/#prefect_kubernetes.deployments.delete_namespaced_deployment","title":"delete_namespaced_deployment async","text":"

    Delete a Kubernetes deployment in a given namespace.

    Parameters:

    Name Type Description Default kubernetes_credentials KubernetesCredentials

    KubernetesCredentials block for creating authenticated Kubernetes API clients.

    required deployment_name str

    The name of the deployment to delete.

    required delete_options Optional[V1DeleteOptions]

    A Kubernetes V1DeleteOptions object.

    None namespace Optional[str]

    The Kubernetes namespace to delete this deployment from.

    'default' **kube_kwargs Dict[str, Any]

    Optional extra keyword arguments to pass to the Kubernetes API.

    {}

    Returns:

    Type Description V1Deployment

    A Kubernetes V1Deployment object.

    Example

    Delete a deployment in the default namespace:

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.deployments import delete_namespaced_deployment\nfrom kubernetes.client.models import V1DeleteOptions\n\n@flow\ndef kubernetes_orchestrator():\n    v1_deployment_metadata = delete_namespaced_deployment(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        deployment_name=\"test-deployment\",\n        delete_options=V1DeleteOptions(grace_period_seconds=0),\n    )\n

    Source code in prefect_kubernetes/deployments.py
    @task\nasync def delete_namespaced_deployment(\n    kubernetes_credentials: KubernetesCredentials,\n    deployment_name: str,\n    delete_options: Optional[V1DeleteOptions] = None,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Deployment:\n    \"\"\"Delete a Kubernetes deployment in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        deployment_name: The name of the deployment to delete.\n        delete_options: A Kubernetes `V1DeleteOptions` object.\n        namespace: The Kubernetes namespace to delete this deployment from.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1Deployment` object.\n\n    Example:\n        Delete a deployment in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.deployments import delete_namespaced_deployment\n        from kubernetes.client.models import V1DeleteOptions\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_deployment_metadata = delete_namespaced_deployment(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                deployment_name=\"test-deployment\",\n                delete_options=V1DeleteOptions(grace_period_seconds=0),\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"apps\") as apps_v1_client:\n        return await run_sync_in_worker_thread(\n            apps_v1_client.delete_namespaced_deployment,\n            deployment_name,\n            body=delete_options,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/deployments/#prefect_kubernetes.deployments.list_namespaced_deployment","title":"list_namespaced_deployment async","text":"

    List all deployments in a given namespace.

    Parameters:

    Name Type Description Default kubernetes_credentials KubernetesCredentials

    KubernetesCredentials block for creating authenticated Kubernetes API clients.

    required namespace Optional[str]

    The Kubernetes namespace to list deployments from.

    'default' **kube_kwargs Dict[str, Any]

    Optional extra keyword arguments to pass to the Kubernetes API.

    {}

    Returns:

    Type Description V1DeploymentList

    A Kubernetes V1DeploymentList object.

    Example

    List all deployments in the default namespace:

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.deployments import list_namespaced_deployment\n\n@flow\ndef kubernetes_orchestrator():\n    v1_deployment_list = list_namespaced_deployment(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\")\n    )\n

    Source code in prefect_kubernetes/deployments.py
    @task\nasync def list_namespaced_deployment(\n    kubernetes_credentials: KubernetesCredentials,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1DeploymentList:\n    \"\"\"List all deployments in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        namespace: The Kubernetes namespace to list deployments from.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1DeploymentList` object.\n\n    Example:\n        List all deployments in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.deployments import list_namespaced_deployment\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_deployment_list = list_namespaced_deployment(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\")\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"apps\") as apps_v1_client:\n        return await run_sync_in_worker_thread(\n            apps_v1_client.list_namespaced_deployment,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/deployments/#prefect_kubernetes.deployments.patch_namespaced_deployment","title":"patch_namespaced_deployment async","text":"

    Patch a Kubernetes deployment in a given namespace.

    Parameters:

    Name Type Description Default kubernetes_credentials KubernetesCredentials

    KubernetesCredentials block for creating authenticated Kubernetes API clients.

    required deployment_name str

    The name of the deployment to patch.

    required deployment_updates V1Deployment

    A Kubernetes V1Deployment object.

    required namespace Optional[str]

    The Kubernetes namespace to patch this deployment in.

    'default' **kube_kwargs Dict[str, Any]

    Optional extra keyword arguments to pass to the Kubernetes API.

    {}

    Returns:

    Type Description V1Deployment

    A Kubernetes V1Deployment object.

    Example

    Patch a deployment in the default namespace:

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.deployments import patch_namespaced_deployment\nfrom kubernetes.client.models import V1Deployment\n\n@flow\ndef kubernetes_orchestrator():\n    v1_deployment_metadata = patch_namespaced_deployment(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        deployment_name=\"test-deployment\",\n        deployment_updates=V1Deployment(metadata={\"labels\": {\"foo\": \"bar\"}}),\n    )\n

    Source code in prefect_kubernetes/deployments.py
    @task\nasync def patch_namespaced_deployment(\n    kubernetes_credentials: KubernetesCredentials,\n    deployment_name: str,\n    deployment_updates: V1Deployment,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Deployment:\n    \"\"\"Patch a Kubernetes deployment in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        deployment_name: The name of the deployment to patch.\n        deployment_updates: A Kubernetes `V1Deployment` object.\n        namespace: The Kubernetes namespace to patch this deployment in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1Deployment` object.\n\n    Example:\n        Patch a deployment in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.deployments import patch_namespaced_deployment\n        from kubernetes.client.models import V1Deployment\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_deployment_metadata = patch_namespaced_deployment(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                deployment_name=\"test-deployment\",\n                deployment_updates=V1Deployment(metadata={\"labels\": {\"foo\": \"bar\"}}),\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"apps\") as apps_v1_client:\n        return await run_sync_in_worker_thread(\n            apps_v1_client.patch_namespaced_deployment,\n            name=deployment_name,\n            namespace=namespace,\n            body=deployment_updates,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/deployments/#prefect_kubernetes.deployments.read_namespaced_deployment","title":"read_namespaced_deployment async","text":"

    Read information on a Kubernetes deployment in a given namespace.

    Parameters:

    Name Type Description Default kubernetes_credentials KubernetesCredentials

    KubernetesCredentials block for creating authenticated Kubernetes API clients.

    required deployment_name str

    The name of the deployment to read.

    required namespace Optional[str]

    The Kubernetes namespace to read this deployment from.

    'default' **kube_kwargs Dict[str, Any]

    Optional extra keyword arguments to pass to the Kubernetes API.

    {}

    Returns:

    Type Description V1Deployment

    A Kubernetes V1Deployment object.

    Example

    Read a deployment in the default namespace:

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.deployments import read_namespaced_deployment\n\n@flow\ndef kubernetes_orchestrator():\n    v1_deployment_metadata = read_namespaced_deployment(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        deployment_name=\"test-deployment\"\n    )\n

    Source code in prefect_kubernetes/deployments.py
    @task\nasync def read_namespaced_deployment(\n    kubernetes_credentials: KubernetesCredentials,\n    deployment_name: str,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Deployment:\n    \"\"\"Read information on a Kubernetes deployment in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        deployment_name: The name of the deployment to read.\n        namespace: The Kubernetes namespace to read this deployment from.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1Deployment` object.\n\n    Example:\n        Read a deployment in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_deployment_metadata = read_namespaced_deployment(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                deployment_name=\"test-deployment\"\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"apps\") as apps_v1_client:\n        return await run_sync_in_worker_thread(\n            apps_v1_client.read_namespaced_deployment,\n            name=deployment_name,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/deployments/#prefect_kubernetes.deployments.replace_namespaced_deployment","title":"replace_namespaced_deployment async","text":"

    Replace a Kubernetes deployment in a given namespace.

    Parameters:

    Name Type Description Default kubernetes_credentials KubernetesCredentials

    KubernetesCredentials block for creating authenticated Kubernetes API clients.

    required deployment_name str

    The name of the deployment to replace.

    required new_deployment V1Deployment

    A Kubernetes V1Deployment object.

    required namespace Optional[str]

    The Kubernetes namespace to replace this deployment in.

    'default' **kube_kwargs Dict[str, Any]

    Optional extra keyword arguments to pass to the Kubernetes API.

    {}

    Returns:

    Type Description V1Deployment

    A Kubernetes V1Deployment object.

    Example

    Replace a deployment in the default namespace:

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.deployments import replace_namespaced_deployment\nfrom kubernetes.client.models import V1Deployment\n\n@flow\ndef kubernetes_orchestrator():\n    v1_deployment_metadata = replace_namespaced_deployment(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        deployment_name=\"test-deployment\",\n        new_deployment=V1Deployment(metadata={\"labels\": {\"foo\": \"bar\"}})\n    )\n

    Source code in prefect_kubernetes/deployments.py
    @task\nasync def replace_namespaced_deployment(\n    kubernetes_credentials: KubernetesCredentials,\n    deployment_name: str,\n    new_deployment: V1Deployment,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Deployment:\n    \"\"\"Replace a Kubernetes deployment in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        deployment_name: The name of the deployment to replace.\n        new_deployment: A Kubernetes `V1Deployment` object.\n        namespace: The Kubernetes namespace to replace this deployment in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1Deployment` object.\n\n    Example:\n        Replace a deployment in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.deployments import replace_namespaced_deployment\n        from kubernetes.client.models import V1Deployment\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_deployment_metadata = replace_namespaced_deployment(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                deployment_name=\"test-deployment\",\n                new_deployment=V1Deployment(metadata={\"labels\": {\"foo\": \"bar\"}})\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"apps\") as apps_v1_client:\n        return await run_sync_in_worker_thread(\n            apps_v1_client.replace_namespaced_deployment,\n            body=new_deployment,\n            name=deployment_name,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/exceptions/","title":"Exceptions","text":""},{"location":"integrations/prefect-kubernetes/exceptions/#prefect_kubernetes.exceptions","title":"prefect_kubernetes.exceptions","text":"

    Module to define common exceptions within prefect_kubernetes.

    "},{"location":"integrations/prefect-kubernetes/exceptions/#prefect_kubernetes.exceptions.KubernetesJobDefinitionError","title":"KubernetesJobDefinitionError","text":"

    Bases: OpenApiException

    An exception for when a Kubernetes job definition is invalid.

    Source code in prefect_kubernetes/exceptions.py
    class KubernetesJobDefinitionError(OpenApiException):\n    \"\"\"An exception for when a Kubernetes job definition is invalid.\"\"\"\n
    "},{"location":"integrations/prefect-kubernetes/exceptions/#prefect_kubernetes.exceptions.KubernetesJobFailedError","title":"KubernetesJobFailedError","text":"

    Bases: OpenApiException

    An exception for when a Kubernetes job fails.

    Source code in prefect_kubernetes/exceptions.py
    class KubernetesJobFailedError(OpenApiException):\n    \"\"\"An exception for when a Kubernetes job fails.\"\"\"\n
    "},{"location":"integrations/prefect-kubernetes/exceptions/#prefect_kubernetes.exceptions.KubernetesJobTimeoutError","title":"KubernetesJobTimeoutError","text":"

    Bases: OpenApiException

    An exception for when a Kubernetes job times out.

    Source code in prefect_kubernetes/exceptions.py
    class KubernetesJobTimeoutError(OpenApiException):\n    \"\"\"An exception for when a Kubernetes job times out.\"\"\"\n
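    The following is an illustrative sketch (not taken from the package docs) of catching this exception when waiting on a job run; the \"k8s-creds\" block name, the manifest path, and the 60-second timeout are assumptions.

    from prefect_kubernetes import KubernetesJob\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.exceptions import KubernetesJobTimeoutError\n\n# Assumes a saved \"k8s-creds\" credentials block and a local job manifest.\njob = KubernetesJob.from_yaml_file(\n    manifest_path=\"path/to/job.yaml\",\n    credentials=KubernetesCredentials.load(\"k8s-creds\"),\n    timeout_seconds=60,\n)\n\njob_run = job.trigger()\ntry:\n    job_run.wait_for_completion()\nexcept KubernetesJobTimeoutError:\n    # The job did not reach a completed state within timeout_seconds.\n    print(\"Job timed out; inspect the pods in the cluster for details.\")\n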
    "},{"location":"integrations/prefect-kubernetes/exceptions/#prefect_kubernetes.exceptions.KubernetesResourceNotFoundError","title":"KubernetesResourceNotFoundError","text":"

    Bases: ApiException

    An exception for when a Kubernetes resource cannot be found by a client.

    Source code in prefect_kubernetes/exceptions.py
    class KubernetesResourceNotFoundError(ApiException):\n    \"\"\"An exception for when a Kubernetes resource cannot be found by a client.\"\"\"\n
    "},{"location":"integrations/prefect-kubernetes/flows/","title":"Flows","text":""},{"location":"integrations/prefect-kubernetes/flows/#prefect_kubernetes.flows","title":"prefect_kubernetes.flows","text":"

    A module to define flows interacting with Kubernetes resources.

    "},{"location":"integrations/prefect-kubernetes/flows/#prefect_kubernetes.flows.run_namespaced_job","title":"run_namespaced_job async","text":"

    Flow for running a namespaced Kubernetes job.

    Parameters:

    Name Type Description Default kubernetes_job KubernetesJob

    The KubernetesJob block that specifies the job to run.

    required

    Returns:

    Type Description Dict[str, Any]

    A dict of logs from each pod in the job, e.g. {'pod_name': 'pod_log_str'}.

    Raises:

    Type Description RuntimeError

    If the created Kubernetes job attains a failed status.

    Example

    from prefect_kubernetes import KubernetesJob, run_namespaced_job\nfrom prefect_kubernetes.credentials import KubernetesCredentials\n\nrun_namespaced_job(\n    kubernetes_job=KubernetesJob.from_yaml_file(\n        credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        manifest_path=\"path/to/job.yaml\",\n    )\n)\n

    Source code in prefect_kubernetes/flows.py
    @flow\nasync def run_namespaced_job(\n    kubernetes_job: KubernetesJob,\n) -> Dict[str, Any]:\n    \"\"\"Flow for running a namespaced Kubernetes job.\n\n    Args:\n        kubernetes_job: The `KubernetesJob` block that specifies the job to run.\n\n    Returns:\n        The a dict of logs from each pod in the job, e.g. {'pod_name': 'pod_log_str'}.\n\n    Raises:\n        RuntimeError: If the created Kubernetes job attains a failed status.\n\n    Example:\n\n        ```python\n        from prefect_kubernetes import KubernetesJob, run_namespaced_job\n        from prefect_kubernetes.credentials import KubernetesCredentials\n\n        run_namespaced_job(\n            kubernetes_job=KubernetesJob.from_yaml_file(\n                credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                manifest_path=\"path/to/job.yaml\",\n            )\n        )\n        ```\n    \"\"\"\n    kubernetes_job_run = await task(kubernetes_job.trigger.aio)(kubernetes_job)\n\n    await task(kubernetes_job_run.wait_for_completion.aio)(kubernetes_job_run)\n\n    return await task(kubernetes_job_run.fetch_result.aio)(kubernetes_job_run)\n
    "},{"location":"integrations/prefect-kubernetes/jobs/","title":"Jobs","text":""},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs","title":"prefect_kubernetes.jobs","text":"

    Module to define tasks for interacting with Kubernetes jobs.

    "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.KubernetesJob","title":"KubernetesJob","text":"

    Bases: JobBlock

    A block representing a Kubernetes job configuration.

    Source code in prefect_kubernetes/jobs.py
    class KubernetesJob(JobBlock):\n    \"\"\"A block representing a Kubernetes job configuration.\"\"\"\n\n    v1_job: Dict[str, Any] = Field(\n        default=...,\n        title=\"Job Manifest\",\n        description=(\n            \"The Kubernetes job manifest to run. This dictionary can be produced \"\n            \"using `yaml.safe_load`.\"\n        ),\n    )\n    api_kwargs: Dict[str, Any] = Field(\n        default_factory=dict,\n        title=\"Additional API Arguments\",\n        description=\"Additional arguments to include in Kubernetes API calls.\",\n        example={\"pretty\": \"true\"},\n    )\n    credentials: KubernetesCredentials = Field(\n        default=..., description=\"The credentials to configure a client from.\"\n    )\n    delete_after_completion: bool = Field(\n        default=True,\n        description=\"Whether to delete the job after it has completed.\",\n    )\n    interval_seconds: int = Field(\n        default=5,\n        description=\"The number of seconds to wait between job status checks.\",\n    )\n    namespace: str = Field(\n        default=\"default\",\n        description=\"The namespace to create and run the job in.\",\n    )\n    timeout_seconds: Optional[int] = Field(\n        default=None,\n        description=\"The number of seconds to wait for the job run before timing out.\",\n    )\n\n    _block_type_name = \"Kubernetes Job\"\n    _block_type_slug = \"k8s-job\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/2d0b896006ad463b49c28aaac14f31e00e32cfab-250x250.png\"  # noqa: E501\n    _documentation_url = \"https://prefecthq.github.io/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.KubernetesJob\"  # noqa\n\n    @sync_compatible\n    async def trigger(self):\n        \"\"\"Create a Kubernetes job and return a `KubernetesJobRun` object.\"\"\"\n\n        v1_job_model = convert_manifest_to_model(self.v1_job, \"V1Job\")\n\n        await create_namespaced_job.fn(\n            kubernetes_credentials=self.credentials,\n            new_job=v1_job_model,\n            namespace=self.namespace,\n            **self.api_kwargs,\n        )\n\n        return KubernetesJobRun(kubernetes_job=self, v1_job_model=v1_job_model)\n\n    @classmethod\n    def from_yaml_file(\n        cls: Type[Self], manifest_path: Union[Path, str], **kwargs\n    ) -> Self:\n        \"\"\"Create a `KubernetesJob` from a YAML file.\n\n        Args:\n            manifest_path: The YAML file to create the `KubernetesJob` from.\n\n        Returns:\n            A KubernetesJob object.\n        \"\"\"\n        with open(manifest_path, \"r\") as yaml_stream:\n            yaml_dict = yaml.safe_load(yaml_stream)\n\n        return cls(v1_job=yaml_dict, **kwargs)\n
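    Illustrative sketch (not from the package docs) of building the block from a manifest dict produced with yaml.safe_load, as the Job Manifest field description suggests; the manifest contents and the \"k8s-creds\" block name are assumptions.

    import yaml\n\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.jobs import KubernetesJob\n\n# Illustrative manifest; any dict produced by yaml.safe_load (or built by hand) works.\nv1_job_manifest = yaml.safe_load(\n    \"\"\"\napiVersion: batch/v1\nkind: Job\nmetadata:\n  name: example-job\nspec:\n  template:\n    spec:\n      containers:\n        - name: example\n          image: busybox\n          command: [\"echo\", \"hello\"]\n      restartPolicy: Never\n\"\"\"\n)\n\njob = KubernetesJob(\n    v1_job=v1_job_manifest,\n    credentials=KubernetesCredentials.load(\"k8s-creds\"),\n)\n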
    "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.KubernetesJob.from_yaml_file","title":"from_yaml_file classmethod","text":"

    Create a KubernetesJob from a YAML file.

    Parameters:

    Name Type Description Default manifest_path Union[Path, str]

    The YAML file to create the KubernetesJob from.

    required

    Returns:

    Type Description Self

    A KubernetesJob object.

    Source code in prefect_kubernetes/jobs.py
    @classmethod\ndef from_yaml_file(\n    cls: Type[Self], manifest_path: Union[Path, str], **kwargs\n) -> Self:\n    \"\"\"Create a `KubernetesJob` from a YAML file.\n\n    Args:\n        manifest_path: The YAML file to create the `KubernetesJob` from.\n\n    Returns:\n        A KubernetesJob object.\n    \"\"\"\n    with open(manifest_path, \"r\") as yaml_stream:\n        yaml_dict = yaml.safe_load(yaml_stream)\n\n    return cls(v1_job=yaml_dict, **kwargs)\n
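    Illustrative sketch (not from the package docs); the manifest path and the block names are assumptions, and persisting the block with save is optional.

    from prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.jobs import KubernetesJob\n\n# The manifest path and the \"k8s-creds\" block name are illustrative.\njob = KubernetesJob.from_yaml_file(\n    manifest_path=\"path/to/job.yaml\",\n    credentials=KubernetesCredentials.load(\"k8s-creds\"),\n)\n\n# Optionally persist the block so it can be loaded later by name.\njob.save(\"my-k8s-job\", overwrite=True)\n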
    "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.KubernetesJob.trigger","title":"trigger async","text":"

    Create a Kubernetes job and return a KubernetesJobRun object.

    Source code in prefect_kubernetes/jobs.py
    @sync_compatible\nasync def trigger(self):\n    \"\"\"Create a Kubernetes job and return a `KubernetesJobRun` object.\"\"\"\n\n    v1_job_model = convert_manifest_to_model(self.v1_job, \"V1Job\")\n\n    await create_namespaced_job.fn(\n        kubernetes_credentials=self.credentials,\n        new_job=v1_job_model,\n        namespace=self.namespace,\n        **self.api_kwargs,\n    )\n\n    return KubernetesJobRun(kubernetes_job=self, v1_job_model=v1_job_model)\n
    "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.KubernetesJobRun","title":"KubernetesJobRun","text":"

    Bases: JobRun[Dict[str, Any]]

    A container representing a run of a Kubernetes job.

    Source code in prefect_kubernetes/jobs.py
    class KubernetesJobRun(JobRun[Dict[str, Any]]):\n    \"\"\"A container representing a run of a Kubernetes job.\"\"\"\n\n    def __init__(\n        self,\n        kubernetes_job: \"KubernetesJob\",\n        v1_job_model: V1Job,\n    ):\n        self.pod_logs = None\n\n        self._completed = False\n        self._kubernetes_job = kubernetes_job\n        self._v1_job_model = v1_job_model\n\n    async def _cleanup(self):\n        \"\"\"Deletes the Kubernetes job resource.\"\"\"\n\n        delete_options = V1DeleteOptions(propagation_policy=\"Foreground\")\n\n        deleted_v1_job = await delete_namespaced_job.fn(\n            kubernetes_credentials=self._kubernetes_job.credentials,\n            job_name=self._v1_job_model.metadata.name,\n            delete_options=delete_options,\n            namespace=self._kubernetes_job.namespace,\n            **self._kubernetes_job.api_kwargs,\n        )\n        self.logger.info(\n            f\"Job {self._v1_job_model.metadata.name} deleted \"\n            f\"with {deleted_v1_job.status!r}.\"\n        )\n\n    @sync_compatible\n    async def wait_for_completion(self):\n        \"\"\"Waits for the job to complete.\n\n        If the job has `delete_after_completion` set to `True`,\n        the job will be deleted if it is observed by this method\n        to enter a completed state.\n\n        Raises:\n            RuntimeError: If the Kubernetes job fails.\n            KubernetesJobTimeoutError: If the Kubernetes job times out.\n            ValueError: If `wait_for_completion` is never called.\n        \"\"\"\n        self.pod_logs = {}\n\n        elapsed_time = 0\n\n        while not self._completed:\n            job_expired = (\n                elapsed_time > self._kubernetes_job.timeout_seconds\n                if self._kubernetes_job.timeout_seconds\n                else False\n            )\n            if job_expired:\n                raise KubernetesJobTimeoutError(\n                    f\"Job timed out after {elapsed_time} seconds.\"\n                )\n\n            v1_job_status = await read_namespaced_job_status.fn(\n                kubernetes_credentials=self._kubernetes_job.credentials,\n                job_name=self._v1_job_model.metadata.name,\n                namespace=self._kubernetes_job.namespace,\n                **self._kubernetes_job.api_kwargs,\n            )\n            pod_selector = (\n                \"controller-uid=\" f\"{v1_job_status.metadata.labels['controller-uid']}\"\n            )\n            v1_pod_list = await list_namespaced_pod.fn(\n                kubernetes_credentials=self._kubernetes_job.credentials,\n                namespace=self._kubernetes_job.namespace,\n                label_selector=pod_selector,\n                **self._kubernetes_job.api_kwargs,\n            )\n\n            for pod in v1_pod_list.items:\n                pod_name = pod.metadata.name\n\n                if pod.status.phase == \"Pending\" or pod_name in self.pod_logs.keys():\n                    continue\n\n                self.logger.info(f\"Capturing logs for pod {pod_name!r}.\")\n\n                self.pod_logs[pod_name] = await read_namespaced_pod_log.fn(\n                    kubernetes_credentials=self._kubernetes_job.credentials,\n                    pod_name=pod_name,\n                    container=v1_job_status.spec.template.spec.containers[0].name,\n                    namespace=self._kubernetes_job.namespace,\n                    **self._kubernetes_job.api_kwargs,\n                )\n\n            if 
v1_job_status.status.active:\n                await sleep(self._kubernetes_job.interval_seconds)\n                if self._kubernetes_job.timeout_seconds:\n                    elapsed_time += self._kubernetes_job.interval_seconds\n            elif v1_job_status.status.conditions:\n                final_completed_conditions = [\n                    condition.type == \"Complete\"\n                    for condition in v1_job_status.status.conditions\n                    if condition.status == \"True\"\n                ]\n                if final_completed_conditions and any(final_completed_conditions):\n                    self._completed = True\n                    self.logger.info(\n                        f\"Job {v1_job_status.metadata.name!r} has \"\n                        f\"completed with {v1_job_status.status.succeeded} pods.\"\n                    )\n                elif final_completed_conditions:\n                    failed_conditions = [\n                        condition.reason\n                        for condition in v1_job_status.status.conditions\n                        if condition.type == \"Failed\"\n                    ]\n                    raise RuntimeError(\n                        f\"Job {v1_job_status.metadata.name!r} failed due to \"\n                        f\"{failed_conditions}, check the Kubernetes pod logs \"\n                        f\"for more information.\"\n                    )\n\n        if self._kubernetes_job.delete_after_completion:\n            await self._cleanup()\n\n    @sync_compatible\n    async def fetch_result(self) -> Dict[str, Any]:\n        \"\"\"Fetch the results of the job.\n\n        Returns:\n            The logs from each of the pods in the job.\n\n        Raises:\n            ValueError: If this method is called when the job has\n                a non-terminal state.\n        \"\"\"\n\n        if not self._completed:\n            raise ValueError(\n                \"The Kubernetes Job run is not in a completed state - \"\n                \"be sure to call `wait_for_completion` before attempting \"\n                \"to fetch the result.\"\n            )\n        return self.pod_logs\n
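    Illustrative end-to-end sketch (not from the package docs) of the trigger / wait_for_completion / fetch_result lifecycle inside a flow; the flow name, manifest path, and \"k8s-creds\" block name are assumptions.

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.jobs import KubernetesJob\n\n@flow\ndef run_job_and_collect_logs() -> dict:\n    # Block name and manifest path are illustrative.\n    job = KubernetesJob.from_yaml_file(\n        manifest_path=\"path/to/job.yaml\",\n        credentials=KubernetesCredentials.load(\"k8s-creds\"),\n    )\n    job_run = job.trigger()  # returns a KubernetesJobRun\n    job_run.wait_for_completion()  # polls job status; raises on failure or timeout\n    return job_run.fetch_result()  # {\"pod-name\": \"pod logs\", ...}\n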
    "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.KubernetesJobRun.fetch_result","title":"fetch_result async","text":"

    Fetch the results of the job.

    Returns:

    Type Description Dict[str, Any]

    The logs from each of the pods in the job.

    Raises:

    Type Description ValueError

    If this method is called when the job has a non-terminal state.

    Source code in prefect_kubernetes/jobs.py
    @sync_compatible\nasync def fetch_result(self) -> Dict[str, Any]:\n    \"\"\"Fetch the results of the job.\n\n    Returns:\n        The logs from each of the pods in the job.\n\n    Raises:\n        ValueError: If this method is called when the job has\n            a non-terminal state.\n    \"\"\"\n\n    if not self._completed:\n        raise ValueError(\n            \"The Kubernetes Job run is not in a completed state - \"\n            \"be sure to call `wait_for_completion` before attempting \"\n            \"to fetch the result.\"\n        )\n    return self.pod_logs\n
    "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.KubernetesJobRun.wait_for_completion","title":"wait_for_completion async","text":"

    Waits for the job to complete.

    If the job has delete_after_completion set to True, the job will be deleted if it is observed by this method to enter a completed state.

    Raises:

    Type Description RuntimeError

    If the Kubernetes job fails.

    KubernetesJobTimeoutError

    If the Kubernetes job times out.

    ValueError

    If wait_for_completion is never called.

    Source code in prefect_kubernetes/jobs.py
    @sync_compatible\nasync def wait_for_completion(self):\n    \"\"\"Waits for the job to complete.\n\n    If the job has `delete_after_completion` set to `True`,\n    the job will be deleted if it is observed by this method\n    to enter a completed state.\n\n    Raises:\n        RuntimeError: If the Kubernetes job fails.\n        KubernetesJobTimeoutError: If the Kubernetes job times out.\n        ValueError: If `wait_for_completion` is never called.\n    \"\"\"\n    self.pod_logs = {}\n\n    elapsed_time = 0\n\n    while not self._completed:\n        job_expired = (\n            elapsed_time > self._kubernetes_job.timeout_seconds\n            if self._kubernetes_job.timeout_seconds\n            else False\n        )\n        if job_expired:\n            raise KubernetesJobTimeoutError(\n                f\"Job timed out after {elapsed_time} seconds.\"\n            )\n\n        v1_job_status = await read_namespaced_job_status.fn(\n            kubernetes_credentials=self._kubernetes_job.credentials,\n            job_name=self._v1_job_model.metadata.name,\n            namespace=self._kubernetes_job.namespace,\n            **self._kubernetes_job.api_kwargs,\n        )\n        pod_selector = (\n            \"controller-uid=\" f\"{v1_job_status.metadata.labels['controller-uid']}\"\n        )\n        v1_pod_list = await list_namespaced_pod.fn(\n            kubernetes_credentials=self._kubernetes_job.credentials,\n            namespace=self._kubernetes_job.namespace,\n            label_selector=pod_selector,\n            **self._kubernetes_job.api_kwargs,\n        )\n\n        for pod in v1_pod_list.items:\n            pod_name = pod.metadata.name\n\n            if pod.status.phase == \"Pending\" or pod_name in self.pod_logs.keys():\n                continue\n\n            self.logger.info(f\"Capturing logs for pod {pod_name!r}.\")\n\n            self.pod_logs[pod_name] = await read_namespaced_pod_log.fn(\n                kubernetes_credentials=self._kubernetes_job.credentials,\n                pod_name=pod_name,\n                container=v1_job_status.spec.template.spec.containers[0].name,\n                namespace=self._kubernetes_job.namespace,\n                **self._kubernetes_job.api_kwargs,\n            )\n\n        if v1_job_status.status.active:\n            await sleep(self._kubernetes_job.interval_seconds)\n            if self._kubernetes_job.timeout_seconds:\n                elapsed_time += self._kubernetes_job.interval_seconds\n        elif v1_job_status.status.conditions:\n            final_completed_conditions = [\n                condition.type == \"Complete\"\n                for condition in v1_job_status.status.conditions\n                if condition.status == \"True\"\n            ]\n            if final_completed_conditions and any(final_completed_conditions):\n                self._completed = True\n                self.logger.info(\n                    f\"Job {v1_job_status.metadata.name!r} has \"\n                    f\"completed with {v1_job_status.status.succeeded} pods.\"\n                )\n            elif final_completed_conditions:\n                failed_conditions = [\n                    condition.reason\n                    for condition in v1_job_status.status.conditions\n                    if condition.type == \"Failed\"\n                ]\n                raise RuntimeError(\n                    f\"Job {v1_job_status.metadata.name!r} failed due to \"\n                    f\"{failed_conditions}, check the Kubernetes pod logs \"\n                    
f\"for more information.\"\n                )\n\n    if self._kubernetes_job.delete_after_completion:\n        await self._cleanup()\n
    "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.create_namespaced_job","title":"create_namespaced_job async","text":"

    Task for creating a namespaced Kubernetes job.

    Parameters:

    Name Type Description Default kubernetes_credentials KubernetesCredentials

    KubernetesCredentials block holding authentication needed to generate the required API client.

    required new_job V1Job

    A Kubernetes V1Job specification.

    required namespace Optional[str]

    The Kubernetes namespace to create this job in.

    'default' **kube_kwargs Dict[str, Any]

    Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

    {}

    Returns:

    Type Description V1Job

    A Kubernetes V1Job object.

    Example

    Create a job in the default namespace:

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.jobs import create_namespaced_job\nfrom kubernetes.client.models import V1Job\n\n@flow\ndef kubernetes_orchestrator():\n    v1_job_metadata = create_namespaced_job(\n        new_job=V1Job(metadata={\"labels\": {\"foo\": \"bar\"}}),\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n    )\n

    Source code in prefect_kubernetes/jobs.py
    @task\nasync def create_namespaced_job(\n    kubernetes_credentials: KubernetesCredentials,\n    new_job: V1Job,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Job:\n    \"\"\"Task for creating a namespaced Kubernetes job.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block\n            holding authentication needed to generate the required API client.\n        new_job: A Kubernetes `V1Job` specification.\n        namespace: The Kubernetes namespace to create this job in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Returns:\n        A Kubernetes `V1Job` object.\n\n    Example:\n        Create a job in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.jobs import create_namespaced_job\n        from kubernetes.client.models import V1Job\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_job_metadata = create_namespaced_job(\n                new_job=V1Job(metadata={\"labels\": {\"foo\": \"bar\"}}),\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"batch\") as batch_v1_client:\n        return await run_sync_in_worker_thread(\n            batch_v1_client.create_namespaced_job,\n            namespace=namespace,\n            body=new_job,\n            **kube_kwargs,\n        )\n
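    Illustrative sketch (not from the package docs) of forwarding extra keyword arguments to the Kubernetes API through **kube_kwargs; dry_run=\"All\" and pretty=\"true\" follow the Kubernetes Python client's parameters, and the \"k8s-creds\" block name is an assumption.

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.jobs import create_namespaced_job\nfrom kubernetes.client.models import V1Job\n\n@flow\ndef kubernetes_orchestrator():\n    # dry_run and pretty are forwarded to the Kubernetes API via **kube_kwargs.\n    return create_namespaced_job(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        new_job=V1Job(metadata={\"name\": \"test-job\"}),\n        dry_run=\"All\",\n        pretty=\"true\",\n    )\n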
    "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.delete_namespaced_job","title":"delete_namespaced_job async","text":"

    Task for deleting a namespaced Kubernetes job.

    Parameters:

    Name Type Description Default kubernetes_credentials KubernetesCredentials

    KubernetesCredentials block holding authentication needed to generate the required API client.

    required job_name str

    The name of a job to delete.

    required delete_options Optional[V1DeleteOptions]

    A Kubernetes V1DeleteOptions object.

    None namespace Optional[str]

    The Kubernetes namespace to delete this job in.

    'default' **kube_kwargs Dict[str, Any]

    Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

    {}

    Returns:

    Type Description V1Status

    A Kubernetes V1Status object.

    Example

    Delete \"my-job\" in the default namespace:

    from kubernetes.client.models import V1DeleteOptions\nfrom prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.jobs import delete_namespaced_job\n\n@flow\ndef kubernetes_orchestrator():\n    v1_job_status = delete_namespaced_job(\n        job_name=\"my-job\",\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        delete_options=V1DeleteOptions(propagation_policy=\"Foreground\"),\n    )\n

    Source code in prefect_kubernetes/jobs.py
    @task\nasync def delete_namespaced_job(\n    kubernetes_credentials: KubernetesCredentials,\n    job_name: str,\n    delete_options: Optional[V1DeleteOptions] = None,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Status:\n    \"\"\"Task for deleting a namespaced Kubernetes job.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block\n            holding authentication needed to generate the required API client.\n        job_name: The name of a job to delete.\n        delete_options: A Kubernetes `V1DeleteOptions` object.\n        namespace: The Kubernetes namespace to delete this job in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n\n    Returns:\n        A Kubernetes `V1Status` object.\n\n    Example:\n        Delete \"my-job\" in the default namespace:\n        ```python\n        from kubernetes.client.models import V1DeleteOptions\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.jobs import delete_namespaced_job\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_job_status = delete_namespaced_job(\n                job_name=\"my-job\",\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                delete_options=V1DeleteOptions(propagation_policy=\"Foreground\"),\n            )\n        ```\n    \"\"\"\n\n    with kubernetes_credentials.get_client(\"batch\") as batch_v1_client:\n        return await run_sync_in_worker_thread(\n            batch_v1_client.delete_namespaced_job,\n            name=job_name,\n            body=delete_options,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.list_namespaced_job","title":"list_namespaced_job async","text":"

    Task for listing namespaced Kubernetes jobs.

    Parameters:

    Name Type Description Default kubernetes_credentials KubernetesCredentials

    KubernetesCredentials block holding authentication needed to generate the required API client.

    required namespace Optional[str]

    The Kubernetes namespace to list jobs from.

    'default' **kube_kwargs Dict[str, Any]

    Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

    {}

    Returns:

    Type Description V1JobList

    A Kubernetes V1JobList object.

    Example

    List jobs in \"my-namespace\":

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.jobs import list_namespaced_job\n\n@flow\ndef kubernetes_orchestrator():\n    namespaced_job_list = list_namespaced_job(\n        namespace=\"my-namespace\",\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n    )\n

    Source code in prefect_kubernetes/jobs.py
    @task\nasync def list_namespaced_job(\n    kubernetes_credentials: KubernetesCredentials,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1JobList:\n    \"\"\"Task for listing namespaced Kubernetes jobs.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block\n            holding authentication needed to generate the required API client.\n        namespace: The Kubernetes namespace to list jobs from.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Returns:\n        A Kubernetes `V1JobList` object.\n\n    Example:\n        List jobs in \"my-namespace\":\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.jobs import list_namespaced_job\n\n        @flow\n        def kubernetes_orchestrator():\n            namespaced_job_list = list_namespaced_job(\n                namespace=\"my-namespace\",\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"batch\") as batch_v1_client:\n        return await run_sync_in_worker_thread(\n            batch_v1_client.list_namespaced_job,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.patch_namespaced_job","title":"patch_namespaced_job async","text":"

    Task for patching a namespaced Kubernetes job.

    Parameters:

    Name Type Description Default kubernetes_credentials KubernetesCredentials

    KubernetesCredentials block holding authentication needed to generate the required API client.

    required job_name str

    The name of a job to patch.

    required job_updates V1Job

    A Kubernetes V1Job specification.

    required namespace Optional[str]

    The Kubernetes namespace to patch this job in.

    'default' **kube_kwargs Dict[str, Any]

    Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

    {}

    Raises:

    Type Description ValueError

    if job_name is None.

    Returns:

    Type Description V1Job

    A Kubernetes V1Job object.

    Example

    Patch \"my-job\" in the default namespace:

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.jobs import patch_namespaced_job\n\nfrom kubernetes.client.models import V1Job\n\n@flow\ndef kubernetes_orchestrator():\n    v1_job_metadata = patch_namespaced_job(\n        job_name=\"my-job\",\n        job_updates=V1Job(metadata={\"labels\": {\"foo\": \"bar\"}}),\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n    )\n

    Source code in prefect_kubernetes/jobs.py
    @task\nasync def patch_namespaced_job(\n    kubernetes_credentials: KubernetesCredentials,\n    job_name: str,\n    job_updates: V1Job,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Job:\n    \"\"\"Task for patching a namespaced Kubernetes job.\n\n    Args:\n        kubernetes_credentials: KubernetesCredentials block\n            holding authentication needed to generate the required API client.\n        job_name: The name of a job to patch.\n        job_updates: A Kubernetes `V1Job` specification.\n        namespace: The Kubernetes namespace to patch this job in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Raises:\n        ValueError: if `job_name` is `None`.\n\n    Returns:\n        A Kubernetes `V1Job` object.\n\n    Example:\n        Patch \"my-job\" in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.jobs import patch_namespaced_job\n\n        from kubernetes.client.models import V1Job\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_job_metadata = patch_namespaced_job(\n                job_name=\"my-job\",\n                job_updates=V1Job(metadata={\"labels\": {\"foo\": \"bar\"}}}),\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n            )\n        ```\n    \"\"\"\n\n    with kubernetes_credentials.get_client(\"batch\") as batch_v1_client:\n        return await run_sync_in_worker_thread(\n            batch_v1_client.patch_namespaced_job,\n            name=job_name,\n            namespace=namespace,\n            body=job_updates,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.read_namespaced_job","title":"read_namespaced_job async","text":"

    Task for reading a namespaced Kubernetes job.

    Parameters:

    Name Type Description Default kubernetes_credentials KubernetesCredentials

    KubernetesCredentials block holding authentication needed to generate the required API client.

    required job_name str

    The name of a job to read.

    required namespace Optional[str]

    The Kubernetes namespace to read this job in.

    'default' **kube_kwargs Dict[str, Any]

    Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

    {}

    Raises:

    Type Description ValueError

    if job_name is None.

    Returns:

    Type Description V1Job

    A Kubernetes V1Job object.

    Example

    Read \"my-job\" in the default namespace:

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.jobs import read_namespaced_job\n\n@flow\ndef kubernetes_orchestrator():\n    v1_job_metadata = read_namespaced_job(\n        job_name=\"my-job\",\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n    )\n

    Source code in prefect_kubernetes/jobs.py
    @task\nasync def read_namespaced_job(\n    kubernetes_credentials: KubernetesCredentials,\n    job_name: str,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Job:\n    \"\"\"Task for reading a namespaced Kubernetes job.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block\n            holding authentication needed to generate the required API client.\n        job_name: The name of a job to read.\n        namespace: The Kubernetes namespace to read this job in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Raises:\n        ValueError: if `job_name` is `None`.\n\n    Returns:\n        A Kubernetes `V1Job` object.\n\n    Example:\n        Read \"my-job\" in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.jobs import read_namespaced_job\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_job_metadata = read_namespaced_job(\n                job_name=\"my-job\",\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"batch\") as batch_v1_client:\n        return await run_sync_in_worker_thread(\n            batch_v1_client.read_namespaced_job,\n            name=job_name,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.read_namespaced_job_status","title":"read_namespaced_job_status async","text":"

    Task for fetching status of a namespaced Kubernetes job.

    Parameters:

    Name Type Description Default kubernetes_credentials KubernetesCredentials

    KubernetesCredentials block holding authentication needed to generate the required API client.

    required job_name str

    The name of a job to fetch status for.

    required namespace Optional[str]

    The Kubernetes namespace to fetch the job's status from.

    'default' **kube_kwargs Dict[str, Any]

    Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

    {}

    Returns:

    Type Description V1Job

    A Kubernetes V1JobStatus object.

    Example

    Fetch status of a job in the default namespace:

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.jobs import read_namespaced_job_status\n\n@flow\ndef kubernetes_orchestrator():\n    v1_job_status = read_namespaced_job_status(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        job_name=\"my-job\",\n    )\n

    Source code in prefect_kubernetes/jobs.py
    @task\nasync def read_namespaced_job_status(\n    kubernetes_credentials: KubernetesCredentials,\n    job_name: str,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Job:\n    \"\"\"Task for fetching status of a namespaced Kubernetes job.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block\n            holding authentication needed to generate the required API client.\n        job_name: The name of a job to fetch status for.\n        namespace: The Kubernetes namespace to fetch status of job in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Returns:\n        A Kubernetes `V1JobStatus` object.\n\n    Example:\n        Fetch status of a job in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.jobs import read_namespaced_job_status\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_job_status = read_namespaced_job_status(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                job_name=\"my-job\",\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"batch\") as batch_v1_client:\n        return await run_sync_in_worker_thread(\n            batch_v1_client.read_namespaced_job_status,\n            name=job_name,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.replace_namespaced_job","title":"replace_namespaced_job async","text":"

    Task for replacing a namespaced Kubernetes job.

    Parameters:

    Name Type Description Default kubernetes_credentials KubernetesCredentials

    KubernetesCredentials block holding authentication needed to generate the required API client.

    required job_name str

    The name of a job to replace.

    required new_job V1Job

    A Kubernetes V1Job specification.

    required namespace Optional[str]

    The Kubernetes namespace to replace this job in.

    'default' **kube_kwargs Dict[str, Any]

    Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

    {}

    Returns:

    Type Description V1Job

    A Kubernetes V1Job object.

    Example

    Replace \"my-job\" in the default namespace:

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.jobs import replace_namespaced_job\nfrom kubernetes.client.models import V1Job\n\n@flow\ndef kubernetes_orchestrator():\n    v1_job_metadata = replace_namespaced_job(\n        new_job=V1Job(metadata={\"labels\": {\"foo\": \"bar\"}}),\n        job_name=\"my-job\",\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n    )\n

    Source code in prefect_kubernetes/jobs.py
    @task\nasync def replace_namespaced_job(\n    kubernetes_credentials: KubernetesCredentials,\n    job_name: str,\n    new_job: V1Job,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Job:\n    \"\"\"Task for replacing a namespaced Kubernetes job.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block\n            holding authentication needed to generate the required API client.\n        job_name: The name of a job to replace.\n        new_job: A Kubernetes `V1Job` specification.\n        namespace: The Kubernetes namespace to replace this job in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Returns:\n        A Kubernetes `V1Job` object.\n\n    Example:\n        Replace \"my-job\" in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.jobs import replace_namespaced_job\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_job_metadata = replace_namespaced_job(\n                new_job=V1Job(metadata={\"labels\": {\"foo\": \"bar\"}}),\n                job_name=\"my-job\",\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"batch\") as batch_v1_client:\n        return await run_sync_in_worker_thread(\n            batch_v1_client.replace_namespaced_job,\n            name=job_name,\n            body=new_job,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/pods/","title":"Pods","text":""},{"location":"integrations/prefect-kubernetes/pods/#prefect_kubernetes.pods","title":"prefect_kubernetes.pods","text":"

    Module for interacting with Kubernetes pods from Prefect flows.

    "},{"location":"integrations/prefect-kubernetes/pods/#prefect_kubernetes.pods.create_namespaced_pod","title":"create_namespaced_pod async","text":"

    Create a Kubernetes pod in a given namespace.

    Parameters:

    kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
    new_pod (V1Pod, required): A Kubernetes V1Pod specification.
    namespace (Optional[str], default 'default'): The Kubernetes namespace to create this pod in.
    **kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

    Returns:

    V1Pod: A Kubernetes V1Pod object.

    Example

    Create a pod in the default namespace:

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.pods import create_namespaced_pod\nfrom kubernetes.client.models import V1Pod\n\n@flow\ndef kubernetes_orchestrator():\n    v1_pod_metadata = create_namespaced_pod(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        new_pod=V1Pod(metadata={\"name\": \"test-pod\"}),\n    )\n

    Source code in prefect_kubernetes/pods.py
    @task\nasync def create_namespaced_pod(\n    kubernetes_credentials: KubernetesCredentials,\n    new_pod: V1Pod,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Pod:\n    \"\"\"Create a Kubernetes pod in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        new_pod: A Kubernetes `V1Pod` specification.\n        namespace: The Kubernetes namespace to create this pod in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1Pod` object.\n\n    Example:\n        Create a pod in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.pods import create_namespaced_pod\n        from kubernetes.client.models import V1Pod\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_pod_metadata = create_namespaced_pod(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                new_pod=V1Pod(metadata={\"name\": \"test-pod\"}),\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.create_namespaced_pod,\n            namespace=namespace,\n            body=new_pod,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/pods/#prefect_kubernetes.pods.delete_namespaced_pod","title":"delete_namespaced_pod async","text":"

    Delete a Kubernetes pod in a given namespace.

    Parameters:

    kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
    pod_name (str, required): The name of the pod to delete.
    delete_options (Optional[V1DeleteOptions], default None): A Kubernetes V1DeleteOptions object.
    namespace (Optional[str], default 'default'): The Kubernetes namespace to delete this pod from.
    **kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

    Returns:

    V1Pod: A Kubernetes V1Pod object.

    Example

    Delete a pod in the default namespace:

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.pods import delete_namespaced_pod\nfrom kubernetes.client.models import V1DeleteOptions\n\n@flow\ndef kubernetes_orchestrator():\n    v1_pod_metadata = delete_namespaced_pod(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        pod_name=\"test-pod\",\n        delete_options=V1DeleteOptions(grace_period_seconds=0),\n    )\n

    Source code in prefect_kubernetes/pods.py
    @task\nasync def delete_namespaced_pod(\n    kubernetes_credentials: KubernetesCredentials,\n    pod_name: str,\n    delete_options: Optional[V1DeleteOptions] = None,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Pod:\n    \"\"\"Delete a Kubernetes pod in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        pod_name: The name of the pod to delete.\n        delete_options: A Kubernetes `V1DeleteOptions` object.\n        namespace: The Kubernetes namespace to delete this pod from.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1Pod` object.\n\n    Example:\n        Delete a pod in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.pods import delete_namespaced_pod\n        from kubernetes.client.models import V1DeleteOptions\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_pod_metadata = delete_namespaced_pod(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                pod_name=\"test-pod\",\n                delete_options=V1DeleteOptions(grace_period_seconds=0),\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.delete_namespaced_pod,\n            pod_name,\n            body=delete_options,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/pods/#prefect_kubernetes.pods.list_namespaced_pod","title":"list_namespaced_pod async","text":"

    List all pods in a given namespace.

    Parameters:

    kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
    namespace (Optional[str], default 'default'): The Kubernetes namespace to list pods from.
    **kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

    Returns:

    V1PodList: A Kubernetes V1PodList object.

    Example

    List all pods in the default namespace:

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.pods import list_namespaced_pod\n\n@flow\ndef kubernetes_orchestrator():\n    v1_pod_list = list_namespaced_pod(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\")\n    )\n

    Source code in prefect_kubernetes/pods.py
    @task\nasync def list_namespaced_pod(\n    kubernetes_credentials: KubernetesCredentials,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1PodList:\n    \"\"\"List all pods in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        namespace: The Kubernetes namespace to list pods from.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1PodList` object.\n\n    Example:\n        List all pods in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.pods import list_namespaced_pod\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_pod_list = list_namespaced_pod(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\")\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.list_namespaced_pod, namespace=namespace, **kube_kwargs\n        )\n
    "},{"location":"integrations/prefect-kubernetes/pods/#prefect_kubernetes.pods.patch_namespaced_pod","title":"patch_namespaced_pod async","text":"

    Patch a Kubernetes pod in a given namespace.

    Parameters:

    kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
    pod_name (str, required): The name of the pod to patch.
    pod_updates (V1Pod, required): A Kubernetes V1Pod object.
    namespace (Optional[str], default 'default'): The Kubernetes namespace to patch this pod in.
    **kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

    Returns:

    V1Pod: A Kubernetes V1Pod object.

    Example

    Patch a pod in the default namespace:

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.pods import patch_namespaced_pod\nfrom kubernetes.client.models import V1Pod\n\n@flow\ndef kubernetes_orchestrator():\n    v1_pod_metadata = patch_namespaced_pod(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        pod_name=\"test-pod\",\n        pod_updates=V1Pod(metadata={\"labels\": {\"foo\": \"bar\"}}),\n    )\n

    Source code in prefect_kubernetes/pods.py
    @task\nasync def patch_namespaced_pod(\n    kubernetes_credentials: KubernetesCredentials,\n    pod_name: str,\n    pod_updates: V1Pod,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Pod:\n    \"\"\"Patch a Kubernetes pod in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        pod_name: The name of the pod to patch.\n        pod_updates: A Kubernetes `V1Pod` object.\n        namespace: The Kubernetes namespace to patch this pod in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1Pod` object.\n\n    Example:\n        Patch a pod in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.pods import patch_namespaced_pod\n        from kubernetes.client.models import V1Pod\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_pod_metadata = patch_namespaced_pod(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                pod_name=\"test-pod\",\n                pod_updates=V1Pod(metadata={\"labels\": {\"foo\": \"bar\"}}),\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.patch_namespaced_pod,\n            name=pod_name,\n            namespace=namespace,\n            body=pod_updates,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/pods/#prefect_kubernetes.pods.read_namespaced_pod","title":"read_namespaced_pod async","text":"

    Read information on a Kubernetes pod in a given namespace.

    Parameters:

    kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
    pod_name (str, required): The name of the pod to read.
    namespace (Optional[str], default 'default'): The Kubernetes namespace to read this pod from.
    **kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

    Returns:

    V1Pod: A Kubernetes V1Pod object.

    Example

    Read a pod in the default namespace:

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.pods import read_namespaced_pod\n\n@flow\ndef kubernetes_orchestrator():\n    v1_pod_metadata = read_namespaced_pod(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        pod_name=\"test-pod\"\n    )\n

    Source code in prefect_kubernetes/pods.py
    @task\nasync def read_namespaced_pod(\n    kubernetes_credentials: KubernetesCredentials,\n    pod_name: str,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Pod:\n    \"\"\"Read information on a Kubernetes pod in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        pod_name: The name of the pod to read.\n        namespace: The Kubernetes namespace to read this pod from.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1Pod` object.\n\n    Example:\n        Read a pod in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_pod_metadata = read_namespaced_pod(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                pod_name=\"test-pod\"\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.read_namespaced_pod,\n            name=pod_name,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/pods/#prefect_kubernetes.pods.read_namespaced_pod_log","title":"read_namespaced_pod_log async","text":"

    Read logs from a Kubernetes pod in a given namespace.

    If print_func is provided, the logs will be streamed using that function. If the pod is no longer running, logs generated up to that point will be returned.

    Parameters:

    kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
    pod_name (str, required): The name of the pod to read logs from.
    container (str, required): The name of the container to read logs from.
    namespace (Optional[str], default 'default'): The Kubernetes namespace to read this pod from.
    print_func (Optional[Callable], default None): If provided, the pod logs are streamed by calling print_func on every line and the task returns None. If not provided, the current pod logs are returned immediately.
    **kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

    Returns:

    Union[str, None]: A string containing the logs from the pod's container.

    Example

    Read logs from a pod in the default namespace:

    from prefect import flow, get_run_logger\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.pods import read_namespaced_pod_log\n\n@flow\ndef kubernetes_orchestrator():\n    logger = get_run_logger()\n\n    pod_logs = read_namespaced_pod_log(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        pod_name=\"test-pod\",\n        container=\"test-container\",\n        print_func=logger.info\n    )\n

    Source code in prefect_kubernetes/pods.py
    @task\nasync def read_namespaced_pod_log(\n    kubernetes_credentials: KubernetesCredentials,\n    pod_name: str,\n    container: str,\n    namespace: Optional[str] = \"default\",\n    print_func: Optional[Callable] = None,\n    **kube_kwargs: Dict[str, Any],\n) -> Union[str, None]:\n    \"\"\"Read logs from a Kubernetes pod in a given namespace.\n\n    If `print_func` is provided, the logs will be streamed using that function.\n    If the pod is no longer running, logs generated up to that point will be returned.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        pod_name: The name of the pod to read logs from.\n        container: The name of the container to read logs from.\n        namespace: The Kubernetes namespace to read this pod from.\n        print_func: If provided, it will stream the pod logs by calling `print_func`\n            for every line and returning `None`. If not provided, the current pod\n            logs will be returned immediately.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A string containing the logs from the pod's container.\n\n    Example:\n        Read logs from a pod in the default namespace:\n        ```python\n        from prefect import flow, get_run_logger\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.pods import read_namespaced_pod_logs\n\n        @flow\n        def kubernetes_orchestrator():\n            logger = get_run_logger()\n\n            pod_logs = read_namespaced_pod_logs(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                pod_name=\"test-pod\",\n                container=\"test-container\",\n                print_func=logger.info\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        if print_func is not None:\n            # should no longer need to manually refresh on ApiException.status == 410\n            # as of https://github.com/kubernetes-client/python-base/pull/133\n            for log_line in Watch().stream(\n                core_v1_client.read_namespaced_pod_log,\n                name=pod_name,\n                namespace=namespace,\n                container=container,\n            ):\n                print_func(log_line)\n\n        return await run_sync_in_worker_thread(\n            core_v1_client.read_namespaced_pod_log,\n            name=pod_name,\n            namespace=namespace,\n            container=container,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/pods/#prefect_kubernetes.pods.replace_namespaced_pod","title":"replace_namespaced_pod async","text":"

    Replace a Kubernetes pod in a given namespace.

    Parameters:

    kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
    pod_name (str, required): The name of the pod to replace.
    new_pod (V1Pod, required): A Kubernetes V1Pod object.
    namespace (Optional[str], default 'default'): The Kubernetes namespace to replace this pod in.
    **kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

    Returns:

    V1Pod: A Kubernetes V1Pod object.

    Example

    Replace a pod in the default namespace:

    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.pods import replace_namespaced_pod\nfrom kubernetes.client.models import V1Pod\n\n@flow\ndef kubernetes_orchestrator():\n    v1_pod_metadata = replace_namespaced_pod(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        pod_name=\"test-pod\",\n        new_pod=V1Pod(metadata={\"labels\": {\"foo\": \"bar\"}})\n    )\n

    Source code in prefect_kubernetes/pods.py
    @task\nasync def replace_namespaced_pod(\n    kubernetes_credentials: KubernetesCredentials,\n    pod_name: str,\n    new_pod: V1Pod,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Pod:\n    \"\"\"Replace a Kubernetes pod in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        pod_name: The name of the pod to replace.\n        new_pod: A Kubernetes `V1Pod` object.\n        namespace: The Kubernetes namespace to replace this pod in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1Pod` object.\n\n    Example:\n        Replace a pod in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.pods import replace_namespaced_pod\n        from kubernetes.client.models import V1Pod\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_pod_metadata = replace_namespaced_pod(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                pod_name=\"test-pod\",\n                new_pod=V1Pod(metadata={\"labels\": {\"foo\": \"bar\"}})\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.replace_namespaced_pod,\n            body=new_pod,\n            name=pod_name,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/services/","title":"Services","text":""},{"location":"integrations/prefect-kubernetes/services/#prefect_kubernetes.services","title":"prefect_kubernetes.services","text":"

    Tasks for working with Kubernetes services.

    "},{"location":"integrations/prefect-kubernetes/services/#prefect_kubernetes.services.create_namespaced_service","title":"create_namespaced_service async","text":"

    Create a namespaced Kubernetes service.

    Parameters:

    kubernetes_credentials (KubernetesCredentials, required): A KubernetesCredentials block used to generate a CoreV1Api client.
    new_service (V1Service, required): A V1Service object representing the service to create.
    namespace (Optional[str], default 'default'): The namespace to create the service in.
    **kube_kwargs (Optional[Dict[str, Any]], default {}): Additional keyword arguments to pass to the CoreV1Api method call.

    Returns:

    V1Service: A V1Service representing the created service.

    Example
    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.services import create_namespaced_service\nfrom kubernetes.client.models import V1Service\n\n@flow\ndef create_service_flow():\n    v1_service = create_namespaced_service(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        new_service=V1Service(metadata={...}, spec={...}),\n    )\n
    Source code in prefect_kubernetes/services.py
    @task\nasync def create_namespaced_service(\n    kubernetes_credentials: KubernetesCredentials,\n    new_service: V1Service,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Optional[Dict[str, Any]],\n) -> V1Service:\n    \"\"\"Create a namespaced Kubernetes service.\n\n    Args:\n        kubernetes_credentials: A `KubernetesCredentials` block used to generate a\n            `CoreV1Api` client.\n        new_service: A `V1Service` object representing the service to create.\n        namespace: The namespace to create the service in.\n        **kube_kwargs: Additional keyword arguments to pass to the `CoreV1Api`\n            method call.\n\n    Returns:\n        A `V1Service` representing the created service.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.services import create_namespaced_service\n        from kubernetes.client.models import V1Service\n\n        @flow\n        def create_service_flow():\n            v1_service = create_namespaced_service(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                new_service=V1Service(metadata={...}, spec={...}),\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.create_namespaced_service,\n            body=new_service,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/services/#prefect_kubernetes.services.delete_namespaced_service","title":"delete_namespaced_service async","text":"

    Delete a namespaced Kubernetes service.

    Parameters:

    kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
    service_name (str, required): The name of the service to delete.
    delete_options (Optional[V1DeleteOptions], default None): A V1DeleteOptions object representing the options to delete the service with.
    namespace (Optional[str], default 'default'): The namespace to delete the service from.
    **kube_kwargs (Optional[Dict[str, Any]], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

    Returns:

    V1Service: A V1Service representing the deleted service.

    Example
    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.services import delete_namespaced_service\n\n@flow\ndef kubernetes_orchestrator():\n    delete_namespaced_service(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        service_name=\"my-service\",\n        namespace=\"my-namespace\",\n    )\n
    Source code in prefect_kubernetes/services.py
    @task\nasync def delete_namespaced_service(\n    kubernetes_credentials: KubernetesCredentials,\n    service_name: str,\n    delete_options: Optional[V1DeleteOptions] = None,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Optional[Dict[str, Any]],\n) -> V1Service:\n    \"\"\"Delete a namespaced Kubernetes service.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        service_name: The name of the service to delete.\n        delete_options: A `V1DeleteOptions` object representing the options to\n            delete the service with.\n        namespace: The namespace to delete the service from.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A `V1Service` representing the deleted service.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.services import delete_namespaced_service\n\n        @flow\n        def kubernetes_orchestrator():\n            delete_namespaced_service(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                service_name=\"my-service\",\n                namespace=\"my-namespace\",\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.delete_namespaced_service,\n            name=service_name,\n            namespace=namespace,\n            body=delete_options,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/services/#prefect_kubernetes.services.list_namespaced_service","title":"list_namespaced_service async","text":"

    List namespaced Kubernetes services.

    Parameters:

    kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
    namespace (Optional[str], default 'default'): The namespace to list services from.
    **kube_kwargs (Optional[Dict[str, Any]], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

    Returns:

    V1ServiceList: A V1ServiceList representing the list of services in the given namespace.

    Example
    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.services import list_namespaced_service\n\n@flow\ndef kubernetes_orchestrator():\n    list_namespaced_service(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        namespace=\"my-namespace\",\n    )\n
    Source code in prefect_kubernetes/services.py
    @task\nasync def list_namespaced_service(\n    kubernetes_credentials: KubernetesCredentials,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Optional[Dict[str, Any]],\n) -> V1ServiceList:\n    \"\"\"List namespaced Kubernetes services.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        namespace: The namespace to list services from.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A `V1ServiceList` representing the list of services in the given namespace.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.services import list_namespaced_service\n\n        @flow\n        def kubernetes_orchestrator():\n            list_namespaced_service(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                namespace=\"my-namespace\",\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.list_namespaced_service,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/services/#prefect_kubernetes.services.patch_namespaced_service","title":"patch_namespaced_service async","text":"

    Patch a namespaced Kubernetes service.

    Parameters:

    kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
    service_name (str, required): The name of the service to patch.
    service_updates (V1Service, required): A V1Service object representing patches to service_name.
    namespace (Optional[str], default 'default'): The namespace to patch the service in.
    **kube_kwargs (Optional[Dict[str, Any]], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

    Returns:

    V1Service: A V1Service representing the patched service.

    Example
    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.services import patch_namespaced_service\nfrom kubernetes.client.models import V1Service\n\n@flow\ndef kubernetes_orchestrator():\n    patch_namespaced_service(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        service_name=\"my-service\",\n        service_updates=V1Service(metadata={...}, spec={...}),\n        namespace=\"my-namespace\",\n    )\n
    Source code in prefect_kubernetes/services.py
    @task\nasync def patch_namespaced_service(\n    kubernetes_credentials: KubernetesCredentials,\n    service_name: str,\n    service_updates: V1Service,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Optional[Dict[str, Any]],\n) -> V1Service:\n    \"\"\"Patch a namespaced Kubernetes service.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        service_name: The name of the service to patch.\n        service_updates: A `V1Service` object representing patches to `service_name`.\n        namespace: The namespace to patch the service in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A `V1Service` representing the patched service.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.services import patch_namespaced_service\n        from kubernetes.client.models import V1Service\n\n        @flow\n        def kubernetes_orchestrator():\n            patch_namespaced_service(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                service_name=\"my-service\",\n                new_service=V1Service(metadata={...}, spec={...}),\n                namespace=\"my-namespace\",\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.patch_namespaced_service,\n            name=service_name,\n            body=service_updates,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/services/#prefect_kubernetes.services.read_namespaced_service","title":"read_namespaced_service async","text":"

    Read a namespaced Kubernetes service.

    Parameters:

    kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
    service_name (str, required): The name of the service to read.
    namespace (Optional[str], default 'default'): The namespace to read the service from.
    **kube_kwargs (Optional[Dict[str, Any]], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

    Returns:

    V1Service: A V1Service object representing the service.

    Example
    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.services import read_namespaced_service\n\n@flow\ndef kubernetes_orchestrator():\n    read_namespaced_service(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        service_name=\"my-service\",\n        namespace=\"my-namespace\",\n    )\n
    Source code in prefect_kubernetes/services.py
    @task\nasync def read_namespaced_service(\n    kubernetes_credentials: KubernetesCredentials,\n    service_name: str,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Optional[Dict[str, Any]],\n) -> V1Service:\n    \"\"\"Read a namespaced Kubernetes service.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        service_name: The name of the service to read.\n        namespace: The namespace to read the service from.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A `V1Service` object representing the service.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.services import read_namespaced_service\n\n        @flow\n        def kubernetes_orchestrator():\n            read_namespaced_service(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                service_name=\"my-service\",\n                namespace=\"my-namespace\",\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.read_namespaced_service,\n            name=service_name,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/services/#prefect_kubernetes.services.replace_namespaced_service","title":"replace_namespaced_service async","text":"

    Replace a namespaced Kubernetes service.

    Parameters:

    kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
    service_name (str, required): The name of the service to replace.
    new_service (V1Service, required): A V1Service object representing the new service.
    namespace (Optional[str], default 'default'): The namespace to replace the service in.
    **kube_kwargs (Optional[Dict[str, Any]], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

    Returns:

    V1Service: A V1Service representing the new service.

    Example
    from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.services import replace_namespaced_service\nfrom kubernetes.client.models import V1Service\n\n@flow\ndef kubernetes_orchestrator():\n    replace_namespaced_service(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        service_name=\"my-service\",\n        new_service=V1Service(metadata={...}, spec={...}),\n        namespace=\"my-namespace\",\n    )\n
    Source code in prefect_kubernetes/services.py
    @task\nasync def replace_namespaced_service(\n    kubernetes_credentials: KubernetesCredentials,\n    service_name: str,\n    new_service: V1Service,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Optional[Dict[str, Any]],\n) -> V1Service:\n    \"\"\"Replace a namespaced Kubernetes service.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        service_name: The name of the service to replace.\n        new_service: A `V1Service` object representing the new service.\n        namespace: The namespace to replace the service in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A `V1Service` representing the new service.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.services import replace_namespaced_service\n        from kubernetes.client.models import V1Service\n\n        @flow\n        def kubernetes_orchestrator():\n            replace_namespaced_service(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                service_name=\"my-service\",\n                new_service=V1Service(metadata={...}, spec={...}),\n                namespace=\"my-namespace\",\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.replace_namespaced_service,\n            name=service_name,\n            body=new_service,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
    "},{"location":"integrations/prefect-kubernetes/utilities/","title":"Utilities","text":""},{"location":"integrations/prefect-kubernetes/utilities/#prefect_kubernetes.utilities","title":"prefect_kubernetes.utilities","text":"

    Utilities for working with the Python Kubernetes API.

    "},{"location":"integrations/prefect-kubernetes/utilities/#prefect_kubernetes.utilities.convert_manifest_to_model","title":"convert_manifest_to_model","text":"

    Recursively converts a dict representation of a Kubernetes resource to the corresponding Python model containing the Python models that compose it, according to the openapi_types on the class retrieved with v1_model_name.

    If manifest is a path-like object with a .yaml or .yml extension, it will be treated as a path to a Kubernetes resource manifest and loaded into a dict.

    Parameters:

    manifest (Union[Path, str, KubernetesManifest], required): A path to a Kubernetes resource manifest or its dict representation.
    v1_model_name (str, required): The name of a Kubernetes client model to convert the manifest to.

    Returns:

    V1KubernetesModel: A populated instance of a Kubernetes client model with type v1_model_name.

    Raises:

    ValueError: If v1_model_name is not a valid Kubernetes client model name.
    ValueError: If manifest is path-like and is not a valid yaml filename.
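
    Example (a sketch with an illustrative minimal pod manifest; the field values are arbitrary):

    from prefect_kubernetes.utilities import convert_manifest_to_model

    # A minimal pod manifest expressed as a dict (illustrative values)
    pod_manifest = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "demo-pod"},
        "spec": {"containers": [{"name": "demo", "image": "busybox"}]},
    }

    # Returns a populated kubernetes.client V1Pod instance built recursively
    # from the nested dicts
    v1_pod = convert_manifest_to_model(pod_manifest, "V1Pod")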

    Source code in prefect_kubernetes/utilities.py
    def convert_manifest_to_model(\n    manifest: Union[Path, str, KubernetesManifest], v1_model_name: str\n) -> V1KubernetesModel:\n    \"\"\"Recursively converts a `dict` representation of a Kubernetes resource to the\n    corresponding Python model containing the Python models that compose it,\n    according to the `openapi_types` on the class retrieved with `v1_model_name`.\n\n    If `manifest` is a path-like object with a `.yaml` or `.yml` extension, it will be\n    treated as a path to a Kubernetes resource manifest and loaded into a `dict`.\n\n    Args:\n        manifest: A path to a Kubernetes resource manifest or its `dict` representation.\n        v1_model_name: The name of a Kubernetes client model to convert the manifest to.\n\n    Returns:\n        A populated instance of a Kubernetes client model with type `v1_model_name`.\n\n    Raises:\n        ValueError: If `v1_model_name` is not a valid Kubernetes client model name.\n        ValueError: If `manifest` is path-like and is not a valid yaml filename.\n    \"\"\"\n    if not manifest:\n        return None\n\n    if not (isinstance(v1_model_name, str) and v1_model_name in set(dir(k8s_models))):\n        raise ValueError(\n            \"`v1_model` must be the name of a valid Kubernetes client model, received \"\n            f\": {v1_model_name!r}\"\n        )\n\n    if isinstance(manifest, (Path, str)):\n        str_path = str(manifest)\n        if not str_path.endswith((\".yaml\", \".yml\")):\n            raise ValueError(\"Manifest must be a valid dict or path to a .yaml file.\")\n        manifest = KubernetesJob.job_from_file(manifest)\n\n    converted_manifest = {}\n    v1_model = getattr(k8s_models, v1_model_name)\n    valid_supplied_fields = (  # valid and specified fields for current `v1_model_name`\n        (k, v)\n        for k, v in v1_model.openapi_types.items()\n        if v1_model.attribute_map[k] in manifest  # map goes \ud83d\udc0d -> \ud83d\udc2b, user supplies \ud83d\udc2b\n    )\n\n    for field, value_type in valid_supplied_fields:\n        if value_type.startswith(\"V1\"):  # field value is another model\n            converted_manifest[field] = convert_manifest_to_model(\n                manifest[v1_model.attribute_map[field]], value_type\n            )\n        elif value_type.startswith(\"list[V1\"):  # field value is a list of models\n            field_item_type = value_type.replace(\"list[\", \"\").replace(\"]\", \"\")\n            try:\n                converted_manifest[field] = [\n                    convert_manifest_to_model(item, field_item_type)\n                    for item in manifest[v1_model.attribute_map[field]]\n                ]\n            except TypeError:\n                converted_manifest[field] = manifest[v1_model.attribute_map[field]]\n        elif value_type in base_types:  # field value is a primitive Python type\n            converted_manifest[field] = manifest[v1_model.attribute_map[field]]\n\n    return v1_model(**converted_manifest)\n
    "},{"location":"integrations/prefect-kubernetes/utilities/#prefect_kubernetes.utilities.enable_socket_keep_alive","title":"enable_socket_keep_alive","text":"

    Sets the keep-alive flags on the Kubernetes client object. Neither the kubernetes library nor the urllib3 library it uses internally exposes a way to enable keep-alive messages, so the flags are added directly to the underlying sockets.
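
    A minimal usage sketch (building a client from the local kubeconfig is shown purely for illustration):

    import kubernetes
    from prefect_kubernetes.utilities import enable_socket_keep_alive

    # Build an ApiClient from the local kubeconfig, then opt its connection
    # pool into TCP keep-alive so long-lived watches are not silently dropped.
    client = kubernetes.config.new_client_from_config()
    enable_socket_keep_alive(client)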

    Source code in prefect_kubernetes/utilities.py
    def enable_socket_keep_alive(client: ApiClient) -> None:\n    \"\"\"\n    Setting the keep-alive flags on the kubernetes client object.\n    Unfortunately neither the kubernetes library nor the urllib3 library which\n    kubernetes is using internally offer the functionality to enable keep-alive\n    messages. Thus the flags are added to be used on the underlying sockets.\n    \"\"\"\n\n    socket_options = [(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)]\n\n    if hasattr(socket, \"TCP_KEEPINTVL\"):\n        socket_options.append((socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30))\n\n    if hasattr(socket, \"TCP_KEEPCNT\"):\n        socket_options.append((socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 6))\n\n    if hasattr(socket, \"TCP_KEEPIDLE\"):\n        socket_options.append((socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 6))\n\n    if sys.platform == \"darwin\":\n        # TCP_KEEP_ALIVE not available on socket module in macOS, but defined in\n        # https://github.com/apple/darwin-xnu/blob/2ff845c2e033bd0ff64b5b6aa6063a1f8f65aa32/bsd/netinet/tcp.h#L215\n        TCP_KEEP_ALIVE = 0x10\n        socket_options.append((socket.IPPROTO_TCP, TCP_KEEP_ALIVE, 30))\n\n    client.rest_client.pool_manager.connection_pool_kw[\n        \"socket_options\"\n    ] = socket_options\n
    "},{"location":"integrations/prefect-kubernetes/worker/","title":"Worker","text":""},{"location":"integrations/prefect-kubernetes/worker/#prefect_kubernetes.worker","title":"prefect_kubernetes.worker","text":"

    Module containing the Kubernetes worker used for executing flow runs as Kubernetes jobs.

    To start a Kubernetes worker, run the following command:

    prefect worker start --pool 'my-work-pool' --type kubernetes\n

    Replace my-work-pool with the name of the work pool you want the worker to poll for flow runs.

    "},{"location":"integrations/prefect-kubernetes/worker/#prefect_kubernetes.worker--securing-your-prefect-cloud-api-key","title":"Securing your Prefect Cloud API key","text":"

    If you are using Prefect Cloud and would like to pass your Prefect Cloud API key to created jobs via a Kubernetes secret, set the PREFECT_KUBERNETES_WORKER_STORE_PREFECT_API_IN_SECRET environment variable before starting your worker:

    export PREFECT_KUBERNETES_WORKER_STORE_PREFECT_API_IN_SECRET=\"true\"\nprefect worker start --pool 'my-work-pool' --type kubernetes\n

    Note that your worker will need permission to create secrets in the same namespace(s) that Kubernetes jobs are created in to execute flow runs.

    "},{"location":"integrations/prefect-kubernetes/worker/#prefect_kubernetes.worker--using-a-custom-kubernetes-job-manifest-template","title":"Using a custom Kubernetes job manifest template","text":"

    The default template used for Kubernetes job manifests looks like this:

    ---\napiVersion: batch/v1\nkind: Job\nmetadata:\nlabels: \"{{ labels }}\"\nnamespace: \"{{ namespace }}\"\ngenerateName: \"{{ name }}-\"\nspec:\nttlSecondsAfterFinished: \"{{ finished_job_ttl }}\"\ntemplate:\n    spec:\n    parallelism: 1\n    completions: 1\n    restartPolicy: Never\n    serviceAccountName: \"{{ service_account_name }}\"\n    containers:\n    - name: prefect-job\n        env: \"{{ env }}\"\n        image: \"{{ image }}\"\n        imagePullPolicy: \"{{ image_pull_policy }}\"\n        args: \"{{ command }}\"\n

    Each value enclosed in {{ }} is a placeholder that will be replaced with a value at runtime. The values that can be used as placeholders are defined by the variables schema defined in the base job template.

    The default job manifest and available variables can be customized on a work pool by work pool basis. These customizations can be made via the Prefect UI when creating or editing a work pool.

    For example, if you wanted to allow custom memory requests for a Kubernetes work pool you could update the job manifest template to look like this:

    ---\napiVersion: batch/v1\nkind: Job\nmetadata:\nlabels: \"{{ labels }}\"\nnamespace: \"{{ namespace }}\"\ngenerateName: \"{{ name }}-\"\nspec:\nttlSecondsAfterFinished: \"{{ finished_job_ttl }}\"\ntemplate:\n    spec:\n    parallelism: 1\n    completions: 1\n    restartPolicy: Never\n    serviceAccountName: \"{{ service_account_name }}\"\n    containers:\n    - name: prefect-job\n        env: \"{{ env }}\"\n        image: \"{{ image }}\"\n        imagePullPolicy: \"{{ image_pull_policy }}\"\n        args: \"{{ command }}\"\n        resources:\n            requests:\n                memory: \"{{ memory }}Mi\"\n            limits:\n                memory: 128Mi\n

    In this new template, the memory placeholder allows customization of the memory allocated to Kubernetes jobs created by workers in this work pool, but the limit is hard-coded and cannot be changed by deployments.
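
    For example, a deployment could then supply that memory value as a job variable. The sketch below assumes a recent Prefect 2.x release where flow.deploy accepts job_variables; the flow, image, and pool names are illustrative:

    from prefect import flow

    @flow
    def my_flow():
        ...

    if __name__ == "__main__":
        my_flow.deploy(
            name="k8s-deployment",
            work_pool_name="my-work-pool",
            image="my-registry/my-image:latest",
            # Rendered into the template above as the "{{ memory }}Mi" request
            job_variables={"memory": 512},
        )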

    For more information about work pools and workers, check out the Prefect docs.

    "},{"location":"integrations/prefect-kubernetes/worker/#prefect_kubernetes.worker.KubernetesImagePullPolicy","title":"KubernetesImagePullPolicy","text":"

    Bases: Enum

    Enum representing the image pull policy options for a Kubernetes job.

    Source code in prefect_kubernetes/worker.py
    class KubernetesImagePullPolicy(enum.Enum):\n    \"\"\"Enum representing the image pull policy options for a Kubernetes job.\"\"\"\n\n    IF_NOT_PRESENT = \"IfNotPresent\"\n    ALWAYS = \"Always\"\n    NEVER = \"Never\"\n
    "},{"location":"integrations/prefect-kubernetes/worker/#prefect_kubernetes.worker.KubernetesWorker","title":"KubernetesWorker","text":"

    Bases: BaseWorker

    Prefect worker that executes flow runs within Kubernetes Jobs.

    Source code in prefect_kubernetes/worker.py
    class KubernetesWorker(BaseWorker):\n    \"\"\"Prefect worker that executes flow runs within Kubernetes Jobs.\"\"\"\n\n    type = \"kubernetes\"\n    job_configuration = KubernetesWorkerJobConfiguration\n    job_configuration_variables = KubernetesWorkerVariables\n    _description = (\n        \"Execute flow runs within jobs scheduled on a Kubernetes cluster. Requires a \"\n        \"Kubernetes cluster.\"\n    )\n    _display_name = \"Kubernetes\"\n    _documentation_url = \"https://prefecthq.github.io/prefect-kubernetes/worker/\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/2d0b896006ad463b49c28aaac14f31e00e32cfab-250x250.png\"  # noqa\n\n    def __init__(self, *args, **kwargs):\n        super().__init__(*args, **kwargs)\n        self._created_secrets = {}\n\n    async def run(\n        self,\n        flow_run: \"FlowRun\",\n        configuration: KubernetesWorkerJobConfiguration,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> KubernetesWorkerResult:\n        \"\"\"\n        Executes a flow run within a Kubernetes Job and waits for the flow run\n        to complete.\n\n        Args:\n            flow_run: The flow run to execute\n            configuration: The configuration to use when executing the flow run.\n            task_status: The task status object for the current flow run. If provided,\n                the task will be marked as started.\n\n        Returns:\n            KubernetesWorkerResult: A result object containing information about the\n                final state of the flow run\n        \"\"\"\n        logger = self.get_flow_run_logger(flow_run)\n\n        with self._get_configured_kubernetes_client(configuration) as client:\n            logger.info(\"Creating Kubernetes job...\")\n            job = await run_sync_in_worker_thread(\n                self._create_job, configuration, client\n            )\n            pid = await run_sync_in_worker_thread(\n                self._get_infrastructure_pid, job, client\n            )\n            # Indicate that the job has started\n            if task_status is not None:\n                task_status.started(pid)\n\n            # Monitor the job until completion\n\n            events_replicator = KubernetesEventsReplicator(\n                client=client,\n                job_name=job.metadata.name,\n                namespace=configuration.namespace,\n                worker_resource=self._event_resource(),\n                related_resources=self._event_related_resources(\n                    configuration=configuration\n                ),\n                timeout_seconds=configuration.pod_watch_timeout_seconds,\n            )\n\n            with events_replicator:\n                status_code = await run_sync_in_worker_thread(\n                    self._watch_job, logger, job.metadata.name, configuration, client\n                )\n            return KubernetesWorkerResult(identifier=pid, status_code=status_code)\n\n    async def kill_infrastructure(\n        self,\n        infrastructure_pid: str,\n        configuration: KubernetesWorkerJobConfiguration,\n        grace_seconds: int = 30,\n    ):\n        \"\"\"\n        Stops a job for a cancelled flow run based on the provided infrastructure PID\n        and run configuration.\n        \"\"\"\n        await run_sync_in_worker_thread(\n            self._stop_job, infrastructure_pid, configuration, grace_seconds\n        )\n\n    async def teardown(self, *exc_info):\n        await super().teardown(*exc_info)\n\n        await 
self._clean_up_created_secrets()\n\n    async def _clean_up_created_secrets(self):\n        \"\"\"Deletes any secrets created during the worker's operation.\"\"\"\n        coros = []\n        for key, configuration in self._created_secrets.items():\n            with self._get_configured_kubernetes_client(configuration) as client:\n                with self._get_core_client(client) as core_client:\n                    coros.append(\n                        run_sync_in_worker_thread(\n                            core_client.delete_namespaced_secret,\n                            name=key[0],\n                            namespace=key[1],\n                        )\n                    )\n\n        results = await asyncio.gather(*coros, return_exceptions=True)\n        for result in results:\n            if isinstance(result, Exception):\n                self._logger.warning(\n                    \"Failed to delete created secret with exception: %s\", result\n                )\n\n    def _stop_job(\n        self,\n        infrastructure_pid: str,\n        configuration: KubernetesWorkerJobConfiguration,\n        grace_seconds: int = 30,\n    ):\n        \"\"\"Removes the given Job from the Kubernetes cluster\"\"\"\n        with self._get_configured_kubernetes_client(configuration) as client:\n            job_cluster_uid, job_namespace, job_name = self._parse_infrastructure_pid(\n                infrastructure_pid\n            )\n\n            if job_namespace != configuration.namespace:\n                raise InfrastructureNotAvailable(\n                    f\"Unable to kill job {job_name!r}: The job is running in namespace \"\n                    f\"{job_namespace!r} but this worker expected jobs to be running in \"\n                    f\"namespace {configuration.namespace!r} based on the work pool and \"\n                    \"deployment configuration.\"\n                )\n\n            current_cluster_uid = self._get_cluster_uid(client)\n            if job_cluster_uid != current_cluster_uid:\n                raise InfrastructureNotAvailable(\n                    f\"Unable to kill job {job_name!r}: The job is running on another \"\n                    \"cluster than the one specified by the infrastructure PID.\"\n                )\n\n            with self._get_batch_client(client) as batch_client:\n                try:\n                    batch_client.delete_namespaced_job(\n                        name=job_name,\n                        namespace=job_namespace,\n                        grace_period_seconds=grace_seconds,\n                        # Foreground propagation deletes dependent objects before deleting # noqa\n                        # owner objects. 
This ensures that the pods are cleaned up before # noqa\n                        # the job is marked as deleted.\n                        # See: https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion # noqa\n                        propagation_policy=\"Foreground\",\n                    )\n                except kubernetes.client.exceptions.ApiException as exc:\n                    if exc.status == 404:\n                        raise InfrastructureNotFound(\n                            f\"Unable to kill job {job_name!r}: The job was not found.\"\n                        ) from exc\n                    else:\n                        raise\n\n    @contextmanager\n    def _get_configured_kubernetes_client(\n        self, configuration: KubernetesWorkerJobConfiguration\n    ) -> Generator[\"ApiClient\", None, None]:\n        \"\"\"\n        Returns a configured Kubernetes client.\n        \"\"\"\n\n        try:\n            if configuration.cluster_config:\n                client = kubernetes.config.new_client_from_config_dict(\n                    config_dict=configuration.cluster_config.config,\n                    context=configuration.cluster_config.context_name,\n                )\n            else:\n                # If no hardcoded config specified, try to load Kubernetes configuration\n                # within a cluster. If that doesn't work, try to load the configuration\n                # from the local environment, allowing any further ConfigExceptions to\n                # bubble up.\n                try:\n                    kubernetes.config.load_incluster_config()\n                    config = kubernetes.client.Configuration.get_default_copy()\n                    client = kubernetes.client.ApiClient(configuration=config)\n                except kubernetes.config.ConfigException:\n                    client = kubernetes.config.new_client_from_config()\n\n            if os.environ.get(\n                \"PREFECT_KUBERNETES_WORKER_ADD_TCP_KEEPALIVE\", \"TRUE\"\n            ).strip().lower() in (\"true\", \"1\"):\n                enable_socket_keep_alive(client)\n\n            yield client\n        finally:\n            client.rest_client.pool_manager.clear()\n\n    def _replace_api_key_with_secret(\n        self, configuration: KubernetesWorkerJobConfiguration, client: \"ApiClient\"\n    ):\n        \"\"\"Replaces the PREFECT_API_KEY environment variable with a Kubernetes secret\"\"\"\n        manifest_env = configuration.job_manifest[\"spec\"][\"template\"][\"spec\"][\n            \"containers\"\n        ][0].get(\"env\")\n        manifest_api_key_env = next(\n            (\n                env_entry\n                for env_entry in manifest_env\n                if env_entry.get(\"name\") == \"PREFECT_API_KEY\"\n            ),\n            {},\n        )\n        api_key = manifest_api_key_env.get(\"value\")\n        if api_key:\n            secret_name = f\"prefect-{_slugify_name(self.name)}-api-key\"\n            secret = self._upsert_secret(\n                name=secret_name,\n                value=api_key,\n                namespace=configuration.namespace,\n                client=client,\n            )\n            # Store configuration so that we can delete the secret when the worker shuts\n            # down\n            self._created_secrets[\n                (secret.metadata.name, secret.metadata.namespace)\n            ] = configuration\n            new_api_env_entry = {\n                \"name\": \"PREFECT_API_KEY\",\n                
\"valueFrom\": {\"secretKeyRef\": {\"name\": secret_name, \"key\": \"value\"}},\n            }\n            manifest_env = [\n                entry if entry.get(\"name\") != \"PREFECT_API_KEY\" else new_api_env_entry\n                for entry in manifest_env\n            ]\n            configuration.job_manifest[\"spec\"][\"template\"][\"spec\"][\"containers\"][0][\n                \"env\"\n            ] = manifest_env\n\n    @retry(\n        stop=stop_after_attempt(MAX_ATTEMPTS),\n        wait=wait_fixed(RETRY_MIN_DELAY_SECONDS)\n        + wait_random(\n            RETRY_MIN_DELAY_JITTER_SECONDS,\n            RETRY_MAX_DELAY_JITTER_SECONDS,\n        ),\n        reraise=True,\n    )\n    def _create_job(\n        self, configuration: KubernetesWorkerJobConfiguration, client: \"ApiClient\"\n    ) -> \"V1Job\":\n        \"\"\"\n        Creates a Kubernetes job from a job manifest.\n        \"\"\"\n        if os.environ.get(\n            \"PREFECT_KUBERNETES_WORKER_STORE_PREFECT_API_IN_SECRET\", \"\"\n        ).strip().lower() in (\"true\", \"1\"):\n            self._replace_api_key_with_secret(\n                configuration=configuration, client=client\n            )\n        try:\n            with self._get_batch_client(client) as batch_client:\n                job = batch_client.create_namespaced_job(\n                    configuration.namespace, configuration.job_manifest\n                )\n        except kubernetes.client.exceptions.ApiException as exc:\n            # Parse the reason and message from the response if feasible\n            message = \"\"\n            if exc.reason:\n                message += \": \" + exc.reason\n            if exc.body and \"message\" in (body := json.loads(exc.body)):\n                message += \": \" + body[\"message\"]\n\n            raise InfrastructureError(\n                f\"Unable to create Kubernetes job{message}\"\n            ) from exc\n\n        return job\n\n    def _upsert_secret(\n        self, name: str, value: str, namespace: str, client: \"ApiClient\"\n    ):\n        encoded_value = base64.b64encode(value.encode(\"utf-8\")).decode(\"utf-8\")\n        with self._get_core_client(client) as core_client:\n            try:\n                # Get the current version of the Secret and update it with the\n                # new value\n                current_secret = core_client.read_namespaced_secret(\n                    name=name, namespace=namespace\n                )\n                current_secret.data = {\"value\": encoded_value}\n                secret = core_client.replace_namespaced_secret(\n                    name=name, namespace=namespace, body=current_secret\n                )\n            except ApiException as exc:\n                if exc.status != 404:\n                    raise\n                # Create the secret if it doesn't already exist\n                metadata = V1ObjectMeta(name=name, namespace=namespace)\n                secret = V1Secret(\n                    api_version=\"v1\",\n                    kind=\"Secret\",\n                    metadata=metadata,\n                    data={\"value\": encoded_value},\n                )\n                secret = core_client.create_namespaced_secret(\n                    namespace=namespace, body=secret\n                )\n            return secret\n\n    @contextmanager\n    def _get_batch_client(\n        self, client: \"ApiClient\"\n    ) -> Generator[\"BatchV1Api\", None, None]:\n        \"\"\"\n        Context manager for retrieving a Kubernetes batch client.\n       
 \"\"\"\n        try:\n            yield kubernetes.client.BatchV1Api(api_client=client)\n        finally:\n            client.rest_client.pool_manager.clear()\n\n    def _get_infrastructure_pid(self, job: \"V1Job\", client: \"ApiClient\") -> str:\n        \"\"\"\n        Generates a Kubernetes infrastructure PID.\n\n        The PID is in the format: \"<cluster uid>:<namespace>:<job name>\".\n        \"\"\"\n        cluster_uid = self._get_cluster_uid(client)\n        pid = f\"{cluster_uid}:{job.metadata.namespace}:{job.metadata.name}\"\n        return pid\n\n    def _parse_infrastructure_pid(\n        self, infrastructure_pid: str\n    ) -> Tuple[str, str, str]:\n        \"\"\"\n        Parse a Kubernetes infrastructure PID into its component parts.\n\n        Returns a cluster UID, namespace, and job name.\n        \"\"\"\n        cluster_uid, namespace, job_name = infrastructure_pid.split(\":\", 2)\n        return cluster_uid, namespace, job_name\n\n    @contextmanager\n    def _get_core_client(\n        self, client: \"ApiClient\"\n    ) -> Generator[\"CoreV1Api\", None, None]:\n        \"\"\"\n        Context manager for retrieving a Kubernetes core client.\n        \"\"\"\n        try:\n            yield kubernetes.client.CoreV1Api(api_client=client)\n        finally:\n            client.rest_client.pool_manager.clear()\n\n    def _get_cluster_uid(self, client: \"ApiClient\") -> str:\n        \"\"\"\n        Gets a unique id for the current cluster being used.\n\n        There is no real unique identifier for a cluster. However, the `kube-system`\n        namespace is immutable and has a persistence UID that we use instead.\n\n        PREFECT_KUBERNETES_CLUSTER_UID can be set in cases where the `kube-system`\n        namespace cannot be read e.g. when a cluster role cannot be created. 
If set,\n        this variable will be used and we will not attempt to read the `kube-system`\n        namespace.\n\n        See https://github.com/kubernetes/kubernetes/issues/44954\n        \"\"\"\n        # Default to an environment variable\n        env_cluster_uid = os.environ.get(\"PREFECT_KUBERNETES_CLUSTER_UID\")\n        if env_cluster_uid:\n            return env_cluster_uid\n\n        # Read the UID from the cluster namespace\n        with self._get_core_client(client) as core_client:\n            namespace = core_client.read_namespace(\"kube-system\")\n        cluster_uid = namespace.metadata.uid\n\n        return cluster_uid\n\n    def _job_events(\n        self,\n        watch: kubernetes.watch.Watch,\n        batch_client: kubernetes.client.BatchV1Api,\n        job_name: str,\n        namespace: str,\n        watch_kwargs: dict,\n    ) -> Generator[Union[Any, dict, str], Any, None]:\n        \"\"\"\n        Stream job events.\n\n        Pick up from the current resource version returned by the API\n        in the case of a 410.\n\n        See https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes  # noqa\n        \"\"\"\n        while True:\n            try:\n                return watch.stream(\n                    func=batch_client.list_namespaced_job,\n                    namespace=namespace,\n                    field_selector=f\"metadata.name={job_name}\",\n                    **watch_kwargs,\n                )\n            except ApiException as e:\n                if e.status == 410:\n                    job_list = batch_client.list_namespaced_job(\n                        namespace=namespace, field_selector=f\"metadata.name={job_name}\"\n                    )\n                    resource_version = job_list.metadata.resource_version\n                    watch_kwargs[\"resource_version\"] = resource_version\n                else:\n                    raise\n\n    def _watch_job(\n        self,\n        logger: logging.Logger,\n        job_name: str,\n        configuration: KubernetesWorkerJobConfiguration,\n        client: \"ApiClient\",\n    ) -> int:\n        \"\"\"\n        Watch a job.\n\n        Return the final status code of the first container.\n        \"\"\"\n        logger.debug(f\"Job {job_name!r}: Monitoring job...\")\n\n        job = self._get_job(logger, job_name, configuration, client)\n        if not job:\n            return -1\n\n        pod = self._get_job_pod(logger, job_name, configuration, client)\n        if not pod:\n            return -1\n\n        # Calculate the deadline before streaming output\n        deadline = (\n            (time.monotonic() + configuration.job_watch_timeout_seconds)\n            if configuration.job_watch_timeout_seconds is not None\n            else None\n        )\n\n        if configuration.stream_output:\n            with self._get_core_client(client) as core_client:\n                logs = core_client.read_namespaced_pod_log(\n                    pod.metadata.name,\n                    configuration.namespace,\n                    follow=True,\n                    _preload_content=False,\n                    container=\"prefect-job\",\n                )\n                try:\n                    for log in logs.stream():\n                        print(log.decode().rstrip())\n\n                        # Check if we have passed the deadline and should stop streaming\n                        # logs\n                        remaining_time = (\n                            deadline - 
time.monotonic() if deadline else None\n                        )\n                        if deadline and remaining_time <= 0:\n                            break\n\n                except Exception:\n                    logger.warning(\n                        (\n                            \"Error occurred while streaming logs - \"\n                            \"Job will continue to run but logs will \"\n                            \"no longer be streamed to stdout.\"\n                        ),\n                        exc_info=True,\n                    )\n\n        with self._get_batch_client(client) as batch_client:\n            # Check if the job is completed before beginning a watch\n            job = batch_client.read_namespaced_job(\n                name=job_name, namespace=configuration.namespace\n            )\n            completed = job.status.completion_time is not None\n\n            while not completed:\n                remaining_time = (\n                    math.ceil(deadline - time.monotonic()) if deadline else None\n                )\n                if deadline and remaining_time <= 0:\n                    logger.error(\n                        f\"Job {job_name!r}: Job did not complete within \"\n                        f\"timeout of {configuration.job_watch_timeout_seconds}s.\"\n                    )\n                    return -1\n\n                watch = kubernetes.watch.Watch()\n\n                # The kubernetes library will disable retries if the timeout kwarg is\n                # present regardless of the value so we do not pass it unless given\n                # https://github.com/kubernetes-client/python/blob/84f5fea2a3e4b161917aa597bf5e5a1d95e24f5a/kubernetes/base/watch/watch.py#LL160\n                watch_kwargs = {\"timeout_seconds\": remaining_time} if deadline else {}\n\n                for event in self._job_events(\n                    watch,\n                    batch_client,\n                    job_name,\n                    configuration.namespace,\n                    watch_kwargs,\n                ):\n                    if event[\"type\"] == \"DELETED\":\n                        logger.error(f\"Job {job_name!r}: Job has been deleted.\")\n                        completed = True\n                    elif event[\"object\"].status.completion_time:\n                        if not event[\"object\"].status.succeeded:\n                            # Job failed, exit while loop and return pod exit code\n                            logger.error(f\"Job {job_name!r}: Job failed.\")\n                        completed = True\n                    # Check if the job has reached its backoff limit\n                    # and stop watching if it has\n                    elif (\n                        event[\"object\"].spec.backoff_limit is not None\n                        and event[\"object\"].status.failed is not None\n                        and event[\"object\"].status.failed\n                        > event[\"object\"].spec.backoff_limit\n                    ):\n                        logger.error(f\"Job {job_name!r}: Job reached backoff limit.\")\n                        completed = True\n                    # If the job has no backoff limit, check if it has failed\n                    # and stop watching if it has\n                    elif (\n                        not event[\"object\"].spec.backoff_limit\n                        and event[\"object\"].status.failed\n                    ):\n                        completed = True\n\n                    
if completed:\n                        watch.stop()\n                        break\n\n        with self._get_core_client(client) as core_client:\n            # Get all pods for the job\n            pods = core_client.list_namespaced_pod(\n                namespace=configuration.namespace, label_selector=f\"job-name={job_name}\"\n            )\n            # Get the status for only the most recently used pod\n            pods.items.sort(\n                key=lambda pod: pod.metadata.creation_timestamp, reverse=True\n            )\n            most_recent_pod = pods.items[0] if pods.items else None\n            first_container_status = (\n                most_recent_pod.status.container_statuses[0]\n                if most_recent_pod\n                else None\n            )\n            if not first_container_status:\n                logger.error(f\"Job {job_name!r}: No pods found for job.\")\n                return -1\n\n            # In some cases, such as spot instance evictions, the pod will be forcibly\n            # terminated and not report a status correctly.\n            elif (\n                first_container_status.state is None\n                or first_container_status.state.terminated is None\n                or first_container_status.state.terminated.exit_code is None\n            ):\n                logger.error(\n                    f\"Could not determine exit code for {job_name!r}.\"\n                    \"Exit code will be reported as -1.\"\n                    f\"First container status info did not report an exit code.\"\n                    f\"First container info: {first_container_status}.\"\n                )\n                return -1\n\n        return first_container_status.state.terminated.exit_code\n\n    def _get_job(\n        self,\n        logger: logging.Logger,\n        job_id: str,\n        configuration: KubernetesWorkerJobConfiguration,\n        client: \"ApiClient\",\n    ) -> Optional[\"V1Job\"]:\n        \"\"\"Get a Kubernetes job by id.\"\"\"\n        with self._get_batch_client(client) as batch_client:\n            try:\n                job = batch_client.read_namespaced_job(\n                    name=job_id, namespace=configuration.namespace\n                )\n            except kubernetes.client.exceptions.ApiException:\n                logger.error(f\"Job {job_id!r} was removed.\", exc_info=True)\n                return None\n            return job\n\n    def _get_job_pod(\n        self,\n        logger: logging.Logger,\n        job_name: str,\n        configuration: KubernetesWorkerJobConfiguration,\n        client: \"ApiClient\",\n    ) -> Optional[\"V1Pod\"]:\n        \"\"\"Get the first running pod for a job.\"\"\"\n        from kubernetes.client.models import V1Pod\n\n        watch = kubernetes.watch.Watch()\n        logger.debug(f\"Job {job_name!r}: Starting watch for pod start...\")\n        last_phase = None\n        last_pod_name: Optional[str] = None\n        with self._get_core_client(client) as core_client:\n            for event in watch.stream(\n                func=core_client.list_namespaced_pod,\n                namespace=configuration.namespace,\n                label_selector=f\"job-name={job_name}\",\n                timeout_seconds=configuration.pod_watch_timeout_seconds,\n            ):\n                pod: V1Pod = event[\"object\"]\n                last_pod_name = pod.metadata.name\n\n                phase = pod.status.phase\n                if phase != last_phase:\n                    logger.info(f\"Job {job_name!r}: Pod 
has status {phase!r}.\")\n\n                if phase != \"Pending\":\n                    watch.stop()\n                    return pod\n\n                last_phase = phase\n\n        # If we've gotten here, we never found the Pod that was created for the flow run\n        # Job, so let's inspect the situation and log what we can find.  It's possible\n        # that the Job ran into scheduling constraints it couldn't satisfy, like\n        # memory/CPU requests, or a volume that wasn't available, or a node with an\n        # available GPU.\n        logger.error(f\"Job {job_name!r}: Pod never started.\")\n        self._log_recent_events(logger, job_name, last_pod_name, configuration, client)\n\n    def _log_recent_events(\n        self,\n        logger: logging.Logger,\n        job_name: str,\n        pod_name: Optional[str],\n        configuration: KubernetesWorkerJobConfiguration,\n        client: \"ApiClient\",\n    ) -> None:\n        \"\"\"Look for reasons why a Job may not have been able to schedule a Pod, or why\n        a Pod may not have been able to start and log them to the provided logger.\"\"\"\n        from kubernetes.client.models import CoreV1Event, CoreV1EventList\n\n        def best_event_time(event: CoreV1Event) -> datetime:\n            \"\"\"Choose the best timestamp from a Kubernetes event\"\"\"\n            return event.event_time or event.last_timestamp\n\n        def log_event(event: CoreV1Event):\n            \"\"\"Log an event in one of a few formats to the provided logger\"\"\"\n            if event.count and event.count > 1:\n                logger.info(\n                    \"%s event %r (%s times) at %s: %s\",\n                    event.involved_object.kind,\n                    event.reason,\n                    event.count,\n                    best_event_time(event),\n                    event.message,\n                )\n            else:\n                logger.info(\n                    \"%s event %r at %s: %s\",\n                    event.involved_object.kind,\n                    event.reason,\n                    best_event_time(event),\n                    event.message,\n                )\n\n        with self._get_core_client(client) as core_client:\n            events: CoreV1EventList = core_client.list_namespaced_event(\n                configuration.namespace\n            )\n            event: CoreV1Event\n            for event in sorted(events.items, key=best_event_time):\n                if (\n                    event.involved_object.api_version == \"batch/v1\"\n                    and event.involved_object.kind == \"Job\"\n                    and event.involved_object.namespace == configuration.namespace\n                    and event.involved_object.name == job_name\n                ):\n                    log_event(event)\n\n                if (\n                    pod_name\n                    and event.involved_object.api_version == \"v1\"\n                    and event.involved_object.kind == \"Pod\"\n                    and event.involved_object.namespace == configuration.namespace\n                    and event.involved_object.name == pod_name\n                ):\n                    log_event(event)\n
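    The worker source above reads a few environment variables at runtime. A minimal sketch of setting them, assuming they are set in the environment where the worker process runs (the values are illustrative):

    import os

    # Toggles read by the worker code shown above; illustrative values only.
    os.environ["PREFECT_KUBERNETES_WORKER_ADD_TCP_KEEPALIVE"] = "TRUE"  # keep TCP keepalive enabled on the Kubernetes API client
    os.environ["PREFECT_KUBERNETES_WORKER_STORE_PREFECT_API_IN_SECRET"] = "1"  # store PREFECT_API_KEY in a Kubernetes Secret
    os.environ["PREFECT_KUBERNETES_CLUSTER_UID"] = "my-cluster-uid"  # skip reading the kube-system namespace UID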
    "},{"location":"integrations/prefect-kubernetes/worker/#prefect_kubernetes.worker.KubernetesWorker.kill_infrastructure","title":"kill_infrastructure async","text":"

    Stops a job for a cancelled flow run based on the provided infrastructure PID and run configuration.

    Source code in prefect_kubernetes/worker.py
    async def kill_infrastructure(\n    self,\n    infrastructure_pid: str,\n    configuration: KubernetesWorkerJobConfiguration,\n    grace_seconds: int = 30,\n):\n    \"\"\"\n    Stops a job for a cancelled flow run based on the provided infrastructure PID\n    and run configuration.\n    \"\"\"\n    await run_sync_in_worker_thread(\n        self._stop_job, infrastructure_pid, configuration, grace_seconds\n    )\n
    "},{"location":"integrations/prefect-kubernetes/worker/#prefect_kubernetes.worker.KubernetesWorker.run","title":"run async","text":"

    Executes a flow run within a Kubernetes Job and waits for the flow run to complete.

    Parameters:

    Name | Type | Description | Default
    flow_run | FlowRun | The flow run to execute | required
    configuration | KubernetesWorkerJobConfiguration | The configuration to use when executing the flow run. | required
    task_status | Optional[TaskStatus] | The task status object for the current flow run. If provided, the task will be marked as started. | None

    Returns:

    Name | Type | Description
    KubernetesWorkerResult | KubernetesWorkerResult | A result object containing information about the final state of the flow run

    Source code in prefect_kubernetes/worker.py
    async def run(\n    self,\n    flow_run: \"FlowRun\",\n    configuration: KubernetesWorkerJobConfiguration,\n    task_status: Optional[anyio.abc.TaskStatus] = None,\n) -> KubernetesWorkerResult:\n    \"\"\"\n    Executes a flow run within a Kubernetes Job and waits for the flow run\n    to complete.\n\n    Args:\n        flow_run: The flow run to execute\n        configuration: The configuration to use when executing the flow run.\n        task_status: The task status object for the current flow run. If provided,\n            the task will be marked as started.\n\n    Returns:\n        KubernetesWorkerResult: A result object containing information about the\n            final state of the flow run\n    \"\"\"\n    logger = self.get_flow_run_logger(flow_run)\n\n    with self._get_configured_kubernetes_client(configuration) as client:\n        logger.info(\"Creating Kubernetes job...\")\n        job = await run_sync_in_worker_thread(\n            self._create_job, configuration, client\n        )\n        pid = await run_sync_in_worker_thread(\n            self._get_infrastructure_pid, job, client\n        )\n        # Indicate that the job has started\n        if task_status is not None:\n            task_status.started(pid)\n\n        # Monitor the job until completion\n\n        events_replicator = KubernetesEventsReplicator(\n            client=client,\n            job_name=job.metadata.name,\n            namespace=configuration.namespace,\n            worker_resource=self._event_resource(),\n            related_resources=self._event_related_resources(\n                configuration=configuration\n            ),\n            timeout_seconds=configuration.pod_watch_timeout_seconds,\n        )\n\n        with events_replicator:\n            status_code = await run_sync_in_worker_thread(\n                self._watch_job, logger, job.metadata.name, configuration, client\n            )\n        return KubernetesWorkerResult(identifier=pid, status_code=status_code)\n
    "},{"location":"integrations/prefect-kubernetes/worker/#prefect_kubernetes.worker.KubernetesWorkerJobConfiguration","title":"KubernetesWorkerJobConfiguration","text":"

    Bases: BaseJobConfiguration

    Configuration class used by the Kubernetes worker.

    An instance of this class is passed to the Kubernetes worker's run method for each flow run. It contains all of the information necessary to execute the flow run as a Kubernetes job.

    Attributes:

    Name | Type | Description
    name | | The name to give to created Kubernetes job.
    command | | The command executed in created Kubernetes jobs to kick off flow run execution.
    env | | The environment variables to set in created Kubernetes jobs.
    labels | | The labels to set on created Kubernetes jobs.
    namespace | str | The Kubernetes namespace to create Kubernetes jobs in.
    job_manifest | Dict[str, Any] | The Kubernetes job manifest to use to create Kubernetes jobs.
    cluster_config | Optional[KubernetesClusterConfig] | The Kubernetes cluster configuration to use for authentication to a Kubernetes cluster.
    job_watch_timeout_seconds | Optional[int] | The number of seconds to wait for the job to complete before timing out. If None, the worker will wait indefinitely.
    pod_watch_timeout_seconds | int | The number of seconds to wait for the pod to complete before timing out.
    stream_output | bool | Whether or not to stream the job's output.

    Source code in prefect_kubernetes/worker.py
    class KubernetesWorkerJobConfiguration(BaseJobConfiguration):\n    \"\"\"\n    Configuration class used by the Kubernetes worker.\n\n    An instance of this class is passed to the Kubernetes worker's `run` method\n    for each flow run. It contains all of the information necessary to execute\n    the flow run as a Kubernetes job.\n\n    Attributes:\n        name: The name to give to created Kubernetes job.\n        command: The command executed in created Kubernetes jobs to kick off\n            flow run execution.\n        env: The environment variables to set in created Kubernetes jobs.\n        labels: The labels to set on created Kubernetes jobs.\n        namespace: The Kubernetes namespace to create Kubernetes jobs in.\n        job_manifest: The Kubernetes job manifest to use to create Kubernetes jobs.\n        cluster_config: The Kubernetes cluster configuration to use for authentication\n            to a Kubernetes cluster.\n        job_watch_timeout_seconds: The number of seconds to wait for the job to\n            complete before timing out. If `None`, the worker will wait indefinitely.\n        pod_watch_timeout_seconds: The number of seconds to wait for the pod to\n            complete before timing out.\n        stream_output: Whether or not to stream the job's output.\n    \"\"\"\n\n    namespace: str = Field(default=\"default\")\n    job_manifest: Dict[str, Any] = Field(template=_get_default_job_manifest_template())\n    cluster_config: Optional[KubernetesClusterConfig] = Field(default=None)\n    job_watch_timeout_seconds: Optional[int] = Field(default=None)\n    pod_watch_timeout_seconds: int = Field(default=60)\n    stream_output: bool = Field(default=True)\n\n    # internal-use only\n    _api_dns_name: Optional[str] = None  # Replaces 'localhost' in API URL\n\n    @validator(\"job_manifest\")\n    def _ensure_metadata_is_present(cls, value: Dict[str, Any]):\n        \"\"\"Ensures that the metadata is present in the job manifest.\"\"\"\n        if \"metadata\" not in value:\n            value[\"metadata\"] = {}\n        return value\n\n    @validator(\"job_manifest\")\n    def _ensure_labels_is_present(cls, value: Dict[str, Any]):\n        \"\"\"Ensures that the metadata is present in the job manifest.\"\"\"\n        if \"labels\" not in value[\"metadata\"]:\n            value[\"metadata\"][\"labels\"] = {}\n        return value\n\n    @validator(\"job_manifest\")\n    def _ensure_namespace_is_present(cls, value: Dict[str, Any], values):\n        \"\"\"Ensures that the namespace is present in the job manifest.\"\"\"\n        if \"namespace\" not in value[\"metadata\"]:\n            value[\"metadata\"][\"namespace\"] = values[\"namespace\"]\n        return value\n\n    @validator(\"job_manifest\")\n    def _ensure_job_includes_all_required_components(cls, value: Dict[str, Any]):\n        \"\"\"\n        Ensures that the job manifest includes all required components.\n        \"\"\"\n        patch = JsonPatch.from_diff(value, _get_base_job_manifest())\n        missing_paths = sorted([op[\"path\"] for op in patch if op[\"op\"] == \"add\"])\n        if missing_paths:\n            raise ValueError(\n                \"Job is missing required attributes at the following paths: \"\n                f\"{', '.join(missing_paths)}\"\n            )\n        return value\n\n    @validator(\"job_manifest\")\n    def _ensure_job_has_compatible_values(cls, value: Dict[str, Any]):\n        patch = JsonPatch.from_diff(value, _get_base_job_manifest())\n        incompatible = sorted(\n      
      [\n                f\"{op['path']} must have value {op['value']!r}\"\n                for op in patch\n                if op[\"op\"] == \"replace\"\n            ]\n        )\n        if incompatible:\n            raise ValueError(\n                \"Job has incompatible values for the following attributes: \"\n                f\"{', '.join(incompatible)}\"\n            )\n        return value\n\n    def prepare_for_flow_run(\n        self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"DeploymentResponse\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ):\n        \"\"\"\n        Prepares the job configuration for a flow run.\n\n        Ensures that necessary values are present in the job manifest and that the\n        job manifest is valid.\n\n        Args:\n            flow_run: The flow run to prepare the job configuration for\n            deployment: The deployment associated with the flow run used for\n                preparation.\n            flow: The flow associated with the flow run used for preparation.\n        \"\"\"\n        super().prepare_for_flow_run(flow_run, deployment, flow)\n        # Update configuration env and job manifest env\n        self._update_prefect_api_url_if_local_server()\n        self._populate_env_in_manifest()\n        # Update labels in job manifest\n        self._slugify_labels()\n        # Add defaults to job manifest if necessary\n        self._populate_image_if_not_present()\n        self._populate_command_if_not_present()\n        self._populate_generate_name_if_not_present()\n\n    def _populate_env_in_manifest(self):\n        \"\"\"\n        Populates environment variables in the job manifest.\n\n        When `env` is templated as a variable in the job manifest it comes in as a\n        dictionary. We need to convert it to a list of dictionaries to conform to the\n        Kubernetes job manifest schema.\n\n        This function also handles the case where the user has removed the `{{ env }}`\n        placeholder and hard coded a value for `env`. In this case, we need to prepend\n        our environment variables to the list to ensure Prefect setting propagation.\n        An example reason the a user would remove the `{{ env }}` placeholder to\n        hardcode Kubernetes secrets in the base job template.\n        \"\"\"\n        transformed_env = [{\"name\": k, \"value\": v} for k, v in self.env.items()]\n\n        template_env = self.job_manifest[\"spec\"][\"template\"][\"spec\"][\"containers\"][\n            0\n        ].get(\"env\")\n\n        # If user has removed `{{ env }}` placeholder and hard coded a value for `env`,\n        # we need to prepend our environment variables to the list to ensure Prefect\n        # setting propagation.\n        if isinstance(template_env, list):\n            self.job_manifest[\"spec\"][\"template\"][\"spec\"][\"containers\"][0][\"env\"] = [\n                *transformed_env,\n                *template_env,\n            ]\n        # Current templating adds `env` as a dict when the kubernetes manifest requires\n        # a list of dicts. 
Might be able to improve this in the future with a better\n        # default `env` value and better typing.\n        else:\n            self.job_manifest[\"spec\"][\"template\"][\"spec\"][\"containers\"][0][\n                \"env\"\n            ] = transformed_env\n\n    def _update_prefect_api_url_if_local_server(self):\n        \"\"\"If the API URL has been set by the base environment rather than the by the\n        user, update the value to ensure connectivity when using a bridge network by\n        updating local connections to use the internal host\n        \"\"\"\n        if self.env.get(\"PREFECT_API_URL\") and self._api_dns_name:\n            self.env[\"PREFECT_API_URL\"] = (\n                self.env[\"PREFECT_API_URL\"]\n                .replace(\"localhost\", self._api_dns_name)\n                .replace(\"127.0.0.1\", self._api_dns_name)\n            )\n\n    def _slugify_labels(self):\n        \"\"\"Slugifies the labels in the job manifest.\"\"\"\n        all_labels = {**self.job_manifest[\"metadata\"].get(\"labels\", {}), **self.labels}\n        self.job_manifest[\"metadata\"][\"labels\"] = {\n            _slugify_label_key(k): _slugify_label_value(v)\n            for k, v in all_labels.items()\n        }\n\n    def _populate_image_if_not_present(self):\n        \"\"\"Ensures that the image is present in the job manifest. Populates the image\n        with the default Prefect image if it is not present.\"\"\"\n        try:\n            if (\n                \"image\"\n                not in self.job_manifest[\"spec\"][\"template\"][\"spec\"][\"containers\"][0]\n            ):\n                self.job_manifest[\"spec\"][\"template\"][\"spec\"][\"containers\"][0][\n                    \"image\"\n                ] = get_prefect_image_name()\n        except KeyError:\n            raise ValueError(\n                \"Unable to verify image due to invalid job manifest template.\"\n            )\n\n    def _populate_command_if_not_present(self):\n        \"\"\"\n        Ensures that the command is present in the job manifest. 
Populates the command\n        with the `prefect -m prefect.engine` if a command is not present.\n        \"\"\"\n        try:\n            command = self.job_manifest[\"spec\"][\"template\"][\"spec\"][\"containers\"][\n                0\n            ].get(\"args\")\n            if command is None:\n                self.job_manifest[\"spec\"][\"template\"][\"spec\"][\"containers\"][0][\n                    \"args\"\n                ] = shlex.split(self._base_flow_run_command())\n            elif isinstance(command, str):\n                self.job_manifest[\"spec\"][\"template\"][\"spec\"][\"containers\"][0][\n                    \"args\"\n                ] = shlex.split(command)\n            elif not isinstance(command, list):\n                raise ValueError(\n                    \"Invalid job manifest template: 'command' must be a string or list.\"\n                )\n        except KeyError:\n            raise ValueError(\n                \"Unable to verify command due to invalid job manifest template.\"\n            )\n\n    def _populate_generate_name_if_not_present(self):\n        \"\"\"Ensures that the generateName is present in the job manifest.\"\"\"\n        manifest_generate_name = self.job_manifest[\"metadata\"].get(\"generateName\", \"\")\n        has_placeholder = len(find_placeholders(manifest_generate_name)) > 0\n        # if name wasn't present during template rendering, generateName will be\n        # just a hyphen\n        manifest_generate_name_templated_with_empty_string = (\n            manifest_generate_name == \"-\"\n        )\n        if (\n            not manifest_generate_name\n            or has_placeholder\n            or manifest_generate_name_templated_with_empty_string\n        ):\n            generate_name = None\n            if self.name:\n                generate_name = _slugify_name(self.name)\n            # _slugify_name will return None if the slugified name in an exception\n            if not generate_name:\n                generate_name = \"prefect-job\"\n            self.job_manifest[\"metadata\"][\"generateName\"] = f\"{generate_name}-\"\n
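    The job_manifest validators above require the manifest to contain every component of the base job manifest. A minimal sketch of a manifest dictionary in that shape (values are illustrative and the exact base manifest may differ between versions):

    # Hypothetical minimal job manifest in the shape the validators expect.
    job_manifest = {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"labels": {}},
        "spec": {
            "template": {
                "spec": {
                    "completions": 1,
                    "parallelism": 1,
                    "restartPolicy": "Never",
                    "containers": [{"name": "prefect-job", "env": []}],
                }
            }
        },
    }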
    "},{"location":"integrations/prefect-kubernetes/worker/#prefect_kubernetes.worker.KubernetesWorkerJobConfiguration.prepare_for_flow_run","title":"prepare_for_flow_run","text":"

    Prepares the job configuration for a flow run.

    Ensures that necessary values are present in the job manifest and that the job manifest is valid.

    Parameters:

    Name | Type | Description | Default
    flow_run | FlowRun | The flow run to prepare the job configuration for | required
    deployment | Optional[DeploymentResponse] | The deployment associated with the flow run used for preparation. | None
    flow | Optional[Flow] | The flow associated with the flow run used for preparation. | None

    Source code in prefect_kubernetes/worker.py
    def prepare_for_flow_run(\n    self,\n    flow_run: \"FlowRun\",\n    deployment: Optional[\"DeploymentResponse\"] = None,\n    flow: Optional[\"Flow\"] = None,\n):\n    \"\"\"\n    Prepares the job configuration for a flow run.\n\n    Ensures that necessary values are present in the job manifest and that the\n    job manifest is valid.\n\n    Args:\n        flow_run: The flow run to prepare the job configuration for\n        deployment: The deployment associated with the flow run used for\n            preparation.\n        flow: The flow associated with the flow run used for preparation.\n    \"\"\"\n    super().prepare_for_flow_run(flow_run, deployment, flow)\n    # Update configuration env and job manifest env\n    self._update_prefect_api_url_if_local_server()\n    self._populate_env_in_manifest()\n    # Update labels in job manifest\n    self._slugify_labels()\n    # Add defaults to job manifest if necessary\n    self._populate_image_if_not_present()\n    self._populate_command_if_not_present()\n    self._populate_generate_name_if_not_present()\n
    "},{"location":"integrations/prefect-kubernetes/worker/#prefect_kubernetes.worker.KubernetesWorkerResult","title":"KubernetesWorkerResult","text":"

    Bases: BaseWorkerResult

    Contains information about the final state of a completed process

    Source code in prefect_kubernetes/worker.py
    class KubernetesWorkerResult(BaseWorkerResult):\n    \"\"\"Contains information about the final state of a completed process\"\"\"\n
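    A brief, hypothetical sketch of inspecting this result when calling KubernetesWorker.run yourself; the identifier and status_code fields come from the run() source shown earlier, while worker, flow_run, and configuration are assumed to already exist:

    # Sketch only: runs inside an async context with an already-started worker.
    result = await worker.run(flow_run=flow_run, configuration=configuration)
    if result.status_code != 0:
        print(f"Job {result.identifier} exited with status code {result.status_code}")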
    "},{"location":"integrations/prefect-kubernetes/worker/#prefect_kubernetes.worker.KubernetesWorkerVariables","title":"KubernetesWorkerVariables","text":"

    Bases: BaseVariables

    Default variables for the Kubernetes worker.

    The schema for this class is used to populate the variables section of the default base job template.

    Source code in prefect_kubernetes/worker.py
    class KubernetesWorkerVariables(BaseVariables):\n    \"\"\"\n    Default variables for the Kubernetes worker.\n\n    The schema for this class is used to populate the `variables` section of the default\n    base job template.\n    \"\"\"\n\n    namespace: str = Field(\n        default=\"default\", description=\"The Kubernetes namespace to create jobs within.\"\n    )\n    image: Optional[str] = Field(\n        default=None,\n        description=\"The image reference of a container image to use for created jobs. \"\n        \"If not set, the latest Prefect image will be used.\",\n        example=\"docker.io/prefecthq/prefect:2-latest\",\n    )\n    service_account_name: Optional[str] = Field(\n        default=None,\n        description=\"The Kubernetes service account to use for job creation.\",\n    )\n    image_pull_policy: Literal[\"IfNotPresent\", \"Always\", \"Never\"] = Field(\n        default=KubernetesImagePullPolicy.IF_NOT_PRESENT,\n        description=\"The Kubernetes image pull policy to use for job containers.\",\n    )\n    finished_job_ttl: Optional[int] = Field(\n        default=None,\n        title=\"Finished Job TTL\",\n        description=\"The number of seconds to retain jobs after completion. If set, \"\n        \"finished jobs will be cleaned up by Kubernetes after the given delay. If not \"\n        \"set, jobs will be retained indefinitely.\",\n    )\n    job_watch_timeout_seconds: Optional[int] = Field(\n        default=None,\n        description=(\n            \"Number of seconds to wait for each event emitted by a job before \"\n            \"timing out. If not set, the worker will wait for each event indefinitely.\"\n        ),\n    )\n    pod_watch_timeout_seconds: int = Field(\n        default=60,\n        description=\"Number of seconds to watch for pod creation before timing out.\",\n    )\n    stream_output: bool = Field(\n        default=True,\n        description=(\n            \"If set, output will be streamed from the job to local standard output.\"\n        ),\n    )\n    cluster_config: Optional[KubernetesClusterConfig] = Field(\n        default=None,\n        description=\"The Kubernetes cluster config to use for job creation.\",\n    )\n
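    Because these fields populate the variables section of the base job template, the same keys are what you override when customizing a Kubernetes work pool. A hedged sketch of such values (names mirror the fields above; the values and the exact override mechanism are illustrative):

    # Illustrative values only; keys mirror KubernetesWorkerVariables fields.
    job_variables = {
        "namespace": "data-pipelines",
        "image": "docker.io/prefecthq/prefect:2-latest",
        "image_pull_policy": "IfNotPresent",
        "finished_job_ttl": 300,
        "stream_output": True,
    }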
    "},{"location":"integrations/prefect-ray/","title":"prefect-ray","text":""},{"location":"integrations/prefect-ray/#welcome","title":"Welcome!","text":"

    Visit the full docs here to see additional examples and the API reference.

    prefect-ray contains Prefect integrations with the Ray execution framework, a flexible distributed computing framework for Python.

    Provides a RayTaskRunner that enables Prefect flows to execute tasks in parallel using Ray.

    "},{"location":"integrations/prefect-ray/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-ray/#python-setup","title":"Python setup","text":"

    Requires an installation of Python 3.8 or newer.

    We recommend using a Python virtual environment manager such as pipenv, conda, or virtualenv.

    These tasks are designed to work with Prefect 2.0+. For more information about how to use Prefect, please refer to the Prefect documentation.

    "},{"location":"integrations/prefect-ray/#installation","title":"Installation","text":"

    Install prefect-ray with pip:

    pip install prefect-ray\n

    Users running Apple Silicon (such as M1 Macs) should check out the Ray docs here for more details.

    "},{"location":"integrations/prefect-ray/#running-tasks-on-ray","title":"Running tasks on Ray","text":"

    The RayTaskRunner is a Prefect task runner that submits tasks to Ray for parallel execution.

    By default, a temporary Ray instance is created for the duration of the flow run.

    For example, this flow counts to 10 in parallel.

    import time\n\nfrom prefect import flow, task\nfrom prefect_ray import RayTaskRunner\n\n@task\ndef shout(number):\n    time.sleep(0.5)\n    print(f\"#{number}\")\n\n@flow(task_runner=RayTaskRunner)\ndef count_to(highest_number):\n    for number in range(highest_number):\n        shout.submit(number)\n\nif __name__ == \"__main__\":\n    count_to(10)\n\n# outputs\n#3\n#7\n#2\n#6\n#4\n#0\n#1\n#5\n#8\n#9\n

    If you already have a Ray instance running, you can provide the connection URL via an address argument.

    To configure your flow to use the RayTaskRunner:

    1. Make sure the prefect-ray collection is installed as described earlier: pip install prefect-ray.
    2. In your flow code, import RayTaskRunner from prefect_ray.task_runners.
    3. Assign it as the task runner when the flow is defined using the task_runner=RayTaskRunner argument.

    For example, this flow uses the RayTaskRunner with a local, temporary Ray instance created by Prefect at flow run time.

    from prefect import flow\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@flow(task_runner=RayTaskRunner())\ndef my_flow():\n    ... \n

    This flow uses the RayTaskRunner configured to access an existing Ray instance at ray://192.0.2.255:8786.

    from prefect import flow\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@flow(task_runner=RayTaskRunner(address=\"ray://192.0.2.255:8786\"))\ndef my_flow():\n    ... \n

    RayTaskRunner accepts the following optional parameters:

    Parameter | Description
    address | Address of a currently running Ray instance, starting with the ray:// URI.
    init_kwargs | Additional kwargs to use when calling ray.init.

    Note that Ray Client uses the ray:// URI to indicate the address of a Ray instance. If you don't provide the address of a Ray instance, Prefect creates a temporary instance automatically.
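    As a small, hedged sketch, init_kwargs are forwarded to ray.init, so any ray.init keyword works there; the values below are illustrative:

    from prefect import flow
    from prefect_ray.task_runners import RayTaskRunner

    # A temporary local Ray instance limited to 4 CPUs for the duration of the flow run.
    @flow(task_runner=RayTaskRunner(init_kwargs={"num_cpus": 4}))
    def my_flow():
        ...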

    Ray environment limitations

    Ray does not support non-x86/64 architectures such as ARM/M1 processors when installed from pip alone, so it will be skipped during installation of Prefect. It is possible to manually install the blocking component with conda. See the Ray documentation for instructions.

    See the Ray installation documentation for further compatibility information.

    "},{"location":"integrations/prefect-ray/#running-tasks-on-a-ray-remote-cluster","title":"Running tasks on a Ray remote cluster","text":"

    When using the RayTaskRunner with a remote Ray cluster, you may run into issues that are not seen when using a local Ray instance. To resolve these issues, we recommend taking the following steps when working with a remote Ray cluster:

    1. By default, Prefect will not persist any data to the filesystem of the remote ray worker. However, if you want to take advantage of Prefect's caching ability, you will need to configure a remote result storage to persist results across task runs.

    We recommend using the Prefect UI to configure a storage block to use for remote results storage.

    Here's an example of a flow that uses caching and remote result storage:

    from typing import List\n\nfrom prefect import flow, get_run_logger, task\nfrom prefect.filesystems import S3\nfrom prefect.tasks import task_input_hash\nfrom prefect_ray.task_runners import RayTaskRunner\n\n\n# The result of this task will be cached in the configured result storage\n@task(cache_key_fn=task_input_hash)\ndef say_hello(name: str) -> None:\n    logger = get_run_logger()\n    # This log statement will print only on the first run. Subsequent runs will be cached.\n    logger.info(f\"hello {name}!\")\n    return name\n\n\n@flow(\n    task_runner=RayTaskRunner(\n        address=\"ray://<instance_public_ip_address>:10001\",\n    ),\n    # Using an S3 block that has already been created via the Prefect UI\n    result_storage=\"s3/my-result-storage\",\n)\ndef greetings(names: List[str]) -> None:\n    for name in names:\n        say_hello.submit(name)\n\n\nif __name__ == \"__main__\":\n    greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n

    2. If you get an error stating that the module 'prefect' cannot be found, ensure prefect is installed on the remote cluster, with:

      pip install prefect\n

    3. If you get an error with a message similar to \"File system created with scheme 's3' could not be created\", ensure the required Python modules are installed on both local and remote machines. The required prerequisite modules can be found in the Prefect documentation. For example, if using S3 for the remote storage:

      pip install s3fs\n

    4. If you are seeing timeout or other connection errors, double check the address provided to the RayTaskRunner. The address should look similar to: address='ray://<head_node_ip_address>:10001':

      RayTaskRunner(address=\"ray://1.23.199.255:10001\")\n

    "},{"location":"integrations/prefect-ray/#specifying-remote-options","title":"Specifying remote options","text":"

    The remote_options context can be used to control the task\u2019s remote options.

    For example, we can set the number of CPUs and GPUs to use for the process task:

    from prefect import flow, task\nfrom prefect_ray.task_runners import RayTaskRunner\nfrom prefect_ray.context import remote_options\n\n@task\ndef process(x):\n    return x + 1\n\n\n@flow(task_runner=RayTaskRunner())\ndef my_flow():\n    # equivalent to setting @ray.remote(num_cpus=4, num_gpus=2)\n    with remote_options(num_cpus=4, num_gpus=2):\n        process.submit(42)\n
    "},{"location":"integrations/prefect-ray/task-runners/","title":"Task Runners","text":""},{"location":"integrations/prefect-ray/task-runners/#prefect_ray.task_runners","title":"prefect_ray.task_runners","text":"

    Interface and implementations of the Ray Task Runner. Task Runners in Prefect are responsible for managing the execution of Prefect task runs. Generally speaking, users are not expected to interact with task runners outside of configuring and initializing them for a flow.

    Example
    import time\n\nfrom prefect import flow, task\n\n@task\ndef shout(number):\n    time.sleep(0.5)\n    print(f\"#{number}\")\n\n@flow\ndef count_to(highest_number):\n    for number in range(highest_number):\n        shout.submit(number)\n\nif __name__ == \"__main__\":\n    count_to(10)\n\n# outputs\n#0\n#1\n#2\n#3\n#4\n#5\n#6\n#7\n#8\n#9\n

    Switching to a RayTaskRunner:

    import time\n\nfrom prefect import flow, task\nfrom prefect_ray import RayTaskRunner\n\n@task\ndef shout(number):\n    time.sleep(0.5)\n    print(f\"#{number}\")\n\n@flow(task_runner=RayTaskRunner)\ndef count_to(highest_number):\n    for number in range(highest_number):\n        shout.submit(number)\n\nif __name__ == \"__main__\":\n    count_to(10)\n\n# outputs\n#3\n#7\n#2\n#6\n#4\n#0\n#1\n#5\n#8\n#9\n

    "},{"location":"integrations/prefect-ray/task-runners/#prefect_ray.task_runners.RayTaskRunner","title":"RayTaskRunner","text":"

    Bases: BaseTaskRunner

    A parallel task_runner that submits tasks to ray. By default, a temporary Ray cluster is created for the duration of the flow run. Alternatively, if you already have a ray instance running, you can provide the connection URL via the address kwarg.

    Args:

    address (string, optional): Address of a currently running ray instance; if one is not provided, a temporary instance will be created.
    init_kwargs (dict, optional): Additional kwargs to use when calling ray.init.

    Examples:

    Using a temporary local ray cluster:

    from prefect import flow\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@flow(task_runner=RayTaskRunner())\ndef my_flow():\n    ...\n
    Connecting to an existing ray instance:
    RayTaskRunner(address=\"ray://192.0.2.255:8786\")\n

    Source code in prefect_ray/task_runners.py
    class RayTaskRunner(BaseTaskRunner):\n    \"\"\"\n    A parallel task_runner that submits tasks to `ray`.\n    By default, a temporary Ray cluster is created for the duration of the flow run.\n    Alternatively, if you already have a `ray` instance running, you can provide\n    the connection URL via the `address` kwarg.\n    Args:\n        address (string, optional): Address of a currently running `ray` instance; if\n            one is not provided, a temporary instance will be created.\n        init_kwargs (dict, optional): Additional kwargs to use when calling `ray.init`.\n    Examples:\n        Using a temporary local ray cluster:\n        ```python\n        from prefect import flow\n        from prefect_ray.task_runners import RayTaskRunner\n\n        @flow(task_runner=RayTaskRunner())\n        def my_flow():\n            ...\n        ```\n        Connecting to an existing ray instance:\n        ```python\n        RayTaskRunner(address=\"ray://192.0.2.255:8786\")\n        ```\n    \"\"\"\n\n    def __init__(\n        self,\n        address: str = None,\n        init_kwargs: dict = None,\n    ):\n        # Store settings\n        self.address = address\n        self.init_kwargs = init_kwargs.copy() if init_kwargs else {}\n\n        self.init_kwargs.setdefault(\"namespace\", \"prefect\")\n\n        # Runtime attributes\n        self._ray_refs: Dict[str, \"ray.ObjectRef\"] = {}\n\n        super().__init__()\n\n    def duplicate(self):\n        \"\"\"\n        Return a new instance of with the same settings as this one.\n        \"\"\"\n        return type(self)(address=self.address, init_kwargs=self.init_kwargs)\n\n    def __eq__(self, other: object) -> bool:\n        \"\"\"\n        Check if an instance has the same settings as this task runner.\n        \"\"\"\n        if type(self) == type(other):\n            return (\n                self.address == other.address and self.init_kwargs == other.init_kwargs\n            )\n        else:\n            return NotImplemented\n\n    @property\n    def concurrency_type(self) -> TaskConcurrencyType:\n        return TaskConcurrencyType.PARALLEL\n\n    async def submit(\n        self,\n        key: UUID,\n        call: Callable[..., Awaitable[State[R]]],\n    ) -> None:\n        if not self._started:\n            raise RuntimeError(\n                \"The task runner must be started before submitting work.\"\n            )\n\n        call_kwargs, upstream_ray_obj_refs = self._exchange_prefect_for_ray_futures(\n            call.keywords\n        )\n\n        remote_options = RemoteOptionsContext.get().current_remote_options\n        # Ray does not support the submission of async functions and we must create a\n        # sync entrypoint\n        if remote_options:\n            ray_decorator = ray.remote(**remote_options)\n        else:\n            ray_decorator = ray.remote\n\n        self._ray_refs[key] = (\n            ray_decorator(self._run_prefect_task)\n            .options(name=call.keywords[\"task_run\"].name)\n            .remote(sync_compatible(call.func), *upstream_ray_obj_refs, **call_kwargs)\n        )\n\n    def _exchange_prefect_for_ray_futures(self, kwargs_prefect_futures):\n        \"\"\"Exchanges Prefect futures for Ray futures.\"\"\"\n\n        upstream_ray_obj_refs = []\n\n        def exchange_prefect_for_ray_future(expr):\n            \"\"\"Exchanges Prefect future for Ray future.\"\"\"\n            if isinstance(expr, PrefectFuture):\n                ray_future = self._ray_refs.get(expr.key)\n                if 
ray_future is not None:\n                    upstream_ray_obj_refs.append(ray_future)\n                    return ray_future\n            return expr\n\n        kwargs_ray_futures = visit_collection(\n            kwargs_prefect_futures,\n            visit_fn=exchange_prefect_for_ray_future,\n            return_data=True,\n        )\n\n        return kwargs_ray_futures, upstream_ray_obj_refs\n\n    @staticmethod\n    def _run_prefect_task(func, *upstream_ray_obj_refs, **kwargs):\n        \"\"\"Resolves Ray futures before calling the actual Prefect task function.\n\n        Passing upstream_ray_obj_refs directly as args enables Ray to wait for\n        upstream tasks before running this remote function.\n        This variable is otherwise unused as the ray object refs are also\n        contained in kwargs.\n        \"\"\"\n\n        def resolve_ray_future(expr):\n            \"\"\"Resolves Ray future.\"\"\"\n            if isinstance(expr, ray.ObjectRef):\n                return ray.get(expr)\n            return expr\n\n        kwargs = visit_collection(kwargs, visit_fn=resolve_ray_future, return_data=True)\n\n        return func(**kwargs)\n\n    async def wait(self, key: UUID, timeout: float = None) -> Optional[State]:\n        ref = self._get_ray_ref(key)\n\n        result = None\n\n        with anyio.move_on_after(timeout):\n            # We await the reference directly instead of using `ray.get` so we can\n            # avoid blocking the event loop\n            try:\n                result = await ref\n            except RayTaskError as exc:\n                # unwrap the original exception that caused task failure, except for\n                # KeyboardInterrupt, which unwraps as TaskCancelledError\n                result = await exception_to_crashed_state(exc.cause)\n            except BaseException as exc:\n                result = await exception_to_crashed_state(exc)\n\n        return result\n\n    async def _start(self, exit_stack: AsyncExitStack):\n        \"\"\"\n        Start the task runner and prep for context exit.\n\n        - Creates a cluster if an external address is not set.\n        - Creates a client to connect to the cluster.\n        - Pushes a call to wait for all running futures to complete on exit.\n        \"\"\"\n        if self.address and self.address != \"auto\":\n            self.logger.info(\n                f\"Connecting to an existing Ray instance at {self.address}\"\n            )\n            init_args = (self.address,)\n        elif ray.is_initialized():\n            self.logger.info(\n                \"Local Ray instance is already initialized. 
\"\n                \"Using existing local instance.\"\n            )\n            return\n        else:\n            self.logger.info(\"Creating a local Ray instance\")\n            init_args = ()\n\n        context = ray.init(*init_args, **self.init_kwargs)\n        dashboard_url = getattr(context, \"dashboard_url\", None)\n        exit_stack.push(context)\n\n        # Display some information about the cluster\n        nodes = ray.nodes()\n        living_nodes = [node for node in nodes if node.get(\"alive\")]\n        self.logger.info(f\"Using Ray cluster with {len(living_nodes)} nodes.\")\n\n        if dashboard_url:\n            self.logger.info(\n                f\"The Ray UI is available at {dashboard_url}\",\n            )\n\n    async def _shutdown_ray(self):\n        \"\"\"\n        Shuts down the cluster.\n        \"\"\"\n        self.logger.debug(\"Shutting down Ray cluster...\")\n        ray.shutdown()\n\n    def _get_ray_ref(self, key: UUID) -> \"ray.ObjectRef\":\n        \"\"\"\n        Retrieve the ray object reference corresponding to a prefect future.\n        \"\"\"\n        return self._ray_refs[key]\n
    "},{"location":"integrations/prefect-ray/task-runners/#prefect_ray.task_runners.RayTaskRunner.duplicate","title":"duplicate","text":"

    Return a new instance with the same settings as this one.

    Source code in prefect_ray/task_runners.py
    def duplicate(self):\n    \"\"\"\n    Return a new instance of with the same settings as this one.\n    \"\"\"\n    return type(self)(address=self.address, init_kwargs=self.init_kwargs)\n
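
    A minimal usage sketch (the address below is illustrative) shows how duplicate produces an independent runner configured the same way:

    from prefect_ray.task_runners import RayTaskRunner\n\nrunner = RayTaskRunner(address=\"ray://192.0.2.1:10001\")\n\n# the copy keeps the same address and init_kwargs as the original runner\nrunner_copy = runner.duplicate()\n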
    "},{"location":"integrations/prefect-shell/","title":"Integrating shell commands into your dataflow with prefect-shell","text":"

    Visit the full docs here to see additional examples and the API reference.

    The prefect-shell collection makes it easy to execute shell commands in your Prefect flows. Check out the examples below to get started!

    "},{"location":"integrations/prefect-shell/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-shell/#integrate-with-prefect-flows","title":"Integrate with Prefect flows","text":"

    With prefect-shell, you can bring your trusty shell commands (and/or scripts) straight into the Prefect flow party, complete with awesome Prefect logging.

    No more separate logs, just seamless integration. Let's get the shell-abration started!

    from prefect import flow\nfrom datetime import datetime\nfrom prefect_shell import ShellOperation\n\n@flow\ndef download_data():\n    today = datetime.today().strftime(\"%Y%m%d\")\n\n    # for short running operations, you can use the `run` method\n    # which automatically manages the context\n    ShellOperation(\n        commands=[\n            \"mkdir -p data\",\n            \"mkdir -p data/${today}\"\n        ],\n        env={\"today\": today}\n    ).run()\n\n    # for long running operations, you can use a context manager\n    with ShellOperation(\n        commands=[\n            \"curl -O https://masie_web.apps.nsidc.org/pub/DATASETS/NOAA/G02135/north/daily/data/N_seaice_extent_daily_v3.0.csv\",\n        ],\n        working_dir=f\"data/{today}\",\n    ) as download_csv_operation:\n\n        # trigger runs the process in the background\n        download_csv_process = download_csv_operation.trigger()\n\n        # then do other things here in the meantime, like download another file\n        ...\n\n        # when you're ready, wait for the process to finish\n        download_csv_process.wait_for_completion()\n\n        # if you'd like to get the output lines, you can use the `fetch_result` method\n        output_lines = download_csv_process.fetch_result()\n\ndownload_data()\n

    Outputs:

    14:48:16.550 | INFO    | prefect.engine - Created flow run 'tentacled-chachalaca' for flow 'download-data'\n14:48:17.977 | INFO    | Flow run 'tentacled-chachalaca' - PID 19360 triggered with 2 commands running inside the '.' directory.\n14:48:17.987 | INFO    | Flow run 'tentacled-chachalaca' - PID 19360 completed with return code 0.\n14:48:17.994 | INFO    | Flow run 'tentacled-chachalaca' - PID 19363 triggered with 1 commands running inside the PosixPath('data/20230201') directory.\n14:48:18.009 | INFO    | Flow run 'tentacled-chachalaca' - PID 19363 stream output:\n  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                 Dl\n14:48:18.010 | INFO    | Flow run 'tentacled-chachalaca' - PID 19363 stream output:\noad  Upload   Total   Spent    Left  Speed\n  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0\n14:48:18.840 | INFO    | Flow run 'tentacled-chachalaca' - PID 19363 stream output:\n 11 1630k   11  192k    0     0   229k      0  0:00:07 --:--:--  0:00:07  231k\n14:48:19.839 | INFO    | Flow run 'tentacled-chachalaca' - PID 19363 stream output:\n 83 1630k   83 1368k    0     0   745k      0  0:00:02  0:00:01  0:00:01  747k\n14:48:19.993 | INFO    | Flow run 'tentacled-chachalaca' - PID 19363 stream output:\n100 1630k  100 1630k    0     0   819k      0  0\n14:48:19.994 | INFO    | Flow run 'tentacled-chachalaca' - PID 19363 stream output:\n:00:01  0:00:01 --:--:--  821k\n14:48:19.996 | INFO    | Flow run 'tentacled-chachalaca' - PID 19363 completed with return code 0.\n14:48:19.998 | INFO    | Flow run 'tentacled-chachalaca' - Successfully closed all open processes.\n14:48:20.203 | INFO    | Flow run 'tentacled-chachalaca' - Finished in state Completed()\n

    Utilize Previously Saved Blocks

    You can save commands within a ShellOperation block, then reuse them across multiple flows, or even plain Python scripts.

    Save the block with desired commands:

    from prefect_shell import ShellOperation\n\nping_op = ShellOperation(commands=[\"ping -t 1 prefect.io\"])\nping_op.save(\"block-name\")\n

    Load the saved block:

    from prefect_shell import ShellOperation\n\nping_op = ShellOperation.load(\"block-name\")\n
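
    As a minimal sketch (reusing the block-name block saved above), the loaded block can then be run inside a flow like any other ShellOperation:

    from prefect import flow\nfrom prefect_shell import ShellOperation\n\n@flow\ndef ping_flow():\n    # load the saved block and run its commands with Prefect logging\n    return ShellOperation.load(\"block-name\").run()\n\nping_flow()\n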

    To view and edit the blocks in the Prefect UI:

    prefect block register -m prefect_shell\n
    "},{"location":"integrations/prefect-shell/#resources","title":"Resources","text":"

    For more tips on how to use tasks and flows in a Collection, check out Using Collections!

    "},{"location":"integrations/prefect-shell/#installation","title":"Installation","text":"

    Install prefect-shell with pip:

    pip install -U prefect-shell\n

    Requires an installation of Python 3.8+.

    We recommend using a Python virtual environment manager such as pipenv, conda or virtualenv.

    These tasks are designed to work with Prefect 2. For more information about how to use Prefect, please refer to the Prefect documentation.

    "},{"location":"integrations/prefect-shell/commands/","title":"Commands","text":""},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands","title":"prefect_shell.commands","text":"

    Tasks for interacting with shell commands

    "},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands.ShellOperation","title":"ShellOperation","text":"

    Bases: JobBlock

    A block representing a shell operation, containing multiple commands.

    For long-lasting operations, use the trigger method and utilize the block as a context manager for automatic closure of processes when the context is exited. Otherwise, manually call the close method to close processes.

    For short-lasting operations, use the run method. Context is automatically managed with this method.
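
    If you prefer not to use a context manager for a long-lasting operation, a minimal sketch (the sleep command is just a stand-in) is to trigger the process and close the block manually:

    from prefect_shell import ShellOperation\n\nshell_operation = ShellOperation(commands=[\"sleep 10\", \"echo 'done'\"])\nshell_process = shell_operation.trigger()\n\n# do other work while the commands run, then wait and clean up\nshell_process.wait_for_completion()\noutput_lines = shell_process.fetch_result()\nshell_operation.close()\n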

    Attributes:

    Name Type Description commands List[str]

    A list of commands to execute sequentially.

    stream_output bool

    Whether to stream output.

    env Dict[str, str]

    A dictionary of environment variables to set for the shell operation.

    working_dir DirectoryPath

    The working directory context the commands will be executed within.

    shell str

    The shell to use to execute the commands.

    extension Optional[str]

    The extension to use for the temporary file; if unset, defaults to .ps1 on Windows and .sh on other platforms.

    Examples:

    Load a configured block:

    from prefect_shell import ShellOperation\n\nshell_operation = ShellOperation.load(\"BLOCK_NAME\")\n

    Source code in prefect_shell/commands.py
    class ShellOperation(JobBlock):\n    \"\"\"\n    A block representing a shell operation, containing multiple commands.\n\n    For long-lasting operations, use the trigger method and utilize the block as a\n    context manager for automatic closure of processes when context is exited.\n    If not, manually call the close method to close processes.\n\n    For short-lasting operations, use the run method. Context is automatically managed\n    with this method.\n\n    Attributes:\n        commands: A list of commands to execute sequentially.\n        stream_output: Whether to stream output.\n        env: A dictionary of environment variables to set for the shell operation.\n        working_dir: The working directory context the commands\n            will be executed within.\n        shell: The shell to use to execute the commands.\n        extension: The extension to use for the temporary file.\n            if unset defaults to `.ps1` on Windows and `.sh` on other platforms.\n\n    Examples:\n        Load a configured block:\n        ```python\n        from prefect_shell import ShellOperation\n\n        shell_operation = ShellOperation.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Shell Operation\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/0b47a017e1b40381de770c17647c49cdf6388d1c-250x250.png\"  # noqa: E501\n    _documentation_url = \"https://prefecthq.github.io/prefect-shell/commands/#prefect_shell.commands.ShellOperation\"  # noqa: E501\n\n    commands: List[str] = Field(\n        default=..., description=\"A list of commands to execute sequentially.\"\n    )\n    stream_output: bool = Field(default=True, description=\"Whether to stream output.\")\n    env: Dict[str, str] = Field(\n        default_factory=dict,\n        title=\"Environment Variables\",\n        description=\"Environment variables to use for the subprocess.\",\n    )\n    working_dir: DirectoryPath = Field(\n        default=None,\n        title=\"Working Directory\",\n        description=(\n            \"The absolute path to the working directory \"\n            \"the command will be executed within.\"\n        ),\n    )\n    shell: str = Field(\n        default=None,\n        description=(\n            \"The shell to run the command with; if unset, \"\n            \"defaults to `powershell` on Windows and `bash` on other platforms.\"\n        ),\n    )\n    extension: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The extension to use for the temporary file; if unset, \"\n            \"defaults to `.ps1` on Windows and `.sh` on other platforms.\"\n        ),\n    )\n\n    _exit_stack: AsyncExitStack = PrivateAttr(\n        default_factory=AsyncExitStack,\n    )\n\n    @contextmanager\n    def _prep_trigger_command(self) -> Generator[str, None, None]:\n        \"\"\"\n        Write the commands to a temporary file, handling all the details of\n        creating the file and cleaning it up afterwards. 
Then, return the command\n        to run the temporary file.\n        \"\"\"\n        try:\n            extension = self.extension or (\".ps1\" if sys.platform == \"win32\" else \".sh\")\n            temp_file = tempfile.NamedTemporaryFile(\n                prefix=\"prefect-\",\n                suffix=extension,\n                delete=False,\n            )\n\n            joined_commands = os.linesep.join(self.commands)\n            self.logger.debug(\n                f\"Writing the following commands to \"\n                f\"{temp_file.name!r}:{os.linesep}{joined_commands}\"\n            )\n            temp_file.write(joined_commands.encode())\n\n            if self.shell is None and sys.platform == \"win32\" or extension == \".ps1\":\n                shell = \"powershell\"\n            elif self.shell is None:\n                shell = \"bash\"\n            else:\n                shell = self.shell.lower()\n\n            if shell == \"powershell\":\n                # if powershell, set exit code to that of command\n                temp_file.write(\"\\r\\nExit $LastExitCode\".encode())\n            temp_file.close()\n\n            trigger_command = [shell, temp_file.name]\n            yield trigger_command\n        finally:\n            if os.path.exists(temp_file.name):\n                os.remove(temp_file.name)\n\n    def _compile_kwargs(self, **open_kwargs: Dict[str, Any]) -> Dict[str, Any]:\n        \"\"\"\n        Helper method to compile the kwargs for `open_process` so it's not repeated\n        across the run and trigger methods.\n        \"\"\"\n        trigger_command = self._exit_stack.enter_context(self._prep_trigger_command())\n        input_env = os.environ.copy()\n        input_env.update(self.env)\n        input_open_kwargs = dict(\n            command=trigger_command,\n            stdout=subprocess.PIPE,\n            stderr=subprocess.PIPE,\n            env=input_env,\n            cwd=self.working_dir,\n            **open_kwargs,\n        )\n        return input_open_kwargs\n\n    @sync_compatible\n    async def trigger(self, **open_kwargs: Dict[str, Any]) -> ShellProcess:\n        \"\"\"\n        Triggers a shell command and returns the shell command run object\n        to track the execution of the run. 
This method is ideal for long-lasting\n        shell commands; for short-lasting shell commands, it is recommended\n        to use the `run` method instead.\n\n        Args:\n            **open_kwargs: Additional keyword arguments to pass to `open_process`.\n\n        Returns:\n            A `ShellProcess` object.\n\n        Examples:\n            Sleep for 5 seconds and then print \"Hello, world!\":\n            ```python\n            from prefect_shell import ShellOperation\n\n            with ShellOperation(\n                commands=[\"sleep 5\", \"echo 'Hello, world!'\"],\n            ) as shell_operation:\n                shell_process = shell_operation.trigger()\n                shell_process.wait_for_completion()\n                shell_output = shell_process.fetch_result()\n            ```\n        \"\"\"\n        input_open_kwargs = self._compile_kwargs(**open_kwargs)\n        process = await self._exit_stack.enter_async_context(\n            open_process(**input_open_kwargs)\n        )\n        num_commands = len(self.commands)\n        self.logger.info(\n            f\"PID {process.pid} triggered with {num_commands} commands running \"\n            f\"inside the {(self.working_dir or '.')!r} directory.\"\n        )\n        return ShellProcess(shell_operation=self, process=process)\n\n    @sync_compatible\n    async def run(self, **open_kwargs: Dict[str, Any]) -> List[str]:\n        \"\"\"\n        Runs a shell command, but unlike the trigger method,\n        additionally waits and fetches the result directly, automatically managing\n        the context. This method is ideal for short-lasting shell commands;\n        for long-lasting shell commands, it is\n        recommended to use the `trigger` method instead.\n\n        Args:\n            **open_kwargs: Additional keyword arguments to pass to `open_process`.\n\n        Returns:\n            The lines output from the shell command as a list.\n\n        Examples:\n            Sleep for 5 seconds and then print \"Hello, world!\":\n            ```python\n            from prefect_shell import ShellOperation\n\n            shell_output = ShellOperation(\n                commands=[\"sleep 5\", \"echo 'Hello, world!'\"]\n            ).run()\n            ```\n        \"\"\"\n        input_open_kwargs = self._compile_kwargs(**open_kwargs)\n        async with open_process(**input_open_kwargs) as process:\n            shell_process = ShellProcess(shell_operation=self, process=process)\n            num_commands = len(self.commands)\n            self.logger.info(\n                f\"PID {process.pid} triggered with {num_commands} commands running \"\n                f\"inside the {(self.working_dir or '.')!r} directory.\"\n            )\n            await shell_process.wait_for_completion()\n            result = await shell_process.fetch_result()\n\n        return result\n\n    @sync_compatible\n    async def close(self):\n        \"\"\"\n        Close the job block.\n        \"\"\"\n        await self._exit_stack.aclose()\n        self.logger.info(\"Successfully closed all open processes.\")\n\n    async def aclose(self):\n        \"\"\"\n        Asynchronous version of the close method.\n        \"\"\"\n        await self.close()\n\n    async def __aenter__(self) -> \"ShellOperation\":\n        \"\"\"\n        Asynchronous version of the enter method.\n        \"\"\"\n        return self\n\n    async def __aexit__(self, *exc_info):\n        \"\"\"\n        Asynchronous version of the exit method.\n        \"\"\"\n        await 
self.close()\n\n    def __enter__(self) -> \"ShellOperation\":\n        \"\"\"\n        Enter the context of the job block.\n        \"\"\"\n        return self\n\n    def __exit__(self, *exc_info):\n        \"\"\"\n        Exit the context of the job block.\n        \"\"\"\n        self.close()\n
    "},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands.ShellOperation.aclose","title":"aclose async","text":"

    Asynchronous version of the close method.

    Source code in prefect_shell/commands.py
    async def aclose(self):\n    \"\"\"\n    Asynchronous version of the close method.\n    \"\"\"\n    await self.close()\n
    "},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands.ShellOperation.close","title":"close async","text":"

    Close the job block.

    Source code in prefect_shell/commands.py
    @sync_compatible\nasync def close(self):\n    \"\"\"\n    Close the job block.\n    \"\"\"\n    await self._exit_stack.aclose()\n    self.logger.info(\"Successfully closed all open processes.\")\n
    "},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands.ShellOperation.run","title":"run async","text":"

    Runs a shell command, but unlike the trigger method, additionally waits and fetches the result directly, automatically managing the context. This method is ideal for short-lasting shell commands; for long-lasting shell commands, it is recommended to use the trigger method instead.

    Parameters:

    Name Type Description Default **open_kwargs Dict[str, Any]

    Additional keyword arguments to pass to open_process.

    {}

    Returns:

    Type Description List[str]

    The lines output from the shell command as a list.

    Examples:

    Sleep for 5 seconds and then print \"Hello, world!\":

    from prefect_shell import ShellOperation\n\nshell_output = ShellOperation(\n    commands=[\"sleep 5\", \"echo 'Hello, world!'\"]\n).run()\n

    Source code in prefect_shell/commands.py
    @sync_compatible\nasync def run(self, **open_kwargs: Dict[str, Any]) -> List[str]:\n    \"\"\"\n    Runs a shell command, but unlike the trigger method,\n    additionally waits and fetches the result directly, automatically managing\n    the context. This method is ideal for short-lasting shell commands;\n    for long-lasting shell commands, it is\n    recommended to use the `trigger` method instead.\n\n    Args:\n        **open_kwargs: Additional keyword arguments to pass to `open_process`.\n\n    Returns:\n        The lines output from the shell command as a list.\n\n    Examples:\n        Sleep for 5 seconds and then print \"Hello, world!\":\n        ```python\n        from prefect_shell import ShellOperation\n\n        shell_output = ShellOperation(\n            commands=[\"sleep 5\", \"echo 'Hello, world!'\"]\n        ).run()\n        ```\n    \"\"\"\n    input_open_kwargs = self._compile_kwargs(**open_kwargs)\n    async with open_process(**input_open_kwargs) as process:\n        shell_process = ShellProcess(shell_operation=self, process=process)\n        num_commands = len(self.commands)\n        self.logger.info(\n            f\"PID {process.pid} triggered with {num_commands} commands running \"\n            f\"inside the {(self.working_dir or '.')!r} directory.\"\n        )\n        await shell_process.wait_for_completion()\n        result = await shell_process.fetch_result()\n\n    return result\n
    "},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands.ShellOperation.trigger","title":"trigger async","text":"

    Triggers a shell command and returns the shell command run object to track the execution of the run. This method is ideal for long-lasting shell commands; for short-lasting shell commands, it is recommended to use the run method instead.

    Parameters:

    Name Type Description Default **open_kwargs Dict[str, Any]

    Additional keyword arguments to pass to open_process.

    {}

    Returns:

    Type Description ShellProcess

    A ShellProcess object.

    Examples:

    Sleep for 5 seconds and then print \"Hello, world!\":

    from prefect_shell import ShellOperation\n\nwith ShellOperation(\n    commands=[\"sleep 5\", \"echo 'Hello, world!'\"],\n) as shell_operation:\n    shell_process = shell_operation.trigger()\n    shell_process.wait_for_completion()\n    shell_output = shell_process.fetch_result()\n

    Source code in prefect_shell/commands.py
    @sync_compatible\nasync def trigger(self, **open_kwargs: Dict[str, Any]) -> ShellProcess:\n    \"\"\"\n    Triggers a shell command and returns the shell command run object\n    to track the execution of the run. This method is ideal for long-lasting\n    shell commands; for short-lasting shell commands, it is recommended\n    to use the `run` method instead.\n\n    Args:\n        **open_kwargs: Additional keyword arguments to pass to `open_process`.\n\n    Returns:\n        A `ShellProcess` object.\n\n    Examples:\n        Sleep for 5 seconds and then print \"Hello, world!\":\n        ```python\n        from prefect_shell import ShellOperation\n\n        with ShellOperation(\n            commands=[\"sleep 5\", \"echo 'Hello, world!'\"],\n        ) as shell_operation:\n            shell_process = shell_operation.trigger()\n            shell_process.wait_for_completion()\n            shell_output = shell_process.fetch_result()\n        ```\n    \"\"\"\n    input_open_kwargs = self._compile_kwargs(**open_kwargs)\n    process = await self._exit_stack.enter_async_context(\n        open_process(**input_open_kwargs)\n    )\n    num_commands = len(self.commands)\n    self.logger.info(\n        f\"PID {process.pid} triggered with {num_commands} commands running \"\n        f\"inside the {(self.working_dir or '.')!r} directory.\"\n    )\n    return ShellProcess(shell_operation=self, process=process)\n
    "},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands.ShellProcess","title":"ShellProcess","text":"

    Bases: JobRun

    A class representing a shell process.

    Source code in prefect_shell/commands.py
    class ShellProcess(JobRun):\n    \"\"\"\n    A class representing a shell process.\n    \"\"\"\n\n    def __init__(self, shell_operation: \"ShellOperation\", process: Process):\n        self._shell_operation = shell_operation\n        self._process = process\n        self._output = []\n\n    @property\n    def pid(self) -> int:\n        \"\"\"\n        The PID of the process.\n\n        Returns:\n            The PID of the process.\n        \"\"\"\n        return self._process.pid\n\n    @property\n    def return_code(self) -> Optional[int]:\n        \"\"\"\n        The return code of the process.\n\n        Returns:\n            The return code of the process, or `None` if the process is still running.\n        \"\"\"\n        return self._process.returncode\n\n    async def _capture_output(self, source):\n        \"\"\"\n        Capture output from source.\n        \"\"\"\n        async for output in TextReceiveStream(source):\n            text = output.rstrip()\n            if self._shell_operation.stream_output:\n                self.logger.info(f\"PID {self.pid} stream output:{os.linesep}{text}\")\n            self._output.extend(text.split(os.linesep))\n\n    @sync_compatible\n    async def wait_for_completion(self) -> None:\n        \"\"\"\n        Wait for the shell command to complete after a process is triggered.\n        \"\"\"\n        self.logger.debug(f\"Waiting for PID {self.pid} to complete.\")\n\n        await asyncio.gather(\n            self._capture_output(self._process.stdout),\n            self._capture_output(self._process.stderr),\n        )\n        await self._process.wait()\n\n        if self.return_code != 0:\n            raise RuntimeError(\n                f\"PID {self.pid} failed with return code {self.return_code}.\"\n            )\n        self.logger.info(\n            f\"PID {self.pid} completed with return code {self.return_code}.\"\n        )\n\n    @sync_compatible\n    async def fetch_result(self) -> List[str]:\n        \"\"\"\n        Retrieve the output of the shell operation.\n\n        Returns:\n            The lines output from the shell operation as a list.\n        \"\"\"\n        if self._process.returncode is None:\n            self.logger.info(\"Process is still running, result may be incomplete.\")\n        return self._output\n
    "},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands.ShellProcess.pid","title":"pid: int property","text":"

    The PID of the process.

    Returns:

    Type Description int

    The PID of the process.

    "},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands.ShellProcess.return_code","title":"return_code: Optional[int] property","text":"

    The return code of the process.

    Returns:

    Type Description Optional[int]

    The return code of the process, or None if the process is still running.

    "},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands.ShellProcess.fetch_result","title":"fetch_result async","text":"

    Retrieve the output of the shell operation.

    Returns:

    Type Description List[str]

    The lines output from the shell operation as a list.

    Source code in prefect_shell/commands.py
    @sync_compatible\nasync def fetch_result(self) -> List[str]:\n    \"\"\"\n    Retrieve the output of the shell operation.\n\n    Returns:\n        The lines output from the shell operation as a list.\n    \"\"\"\n    if self._process.returncode is None:\n        self.logger.info(\"Process is still running, result may be incomplete.\")\n    return self._output\n
    "},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands.ShellProcess.wait_for_completion","title":"wait_for_completion async","text":"

    Wait for the shell command to complete after a process is triggered.

    Source code in prefect_shell/commands.py
    @sync_compatible\nasync def wait_for_completion(self) -> None:\n    \"\"\"\n    Wait for the shell command to complete after a process is triggered.\n    \"\"\"\n    self.logger.debug(f\"Waiting for PID {self.pid} to complete.\")\n\n    await asyncio.gather(\n        self._capture_output(self._process.stdout),\n        self._capture_output(self._process.stderr),\n    )\n    await self._process.wait()\n\n    if self.return_code != 0:\n        raise RuntimeError(\n            f\"PID {self.pid} failed with return code {self.return_code}.\"\n        )\n    self.logger.info(\n        f\"PID {self.pid} completed with return code {self.return_code}.\"\n    )\n
    "},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands.shell_run_command","title":"shell_run_command async","text":"

    Runs arbitrary shell commands.

    Parameters:

    Name Type Description Default command str

    Shell command to be executed; can also be provided post-initialization by calling this task instance.

    required env Optional[dict]

    Dictionary of environment variables to use for the subprocess; can also be provided at runtime.

    None helper_command Optional[str]

    String representing a shell command, which will be executed prior to the command in the same process. Can be used to change directories, define helper functions, etc. for different commands in a flow.

    None shell Optional[str]

    Shell to run the command with.

    None extension Optional[str]

    File extension to be appended to the command to be executed.

    None return_all bool

    Whether this task should return all lines of stdout as a list, or just the last line as a string.

    False stream_level int

    The logging level of the stream; defaults to 20, equivalent to logging.INFO.

    INFO cwd Union[str, bytes, PathLike, None]

    The working directory context the command will be executed within

    None

    Returns:

    Type Description Union[List, str]

    If return all, returns all lines as a list; else the last line as a string.

    Example

    List contents in the current directory.

    from prefect import flow\nfrom prefect_shell import shell_run_command\n\n@flow\ndef example_shell_run_command_flow():\n    return shell_run_command(command=\"ls .\", return_all=True)\n\nexample_shell_run_command_flow()\n

    Source code in prefect_shell/commands.py
    @task\nasync def shell_run_command(\n    command: str,\n    env: Optional[dict] = None,\n    helper_command: Optional[str] = None,\n    shell: Optional[str] = None,\n    extension: Optional[str] = None,\n    return_all: bool = False,\n    stream_level: int = logging.INFO,\n    cwd: Union[str, bytes, os.PathLike, None] = None,\n) -> Union[List, str]:\n    \"\"\"\n    Runs arbitrary shell commands.\n\n    Args:\n        command: Shell command to be executed; can also be\n            provided post-initialization by calling this task instance.\n        env: Dictionary of environment variables to use for\n            the subprocess; can also be provided at runtime.\n        helper_command: String representing a shell command, which\n            will be executed prior to the `command` in the same process.\n            Can be used to change directories, define helper functions, etc.\n            for different commands in a flow.\n        shell: Shell to run the command with.\n        extension: File extension to be appended to the command to be executed.\n        return_all: Whether this task should return all lines of stdout as a list,\n            or just the last line as a string.\n        stream_level: The logging level of the stream;\n            defaults to 20 equivalent to `logging.INFO`.\n        cwd: The working directory context the command will be executed within\n\n    Returns:\n        If return all, returns all lines as a list; else the last line as a string.\n\n    Example:\n        List contents in the current directory.\n        ```python\n        from prefect import flow\n        from prefect_shell import shell_run_command\n\n        @flow\n        def example_shell_run_command_flow():\n            return shell_run_command(command=\"ls .\", return_all=True)\n\n        example_shell_run_command_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n\n    current_env = os.environ.copy()\n    current_env.update(env or {})\n\n    if shell is None:\n        # if shell is not specified:\n        # use powershell for windows\n        # use bash for other platforms\n        shell = \"powershell\" if sys.platform == \"win32\" else \"bash\"\n\n    extension = \".ps1\" if shell.lower() == \"powershell\" else extension\n\n    tmp = tempfile.NamedTemporaryFile(prefix=\"prefect-\", suffix=extension, delete=False)\n    try:\n        if helper_command:\n            tmp.write(helper_command.encode())\n            tmp.write(os.linesep.encode())\n        tmp.write(command.encode())\n        if shell.lower() == \"powershell\":\n            # if powershell, set exit code to that of command\n            tmp.write(\"\\r\\nExit $LastExitCode\".encode())\n        tmp.close()\n\n        shell_command = [shell, tmp.name]\n\n        lines = []\n        async with await anyio.open_process(\n            shell_command, env=current_env, cwd=cwd\n        ) as process:\n            async for text in TextReceiveStream(process.stdout):\n                logger.log(level=stream_level, msg=text)\n                lines.extend(text.rstrip().split(\"\\n\"))\n\n            await process.wait()\n            if process.returncode:\n                stderr = \"\\n\".join(\n                    [text async for text in TextReceiveStream(process.stderr)]\n                )\n                if not stderr and lines:\n                    stderr = f\"{lines[-1]}\\n\"\n                msg = (\n                    f\"Command failed with exit code {process.returncode}:\\n\" f\"{stderr}\"\n                )\n                
raise RuntimeError(msg)\n    finally:\n        if os.path.exists(tmp.name):\n            os.remove(tmp.name)\n\n    line = lines[-1] if lines else \"\"\n    return lines if return_all else line\n
    "},{"location":"integrations/prefect-slack/","title":"prefect-slack","text":""},{"location":"integrations/prefect-slack/#welcome","title":"Welcome!","text":"

    prefect-slack is a collection of prebuilt Prefect tasks that can be used to quickly construct Prefect flows.

    "},{"location":"integrations/prefect-slack/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-slack/#python-setup","title":"Python setup","text":"

    Requires an installation of Python 3.8+.

    We recommend using a Python virtual environment manager such as pipenv, conda or virtualenv.

    These tasks are designed to work with Prefect 2.0. For more information about how to use Prefect, please refer to the Prefect documentation.

    "},{"location":"integrations/prefect-slack/#installation","title":"Installation","text":"

    Install prefect-slack with pip:

    pip install prefect-slack\n
    "},{"location":"integrations/prefect-slack/#slack-setup","title":"Slack setup","text":"

    In order to use tasks in the collection, you'll first need to create a Slack app and install it in your Slack workspace. You can create a Slack app by navigating to the apps page for your Slack account and selecting 'Create New App'.

    For tasks that require a Bot user OAuth token, you can get a token for your app by navigating to your app's OAuth & Permissions page.

    For tasks that require a webhook URL, you can generate new webhook URLs by navigating to your app's Incoming Webhooks page.

    Slack's Basic app setup guide provides additional details on setting up a Slack app.
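
    Once you have a bot token or webhook URL, one option is to save it as a block so your flows can load it by name later; a minimal sketch (the block names and secret values below are placeholders) follows:

    from prefect_slack import SlackCredentials, SlackWebhook\n\n# save the bot token and webhook URL as reusable blocks\nSlackCredentials(token=\"xoxb-your-bot-token-here\").save(\"slack-credentials\")\nSlackWebhook(url=\"https://hooks.slack.com/XXX\").save(\"slack-webhook\")\n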

    "},{"location":"integrations/prefect-slack/#write-and-run-a-flow","title":"Write and run a flow","text":"
    from prefect import flow\nfrom prefect.context import get_run_context\nfrom prefect_slack import SlackCredentials\nfrom prefect_slack.messages import send_chat_message\n\n\n@flow\ndef example_send_message_flow():\n   context = get_run_context()\n\n   # Run other tasks and subflows here\n\n   token = \"xoxb-your-bot-token-here\"\n   send_chat_message(\n         slack_credentials=SlackCredentials(token),\n         channel=\"#prefect\",\n         text=f\"Flow run {context.flow_run.name} completed :tada:\"\n   )\n\nexample_send_message_flow()\n
    "},{"location":"integrations/prefect-slack/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-slack/credentials/#prefect_slack.credentials","title":"prefect_slack.credentials","text":"

    Credential classes used to store Slack credentials.

    "},{"location":"integrations/prefect-slack/credentials/#prefect_slack.credentials.SlackCredentials","title":"SlackCredentials","text":"

    Bases: Block

    Block holding Slack credentials for use in tasks and flows.

    Parameters:

    Name Type Description Default token

    Bot user OAuth token for the Slack app used to perform actions.

    required

    Examples:

    Load stored Slack credentials:

    from prefect_slack import SlackCredentials\nslack_credentials_block = SlackCredentials.load(\"BLOCK_NAME\")\n

    Get a Slack client:

    from prefect_slack import SlackCredentials\nslack_credentials_block = SlackCredentials.load(\"BLOCK_NAME\")\nclient = slack_credentials_block.get_client()\n

    Source code in prefect_slack/credentials.py
    class SlackCredentials(Block):\n    \"\"\"\n    Block holding Slack credentials for use in tasks and flows.\n\n    Args:\n        token: Bot user OAuth token for the Slack app used to perform actions.\n\n    Examples:\n        Load stored Slack credentials:\n        ```python\n        from prefect_slack import SlackCredentials\n        slack_credentials_block = SlackCredentials.load(\"BLOCK_NAME\")\n        ```\n\n        Get a Slack client:\n        ```python\n        from prefect_slack import SlackCredentials\n        slack_credentials_block = SlackCredentials.load(\"BLOCK_NAME\")\n        client = slack_credentials_block.get_client()\n        ```\n    \"\"\"  # noqa E501\n\n    _block_type_name = \"Slack Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/c1965ecbf8704ee1ea20d77786de9a41ce1087d1-500x500.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-slack/credentials/#prefect_slack.credentials.SlackCredentials\"  # noqa\n\n    token: SecretStr = Field(\n        default=...,\n        description=\"Bot user OAuth token for the Slack app used to perform actions.\",\n    )\n\n    def get_client(self) -> AsyncWebClient:\n        \"\"\"\n        Returns an authenticated `AsyncWebClient` to interact with the Slack API.\n        \"\"\"\n        return AsyncWebClient(token=self.token.get_secret_value())\n
    "},{"location":"integrations/prefect-slack/credentials/#prefect_slack.credentials.SlackCredentials.get_client","title":"get_client","text":"

    Returns an authenticated AsyncWebClient to interact with the Slack API.

    Source code in prefect_slack/credentials.py
    def get_client(self) -> AsyncWebClient:\n    \"\"\"\n    Returns an authenticated `AsyncWebClient` to interact with the Slack API.\n    \"\"\"\n    return AsyncWebClient(token=self.token.get_secret_value())\n
    "},{"location":"integrations/prefect-slack/credentials/#prefect_slack.credentials.SlackWebhook","title":"SlackWebhook","text":"

    Bases: NotificationBlock

    Block holding a Slack webhook for use in tasks and flows.

    Parameters:

    Name Type Description Default url

    Slack webhook URL which can be used to send messages (e.g. https://hooks.slack.com/XXX).

    required

    Examples:

    Load stored Slack webhook:

    from prefect_slack import SlackWebhook\nslack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\n

    Get a Slack webhook client:

    from prefect_slack import SlackWebhook\nslack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\nclient = slack_webhook_block.get_client()\n

    Send a notification in Slack:

    from prefect_slack import SlackWebhook\nslack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\nslack_webhook_block.notify(\"Hello, world!\")\n

    Source code in prefect_slack/credentials.py
    class SlackWebhook(NotificationBlock):\n    \"\"\"\n    Block holding a Slack webhook for use in tasks and flows.\n\n    Args:\n        url: Slack webhook URL which can be used to send messages\n            (e.g. `https://hooks.slack.com/XXX`).\n\n    Examples:\n        Load stored Slack webhook:\n        ```python\n        from prefect_slack import SlackWebhook\n        slack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\n        ```\n\n        Get a Slack webhook client:\n        ```python\n        from prefect_slack import SlackWebhook\n        slack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\n        client = slack_webhook_block.get_client()\n        ```\n\n        Send a notification in Slack:\n        ```python\n        from prefect_slack import SlackWebhook\n        slack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\n        slack_webhook_block.notify(\"Hello, world!\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Slack Incoming Webhook\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/7dkzINU9r6j44giEFuHuUC/85d4cd321ad60c1b1e898bc3fbd28580/5cb480cd5f1b6d3fbadece79.png?h=250\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-slack/credentials/#prefect_slack.credentials.SlackWebhook\"  # noqa\n\n    url: SecretStr = Field(\n        default=...,\n        title=\"Webhook URL\",\n        description=\"Slack webhook URL which can be used to send messages.\",\n        example=\"https://hooks.slack.com/XXX\",\n    )\n\n    def get_client(self) -> AsyncWebhookClient:\n        \"\"\"\n        Returns an authenticated `AsyncWebhookClient` to interact with the configured\n        Slack webhook.\n        \"\"\"\n        return AsyncWebhookClient(url=self.url.get_secret_value())\n\n    @sync_compatible\n    async def notify(self, body: str, subject: Optional[str] = None):\n        \"\"\"\n        Sends a message to the Slack channel.\n        \"\"\"\n        client = self.get_client()\n\n        response = await client.send(text=body)\n\n        # prefect>=2.17.2 added a means for notification blocks to raise errors on\n        # failures. This is not available in older versions, so we need to check if the\n        # private base class attribute exists before using it.\n        if getattr(self, \"_raise_on_failure\", False):  # pragma: no cover\n            try:\n                from prefect.blocks.abstract import NotificationError\n            except ImportError:\n                NotificationError = Exception\n\n            if response.status_code >= 400:\n                raise NotificationError(f\"Failed to send message: {response.body}\")\n
    "},{"location":"integrations/prefect-slack/credentials/#prefect_slack.credentials.SlackWebhook.get_client","title":"get_client","text":"

    Returns an authenticated AsyncWebhookClient to interact with the configured Slack webhook.

    Source code in prefect_slack/credentials.py
    def get_client(self) -> AsyncWebhookClient:\n    \"\"\"\n    Returns an authenticated `AsyncWebhookClient` to interact with the configured\n    Slack webhook.\n    \"\"\"\n    return AsyncWebhookClient(url=self.url.get_secret_value())\n
    "},{"location":"integrations/prefect-slack/credentials/#prefect_slack.credentials.SlackWebhook.notify","title":"notify async","text":"

    Sends a message to the Slack channel.

    Source code in prefect_slack/credentials.py
    @sync_compatible\nasync def notify(self, body: str, subject: Optional[str] = None):\n    \"\"\"\n    Sends a message to the Slack channel.\n    \"\"\"\n    client = self.get_client()\n\n    response = await client.send(text=body)\n\n    # prefect>=2.17.2 added a means for notification blocks to raise errors on\n    # failures. This is not available in older versions, so we need to check if the\n    # private base class attribute exists before using it.\n    if getattr(self, \"_raise_on_failure\", False):  # pragma: no cover\n        try:\n            from prefect.blocks.abstract import NotificationError\n        except ImportError:\n            NotificationError = Exception\n\n        if response.status_code >= 400:\n            raise NotificationError(f\"Failed to send message: {response.body}\")\n
    "},{"location":"integrations/prefect-slack/messages/","title":"Messages","text":""},{"location":"integrations/prefect-slack/messages/#prefect_slack.messages","title":"prefect_slack.messages","text":"

    Tasks for sending Slack messages.

    "},{"location":"integrations/prefect-slack/messages/#prefect_slack.messages.send_chat_message","title":"send_chat_message async","text":"

    Sends a message to a Slack channel.

    Parameters:

    Name Type Description Default channel str

    The name of the channel in which to post the chat message (e.g. #general).

    required slack_credentials SlackCredentials

    Instance of SlackCredentials initialized with a Slack bot token.

    required text Optional[str]

    Contents of the message. It's a best practice to always provide a text argument when posting a message. The text argument is used in places where content cannot be rendered such as: system push notifications, assistive technology such as screen readers, etc.

    None attachments Optional[Sequence[Union[Dict, Attachment]]]

    List of objects defining secondary context in the posted Slack message. The Slack API docs provide guidance on building attachments.

    None slack_blocks Optional[Sequence[Union[Dict, Block]]]

    List of objects defining the layout and formatting of the posted message. The Slack API docs provide guidance on building messages with blocks.

    None

    Returns:

    Name Type Description Dict Dict

    Response from the Slack API. Example response structures can be found in the Slack API docs.

    Example

    Post a message at the end of a flow run.

    from prefect import flow\nfrom prefect.context import get_run_context\nfrom prefect_slack import SlackCredentials\nfrom prefect_slack.messages import send_chat_message\n\n\n@flow\ndef example_send_message_flow():\n    context = get_run_context()\n\n    # Run other tasks and subflows here\n\n    token = \"xoxb-your-bot-token-here\"\n    send_chat_message(\n        slack_credentials=SlackCredentials(token),\n        channel=\"#prefect\",\n        text=f\"Flow run {context.flow_run.name} completed :tada:\"\n    )\n\nexample_send_message_flow()\n
    Source code in prefect_slack/messages.py
    @task\nasync def send_chat_message(\n    channel: str,\n    slack_credentials: SlackCredentials,\n    text: Optional[str] = None,\n    attachments: Optional[\n        Sequence[Union[Dict, \"slack_sdk.models.attachments.Attachment\"]]\n    ] = None,\n    slack_blocks: Optional[\n        Sequence[Union[Dict, \"slack_sdk.models.blocks.Block\"]]\n    ] = None,\n) -> Dict:\n    \"\"\"\n    Sends a message to a Slack channel.\n\n    Args:\n        channel: The name of the channel in which to post the chat message\n            (e.g. #general).\n        slack_credentials: Instance of `SlackCredentials` initialized with a Slack\n            bot token.\n        text: Contents of the message. It's a best practice to always provide a `text`\n            argument when posting a message. The `text` argument is used in places where\n            content cannot be rendered such as: system push notifications, assistive\n            technology such as screen readers, etc.\n        attachments: List of objects defining secondary context in the posted Slack\n            message. The [Slack API docs](https://api.slack.com/messaging/composing/layouts#building-attachments)\n            provide guidance on building attachments.\n        slack_blocks: List of objects defining the layout and formatting of the posted\n            message. The [Slack API docs](https://api.slack.com/block-kit/building)\n            provide guidance on building messages with blocks.\n\n    Returns:\n        Dict: Response from the Slack API. Example response structures can be found in\n            the [Slack API docs](https://api.slack.com/methods/chat.postMessage#examples).\n\n    Example:\n        Post a message at the end of a flow run.\n\n        ```python\n        from prefect import flow\n        from prefect.context import get_run_context\n        from prefect_slack import SlackCredentials\n        from prefect_slack.messages import send_chat_message\n\n\n        @flow\n        def example_send_message_flow():\n            context = get_run_context()\n\n            # Run other tasks and subflows here\n\n            token = \"xoxb-your-bot-token-here\"\n            send_chat_message(\n                slack_credentials=SlackCredentials(token),\n                channel=\"#prefect\",\n                text=f\"Flow run {context.flow_run.name} completed :tada:\"\n            )\n\n        example_send_message_flow()\n        ```\n    \"\"\"  # noqa\n    logger = get_run_logger()\n    logger.info(\"Posting chat message to %s\", channel)\n\n    client = slack_credentials.get_client()\n    result = await client.chat_postMessage(\n        channel=channel, text=text, blocks=slack_blocks, attachments=attachments\n    )\n    return result.data\n
    "},{"location":"integrations/prefect-slack/messages/#prefect_slack.messages.send_incoming_webhook_message","title":"send_incoming_webhook_message async","text":"

    Sends a message via an incoming webhook.

    Parameters:

    Name Type Description Default slack_webhook SlackWebhook

    Instance of SlackWebhook initialized with a Slack webhook URL.

    required text Optional[str]

    Contents of the message. It's a best practice to always provide a text argument when posting a message. The text argument is used in places where content cannot be rendered such as: system push notifications, assistive technology such as screen readers, etc.

    None attachments Optional[Sequence[Union[Dict, Attachment]]]

    List of objects defining secondary context in the posted Slack message. The Slack API docs provide guidance on building attachments.

    None slack_blocks Optional[Sequence[Union[Dict, Block]]]

    List of objects defining the layout and formatting of the posted message. The Slack API docs provide guidance on building messages with blocks.

    None Example

    Post a message at the end of a flow run.

    from prefect import flow\nfrom prefect_slack import SlackWebhook\nfrom prefect_slack.messages import send_incoming_webhook_message\n\n\n@flow\ndef example_send_message_flow():\n    # Run other tasks and subflows here\n\n    webhook_url = \"https://hooks.slack.com/XXX\"\n    send_incoming_webhook_message(\n        slack_webhook=SlackWebhook(\n            url=webhook_url\n        ),\n        text=\"Warehouse loading flow completed :sparkles:\"\n    )\n\nexample_send_message_flow()\n
    Source code in prefect_slack/messages.py
    @task\nasync def send_incoming_webhook_message(\n    slack_webhook: SlackWebhook,\n    text: Optional[str] = None,\n    attachments: Optional[\n        Sequence[Union[Dict, \"slack_sdk.models.attachments.Attachment\"]]\n    ] = None,\n    slack_blocks: Optional[\n        Sequence[Union[Dict, \"slack_sdk.models.blocks.Block\"]]\n    ] = None,\n) -> None:\n    \"\"\"\n    Sends a message via an incoming webhook.\n\n    Args:\n        slack_webhook: Instance of `SlackWebhook` initialized with a Slack\n            webhook URL.\n        text: Contents of the message. It's a best practice to always provide a `text`\n            argument when posting a message. The `text` argument is used in places where\n            content cannot be rendered such as: system push notifications, assistive\n            technology such as screen readers, etc.\n        attachments: List of objects defining secondary context in the posted Slack\n            message. The [Slack API docs](https://api.slack.com/messaging/composing/layouts#building-attachments)\n            provide guidance on building attachments.\n        slack_blocks: List of objects defining the layout and formatting of the posted\n            message. The [Slack API docs](https://api.slack.com/block-kit/building)\n            provide guidance on building messages with blocks.\n\n    Example:\n        Post a message at the end of a flow run.\n\n        ```python\n        from prefect import flow\n        from prefect_slack import SlackWebhook\n        from prefect_slack.messages import send_incoming_webhook_message\n\n\n        @flow\n        def example_send_message_flow():\n            # Run other tasks and subflows here\n\n            webhook_url = \"https://hooks.slack.com/XXX\"\n            send_incoming_webhook_message(\n                slack_webhook=SlackWebhook(\n                    url=webhook_url\n                ),\n                text=\"Warehouse loading flow completed :sparkles:\"\n            )\n\n        example_send_message_flow()\n        ```\n    \"\"\"  # noqa\n    logger = get_run_logger()\n    logger.info(\"Posting message to provided webhook\")\n\n    client = slack_webhook.get_client()\n    await client.send(text=text, attachments=attachments, blocks=slack_blocks)\n
    "},{"location":"integrations/prefect-snowflake/","title":"prefect-snowflake","text":""},{"location":"integrations/prefect-snowflake/#welcome","title":"Welcome!","text":"

    The prefect-snowflake collection makes it easy to connect to a Snowflake database in your Prefect flows. Check out the examples below to get started!

    "},{"location":"integrations/prefect-snowflake/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-snowflake/#integrate-with-prefect-flows","title":"Integrate with Prefect flows","text":"

    Prefect works with Snowflake by providing dataflow automation for faster, more efficient data pipeline creation, execution, and monitoring.

    This results in reduced errors, increased confidence in your data, and ultimately, faster insights.

    To set up a table, use the execute and execute_many methods. Then, use the fetch_many method to retrieve data in a stream until there's no more data.

    By using the SnowflakeConnector as a context manager, you can make sure that the Snowflake connection and cursors are closed properly after you're done with them.

    Be sure to install prefect-snowflake, register the blocks, and create a credentials block to run the examples below!

    The synchronous example is followed by its async equivalent:
    from prefect import flow, task\nfrom prefect_snowflake import SnowflakeConnector\n\n\n@task\ndef setup_table(block_name: str) -> None:\n    with SnowflakeConnector.load(block_name) as connector:\n        connector.execute(\n            \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n        )\n        connector.execute_many(\n            \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n            seq_of_parameters=[\n                {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                {\"name\": \"Unknown\", \"address\": \"Space\"},\n                {\"name\": \"Me\", \"address\": \"Myway 88\"},\n            ],\n        )\n\n@task\ndef fetch_data(block_name: str) -> list:\n    all_rows = []\n    with SnowflakeConnector.load(block_name) as connector:\n        while True:\n            # Repeated fetch* calls using the same operation will\n            # skip re-executing and instead return the next set of results\n            new_rows = connector.fetch_many(\"SELECT * FROM customers\", size=2)\n            if len(new_rows) == 0:\n                break\n            all_rows.append(new_rows)\n    return all_rows\n\n@flow\ndef snowflake_flow(block_name: str) -> list:\n    setup_table(block_name)\n    all_rows = fetch_data(block_name)\n    return all_rows\n\n\nif __name__==\"__main__\":\n    snowflake_flow(\"example\")\n
    from prefect import flow, task\nfrom prefect_snowflake import SnowflakeConnector\nimport asyncio\n\n@task\nasync def setup_table(block_name: str) -> None:\n    with await SnowflakeConnector.load(block_name) as connector:\n        await connector.execute(\n            \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n        )\n        await connector.execute_many(\n            \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n            seq_of_parameters=[\n                {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                {\"name\": \"Unknown\", \"address\": \"Space\"},\n                {\"name\": \"Me\", \"address\": \"Myway 88\"},\n            ],\n        )\n\n@task\nasync def fetch_data(block_name: str) -> list:\n    all_rows = []\n    with await SnowflakeConnector.load(block_name) as connector:\n        while True:\n            # Repeated fetch* calls using the same operation will\n            # skip re-executing and instead return the next set of results\n            new_rows = await connector.fetch_many(\"SELECT * FROM customers\", size=2)\n            if len(new_rows) == 0:\n                break\n            all_rows.append(new_rows)\n    return all_rows\n\n@flow\nasync def snowflake_flow(block_name: str) -> list:\n    await setup_table(block_name)\n    all_rows = await fetch_data(block_name)\n    return all_rows\n\n\nif __name__==\"__main__\":\n    asyncio.run(snowflake_flow(\"example\"))\n
    "},{"location":"integrations/prefect-snowflake/#access-underlying-snowflake-connection","title":"Access underlying Snowflake connection","text":"

    If the native methods of the block don't meet your requirements, don't worry.

    You have the option to access the underlying Snowflake connection and utilize its built-in methods as well.

    import pandas as pd\nfrom prefect import flow\nfrom prefect_snowflake.database import SnowflakeConnector\nfrom snowflake.connector.pandas_tools import write_pandas\n\n@flow\ndef snowflake_write_pandas_flow():\n    connector = SnowflakeConnector.load(\"my-block\")\n    with connector.get_connection() as connection:\n        table_name = \"TABLE_NAME\"\n        ddl = \"NAME STRING, NUMBER INT\"\n        statement = f'CREATE TABLE IF NOT EXISTS {table_name} ({ddl})'\n        with connection.cursor() as cursor:\n            cursor.execute(statement)\n\n        # case sensitivity matters here!\n        df = pd.DataFrame([('Marvin', 42), ('Ford', 88)], columns=['NAME', 'NUMBER'])\n        success, num_chunks, num_rows, _ = write_pandas(\n            conn=connection,\n            df=df,\n            table_name=table_name,\n            database=connector.database,\n            schema=connector.schema_  # note the \"_\" suffix\n        )\n
    "},{"location":"integrations/prefect-snowflake/#resources","title":"Resources","text":"

    For more tips on how to use tasks and flows in an integration, check out Using Collections!

    "},{"location":"integrations/prefect-snowflake/#installation","title":"Installation","text":"

    Install prefect-snowflake with pip:

    pip install prefect-snowflake\n
    "},{"location":"integrations/prefect-snowflake/#registering-blocks","title":"Registering blocks","text":"

    Register blocks in this module to make them available for use.

    prefect block register -m prefect_snowflake\n
    "},{"location":"integrations/prefect-snowflake/#saving-credentials-to-block","title":"Saving credentials to block","text":"

    Note: to use the load method on a block, you must already have a block document saved, either through code or through the UI.

    Below is a walkthrough of saving a SnowflakeCredentials block through code.

    1. Head over to https://app.snowflake.com/.
    2. Log in to your Snowflake account, e.g. nh12345.us-east-2.aws, with your username and password.
    3. Use those credentials to replace the placeholders below.
    from prefect_snowflake import SnowflakeCredentials\n\ncredentials = SnowflakeCredentials(\n    account=\"ACCOUNT-PLACEHOLDER\",  # resembles nh12345.us-east-2.aws\n    user=\"USER-PLACEHOLDER\",\n    password=\"PASSWORD-PLACEHOLDER\"\n)\ncredentials.save(\"CREDENTIALS-BLOCK-NAME-PLACEHOLDER\")\n

    Then, to create a SnowflakeConnector block:

    1. After logging in, click on any worksheet.
    2. On the left side, select a database and schema.
    3. On the top right, select a warehouse.
    4. Create a short script, replacing the placeholders below.
    from prefect_snowflake import SnowflakeCredentials, SnowflakeConnector\n\ncredentials = SnowflakeCredentials.load(\"CREDENTIALS-BLOCK-NAME-PLACEHOLDER\")\n\nconnector = SnowflakeConnector(\n    credentials=credentials,\n    database=\"DATABASE-PLACEHOLDER\",\n    schema=\"SCHEMA-PLACEHOLDER\",\n    warehouse=\"COMPUTE_WH\",\n)\nconnector.save(\"CONNECTOR-BLOCK-NAME-PLACEHOLDER\")\n

    Congrats! You can now easily load the saved block, which holds your credentials and connection info:

    from prefect_snowflake import SnowflakeCredentials, SnowflakeConnector\n\nSnowflakeCredentials.load(\"CREDENTIALS-BLOCK-NAME-PLACEHOLDER\")\nSnowflakeConnector.load(\"CONNECTOR-BLOCK-NAME-PLACEHOLDER\")\n

    Registering blocks

    Register blocks in this module to view and edit them on Prefect Cloud:

    prefect block register -m prefect_snowflake\n

    A list of available blocks in prefect-snowflake and their setup instructions can be found here.

    "},{"location":"integrations/prefect-snowflake/#feedback","title":"Feedback","text":"

    If you encounter any bugs while using prefect-snowflake, feel free to open an issue in the prefect repository.

    If you have any questions or issues while using prefect-snowflake, you can find help in the Prefect Slack community.

    "},{"location":"integrations/prefect-snowflake/#contributing","title":"Contributing","text":"

    If you'd like to contribute a fix for an issue or add a feature to prefect-snowflake, please propose changes through a pull request from a fork of the Prefect repository.

    "},{"location":"integrations/prefect-snowflake/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-snowflake/credentials/#prefect_snowflake.credentials","title":"prefect_snowflake.credentials","text":"

    Credentials block for authenticating with Snowflake.

    "},{"location":"integrations/prefect-snowflake/credentials/#prefect_snowflake.credentials.InvalidPemFormat","title":"InvalidPemFormat","text":"

    Bases: Exception

    Invalid PEM Format Certificate

    Source code in prefect_snowflake/credentials.py
    class InvalidPemFormat(Exception):\n    \"\"\"Invalid PEM Format Certificate\"\"\"\n
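
    As a minimal sketch (assuming a saved credentials block whose private_key is not valid PEM), this exception can be caught when resolving the key:

    from prefect_snowflake import SnowflakeCredentials\nfrom prefect_snowflake.credentials import InvalidPemFormat\n\n# hypothetical block name; the block is assumed to hold a malformed PEM private_key\ncredentials = SnowflakeCredentials.load(\"CREDENTIALS-BLOCK-NAME-PLACEHOLDER\")\ntry:\n    credentials.resolve_private_key()\nexcept InvalidPemFormat:\n    print(\"The configured private_key is not a valid PEM certificate.\")\n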
    "},{"location":"integrations/prefect-snowflake/credentials/#prefect_snowflake.credentials.SnowflakeCredentials","title":"SnowflakeCredentials","text":"

    Bases: CredentialsBlock

    Block used to manage authentication with Snowflake.

    Parameters:

    | Name | Type | Description | Default |
    | --- | --- | --- | --- |
    | account | str | The snowflake account name. | required |
    | user | str | The user name used to authenticate. | required |
    | password | SecretStr | The password used to authenticate. | required |
    | private_key | SecretStr | The PEM used to authenticate. | required |
    | authenticator | str | The type of authenticator to use for initializing connection (oauth, externalbrowser, etc); refer to Snowflake documentation for details, and note that externalbrowser will only work in an environment where a browser is available. | required |
    | token | SecretStr | The OAuth or JWT Token to provide when authenticator is set to OAuth. | required |
    | endpoint | str | The Okta endpoint to use when authenticator is set to okta_endpoint, e.g. https://<okta_account_name>.okta.com. | required |
    | role | str | The name of the default role to use. | required |
    | autocommit | bool | Whether to automatically commit. | required |

    Example

    Load stored Snowflake credentials:

    from prefect_snowflake import SnowflakeCredentials\n\nsnowflake_credentials_block = SnowflakeCredentials.load(\"BLOCK_NAME\")\n

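    As a hedged sketch (the account, user, key path, and passphrase below are placeholders), the block can also be configured for key-pair authentication by supplying private_key_path and private_key_passphrase instead of a password:

    from prefect_snowflake import SnowflakeCredentials\n\ncredentials = SnowflakeCredentials(\n    account=\"nh12345.us-east-2.aws\",  # placeholder account\n    user=\"USER-PLACEHOLDER\",\n    private_key_path=\"/path/to/rsa_key.p8\",  # hypothetical path to a PEM private key\n    private_key_passphrase=\"PASSPHRASE-PLACEHOLDER\",\n)\ncredentials.save(\"CREDENTIALS-BLOCK-NAME-PLACEHOLDER\")\n
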
    Source code in prefect_snowflake/credentials.py
    class SnowflakeCredentials(CredentialsBlock):\n    \"\"\"\n    Block used to manage authentication with Snowflake.\n\n    Args:\n        account (str): The snowflake account name.\n        user (str): The user name used to authenticate.\n        password (SecretStr): The password used to authenticate.\n        private_key (SecretStr): The PEM used to authenticate.\n        authenticator (str): The type of authenticator to use for initializing\n            connection (oauth, externalbrowser, etc); refer to\n            [Snowflake documentation](https://docs.snowflake.com/en/user-guide/python-connector-api.html#connect)\n            for details, and note that `externalbrowser` will only\n            work in an environment where a browser is available.\n        token (SecretStr): The OAuth or JWT Token to provide when\n            authenticator is set to OAuth.\n        endpoint (str): The Okta endpoint to use when authenticator is\n            set to `okta_endpoint`, e.g. `https://<okta_account_name>.okta.com`.\n        role (str): The name of the default role to use.\n        autocommit (bool): Whether to automatically commit.\n\n    Example:\n        Load stored Snowflake credentials:\n        ```python\n        from prefect_snowflake import SnowflakeCredentials\n\n        snowflake_credentials_block = SnowflakeCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"  # noqa E501\n\n    _block_type_name = \"Snowflake Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/bd359de0b4be76c2254bd329fe3a267a1a3879c2-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-snowflake/credentials/#prefect_snowflake.credentials.SnowflakeCredentials\"  # noqa\n\n    account: str = Field(\n        ..., description=\"The snowflake account name.\", example=\"nh12345.us-east-2.aws\"\n    )\n    user: str = Field(..., description=\"The user name used to authenticate.\")\n    password: Optional[SecretStr] = Field(\n        default=None, description=\"The password used to authenticate.\"\n    )\n    private_key: Optional[SecretBytes] = Field(\n        default=None, description=\"The PEM used to authenticate.\"\n    )\n    private_key_path: Optional[Path] = Field(\n        default=None, description=\"The path to the private key.\"\n    )\n    private_key_passphrase: Optional[SecretStr] = Field(\n        default=None, description=\"The password to use for the private key.\"\n    )\n    authenticator: Literal[\n        \"snowflake\",\n        \"snowflake_jwt\",\n        \"externalbrowser\",\n        \"okta_endpoint\",\n        \"oauth\",\n        \"username_password_mfa\",\n    ] = Field(  # noqa\n        default=\"snowflake\",\n        description=(\"The type of authenticator to use for initializing connection.\"),\n    )\n    token: Optional[SecretStr] = Field(\n        default=None,\n        description=(\n            \"The OAuth or JWT Token to provide when authenticator is set to `oauth`.\"\n        ),\n    )\n    endpoint: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The Okta endpoint to use when authenticator is set to `okta_endpoint`.\"\n        ),\n    )\n    role: Optional[str] = Field(\n        default=None, description=\"The name of the default role to use.\"\n    )\n    autocommit: Optional[bool] = Field(\n        default=None, description=\"Whether to automatically commit.\"\n    )\n\n    @root_validator(pre=True)\n    def _validate_auth_kwargs(cls, values):\n        \"\"\"\n        Ensure an 
authorization value has been provided by the user.\n        \"\"\"\n        auth_params = (\n            \"password\",\n            \"private_key\",\n            \"private_key_path\",\n            \"authenticator\",\n            \"token\",\n        )\n        if not any(values.get(param) for param in auth_params):\n            auth_str = \", \".join(auth_params)\n            raise ValueError(\n                f\"One of the authentication keys must be provided: {auth_str}\\n\"\n            )\n        elif values.get(\"private_key\") and values.get(\"private_key_path\"):\n            raise ValueError(\n                \"Do not provide both private_key and private_key_path; select one.\"\n            )\n        elif values.get(\"password\") and values.get(\"private_key_passphrase\"):\n            raise ValueError(\n                \"Do not provide both password and private_key_passphrase; \"\n                \"specify private_key_passphrase only instead.\"\n            )\n        return values\n\n    @root_validator(pre=True)\n    def _validate_token_kwargs(cls, values):\n        \"\"\"\n        Ensure an authorization value has been provided by the user.\n        \"\"\"\n        authenticator = values.get(\"authenticator\")\n        token = values.get(\"token\")\n        if authenticator == \"oauth\" and not token:\n            raise ValueError(\n                \"If authenticator is set to `oauth`, `token` must be provided\"\n            )\n        return values\n\n    @root_validator(pre=True)\n    def _validate_okta_kwargs(cls, values):\n        \"\"\"\n        Ensure an authorization value has been provided by the user.\n        \"\"\"\n        authenticator = values.get(\"authenticator\")\n\n        # did not want to make a breaking change so we will allow both\n        # see https://github.com/PrefectHQ/prefect-snowflake/issues/44\n        if \"okta_endpoint\" in values.keys():\n            warnings.warn(\n                \"Please specify `endpoint` instead of `okta_endpoint`; \"\n                \"`okta_endpoint` will be removed March 31, 2023.\",\n                DeprecationWarning,\n                stacklevel=2,\n            )\n            # remove okta endpoint from fields\n            okta_endpoint = values.pop(\"okta_endpoint\")\n            if \"endpoint\" not in values.keys():\n                values[\"endpoint\"] = okta_endpoint\n\n        endpoint = values.get(\"endpoint\")\n        if authenticator == \"okta_endpoint\" and not endpoint:\n            raise ValueError(\n                \"If authenticator is set to `okta_endpoint`, \"\n                \"`endpoint` must be provided\"\n            )\n        return values\n\n    def resolve_private_key(self) -> Optional[bytes]:\n        \"\"\"\n        Converts a PEM encoded private key into a DER binary key.\n\n        Returns:\n            DER encoded key if private_key has been provided otherwise returns None.\n\n        Raises:\n            InvalidPemFormat: If private key is not in PEM format.\n        \"\"\"\n        if self.private_key_path is None and self.private_key is None:\n            return None\n        elif self.private_key_path:\n            private_key = self.private_key_path.read_bytes()\n        else:\n            private_key = self._decode_secret(self.private_key)\n\n        if self.private_key_passphrase is not None:\n            password = self._decode_secret(self.private_key_passphrase)\n        elif self.password is not None:\n            warnings.warn(\n                \"Using the password field for 
private_key is deprecated \"\n                \"and will not work after March 31, 2023; please use \"\n                \"private_key_passphrase instead\",\n                DeprecationWarning,\n                stacklevel=2,\n            )\n            password = self._decode_secret(self.password)\n        else:\n            password = None\n\n        composed_private_key = self._compose_pem(private_key)\n        return load_pem_private_key(\n            data=composed_private_key,\n            password=password,\n            backend=default_backend(),\n        ).private_bytes(\n            encoding=Encoding.DER,\n            format=PrivateFormat.PKCS8,\n            encryption_algorithm=NoEncryption(),\n        )\n\n    @staticmethod\n    def _decode_secret(secret: Union[SecretStr, SecretBytes]) -> Optional[bytes]:\n        \"\"\"\n        Decode the provided secret into bytes. If the secret is not a\n        string or bytes, or it is whitespace, then return None.\n\n        Args:\n            secret: The value to decode.\n\n        Returns:\n            The decoded secret as bytes.\n\n        \"\"\"\n        if isinstance(secret, (SecretBytes, SecretStr)):\n            secret = secret.get_secret_value()\n\n        if not isinstance(secret, (bytes, str)) or len(secret) == 0 or secret.isspace():\n            return None\n\n        return secret if isinstance(secret, bytes) else secret.encode()\n\n    @staticmethod\n    def _compose_pem(private_key: bytes) -> bytes:\n        \"\"\"Validate structure of PEM certificate.\n\n        The original key passed from Prefect is sometimes malformed.\n        This function recomposes the key into a valid key that will\n        pass the serialization step when resolving the key to a DER.\n\n        Args:\n            private_key: A valid PEM format byte encoded string.\n\n        Returns:\n            byte encoded certificate.\n\n        Raises:\n            InvalidPemFormat: if private key is an invalid format.\n        \"\"\"\n        pem_parts = re.match(_SIMPLE_PEM_CERTIFICATE_REGEX, private_key.decode())\n        if pem_parts is None:\n            raise InvalidPemFormat()\n\n        body = \"\\n\".join(re.split(r\"\\s+\", pem_parts[2].strip()))\n        # reassemble header+body+footer\n        return f\"{pem_parts[1]}\\n{body}\\n{pem_parts[3]}\".encode()\n\n    def get_client(\n        self, **connect_kwargs: Any\n    ) -> snowflake.connector.SnowflakeConnection:\n        \"\"\"\n        Returns an authenticated connection that can be used to query\n        Snowflake databases.\n\n        Any additional arguments passed to this method will be used to configure\n        the SnowflakeConnection. 
For available parameters, please refer to the\n        [Snowflake Python connector documentation](https://docs.snowflake.com/en/user-guide/python-connector-api.html#connect).\n\n        Args:\n            **connect_kwargs: Additional arguments to pass to\n                `snowflake.connector.connect`.\n\n        Returns:\n            An authenticated Snowflake connection.\n\n        Example:\n            Get Snowflake connection with only block configuration:\n            ```python\n            from prefect_snowflake import SnowflakeCredentials\n\n            snowflake_credentials_block = SnowflakeCredentials.load(\"BLOCK_NAME\")\n\n            connection = snowflake_credentials_block.get_client()\n            ```\n\n            Get Snowflake connector scoped to a specified database:\n            ```python\n            from prefect_snowflake import SnowflakeCredentials\n\n            snowflake_credentials_block = SnowflakeCredentials.load(\"BLOCK_NAME\")\n\n            connection = snowflake_credentials_block.get_client(database=\"my_database\")\n            ```\n        \"\"\"  # noqa\n        connect_params = {\n            # required to track task's usage in the Snowflake Partner Network Portal\n            \"application\": \"Prefect_Snowflake_Collection\",\n            **self.dict(exclude_unset=True, exclude={\"block_type_slug\"}),\n            **connect_kwargs,\n        }\n\n        for key, value in connect_params.items():\n            if isinstance(value, SecretField):\n                connect_params[key] = connect_params[key].get_secret_value()\n\n        # set authenticator to the actual okta_endpoint\n        if connect_params.get(\"authenticator\") == \"okta_endpoint\":\n            endpoint = connect_params.pop(\"endpoint\", None) or connect_params.pop(\n                \"okta_endpoint\", None\n            )  # okta_endpoint is deprecated\n            connect_params[\"authenticator\"] = endpoint\n\n        private_der_key = self.resolve_private_key()\n        if private_der_key is not None:\n            connect_params[\"private_key\"] = private_der_key\n            connect_params.pop(\"password\", None)\n            connect_params.pop(\"private_key_passphrase\", None)\n\n        return snowflake.connector.connect(**connect_params)\n
    "},{"location":"integrations/prefect-snowflake/credentials/#prefect_snowflake.credentials.SnowflakeCredentials.get_client","title":"get_client","text":"

    Returns an authenticated connection that can be used to query Snowflake databases.

    Any additional arguments passed to this method will be used to configure the SnowflakeConnection. For available parameters, please refer to the Snowflake Python connector documentation.

    Parameters:

    | Name | Type | Description | Default |
    | --- | --- | --- | --- |
    | **connect_kwargs | Any | Additional arguments to pass to snowflake.connector.connect. | {} |

    Returns:

    | Type | Description |
    | --- | --- |
    | SnowflakeConnection | An authenticated Snowflake connection. |

    Example

    Get Snowflake connection with only block configuration:

    from prefect_snowflake import SnowflakeCredentials\n\nsnowflake_credentials_block = SnowflakeCredentials.load(\"BLOCK_NAME\")\n\nconnection = snowflake_credentials_block.get_client()\n

    Get Snowflake connector scoped to a specified database:

    from prefect_snowflake import SnowflakeCredentials\n\nsnowflake_credentials_block = SnowflakeCredentials.load(\"BLOCK_NAME\")\n\nconnection = snowflake_credentials_block.get_client(database=\"my_database\")\n

    Source code in prefect_snowflake/credentials.py
    def get_client(\n    self, **connect_kwargs: Any\n) -> snowflake.connector.SnowflakeConnection:\n    \"\"\"\n    Returns an authenticated connection that can be used to query\n    Snowflake databases.\n\n    Any additional arguments passed to this method will be used to configure\n    the SnowflakeConnection. For available parameters, please refer to the\n    [Snowflake Python connector documentation](https://docs.snowflake.com/en/user-guide/python-connector-api.html#connect).\n\n    Args:\n        **connect_kwargs: Additional arguments to pass to\n            `snowflake.connector.connect`.\n\n    Returns:\n        An authenticated Snowflake connection.\n\n    Example:\n        Get Snowflake connection with only block configuration:\n        ```python\n        from prefect_snowflake import SnowflakeCredentials\n\n        snowflake_credentials_block = SnowflakeCredentials.load(\"BLOCK_NAME\")\n\n        connection = snowflake_credentials_block.get_client()\n        ```\n\n        Get Snowflake connector scoped to a specified database:\n        ```python\n        from prefect_snowflake import SnowflakeCredentials\n\n        snowflake_credentials_block = SnowflakeCredentials.load(\"BLOCK_NAME\")\n\n        connection = snowflake_credentials_block.get_client(database=\"my_database\")\n        ```\n    \"\"\"  # noqa\n    connect_params = {\n        # required to track task's usage in the Snowflake Partner Network Portal\n        \"application\": \"Prefect_Snowflake_Collection\",\n        **self.dict(exclude_unset=True, exclude={\"block_type_slug\"}),\n        **connect_kwargs,\n    }\n\n    for key, value in connect_params.items():\n        if isinstance(value, SecretField):\n            connect_params[key] = connect_params[key].get_secret_value()\n\n    # set authenticator to the actual okta_endpoint\n    if connect_params.get(\"authenticator\") == \"okta_endpoint\":\n        endpoint = connect_params.pop(\"endpoint\", None) or connect_params.pop(\n            \"okta_endpoint\", None\n        )  # okta_endpoint is deprecated\n        connect_params[\"authenticator\"] = endpoint\n\n    private_der_key = self.resolve_private_key()\n    if private_der_key is not None:\n        connect_params[\"private_key\"] = private_der_key\n        connect_params.pop(\"password\", None)\n        connect_params.pop(\"private_key_passphrase\", None)\n\n    return snowflake.connector.connect(**connect_params)\n
    "},{"location":"integrations/prefect-snowflake/credentials/#prefect_snowflake.credentials.SnowflakeCredentials.resolve_private_key","title":"resolve_private_key","text":"

    Converts a PEM encoded private key into a DER binary key.

    Returns:

    | Type | Description |
    | --- | --- |
    | Optional[bytes] | DER encoded key if private_key has been provided, otherwise returns None. |

    Raises:

    | Type | Description |
    | --- | --- |
    | InvalidPemFormat | If private key is not in PEM format. |

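    A minimal usage sketch, assuming a saved credentials block that holds a PEM private_key:

    from prefect_snowflake import SnowflakeCredentials\n\ncredentials = SnowflakeCredentials.load(\"CREDENTIALS-BLOCK-NAME-PLACEHOLDER\")\nder_key = credentials.resolve_private_key()\nif der_key is not None:\n    # the PEM key was converted to DER-encoded bytes for the Snowflake connector\n    print(f\"Resolved a {len(der_key)}-byte DER key.\")\n
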
    Source code in prefect_snowflake/credentials.py
    def resolve_private_key(self) -> Optional[bytes]:\n    \"\"\"\n    Converts a PEM encoded private key into a DER binary key.\n\n    Returns:\n        DER encoded key if private_key has been provided otherwise returns None.\n\n    Raises:\n        InvalidPemFormat: If private key is not in PEM format.\n    \"\"\"\n    if self.private_key_path is None and self.private_key is None:\n        return None\n    elif self.private_key_path:\n        private_key = self.private_key_path.read_bytes()\n    else:\n        private_key = self._decode_secret(self.private_key)\n\n    if self.private_key_passphrase is not None:\n        password = self._decode_secret(self.private_key_passphrase)\n    elif self.password is not None:\n        warnings.warn(\n            \"Using the password field for private_key is deprecated \"\n            \"and will not work after March 31, 2023; please use \"\n            \"private_key_passphrase instead\",\n            DeprecationWarning,\n            stacklevel=2,\n        )\n        password = self._decode_secret(self.password)\n    else:\n        password = None\n\n    composed_private_key = self._compose_pem(private_key)\n    return load_pem_private_key(\n        data=composed_private_key,\n        password=password,\n        backend=default_backend(),\n    ).private_bytes(\n        encoding=Encoding.DER,\n        format=PrivateFormat.PKCS8,\n        encryption_algorithm=NoEncryption(),\n    )\n
    "},{"location":"integrations/prefect-snowflake/database/","title":"Database","text":""},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database","title":"prefect_snowflake.database","text":"

    Module for querying against Snowflake databases.

    "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.SnowflakeConnector","title":"SnowflakeConnector","text":"

    Bases: DatabaseBlock

    Block used to manage connections with Snowflake.

    Upon instantiating, a connection is created and maintained for the life of the object until the close method is called.

    It is recommended to use this block as a context manager, which will automatically close the engine and its connections when the context is exited.

    It is also recommended that this block is loaded and consumed within a single task or flow because if the block is passed across separate tasks and flows, the state of the block's connection and cursor will be lost.

    Parameters:

    | Name | Description | Default |
    | --- | --- | --- |
    | credentials | The credentials to authenticate with Snowflake. | required |
    | database | The name of the default database to use. | required |
    | warehouse | The name of the default warehouse to use. | required |
    | schema | The name of the default schema to use; this attribute is accessible through SnowflakeConnector(...).schema_. | required |
    | fetch_size | The number of rows to fetch at a time. | required |
    | poll_frequency_s | The number of seconds between checks of query status for long-running queries. | required |

    Examples:

    Load stored Snowflake connector as a context manager:

    from prefect_snowflake.database import SnowflakeConnector\n\nsnowflake_connector = SnowflakeConnector.load(\"BLOCK_NAME\")\n

    Insert data into database and fetch results.

    from prefect_snowflake.database import SnowflakeConnector\n\nwith SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n    conn.execute(\n        \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n    )\n    conn.execute_many(\n        \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n        seq_of_parameters=[\n            {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n            {\"name\": \"Unknown\", \"address\": \"Space\"},\n            {\"name\": \"Me\", \"address\": \"Myway 88\"},\n        ],\n    )\n    results = conn.fetch_all(\n        \"SELECT * FROM customers WHERE address = %(address)s\",\n        parameters={\"address\": \"Space\"}\n    )\n    print(results)\n

    Source code in prefect_snowflake/database.py
    class SnowflakeConnector(DatabaseBlock):\n    \"\"\"\n    Block used to manage connections with Snowflake.\n\n    Upon instantiating, a connection is created and maintained for the life of\n    the object until the close method is called.\n\n    It is recommended to use this block as a context manager, which will automatically\n    close the engine and its connections when the context is exited.\n\n    It is also recommended that this block is loaded and consumed within a single task\n    or flow because if the block is passed across separate tasks and flows,\n    the state of the block's connection and cursor will be lost.\n\n    Args:\n        credentials: The credentials to authenticate with Snowflake.\n        database: The name of the default database to use.\n        warehouse: The name of the default warehouse to use.\n        schema: The name of the default schema to use;\n            this attribute is accessible through `SnowflakeConnector(...).schema_`.\n        fetch_size: The number of rows to fetch at a time.\n        poll_frequency_s: The number of seconds before checking query.\n\n    Examples:\n        Load stored Snowflake connector as a context manager:\n        ```python\n        from prefect_snowflake.database import SnowflakeConnector\n\n        snowflake_connector = SnowflakeConnector.load(\"BLOCK_NAME\")\n        ```\n\n        Insert data into database and fetch results.\n        ```python\n        from prefect_snowflake.database import SnowflakeConnector\n\n        with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n            conn.execute(\n                \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n            )\n            conn.execute_many(\n                \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n                seq_of_parameters=[\n                    {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Unknown\", \"address\": \"Space\"},\n                    {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                ],\n            )\n            results = conn.fetch_all(\n                \"SELECT * FROM customers WHERE address = %(address)s\",\n                parameters={\"address\": \"Space\"}\n            )\n            print(results)\n        ```\n    \"\"\"  # noqa\n\n    _block_type_name = \"Snowflake Connector\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/bd359de0b4be76c2254bd329fe3a267a1a3879c2-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-snowflake/database/#prefect_snowflake.database.SnowflakeConnector\"  # noqa\n    _description = \"Perform data operations against a Snowflake database.\"\n\n    credentials: SnowflakeCredentials = Field(\n        default=..., description=\"The credentials to authenticate with Snowflake.\"\n    )\n    database: str = Field(\n        default=..., description=\"The name of the default database to use.\"\n    )\n    warehouse: str = Field(\n        default=..., description=\"The name of the default warehouse to use.\"\n    )\n    schema_: str = Field(\n        default=...,\n        alias=\"schema\",\n        description=\"The name of the default schema to use.\",\n    )\n    fetch_size: int = Field(\n        default=1, description=\"The default number of rows to fetch at a time.\"\n    )\n    poll_frequency_s: int = Field(\n        default=1,\n        title=\"Poll Frequency [seconds]\",\n        description=(\n            \"The number of seconds 
between checking query \"\n            \"status for long running queries.\"\n        ),\n    )\n\n    _connection: Optional[SnowflakeConnection] = None\n    _unique_cursors: Dict[str, SnowflakeCursor] = None\n\n    def get_connection(self, **connect_kwargs: Any) -> SnowflakeConnection:\n        \"\"\"\n        Returns an authenticated connection that can be\n        used to query from Snowflake databases.\n\n        Args:\n            **connect_kwargs: Additional arguments to pass to\n                `snowflake.connector.connect`.\n\n        Returns:\n            The authenticated SnowflakeConnection.\n\n        Examples:\n            ```python\n            from prefect_snowflake.credentials import SnowflakeCredentials\n            from prefect_snowflake.database import SnowflakeConnector\n\n            snowflake_credentials = SnowflakeCredentials(\n                account=\"account\",\n                user=\"user\",\n                password=\"password\",\n            )\n            snowflake_connector = SnowflakeConnector(\n                database=\"database\",\n                warehouse=\"warehouse\",\n                schema=\"schema\",\n                credentials=snowflake_credentials\n            )\n            with snowflake_connector.get_connection() as connection:\n                ...\n            ```\n        \"\"\"\n        if self._connection is not None:\n            return self._connection\n\n        connect_params = {\n            \"database\": self.database,\n            \"warehouse\": self.warehouse,\n            \"schema\": self.schema_,\n        }\n        connection = self.credentials.get_client(**connect_kwargs, **connect_params)\n        self._connection = connection\n        self.logger.info(\"Started a new connection to Snowflake.\")\n        return connection\n\n    def _start_connection(self):\n        \"\"\"\n        Starts Snowflake database connection.\n        \"\"\"\n        self.get_connection()\n        if self._unique_cursors is None:\n            self._unique_cursors = {}\n\n    def _get_cursor(\n        self,\n        inputs: Dict[str, Any],\n        cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n    ) -> Tuple[bool, SnowflakeCursor]:\n        \"\"\"\n        Get a Snowflake cursor.\n\n        Args:\n            inputs: The inputs to generate a unique hash, used to decide\n                whether a new cursor should be used.\n            cursor_type: The class of the cursor to use when creating a\n                Snowflake cursor.\n\n        Returns:\n            Whether a cursor is new and a Snowflake cursor.\n        \"\"\"\n        self._start_connection()\n\n        input_hash = hash_objects(inputs)\n        if input_hash is None:\n            raise RuntimeError(\n                \"We were not able to hash your inputs, \"\n                \"which resulted in an unexpected data return; \"\n                \"please open an issue with a reproducible example.\"\n            )\n        if input_hash not in self._unique_cursors.keys():\n            new_cursor = self._connection.cursor(cursor_type)\n            self._unique_cursors[input_hash] = new_cursor\n            return True, new_cursor\n        else:\n            existing_cursor = self._unique_cursors[input_hash]\n            return False, existing_cursor\n\n    async def _execute_async(self, cursor: SnowflakeCursor, inputs: Dict[str, Any]):\n        \"\"\"Helper method to execute operations asynchronously.\"\"\"\n        response = await run_sync_in_worker_thread(cursor.execute_async, 
**inputs)\n        self.logger.info(\n            f\"Executing the operation, {inputs['command']!r}, asynchronously; \"\n            f\"polling for the result every {self.poll_frequency_s} seconds.\"\n        )\n\n        query_id = response[\"queryId\"]\n        while self._connection.is_still_running(\n            await run_sync_in_worker_thread(\n                self._connection.get_query_status_throw_if_error, query_id\n            )\n        ):\n            await asyncio.sleep(self.poll_frequency_s)\n        await run_sync_in_worker_thread(cursor.get_results_from_sfqid, query_id)\n\n    def reset_cursors(self) -> None:\n        \"\"\"\n        Tries to close all opened cursors.\n\n        Examples:\n            Reset the cursors to refresh cursor position.\n            ```python\n            from prefect_snowflake.database import SnowflakeConnector\n\n            with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n                conn.execute(\n                    \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n                )\n                conn.execute_many(\n                    \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n                    seq_of_parameters=[\n                        {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Unknown\", \"address\": \"Space\"},\n                        {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                    ],\n                )\n                print(conn.fetch_one(\"SELECT * FROM customers\"))  # Ford\n                conn.reset_cursors()\n                print(conn.fetch_one(\"SELECT * FROM customers\"))  # should be Ford again\n            ```\n        \"\"\"  # noqa\n        if not self._unique_cursors:\n            self.logger.info(\"There were no cursors to reset.\")\n            return\n\n        input_hashes = tuple(self._unique_cursors.keys())\n        for input_hash in input_hashes:\n            cursor = self._unique_cursors.pop(input_hash)\n            try:\n                cursor.close()\n            except Exception as exc:\n                self.logger.warning(\n                    f\"Failed to close cursor for input hash {input_hash!r}: {exc}\"\n                )\n        self.logger.info(\"Successfully reset the cursors.\")\n\n    @sync_compatible\n    async def fetch_one(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n        **execute_kwargs: Any,\n    ) -> Tuple[Any]:\n        \"\"\"\n        Fetch a single result from the database.\n        Repeated calls using the same inputs to *any* of the fetch methods of this\n        block will skip executing the operation again, and instead,\n        return the next set of results from the previous execution,\n        until the reset_cursors method is called.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            cursor_type: The class of the cursor to use when creating a Snowflake cursor.\n            **execute_kwargs: Additional options to pass to `cursor.execute_async`.\n\n        Returns:\n            A tuple containing the data returned by the database,\n                where each row is a tuple and each column is a value in the tuple.\n\n        Examples:\n            Fetch one row from the database where address is Space.\n            
```python\n            from prefect_snowflake.database import SnowflakeConnector\n\n            with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n                conn.execute(\n                    \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n                )\n                conn.execute_many(\n                    \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n                    seq_of_parameters=[\n                        {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Unknown\", \"address\": \"Space\"},\n                        {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                    ],\n                )\n                result = conn.fetch_one(\n                    \"SELECT * FROM customers WHERE address = %(address)s\",\n                    parameters={\"address\": \"Space\"}\n                )\n                print(result)\n            ```\n        \"\"\"  # noqa\n        inputs = dict(\n            command=operation,\n            params=parameters,\n            **execute_kwargs,\n        )\n        new, cursor = self._get_cursor(inputs, cursor_type=cursor_type)\n        if new:\n            await self._execute_async(cursor, inputs)\n        self.logger.debug(\"Preparing to fetch a row.\")\n        result = await run_sync_in_worker_thread(cursor.fetchone)\n        return result\n\n    @sync_compatible\n    async def fetch_many(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        size: Optional[int] = None,\n        cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n        **execute_kwargs: Any,\n    ) -> List[Tuple[Any]]:\n        \"\"\"\n        Fetch a limited number of results from the database.\n        Repeated calls using the same inputs to *any* of the fetch methods of this\n        block will skip executing the operation again, and instead,\n        return the next set of results from the previous execution,\n        until the reset_cursors method is called.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            size: The number of results to return; if None or 0, uses the value of\n                `fetch_size` configured on the block.\n            cursor_type: The class of the cursor to use when creating a Snowflake cursor.\n            **execute_kwargs: Additional options to pass to `cursor.execute_async`.\n\n        Returns:\n            A list of tuples containing the data returned by the database,\n                where each row is a tuple and each column is a value in the tuple.\n\n        Examples:\n            Repeatedly fetch two rows from the database where address is Highway 42.\n            ```python\n            from prefect_snowflake.database import SnowflakeConnector\n\n            with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n                conn.execute(\n                    \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n                )\n                conn.execute_many(\n                    \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n                    seq_of_parameters=[\n                        {\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Unknown\", \"address\": \"Highway 42\"},\n         
               {\"name\": \"Me\", \"address\": \"Highway 42\"},\n                    ],\n                )\n                result = conn.fetch_many(\n                    \"SELECT * FROM customers WHERE address = %(address)s\",\n                    parameters={\"address\": \"Highway 42\"},\n                    size=2\n                )\n                print(result)  # Marvin, Ford\n                result = conn.fetch_many(\n                    \"SELECT * FROM customers WHERE address = %(address)s\",\n                    parameters={\"address\": \"Highway 42\"},\n                    size=2\n                )\n                print(result)  # Unknown, Me\n            ```\n        \"\"\"  # noqa\n        inputs = dict(\n            command=operation,\n            params=parameters,\n            **execute_kwargs,\n        )\n        new, cursor = self._get_cursor(inputs, cursor_type)\n        if new:\n            await self._execute_async(cursor, inputs)\n        size = size or self.fetch_size\n        self.logger.debug(f\"Preparing to fetch {size} rows.\")\n        result = await run_sync_in_worker_thread(cursor.fetchmany, size=size)\n        return result\n\n    @sync_compatible\n    async def fetch_all(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n        **execute_kwargs: Any,\n    ) -> List[Tuple[Any]]:\n        \"\"\"\n        Fetch all results from the database.\n        Repeated calls using the same inputs to *any* of the fetch methods of this\n        block will skip executing the operation again, and instead,\n        return the next set of results from the previous execution,\n        until the reset_cursors method is called.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            cursor_type: The class of the cursor to use when creating a Snowflake cursor.\n            **execute_kwargs: Additional options to pass to `cursor.execute_async`.\n\n        Returns:\n            A list of tuples containing the data returned by the database,\n                where each row is a tuple and each column is a value in the tuple.\n\n        Examples:\n            Fetch all rows from the database where address is Highway 42.\n            ```python\n            from prefect_snowflake.database import SnowflakeConnector\n\n            with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n                conn.execute(\n                    \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n                )\n                conn.execute_many(\n                    \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n                    seq_of_parameters=[\n                        {\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Unknown\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                    ],\n                )\n                result = conn.fetch_all(\n                    \"SELECT * FROM customers WHERE address = %(address)s\",\n                    parameters={\"address\": \"Highway 42\"},\n                )\n                print(result)  # Marvin, Ford, Unknown\n            ```\n        \"\"\"  # noqa\n        inputs = dict(\n            
command=operation,\n            params=parameters,\n            **execute_kwargs,\n        )\n        new, cursor = self._get_cursor(inputs, cursor_type)\n        if new:\n            await self._execute_async(cursor, inputs)\n        self.logger.debug(\"Preparing to fetch all rows.\")\n        result = await run_sync_in_worker_thread(cursor.fetchall)\n        return result\n\n    @sync_compatible\n    async def execute(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n        **execute_kwargs: Any,\n    ) -> None:\n        \"\"\"\n        Executes an operation on the database. This method is intended to be used\n        for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n        Unlike the fetch methods, this method will always execute the operation\n        upon calling.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            cursor_type: The class of the cursor to use when creating a Snowflake cursor.\n            **execute_kwargs: Additional options to pass to `cursor.execute_async`.\n\n        Examples:\n            Create table named customers with two columns, name and address.\n            ```python\n            from prefect_snowflake.database import SnowflakeConnector\n\n            with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n                conn.execute(\n                    \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n                )\n            ```\n        \"\"\"  # noqa\n        self._start_connection()\n\n        inputs = dict(\n            command=operation,\n            params=parameters,\n            **execute_kwargs,\n        )\n        with self._connection.cursor(cursor_type) as cursor:\n            await run_sync_in_worker_thread(cursor.execute, **inputs)\n        self.logger.info(f\"Executed the operation, {operation!r}.\")\n\n    @sync_compatible\n    async def execute_many(\n        self,\n        operation: str,\n        seq_of_parameters: List[Dict[str, Any]],\n    ) -> None:\n        \"\"\"\n        Executes many operations on the database. 
This method is intended to be used\n        for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n        Unlike the fetch methods, this method will always execute the operations\n        upon calling.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            seq_of_parameters: The sequence of parameters for the operation.\n\n        Examples:\n            Create table and insert three rows into it.\n            ```python\n            from prefect_snowflake.database import SnowflakeConnector\n\n            with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n                conn.execute(\n                    \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n                )\n                conn.execute_many(\n                    \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n                    seq_of_parameters=[\n                        {\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Unknown\", \"address\": \"Space\"},\n                    ],\n                )\n            ```\n        \"\"\"  # noqa\n        self._start_connection()\n\n        inputs = dict(\n            command=operation,\n            seqparams=seq_of_parameters,\n        )\n        with self._connection.cursor() as cursor:\n            await run_sync_in_worker_thread(cursor.executemany, **inputs)\n        self.logger.info(\n            f\"Executed {len(seq_of_parameters)} operations off {operation!r}.\"\n        )\n\n    def close(self):\n        \"\"\"\n        Closes connection and its cursors.\n        \"\"\"\n        try:\n            self.reset_cursors()\n        finally:\n            if self._connection is None:\n                self.logger.info(\"There was no connection open to be closed.\")\n                return\n            self._connection.close()\n            self._connection = None\n            self.logger.info(\"Successfully closed the Snowflake connection.\")\n\n    def __enter__(self):\n        \"\"\"\n        Start a connection upon entry.\n        \"\"\"\n        return self\n\n    def __exit__(self, *args):\n        \"\"\"\n        Closes connection and its cursors upon exit.\n        \"\"\"\n        self.close()\n\n    def __getstate__(self):\n        \"\"\"Allows block to be pickled and dumped.\"\"\"\n        data = self.__dict__.copy()\n        data.update({k: None for k in {\"_connection\", \"_unique_cursors\"}})\n        return data\n\n    def __setstate__(self, data: dict):\n        \"\"\"Reset connection and cursors upon loading.\"\"\"\n        self.__dict__.update(data)\n        self._start_connection()\n
    "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.SnowflakeConnector.close","title":"close","text":"

    Closes connection and its cursors.

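    A minimal sketch (assuming a saved connector block named BLOCK_NAME) of closing the connection explicitly when the block is not used as a context manager:

    from prefect_snowflake.database import SnowflakeConnector\n\nconn = SnowflakeConnector.load(\"BLOCK_NAME\")\ntry:\n    conn.execute(\n        \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n    )\nfinally:\n    # releases the underlying Snowflake connection and any open cursors\n    conn.close()\n
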
    Source code in prefect_snowflake/database.py
    def close(self):\n    \"\"\"\n    Closes connection and its cursors.\n    \"\"\"\n    try:\n        self.reset_cursors()\n    finally:\n        if self._connection is None:\n            self.logger.info(\"There was no connection open to be closed.\")\n            return\n        self._connection.close()\n        self._connection = None\n        self.logger.info(\"Successfully closed the Snowflake connection.\")\n
    "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.SnowflakeConnector.execute","title":"execute async","text":"

    Executes an operation on the database. This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE. Unlike the fetch methods, this method will always execute the operation upon calling.

    Parameters:

    | Name | Type | Description | Default |
    | --- | --- | --- | --- |
    | operation | str | The SQL query or other operation to be executed. | required |
    | parameters | Optional[Dict[str, Any]] | The parameters for the operation. | None |
    | cursor_type | Type[SnowflakeCursor] | The class of the cursor to use when creating a Snowflake cursor. | SnowflakeCursor |
    | **execute_kwargs | Any | Additional options to pass to cursor.execute_async. | {} |

    Examples:

    Create table named customers with two columns, name and address.

    from prefect_snowflake.database import SnowflakeConnector\n\nwith SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n    conn.execute(\n        \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n    )\n

    Source code in prefect_snowflake/database.py
    @sync_compatible\nasync def execute(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n    **execute_kwargs: Any,\n) -> None:\n    \"\"\"\n    Executes an operation on the database. This method is intended to be used\n    for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n    Unlike the fetch methods, this method will always execute the operation\n    upon calling.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        cursor_type: The class of the cursor to use when creating a Snowflake cursor.\n        **execute_kwargs: Additional options to pass to `cursor.execute_async`.\n\n    Examples:\n        Create table named customers with two columns, name and address.\n        ```python\n        from prefect_snowflake.database import SnowflakeConnector\n\n        with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n            conn.execute(\n                \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n            )\n        ```\n    \"\"\"  # noqa\n    self._start_connection()\n\n    inputs = dict(\n        command=operation,\n        params=parameters,\n        **execute_kwargs,\n    )\n    with self._connection.cursor(cursor_type) as cursor:\n        await run_sync_in_worker_thread(cursor.execute, **inputs)\n    self.logger.info(f\"Executed the operation, {operation!r}.\")\n
    "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.SnowflakeConnector.execute_many","title":"execute_many async","text":"

    Executes many operations on the database. This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE. Unlike the fetch methods, this method will always execute the operations upon calling.

    Parameters:

    | Name | Type | Description | Default |
    | --- | --- | --- | --- |
    | operation | str | The SQL query or other operation to be executed. | required |
    | seq_of_parameters | List[Dict[str, Any]] | The sequence of parameters for the operation. | required |

    Examples:

    Create table and insert three rows into it.

    from prefect_snowflake.database import SnowflakeConnector\n\nwith SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n    conn.execute(\n        \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n    )\n    conn.execute_many(\n        \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n        seq_of_parameters=[\n            {\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n            {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n            {\"name\": \"Unknown\", \"address\": \"Space\"},\n        ],\n    )\n

    Source code in prefect_snowflake/database.py
    @sync_compatible\nasync def execute_many(\n    self,\n    operation: str,\n    seq_of_parameters: List[Dict[str, Any]],\n) -> None:\n    \"\"\"\n    Executes many operations on the database. This method is intended to be used\n    for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n    Unlike the fetch methods, this method will always execute the operations\n    upon calling.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        seq_of_parameters: The sequence of parameters for the operation.\n\n    Examples:\n        Create table and insert three rows into it.\n        ```python\n        from prefect_snowflake.database import SnowflakeConnector\n\n        with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n            conn.execute(\n                \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n            )\n            conn.execute_many(\n                \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n                seq_of_parameters=[\n                    {\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Unknown\", \"address\": \"Space\"},\n                ],\n            )\n        ```\n    \"\"\"  # noqa\n    self._start_connection()\n\n    inputs = dict(\n        command=operation,\n        seqparams=seq_of_parameters,\n    )\n    with self._connection.cursor() as cursor:\n        await run_sync_in_worker_thread(cursor.executemany, **inputs)\n    self.logger.info(\n        f\"Executed {len(seq_of_parameters)} operations off {operation!r}.\"\n    )\n
    "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.SnowflakeConnector.fetch_all","title":"fetch_all async","text":"

    Fetch all results from the database. Repeated calls using the same inputs to any of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset_cursors method is called.

    Parameters:

    | Name | Type | Description | Default |
    | --- | --- | --- | --- |
    | operation | str | The SQL query or other operation to be executed. | required |
    | parameters | Optional[Dict[str, Any]] | The parameters for the operation. | None |
    | cursor_type | Type[SnowflakeCursor] | The class of the cursor to use when creating a Snowflake cursor. | SnowflakeCursor |
    | **execute_kwargs | Any | Additional options to pass to cursor.execute_async. | {} |

    Returns:

    | Type | Description |
    | --- | --- |
    | List[Tuple[Any]] | A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple. |

    Examples:

    Fetch all rows from the database where address is Highway 42.

    from prefect_snowflake.database import SnowflakeConnector\n\nwith SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n    conn.execute(\n        \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n    )\n    conn.execute_many(\n        \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n        seq_of_parameters=[\n            {\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n            {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n            {\"name\": \"Unknown\", \"address\": \"Highway 42\"},\n            {\"name\": \"Me\", \"address\": \"Myway 88\"},\n        ],\n    )\n    result = conn.fetch_all(\n        \"SELECT * FROM customers WHERE address = %(address)s\",\n        parameters={\"address\": \"Highway 42\"},\n    )\n    print(result)  # Marvin, Ford, Unknown\n

    Source code in prefect_snowflake/database.py
    @sync_compatible\nasync def fetch_all(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n    **execute_kwargs: Any,\n) -> List[Tuple[Any]]:\n    \"\"\"\n    Fetch all results from the database.\n    Repeated calls using the same inputs to *any* of the fetch methods of this\n    block will skip executing the operation again, and instead,\n    return the next set of results from the previous execution,\n    until the reset_cursors method is called.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        cursor_type: The class of the cursor to use when creating a Snowflake cursor.\n        **execute_kwargs: Additional options to pass to `cursor.execute_async`.\n\n    Returns:\n        A list of tuples containing the data returned by the database,\n            where each row is a tuple and each column is a value in the tuple.\n\n    Examples:\n        Fetch all rows from the database where address is Highway 42.\n        ```python\n        from prefect_snowflake.database import SnowflakeConnector\n\n        with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n            conn.execute(\n                \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n            )\n            conn.execute_many(\n                \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n                seq_of_parameters=[\n                    {\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Unknown\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                ],\n            )\n            result = conn.fetch_all(\n                \"SELECT * FROM customers WHERE address = %(address)s\",\n                parameters={\"address\": \"Highway 42\"},\n            )\n            print(result)  # Marvin, Ford, Unknown\n        ```\n    \"\"\"  # noqa\n    inputs = dict(\n        command=operation,\n        params=parameters,\n        **execute_kwargs,\n    )\n    new, cursor = self._get_cursor(inputs, cursor_type)\n    if new:\n        await self._execute_async(cursor, inputs)\n    self.logger.debug(\"Preparing to fetch all rows.\")\n    result = await run_sync_in_worker_thread(cursor.fetchall)\n    return result\n
    "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.SnowflakeConnector.fetch_many","title":"fetch_many async","text":"

    Fetch a limited number of results from the database. Repeated calls using the same inputs to any of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset_cursors method is called.

    Parameters:

        operation (str, required): The SQL query or other operation to be executed.
        parameters (Optional[Dict[str, Any]], default None): The parameters for the operation.
        size (Optional[int], default None): The number of results to return; if None or 0, uses the value of fetch_size configured on the block.
        cursor_type (Type[SnowflakeCursor], default SnowflakeCursor): The class of the cursor to use when creating a Snowflake cursor.
        **execute_kwargs (Any, default {}): Additional options to pass to cursor.execute_async.

    Returns:

        List[Tuple[Any]]: A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple.

    Examples:

    Repeatedly fetch two rows from the database where address is Highway 42.

    from prefect_snowflake.database import SnowflakeConnector\n\nwith SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n    conn.execute(\n        \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n    )\n    conn.execute_many(\n        \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n        seq_of_parameters=[\n            {\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n            {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n            {\"name\": \"Unknown\", \"address\": \"Highway 42\"},\n            {\"name\": \"Me\", \"address\": \"Highway 42\"},\n        ],\n    )\n    result = conn.fetch_many(\n        \"SELECT * FROM customers WHERE address = %(address)s\",\n        parameters={\"address\": \"Highway 42\"},\n        size=2\n    )\n    print(result)  # Marvin, Ford\n    result = conn.fetch_many(\n        \"SELECT * FROM customers WHERE address = %(address)s\",\n        parameters={\"address\": \"Highway 42\"},\n        size=2\n    )\n    print(result)  # Unknown, Me\n

    Source code in prefect_snowflake/database.py
    @sync_compatible\nasync def fetch_many(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    size: Optional[int] = None,\n    cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n    **execute_kwargs: Any,\n) -> List[Tuple[Any]]:\n    \"\"\"\n    Fetch a limited number of results from the database.\n    Repeated calls using the same inputs to *any* of the fetch methods of this\n    block will skip executing the operation again, and instead,\n    return the next set of results from the previous execution,\n    until the reset_cursors method is called.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        size: The number of results to return; if None or 0, uses the value of\n            `fetch_size` configured on the block.\n        cursor_type: The class of the cursor to use when creating a Snowflake cursor.\n        **execute_kwargs: Additional options to pass to `cursor.execute_async`.\n\n    Returns:\n        A list of tuples containing the data returned by the database,\n            where each row is a tuple and each column is a value in the tuple.\n\n    Examples:\n        Repeatedly fetch two rows from the database where address is Highway 42.\n        ```python\n        from prefect_snowflake.database import SnowflakeConnector\n\n        with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n            conn.execute(\n                \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n            )\n            conn.execute_many(\n                \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n                seq_of_parameters=[\n                    {\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Unknown\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Me\", \"address\": \"Highway 42\"},\n                ],\n            )\n            result = conn.fetch_many(\n                \"SELECT * FROM customers WHERE address = %(address)s\",\n                parameters={\"address\": \"Highway 42\"},\n                size=2\n            )\n            print(result)  # Marvin, Ford\n            result = conn.fetch_many(\n                \"SELECT * FROM customers WHERE address = %(address)s\",\n                parameters={\"address\": \"Highway 42\"},\n                size=2\n            )\n            print(result)  # Unknown, Me\n        ```\n    \"\"\"  # noqa\n    inputs = dict(\n        command=operation,\n        params=parameters,\n        **execute_kwargs,\n    )\n    new, cursor = self._get_cursor(inputs, cursor_type)\n    if new:\n        await self._execute_async(cursor, inputs)\n    size = size or self.fetch_size\n    self.logger.debug(f\"Preparing to fetch {size} rows.\")\n    result = await run_sync_in_worker_thread(cursor.fetchmany, size=size)\n    return result\n
    "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.SnowflakeConnector.fetch_one","title":"fetch_one async","text":"

    Fetch a single result from the database. Repeated calls using the same inputs to any of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset_cursors method is called.

    Parameters:

        operation (str, required): The SQL query or other operation to be executed.
        parameters (Optional[Dict[str, Any]], default None): The parameters for the operation.
        cursor_type (Type[SnowflakeCursor], default SnowflakeCursor): The class of the cursor to use when creating a Snowflake cursor.
        **execute_kwargs (Any, default {}): Additional options to pass to cursor.execute_async.

    Returns:

        Tuple[Any]: A tuple containing the data returned by the database, where each row is a tuple and each column is a value in the tuple.

    Examples:

    Fetch one row from the database where address is Space.

    from prefect_snowflake.database import SnowflakeConnector\n\nwith SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n    conn.execute(\n        \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n    )\n    conn.execute_many(\n        \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n        seq_of_parameters=[\n            {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n            {\"name\": \"Unknown\", \"address\": \"Space\"},\n            {\"name\": \"Me\", \"address\": \"Myway 88\"},\n        ],\n    )\n    result = conn.fetch_one(\n        \"SELECT * FROM customers WHERE address = %(address)s\",\n        parameters={\"address\": \"Space\"}\n    )\n    print(result)\n

    Source code in prefect_snowflake/database.py
    @sync_compatible\nasync def fetch_one(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n    **execute_kwargs: Any,\n) -> Tuple[Any]:\n    \"\"\"\n    Fetch a single result from the database.\n    Repeated calls using the same inputs to *any* of the fetch methods of this\n    block will skip executing the operation again, and instead,\n    return the next set of results from the previous execution,\n    until the reset_cursors method is called.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        cursor_type: The class of the cursor to use when creating a Snowflake cursor.\n        **execute_kwargs: Additional options to pass to `cursor.execute_async`.\n\n    Returns:\n        A tuple containing the data returned by the database,\n            where each row is a tuple and each column is a value in the tuple.\n\n    Examples:\n        Fetch one row from the database where address is Space.\n        ```python\n        from prefect_snowflake.database import SnowflakeConnector\n\n        with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n            conn.execute(\n                \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n            )\n            conn.execute_many(\n                \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n                seq_of_parameters=[\n                    {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Unknown\", \"address\": \"Space\"},\n                    {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                ],\n            )\n            result = conn.fetch_one(\n                \"SELECT * FROM customers WHERE address = %(address)s\",\n                parameters={\"address\": \"Space\"}\n            )\n            print(result)\n        ```\n    \"\"\"  # noqa\n    inputs = dict(\n        command=operation,\n        params=parameters,\n        **execute_kwargs,\n    )\n    new, cursor = self._get_cursor(inputs, cursor_type=cursor_type)\n    if new:\n        await self._execute_async(cursor, inputs)\n    self.logger.debug(\"Preparing to fetch a row.\")\n    result = await run_sync_in_worker_thread(cursor.fetchone)\n    return result\n
    "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.SnowflakeConnector.get_connection","title":"get_connection","text":"

    Returns an authenticated connection that can be used to query from Snowflake databases.

    Parameters:

        **connect_kwargs (Any, default {}): Additional arguments to pass to snowflake.connector.connect.

    Returns:

        SnowflakeConnection: The authenticated SnowflakeConnection.

    Examples:

    from prefect_snowflake.credentials import SnowflakeCredentials\nfrom prefect_snowflake.database import SnowflakeConnector\n\nsnowflake_credentials = SnowflakeCredentials(\n    account=\"account\",\n    user=\"user\",\n    password=\"password\",\n)\nsnowflake_connector = SnowflakeConnector(\n    database=\"database\",\n    warehouse=\"warehouse\",\n    schema=\"schema\",\n    credentials=snowflake_credentials\n)\nwith snowflake_connector.get_connection() as connection:\n    ...\n
    Source code in prefect_snowflake/database.py
    def get_connection(self, **connect_kwargs: Any) -> SnowflakeConnection:\n    \"\"\"\n    Returns an authenticated connection that can be\n    used to query from Snowflake databases.\n\n    Args:\n        **connect_kwargs: Additional arguments to pass to\n            `snowflake.connector.connect`.\n\n    Returns:\n        The authenticated SnowflakeConnection.\n\n    Examples:\n        ```python\n        from prefect_snowflake.credentials import SnowflakeCredentials\n        from prefect_snowflake.database import SnowflakeConnector\n\n        snowflake_credentials = SnowflakeCredentials(\n            account=\"account\",\n            user=\"user\",\n            password=\"password\",\n        )\n        snowflake_connector = SnowflakeConnector(\n            database=\"database\",\n            warehouse=\"warehouse\",\n            schema=\"schema\",\n            credentials=snowflake_credentials\n        )\n        with snowflake_connector.get_connection() as connection:\n            ...\n        ```\n    \"\"\"\n    if self._connection is not None:\n        return self._connection\n\n    connect_params = {\n        \"database\": self.database,\n        \"warehouse\": self.warehouse,\n        \"schema\": self.schema_,\n    }\n    connection = self.credentials.get_client(**connect_kwargs, **connect_params)\n    self._connection = connection\n    self.logger.info(\"Started a new connection to Snowflake.\")\n    return connection\n
    "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.SnowflakeConnector.reset_cursors","title":"reset_cursors","text":"

    Tries to close all opened cursors.

    Examples:

    Reset the cursors to refresh cursor position.

    from prefect_snowflake.database import SnowflakeConnector\n\nwith SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n    conn.execute(\n        \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n    )\n    conn.execute_many(\n        \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n        seq_of_parameters=[\n            {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n            {\"name\": \"Unknown\", \"address\": \"Space\"},\n            {\"name\": \"Me\", \"address\": \"Myway 88\"},\n        ],\n    )\n    print(conn.fetch_one(\"SELECT * FROM customers\"))  # Ford\n    conn.reset_cursors()\n    print(conn.fetch_one(\"SELECT * FROM customers\"))  # should be Ford again\n

    Source code in prefect_snowflake/database.py
    def reset_cursors(self) -> None:\n    \"\"\"\n    Tries to close all opened cursors.\n\n    Examples:\n        Reset the cursors to refresh cursor position.\n        ```python\n        from prefect_snowflake.database import SnowflakeConnector\n\n        with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n            conn.execute(\n                \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n            )\n            conn.execute_many(\n                \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n                seq_of_parameters=[\n                    {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Unknown\", \"address\": \"Space\"},\n                    {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                ],\n            )\n            print(conn.fetch_one(\"SELECT * FROM customers\"))  # Ford\n            conn.reset_cursors()\n            print(conn.fetch_one(\"SELECT * FROM customers\"))  # should be Ford again\n        ```\n    \"\"\"  # noqa\n    if not self._unique_cursors:\n        self.logger.info(\"There were no cursors to reset.\")\n        return\n\n    input_hashes = tuple(self._unique_cursors.keys())\n    for input_hash in input_hashes:\n        cursor = self._unique_cursors.pop(input_hash)\n        try:\n            cursor.close()\n        except Exception as exc:\n            self.logger.warning(\n                f\"Failed to close cursor for input hash {input_hash!r}: {exc}\"\n            )\n    self.logger.info(\"Successfully reset the cursors.\")\n
    "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.snowflake_multiquery","title":"snowflake_multiquery async","text":"

    Executes multiple queries against a Snowflake database in a shared session. Allows execution in a transaction.

    Parameters:

        queries (List[str], required): The list of queries to execute against the database.
        params (Union[Tuple[Any], Dict[str, Any]], default None): The params to replace the placeholders in the query.
        snowflake_connector (SnowflakeConnector, required): The credentials to use to authenticate.
        cursor_type (Type[SnowflakeCursor], default SnowflakeCursor): The type of database cursor to use for the query.
        as_transaction (bool, default False): If True, queries are executed in a transaction.
        return_transaction_control_results (bool, default False): Determines if the results of queries controlling the transaction (BEGIN/COMMIT) should be returned.
        poll_frequency_seconds (int, default 1): Number of seconds to wait in between checks for run completion.

    Returns:

        List[List[Tuple[Any]]]: List of the outputs of response.fetchall() for each query.

    Examples:

    Query Snowflake table with the ID value parameterized.

    from prefect import flow\nfrom prefect_snowflake.credentials import SnowflakeCredentials\nfrom prefect_snowflake.database import SnowflakeConnector, snowflake_multiquery\n\n\n@flow\ndef snowflake_multiquery_flow():\n    snowflake_credentials = SnowflakeCredentials(\n        account=\"account\",\n        user=\"user\",\n        password=\"password\",\n    )\n    snowflake_connector = SnowflakeConnector(\n        database=\"database\",\n        warehouse=\"warehouse\",\n        schema=\"schema\",\n        credentials=snowflake_credentials\n    )\n    result = snowflake_multiquery(\n        [\"SELECT * FROM table WHERE id=%{id_param}s LIMIT 8;\", \"SELECT 1,2\"],\n        snowflake_connector,\n        params={\"id_param\": 1},\n        as_transaction=True\n    )\n    return result\n\nsnowflake_multiquery_flow()\n

    Source code in prefect_snowflake/database.py
    @task\nasync def snowflake_multiquery(\n    queries: List[str],\n    snowflake_connector: SnowflakeConnector,\n    params: Union[Tuple[Any], Dict[str, Any]] = None,\n    cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n    as_transaction: bool = False,\n    return_transaction_control_results: bool = False,\n    poll_frequency_seconds: int = 1,\n) -> List[List[Tuple[Any]]]:\n    \"\"\"\n    Executes multiple queries against a Snowflake database in a shared session.\n    Allows execution in a transaction.\n\n    Args:\n        queries: The list of queries to execute against the database.\n        params: The params to replace the placeholders in the query.\n        snowflake_connector: The credentials to use to authenticate.\n        cursor_type: The type of database cursor to use for the query.\n        as_transaction: If True, queries are executed in a transaction.\n        return_transaction_control_results: Determines if the results of queries\n            controlling the transaction (BEGIN/COMMIT) should be returned.\n        poll_frequency_seconds: Number of seconds to wait in between checks for\n            run completion.\n\n    Returns:\n        List of the outputs of `response.fetchall()` for each query.\n\n    Examples:\n        Query Snowflake table with the ID value parameterized.\n        ```python\n        from prefect import flow\n        from prefect_snowflake.credentials import SnowflakeCredentials\n        from prefect_snowflake.database import SnowflakeConnector, snowflake_multiquery\n\n\n        @flow\n        def snowflake_multiquery_flow():\n            snowflake_credentials = SnowflakeCredentials(\n                account=\"account\",\n                user=\"user\",\n                password=\"password\",\n            )\n            snowflake_connector = SnowflakeConnector(\n                database=\"database\",\n                warehouse=\"warehouse\",\n                schema=\"schema\",\n                credentials=snowflake_credentials\n            )\n            result = snowflake_multiquery(\n                [\"SELECT * FROM table WHERE id=%{id_param}s LIMIT 8;\", \"SELECT 1,2\"],\n                snowflake_connector,\n                params={\"id_param\": 1},\n                as_transaction=True\n            )\n            return result\n\n        snowflake_multiquery_flow()\n        ```\n    \"\"\"\n    with snowflake_connector.get_connection() as connection:\n        if as_transaction:\n            queries.insert(0, BEGIN_TRANSACTION_STATEMENT)\n            queries.append(END_TRANSACTION_STATEMENT)\n\n        with connection.cursor(cursor_type) as cursor:\n            results = []\n            for query in queries:\n                response = cursor.execute_async(query, params=params)\n                query_id = response[\"queryId\"]\n                while connection.is_still_running(\n                    connection.get_query_status_throw_if_error(query_id)\n                ):\n                    await asyncio.sleep(poll_frequency_seconds)\n                cursor.get_results_from_sfqid(query_id)\n                result = cursor.fetchall()\n                results.append(result)\n\n    # cut off results from BEGIN/COMMIT queries\n    if as_transaction and not return_transaction_control_results:\n        return results[1:-1]\n    else:\n        return results\n
    "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.snowflake_query","title":"snowflake_query async","text":"

    Executes a query against a Snowflake database.

    Parameters:

        query (str, required): The query to execute against the database.
        params (Union[Tuple[Any], Dict[str, Any]], default None): The params to replace the placeholders in the query.
        snowflake_connector (SnowflakeConnector, required): The credentials to use to authenticate.
        cursor_type (Type[SnowflakeCursor], default SnowflakeCursor): The type of database cursor to use for the query.
        poll_frequency_seconds (int, default 1): Number of seconds to wait in between checks for run completion.

    Returns:

        List[Tuple[Any]]: The output of response.fetchall().

    Examples:

    Query Snowflake table with the ID value parameterized.

    from prefect import flow\nfrom prefect_snowflake.credentials import SnowflakeCredentials\nfrom prefect_snowflake.database import SnowflakeConnector, snowflake_query\n\n\n@flow\ndef snowflake_query_flow():\n    snowflake_credentials = SnowflakeCredentials(\n        account=\"account\",\n        user=\"user\",\n        password=\"password\",\n    )\n    snowflake_connector = SnowflakeConnector(\n        database=\"database\",\n        warehouse=\"warehouse\",\n        schema=\"schema\",\n        credentials=snowflake_credentials\n    )\n    result = snowflake_query(\n        \"SELECT * FROM table WHERE id=%{id_param}s LIMIT 8;\",\n        snowflake_connector,\n        params={\"id_param\": 1}\n    )\n    return result\n\nsnowflake_query_flow()\n

    Source code in prefect_snowflake/database.py
    @task\nasync def snowflake_query(\n    query: str,\n    snowflake_connector: SnowflakeConnector,\n    params: Union[Tuple[Any], Dict[str, Any]] = None,\n    cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n    poll_frequency_seconds: int = 1,\n) -> List[Tuple[Any]]:\n    \"\"\"\n    Executes a query against a Snowflake database.\n\n    Args:\n        query: The query to execute against the database.\n        params: The params to replace the placeholders in the query.\n        snowflake_connector: The credentials to use to authenticate.\n        cursor_type: The type of database cursor to use for the query.\n        poll_frequency_seconds: Number of seconds to wait in between checks for\n            run completion.\n\n    Returns:\n        The output of `response.fetchall()`.\n\n    Examples:\n        Query Snowflake table with the ID value parameterized.\n        ```python\n        from prefect import flow\n        from prefect_snowflake.credentials import SnowflakeCredentials\n        from prefect_snowflake.database import SnowflakeConnector, snowflake_query\n\n\n        @flow\n        def snowflake_query_flow():\n            snowflake_credentials = SnowflakeCredentials(\n                account=\"account\",\n                user=\"user\",\n                password=\"password\",\n            )\n            snowflake_connector = SnowflakeConnector(\n                database=\"database\",\n                warehouse=\"warehouse\",\n                schema=\"schema\",\n                credentials=snowflake_credentials\n            )\n            result = snowflake_query(\n                \"SELECT * FROM table WHERE id=%{id_param}s LIMIT 8;\",\n                snowflake_connector,\n                params={\"id_param\": 1}\n            )\n            return result\n\n        snowflake_query_flow()\n        ```\n    \"\"\"\n    # context manager automatically rolls back failed transactions and closes\n    with snowflake_connector.get_connection() as connection:\n        with connection.cursor(cursor_type) as cursor:\n            response = cursor.execute_async(query, params=params)\n            query_id = response[\"queryId\"]\n            while connection.is_still_running(\n                connection.get_query_status_throw_if_error(query_id)\n            ):\n                await asyncio.sleep(poll_frequency_seconds)\n            cursor.get_results_from_sfqid(query_id)\n            result = cursor.fetchall()\n    return result\n
    "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.snowflake_query_sync","title":"snowflake_query_sync async","text":"

    Executes a query in sync mode against a Snowflake database.

    Parameters:

        query (str, required): The query to execute against the database.
        params (Union[Tuple[Any], Dict[str, Any]], default None): The params to replace the placeholders in the query.
        snowflake_connector (SnowflakeConnector, required): The credentials to use to authenticate.
        cursor_type (Type[SnowflakeCursor], default SnowflakeCursor): The type of database cursor to use for the query.

    Returns:

        List[Tuple[Any]]: The output of response.fetchall().

    Examples:

    Execute a put statement.

    from prefect import flow\nfrom prefect_snowflake.credentials import SnowflakeCredentials\nfrom prefect_snowflake.database import SnowflakeConnector, snowflake_query_sync\n\n\n@flow\ndef snowflake_query_sync_flow():\n    snowflake_credentials = SnowflakeCredentials(\n        account=\"account\",\n        user=\"user\",\n        password=\"password\",\n    )\n    snowflake_connector = SnowflakeConnector(\n        database=\"database\",\n        warehouse=\"warehouse\",\n        schema=\"schema\",\n        credentials=snowflake_credentials\n    )\n    result = snowflake_query_sync(\n        \"put file://a_file.csv @mystage;\",\n        snowflake_connector,\n    )\n    return result\n\nsnowflake_query_sync_flow()\n

    Source code in prefect_snowflake/database.py
    @task\nasync def snowflake_query_sync(\n    query: str,\n    snowflake_connector: SnowflakeConnector,\n    params: Union[Tuple[Any], Dict[str, Any]] = None,\n    cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n) -> List[Tuple[Any]]:\n    \"\"\"\n    Executes a query in sync mode against a Snowflake database.\n\n    Args:\n        query: The query to execute against the database.\n        params: The params to replace the placeholders in the query.\n        snowflake_connector: The credentials to use to authenticate.\n        cursor_type: The type of database cursor to use for the query.\n\n    Returns:\n        The output of `response.fetchall()`.\n\n    Examples:\n        Execute a put statement.\n        ```python\n        from prefect import flow\n        from prefect_snowflake.credentials import SnowflakeCredentials\n        from prefect_snowflake.database import SnowflakeConnector, snowflake_query\n\n\n        @flow\n        def snowflake_query_sync_flow():\n            snowflake_credentials = SnowflakeCredentials(\n                account=\"account\",\n                user=\"user\",\n                password=\"password\",\n            )\n            snowflake_connector = SnowflakeConnector(\n                database=\"database\",\n                warehouse=\"warehouse\",\n                schema=\"schema\",\n                credentials=snowflake_credentials\n            )\n            result = snowflake_query_sync(\n                \"put file://a_file.csv @mystage;\",\n                snowflake_connector,\n            )\n            return result\n\n        snowflake_query_sync_flow()\n        ```\n    \"\"\"\n    # context manager automatically rolls back failed transactions and closes\n    with snowflake_connector.get_connection() as connection:\n        with connection.cursor(cursor_type) as cursor:\n            cursor.execute(query, params=params)\n            result = cursor.fetchall()\n    return result\n
    "},{"location":"integrations/prefect-sqlalchemy/","title":"prefect-sqlalchemy","text":"

    Visit the full docs here to see additional examples and the API reference.

    The prefect-sqlalchemy collection makes it easy to connect to a database in your Prefect flows. Check out the examples below to get started!

    "},{"location":"integrations/prefect-sqlalchemy/#getting-started","title":"Getting started","text":""},{"location":"integrations/prefect-sqlalchemy/#integrate-with-prefect-flows","title":"Integrate with Prefect flows","text":"

    Prefect and SQLAlchemy are a data powerhouse duo. With Prefect, your workflows are orchestratable and observable, and with SQLAlchemy, your databases are a snap to handle! Get ready to experience the ultimate data \"flow-chemistry\"!

    To set up a table, use the execute and execute_many methods. Then, use the fetch_many method to retrieve data in a stream until there's no more data.

    By using the SqlAlchemyConnector as a context manager, you can make sure that the SQLAlchemy engine and any connected resources are closed properly after you're done with them.

    Be sure to install prefect-sqlalchemy and save your credentials to a Prefect block to run the examples below!

    Async support

    SqlAlchemyConnector also supports async workflows! Just be sure to save, load, and use an async driver as in the example below.

    from prefect_sqlalchemy import SqlAlchemyConnector, ConnectionComponents, AsyncDriver\n\nconnector = SqlAlchemyConnector(\n    connection_info=ConnectionComponents(\n        driver=AsyncDriver.SQLITE_AIOSQLITE,\n        database=\"DATABASE-PLACEHOLDER.db\"\n    )\n)\n\nconnector.save(\"BLOCK_NAME-PLACEHOLDER\")\n
    Sync and async versions (the synchronous example first, then the asynchronous one):
    from prefect import flow, task\nfrom prefect_sqlalchemy import SqlAlchemyConnector\n\n\n@task\ndef setup_table(block_name: str) -> None:\n    with SqlAlchemyConnector.load(block_name) as connector:\n        connector.execute(\n            \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n        )\n        connector.execute(\n            \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n            parameters={\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n        )\n        connector.execute_many(\n            \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n            seq_of_parameters=[\n                {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                {\"name\": \"Unknown\", \"address\": \"Highway 42\"},\n            ],\n        )\n\n@task\ndef fetch_data(block_name: str) -> list:\n    all_rows = []\n    with SqlAlchemyConnector.load(block_name) as connector:\n        while True:\n            # Repeated fetch* calls using the same operation will\n            # skip re-executing and instead return the next set of results\n            new_rows = connector.fetch_many(\"SELECT * FROM customers\", size=2)\n            if len(new_rows) == 0:\n                break\n            all_rows.append(new_rows)\n    return all_rows\n\n@flow\ndef sqlalchemy_flow(block_name: str) -> list:\n    setup_table(block_name)\n    all_rows = fetch_data(block_name)\n    return all_rows\n\n\nsqlalchemy_flow(\"BLOCK-NAME-PLACEHOLDER\")\n
    from prefect import flow, task\nfrom prefect_sqlalchemy import SqlAlchemyConnector\nimport asyncio\n\n@task\nasync def setup_table(block_name: str) -> None:\n    async with await SqlAlchemyConnector.load(block_name) as connector:\n        await connector.execute(\n            \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n        )\n        await connector.execute(\n            \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n            parameters={\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n        )\n        await connector.execute_many(\n            \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n            seq_of_parameters=[\n                {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                {\"name\": \"Unknown\", \"address\": \"Highway 42\"},\n            ],\n        )\n\n@task\nasync def fetch_data(block_name: str) -> list:\n    all_rows = []\n    async with await SqlAlchemyConnector.load(block_name) as connector:\n        while True:\n            # Repeated fetch* calls using the same operation will\n            # skip re-executing and instead return the next set of results\n            new_rows = await connector.fetch_many(\"SELECT * FROM customers\", size=2)\n            if len(new_rows) == 0:\n                break\n            all_rows.append(new_rows)\n    return all_rows\n\n@flow\nasync def sqlalchemy_flow(block_name: str) -> list:\n    await setup_table(block_name)\n    all_rows = await fetch_data(block_name)\n    return all_rows\n\n\nasyncio.run(sqlalchemy_flow(\"BLOCK-NAME-PLACEHOLDER\"))\n
    "},{"location":"integrations/prefect-sqlalchemy/#resources","title":"Resources","text":"

    For more tips on how to use tasks and flows provided in a Prefect integration library, check out the Prefect docs on using integrations.

    "},{"location":"integrations/prefect-sqlalchemy/#installation","title":"Installation","text":"

    Install prefect-sqlalchemy with pip:

    pip install prefect-sqlalchemy\n

    Requires an installation of Python 3.8 or higher.

    We recommend using a Python virtual environment manager such as pipenv, conda, or virtualenv.

    The tasks in this library are designed to work with Prefect 2. For more information about how to use Prefect, please refer to the Prefect documentation.

    "},{"location":"integrations/prefect-sqlalchemy/#saving-credentials-to-a-block","title":"Saving credentials to a block","text":"

    To use the load method on Blocks, you must have a block document saved through code or through the UI.

    Below is a walkthrough of saving block documents through code: create a short script, replacing the placeholders.

    from prefect_sqlalchemy import SqlAlchemyConnector, ConnectionComponents, SyncDriver\n\nconnector = SqlAlchemyConnector(\n    connection_info=ConnectionComponents(\n        driver=SyncDriver.POSTGRESQL_PSYCOPG2,\n        username=\"USERNAME-PLACEHOLDER\",\n        password=\"PASSWORD-PLACEHOLDER\",\n        host=\"localhost\",\n        port=5432,\n        database=\"DATABASE-PLACEHOLDER\",\n    )\n)\n\nconnector.save(\"BLOCK_NAME-PLACEHOLDER\")\n

    Congrats! You can now easily load the saved block, which holds your credentials:

    from prefect_sqlalchemy import SqlAlchemyConnector\n\nSqlAlchemyConnector.load(\"BLOCK_NAME-PLACEHOLDER\")\n

    The required keywords depend upon the desired driver. For example, SQLite requires only the driver and database arguments:

    from prefect_sqlalchemy import SqlAlchemyConnector, ConnectionComponents, SyncDriver\n\nconnector = SqlAlchemyConnector(\n    connection_info=ConnectionComponents(\n        driver=SyncDriver.SQLITE_PYSQLITE,\n        database=\"DATABASE-PLACEHOLDER.db\"\n    )\n)\n\nconnector.save(\"BLOCK_NAME-PLACEHOLDER\")\n

    Registering blocks

    Register blocks in this module to view and edit them on Prefect Cloud:

    prefect block register -m prefect_sqlalchemy\n
    "},{"location":"integrations/prefect-sqlalchemy/#feedback","title":"Feedback","text":"

    If you encounter any bugs while using prefect-sqlalchemy, please open an issue in the prefect repository.

    If you have any questions or issues while using prefect-sqlalchemy, you can find help in the Prefect Community Slack.

    "},{"location":"integrations/prefect-sqlalchemy/#contributing","title":"Contributing","text":"

    If you'd like to help fix an issue or add a feature to prefect-sqlalchemy, please propose changes through a pull request from a fork of the repository.

    Here are the steps:

    1. Fork the repository
    2. Clone the forked repository
    3. Install the repository and its dependencies:
      pip install -e \".[dev]\"\n
    4. Make desired changes
    5. Add tests
    6. Install pre-commit to perform quality checks prior to commit:
      pre-commit install\n
    7. git commit, git push, and create a pull request
    "},{"location":"integrations/prefect-sqlalchemy/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-sqlalchemy/credentials/#prefect_sqlalchemy.credentials","title":"prefect_sqlalchemy.credentials","text":"

    Credential classes used to perform authenticated interactions with SQLAlchemy

    "},{"location":"integrations/prefect-sqlalchemy/credentials/#prefect_sqlalchemy.credentials.AsyncDriver","title":"AsyncDriver","text":"

    Bases: Enum

    Known dialects with their corresponding async drivers.

    Attributes:

        POSTGRESQL_ASYNCPG (Enum): postgresql+asyncpg
        SQLITE_AIOSQLITE (Enum): sqlite+aiosqlite
        MYSQL_ASYNCMY (Enum): mysql+asyncmy
        MYSQL_AIOMYSQL (Enum): mysql+aiomysql

    Source code in prefect_sqlalchemy/credentials.py
    class AsyncDriver(Enum):\n    \"\"\"\n    Known dialects with their corresponding async drivers.\n\n    Attributes:\n        POSTGRESQL_ASYNCPG (Enum): [postgresql+asyncpg](https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.asyncpg)\n\n        SQLITE_AIOSQLITE (Enum): [sqlite+aiosqlite](https://docs.sqlalchemy.org/en/14/dialects/sqlite.html#module-sqlalchemy.dialects.sqlite.aiosqlite)\n\n        MYSQL_ASYNCMY (Enum): [mysql+asyncmy](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.asyncmy)\n        MYSQL_AIOMYSQL (Enum): [mysql+aiomysql](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.aiomysql)\n    \"\"\"  # noqa\n\n    POSTGRESQL_ASYNCPG = \"postgresql+asyncpg\"\n\n    SQLITE_AIOSQLITE = \"sqlite+aiosqlite\"\n\n    MYSQL_ASYNCMY = \"mysql+asyncmy\"\n    MYSQL_AIOMYSQL = \"mysql+aiomysql\"\n
    "},{"location":"integrations/prefect-sqlalchemy/credentials/#prefect_sqlalchemy.credentials.ConnectionComponents","title":"ConnectionComponents","text":"

    Bases: BaseModel

    Parameters to use to create a SQLAlchemy engine URL.

    Attributes:

        driver (Union[AsyncDriver, SyncDriver, str]): The driver name to use.
        database (str): The name of the database to use.
        username (Optional[str]): The user name used to authenticate.
        password (Optional[SecretStr]): The password used to authenticate.
        host (Optional[str]): The host address of the database.
        port (Optional[str]): The port to connect to the database.
        query (Optional[Dict[str, str]]): A dictionary of string keys to string values to be passed to the dialect and/or the DBAPI upon connect.

    Source code in prefect_sqlalchemy/credentials.py
    class ConnectionComponents(BaseModel):\n    \"\"\"\n    Parameters to use to create a SQLAlchemy engine URL.\n\n    Attributes:\n        driver: The driver name to use.\n        database: The name of the database to use.\n        username: The user name used to authenticate.\n        password: The password used to authenticate.\n        host: The host address of the database.\n        port: The port to connect to the database.\n        query: A dictionary of string keys to string values to be passed to the dialect\n            and/or the DBAPI upon connect.\n    \"\"\"\n\n    driver: Union[AsyncDriver, SyncDriver, str] = Field(\n        default=..., description=\"The driver name to use.\"\n    )\n    database: str = Field(default=..., description=\"The name of the database to use.\")\n    username: Optional[str] = Field(\n        default=None, description=\"The user name used to authenticate.\"\n    )\n    password: Optional[SecretStr] = Field(\n        default=None, description=\"The password used to authenticate.\"\n    )\n    host: Optional[str] = Field(\n        default=None, description=\"The host address of the database.\"\n    )\n    port: Optional[str] = Field(\n        default=None, description=\"The port to connect to the database.\"\n    )\n    query: Optional[Dict[str, str]] = Field(\n        default=None,\n        description=(\n            \"A dictionary of string keys to string values to be passed to the dialect \"\n            \"and/or the DBAPI upon connect. To specify non-string parameters to a \"\n            \"Python DBAPI directly, use connect_args.\"\n        ),\n    )\n\n    def create_url(self) -> URL:\n        \"\"\"\n        Create a fully formed connection URL.\n\n        Returns:\n            The SQLAlchemy engine URL.\n        \"\"\"\n        driver = self.driver\n        drivername = driver.value if isinstance(driver, Enum) else driver\n        password = self.password.get_secret_value() if self.password else None\n        url_params = dict(\n            drivername=drivername,\n            username=self.username,\n            password=password,\n            database=self.database,\n            host=self.host,\n            port=self.port,\n            query=self.query,\n        )\n        return URL.create(\n            **{\n                url_key: url_param\n                for url_key, url_param in url_params.items()\n                if url_param is not None\n            }\n        )\n
    "},{"location":"integrations/prefect-sqlalchemy/credentials/#prefect_sqlalchemy.credentials.ConnectionComponents.create_url","title":"create_url","text":"

    Create a fully formed connection URL.

    Returns:

        URL: The SQLAlchemy engine URL.
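    For illustration, here is a minimal sketch of building an engine URL with create_url(); the connection values are placeholders, and the sslmode query entry is only an example of a key/value passed through to the dialect/DBAPI.

    ```python
    from prefect_sqlalchemy import ConnectionComponents, SyncDriver

    # Placeholder connection details; swap in your own values.
    components = ConnectionComponents(
        driver=SyncDriver.POSTGRESQL_PSYCOPG2,
        username="USERNAME-PLACEHOLDER",
        password="PASSWORD-PLACEHOLDER",
        host="localhost",
        port=5432,
        database="DATABASE-PLACEHOLDER",
        query={"sslmode": "require"},  # example key/values appended to the URL query string
    )

    # create_url() assembles a sqlalchemy.engine.URL from the fields that are not None.
    url = components.create_url()
    print(url)
    # e.g. postgresql+psycopg2://USERNAME-PLACEHOLDER:***@localhost:5432/DATABASE-PLACEHOLDER?sslmode=require
    ```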

    Source code in prefect_sqlalchemy/credentials.py
    def create_url(self) -> URL:\n    \"\"\"\n    Create a fully formed connection URL.\n\n    Returns:\n        The SQLAlchemy engine URL.\n    \"\"\"\n    driver = self.driver\n    drivername = driver.value if isinstance(driver, Enum) else driver\n    password = self.password.get_secret_value() if self.password else None\n    url_params = dict(\n        drivername=drivername,\n        username=self.username,\n        password=password,\n        database=self.database,\n        host=self.host,\n        port=self.port,\n        query=self.query,\n    )\n    return URL.create(\n        **{\n            url_key: url_param\n            for url_key, url_param in url_params.items()\n            if url_param is not None\n        }\n    )\n
    "},{"location":"integrations/prefect-sqlalchemy/credentials/#prefect_sqlalchemy.credentials.DatabaseCredentials","title":"DatabaseCredentials","text":"

    Bases: Block

    Block used to manage authentication with a database.

    Attributes:

        driver (Optional[Union[AsyncDriver, SyncDriver, str]]): The driver name, e.g. \"postgresql+asyncpg\".
        database (Optional[str]): The name of the database to use.
        username (Optional[str]): The user name used to authenticate.
        password (Optional[SecretStr]): The password used to authenticate.
        host (Optional[str]): The host address of the database.
        port (Optional[str]): The port to connect to the database.
        query (Optional[Dict[str, str]]): A dictionary of string keys to string values to be passed to the dialect and/or the DBAPI upon connect. To specify non-string parameters to a Python DBAPI directly, use connect_args.
        url (Optional[AnyUrl]): Manually create and provide a URL to create the engine. This is useful for external dialects, e.g. Snowflake, because some params, such as \"warehouse\", are not directly supported in the vanilla sqlalchemy.engine.URL.create method; do not provide this alongside other URL params, as doing so raises a ValueError.
        connect_args (Optional[Dict[str, Any]]): The options which will be passed directly to the DBAPI's connect() method as additional keyword arguments.

    Example

    Load stored database credentials:

    from prefect_sqlalchemy import DatabaseCredentials\ndatabase_block = DatabaseCredentials.load(\"BLOCK_NAME\")\n

    Source code in prefect_sqlalchemy/credentials.py
    class DatabaseCredentials(Block):\n    \"\"\"\n    Block used to manage authentication with a database.\n\n    Attributes:\n        driver: The driver name, e.g. \"postgresql+asyncpg\"\n        database: The name of the database to use.\n        username: The user name used to authenticate.\n        password: The password used to authenticate.\n        host: The host address of the database.\n        port: The port to connect to the database.\n        query: A dictionary of string keys to string values to be passed to\n            the dialect and/or the DBAPI upon connect. To specify non-string\n            parameters to a Python DBAPI directly, use connect_args.\n        url: Manually create and provide a URL to create the engine,\n            this is useful for external dialects, e.g. Snowflake, because some\n            of the params, such as \"warehouse\", is not directly supported in\n            the vanilla `sqlalchemy.engine.URL.create` method; do not provide\n            this alongside with other URL params as it will raise a `ValueError`.\n        connect_args: The options which will be passed directly to the\n            DBAPI's connect() method as additional keyword arguments.\n\n    Example:\n        Load stored database credentials:\n        ```python\n        from prefect_sqlalchemy import DatabaseCredentials\n        database_block = DatabaseCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Database Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/fb3f4debabcda1c5a3aeea4f5b3f94c28845e23e-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-sqlalchemy/credentials/#prefect_sqlalchemy.credentials.DatabaseCredentials\"  # noqa\n\n    driver: Optional[Union[AsyncDriver, SyncDriver, str]] = Field(\n        default=None, description=\"The driver name to use.\"\n    )\n    username: Optional[str] = Field(\n        default=None, description=\"The user name used to authenticate.\"\n    )\n    password: Optional[SecretStr] = Field(\n        default=None, description=\"The password used to authenticate.\"\n    )\n    database: Optional[str] = Field(\n        default=None, description=\"The name of the database to use.\"\n    )\n    host: Optional[str] = Field(\n        default=None, description=\"The host address of the database.\"\n    )\n    port: Optional[str] = Field(\n        default=None, description=\"The port to connect to the database.\"\n    )\n    query: Optional[Dict[str, str]] = Field(\n        default=None,\n        description=(\n            \"A dictionary of string keys to string values to be passed to the dialect \"\n            \"and/or the DBAPI upon connect. To specify non-string parameters to a \"\n            \"Python DBAPI directly, use connect_args.\"\n        ),\n    )\n    url: Optional[AnyUrl] = Field(\n        default=None,\n        description=(\n            \"Manually create and provide a URL to create the engine, this is useful \"\n            \"for external dialects, e.g. 
Snowflake, because some of the params, \"\n            \"such as 'warehouse', is not directly supported in the vanilla \"\n            \"`sqlalchemy.engine.URL.create` method; do not provide this \"\n            \"alongside with other URL params as it will raise a `ValueError`.\"\n        ),\n    )\n    connect_args: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=(\n            \"The options which will be passed directly to the DBAPI's connect() \"\n            \"method as additional keyword arguments.\"\n        ),\n    )\n\n    def block_initialization(self):\n        \"\"\"\n        Initializes the engine.\n        \"\"\"\n        warnings.warn(\n            \"DatabaseCredentials is now deprecated and will be removed March 2023; \"\n            \"please use SqlAlchemyConnector instead.\",\n            DeprecationWarning,\n        )\n        if isinstance(self.driver, AsyncDriver):\n            drivername = self.driver.value\n            self._driver_is_async = True\n        elif isinstance(self.driver, SyncDriver):\n            drivername = self.driver.value\n            self._driver_is_async = False\n        else:\n            drivername = self.driver\n            self._driver_is_async = drivername in AsyncDriver._value2member_map_\n\n        url_params = dict(\n            drivername=drivername,\n            username=self.username,\n            password=self.password.get_secret_value() if self.password else None,\n            database=self.database,\n            host=self.host,\n            port=self.port,\n            query=self.query,\n        )\n        if not self.url:\n            required_url_keys = (\"drivername\", \"database\")\n            if not all(url_params[key] for key in required_url_keys):\n                required_url_keys = (\"driver\", \"database\")\n                raise ValueError(\n                    f\"If the `url` is not provided, \"\n                    f\"all of these URL params are required: \"\n                    f\"{required_url_keys}\"\n                )\n            self.rendered_url = URL.create(\n                **{\n                    url_key: url_param\n                    for url_key, url_param in url_params.items()\n                    if url_param is not None\n                }\n            )  # from params\n        else:\n            if any(val for val in url_params.values()):\n                raise ValueError(\n                    f\"The `url` should not be provided \"\n                    f\"alongside any of these URL params: \"\n                    f\"{url_params.keys()}\"\n                )\n            self.rendered_url = make_url(str(self.url))\n\n    def get_engine(self) -> Union[\"Connection\", \"AsyncConnection\"]:\n        \"\"\"\n        Returns an authenticated engine that can be\n        used to query from databases.\n\n        Returns:\n            The authenticated SQLAlchemy Connection / AsyncConnection.\n\n        Examples:\n            Create an asynchronous engine to PostgreSQL using URL params.\n            ```python\n            from prefect import flow\n            from prefect_sqlalchemy import DatabaseCredentials, AsyncDriver\n\n            @flow\n            def sqlalchemy_credentials_flow():\n                sqlalchemy_credentials = DatabaseCredentials(\n                    driver=AsyncDriver.POSTGRESQL_ASYNCPG,\n                    username=\"prefect\",\n                    password=\"prefect_password\",\n                    database=\"postgres\"\n                )\n                
print(sqlalchemy_credentials.get_engine())\n\n            sqlalchemy_credentials_flow()\n            ```\n\n            Create a synchronous engine to Snowflake using the `url` kwarg.\n            ```python\n            from prefect import flow\n            from prefect_sqlalchemy import DatabaseCredentials, AsyncDriver\n\n            @flow\n            def sqlalchemy_credentials_flow():\n                url = (\n                    \"snowflake://<user_login_name>:<password>\"\n                    \"@<account_identifier>/<database_name>\"\n                    \"?warehouse=<warehouse_name>\"\n                )\n                sqlalchemy_credentials = DatabaseCredentials(url=url)\n                print(sqlalchemy_credentials.get_engine())\n\n            sqlalchemy_credentials_flow()\n            ```\n        \"\"\"\n        engine_kwargs = dict(\n            url=self.rendered_url,\n            connect_args=self.connect_args or {},\n            poolclass=NullPool,\n        )\n        if self._driver_is_async:\n            engine = create_async_engine(**engine_kwargs)\n        else:\n            engine = create_engine(**engine_kwargs)\n        return engine\n\n    class Config:\n        \"\"\"Configuration of pydantic.\"\"\"\n\n        # Support serialization of the 'URL' type\n        arbitrary_types_allowed = True\n        json_encoders = {URL: lambda u: u.render_as_string()}\n\n    def dict(self, *args, **kwargs) -> Dict:\n        \"\"\"\n        Convert to a dictionary.\n        \"\"\"\n        # Support serialization of the 'URL' type\n        d = super().dict(*args, **kwargs)\n        d[\"rendered_url\"] = SecretStr(\n            self.rendered_url.render_as_string(hide_password=False)\n        )\n        return d\n
    "},{"location":"integrations/prefect-sqlalchemy/credentials/#prefect_sqlalchemy.credentials.DatabaseCredentials.Config","title":"Config","text":"

    Configuration of pydantic.

    Source code in prefect_sqlalchemy/credentials.py
    class Config:\n    \"\"\"Configuration of pydantic.\"\"\"\n\n    # Support serialization of the 'URL' type\n    arbitrary_types_allowed = True\n    json_encoders = {URL: lambda u: u.render_as_string()}\n
    "},{"location":"integrations/prefect-sqlalchemy/credentials/#prefect_sqlalchemy.credentials.DatabaseCredentials.block_initialization","title":"block_initialization","text":"

    Initializes the engine.

    Source code in prefect_sqlalchemy/credentials.py
    def block_initialization(self):\n    \"\"\"\n    Initializes the engine.\n    \"\"\"\n    warnings.warn(\n        \"DatabaseCredentials is now deprecated and will be removed March 2023; \"\n        \"please use SqlAlchemyConnector instead.\",\n        DeprecationWarning,\n    )\n    if isinstance(self.driver, AsyncDriver):\n        drivername = self.driver.value\n        self._driver_is_async = True\n    elif isinstance(self.driver, SyncDriver):\n        drivername = self.driver.value\n        self._driver_is_async = False\n    else:\n        drivername = self.driver\n        self._driver_is_async = drivername in AsyncDriver._value2member_map_\n\n    url_params = dict(\n        drivername=drivername,\n        username=self.username,\n        password=self.password.get_secret_value() if self.password else None,\n        database=self.database,\n        host=self.host,\n        port=self.port,\n        query=self.query,\n    )\n    if not self.url:\n        required_url_keys = (\"drivername\", \"database\")\n        if not all(url_params[key] for key in required_url_keys):\n            required_url_keys = (\"driver\", \"database\")\n            raise ValueError(\n                f\"If the `url` is not provided, \"\n                f\"all of these URL params are required: \"\n                f\"{required_url_keys}\"\n            )\n        self.rendered_url = URL.create(\n            **{\n                url_key: url_param\n                for url_key, url_param in url_params.items()\n                if url_param is not None\n            }\n        )  # from params\n    else:\n        if any(val for val in url_params.values()):\n            raise ValueError(\n                f\"The `url` should not be provided \"\n                f\"alongside any of these URL params: \"\n                f\"{url_params.keys()}\"\n            )\n        self.rendered_url = make_url(str(self.url))\n
    "},{"location":"integrations/prefect-sqlalchemy/credentials/#prefect_sqlalchemy.credentials.DatabaseCredentials.dict","title":"dict","text":"

    Convert to a dictionary.

    Source code in prefect_sqlalchemy/credentials.py
    def dict(self, *args, **kwargs) -> Dict:\n    \"\"\"\n    Convert to a dictionary.\n    \"\"\"\n    # Support serialization of the 'URL' type\n    d = super().dict(*args, **kwargs)\n    d[\"rendered_url\"] = SecretStr(\n        self.rendered_url.render_as_string(hide_password=False)\n    )\n    return d\n
    "},{"location":"integrations/prefect-sqlalchemy/credentials/#prefect_sqlalchemy.credentials.DatabaseCredentials.get_engine","title":"get_engine","text":"

    Returns an authenticated engine that can be used to query from databases.

    Returns:

        Union[Connection, AsyncConnection]: The authenticated SQLAlchemy Connection / AsyncConnection.

    Examples:

    Create an asynchronous engine to PostgreSQL using URL params.

    from prefect import flow\nfrom prefect_sqlalchemy import DatabaseCredentials, AsyncDriver\n\n@flow\ndef sqlalchemy_credentials_flow():\n    sqlalchemy_credentials = DatabaseCredentials(\n        driver=AsyncDriver.POSTGRESQL_ASYNCPG,\n        username=\"prefect\",\n        password=\"prefect_password\",\n        database=\"postgres\"\n    )\n    print(sqlalchemy_credentials.get_engine())\n\nsqlalchemy_credentials_flow()\n

    Create a synchronous engine to Snowflake using the url kwarg.

    from prefect import flow\nfrom prefect_sqlalchemy import DatabaseCredentials, AsyncDriver\n\n@flow\ndef sqlalchemy_credentials_flow():\n    url = (\n        \"snowflake://<user_login_name>:<password>\"\n        \"@<account_identifier>/<database_name>\"\n        \"?warehouse=<warehouse_name>\"\n    )\n    sqlalchemy_credentials = DatabaseCredentials(url=url)\n    print(sqlalchemy_credentials.get_engine())\n\nsqlalchemy_credentials_flow()\n

    Source code in prefect_sqlalchemy/credentials.py
    def get_engine(self) -> Union[\"Connection\", \"AsyncConnection\"]:\n    \"\"\"\n    Returns an authenticated engine that can be\n    used to query from databases.\n\n    Returns:\n        The authenticated SQLAlchemy Connection / AsyncConnection.\n\n    Examples:\n        Create an asynchronous engine to PostgreSQL using URL params.\n        ```python\n        from prefect import flow\n        from prefect_sqlalchemy import DatabaseCredentials, AsyncDriver\n\n        @flow\n        def sqlalchemy_credentials_flow():\n            sqlalchemy_credentials = DatabaseCredentials(\n                driver=AsyncDriver.POSTGRESQL_ASYNCPG,\n                username=\"prefect\",\n                password=\"prefect_password\",\n                database=\"postgres\"\n            )\n            print(sqlalchemy_credentials.get_engine())\n\n        sqlalchemy_credentials_flow()\n        ```\n\n        Create a synchronous engine to Snowflake using the `url` kwarg.\n        ```python\n        from prefect import flow\n        from prefect_sqlalchemy import DatabaseCredentials, AsyncDriver\n\n        @flow\n        def sqlalchemy_credentials_flow():\n            url = (\n                \"snowflake://<user_login_name>:<password>\"\n                \"@<account_identifier>/<database_name>\"\n                \"?warehouse=<warehouse_name>\"\n            )\n            sqlalchemy_credentials = DatabaseCredentials(url=url)\n            print(sqlalchemy_credentials.get_engine())\n\n        sqlalchemy_credentials_flow()\n        ```\n    \"\"\"\n    engine_kwargs = dict(\n        url=self.rendered_url,\n        connect_args=self.connect_args or {},\n        poolclass=NullPool,\n    )\n    if self._driver_is_async:\n        engine = create_async_engine(**engine_kwargs)\n    else:\n        engine = create_engine(**engine_kwargs)\n    return engine\n
    "},{"location":"integrations/prefect-sqlalchemy/credentials/#prefect_sqlalchemy.credentials.SyncDriver","title":"SyncDriver","text":"

    Bases: Enum

    Known dialects with their corresponding sync drivers.

    Attributes:

    POSTGRESQL_PSYCOPG2 (Enum): postgresql+psycopg2
    POSTGRESQL_PG8000 (Enum): postgresql+pg8000
    POSTGRESQL_PSYCOPG2CFFI (Enum): postgresql+psycopg2cffi
    POSTGRESQL_PYPOSTGRESQL (Enum): postgresql+pypostgresql
    POSTGRESQL_PYGRESQL (Enum): postgresql+pygresql
    MYSQL_MYSQLDB (Enum): mysql+mysqldb
    MYSQL_PYMYSQL (Enum): mysql+pymysql
    MYSQL_MYSQLCONNECTOR (Enum): mysql+mysqlconnector
    MYSQL_CYMYSQL (Enum): mysql+cymysql
    MYSQL_OURSQL (Enum): mysql+oursql
    MYSQL_PYODBC (Enum): mysql+pyodbc
    SQLITE_PYSQLITE (Enum): sqlite+pysqlite
    SQLITE_PYSQLCIPHER (Enum): sqlite+pysqlcipher
    ORACLE_CX_ORACLE (Enum): oracle+cx_oracle
    MSSQL_PYODBC (Enum): mssql+pyodbc
    MSSQL_MXODBC (Enum): mssql+mxodbc
    MSSQL_PYMSSQL (Enum): mssql+pymssql

    Source code in prefect_sqlalchemy/credentials.py
    class SyncDriver(Enum):\n    \"\"\"\n    Known dialects with their corresponding sync drivers.\n\n    Attributes:\n        POSTGRESQL_PSYCOPG2 (Enum): [postgresql+psycopg2](https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.psycopg2)\n        POSTGRESQL_PG8000 (Enum): [postgresql+pg8000](https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.pg8000)\n        POSTGRESQL_PSYCOPG2CFFI (Enum): [postgresql+psycopg2cffi](https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.psycopg2cffi)\n        POSTGRESQL_PYPOSTGRESQL (Enum): [postgresql+pypostgresql](https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.pypostgresql)\n        POSTGRESQL_PYGRESQL (Enum): [postgresql+pygresql](https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.pygresql)\n\n        MYSQL_MYSQLDB (Enum): [mysql+mysqldb](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.mysqldb)\n        MYSQL_PYMYSQL (Enum): [mysql+pymysql](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.pymysql)\n        MYSQL_MYSQLCONNECTOR (Enum): [mysql+mysqlconnector](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.mysqlconnector)\n        MYSQL_CYMYSQL (Enum): [mysql+cymysql](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.cymysql)\n        MYSQL_OURSQL (Enum): [mysql+oursql](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.oursql)\n        MYSQL_PYODBC (Enum): [mysql+pyodbc](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.pyodbc)\n\n        SQLITE_PYSQLITE (Enum): [sqlite+pysqlite](https://docs.sqlalchemy.org/en/14/dialects/sqlite.html#module-sqlalchemy.dialects.sqlite.pysqlite)\n        SQLITE_PYSQLCIPHER (Enum): [sqlite+pysqlcipher](https://docs.sqlalchemy.org/en/14/dialects/sqlite.html#module-sqlalchemy.dialects.sqlite.pysqlcipher)\n\n        ORACLE_CX_ORACLE (Enum): [oracle+cx_oracle](https://docs.sqlalchemy.org/en/14/dialects/oracle.html#module-sqlalchemy.dialects.oracle.cx_oracle)\n\n        MSSQL_PYODBC (Enum): [mssql+pyodbc](https://docs.sqlalchemy.org/en/14/dialects/mssql.html#module-sqlalchemy.dialects.mssql.pyodbc)\n        MSSQL_MXODBC (Enum): [mssql+mxodbc](https://docs.sqlalchemy.org/en/14/dialects/mssql.html#module-sqlalchemy.dialects.mssql.mxodbc)\n        MSSQL_PYMSSQL (Enum): [mssql+pymssql](https://docs.sqlalchemy.org/en/14/dialects/mssql.html#module-sqlalchemy.dialects.mssql.pymssql)\n    \"\"\"  # noqa\n\n    POSTGRESQL_PSYCOPG2 = \"postgresql+psycopg2\"\n    POSTGRESQL_PG8000 = \"postgresql+pg8000\"\n    POSTGRESQL_PSYCOPG2CFFI = \"postgresql+psycopg2cffi\"\n    POSTGRESQL_PYPOSTGRESQL = \"postgresql+pypostgresql\"\n    POSTGRESQL_PYGRESQL = \"postgresql+pygresql\"\n\n    MYSQL_MYSQLDB = \"mysql+mysqldb\"\n    MYSQL_PYMYSQL = \"mysql+pymysql\"\n    MYSQL_MYSQLCONNECTOR = \"mysql+mysqlconnector\"\n    MYSQL_CYMYSQL = \"mysql+cymysql\"\n    MYSQL_OURSQL = \"mysql+oursql\"\n    MYSQL_PYODBC = \"mysql+pyodbc\"\n\n    SQLITE_PYSQLITE = \"sqlite+pysqlite\"\n    SQLITE_PYSQLCIPHER = \"sqlite+pysqlcipher\"\n\n    ORACLE_CX_ORACLE = \"oracle+cx_oracle\"\n\n    MSSQL_PYODBC = \"mssql+pyodbc\"\n    MSSQL_MXODBC = \"mssql+mxodbc\"\n    MSSQL_PYMSSQL = \"mssql+pymssql\"\n
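    As a quick usage note, any of the values above can be passed as the `driver` of ConnectionComponents when building a connector; a minimal sketch with the pysqlite driver (the `prefect.db` path is just a placeholder, mirroring the connector examples later in this reference):

```python
from prefect_sqlalchemy import ConnectionComponents, SqlAlchemyConnector, SyncDriver

# SQLITE_PYSQLITE resolves to the "sqlite+pysqlite" dialect+driver string.
connector = SqlAlchemyConnector(
    connection_info=ConnectionComponents(
        driver=SyncDriver.SQLITE_PYSQLITE,
        database="prefect.db",  # placeholder path
    )
)

with connector as database:
    database.execute("CREATE TABLE IF NOT EXISTS demo (value varchar);")
```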
    "},{"location":"integrations/prefect-sqlalchemy/database/","title":"Database","text":""},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database","title":"prefect_sqlalchemy.database","text":"

    Tasks for querying a database with SQLAlchemy

    "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector","title":"SqlAlchemyConnector","text":"

    Bases: CredentialsBlock, DatabaseBlock

    Block used to manage authentication with a database.

    Upon instantiation, an engine is created and maintained for the life of the object until the close method is called.

    It is recommended to use this block as a context manager, which will automatically close the engine and its connections when the context is exited.

    It is also recommended that this block be loaded and consumed within a single task or flow; if the block is passed across separate tasks and flows, the state of its connection and cursor could be lost.

    Attributes:

    connection_info (Union[ConnectionComponents, AnyUrl]): SQLAlchemy URL to create the engine; either create from components or create from a string.
    connect_args (Optional[Dict[str, Any]]): The options which will be passed directly to the DBAPI's connect() method as additional keyword arguments.
    fetch_size (int): The number of rows to fetch at a time.

    Example

    Load stored database credentials and use in context manager:

    from prefect_sqlalchemy import SqlAlchemyConnector\n\ndatabase_block = SqlAlchemyConnector.load(\"BLOCK_NAME\")\nwith database_block:\n    ...\n

    Create a table named customers and insert values; then fetch the first 10 rows.

    from prefect_sqlalchemy import (\n    SqlAlchemyConnector, SyncDriver, ConnectionComponents\n)\n\nwith SqlAlchemyConnector(\n    connection_info=ConnectionComponents(\n        driver=SyncDriver.SQLITE_PYSQLITE,\n        database=\"prefect.db\"\n    )\n) as database:\n    database.execute(\n        \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\",\n    )\n    for i in range(1, 42):\n        database.execute(\n            \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n            parameters={\"name\": \"Marvin\", \"address\": f\"Highway {i}\"},\n        )\n    results = database.fetch_many(\n        \"SELECT * FROM customers WHERE name = :name;\",\n        parameters={\"name\": \"Marvin\"},\n        size=10\n    )\nprint(results)\n
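    The examples above run synchronously. A minimal sketch of the asynchronous path follows, assuming a PostgreSQL database is reachable with the credentials shown and that the illustrative `timeout` connect argument is accepted by the underlying asyncpg driver; `connect_args` and `fetch_size` are only included to illustrate the attributes described above.

```python
import asyncio

from prefect_sqlalchemy import AsyncDriver, ConnectionComponents, SqlAlchemyConnector


async def example():
    # AsyncDriver.POSTGRESQL_ASYNCPG makes the block asynchronous, so the
    # `async with` / awaitable variants of the methods are used.
    async with SqlAlchemyConnector(
        connection_info=ConnectionComponents(
            driver=AsyncDriver.POSTGRESQL_ASYNCPG,
            username="prefect",            # assumed credentials
            password="prefect_password",
            database="postgres",
        ),
        connect_args={"timeout": 10},      # illustrative DBAPI connect() argument
        fetch_size=2,                      # fetch_many defaults to this when size is omitted
    ) as database:
        await database.execute(
            "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);"
        )
        rows = await database.fetch_many("SELECT * FROM customers;")
        print(rows)


asyncio.run(example())
```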

    Source code in prefect_sqlalchemy/database.py
    class SqlAlchemyConnector(CredentialsBlock, DatabaseBlock):\n    \"\"\"\n    Block used to manage authentication with a database.\n\n    Upon instantiating, an engine is created and maintained for the life of\n    the object until the close method is called.\n\n    It is recommended to use this block as a context manager, which will automatically\n    close the engine and its connections when the context is exited.\n\n    It is also recommended that this block is loaded and consumed within a single task\n    or flow because if the block is passed across separate tasks and flows,\n    the state of the block's connection and cursor could be lost.\n\n    Attributes:\n        connection_info: SQLAlchemy URL to create the engine;\n            either create from components or create from a string.\n        connect_args: The options which will be passed directly to the\n            DBAPI's connect() method as additional keyword arguments.\n        fetch_size: The number of rows to fetch at a time.\n\n    Example:\n        Load stored database credentials and use in context manager:\n        ```python\n        from prefect_sqlalchemy import SqlAlchemyConnector\n\n        database_block = SqlAlchemyConnector.load(\"BLOCK_NAME\")\n        with database_block:\n            ...\n        ```\n\n        Create table named customers and insert values; then fetch the first 10 rows.\n        ```python\n        from prefect_sqlalchemy import (\n            SqlAlchemyConnector, SyncDriver, ConnectionComponents\n        )\n\n        with SqlAlchemyConnector(\n            connection_info=ConnectionComponents(\n                driver=SyncDriver.SQLITE_PYSQLITE,\n                database=\"prefect.db\"\n            )\n        ) as database:\n            database.execute(\n                \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\",\n            )\n            for i in range(1, 42):\n                database.execute(\n                    \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                    parameters={\"name\": \"Marvin\", \"address\": f\"Highway {i}\"},\n                )\n            results = database.fetch_many(\n                \"SELECT * FROM customers WHERE name = :name;\",\n                parameters={\"name\": \"Marvin\"},\n                size=10\n            )\n        print(results)\n        ```\n    \"\"\"\n\n    _block_type_name = \"SQLAlchemy Connector\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/3c7dff04f70aaf4528e184a3b028f9e40b98d68c-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector\"  # noqa\n\n    connection_info: Union[ConnectionComponents, AnyUrl] = Field(\n        default=...,\n        description=(\n            \"SQLAlchemy URL to create the engine; either create from components \"\n            \"or create from a string.\"\n        ),\n    )\n    connect_args: Optional[Dict[str, Any]] = Field(\n        default=None,\n        title=\"Additional Connection Arguments\",\n        description=(\n            \"The options which will be passed directly to the DBAPI's connect() \"\n            \"method as additional keyword arguments.\"\n        ),\n    )\n    fetch_size: int = Field(\n        default=1, description=\"The number of rows to fetch at a time.\"\n    )\n\n    _engine: Optional[Union[AsyncEngine, Engine]] = None\n    _exit_stack: Union[ExitStack, AsyncExitStack] = None\n    
_unique_results: Dict[str, CursorResult] = None\n\n    class Config:\n        \"\"\"Configuration of pydantic.\"\"\"\n\n        # Support serialization of the 'URL' type\n        arbitrary_types_allowed = True\n        json_encoders = {URL: lambda u: u.render_as_string()}\n\n    def dict(self, *args, **kwargs) -> Dict:\n        \"\"\"\n        Convert to a dictionary.\n        \"\"\"\n        # Support serialization of the 'URL' type\n        d = super().dict(*args, **kwargs)\n        d[\"_rendered_url\"] = SecretStr(\n            self._rendered_url.render_as_string(hide_password=False)\n        )\n        return d\n\n    def block_initialization(self):\n        \"\"\"\n        Initializes the engine.\n        \"\"\"\n        super().block_initialization()\n\n        if isinstance(self.connection_info, ConnectionComponents):\n            self._rendered_url = self.connection_info.create_url()\n        else:\n            # make rendered url from string\n            self._rendered_url = make_url(str(self.connection_info))\n        drivername = self._rendered_url.drivername\n\n        try:\n            AsyncDriver(drivername)\n            self._driver_is_async = True\n        except ValueError:\n            self._driver_is_async = False\n\n        if self._unique_results is None:\n            self._unique_results = {}\n\n        if self._exit_stack is None:\n            self._start_exit_stack()\n\n    def _start_exit_stack(self):\n        \"\"\"\n        Starts an AsyncExitStack or ExitStack depending on whether driver is async.\n        \"\"\"\n        self._exit_stack = AsyncExitStack() if self._driver_is_async else ExitStack()\n\n    def get_engine(\n        self, **create_engine_kwargs: Dict[str, Any]\n    ) -> Union[Engine, AsyncEngine]:\n        \"\"\"\n        Returns an authenticated engine that can be\n        used to query from databases.\n\n        If an existing engine exists, return that one.\n\n        Returns:\n            The authenticated SQLAlchemy Engine / AsyncEngine.\n\n        Examples:\n            Create an asynchronous engine to PostgreSQL using URL params.\n            ```python\n            from prefect import flow\n            from prefect_sqlalchemy import (\n                SqlAlchemyConnector, ConnectionComponents, AsyncDriver\n            )\n\n            @flow\n            def sqlalchemy_credentials_flow():\n                sqlalchemy_credentials = SqlAlchemyConnector(\n                connection_info=ConnectionComponents(\n                        driver=AsyncDriver.POSTGRESQL_ASYNCPG,\n                        username=\"prefect\",\n                        password=\"prefect_password\",\n                        database=\"postgres\"\n                    )\n                )\n                print(sqlalchemy_credentials.get_engine())\n\n            sqlalchemy_credentials_flow()\n            ```\n\n            Create a synchronous engine to Snowflake using the `url` kwarg.\n            ```python\n            from prefect import flow\n            from prefect_sqlalchemy import SqlAlchemyConnector, AsyncDriver\n\n            @flow\n            def sqlalchemy_credentials_flow():\n                url = (\n                    \"snowflake://<user_login_name>:<password>\"\n                    \"@<account_identifier>/<database_name>\"\n                    \"?warehouse=<warehouse_name>\"\n                )\n                sqlalchemy_credentials = SqlAlchemyConnector(url=url)\n                print(sqlalchemy_credentials.get_engine())\n\n            
sqlalchemy_credentials_flow()\n            ```\n        \"\"\"\n        if self._engine is not None:\n            self.logger.debug(\"Reusing existing engine.\")\n            return self._engine\n\n        engine_kwargs = dict(\n            url=self._rendered_url,\n            connect_args=self.connect_args or {},\n            **create_engine_kwargs,\n        )\n        if self._driver_is_async:\n            # no need to await here\n            engine = create_async_engine(**engine_kwargs)\n        else:\n            engine = create_engine(**engine_kwargs)\n        self.logger.info(\"Created a new engine.\")\n\n        if self._engine is None:\n            self._engine = engine\n\n        return engine\n\n    def get_connection(\n        self, begin: bool = True, **connect_kwargs: Dict[str, Any]\n    ) -> Union[Connection, AsyncConnection]:\n        \"\"\"\n        Returns a connection that can be used to query from databases.\n\n        Args:\n            begin: Whether to begin a transaction on the connection; if True, if\n                any operations fail, the entire transaction will be rolled back.\n            **connect_kwargs: Additional keyword arguments to pass to either\n                `engine.begin` or engine.connect`.\n\n        Returns:\n            The SQLAlchemy Connection / AsyncConnection.\n\n        Examples:\n            Create an synchronous connection as a context-managed transaction.\n            ```python\n            from prefect_sqlalchemy import SqlAlchemyConnector\n\n            sqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\n            with sqlalchemy_connector.get_connection(begin=False) as connection:\n                connection.execute(\"SELECT * FROM table LIMIT 1;\")\n            ```\n\n            Create an asynchronous connection as a context-managed transacation.\n            ```python\n            import asyncio\n            from prefect_sqlalchemy import SqlAlchemyConnector\n\n            sqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\n            async with sqlalchemy_connector.get_connection(begin=False) as connection:\n                asyncio.run(connection.execute(\"SELECT * FROM table LIMIT 1;\"))\n            ```\n        \"\"\"  # noqa: E501\n        engine = self.get_engine()\n        if begin:\n            connection = engine.begin(**connect_kwargs)\n        else:\n            connection = engine.connect(**connect_kwargs)\n        self.logger.info(\"Created a new connection.\")\n        return connection\n\n    def get_client(\n        self,\n        client_type: Literal[\"engine\", \"connection\"],\n        **get_client_kwargs: Dict[str, Any],\n    ) -> Union[Engine, AsyncEngine, Connection, AsyncConnection]:\n        \"\"\"\n        Returns either an engine or connection that can be used to query from databases.\n\n        Args:\n            client_type: Select from either 'engine' or 'connection'.\n            **get_client_kwargs: Additional keyword arguments to pass to\n                either `get_engine` or `get_connection`.\n\n        Returns:\n            The authenticated SQLAlchemy engine or connection.\n\n        Examples:\n            Create an engine.\n            ```python\n            from prefect_sqlalchemy import SqlalchemyConnector\n\n            sqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\n            engine = sqlalchemy_connector.get_client(client_type=\"engine\")\n            ```\n\n            Create a context managed connection.\n            ```python\n            
from prefect_sqlalchemy import SqlalchemyConnector\n\n            sqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\n            with sqlalchemy_connector.get_client(client_type=\"connection\") as conn:\n                ...\n            ```\n        \"\"\"  # noqa: E501\n        if client_type == \"engine\":\n            client = self.get_engine(**get_client_kwargs)\n        elif client_type == \"connection\":\n            client = self.get_connection(**get_client_kwargs)\n        else:\n            raise ValueError(\n                f\"{client_type!r} is not supported; choose from engine or connection.\"\n            )\n        return client\n\n    async def _async_sync_execute(\n        self,\n        connection: Union[Connection, AsyncConnection],\n        *execute_args: Tuple[Any],\n        **execute_kwargs: Dict[str, Any],\n    ) -> CursorResult:\n        \"\"\"\n        Execute the statement asynchronously or synchronously.\n        \"\"\"\n        # can't use run_sync_in_worker_thread:\n        # ProgrammingError: (sqlite3.ProgrammingError) SQLite objects created in a\n        # thread can only be used in that same thread.\n        result_set = connection.execute(*execute_args, **execute_kwargs)\n\n        if self._driver_is_async:\n            result_set = await result_set\n            await connection.commit()  # very important\n        elif SQLALCHEMY_VERSION.startswith(\"2.\"):\n            connection.commit()\n        return result_set\n\n    @asynccontextmanager\n    async def _manage_connection(self, **get_connection_kwargs: Dict[str, Any]):\n        if self._driver_is_async:\n            async with self.get_connection(**get_connection_kwargs) as connection:\n                yield connection\n        else:\n            with self.get_connection(**get_connection_kwargs) as connection:\n                yield connection\n\n    async def _get_result_set(\n        self, *execute_args: Tuple[Any], **execute_kwargs: Dict[str, Any]\n    ) -> CursorResult:\n        \"\"\"\n        Returns a new or existing result set based on whether the inputs\n        are unique.\n\n        Args:\n            *execute_args: Args to pass to execute.\n            **execute_kwargs: Keyword args to pass to execute.\n\n        Returns:\n            The result set from the operation.\n        \"\"\"  # noqa: E501\n        input_hash = hash_objects(*execute_args, **execute_kwargs)\n        assert input_hash is not None, (\n            \"We were not able to hash your inputs, \"\n            \"which resulted in an unexpected data return; \"\n            \"please open an issue with a reproducible example.\"\n        )\n\n        if input_hash not in self._unique_results.keys():\n            if self._driver_is_async:\n                connection = await self._exit_stack.enter_async_context(\n                    self.get_connection()\n                )\n            else:\n                connection = self._exit_stack.enter_context(self.get_connection())\n            result_set = await self._async_sync_execute(\n                connection, *execute_args, **execute_kwargs\n            )\n            # implicitly store the connection by storing the result set\n            # which points to its parent connection\n            self._unique_results[input_hash] = result_set\n        else:\n            result_set = self._unique_results[input_hash]\n        return result_set\n\n    def _reset_cursor_results(self) -> None:\n        \"\"\"\n        Closes all the existing cursor results.\n        \"\"\"\n       
 input_hashes = tuple(self._unique_results.keys())\n        for input_hash in input_hashes:\n            try:\n                cursor_result = self._unique_results.pop(input_hash)\n                cursor_result.close()\n            except Exception as exc:\n                self.logger.warning(\n                    f\"Failed to close connection for input hash {input_hash!r}: {exc}\"\n                )\n\n    @sync_compatible\n    async def reset_connections(self) -> None:\n        \"\"\"\n        Tries to close all opened connections and their results.\n\n        Examples:\n            Resets connections so `fetch_*` methods return new results.\n            ```python\n            from prefect_sqlalchemy import SqlAlchemyConnector\n\n            with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n                results = database.fetch_one(\"SELECT * FROM customers\")\n                database.reset_connections()\n                results = database.fetch_one(\"SELECT * FROM customers\")\n            ```\n        \"\"\"\n        if self._driver_is_async:\n            raise RuntimeError(\n                f\"{self._rendered_url.drivername} has no synchronous connections. \"\n                f\"Please use the `reset_async_connections` method instead.\"\n            )\n\n        if self._exit_stack is None:\n            self.logger.info(\"There were no connections to reset.\")\n            return\n\n        self._reset_cursor_results()\n        self._exit_stack.close()\n        self.logger.info(\"Reset opened connections and their results.\")\n\n    async def reset_async_connections(self) -> None:\n        \"\"\"\n        Tries to close all opened connections and their results.\n\n        Examples:\n            Resets connections so `fetch_*` methods return new results.\n            ```python\n            import asyncio\n            from prefect_sqlalchemy import SqlAlchemyConnector\n\n            async def example_run():\n                async with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n                    results = await database.fetch_one(\"SELECT * FROM customers\")\n                    await database.reset_async_connections()\n                    results = await database.fetch_one(\"SELECT * FROM customers\")\n\n            asyncio.run(example_run())\n            ```\n        \"\"\"\n        if not self._driver_is_async:\n            raise RuntimeError(\n                f\"{self._rendered_url.drivername} has no asynchronous connections. 
\"\n                f\"Please use the `reset_connections` method instead.\"\n            )\n\n        if self._exit_stack is None:\n            self.logger.info(\"There were no connections to reset.\")\n            return\n\n        self._reset_cursor_results()\n        await self._exit_stack.aclose()\n        self.logger.info(\"Reset opened connections and their results.\")\n\n    @sync_compatible\n    async def fetch_one(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        **execution_options: Dict[str, Any],\n    ) -> Tuple[Any]:\n        \"\"\"\n        Fetch a single result from the database.\n\n        Repeated calls using the same inputs to *any* of the fetch methods of this\n        block will skip executing the operation again, and instead,\n        return the next set of results from the previous execution,\n        until the reset_cursors method is called.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            **execution_options: Options to pass to `Connection.execution_options`.\n\n        Returns:\n            A list of tuples containing the data returned by the database,\n                where each row is a tuple and each column is a value in the tuple.\n\n        Examples:\n            Create a table, insert three rows into it, and fetch a row repeatedly.\n            ```python\n            from prefect_sqlalchemy import SqlAlchemyConnector\n\n            with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n                database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n                database.execute_many(\n                    \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                    seq_of_parameters=[\n                        {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Unknown\", \"address\": \"Space\"},\n                        {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                    ],\n                )\n                results = True\n                while results:\n                    results = database.fetch_one(\"SELECT * FROM customers\")\n                    print(results)\n            ```\n        \"\"\"  # noqa\n        result_set = await self._get_result_set(\n            text(operation), parameters, execution_options=execution_options\n        )\n        self.logger.debug(\"Preparing to fetch one row.\")\n        row = result_set.fetchone()\n        return row\n\n    @sync_compatible\n    async def fetch_many(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        size: Optional[int] = None,\n        **execution_options: Dict[str, Any],\n    ) -> List[Tuple[Any]]:\n        \"\"\"\n        Fetch a limited number of results from the database.\n\n        Repeated calls using the same inputs to *any* of the fetch methods of this\n        block will skip executing the operation again, and instead,\n        return the next set of results from the previous execution,\n        until the reset_cursors method is called.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            size: The number of results to return; if None or 0, uses the value of\n                `fetch_size` configured on the block.\n            
**execution_options: Options to pass to `Connection.execution_options`.\n\n        Returns:\n            A list of tuples containing the data returned by the database,\n                where each row is a tuple and each column is a value in the tuple.\n\n        Examples:\n            Create a table, insert three rows into it, and fetch two rows repeatedly.\n            ```python\n            from prefect_sqlalchemy import SqlAlchemyConnector\n\n            with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n                database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n                database.execute_many(\n                    \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                    seq_of_parameters=[\n                        {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Unknown\", \"address\": \"Space\"},\n                        {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                    ],\n                )\n                results = database.fetch_many(\"SELECT * FROM customers\", size=2)\n                print(results)\n                results = database.fetch_many(\"SELECT * FROM customers\", size=2)\n                print(results)\n            ```\n        \"\"\"  # noqa\n        result_set = await self._get_result_set(\n            text(operation), parameters, execution_options=execution_options\n        )\n        size = size or self.fetch_size\n        self.logger.debug(f\"Preparing to fetch {size} rows.\")\n        rows = result_set.fetchmany(size=size)\n        return rows\n\n    @sync_compatible\n    async def fetch_all(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        **execution_options: Dict[str, Any],\n    ) -> List[Tuple[Any]]:\n        \"\"\"\n        Fetch all results from the database.\n\n        Repeated calls using the same inputs to *any* of the fetch methods of this\n        block will skip executing the operation again, and instead,\n        return the next set of results from the previous execution,\n        until the reset_cursors method is called.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            **execution_options: Options to pass to `Connection.execution_options`.\n\n        Returns:\n            A list of tuples containing the data returned by the database,\n                where each row is a tuple and each column is a value in the tuple.\n\n        Examples:\n            Create a table, insert three rows into it, and fetch all where name is 'Me'.\n            ```python\n            from prefect_sqlalchemy import SqlAlchemyConnector\n\n            with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n                database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n                database.execute_many(\n                    \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                    seq_of_parameters=[\n                        {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Unknown\", \"address\": \"Space\"},\n                        {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                    ],\n                )\n                results = database.fetch_all(\"SELECT * FROM customers WHERE name = :name\", parameters={\"name\": 
\"Me\"})\n            ```\n        \"\"\"  # noqa\n        result_set = await self._get_result_set(\n            text(operation), parameters, execution_options=execution_options\n        )\n        self.logger.debug(\"Preparing to fetch all rows.\")\n        rows = result_set.fetchall()\n        return rows\n\n    @sync_compatible\n    async def execute(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        **execution_options: Dict[str, Any],\n    ) -> None:\n        \"\"\"\n        Executes an operation on the database. This method is intended to be used\n        for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n\n        Unlike the fetch methods, this method will always execute the operation\n        upon calling.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            **execution_options: Options to pass to `Connection.execution_options`.\n\n        Examples:\n            Create a table and insert one row into it.\n            ```python\n            from prefect_sqlalchemy import SqlAlchemyConnector\n\n            with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n                database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n                database.execute(\n                    \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                    parameters={\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n                )\n            ```\n        \"\"\"  # noqa\n        async with self._manage_connection(begin=False) as connection:\n            await self._async_sync_execute(\n                connection,\n                text(operation),\n                parameters,\n                execution_options=execution_options,\n            )\n        self.logger.info(f\"Executed the operation, {operation!r}\")\n\n    @sync_compatible\n    async def execute_many(\n        self,\n        operation: str,\n        seq_of_parameters: List[Dict[str, Any]],\n        **execution_options: Dict[str, Any],\n    ) -> None:\n        \"\"\"\n        Executes many operations on the database. 
This method is intended to be used\n        for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n\n        Unlike the fetch methods, this method will always execute the operation\n        upon calling.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            seq_of_parameters: The sequence of parameters for the operation.\n            **execution_options: Options to pass to `Connection.execution_options`.\n\n        Examples:\n            Create a table and insert two rows into it.\n            ```python\n            from prefect_sqlalchemy import SqlAlchemyConnector\n\n            with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n                database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n                database.execute_many(\n                    \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                    seq_of_parameters=[\n                        {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Unknown\", \"address\": \"Space\"},\n                        {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                    ],\n                )\n            ```\n        \"\"\"  # noqa\n        async with self._manage_connection(begin=False) as connection:\n            await self._async_sync_execute(\n                connection,\n                text(operation),\n                seq_of_parameters,\n                execution_options=execution_options,\n            )\n        self.logger.info(\n            f\"Executed {len(seq_of_parameters)} operations based off {operation!r}.\"\n        )\n\n    async def __aenter__(self):\n        \"\"\"\n        Start an asynchronous database engine upon entry.\n        \"\"\"\n        if not self._driver_is_async:\n            raise RuntimeError(\n                f\"{self._rendered_url.drivername} cannot be run asynchronously. \"\n                f\"Please use the `with` syntax.\"\n            )\n        return self\n\n    async def __aexit__(self, *args):\n        \"\"\"\n        Dispose the asynchronous database engine upon exit.\n        \"\"\"\n        await self.aclose()\n\n    async def aclose(self):\n        \"\"\"\n        Closes async connections and its cursors.\n        \"\"\"\n        if not self._driver_is_async:\n            raise RuntimeError(\n                f\"{self._rendered_url.drivername} is not asynchronous. \"\n                f\"Please use the `close` method instead.\"\n            )\n        try:\n            await self.reset_async_connections()\n        finally:\n            if self._engine is not None:\n                await self._engine.dispose()\n                self._engine = None\n                self.logger.info(\"Disposed the engine.\")\n\n    def __enter__(self):\n        \"\"\"\n        Start an synchronous database engine upon entry.\n        \"\"\"\n        if self._driver_is_async:\n            raise RuntimeError(\n                f\"{self._rendered_url.drivername} cannot be run synchronously. 
\"\n                f\"Please use the `async with` syntax.\"\n            )\n        return self\n\n    def __exit__(self, *args):\n        \"\"\"\n        Dispose the synchronous database engine upon exit.\n        \"\"\"\n        self.close()\n\n    def close(self):\n        \"\"\"\n        Closes sync connections and its cursors.\n        \"\"\"\n        if self._driver_is_async:\n            raise RuntimeError(\n                f\"{self._rendered_url.drivername} is not synchronous. \"\n                f\"Please use the `aclose` method instead.\"\n            )\n\n        try:\n            self.reset_connections()\n        finally:\n            if self._engine is not None:\n                self._engine.dispose()\n                self._engine = None\n                self.logger.info(\"Disposed the engine.\")\n\n    def __getstate__(self):\n        \"\"\"Allows the block to be pickleable.\"\"\"\n        data = self.__dict__.copy()\n        data.update({k: None for k in {\"_engine\", \"_exit_stack\", \"_unique_results\"}})\n        return data\n\n    def __setstate__(self, data: dict):\n        \"\"\"Upon loading back, restart the engine and results.\"\"\"\n        self.__dict__.update(data)\n\n        if self._unique_results is None:\n            self._unique_results = {}\n\n        if self._exit_stack is None:\n            self._start_exit_stack()\n
    "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.Config","title":"Config","text":"

    Configuration of pydantic.

    Source code in prefect_sqlalchemy/database.py
    class Config:\n    \"\"\"Configuration of pydantic.\"\"\"\n\n    # Support serialization of the 'URL' type\n    arbitrary_types_allowed = True\n    json_encoders = {URL: lambda u: u.render_as_string()}\n
    "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.aclose","title":"aclose async","text":"

    Closes async connections and their cursors.

    Source code in prefect_sqlalchemy/database.py
    async def aclose(self):\n    \"\"\"\n    Closes async connections and its cursors.\n    \"\"\"\n    if not self._driver_is_async:\n        raise RuntimeError(\n            f\"{self._rendered_url.drivername} is not asynchronous. \"\n            f\"Please use the `close` method instead.\"\n        )\n    try:\n        await self.reset_async_connections()\n    finally:\n        if self._engine is not None:\n            await self._engine.dispose()\n            self._engine = None\n            self.logger.info(\"Disposed the engine.\")\n
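    If the block is not used with `async with`, the engine can be released explicitly with `aclose`. A minimal sketch, assuming a saved block named "MY_ASYNC_BLOCK" backed by an async driver:

```python
import asyncio

from prefect_sqlalchemy import SqlAlchemyConnector


async def example():
    # "MY_ASYNC_BLOCK" is a placeholder for a saved block using an async driver.
    database = await SqlAlchemyConnector.load("MY_ASYNC_BLOCK")
    try:
        rows = await database.fetch_all("SELECT 1;")
        print(rows)
    finally:
        # Closes open connections/results and disposes of the engine.
        await database.aclose()


asyncio.run(example())
```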
    "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.block_initialization","title":"block_initialization","text":"

    Initializes the engine.

    Source code in prefect_sqlalchemy/database.py
    def block_initialization(self):\n    \"\"\"\n    Initializes the engine.\n    \"\"\"\n    super().block_initialization()\n\n    if isinstance(self.connection_info, ConnectionComponents):\n        self._rendered_url = self.connection_info.create_url()\n    else:\n        # make rendered url from string\n        self._rendered_url = make_url(str(self.connection_info))\n    drivername = self._rendered_url.drivername\n\n    try:\n        AsyncDriver(drivername)\n        self._driver_is_async = True\n    except ValueError:\n        self._driver_is_async = False\n\n    if self._unique_results is None:\n        self._unique_results = {}\n\n    if self._exit_stack is None:\n        self._start_exit_stack()\n
    "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.close","title":"close","text":"

    Closes sync connections and their cursors.

    Source code in prefect_sqlalchemy/database.py
    def close(self):\n    \"\"\"\n    Closes sync connections and its cursors.\n    \"\"\"\n    if self._driver_is_async:\n        raise RuntimeError(\n            f\"{self._rendered_url.drivername} is not synchronous. \"\n            f\"Please use the `aclose` method instead.\"\n        )\n\n    try:\n        self.reset_connections()\n    finally:\n        if self._engine is not None:\n            self._engine.dispose()\n            self._engine = None\n            self.logger.info(\"Disposed the engine.\")\n
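    For a block backed by a sync driver, `close` plays the same role; a minimal sketch, assuming a saved block named "MY_BLOCK" that uses a sync driver:

```python
from prefect_sqlalchemy import SqlAlchemyConnector

# "MY_BLOCK" is a placeholder for a saved block using a sync driver.
database = SqlAlchemyConnector.load("MY_BLOCK")
try:
    print(database.fetch_one("SELECT 1;"))
finally:
    # Closes open connections/results and disposes of the engine;
    # calling this on an async-driver block raises a RuntimeError.
    database.close()
```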
    "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.dict","title":"dict","text":"

    Convert to a dictionary.

    Source code in prefect_sqlalchemy/database.py
    def dict(self, *args, **kwargs) -> Dict:\n    \"\"\"\n    Convert to a dictionary.\n    \"\"\"\n    # Support serialization of the 'URL' type\n    d = super().dict(*args, **kwargs)\n    d[\"_rendered_url\"] = SecretStr(\n        self._rendered_url.render_as_string(hide_password=False)\n    )\n    return d\n
    "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.execute","title":"execute async","text":"

    Executes an operation on the database. This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE.

    Unlike the fetch methods, this method will always execute the operation upon calling.

    Parameters:

    operation (str, required): The SQL query or other operation to be executed.
    parameters (Optional[Dict[str, Any]], default None): The parameters for the operation.
    **execution_options (Dict[str, Any], default {}): Options to pass to Connection.execution_options.

    Examples:

    Create a table and insert one row into it.

    from prefect_sqlalchemy import SqlAlchemyConnector\n\nwith SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n    database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n    database.execute(\n        \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n        parameters={\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n    )\n

    Source code in prefect_sqlalchemy/database.py
    @sync_compatible\nasync def execute(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    **execution_options: Dict[str, Any],\n) -> None:\n    \"\"\"\n    Executes an operation on the database. This method is intended to be used\n    for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n\n    Unlike the fetch methods, this method will always execute the operation\n    upon calling.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        **execution_options: Options to pass to `Connection.execution_options`.\n\n    Examples:\n        Create a table and insert one row into it.\n        ```python\n        from prefect_sqlalchemy import SqlAlchemyConnector\n\n        with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n            database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n            database.execute(\n                \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                parameters={\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n            )\n        ```\n    \"\"\"  # noqa\n    async with self._manage_connection(begin=False) as connection:\n        await self._async_sync_execute(\n            connection,\n            text(operation),\n            parameters,\n            execution_options=execution_options,\n        )\n    self.logger.info(f\"Executed the operation, {operation!r}\")\n
    "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.execute_many","title":"execute_many async","text":"

    Executes many operations on the database. This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE.

    Unlike the fetch methods, this method will always execute the operation upon calling.

    Parameters:

    operation (str, required): The SQL query or other operation to be executed.
    seq_of_parameters (List[Dict[str, Any]], required): The sequence of parameters for the operation.
    **execution_options (Dict[str, Any], default {}): Options to pass to Connection.execution_options.

    Examples:

    Create a table and insert two rows into it.

    from prefect_sqlalchemy import SqlAlchemyConnector\n\nwith SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n    database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n    database.execute_many(\n        \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n        seq_of_parameters=[\n            {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n            {\"name\": \"Unknown\", \"address\": \"Space\"},\n            {\"name\": \"Me\", \"address\": \"Myway 88\"},\n        ],\n    )\n

    Source code in prefect_sqlalchemy/database.py
    @sync_compatible\nasync def execute_many(\n    self,\n    operation: str,\n    seq_of_parameters: List[Dict[str, Any]],\n    **execution_options: Dict[str, Any],\n) -> None:\n    \"\"\"\n    Executes many operations on the database. This method is intended to be used\n    for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n\n    Unlike the fetch methods, this method will always execute the operation\n    upon calling.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        seq_of_parameters: The sequence of parameters for the operation.\n        **execution_options: Options to pass to `Connection.execution_options`.\n\n    Examples:\n        Create a table and insert two rows into it.\n        ```python\n        from prefect_sqlalchemy import SqlAlchemyConnector\n\n        with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n            database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n            database.execute_many(\n                \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                seq_of_parameters=[\n                    {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Unknown\", \"address\": \"Space\"},\n                    {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                ],\n            )\n        ```\n    \"\"\"  # noqa\n    async with self._manage_connection(begin=False) as connection:\n        await self._async_sync_execute(\n            connection,\n            text(operation),\n            seq_of_parameters,\n            execution_options=execution_options,\n        )\n    self.logger.info(\n        f\"Executed {len(seq_of_parameters)} operations based off {operation!r}.\"\n    )\n
    "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.fetch_all","title":"fetch_all async","text":"

    Fetch all results from the database.

    Repeated calls using the same inputs to any of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset_connections (or reset_async_connections) method is called.

    Parameters:

    operation (str, required): The SQL query or other operation to be executed.
    parameters (Optional[Dict[str, Any]], default None): The parameters for the operation.
    **execution_options (Dict[str, Any], default {}): Options to pass to Connection.execution_options.

    Returns:

    List[Tuple[Any]]: A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple.

    Examples:

    Create a table, insert three rows into it, and fetch all where name is 'Me'.

    from prefect_sqlalchemy import SqlAlchemyConnector\n\nwith SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n    database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n    database.execute_many(\n        \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n        seq_of_parameters=[\n            {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n            {\"name\": \"Unknown\", \"address\": \"Space\"},\n            {\"name\": \"Me\", \"address\": \"Myway 88\"},\n        ],\n    )\n    results = database.fetch_all(\"SELECT * FROM customers WHERE name = :name\", parameters={\"name\": \"Me\"})\n

    Source code in prefect_sqlalchemy/database.py
    @sync_compatible\nasync def fetch_all(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    **execution_options: Dict[str, Any],\n) -> List[Tuple[Any]]:\n    \"\"\"\n    Fetch all results from the database.\n\n    Repeated calls using the same inputs to *any* of the fetch methods of this\n    block will skip executing the operation again, and instead,\n    return the next set of results from the previous execution,\n    until the reset_cursors method is called.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        **execution_options: Options to pass to `Connection.execution_options`.\n\n    Returns:\n        A list of tuples containing the data returned by the database,\n            where each row is a tuple and each column is a value in the tuple.\n\n    Examples:\n        Create a table, insert three rows into it, and fetch all where name is 'Me'.\n        ```python\n        from prefect_sqlalchemy import SqlAlchemyConnector\n\n        with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n            database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n            database.execute_many(\n                \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                seq_of_parameters=[\n                    {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Unknown\", \"address\": \"Space\"},\n                    {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                ],\n            )\n            results = database.fetch_all(\"SELECT * FROM customers WHERE name = :name\", parameters={\"name\": \"Me\"})\n        ```\n    \"\"\"  # noqa\n    result_set = await self._get_result_set(\n        text(operation), parameters, execution_options=execution_options\n    )\n    self.logger.debug(\"Preparing to fetch all rows.\")\n    rows = result_set.fetchall()\n    return rows\n
    "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.fetch_many","title":"fetch_many async","text":"

    Fetch a limited number of results from the database.

    Repeated calls using the same inputs to any of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset_connections (or reset_async_connections) method is called.

    Parameters:

    operation (str, required): The SQL query or other operation to be executed.
    parameters (Optional[Dict[str, Any]], default None): The parameters for the operation.
    size (Optional[int], default None): The number of results to return; if None or 0, uses the value of fetch_size configured on the block.
    **execution_options (Dict[str, Any], default {}): Options to pass to Connection.execution_options.

    Returns:

    List[Tuple[Any]]: A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple.

    Examples:

    Create a table, insert three rows into it, and fetch two rows repeatedly.

    from prefect_sqlalchemy import SqlAlchemyConnector\n\nwith SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n    database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n    database.execute_many(\n        \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n        seq_of_parameters=[\n            {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n            {\"name\": \"Unknown\", \"address\": \"Space\"},\n            {\"name\": \"Me\", \"address\": \"Myway 88\"},\n        ],\n    )\n    results = database.fetch_many(\"SELECT * FROM customers\", size=2)\n    print(results)\n    results = database.fetch_many(\"SELECT * FROM customers\", size=2)\n    print(results)\n

    Source code in prefect_sqlalchemy/database.py
    @sync_compatible\nasync def fetch_many(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    size: Optional[int] = None,\n    **execution_options: Dict[str, Any],\n) -> List[Tuple[Any]]:\n    \"\"\"\n    Fetch a limited number of results from the database.\n\n    Repeated calls using the same inputs to *any* of the fetch methods of this\n    block will skip executing the operation again, and instead,\n    return the next set of results from the previous execution,\n    until the reset_cursors method is called.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        size: The number of results to return; if None or 0, uses the value of\n            `fetch_size` configured on the block.\n        **execution_options: Options to pass to `Connection.execution_options`.\n\n    Returns:\n        A list of tuples containing the data returned by the database,\n            where each row is a tuple and each column is a value in the tuple.\n\n    Examples:\n        Create a table, insert three rows into it, and fetch two rows repeatedly.\n        ```python\n        from prefect_sqlalchemy import SqlAlchemyConnector\n\n        with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n            database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n            database.execute_many(\n                \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                seq_of_parameters=[\n                    {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Unknown\", \"address\": \"Space\"},\n                    {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                ],\n            )\n            results = database.fetch_many(\"SELECT * FROM customers\", size=2)\n            print(results)\n            results = database.fetch_many(\"SELECT * FROM customers\", size=2)\n            print(results)\n        ```\n    \"\"\"  # noqa\n    result_set = await self._get_result_set(\n        text(operation), parameters, execution_options=execution_options\n    )\n    size = size or self.fetch_size\n    self.logger.debug(f\"Preparing to fetch {size} rows.\")\n    rows = result_set.fetchmany(size=size)\n    return rows\n
    "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.fetch_one","title":"fetch_one async","text":"

    Fetch a single result from the database.

    Repeated calls using the same inputs to any of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset_cursors method is called.

    Parameters:

    Name Type Description Default operation str

    The SQL query or other operation to be executed.

    required parameters Optional[Dict[str, Any]]

    The parameters for the operation.

    None **execution_options Dict[str, Any]

    Options to pass to Connection.execution_options.

    {}

    Returns:

    Type Description Tuple[Any]

    A tuple containing the data of a single row returned by the database, where each column is a value in the tuple.

    Examples:

    Create a table, insert three rows into it, and fetch a row repeatedly.

    from prefect_sqlalchemy import SqlAlchemyConnector\n\nwith SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n    database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n    database.execute_many(\n        \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n        seq_of_parameters=[\n            {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n            {\"name\": \"Unknown\", \"address\": \"Space\"},\n            {\"name\": \"Me\", \"address\": \"Myway 88\"},\n        ],\n    )\n    results = True\n    while results:\n        results = database.fetch_one(\"SELECT * FROM customers\")\n        print(results)\n

    Source code in prefect_sqlalchemy/database.py
    @sync_compatible\nasync def fetch_one(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    **execution_options: Dict[str, Any],\n) -> Tuple[Any]:\n    \"\"\"\n    Fetch a single result from the database.\n\n    Repeated calls using the same inputs to *any* of the fetch methods of this\n    block will skip executing the operation again, and instead,\n    return the next set of results from the previous execution,\n    until the reset_cursors method is called.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        **execution_options: Options to pass to `Connection.execution_options`.\n\n    Returns:\n        A list of tuples containing the data returned by the database,\n            where each row is a tuple and each column is a value in the tuple.\n\n    Examples:\n        Create a table, insert three rows into it, and fetch a row repeatedly.\n        ```python\n        from prefect_sqlalchemy import SqlAlchemyConnector\n\n        with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n            database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n            database.execute_many(\n                \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                seq_of_parameters=[\n                    {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Unknown\", \"address\": \"Space\"},\n                    {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                ],\n            )\n            results = True\n            while results:\n                results = database.fetch_one(\"SELECT * FROM customers\")\n                print(results)\n        ```\n    \"\"\"  # noqa\n    result_set = await self._get_result_set(\n        text(operation), parameters, execution_options=execution_options\n    )\n    self.logger.debug(\"Preparing to fetch one row.\")\n    row = result_set.fetchone()\n    return row\n
    "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.get_client","title":"get_client","text":"

    Returns either an engine or connection that can be used to query from databases.

    Parameters:

    Name Type Description Default client_type Literal['engine', 'connection']

    Select from either 'engine' or 'connection'.

    required **get_client_kwargs Dict[str, Any]

    Additional keyword arguments to pass to either get_engine or get_connection.

    {}

    Returns:

    Type Description Union[Engine, AsyncEngine, Connection, AsyncConnection]

    The authenticated SQLAlchemy engine or connection.

    Examples:

    Create an engine.

    from prefect_sqlalchemy import SqlAlchemyConnector\n\nsqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\nengine = sqlalchemy_connector.get_client(client_type=\"engine\")\n

    Create a context managed connection.

    from prefect_sqlalchemy import SqlAlchemyConnector\n\nsqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\nwith sqlalchemy_connector.get_client(client_type=\"connection\") as conn:\n    ...\n

    Source code in prefect_sqlalchemy/database.py
    def get_client(\n    self,\n    client_type: Literal[\"engine\", \"connection\"],\n    **get_client_kwargs: Dict[str, Any],\n) -> Union[Engine, AsyncEngine, Connection, AsyncConnection]:\n    \"\"\"\n    Returns either an engine or connection that can be used to query from databases.\n\n    Args:\n        client_type: Select from either 'engine' or 'connection'.\n        **get_client_kwargs: Additional keyword arguments to pass to\n            either `get_engine` or `get_connection`.\n\n    Returns:\n        The authenticated SQLAlchemy engine or connection.\n\n    Examples:\n        Create an engine.\n        ```python\n        from prefect_sqlalchemy import SqlalchemyConnector\n\n        sqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\n        engine = sqlalchemy_connector.get_client(client_type=\"engine\")\n        ```\n\n        Create a context managed connection.\n        ```python\n        from prefect_sqlalchemy import SqlalchemyConnector\n\n        sqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\n        with sqlalchemy_connector.get_client(client_type=\"connection\") as conn:\n            ...\n        ```\n    \"\"\"  # noqa: E501\n    if client_type == \"engine\":\n        client = self.get_engine(**get_client_kwargs)\n    elif client_type == \"connection\":\n        client = self.get_connection(**get_client_kwargs)\n    else:\n        raise ValueError(\n            f\"{client_type!r} is not supported; choose from engine or connection.\"\n        )\n    return client\n
    "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.get_connection","title":"get_connection","text":"

    Returns a connection that can be used to query from databases.

    Parameters:

    Name Type Description Default begin bool

    Whether to begin a transaction on the connection; if True and any operations fail, the entire transaction will be rolled back.

    True **connect_kwargs Dict[str, Any]

    Additional keyword arguments to pass to either engine.begin or engine.connect.

    {}

    Returns:

    Type Description Union[Connection, AsyncConnection]

    The SQLAlchemy Connection / AsyncConnection.

    Examples:

    Create a synchronous connection as a context-managed transaction.

    from prefect_sqlalchemy import SqlAlchemyConnector\n\nsqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\nwith sqlalchemy_connector.get_connection(begin=False) as connection:\n    connection.execute(\"SELECT * FROM table LIMIT 1;\")\n
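
    Create a synchronous connection that begins a transaction (a minimal sketch, assuming the customers table from the fetch examples exists; with begin=True, the transaction is rolled back if any statement fails and committed when the block exits).

    from prefect_sqlalchemy import SqlAlchemyConnector\n\nsqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\nwith sqlalchemy_connector.get_connection(begin=True) as connection:\n    connection.execute(\"INSERT INTO customers (name, address) VALUES ('Marvin', 'Highway 42');\")\n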

    Create an asynchronous connection as a context-managed transaction.

    import asyncio\nfrom prefect_sqlalchemy import SqlAlchemyConnector\n\nasync def example_run():\n    sqlalchemy_connector = await SqlAlchemyConnector.load(\"BLOCK_NAME\")\n    async with sqlalchemy_connector.get_connection(begin=False) as connection:\n        await connection.execute(\"SELECT * FROM table LIMIT 1;\")\n\nasyncio.run(example_run())\n

    Source code in prefect_sqlalchemy/database.py
    def get_connection(\n    self, begin: bool = True, **connect_kwargs: Dict[str, Any]\n) -> Union[Connection, AsyncConnection]:\n    \"\"\"\n    Returns a connection that can be used to query from databases.\n\n    Args:\n        begin: Whether to begin a transaction on the connection; if True, if\n            any operations fail, the entire transaction will be rolled back.\n        **connect_kwargs: Additional keyword arguments to pass to either\n            `engine.begin` or engine.connect`.\n\n    Returns:\n        The SQLAlchemy Connection / AsyncConnection.\n\n    Examples:\n        Create an synchronous connection as a context-managed transaction.\n        ```python\n        from prefect_sqlalchemy import SqlAlchemyConnector\n\n        sqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\n        with sqlalchemy_connector.get_connection(begin=False) as connection:\n            connection.execute(\"SELECT * FROM table LIMIT 1;\")\n        ```\n\n        Create an asynchronous connection as a context-managed transacation.\n        ```python\n        import asyncio\n        from prefect_sqlalchemy import SqlAlchemyConnector\n\n        sqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\n        async with sqlalchemy_connector.get_connection(begin=False) as connection:\n            asyncio.run(connection.execute(\"SELECT * FROM table LIMIT 1;\"))\n        ```\n    \"\"\"  # noqa: E501\n    engine = self.get_engine()\n    if begin:\n        connection = engine.begin(**connect_kwargs)\n    else:\n        connection = engine.connect(**connect_kwargs)\n    self.logger.info(\"Created a new connection.\")\n    return connection\n
    "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.get_engine","title":"get_engine","text":"

    Returns an authenticated engine that can be used to query from databases.

    If an engine has already been created, it is reused.

    Returns:

    Type Description Union[Engine, AsyncEngine]

    The authenticated SQLAlchemy Engine / AsyncEngine.

    Examples:

    Create an asynchronous engine to PostgreSQL using URL params.

    from prefect import flow\nfrom prefect_sqlalchemy import (\n    SqlAlchemyConnector, ConnectionComponents, AsyncDriver\n)\n\n@flow\ndef sqlalchemy_credentials_flow():\n    sqlalchemy_credentials = SqlAlchemyConnector(\n    connection_info=ConnectionComponents(\n            driver=AsyncDriver.POSTGRESQL_ASYNCPG,\n            username=\"prefect\",\n            password=\"prefect_password\",\n            database=\"postgres\"\n        )\n    )\n    print(sqlalchemy_credentials.get_engine())\n\nsqlalchemy_credentials_flow()\n

    Create a synchronous engine to Snowflake using the url kwarg.

    from prefect import flow\nfrom prefect_sqlalchemy import SqlAlchemyConnector, AsyncDriver\n\n@flow\ndef sqlalchemy_credentials_flow():\n    url = (\n        \"snowflake://<user_login_name>:<password>\"\n        \"@<account_identifier>/<database_name>\"\n        \"?warehouse=<warehouse_name>\"\n    )\n    sqlalchemy_credentials = SqlAlchemyConnector(url=url)\n    print(sqlalchemy_credentials.get_engine())\n\nsqlalchemy_credentials_flow()\n
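
    Reuse the cached engine across calls (a minimal sketch; the assertion reflects the caching behavior described above, and the block name is illustrative).

    from prefect_sqlalchemy import SqlAlchemyConnector\n\nsqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\nengine_1 = sqlalchemy_connector.get_engine()\nengine_2 = sqlalchemy_connector.get_engine()\nassert engine_1 is engine_2  # the cached engine is reused\n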

    Source code in prefect_sqlalchemy/database.py
    def get_engine(\n    self, **create_engine_kwargs: Dict[str, Any]\n) -> Union[Engine, AsyncEngine]:\n    \"\"\"\n    Returns an authenticated engine that can be\n    used to query from databases.\n\n    If an existing engine exists, return that one.\n\n    Returns:\n        The authenticated SQLAlchemy Engine / AsyncEngine.\n\n    Examples:\n        Create an asynchronous engine to PostgreSQL using URL params.\n        ```python\n        from prefect import flow\n        from prefect_sqlalchemy import (\n            SqlAlchemyConnector, ConnectionComponents, AsyncDriver\n        )\n\n        @flow\n        def sqlalchemy_credentials_flow():\n            sqlalchemy_credentials = SqlAlchemyConnector(\n            connection_info=ConnectionComponents(\n                    driver=AsyncDriver.POSTGRESQL_ASYNCPG,\n                    username=\"prefect\",\n                    password=\"prefect_password\",\n                    database=\"postgres\"\n                )\n            )\n            print(sqlalchemy_credentials.get_engine())\n\n        sqlalchemy_credentials_flow()\n        ```\n\n        Create a synchronous engine to Snowflake using the `url` kwarg.\n        ```python\n        from prefect import flow\n        from prefect_sqlalchemy import SqlAlchemyConnector, AsyncDriver\n\n        @flow\n        def sqlalchemy_credentials_flow():\n            url = (\n                \"snowflake://<user_login_name>:<password>\"\n                \"@<account_identifier>/<database_name>\"\n                \"?warehouse=<warehouse_name>\"\n            )\n            sqlalchemy_credentials = SqlAlchemyConnector(url=url)\n            print(sqlalchemy_credentials.get_engine())\n\n        sqlalchemy_credentials_flow()\n        ```\n    \"\"\"\n    if self._engine is not None:\n        self.logger.debug(\"Reusing existing engine.\")\n        return self._engine\n\n    engine_kwargs = dict(\n        url=self._rendered_url,\n        connect_args=self.connect_args or {},\n        **create_engine_kwargs,\n    )\n    if self._driver_is_async:\n        # no need to await here\n        engine = create_async_engine(**engine_kwargs)\n    else:\n        engine = create_engine(**engine_kwargs)\n    self.logger.info(\"Created a new engine.\")\n\n    if self._engine is None:\n        self._engine = engine\n\n    return engine\n
    "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.reset_async_connections","title":"reset_async_connections async","text":"

    Tries to close all opened connections and their results.

    Examples:

    Resets connections so fetch_* methods return new results.

    import asyncio\nfrom prefect_sqlalchemy import SqlAlchemyConnector\n\nasync def example_run():\n    async with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n        results = await database.fetch_one(\"SELECT * FROM customers\")\n        await database.reset_async_connections()\n        results = await database.fetch_one(\"SELECT * FROM customers\")\n\nasyncio.run(example_run())\n

    Source code in prefect_sqlalchemy/database.py
    async def reset_async_connections(self) -> None:\n    \"\"\"\n    Tries to close all opened connections and their results.\n\n    Examples:\n        Resets connections so `fetch_*` methods return new results.\n        ```python\n        import asyncio\n        from prefect_sqlalchemy import SqlAlchemyConnector\n\n        async def example_run():\n            async with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n                results = await database.fetch_one(\"SELECT * FROM customers\")\n                await database.reset_async_connections()\n                results = await database.fetch_one(\"SELECT * FROM customers\")\n\n        asyncio.run(example_run())\n        ```\n    \"\"\"\n    if not self._driver_is_async:\n        raise RuntimeError(\n            f\"{self._rendered_url.drivername} has no asynchronous connections. \"\n            f\"Please use the `reset_connections` method instead.\"\n        )\n\n    if self._exit_stack is None:\n        self.logger.info(\"There were no connections to reset.\")\n        return\n\n    self._reset_cursor_results()\n    await self._exit_stack.aclose()\n    self.logger.info(\"Reset opened connections and their results.\")\n
    "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.reset_connections","title":"reset_connections async","text":"

    Tries to close all opened connections and their results.

    Examples:

    Resets connections so fetch_* methods return new results.

    from prefect_sqlalchemy import SqlAlchemyConnector\n\nwith SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n    results = database.fetch_one(\"SELECT * FROM customers\")\n    database.reset_connections()\n    results = database.fetch_one(\"SELECT * FROM customers\")\n

    Source code in prefect_sqlalchemy/database.py
    @sync_compatible\nasync def reset_connections(self) -> None:\n    \"\"\"\n    Tries to close all opened connections and their results.\n\n    Examples:\n        Resets connections so `fetch_*` methods return new results.\n        ```python\n        from prefect_sqlalchemy import SqlAlchemyConnector\n\n        with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n            results = database.fetch_one(\"SELECT * FROM customers\")\n            database.reset_connections()\n            results = database.fetch_one(\"SELECT * FROM customers\")\n        ```\n    \"\"\"\n    if self._driver_is_async:\n        raise RuntimeError(\n            f\"{self._rendered_url.drivername} has no synchronous connections. \"\n            f\"Please use the `reset_async_connections` method instead.\"\n        )\n\n    if self._exit_stack is None:\n        self.logger.info(\"There were no connections to reset.\")\n        return\n\n    self._reset_cursor_results()\n    self._exit_stack.close()\n    self.logger.info(\"Reset opened connections and their results.\")\n
    "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.sqlalchemy_execute","title":"sqlalchemy_execute async","text":"

    Executes a SQL DDL or DML statement; useful for creating tables and inserting rows since this task does not return any objects.

    Parameters:

    Name Type Description Default statement str

    The statement to execute against the database.

    required sqlalchemy_credentials DatabaseCredentials

    The credentials to use to authenticate.

    required params Optional[Union[Tuple[Any], Dict[str, Any]]]

    The params to replace the placeholders in the query.

    None

    Examples:

    Create table named customers and insert values.

    from prefect_sqlalchemy import DatabaseCredentials, AsyncDriver\nfrom prefect_sqlalchemy.database import sqlalchemy_execute\nfrom prefect import flow\n\n@flow\ndef sqlalchemy_execute_flow():\n    sqlalchemy_credentials = DatabaseCredentials(\n        driver=AsyncDriver.SQLITE_AIOSQLITE,\n        database=\"prefect.db\",\n    )\n    sqlalchemy_execute(\n        \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\",\n        sqlalchemy_credentials,\n    )\n    sqlalchemy_execute(\n        \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n        sqlalchemy_credentials,\n        params={\"name\": \"Marvin\", \"address\": \"Highway 42\"}\n    )\n\nsqlalchemy_execute_flow()\n

    Source code in prefect_sqlalchemy/database.py
    @task\nasync def sqlalchemy_execute(\n    statement: str,\n    sqlalchemy_credentials: \"DatabaseCredentials\",\n    params: Optional[Union[Tuple[Any], Dict[str, Any]]] = None,\n):\n    \"\"\"\n    Executes a SQL DDL or DML statement; useful for creating tables and inserting rows\n    since this task does not return any objects.\n\n    Args:\n        statement: The statement to execute against the database.\n        sqlalchemy_credentials: The credentials to use to authenticate.\n        params: The params to replace the placeholders in the query.\n\n    Examples:\n        Create table named customers and insert values.\n        ```python\n        from prefect_sqlalchemy import DatabaseCredentials, AsyncDriver\n        from prefect_sqlalchemy.database import sqlalchemy_execute\n        from prefect import flow\n\n        @flow\n        def sqlalchemy_execute_flow():\n            sqlalchemy_credentials = DatabaseCredentials(\n                driver=AsyncDriver.SQLITE_AIOSQLITE,\n                database=\"prefect.db\",\n            )\n            sqlalchemy_execute(\n                \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\",\n                sqlalchemy_credentials,\n            )\n            sqlalchemy_execute(\n                \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                sqlalchemy_credentials,\n                params={\"name\": \"Marvin\", \"address\": \"Highway 42\"}\n            )\n\n        sqlalchemy_execute_flow()\n        ```\n    \"\"\"\n    warnings.warn(\n        \"sqlalchemy_query is now deprecated and will be removed March 2023; \"\n        \"please use SqlAlchemyConnector execute_* methods instead.\",\n        DeprecationWarning,\n    )\n    # do not return anything or else results in the error:\n    # This result object does not return rows. It has been closed automatically\n    engine = sqlalchemy_credentials.get_engine()\n    async_supported = sqlalchemy_credentials._driver_is_async\n    async with _connect(engine, async_supported) as connection:\n        await _execute(connection, statement, params, async_supported)\n
    "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.sqlalchemy_query","title":"sqlalchemy_query async","text":"

    Executes a SQL query; useful for querying data from existing tables.

    Parameters:

    Name Type Description Default query str

    The query to execute against the database.

    required sqlalchemy_credentials DatabaseCredentials

    The credentials to use to authenticate.

    required params Optional[Union[Tuple[Any], Dict[str, Any]]]

    The params to replace the placeholders in the query.

    None limit Optional[int]

    The number of rows to fetch. Note that this limit is applied on the client side, i.e. passed to fetchmany. To limit on the server side, add the LIMIT clause, or the dialect's equivalent clause, like TOP, to the query.

    None

    Returns:

    Type Description List[Tuple[Any]]

    The fetched results.

    Examples:

    Query a table with the name value parameterized.

    from prefect_sqlalchemy import DatabaseCredentials, AsyncDriver\nfrom prefect_sqlalchemy.database import sqlalchemy_query\nfrom prefect import flow\n\n@flow\ndef sqlalchemy_query_flow():\n    sqlalchemy_credentials = DatabaseCredentials(\n        driver=AsyncDriver.SQLITE_AIOSQLITE,\n        database=\"prefect.db\",\n    )\n    result = sqlalchemy_query(\n        \"SELECT * FROM customers WHERE name = :name;\",\n        sqlalchemy_credentials,\n        params={\"name\": \"Marvin\"},\n    )\n    return result\n\nsqlalchemy_query_flow()\n
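
    For comparison, a minimal sketch of limiting rows on the server side by adding a LIMIT clause to the query itself, rather than passing the client-side limit argument (the flow name and customers table are illustrative).

    from prefect_sqlalchemy import DatabaseCredentials, AsyncDriver\nfrom prefect_sqlalchemy.database import sqlalchemy_query\nfrom prefect import flow\n\n@flow\ndef sqlalchemy_limit_flow():\n    sqlalchemy_credentials = DatabaseCredentials(\n        driver=AsyncDriver.SQLITE_AIOSQLITE,\n        database=\"prefect.db\",\n    )\n    # the LIMIT clause is applied by the database itself, not by the client-side limit argument\n    return sqlalchemy_query(\n        \"SELECT * FROM customers LIMIT 5;\",\n        sqlalchemy_credentials,\n    )\n\nsqlalchemy_limit_flow()\n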

    Source code in prefect_sqlalchemy/database.py
    @task\nasync def sqlalchemy_query(\n    query: str,\n    sqlalchemy_credentials: \"DatabaseCredentials\",\n    params: Optional[Union[Tuple[Any], Dict[str, Any]]] = None,\n    limit: Optional[int] = None,\n) -> List[Tuple[Any]]:\n    \"\"\"\n    Executes a SQL query; useful for querying data from existing tables.\n\n    Args:\n        query: The query to execute against the database.\n        sqlalchemy_credentials: The credentials to use to authenticate.\n        params: The params to replace the placeholders in the query.\n        limit: The number of rows to fetch. Note, this parameter is\n            executed on the client side, i.e. passed to `fetchmany`.\n            To limit on the server side, add the `LIMIT` clause, or\n            the dialect's equivalent clause, like `TOP`, to the query.\n\n    Returns:\n        The fetched results.\n\n    Examples:\n        Query postgres table with the ID value parameterized.\n        ```python\n        from prefect_sqlalchemy import DatabaseCredentials, AsyncDriver\n        from prefect_sqlalchemy.database import sqlalchemy_query\n        from prefect import flow\n\n        @flow\n        def sqlalchemy_query_flow():\n            sqlalchemy_credentials = DatabaseCredentials(\n                driver=AsyncDriver.SQLITE_AIOSQLITE,\n                database=\"prefect.db\",\n            )\n            result = sqlalchemy_query(\n                \"SELECT * FROM customers WHERE name = :name;\",\n                sqlalchemy_credentials,\n                params={\"name\": \"Marvin\"},\n            )\n            return result\n\n        sqlalchemy_query_flow()\n        ```\n    \"\"\"\n    warnings.warn(\n        \"sqlalchemy_query is now deprecated and will be removed March 2023; \"\n        \"please use SqlAlchemyConnector fetch_* methods instead.\",\n        DeprecationWarning,\n    )\n    engine = sqlalchemy_credentials.get_engine()\n    async_supported = sqlalchemy_credentials._driver_is_async\n    async with _connect(engine, async_supported) as connection:\n        result = await _execute(connection, query, params, async_supported)\n        # some databases, like sqlite, require a connection still open to fetch!\n        rows = result.fetchall() if limit is None else result.fetchmany(limit)\n    return rows\n
    "},{"location":"recipes/recipes/","title":"Prefect Recipes","text":"

    Prefect recipes are common, extensible examples for setting up Prefect in your execution environment with ready-made ingredients such as Dockerfiles, Terraform files, and GitHub Actions.

    Recipes are useful when you are looking for tutorials on how to deploy a worker, use event-driven flows, set up unit testing, and more.

    The following are Prefect recipes specific to Prefect 2. You can find a full repository of recipes at https://github.com/PrefectHQ/prefect-recipes and additional recipes at Prefect Discourse.

    ","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#recipe-catalog","title":"Recipe catalog","text":"Agent on Azure with Kubernetes

    Configure Prefect on Azure with Kubernetes, running a Prefect agent to execute deployment flow runs.

    Maintained by Prefect

    This recipe uses:

    Agent on ECS Fargate with AWS CLI

    Run a Prefect 2 agent on ECS Fargate using the AWS CLI.

    Maintained by Prefect

    This recipe uses:

    Agent on ECS Fargate with Terraform

    Run a Prefect 2 agent on ECS Fargate using Terraform.

    Maintained by Prefect

    This recipe uses:

    Agent on an Azure VM

    Set up an Azure VM and run a Prefect agent.

    Maintained by Prefect

    This recipe uses:

    Deploy a dlt pipeline on Prefect

    dlt is an open-source Python library that enables the declarative loading of data sources into well-structured tables or datasets by automatically inferring and evolving schemas.

    Maintained by Prefect

    This recipe uses:

    Flow Deployment with GitHub Actions

    Deploy a Prefect flow with storage and infrastructure blocks, update and push Docker image to container registry.

    Maintained by Prefect

    This recipe uses:

    Flow Deployment with GitHub Storage and Docker Infrastructure

    Create a deployment with GitHub as storage and a Docker container as infrastructure.

    Maintained by Prefect

    This recipe uses:

    Prefect server on an AKS Cluster

    Deploy a Prefect server to an Azure Kubernetes Service (AKS) Cluster with Azure Blob Storage.

    Maintained by Prefect

    This recipe uses:

    Serverless Prefect with AWS Chalice

    Execute Prefect flows in an AWS Lambda function managed by Chalice.

    Maintained by Prefect

    This recipe uses:

    Serverless Workflows with ECSTask Blocks

    Deploy a Prefect agent to AWS ECS Fargate using GitHub Actions and ECSTask infrastructure blocks.

    Maintained by Prefect

    This recipe uses:

    ","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#contributing-recipes","title":"Contributing recipes","text":"

    We're always looking for new recipe contributions! See the Prefect Recipes repository for details on how you can add your Prefect recipe, share best practices with fellow Prefect users, and earn some swag.

    Prefect recipes provide a vital cookbook where users can find helpful code examples and, when appropriate, common steps for specific Prefect use cases.

    We love recipes from anyone who has example code that another Prefect user can benefit from (e.g. a Prefect flow that loads data into Snowflake).

    Have a blog post, Discourse article, or tutorial you\u2019d like to share as a recipe? All submissions are welcome. Clone the prefect-recipes repo, create a branch, add a link to your recipe to the README, and submit a PR. Have more questions? Read on.

    ","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#what-is-a-recipe","title":"What is a recipe?","text":"

    A Prefect recipe is like a cookbook recipe: it tells you what you need \u2014 the ingredients \u2014 and some basic steps, but assumes you can put the pieces together. Think of the Hello Fresh meal experience, but for dataflows.

    A tutorial, on the other hand, is Julia Child holding your hand through the entire cooking process: explaining each ingredient and procedure, demonstrating best practices, pointing out potential problems, and generally making sure you can\u2019t stray from the happy path to a delicious meal.

    We love Julia, and we love tutorials. But we don\u2019t expect that a Prefect recipe should handhold users through every step and possible contingency of a solution. A recipe can start from an expectation of more expertise and problem-solving ability on the part of the reader.

    To see an example of a high quality recipe, check out Serverless with AWS Chalice. This recipe includes all of the elements we like to see.

    ","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#steps-to-add-your-recipe","title":"Steps to add your recipe","text":"

    Here\u2019s our guide to creating a recipe:

    # Clone the repository\ngit clone git@github.com:PrefectHQ/prefect-recipes.git\ncd prefect-recipes\n\n# Create and checkout a new branch\n\ngit checkout -b new_recipe_branch_name\n
    1. Add your recipe. Your code may simply be a copy/paste of a single Python file or an entire folder. Unsure of where to add your file or folder? Just add it under the flows-advanced/ folder. A Prefect Recipes maintainer will help you find the best place for your recipe. Just want to direct others to a project you made, whether it be a repo or a blog post? Simply link to it in the Prefect Recipes README!
    2. (Optional) Write a README.
    3. Include a dependencies file, if applicable.
    4. Push your code and make a PR to the repository.

    That\u2019s it!

    ","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#what-makes-a-good-recipe","title":"What makes a good recipe?","text":"

    Every recipe is useful, as other Prefect users can adapt the recipe to their needs. Particularly good ones help a Prefect user bake a great dataflow solution! Take a look at the prefect-recipes repo to see some examples.

    ","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#what-are-the-common-ingredients-of-a-good-recipe","title":"What are the common ingredients of a good recipe?","text":"
    • Easy to understand: Can a user easily follow your recipe? Would a README or code comments help? A simple explanation providing context on how to use the example code is useful, but not required. A good README can set a recipe apart, so we have some additional suggestions for README files below.
    • Code and more: Sometimes a use case is best represented in Python code or shell scripts. Sometimes a configuration file is the most important artifact \u2014 think of a Dockerfile or Terraform file for configuring infrastructure.
    • All-inclusive: Share as much code as you can. Even boilerplate code like Dockerfiles or Terraform or Helm files are useful. Just don\u2019t share company secrets or IP.
    • Specific: Don't worry about generalizing your code, aside from removing anything internal/secret! Other users will extrapolate their own unique solutions from your example.
    ","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#what-are-some-tips-for-a-good-recipe-readme","title":"What are some tips for a good recipe README?","text":"

    A thoughtful README can take a recipe from good to great. Here are some best practices that we\u2019ve found make for a great recipe README:

    • Provide a brief explanation of what your recipe demonstrates. This helps users determine quickly whether the recipe is relevant to their needs or answers their questions.
    • List which files are included and what each is meant to do. Each explanation needs only a few words.
    • Describe any dependencies and prerequisites (in addition to any dependencies you include in a requirements file). This includes both libraries or modules and any services your recipe depends on.
    • If steps are involved or there\u2019s an order to do things, a simple list of steps is helpful.
    • Bonus: troubleshooting steps you encountered along the way, or tips on where other users might get tripped up.
    ","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#next-steps","title":"Next steps","text":"

    We hope you\u2019ll feel comfortable sharing your Prefect solutions as recipes in the prefect-recipes repo. Collaboration and knowledge sharing are defining attributes of our Prefect Community!

    Have questions about sharing or using recipes? Reach out on our active Prefect Slack Community!

    Happy engineering!

    ","tags":["recipes","best practices","examples"],"boost":2},{"location":"tutorial/","title":"Tutorial Overview","text":"

    Prefect orchestrates workflows \u2014 it simplifies the creation, scheduling, and monitoring of complex data pipelines. You define workflows as Python code and Prefect handles the rest.

    Prefect also provides error handling, retry mechanisms, and a user-friendly dashboard for monitoring. It's the easiest way to transform any Python function into a unit of work that can be observed and orchestrated.

    This tutorial provides a guided walk-through of Prefect's core concepts and instructions on how to use them.

    You will:

    1. Create a flow
    2. Add tasks to it
    3. Deploy and run the flow locally
    4. Create a work pool and run the flow on remote infrastructure

    These four topics will get most users to their first production deployment.

    Advanced users who need more governance and control over their workflow infrastructure can go one step further by:

    5. Using a worker-based deployment

    If you're looking for examples of more advanced operations (such as deploying on Kubernetes), check out Prefect's guides.

    Compared to the Quickstart, this tutorial is a more in-depth guide to Prefect's functionality. You will also see how to customize the Docker image where your flow runs and learn how to run flows on your own infrastructure.

    ","tags":["tutorial","getting started","basics","tasks","flows","subflows","deployments","workers","work pools"],"boost":2},{"location":"tutorial/#prerequisites","title":"Prerequisites","text":"

    Before you start, make sure you have Python installed in a virtual environment. Then install Prefect:

    pip install -U prefect\n

    See the install guide for more detailed instructions.

    To get the most out of Prefect, you need to connect to a forever-free Prefect Cloud account.

    1. Create a new account or sign in at https://app.prefect.cloud/.
    2. Use the prefect cloud login CLI command to authenticate to Prefect Cloud from your environment.
    prefect cloud login\n

    Choose Log in with a web browser and click the Authorize button in the browser window that opens.

    If you have any issues with browser-based authentication, see the Prefect Cloud docs to learn how to authenticate with a manually created API key.

    As an alternative to using Prefect Cloud, you can self-host a Prefect server instance. If you choose this option, run prefect server start to start a local Prefect server instance.
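
    For a self-hosted setup, the server is started with a single command:

    prefect server start\n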

    ","tags":["tutorial","getting started","basics","tasks","flows","subflows","deployments","workers","work pools"],"boost":2},{"location":"tutorial/#first-steps-flows","title":"First steps: Flows","text":"

    Let's begin by learning how to create your first Prefect flow - click here to get started.

    ","tags":["tutorial","getting started","basics","tasks","flows","subflows","deployments","workers","work pools"],"boost":2},{"location":"tutorial/deployments/","title":"Deploying Flows","text":"

    Reminder to connect to Prefect Cloud or a self-hosted Prefect server instance

    Some features in this tutorial, such as scheduling, require you to be connected to a Prefect server. If using a self-hosted setup, run prefect server start to run both the webserver and UI. If using Prefect Cloud, make sure you have successfully authenticated your local environment.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#why-deployments","title":"Why deployments?","text":"

    Some of the most common reasons to use an orchestration tool such as Prefect are for scheduling and event-based triggering. Up to this point, we\u2019ve demonstrated running Prefect flows as scripts, but this means you have been the one triggering and managing flow runs. You can certainly continue to trigger your workflows in this way and use Prefect as a monitoring layer for other schedulers or systems, but you will miss out on many of the other benefits and features that Prefect offers.

    Deploying a flow exposes an API and UI so that you can:

    • trigger new runs, cancel active runs, pause scheduled runs, customize parameters, and more
    • remotely configure schedules and automation rules for your deployments
    • dynamically provision infrastructure using workers
    ","tags":["orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#what-is-a-deployment","title":"What is a deployment?","text":"

    Deploying a flow is the act of specifying where and how it will run. This information is encapsulated and sent to Prefect as a deployment that contains the crucial metadata needed for remote orchestration. Deployments elevate workflows from functions that you call manually to API-managed entities.

    Attributes of a deployment include (but are not limited to):

    • Flow entrypoint: path to your flow function
    • Schedule or Trigger: optional schedule or triggering rules for this deployment
    • Tags: optional text labels for organizing your deployments
    ","tags":["orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#create-a-deployment","title":"Create a deployment","text":"

    Using our get_repo_info flow from the previous sections, we can easily create a deployment for it by calling a single method on the flow object: flow.serve.

    repo_info.py
    import httpx\nfrom prefect import flow\n\n\n@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info.serve(name=\"my-first-deployment\")\n

    Running this script will do two things:

    • create a deployment called \"my-first-deployment\" for your flow in the Prefect API
    • stay running to listen for flow runs for this deployment; when a run is found, it will be asynchronously executed within a subprocess

    Deployments must be defined in static files

    Flows can be defined and run interactively, that is, within REPLs or Notebooks. Deployments, on the other hand, require that your flow definition be in a known file (which can be located on a remote filesystem in certain setups, as we'll see in the next section of the tutorial).

    Because this deployment has no schedule or triggering automation, you will need to use the UI or API to create runs for it. Let's use the CLI (in a separate terminal window) to create a run for this deployment:

    prefect deployment run 'get-repo-info/my-first-deployment'\n

    If you are watching either your terminal or your UI, you should see the newly created run execute successfully! Let's take this example further by adding a schedule and additional metadata.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#additional-options","title":"Additional options","text":"

    The serve method on flows exposes many options for the deployment. Let's use a few of these options now:

    • cron: a keyword that allows us to set a cron string schedule for the deployment; see schedules for more advanced scheduling options
    • tags: a keyword that allows us to tag this deployment and its runs for bookkeeping and filtering purposes
    • description: a keyword that allows us to document what this deployment does; by default the description is set from the docstring of the flow function, but we did not document our flow function
    • version: a keyword that allows us to track changes to our deployment; by default a hash of the file containing the flow is used; popular options include semver tags or git commit hashes

    Let's add these options to our deployment:

    if __name__ == \"__main__\":\n    get_repo_info.serve(\n        name=\"my-first-deployment\",\n        cron=\"* * * * *\",\n        tags=[\"testing\", \"tutorial\"],\n        description=\"Given a GitHub repository, logs repository statistics for that repo.\",\n        version=\"tutorial/deployments\",\n    )\n

    When you rerun this script, you will find an updated deployment in the UI that is actively scheduling work! Stop the script in the CLI using CTRL+C and your schedule will be automatically paused.

    .serve is a long-running process

    For remotely triggered or scheduled runs to be executed, your script with flow.serve must be actively running.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#running-multiple-deployments-at-once","title":"Running multiple deployments at once","text":"

    This method is useful for creating deployments for single flows, but what if we have two or more flows? This situation only requires a few additional method calls and imports to get up and running:

    multi_flow_deployment.py
    import time\nfrom prefect import flow, serve\n\n\n@flow\ndef slow_flow(sleep: int = 60):\n    \"Sleepy flow - sleeps the provided amount of time (in seconds).\"\n    time.sleep(sleep)\n\n\n@flow\ndef fast_flow():\n    \"Fastest flow this side of the Mississippi.\"\n    return\n\n\nif __name__ == \"__main__\":\n    slow_deploy = slow_flow.to_deployment(name=\"sleeper\", interval=45)\n    fast_deploy = fast_flow.to_deployment(name=\"fast\")\n    serve(slow_deploy, fast_deploy)\n

    A few observations:

    • the flow.to_deployment interface exposes the exact same options as flow.serve; this method produces a deployment object
    • the deployments are only registered with the API once serve(...) is called
    • when serving multiple deployments, the only requirement is that they share a Python environment; they can be executed and scheduled independently of each other

    Spend some time experimenting with this setup. A few potential next steps for exploration include:

    • pausing and unpausing the schedule for the \"sleeper\" deployment
    • using the UI to submit ad-hoc runs for the \"sleeper\" deployment with different values for sleep
    • cancelling an active run for the \"sleeper\" deployment from the UI (good luck cancelling the \"fast\" one \ud83d\ude09)

    Hybrid execution option

    Another implication of Prefect's deployment interface is that you can choose to use our hybrid execution model. Whether you use Prefect Cloud or host a Prefect server instance yourself, you can run workflows in the environments best suited to their execution. This model lets you use your infrastructure resources efficiently while maintaining the privacy of your code and data. No ingress is required. For more information, read about our hybrid model.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#next-steps","title":"Next steps","text":"

    Congratulations! You now have your first working deployment.

    Deploying flows through the serve method is a fast way to start scheduling flows with Prefect. However, if your team has more complex infrastructure requirements or you'd like to have Prefect manage flow execution, you can deploy flows to a work pool.

    Learn about work pools and how Prefect Cloud can handle infrastructure configuration for you in the next step of the tutorial.

    ","tags":["orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/flows/","title":"Flows","text":"

    Prerequisites

    This tutorial assumes you have already installed Prefect and connected to Prefect Cloud or a self-hosted server instance. See the prerequisites section of the tutorial for more details.

    ","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#what-is-a-flow","title":"What is a flow?","text":"

    Flows are like functions. They can take inputs, perform work, and return an output. In fact, you can turn any function into a Prefect flow by adding the @flow decorator. When a function becomes a flow, its behavior changes, giving it the following advantages:

    • All runs of the flow have persistent state. Transitions between states are recorded, allowing for flow execution to be observed and acted upon.
    • Input arguments can be type validated as workflow parameters.
    • Retries can be performed on failure.
    • Timeouts can be enforced to prevent unintentional, long-running workflows (see the sketch after this list).
    • Metadata about flow runs, such as run time and final state, is automatically tracked.
    • They can easily be elevated to a deployment, which exposes a remote API for interacting with it
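
    As a minimal sketch of the retry and timeout settings mentioned in the list above (the values and flow name are illustrative):

    import time\nfrom prefect import flow\n\n\n@flow(retries=2, retry_delay_seconds=5, timeout_seconds=10)\ndef careful_flow():\n    # a run that exceeds 10 seconds fails with a timeout;\n    # a failed run is retried up to 2 times, 5 seconds apart\n    time.sleep(1)\n\n\nif __name__ == \"__main__\":\n    careful_flow()\n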
    ","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#run-your-first-flow","title":"Run your first flow","text":"

    The simplest way to get started with Prefect is to annotate a Python function with the\u00a0@flow\u00a0decorator. The script below fetches statistics about the main Prefect repository. Note that httpx is an HTTP client library and a dependency of Prefect. Let's turn this function into a Prefect flow and run the script:

    repo_info.py
    import httpx\nfrom prefect import flow\n\n\n@flow\ndef get_repo_info():\n    url = \"https://api.github.com/repos/PrefectHQ/prefect\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    print(\"PrefectHQ/prefect repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\nif __name__ == \"__main__\":\n    get_repo_info()\n

    Running this file will result in some interesting output:

    12:47:42.792 | INFO | prefect.engine - Created flow run 'ludicrous-warthog' for flow 'get-repo-info'\nPrefectHQ/prefect repository statistics \ud83e\udd13:\nStars \ud83c\udf20 : 12146\nForks \ud83c\udf74 : 1245\n12:47:45.008 | INFO | Flow run 'ludicrous-warthog' - Finished in state Completed()\n

    Flows can contain arbitrary Python

    As we can see above, flow definitions can contain arbitrary Python logic.

    ","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#parameters","title":"Parameters","text":"

    As with any Python function, you can pass arguments to a flow. The positional and keyword arguments defined on your flow function are called parameters. Prefect will automatically perform type conversion using any provided type hints. Let's make the repository a string parameter with a default value:

    repo_info.py
    import httpx\nfrom prefect import flow\n\n\n@flow\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info(repo_name=\"PrefectHQ/marvin\")\n

    We can call our flow with varying values for the repo_name parameter (including \"bad\" values):

    python repo_info.py\n

    Try passing repo_name=\"missing-org/missing-repo\".
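
    One way to do this is to edit the __main__ block of repo_info.py (a minimal sketch):

    if __name__ == \"__main__\":\n    get_repo_info(repo_name=\"missing-org/missing-repo\")\n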

    You should see

    HTTPStatusError: Client error '404 Not Found' for url '<https://api.github.com/repos/missing-org/missing-repo>'\n

    Now navigate to your Prefect dashboard and compare the displays for these two runs.

    ","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#logging","title":"Logging","text":"

    Prefect enables you to log a variety of useful information about your flow and task runs, capturing information about your workflows for purposes such as monitoring, troubleshooting, and auditing. If we navigate to our dashboard and explore the runs we created above, we will notice that the repository statistics are not captured in the flow run logs. Let's fix that by adding some logging to our flow:

    repo_info.py
    import httpx\nfrom prefect import flow, get_run_logger\n\n\n@flow\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    logger = get_run_logger()\n    logger.info(\"%s repository statistics \ud83e\udd13:\", repo_name)\n    logger.info(f\"Stars \ud83c\udf20 : %d\", repo[\"stargazers_count\"])\n    logger.info(f\"Forks \ud83c\udf74 : %d\", repo[\"forks_count\"])\n

    Now the output looks more consistent and, more importantly, our statistics are stored in the Prefect backend and displayed in the UI for this flow run:

    12:47:42.792 | INFO    | prefect.engine - Created flow run 'ludicrous-warthog' for flow 'get-repo-info'\n12:47:43.016 | INFO    | Flow run 'ludicrous-warthog' - PrefectHQ/prefect repository statistics \ud83e\udd13:\n12:47:43.016 | INFO    | Flow run 'ludicrous-warthog' - Stars \ud83c\udf20 : 12146\n12:47:43.042 | INFO    | Flow run 'ludicrous-warthog' - Forks \ud83c\udf74 : 1245\n12:47:45.008 | INFO    | Flow run 'ludicrous-warthog' - Finished in state Completed()\n

    log_prints=True

    We could have achieved the exact same outcome by using Prefect's convenient log_prints keyword argument in the flow decorator:

    @flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    ...\n

    Logging vs Artifacts

    The example above is for educational purposes. In general, it is better to use Prefect artifacts for storing metrics and output. Logs are best for tracking progress and debugging errors.

    ","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#retries","title":"Retries","text":"

    So far our script works, but in the future unexpected errors may occur. For example the GitHub API may be temporarily unavailable or rate limited. Retries help make our flow more resilient. Let's add retry functionality to our example above:

    repo_info.py
    import httpx\nfrom prefect import flow\n\n\n@flow(retries=3, retry_delay_seconds=5, log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\nif __name__ == \"__main__\":\n    get_repo_info()\n
    ","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#next-tasks","title":"Next: Tasks","text":"

    As you have seen, adding a flow decorator converts our Python function to a resilient and observable workflow. In the next section, you'll supercharge this flow by using tasks to break down the workflow's complexity and make it more performant and observable - click here to continue.

    ","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/tasks/","title":"Tasks","text":"","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#what-is-a-task","title":"What is a task?","text":"

    A task is any Python function decorated with a @task decorator. You can think of a flow as a recipe for connecting a known sequence of tasks together. Tasks, and the dependencies between them, are displayed in the flow run graph, enabling you to break down a complex flow into something you can observe, understand and control at a more granular level. When a function becomes a task, it can be executed concurrently and its return value can be cached.

    Flows and tasks share some common features:

    • Both are defined easily using their respective decorator, which accepts settings for that flow / task (see all task settings / flow settings).
    • Each can be given a name, description and tags for organization and bookkeeping.
    • Both provide functionality for retries, timeouts, and other hooks to handle failure and completion events.

    Network calls (such as our GET requests to the GitHub API) are particularly useful as tasks because they take advantage of task features such as retries, caching, and concurrency.

    Tasks may be called from other tasks

    As of prefect 2.18.x, tasks can be called from within other tasks. This removes the need to use subflows for simple task composition.
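
    A minimal sketch of one task calling another directly (assuming Prefect 2.18 or later; the function names are illustrative):

    from prefect import flow, task\n\n\n@task\ndef double(x: int) -> int:\n    return x * 2\n\n\n@task\ndef double_twice(x: int) -> int:\n    # tasks can call other tasks directly, without a subflow\n    return double(double(x))\n\n\n@flow\ndef math_flow():\n    print(double_twice(5))\n\n\nif __name__ == \"__main__\":\n    math_flow()\n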

    When to use tasks

    Not all functions in a flow need be tasks. Use them only when their features are useful.

    Let's take our flow from before and move the request into a task:

    repo_info.py
    import httpx\nfrom prefect import flow, task\nfrom typing import Optional\n\n\n@task\ndef get_url(url: str, params: Optional[dict[str, any]] = None):\n    response = httpx.get(url, params=params)\n    response.raise_for_status()\n    return response.json()\n\n\n@flow(retries=3, retry_delay_seconds=5, log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    repo_stats = get_url(url)\n    print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo_stats['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo_stats['forks_count']}\")\n\nif __name__ == \"__main__\":\n    get_repo_info()\n

    Running the flow in your terminal will result in something like this:

    09:55:55.412 | INFO    | prefect.engine - Created flow run 'great-ammonite' for flow 'get-repo-info'\n09:55:55.499 | INFO    | Flow run 'great-ammonite' - Created task run 'get_url-0' for task 'get_url'\n09:55:55.500 | INFO    | Flow run 'great-ammonite' - Executing 'get_url-0' immediately...\n09:55:55.825 | INFO    | Task run 'get_url-0' - Finished in state Completed()\n09:55:55.827 | INFO    | Flow run 'great-ammonite' - PrefectHQ/prefect repository statistics \ud83e\udd13:\n09:55:55.827 | INFO    | Flow run 'great-ammonite' - Stars \ud83c\udf20 : 12157\n09:55:55.827 | INFO    | Flow run 'great-ammonite' - Forks \ud83c\udf74 : 1251\n09:55:55.849 | INFO    | Flow run 'great-ammonite' - Finished in state Completed('All states completed.')\n

    And you should now see this task run tracked in the UI as well.

    ","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#caching","title":"Caching","text":"

    Tasks support the ability to cache their return value. Caching allows you to efficiently reuse results of tasks that may be expensive to reproduce with every flow run, or reuse cached results if the inputs to a task have not changed.

    To enable caching, specify a cache_key_fn \u2014 a function that returns a cache key \u2014 on your task. You may optionally provide a cache_expiration timedelta indicating when the cache expires. You can define a task that is cached based on its inputs by using the Prefect task_input_hash. Let's add caching to our get_url task:

    import httpx\nfrom datetime import timedelta\nfrom prefect import flow, task\nfrom prefect.tasks import task_input_hash\nfrom typing import Any, Optional\n\n\n@task(cache_key_fn=task_input_hash,\n      cache_expiration=timedelta(hours=1),\n      )\ndef get_url(url: str, params: Optional[dict[str, Any]] = None):\n    response = httpx.get(url, params=params)\n    response.raise_for_status()\n    return response.json()\n

    You can test this caching behavior by passing a personal repository as your workflow parameter: add or remove a star, then run your flow multiple times and see how the output of this task changes (or doesn't).

    Task results and caching

    Task results are cached in memory during a flow run and persisted to your home directory by default. Prefect Cloud only stores the cache key, not the data itself.
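
    As an illustrative sketch, persistence can also be requested explicitly on a task via the persist_result setting; the task below is hypothetical:

    from prefect import flow, task\n\n\n@task(persist_result=True)  # write this task's return value to result storage (locally by default)\ndef expensive_task() -> int:\n    return 42\n\n\n@flow\ndef my_flow() -> int:\n    return expensive_task()\n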

    ","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#concurrency","title":"Concurrency","text":"

    Tasks enable concurrency, allowing you to execute multiple tasks asynchronously. This concurrency can greatly enhance the efficiency and performance of your workflows. Let's expand our script to calculate the average open issues per user. This will require making more requests:

    repo_info.py
    import httpx\nfrom datetime import timedelta\nfrom prefect import flow, task\nfrom prefect.tasks import task_input_hash\nfrom typing import Any, Optional\n\n\n@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))\ndef get_url(url: str, params: Optional[dict[str, Any]] = None):\n    response = httpx.get(url, params=params)\n    response.raise_for_status()\n    return response.json()\n\n\ndef get_open_issues(repo_name: str, open_issues_count: int, per_page: int = 100):\n    issues = []\n    pages = range(1, -(open_issues_count // -per_page) + 1)\n    for page in pages:\n        issues.append(\n            get_url(\n                f\"https://api.github.com/repos/{repo_name}/issues\",\n                params={\"page\": page, \"per_page\": per_page, \"state\": \"open\"},\n            )\n        )\n    return [i for p in issues for i in p]\n\n\n@flow(retries=3, retry_delay_seconds=5, log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    repo_stats = get_url(f\"https://api.github.com/repos/{repo_name}\")\n    issues = get_open_issues(repo_name, repo_stats[\"open_issues_count\"])\n    issues_per_user = len(issues) / len(set([i[\"user\"][\"id\"] for i in issues]))\n    print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo_stats['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo_stats['forks_count']}\")\n    print(f\"Average open issues per user \ud83d\udc8c : {issues_per_user:.2f}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info()\n

    Now we're fetching the data we need, but the requests are happening sequentially. Tasks expose a submit method that changes the execution from sequential to concurrent. In our specific example, we also need to use the result method because we are unpacking a list of return values:

    def get_open_issues(repo_name: str, open_issues_count: int, per_page: int = 100):\n    issues = []\n    pages = range(1, -(open_issues_count // -per_page) + 1)\n    for page in pages:\n        issues.append(\n            get_url.submit(\n                f\"https://api.github.com/repos/{repo_name}/issues\",\n                params={\"page\": page, \"per_page\": per_page, \"state\": \"open\"},\n            )\n        )\n    return [i for p in issues for i in p.result()]\n

    The logs show that each task is running concurrently:

    12:45:28.241 | INFO    | prefect.engine - Created flow run 'intrepid-coua' for flow 'get-repo-info'\n12:45:28.311 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-0' for task 'get_url'\n12:45:28.312 | INFO    | Flow run 'intrepid-coua' - Executing 'get_url-0' immediately...\n12:45:28.543 | INFO    | Task run 'get_url-0' - Finished in state Completed()\n12:45:28.583 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-1' for task 'get_url'\n12:45:28.584 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-1' for execution.\n12:45:28.594 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-2' for task 'get_url'\n12:45:28.594 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-2' for execution.\n12:45:28.609 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-4' for task 'get_url'\n12:45:28.610 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-4' for execution.\n12:45:28.624 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-5' for task 'get_url'\n12:45:28.625 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-5' for execution.\n12:45:28.640 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-6' for task 'get_url'\n12:45:28.641 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-6' for execution.\n12:45:28.708 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-3' for task 'get_url'\n12:45:28.708 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-3' for execution.\n12:45:29.096 | INFO    | Task run 'get_url-6' - Finished in state Completed()\n12:45:29.565 | INFO    | Task run 'get_url-2' - Finished in state Completed()\n12:45:29.721 | INFO    | Task run 'get_url-5' - Finished in state Completed()\n12:45:29.749 | INFO    | Task run 'get_url-4' - Finished in state Completed()\n12:45:29.801 | INFO    | Task run 'get_url-3' - Finished in state Completed()\n12:45:29.817 | INFO    | Task run 'get_url-1' - Finished in state Completed()\n12:45:29.820 | INFO    | Flow run 'intrepid-coua' - PrefectHQ/prefect repository statistics \ud83e\udd13:\n12:45:29.820 | INFO    | Flow run 'intrepid-coua' - Stars \ud83c\udf20 : 12159\n12:45:29.821 | INFO    | Flow run 'intrepid-coua' - Forks \ud83c\udf74 : 1251\nAverage open issues per user \ud83d\udc8c : 2.27\n12:45:29.838 | INFO    | Flow run 'intrepid-coua' - Finished in state Completed('All states completed.')\n
    ","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#subflows","title":"Subflows","text":"

    Not only can you call tasks within a flow, but you can also call other flows! Child flows are called\u00a0subflows\u00a0and allow you to efficiently manage, track, and version common multi-task logic.

    Subflows are a great way to organize your workflows and offer more visibility within the UI.

    Let's add a flow decorator to our get_open_issues function:

    @flow\ndef get_open_issues(repo_name: str, open_issues_count: int, per_page: int = 100):\n    issues = []\n    pages = range(1, -(open_issues_count // -per_page) + 1)\n    for page in pages:\n        issues.append(\n            get_url.submit(\n                f\"https://api.github.com/repos/{repo_name}/issues\",\n                params={\"page\": page, \"per_page\": per_page, \"state\": \"open\"},\n            )\n        )\n    return [i for p in issues for i in p.result()]\n

    Whenever we run the parent flow, a new flow run is generated for the subflow as well. Not only is this run tracked as a subflow run of the main flow, but you can also inspect it independently in the UI!

    ","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#next-deployments","title":"Next: Deployments","text":"

    We now have a flow with tasks, subflows, retries, logging, caching, and concurrent execution. In the next section, we'll deploy this flow so that it can run on a schedule and/or on external infrastructure - see the deployments tutorial to learn how to create your first deployment.

    ","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/work-pools/","title":"Work Pools","text":"","tags":["work pools","orchestration","flow runs","deployments","tutorial"],"boost":2},{"location":"tutorial/work-pools/#why-work-pools","title":"Why work pools?","text":"

    Work pools are a bridge between the Prefect orchestration layer and infrastructure for flow runs that can be dynamically provisioned. To transition from persistent infrastructure to dynamic infrastructure, use flow.deploy instead of flow.serve.

    Choosing Between flow.deploy() and flow.serve()

    Earlier in the tutorial you used serve to deploy your flows. For many use cases, serve is sufficient to meet scheduling and orchestration needs. Work pools are optional. If infrastructure needs escalate, work pools can become a handy tool. The best part? You're not locked into one method. You can seamlessly combine approaches as needed.

    Deployment definition methods differ slightly for work pools

    When you use work-pool-based execution, you define deployments differently. Deployments for workers are configured with deploy, which requires additional configuration. A deployment created with serve cannot be used with a work pool.

    The primary reason to use work pools is for dynamic infrastructure provisioning and configuration. For example, you might have a workflow that has expensive infrastructure requirements and is run infrequently. In this case, you don't want an idle process running within that infrastructure.

    Other advantages to using work pools include:

    • You can configure default infrastructure configurations on your work pools that all jobs inherit and can override.
    • Platform teams can use work pools to expose opinionated (and enforced!) interfaces to the infrastructure that they oversee.
    • Work pools can be used to prioritize (or limit) flow runs through the use of work queues.

    Prefect provides several types of work pools. Prefect Cloud provides a Prefect Managed work pool option that is the simplest way to run workflows remotely. A cloud-provider account, such as AWS, is not required with a Prefect Managed work pool.

    ","tags":["work pools","orchestration","flow runs","deployments","tutorial"],"boost":2},{"location":"tutorial/work-pools/#set-up-a-work-pool","title":"Set up a work pool","text":"

    Prefect Cloud

    This tutorial uses Prefect Cloud to deploy flows to work pools. Managed execution and push work pools are available in Prefect Cloud only. If you are not using Prefect Cloud, please learn about work pools below and then proceed to the next tutorial that uses worker-based work pools.

    ","tags":["work pools","orchestration","flow runs","deployments","tutorial"],"boost":2},{"location":"tutorial/work-pools/#create-a-prefect-managed-work-pool","title":"Create a Prefect Managed work pool","text":"

    In your terminal, run the following command to set up a work pool named my-managed-pool of type prefect:managed.

    prefect work-pool create my-managed-pool --type prefect:managed \n

    Let\u2019s confirm that the work pool was successfully created by running the following command.

    prefect work-pool ls\n

    You should see your new my-managed-pool in the output list.

    Finally, let\u2019s double check that you can see this work pool in the UI.

    Navigate to the Work Pools tab and verify that you see my-managed-pool listed.

    Feel free to select Edit from the three-dot menu on the right of the work pool card to view the details of your work pool.

    Work pools contain configuration that is used to provision infrastructure for flow runs. For example, you can specify additional Python packages or environment variables that should be set for all deployments that use this work pool. Note that individual deployments can override the work pool configuration.
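
    As a rough sketch of that override behavior, a deployment can pass job_variables when it is created. The keys shown below (pip_packages and env) and the repository URL are assumptions; the valid keys depend on your work pool's base job template:

    from prefect import flow\n\n\n@flow(log_prints=True)\ndef hello():\n    print(\"Hello from my-managed-pool!\")\n\n\nif __name__ == \"__main__\":\n    hello.from_source(\n        source=\"https://github.com/your-username/your-repo.git\",  # hypothetical repository\n        entrypoint=\"hello.py:hello\",\n    ).deploy(\n        name=\"pool-config-demo\",\n        work_pool_name=\"my-managed-pool\",\n        # hypothetical overrides -- valid keys come from the work pool's base job template\n        job_variables={\"pip_packages\": [\"pandas\"], \"env\": {\"MY_SETTING\": \"value\"}},\n    )\n

    Any variable you do not override falls back to the value configured on the work pool.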

    Now that you\u2019ve set up your work pool, we can deploy a flow to this work pool. Let's deploy your tutorial flow to my-managed-pool.

    ","tags":["work pools","orchestration","flow runs","deployments","tutorial"],"boost":2},{"location":"tutorial/work-pools/#create-the-deployment","title":"Create the deployment","text":"

    From our previous steps, we now have:

    1. A flow
    2. A work pool

    Let's update our repo_info.py file to create a deployment in Prefect Cloud.

    The updates that we need to make to repo_info.py are:

    1. Change flow.serve to flow.deploy.
    2. Tell flow.deploy which work pool to deploy to.

    Here's what the updated repo_info.py looks like:

    repo_info.py
    import httpx\nfrom prefect import flow\n\n\n@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info.from_source(\n        source=\"https://github.com/discdiver/demos.git\", \n        entrypoint=\"repo_info.py:get_repo_info\"\n    ).deploy(\n        name=\"my-first-deployment\", \n        work_pool_name=\"my-managed-pool\", \n    )\n

    In the from_source method, we specify the source of our flow code.

    In the deploy method, we specify the name of our deployment and the name of the work pool that we created earlier.

    You can store your flow code in any of several types of remote storage. In this example, we use a GitHub repository, but you could use a Docker image, as you'll see in an upcoming section of the tutorial. Alternatively, you could store your flow code in cloud provider storage such as AWS S3, or within a different git-based cloud provider such as GitLab or Bitbucket.
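
    For example, pointing from_source at a different git provider only changes the URL. This sketch reuses the get_repo_info flow above and assumes a hypothetical GitLab repository:

    if __name__ == \"__main__\":\n    get_repo_info.from_source(\n        # hypothetical GitLab repository -- any HTTPS git URL works the same way\n        source=\"https://gitlab.com/your-username/your-repo.git\",\n        entrypoint=\"repo_info.py:get_repo_info\",\n    ).deploy(\n        name=\"my-first-deployment\",\n        work_pool_name=\"my-managed-pool\",\n    )\n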

    Note

    In the example above, we store our code in a GitHub repository. If you make changes to the flow code, you will need to push those changes to your own GitHub account and update the source argument of from_source to point to your repository.

    Now that you've updated your script, you can run it to register your deployment on Prefect Cloud:

    python repo_info.py\n

    You should see a message in the CLI that your deployment was created with instructions for how to run it.

    Successfully created/updated all deployments!\n\n                       Deployments                       \n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name                              \u2503 Status  \u2503 Details \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 get-repo-info/my-first-deployment | applied \u2502         \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\nTo schedule a run for this deployment, use the following command:\n\n        $ prefect deployment run 'get-repo-info/my-first-deployment'\n\n\nYou can also run your flow via the Prefect UI: https://app.prefect.cloud/account/\nabc/workspace/123/deployments/deployment/xyz\n

    Navigate to your Prefect Cloud UI and view your new deployment. Click the Run button to trigger a run of your deployment.

    Because this deployment was configured with a Prefect Managed work pool, Prefect Cloud will run your flow on your behalf.

    View the logs in the UI.

    ","tags":["work pools","orchestration","flow runs","deployments","tutorial"],"boost":2},{"location":"tutorial/work-pools/#schedule-a-deployment-run","title":"Schedule a deployment run","text":"

    Now everything is set up for us to submit a flow run to the work pool. Go ahead and run the deployment from the CLI or the UI.

    prefect deployment run 'get-repo-info/my-first-deployment'\n

    Prefect Managed work pools are a great way to get started with Prefect. See the Managed Execution guide for more details.

    Many users will find that they need more control over the infrastructure that their flows run on. Prefect Cloud's push work pools are a popular option in those cases.

    ","tags":["work pools","orchestration","flow runs","deployments","tutorial"],"boost":2},{"location":"tutorial/work-pools/#push-work-pools-with-automatic-infrastructure-provisioning","title":"Push work pools with automatic infrastructure provisioning","text":"

    Serverless push work pools scale infinitely and provide more configuration options than Prefect Managed work pools.

    Prefect provides push work pools for AWS ECS on Fargate, Azure Container Instances, Google Cloud Run, and Modal. To use a push work pool, you will need an account with sufficient permissions on the cloud provider that you want to use. We'll use GCP for this example.

    Setting up the cloud provider pieces for infrastructure can be tricky and time consuming. Fortunately, Prefect can automatically provision infrastructure for you and wire it all together to work with your push work pool.

    ","tags":["work pools","orchestration","flow runs","deployments","tutorial"],"boost":2},{"location":"tutorial/work-pools/#create-a-push-work-pool-with-automatic-infrastructure-provisioning","title":"Create a push work pool with automatic infrastructure provisioning","text":"

    To set up a push work pool with automatic infrastructure provisioning, you'll run a single CLI command. First, a few prerequisites:

    Install the gcloud CLI and authenticate with your GCP project.

    If you already have the gcloud CLI installed, be sure to update to the latest version with gcloud components update.

    You will need the following permissions in your GCP project:

    • resourcemanager.projects.list
    • serviceusage.services.enable
    • iam.serviceAccounts.create
    • iam.serviceAccountKeys.create
    • resourcemanager.projects.setIamPolicy
    • artifactregistry.repositories.create

    Docker is also required to build and push images to your registry. You can install Docker here.

    Run the following command to set up a work pool named my-cloud-run-pool of type cloud-run:push.

    prefect work-pool create --type cloud-run:push --provision-infra my-cloud-run-pool \n

    Using the --provision-infra flag allows you to select a GCP project to use for your work pool and automatically configure it to be ready to execute flows via Cloud Run. In your GCP project, this command will activate the Cloud Run API, create a service account, and create a key for the service account, if they don't already exist. In your Prefect workspace, this command will create a GCPCredentials block for storing the service account key.

    Here's an abbreviated example output from running the command:

    \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Provisioning infrastructure for your work pool my-cloud-run-pool will require:                           \u2502\n\u2502                                                                                                          \u2502\n\u2502     Updates in GCP project central-kit-405415 in region us-central1                                      \u2502\n\u2502                                                                                                          \u2502\n\u2502         - Activate the Cloud Run API for your project                                                    \u2502\n\u2502         - Activate the Artifact Registry API for your project                                            \u2502\n\u2502         - Create an Artifact Registry repository named prefect-images                                    \u2502\n\u2502         - Create a service account for managing Cloud Run jobs: prefect-cloud-run                        \u2502\n\u2502             - Service account will be granted the following roles:                                       \u2502\n\u2502                 - Service Account User                                                                   \u2502\n\u2502                 - Cloud Run Developer                                                                    \u2502\n\u2502         - Create a key for service account prefect-cloud-run                                             \u2502\n\u2502                                                                                                          \u2502\n\u2502     Updates in Prefect workspace                                                                         \u2502\n\u2502                                                                                                          \u2502\n\u2502         - Create GCP credentials block my--pool-push-pool-credentials to store the service account key   \u2502\n\u2502                                                                                                          \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\nProceed with infrastructure provisioning? 
[y/n]: y\nActivating Cloud Run API\nActivating Artifact Registry API\nCreating Artifact Registry repository\nConfiguring authentication to Artifact Registry\nSetting default Docker build namespace\nCreating service account\nAssigning roles to service account\nCreating service account key\nCreating GCP credentials block\nProvisioning Infrastructure \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 100% 0:00:00\nInfrastructure successfully provisioned!\nCreated work pool 'my-cloud-run-pool'!\n

    After infrastructure provisioning completes, you will be logged into your new Artifact Registry repository and the default Docker build namespace will be set to the URL of the repository.

    While the default namespace is set, any images you build without specifying a registry or username/organization will be pushed to the repository.

    To take advantage of this functionality, you can write your deploy script like this:

    example_deploy_script.py
    from prefect import flow\nfrom prefect.deployments import DeploymentImage\n\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n    print(f\"Hello {name}! I'm a flow running on Cloud Run!\")\n\n\nif __name__ == \"__main__\":\n    my_flow.deploy(\n        name=\"my-deployment\",\n        work_pool_name=\"my-cloud-run-pool\",\n        cron=\"0 1 * * *\",\n        image=DeploymentImage(\n            name=\"my-image:latest\",\n            platform=\"linux/amd64\",\n        )\n    )\n

    Run the script to create the deployment on the Prefect Cloud server.

    Running this script will build a Docker image with the tag <region>-docker.pkg.dev/<project>/<repository-name>/my-image:latest and push it to your repository.

    Tip

    Make sure you have Docker running locally before running this script.

    Note that you only need to include an object of the DeploymentImage class with the argument platform=\"linux/amd64\" if you're building your image on a machine with an ARM-based processor. Otherwise, you could just pass image=\"my-image:latest\" to deploy.

    Also note that the cron argument will schedule the deployment to run at 1am every day. See the schedules docs for more information on scheduling options.
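
    For instance, on an x86 machine the same deployment could be written without DeploymentImage by passing the image name directly; this is a sketch of that simpler form:

    if __name__ == \"__main__\":\n    my_flow.deploy(\n        name=\"my-deployment\",\n        work_pool_name=\"my-cloud-run-pool\",\n        cron=\"0 1 * * *\",  # run at 1am every day\n        image=\"my-image:latest\",\n    )\n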

    See the Push Work Pool guide for more details and example commands for each cloud provider.

    ","tags":["work pools","orchestration","flow runs","deployments","tutorial"],"boost":2},{"location":"tutorial/work-pools/#next-step","title":"Next step","text":"

    Congratulations! You've learned how to deploy flows to work pools. If these work pool options meet all of your needs, we encourage you to go deeper with the concepts docs or explore our how-to guides to see examples of particular Prefect use cases.

    However, if you need more control over your infrastructure, want to run your workflows in Kubernetes, or are running a self-hosted Prefect server instance, we encourage you to see the next section of the tutorial. There you'll learn how to use work pools that rely on a worker and see how to customize Docker images for container-based infrastructure.

    ","tags":["work pools","orchestration","flow runs","deployments","tutorial"],"boost":2},{"location":"tutorial/workers/","title":"Workers","text":"","tags":["workers","orchestration","flow runs","deployments","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#prerequisites","title":"Prerequisites","text":"

    Docker installed and running on your machine.

    ","tags":["workers","orchestration","flow runs","deployments","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#why-workers","title":"Why workers","text":"

    In the previous section of the tutorial, you learned how work pools are a bridge between the Prefect orchestration layer and infrastructure for flow runs that can be dynamically provisioned. You saw how you can transition from persistent infrastructure to dynamic infrastructure by using flow.deploy instead of flow.serve.

    Work pools that rely on client-side workers take this a step further by enabling you to run workflows in your own Docker containers, Kubernetes clusters, and serverless environments such as AWS ECS, Azure Container Instances, and GCP Cloud Run.

    The architecture of a worker-based work pool deployment can be summarized with the following diagram:

    graph TD\n    subgraph your_infra[\"Your Execution Environment\"]\n        worker[\"Worker\"]\n    subgraph flow_run_infra[Flow Run Infra]\n     flow_run_a((\"Flow Run A\"))\n    end\n    subgraph flow_run_infra_2[Flow Run Infra]\n     flow_run_b((\"Flow Run B\"))\n    end      \n    end\n\n    subgraph api[\"Prefect API\"]\n    Deployment --> |assigned to| work_pool\n        work_pool([\"Work Pool\"])\n    end\n\n    worker --> |polls| work_pool\n    worker --> |creates| flow_run_infra\n    worker --> |creates| flow_run_infra_2

    Notice above that the worker is in charge of provisioning the flow run infrastructure. In the context of this tutorial, that flow run infrastructure is an ephemeral Docker container to host each flow run. Different worker types create different types of flow run infrastructure.

    Now that we\u2019ve reviewed the concepts of a work pool and worker, let\u2019s create them so that you can deploy your tutorial flow, and execute it later using the Prefect API.

    ","tags":["workers","orchestration","flow runs","deployments","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#set-up-a-work-pool-and-worker","title":"Set up a work pool and worker","text":"

    For this tutorial you will create a Docker type work pool via the CLI.

    Using the Docker work pool type means that all work sent to this work pool will run within a dedicated Docker container using a Docker client available to the worker.

    Other work pool types

    There are work pool types for serverless computing environments such as AWS ECS, Azure Container Instances, Google Cloud Run, and Vertex AI. Kubernetes is also a popular work pool type.

    These options are expanded upon in various How-to Guides.

    ","tags":["workers","orchestration","flow runs","deployments","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#create-a-work-pool","title":"Create a work pool","text":"

    In your terminal, run the following command to set up a Docker type work pool.

    prefect work-pool create --type docker my-docker-pool\n

    Let\u2019s confirm that the work pool was successfully created by running the following command in the same terminal.

    prefect work-pool ls\n

    You should see your new my-docker-pool listed in the output.

    Finally, let\u2019s double check that you can see this work pool in your Prefect UI.

    Navigate to the Work Pools tab and verify that you see my-docker-pool listed.

    When you click into my-docker-pool you should see a red status icon signifying that this work pool is not ready.

    To make the work pool ready, you need to start a worker.

    ","tags":["workers","orchestration","flow runs","deployments","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#start-a-worker","title":"Start a worker","text":"

    Workers are lightweight polling processes that kick off scheduled flow runs on a specific type of infrastructure (such as Docker). To start a worker on your local machine, open a new terminal and confirm that your virtual environment has prefect installed.

    Run the following command in this new terminal to start the worker:

    prefect worker start --pool my-docker-pool\n

    You should see the worker start. It's now polling the Prefect API to check for any scheduled flow runs it should pick up and then submit for execution. You\u2019ll see your new worker listed in the UI under the Workers tab of the Work Pools page with a recent last polled date.

    You should also be able to see a Ready status indicator on your work pool - progress!

    You will need to keep this terminal session active for the worker to continue to pick up jobs. Since you are running this worker locally, the worker will terminate if you close the terminal. Therefore, in a production setting this worker should run as a daemonized or managed process.

    Now that you\u2019ve set up your work pool and worker, we have what we need to kick off and execute flow runs of flows deployed to this work pool. Let's deploy your tutorial flow to my-docker-pool.

    ","tags":["workers","orchestration","flow runs","deployments","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#create-the-deployment","title":"Create the deployment","text":"

    From our previous steps, we now have:

    1. A flow
    2. A work pool
    3. A worker

    Now it\u2019s time to put it all together. We're going to update our repo_info.py file to build a Docker image and update our deployment so our worker can execute it.

    The updates that you need to make to repo_info.py are:

    1. Change flow.serve to flow.deploy.
    2. Tell flow.deploy which work pool to deploy to.
    3. Tell flow.deploy the name to use for the Docker image that will be built.

    Here's what the updated repo_info.py looks like:

    repo_info.py
    import httpx\nfrom prefect import flow\n\n\n@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info.deploy(\n        name=\"my-first-deployment\", \n        work_pool_name=\"my-docker-pool\", \n        image=\"my-first-deployment-image:tutorial\",\n        push=False\n    )\n

    Why the push=False?

    For this tutorial, your Docker worker is running on your machine, so we don't need to push the image built by flow.deploy to a registry. When your worker is running on a remote machine, you will need to push the image to a registry that the worker can access.

    Remove the push=False argument, include your registry name, and ensure you've authenticated with the Docker CLI to push the image to a registry.
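
    A sketch of that remote-registry variant might look like the following; the registry and username are placeholders to replace with your own:

    if __name__ == \"__main__\":\n    get_repo_info.deploy(\n        name=\"my-first-deployment\",\n        work_pool_name=\"my-docker-pool\",\n        # hypothetical registry/username -- replace with your own\n        image=\"docker.io/your-username/my-first-deployment-image:tutorial\",\n        # push defaults to True, so the image is pushed once the build completes\n    )\n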

    Now that you've updated your script, you can run it to deploy your flow to the work pool:

    python repo_info.py\n

    Prefect will build a custom Docker image containing your workflow code that the worker can use to dynamically spawn Docker containers whenever this workflow needs to run.

    What Dockerfile?

    In this example, Prefect generates a Dockerfile for you that will build an image based on one of Prefect's published images. The generated Dockerfile will copy the current directory into the Docker image and install any dependencies listed in a requirements.txt file.

    If you want to use a custom Dockerfile, you can specify the path to the Dockerfile using the DeploymentImage class:

    repo_info.py
    import httpx\nfrom prefect import flow\nfrom prefect.deployments import DeploymentImage\n\n\n@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info.deploy(\n        name=\"my-first-deployment\", \n        work_pool_name=\"my-docker-pool\", \n        image=DeploymentImage(\n            name=\"my-first-deployment-image\",\n            tag=\"tutorial\",\n            dockerfile=\"Dockerfile\"\n        ),\n        push=False\n    )\n
    ","tags":["workers","orchestration","flow runs","deployments","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#modify-the-deployment","title":"Modify the deployment","text":"

    If you need to make updates to your deployment, you can do so by modifying your script and rerunning it. You'll need to make one update to specify a value for job_variables to ensure your Docker worker can successfully execute scheduled runs for this flow. See the example below.

    The job_variables section allows you to fine-tune the infrastructure settings for a specific deployment. These values override default values in the specified work pool's base job template.

    When testing images locally without pushing them to a registry (to avoid potential errors like docker.errors.NotFound), it's recommended to include an image_pull_policy job_variable set to Never. However, for production workflows, always consider pushing images to a remote registry for more reliability and accessibility.

    Here's how you can quickly set the image_pull_policy to be Never for this tutorial deployment without affecting the default value set on your work pool:

    repo_info.py
    import httpx\nfrom prefect import flow\n\n\n@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info.deploy(\n        name=\"my-first-deployment\", \n        work_pool_name=\"my-docker-pool\", \n        job_variables={\"image_pull_policy\": \"Never\"},\n        image=\"my-first-deployment-image:tutorial\",\n        push=False\n    )\n

    To register this update to your deployment's parameters with Prefect's API, run:

    python repo_info.py\n

    Now everything is set for us to submit a flow run to the work pool:

    prefect deployment run 'get-repo-info/my-first-deployment'\n

    Common Pitfall

    • Store and run your deploy scripts at the root of your repo, otherwise the built Docker image may be missing files that it needs to execute!

    Did you know?

    A Prefect flow can have more than one deployment. This pattern can be useful if you want your flow to run in different execution environments.
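
    As a sketch, the same flow could be registered twice with different names and work pools, reusing the pools created earlier in this tutorial:

    if __name__ == \"__main__\":\n    # one deployment that runs in local Docker containers\n    get_repo_info.deploy(\n        name=\"docker-deployment\",\n        work_pool_name=\"my-docker-pool\",\n        image=\"my-first-deployment-image:tutorial\",\n        push=False,\n    )\n    # and another that runs on a Prefect Managed work pool from remote code\n    get_repo_info.from_source(\n        source=\"https://github.com/discdiver/demos.git\",\n        entrypoint=\"repo_info.py:get_repo_info\",\n    ).deploy(\n        name=\"managed-deployment\",\n        work_pool_name=\"my-managed-pool\",\n    )\n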

    ","tags":["workers","orchestration","flow runs","deployments","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#next-steps","title":"Next steps","text":"
    • Go deeper with deployments and learn about configuring deployments in YAML with prefect.yaml.
    • Concepts contain deep dives into Prefect components.
    • Guides provide step-by-step recipes for common Prefect operations including:
    • Deploying flows on Kubernetes
    • Deploying flows in Docker
    • Deploying flows on serverless infrastructure
    • Daemonizing workers

    Happy building!

    ","tags":["workers","orchestration","flow runs","deployments","triggers","tutorial"],"boost":2}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"],"fields":{"title":{"boost":1000.0},"text":{"boost":1.0},"tags":{"boost":1000000.0}}},"docs":[{"location":"","title":"Welcome to Prefect","text":"

    Prefect is a workflow orchestration tool empowering developers to build, observe, and react to data pipelines.

    It's the easiest way to transform any Python function into a unit of work that can be observed and orchestrated. Just bring your Python code, sprinkle in a few decorators, and go!

    With Prefect you gain:

    • scheduling
    • retries
    • logging
    • convenient async functionality
    • caching
    • notifications
    • observability
    • event-based orchestration

    ","tags":["getting started","quick start","overview"],"boost":2},{"location":"#new-to-prefect","title":"New to Prefect?","text":"

    Get up and running quickly with the quickstart guide.

    Want more hands on practice to productionize your workflows? Follow our tutorial.

    For deeper dives into common use cases, explore our guides.

    Take your understanding even further with Prefect's concepts and API reference.

    Join Prefect's vibrant community of over 26,000 engineers to learn with others and share your knowledge!

    Need help?

    Get your questions answered by a Prefect Product Advocate! Book a Meeting

    ","tags":["getting started","quick start","overview"],"boost":2},{"location":"faq/","title":"Frequently Asked Questions","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#prefect","title":"Prefect","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#how-is-prefect-licensed","title":"How is Prefect licensed?","text":"

    Prefect is licensed under the Apache 2.0 License, an OSI approved open-source license. If you have any questions about licensing, please contact us.

    ","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#is-the-prefect-v2-cloud-url-different-than-the-prefect-v1-cloud-url","title":"Is the Prefect v2 Cloud URL different than the Prefect v1 Cloud URL?","text":"

    Yes. Prefect Cloud for v2 is at app.prefect.cloud/ while Prefect Cloud for v1 is at cloud.prefect.io.

    ","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#the-prefect-orchestration-engine","title":"The Prefect Orchestration Engine","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#why-was-the-prefect-orchestration-engine-created","title":"Why was the Prefect orchestration engine created?","text":"

    The Prefect orchestration engine has three major objectives:

    • Embracing dynamic, DAG-free workflows
    • An extraordinary developer experience
    • Transparent and observable orchestration rules

    As Prefect has matured, so has the modern data stack. The on-demand, dynamic, highly scalable workflows that used to exist principally in the domain of data science and analytics are now prevalent throughout all of data engineering. Few companies have workflows that don\u2019t deal with streaming data, uncertain timing, runtime logic, complex dependencies, versioning, or custom scheduling.

    This means that the current generation of workflow managers are built around the wrong abstraction: the directed acyclic graph (DAG). DAGs are an increasingly arcane, constrained way of representing the dynamic, heterogeneous range of modern data and computation patterns.

    Furthermore, as workflows have become more complex, it has become even more important to focus on the developer experience of building, testing, and monitoring them. Faced with an explosion of available tools, it is more important than ever for development teams to seek orchestration tools that will be compatible with any code, tools, or services they may require in the future.

    And finally, this additional complexity means that providing clear and consistent insight into the behavior of the orchestration engine and any decisions it makes is critically important.

    The Prefect orchestration engine represents a unified solution to these three problems.

    The Prefect orchestration engine is capable of governing any code through a well-defined series of state transitions designed to maximize the user's understanding of what happened during execution. It's popular to describe \"workflows as code\" or \"orchestration as code,\" but the Prefect engine represents \"code as workflows\": rather than ask users to change how they work to meet the requirements of the orchestrator, we've defined an orchestrator that adapts to how our users work.

    To achieve this, we've leveraged the familiar tools of native Python: first class functions, type annotations, and async support. Users are free to implement as much \u2014 or as little \u2014 of the Prefect engine as is useful for their objectives.

    ","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#if-im-using-prefect-cloud-2-do-i-still-need-to-run-a-prefect-server-locally","title":"If I\u2019m using Prefect Cloud 2, do I still need to run a Prefect server locally?","text":"

    No, Prefect Cloud hosts an instance of the Prefect API for you. In fact, each workspace in Prefect Cloud corresponds directly to a single instance of the Prefect orchestration engine. See the Prefect Cloud Overview for more information.

    ","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#features","title":"Features","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#does-prefect-support-mapping","title":"Does Prefect support mapping?","text":"

    Yes! For more information, see the Task.map API reference.

    @flow\ndef my_flow():\n\n    # map over a constant\n    for i in range(10):\n        my_mapped_task(i)\n\n    # map over a task's output\n    l = list_task.submit()\n    for i in l.result():\n        my_mapped_task_2(i)\n

    Note that when tasks are called on constant values, they cannot detect their upstream edges automatically. In this example, my_mapped_task_2 does not know that it is downstream from list_task(). Prefect will have convenience functions for detecting these associations, and Prefect's .map() operator will automatically track them.
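
    A minimal sketch of the .map() form (the task bodies here are illustrative):

    from prefect import flow, task\n\n\n@task\ndef my_mapped_task(x: int) -> int:\n    return x + 1\n\n\n@task\ndef list_task() -> list[int]:\n    return [1, 2, 3]\n\n\n@flow\ndef my_flow():\n    my_mapped_task.map(range(10))  # map over a constant\n    l = list_task.submit()\n    my_mapped_task.map(l)  # map over a task future; the upstream edge is tracked\n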

    ","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-i-enforce-ordering-between-tasks-that-dont-share-data","title":"Can I enforce ordering between tasks that don't share data?","text":"

    Yes! For more information, see the Tasks section.
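
    One common approach is the wait_for argument to submit, which creates an ordering dependency without passing data; this sketch uses illustrative tasks:

    from prefect import flow, task\n\n\n@task\ndef first_task():\n    return \"setup complete\"\n\n\n@task\ndef second_task():\n    return \"work that must wait\"\n\n\n@flow\ndef ordered_flow():\n    a = first_task.submit()\n    # second_task receives no data from first_task, but will not start until it finishes\n    second_task.submit(wait_for=[a])\n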

    ","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#does-prefect-support-proxies","title":"Does Prefect support proxies?","text":"

    Yes!

    Prefect supports communicating via proxies through the use of environment variables. You can read more about this in the Installation documentation and the article Using Prefect Cloud with proxies.

    ","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-i-run-prefect-flows-on-linux","title":"Can I run Prefect flows on Linux?","text":"

    Yes!

    See the Installation documentation and Linux installation notes for details on getting started with Prefect on Linux.

    ","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-i-run-prefect-flows-on-windows","title":"Can I run Prefect flows on Windows?","text":"

    Yes!

    See the Installation documentation and Windows installation notes for details on getting started with Prefect on Windows.

    ","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#what-external-requirements-does-prefect-have","title":"What external requirements does Prefect have?","text":"

    Prefect does not have any additional requirements besides those installed by pip install --pre prefect. The entire system, including the UI and services, can be run in a single process via prefect server start and does not require Docker.

    Prefect Cloud users do not need to worry about the Prefect database. Prefect Cloud uses PostgreSQL on GCP behind the scenes. To use PostgreSQL with a self-hosted Prefect server, users must provide the connection string for a running database via the PREFECT_API_DATABASE_CONNECTION_URL environment variable.

    ","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#what-databases-does-prefect-support","title":"What databases does Prefect support?","text":"

    A self-hosted Prefect server can work with SQLite and PostgreSQL. New Prefect installs default to a SQLite database hosted at ~/.prefect/prefect.db on Mac or Linux machines. SQLite and PostgreSQL are not installed by Prefect.

    ","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#how-do-i-choose-between-sqlite-and-postgres","title":"How do I choose between SQLite and Postgres?","text":"

    SQLite generally works well for getting started and exploring Prefect. We have tested it with up to hundreds of thousands of task runs. Many users may be able to stay on SQLite for some time. However, for production uses, Prefect Cloud or self-hosted PostgreSQL is highly recommended. Under write-heavy workloads, SQLite performance can begin to suffer. Users running many flows with high degrees of parallelism or concurrency should use PostgreSQL.

    ","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#relationship-with-other-prefect-products","title":"Relationship with other Prefect products","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-a-flow-written-with-prefect-1-be-orchestrated-with-prefect-2-and-vice-versa","title":"Can a flow written with Prefect 1 be orchestrated with Prefect 2 and vice versa?","text":"

    No. Flows written with the Prefect 1 client must be rewritten with the Prefect 2 client. For most flows, this should take just a few minutes.

    ","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-i-use-prefect-1-and-prefect-2-at-the-same-time-on-my-local-machine","title":"Can I use Prefect 1 and Prefect 2 at the same time on my local machine?","text":"

    Yes. Just use different virtual environments.

    ","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"api-ref/","title":"API Reference","text":"

    Prefect auto-generates reference documentation for the following components:

    • Prefect Python SDK: used to build, test, and execute workflows.
    • Prefect REST API: used by both workflow clients and the Prefect UI for orchestration and data retrieval
    • Prefect Cloud REST API documentation is available at https://app.prefect.cloud/api/docs.
    • The REST API documentation for a locally hosted open-source Prefect server is available in the Prefect REST API Reference.
    • Prefect Server SDK: used primarily by the server to work with workflow metadata and enforce orchestration logic. This is only used directly by Prefect developers and contributors.

    Self-hosted docs

    When self-hosting, you can access REST API documentation at the /docs endpoint of your PREFECT_API_URL - for example, if you ran prefect server start with no additional configuration you can find this reference at http://localhost:4200/docs.

    ","tags":["API","Prefect API","Prefect SDK","Prefect Cloud","REST API","development","orchestration"]},{"location":"api-ref/rest-api-reference/","title":"Prefect server REST API reference","text":"","tags":["REST API","Prefect server"]},{"location":"api-ref/prefect/agent/","title":"prefect.agent","text":"","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent","title":"prefect.agent","text":"

    DEPRECATION WARNING:

    This module is deprecated as of March 2024 and will not be available after September 2024. Agents have been replaced by workers, which offer enhanced functionality and better performance.

    For upgrade instructions, see https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.

    ","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent","title":"PrefectAgent","text":"Source code in prefect/agent.py
    @deprecated_class(\n    start_date=\"Mar 2024\",\n    help=\"Use a worker instead. Refer to the upgrade guide for more information: https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\",\n)\nclass PrefectAgent:\n    def __init__(\n        self,\n        work_queues: List[str] = None,\n        work_queue_prefix: Union[str, List[str]] = None,\n        work_pool_name: str = None,\n        prefetch_seconds: int = None,\n        default_infrastructure: Infrastructure = None,\n        default_infrastructure_document_id: UUID = None,\n        limit: Optional[int] = None,\n    ) -> None:\n        if default_infrastructure and default_infrastructure_document_id:\n            raise ValueError(\n                \"Provide only one of 'default_infrastructure' and\"\n                \" 'default_infrastructure_document_id'.\"\n            )\n\n        self.work_queues: Set[str] = set(work_queues) if work_queues else set()\n        self.work_pool_name = work_pool_name\n        self.prefetch_seconds = prefetch_seconds\n        self.submitting_flow_run_ids = set()\n        self.cancelling_flow_run_ids = set()\n        self.scheduled_task_scopes = set()\n        self.started = False\n        self.logger = get_logger(\"agent\")\n        self.task_group: Optional[anyio.abc.TaskGroup] = None\n        self.limit: Optional[int] = limit\n        self.limiter: Optional[anyio.CapacityLimiter] = None\n        self.client: Optional[PrefectClient] = None\n\n        if isinstance(work_queue_prefix, str):\n            work_queue_prefix = [work_queue_prefix]\n        self.work_queue_prefix = work_queue_prefix\n\n        self._work_queue_cache_expiration: pendulum.DateTime = None\n        self._work_queue_cache: List[WorkQueue] = []\n\n        if default_infrastructure:\n            self.default_infrastructure_document_id = (\n                default_infrastructure._block_document_id\n            )\n            self.default_infrastructure = default_infrastructure\n        elif default_infrastructure_document_id:\n            self.default_infrastructure_document_id = default_infrastructure_document_id\n            self.default_infrastructure = None\n        else:\n            self.default_infrastructure = Process()\n            self.default_infrastructure_document_id = None\n\n    async def update_matched_agent_work_queues(self):\n        if self.work_queue_prefix:\n            if self.work_pool_name:\n                matched_queues = await self.client.read_work_queues(\n                    work_pool_name=self.work_pool_name,\n                    work_queue_filter=WorkQueueFilter(\n                        name=WorkQueueFilterName(startswith_=self.work_queue_prefix)\n                    ),\n                )\n            else:\n                matched_queues = await self.client.match_work_queues(\n                    self.work_queue_prefix, work_pool_name=DEFAULT_AGENT_WORK_POOL_NAME\n                )\n\n            matched_queues = set(q.name for q in matched_queues)\n            if matched_queues != self.work_queues:\n                new_queues = matched_queues - self.work_queues\n                removed_queues = self.work_queues - matched_queues\n                if new_queues:\n                    self.logger.info(\n                        f\"Matched new work queues: {', '.join(new_queues)}\"\n                    )\n                if removed_queues:\n                    self.logger.info(\n                        f\"Work queues no longer matched: {', '.join(removed_queues)}\"\n                    
)\n            self.work_queues = matched_queues\n\n    async def get_work_queues(self) -> AsyncIterator[WorkQueue]:\n        \"\"\"\n        Loads the work queue objects corresponding to the agent's target work\n        queues. If any of them don't exist, they are created.\n        \"\"\"\n\n        # if the queue cache has not expired, yield queues from the cache\n        now = pendulum.now(\"UTC\")\n        if (self._work_queue_cache_expiration or now) > now:\n            for queue in self._work_queue_cache:\n                yield queue\n            return\n\n        # otherwise clear the cache, set the expiration for 30 seconds, and\n        # reload the work queues\n        self._work_queue_cache.clear()\n        self._work_queue_cache_expiration = now.add(seconds=30)\n\n        await self.update_matched_agent_work_queues()\n\n        for name in self.work_queues:\n            try:\n                work_queue = await self.client.read_work_queue_by_name(\n                    work_pool_name=self.work_pool_name, name=name\n                )\n            except (ObjectNotFound, Exception):\n                work_queue = None\n\n            # if the work queue wasn't found and the agent is NOT polling\n            # for queues using a regex, try to create it\n            if work_queue is None and not self.work_queue_prefix:\n                try:\n                    work_queue = await self.client.create_work_queue(\n                        work_pool_name=self.work_pool_name, name=name\n                    )\n                except Exception:\n                    # if creating it raises an exception, it was probably just\n                    # created by some other agent; rather than entering a re-read\n                    # loop with new error handling, we log the exception and\n                    # continue.\n                    self.logger.exception(f\"Failed to create work queue {name!r}.\")\n                    continue\n                else:\n                    log_str = f\"Created work queue {name!r}\"\n                    if self.work_pool_name:\n                        log_str = (\n                            f\"Created work queue {name!r} in work pool\"\n                            f\" {self.work_pool_name!r}.\"\n                        )\n                    else:\n                        log_str = f\"Created work queue '{name}'.\"\n                    self.logger.info(log_str)\n\n            if work_queue is None:\n                self.logger.error(\n                    f\"Work queue '{name!r}' with prefix {self.work_queue_prefix} wasn't\"\n                    \" found\"\n                )\n            else:\n                self._work_queue_cache.append(work_queue)\n                yield work_queue\n\n    async def get_and_submit_flow_runs(self) -> List[FlowRun]:\n        \"\"\"\n        The principle method on agents. Queries for scheduled flow runs and submits\n        them for execution in parallel.\n        \"\"\"\n        if not self.started:\n            raise RuntimeError(\n                \"Agent is not started. 
Use `async with PrefectAgent()...`\"\n            )\n\n        self.logger.debug(\"Checking for scheduled flow runs...\")\n\n        before = pendulum.now(\"utc\").add(\n            seconds=self.prefetch_seconds or PREFECT_AGENT_PREFETCH_SECONDS.value()\n        )\n\n        submittable_runs: List[FlowRun] = []\n\n        if self.work_pool_name:\n            responses = await self.client.get_scheduled_flow_runs_for_work_pool(\n                work_pool_name=self.work_pool_name,\n                work_queue_names=[wq.name async for wq in self.get_work_queues()],\n                scheduled_before=before,\n            )\n            submittable_runs.extend([response.flow_run for response in responses])\n\n        else:\n            # load runs from each work queue\n            async for work_queue in self.get_work_queues():\n                # print a nice message if the work queue is paused\n                if work_queue.is_paused:\n                    self.logger.info(\n                        f\"Work queue {work_queue.name!r} ({work_queue.id}) is paused.\"\n                    )\n\n                else:\n                    try:\n                        queue_runs = await self.client.get_runs_in_work_queue(\n                            id=work_queue.id, limit=10, scheduled_before=before\n                        )\n                        submittable_runs.extend(queue_runs)\n                    except ObjectNotFound:\n                        self.logger.error(\n                            f\"Work queue {work_queue.name!r} ({work_queue.id}) not\"\n                            \" found.\"\n                        )\n                    except Exception as exc:\n                        self.logger.exception(exc)\n\n            submittable_runs.sort(key=lambda run: run.next_scheduled_start_time)\n\n        for flow_run in submittable_runs:\n            # don't resubmit a run\n            if flow_run.id in self.submitting_flow_run_ids:\n                continue\n\n            try:\n                if self.limiter:\n                    self.limiter.acquire_on_behalf_of_nowait(flow_run.id)\n            except anyio.WouldBlock:\n                self.logger.info(\n                    f\"Flow run limit reached; {self.limiter.borrowed_tokens} flow runs\"\n                    \" in progress.\"\n                )\n                break\n            else:\n                self.logger.info(f\"Submitting flow run '{flow_run.id}'\")\n                self.submitting_flow_run_ids.add(flow_run.id)\n                self.task_group.start_soon(\n                    self.submit_run,\n                    flow_run,\n                )\n\n        return list(\n            filter(lambda run: run.id in self.submitting_flow_run_ids, submittable_runs)\n        )\n\n    async def check_for_cancelled_flow_runs(self):\n        if not self.started:\n            raise RuntimeError(\n                \"Agent is not started. 
Use `async with PrefectAgent()...`\"\n            )\n\n        self.logger.debug(\"Checking for cancelled flow runs...\")\n\n        work_queue_filter = (\n            WorkQueueFilter(name=WorkQueueFilterName(any_=list(self.work_queues)))\n            if self.work_queues\n            else None\n        )\n\n        work_pool_filter = (\n            WorkPoolFilter(name=WorkPoolFilterName(any_=[self.work_pool_name]))\n            if self.work_pool_name\n            else WorkPoolFilter(name=WorkPoolFilterName(any_=[\"default-agent-pool\"]))\n        )\n        named_cancelling_flow_runs = await self.client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=FlowRunFilterState(\n                    type=FlowRunFilterStateType(any_=[StateType.CANCELLED]),\n                    name=FlowRunFilterStateName(any_=[\"Cancelling\"]),\n                ),\n                # Avoid duplicate cancellation calls\n                id=FlowRunFilterId(not_any_=list(self.cancelling_flow_run_ids)),\n            ),\n            work_pool_filter=work_pool_filter,\n            work_queue_filter=work_queue_filter,\n        )\n\n        typed_cancelling_flow_runs = await self.client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=FlowRunFilterState(\n                    type=FlowRunFilterStateType(any_=[StateType.CANCELLING]),\n                ),\n                # Avoid duplicate cancellation calls\n                id=FlowRunFilterId(not_any_=list(self.cancelling_flow_run_ids)),\n            ),\n            work_pool_filter=work_pool_filter,\n            work_queue_filter=work_queue_filter,\n        )\n\n        cancelling_flow_runs = named_cancelling_flow_runs + typed_cancelling_flow_runs\n\n        if cancelling_flow_runs:\n            self.logger.info(\n                f\"Found {len(cancelling_flow_runs)} flow runs awaiting cancellation.\"\n            )\n\n        for flow_run in cancelling_flow_runs:\n            self.cancelling_flow_run_ids.add(flow_run.id)\n            self.task_group.start_soon(self.cancel_run, flow_run)\n\n        return cancelling_flow_runs\n\n    async def cancel_run(self, flow_run: FlowRun) -> None:\n        \"\"\"\n        Cancel a flow run by killing its infrastructure\n        \"\"\"\n        if not flow_run.infrastructure_pid:\n            self.logger.error(\n                f\"Flow run '{flow_run.id}' does not have an infrastructure pid\"\n                \" attached. Cancellation cannot be guaranteed.\"\n            )\n            await self._mark_flow_run_as_cancelled(\n                flow_run,\n                state_updates={\n                    \"message\": (\n                        \"This flow run is missing infrastructure tracking information\"\n                        \" and cancellation cannot be guaranteed.\"\n                    )\n                },\n            )\n            return\n\n        try:\n            infrastructure = await self.get_infrastructure(flow_run)\n            if infrastructure.is_using_a_runner:\n                self.logger.info(\n                    f\"Skipping cancellation because flow run {str(flow_run.id)!r} is\"\n                    \" using enhanced cancellation. A dedicated runner will handle\"\n                    \" cancellation.\"\n                )\n                return\n        except Exception:\n            self.logger.exception(\n                f\"Failed to get infrastructure for flow run '{flow_run.id}'. 
\"\n                \"Flow run cannot be cancelled.\"\n            )\n            # Note: We leave this flow run in the cancelling set because it cannot be\n            #       cancelled and this will prevent additional attempts.\n            return\n\n        if not hasattr(infrastructure, \"kill\"):\n            self.logger.error(\n                f\"Flow run '{flow_run.id}' infrastructure {infrastructure.type!r} \"\n                \"does not support killing created infrastructure. \"\n                \"Cancellation cannot be guaranteed.\"\n            )\n            return\n\n        self.logger.info(\n            f\"Killing {infrastructure.type} {flow_run.infrastructure_pid} for flow run \"\n            f\"'{flow_run.id}'...\"\n        )\n        try:\n            await infrastructure.kill(flow_run.infrastructure_pid)\n        except InfrastructureNotFound as exc:\n            self.logger.warning(f\"{exc} Marking flow run as cancelled.\")\n            await self._mark_flow_run_as_cancelled(flow_run)\n        except InfrastructureNotAvailable as exc:\n            self.logger.warning(f\"{exc} Flow run cannot be cancelled by this agent.\")\n        except Exception:\n            self.logger.exception(\n                \"Encountered exception while killing infrastructure for flow run \"\n                f\"'{flow_run.id}'. Flow run may not be cancelled.\"\n            )\n            # We will try again on generic exceptions\n            self.cancelling_flow_run_ids.remove(flow_run.id)\n            return\n        else:\n            await self._mark_flow_run_as_cancelled(flow_run)\n            self.logger.info(f\"Cancelled flow run '{flow_run.id}'!\")\n\n    async def _mark_flow_run_as_cancelled(\n        self, flow_run: FlowRun, state_updates: Optional[dict] = None\n    ) -> None:\n        state_updates = state_updates or {}\n        state_updates.setdefault(\"name\", \"Cancelled\")\n        state_updates.setdefault(\"type\", StateType.CANCELLED)\n        state = flow_run.state.copy(update=state_updates)\n\n        await self.client.set_flow_run_state(flow_run.id, state, force=True)\n\n        # Do not remove the flow run from the cancelling set immediately because\n        # the API caches responses for the `read_flow_runs` and we do not want to\n        # duplicate cancellations.\n        await self._schedule_task(\n            60 * 10, self.cancelling_flow_run_ids.remove, flow_run.id\n        )\n\n    async def get_infrastructure(self, flow_run: FlowRun) -> Infrastructure:\n        deployment = await self.client.read_deployment(flow_run.deployment_id)\n\n        flow = await self.client.read_flow(deployment.flow_id)\n\n        # overrides only apply when configuring known infra blocks\n        if not deployment.infrastructure_document_id:\n            if self.default_infrastructure:\n                infra_block = self.default_infrastructure\n            else:\n                infra_document = await self.client.read_block_document(\n                    self.default_infrastructure_document_id\n                )\n                infra_block = Block._from_block_document(infra_document)\n\n            # Add flow run metadata to the infrastructure\n            prepared_infrastructure = infra_block.prepare_for_flow_run(\n                flow_run, deployment=deployment, flow=flow\n            )\n            return prepared_infrastructure\n\n        ## get infra\n        infra_document = await self.client.read_block_document(\n            deployment.infrastructure_document_id\n        )\n\n     
   # this piece of logic applies any overrides that may have been set on the\n        # deployment; overrides are defined as dot.delimited paths on possibly nested\n        # attributes of the infrastructure block\n        doc_dict = infra_document.dict()\n        infra_dict = doc_dict.get(\"data\", {})\n        for override, value in (deployment.job_variables or {}).items():\n            nested_fields = override.split(\".\")\n            data = infra_dict\n            for field in nested_fields[:-1]:\n                data = data[field]\n\n            # once we reach the end, set the value\n            data[nested_fields[-1]] = value\n\n        # reconstruct the infra block\n        doc_dict[\"data\"] = infra_dict\n        infra_document = BlockDocument(**doc_dict)\n        infrastructure_block = Block._from_block_document(infra_document)\n\n        # TODO: Here the agent may update the infrastructure with agent-level settings\n\n        # Add flow run metadata to the infrastructure\n        prepared_infrastructure = infrastructure_block.prepare_for_flow_run(\n            flow_run, deployment=deployment, flow=flow\n        )\n\n        return prepared_infrastructure\n\n    async def submit_run(self, flow_run: FlowRun) -> None:\n        \"\"\"\n        Submit a flow run to the infrastructure\n        \"\"\"\n        ready_to_submit = await self._propose_pending_state(flow_run)\n\n        if ready_to_submit:\n            try:\n                infrastructure = await self.get_infrastructure(flow_run)\n            except Exception as exc:\n                self.logger.exception(\n                    f\"Failed to get infrastructure for flow run '{flow_run.id}'.\"\n                )\n                await self._propose_failed_state(flow_run, exc)\n                if self.limiter:\n                    self.limiter.release_on_behalf_of(flow_run.id)\n            else:\n                # Wait for submission to be completed. Note that the submission function\n                # may continue to run in the background after this exits.\n                readiness_result = await self.task_group.start(\n                    self._submit_run_and_capture_errors, flow_run, infrastructure\n                )\n\n                if readiness_result and not isinstance(readiness_result, Exception):\n                    try:\n                        await self.client.update_flow_run(\n                            flow_run_id=flow_run.id,\n                            infrastructure_pid=str(readiness_result),\n                        )\n                    except Exception:\n                        self.logger.exception(\n                            \"An error occurred while setting the `infrastructure_pid`\"\n                            f\" on flow run {flow_run.id!r}. 
The flow run will not be\"\n                            \" cancellable.\"\n                        )\n\n                self.logger.info(f\"Completed submission of flow run '{flow_run.id}'\")\n\n        else:\n            # If the run is not ready to submit, release the concurrency slot\n            if self.limiter:\n                self.limiter.release_on_behalf_of(flow_run.id)\n\n        self.submitting_flow_run_ids.remove(flow_run.id)\n\n    async def _submit_run_and_capture_errors(\n        self,\n        flow_run: FlowRun,\n        infrastructure: Infrastructure,\n        task_status: anyio.abc.TaskStatus = None,\n    ) -> Union[InfrastructureResult, Exception]:\n        # Note: There is not a clear way to determine if task_status.started() has been\n        #       called without peeking at the internal `_future`. Ideally we could just\n        #       check if the flow run id has been removed from `submitting_flow_run_ids`\n        #       but it is not so simple to guarantee that this coroutine yields back\n        #       to `submit_run` to execute that line when exceptions are raised during\n        #       submission.\n        try:\n            result = await infrastructure.run(task_status=task_status)\n        except Exception as exc:\n            if not task_status._future.done():\n                # This flow run was being submitted and did not start successfully\n                self.logger.exception(\n                    f\"Failed to submit flow run '{flow_run.id}' to infrastructure.\"\n                )\n                # Mark the task as started to prevent agent crash\n                task_status.started(exc)\n                await self._propose_crashed_state(\n                    flow_run, \"Flow run could not be submitted to infrastructure\"\n                )\n            else:\n                self.logger.exception(\n                    f\"An error occurred while monitoring flow run '{flow_run.id}'. \"\n                    \"The flow run will not be marked as failed, but an issue may have \"\n                    \"occurred.\"\n                )\n            return exc\n        finally:\n            if self.limiter:\n                self.limiter.release_on_behalf_of(flow_run.id)\n\n        if not task_status._future.done():\n            self.logger.error(\n                f\"Infrastructure returned without reporting flow run '{flow_run.id}' \"\n                \"as started or raising an error. This behavior is not expected and \"\n                \"generally indicates improper implementation of infrastructure. The \"\n                \"flow run will not be marked as failed, but an issue may have occurred.\"\n            )\n            # Mark the task as started to prevent agent crash\n            task_status.started()\n\n        if result.status_code != 0:\n            await self._propose_crashed_state(\n                flow_run,\n                (\n                    \"Flow run infrastructure exited with non-zero status code\"\n                    f\" {result.status_code}.\"\n                ),\n            )\n\n        return result\n\n    async def _propose_pending_state(self, flow_run: FlowRun) -> bool:\n        state = flow_run.state\n        try:\n            state = await propose_state(self.client, Pending(), flow_run_id=flow_run.id)\n        except Abort as exc:\n            self.logger.info(\n                (\n                    f\"Aborted submission of flow run '{flow_run.id}'. 
\"\n                    f\"Server sent an abort signal: {exc}\"\n                ),\n            )\n            return False\n        except Exception:\n            self.logger.error(\n                f\"Failed to update state of flow run '{flow_run.id}'\",\n                exc_info=True,\n            )\n            return False\n\n        if not state.is_pending():\n            self.logger.info(\n                (\n                    f\"Aborted submission of flow run '{flow_run.id}': \"\n                    f\"Server returned a non-pending state {state.type.value!r}\"\n                ),\n            )\n            return False\n\n        return True\n\n    async def _propose_failed_state(self, flow_run: FlowRun, exc: Exception) -> None:\n        try:\n            await propose_state(\n                self.client,\n                await exception_to_failed_state(message=\"Submission failed.\", exc=exc),\n                flow_run_id=flow_run.id,\n            )\n        except Abort:\n            # We've already failed, no need to note the abort but we don't want it to\n            # raise in the agent process\n            pass\n        except Exception:\n            self.logger.error(\n                f\"Failed to update state of flow run '{flow_run.id}'\",\n                exc_info=True,\n            )\n\n    async def _propose_crashed_state(self, flow_run: FlowRun, message: str) -> None:\n        try:\n            state = await propose_state(\n                self.client,\n                Crashed(message=message),\n                flow_run_id=flow_run.id,\n            )\n        except Abort:\n            # Flow run already marked as failed\n            pass\n        except Exception:\n            self.logger.exception(f\"Failed to update state of flow run '{flow_run.id}'\")\n        else:\n            if state.is_crashed():\n                self.logger.info(\n                    f\"Reported flow run '{flow_run.id}' as crashed: {message}\"\n                )\n\n    async def _schedule_task(self, __in_seconds: int, fn, *args, **kwargs):\n        \"\"\"\n        Schedule a background task to start after some time.\n\n        These tasks will be run immediately when the agent exits instead of waiting.\n\n        The function may be async or sync. 
Async functions will be awaited.\n        \"\"\"\n\n        async def wrapper(task_status):\n            # If we are shutting down, do not sleep; otherwise sleep until the scheduled\n            # time or shutdown\n            if self.started:\n                with anyio.CancelScope() as scope:\n                    self.scheduled_task_scopes.add(scope)\n                    task_status.started()\n                    await anyio.sleep(__in_seconds)\n\n                self.scheduled_task_scopes.remove(scope)\n            else:\n                task_status.started()\n\n            result = fn(*args, **kwargs)\n            if inspect.iscoroutine(result):\n                await result\n\n        await self.task_group.start(wrapper)\n\n    # Context management ---------------------------------------------------------------\n\n    async def start(self):\n        self.started = True\n        self.task_group = anyio.create_task_group()\n        self.limiter = (\n            anyio.CapacityLimiter(self.limit) if self.limit is not None else None\n        )\n        self.client = get_client()\n        await self.client.__aenter__()\n        await self.task_group.__aenter__()\n\n    async def shutdown(self, *exc_info):\n        self.started = False\n        # We must cancel scheduled task scopes before closing the task group\n        for scope in self.scheduled_task_scopes:\n            scope.cancel()\n        await self.task_group.__aexit__(*exc_info)\n        await self.client.__aexit__(*exc_info)\n        self.task_group = None\n        self.client = None\n        self.submitting_flow_run_ids.clear()\n        self.cancelling_flow_run_ids.clear()\n        self.scheduled_task_scopes.clear()\n        self._work_queue_cache_expiration = None\n        self._work_queue_cache = []\n\n    async def __aenter__(self):\n        await self.start()\n        return self\n\n    async def __aexit__(self, *exc_info):\n        await self.shutdown(*exc_info)\n
    ","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.cancel_run","title":"cancel_run async","text":"

    Cancel a flow run by killing its infrastructure

    Source code in prefect/agent.py
    async def cancel_run(self, flow_run: FlowRun) -> None:\n    \"\"\"\n    Cancel a flow run by killing its infrastructure\n    \"\"\"\n    if not flow_run.infrastructure_pid:\n        self.logger.error(\n            f\"Flow run '{flow_run.id}' does not have an infrastructure pid\"\n            \" attached. Cancellation cannot be guaranteed.\"\n        )\n        await self._mark_flow_run_as_cancelled(\n            flow_run,\n            state_updates={\n                \"message\": (\n                    \"This flow run is missing infrastructure tracking information\"\n                    \" and cancellation cannot be guaranteed.\"\n                )\n            },\n        )\n        return\n\n    try:\n        infrastructure = await self.get_infrastructure(flow_run)\n        if infrastructure.is_using_a_runner:\n            self.logger.info(\n                f\"Skipping cancellation because flow run {str(flow_run.id)!r} is\"\n                \" using enhanced cancellation. A dedicated runner will handle\"\n                \" cancellation.\"\n            )\n            return\n    except Exception:\n        self.logger.exception(\n            f\"Failed to get infrastructure for flow run '{flow_run.id}'. \"\n            \"Flow run cannot be cancelled.\"\n        )\n        # Note: We leave this flow run in the cancelling set because it cannot be\n        #       cancelled and this will prevent additional attempts.\n        return\n\n    if not hasattr(infrastructure, \"kill\"):\n        self.logger.error(\n            f\"Flow run '{flow_run.id}' infrastructure {infrastructure.type!r} \"\n            \"does not support killing created infrastructure. \"\n            \"Cancellation cannot be guaranteed.\"\n        )\n        return\n\n    self.logger.info(\n        f\"Killing {infrastructure.type} {flow_run.infrastructure_pid} for flow run \"\n        f\"'{flow_run.id}'...\"\n    )\n    try:\n        await infrastructure.kill(flow_run.infrastructure_pid)\n    except InfrastructureNotFound as exc:\n        self.logger.warning(f\"{exc} Marking flow run as cancelled.\")\n        await self._mark_flow_run_as_cancelled(flow_run)\n    except InfrastructureNotAvailable as exc:\n        self.logger.warning(f\"{exc} Flow run cannot be cancelled by this agent.\")\n    except Exception:\n        self.logger.exception(\n            \"Encountered exception while killing infrastructure for flow run \"\n            f\"'{flow_run.id}'. Flow run may not be cancelled.\"\n        )\n        # We will try again on generic exceptions\n        self.cancelling_flow_run_ids.remove(flow_run.id)\n        return\n    else:\n        await self._mark_flow_run_as_cancelled(flow_run)\n        self.logger.info(f\"Cancelled flow run '{flow_run.id}'!\")\n
    ","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.get_and_submit_flow_runs","title":"get_and_submit_flow_runs async","text":"

    The principal method on agents. Queries for scheduled flow runs and submits them for execution in parallel.

    Source code in prefect/agent.py
    async def get_and_submit_flow_runs(self) -> List[FlowRun]:\n    \"\"\"\n    The principle method on agents. Queries for scheduled flow runs and submits\n    them for execution in parallel.\n    \"\"\"\n    if not self.started:\n        raise RuntimeError(\n            \"Agent is not started. Use `async with PrefectAgent()...`\"\n        )\n\n    self.logger.debug(\"Checking for scheduled flow runs...\")\n\n    before = pendulum.now(\"utc\").add(\n        seconds=self.prefetch_seconds or PREFECT_AGENT_PREFETCH_SECONDS.value()\n    )\n\n    submittable_runs: List[FlowRun] = []\n\n    if self.work_pool_name:\n        responses = await self.client.get_scheduled_flow_runs_for_work_pool(\n            work_pool_name=self.work_pool_name,\n            work_queue_names=[wq.name async for wq in self.get_work_queues()],\n            scheduled_before=before,\n        )\n        submittable_runs.extend([response.flow_run for response in responses])\n\n    else:\n        # load runs from each work queue\n        async for work_queue in self.get_work_queues():\n            # print a nice message if the work queue is paused\n            if work_queue.is_paused:\n                self.logger.info(\n                    f\"Work queue {work_queue.name!r} ({work_queue.id}) is paused.\"\n                )\n\n            else:\n                try:\n                    queue_runs = await self.client.get_runs_in_work_queue(\n                        id=work_queue.id, limit=10, scheduled_before=before\n                    )\n                    submittable_runs.extend(queue_runs)\n                except ObjectNotFound:\n                    self.logger.error(\n                        f\"Work queue {work_queue.name!r} ({work_queue.id}) not\"\n                        \" found.\"\n                    )\n                except Exception as exc:\n                    self.logger.exception(exc)\n\n        submittable_runs.sort(key=lambda run: run.next_scheduled_start_time)\n\n    for flow_run in submittable_runs:\n        # don't resubmit a run\n        if flow_run.id in self.submitting_flow_run_ids:\n            continue\n\n        try:\n            if self.limiter:\n                self.limiter.acquire_on_behalf_of_nowait(flow_run.id)\n        except anyio.WouldBlock:\n            self.logger.info(\n                f\"Flow run limit reached; {self.limiter.borrowed_tokens} flow runs\"\n                \" in progress.\"\n            )\n            break\n        else:\n            self.logger.info(f\"Submitting flow run '{flow_run.id}'\")\n            self.submitting_flow_run_ids.add(flow_run.id)\n            self.task_group.start_soon(\n                self.submit_run,\n                flow_run,\n            )\n\n    return list(\n        filter(lambda run: run.id in self.submitting_flow_run_ids, submittable_runs)\n    )\n
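    For illustration, a minimal sketch of driving this loop by hand; in practice the `prefect agent start` CLI runs it for you, and agents are deprecated in favor of workers. The work queue name and prefetch value below are assumptions:

    import anyio

    from prefect.agent import PrefectAgent

    async def poll_once():
        # The agent must be started (here via its async context manager) before polling.
        async with PrefectAgent(work_queues=["default"], prefetch_seconds=10) as agent:
            submitted = await agent.get_and_submit_flow_runs()
            print(f"Submitted {len(submitted)} flow run(s)")

    anyio.run(poll_once)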
    ","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.get_work_queues","title":"get_work_queues async","text":"

    Loads the work queue objects corresponding to the agent's target work queues. If any of them don't exist, they are created.

    Source code in prefect/agent.py
    async def get_work_queues(self) -> AsyncIterator[WorkQueue]:\n    \"\"\"\n    Loads the work queue objects corresponding to the agent's target work\n    queues. If any of them don't exist, they are created.\n    \"\"\"\n\n    # if the queue cache has not expired, yield queues from the cache\n    now = pendulum.now(\"UTC\")\n    if (self._work_queue_cache_expiration or now) > now:\n        for queue in self._work_queue_cache:\n            yield queue\n        return\n\n    # otherwise clear the cache, set the expiration for 30 seconds, and\n    # reload the work queues\n    self._work_queue_cache.clear()\n    self._work_queue_cache_expiration = now.add(seconds=30)\n\n    await self.update_matched_agent_work_queues()\n\n    for name in self.work_queues:\n        try:\n            work_queue = await self.client.read_work_queue_by_name(\n                work_pool_name=self.work_pool_name, name=name\n            )\n        except (ObjectNotFound, Exception):\n            work_queue = None\n\n        # if the work queue wasn't found and the agent is NOT polling\n        # for queues using a regex, try to create it\n        if work_queue is None and not self.work_queue_prefix:\n            try:\n                work_queue = await self.client.create_work_queue(\n                    work_pool_name=self.work_pool_name, name=name\n                )\n            except Exception:\n                # if creating it raises an exception, it was probably just\n                # created by some other agent; rather than entering a re-read\n                # loop with new error handling, we log the exception and\n                # continue.\n                self.logger.exception(f\"Failed to create work queue {name!r}.\")\n                continue\n            else:\n                log_str = f\"Created work queue {name!r}\"\n                if self.work_pool_name:\n                    log_str = (\n                        f\"Created work queue {name!r} in work pool\"\n                        f\" {self.work_pool_name!r}.\"\n                    )\n                else:\n                    log_str = f\"Created work queue '{name}'.\"\n                self.logger.info(log_str)\n\n        if work_queue is None:\n            self.logger.error(\n                f\"Work queue '{name!r}' with prefix {self.work_queue_prefix} wasn't\"\n                \" found\"\n            )\n        else:\n            self._work_queue_cache.append(work_queue)\n            yield work_queue\n
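    A sketch of consuming this generator directly once the agent has been started; the `etl-` prefix is a made-up example, and matched queues are cached for 30 seconds between calls:

    import anyio

    from prefect.agent import PrefectAgent

    async def list_matched_queues():
        async with PrefectAgent(work_queue_prefix="etl-") as agent:
            async for queue in agent.get_work_queues():
                print(queue.name, queue.id)

    anyio.run(list_matched_queues)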
    ","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.submit_run","title":"submit_run async","text":"

    Submit a flow run to the infrastructure

    Source code in prefect/agent.py
    async def submit_run(self, flow_run: FlowRun) -> None:\n    \"\"\"\n    Submit a flow run to the infrastructure\n    \"\"\"\n    ready_to_submit = await self._propose_pending_state(flow_run)\n\n    if ready_to_submit:\n        try:\n            infrastructure = await self.get_infrastructure(flow_run)\n        except Exception as exc:\n            self.logger.exception(\n                f\"Failed to get infrastructure for flow run '{flow_run.id}'.\"\n            )\n            await self._propose_failed_state(flow_run, exc)\n            if self.limiter:\n                self.limiter.release_on_behalf_of(flow_run.id)\n        else:\n            # Wait for submission to be completed. Note that the submission function\n            # may continue to run in the background after this exits.\n            readiness_result = await self.task_group.start(\n                self._submit_run_and_capture_errors, flow_run, infrastructure\n            )\n\n            if readiness_result and not isinstance(readiness_result, Exception):\n                try:\n                    await self.client.update_flow_run(\n                        flow_run_id=flow_run.id,\n                        infrastructure_pid=str(readiness_result),\n                    )\n                except Exception:\n                    self.logger.exception(\n                        \"An error occurred while setting the `infrastructure_pid`\"\n                        f\" on flow run {flow_run.id!r}. The flow run will not be\"\n                        \" cancellable.\"\n                    )\n\n            self.logger.info(f\"Completed submission of flow run '{flow_run.id}'\")\n\n    else:\n        # If the run is not ready to submit, release the concurrency slot\n        if self.limiter:\n            self.limiter.release_on_behalf_of(flow_run.id)\n\n    self.submitting_flow_run_ids.remove(flow_run.id)\n
    ","tags":["Python API","agents"]},{"location":"api-ref/prefect/artifacts/","title":"prefect.artifacts","text":"","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts","title":"prefect.artifacts","text":"

    Interface for creating and reading artifacts.

    ","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.Artifact","title":"Artifact","text":"

    Bases: ArtifactCreate

    An artifact is a piece of data that is created by a flow or task run. https://docs.prefect.io/latest/concepts/artifacts/

    Parameters:

    - type (required): A string identifying the type of artifact.
    - key (required): A user-provided string identifier. The key must only contain lowercase letters, numbers, and dashes.
    - description (required): A user-specified description of the artifact.
    - data (required): A JSON payload that allows for a result to be retrieved.

    Source code in prefect/artifacts.py
    class Artifact(ArtifactRequest):\n    \"\"\"\n    An artifact is a piece of data that is created by a flow or task run.\n    https://docs.prefect.io/latest/concepts/artifacts/\n\n    Arguments:\n        type: A string identifying the type of artifact.\n        key: A user-provided string identifier.\n          The key must only contain lowercase letters, numbers, and dashes.\n        description: A user-specified description of the artifact.\n        data: A JSON payload that allows for a result to be retrieved.\n    \"\"\"\n\n    @sync_compatible\n    async def create(\n        self: Self,\n        client: Optional[PrefectClient] = None,\n    ) -> ArtifactResponse:\n        \"\"\"\n        A method to create an artifact.\n\n        Arguments:\n            client: The PrefectClient\n\n        Returns:\n            - The created artifact.\n        \"\"\"\n        client, _ = get_or_create_client(client)\n        task_run_id, flow_run_id = get_task_and_flow_run_ids()\n        return await client.create_artifact(\n            artifact=ArtifactRequest(\n                type=self.type,\n                key=self.key,\n                description=self.description,\n                task_run_id=self.task_run_id or task_run_id,\n                flow_run_id=self.flow_run_id or flow_run_id,\n                data=await self.format(),\n            )\n        )\n\n    @classmethod\n    @sync_compatible\n    async def get(\n        cls, key: Optional[str] = None, client: Optional[PrefectClient] = None\n    ) -> Optional[ArtifactResponse]:\n        \"\"\"\n        A method to get an artifact.\n\n        Arguments:\n            key (str, optional): The key of the artifact to get.\n            client (PrefectClient, optional): The PrefectClient\n\n        Returns:\n            (ArtifactResponse, optional): The artifact (if found).\n        \"\"\"\n        client, _ = get_or_create_client(client)\n        return next(\n            iter(\n                await client.read_artifacts(\n                    limit=1,\n                    sort=ArtifactSort.UPDATED_DESC,\n                    artifact_filter=ArtifactFilter(key=ArtifactFilterKey(any_=[key])),\n                )\n            ),\n            None,\n        )\n\n    @classmethod\n    @sync_compatible\n    async def get_or_create(\n        cls,\n        key: Optional[str] = None,\n        description: Optional[str] = None,\n        data: Optional[Union[Dict[str, Any], Any]] = None,\n        client: Optional[PrefectClient] = None,\n        **kwargs: Any,\n    ) -> Tuple[ArtifactResponse, bool]:\n        \"\"\"\n        A method to get or create an artifact.\n\n        Arguments:\n            key (str, optional): The key of the artifact to get or create.\n            description (str, optional): The description of the artifact to create.\n            data (Union[Dict[str, Any], Any], optional): The data of the artifact to create.\n            client (PrefectClient, optional): The PrefectClient\n\n        Returns:\n            (ArtifactResponse): The artifact, either retrieved or created.\n        \"\"\"\n        artifact = await cls.get(key, client)\n        if artifact:\n            return artifact, False\n        else:\n            return (\n                await cls(key=key, description=description, data=data, **kwargs).create(\n                    client\n                ),\n                True,\n            )\n\n    async def format(self) -> Optional[Union[Dict[str, Any], Any]]:\n        return json.dumps(self.data)\n
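    A minimal sketch of creating an artifact directly from this class; the key, description, and data are illustrative values, and `create` is sync-compatible (call it directly in synchronous code, `await` it in asynchronous code):

    from prefect.artifacts import Artifact

    artifact = Artifact(
        type="markdown",
        key="nightly-report",
        description="Summary of the nightly run",
        data="All checks passed.",
    )
    artifact.create()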
    ","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.Artifact.create","title":"create async","text":"

    A method to create an artifact.

    Parameters:

    - client (Optional[PrefectClient], default None): The PrefectClient.

    Returns:

    - Artifact: The created artifact.
    Source code in prefect/artifacts.py
    @sync_compatible\nasync def create(\n    self: Self,\n    client: Optional[PrefectClient] = None,\n) -> ArtifactResponse:\n    \"\"\"\n    A method to create an artifact.\n\n    Arguments:\n        client: The PrefectClient\n\n    Returns:\n        - The created artifact.\n    \"\"\"\n    client, _ = get_or_create_client(client)\n    task_run_id, flow_run_id = get_task_and_flow_run_ids()\n    return await client.create_artifact(\n        artifact=ArtifactRequest(\n            type=self.type,\n            key=self.key,\n            description=self.description,\n            task_run_id=self.task_run_id or task_run_id,\n            flow_run_id=self.flow_run_id or flow_run_id,\n            data=await self.format(),\n        )\n    )\n
    ","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.Artifact.get","title":"get async classmethod","text":"

    A method to get an artifact.

    Parameters:

    - key (str, optional, default None): The key of the artifact to get.
    - client (PrefectClient, optional, default None): The PrefectClient.

    Returns:

    - (Artifact, optional): The artifact (if found).

    Source code in prefect/artifacts.py
    @classmethod\n@sync_compatible\nasync def get(\n    cls, key: Optional[str] = None, client: Optional[PrefectClient] = None\n) -> Optional[ArtifactResponse]:\n    \"\"\"\n    A method to get an artifact.\n\n    Arguments:\n        key (str, optional): The key of the artifact to get.\n        client (PrefectClient, optional): The PrefectClient\n\n    Returns:\n        (ArtifactResponse, optional): The artifact (if found).\n    \"\"\"\n    client, _ = get_or_create_client(client)\n    return next(\n        iter(\n            await client.read_artifacts(\n                limit=1,\n                sort=ArtifactSort.UPDATED_DESC,\n                artifact_filter=ArtifactFilter(key=ArtifactFilterKey(any_=[key])),\n            )\n        ),\n        None,\n    )\n
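    For example, fetching the most recently updated artifact for a key (the key name is illustrative):

    from prefect.artifacts import Artifact

    artifact = Artifact.get(key="nightly-report")
    if artifact:
        print(artifact.data)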
    ","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.Artifact.get_or_create","title":"get_or_create async classmethod","text":"

    A method to get or create an artifact.

    Parameters:

    - key (str, optional, default None): The key of the artifact to get or create.
    - description (str, optional, default None): The description of the artifact to create.
    - data (Union[Dict[str, Any], Any], optional, default None): The data of the artifact to create.
    - client (PrefectClient, optional, default None): The PrefectClient.

    Returns:

    - Artifact: The artifact, either retrieved or created.

    Source code in prefect/artifacts.py
    @classmethod\n@sync_compatible\nasync def get_or_create(\n    cls,\n    key: Optional[str] = None,\n    description: Optional[str] = None,\n    data: Optional[Union[Dict[str, Any], Any]] = None,\n    client: Optional[PrefectClient] = None,\n    **kwargs: Any,\n) -> Tuple[ArtifactResponse, bool]:\n    \"\"\"\n    A method to get or create an artifact.\n\n    Arguments:\n        key (str, optional): The key of the artifact to get or create.\n        description (str, optional): The description of the artifact to create.\n        data (Union[Dict[str, Any], Any], optional): The data of the artifact to create.\n        client (PrefectClient, optional): The PrefectClient\n\n    Returns:\n        (ArtifactResponse): The artifact, either retrieved or created.\n    \"\"\"\n    artifact = await cls.get(key, client)\n    if artifact:\n        return artifact, False\n    else:\n        return (\n            await cls(key=key, description=description, data=data, **kwargs).create(\n                client\n            ),\n            True,\n        )\n
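    A sketch of the get-or-create pattern; per the source above, a tuple of `(artifact, created)` is returned, and extra keyword arguments such as `type` are passed through to the constructor when the artifact has to be created (values are illustrative):

    from prefect.artifacts import Artifact

    artifact, created = Artifact.get_or_create(
        key="nightly-row-counts",
        description="Rows processed in the nightly run",
        data={"rows_processed": 1024},
        type="table",
    )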
    ","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.TableArtifact","title":"TableArtifact","text":"

    Bases: Artifact

    Source code in prefect/artifacts.py
    class TableArtifact(Artifact):\n    table: Union[Dict[str, List[Any]], List[Dict[str, Any]], List[List[Any]]]\n    type: Optional[str] = \"table\"\n\n    @classmethod\n    def _sanitize(\n        cls, item: Union[Dict[str, Any], List[Any], float]\n    ) -> Union[Dict[str, Any], List[Any], int, float, None]:\n        \"\"\"\n        Sanitize NaN values in a given item.\n        The item can be a dict, list or float.\n        \"\"\"\n        if isinstance(item, list):\n            return [cls._sanitize(sub_item) for sub_item in item]\n        elif isinstance(item, dict):\n            return {k: cls._sanitize(v) for k, v in item.items()}\n        elif isinstance(item, float) and math.isnan(item):\n            return None\n        else:\n            return item\n\n    async def format(self) -> str:\n        return json.dumps(self._sanitize(self.table))\n
    ","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.create_link_artifact","title":"create_link_artifact async","text":"

    Create a link artifact.

    Parameters:

    - link (str, required): The link to create.
    - link_text (Optional[str], default None): The link text.
    - key (Optional[str], default None): A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes.
    - description (Optional[str], default None): A user-specified description of the artifact.

    Returns:

    - UUID: The link artifact ID.

    Source code in prefect/artifacts.py
    @sync_compatible\nasync def create_link_artifact(\n    link: str,\n    link_text: Optional[str] = None,\n    key: Optional[str] = None,\n    description: Optional[str] = None,\n    client: Optional[PrefectClient] = None,\n) -> UUID:\n    \"\"\"\n    Create a link artifact.\n\n    Arguments:\n        link: The link to create.\n        link_text: The link text.\n        key: A user-provided string identifier.\n          Required for the artifact to show in the Artifacts page in the UI.\n          The key must only contain lowercase letters, numbers, and dashes.\n        description: A user-specified description of the artifact.\n\n\n    Returns:\n        The table artifact ID.\n    \"\"\"\n    artifact = await LinkArtifact(\n        key=key,\n        description=description,\n        link=link,\n        link_text=link_text,\n    ).create(client)\n\n    return artifact.id\n
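    For instance, recording a link from inside a flow (the URL, key, and flow name are placeholders):

    from prefect import flow
    from prefect.artifacts import create_link_artifact

    @flow
    def report_flow():
        create_link_artifact(
            link="https://example.com/report.html",
            link_text="Nightly report",
            key="nightly-report-link",
            description="Link to the rendered report",
        )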
    ","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.create_markdown_artifact","title":"create_markdown_artifact async","text":"

    Create a markdown artifact.

    Parameters:

    - markdown (str, required): The markdown to create.
    - key (Optional[str], default None): A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes.
    - description (Optional[str], default None): A user-specified description of the artifact.

    Returns:

    - UUID: The markdown artifact ID.

    Source code in prefect/artifacts.py
    @sync_compatible\nasync def create_markdown_artifact(\n    markdown: str,\n    key: Optional[str] = None,\n    description: Optional[str] = None,\n) -> UUID:\n    \"\"\"\n    Create a markdown artifact.\n\n    Arguments:\n        markdown: The markdown to create.\n        key: A user-provided string identifier.\n          Required for the artifact to show in the Artifacts page in the UI.\n          The key must only contain lowercase letters, numbers, and dashes.\n        description: A user-specified description of the artifact.\n\n    Returns:\n        The table artifact ID.\n    \"\"\"\n    artifact = await MarkdownArtifact(\n        key=key,\n        description=description,\n        markdown=markdown,\n    ).create()\n\n    return artifact.id\n
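    For instance, publishing a short markdown summary from a task (values are placeholders):

    from prefect import task
    from prefect.artifacts import create_markdown_artifact

    @task
    def summarize():
        create_markdown_artifact(
            markdown="# Nightly summary\n\nAll checks passed.",
            key="nightly-summary",
            description="Summary of the nightly run",
        )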
    ","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.create_table_artifact","title":"create_table_artifact async","text":"

    Create a table artifact.

    Parameters:

    - table (Union[Dict[str, List[Any]], List[Dict[str, Any]], List[List[Any]]], required): The table to create.
    - key (Optional[str], default None): A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes.
    - description (Optional[str], default None): A user-specified description of the artifact.

    Returns:

    - UUID: The table artifact ID.

    Source code in prefect/artifacts.py
    @sync_compatible\nasync def create_table_artifact(\n    table: Union[Dict[str, List[Any]], List[Dict[str, Any]], List[List[Any]]],\n    key: Optional[str] = None,\n    description: Optional[str] = None,\n) -> UUID:\n    \"\"\"\n    Create a table artifact.\n\n    Arguments:\n        table: The table to create.\n        key: A user-provided string identifier.\n          Required for the artifact to show in the Artifacts page in the UI.\n          The key must only contain lowercase letters, numbers, and dashes.\n        description: A user-specified description of the artifact.\n\n    Returns:\n        The table artifact ID.\n    \"\"\"\n\n    artifact = await TableArtifact(\n        key=key,\n        description=description,\n        table=table,\n    ).create()\n\n    return artifact.id\n
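    For instance, a list of row dictionaries is accepted as the table (values are placeholders):

    from prefect.artifacts import create_table_artifact

    create_table_artifact(
        table=[
            {"worker": "alice", "rows_processed": 512},
            {"worker": "bob", "rows_processed": 1024},
        ],
        key="nightly-row-counts",
        description="Rows processed per worker",
    )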
    ","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/automations/","title":"prefect.automations","text":"","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations","title":"prefect.automations","text":"","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.Automation","title":"Automation","text":"

    Bases: AutomationCore

    Source code in prefect/automations.py
    class Automation(AutomationCore):\n    id: Optional[UUID] = Field(default=None, description=\"The ID of this automation\")\n\n    @sync_compatible\n    async def create(self: Self) -> Self:\n        \"\"\"\n        Create a new automation.\n\n        auto_to_create = Automation(\n            name=\"woodchonk\",\n            trigger=EventTrigger(\n                expect={\"animal.walked\"},\n                match={\n                    \"genus\": \"Marmota\",\n                    \"species\": \"monax\",\n                },\n                posture=\"Reactive\",\n                threshold=3,\n                within=timedelta(seconds=10),\n            ),\n            actions=[CancelFlowRun()]\n        )\n        created_automation = auto_to_create.create()\n        \"\"\"\n        client, _ = get_or_create_client()\n        automation = AutomationCore(**self.dict(exclude={\"id\"}))\n        self.id = await client.create_automation(automation=automation)\n        return self\n\n    @sync_compatible\n    async def update(self: Self):\n        \"\"\"\n        Updates an existing automation.\n        auto = Automation.read(id=123)\n        auto.name = \"new name\"\n        auto.update()\n        \"\"\"\n\n        client, _ = get_or_create_client()\n        automation = AutomationCore(**self.dict(exclude={\"id\", \"owner_resource\"}))\n        await client.update_automation(automation_id=self.id, automation=automation)\n\n    @classmethod\n    @sync_compatible\n    async def read(\n        cls: Self, id: Optional[UUID] = None, name: Optional[str] = None\n    ) -> Self:\n        \"\"\"\n        Read an automation by ID or name.\n        automation = Automation.read(name=\"woodchonk\")\n\n        or\n\n        automation = Automation.read(id=UUID(\"b3514963-02b1-47a5-93d1-6eeb131041cb\"))\n        \"\"\"\n        if id and name:\n            raise ValueError(\"Only one of id or name can be provided\")\n        if not id and not name:\n            raise ValueError(\"One of id or name must be provided\")\n        client, _ = get_or_create_client()\n        if id:\n            try:\n                automation = await client.read_automation(automation_id=id)\n            except PrefectHTTPStatusError as exc:\n                if exc.response.status_code == 404:\n                    raise ValueError(f\"Automation with ID {id!r} not found\")\n            return Automation(**automation.dict())\n        else:\n            automation = await client.read_automations_by_name(name=name)\n            if len(automation) > 0:\n                return Automation(**automation[0].dict()) if automation else None\n            else:\n                raise ValueError(f\"Automation with name {name!r} not found\")\n\n    @sync_compatible\n    async def delete(self: Self) -> bool:\n        \"\"\"\n        auto = Automation.read(id = 123)\n        auto.delete()\n        \"\"\"\n        try:\n            client, _ = get_or_create_client()\n            await client.delete_automation(self.id)\n            return True\n        except PrefectHTTPStatusError as exc:\n            if exc.response.status_code == 404:\n                return False\n            raise\n\n    @sync_compatible\n    async def disable(self: Self) -> bool:\n        \"\"\"\n        Disable an automation.\n        auto = Automation.read(id = 123)\n        auto.disable()\n        \"\"\"\n        try:\n            client, _ = get_or_create_client()\n            await client.pause_automation(self.id)\n            return True\n        except 
PrefectHTTPStatusError as exc:\n            if exc.response.status_code == 404:\n                return False\n            raise\n\n    @sync_compatible\n    async def enable(self: Self) -> bool:\n        \"\"\"\n        Enable an automation.\n        auto = Automation.read(id = 123)\n        auto.enable()\n        \"\"\"\n        try:\n            client, _ = get_or_create_client()\n            await client.resume_automation(self.id)\n            return True\n        except PrefectHTTPStatusError as exc:\n            if exc.response.status_code == 404:\n                return False\n            raise\n
    ","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.Automation.create","title":"create async","text":"

    Create a new automation.

    auto_to_create = Automation(
        name="woodchonk",
        trigger=EventTrigger(
            expect={"animal.walked"},
            match={
                "genus": "Marmota",
                "species": "monax",
            },
            posture="Reactive",
            threshold=3,
            within=timedelta(seconds=10),
        ),
        actions=[CancelFlowRun()],
    )
    created_automation = auto_to_create.create()
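    The example above assumes roughly the following imports; the exact public paths are assumptions based on the source locations shown in this reference:

    from datetime import timedelta

    from prefect.automations import Automation
    from prefect.events.actions import CancelFlowRun
    from prefect.events.schemas.automations import EventTrigger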

    Source code in prefect/automations.py
    @sync_compatible\nasync def create(self: Self) -> Self:\n    \"\"\"\n    Create a new automation.\n\n    auto_to_create = Automation(\n        name=\"woodchonk\",\n        trigger=EventTrigger(\n            expect={\"animal.walked\"},\n            match={\n                \"genus\": \"Marmota\",\n                \"species\": \"monax\",\n            },\n            posture=\"Reactive\",\n            threshold=3,\n            within=timedelta(seconds=10),\n        ),\n        actions=[CancelFlowRun()]\n    )\n    created_automation = auto_to_create.create()\n    \"\"\"\n    client, _ = get_or_create_client()\n    automation = AutomationCore(**self.dict(exclude={\"id\"}))\n    self.id = await client.create_automation(automation=automation)\n    return self\n
    ","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.Automation.delete","title":"delete async","text":"

    auto = Automation.read(id=123)
    auto.delete()

    Source code in prefect/automations.py
    @sync_compatible\nasync def delete(self: Self) -> bool:\n    \"\"\"\n    auto = Automation.read(id = 123)\n    auto.delete()\n    \"\"\"\n    try:\n        client, _ = get_or_create_client()\n        await client.delete_automation(self.id)\n        return True\n    except PrefectHTTPStatusError as exc:\n        if exc.response.status_code == 404:\n            return False\n        raise\n
    ","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.Automation.disable","title":"disable async","text":"

    Disable an automation.

    auto = Automation.read(id=123)
    auto.disable()

    Source code in prefect/automations.py
    @sync_compatible\nasync def disable(self: Self) -> bool:\n    \"\"\"\n    Disable an automation.\n    auto = Automation.read(id = 123)\n    auto.disable()\n    \"\"\"\n    try:\n        client, _ = get_or_create_client()\n        await client.pause_automation(self.id)\n        return True\n    except PrefectHTTPStatusError as exc:\n        if exc.response.status_code == 404:\n            return False\n        raise\n
    ","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.Automation.enable","title":"enable async","text":"

    Enable an automation.

    auto = Automation.read(id=123)
    auto.enable()

    Source code in prefect/automations.py
    @sync_compatible\nasync def enable(self: Self) -> bool:\n    \"\"\"\n    Enable an automation.\n    auto = Automation.read(id = 123)\n    auto.enable()\n    \"\"\"\n    try:\n        client, _ = get_or_create_client()\n        await client.resume_automation(self.id)\n        return True\n    except PrefectHTTPStatusError as exc:\n        if exc.response.status_code == 404:\n            return False\n        raise\n
    ","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.Automation.read","title":"read async classmethod","text":"

    Read an automation by ID or name.

    automation = Automation.read(name="woodchonk")

    or

    automation = Automation.read(id=UUID("b3514963-02b1-47a5-93d1-6eeb131041cb"))

    Source code in prefect/automations.py
    @classmethod\n@sync_compatible\nasync def read(\n    cls: Self, id: Optional[UUID] = None, name: Optional[str] = None\n) -> Self:\n    \"\"\"\n    Read an automation by ID or name.\n    automation = Automation.read(name=\"woodchonk\")\n\n    or\n\n    automation = Automation.read(id=UUID(\"b3514963-02b1-47a5-93d1-6eeb131041cb\"))\n    \"\"\"\n    if id and name:\n        raise ValueError(\"Only one of id or name can be provided\")\n    if not id and not name:\n        raise ValueError(\"One of id or name must be provided\")\n    client, _ = get_or_create_client()\n    if id:\n        try:\n            automation = await client.read_automation(automation_id=id)\n        except PrefectHTTPStatusError as exc:\n            if exc.response.status_code == 404:\n                raise ValueError(f\"Automation with ID {id!r} not found\")\n        return Automation(**automation.dict())\n    else:\n        automation = await client.read_automations_by_name(name=name)\n        if len(automation) > 0:\n            return Automation(**automation[0].dict()) if automation else None\n        else:\n            raise ValueError(f\"Automation with name {name!r} not found\")\n
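    For example, reading an existing automation by name and then updating it; the name comes from the docstring example above:

    from prefect.automations import Automation

    automation = Automation.read(name="woodchonk")
    automation.description = "Cancel runs when the woodchonk walks too often"
    automation.update()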
    ","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.Automation.update","title":"update async","text":"

    Updates an existing automation.

    auto = Automation.read(id=123)
    auto.name = "new name"
    auto.update()

    Source code in prefect/automations.py
    @sync_compatible\nasync def update(self: Self):\n    \"\"\"\n    Updates an existing automation.\n    auto = Automation.read(id=123)\n    auto.name = \"new name\"\n    auto.update()\n    \"\"\"\n\n    client, _ = get_or_create_client()\n    automation = AutomationCore(**self.dict(exclude={\"id\", \"owner_resource\"}))\n    await client.update_automation(automation_id=self.id, automation=automation)\n
    ","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.AutomationCore","title":"AutomationCore","text":"

    Bases: PrefectBaseModel

    Defines an action a user wants to take when a certain number of events do or don't happen to the matching resources

    Source code in prefect/events/schemas/automations.py
    class AutomationCore(PrefectBaseModel, extra=\"ignore\"):  # type: ignore[call-arg]\n    \"\"\"Defines an action a user wants to take when a certain number of events\n    do or don't happen to the matching resources\"\"\"\n\n    name: str = Field(..., description=\"The name of this automation\")\n    description: str = Field(\"\", description=\"A longer description of this automation\")\n\n    enabled: bool = Field(True, description=\"Whether this automation will be evaluated\")\n\n    trigger: TriggerTypes = Field(\n        ...,\n        description=(\n            \"The criteria for which events this Automation covers and how it will \"\n            \"respond to the presence or absence of those events\"\n        ),\n    )\n\n    actions: List[ActionTypes] = Field(\n        ...,\n        description=\"The actions to perform when this Automation triggers\",\n    )\n\n    actions_on_trigger: List[ActionTypes] = Field(\n        default_factory=list,\n        description=\"The actions to perform when an Automation goes into a triggered state\",\n    )\n\n    actions_on_resolve: List[ActionTypes] = Field(\n        default_factory=list,\n        description=\"The actions to perform when an Automation goes into a resolving state\",\n    )\n\n    owner_resource: Optional[str] = Field(\n        default=None, description=\"The owning resource of this automation\"\n    )\n
    ","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.CompositeTrigger","title":"CompositeTrigger","text":"

    Bases: Trigger, ABC

    Requires some number of triggers to have fired within the given time period.

    Source code in prefect/events/schemas/automations.py
    class CompositeTrigger(Trigger, abc.ABC):\n    \"\"\"\n    Requires some number of triggers to have fired within the given time period.\n    \"\"\"\n\n    type: Literal[\"compound\", \"sequence\"]\n    triggers: List[\"TriggerTypes\"]\n    within: Optional[timedelta]\n
    ","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.CompoundTrigger","title":"CompoundTrigger","text":"

    Bases: CompositeTrigger

    A composite trigger that requires some number of triggers to have fired within the given time period

    Source code in prefect/events/schemas/automations.py
    class CompoundTrigger(CompositeTrigger):\n    \"\"\"A composite trigger that requires some number of triggers to have\n    fired within the given time period\"\"\"\n\n    type: Literal[\"compound\"] = \"compound\"\n    require: Union[int, Literal[\"any\", \"all\"]]\n\n    @root_validator\n    def validate_require(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n        require = values.get(\"require\")\n\n        if isinstance(require, int):\n            if require < 1:\n                raise ValueError(\"required must be at least 1\")\n            if require > len(values[\"triggers\"]):\n                raise ValueError(\n                    \"required must be less than or equal to the number of triggers\"\n                )\n\n        return values\n\n    def describe_for_cli(self, indent: int = 0) -> str:\n        \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n        return textwrap.indent(\n            \"\\n\".join(\n                [\n                    f\"{str(self.require).capitalize()} of:\",\n                    \"\\n\".join(\n                        [\n                            trigger.describe_for_cli(indent=indent + 1)\n                            for trigger in self.triggers\n                        ]\n                    ),\n                ]\n            ),\n            prefix=\"  \" * indent,\n        )\n
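    A sketch of a compound trigger that fires only when all of its child triggers fire within the window (the event names are illustrative):

```python
from datetime import timedelta

from prefect.automations import CompoundTrigger, EventTrigger

trigger = CompoundTrigger(
    require="all",                      # may also be "any" or an integer count
    within=timedelta(hours=1),
    triggers=[
        EventTrigger(expect={"prefect.flow-run.Failed"}),
        EventTrigger(expect={"prefect.flow-run.Crashed"}),
    ],
)
print(trigger.describe_for_cli())
```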
    ","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.CompoundTrigger.describe_for_cli","title":"describe_for_cli","text":"

    Return a human-readable description of this trigger for the CLI

    Source code in prefect/events/schemas/automations.py
    def describe_for_cli(self, indent: int = 0) -> str:\n    \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n    return textwrap.indent(\n        \"\\n\".join(\n            [\n                f\"{str(self.require).capitalize()} of:\",\n                \"\\n\".join(\n                    [\n                        trigger.describe_for_cli(indent=indent + 1)\n                        for trigger in self.triggers\n                    ]\n                ),\n            ]\n        ),\n        prefix=\"  \" * indent,\n    )\n
    ","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.EventTrigger","title":"EventTrigger","text":"

    Bases: ResourceTrigger

    A trigger that fires based on the presence or absence of events within a given period of time.

    Source code in prefect/events/schemas/automations.py
    class EventTrigger(ResourceTrigger):\n    \"\"\"\n    A trigger that fires based on the presence or absence of events within a given\n    period of time.\n    \"\"\"\n\n    type: Literal[\"event\"] = \"event\"\n\n    after: Set[str] = Field(\n        default_factory=set,\n        description=(\n            \"The event(s) which must first been seen to fire this trigger.  If \"\n            \"empty, then fire this trigger immediately.  Events may include \"\n            \"trailing wildcards, like `prefect.flow-run.*`\"\n        ),\n    )\n    expect: Set[str] = Field(\n        default_factory=set,\n        description=(\n            \"The event(s) this trigger is expecting to see.  If empty, this \"\n            \"trigger will match any event.  Events may include trailing wildcards, \"\n            \"like `prefect.flow-run.*`\"\n        ),\n    )\n\n    for_each: Set[str] = Field(\n        default_factory=set,\n        description=(\n            \"Evaluate the trigger separately for each distinct value of these labels \"\n            \"on the resource.  By default, labels refer to the primary resource of the \"\n            \"triggering event.  You may also refer to labels from related \"\n            \"resources by specifying `related:<role>:<label>`.  This will use the \"\n            \"value of that label for the first related resource in that role.  For \"\n            'example, `\"for_each\": [\"related:flow:prefect.resource.id\"]` would '\n            \"evaluate the trigger for each flow.\"\n        ),\n    )\n    posture: Literal[Posture.Reactive, Posture.Proactive] = Field(  # type: ignore[valid-type]\n        Posture.Reactive,\n        description=(\n            \"The posture of this trigger, either Reactive or Proactive.  Reactive \"\n            \"triggers respond to the _presence_ of the expected events, while \"\n            \"Proactive triggers respond to the _absence_ of those expected events.\"\n        ),\n    )\n    threshold: int = Field(\n        1,\n        description=(\n            \"The number of events required for this trigger to fire (for \"\n            \"Reactive triggers), or the number of events expected (for Proactive \"\n            \"triggers)\"\n        ),\n    )\n    within: timedelta = Field(\n        timedelta(0),\n        minimum=0.0,\n        exclusiveMinimum=False,\n        description=(\n            \"The time period over which the events must occur.  
For Reactive triggers, \"\n            \"this may be as low as 0 seconds, but must be at least 10 seconds for \"\n            \"Proactive triggers\"\n        ),\n    )\n\n    @validator(\"within\")\n    def enforce_minimum_within(\n        cls, value: timedelta, values, config, field: ModelField\n    ):\n        return validate_trigger_within(value, field)\n\n    @root_validator(skip_on_failure=True)\n    def enforce_minimum_within_for_proactive_triggers(cls, values: Dict[str, Any]):\n        posture: Optional[Posture] = values.get(\"posture\")\n        within: Optional[timedelta] = values.get(\"within\")\n\n        if posture == Posture.Proactive:\n            if not within or within == timedelta(0):\n                values[\"within\"] = timedelta(seconds=10.0)\n            elif within < timedelta(seconds=10.0):\n                raise ValueError(\n                    \"The minimum within for Proactive triggers is 10 seconds\"\n                )\n\n        return values\n\n    def describe_for_cli(self, indent: int = 0) -> str:\n        \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n        if self.posture == Posture.Reactive:\n            return textwrap.indent(\n                \"\\n\".join(\n                    [\n                        f\"Reactive: expecting {self.threshold} of {self.expect}\",\n                    ],\n                ),\n                prefix=\"  \" * indent,\n            )\n        else:\n            return textwrap.indent(\n                \"\\n\".join(\n                    [\n                        f\"Proactive: expecting {self.threshold} {self.expect} event \"\n                        f\"within {self.within}\",\n                    ],\n                ),\n                prefix=\"  \" * indent,\n            )\n
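    A sketch of a Proactive event trigger, which fires when the expected event is not seen in time; the event name, resource match, and window below are illustrative:

```python
from datetime import timedelta

from prefect.automations import EventTrigger
from prefect.events.schemas.automations import Posture

trigger = EventTrigger(
    match={"prefect.resource.id": "prefect.flow-run.*"},  # which resources to watch
    expect={"prefect.flow-run.Completed"},                 # the event we expect to see
    posture=Posture.Proactive,                             # react to its absence
    threshold=1,
    within=timedelta(minutes=30),                          # must be at least 10s for Proactive
)
```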
    ","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.EventTrigger.describe_for_cli","title":"describe_for_cli","text":"

    Return a human-readable description of this trigger for the CLI

    Source code in prefect/events/schemas/automations.py
    def describe_for_cli(self, indent: int = 0) -> str:\n    \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n    if self.posture == Posture.Reactive:\n        return textwrap.indent(\n            \"\\n\".join(\n                [\n                    f\"Reactive: expecting {self.threshold} of {self.expect}\",\n                ],\n            ),\n            prefix=\"  \" * indent,\n        )\n    else:\n        return textwrap.indent(\n            \"\\n\".join(\n                [\n                    f\"Proactive: expecting {self.threshold} {self.expect} event \"\n                    f\"within {self.within}\",\n                ],\n            ),\n            prefix=\"  \" * indent,\n        )\n
    ","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.MetricTrigger","title":"MetricTrigger","text":"

    Bases: ResourceTrigger

    A trigger that fires based on the results of a metric query.

    Source code in prefect/events/schemas/automations.py
    class MetricTrigger(ResourceTrigger):\n    \"\"\"\n    A trigger that fires based on the results of a metric query.\n    \"\"\"\n\n    type: Literal[\"metric\"] = \"metric\"\n\n    posture: Literal[Posture.Metric] = Field(  # type: ignore[valid-type]\n        Posture.Metric,\n        description=\"Periodically evaluate the configured metric query.\",\n    )\n\n    metric: MetricTriggerQuery = Field(\n        ...,\n        description=\"The metric query to evaluate for this trigger. \",\n    )\n\n    def describe_for_cli(self, indent: int = 0) -> str:\n        \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n        m = self.metric\n        return textwrap.indent(\n            \"\\n\".join(\n                [\n                    f\"Metric: {m.name.value} {m.operator.value} {m.threshold} for {m.range}\",\n                ]\n            ),\n            prefix=\"  \" * indent,\n        )\n
    ","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.MetricTrigger.describe_for_cli","title":"describe_for_cli","text":"

    Return a human-readable description of this trigger for the CLI

    Source code in prefect/events/schemas/automations.py
    def describe_for_cli(self, indent: int = 0) -> str:\n    \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n    m = self.metric\n    return textwrap.indent(\n        \"\\n\".join(\n            [\n                f\"Metric: {m.name.value} {m.operator.value} {m.threshold} for {m.range}\",\n            ]\n        ),\n        prefix=\"  \" * indent,\n    )\n
    ","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.MetricTriggerQuery","title":"MetricTriggerQuery","text":"

    Bases: PrefectBaseModel

    Defines a subset of the Trigger subclass, specific to Metric automations, that specifies the query configuration and breaching conditions for the Automation

    Source code in prefect/events/schemas/automations.py
    class MetricTriggerQuery(PrefectBaseModel):\n    \"\"\"Defines a subset of the Trigger subclass, which is specific\n    to Metric automations, that specify the query configurations\n    and breaching conditions for the Automation\"\"\"\n\n    name: PrefectMetric = Field(\n        ...,\n        description=\"The name of the metric to query.\",\n    )\n    threshold: float = Field(\n        ...,\n        description=(\n            \"The threshold value against which we'll compare \" \"the query result.\"\n        ),\n    )\n    operator: MetricTriggerOperator = Field(\n        ...,\n        description=(\n            \"The comparative operator (LT / LTE / GT / GTE) used to compare \"\n            \"the query result against the threshold value.\"\n        ),\n    )\n    range: timedelta = Field(\n        timedelta(seconds=300),  # defaults to 5 minutes\n        minimum=300.0,\n        exclusiveMinimum=False,\n        description=(\n            \"The lookback duration (seconds) for a metric query. This duration is \"\n            \"used to determine the time range over which the query will be executed. \"\n            \"The minimum value is 300 seconds (5 minutes).\"\n        ),\n    )\n    firing_for: timedelta = Field(\n        timedelta(seconds=300),  # defaults to 5 minutes\n        minimum=300.0,\n        exclusiveMinimum=False,\n        description=(\n            \"The duration (seconds) for which the metric query must breach \"\n            \"or resolve continuously before the state is updated and the \"\n            \"automation is triggered. \"\n            \"The minimum value is 300 seconds (5 minutes).\"\n        ),\n    )\n
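    A sketch combining MetricTrigger and MetricTriggerQuery; the specific PrefectMetric and MetricTriggerOperator members used here are assumptions made for illustration:

```python
from datetime import timedelta

from prefect.events.schemas.automations import (
    MetricTrigger,
    MetricTriggerOperator,
    MetricTriggerQuery,
    PrefectMetric,
)

# Fire when average flow run duration stays above 300s for five minutes,
# evaluated over a 15 minute lookback window (enum members assumed).
trigger = MetricTrigger(
    metric=MetricTriggerQuery(
        name=PrefectMetric.duration,
        operator=MetricTriggerOperator.GT,
        threshold=300.0,
        range=timedelta(minutes=15),
        firing_for=timedelta(minutes=5),
    ),
)
```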
    ","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.ResourceSpecification","title":"ResourceSpecification","text":"

    Bases: PrefectBaseModel

    A specification that may match zero, one, or many resources, used to target or select a set of resources in a query or automation. A resource must match at least one value of all of the provided labels

    Source code in prefect/events/schemas/events.py
    class ResourceSpecification(PrefectBaseModel):\n    \"\"\"A specification that may match zero, one, or many resources, used to target or\n    select a set of resources in a query or automation.  A resource must match at least\n    one value of all of the provided labels\"\"\"\n\n    __root__: Dict[str, Union[str, List[str]]]\n\n    def matches_every_resource(self) -> bool:\n        return len(self) == 0\n\n    def matches_every_resource_of_kind(self, prefix: str) -> bool:\n        if self.matches_every_resource():\n            return True\n\n        if len(self.__root__) == 1:\n            if resource_id := self.__root__.get(\"prefect.resource.id\"):\n                values = [resource_id] if isinstance(resource_id, str) else resource_id\n                return any(value == f\"{prefix}.*\" for value in values)\n\n        return False\n\n    def includes(self, candidates: Iterable[Resource]) -> bool:\n        if self.matches_every_resource():\n            return True\n\n        for candidate in candidates:\n            if self.matches(candidate):\n                return True\n\n        return False\n\n    def matches(self, resource: Resource) -> bool:\n        for label, expected in self.items():\n            value = resource.get(label)\n            if not any(matches(candidate, value) for candidate in expected):\n                return False\n        return True\n\n    def items(self) -> Iterable[Tuple[str, List[str]]]:\n        return [\n            (label, [value] if isinstance(value, str) else value)\n            for label, value in self.__root__.items()\n        ]\n\n    def __contains__(self, key: str) -> bool:\n        return self.__root__.__contains__(key)\n\n    def __getitem__(self, key: str) -> List[str]:\n        value = self.__root__[key]\n        if not value:\n            return []\n        if not isinstance(value, list):\n            value = [value]\n        return value\n\n    def pop(\n        self, key: str, default: Optional[Union[str, List[str]]] = None\n    ) -> Optional[List[str]]:\n        value = self.__root__.pop(key, default)\n        if not value:\n            return []\n        if not isinstance(value, list):\n            value = [value]\n        return value\n\n    def get(\n        self, key: str, default: Optional[Union[str, List[str]]] = None\n    ) -> Optional[List[str]]:\n        value = self.__root__.get(key, default)\n        if not value:\n            return []\n        if not isinstance(value, list):\n            value = [value]\n        return value\n\n    def __len__(self) -> int:\n        return len(self.__root__)\n\n    def deepcopy(self) -> \"ResourceSpecification\":\n        return ResourceSpecification.parse_obj(copy.deepcopy(self.__root__))\n
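    A sketch of matching with a specification; the labels are illustrative and the wildcard follows the trailing-wildcard convention described elsewhere in these docs:

```python
from prefect.events.schemas.events import Resource, ResourceSpecification

spec = ResourceSpecification.parse_obj({"prefect.resource.id": "prefect.flow-run.*"})
resource = Resource.parse_obj({"prefect.resource.id": "prefect.flow-run.1234"})

assert spec.matches_every_resource_of_kind("prefect.flow-run")
assert spec.matches(resource)
assert not spec.matches_every_resource()   # it has a label, so it is not a catch-all
```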
    ","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.ResourceTrigger","title":"ResourceTrigger","text":"

    Bases: Trigger, ABC

    Base class for triggers that may filter by the labels of resources.

    Source code in prefect/events/schemas/automations.py
    class ResourceTrigger(Trigger, abc.ABC):\n    \"\"\"\n    Base class for triggers that may filter by the labels of resources.\n    \"\"\"\n\n    type: str\n\n    match: ResourceSpecification = Field(\n        default_factory=lambda: ResourceSpecification.parse_obj({}),\n        description=\"Labels for resources which this trigger will match.\",\n    )\n    match_related: ResourceSpecification = Field(\n        default_factory=lambda: ResourceSpecification.parse_obj({}),\n        description=\"Labels for related resources which this trigger will match.\",\n    )\n
    ","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.SequenceTrigger","title":"SequenceTrigger","text":"

    Bases: CompositeTrigger

    A composite trigger that requires some number of triggers to have fired within the given time period in a specific order

    Source code in prefect/events/schemas/automations.py
    class SequenceTrigger(CompositeTrigger):\n    \"\"\"A composite trigger that requires some number of triggers to have fired\n    within the given time period in a specific order\"\"\"\n\n    type: Literal[\"sequence\"] = \"sequence\"\n\n    def describe_for_cli(self, indent: int = 0) -> str:\n        \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n        return textwrap.indent(\n            \"\\n\".join(\n                [\n                    \"In this order:\",\n                    \"\\n\".join(\n                        [\n                            trigger.describe_for_cli(indent=indent + 1)\n                            for trigger in self.triggers\n                        ]\n                    ),\n                ]\n            ),\n            prefix=\"  \" * indent,\n        )\n
    ","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.SequenceTrigger.describe_for_cli","title":"describe_for_cli","text":"

    Return a human-readable description of this trigger for the CLI

    Source code in prefect/events/schemas/automations.py
    def describe_for_cli(self, indent: int = 0) -> str:\n    \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n    return textwrap.indent(\n        \"\\n\".join(\n            [\n                \"In this order:\",\n                \"\\n\".join(\n                    [\n                        trigger.describe_for_cli(indent=indent + 1)\n                        for trigger in self.triggers\n                    ]\n                ),\n            ]\n        ),\n        prefix=\"  \" * indent,\n    )\n
    ","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.Trigger","title":"Trigger","text":"

    Bases: PrefectBaseModel, ABC

    Base class describing a set of criteria that must be satisfied in order to trigger an automation.

    Source code in prefect/events/schemas/automations.py
    class Trigger(PrefectBaseModel, abc.ABC, extra=\"ignore\"):  # type: ignore[call-arg]\n    \"\"\"\n    Base class describing a set of criteria that must be satisfied in order to trigger\n    an automation.\n    \"\"\"\n\n    type: str\n\n    @abc.abstractmethod\n    def describe_for_cli(self, indent: int = 0) -> str:\n        \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n\n    # The following allows the regular Trigger class to be used when serving or\n    # deploying flows, analogous to how the Deployment*Trigger classes work\n\n    _deployment_id: Optional[UUID] = PrivateAttr(default=None)\n\n    def set_deployment_id(self, deployment_id: UUID):\n        self._deployment_id = deployment_id\n\n    def owner_resource(self) -> Optional[str]:\n        return f\"prefect.deployment.{self._deployment_id}\"\n\n    def actions(self) -> List[ActionTypes]:\n        assert self._deployment_id\n        return [\n            RunDeployment(\n                source=\"selected\",\n                deployment_id=self._deployment_id,\n                parameters=getattr(self, \"parameters\", None),\n                job_variables=getattr(self, \"job_variables\", None),\n            )\n        ]\n\n    def as_automation(self) -> \"AutomationCore\":\n        assert self._deployment_id\n\n        trigger: TriggerTypes = cast(TriggerTypes, self)\n\n        # This is one of the Deployment*Trigger classes, so translate it over to a\n        # plain Trigger\n        if hasattr(self, \"trigger_type\"):\n            trigger = self.trigger_type(**self.dict())\n\n        return AutomationCore(\n            name=(\n                getattr(self, \"name\", None)\n                or f\"Automation for deployment {self._deployment_id}\"\n            ),\n            description=\"\",\n            enabled=getattr(self, \"enabled\", True),\n            trigger=trigger,\n            actions=self.actions(),\n            owner_resource=self.owner_resource(),\n        )\n
    ","tags":["Python API","automations"]},{"location":"api-ref/prefect/automations/#prefect.automations.Trigger.describe_for_cli","title":"describe_for_cli abstractmethod","text":"

    Return a human-readable description of this trigger for the CLI

    Source code in prefect/events/schemas/automations.py
    @abc.abstractmethod\ndef describe_for_cli(self, indent: int = 0) -> str:\n    \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n
    ","tags":["Python API","automations"]},{"location":"api-ref/prefect/context/","title":"prefect.context","text":"","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context","title":"prefect.context","text":"

    Async and thread safe models for passing runtime context data.

    These contexts should never be directly mutated by the user.

    For more user-accessible information about the current run, see prefect.runtime.

    ","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.ContextModel","title":"ContextModel","text":"

    Bases: BaseModel

    A base model for context data that forbids mutation and extra data while providing a context manager

    Source code in prefect/context.py
    class ContextModel(BaseModel):\n    \"\"\"\n    A base model for context data that forbids mutation and extra data while providing\n    a context manager\n    \"\"\"\n\n    # The context variable for storing data must be defined by the child class\n    __var__: ContextVar\n    _token: Token = PrivateAttr(None)\n\n    class Config:\n        # allow_mutation = False\n        arbitrary_types_allowed = True\n        extra = \"forbid\"\n\n    def __enter__(self):\n        if self._token is not None:\n            raise RuntimeError(\n                \"Context already entered. Context enter calls cannot be nested.\"\n            )\n        self._token = self.__var__.set(self)\n        return self\n\n    def __exit__(self, *_):\n        if not self._token:\n            raise RuntimeError(\n                \"Asymmetric use of context. Context exit called without an enter.\"\n            )\n        self.__var__.reset(self._token)\n        self._token = None\n\n    @classmethod\n    def get(cls: Type[T]) -> Optional[T]:\n        return cls.__var__.get(None)\n\n    def copy(self, **kwargs):\n        \"\"\"\n        Duplicate the context model, optionally choosing which fields to include, exclude, or change.\n\n        Attributes:\n            include: Fields to include in new model.\n            exclude: Fields to exclude from new model, as with values this takes precedence over include.\n            update: Values to change/add in the new model. Note: the data is not validated before creating\n                the new model - you should trust this data.\n            deep: Set to `True` to make a deep copy of the model.\n\n        Returns:\n            A new model instance.\n        \"\"\"\n        # Remove the token on copy to avoid re-entrance errors\n        new = super().copy(**kwargs)\n        new._token = None\n        return new\n
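    A sketch of the pattern a child class follows: declare a ContextVar as __var__ and use the model as a context manager. The class and field names below are illustrative:

```python
from contextvars import ContextVar

from prefect.context import ContextModel


class MyContext(ContextModel):
    flavor: str = "vanilla"

    __var__ = ContextVar("my_context")


with MyContext(flavor="chocolate"):
    assert MyContext.get().flavor == "chocolate"   # visible inside the block

assert MyContext.get() is None                     # nothing active outside it
```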
    ","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.ContextModel.copy","title":"copy","text":"

    Duplicate the context model, optionally choosing which fields to include, exclude, or change.

    Attributes:

    Name Type Description include

    Fields to include in new model.

    exclude

    Fields to exclude from new model, as with values this takes precedence over include.

    update

    Values to change/add in the new model. Note: the data is not validated before creating the new model - you should trust this data.

    deep

    Set to True to make a deep copy of the model.

    Returns:

    Type Description

    A new model instance.

    Source code in prefect/context.py
    def copy(self, **kwargs):\n    \"\"\"\n    Duplicate the context model, optionally choosing which fields to include, exclude, or change.\n\n    Attributes:\n        include: Fields to include in new model.\n        exclude: Fields to exclude from new model, as with values this takes precedence over include.\n        update: Values to change/add in the new model. Note: the data is not validated before creating\n            the new model - you should trust this data.\n        deep: Set to `True` to make a deep copy of the model.\n\n    Returns:\n        A new model instance.\n    \"\"\"\n    # Remove the token on copy to avoid re-entrance errors\n    new = super().copy(**kwargs)\n    new._token = None\n    return new\n
    ","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.EngineContext","title":"EngineContext","text":"

    Bases: RunContext

    The context for a flow run. Data in this context is only available from within a flow run function.

    Attributes:

    Name Type Description flow Optional[Flow]

    The flow instance associated with the run

    flow_run Optional[FlowRun]

    The API metadata for the flow run

    task_runner BaseTaskRunner

    The task runner instance being used for the flow run

    task_run_futures List[PrefectFuture]

    A list of futures for task runs submitted within this flow run

    task_run_states List[State]

    A list of states for task runs created within this flow run

    task_run_results Dict[int, State]

    A mapping of result ids to task run states for this flow run

    flow_run_states List[State]

    A list of states for flow runs created within this flow run

    sync_portal Optional[BlockingPortal]

    A blocking portal for sync task/flow runs in an async flow

    timeout_scope Optional[CancelScope]

    The cancellation scope for flow level timeouts

    Source code in prefect/context.py
    class EngineContext(RunContext):\n    \"\"\"\n    The context for a flow run. Data in this context is only available from within a\n    flow run function.\n\n    Attributes:\n        flow: The flow instance associated with the run\n        flow_run: The API metadata for the flow run\n        task_runner: The task runner instance being used for the flow run\n        task_run_futures: A list of futures for task runs submitted within this flow run\n        task_run_states: A list of states for task runs created within this flow run\n        task_run_results: A mapping of result ids to task run states for this flow run\n        flow_run_states: A list of states for flow runs created within this flow run\n        sync_portal: A blocking portal for sync task/flow runs in an async flow\n        timeout_scope: The cancellation scope for flow level timeouts\n    \"\"\"\n\n    flow: Optional[\"Flow\"] = None\n    flow_run: Optional[FlowRun] = None\n    autonomous_task_run: Optional[TaskRun] = None\n    task_runner: BaseTaskRunner\n    log_prints: bool = False\n    parameters: Optional[Dict[str, Any]] = None\n\n    # Result handling\n    result_factory: ResultFactory\n\n    # Counter for task calls allowing unique\n    task_run_dynamic_keys: Dict[str, int] = Field(default_factory=dict)\n\n    # Counter for flow pauses\n    observed_flow_pauses: Dict[str, int] = Field(default_factory=dict)\n\n    # Tracking for objects created by this flow run\n    task_run_futures: List[PrefectFuture] = Field(default_factory=list)\n    task_run_states: List[State] = Field(default_factory=list)\n    task_run_results: Dict[int, State] = Field(default_factory=dict)\n    flow_run_states: List[State] = Field(default_factory=list)\n\n    # The synchronous portal is only created for async flows for creating engine calls\n    # from synchronous task and subflow calls\n    sync_portal: Optional[anyio.abc.BlockingPortal] = None\n    timeout_scope: Optional[anyio.abc.CancelScope] = None\n\n    # Task group that can be used for background tasks during the flow run\n    background_tasks: anyio.abc.TaskGroup\n\n    # Events worker to emit events to Prefect Cloud\n    events: Optional[EventsWorker] = None\n\n    __var__ = ContextVar(\"flow_run\")\n
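    A sketch of reading this context from user code; FlowRunContext (the name used by get_run_context below) refers to this flow run context and returns None outside of a flow run:

```python
from prefect import flow
from prefect.context import FlowRunContext


@flow
def my_flow():
    ctx = FlowRunContext.get()
    print(ctx.flow_run.id, ctx.flow_run.name)   # API metadata for this run


my_flow()
```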
    ","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.PrefectObjectRegistry","title":"PrefectObjectRegistry","text":"

    Bases: ContextModel

    A context that acts as a registry for all Prefect objects that are registered during load and execution.

    Attributes:

    Name Type Description start_time DateTimeTZ

    The time the object registry was created.

    block_code_execution bool

    If set, flow calls will be ignored.

    capture_failures bool

    If set, failures during __init__ will be silenced and tracked.

    Source code in prefect/context.py
    class PrefectObjectRegistry(ContextModel):\n    \"\"\"\n    A context that acts as a registry for all Prefect objects that are\n    registered during load and execution.\n\n    Attributes:\n        start_time: The time the object registry was created.\n        block_code_execution: If set, flow calls will be ignored.\n        capture_failures: If set, failures during __init__ will be silenced and tracked.\n    \"\"\"\n\n    start_time: DateTimeTZ = Field(default_factory=lambda: pendulum.now(\"UTC\"))\n\n    _instance_registry: Dict[Type[T], List[T]] = PrivateAttr(\n        default_factory=lambda: defaultdict(list)\n    )\n\n    # Failures will be a tuple of (exception, instance, args, kwargs)\n    _instance_init_failures: Dict[\n        Type[T], List[Tuple[Exception, T, Tuple, Dict]]\n    ] = PrivateAttr(default_factory=lambda: defaultdict(list))\n\n    block_code_execution: bool = False\n    capture_failures: bool = False\n\n    __var__ = ContextVar(\"object_registry\")\n\n    def get_instances(self, type_: Type[T]) -> List[T]:\n        instances = []\n        for registered_type, type_instances in self._instance_registry.items():\n            if type_ in registered_type.mro():\n                instances.extend(type_instances)\n        return instances\n\n    def get_instance_failures(\n        self, type_: Type[T]\n    ) -> List[Tuple[Exception, T, Tuple, Dict]]:\n        failures = []\n        for type__ in type_.mro():\n            failures.extend(self._instance_init_failures[type__])\n        return failures\n\n    def register_instance(self, object):\n        # TODO: Consider using a 'Set' to avoid duplicate entries\n        self._instance_registry[type(object)].append(object)\n\n    def register_init_failure(\n        self, exc: Exception, object: Any, init_args: Tuple, init_kwargs: Dict\n    ):\n        self._instance_init_failures[type(object)].append(\n            (exc, object, init_args, init_kwargs)\n        )\n\n    @classmethod\n    def register_instances(cls, type_: Type[T]) -> Type[T]:\n        \"\"\"\n        Decorator for a class that adds registration to the `PrefectObjectRegistry`\n        on initialization of instances.\n        \"\"\"\n        original_init = type_.__init__\n\n        def __register_init__(__self__: T, *args: Any, **kwargs: Any) -> None:\n            registry = cls.get()\n            try:\n                original_init(__self__, *args, **kwargs)\n            except Exception as exc:\n                if not registry or not registry.capture_failures:\n                    raise\n                else:\n                    registry.register_init_failure(exc, __self__, args, kwargs)\n            else:\n                if registry:\n                    registry.register_instance(__self__)\n\n        update_wrapper(__register_init__, original_init)\n\n        type_.__init__ = __register_init__\n        return type_\n
    ","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.PrefectObjectRegistry.register_instances","title":"register_instances classmethod","text":"

    Decorator for a class that adds registration to the PrefectObjectRegistry on initialization of instances.

    Source code in prefect/context.py
    @classmethod\ndef register_instances(cls, type_: Type[T]) -> Type[T]:\n    \"\"\"\n    Decorator for a class that adds registration to the `PrefectObjectRegistry`\n    on initialization of instances.\n    \"\"\"\n    original_init = type_.__init__\n\n    def __register_init__(__self__: T, *args: Any, **kwargs: Any) -> None:\n        registry = cls.get()\n        try:\n            original_init(__self__, *args, **kwargs)\n        except Exception as exc:\n            if not registry or not registry.capture_failures:\n                raise\n            else:\n                registry.register_init_failure(exc, __self__, args, kwargs)\n        else:\n            if registry:\n                registry.register_instance(__self__)\n\n    update_wrapper(__register_init__, original_init)\n\n    type_.__init__ = __register_init__\n    return type_\n
    ","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.RunContext","title":"RunContext","text":"

    Bases: ContextModel

    The base context for a flow or task run. Data in this context will always be available when get_run_context is called.

    Attributes:

    Name Type Description start_time DateTimeTZ

    The time the run context was entered

    client PrefectClient

    The Prefect client instance being used for API communication

    Source code in prefect/context.py
    class RunContext(ContextModel):\n    \"\"\"\n    The base context for a flow or task run. Data in this context will always be\n    available when `get_run_context` is called.\n\n    Attributes:\n        start_time: The time the run context was entered\n        client: The Prefect client instance being used for API communication\n    \"\"\"\n\n    start_time: DateTimeTZ = Field(default_factory=lambda: pendulum.now(\"UTC\"))\n    input_keyset: Optional[Dict[str, Dict[str, str]]] = None\n    client: PrefectClient\n
    ","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.SettingsContext","title":"SettingsContext","text":"

    Bases: ContextModel

    The context for Prefect settings.

    This allows for safe concurrent access and modification of settings.

    Attributes:

    Name Type Description profile Profile

    The profile that is in use.

    settings Settings

    The complete settings model.

    Source code in prefect/context.py
    class SettingsContext(ContextModel):\n    \"\"\"\n    The context for a Prefect settings.\n\n    This allows for safe concurrent access and modification of settings.\n\n    Attributes:\n        profile: The profile that is in use.\n        settings: The complete settings model.\n    \"\"\"\n\n    profile: Profile\n    settings: Settings\n\n    __var__ = ContextVar(\"settings\")\n\n    def __hash__(self) -> int:\n        return hash(self.settings)\n\n    def __enter__(self):\n        \"\"\"\n        Upon entrance, we ensure the home directory for the profile exists.\n        \"\"\"\n        return_value = super().__enter__()\n\n        try:\n            prefect_home = Path(self.settings.value_of(PREFECT_HOME))\n            prefect_home.mkdir(mode=0o0700, exist_ok=True)\n        except OSError:\n            warnings.warn(\n                (\n                    \"Failed to create the Prefect home directory at \"\n                    f\"{self.settings.value_of(PREFECT_HOME)}\"\n                ),\n                stacklevel=2,\n            )\n\n        return return_value\n\n    @classmethod\n    def get(cls) -> \"SettingsContext\":\n        # Return the global context instead of `None` if no context exists\n        return super().get() or GLOBAL_SETTINGS_CONTEXT\n
    ","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.TagsContext","title":"TagsContext","text":"

    Bases: ContextModel

    The context for prefect.tags management.

    Attributes:

    Name Type Description current_tags Set[str]

    A set of current tags in the context

    Source code in prefect/context.py
    class TagsContext(ContextModel):\n    \"\"\"\n    The context for `prefect.tags` management.\n\n    Attributes:\n        current_tags: A set of current tags in the context\n    \"\"\"\n\n    current_tags: Set[str] = Field(default_factory=set)\n\n    @classmethod\n    def get(cls) -> \"TagsContext\":\n        # Return an empty `TagsContext` instead of `None` if no context exists\n        return cls.__var__.get(TagsContext())\n\n    __var__ = ContextVar(\"tags\")\n
    ","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.TaskRunContext","title":"TaskRunContext","text":"

    Bases: RunContext

    The context for a task run. Data in this context is only available from within a task run function.

    Attributes:

    Name Type Description task Task

    The task instance associated with the task run

    task_run TaskRun

    The API metadata for this task run

    Source code in prefect/context.py
    class TaskRunContext(RunContext):\n    \"\"\"\n    The context for a task run. Data in this context is only available from within a\n    task run function.\n\n    Attributes:\n        task: The task instance associated with the task run\n        task_run: The API metadata for this task run\n    \"\"\"\n\n    task: \"Task\"\n    task_run: TaskRun\n    log_prints: bool = False\n    parameters: Dict[str, Any]\n\n    # Result handling\n    result_factory: ResultFactory\n\n    __var__ = ContextVar(\"task_run\")\n
    ","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.get_run_context","title":"get_run_context","text":"

    Get the current run context from within a task or flow function.

    Returns:

    Type Description Union[FlowRunContext, TaskRunContext]

    A FlowRunContext or TaskRunContext depending on the function type.

    Raises RuntimeError if called outside of a flow or task run.

    Source code in prefect/context.py
    def get_run_context() -> Union[FlowRunContext, TaskRunContext]:\n    \"\"\"\n    Get the current run context from within a task or flow function.\n\n    Returns:\n        A `FlowRunContext` or `TaskRunContext` depending on the function type.\n\n    Raises\n        RuntimeError: If called outside of a flow or task run.\n    \"\"\"\n    task_run_ctx = TaskRunContext.get()\n    if task_run_ctx:\n        return task_run_ctx\n\n    flow_run_ctx = FlowRunContext.get()\n    if flow_run_ctx:\n        return flow_run_ctx\n\n    raise MissingContextError(\n        \"No run context available. You are not in a flow or task run context.\"\n    )\n
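    A sketch showing that the same accessor returns a TaskRunContext inside tasks and a FlowRunContext inside flows:

```python
from prefect import flow, task
from prefect.context import get_run_context


@task
def where_am_i():
    return get_run_context().task_run.name      # TaskRunContext here


@flow
def my_flow():
    print(get_run_context().flow_run.name)      # FlowRunContext here
    print(where_am_i())


my_flow()
```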
    ","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.get_settings_context","title":"get_settings_context","text":"

    Get the current settings context which contains profile information and the settings that are being used.

    Generally, the settings that are being used are a combination of values from the profile and environment. See prefect.context.use_profile for more details.

    Source code in prefect/context.py
    def get_settings_context() -> SettingsContext:\n    \"\"\"\n    Get the current settings context which contains profile information and the\n    settings that are being used.\n\n    Generally, the settings that are being used are a combination of values from the\n    profile and environment. See `prefect.context.use_profile` for more details.\n    \"\"\"\n    settings_ctx = SettingsContext.get()\n\n    if not settings_ctx:\n        raise MissingContextError(\"No settings context found.\")\n\n    return settings_ctx\n
    ","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.registry_from_script","title":"registry_from_script","text":"

    Return a fresh registry with instances populated from execution of a script.

    Source code in prefect/context.py
    def registry_from_script(\n    path: str,\n    block_code_execution: bool = True,\n    capture_failures: bool = True,\n) -> PrefectObjectRegistry:\n    \"\"\"\n    Return a fresh registry with instances populated from execution of a script.\n    \"\"\"\n    with PrefectObjectRegistry(\n        block_code_execution=block_code_execution,\n        capture_failures=capture_failures,\n    ) as registry:\n        load_script_as_module(path)\n\n    return registry\n
    ","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.root_settings_context","title":"root_settings_context","text":"

    Return the settings context that will exist as the root context for the module.

    The profile to use is determined with the following precedence: first the command line via 'prefect --profile <name>', then the environment variable 'PREFECT_PROFILE', then the profiles file via the 'active' key.

    Source code in prefect/context.py

    def root_settings_context():\n    \"\"\"\n    Return the settings context that will exist as the root context for the module.\n\n    The profile to use is determined with the following precedence\n    - Command line via 'prefect --profile <name>'\n    - Environment variable via 'PREFECT_PROFILE'\n    - Profiles file via the 'active' key\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    active_name = profiles.active_name\n    profile_source = \"in the profiles file\"\n\n    if \"PREFECT_PROFILE\" in os.environ:\n        active_name = os.environ[\"PREFECT_PROFILE\"]\n        profile_source = \"by environment variable\"\n\n    if (\n        sys.argv[0].endswith(\"/prefect\")\n        and len(sys.argv) >= 3\n        and sys.argv[1] == \"--profile\"\n    ):\n        active_name = sys.argv[2]\n        profile_source = \"by command line argument\"\n\n    if active_name not in profiles.names:\n        print(\n            (\n                f\"WARNING: Active profile {active_name!r} set {profile_source} not \"\n                \"found. The default profile will be used instead. \"\n            ),\n            file=sys.stderr,\n        )\n        active_name = \"default\"\n\n    with use_profile(\n        profiles[active_name],\n        # Override environment variables if the profile was set by the CLI\n        override_environment_variables=profile_source == \"by command line argument\",\n    ) as settings_context:\n        return settings_context\n
    ","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.tags","title":"tags","text":"

    Context manager to add tags to flow and task run calls.

    Tags are always combined with any existing tags.

    Yields:

    Type Description Set[str]

    The current set of tags

    Examples:

    >>> from prefect import tags, task, flow\n>>> @task\n>>> def my_task():\n>>>     pass\n

    Run a task with tags

    >>> @flow\n>>> def my_flow():\n>>>     with tags(\"a\", \"b\"):\n>>>         my_task()  # has tags: a, b\n

    Run a flow with tags

    >>> @flow\n>>> def my_flow():\n>>>     pass\n>>> with tags(\"a\", \"b\"):\n>>>     my_flow()  # has tags: a, b\n

    Run a task with nested tag contexts

    >>> @flow\n>>> def my_flow():\n>>>     with tags(\"a\", \"b\"):\n>>>         with tags(\"c\", \"d\"):\n>>>             my_task()  # has tags: a, b, c, d\n>>>         my_task()  # has tags: a, b\n

    Inspect the current tags

    >>> @flow\n>>> def my_flow():\n>>>     with tags(\"c\", \"d\"):\n>>>         with tags(\"e\", \"f\") as current_tags:\n>>>              print(current_tags)\n>>> with tags(\"a\", \"b\"):\n>>>     my_flow()\n{\"a\", \"b\", \"c\", \"d\", \"e\", \"f\"}\n
    Source code in prefect/context.py
    @contextmanager\ndef tags(*new_tags: str) -> Generator[Set[str], None, None]:\n    \"\"\"\n    Context manager to add tags to flow and task run calls.\n\n    Tags are always combined with any existing tags.\n\n    Yields:\n        The current set of tags\n\n    Examples:\n        >>> from prefect import tags, task, flow\n        >>> @task\n        >>> def my_task():\n        >>>     pass\n\n        Run a task with tags\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     with tags(\"a\", \"b\"):\n        >>>         my_task()  # has tags: a, b\n\n        Run a flow with tags\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     pass\n        >>> with tags(\"a\", \"b\"):\n        >>>     my_flow()  # has tags: a, b\n\n        Run a task with nested tag contexts\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     with tags(\"a\", \"b\"):\n        >>>         with tags(\"c\", \"d\"):\n        >>>             my_task()  # has tags: a, b, c, d\n        >>>         my_task()  # has tags: a, b\n\n        Inspect the current tags\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     with tags(\"c\", \"d\"):\n        >>>         with tags(\"e\", \"f\") as current_tags:\n        >>>              print(current_tags)\n        >>> with tags(\"a\", \"b\"):\n        >>>     my_flow()\n        {\"a\", \"b\", \"c\", \"d\", \"e\", \"f\"}\n    \"\"\"\n    current_tags = TagsContext.get().current_tags\n    new_tags = current_tags.union(new_tags)\n    with TagsContext(current_tags=new_tags):\n        yield new_tags\n
    ","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.use_profile","title":"use_profile","text":"

    Switch to a profile for the duration of this context.

    Profile contexts are confined to an async context in a single thread.

    Parameters:

    Name Type Description Default profile Union[Profile, str]

    The name of the profile to load or an instance of a Profile.

    required override_environment_variables bool

    If set, variables in the profile will take precedence over current environment variables. By default, environment variables will override profile settings.

    False include_current_context bool

    If set, the new settings will be constructed with the current settings context as a base. If not set, the base settings will be loaded from the environment and defaults.

    True

    Yields:

    Type Description

    The created SettingsContext object

    Source code in prefect/context.py
    @contextmanager\ndef use_profile(\n    profile: Union[Profile, str],\n    override_environment_variables: bool = False,\n    include_current_context: bool = True,\n):\n    \"\"\"\n    Switch to a profile for the duration of this context.\n\n    Profile contexts are confined to an async context in a single thread.\n\n    Args:\n        profile: The name of the profile to load or an instance of a Profile.\n        override_environment_variable: If set, variables in the profile will take\n            precedence over current environment variables. By default, environment\n            variables will override profile settings.\n        include_current_context: If set, the new settings will be constructed\n            with the current settings context as a base. If not set, the use_base settings\n            will be loaded from the environment and defaults.\n\n    Yields:\n        The created `SettingsContext` object\n    \"\"\"\n    if isinstance(profile, str):\n        profiles = prefect.settings.load_profiles()\n        profile = profiles[profile]\n\n    if not isinstance(profile, Profile):\n        raise TypeError(\n            f\"Unexpected type {type(profile).__name__!r} for `profile`. \"\n            \"Expected 'str' or 'Profile'.\"\n        )\n\n    # Create a copy of the profiles settings as we will mutate it\n    profile_settings = profile.settings.copy()\n\n    existing_context = SettingsContext.get()\n    if existing_context and include_current_context:\n        settings = existing_context.settings\n    else:\n        settings = prefect.settings.get_settings_from_env()\n\n    if not override_environment_variables:\n        for key in os.environ:\n            if key in prefect.settings.SETTING_VARIABLES:\n                profile_settings.pop(prefect.settings.SETTING_VARIABLES[key], None)\n\n    new_settings = settings.copy_with_update(updates=profile_settings)\n\n    with SettingsContext(profile=profile, settings=new_settings) as ctx:\n        yield ctx\n
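    A sketch of switching profiles temporarily; the profile name "staging" is illustrative and must exist in your profiles file:

```python
from prefect.context import use_profile
from prefect.settings import PREFECT_API_URL

with use_profile("staging") as ctx:
    print(ctx.profile.name)          # "staging"
    print(PREFECT_API_URL.value())   # setting value resolved under that profile
```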
    ","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/engine/","title":"prefect.engine","text":"","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine","title":"prefect.engine","text":"

    Client-side execution and orchestration of flows and tasks.

    ","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine--engine-process-overview","title":"Engine process overview","text":"","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine--flows","title":"Flows","text":"
    • The flow is called by the user or an existing flow run is executed in a new process.

      See Flow.__call__ and prefect.engine.__main__ (python -m prefect.engine)

    • A synchronous function acts as an entrypoint to the engine. The engine executes on a dedicated \"global loop\" thread. For asynchronous flow calls, we return a coroutine from the entrypoint so the user can enter the engine without blocking their event loop.

      See enter_flow_run_engine_from_flow_call, enter_flow_run_engine_from_subprocess

    • The thread that calls the entrypoint waits until orchestration of the flow run completes. This thread is referred to as the \"user\" thread and is usually the \"main\" thread. The thread is not blocked while waiting \u2014 it allows the engine to send work back to it. This allows us to send calls back to the user thread from the global loop thread.

      See wait_for_call_in_loop_thread and call_soon_in_waiting_thread

    • The asynchronous engine branches depending on if the flow run exists already and if there is a parent flow run in the current context.

      See create_then_begin_flow_run, create_and_begin_subflow_run, and retrieve_flow_then_begin_flow_run

    • The asynchronous engine prepares for execution of the flow run. This includes starting the task runner, preparing context, etc.

      See begin_flow_run

    • The flow run is orchestrated through states, calling the user's function as necessary. Generally, the user's function is sent for execution on the user thread. If the flow function cannot be safely executed on the user thread, e.g. it is a synchronous child in an asynchronous parent, it will be scheduled on a worker thread instead.

      See orchestrate_flow_run, call_soon_in_waiting_thread, call_soon_in_new_thread

    ","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine--tasks","title":"Tasks","text":"
    • The task is called or submitted by the user. We require that this is always within a flow.

      See Task.__call__ and Task.submit

    • A synchronous function acts as an entrypoint to the engine. Unlike flow calls, this will not block until completion if submit was used.

      See enter_task_run_engine

    • A future is created for the task call. Creation of the task run and submission to the task runner is scheduled as a background task so submission of many tasks can occur concurrently.

      See create_task_run_future and create_task_run_then_submit

    • The engine branches depending on if a future, state, or result is requested. If a future is requested, it is returned immediately to the user thread. Otherwise, the engine will wait for the task run to complete and return the final state or result.

      See get_task_call_return_value

    • An engine function is submitted to the task runner. The task runner will schedule this function for execution on a worker. When executed, it will prepare for orchestration and wait for completion of the run.

      See create_task_run_then_submit and begin_task_run

    • The task run is orchestrated through states, calling the user's function as necessary. The user's function is always executed in a worker thread for isolation.

      See orchestrate_task_run, call_soon_in_new_thread

      Ideally, for local and sequential task runners we would send the task run to the user thread as we do for flows. See #9855.

    ","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.begin_flow_run","title":"begin_flow_run async","text":"

    Begins execution of a flow run; blocks until completion of the flow run

    • Starts a task runner
    • Determines the result storage block to use
    • Orchestrates the flow run (runs the user-function and generates tasks)
    • Waits for tasks to complete / shuts down the task runner
    • Sets a terminal state for the flow run

    Note that the flow_run contains a parameters attribute which is the serialized parameters sent to the backend, while the parameters argument here should be the deserialized and validated dictionary of Python objects.

    Returns:

    Type Description State

    The final state of the run

    Source code in prefect/engine.py
    async def begin_flow_run(\n    flow: Flow,\n    flow_run: FlowRun,\n    parameters: Dict[str, Any],\n    client: PrefectClient,\n    user_thread: threading.Thread,\n) -> State:\n    \"\"\"\n    Begins execution of a flow run; blocks until completion of the flow run\n\n    - Starts a task runner\n    - Determines the result storage block to use\n    - Orchestrates the flow run (runs the user-function and generates tasks)\n    - Waits for tasks to complete / shutsdown the task runner\n    - Sets a terminal state for the flow run\n\n    Note that the `flow_run` contains a `parameters` attribute which is the serialized\n    parameters sent to the backend while the `parameters` argument here should be the\n    deserialized and validated dictionary of python objects.\n\n    Returns:\n        The final state of the run\n    \"\"\"\n    logger = flow_run_logger(flow_run, flow)\n\n    log_prints = should_log_prints(flow)\n    flow_run_context = FlowRunContext.construct(log_prints=log_prints)\n\n    async with AsyncExitStack() as stack:\n        await stack.enter_async_context(\n            report_flow_run_crashes(flow_run=flow_run, client=client, flow=flow)\n        )\n\n        # Create a task group for background tasks\n        flow_run_context.background_tasks = await stack.enter_async_context(\n            anyio.create_task_group()\n        )\n\n        # If the flow is async, we need to provide a portal so sync tasks can run\n        flow_run_context.sync_portal = (\n            stack.enter_context(start_blocking_portal()) if flow.isasync else None\n        )\n\n        task_runner = flow.task_runner.duplicate()\n        if task_runner is NotImplemented:\n            # Backwards compatibility; will not support concurrent flow runs\n            task_runner = flow.task_runner\n            logger.warning(\n                f\"Task runner {type(task_runner).__name__!r} does not implement the\"\n                \" `duplicate` method and will fail if used for concurrent execution of\"\n                \" the same flow.\"\n            )\n\n        logger.debug(\n            f\"Starting {type(flow.task_runner).__name__!r}; submitted tasks \"\n            f\"will be run {CONCURRENCY_MESSAGES[flow.task_runner.concurrency_type]}...\"\n        )\n\n        flow_run_context.task_runner = await stack.enter_async_context(\n            task_runner.start()\n        )\n\n        flow_run_context.result_factory = await ResultFactory.from_flow(\n            flow, client=client\n        )\n\n        if log_prints:\n            stack.enter_context(patch_print())\n\n        terminal_or_paused_state = await orchestrate_flow_run(\n            flow,\n            flow_run=flow_run,\n            parameters=parameters,\n            wait_for=None,\n            client=client,\n            partial_flow_run_context=flow_run_context,\n            # Orchestration needs to be interruptible if it has a timeout\n            interruptible=flow.timeout_seconds is not None,\n            user_thread=user_thread,\n        )\n\n    if terminal_or_paused_state.is_paused():\n        timeout = terminal_or_paused_state.state_details.pause_timeout\n        msg = \"Currently paused and suspending execution.\"\n        if timeout:\n            msg += f\" Resume before {timeout.to_rfc3339_string()} to finish execution.\"\n        logger.log(level=logging.INFO, msg=msg)\n        await APILogHandler.aflush()\n\n        return terminal_or_paused_state\n    else:\n        terminal_state = terminal_or_paused_state\n\n    # If debugging, use the 
more complete `repr` than the usual `str` description\n    display_state = repr(terminal_state) if PREFECT_DEBUG_MODE else str(terminal_state)\n\n    logger.log(\n        level=logging.INFO if terminal_state.is_completed() else logging.ERROR,\n        msg=f\"Finished in state {display_state}\",\n    )\n\n    # When a \"root\" flow run finishes, flush logs so we do not have to rely on handling\n    # during interpreter shutdown\n    await APILogHandler.aflush()\n\n    return terminal_state\n
    ","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.begin_task_map","title":"begin_task_map async","text":"

    Async entrypoint for task mapping
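    For orientation, a brief usage sketch (not from the engine source): calling `.map()` on a task inside a flow is the user-facing pattern that routes through this entrypoint, fanning out one task run per element of the iterable while `unmapped` values are passed whole to every child run.

    from prefect import flow, task, unmapped

    @task
    def add(x: int, y: int) -> int:
        return x + y

    @flow
    def mapped_flow() -> list:
        # One child task run per element of the iterable; unmapped() marks a
        # static parameter that is passed whole to every child run.
        futures = add.map([1, 2, 3], y=unmapped(10))
        return [f.result() for f in futures]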

    Source code in prefect/engine.py
    async def begin_task_map(\n    task: Task,\n    flow_run_context: Optional[FlowRunContext],\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    return_type: EngineReturnType,\n    task_runner: Optional[BaseTaskRunner],\n    autonomous: bool = False,\n) -> List[Union[PrefectFuture, Awaitable[PrefectFuture], TaskRun]]:\n    \"\"\"Async entrypoint for task mapping\"\"\"\n    # We need to resolve some futures to map over their data, collect the upstream\n    # links beforehand to retain relationship tracking.\n    task_inputs = {\n        k: await collect_task_run_inputs(v, max_depth=0) for k, v in parameters.items()\n    }\n\n    # Resolve the top-level parameters in order to get mappable data of a known length.\n    # Nested parameters will be resolved in each mapped child where their relationships\n    # will also be tracked.\n    parameters = await resolve_inputs(parameters, max_depth=1)\n\n    # Ensure that any parameters in kwargs are expanded before this check\n    parameters = explode_variadic_parameter(task.fn, parameters)\n\n    iterable_parameters = {}\n    static_parameters = {}\n    annotated_parameters = {}\n    for key, val in parameters.items():\n        if isinstance(val, (allow_failure, quote)):\n            # Unwrap annotated parameters to determine if they are iterable\n            annotated_parameters[key] = val\n            val = val.unwrap()\n\n        if isinstance(val, unmapped):\n            static_parameters[key] = val.value\n        elif isiterable(val):\n            iterable_parameters[key] = list(val)\n        else:\n            static_parameters[key] = val\n\n    if not len(iterable_parameters):\n        raise MappingMissingIterable(\n            \"No iterable parameters were received. Parameters for map must \"\n            f\"include at least one iterable. Parameters: {parameters}\"\n        )\n\n    iterable_parameter_lengths = {\n        key: len(val) for key, val in iterable_parameters.items()\n    }\n    lengths = set(iterable_parameter_lengths.values())\n    if len(lengths) > 1:\n        raise MappingLengthMismatch(\n            \"Received iterable parameters with different lengths. Parameters for map\"\n            f\" must all be the same length. 
Got lengths: {iterable_parameter_lengths}\"\n        )\n\n    map_length = list(lengths)[0]\n\n    task_runs = []\n    for i in range(map_length):\n        call_parameters = {key: value[i] for key, value in iterable_parameters.items()}\n        call_parameters.update({key: value for key, value in static_parameters.items()})\n\n        # Add default values for parameters; these are skipped earlier since they should\n        # not be mapped over\n        for key, value in get_parameter_defaults(task.fn).items():\n            call_parameters.setdefault(key, value)\n\n        # Re-apply annotations to each key again\n        for key, annotation in annotated_parameters.items():\n            call_parameters[key] = annotation.rewrap(call_parameters[key])\n\n        # Collapse any previously exploded kwargs\n        call_parameters = collapse_variadic_parameters(task.fn, call_parameters)\n\n        if autonomous:\n            task_runs.append(\n                await create_autonomous_task_run(\n                    task=task,\n                    parameters=call_parameters,\n                )\n            )\n        else:\n            task_runs.append(\n                partial(\n                    get_task_call_return_value,\n                    task=task,\n                    flow_run_context=flow_run_context,\n                    parameters=call_parameters,\n                    wait_for=wait_for,\n                    return_type=return_type,\n                    task_runner=task_runner,\n                    extra_task_inputs=task_inputs,\n                )\n            )\n\n    if autonomous:\n        return task_runs\n\n    # Maintain the order of the task runs when using the sequential task runner\n    runner = task_runner if task_runner else flow_run_context.task_runner\n    if runner.concurrency_type == TaskConcurrencyType.SEQUENTIAL:\n        return [await task_run() for task_run in task_runs]\n\n    return await gather(*task_runs)\n
    ","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.begin_task_run","title":"begin_task_run async","text":"

    Entrypoint for task run execution.

    This function is intended for submission to the task runner.

    This method may be called from a worker, so we ensure the settings context has been entered. For example, with a runner that is executing tasks in the same event loop, we will likely not enter the context again because the current context already matches:

    main thread:
    --> Flow called with settings A
    --> begin_task_run executes in the same event loop
    --> Profile A matches and is not entered again

    However, when executing in a remote environment, we need to ensure the settings for the task run are respected by entering the context:

    main thread:
    --> Flow called with settings A
    --> begin_task_run is scheduled on a remote worker, settings A is serialized

    remote worker:
    --> Remote worker imports Prefect (may not occur)
    --> Global settings are loaded with default settings
    --> begin_task_run executes on a different event loop than the flow
    --> Current settings are not set or do not match, settings A is entered
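    As a rough illustration of the check described above (a simplified sketch, not the engine's exact code path), entering the serialized settings context only when the current one differs might look like:

    from contextlib import ExitStack

    import prefect.context
    from prefect.logging.configuration import setup_logging

    def enter_settings_if_needed(stack: ExitStack, settings) -> None:
        # On a remote worker the settings context may be absent or different, so
        # enter the serialized settings and re-configure logging to match them.
        if prefect.context.SettingsContext.get() != settings:
            stack.enter_context(settings)
            setup_logging()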

    Source code in prefect/engine.py
    async def begin_task_run(\n    task: Task,\n    task_run: TaskRun,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    result_factory: ResultFactory,\n    log_prints: bool,\n    settings: prefect.context.SettingsContext,\n):\n    \"\"\"\n    Entrypoint for task run execution.\n\n    This function is intended for submission to the task runner.\n\n    This method may be called from a worker so we ensure the settings context has been\n    entered. For example, with a runner that is executing tasks in the same event loop,\n    we will likely not enter the context again because the current context already\n    matches:\n\n    main thread:\n    --> Flow called with settings A\n    --> `begin_task_run` executes same event loop\n    --> Profile A matches and is not entered again\n\n    However, with execution on a remote environment, we are going to need to ensure the\n    settings for the task run are respected by entering the context:\n\n    main thread:\n    --> Flow called with settings A\n    --> `begin_task_run` is scheduled on a remote worker, settings A is serialized\n    remote worker:\n    --> Remote worker imports Prefect (may not occur)\n    --> Global settings is loaded with default settings\n    --> `begin_task_run` executes on a different event loop than the flow\n    --> Current settings is not set or does not match, settings A is entered\n    \"\"\"\n    maybe_flow_run_context = prefect.context.FlowRunContext.get()\n\n    async with AsyncExitStack() as stack:\n        # The settings context may be null on a remote worker so we use the safe `.get`\n        # method and compare it to the settings required for this task run\n        if prefect.context.SettingsContext.get() != settings:\n            stack.enter_context(settings)\n            setup_logging()\n\n        if maybe_flow_run_context:\n            # Accessible if on a worker that is running in the same thread as the flow\n            client = maybe_flow_run_context.client\n            # Only run the task in an interruptible thread if it in the same thread as\n            # the flow _and_ the flow run has a timeout attached. 
If the task is on a\n            # worker, the flow run timeout will not be raised in the worker process.\n            interruptible = maybe_flow_run_context.timeout_scope is not None\n        else:\n            # Otherwise, retrieve a new clien`t\n            client = await stack.enter_async_context(get_client())\n            interruptible = False\n            await stack.enter_async_context(anyio.create_task_group())\n\n        await stack.enter_async_context(report_task_run_crashes(task_run, client))\n\n        # TODO: Use the background tasks group to manage logging for this task\n\n        if log_prints:\n            stack.enter_context(patch_print())\n\n        await check_api_reachable(\n            client, f\"Cannot orchestrate task run '{task_run.id}'\"\n        )\n        try:\n            state = await orchestrate_task_run(\n                task=task,\n                task_run=task_run,\n                parameters=parameters,\n                wait_for=wait_for,\n                result_factory=result_factory,\n                log_prints=log_prints,\n                interruptible=interruptible,\n                client=client,\n            )\n\n            if not maybe_flow_run_context:\n                # When a a task run finishes on a remote worker flush logs to prevent\n                # loss if the process exits\n                await APILogHandler.aflush()\n\n        except Abort as abort:\n            # Task run probably already completed, fetch its state\n            task_run = await client.read_task_run(task_run.id)\n\n            if task_run.state.is_final():\n                task_run_logger(task_run).info(\n                    f\"Task run '{task_run.id}' already finished.\"\n                )\n            else:\n                # TODO: This is a concerning case; we should determine when this occurs\n                #       1. This can occur when the flow run is not in a running state\n                task_run_logger(task_run).warning(\n                    f\"Task run '{task_run.id}' received abort during orchestration: \"\n                    f\"{abort} Task run is in {task_run.state.type.value} state.\"\n                )\n            state = task_run.state\n\n        except Pause:\n            # A pause signal here should mean the flow run suspended, so we\n            # should do the same. We'll look up the flow run's pause state to\n            # try and reuse it, so we capture any data like timeouts.\n            flow_run = await client.read_flow_run(task_run.flow_run_id)\n            if flow_run.state and flow_run.state.is_paused():\n                state = flow_run.state\n            else:\n                state = Suspended()\n\n            task_run_logger(task_run).info(\n                \"Task run encountered a pause signal during orchestration.\"\n            )\n\n        return state\n
    ","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.create_and_begin_subflow_run","title":"create_and_begin_subflow_run async","text":"

    Async entrypoint for flow calls within a flow run

    Subflows differ from parent flows in that they:

    - Resolve futures in passed parameters into values
    - Create a dummy task for representation in the parent flow
    - Retrieve default result storage from the parent flow rather than the server

    Returns:

    Any: The final state of the run
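    For context, a minimal user-facing example of the pattern that reaches this entrypoint: calling one flow from inside another creates a subflow run, represented by a dummy task in the parent flow.

    from prefect import flow

    @flow
    def child_flow(x: int) -> int:
        return x * 2

    @flow
    def parent_flow() -> int:
        # The call below becomes a subflow run; futures in its parameters are
        # resolved and its terminal state is tracked by the parent flow.
        return child_flow(21)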

    Source code in prefect/engine.py
    @inject_client\nasync def create_and_begin_subflow_run(\n    flow: Flow,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    return_type: EngineReturnType,\n    client: PrefectClient,\n    user_thread: threading.Thread,\n) -> Any:\n    \"\"\"\n    Async entrypoint for flows calls within a flow run\n\n    Subflows differ from parent flows in that they\n    - Resolve futures in passed parameters into values\n    - Create a dummy task for representation in the parent flow\n    - Retrieve default result storage from the parent flow rather than the server\n\n    Returns:\n        The final state of the run\n    \"\"\"\n    parent_flow_run_context = FlowRunContext.get()\n    parent_logger = get_run_logger(parent_flow_run_context)\n    log_prints = should_log_prints(flow)\n    terminal_state = None\n\n    parent_logger.debug(f\"Resolving inputs to {flow.name!r}\")\n    task_inputs = {k: await collect_task_run_inputs(v) for k, v in parameters.items()}\n\n    if wait_for:\n        task_inputs[\"wait_for\"] = await collect_task_run_inputs(wait_for)\n\n    rerunning = (\n        parent_flow_run_context.flow_run.run_count > 1\n        if getattr(parent_flow_run_context, \"flow_run\", None)\n        else False\n    )\n\n    # Generate a task in the parent flow run to represent the result of the subflow run\n    dummy_task = Task(name=flow.name, fn=flow.fn, version=flow.version)\n    parent_task_run = await client.create_task_run(\n        task=dummy_task,\n        flow_run_id=(\n            parent_flow_run_context.flow_run.id\n            if getattr(parent_flow_run_context, \"flow_run\", None)\n            else None\n        ),\n        dynamic_key=_dynamic_key_for_task_run(parent_flow_run_context, dummy_task),\n        task_inputs=task_inputs,\n        state=Pending(),\n    )\n\n    # Resolve any task futures in the input\n    parameters = await resolve_inputs(parameters)\n\n    if parent_task_run.state.is_final() and not (\n        rerunning and not parent_task_run.state.is_completed()\n    ):\n        # Retrieve the most recent flow run from the database\n        flow_runs = await client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                parent_task_run_id={\"any_\": [parent_task_run.id]}\n            ),\n            sort=FlowRunSort.EXPECTED_START_TIME_ASC,\n        )\n        flow_run = flow_runs[-1]\n\n        # Set up variables required downstream\n        terminal_state = flow_run.state\n        logger = flow_run_logger(flow_run, flow)\n\n    else:\n        flow_run = await client.create_flow_run(\n            flow,\n            parameters=flow.serialize_parameters(parameters),\n            parent_task_run_id=parent_task_run.id,\n            state=parent_task_run.state if not rerunning else Pending(),\n            tags=TagsContext.get().current_tags,\n        )\n\n        parent_logger.info(\n            f\"Created subflow run {flow_run.name!r} for flow {flow.name!r}\"\n        )\n\n        logger = flow_run_logger(flow_run, flow)\n        ui_url = PREFECT_UI_URL.value()\n        if ui_url:\n            logger.info(\n                f\"View at {ui_url}/flow-runs/flow-run/{flow_run.id}\",\n                extra={\"send_to_api\": False},\n            )\n\n        result_factory = await ResultFactory.from_flow(\n            flow, client=parent_flow_run_context.client\n        )\n\n        if flow.should_validate_parameters:\n            try:\n                parameters = flow.validate_parameters(parameters)\n            except 
Exception:\n                message = \"Validation of flow parameters failed with error:\"\n                logger.exception(message)\n                terminal_state = await propose_state(\n                    client,\n                    state=await exception_to_failed_state(\n                        message=message, result_factory=result_factory\n                    ),\n                    flow_run_id=flow_run.id,\n                )\n\n        if terminal_state is None or not terminal_state.is_final():\n            async with AsyncExitStack() as stack:\n                await stack.enter_async_context(\n                    report_flow_run_crashes(flow_run=flow_run, client=client, flow=flow)\n                )\n\n                task_runner = flow.task_runner.duplicate()\n                if task_runner is NotImplemented:\n                    # Backwards compatibility; will not support concurrent flow runs\n                    task_runner = flow.task_runner\n                    logger.warning(\n                        f\"Task runner {type(task_runner).__name__!r} does not implement\"\n                        \" the `duplicate` method and will fail if used for concurrent\"\n                        \" execution of the same flow.\"\n                    )\n\n                await stack.enter_async_context(task_runner.start())\n\n                if log_prints:\n                    stack.enter_context(patch_print())\n\n                terminal_state = await orchestrate_flow_run(\n                    flow,\n                    flow_run=flow_run,\n                    parameters=parameters,\n                    wait_for=wait_for,\n                    # If the parent flow run has a timeout, then this one needs to be\n                    # interruptible as well\n                    interruptible=parent_flow_run_context.timeout_scope is not None,\n                    client=client,\n                    partial_flow_run_context=FlowRunContext.construct(\n                        sync_portal=parent_flow_run_context.sync_portal,\n                        task_runner=task_runner,\n                        background_tasks=parent_flow_run_context.background_tasks,\n                        result_factory=result_factory,\n                        log_prints=log_prints,\n                    ),\n                    user_thread=user_thread,\n                )\n\n    # Display the full state (including the result) if debugging\n    display_state = repr(terminal_state) if PREFECT_DEBUG_MODE else str(terminal_state)\n    logger.log(\n        level=logging.INFO if terminal_state.is_completed() else logging.ERROR,\n        msg=f\"Finished in state {display_state}\",\n    )\n\n    # Track the subflow state so the parent flow can use it to determine its final state\n    parent_flow_run_context.flow_run_states.append(terminal_state)\n\n    if return_type == \"state\":\n        return terminal_state\n    elif return_type == \"result\":\n        return await terminal_state.result(fetch=True)\n    else:\n        raise ValueError(f\"Invalid return type for flow engine {return_type!r}.\")\n
    ","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.create_autonomous_task_run","title":"create_autonomous_task_run async","text":"

    Create a task run in the API for an autonomous task submission and store the provided parameters using the existing result storage mechanism.
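    A hedged usage sketch: autonomous task runs are experimental and, per the engine's own guidance, require the PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING setting to be enabled.

    # Sketch only: run `prefect config set PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING=true` first.
    from prefect import task

    @task
    def say_hello(name: str) -> str:
        return f"Hello, {name}!"

    if __name__ == "__main__":
        # Submitting a task outside of any flow creates an autonomous task run;
        # its parameters are stored via the result storage mechanism.
        say_hello.submit("Marvin")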

    Source code in prefect/engine.py
    async def create_autonomous_task_run(task: Task, parameters: Dict[str, Any]) -> TaskRun:\n    \"\"\"Create a task run in the API for an autonomous task submission and store\n    the provided parameters using the existing result storage mechanism.\n    \"\"\"\n    async with get_client() as client:\n        state = Scheduled()\n        if parameters:\n            parameters_id = uuid4()\n            state.state_details.task_parameters_id = parameters_id\n\n            # TODO: Improve use of result storage for parameter storage / reference\n            task.persist_result = True\n\n            factory = await ResultFactory.from_autonomous_task(task, client=client)\n            await factory.store_parameters(parameters_id, parameters)\n\n        task_run = await client.create_task_run(\n            task=task,\n            flow_run_id=None,\n            dynamic_key=f\"{task.task_key}-{str(uuid4())[:NUM_CHARS_DYNAMIC_KEY]}\",\n            state=state,\n        )\n\n        engine_logger.debug(f\"Submitted run of task {task.name!r} for execution\")\n\n    return task_run\n
    ","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.create_then_begin_flow_run","title":"create_then_begin_flow_run async","text":"

    Async entrypoint for flow calls

    Creates the flow run in the backend, then enters the main flow run engine.
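    A small illustrative example (assuming a standard flow definition): calling a flow with return_state=True surfaces the State object this entrypoint produces rather than the flow's return value.

    from prefect import flow

    @flow
    def add(x: int, y: int) -> int:
        return x + y

    # return_state=True returns the terminal State object rather than the result;
    # per the source below, parameter-validation errors instead yield a Failed state.
    state = add(1, 2, return_state=True)
    assert state.is_completed()
    print(state.result())  # 3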

    Source code in prefect/engine.py
    @inject_client\nasync def create_then_begin_flow_run(\n    flow: Flow,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    return_type: EngineReturnType,\n    client: PrefectClient,\n    user_thread: threading.Thread,\n) -> Any:\n    \"\"\"\n    Async entrypoint for flow calls\n\n    Creates the flow run in the backend, then enters the main flow run engine.\n    \"\"\"\n    # TODO: Returns a `State` depending on `return_type` and we can add an overload to\n    #       the function signature to clarify this eventually.\n\n    await check_api_reachable(client, \"Cannot create flow run\")\n\n    state = Pending()\n    if flow.should_validate_parameters:\n        try:\n            parameters = flow.validate_parameters(parameters)\n        except Exception:\n            state = await exception_to_failed_state(\n                message=\"Validation of flow parameters failed with error:\"\n            )\n\n    flow_run = await client.create_flow_run(\n        flow,\n        # Send serialized parameters to the backend\n        parameters=flow.serialize_parameters(parameters),\n        state=state,\n        tags=TagsContext.get().current_tags,\n    )\n\n    engine_logger.info(f\"Created flow run {flow_run.name!r} for flow {flow.name!r}\")\n\n    logger = flow_run_logger(flow_run, flow)\n\n    ui_url = PREFECT_UI_URL.value()\n    if ui_url:\n        logger.info(\n            f\"View at {ui_url}/flow-runs/flow-run/{flow_run.id}\",\n            extra={\"send_to_api\": False},\n        )\n\n    if state.is_failed():\n        logger.error(state.message)\n        engine_logger.info(\n            f\"Flow run {flow_run.name!r} received invalid parameters and is marked as\"\n            \" failed.\"\n        )\n    else:\n        state = await begin_flow_run(\n            flow=flow,\n            flow_run=flow_run,\n            parameters=parameters,\n            client=client,\n            user_thread=user_thread,\n        )\n\n    if return_type == \"state\":\n        return state\n    elif return_type == \"result\":\n        return await state.result(fetch=True)\n    else:\n        raise ValueError(f\"Invalid return type for flow engine {return_type!r}.\")\n
    ","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.enter_flow_run_engine_from_flow_call","title":"enter_flow_run_engine_from_flow_call","text":"

    Sync entrypoint for flow calls.

    This function does the heavy lifting of ensuring we can get into an async context for flow run execution with minimal overhead.
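    To illustrate the sync/async split described above (a usage sketch, not engine internals): for an async flow this entrypoint hands back an awaitable, while a sync flow blocks until a terminal state is reached.

    import asyncio

    from prefect import flow

    @flow
    async def async_flow() -> str:
        return "done"

    # The caller awaits the coroutine returned for an async flow; a sync flow
    # called the same way would instead block until the final state is reached.
    result = asyncio.run(async_flow())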

    Source code in prefect/engine.py
    def enter_flow_run_engine_from_flow_call(\n    flow: Flow,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    return_type: EngineReturnType,\n) -> Union[State, Awaitable[State]]:\n    \"\"\"\n    Sync entrypoint for flow calls.\n\n    This function does the heavy lifting of ensuring we can get into an async context\n    for flow run execution with minimal overhead.\n    \"\"\"\n    setup_logging()\n\n    registry = PrefectObjectRegistry.get()\n    if registry and registry.block_code_execution:\n        engine_logger.warning(\n            f\"Script loading is in progress, flow {flow.name!r} will not be executed.\"\n            \" Consider updating the script to only call the flow if executed\"\n            f' directly:\\n\\n\\tif __name__ == \"__main__\":\\n\\t\\t{flow.fn.__name__}()'\n        )\n        return None\n\n    parent_flow_run_context = FlowRunContext.get()\n    is_subflow_run = parent_flow_run_context is not None\n\n    if wait_for is not None and not is_subflow_run:\n        raise ValueError(\"Only flows run as subflows can wait for dependencies.\")\n\n    begin_run = create_call(\n        create_and_begin_subflow_run if is_subflow_run else create_then_begin_flow_run,\n        flow=flow,\n        parameters=parameters,\n        wait_for=wait_for,\n        return_type=return_type,\n        client=parent_flow_run_context.client if is_subflow_run else None,\n        user_thread=threading.current_thread(),\n    )\n\n    # On completion of root flows, wait for the global thread to ensure that\n    # any work there is complete\n    done_callbacks = (\n        [create_call(wait_for_global_loop_exit)] if not is_subflow_run else None\n    )\n\n    # WARNING: You must define any context managers here to pass to our concurrency\n    # api instead of entering them in here in the engine entrypoint. Otherwise, async\n    # flows will not use the context as this function _exits_ to return an awaitable to\n    # the user. Generally, you should enter contexts _within_ the async `begin_run`\n    # instead but if you need to enter a context from the main thread you'll need to do\n    # it here.\n    contexts = [capture_sigterm()]\n\n    if flow.isasync and (\n        not is_subflow_run or (is_subflow_run and parent_flow_run_context.flow.isasync)\n    ):\n        # return a coro for the user to await if the flow is async\n        # unless it is an async subflow called in a sync flow\n        retval = from_async.wait_for_call_in_loop_thread(\n            begin_run,\n            done_callbacks=done_callbacks,\n            contexts=contexts,\n        )\n\n    else:\n        retval = from_sync.wait_for_call_in_loop_thread(\n            begin_run,\n            done_callbacks=done_callbacks,\n            contexts=contexts,\n        )\n\n    return retval\n
    ","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.enter_flow_run_engine_from_subprocess","title":"enter_flow_run_engine_from_subprocess","text":"

    Sync entrypoint for flow runs that have been submitted for execution by an agent

    Differs from enter_flow_run_engine_from_flow_call in that we have a flow run id but not a flow object. The flow must be retrieved before execution can begin. Additionally, this assumes that the caller is always in a context without an event loop as this should be called from a fresh process.
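    A hypothetical sketch of how an agent-style caller might invoke this in a fresh process; the UUID below is a placeholder, not a real flow run id.

    from uuid import UUID

    from prefect.engine import enter_flow_run_engine_from_subprocess

    # Placeholder id: in practice this comes from a flow run created by a deployment.
    state = enter_flow_run_engine_from_subprocess(
        UUID("00000000-0000-0000-0000-000000000000")
    )
    print(state.type)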

    Source code in prefect/engine.py
    def enter_flow_run_engine_from_subprocess(flow_run_id: UUID) -> State:\n    \"\"\"\n    Sync entrypoint for flow runs that have been submitted for execution by an agent\n\n    Differs from `enter_flow_run_engine_from_flow_call` in that we have a flow run id\n    but not a flow object. The flow must be retrieved before execution can begin.\n    Additionally, this assumes that the caller is always in a context without an event\n    loop as this should be called from a fresh process.\n    \"\"\"\n\n    # Ensure collections are imported and have the opportunity to register types before\n    # loading the user code from the deployment\n    prefect.plugins.load_prefect_collections()\n\n    setup_logging()\n\n    state = from_sync.wait_for_call_in_loop_thread(\n        create_call(\n            retrieve_flow_then_begin_flow_run,\n            flow_run_id,\n            user_thread=threading.current_thread(),\n        ),\n        contexts=[capture_sigterm()],\n    )\n\n    APILogHandler.flush()\n    return state\n
    ","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.enter_task_run_engine","title":"enter_task_run_engine","text":"

    Sync entrypoint for task calls
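    For orientation, the user-facing calls that reach this entrypoint (a brief sketch): calling a task directly returns its result, while .submit() returns a PrefectFuture handled by the flow's task runner.

    from prefect import flow, task

    @task
    def double(x: int) -> int:
        return x * 2

    @flow
    def my_flow() -> int:
        # A plain call runs the task and returns its result; .submit() returns a
        # PrefectFuture executed by the flow's task runner.
        value = double(2)
        future = double.submit(value)
        return future.result()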

    Source code in prefect/engine.py
    def enter_task_run_engine(\n    task: Task,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    return_type: EngineReturnType,\n    task_runner: Optional[BaseTaskRunner],\n    mapped: bool,\n) -> Union[PrefectFuture, Awaitable[PrefectFuture], TaskRun]:\n    \"\"\"Sync entrypoint for task calls\"\"\"\n\n    flow_run_context = FlowRunContext.get()\n\n    if not flow_run_context:\n        if return_type == \"future\" or mapped:\n            raise RuntimeError(\n                \" If you meant to submit a background task, you need to set\"\n                \" `prefect config set PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING=true`\"\n                \" and use `your_task.submit()` instead of `your_task()`.\"\n            )\n        from prefect.task_engine import submit_autonomous_task_run_to_engine\n\n        return submit_autonomous_task_run_to_engine(\n            task=task,\n            task_run=None,\n            parameters=parameters,\n            task_runner=task_runner,\n            wait_for=wait_for,\n            return_type=return_type,\n            client=get_client(),\n        )\n\n    if flow_run_context.timeout_scope and flow_run_context.timeout_scope.cancel_called:\n        raise TimeoutError(\"Flow run timed out\")\n\n    begin_run = create_call(\n        begin_task_map if mapped else get_task_call_return_value,\n        task=task,\n        flow_run_context=flow_run_context,\n        parameters=parameters,\n        wait_for=wait_for,\n        return_type=return_type,\n        task_runner=task_runner,\n    )\n\n    if task.isasync and (\n        flow_run_context.flow is None or flow_run_context.flow.isasync\n    ):\n        # return a coro for the user to await if an async task in an async flow\n        return from_async.wait_for_call_in_loop_thread(begin_run)\n    else:\n        return from_sync.wait_for_call_in_loop_thread(begin_run)\n
    ","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.orchestrate_flow_run","title":"orchestrate_flow_run async","text":"

    Executes a flow run.

    Note on flow timeouts:

    Since async flows are run directly in the main event loop, timeout behavior will match that described by anyio. If the flow is awaiting something, it will return immediately; otherwise, it will exit the next time it awaits. Sync flows are run in a worker thread, which cannot be interrupted; the worker thread will exit at the next task call. The worker thread also has access to the status of the cancellation scope at FlowRunContext.timeout_scope.cancel_called, which allows it to raise a TimeoutError to respect the timeout.
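    To make the note concrete, a small sketch (flow and task names are illustrative): a sync flow that exceeds its timeout is detected at its next task call and finishes in a Failed state named "TimedOut", as shown in the source below.

    import time

    from prefect import flow, task

    @task
    def quick_task() -> None:
        pass

    @flow(timeout_seconds=5)
    def slow_sync_flow():
        # A sync flow runs in a worker thread that cannot be interrupted directly;
        # the timeout is honored at the next task call (or awaited point for async flows).
        time.sleep(10)
        quick_task()

    state = slow_sync_flow(return_state=True)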

    Returns:

    State: The final state of the run

    Source code in prefect/engine.py
    async def orchestrate_flow_run(\n    flow: Flow,\n    flow_run: FlowRun,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    interruptible: bool,\n    client: PrefectClient,\n    partial_flow_run_context: FlowRunContext,\n    user_thread: threading.Thread,\n) -> State:\n    \"\"\"\n    Executes a flow run.\n\n    Note on flow timeouts:\n        Since async flows are run directly in the main event loop, timeout behavior will\n        match that described by anyio. If the flow is awaiting something, it will\n        immediately return; otherwise, the next time it awaits it will exit. Sync flows\n        are being task runner in a worker thread, which cannot be interrupted. The worker\n        thread will exit at the next task call. The worker thread also has access to the\n        status of the cancellation scope at `FlowRunContext.timeout_scope.cancel_called`\n        which allows it to raise a `TimeoutError` to respect the timeout.\n\n    Returns:\n        The final state of the run\n    \"\"\"\n\n    logger = flow_run_logger(flow_run, flow)\n\n    flow_run_context = None\n    parent_flow_run_context = FlowRunContext.get()\n\n    try:\n        # Resolve futures in any non-data dependencies to ensure they are ready\n        if wait_for is not None:\n            await resolve_inputs({\"wait_for\": wait_for}, return_data=False)\n    except UpstreamTaskError as upstream_exc:\n        return await propose_state(\n            client,\n            Pending(name=\"NotReady\", message=str(upstream_exc)),\n            flow_run_id=flow_run.id,\n            # if orchestrating a run already in a pending state, force orchestration to\n            # update the state name\n            force=flow_run.state.is_pending(),\n        )\n\n    state = await propose_state(client, Running(), flow_run_id=flow_run.id)\n\n    # flag to ensure we only update the flow run name once\n    run_name_set = False\n\n    await _run_flow_hooks(flow=flow, flow_run=flow_run, state=state)\n\n    while state.is_running():\n        waited_for_task_runs = False\n\n        # Update the flow run to the latest data\n        flow_run = await client.read_flow_run(flow_run.id)\n        try:\n            with FlowRunContext(\n                **{\n                    **partial_flow_run_context.dict(),\n                    **{\n                        \"flow_run\": flow_run,\n                        \"flow\": flow,\n                        \"client\": client,\n                        \"parameters\": parameters,\n                    },\n                }\n            ) as flow_run_context:\n                # update flow run name\n                if not run_name_set and flow.flow_run_name:\n                    flow_run_name = _resolve_custom_flow_run_name(\n                        flow=flow, parameters=parameters\n                    )\n\n                    await client.update_flow_run(\n                        flow_run_id=flow_run.id, name=flow_run_name\n                    )\n                    logger.extra[\"flow_run_name\"] = flow_run_name\n                    logger.debug(\n                        f\"Renamed flow run {flow_run.name!r} to {flow_run_name!r}\"\n                    )\n                    flow_run.name = flow_run_name\n                    run_name_set = True\n\n                args, kwargs = parameters_to_args_kwargs(flow.fn, parameters)\n                logger.debug(\n                    f\"Executing flow {flow.name!r} for flow run {flow_run.name!r}...\"\n                )\n\n          
      if PREFECT_DEBUG_MODE:\n                    logger.debug(f\"Executing {call_repr(flow.fn, *args, **kwargs)}\")\n                else:\n                    logger.debug(\n                        \"Beginning execution...\", extra={\"state_message\": True}\n                    )\n\n                flow_call = create_call(flow.fn, *args, **kwargs)\n\n                # This check for a parent call is needed for cases where the engine\n                # was entered directly during testing\n                parent_call = get_current_call()\n\n                if parent_call and (\n                    not parent_flow_run_context\n                    or (\n                        getattr(parent_flow_run_context, \"flow\", None)\n                        and parent_flow_run_context.flow.isasync == flow.isasync\n                    )\n                ):\n                    from_async.call_soon_in_waiting_thread(\n                        flow_call,\n                        thread=user_thread,\n                        timeout=flow.timeout_seconds,\n                    )\n                else:\n                    from_async.call_soon_in_new_thread(\n                        flow_call, timeout=flow.timeout_seconds\n                    )\n\n                result = await flow_call.aresult()\n\n                waited_for_task_runs = await wait_for_task_runs_and_report_crashes(\n                    flow_run_context.task_run_futures, client=client\n                )\n        except PausedRun as exc:\n            # could get raised either via utility or by returning Paused from a task run\n            # if a task run pauses, we set its state as the flow's state\n            # to preserve reschedule and timeout behavior\n            paused_flow_run = await client.read_flow_run(flow_run.id)\n            if paused_flow_run.state.is_running():\n                state = await propose_state(\n                    client,\n                    state=exc.state,\n                    flow_run_id=flow_run.id,\n                )\n\n                return state\n            paused_flow_run_state = paused_flow_run.state\n            return paused_flow_run_state\n        except CancelledError as exc:\n            if not flow_call.timedout():\n                # If the flow call was not cancelled by us; this is a crash\n                raise\n            # Construct a new exception as `TimeoutError`\n            original = exc\n            exc = TimeoutError()\n            exc.__cause__ = original\n            logger.exception(\"Encountered exception during execution:\")\n            terminal_state = await exception_to_failed_state(\n                exc,\n                message=f\"Flow run exceeded timeout of {flow.timeout_seconds} seconds\",\n                result_factory=flow_run_context.result_factory,\n                name=\"TimedOut\",\n            )\n        except Exception:\n            # Generic exception in user code\n            logger.exception(\"Encountered exception during execution:\")\n            terminal_state = await exception_to_failed_state(\n                message=\"Flow run encountered an exception.\",\n                result_factory=flow_run_context.result_factory,\n            )\n        else:\n            if result is None:\n                # All tasks and subflows are reference tasks if there is no return value\n                # If there are no tasks, use `None` instead of an empty iterable\n                result = (\n                    flow_run_context.task_run_futures\n                    + 
flow_run_context.task_run_states\n                    + flow_run_context.flow_run_states\n                ) or None\n\n            terminal_state = await return_value_to_state(\n                await resolve_futures_to_states(result),\n                result_factory=flow_run_context.result_factory,\n            )\n\n        if not waited_for_task_runs:\n            # An exception occurred that prevented us from waiting for task runs to\n            # complete. Ensure that we wait for them before proposing a final state\n            # for the flow run.\n            await wait_for_task_runs_and_report_crashes(\n                flow_run_context.task_run_futures, client=client\n            )\n\n        # Before setting the flow run state, store state.data using\n        # block storage and send the resulting data document to the Prefect API instead.\n        # This prevents the pickled return value of flow runs\n        # from being sent to the Prefect API and stored in the Prefect database.\n        # state.data is left as is, otherwise we would have to load\n        # the data from block storage again after storing.\n        state = await propose_state(\n            client,\n            state=terminal_state,\n            flow_run_id=flow_run.id,\n        )\n\n        await _run_flow_hooks(flow=flow, flow_run=flow_run, state=state)\n\n        if state.type != terminal_state.type and PREFECT_DEBUG_MODE:\n            logger.debug(\n                (\n                    f\"Received new state {state} when proposing final state\"\n                    f\" {terminal_state}\"\n                ),\n                extra={\"send_to_api\": False},\n            )\n\n        if not state.is_final() and not state.is_paused():\n            logger.info(\n                (\n                    f\"Received non-final state {state.name!r} when proposing final\"\n                    f\" state {terminal_state.name!r} and will attempt to run again...\"\n                ),\n            )\n            # Attempt to enter a running state again\n            state = await propose_state(client, Running(), flow_run_id=flow_run.id)\n\n    return state\n
    ","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.orchestrate_task_run","title":"orchestrate_task_run async","text":"

    Execute a task run

    This function should be submitted to a task runner. We must construct the context here instead of receiving it already populated since we may be in a new environment.

    Proposes a RUNNING state, then:

    - if accepted, the task user function will be run
    - if rejected, the received state will be returned

    When the user function is run, the result will be used to determine a final state:

    - if an exception is encountered, it is trapped and stored in a FAILED state
    - otherwise, return_value_to_state is used to determine the state

    If the final state is COMPLETED, we generate a cache key as specified by the task
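    For reference, the task-level configuration that drives the cache key mentioned above (a typical usage sketch): on a COMPLETED terminal state, the computed cache key and expiration are attached to the state's details.

    from datetime import timedelta

    from prefect import flow, task
    from prefect.tasks import task_input_hash

    @task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))
    def fetch(url: str) -> str:
        # Subsequent runs with the same inputs reuse the cached COMPLETED state
        # until the expiration passes.
        return f"fetched {url}"

    @flow
    def cached_flow() -> str:
        return fetch("https://example.com")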

    The final state is then proposed:

    - if accepted, this is the final state and will be returned
    - if rejected and a new final state is provided, it will be returned
    - if rejected and a non-final state is provided, we will attempt to enter a RUNNING state again

    Returns:

    State: The final state of the run

    Source code in prefect/engine.py
    async def orchestrate_task_run(\n    task: Task,\n    task_run: TaskRun,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    result_factory: ResultFactory,\n    log_prints: bool,\n    interruptible: bool,\n    client: PrefectClient,\n) -> State:\n    \"\"\"\n    Execute a task run\n\n    This function should be submitted to a task runner. We must construct the context\n    here instead of receiving it already populated since we may be in a new environment.\n\n    Proposes a RUNNING state, then\n    - if accepted, the task user function will be run\n    - if rejected, the received state will be returned\n\n    When the user function is run, the result will be used to determine a final state\n    - if an exception is encountered, it is trapped and stored in a FAILED state\n    - otherwise, `return_value_to_state` is used to determine the state\n\n    If the final state is COMPLETED, we generate a cache key as specified by the task\n\n    The final state is then proposed\n    - if accepted, this is the final state and will be returned\n    - if rejected and a new final state is provided, it will be returned\n    - if rejected and a non-final state is provided, we will attempt to enter a RUNNING\n        state again\n\n    Returns:\n        The final state of the run\n    \"\"\"\n    flow_run_context = prefect.context.FlowRunContext.get()\n    if flow_run_context:\n        flow_run = flow_run_context.flow_run\n    else:\n        flow_run = await client.read_flow_run(task_run.flow_run_id)\n    logger = task_run_logger(task_run, task=task, flow_run=flow_run)\n\n    partial_task_run_context = TaskRunContext.construct(\n        task_run=task_run,\n        task=task,\n        client=client,\n        result_factory=result_factory,\n        log_prints=log_prints,\n    )\n    task_introspection_start_time = time.perf_counter()\n    try:\n        # Resolve futures in parameters into data\n        resolved_parameters = await resolve_inputs(parameters)\n        # Resolve futures in any non-data dependencies to ensure they are ready\n        await resolve_inputs({\"wait_for\": wait_for}, return_data=False)\n    except UpstreamTaskError as upstream_exc:\n        return await propose_state(\n            client,\n            Pending(name=\"NotReady\", message=str(upstream_exc)),\n            task_run_id=task_run.id,\n            # if orchestrating a run already in a pending state, force orchestration to\n            # update the state name\n            force=task_run.state.is_pending(),\n        )\n    task_introspection_end_time = time.perf_counter()\n\n    introspection_time = round(\n        task_introspection_end_time - task_introspection_start_time, 3\n    )\n    threshold = PREFECT_TASK_INTROSPECTION_WARN_THRESHOLD.value()\n    if threshold and introspection_time > threshold:\n        logger.warning(\n            f\"Task parameter introspection took {introspection_time} seconds \"\n            f\", exceeding `PREFECT_TASK_INTROSPECTION_WARN_THRESHOLD` of {threshold}. \"\n            \"Try wrapping large task parameters with \"\n            \"`prefect.utilities.annotations.quote` for increased performance, \"\n            \"e.g. `my_task(quote(param))`. 
To disable this message set \"\n            \"`PREFECT_TASK_INTROSPECTION_WARN_THRESHOLD=0`.\"\n        )\n\n    # Generate the cache key to attach to proposed states\n    # The cache key uses a TaskRunContext that does not include a `timeout_context``\n\n    task_run_context = TaskRunContext(\n        **partial_task_run_context.dict(), parameters=resolved_parameters\n    )\n\n    cache_key = (\n        task.cache_key_fn(\n            task_run_context,\n            resolved_parameters,\n        )\n        if task.cache_key_fn\n        else None\n    )\n\n    # Ignore the cached results for a cache key, default = false\n    # Setting on task level overrules the Prefect setting (env var)\n    refresh_cache = (\n        task.refresh_cache\n        if task.refresh_cache is not None\n        else PREFECT_TASKS_REFRESH_CACHE.value()\n    )\n\n    # Emit an event to capture that the task run was in the `PENDING` state.\n    last_event = emit_task_run_state_change_event(\n        task_run=task_run, initial_state=None, validated_state=task_run.state\n    )\n    last_state = (\n        Pending()\n        if flow_run_context and flow_run_context.autonomous_task_run\n        else task_run.state\n    )\n\n    # Completed states with persisted results should have result data. If it's missing,\n    # this could be a manual state transition, so we should use the Unknown result type\n    # to represent that we know we don't know the result.\n    if (\n        last_state\n        and last_state.is_completed()\n        and result_factory.persist_result\n        and not last_state.data\n    ):\n        state = await propose_state(\n            client,\n            state=Completed(data=await UnknownResult.create()),\n            task_run_id=task_run.id,\n            force=True,\n        )\n\n    # Transition from `PENDING` -> `RUNNING`\n    try:\n        state = await propose_state(\n            client,\n            Running(\n                state_details=StateDetails(\n                    cache_key=cache_key, refresh_cache=refresh_cache\n                )\n            ),\n            task_run_id=task_run.id,\n        )\n    except Pause as exc:\n        # We shouldn't get a pause signal without a state, but if this happens,\n        # just use a Paused state to assume an in-process pause.\n        state = exc.state if exc.state else Paused()\n\n        # If a flow submits tasks and then pauses, we may reach this point due\n        # to concurrency timing because the tasks will try to transition after\n        # the flow run has paused. Orchestration will send back a Paused state\n        # for the task runs.\n        if state.state_details.pause_reschedule:\n            # If we're being asked to pause and reschedule, we should exit the\n            # task and expect to be resumed later.\n            raise\n\n    if state.is_paused():\n        BACKOFF_MAX = 10  # Seconds\n        backoff_count = 0\n\n        async def tick():\n            nonlocal backoff_count\n            if backoff_count < BACKOFF_MAX:\n                backoff_count += 1\n            interval = 1 + backoff_count + random.random() * backoff_count\n            await anyio.sleep(interval)\n\n        # Enter a loop to wait for the task run to be resumed, i.e.\n        # become Pending, and then propose a Running state again.\n        while True:\n            await tick()\n\n            # Propose a Running state again. 
We do this instead of reading the\n            # task run because if the flow run times out, this lets\n            # orchestration fail the task run.\n            try:\n                state = await propose_state(\n                    client,\n                    Running(\n                        state_details=StateDetails(\n                            cache_key=cache_key, refresh_cache=refresh_cache\n                        )\n                    ),\n                    task_run_id=task_run.id,\n                )\n            except Pause as exc:\n                if not exc.state:\n                    continue\n\n                if exc.state.state_details.pause_reschedule:\n                    # If the pause state includes pause_reschedule, we should exit the\n                    # task and expect to be resumed later. We've already checked for this\n                    # above, but we check again here in case the state changed; e.g. the\n                    # flow run suspended.\n                    raise\n                else:\n                    # Propose a Running state again.\n                    continue\n            else:\n                break\n\n    # Emit an event to capture the result of proposing a `RUNNING` state.\n    last_event = emit_task_run_state_change_event(\n        task_run=task_run,\n        initial_state=last_state,\n        validated_state=state,\n        follows=last_event,\n    )\n    last_state = state\n\n    # flag to ensure we only update the task run name once\n    run_name_set = False\n\n    # Only run the task if we enter a `RUNNING` state\n    while state.is_running():\n        # Retrieve the latest metadata for the task run context\n        task_run = await client.read_task_run(task_run.id)\n\n        with task_run_context.copy(\n            update={\"task_run\": task_run, \"start_time\": pendulum.now(\"UTC\")}\n        ):\n            try:\n                args, kwargs = parameters_to_args_kwargs(task.fn, resolved_parameters)\n                # update task run name\n                if not run_name_set and task.task_run_name:\n                    task_run_name = _resolve_custom_task_run_name(\n                        task=task, parameters=resolved_parameters\n                    )\n                    await client.set_task_run_name(\n                        task_run_id=task_run.id, name=task_run_name\n                    )\n                    logger.extra[\"task_run_name\"] = task_run_name\n                    logger.debug(\n                        f\"Renamed task run {task_run.name!r} to {task_run_name!r}\"\n                    )\n                    task_run.name = task_run_name\n                    run_name_set = True\n\n                if PREFECT_DEBUG_MODE.value():\n                    logger.debug(f\"Executing {call_repr(task.fn, *args, **kwargs)}\")\n                else:\n                    logger.debug(\n                        \"Beginning execution...\", extra={\"state_message\": True}\n                    )\n\n                call = from_async.call_soon_in_new_thread(\n                    create_call(task.fn, *args, **kwargs), timeout=task.timeout_seconds\n                )\n                result = await call.aresult()\n\n            except (CancelledError, asyncio.CancelledError) as exc:\n                if not call.timedout():\n                    # If the task call was not cancelled by us; this is a crash\n                    raise\n                # Construct a new exception as `TimeoutError`\n                original = exc\n           
     exc = TimeoutError()\n                exc.__cause__ = original\n                logger.exception(\"Encountered exception during execution:\")\n                terminal_state = await exception_to_failed_state(\n                    exc,\n                    message=(\n                        f\"Task run exceeded timeout of {task.timeout_seconds} seconds\"\n                    ),\n                    result_factory=task_run_context.result_factory,\n                    name=\"TimedOut\",\n                )\n            except Exception as exc:\n                logger.exception(\"Encountered exception during execution:\")\n                terminal_state = await exception_to_failed_state(\n                    exc,\n                    message=\"Task run encountered an exception\",\n                    result_factory=task_run_context.result_factory,\n                )\n            else:\n                terminal_state = await return_value_to_state(\n                    result,\n                    result_factory=task_run_context.result_factory,\n                )\n\n                # for COMPLETED tasks, add the cache key and expiration\n                if terminal_state.is_completed():\n                    terminal_state.state_details.cache_expiration = (\n                        (pendulum.now(\"utc\") + task.cache_expiration)\n                        if task.cache_expiration\n                        else None\n                    )\n                    terminal_state.state_details.cache_key = cache_key\n\n            if terminal_state.is_failed():\n                # Defer to user to decide whether failure is retriable\n                terminal_state.state_details.retriable = (\n                    await _check_task_failure_retriable(task, task_run, terminal_state)\n                )\n            state = await propose_state(client, terminal_state, task_run_id=task_run.id)\n            last_event = emit_task_run_state_change_event(\n                task_run=task_run,\n                initial_state=last_state,\n                validated_state=state,\n                follows=last_event,\n            )\n            last_state = state\n\n            await _run_task_hooks(\n                task=task,\n                task_run=task_run,\n                state=state,\n            )\n\n            if state.type != terminal_state.type and PREFECT_DEBUG_MODE:\n                logger.debug(\n                    (\n                        f\"Received new state {state} when proposing final state\"\n                        f\" {terminal_state}\"\n                    ),\n                    extra={\"send_to_api\": False},\n                )\n\n            if not state.is_final() and not state.is_paused():\n                logger.info(\n                    (\n                        f\"Received non-final state {state.name!r} when proposing final\"\n                        f\" state {terminal_state.name!r} and will attempt to run\"\n                        \" again...\"\n                    ),\n                )\n                # Attempt to enter a running state again\n                state = await propose_state(client, Running(), task_run_id=task_run.id)\n                last_event = emit_task_run_state_change_event(\n                    task_run=task_run,\n                    initial_state=last_state,\n                    validated_state=state,\n                    follows=last_event,\n                )\n                last_state = state\n\n    # If debugging, use the more complete `repr` than the usual 
`str` description\n    display_state = repr(state) if PREFECT_DEBUG_MODE else str(state)\n\n    logger.log(\n        level=logging.INFO if state.is_completed() else logging.ERROR,\n        msg=f\"Finished in state {display_state}\",\n    )\n    return state\n
    ","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.pause_flow_run","title":"pause_flow_run async","text":"

    Pauses the current flow run by blocking execution until resumed.

    When called within a flow run, execution will block and no downstream tasks will run until the flow is resumed. Task runs that have already started will continue running. A timeout parameter can be passed that will fail the flow run if it has not been resumed within the specified time.

    Parameters:

    - flow_run_id (UUID, default None): a flow run id. If supplied, this function will attempt to pause the specified flow run outside of the flow run process. When paused, the flow run will continue execution until the NEXT task is orchestrated, at which point the flow will exit. Any tasks that have already started will run until completion. When resumed, the flow run will be rescheduled to finish execution. In order to pause a flow run in this way, the flow needs to have an associated deployment and results need to be configured with the persist_results option.
    - timeout (int, default 3600): the number of seconds to wait for the flow to be resumed before failing. Defaults to 1 hour (3600 seconds). If the pause timeout exceeds any configured flow-level timeout, the flow might fail even after resuming.
    - poll_interval (int, default 10): the number of seconds between checking whether the flow has been resumed. Defaults to 10 seconds.
    - reschedule (bool, default False): flag that will reschedule the flow run if resumed. Instead of blocking execution, the flow will gracefully exit (with no result returned). To use this flag, a flow needs to have an associated deployment and results need to be configured with the persist_results option.
    - key (str, default None): an optional key to prevent calling pauses more than once. This defaults to the number of pauses observed by the flow so far, and prevents pauses that use the "reschedule" option from running the same pause twice. A custom key can be supplied for custom pausing behavior.
    - wait_for_input (Optional[Type[T]], default None): a subclass of RunInput or any type supported by Pydantic. If provided when the flow pauses, the flow will wait for the input to be provided before resuming. If the flow is resumed without providing the input, the flow will fail. If the flow is resumed with the input, the flow will resume and the input will be loaded and returned from this function.
    @task\ndef task_one():\n    for i in range(3):\n        sleep(1)\n\n@flow\ndef my_flow():\n    terminal_state = task_one.submit(return_state=True)\n    if terminal_state.type == StateType.COMPLETED:\n        print(\"Task one succeeded! Pausing flow run..\")\n        pause_flow_run(timeout=2)\n    else:\n        print(\"Task one failed. Skipping pause flow run..\")\n
    Source code in prefect/engine.py
    @sync_compatible\n@deprecated_parameter(\n    \"flow_run_id\", start_date=\"Dec 2023\", help=\"Use `suspend_flow_run` instead.\"\n)\n@deprecated_parameter(\n    \"reschedule\",\n    start_date=\"Dec 2023\",\n    when=lambda p: p is True,\n    help=\"Use `suspend_flow_run` instead.\",\n)\n@experimental_parameter(\n    \"wait_for_input\", group=\"flow_run_input\", when=lambda y: y is not None\n)\nasync def pause_flow_run(\n    wait_for_input: Optional[Type[T]] = None,\n    flow_run_id: UUID = None,\n    timeout: int = 3600,\n    poll_interval: int = 10,\n    reschedule: bool = False,\n    key: str = None,\n) -> Optional[T]:\n    \"\"\"\n    Pauses the current flow run by blocking execution until resumed.\n\n    When called within a flow run, execution will block and no downstream tasks will\n    run until the flow is resumed. Task runs that have already started will continue\n    running. A timeout parameter can be passed that will fail the flow run if it has not\n    been resumed within the specified time.\n\n    Args:\n        flow_run_id: a flow run id. If supplied, this function will attempt to pause\n            the specified flow run outside of the flow run process. When paused, the\n            flow run will continue execution until the NEXT task is orchestrated, at\n            which point the flow will exit. Any tasks that have already started will\n            run until completion. When resumed, the flow run will be rescheduled to\n            finish execution. In order pause a flow run in this way, the flow needs to\n            have an associated deployment and results need to be configured with the\n            `persist_results` option.\n        timeout: the number of seconds to wait for the flow to be resumed before\n            failing. Defaults to 1 hour (3600 seconds). If the pause timeout exceeds\n            any configured flow-level timeout, the flow might fail even after resuming.\n        poll_interval: The number of seconds between checking whether the flow has been\n            resumed. Defaults to 10 seconds.\n        reschedule: Flag that will reschedule the flow run if resumed. Instead of\n            blocking execution, the flow will gracefully exit (with no result returned)\n            instead. To use this flag, a flow needs to have an associated deployment and\n            results need to be configured with the `persist_results` option.\n        key: An optional key to prevent calling pauses more than once. This defaults to\n            the number of pauses observed by the flow so far, and prevents pauses that\n            use the \"reschedule\" option from running the same pause twice. A custom key\n            can be supplied for custom pausing behavior.\n        wait_for_input: a subclass of `RunInput` or any type supported by\n            Pydantic. If provided when the flow pauses, the flow will wait for the\n            input to be provided before resuming. If the flow is resumed without\n            providing the input, the flow will fail. If the flow is resumed with the\n            input, the flow will resume and the input will be loaded and returned\n            from this function.\n\n    Example:\n    ```python\n    @task\n    def task_one():\n        for i in range(3):\n            sleep(1)\n\n    @flow\n    def my_flow():\n        terminal_state = task_one.submit(return_state=True)\n        if terminal_state.type == StateType.COMPLETED:\n            print(\"Task one succeeded! 
Pausing flow run..\")\n            pause_flow_run(timeout=2)\n        else:\n            print(\"Task one failed. Skipping pause flow run..\")\n    ```\n\n    \"\"\"\n    if flow_run_id:\n        if wait_for_input is not None:\n            raise RuntimeError(\"Cannot wait for input when pausing out of process.\")\n\n        return await _out_of_process_pause(\n            flow_run_id=flow_run_id,\n            timeout=timeout,\n            reschedule=reschedule,\n            key=key,\n        )\n    else:\n        return await _in_process_pause(\n            timeout=timeout,\n            poll_interval=poll_interval,\n            reschedule=reschedule,\n            key=key,\n            wait_for_input=wait_for_input,\n        )\n
    ","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.report_flow_run_crashes","title":"report_flow_run_crashes async","text":"

    Detect flow run crashes during this context and update the run to a proper final state.

    This context must reraise the exception to properly exit the run.

    Source code in prefect/engine.py
    @asynccontextmanager\nasync def report_flow_run_crashes(flow_run: FlowRun, client: PrefectClient, flow: Flow):\n    \"\"\"\n    Detect flow run crashes during this context and update the run to a proper final\n    state.\n\n    This context _must_ reraise the exception to properly exit the run.\n    \"\"\"\n\n    try:\n        yield\n    except (Abort, Pause):\n        # Do not capture internal signals as crashes\n        raise\n    except BaseException as exc:\n        state = await exception_to_crashed_state(exc)\n        logger = flow_run_logger(flow_run)\n        with anyio.CancelScope(shield=True):\n            logger.error(f\"Crash detected! {state.message}\")\n            logger.debug(\"Crash details:\", exc_info=exc)\n            flow_run_state = await propose_state(client, state, flow_run_id=flow_run.id)\n            engine_logger.debug(\n                f\"Reported crashed flow run {flow_run.name!r} successfully!\"\n            )\n\n            # Only `on_crashed` and `on_cancellation` flow run state change hooks can be called here.\n            # We call the hooks after the state change proposal to `CRASHED` is validated\n            # or rejected (if it is in a `CANCELLING` state).\n            await _run_flow_hooks(\n                flow=flow,\n                flow_run=flow_run,\n                state=flow_run_state,\n            )\n\n        # Reraise the exception\n        raise\n
    ","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.report_task_run_crashes","title":"report_task_run_crashes async","text":"

    Detect task run crashes during this context and update the run to a proper final state.

    This context must reraise the exception to properly exit the run.

    Source code in prefect/engine.py
    @asynccontextmanager\nasync def report_task_run_crashes(task_run: TaskRun, client: PrefectClient):\n    \"\"\"\n    Detect task run crashes during this context and update the run to a proper final\n    state.\n\n    This context _must_ reraise the exception to properly exit the run.\n    \"\"\"\n    try:\n        yield\n    except (Abort, Pause):\n        # Do not capture internal signals as crashes\n        raise\n    except BaseException as exc:\n        state = await exception_to_crashed_state(exc)\n        logger = task_run_logger(task_run)\n        with anyio.CancelScope(shield=True):\n            logger.error(f\"Crash detected! {state.message}\")\n            logger.debug(\"Crash details:\", exc_info=exc)\n            await client.set_task_run_state(\n                state=state,\n                task_run_id=task_run.id,\n                force=True,\n            )\n            engine_logger.debug(\n                f\"Reported crashed task run {task_run.name!r} successfully!\"\n            )\n\n        # Reraise the exception\n        raise\n
    ","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.resume_flow_run","title":"resume_flow_run async","text":"

    Resumes a paused flow.

    Parameters:

    Name Type Description Default flow_run_id

    the flow_run_id to resume

    required run_input Optional[Dict]

    a dictionary of inputs to provide to the flow run.

    None Source code in prefect/engine.py
    @sync_compatible\nasync def resume_flow_run(flow_run_id, run_input: Optional[Dict] = None):\n    \"\"\"\n    Resumes a paused flow.\n\n    Args:\n        flow_run_id: the flow_run_id to resume\n        run_input: a dictionary of inputs to provide to the flow run.\n    \"\"\"\n    client = get_client()\n    async with client:\n        flow_run = await client.read_flow_run(flow_run_id)\n\n        if not flow_run.state.is_paused():\n            raise NotPausedError(\"Cannot resume a run that isn't paused!\")\n\n        response = await client.resume_flow_run(flow_run_id, run_input=run_input)\n\n    if response.status == SetStateStatus.REJECT:\n        if response.state.type == StateType.FAILED:\n            raise FlowPauseTimeout(\"Flow run can no longer be resumed.\")\n        else:\n            raise RuntimeError(f\"Cannot resume this run: {response.details.reason}\")\n
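    A minimal sketch of resuming a paused flow run from outside the flow process, assuming the ID shown is a hypothetical placeholder; run_input is only needed when the flow paused with wait_for_input.

    from prefect.engine import resume_flow_run\n\n# Hypothetical flow run ID, shown for illustration only\nresume_flow_run(\n    \"00000000-0000-0000-0000-000000000000\",\n    run_input={\"approved\": True},\n)\n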
    ","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.retrieve_flow_then_begin_flow_run","title":"retrieve_flow_then_begin_flow_run async","text":"

    Async entrypoint for flow runs that have been submitted for execution by an agent

    • Retrieves the deployment information
    • Loads the flow object using deployment information
    • Updates the flow run version
    Source code in prefect/engine.py
    @inject_client\nasync def retrieve_flow_then_begin_flow_run(\n    flow_run_id: UUID,\n    client: PrefectClient,\n    user_thread: threading.Thread,\n) -> State:\n    \"\"\"\n    Async entrypoint for flow runs that have been submitted for execution by an agent\n\n    - Retrieves the deployment information\n    - Loads the flow object using deployment information\n    - Updates the flow run version\n    \"\"\"\n    flow_run = await client.read_flow_run(flow_run_id)\n\n    entrypoint = os.environ.get(\"PREFECT__FLOW_ENTRYPOINT\")\n\n    try:\n        flow = (\n            load_flow_from_entrypoint(entrypoint)\n            if entrypoint\n            else await load_flow_from_flow_run(flow_run, client=client)\n        )\n    except Exception:\n        message = (\n            \"Flow could not be retrieved from\"\n            f\" {'entrypoint' if entrypoint else 'deployment'}.\"\n        )\n        flow_run_logger(flow_run).exception(message)\n        state = await exception_to_failed_state(message=message)\n        await client.set_flow_run_state(\n            state=state, flow_run_id=flow_run_id, force=True\n        )\n        return state\n\n    # Update the flow run policy defaults to match settings on the flow\n    # Note: Mutating the flow run object prevents us from performing another read\n    #       operation if these properties are used by the client downstream\n    if flow_run.empirical_policy.retry_delay is None:\n        flow_run.empirical_policy.retry_delay = flow.retry_delay_seconds\n\n    if flow_run.empirical_policy.retries is None:\n        flow_run.empirical_policy.retries = flow.retries\n\n    await client.update_flow_run(\n        flow_run_id=flow_run_id,\n        flow_version=flow.version,\n        empirical_policy=flow_run.empirical_policy,\n    )\n\n    if flow.should_validate_parameters:\n        failed_state = None\n        try:\n            parameters = flow.validate_parameters(flow_run.parameters)\n        except Exception:\n            message = \"Validation of flow parameters failed with error: \"\n            flow_run_logger(flow_run).exception(message)\n            failed_state = await exception_to_failed_state(message=message)\n\n        if failed_state is not None:\n            await propose_state(\n                client,\n                state=failed_state,\n                flow_run_id=flow_run_id,\n            )\n            return failed_state\n    else:\n        parameters = flow_run.parameters\n\n    # Ensure default values are populated\n    parameters = {**get_parameter_defaults(flow.fn), **parameters}\n\n    return await begin_flow_run(\n        flow=flow,\n        flow_run=flow_run,\n        parameters=parameters,\n        client=client,\n        user_thread=user_thread,\n    )\n
    ","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.suspend_flow_run","title":"suspend_flow_run async","text":"

    Suspends a flow run by stopping code execution until resumed.

    When suspended, the flow run will continue execution until the NEXT task is orchestrated, at which point the flow will exit. Any tasks that have already started will run until completion. When resumed, the flow run will be rescheduled to finish execution. In order to suspend a flow run in this way, the flow needs to have an associated deployment and results need to be configured with the persist_results option.

    Parameters:

    Name Type Description Default flow_run_id Optional[UUID]

    a flow run id. If supplied, this function will attempt to suspend the specified flow run. If not supplied, this function will attempt to suspend the current flow run.

    None timeout Optional[int]

    the number of seconds to wait for the flow to be resumed before failing. Defaults to 1 hour (3600 seconds). If the pause timeout exceeds any configured flow-level timeout, the flow might fail even after resuming.

    3600 key Optional[str]

    An optional key to prevent calling suspend more than once. This defaults to a random string and prevents suspends from running the same suspend twice. A custom key can be supplied for custom suspending behavior.

    None wait_for_input Optional[Type[T]]

    a subclass of RunInput or any type supported by Pydantic. If provided when the flow suspends, the flow will remain suspended until receiving the input before resuming. If the flow is resumed without providing the input, the flow will fail. If the flow is resumed with the input, the flow will resume and the input will be loaded and returned from this function.

    None Source code in prefect/engine.py
    @sync_compatible\n@inject_client\n@experimental_parameter(\n    \"wait_for_input\", group=\"flow_run_input\", when=lambda y: y is not None\n)\nasync def suspend_flow_run(\n    wait_for_input: Optional[Type[T]] = None,\n    flow_run_id: Optional[UUID] = None,\n    timeout: Optional[int] = 3600,\n    key: Optional[str] = None,\n    client: PrefectClient = None,\n) -> Optional[T]:\n    \"\"\"\n    Suspends a flow run by stopping code execution until resumed.\n\n    When suspended, the flow run will continue execution until the NEXT task is\n    orchestrated, at which point the flow will exit. Any tasks that have\n    already started will run until completion. When resumed, the flow run will\n    be rescheduled to finish execution. In order suspend a flow run in this\n    way, the flow needs to have an associated deployment and results need to be\n    configured with the `persist_results` option.\n\n    Args:\n        flow_run_id: a flow run id. If supplied, this function will attempt to\n            suspend the specified flow run. If not supplied will attempt to\n            suspend the current flow run.\n        timeout: the number of seconds to wait for the flow to be resumed before\n            failing. Defaults to 1 hour (3600 seconds). If the pause timeout\n            exceeds any configured flow-level timeout, the flow might fail even\n            after resuming.\n        key: An optional key to prevent calling suspend more than once. This\n            defaults to a random string and prevents suspends from running the\n            same suspend twice. A custom key can be supplied for custom\n            suspending behavior.\n        wait_for_input: a subclass of `RunInput` or any type supported by\n            Pydantic. If provided when the flow suspends, the flow will remain\n            suspended until receiving the input before resuming. If the flow is\n            resumed without providing the input, the flow will fail. If the flow is\n            resumed with the input, the flow will resume and the input will be\n            loaded and returned from this function.\n    \"\"\"\n    context = FlowRunContext.get()\n\n    if flow_run_id is None:\n        if TaskRunContext.get():\n            raise RuntimeError(\"Cannot suspend task runs.\")\n\n        if context is None or context.flow_run is None:\n            raise RuntimeError(\n                \"Flow runs can only be suspended from within a flow run.\"\n            )\n\n        logger = get_run_logger(context=context)\n        logger.info(\n            \"Suspending flow run, execution will be rescheduled when this flow run is\"\n            \" resumed.\"\n        )\n        flow_run_id = context.flow_run.id\n        suspending_current_flow_run = True\n        pause_counter = _observed_flow_pauses(context)\n        pause_key = key or str(pause_counter)\n    else:\n        # Since we're suspending another flow run we need to generate a pause\n        # key that won't conflict with whatever suspends/pauses that flow may\n        # have. 
Since this method won't be called during that flow run it's\n        # okay that this is non-deterministic.\n        suspending_current_flow_run = False\n        pause_key = key or str(uuid4())\n\n    proposed_state = Suspended(timeout_seconds=timeout, pause_key=pause_key)\n\n    if wait_for_input:\n        wait_for_input = run_input_subclass_from_type(wait_for_input)\n        run_input_keyset = keyset_from_paused_state(proposed_state)\n        proposed_state.state_details.run_input_keyset = run_input_keyset\n\n    try:\n        state = await propose_state(\n            client=client,\n            state=proposed_state,\n            flow_run_id=flow_run_id,\n        )\n    except Abort as exc:\n        # Aborted requests mean the suspension is not allowed\n        raise RuntimeError(f\"Flow run cannot be suspended: {exc}\")\n\n    if state.is_running():\n        # The orchestrator rejected the suspended state which means that this\n        # suspend has happened before and the flow run has been resumed.\n        if wait_for_input:\n            # The flow run wanted input, so we need to load it and return it\n            # to the user.\n            return await wait_for_input.load(run_input_keyset)\n        return\n\n    if not state.is_paused():\n        # If we receive anything but a PAUSED state, we are unable to continue\n        raise RuntimeError(\n            f\"Flow run cannot be suspended. Received unexpected state from API: {state}\"\n        )\n\n    if wait_for_input:\n        await wait_for_input.save(run_input_keyset)\n\n    if suspending_current_flow_run:\n        # Exit this process so the run can be resubmitted later\n        raise Pause()\n
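    A minimal sketch of suspending the current flow run from within a flow, assuming the flow has an associated deployment and persisted results as described above; the persist_result decorator option is an assumption here.

    from prefect import flow\nfrom prefect.engine import suspend_flow_run\n\n@flow(persist_result=True)\ndef my_flow():\n    # Exits this process by raising Pause; the run is rescheduled once it is resumed\n    suspend_flow_run(timeout=600)\n    print(\"Resumed and rescheduled to finish execution\")\n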
    ","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/events/","title":"prefect.events","text":"","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events","title":"prefect.events","text":"","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.TriggerTypes","title":"TriggerTypes: TypeAlias = Union[EventTrigger, MetricTrigger, CompoundTrigger, SequenceTrigger] module-attribute","text":"

    The union of all concrete trigger types that a user may actually create

    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.Action","title":"Action","text":"

    Bases: PrefectBaseModel, ABC

    An Action that may be performed when an Automation is triggered

    Source code in prefect/events/actions.py
    class Action(PrefectBaseModel, abc.ABC):\n    \"\"\"An Action that may be performed when an Automation is triggered\"\"\"\n\n    type: str\n\n    def describe_for_cli(self) -> str:\n        \"\"\"A human-readable description of the action\"\"\"\n        return self.type.replace(\"-\", \" \").capitalize()\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.Action.describe_for_cli","title":"describe_for_cli","text":"

    A human-readable description of the action

    Source code in prefect/events/actions.py
    def describe_for_cli(self) -> str:\n    \"\"\"A human-readable description of the action\"\"\"\n    return self.type.replace(\"-\", \" \").capitalize()\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.AutomationCore","title":"AutomationCore","text":"

    Bases: PrefectBaseModel

    Defines an action a user wants to take when a certain number of events do or don't happen to the matching resources

    Source code in prefect/events/schemas/automations.py
    class AutomationCore(PrefectBaseModel, extra=\"ignore\"):  # type: ignore[call-arg]\n    \"\"\"Defines an action a user wants to take when a certain number of events\n    do or don't happen to the matching resources\"\"\"\n\n    name: str = Field(..., description=\"The name of this automation\")\n    description: str = Field(\"\", description=\"A longer description of this automation\")\n\n    enabled: bool = Field(True, description=\"Whether this automation will be evaluated\")\n\n    trigger: TriggerTypes = Field(\n        ...,\n        description=(\n            \"The criteria for which events this Automation covers and how it will \"\n            \"respond to the presence or absence of those events\"\n        ),\n    )\n\n    actions: List[ActionTypes] = Field(\n        ...,\n        description=\"The actions to perform when this Automation triggers\",\n    )\n\n    actions_on_trigger: List[ActionTypes] = Field(\n        default_factory=list,\n        description=\"The actions to perform when an Automation goes into a triggered state\",\n    )\n\n    actions_on_resolve: List[ActionTypes] = Field(\n        default_factory=list,\n        description=\"The actions to perform when an Automation goes into a resolving state\",\n    )\n\n    owner_resource: Optional[str] = Field(\n        default=None, description=\"The owning resource of this automation\"\n    )\n
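    A hedged sketch of the shape this model describes, assuming AutomationCore, EventTrigger, and DoNothing can be imported from prefect.events as documented on this page.

    from prefect.events import AutomationCore, DoNothing, EventTrigger\n\nautomation = AutomationCore(\n    name=\"notify-on-failure\",\n    description=\"Example automation definition\",\n    trigger=EventTrigger(expect={\"prefect.flow-run.Failed\"}),\n    actions=[DoNothing()],\n)\n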
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.CallWebhook","title":"CallWebhook","text":"

    Bases: Action

    Call a webhook when an Automation is triggered.

    Source code in prefect/events/actions.py
    class CallWebhook(Action):\n    \"\"\"Call a webhook when an Automation is triggered.\"\"\"\n\n    type: Literal[\"call-webhook\"] = \"call-webhook\"\n    block_document_id: UUID = Field(\n        description=\"The identifier of the webhook block to use\"\n    )\n    payload: str = Field(\n        default=\"\",\n        description=\"An optional templatable payload to send when calling the webhook.\",\n    )\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.CancelFlowRun","title":"CancelFlowRun","text":"

    Bases: Action

    Cancels a flow run associated with the trigger

    Source code in prefect/events/actions.py
    class CancelFlowRun(Action):\n    \"\"\"Cancels a flow run associated with the trigger\"\"\"\n\n    type: Literal[\"cancel-flow-run\"] = \"cancel-flow-run\"\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.ChangeFlowRunState","title":"ChangeFlowRunState","text":"

    Bases: Action

    Changes the state of a flow run associated with the trigger

    Source code in prefect/events/actions.py
    class ChangeFlowRunState(Action):\n    \"\"\"Changes the state of a flow run associated with the trigger\"\"\"\n\n    type: Literal[\"change-flow-run-state\"] = \"change-flow-run-state\"\n\n    name: Optional[str] = Field(\n        None,\n        description=\"The name of the state to change the flow run to\",\n    )\n    state: StateType = Field(\n        ...,\n        description=\"The type of the state to change the flow run to\",\n    )\n    message: Optional[str] = Field(\n        None,\n        description=\"An optional message to associate with the state change\",\n    )\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.CompositeTrigger","title":"CompositeTrigger","text":"

    Bases: Trigger, ABC

    Requires some number of triggers to have fired within the given time period.

    Source code in prefect/events/schemas/automations.py
    class CompositeTrigger(Trigger, abc.ABC):\n    \"\"\"\n    Requires some number of triggers to have fired within the given time period.\n    \"\"\"\n\n    type: Literal[\"compound\", \"sequence\"]\n    triggers: List[\"TriggerTypes\"]\n    within: Optional[timedelta]\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.CompoundTrigger","title":"CompoundTrigger","text":"

    Bases: CompositeTrigger

    A composite trigger that requires some number of triggers to have fired within the given time period

    Source code in prefect/events/schemas/automations.py
    class CompoundTrigger(CompositeTrigger):\n    \"\"\"A composite trigger that requires some number of triggers to have\n    fired within the given time period\"\"\"\n\n    type: Literal[\"compound\"] = \"compound\"\n    require: Union[int, Literal[\"any\", \"all\"]]\n\n    @root_validator\n    def validate_require(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n        require = values.get(\"require\")\n\n        if isinstance(require, int):\n            if require < 1:\n                raise ValueError(\"required must be at least 1\")\n            if require > len(values[\"triggers\"]):\n                raise ValueError(\n                    \"required must be less than or equal to the number of triggers\"\n                )\n\n        return values\n\n    def describe_for_cli(self, indent: int = 0) -> str:\n        \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n        return textwrap.indent(\n            \"\\n\".join(\n                [\n                    f\"{str(self.require).capitalize()} of:\",\n                    \"\\n\".join(\n                        [\n                            trigger.describe_for_cli(indent=indent + 1)\n                            for trigger in self.triggers\n                        ]\n                    ),\n                ]\n            ),\n            prefix=\"  \" * indent,\n        )\n
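    A hedged example composing two event triggers, assuming both classes are importable from prefect.events; the compound trigger requires both expected events to be seen within ten minutes.

    from datetime import timedelta\nfrom prefect.events import CompoundTrigger, EventTrigger\n\ntrigger = CompoundTrigger(\n    require=\"all\",\n    within=timedelta(minutes=10),\n    triggers=[\n        EventTrigger(expect={\"prefect.flow-run.Completed\"}),\n        EventTrigger(expect={\"prefect.flow-run.Failed\"}),\n    ],\n)\n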
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.CompoundTrigger.describe_for_cli","title":"describe_for_cli","text":"

    Return a human-readable description of this trigger for the CLI

    Source code in prefect/events/schemas/automations.py
    def describe_for_cli(self, indent: int = 0) -> str:\n    \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n    return textwrap.indent(\n        \"\\n\".join(\n            [\n                f\"{str(self.require).capitalize()} of:\",\n                \"\\n\".join(\n                    [\n                        trigger.describe_for_cli(indent=indent + 1)\n                        for trigger in self.triggers\n                    ]\n                ),\n            ]\n        ),\n        prefix=\"  \" * indent,\n    )\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.DeclareIncident","title":"DeclareIncident","text":"

    Bases: Action

    Declares an incident for the triggering event. Only available on Prefect Cloud

    Source code in prefect/events/actions.py
    class DeclareIncident(Action):\n    \"\"\"Declares an incident for the triggering event.  Only available on Prefect Cloud\"\"\"\n\n    type: Literal[\"declare-incident\"] = \"declare-incident\"\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.DeploymentCompoundTrigger","title":"DeploymentCompoundTrigger","text":"

    Bases: BaseDeploymentTrigger, CompoundTrigger

    A composite trigger that requires some number of triggers to have fired within the given time period

    Source code in prefect/events/schemas/deployment_triggers.py
    class DeploymentCompoundTrigger(BaseDeploymentTrigger, CompoundTrigger):\n    \"\"\"A composite trigger that requires some number of triggers to have\n    fired within the given time period\"\"\"\n\n    trigger_type = CompoundTrigger\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.DeploymentEventTrigger","title":"DeploymentEventTrigger","text":"

    Bases: BaseDeploymentTrigger, EventTrigger

    A trigger that fires based on the presence or absence of events within a given period of time.

    Source code in prefect/events/schemas/deployment_triggers.py
    class DeploymentEventTrigger(BaseDeploymentTrigger, EventTrigger):\n    \"\"\"\n    A trigger that fires based on the presence or absence of events within a given\n    period of time.\n    \"\"\"\n\n    trigger_type = EventTrigger\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.DeploymentMetricTrigger","title":"DeploymentMetricTrigger","text":"

    Bases: BaseDeploymentTrigger, MetricTrigger

    A trigger that fires based on the results of a metric query.

    Source code in prefect/events/schemas/deployment_triggers.py
    class DeploymentMetricTrigger(BaseDeploymentTrigger, MetricTrigger):\n    \"\"\"\n    A trigger that fires based on the results of a metric query.\n    \"\"\"\n\n    trigger_type = MetricTrigger\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.DeploymentSequenceTrigger","title":"DeploymentSequenceTrigger","text":"

    Bases: BaseDeploymentTrigger, SequenceTrigger

    A composite trigger that requires some number of triggers to have fired within the given time period in a specific order

    Source code in prefect/events/schemas/deployment_triggers.py
    class DeploymentSequenceTrigger(BaseDeploymentTrigger, SequenceTrigger):\n    \"\"\"A composite trigger that requires some number of triggers to have fired\n    within the given time period in a specific order\"\"\"\n\n    trigger_type = SequenceTrigger\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.DoNothing","title":"DoNothing","text":"

    Bases: Action

    Do nothing when an Automation is triggered

    Source code in prefect/events/actions.py
    class DoNothing(Action):\n    \"\"\"Do nothing when an Automation is triggered\"\"\"\n\n    type: Literal[\"do-nothing\"] = \"do-nothing\"\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.Event","title":"Event","text":"

    Bases: PrefectBaseModel

    The client-side view of an event that has happened to a Resource

    Source code in prefect/events/schemas/events.py
    class Event(PrefectBaseModel):\n    \"\"\"The client-side view of an event that has happened to a Resource\"\"\"\n\n    occurred: DateTimeTZ = Field(\n        default_factory=lambda: pendulum.now(\"UTC\"),\n        description=\"When the event happened from the sender's perspective\",\n    )\n    event: str = Field(\n        description=\"The name of the event that happened\",\n    )\n    resource: Resource = Field(\n        description=\"The primary Resource this event concerns\",\n    )\n    related: List[RelatedResource] = Field(\n        default_factory=list,\n        description=\"A list of additional Resources involved in this event\",\n    )\n    payload: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"An open-ended set of data describing what happened\",\n    )\n    id: UUID = Field(\n        default_factory=uuid4,\n        description=\"The client-provided identifier of this event\",\n    )\n    follows: Optional[UUID] = Field(\n        default=None,\n        description=(\n            \"The ID of an event that is known to have occurred prior to this one. \"\n            \"If set, this may be used to establish a more precise ordering of causally-\"\n            \"related events when they occur close enough together in time that the \"\n            \"system may receive them out-of-order.\"\n        ),\n    )\n\n    @property\n    def involved_resources(self) -> Sequence[Resource]:\n        return [self.resource] + list(self.related)\n\n    @property\n    def resource_in_role(self) -> Mapping[str, RelatedResource]:\n        \"\"\"Returns a mapping of roles to the first related resource in that role\"\"\"\n        return {related.role: related for related in reversed(self.related)}\n\n    @property\n    def resources_in_role(self) -> Mapping[str, Sequence[RelatedResource]]:\n        \"\"\"Returns a mapping of roles to related resources in that role\"\"\"\n        resources: Dict[str, List[RelatedResource]] = defaultdict(list)\n        for related in self.related:\n            resources[related.role].append(related)\n        return resources\n\n    @validator(\"related\")\n    def enforce_maximum_related_resources(cls, value: List[RelatedResource]):\n        if len(value) > PREFECT_EVENTS_MAXIMUM_RELATED_RESOURCES.value():\n            raise ValueError(\n                \"The maximum number of related resources \"\n                f\"is {PREFECT_EVENTS_MAXIMUM_RELATED_RESOURCES.value()}\"\n            )\n\n        return value\n\n    def find_resource_label(self, label: str) -> Optional[str]:\n        \"\"\"Finds the value of the given label in this event's resource or one of its\n        related resources.  If the label starts with `related:<role>:`, search for the\n        first matching label in a related resource with that role.\"\"\"\n        directive, _, related_label = label.rpartition(\":\")\n        directive, _, role = directive.partition(\":\")\n        if directive == \"related\":\n            for related in self.related:\n                if related.role == role:\n                    return related.get(related_label)\n        return self.resource.get(label)\n
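    A hedged example of building a client-side Event from the fields above; the event name and resource ID are hypothetical, and the resource dictionary must carry the prefect.resource.id label.

    from prefect.events import Event\n\nevent = Event(\n    event=\"my-app.order.created\",\n    resource={\"prefect.resource.id\": \"my-app.order.12345\"},\n    payload={\"amount\": 42},\n)\nprint(event.id, event.occurred, event.resource.id)\n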
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.Event.resource_in_role","title":"resource_in_role: Mapping[str, RelatedResource] property","text":"

    Returns a mapping of roles to the first related resource in that role

    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.Event.resources_in_role","title":"resources_in_role: Mapping[str, Sequence[RelatedResource]] property","text":"

    Returns a mapping of roles to related resources in that role

    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.Event.find_resource_label","title":"find_resource_label","text":"

    Finds the value of the given label in this event's resource or one of its related resources. If the label starts with related:<role>:, search for the first matching label in a related resource with that role.

    Source code in prefect/events/schemas/events.py
    def find_resource_label(self, label: str) -> Optional[str]:\n    \"\"\"Finds the value of the given label in this event's resource or one of its\n    related resources.  If the label starts with `related:<role>:`, search for the\n    first matching label in a related resource with that role.\"\"\"\n    directive, _, related_label = label.rpartition(\":\")\n    directive, _, role = directive.partition(\":\")\n    if directive == \"related\":\n        for related in self.related:\n            if related.role == role:\n                return related.get(related_label)\n    return self.resource.get(label)\n
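    A hedged sketch of label lookup on an event carrying one related resource in the flow role; all identifiers are placeholders.

    from prefect.events import Event\n\nevent = Event(\n    event=\"prefect.flow-run.Completed\",\n    resource={\"prefect.resource.id\": \"prefect.flow-run.abc\"},\n    related=[\n        {\n            \"prefect.resource.id\": \"prefect.flow.xyz\",\n            \"prefect.resource.role\": \"flow\",\n            \"prefect.resource.name\": \"my-flow\",\n        }\n    ],\n)\n\n# Resolves the label on the first related resource with the flow role\nprint(event.find_resource_label(\"related:flow:prefect.resource.name\"))\n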
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.EventTrigger","title":"EventTrigger","text":"

    Bases: ResourceTrigger

    A trigger that fires based on the presence or absence of events within a given period of time.

    Source code in prefect/events/schemas/automations.py
    class EventTrigger(ResourceTrigger):\n    \"\"\"\n    A trigger that fires based on the presence or absence of events within a given\n    period of time.\n    \"\"\"\n\n    type: Literal[\"event\"] = \"event\"\n\n    after: Set[str] = Field(\n        default_factory=set,\n        description=(\n            \"The event(s) which must first been seen to fire this trigger.  If \"\n            \"empty, then fire this trigger immediately.  Events may include \"\n            \"trailing wildcards, like `prefect.flow-run.*`\"\n        ),\n    )\n    expect: Set[str] = Field(\n        default_factory=set,\n        description=(\n            \"The event(s) this trigger is expecting to see.  If empty, this \"\n            \"trigger will match any event.  Events may include trailing wildcards, \"\n            \"like `prefect.flow-run.*`\"\n        ),\n    )\n\n    for_each: Set[str] = Field(\n        default_factory=set,\n        description=(\n            \"Evaluate the trigger separately for each distinct value of these labels \"\n            \"on the resource.  By default, labels refer to the primary resource of the \"\n            \"triggering event.  You may also refer to labels from related \"\n            \"resources by specifying `related:<role>:<label>`.  This will use the \"\n            \"value of that label for the first related resource in that role.  For \"\n            'example, `\"for_each\": [\"related:flow:prefect.resource.id\"]` would '\n            \"evaluate the trigger for each flow.\"\n        ),\n    )\n    posture: Literal[Posture.Reactive, Posture.Proactive] = Field(  # type: ignore[valid-type]\n        Posture.Reactive,\n        description=(\n            \"The posture of this trigger, either Reactive or Proactive.  Reactive \"\n            \"triggers respond to the _presence_ of the expected events, while \"\n            \"Proactive triggers respond to the _absence_ of those expected events.\"\n        ),\n    )\n    threshold: int = Field(\n        1,\n        description=(\n            \"The number of events required for this trigger to fire (for \"\n            \"Reactive triggers), or the number of events expected (for Proactive \"\n            \"triggers)\"\n        ),\n    )\n    within: timedelta = Field(\n        timedelta(0),\n        minimum=0.0,\n        exclusiveMinimum=False,\n        description=(\n            \"The time period over which the events must occur.  
For Reactive triggers, \"\n            \"this may be as low as 0 seconds, but must be at least 10 seconds for \"\n            \"Proactive triggers\"\n        ),\n    )\n\n    @validator(\"within\")\n    def enforce_minimum_within(\n        cls, value: timedelta, values, config, field: ModelField\n    ):\n        return validate_trigger_within(value, field)\n\n    @root_validator(skip_on_failure=True)\n    def enforce_minimum_within_for_proactive_triggers(cls, values: Dict[str, Any]):\n        posture: Optional[Posture] = values.get(\"posture\")\n        within: Optional[timedelta] = values.get(\"within\")\n\n        if posture == Posture.Proactive:\n            if not within or within == timedelta(0):\n                values[\"within\"] = timedelta(seconds=10.0)\n            elif within < timedelta(seconds=10.0):\n                raise ValueError(\n                    \"The minimum within for Proactive triggers is 10 seconds\"\n                )\n\n        return values\n\n    def describe_for_cli(self, indent: int = 0) -> str:\n        \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n        if self.posture == Posture.Reactive:\n            return textwrap.indent(\n                \"\\n\".join(\n                    [\n                        f\"Reactive: expecting {self.threshold} of {self.expect}\",\n                    ],\n                ),\n                prefix=\"  \" * indent,\n            )\n        else:\n            return textwrap.indent(\n                \"\\n\".join(\n                    [\n                        f\"Proactive: expecting {self.threshold} {self.expect} event \"\n                        f\"within {self.within}\",\n                    ],\n                ),\n                prefix=\"  \" * indent,\n            )\n
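    A hedged example of a reactive event trigger that fires on a single flow run failure event, assuming match accepts a plain label dictionary as the default above does.

    from prefect.events import EventTrigger\n\ntrigger = EventTrigger(\n    expect={\"prefect.flow-run.Failed\"},\n    match={\"prefect.resource.id\": \"prefect.flow-run.*\"},\n    threshold=1,\n)\n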
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.EventTrigger.describe_for_cli","title":"describe_for_cli","text":"

    Return a human-readable description of this trigger for the CLI

    Source code in prefect/events/schemas/automations.py
    def describe_for_cli(self, indent: int = 0) -> str:\n    \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n    if self.posture == Posture.Reactive:\n        return textwrap.indent(\n            \"\\n\".join(\n                [\n                    f\"Reactive: expecting {self.threshold} of {self.expect}\",\n                ],\n            ),\n            prefix=\"  \" * indent,\n        )\n    else:\n        return textwrap.indent(\n            \"\\n\".join(\n                [\n                    f\"Proactive: expecting {self.threshold} {self.expect} event \"\n                    f\"within {self.within}\",\n                ],\n            ),\n            prefix=\"  \" * indent,\n        )\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.MetricTrigger","title":"MetricTrigger","text":"

    Bases: ResourceTrigger

    A trigger that fires based on the results of a metric query.

    Source code in prefect/events/schemas/automations.py
    class MetricTrigger(ResourceTrigger):\n    \"\"\"\n    A trigger that fires based on the results of a metric query.\n    \"\"\"\n\n    type: Literal[\"metric\"] = \"metric\"\n\n    posture: Literal[Posture.Metric] = Field(  # type: ignore[valid-type]\n        Posture.Metric,\n        description=\"Periodically evaluate the configured metric query.\",\n    )\n\n    metric: MetricTriggerQuery = Field(\n        ...,\n        description=\"The metric query to evaluate for this trigger. \",\n    )\n\n    def describe_for_cli(self, indent: int = 0) -> str:\n        \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n        m = self.metric\n        return textwrap.indent(\n            \"\\n\".join(\n                [\n                    f\"Metric: {m.name.value} {m.operator.value} {m.threshold} for {m.range}\",\n                ]\n            ),\n            prefix=\"  \" * indent,\n        )\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.MetricTrigger.describe_for_cli","title":"describe_for_cli","text":"

    Return a human-readable description of this trigger for the CLI

    Source code in prefect/events/schemas/automations.py
    def describe_for_cli(self, indent: int = 0) -> str:\n    \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n    m = self.metric\n    return textwrap.indent(\n        \"\\n\".join(\n            [\n                f\"Metric: {m.name.value} {m.operator.value} {m.threshold} for {m.range}\",\n            ]\n        ),\n        prefix=\"  \" * indent,\n    )\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.MetricTriggerQuery","title":"MetricTriggerQuery","text":"

    Bases: PrefectBaseModel

    Defines a subset of the Trigger subclass, specific to Metric automations, that specifies the query configuration and breaching conditions for the Automation

    Source code in prefect/events/schemas/automations.py
    class MetricTriggerQuery(PrefectBaseModel):\n    \"\"\"Defines a subset of the Trigger subclass, which is specific\n    to Metric automations, that specify the query configurations\n    and breaching conditions for the Automation\"\"\"\n\n    name: PrefectMetric = Field(\n        ...,\n        description=\"The name of the metric to query.\",\n    )\n    threshold: float = Field(\n        ...,\n        description=(\n            \"The threshold value against which we'll compare \" \"the query result.\"\n        ),\n    )\n    operator: MetricTriggerOperator = Field(\n        ...,\n        description=(\n            \"The comparative operator (LT / LTE / GT / GTE) used to compare \"\n            \"the query result against the threshold value.\"\n        ),\n    )\n    range: timedelta = Field(\n        timedelta(seconds=300),  # defaults to 5 minutes\n        minimum=300.0,\n        exclusiveMinimum=False,\n        description=(\n            \"The lookback duration (seconds) for a metric query. This duration is \"\n            \"used to determine the time range over which the query will be executed. \"\n            \"The minimum value is 300 seconds (5 minutes).\"\n        ),\n    )\n    firing_for: timedelta = Field(\n        timedelta(seconds=300),  # defaults to 5 minutes\n        minimum=300.0,\n        exclusiveMinimum=False,\n        description=(\n            \"The duration (seconds) for which the metric query must breach \"\n            \"or resolve continuously before the state is updated and the \"\n            \"automation is triggered. \"\n            \"The minimum value is 300 seconds (5 minutes).\"\n        ),\n    )\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.PauseAutomation","title":"PauseAutomation","text":"

    Bases: AutomationAction

    Pauses an Automation

    Source code in prefect/events/actions.py
    class PauseAutomation(AutomationAction):\n    \"\"\"Pauses a Work Queue\"\"\"\n\n    type: Literal[\"pause-automation\"] = \"pause-automation\"\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.PauseDeployment","title":"PauseDeployment","text":"

    Bases: DeploymentAction

    Pauses the given Deployment

    Source code in prefect/events/actions.py
    class PauseDeployment(DeploymentAction):\n    \"\"\"Pauses the given Deployment\"\"\"\n\n    type: Literal[\"pause-deployment\"] = \"pause-deployment\"\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.PauseWorkPool","title":"PauseWorkPool","text":"

    Bases: WorkPoolAction

    Pauses a Work Pool

    Source code in prefect/events/actions.py
    class PauseWorkPool(WorkPoolAction):\n    \"\"\"Pauses a Work Pool\"\"\"\n\n    type: Literal[\"pause-work-pool\"] = \"pause-work-pool\"\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.PauseWorkQueue","title":"PauseWorkQueue","text":"

    Bases: WorkQueueAction

    Pauses a Work Queue

    Source code in prefect/events/actions.py
    class PauseWorkQueue(WorkQueueAction):\n    \"\"\"Pauses a Work Queue\"\"\"\n\n    type: Literal[\"pause-work-queue\"] = \"pause-work-queue\"\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.ReceivedEvent","title":"ReceivedEvent","text":"

    Bases: Event

    The server-side view of an event that has happened to a Resource after it has been received by the server

    Source code in prefect/events/schemas/events.py
    class ReceivedEvent(Event):\n    \"\"\"The server-side view of an event that has happened to a Resource after it has\n    been received by the server\"\"\"\n\n    class Config:\n        orm_mode = True\n\n    received: DateTimeTZ = Field(\n        ...,\n        description=\"When the event was received by Prefect Cloud\",\n    )\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.RelatedResource","title":"RelatedResource","text":"

    Bases: Resource

    A Resource with a specific role in an Event

    Source code in prefect/events/schemas/events.py
    class RelatedResource(Resource):\n    \"\"\"A Resource with a specific role in an Event\"\"\"\n\n    @root_validator(pre=True)\n    def requires_resource_role(cls, values: Dict[str, Any]):\n        labels = values.get(\"__root__\")\n        if not isinstance(labels, dict):\n            return values\n\n        labels = cast(Dict[str, str], labels)\n\n        if \"prefect.resource.role\" not in labels:\n            raise ValueError(\n                \"Related Resources must include the prefect.resource.role label\"\n            )\n        if not labels[\"prefect.resource.role\"]:\n            raise ValueError(\"The prefect.resource.role label must be non-empty\")\n\n        return values\n\n    @property\n    def role(self) -> str:\n        return self[\"prefect.resource.role\"]\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.Resource","title":"Resource","text":"

    Bases: Labelled

    An observable business object of interest to the user

    Source code in prefect/events/schemas/events.py
    class Resource(Labelled):\n    \"\"\"An observable business object of interest to the user\"\"\"\n\n    @root_validator(pre=True)\n    def enforce_maximum_labels(cls, values: Dict[str, Any]):\n        labels = values.get(\"__root__\")\n        if not isinstance(labels, dict):\n            return values\n\n        if len(labels) > PREFECT_EVENTS_MAXIMUM_LABELS_PER_RESOURCE.value():\n            raise ValueError(\n                \"The maximum number of labels per resource \"\n                f\"is {PREFECT_EVENTS_MAXIMUM_LABELS_PER_RESOURCE.value()}\"\n            )\n\n        return values\n\n    @root_validator(pre=True)\n    def requires_resource_id(cls, values: Dict[str, Any]):\n        labels = values.get(\"__root__\")\n        if not isinstance(labels, dict):\n            return values\n\n        labels = cast(Dict[str, str], labels)\n\n        if \"prefect.resource.id\" not in labels:\n            raise ValueError(\"Resources must include the prefect.resource.id label\")\n        if not labels[\"prefect.resource.id\"]:\n            raise ValueError(\"The prefect.resource.id label must be non-empty\")\n\n        return values\n\n    @property\n    def id(self) -> str:\n        return self[\"prefect.resource.id\"]\n\n    @property\n    def name(self) -> Optional[str]:\n        return self.get(\"prefect.resource.name\")\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.ResourceSpecification","title":"ResourceSpecification","text":"

    Bases: PrefectBaseModel

    A specification that may match zero, one, or many resources, used to target or select a set of resources in a query or automation. A resource must match at least one value of all of the provided labels

    Source code in prefect/events/schemas/events.py
    class ResourceSpecification(PrefectBaseModel):\n    \"\"\"A specification that may match zero, one, or many resources, used to target or\n    select a set of resources in a query or automation.  A resource must match at least\n    one value of all of the provided labels\"\"\"\n\n    __root__: Dict[str, Union[str, List[str]]]\n\n    def matches_every_resource(self) -> bool:\n        return len(self) == 0\n\n    def matches_every_resource_of_kind(self, prefix: str) -> bool:\n        if self.matches_every_resource():\n            return True\n\n        if len(self.__root__) == 1:\n            if resource_id := self.__root__.get(\"prefect.resource.id\"):\n                values = [resource_id] if isinstance(resource_id, str) else resource_id\n                return any(value == f\"{prefix}.*\" for value in values)\n\n        return False\n\n    def includes(self, candidates: Iterable[Resource]) -> bool:\n        if self.matches_every_resource():\n            return True\n\n        for candidate in candidates:\n            if self.matches(candidate):\n                return True\n\n        return False\n\n    def matches(self, resource: Resource) -> bool:\n        for label, expected in self.items():\n            value = resource.get(label)\n            if not any(matches(candidate, value) for candidate in expected):\n                return False\n        return True\n\n    def items(self) -> Iterable[Tuple[str, List[str]]]:\n        return [\n            (label, [value] if isinstance(value, str) else value)\n            for label, value in self.__root__.items()\n        ]\n\n    def __contains__(self, key: str) -> bool:\n        return self.__root__.__contains__(key)\n\n    def __getitem__(self, key: str) -> List[str]:\n        value = self.__root__[key]\n        if not value:\n            return []\n        if not isinstance(value, list):\n            value = [value]\n        return value\n\n    def pop(\n        self, key: str, default: Optional[Union[str, List[str]]] = None\n    ) -> Optional[List[str]]:\n        value = self.__root__.pop(key, default)\n        if not value:\n            return []\n        if not isinstance(value, list):\n            value = [value]\n        return value\n\n    def get(\n        self, key: str, default: Optional[Union[str, List[str]]] = None\n    ) -> Optional[List[str]]:\n        value = self.__root__.get(key, default)\n        if not value:\n            return []\n        if not isinstance(value, list):\n            value = [value]\n        return value\n\n    def __len__(self) -> int:\n        return len(self.__root__)\n\n    def deepcopy(self) -> \"ResourceSpecification\":\n        return ResourceSpecification.parse_obj(copy.deepcopy(self.__root__))\n
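    A hedged example of checking a resource against a specification with the helpers above; whether the trailing wildcard matches here is an assumption based on the wildcard convention described for triggers.

    from prefect.events import Resource, ResourceSpecification\n\nspec = ResourceSpecification.parse_obj({\"prefect.resource.id\": \"prefect.flow-run.*\"})\nresource = Resource.parse_obj({\"prefect.resource.id\": \"prefect.flow-run.abc123\"})\n\nprint(spec.matches(resource))  # assumed True via trailing-wildcard matching\nprint(spec.matches_every_resource())  # False: the spec has one label\n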
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.ResourceTrigger","title":"ResourceTrigger","text":"

    Bases: Trigger, ABC

    Base class for triggers that may filter by the labels of resources.

    Source code in prefect/events/schemas/automations.py
    class ResourceTrigger(Trigger, abc.ABC):\n    \"\"\"\n    Base class for triggers that may filter by the labels of resources.\n    \"\"\"\n\n    type: str\n\n    match: ResourceSpecification = Field(\n        default_factory=lambda: ResourceSpecification.parse_obj({}),\n        description=\"Labels for resources which this trigger will match.\",\n    )\n    match_related: ResourceSpecification = Field(\n        default_factory=lambda: ResourceSpecification.parse_obj({}),\n        description=\"Labels for related resources which this trigger will match.\",\n    )\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.ResumeAutomation","title":"ResumeAutomation","text":"

    Bases: AutomationAction

    Resumes an Automation

    Source code in prefect/events/actions.py
    class ResumeAutomation(AutomationAction):\n    \"\"\"Resumes a Work Queue\"\"\"\n\n    type: Literal[\"resume-automation\"] = \"resume-automation\"\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.ResumeDeployment","title":"ResumeDeployment","text":"

    Bases: DeploymentAction

    Resumes the given Deployment

    Source code in prefect/events/actions.py
    class ResumeDeployment(DeploymentAction):\n    \"\"\"Resumes the given Deployment\"\"\"\n\n    type: Literal[\"resume-deployment\"] = \"resume-deployment\"\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.ResumeWorkPool","title":"ResumeWorkPool","text":"

    Bases: WorkPoolAction

    Resumes a Work Pool

    Source code in prefect/events/actions.py
    class ResumeWorkPool(WorkPoolAction):\n    \"\"\"Resumes a Work Pool\"\"\"\n\n    type: Literal[\"resume-work-pool\"] = \"resume-work-pool\"\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.ResumeWorkQueue","title":"ResumeWorkQueue","text":"

    Bases: WorkQueueAction

    Resumes a Work Queue

    Source code in prefect/events/actions.py
    class ResumeWorkQueue(WorkQueueAction):\n    \"\"\"Resumes a Work Queue\"\"\"\n\n    type: Literal[\"resume-work-queue\"] = \"resume-work-queue\"\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.RunDeployment","title":"RunDeployment","text":"

    Bases: DeploymentAction

    Runs the given deployment with the given parameters

    Source code in prefect/events/actions.py
    class RunDeployment(DeploymentAction):\n    \"\"\"Runs the given deployment with the given parameters\"\"\"\n\n    type: Literal[\"run-deployment\"] = \"run-deployment\"\n\n    parameters: Optional[Dict[str, Any]] = Field(\n        None,\n        description=(\n            \"The parameters to pass to the deployment, or None to use the \"\n            \"deployment's default parameters\"\n        ),\n    )\n    job_variables: Optional[Dict[str, Any]] = Field(\n        None,\n        description=(\n            \"The job variables to pass to the created flow run, or None \"\n            \"to use the deployment's default job variables\"\n        ),\n    )\n
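A minimal construction sketch; the deployment id is a placeholder and the parameter name is illustrative:

from uuid import uuid4

from prefect.events import RunDeployment

action = RunDeployment(
    source="selected",
    deployment_id=uuid4(),            # placeholder; use a real deployment id
    parameters={"name": "Marvin"},    # or None to use the deployment's default parameters
    job_variables=None,               # None uses the deployment's default job variables
)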
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.SendNotification","title":"SendNotification","text":"

    Bases: Action

    Send a notification when an Automation is triggered

    Source code in prefect/events/actions.py
    class SendNotification(Action):\n    \"\"\"Send a notification when an Automation is triggered\"\"\"\n\n    type: Literal[\"send-notification\"] = \"send-notification\"\n    block_document_id: UUID = Field(\n        description=\"The identifier of the notification block to use\"\n    )\n    subject: str = Field(\"Prefect automated notification\")\n    body: str = Field(description=\"The text of the notification to send\")\n
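A minimal construction sketch; the block document id is a placeholder for an existing notification block:

from uuid import UUID

from prefect.events import SendNotification

action = SendNotification(
    block_document_id=UUID("00000000-0000-0000-0000-000000000000"),  # placeholder id
    subject="Flow run failed",
    body="A flow run entered a Failed state.",
)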
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.SequenceTrigger","title":"SequenceTrigger","text":"

    Bases: CompositeTrigger

    A composite trigger that requires some number of triggers to have fired within the given time period in a specific order

    Source code in prefect/events/schemas/automations.py
    class SequenceTrigger(CompositeTrigger):\n    \"\"\"A composite trigger that requires some number of triggers to have fired\n    within the given time period in a specific order\"\"\"\n\n    type: Literal[\"sequence\"] = \"sequence\"\n\n    def describe_for_cli(self, indent: int = 0) -> str:\n        \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n        return textwrap.indent(\n            \"\\n\".join(\n                [\n                    \"In this order:\",\n                    \"\\n\".join(\n                        [\n                            trigger.describe_for_cli(indent=indent + 1)\n                            for trigger in self.triggers\n                        ]\n                    ),\n                ]\n            ),\n            prefix=\"  \" * indent,\n        )\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.SequenceTrigger.describe_for_cli","title":"describe_for_cli","text":"

    Return a human-readable description of this trigger for the CLI

    Source code in prefect/events/schemas/automations.py
    def describe_for_cli(self, indent: int = 0) -> str:\n    \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n    return textwrap.indent(\n        \"\\n\".join(\n            [\n                \"In this order:\",\n                \"\\n\".join(\n                    [\n                        trigger.describe_for_cli(indent=indent + 1)\n                        for trigger in self.triggers\n                    ]\n                ),\n            ]\n        ),\n        prefix=\"  \" * indent,\n    )\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.SuspendFlowRun","title":"SuspendFlowRun","text":"

    Bases: Action

    Suspends a flow run associated with the trigger

    Source code in prefect/events/actions.py
    class SuspendFlowRun(Action):\n    \"\"\"Suspends a flow run associated with the trigger\"\"\"\n\n    type: Literal[\"suspend-flow-run\"] = \"suspend-flow-run\"\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.Trigger","title":"Trigger","text":"

    Bases: PrefectBaseModel, ABC

    Base class describing a set of criteria that must be satisfied in order to trigger an automation.

    Source code in prefect/events/schemas/automations.py
    class Trigger(PrefectBaseModel, abc.ABC, extra=\"ignore\"):  # type: ignore[call-arg]\n    \"\"\"\n    Base class describing a set of criteria that must be satisfied in order to trigger\n    an automation.\n    \"\"\"\n\n    type: str\n\n    @abc.abstractmethod\n    def describe_for_cli(self, indent: int = 0) -> str:\n        \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n\n    # The following allows the regular Trigger class to be used when serving or\n    # deploying flows, analogous to how the Deployment*Trigger classes work\n\n    _deployment_id: Optional[UUID] = PrivateAttr(default=None)\n\n    def set_deployment_id(self, deployment_id: UUID):\n        self._deployment_id = deployment_id\n\n    def owner_resource(self) -> Optional[str]:\n        return f\"prefect.deployment.{self._deployment_id}\"\n\n    def actions(self) -> List[ActionTypes]:\n        assert self._deployment_id\n        return [\n            RunDeployment(\n                source=\"selected\",\n                deployment_id=self._deployment_id,\n                parameters=getattr(self, \"parameters\", None),\n                job_variables=getattr(self, \"job_variables\", None),\n            )\n        ]\n\n    def as_automation(self) -> \"AutomationCore\":\n        assert self._deployment_id\n\n        trigger: TriggerTypes = cast(TriggerTypes, self)\n\n        # This is one of the Deployment*Trigger classes, so translate it over to a\n        # plain Trigger\n        if hasattr(self, \"trigger_type\"):\n            trigger = self.trigger_type(**self.dict())\n\n        return AutomationCore(\n            name=(\n                getattr(self, \"name\", None)\n                or f\"Automation for deployment {self._deployment_id}\"\n            ),\n            description=\"\",\n            enabled=getattr(self, \"enabled\", True),\n            trigger=trigger,\n            actions=self.actions(),\n            owner_resource=self.owner_resource(),\n        )\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.Trigger.describe_for_cli","title":"describe_for_cli abstractmethod","text":"

    Return a human-readable description of this trigger for the CLI

    Source code in prefect/events/schemas/automations.py
    @abc.abstractmethod\ndef describe_for_cli(self, indent: int = 0) -> str:\n    \"\"\"Return a human-readable description of this trigger for the CLI\"\"\"\n
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/events/#prefect.events.emit_event","title":"emit_event","text":"

    Send an event to Prefect Cloud.

    Parameters:

    Name Type Description Default event str

    The name of the event that happened.

    required resource Dict[str, str]

    The primary Resource this event concerns.

    required occurred Optional[DateTimeTZ]

    When the event happened from the sender's perspective. Defaults to the current datetime.

    None related Optional[Union[List[Dict[str, str]], List[RelatedResource]]]

    A list of additional Resources involved in this event.

    None payload Optional[Dict[str, Any]]

    An open-ended set of data describing what happened.

    None id Optional[UUID]

    The sender-provided identifier for this event. Defaults to a random UUID.

    None follows Optional[Event]

    The event that preceded this one. If the preceding event happened more than 5 minutes prior to this event the follows relationship will not be set.

    None

    Returns:

Type Description Optional[Event]

The event that was emitted if the worker is using a client that emits events, otherwise None

    Source code in prefect/events/utilities.py
    def emit_event(\n    event: str,\n    resource: Dict[str, str],\n    occurred: Optional[DateTimeTZ] = None,\n    related: Optional[Union[List[Dict[str, str]], List[RelatedResource]]] = None,\n    payload: Optional[Dict[str, Any]] = None,\n    id: Optional[UUID] = None,\n    follows: Optional[Event] = None,\n) -> Optional[Event]:\n    \"\"\"\n    Send an event to Prefect Cloud.\n\n    Args:\n        event: The name of the event that happened.\n        resource: The primary Resource this event concerns.\n        occurred: When the event happened from the sender's perspective.\n                  Defaults to the current datetime.\n        related: A list of additional Resources involved in this event.\n        payload: An open-ended set of data describing what happened.\n        id: The sender-provided identifier for this event. Defaults to a random\n            UUID.\n        follows: The event that preceded this one. If the preceding event\n            happened more than 5 minutes prior to this event the follows\n            relationship will not be set.\n\n    Returns:\n        The event that was emitted if worker is using a client that emit\n        events, otherwise None\n    \"\"\"\n    if not should_emit_events():\n        return None\n\n    operational_clients = [\n        AssertingEventsClient,\n        PrefectCloudEventsClient,\n        PrefectEventsClient,\n        PrefectEphemeralEventsClient,\n    ]\n    worker_instance = EventsWorker.instance()\n\n    if worker_instance.client_type not in operational_clients:\n        return None\n\n    event_kwargs: Dict[str, Any] = {\n        \"event\": event,\n        \"resource\": resource,\n    }\n\n    if occurred is None:\n        occurred = pendulum.now(\"UTC\")\n    event_kwargs[\"occurred\"] = occurred\n\n    if related is not None:\n        event_kwargs[\"related\"] = related\n\n    if payload is not None:\n        event_kwargs[\"payload\"] = payload\n\n    if id is not None:\n        event_kwargs[\"id\"] = id\n\n    if follows is not None:\n        if -TIGHT_TIMING < (occurred - follows.occurred) < TIGHT_TIMING:\n            event_kwargs[\"follows\"] = follows.id\n\n    event_obj = Event(**event_kwargs)\n    worker_instance.send(event_obj)\n\n    return event_obj\n
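A minimal usage sketch; the event name, resource id, and payload are illustrative:

from prefect.events import emit_event

event = emit_event(
    event="user.signed-up",                            # name of what happened
    resource={"prefect.resource.id": "user.marvin"},   # the primary resource this event concerns
    payload={"plan": "free"},                          # optional, open-ended data
)
# Returns the emitted Event, or None if the active events client does not emit events.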
    ","tags":["Python API","events"]},{"location":"api-ref/prefect/exceptions/","title":"prefect.exceptions","text":"","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions","title":"prefect.exceptions","text":"

    Prefect-specific exceptions.

    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.Abort","title":"Abort","text":"

    Bases: PrefectSignal

    Raised when the API sends an 'ABORT' instruction during state proposal.

    Indicates that the run should exit immediately.

    Source code in prefect/exceptions.py
    class Abort(PrefectSignal):\n    \"\"\"\n    Raised when the API sends an 'ABORT' instruction during state proposal.\n\n    Indicates that the run should exit immediately.\n    \"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.BlockMissingCapabilities","title":"BlockMissingCapabilities","text":"

    Bases: PrefectException

    Raised when a block does not have required capabilities for a given operation.

    Source code in prefect/exceptions.py
    class BlockMissingCapabilities(PrefectException):\n    \"\"\"\n    Raised when a block does not have required capabilities for a given operation.\n    \"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.CancelledRun","title":"CancelledRun","text":"

    Bases: PrefectException

    Raised when the result from a cancelled run is retrieved and an exception is not attached.

    This occurs when a string is attached to the state instead of an exception or if the state's data is null.

    Source code in prefect/exceptions.py
    class CancelledRun(PrefectException):\n    \"\"\"\n    Raised when the result from a cancelled run is retrieved and an exception\n    is not attached.\n\n    This occurs when a string is attached to the state instead of an exception\n    or if the state's data is null.\n    \"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.CrashedRun","title":"CrashedRun","text":"

    Bases: PrefectException

    Raised when the result from a crashed run is retrieved.

    This occurs when a string is attached to the state instead of an exception or if the state's data is null.

    Source code in prefect/exceptions.py
    class CrashedRun(PrefectException):\n    \"\"\"\n    Raised when the result from a crashed run is retrieved.\n\n    This occurs when a string is attached to the state instead of an exception or if\n    the state's data is null.\n    \"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ExternalSignal","title":"ExternalSignal","text":"

    Bases: BaseException

    Base type for external signal-like exceptions that should never be caught by users.

    Source code in prefect/exceptions.py
    class ExternalSignal(BaseException):\n    \"\"\"\n    Base type for external signal-like exceptions that should never be caught by users.\n    \"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.FailedRun","title":"FailedRun","text":"

    Bases: PrefectException

    Raised when the result from a failed run is retrieved and an exception is not attached.

    This occurs when a string is attached to the state instead of an exception or if the state's data is null.

    Source code in prefect/exceptions.py
    class FailedRun(PrefectException):\n    \"\"\"\n    Raised when the result from a failed run is retrieved and an exception is not\n    attached.\n\n    This occurs when a string is attached to the state instead of an exception or if\n    the state's data is null.\n    \"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.FlowPauseTimeout","title":"FlowPauseTimeout","text":"

    Bases: PrefectException

    Raised when a flow pause times out

    Source code in prefect/exceptions.py
    class FlowPauseTimeout(PrefectException):\n    \"\"\"Raised when a flow pause times out\"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.FlowRunWaitTimeout","title":"FlowRunWaitTimeout","text":"

    Bases: PrefectException

    Raised when a flow run takes longer than a given timeout

    Source code in prefect/exceptions.py
    class FlowRunWaitTimeout(PrefectException):\n    \"\"\"Raised when a flow run takes longer than a given timeout\"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.FlowScriptError","title":"FlowScriptError","text":"

    Bases: PrefectException

    Raised when a script errors during evaluation while attempting to load a flow.

    Source code in prefect/exceptions.py
    class FlowScriptError(PrefectException):\n    \"\"\"\n    Raised when a script errors during evaluation while attempting to load a flow.\n    \"\"\"\n\n    def __init__(\n        self,\n        user_exc: Exception,\n        script_path: str,\n    ) -> None:\n        message = f\"Flow script at {script_path!r} encountered an exception\"\n        super().__init__(message)\n\n        self.user_exc = user_exc\n\n    def rich_user_traceback(self, **kwargs):\n        trace = Traceback.extract(\n            type(self.user_exc),\n            self.user_exc,\n            self.user_exc.__traceback__.tb_next.tb_next.tb_next.tb_next,\n        )\n        return Traceback(trace, **kwargs)\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InfrastructureError","title":"InfrastructureError","text":"

    Bases: PrefectException

    A base class for exceptions related to infrastructure blocks

    Source code in prefect/exceptions.py
    class InfrastructureError(PrefectException):\n    \"\"\"\n    A base class for exceptions related to infrastructure blocks\n    \"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InfrastructureNotAvailable","title":"InfrastructureNotAvailable","text":"

    Bases: PrefectException

    Raised when infrastructure is not accessible from the current machine. For example, if a process was spawned on another machine it cannot be managed.

    Source code in prefect/exceptions.py
    class InfrastructureNotAvailable(PrefectException):\n    \"\"\"\n    Raised when infrastructure is not accessible from the current machine. For example,\n    if a process was spawned on another machine it cannot be managed.\n    \"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InfrastructureNotFound","title":"InfrastructureNotFound","text":"

    Bases: PrefectException

    Raised when infrastructure is missing, likely because it has exited or been deleted.

    Source code in prefect/exceptions.py
    class InfrastructureNotFound(PrefectException):\n    \"\"\"\n    Raised when infrastructure is missing, likely because it has exited or been\n    deleted.\n    \"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InvalidNameError","title":"InvalidNameError","text":"

    Bases: PrefectException, ValueError

    Raised when a name contains characters that are not permitted.

    Source code in prefect/exceptions.py
    class InvalidNameError(PrefectException, ValueError):\n    \"\"\"\n    Raised when a name contains characters that are not permitted.\n    \"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InvalidRepositoryURLError","title":"InvalidRepositoryURLError","text":"

    Bases: PrefectException

    Raised when an incorrect URL is provided to a GitHub filesystem block.

    Source code in prefect/exceptions.py
    class InvalidRepositoryURLError(PrefectException):\n    \"\"\"Raised when an incorrect URL is provided to a GitHub filesystem block.\"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MappingLengthMismatch","title":"MappingLengthMismatch","text":"

    Bases: PrefectException

    Raised when attempting to call Task.map with arguments of different lengths.

    Source code in prefect/exceptions.py
    class MappingLengthMismatch(PrefectException):\n    \"\"\"\n    Raised when attempting to call Task.map with arguments of different lengths.\n    \"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MappingMissingIterable","title":"MappingMissingIterable","text":"

    Bases: PrefectException

    Raised when attempting to call Task.map with all static arguments

    Source code in prefect/exceptions.py
    class MappingMissingIterable(PrefectException):\n    \"\"\"\n    Raised when attempting to call Task.map with all static arguments\n    \"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingContextError","title":"MissingContextError","text":"

    Bases: PrefectException, RuntimeError

    Raised when a method is called that requires a task or flow run context to be active but one cannot be found.

    Source code in prefect/exceptions.py
    class MissingContextError(PrefectException, RuntimeError):\n    \"\"\"\n    Raised when a method is called that requires a task or flow run context to be\n    active but one cannot be found.\n    \"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingFlowError","title":"MissingFlowError","text":"

    Bases: PrefectException

    Raised when a given flow name is not found in the expected script.

    Source code in prefect/exceptions.py
    class MissingFlowError(PrefectException):\n    \"\"\"\n    Raised when a given flow name is not found in the expected script.\n    \"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingProfileError","title":"MissingProfileError","text":"

    Bases: PrefectException, ValueError

    Raised when a profile name does not exist.

    Source code in prefect/exceptions.py
    class MissingProfileError(PrefectException, ValueError):\n    \"\"\"\n    Raised when a profile name does not exist.\n    \"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingResult","title":"MissingResult","text":"

    Bases: PrefectException

    Raised when a result is missing from a state; often when result persistence is disabled and the state is retrieved from the API.

    Source code in prefect/exceptions.py
    class MissingResult(PrefectException):\n    \"\"\"\n    Raised when a result is missing from a state; often when result persistence is\n    disabled and the state is retrieved from the API.\n    \"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.NotPausedError","title":"NotPausedError","text":"

    Bases: PrefectException

    Raised when attempting to unpause a run that isn't paused.

    Source code in prefect/exceptions.py
    class NotPausedError(PrefectException):\n    \"\"\"Raised when attempting to unpause a run that isn't paused.\"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ObjectAlreadyExists","title":"ObjectAlreadyExists","text":"

    Bases: PrefectException

    Raised when the client receives a 409 (conflict) from the API.

    Source code in prefect/exceptions.py
    class ObjectAlreadyExists(PrefectException):\n    \"\"\"\n    Raised when the client receives a 409 (conflict) from the API.\n    \"\"\"\n\n    def __init__(self, http_exc: Exception, *args, **kwargs):\n        self.http_exc = http_exc\n        super().__init__(*args, **kwargs)\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ObjectNotFound","title":"ObjectNotFound","text":"

    Bases: PrefectException

    Raised when the client receives a 404 (not found) from the API.

    Source code in prefect/exceptions.py
    class ObjectNotFound(PrefectException):\n    \"\"\"\n    Raised when the client receives a 404 (not found) from the API.\n    \"\"\"\n\n    def __init__(self, http_exc: Exception, *args, **kwargs):\n        self.http_exc = http_exc\n        super().__init__(*args, **kwargs)\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ParameterBindError","title":"ParameterBindError","text":"

    Bases: TypeError, PrefectException

    Raised when args and kwargs cannot be converted to parameters.

    Source code in prefect/exceptions.py
    class ParameterBindError(TypeError, PrefectException):\n    \"\"\"\n    Raised when args and kwargs cannot be converted to parameters.\n    \"\"\"\n\n    def __init__(self, msg: str):\n        super().__init__(msg)\n\n    @classmethod\n    def from_bind_failure(\n        cls, fn: Callable, exc: TypeError, call_args: List, call_kwargs: Dict\n    ) -> Self:\n        fn_signature = str(inspect.signature(fn)).strip(\"()\")\n\n        base = f\"Error binding parameters for function '{fn.__name__}': {exc}\"\n        signature = f\"Function '{fn.__name__}' has signature '{fn_signature}'\"\n        received = f\"received args: {call_args} and kwargs: {list(call_kwargs.keys())}\"\n        msg = f\"{base}.\\n{signature} but {received}.\"\n        return cls(msg)\n
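A sketch of one way this error might be raised, using a hypothetical user-defined function greet; the misspelled keyword is deliberate to force a bind failure:

import inspect

from prefect.exceptions import ParameterBindError


def greet(name: str) -> str:
    return f"Hello, {name}!"


call_args, call_kwargs = (), {"nmae": "Marvin"}  # misspelled keyword triggers a TypeError on bind
try:
    inspect.signature(greet).bind(*call_args, **call_kwargs)
except TypeError as exc:
    raise ParameterBindError.from_bind_failure(greet, exc, list(call_args), call_kwargs)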
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ParameterTypeError","title":"ParameterTypeError","text":"

    Bases: PrefectException

    Raised when a parameter does not pass Pydantic type validation.

    Source code in prefect/exceptions.py
    class ParameterTypeError(PrefectException):\n    \"\"\"\n    Raised when a parameter does not pass Pydantic type validation.\n    \"\"\"\n\n    def __init__(self, msg: str):\n        super().__init__(msg)\n\n    @classmethod\n    def from_validation_error(cls, exc: ValidationError) -> Self:\n        bad_params = [f'{\".\".join(err[\"loc\"])}: {err[\"msg\"]}' for err in exc.errors()]\n        msg = \"Flow run received invalid parameters:\\n - \" + \"\\n - \".join(bad_params)\n        return cls(msg)\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.Pause","title":"Pause","text":"

    Bases: PrefectSignal

    Raised when a flow run is PAUSED and needs to exit for resubmission.

    Source code in prefect/exceptions.py
    class Pause(PrefectSignal):\n    \"\"\"\n    Raised when a flow run is PAUSED and needs to exit for resubmission.\n    \"\"\"\n\n    def __init__(self, *args, state=None, **kwargs):\n        super().__init__(*args, **kwargs)\n        self.state = state\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PausedRun","title":"PausedRun","text":"

    Bases: PrefectException

    Raised when the result from a paused run is retrieved.

    Source code in prefect/exceptions.py
    class PausedRun(PrefectException):\n    \"\"\"\n    Raised when the result from a paused run is retrieved.\n    \"\"\"\n\n    def __init__(self, *args, state=None, **kwargs):\n        super().__init__(*args, **kwargs)\n        self.state = state\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectException","title":"PrefectException","text":"

    Bases: Exception

    Base exception type for Prefect errors.

    Source code in prefect/exceptions.py
    class PrefectException(Exception):\n    \"\"\"\n    Base exception type for Prefect errors.\n    \"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectHTTPStatusError","title":"PrefectHTTPStatusError","text":"

    Bases: HTTPStatusError

    Raised when client receives a Response that contains an HTTPStatusError.

    Used to include API error details in the error messages that the client provides users.

    Source code in prefect/exceptions.py
    class PrefectHTTPStatusError(HTTPStatusError):\n    \"\"\"\n    Raised when client receives a `Response` that contains an HTTPStatusError.\n\n    Used to include API error details in the error messages that the client provides users.\n    \"\"\"\n\n    @classmethod\n    def from_httpx_error(cls: Type[Self], httpx_error: HTTPStatusError) -> Self:\n        \"\"\"\n        Generate a `PrefectHTTPStatusError` from an `httpx.HTTPStatusError`.\n        \"\"\"\n        try:\n            details = httpx_error.response.json()\n        except Exception:\n            details = None\n\n        error_message, *more_info = str(httpx_error).split(\"\\n\")\n\n        if details:\n            message_components = [error_message, f\"Response: {details}\", *more_info]\n        else:\n            message_components = [error_message, *more_info]\n\n        new_message = \"\\n\".join(message_components)\n\n        return cls(\n            new_message, request=httpx_error.request, response=httpx_error.response\n        )\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectHTTPStatusError.from_httpx_error","title":"from_httpx_error classmethod","text":"

    Generate a PrefectHTTPStatusError from an httpx.HTTPStatusError.

    Source code in prefect/exceptions.py
    @classmethod\ndef from_httpx_error(cls: Type[Self], httpx_error: HTTPStatusError) -> Self:\n    \"\"\"\n    Generate a `PrefectHTTPStatusError` from an `httpx.HTTPStatusError`.\n    \"\"\"\n    try:\n        details = httpx_error.response.json()\n    except Exception:\n        details = None\n\n    error_message, *more_info = str(httpx_error).split(\"\\n\")\n\n    if details:\n        message_components = [error_message, f\"Response: {details}\", *more_info]\n    else:\n        message_components = [error_message, *more_info]\n\n    new_message = \"\\n\".join(message_components)\n\n    return cls(\n        new_message, request=httpx_error.request, response=httpx_error.response\n    )\n
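A minimal wrapping sketch, assuming a hypothetical fetch helper around an httpx.Client:

import httpx

from prefect.exceptions import PrefectHTTPStatusError


def fetch(client: httpx.Client, url: str) -> httpx.Response:
    response = client.get(url)
    try:
        response.raise_for_status()
    except httpx.HTTPStatusError as exc:
        # Re-raise with the API's JSON error details folded into the message.
        raise PrefectHTTPStatusError.from_httpx_error(exc) from exc
    return response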
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectSignal","title":"PrefectSignal","text":"

    Bases: BaseException

    Base type for signal-like exceptions that should never be caught by users.

    Source code in prefect/exceptions.py
    class PrefectSignal(BaseException):\n    \"\"\"\n    Base type for signal-like exceptions that should never be caught by users.\n    \"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ProtectedBlockError","title":"ProtectedBlockError","text":"

    Bases: PrefectException

    Raised when an operation is prevented due to block protection.

    Source code in prefect/exceptions.py
    class ProtectedBlockError(PrefectException):\n    \"\"\"\n    Raised when an operation is prevented due to block protection.\n    \"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ReservedArgumentError","title":"ReservedArgumentError","text":"

    Bases: PrefectException, TypeError

    Raised when a function used with Prefect has an argument with a name that is reserved for a Prefect feature

    Source code in prefect/exceptions.py
    class ReservedArgumentError(PrefectException, TypeError):\n    \"\"\"\n    Raised when a function used with Prefect has an argument with a name that is\n    reserved for a Prefect feature\n    \"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ScriptError","title":"ScriptError","text":"

    Bases: PrefectException

    Raised when a script errors during evaluation while attempting to load data

    Source code in prefect/exceptions.py
    class ScriptError(PrefectException):\n    \"\"\"\n    Raised when a script errors during evaluation while attempting to load data\n    \"\"\"\n\n    def __init__(\n        self,\n        user_exc: Exception,\n        path: str,\n    ) -> None:\n        message = f\"Script at {str(path)!r} encountered an exception: {user_exc!r}\"\n        super().__init__(message)\n        self.user_exc = user_exc\n\n        # Strip script run information from the traceback\n        self.user_exc.__traceback__ = _trim_traceback(\n            self.user_exc.__traceback__,\n            remove_modules=[prefect.utilities.importtools],\n        )\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.SignatureMismatchError","title":"SignatureMismatchError","text":"

    Bases: PrefectException, TypeError

    Raised when parameters passed to a function do not match its signature.

    Source code in prefect/exceptions.py
    class SignatureMismatchError(PrefectException, TypeError):\n    \"\"\"Raised when parameters passed to a function do not match its signature.\"\"\"\n\n    def __init__(self, msg: str):\n        super().__init__(msg)\n\n    @classmethod\n    def from_bad_params(cls, expected_params: List[str], provided_params: List[str]):\n        msg = (\n            f\"Function expects parameters {expected_params} but was provided with\"\n            f\" parameters {provided_params}\"\n        )\n        return cls(msg)\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.TerminationSignal","title":"TerminationSignal","text":"

    Bases: ExternalSignal

    Raised when a flow run receives a termination signal.

    Source code in prefect/exceptions.py
    class TerminationSignal(ExternalSignal):\n    \"\"\"\n    Raised when a flow run receives a termination signal.\n    \"\"\"\n\n    def __init__(self, signal: int):\n        self.signal = signal\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.UnfinishedRun","title":"UnfinishedRun","text":"

    Bases: PrefectException

    Raised when the result from a run that is not finished is retrieved.

    For example, if a run is in a SCHEDULED, PENDING, CANCELLING, or RUNNING state.

    Source code in prefect/exceptions.py
    class UnfinishedRun(PrefectException):\n    \"\"\"\n    Raised when the result from a run that is not finished is retrieved.\n\n    For example, if a run is in a SCHEDULED, PENDING, CANCELLING, or RUNNING state.\n    \"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.UnspecifiedFlowError","title":"UnspecifiedFlowError","text":"

    Bases: PrefectException

    Raised when multiple flows are found in the expected script and no name is given.

    Source code in prefect/exceptions.py
    class UnspecifiedFlowError(PrefectException):\n    \"\"\"\n    Raised when multiple flows are found in the expected script and no name is given.\n    \"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.UpstreamTaskError","title":"UpstreamTaskError","text":"

    Bases: PrefectException

    Raised when a task relies on the result of another task but that task is not 'COMPLETE'

    Source code in prefect/exceptions.py
    class UpstreamTaskError(PrefectException):\n    \"\"\"\n    Raised when a task relies on the result of another task but that task is not\n    'COMPLETE'\n    \"\"\"\n
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.exception_traceback","title":"exception_traceback","text":"

    Convert an exception to a printable string with a traceback

    Source code in prefect/exceptions.py
    def exception_traceback(exc: Exception) -> str:\n    \"\"\"\n    Convert an exception to a printable string with a traceback\n    \"\"\"\n    tb = traceback.TracebackException.from_exception(exc)\n    return \"\".join(list(tb.format()))\n
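A minimal usage sketch:

from prefect.exceptions import exception_traceback

try:
    1 / 0
except ZeroDivisionError as exc:
    print(exception_traceback(exc))  # the full traceback rendered as a single string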
    ","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/filesystems/","title":"prefect.filesystems","text":"","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems","title":"prefect.filesystems","text":"","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.Azure","title":"Azure","text":"

    Bases: WritableFileSystem, WritableDeploymentStorage

    DEPRECATION WARNING:

This class is deprecated as of March 2024 and will not be available after September 2024. It has been replaced by AzureBlobStorageContainer from the prefect-azure package, which offers enhanced functionality and a better user experience.

    Store data as a file on Azure Datalake and Azure Blob Storage.

    Example

    Load stored Azure config:

    from prefect.filesystems import Azure\n\naz_block = Azure.load(\"BLOCK_NAME\")\n

    Source code in prefect/filesystems.py
    @deprecated_class(\n    start_date=\"Mar 2024\",\n    help=\"Use the `AzureBlobStorageContainer` block from prefect-azure instead.\",\n)\nclass Azure(WritableFileSystem, WritableDeploymentStorage):\n    \"\"\"\n    DEPRECATION WARNING:\n\n    This class is deprecated as of March 2024 and will not be available after September 2024.\n    It has been replaced by `AzureBlobStorageContainer` from the `prefect-azure` package, which\n    offers enhanced functionality and better a better user experience.\n\n    Store data as a file on Azure Datalake and Azure Blob Storage.\n\n    Example:\n        Load stored Azure config:\n        ```python\n        from prefect.filesystems import Azure\n\n        az_block = Azure.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Azure\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/54e3fa7e00197a4fbd1d82ed62494cb58d08c96a-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#azure\"\n\n    bucket_path: str = Field(\n        default=...,\n        description=\"An Azure storage bucket path.\",\n        examples=[\"my-bucket/a-directory-within\"],\n    )\n    azure_storage_connection_string: Optional[SecretStr] = Field(\n        default=None,\n        title=\"Azure storage connection string\",\n        description=(\n            \"Equivalent to the AZURE_STORAGE_CONNECTION_STRING environment variable.\"\n        ),\n    )\n    azure_storage_account_name: Optional[SecretStr] = Field(\n        default=None,\n        title=\"Azure storage account name\",\n        description=(\n            \"Equivalent to the AZURE_STORAGE_ACCOUNT_NAME environment variable.\"\n        ),\n    )\n    azure_storage_account_key: Optional[SecretStr] = Field(\n        default=None,\n        title=\"Azure storage account key\",\n        description=\"Equivalent to the AZURE_STORAGE_ACCOUNT_KEY environment variable.\",\n    )\n    azure_storage_tenant_id: Optional[SecretStr] = Field(\n        None,\n        title=\"Azure storage tenant ID\",\n        description=\"Equivalent to the AZURE_TENANT_ID environment variable.\",\n    )\n    azure_storage_client_id: Optional[SecretStr] = Field(\n        None,\n        title=\"Azure storage client ID\",\n        description=\"Equivalent to the AZURE_CLIENT_ID environment variable.\",\n    )\n    azure_storage_client_secret: Optional[SecretStr] = Field(\n        None,\n        title=\"Azure storage client secret\",\n        description=\"Equivalent to the AZURE_CLIENT_SECRET environment variable.\",\n    )\n    azure_storage_anon: bool = Field(\n        default=True,\n        title=\"Azure storage anonymous connection\",\n        description=(\n            \"Set the 'anon' flag for ADLFS. This should be False for systems that\"\n            \" require ADLFS to use DefaultAzureCredentials.\"\n        ),\n    )\n    azure_storage_container: Optional[SecretStr] = Field(\n        default=None,\n        title=\"Azure storage container\",\n        description=(\n            \"Blob Container in Azure Storage Account. 
If set the 'bucket_path' will\"\n            \" be interpreted using the following URL format:\"\n            \"'az://<container>@<storage_account>.dfs.core.windows.net/<bucket_path>'.\"\n        ),\n    )\n    _remote_file_system: RemoteFileSystem = None\n\n    @property\n    def basepath(self) -> str:\n        if self.azure_storage_container:\n            return (\n                f\"az://{self.azure_storage_container.get_secret_value()}\"\n                f\"@{self.azure_storage_account_name.get_secret_value()}\"\n                f\".dfs.core.windows.net/{self.bucket_path}\"\n            )\n        else:\n            return f\"az://{self.bucket_path}\"\n\n    @property\n    def filesystem(self) -> RemoteFileSystem:\n        settings = {}\n        if self.azure_storage_connection_string:\n            settings[\n                \"connection_string\"\n            ] = self.azure_storage_connection_string.get_secret_value()\n        if self.azure_storage_account_name:\n            settings[\n                \"account_name\"\n            ] = self.azure_storage_account_name.get_secret_value()\n        if self.azure_storage_account_key:\n            settings[\"account_key\"] = self.azure_storage_account_key.get_secret_value()\n        if self.azure_storage_tenant_id:\n            settings[\"tenant_id\"] = self.azure_storage_tenant_id.get_secret_value()\n        if self.azure_storage_client_id:\n            settings[\"client_id\"] = self.azure_storage_client_id.get_secret_value()\n        if self.azure_storage_client_secret:\n            settings[\n                \"client_secret\"\n            ] = self.azure_storage_client_secret.get_secret_value()\n        settings[\"anon\"] = self.azure_storage_anon\n        self._remote_file_system = RemoteFileSystem(\n            basepath=self.basepath, settings=settings\n        )\n        return self._remote_file_system\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> bytes:\n        \"\"\"\n        Downloads a directory from a given remote path to a local directory.\n\n        Defaults to downloading the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        return await self.filesystem.get_directory(\n            from_path=from_path, local_path=local_path\n        )\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n    ) -> int:\n        \"\"\"\n        Uploads a directory from a given local path to a remote directory.\n\n        Defaults to uploading the entire contents of the current working directory to the block's basepath.\n        \"\"\"\n        return await self.filesystem.put_directory(\n            local_path=local_path, to_path=to_path, ignore_file=ignore_file\n        )\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        return await self.filesystem.read_path(path)\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        return await self.filesystem.write_path(path=path, content=content)\n
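A minimal read/write sketch, assuming a saved block named BLOCK_NAME; the file path and contents are illustrative:

from prefect.filesystems import Azure

az_block = Azure.load("BLOCK_NAME")

# Paths are interpreted relative to the configured bucket_path; both calls are
# sync-compatible, so they also work outside an async context.
az_block.write_path("folder/data.txt", b"hello")
content = az_block.read_path("folder/data.txt")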
    ","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.Azure.get_directory","title":"get_directory async","text":"

    Downloads a directory from a given remote path to a local directory.

    Defaults to downloading the entire contents of the block's basepath to the current working directory.

    Source code in prefect/filesystems.py
    @sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n    \"\"\"\n    Downloads a directory from a given remote path to a local directory.\n\n    Defaults to downloading the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    return await self.filesystem.get_directory(\n        from_path=from_path, local_path=local_path\n    )\n
    ","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.Azure.put_directory","title":"put_directory async","text":"

    Uploads a directory from a given local path to a remote directory.

    Defaults to uploading the entire contents of the current working directory to the block's basepath.

    Source code in prefect/filesystems.py
    @sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n) -> int:\n    \"\"\"\n    Uploads a directory from a given local path to a remote directory.\n\n    Defaults to uploading the entire contents of the current working directory to the block's basepath.\n    \"\"\"\n    return await self.filesystem.put_directory(\n        local_path=local_path, to_path=to_path, ignore_file=ignore_file\n    )\n
    ","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GCS","title":"GCS","text":"

    Bases: WritableFileSystem, WritableDeploymentStorage

    DEPRECATION WARNING:

This class is deprecated as of March 2024 and will not be available after September 2024. It has been replaced by GcsBucket from the prefect-gcp package, which offers enhanced functionality and a better user experience. Store data as a file on Google Cloud Storage.

    Example

    Load stored GCS config:

    from prefect.filesystems import GCS\n\ngcs_block = GCS.load(\"BLOCK_NAME\")\n

    Source code in prefect/filesystems.py
    @deprecated_class(\n    start_date=\"Mar 2024\", help=\"Use the `GcsBucket` block from prefect-gcp instead.\"\n)\nclass GCS(WritableFileSystem, WritableDeploymentStorage):\n    \"\"\"\n    DEPRECATION WARNING:\n\n    This class is deprecated as of March 2024 and will not be available after September 2024.\n    It has been replaced by `GcsBucket` from the `prefect-gcp` package, which offers enhanced functionality\n    and better a better user experience.\n    Store data as a file on Google Cloud Storage.\n\n    Example:\n        Load stored GCS config:\n        ```python\n        from prefect.filesystems import GCS\n\n        gcs_block = GCS.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/422d13bb838cf247eb2b2cf229ce6a2e717d601b-256x256.png\"\n    _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#gcs\"\n\n    bucket_path: str = Field(\n        default=...,\n        description=\"A GCS bucket path.\",\n        examples=[\"my-bucket/a-directory-within\"],\n    )\n    service_account_info: Optional[SecretStr] = Field(\n        default=None,\n        description=\"The contents of a service account keyfile as a JSON string.\",\n    )\n    project: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The project the GCS bucket resides in. If not provided, the project will\"\n            \" be inferred from the credentials or environment.\"\n        ),\n    )\n\n    @property\n    def basepath(self) -> str:\n        return f\"gcs://{self.bucket_path}\"\n\n    @property\n    def filesystem(self) -> RemoteFileSystem:\n        settings = {}\n        if self.service_account_info:\n            try:\n                settings[\"token\"] = json.loads(\n                    self.service_account_info.get_secret_value()\n                )\n            except json.JSONDecodeError:\n                raise ValueError(\n                    \"Unable to load provided service_account_info. 
Please make sure\"\n                    \" that the provided value is a valid JSON string.\"\n                )\n        remote_file_system = RemoteFileSystem(\n            basepath=f\"gcs://{self.bucket_path}\", settings=settings\n        )\n        return remote_file_system\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> bytes:\n        \"\"\"\n        Downloads a directory from a given remote path to a local directory.\n\n        Defaults to downloading the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        return await self.filesystem.get_directory(\n            from_path=from_path, local_path=local_path\n        )\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n    ) -> int:\n        \"\"\"\n        Uploads a directory from a given local path to a remote directory.\n\n        Defaults to uploading the entire contents of the current working directory to the block's basepath.\n        \"\"\"\n        return await self.filesystem.put_directory(\n            local_path=local_path, to_path=to_path, ignore_file=ignore_file\n        )\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        return await self.filesystem.read_path(path)\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        return await self.filesystem.write_path(path=path, content=content)\n
    ","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GCS.get_directory","title":"get_directory async","text":"

    Downloads a directory from a given remote path to a local directory.

    Defaults to downloading the entire contents of the block's basepath to the current working directory.

    Source code in prefect/filesystems.py
    @sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n    \"\"\"\n    Downloads a directory from a given remote path to a local directory.\n\n    Defaults to downloading the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    return await self.filesystem.get_directory(\n        from_path=from_path, local_path=local_path\n    )\n
    ","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GCS.put_directory","title":"put_directory async","text":"

    Uploads a directory from a given local path to a remote directory.

    Defaults to uploading the entire contents of the current working directory to the block's basepath.

    Source code in prefect/filesystems.py
    @sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n) -> int:\n    \"\"\"\n    Uploads a directory from a given local path to a remote directory.\n\n    Defaults to uploading the entire contents of the current working directory to the block's basepath.\n    \"\"\"\n    return await self.filesystem.put_directory(\n        local_path=local_path, to_path=to_path, ignore_file=ignore_file\n    )\n
    ","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GitHub","title":"GitHub","text":"

    Bases: ReadableDeploymentStorage

DEPRECATION WARNING:\n\nThis class is deprecated as of March 2024 and will not be available after September 2024.\nIt has been replaced by `GitHubRepository` from the `prefect-github` package, which offers\nenhanced functionality and a better user experience.\n

Interact with files stored on GitHub repositories.

    Source code in prefect/filesystems.py
    @deprecated_class(\n    start_date=\"Mar 2024\",\n    help=\"Use the `GitHubRepository` block from prefect-github instead.\",\n)\nclass GitHub(ReadableDeploymentStorage):\n    \"\"\"\n        DEPRECATION WARNING:\n\n        This class is deprecated as of March 2024 and will not be available after September 2024.\n        It has been replaced by `GitHubRepository` from the `prefect-github` package, which offers\n        enhanced functionality and better a better user experience.\n    q\n        Interact with files stored on GitHub repositories.\n    \"\"\"\n\n    _block_type_name = \"GitHub\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/41971cfecfea5f79ff334164f06ecb34d1038dd4-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#github\"\n\n    repository: str = Field(\n        default=...,\n        description=(\n            \"The URL of a GitHub repository to read from, in either HTTPS or SSH\"\n            \" format.\"\n        ),\n    )\n    reference: Optional[str] = Field(\n        default=None,\n        description=\"An optional reference to pin to; can be a branch name or tag.\",\n    )\n    access_token: Optional[SecretStr] = Field(\n        name=\"Personal Access Token\",\n        default=None,\n        description=(\n            \"A GitHub Personal Access Token (PAT) with repo scope.\"\n            \" To use a fine-grained PAT, provide '{username}:{PAT}' as the value.\"\n        ),\n    )\n    include_git_objects: bool = Field(\n        default=True,\n        description=(\n            \"Whether to include git objects when copying the repo contents to a\"\n            \" directory.\"\n        ),\n    )\n\n    @validator(\"access_token\")\n    def _ensure_credentials_go_with_https(cls, v: str, values: dict) -> str:\n        return validate_github_access_token(v, values)\n\n    def _create_repo_url(self) -> str:\n        \"\"\"Format the URL provided to the `git clone` command.\n\n        For private repos: https://<oauth-key>@github.com/<username>/<repo>.git\n        All other repos should be the same as `self.repository`.\n        \"\"\"\n        url_components = urllib.parse.urlparse(self.repository)\n        if url_components.scheme == \"https\" and self.access_token is not None:\n            updated_components = url_components._replace(\n                netloc=f\"{self.access_token.get_secret_value()}@{url_components.netloc}\"\n            )\n            full_url = urllib.parse.urlunparse(updated_components)\n        else:\n            full_url = self.repository\n\n        return full_url\n\n    @staticmethod\n    def _get_paths(\n        dst_dir: Union[str, None], src_dir: str, sub_directory: str\n    ) -> Tuple[str, str]:\n        \"\"\"Returns the fully formed paths for GitHubRepository contents in the form\n        (content_source, content_destination).\n        \"\"\"\n        if dst_dir is None:\n            content_destination = Path(\".\").absolute()\n        else:\n            content_destination = Path(dst_dir)\n\n        content_source = Path(src_dir)\n\n        if sub_directory:\n            content_destination = content_destination.joinpath(sub_directory)\n            content_source = content_source.joinpath(sub_directory)\n\n        return str(content_source), str(content_destination)\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> None:\n        \"\"\"\n        Clones a GitHub project specified in 
`from_path` to the provided `local_path`;\n        defaults to cloning the repository reference configured on the Block to the\n        present working directory.\n\n        Args:\n            from_path: If provided, interpreted as a subdirectory of the underlying\n                repository that will be copied to the provided local path.\n            local_path: A local path to clone to; defaults to present working directory.\n        \"\"\"\n        # CONSTRUCT COMMAND\n        cmd = [\"git\", \"clone\", self._create_repo_url()]\n        if self.reference:\n            cmd += [\"-b\", self.reference]\n\n        # Limit git history\n        cmd += [\"--depth\", \"1\"]\n\n        # Clone to a temporary directory and move the subdirectory over\n        with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n            cmd.append(tmp_dir)\n\n            err_stream = io.StringIO()\n            out_stream = io.StringIO()\n            process = await run_process(cmd, stream_output=(out_stream, err_stream))\n            if process.returncode != 0:\n                err_stream.seek(0)\n                raise OSError(f\"Failed to pull from remote:\\n {err_stream.read()}\")\n\n            content_source, content_destination = self._get_paths(\n                dst_dir=local_path, src_dir=tmp_dir, sub_directory=from_path\n            )\n\n            ignore_func = None\n            if not self.include_git_objects:\n                ignore_func = ignore_patterns(\".git\")\n\n            copytree(\n                src=content_source,\n                dst=content_destination,\n                dirs_exist_ok=True,\n                ignore=ignore_func,\n            )\n
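A minimal usage sketch; the repository URL and branch are placeholders:

from prefect.filesystems import GitHub

gh_block = GitHub(
    repository="https://github.com/org/repo",  # placeholder repository URL
    reference="main",                           # optional branch or tag to pin to
)
# Clone (depth 1) into the current working directory.
gh_block.get_directory()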
    ","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GitHub.get_directory","title":"get_directory async","text":"

    Clones a GitHub project specified in from_path to the provided local_path; defaults to cloning the repository reference configured on the Block to the present working directory.

    Parameters:

    - from_path (Optional[str], default None): If provided, interpreted as a subdirectory of the underlying repository that will be copied to the provided local path.
    - local_path (Optional[str], default None): A local path to clone to; defaults to present working directory.

    Source code in prefect/filesystems.py
    @sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> None:\n    \"\"\"\n    Clones a GitHub project specified in `from_path` to the provided `local_path`;\n    defaults to cloning the repository reference configured on the Block to the\n    present working directory.\n\n    Args:\n        from_path: If provided, interpreted as a subdirectory of the underlying\n            repository that will be copied to the provided local path.\n        local_path: A local path to clone to; defaults to present working directory.\n    \"\"\"\n    # CONSTRUCT COMMAND\n    cmd = [\"git\", \"clone\", self._create_repo_url()]\n    if self.reference:\n        cmd += [\"-b\", self.reference]\n\n    # Limit git history\n    cmd += [\"--depth\", \"1\"]\n\n    # Clone to a temporary directory and move the subdirectory over\n    with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n        cmd.append(tmp_dir)\n\n        err_stream = io.StringIO()\n        out_stream = io.StringIO()\n        process = await run_process(cmd, stream_output=(out_stream, err_stream))\n        if process.returncode != 0:\n            err_stream.seek(0)\n            raise OSError(f\"Failed to pull from remote:\\n {err_stream.read()}\")\n\n        content_source, content_destination = self._get_paths(\n            dst_dir=local_path, src_dir=tmp_dir, sub_directory=from_path\n        )\n\n        ignore_func = None\n        if not self.include_git_objects:\n            ignore_func = ignore_patterns(\".git\")\n\n        copytree(\n            src=content_source,\n            dst=content_destination,\n            dirs_exist_ok=True,\n            ignore=ignore_func,\n        )\n
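    For a narrower copy, the hedged sketch below uses `from_path` and `local_path`; the repository URL, the `flows` subdirectory, and the `./src` target are all placeholder names.

    ```python
    from prefect.filesystems import GitHub

    github_block = GitHub(repository="https://github.com/org/repo.git")  # placeholder URL

    # Copy only the repository's flows/ subdirectory; per _get_paths it is placed
    # under the given local path, i.e. it ends up at ./src/flows.
    github_block.get_directory(from_path="flows", local_path="./src")
    ```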
    ","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.LocalFileSystem","title":"LocalFileSystem","text":"

    Bases: WritableFileSystem, WritableDeploymentStorage

    Store data as a file on a local file system.

    Example

    Load stored local file system config:

    from prefect.filesystems import LocalFileSystem\n\nlocal_file_system_block = LocalFileSystem.load(\"BLOCK_NAME\")\n

    Source code in prefect/filesystems.py
    class LocalFileSystem(WritableFileSystem, WritableDeploymentStorage):\n    \"\"\"\n    Store data as a file on a local file system.\n\n    Example:\n        Load stored local file system config:\n        ```python\n        from prefect.filesystems import LocalFileSystem\n\n        local_file_system_block = LocalFileSystem.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Local File System\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/ad39089fa66d273b943394a68f003f7a19aa850e-48x48.png\"\n    _documentation_url = (\n        \"https://docs.prefect.io/concepts/filesystems/#local-filesystem\"\n    )\n\n    basepath: Optional[str] = Field(\n        default=None, description=\"Default local path for this block to write to.\"\n    )\n\n    @validator(\"basepath\", pre=True)\n    def cast_pathlib(cls, value):\n        return stringify_path(value)\n\n    def _resolve_path(self, path: str) -> Path:\n        # Only resolve the base path at runtime, default to the current directory\n        basepath = (\n            Path(self.basepath).expanduser().resolve()\n            if self.basepath\n            else Path(\".\").resolve()\n        )\n\n        # Determine the path to access relative to the base path, ensuring that paths\n        # outside of the base path are off limits\n        if path is None:\n            return basepath\n\n        path: Path = Path(path).expanduser()\n\n        if not path.is_absolute():\n            path = basepath / path\n        else:\n            path = path.resolve()\n            if basepath not in path.parents and (basepath != path):\n                raise ValueError(\n                    f\"Provided path {path} is outside of the base path {basepath}.\"\n                )\n\n        return path\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: str = None, local_path: str = None\n    ) -> None:\n        \"\"\"\n        Copies a directory from one place to another on the local filesystem.\n\n        Defaults to copying the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        if not from_path:\n            from_path = Path(self.basepath).expanduser().resolve()\n        else:\n            from_path = self._resolve_path(from_path)\n\n        if not local_path:\n            local_path = Path(\".\").resolve()\n        else:\n            local_path = Path(local_path).resolve()\n\n        if from_path == local_path:\n            # If the paths are the same there is no need to copy\n            # and we avoid shutil.copytree raising an error\n            return\n\n        # .prefectignore exists in the original location, not the current location which\n        # is most likely temporary\n        if (from_path / Path(\".prefectignore\")).exists():\n            ignore_func = await self._get_ignore_func(\n                local_path=from_path, ignore_file=from_path / Path(\".prefectignore\")\n            )\n        else:\n            ignore_func = None\n\n        copytree(from_path, local_path, dirs_exist_ok=True, ignore=ignore_func)\n\n    async def _get_ignore_func(self, local_path: str, ignore_file: str):\n        with open(ignore_file, \"r\") as f:\n            ignore_patterns = f.readlines()\n        included_files = filter_files(root=local_path, ignore_patterns=ignore_patterns)\n\n        def ignore_func(directory, files):\n            relative_path = Path(directory).relative_to(local_path)\n\n            files_to_ignore = [\n                f for f in 
files if str(relative_path / f) not in included_files\n            ]\n            return files_to_ignore\n\n        return ignore_func\n\n    @sync_compatible\n    async def put_directory(\n        self, local_path: str = None, to_path: str = None, ignore_file: str = None\n    ) -> None:\n        \"\"\"\n        Copies a directory from one place to another on the local filesystem.\n\n        Defaults to copying the entire contents of the current working directory to the block's basepath.\n        An `ignore_file` path may be provided that can include gitignore style expressions for filepaths to ignore.\n        \"\"\"\n        destination_path = self._resolve_path(to_path)\n\n        if not local_path:\n            local_path = Path(\".\").absolute()\n\n        if ignore_file:\n            ignore_func = await self._get_ignore_func(\n                local_path=local_path, ignore_file=ignore_file\n            )\n        else:\n            ignore_func = None\n\n        if local_path == destination_path:\n            pass\n        else:\n            copytree(\n                src=local_path,\n                dst=destination_path,\n                ignore=ignore_func,\n                dirs_exist_ok=True,\n            )\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        path: Path = self._resolve_path(path)\n\n        # Check if the path exists\n        if not path.exists():\n            raise ValueError(f\"Path {path} does not exist.\")\n\n        # Validate that its a file\n        if not path.is_file():\n            raise ValueError(f\"Path {path} is not a file.\")\n\n        async with await anyio.open_file(str(path), mode=\"rb\") as f:\n            content = await f.read()\n\n        return content\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        path: Path = self._resolve_path(path)\n\n        # Construct the path if it does not exist\n        path.parent.mkdir(exist_ok=True, parents=True)\n\n        # Check if the file already exists\n        if path.exists() and not path.is_file():\n            raise ValueError(f\"Path {path} already exists and is not a file.\")\n\n        async with await anyio.open_file(path, mode=\"wb\") as f:\n            await f.write(content)\n        # Leave path stringify to the OS\n        return str(path)\n
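    A minimal sketch of the block in use, assuming a scratch directory such as /tmp/prefect-demo is acceptable as the basepath: paths passed to the block are resolved against the basepath, and paths that escape it are rejected.

    ```python
    from prefect.filesystems import LocalFileSystem

    fs = LocalFileSystem(basepath="/tmp/prefect-demo")  # hypothetical basepath

    # write_path creates missing parent directories and returns the resolved path.
    written = fs.write_path("results/output.txt", b"hello from prefect")
    print(written)  # e.g. /tmp/prefect-demo/results/output.txt

    # read_path returns the file contents as bytes.
    print(fs.read_path("results/output.txt"))
    ```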
    ","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.LocalFileSystem.get_directory","title":"get_directory async","text":"

    Copies a directory from one place to another on the local filesystem.

    Defaults to copying the entire contents of the block's basepath to the current working directory.

    Source code in prefect/filesystems.py
    @sync_compatible\nasync def get_directory(\n    self, from_path: str = None, local_path: str = None\n) -> None:\n    \"\"\"\n    Copies a directory from one place to another on the local filesystem.\n\n    Defaults to copying the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    if not from_path:\n        from_path = Path(self.basepath).expanduser().resolve()\n    else:\n        from_path = self._resolve_path(from_path)\n\n    if not local_path:\n        local_path = Path(\".\").resolve()\n    else:\n        local_path = Path(local_path).resolve()\n\n    if from_path == local_path:\n        # If the paths are the same there is no need to copy\n        # and we avoid shutil.copytree raising an error\n        return\n\n    # .prefectignore exists in the original location, not the current location which\n    # is most likely temporary\n    if (from_path / Path(\".prefectignore\")).exists():\n        ignore_func = await self._get_ignore_func(\n            local_path=from_path, ignore_file=from_path / Path(\".prefectignore\")\n        )\n    else:\n        ignore_func = None\n\n    copytree(from_path, local_path, dirs_exist_ok=True, ignore=ignore_func)\n
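    A short sketch of the copy direction this method covers; the basepath and destination directories are placeholders, and the basepath is assumed to already exist.

    ```python
    from prefect.filesystems import LocalFileSystem

    fs = LocalFileSystem(basepath="/tmp/prefect-demo")  # hypothetical basepath

    # Copy the entire basepath into ./copy_of_basepath (created if missing).
    fs.get_directory(local_path="./copy_of_basepath")

    # Copy only a subdirectory; from_path is resolved relative to the basepath.
    fs.get_directory(from_path="results", local_path="./just_results")
    ```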
    ","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.LocalFileSystem.put_directory","title":"put_directory async","text":"

    Copies a directory from one place to another on the local filesystem.

    Defaults to copying the entire contents of the current working directory to the block's basepath. An ignore_file path may be provided that can include gitignore style expressions for filepaths to ignore.

    Source code in prefect/filesystems.py
    @sync_compatible\nasync def put_directory(\n    self, local_path: str = None, to_path: str = None, ignore_file: str = None\n) -> None:\n    \"\"\"\n    Copies a directory from one place to another on the local filesystem.\n\n    Defaults to copying the entire contents of the current working directory to the block's basepath.\n    An `ignore_file` path may be provided that can include gitignore style expressions for filepaths to ignore.\n    \"\"\"\n    destination_path = self._resolve_path(to_path)\n\n    if not local_path:\n        local_path = Path(\".\").absolute()\n\n    if ignore_file:\n        ignore_func = await self._get_ignore_func(\n            local_path=local_path, ignore_file=ignore_file\n        )\n    else:\n        ignore_func = None\n\n    if local_path == destination_path:\n        pass\n    else:\n        copytree(\n            src=local_path,\n            dst=destination_path,\n            ignore=ignore_func,\n            dirs_exist_ok=True,\n        )\n
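    The reverse direction, sketched with placeholder paths and assuming a `.prefectignore` file with gitignore-style patterns exists in the current working directory.

    ```python
    from prefect.filesystems import LocalFileSystem

    fs = LocalFileSystem(basepath="/tmp/prefect-demo")  # hypothetical basepath

    # Copy the current working directory into <basepath>/uploaded, skipping any
    # files matched by the gitignore-style patterns in .prefectignore.
    fs.put_directory(local_path=".", to_path="uploaded", ignore_file=".prefectignore")
    ```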
    ","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.RemoteFileSystem","title":"RemoteFileSystem","text":"

    Bases: WritableFileSystem, WritableDeploymentStorage

    Store data as a file on a remote file system.

    Supports any remote file system supported by fsspec. The file system is specified using a protocol. For example, \"s3://my-bucket/my-folder/\" will use S3.

    Example

    Load stored remote file system config:

    from prefect.filesystems import RemoteFileSystem\n\nremote_file_system_block = RemoteFileSystem.load(\"BLOCK_NAME\")\n

    Source code in prefect/filesystems.py
    class RemoteFileSystem(WritableFileSystem, WritableDeploymentStorage):\n    \"\"\"\n    Store data as a file on a remote file system.\n\n    Supports any remote file system supported by `fsspec`. The file system is specified\n    using a protocol. For example, \"s3://my-bucket/my-folder/\" will use S3.\n\n    Example:\n        Load stored remote file system config:\n        ```python\n        from prefect.filesystems import RemoteFileSystem\n\n        remote_file_system_block = RemoteFileSystem.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Remote File System\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/e86b41bc0f9c99ba9489abeee83433b43d5c9365-48x48.png\"\n    _documentation_url = (\n        \"https://docs.prefect.io/concepts/filesystems/#remote-file-system\"\n    )\n\n    basepath: str = Field(\n        default=...,\n        description=\"Default path for this block to write to.\",\n        examples=[\"s3://my-bucket/my-folder/\"],\n    )\n    settings: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Additional settings to pass through to fsspec.\",\n    )\n\n    # Cache for the configured fsspec file system used for access\n    _filesystem: fsspec.AbstractFileSystem = None\n\n    @validator(\"basepath\")\n    def check_basepath(cls, value):\n        return validate_basepath(value)\n\n    def _resolve_path(self, path: str) -> str:\n        base_scheme, base_netloc, base_urlpath, _, _ = urllib.parse.urlsplit(\n            self.basepath\n        )\n        scheme, netloc, urlpath, _, _ = urllib.parse.urlsplit(path)\n\n        # Confirm that absolute paths are valid\n        if scheme:\n            if scheme != base_scheme:\n                raise ValueError(\n                    f\"Path {path!r} with scheme {scheme!r} must use the same scheme as\"\n                    f\" the base path {base_scheme!r}.\"\n                )\n\n        if netloc:\n            if (netloc != base_netloc) or not urlpath.startswith(base_urlpath):\n                raise ValueError(\n                    f\"Path {path!r} is outside of the base path {self.basepath!r}.\"\n                )\n\n        return f\"{self.basepath.rstrip('/')}/{urlpath.lstrip('/')}\"\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> None:\n        \"\"\"\n        Downloads a directory from a given remote path to a local directory.\n\n        Defaults to downloading the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        if from_path is None:\n            from_path = str(self.basepath)\n        else:\n            from_path = self._resolve_path(from_path)\n\n        if local_path is None:\n            local_path = Path(\".\").absolute()\n\n        # validate that from_path has a trailing slash for proper fsspec behavior across versions\n        if not from_path.endswith(\"/\"):\n            from_path += \"/\"\n\n        return self.filesystem.get(from_path, local_path, recursive=True)\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n        overwrite: bool = True,\n    ) -> int:\n        \"\"\"\n        Uploads a directory from a given local path to a remote directory.\n\n        Defaults to uploading the entire contents of the current working directory to the block's 
basepath.\n        \"\"\"\n        if to_path is None:\n            to_path = str(self.basepath)\n        else:\n            to_path = self._resolve_path(to_path)\n\n        if local_path is None:\n            local_path = \".\"\n\n        included_files = None\n        if ignore_file:\n            with open(ignore_file, \"r\") as f:\n                ignore_patterns = f.readlines()\n\n            included_files = filter_files(\n                local_path, ignore_patterns, include_dirs=True\n            )\n\n        counter = 0\n        for f in Path(local_path).rglob(\"*\"):\n            relative_path = f.relative_to(local_path)\n            if included_files and str(relative_path) not in included_files:\n                continue\n\n            if to_path.endswith(\"/\"):\n                fpath = to_path + relative_path.as_posix()\n            else:\n                fpath = to_path + \"/\" + relative_path.as_posix()\n\n            if f.is_dir():\n                pass\n            else:\n                f = f.as_posix()\n                if overwrite:\n                    self.filesystem.put_file(f, fpath, overwrite=True)\n                else:\n                    self.filesystem.put_file(f, fpath)\n\n                counter += 1\n\n        return counter\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        path = self._resolve_path(path)\n\n        with self.filesystem.open(path, \"rb\") as file:\n            content = await run_sync_in_worker_thread(file.read)\n\n        return content\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        path = self._resolve_path(path)\n        dirpath = path[: path.rindex(\"/\")]\n\n        self.filesystem.makedirs(dirpath, exist_ok=True)\n\n        with self.filesystem.open(path, \"wb\") as file:\n            await run_sync_in_worker_thread(file.write, content)\n        return path\n\n    @property\n    def filesystem(self) -> fsspec.AbstractFileSystem:\n        if not self._filesystem:\n            scheme, _, _, _, _ = urllib.parse.urlsplit(self.basepath)\n\n            try:\n                self._filesystem = fsspec.filesystem(scheme, **self.settings)\n            except ImportError as exc:\n                # The path is a remote file system that uses a lib that is not installed\n                raise RuntimeError(\n                    f\"File system created with scheme {scheme!r} from base path \"\n                    f\"{self.basepath!r} could not be created. \"\n                    \"You are likely missing a Python module required to use the given \"\n                    \"storage protocol.\"\n                ) from exc\n\n        return self._filesystem\n
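    A hedged sketch of direct use: the bucket, folder, and credential values are placeholders, and the "s3" scheme only works if an fsspec implementation for it (for example the s3fs package) is installed.

    ```python
    from prefect.filesystems import RemoteFileSystem

    remote = RemoteFileSystem(
        basepath="s3://my-bucket/my-folder/",  # placeholder bucket and prefix
        settings={"key": "AWS_ACCESS_KEY", "secret": "AWS_SECRET"},  # passed through to fsspec
    )

    # write_path creates intermediate "directories" as needed and returns the remote path.
    remote.write_path("reports/summary.txt", b"nightly summary")

    # read_path returns the object contents as bytes.
    print(remote.read_path("reports/summary.txt"))
    ```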
    ","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.RemoteFileSystem.get_directory","title":"get_directory async","text":"

    Downloads a directory from a given remote path to a local directory.

    Defaults to downloading the entire contents of the block's basepath to the current working directory.

    Source code in prefect/filesystems.py
    @sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> None:\n    \"\"\"\n    Downloads a directory from a given remote path to a local directory.\n\n    Defaults to downloading the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    if from_path is None:\n        from_path = str(self.basepath)\n    else:\n        from_path = self._resolve_path(from_path)\n\n    if local_path is None:\n        local_path = Path(\".\").absolute()\n\n    # validate that from_path has a trailing slash for proper fsspec behavior across versions\n    if not from_path.endswith(\"/\"):\n        from_path += \"/\"\n\n    return self.filesystem.get(from_path, local_path, recursive=True)\n
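    Sketched usage with placeholder paths; with no arguments everything under the basepath is downloaded into the current working directory.

    ```python
    from prefect.filesystems import RemoteFileSystem

    remote = RemoteFileSystem(basepath="s3://my-bucket/my-folder/")  # placeholder bucket

    # Download the entire basepath into the current working directory.
    remote.get_directory()

    # Download a sub-path of the basepath into a specific local directory.
    remote.get_directory(from_path="data/2024", local_path="./data_2024")
    ```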
    ","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.RemoteFileSystem.put_directory","title":"put_directory async","text":"

    Uploads a directory from a given local path to a remote directory.

    Defaults to uploading the entire contents of the current working directory to the block's basepath.

    Source code in prefect/filesystems.py
    @sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n    overwrite: bool = True,\n) -> int:\n    \"\"\"\n    Uploads a directory from a given local path to a remote directory.\n\n    Defaults to uploading the entire contents of the current working directory to the block's basepath.\n    \"\"\"\n    if to_path is None:\n        to_path = str(self.basepath)\n    else:\n        to_path = self._resolve_path(to_path)\n\n    if local_path is None:\n        local_path = \".\"\n\n    included_files = None\n    if ignore_file:\n        with open(ignore_file, \"r\") as f:\n            ignore_patterns = f.readlines()\n\n        included_files = filter_files(\n            local_path, ignore_patterns, include_dirs=True\n        )\n\n    counter = 0\n    for f in Path(local_path).rglob(\"*\"):\n        relative_path = f.relative_to(local_path)\n        if included_files and str(relative_path) not in included_files:\n            continue\n\n        if to_path.endswith(\"/\"):\n            fpath = to_path + relative_path.as_posix()\n        else:\n            fpath = to_path + \"/\" + relative_path.as_posix()\n\n        if f.is_dir():\n            pass\n        else:\n            f = f.as_posix()\n            if overwrite:\n                self.filesystem.put_file(f, fpath, overwrite=True)\n            else:\n                self.filesystem.put_file(f, fpath)\n\n            counter += 1\n\n    return counter\n
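    Sketched usage with placeholder paths; note that the return value is the number of files uploaded, and that existing files are overwritten by default.

    ```python
    from prefect.filesystems import RemoteFileSystem

    remote = RemoteFileSystem(basepath="s3://my-bucket/my-folder/")  # placeholder bucket

    # Upload the current working directory under <basepath>/releases/v1.
    uploaded = remote.put_directory(local_path=".", to_path="releases/v1")
    print(f"uploaded {uploaded} files")
    ```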
    ","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.S3","title":"S3","text":"

    Bases: WritableFileSystem, WritableDeploymentStorage

    DEPRECATION WARNING:

    This class is deprecated as of March 2024 and will not be available after September 2024. It has been replaced by S3Bucket from the prefect-aws package, which offers enhanced functionality and a better user experience.

    Store data as a file on AWS S3.

    Example

    Load stored S3 config:

    from prefect.filesystems import S3\n\ns3_block = S3.load(\"BLOCK_NAME\")\n

    Source code in prefect/filesystems.py
    @deprecated_class(\n    start_date=\"Mar 2024\", help=\"Use the `S3Bucket` block from prefect-aws instead.\"\n)\nclass S3(WritableFileSystem, WritableDeploymentStorage):\n    \"\"\"\n    DEPRECATION WARNING:\n\n    This class is deprecated as of March 2024 and will not be available after September 2024.\n    It has been replaced by `S3Bucket` from the `prefect-aws` package, which offers enhanced functionality\n    and better a better user experience.\n\n    Store data as a file on AWS S3.\n\n    Example:\n        Load stored S3 config:\n        ```python\n        from prefect.filesystems import S3\n\n        s3_block = S3.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"S3\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/d74b16fe84ce626345adf235a47008fea2869a60-225x225.png\"\n    _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#s3\"\n\n    bucket_path: str = Field(\n        default=...,\n        description=\"An S3 bucket path.\",\n        examples=[\"my-bucket/a-directory-within\"],\n    )\n    aws_access_key_id: Optional[SecretStr] = Field(\n        default=None,\n        title=\"AWS Access Key ID\",\n        description=\"Equivalent to the AWS_ACCESS_KEY_ID environment variable.\",\n        examples=[\"AKIAIOSFODNN7EXAMPLE\"],\n    )\n    aws_secret_access_key: Optional[SecretStr] = Field(\n        default=None,\n        title=\"AWS Secret Access Key\",\n        description=\"Equivalent to the AWS_SECRET_ACCESS_KEY environment variable.\",\n        examples=[\"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\"],\n    )\n\n    _remote_file_system: RemoteFileSystem = None\n\n    @property\n    def basepath(self) -> str:\n        return f\"s3://{self.bucket_path}\"\n\n    @property\n    def filesystem(self) -> RemoteFileSystem:\n        settings = {}\n        if self.aws_access_key_id:\n            settings[\"key\"] = self.aws_access_key_id.get_secret_value()\n        if self.aws_secret_access_key:\n            settings[\"secret\"] = self.aws_secret_access_key.get_secret_value()\n        self._remote_file_system = RemoteFileSystem(\n            basepath=f\"s3://{self.bucket_path}\", settings=settings\n        )\n        return self._remote_file_system\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> bytes:\n        \"\"\"\n        Downloads a directory from a given remote path to a local directory.\n\n        Defaults to downloading the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        return await self.filesystem.get_directory(\n            from_path=from_path, local_path=local_path\n        )\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n    ) -> int:\n        \"\"\"\n        Uploads a directory from a given local path to a remote directory.\n\n        Defaults to uploading the entire contents of the current working directory to the block's basepath.\n        \"\"\"\n        return await self.filesystem.put_directory(\n            local_path=local_path, to_path=to_path, ignore_file=ignore_file\n        )\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        return await self.filesystem.read_path(path)\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        return 
await self.filesystem.write_path(path=path, content=content)\n
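    A minimal sketch of the (deprecated) block; the bucket path and the credentials are the documented placeholder values. Omitting the keys should fall back on whatever credentials the underlying fsspec/s3fs setup can resolve.

    ```python
    from prefect.filesystems import S3

    s3_block = S3(
        bucket_path="my-bucket/a-directory-within",                        # placeholder
        aws_access_key_id="AKIAIOSFODNN7EXAMPLE",                          # placeholder
        aws_secret_access_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",  # placeholder
    )

    # Upload the current working directory, then pull it back down elsewhere.
    s3_block.put_directory(local_path=".", to_path="snapshots/latest")
    s3_block.get_directory(from_path="snapshots/latest", local_path="./restored")
    ```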
    ","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.S3.get_directory","title":"get_directory async","text":"

    Downloads a directory from a given remote path to a local directory.

    Defaults to downloading the entire contents of the block's basepath to the current working directory.

    Source code in prefect/filesystems.py
    @sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n    \"\"\"\n    Downloads a directory from a given remote path to a local directory.\n\n    Defaults to downloading the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    return await self.filesystem.get_directory(\n        from_path=from_path, local_path=local_path\n    )\n
    ","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.S3.put_directory","title":"put_directory async","text":"

    Uploads a directory from a given local path to a remote directory.

    Defaults to uploading the entire contents of the current working directory to the block's basepath.

    Source code in prefect/filesystems.py
    @sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n) -> int:\n    \"\"\"\n    Uploads a directory from a given local path to a remote directory.\n\n    Defaults to uploading the entire contents of the current working directory to the block's basepath.\n    \"\"\"\n    return await self.filesystem.put_directory(\n        local_path=local_path, to_path=to_path, ignore_file=ignore_file\n    )\n
    ","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.SMB","title":"SMB","text":"

    Bases: WritableFileSystem, WritableDeploymentStorage

    Store data as a file on a SMB share.

    Example

    Load stored SMB config:

    from prefect.filesystems import SMB\nsmb_block = SMB.load(\"BLOCK_NAME\")\n
    Source code in prefect/filesystems.py
    class SMB(WritableFileSystem, WritableDeploymentStorage):\n    \"\"\"\n    Store data as a file on a SMB share.\n\n    Example:\n        Load stored SMB config:\n\n        ```python\n        from prefect.filesystems import SMB\n        smb_block = SMB.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"SMB\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/3f624663f7beb97d011d011bffd51ecf6c499efc-195x195.png\"\n    _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#smb\"\n\n    share_path: str = Field(\n        default=...,\n        description=\"SMB target (requires <SHARE>, followed by <PATH>).\",\n        examples=[\"/SHARE/dir/subdir\"],\n    )\n    smb_username: Optional[SecretStr] = Field(\n        default=None,\n        title=\"SMB Username\",\n        description=\"Username with access to the target SMB SHARE.\",\n    )\n    smb_password: Optional[SecretStr] = Field(\n        default=None, title=\"SMB Password\", description=\"Password for SMB access.\"\n    )\n    smb_host: str = Field(\n        default=..., tile=\"SMB server/hostname\", description=\"SMB server/hostname.\"\n    )\n    smb_port: Optional[int] = Field(\n        default=None, title=\"SMB port\", description=\"SMB port (default: 445).\"\n    )\n\n    _remote_file_system: RemoteFileSystem = None\n\n    @property\n    def basepath(self) -> str:\n        return f\"smb://{self.smb_host.rstrip('/')}/{self.share_path.lstrip('/')}\"\n\n    @property\n    def filesystem(self) -> RemoteFileSystem:\n        settings = {}\n        if self.smb_username:\n            settings[\"username\"] = self.smb_username.get_secret_value()\n        if self.smb_password:\n            settings[\"password\"] = self.smb_password.get_secret_value()\n        if self.smb_host:\n            settings[\"host\"] = self.smb_host\n        if self.smb_port:\n            settings[\"port\"] = self.smb_port\n        self._remote_file_system = RemoteFileSystem(\n            basepath=f\"smb://{self.smb_host.rstrip('/')}/{self.share_path.lstrip('/')}\",\n            settings=settings,\n        )\n        return self._remote_file_system\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> bytes:\n        \"\"\"\n        Downloads a directory from a given remote path to a local directory.\n        Defaults to downloading the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        return await self.filesystem.get_directory(\n            from_path=from_path, local_path=local_path\n        )\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n    ) -> int:\n        \"\"\"\n        Uploads a directory from a given local path to a remote directory.\n        Defaults to uploading the entire contents of the current working directory to the block's basepath.\n        \"\"\"\n        return await self.filesystem.put_directory(\n            local_path=local_path,\n            to_path=to_path,\n            ignore_file=ignore_file,\n            overwrite=False,\n        )\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        return await self.filesystem.read_path(path)\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        return await 
self.filesystem.write_path(path=path, content=content)\n
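    A hedged sketch with placeholder share details; like the other remote blocks, this relies on an fsspec implementation for the "smb" protocol being installed.

    ```python
    from prefect.filesystems import SMB

    smb_block = SMB(
        share_path="/SHARE/dir/subdir",  # <SHARE> followed by <PATH>, per the field description
        smb_host="fileserver.local",     # placeholder hostname
        smb_username="svc-prefect",      # placeholder credentials
        smb_password="s3cret",
        smb_port=445,                    # optional; 445 is the default
    )

    # Write a file onto the share, then copy the share contents to a local directory.
    smb_block.write_path("drops/run.txt", b"flow artifacts")
    smb_block.get_directory(local_path="./from_share")
    ```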
    ","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.SMB.get_directory","title":"get_directory async","text":"

    Downloads a directory from a given remote path to a local directory. Defaults to downloading the entire contents of the block's basepath to the current working directory.

    Source code in prefect/filesystems.py
    @sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n    \"\"\"\n    Downloads a directory from a given remote path to a local directory.\n    Defaults to downloading the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    return await self.filesystem.get_directory(\n        from_path=from_path, local_path=local_path\n    )\n
    ","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.SMB.put_directory","title":"put_directory async","text":"

    Uploads a directory from a given local path to a remote directory. Defaults to uploading the entire contents of the current working directory to the block's basepath.

    Source code in prefect/filesystems.py
    @sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n) -> int:\n    \"\"\"\n    Uploads a directory from a given local path to a remote directory.\n    Defaults to uploading the entire contents of the current working directory to the block's basepath.\n    \"\"\"\n    return await self.filesystem.put_directory(\n        local_path=local_path,\n        to_path=to_path,\n        ignore_file=ignore_file,\n        overwrite=False,\n    )\n
    ","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/flow_runs/","title":"prefect.flow_runs","text":"","tags":["Python API","flow-runs"]},{"location":"api-ref/prefect/flow_runs/#prefect.flow_runs","title":"prefect.flow_runs","text":"","tags":["Python API","flow-runs"]},{"location":"api-ref/prefect/flow_runs/#prefect.flow_runs.wait_for_flow_run","title":"wait_for_flow_run async","text":"

    Waits for the Prefect flow run to finish and returns the FlowRun.

    Parameters:

    - flow_run_id (UUID, required): The flow run ID for the flow run to wait for.
    - timeout (Optional[int], default 10800): The wait timeout in seconds. Defaults to 10800 (3 hours).
    - poll_interval (int, default 5): The poll interval in seconds. Defaults to 5.

    Returns:

    - FlowRun (FlowRun): The finished flow run.

    Raises:

    - FlowWaitTimeout: If the flow run goes over the timeout.

    Examples:

    Create a flow run for a deployment and wait for it to finish:

    import asyncio\n\nfrom prefect import get_client\nfrom prefect.flow_runs import wait_for_flow_run\n\nasync def main():\n    async with get_client() as client:\n        flow_run = await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n        flow_run = await wait_for_flow_run(flow_run_id=flow_run.id)\n        print(flow_run.state)\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n

    Trigger multiple flow runs and wait for them to finish:

    import asyncio\n\nfrom prefect import get_client\nfrom prefect.flow_runs import wait_for_flow_run\n\nasync def main(num_runs: int):\n    async with get_client() as client:\n        flow_runs = [\n            await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n            for _\n            in range(num_runs)\n        ]\n        coros = [wait_for_flow_run(flow_run_id=flow_run.id) for flow_run in flow_runs]\n        finished_flow_runs = await asyncio.gather(*coros)\n        print([flow_run.state for flow_run in finished_flow_runs])\n\nif __name__ == \"__main__\":\n    asyncio.run(main(num_runs=10))\n

    Source code in prefect/flow_runs.py
    @inject_client\nasync def wait_for_flow_run(\n    flow_run_id: UUID,\n    timeout: Optional[int] = 10800,\n    poll_interval: int = 5,\n    client: Optional[PrefectClient] = None,\n    log_states: bool = False,\n) -> FlowRun:\n    \"\"\"\n    Waits for the prefect flow run to finish and returns the FlowRun\n\n    Args:\n        flow_run_id: The flow run ID for the flow run to wait for.\n        timeout: The wait timeout in seconds. Defaults to 10800 (3 hours).\n        poll_interval: The poll interval in seconds. Defaults to 5.\n\n    Returns:\n        FlowRun: The finished flow run.\n\n    Raises:\n        prefect.exceptions.FlowWaitTimeout: If flow run goes over the timeout.\n\n    Examples:\n        Create a flow run for a deployment and wait for it to finish:\n            ```python\n            import asyncio\n\n            from prefect import get_client\n            from prefect.flow_runs import wait_for_flow_run\n\n            async def main():\n                async with get_client() as client:\n                    flow_run = await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n                    flow_run = await wait_for_flow_run(flow_run_id=flow_run.id)\n                    print(flow_run.state)\n\n            if __name__ == \"__main__\":\n                asyncio.run(main())\n\n            ```\n\n        Trigger multiple flow runs and wait for them to finish:\n            ```python\n            import asyncio\n\n            from prefect import get_client\n            from prefect.flow_runs import wait_for_flow_run\n\n            async def main(num_runs: int):\n                async with get_client() as client:\n                    flow_runs = [\n                        await client.create_flow_run_from_deployment(deployment_id=\"my-deployment-id\")\n                        for _\n                        in range(num_runs)\n                    ]\n                    coros = [wait_for_flow_run(flow_run_id=flow_run.id) for flow_run in flow_runs]\n                    finished_flow_runs = await asyncio.gather(*coros)\n                    print([flow_run.state for flow_run in finished_flow_runs])\n\n            if __name__ == \"__main__\":\n                asyncio.run(main(num_runs=10))\n\n            ```\n    \"\"\"\n    assert client is not None, \"Client injection failed\"\n    logger = get_logger()\n    with anyio.move_on_after(timeout):\n        while True:\n            flow_run = await client.read_flow_run(flow_run_id)\n            flow_state = flow_run.state\n            if log_states:\n                logger.info(f\"Flow run is in state {flow_run.state.name!r}\")\n            if flow_state and flow_state.is_final():\n                return flow_run\n            await anyio.sleep(poll_interval)\n    raise FlowRunWaitTimeout(\n        f\"Flow run with ID {flow_run_id} exceeded watch timeout of {timeout} seconds\"\n    )\n
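    Beyond the examples above, the source also accepts a poll interval and a log_states flag; the hedged sketch below uses both with a shorter timeout. The deployment ID is a placeholder, and exceeding the timeout raises the wait-timeout exception shown in the source.

    ```python
    import asyncio

    from prefect import get_client
    from prefect.flow_runs import wait_for_flow_run

    async def main():
        async with get_client() as client:
            # "my-deployment-id" is a placeholder for a real deployment ID.
            flow_run = await client.create_flow_run_from_deployment(deployment_id="my-deployment-id")

            # Poll every 10 seconds, log each observed state, and give up after 10 minutes.
            finished = await wait_for_flow_run(
                flow_run_id=flow_run.id,
                timeout=600,
                poll_interval=10,
                log_states=True,
            )
            print(finished.state)

    if __name__ == "__main__":
        asyncio.run(main())
    ```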
    ","tags":["Python API","flow-runs"]},{"location":"api-ref/prefect/flows/","title":"prefect.flows","text":"","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows","title":"prefect.flows","text":"

    Module containing the base workflow class and decorator - for most use cases, using the @flow decorator is preferred.

    ","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow","title":"Flow","text":"

    Bases: Generic[P, R]

    A Prefect workflow definition.

    Note

    We recommend using the @flow decorator for most use-cases.

    Wraps a function with an entrypoint to the Prefect engine. To preserve the input and output types, we use the generic type variables P and R for \"Parameters\" and \"Returns\" respectively.
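    The options listed below are normally passed through the decorator rather than by constructing Flow directly; a minimal sketch, with placeholder names and values:

    ```python
    from prefect import flow

    @flow(
        name="etl",                    # placeholder flow name
        retries=2,                     # retry the whole flow run up to twice on failure
        retry_delay_seconds=10,        # seconds to wait between retries
        flow_run_name="etl-{source}",  # string template rendered from the flow's parameters
        timeout_seconds=600,           # mark the run failed if it exceeds 10 minutes
        log_prints=True,
    )
    def etl(source: str = "demo"):
        print(f"processing {source}")
        return 1

    if __name__ == "__main__":
        etl()
    ```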

    Parameters:

    - fn (Callable[P, R], required): The function defining the workflow.
    - name (Optional[str], default None): An optional name for the flow; if not provided, the name will be inferred from the given function.
    - version (Optional[str], default None): An optional version string for the flow; if not provided, we will attempt to create a version string as a hash of the file containing the wrapped function; if the file cannot be located, the version will be null.
    - flow_run_name (Optional[Union[Callable[[], str], str]], default None): An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string.
    - task_runner (Union[Type[BaseTaskRunner], BaseTaskRunner], default ConcurrentTaskRunner): An optional task runner to use for task execution within the flow; if not provided, a ConcurrentTaskRunner will be used.
    - description (str, default None): An optional string description for the flow; if not provided, the description will be pulled from the docstring for the decorated function.
    - timeout_seconds (Union[int, float], default None): An optional number of seconds indicating a maximum runtime for the flow. If the flow exceeds this runtime, it will be marked as failed. Flow execution may continue until the next task is called.
    - validate_parameters (bool, default True): By default, parameters passed to flows are validated by Pydantic. This will check that input values conform to the annotated types on the function. Where possible, values will be coerced into the correct type; for example, if a parameter is defined as x: int and "5" is passed, it will be resolved to 5. If set to False, no validation will be performed on flow parameters.
    - retries (Optional[int], default None): An optional number of times to retry on flow run failure.
    - retry_delay_seconds (Optional[Union[int, float]], default None): An optional number of seconds to wait before retrying the flow after failure. This is only applicable if retries is nonzero.
    - persist_result (Optional[bool], default None): An optional toggle indicating whether the result of this flow should be persisted to result storage. Defaults to None, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.
    - result_storage (Optional[ResultStorage], default None): An optional block to use to persist the result of this flow. This value will be used as the default for any tasks in this flow. If not provided, the local file system will be used unless called as a subflow, at which point the default will be loaded from the parent flow.
    - result_serializer (Optional[ResultSerializer], default None): An optional serializer to use to serialize the result of this flow for persistence. This value will be used as the default for any tasks in this flow. If not provided, the value of PREFECT_RESULTS_DEFAULT_SERIALIZER will be used unless called as a subflow, at which point the default will be loaded from the parent flow.
    - on_failure (Optional[List[Callable[[Flow, FlowRun, State], None]]], default None): An optional list of callables to run when the flow enters a failed state.
    - on_completion (Optional[List[Callable[[Flow, FlowRun, State], None]]], default None): An optional list of callables to run when the flow enters a completed state.
    - on_cancellation (Optional[List[Callable[[Flow, FlowRun, State], None]]], default None): An optional list of callables to run when the flow enters a cancelling state.
    - on_crashed (Optional[List[Callable[[Flow, FlowRun, State], None]]], default None): An optional list of callables to run when the flow enters a crashed state.
    - on_running (Optional[List[Callable[[Flow, FlowRun, State], None]]], default None): An optional list of callables to run when the flow enters a running state.

    Source code in prefect/flows.py
    @PrefectObjectRegistry.register_instances\nclass Flow(Generic[P, R]):\n    \"\"\"\n    A Prefect workflow definition.\n\n    !!! note\n        We recommend using the [`@flow` decorator][prefect.flows.flow] for most use-cases.\n\n    Wraps a function with an entrypoint to the Prefect engine. To preserve the input\n    and output types, we use the generic type variables `P` and `R` for \"Parameters\" and\n    \"Returns\" respectively.\n\n    Args:\n        fn: The function defining the workflow.\n        name: An optional name for the flow; if not provided, the name will be inferred\n            from the given function.\n        version: An optional version string for the flow; if not provided, we will\n            attempt to create a version string as a hash of the file containing the\n            wrapped function; if the file cannot be located, the version will be null.\n        flow_run_name: An optional name to distinguish runs of this flow; this name can\n            be provided as a string template with the flow's parameters as variables,\n            or a function that returns a string.\n        task_runner: An optional task runner to use for task execution within the flow;\n            if not provided, a `ConcurrentTaskRunner` will be used.\n        description: An optional string description for the flow; if not provided, the\n            description will be pulled from the docstring for the decorated function.\n        timeout_seconds: An optional number of seconds indicating a maximum runtime for\n            the flow. If the flow exceeds this runtime, it will be marked as failed.\n            Flow execution may continue until the next task is called.\n        validate_parameters: By default, parameters passed to flows are validated by\n            Pydantic. This will check that input values conform to the annotated types\n            on the function. Where possible, values will be coerced into the correct\n            type; for example, if a parameter is defined as `x: int` and \"5\" is passed,\n            it will be resolved to `5`. If set to `False`, no validation will be\n            performed on flow parameters.\n        retries: An optional number of times to retry on flow run failure.\n        retry_delay_seconds: An optional number of seconds to wait before retrying the\n            flow after failure. This is only applicable if `retries` is nonzero.\n        persist_result: An optional toggle indicating whether the result of this flow\n            should be persisted to result storage. Defaults to `None`, which indicates\n            that Prefect should choose whether the result should be persisted depending on\n            the features being used.\n        result_storage: An optional block to use to persist the result of this flow.\n            This value will be used as the default for any tasks in this flow.\n            If not provided, the local file system will be used unless called as\n            a subflow, at which point the default will be loaded from the parent flow.\n        result_serializer: An optional serializer to use to serialize the result of this\n            flow for persistence. This value will be used as the default for any tasks\n            in this flow. 
If not provided, the value of `PREFECT_RESULTS_DEFAULT_SERIALIZER`\n            will be used unless called as a subflow, at which point the default will be\n            loaded from the parent flow.\n        on_failure: An optional list of callables to run when the flow enters a failed state.\n        on_completion: An optional list of callables to run when the flow enters a completed state.\n        on_cancellation: An optional list of callables to run when the flow enters a cancelling state.\n        on_crashed: An optional list of callables to run when the flow enters a crashed state.\n        on_running: An optional list of callables to run when the flow enters a running state.\n    \"\"\"\n\n    # NOTE: These parameters (types, defaults, and docstrings) should be duplicated\n    #       exactly in the @flow decorator\n    def __init__(\n        self,\n        fn: Callable[P, R],\n        name: Optional[str] = None,\n        version: Optional[str] = None,\n        flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n        retries: Optional[int] = None,\n        retry_delay_seconds: Optional[Union[int, float]] = None,\n        task_runner: Union[Type[BaseTaskRunner], BaseTaskRunner] = ConcurrentTaskRunner,\n        description: str = None,\n        timeout_seconds: Union[int, float] = None,\n        validate_parameters: bool = True,\n        persist_result: Optional[bool] = None,\n        result_storage: Optional[ResultStorage] = None,\n        result_serializer: Optional[ResultSerializer] = None,\n        cache_result_in_memory: bool = True,\n        log_prints: Optional[bool] = None,\n        on_completion: Optional[\n            List[Callable[[FlowSchema, FlowRun, State], None]]\n        ] = None,\n        on_failure: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n        on_cancellation: Optional[\n            List[Callable[[FlowSchema, FlowRun, State], None]]\n        ] = None,\n        on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n        on_running: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    ):\n        if name is not None and not isinstance(name, str):\n            raise TypeError(\n                \"Expected string for flow parameter 'name'; got {} instead. {}\".format(\n                    type(name).__name__,\n                    (\n                        \"Perhaps you meant to call it? 
e.g.\"\n                        \" '@flow(name=get_flow_run_name())'\"\n                        if callable(name)\n                        else \"\"\n                    ),\n                )\n            )\n\n        # Validate if hook passed is list and contains callables\n        hook_categories = [\n            on_completion,\n            on_failure,\n            on_cancellation,\n            on_crashed,\n            on_running,\n        ]\n        hook_names = [\n            \"on_completion\",\n            \"on_failure\",\n            \"on_cancellation\",\n            \"on_crashed\",\n            \"on_running\",\n        ]\n        for hooks, hook_name in zip(hook_categories, hook_names):\n            if hooks is not None:\n                if not hooks:\n                    raise ValueError(f\"Empty list passed for '{hook_name}'\")\n                try:\n                    hooks = list(hooks)\n                except TypeError:\n                    raise TypeError(\n                        f\"Expected iterable for '{hook_name}'; got\"\n                        f\" {type(hooks).__name__} instead. Please provide a list of\"\n                        f\" hooks to '{hook_name}':\\n\\n\"\n                        f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n                        \" my_flow():\\n\\tpass\"\n                    )\n\n                for hook in hooks:\n                    if not callable(hook):\n                        raise TypeError(\n                            f\"Expected callables in '{hook_name}'; got\"\n                            f\" {type(hook).__name__} instead. Please provide a list of\"\n                            f\" hooks to '{hook_name}':\\n\\n\"\n                            f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n                            \" my_flow():\\n\\tpass\"\n                        )\n\n        if not callable(fn):\n            raise TypeError(\"'fn' must be callable\")\n\n        # Validate name if given\n        if name:\n            raise_on_name_with_banned_characters(name)\n\n        self.name = name or fn.__name__.replace(\"_\", \"-\")\n\n        if flow_run_name is not None:\n            if not isinstance(flow_run_name, str) and not callable(flow_run_name):\n                raise TypeError(\n                    \"Expected string or callable for 'flow_run_name'; got\"\n                    f\" {type(flow_run_name).__name__} instead.\"\n                )\n        self.flow_run_name = flow_run_name\n\n        task_runner = task_runner or ConcurrentTaskRunner()\n        self.task_runner = (\n            task_runner() if isinstance(task_runner, type) else task_runner\n        )\n\n        self.log_prints = log_prints\n\n        self.description = description or inspect.getdoc(fn)\n        update_wrapper(self, fn)\n        self.fn = fn\n        self.isasync = is_async_fn(self.fn)\n\n        raise_for_reserved_arguments(self.fn, [\"return_state\", \"wait_for\"])\n\n        # Version defaults to a hash of the function's file\n        flow_file = inspect.getsourcefile(self.fn)\n        if not version:\n            try:\n                version = file_hash(flow_file)\n            except (FileNotFoundError, TypeError, OSError):\n                pass  # `getsourcefile` can return null values and \"<stdin>\" for objects in repls\n        self.version = version\n\n        self.timeout_seconds = float(timeout_seconds) if timeout_seconds else None\n\n        # FlowRunPolicy settings\n        # TODO: We can instantiate a `FlowRunPolicy` and add Pydantic 
bound checks to\n        #       validate that the user passes positive numbers here\n        self.retries = (\n            retries if retries is not None else PREFECT_FLOW_DEFAULT_RETRIES.value()\n        )\n\n        self.retry_delay_seconds = (\n            retry_delay_seconds\n            if retry_delay_seconds is not None\n            else PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS.value()\n        )\n\n        self.parameters = parameter_schema(self.fn)\n        self.should_validate_parameters = validate_parameters\n\n        if self.should_validate_parameters:\n            # Try to create the validated function now so that incompatibility can be\n            # raised at declaration time rather than at runtime\n            # We cannot, however, store the validated function on the flow because it\n            # is not picklable in some environments\n            try:\n                ValidatedFunction(self.fn, config={\"arbitrary_types_allowed\": True})\n            except pydantic.ConfigError as exc:\n                raise ValueError(\n                    \"Flow function is not compatible with `validate_parameters`. \"\n                    \"Disable validation or change the argument names.\"\n                ) from exc\n\n        self.persist_result = persist_result\n        self.result_storage = result_storage\n        self.result_serializer = result_serializer\n        self.cache_result_in_memory = cache_result_in_memory\n\n        # Check for collision in the registry\n        registry = PrefectObjectRegistry.get()\n\n        if registry and any(\n            other\n            for other in registry.get_instances(Flow)\n            if other.name == self.name and id(other.fn) != id(self.fn)\n        ):\n            file = inspect.getsourcefile(self.fn)\n            line_number = inspect.getsourcelines(self.fn)[1]\n            warnings.warn(\n                f\"A flow named {self.name!r} and defined at '{file}:{line_number}' \"\n                \"conflicts with another flow. 
Consider specifying a unique `name` \"\n                \"parameter in the flow definition:\\n\\n \"\n                \"`@flow(name='my_unique_name', ...)`\"\n            )\n        self.on_completion = on_completion\n        self.on_failure = on_failure\n        self.on_cancellation = on_cancellation\n        self.on_crashed = on_crashed\n        self.on_running = on_running\n\n        # Used for flows loaded from remote storage\n        self._storage: Optional[RunnerStorage] = None\n        self._entrypoint: Optional[str] = None\n\n        module = fn.__module__\n        if module in (\"__main__\", \"__prefect_loader__\"):\n            module_name = inspect.getfile(fn)\n            module = module_name if module_name != \"__main__\" else module\n\n        self._entrypoint = f\"{module}:{fn.__name__}\"\n\n    def with_options(\n        self,\n        *,\n        name: str = None,\n        version: str = None,\n        retries: Optional[int] = None,\n        retry_delay_seconds: Optional[Union[int, float]] = None,\n        description: str = None,\n        flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n        task_runner: Union[Type[BaseTaskRunner], BaseTaskRunner] = None,\n        timeout_seconds: Union[int, float] = None,\n        validate_parameters: bool = None,\n        persist_result: Optional[bool] = NotSet,\n        result_storage: Optional[ResultStorage] = NotSet,\n        result_serializer: Optional[ResultSerializer] = NotSet,\n        cache_result_in_memory: bool = None,\n        log_prints: Optional[bool] = NotSet,\n        on_completion: Optional[\n            List[Callable[[FlowSchema, FlowRun, State], None]]\n        ] = None,\n        on_failure: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n        on_cancellation: Optional[\n            List[Callable[[FlowSchema, FlowRun, State], None]]\n        ] = None,\n        on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n        on_running: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    ) -> Self:\n        \"\"\"\n        Create a new flow from the current object, updating provided options.\n\n        Args:\n            name: A new name for the flow.\n            version: A new version for the flow.\n            description: A new description for the flow.\n            flow_run_name: An optional name to distinguish runs of this flow; this name\n                can be provided as a string template with the flow's parameters as variables,\n                or a function that returns a string.\n            task_runner: A new task runner for the flow.\n            timeout_seconds: A new number of seconds to fail the flow after if still\n                running.\n            validate_parameters: A new value indicating if flow calls should validate\n                given parameters.\n            retries: A new number of times to retry on flow run failure.\n            retry_delay_seconds: A new number of seconds to wait before retrying the\n                flow after failure. 
This is only applicable if `retries` is nonzero.\n            persist_result: A new option for enabling or disabling result persistence.\n            result_storage: A new storage type to use for results.\n            result_serializer: A new serializer to use for results.\n            cache_result_in_memory: A new value indicating if the flow's result should\n                be cached in memory.\n            on_failure: A new list of callables to run when the flow enters a failed state.\n            on_completion: A new list of callables to run when the flow enters a completed state.\n            on_cancellation: A new list of callables to run when the flow enters a cancelling state.\n            on_crashed: A new list of callables to run when the flow enters a crashed state.\n            on_running: A new list of callables to run when the flow enters a running state.\n\n        Returns:\n            A new `Flow` instance.\n\n        Examples:\n\n            Create a new flow from an existing flow and update the name:\n\n            >>> @flow(name=\"My flow\")\n            >>> def my_flow():\n            >>>     return 1\n            >>>\n            >>> new_flow = my_flow.with_options(name=\"My new flow\")\n\n            Create a new flow from an existing flow, update the task runner, and call\n            it without an intermediate variable:\n\n            >>> from prefect.task_runners import SequentialTaskRunner\n            >>>\n            >>> @flow\n            >>> def my_flow(x, y):\n            >>>     return x + y\n            >>>\n            >>> state = my_flow.with_options(task_runner=SequentialTaskRunner)(1, 3)\n            >>> assert state.result() == 4\n\n        \"\"\"\n        new_flow = Flow(\n            fn=self.fn,\n            name=name or self.name,\n            description=description or self.description,\n            flow_run_name=flow_run_name or self.flow_run_name,\n            version=version or self.version,\n            task_runner=task_runner or self.task_runner,\n            retries=retries if retries is not None else self.retries,\n            retry_delay_seconds=(\n                retry_delay_seconds\n                if retry_delay_seconds is not None\n                else self.retry_delay_seconds\n            ),\n            timeout_seconds=(\n                timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n            ),\n            validate_parameters=(\n                validate_parameters\n                if validate_parameters is not None\n                else self.should_validate_parameters\n            ),\n            persist_result=(\n                persist_result if persist_result is not NotSet else self.persist_result\n            ),\n            result_storage=(\n                result_storage if result_storage is not NotSet else self.result_storage\n            ),\n            result_serializer=(\n                result_serializer\n                if result_serializer is not NotSet\n                else self.result_serializer\n            ),\n            cache_result_in_memory=(\n                cache_result_in_memory\n                if cache_result_in_memory is not None\n                else self.cache_result_in_memory\n            ),\n            log_prints=log_prints if log_prints is not NotSet else self.log_prints,\n            on_completion=on_completion or self.on_completion,\n            on_failure=on_failure or self.on_failure,\n            on_cancellation=on_cancellation or self.on_cancellation,\n            
on_crashed=on_crashed or self.on_crashed,\n            on_running=on_running or self.on_running,\n        )\n        new_flow._storage = self._storage\n        new_flow._entrypoint = self._entrypoint\n        return new_flow\n\n    def validate_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n        \"\"\"\n        Validate parameters for compatibility with the flow by attempting to cast the inputs to the\n        associated types specified by the function's type annotations.\n\n        Returns:\n            A new dict of parameters that have been cast to the appropriate types\n\n        Raises:\n            ParameterTypeError: if the provided parameters are not valid\n        \"\"\"\n        args, kwargs = parameters_to_args_kwargs(self.fn, parameters)\n\n        if HAS_PYDANTIC_V2:\n            has_v1_models = any(isinstance(o, V1BaseModel) for o in args) or any(\n                isinstance(o, V1BaseModel) for o in kwargs.values()\n            )\n            has_v2_types = any(is_v2_type(o) for o in args) or any(\n                is_v2_type(o) for o in kwargs.values()\n            )\n\n            if has_v1_models and has_v2_types:\n                raise ParameterTypeError(\n                    \"Cannot mix Pydantic v1 and v2 types as arguments to a flow.\"\n                )\n\n            if has_v1_models:\n                validated_fn = V1ValidatedFunction(\n                    self.fn, config={\"arbitrary_types_allowed\": True}\n                )\n            else:\n                validated_fn = V2ValidatedFunction(\n                    self.fn, config={\"arbitrary_types_allowed\": True}\n                )\n\n        else:\n            validated_fn = ValidatedFunction(\n                self.fn, config={\"arbitrary_types_allowed\": True}\n            )\n\n        try:\n            model = validated_fn.init_model_instance(*args, **kwargs)\n        except pydantic.ValidationError as exc:\n            # We capture the pydantic exception and raise our own because the pydantic\n            # exception is not picklable when using a cythonized pydantic installation\n            raise ParameterTypeError.from_validation_error(exc) from None\n        except V2ValidationError as exc:\n            # We capture the pydantic exception and raise our own because the pydantic\n            # exception is not picklable when using a cythonized pydantic installation\n            raise ParameterTypeError.from_validation_error(exc) from None\n\n        # Get the updated parameter dict with cast values from the model\n        cast_parameters = {\n            k: v\n            for k, v in model._iter()\n            if k in model.__fields_set__ or model.__fields__[k].default_factory\n        }\n        return cast_parameters\n\n    def serialize_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n        \"\"\"\n        Convert parameters to a serializable form.\n\n        Uses FastAPI's `jsonable_encoder` to convert to JSON compatible objects without\n        converting everything directly to a string. 
This maintains basic types like\n        integers during API roundtrips.\n        \"\"\"\n        serialized_parameters = {}\n        for key, value in parameters.items():\n            try:\n                serialized_parameters[key] = jsonable_encoder(value)\n            except (TypeError, ValueError):\n                logger.debug(\n                    f\"Parameter {key!r} for flow {self.name!r} is of unserializable \"\n                    f\"type {type(value).__name__!r} and will not be stored \"\n                    \"in the backend.\"\n                )\n                serialized_parameters[key] = f\"<{type(value).__name__}>\"\n        return serialized_parameters\n\n    @sync_compatible\n    @deprecated_parameter(\n        \"schedule\",\n        start_date=\"Mar 2024\",\n        when=lambda p: p is not None,\n        help=\"Use `schedules` instead.\",\n    )\n    @deprecated_parameter(\n        \"is_schedule_active\",\n        start_date=\"Mar 2024\",\n        when=lambda p: p is not None,\n        help=\"Use `paused` instead.\",\n    )\n    async def to_deployment(\n        self,\n        name: str,\n        interval: Optional[\n            Union[\n                Iterable[Union[int, float, datetime.timedelta]],\n                int,\n                float,\n                datetime.timedelta,\n            ]\n        ] = None,\n        cron: Optional[Union[Iterable[str], str]] = None,\n        rrule: Optional[Union[Iterable[str], str]] = None,\n        paused: Optional[bool] = None,\n        schedules: Optional[List[\"FlexibleScheduleList\"]] = None,\n        schedule: Optional[SCHEDULE_TYPES] = None,\n        is_schedule_active: Optional[bool] = None,\n        parameters: Optional[dict] = None,\n        triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n        description: Optional[str] = None,\n        tags: Optional[List[str]] = None,\n        version: Optional[str] = None,\n        enforce_parameter_schema: bool = False,\n        work_pool_name: Optional[str] = None,\n        work_queue_name: Optional[str] = None,\n        job_variables: Optional[Dict[str, Any]] = None,\n        entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n    ) -> \"RunnerDeployment\":\n        \"\"\"\n        Creates a runner deployment object for this flow.\n\n        Args:\n            name: The name to give the created deployment.\n            interval: An interval on which to execute the new deployment. Accepts either a number\n                or a timedelta object. If a number is given, it will be interpreted as seconds.\n            cron: A cron schedule of when to execute runs of this deployment.\n            rrule: An rrule schedule of when to execute runs of this deployment.\n            paused: Whether or not to set this deployment as paused.\n            schedules: A list of schedule objects defining when to execute runs of this deployment.\n                Used to define multiple schedules or additional scheduling options such as `timezone`.\n            schedule: A schedule object defining when to execute runs of this deployment.\n            is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n                not provided when creating a deployment, the schedule will be set as active. 
If not\n                provided when updating a deployment, the schedule's activation will not be changed.\n            parameters: A dictionary of default parameter values to pass to runs of this deployment.\n            triggers: A list of triggers that will kick off runs of this deployment.\n            description: A description for the created deployment. Defaults to the flow's\n                description if not provided.\n            tags: A list of tags to associate with the created deployment for organizational\n                purposes.\n            version: A version for the created deployment. Defaults to the flow's version.\n            enforce_parameter_schema: Whether or not the Prefect API should enforce the\n                parameter schema for the created deployment.\n            work_pool_name: The name of the work pool to use for this deployment.\n            work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n                If not provided the default work queue for the work pool will be used.\n            job_variables: Settings used to override the values specified default base job template\n                of the chosen work pool. Refer to the base job template of the chosen work pool for\n            entrypoint_type: Type of entrypoint to use for the deployment. When using a module path\n                entrypoint, ensure that the module will be importable in the execution environment.\n\n        Examples:\n            Prepare two deployments and serve them:\n\n            ```python\n            from prefect import flow, serve\n\n            @flow\n            def my_flow(name):\n                print(f\"hello {name}\")\n\n            @flow\n            def my_other_flow(name):\n                print(f\"goodbye {name}\")\n\n            if __name__ == \"__main__\":\n                hello_deploy = my_flow.to_deployment(\"hello\", tags=[\"dev\"])\n                bye_deploy = my_other_flow.to_deployment(\"goodbye\", tags=[\"dev\"])\n                serve(hello_deploy, bye_deploy)\n            ```\n        \"\"\"\n        from prefect.deployments.runner import RunnerDeployment\n\n        if not name.endswith(\".py\"):\n            raise_on_name_with_banned_characters(name)\n        if self._storage and self._entrypoint:\n            return await RunnerDeployment.from_storage(\n                storage=self._storage,\n                entrypoint=self._entrypoint,\n                name=name,\n                interval=interval,\n                cron=cron,\n                rrule=rrule,\n                paused=paused,\n                schedules=schedules,\n                schedule=schedule,\n                is_schedule_active=is_schedule_active,\n                tags=tags,\n                triggers=triggers,\n                parameters=parameters or {},\n                description=description,\n                version=version,\n                enforce_parameter_schema=enforce_parameter_schema,\n                work_pool_name=work_pool_name,\n                work_queue_name=work_queue_name,\n                job_variables=job_variables,\n            )\n        else:\n            return RunnerDeployment.from_flow(\n                self,\n                name=name,\n                interval=interval,\n                cron=cron,\n                rrule=rrule,\n                paused=paused,\n                schedules=schedules,\n                schedule=schedule,\n                is_schedule_active=is_schedule_active,\n                
tags=tags,\n                triggers=triggers,\n                parameters=parameters or {},\n                description=description,\n                version=version,\n                enforce_parameter_schema=enforce_parameter_schema,\n                work_pool_name=work_pool_name,\n                work_queue_name=work_queue_name,\n                job_variables=job_variables,\n                entrypoint_type=entrypoint_type,\n            )\n\n    @sync_compatible\n    async def serve(\n        self,\n        name: Optional[str] = None,\n        interval: Optional[\n            Union[\n                Iterable[Union[int, float, datetime.timedelta]],\n                int,\n                float,\n                datetime.timedelta,\n            ]\n        ] = None,\n        cron: Optional[Union[Iterable[str], str]] = None,\n        rrule: Optional[Union[Iterable[str], str]] = None,\n        paused: Optional[bool] = None,\n        schedules: Optional[List[\"FlexibleScheduleList\"]] = None,\n        schedule: Optional[SCHEDULE_TYPES] = None,\n        is_schedule_active: Optional[bool] = None,\n        triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n        parameters: Optional[dict] = None,\n        description: Optional[str] = None,\n        tags: Optional[List[str]] = None,\n        version: Optional[str] = None,\n        enforce_parameter_schema: bool = False,\n        pause_on_shutdown: bool = True,\n        print_starting_message: bool = True,\n        limit: Optional[int] = None,\n        webserver: bool = False,\n        entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n    ):\n        \"\"\"\n        Creates a deployment for this flow and starts a runner to monitor for scheduled work.\n\n        Args:\n            name: The name to give the created deployment. Defaults to the name of the flow.\n            interval: An interval on which to execute the deployment. Accepts a number or a\n                timedelta object to create a single schedule. If a number is given, it will be\n                interpreted as seconds. Also accepts an iterable of numbers or timedelta to create\n                multiple schedules.\n            cron: A cron schedule string of when to execute runs of this deployment.\n                Also accepts an iterable of cron schedule strings to create multiple schedules.\n            rrule: An rrule schedule string of when to execute runs of this deployment.\n                Also accepts an iterable of rrule schedule strings to create multiple schedules.\n            triggers: A list of triggers that will kick off runs of this deployment.\n            paused: Whether or not to set this deployment as paused.\n            schedules: A list of schedule objects defining when to execute runs of this deployment.\n                Used to define multiple schedules or additional scheduling options like `timezone`.\n            schedule: A schedule object defining when to execute runs of this deployment. Used to\n                define additional scheduling options such as `timezone`.\n            is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n                not provided when creating a deployment, the schedule will be set as active. 
If not\n                provided when updating a deployment, the schedule's activation will not be changed.\n            parameters: A dictionary of default parameter values to pass to runs of this deployment.\n            description: A description for the created deployment. Defaults to the flow's\n                description if not provided.\n            tags: A list of tags to associate with the created deployment for organizational\n                purposes.\n            version: A version for the created deployment. Defaults to the flow's version.\n            enforce_parameter_schema: Whether or not the Prefect API should enforce the\n                parameter schema for the created deployment.\n            pause_on_shutdown: If True, provided schedule will be paused when the serve function is stopped.\n                If False, the schedules will continue running.\n            print_starting_message: Whether or not to print the starting message when flow is served.\n            limit: The maximum number of runs that can be executed concurrently.\n            webserver: Whether or not to start a monitoring webserver for this flow.\n            entrypoint_type: Type of entrypoint to use for the deployment. When using a module path\n                entrypoint, ensure that the module will be importable in the execution environment.\n\n        Examples:\n            Serve a flow:\n\n            ```python\n            from prefect import flow\n\n            @flow\n            def my_flow(name):\n                print(f\"hello {name}\")\n\n            if __name__ == \"__main__\":\n                my_flow.serve(\"example-deployment\")\n            ```\n\n            Serve a flow and run it every hour:\n\n            ```python\n            from prefect import flow\n\n            @flow\n            def my_flow(name):\n                print(f\"hello {name}\")\n\n            if __name__ == \"__main__\":\n                my_flow.serve(\"example-deployment\", interval=3600)\n            ```\n        \"\"\"\n        from prefect.runner import Runner\n\n        if not name:\n            name = self.name\n        else:\n            # Handling for my_flow.serve(__file__)\n            # Will set name to name of file where my_flow.serve() without the extension\n            # Non filepath strings will pass through unchanged\n            name = Path(name).stem\n\n        runner = Runner(name=name, pause_on_shutdown=pause_on_shutdown, limit=limit)\n        deployment_id = await runner.add_flow(\n            self,\n            name=name,\n            triggers=triggers,\n            interval=interval,\n            cron=cron,\n            rrule=rrule,\n            paused=paused,\n            schedules=schedules,\n            schedule=schedule,\n            is_schedule_active=is_schedule_active,\n            parameters=parameters,\n            description=description,\n            tags=tags,\n            version=version,\n            enforce_parameter_schema=enforce_parameter_schema,\n            entrypoint_type=entrypoint_type,\n        )\n        if print_starting_message:\n            help_message = (\n                f\"[green]Your flow {self.name!r} is being served and polling for\"\n                \" scheduled runs!\\n[/]\\nTo trigger a run for this flow, use the\"\n                \" following command:\\n[blue]\\n\\t$ prefect deployment run\"\n                f\" '{self.name}/{name}'\\n[/]\"\n            )\n            if PREFECT_UI_URL:\n                help_message += (\n                    
\"\\nYou can also run your flow via the Prefect UI:\"\n                    f\" [blue]{PREFECT_UI_URL.value()}/deployments/deployment/{deployment_id}[/]\\n\"\n                )\n\n            console = Console()\n            console.print(help_message, soft_wrap=True)\n        await runner.start(webserver=webserver)\n\n    @classmethod\n    @sync_compatible\n    async def from_source(\n        cls: Type[F],\n        source: Union[str, RunnerStorage, ReadableDeploymentStorage],\n        entrypoint: str,\n    ) -> F:\n        \"\"\"\n        Loads a flow from a remote source.\n\n        Args:\n            source: Either a URL to a git repository or a storage object.\n            entrypoint:  The path to a file containing a flow and the name of the flow function in\n                the format `./path/to/file.py:flow_func_name`.\n\n        Returns:\n            A new `Flow` instance.\n\n        Examples:\n            Load a flow from a public git repository:\n\n\n            ```python\n            from prefect import flow\n            from prefect.runner.storage import GitRepository\n            from prefect.blocks.system import Secret\n\n            my_flow = flow.from_source(\n                source=\"https://github.com/org/repo.git\",\n                entrypoint=\"flows.py:my_flow\",\n            )\n\n            my_flow()\n            ```\n\n            Load a flow from a private git repository using an access token stored in a `Secret` block:\n\n            ```python\n            from prefect import flow\n            from prefect.runner.storage import GitRepository\n            from prefect.blocks.system import Secret\n\n            my_flow = flow.from_source(\n                source=GitRepository(\n                    url=\"https://github.com/org/repo.git\",\n                    credentials={\"access_token\": Secret.load(\"github-access-token\")}\n                ),\n                entrypoint=\"flows.py:my_flow\",\n            )\n\n            my_flow()\n            ```\n        \"\"\"\n        if isinstance(source, str):\n            storage = create_storage_from_url(source)\n        elif isinstance(source, RunnerStorage):\n            storage = source\n        elif hasattr(source, \"get_directory\"):\n            storage = BlockStorageAdapter(source)\n        else:\n            raise TypeError(\n                f\"Unsupported source type {type(source).__name__!r}. 
Please provide a\"\n                \" URL to remote storage or a storage object.\"\n            )\n        with tempfile.TemporaryDirectory() as tmpdir:\n            storage.set_base_path(Path(tmpdir))\n            await storage.pull_code()\n\n            full_entrypoint = str(storage.destination / entrypoint)\n            flow: \"Flow\" = await from_async.wait_for_call_in_new_thread(\n                create_call(load_flow_from_entrypoint, full_entrypoint)\n            )\n            flow._storage = storage\n            flow._entrypoint = entrypoint\n\n        return flow\n\n    @sync_compatible\n    async def deploy(\n        self,\n        name: str,\n        work_pool_name: Optional[str] = None,\n        image: Optional[Union[str, DeploymentImage]] = None,\n        build: bool = True,\n        push: bool = True,\n        work_queue_name: Optional[str] = None,\n        job_variables: Optional[dict] = None,\n        interval: Optional[Union[int, float, datetime.timedelta]] = None,\n        cron: Optional[str] = None,\n        rrule: Optional[str] = None,\n        paused: Optional[bool] = None,\n        schedules: Optional[List[MinimalDeploymentSchedule]] = None,\n        schedule: Optional[SCHEDULE_TYPES] = None,\n        is_schedule_active: Optional[bool] = None,\n        triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n        parameters: Optional[dict] = None,\n        description: Optional[str] = None,\n        tags: Optional[List[str]] = None,\n        version: Optional[str] = None,\n        enforce_parameter_schema: bool = False,\n        entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n        print_next_steps: bool = True,\n        ignore_warnings: bool = False,\n    ) -> UUID:\n        \"\"\"\n        Deploys a flow to run on dynamic infrastructure via a work pool.\n\n        By default, calling this method will build a Docker image for the flow, push it to a registry,\n        and create a deployment via the Prefect API that will run the flow on the given schedule.\n\n        If you want to use an existing image, you can pass `build=False` to skip building and pushing\n        an image.\n\n        Args:\n            name: The name to give the created deployment.\n            work_pool_name: The name of the work pool to use for this deployment. Defaults to\n                the value of `PREFECT_DEFAULT_WORK_POOL_NAME`.\n            image: The name of the Docker image to build, including the registry and\n                repository. Pass a DeploymentImage instance to customize the Dockerfile used\n                and build arguments.\n            build: Whether or not to build a new image for the flow. If False, the provided\n                image will be used as-is and pulled at runtime.\n            push: Whether or not to skip pushing the built image to a registry.\n            work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n                If not provided the default work queue for the work pool will be used.\n            job_variables: Settings used to override the values specified default base job template\n                of the chosen work pool. Refer to the base job template of the chosen work pool for\n                available settings.\n            interval: An interval on which to execute the deployment. Accepts a number or a\n                timedelta object to create a single schedule. If a number is given, it will be\n                interpreted as seconds. 
Also accepts an iterable of numbers or timedelta to create\n                multiple schedules.\n            cron: A cron schedule string of when to execute runs of this deployment.\n                Also accepts an iterable of cron schedule strings to create multiple schedules.\n            rrule: An rrule schedule string of when to execute runs of this deployment.\n                Also accepts an iterable of rrule schedule strings to create multiple schedules.\n            triggers: A list of triggers that will kick off runs of this deployment.\n            paused: Whether or not to set this deployment as paused.\n            schedules: A list of schedule objects defining when to execute runs of this deployment.\n                Used to define multiple schedules or additional scheduling options like `timezone`.\n            schedule: A schedule object defining when to execute runs of this deployment. Used to\n                define additional scheduling options like `timezone`.\n            is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n                not provided when creating a deployment, the schedule will be set as active. If not\n                provided when updating a deployment, the schedule's activation will not be changed.\n            parameters: A dictionary of default parameter values to pass to runs of this deployment.\n            description: A description for the created deployment. Defaults to the flow's\n                description if not provided.\n            tags: A list of tags to associate with the created deployment for organizational\n                purposes.\n            version: A version for the created deployment. Defaults to the flow's version.\n            enforce_parameter_schema: Whether or not the Prefect API should enforce the\n                parameter schema for the created deployment.\n            entrypoint_type: Type of entrypoint to use for the deployment. 
When using a module path\n                entrypoint, ensure that the module will be importable in the execution environment.\n            print_next_steps_message: Whether or not to print a message with next steps\n                after deploying the deployments.\n            ignore_warnings: Whether or not to ignore warnings about the work pool type.\n\n        Returns:\n            The ID of the created/updated deployment.\n\n        Examples:\n            Deploy a local flow to a work pool:\n\n            ```python\n            from prefect import flow\n\n            @flow\n            def my_flow(name):\n                print(f\"hello {name}\")\n\n            if __name__ == \"__main__\":\n                my_flow.deploy(\n                    \"example-deployment\",\n                    work_pool_name=\"my-work-pool\",\n                    image=\"my-repository/my-image:dev\",\n                )\n            ```\n\n            Deploy a remotely stored flow to a work pool:\n\n            ```python\n            from prefect import flow\n\n            if __name__ == \"__main__\":\n                flow.from_source(\n                    source=\"https://github.com/org/repo.git\",\n                    entrypoint=\"flows.py:my_flow\",\n                ).deploy(\n                    \"example-deployment\",\n                    work_pool_name=\"my-work-pool\",\n                    image=\"my-repository/my-image:dev\",\n                )\n            ```\n        \"\"\"\n        work_pool_name = work_pool_name or PREFECT_DEFAULT_WORK_POOL_NAME.value()\n\n        try:\n            async with get_client() as client:\n                work_pool = await client.read_work_pool(work_pool_name)\n        except ObjectNotFound as exc:\n            raise ValueError(\n                f\"Could not find work pool {work_pool_name!r}. 
Please create it before\"\n                \" deploying this flow.\"\n            ) from exc\n\n        deployment = await self.to_deployment(\n            name=name,\n            interval=interval,\n            cron=cron,\n            rrule=rrule,\n            schedules=schedules,\n            paused=paused,\n            schedule=schedule,\n            is_schedule_active=is_schedule_active,\n            triggers=triggers,\n            parameters=parameters,\n            description=description,\n            tags=tags,\n            version=version,\n            enforce_parameter_schema=enforce_parameter_schema,\n            work_queue_name=work_queue_name,\n            job_variables=job_variables,\n            entrypoint_type=entrypoint_type,\n        )\n\n        deployment_ids = await deploy(\n            deployment,\n            work_pool_name=work_pool_name,\n            image=image,\n            build=build,\n            push=push,\n            print_next_steps_message=False,\n            ignore_warnings=ignore_warnings,\n        )\n\n        if print_next_steps:\n            console = Console()\n            if not work_pool.is_push_pool and not work_pool.is_managed_pool:\n                console.print(\n                    \"\\nTo execute flow runs from this deployment, start a worker in a\"\n                    \" separate terminal that pulls work from the\"\n                    f\" {work_pool_name!r} work pool:\"\n                )\n                console.print(\n                    f\"\\n\\t$ prefect worker start --pool {work_pool_name!r}\",\n                    style=\"blue\",\n                )\n            console.print(\n                \"\\nTo schedule a run for this deployment, use the following command:\"\n            )\n            console.print(\n                f\"\\n\\t$ prefect deployment run '{self.name}/{name}'\\n\",\n                style=\"blue\",\n            )\n            if PREFECT_UI_URL:\n                message = (\n                    \"\\nYou can also run your flow via the Prefect UI:\"\n                    f\" [blue]{PREFECT_UI_URL.value()}/deployments/deployment/{deployment_ids[0]}[/]\\n\"\n                )\n                console.print(message, soft_wrap=True)\n\n        return deployment_ids[0]\n\n    @overload\n    def __call__(self: \"Flow[P, NoReturn]\", *args: P.args, **kwargs: P.kwargs) -> None:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def __call__(\n        self: \"Flow[P, Coroutine[Any, Any, T]]\", *args: P.args, **kwargs: P.kwargs\n    ) -> Awaitable[T]:\n        ...\n\n    @overload\n    def __call__(\n        self: \"Flow[P, T]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> T:\n        ...\n\n    @overload\n    def __call__(\n        self: \"Flow[P, T]\",\n        *args: P.args,\n        return_state: Literal[True],\n        **kwargs: P.kwargs,\n    ) -> State[T]:\n        ...\n\n    def __call__(\n        self,\n        *args: \"P.args\",\n        return_state: bool = False,\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: \"P.kwargs\",\n    ):\n        \"\"\"\n        Run the flow and return its result.\n\n\n        Flow parameter values must be serializable by Pydantic.\n\n        If writing an async flow, this call must be awaited.\n\n        This will create a new flow run in the API.\n\n        Args:\n            *args: Arguments to run the 
flow with.\n            return_state: Return a Prefect State containing the result of the\n                flow run.\n            wait_for: Upstream task futures to wait for before starting the flow if called as a subflow\n            **kwargs: Keyword arguments to run the flow with.\n\n        Returns:\n            If `return_state` is False, returns the result of the flow run.\n            If `return_state` is True, returns the result of the flow run\n                wrapped in a Prefect State which provides error handling.\n\n        Examples:\n\n            Define a flow\n\n            >>> @flow\n            >>> def my_flow(name):\n            >>>     print(f\"hello {name}\")\n            >>>     return f\"goodbye {name}\"\n\n            Run a flow\n\n            >>> my_flow(\"marvin\")\n            hello marvin\n            \"goodbye marvin\"\n\n            Run a flow with additional tags\n\n            >>> from prefect import tags\n            >>> with tags(\"db\", \"blue\"):\n            >>>     my_flow(\"foo\")\n        \"\"\"\n        from prefect.engine import enter_flow_run_engine_from_flow_call\n\n        # Convert the call args/kwargs to a parameter dict\n        parameters = get_call_parameters(self.fn, args, kwargs)\n\n        return_type = \"state\" if return_state else \"result\"\n\n        task_viz_tracker = get_task_viz_tracker()\n        if task_viz_tracker:\n            # this is a subflow, for now return a single task and do not go further\n            # we can add support for exploring subflows for tasks in the future.\n            return track_viz_task(self.isasync, self.name, parameters)\n\n        if PREFECT_EXPERIMENTAL_ENABLE_NEW_ENGINE.value():\n            from prefect.new_flow_engine import run_flow, run_flow_sync\n\n            run_kwargs = dict(\n                flow=self,\n                parameters=parameters,\n                wait_for=wait_for,\n                return_type=return_type,\n            )\n            if self.isasync:\n                # this returns an awaitable coroutine\n                return run_flow(**run_kwargs)\n            else:\n                return run_flow_sync(**run_kwargs)\n\n        return enter_flow_run_engine_from_flow_call(\n            self,\n            parameters,\n            wait_for=wait_for,\n            return_type=return_type,\n        )\n\n    @overload\n    def _run(self: \"Flow[P, NoReturn]\", *args: P.args, **kwargs: P.kwargs) -> State[T]:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def _run(\n        self: \"Flow[P, Coroutine[Any, Any, T]]\", *args: P.args, **kwargs: P.kwargs\n    ) -> Awaitable[T]:\n        ...\n\n    @overload\n    def _run(self: \"Flow[P, T]\", *args: P.args, **kwargs: P.kwargs) -> State[T]:\n        ...\n\n    def _run(\n        self,\n        *args: \"P.args\",\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: \"P.kwargs\",\n    ):\n        \"\"\"\n        Run the flow and return its final state.\n\n        Examples:\n\n            Run a flow and get the returned result\n\n            >>> state = my_flow._run(\"marvin\")\n            >>> state.result()\n           \"goodbye marvin\"\n        \"\"\"\n        from prefect.engine import enter_flow_run_engine_from_flow_call\n\n        # Convert the call args/kwargs to a parameter dict\n        parameters = get_call_parameters(self.fn, args, kwargs)\n\n        return 
enter_flow_run_engine_from_flow_call(\n            self,\n            parameters,\n            wait_for=wait_for,\n            return_type=\"state\",\n        )\n\n    @sync_compatible\n    async def visualize(self, *args, **kwargs):\n        \"\"\"\n        Generates a graphviz object representing the current flow. In IPython notebooks,\n        it's rendered inline, otherwise in a new window as a PNG.\n\n        Raises:\n            - ImportError: If `graphviz` isn't installed.\n            - GraphvizExecutableNotFoundError: If the `dot` executable isn't found.\n            - FlowVisualizationError: If the flow can't be visualized for any other reason.\n        \"\"\"\n        if not PREFECT_UNIT_TEST_MODE:\n            warnings.warn(\n                \"`flow.visualize()` will execute code inside of your flow that is not\"\n                \" decorated with `@task` or `@flow`.\"\n            )\n\n        try:\n            with TaskVizTracker() as tracker:\n                if self.isasync:\n                    await self.fn(*args, **kwargs)\n                else:\n                    self.fn(*args, **kwargs)\n\n                graph = build_task_dependencies(tracker)\n\n                visualize_task_dependencies(graph, self.name)\n\n        except GraphvizImportError:\n            raise\n        except GraphvizExecutableNotFoundError:\n            raise\n        except VisualizationUnsupportedError:\n            raise\n        except FlowVisualizationError:\n            raise\n        except Exception as e:\n            msg = (\n                \"It's possible you are trying to visualize a flow that contains \"\n                \"code that directly interacts with the result of a task\"\n                \" inside of the flow. \\nTry passing a `viz_return_value` \"\n                \"to the task decorator, e.g. `@task(viz_return_value=[1, 2, 3]).`\"\n            )\n\n            new_exception = type(e)(str(e) + \"\\n\" + msg)\n            # Copy traceback information from the original exception\n            new_exception.__traceback__ = e.__traceback__\n            raise new_exception\n
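    Following the hint in the error message above, here is a sketch of using `viz_return_value` so `visualize()` can render a flow whose task results are consumed inside the flow body. The task and flow names are hypothetical, and graphviz plus the `dot` executable must be installed:

    ```python
    from prefect import flow, task

    @task(viz_return_value=[1, 2, 3])  # placeholder result used only for visualization
    def extract():
        ...

    @task
    def load(rows):
        ...

    @flow
    def etl():
        load(extract())

    if __name__ == "__main__":
        etl.visualize()
    ```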
    ","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.deploy","title":"deploy async","text":"

    Deploys a flow to run on dynamic infrastructure via a work pool.

    By default, calling this method will build a Docker image for the flow, push it to a registry, and create a deployment via the Prefect API that will run the flow on the given schedule.

    If you want to use an existing image, you can pass build=False to skip building and pushing an image.
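    As an illustrative sketch (not from the reference itself), deploying with a pre-built image might look like the following; the work pool name and image tag are placeholders:

    ```python
    from prefect import flow

    @flow
    def my_flow(name):
        print(f"hello {name}")

    if __name__ == "__main__":
        # Assumes "my-repository/my-image:existing" already exists in the registry
        my_flow.deploy(
            "example-deployment",
            work_pool_name="my-work-pool",
            image="my-repository/my-image:existing",
            build=False,  # use the image as-is
            push=False,   # do not push anything to the registry
        )
    ```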

    Parameters:

        name (str, required): The name to give the created deployment.
        work_pool_name (Optional[str], default None): The name of the work pool to use for this deployment. Defaults to the value of PREFECT_DEFAULT_WORK_POOL_NAME.
        image (Optional[Union[str, DeploymentImage]], default None): The name of the Docker image to build, including the registry and repository. Pass a DeploymentImage instance to customize the Dockerfile used and build arguments.
        build (bool, default True): Whether or not to build a new image for the flow. If False, the provided image will be used as-is and pulled at runtime.
        push (bool, default True): Whether or not to push the built image to a registry.
        work_queue_name (Optional[str], default None): The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used.
        job_variables (Optional[dict], default None): Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
        interval (Optional[Union[int, float, timedelta]], default None): An interval on which to execute the deployment. Accepts a number or a timedelta object to create a single schedule. If a number is given, it will be interpreted as seconds. Also accepts an iterable of numbers or timedeltas to create multiple schedules.
        cron (Optional[str], default None): A cron schedule string of when to execute runs of this deployment. Also accepts an iterable of cron schedule strings to create multiple schedules.
        rrule (Optional[str], default None): An rrule schedule string of when to execute runs of this deployment. Also accepts an iterable of rrule schedule strings to create multiple schedules.
        triggers (Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]], default None): A list of triggers that will kick off runs of this deployment.
        paused (Optional[bool], default None): Whether or not to set this deployment as paused.
        schedules (Optional[List[MinimalDeploymentSchedule]], default None): A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like timezone.
        schedule (Optional[SCHEDULE_TYPES], default None): A schedule object defining when to execute runs of this deployment. Used to define additional scheduling options like timezone.
        is_schedule_active (Optional[bool], default None): Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.
        parameters (Optional[dict], default None): A dictionary of default parameter values to pass to runs of this deployment.
        description (Optional[str], default None): A description for the created deployment. Defaults to the flow's description if not provided.
        tags (Optional[List[str]], default None): A list of tags to associate with the created deployment for organizational purposes.
        version (Optional[str], default None): A version for the created deployment. Defaults to the flow's version.
        enforce_parameter_schema (bool, default False): Whether or not the Prefect API should enforce the parameter schema for the created deployment.
        entrypoint_type (EntrypointType, default FILE_PATH): Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.
        print_next_steps (bool, default True): Whether or not to print a message with next steps after deploying the deployment.
        ignore_warnings (bool, default False): Whether or not to ignore warnings about the work pool type.

    Returns:

        UUID: The ID of the created/updated deployment.

    Examples:

    Deploy a local flow to a work pool:

    from prefect import flow\n\n@flow\ndef my_flow(name):\n    print(f\"hello {name}\")\n\nif __name__ == \"__main__\":\n    my_flow.deploy(\n        \"example-deployment\",\n        work_pool_name=\"my-work-pool\",\n        image=\"my-repository/my-image:dev\",\n    )\n

    Deploy a remotely stored flow to a work pool:

    from prefect import flow\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        source=\"https://github.com/org/repo.git\",\n        entrypoint=\"flows.py:my_flow\",\n    ).deploy(\n        \"example-deployment\",\n        work_pool_name=\"my-work-pool\",\n        image=\"my-repository/my-image:dev\",\n    )\n
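    Beyond the two examples above, here is a sketch of attaching a cron schedule and overriding job variables; the pool name, image tag, and the available job variable keys are assumptions that depend on the chosen work pool's base job template:

    ```python
    from prefect import flow

    @flow
    def nightly_report():
        print("building report")

    if __name__ == "__main__":
        nightly_report.deploy(
            "nightly",
            work_pool_name="my-work-pool",
            image="my-repository/my-image:dev",
            cron="0 2 * * *",  # every day at 02:00
            job_variables={"env": {"LOG_LEVEL": "DEBUG"}},
        )
    ```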
    Source code in prefect/flows.py
    @sync_compatible\nasync def deploy(\n    self,\n    name: str,\n    work_pool_name: Optional[str] = None,\n    image: Optional[Union[str, DeploymentImage]] = None,\n    build: bool = True,\n    push: bool = True,\n    work_queue_name: Optional[str] = None,\n    job_variables: Optional[dict] = None,\n    interval: Optional[Union[int, float, datetime.timedelta]] = None,\n    cron: Optional[str] = None,\n    rrule: Optional[str] = None,\n    paused: Optional[bool] = None,\n    schedules: Optional[List[MinimalDeploymentSchedule]] = None,\n    schedule: Optional[SCHEDULE_TYPES] = None,\n    is_schedule_active: Optional[bool] = None,\n    triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n    parameters: Optional[dict] = None,\n    description: Optional[str] = None,\n    tags: Optional[List[str]] = None,\n    version: Optional[str] = None,\n    enforce_parameter_schema: bool = False,\n    entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n    print_next_steps: bool = True,\n    ignore_warnings: bool = False,\n) -> UUID:\n    \"\"\"\n    Deploys a flow to run on dynamic infrastructure via a work pool.\n\n    By default, calling this method will build a Docker image for the flow, push it to a registry,\n    and create a deployment via the Prefect API that will run the flow on the given schedule.\n\n    If you want to use an existing image, you can pass `build=False` to skip building and pushing\n    an image.\n\n    Args:\n        name: The name to give the created deployment.\n        work_pool_name: The name of the work pool to use for this deployment. Defaults to\n            the value of `PREFECT_DEFAULT_WORK_POOL_NAME`.\n        image: The name of the Docker image to build, including the registry and\n            repository. Pass a DeploymentImage instance to customize the Dockerfile used\n            and build arguments.\n        build: Whether or not to build a new image for the flow. If False, the provided\n            image will be used as-is and pulled at runtime.\n        push: Whether or not to skip pushing the built image to a registry.\n        work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n            If not provided the default work queue for the work pool will be used.\n        job_variables: Settings used to override the values specified default base job template\n            of the chosen work pool. Refer to the base job template of the chosen work pool for\n            available settings.\n        interval: An interval on which to execute the deployment. Accepts a number or a\n            timedelta object to create a single schedule. If a number is given, it will be\n            interpreted as seconds. 
Also accepts an iterable of numbers or timedelta to create\n            multiple schedules.\n        cron: A cron schedule string of when to execute runs of this deployment.\n            Also accepts an iterable of cron schedule strings to create multiple schedules.\n        rrule: An rrule schedule string of when to execute runs of this deployment.\n            Also accepts an iterable of rrule schedule strings to create multiple schedules.\n        triggers: A list of triggers that will kick off runs of this deployment.\n        paused: Whether or not to set this deployment as paused.\n        schedules: A list of schedule objects defining when to execute runs of this deployment.\n            Used to define multiple schedules or additional scheduling options like `timezone`.\n        schedule: A schedule object defining when to execute runs of this deployment. Used to\n            define additional scheduling options like `timezone`.\n        is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n            not provided when creating a deployment, the schedule will be set as active. If not\n            provided when updating a deployment, the schedule's activation will not be changed.\n        parameters: A dictionary of default parameter values to pass to runs of this deployment.\n        description: A description for the created deployment. Defaults to the flow's\n            description if not provided.\n        tags: A list of tags to associate with the created deployment for organizational\n            purposes.\n        version: A version for the created deployment. Defaults to the flow's version.\n        enforce_parameter_schema: Whether or not the Prefect API should enforce the\n            parameter schema for the created deployment.\n        entrypoint_type: Type of entrypoint to use for the deployment. 
When using a module path\n            entrypoint, ensure that the module will be importable in the execution environment.\n        print_next_steps_message: Whether or not to print a message with next steps\n            after deploying the deployments.\n        ignore_warnings: Whether or not to ignore warnings about the work pool type.\n\n    Returns:\n        The ID of the created/updated deployment.\n\n    Examples:\n        Deploy a local flow to a work pool:\n\n        ```python\n        from prefect import flow\n\n        @flow\n        def my_flow(name):\n            print(f\"hello {name}\")\n\n        if __name__ == \"__main__\":\n            my_flow.deploy(\n                \"example-deployment\",\n                work_pool_name=\"my-work-pool\",\n                image=\"my-repository/my-image:dev\",\n            )\n        ```\n\n        Deploy a remotely stored flow to a work pool:\n\n        ```python\n        from prefect import flow\n\n        if __name__ == \"__main__\":\n            flow.from_source(\n                source=\"https://github.com/org/repo.git\",\n                entrypoint=\"flows.py:my_flow\",\n            ).deploy(\n                \"example-deployment\",\n                work_pool_name=\"my-work-pool\",\n                image=\"my-repository/my-image:dev\",\n            )\n        ```\n    \"\"\"\n    work_pool_name = work_pool_name or PREFECT_DEFAULT_WORK_POOL_NAME.value()\n\n    try:\n        async with get_client() as client:\n            work_pool = await client.read_work_pool(work_pool_name)\n    except ObjectNotFound as exc:\n        raise ValueError(\n            f\"Could not find work pool {work_pool_name!r}. Please create it before\"\n            \" deploying this flow.\"\n        ) from exc\n\n    deployment = await self.to_deployment(\n        name=name,\n        interval=interval,\n        cron=cron,\n        rrule=rrule,\n        schedules=schedules,\n        paused=paused,\n        schedule=schedule,\n        is_schedule_active=is_schedule_active,\n        triggers=triggers,\n        parameters=parameters,\n        description=description,\n        tags=tags,\n        version=version,\n        enforce_parameter_schema=enforce_parameter_schema,\n        work_queue_name=work_queue_name,\n        job_variables=job_variables,\n        entrypoint_type=entrypoint_type,\n    )\n\n    deployment_ids = await deploy(\n        deployment,\n        work_pool_name=work_pool_name,\n        image=image,\n        build=build,\n        push=push,\n        print_next_steps_message=False,\n        ignore_warnings=ignore_warnings,\n    )\n\n    if print_next_steps:\n        console = Console()\n        if not work_pool.is_push_pool and not work_pool.is_managed_pool:\n            console.print(\n                \"\\nTo execute flow runs from this deployment, start a worker in a\"\n                \" separate terminal that pulls work from the\"\n                f\" {work_pool_name!r} work pool:\"\n            )\n            console.print(\n                f\"\\n\\t$ prefect worker start --pool {work_pool_name!r}\",\n                style=\"blue\",\n            )\n        console.print(\n            \"\\nTo schedule a run for this deployment, use the following command:\"\n        )\n        console.print(\n            f\"\\n\\t$ prefect deployment run '{self.name}/{name}'\\n\",\n            style=\"blue\",\n        )\n        if PREFECT_UI_URL:\n            message = (\n                \"\\nYou can also run your flow via the Prefect UI:\"\n                f\" 
[blue]{PREFECT_UI_URL.value()}/deployments/deployment/{deployment_ids[0]}[/]\\n\"\n            )\n            console.print(message, soft_wrap=True)\n\n    return deployment_ids[0]\n
    ","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.from_source","title":"from_source async classmethod","text":"

    Loads a flow from a remote source.

    Parameters:

        source (Union[str, RunnerStorage, ReadableDeploymentStorage], required): Either a URL to a git repository or a storage object.
        entrypoint (str, required): The path to a file containing a flow and the name of the flow function, in the format ./path/to/file.py:flow_func_name.

    Returns:

        F: A new Flow instance.

    Examples:

    Load a flow from a public git repository:

    from prefect import flow\nfrom prefect.runner.storage import GitRepository\nfrom prefect.blocks.system import Secret\n\nmy_flow = flow.from_source(\n    source=\"https://github.com/org/repo.git\",\n    entrypoint=\"flows.py:my_flow\",\n)\n\nmy_flow()\n

    Load a flow from a private git repository using an access token stored in a Secret block:

    from prefect import flow\nfrom prefect.runner.storage import GitRepository\nfrom prefect.blocks.system import Secret\n\nmy_flow = flow.from_source(\n    source=GitRepository(\n        url=\"https://github.com/org/repo.git\",\n        credentials={\"access_token\": Secret.load(\"github-access-token\")}\n    ),\n    entrypoint=\"flows.py:my_flow\",\n)\n\nmy_flow()\n
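    A related sketch (assuming the same hypothetical repository and entrypoint) is to chain `from_source` with `serve`, so the flow is polled for scheduled runs without building an image:

    ```python
    from prefect import flow

    if __name__ == "__main__":
        flow.from_source(
            source="https://github.com/org/repo.git",
            entrypoint="flows.py:my_flow",
        ).serve(name="example-deployment")
    ```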
    Source code in prefect/flows.py
    @classmethod\n@sync_compatible\nasync def from_source(\n    cls: Type[F],\n    source: Union[str, RunnerStorage, ReadableDeploymentStorage],\n    entrypoint: str,\n) -> F:\n    \"\"\"\n    Loads a flow from a remote source.\n\n    Args:\n        source: Either a URL to a git repository or a storage object.\n        entrypoint:  The path to a file containing a flow and the name of the flow function in\n            the format `./path/to/file.py:flow_func_name`.\n\n    Returns:\n        A new `Flow` instance.\n\n    Examples:\n        Load a flow from a public git repository:\n\n\n        ```python\n        from prefect import flow\n        from prefect.runner.storage import GitRepository\n        from prefect.blocks.system import Secret\n\n        my_flow = flow.from_source(\n            source=\"https://github.com/org/repo.git\",\n            entrypoint=\"flows.py:my_flow\",\n        )\n\n        my_flow()\n        ```\n\n        Load a flow from a private git repository using an access token stored in a `Secret` block:\n\n        ```python\n        from prefect import flow\n        from prefect.runner.storage import GitRepository\n        from prefect.blocks.system import Secret\n\n        my_flow = flow.from_source(\n            source=GitRepository(\n                url=\"https://github.com/org/repo.git\",\n                credentials={\"access_token\": Secret.load(\"github-access-token\")}\n            ),\n            entrypoint=\"flows.py:my_flow\",\n        )\n\n        my_flow()\n        ```\n    \"\"\"\n    if isinstance(source, str):\n        storage = create_storage_from_url(source)\n    elif isinstance(source, RunnerStorage):\n        storage = source\n    elif hasattr(source, \"get_directory\"):\n        storage = BlockStorageAdapter(source)\n    else:\n        raise TypeError(\n            f\"Unsupported source type {type(source).__name__!r}. Please provide a\"\n            \" URL to remote storage or a storage object.\"\n        )\n    with tempfile.TemporaryDirectory() as tmpdir:\n        storage.set_base_path(Path(tmpdir))\n        await storage.pull_code()\n\n        full_entrypoint = str(storage.destination / entrypoint)\n        flow: \"Flow\" = await from_async.wait_for_call_in_new_thread(\n            create_call(load_flow_from_entrypoint, full_entrypoint)\n        )\n        flow._storage = storage\n        flow._entrypoint = entrypoint\n\n    return flow\n
    ","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.serialize_parameters","title":"serialize_parameters","text":"

    Convert parameters to a serializable form.

    Uses FastAPI's jsonable_encoder to convert to JSON compatible objects without converting everything directly to a string. This maintains basic types like integers during API roundtrips.
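
    A minimal sketch (not from the original reference) of calling serialize_parameters directly; the flow below is hypothetical and only illustrates that JSON-compatible values pass through while other values are encoded:

    import datetime

    from prefect import flow

    @flow
    def my_flow(count: int, when: datetime.datetime):
        ...

    # The integer survives unchanged; the datetime is encoded to an ISO 8601 string
    serialized = my_flow.serialize_parameters(
        {"count": 3, "when": datetime.datetime(2024, 1, 1)}
    )
    print(serialized)  # e.g. {'count': 3, 'when': '2024-01-01T00:00:00'}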

    Source code in prefect/flows.py
    def serialize_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n    \"\"\"\n    Convert parameters to a serializable form.\n\n    Uses FastAPI's `jsonable_encoder` to convert to JSON compatible objects without\n    converting everything directly to a string. This maintains basic types like\n    integers during API roundtrips.\n    \"\"\"\n    serialized_parameters = {}\n    for key, value in parameters.items():\n        try:\n            serialized_parameters[key] = jsonable_encoder(value)\n        except (TypeError, ValueError):\n            logger.debug(\n                f\"Parameter {key!r} for flow {self.name!r} is of unserializable \"\n                f\"type {type(value).__name__!r} and will not be stored \"\n                \"in the backend.\"\n            )\n            serialized_parameters[key] = f\"<{type(value).__name__}>\"\n    return serialized_parameters\n
    ","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.serve","title":"serve async","text":"

    Creates a deployment for this flow and starts a runner to monitor for scheduled work.

    Parameters:

    Name Type Description Default name Optional[str]

    The name to give the created deployment. Defaults to the name of the flow.

    None interval Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]]

    An interval on which to execute the deployment. Accepts a number or a timedelta object to create a single schedule. If a number is given, it will be interpreted as seconds. Also accepts an iterable of numbers or timedelta to create multiple schedules.

    None cron Optional[Union[Iterable[str], str]]

    A cron schedule string of when to execute runs of this deployment. Also accepts an iterable of cron schedule strings to create multiple schedules.

    None rrule Optional[Union[Iterable[str], str]]

    An rrule schedule string of when to execute runs of this deployment. Also accepts an iterable of rrule schedule strings to create multiple schedules.

    None triggers Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]]

    A list of triggers that will kick off runs of this deployment.

    None paused Optional[bool]

    Whether or not to set this deployment as paused.

    None schedules Optional[List[FlexibleScheduleList]]

    A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like timezone.

    None schedule Optional[SCHEDULE_TYPES]

    A schedule object defining when to execute runs of this deployment. Used to define additional scheduling options such as timezone.

    None is_schedule_active Optional[bool]

    Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.

    None parameters Optional[dict]

    A dictionary of default parameter values to pass to runs of this deployment.

    None description Optional[str]

    A description for the created deployment. Defaults to the flow's description if not provided.

    None tags Optional[List[str]]

    A list of tags to associate with the created deployment for organizational purposes.

    None version Optional[str]

    A version for the created deployment. Defaults to the flow's version.

    None enforce_parameter_schema bool

    Whether or not the Prefect API should enforce the parameter schema for the created deployment.

    False pause_on_shutdown bool

    If True, the provided schedules will be paused when the serve function is stopped. If False, the schedules will continue running.

    True print_starting_message bool

    Whether or not to print the starting message when flow is served.

    True limit Optional[int]

    The maximum number of runs that can be executed concurrently.

    None webserver bool

    Whether or not to start a monitoring webserver for this flow.

    False entrypoint_type EntrypointType

    Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.

    FILE_PATH

    Examples:

    Serve a flow:

    from prefect import flow\n\n@flow\ndef my_flow(name):\n    print(f\"hello {name}\")\n\nif __name__ == \"__main__\":\n    my_flow.serve(\"example-deployment\")\n

    Serve a flow and run it every hour:

    from prefect import flow\n\n@flow\ndef my_flow(name):\n    print(f\"hello {name}\")\n\nif __name__ == \"__main__\":\n    my_flow.serve(\"example-deployment\", interval=3600)\n
    Source code in prefect/flows.py
    @sync_compatible\nasync def serve(\n    self,\n    name: Optional[str] = None,\n    interval: Optional[\n        Union[\n            Iterable[Union[int, float, datetime.timedelta]],\n            int,\n            float,\n            datetime.timedelta,\n        ]\n    ] = None,\n    cron: Optional[Union[Iterable[str], str]] = None,\n    rrule: Optional[Union[Iterable[str], str]] = None,\n    paused: Optional[bool] = None,\n    schedules: Optional[List[\"FlexibleScheduleList\"]] = None,\n    schedule: Optional[SCHEDULE_TYPES] = None,\n    is_schedule_active: Optional[bool] = None,\n    triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n    parameters: Optional[dict] = None,\n    description: Optional[str] = None,\n    tags: Optional[List[str]] = None,\n    version: Optional[str] = None,\n    enforce_parameter_schema: bool = False,\n    pause_on_shutdown: bool = True,\n    print_starting_message: bool = True,\n    limit: Optional[int] = None,\n    webserver: bool = False,\n    entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n):\n    \"\"\"\n    Creates a deployment for this flow and starts a runner to monitor for scheduled work.\n\n    Args:\n        name: The name to give the created deployment. Defaults to the name of the flow.\n        interval: An interval on which to execute the deployment. Accepts a number or a\n            timedelta object to create a single schedule. If a number is given, it will be\n            interpreted as seconds. Also accepts an iterable of numbers or timedelta to create\n            multiple schedules.\n        cron: A cron schedule string of when to execute runs of this deployment.\n            Also accepts an iterable of cron schedule strings to create multiple schedules.\n        rrule: An rrule schedule string of when to execute runs of this deployment.\n            Also accepts an iterable of rrule schedule strings to create multiple schedules.\n        triggers: A list of triggers that will kick off runs of this deployment.\n        paused: Whether or not to set this deployment as paused.\n        schedules: A list of schedule objects defining when to execute runs of this deployment.\n            Used to define multiple schedules or additional scheduling options like `timezone`.\n        schedule: A schedule object defining when to execute runs of this deployment. Used to\n            define additional scheduling options such as `timezone`.\n        is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n            not provided when creating a deployment, the schedule will be set as active. If not\n            provided when updating a deployment, the schedule's activation will not be changed.\n        parameters: A dictionary of default parameter values to pass to runs of this deployment.\n        description: A description for the created deployment. Defaults to the flow's\n            description if not provided.\n        tags: A list of tags to associate with the created deployment for organizational\n            purposes.\n        version: A version for the created deployment. 
Defaults to the flow's version.\n        enforce_parameter_schema: Whether or not the Prefect API should enforce the\n            parameter schema for the created deployment.\n        pause_on_shutdown: If True, provided schedule will be paused when the serve function is stopped.\n            If False, the schedules will continue running.\n        print_starting_message: Whether or not to print the starting message when flow is served.\n        limit: The maximum number of runs that can be executed concurrently.\n        webserver: Whether or not to start a monitoring webserver for this flow.\n        entrypoint_type: Type of entrypoint to use for the deployment. When using a module path\n            entrypoint, ensure that the module will be importable in the execution environment.\n\n    Examples:\n        Serve a flow:\n\n        ```python\n        from prefect import flow\n\n        @flow\n        def my_flow(name):\n            print(f\"hello {name}\")\n\n        if __name__ == \"__main__\":\n            my_flow.serve(\"example-deployment\")\n        ```\n\n        Serve a flow and run it every hour:\n\n        ```python\n        from prefect import flow\n\n        @flow\n        def my_flow(name):\n            print(f\"hello {name}\")\n\n        if __name__ == \"__main__\":\n            my_flow.serve(\"example-deployment\", interval=3600)\n        ```\n    \"\"\"\n    from prefect.runner import Runner\n\n    if not name:\n        name = self.name\n    else:\n        # Handling for my_flow.serve(__file__)\n        # Will set name to name of file where my_flow.serve() without the extension\n        # Non filepath strings will pass through unchanged\n        name = Path(name).stem\n\n    runner = Runner(name=name, pause_on_shutdown=pause_on_shutdown, limit=limit)\n    deployment_id = await runner.add_flow(\n        self,\n        name=name,\n        triggers=triggers,\n        interval=interval,\n        cron=cron,\n        rrule=rrule,\n        paused=paused,\n        schedules=schedules,\n        schedule=schedule,\n        is_schedule_active=is_schedule_active,\n        parameters=parameters,\n        description=description,\n        tags=tags,\n        version=version,\n        enforce_parameter_schema=enforce_parameter_schema,\n        entrypoint_type=entrypoint_type,\n    )\n    if print_starting_message:\n        help_message = (\n            f\"[green]Your flow {self.name!r} is being served and polling for\"\n            \" scheduled runs!\\n[/]\\nTo trigger a run for this flow, use the\"\n            \" following command:\\n[blue]\\n\\t$ prefect deployment run\"\n            f\" '{self.name}/{name}'\\n[/]\"\n        )\n        if PREFECT_UI_URL:\n            help_message += (\n                \"\\nYou can also run your flow via the Prefect UI:\"\n                f\" [blue]{PREFECT_UI_URL.value()}/deployments/deployment/{deployment_id}[/]\\n\"\n            )\n\n        console = Console()\n        console.print(help_message, soft_wrap=True)\n    await runner.start(webserver=webserver)\n
    ","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.to_deployment","title":"to_deployment async","text":"

    Creates a runner deployment object for this flow.

    Parameters:

    Name Type Description Default name str

    The name to give the created deployment.

    required interval Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]]

    An interval on which to execute the new deployment. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.

    None cron Optional[Union[Iterable[str], str]]

    A cron schedule of when to execute runs of this deployment.

    None rrule Optional[Union[Iterable[str], str]]

    An rrule schedule of when to execute runs of this deployment.

    None paused Optional[bool]

    Whether or not to set this deployment as paused.

    None schedules Optional[List[FlexibleScheduleList]]

    A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options such as timezone.

    None schedule Optional[SCHEDULE_TYPES]

    A schedule object defining when to execute runs of this deployment.

    None is_schedule_active Optional[bool]

    Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.

    None parameters Optional[dict]

    A dictionary of default parameter values to pass to runs of this deployment.

    None triggers Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]]

    A list of triggers that will kick off runs of this deployment.

    None description Optional[str]

    A description for the created deployment. Defaults to the flow's description if not provided.

    None tags Optional[List[str]]

    A list of tags to associate with the created deployment for organizational purposes.

    None version Optional[str]

    A version for the created deployment. Defaults to the flow's version.

    None enforce_parameter_schema bool

    Whether or not the Prefect API should enforce the parameter schema for the created deployment.

    False work_pool_name Optional[str]

    The name of the work pool to use for this deployment.

    None work_queue_name Optional[str]

    The name of the work queue to use for this deployment's scheduled runs. If not provided the default work queue for the work pool will be used.

    None job_variables Optional[Dict[str, Any]]

    Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.

    None entrypoint_type EntrypointType

    Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.

    FILE_PATH

    Examples:

    Prepare two deployments and serve them:

    from prefect import flow, serve\n\n@flow\ndef my_flow(name):\n    print(f\"hello {name}\")\n\n@flow\ndef my_other_flow(name):\n    print(f\"goodbye {name}\")\n\nif __name__ == \"__main__\":\n    hello_deploy = my_flow.to_deployment(\"hello\", tags=[\"dev\"])\n    bye_deploy = my_other_flow.to_deployment(\"goodbye\", tags=[\"dev\"])\n    serve(hello_deploy, bye_deploy)\n
    Source code in prefect/flows.py
    @sync_compatible\n@deprecated_parameter(\n    \"schedule\",\n    start_date=\"Mar 2024\",\n    when=lambda p: p is not None,\n    help=\"Use `schedules` instead.\",\n)\n@deprecated_parameter(\n    \"is_schedule_active\",\n    start_date=\"Mar 2024\",\n    when=lambda p: p is not None,\n    help=\"Use `paused` instead.\",\n)\nasync def to_deployment(\n    self,\n    name: str,\n    interval: Optional[\n        Union[\n            Iterable[Union[int, float, datetime.timedelta]],\n            int,\n            float,\n            datetime.timedelta,\n        ]\n    ] = None,\n    cron: Optional[Union[Iterable[str], str]] = None,\n    rrule: Optional[Union[Iterable[str], str]] = None,\n    paused: Optional[bool] = None,\n    schedules: Optional[List[\"FlexibleScheduleList\"]] = None,\n    schedule: Optional[SCHEDULE_TYPES] = None,\n    is_schedule_active: Optional[bool] = None,\n    parameters: Optional[dict] = None,\n    triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n    description: Optional[str] = None,\n    tags: Optional[List[str]] = None,\n    version: Optional[str] = None,\n    enforce_parameter_schema: bool = False,\n    work_pool_name: Optional[str] = None,\n    work_queue_name: Optional[str] = None,\n    job_variables: Optional[Dict[str, Any]] = None,\n    entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n) -> \"RunnerDeployment\":\n    \"\"\"\n    Creates a runner deployment object for this flow.\n\n    Args:\n        name: The name to give the created deployment.\n        interval: An interval on which to execute the new deployment. Accepts either a number\n            or a timedelta object. If a number is given, it will be interpreted as seconds.\n        cron: A cron schedule of when to execute runs of this deployment.\n        rrule: An rrule schedule of when to execute runs of this deployment.\n        paused: Whether or not to set this deployment as paused.\n        schedules: A list of schedule objects defining when to execute runs of this deployment.\n            Used to define multiple schedules or additional scheduling options such as `timezone`.\n        schedule: A schedule object defining when to execute runs of this deployment.\n        is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n            not provided when creating a deployment, the schedule will be set as active. If not\n            provided when updating a deployment, the schedule's activation will not be changed.\n        parameters: A dictionary of default parameter values to pass to runs of this deployment.\n        triggers: A list of triggers that will kick off runs of this deployment.\n        description: A description for the created deployment. Defaults to the flow's\n            description if not provided.\n        tags: A list of tags to associate with the created deployment for organizational\n            purposes.\n        version: A version for the created deployment. 
Defaults to the flow's version.\n        enforce_parameter_schema: Whether or not the Prefect API should enforce the\n            parameter schema for the created deployment.\n        work_pool_name: The name of the work pool to use for this deployment.\n        work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n            If not provided the default work queue for the work pool will be used.\n        job_variables: Settings used to override the values specified default base job template\n            of the chosen work pool. Refer to the base job template of the chosen work pool for\n        entrypoint_type: Type of entrypoint to use for the deployment. When using a module path\n            entrypoint, ensure that the module will be importable in the execution environment.\n\n    Examples:\n        Prepare two deployments and serve them:\n\n        ```python\n        from prefect import flow, serve\n\n        @flow\n        def my_flow(name):\n            print(f\"hello {name}\")\n\n        @flow\n        def my_other_flow(name):\n            print(f\"goodbye {name}\")\n\n        if __name__ == \"__main__\":\n            hello_deploy = my_flow.to_deployment(\"hello\", tags=[\"dev\"])\n            bye_deploy = my_other_flow.to_deployment(\"goodbye\", tags=[\"dev\"])\n            serve(hello_deploy, bye_deploy)\n        ```\n    \"\"\"\n    from prefect.deployments.runner import RunnerDeployment\n\n    if not name.endswith(\".py\"):\n        raise_on_name_with_banned_characters(name)\n    if self._storage and self._entrypoint:\n        return await RunnerDeployment.from_storage(\n            storage=self._storage,\n            entrypoint=self._entrypoint,\n            name=name,\n            interval=interval,\n            cron=cron,\n            rrule=rrule,\n            paused=paused,\n            schedules=schedules,\n            schedule=schedule,\n            is_schedule_active=is_schedule_active,\n            tags=tags,\n            triggers=triggers,\n            parameters=parameters or {},\n            description=description,\n            version=version,\n            enforce_parameter_schema=enforce_parameter_schema,\n            work_pool_name=work_pool_name,\n            work_queue_name=work_queue_name,\n            job_variables=job_variables,\n        )\n    else:\n        return RunnerDeployment.from_flow(\n            self,\n            name=name,\n            interval=interval,\n            cron=cron,\n            rrule=rrule,\n            paused=paused,\n            schedules=schedules,\n            schedule=schedule,\n            is_schedule_active=is_schedule_active,\n            tags=tags,\n            triggers=triggers,\n            parameters=parameters or {},\n            description=description,\n            version=version,\n            enforce_parameter_schema=enforce_parameter_schema,\n            work_pool_name=work_pool_name,\n            work_queue_name=work_queue_name,\n            job_variables=job_variables,\n            entrypoint_type=entrypoint_type,\n        )\n
    ","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.validate_parameters","title":"validate_parameters","text":"

    Validate parameters for compatibility with the flow by attempting to cast the inputs to the associated types specified by the function's type annotations.

    Returns:

    Type Description Dict[str, Any]

    A new dict of parameters that have been cast to the appropriate types

    Raises:

    Type Description ParameterTypeError

    if the provided parameters are not valid
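
    A minimal sketch (assuming a hypothetical flow named add) showing how values are coerced to the annotated parameter types:

    from prefect import flow

    @flow
    def add(x: int, y: int):
        return x + y

    # "5" is coerced to 5 because the parameter is annotated as int
    validated = add.validate_parameters({"x": 1, "y": "5"})
    print(validated)  # expected: {'x': 1, 'y': 5}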

    Source code in prefect/flows.py
    def validate_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n    \"\"\"\n    Validate parameters for compatibility with the flow by attempting to cast the inputs to the\n    associated types specified by the function's type annotations.\n\n    Returns:\n        A new dict of parameters that have been cast to the appropriate types\n\n    Raises:\n        ParameterTypeError: if the provided parameters are not valid\n    \"\"\"\n    args, kwargs = parameters_to_args_kwargs(self.fn, parameters)\n\n    if HAS_PYDANTIC_V2:\n        has_v1_models = any(isinstance(o, V1BaseModel) for o in args) or any(\n            isinstance(o, V1BaseModel) for o in kwargs.values()\n        )\n        has_v2_types = any(is_v2_type(o) for o in args) or any(\n            is_v2_type(o) for o in kwargs.values()\n        )\n\n        if has_v1_models and has_v2_types:\n            raise ParameterTypeError(\n                \"Cannot mix Pydantic v1 and v2 types as arguments to a flow.\"\n            )\n\n        if has_v1_models:\n            validated_fn = V1ValidatedFunction(\n                self.fn, config={\"arbitrary_types_allowed\": True}\n            )\n        else:\n            validated_fn = V2ValidatedFunction(\n                self.fn, config={\"arbitrary_types_allowed\": True}\n            )\n\n    else:\n        validated_fn = ValidatedFunction(\n            self.fn, config={\"arbitrary_types_allowed\": True}\n        )\n\n    try:\n        model = validated_fn.init_model_instance(*args, **kwargs)\n    except pydantic.ValidationError as exc:\n        # We capture the pydantic exception and raise our own because the pydantic\n        # exception is not picklable when using a cythonized pydantic installation\n        raise ParameterTypeError.from_validation_error(exc) from None\n    except V2ValidationError as exc:\n        # We capture the pydantic exception and raise our own because the pydantic\n        # exception is not picklable when using a cythonized pydantic installation\n        raise ParameterTypeError.from_validation_error(exc) from None\n\n    # Get the updated parameter dict with cast values from the model\n    cast_parameters = {\n        k: v\n        for k, v in model._iter()\n        if k in model.__fields_set__ or model.__fields__[k].default_factory\n    }\n    return cast_parameters\n
    ","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.visualize","title":"visualize async","text":"

    Generates a graphviz object representing the current flow. In IPython notebooks, it's rendered inline, otherwise in a new window as a PNG.

    Raises:

    Type Description ImportError

    If graphviz isn't installed.

    GraphvizExecutableNotFoundError

    If the dot executable isn't found.

    FlowVisualizationError

    If the flow can't be visualized for any other reason.
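
    A minimal sketch, assuming graphviz and the dot executable are installed; the flow and task here are hypothetical:

    from prefect import flow, task

    @task
    def say_hello(name):
        print(f"hello {name}")

    @flow
    def my_flow():
        say_hello("world")

    if __name__ == "__main__":
        # Renders inline in IPython notebooks, otherwise opens a PNG
        my_flow.visualize()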

    Source code in prefect/flows.py
    @sync_compatible\nasync def visualize(self, *args, **kwargs):\n    \"\"\"\n    Generates a graphviz object representing the current flow. In IPython notebooks,\n    it's rendered inline, otherwise in a new window as a PNG.\n\n    Raises:\n        - ImportError: If `graphviz` isn't installed.\n        - GraphvizExecutableNotFoundError: If the `dot` executable isn't found.\n        - FlowVisualizationError: If the flow can't be visualized for any other reason.\n    \"\"\"\n    if not PREFECT_UNIT_TEST_MODE:\n        warnings.warn(\n            \"`flow.visualize()` will execute code inside of your flow that is not\"\n            \" decorated with `@task` or `@flow`.\"\n        )\n\n    try:\n        with TaskVizTracker() as tracker:\n            if self.isasync:\n                await self.fn(*args, **kwargs)\n            else:\n                self.fn(*args, **kwargs)\n\n            graph = build_task_dependencies(tracker)\n\n            visualize_task_dependencies(graph, self.name)\n\n    except GraphvizImportError:\n        raise\n    except GraphvizExecutableNotFoundError:\n        raise\n    except VisualizationUnsupportedError:\n        raise\n    except FlowVisualizationError:\n        raise\n    except Exception as e:\n        msg = (\n            \"It's possible you are trying to visualize a flow that contains \"\n            \"code that directly interacts with the result of a task\"\n            \" inside of the flow. \\nTry passing a `viz_return_value` \"\n            \"to the task decorator, e.g. `@task(viz_return_value=[1, 2, 3]).`\"\n        )\n\n        new_exception = type(e)(str(e) + \"\\n\" + msg)\n        # Copy traceback information from the original exception\n        new_exception.__traceback__ = e.__traceback__\n        raise new_exception\n
    ","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.with_options","title":"with_options","text":"

    Create a new flow from the current object, updating provided options.

    Parameters:

    Name Type Description Default name str

    A new name for the flow.

    None version str

    A new version for the flow.

    None description str

    A new description for the flow.

    None flow_run_name Optional[Union[Callable[[], str], str]]

    An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string.

    None task_runner Union[Type[BaseTaskRunner], BaseTaskRunner]

    A new task runner for the flow.

    None timeout_seconds Union[int, float]

    A new number of seconds after which to fail the flow if it is still running.

    None validate_parameters bool

    A new value indicating if flow calls should validate given parameters.

    None retries Optional[int]

    A new number of times to retry on flow run failure.

    None retry_delay_seconds Optional[Union[int, float]]

    A new number of seconds to wait before retrying the flow after failure. This is only applicable if retries is nonzero.

    None persist_result Optional[bool]

    A new option for enabling or disabling result persistence.

    NotSet result_storage Optional[ResultStorage]

    A new storage type to use for results.

    NotSet result_serializer Optional[ResultSerializer]

    A new serializer to use for results.

    NotSet cache_result_in_memory bool

    A new value indicating if the flow's result should be cached in memory.

    None on_failure Optional[List[Callable[[Flow, FlowRun, State], None]]]

    A new list of callables to run when the flow enters a failed state.

    None on_completion Optional[List[Callable[[Flow, FlowRun, State], None]]]

    A new list of callables to run when the flow enters a completed state.

    None on_cancellation Optional[List[Callable[[Flow, FlowRun, State], None]]]

    A new list of callables to run when the flow enters a cancelling state.

    None on_crashed Optional[List[Callable[[Flow, FlowRun, State], None]]]

    A new list of callables to run when the flow enters a crashed state.

    None on_running Optional[List[Callable[[Flow, FlowRun, State], None]]]

    A new list of callables to run when the flow enters a running state.

    None

    Returns:

    Type Description Self

    A new Flow instance.

    Create a new flow from an existing flow and update the name:\n\n>>> @flow(name=\"My flow\")\n>>> def my_flow():\n>>>     return 1\n>>>\n>>> new_flow = my_flow.with_options(name=\"My new flow\")\n\nCreate a new flow from an existing flow, update the task runner, and call\nit without an intermediate variable:\n\n>>> from prefect.task_runners import SequentialTaskRunner\n>>>\n>>> @flow\n>>> def my_flow(x, y):\n>>>     return x + y\n>>>\n>>> state = my_flow.with_options(task_runner=SequentialTaskRunner)(1, 3)\n>>> assert state.result() == 4\n
    Source code in prefect/flows.py
    def with_options(\n    self,\n    *,\n    name: str = None,\n    version: str = None,\n    retries: Optional[int] = None,\n    retry_delay_seconds: Optional[Union[int, float]] = None,\n    description: str = None,\n    flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n    task_runner: Union[Type[BaseTaskRunner], BaseTaskRunner] = None,\n    timeout_seconds: Union[int, float] = None,\n    validate_parameters: bool = None,\n    persist_result: Optional[bool] = NotSet,\n    result_storage: Optional[ResultStorage] = NotSet,\n    result_serializer: Optional[ResultSerializer] = NotSet,\n    cache_result_in_memory: bool = None,\n    log_prints: Optional[bool] = NotSet,\n    on_completion: Optional[\n        List[Callable[[FlowSchema, FlowRun, State], None]]\n    ] = None,\n    on_failure: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    on_cancellation: Optional[\n        List[Callable[[FlowSchema, FlowRun, State], None]]\n    ] = None,\n    on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    on_running: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n) -> Self:\n    \"\"\"\n    Create a new flow from the current object, updating provided options.\n\n    Args:\n        name: A new name for the flow.\n        version: A new version for the flow.\n        description: A new description for the flow.\n        flow_run_name: An optional name to distinguish runs of this flow; this name\n            can be provided as a string template with the flow's parameters as variables,\n            or a function that returns a string.\n        task_runner: A new task runner for the flow.\n        timeout_seconds: A new number of seconds to fail the flow after if still\n            running.\n        validate_parameters: A new value indicating if flow calls should validate\n            given parameters.\n        retries: A new number of times to retry on flow run failure.\n        retry_delay_seconds: A new number of seconds to wait before retrying the\n            flow after failure. 
This is only applicable if `retries` is nonzero.\n        persist_result: A new option for enabling or disabling result persistence.\n        result_storage: A new storage type to use for results.\n        result_serializer: A new serializer to use for results.\n        cache_result_in_memory: A new value indicating if the flow's result should\n            be cached in memory.\n        on_failure: A new list of callables to run when the flow enters a failed state.\n        on_completion: A new list of callables to run when the flow enters a completed state.\n        on_cancellation: A new list of callables to run when the flow enters a cancelling state.\n        on_crashed: A new list of callables to run when the flow enters a crashed state.\n        on_running: A new list of callables to run when the flow enters a running state.\n\n    Returns:\n        A new `Flow` instance.\n\n    Examples:\n\n        Create a new flow from an existing flow and update the name:\n\n        >>> @flow(name=\"My flow\")\n        >>> def my_flow():\n        >>>     return 1\n        >>>\n        >>> new_flow = my_flow.with_options(name=\"My new flow\")\n\n        Create a new flow from an existing flow, update the task runner, and call\n        it without an intermediate variable:\n\n        >>> from prefect.task_runners import SequentialTaskRunner\n        >>>\n        >>> @flow\n        >>> def my_flow(x, y):\n        >>>     return x + y\n        >>>\n        >>> state = my_flow.with_options(task_runner=SequentialTaskRunner)(1, 3)\n        >>> assert state.result() == 4\n\n    \"\"\"\n    new_flow = Flow(\n        fn=self.fn,\n        name=name or self.name,\n        description=description or self.description,\n        flow_run_name=flow_run_name or self.flow_run_name,\n        version=version or self.version,\n        task_runner=task_runner or self.task_runner,\n        retries=retries if retries is not None else self.retries,\n        retry_delay_seconds=(\n            retry_delay_seconds\n            if retry_delay_seconds is not None\n            else self.retry_delay_seconds\n        ),\n        timeout_seconds=(\n            timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n        ),\n        validate_parameters=(\n            validate_parameters\n            if validate_parameters is not None\n            else self.should_validate_parameters\n        ),\n        persist_result=(\n            persist_result if persist_result is not NotSet else self.persist_result\n        ),\n        result_storage=(\n            result_storage if result_storage is not NotSet else self.result_storage\n        ),\n        result_serializer=(\n            result_serializer\n            if result_serializer is not NotSet\n            else self.result_serializer\n        ),\n        cache_result_in_memory=(\n            cache_result_in_memory\n            if cache_result_in_memory is not None\n            else self.cache_result_in_memory\n        ),\n        log_prints=log_prints if log_prints is not NotSet else self.log_prints,\n        on_completion=on_completion or self.on_completion,\n        on_failure=on_failure or self.on_failure,\n        on_cancellation=on_cancellation or self.on_cancellation,\n        on_crashed=on_crashed or self.on_crashed,\n        on_running=on_running or self.on_running,\n    )\n    new_flow._storage = self._storage\n    new_flow._entrypoint = self._entrypoint\n    return new_flow\n
    ","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.flow","title":"flow","text":"

    Decorator to designate a function as a Prefect workflow.

    This decorator may be used for asynchronous or synchronous functions.

    Flow parameters must be serializable by Pydantic.

    Parameters:

    Name Type Description Default name Optional[str]

    An optional name for the flow; if not provided, the name will be inferred from the given function.

    None version Optional[str]

    An optional version string for the flow; if not provided, we will attempt to create a version string as a hash of the file containing the wrapped function; if the file cannot be located, the version will be null.

    None flow_run_name Optional[Union[Callable[[], str], str]]

    An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string.

    None retries int

    An optional number of times to retry on flow run failure.

    None retry_delay_seconds Union[int, float]

    An optional number of seconds to wait before retrying the flow after failure. This is only applicable if retries is nonzero.

    None task_runner BaseTaskRunner

    An optional task runner to use for task execution within the flow; if not provided, a ConcurrentTaskRunner will be instantiated.

    ConcurrentTaskRunner description str

    An optional string description for the flow; if not provided, the description will be pulled from the docstring for the decorated function.

    None timeout_seconds Union[int, float]

    An optional number of seconds indicating a maximum runtime for the flow. If the flow exceeds this runtime, it will be marked as failed. Flow execution may continue until the next task is called.

    None validate_parameters bool

    By default, parameters passed to flows are validated by Pydantic. This will check that input values conform to the annotated types on the function. Where possible, values will be coerced into the correct type; for example, if a parameter is defined as x: int and \"5\" is passed, it will be resolved to 5. If set to False, no validation will be performed on flow parameters.

    True persist_result Optional[bool]

    An optional toggle indicating whether the result of this flow should be persisted to result storage. Defaults to None, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.

    None result_storage Optional[ResultStorage]

    An optional block to use to persist the result of this flow. This value will be used as the default for any tasks in this flow. If not provided, the local file system will be used unless called as a subflow, at which point the default will be loaded from the parent flow.

    None result_serializer Optional[ResultSerializer]

    An optional serializer to use to serialize the result of this flow for persistence. This value will be used as the default for any tasks in this flow. If not provided, the value of PREFECT_RESULTS_DEFAULT_SERIALIZER will be used unless called as a subflow, at which point the default will be loaded from the parent flow.

    None cache_result_in_memory bool

    An optional toggle indicating whether the cached result of running the flow should be stored in memory. Defaults to True.

    True log_prints Optional[bool]

    If set, print statements in the flow will be redirected to the Prefect logger for the flow run. Defaults to None, which indicates that the value from the parent flow should be used. If this is a parent flow, the default is pulled from the PREFECT_LOGGING_LOG_PRINTS setting.

    None on_completion Optional[List[Callable[[Flow, FlowRun, State], Union[Awaitable[None], None]]]]

    An optional list of functions to call when the flow run is completed. Each function should accept three arguments: the flow, the flow run, and the final state of the flow run.

    None on_failure Optional[List[Callable[[Flow, FlowRun, State], Union[Awaitable[None], None]]]]

    An optional list of functions to call when the flow run fails. Each function should accept three arguments: the flow, the flow run, and the final state of the flow run.

    None on_cancellation Optional[List[Callable[[Flow, FlowRun, State], None]]]

    An optional list of functions to call when the flow run is cancelled. These functions will be passed the flow, flow run, and final state.

    None on_crashed Optional[List[Callable[[Flow, FlowRun, State], None]]]

    An optional list of functions to call when the flow run crashes. Each function should accept three arguments: the flow, the flow run, and the final state of the flow run.

    None on_running Optional[List[Callable[[Flow, FlowRun, State], None]]]

    An optional list of functions to call when the flow run is started. Each function should accept three arguments: the flow, the flow run, and the current state.

    None

    Returns:

    Type Description

    A callable Flow object which, when called, will run the flow and return its final state.

    Examples:

    Define a simple flow

    >>> from prefect import flow\n>>> @flow\n>>> def add(x, y):\n>>>     return x + y\n

    Define an async flow

    >>> @flow\n>>> async def add(x, y):\n>>>     return x + y\n

    Define a flow with a version and description

    >>> @flow(version=\"first-flow\", description=\"This flow is empty!\")\n>>> def my_flow():\n>>>     pass\n

    Define a flow with a custom name

    >>> @flow(name=\"The Ultimate Flow\")\n>>> def my_flow():\n>>>     pass\n

    Define a flow that submits its tasks to dask

    >>> from prefect_dask.task_runners import DaskTaskRunner\n>>>\n>>> @flow(task_runner=DaskTaskRunner)\n>>> def my_flow():\n>>>     pass\n
    Source code in prefect/flows.py
    def flow(\n    __fn=None,\n    *,\n    name: Optional[str] = None,\n    version: Optional[str] = None,\n    flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n    retries: int = None,\n    retry_delay_seconds: Union[int, float] = None,\n    task_runner: BaseTaskRunner = ConcurrentTaskRunner,\n    description: str = None,\n    timeout_seconds: Union[int, float] = None,\n    validate_parameters: bool = True,\n    persist_result: Optional[bool] = None,\n    result_storage: Optional[ResultStorage] = None,\n    result_serializer: Optional[ResultSerializer] = None,\n    cache_result_in_memory: bool = True,\n    log_prints: Optional[bool] = None,\n    on_completion: Optional[\n        List[Callable[[FlowSchema, FlowRun, State], Union[Awaitable[None], None]]]\n    ] = None,\n    on_failure: Optional[\n        List[Callable[[FlowSchema, FlowRun, State], Union[Awaitable[None], None]]]\n    ] = None,\n    on_cancellation: Optional[\n        List[Callable[[FlowSchema, FlowRun, State], None]]\n    ] = None,\n    on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    on_running: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n):\n    \"\"\"\n    Decorator to designate a function as a Prefect workflow.\n\n    This decorator may be used for asynchronous or synchronous functions.\n\n    Flow parameters must be serializable by Pydantic.\n\n    Args:\n        name: An optional name for the flow; if not provided, the name will be inferred\n            from the given function.\n        version: An optional version string for the flow; if not provided, we will\n            attempt to create a version string as a hash of the file containing the\n            wrapped function; if the file cannot be located, the version will be null.\n        flow_run_name: An optional name to distinguish runs of this flow; this name can\n            be provided as a string template with the flow's parameters as variables,\n            or a function that returns a string.\n        retries: An optional number of times to retry on flow run failure.\n        retry_delay_seconds: An optional number of seconds to wait before retrying the\n            flow after failure. This is only applicable if `retries` is nonzero.\n        task_runner: An optional task runner to use for task execution within the flow; if\n            not provided, a `ConcurrentTaskRunner` will be instantiated.\n        description: An optional string description for the flow; if not provided, the\n            description will be pulled from the docstring for the decorated function.\n        timeout_seconds: An optional number of seconds indicating a maximum runtime for\n            the flow. If the flow exceeds this runtime, it will be marked as failed.\n            Flow execution may continue until the next task is called.\n        validate_parameters: By default, parameters passed to flows are validated by\n            Pydantic. This will check that input values conform to the annotated types\n            on the function. Where possible, values will be coerced into the correct\n            type; for example, if a parameter is defined as `x: int` and \"5\" is passed,\n            it will be resolved to `5`. If set to `False`, no validation will be\n            performed on flow parameters.\n        persist_result: An optional toggle indicating whether the result of this flow\n            should be persisted to result storage. 
Defaults to `None`, which indicates\n            that Prefect should choose whether the result should be persisted depending on\n            the features being used.\n        result_storage: An optional block to use to persist the result of this flow.\n            This value will be used as the default for any tasks in this flow.\n            If not provided, the local file system will be used unless called as\n            a subflow, at which point the default will be loaded from the parent flow.\n        result_serializer: An optional serializer to use to serialize the result of this\n            flow for persistence. This value will be used as the default for any tasks\n            in this flow. If not provided, the value of `PREFECT_RESULTS_DEFAULT_SERIALIZER`\n            will be used unless called as a subflow, at which point the default will be\n            loaded from the parent flow.\n        cache_result_in_memory: An optional toggle indicating whether the cached result of\n            a running the flow should be stored in memory. Defaults to `True`.\n        log_prints: If set, `print` statements in the flow will be redirected to the\n            Prefect logger for the flow run. Defaults to `None`, which indicates that\n            the value from the parent flow should be used. If this is a parent flow,\n            the default is pulled from the `PREFECT_LOGGING_LOG_PRINTS` setting.\n        on_completion: An optional list of functions to call when the flow run is\n            completed. Each function should accept three arguments: the flow, the flow\n            run, and the final state of the flow run.\n        on_failure: An optional list of functions to call when the flow run fails. Each\n            function should accept three arguments: the flow, the flow run, and the\n            final state of the flow run.\n        on_cancellation: An optional list of functions to call when the flow run is\n            cancelled. These functions will be passed the flow, flow run, and final state.\n        on_crashed: An optional list of functions to call when the flow run crashes. Each\n            function should accept three arguments: the flow, the flow run, and the\n            final state of the flow run.\n        on_running: An optional list of functions to call when the flow run is started. 
Each\n            function should accept three arguments: the flow, the flow run, and the current state\n\n    Returns:\n        A callable `Flow` object which, when called, will run the flow and return its\n        final state.\n\n    Examples:\n        Define a simple flow\n\n        >>> from prefect import flow\n        >>> @flow\n        >>> def add(x, y):\n        >>>     return x + y\n\n        Define an async flow\n\n        >>> @flow\n        >>> async def add(x, y):\n        >>>     return x + y\n\n        Define a flow with a version and description\n\n        >>> @flow(version=\"first-flow\", description=\"This flow is empty!\")\n        >>> def my_flow():\n        >>>     pass\n\n        Define a flow with a custom name\n\n        >>> @flow(name=\"The Ultimate Flow\")\n        >>> def my_flow():\n        >>>     pass\n\n        Define a flow that submits its tasks to dask\n\n        >>> from prefect_dask.task_runners import DaskTaskRunner\n        >>>\n        >>> @flow(task_runner=DaskTaskRunner)\n        >>> def my_flow():\n        >>>     pass\n    \"\"\"\n    if __fn:\n        return cast(\n            Flow[P, R],\n            Flow(\n                fn=__fn,\n                name=name,\n                version=version,\n                flow_run_name=flow_run_name,\n                task_runner=task_runner,\n                description=description,\n                timeout_seconds=timeout_seconds,\n                validate_parameters=validate_parameters,\n                retries=retries,\n                retry_delay_seconds=retry_delay_seconds,\n                persist_result=persist_result,\n                result_storage=result_storage,\n                result_serializer=result_serializer,\n                cache_result_in_memory=cache_result_in_memory,\n                log_prints=log_prints,\n                on_completion=on_completion,\n                on_failure=on_failure,\n                on_cancellation=on_cancellation,\n                on_crashed=on_crashed,\n                on_running=on_running,\n            ),\n        )\n    else:\n        return cast(\n            Callable[[Callable[P, R]], Flow[P, R]],\n            partial(\n                flow,\n                name=name,\n                version=version,\n                flow_run_name=flow_run_name,\n                task_runner=task_runner,\n                description=description,\n                timeout_seconds=timeout_seconds,\n                validate_parameters=validate_parameters,\n                retries=retries,\n                retry_delay_seconds=retry_delay_seconds,\n                persist_result=persist_result,\n                result_storage=result_storage,\n                result_serializer=result_serializer,\n                cache_result_in_memory=cache_result_in_memory,\n                log_prints=log_prints,\n                on_completion=on_completion,\n                on_failure=on_failure,\n                on_cancellation=on_cancellation,\n                on_crashed=on_crashed,\n                on_running=on_running,\n            ),\n        )\n
    ","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flow_from_entrypoint","title":"load_flow_from_entrypoint","text":"

    Extract a flow object from a script at an entrypoint by running all of the code in the file.

    Parameters:

    Name Type Description Default entrypoint str

    a string in the format <path_to_script>:<flow_func_name> or a module path to a flow function

    required

    Returns:

    Type Description Flow

    The flow object from the script

    Raises:

    Type Description FlowScriptError

    If an exception is encountered while running the script

    MissingFlowError

    If the flow function specified in the entrypoint does not exist
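
    A minimal sketch, assuming a hypothetical file ./flows.py that defines a flow function named my_flow:

    from prefect.flows import load_flow_from_entrypoint

    my_flow = load_flow_from_entrypoint("./flows.py:my_flow")
    my_flow()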

    Source code in prefect/flows.py
    def load_flow_from_entrypoint(entrypoint: str) -> Flow:\n    \"\"\"\n    Extract a flow object from a script at an entrypoint by running all of the code in the file.\n\n    Args:\n        entrypoint: a string in the format `<path_to_script>:<flow_func_name>` or a module path\n            to a flow function\n\n    Returns:\n        The flow object from the script\n\n    Raises:\n        FlowScriptError: If an exception is encountered while running the script\n        MissingFlowError: If the flow function specified in the entrypoint does not exist\n    \"\"\"\n    with PrefectObjectRegistry(\n        block_code_execution=True,\n        capture_failures=True,\n    ):\n        if \":\" in entrypoint:\n            # split by the last colon once to handle Windows paths with drive letters i.e C:\\path\\to\\file.py:do_stuff\n            path, func_name = entrypoint.rsplit(\":\", maxsplit=1)\n        else:\n            path, func_name = entrypoint.rsplit(\".\", maxsplit=1)\n        try:\n            flow = import_object(entrypoint)\n        except AttributeError as exc:\n            raise MissingFlowError(\n                f\"Flow function with name {func_name!r} not found in {path!r}. \"\n            ) from exc\n\n        if not isinstance(flow, Flow):\n            raise MissingFlowError(\n                f\"Function with name {func_name!r} is not a flow. Make sure that it is \"\n                \"decorated with '@flow'.\"\n            )\n\n        return flow\n
    ","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flow_from_script","title":"load_flow_from_script","text":"

    Extract a flow object from a script by running all of the code in the file.

    If the script has multiple flows in it, a flow name must be provided to specify the flow to return.

    Parameters:

    Name Type Description Default path str

    A path to a Python script containing flows

    required flow_name str

    An optional flow name to look for in the script

    None

    Returns:

    Type Description Flow

    The flow object from the script

    Raises:

    Type Description FlowScriptError

    If an exception is encountered while running the script

    MissingFlowError

    If no flows exist in the iterable

    MissingFlowError

    If a flow name is provided and that flow does not exist

    UnspecifiedFlowError

    If multiple flows exist but no flow name was provided
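
    A minimal sketch, assuming a hypothetical script ./flows.py; flow_name is only needed when the script defines more than one flow:

    from prefect.flows import load_flow_from_script

    my_flow = load_flow_from_script("./flows.py", flow_name="my-flow")
    my_flow()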

    Source code in prefect/flows.py
    def load_flow_from_script(path: str, flow_name: str = None) -> Flow:\n    \"\"\"\n    Extract a flow object from a script by running all of the code in the file.\n\n    If the script has multiple flows in it, a flow name must be provided to specify\n    the flow to return.\n\n    Args:\n        path: A path to a Python script containing flows\n        flow_name: An optional flow name to look for in the script\n\n    Returns:\n        The flow object from the script\n\n    Raises:\n        FlowScriptError: If an exception is encountered while running the script\n        MissingFlowError: If no flows exist in the iterable\n        MissingFlowError: If a flow name is provided and that flow does not exist\n        UnspecifiedFlowError: If multiple flows exist but no flow name was provided\n    \"\"\"\n    return select_flow(\n        load_flows_from_script(path),\n        flow_name=flow_name,\n        from_message=f\"in script '{path}'\",\n    )\n
    ","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flow_from_text","title":"load_flow_from_text","text":"

    Load a flow from a text script.

    The script will be written to a temporary local file path so errors can refer to line numbers and contextual tracebacks can be provided.
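
    A minimal sketch passing flow source as a string; the flow name "hello" is set explicitly so it matches the flow_name argument:

    from textwrap import dedent

    from prefect.flows import load_flow_from_text

    script = dedent("""
        from prefect import flow

        @flow(name="hello")
        def hello():
            print("hello")
        """)

    my_flow = load_flow_from_text(script, flow_name="hello")
    my_flow()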

    Source code in prefect/flows.py
    def load_flow_from_text(script_contents: AnyStr, flow_name: str):\n    \"\"\"\n    Load a flow from a text script.\n\n    The script will be written to a temporary local file path so errors can refer\n    to line numbers and contextual tracebacks can be provided.\n    \"\"\"\n    with NamedTemporaryFile(\n        mode=\"wt\" if isinstance(script_contents, str) else \"wb\",\n        prefix=f\"flow-script-{flow_name}\",\n        suffix=\".py\",\n        delete=False,\n    ) as tmpfile:\n        tmpfile.write(script_contents)\n        tmpfile.flush()\n    try:\n        flow = load_flow_from_script(tmpfile.name, flow_name=flow_name)\n    finally:\n        # windows compat\n        tmpfile.close()\n        os.remove(tmpfile.name)\n    return flow\n
    ","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flows_from_script","title":"load_flows_from_script","text":"

    Load all flow objects from the given Python script. All of the code in the file will be executed.

    Returns:

    Type Description List[Flow]

    A list of flows

    Raises:

    Type Description FlowScriptError

    If an exception is encountered while running the script
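
    A minimal sketch, assuming a hypothetical ./flows.py that defines one or more flows:

    from prefect.flows import load_flows_from_script

    flows = load_flows_from_script("./flows.py")
    print([f.name for f in flows])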

    Source code in prefect/flows.py
    def load_flows_from_script(path: str) -> List[Flow]:\n    \"\"\"\n    Load all flow objects from the given python script. All of the code in the file\n    will be executed.\n\n    Returns:\n        A list of flows\n\n    Raises:\n        FlowScriptError: If an exception is encountered while running the script\n    \"\"\"\n    return registry_from_script(path).get_instances(Flow)\n
    ","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.select_flow","title":"select_flow","text":"

    Select the only flow in an iterable or a flow specified by name.

    Returns: A single flow object

    Raises:

    Type Description MissingFlowError

    If no flows exist in the iterable

    MissingFlowError

    If a flow name is provided and that flow does not exist

    UnspecifiedFlowError

    If multiple flows exist but no flow name was provided
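
    A minimal sketch combining select_flow with load_flows_from_script; the script path and flow name are hypothetical:

    from prefect.flows import load_flows_from_script, select_flow

    my_flow = select_flow(
        load_flows_from_script("./flows.py"),
        flow_name="my-flow",
        from_message="in script './flows.py'",
    )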

    Source code in prefect/flows.py
    def select_flow(\n    flows: Iterable[Flow], flow_name: str = None, from_message: str = None\n) -> Flow:\n    \"\"\"\n    Select the only flow in an iterable or a flow specified by name.\n\n    Returns\n        A single flow object\n\n    Raises:\n        MissingFlowError: If no flows exist in the iterable\n        MissingFlowError: If a flow name is provided and that flow does not exist\n        UnspecifiedFlowError: If multiple flows exist but no flow name was provided\n    \"\"\"\n    # Convert to flows by name\n    flows = {f.name: f for f in flows}\n\n    # Add a leading space if given, otherwise use an empty string\n    from_message = (\" \" + from_message) if from_message else \"\"\n    if not flows:\n        raise MissingFlowError(f\"No flows found{from_message}.\")\n\n    elif flow_name and flow_name not in flows:\n        raise MissingFlowError(\n            f\"Flow {flow_name!r} not found{from_message}. \"\n            f\"Found the following flows: {listrepr(flows.keys())}. \"\n            \"Check to make sure that your flow function is decorated with `@flow`.\"\n        )\n\n    elif not flow_name and len(flows) > 1:\n        raise UnspecifiedFlowError(\n            (\n                f\"Found {len(flows)} flows{from_message}:\"\n                f\" {listrepr(sorted(flows.keys()))}. Specify a flow name to select a\"\n                \" flow.\"\n            ),\n        )\n\n    if flow_name:\n        return flows[flow_name]\n    else:\n        return list(flows.values())[0]\n
    ","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/futures/","title":"prefect.futures","text":"","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures","title":"prefect.futures","text":"

    Futures represent the execution of a task and allow retrieval of the task run's state.

    This module contains the definition for futures as well as utilities for resolving futures in nested data structures.

    ","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture","title":"PrefectFuture","text":"

    Bases: Generic[R, A]

    Represents the result of a computation happening in a task runner.

    When tasks are called, they are submitted to a task runner which creates a future for access to the state and result of the task.

    Examples:

    Define a task that returns a string

    >>> from prefect import flow, task\n>>> @task\n>>> def my_task() -> str:\n>>>     return \"hello\"\n

    Calls of this task in a flow will return a future

    >>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()  # PrefectFuture[str, Sync] includes result type\n>>>     future.task_run.id  # UUID for the task run\n

    Wait for the task to complete

    >>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()\n>>>     final_state = future.wait()\n

    Wait N seconds for the task to complete

    >>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()\n>>>     final_state = future.wait(0.1)\n>>>     if final_state:\n>>>         ... # Task done\n>>>     else:\n>>>         ... # Task not done yet\n

    Wait for a task to complete and retrieve its result

    >>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()\n>>>     result = future.result()\n>>>     assert result == \"hello\"\n

    Wait N seconds for a task to complete and retrieve its result

    >>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()\n>>>     result = future.result(timeout=5)\n>>>     assert result == \"hello\"\n

    Retrieve the state of a task without waiting for completion

    >>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()\n>>>     state = future.get_state()\n
    Source code in prefect/futures.py
    class PrefectFuture(Generic[R, A]):\n    \"\"\"\n    Represents the result of a computation happening in a task runner.\n\n    When tasks are called, they are submitted to a task runner which creates a future\n    for access to the state and result of the task.\n\n    Examples:\n        Define a task that returns a string\n\n        >>> from prefect import flow, task\n        >>> @task\n        >>> def my_task() -> str:\n        >>>     return \"hello\"\n\n        Calls of this task in a flow will return a future\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()  # PrefectFuture[str, Sync] includes result type\n        >>>     future.task_run.id  # UUID for the task run\n\n        Wait for the task to complete\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     final_state = future.wait()\n\n        Wait N seconds for the task to complete\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     final_state = future.wait(0.1)\n        >>>     if final_state:\n        >>>         ... # Task done\n        >>>     else:\n        >>>         ... # Task not done yet\n\n        Wait for a task to complete and retrieve its result\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     result = future.result()\n        >>>     assert result == \"hello\"\n\n        Wait N seconds for a task to complete and retrieve its result\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     result = future.result(timeout=5)\n        >>>     assert result == \"hello\"\n\n        Retrieve the state of a task without waiting for completion\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     state = future.get_state()\n    \"\"\"\n\n    def __init__(\n        self,\n        name: str,\n        key: UUID,\n        task_runner: \"BaseTaskRunner\",\n        asynchronous: A = True,\n        _final_state: State[R] = None,  # Exposed for testing\n    ) -> None:\n        self.key = key\n        self.name = name\n        self.asynchronous = asynchronous\n        self.task_run = None\n        self._final_state = _final_state\n        self._exception: Optional[Exception] = None\n        self._task_runner = task_runner\n        self._submitted = anyio.Event()\n\n        self._loop = asyncio.get_running_loop()\n\n    @overload\n    def wait(\n        self: \"PrefectFuture[R, Async]\", timeout: None = None\n    ) -> Awaitable[State[R]]:\n        ...\n\n    @overload\n    def wait(self: \"PrefectFuture[R, Sync]\", timeout: None = None) -> State[R]:\n        ...\n\n    @overload\n    def wait(\n        self: \"PrefectFuture[R, Async]\", timeout: float\n    ) -> Awaitable[Optional[State[R]]]:\n        ...\n\n    @overload\n    def wait(self: \"PrefectFuture[R, Sync]\", timeout: float) -> Optional[State[R]]:\n        ...\n\n    def wait(self, timeout=None):\n        \"\"\"\n        Wait for the run to finish and return the final state\n\n        If the timeout is reached before the run reaches a final state,\n        `None` is returned.\n        \"\"\"\n        wait = create_call(self._wait, timeout=timeout)\n        if self.asynchronous:\n            return from_async.call_soon_in_loop_thread(wait).aresult()\n        else:\n            # type checking cannot handle the overloaded timeout passing\n            return 
from_sync.call_soon_in_loop_thread(wait).result()  # type: ignore\n\n    @overload\n    async def _wait(self, timeout: None = None) -> State[R]:\n        ...\n\n    @overload\n    async def _wait(self, timeout: float) -> Optional[State[R]]:\n        ...\n\n    async def _wait(self, timeout=None):\n        \"\"\"\n        Async implementation for `wait`\n        \"\"\"\n        await self._wait_for_submission()\n\n        if self._final_state:\n            return self._final_state\n\n        self._final_state = await self._task_runner.wait(self.key, timeout)\n        return self._final_state\n\n    @overload\n    def result(\n        self: \"PrefectFuture[R, Sync]\",\n        timeout: float = None,\n        raise_on_failure: bool = True,\n    ) -> R:\n        ...\n\n    @overload\n    def result(\n        self: \"PrefectFuture[R, Sync]\",\n        timeout: float = None,\n        raise_on_failure: bool = False,\n    ) -> Union[R, Exception]:\n        ...\n\n    @overload\n    def result(\n        self: \"PrefectFuture[R, Async]\",\n        timeout: float = None,\n        raise_on_failure: bool = True,\n    ) -> Awaitable[R]:\n        ...\n\n    @overload\n    def result(\n        self: \"PrefectFuture[R, Async]\",\n        timeout: float = None,\n        raise_on_failure: bool = False,\n    ) -> Awaitable[Union[R, Exception]]:\n        ...\n\n    def result(self, timeout: float = None, raise_on_failure: bool = True):\n        \"\"\"\n        Wait for the run to finish and return the final state.\n\n        If the timeout is reached before the run reaches a final state, a `TimeoutError`\n        will be raised.\n\n        If `raise_on_failure` is `True` and the task run failed, the task run's\n        exception will be raised.\n        \"\"\"\n        result = create_call(\n            self._result, timeout=timeout, raise_on_failure=raise_on_failure\n        )\n        if self.asynchronous:\n            return from_async.call_soon_in_loop_thread(result).aresult()\n        else:\n            return from_sync.call_soon_in_loop_thread(result).result()\n\n    async def _result(self, timeout: float = None, raise_on_failure: bool = True):\n        \"\"\"\n        Async implementation of `result`\n        \"\"\"\n        final_state = await self._wait(timeout=timeout)\n        if not final_state:\n            raise TimeoutError(\"Call timed out before task finished.\")\n        return await final_state.result(raise_on_failure=raise_on_failure, fetch=True)\n\n    @overload\n    def get_state(\n        self: \"PrefectFuture[R, Async]\", client: PrefectClient = None\n    ) -> Awaitable[State[R]]:\n        ...\n\n    @overload\n    def get_state(\n        self: \"PrefectFuture[R, Sync]\", client: PrefectClient = None\n    ) -> State[R]:\n        ...\n\n    def get_state(self, client: PrefectClient = None):\n        \"\"\"\n        Get the current state of the task run.\n        \"\"\"\n        if self.asynchronous:\n            return cast(Awaitable[State[R]], self._get_state(client=client))\n        else:\n            return cast(State[R], sync(self._get_state, client=client))\n\n    @inject_client\n    async def _get_state(self, client: PrefectClient = None) -> State[R]:\n        assert client is not None  # always injected\n\n        # We must wait for the task run id to be populated\n        await self._wait_for_submission()\n\n        task_run = await client.read_task_run(self.task_run.id)\n\n        if not task_run:\n            raise RuntimeError(\"Future has no associated task run in the 
server.\")\n\n        # Update the task run reference\n        self.task_run = task_run\n        return task_run.state\n\n    async def _wait_for_submission(self):\n        await run_coroutine_in_loop_from_async(self._loop, self._submitted.wait())\n\n    def __hash__(self) -> int:\n        return hash(self.key)\n\n    def __repr__(self) -> str:\n        return f\"PrefectFuture({self.name!r})\"\n\n    def __bool__(self) -> bool:\n        warnings.warn(\n            (\n                \"A 'PrefectFuture' from a task call was cast to a boolean; \"\n                \"did you mean to check the result of the task instead? \"\n                \"e.g. `if my_task().result(): ...`\"\n            ),\n            stacklevel=2,\n        )\n        return True\n
    ","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture.get_state","title":"get_state","text":"

    Get the current state of the task run.

    Source code in prefect/futures.py
    def get_state(self, client: PrefectClient = None):\n    \"\"\"\n    Get the current state of the task run.\n    \"\"\"\n    if self.asynchronous:\n        return cast(Awaitable[State[R]], self._get_state(client=client))\n    else:\n        return cast(State[R], sync(self._get_state, client=client))\n
    ","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture.result","title":"result","text":"

    Wait for the run to finish and return the final state.

    If the timeout is reached before the run reaches a final state, a TimeoutError will be raised.

    If raise_on_failure is True and the task run failed, the task run's exception will be raised.

    Source code in prefect/futures.py
    def result(self, timeout: float = None, raise_on_failure: bool = True):\n    \"\"\"\n    Wait for the run to finish and return the final state.\n\n    If the timeout is reached before the run reaches a final state, a `TimeoutError`\n    will be raised.\n\n    If `raise_on_failure` is `True` and the task run failed, the task run's\n    exception will be raised.\n    \"\"\"\n    result = create_call(\n        self._result, timeout=timeout, raise_on_failure=raise_on_failure\n    )\n    if self.asynchronous:\n        return from_async.call_soon_in_loop_thread(result).aresult()\n    else:\n        return from_sync.call_soon_in_loop_thread(result).result()\n
    ","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture.wait","title":"wait","text":"

    Wait for the run to finish and return the final state

    If the timeout is reached before the run reaches a final state, None is returned.

    Source code in prefect/futures.py
    def wait(self, timeout=None):\n    \"\"\"\n    Wait for the run to finish and return the final state\n\n    If the timeout is reached before the run reaches a final state,\n    `None` is returned.\n    \"\"\"\n    wait = create_call(self._wait, timeout=timeout)\n    if self.asynchronous:\n        return from_async.call_soon_in_loop_thread(wait).aresult()\n    else:\n        # type checking cannot handle the overloaded timeout passing\n        return from_sync.call_soon_in_loop_thread(wait).result()  # type: ignore\n
    ","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.call_repr","title":"call_repr","text":"

    Generate a repr for a function call as \"fn_name(arg_value, kwarg_name=kwarg_value)\"
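For example, a minimal sketch (the greet function below is hypothetical):

from prefect.futures import call_repr

def greet(name, punctuation="!"):
    return f"Hello, {name}{punctuation}"

# Prints: greet('world', punctuation='!')
print(call_repr(greet, "world", punctuation="!"))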

    Source code in prefect/futures.py
    def call_repr(__fn: Callable, *args: Any, **kwargs: Any) -> str:\n    \"\"\"\n    Generate a repr for a function call as \"fn_name(arg_value, kwarg_name=kwarg_value)\"\n    \"\"\"\n\n    name = __fn.__name__\n\n    # TODO: If this computation is concerningly expensive, we can iterate checking the\n    #       length at each arg or avoid calling `repr` on args with large amounts of\n    #       data\n    call_args = \", \".join(\n        [repr(arg) for arg in args]\n        + [f\"{key}={repr(val)}\" for key, val in kwargs.items()]\n    )\n\n    # Enforce a maximum length\n    if len(call_args) > 100:\n        call_args = call_args[:100] + \"...\"\n\n    return f\"{name}({call_args})\"\n
    ","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.resolve_futures_to_data","title":"resolve_futures_to_data async","text":"

    Given a Python built-in collection, recursively find PrefectFutures and build a new collection with the same structure with futures resolved to their results. Resolving futures to their results may wait for execution to complete and require communication with the API.

    Unsupported object types will be returned without modification.
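For illustration, a minimal, untested sketch assuming an async flow; the task and values are hypothetical:

from prefect import flow, task
from prefect.futures import resolve_futures_to_data

@task
async def add(x, y):
    return x + y

@flow
async def my_flow():
    # Futures nested in a built-in collection are replaced by their results.
    futures = {"first": await add.submit(1, 2), "second": await add.submit(3, 4)}
    return await resolve_futures_to_data(futures)  # {"first": 3, "second": 7}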

    Source code in prefect/futures.py
    async def resolve_futures_to_data(\n    expr: Union[PrefectFuture[R, Any], Any],\n    raise_on_failure: bool = True,\n) -> Union[R, Any]:\n    \"\"\"\n    Given a Python built-in collection, recursively find `PrefectFutures` and build a\n    new collection with the same structure with futures resolved to their results.\n    Resolving futures to their results may wait for execution to complete and require\n    communication with the API.\n\n    Unsupported object types will be returned without modification.\n    \"\"\"\n    futures: Set[PrefectFuture] = set()\n\n    maybe_expr = visit_collection(\n        expr,\n        visit_fn=partial(_collect_futures, futures),\n        return_data=False,\n        context={},\n    )\n    if maybe_expr is not None:\n        expr = maybe_expr\n\n    # Get results\n    results = await asyncio.gather(\n        *[\n            # We must wait for the future in the thread it was created in\n            from_async.call_soon_in_loop_thread(\n                create_call(future._result, raise_on_failure=raise_on_failure)\n            ).aresult()\n            for future in futures\n        ]\n    )\n\n    results_by_future = dict(zip(futures, results))\n\n    def replace_futures_with_results(expr, context):\n        # Expressions inside quotes should not be modified\n        if isinstance(context.get(\"annotation\"), quote):\n            raise StopVisiting()\n\n        if isinstance(expr, PrefectFuture):\n            return results_by_future[expr]\n        else:\n            return expr\n\n    return visit_collection(\n        expr,\n        visit_fn=replace_futures_with_results,\n        return_data=True,\n        context={},\n    )\n
    ","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.resolve_futures_to_states","title":"resolve_futures_to_states async","text":"

    Given a Python built-in collection, recursively find PrefectFutures and build a new collection with the same structure with futures resolved to their final states. Resolving futures to their final states may wait for execution to complete.

    Unsupported object types will be returned without modification.
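Similarly, a hedged sketch resolving futures to their final states instead of their results (task and values are hypothetical):

from prefect import flow, task
from prefect.futures import resolve_futures_to_states

@task
async def double(x):
    return x * 2

@flow
async def my_flow():
    futures = [await double.submit(i) for i in range(3)]
    # Each future is replaced by its final State rather than its result.
    states = await resolve_futures_to_states(futures)
    return [state.is_completed() for state in states]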

    Source code in prefect/futures.py
    async def resolve_futures_to_states(\n    expr: Union[PrefectFuture[R, Any], Any],\n) -> Union[State[R], Any]:\n    \"\"\"\n    Given a Python built-in collection, recursively find `PrefectFutures` and build a\n    new collection with the same structure with futures resolved to their final states.\n    Resolving futures to their final states may wait for execution to complete.\n\n    Unsupported object types will be returned without modification.\n    \"\"\"\n    futures: Set[PrefectFuture] = set()\n\n    visit_collection(\n        expr,\n        visit_fn=partial(_collect_futures, futures),\n        return_data=False,\n        context={},\n    )\n\n    # Get final states for each future\n    states = await asyncio.gather(\n        *[\n            # We must wait for the future in the thread it was created in\n            from_async.call_soon_in_loop_thread(create_call(future._wait)).aresult()\n            for future in futures\n        ]\n    )\n\n    states_by_future = dict(zip(futures, states))\n\n    def replace_futures_with_states(expr, context):\n        # Expressions inside quotes should not be modified\n        if isinstance(context.get(\"annotation\"), quote):\n            raise StopVisiting()\n\n        if isinstance(expr, PrefectFuture):\n            return states_by_future[expr]\n        else:\n            return expr\n\n    return visit_collection(\n        expr,\n        visit_fn=replace_futures_with_states,\n        return_data=True,\n        context={},\n    )\n
    ","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/infrastructure/","title":"prefect.infrastructure","text":"","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure","title":"prefect.infrastructure","text":"","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainer","title":"DockerContainer","text":"

    Bases: Infrastructure

    Runs a command in a container.

    Requires a Docker Engine to be connectable. Docker settings will be retrieved from the environment.

See the Docker deployment tutorial at https://docs.prefect.io/guides/deployment/docker.

Attributes:

auto_remove (bool): If set, the container will be removed on completion. Otherwise, the container will remain after exit for inspection.

command (Optional[List[str]]): A list of strings specifying the command to run in the container to start the flow run. In most cases you should not override this.

env (Dict[str, Optional[str]]): Environment variables to set for the container.

image (str): An optional string specifying the tag of a Docker image to use. Defaults to the Prefect image.

image_pull_policy (Optional[ImagePullPolicy]): Specifies if the image should be pulled. One of 'ALWAYS', 'NEVER', 'IF_NOT_PRESENT'.

image_registry (Optional[DockerRegistry]): A DockerRegistry block containing credentials to use if image is stored in a private image registry.

labels (Dict[str, str]): An optional dictionary of labels, mapping name to value.

name (Optional[str]): An optional name for the container.

network_mode (Optional[str]): Set the network mode for the created container. Defaults to 'host' if a local API URL is detected; otherwise the Docker default of 'bridge' is used. Cannot be set if 'networks' is set.

networks (List[str]): An optional list of strings specifying Docker networks to connect the container to.

stream_output (bool): If set, stream output from the container to local standard output.

volumes (List[str]): An optional list of volume mount strings in the format "local_path:container_path".

memswap_limit (Union[int, str]): Total memory (memory + swap); -1 disables swap. Should only be set if mem_limit is also set. If mem_limit is set, this defaults to allowing the container to use as much swap as memory. For example, if mem_limit is 300m and memswap_limit is not set, the container can use 600m in total of memory and swap.

mem_limit (Union[float, str]): Memory limit of the created container. Accepts float values to enforce a limit in bytes, or a string with a unit, e.g. 100000b, 1000k, 128m, 1g. If a string is given without a unit, bytes are assumed.

privileged (bool): Give extended privileges to this container.
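For illustration only (this block is deprecated in favor of the Docker worker, per the notice in the source below), a minimal sketch of configuring and saving the block; every value shown is a hypothetical placeholder:

from prefect.infrastructure import DockerContainer

docker_block = DockerContainer(
    image="prefecthq/prefect:2-latest",      # hypothetical image tag
    env={"EXTRA_PIP_PACKAGES": "pandas"},    # hypothetical environment variable
    volumes=["/opt/data:/opt/data"],         # "local_path:container_path"
    stream_output=True,
    auto_remove=True,
)
docker_block.save("my-docker-block", overwrite=True)  # hypothetical block name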

    ","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainer--connecting-to-a-locally-hosted-prefect-api","title":"Connecting to a locally hosted Prefect API","text":"

If a local API URL is used on Linux, the network mode defaults to 'host' to enable connectivity. On other operating systems, or when an alternative network mode is used, 'localhost' in the API URL is replaced with 'host.docker.internal'. Generally this enables connectivity, but the API URL can be provided as an environment variable to override inference in more complex use cases.

Note that if 'host.docker.internal' is used in the API URL on Linux, the API must be bound to 0.0.0.0 or the Docker IP address to allow connectivity. On macOS this is not necessary, and the API is connectable while bound to localhost.
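As a hedged sketch of the override described above (the URL is a hypothetical placeholder), PREFECT_API_URL can be supplied explicitly through env to bypass inference:

from prefect.infrastructure import DockerContainer

# Point the container at an API address reachable from inside Docker.
container = DockerContainer(
    env={"PREFECT_API_URL": "http://host.docker.internal:4200/api"},
)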

    Source code in prefect/infrastructure/container.py
    @deprecated_class(\n    start_date=\"Mar 2024\",\n    help=\"Use the Docker worker from prefect-docker instead.\"\n    \" Refer to the upgrade guide for more information:\"\n    \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\",\n)\nclass DockerContainer(Infrastructure):\n    \"\"\"\n    Runs a command in a container.\n\n    Requires a Docker Engine to be connectable. Docker settings will be retrieved from\n    the environment.\n\n    Click [here](https://docs.prefect.io/guides/deployment/docker) to see a tutorial.\n\n    Attributes:\n        auto_remove: If set, the container will be removed on completion. Otherwise,\n            the container will remain after exit for inspection.\n        command: A list of strings specifying the command to run in the container to\n            start the flow run. In most cases you should not override this.\n        env: Environment variables to set for the container.\n        image: An optional string specifying the tag of a Docker image to use.\n            Defaults to the Prefect image.\n        image_pull_policy: Specifies if the image should be pulled. One of 'ALWAYS',\n            'NEVER', 'IF_NOT_PRESENT'.\n        image_registry: A `DockerRegistry` block containing credentials to use if `image` is stored in a private\n            image registry.\n        labels: An optional dictionary of labels, mapping name to value.\n        name: An optional name for the container.\n        network_mode: Set the network mode for the created container. Defaults to 'host'\n            if a local API url is detected, otherwise the Docker default of 'bridge' is\n            used. If 'networks' is set, this cannot be set.\n        networks: An optional list of strings specifying Docker networks to connect the\n            container to.\n        stream_output: If set, stream output from the container to local standard output.\n        volumes: An optional list of volume mount strings in the format of\n            \"local_path:container_path\".\n        memswap_limit: Total memory (memory + swap), -1 to disable swap. Should only be\n            set if `mem_limit` is also set. If `mem_limit` is set, this defaults to\n            allowing the container to use as much swap as memory. For example, if\n            `mem_limit` is 300m and `memswap_limit` is not set, the container can use\n            600m in total of memory and swap.\n        mem_limit: Memory limit of the created container. Accepts float values to enforce\n            a limit in bytes or a string with a unit e.g. 100000b, 1000k, 128m, 1g.\n            If a string is given without a unit, bytes are assumed.\n        privileged: Give extended privileges to this container.\n\n    ## Connecting to a locally hosted Prefect API\n\n    If using a local API URL on Linux, we will update the network mode default to 'host'\n    to enable connectivity. If using another OS or an alternative network mode is used,\n    we will replace 'localhost' in the API URL with 'host.docker.internal'. Generally,\n    this will enable connectivity, but the API URL can be provided as an environment\n    variable to override inference in more complex use-cases.\n\n    Note, if using 'host.docker.internal' in the API URL on Linux, the API must be bound\n    to 0.0.0.0 or the Docker IP address to allow connectivity. 
On macOS, this is not\n    necessary and the API is connectable while bound to localhost.\n    \"\"\"\n\n    type: Literal[\"docker-container\"] = Field(\n        default=\"docker-container\", description=\"The type of infrastructure.\"\n    )\n    image: str = Field(\n        default_factory=get_prefect_image_name,\n        description=\"Tag of a Docker image to use. Defaults to the Prefect image.\",\n    )\n    image_pull_policy: Optional[ImagePullPolicy] = Field(\n        default=None, description=\"Specifies if the image should be pulled.\"\n    )\n    image_registry: Optional[DockerRegistry] = None\n    networks: List[str] = Field(\n        default_factory=list,\n        description=(\n            \"A list of strings specifying Docker networks to connect the container to.\"\n        ),\n    )\n    network_mode: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The network mode for the created container (e.g. host, bridge). If\"\n            \" 'networks' is set, this cannot be set.\"\n        ),\n    )\n    auto_remove: bool = Field(\n        default=False,\n        description=\"If set, the container will be removed on completion.\",\n    )\n    volumes: List[str] = Field(\n        default_factory=list,\n        description=(\n            \"A list of volume mount strings in the format of\"\n            ' \"local_path:container_path\".'\n        ),\n    )\n    stream_output: bool = Field(\n        default=True,\n        description=(\n            \"If set, the output will be streamed from the container to local standard\"\n            \" output.\"\n        ),\n    )\n    memswap_limit: Union[int, str] = Field(\n        default=None,\n        description=(\n            \"Total memory (memory + swap), -1 to disable swap. Should only be \"\n            \"set if `mem_limit` is also set. If `mem_limit` is set, this defaults to\"\n            \"allowing the container to use as much swap as memory. For example, if \"\n            \"`mem_limit` is 300m and `memswap_limit` is not set, the container can use \"\n            \"600m in total of memory and swap.\"\n        ),\n    )\n    mem_limit: Union[float, str] = Field(\n        default=None,\n        description=(\n            \"Memory limit of the created container. Accepts float values to enforce \"\n            \"a limit in bytes or a string with a unit e.g. 100000b, 1000k, 128m, 1g. 
\"\n            \"If a string is given without a unit, bytes are assumed.\"\n        ),\n    )\n    privileged: bool = Field(\n        default=False,\n        description=\"Give extended privileges to this container.\",\n    )\n\n    _block_type_name = \"Docker Container\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/14a315b79990200db7341e42553e23650b34bb96-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainer\"\n\n    @validator(\"labels\")\n    def convert_labels_to_docker_format(cls, labels: Dict[str, str]):\n        labels = labels or {}\n        new_labels = {}\n        for name, value in labels.items():\n            if \"/\" in name:\n                namespace, key = name.split(\"/\", maxsplit=1)\n                new_namespace = \".\".join(reversed(namespace.split(\".\")))\n                new_labels[f\"{new_namespace}.{key}\"] = value\n            else:\n                new_labels[name] = value\n        return new_labels\n\n    @validator(\"volumes\")\n    def check_volume_format(cls, volumes):\n        for volume in volumes:\n            if \":\" not in volume:\n                raise ValueError(\n                    \"Invalid volume specification. \"\n                    f\"Expected format 'path:container_path', but got {volume!r}\"\n                )\n\n        return volumes\n\n    @sync_compatible\n    async def run(\n        self,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> Optional[bool]:\n        if not self.command:\n            raise ValueError(\"Docker container cannot be run with empty command.\")\n\n        # The `docker` library uses requests instead of an async http library so it must\n        # be run in a thread to avoid blocking the event loop.\n        container = await run_sync_in_worker_thread(self._create_and_start_container)\n        container_pid = self._get_infrastructure_pid(container_id=container.id)\n\n        # Mark as started and return the infrastructure id\n        if task_status:\n            task_status.started(container_pid)\n\n        # Monitor the container\n        container = await run_sync_in_worker_thread(\n            self._watch_container_safe, container\n        )\n\n        exit_code = container.attrs[\"State\"].get(\"ExitCode\")\n        return DockerContainerResult(\n            status_code=exit_code if exit_code is not None else -1,\n            identifier=container_pid,\n        )\n\n    async def kill(self, infrastructure_pid: str, grace_seconds: int = 30):\n        docker_client = self._get_client()\n        base_url, container_id = self._parse_infrastructure_pid(infrastructure_pid)\n\n        if docker_client.api.base_url != base_url:\n            raise InfrastructureNotAvailable(\n                \"\".join(\n                    [\n                        (\n                            f\"Unable to stop container {container_id!r}: the current\"\n                            \" Docker API \"\n                        ),\n                        (\n                            f\"URL {docker_client.api.base_url!r} does not match the\"\n                            \" expected \"\n                        ),\n                        f\"API base URL {base_url}.\",\n                    ]\n                )\n            )\n        try:\n            container = docker_client.containers.get(container_id=container_id)\n        except docker.errors.NotFound:\n            raise InfrastructureNotFound(\n                
f\"Unable to stop container {container_id!r}: The container was not\"\n                \" found.\"\n            )\n\n        try:\n            container.stop(timeout=grace_seconds)\n        except Exception:\n            raise\n\n    def preview(self):\n        # TODO: build and document a more sophisticated preview\n        docker_client = self._get_client()\n        try:\n            return json.dumps(self._build_container_settings(docker_client))\n        finally:\n            docker_client.close()\n\n    async def generate_work_pool_base_job_template(self):\n        from prefect.workers.utilities import (\n            get_default_base_job_template_for_infrastructure_type,\n        )\n\n        base_job_template = await get_default_base_job_template_for_infrastructure_type(\n            self.get_corresponding_worker_type()\n        )\n        if base_job_template is None:\n            return await super().generate_work_pool_base_job_template()\n        for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n            if key == \"command\":\n                base_job_template[\"variables\"][\"properties\"][\"command\"][\n                    \"default\"\n                ] = shlex.join(value)\n            elif key == \"image_registry\":\n                self.logger.warning(\n                    \"Image registry blocks are not supported by Docker\"\n                    \" work pools. Please authenticate to your registry using\"\n                    \" the `docker login` command on your worker instances.\"\n                )\n            elif key in [\n                \"type\",\n                \"block_type_slug\",\n                \"_block_document_id\",\n                \"_block_document_name\",\n                \"_is_anonymous\",\n            ]:\n                continue\n            elif key == \"image_pull_policy\":\n                new_value = None\n                if value == ImagePullPolicy.ALWAYS:\n                    new_value = \"Always\"\n                elif value == ImagePullPolicy.NEVER:\n                    new_value = \"Never\"\n                elif value == ImagePullPolicy.IF_NOT_PRESENT:\n                    new_value = \"IfNotPresent\"\n\n                base_job_template[\"variables\"][\"properties\"][key][\"default\"] = new_value\n            elif key in base_job_template[\"variables\"][\"properties\"]:\n                base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n            else:\n                self.logger.warning(\n                    f\"Variable {key!r} is not supported by Docker work pools. 
Skipping.\"\n                )\n\n        return base_job_template\n\n    def get_corresponding_worker_type(self):\n        return \"docker\"\n\n    def _get_infrastructure_pid(self, container_id: str) -> str:\n        \"\"\"Generates a Docker infrastructure_pid string in the form of\n        `<docker_host_base_url>:<container_id>`.\n        \"\"\"\n        docker_client = self._get_client()\n        base_url = docker_client.api.base_url\n        docker_client.close()\n        return f\"{base_url}:{container_id}\"\n\n    def _parse_infrastructure_pid(self, infrastructure_pid: str) -> Tuple[str, str]:\n        \"\"\"Splits a Docker infrastructure_pid into its component parts\"\"\"\n\n        # base_url can contain `:` so we only want the last item of the split\n        base_url, container_id = infrastructure_pid.rsplit(\":\", 1)\n        return base_url, str(container_id)\n\n    def _build_container_settings(\n        self,\n        docker_client: \"DockerClient\",\n    ) -> Dict:\n        network_mode = self._get_network_mode()\n        return dict(\n            image=self.image,\n            network=self.networks[0] if self.networks else None,\n            network_mode=network_mode,\n            command=self.command,\n            environment=self._get_environment_variables(network_mode),\n            auto_remove=self.auto_remove,\n            labels={**CONTAINER_LABELS, **self.labels},\n            extra_hosts=self._get_extra_hosts(docker_client),\n            name=self._get_container_name(),\n            volumes=self.volumes,\n            mem_limit=self.mem_limit,\n            memswap_limit=self.memswap_limit,\n            privileged=self.privileged,\n        )\n\n    def _create_and_start_container(self) -> \"Container\":\n        if self.image_registry:\n            # If an image registry block was supplied, load an authenticated Docker\n            # client from the block. Otherwise, use an unauthenticated client to\n            # pull images from public registries.\n            docker_client = self.image_registry.get_docker_client()\n        else:\n            docker_client = self._get_client()\n        container_settings = self._build_container_settings(docker_client)\n\n        if self._should_pull_image(docker_client):\n            self.logger.info(f\"Pulling image {self.image!r}...\")\n            self._pull_image(docker_client)\n\n        container = self._create_container(docker_client, **container_settings)\n\n        # Add additional networks after the container is created; only one network can\n        # be attached at creation time\n        if len(self.networks) > 1:\n            for network_name in self.networks[1:]:\n                network = docker_client.networks.get(network_name)\n                network.connect(container)\n\n        # Start the container\n        container.start()\n\n        docker_client.close()\n\n        return container\n\n    def _get_image_and_tag(self) -> Tuple[str, Optional[str]]:\n        return parse_image_tag(self.image)\n\n    def _determine_image_pull_policy(self) -> ImagePullPolicy:\n        \"\"\"\n        Determine the appropriate image pull policy.\n\n        1. If they specified an image pull policy, use that.\n\n        2. If they did not specify an image pull policy and gave us\n           the \"latest\" tag, use ImagePullPolicy.always.\n\n        3. If they did not specify an image pull policy and did not\n           specify a tag, use ImagePullPolicy.always.\n\n        4. 
If they did not specify an image pull policy and gave us\n           a tag other than \"latest\", use ImagePullPolicy.if_not_present.\n\n        This logic matches the behavior of Kubernetes.\n        See:https://kubernetes.io/docs/concepts/containers/images/#imagepullpolicy-defaulting\n        \"\"\"\n        if not self.image_pull_policy:\n            _, tag = self._get_image_and_tag()\n            if tag == \"latest\" or not tag:\n                return ImagePullPolicy.ALWAYS\n            return ImagePullPolicy.IF_NOT_PRESENT\n        return self.image_pull_policy\n\n    def _get_network_mode(self) -> Optional[str]:\n        # User's value takes precedence; this may collide with the incompatible options\n        # mentioned below.\n        if self.network_mode:\n            if sys.platform != \"linux\" and self.network_mode == \"host\":\n                warnings.warn(\n                    f\"{self.network_mode!r} network mode is not supported on platform \"\n                    f\"{sys.platform!r} and may not work as intended.\"\n                )\n            return self.network_mode\n\n        # Network mode is not compatible with networks or ports (we do not support ports\n        # yet though)\n        if self.networks:\n            return None\n\n        # Check for a local API connection\n        api_url = self.env.get(\"PREFECT_API_URL\", PREFECT_API_URL.value())\n\n        if api_url:\n            try:\n                _, netloc, _, _, _, _ = urllib.parse.urlparse(api_url)\n            except Exception as exc:\n                warnings.warn(\n                    f\"Failed to parse host from API URL {api_url!r} with exception: \"\n                    f\"{exc}\\nThe network mode will not be inferred.\"\n                )\n                return None\n\n            host = netloc.split(\":\")[0]\n\n            # If using a locally hosted API, use a host network on linux\n            if sys.platform == \"linux\" and (host == \"127.0.0.1\" or host == \"localhost\"):\n                return \"host\"\n\n        # Default to unset\n        return None\n\n    def _should_pull_image(self, docker_client: \"DockerClient\") -> bool:\n        \"\"\"\n        Decide whether we need to pull the Docker image.\n        \"\"\"\n        image_pull_policy = self._determine_image_pull_policy()\n\n        if image_pull_policy is ImagePullPolicy.ALWAYS:\n            return True\n        elif image_pull_policy is ImagePullPolicy.NEVER:\n            return False\n        elif image_pull_policy is ImagePullPolicy.IF_NOT_PRESENT:\n            try:\n                # NOTE: images.get() wants the tag included with the image\n                # name, while images.pull() wants them split.\n                docker_client.images.get(self.image)\n            except docker.errors.ImageNotFound:\n                self.logger.debug(f\"Could not find Docker image locally: {self.image}\")\n                return True\n        return False\n\n    def _pull_image(self, docker_client: \"DockerClient\"):\n        \"\"\"\n        Pull the image we're going to use to create the container.\n        \"\"\"\n        image, tag = self._get_image_and_tag()\n\n        return docker_client.images.pull(image, tag)\n\n    def _create_container(self, docker_client: \"DockerClient\", **kwargs) -> \"Container\":\n        \"\"\"\n        Create a docker container with retries on name conflicts.\n\n        If the container already exists with the given name, an incremented index is\n        added.\n        \"\"\"\n        # Create the 
container with retries on name conflicts (with an incremented idx)\n        index = 0\n        container = None\n        name = original_name = kwargs.pop(\"name\")\n\n        while not container:\n            from docker.errors import APIError\n\n            try:\n                display_name = repr(name) if name else \"with auto-generated name\"\n                self.logger.info(f\"Creating Docker container {display_name}...\")\n                container = docker_client.containers.create(name=name, **kwargs)\n            except APIError as exc:\n                if \"Conflict\" in str(exc) and \"container name\" in str(exc):\n                    self.logger.info(\n                        f\"Docker container name {display_name} already exists; \"\n                        \"retrying...\"\n                    )\n                    index += 1\n                    name = f\"{original_name}-{index}\"\n                else:\n                    raise\n\n        self.logger.info(\n            f\"Docker container {container.name!r} has status {container.status!r}\"\n        )\n        return container\n\n    def _watch_container_safe(self, container: \"Container\") -> \"Container\":\n        # Monitor the container capturing the latest snapshot while capturing\n        # not found errors\n        docker_client = self._get_client()\n\n        try:\n            for latest_container in self._watch_container(docker_client, container.id):\n                container = latest_container\n        except docker.errors.NotFound:\n            # The container was removed during watching\n            self.logger.warning(\n                f\"Docker container {container.name} was removed before we could wait \"\n                \"for its completion.\"\n            )\n        finally:\n            docker_client.close()\n\n        return container\n\n    def _watch_container(\n        self, docker_client: \"DockerClient\", container_id: str\n    ) -> Generator[None, None, \"Container\"]:\n        container: \"Container\" = docker_client.containers.get(container_id)\n\n        status = container.status\n        self.logger.info(\n            f\"Docker container {container.name!r} has status {container.status!r}\"\n        )\n        yield container\n\n        if self.stream_output:\n            try:\n                for log in container.logs(stream=True):\n                    log: bytes\n                    print(log.decode().rstrip())\n            except docker.errors.APIError as exc:\n                if \"marked for removal\" in str(exc):\n                    self.logger.warning(\n                        f\"Docker container {container.name} was marked for removal\"\n                        \" before logs could be retrieved. Output will not be\"\n                        \" streamed. 
\"\n                    )\n                else:\n                    self.logger.exception(\n                        \"An unexpected Docker API error occurred while streaming\"\n                        f\" output from container {container.name}.\"\n                    )\n\n            container.reload()\n            if container.status != status:\n                self.logger.info(\n                    f\"Docker container {container.name!r} has status\"\n                    f\" {container.status!r}\"\n                )\n            yield container\n\n        container.wait()\n        self.logger.info(\n            f\"Docker container {container.name!r} has status {container.status!r}\"\n        )\n        yield container\n\n    def _get_client(self):\n        try:\n            with warnings.catch_warnings():\n                # Silence warnings due to use of deprecated methods within dockerpy\n                # See https://github.com/docker/docker-py/pull/2931\n                warnings.filterwarnings(\n                    \"ignore\",\n                    message=\"distutils Version classes are deprecated.*\",\n                    category=DeprecationWarning,\n                )\n\n                docker_client = docker.from_env()\n\n        except docker.errors.DockerException as exc:\n            raise RuntimeError(\"Could not connect to Docker.\") from exc\n\n        return docker_client\n\n    def _get_container_name(self) -> Optional[str]:\n        \"\"\"\n        Generates a container name to match the configured name, ensuring it is Docker\n        compatible.\n        \"\"\"\n        # Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+` in the end\n        if not self.name:\n            return None\n\n        return (\n            slugify(\n                self.name,\n                lowercase=False,\n                # Docker does not limit length but URL limits apply eventually so\n                # limit the length for safety\n                max_length=250,\n                # Docker allows these characters for container names\n                regex_pattern=r\"[^a-zA-Z0-9_.-]+\",\n            ).lstrip(\n                # Docker does not allow leading underscore, dash, or period\n                \"_-.\"\n            )\n            # Docker does not allow 0 character names so cast to null if the name is\n            # empty after slufification\n            or None\n        )\n\n    def _get_extra_hosts(self, docker_client) -> Dict[str, str]:\n        \"\"\"\n        A host.docker.internal -> host-gateway mapping is necessary for communicating\n        with the API on Linux machines. Docker Desktop on macOS will automatically\n        already have this mapping.\n        \"\"\"\n        if sys.platform == \"linux\" and (\n            # Do not warn if the user has specified a host manually that does not use\n            # a local address\n            \"PREFECT_API_URL\" not in self.env\n            or re.search(\n                \".*(localhost)|(127.0.0.1)|(host.docker.internal).*\",\n                self.env[\"PREFECT_API_URL\"],\n            )\n        ):\n            user_version = packaging.version.parse(\n                format_outlier_version_name(docker_client.version()[\"Version\"])\n            )\n            required_version = packaging.version.parse(\"20.10.0\")\n\n            if user_version < required_version:\n                warnings.warn(\n                    \"`host.docker.internal` could not be automatically resolved to\"\n                    \" your local ip address. 
This feature is not supported on Docker\"\n                    f\" Engine v{user_version}, upgrade to v{required_version}+ if you\"\n                    \" encounter issues.\"\n                )\n                return {}\n            else:\n                # Compatibility for linux -- https://github.com/docker/cli/issues/2290\n                # Only supported by Docker v20.10.0+ which is our minimum recommend version\n                return {\"host.docker.internal\": \"host-gateway\"}\n\n    def _get_environment_variables(self, network_mode):\n        # If the API URL has been set by the base environment rather than the by the\n        # user, update the value to ensure connectivity when using a bridge network by\n        # updating local connections to use the docker internal host unless the\n        # network mode is \"host\" where localhost is available already.\n        env = {**self._base_environment(), **self.env}\n\n        if (\n            \"PREFECT_API_URL\" in env\n            and \"PREFECT_API_URL\" not in self.env\n            and network_mode != \"host\"\n        ):\n            env[\"PREFECT_API_URL\"] = (\n                env[\"PREFECT_API_URL\"]\n                .replace(\"localhost\", \"host.docker.internal\")\n                .replace(\"127.0.0.1\", \"host.docker.internal\")\n            )\n\n        # Drop null values allowing users to \"unset\" variables\n        return {key: value for key, value in env.items() if value is not None}\n
    ","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainerResult","title":"DockerContainerResult","text":"

    Bases: InfrastructureResult

    Contains information about a completed Docker container

    Source code in prefect/infrastructure/container.py
    class DockerContainerResult(InfrastructureResult):\n    \"\"\"Contains information about a completed Docker container\"\"\"\n
    ","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Infrastructure","title":"Infrastructure","text":"

    Bases: Block, ABC

    Source code in prefect/infrastructure/base.py
    @deprecated_class(\n    start_date=\"Mar 2024\",\n    help=\"Use the `BaseWorker` class to create custom infrastructure integrations instead.\"\n    \" Refer to the upgrade guide for more information:\"\n    \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\",\n)\nclass Infrastructure(Block, abc.ABC):\n    _block_schema_capabilities = [\"run-infrastructure\"]\n\n    type: str\n\n    env: Dict[str, Optional[str]] = pydantic.Field(\n        default_factory=dict,\n        title=\"Environment\",\n        description=\"Environment variables to set in the configured infrastructure.\",\n    )\n    labels: Dict[str, str] = pydantic.Field(\n        default_factory=dict,\n        description=\"Labels applied to the infrastructure for metadata purposes.\",\n    )\n    name: Optional[str] = pydantic.Field(\n        default=None,\n        description=\"Name applied to the infrastructure for identification.\",\n    )\n    command: Optional[List[str]] = pydantic.Field(\n        default=None,\n        description=\"The command to run in the infrastructure.\",\n    )\n\n    async def generate_work_pool_base_job_template(self):\n        if self._block_document_id is None:\n            raise BlockNotSavedError(\n                \"Cannot publish as work pool, block has not been saved. Please call\"\n                \" `.save()` on your block before publishing.\"\n            )\n\n        block_schema = self.__class__.schema()\n        return {\n            \"job_configuration\": {\"block\": \"{{ block }}\"},\n            \"variables\": {\n                \"type\": \"object\",\n                \"properties\": {\n                    \"block\": {\n                        \"title\": \"Block\",\n                        \"description\": (\n                            \"The infrastructure block to use for job creation.\"\n                        ),\n                        \"allOf\": [{\"$ref\": f\"#/definitions/{self.__class__.__name__}\"}],\n                        \"default\": {\n                            \"$ref\": {\"block_document_id\": str(self._block_document_id)}\n                        },\n                    }\n                },\n                \"required\": [\"block\"],\n                \"definitions\": {self.__class__.__name__: block_schema},\n            },\n        }\n\n    def get_corresponding_worker_type(self):\n        return \"block\"\n\n    @sync_compatible\n    async def publish_as_work_pool(self, work_pool_name: Optional[str] = None):\n        \"\"\"\n        Creates a work pool configured to use the given block as the job creator.\n\n        Used to migrate from a agents setup to a worker setup.\n\n        Args:\n            work_pool_name: The name to give to the created work pool. 
If not provided, the name of the current\n                block will be used.\n        \"\"\"\n\n        base_job_template = await self.generate_work_pool_base_job_template()\n        work_pool_name = work_pool_name or self._block_document_name\n\n        if work_pool_name is None:\n            raise ValueError(\n                \"`work_pool_name` must be provided if the block has not been saved.\"\n            )\n\n        console = Console()\n\n        try:\n            async with prefect.get_client() as client:\n                work_pool = await client.create_work_pool(\n                    work_pool=WorkPoolCreate(\n                        name=work_pool_name,\n                        type=self.get_corresponding_worker_type(),\n                        base_job_template=base_job_template,\n                    )\n                )\n        except ObjectAlreadyExists:\n            console.print(\n                (\n                    f\"Work pool with name {work_pool_name!r} already exists, please use\"\n                    \" a different name.\"\n                ),\n                style=\"red\",\n            )\n            return\n\n        console.print(\n            f\"Work pool {work_pool.name} created!\",\n            style=\"green\",\n        )\n        if PREFECT_UI_URL:\n            console.print(\n                \"You see your new work pool in the UI at\"\n                f\" {PREFECT_UI_URL.value()}/work-pools/work-pool/{work_pool.name}\"\n            )\n\n        deploy_script = (\n            \"my_flow.deploy(work_pool_name='{work_pool.name}', image='my_image:tag')\"\n        )\n        if not hasattr(self, \"image\"):\n            deploy_script = (\n                \"my_flow.from_source(source='https://github.com/org/repo.git',\"\n                f\" entrypoint='flow.py:my_flow').deploy(work_pool_name='{work_pool.name}')\"\n            )\n        console.print(\n            \"\\nYou can deploy a flow to this work pool by calling\"\n            f\" [blue].deploy[/]:\\n\\n\\t{deploy_script}\\n\"\n        )\n        console.print(\n            \"\\nTo start a worker to execute flow runs in this work pool run:\\n\"\n        )\n        console.print(f\"\\t[blue]prefect worker start --pool {work_pool.name}[/]\\n\")\n\n    @abc.abstractmethod\n    async def run(\n        self,\n        task_status: anyio.abc.TaskStatus = None,\n    ) -> InfrastructureResult:\n        \"\"\"\n        Run the infrastructure.\n\n        If provided a `task_status`, the status will be reported as started when the\n        infrastructure is successfully created. 
The status return value will be an\n        identifier for the infrastructure.\n\n        The call will then monitor the created infrastructure, returning a result at\n        the end containing a status code indicating if the infrastructure exited cleanly\n        or encountered an error.\n        \"\"\"\n        # Note: implementations should include `sync_compatible`\n\n    @abc.abstractmethod\n    def preview(self) -> str:\n        \"\"\"\n        View a preview of the infrastructure that would be run.\n        \"\"\"\n\n    @property\n    def logger(self):\n        return get_logger(f\"prefect.infrastructure.{self.type}\")\n\n    @property\n    def is_using_a_runner(self):\n        return self.command is not None and \"prefect flow-run execute\" in shlex.join(\n            self.command\n        )\n\n    @classmethod\n    def _base_environment(cls) -> Dict[str, str]:\n        \"\"\"\n        Environment variables that should be passed to all created infrastructure.\n\n        These values should be overridable with the `env` field.\n        \"\"\"\n        return get_current_settings().to_environment_variables(exclude_unset=True)\n\n    def prepare_for_flow_run(\n        self: Self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"Deployment\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ) -> Self:\n        \"\"\"\n        Return an infrastructure block that is prepared to execute a flow run.\n        \"\"\"\n        if deployment is not None:\n            deployment_labels = self._base_deployment_labels(deployment)\n        else:\n            deployment_labels = {}\n\n        if flow is not None:\n            flow_labels = self._base_flow_labels(flow)\n        else:\n            flow_labels = {}\n\n        return self.copy(\n            update={\n                \"env\": {**self._base_flow_run_environment(flow_run), **self.env},\n                \"labels\": {\n                    **self._base_flow_run_labels(flow_run),\n                    **deployment_labels,\n                    **flow_labels,\n                    **self.labels,\n                },\n                \"name\": self.name or flow_run.name,\n                \"command\": self.command or self._base_flow_run_command(),\n            }\n        )\n\n    @staticmethod\n    def _base_flow_run_command() -> List[str]:\n        \"\"\"\n        Generate a command for a flow run job.\n        \"\"\"\n        if experiment_enabled(\"enhanced_cancellation\"):\n            if (\n                PREFECT_EXPERIMENTAL_WARN\n                and PREFECT_EXPERIMENTAL_WARN_ENHANCED_CANCELLATION\n            ):\n                warnings.warn(\n                    EXPERIMENTAL_WARNING.format(\n                        feature=\"Enhanced flow run cancellation\",\n                        group=\"enhanced_cancellation\",\n                        help=\"\",\n                    ),\n                    ExperimentalFeature,\n                    stacklevel=3,\n                )\n            return [\"prefect\", \"flow-run\", \"execute\"]\n\n        return [\"python\", \"-m\", \"prefect.engine\"]\n\n    @staticmethod\n    def _base_flow_run_labels(flow_run: \"FlowRun\") -> Dict[str, str]:\n        \"\"\"\n        Generate a dictionary of labels for a flow run job.\n        \"\"\"\n        return {\n            \"prefect.io/flow-run-id\": str(flow_run.id),\n            \"prefect.io/flow-run-name\": flow_run.name,\n            \"prefect.io/version\": prefect.__version__,\n        }\n\n    @staticmethod\n    def 
_base_flow_run_environment(flow_run: \"FlowRun\") -> Dict[str, str]:\n        \"\"\"\n        Generate a dictionary of environment variables for a flow run job.\n        \"\"\"\n        environment = {}\n        environment[\"PREFECT__FLOW_RUN_ID\"] = str(flow_run.id)\n        return environment\n\n    @staticmethod\n    def _base_deployment_labels(deployment: \"Deployment\") -> Dict[str, str]:\n        labels = {\n            \"prefect.io/deployment-name\": deployment.name,\n        }\n        if deployment.updated is not None:\n            labels[\"prefect.io/deployment-updated\"] = deployment.updated.in_timezone(\n                \"utc\"\n            ).to_iso8601_string()\n        return labels\n\n    @staticmethod\n    def _base_flow_labels(flow: \"Flow\") -> Dict[str, str]:\n        return {\n            \"prefect.io/flow-name\": flow.name,\n        }\n
    ","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Infrastructure.prepare_for_flow_run","title":"prepare_for_flow_run","text":"

    Return an infrastructure block that is prepared to execute a flow run.

    Source code in prefect/infrastructure/base.py
    def prepare_for_flow_run(\n    self: Self,\n    flow_run: \"FlowRun\",\n    deployment: Optional[\"Deployment\"] = None,\n    flow: Optional[\"Flow\"] = None,\n) -> Self:\n    \"\"\"\n    Return an infrastructure block that is prepared to execute a flow run.\n    \"\"\"\n    if deployment is not None:\n        deployment_labels = self._base_deployment_labels(deployment)\n    else:\n        deployment_labels = {}\n\n    if flow is not None:\n        flow_labels = self._base_flow_labels(flow)\n    else:\n        flow_labels = {}\n\n    return self.copy(\n        update={\n            \"env\": {**self._base_flow_run_environment(flow_run), **self.env},\n            \"labels\": {\n                **self._base_flow_run_labels(flow_run),\n                **deployment_labels,\n                **flow_labels,\n                **self.labels,\n            },\n            \"name\": self.name or flow_run.name,\n            \"command\": self.command or self._base_flow_run_command(),\n        }\n    )\n
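
    As a rough usage sketch (the Process block, the environment variable, and the flow run ID below are illustrative assumptions), a concrete block can be prepared for a specific flow run like this:

    import prefect\nfrom prefect.infrastructure import Process\n\nasync def prepare_block(flow_run_id):\n    async with prefect.get_client() as client:\n        flow_run = await client.read_flow_run(flow_run_id)\n    block = Process(env={\"MY_VAR\": \"1\"})\n    # Returns a copy with the flow run's env, labels, name, and command merged in\n    return block.prepare_for_flow_run(flow_run)\n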
    ","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Infrastructure.preview","title":"preview abstractmethod","text":"

    View a preview of the infrastructure that would be run.

    Source code in prefect/infrastructure/base.py
    @abc.abstractmethod\ndef preview(self) -> str:\n    \"\"\"\n    View a preview of the infrastructure that would be run.\n    \"\"\"\n
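
    For example, on a concrete block such as KubernetesJob (the namespace and image values here are assumptions), preview renders the manifest that would be submitted:

    from prefect.infrastructure import KubernetesJob\n\nk8s_job = KubernetesJob(namespace=\"prefect\", image=\"prefecthq/prefect:2-latest\")\nprint(k8s_job.preview())  # YAML dump of the manifest that would be created\n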
    ","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Infrastructure.publish_as_work_pool","title":"publish_as_work_pool async","text":"

    Creates a work pool configured to use the given block as the job creator.

    Used to migrate from an agent setup to a worker setup.

    Parameters:

    Name Type Description Default work_pool_name Optional[str]

    The name to give to the created work pool. If not provided, the name of the current block will be used.

    None Source code in prefect/infrastructure/base.py
    @sync_compatible\nasync def publish_as_work_pool(self, work_pool_name: Optional[str] = None):\n    \"\"\"\n    Creates a work pool configured to use the given block as the job creator.\n\n    Used to migrate from a agents setup to a worker setup.\n\n    Args:\n        work_pool_name: The name to give to the created work pool. If not provided, the name of the current\n            block will be used.\n    \"\"\"\n\n    base_job_template = await self.generate_work_pool_base_job_template()\n    work_pool_name = work_pool_name or self._block_document_name\n\n    if work_pool_name is None:\n        raise ValueError(\n            \"`work_pool_name` must be provided if the block has not been saved.\"\n        )\n\n    console = Console()\n\n    try:\n        async with prefect.get_client() as client:\n            work_pool = await client.create_work_pool(\n                work_pool=WorkPoolCreate(\n                    name=work_pool_name,\n                    type=self.get_corresponding_worker_type(),\n                    base_job_template=base_job_template,\n                )\n            )\n    except ObjectAlreadyExists:\n        console.print(\n            (\n                f\"Work pool with name {work_pool_name!r} already exists, please use\"\n                \" a different name.\"\n            ),\n            style=\"red\",\n        )\n        return\n\n    console.print(\n        f\"Work pool {work_pool.name} created!\",\n        style=\"green\",\n    )\n    if PREFECT_UI_URL:\n        console.print(\n            \"You see your new work pool in the UI at\"\n            f\" {PREFECT_UI_URL.value()}/work-pools/work-pool/{work_pool.name}\"\n        )\n\n    deploy_script = (\n        \"my_flow.deploy(work_pool_name='{work_pool.name}', image='my_image:tag')\"\n    )\n    if not hasattr(self, \"image\"):\n        deploy_script = (\n            \"my_flow.from_source(source='https://github.com/org/repo.git',\"\n            f\" entrypoint='flow.py:my_flow').deploy(work_pool_name='{work_pool.name}')\"\n        )\n    console.print(\n        \"\\nYou can deploy a flow to this work pool by calling\"\n        f\" [blue].deploy[/]:\\n\\n\\t{deploy_script}\\n\"\n    )\n    console.print(\n        \"\\nTo start a worker to execute flow runs in this work pool run:\\n\"\n    )\n    console.print(f\"\\t[blue]prefect worker start --pool {work_pool.name}[/]\\n\")\n
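
    As an illustrative sketch (the block name \"my-k8s-job\" and pool name \"my-k8s-pool\" are assumptions):

    from prefect.infrastructure import KubernetesJob\n\nk8s_job = KubernetesJob.load(\"my-k8s-job\")  # a previously saved block\n# Creates a \"kubernetes\" work pool whose base job template mirrors this block\nk8s_job.publish_as_work_pool(work_pool_name=\"my-k8s-pool\")\n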
    ","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Infrastructure.run","title":"run abstractmethod async","text":"

    Run the infrastructure.

    If a task_status is provided, the status will be reported as started once the infrastructure is successfully created. The status return value will be an identifier for the infrastructure.

    The call will then monitor the created infrastructure, returning a result at the end containing a status code indicating if the infrastructure exited cleanly or encountered an error.

    Source code in prefect/infrastructure/base.py
    @abc.abstractmethod\nasync def run(\n    self,\n    task_status: anyio.abc.TaskStatus = None,\n) -> InfrastructureResult:\n    \"\"\"\n    Run the infrastructure.\n\n    If provided a `task_status`, the status will be reported as started when the\n    infrastructure is successfully created. The status return value will be an\n    identifier for the infrastructure.\n\n    The call will then monitor the created infrastructure, returning a result at\n    the end containing a status code indicating if the infrastructure exited cleanly\n    or encountered an error.\n    \"\"\"\n
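
    Concrete implementations decorate run with sync_compatible, so a block can be run directly; a minimal sketch using the Process block (the command is illustrative):

    from prefect.infrastructure import Process\n\nresult = Process(command=[\"echo\", \"hello\"]).run()\nprint(result.status_code, result.identifier)  # exit code and process PID\n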
    ","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesClusterConfig","title":"KubernetesClusterConfig","text":"

    Bases: Block

    Stores configuration for interaction with Kubernetes clusters.

    See from_file for creation.

    Attributes:

    Name Type Description config Dict

    The entire loaded YAML contents of a kubectl config file

    context_name str

    The name of the kubectl context to use

    Example

    Load a saved Kubernetes cluster config:

    from prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n

    Source code in prefect/blocks/kubernetes.py
    @deprecated_class(\n    start_date=\"Mar 2024\",\n    help=\"Use the KubernetesClusterConfig block from prefect-kubernetes instead.\",\n)\nclass KubernetesClusterConfig(Block):\n    \"\"\"\n    Stores configuration for interaction with Kubernetes clusters.\n\n    See `from_file` for creation.\n\n    Attributes:\n        config: The entire loaded YAML contents of a kubectl config file\n        context_name: The name of the kubectl context to use\n\n    Example:\n        Load a saved Kubernetes cluster config:\n        ```python\n        from prefect.blocks.kubernetes import KubernetesClusterConfig\n\n        cluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Kubernetes Cluster Config\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/2d0b896006ad463b49c28aaac14f31e00e32cfab-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig\"\n\n    config: Dict = Field(\n        default=..., description=\"The entire contents of a kubectl config file.\"\n    )\n    context_name: str = Field(\n        default=..., description=\"The name of the kubectl context to use.\"\n    )\n\n    @validator(\"config\", pre=True)\n    def parse_yaml_config(cls, value):\n        return validate_yaml(value)\n\n    @classmethod\n    def from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n        \"\"\"\n        Create a cluster config from the a Kubernetes config file.\n\n        By default, the current context in the default Kubernetes config file will be\n        used.\n\n        An alternative file or context may be specified.\n\n        The entire config file will be loaded and stored.\n        \"\"\"\n        kube_config = kubernetes.config.kube_config\n\n        path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n        path = path.expanduser().resolve()\n\n        # Determine the context\n        existing_contexts, current_context = kube_config.list_kube_config_contexts(\n            config_file=str(path)\n        )\n        context_names = {ctx[\"name\"] for ctx in existing_contexts}\n        if context_name:\n            if context_name not in context_names:\n                raise ValueError(\n                    f\"Context {context_name!r} not found. \"\n                    f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n                )\n        else:\n            context_name = current_context[\"name\"]\n\n        # Load the entire config file\n        config_file_contents = path.read_text()\n        config_dict = yaml.safe_load(config_file_contents)\n\n        return cls(config=config_dict, context_name=context_name)\n\n    def get_api_client(self) -> \"ApiClient\":\n        \"\"\"\n        Returns a Kubernetes API client for this cluster config.\n        \"\"\"\n        return kubernetes.config.kube_config.new_client_from_config_dict(\n            config_dict=self.config, context=self.context_name\n        )\n\n    def configure_client(self) -> None:\n        \"\"\"\n        Activates this cluster configuration by loading the configuration into the\n        Kubernetes Python client. After calling this, Kubernetes API clients can use\n        this config's context.\n        \"\"\"\n        kubernetes.config.kube_config.load_kube_config_from_dict(\n            config_dict=self.config, context=self.context_name\n        )\n
    ","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesClusterConfig.configure_client","title":"configure_client","text":"

    Activates this cluster configuration by loading the configuration into the Kubernetes Python client. After calling this, Kubernetes API clients can use this config's context.

    Source code in prefect/blocks/kubernetes.py
    def configure_client(self) -> None:\n    \"\"\"\n    Activates this cluster configuration by loading the configuration into the\n    Kubernetes Python client. After calling this, Kubernetes API clients can use\n    this config's context.\n    \"\"\"\n    kubernetes.config.kube_config.load_kube_config_from_dict(\n        config_dict=self.config, context=self.context_name\n    )\n
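
    A short sketch (the block name is an assumption): after configure_client, Kubernetes clients created without an explicit configuration use this config's context:

    import kubernetes\n\nfrom prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config = KubernetesClusterConfig.load(\"BLOCK_NAME\")\ncluster_config.configure_client()\n# Clients created afterwards use the loaded context\ncore_v1 = kubernetes.client.CoreV1Api()\n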
    ","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesClusterConfig.from_file","title":"from_file classmethod","text":"

    Create a cluster config from a Kubernetes config file.

    By default, the current context in the default Kubernetes config file will be used.

    An alternative file or context may be specified.

    The entire config file will be loaded and stored.

    Source code in prefect/blocks/kubernetes.py
    @classmethod\ndef from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n    \"\"\"\n    Create a cluster config from the a Kubernetes config file.\n\n    By default, the current context in the default Kubernetes config file will be\n    used.\n\n    An alternative file or context may be specified.\n\n    The entire config file will be loaded and stored.\n    \"\"\"\n    kube_config = kubernetes.config.kube_config\n\n    path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n    path = path.expanduser().resolve()\n\n    # Determine the context\n    existing_contexts, current_context = kube_config.list_kube_config_contexts(\n        config_file=str(path)\n    )\n    context_names = {ctx[\"name\"] for ctx in existing_contexts}\n    if context_name:\n        if context_name not in context_names:\n            raise ValueError(\n                f\"Context {context_name!r} not found. \"\n                f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n            )\n    else:\n        context_name = current_context[\"name\"]\n\n    # Load the entire config file\n    config_file_contents = path.read_text()\n    config_dict = yaml.safe_load(config_file_contents)\n\n    return cls(config=config_dict, context_name=context_name)\n
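
    For example (the context name \"my-context\" and block name \"my-cluster\" are assumptions):

    from prefect.blocks.kubernetes import KubernetesClusterConfig\n\n# Uses the default kubeconfig location and its current context\ncluster_config = KubernetesClusterConfig.from_file()\n\n# Or specify an alternative file and context explicitly\ncluster_config = KubernetesClusterConfig.from_file(\n    path=\"~/.kube/config\", context_name=\"my-context\"\n)\ncluster_config.save(\"my-cluster\")\n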
    ","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesClusterConfig.get_api_client","title":"get_api_client","text":"

    Returns a Kubernetes API client for this cluster config.

    Source code in prefect/blocks/kubernetes.py
    def get_api_client(self) -> \"ApiClient\":\n    \"\"\"\n    Returns a Kubernetes API client for this cluster config.\n    \"\"\"\n    return kubernetes.config.kube_config.new_client_from_config_dict(\n        config_dict=self.config, context=self.context_name\n    )\n
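
    For instance (the block name and namespace are assumptions), the returned client can be passed to any Kubernetes API object:

    import kubernetes\n\nfrom prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config = KubernetesClusterConfig.load(\"BLOCK_NAME\")\napi_client = cluster_config.get_api_client()\npods = kubernetes.client.CoreV1Api(api_client=api_client).list_namespaced_pod(\"default\")\n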
    ","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob","title":"KubernetesJob","text":"

    Bases: Infrastructure

    Runs a command as a Kubernetes Job.

    For a guided tutorial, see How to use Kubernetes with Prefect. For more information, including examples for customizing the resulting manifest, see KubernetesJob infrastructure concepts.

    Attributes:

    Name Type Description cluster_config Optional[KubernetesClusterConfig]

    An optional Kubernetes cluster config to use for this job.

    command Optional[List[str]]

    A list of strings specifying the command to run in the container to start the flow run. In most cases you should not override this.

    customizations JsonPatch

    A list of JSON 6902 patches to apply to the base Job manifest.

    env Dict[str, Optional[str]]

    Environment variables to set for the container.

    finished_job_ttl Optional[int]

    The number of seconds to retain jobs after completion. If set, finished jobs will be cleaned up by Kubernetes after the given delay. If None (default), jobs will need to be manually removed.

    image Optional[str]

    An optional string specifying the image reference of a container image to use for the job, for example, docker.io/prefecthq/prefect:2-latest. The behavior is as described in https://kubernetes.io/docs/concepts/containers/images/#image-names. Defaults to the Prefect image.

    image_pull_policy Optional[KubernetesImagePullPolicy]

    The Kubernetes image pull policy to use for job containers.

    job KubernetesManifest

    The base manifest for the Kubernetes Job.

    job_watch_timeout_seconds Optional[int]

    Number of seconds to wait for the job to complete before marking it as crashed. Defaults to None, which means no timeout will be enforced.

    labels Dict[str, str]

    An optional dictionary of labels to add to the job.

    name Optional[str]

    An optional name for the job.

    namespace Optional[str]

    An optional string signifying the Kubernetes namespace to use.

    pod_watch_timeout_seconds int

    Number of seconds to watch for pod creation before timing out (default 60).

    service_account_name Optional[str]

    An optional string specifying which Kubernetes service account to use.

    stream_output bool

    If set, stream output from the job to local standard output.
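
    As an illustrative configuration (the namespace, image, environment variable, and block name below are assumptions, not defaults):

    from prefect.infrastructure import KubernetesJob\n\nk8s_job = KubernetesJob(\n    namespace=\"prefect\",\n    image=\"prefecthq/prefect:2-latest\",\n    env={\"EXTRA_PIP_PACKAGES\": \"s3fs\"},\n    finished_job_ttl=300,\n)\nk8s_job.save(\"my-k8s-job\")\n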

    Source code in prefect/infrastructure/kubernetes.py
    @deprecated_class(\n    start_date=\"Mar 2024\",\n    help=\"Use the Kubernetes worker from prefect-kubernetes instead.\"\n    \" Refer to the upgrade guide for more information:\"\n    \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\",\n)\nclass KubernetesJob(Infrastructure):\n    \"\"\"\n    Runs a command as a Kubernetes Job.\n\n    For a guided tutorial, see [How to use Kubernetes with Prefect](https://medium.com/the-prefect-blog/how-to-use-kubernetes-with-prefect-419b2e8b8cb2/).\n    For more information, including examples for customizing the resulting manifest, see [`KubernetesJob` infrastructure concepts](https://docs.prefect.io/concepts/infrastructure/#kubernetesjob).\n\n    Attributes:\n        cluster_config: An optional Kubernetes cluster config to use for this job.\n        command: A list of strings specifying the command to run in the container to\n            start the flow run. In most cases you should not override this.\n        customizations: A list of JSON 6902 patches to apply to the base Job manifest.\n        env: Environment variables to set for the container.\n        finished_job_ttl: The number of seconds to retain jobs after completion. If set, finished jobs will\n            be cleaned up by Kubernetes after the given delay. If None (default), jobs will need to be\n            manually removed.\n        image: An optional string specifying the image reference of a container image\n            to use for the job, for example, docker.io/prefecthq/prefect:2-latest. The\n            behavior is as described in https://kubernetes.io/docs/concepts/containers/images/#image-names.\n            Defaults to the Prefect image.\n        image_pull_policy: The Kubernetes image pull policy to use for job containers.\n        job: The base manifest for the Kubernetes Job.\n        job_watch_timeout_seconds: Number of seconds to wait for the job to complete\n            before marking it as crashed. 
Defaults to `None`, which means no timeout will be enforced.\n        labels: An optional dictionary of labels to add to the job.\n        name: An optional name for the job.\n        namespace: An optional string signifying the Kubernetes namespace to use.\n        pod_watch_timeout_seconds: Number of seconds to watch for pod creation before timing out (default 60).\n        service_account_name: An optional string specifying which Kubernetes service account to use.\n        stream_output: If set, stream output from the job to local standard output.\n    \"\"\"\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/2d0b896006ad463b49c28aaac14f31e00e32cfab-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob\"\n\n    type: Literal[\"kubernetes-job\"] = Field(\n        default=\"kubernetes-job\", description=\"The type of infrastructure.\"\n    )\n    # shortcuts for the most common user-serviceable settings\n    image: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The image reference of a container image to use for the job, for example,\"\n            \" `docker.io/prefecthq/prefect:2-latest`.The behavior is as described in\"\n            \" the Kubernetes documentation and uses the latest version of Prefect by\"\n            \" default, unless an image is already present in a provided job manifest.\"\n        ),\n    )\n    namespace: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The Kubernetes namespace to use for this job. Defaults to 'default' \"\n            \"unless a namespace is already present in a provided job manifest.\"\n        ),\n    )\n    service_account_name: Optional[str] = Field(\n        default=None, description=\"The Kubernetes service account to use for this job.\"\n    )\n    image_pull_policy: Optional[KubernetesImagePullPolicy] = Field(\n        default=None,\n        description=\"The Kubernetes image pull policy to use for job containers.\",\n    )\n\n    # connection to a cluster\n    cluster_config: Optional[KubernetesClusterConfig] = Field(\n        default=None, description=\"The Kubernetes cluster config to use for this job.\"\n    )\n\n    # settings allowing full customization of the Job\n    job: KubernetesManifest = Field(\n        default_factory=lambda: KubernetesJob.base_job_manifest(),\n        description=\"The base manifest for the Kubernetes Job.\",\n        title=\"Base Job Manifest\",\n    )\n    customizations: JsonPatch = Field(\n        default_factory=lambda: JsonPatch([]),\n        description=\"A list of JSON 6902 patches to apply to the base Job manifest.\",\n    )\n\n    # controls the behavior of execution\n    job_watch_timeout_seconds: Optional[int] = Field(\n        default=None,\n        description=(\n            \"Number of seconds to wait for the job to complete before marking it as\"\n            \" crashed. 
Defaults to `None`, which means no timeout will be enforced.\"\n        ),\n    )\n    pod_watch_timeout_seconds: int = Field(\n        default=60,\n        description=\"Number of seconds to watch for pod creation before timing out.\",\n    )\n    stream_output: bool = Field(\n        default=True,\n        description=(\n            \"If set, output will be streamed from the job to local standard output.\"\n        ),\n    )\n    finished_job_ttl: Optional[int] = Field(\n        default=None,\n        description=(\n            \"The number of seconds to retain jobs after completion. If set, finished\"\n            \" jobs will be cleaned up by Kubernetes after the given delay. If None\"\n            \" (default), jobs will need to be manually removed.\"\n        ),\n    )\n\n    # internal-use only right now\n    _api_dns_name: Optional[str] = None  # Replaces 'localhost' in API URL\n\n    _block_type_name = \"Kubernetes Job\"\n\n    @validator(\"job\")\n    def ensure_job_includes_all_required_components(cls, value: KubernetesManifest):\n        return validate_k8s_job_required_components(cls, value)\n\n    @validator(\"job\")\n    def ensure_job_has_compatible_values(cls, value: KubernetesManifest):\n        return validate_k8s_job_compatible_values(cls, value)\n\n    @validator(\"customizations\", pre=True)\n    def cast_customizations_to_a_json_patch(\n        cls, value: Union[List[Dict], JsonPatch, str]\n    ) -> JsonPatch:\n        return cast_k8s_job_customizations(cls, value)\n\n    @root_validator\n    def default_namespace(cls, values):\n        return set_default_namespace(values)\n\n    @root_validator\n    def default_image(cls, values):\n        return set_default_image(values)\n\n    # Support serialization of the 'JsonPatch' type\n    class Config:\n        arbitrary_types_allowed = True\n        json_encoders = {JsonPatch: lambda p: p.patch}\n\n    def dict(self, *args, **kwargs) -> Dict:\n        d = super().dict(*args, **kwargs)\n        d[\"customizations\"] = self.customizations.patch\n        return d\n\n    @classmethod\n    def base_job_manifest(cls) -> KubernetesManifest:\n        \"\"\"Produces the bare minimum allowed Job manifest\"\"\"\n        return {\n            \"apiVersion\": \"batch/v1\",\n            \"kind\": \"Job\",\n            \"metadata\": {\"labels\": {}},\n            \"spec\": {\n                \"template\": {\n                    \"spec\": {\n                        \"parallelism\": 1,\n                        \"completions\": 1,\n                        \"restartPolicy\": \"Never\",\n                        \"containers\": [\n                            {\n                                \"name\": \"prefect-job\",\n                                \"env\": [],\n                            }\n                        ],\n                    }\n                }\n            },\n        }\n\n    # Note that we're using the yaml package to load both YAML and JSON files below.\n    # This works because YAML is a strict superset of JSON:\n    #\n    #   > The YAML 1.23 specification was published in 2009. Its primary focus was\n    #   > making YAML a strict superset of JSON. 
It also removed many of the problematic\n    #   > implicit typing recommendations.\n    #\n    #   https://yaml.org/spec/1.2.2/#12-yaml-history\n\n    @classmethod\n    def job_from_file(cls, filename: str) -> KubernetesManifest:\n        \"\"\"Load a Kubernetes Job manifest from a YAML or JSON file.\"\"\"\n        with open(filename, \"r\", encoding=\"utf-8\") as f:\n            return yaml.load(f, yaml.SafeLoader)\n\n    @classmethod\n    def customize_from_file(cls, filename: str) -> JsonPatch:\n        \"\"\"Load an RFC 6902 JSON patch from a YAML or JSON file.\"\"\"\n        with open(filename, \"r\", encoding=\"utf-8\") as f:\n            return JsonPatch(yaml.load(f, yaml.SafeLoader))\n\n    @sync_compatible\n    async def run(\n        self,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> KubernetesJobResult:\n        if not self.command:\n            raise ValueError(\"Kubernetes job cannot be run with empty command.\")\n\n        self._configure_kubernetes_library_client()\n        manifest = self.build_job()\n        job = await run_sync_in_worker_thread(self._create_job, manifest)\n\n        pid = await run_sync_in_worker_thread(self._get_infrastructure_pid, job)\n        # Indicate that the job has started\n        if task_status is not None:\n            task_status.started(pid)\n\n        # Monitor the job until completion\n        status_code = await run_sync_in_worker_thread(\n            self._watch_job, job.metadata.name\n        )\n        return KubernetesJobResult(identifier=pid, status_code=status_code)\n\n    async def kill(self, infrastructure_pid: str, grace_seconds: int = 30):\n        self._configure_kubernetes_library_client()\n        job_cluster_uid, job_namespace, job_name = self._parse_infrastructure_pid(\n            infrastructure_pid\n        )\n\n        if not job_namespace == self.namespace:\n            raise InfrastructureNotAvailable(\n                f\"Unable to kill job {job_name!r}: The job is running in namespace \"\n                f\"{job_namespace!r} but this block is configured to use \"\n                f\"{self.namespace!r}.\"\n            )\n\n        current_cluster_uid = self._get_cluster_uid()\n        if job_cluster_uid != current_cluster_uid:\n            raise InfrastructureNotAvailable(\n                f\"Unable to kill job {job_name!r}: The job is running on another \"\n                \"cluster.\"\n            )\n\n        with self.get_batch_client() as batch_client:\n            try:\n                batch_client.delete_namespaced_job(\n                    name=job_name,\n                    namespace=job_namespace,\n                    grace_period_seconds=grace_seconds,\n                    # Foreground propagation deletes dependent objects before deleting owner objects.\n                    # This ensures that the pods are cleaned up before the job is marked as deleted.\n                    # See: https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion\n                    propagation_policy=\"Foreground\",\n                )\n            except kubernetes.client.exceptions.ApiException as exc:\n                if exc.status == 404:\n                    raise InfrastructureNotFound(\n                        f\"Unable to kill job {job_name!r}: The job was not found.\"\n                    ) from exc\n                else:\n                    raise\n\n    def preview(self):\n        return yaml.dump(self.build_job())\n\n    def get_corresponding_worker_type(self):\n  
      return \"kubernetes\"\n\n    async def generate_work_pool_base_job_template(self):\n        from prefect.workers.utilities import (\n            get_default_base_job_template_for_infrastructure_type,\n        )\n\n        base_job_template = await get_default_base_job_template_for_infrastructure_type(\n            self.get_corresponding_worker_type()\n        )\n        assert (\n            base_job_template is not None\n        ), \"Failed to retrieve default base job template.\"\n        for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n            if key == \"command\":\n                base_job_template[\"variables\"][\"properties\"][\"command\"][\n                    \"default\"\n                ] = shlex.join(value)\n            elif key in [\n                \"type\",\n                \"block_type_slug\",\n                \"_block_document_id\",\n                \"_block_document_name\",\n                \"_is_anonymous\",\n                \"job\",\n                \"customizations\",\n            ]:\n                continue\n            elif key == \"image_pull_policy\":\n                base_job_template[\"variables\"][\"properties\"][\"image_pull_policy\"][\n                    \"default\"\n                ] = value.value\n            elif key == \"cluster_config\":\n                base_job_template[\"variables\"][\"properties\"][\"cluster_config\"][\n                    \"default\"\n                ] = {\n                    \"$ref\": {\n                        \"block_document_id\": str(self.cluster_config._block_document_id)\n                    }\n                }\n            elif key in base_job_template[\"variables\"][\"properties\"]:\n                base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n            else:\n                self.logger.warning(\n                    f\"Variable {key!r} is not supported by Kubernetes work pools.\"\n                    \" Skipping.\"\n                )\n\n        custom_job_manifest = self.dict(exclude_unset=True, exclude_defaults=True).get(\n            \"job\"\n        )\n        if custom_job_manifest:\n            job_manifest = self.build_job()\n        else:\n            job_manifest = copy.deepcopy(\n                base_job_template[\"job_configuration\"][\"job_manifest\"]\n            )\n            job_manifest = self.customizations.apply(job_manifest)\n        base_job_template[\"job_configuration\"][\"job_manifest\"] = job_manifest\n\n        return base_job_template\n\n    def build_job(self) -> KubernetesManifest:\n        \"\"\"Builds the Kubernetes Job Manifest\"\"\"\n        job_manifest = copy.copy(self.job)\n        job_manifest = self._shortcut_customizations().apply(job_manifest)\n        job_manifest = self.customizations.apply(job_manifest)\n        return job_manifest\n\n    @contextmanager\n    def get_batch_client(self) -> Generator[\"BatchV1Api\", None, None]:\n        with kubernetes.client.ApiClient() as client:\n            try:\n                yield kubernetes.client.BatchV1Api(api_client=client)\n            finally:\n                client.rest_client.pool_manager.clear()\n\n    @contextmanager\n    def get_client(self) -> Generator[\"CoreV1Api\", None, None]:\n        with kubernetes.client.ApiClient() as client:\n            try:\n                yield kubernetes.client.CoreV1Api(api_client=client)\n            finally:\n                client.rest_client.pool_manager.clear()\n\n    def _get_infrastructure_pid(self, job: 
\"V1Job\") -> str:\n        \"\"\"\n        Generates a Kubernetes infrastructure PID.\n\n        The PID is in the format: \"<cluster uid>:<namespace>:<job name>\".\n        \"\"\"\n        cluster_uid = self._get_cluster_uid()\n        pid = f\"{cluster_uid}:{self.namespace}:{job.metadata.name}\"\n        return pid\n\n    def _parse_infrastructure_pid(\n        self, infrastructure_pid: str\n    ) -> Tuple[str, str, str]:\n        \"\"\"\n        Parse a Kubernetes infrastructure PID into its component parts.\n\n        Returns a cluster UID, namespace, and job name.\n        \"\"\"\n        cluster_uid, namespace, job_name = infrastructure_pid.split(\":\", 2)\n        return cluster_uid, namespace, job_name\n\n    def _get_cluster_uid(self) -> str:\n        \"\"\"\n        Gets a unique id for the current cluster being used.\n\n        There is no real unique identifier for a cluster. However, the `kube-system`\n        namespace is immutable and has a persistence UID that we use instead.\n\n        PREFECT_KUBERNETES_CLUSTER_UID can be set in cases where the `kube-system`\n        namespace cannot be read e.g. when a cluster role cannot be created. If set,\n        this variable will be used and we will not attempt to read the `kube-system`\n        namespace.\n\n        See https://github.com/kubernetes/kubernetes/issues/44954\n        \"\"\"\n        # Default to an environment variable\n        env_cluster_uid = os.environ.get(\"PREFECT_KUBERNETES_CLUSTER_UID\")\n        if env_cluster_uid:\n            return env_cluster_uid\n\n        # Read the UID from the cluster namespace\n        with self.get_client() as client:\n            namespace = client.read_namespace(\"kube-system\")\n        cluster_uid = namespace.metadata.uid\n\n        return cluster_uid\n\n    def _configure_kubernetes_library_client(self) -> None:\n        \"\"\"\n        Set the correct kubernetes client configuration.\n\n        WARNING: This action is not threadsafe and may override the configuration\n                  specified by another `KubernetesJob` instance.\n        \"\"\"\n        # TODO: Investigate returning a configured client so calls on other threads\n        #       will not invalidate the config needed here\n\n        # if a k8s cluster block is provided to the flow runner, use that\n        if self.cluster_config:\n            self.cluster_config.configure_client()\n        else:\n            # If no block specified, try to load Kubernetes configuration within a cluster. 
If that doesn't\n            # work, try to load the configuration from the local environment, allowing\n            # any further ConfigExceptions to bubble up.\n            try:\n                kubernetes.config.load_incluster_config()\n            except kubernetes.config.ConfigException:\n                kubernetes.config.load_kube_config()\n\n    def _shortcut_customizations(self) -> JsonPatch:\n        \"\"\"Produces the JSON 6902 patch for the most commonly used customizations, like\n        image and namespace, which we offer as top-level parameters (with sensible\n        default values)\"\"\"\n        shortcuts = []\n\n        if self.namespace:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/metadata/namespace\",\n                    \"value\": self.namespace,\n                }\n            )\n\n        if self.image:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/spec/template/spec/containers/0/image\",\n                    \"value\": self.image,\n                }\n            )\n\n        shortcuts += [\n            {\n                \"op\": \"add\",\n                \"path\": (\n                    f\"/metadata/labels/{self._slugify_label_key(key).replace('/', '~1', 1)}\"\n                ),\n                \"value\": self._slugify_label_value(value),\n            }\n            for key, value in self.labels.items()\n        ]\n\n        shortcuts += [\n            {\n                \"op\": \"add\",\n                \"path\": \"/spec/template/spec/containers/0/env/-\",\n                \"value\": {\"name\": key, \"value\": value},\n            }\n            for key, value in self._get_environment_variables().items()\n        ]\n\n        if self.image_pull_policy:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/spec/template/spec/containers/0/imagePullPolicy\",\n                    \"value\": self.image_pull_policy.value,\n                }\n            )\n\n        if self.service_account_name:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/spec/template/spec/serviceAccountName\",\n                    \"value\": self.service_account_name,\n                }\n            )\n\n        if self.finished_job_ttl is not None:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/spec/ttlSecondsAfterFinished\",\n                    \"value\": self.finished_job_ttl,\n                }\n            )\n\n        if self.command:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/spec/template/spec/containers/0/args\",\n                    \"value\": self.command,\n                }\n            )\n\n        if self.name:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/metadata/generateName\",\n                    \"value\": self._slugify_name(self.name) + \"-\",\n                }\n            )\n        else:\n            # Generate name is required\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/metadata/generateName\",\n                    \"value\": (\n                        \"prefect-job-\"\n       
                 # We generate a name using a hash of the primary job settings\n                        + stable_hash(\n                            *self.command,\n                            *self.env.keys(),\n                            *[v for v in self.env.values() if v is not None],\n                        )\n                        + \"-\"\n                    ),\n                }\n            )\n\n        return JsonPatch(shortcuts)\n\n    def _get_job(self, job_id: str) -> Optional[\"V1Job\"]:\n        with self.get_batch_client() as batch_client:\n            try:\n                job = batch_client.read_namespaced_job(job_id, self.namespace)\n            except kubernetes.client.exceptions.ApiException:\n                self.logger.error(f\"Job {job_id!r} was removed.\", exc_info=True)\n                return None\n            return job\n\n    def _get_job_pod(self, job_name: str) -> \"V1Pod\":\n        \"\"\"Get the first running pod for a job.\"\"\"\n\n        # Wait until we find a running pod for the job\n        # if `pod_watch_timeout_seconds` is None, no timeout will be enforced\n        watch = kubernetes.watch.Watch()\n        self.logger.debug(f\"Job {job_name!r}: Starting watch for pod start...\")\n        last_phase = None\n        with self.get_client() as client:\n            for event in watch.stream(\n                func=client.list_namespaced_pod,\n                namespace=self.namespace,\n                label_selector=f\"job-name={job_name}\",\n                timeout_seconds=self.pod_watch_timeout_seconds,\n            ):\n                phase = event[\"object\"].status.phase\n                if phase != last_phase:\n                    self.logger.info(f\"Job {job_name!r}: Pod has status {phase!r}.\")\n\n                if phase != \"Pending\":\n                    watch.stop()\n                    return event[\"object\"]\n\n                last_phase = phase\n\n        self.logger.error(f\"Job {job_name!r}: Pod never started.\")\n\n    def _watch_job(self, job_name: str) -> int:\n        \"\"\"\n        Watch a job.\n\n        Return the final status code of the first container.\n        \"\"\"\n        self.logger.debug(f\"Job {job_name!r}: Monitoring job...\")\n\n        job = self._get_job(job_name)\n        if not job:\n            return -1\n\n        pod = self._get_job_pod(job_name)\n        if not pod:\n            return -1\n\n        # Calculate the deadline before streaming output\n        deadline = (\n            (time.monotonic() + self.job_watch_timeout_seconds)\n            if self.job_watch_timeout_seconds is not None\n            else None\n        )\n\n        if self.stream_output:\n            with self.get_client() as client:\n                logs = client.read_namespaced_pod_log(\n                    pod.metadata.name,\n                    self.namespace,\n                    follow=True,\n                    _preload_content=False,\n                    container=\"prefect-job\",\n                )\n                try:\n                    for log in logs.stream():\n                        print(log.decode().rstrip())\n\n                        # Check if we have passed the deadline and should stop streaming\n                        # logs\n                        remaining_time = (\n                            deadline - time.monotonic() if deadline else None\n                        )\n                        if deadline and remaining_time <= 0:\n                            break\n\n                except Exception:\n         
           self.logger.warning(\n                        (\n                            \"Error occurred while streaming logs - \"\n                            \"Job will continue to run but logs will \"\n                            \"no longer be streamed to stdout.\"\n                        ),\n                        exc_info=True,\n                    )\n\n        with self.get_batch_client() as batch_client:\n            # Check if the job is completed before beginning a watch\n            job = batch_client.read_namespaced_job(\n                name=job_name, namespace=self.namespace\n            )\n            completed = job.status.completion_time is not None\n\n            while not completed:\n                remaining_time = (\n                    math.ceil(deadline - time.monotonic()) if deadline else None\n                )\n                if deadline and remaining_time <= 0:\n                    self.logger.error(\n                        f\"Job {job_name!r}: Job did not complete within \"\n                        f\"timeout of {self.job_watch_timeout_seconds}s.\"\n                    )\n                    return -1\n\n                watch = kubernetes.watch.Watch()\n                # The kubernetes library will disable retries if the timeout kwarg is\n                # present regardless of the value so we do not pass it unless given\n                # https://github.com/kubernetes-client/python/blob/84f5fea2a3e4b161917aa597bf5e5a1d95e24f5a/kubernetes/base/watch/watch.py#LL160\n                timeout_seconds = (\n                    {\"timeout_seconds\": remaining_time} if deadline else {}\n                )\n\n                for event in watch.stream(\n                    func=batch_client.list_namespaced_job,\n                    field_selector=f\"metadata.name={job_name}\",\n                    namespace=self.namespace,\n                    **timeout_seconds,\n                ):\n                    if event[\"type\"] == \"DELETED\":\n                        self.logger.error(f\"Job {job_name!r}: Job has been deleted.\")\n                        completed = True\n                    elif event[\"object\"].status.completion_time:\n                        if not event[\"object\"].status.succeeded:\n                            # Job failed, exit while loop and return pod exit code\n                            self.logger.error(f\"Job {job_name!r}: Job failed.\")\n                        completed = True\n                    # Check if the job has reached its backoff limit\n                    # and stop watching if it has\n                    elif (\n                        event[\"object\"].spec.backoff_limit is not None\n                        and event[\"object\"].status.failed is not None\n                        and event[\"object\"].status.failed\n                        > event[\"object\"].spec.backoff_limit\n                    ):\n                        self.logger.error(\n                            f\"Job {job_name!r}: Job reached backoff limit.\"\n                        )\n                        completed = True\n                    # If the job has no backoff limit, check if it has failed\n                    # and stop watching if it has\n                    elif (\n                        not event[\"object\"].spec.backoff_limit\n                        and event[\"object\"].status.failed\n                    ):\n                        completed = True\n\n                    if completed:\n                        watch.stop()\n                       
 break\n\n        with self.get_client() as core_client:\n            # Get all pods for the job\n            pods = core_client.list_namespaced_pod(\n                namespace=self.namespace, label_selector=f\"job-name={job_name}\"\n            )\n            # Get the status for only the most recently used pod\n            pods.items.sort(\n                key=lambda pod: pod.metadata.creation_timestamp, reverse=True\n            )\n            most_recent_pod = pods.items[0] if pods.items else None\n            first_container_status = (\n                most_recent_pod.status.container_statuses[0]\n                if most_recent_pod\n                else None\n            )\n            if not first_container_status:\n                self.logger.error(f\"Job {job_name!r}: No pods found for job.\")\n                return -1\n\n            # In some cases, such as spot instance evictions, the pod will be forcibly\n            # terminated and not report a status correctly.\n            elif (\n                first_container_status.state is None\n                or first_container_status.state.terminated is None\n                or first_container_status.state.terminated.exit_code is None\n            ):\n                self.logger.error(\n                    f\"Could not determine exit code for {job_name!r}.\"\n                    \"Exit code will be reported as -1.\"\n                    \"First container status info did not report an exit code.\"\n                    f\"First container info: {first_container_status}.\"\n                )\n                return -1\n\n        return first_container_status.state.terminated.exit_code\n\n    def _create_job(self, job_manifest: KubernetesManifest) -> \"V1Job\":\n        \"\"\"\n        Given a Kubernetes Job Manifest, create the Job on the configured Kubernetes\n        cluster and return its name.\n        \"\"\"\n        with self.get_batch_client() as batch_client:\n            job = batch_client.create_namespaced_job(self.namespace, job_manifest)\n        return job\n\n    def _slugify_name(self, name: str) -> str:\n        \"\"\"\n        Slugify text for use as a name.\n\n        Keeps only alphanumeric characters and dashes, and caps the length\n        of the slug at 45 chars.\n\n        The 45 character length allows room for the k8s utility\n        \"generateName\" to generate a unique name from the slug while\n        keeping the total length of a name below 63 characters, which is\n        the limit for e.g. 
label names that follow RFC 1123 (hostnames) and\n        RFC 1035 (domain names).\n\n        Args:\n            name: The name of the job\n\n        Returns:\n            the slugified job name\n        \"\"\"\n        slug = slugify(\n            name,\n            max_length=45,  # Leave enough space for generateName\n            regex_pattern=r\"[^a-zA-Z0-9-]+\",\n        )\n\n        # TODO: Handle the case that the name is an empty string after being\n        # slugified.\n\n        return slug\n\n    def _slugify_label_key(self, key: str) -> str:\n        \"\"\"\n        Slugify text for use as a label key.\n\n        Keys are composed of an optional prefix and name, separated by a slash (/).\n\n        Keeps only alphanumeric characters, dashes, underscores, and periods.\n        Limits the length of the label prefix to 253 characters.\n        Limits the length of the label name to 63 characters.\n\n        See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set\n\n        Args:\n            key: The label key\n\n        Returns:\n            The slugified label key\n        \"\"\"\n        if \"/\" in key:\n            prefix, name = key.split(\"/\", maxsplit=1)\n        else:\n            prefix = None\n            name = key\n\n        name_slug = (\n            slugify(name, max_length=63, regex_pattern=r\"[^a-zA-Z0-9-_.]+\").strip(\n                \"_-.\"  # Must start or end with alphanumeric characters\n            )\n            or name\n        )\n        # Fallback to the original if we end up with an empty slug, this will allow\n        # Kubernetes to throw the validation error\n\n        if prefix:\n            prefix_slug = (\n                slugify(\n                    prefix,\n                    max_length=253,\n                    regex_pattern=r\"[^a-zA-Z0-9-\\.]+\",\n                ).strip(\"_-.\")  # Must start or end with alphanumeric characters\n                or prefix\n            )\n\n            return f\"{prefix_slug}/{name_slug}\"\n\n        return name_slug\n\n    def _slugify_label_value(self, value: str) -> str:\n        \"\"\"\n        Slugify text for use as a label value.\n\n        Keeps only alphanumeric characters, dashes, underscores, and periods.\n        Limits the total length of label text to below 63 characters.\n\n        See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set\n\n        Args:\n            value: The text for the label\n\n        Returns:\n            The slugified value\n        \"\"\"\n        slug = (\n            slugify(value, max_length=63, regex_pattern=r\"[^a-zA-Z0-9-_\\.]+\").strip(\n                \"_-.\"  # Must start or end with alphanumeric characters\n            )\n            or value\n        )\n        # Fallback to the original if we end up with an empty slug, this will allow\n        # Kubernetes to throw the validation error\n\n        return slug\n\n    def _get_environment_variables(self):\n        # If the API URL has been set by the base environment rather than the by the\n        # user, update the value to ensure connectivity when using a bridge network by\n        # updating local connections to use the internal host\n        env = {**self._base_environment(), **self.env}\n\n        if (\n            \"PREFECT_API_URL\" in env\n            and \"PREFECT_API_URL\" not in self.env\n            and self._api_dns_name\n        ):\n            env[\"PREFECT_API_URL\"] = (\n                
env[\"PREFECT_API_URL\"]\n                .replace(\"localhost\", self._api_dns_name)\n                .replace(\"127.0.0.1\", self._api_dns_name)\n            )\n\n        # Drop null values allowing users to \"unset\" variables\n        return {key: value for key, value in env.items() if value is not None}\n
    ","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob.base_job_manifest","title":"base_job_manifest classmethod","text":"

    Produces the bare minimum allowed Job manifest

    Source code in prefect/infrastructure/kubernetes.py
    @classmethod\ndef base_job_manifest(cls) -> KubernetesManifest:\n    \"\"\"Produces the bare minimum allowed Job manifest\"\"\"\n    return {\n        \"apiVersion\": \"batch/v1\",\n        \"kind\": \"Job\",\n        \"metadata\": {\"labels\": {}},\n        \"spec\": {\n            \"template\": {\n                \"spec\": {\n                    \"parallelism\": 1,\n                    \"completions\": 1,\n                    \"restartPolicy\": \"Never\",\n                    \"containers\": [\n                        {\n                            \"name\": \"prefect-job\",\n                            \"env\": [],\n                        }\n                    ],\n                }\n            }\n        },\n    }\n
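
    For reference, the returned manifest can be inspected or used as a starting point for a custom job:

    from prefect.infrastructure import KubernetesJob\n\nmanifest = KubernetesJob.base_job_manifest()\nassert manifest[\"kind\"] == \"Job\"\nassert manifest[\"spec\"][\"template\"][\"spec\"][\"containers\"][0][\"name\"] == \"prefect-job\"\n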
    ","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob.build_job","title":"build_job","text":"

    Builds the Kubernetes Job Manifest

    Source code in prefect/infrastructure/kubernetes.py
    def build_job(self) -> KubernetesManifest:\n    \"\"\"Builds the Kubernetes Job Manifest\"\"\"\n    job_manifest = copy.copy(self.job)\n    job_manifest = self._shortcut_customizations().apply(job_manifest)\n    job_manifest = self.customizations.apply(job_manifest)\n    return job_manifest\n
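
    A small sketch showing that shortcut fields such as namespace and image are applied to the base manifest as JSON patches (the values are illustrative):

    from prefect.infrastructure import KubernetesJob\n\nk8s_job = KubernetesJob(namespace=\"prefect\", image=\"prefecthq/prefect:2-latest\")\nmanifest = k8s_job.build_job()\nassert manifest[\"metadata\"][\"namespace\"] == \"prefect\"\nassert manifest[\"spec\"][\"template\"][\"spec\"][\"containers\"][0][\"image\"] == \"prefecthq/prefect:2-latest\"\n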
    ","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob.customize_from_file","title":"customize_from_file classmethod","text":"

    Load an RFC 6902 JSON patch from a YAML or JSON file.

    Source code in prefect/infrastructure/kubernetes.py
    @classmethod\ndef customize_from_file(cls, filename: str) -> JsonPatch:\n    \"\"\"Load an RFC 6902 JSON patch from a YAML or JSON file.\"\"\"\n    with open(filename, \"r\", encoding=\"utf-8\") as f:\n        return JsonPatch(yaml.load(f, yaml.SafeLoader))\n
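
    For example, given a hypothetical customizations.yaml containing an RFC 6902 patch list, the patch can be loaded and attached to a block:

    from prefect.infrastructure import KubernetesJob\n\n# customizations.yaml (hypothetical) might contain:\n#   - op: add\n#     path: /spec/template/spec/containers/0/resources\n#     value:\n#       requests:\n#         memory: 512Mi\npatch = KubernetesJob.customize_from_file(\"customizations.yaml\")\nk8s_job = KubernetesJob(customizations=patch)\n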
    ","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob.job_from_file","title":"job_from_file classmethod","text":"

    Load a Kubernetes Job manifest from a YAML or JSON file.

    Source code in prefect/infrastructure/kubernetes.py
    @classmethod\ndef job_from_file(cls, filename: str) -> KubernetesManifest:\n    \"\"\"Load a Kubernetes Job manifest from a YAML or JSON file.\"\"\"\n    with open(filename, \"r\", encoding=\"utf-8\") as f:\n        return yaml.load(f, yaml.SafeLoader)\n
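
    For example (job.yaml is a hypothetical manifest file; JSON files are accepted as well):

    from prefect.infrastructure import KubernetesJob\n\nbase_manifest = KubernetesJob.job_from_file(\"job.yaml\")\nk8s_job = KubernetesJob(job=base_manifest)\n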
    ","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJobResult","title":"KubernetesJobResult","text":"

    Bases: InfrastructureResult

    Contains information about the final state of a completed Kubernetes Job

    Source code in prefect/infrastructure/kubernetes.py
    class KubernetesJobResult(InfrastructureResult):\n    \"\"\"Contains information about the final state of a completed Kubernetes Job\"\"\"\n
    ","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Process","title":"Process","text":"

    Bases: Infrastructure

    Run a command in a new process.

    Current environment variables and Prefect settings will be included in the created process. Configured environment variables will override any current environment variables.

    Attributes:

    Name Type Description command

    A list of strings specifying the command used to start the flow run. In most cases you should not override this.

    env

    Environment variables to set for the new process.

    labels

    Labels for the process. Labels are for metadata purposes only and cannot be attached to the process itself.

    name

    A name for the process. For display purposes only.

    stream_output bool

    Whether to stream output to local stdout.

    working_dir Union[str, Path, None]

    Working directory where the process should be opened. If not set, a temporary directory will be used.
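
    As an illustrative sketch (the command and environment variable are assumptions):

    from prefect.infrastructure import Process\n\nprocess = Process(\n    command=[\"echo\", \"hello\"],\n    env={\"MY_VAR\": \"value\"},\n    stream_output=True,\n)\nresult = process.run()  # blocks until the process exits\nprint(result.status_code, result.identifier)\n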

    Source code in prefect/infrastructure/process.py
    @deprecated_class(\n    start_date=\"Mar 2024\",\n    help=\"Use the process worker instead.\"\n    \" Refer to the upgrade guide for more information:\"\n    \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\",\n)\nclass Process(Infrastructure):\n    \"\"\"\n    Run a command in a new process.\n\n    Current environment variables and Prefect settings will be included in the created\n    process. Configured environment variables will override any current environment\n    variables.\n\n    Attributes:\n        command: A list of strings specifying the command to run in the container to\n            start the flow run. In most cases you should not override this.\n        env: Environment variables to set for the new process.\n        labels: Labels for the process. Labels are for metadata purposes only and\n            cannot be attached to the process itself.\n        name: A name for the process. For display purposes only.\n        stream_output: Whether to stream output to local stdout.\n        working_dir: Working directory where the process should be opened. If not set,\n            a tmp directory will be used.\n    \"\"\"\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/356e6766a91baf20e1d08bbe16e8b5aaef4d8643-48x48.png\"\n    _documentation_url = \"https://docs.prefect.io/concepts/infrastructure/#process\"\n\n    type: Literal[\"process\"] = Field(\n        default=\"process\", description=\"The type of infrastructure.\"\n    )\n    stream_output: bool = Field(\n        default=True,\n        description=(\n            \"If set, output will be streamed from the process to local standard output.\"\n        ),\n    )\n    working_dir: Union[str, Path, None] = Field(\n        default=None,\n        description=(\n            \"If set, the process will open within the specified path as the working\"\n            \" directory. 
Otherwise, a temporary directory will be created.\"\n        ),\n    )  # Underlying accepted types are str, bytes, PathLike[str], None\n\n    @sync_compatible\n    async def run(\n        self,\n        task_status: anyio.abc.TaskStatus = None,\n    ) -> \"ProcessResult\":\n        if not self.command:\n            raise ValueError(\"Process cannot be run with empty command.\")\n\n        _use_threaded_child_watcher()\n        display_name = f\" {self.name!r}\" if self.name else \"\"\n\n        # Open a subprocess to execute the flow run\n        self.logger.info(f\"Opening process{display_name}...\")\n        working_dir_ctx = (\n            tempfile.TemporaryDirectory(suffix=\"prefect\")\n            if not self.working_dir\n            else contextlib.nullcontext(self.working_dir)\n        )\n        with working_dir_ctx as working_dir:\n            self.logger.debug(\n                f\"Process{display_name} running command: {' '.join(self.command)} in\"\n                f\" {working_dir}\"\n            )\n\n            # We must add creationflags to a dict so it is only passed as a function\n            # parameter on Windows, because the presence of creationflags causes\n            # errors on Unix even if set to None\n            kwargs: Dict[str, object] = {}\n            if sys.platform == \"win32\":\n                kwargs[\"creationflags\"] = subprocess.CREATE_NEW_PROCESS_GROUP\n\n            process = await run_process(\n                self.command,\n                stream_output=self.stream_output,\n                task_status=task_status,\n                task_status_handler=_infrastructure_pid_from_process,\n                env=self._get_environment_variables(),\n                cwd=working_dir,\n                **kwargs,\n            )\n\n        # Use the pid for display if no name was given\n        display_name = display_name or f\" {process.pid}\"\n\n        if process.returncode:\n            help_message = None\n            if process.returncode == -9:\n                help_message = (\n                    \"This indicates that the process exited due to a SIGKILL signal. \"\n                    \"Typically, this is either caused by manual cancellation or \"\n                    \"high memory usage causing the operating system to \"\n                    \"terminate the process.\"\n                )\n            if process.returncode == -15:\n                help_message = (\n                    \"This indicates that the process exited due to a SIGTERM signal. \"\n                    \"Typically, this is caused by manual cancellation.\"\n                )\n            elif process.returncode == 247:\n                help_message = (\n                    \"This indicates that the process was terminated due to high \"\n                    \"memory usage.\"\n                )\n            elif (\n                sys.platform == \"win32\" and process.returncode == STATUS_CONTROL_C_EXIT\n            ):\n                help_message = (\n                    \"Process was terminated due to a Ctrl+C or Ctrl+Break signal. 
\"\n                    \"Typically, this is caused by manual cancellation.\"\n                )\n\n            self.logger.error(\n                f\"Process{display_name} exited with status code: {process.returncode}\"\n                + (f\"; {help_message}\" if help_message else \"\")\n            )\n        else:\n            self.logger.info(f\"Process{display_name} exited cleanly.\")\n\n        return ProcessResult(\n            status_code=process.returncode, identifier=str(process.pid)\n        )\n\n    async def kill(self, infrastructure_pid: str, grace_seconds: int = 30):\n        hostname, pid = _parse_infrastructure_pid(infrastructure_pid)\n\n        if hostname != socket.gethostname():\n            raise InfrastructureNotAvailable(\n                f\"Unable to kill process {pid!r}: The process is running on a different\"\n                f\" host {hostname!r}.\"\n            )\n\n        # In a non-windows environment first send a SIGTERM, then, after\n        # `grace_seconds` seconds have passed subsequent send SIGKILL. In\n        # Windows we use CTRL_BREAK_EVENT as SIGTERM is useless:\n        # https://bugs.python.org/issue26350\n        if sys.platform == \"win32\":\n            try:\n                os.kill(pid, signal.CTRL_BREAK_EVENT)\n            except (ProcessLookupError, WindowsError):\n                raise InfrastructureNotFound(\n                    f\"Unable to kill process {pid!r}: The process was not found.\"\n                )\n        else:\n            try:\n                os.kill(pid, signal.SIGTERM)\n            except ProcessLookupError:\n                raise InfrastructureNotFound(\n                    f\"Unable to kill process {pid!r}: The process was not found.\"\n                )\n\n            # Throttle how often we check if the process is still alive to keep\n            # from making too many system calls in a short period of time.\n            check_interval = max(grace_seconds / 10, 1)\n\n            with anyio.move_on_after(grace_seconds):\n                while True:\n                    await anyio.sleep(check_interval)\n\n                    # Detect if the process is still alive. 
If not do an early\n                    # return as the process respected the SIGTERM from above.\n                    try:\n                        os.kill(pid, 0)\n                    except ProcessLookupError:\n                        return\n\n            try:\n                os.kill(pid, signal.SIGKILL)\n            except OSError:\n                # We shouldn't ever end up here, but it's possible that the\n                # process ended right after the check above.\n                return\n\n    def preview(self):\n        environment = self._get_environment_variables(include_os_environ=False)\n        return \" \\\\\\n\".join(\n            [f\"{key}={value}\" for key, value in environment.items()]\n            + [\" \".join(self.command)]\n        )\n\n    def _get_environment_variables(self, include_os_environ: bool = True):\n        os_environ = os.environ if include_os_environ else {}\n        # The base environment must override the current environment or\n        # the Prefect settings context may not be respected\n        env = {**os_environ, **self._base_environment(), **self.env}\n\n        # Drop null values allowing users to \"unset\" variables\n        return {key: value for key, value in env.items() if value is not None}\n\n    def _base_flow_run_command(self):\n        return [get_sys_executable(), \"-m\", \"prefect.engine\"]\n\n    def get_corresponding_worker_type(self):\n        return \"process\"\n\n    async def generate_work_pool_base_job_template(self):\n        from prefect.workers.utilities import (\n            get_default_base_job_template_for_infrastructure_type,\n        )\n\n        base_job_template = await get_default_base_job_template_for_infrastructure_type(\n            self.get_corresponding_worker_type(),\n        )\n        assert (\n            base_job_template is not None\n        ), \"Failed to generate default base job template for Process worker.\"\n        for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n            if key == \"command\":\n                base_job_template[\"variables\"][\"properties\"][\"command\"][\n                    \"default\"\n                ] = shlex.join(value)\n            elif key in [\n                \"type\",\n                \"block_type_slug\",\n                \"_block_document_id\",\n                \"_block_document_name\",\n                \"_is_anonymous\",\n            ]:\n                continue\n            elif key in base_job_template[\"variables\"][\"properties\"]:\n                base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n            else:\n                self.logger.warning(\n                    f\"Variable {key!r} is not supported by Process work pools.\"\n                    \" Skipping.\"\n                )\n\n        return base_job_template\n
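    A minimal usage sketch (assuming the block-based API shown above; Process is deprecated in favor of the process worker, so prefer workers for new code):

    from prefect.infrastructure import Process

    # Configured env vars are merged over the current environment and Prefect settings
    process = Process(
        command=["python", "-c", "print('hello from a subprocess')"],
        env={"MY_FLAG": "1"},
        stream_output=True,
    )
    result = process.run()  # returns a ProcessResult with status_code and identifier (the pid)
    print(result.status_code)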
    ","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.ProcessResult","title":"ProcessResult","text":"

    Bases: InfrastructureResult

    Contains information about the final state of a completed process

    Source code in prefect/infrastructure/process.py
    class ProcessResult(InfrastructureResult):\n    \"\"\"Contains information about the final state of a completed process\"\"\"\n
    ","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/manifests/","title":"prefect.manifests","text":"","tags":["Python API","deployments"]},{"location":"api-ref/prefect/manifests/#prefect.manifests","title":"prefect.manifests","text":"

    Manifests are portable descriptions of one or more workflows within a given directory structure.

    They are the foundational building blocks for defining Flow Deployments.

    ","tags":["Python API","deployments"]},{"location":"api-ref/prefect/manifests/#prefect.manifests.Manifest","title":"Manifest","text":"

    Bases: BaseModel

    A JSON representation of a flow.

    Source code in prefect/manifests.py
    class Manifest(BaseModel):\n    \"\"\"A JSON representation of a flow.\"\"\"\n\n    flow_name: str = Field(default=..., description=\"The name of the flow.\")\n    import_path: str = Field(\n        default=..., description=\"The relative import path for the flow.\"\n    )\n    parameter_openapi_schema: ParameterSchema = Field(\n        default=..., description=\"The OpenAPI schema of the flow's parameters.\"\n    )\n
    ","tags":["Python API","deployments"]},{"location":"api-ref/prefect/serializers/","title":"prefect.serializers","text":"","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers","title":"prefect.serializers","text":"

    Serializer implementations for converting objects to bytes and bytes to objects.

    All serializers are based on the Serializer class and include a type string that allows them to be referenced without referencing the actual class. For example, you can often specify the JSONSerializer with the string \"json\". Some serializers support additional settings for configuration of serialization. These are stored on the instance so the same settings can be used to load saved objects.

    All serializers must implement dumps and loads which convert objects to bytes and bytes to an object respectively.
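    A minimal sketch of this contract, using the JSON serializer and its type string:

    from prefect.serializers import JSONSerializer, Serializer

    s = JSONSerializer()
    blob = s.dumps({"answer": 42})          # bytes
    assert s.loads(blob) == {"answer": 42}

    # Serializers can also be constructed from their type string
    assert isinstance(Serializer(type="json"), JSONSerializer)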

    ","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.CompressedJSONSerializer","title":"CompressedJSONSerializer","text":"

    Bases: CompressedSerializer

    A compressed serializer preconfigured to use the json serializer.

    Source code in prefect/serializers.py
    class CompressedJSONSerializer(CompressedSerializer):\n    \"\"\"\n    A compressed serializer preconfigured to use the json serializer.\n    \"\"\"\n\n    type: Literal[\"compressed/json\"] = \"compressed/json\"\n\n    serializer: Serializer = Field(default_factory=JSONSerializer)\n
    ","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.CompressedPickleSerializer","title":"CompressedPickleSerializer","text":"

    Bases: CompressedSerializer

    A compressed serializer preconfigured to use the pickle serializer.

    Source code in prefect/serializers.py
    class CompressedPickleSerializer(CompressedSerializer):\n    \"\"\"\n    A compressed serializer preconfigured to use the pickle serializer.\n    \"\"\"\n\n    type: Literal[\"compressed/pickle\"] = \"compressed/pickle\"\n\n    serializer: Serializer = Field(default_factory=PickleSerializer)\n
    ","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.CompressedSerializer","title":"CompressedSerializer","text":"

    Bases: Serializer

    Wraps another serializer, compressing its output. Uses lzma by default. See compressionlib for using alternative libraries.

    Attributes:

    Name Type Description serializer Serializer

    The serializer to use before compression.

    compressionlib str

    The import path of a compression module to use. Must have methods compress(bytes) -> bytes and decompress(bytes) -> bytes.

    level str

    If not null, the level of compression to pass to compress.

    Source code in prefect/serializers.py
    class CompressedSerializer(Serializer):\n    \"\"\"\n    Wraps another serializer, compressing its output.\n    Uses `lzma` by default. See `compressionlib` for using alternative libraries.\n\n    Attributes:\n        serializer: The serializer to use before compression.\n        compressionlib: The import path of a compression module to use.\n            Must have methods `compress(bytes) -> bytes` and `decompress(bytes) -> bytes`.\n        level: If not null, the level of compression to pass to `compress`.\n    \"\"\"\n\n    type: Literal[\"compressed\"] = \"compressed\"\n\n    serializer: Serializer\n    compressionlib: str = \"lzma\"\n\n    @validator(\"serializer\", pre=True)\n    def validate_serializer(cls, value):\n        return cast_type_names_to_serializers(value)\n\n    @validator(\"compressionlib\")\n    def check_compressionlib(cls, value):\n        return validate_compressionlib(value)\n\n    def dumps(self, obj: Any) -> bytes:\n        blob = self.serializer.dumps(obj)\n        compressor = from_qualified_name(self.compressionlib)\n        return base64.encodebytes(compressor.compress(blob))\n\n    def loads(self, blob: bytes) -> Any:\n        compressor = from_qualified_name(self.compressionlib)\n        uncompressed = compressor.decompress(base64.decodebytes(blob))\n        return self.serializer.loads(uncompressed)\n
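    For example (a minimal sketch; zlib is used here because it exposes compress(bytes) and decompress(bytes)):

    from prefect.serializers import CompressedSerializer, JSONSerializer

    s = CompressedSerializer(serializer=JSONSerializer(), compressionlib="zlib")
    blob = s.dumps({"a": [1, 2, 3]})        # base64-encoded, zlib-compressed JSON
    assert s.loads(blob) == {"a": [1, 2, 3]}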
    ","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.JSONSerializer","title":"JSONSerializer","text":"

    Bases: Serializer

    Serializes data to JSON.

    Input types must be compatible with the stdlib json library.

    Wraps the json library to serialize to UTF-8 bytes instead of string types.

    Source code in prefect/serializers.py
    class JSONSerializer(Serializer):\n    \"\"\"\n    Serializes data to JSON.\n\n    Input types must be compatible with the stdlib json library.\n\n    Wraps the `json` library to serialize to UTF-8 bytes instead of string types.\n    \"\"\"\n\n    type: Literal[\"json\"] = \"json\"\n\n    jsonlib: str = \"json\"\n    object_encoder: Optional[str] = Field(\n        default=\"prefect.serializers.prefect_json_object_encoder\",\n        description=(\n            \"An optional callable to use when serializing objects that are not \"\n            \"supported by the JSON encoder. By default, this is set to a callable that \"\n            \"adds support for all types supported by \"\n        ),\n    )\n    object_decoder: Optional[str] = Field(\n        default=\"prefect.serializers.prefect_json_object_decoder\",\n        description=(\n            \"An optional callable to use when deserializing objects. This callable \"\n            \"is passed each dictionary encountered during JSON deserialization. \"\n            \"By default, this is set to a callable that deserializes content created \"\n            \"by our default `object_encoder`.\"\n        ),\n    )\n    dumps_kwargs: Dict[str, Any] = Field(default_factory=dict)\n    loads_kwargs: Dict[str, Any] = Field(default_factory=dict)\n\n    @validator(\"dumps_kwargs\")\n    def dumps_kwargs_cannot_contain_default(cls, value):\n        return validate_dump_kwargs(value)\n\n    @validator(\"loads_kwargs\")\n    def loads_kwargs_cannot_contain_object_hook(cls, value):\n        return validate_load_kwargs(value)\n\n    def dumps(self, data: Any) -> bytes:\n        json = from_qualified_name(self.jsonlib)\n        kwargs = self.dumps_kwargs.copy()\n        if self.object_encoder:\n            kwargs[\"default\"] = from_qualified_name(self.object_encoder)\n        result = json.dumps(data, **kwargs)\n        if isinstance(result, str):\n            # The standard library returns str but others may return bytes directly\n            result = result.encode()\n        return result\n\n    def loads(self, blob: bytes) -> Any:\n        json = from_qualified_name(self.jsonlib)\n        kwargs = self.loads_kwargs.copy()\n        if self.object_decoder:\n            kwargs[\"object_hook\"] = from_qualified_name(self.object_decoder)\n        return json.loads(blob.decode(), **kwargs)\n
    ","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.PickleSerializer","title":"PickleSerializer","text":"

    Bases: Serializer

    Serializes objects using the pickle protocol.

    • Uses cloudpickle by default. See picklelib for using alternative libraries.
    • Stores the version of the pickle library to check for compatibility during deserialization.
    • Wraps pickles in base64 for safe transmission.
    Source code in prefect/serializers.py
    class PickleSerializer(Serializer):\n    \"\"\"\n    Serializes objects using the pickle protocol.\n\n    - Uses `cloudpickle` by default. See `picklelib` for using alternative libraries.\n    - Stores the version of the pickle library to check for compatibility during\n        deserialization.\n    - Wraps pickles in base64 for safe transmission.\n    \"\"\"\n\n    type: Literal[\"pickle\"] = \"pickle\"\n\n    picklelib: str = \"cloudpickle\"\n    picklelib_version: str = None\n\n    @validator(\"picklelib\")\n    def check_picklelib(cls, value):\n        return validate_picklelib(value)\n\n    @root_validator\n    def check_picklelib_version(cls, values):\n        return validate_picklelib_version(values)\n\n    def dumps(self, obj: Any) -> bytes:\n        pickler = from_qualified_name(self.picklelib)\n        blob = pickler.dumps(obj)\n        return base64.encodebytes(blob)\n\n    def loads(self, blob: bytes) -> Any:\n        pickler = from_qualified_name(self.picklelib)\n        return pickler.loads(base64.decodebytes(blob))\n
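    A short round-trip sketch using the defaults described above:

    from prefect.serializers import PickleSerializer

    s = PickleSerializer()                  # cloudpickle by default; set picklelib to swap libraries
    blob = s.dumps([1, 2, 3])               # base64-wrapped pickle bytes
    assert s.loads(blob) == [1, 2, 3]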
    ","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.Serializer","title":"Serializer","text":"

    Bases: BaseModel, Generic[D], ABC

    A serializer that can encode objects of type 'D' into bytes.

    Source code in prefect/serializers.py
    @register_base_type\nclass Serializer(BaseModel, Generic[D], abc.ABC):\n    \"\"\"\n    A serializer that can encode objects of type 'D' into bytes.\n    \"\"\"\n\n    def __init__(self, **data: Any) -> None:\n        type_string = get_dispatch_key(self) if type(self) != Serializer else \"__base__\"\n        data.setdefault(\"type\", type_string)\n        super().__init__(**data)\n\n    def __new__(cls: Type[Self], **kwargs) -> Self:\n        if \"type\" in kwargs:\n            try:\n                subcls = lookup_type(cls, dispatch_key=kwargs[\"type\"])\n            except KeyError as exc:\n                raise ValidationError(errors=[exc], model=cls)\n\n            return super().__new__(subcls)\n        else:\n            return super().__new__(cls)\n\n    type: str\n\n    @abc.abstractmethod\n    def dumps(self, obj: D) -> bytes:\n        \"\"\"Encode the object into a blob of bytes.\"\"\"\n\n    @abc.abstractmethod\n    def loads(self, blob: bytes) -> D:\n        \"\"\"Decode the blob of bytes into an object.\"\"\"\n\n    class Config:\n        extra = \"forbid\"\n\n    @classmethod\n    def __dispatch_key__(cls):\n        return cls.__fields__.get(\"type\").get_default()\n
    ","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.Serializer.dumps","title":"dumps abstractmethod","text":"

    Encode the object into a blob of bytes.

    Source code in prefect/serializers.py
    @abc.abstractmethod\ndef dumps(self, obj: D) -> bytes:\n    \"\"\"Encode the object into a blob of bytes.\"\"\"\n
    ","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.Serializer.loads","title":"loads abstractmethod","text":"

    Decode the blob of bytes into an object.

    Source code in prefect/serializers.py
    @abc.abstractmethod\ndef loads(self, blob: bytes) -> D:\n    \"\"\"Decode the blob of bytes into an object.\"\"\"\n
    ","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.prefect_json_object_decoder","title":"prefect_json_object_decoder","text":"

    JSONDecoder.object_hook for decoding objects from JSON when previously encoded with prefect_json_object_encoder

    Source code in prefect/serializers.py
    def prefect_json_object_decoder(result: dict):\n    \"\"\"\n    `JSONDecoder.object_hook` for decoding objects from JSON when previously encoded\n    with `prefect_json_object_encoder`\n    \"\"\"\n    if \"__class__\" in result:\n        return parse_obj_as(from_qualified_name(result[\"__class__\"]), result[\"data\"])\n    elif \"__exc_type__\" in result:\n        return from_qualified_name(result[\"__exc_type__\"])(result[\"message\"])\n    else:\n        return result\n
    ","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.prefect_json_object_encoder","title":"prefect_json_object_encoder","text":"

    JSONEncoder.default for encoding objects into JSON with extended type support.

    Raises a TypeError to fallback on other encoders on failure.

    Source code in prefect/serializers.py
    def prefect_json_object_encoder(obj: Any) -> Any:\n    \"\"\"\n    `JSONEncoder.default` for encoding objects into JSON with extended type support.\n\n    Raises a `TypeError` to fallback on other encoders on failure.\n    \"\"\"\n    if isinstance(obj, BaseException):\n        return {\"__exc_type__\": to_qualified_name(obj.__class__), \"message\": str(obj)}\n    else:\n        return {\n            \"__class__\": to_qualified_name(obj.__class__),\n            \"data\": custom_pydantic_encoder({}, obj),\n        }\n
    ","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/settings/","title":"prefect.settings","text":"","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings","title":"prefect.settings","text":"

    Prefect settings management.

    Each setting is defined as a Setting type. The name of each setting is stylized in all caps, matching the environment variable that can be used to change the setting.

    All settings defined in this file are used to generate a dynamic Pydantic settings class called Settings. When instantiated, this class will load settings from environment variables and pull default values from the setting definitions.

    The current instance of Settings being used by the application is stored in a SettingsContext model which allows each instance of the Settings class to be accessed in an async-safe manner.

    Aside from environment variables, we allow settings to be changed during the runtime of the process using profiles. Profiles contain setting overrides that the user may persist without setting environment variables. Profiles are also used internally for managing settings during task run execution where differing settings may be used concurrently in the same process and during testing where we need to override settings to ensure their value is respected as intended.

    The SettingsContext is set when the prefect module is imported. This context is referred to as the \"root\" settings context for clarity. Generally, this is the only settings context that will be used. When this context is entered, we will instantiate a Settings object, loading settings from environment variables and defaults, then we will load the active profile and use it to override settings. See enter_root_settings_context for details on determining the active profile.

    Another SettingsContext may be entered at any time to change the settings being used by the code within the context. Generally, users should not use this. Settings management should be left to Prefect application internals.

    Generally, settings should be accessed with SETTING_VARIABLE.value() which will pull the current Settings instance from the current SettingsContext and retrieve the value of the relevant setting.

    Accessing a setting's value will also call the Setting.value_callback which allows settings to be dynamically modified on retrieval. This allows us to make settings dependent on the value of other settings or perform other dynamic effects.
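    A brief sketch of the access pattern described above (temporary_settings is shown here to illustrate an in-process override; profiles are the persistent equivalent):

    from prefect.settings import PREFECT_API_URL, temporary_settings

    print(PREFECT_API_URL.value())  # resolved from the current SettingsContext

    # Override a setting for the duration of a block of code
    with temporary_settings(updates={PREFECT_API_URL: "http://127.0.0.1:4200/api"}):
        print(PREFECT_API_URL.value())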

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_HOME","title":"PREFECT_HOME = Setting(Path, default=Path('~') / '.prefect', value_callback=expanduser_in_path) module-attribute","text":"

    Prefect's home directory. Defaults to ~/.prefect. This directory may be created automatically when required.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXTRA_ENTRYPOINTS","title":"PREFECT_EXTRA_ENTRYPOINTS = Setting(str, default='') module-attribute","text":"

    Modules for Prefect to import when Prefect is imported.

    Values should be separated by commas, e.g. my_module,my_other_module. Objects within modules may be specified by a ':' partition, e.g. my_module:my_object. If a callable object is provided, it will be called with no arguments on import.
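    For example, following the format above (module and object names are placeholders):

    prefect config set PREFECT_EXTRA_ENTRYPOINTS="my_module,my_other_module:my_object"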

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_DEBUG_MODE","title":"PREFECT_DEBUG_MODE = Setting(bool, default=False) module-attribute","text":"

    If True, places the API in debug mode. This may modify behavior to facilitate debugging, including extra logs and other verbose assistance. Defaults to False.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLI_COLORS","title":"PREFECT_CLI_COLORS = Setting(bool, default=True) module-attribute","text":"

    If True, use colors in CLI output. If False, output will not include color codes. Defaults to True.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLI_PROMPT","title":"PREFECT_CLI_PROMPT = Setting(Optional[bool], default=None) module-attribute","text":"

    If True, use interactive prompts in CLI commands. If False, no interactive prompts will be used. If None, the value will be dynamically determined based on the presence of an interactive-enabled terminal.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLI_WRAP_LINES","title":"PREFECT_CLI_WRAP_LINES = Setting(bool, default=True) module-attribute","text":"

    If True, wrap text by inserting new lines in long lines in CLI output. If False, output will not be wrapped. Defaults to True.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TEST_MODE","title":"PREFECT_TEST_MODE = Setting(bool, default=False) module-attribute","text":"

    If True, places the API in test mode. This may modify behavior to facilitate testing. Defaults to False.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UNIT_TEST_MODE","title":"PREFECT_UNIT_TEST_MODE = Setting(bool, default=False) module-attribute","text":"

    This variable only exists to facilitate unit testing. If True, code is executing in a unit test context. Defaults to False.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UNIT_TEST_LOOP_DEBUG","title":"PREFECT_UNIT_TEST_LOOP_DEBUG = Setting(bool, default=True) module-attribute","text":"

    If True, turns on debug mode for the unit testing event loop. Defaults to True.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TEST_SETTING","title":"PREFECT_TEST_SETTING = Setting(Any, default=None, value_callback=only_return_value_in_test_mode) module-attribute","text":"

    This variable only exists to facilitate testing of settings. If accessed when PREFECT_TEST_MODE is not set, None is returned.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_TLS_INSECURE_SKIP_VERIFY","title":"PREFECT_API_TLS_INSECURE_SKIP_VERIFY = Setting(bool, default=False) module-attribute","text":"

    If True, disables SSL checking to allow insecure requests. This is recommended only during development, e.g. when using self-signed certificates.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SSL_CERT_FILE","title":"PREFECT_API_SSL_CERT_FILE = Setting(str, default=os.environ.get('SSL_CERT_FILE')) module-attribute","text":"

    This configuration setting specifies the path to an SSL certificate file. When set, it allows the application to use the specified certificate for secure communication. If left unset, the setting defaults to the value provided by the SSL_CERT_FILE environment variable.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_URL","title":"PREFECT_API_URL = Setting(str, default=None) module-attribute","text":"

    If provided, the URL of a hosted Prefect API. Defaults to None.

    When using Prefect Cloud, this will include an account and workspace.
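    For example, to point a client at a locally hosted server (adjust host and port as needed):

    prefect config set PREFECT_API_URL="http://127.0.0.1:4200/api"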

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SILENCE_API_URL_MISCONFIGURATION","title":"PREFECT_SILENCE_API_URL_MISCONFIGURATION = Setting(bool, default=False) module-attribute","text":"

    If True, disables the warning shown when a user accidentally misconfigures PREFECT_API_URL. This is useful when PREFECT_API_URL is intentionally set to a custom URL, such as a reverse proxy, and the warning should be silenced.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_KEY","title":"PREFECT_API_KEY = Setting(str, default=None, is_secret=True) module-attribute","text":"

    API key used to authenticate with the Prefect API. Defaults to None.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_ENABLE_HTTP2","title":"PREFECT_API_ENABLE_HTTP2 = Setting(bool, default=True) module-attribute","text":"

    If true, enable support for HTTP/2 for communicating with an API.

    If the API does not support HTTP/2, this will have no effect and connections will be made via HTTP/1.1.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLIENT_MAX_RETRIES","title":"PREFECT_CLIENT_MAX_RETRIES = Setting(int, default=5) module-attribute","text":"

    The maximum number of retries to perform on failed HTTP requests.

    Defaults to 5. Set to 0 to disable retries.

    See PREFECT_CLIENT_RETRY_EXTRA_CODES for details on which HTTP status codes are retried.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLIENT_RETRY_JITTER_FACTOR","title":"PREFECT_CLIENT_RETRY_JITTER_FACTOR = Setting(float, default=0.2) module-attribute","text":"

    A value greater than or equal to zero to control the amount of jitter added to retried client requests. Higher values introduce larger amounts of jitter.

    Set to 0 to disable jitter. See clamped_poisson_interval for details on how jitter can affect retry lengths.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLIENT_RETRY_EXTRA_CODES","title":"PREFECT_CLIENT_RETRY_EXTRA_CODES = Setting(str, default='', value_callback=status_codes_as_integers_in_range) module-attribute","text":"

    A comma-separated list of extra HTTP status codes to retry on. Defaults to an empty string. 429, 502 and 503 are always retried. Please note that not all routes are idempotent and retrying may result in unexpected behavior.
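    For example (arbitrary codes, following the comma-separated format above):

    prefect config set PREFECT_CLIENT_RETRY_EXTRA_CODES="409,504"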

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLIENT_CSRF_SUPPORT_ENABLED","title":"PREFECT_CLIENT_CSRF_SUPPORT_ENABLED = Setting(bool, default=True) module-attribute","text":"

    Determines if CSRF token handling is active in the Prefect client for API requests.

    When enabled (True), the client automatically manages CSRF tokens by retrieving, storing, and including them in applicable state-changing requests (POST, PUT, PATCH, DELETE) to the API.

    Disabling this setting (False) means the client will not handle CSRF tokens, which might be suitable for environments where CSRF protection is disabled.

    Defaults to True, ensuring CSRF protection is enabled by default.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLOUD_API_URL","title":"PREFECT_CLOUD_API_URL = Setting(str, default='https://api.prefect.cloud/api', value_callback=check_for_deprecated_cloud_url) module-attribute","text":"

    API URL for Prefect Cloud. Used for authentication.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLOUD_URL","title":"PREFECT_CLOUD_URL = Setting(str, default=None, deprecated=True, deprecated_start_date='Dec 2022', deprecated_help='Use `PREFECT_CLOUD_API_URL` instead.') module-attribute","text":"","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_URL","title":"PREFECT_UI_URL = Setting(Optional[str], default=None, value_callback=default_ui_url) module-attribute","text":"

    The URL for the UI. By default, this is inferred from the PREFECT_API_URL.

    When using Prefect Cloud, this will include the account and workspace. When using an ephemeral server, this will be None.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLOUD_UI_URL","title":"PREFECT_CLOUD_UI_URL = Setting(str, default=None, value_callback=default_cloud_ui_url) module-attribute","text":"

    The URL for the Cloud UI. By default, this is inferred from the PREFECT_CLOUD_API_URL.

    PREFECT_UI_URL will be workspace-specific and is also usable with the open-source server.

    In contrast, this value is only valid for Cloud and will not include the workspace.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_REQUEST_TIMEOUT","title":"PREFECT_API_REQUEST_TIMEOUT = Setting(float, default=60.0) module-attribute","text":"

    The default timeout for requests to the API

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN","title":"PREFECT_EXPERIMENTAL_WARN = Setting(bool, default=True) module-attribute","text":"

    If enabled, warn on usage of experimental features.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_PROFILES_PATH","title":"PREFECT_PROFILES_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'profiles.toml', value_callback=template_with_settings(PREFECT_HOME)) module-attribute","text":"

    The path to the profiles configuration file.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RESULTS_DEFAULT_SERIALIZER","title":"PREFECT_RESULTS_DEFAULT_SERIALIZER = Setting(str, default='pickle') module-attribute","text":"

    The default serializer to use when not otherwise specified.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RESULTS_PERSIST_BY_DEFAULT","title":"PREFECT_RESULTS_PERSIST_BY_DEFAULT = Setting(bool, default=False) module-attribute","text":"

    The default setting for persisting results when not otherwise specified. If enabled, flow and task results will be persisted unless they opt out.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASKS_REFRESH_CACHE","title":"PREFECT_TASKS_REFRESH_CACHE = Setting(bool, default=False) module-attribute","text":"

    If True, enables a refresh of cached results: re-executing the task will refresh the cached results. Defaults to False.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_DEFAULT_RETRIES","title":"PREFECT_TASK_DEFAULT_RETRIES = Setting(int, default=0) module-attribute","text":"

    This value sets the default number of retries for all tasks. It does not overwrite retry values set on individual tasks.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_FLOW_DEFAULT_RETRIES","title":"PREFECT_FLOW_DEFAULT_RETRIES = Setting(int, default=0) module-attribute","text":"

    This value sets the default number of retries for all flows. It does not overwrite retry values set on individual flows.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS","title":"PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS = Setting(Union[int, float], default=0) module-attribute","text":"

    This value sets the default retry delay, in seconds, for all flows. It does not overwrite retry delays set on individual flows.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS","title":"PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS = Setting(Union[float, int, List[float]], default=0) module-attribute","text":"

    This value sets the default retry delay, in seconds, for all tasks. It does not overwrite retry delays set on individual tasks.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS","title":"PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS = Setting(int, default=30) module-attribute","text":"

    The number of seconds to wait before retrying when a task run cannot secure a concurrency slot from the server.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOCAL_STORAGE_PATH","title":"PREFECT_LOCAL_STORAGE_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'storage', value_callback=template_with_settings(PREFECT_HOME)) module-attribute","text":"

    The path to a local storage directory where Prefect stores data, such as persisted results.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_MEMO_STORE_PATH","title":"PREFECT_MEMO_STORE_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'memo_store.toml', value_callback=template_with_settings(PREFECT_HOME)) module-attribute","text":"

    The path to the memo store file.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_MEMOIZE_BLOCK_AUTO_REGISTRATION","title":"PREFECT_MEMOIZE_BLOCK_AUTO_REGISTRATION = Setting(bool, default=True) module-attribute","text":"

    Controls whether or not block auto-registration on start up should be memoized. Setting to False may result in slower server start up times.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_LEVEL","title":"PREFECT_LOGGING_LEVEL = Setting(str, default='INFO', value_callback=debug_mode_log_level) module-attribute","text":"

    The default logging level for Prefect loggers. Defaults to \"INFO\" during normal operation. Is forced to \"DEBUG\" during debug mode.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_INTERNAL_LEVEL","title":"PREFECT_LOGGING_INTERNAL_LEVEL = Setting(str, default='ERROR', value_callback=debug_mode_log_level) module-attribute","text":"

    The default logging level for Prefect's internal machinery loggers. Defaults to \"ERROR\" during normal operation. Is forced to \"DEBUG\" during debug mode.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_SERVER_LEVEL","title":"PREFECT_LOGGING_SERVER_LEVEL = Setting(str, default='WARNING') module-attribute","text":"

    The default logging level for the Prefect API server.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_SETTINGS_PATH","title":"PREFECT_LOGGING_SETTINGS_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'logging.yml', value_callback=template_with_settings(PREFECT_HOME)) module-attribute","text":"

    The path to a custom YAML logging configuration file. If no file is found, the default logging.yml is used. Defaults to a logging.yml in the Prefect home directory.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_EXTRA_LOGGERS","title":"PREFECT_LOGGING_EXTRA_LOGGERS = Setting(str, default='', value_callback=get_extra_loggers) module-attribute","text":"

    Additional loggers to attach to Prefect logging at runtime. Values should be comma separated. The handlers attached to the 'prefect' logger will be added to these loggers. Additionally, if the level is not set, it will be set to the same level as the 'prefect' logger.
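    For example (logger names are illustrative):

    prefect config set PREFECT_LOGGING_EXTRA_LOGGERS="dask,boto3"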

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_LOG_PRINTS","title":"PREFECT_LOGGING_LOG_PRINTS = Setting(bool, default=False) module-attribute","text":"

    If set, print statements in flows and tasks will be redirected to the Prefect logger for the given run. This setting can be overridden by individual tasks and flows.
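    A minimal sketch of a per-flow override (assuming the log_prints keyword available on the flow decorator in recent releases):

    from prefect import flow

    @flow(log_prints=True)  # overrides PREFECT_LOGGING_LOG_PRINTS for this flow
    def my_flow():
        print("this line is sent to the Prefect logger")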

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_ENABLED","title":"PREFECT_LOGGING_TO_API_ENABLED = Setting(bool, default=True) module-attribute","text":"

    Toggles sending logs to the API. If False, logs sent to the API log handler will not be sent to the API.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_BATCH_INTERVAL","title":"PREFECT_LOGGING_TO_API_BATCH_INTERVAL = Setting(float, default=2.0) module-attribute","text":"

    The number of seconds between batched writes of logs to the API.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_BATCH_SIZE","title":"PREFECT_LOGGING_TO_API_BATCH_SIZE = Setting(int, default=4000000) module-attribute","text":"

    The maximum size in bytes for a batch of logs.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_MAX_LOG_SIZE","title":"PREFECT_LOGGING_TO_API_MAX_LOG_SIZE = Setting(int, default=1000000) module-attribute","text":"

    The maximum size in bytes for a single log.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW","title":"PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW = Setting(Literal['warn', 'error', 'ignore'], default='warn') module-attribute","text":"

    Controls the behavior when loggers attempt to send logs to the API handler from outside of a flow.

    All logs sent to the API must be associated with a flow run. The API log handler can only be used outside of a flow by manually providing a flow run identifier. Logs that are not associated with a flow run will not be sent to the API. This setting can be used to determine if a warning or error is displayed when the identifier is missing.

    The following options are available:

    • \"warn\": Log a warning message.
    • \"error\": Raise an error.
    • \"ignore\": Do not log a warning message or raise an error.
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SQLALCHEMY_POOL_SIZE","title":"PREFECT_SQLALCHEMY_POOL_SIZE = Setting(int, default=None) module-attribute","text":"

    Controls connection pool size when using a PostgreSQL database with the Prefect API. If not set, the default SQLAlchemy pool size will be used.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SQLALCHEMY_MAX_OVERFLOW","title":"PREFECT_SQLALCHEMY_MAX_OVERFLOW = Setting(int, default=None) module-attribute","text":"

    Controls maximum overflow of the connection pool when using a PostgreSQL database with the Prefect API. If not set, the default SQLAlchemy maximum overflow value will be used.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_COLORS","title":"PREFECT_LOGGING_COLORS = Setting(bool, default=True) module-attribute","text":"

    Whether to style console logs with color.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_MARKUP","title":"PREFECT_LOGGING_MARKUP = Setting(bool, default=False) module-attribute","text":"

    Whether to interpret strings wrapped in square brackets as a style. This allows styles to be conveniently added to log messages, e.g. [red]This is a red message.[/red]. However, if enabled, strings that contain square brackets may be misinterpreted and lead to incomplete output, e.g. DROP TABLE [dbo].[SomeTable]; outputs DROP TABLE .[SomeTable];.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_INTROSPECTION_WARN_THRESHOLD","title":"PREFECT_TASK_INTROSPECTION_WARN_THRESHOLD = Setting(float, default=10.0) module-attribute","text":"

    Threshold time in seconds for logging a warning if task parameter introspection exceeds this duration. Parameter introspection can be a significant performance hit when the parameter is a large collection object, e.g. a large dictionary or DataFrame, and each element needs to be inspected. See prefect.utilities.annotations.quote for more details. Defaults to 10.0. Set to 0 to disable logging the warning.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_AGENT_QUERY_INTERVAL","title":"PREFECT_AGENT_QUERY_INTERVAL = Setting(float, default=15) module-attribute","text":"

    The agent loop interval, in seconds. Agents will check for new runs this often. Defaults to 15.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_AGENT_PREFETCH_SECONDS","title":"PREFECT_AGENT_PREFETCH_SECONDS = Setting(int, default=15) module-attribute","text":"

    Agents will look for scheduled runs this many seconds in the future and attempt to run them. This accounts for any additional infrastructure spin-up time or latency in preparing a flow run. Note flow runs will not start before their scheduled time, even if they are prefetched. Defaults to 15.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ASYNC_FETCH_STATE_RESULT","title":"PREFECT_ASYNC_FETCH_STATE_RESULT = Setting(bool, default=False) module-attribute","text":"

    Determines whether State.result() fetches results automatically or not. In Prefect 2.6.0, the State.result() method was updated to be async to facilitate automatic retrieval of results from storage which means when writing async code you must await the call. For backwards compatibility, the result is not retrieved by default for async users. You may opt into this per call by passing fetch=True or toggle this setting to change the behavior globally. This setting does not affect users writing synchronous tasks and flows. This setting does not affect retrieval of results when using Future.result().
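    A minimal sketch of the per-call opt-in described above:

    from prefect import flow

    @flow
    async def my_flow():
        return 42

    async def main():
        state = await my_flow(return_state=True)
        value = await state.result(fetch=True)  # opt in to fetching the persisted result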

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_BLOCKS_REGISTER_ON_START","title":"PREFECT_API_BLOCKS_REGISTER_ON_START = Setting(bool, default=True) module-attribute","text":"

    If set, any block types that have been imported will be registered with the backend on application startup. If not set, block types must be manually registered.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_PASSWORD","title":"PREFECT_API_DATABASE_PASSWORD = Setting(str, default=None, is_secret=True) module-attribute","text":"

    Password to template into the PREFECT_API_DATABASE_CONNECTION_URL. This is useful if the password must be provided separately from the connection URL. To use this setting, you must include it in your connection URL.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_CONNECTION_URL","title":"PREFECT_API_DATABASE_CONNECTION_URL = Setting(str, default=None, value_callback=default_database_connection_url, is_secret=True) module-attribute","text":"

    A database connection URL in a SQLAlchemy-compatible format. Prefect currently supports SQLite and Postgres. Note that all Prefect database engines must use an async driver - for SQLite, use sqlite+aiosqlite and for Postgres use postgresql+asyncpg.

    SQLite in-memory databases can be used by providing the url sqlite+aiosqlite:///file::memory:?cache=shared&uri=true&check_same_thread=false, which will allow the database to be accessed by multiple threads. Note that in-memory databases can not be accessed from multiple processes and should only be used for simple tests.

    Defaults to a sqlite database stored in the Prefect home directory.

    If you need to provide the password via a different environment variable, use the PREFECT_API_DATABASE_PASSWORD setting. For example:

    PREFECT_API_DATABASE_PASSWORD='mypassword'\nPREFECT_API_DATABASE_CONNECTION_URL='postgresql+asyncpg://postgres:${PREFECT_API_DATABASE_PASSWORD}@localhost/prefect'\n
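    Similarly, a typical PostgreSQL configuration might look like this (credentials and host are placeholders):

    PREFECT_API_DATABASE_CONNECTION_URL='postgresql+asyncpg://postgres:yourpassword@localhost:5432/prefect'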
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_ECHO","title":"PREFECT_API_DATABASE_ECHO = Setting(bool, default=False) module-attribute","text":"

    If True, SQLAlchemy will log all SQL issued to the database. Defaults to False.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_MIGRATE_ON_START","title":"PREFECT_API_DATABASE_MIGRATE_ON_START = Setting(bool, default=True) module-attribute","text":"

    If True, the database will be upgraded on application creation. If False, the database will need to be upgraded manually.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_TIMEOUT","title":"PREFECT_API_DATABASE_TIMEOUT = Setting(Optional[float], default=10.0) module-attribute","text":"

    A statement timeout, in seconds, applied to all database interactions made by the API. Defaults to 10 seconds.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_CONNECTION_TIMEOUT","title":"PREFECT_API_DATABASE_CONNECTION_TIMEOUT = Setting(Optional[float], default=5) module-attribute","text":"

    A connection timeout, in seconds, applied to database connections. Defaults to 5.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS","title":"PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS = Setting(float, default=60) module-attribute","text":"

    The scheduler loop interval, in seconds. This determines how often the scheduler will attempt to schedule new flow runs, but has no impact on how quickly either flow runs or task runs are actually executed. Defaults to 60.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE","title":"PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE = Setting(int, default=100) module-attribute","text":"

    The number of deployments the scheduler will attempt to schedule in a single batch. If there are more deployments than the batch size, the scheduler immediately attempts to schedule the next batch; it does not sleep for scheduler_loop_seconds until it has visited every deployment once. Defaults to 100.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS","title":"PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS = Setting(int, default=100) module-attribute","text":"

    The scheduler will attempt to schedule up to this many auto-scheduled runs in the future. Note that deployments may have fewer than this many scheduled runs, depending on the value of scheduler_max_scheduled_time. Defaults to 100.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS","title":"PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS = Setting(int, default=3) module-attribute","text":"

    The scheduler will attempt to schedule at least this many auto-scheduled runs in the future. Note that deployments may have more than this many scheduled runs, depending on the value of scheduler_min_scheduled_time. Defaults to 3.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME","title":"PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME = Setting(timedelta, default=timedelta(days=100)) module-attribute","text":"

    The scheduler will create new runs up to this far in the future. Note that this setting will take precedence over scheduler_max_runs: if a flow runs once a month and scheduler_max_scheduled_time is three months, then only three runs will be scheduled. Defaults to 100 days (8640000 seconds).

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME","title":"PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME = Setting(timedelta, default=timedelta(hours=1)) module-attribute","text":"

    The scheduler will create new runs at least this far in the future. Note that this setting will take precedence over scheduler_min_runs: if a flow runs every hour and scheduler_min_scheduled_time is three hours, then three runs will be scheduled even if scheduler_min_runs is 1. Defaults to 1 hour (3600 seconds).
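    To make the precedence between the run-count and time-window bounds concrete, here is an illustrative sketch (not the scheduler's actual implementation) that applies the rules stated above to the two examples from the text; the intervals are hypothetical:

    from datetime import timedelta

    def scheduler_bounds(interval, min_runs, min_time, max_runs, max_time):
        # Illustration of the documented precedence only.
        lower = max(min_runs, min_time // interval)  # min_scheduled_time wins over min_runs
        upper = min(max_runs, max_time // interval)  # max_scheduled_time wins over max_runs
        return lower, upper

    # A monthly flow with a three-month max window (approximated as 30/90 days):
    # upper bound is 3 runs even though max_runs defaults to 100.
    assert scheduler_bounds(timedelta(days=30), 3, timedelta(hours=1), 100, timedelta(days=90))[1] == 3

    # An hourly flow with a three-hour min window: lower bound is 3 runs even if min_runs is 1.
    assert scheduler_bounds(timedelta(hours=1), 1, timedelta(hours=3), 100, timedelta(days=100))[0] == 3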

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE","title":"PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE = Setting(int, default=500) module-attribute","text":"

    The number of flow runs the scheduler will attempt to insert in one batch across all deployments. If the number of flow runs to schedule exceeds this amount, the runs will be inserted in batches of this size. Defaults to 500.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS","title":"PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS = Setting(float, default=5) module-attribute","text":"

    The late runs service will look for runs to mark as late this often. Defaults to 5.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS","title":"PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS = Setting(timedelta, default=timedelta(seconds=15)) module-attribute","text":"

    The late runs service will mark runs as late after they have exceeded their scheduled start time by this many seconds. Defaults to 15 seconds.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS","title":"PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS = Setting(float, default=5) module-attribute","text":"

    The pause expiration service will look for runs to mark as failed this often. Defaults to 5.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS","title":"PREFECT_API_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS = Setting(float, default=20) module-attribute","text":"

    The cancellation cleanup service will look for non-terminal tasks and subflows this often. Defaults to 20.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_FOREMAN_ENABLED","title":"PREFECT_API_SERVICES_FOREMAN_ENABLED = Setting(bool, default=True) module-attribute","text":"

    Whether or not to start the Foreman service in the server application.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_FOREMAN_LOOP_SECONDS","title":"PREFECT_API_SERVICES_FOREMAN_LOOP_SECONDS = Setting(float, default=15) module-attribute","text":"

    The number of seconds to wait between each iteration of the Foreman loop which checks for offline workers and updates work pool status.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_FOREMAN_INACTIVITY_HEARTBEAT_MULTIPLE","title":"PREFECT_API_SERVICES_FOREMAN_INACTIVITY_HEARTBEAT_MULTIPLE = Setting(int, default=3) module-attribute","text":"

    The number of heartbeats that must be missed before a worker is marked as offline.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_FOREMAN_FALLBACK_HEARTBEAT_INTERVAL_SECONDS","title":"PREFECT_API_SERVICES_FOREMAN_FALLBACK_HEARTBEAT_INTERVAL_SECONDS = Setting(int, default=30) module-attribute","text":"

    The number of seconds to use for online/offline evaluation if a worker's heartbeat interval is not set.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_FOREMAN_DEPLOYMENT_LAST_POLLED_TIMEOUT_SECONDS","title":"PREFECT_API_SERVICES_FOREMAN_DEPLOYMENT_LAST_POLLED_TIMEOUT_SECONDS = Setting(int, default=60) module-attribute","text":"

    The number of seconds before a deployment is marked as not ready if it has not been polled.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_FOREMAN_WORK_QUEUE_LAST_POLLED_TIMEOUT_SECONDS","title":"PREFECT_API_SERVICES_FOREMAN_WORK_QUEUE_LAST_POLLED_TIMEOUT_SECONDS = Setting(int, default=60) module-attribute","text":"

    The number of seconds before a work queue is marked as not ready if it has not been polled.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DEFAULT_LIMIT","title":"PREFECT_API_DEFAULT_LIMIT = Setting(int, default=200) module-attribute","text":"

    The default limit applied to queries that can return multiple objects, such as POST /flow_runs/filter.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SERVER_API_HOST","title":"PREFECT_SERVER_API_HOST = Setting(str, default='127.0.0.1') module-attribute","text":"

    The API's host address (defaults to 127.0.0.1).

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SERVER_API_PORT","title":"PREFECT_SERVER_API_PORT = Setting(int, default=4200) module-attribute","text":"

    The API's port (defaults to 4200).

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SERVER_API_KEEPALIVE_TIMEOUT","title":"PREFECT_SERVER_API_KEEPALIVE_TIMEOUT = Setting(int, default=5) module-attribute","text":"

    The API's keep alive timeout (defaults to 5). Refer to https://www.uvicorn.org/settings/#timeouts for details.

    When the API is hosted behind a load balancer, you may want to set this to a value greater than the load balancer's idle timeout.

    Note this setting only applies when calling prefect server start; if hosting the API with another tool you will need to configure this there instead.
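    A sketch of one way to apply this: export a keep-alive larger than a hypothetical 60-second load balancer idle timeout in the environment that will run prefect server start (the 75-second figure is an arbitrary example):

    import os

    # Hypothetical scenario: a load balancer with a 60-second idle timeout sits in
    # front of the API, so the server is started with a larger keep-alive than that.
    os.environ["PREFECT_SERVER_API_KEEPALIVE_TIMEOUT"] = "75"
    # ...then launch `prefect server start` from this environment so it inherits the value.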

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SERVER_CSRF_PROTECTION_ENABLED","title":"PREFECT_SERVER_CSRF_PROTECTION_ENABLED = Setting(bool, default=False) module-attribute","text":"

    Controls the activation of CSRF protection for the Prefect server API.

    When enabled (True), the server enforces CSRF validation checks on incoming state-changing requests (POST, PUT, PATCH, DELETE), requiring a valid CSRF token to be included in the request headers or body. This adds a layer of security by preventing unauthorized or malicious sites from making requests on behalf of authenticated users.

    It is recommended to enable this setting in production environments where the API is exposed to web clients to safeguard against CSRF attacks.

    Note: Enabling this setting requires corresponding support in the client for CSRF token management. See PREFECT_CLIENT_CSRF_SUPPORT_ENABLED for more.
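    As a sketch, CSRF protection can be enabled together with the matching client support; both settings are documented in this reference, and the override is illustrative rather than a recommended production setup:

    from prefect.settings import (
        PREFECT_SERVER_CSRF_PROTECTION_ENABLED,
        PREFECT_CLIENT_CSRF_SUPPORT_ENABLED,
        temporary_settings,
    )

    # Enable server-side CSRF checks and client-side token handling together;
    # the server-side setting alone requires matching client support (see the note above).
    with temporary_settings(
        updates={
            PREFECT_SERVER_CSRF_PROTECTION_ENABLED: True,
            PREFECT_CLIENT_CSRF_SUPPORT_ENABLED: True,
        }
    ):
        assert PREFECT_SERVER_CSRF_PROTECTION_ENABLED.value() is True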

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SERVER_CSRF_TOKEN_EXPIRATION","title":"PREFECT_SERVER_CSRF_TOKEN_EXPIRATION = Setting(timedelta, default=timedelta(hours=1)) module-attribute","text":"

    Specifies the duration for which a CSRF token remains valid after being issued by the server.

    The default expiration time is set to 1 hour, which offers a reasonable compromise. Adjust this setting based on your specific security requirements and usage patterns.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_ENABLED","title":"PREFECT_UI_ENABLED = Setting(bool, default=True) module-attribute","text":"

    Whether or not to serve the Prefect UI.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_API_URL","title":"PREFECT_UI_API_URL = Setting(str, default=None, value_callback=default_ui_api_url) module-attribute","text":"

    The connection url for communication from the UI to the API. Defaults to PREFECT_API_URL if set. Otherwise, the default URL is generated from PREFECT_SERVER_API_HOST and PREFECT_SERVER_API_PORT. If providing a custom value, the aforementioned settings may be templated into the given string.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SERVER_ANALYTICS_ENABLED","title":"PREFECT_SERVER_ANALYTICS_ENABLED = Setting(bool, default=True) module-attribute","text":"

    When enabled, Prefect sends anonymous data (e.g. count of flow runs, package version) on server startup to help us improve our product.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_ENABLED","title":"PREFECT_API_SERVICES_SCHEDULER_ENABLED = Setting(bool, default=True) module-attribute","text":"

    Whether or not to start the scheduling service in the server application. If disabled, you will need to run this service separately to schedule runs for deployments.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_LATE_RUNS_ENABLED","title":"PREFECT_API_SERVICES_LATE_RUNS_ENABLED = Setting(bool, default=True) module-attribute","text":"

    Whether or not to start the late runs service in the server application. If disabled, you will need to run this service separately to have runs past their scheduled start time marked as late.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED","title":"PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED = Setting(bool, default=True) module-attribute","text":"

    Whether or not to start the flow run notifications service in the server application. If disabled, you will need to run this service separately to send flow run notifications.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED","title":"PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED = Setting(bool, default=True) module-attribute","text":"

    Whether or not to start the paused flow run expiration service in the server application. If disabled, paused flows that have timed out will remain in a Paused state until a resume attempt.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH","title":"PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH = Setting(int, default=2000) module-attribute","text":"

    The maximum number of characters allowed for a task run cache key. This setting cannot be changed client-side; it must be set on the server.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED","title":"PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED = Setting(bool, default=True) module-attribute","text":"

    Whether or not to start the cancellation cleanup service in the server application. If disabled, task runs and subflow runs belonging to cancelled flows may remain in non-terminal states.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_MAX_FLOW_RUN_GRAPH_NODES","title":"PREFECT_API_MAX_FLOW_RUN_GRAPH_NODES = Setting(int, default=10000) module-attribute","text":"

    The maximum size of a flow run graph on the v2 API

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_MAX_FLOW_RUN_GRAPH_ARTIFACTS","title":"PREFECT_API_MAX_FLOW_RUN_GRAPH_ARTIFACTS = Setting(int, default=10000) module-attribute","text":"

    The maximum number of artifacts to show on a flow run graph on the v2 API

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_ARTIFACTS_ON_FLOW_RUN_GRAPH","title":"PREFECT_EXPERIMENTAL_ENABLE_ARTIFACTS_ON_FLOW_RUN_GRAPH = Setting(bool, default=True) module-attribute","text":"

    Whether or not to enable artifacts on the flow run graph.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_STATES_ON_FLOW_RUN_GRAPH","title":"PREFECT_EXPERIMENTAL_ENABLE_STATES_ON_FLOW_RUN_GRAPH = Setting(bool, default=True) module-attribute","text":"

    Whether or not to enable flow run states on the flow run graph.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_WORK_POOLS","title":"PREFECT_EXPERIMENTAL_ENABLE_WORK_POOLS = Setting(bool, default=True) module-attribute","text":"

    Whether or not to enable experimental Prefect work pools.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_WORK_POOLS","title":"PREFECT_EXPERIMENTAL_WARN_WORK_POOLS = Setting(bool, default=False) module-attribute","text":"

    Whether or not to warn when experimental Prefect work pools are used.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_WORKERS","title":"PREFECT_EXPERIMENTAL_ENABLE_WORKERS = Setting(bool, default=True) module-attribute","text":"

    Whether or not to enable experimental Prefect workers.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_WORKERS","title":"PREFECT_EXPERIMENTAL_WARN_WORKERS = Setting(bool, default=False) module-attribute","text":"

    Whether or not to warn when experimental Prefect workers are used.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_VISUALIZE","title":"PREFECT_EXPERIMENTAL_WARN_VISUALIZE = Setting(bool, default=False) module-attribute","text":"

    Whether or not to warn when experimental Prefect visualize is used.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_ENHANCED_CANCELLATION","title":"PREFECT_EXPERIMENTAL_ENABLE_ENHANCED_CANCELLATION = Setting(bool, default=True) module-attribute","text":"

    Whether or not to enable experimental enhanced flow run cancellation.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_ENHANCED_CANCELLATION","title":"PREFECT_EXPERIMENTAL_WARN_ENHANCED_CANCELLATION = Setting(bool, default=False) module-attribute","text":"

    Whether or not to warn when experimental enhanced flow run cancellation is used.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_DEPLOYMENT_STATUS","title":"PREFECT_EXPERIMENTAL_ENABLE_DEPLOYMENT_STATUS = Setting(bool, default=True) module-attribute","text":"

    Whether or not to enable deployment status in the UI

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_DEPLOYMENT_STATUS","title":"PREFECT_EXPERIMENTAL_WARN_DEPLOYMENT_STATUS = Setting(bool, default=False) module-attribute","text":"

    Whether or not to warn when deployment status is used.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_FLOW_RUN_INPUT","title":"PREFECT_EXPERIMENTAL_FLOW_RUN_INPUT = Setting(bool, default=False) module-attribute","text":"

    Whether or not to enable flow run input.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_FLOW_RUN_INPUT","title":"PREFECT_EXPERIMENTAL_WARN_FLOW_RUN_INPUT = Setting(bool, default=True) module-attribute","text":"

    Whether or not to warn when experimental flow run input is used.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_EVENTS","title":"PREFECT_EXPERIMENTAL_EVENTS = Setting(bool, default=False) module-attribute","text":"

    Whether to enable Prefect's server-side event features. Note that Prefect Cloud clients will always emit events during flow and task runs regardless of this setting.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_PROCESS_LIMIT","title":"PREFECT_RUNNER_PROCESS_LIMIT = Setting(int, default=5) module-attribute","text":"

    Maximum number of processes a runner will execute in parallel.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_POLL_FREQUENCY","title":"PREFECT_RUNNER_POLL_FREQUENCY = Setting(int, default=10) module-attribute","text":"

    Number of seconds a runner should wait between queries for scheduled work.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_SERVER_MISSED_POLLS_TOLERANCE","title":"PREFECT_RUNNER_SERVER_MISSED_POLLS_TOLERANCE = Setting(int, default=2) module-attribute","text":"

    Number of missed polls before a runner is considered unhealthy by its webserver.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_SERVER_HOST","title":"PREFECT_RUNNER_SERVER_HOST = Setting(str, default='localhost') module-attribute","text":"

    The host address the runner's webserver should bind to.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_SERVER_PORT","title":"PREFECT_RUNNER_SERVER_PORT = Setting(int, default=8080) module-attribute","text":"

    The port the runner's webserver should bind to.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_SERVER_LOG_LEVEL","title":"PREFECT_RUNNER_SERVER_LOG_LEVEL = Setting(str, default='error') module-attribute","text":"

    The log level of the runner's webserver.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RUNNER_SERVER_ENABLE","title":"PREFECT_RUNNER_SERVER_ENABLE = Setting(bool, default=False) module-attribute","text":"

    Whether or not to enable the runner's webserver.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_HEARTBEAT_SECONDS","title":"PREFECT_WORKER_HEARTBEAT_SECONDS = Setting(float, default=30) module-attribute","text":"

    Number of seconds a worker should wait between sending a heartbeat.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_QUERY_SECONDS","title":"PREFECT_WORKER_QUERY_SECONDS = Setting(float, default=10) module-attribute","text":"

    Number of seconds a worker should wait between queries for scheduled flow runs.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_PREFETCH_SECONDS","title":"PREFECT_WORKER_PREFETCH_SECONDS = Setting(float, default=10) module-attribute","text":"

    The number of seconds into the future a worker should query for scheduled flow runs. Can be used to compensate for infrastructure start up time for a worker.
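    For example (illustrative value only), a worker on slow-to-start infrastructure could be given a larger prefetch window:

    from prefect.settings import PREFECT_WORKER_PREFETCH_SECONDS, temporary_settings

    # Arbitrary example: prefetch one minute ahead to absorb container start-up time.
    with temporary_settings(updates={PREFECT_WORKER_PREFETCH_SECONDS: 60.0}):
        print(PREFECT_WORKER_PREFETCH_SECONDS.value())  # reflects the temporary override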

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_WEBSERVER_HOST","title":"PREFECT_WORKER_WEBSERVER_HOST = Setting(str, default='0.0.0.0') module-attribute","text":"

    The host address the worker's webserver should bind to.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_WEBSERVER_PORT","title":"PREFECT_WORKER_WEBSERVER_PORT = Setting(int, default=8080) module-attribute","text":"

    The port the worker's webserver should bind to.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_SCHEDULING_DEFAULT_STORAGE_BLOCK","title":"PREFECT_TASK_SCHEDULING_DEFAULT_STORAGE_BLOCK = Setting(str, default='local-file-system/prefect-task-scheduling') module-attribute","text":"

    The block-type/block-document slug of a block to use as the default storage for autonomous tasks.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_SCHEDULING_DELETE_FAILED_SUBMISSIONS","title":"PREFECT_TASK_SCHEDULING_DELETE_FAILED_SUBMISSIONS = Setting(bool, default=True) module-attribute","text":"

    Whether or not to delete failed task submissions from the database.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_SCHEDULING_MAX_SCHEDULED_QUEUE_SIZE","title":"PREFECT_TASK_SCHEDULING_MAX_SCHEDULED_QUEUE_SIZE = Setting(int, default=1000) module-attribute","text":"

    The maximum number of scheduled tasks to queue for submission.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_SCHEDULING_MAX_RETRY_QUEUE_SIZE","title":"PREFECT_TASK_SCHEDULING_MAX_RETRY_QUEUE_SIZE = Setting(int, default=100) module-attribute","text":"

    The maximum number of retries to queue for submission.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_SCHEDULING_PENDING_TASK_TIMEOUT","title":"PREFECT_TASK_SCHEDULING_PENDING_TASK_TIMEOUT = Setting(timedelta, default=timedelta(seconds=30)) module-attribute","text":"

    How long before a PENDING task is made available to another task server. In practice, a task server should move a task from PENDING to RUNNING very quickly, so runs stuck in PENDING for a while are a sign that the task server may have crashed.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_EXTRA_RUNNER_ENDPOINTS","title":"PREFECT_EXPERIMENTAL_ENABLE_EXTRA_RUNNER_ENDPOINTS = Setting(bool, default=False) module-attribute","text":"

    Whether or not to enable experimental worker webserver endpoints.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_ARTIFACTS","title":"PREFECT_EXPERIMENTAL_ENABLE_ARTIFACTS = Setting(bool, default=True) module-attribute","text":"

    Whether or not to enable experimental Prefect artifacts.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_ARTIFACTS","title":"PREFECT_EXPERIMENTAL_WARN_ARTIFACTS = Setting(bool, default=False) module-attribute","text":"

    Whether or not to warn when experimental Prefect artifacts are used.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_WORKSPACE_DASHBOARD","title":"PREFECT_EXPERIMENTAL_ENABLE_WORKSPACE_DASHBOARD = Setting(bool, default=True) module-attribute","text":"

    Whether or not to enable the experimental workspace dashboard.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_WORKSPACE_DASHBOARD","title":"PREFECT_EXPERIMENTAL_WARN_WORKSPACE_DASHBOARD = Setting(bool, default=False) module-attribute","text":"

    Whether or not to warn when the experimental workspace dashboard is enabled.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING","title":"PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING = Setting(bool, default=False) module-attribute","text":"

    Whether or not to enable experimental task scheduling.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_WORK_QUEUE_STATUS","title":"PREFECT_EXPERIMENTAL_ENABLE_WORK_QUEUE_STATUS = Setting(bool, default=True) module-attribute","text":"

    Whether or not to enable experimental work queue status in place of work queue health.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_NEW_ENGINE","title":"PREFECT_EXPERIMENTAL_ENABLE_NEW_ENGINE = Setting(bool, default=False) module-attribute","text":"

    Whether or not to enable the experimental new engine.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_DISABLE_SYNC_COMPAT","title":"PREFECT_EXPERIMENTAL_DISABLE_SYNC_COMPAT = Setting(bool, default=False) module-attribute","text":"

    Whether or not to disable the sync_compatible decorator utility.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_DEFAULT_RESULT_STORAGE_BLOCK","title":"PREFECT_DEFAULT_RESULT_STORAGE_BLOCK = Setting(str, default=None) module-attribute","text":"

    The block-type/block-document slug of a block to use as the default result storage.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_DEFAULT_WORK_POOL_NAME","title":"PREFECT_DEFAULT_WORK_POOL_NAME = Setting(str, default=None) module-attribute","text":"

    The default work pool to deploy to.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE","title":"PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE = Setting(str, default=None) module-attribute","text":"

    The default Docker namespace to use when building images.

    Can be either an organization/username or a registry URL with an organization/username.
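    For instance (the registry and organization names below are made up):

    from prefect.settings import PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE, temporary_settings

    # Either documented form works; both values here are placeholder examples.
    with temporary_settings(
        updates={PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE: "registry.example.com/my-team"}
    ):
        print(PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE.value())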

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_SERVE_BASE","title":"PREFECT_UI_SERVE_BASE = Setting(str, default='/') module-attribute","text":"

    The base URL path to serve the Prefect UI from.

    Defaults to the root path.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_STATIC_DIRECTORY","title":"PREFECT_UI_STATIC_DIRECTORY = Setting(str, default=None) module-attribute","text":"

    The directory to serve static files from. This should be used when running into permissions issues when attempting to serve the UI from the default directory (for example, when running in a Docker container).

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_MESSAGING_BROKER","title":"PREFECT_MESSAGING_BROKER = Setting(str, default='prefect.server.utilities.messaging.memory') module-attribute","text":"

    Which message broker implementation to use for the messaging system. Should point to a module that exports a Publisher and Consumer class.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_MESSAGING_CACHE","title":"PREFECT_MESSAGING_CACHE = Setting(str, default='prefect.server.utilities.messaging.memory') module-attribute","text":"

    Which cache implementation to use for the events system. Should point to a module that exports a Cache class.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EVENTS_MAXIMUM_LABELS_PER_RESOURCE","title":"PREFECT_EVENTS_MAXIMUM_LABELS_PER_RESOURCE = Setting(int, default=500) module-attribute","text":"

    The maximum number of labels a resource may have.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EVENTS_MAXIMUM_RELATED_RESOURCES","title":"PREFECT_EVENTS_MAXIMUM_RELATED_RESOURCES = Setting(int, default=500) module-attribute","text":"

    The maximum number of related resources an Event may have.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EVENTS_MAXIMUM_SIZE_BYTES","title":"PREFECT_EVENTS_MAXIMUM_SIZE_BYTES = Setting(int, default=1500000) module-attribute","text":"

    The maximum size of an Event when serialized to JSON

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED","title":"PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED = Setting(bool, default=True) module-attribute","text":"

    Whether or not to start the event debug logger service in the server application.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_TRIGGERS_ENABLED","title":"PREFECT_API_SERVICES_TRIGGERS_ENABLED = Setting(bool, default=True) module-attribute","text":"

    Whether or not to start the triggers service in the server application.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EVENTS_EXPIRED_BUCKET_BUFFER","title":"PREFECT_EVENTS_EXPIRED_BUCKET_BUFFER = Setting(timedelta, default=timedelta(seconds=60)) module-attribute","text":"

    The amount of time to retain expired automation buckets

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EVENTS_PROACTIVE_GRANULARITY","title":"PREFECT_EVENTS_PROACTIVE_GRANULARITY = Setting(timedelta, default=timedelta(seconds=5)) module-attribute","text":"

    How frequently proactive automations are evaluated

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_EVENT_PERSISTER_ENABLED","title":"PREFECT_API_SERVICES_EVENT_PERSISTER_ENABLED = Setting(bool, default=True) module-attribute","text":"

    Whether or not to start the event persister service in the server application.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_EVENT_PERSISTER_BATCH_SIZE","title":"PREFECT_API_SERVICES_EVENT_PERSISTER_BATCH_SIZE = Setting(int, default=20, gt=0) module-attribute","text":"

    The number of events the event persister will attempt to insert in one batch.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_EVENT_PERSISTER_FLUSH_INTERVAL","title":"PREFECT_API_SERVICES_EVENT_PERSISTER_FLUSH_INTERVAL = Setting(float, default=5, gt=0.0) module-attribute","text":"

    The maximum number of seconds between flushes of the event persister.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EVENTS_RETENTION_PERIOD","title":"PREFECT_EVENTS_RETENTION_PERIOD = Setting(timedelta, default=timedelta(days=7)) module-attribute","text":"

    The amount of time to retain events in the database.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_EVENTS_STREAM_OUT_ENABLED","title":"PREFECT_API_EVENTS_STREAM_OUT_ENABLED = Setting(bool, default=True) module-attribute","text":"

    Whether or not to allow streaming events out of the API via websockets.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_EVENTS_RELATED_RESOURCE_CACHE_TTL","title":"PREFECT_API_EVENTS_RELATED_RESOURCE_CACHE_TTL = Setting(timedelta, default=timedelta(minutes=5)) module-attribute","text":"

    How long to cache related resource data for emitting server-side events

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Setting","title":"Setting","text":"

    Bases: Generic[T]

    Setting definition type.

    Source code in prefect/settings.py
    class Setting(Generic[T]):\n    \"\"\"\n    Setting definition type.\n    \"\"\"\n\n    def __init__(\n        self,\n        type: Type[T],\n        *,\n        deprecated: bool = False,\n        deprecated_start_date: Optional[str] = None,\n        deprecated_end_date: Optional[str] = None,\n        deprecated_help: str = \"\",\n        deprecated_when_message: str = \"\",\n        deprecated_when: Optional[Callable[[Any], bool]] = None,\n        deprecated_renamed_to: Optional[\"Setting[T]\"] = None,\n        value_callback: Optional[Callable[[\"Settings\", T], T]] = None,\n        is_secret: bool = False,\n        **kwargs: Any,\n    ) -> None:\n        self.field: fields.FieldInfo = Field(**kwargs)\n        self.type = type\n        self.value_callback = value_callback\n        self._name = None\n        self.is_secret = is_secret\n        self.deprecated = deprecated\n        self.deprecated_start_date = deprecated_start_date\n        self.deprecated_end_date = deprecated_end_date\n        self.deprecated_help = deprecated_help\n        self.deprecated_when = deprecated_when or (lambda _: True)\n        self.deprecated_when_message = deprecated_when_message\n        self.deprecated_renamed_to = deprecated_renamed_to\n        self.deprecated_renamed_from = None\n        self.__doc__ = self.field.description\n\n        # Validate the deprecation settings, will throw an error at setting definition\n        # time if the developer has not configured it correctly\n        if deprecated:\n            generate_deprecation_message(\n                name=\"...\",  # setting names not populated until after init\n                start_date=self.deprecated_start_date,\n                end_date=self.deprecated_end_date,\n                help=self.deprecated_help,\n                when=self.deprecated_when_message,\n            )\n\n        if deprecated_renamed_to is not None:\n            # Track the deprecation both ways\n            deprecated_renamed_to.deprecated_renamed_from = self\n\n    def value(self, bypass_callback: bool = False) -> T:\n        \"\"\"\n        Get the current value of a setting.\n\n        Example:\n        ```python\n        from prefect.settings import PREFECT_API_URL\n        PREFECT_API_URL.value()\n        ```\n        \"\"\"\n        return self.value_from(get_current_settings(), bypass_callback=bypass_callback)\n\n    def value_from(self, settings: \"Settings\", bypass_callback: bool = False) -> T:\n        \"\"\"\n        Get the value of a setting from a settings object\n\n        Example:\n        ```python\n        from prefect.settings import get_default_settings\n        PREFECT_API_URL.value_from(get_default_settings())\n        ```\n        \"\"\"\n        value = settings.value_of(self, bypass_callback=bypass_callback)\n\n        if not bypass_callback and self.deprecated and self.deprecated_when(value):\n            # Check if this setting is deprecated and someone is accessing the value\n            # via the old name\n            warnings.warn(self.deprecated_message, DeprecationWarning, stacklevel=3)\n\n            # If the the value is empty, return the new setting's value for compat\n            if value is None and self.deprecated_renamed_to is not None:\n                return self.deprecated_renamed_to.value_from(settings)\n\n        if not bypass_callback and self.deprecated_renamed_from is not None:\n            # Check if this setting is a rename of a deprecated setting and the\n            # deprecated setting is set and should be used 
for compatibility\n            deprecated_value = self.deprecated_renamed_from.value_from(\n                settings, bypass_callback=True\n            )\n            if deprecated_value is not None:\n                warnings.warn(\n                    (\n                        f\"{self.deprecated_renamed_from.deprecated_message} Because\"\n                        f\" {self.deprecated_renamed_from.name!r} is set it will be used\"\n                        f\" instead of {self.name!r} for backwards compatibility.\"\n                    ),\n                    DeprecationWarning,\n                    stacklevel=3,\n                )\n            return deprecated_value or value\n\n        return value\n\n    @property\n    def name(self):\n        if self._name:\n            return self._name\n\n        # Lookup the name on first access\n        for name, val in tuple(globals().items()):\n            if val == self:\n                self._name = name\n                return name\n\n        raise ValueError(\"Setting not found in `prefect.settings` module.\")\n\n    @name.setter\n    def name(self, value: str):\n        self._name = value\n\n    @property\n    def deprecated_message(self):\n        return generate_deprecation_message(\n            name=f\"Setting {self.name!r}\",\n            start_date=self.deprecated_start_date,\n            end_date=self.deprecated_end_date,\n            help=self.deprecated_help,\n            when=self.deprecated_when_message,\n        )\n\n    def __repr__(self) -> str:\n        return f\"<{self.name}: {self.type.__name__}>\"\n\n    def __bool__(self) -> bool:\n        \"\"\"\n        Returns a truthy check of the current value.\n        \"\"\"\n        return bool(self.value())\n\n    def __eq__(self, __o: object) -> bool:\n        return __o.__eq__(self.value())\n\n    def __hash__(self) -> int:\n        return hash((type(self), self.name))\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Setting.value","title":"value","text":"

    Get the current value of a setting.

    Example:

    from prefect.settings import PREFECT_API_URL\nPREFECT_API_URL.value()\n

    Source code in prefect/settings.py
    def value(self, bypass_callback: bool = False) -> T:\n    \"\"\"\n    Get the current value of a setting.\n\n    Example:\n    ```python\n    from prefect.settings import PREFECT_API_URL\n    PREFECT_API_URL.value()\n    ```\n    \"\"\"\n    return self.value_from(get_current_settings(), bypass_callback=bypass_callback)\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Setting.value_from","title":"value_from","text":"

    Get the value of a setting from a settings object

    Example:

    from prefect.settings import get_default_settings\nPREFECT_API_URL.value_from(get_default_settings())\n

    Source code in prefect/settings.py
    def value_from(self, settings: \"Settings\", bypass_callback: bool = False) -> T:\n    \"\"\"\n    Get the value of a setting from a settings object\n\n    Example:\n    ```python\n    from prefect.settings import get_default_settings\n    PREFECT_API_URL.value_from(get_default_settings())\n    ```\n    \"\"\"\n    value = settings.value_of(self, bypass_callback=bypass_callback)\n\n    if not bypass_callback and self.deprecated and self.deprecated_when(value):\n        # Check if this setting is deprecated and someone is accessing the value\n        # via the old name\n        warnings.warn(self.deprecated_message, DeprecationWarning, stacklevel=3)\n\n        # If the the value is empty, return the new setting's value for compat\n        if value is None and self.deprecated_renamed_to is not None:\n            return self.deprecated_renamed_to.value_from(settings)\n\n    if not bypass_callback and self.deprecated_renamed_from is not None:\n        # Check if this setting is a rename of a deprecated setting and the\n        # deprecated setting is set and should be used for compatibility\n        deprecated_value = self.deprecated_renamed_from.value_from(\n            settings, bypass_callback=True\n        )\n        if deprecated_value is not None:\n            warnings.warn(\n                (\n                    f\"{self.deprecated_renamed_from.deprecated_message} Because\"\n                    f\" {self.deprecated_renamed_from.name!r} is set it will be used\"\n                    f\" instead of {self.name!r} for backwards compatibility.\"\n                ),\n                DeprecationWarning,\n                stacklevel=3,\n            )\n        return deprecated_value or value\n\n    return value\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings","title":"Settings","text":"

    Bases: SettingsFieldsMixin

    Contains validated Prefect settings.

    Settings should be accessed using the relevant Setting object. For example:

    from prefect.settings import PREFECT_HOME\nPREFECT_HOME.value()\n

    Accessing a setting attribute directly will ignore any value_callback mutations. This is not recommended:

    from prefect.settings import Settings\nSettings().PREFECT_PROFILES_PATH  # PosixPath('${PREFECT_HOME}/profiles.toml')\n

    Source code in prefect/settings.py
    @add_cloudpickle_reduction\nclass Settings(SettingsFieldsMixin):\n    \"\"\"\n    Contains validated Prefect settings.\n\n    Settings should be accessed using the relevant `Setting` object. For example:\n    ```python\n    from prefect.settings import PREFECT_HOME\n    PREFECT_HOME.value()\n    ```\n\n    Accessing a setting attribute directly will ignore any `value_callback` mutations.\n    This is not recommended:\n    ```python\n    from prefect.settings import Settings\n    Settings().PREFECT_PROFILES_PATH  # PosixPath('${PREFECT_HOME}/profiles.toml')\n    ```\n    \"\"\"\n\n    def value_of(self, setting: Setting[T], bypass_callback: bool = False) -> T:\n        \"\"\"\n        Retrieve a setting's value.\n        \"\"\"\n        value = getattr(self, setting.name)\n        if setting.value_callback and not bypass_callback:\n            value = setting.value_callback(self, value)\n        return value\n\n    @validator(PREFECT_LOGGING_LEVEL.name, PREFECT_LOGGING_SERVER_LEVEL.name)\n    def check_valid_log_level(cls, value):\n        if isinstance(value, str):\n            value = value.upper()\n        logging._checkLevel(value)\n        return value\n\n    @root_validator\n    def post_root_validators(cls, values):\n        \"\"\"\n        Add root validation functions for settings here.\n        \"\"\"\n        # TODO: We could probably register these dynamically but this is the simpler\n        #       approach for now. We can explore more interesting validation features\n        #       in the future.\n        values = max_log_size_smaller_than_batch_size(values)\n        values = warn_on_database_password_value_without_usage(values)\n        if not values[\"PREFECT_SILENCE_API_URL_MISCONFIGURATION\"]:\n            values = warn_on_misconfigured_api_url(values)\n        return values\n\n    def copy_with_update(\n        self,\n        updates: Mapping[Setting, Any] = None,\n        set_defaults: Mapping[Setting, Any] = None,\n        restore_defaults: Iterable[Setting] = None,\n    ) -> \"Settings\":\n        \"\"\"\n        Create a new `Settings` object with validation.\n\n        Arguments:\n            updates: A mapping of settings to new values. Existing values for the\n                given settings will be overridden.\n            set_defaults: A mapping of settings to new default values. 
Existing values for\n                the given settings will only be overridden if they were not set.\n            restore_defaults: An iterable of settings to restore to their default values.\n\n        Returns:\n            A new `Settings` object.\n        \"\"\"\n        updates = updates or {}\n        set_defaults = set_defaults or {}\n        restore_defaults = restore_defaults or set()\n        restore_defaults_names = {setting.name for setting in restore_defaults}\n\n        return self.__class__(\n            **{\n                **{setting.name: value for setting, value in set_defaults.items()},\n                **self.dict(exclude_unset=True, exclude=restore_defaults_names),\n                **{setting.name: value for setting, value in updates.items()},\n            }\n        )\n\n    def with_obfuscated_secrets(self):\n        \"\"\"\n        Returns a copy of this settings object with secret setting values obfuscated.\n        \"\"\"\n        settings = self.copy(\n            update={\n                setting.name: obfuscate(self.value_of(setting))\n                for setting in SETTING_VARIABLES.values()\n                if setting.is_secret\n                # Exclude deprecated settings with null values to avoid warnings\n                and not (setting.deprecated and self.value_of(setting) is None)\n            }\n        )\n        # Ensure that settings that have not been marked as \"set\" before are still so\n        # after we have updated their value above\n        settings.__fields_set__.intersection_update(self.__fields_set__)\n        return settings\n\n    def hash_key(self) -> str:\n        \"\"\"\n        Return a hash key for the settings object.  This is needed since some\n        settings may be unhashable.  An example is lists.\n        \"\"\"\n        env_variables = self.to_environment_variables()\n        return str(hash(tuple((key, value) for key, value in env_variables.items())))\n\n    def to_environment_variables(\n        self, include: Iterable[Setting] = None, exclude_unset: bool = False\n    ) -> Dict[str, str]:\n        \"\"\"\n        Convert the settings object to environment variables.\n\n        Note that setting values will not be run through their `value_callback` allowing\n        dynamic resolution to occur when loaded from the returned environment.\n\n        Args:\n            include_keys: An iterable of settings to include in the return value.\n                If not set, all settings are used.\n            exclude_unset: Only include settings that have been set (i.e. the value is\n                not from the default). 
If set, unset keys will be dropped even if they\n                are set in `include_keys`.\n\n        Returns:\n            A dictionary of settings with values cast to strings\n        \"\"\"\n        include = set(include or SETTING_VARIABLES.values())\n\n        if exclude_unset:\n            set_keys = {\n                # Collect all of the \"set\" keys and cast to `Setting` objects\n                SETTING_VARIABLES[key]\n                for key in self.dict(exclude_unset=True)\n            }\n            include.intersection_update(set_keys)\n\n        # Validate the types of items in `include` to prevent exclusion bugs\n        for key in include:\n            if not isinstance(key, Setting):\n                raise TypeError(\n                    \"Invalid type {type(key).__name__!r} for key in `include`.\"\n                )\n\n        env = {\n            # Use `getattr` instead of `value_of` to avoid value callback resolution\n            key: getattr(self, key)\n            for key, setting in SETTING_VARIABLES.items()\n            if setting in include\n        }\n\n        # Cast to strings and drop null values\n        return {key: str(value) for key, value in env.items() if value is not None}\n\n    class Config:\n        frozen = True\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.value_of","title":"value_of","text":"

    Retrieve a setting's value.

    Source code in prefect/settings.py
    def value_of(self, setting: Setting[T], bypass_callback: bool = False) -> T:\n    \"\"\"\n    Retrieve a setting's value.\n    \"\"\"\n    value = getattr(self, setting.name)\n    if setting.value_callback and not bypass_callback:\n        value = setting.value_callback(self, value)\n    return value\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.post_root_validators","title":"post_root_validators","text":"

    Add root validation functions for settings here.

    Source code in prefect/settings.py
    @root_validator\ndef post_root_validators(cls, values):\n    \"\"\"\n    Add root validation functions for settings here.\n    \"\"\"\n    # TODO: We could probably register these dynamically but this is the simpler\n    #       approach for now. We can explore more interesting validation features\n    #       in the future.\n    values = max_log_size_smaller_than_batch_size(values)\n    values = warn_on_database_password_value_without_usage(values)\n    if not values[\"PREFECT_SILENCE_API_URL_MISCONFIGURATION\"]:\n        values = warn_on_misconfigured_api_url(values)\n    return values\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.with_obfuscated_secrets","title":"with_obfuscated_secrets","text":"

    Returns a copy of this settings object with secret setting values obfuscated.

    Source code in prefect/settings.py
    def with_obfuscated_secrets(self):\n    \"\"\"\n    Returns a copy of this settings object with secret setting values obfuscated.\n    \"\"\"\n    settings = self.copy(\n        update={\n            setting.name: obfuscate(self.value_of(setting))\n            for setting in SETTING_VARIABLES.values()\n            if setting.is_secret\n            # Exclude deprecated settings with null values to avoid warnings\n            and not (setting.deprecated and self.value_of(setting) is None)\n        }\n    )\n    # Ensure that settings that have not been marked as \"set\" before are still so\n    # after we have updated their value above\n    settings.__fields_set__.intersection_update(self.__fields_set__)\n    return settings\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.hash_key","title":"hash_key","text":"

    Return a hash key for the settings object. This is needed since some settings may be unhashable. An example is lists.

    Source code in prefect/settings.py
    def hash_key(self) -> str:\n    \"\"\"\n    Return a hash key for the settings object.  This is needed since some\n    settings may be unhashable.  An example is lists.\n    \"\"\"\n    env_variables = self.to_environment_variables()\n    return str(hash(tuple((key, value) for key, value in env_variables.items())))\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.to_environment_variables","title":"to_environment_variables","text":"

    Convert the settings object to environment variables.

    Note that setting values will not be run through their value_callback allowing dynamic resolution to occur when loaded from the returned environment.

    Parameters:

    include (Iterable[Setting], optional): An iterable of settings to include in the return value. If not set, all settings are used.

    exclude_unset (bool, default False): Only include settings that have been set (i.e. the value is not from the default). If set, unset keys will be dropped even if they are listed in include.

    Returns:

    Dict[str, str]: A dictionary of settings with values cast to strings.

    Source code in prefect/settings.py
    def to_environment_variables(\n    self, include: Iterable[Setting] = None, exclude_unset: bool = False\n) -> Dict[str, str]:\n    \"\"\"\n    Convert the settings object to environment variables.\n\n    Note that setting values will not be run through their `value_callback` allowing\n    dynamic resolution to occur when loaded from the returned environment.\n\n    Args:\n        include_keys: An iterable of settings to include in the return value.\n            If not set, all settings are used.\n        exclude_unset: Only include settings that have been set (i.e. the value is\n            not from the default). If set, unset keys will be dropped even if they\n            are set in `include_keys`.\n\n    Returns:\n        A dictionary of settings with values cast to strings\n    \"\"\"\n    include = set(include or SETTING_VARIABLES.values())\n\n    if exclude_unset:\n        set_keys = {\n            # Collect all of the \"set\" keys and cast to `Setting` objects\n            SETTING_VARIABLES[key]\n            for key in self.dict(exclude_unset=True)\n        }\n        include.intersection_update(set_keys)\n\n    # Validate the types of items in `include` to prevent exclusion bugs\n    for key in include:\n        if not isinstance(key, Setting):\n            raise TypeError(\n                \"Invalid type {type(key).__name__!r} for key in `include`.\"\n            )\n\n    env = {\n        # Use `getattr` instead of `value_of` to avoid value callback resolution\n        key: getattr(self, key)\n        for key, setting in SETTING_VARIABLES.items()\n        if setting in include\n    }\n\n    # Cast to strings and drop null values\n    return {key: str(value) for key, value in env.items() if value is not None}\n
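    A minimal usage sketch, assuming the current settings object is obtained with get_current_settings:

    from prefect.settings import PREFECT_API_URL, get_current_settings

    settings = get_current_settings()

    # Export only explicitly-set values, restricted to a single setting.
    env = settings.to_environment_variables(include=[PREFECT_API_URL], exclude_unset=True)
    # -> {} if PREFECT_API_URL was never set, otherwise {"PREFECT_API_URL": "<url>"}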
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Profile","title":"Profile","text":"

    Bases: BaseModel

    A user profile containing settings.

    Source code in prefect/settings.py
    class Profile(BaseModel):\n    \"\"\"\n    A user profile containing settings.\n    \"\"\"\n\n    name: str\n    settings: Dict[Setting, Any] = Field(default_factory=dict)\n    source: Optional[Path]\n\n    @validator(\"settings\", pre=True)\n    def map_names_to_settings(cls, value):\n        return validate_settings(value)\n\n    def validate_settings(self) -> None:\n        \"\"\"\n        Validate the settings contained in this profile.\n\n        Raises:\n            pydantic.ValidationError: When settings do not have valid values.\n        \"\"\"\n        # Create a new `Settings` instance with the settings from this profile relying\n        # on Pydantic validation to raise an error.\n        # We do not return the `Settings` object because this is not the recommended\n        # path for constructing settings with a profile. See `use_profile` instead.\n        Settings(**{setting.name: value for setting, value in self.settings.items()})\n\n    def convert_deprecated_renamed_settings(self) -> List[Tuple[Setting, Setting]]:\n        \"\"\"\n        Update settings in place to replace deprecated settings with new settings when\n        renamed.\n\n        Returns a list of tuples with the old and new setting.\n        \"\"\"\n        changed = []\n        for setting in tuple(self.settings):\n            if (\n                setting.deprecated\n                and setting.deprecated_renamed_to\n                and setting.deprecated_renamed_to not in self.settings\n            ):\n                self.settings[setting.deprecated_renamed_to] = self.settings.pop(\n                    setting\n                )\n                changed.append((setting, setting.deprecated_renamed_to))\n        return changed\n\n    class Config:\n        arbitrary_types_allowed = True\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Profile.validate_settings","title":"validate_settings","text":"

    Validate the settings contained in this profile.

    Raises:

    Type Description ValidationError

    When settings do not have valid values.

    Source code in prefect/settings.py
    def validate_settings(self) -> None:\n    \"\"\"\n    Validate the settings contained in this profile.\n\n    Raises:\n        pydantic.ValidationError: When settings do not have valid values.\n    \"\"\"\n    # Create a new `Settings` instance with the settings from this profile relying\n    # on Pydantic validation to raise an error.\n    # We do not return the `Settings` object because this is not the recommended\n    # path for constructing settings with a profile. See `use_profile` instead.\n    Settings(**{setting.name: value for setting, value in self.settings.items()})\n
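    A minimal sketch; the profile name 'example' is hypothetical:

    >>> from prefect.settings import PREFECT_API_URL, Profile\n>>> profile = Profile(name=\"example\", settings={PREFECT_API_URL: \"http://127.0.0.1:4200/api\"})\n>>> profile.validate_settings()  # raises pydantic.ValidationError if any value is invalid\n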
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Profile.convert_deprecated_renamed_settings","title":"convert_deprecated_renamed_settings","text":"

    Update settings in place to replace deprecated settings with new settings when renamed.

    Returns a list of tuples with the old and new setting.

    Source code in prefect/settings.py
    def convert_deprecated_renamed_settings(self) -> List[Tuple[Setting, Setting]]:\n    \"\"\"\n    Update settings in place to replace deprecated settings with new settings when\n    renamed.\n\n    Returns a list of tuples with the old and new setting.\n    \"\"\"\n    changed = []\n    for setting in tuple(self.settings):\n        if (\n            setting.deprecated\n            and setting.deprecated_renamed_to\n            and setting.deprecated_renamed_to not in self.settings\n        ):\n            self.settings[setting.deprecated_renamed_to] = self.settings.pop(\n                setting\n            )\n            changed.append((setting, setting.deprecated_renamed_to))\n    return changed\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection","title":"ProfilesCollection","text":"

    \" A utility class for working with a collection of profiles.

    Profiles in the collection must have unique names.

    The collection may store the name of the active profile.

    Source code in prefect/settings.py
    class ProfilesCollection:\n    \"\"\" \"\n    A utility class for working with a collection of profiles.\n\n    Profiles in the collection must have unique names.\n\n    The collection may store the name of the active profile.\n    \"\"\"\n\n    def __init__(\n        self, profiles: Iterable[Profile], active: Optional[str] = None\n    ) -> None:\n        self.profiles_by_name = {profile.name: profile for profile in profiles}\n        self.active_name = active\n\n    @property\n    def names(self) -> Set[str]:\n        \"\"\"\n        Return a set of profile names in this collection.\n        \"\"\"\n        return set(self.profiles_by_name.keys())\n\n    @property\n    def active_profile(self) -> Optional[Profile]:\n        \"\"\"\n        Retrieve the active profile in this collection.\n        \"\"\"\n        if self.active_name is None:\n            return None\n        return self[self.active_name]\n\n    def set_active(self, name: Optional[str], check: bool = True):\n        \"\"\"\n        Set the active profile name in the collection.\n\n        A null value may be passed to indicate that this collection does not determine\n        the active profile.\n        \"\"\"\n        if check and name is not None and name not in self.names:\n            raise ValueError(f\"Unknown profile name {name!r}.\")\n        self.active_name = name\n\n    def update_profile(\n        self, name: str, settings: Mapping[Union[Dict, str], Any], source: Path = None\n    ) -> Profile:\n        \"\"\"\n        Add a profile to the collection or update the existing on if the name is already\n        present in this collection.\n\n        If updating an existing profile, the settings will be merged. Settings can\n        be dropped from the existing profile by setting them to `None` in the new\n        profile.\n\n        Returns the new profile object.\n        \"\"\"\n        existing = self.profiles_by_name.get(name)\n\n        # Convert the input to a `Profile` to cast settings to the correct type\n        profile = Profile(name=name, settings=settings, source=source)\n\n        if existing:\n            new_settings = {**existing.settings, **profile.settings}\n\n            # Drop null keys to restore to default\n            for key, value in tuple(new_settings.items()):\n                if value is None:\n                    new_settings.pop(key)\n\n            new_profile = Profile(\n                name=profile.name,\n                settings=new_settings,\n                source=source or profile.source,\n            )\n        else:\n            new_profile = profile\n\n        self.profiles_by_name[new_profile.name] = new_profile\n\n        return new_profile\n\n    def add_profile(self, profile: Profile) -> None:\n        \"\"\"\n        Add a profile to the collection.\n\n        If the profile name already exists, an exception will be raised.\n        \"\"\"\n        if profile.name in self.profiles_by_name:\n            raise ValueError(\n                f\"Profile name {profile.name!r} already exists in collection.\"\n            )\n\n        self.profiles_by_name[profile.name] = profile\n\n    def remove_profile(self, name: str) -> None:\n        \"\"\"\n        Remove a profile from the collection.\n        \"\"\"\n        self.profiles_by_name.pop(name)\n\n    def without_profile_source(self, path: Optional[Path]) -> \"ProfilesCollection\":\n        \"\"\"\n        Remove profiles that were loaded from a given path.\n\n        Returns a new collection.\n        \"\"\"\n        return 
ProfilesCollection(\n            [\n                profile\n                for profile in self.profiles_by_name.values()\n                if profile.source != path\n            ],\n            active=self.active_name,\n        )\n\n    def to_dict(self):\n        \"\"\"\n        Convert to a dictionary suitable for writing to disk.\n        \"\"\"\n        return {\n            \"active\": self.active_name,\n            \"profiles\": {\n                profile.name: {\n                    setting.name: value for setting, value in profile.settings.items()\n                }\n                for profile in self.profiles_by_name.values()\n            },\n        }\n\n    def __getitem__(self, name: str) -> Profile:\n        return self.profiles_by_name[name]\n\n    def __iter__(self):\n        return self.profiles_by_name.__iter__()\n\n    def items(self):\n        return self.profiles_by_name.items()\n\n    def __eq__(self, __o: object) -> bool:\n        if not isinstance(__o, ProfilesCollection):\n            return False\n\n        return (\n            self.profiles_by_name == __o.profiles_by_name\n            and self.active_name == __o.active_name\n        )\n\n    def __repr__(self) -> str:\n        return (\n            f\"ProfilesCollection(profiles={list(self.profiles_by_name.values())!r},\"\n            f\" active={self.active_name!r})>\"\n        )\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.names","title":"names: Set[str] property","text":"

    Return a set of profile names in this collection.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.active_profile","title":"active_profile: Optional[Profile] property","text":"

    Retrieve the active profile in this collection.

    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.set_active","title":"set_active","text":"

    Set the active profile name in the collection.

    A null value may be passed to indicate that this collection does not determine the active profile.

    Source code in prefect/settings.py
    def set_active(self, name: Optional[str], check: bool = True):\n    \"\"\"\n    Set the active profile name in the collection.\n\n    A null value may be passed to indicate that this collection does not determine\n    the active profile.\n    \"\"\"\n    if check and name is not None and name not in self.names:\n        raise ValueError(f\"Unknown profile name {name!r}.\")\n    self.active_name = name\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.update_profile","title":"update_profile","text":"

    Add a profile to the collection or update the existing one if the name is already present in this collection.

    If updating an existing profile, the settings will be merged. Settings can be dropped from the existing profile by setting them to None in the new profile.

    Returns the new profile object.

    Source code in prefect/settings.py
    def update_profile(\n    self, name: str, settings: Mapping[Union[Dict, str], Any], source: Path = None\n) -> Profile:\n    \"\"\"\n    Add a profile to the collection or update the existing on if the name is already\n    present in this collection.\n\n    If updating an existing profile, the settings will be merged. Settings can\n    be dropped from the existing profile by setting them to `None` in the new\n    profile.\n\n    Returns the new profile object.\n    \"\"\"\n    existing = self.profiles_by_name.get(name)\n\n    # Convert the input to a `Profile` to cast settings to the correct type\n    profile = Profile(name=name, settings=settings, source=source)\n\n    if existing:\n        new_settings = {**existing.settings, **profile.settings}\n\n        # Drop null keys to restore to default\n        for key, value in tuple(new_settings.items()):\n            if value is None:\n                new_settings.pop(key)\n\n        new_profile = Profile(\n            name=profile.name,\n            settings=new_settings,\n            source=source or profile.source,\n        )\n    else:\n        new_profile = profile\n\n    self.profiles_by_name[new_profile.name] = new_profile\n\n    return new_profile\n
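    A minimal sketch; the profile name 'staging' is hypothetical:

    >>> from prefect.settings import PREFECT_API_URL, ProfilesCollection\n>>> profiles = ProfilesCollection(profiles=[], active=None)\n>>> profiles.update_profile(\"staging\", {PREFECT_API_URL: \"http://127.0.0.1:4200/api\"})  # doctest: +SKIP\n>>> # Setting a key to None drops it from the merged profile\n>>> profiles.update_profile(\"staging\", {PREFECT_API_URL: None})  # doctest: +SKIP\n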
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.add_profile","title":"add_profile","text":"

    Add a profile to the collection.

    If the profile name already exists, an exception will be raised.

    Source code in prefect/settings.py
    def add_profile(self, profile: Profile) -> None:\n    \"\"\"\n    Add a profile to the collection.\n\n    If the profile name already exists, an exception will be raised.\n    \"\"\"\n    if profile.name in self.profiles_by_name:\n        raise ValueError(\n            f\"Profile name {profile.name!r} already exists in collection.\"\n        )\n\n    self.profiles_by_name[profile.name] = profile\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.remove_profile","title":"remove_profile","text":"

    Remove a profile from the collection.

    Source code in prefect/settings.py
    def remove_profile(self, name: str) -> None:\n    \"\"\"\n    Remove a profile from the collection.\n    \"\"\"\n    self.profiles_by_name.pop(name)\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.without_profile_source","title":"without_profile_source","text":"

    Remove profiles that were loaded from a given path.

    Returns a new collection.

    Source code in prefect/settings.py
    def without_profile_source(self, path: Optional[Path]) -> \"ProfilesCollection\":\n    \"\"\"\n    Remove profiles that were loaded from a given path.\n\n    Returns a new collection.\n    \"\"\"\n    return ProfilesCollection(\n        [\n            profile\n            for profile in self.profiles_by_name.values()\n            if profile.source != path\n        ],\n        active=self.active_name,\n    )\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_extra_loggers","title":"get_extra_loggers","text":"

    value_callback for PREFECT_LOGGING_EXTRA_LOGGERS that parses the CSV string into a list and trims whitespace from logger names.

    Source code in prefect/settings.py
    def get_extra_loggers(_: \"Settings\", value: str) -> List[str]:\n    \"\"\"\n    `value_callback` for `PREFECT_LOGGING_EXTRA_LOGGERS`that parses the CSV string into a\n    list and trims whitespace from logger names.\n    \"\"\"\n    return [name.strip() for name in value.split(\",\")] if value else []\n
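    For example (a sketch; the settings argument is unused by this callback, so None is passed here):

    >>> get_extra_loggers(None, \"dask, distributed ,urllib3\")\n['dask', 'distributed', 'urllib3']\n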
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.debug_mode_log_level","title":"debug_mode_log_level","text":"

    value_callback for PREFECT_LOGGING_LEVEL that overrides the log level to DEBUG when debug mode is enabled.

    Source code in prefect/settings.py
    def debug_mode_log_level(settings, value):\n    \"\"\"\n    `value_callback` for `PREFECT_LOGGING_LEVEL` that overrides the log level to DEBUG\n    when debug mode is enabled.\n    \"\"\"\n    if PREFECT_DEBUG_MODE.value_from(settings):\n        return \"DEBUG\"\n    else:\n        return value\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.only_return_value_in_test_mode","title":"only_return_value_in_test_mode","text":"

    value_callback for PREFECT_TEST_SETTING that only allows access during test mode.

    Source code in prefect/settings.py
    def only_return_value_in_test_mode(settings, value):\n    \"\"\"\n    `value_callback` for `PREFECT_TEST_SETTING` that only allows access during test mode\n    \"\"\"\n    if PREFECT_TEST_MODE.value_from(settings):\n        return value\n    else:\n        return None\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.default_ui_api_url","title":"default_ui_api_url","text":"

    value_callback for PREFECT_UI_API_URL that sets the default value to the relative path '/api'; otherwise, it constructs an API URL from the API settings.

    Source code in prefect/settings.py
    def default_ui_api_url(settings, value):\n    \"\"\"\n    `value_callback` for `PREFECT_UI_API_URL` that sets the default value to\n    relative path '/api', otherwise it constructs an API URL from the API settings.\n    \"\"\"\n    if value is None:\n        # Set a default value\n        value = \"/api\"\n\n    return template_with_settings(\n        PREFECT_SERVER_API_HOST, PREFECT_SERVER_API_PORT, PREFECT_API_URL\n    )(settings, value)\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.status_codes_as_integers_in_range","title":"status_codes_as_integers_in_range","text":"

    value_callback for PREFECT_CLIENT_RETRY_EXTRA_CODES that ensures status codes are integers in the range 100-599.

    Source code in prefect/settings.py
    def status_codes_as_integers_in_range(_, value):\n    \"\"\"\n    `value_callback` for `PREFECT_CLIENT_RETRY_EXTRA_CODES` that ensures status codes\n    are integers in the range 100-599.\n    \"\"\"\n    if value == \"\":\n        return set()\n\n    values = {v.strip() for v in value.split(\",\")}\n\n    if any(not v.isdigit() or int(v) < 100 or int(v) > 599 for v in values):\n        raise ValueError(\n            \"PREFECT_CLIENT_RETRY_EXTRA_CODES must be a comma separated list of \"\n            \"integers between 100 and 599.\"\n        )\n\n    values = {int(v) for v in values}\n    return values\n
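    For example (a sketch; the settings argument is unused by this callback):

    >>> sorted(status_codes_as_integers_in_range(None, \"502, 503\"))\n[502, 503]\n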
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.template_with_settings","title":"template_with_settings","text":"

    Returns a value_callback that will template the given settings into the runtime value for the setting.

    Source code in prefect/settings.py
    def template_with_settings(*upstream_settings: Setting) -> Callable[[\"Settings\", T], T]:\n    \"\"\"\n    Returns a `value_callback` that will template the given settings into the runtime\n    value for the setting.\n    \"\"\"\n\n    def templater(settings, value):\n        if value is None:\n            return value  # Do not attempt to template a null string\n\n        original_type = type(value)\n        template_values = {\n            setting.name: setting.value_from(settings) for setting in upstream_settings\n        }\n        template = string.Template(str(value))\n        return original_type(template.substitute(template_values))\n\n    return templater\n
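    A minimal sketch of how templating might be used; the resolved URL shown is illustrative and assumes the default server host and port:

    >>> from prefect.settings import PREFECT_SERVER_API_HOST, PREFECT_SERVER_API_PORT, get_current_settings, template_with_settings\n>>> callback = template_with_settings(PREFECT_SERVER_API_HOST, PREFECT_SERVER_API_PORT)\n>>> callback(get_current_settings(), \"http://${PREFECT_SERVER_API_HOST}:${PREFECT_SERVER_API_PORT}/api\")  # doctest: +SKIP\n'http://127.0.0.1:4200/api'\n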
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.max_log_size_smaller_than_batch_size","title":"max_log_size_smaller_than_batch_size","text":"

    Validator for settings asserting that the batch size and max log size are compatible.

    Source code in prefect/settings.py
    def max_log_size_smaller_than_batch_size(values):\n    \"\"\"\n    Validator for settings asserting the batch size and match log size are compatible\n    \"\"\"\n    if (\n        values[\"PREFECT_LOGGING_TO_API_BATCH_SIZE\"]\n        < values[\"PREFECT_LOGGING_TO_API_MAX_LOG_SIZE\"]\n    ):\n        raise ValueError(\n            \"`PREFECT_LOGGING_TO_API_MAX_LOG_SIZE` cannot be larger than\"\n            \" `PREFECT_LOGGING_TO_API_BATCH_SIZE`\"\n        )\n    return values\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.warn_on_database_password_value_without_usage","title":"warn_on_database_password_value_without_usage","text":"

    Validator for settings warning if the database password is set but not used.

    Source code in prefect/settings.py
    def warn_on_database_password_value_without_usage(values):\n    \"\"\"\n    Validator for settings warning if the database password is set but not used.\n    \"\"\"\n    value = values[\"PREFECT_API_DATABASE_PASSWORD\"]\n    if (\n        value\n        and not value.startswith(OBFUSCATED_PREFIX)\n        and (\n            \"PREFECT_API_DATABASE_PASSWORD\"\n            not in values[\"PREFECT_API_DATABASE_CONNECTION_URL\"]\n        )\n    ):\n        warnings.warn(\n            \"PREFECT_API_DATABASE_PASSWORD is set but not included in the \"\n            \"PREFECT_API_DATABASE_CONNECTION_URL. \"\n            \"The provided password will be ignored.\"\n        )\n    return values\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.warn_on_misconfigured_api_url","title":"warn_on_misconfigured_api_url","text":"

    Validator for settings warning if the API URL is misconfigured.

    Source code in prefect/settings.py
    def warn_on_misconfigured_api_url(values):\n    \"\"\"\n    Validator for settings warning if the API URL is misconfigured.\n    \"\"\"\n    api_url = values[\"PREFECT_API_URL\"]\n    if api_url is not None:\n        misconfigured_mappings = {\n            \"app.prefect.cloud\": (\n                \"`PREFECT_API_URL` points to `app.prefect.cloud`. Did you\"\n                \" mean `api.prefect.cloud`?\"\n            ),\n            \"account/\": (\n                \"`PREFECT_API_URL` uses `/account/` but should use `/accounts/`.\"\n            ),\n            \"workspace/\": (\n                \"`PREFECT_API_URL` uses `/workspace/` but should use `/workspaces/`.\"\n            ),\n        }\n        warnings_list = []\n\n        for misconfig, warning in misconfigured_mappings.items():\n            if misconfig in api_url:\n                warnings_list.append(warning)\n\n        parsed_url = urlparse(api_url)\n        if parsed_url.path and not parsed_url.path.startswith(\"/api\"):\n            warnings_list.append(\n                \"`PREFECT_API_URL` should have `/api` after the base URL.\"\n            )\n\n        if warnings_list:\n            example = 'e.g. PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"'\n            warnings_list.append(example)\n\n            warnings.warn(\"\\n\".join(warnings_list), stacklevel=2)\n\n    return values\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_current_settings","title":"get_current_settings","text":"

    Returns a settings object populated with values from the current settings context or, if no settings context is active, the environment.

    Source code in prefect/settings.py
    def get_current_settings() -> Settings:\n    \"\"\"\n    Returns a settings object populated with values from the current settings context\n    or, if no settings context is active, the environment.\n    \"\"\"\n    from prefect.context import SettingsContext\n\n    settings_context = SettingsContext.get()\n    if settings_context is not None:\n        return settings_context.settings\n\n    return get_settings_from_env()\n
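    A minimal sketch; the value shown is illustrative:

    >>> from prefect.settings import PREFECT_API_URL, get_current_settings\n>>> get_current_settings().value_of(PREFECT_API_URL)  # doctest: +SKIP\n'http://127.0.0.1:4200/api'\n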
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_settings_from_env","title":"get_settings_from_env","text":"

    Returns a settings object populated with default values and overrides from environment variables, ignoring any values in profiles.

    Calls with the same environment return a cached object instead of reconstructing to avoid validation overhead.

    Source code in prefect/settings.py
    def get_settings_from_env() -> Settings:\n    \"\"\"\n    Returns a settings object populated with default values and overrides from\n    environment variables, ignoring any values in profiles.\n\n    Calls with the same environment return a cached object instead of reconstructing\n    to avoid validation overhead.\n    \"\"\"\n    # Since os.environ is a Dict[str, str] we can safely hash it by contents, but we\n    # must be careful to avoid hashing a generator instead of a tuple\n    cache_key = hash(tuple((key, value) for key, value in os.environ.items()))\n\n    if cache_key not in _FROM_ENV_CACHE:\n        _FROM_ENV_CACHE[cache_key] = Settings()\n\n    return _FROM_ENV_CACHE[cache_key]\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_default_settings","title":"get_default_settings","text":"

    Returns a settings object populated with default values, ignoring any overrides from environment variables or profiles.

    This is cached since the defaults should not change during the lifetime of the module.

    Source code in prefect/settings.py
    def get_default_settings() -> Settings:\n    \"\"\"\n    Returns a settings object populated with default values, ignoring any overrides\n    from environment variables or profiles.\n\n    This is cached since the defaults should not change during the lifetime of the\n    module.\n    \"\"\"\n    global _DEFAULTS_CACHE\n\n    if not _DEFAULTS_CACHE:\n        old = os.environ\n        try:\n            os.environ = {}\n            settings = get_settings_from_env()\n        finally:\n            os.environ = old\n\n        _DEFAULTS_CACHE = settings\n\n    return _DEFAULTS_CACHE\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.temporary_settings","title":"temporary_settings","text":"

    Temporarily override the current settings by entering a new profile.

    See Settings.copy_with_update for details on different argument behavior.

    Examples:

    >>> from prefect.settings import PREFECT_API_URL\n>>>\n>>> with temporary_settings(updates={PREFECT_API_URL: \"foo\"}):\n>>>    assert PREFECT_API_URL.value() == \"foo\"\n>>>\n>>>    with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"}):\n>>>         assert PREFECT_API_URL.value() == \"foo\"\n>>>\n>>>    with temporary_settings(restore_defaults={PREFECT_API_URL}):\n>>>         assert PREFECT_API_URL.value() is None\n>>>\n>>>         with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"})\n>>>             assert PREFECT_API_URL.value() == \"bar\"\n>>> assert PREFECT_API_URL.value() is None\n
    Source code in prefect/settings.py
    @contextmanager\ndef temporary_settings(\n    updates: Optional[Mapping[Setting[T], Any]] = None,\n    set_defaults: Optional[Mapping[Setting[T], Any]] = None,\n    restore_defaults: Optional[Iterable[Setting[T]]] = None,\n) -> Generator[Settings, None, None]:\n    \"\"\"\n    Temporarily override the current settings by entering a new profile.\n\n    See `Settings.copy_with_update` for details on different argument behavior.\n\n    Examples:\n        >>> from prefect.settings import PREFECT_API_URL\n        >>>\n        >>> with temporary_settings(updates={PREFECT_API_URL: \"foo\"}):\n        >>>    assert PREFECT_API_URL.value() == \"foo\"\n        >>>\n        >>>    with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"}):\n        >>>         assert PREFECT_API_URL.value() == \"foo\"\n        >>>\n        >>>    with temporary_settings(restore_defaults={PREFECT_API_URL}):\n        >>>         assert PREFECT_API_URL.value() is None\n        >>>\n        >>>         with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"})\n        >>>             assert PREFECT_API_URL.value() == \"bar\"\n        >>> assert PREFECT_API_URL.value() is None\n    \"\"\"\n    import prefect.context\n\n    context = prefect.context.get_settings_context()\n\n    new_settings = context.settings.copy_with_update(\n        updates=updates, set_defaults=set_defaults, restore_defaults=restore_defaults\n    )\n\n    with prefect.context.SettingsContext(\n        profile=context.profile, settings=new_settings\n    ):\n        yield new_settings\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.load_profiles","title":"load_profiles","text":"

    Load all profiles from the default and current profile paths.

    Source code in prefect/settings.py
    def load_profiles() -> ProfilesCollection:\n    \"\"\"\n    Load all profiles from the default and current profile paths.\n    \"\"\"\n    profiles = _read_profiles_from(DEFAULT_PROFILES_PATH)\n\n    user_profiles_path = PREFECT_PROFILES_PATH.value()\n    if user_profiles_path.exists():\n        user_profiles = _read_profiles_from(user_profiles_path)\n\n        # Merge all of the user profiles with the defaults\n        for name in user_profiles:\n            profiles.update_profile(\n                name,\n                settings=user_profiles[name].settings,\n                source=user_profiles[name].source,\n            )\n\n        if user_profiles.active_name:\n            profiles.set_active(user_profiles.active_name, check=False)\n\n    return profiles\n
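    A minimal sketch; the names returned depend on your local profiles file:

    >>> from prefect.settings import load_profiles\n>>> profiles = load_profiles()\n>>> sorted(profiles.names)  # doctest: +SKIP\n['default']\n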
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.load_current_profile","title":"load_current_profile","text":"

    Load the current profile from the default and current profile paths.

    This will not include settings from the current settings context. Only settings that have been persisted to the profiles file will be included.

    Source code in prefect/settings.py
    def load_current_profile():\n    \"\"\"\n    Load the current profile from the default and current profile paths.\n\n    This will _not_ include settings from the current settings context. Only settings\n    that have been persisted to the profiles file will be saved.\n    \"\"\"\n    from prefect.context import SettingsContext\n\n    profiles = load_profiles()\n    context = SettingsContext.get()\n\n    if context:\n        profiles.set_active(context.profile.name)\n\n    return profiles.active_profile\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.save_profiles","title":"save_profiles","text":"

    Writes all non-default profiles to the current profiles path.

    Source code in prefect/settings.py
    def save_profiles(profiles: ProfilesCollection) -> None:\n    \"\"\"\n    Writes all non-default profiles to the current profiles path.\n    \"\"\"\n    profiles_path = PREFECT_PROFILES_PATH.value()\n    profiles = profiles.without_profile_source(DEFAULT_PROFILES_PATH)\n    return _write_profiles_to(profiles_path, profiles)\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.load_profile","title":"load_profile","text":"

    Load a single profile by name.

    Source code in prefect/settings.py
    def load_profile(name: str) -> Profile:\n    \"\"\"\n    Load a single profile by name.\n    \"\"\"\n    profiles = load_profiles()\n    try:\n        return profiles[name]\n    except KeyError:\n        raise ValueError(f\"Profile {name!r} not found.\")\n
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.update_current_profile","title":"update_current_profile","text":"

    Update the persisted data for the profile currently in-use.

    If the profile does not exist in the profiles file, it will be created.

    Given settings will be merged with the existing settings as described in ProfilesCollection.update_profile.

    Returns:

    Type Description Profile

    The new profile.

    Source code in prefect/settings.py
    def update_current_profile(settings: Dict[Union[str, Setting], Any]) -> Profile:\n    \"\"\"\n    Update the persisted data for the profile currently in-use.\n\n    If the profile does not exist in the profiles file, it will be created.\n\n    Given settings will be merged with the existing settings as described in\n    `ProfilesCollection.update_profile`.\n\n    Returns:\n        The new profile.\n    \"\"\"\n    import prefect.context\n\n    current_profile = prefect.context.get_settings_context().profile\n\n    if not current_profile:\n        raise MissingProfileError(\"No profile is currently in use.\")\n\n    profiles = load_profiles()\n\n    # Ensure the current profile's settings are present\n    profiles.update_profile(current_profile.name, current_profile.settings)\n    # Then merge the new settings in\n    new_profile = profiles.update_profile(current_profile.name, settings)\n\n    # Validate before saving\n    new_profile.validate_settings()\n\n    save_profiles(profiles)\n\n    return profiles[current_profile.name]\n
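    A minimal sketch; note that this persists the change to your profiles file:

    >>> from prefect.settings import PREFECT_LOGGING_LEVEL, update_current_profile\n>>> update_current_profile({PREFECT_LOGGING_LEVEL: \"DEBUG\"})  # doctest: +SKIP\n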
    ","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/software/","title":"prefect.software","text":"","tags":["Python API","software"]},{"location":"api-ref/prefect/software/#prefect.software","title":"prefect.software","text":"","tags":["Python API","software"]},{"location":"api-ref/prefect/states/","title":"prefect.states","text":"","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states","title":"prefect.states","text":"","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.AwaitingRetry","title":"AwaitingRetry","text":"

    Convenience function for creating AwaitingRetry states.

    Returns:

    Name Type Description State State[R]

    an AwaitingRetry state

    Source code in prefect/states.py
    def AwaitingRetry(\n    cls: Type[State[R]] = State,\n    scheduled_time: Optional[datetime.datetime] = None,\n    **kwargs: Any,\n) -> State[R]:\n    \"\"\"Convenience function for creating `AwaitingRetry` states.\n\n    Returns:\n        State: a AwaitingRetry state\n    \"\"\"\n    return Scheduled(\n        cls=cls, scheduled_time=scheduled_time, name=\"AwaitingRetry\", **kwargs\n    )\n
    ","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Cancelled","title":"Cancelled","text":"

    Convenience function for creating Cancelled states.

    Returns:

    Name Type Description State State[R]

    a Cancelled state

    Source code in prefect/states.py
    def Cancelled(cls: Type[State[R]] = State, **kwargs: Any) -> State[R]:\n    \"\"\"Convenience function for creating `Cancelled` states.\n\n    Returns:\n        State: a Cancelled state\n    \"\"\"\n    return cls(type=StateType.CANCELLED, **kwargs)\n
    ","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Cancelling","title":"Cancelling","text":"

    Convenience function for creating Cancelling states.

    Returns:

    Name Type Description State State[R]

    a Cancelling state

    Source code in prefect/states.py
    def Cancelling(cls: Type[State[R]] = State, **kwargs: Any) -> State[R]:\n    \"\"\"Convenience function for creating `Cancelling` states.\n\n    Returns:\n        State: a Cancelling state\n    \"\"\"\n    return cls(type=StateType.CANCELLING, **kwargs)\n
    ","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Completed","title":"Completed","text":"

    Convenience function for creating Completed states.

    Returns:

    Name Type Description State State[R]

    a Completed state

    Source code in prefect/states.py
    def Completed(cls: Type[State[R]] = State, **kwargs: Any) -> State[R]:\n    \"\"\"Convenience function for creating `Completed` states.\n\n    Returns:\n        State: a Completed state\n    \"\"\"\n    return cls(type=StateType.COMPLETED, **kwargs)\n
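    A minimal sketch:

    >>> from prefect.states import Completed\n>>> state = Completed(message=\"All done\")\n>>> state.is_completed()\nTrue\n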
    ","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Crashed","title":"Crashed","text":"

    Convenience function for creating Crashed states.

    Returns:

    Name Type Description State State[R]

    a Crashed state

    Source code in prefect/states.py
    def Crashed(cls: Type[State[R]] = State, **kwargs: Any) -> State[R]:\n    \"\"\"Convenience function for creating `Crashed` states.\n\n    Returns:\n        State: a Crashed state\n    \"\"\"\n    return cls(type=StateType.CRASHED, **kwargs)\n
    ","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Failed","title":"Failed","text":"

    Convenience function for creating Failed states.

    Returns:

    Name Type Description State State[R]

    a Failed state

    Source code in prefect/states.py
    def Failed(cls: Type[State[R]] = State, **kwargs: Any) -> State[R]:\n    \"\"\"Convenience function for creating `Failed` states.\n\n    Returns:\n        State: a Failed state\n    \"\"\"\n    return cls(type=StateType.FAILED, **kwargs)\n
    ","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Late","title":"Late","text":"

    Convenience function for creating Late states.

    Returns:

    Name Type Description State State[R]

    a Late state

    Source code in prefect/states.py
    def Late(\n    cls: Type[State[R]] = State,\n    scheduled_time: Optional[datetime.datetime] = None,\n    **kwargs: Any,\n) -> State[R]:\n    \"\"\"Convenience function for creating `Late` states.\n\n    Returns:\n        State: a Late state\n    \"\"\"\n    return Scheduled(cls=cls, scheduled_time=scheduled_time, name=\"Late\", **kwargs)\n
    ","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Paused","title":"Paused","text":"

    Convenience function for creating Paused states.

    Returns:

    Name Type Description State State[R]

    a Paused state

    Source code in prefect/states.py
    def Paused(\n    cls: Type[State[R]] = State,\n    timeout_seconds: Optional[int] = None,\n    pause_expiration_time: Optional[datetime.datetime] = None,\n    reschedule: bool = False,\n    pause_key: Optional[str] = None,\n    **kwargs: Any,\n) -> State[R]:\n    \"\"\"Convenience function for creating `Paused` states.\n\n    Returns:\n        State: a Paused state\n    \"\"\"\n    state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n\n    if state_details.pause_timeout:\n        raise ValueError(\"An extra pause timeout was provided in state_details\")\n\n    if pause_expiration_time is not None and timeout_seconds is not None:\n        raise ValueError(\n            \"Cannot supply both a pause_expiration_time and timeout_seconds\"\n        )\n\n    if pause_expiration_time is None and timeout_seconds is None:\n        pass\n    else:\n        state_details.pause_timeout = pause_expiration_time or (\n            pendulum.now(\"UTC\") + pendulum.Duration(seconds=timeout_seconds)\n        )\n\n    state_details.pause_reschedule = reschedule\n    state_details.pause_key = pause_key\n\n    return cls(type=StateType.PAUSED, state_details=state_details, **kwargs)\n
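    A minimal sketch:

    >>> from prefect.states import Paused\n>>> state = Paused(timeout_seconds=600)\n>>> state.is_paused()\nTrue\n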
    ","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Pending","title":"Pending","text":"

    Convenience function for creating Pending states.

    Returns:

    Name Type Description State State[R]

    a Pending state

    Source code in prefect/states.py
    def Pending(cls: Type[State[R]] = State, **kwargs: Any) -> State[R]:\n    \"\"\"Convenience function for creating `Pending` states.\n\n    Returns:\n        State: a Pending state\n    \"\"\"\n    return cls(type=StateType.PENDING, **kwargs)\n
    ","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Retrying","title":"Retrying","text":"

    Convenience function for creating Retrying states.

    Returns:

    Name Type Description State State[R]

    a Retrying state

    Source code in prefect/states.py
    def Retrying(cls: Type[State[R]] = State, **kwargs: Any) -> State[R]:\n    \"\"\"Convenience function for creating `Retrying` states.\n\n    Returns:\n        State: a Retrying state\n    \"\"\"\n    return cls(type=StateType.RUNNING, name=\"Retrying\", **kwargs)\n
    ","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Running","title":"Running","text":"

    Convenience function for creating Running states.

    Returns:

    Name Type Description State State[R]

    a Running state

    Source code in prefect/states.py
    def Running(cls: Type[State[R]] = State, **kwargs: Any) -> State[R]:\n    \"\"\"Convenience function for creating `Running` states.\n\n    Returns:\n        State: a Running state\n    \"\"\"\n    return cls(type=StateType.RUNNING, **kwargs)\n
    ","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Scheduled","title":"Scheduled","text":"

    Convenience function for creating Scheduled states.

    Returns:

    Name Type Description State State[R]

    a Scheduled state

    Source code in prefect/states.py
    def Scheduled(\n    cls: Type[State[R]] = State,\n    scheduled_time: Optional[datetime.datetime] = None,\n    **kwargs: Any,\n) -> State[R]:\n    \"\"\"Convenience function for creating `Scheduled` states.\n\n    Returns:\n        State: a Scheduled state\n    \"\"\"\n    state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n    if scheduled_time is None:\n        scheduled_time = pendulum.now(\"UTC\")\n    elif state_details.scheduled_time:\n        raise ValueError(\"An extra scheduled_time was provided in state_details\")\n    state_details.scheduled_time = scheduled_time\n\n    return cls(type=StateType.SCHEDULED, state_details=state_details, **kwargs)\n
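    A minimal sketch:

    >>> import pendulum\n>>> from prefect.states import Scheduled\n>>> state = Scheduled(scheduled_time=pendulum.now(\"UTC\").add(minutes=5))\n>>> state.is_scheduled()\nTrue\n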
    ","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Suspended","title":"Suspended","text":"

    Convenience function for creating Suspended states.

    Returns:

    Name Type Description State

    a Suspended state

    Source code in prefect/states.py
    def Suspended(\n    cls: Type[State[R]] = State,\n    timeout_seconds: Optional[int] = None,\n    pause_expiration_time: Optional[datetime.datetime] = None,\n    pause_key: Optional[str] = None,\n    **kwargs: Any,\n):\n    \"\"\"Convenience function for creating `Suspended` states.\n\n    Returns:\n        State: a Suspended state\n    \"\"\"\n    return Paused(\n        cls=cls,\n        name=\"Suspended\",\n        reschedule=True,\n        timeout_seconds=timeout_seconds,\n        pause_expiration_time=pause_expiration_time,\n        pause_key=pause_key,\n        **kwargs,\n    )\n
    ","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.exception_to_crashed_state","title":"exception_to_crashed_state async","text":"

    Takes an exception that occurs outside of user code and converts it to a 'Crash' exception with a 'Crashed' state.

    Source code in prefect/states.py
    async def exception_to_crashed_state(\n    exc: BaseException,\n    result_factory: Optional[ResultFactory] = None,\n) -> State:\n    \"\"\"\n    Takes an exception that occurs _outside_ of user code and converts it to a\n    'Crash' exception with a 'Crashed' state.\n    \"\"\"\n    state_message = None\n\n    if isinstance(exc, anyio.get_cancelled_exc_class()):\n        state_message = \"Execution was cancelled by the runtime environment.\"\n\n    elif isinstance(exc, KeyboardInterrupt):\n        state_message = \"Execution was aborted by an interrupt signal.\"\n\n    elif isinstance(exc, TerminationSignal):\n        state_message = \"Execution was aborted by a termination signal.\"\n\n    elif isinstance(exc, SystemExit):\n        state_message = \"Execution was aborted by Python system exit call.\"\n\n    elif isinstance(exc, (httpx.TimeoutException, httpx.ConnectError)):\n        try:\n            request: httpx.Request = exc.request\n        except RuntimeError:\n            # The request property is not set\n            state_message = (\n                \"Request failed while attempting to contact the server:\"\n                f\" {format_exception(exc)}\"\n            )\n        else:\n            # TODO: We can check if this is actually our API url\n            state_message = f\"Request to {request.url} failed: {format_exception(exc)}.\"\n\n    else:\n        state_message = (\n            \"Execution was interrupted by an unexpected exception:\"\n            f\" {format_exception(exc)}\"\n        )\n\n    if result_factory:\n        data = await result_factory.create_result(exc)\n    else:\n        # Attach the exception for local usage, will not be available when retrieved\n        # from the API\n        data = exc\n\n    return Crashed(message=state_message, data=data)\n
    ","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.exception_to_failed_state","title":"exception_to_failed_state async","text":"

    Convenience function for creating Failed states from exceptions

    Source code in prefect/states.py
    async def exception_to_failed_state(\n    exc: Optional[BaseException] = None,\n    result_factory: Optional[ResultFactory] = None,\n    **kwargs,\n) -> State:\n    \"\"\"\n    Convenience function for creating `Failed` states from exceptions\n    \"\"\"\n    if not exc:\n        _, exc, _ = sys.exc_info()\n        if exc is None:\n            raise ValueError(\n                \"Exception was not passed and no active exception could be found.\"\n            )\n    else:\n        pass\n\n    if result_factory:\n        data = await result_factory.create_result(exc)\n    else:\n        # Attach the exception for local usage, will not be available when retrieved\n        # from the API\n        data = exc\n\n    existing_message = kwargs.pop(\"message\", \"\")\n    if existing_message and not existing_message.endswith(\" \"):\n        existing_message += \" \"\n\n    # TODO: Consider if we want to include traceback information, it is intentionally\n    #       excluded from messages for now\n    message = existing_message + format_exception(exc)\n\n    return Failed(data=data, message=message, **kwargs)\n
    ","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.get_state_exception","title":"get_state_exception async","text":"

    If not given a FAILED or CRASHED state, this raises a ValueError.

    If the state result is a state, its exception will be returned.

    If the state result is an iterable of states, the exception of the first failure will be returned.

    If the state result is a string, a wrapper exception will be returned with the string as the message.

    If the state result is null, a wrapper exception will be returned with the state message attached.

    If the state result is not of a known type, a TypeError will be raised.

    When a wrapper exception is returned, the type will be:

    • FailedRun if the state type is FAILED.
    • CrashedRun if the state type is CRASHED.
    • CancelledRun if the state type is CANCELLED.

    Source code in prefect/states.py
    @sync_compatible\nasync def get_state_exception(state: State) -> BaseException:\n    \"\"\"\n    If not given a FAILED or CRASHED state, this raise a value error.\n\n    If the state result is a state, its exception will be returned.\n\n    If the state result is an iterable of states, the exception of the first failure\n    will be returned.\n\n    If the state result is a string, a wrapper exception will be returned with the\n    string as the message.\n\n    If the state result is null, a wrapper exception will be returned with the state\n    message attached.\n\n    If the state result is not of a known type, a `TypeError` will be returned.\n\n    When a wrapper exception is returned, the type will be:\n        - `FailedRun` if the state type is FAILED.\n        - `CrashedRun` if the state type is CRASHED.\n        - `CancelledRun` if the state type is CANCELLED.\n    \"\"\"\n\n    if state.is_failed():\n        wrapper = FailedRun\n        default_message = \"Run failed.\"\n    elif state.is_crashed():\n        wrapper = CrashedRun\n        default_message = \"Run crashed.\"\n    elif state.is_cancelled():\n        wrapper = CancelledRun\n        default_message = \"Run cancelled.\"\n    else:\n        raise ValueError(f\"Expected failed or crashed state got {state!r}.\")\n\n    if isinstance(state.data, BaseResult):\n        result = await state.data.get()\n    elif state.data is None:\n        result = None\n    else:\n        result = state.data\n\n    if result is None:\n        return wrapper(state.message or default_message)\n\n    if isinstance(result, Exception):\n        return result\n\n    elif isinstance(result, BaseException):\n        return result\n\n    elif isinstance(result, str):\n        return wrapper(result)\n\n    elif is_state(result):\n        # Return the exception from the inner state\n        return await get_state_exception(result)\n\n    elif is_state_iterable(result):\n        # Return the first failure\n        for state in result:\n            if state.is_failed() or state.is_crashed() or state.is_cancelled():\n                return await get_state_exception(state)\n\n        raise ValueError(\n            \"Failed state result was an iterable of states but none were failed.\"\n        )\n\n    else:\n        raise TypeError(\n            f\"Unexpected result for failed state: {result!r} \u2014\u2014 \"\n            f\"{type(result).__name__} cannot be resolved into an exception\"\n        )\n
    ","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.get_state_result","title":"get_state_result","text":"

    Get the result from a state.

    See State.result()

    Source code in prefect/states.py
    def get_state_result(\n    state: State[R], raise_on_failure: bool = True, fetch: Optional[bool] = None\n) -> R:\n    \"\"\"\n    Get the result from a state.\n\n    See `State.result()`\n    \"\"\"\n\n    if fetch is None and (\n        PREFECT_ASYNC_FETCH_STATE_RESULT or not in_async_main_thread()\n    ):\n        # Fetch defaults to `True` for sync users or async users who have opted in\n        fetch = True\n\n    if not fetch:\n        if fetch is None and in_async_main_thread():\n            warnings.warn(\n                (\n                    \"State.result() was called from an async context but not awaited. \"\n                    \"This method will be updated to return a coroutine by default in \"\n                    \"the future. Pass `fetch=True` and `await` the call to get rid of \"\n                    \"this warning.\"\n                ),\n                DeprecationWarning,\n                stacklevel=2,\n            )\n        # Backwards compatibility\n        if isinstance(state.data, DataDocument):\n            return result_from_state_with_data_document(\n                state, raise_on_failure=raise_on_failure\n            )\n        else:\n            return state.data\n    else:\n        return _get_state_result(state, raise_on_failure=raise_on_failure)\n
    ","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.is_state","title":"is_state","text":"

    Check if the given object is a state instance

    Source code in prefect/states.py
    def is_state(obj: Any) -> TypeGuard[State]:\n    \"\"\"\n    Check if the given object is a state instance\n    \"\"\"\n    # We may want to narrow this to client-side state types but for now this provides\n    # backwards compatibility\n    try:\n        from prefect.server.schemas.states import State as State_\n\n        classes_ = (State, State_)\n    except ImportError:\n        classes_ = State\n\n    # return isinstance(obj, (State, State_))\n    return isinstance(obj, classes_)\n
    ","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.is_state_iterable","title":"is_state_iterable","text":"

    Check if the given object is an iterable of state types

    Supported iterables are:

    • set
    • list
    • tuple

    Other iterables will return False even if they contain states.

    Source code in prefect/states.py
    def is_state_iterable(obj: Any) -> TypeGuard[Iterable[State]]:\n    \"\"\"\n    Check if a the given object is an iterable of states types\n\n    Supported iterables are:\n    - set\n    - list\n    - tuple\n\n    Other iterables will return `False` even if they contain states.\n    \"\"\"\n    # We do not check for arbitrary iterables because this is not intended to be used\n    # for things like dictionaries, dataframes, or pydantic models\n    if (\n        not isinstance(obj, BaseAnnotation)\n        and isinstance(obj, (list, set, tuple))\n        and obj\n    ):\n        return all([is_state(o) for o in obj])\n    else:\n        return False\n
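    For example:

    >>> from prefect.states import Completed, is_state_iterable\n>>> is_state_iterable([Completed(), Completed()])\nTrue\n>>> is_state_iterable({\"flow-a\": Completed()})  # dictionaries are not supported\nFalse\n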
    ","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.raise_state_exception","title":"raise_state_exception async","text":"

    Given a FAILED or CRASHED state, raise the contained exception.

    Source code in prefect/states.py
    @sync_compatible\nasync def raise_state_exception(state: State) -> None:\n    \"\"\"\n    Given a FAILED or CRASHED state, raise the contained exception.\n    \"\"\"\n    if not (state.is_failed() or state.is_crashed() or state.is_cancelled()):\n        return None\n\n    raise await get_state_exception(state)\n
    ","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.return_value_to_state","title":"return_value_to_state async","text":"

    Given a return value from a user's function, create a State that the run should be placed in.

    • If data is returned, we create a 'COMPLETED' state with the data
    • If a single, manually created state is returned, we use that state as given (manual creation is determined by the lack of ids)
    • If an upstream state or iterable of upstream states is returned, we apply the aggregate rule

    The aggregate rule says that given multiple states we will determine the final state such that:

    • If any states are not COMPLETED the final state is FAILED
    • If all of the states are COMPLETED the final state is COMPLETED
    • The states will be placed in the final state data attribute

    Callers should resolve all futures into states before passing return values to this function.

    Source code in prefect/states.py
    async def return_value_to_state(retval: R, result_factory: ResultFactory) -> State[R]:\n    \"\"\"\n    Given a return value from a user's function, create a `State` the run should\n    be placed in.\n\n    - If data is returned, we create a 'COMPLETED' state with the data\n    - If a single, manually created state is returned, we use that state as given\n        (manual creation is determined by the lack of ids)\n    - If an upstream state or iterable of upstream states is returned, we apply the\n        aggregate rule\n\n    The aggregate rule says that given multiple states we will determine the final state\n    such that:\n\n    - If any states are not COMPLETED the final state is FAILED\n    - If all of the states are COMPLETED the final state is COMPLETED\n    - The states will be placed in the final state `data` attribute\n\n    Callers should resolve all futures into states before passing return values to this\n    function.\n    \"\"\"\n\n    if (\n        is_state(retval)\n        # Check for manual creation\n        and not retval.state_details.flow_run_id\n        and not retval.state_details.task_run_id\n    ):\n        state = retval\n\n        # Do not modify states with data documents attached; backwards compatibility\n        if isinstance(state.data, DataDocument):\n            return state\n\n        # Unless the user has already constructed a result explicitly, use the factory\n        # to update the data to the correct type\n        if not isinstance(state.data, BaseResult):\n            state.data = await result_factory.create_result(state.data)\n\n        return state\n\n    # Determine a new state from the aggregate of contained states\n    if is_state(retval) or is_state_iterable(retval):\n        states = StateGroup(ensure_iterable(retval))\n\n        # Determine the new state type\n        if states.all_completed():\n            new_state_type = StateType.COMPLETED\n        elif states.any_cancelled():\n            new_state_type = StateType.CANCELLED\n        elif states.any_paused():\n            new_state_type = StateType.PAUSED\n        else:\n            new_state_type = StateType.FAILED\n\n        # Generate a nice message for the aggregate\n        if states.all_completed():\n            message = \"All states completed.\"\n        elif states.any_cancelled():\n            message = f\"{states.cancelled_count}/{states.total_count} states cancelled.\"\n        elif states.any_paused():\n            message = f\"{states.paused_count}/{states.total_count} states paused.\"\n        elif states.any_failed():\n            message = f\"{states.fail_count}/{states.total_count} states failed.\"\n        elif not states.all_final():\n            message = (\n                f\"{states.not_final_count}/{states.total_count} states are not final.\"\n            )\n        else:\n            message = \"Given states: \" + states.counts_message()\n\n        # TODO: We may actually want to set the data to a `StateGroup` object and just\n        #       allow it to be unpacked into a tuple and such so users can interact with\n        #       it\n        return State(\n            type=new_state_type,\n            message=message,\n            data=await result_factory.create_result(retval),\n        )\n\n    # Generators aren't portable, implicitly convert them to a list.\n    if isinstance(retval, GeneratorType):\n        data = list(retval)\n    else:\n        data = retval\n\n    # Otherwise, they just gave data and this is a completed retval\n    return 
Completed(data=await result_factory.create_result(data))\n
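The behaviour above can be observed from user code. The following is a minimal sketch (not part of the Prefect source) of the aggregate rule; the task and flow names are assumptions chosen for the example. A flow that returns the states of its task runs is placed in a state derived from them.

from prefect import flow, task


@task
def always_succeeds() -> str:
    return "ok"


@task
def always_fails() -> None:
    raise ValueError("failure!")


@flow
def state_aggregation_flow():
    # Returning task run states applies the aggregate rule:
    # any non-COMPLETED state makes the final flow run state FAILED.
    return [
        always_succeeds(return_state=True),
        always_fails(return_state=True),
    ]

Here state_aggregation_flow would finish in a FAILED state, because one of the returned states is not COMPLETED.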
    ","tags":["Python API","states"]},{"location":"api-ref/prefect/task-runners/","title":"prefect.task_runners","text":"","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners","title":"prefect.task_runners","text":"

    Interface and implementations of various task runners.

    Task Runners in Prefect are responsible for managing the execution of Prefect task runs. Generally speaking, users are not expected to interact with task runners outside of configuring and initializing them for a flow.

    Example
    >>> from prefect import flow, task\n>>> from prefect.task_runners import SequentialTaskRunner\n>>> from typing import List\n>>>\n>>> @task\n>>> def say_hello(name):\n...     print(f\"hello {name}\")\n>>>\n>>> @task\n>>> def say_goodbye(name):\n...     print(f\"goodbye {name}\")\n>>>\n>>> @flow(task_runner=SequentialTaskRunner())\n>>> def greetings(names: List[str]):\n...     for name in names:\n...         say_hello(name)\n...         say_goodbye(name)\n>>>\n>>> greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\nhello arthur\ngoodbye arthur\nhello trillian\ngoodbye trillian\nhello ford\ngoodbye ford\nhello marvin\ngoodbye marvin\n

    Switching to a DaskTaskRunner:

    >>> from prefect_dask.task_runners import DaskTaskRunner\n>>> flow.task_runner = DaskTaskRunner()\n>>> greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\nhello arthur\ngoodbye arthur\nhello trillian\nhello ford\ngoodbye marvin\nhello marvin\ngoodbye ford\ngoodbye trillian\n

    For usage details, see the Task Runners documentation.

    ","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner","title":"BaseTaskRunner","text":"Source code in prefect/task_runners.py
    class BaseTaskRunner(metaclass=abc.ABCMeta):\n    def __init__(self) -> None:\n        self.logger = get_logger(f\"task_runner.{self.name}\")\n        self._started: bool = False\n\n    @property\n    @abc.abstractmethod\n    def concurrency_type(self) -> TaskConcurrencyType:\n        pass  # noqa\n\n    @property\n    def name(self):\n        return type(self).__name__.lower().replace(\"taskrunner\", \"\")\n\n    def duplicate(self):\n        \"\"\"\n        Return a new task runner instance with the same options.\n        \"\"\"\n        # The base class returns `NotImplemented` to indicate that this is not yet\n        # implemented by a given task runner.\n        return NotImplemented\n\n    def __eq__(self, other: object) -> bool:\n        \"\"\"\n        Returns true if the task runners use the same options.\n        \"\"\"\n        if type(other) == type(self) and (\n            # Compare public attributes for naive equality check\n            # Subclasses should implement this method with a check init option equality\n            {k: v for k, v in self.__dict__.items() if not k.startswith(\"_\")}\n            == {k: v for k, v in other.__dict__.items() if not k.startswith(\"_\")}\n        ):\n            return True\n        else:\n            return NotImplemented\n\n    @abc.abstractmethod\n    async def submit(\n        self,\n        key: UUID,\n        call: Callable[..., Awaitable[State[R]]],\n    ) -> None:\n        \"\"\"\n        Submit a call for execution and return a `PrefectFuture` that can be used to\n        get the call result.\n\n        Args:\n            task_run: The task run being submitted.\n            task_key: A unique key for this orchestration run of the task. Can be used\n                for caching.\n            call: The function to be executed\n            run_kwargs: A dict of keyword arguments to pass to `call`\n\n        Returns:\n            A future representing the result of `call` execution\n        \"\"\"\n        raise NotImplementedError()\n\n    @abc.abstractmethod\n    async def wait(self, key: UUID, timeout: float = None) -> Optional[State]:\n        \"\"\"\n        Given a `PrefectFuture`, wait for its return state up to `timeout` seconds.\n        If it is not finished after the timeout expires, `None` should be returned.\n\n        Implementers should be careful to ensure that this function never returns or\n        raises an exception.\n        \"\"\"\n        raise NotImplementedError()\n\n    @asynccontextmanager\n    async def start(\n        self: T,\n    ) -> AsyncIterator[T]:\n        \"\"\"\n        Start the task runner, preparing any resources necessary for task submission.\n\n        Children should implement `_start` to prepare and clean up resources.\n\n        Yields:\n            The prepared task runner\n        \"\"\"\n        if self._started:\n            raise RuntimeError(\"The task runner is already started!\")\n\n        async with AsyncExitStack() as exit_stack:\n            self.logger.debug(\"Starting task runner...\")\n            try:\n                await self._start(exit_stack)\n                self._started = True\n                yield self\n            finally:\n                self.logger.debug(\"Shutting down task runner...\")\n                self._started = False\n\n    async def _start(self, exit_stack: AsyncExitStack) -> None:\n        \"\"\"\n        Create any resources required for this task runner to submit work.\n\n        Cleanup of resources should be submitted to the `exit_stack`.\n  
      \"\"\"\n        pass  # noqa\n\n    def __str__(self) -> str:\n        return type(self).__name__\n
    ","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.duplicate","title":"duplicate","text":"

    Return a new task runner instance with the same options.

    Source code in prefect/task_runners.py
    def duplicate(self):\n    \"\"\"\n    Return a new task runner instance with the same options.\n    \"\"\"\n    # The base class returns `NotImplemented` to indicate that this is not yet\n    # implemented by a given task runner.\n    return NotImplemented\n
    ","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.start","title":"start async","text":"

    Start the task runner, preparing any resources necessary for task submission.

    Children should implement _start to prepare and clean up resources.

    Yields:

AsyncIterator[T]: The prepared task runner

    Source code in prefect/task_runners.py
    @asynccontextmanager\nasync def start(\n    self: T,\n) -> AsyncIterator[T]:\n    \"\"\"\n    Start the task runner, preparing any resources necessary for task submission.\n\n    Children should implement `_start` to prepare and clean up resources.\n\n    Yields:\n        The prepared task runner\n    \"\"\"\n    if self._started:\n        raise RuntimeError(\"The task runner is already started!\")\n\n    async with AsyncExitStack() as exit_stack:\n        self.logger.debug(\"Starting task runner...\")\n        try:\n            await self._start(exit_stack)\n            self._started = True\n            yield self\n        finally:\n            self.logger.debug(\"Shutting down task runner...\")\n            self._started = False\n
    ","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.submit","title":"submit abstractmethod async","text":"

    Submit a call for execution and return a PrefectFuture that can be used to get the call result.

    Parameters:

task_run (required): The task run being submitted.

task_key (required): A unique key for this orchestration run of the task. Can be used for caching.

call (Callable[..., Awaitable[State[R]]], required): The function to be executed.

run_kwargs (required): A dict of keyword arguments to pass to call.

    Returns:

None: A future representing the result of call execution.

    Source code in prefect/task_runners.py
    @abc.abstractmethod\nasync def submit(\n    self,\n    key: UUID,\n    call: Callable[..., Awaitable[State[R]]],\n) -> None:\n    \"\"\"\n    Submit a call for execution and return a `PrefectFuture` that can be used to\n    get the call result.\n\n    Args:\n        task_run: The task run being submitted.\n        task_key: A unique key for this orchestration run of the task. Can be used\n            for caching.\n        call: The function to be executed\n        run_kwargs: A dict of keyword arguments to pass to `call`\n\n    Returns:\n        A future representing the result of `call` execution\n    \"\"\"\n    raise NotImplementedError()\n
    ","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.wait","title":"wait abstractmethod async","text":"

    Given a PrefectFuture, wait for its return state up to timeout seconds. If it is not finished after the timeout expires, None should be returned.

Implementers should be careful to ensure that this function never returns an exception or raises one.

    Source code in prefect/task_runners.py
    @abc.abstractmethod\nasync def wait(self, key: UUID, timeout: float = None) -> Optional[State]:\n    \"\"\"\n    Given a `PrefectFuture`, wait for its return state up to `timeout` seconds.\n    If it is not finished after the timeout expires, `None` should be returned.\n\n    Implementers should be careful to ensure that this function never returns or\n    raises an exception.\n    \"\"\"\n    raise NotImplementedError()\n
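Taken together, the interface above can be satisfied by a very small runner. The following hypothetical subclass is not part of the Prefect source (the class name is an assumption); it mirrors the behaviour of SequentialTaskRunner and shows how concurrency_type, submit, and wait fit together:

from typing import Awaitable, Callable, Dict, Optional
from uuid import UUID

from prefect.states import State
from prefect.task_runners import BaseTaskRunner, TaskConcurrencyType


class ImmediateTaskRunner(BaseTaskRunner):
    # Hypothetical runner: executes each call as soon as it is submitted
    # and keeps the resulting state in memory for wait().

    def __init__(self) -> None:
        super().__init__()
        self._results: Dict[UUID, State] = {}

    @property
    def concurrency_type(self) -> TaskConcurrencyType:
        return TaskConcurrencyType.SEQUENTIAL

    def duplicate(self):
        # Return a fresh instance with the same (empty) options.
        return type(self)()

    async def submit(self, key: UUID, call: Callable[..., Awaitable[State]]) -> None:
        # Run the call immediately and remember its final state.
        self._results[key] = await call()

    async def wait(self, key: UUID, timeout: float = None) -> Optional[State]:
        # The call already finished in submit, so no timeout handling is needed.
        return self._results.get(key)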
    ","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.ConcurrentTaskRunner","title":"ConcurrentTaskRunner","text":"

    Bases: BaseTaskRunner

    A concurrent task runner that allows tasks to switch when blocking on IO. Synchronous tasks will be submitted to a thread pool maintained by anyio.

    Example
    Using a thread for concurrency:\n>>> from prefect import flow\n>>> from prefect.task_runners import ConcurrentTaskRunner\n>>> @flow(task_runner=ConcurrentTaskRunner)\n>>> def my_flow():\n>>>     ...\n
    Source code in prefect/task_runners.py
    class ConcurrentTaskRunner(BaseTaskRunner):\n    \"\"\"\n    A concurrent task runner that allows tasks to switch when blocking on IO.\n    Synchronous tasks will be submitted to a thread pool maintained by `anyio`.\n\n    Example:\n        ```\n        Using a thread for concurrency:\n        >>> from prefect import flow\n        >>> from prefect.task_runners import ConcurrentTaskRunner\n        >>> @flow(task_runner=ConcurrentTaskRunner)\n        >>> def my_flow():\n        >>>     ...\n        ```\n    \"\"\"\n\n    def __init__(self):\n        # TODO: Consider adding `max_workers` support using anyio capacity limiters\n\n        # Runtime attributes\n        self._task_group: anyio.abc.TaskGroup = None\n        self._result_events: Dict[UUID, Event] = {}\n        self._results: Dict[UUID, Any] = {}\n        self._keys: Set[UUID] = set()\n\n        super().__init__()\n\n    @property\n    def concurrency_type(self) -> TaskConcurrencyType:\n        return TaskConcurrencyType.CONCURRENT\n\n    def duplicate(self):\n        return type(self)()\n\n    async def submit(\n        self,\n        key: UUID,\n        call: Callable[[], Awaitable[State[R]]],\n    ) -> None:\n        if not self._started:\n            raise RuntimeError(\n                \"The task runner must be started before submitting work.\"\n            )\n\n        if not self._task_group:\n            raise RuntimeError(\n                \"The concurrent task runner cannot be used to submit work after \"\n                \"serialization.\"\n            )\n\n        # Create an event to set on completion\n        self._result_events[key] = Event()\n\n        # Rely on the event loop for concurrency\n        self._task_group.start_soon(self._run_and_store_result, key, call)\n\n    async def wait(\n        self,\n        key: UUID,\n        timeout: float = None,\n    ) -> Optional[State]:\n        if not self._task_group:\n            raise RuntimeError(\n                \"The concurrent task runner cannot be used to wait for work after \"\n                \"serialization.\"\n            )\n\n        return await self._get_run_result(key, timeout)\n\n    async def _run_and_store_result(\n        self, key: UUID, call: Callable[[], Awaitable[State[R]]]\n    ):\n        \"\"\"\n        Simple utility to store the orchestration result in memory on completion\n\n        Since this run is occurring on the main thread, we capture exceptions to prevent\n        task crashes from crashing the flow run.\n        \"\"\"\n        try:\n            result = await call()\n        except BaseException as exc:\n            result = await exception_to_crashed_state(exc)\n\n        self._results[key] = result\n        self._result_events[key].set()\n\n    async def _get_run_result(\n        self, key: UUID, timeout: float = None\n    ) -> Optional[State]:\n        \"\"\"\n        Block until the run result has been populated.\n        \"\"\"\n        result = None  # retval on timeout\n\n        # Note we do not use `asyncio.wrap_future` and instead use an `Event` to avoid\n        # stdlib behavior where the wrapped future is cancelled if the parent future is\n        # cancelled (as it would be during a timeout here)\n        with anyio.move_on_after(timeout):\n            await self._result_events[key].wait()\n            result = self._results[key]\n\n        return result  # timeout reached\n\n    async def _start(self, exit_stack: AsyncExitStack):\n        \"\"\"\n        Start the process pool\n        \"\"\"\n        
self._task_group = await exit_stack.enter_async_context(\n            anyio.create_task_group()\n        )\n\n    def __getstate__(self):\n        \"\"\"\n        Allow the `ConcurrentTaskRunner` to be serialized by dropping the task group.\n        \"\"\"\n        data = self.__dict__.copy()\n        data.update({k: None for k in {\"_task_group\"}})\n        return data\n\n    def __setstate__(self, data: dict):\n        \"\"\"\n        When deserialized, we will no longer have a reference to the task group.\n        \"\"\"\n        self.__dict__.update(data)\n        self._task_group = None\n
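As a hedged illustration of the concurrency behaviour described above (the task, flow, and sleep duration are assumptions chosen for the example), submitted synchronous tasks can overlap while blocked on IO because they run on anyio's thread pool:

import time

from prefect import flow, task
from prefect.task_runners import ConcurrentTaskRunner


@task
def slow_io(n: int) -> int:
    # Stand-in for an IO-bound call; sync tasks run on anyio's thread pool.
    time.sleep(1)
    return n


@flow(task_runner=ConcurrentTaskRunner())
def concurrent_flow() -> list:
    # The three submissions can overlap while each task is blocked in sleep.
    futures = [slow_io.submit(n) for n in range(3)]
    return [future.result() for future in futures]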
    ","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.SequentialTaskRunner","title":"SequentialTaskRunner","text":"

    Bases: BaseTaskRunner

    A simple task runner that executes calls as they are submitted.

    If writing synchronous tasks, this runner will always execute tasks sequentially. If writing async tasks, this runner will execute tasks sequentially unless grouped using anyio.create_task_group or asyncio.gather.

    Source code in prefect/task_runners.py
    class SequentialTaskRunner(BaseTaskRunner):\n    \"\"\"\n    A simple task runner that executes calls as they are submitted.\n\n    If writing synchronous tasks, this runner will always execute tasks sequentially.\n    If writing async tasks, this runner will execute tasks sequentially unless grouped\n    using `anyio.create_task_group` or `asyncio.gather`.\n    \"\"\"\n\n    def __init__(self) -> None:\n        super().__init__()\n        self._results: Dict[str, State] = {}\n\n    @property\n    def concurrency_type(self) -> TaskConcurrencyType:\n        return TaskConcurrencyType.SEQUENTIAL\n\n    def duplicate(self):\n        return type(self)()\n\n    async def submit(\n        self,\n        key: UUID,\n        call: Callable[..., Awaitable[State[R]]],\n    ) -> None:\n        # Run the function immediately and store the result in memory\n        try:\n            result = await call()\n        except BaseException as exc:\n            result = await exception_to_crashed_state(exc)\n\n        self._results[key] = result\n\n    async def wait(self, key: UUID, timeout: float = None) -> Optional[State]:\n        return self._results[key]\n
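A small sketch of the sequential behaviour described above; the task and flow names are assumptions chosen for the example:

from prefect import flow, task
from prefect.task_runners import SequentialTaskRunner


@task
def step(n: int) -> None:
    print(f"running step {n}")


@flow(task_runner=SequentialTaskRunner())
def sequential_flow() -> None:
    # Each submitted sync task is fully resolved before the next one starts.
    for n in range(3):
        step.submit(n)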
    ","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/tasks/","title":"prefect.tasks","text":"","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks","title":"prefect.tasks","text":"

    Module containing the base workflow task class and decorator - for most use cases, using the @task decorator is preferred.
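A minimal sketch of that preferred usage; the function and flow names are placeholders, not part of the Prefect source:

from prefect import flow, task


@task
def add_one(x: int) -> int:
    return x + 1


@flow
def add_one_flow() -> int:
    # Calling the decorated function inside a flow creates a task run.
    return add_one(1)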

    ","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task","title":"Task","text":"

    Bases: Generic[P, R]

    A Prefect task definition.

    Note

    We recommend using the @task decorator for most use-cases.

    Wraps a function with an entrypoint to the Prefect engine. Calling this class within a flow function creates a new task run.

    To preserve the input and output types, we use the generic type variables P and R for \"Parameters\" and \"Returns\" respectively.
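Before the full parameter reference below, a hedged example of configuring a handful of these options; the task name, function body, and values are assumptions chosen for illustration, and task_input_hash is Prefect's built-in input-based cache key function:

from datetime import timedelta

from prefect import task
from prefect.tasks import task_input_hash


@task(
    name="fetch-user",
    retries=2,
    retry_delay_seconds=5,
    cache_key_fn=task_input_hash,
    cache_expiration=timedelta(hours=1),
)
def fetch_user(user_id: int) -> dict:
    # Placeholder body; results are cached by input for one hour.
    return {"id": user_id}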

    Parameters:

fn (Callable[P, R], required): The function defining the task.

name (Optional[str], default None): An optional name for the task; if not provided, the name will be inferred from the given function.

description (Optional[str], default None): An optional string description for the task.

tags (Optional[Iterable[str]], default None): An optional set of tags to be associated with runs of this task. These tags are combined with any tags defined by a prefect.tags context at task runtime.

version (Optional[str], default None): An optional string specifying the version of this task definition.

cache_key_fn (Optional[Callable[[TaskRunContext, Dict[str, Any]], Optional[str]]], default None): An optional callable that, given the task run context and call parameters, generates a string key; if the key matches a previous completed state, that state result will be restored instead of running the task again.

cache_expiration (Optional[timedelta], default None): An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire.

task_run_name (Optional[Union[Callable[[], str], str]], default None): An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string.

retries (Optional[int], default None): An optional number of times to retry on task run failure.

retry_delay_seconds (Optional[Union[float, int, List[float], Callable[[int], List[float]]]], default None): Optionally configures how long to wait before retrying the task after failure. This is only applicable if retries is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50.

retry_jitter_factor (Optional[float], default None): An optional factor by which a retry can be jittered in order to avoid a "thundering herd".

persist_result (Optional[bool], default None): An optional toggle indicating whether the result of this task should be persisted to result storage. Defaults to None, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.

result_storage (Optional[ResultStorage], default None): An optional block to use to persist the result of this task. Defaults to the value set in the flow the task is called in.

result_storage_key (Optional[str], default None): An optional key to store the result in storage at when persisted. Defaults to a unique identifier.

result_serializer (Optional[ResultSerializer], default None): An optional serializer to use to serialize the result of this task for persistence. Defaults to the value set in the flow the task is called in.

timeout_seconds (Union[int, float, None], default None): An optional number of seconds indicating a maximum runtime for the task. If the task exceeds this runtime, it will be marked as failed.

log_prints (Optional[bool], default False): If set, print statements in the task will be redirected to the Prefect logger for the task run. Defaults to None, which indicates that the value from the flow should be used.

refresh_cache (Optional[bool], default None): If set, cached results for the cache key are not used. Defaults to None, which indicates that a cached result from a previous execution with matching cache key is used.

on_failure (Optional[List[Callable[[Task, TaskRun, State], None]]], default None): An optional list of callables to run when the task enters a failed state.

on_completion (Optional[List[Callable[[Task, TaskRun, State], None]]], default None): An optional list of callables to run when the task enters a completed state.

retry_condition_fn (Optional[Callable[[Task, TaskRun, State], bool]], default None): An optional callable run when a task run returns a Failed state. Should return True if the task should continue to its retry policy (e.g. retries=3), and False if the task should end as failed. Defaults to None, indicating the task should always continue to its retry policy.

viz_return_value (Optional[Any], default None): An optional value to return when the task dependency tree is visualized.

Source code in prefect/tasks.py
    @PrefectObjectRegistry.register_instances\nclass Task(Generic[P, R]):\n    \"\"\"\n    A Prefect task definition.\n\n    !!! note\n        We recommend using [the `@task` decorator][prefect.tasks.task] for most use-cases.\n\n    Wraps a function with an entrypoint to the Prefect engine. Calling this class within a flow function\n    creates a new task run.\n\n    To preserve the input and output types, we use the generic type variables P and R for \"Parameters\" and\n    \"Returns\" respectively.\n\n    Args:\n        fn: The function defining the task.\n        name: An optional name for the task; if not provided, the name will be inferred\n            from the given function.\n        description: An optional string description for the task.\n        tags: An optional set of tags to be associated with runs of this task. These\n            tags are combined with any tags defined by a `prefect.tags` context at\n            task runtime.\n        version: An optional string specifying the version of this task definition\n        cache_key_fn: An optional callable that, given the task run context and call\n            parameters, generates a string key; if the key matches a previous completed\n            state, that state result will be restored instead of running the task again.\n        cache_expiration: An optional amount of time indicating how long cached states\n            for this task should be restorable; if not provided, cached states will\n            never expire.\n        task_run_name: An optional name to distinguish runs of this task; this name can be provided\n            as a string template with the task's keyword arguments as variables,\n            or a function that returns a string.\n        retries: An optional number of times to retry on task run failure.\n        retry_delay_seconds: Optionally configures how long to wait before retrying the\n            task after failure. This is only applicable if `retries` is nonzero. This\n            setting can either be a number of seconds, a list of retry delays, or a\n            callable that, given the total number of retries, generates a list of retry\n            delays. If a number of seconds, that delay will be applied to all retries.\n            If a list, each retry will wait for the corresponding delay before retrying.\n            When passing a callable or a list, the number of configured retry delays\n            cannot exceed 50.\n        retry_jitter_factor: An optional factor that defines the factor to which a retry\n            can be jittered in order to avoid a \"thundering herd\".\n        persist_result: An optional toggle indicating whether the result of this task\n            should be persisted to result storage. Defaults to `None`, which indicates\n            that Prefect should choose whether the result should be persisted depending on\n            the features being used.\n        result_storage: An optional block to use to persist the result of this task.\n            Defaults to the value set in the flow the task is called in.\n        result_storage_key: An optional key to store the result in storage at when persisted.\n            Defaults to a unique identifier.\n        result_serializer: An optional serializer to use to serialize the result of this\n            task for persistence. Defaults to the value set in the flow the task is\n            called in.\n        timeout_seconds: An optional number of seconds indicating a maximum runtime for\n            the task. 
If the task exceeds this runtime, it will be marked as failed.\n        log_prints: If set, `print` statements in the task will be redirected to the\n            Prefect logger for the task run. Defaults to `None`, which indicates\n            that the value from the flow should be used.\n        refresh_cache: If set, cached results for the cache key are not used.\n            Defaults to `None`, which indicates that a cached result from a previous\n            execution with matching cache key is used.\n        on_failure: An optional list of callables to run when the task enters a failed state.\n        on_completion: An optional list of callables to run when the task enters a completed state.\n        retry_condition_fn: An optional callable run when a task run returns a Failed state. Should\n            return `True` if the task should continue to its retry policy (e.g. `retries=3`), and `False` if the task\n            should end as failed. Defaults to `None`, indicating the task should always continue\n            to its retry policy.\n        viz_return_value: An optional value to return when the task dependency tree is visualized.\n    \"\"\"\n\n    # NOTE: These parameters (types, defaults, and docstrings) should be duplicated\n    #       exactly in the @task decorator\n    def __init__(\n        self,\n        fn: Callable[P, R],\n        name: Optional[str] = None,\n        description: Optional[str] = None,\n        tags: Optional[Iterable[str]] = None,\n        version: Optional[str] = None,\n        cache_key_fn: Optional[\n            Callable[[\"TaskRunContext\", Dict[str, Any]], Optional[str]]\n        ] = None,\n        cache_expiration: Optional[datetime.timedelta] = None,\n        task_run_name: Optional[Union[Callable[[], str], str]] = None,\n        retries: Optional[int] = None,\n        retry_delay_seconds: Optional[\n            Union[\n                float,\n                int,\n                List[float],\n                Callable[[int], List[float]],\n            ]\n        ] = None,\n        retry_jitter_factor: Optional[float] = None,\n        persist_result: Optional[bool] = None,\n        result_storage: Optional[ResultStorage] = None,\n        result_serializer: Optional[ResultSerializer] = None,\n        result_storage_key: Optional[str] = None,\n        cache_result_in_memory: bool = True,\n        timeout_seconds: Union[int, float, None] = None,\n        log_prints: Optional[bool] = False,\n        refresh_cache: Optional[bool] = None,\n        on_completion: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n        on_failure: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n        retry_condition_fn: Optional[Callable[[\"Task\", TaskRun, State], bool]] = None,\n        viz_return_value: Optional[Any] = None,\n    ):\n        # Validate if hook passed is list and contains callables\n        hook_categories = [on_completion, on_failure]\n        hook_names = [\"on_completion\", \"on_failure\"]\n        for hooks, hook_name in zip(hook_categories, hook_names):\n            if hooks is not None:\n                if not hooks:\n                    raise ValueError(f\"Empty list passed for '{hook_name}'\")\n                try:\n                    hooks = list(hooks)\n                except TypeError:\n                    raise TypeError(\n                        f\"Expected iterable for '{hook_name}'; got\"\n                        f\" {type(hooks).__name__} instead. 
Please provide a list of\"\n                        f\" hooks to '{hook_name}':\\n\\n\"\n                        f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n                        \" my_flow():\\n\\tpass\"\n                    )\n\n                for hook in hooks:\n                    if not callable(hook):\n                        raise TypeError(\n                            f\"Expected callables in '{hook_name}'; got\"\n                            f\" {type(hook).__name__} instead. Please provide a list of\"\n                            f\" hooks to '{hook_name}':\\n\\n\"\n                            f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n                            \" my_flow():\\n\\tpass\"\n                        )\n\n        if not callable(fn):\n            raise TypeError(\"'fn' must be callable\")\n\n        self.description = description or inspect.getdoc(fn)\n        update_wrapper(self, fn)\n        self.fn = fn\n        self.isasync = inspect.iscoroutinefunction(self.fn)\n\n        if not name:\n            if not hasattr(self.fn, \"__name__\"):\n                self.name = type(self.fn).__name__\n            else:\n                self.name = self.fn.__name__\n        else:\n            self.name = name\n\n        if task_run_name is not None:\n            if not isinstance(task_run_name, str) and not callable(task_run_name):\n                raise TypeError(\n                    \"Expected string or callable for 'task_run_name'; got\"\n                    f\" {type(task_run_name).__name__} instead.\"\n                )\n        self.task_run_name = task_run_name\n\n        self.version = version\n        self.log_prints = log_prints\n\n        raise_for_reserved_arguments(self.fn, [\"return_state\", \"wait_for\"])\n\n        self.tags = set(tags if tags else [])\n\n        if not hasattr(self.fn, \"__qualname__\"):\n            self.task_key = to_qualified_name(type(self.fn))\n        else:\n            try:\n                task_origin_hash = hash_objects(\n                    self.name, os.path.abspath(inspect.getsourcefile(self.fn))\n                )\n            except TypeError:\n                task_origin_hash = \"unknown-source-file\"\n\n            self.task_key = f\"{self.fn.__qualname__}-{task_origin_hash}\"\n\n        self.cache_key_fn = cache_key_fn\n        self.cache_expiration = cache_expiration\n        self.refresh_cache = refresh_cache\n\n        # TaskRunPolicy settings\n        # TODO: We can instantiate a `TaskRunPolicy` and add Pydantic bound checks to\n        #       validate that the user passes positive numbers here\n\n        self.retries = (\n            retries if retries is not None else PREFECT_TASK_DEFAULT_RETRIES.value()\n        )\n        if retry_delay_seconds is None:\n            retry_delay_seconds = PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS.value()\n\n        if callable(retry_delay_seconds):\n            self.retry_delay_seconds = retry_delay_seconds(retries)\n        else:\n            self.retry_delay_seconds = retry_delay_seconds\n\n        if isinstance(self.retry_delay_seconds, list) and (\n            len(self.retry_delay_seconds) > 50\n        ):\n            raise ValueError(\"Can not configure more than 50 retry delays per task.\")\n\n        if retry_jitter_factor is not None and retry_jitter_factor < 0:\n            raise ValueError(\"`retry_jitter_factor` must be >= 0.\")\n\n        self.retry_jitter_factor = retry_jitter_factor\n        self.persist_result = persist_result\n        self.result_storage = 
result_storage\n        self.result_serializer = result_serializer\n        self.result_storage_key = result_storage_key\n        self.cache_result_in_memory = cache_result_in_memory\n\n        self.timeout_seconds = float(timeout_seconds) if timeout_seconds else None\n        # Warn if this task's `name` conflicts with another task while having a\n        # different function. This is to detect the case where two or more tasks\n        # share a name or are lambdas, which should result in a warning, and to\n        # differentiate it from the case where the task was 'copied' via\n        # `with_options`, which should not result in a warning.\n        registry = PrefectObjectRegistry.get()\n\n        if registry and any(\n            other\n            for other in registry.get_instances(Task)\n            if other.name == self.name and id(other.fn) != id(self.fn)\n        ):\n            try:\n                file = inspect.getsourcefile(self.fn)\n                line_number = inspect.getsourcelines(self.fn)[1]\n            except TypeError:\n                file = \"unknown\"\n                line_number = \"unknown\"\n\n            warnings.warn(\n                f\"A task named {self.name!r} and defined at '{file}:{line_number}' \"\n                \"conflicts with another task. Consider specifying a unique `name` \"\n                \"parameter in the task definition:\\n\\n \"\n                \"`@task(name='my_unique_name', ...)`\"\n            )\n        self.on_completion = on_completion\n        self.on_failure = on_failure\n\n        # retry_condition_fn must be a callable or None. If it is neither, raise a TypeError\n        if retry_condition_fn is not None and not (callable(retry_condition_fn)):\n            raise TypeError(\n                \"Expected `retry_condition_fn` to be callable, got\"\n                f\" {type(retry_condition_fn).__name__} instead.\"\n            )\n\n        self.retry_condition_fn = retry_condition_fn\n        self.viz_return_value = viz_return_value\n\n    def with_options(\n        self,\n        *,\n        name: str = None,\n        description: str = None,\n        tags: Iterable[str] = None,\n        cache_key_fn: Callable[\n            [\"TaskRunContext\", Dict[str, Any]], Optional[str]\n        ] = None,\n        task_run_name: Optional[Union[Callable[[], str], str]] = None,\n        cache_expiration: datetime.timedelta = None,\n        retries: Optional[int] = NotSet,\n        retry_delay_seconds: Union[\n            float,\n            int,\n            List[float],\n            Callable[[int], List[float]],\n        ] = NotSet,\n        retry_jitter_factor: Optional[float] = NotSet,\n        persist_result: Optional[bool] = NotSet,\n        result_storage: Optional[ResultStorage] = NotSet,\n        result_serializer: Optional[ResultSerializer] = NotSet,\n        result_storage_key: Optional[str] = NotSet,\n        cache_result_in_memory: Optional[bool] = None,\n        timeout_seconds: Union[int, float] = None,\n        log_prints: Optional[bool] = NotSet,\n        refresh_cache: Optional[bool] = NotSet,\n        on_completion: Optional[\n            List[Callable[[\"Task\", TaskRun, State], Union[Awaitable[None], None]]]\n        ] = None,\n        on_failure: Optional[\n            List[Callable[[\"Task\", TaskRun, State], Union[Awaitable[None], None]]]\n        ] = None,\n        retry_condition_fn: Optional[Callable[[\"Task\", TaskRun, State], bool]] = None,\n        viz_return_value: Optional[Any] = None,\n    ):\n        \"\"\"\n 
       Create a new task from the current object, updating provided options.\n\n        Args:\n            name: A new name for the task.\n            description: A new description for the task.\n            tags: A new set of tags for the task. If given, existing tags are ignored,\n                not merged.\n            cache_key_fn: A new cache key function for the task.\n            cache_expiration: A new cache expiration time for the task.\n            task_run_name: An optional name to distinguish runs of this task; this name can be provided\n                as a string template with the task's keyword arguments as variables,\n                or a function that returns a string.\n            retries: A new number of times to retry on task run failure.\n            retry_delay_seconds: Optionally configures how long to wait before retrying\n                the task after failure. This is only applicable if `retries` is nonzero.\n                This setting can either be a number of seconds, a list of retry delays,\n                or a callable that, given the total number of retries, generates a list\n                of retry delays. If a number of seconds, that delay will be applied to\n                all retries. If a list, each retry will wait for the corresponding delay\n                before retrying. When passing a callable or a list, the number of\n                configured retry delays cannot exceed 50.\n            retry_jitter_factor: An optional factor that defines the factor to which a\n                retry can be jittered in order to avoid a \"thundering herd\".\n            persist_result: A new option for enabling or disabling result persistence.\n            result_storage: A new storage type to use for results.\n            result_serializer: A new serializer to use for results.\n            result_storage_key: A new key for the persisted result to be stored at.\n            timeout_seconds: A new maximum time for the task to complete in seconds.\n            log_prints: A new option for enabling or disabling redirection of `print` statements.\n            refresh_cache: A new option for enabling or disabling cache refresh.\n            on_completion: A new list of callables to run when the task enters a completed state.\n            on_failure: A new list of callables to run when the task enters a failed state.\n            retry_condition_fn: An optional callable run when a task run returns a Failed state.\n                Should return `True` if the task should continue to its retry policy, and `False`\n                if the task should end as failed. 
Defaults to `None`, indicating the task should\n                always continue to its retry policy.\n            viz_return_value: An optional value to return when the task dependency tree is visualized.\n\n        Returns:\n            A new `Task` instance.\n\n        Examples:\n\n            Create a new task from an existing task and update the name\n\n            >>> @task(name=\"My task\")\n            >>> def my_task():\n            >>>     return 1\n            >>>\n            >>> new_task = my_task.with_options(name=\"My new task\")\n\n            Create a new task from an existing task and update the retry settings\n\n            >>> from random import randint\n            >>>\n            >>> @task(retries=1, retry_delay_seconds=5)\n            >>> def my_task():\n            >>>     x = randint(0, 5)\n            >>>     if x >= 3:  # Make a task that fails sometimes\n            >>>         raise ValueError(\"Retry me please!\")\n            >>>     return x\n            >>>\n            >>> new_task = my_task.with_options(retries=5, retry_delay_seconds=2)\n\n            Use a task with updated options within a flow\n\n            >>> @task(name=\"My task\")\n            >>> def my_task():\n            >>>     return 1\n            >>>\n            >>> @flow\n            >>> my_flow():\n            >>>     new_task = my_task.with_options(name=\"My new task\")\n            >>>     new_task()\n        \"\"\"\n        return Task(\n            fn=self.fn,\n            name=name or self.name,\n            description=description or self.description,\n            tags=tags or copy(self.tags),\n            cache_key_fn=cache_key_fn or self.cache_key_fn,\n            cache_expiration=cache_expiration or self.cache_expiration,\n            task_run_name=task_run_name,\n            retries=retries if retries is not NotSet else self.retries,\n            retry_delay_seconds=(\n                retry_delay_seconds\n                if retry_delay_seconds is not NotSet\n                else self.retry_delay_seconds\n            ),\n            retry_jitter_factor=(\n                retry_jitter_factor\n                if retry_jitter_factor is not NotSet\n                else self.retry_jitter_factor\n            ),\n            persist_result=(\n                persist_result if persist_result is not NotSet else self.persist_result\n            ),\n            result_storage=(\n                result_storage if result_storage is not NotSet else self.result_storage\n            ),\n            result_storage_key=(\n                result_storage_key\n                if result_storage_key is not NotSet\n                else self.result_storage_key\n            ),\n            result_serializer=(\n                result_serializer\n                if result_serializer is not NotSet\n                else self.result_serializer\n            ),\n            cache_result_in_memory=(\n                cache_result_in_memory\n                if cache_result_in_memory is not None\n                else self.cache_result_in_memory\n            ),\n            timeout_seconds=(\n                timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n            ),\n            log_prints=(log_prints if log_prints is not NotSet else self.log_prints),\n            refresh_cache=(\n                refresh_cache if refresh_cache is not NotSet else self.refresh_cache\n            ),\n            on_completion=on_completion or self.on_completion,\n            on_failure=on_failure or 
self.on_failure,\n            retry_condition_fn=retry_condition_fn or self.retry_condition_fn,\n            viz_return_value=viz_return_value or self.viz_return_value,\n        )\n\n    async def create_run(\n        self,\n        flow_run_context: FlowRunContext,\n        parameters: Dict[str, Any],\n        wait_for: Optional[Iterable[PrefectFuture]],\n        extra_task_inputs: Optional[Dict[str, Set[TaskRunInput]]] = None,\n    ) -> TaskRun:\n        # TODO: Investigate if we can replace create_task_run on the task run engine\n        # with this method. Would require updating to work without the flow run context.\n        from prefect.utilities.engine import (\n            _dynamic_key_for_task_run,\n            collect_task_run_inputs,\n        )\n\n        dynamic_key = _dynamic_key_for_task_run(flow_run_context, self)\n        task_inputs = {\n            k: await collect_task_run_inputs(v) for k, v in parameters.items()\n        }\n        if wait_for:\n            task_inputs[\"wait_for\"] = await collect_task_run_inputs(wait_for)\n\n        # Join extra task inputs\n        extra_task_inputs = extra_task_inputs or {}\n        for k, extras in extra_task_inputs.items():\n            task_inputs[k] = task_inputs[k].union(extras)\n\n        flow_run_logger = get_run_logger(flow_run_context)\n\n        task_run = await flow_run_context.client.create_task_run(\n            task=self,\n            name=f\"{self.name} - {dynamic_key}\",\n            flow_run_id=flow_run_context.flow_run.id,\n            dynamic_key=dynamic_key,\n            state=Pending(),\n            extra_tags=TagsContext.get().current_tags,\n            task_inputs=task_inputs,\n        )\n\n        if flow_run_context.flow_run:\n            flow_run_logger.info(\n                f\"Created task run {task_run.name!r} for task {self.name!r}\"\n            )\n        else:\n            logger.info(f\"Created task run {task_run.name!r} for task {self.name!r}\")\n\n        return task_run\n\n    @overload\n    def __call__(\n        self: \"Task[P, NoReturn]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> None:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def __call__(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> T:\n        ...\n\n    @overload\n    def __call__(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        return_state: Literal[True],\n        **kwargs: P.kwargs,\n    ) -> State[T]:\n        ...\n\n    def __call__(\n        self,\n        *args: P.args,\n        return_state: bool = False,\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: P.kwargs,\n    ):\n        \"\"\"\n        Run the task and return the result. 
If `return_state` is True returns\n        the result is wrapped in a Prefect State which provides error handling.\n        \"\"\"\n        from prefect.engine import enter_task_run_engine\n        from prefect.task_engine import submit_autonomous_task_run_to_engine\n        from prefect.task_runners import SequentialTaskRunner\n\n        # Convert the call args/kwargs to a parameter dict\n        parameters = get_call_parameters(self.fn, args, kwargs)\n\n        return_type = \"state\" if return_state else \"result\"\n\n        task_run_tracker = get_task_viz_tracker()\n        if task_run_tracker:\n            return track_viz_task(\n                self.isasync, self.name, parameters, self.viz_return_value\n            )\n\n        # new engine currently only compatible with async tasks\n        if PREFECT_EXPERIMENTAL_ENABLE_NEW_ENGINE.value():\n            from prefect.new_task_engine import run_task, run_task_sync\n\n            run_kwargs = dict(\n                task=self,\n                parameters=parameters,\n                wait_for=wait_for,\n                return_type=return_type,\n            )\n            if self.isasync:\n                # this returns an awaitable coroutine\n                return run_task(**run_kwargs)\n            else:\n                return run_task_sync(**run_kwargs)\n\n        if (\n            PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value()\n            and not FlowRunContext.get()\n        ):\n            from prefect import get_client\n\n            return submit_autonomous_task_run_to_engine(\n                task=self,\n                task_run=None,\n                task_runner=SequentialTaskRunner(),\n                parameters=parameters,\n                return_type=return_type,\n                client=get_client(),\n            )\n\n        return enter_task_run_engine(\n            self,\n            parameters=parameters,\n            wait_for=wait_for,\n            task_runner=SequentialTaskRunner(),\n            return_type=return_type,\n            mapped=False,\n        )\n\n    @overload\n    def _run(\n        self: \"Task[P, NoReturn]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> PrefectFuture[None, Sync]:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def _run(\n        self: \"Task[P, Coroutine[Any, Any, T]]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> Awaitable[State[T]]:\n        ...\n\n    @overload\n    def _run(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> State[T]:\n        ...\n\n    def _run(\n        self,\n        *args: P.args,\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: P.kwargs,\n    ) -> Union[State, Awaitable[State]]:\n        \"\"\"\n        Run the task and return the final state.\n        \"\"\"\n        from prefect.engine import enter_task_run_engine\n        from prefect.task_runners import SequentialTaskRunner\n\n        # Convert the call args/kwargs to a parameter dict\n        parameters = get_call_parameters(self.fn, args, kwargs)\n\n        return enter_task_run_engine(\n            self,\n            parameters=parameters,\n            wait_for=wait_for,\n            return_type=\"state\",\n            task_runner=SequentialTaskRunner(),\n            mapped=False,\n        )\n\n    @overload\n    def submit(\n        self: 
\"Task[P, NoReturn]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> PrefectFuture[None, Sync]:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def submit(\n        self: \"Task[P, Coroutine[Any, Any, T]]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> Awaitable[PrefectFuture[T, Async]]:\n        ...\n\n    @overload\n    def submit(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> PrefectFuture[T, Sync]:\n        ...\n\n    @overload\n    def submit(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        return_state: Literal[True],\n        **kwargs: P.kwargs,\n    ) -> State[T]:\n        ...\n\n    @overload\n    def submit(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> TaskRun:\n        ...\n\n    @overload\n    def submit(\n        self: \"Task[P, Coroutine[Any, Any, T]]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> Awaitable[TaskRun]:\n        ...\n\n    def submit(\n        self,\n        *args: Any,\n        return_state: bool = False,\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: Any,\n    ) -> Union[PrefectFuture, Awaitable[PrefectFuture], TaskRun, Awaitable[TaskRun]]:\n        \"\"\"\n        Submit a run of the task to the engine.\n\n        If writing an async task, this call must be awaited.\n\n        If called from within a flow function,\n\n        Will create a new task run in the backing API and submit the task to the flow's\n        task runner. This call only blocks execution while the task is being submitted,\n        once it is submitted, the flow function will continue executing. 
However, note\n        that the `SequentialTaskRunner` does not implement parallel execution for sync tasks\n        and they are fully resolved on submission.\n\n        Args:\n            *args: Arguments to run the task with\n            return_state: Return the result of the flow run wrapped in a\n                Prefect State.\n            wait_for: Upstream task futures to wait for before starting the task\n            **kwargs: Keyword arguments to run the task with\n\n        Returns:\n            If `return_state` is False a future allowing asynchronous access to\n                the state of the task\n            If `return_state` is True a future wrapped in a Prefect State allowing asynchronous access to\n                the state of the task\n\n        Examples:\n\n            Define a task\n\n            >>> from prefect import task\n            >>> @task\n            >>> def my_task():\n            >>>     return \"hello\"\n\n            Run a task in a flow\n\n            >>> from prefect import flow\n            >>> @flow\n            >>> def my_flow():\n            >>>     my_task.submit()\n\n            Wait for a task to finish\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     my_task.submit().wait()\n\n            Use the result from a task in a flow\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     print(my_task.submit().result())\n            >>>\n            >>> my_flow()\n            hello\n\n            Run an async task in an async flow\n\n            >>> @task\n            >>> async def my_async_task():\n            >>>     pass\n            >>>\n            >>> @flow\n            >>> async def my_flow():\n            >>>     await my_async_task.submit()\n\n            Run a sync task in an async flow\n\n            >>> @flow\n            >>> async def my_flow():\n            >>>     my_task.submit()\n\n            Enforce ordering between tasks that do not exchange data\n            >>> @task\n            >>> def task_1():\n            >>>     pass\n            >>>\n            >>> @task\n            >>> def task_2():\n            >>>     pass\n            >>>\n            >>> @flow\n            >>> def my_flow():\n            >>>     x = task_1.submit()\n            >>>\n            >>>     # task 2 will wait for task_1 to complete\n            >>>     y = task_2.submit(wait_for=[x])\n\n        \"\"\"\n\n        from prefect.engine import create_autonomous_task_run, enter_task_run_engine\n\n        # Convert the call args/kwargs to a parameter dict\n        parameters = get_call_parameters(self.fn, args, kwargs)\n        return_type = \"state\" if return_state else \"future\"\n        flow_run_context = FlowRunContext.get()\n\n        task_viz_tracker = get_task_viz_tracker()\n        if task_viz_tracker:\n            raise VisualizationUnsupportedError(\n                \"`task.submit()` is not currently supported by `flow.visualize()`\"\n            )\n\n        if PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING and not flow_run_context:\n            create_autonomous_task_run_call = create_call(\n                create_autonomous_task_run, task=self, parameters=parameters\n            )\n            if self.isasync:\n                return from_async.wait_for_call_in_loop_thread(\n                    create_autonomous_task_run_call\n                )\n            else:\n                return from_sync.wait_for_call_in_loop_thread(\n                    create_autonomous_task_run_call\n                )\n    
    if PREFECT_EXPERIMENTAL_ENABLE_NEW_ENGINE and flow_run_context:\n            if self.isasync:\n                return self._submit_async(\n                    parameters=parameters,\n                    flow_run_context=flow_run_context,\n                    wait_for=wait_for,\n                    return_state=return_state,\n                )\n            else:\n                raise NotImplementedError(\n                    \"Submitting sync tasks with the new engine has not be implemented yet.\"\n                )\n\n        else:\n            return enter_task_run_engine(\n                self,\n                parameters=parameters,\n                wait_for=wait_for,\n                return_type=return_type,\n                task_runner=None,  # Use the flow's task runner\n                mapped=False,\n            )\n\n    async def _submit_async(\n        self,\n        parameters: Dict[str, Any],\n        flow_run_context: FlowRunContext,\n        wait_for: Optional[Iterable[PrefectFuture]],\n        return_state: bool,\n    ):\n        from prefect.new_task_engine import run_task\n\n        task_runner = flow_run_context.task_runner\n\n        task_run = await self.create_run(\n            flow_run_context=flow_run_context,\n            parameters=parameters,\n            wait_for=wait_for,\n        )\n\n        future = PrefectFuture(\n            name=task_run.name,\n            key=uuid4(),\n            task_runner=task_runner,\n            asynchronous=(self.isasync and flow_run_context.flow.isasync),\n        )\n        future.task_run = task_run\n        flow_run_context.task_run_futures.append(future)\n        await task_runner.submit(\n            key=future.key,\n            call=partial(\n                run_task,\n                task=self,\n                task_run=task_run,\n                parameters=parameters,\n                wait_for=wait_for,\n                return_type=\"state\",\n            ),\n        )\n        # TODO: I don't like this. Can we move responsibility for creating the future\n        # and setting this anyio.Event to the task runner?\n        future._submitted.set()\n\n        if return_state:\n            return await future.wait()\n        else:\n            return future\n\n    @overload\n    def map(\n        self: \"Task[P, NoReturn]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> List[PrefectFuture[None, Sync]]:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def map(\n        self: \"Task[P, Coroutine[Any, Any, T]]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> Awaitable[List[PrefectFuture[T, Async]]]:\n        ...\n\n    @overload\n    def map(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> List[PrefectFuture[T, Sync]]:\n        ...\n\n    @overload\n    def map(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        return_state: Literal[True],\n        **kwargs: P.kwargs,\n    ) -> List[State[T]]:\n        ...\n\n    def map(\n        self,\n        *args: Any,\n        return_state: bool = False,\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: Any,\n    ) -> Any:\n        \"\"\"\n        Submit a mapped run of the task to a worker.\n\n        Must be called within a flow function. 
If writing an async task, this\n        call must be awaited.\n\n        Must be called with at least one iterable and all iterables must be\n        the same length. Any arguments that are not iterable will be treated as\n        a static value and each task run will receive the same value.\n\n        Will create as many task runs as the length of the iterable(s) in the\n        backing API and submit the task runs to the flow's task runner. This\n        call blocks if given a future as input while the future is resolved. It\n        also blocks while the tasks are being submitted, once they are\n        submitted, the flow function will continue executing. However, note\n        that the `SequentialTaskRunner` does not implement parallel execution\n        for sync tasks and they are fully resolved on submission.\n\n        Args:\n            *args: Iterable and static arguments to run the tasks with\n            return_state: Return a list of Prefect States that wrap the results\n                of each task run.\n            wait_for: Upstream task futures to wait for before starting the\n                task\n            **kwargs: Keyword iterable arguments to run the task with\n\n        Returns:\n            A list of futures allowing asynchronous access to the state of the\n            tasks\n\n        Examples:\n\n            Define a task\n\n            >>> from prefect import task\n            >>> @task\n            >>> def my_task(x):\n            >>>     return x + 1\n\n            Create mapped tasks\n\n            >>> from prefect import flow\n            >>> @flow\n            >>> def my_flow():\n            >>>     my_task.map([1, 2, 3])\n\n            Wait for all mapped tasks to finish\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     futures = my_task.map([1, 2, 3])\n            >>>     for future in futures:\n            >>>         future.wait()\n            >>>     # Now all of the mapped tasks have finished\n            >>>     my_task(10)\n\n            Use the result from mapped tasks in a flow\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     futures = my_task.map([1, 2, 3])\n            >>>     for future in futures:\n            >>>         print(future.result())\n            >>> my_flow()\n            2\n            3\n            4\n\n            Enforce ordering between tasks that do not exchange data\n            >>> @task\n            >>> def task_1(x):\n            >>>     pass\n            >>>\n            >>> @task\n            >>> def task_2(y):\n            >>>     pass\n            >>>\n            >>> @flow\n            >>> def my_flow():\n            >>>     x = task_1.submit()\n            >>>\n            >>>     # task 2 will wait for task_1 to complete\n            >>>     y = task_2.map([1, 2, 3], wait_for=[x])\n\n            Use a non-iterable input as a constant across mapped tasks\n            >>> @task\n            >>> def display(prefix, item):\n            >>>    print(prefix, item)\n            >>>\n            >>> @flow\n            >>> def my_flow():\n            >>>     display.map(\"Check it out: \", [1, 2, 3])\n            >>>\n            >>> my_flow()\n            Check it out: 1\n            Check it out: 2\n            Check it out: 3\n\n            Use `unmapped` to treat an iterable argument as a constant\n            >>> from prefect import unmapped\n            >>>\n            >>> @task\n            >>> def add_n_to_items(items, n):\n            >>>     return [item 
+ n for item in items]\n            >>>\n            >>> @flow\n            >>> def my_flow():\n            >>>     return add_n_to_items.map(unmapped([10, 20]), n=[1, 2, 3])\n            >>>\n            >>> my_flow()\n            [[11, 21], [12, 22], [13, 23]]\n        \"\"\"\n\n        from prefect.engine import begin_task_map, enter_task_run_engine\n\n        # Convert the call args/kwargs to a parameter dict; do not apply defaults\n        # since they should not be mapped over\n        parameters = get_call_parameters(self.fn, args, kwargs, apply_defaults=False)\n        return_type = \"state\" if return_state else \"future\"\n\n        task_viz_tracker = get_task_viz_tracker()\n        if task_viz_tracker:\n            raise VisualizationUnsupportedError(\n                \"`task.map()` is not currently supported by `flow.visualize()`\"\n            )\n\n        if (\n            PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value()\n            and not FlowRunContext.get()\n        ):\n            map_call = create_call(\n                begin_task_map,\n                task=self,\n                parameters=parameters,\n                flow_run_context=None,\n                wait_for=wait_for,\n                return_type=return_type,\n                task_runner=None,\n                autonomous=True,\n            )\n            if self.isasync:\n                return from_async.wait_for_call_in_loop_thread(map_call)\n            else:\n                return from_sync.wait_for_call_in_loop_thread(map_call)\n\n        return enter_task_run_engine(\n            self,\n            parameters=parameters,\n            wait_for=wait_for,\n            return_type=return_type,\n            task_runner=None,\n            mapped=True,\n        )\n\n    def serve(self, task_runner: Optional[BaseTaskRunner] = None) -> \"Task\":\n        \"\"\"Serve the task using the provided task runner. This method is used to\n        establish a websocket connection with the Prefect server and listen for\n        submitted task runs to execute.\n\n        Args:\n            task_runner: The task runner to use for serving the task. If not provided,\n                the default ConcurrentTaskRunner will be used.\n\n        Examples:\n            Serve a task using the default task runner\n            >>> @task\n            >>> def my_task():\n            >>>     return 1\n\n            >>> my_task.serve()\n        \"\"\"\n\n        if not PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING:\n            raise ValueError(\n                \"Task's `serve` method is an experimental feature and must be enabled with \"\n                \"`prefect config set PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING=True`\"\n            )\n\n        from prefect.task_server import serve\n\n        serve(self, task_runner=task_runner)\n
    ","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task.map","title":"map","text":"

    Submit a mapped run of the task to a worker.

    Must be called within a flow function. If writing an async task, this call must be awaited.

    Must be called with at least one iterable and all iterables must be the same length. Any arguments that are not iterable will be treated as a static value and each task run will receive the same value.

    Will create as many task runs as the length of the iterable(s) in the backing API and submit the task runs to the flow's task runner. This call blocks if given a future as input while the future is resolved. It also blocks while the tasks are being submitted; once they are submitted, the flow function will continue executing. However, note that the SequentialTaskRunner does not implement parallel execution for sync tasks and they are fully resolved on submission.

    Parameters:

        *args (Any): Iterable and static arguments to run the tasks with. Default: ()
        return_state (bool): Return a list of Prefect States that wrap the results of each task run. Default: False
        wait_for (Optional[Iterable[PrefectFuture]]): Upstream task futures to wait for before starting the task. Default: None
        **kwargs (Any): Keyword iterable arguments to run the task with. Default: {}

    Returns:

        Any: A list of futures allowing asynchronous access to the state of the tasks

    Define a task\n\n>>> from prefect import task\n>>> @task\n>>> def my_task(x):\n>>>     return x + 1\n\nCreate mapped tasks\n\n>>> from prefect import flow\n>>> @flow\n>>> def my_flow():\n>>>     my_task.map([1, 2, 3])\n\nWait for all mapped tasks to finish\n\n>>> @flow\n>>> def my_flow():\n>>>     futures = my_task.map([1, 2, 3])\n>>>     for future in futures:\n>>>         future.wait()\n>>>     # Now all of the mapped tasks have finished\n>>>     my_task(10)\n\nUse the result from mapped tasks in a flow\n\n>>> @flow\n>>> def my_flow():\n>>>     futures = my_task.map([1, 2, 3])\n>>>     for future in futures:\n>>>         print(future.result())\n>>> my_flow()\n2\n3\n4\n\nEnforce ordering between tasks that do not exchange data\n>>> @task\n>>> def task_1(x):\n>>>     pass\n>>>\n>>> @task\n>>> def task_2(y):\n>>>     pass\n>>>\n>>> @flow\n>>> def my_flow():\n>>>     x = task_1.submit()\n>>>\n>>>     # task 2 will wait for task_1 to complete\n>>>     y = task_2.map([1, 2, 3], wait_for=[x])\n\nUse a non-iterable input as a constant across mapped tasks\n>>> @task\n>>> def display(prefix, item):\n>>>    print(prefix, item)\n>>>\n>>> @flow\n>>> def my_flow():\n>>>     display.map(\"Check it out: \", [1, 2, 3])\n>>>\n>>> my_flow()\nCheck it out: 1\nCheck it out: 2\nCheck it out: 3\n\nUse `unmapped` to treat an iterable argument as a constant\n>>> from prefect import unmapped\n>>>\n>>> @task\n>>> def add_n_to_items(items, n):\n>>>     return [item + n for item in items]\n>>>\n>>> @flow\n>>> def my_flow():\n>>>     return add_n_to_items.map(unmapped([10, 20]), n=[1, 2, 3])\n>>>\n>>> my_flow()\n[[11, 21], [12, 22], [13, 23]]\n
    Source code in prefect/tasks.py
    def map(\n    self,\n    *args: Any,\n    return_state: bool = False,\n    wait_for: Optional[Iterable[PrefectFuture]] = None,\n    **kwargs: Any,\n) -> Any:\n    \"\"\"\n    Submit a mapped run of the task to a worker.\n\n    Must be called within a flow function. If writing an async task, this\n    call must be awaited.\n\n    Must be called with at least one iterable and all iterables must be\n    the same length. Any arguments that are not iterable will be treated as\n    a static value and each task run will receive the same value.\n\n    Will create as many task runs as the length of the iterable(s) in the\n    backing API and submit the task runs to the flow's task runner. This\n    call blocks if given a future as input while the future is resolved. It\n    also blocks while the tasks are being submitted, once they are\n    submitted, the flow function will continue executing. However, note\n    that the `SequentialTaskRunner` does not implement parallel execution\n    for sync tasks and they are fully resolved on submission.\n\n    Args:\n        *args: Iterable and static arguments to run the tasks with\n        return_state: Return a list of Prefect States that wrap the results\n            of each task run.\n        wait_for: Upstream task futures to wait for before starting the\n            task\n        **kwargs: Keyword iterable arguments to run the task with\n\n    Returns:\n        A list of futures allowing asynchronous access to the state of the\n        tasks\n\n    Examples:\n\n        Define a task\n\n        >>> from prefect import task\n        >>> @task\n        >>> def my_task(x):\n        >>>     return x + 1\n\n        Create mapped tasks\n\n        >>> from prefect import flow\n        >>> @flow\n        >>> def my_flow():\n        >>>     my_task.map([1, 2, 3])\n\n        Wait for all mapped tasks to finish\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     futures = my_task.map([1, 2, 3])\n        >>>     for future in futures:\n        >>>         future.wait()\n        >>>     # Now all of the mapped tasks have finished\n        >>>     my_task(10)\n\n        Use the result from mapped tasks in a flow\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     futures = my_task.map([1, 2, 3])\n        >>>     for future in futures:\n        >>>         print(future.result())\n        >>> my_flow()\n        2\n        3\n        4\n\n        Enforce ordering between tasks that do not exchange data\n        >>> @task\n        >>> def task_1(x):\n        >>>     pass\n        >>>\n        >>> @task\n        >>> def task_2(y):\n        >>>     pass\n        >>>\n        >>> @flow\n        >>> def my_flow():\n        >>>     x = task_1.submit()\n        >>>\n        >>>     # task 2 will wait for task_1 to complete\n        >>>     y = task_2.map([1, 2, 3], wait_for=[x])\n\n        Use a non-iterable input as a constant across mapped tasks\n        >>> @task\n        >>> def display(prefix, item):\n        >>>    print(prefix, item)\n        >>>\n        >>> @flow\n        >>> def my_flow():\n        >>>     display.map(\"Check it out: \", [1, 2, 3])\n        >>>\n        >>> my_flow()\n        Check it out: 1\n        Check it out: 2\n        Check it out: 3\n\n        Use `unmapped` to treat an iterable argument as a constant\n        >>> from prefect import unmapped\n        >>>\n        >>> @task\n        >>> def add_n_to_items(items, n):\n        >>>     return [item + n for item in items]\n        >>>\n        >>> @flow\n        
>>> def my_flow():\n        >>>     return add_n_to_items.map(unmapped([10, 20]), n=[1, 2, 3])\n        >>>\n        >>> my_flow()\n        [[11, 21], [12, 22], [13, 23]]\n    \"\"\"\n\n    from prefect.engine import begin_task_map, enter_task_run_engine\n\n    # Convert the call args/kwargs to a parameter dict; do not apply defaults\n    # since they should not be mapped over\n    parameters = get_call_parameters(self.fn, args, kwargs, apply_defaults=False)\n    return_type = \"state\" if return_state else \"future\"\n\n    task_viz_tracker = get_task_viz_tracker()\n    if task_viz_tracker:\n        raise VisualizationUnsupportedError(\n            \"`task.map()` is not currently supported by `flow.visualize()`\"\n        )\n\n    if (\n        PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value()\n        and not FlowRunContext.get()\n    ):\n        map_call = create_call(\n            begin_task_map,\n            task=self,\n            parameters=parameters,\n            flow_run_context=None,\n            wait_for=wait_for,\n            return_type=return_type,\n            task_runner=None,\n            autonomous=True,\n        )\n        if self.isasync:\n            return from_async.wait_for_call_in_loop_thread(map_call)\n        else:\n            return from_sync.wait_for_call_in_loop_thread(map_call)\n\n    return enter_task_run_engine(\n        self,\n        parameters=parameters,\n        wait_for=wait_for,\n        return_type=return_type,\n        task_runner=None,\n        mapped=True,\n    )\n
    ","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task.serve","title":"serve","text":"

    Serve the task using the provided task runner. This method is used to establish a websocket connection with the Prefect server and listen for submitted task runs to execute.

    Parameters:

        task_runner (Optional[BaseTaskRunner]): The task runner to use for serving the task. If not provided, the default ConcurrentTaskRunner will be used. Default: None

    Examples:

    Serve a task using the default task runner

    >>> @task\n>>> def my_task():\n>>>     return 1\n
    >>> my_task.serve()\n
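
    Beyond the docstring example above, a hedged sketch of the other side of this workflow (the my_tasks module name is hypothetical): with PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING enabled and the task being served in another process, calling submit() outside of a flow creates an autonomous task run that the served task picks up for execution, as the submit() source in this reference shows.

    ```python
    # Assumes `prefect config set PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING=True`
    # and that `my_task.serve()` is already running in another process.
    from my_tasks import my_task  # hypothetical module defining the served task

    # Outside of a flow, submit() queues an autonomous task run for the served task
    # instead of executing the function in this process.
    my_task.submit()
    ```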
    Source code in prefect/tasks.py
    def serve(self, task_runner: Optional[BaseTaskRunner] = None) -> \"Task\":\n    \"\"\"Serve the task using the provided task runner. This method is used to\n    establish a websocket connection with the Prefect server and listen for\n    submitted task runs to execute.\n\n    Args:\n        task_runner: The task runner to use for serving the task. If not provided,\n            the default ConcurrentTaskRunner will be used.\n\n    Examples:\n        Serve a task using the default task runner\n        >>> @task\n        >>> def my_task():\n        >>>     return 1\n\n        >>> my_task.serve()\n    \"\"\"\n\n    if not PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING:\n        raise ValueError(\n            \"Task's `serve` method is an experimental feature and must be enabled with \"\n            \"`prefect config set PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING=True`\"\n        )\n\n    from prefect.task_server import serve\n\n    serve(self, task_runner=task_runner)\n
    ","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task.submit","title":"submit","text":"

    Submit a run of the task to the engine.

    If writing an async task, this call must be awaited.

    If called from within a flow function, this will create a new task run in the backing API and submit the task to the flow's task runner. This call only blocks execution while the task is being submitted; once it is submitted, the flow function will continue executing. However, note that the SequentialTaskRunner does not implement parallel execution for sync tasks and they are fully resolved on submission.

    Parameters:

        *args (Any): Arguments to run the task with. Default: ()
        return_state (bool): Return the result of the task run wrapped in a Prefect State. Default: False
        wait_for (Optional[Iterable[PrefectFuture]]): Upstream task futures to wait for before starting the task. Default: None
        **kwargs (Any): Keyword arguments to run the task with. Default: {}

    Returns:

        Union[PrefectFuture, Awaitable[PrefectFuture], TaskRun, Awaitable[TaskRun]]:
            If return_state is False, a future allowing asynchronous access to the state of the task.
            If return_state is True, a future wrapped in a Prefect State allowing asynchronous access to the state of the task.

    Define a task\n\n>>> from prefect import task\n>>> @task\n>>> def my_task():\n>>>     return \"hello\"\n\nRun a task in a flow\n\n>>> from prefect import flow\n>>> @flow\n>>> def my_flow():\n>>>     my_task.submit()\n\nWait for a task to finish\n\n>>> @flow\n>>> def my_flow():\n>>>     my_task.submit().wait()\n\nUse the result from a task in a flow\n\n>>> @flow\n>>> def my_flow():\n>>>     print(my_task.submit().result())\n>>>\n>>> my_flow()\nhello\n\nRun an async task in an async flow\n\n>>> @task\n>>> async def my_async_task():\n>>>     pass\n>>>\n>>> @flow\n>>> async def my_flow():\n>>>     await my_async_task.submit()\n\nRun a sync task in an async flow\n\n>>> @flow\n>>> async def my_flow():\n>>>     my_task.submit()\n\nEnforce ordering between tasks that do not exchange data\n>>> @task\n>>> def task_1():\n>>>     pass\n>>>\n>>> @task\n>>> def task_2():\n>>>     pass\n>>>\n>>> @flow\n>>> def my_flow():\n>>>     x = task_1.submit()\n>>>\n>>>     # task 2 will wait for task_1 to complete\n>>>     y = task_2.submit(wait_for=[x])\n
    Source code in prefect/tasks.py
    def submit(\n    self,\n    *args: Any,\n    return_state: bool = False,\n    wait_for: Optional[Iterable[PrefectFuture]] = None,\n    **kwargs: Any,\n) -> Union[PrefectFuture, Awaitable[PrefectFuture], TaskRun, Awaitable[TaskRun]]:\n    \"\"\"\n    Submit a run of the task to the engine.\n\n    If writing an async task, this call must be awaited.\n\n    If called from within a flow function,\n\n    Will create a new task run in the backing API and submit the task to the flow's\n    task runner. This call only blocks execution while the task is being submitted,\n    once it is submitted, the flow function will continue executing. However, note\n    that the `SequentialTaskRunner` does not implement parallel execution for sync tasks\n    and they are fully resolved on submission.\n\n    Args:\n        *args: Arguments to run the task with\n        return_state: Return the result of the flow run wrapped in a\n            Prefect State.\n        wait_for: Upstream task futures to wait for before starting the task\n        **kwargs: Keyword arguments to run the task with\n\n    Returns:\n        If `return_state` is False a future allowing asynchronous access to\n            the state of the task\n        If `return_state` is True a future wrapped in a Prefect State allowing asynchronous access to\n            the state of the task\n\n    Examples:\n\n        Define a task\n\n        >>> from prefect import task\n        >>> @task\n        >>> def my_task():\n        >>>     return \"hello\"\n\n        Run a task in a flow\n\n        >>> from prefect import flow\n        >>> @flow\n        >>> def my_flow():\n        >>>     my_task.submit()\n\n        Wait for a task to finish\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     my_task.submit().wait()\n\n        Use the result from a task in a flow\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     print(my_task.submit().result())\n        >>>\n        >>> my_flow()\n        hello\n\n        Run an async task in an async flow\n\n        >>> @task\n        >>> async def my_async_task():\n        >>>     pass\n        >>>\n        >>> @flow\n        >>> async def my_flow():\n        >>>     await my_async_task.submit()\n\n        Run a sync task in an async flow\n\n        >>> @flow\n        >>> async def my_flow():\n        >>>     my_task.submit()\n\n        Enforce ordering between tasks that do not exchange data\n        >>> @task\n        >>> def task_1():\n        >>>     pass\n        >>>\n        >>> @task\n        >>> def task_2():\n        >>>     pass\n        >>>\n        >>> @flow\n        >>> def my_flow():\n        >>>     x = task_1.submit()\n        >>>\n        >>>     # task 2 will wait for task_1 to complete\n        >>>     y = task_2.submit(wait_for=[x])\n\n    \"\"\"\n\n    from prefect.engine import create_autonomous_task_run, enter_task_run_engine\n\n    # Convert the call args/kwargs to a parameter dict\n    parameters = get_call_parameters(self.fn, args, kwargs)\n    return_type = \"state\" if return_state else \"future\"\n    flow_run_context = FlowRunContext.get()\n\n    task_viz_tracker = get_task_viz_tracker()\n    if task_viz_tracker:\n        raise VisualizationUnsupportedError(\n            \"`task.submit()` is not currently supported by `flow.visualize()`\"\n        )\n\n    if PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING and not flow_run_context:\n        create_autonomous_task_run_call = create_call(\n            create_autonomous_task_run, task=self, parameters=parameters\n   
     )\n        if self.isasync:\n            return from_async.wait_for_call_in_loop_thread(\n                create_autonomous_task_run_call\n            )\n        else:\n            return from_sync.wait_for_call_in_loop_thread(\n                create_autonomous_task_run_call\n            )\n    if PREFECT_EXPERIMENTAL_ENABLE_NEW_ENGINE and flow_run_context:\n        if self.isasync:\n            return self._submit_async(\n                parameters=parameters,\n                flow_run_context=flow_run_context,\n                wait_for=wait_for,\n                return_state=return_state,\n            )\n        else:\n            raise NotImplementedError(\n                \"Submitting sync tasks with the new engine has not be implemented yet.\"\n            )\n\n    else:\n        return enter_task_run_engine(\n            self,\n            parameters=parameters,\n            wait_for=wait_for,\n            return_type=return_type,\n            task_runner=None,  # Use the flow's task runner\n            mapped=False,\n        )\n
    ","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task.with_options","title":"with_options","text":"

    Create a new task from the current object, updating provided options.

    Parameters:

        name (str): A new name for the task. Default: None
        description (str): A new description for the task. Default: None
        tags (Iterable[str]): A new set of tags for the task. If given, existing tags are ignored, not merged. Default: None
        cache_key_fn (Callable[[TaskRunContext, Dict[str, Any]], Optional[str]]): A new cache key function for the task. Default: None
        cache_expiration (timedelta): A new cache expiration time for the task. Default: None
        task_run_name (Optional[Union[Callable[[], str], str]]): An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string. Default: None
        retries (Optional[int]): A new number of times to retry on task run failure. Default: NotSet
        retry_delay_seconds (Union[float, int, List[float], Callable[[int], List[float]]]): Optionally configures how long to wait before retrying the task after failure. This is only applicable if retries is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50. Default: NotSet
        retry_jitter_factor (Optional[float]): An optional factor by which a retry delay can be jittered in order to avoid a "thundering herd". Default: NotSet
        persist_result (Optional[bool]): A new option for enabling or disabling result persistence. Default: NotSet
        result_storage (Optional[ResultStorage]): A new storage type to use for results. Default: NotSet
        result_serializer (Optional[ResultSerializer]): A new serializer to use for results. Default: NotSet
        result_storage_key (Optional[str]): A new key for the persisted result to be stored at. Default: NotSet
        timeout_seconds (Union[int, float]): A new maximum time for the task to complete in seconds. Default: None
        log_prints (Optional[bool]): A new option for enabling or disabling redirection of print statements. Default: NotSet
        refresh_cache (Optional[bool]): A new option for enabling or disabling cache refresh. Default: NotSet
        on_completion (Optional[List[Callable[[Task, TaskRun, State], Union[Awaitable[None], None]]]]): A new list of callables to run when the task enters a completed state. Default: None
        on_failure (Optional[List[Callable[[Task, TaskRun, State], Union[Awaitable[None], None]]]]): A new list of callables to run when the task enters a failed state. Default: None
        retry_condition_fn (Optional[Callable[[Task, TaskRun, State], bool]]): An optional callable run when a task run returns a Failed state. Should return True if the task should continue to its retry policy, and False if the task should end as failed. Defaults to None, indicating the task should always continue to its retry policy. Default: None
        viz_return_value (Optional[Any]): An optional value to return when the task dependency tree is visualized. Default: None

    Returns:

        A new Task instance.

    Create a new task from an existing task and update the name\n\n>>> @task(name=\"My task\")\n>>> def my_task():\n>>>     return 1\n>>>\n>>> new_task = my_task.with_options(name=\"My new task\")\n\nCreate a new task from an existing task and update the retry settings\n\n>>> from random import randint\n>>>\n>>> @task(retries=1, retry_delay_seconds=5)\n>>> def my_task():\n>>>     x = randint(0, 5)\n>>>     if x >= 3:  # Make a task that fails sometimes\n>>>         raise ValueError(\"Retry me please!\")\n>>>     return x\n>>>\n>>> new_task = my_task.with_options(retries=5, retry_delay_seconds=2)\n\nUse a task with updated options within a flow\n\n>>> @task(name=\"My task\")\n>>> def my_task():\n>>>     return 1\n>>>\n>>> @flow\n>>> def my_flow():\n>>>     new_task = my_task.with_options(name=\"My new task\")\n>>>     new_task()\n
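
    As an additional, hedged illustration (not part of the original docstring; the retry_on_value_error helper and parse_payload task are hypothetical), a custom retry_condition_fn can be swapped in through with_options so the task only retries on specific exceptions:

    ```python
    from prefect import task

    def retry_on_value_error(task, task_run, state) -> bool:
        """Continue to the retry policy only when the failure was a ValueError."""
        try:
            state.result()  # re-raises the exception captured in the Failed state
        except ValueError:
            return True
        except Exception:
            return False
        return False

    @task(retries=2)
    def parse_payload(payload: dict):
        return payload["value"]

    # A variant of the task that only retries when parsing raised a ValueError
    strict_parse = parse_payload.with_options(retry_condition_fn=retry_on_value_error)
    ```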
    Source code in prefect/tasks.py
    def with_options(\n    self,\n    *,\n    name: str = None,\n    description: str = None,\n    tags: Iterable[str] = None,\n    cache_key_fn: Callable[\n        [\"TaskRunContext\", Dict[str, Any]], Optional[str]\n    ] = None,\n    task_run_name: Optional[Union[Callable[[], str], str]] = None,\n    cache_expiration: datetime.timedelta = None,\n    retries: Optional[int] = NotSet,\n    retry_delay_seconds: Union[\n        float,\n        int,\n        List[float],\n        Callable[[int], List[float]],\n    ] = NotSet,\n    retry_jitter_factor: Optional[float] = NotSet,\n    persist_result: Optional[bool] = NotSet,\n    result_storage: Optional[ResultStorage] = NotSet,\n    result_serializer: Optional[ResultSerializer] = NotSet,\n    result_storage_key: Optional[str] = NotSet,\n    cache_result_in_memory: Optional[bool] = None,\n    timeout_seconds: Union[int, float] = None,\n    log_prints: Optional[bool] = NotSet,\n    refresh_cache: Optional[bool] = NotSet,\n    on_completion: Optional[\n        List[Callable[[\"Task\", TaskRun, State], Union[Awaitable[None], None]]]\n    ] = None,\n    on_failure: Optional[\n        List[Callable[[\"Task\", TaskRun, State], Union[Awaitable[None], None]]]\n    ] = None,\n    retry_condition_fn: Optional[Callable[[\"Task\", TaskRun, State], bool]] = None,\n    viz_return_value: Optional[Any] = None,\n):\n    \"\"\"\n    Create a new task from the current object, updating provided options.\n\n    Args:\n        name: A new name for the task.\n        description: A new description for the task.\n        tags: A new set of tags for the task. If given, existing tags are ignored,\n            not merged.\n        cache_key_fn: A new cache key function for the task.\n        cache_expiration: A new cache expiration time for the task.\n        task_run_name: An optional name to distinguish runs of this task; this name can be provided\n            as a string template with the task's keyword arguments as variables,\n            or a function that returns a string.\n        retries: A new number of times to retry on task run failure.\n        retry_delay_seconds: Optionally configures how long to wait before retrying\n            the task after failure. This is only applicable if `retries` is nonzero.\n            This setting can either be a number of seconds, a list of retry delays,\n            or a callable that, given the total number of retries, generates a list\n            of retry delays. If a number of seconds, that delay will be applied to\n            all retries. If a list, each retry will wait for the corresponding delay\n            before retrying. 
When passing a callable or a list, the number of\n            configured retry delays cannot exceed 50.\n        retry_jitter_factor: An optional factor that defines the factor to which a\n            retry can be jittered in order to avoid a \"thundering herd\".\n        persist_result: A new option for enabling or disabling result persistence.\n        result_storage: A new storage type to use for results.\n        result_serializer: A new serializer to use for results.\n        result_storage_key: A new key for the persisted result to be stored at.\n        timeout_seconds: A new maximum time for the task to complete in seconds.\n        log_prints: A new option for enabling or disabling redirection of `print` statements.\n        refresh_cache: A new option for enabling or disabling cache refresh.\n        on_completion: A new list of callables to run when the task enters a completed state.\n        on_failure: A new list of callables to run when the task enters a failed state.\n        retry_condition_fn: An optional callable run when a task run returns a Failed state.\n            Should return `True` if the task should continue to its retry policy, and `False`\n            if the task should end as failed. Defaults to `None`, indicating the task should\n            always continue to its retry policy.\n        viz_return_value: An optional value to return when the task dependency tree is visualized.\n\n    Returns:\n        A new `Task` instance.\n\n    Examples:\n\n        Create a new task from an existing task and update the name\n\n        >>> @task(name=\"My task\")\n        >>> def my_task():\n        >>>     return 1\n        >>>\n        >>> new_task = my_task.with_options(name=\"My new task\")\n\n        Create a new task from an existing task and update the retry settings\n\n        >>> from random import randint\n        >>>\n        >>> @task(retries=1, retry_delay_seconds=5)\n        >>> def my_task():\n        >>>     x = randint(0, 5)\n        >>>     if x >= 3:  # Make a task that fails sometimes\n        >>>         raise ValueError(\"Retry me please!\")\n        >>>     return x\n        >>>\n        >>> new_task = my_task.with_options(retries=5, retry_delay_seconds=2)\n\n        Use a task with updated options within a flow\n\n        >>> @task(name=\"My task\")\n        >>> def my_task():\n        >>>     return 1\n        >>>\n        >>> @flow\n        >>> my_flow():\n        >>>     new_task = my_task.with_options(name=\"My new task\")\n        >>>     new_task()\n    \"\"\"\n    return Task(\n        fn=self.fn,\n        name=name or self.name,\n        description=description or self.description,\n        tags=tags or copy(self.tags),\n        cache_key_fn=cache_key_fn or self.cache_key_fn,\n        cache_expiration=cache_expiration or self.cache_expiration,\n        task_run_name=task_run_name,\n        retries=retries if retries is not NotSet else self.retries,\n        retry_delay_seconds=(\n            retry_delay_seconds\n            if retry_delay_seconds is not NotSet\n            else self.retry_delay_seconds\n        ),\n        retry_jitter_factor=(\n            retry_jitter_factor\n            if retry_jitter_factor is not NotSet\n            else self.retry_jitter_factor\n        ),\n        persist_result=(\n            persist_result if persist_result is not NotSet else self.persist_result\n        ),\n        result_storage=(\n            result_storage if result_storage is not NotSet else self.result_storage\n        ),\n        
result_storage_key=(\n            result_storage_key\n            if result_storage_key is not NotSet\n            else self.result_storage_key\n        ),\n        result_serializer=(\n            result_serializer\n            if result_serializer is not NotSet\n            else self.result_serializer\n        ),\n        cache_result_in_memory=(\n            cache_result_in_memory\n            if cache_result_in_memory is not None\n            else self.cache_result_in_memory\n        ),\n        timeout_seconds=(\n            timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n        ),\n        log_prints=(log_prints if log_prints is not NotSet else self.log_prints),\n        refresh_cache=(\n            refresh_cache if refresh_cache is not NotSet else self.refresh_cache\n        ),\n        on_completion=on_completion or self.on_completion,\n        on_failure=on_failure or self.on_failure,\n        retry_condition_fn=retry_condition_fn or self.retry_condition_fn,\n        viz_return_value=viz_return_value or self.viz_return_value,\n    )\n
    ","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.exponential_backoff","title":"exponential_backoff","text":"

    A task retry backoff utility that configures exponential backoff for task retries. The exponential backoff design matches the urllib3 implementation.

    Parameters:

        backoff_factor (float): the base delay for the first retry; subsequent retries will increase the delay time by powers of 2. (required)

    Returns:

        Callable[[int], List[float]]: a callable that can be passed to the task constructor
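
    For example, a minimal sketch (the flaky_request task is hypothetical) that passes the returned callable to the task constructor via retry_delay_seconds:

    ```python
    from prefect import task
    from prefect.tasks import exponential_backoff

    # Retries wait roughly 10s, 20s, then 40s before any jitter is applied
    @task(
        retries=3,
        retry_delay_seconds=exponential_backoff(backoff_factor=10),
        retry_jitter_factor=1,
    )
    def flaky_request():
        ...
    ```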

    Source code in prefect/tasks.py
    def exponential_backoff(backoff_factor: float) -> Callable[[int], List[float]]:\n    \"\"\"\n    A task retry backoff utility that configures exponential backoff for task retries.\n    The exponential backoff design matches the urllib3 implementation.\n\n    Arguments:\n        backoff_factor: the base delay for the first retry, subsequent retries will\n            increase the delay time by powers of 2.\n\n    Returns:\n        a callable that can be passed to the task constructor\n    \"\"\"\n\n    def retry_backoff_callable(retries: int) -> List[float]:\n        # no more than 50 retry delays can be configured on a task\n        retries = min(retries, 50)\n\n        return [backoff_factor * max(0, 2**r) for r in range(retries)]\n\n    return retry_backoff_callable\n
    ","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.task","title":"task","text":"

    Decorator to designate a function as a task in a Prefect workflow.

    This decorator may be used for asynchronous or synchronous functions.

    Parameters:

        name (str): An optional name for the task; if not provided, the name will be inferred from the given function. Default: None
        description (str): An optional string description for the task. Default: None
        tags (Iterable[str]): An optional set of tags to be associated with runs of this task. These tags are combined with any tags defined by a prefect.tags context at task runtime. Default: None
        version (str): An optional string specifying the version of this task definition. Default: None
        cache_key_fn (Callable[[TaskRunContext, Dict[str, Any]], Optional[str]]): An optional callable that, given the task run context and call parameters, generates a string key; if the key matches a previous completed state, that state result will be restored instead of running the task again. Default: None
        cache_expiration (timedelta): An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire. Default: None
        task_run_name (Optional[Union[Callable[[], str], str]]): An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string. Default: None
        retries (int): An optional number of times to retry on task run failure. Default: None
        retry_delay_seconds (Union[float, int, List[float], Callable[[int], List[float]]]): Optionally configures how long to wait before retrying the task after failure. This is only applicable if retries is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50. Default: None
        retry_jitter_factor (Optional[float]): An optional factor by which a retry delay can be jittered in order to avoid a "thundering herd". Default: None
        persist_result (Optional[bool]): An optional toggle indicating whether the result of this task should be persisted to result storage. Defaults to None, which indicates that Prefect should choose whether the result should be persisted depending on the features being used. Default: None
        result_storage (Optional[ResultStorage]): An optional block to use to persist the result of this task. Defaults to the value set in the flow the task is called in. Default: None
        result_storage_key (Optional[str]): An optional key to store the result in storage at when persisted. Defaults to a unique identifier. Default: None
        result_serializer (Optional[ResultSerializer]): An optional serializer to use to serialize the result of this task for persistence. Defaults to the value set in the flow the task is called in. Default: None
        timeout_seconds (Union[int, float]): An optional number of seconds indicating a maximum runtime for the task. If the task exceeds this runtime, it will be marked as failed. Default: None
        log_prints (Optional[bool]): If set, print statements in the task will be redirected to the Prefect logger for the task run. Defaults to None, which indicates that the value from the flow should be used. Default: None
        refresh_cache (Optional[bool]): If set, cached results for the cache key are not used. Defaults to None, which indicates that a cached result from a previous execution with matching cache key is used. Default: None
        on_failure (Optional[List[Callable[[Task, TaskRun, State], None]]]): An optional list of callables to run when the task enters a failed state. Default: None
        on_completion (Optional[List[Callable[[Task, TaskRun, State], None]]]): An optional list of callables to run when the task enters a completed state. Default: None
        retry_condition_fn (Optional[Callable[[Task, TaskRun, State], bool]]): An optional callable run when a task run returns a Failed state. Should return True if the task should continue to its retry policy (e.g. retries=3), and False if the task should end as failed. Defaults to None, indicating the task should always continue to its retry policy. Default: None
        viz_return_value (Any): An optional value to return when the task dependency tree is visualized. Default: None

    Returns:

        A callable Task object which, when called, will submit the task for execution.

    Examples:

    Define a simple task

    >>> @task\n>>> def add(x, y):\n>>>     return x + y\n

    Define an async task

    >>> @task\n>>> async def add(x, y):\n>>>     return x + y\n

    Define a task with tags and a description

    >>> @task(tags={\"a\", \"b\"}, description=\"This task is empty but its my first!\")\n>>> def my_task():\n>>>     pass\n

    Define a task with a custom name

    >>> @task(name=\"The Ultimate Task\")\n>>> def my_task():\n>>>     pass\n

    Define a task that retries 3 times with a 5 second delay between attempts

    >>> from random import randint\n>>>\n>>> @task(retries=3, retry_delay_seconds=5)\n>>> def my_task():\n>>>     x = randint(0, 5)\n>>>     if x >= 3:  # Make a task that fails sometimes\n>>>         raise ValueError(\"Retry me please!\")\n>>>     return x\n

    Define a task that is cached for a day based on its inputs

    >>> from prefect.tasks import task_input_hash\n>>> from datetime import timedelta\n>>>\n>>> @task(cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1))\n>>> def my_task():\n>>>     return \"hello\"\n
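
    As a further hedged sketch (not from the original docstring), task_run_name accepts a string template over the task's keyword arguments:

    ```python
    >>> @task(task_run_name="add-{x}-and-{y}")
    >>> def add(x, y):
    >>>     return x + y
    ```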
    Source code in prefect/tasks.py
    def task(\n    __fn=None,\n    *,\n    name: str = None,\n    description: str = None,\n    tags: Iterable[str] = None,\n    version: str = None,\n    cache_key_fn: Callable[[\"TaskRunContext\", Dict[str, Any]], Optional[str]] = None,\n    cache_expiration: datetime.timedelta = None,\n    task_run_name: Optional[Union[Callable[[], str], str]] = None,\n    retries: int = None,\n    retry_delay_seconds: Union[\n        float,\n        int,\n        List[float],\n        Callable[[int], List[float]],\n    ] = None,\n    retry_jitter_factor: Optional[float] = None,\n    persist_result: Optional[bool] = None,\n    result_storage: Optional[ResultStorage] = None,\n    result_storage_key: Optional[str] = None,\n    result_serializer: Optional[ResultSerializer] = None,\n    cache_result_in_memory: bool = True,\n    timeout_seconds: Union[int, float] = None,\n    log_prints: Optional[bool] = None,\n    refresh_cache: Optional[bool] = None,\n    on_completion: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n    on_failure: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n    retry_condition_fn: Optional[Callable[[\"Task\", TaskRun, State], bool]] = None,\n    viz_return_value: Any = None,\n):\n    \"\"\"\n    Decorator to designate a function as a task in a Prefect workflow.\n\n    This decorator may be used for asynchronous or synchronous functions.\n\n    Args:\n        name: An optional name for the task; if not provided, the name will be inferred\n            from the given function.\n        description: An optional string description for the task.\n        tags: An optional set of tags to be associated with runs of this task. These\n            tags are combined with any tags defined by a `prefect.tags` context at\n            task runtime.\n        version: An optional string specifying the version of this task definition\n        cache_key_fn: An optional callable that, given the task run context and call\n            parameters, generates a string key; if the key matches a previous completed\n            state, that state result will be restored instead of running the task again.\n        cache_expiration: An optional amount of time indicating how long cached states\n            for this task should be restorable; if not provided, cached states will\n            never expire.\n        task_run_name: An optional name to distinguish runs of this task; this name can be provided\n            as a string template with the task's keyword arguments as variables,\n            or a function that returns a string.\n        retries: An optional number of times to retry on task run failure\n        retry_delay_seconds: Optionally configures how long to wait before retrying the\n            task after failure. This is only applicable if `retries` is nonzero. This\n            setting can either be a number of seconds, a list of retry delays, or a\n            callable that, given the total number of retries, generates a list of retry\n            delays. 
If a number of seconds, that delay will be applied to all retries.\n            If a list, each retry will wait for the corresponding delay before retrying.\n            When passing a callable or a list, the number of configured retry delays\n            cannot exceed 50.\n        retry_jitter_factor: An optional factor that defines the factor to which a retry\n            can be jittered in order to avoid a \"thundering herd\".\n        persist_result: An optional toggle indicating whether the result of this task\n            should be persisted to result storage. Defaults to `None`, which indicates\n            that Prefect should choose whether the result should be persisted depending on\n            the features being used.\n        result_storage: An optional block to use to persist the result of this task.\n            Defaults to the value set in the flow the task is called in.\n        result_storage_key: An optional key to store the result in storage at when persisted.\n            Defaults to a unique identifier.\n        result_serializer: An optional serializer to use to serialize the result of this\n            task for persistence. Defaults to the value set in the flow the task is\n            called in.\n        timeout_seconds: An optional number of seconds indicating a maximum runtime for\n            the task. If the task exceeds this runtime, it will be marked as failed.\n        log_prints: If set, `print` statements in the task will be redirected to the\n            Prefect logger for the task run. Defaults to `None`, which indicates\n            that the value from the flow should be used.\n        refresh_cache: If set, cached results for the cache key are not used.\n            Defaults to `None`, which indicates that a cached result from a previous\n            execution with matching cache key is used.\n        on_failure: An optional list of callables to run when the task enters a failed state.\n        on_completion: An optional list of callables to run when the task enters a completed state.\n        retry_condition_fn: An optional callable run when a task run returns a Failed state. Should\n            return `True` if the task should continue to its retry policy (e.g. `retries=3`), and `False` if the task\n            should end as failed. 
Defaults to `None`, indicating the task should always continue\n            to its retry policy.\n        viz_return_value: An optional value to return when the task dependency tree is visualized.\n\n    Returns:\n        A callable `Task` object which, when called, will submit the task for execution.\n\n    Examples:\n        Define a simple task\n\n        >>> @task\n        >>> def add(x, y):\n        >>>     return x + y\n\n        Define an async task\n\n        >>> @task\n        >>> async def add(x, y):\n        >>>     return x + y\n\n        Define a task with tags and a description\n\n        >>> @task(tags={\"a\", \"b\"}, description=\"This task is empty but its my first!\")\n        >>> def my_task():\n        >>>     pass\n\n        Define a task with a custom name\n\n        >>> @task(name=\"The Ultimate Task\")\n        >>> def my_task():\n        >>>     pass\n\n        Define a task that retries 3 times with a 5 second delay between attempts\n\n        >>> from random import randint\n        >>>\n        >>> @task(retries=3, retry_delay_seconds=5)\n        >>> def my_task():\n        >>>     x = randint(0, 5)\n        >>>     if x >= 3:  # Make a task that fails sometimes\n        >>>         raise ValueError(\"Retry me please!\")\n        >>>     return x\n\n        Define a task that is cached for a day based on its inputs\n\n        >>> from prefect.tasks import task_input_hash\n        >>> from datetime import timedelta\n        >>>\n        >>> @task(cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1))\n        >>> def my_task():\n        >>>     return \"hello\"\n    \"\"\"\n\n    if __fn:\n        return cast(\n            Task[P, R],\n            Task(\n                fn=__fn,\n                name=name,\n                description=description,\n                tags=tags,\n                version=version,\n                cache_key_fn=cache_key_fn,\n                cache_expiration=cache_expiration,\n                task_run_name=task_run_name,\n                retries=retries,\n                retry_delay_seconds=retry_delay_seconds,\n                retry_jitter_factor=retry_jitter_factor,\n                persist_result=persist_result,\n                result_storage=result_storage,\n                result_storage_key=result_storage_key,\n                result_serializer=result_serializer,\n                cache_result_in_memory=cache_result_in_memory,\n                timeout_seconds=timeout_seconds,\n                log_prints=log_prints,\n                refresh_cache=refresh_cache,\n                on_completion=on_completion,\n                on_failure=on_failure,\n                retry_condition_fn=retry_condition_fn,\n                viz_return_value=viz_return_value,\n            ),\n        )\n    else:\n        return cast(\n            Callable[[Callable[P, R]], Task[P, R]],\n            partial(\n                task,\n                name=name,\n                description=description,\n                tags=tags,\n                version=version,\n                cache_key_fn=cache_key_fn,\n                cache_expiration=cache_expiration,\n                task_run_name=task_run_name,\n                retries=retries,\n                retry_delay_seconds=retry_delay_seconds,\n                retry_jitter_factor=retry_jitter_factor,\n                persist_result=persist_result,\n                result_storage=result_storage,\n                result_storage_key=result_storage_key,\n                
result_serializer=result_serializer,\n                cache_result_in_memory=cache_result_in_memory,\n                timeout_seconds=timeout_seconds,\n                log_prints=log_prints,\n                refresh_cache=refresh_cache,\n                on_completion=on_completion,\n                on_failure=on_failure,\n                retry_condition_fn=retry_condition_fn,\n                viz_return_value=viz_return_value,\n            ),\n        )\n
    ","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.task_input_hash","title":"task_input_hash","text":"

    A task cache key implementation which hashes all inputs to the task using a JSON or cloudpickle serializer. If any arguments are not JSON serializable, the pickle serializer is used as a fallback. If cloudpickle fails, this will return a null key indicating that a cache key could not be generated for the given inputs.

    Parameters:

        context (TaskRunContext): the active TaskRunContext (required)
        arguments (Dict[str, Any]): a dictionary of arguments to be passed to the underlying task (required)

    Returns:

        Optional[str]: a string hash if hashing succeeded, else None
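
    To illustrate the effect (a hedged sketch; the double task is hypothetical): identical inputs produce the same cache key, so repeated calls within the expiration window restore the cached state instead of re-running the task.

    ```python
    from datetime import timedelta
    from prefect import flow, task
    from prefect.tasks import task_input_hash

    @task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))
    def double(x):
        print(f"computing double({x})")
        return x * 2

    @flow
    def my_flow():
        double(2)  # computed and cached
        double(2)  # same inputs -> same key -> cached state restored
        double(3)  # different inputs -> different key -> computed
    ```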

    Source code in prefect/tasks.py
    def task_input_hash(\n    context: \"TaskRunContext\", arguments: Dict[str, Any]\n) -> Optional[str]:\n    \"\"\"\n    A task cache key implementation which hashes all inputs to the task using a JSON or\n    cloudpickle serializer. If any arguments are not JSON serializable, the pickle\n    serializer is used as a fallback. If cloudpickle fails, this will return a null key\n    indicating that a cache key could not be generated for the given inputs.\n\n    Arguments:\n        context: the active `TaskRunContext`\n        arguments: a dictionary of arguments to be passed to the underlying task\n\n    Returns:\n        a string hash if hashing succeeded, else `None`\n    \"\"\"\n    return hash_objects(\n        # We use the task key to get the qualified name for the task and include the\n        # task functions `co_code` bytes to avoid caching when the underlying function\n        # changes\n        context.task.task_key,\n        context.task.fn.__code__.co_code.hex(),\n        arguments,\n    )\n
    ","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/testing/","title":"prefect.testing","text":"","tags":["Python API","testing"]},{"location":"api-ref/prefect/testing/#prefect.testing","title":"prefect.testing","text":"","tags":["Python API","testing"]},{"location":"api-ref/prefect/variables/","title":"prefect.variables","text":"","tags":["Python API","variables"]},{"location":"api-ref/prefect/variables/#prefect.variables","title":"prefect.variables","text":"","tags":["Python API","variables"]},{"location":"api-ref/prefect/variables/#prefect.variables.Variable","title":"Variable","text":"

    Bases: VariableCreate

    Variables are named, mutable string values, much like environment variables. Variables are scoped to a Prefect server instance or a single workspace in Prefect Cloud. https://docs.prefect.io/latest/concepts/variables/
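
    For orientation, a minimal hedged sketch (the variable name and values are illustrative) of setting and reading a variable from within a flow:

    ```python
    from prefect import flow
    from prefect.variables import Variable

    @flow
    def my_flow():
        # Store (or overwrite) a value scoped to this server instance or workspace
        Variable.set(name="environment", value="staging", overwrite=True)
        # Read it back; falls back to the default if the variable does not exist
        var = Variable.get("environment", default="production")
        return var
    ```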

    Parameters:

        name: A string identifying the variable. (required)
        value: A string that is the value of the variable. (required)
        tags: An optional list of strings to associate with the variable. (required)

    Source code in prefect/variables.py
    class Variable(VariableRequest):\n    \"\"\"\n    Variables are named, mutable string values, much like environment variables. Variables are scoped to a Prefect server instance or a single workspace in Prefect Cloud.\n    https://docs.prefect.io/latest/concepts/variables/\n\n    Arguments:\n        name: A string identifying the variable.\n        value: A string that is the value of the variable.\n        tags: An optional list of strings to associate with the variable.\n    \"\"\"\n\n    @classmethod\n    @sync_compatible\n    async def set(\n        cls,\n        name: str,\n        value: str,\n        tags: Optional[List[str]] = None,\n        overwrite: bool = False,\n    ) -> Optional[str]:\n        \"\"\"\n        Sets a new variable. If one exists with the same name, user must pass `overwrite=True`\n        ```\n            from prefect.variables import Variable\n\n            @flow\n            def my_flow():\n                var = Variable.set(name=\"my_var\",value=\"test_value\", tags=[\"hi\", \"there\"], overwrite=True)\n        ```\n        or\n        ```\n            from prefect.variables import Variable\n\n            @flow\n            async def my_flow():\n                var = await Variable.set(name=\"my_var\",value=\"test_value\", tags=[\"hi\", \"there\"], overwrite=True)\n        ```\n        \"\"\"\n        client, _ = get_or_create_client()\n        variable = await client.read_variable_by_name(name)\n        var_dict = {\"name\": name, \"value\": value}\n        var_dict[\"tags\"] = tags or []\n        if variable:\n            if not overwrite:\n                raise ValueError(\n                    \"You are attempting to save a variable with a name that is already in use. If you would like to overwrite the values that are saved, then call .set with `overwrite=True`.\"\n                )\n            var = VariableUpdateRequest(**var_dict)\n            await client.update_variable(variable=var)\n            variable = await client.read_variable_by_name(name)\n        else:\n            var = VariableRequest(**var_dict)\n            variable = await client.create_variable(variable=var)\n\n        return variable if variable else None\n\n    @classmethod\n    @sync_compatible\n    async def get(cls, name: str, default: Optional[str] = None) -> Optional[str]:\n        \"\"\"\n        Get a variable by name. If doesn't exist return the default.\n        ```\n            from prefect.variables import Variable\n\n            @flow\n            def my_flow():\n                var = Variable.get(\"my_var\")\n        ```\n        or\n        ```\n            from prefect.variables import Variable\n\n            @flow\n            async def my_flow():\n                var = await Variable.get(\"my_var\")\n        ```\n        \"\"\"\n        client, _ = get_or_create_client()\n        variable = await client.read_variable_by_name(name)\n        return variable if variable else default\n
    ","tags":["Python API","variables"]},{"location":"api-ref/prefect/variables/#prefect.variables.Variable.get","title":"get async classmethod","text":"

Get a variable by name. If it doesn't exist, return the default.

        from prefect.variables import Variable\n\n    @flow\n    def my_flow():\n        var = Variable.get(\"my_var\")\n
    or
        from prefect.variables import Variable\n\n    @flow\n    async def my_flow():\n        var = await Variable.get(\"my_var\")\n

    Source code in prefect/variables.py
    @classmethod\n@sync_compatible\nasync def get(cls, name: str, default: Optional[str] = None) -> Optional[str]:\n    \"\"\"\n    Get a variable by name. If doesn't exist return the default.\n    ```\n        from prefect.variables import Variable\n\n        @flow\n        def my_flow():\n            var = Variable.get(\"my_var\")\n    ```\n    or\n    ```\n        from prefect.variables import Variable\n\n        @flow\n        async def my_flow():\n            var = await Variable.get(\"my_var\")\n    ```\n    \"\"\"\n    client, _ = get_or_create_client()\n    variable = await client.read_variable_by_name(name)\n    return variable if variable else default\n
    ","tags":["Python API","variables"]},{"location":"api-ref/prefect/variables/#prefect.variables.Variable.set","title":"set async classmethod","text":"

Sets a new variable. If one exists with the same name, the user must pass overwrite=True.

        from prefect.variables import Variable\n\n    @flow\n    def my_flow():\n        var = Variable.set(name=\"my_var\",value=\"test_value\", tags=[\"hi\", \"there\"], overwrite=True)\n
    or
        from prefect.variables import Variable\n\n    @flow\n    async def my_flow():\n        var = await Variable.set(name=\"my_var\",value=\"test_value\", tags=[\"hi\", \"there\"], overwrite=True)\n

    Source code in prefect/variables.py
    @classmethod\n@sync_compatible\nasync def set(\n    cls,\n    name: str,\n    value: str,\n    tags: Optional[List[str]] = None,\n    overwrite: bool = False,\n) -> Optional[str]:\n    \"\"\"\n    Sets a new variable. If one exists with the same name, user must pass `overwrite=True`\n    ```\n        from prefect.variables import Variable\n\n        @flow\n        def my_flow():\n            var = Variable.set(name=\"my_var\",value=\"test_value\", tags=[\"hi\", \"there\"], overwrite=True)\n    ```\n    or\n    ```\n        from prefect.variables import Variable\n\n        @flow\n        async def my_flow():\n            var = await Variable.set(name=\"my_var\",value=\"test_value\", tags=[\"hi\", \"there\"], overwrite=True)\n    ```\n    \"\"\"\n    client, _ = get_or_create_client()\n    variable = await client.read_variable_by_name(name)\n    var_dict = {\"name\": name, \"value\": value}\n    var_dict[\"tags\"] = tags or []\n    if variable:\n        if not overwrite:\n            raise ValueError(\n                \"You are attempting to save a variable with a name that is already in use. If you would like to overwrite the values that are saved, then call .set with `overwrite=True`.\"\n            )\n        var = VariableUpdateRequest(**var_dict)\n        await client.update_variable(variable=var)\n        variable = await client.read_variable_by_name(name)\n    else:\n        var = VariableRequest(**var_dict)\n        variable = await client.create_variable(variable=var)\n\n    return variable if variable else None\n
    ","tags":["Python API","variables"]},{"location":"api-ref/prefect/variables/#prefect.variables.get","title":"get async","text":"

Get a variable by name. If it doesn't exist, return the default.

        from prefect import variables\n\n    @flow\n    def my_flow():\n        var = variables.get(\"my_var\")\n
    or
        from prefect import variables\n\n    @flow\n    async def my_flow():\n        var = await variables.get(\"my_var\")\n

    Source code in prefect/variables.py
    @deprecated_callable(start_date=\"Apr 2024\")\n@sync_compatible\nasync def get(name: str, default: Optional[str] = None) -> Optional[str]:\n    \"\"\"\n    Get a variable by name. If doesn't exist return the default.\n    ```\n        from prefect import variables\n\n        @flow\n        def my_flow():\n            var = variables.get(\"my_var\")\n    ```\n    or\n    ```\n        from prefect import variables\n\n        @flow\n        async def my_flow():\n            var = await variables.get(\"my_var\")\n    ```\n    \"\"\"\n    variable = await Variable.get(name)\n    return variable.value if variable else default\n
    ","tags":["Python API","variables"]},{"location":"api-ref/prefect/blocks/core/","title":"core","text":"","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core","title":"prefect.blocks.core","text":"","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block","title":"Block","text":"

    Bases: BaseModel, ABC

    A base class for implementing a block that wraps an external service.

This class can be defined with an arbitrary set of fields and methods, and couples business logic with data contained in a block document. _block_document_name, _block_document_id, _block_schema_id, and _block_type_id are reserved by Prefect as Block metadata fields, but otherwise a Block can implement arbitrary logic. Blocks can be instantiated without populating these metadata fields, but can only be used interactively, not with the Prefect API.

Instead of the __init__ method, a block implementation allows the definition of a block_initialization method that is called after initialization.
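As a rough sketch of the block_initialization hook described above (the block name and fields here are hypothetical, not part of Prefect):

```python
from prefect.blocks.core import Block

class DatabaseConfig(Block):  # hypothetical example block
    connection_url: str
    timeout_seconds: int = 30

    def block_initialization(self) -> None:
        # Runs automatically after __init__, so per-instance setup goes
        # here instead of overriding the model's constructor.
        self.connection_url = self.connection_url.rstrip("/")

cfg = DatabaseConfig(connection_url="https://db.example.com/")
print(cfg.connection_url)  # "https://db.example.com"
```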

    Source code in prefect/blocks/core.py
    @register_base_type\n@instrument_method_calls_on_class_instances\nclass Block(BaseModel, ABC):\n    \"\"\"\n    A base class for implementing a block that wraps an external service.\n\n    This class can be defined with an arbitrary set of fields and methods, and\n    couples business logic with data contained in an block document.\n    `_block_document_name`, `_block_document_id`, `_block_schema_id`, and\n    `_block_type_id` are reserved by Prefect as Block metadata fields, but\n    otherwise a Block can implement arbitrary logic. Blocks can be instantiated\n    without populating these metadata fields, but can only be used interactively,\n    not with the Prefect API.\n\n    Instead of the __init__ method, a block implementation allows the\n    definition of a `block_initialization` method that is called after\n    initialization.\n    \"\"\"\n\n    class Config:\n        extra = \"allow\"\n\n        json_encoders = {SecretDict: lambda v: v.dict()}\n\n        @staticmethod\n        def schema_extra(schema: Dict[str, Any], model: Type[\"Block\"]):\n            \"\"\"\n            Customizes Pydantic's schema generation feature to add blocks related information.\n            \"\"\"\n            schema[\"block_type_slug\"] = model.get_block_type_slug()\n            # Ensures args and code examples aren't included in the schema\n            description = model.get_description()\n            if description:\n                schema[\"description\"] = description\n            else:\n                # Prevent the description of the base class from being included in the schema\n                schema.pop(\"description\", None)\n\n            # create a list of secret field names\n            # secret fields include both top-level keys and dot-delimited nested secret keys\n            # A wildcard (*) means that all fields under a given key are secret.\n            # for example: [\"x\", \"y\", \"z.*\", \"child.a\"]\n            # means the top-level keys \"x\" and \"y\", all keys under \"z\", and the key \"a\" of a block\n            # nested under the \"child\" key are all secret. 
There is no limit to nesting.\n            secrets = schema[\"secret_fields\"] = []\n            for field in model.__fields__.values():\n                _collect_secret_fields(field.name, field.type_, secrets)\n\n            # create block schema references\n            refs = schema[\"block_schema_references\"] = {}\n            for field in model.__fields__.values():\n                if Block.is_block_class(field.type_):\n                    refs[field.name] = field.type_._to_block_schema_reference_dict()\n                if get_origin(field.type_) is Union:\n                    for type_ in get_args(field.type_):\n                        if Block.is_block_class(type_):\n                            if isinstance(refs.get(field.name), list):\n                                refs[field.name].append(\n                                    type_._to_block_schema_reference_dict()\n                                )\n                            elif isinstance(refs.get(field.name), dict):\n                                refs[field.name] = [\n                                    refs[field.name],\n                                    type_._to_block_schema_reference_dict(),\n                                ]\n                            else:\n                                refs[\n                                    field.name\n                                ] = type_._to_block_schema_reference_dict()\n\n    def __init__(self, *args, **kwargs):\n        super().__init__(*args, **kwargs)\n        self.block_initialization()\n\n    def __str__(self) -> str:\n        return self.__repr__()\n\n    def __repr_args__(self):\n        repr_args = super().__repr_args__()\n        data_keys = self.schema()[\"properties\"].keys()\n        return [\n            (key, value) for key, value in repr_args if key is None or key in data_keys\n        ]\n\n    def block_initialization(self) -> None:\n        pass\n\n    # -- private class variables\n    # set by the class itself\n\n    # Attribute to customize the name of the block type created\n    # when the block is registered with the API. 
If not set, block\n    # type name will default to the class name.\n    _block_type_name: Optional[str] = None\n    _block_type_slug: Optional[str] = None\n\n    # Attributes used to set properties on a block type when registered\n    # with the API.\n    _logo_url: Optional[HttpUrl] = None\n    _documentation_url: Optional[HttpUrl] = None\n    _description: Optional[str] = None\n    _code_example: Optional[str] = None\n\n    # -- private instance variables\n    # these are set when blocks are loaded from the API\n    _block_type_id: Optional[UUID] = None\n    _block_schema_id: Optional[UUID] = None\n    _block_schema_capabilities: Optional[List[str]] = None\n    _block_schema_version: Optional[str] = None\n    _block_document_id: Optional[UUID] = None\n    _block_document_name: Optional[str] = None\n    _is_anonymous: Optional[bool] = None\n\n    # Exclude `save` as it uses the `sync_compatible` decorator and needs to be\n    # decorated directly.\n    _events_excluded_methods = [\"block_initialization\", \"save\", \"dict\"]\n\n    @classmethod\n    def __dispatch_key__(cls):\n        if cls.__name__ == \"Block\":\n            return None  # The base class is abstract\n        return block_schema_to_key(cls._to_block_schema())\n\n    @classmethod\n    def get_block_type_name(cls):\n        return cls._block_type_name or cls.__name__\n\n    @classmethod\n    def get_block_type_slug(cls):\n        return slugify(cls._block_type_slug or cls.get_block_type_name())\n\n    @classmethod\n    def get_block_capabilities(cls) -> FrozenSet[str]:\n        \"\"\"\n        Returns the block capabilities for this Block. Recursively collects all block\n        capabilities of all parent classes into a single frozenset.\n        \"\"\"\n        return frozenset(\n            {\n                c\n                for base in (cls,) + cls.__mro__\n                for c in getattr(base, \"_block_schema_capabilities\", []) or []\n            }\n        )\n\n    @classmethod\n    def _get_current_package_version(cls):\n        current_module = inspect.getmodule(cls)\n        if current_module:\n            top_level_module = sys.modules[\n                current_module.__name__.split(\".\")[0] or \"__main__\"\n            ]\n            try:\n                version = Version(top_level_module.__version__)\n                # Strips off any local version information\n                return version.base_version\n            except (AttributeError, InvalidVersion):\n                # Module does not have a __version__ attribute or is not a parsable format\n                pass\n        return DEFAULT_BLOCK_SCHEMA_VERSION\n\n    @classmethod\n    def get_block_schema_version(cls) -> str:\n        return cls._block_schema_version or cls._get_current_package_version()\n\n    @classmethod\n    def _to_block_schema_reference_dict(cls):\n        return dict(\n            block_type_slug=cls.get_block_type_slug(),\n            block_schema_checksum=cls._calculate_schema_checksum(),\n        )\n\n    @classmethod\n    def _calculate_schema_checksum(\n        cls, block_schema_fields: Optional[Dict[str, Any]] = None\n    ):\n        \"\"\"\n        Generates a unique hash for the underlying schema of block.\n\n        Args:\n            block_schema_fields: Dictionary detailing block schema fields to generate a\n                checksum for. 
The fields of the current class is used if this parameter\n                is not provided.\n\n        Returns:\n            str: The calculated checksum prefixed with the hashing algorithm used.\n        \"\"\"\n        block_schema_fields = (\n            cls.schema() if block_schema_fields is None else block_schema_fields\n        )\n        fields_for_checksum = remove_nested_keys([\"secret_fields\"], block_schema_fields)\n        if fields_for_checksum.get(\"definitions\"):\n            non_block_definitions = _get_non_block_reference_definitions(\n                fields_for_checksum, fields_for_checksum[\"definitions\"]\n            )\n            if non_block_definitions:\n                fields_for_checksum[\"definitions\"] = non_block_definitions\n            else:\n                # Pop off definitions entirely instead of empty dict for consistency\n                # with the OpenAPI specification\n                fields_for_checksum.pop(\"definitions\")\n        checksum = hash_objects(fields_for_checksum, hash_algo=hashlib.sha256)\n        if checksum is None:\n            raise ValueError(\"Unable to compute checksum for block schema\")\n        else:\n            return f\"sha256:{checksum}\"\n\n    def _to_block_document(\n        self,\n        name: Optional[str] = None,\n        block_schema_id: Optional[UUID] = None,\n        block_type_id: Optional[UUID] = None,\n        is_anonymous: Optional[bool] = None,\n    ) -> BlockDocument:\n        \"\"\"\n        Creates the corresponding block document based on the data stored in a block.\n        The corresponding block document name, block type ID, and block schema ID must\n        either be passed into the method or configured on the block.\n\n        Args:\n            name: The name of the created block document. Not required if anonymous.\n            block_schema_id: UUID of the corresponding block schema.\n            block_type_id: UUID of the corresponding block type.\n            is_anonymous: if True, an anonymous block is created. Anonymous\n                blocks are not displayed in the UI and used primarily for system\n                operations and features that need to automatically generate blocks.\n\n        Returns:\n            BlockDocument: Corresponding block document\n                populated with the block's configured data.\n        \"\"\"\n        if is_anonymous is None:\n            is_anonymous = self._is_anonymous or False\n\n        # name must be present if not anonymous\n        if not is_anonymous and not name and not self._block_document_name:\n            raise ValueError(\"No name provided, either as an argument or on the block.\")\n\n        if not block_schema_id and not self._block_schema_id:\n            raise ValueError(\n                \"No block schema ID provided, either as an argument or on the block.\"\n            )\n        if not block_type_id and not self._block_type_id:\n            raise ValueError(\n                \"No block type ID provided, either as an argument or on the block.\"\n            )\n\n        # The keys passed to `include` must NOT be aliases, else some items will be missed\n        # i.e. 
must do `self.schema_` vs `self.schema` to get a `schema_ = Field(alias=\"schema\")`\n        # reported from https://github.com/PrefectHQ/prefect-dbt/issues/54\n        data_keys = self.schema(by_alias=False)[\"properties\"].keys()\n\n        # `block_document_data`` must return the aliased version for it to show in the UI\n        block_document_data = self.dict(by_alias=True, include=data_keys)\n\n        # Iterate through and find blocks that already have saved block documents to\n        # create references to those saved block documents.\n        for key in data_keys:\n            field_value = getattr(self, key)\n            if (\n                isinstance(field_value, Block)\n                and field_value._block_document_id is not None\n            ):\n                block_document_data[key] = {\n                    \"$ref\": {\"block_document_id\": field_value._block_document_id}\n                }\n\n        return BlockDocument(\n            id=self._block_document_id or uuid4(),\n            name=(name or self._block_document_name) if not is_anonymous else None,\n            block_schema_id=block_schema_id or self._block_schema_id,\n            block_type_id=block_type_id or self._block_type_id,\n            data=block_document_data,\n            block_schema=self._to_block_schema(\n                block_type_id=block_type_id or self._block_type_id,\n            ),\n            block_type=self._to_block_type(),\n            is_anonymous=is_anonymous,\n        )\n\n    @classmethod\n    def _to_block_schema(cls, block_type_id: Optional[UUID] = None) -> BlockSchema:\n        \"\"\"\n        Creates the corresponding block schema of the block.\n        The corresponding block_type_id must either be passed into\n        the method or configured on the block.\n\n        Args:\n            block_type_id: UUID of the corresponding block type.\n\n        Returns:\n            BlockSchema: The corresponding block schema.\n        \"\"\"\n        fields = cls.schema()\n        return BlockSchema(\n            id=cls._block_schema_id if cls._block_schema_id is not None else uuid4(),\n            checksum=cls._calculate_schema_checksum(),\n            fields=fields,\n            block_type_id=block_type_id or cls._block_type_id,\n            block_type=cls._to_block_type(),\n            capabilities=list(cls.get_block_capabilities()),\n            version=cls.get_block_schema_version(),\n        )\n\n    @classmethod\n    def _parse_docstring(cls) -> List[DocstringSection]:\n        \"\"\"\n        Parses the docstring into list of DocstringSection objects.\n        Helper method used primarily to suppress irrelevant logs, e.g.\n        `<module>:11: No type or annotation for parameter 'write_json'`\n        because griffe is unable to parse the types from pydantic.BaseModel.\n        \"\"\"\n        with disable_logger(\"griffe.docstrings.google\"):\n            with disable_logger(\"griffe.agents.nodes\"):\n                docstring = Docstring(cls.__doc__)\n                parsed = parse(docstring, Parser.google)\n        return parsed\n\n    @classmethod\n    def get_description(cls) -> Optional[str]:\n        \"\"\"\n        Returns the description for the current block. 
Attempts to parse\n        description from class docstring if an override is not defined.\n        \"\"\"\n        description = cls._description\n        # If no description override has been provided, find the first text section\n        # and use that as the description\n        if description is None and cls.__doc__ is not None:\n            parsed = cls._parse_docstring()\n            parsed_description = next(\n                (\n                    section.as_dict().get(\"value\")\n                    for section in parsed\n                    if section.kind == DocstringSectionKind.text\n                ),\n                None,\n            )\n            if isinstance(parsed_description, str):\n                description = parsed_description.strip()\n        return description\n\n    @classmethod\n    def get_code_example(cls) -> Optional[str]:\n        \"\"\"\n        Returns the code example for the given block. Attempts to parse\n        code example from the class docstring if an override is not provided.\n        \"\"\"\n        code_example = (\n            dedent(cls._code_example) if cls._code_example is not None else None\n        )\n        # If no code example override has been provided, attempt to find a examples\n        # section or an admonition with the annotation \"example\" and use that as the\n        # code example\n        if code_example is None and cls.__doc__ is not None:\n            parsed = cls._parse_docstring()\n            for section in parsed:\n                # Section kind will be \"examples\" if Examples section heading is used.\n                if section.kind == DocstringSectionKind.examples:\n                    # Examples sections are made up of smaller sections that need to be\n                    # joined with newlines. 
Smaller sections are represented as tuples\n                    # with shape (DocstringSectionKind, str)\n                    code_example = \"\\n\".join(\n                        (part[1] for part in section.as_dict().get(\"value\", []))\n                    )\n                    break\n                # Section kind will be \"admonition\" if Example section heading is used.\n                if section.kind == DocstringSectionKind.admonition:\n                    value = section.as_dict().get(\"value\", {})\n                    if value.get(\"annotation\") == \"example\":\n                        code_example = value.get(\"description\")\n                        break\n\n        if code_example is None:\n            # If no code example has been specified or extracted from the class\n            # docstring, generate a sensible default\n            code_example = cls._generate_code_example()\n\n        return code_example\n\n    @classmethod\n    def _generate_code_example(cls) -> str:\n        \"\"\"Generates a default code example for the current class\"\"\"\n        qualified_name = to_qualified_name(cls)\n        module_str = \".\".join(qualified_name.split(\".\")[:-1])\n        class_name = cls.__name__\n        block_variable_name = f'{cls.get_block_type_slug().replace(\"-\", \"_\")}_block'\n\n        return dedent(\n            f\"\"\"\\\n        ```python\n        from {module_str} import {class_name}\n\n        {block_variable_name} = {class_name}.load(\"BLOCK_NAME\")\n        ```\"\"\"\n        )\n\n    @classmethod\n    def _to_block_type(cls) -> BlockType:\n        \"\"\"\n        Creates the corresponding block type of the block.\n\n        Returns:\n            BlockType: The corresponding block type.\n        \"\"\"\n        return BlockType(\n            id=cls._block_type_id or uuid4(),\n            slug=cls.get_block_type_slug(),\n            name=cls.get_block_type_name(),\n            logo_url=cls._logo_url,\n            documentation_url=cls._documentation_url,\n            description=cls.get_description(),\n            code_example=cls.get_code_example(),\n        )\n\n    @classmethod\n    def _from_block_document(cls, block_document: BlockDocument):\n        \"\"\"\n        Instantiates a block from a given block document. 
The corresponding block class\n        will be looked up in the block registry based on the corresponding block schema\n        of the provided block document.\n\n        Args:\n            block_document: The block document used to instantiate a block.\n\n        Raises:\n            ValueError: If the provided block document doesn't have a corresponding block\n                schema.\n\n        Returns:\n            Block: Hydrated block with data from block document.\n        \"\"\"\n        if block_document.block_schema is None:\n            raise ValueError(\n                \"Unable to determine block schema for provided block document\"\n            )\n\n        block_cls = (\n            cls\n            if cls.__name__ != \"Block\"\n            # Look up the block class by dispatch\n            else cls.get_block_class_from_schema(block_document.block_schema)\n        )\n\n        block_cls = instrument_method_calls_on_class_instances(block_cls)\n\n        block = block_cls.parse_obj(block_document.data)\n        block._block_document_id = block_document.id\n        block.__class__._block_schema_id = block_document.block_schema_id\n        block.__class__._block_type_id = block_document.block_type_id\n        block._block_document_name = block_document.name\n        block._is_anonymous = block_document.is_anonymous\n        block._define_metadata_on_nested_blocks(\n            block_document.block_document_references\n        )\n\n        # Due to the way blocks are loaded we can't directly instrument the\n        # `load` method and have the data be about the block document. Instead\n        # this will emit a proxy event for the load method so that block\n        # document data can be included instead of the event being about an\n        # 'anonymous' block.\n\n        emit_instance_method_called_event(block, \"load\", successful=True)\n\n        return block\n\n    def _event_kind(self) -> str:\n        return f\"prefect.block.{self.get_block_type_slug()}\"\n\n    def _event_method_called_resources(self) -> Optional[ResourceTuple]:\n        if not (self._block_document_id and self._block_document_name):\n            return None\n\n        return (\n            {\n                \"prefect.resource.id\": (\n                    f\"prefect.block-document.{self._block_document_id}\"\n                ),\n                \"prefect.resource.name\": self._block_document_name,\n            },\n            [\n                {\n                    \"prefect.resource.id\": (\n                        f\"prefect.block-type.{self.get_block_type_slug()}\"\n                    ),\n                    \"prefect.resource.role\": \"block-type\",\n                }\n            ],\n        )\n\n    @classmethod\n    def get_block_class_from_schema(cls: Type[Self], schema: BlockSchema) -> Type[Self]:\n        \"\"\"\n        Retrieve the block class implementation given a schema.\n        \"\"\"\n        return cls.get_block_class_from_key(block_schema_to_key(schema))\n\n    @classmethod\n    def get_block_class_from_key(cls: Type[Self], key: str) -> Type[Self]:\n        \"\"\"\n        Retrieve the block class implementation given a key.\n        \"\"\"\n        # Ensure collections are imported and have the opportunity to register types\n        # before looking up the block class\n        prefect.plugins.load_prefect_collections()\n\n        return lookup_type(cls, key)\n\n    def _define_metadata_on_nested_blocks(\n        self, block_document_references: Dict[str, Dict[str, Any]]\n    ):\n   
     \"\"\"\n        Recursively populates metadata fields on nested blocks based on the\n        provided block document references.\n        \"\"\"\n        for item in block_document_references.items():\n            field_name, block_document_reference = item\n            nested_block = getattr(self, field_name)\n            if isinstance(nested_block, Block):\n                nested_block_document_info = block_document_reference.get(\n                    \"block_document\", {}\n                )\n                nested_block._define_metadata_on_nested_blocks(\n                    nested_block_document_info.get(\"block_document_references\", {})\n                )\n                nested_block_document_id = nested_block_document_info.get(\"id\")\n                nested_block._block_document_id = (\n                    UUID(nested_block_document_id) if nested_block_document_id else None\n                )\n                nested_block._block_document_name = nested_block_document_info.get(\n                    \"name\"\n                )\n                nested_block._is_anonymous = nested_block_document_info.get(\n                    \"is_anonymous\"\n                )\n\n    @classmethod\n    @inject_client\n    async def _get_block_document(\n        cls,\n        name: str,\n        client: \"PrefectClient\" = None,\n    ):\n        if cls.__name__ == \"Block\":\n            block_type_slug, block_document_name = name.split(\"/\", 1)\n        else:\n            block_type_slug = cls.get_block_type_slug()\n            block_document_name = name\n\n        try:\n            block_document = await client.read_block_document_by_name(\n                name=block_document_name, block_type_slug=block_type_slug\n            )\n        except prefect.exceptions.ObjectNotFound as e:\n            raise ValueError(\n                f\"Unable to find block document named {block_document_name} for block\"\n                f\" type {block_type_slug}\"\n            ) from e\n\n        return block_document, block_document_name\n\n    @classmethod\n    @sync_compatible\n    @inject_client\n    async def load(\n        cls,\n        name: str,\n        validate: bool = True,\n        client: \"PrefectClient\" = None,\n    ):\n        \"\"\"\n        Retrieves data from the block document with the given name for the block type\n        that corresponds with the current class and returns an instantiated version of\n        the current class with the data stored in the block document.\n\n        If a block document for a given block type is saved with a different schema\n        than the current class calling `load`, a warning will be raised.\n\n        If the current class schema is a subset of the block document schema, the block\n        can be loaded as normal using the default `validate = True`.\n\n        If the current class schema is a superset of the block document schema, `load`\n        must be called with `validate` set to False to prevent a validation error. In\n        this case, the block attributes will default to `None` and must be set manually\n        and saved to a new block document before the block can be used as expected.\n\n        Args:\n            name: The name or slug of the block document. A block document slug is a\n                string with the format <block_type_slug>/<block_document_name>\n            validate: If False, the block document will be loaded without Pydantic\n                validating the block schema. 
This is useful if the block schema has\n                changed client-side since the block document referred to by `name` was saved.\n            client: The client to use to load the block document. If not provided, the\n                default client will be injected.\n\n        Raises:\n            ValueError: If the requested block document is not found.\n\n        Returns:\n            An instance of the current class hydrated with the data stored in the\n            block document with the specified name.\n\n        Examples:\n            Load from a Block subclass with a block document name:\n            ```python\n            class Custom(Block):\n                message: str\n\n            Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n            loaded_block = Custom.load(\"my-custom-message\")\n            ```\n\n            Load from Block with a block document slug:\n            ```python\n            class Custom(Block):\n                message: str\n\n            Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n            loaded_block = Block.load(\"custom/my-custom-message\")\n            ```\n\n            Migrate a block document to a new schema:\n            ```python\n            # original class\n            class Custom(Block):\n                message: str\n\n            Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n            # Updated class with new required field\n            class Custom(Block):\n                message: str\n                number_of_ducks: int\n\n            loaded_block = Custom.load(\"my-custom-message\", validate=False)\n\n            # Prints UserWarning about schema mismatch\n\n            loaded_block.number_of_ducks = 42\n\n            loaded_block.save(\"my-custom-message\", overwrite=True)\n            ```\n        \"\"\"\n        block_document, block_document_name = await cls._get_block_document(name)\n\n        try:\n            return cls._from_block_document(block_document)\n        except ValidationError as e:\n            if not validate:\n                missing_fields = tuple(err[\"loc\"][0] for err in e.errors())\n                missing_block_data = {field: None for field in missing_fields}\n                warnings.warn(\n                    f\"Could not fully load {block_document_name!r} of block type\"\n                    f\" {cls._block_type_slug!r} - this is likely because one or more\"\n                    \" required fields were added to the schema for\"\n                    f\" {cls.__name__!r} that did not exist on the class when this block\"\n                    \" was last saved. Please specify values for new field(s):\"\n                    f\" {listrepr(missing_fields)}, then run\"\n                    f' `{cls.__name__}.save(\"{block_document_name}\", overwrite=True)`,'\n                    \" and load this block again before attempting to use it.\"\n                )\n                return cls.construct(**block_document.data, **missing_block_data)\n            raise RuntimeError(\n                f\"Unable to load {block_document_name!r} of block type\"\n                f\" {cls._block_type_slug!r} due to failed validation. 
To load without\"\n                \" validation, try loading again with `validate=False`.\"\n            ) from e\n\n    @staticmethod\n    def is_block_class(block) -> bool:\n        return _is_subclass(block, Block)\n\n    @classmethod\n    @sync_compatible\n    @inject_client\n    async def register_type_and_schema(cls, client: \"PrefectClient\" = None):\n        \"\"\"\n        Makes block available for configuration with current Prefect API.\n        Recursively registers all nested blocks. Registration is idempotent.\n\n        Args:\n            client: Optional client to use for registering type and schema with the\n                Prefect API. A new client will be created and used if one is not\n                provided.\n        \"\"\"\n        if cls.__name__ == \"Block\":\n            raise InvalidBlockRegistration(\n                \"`register_type_and_schema` should be called on a Block \"\n                \"subclass and not on the Block class directly.\"\n            )\n        if ABC in getattr(cls, \"__bases__\", []):\n            raise InvalidBlockRegistration(\n                \"`register_type_and_schema` should be called on a Block \"\n                \"subclass and not on a Block interface class directly.\"\n            )\n\n        for field in cls.__fields__.values():\n            if Block.is_block_class(field.type_):\n                await field.type_.register_type_and_schema(client=client)\n            if get_origin(field.type_) is Union:\n                for type_ in get_args(field.type_):\n                    if Block.is_block_class(type_):\n                        await type_.register_type_and_schema(client=client)\n\n        try:\n            block_type = await client.read_block_type_by_slug(\n                slug=cls.get_block_type_slug()\n            )\n            cls._block_type_id = block_type.id\n            local_block_type = cls._to_block_type()\n            if _should_update_block_type(\n                local_block_type=local_block_type, server_block_type=block_type\n            ):\n                await client.update_block_type(\n                    block_type_id=block_type.id, block_type=local_block_type\n                )\n        except prefect.exceptions.ObjectNotFound:\n            block_type = await client.create_block_type(block_type=cls._to_block_type())\n            cls._block_type_id = block_type.id\n\n        try:\n            block_schema = await client.read_block_schema_by_checksum(\n                checksum=cls._calculate_schema_checksum(),\n                version=cls.get_block_schema_version(),\n            )\n        except prefect.exceptions.ObjectNotFound:\n            block_schema = await client.create_block_schema(\n                block_schema=cls._to_block_schema(block_type_id=block_type.id)\n            )\n\n        cls._block_schema_id = block_schema.id\n\n    @inject_client\n    async def _save(\n        self,\n        name: Optional[str] = None,\n        is_anonymous: bool = False,\n        overwrite: bool = False,\n        client: \"PrefectClient\" = None,\n    ):\n        \"\"\"\n        Saves the values of a block as a block document with an option to save as an\n        anonymous block document.\n\n        Args:\n            name: User specified name to give saved block document which can later be used to load the\n                block document.\n            is_anonymous: Boolean value specifying whether the block document is anonymous. 
Anonymous\n                blocks are intended for system use and are not shown in the UI. Anonymous blocks do not\n                require a user-supplied name.\n            overwrite: Boolean value specifying if values should be overwritten if a block document with\n                the specified name already exists.\n\n        Raises:\n            ValueError: If a name is not given and `is_anonymous` is `False` or a name is given and\n                `is_anonymous` is `True`.\n        \"\"\"\n        if name is None and not is_anonymous:\n            if self._block_document_name is None:\n                raise ValueError(\n                    \"You're attempting to save a block document without a name.\"\n                    \" Please either call `save` with a `name` or pass\"\n                    \" `is_anonymous=True` to save an anonymous block.\"\n                )\n            else:\n                name = self._block_document_name\n\n        self._is_anonymous = is_anonymous\n\n        # Ensure block type and schema are registered before saving block document.\n        await self.register_type_and_schema(client=client)\n\n        try:\n            block_document = await client.create_block_document(\n                block_document=self._to_block_document(name=name)\n            )\n        except prefect.exceptions.ObjectAlreadyExists as err:\n            if overwrite:\n                block_document_id = self._block_document_id\n                if block_document_id is None:\n                    existing_block_document = await client.read_block_document_by_name(\n                        name=name, block_type_slug=self.get_block_type_slug()\n                    )\n                    block_document_id = existing_block_document.id\n                await client.update_block_document(\n                    block_document_id=block_document_id,\n                    block_document=self._to_block_document(name=name),\n                )\n                block_document = await client.read_block_document(\n                    block_document_id=block_document_id\n                )\n            else:\n                raise ValueError(\n                    \"You are attempting to save values with a name that is already in\"\n                    \" use for this block type. 
If you would like to overwrite the\"\n                    \" values that are saved, then save with `overwrite=True`.\"\n                ) from err\n\n        # Update metadata on block instance for later use.\n        self._block_document_name = block_document.name\n        self._block_document_id = block_document.id\n        return self._block_document_id\n\n    @sync_compatible\n    @instrument_instance_method_call\n    async def save(\n        self,\n        name: Optional[str] = None,\n        overwrite: bool = False,\n        client: \"PrefectClient\" = None,\n    ):\n        \"\"\"\n        Saves the values of a block as a block document.\n\n        Args:\n            name: User specified name to give saved block document which can later be used to load the\n                block document.\n            overwrite: Boolean value specifying if values should be overwritten if a block document with\n                the specified name already exists.\n\n        \"\"\"\n        document_id = await self._save(name=name, overwrite=overwrite, client=client)\n\n        return document_id\n\n    @classmethod\n    @sync_compatible\n    @inject_client\n    async def delete(\n        cls,\n        name: str,\n        client: \"PrefectClient\" = None,\n    ):\n        block_document, block_document_name = await cls._get_block_document(name)\n\n        await client.delete_block_document(block_document.id)\n\n    def _iter(self, *, include=None, exclude=None, **kwargs):\n        # Injects the `block_type_slug` into serialized payloads for dispatch\n        for key_value in super()._iter(include=include, exclude=exclude, **kwargs):\n            yield key_value\n\n        # Respect inclusion and exclusion still\n        if include and \"block_type_slug\" not in include:\n            return\n        if exclude and \"block_type_slug\" in exclude:\n            return\n\n        yield \"block_type_slug\", self.get_block_type_slug()\n\n    def __new__(cls: Type[Self], **kwargs) -> Self:\n        \"\"\"\n        Create an instance of the Block subclass type if a `block_type_slug` is\n        present in the data payload.\n        \"\"\"\n        block_type_slug = kwargs.pop(\"block_type_slug\", None)\n        if block_type_slug:\n            subcls = lookup_type(cls, dispatch_key=block_type_slug)\n            m = super().__new__(subcls)\n            # NOTE: This is a workaround for an obscure issue where copied models were\n            #       missing attributes. 
This pattern is from Pydantic's\n            #       `BaseModel._copy_and_set_values`.\n            #       The issue this fixes could not be reproduced in unit tests that\n            #       directly targeted dispatch handling and was only observed when\n            #       copying then saving infrastructure blocks on deployment models.\n            object.__setattr__(m, \"__dict__\", kwargs)\n            object.__setattr__(m, \"__fields_set__\", set(kwargs.keys()))\n            return m\n        else:\n            m = super().__new__(cls)\n            object.__setattr__(m, \"__dict__\", kwargs)\n            object.__setattr__(m, \"__fields_set__\", set(kwargs.keys()))\n            return m\n\n    def get_block_placeholder(self) -> str:\n        \"\"\"\n        Returns the block placeholder for the current block which can be used for\n        templating.\n\n        Returns:\n            str: The block placeholder for the current block in the format\n                `prefect.blocks.{block_type_name}.{block_document_name}`\n\n        Raises:\n            BlockNotSavedError: Raised if the block has not been saved.\n\n        If a block has not been saved, the return value will be `None`.\n        \"\"\"\n        block_document_name = self._block_document_name\n        if not block_document_name:\n            raise BlockNotSavedError(\n                \"Could not generate block placeholder for unsaved block.\"\n            )\n\n        return f\"prefect.blocks.{self.get_block_type_slug()}.{block_document_name}\"\n
    ","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.Config","title":"Config","text":"Source code in prefect/blocks/core.py
    class Config:\n    extra = \"allow\"\n\n    json_encoders = {SecretDict: lambda v: v.dict()}\n\n    @staticmethod\n    def schema_extra(schema: Dict[str, Any], model: Type[\"Block\"]):\n        \"\"\"\n        Customizes Pydantic's schema generation feature to add blocks related information.\n        \"\"\"\n        schema[\"block_type_slug\"] = model.get_block_type_slug()\n        # Ensures args and code examples aren't included in the schema\n        description = model.get_description()\n        if description:\n            schema[\"description\"] = description\n        else:\n            # Prevent the description of the base class from being included in the schema\n            schema.pop(\"description\", None)\n\n        # create a list of secret field names\n        # secret fields include both top-level keys and dot-delimited nested secret keys\n        # A wildcard (*) means that all fields under a given key are secret.\n        # for example: [\"x\", \"y\", \"z.*\", \"child.a\"]\n        # means the top-level keys \"x\" and \"y\", all keys under \"z\", and the key \"a\" of a block\n        # nested under the \"child\" key are all secret. There is no limit to nesting.\n        secrets = schema[\"secret_fields\"] = []\n        for field in model.__fields__.values():\n            _collect_secret_fields(field.name, field.type_, secrets)\n\n        # create block schema references\n        refs = schema[\"block_schema_references\"] = {}\n        for field in model.__fields__.values():\n            if Block.is_block_class(field.type_):\n                refs[field.name] = field.type_._to_block_schema_reference_dict()\n            if get_origin(field.type_) is Union:\n                for type_ in get_args(field.type_):\n                    if Block.is_block_class(type_):\n                        if isinstance(refs.get(field.name), list):\n                            refs[field.name].append(\n                                type_._to_block_schema_reference_dict()\n                            )\n                        elif isinstance(refs.get(field.name), dict):\n                            refs[field.name] = [\n                                refs[field.name],\n                                type_._to_block_schema_reference_dict(),\n                            ]\n                        else:\n                            refs[\n                                field.name\n                            ] = type_._to_block_schema_reference_dict()\n
    ","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.Config.schema_extra","title":"schema_extra staticmethod","text":"

Customizes Pydantic's schema generation feature to add block-related information.

    Source code in prefect/blocks/core.py
    @staticmethod\ndef schema_extra(schema: Dict[str, Any], model: Type[\"Block\"]):\n    \"\"\"\n    Customizes Pydantic's schema generation feature to add blocks related information.\n    \"\"\"\n    schema[\"block_type_slug\"] = model.get_block_type_slug()\n    # Ensures args and code examples aren't included in the schema\n    description = model.get_description()\n    if description:\n        schema[\"description\"] = description\n    else:\n        # Prevent the description of the base class from being included in the schema\n        schema.pop(\"description\", None)\n\n    # create a list of secret field names\n    # secret fields include both top-level keys and dot-delimited nested secret keys\n    # A wildcard (*) means that all fields under a given key are secret.\n    # for example: [\"x\", \"y\", \"z.*\", \"child.a\"]\n    # means the top-level keys \"x\" and \"y\", all keys under \"z\", and the key \"a\" of a block\n    # nested under the \"child\" key are all secret. There is no limit to nesting.\n    secrets = schema[\"secret_fields\"] = []\n    for field in model.__fields__.values():\n        _collect_secret_fields(field.name, field.type_, secrets)\n\n    # create block schema references\n    refs = schema[\"block_schema_references\"] = {}\n    for field in model.__fields__.values():\n        if Block.is_block_class(field.type_):\n            refs[field.name] = field.type_._to_block_schema_reference_dict()\n        if get_origin(field.type_) is Union:\n            for type_ in get_args(field.type_):\n                if Block.is_block_class(type_):\n                    if isinstance(refs.get(field.name), list):\n                        refs[field.name].append(\n                            type_._to_block_schema_reference_dict()\n                        )\n                    elif isinstance(refs.get(field.name), dict):\n                        refs[field.name] = [\n                            refs[field.name],\n                            type_._to_block_schema_reference_dict(),\n                        ]\n                    else:\n                        refs[\n                            field.name\n                        ] = type_._to_block_schema_reference_dict()\n
    ","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_block_capabilities","title":"get_block_capabilities classmethod","text":"

    Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset.

    Source code in prefect/blocks/core.py
    @classmethod\ndef get_block_capabilities(cls) -> FrozenSet[str]:\n    \"\"\"\n    Returns the block capabilities for this Block. Recursively collects all block\n    capabilities of all parent classes into a single frozenset.\n    \"\"\"\n    return frozenset(\n        {\n            c\n            for base in (cls,) + cls.__mro__\n            for c in getattr(base, \"_block_schema_capabilities\", []) or []\n        }\n    )\n
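As an illustrative sketch (the block classes and capability strings below are hypothetical), capabilities declared on a class and on its parent classes are merged into a single frozenset:

```python
from prefect.blocks.core import Block

class ReaderBlock(Block):  # hypothetical base block
    _block_schema_capabilities = ["read-data"]

class S3Reader(ReaderBlock):  # hypothetical subclass
    _block_schema_capabilities = ["read-s3"]

# Collects capabilities from the class and every parent class.
print(S3Reader.get_block_capabilities())
# frozenset({'read-data', 'read-s3'})
```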
    ","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_block_class_from_key","title":"get_block_class_from_key classmethod","text":"

    Retrieve the block class implementation given a key.

    Source code in prefect/blocks/core.py
    @classmethod\ndef get_block_class_from_key(cls: Type[Self], key: str) -> Type[Self]:\n    \"\"\"\n    Retrieve the block class implementation given a key.\n    \"\"\"\n    # Ensure collections are imported and have the opportunity to register types\n    # before looking up the block class\n    prefect.plugins.load_prefect_collections()\n\n    return lookup_type(cls, key)\n
    ","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_block_class_from_schema","title":"get_block_class_from_schema classmethod","text":"

    Retrieve the block class implementation given a schema.

    Source code in prefect/blocks/core.py
    @classmethod\ndef get_block_class_from_schema(cls: Type[Self], schema: BlockSchema) -> Type[Self]:\n    \"\"\"\n    Retrieve the block class implementation given a schema.\n    \"\"\"\n    return cls.get_block_class_from_key(block_schema_to_key(schema))\n
    ","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_block_placeholder","title":"get_block_placeholder","text":"

    Returns the block placeholder for the current block which can be used for templating.

    Returns:

str: The block placeholder for the current block in the format prefect.blocks.{block_type_name}.{block_document_name}

    Raises:

BlockNotSavedError: Raised if the block has not been saved.

    If a block has not been saved, the return value will be None.

    Source code in prefect/blocks/core.py
    def get_block_placeholder(self) -> str:\n    \"\"\"\n    Returns the block placeholder for the current block which can be used for\n    templating.\n\n    Returns:\n        str: The block placeholder for the current block in the format\n            `prefect.blocks.{block_type_name}.{block_document_name}`\n\n    Raises:\n        BlockNotSavedError: Raised if the block has not been saved.\n\n    If a block has not been saved, the return value will be `None`.\n    \"\"\"\n    block_document_name = self._block_document_name\n    if not block_document_name:\n        raise BlockNotSavedError(\n            \"Could not generate block placeholder for unsaved block.\"\n        )\n\n    return f\"prefect.blocks.{self.get_block_type_slug()}.{block_document_name}\"\n
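A minimal usage sketch, assuming a hypothetical GreeterBlock and a reachable Prefect API to save against:

```python
from prefect.blocks.core import Block

class GreeterBlock(Block):  # hypothetical example block
    greeting: str = "hello"

block = GreeterBlock(greeting="hi")
block.save("my-greeter")  # the block must be saved before a placeholder exists

# Returns something like "prefect.blocks.greeterblock.my-greeter";
# calling this on an unsaved block raises BlockNotSavedError.
print(block.get_block_placeholder())
```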
    ","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_code_example","title":"get_code_example classmethod","text":"

    Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided.

    Source code in prefect/blocks/core.py
    @classmethod\ndef get_code_example(cls) -> Optional[str]:\n    \"\"\"\n    Returns the code example for the given block. Attempts to parse\n    code example from the class docstring if an override is not provided.\n    \"\"\"\n    code_example = (\n        dedent(cls._code_example) if cls._code_example is not None else None\n    )\n    # If no code example override has been provided, attempt to find a examples\n    # section or an admonition with the annotation \"example\" and use that as the\n    # code example\n    if code_example is None and cls.__doc__ is not None:\n        parsed = cls._parse_docstring()\n        for section in parsed:\n            # Section kind will be \"examples\" if Examples section heading is used.\n            if section.kind == DocstringSectionKind.examples:\n                # Examples sections are made up of smaller sections that need to be\n                # joined with newlines. Smaller sections are represented as tuples\n                # with shape (DocstringSectionKind, str)\n                code_example = \"\\n\".join(\n                    (part[1] for part in section.as_dict().get(\"value\", []))\n                )\n                break\n            # Section kind will be \"admonition\" if Example section heading is used.\n            if section.kind == DocstringSectionKind.admonition:\n                value = section.as_dict().get(\"value\", {})\n                if value.get(\"annotation\") == \"example\":\n                    code_example = value.get(\"description\")\n                    break\n\n    if code_example is None:\n        # If no code example has been specified or extracted from the class\n        # docstring, generate a sensible default\n        code_example = cls._generate_code_example()\n\n    return code_example\n
    ","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_description","title":"get_description classmethod","text":"

    Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined.

    Source code in prefect/blocks/core.py
    @classmethod\ndef get_description(cls) -> Optional[str]:\n    \"\"\"\n    Returns the description for the current block. Attempts to parse\n    description from class docstring if an override is not defined.\n    \"\"\"\n    description = cls._description\n    # If no description override has been provided, find the first text section\n    # and use that as the description\n    if description is None and cls.__doc__ is not None:\n        parsed = cls._parse_docstring()\n        parsed_description = next(\n            (\n                section.as_dict().get(\"value\")\n                for section in parsed\n                if section.kind == DocstringSectionKind.text\n            ),\n            None,\n        )\n        if isinstance(parsed_description, str):\n            description = parsed_description.strip()\n    return description\n
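A small sketch (the GreeterBlock class is hypothetical): with no _description override set, the first text section of the class docstring is parsed and returned.

```python
from prefect.blocks.core import Block

class GreeterBlock(Block):  # hypothetical example block
    """Stores a greeting used by example flows."""

    greeting: str = "hello"

# No _description override is defined, so the docstring's first text
# section is used as the description.
print(GreeterBlock.get_description())
# "Stores a greeting used by example flows."
```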
    ","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.load","title":"load async classmethod","text":"

    Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document.

    If a block document for a given block type is saved with a different schema than the current class calling load, a warning will be raised.

    If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default validate = True.

    If the current class schema is a superset of the block document schema, load must be called with validate set to False to prevent a validation error. In this case, the block attributes will default to None and must be set manually and saved to a new block document before the block can be used as expected.

    Parameters:

    Name Type Description Default name str

    The name or slug of the block document. A block document slug is a string with the format <block_type_slug>/<block_document_name> required validate bool

    If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by name was saved.

    True client PrefectClient

    The client to use to load the block document. If not provided, the default client will be injected.

    None

    Raises:

    Type Description ValueError

    If the requested block document is not found.

    Returns:

    Type Description

    An instance of the current class hydrated with the data stored in the block document with the specified name.

    Examples:

    Load from a Block subclass with a block document name:

    class Custom(Block):\n    message: str\n\nCustom(message=\"Hello!\").save(\"my-custom-message\")\n\nloaded_block = Custom.load(\"my-custom-message\")\n

    Load from Block with a block document slug:

    class Custom(Block):\n    message: str\n\nCustom(message=\"Hello!\").save(\"my-custom-message\")\n\nloaded_block = Block.load(\"custom/my-custom-message\")\n

    Migrate a block document to a new schema:

    # original class\nclass Custom(Block):\n    message: str\n\nCustom(message=\"Hello!\").save(\"my-custom-message\")\n\n# Updated class with new required field\nclass Custom(Block):\n    message: str\n    number_of_ducks: int\n\nloaded_block = Custom.load(\"my-custom-message\", validate=False)\n\n# Prints UserWarning about schema mismatch\n\nloaded_block.number_of_ducks = 42\n\nloaded_block.save(\"my-custom-message\", overwrite=True)\n

    Source code in prefect/blocks/core.py
    @classmethod\n@sync_compatible\n@inject_client\nasync def load(\n    cls,\n    name: str,\n    validate: bool = True,\n    client: \"PrefectClient\" = None,\n):\n    \"\"\"\n    Retrieves data from the block document with the given name for the block type\n    that corresponds with the current class and returns an instantiated version of\n    the current class with the data stored in the block document.\n\n    If a block document for a given block type is saved with a different schema\n    than the current class calling `load`, a warning will be raised.\n\n    If the current class schema is a subset of the block document schema, the block\n    can be loaded as normal using the default `validate = True`.\n\n    If the current class schema is a superset of the block document schema, `load`\n    must be called with `validate` set to False to prevent a validation error. In\n    this case, the block attributes will default to `None` and must be set manually\n    and saved to a new block document before the block can be used as expected.\n\n    Args:\n        name: The name or slug of the block document. A block document slug is a\n            string with the format <block_type_slug>/<block_document_name>\n        validate: If False, the block document will be loaded without Pydantic\n            validating the block schema. This is useful if the block schema has\n            changed client-side since the block document referred to by `name` was saved.\n        client: The client to use to load the block document. If not provided, the\n            default client will be injected.\n\n    Raises:\n        ValueError: If the requested block document is not found.\n\n    Returns:\n        An instance of the current class hydrated with the data stored in the\n        block document with the specified name.\n\n    Examples:\n        Load from a Block subclass with a block document name:\n        ```python\n        class Custom(Block):\n            message: str\n\n        Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n        loaded_block = Custom.load(\"my-custom-message\")\n        ```\n\n        Load from Block with a block document slug:\n        ```python\n        class Custom(Block):\n            message: str\n\n        Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n        loaded_block = Block.load(\"custom/my-custom-message\")\n        ```\n\n        Migrate a block document to a new schema:\n        ```python\n        # original class\n        class Custom(Block):\n            message: str\n\n        Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n        # Updated class with new required field\n        class Custom(Block):\n            message: str\n            number_of_ducks: int\n\n        loaded_block = Custom.load(\"my-custom-message\", validate=False)\n\n        # Prints UserWarning about schema mismatch\n\n        loaded_block.number_of_ducks = 42\n\n        loaded_block.save(\"my-custom-message\", overwrite=True)\n        ```\n    \"\"\"\n    block_document, block_document_name = await cls._get_block_document(name)\n\n    try:\n        return cls._from_block_document(block_document)\n    except ValidationError as e:\n        if not validate:\n            missing_fields = tuple(err[\"loc\"][0] for err in e.errors())\n            missing_block_data = {field: None for field in missing_fields}\n            warnings.warn(\n                f\"Could not fully load {block_document_name!r} of block type\"\n                f\" {cls._block_type_slug!r} - this 
is likely because one or more\"\n                \" required fields were added to the schema for\"\n                f\" {cls.__name__!r} that did not exist on the class when this block\"\n                \" was last saved. Please specify values for new field(s):\"\n                f\" {listrepr(missing_fields)}, then run\"\n                f' `{cls.__name__}.save(\"{block_document_name}\", overwrite=True)`,'\n                \" and load this block again before attempting to use it.\"\n            )\n            return cls.construct(**block_document.data, **missing_block_data)\n        raise RuntimeError(\n            f\"Unable to load {block_document_name!r} of block type\"\n            f\" {cls._block_type_slug!r} due to failed validation. To load without\"\n            \" validation, try loading again with `validate=False`.\"\n        ) from e\n
    ","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.register_type_and_schema","title":"register_type_and_schema async classmethod","text":"

    Makes the block available for configuration with the current Prefect API. Recursively registers all nested blocks. Registration is idempotent.
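
    As a hedged sketch (the Custom subclass below is hypothetical, and a reachable Prefect API is assumed), registration can be triggered explicitly for a block subclass:

    from prefect.blocks.core import Block\n\nclass Custom(Block):\n    message: str\n\n# Registers the block type and schema with the configured Prefect API;\n# running this again is a no-op because registration is idempotent.\nCustom.register_type_and_schema()\n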

    Parameters:

    Name Type Description Default client PrefectClient

    Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided.

    None Source code in prefect/blocks/core.py
    @classmethod\n@sync_compatible\n@inject_client\nasync def register_type_and_schema(cls, client: \"PrefectClient\" = None):\n    \"\"\"\n    Makes block available for configuration with current Prefect API.\n    Recursively registers all nested blocks. Registration is idempotent.\n\n    Args:\n        client: Optional client to use for registering type and schema with the\n            Prefect API. A new client will be created and used if one is not\n            provided.\n    \"\"\"\n    if cls.__name__ == \"Block\":\n        raise InvalidBlockRegistration(\n            \"`register_type_and_schema` should be called on a Block \"\n            \"subclass and not on the Block class directly.\"\n        )\n    if ABC in getattr(cls, \"__bases__\", []):\n        raise InvalidBlockRegistration(\n            \"`register_type_and_schema` should be called on a Block \"\n            \"subclass and not on a Block interface class directly.\"\n        )\n\n    for field in cls.__fields__.values():\n        if Block.is_block_class(field.type_):\n            await field.type_.register_type_and_schema(client=client)\n        if get_origin(field.type_) is Union:\n            for type_ in get_args(field.type_):\n                if Block.is_block_class(type_):\n                    await type_.register_type_and_schema(client=client)\n\n    try:\n        block_type = await client.read_block_type_by_slug(\n            slug=cls.get_block_type_slug()\n        )\n        cls._block_type_id = block_type.id\n        local_block_type = cls._to_block_type()\n        if _should_update_block_type(\n            local_block_type=local_block_type, server_block_type=block_type\n        ):\n            await client.update_block_type(\n                block_type_id=block_type.id, block_type=local_block_type\n            )\n    except prefect.exceptions.ObjectNotFound:\n        block_type = await client.create_block_type(block_type=cls._to_block_type())\n        cls._block_type_id = block_type.id\n\n    try:\n        block_schema = await client.read_block_schema_by_checksum(\n            checksum=cls._calculate_schema_checksum(),\n            version=cls.get_block_schema_version(),\n        )\n    except prefect.exceptions.ObjectNotFound:\n        block_schema = await client.create_block_schema(\n            block_schema=cls._to_block_schema(block_type_id=block_type.id)\n        )\n\n    cls._block_schema_id = block_schema.id\n
    ","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.save","title":"save async","text":"

    Saves the values of a block as a block document.
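
    For example (the block document name here is illustrative):

    from prefect.blocks.system import Secret\n\n# Save a block document; overwrite=True replaces an existing document with the same name\nSecret(value=\"my-api-token\").save(\"example-secret\", overwrite=True)\n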

    Parameters:

    Name Type Description Default name Optional[str]

    User-specified name to give the saved block document, which can later be used to load the block document.

    None overwrite bool

    Boolean value specifying whether values should be overwritten if a block document with the specified name already exists.

    False Source code in prefect/blocks/core.py
    @sync_compatible\n@instrument_instance_method_call\nasync def save(\n    self,\n    name: Optional[str] = None,\n    overwrite: bool = False,\n    client: \"PrefectClient\" = None,\n):\n    \"\"\"\n    Saves the values of a block as a block document.\n\n    Args:\n        name: User specified name to give saved block document which can later be used to load the\n            block document.\n        overwrite: Boolean value specifying if values should be overwritten if a block document with\n            the specified name already exists.\n\n    \"\"\"\n    document_id = await self._save(name=name, overwrite=overwrite, client=client)\n\n    return document_id\n
    ","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.BlockNotSavedError","title":"BlockNotSavedError","text":"

    Bases: RuntimeError

    Raised when a given block is not saved and an operation that requires the block to be saved is attempted.

    Source code in prefect/blocks/core.py
    class BlockNotSavedError(RuntimeError):\n    \"\"\"\n    Raised when a given block is not saved and an operation that requires\n    the block to be saved is attempted.\n    \"\"\"\n\n    pass\n
    ","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.InvalidBlockRegistration","title":"InvalidBlockRegistration","text":"

    Bases: Exception

    Raised on attempted registration of the base Block class or a Block interface class

    Source code in prefect/blocks/core.py
    class InvalidBlockRegistration(Exception):\n    \"\"\"\n    Raised on attempted registration of the base Block\n    class or a Block interface class\n    \"\"\"\n
    ","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.block_schema_to_key","title":"block_schema_to_key","text":"

    Defines the unique key used to look up the Block class for a given schema.

    Source code in prefect/blocks/core.py
    def block_schema_to_key(schema: BlockSchema) -> str:\n    \"\"\"\n    Defines the unique key used to lookup the Block class for a given schema.\n    \"\"\"\n    return f\"{schema.block_type.slug}\"\n
    ","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/fields/","title":"fields","text":"","tags":["Python API","fields"]},{"location":"api-ref/prefect/blocks/fields/#prefect.blocks.fields","title":"prefect.blocks.fields","text":"","tags":["Python API","fields"]},{"location":"api-ref/prefect/blocks/kubernetes/","title":"kubernetes","text":"","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes","title":"prefect.blocks.kubernetes","text":"","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig","title":"KubernetesClusterConfig","text":"

    Bases: Block

    Stores configuration for interaction with Kubernetes clusters.

    See from_file for creation.

    Attributes:

    Name Type Description config Dict

    The entire loaded YAML contents of a kubectl config file

    context_name str

    The name of the kubectl context to use

    Example

    Load a saved Kubernetes cluster config:

    from prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n

    Source code in prefect/blocks/kubernetes.py
    @deprecated_class(\n    start_date=\"Mar 2024\",\n    help=\"Use the KubernetesClusterConfig block from prefect-kubernetes instead.\",\n)\nclass KubernetesClusterConfig(Block):\n    \"\"\"\n    Stores configuration for interaction with Kubernetes clusters.\n\n    See `from_file` for creation.\n\n    Attributes:\n        config: The entire loaded YAML contents of a kubectl config file\n        context_name: The name of the kubectl context to use\n\n    Example:\n        Load a saved Kubernetes cluster config:\n        ```python\n        from prefect.blocks.kubernetes import KubernetesClusterConfig\n\n        cluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Kubernetes Cluster Config\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/2d0b896006ad463b49c28aaac14f31e00e32cfab-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig\"\n\n    config: Dict = Field(\n        default=..., description=\"The entire contents of a kubectl config file.\"\n    )\n    context_name: str = Field(\n        default=..., description=\"The name of the kubectl context to use.\"\n    )\n\n    @validator(\"config\", pre=True)\n    def parse_yaml_config(cls, value):\n        return validate_yaml(value)\n\n    @classmethod\n    def from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n        \"\"\"\n        Create a cluster config from the a Kubernetes config file.\n\n        By default, the current context in the default Kubernetes config file will be\n        used.\n\n        An alternative file or context may be specified.\n\n        The entire config file will be loaded and stored.\n        \"\"\"\n        kube_config = kubernetes.config.kube_config\n\n        path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n        path = path.expanduser().resolve()\n\n        # Determine the context\n        existing_contexts, current_context = kube_config.list_kube_config_contexts(\n            config_file=str(path)\n        )\n        context_names = {ctx[\"name\"] for ctx in existing_contexts}\n        if context_name:\n            if context_name not in context_names:\n                raise ValueError(\n                    f\"Context {context_name!r} not found. \"\n                    f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n                )\n        else:\n            context_name = current_context[\"name\"]\n\n        # Load the entire config file\n        config_file_contents = path.read_text()\n        config_dict = yaml.safe_load(config_file_contents)\n\n        return cls(config=config_dict, context_name=context_name)\n\n    def get_api_client(self) -> \"ApiClient\":\n        \"\"\"\n        Returns a Kubernetes API client for this cluster config.\n        \"\"\"\n        return kubernetes.config.kube_config.new_client_from_config_dict(\n            config_dict=self.config, context=self.context_name\n        )\n\n    def configure_client(self) -> None:\n        \"\"\"\n        Activates this cluster configuration by loading the configuration into the\n        Kubernetes Python client. After calling this, Kubernetes API clients can use\n        this config's context.\n        \"\"\"\n        kubernetes.config.kube_config.load_kube_config_from_dict(\n            config_dict=self.config, context=self.context_name\n        )\n
    ","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig.configure_client","title":"configure_client","text":"

    Activates this cluster configuration by loading the configuration into the Kubernetes Python client. After calling this, Kubernetes API clients can use this config's context.
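
    A minimal sketch (the block name is a placeholder, and the kubernetes package is assumed to be installed) that loads a saved config and activates it for the Kubernetes Python client:

    import kubernetes\n\nfrom prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n\n# Load this config into the Kubernetes Python client's global configuration\ncluster_config_block.configure_client()\n\n# Clients created afterwards use the block's context\nv1 = kubernetes.client.CoreV1Api()\n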

    Source code in prefect/blocks/kubernetes.py
    def configure_client(self) -> None:\n    \"\"\"\n    Activates this cluster configuration by loading the configuration into the\n    Kubernetes Python client. After calling this, Kubernetes API clients can use\n    this config's context.\n    \"\"\"\n    kubernetes.config.kube_config.load_kube_config_from_dict(\n        config_dict=self.config, context=self.context_name\n    )\n
    ","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig.from_file","title":"from_file classmethod","text":"

    Create a cluster config from a Kubernetes config file.

    By default, the current context in the default Kubernetes config file will be used.

    An alternative file or context may be specified.

    The entire config file will be loaded and stored.
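
    For instance (the file path and context name shown are placeholders):

    from prefect.blocks.kubernetes import KubernetesClusterConfig\n\n# Use the current context from the default kubeconfig location\ncluster_config = KubernetesClusterConfig.from_file()\n\n# Or pick a specific context from a specific file\ncluster_config = KubernetesClusterConfig.from_file(\n    path=\"~/.kube/config\", context_name=\"my-context\"\n)\n\ncluster_config.save(\"my-cluster-config\")\n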

    Source code in prefect/blocks/kubernetes.py
    @classmethod\ndef from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n    \"\"\"\n    Create a cluster config from the a Kubernetes config file.\n\n    By default, the current context in the default Kubernetes config file will be\n    used.\n\n    An alternative file or context may be specified.\n\n    The entire config file will be loaded and stored.\n    \"\"\"\n    kube_config = kubernetes.config.kube_config\n\n    path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n    path = path.expanduser().resolve()\n\n    # Determine the context\n    existing_contexts, current_context = kube_config.list_kube_config_contexts(\n        config_file=str(path)\n    )\n    context_names = {ctx[\"name\"] for ctx in existing_contexts}\n    if context_name:\n        if context_name not in context_names:\n            raise ValueError(\n                f\"Context {context_name!r} not found. \"\n                f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n            )\n    else:\n        context_name = current_context[\"name\"]\n\n    # Load the entire config file\n    config_file_contents = path.read_text()\n    config_dict = yaml.safe_load(config_file_contents)\n\n    return cls(config=config_dict, context_name=context_name)\n
    ","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig.get_api_client","title":"get_api_client","text":"

    Returns a Kubernetes API client for this cluster config.
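
    A minimal sketch (the block name is a placeholder):

    from prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n\n# Returns a kubernetes.client.ApiClient bound to this config's context\napi_client = cluster_config_block.get_api_client()\n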

    Source code in prefect/blocks/kubernetes.py
    def get_api_client(self) -> \"ApiClient\":\n    \"\"\"\n    Returns a Kubernetes API client for this cluster config.\n    \"\"\"\n    return kubernetes.config.kube_config.new_client_from_config_dict(\n        config_dict=self.config, context=self.context_name\n    )\n
    ","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/notifications/","title":"notifications","text":"","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications","title":"prefect.blocks.notifications","text":"","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.AbstractAppriseNotificationBlock","title":"AbstractAppriseNotificationBlock","text":"

    Bases: NotificationBlock, ABC

    An abstract class for sending notifications using Apprise.

    Source code in prefect/blocks/notifications.py
    class AbstractAppriseNotificationBlock(NotificationBlock, ABC):\n    \"\"\"\n    An abstract class for sending notifications using Apprise.\n    \"\"\"\n\n    notify_type: Literal[\n        \"prefect_default\", \"info\", \"success\", \"warning\", \"failure\"\n    ] = Field(\n        default=PREFECT_NOTIFY_TYPE_DEFAULT,\n        description=(\n            \"The type of notification being performed; the prefect_default \"\n            \"is a plain notification that does not attach an image.\"\n        ),\n    )\n\n    def __init__(self, *args, **kwargs):\n        import apprise\n\n        if PREFECT_NOTIFY_TYPE_DEFAULT not in apprise.NOTIFY_TYPES:\n            apprise.NOTIFY_TYPES += (PREFECT_NOTIFY_TYPE_DEFAULT,)\n\n        super().__init__(*args, **kwargs)\n\n    def _start_apprise_client(self, url: SecretStr):\n        from apprise import Apprise, AppriseAsset\n\n        # A custom `AppriseAsset` that ensures Prefect Notifications\n        # appear correctly across multiple messaging platforms\n        prefect_app_data = AppriseAsset(\n            app_id=\"Prefect Notifications\",\n            app_desc=\"Prefect Notifications\",\n            app_url=\"https://prefect.io\",\n        )\n\n        self._apprise_client = Apprise(asset=prefect_app_data)\n        self._apprise_client.add(url.get_secret_value())\n\n    def block_initialization(self) -> None:\n        self._start_apprise_client(self.url)\n\n    @sync_compatible\n    @instrument_instance_method_call\n    async def notify(\n        self,\n        body: str,\n        subject: Optional[str] = None,\n    ):\n        with LogEavesdropper(\"apprise\", level=logging.DEBUG) as eavesdropper:\n            result = await self._apprise_client.async_notify(\n                body=body, title=subject, notify_type=self.notify_type\n            )\n        if not result and self._raise_on_failure:\n            raise NotificationError(log=eavesdropper.text())\n
    ","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.AppriseNotificationBlock","title":"AppriseNotificationBlock","text":"

    Bases: AbstractAppriseNotificationBlock, ABC

    A base class for sending notifications using Apprise, through webhook URLs.

    Source code in prefect/blocks/notifications.py
    class AppriseNotificationBlock(AbstractAppriseNotificationBlock, ABC):\n    \"\"\"\n    A base class for sending notifications using Apprise, through webhook URLs.\n    \"\"\"\n\n    _documentation_url = \"https://docs.prefect.io/ui/notifications/\"\n    url: SecretStr = Field(\n        default=...,\n        title=\"Webhook URL\",\n        description=\"Incoming webhook URL used to send notifications.\",\n        examples=[\"https://hooks.example.com/XXX\"],\n    )\n
    ","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.CustomWebhookNotificationBlock","title":"CustomWebhookNotificationBlock","text":"

    Bases: NotificationBlock

    Enables sending notifications via any custom webhook.

    Any nested string parameter containing {{key}} will have {{key}} substituted with a value from the context or secrets.

    Context values include: subject, body and name.

    Examples:

    Load a saved custom webhook and send a message:

    from prefect.blocks.notifications import CustomWebhookNotificationBlock\n\ncustom_webhook_block = CustomWebhookNotificationBlock.load(\"BLOCK_NAME\")\n\ncustom_webhook_block.notify(\"Hello from Prefect!\")\n

    Source code in prefect/blocks/notifications.py
    class CustomWebhookNotificationBlock(NotificationBlock):\n    \"\"\"\n    Enables sending notifications via any custom webhook.\n\n    All nested string param contains `{{key}}` will be substituted with value from context/secrets.\n\n    Context values include: `subject`, `body` and `name`.\n\n    Examples:\n        Load a saved custom webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import CustomWebhookNotificationBlock\n\n        custom_webhook_block = CustomWebhookNotificationBlock.load(\"BLOCK_NAME\")\n\n        custom_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Custom Webhook\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/c7247cb359eb6cf276734d4b1fbf00fb8930e89e-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.CustomWebhookNotificationBlock\"\n\n    name: str = Field(title=\"Name\", description=\"Name of the webhook.\")\n\n    url: str = Field(\n        title=\"Webhook URL\",\n        description=\"The webhook URL.\",\n        examples=[\"https://hooks.slack.com/XXX\"],\n    )\n\n    method: Literal[\"GET\", \"POST\", \"PUT\", \"PATCH\", \"DELETE\"] = Field(\n        default=\"POST\", description=\"The webhook request method. Defaults to `POST`.\"\n    )\n\n    params: Optional[Dict[str, str]] = Field(\n        default=None, title=\"Query Params\", description=\"Custom query params.\"\n    )\n    json_data: Optional[dict] = Field(\n        default=None,\n        title=\"JSON Data\",\n        description=\"Send json data as payload.\",\n        examples=[\n            '{\"text\": \"{{subject}}\\\\n{{body}}\", \"title\": \"{{name}}\", \"token\":'\n            ' \"{{tokenFromSecrets}}\"}'\n        ],\n    )\n    form_data: Optional[Dict[str, str]] = Field(\n        default=None,\n        title=\"Form Data\",\n        description=(\n            \"Send form data as payload. Should not be used together with _JSON Data_.\"\n        ),\n        examples=[\n            '{\"text\": \"{{subject}}\\\\n{{body}}\", \"title\": \"{{name}}\", \"token\":'\n            ' \"{{tokenFromSecrets}}\"}'\n        ],\n    )\n\n    headers: Optional[Dict[str, str]] = Field(None, description=\"Custom headers.\")\n    cookies: Optional[Dict[str, str]] = Field(None, description=\"Custom cookies.\")\n\n    timeout: float = Field(\n        default=10, description=\"Request timeout in seconds. 
Defaults to 10.\"\n    )\n\n    secrets: SecretDict = Field(\n        default_factory=lambda: SecretDict(dict()),\n        title=\"Custom Secret Values\",\n        description=\"A dictionary of secret values to be substituted in other configs.\",\n        examples=['{\"tokenFromSecrets\":\"SomeSecretToken\"}'],\n    )\n\n    def _build_request_args(self, body: str, subject: Optional[str]):\n        \"\"\"Build kwargs for httpx.AsyncClient.request\"\"\"\n        # prepare values\n        values = self.secrets.get_secret_value()\n        # use 'null' when subject is None\n        values.update(\n            {\n                \"subject\": \"null\" if subject is None else subject,\n                \"body\": body,\n                \"name\": self.name,\n            }\n        )\n        # do substution\n        return apply_values(\n            {\n                \"method\": self.method,\n                \"url\": self.url,\n                \"params\": self.params,\n                \"data\": self.form_data,\n                \"json\": self.json_data,\n                \"headers\": self.headers,\n                \"cookies\": self.cookies,\n                \"timeout\": self.timeout,\n            },\n            values,\n        )\n\n    def block_initialization(self) -> None:\n        # check form_data and json_data\n        if self.form_data is not None and self.json_data is not None:\n            raise ValueError(\"both `Form Data` and `JSON Data` provided\")\n        allowed_keys = {\"subject\", \"body\", \"name\"}.union(\n            self.secrets.get_secret_value().keys()\n        )\n        # test template to raise a error early\n        for name in [\"url\", \"params\", \"form_data\", \"json_data\", \"headers\", \"cookies\"]:\n            template = getattr(self, name)\n            if template is None:\n                continue\n            # check for placeholders not in predefined keys and secrets\n            placeholders = find_placeholders(template)\n            for placeholder in placeholders:\n                if placeholder.name not in allowed_keys:\n                    raise KeyError(f\"{name}/{placeholder}\")\n\n    @sync_compatible\n    @instrument_instance_method_call\n    async def notify(self, body: str, subject: Optional[str] = None):\n        import httpx\n\n        request_args = self._build_request_args(body, subject)\n        cookies = request_args.pop(\"cookies\", None)\n        # make request with httpx\n        client = httpx.AsyncClient(\n            headers={\"user-agent\": \"Prefect Notifications\"}, cookies=cookies\n        )\n        async with client:\n            resp = await client.request(**request_args)\n        resp.raise_for_status()\n
    ","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.DiscordWebhook","title":"DiscordWebhook","text":"

    Bases: AbstractAppriseNotificationBlock

    Enables sending notifications via a provided Discord webhook. See Apprise notify_Discord docs

    Examples:

    Load a saved Discord webhook and send a message:

    from prefect.blocks.notifications import DiscordWebhook\n\ndiscord_webhook_block = DiscordWebhook.load(\"BLOCK_NAME\")\n\ndiscord_webhook_block.notify(\"Hello from Prefect!\")\n

    Source code in prefect/blocks/notifications.py
    class DiscordWebhook(AbstractAppriseNotificationBlock):\n    \"\"\"\n    Enables sending notifications via a provided Discord webhook.\n    See [Apprise notify_Discord docs](https://github.com/caronc/apprise/wiki/Notify_Discord) # noqa\n\n    Examples:\n        Load a saved Discord webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import DiscordWebhook\n\n        discord_webhook_block = DiscordWebhook.load(\"BLOCK_NAME\")\n\n        discord_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _description = \"Enables sending notifications via a provided Discord webhook.\"\n    _block_type_name = \"Discord Webhook\"\n    _block_type_slug = \"discord-webhook\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/9e94976c80ef925b66d24e5d14f0d47baa6b8f88-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.DiscordWebhook\"\n\n    webhook_id: SecretStr = Field(\n        default=...,\n        description=(\n            \"The first part of 2 tokens provided to you after creating a\"\n            \" incoming-webhook.\"\n        ),\n    )\n\n    webhook_token: SecretStr = Field(\n        default=...,\n        description=(\n            \"The second part of 2 tokens provided to you after creating a\"\n            \" incoming-webhook.\"\n        ),\n    )\n\n    botname: Optional[str] = Field(\n        title=\"Bot name\",\n        default=None,\n        description=(\n            \"Identify the name of the bot that should issue the message. If one isn't\"\n            \" specified then the default is to just use your account (associated with\"\n            \" the incoming-webhook).\"\n        ),\n    )\n\n    tts: bool = Field(\n        default=False,\n        description=\"Whether to enable Text-To-Speech.\",\n    )\n\n    include_image: bool = Field(\n        default=False,\n        description=(\n            \"Whether to include an image in-line with the message describing the\"\n            \" notification type.\"\n        ),\n    )\n\n    avatar: bool = Field(\n        default=False,\n        description=\"Whether to override the default discord avatar icon.\",\n    )\n\n    avatar_url: Optional[str] = Field(\n        title=\"Avatar URL\",\n        default=False,\n        description=(\n            \"Over-ride the default discord avatar icon URL. By default this is not set\"\n            \" and Apprise chooses the URL dynamically based on the type of message\"\n            \" (info, success, warning, or error).\"\n        ),\n    )\n\n    def block_initialization(self) -> None:\n        from apprise.plugins.NotifyDiscord import NotifyDiscord\n\n        url = SecretStr(\n            NotifyDiscord(\n                webhook_id=self.webhook_id.get_secret_value(),\n                webhook_token=self.webhook_token.get_secret_value(),\n                botname=self.botname,\n                tts=self.tts,\n                include_image=self.include_image,\n                avatar=self.avatar,\n                avatar_url=self.avatar_url,\n            ).url()\n        )\n        self._start_apprise_client(url)\n
    ","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MattermostWebhook","title":"MattermostWebhook","text":"

    Bases: AbstractAppriseNotificationBlock

    Enables sending notifications via a provided Mattermost webhook. See Apprise notify_Mattermost docs

    Examples:

    Load a saved Mattermost webhook and send a message:

    from prefect.blocks.notifications import MattermostWebhook\n\nmattermost_webhook_block = MattermostWebhook.load(\"BLOCK_NAME\")\n\nmattermost_webhook_block.notify(\"Hello from Prefect!\")\n

    Source code in prefect/blocks/notifications.py
    class MattermostWebhook(AbstractAppriseNotificationBlock):\n    \"\"\"\n    Enables sending notifications via a provided Mattermost webhook.\n    See [Apprise notify_Mattermost docs](https://github.com/caronc/apprise/wiki/Notify_Mattermost) # noqa\n\n\n    Examples:\n        Load a saved Mattermost webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import MattermostWebhook\n\n        mattermost_webhook_block = MattermostWebhook.load(\"BLOCK_NAME\")\n\n        mattermost_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _description = \"Enables sending notifications via a provided Mattermost webhook.\"\n    _block_type_name = \"Mattermost Webhook\"\n    _block_type_slug = \"mattermost-webhook\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/1350a147130bf82cbc799a5f868d2c0116207736-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MattermostWebhook\"\n\n    hostname: str = Field(\n        default=...,\n        description=\"The hostname of your Mattermost server.\",\n        examples=[\"Mattermost.example.com\"],\n    )\n\n    token: SecretStr = Field(\n        default=...,\n        description=\"The token associated with your Mattermost webhook.\",\n    )\n\n    botname: Optional[str] = Field(\n        title=\"Bot name\",\n        default=None,\n        description=\"The name of the bot that will send the message.\",\n    )\n\n    channels: Optional[List[str]] = Field(\n        default=None,\n        description=\"The channel(s) you wish to notify.\",\n    )\n\n    include_image: bool = Field(\n        default=False,\n        description=\"Whether to include the Apprise status image in the message.\",\n    )\n\n    path: Optional[str] = Field(\n        default=None,\n        description=\"An optional sub-path specification to append to the hostname.\",\n    )\n\n    port: int = Field(\n        default=8065,\n        description=\"The port of your Mattermost server.\",\n    )\n\n    def block_initialization(self) -> None:\n        from apprise.plugins.NotifyMattermost import NotifyMattermost\n\n        url = SecretStr(\n            NotifyMattermost(\n                token=self.token.get_secret_value(),\n                fullpath=self.path,\n                host=self.hostname,\n                botname=self.botname,\n                channels=self.channels,\n                include_image=self.include_image,\n                port=self.port,\n            ).url()\n        )\n        self._start_apprise_client(url)\n
    ","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MicrosoftTeamsWebhook","title":"MicrosoftTeamsWebhook","text":"

    Bases: AppriseNotificationBlock

    Enables sending notifications via a provided Microsoft Teams webhook.

    Examples:

    Load a saved Teams webhook and send a message:

    from prefect.blocks.notifications import MicrosoftTeamsWebhook\nteams_webhook_block = MicrosoftTeamsWebhook.load(\"BLOCK_NAME\")\nteams_webhook_block.notify(\"Hello from Prefect!\")\n

    Source code in prefect/blocks/notifications.py
    class MicrosoftTeamsWebhook(AppriseNotificationBlock):\n    \"\"\"\n    Enables sending notifications via a provided Microsoft Teams webhook.\n\n    Examples:\n        Load a saved Teams webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import MicrosoftTeamsWebhook\n        teams_webhook_block = MicrosoftTeamsWebhook.load(\"BLOCK_NAME\")\n        teams_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Microsoft Teams Webhook\"\n    _block_type_slug = \"ms-teams-webhook\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/817efe008a57f0a24f3587414714b563e5e23658-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MicrosoftTeamsWebhook\"\n\n    url: SecretStr = Field(\n        ...,\n        title=\"Webhook URL\",\n        description=\"The Teams incoming webhook URL used to send notifications.\",\n        examples=[\n            \"https://your-org.webhook.office.com/webhookb2/XXX/IncomingWebhook/YYY/ZZZ\"\n        ],\n    )\n
    ","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.OpsgenieWebhook","title":"OpsgenieWebhook","text":"

    Bases: AbstractAppriseNotificationBlock

    Enables sending notifications via a provided Opsgenie webhook. See Apprise notify_opsgenie docs for more info on formatting the URL.

    Examples:

    Load a saved Opsgenie webhook and send a message:

    from prefect.blocks.notifications import OpsgenieWebhook\nopsgenie_webhook_block = OpsgenieWebhook.load(\"BLOCK_NAME\")\nopsgenie_webhook_block.notify(\"Hello from Prefect!\")\n

    Source code in prefect/blocks/notifications.py
    class OpsgenieWebhook(AbstractAppriseNotificationBlock):\n    \"\"\"\n    Enables sending notifications via a provided Opsgenie webhook.\n    See [Apprise notify_opsgenie docs](https://github.com/caronc/apprise/wiki/Notify_opsgenie)\n    for more info on formatting the URL.\n\n    Examples:\n        Load a saved Opsgenie webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import OpsgenieWebhook\n        opsgenie_webhook_block = OpsgenieWebhook.load(\"BLOCK_NAME\")\n        opsgenie_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _description = \"Enables sending notifications via a provided Opsgenie webhook.\"\n\n    _block_type_name = \"Opsgenie Webhook\"\n    _block_type_slug = \"opsgenie-webhook\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/d8b5bc6244ae6cd83b62ec42f10d96e14d6e9113-280x280.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.OpsgenieWebhook\"\n\n    apikey: SecretStr = Field(\n        default=...,\n        title=\"API Key\",\n        description=\"The API Key associated with your Opsgenie account.\",\n    )\n\n    target_user: Optional[List] = Field(\n        default=None, description=\"The user(s) you wish to notify.\"\n    )\n\n    target_team: Optional[List] = Field(\n        default=None, description=\"The team(s) you wish to notify.\"\n    )\n\n    target_schedule: Optional[List] = Field(\n        default=None, description=\"The schedule(s) you wish to notify.\"\n    )\n\n    target_escalation: Optional[List] = Field(\n        default=None, description=\"The escalation(s) you wish to notify.\"\n    )\n\n    region_name: Literal[\"us\", \"eu\"] = Field(\n        default=\"us\", description=\"The 2-character region code.\"\n    )\n\n    batch: bool = Field(\n        default=False,\n        description=\"Notify all targets in batches (instead of individually).\",\n    )\n\n    tags: Optional[List] = Field(\n        default=None,\n        description=(\n            \"A comma-separated list of tags you can associate with your Opsgenie\"\n            \" message.\"\n        ),\n        examples=['[\"tag1\", \"tag2\"]'],\n    )\n\n    priority: Optional[str] = Field(\n        default=3,\n        description=(\n            \"The priority to associate with the message. 
It is on a scale between 1\"\n            \" (LOW) and 5 (EMERGENCY).\"\n        ),\n    )\n\n    alias: Optional[str] = Field(\n        default=None, description=\"The alias to associate with the message.\"\n    )\n\n    entity: Optional[str] = Field(\n        default=None, description=\"The entity to associate with the message.\"\n    )\n\n    details: Optional[Dict[str, str]] = Field(\n        default=None,\n        description=\"Additional details composed of key/values pairs.\",\n        examples=['{\"key1\": \"value1\", \"key2\": \"value2\"}'],\n    )\n\n    def block_initialization(self) -> None:\n        from apprise.plugins.NotifyOpsgenie import NotifyOpsgenie\n\n        targets = []\n        if self.target_user:\n            [targets.append(f\"@{x}\") for x in self.target_user]\n        if self.target_team:\n            [targets.append(f\"#{x}\") for x in self.target_team]\n        if self.target_schedule:\n            [targets.append(f\"*{x}\") for x in self.target_schedule]\n        if self.target_escalation:\n            [targets.append(f\"^{x}\") for x in self.target_escalation]\n        url = SecretStr(\n            NotifyOpsgenie(\n                apikey=self.apikey.get_secret_value(),\n                targets=targets,\n                region_name=self.region_name,\n                details=self.details,\n                priority=self.priority,\n                alias=self.alias,\n                entity=self.entity,\n                batch=self.batch,\n                tags=self.tags,\n            ).url()\n        )\n        self._start_apprise_client(url)\n
    ","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.PagerDutyWebHook","title":"PagerDutyWebHook","text":"

    Bases: AbstractAppriseNotificationBlock

    Enables sending notifications via a provided PagerDuty webhook. See Apprise notify_pagerduty docs for more info on formatting the URL.

    Examples:

    Load a saved PagerDuty webhook and send a message:

    from prefect.blocks.notifications import PagerDutyWebHook\npagerduty_webhook_block = PagerDutyWebHook.load(\"BLOCK_NAME\")\npagerduty_webhook_block.notify(\"Hello from Prefect!\")\n

    Source code in prefect/blocks/notifications.py
    class PagerDutyWebHook(AbstractAppriseNotificationBlock):\n    \"\"\"\n    Enables sending notifications via a provided PagerDuty webhook.\n    See [Apprise notify_pagerduty docs](https://github.com/caronc/apprise/wiki/Notify_pagerduty)\n    for more info on formatting the URL.\n\n    Examples:\n        Load a saved PagerDuty webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import PagerDutyWebHook\n        pagerduty_webhook_block = PagerDutyWebHook.load(\"BLOCK_NAME\")\n        pagerduty_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _description = \"Enables sending notifications via a provided PagerDuty webhook.\"\n\n    _block_type_name = \"Pager Duty Webhook\"\n    _block_type_slug = \"pager-duty-webhook\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/8dbf37d17089c1ce531708eac2e510801f7b3aee-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.PagerDutyWebHook\"\n\n    # The default cannot be prefect_default because NotifyPagerDuty's\n    # PAGERDUTY_SEVERITY_MAP only has these notify types defined as keys\n    notify_type: Literal[\"info\", \"success\", \"warning\", \"failure\"] = Field(\n        default=\"info\", description=\"The severity of the notification.\"\n    )\n\n    integration_key: SecretStr = Field(\n        default=...,\n        description=(\n            \"This can be found on the Events API V2 \"\n            \"integration's detail page, and is also referred to as a Routing Key. \"\n            \"This must be provided alongside `api_key`, but will error if provided \"\n            \"alongside `url`.\"\n        ),\n    )\n\n    api_key: SecretStr = Field(\n        default=...,\n        title=\"API Key\",\n        description=(\n            \"This can be found under Integrations. 
\"\n            \"This must be provided alongside `integration_key`, but will error if \"\n            \"provided alongside `url`.\"\n        ),\n    )\n\n    source: Optional[str] = Field(\n        default=\"Prefect\", description=\"The source string as part of the payload.\"\n    )\n\n    component: str = Field(\n        default=\"Notification\",\n        description=\"The component string as part of the payload.\",\n    )\n\n    group: Optional[str] = Field(\n        default=None, description=\"The group string as part of the payload.\"\n    )\n\n    class_id: Optional[str] = Field(\n        default=None,\n        title=\"Class ID\",\n        description=\"The class string as part of the payload.\",\n    )\n\n    region_name: Literal[\"us\", \"eu\"] = Field(\n        default=\"us\", description=\"The region name.\"\n    )\n\n    clickable_url: Optional[AnyHttpUrl] = Field(\n        default=None,\n        title=\"Clickable URL\",\n        description=\"A clickable URL to associate with the notice.\",\n    )\n\n    include_image: bool = Field(\n        default=True,\n        description=\"Associate the notification status via a represented icon.\",\n    )\n\n    custom_details: Optional[Dict[str, str]] = Field(\n        default=None,\n        description=\"Additional details to include as part of the payload.\",\n        examples=['{\"disk_space_left\": \"145GB\"}'],\n    )\n\n    def block_initialization(self) -> None:\n        from apprise.plugins.NotifyPagerDuty import NotifyPagerDuty\n\n        url = SecretStr(\n            NotifyPagerDuty(\n                apikey=self.api_key.get_secret_value(),\n                integrationkey=self.integration_key.get_secret_value(),\n                source=self.source,\n                component=self.component,\n                group=self.group,\n                class_id=self.class_id,\n                region_name=self.region_name,\n                click=self.clickable_url,\n                include_image=self.include_image,\n                details=self.custom_details,\n            ).url()\n        )\n        self._start_apprise_client(url)\n
    ","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SendgridEmail","title":"SendgridEmail","text":"

    Bases: AbstractAppriseNotificationBlock

    Enables sending notifications via any SendGrid account. See Apprise Notify_sendgrid docs.

    Examples:

    Load a saved SendgridEmail block and send an email message:

    from prefect.blocks.notifications import SendgridEmail\n\nsendgrid_block = SendgridEmail.load(\"BLOCK_NAME\")\n\nsendgrid_block.notify(\"Hello from Prefect!\")\n

    Source code in prefect/blocks/notifications.py
    class SendgridEmail(AbstractAppriseNotificationBlock):\n    \"\"\"\n    Enables sending notifications via any sendgrid account.\n    See [Apprise Notify_sendgrid docs](https://github.com/caronc/apprise/wiki/Notify_Sendgrid)\n\n    Examples:\n        Load a saved Sendgrid and send a email message:\n        ```python\n        from prefect.blocks.notifications import SendgridEmail\n\n        sendgrid_block = SendgridEmail.load(\"BLOCK_NAME\")\n\n        sendgrid_block.notify(\"Hello from Prefect!\")\n    \"\"\"\n\n    _description = \"Enables sending notifications via Sendgrid email service.\"\n    _block_type_name = \"Sendgrid Email\"\n    _block_type_slug = \"sendgrid-email\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/82bc6ed16ca42a2252a5512c72233a253b8a58eb-250x250.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SendgridEmail\"\n\n    api_key: SecretStr = Field(\n        default=...,\n        title=\"API Key\",\n        description=\"The API Key associated with your sendgrid account.\",\n    )\n\n    sender_email: str = Field(\n        title=\"Sender email id\",\n        description=\"The sender email id.\",\n        examples=[\"test-support@gmail.com\"],\n    )\n\n    to_emails: List[str] = Field(\n        default=...,\n        title=\"Recipient emails\",\n        description=\"Email ids of all recipients.\",\n        examples=['\"recipient1@gmail.com\"'],\n    )\n\n    def block_initialization(self) -> None:\n        from apprise.plugins.NotifySendGrid import NotifySendGrid\n\n        url = SecretStr(\n            NotifySendGrid(\n                apikey=self.api_key.get_secret_value(),\n                from_email=self.sender_email,\n                targets=self.to_emails,\n            ).url()\n        )\n\n        self._start_apprise_client(url)\n
    ","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SlackWebhook","title":"SlackWebhook","text":"

    Bases: AppriseNotificationBlock

    Enables sending notifications via a provided Slack webhook.

    Examples:

    Load a saved Slack webhook and send a message:

    from prefect.blocks.notifications import SlackWebhook\n\nslack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\nslack_webhook_block.notify(\"Hello from Prefect!\")\n

    Source code in prefect/blocks/notifications.py
    class SlackWebhook(AppriseNotificationBlock):\n    \"\"\"\n    Enables sending notifications via a provided Slack webhook.\n\n    Examples:\n        Load a saved Slack webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import SlackWebhook\n\n        slack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\n        slack_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Slack Webhook\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/c1965ecbf8704ee1ea20d77786de9a41ce1087d1-500x500.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SlackWebhook\"\n\n    url: SecretStr = Field(\n        default=...,\n        title=\"Webhook URL\",\n        description=\"Slack incoming webhook URL used to send notifications.\",\n        examples=[\"https://hooks.slack.com/XXX\"],\n    )\n
    ","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.TwilioSMS","title":"TwilioSMS","text":"

    Bases: AbstractAppriseNotificationBlock

    Enables sending notifications via Twilio SMS. Find more on sending Twilio SMS messages in the docs.

    Examples:

    Load a saved TwilioSMS block and send a message:

    from prefect.blocks.notifications import TwilioSMS\ntwilio_webhook_block = TwilioSMS.load(\"BLOCK_NAME\")\ntwilio_webhook_block.notify(\"Hello from Prefect!\")\n

    Source code in prefect/blocks/notifications.py
    class TwilioSMS(AbstractAppriseNotificationBlock):\n    \"\"\"Enables sending notifications via Twilio SMS.\n    Find more on sending Twilio SMS messages in the [docs](https://www.twilio.com/docs/sms).\n\n    Examples:\n        Load a saved `TwilioSMS` block and send a message:\n        ```python\n        from prefect.blocks.notifications import TwilioSMS\n        twilio_webhook_block = TwilioSMS.load(\"BLOCK_NAME\")\n        twilio_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _description = \"Enables sending notifications via Twilio SMS.\"\n    _block_type_name = \"Twilio SMS\"\n    _block_type_slug = \"twilio-sms\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/8bd8777999f82112c09b9c8d57083ac75a4a0d65-250x250.png\"  # noqa\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.TwilioSMS\"\n\n    account_sid: str = Field(\n        default=...,\n        description=(\n            \"The Twilio Account SID - it can be found on the homepage \"\n            \"of the Twilio console.\"\n        ),\n    )\n\n    auth_token: SecretStr = Field(\n        default=...,\n        description=(\n            \"The Twilio Authentication Token - \"\n            \"it can be found on the homepage of the Twilio console.\"\n        ),\n    )\n\n    from_phone_number: str = Field(\n        default=...,\n        description=\"The valid Twilio phone number to send the message from.\",\n        examples=[\"18001234567\"],\n    )\n\n    to_phone_numbers: List[str] = Field(\n        default=...,\n        description=\"A list of valid Twilio phone number(s) to send the message to.\",\n        # not wrapped in brackets because of the way UI displays examples; in code should be [\"18004242424\"]\n        examples=[\"18004242424\"],\n    )\n\n    def block_initialization(self) -> None:\n        from apprise.plugins.NotifyTwilio import NotifyTwilio\n\n        url = SecretStr(\n            NotifyTwilio(\n                account_sid=self.account_sid,\n                auth_token=self.auth_token.get_secret_value(),\n                source=self.from_phone_number,\n                targets=self.to_phone_numbers,\n            ).url()\n        )\n        self._start_apprise_client(url)\n
    ","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/system/","title":"system","text":"","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system","title":"prefect.blocks.system","text":"","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.DateTime","title":"DateTime","text":"

    Bases: Block

    A block that represents a datetime

    Attributes:

    Name Type Description value DateTimeTZ

    An ISO 8601-compatible datetime value.

    Example

    Load a stored datetime value:

    from prefect.blocks.system import DateTime\n\ndata_time_block = DateTime.load(\"BLOCK_NAME\")\n

    Source code in prefect/blocks/system.py
    class DateTime(Block):\n    \"\"\"\n    A block that represents a datetime\n\n    Attributes:\n        value: An ISO 8601-compatible datetime value.\n\n    Example:\n        Load a stored JSON value:\n        ```python\n        from prefect.blocks.system import DateTime\n\n        data_time_block = DateTime.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Date Time\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/8b3da9a6621e92108b8e6a75b82e15374e170ff7-48x48.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.DateTime\"\n\n    value: DateTimeTZ = Field(\n        default=...,\n        description=\"An ISO 8601-compatible datetime value.\",\n    )\n
    ","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.JSON","title":"JSON","text":"

    Bases: Block

    A block that represents JSON

    Attributes:

    Name Type Description value Any

    A JSON-compatible value.

    Example

    Load a stored JSON value:

    from prefect.blocks.system import JSON\n\njson_block = JSON.load(\"BLOCK_NAME\")\n

    Source code in prefect/blocks/system.py
    class JSON(Block):\n    \"\"\"\n    A block that represents JSON\n\n    Attributes:\n        value: A JSON-compatible value.\n\n    Example:\n        Load a stored JSON value:\n        ```python\n        from prefect.blocks.system import JSON\n\n        json_block = JSON.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/4fcef2294b6eeb423b1332d1ece5156bf296ff96-48x48.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.JSON\"\n\n    value: Any = Field(default=..., description=\"A JSON-compatible value.\")\n
    ","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.Secret","title":"Secret","text":"

    Bases: Block

    A block that represents a secret value. The value stored in this block will be obfuscated when this block is logged or shown in the UI.

    Attributes:

value (SecretStr): A string value that should be kept secret.

    Example
    from prefect.blocks.system import Secret\n\nsecret_block = Secret.load(\"BLOCK_NAME\")\n\n# Access the stored secret\nsecret_block.get()\n
    Source code in prefect/blocks/system.py
    class Secret(Block):\n    \"\"\"\n    A block that represents a secret value. The value stored in this block will be obfuscated when\n    this block is logged or shown in the UI.\n\n    Attributes:\n        value: A string value that should be kept secret.\n\n    Example:\n        ```python\n        from prefect.blocks.system import Secret\n\n        secret_block = Secret.load(\"BLOCK_NAME\")\n\n        # Access the stored secret\n        secret_block.get()\n        ```\n    \"\"\"\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/c6f20e556dd16effda9df16551feecfb5822092b-48x48.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.Secret\"\n\n    value: SecretStr = Field(\n        default=..., description=\"A string value that should be kept secret.\"\n    )\n\n    def get(self):\n        return self.value.get_secret_value()\n
    ","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.String","title":"String","text":"

    Bases: Block

    A block that represents a string

    Attributes:

value (str): A string value.

    Example

    Load a stored string value:

    from prefect.blocks.system import String\n\nstring_block = String.load(\"BLOCK_NAME\")\n

    Source code in prefect/blocks/system.py
    class String(Block):\n    \"\"\"\n    A block that represents a string\n\n    Attributes:\n        value: A string value.\n\n    Example:\n        Load a stored string value:\n        ```python\n        from prefect.blocks.system import String\n\n        string_block = String.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/c262ea2c80a2c043564e8763f3370c3db5a6b3e6-48x48.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.String\"\n\n    value: str = Field(default=..., description=\"A string value.\")\n
    ","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/webhook/","title":"webhook","text":"","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/blocks/webhook/#prefect.blocks.webhook","title":"prefect.blocks.webhook","text":"","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/blocks/webhook/#prefect.blocks.webhook.Webhook","title":"Webhook","text":"

    Bases: Block

    Block that enables calling webhooks.
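
A minimal usage sketch, assuming a Webhook block has already been saved under the hypothetical name "my-webhook". `call` is an async method and returns an httpx Response:

```python
import asyncio

from prefect.blocks.webhook import Webhook


async def main():
    # Load a previously saved Webhook block; the name "my-webhook" is an assumption
    webhook_block = await Webhook.load("my-webhook")

    # Send an optional JSON payload using the configured method, URL, and headers
    response = await webhook_block.call(payload={"message": "Hello from Prefect!"})
    print(response.status_code)


asyncio.run(main())
```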

    Source code in prefect/blocks/webhook.py
    class Webhook(Block):\n    \"\"\"\n    Block that enables calling webhooks.\n    \"\"\"\n\n    _block_type_name = \"Webhook\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/c7247cb359eb6cf276734d4b1fbf00fb8930e89e-250x250.png\"  # type: ignore\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/webhook/#prefect.blocks.webhook.Webhook\"\n\n    method: Literal[\"GET\", \"POST\", \"PUT\", \"PATCH\", \"DELETE\"] = Field(\n        default=\"POST\", description=\"The webhook request method. Defaults to `POST`.\"\n    )\n\n    url: SecretStr = Field(\n        default=...,\n        title=\"Webhook URL\",\n        description=\"The webhook URL.\",\n        examples=[\"https://hooks.slack.com/XXX\"],\n    )\n\n    headers: SecretDict = Field(\n        default_factory=lambda: SecretDict(dict()),\n        title=\"Webhook Headers\",\n        description=\"A dictionary of headers to send with the webhook request.\",\n    )\n\n    def block_initialization(self):\n        self._client = AsyncClient(transport=_http_transport)\n\n    async def call(self, payload: Optional[dict] = None) -> Response:\n        \"\"\"\n        Call the webhook.\n\n        Args:\n            payload: an optional payload to send when calling the webhook.\n        \"\"\"\n        async with self._client:\n            return await self._client.request(\n                method=self.method,\n                url=self.url.get_secret_value(),\n                headers=self.headers.get_secret_value(),\n                json=payload,\n            )\n
    ","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/blocks/webhook/#prefect.blocks.webhook.Webhook.call","title":"call async","text":"

    Call the webhook.

    Parameters:

payload (Optional[dict], default None): an optional payload to send when calling the webhook.

Source code in prefect/blocks/webhook.py
    async def call(self, payload: Optional[dict] = None) -> Response:\n    \"\"\"\n    Call the webhook.\n\n    Args:\n        payload: an optional payload to send when calling the webhook.\n    \"\"\"\n    async with self._client:\n        return await self._client.request(\n            method=self.method,\n            url=self.url.get_secret_value(),\n            headers=self.headers.get_secret_value(),\n            json=payload,\n        )\n
    ","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/cli/agent/","title":"agent","text":"","tags":["Python API","agents","CLI"]},{"location":"api-ref/prefect/cli/agent/#prefect.cli.agent","title":"prefect.cli.agent","text":"

    Command line interface for working with agent services

    ","tags":["Python API","agents","CLI"]},{"location":"api-ref/prefect/cli/agent/#prefect.cli.agent.start","title":"start async","text":"

    Start an agent process to poll one or more work queues for flow runs.
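
For example, an agent might poll a single work queue, or a work pool with a concurrency limit (the queue, pool, and limit values are placeholders):

$ prefect agent start -q my-queue

$ prefect agent start -p my-pool --limit 5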

    Source code in prefect/cli/agent.py
    @agent_app.command()\nasync def start(\n    # deprecated main argument\n    work_queue: str = typer.Argument(\n        None,\n        show_default=False,\n        help=\"DEPRECATED: A work queue name or ID\",\n    ),\n    work_queues: List[str] = typer.Option(\n        None,\n        \"-q\",\n        \"--work-queue\",\n        help=\"One or more work queue names for the agent to pull from.\",\n    ),\n    work_queue_prefix: List[str] = typer.Option(\n        None,\n        \"-m\",\n        \"--match\",\n        help=(\n            \"Dynamically matches work queue names with the specified prefix for the\"\n            \" agent to pull from,for example `dev-` will match all work queues with a\"\n            \" name that starts with `dev-`\"\n        ),\n    ),\n    work_pool_name: str = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"A work pool name for the agent to pull from.\",\n    ),\n    hide_welcome: bool = typer.Option(False, \"--hide-welcome\"),\n    api: str = SettingsOption(PREFECT_API_URL),\n    run_once: bool = typer.Option(\n        False, help=\"Run the agent loop once, instead of forever.\"\n    ),\n    prefetch_seconds: int = SettingsOption(PREFECT_AGENT_PREFETCH_SECONDS),\n    # deprecated tags\n    tags: List[str] = typer.Option(\n        None,\n        \"-t\",\n        \"--tag\",\n        help=(\n            \"DEPRECATED: One or more optional tags that will be used to create a work\"\n            \" queue. This option will be removed on 2023-02-23.\"\n        ),\n    ),\n    limit: int = typer.Option(\n        None,\n        \"-l\",\n        \"--limit\",\n        help=\"Maximum number of flow runs to start simultaneously.\",\n    ),\n):\n    \"\"\"\n    Start an agent process to poll one or more work queues for flow runs.\n    \"\"\"\n    work_queues = work_queues or []\n\n    if work_queue is not None:\n        # try to treat the work_queue as a UUID\n        try:\n            async with get_client() as client:\n                q = await client.read_work_queue(UUID(work_queue))\n                work_queue = q.name\n        # otherwise treat it as a string name\n        except (TypeError, ValueError):\n            pass\n        work_queues.append(work_queue)\n        app.console.print(\n            (\n                \"Agents now support multiple work queues. Instead of passing a single\"\n                \" argument, provide work queue names with the `-q` or `--work-queue`\"\n                f\" flag: `prefect agent start -q {work_queue}`\\n\"\n            ),\n            style=\"blue\",\n        )\n    if work_pool_name:\n        is_queues_paused = await _check_work_queues_paused(\n            work_pool_name=work_pool_name,\n            work_queues=work_queues,\n        )\n        if is_queues_paused:\n            queue_scope = (\n                \"The 'default' work queue\"\n                if not work_queues\n                else \"Specified work queue(s)\"\n            )\n            app.console.print(\n                (\n                    f\"{queue_scope} in the work pool {work_pool_name!r} is currently\"\n                    \" paused. 
This agent will not execute any flow runs until the work\"\n                    \" queue(s) are unpaused.\"\n                ),\n                style=\"yellow\",\n            )\n\n    if not work_queues and not tags and not work_queue_prefix and not work_pool_name:\n        exit_with_error(\"No work queues provided!\", style=\"red\")\n    elif bool(work_queues) + bool(tags) + bool(work_queue_prefix) > 1:\n        exit_with_error(\n            \"Only one of `work_queues`, `match`, or `tags` can be provided.\",\n            style=\"red\",\n        )\n    if work_pool_name and tags:\n        exit_with_error(\n            \"`tag` and `pool` options cannot be used together.\", style=\"red\"\n        )\n\n    if tags:\n        work_queue_name = f\"Agent queue {'-'.join(sorted(tags))}\"\n        app.console.print(\n            (\n                \"`tags` are deprecated. For backwards-compatibility with old versions\"\n                \" of Prefect, this agent will create a work queue named\"\n                f\" `{work_queue_name}` that uses legacy tag-based matching. This option\"\n                \" will be removed on 2023-02-23.\"\n            ),\n            style=\"red\",\n        )\n\n        async with get_client() as client:\n            try:\n                work_queue = await client.read_work_queue_by_name(work_queue_name)\n                if work_queue.filter is None:\n                    # ensure the work queue has legacy (deprecated) tag-based behavior\n                    await client.update_work_queue(filter=dict(tags=tags))\n            except ObjectNotFound:\n                # if the work queue doesn't already exist, we create it with tags\n                # to enable legacy (deprecated) tag-matching behavior\n                await client.create_work_queue(name=work_queue_name, tags=tags)\n\n        work_queues = [work_queue_name]\n\n    if not hide_welcome:\n        if api:\n            app.console.print(\n                f\"Starting v{prefect.__version__} agent connected to {api}...\"\n            )\n        else:\n            app.console.print(\n                f\"Starting v{prefect.__version__} agent with ephemeral API...\"\n            )\n\n    agent_process_id = os.getpid()\n    setup_signal_handlers_agent(\n        agent_process_id, \"the Prefect agent\", app.console.print\n    )\n\n    async with PrefectAgent(\n        work_queues=work_queues,\n        work_queue_prefix=work_queue_prefix,\n        work_pool_name=work_pool_name,\n        prefetch_seconds=prefetch_seconds,\n        limit=limit,\n    ) as agent:\n        if not hide_welcome:\n            app.console.print(ascii_name)\n            if work_pool_name:\n                app.console.print(\n                    \"Agent started! Looking for work from \"\n                    f\"work pool '{work_pool_name}'...\"\n                )\n            elif work_queue_prefix:\n                app.console.print(\n                    \"Agent started! Looking for work from \"\n                    f\"queue(s) that start with the prefix: {work_queue_prefix}...\"\n                )\n            else:\n                app.console.print(\n                    \"Agent started! 
Looking for work from \"\n                    f\"queue(s): {', '.join(work_queues)}...\"\n                )\n\n        async with anyio.create_task_group() as tg:\n            tg.start_soon(\n                partial(\n                    critical_service_loop,\n                    agent.get_and_submit_flow_runs,\n                    PREFECT_AGENT_QUERY_INTERVAL.value(),\n                    printer=app.console.print,\n                    run_once=run_once,\n                    jitter_range=0.3,\n                    backoff=4,  # Up to ~1 minute interval during backoff\n                )\n            )\n\n            tg.start_soon(\n                partial(\n                    critical_service_loop,\n                    agent.check_for_cancelled_flow_runs,\n                    PREFECT_AGENT_QUERY_INTERVAL.value() * 2,\n                    printer=app.console.print,\n                    run_once=run_once,\n                    jitter_range=0.3,\n                    backoff=4,\n                )\n            )\n\n    app.console.print(\"Agent stopped!\")\n
    ","tags":["Python API","agents","CLI"]},{"location":"api-ref/prefect/cli/artifact/","title":"artifact","text":"","tags":["Python API","artifacts","CLI"]},{"location":"api-ref/prefect/cli/artifact/#prefect.cli.artifact","title":"prefect.cli.artifact","text":"","tags":["Python API","artifacts","CLI"]},{"location":"api-ref/prefect/cli/artifact/#prefect.cli.artifact.delete","title":"delete async","text":"

    Delete an artifact.

    Parameters:

key (Optional[str]): the key of the artifact to delete. Default: Argument(None, help='The key of the artifact to delete.')

    Examples:

$ prefect artifact delete "my-artifact"

    Source code in prefect/cli/artifact.py
    @artifact_app.command(\"delete\")\nasync def delete(\n    key: Optional[str] = typer.Argument(\n        None, help=\"The key of the artifact to delete.\"\n    ),\n    artifact_id: Optional[str] = typer.Option(\n        None, \"--id\", help=\"The ID of the artifact to delete.\"\n    ),\n):\n    \"\"\"\n    Delete an artifact.\n\n    Arguments:\n        key: the key of the artifact to delete\n\n    Examples:\n        $ prefect artifact delete \"my-artifact\"\n    \"\"\"\n    if key and artifact_id:\n        exit_with_error(\"Please provide either a key or an artifact_id but not both.\")\n\n    async with get_client() as client:\n        if artifact_id is not None:\n            try:\n                confirm_delete = typer.confirm(\n                    (\n                        \"Are you sure you want to delete artifact with id\"\n                        f\" {artifact_id!r}?\"\n                    ),\n                    default=False,\n                )\n                if not confirm_delete:\n                    exit_with_error(\"Deletion aborted.\")\n\n                await client.delete_artifact(artifact_id)\n                exit_with_success(f\"Deleted artifact with id {artifact_id!r}.\")\n            except ObjectNotFound:\n                exit_with_error(f\"Artifact with id {artifact_id!r} not found!\")\n\n        elif key is not None:\n            artifacts = await client.read_artifacts(\n                artifact_filter=ArtifactFilter(key=ArtifactFilterKey(any_=[key])),\n            )\n            if not artifacts:\n                exit_with_error(\n                    f\"Artifact with key {key!r} not found. You can also specify an\"\n                    \" artifact id with the --id flag.\"\n                )\n\n            confirm_delete = typer.confirm(\n                (\n                    f\"Are you sure you want to delete {len(artifacts)} artifact(s) with\"\n                    f\" key {key!r}?\"\n                ),\n                default=False,\n            )\n            if not confirm_delete:\n                exit_with_error(\"Deletion aborted.\")\n\n            for a in artifacts:\n                await client.delete_artifact(a.id)\n\n            exit_with_success(f\"Deleted {len(artifacts)} artifact(s) with key {key!r}.\")\n\n        else:\n            exit_with_error(\"Please provide a key or an artifact_id.\")\n
    ","tags":["Python API","artifacts","CLI"]},{"location":"api-ref/prefect/cli/artifact/#prefect.cli.artifact.inspect","title":"inspect async","text":"
    View details about an artifact.\n\nArguments:\n    key: the key of the artifact to inspect\n\nExamples:\n    $ prefect artifact inspect \"my-artifact\"\n   [\n    {\n        'id': 'ba1d67be-0bd7-452e-8110-247fe5e6d8cc',\n        'created': '2023-03-21T21:40:09.895910+00:00',\n        'updated': '2023-03-21T21:40:09.895910+00:00',\n        'key': 'my-artifact',\n        'type': 'markdown',\n        'description': None,\n        'data': 'my markdown',\n        'metadata_': None,\n        'flow_run_id': '8dc54b6f-6e24-4586-a05c-e98c6490cb98',\n        'task_run_id': None\n    },\n    {\n        'id': '57f235b5-2576-45a5-bd93-c829c2900966',\n        'created': '2023-03-27T23:16:15.536434+00:00',\n        'updated': '2023-03-27T23:16:15.536434+00:00',\n        'key': 'my-artifact',\n        'type': 'markdown',\n        'description': 'my-artifact-description',\n        'data': 'my markdown',\n        'metadata_': None,\n        'flow_run_id': 'ffa91051-f249-48c1-ae0f-4754fcb7eb29',\n        'task_run_id': None\n    }\n

    ]

    Source code in prefect/cli/artifact.py
    @artifact_app.command(\"inspect\")\nasync def inspect(\n    key: str,\n    limit: int = typer.Option(\n        10,\n        \"--limit\",\n        help=\"The maximum number of artifacts to return.\",\n    ),\n):\n    \"\"\"\n        View details about an artifact.\n\n        Arguments:\n            key: the key of the artifact to inspect\n\n        Examples:\n            $ prefect artifact inspect \"my-artifact\"\n           [\n            {\n                'id': 'ba1d67be-0bd7-452e-8110-247fe5e6d8cc',\n                'created': '2023-03-21T21:40:09.895910+00:00',\n                'updated': '2023-03-21T21:40:09.895910+00:00',\n                'key': 'my-artifact',\n                'type': 'markdown',\n                'description': None,\n                'data': 'my markdown',\n                'metadata_': None,\n                'flow_run_id': '8dc54b6f-6e24-4586-a05c-e98c6490cb98',\n                'task_run_id': None\n            },\n            {\n                'id': '57f235b5-2576-45a5-bd93-c829c2900966',\n                'created': '2023-03-27T23:16:15.536434+00:00',\n                'updated': '2023-03-27T23:16:15.536434+00:00',\n                'key': 'my-artifact',\n                'type': 'markdown',\n                'description': 'my-artifact-description',\n                'data': 'my markdown',\n                'metadata_': None,\n                'flow_run_id': 'ffa91051-f249-48c1-ae0f-4754fcb7eb29',\n                'task_run_id': None\n            }\n    ]\n    \"\"\"\n\n    async with get_client() as client:\n        artifacts = await client.read_artifacts(\n            limit=limit,\n            sort=ArtifactSort.UPDATED_DESC,\n            artifact_filter=ArtifactFilter(key=ArtifactFilterKey(any_=[key])),\n        )\n        if not artifacts:\n            exit_with_error(f\"Artifact {key!r} not found.\")\n\n        artifacts = [a.dict(json_compatible=True) for a in artifacts]\n\n        app.console.print(Pretty(artifacts))\n
    ","tags":["Python API","artifacts","CLI"]},{"location":"api-ref/prefect/cli/artifact/#prefect.cli.artifact.list_artifacts","title":"list_artifacts async","text":"

    List artifacts.
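
For example, to list at most 20 artifacts, or every artifact version rather than only the latest (the limit value is illustrative):

$ prefect artifact ls --limit 20

$ prefect artifact ls --all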

    Source code in prefect/cli/artifact.py
    @artifact_app.command(\"ls\")\nasync def list_artifacts(\n    limit: int = typer.Option(\n        100,\n        \"--limit\",\n        help=\"The maximum number of artifacts to return.\",\n    ),\n    all: bool = typer.Option(\n        False,\n        \"--all\",\n        \"-a\",\n        help=\"Whether or not to only return the latest version of each artifact.\",\n    ),\n):\n    \"\"\"\n    List artifacts.\n    \"\"\"\n    table = Table(\n        title=\"Artifacts\",\n        caption=\"List Artifacts using `prefect artifact ls`\",\n        show_header=True,\n    )\n\n    table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Key\", style=\"blue\", no_wrap=True)\n    table.add_column(\"Type\", style=\"blue\", no_wrap=True)\n    table.add_column(\"Updated\", style=\"blue\", no_wrap=True)\n\n    async with get_client() as client:\n        if all:\n            artifacts = await client.read_artifacts(\n                sort=ArtifactSort.KEY_ASC,\n                limit=limit,\n            )\n\n            for artifact in sorted(artifacts, key=lambda x: f\"{x.key}\"):\n                table.add_row(\n                    str(artifact.id),\n                    artifact.key,\n                    artifact.type,\n                    pendulum.instance(artifact.updated).diff_for_humans(),\n                )\n\n        else:\n            artifacts = await client.read_latest_artifacts(\n                sort=ArtifactCollectionSort.KEY_ASC,\n                limit=limit,\n            )\n\n            for artifact in sorted(artifacts, key=lambda x: f\"{x.key}\"):\n                table.add_row(\n                    str(artifact.latest_id),\n                    artifact.key,\n                    artifact.type,\n                    pendulum.instance(artifact.updated).diff_for_humans(),\n                )\n\n        app.console.print(table)\n
    ","tags":["Python API","artifacts","CLI"]},{"location":"api-ref/prefect/cli/block/","title":"block","text":"","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block","title":"prefect.cli.block","text":"

    Command line interface for working with blocks.

    ","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.block_create","title":"block_create async","text":"

    Generate a link to the Prefect UI to create a block.
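
For example, to print a UI creation link for the built-in JSON block type (any slug shown by `prefect block type ls` works here):

$ prefect block create json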

    Source code in prefect/cli/block.py
    @blocks_app.command(\"create\")\nasync def block_create(\n    block_type_slug: str = typer.Argument(\n        ...,\n        help=\"A block type slug. View available types with: prefect block type ls\",\n        show_default=False,\n    ),\n):\n    \"\"\"\n    Generate a link to the Prefect UI to create a block.\n    \"\"\"\n    async with get_client() as client:\n        try:\n            block_type = await client.read_block_type_by_slug(block_type_slug)\n        except ObjectNotFound:\n            app.console.print(f\"[red]Block type {block_type_slug!r} not found![/red]\")\n            block_types = await client.read_block_types()\n            slugs = {block_type.slug for block_type in block_types}\n            app.console.print(f\"Available block types: {', '.join(slugs)}\")\n            raise typer.Exit(1)\n\n        if not PREFECT_UI_URL:\n            exit_with_error(\n                \"Prefect must be configured to use a hosted Prefect server or \"\n                \"Prefect Cloud to display the Prefect UI\"\n            )\n\n        block_link = f\"{PREFECT_UI_URL.value()}/blocks/catalog/{block_type.slug}/create\"\n        app.console.print(\n            f\"Create a {block_type_slug} block: {block_link}\",\n        )\n
    ","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.block_delete","title":"block_delete async","text":"

    Delete a configured block.
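
For example, deleting a block by slug or by id (both values are placeholders):

$ prefect block delete json/my-json-block

$ prefect block delete --id <BLOCK_ID>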

    Source code in prefect/cli/block.py
    @blocks_app.command(\"delete\")\nasync def block_delete(\n    slug: Optional[str] = typer.Argument(\n        None, help=\"A block slug. Formatted as '<BLOCK_TYPE_SLUG>/<BLOCK_NAME>'\"\n    ),\n    block_id: Optional[str] = typer.Option(None, \"--id\", help=\"A block id.\"),\n):\n    \"\"\"\n    Delete a configured block.\n    \"\"\"\n    async with get_client() as client:\n        if slug is None and block_id is not None:\n            try:\n                await client.delete_block_document(block_id)\n                exit_with_success(f\"Deleted Block '{block_id}'.\")\n            except ObjectNotFound:\n                exit_with_error(f\"Deployment {block_id!r} not found!\")\n        elif slug is not None:\n            if \"/\" not in slug:\n                exit_with_error(\n                    f\"{slug!r} is not valid. Slug must contain a '/', e.g. 'json/my-json-block'\"\n                )\n            block_type_slug, block_document_name = slug.split(\"/\")\n            try:\n                block_document = await client.read_block_document_by_name(\n                    block_document_name, block_type_slug, include_secrets=False\n                )\n                await client.delete_block_document(block_document.id)\n                exit_with_success(f\"Deleted Block '{slug}'.\")\n            except ObjectNotFound:\n                exit_with_error(f\"Block {slug!r} not found!\")\n        else:\n            exit_with_error(\"Must provide a block slug or id\")\n
    ","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.block_inspect","title":"block_inspect async","text":"

    Displays details about a configured block.
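
For example, inspecting a block by its slug (a placeholder value):

$ prefect block inspect json/my-json-block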

    Source code in prefect/cli/block.py
    @blocks_app.command(\"inspect\")\nasync def block_inspect(\n    slug: Optional[str] = typer.Argument(\n        None, help=\"A Block slug: <BLOCK_TYPE_SLUG>/<BLOCK_NAME>\"\n    ),\n    block_id: Optional[str] = typer.Option(\n        None, \"--id\", help=\"A Block id to search for if no slug is given\"\n    ),\n):\n    \"\"\"\n    Displays details about a configured block.\n    \"\"\"\n    async with get_client() as client:\n        if slug is None and block_id is not None:\n            try:\n                block_document = await client.read_block_document(\n                    block_id, include_secrets=False\n                )\n            except ObjectNotFound:\n                exit_with_error(f\"Deployment {block_id!r} not found!\")\n        elif slug is not None:\n            if \"/\" not in slug:\n                exit_with_error(\n                    f\"{slug!r} is not valid. Slug must contain a '/', e.g. 'json/my-json-block'\"\n                )\n            block_type_slug, block_document_name = slug.split(\"/\")\n            try:\n                block_document = await client.read_block_document_by_name(\n                    block_document_name, block_type_slug, include_secrets=False\n                )\n            except ObjectNotFound:\n                exit_with_error(f\"Block {slug!r} not found!\")\n        else:\n            exit_with_error(\"Must provide a block slug or id\")\n        app.console.print(display_block(block_document))\n
    ","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.block_ls","title":"block_ls async","text":"

    View all configured blocks.

    Source code in prefect/cli/block.py
    @blocks_app.command(\"ls\")\nasync def block_ls():\n    \"\"\"\n    View all configured blocks.\n    \"\"\"\n    async with get_client() as client:\n        blocks = await client.read_block_documents()\n\n    table = Table(\n        title=\"Blocks\", caption=\"List Block Types using `prefect block type ls`\"\n    )\n    table.add_column(\"ID\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Type\", style=\"blue\", no_wrap=True)\n    table.add_column(\"Name\", style=\"blue\", no_wrap=True)\n    table.add_column(\"Slug\", style=\"blue\", no_wrap=True)\n\n    for block in sorted(blocks, key=lambda x: f\"{x.block_type.slug}/{x.name}\"):\n        table.add_row(\n            str(block.id),\n            block.block_type.name,\n            str(block.name),\n            f\"{block.block_type.slug}/{block.name}\",\n        )\n\n    app.console.print(table)\n
    ","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.blocktype_delete","title":"blocktype_delete async","text":"

    Delete an unprotected Block Type.

    Source code in prefect/cli/block.py
    @blocktypes_app.command(\"delete\")\nasync def blocktype_delete(\n    slug: str = typer.Argument(..., help=\"A Block type slug\"),\n):\n    \"\"\"\n    Delete an unprotected Block Type.\n    \"\"\"\n    async with get_client() as client:\n        try:\n            block_type = await client.read_block_type_by_slug(slug)\n            await client.delete_block_type(block_type.id)\n            exit_with_success(f\"Deleted Block Type '{slug}'.\")\n        except ObjectNotFound:\n            exit_with_error(f\"Block Type {slug!r} not found!\")\n        except ProtectedBlockError:\n            exit_with_error(f\"Block Type {slug!r} is a protected block!\")\n        except PrefectHTTPStatusError:\n            exit_with_error(f\"Cannot delete Block Type {slug!r}!\")\n
    ","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.blocktype_inspect","title":"blocktype_inspect async","text":"

    Display details about a block type.

    Source code in prefect/cli/block.py
    @blocktypes_app.command(\"inspect\")\nasync def blocktype_inspect(\n    slug: str = typer.Argument(..., help=\"A block type slug\"),\n):\n    \"\"\"\n    Display details about a block type.\n    \"\"\"\n    async with get_client() as client:\n        try:\n            block_type = await client.read_block_type_by_slug(slug)\n        except ObjectNotFound:\n            exit_with_error(f\"Block type {slug!r} not found!\")\n\n        app.console.print(display_block_type(block_type))\n
    ","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.list_types","title":"list_types async","text":"

    List all block types.

    Source code in prefect/cli/block.py
    @blocktypes_app.command(\"ls\")\nasync def list_types():\n    \"\"\"\n    List all block types.\n    \"\"\"\n    async with get_client() as client:\n        block_types = await client.read_block_types()\n\n    table = Table(\n        title=\"Block Types\",\n        show_lines=True,\n    )\n\n    table.add_column(\"Block Type Slug\", style=\"italic cyan\", no_wrap=True)\n    table.add_column(\"Description\", style=\"blue\", no_wrap=False, justify=\"left\")\n    table.add_column(\n        \"Generate creation link\", style=\"italic cyan\", no_wrap=False, justify=\"left\"\n    )\n\n    for blocktype in sorted(block_types, key=lambda x: x.name):\n        table.add_row(\n            str(blocktype.slug),\n            (\n                str(blocktype.description.splitlines()[0].partition(\".\")[0])\n                if blocktype.description is not None\n                else \"\"\n            ),\n            f\"prefect block create {blocktype.slug}\",\n        )\n\n    app.console.print(table)\n
    ","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/block/#prefect.cli.block.register","title":"register async","text":"

Register block types within a module or file.

    This makes the blocks available for configuration via the UI. If a block type has already been registered, its registration will be updated to match the block's current definition.

Examples:

Register block types in a Python module: $ prefect block register -m prefect_aws.credentials

Register block types in a .py file: $ prefect block register -f my_blocks.py

    Source code in prefect/cli/block.py
    @blocks_app.command()\nasync def register(\n    module_name: Optional[str] = typer.Option(\n        None,\n        \"--module\",\n        \"-m\",\n        help=\"Python module containing block types to be registered\",\n    ),\n    file_path: Optional[Path] = typer.Option(\n        None,\n        \"--file\",\n        \"-f\",\n        help=\"Path to .py file containing block types to be registered\",\n    ),\n):\n    \"\"\"\n    Register blocks types within a module or file.\n\n    This makes the blocks available for configuration via the UI.\n    If a block type has already been registered, its registration will be updated to\n    match the block's current definition.\n\n    \\b\n    Examples:\n        \\b\n        Register block types in a Python module:\n        $ prefect block register -m prefect_aws.credentials\n        \\b\n        Register block types in a .py file:\n        $ prefect block register -f my_blocks.py\n    \"\"\"\n    # Handles if both options are specified or if neither are specified\n    if not (bool(file_path) ^ bool(module_name)):\n        exit_with_error(\n            \"Please specify either a module or a file containing blocks to be\"\n            \" registered, but not both.\"\n        )\n\n    if module_name:\n        try:\n            imported_module = import_module(name=module_name)\n        except ModuleNotFoundError:\n            exit_with_error(\n                f\"Unable to load {module_name}. Please make sure the module is \"\n                \"installed in your current environment.\"\n            )\n\n    if file_path:\n        if file_path.suffix != \".py\":\n            exit_with_error(\n                f\"{file_path} is not a .py file. Please specify a \"\n                \".py that contains blocks to be registered.\"\n            )\n        try:\n            imported_module = await run_sync_in_worker_thread(\n                load_script_as_module, str(file_path)\n            )\n        except ScriptError as exc:\n            app.console.print(exc)\n            app.console.print(exception_traceback(exc.user_exc))\n            exit_with_error(\n                f\"Unable to load file at {file_path}. Please make sure the file path \"\n                \"is correct and the file contains valid Python.\"\n            )\n\n    registered_blocks = await _register_blocks_in_module(imported_module)\n    number_of_registered_blocks = len(registered_blocks)\n    block_text = \"block\" if 0 < number_of_registered_blocks < 2 else \"blocks\"\n    app.console.print(\n        f\"[green]Successfully registered {number_of_registered_blocks} {block_text}\\n\"\n    )\n    app.console.print(_build_registered_blocks_table(registered_blocks))\n    msg = (\n        \"\\n To configure the newly registered blocks, \"\n        \"go to the Blocks page in the Prefect UI.\\n\"\n    )\n\n    ui_url = PREFECT_UI_URL.value()\n    if ui_url is not None:\n        block_catalog_url = f\"{ui_url}/blocks/catalog\"\n        msg = f\"{msg.rstrip().rstrip('.')}: {block_catalog_url}\\n\"\n\n    app.console.print(msg)\n
    ","tags":["Python API","blocks","CLI"]},{"location":"api-ref/prefect/cli/cloud-webhook/","title":"Cloud webhook","text":"","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook","title":"prefect.cli.cloud.webhook","text":"

    Command line interface for working with webhooks

    ","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.create","title":"create async","text":"

    Create a new Cloud webhook
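
For example, creating a webhook whose template contains the minimum attributes described above (the name, description, and template values are placeholders):

$ prefect cloud webhook create my-webhook -d "A demo webhook" -t '{"event": "your.event.name", "resource": {"prefect.resource.id": "your.resource.id"}}'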

    Source code in prefect/cli/cloud/webhook.py
    @webhook_app.command()\nasync def create(\n    webhook_name: str,\n    description: str = typer.Option(\n        \"\", \"--description\", \"-d\", help=\"Description of the webhook\"\n    ),\n    template: str = typer.Option(\n        None, \"--template\", \"-t\", help=\"Jinja2 template expression\"\n    ),\n):\n    \"\"\"\n    Create a new Cloud webhook\n    \"\"\"\n    if not template:\n        exit_with_error(\n            \"Please provide a Jinja2 template expression in the --template flag \\nwhich\"\n            ' should define (at minimum) the following attributes: \\n{ \"event\":'\n            ' \"your.event.name\", \"resource\": { \"prefect.resource.id\":'\n            ' \"your.resource.id\" } }'\n            \" \\nhttps://docs.prefect.io/latest/cloud/webhooks/#webhook-templates\"\n        )\n\n    confirm_logged_in()\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        response = await client.request(\n            \"POST\",\n            \"/webhooks/\",\n            json={\n                \"name\": webhook_name,\n                \"description\": description,\n                \"template\": template,\n            },\n        )\n        app.console.print(f'Successfully created webhook {response[\"name\"]}')\n
    ","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.delete","title":"delete async","text":"

    Delete an existing Cloud webhook

    Source code in prefect/cli/cloud/webhook.py
    @webhook_app.command()\nasync def delete(webhook_id: UUID):\n    \"\"\"\n    Delete an existing Cloud webhook\n    \"\"\"\n    confirm_logged_in()\n\n    confirm_delete = typer.confirm(\n        \"Are you sure you want to delete it? This cannot be undone.\"\n    )\n\n    if not confirm_delete:\n        return\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        await client.request(\"DELETE\", f\"/webhooks/{webhook_id}\")\n        app.console.print(f\"Successfully deleted webhook {webhook_id}\")\n
    ","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.get","title":"get async","text":"

    Retrieve a webhook by ID.

    Source code in prefect/cli/cloud/webhook.py
    @webhook_app.command()\nasync def get(webhook_id: UUID):\n    \"\"\"\n    Retrieve a webhook by ID.\n    \"\"\"\n    confirm_logged_in()\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        webhook = await client.request(\"GET\", f\"/webhooks/{webhook_id}\")\n        display_table = _render_webhooks_into_table([webhook])\n        app.console.print(display_table)\n
    ","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.ls","title":"ls async","text":"

    Fetch and list all webhooks in your workspace

    Source code in prefect/cli/cloud/webhook.py
    @webhook_app.command()\nasync def ls():\n    \"\"\"\n    Fetch and list all webhooks in your workspace\n    \"\"\"\n    confirm_logged_in()\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        retrieved_webhooks = await client.request(\"POST\", \"/webhooks/filter\")\n        display_table = _render_webhooks_into_table(retrieved_webhooks)\n        app.console.print(display_table)\n
    ","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.rotate","title":"rotate async","text":"

Rotate the URL for an existing Cloud webhook, in case it has been compromised

    Source code in prefect/cli/cloud/webhook.py
    @webhook_app.command()\nasync def rotate(webhook_id: UUID):\n    \"\"\"\n    Rotate url for an existing Cloud webhook, in case it has been compromised\n    \"\"\"\n    confirm_logged_in()\n\n    confirm_rotate = typer.confirm(\n        \"Are you sure you want to rotate? This will invalidate the old URL.\"\n    )\n\n    if not confirm_rotate:\n        return\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        response = await client.request(\"POST\", f\"/webhooks/{webhook_id}/rotate\")\n        app.console.print(f'Successfully rotated webhook URL to {response[\"slug\"]}')\n
    ","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.toggle","title":"toggle async","text":"

    Toggle the enabled status of an existing Cloud webhook

    Source code in prefect/cli/cloud/webhook.py
    @webhook_app.command()\nasync def toggle(\n    webhook_id: UUID,\n):\n    \"\"\"\n    Toggle the enabled status of an existing Cloud webhook\n    \"\"\"\n    confirm_logged_in()\n\n    status_lookup = {True: \"enabled\", False: \"disabled\"}\n\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        response = await client.request(\"GET\", f\"/webhooks/{webhook_id}\")\n        current_status = response[\"enabled\"]\n        new_status = not current_status\n\n        await client.request(\n            \"PATCH\", f\"/webhooks/{webhook_id}\", json={\"enabled\": new_status}\n        )\n        app.console.print(f\"Webhook is now {status_lookup[new_status]}\")\n
    ","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.update","title":"update async","text":"

    Partially update an existing Cloud webhook
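
For example, renaming a webhook or replacing its description (the id and values are placeholders):

$ prefect cloud webhook update <WEBHOOK_ID> -n renamed-webhook

$ prefect cloud webhook update <WEBHOOK_ID> -d "Updated description"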

    Source code in prefect/cli/cloud/webhook.py
    @webhook_app.command()\nasync def update(\n    webhook_id: UUID,\n    webhook_name: str = typer.Option(None, \"--name\", \"-n\", help=\"Webhook name\"),\n    description: str = typer.Option(\n        None, \"--description\", \"-d\", help=\"Description of the webhook\"\n    ),\n    template: str = typer.Option(\n        None, \"--template\", \"-t\", help=\"Jinja2 template expression\"\n    ),\n):\n    \"\"\"\n    Partially update an existing Cloud webhook\n    \"\"\"\n    confirm_logged_in()\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        response = await client.request(\"GET\", f\"/webhooks/{webhook_id}\")\n        update_payload = {\n            \"name\": webhook_name or response[\"name\"],\n            \"description\": description or response[\"description\"],\n            \"template\": template or response[\"template\"],\n        }\n\n        await client.request(\"PUT\", f\"/webhooks/{webhook_id}\", json=update_payload)\n        app.console.print(f\"Successfully updated webhook {webhook_id}\")\n
    ","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud/","title":"cloud","text":"","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud","title":"prefect.cli.cloud","text":"

    Command line interface for interacting with Prefect Cloud

    ","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.login_api","title":"login_api = FastAPI(lifespan=lifespan) module-attribute","text":"

This small API server is used for data transmission for browser-based login.

    ","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.check_key_is_valid_for_login","title":"check_key_is_valid_for_login async","text":"

    Attempt to use a key to see if it is valid

    Source code in prefect/cli/cloud/__init__.py
    async def check_key_is_valid_for_login(key: str):\n    \"\"\"\n    Attempt to use a key to see if it is valid\n    \"\"\"\n    async with get_cloud_client(api_key=key) as client:\n        try:\n            await client.read_workspaces()\n            return True\n        except CloudUnauthorizedError:\n            return False\n
    ","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.login","title":"login async","text":"

    Log in to Prefect Cloud. Creates a new profile configured to use the specified PREFECT_API_KEY. Uses a previously configured profile if it exists.
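
For example, a non-interactive login that supplies both required options (the key and workspace handle are placeholders):

$ prefect cloud login -k pnu_XXXXXXXXXXXXXXXX -w my-account/my-workspace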

    Source code in prefect/cli/cloud/__init__.py
    @cloud_app.command()\nasync def login(\n    key: Optional[str] = typer.Option(\n        None, \"--key\", \"-k\", help=\"API Key to authenticate with Prefect\"\n    ),\n    workspace_handle: Optional[str] = typer.Option(\n        None,\n        \"--workspace\",\n        \"-w\",\n        help=(\n            \"Full handle of workspace, in format '<account_handle>/<workspace_handle>'\"\n        ),\n    ),\n):\n    \"\"\"\n    Log in to Prefect Cloud.\n    Creates a new profile configured to use the specified PREFECT_API_KEY.\n    Uses a previously configured profile if it exists.\n    \"\"\"\n    if not is_interactive() and (not key or not workspace_handle):\n        exit_with_error(\n            \"When not using an interactive terminal, you must supply a `--key` and\"\n            \" `--workspace`.\"\n        )\n\n    profiles = load_profiles()\n    current_profile = get_settings_context().profile\n    env_var_api_key = PREFECT_API_KEY.value()\n    selected_workspace = None\n\n    if env_var_api_key and key and env_var_api_key != key:\n        exit_with_error(\n            \"Cannot log in with a key when a different PREFECT_API_KEY is present as an\"\n            \" environment variable that will override it.\"\n        )\n\n    if env_var_api_key and env_var_api_key == key:\n        is_valid_key = await check_key_is_valid_for_login(key)\n        is_correct_key_format = key.startswith(\"pnu_\") or key.startswith(\"pnb_\")\n        if not is_valid_key:\n            help_message = \"Please ensure your credentials are correct and unexpired.\"\n            if not is_correct_key_format:\n                help_message = \"Your key is not in our expected format.\"\n            exit_with_error(\n                f\"Unable to authenticate with Prefect Cloud. {help_message}\"\n            )\n\n    already_logged_in_profiles = []\n    for name, profile in profiles.items():\n        profile_key = profile.settings.get(PREFECT_API_KEY)\n        if (\n            # If a key is provided, only show profiles with the same key\n            (key and profile_key == key)\n            # Otherwise, show all profiles with a key set\n            or (not key and profile_key is not None)\n            # Check that the key is usable to avoid suggesting unauthenticated profiles\n            and await check_key_is_valid_for_login(profile_key)\n        ):\n            already_logged_in_profiles.append(name)\n\n    current_profile_is_logged_in = current_profile.name in already_logged_in_profiles\n\n    if current_profile_is_logged_in:\n        app.console.print(\"It looks like you're already authenticated on this profile.\")\n        if is_interactive():\n            should_reauth = typer.confirm(\n                \"? Would you like to reauthenticate?\", default=False\n            )\n        else:\n            should_reauth = True\n\n        if not should_reauth:\n            app.console.print(\"Using the existing authentication on this profile.\")\n            key = PREFECT_API_KEY.value()\n\n    elif already_logged_in_profiles:\n        app.console.print(\n            \"It looks like you're already authenticated with another profile.\"\n        )\n        if typer.confirm(\n            \"? 
Would you like to switch profiles?\",\n            default=True,\n        ):\n            profile_name = prompt_select_from_list(\n                app.console,\n                \"Which authenticated profile would you like to switch to?\",\n                already_logged_in_profiles,\n            )\n\n            profiles.set_active(profile_name)\n            save_profiles(profiles)\n            exit_with_success(f\"Switched to authenticated profile {profile_name!r}.\")\n\n    if not key:\n        choice = prompt_select_from_list(\n            app.console,\n            \"How would you like to authenticate?\",\n            [\n                (\"browser\", \"Log in with a web browser\"),\n                (\"key\", \"Paste an API key\"),\n            ],\n        )\n\n        if choice == \"key\":\n            key = typer.prompt(\"Paste your API key\", hide_input=True)\n        elif choice == \"browser\":\n            key = await login_with_browser()\n\n    async with get_cloud_client(api_key=key) as client:\n        try:\n            workspaces = await client.read_workspaces()\n            current_workspace = get_current_workspace(workspaces)\n            prompt_switch_workspace = False\n        except CloudUnauthorizedError:\n            if key.startswith(\"pcu\"):\n                help_message = (\n                    \"It looks like you're using API key from Cloud 1\"\n                    \" (https://cloud.prefect.io). Make sure that you generate API key\"\n                    \" using Cloud 2 (https://app.prefect.cloud)\"\n                )\n            elif not key.startswith(\"pnu_\") and not key.startswith(\"pnb_\"):\n                help_message = (\n                    \"Your key is not in our expected format: 'pnu_' or 'pnb_'.\"\n                )\n            else:\n                help_message = (\n                    \"Please ensure your credentials are correct and unexpired.\"\n                )\n            exit_with_error(\n                f\"Unable to authenticate with Prefect Cloud. {help_message}\"\n            )\n        except httpx.HTTPStatusError as exc:\n            exit_with_error(f\"Error connecting to Prefect Cloud: {exc!r}\")\n\n    if workspace_handle:\n        # Search for the given workspace\n        for workspace in workspaces:\n            if workspace.handle == workspace_handle:\n                selected_workspace = workspace\n                break\n        else:\n            if workspaces:\n                hint = (\n                    \" Available workspaces:\"\n                    f\" {listrepr((w.handle for w in workspaces), ', ')}\"\n                )\n            else:\n                hint = \"\"\n\n            exit_with_error(f\"Workspace {workspace_handle!r} not found.\" + hint)\n    else:\n        # Prompt a switch if the number of workspaces is greater than one\n        prompt_switch_workspace = len(workspaces) > 1\n\n        # Confirm that we want to switch if the current profile is already logged in\n        if (\n            current_profile_is_logged_in and current_workspace is not None\n        ) and prompt_switch_workspace:\n            app.console.print(\n                f\"You are currently using workspace {current_workspace.handle!r}.\"\n            )\n            prompt_switch_workspace = typer.confirm(\n                \"? 
Would you like to switch workspaces?\", default=False\n            )\n    if prompt_switch_workspace:\n        go_back = True\n        while go_back:\n            selected_workspace, go_back = await _prompt_for_account_and_workspace(\n                workspaces\n            )\n        if selected_workspace is None:\n            exit_with_error(\"No workspace selected.\")\n\n    elif not selected_workspace and not workspace_handle:\n        if current_workspace:\n            selected_workspace = current_workspace\n        elif len(workspaces) > 0:\n            selected_workspace = workspaces[0]\n        else:\n            exit_with_error(\n                \"No workspaces found! Create a workspace at\"\n                f\" {PREFECT_CLOUD_UI_URL.value()} and try again.\"\n            )\n\n    update_current_profile(\n        {\n            PREFECT_API_KEY: key,\n            PREFECT_API_URL: selected_workspace.api_url(),\n        }\n    )\n\n    exit_with_success(\n        f\"Authenticated with Prefect Cloud! Using workspace {selected_workspace.handle!r}.\"\n    )\n
    ","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.login_with_browser","title":"login_with_browser async","text":"

    Perform login using the browser.

    On failure, this function will exit the process. On success, it will return an API key.

    Source code in prefect/cli/cloud/__init__.py
    async def login_with_browser() -> str:\n    \"\"\"\n    Perform login using the browser.\n\n    On failure, this function will exit the process.\n    On success, it will return an API key.\n    \"\"\"\n\n    # Set up an event that the login API will toggle on startup\n    ready_event = login_api.extra[\"ready-event\"] = anyio.Event()\n\n    # Set up an event that the login API will set when a response comes from the UI\n    result_event = login_api.extra[\"result-event\"] = anyio.Event()\n\n    timeout_scope = None\n    async with anyio.create_task_group() as tg:\n        # Run a server in the background to get payload from the browser\n        server = await tg.start(serve_login_api, tg.cancel_scope)\n\n        # Wait for the login server to be ready\n        with anyio.fail_after(10):\n            await ready_event.wait()\n\n            # The server may not actually be serving as the lifespan is started first\n            while not server.started:\n                await anyio.sleep(0)\n\n        # Get the port the server is using\n        server_port = server.servers[0].sockets[0].getsockname()[1]\n        callback = urllib.parse.quote(f\"http://localhost:{server_port}\")\n        ui_login_url = (\n            PREFECT_CLOUD_UI_URL.value() + f\"/auth/client?callback={callback}\"\n        )\n\n        # Then open the authorization page in a new browser tab\n        app.console.print(\"Opening browser...\")\n        await run_sync_in_worker_thread(webbrowser.open_new_tab, ui_login_url)\n\n        # Wait for the response from the browser,\n        with anyio.move_on_after(120) as timeout_scope:\n            app.console.print(\"Waiting for response...\")\n            await result_event.wait()\n\n        # Uvicorn installs signal handlers, this is the cleanest way to shutdown the\n        # login API\n        raise_signal(signal.SIGINT)\n\n    result = login_api.extra.get(\"result\")\n    if not result:\n        if timeout_scope and timeout_scope.cancel_called:\n            exit_with_error(\"Timed out while waiting for authorization.\")\n        else:\n            exit_with_error(\"Aborted.\")\n\n    if result.type == \"success\":\n        return result.content.api_key\n    elif result.type == \"failure\":\n        exit_with_error(f\"Failed to log in. {result.content.reason}\")\n
    ","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.logout","title":"logout async","text":"

Log out of the current workspace. Reset PREFECT_API_KEY and PREFECT_API_URL to their defaults.

    Source code in prefect/cli/cloud/__init__.py
    @cloud_app.command()\nasync def logout():\n    \"\"\"\n    Logout the current workspace.\n    Reset PREFECT_API_KEY and PREFECT_API_URL to default.\n    \"\"\"\n    current_profile = prefect.context.get_settings_context().profile\n    if current_profile is None:\n        exit_with_error(\"There is no current profile set.\")\n\n    if current_profile.settings.get(PREFECT_API_KEY) is None:\n        exit_with_error(\"Current profile is not logged into Prefect Cloud.\")\n\n    update_current_profile(\n        {\n            PREFECT_API_URL: None,\n            PREFECT_API_KEY: None,\n        },\n    )\n\n    exit_with_success(\"Logged out from Prefect Cloud.\")\n
    ","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.ls","title":"ls async","text":"

    List available workspaces.

    Source code in prefect/cli/cloud/__init__.py
    @workspace_app.command()\nasync def ls():\n    \"\"\"List available workspaces.\"\"\"\n\n    confirm_logged_in()\n\n    async with get_cloud_client() as client:\n        try:\n            workspaces = await client.read_workspaces()\n        except CloudUnauthorizedError:\n            exit_with_error(\n                \"Unable to authenticate. Please ensure your credentials are correct.\"\n            )\n\n    current_workspace = get_current_workspace(workspaces)\n\n    table = Table(caption=\"* active workspace\")\n    table.add_column(\n        \"[#024dfd]Workspaces:\", justify=\"left\", style=\"#8ea0ae\", no_wrap=True\n    )\n\n    for workspace_handle in sorted(workspace.handle for workspace in workspaces):\n        if workspace_handle == current_workspace.handle:\n            table.add_row(f\"[green]* {workspace_handle}[/green]\")\n        else:\n            table.add_row(f\"  {workspace_handle}\")\n\n    app.console.print(table)\n
    ","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.open","title":"open async","text":"

    Open the Prefect Cloud UI in the browser.

    Source code in prefect/cli/cloud/__init__.py
    @cloud_app.command()\nasync def open():\n    \"\"\"\n    Open the Prefect Cloud UI in the browser.\n    \"\"\"\n    confirm_logged_in()\n\n    current_profile = prefect.context.get_settings_context().profile\n    if current_profile is None:\n        exit_with_error(\n            \"There is no current profile set - set one with `prefect profile create\"\n            \" <name>` and `prefect profile use <name>`.\"\n        )\n\n    current_workspace = get_current_workspace(\n        await prefect.get_cloud_client().read_workspaces()\n    )\n    if current_workspace is None:\n        exit_with_error(\n            \"There is no current workspace set - set one with `prefect cloud workspace\"\n            \" set --workspace <workspace>`.\"\n        )\n\n    ui_url = current_workspace.ui_url()\n\n    await run_sync_in_worker_thread(webbrowser.open_new_tab, ui_url)\n\n    exit_with_success(f\"Opened {current_workspace.handle!r} in browser.\")\n
    ","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.prompt_select_from_list","title":"prompt_select_from_list","text":"

    Given a list of options, display the values to the user in a table and prompt them to select one.

    Parameters:

    options (Union[List[str], List[Tuple[Hashable, str]]], required): A list of options to present to the user. A list of tuples can be passed as key-value pairs. If a value is chosen, the key will be returned.

    Returns:

    str: the selected option

    Source code in prefect/cli/cloud/__init__.py
    def prompt_select_from_list(\n    console, prompt: str, options: Union[List[str], List[Tuple[Hashable, str]]]\n) -> str:\n    \"\"\"\n    Given a list of options, display the values to user in a table and prompt them\n    to select one.\n\n    Args:\n        options: A list of options to present to the user.\n            A list of tuples can be passed as key value pairs. If a value is chosen, the\n            key will be returned.\n\n    Returns:\n        str: the selected option\n    \"\"\"\n\n    current_idx = 0\n    selected_option = None\n\n    def build_table() -> Table:\n        \"\"\"\n        Generate a table of options. The `current_idx` will be highlighted.\n        \"\"\"\n\n        table = Table(box=False, header_style=None, padding=(0, 0))\n        table.add_column(\n            f\"? [bold]{prompt}[/] [bright_blue][Use arrows to move; enter to select]\",\n            justify=\"left\",\n            no_wrap=True,\n        )\n\n        for i, option in enumerate(options):\n            if isinstance(option, tuple):\n                option = option[1]\n\n            if i == current_idx:\n                # Use blue for selected options\n                table.add_row(\"[bold][blue]> \" + option)\n            else:\n                table.add_row(\"  \" + option)\n        return table\n\n    with Live(build_table(), auto_refresh=False, console=console) as live:\n        while selected_option is None:\n            key = readchar.readkey()\n\n            if key == readchar.key.UP:\n                current_idx = current_idx - 1\n                # wrap to bottom if at the top\n                if current_idx < 0:\n                    current_idx = len(options) - 1\n            elif key == readchar.key.DOWN:\n                current_idx = current_idx + 1\n                # wrap to top if at the bottom\n                if current_idx >= len(options):\n                    current_idx = 0\n            elif key == readchar.key.CTRL_C:\n                # gracefully exit with no message\n                exit_with_error(\"\")\n            elif key == readchar.key.ENTER or key == readchar.key.CR:\n                selected_option = options[current_idx]\n                if isinstance(selected_option, tuple):\n                    selected_option = selected_option[0]\n\n            live.update(build_table(), refresh=True)\n\n        return selected_option\n
    ","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.set","title":"set async","text":"

    Set current workspace. Shows a workspace picker if no workspace is specified.
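    For example, assuming a hypothetical handle of 'my-account/my-workspace', the workspace can be set non-interactively; omit the flag to get the picker instead:

    prefect cloud workspace set --workspace "my-account/my-workspace"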

    Source code in prefect/cli/cloud/__init__.py
    @workspace_app.command()\nasync def set(\n    workspace_handle: str = typer.Option(\n        None,\n        \"--workspace\",\n        \"-w\",\n        help=(\n            \"Full handle of workspace, in format '<account_handle>/<workspace_handle>'\"\n        ),\n    ),\n):\n    \"\"\"Set current workspace. Shows a workspace picker if no workspace is specified.\"\"\"\n    confirm_logged_in()\n    async with get_cloud_client() as client:\n        try:\n            workspaces = await client.read_workspaces()\n        except CloudUnauthorizedError:\n            exit_with_error(\n                \"Unable to authenticate. Please ensure your credentials are correct.\"\n            )\n\n        if workspace_handle:\n            # Search for the given workspace\n            for workspace in workspaces:\n                if workspace.handle == workspace_handle:\n                    break\n            else:\n                exit_with_error(f\"Workspace {workspace_handle!r} not found.\")\n        else:\n            if not workspaces:\n                exit_with_error(\"No workspaces found in the selected account.\")\n\n            go_back = True\n            while go_back:\n                workspace, go_back = await _prompt_for_account_and_workspace(workspaces)\n            if workspace is None:\n                exit_with_error(\"No workspace selected.\")\n\n        profile = update_current_profile({PREFECT_API_URL: workspace.api_url()})\n        exit_with_success(\n            f\"Successfully set workspace to {workspace.handle!r} in profile\"\n            f\" {profile.name!r}.\"\n        )\n
    ","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/concurrency_limit/","title":"concurrency_limit","text":"","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit","title":"prefect.cli.concurrency_limit","text":"

    Command line interface for working with concurrency limits.

    ","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.create","title":"create async","text":"

    Create a concurrency limit against a tag.

    This limit controls how many task runs with that tag may simultaneously be in a Running state.
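    As a sketch, assuming your tasks carry a hypothetical 'database' tag, the following caps them at 10 simultaneous Running task runs:

    prefect concurrency-limit create database 10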

    Source code in prefect/cli/concurrency_limit.py
    @concurrency_limit_app.command()\nasync def create(tag: str, concurrency_limit: int):\n    \"\"\"\n    Create a concurrency limit against a tag.\n\n    This limit controls how many task runs with that tag may simultaneously be in a\n    Running state.\n    \"\"\"\n\n    async with get_client() as client:\n        await client.create_concurrency_limit(\n            tag=tag, concurrency_limit=concurrency_limit\n        )\n        await client.read_concurrency_limit_by_tag(tag)\n\n    app.console.print(\n        textwrap.dedent(\n            f\"\"\"\n            Created concurrency limit with properties:\n                tag - {tag!r}\n                concurrency_limit - {concurrency_limit}\n\n            Delete the concurrency limit:\n                prefect concurrency-limit delete {tag!r}\n\n            Inspect the concurrency limit:\n                prefect concurrency-limit inspect {tag!r}\n        \"\"\"\n        )\n    )\n
    ","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.delete","title":"delete async","text":"

    Delete the concurrency limit set on the specified tag.

    Source code in prefect/cli/concurrency_limit.py
    @concurrency_limit_app.command()\nasync def delete(tag: str):\n    \"\"\"\n    Delete the concurrency limit set on the specified tag.\n    \"\"\"\n\n    async with get_client() as client:\n        try:\n            await client.delete_concurrency_limit_by_tag(tag=tag)\n        except ObjectNotFound:\n            exit_with_error(f\"No concurrency limit found for the tag: {tag}\")\n\n    exit_with_success(f\"Deleted concurrency limit set on the tag: {tag}\")\n
    ","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.inspect","title":"inspect async","text":"

    View details about a concurrency limit. active_slots shows a list of TaskRun IDs which are currently using a concurrency slot.
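    For example, to inspect the limit set on the hypothetical 'database' tag:

    prefect concurrency-limit inspect database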

    Source code in prefect/cli/concurrency_limit.py
    @concurrency_limit_app.command()\nasync def inspect(tag: str):\n    \"\"\"\n    View details about a concurrency limit. `active_slots` shows a list of TaskRun IDs\n    which are currently using a concurrency slot.\n    \"\"\"\n\n    async with get_client() as client:\n        try:\n            result = await client.read_concurrency_limit_by_tag(tag=tag)\n        except ObjectNotFound:\n            exit_with_error(f\"No concurrency limit found for the tag: {tag}\")\n\n    trid_table = Table()\n    trid_table.add_column(\"Active Task Run IDs\", style=\"cyan\", no_wrap=True)\n\n    cl_table = Table(title=f\"Concurrency Limit ID: [red]{str(result.id)}\")\n    cl_table.add_column(\"Tag\", style=\"green\", no_wrap=True)\n    cl_table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n    cl_table.add_column(\"Created\", style=\"magenta\", no_wrap=True)\n    cl_table.add_column(\"Updated\", style=\"magenta\", no_wrap=True)\n\n    for trid in sorted(result.active_slots):\n        trid_table.add_row(str(trid))\n\n    cl_table.add_row(\n        str(result.tag),\n        str(result.concurrency_limit),\n        Pretty(pendulum.instance(result.created).diff_for_humans()),\n        Pretty(pendulum.instance(result.updated).diff_for_humans()),\n    )\n\n    group = Group(\n        cl_table,\n        trid_table,\n    )\n    app.console.print(Panel(group, expand=False))\n
    ","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.ls","title":"ls async","text":"

    View all concurrency limits.
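    As a sketch, the command lists 15 limits by default; assuming Typer exposes the function's defaulted parameters as options (its usual behavior), larger sets can be paged through:

    prefect concurrency-limit ls --limit 50 --offset 0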

    Source code in prefect/cli/concurrency_limit.py
    @concurrency_limit_app.command()\nasync def ls(limit: int = 15, offset: int = 0):\n    \"\"\"\n    View all concurrency limits.\n    \"\"\"\n    table = Table(\n        title=\"Concurrency Limits\",\n        caption=\"inspect a concurrency limit to show active task run IDs\",\n    )\n    table.add_column(\"Tag\", style=\"green\", no_wrap=True)\n    table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n    table.add_column(\"Active Task Runs\", style=\"magenta\", no_wrap=True)\n\n    async with get_client() as client:\n        concurrency_limits = await client.read_concurrency_limits(\n            limit=limit, offset=offset\n        )\n\n    for cl in sorted(concurrency_limits, key=lambda c: c.updated, reverse=True):\n        table.add_row(\n            str(cl.tag),\n            str(cl.id),\n            str(cl.concurrency_limit),\n            str(len(cl.active_slots)),\n        )\n\n    app.console.print(table)\n
    ","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.reset","title":"reset async","text":"

    Resets the concurrency limit slots set on the specified tag.

    Source code in prefect/cli/concurrency_limit.py
    @concurrency_limit_app.command()\nasync def reset(tag: str):\n    \"\"\"\n    Resets the concurrency limit slots set on the specified tag.\n    \"\"\"\n\n    async with get_client() as client:\n        try:\n            await client.reset_concurrency_limit_by_tag(tag=tag)\n        except ObjectNotFound:\n            exit_with_error(f\"No concurrency limit found for the tag: {tag}\")\n\n    exit_with_success(f\"Reset concurrency limit set on the tag: {tag}\")\n
    ","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/config/","title":"config","text":"","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config","title":"prefect.cli.config","text":"

    Command line interface for working with profiles.

    ","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.set_","title":"set_","text":"

    Change the value of a setting by updating it in the current profile.
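    For example, to point the current profile at a self-hosted API (the URL is illustrative); multiple VAR=VAL pairs can be passed in one call:

    prefect config set PREFECT_API_URL="http://127.0.0.1:4200/api"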

    Source code in prefect/cli/config.py
    @config_app.command(\"set\")\ndef set_(settings: List[str]):\n    \"\"\"\n    Change the value for a setting by setting the value in the current profile.\n    \"\"\"\n    parsed_settings = {}\n    for item in settings:\n        try:\n            setting, value = item.split(\"=\", maxsplit=1)\n        except ValueError:\n            exit_with_error(\n                f\"Failed to parse argument {item!r}. Use the format 'VAR=VAL'.\"\n            )\n\n        if setting not in prefect.settings.SETTING_VARIABLES:\n            exit_with_error(f\"Unknown setting name {setting!r}.\")\n\n        # Guard against changing settings that tweak config locations\n        if setting in {\"PREFECT_HOME\", \"PREFECT_PROFILES_PATH\"}:\n            exit_with_error(\n                f\"Setting {setting!r} cannot be changed with this command. \"\n                \"Use an environment variable instead.\"\n            )\n\n        parsed_settings[setting] = value\n\n    try:\n        new_profile = prefect.settings.update_current_profile(parsed_settings)\n    except pydantic.ValidationError as exc:\n        for error in exc.errors():\n            setting = error[\"loc\"][0]\n            message = error[\"msg\"]\n            app.console.print(f\"Validation error for setting {setting!r}: {message}\")\n        exit_with_error(\"Invalid setting value.\")\n\n    for setting, value in parsed_settings.items():\n        app.console.print(f\"Set {setting!r} to {value!r}.\")\n        if setting in os.environ:\n            app.console.print(\n                f\"[yellow]{setting} is also set by an environment variable which will \"\n                f\"override your config value. Run `unset {setting}` to clear it.\"\n            )\n\n        if prefect.settings.SETTING_VARIABLES[setting].deprecated:\n            app.console.print(\n                f\"[yellow]{prefect.settings.SETTING_VARIABLES[setting].deprecated_message}.\"\n            )\n\n    exit_with_success(f\"Updated profile {new_profile.name!r}.\")\n
    ","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.unset","title":"unset","text":"

    Restore the default value for a setting.

    Removes the setting from the current profile.
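    For example, to drop the same setting from the current profile and fall back to its default:

    prefect config unset PREFECT_API_URL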

    Source code in prefect/cli/config.py
    @config_app.command()\ndef unset(settings: List[str]):\n    \"\"\"\n    Restore the default value for a setting.\n\n    Removes the setting from the current profile.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    profile = profiles[prefect.context.get_settings_context().profile.name]\n    parsed = set()\n\n    for setting in settings:\n        if setting not in prefect.settings.SETTING_VARIABLES:\n            exit_with_error(f\"Unknown setting name {setting!r}.\")\n        # Cast to settings objects\n        parsed.add(prefect.settings.SETTING_VARIABLES[setting])\n\n    for setting in parsed:\n        if setting not in profile.settings:\n            exit_with_error(f\"{setting.name!r} is not set in profile {profile.name!r}.\")\n\n    profiles.update_profile(\n        name=profile.name, settings={setting: None for setting in parsed}\n    )\n\n    for setting in settings:\n        app.console.print(f\"Unset {setting!r}.\")\n\n        if setting in os.environ:\n            app.console.print(\n                f\"[yellow]{setting!r} is also set by an environment variable. \"\n                f\"Use `unset {setting}` to clear it.\"\n            )\n\n    prefect.settings.save_profiles(profiles)\n    exit_with_success(f\"Updated profile {profile.name!r}.\")\n
    ","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.validate","title":"validate","text":"

    Read and validate the current profile.

    Deprecated settings are automatically converted to their new names unless both the old and new settings are set.

    Source code in prefect/cli/config.py
    @config_app.command()\ndef validate():\n    \"\"\"\n    Read and validate the current profile.\n\n    Deprecated settings will be automatically converted to new names unless both are\n    set.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    profile = profiles[prefect.context.get_settings_context().profile.name]\n    changed = profile.convert_deprecated_renamed_settings()\n    for old, new in changed:\n        app.console.print(f\"Updated {old.name!r} to {new.name!r}.\")\n\n    for setting in profile.settings.keys():\n        if setting.deprecated:\n            app.console.print(f\"Found deprecated setting {setting.name!r}.\")\n\n    profile.validate_settings()\n\n    prefect.settings.save_profiles(profiles)\n    exit_with_success(\"Configuration valid!\")\n
    ","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.view","title":"view","text":"

    Display the current settings.
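    As a quick sketch, the flags below (all defined in the source shown here) toggle display of default values, value sources, and secrets:

    prefect config view --show-defaults --show-sources --show-secrets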

    Source code in prefect/cli/config.py
    @config_app.command()\ndef view(\n    show_defaults: Optional[bool] = typer.Option(\n        False, \"--show-defaults/--hide-defaults\", help=(show_defaults_help)\n    ),\n    show_sources: Optional[bool] = typer.Option(\n        True,\n        \"--show-sources/--hide-sources\",\n        help=(show_sources_help),\n    ),\n    show_secrets: Optional[bool] = typer.Option(\n        False,\n        \"--show-secrets/--hide-secrets\",\n        help=\"Toggle display of secrets setting values.\",\n    ),\n):\n    \"\"\"\n    Display the current settings.\n    \"\"\"\n    context = prefect.context.get_settings_context()\n\n    # Get settings at each level, converted to a flat dictionary for easy comparison\n    default_settings = prefect.settings.get_default_settings()\n    env_settings = prefect.settings.get_settings_from_env()\n    current_profile_settings = context.settings\n\n    # Obfuscate secrets\n    if not show_secrets:\n        default_settings = default_settings.with_obfuscated_secrets()\n        env_settings = env_settings.with_obfuscated_secrets()\n        current_profile_settings = current_profile_settings.with_obfuscated_secrets()\n\n    # Display the profile first\n    app.console.print(f\"PREFECT_PROFILE={context.profile.name!r}\")\n\n    settings_output = []\n\n    # The combination of environment variables and profile settings that are in use\n    profile_overrides = current_profile_settings.dict(exclude_unset=True)\n\n    # Used to see which settings in current_profile_settings came from env vars\n    env_overrides = env_settings.dict(exclude_unset=True)\n\n    for key, value in profile_overrides.items():\n        source = \"env\" if env_overrides.get(key) is not None else \"profile\"\n        source_blurb = f\" (from {source})\" if show_sources else \"\"\n        settings_output.append(f\"{key}='{value}'{source_blurb}\")\n\n    if show_defaults:\n        for key, value in default_settings.dict().items():\n            if key not in profile_overrides:\n                source_blurb = \" (from defaults)\" if show_sources else \"\"\n                settings_output.append(f\"{key}='{value}'{source_blurb}\")\n\n    app.console.print(\"\\n\".join(sorted(settings_output)))\n
    ","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/deploy/","title":"deploy","text":"","tags":["Python API","deploy","deployment","CLI"]},{"location":"api-ref/prefect/cli/deploy/#prefect.cli.deploy","title":"prefect.cli.deploy","text":"

    Module containing implementation for deploying flows.

    ","tags":["Python API","deploy","deployment","CLI"]},{"location":"api-ref/prefect/cli/deploy/#prefect.cli.deploy.deploy","title":"deploy async","text":"

    Deploy a flow from this project by creating a deployment.

    Should be run from a project root directory.
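    As a sketch, assuming a flow function my_flow in ./flows/my_flow.py and an existing work pool named 'my-pool' (both hypothetical), a single deployment with a cron schedule could be created with:

    prefect deploy ./flows/my_flow.py:my_flow -n my-deployment -p my-pool --cron "0 12 * * *"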

    Source code in prefect/cli/deploy.py
    @app.command()\nasync def deploy(\n    entrypoint: str = typer.Argument(\n        None,\n        help=(\n            \"The path to a flow entrypoint within a project, in the form of\"\n            \" `./path/to/file.py:flow_func_name`\"\n        ),\n    ),\n    names: List[str] = typer.Option(\n        None,\n        \"--name\",\n        \"-n\",\n        help=(\n            \"The name to give the deployment. Can be a pattern. Examples:\"\n            \" 'my-deployment', 'my-flow/my-deployment', 'my-deployment-*',\"\n            \" '*-flow-name/deployment*'\"\n        ),\n    ),\n    description: str = typer.Option(\n        None,\n        \"--description\",\n        \"-d\",\n        help=(\n            \"The description to give the deployment. If not provided, the description\"\n            \" will be populated from the flow's description.\"\n        ),\n    ),\n    version: str = typer.Option(\n        None, \"--version\", help=\"A version to give the deployment.\"\n    ),\n    tags: List[str] = typer.Option(\n        None,\n        \"-t\",\n        \"--tag\",\n        help=(\n            \"One or more optional tags to apply to the deployment. Note: tags are used\"\n            \" only for organizational purposes. For delegating work to agents, use the\"\n            \" --work-queue flag.\"\n        ),\n    ),\n    work_pool_name: str = SettingsOption(\n        PREFECT_DEFAULT_WORK_POOL_NAME,\n        \"-p\",\n        \"--pool\",\n        help=\"The work pool that will handle this deployment's runs.\",\n    ),\n    work_queue_name: str = typer.Option(\n        None,\n        \"-q\",\n        \"--work-queue\",\n        help=(\n            \"The work queue that will handle this deployment's runs. \"\n            \"It will be created if it doesn't already exist. Defaults to `None`.\"\n        ),\n    ),\n    variables: List[str] = typer.Option(\n        None,\n        \"-v\",\n        \"--variable\",\n        help=(\"DEPRECATED: Please use --jv/--job-variable for similar functionality \"),\n    ),\n    job_variables: List[str] = typer.Option(\n        None,\n        \"-jv\",\n        \"--job-variable\",\n        help=(\n            \"One or more job variable overrides for the work pool provided in the\"\n            \" format of key=value string or a JSON object\"\n        ),\n    ),\n    cron: List[str] = typer.Option(\n        None,\n        \"--cron\",\n        help=\"A cron string that will be used to set a CronSchedule on the deployment.\",\n    ),\n    interval: List[int] = typer.Option(\n        None,\n        \"--interval\",\n        help=(\n            \"An integer specifying an interval (in seconds) that will be used to set an\"\n            \" IntervalSchedule on the deployment.\"\n        ),\n    ),\n    interval_anchor: Optional[str] = typer.Option(\n        None, \"--anchor-date\", help=\"The anchor date for all interval schedules\"\n    ),\n    rrule: List[str] = typer.Option(\n        None,\n        \"--rrule\",\n        help=\"An RRule that will be used to set an RRuleSchedule on the deployment.\",\n    ),\n    timezone: str = typer.Option(\n        None,\n        \"--timezone\",\n        help=\"Deployment schedule timezone string e.g. 'America/New_York'\",\n    ),\n    trigger: List[str] = typer.Option(\n        None,\n        \"--trigger\",\n        help=(\n            \"Specifies a trigger for the deployment. The value can be a\"\n            \" json string or path to `.yaml`/`.json` file. 
This flag can be used\"\n            \" multiple times.\"\n        ),\n    ),\n    param: List[str] = typer.Option(\n        None,\n        \"--param\",\n        help=(\n            \"An optional parameter override, values are parsed as JSON strings e.g.\"\n            \" --param question=ultimate --param answer=42\"\n        ),\n    ),\n    params: str = typer.Option(\n        None,\n        \"--params\",\n        help=(\n            \"An optional parameter override in a JSON string format e.g.\"\n            ' --params=\\'{\"question\": \"ultimate\", \"answer\": 42}\\''\n        ),\n    ),\n    enforce_parameter_schema: bool = typer.Option(\n        False,\n        \"--enforce-parameter-schema\",\n        help=(\n            \"Whether to enforce the parameter schema on this deployment. If set to\"\n            \" True, any parameters passed to this deployment must match the signature\"\n            \" of the flow.\"\n        ),\n    ),\n    deploy_all: bool = typer.Option(\n        False,\n        \"--all\",\n        help=(\n            \"Deploy all flows in the project. If a flow name or entrypoint is also\"\n            \" provided, this flag will be ignored.\"\n        ),\n    ),\n    prefect_file: Path = typer.Option(\n        Path(\"prefect.yaml\"),\n        \"--prefect-file\",\n        help=\"Specify a custom path to a prefect.yaml file\",\n    ),\n):\n    \"\"\"\n    Deploy a flow from this project by creating a deployment.\n\n    Should be run from a project root directory.\n    \"\"\"\n\n    if variables is not None:\n        app.console.print(\n            generate_deprecation_message(\n                name=\"The `--variable` flag\",\n                start_date=\"Mar 2024\",\n                help=(\n                    \"Please use the `--job-variable foo=bar` argument instead: `prefect\"\n                    \" deploy --job-variable`.\"\n                ),\n            ),\n            style=\"yellow\",\n        )\n\n    if variables is None:\n        variables = list()\n    if job_variables is None:\n        job_variables = list()\n    job_variables.extend(variables)\n\n    options = {\n        \"entrypoint\": entrypoint,\n        \"description\": description,\n        \"version\": version,\n        \"tags\": tags,\n        \"work_pool_name\": work_pool_name,\n        \"work_queue_name\": work_queue_name,\n        \"variables\": job_variables,\n        \"cron\": cron,\n        \"interval\": interval,\n        \"anchor_date\": interval_anchor,\n        \"rrule\": rrule,\n        \"timezone\": timezone,\n        \"triggers\": trigger,\n        \"param\": param,\n        \"params\": params,\n        \"enforce_parameter_schema\": enforce_parameter_schema,\n    }\n    try:\n        deploy_configs, actions = _load_deploy_configs_and_actions(\n            prefect_file=prefect_file,\n        )\n        parsed_names = []\n        for name in names or []:\n            if \"*\" in name:\n                parsed_names.extend(_parse_name_from_pattern(deploy_configs, name))\n            else:\n                parsed_names.append(name)\n        deploy_configs = _pick_deploy_configs(\n            deploy_configs,\n            parsed_names,\n            deploy_all,\n        )\n\n        if len(deploy_configs) > 1:\n            if any(options.values()):\n                app.console.print(\n                    (\n                        \"You have passed options to the deploy command, but you are\"\n                        \" creating or updating multiple deployments. 
These options\"\n                        \" will be ignored.\"\n                    ),\n                    style=\"yellow\",\n                )\n            await _run_multi_deploy(\n                deploy_configs=deploy_configs,\n                actions=actions,\n                deploy_all=deploy_all,\n                prefect_file=prefect_file,\n            )\n        else:\n            # Accommodate passing in -n flow-name/deployment-name as well as -n deployment-name\n            options[\"names\"] = [\n                name.split(\"/\", 1)[-1] if \"/\" in name else name for name in parsed_names\n            ]\n\n            await _run_single_deploy(\n                deploy_config=deploy_configs[0] if deploy_configs else {},\n                actions=actions,\n                options=options,\n                prefect_file=prefect_file,\n            )\n    except ValueError as exc:\n        exit_with_error(str(exc))\n
    ","tags":["Python API","deploy","deployment","CLI"]},{"location":"api-ref/prefect/cli/deploy/#prefect.cli.deploy.init","title":"init async","text":"

    Initialize a new deployment configuration, optionally from a recipe.
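    For example, assuming a recipe named 'docker' is available and image_name is one of its required inputs (both taken as assumptions here), the interactive prompt for that field can be skipped by supplying it up front:

    prefect init --recipe docker --field image_name=my-registry/my-image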

    Source code in prefect/cli/deploy.py
    @app.command()\nasync def init(\n    name: str = None,\n    recipe: str = None,\n    fields: List[str] = typer.Option(\n        None,\n        \"-f\",\n        \"--field\",\n        help=(\n            \"One or more fields to pass to the recipe (e.g., image_name) in the format\"\n            \" of key=value.\"\n        ),\n    ),\n):\n    \"\"\"\n    Initialize a new deployment configuration recipe.\n    \"\"\"\n    inputs = {}\n    fields = fields or []\n    recipe_paths = prefect.__module_path__ / \"deployments\" / \"recipes\"\n\n    for field in fields:\n        key, value = field.split(\"=\")\n        inputs[key] = value\n\n    if not recipe and is_interactive():\n        recipe_paths = prefect.__module_path__ / \"deployments\" / \"recipes\"\n        recipes = []\n\n        for r in recipe_paths.iterdir():\n            if r.is_dir() and (r / \"prefect.yaml\").exists():\n                with open(r / \"prefect.yaml\") as f:\n                    recipe_data = yaml.safe_load(f)\n                    recipe_name = r.name\n                    recipe_description = recipe_data.get(\n                        \"description\", \"(no description available)\"\n                    )\n                    recipe_dict = {\n                        \"name\": recipe_name,\n                        \"description\": recipe_description,\n                    }\n                    recipes.append(recipe_dict)\n\n        selected_recipe = prompt_select_from_table(\n            app.console,\n            \"Would you like to initialize your deployment configuration with a recipe?\",\n            columns=[\n                {\"header\": \"Name\", \"key\": \"name\"},\n                {\"header\": \"Description\", \"key\": \"description\"},\n            ],\n            data=recipes,\n            opt_out_message=\"No, I'll use the default deployment configuration.\",\n            opt_out_response={},\n        )\n        if selected_recipe != {}:\n            recipe = selected_recipe[\"name\"]\n\n    if recipe and (recipe_paths / recipe / \"prefect.yaml\").exists():\n        with open(recipe_paths / recipe / \"prefect.yaml\") as f:\n            recipe_inputs = yaml.safe_load(f).get(\"required_inputs\") or {}\n\n        if recipe_inputs:\n            if set(recipe_inputs.keys()) < set(inputs.keys()):\n                # message to user about extra fields\n                app.console.print(\n                    (\n                        f\"Warning: extra fields provided for {recipe!r} recipe:\"\n                        f\" '{', '.join(set(inputs.keys()) - set(recipe_inputs.keys()))}'\"\n                    ),\n                    style=\"red\",\n                )\n            elif set(recipe_inputs.keys()) > set(inputs.keys()):\n                table = Table(\n                    title=f\"[red]Required inputs for {recipe!r} recipe[/red]\",\n                )\n                table.add_column(\"Field Name\", style=\"green\", no_wrap=True)\n                table.add_column(\n                    \"Description\", justify=\"left\", style=\"white\", no_wrap=False\n                )\n                for field, description in recipe_inputs.items():\n                    if field not in inputs:\n                        table.add_row(field, description)\n\n                app.console.print(table)\n\n                for key, description in recipe_inputs.items():\n                    if key not in inputs:\n                        inputs[key] = typer.prompt(key)\n\n            app.console.print(\"-\" * 15)\n\n    try:\n        files 
= [\n            f\"[green]{fname}[/green]\"\n            for fname in initialize_project(name=name, recipe=recipe, inputs=inputs)\n        ]\n    except ValueError as exc:\n        if \"Unknown recipe\" in str(exc):\n            exit_with_error(\n                f\"Unknown recipe {recipe!r} provided - run [yellow]`prefect init\"\n                \"`[/yellow] to see all available recipes.\"\n            )\n        else:\n            raise\n\n    files = \"\\n\".join(files)\n    empty_msg = (\n        f\"Created project in [green]{Path('.').resolve()}[/green]; no new files\"\n        \" created.\"\n    )\n    file_msg = (\n        f\"Created project in [green]{Path('.').resolve()}[/green] with the following\"\n        f\" new files:\\n{files}\"\n    )\n    app.console.print(file_msg if files else empty_msg)\n
    ","tags":["Python API","deploy","deployment","CLI"]},{"location":"api-ref/prefect/cli/deployment/","title":"deployment","text":"","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment","title":"prefect.cli.deployment","text":"

    Command line interface for working with deployments.

    ","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.apply","title":"apply async","text":"

    Create or update a deployment from a YAML file.

    Source code in prefect/cli/deployment.py
    @deployment_app.command(\n    deprecated=True,\n    deprecated_start_date=\"Mar 2024\",\n    deprecated_name=\"deployment apply\",\n    deprecated_help=\"Use 'prefect deploy' to deploy flows via YAML instead.\",\n)\nasync def apply(\n    paths: List[str] = typer.Argument(\n        ...,\n        help=\"One or more paths to deployment YAML files.\",\n    ),\n    upload: bool = typer.Option(\n        False,\n        \"--upload\",\n        help=(\n            \"A flag that, when provided, uploads this deployment's files to remote\"\n            \" storage.\"\n        ),\n    ),\n    work_queue_concurrency: int = typer.Option(\n        None,\n        \"--limit\",\n        \"-l\",\n        help=(\n            \"Sets the concurrency limit on the work queue that handles this\"\n            \" deployment's runs\"\n        ),\n    ),\n):\n    \"\"\"\n    Create or update a deployment from a YAML file.\n    \"\"\"\n    deployment = None\n    async with get_client() as client:\n        for path in paths:\n            try:\n                deployment = await Deployment.load_from_yaml(path)\n                app.console.print(\n                    f\"Successfully loaded {deployment.name!r}\", style=\"green\"\n                )\n            except Exception as exc:\n                exit_with_error(\n                    f\"'{path!s}' did not conform to deployment spec: {exc!r}\"\n                )\n\n            assert deployment\n\n            await create_work_queue_and_set_concurrency_limit(\n                deployment.work_queue_name,\n                deployment.work_pool_name,\n                work_queue_concurrency,\n            )\n\n            if upload:\n                if (\n                    deployment.storage\n                    and \"put-directory\" in deployment.storage.get_block_capabilities()\n                ):\n                    file_count = await deployment.upload_to_storage()\n                    if file_count:\n                        app.console.print(\n                            (\n                                f\"Successfully uploaded {file_count} files to\"\n                                f\" {deployment.location}\"\n                            ),\n                            style=\"green\",\n                        )\n                else:\n                    app.console.print(\n                        (\n                            f\"Deployment storage {deployment.storage} does not have\"\n                            \" upload capabilities; no files uploaded.\"\n                        ),\n                        style=\"red\",\n                    )\n            await check_work_pool_exists(\n                work_pool_name=deployment.work_pool_name, client=client\n            )\n\n            if client.server_type != ServerType.CLOUD and deployment.triggers:\n                app.console.print(\n                    (\n                        \"Deployment triggers are only supported on \"\n                        f\"Prefect Cloud. 
Triggers defined in {path!r} will be \"\n                        \"ignored.\"\n                    ),\n                    style=\"red\",\n                )\n\n            deployment_id = await deployment.apply()\n            app.console.print(\n                (\n                    f\"Deployment '{deployment.flow_name}/{deployment.name}'\"\n                    f\" successfully created with id '{deployment_id}'.\"\n                ),\n                style=\"green\",\n            )\n\n            if PREFECT_UI_URL:\n                app.console.print(\n                    \"View Deployment in UI:\"\n                    f\" {PREFECT_UI_URL.value()}/deployments/deployment/{deployment_id}\"\n                )\n\n            if deployment.work_pool_name is not None:\n                await _print_deployment_work_pool_instructions(\n                    work_pool_name=deployment.work_pool_name, client=client\n                )\n            elif deployment.work_queue_name is not None:\n                app.console.print(\n                    \"\\nTo execute flow runs from this deployment, start an agent that\"\n                    f\" pulls work from the {deployment.work_queue_name!r} work queue:\"\n                )\n                app.console.print(\n                    f\"$ prefect agent start -q {deployment.work_queue_name!r}\",\n                    style=\"blue\",\n                )\n            else:\n                app.console.print(\n                    (\n                        \"\\nThis deployment does not specify a work queue name, which\"\n                        \" means agents will not be able to pick up its runs. To add a\"\n                        \" work queue, edit the deployment spec and re-run this command,\"\n                        \" or visit the deployment in the UI.\"\n                    ),\n                    style=\"red\",\n                )\n
    ","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.build","title":"build async","text":"

    Generate a deployment YAML from /path/to/file.py:flow_function.
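    This command is deprecated in favor of prefect deploy, but for reference, a minimal invocation (entrypoint, name, and queue are hypothetical) that also registers the deployment looks like:

    prefect deployment build ./flows/my_flow.py:my_flow -n my-deployment -q default --apply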

    Source code in prefect/cli/deployment.py
    @deployment_app.command(\n    deprecated=True,\n    deprecated_start_date=\"Mar 2024\",\n    deprecated_name=\"deployment build\",\n    deprecated_help=\"Use 'prefect deploy' to deploy flows via YAML instead.\",\n)\nasync def build(\n    entrypoint: str = typer.Argument(\n        ...,\n        help=(\n            \"The path to a flow entrypoint, in the form of\"\n            \" `./path/to/file.py:flow_func_name`\"\n        ),\n    ),\n    name: str = typer.Option(\n        None, \"--name\", \"-n\", help=\"The name to give the deployment.\"\n    ),\n    description: str = typer.Option(\n        None,\n        \"--description\",\n        \"-d\",\n        help=(\n            \"The description to give the deployment. If not provided, the description\"\n            \" will be populated from the flow's description.\"\n        ),\n    ),\n    version: str = typer.Option(\n        None, \"--version\", \"-v\", help=\"A version to give the deployment.\"\n    ),\n    tags: List[str] = typer.Option(\n        None,\n        \"-t\",\n        \"--tag\",\n        help=(\n            \"One or more optional tags to apply to the deployment. Note: tags are used\"\n            \" only for organizational purposes. For delegating work to agents, use the\"\n            \" --work-queue flag.\"\n        ),\n    ),\n    work_queue_name: str = typer.Option(\n        None,\n        \"-q\",\n        \"--work-queue\",\n        help=(\n            \"The work queue that will handle this deployment's runs. \"\n            \"It will be created if it doesn't already exist. Defaults to `None`. \"\n            \"Note that if a work queue is not set, work will not be scheduled.\"\n        ),\n    ),\n    work_pool_name: str = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The work pool that will handle this deployment's runs.\",\n    ),\n    work_queue_concurrency: int = typer.Option(\n        None,\n        \"--limit\",\n        \"-l\",\n        help=(\n            \"Sets the concurrency limit on the work queue that handles this\"\n            \" deployment's runs\"\n        ),\n    ),\n    infra_type: str = typer.Option(\n        None,\n        \"--infra\",\n        \"-i\",\n        help=\"The infrastructure type to use, prepopulated with defaults. For example: \"\n        + listrepr(builtin_infrastructure_types, sep=\", \"),\n    ),\n    infra_block: str = typer.Option(\n        None,\n        \"--infra-block\",\n        \"-ib\",\n        help=\"The slug of the infrastructure block to use as a template.\",\n    ),\n    overrides: List[str] = typer.Option(\n        None,\n        \"--override\",\n        help=(\n            \"One or more optional infrastructure overrides provided as a dot delimited\"\n            \" path, e.g., `env.env_key=env_value`\"\n        ),\n    ),\n    storage_block: str = typer.Option(\n        None,\n        \"--storage-block\",\n        \"-sb\",\n        help=(\n            \"The slug of a remote storage block. 
Use the syntax:\"\n            \" 'block_type/block_name', where block_type is one of 'github', 's3',\"\n            \" 'gcs', 'azure', 'smb', or a registered block from a library that\"\n            \" implements the WritableDeploymentStorage interface such as\"\n            \" 'gitlab-repository', 'bitbucket-repository', 's3-bucket',\"\n            \" 'gcs-bucket'\"\n        ),\n    ),\n    skip_upload: bool = typer.Option(\n        False,\n        \"--skip-upload\",\n        help=(\n            \"A flag that, when provided, skips uploading this deployment's files to\"\n            \" remote storage.\"\n        ),\n    ),\n    cron: str = typer.Option(\n        None,\n        \"--cron\",\n        help=\"A cron string that will be used to set a CronSchedule on the deployment.\",\n    ),\n    interval: int = typer.Option(\n        None,\n        \"--interval\",\n        help=(\n            \"An integer specifying an interval (in seconds) that will be used to set an\"\n            \" IntervalSchedule on the deployment.\"\n        ),\n    ),\n    interval_anchor: Optional[str] = typer.Option(\n        None, \"--anchor-date\", help=\"The anchor date for an interval schedule\"\n    ),\n    rrule: str = typer.Option(\n        None,\n        \"--rrule\",\n        help=\"An RRule that will be used to set an RRuleSchedule on the deployment.\",\n    ),\n    timezone: str = typer.Option(\n        None,\n        \"--timezone\",\n        help=\"Deployment schedule timezone string e.g. 'America/New_York'\",\n    ),\n    path: str = typer.Option(\n        None,\n        \"--path\",\n        help=(\n            \"An optional path to specify a subdirectory of remote storage to upload to,\"\n            \" or to point to a subdirectory of a locally stored flow.\"\n        ),\n    ),\n    output: str = typer.Option(\n        None,\n        \"--output\",\n        \"-o\",\n        help=\"An optional filename to write the deployment file to.\",\n    ),\n    _apply: bool = typer.Option(\n        False,\n        \"--apply\",\n        \"-a\",\n        help=(\n            \"An optional flag to automatically register the resulting deployment with\"\n            \" the API.\"\n        ),\n    ),\n    param: List[str] = typer.Option(\n        None,\n        \"--param\",\n        help=(\n            \"An optional parameter override, values are parsed as JSON strings e.g.\"\n            \" --param question=ultimate --param answer=42\"\n        ),\n    ),\n    params: str = typer.Option(\n        None,\n        \"--params\",\n        help=(\n            \"An optional parameter override in a JSON string format e.g.\"\n            ' --params=\\'{\"question\": \"ultimate\", \"answer\": 42}\\''\n        ),\n    ),\n    no_schedule: bool = typer.Option(\n        False,\n        \"--no-schedule\",\n        help=\"An optional flag to disable scheduling for this deployment.\",\n    ),\n):\n    \"\"\"\n    Generate a deployment YAML from /path/to/file.py:flow_function\n    \"\"\"\n    # validate inputs\n    if not name:\n        exit_with_error(\n            \"A name for this deployment must be provided with the '--name' flag.\"\n        )\n\n    if (\n        len([value for value in (cron, rrule, interval) if value is not None])\n        + (1 if no_schedule else 0)\n        > 1\n    ):\n        exit_with_error(\"Only one schedule type can be provided.\")\n\n    if infra_block and infra_type:\n        exit_with_error(\n            \"Only one of `infra` or `infra_block` can be provided, please choose one.\"\n        )\n\n    
output_file = None\n    if output:\n        output_file = Path(output)\n        if output_file.suffix and output_file.suffix != \".yaml\":\n            exit_with_error(\"Output file must be a '.yaml' file.\")\n        else:\n            output_file = output_file.with_suffix(\".yaml\")\n\n    # validate flow\n    try:\n        fpath, obj_name = entrypoint.rsplit(\":\", 1)\n    except ValueError as exc:\n        if str(exc) == \"not enough values to unpack (expected 2, got 1)\":\n            missing_flow_name_msg = (\n                \"Your flow entrypoint must include the name of the function that is\"\n                f\" the entrypoint to your flow.\\nTry {entrypoint}:<flow_name>\"\n            )\n            exit_with_error(missing_flow_name_msg)\n        else:\n            raise exc\n    try:\n        flow = await run_sync_in_worker_thread(load_flow_from_entrypoint, entrypoint)\n    except Exception as exc:\n        exit_with_error(exc)\n    app.console.print(f\"Found flow {flow.name!r}\", style=\"green\")\n    job_variables = {}\n    for override in overrides or []:\n        key, value = override.split(\"=\", 1)\n        job_variables[key] = value\n\n    if infra_block:\n        infrastructure = await Block.load(infra_block)\n    elif infra_type:\n        # Create an instance of the given type\n        infrastructure = Block.get_block_class_from_key(infra_type)()\n    else:\n        # will reset to a default of Process is no infra is present on the\n        # server-side definition of this deployment\n        infrastructure = None\n\n    if interval_anchor and not interval:\n        exit_with_error(\"An anchor date can only be provided with an interval schedule\")\n\n    schedule = None\n    if cron:\n        cron_kwargs = {\"cron\": cron, \"timezone\": timezone}\n        schedule = CronSchedule(\n            **{k: v for k, v in cron_kwargs.items() if v is not None}\n        )\n    elif interval:\n        interval_kwargs = {\n            \"interval\": timedelta(seconds=interval),\n            \"anchor_date\": interval_anchor,\n            \"timezone\": timezone,\n        }\n        schedule = IntervalSchedule(\n            **{k: v for k, v in interval_kwargs.items() if v is not None}\n        )\n    elif rrule:\n        try:\n            schedule = RRuleSchedule(**json.loads(rrule))\n            if timezone:\n                # override timezone if specified via CLI argument\n                schedule.timezone = timezone\n        except json.JSONDecodeError:\n            schedule = RRuleSchedule(rrule=rrule, timezone=timezone)\n\n    # parse storage_block\n    if storage_block:\n        block_type, block_name, *block_path = storage_block.split(\"/\")\n        if block_path and path:\n            exit_with_error(\n                \"Must provide a `path` explicitly or provide one on the storage block\"\n                \" specification, but not both.\"\n            )\n        elif not path:\n            path = \"/\".join(block_path)\n        storage_block = f\"{block_type}/{block_name}\"\n        storage = await Block.load(storage_block)\n    else:\n        storage = None\n\n    if create_default_ignore_file(path=\".\"):\n        app.console.print(\n            (\n                \"Default '.prefectignore' file written to\"\n                f\" {(Path('.') / '.prefectignore').absolute()}\"\n            ),\n            style=\"green\",\n        )\n\n    if param and (params is not None):\n        exit_with_error(\"Can only pass one of `param` or `params` options\")\n\n    parameters = 
dict()\n\n    if param:\n        for p in param or []:\n            k, unparsed_value = p.split(\"=\", 1)\n            try:\n                v = json.loads(unparsed_value)\n                app.console.print(\n                    f\"The parameter value {unparsed_value} is parsed as a JSON string\"\n                )\n            except json.JSONDecodeError:\n                v = unparsed_value\n            parameters[k] = v\n\n    if params is not None:\n        parameters = json.loads(params)\n\n    # set up deployment object\n    entrypoint = (\n        f\"{Path(fpath).absolute().relative_to(Path('.').absolute())}:{obj_name}\"\n    )\n\n    init_kwargs = dict(\n        path=path,\n        entrypoint=entrypoint,\n        version=version,\n        storage=storage,\n        job_variables=job_variables or {},\n    )\n\n    if parameters:\n        init_kwargs[\"parameters\"] = parameters\n\n    if description:\n        init_kwargs[\"description\"] = description\n\n    # if a schedule, tags, work_queue_name, or infrastructure are not provided via CLI,\n    # we let `build_from_flow` load them from the server\n    if schedule or no_schedule:\n        init_kwargs.update(schedule=schedule)\n    if tags:\n        init_kwargs.update(tags=tags)\n    if infrastructure:\n        init_kwargs.update(infrastructure=infrastructure)\n    if work_queue_name:\n        init_kwargs.update(work_queue_name=work_queue_name)\n    if work_pool_name:\n        init_kwargs.update(work_pool_name=work_pool_name)\n\n    deployment_loc = output_file or f\"{obj_name}-deployment.yaml\"\n    deployment = await Deployment.build_from_flow(\n        flow=flow,\n        name=name,\n        output=deployment_loc,\n        skip_upload=skip_upload,\n        apply=False,\n        **init_kwargs,\n    )\n    app.console.print(\n        f\"Deployment YAML created at '{Path(deployment_loc).absolute()!s}'.\",\n        style=\"green\",\n    )\n\n    await create_work_queue_and_set_concurrency_limit(\n        deployment.work_queue_name, deployment.work_pool_name, work_queue_concurrency\n    )\n\n    # we process these separately for informative output\n    if not skip_upload:\n        if (\n            deployment.storage\n            and \"put-directory\" in deployment.storage.get_block_capabilities()\n        ):\n            file_count = await deployment.upload_to_storage()\n            if file_count:\n                app.console.print(\n                    (\n                        f\"Successfully uploaded {file_count} files to\"\n                        f\" {deployment.location}\"\n                    ),\n                    style=\"green\",\n                )\n        else:\n            app.console.print(\n                (\n                    f\"Deployment storage {deployment.storage} does not have upload\"\n                    \" capabilities; no files uploaded.  
Pass --skip-upload to suppress\"\n                    \" this warning.\"\n                ),\n                style=\"green\",\n            )\n\n    if _apply:\n        async with get_client() as client:\n            await check_work_pool_exists(\n                work_pool_name=deployment.work_pool_name, client=client\n            )\n            deployment_id = await deployment.apply()\n            app.console.print(\n                (\n                    f\"Deployment '{deployment.flow_name}/{deployment.name}'\"\n                    f\" successfully created with id '{deployment_id}'.\"\n                ),\n                style=\"green\",\n            )\n            if deployment.work_pool_name is not None:\n                await _print_deployment_work_pool_instructions(\n                    work_pool_name=deployment.work_pool_name, client=client\n                )\n\n            elif deployment.work_queue_name is not None:\n                app.console.print(\n                    \"\\nTo execute flow runs from this deployment, start an agent that\"\n                    f\" pulls work from the {deployment.work_queue_name!r} work queue:\"\n                )\n                app.console.print(\n                    f\"$ prefect agent start -q {deployment.work_queue_name!r}\",\n                    style=\"blue\",\n                )\n            else:\n                app.console.print(\n                    (\n                        \"\\nThis deployment does not specify a work queue name, which\"\n                        \" means agents will not be able to pick up its runs. To add a\"\n                        \" work queue, edit the deployment spec and re-run this command,\"\n                        \" or visit the deployment in the UI.\"\n                    ),\n                    style=\"red\",\n                )\n
    ","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.clear_schedules","title":"clear_schedules async","text":"

    Clear all schedules for a deployment.
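    For example, assuming the schedule commands are mounted under prefect deployment schedule and a deployment named 'my-flow/my-deployment' exists (both assumptions), its schedules can be cleared without a confirmation prompt:

    prefect deployment schedule clear my-flow/my-deployment -y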

    Source code in prefect/cli/deployment.py
    @schedule_app.command(\"clear\")\nasync def clear_schedules(\n    deployment_name: str,\n    assume_yes: Optional[bool] = typer.Option(\n        False,\n        \"--accept-yes\",\n        \"-y\",\n        help=\"Accept the confirmation prompt without prompting\",\n    ),\n):\n    \"\"\"\n    Clear all schedules for a deployment.\n    \"\"\"\n    assert_deployment_name_format(deployment_name)\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(deployment_name)\n        except ObjectNotFound:\n            return exit_with_error(f\"Deployment {deployment_name!r} not found!\")\n\n        await client.read_flow(deployment.flow_id)\n\n        # Get input from user: confirm removal of all schedules\n        if not assume_yes and not typer.confirm(\n            \"Are you sure you want to clear all schedules for this deployment?\",\n        ):\n            exit_with_error(\"Clearing schedules cancelled.\")\n\n        for schedule in deployment.schedules:\n            try:\n                await client.delete_deployment_schedule(deployment.id, schedule.id)\n            except ObjectNotFound:\n                pass\n\n        exit_with_success(f\"Cleared all schedules for deployment {deployment_name}\")\n
    ","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.create_schedule","title":"create_schedule async","text":"

    Create a schedule for a given deployment.
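    As a sketch under the same assumptions, the following replaces any existing schedules with an hourly interval schedule in a specific timezone:

    prefect deployment schedule create my-flow/my-deployment --interval 3600 --timezone "America/New_York" --replace -y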

    Source code in prefect/cli/deployment.py
    @schedule_app.command(\"create\")\nasync def create_schedule(\n    name: str,\n    interval: Optional[float] = typer.Option(\n        None,\n        \"--interval\",\n        help=\"An interval to schedule on, specified in seconds\",\n        min=0.0001,\n    ),\n    interval_anchor: Optional[str] = typer.Option(\n        None,\n        \"--anchor-date\",\n        help=\"The anchor date for an interval schedule\",\n    ),\n    rrule_string: Optional[str] = typer.Option(\n        None, \"--rrule\", help=\"Deployment schedule rrule string\"\n    ),\n    cron_string: Optional[str] = typer.Option(\n        None, \"--cron\", help=\"Deployment schedule cron string\"\n    ),\n    cron_day_or: Optional[str] = typer.Option(\n        None,\n        \"--day_or\",\n        help=\"Control how croniter handles `day` and `day_of_week` entries\",\n    ),\n    timezone: Optional[str] = typer.Option(\n        None,\n        \"--timezone\",\n        help=\"Deployment schedule timezone string e.g. 'America/New_York'\",\n    ),\n    active: Optional[bool] = typer.Option(\n        True,\n        \"--active\",\n        help=\"Whether the schedule is active. Defaults to True.\",\n    ),\n    replace: Optional[bool] = typer.Option(\n        False,\n        \"--replace\",\n        help=\"Replace the deployment's current schedule(s) with this new schedule.\",\n    ),\n    assume_yes: Optional[bool] = typer.Option(\n        False,\n        \"--accept-yes\",\n        \"-y\",\n        help=\"Accept the confirmation prompt without prompting\",\n    ),\n):\n    \"\"\"\n    Create a schedule for a given deployment.\n    \"\"\"\n    assert_deployment_name_format(name)\n\n    if sum(option is not None for option in [interval, rrule_string, cron_string]) != 1:\n        exit_with_error(\n            \"Exactly one of `--interval`, `--rrule`, or `--cron` must be provided.\"\n        )\n\n    schedule = None\n\n    if interval_anchor and not interval:\n        exit_with_error(\"An anchor date can only be provided with an interval schedule\")\n\n    if interval is not None:\n        if interval_anchor:\n            try:\n                pendulum.parse(interval_anchor)\n            except ValueError:\n                return exit_with_error(\"The anchor date must be a valid date string.\")\n        interval_schedule = {\n            \"interval\": interval,\n            \"anchor_date\": interval_anchor,\n            \"timezone\": timezone,\n        }\n        schedule = IntervalSchedule(\n            **{k: v for k, v in interval_schedule.items() if v is not None}\n        )\n\n    if cron_string is not None:\n        cron_schedule = {\n            \"cron\": cron_string,\n            \"day_or\": cron_day_or,\n            \"timezone\": timezone,\n        }\n        schedule = CronSchedule(\n            **{k: v for k, v in cron_schedule.items() if v is not None}\n        )\n\n    if rrule_string is not None:\n        # a timezone in the `rrule_string` gets ignored by the RRuleSchedule constructor\n        if \"TZID\" in rrule_string and not timezone:\n            exit_with_error(\n                \"You can provide a timezone by providing a dict with a `timezone` key\"\n                \" to the --rrule option. E.g. 
{'rrule': 'FREQ=MINUTELY;INTERVAL=5',\"\n                \" 'timezone': 'America/New_York'}.\\nAlternatively, you can provide a\"\n                \" timezone by passing in a --timezone argument.\"\n            )\n        try:\n            schedule = RRuleSchedule(**json.loads(rrule_string))\n            if timezone:\n                # override timezone if specified via CLI argument\n                schedule.timezone = timezone\n        except json.JSONDecodeError:\n            schedule = RRuleSchedule(rrule=rrule_string, timezone=timezone)\n\n    if schedule is None:\n        return exit_with_success(\n            \"Could not create a valid schedule from the provided options.\"\n        )\n\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(name)\n        except ObjectNotFound:\n            return exit_with_error(f\"Deployment {name!r} not found!\")\n\n        num_schedules = len(deployment.schedules)\n        noun = \"schedule\" if num_schedules == 1 else \"schedules\"\n\n        if replace and num_schedules > 0:\n            if not assume_yes and not typer.confirm(\n                f\"Are you sure you want to replace {num_schedules} {noun} for {name}?\"\n            ):\n                return exit_with_error(\"Schedule replacement cancelled.\")\n\n            for existing_schedule in deployment.schedules:\n                try:\n                    await client.delete_deployment_schedule(\n                        deployment.id, existing_schedule.id\n                    )\n                except ObjectNotFound:\n                    pass\n\n        await client.create_deployment_schedules(deployment.id, [(schedule, active)])\n\n        if replace and num_schedules > 0:\n            exit_with_success(f\"Replaced existing deployment {noun} with new schedule!\")\n        else:\n            exit_with_success(\"Created deployment schedule!\")\n
    ","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.delete","title":"delete async","text":"

    Delete a deployment.

    Examples:

    $ prefect deployment delete test_flow/test_deployment
    $ prefect deployment delete --id dfd3e220-a130-4149-9af6-8d487e02fea6

    Source code in prefect/cli/deployment.py
    @deployment_app.command()\nasync def delete(\n    name: Optional[str] = typer.Argument(\n        None, help=\"A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\"\n    ),\n    deployment_id: Optional[str] = typer.Option(\n        None, \"--id\", help=\"A deployment id to search for if no name is given\"\n    ),\n):\n    \"\"\"\n    Delete a deployment.\n\n    \\b\n    Examples:\n        \\b\n        $ prefect deployment delete test_flow/test_deployment\n        $ prefect deployment delete --id dfd3e220-a130-4149-9af6-8d487e02fea6\n    \"\"\"\n    async with get_client() as client:\n        if name is None and deployment_id is not None:\n            try:\n                await client.delete_deployment(deployment_id)\n                exit_with_success(f\"Deleted deployment '{deployment_id}'.\")\n            except ObjectNotFound:\n                exit_with_error(f\"Deployment {deployment_id!r} not found!\")\n        elif name is not None:\n            try:\n                deployment = await client.read_deployment_by_name(name)\n                await client.delete_deployment(deployment.id)\n                exit_with_success(f\"Deleted deployment '{name}'.\")\n            except ObjectNotFound:\n                exit_with_error(f\"Deployment {name!r} not found!\")\n        else:\n            exit_with_error(\"Must provide a deployment name or id\")\n
    ","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.delete_schedule","title":"delete_schedule async","text":"

    Delete a deployment schedule.
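    For example, given a hypothetical deployment name and a schedule ID taken from the schedule listing, the confirmation prompt can be skipped with `-y`:

    $ prefect deployment schedule delete my-flow/my-deployment <schedule-id> -y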

    Source code in prefect/cli/deployment.py
    @schedule_app.command(\"delete\")\nasync def delete_schedule(\n    deployment_name: str,\n    schedule_id: UUID,\n    assume_yes: Optional[bool] = typer.Option(\n        False,\n        \"--accept-yes\",\n        \"-y\",\n        help=\"Accept the confirmation prompt without prompting\",\n    ),\n):\n    \"\"\"\n    Delete a deployment schedule.\n    \"\"\"\n    assert_deployment_name_format(deployment_name)\n\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(deployment_name)\n        except ObjectNotFound:\n            return exit_with_error(f\"Deployment {deployment_name} not found!\")\n\n        try:\n            schedule = [s for s in deployment.schedules if s.id == schedule_id][0]\n        except IndexError:\n            return exit_with_error(\"Deployment schedule not found!\")\n\n        if not assume_yes and not typer.confirm(\n            f\"Are you sure you want to delete this schedule: {schedule.schedule}\",\n        ):\n            return exit_with_error(\"Deletion cancelled.\")\n\n        try:\n            await client.delete_deployment_schedule(deployment.id, schedule_id)\n        except ObjectNotFound:\n            exit_with_error(\"Deployment schedule not found!\")\n\n        exit_with_success(f\"Deleted deployment schedule {schedule_id}\")\n
    ","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.inspect","title":"inspect async","text":"

    View details about a deployment.

    Example:

    $ prefect deployment inspect \"hello-world/my-deployment\"
    {
        'id': '610df9c3-0fb4-4856-b330-67f588d20201',
        'created': '2022-08-01T18:36:25.192102+00:00',
        'updated': '2022-08-01T18:36:25.188166+00:00',
        'name': 'my-deployment',
        'description': None,
        'flow_id': 'b57b0aa2-ef3a-479e-be49-381fb0483b4e',
        'schedules': None,
        'parameters': {'name': 'Marvin'},
        'tags': ['test'],
        'parameter_openapi_schema': {
            'title': 'Parameters',
            'type': 'object',
            'properties': {
                'name': {
                    'title': 'name',
                    'type': 'string'
                }
            },
            'required': ['name']
        },
        'storage_document_id': '63ef008f-1e5d-4e07-a0d4-4535731adb32',
        'infrastructure_document_id': '6702c598-7094-42c8-9785-338d2ec3a028',
        'infrastructure': {
            'type': 'process',
            'env': {},
            'labels': {},
            'name': None,
            'command': ['python', '-m', 'prefect.engine'],
            'stream_output': True
        }
    }

    Source code in prefect/cli/deployment.py
    @deployment_app.command()\nasync def inspect(name: str):\n    \"\"\"\n    View details about a deployment.\n\n    \\b\n    Example:\n        \\b\n        $ prefect deployment inspect \"hello-world/my-deployment\"\n        {\n            'id': '610df9c3-0fb4-4856-b330-67f588d20201',\n            'created': '2022-08-01T18:36:25.192102+00:00',\n            'updated': '2022-08-01T18:36:25.188166+00:00',\n            'name': 'my-deployment',\n            'description': None,\n            'flow_id': 'b57b0aa2-ef3a-479e-be49-381fb0483b4e',\n            'schedules': None,\n            'parameters': {'name': 'Marvin'},\n            'tags': ['test'],\n            'parameter_openapi_schema': {\n                'title': 'Parameters',\n                'type': 'object',\n                'properties': {\n                    'name': {\n                        'title': 'name',\n                        'type': 'string'\n                    }\n                },\n                'required': ['name']\n            },\n            'storage_document_id': '63ef008f-1e5d-4e07-a0d4-4535731adb32',\n            'infrastructure_document_id': '6702c598-7094-42c8-9785-338d2ec3a028',\n            'infrastructure': {\n                'type': 'process',\n                'env': {},\n                'labels': {},\n                'name': None,\n                'command': ['python', '-m', 'prefect.engine'],\n                'stream_output': True\n            }\n        }\n\n    \"\"\"\n    assert_deployment_name_format(name)\n\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(name)\n        except ObjectNotFound:\n            exit_with_error(f\"Deployment {name!r} not found!\")\n\n        deployment_json = deployment.dict(json_compatible=True)\n\n        if deployment.infrastructure_document_id:\n            deployment_json[\"infrastructure\"] = Block._from_block_document(\n                await client.read_block_document(deployment.infrastructure_document_id)\n            ).dict(\n                exclude={\"_block_document_id\", \"_block_document_name\", \"_is_anonymous\"}\n            )\n\n        if client.server_type.supports_automations():\n            deployment_json[\"automations\"] = [\n                a.dict()\n                for a in await client.read_resource_related_automations(\n                    f\"prefect.deployment.{deployment.id}\"\n                )\n            ]\n\n    app.console.print(Pretty(deployment_json))\n
    ","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.list_schedules","title":"list_schedules async","text":"

    View all schedules for a deployment.
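    For example, to list the schedules of a hypothetical deployment:

    $ prefect deployment schedule ls my-flow/my-deployment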

    Source code in prefect/cli/deployment.py
    @schedule_app.command(\"ls\")\nasync def list_schedules(deployment_name: str):\n    \"\"\"\n    View all schedules for a deployment.\n    \"\"\"\n    assert_deployment_name_format(deployment_name)\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(deployment_name)\n        except ObjectNotFound:\n            return exit_with_error(f\"Deployment {deployment_name!r} not found!\")\n\n    def sort_by_created_key(schedule: DeploymentSchedule):  # noqa\n        return pendulum.now(\"utc\") - schedule.created\n\n    def schedule_details(schedule: DeploymentSchedule):\n        if isinstance(schedule.schedule, IntervalSchedule):\n            return f\"interval: {schedule.schedule.interval}s\"\n        elif isinstance(schedule.schedule, CronSchedule):\n            return f\"cron: {schedule.schedule.cron}\"\n        elif isinstance(schedule.schedule, RRuleSchedule):\n            return f\"rrule: {schedule.schedule.rrule}\"\n        else:\n            return \"unknown\"\n\n    table = Table(\n        title=\"Deployment Schedules\",\n    )\n    table.add_column(\"ID\", style=\"blue\", no_wrap=True)\n    table.add_column(\"Schedule\", style=\"cyan\", no_wrap=False)\n    table.add_column(\"Active\", style=\"purple\", no_wrap=True)\n\n    for schedule in sorted(deployment.schedules, key=sort_by_created_key):\n        table.add_row(\n            str(schedule.id),\n            schedule_details(schedule),\n            str(schedule.active),\n        )\n\n    app.console.print(table)\n
    ","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.ls","title":"ls async","text":"

    View all deployments or deployments for specific flows.

    Source code in prefect/cli/deployment.py
    @deployment_app.command()\nasync def ls(flow_name: List[str] = None, by_created: bool = False):\n    \"\"\"\n    View all deployments or deployments for specific flows.\n    \"\"\"\n    async with get_client() as client:\n        deployments = await client.read_deployments(\n            flow_filter=FlowFilter(name={\"any_\": flow_name}) if flow_name else None\n        )\n        flows = {\n            flow.id: flow\n            for flow in await client.read_flows(\n                flow_filter=FlowFilter(id={\"any_\": [d.flow_id for d in deployments]})\n            )\n        }\n\n    def sort_by_name_keys(d):\n        return flows[d.flow_id].name, d.name\n\n    def sort_by_created_key(d):\n        return pendulum.now(\"utc\") - d.created\n\n    table = Table(\n        title=\"Deployments\",\n    )\n    table.add_column(\"Name\", style=\"blue\", no_wrap=True)\n    table.add_column(\"ID\", style=\"cyan\", no_wrap=True)\n\n    for deployment in sorted(\n        deployments, key=sort_by_created_key if by_created else sort_by_name_keys\n    ):\n        table.add_row(\n            f\"{flows[deployment.flow_id].name}/[bold]{deployment.name}[/]\",\n            str(deployment.id),\n        )\n\n    app.console.print(table)\n
    ","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.pause_schedule","title":"pause_schedule async","text":"

    Pause a deployment schedule.

    Source code in prefect/cli/deployment.py
    @schedule_app.command(\"pause\")\nasync def pause_schedule(deployment_name: str, schedule_id: UUID):\n    \"\"\"\n    Pause a deployment schedule.\n    \"\"\"\n    assert_deployment_name_format(deployment_name)\n\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(deployment_name)\n        except ObjectNotFound:\n            return exit_with_error(f\"Deployment {deployment_name!r} not found!\")\n\n        try:\n            schedule = [s for s in deployment.schedules if s.id == schedule_id][0]\n        except IndexError:\n            return exit_with_error(\"Deployment schedule not found!\")\n\n        if not schedule.active:\n            return exit_with_error(\n                f\"Deployment schedule {schedule_id} is already inactive\"\n            )\n\n        await client.update_deployment_schedule(\n            deployment.id, schedule_id, active=False\n        )\n        exit_with_success(\n            f\"Paused schedule {schedule.schedule} for deployment {deployment_name}\"\n        )\n
    ","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.resume_schedule","title":"resume_schedule async","text":"

    Resume a deployment schedule.

    Source code in prefect/cli/deployment.py
    @schedule_app.command(\"resume\")\nasync def resume_schedule(deployment_name: str, schedule_id: UUID):\n    \"\"\"\n    Resume a deployment schedule.\n    \"\"\"\n    assert_deployment_name_format(deployment_name)\n\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(deployment_name)\n        except ObjectNotFound:\n            return exit_with_error(f\"Deployment {deployment_name!r} not found!\")\n\n        try:\n            schedule = [s for s in deployment.schedules if s.id == schedule_id][0]\n        except IndexError:\n            return exit_with_error(\"Deployment schedule not found!\")\n\n        if schedule.active:\n            return exit_with_error(\n                f\"Deployment schedule {schedule_id} is already active\"\n            )\n\n        await client.update_deployment_schedule(deployment.id, schedule_id, active=True)\n        exit_with_success(\n            f\"Resumed schedule {schedule.schedule} for deployment {deployment_name}\"\n        )\n
    ","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.run","title":"run async","text":"

    Create a flow run for the given flow and deployment.

    The flow run will be scheduled to run immediately unless --start-in or --start-at is specified. The flow run will not execute until a worker starts. To watch the flow run until it reaches a terminal state, use the --watch flag.
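    For example, assuming a hypothetical deployment 'my-flow/my-deployment' with a 'name' parameter, a run could be scheduled ten minutes out and watched until it reaches a terminal state (all flags come from the options shown in the source below):

    $ prefect deployment run 'my-flow/my-deployment' --param name=Marvin --start-in '10 minutes' --watch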

    Source code in prefect/cli/deployment.py
    @deployment_app.command()\nasync def run(\n    name: Optional[str] = typer.Argument(\n        None, help=\"A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\"\n    ),\n    deployment_id: Optional[str] = typer.Option(\n        None,\n        \"--id\",\n        help=(\"A deployment id to search for if no name is given\"),\n    ),\n    job_variables: List[str] = typer.Option(\n        None,\n        \"-jv\",\n        \"--job-variable\",\n        help=(\n            \"A key, value pair (key=value) specifying a flow run job variable. The value will\"\n            \" be interpreted as JSON. May be passed multiple times to specify multiple\"\n            \" job variable values.\"\n        ),\n    ),\n    params: List[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--param\",\n        help=(\n            \"A key, value pair (key=value) specifying a flow parameter. The value will\"\n            \" be interpreted as JSON. May be passed multiple times to specify multiple\"\n            \" parameter values.\"\n        ),\n    ),\n    multiparams: Optional[str] = typer.Option(\n        None,\n        \"--params\",\n        help=(\n            \"A mapping of parameters to values. To use a stdin, pass '-'. Any \"\n            \"parameters passed with `--param` will take precedence over these values.\"\n        ),\n    ),\n    start_in: Optional[str] = typer.Option(\n        None,\n        \"--start-in\",\n        help=(\n            \"A human-readable string specifying a time interval to wait before starting\"\n            \" the flow run. E.g. 'in 5 minutes', 'in 1 hour', 'in 2 days'.\"\n        ),\n    ),\n    start_at: Optional[str] = typer.Option(\n        None,\n        \"--start-at\",\n        help=(\n            \"A human-readable string specifying a time to start the flow run. 
E.g.\"\n            \" 'at 5:30pm', 'at 2022-08-01 17:30', 'at 2022-08-01 17:30:00'.\"\n        ),\n    ),\n    tags: List[str] = typer.Option(\n        None,\n        \"--tag\",\n        help=(\"Tag(s) to be applied to flow run.\"),\n    ),\n    watch: bool = typer.Option(\n        False,\n        \"--watch\",\n        help=(\"Whether to poll the flow run until a terminal state is reached.\"),\n    ),\n    watch_interval: Optional[int] = typer.Option(\n        None,\n        \"--watch-interval\",\n        help=(\"How often to poll the flow run for state changes (in seconds).\"),\n    ),\n    watch_timeout: Optional[int] = typer.Option(\n        None,\n        \"--watch-timeout\",\n        help=(\"Timeout for --watch.\"),\n    ),\n):\n    \"\"\"\n    Create a flow run for the given flow and deployment.\n\n    The flow run will be scheduled to run immediately unless `--start-in` or `--start-at` is specified.\n    The flow run will not execute until a worker starts.\n    To watch the flow run until it reaches a terminal state, use the `--watch` flag.\n    \"\"\"\n    import dateparser\n\n    now = pendulum.now(\"UTC\")\n\n    multi_params = {}\n    if multiparams:\n        if multiparams == \"-\":\n            multiparams = sys.stdin.read()\n            if not multiparams:\n                exit_with_error(\"No data passed to stdin\")\n\n        try:\n            multi_params = json.loads(multiparams)\n        except ValueError as exc:\n            exit_with_error(f\"Failed to parse JSON: {exc}\")\n        if watch_interval and not watch:\n            exit_with_error(\n                \"`--watch-interval` can only be used with `--watch`.\",\n            )\n    cli_params = _load_json_key_values(params or [], \"parameter\")\n    conflicting_keys = set(cli_params.keys()).intersection(multi_params.keys())\n    if conflicting_keys:\n        app.console.print(\n            \"The following parameters were specified by `--param` and `--params`, the \"\n            f\"`--param` value will be used: {conflicting_keys}\"\n        )\n    parameters = {**multi_params, **cli_params}\n\n    job_vars = _load_json_key_values(job_variables or [], \"job variable\")\n    if start_in and start_at:\n        exit_with_error(\n            \"Only one of `--start-in` or `--start-at` can be set, not both.\"\n        )\n\n    elif start_in is None and start_at is None:\n        scheduled_start_time = now\n        human_dt_diff = \" (now)\"\n    else:\n        if start_in:\n            start_time_raw = \"in \" + start_in\n        else:\n            start_time_raw = \"at \" + start_at\n        with warnings.catch_warnings():\n            # PyTZ throws a warning based on dateparser usage of the library\n            # See https://github.com/scrapinghub/dateparser/issues/1089\n            warnings.filterwarnings(\"ignore\", module=\"dateparser\")\n\n            try:\n                start_time_parsed = dateparser.parse(\n                    start_time_raw,\n                    settings={\n                        \"TO_TIMEZONE\": \"UTC\",\n                        \"RETURN_AS_TIMEZONE_AWARE\": False,\n                        \"PREFER_DATES_FROM\": \"future\",\n                        \"RELATIVE_BASE\": datetime.fromtimestamp(\n                            now.timestamp(), tz=timezone.utc\n                        ),\n                    },\n                )\n\n            except Exception as exc:\n                exit_with_error(f\"Failed to parse '{start_time_raw!r}': {exc!s}\")\n\n        if start_time_parsed is None:\n       
     exit_with_error(f\"Unable to parse scheduled start time {start_time_raw!r}.\")\n\n        scheduled_start_time = pendulum.instance(start_time_parsed)\n        human_dt_diff = (\n            \" (\" + pendulum.format_diff(scheduled_start_time.diff(now)) + \")\"\n        )\n\n    async with get_client() as client:\n        deployment = await get_deployment(client, name, deployment_id)\n        flow = await client.read_flow(deployment.flow_id)\n\n        deployment_parameters = deployment.parameter_openapi_schema[\"properties\"].keys()\n        unknown_keys = set(parameters.keys()).difference(deployment_parameters)\n        if unknown_keys:\n            available_parameters = (\n                (\n                    \"The following parameters are available on the deployment: \"\n                    + listrepr(deployment_parameters, sep=\", \")\n                )\n                if deployment_parameters\n                else \"This deployment does not accept parameters.\"\n            )\n\n            exit_with_error(\n                \"The following parameters were specified but not found on the \"\n                f\"deployment: {listrepr(unknown_keys, sep=', ')}\"\n                f\"\\n{available_parameters}\"\n            )\n\n        app.console.print(\n            f\"Creating flow run for deployment '{flow.name}/{deployment.name}'...\",\n        )\n\n        try:\n            flow_run = await client.create_flow_run_from_deployment(\n                deployment.id,\n                parameters=parameters,\n                state=Scheduled(scheduled_time=scheduled_start_time),\n                tags=tags,\n                job_variables=job_vars,\n            )\n        except PrefectHTTPStatusError as exc:\n            detail = exc.response.json().get(\"detail\")\n            if detail:\n                exit_with_error(\n                    exc.response.json()[\"detail\"],\n                )\n            else:\n                raise\n\n    if PREFECT_UI_URL:\n        run_url = f\"{PREFECT_UI_URL.value()}/flow-runs/flow-run/{flow_run.id}\"\n    else:\n        run_url = \"<no dashboard available>\"\n\n    datetime_local_tz = scheduled_start_time.in_tz(pendulum.tz.local_timezone())\n    scheduled_display = (\n        datetime_local_tz.to_datetime_string()\n        + \" \"\n        + datetime_local_tz.tzname()\n        + human_dt_diff\n    )\n\n    app.console.print(f\"Created flow run {flow_run.name!r}.\")\n    app.console.print(\n        textwrap.dedent(\n            f\"\"\"\n        \u2514\u2500\u2500 UUID: {flow_run.id}\n        \u2514\u2500\u2500 Parameters: {flow_run.parameters}\n        \u2514\u2500\u2500 Job Variables: {flow_run.job_variables}\n        \u2514\u2500\u2500 Scheduled start time: {scheduled_display}\n        \u2514\u2500\u2500 URL: {run_url}\n        \"\"\"\n        ).strip(),\n        soft_wrap=True,\n    )\n    if watch:\n        watch_interval = 5 if watch_interval is None else watch_interval\n        app.console.print(f\"Watching flow run {flow_run.name!r}...\")\n        finished_flow_run = await wait_for_flow_run(\n            flow_run.id,\n            timeout=watch_timeout,\n            poll_interval=watch_interval,\n            log_states=True,\n        )\n        finished_flow_run_state = finished_flow_run.state\n        if finished_flow_run_state.is_completed():\n            exit_with_success(\n                f\"Flow run finished successfully in {finished_flow_run_state.name!r}.\"\n            )\n        exit_with_error(\n            f\"Flow run finished in 
state {finished_flow_run_state.name!r}.\",\n            code=1,\n        )\n
    ","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.str_presenter","title":"str_presenter","text":"

    Configures YAML for dumping multiline strings. Ref: https://stackoverflow.com/questions/8640959/how-can-i-control-what-scalar-form-pyyaml-uses-for-my-data

    Source code in prefect/cli/deployment.py
    def str_presenter(dumper, data):\n    \"\"\"\n    configures yaml for dumping multiline strings\n    Ref: https://stackoverflow.com/questions/8640959/how-can-i-control-what-scalar-form-pyyaml-uses-for-my-data\n    \"\"\"\n    if len(data.splitlines()) > 1:  # check for multiline string\n        return dumper.represent_scalar(\"tag:yaml.org,2002:str\", data, style=\"|\")\n    return dumper.represent_scalar(\"tag:yaml.org,2002:str\", data)\n
    ","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/dev/","title":"dev","text":"","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev","title":"prefect.cli.dev","text":"

    Command line interface for working with Prefect Server

    ","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.agent","title":"agent async","text":"

    Starts a hot-reloading development agent process.
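    For example, pointing the hot-reloading agent at a hypothetical work queue:

    $ prefect dev agent -q my-queue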

    Source code in prefect/cli/dev.py
    @dev_app.command()\nasync def agent(\n    api_url: str = SettingsOption(PREFECT_API_URL),\n    work_queues: List[str] = typer.Option(\n        [\"default\"],\n        \"-q\",\n        \"--work-queue\",\n        help=\"One or more work queue names for the agent to pull from.\",\n    ),\n):\n    \"\"\"\n    Starts a hot-reloading development agent process.\n    \"\"\"\n    # Delayed import since this is only a 'dev' dependency\n    import watchfiles\n\n    app.console.print(\"Creating hot-reloading agent process...\")\n\n    try:\n        await watchfiles.arun_process(\n            prefect.__module_path__,\n            target=agent_process_entrypoint,\n            kwargs=dict(api=api_url, work_queues=work_queues),\n        )\n    except RuntimeError as err:\n        # a bug in watchfiles causes an 'Already borrowed' error from Rust when\n        # exiting: https://github.com/samuelcolvin/watchfiles/issues/200\n        if str(err).strip() != \"Already borrowed\":\n            raise\n
    ","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.agent_process_entrypoint","title":"agent_process_entrypoint","text":"

    An entrypoint for starting an agent in a subprocess. Adds a Rich console to the Typer app, processes Typer default parameters, then starts an agent. All kwargs are forwarded to prefect.cli.agent.start.

    Source code in prefect/cli/dev.py
    def agent_process_entrypoint(**kwargs):\n    \"\"\"\n    An entrypoint for starting an agent in a subprocess. Adds a Rich console\n    to the Typer app, processes Typer default parameters, then starts an agent.\n    All kwargs are forwarded to  `prefect.cli.agent.start`.\n    \"\"\"\n    import inspect\n\n    # import locally so only the `dev` command breaks if Typer internals change\n    from typer.models import ParameterInfo\n\n    # Typer does not process default parameters when calling a function\n    # directly, so we must set `start_agent`'s default parameters manually.\n    # get the signature of the `start_agent` function\n    start_agent_signature = inspect.signature(start_agent)\n\n    # for any arguments not present in kwargs, use the default value.\n    for name, param in start_agent_signature.parameters.items():\n        if name not in kwargs:\n            # All `param.default` values for start_agent are Typer params that store the\n            # actual default value in their `default` attribute and we must call\n            # `param.default.default` to get the actual default value. We should also\n            # ensure we extract the right default if non-Typer defaults are added\n            # to `start_agent` in the future.\n            if isinstance(param.default, ParameterInfo):\n                default = param.default.default\n            else:\n                default = param.default\n\n            # Some defaults are Prefect `SettingsOption.value` methods\n            # that must be called to get the actual value.\n            kwargs[name] = default() if callable(default) else default\n\n    try:\n        start_agent(**kwargs)  # type: ignore\n    except KeyboardInterrupt:\n        # expected when watchfiles kills the process\n        pass\n
    ","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.api","title":"api async","text":"

    Starts a hot-reloading development API.

    Source code in prefect/cli/dev.py
    @dev_app.command()\nasync def api(\n    host: str = SettingsOption(PREFECT_SERVER_API_HOST),\n    port: int = SettingsOption(PREFECT_SERVER_API_PORT),\n    log_level: str = \"DEBUG\",\n    services: bool = True,\n):\n    \"\"\"\n    Starts a hot-reloading development API.\n    \"\"\"\n    import watchfiles\n\n    server_env = os.environ.copy()\n    server_env[\"PREFECT_API_SERVICES_RUN_IN_APP\"] = str(services)\n    server_env[\"PREFECT_API_SERVICES_UI\"] = \"False\"\n    server_env[\"PREFECT_UI_API_URL\"] = f\"http://{host}:{port}/api\"\n\n    command = [\n        sys.executable,\n        \"-m\",\n        \"uvicorn\",\n        \"--factory\",\n        \"prefect.server.api.server:create_app\",\n        \"--host\",\n        str(host),\n        \"--port\",\n        str(port),\n        \"--log-level\",\n        log_level.lower(),\n    ]\n\n    app.console.print(f\"Running: {' '.join(command)}\")\n    import signal\n\n    stop_event = anyio.Event()\n    start_command = partial(\n        run_process, command=command, env=server_env, stream_output=True\n    )\n\n    async with anyio.create_task_group() as tg:\n        try:\n            server_pid = await tg.start(start_command)\n            async for _ in watchfiles.awatch(\n                prefect.__module_path__,\n                stop_event=stop_event,  # type: ignore\n            ):\n                # when any watched files change, restart the server\n                app.console.print(\"Restarting Prefect Server...\")\n                os.kill(server_pid, signal.SIGTERM)  # type: ignore\n                # start a new server\n                server_pid = await tg.start(start_command)\n        except RuntimeError as err:\n            # a bug in watchfiles causes an 'Already borrowed' error from Rust when\n            # exiting: https://github.com/samuelcolvin/watchfiles/issues/200\n            if str(err).strip() != \"Already borrowed\":\n                raise\n        except KeyboardInterrupt:\n            # exit cleanly on ctrl-c by killing the server process if it's\n            # still running\n            try:\n                os.kill(server_pid, signal.SIGTERM)  # type: ignore\n            except ProcessLookupError:\n                # process already exited\n                pass\n\n            stop_event.set()\n
    ","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.build_docs","title":"build_docs","text":"

    Builds REST API reference documentation for static display.

    Source code in prefect/cli/dev.py
    @dev_app.command()\ndef build_docs(\n    schema_path: str = None,\n):\n    \"\"\"\n    Builds REST API reference documentation for static display.\n    \"\"\"\n    exit_with_error_if_not_editable_install()\n\n    from prefect.server.api.server import create_app\n\n    schema = create_app(ephemeral=True).openapi()\n\n    if not schema_path:\n        schema_path = (\n            prefect.__development_base_path__ / \"docs\" / \"api-ref\" / \"schema.json\"\n        ).absolute()\n    # overwrite info for display purposes\n    schema[\"info\"] = {}\n    with open(schema_path, \"w\") as f:\n        json.dump(schema, f)\n    app.console.print(f\"OpenAPI schema written to {schema_path}\")\n
    ","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.build_image","title":"build_image","text":"

    Build a docker image for development.
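    For example, to preview the docker build command for a specific Python version without running it (a sketch; the flag names come from the options shown in the source below):

    $ prefect dev build-image --python-version 3.11 --dry-run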

    Source code in prefect/cli/dev.py
    @dev_app.command()\ndef build_image(\n    arch: str = typer.Option(\n        None,\n        help=(\n            \"The architecture to build the container for. \"\n            \"Defaults to the architecture of the host Python. \"\n            f\"[default: {platform.machine()}]\"\n        ),\n    ),\n    python_version: str = typer.Option(\n        None,\n        help=(\n            \"The Python version to build the container for. \"\n            \"Defaults to the version of the host Python. \"\n            f\"[default: {python_version_minor()}]\"\n        ),\n    ),\n    flavor: str = typer.Option(\n        None,\n        help=(\n            \"An alternative flavor to build, for example 'conda'. \"\n            \"Defaults to the standard Python base image\"\n        ),\n    ),\n    dry_run: bool = False,\n):\n    \"\"\"\n    Build a docker image for development.\n    \"\"\"\n    exit_with_error_if_not_editable_install()\n    # TODO: Once https://github.com/tiangolo/typer/issues/354 is addressed, the\n    #       default can be set in the function signature\n    arch = arch or platform.machine()\n    python_version = python_version or python_version_minor()\n\n    tag = get_prefect_image_name(python_version=python_version, flavor=flavor)\n\n    # Here we use a subprocess instead of the docker-py client to easily stream output\n    # as it comes\n    command = [\n        \"docker\",\n        \"build\",\n        str(prefect.__development_base_path__),\n        \"--tag\",\n        tag,\n        \"--platform\",\n        f\"linux/{arch}\",\n        \"--build-arg\",\n        \"PREFECT_EXTRAS=[dev]\",\n        \"--build-arg\",\n        f\"PYTHON_VERSION={python_version}\",\n    ]\n\n    if flavor:\n        command += [\"--build-arg\", f\"BASE_IMAGE=prefect-{flavor}\"]\n\n    if dry_run:\n        print(\" \".join(command))\n        return\n\n    try:\n        subprocess.check_call(command, shell=sys.platform == \"win32\")\n    except subprocess.CalledProcessError:\n        exit_with_error(\"Failed to build image!\")\n    else:\n        exit_with_success(f\"Built image {tag!r} for linux/{arch}\")\n
    ","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.container","title":"container","text":"

    Run a docker container with local code mounted and installed.

    Source code in prefect/cli/dev.py
    @dev_app.command()\ndef container(bg: bool = False, name=\"prefect-dev\", api: bool = True, tag: str = None):\n    \"\"\"\n    Run a docker container with local code mounted and installed.\n    \"\"\"\n    exit_with_error_if_not_editable_install()\n    import docker\n    from docker.models.containers import Container\n\n    client = docker.from_env()\n\n    containers = client.containers.list()\n    container_names = {container.name for container in containers}\n    if name in container_names:\n        exit_with_error(\n            f\"Container {name!r} already exists. Specify a different name or stop \"\n            \"the existing container.\"\n        )\n\n    blocking_cmd = \"prefect dev api\" if api else \"sleep infinity\"\n    tag = tag or get_prefect_image_name()\n\n    container: Container = client.containers.create(\n        image=tag,\n        command=[\n            \"/bin/bash\",\n            \"-c\",\n            (  # noqa\n                \"pip install -e /opt/prefect/repo\\\\[dev\\\\] && touch /READY &&\"\n                f\" {blocking_cmd}\"\n            ),\n        ],\n        name=name,\n        auto_remove=True,\n        working_dir=\"/opt/prefect/repo\",\n        volumes=[f\"{prefect.__development_base_path__}:/opt/prefect/repo\"],\n        shm_size=\"4G\",\n    )\n\n    print(f\"Starting container for image {tag!r}...\")\n    container.start()\n\n    print(\"Waiting for installation to complete\", end=\"\", flush=True)\n    try:\n        ready = False\n        while not ready:\n            print(\".\", end=\"\", flush=True)\n            result = container.exec_run(\"test -f /READY\")\n            ready = result.exit_code == 0\n            if not ready:\n                time.sleep(3)\n    except BaseException:\n        print(\"\\nInterrupted. Stopping container...\")\n        container.stop()\n        raise\n\n    print(\n        textwrap.dedent(\n            f\"\"\"\n            Container {container.name!r} is ready! To connect to the container, run:\n\n                docker exec -it {container.name} /bin/bash\n            \"\"\"\n        )\n    )\n\n    if bg:\n        print(\n            textwrap.dedent(\n                f\"\"\"\n                The container will run forever. Stop the container with:\n\n                    docker stop {container.name}\n                \"\"\"\n            )\n        )\n        # Exit without stopping\n        return\n\n    try:\n        print(\"Send a keyboard interrupt to exit...\")\n        container.wait()\n    except KeyboardInterrupt:\n        pass  # Avoid showing \"Abort\"\n    finally:\n        print(\"\\nStopping container...\")\n        container.stop()\n
    ","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.kubernetes_manifest","title":"kubernetes_manifest","text":"

    Generates a Kubernetes manifest for development.

    Example

    $ prefect dev kubernetes-manifest | kubectl apply -f -

    Source code in prefect/cli/dev.py
    @dev_app.command()\ndef kubernetes_manifest():\n    \"\"\"\n    Generates a Kubernetes manifest for development.\n\n    Example:\n        $ prefect dev kubernetes-manifest | kubectl apply -f -\n    \"\"\"\n    exit_with_error_if_not_editable_install()\n\n    template = Template(\n        (\n            prefect.__module_path__ / \"cli\" / \"templates\" / \"kubernetes-dev.yaml\"\n        ).read_text()\n    )\n    manifest = template.substitute(\n        {\n            \"prefect_root_directory\": prefect.__development_base_path__,\n            \"image_name\": get_prefect_image_name(),\n        }\n    )\n    print(manifest)\n
    ","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.start","title":"start async","text":"

    Starts a hot-reloading development server with API, UI, and agent processes.

    Each service also has an individual command if you want to start it separately, and any service can be excluded here.
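    For example, to start the development API and UI while skipping the agent process:

    $ prefect dev start --no-agent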

    Source code in prefect/cli/dev.py
    @dev_app.command()\nasync def start(\n    exclude_api: bool = typer.Option(False, \"--no-api\"),\n    exclude_ui: bool = typer.Option(False, \"--no-ui\"),\n    exclude_agent: bool = typer.Option(False, \"--no-agent\"),\n    work_queues: List[str] = typer.Option(\n        [\"default\"],\n        \"-q\",\n        \"--work-queue\",\n        help=\"One or more work queue names for the dev agent to pull from.\",\n    ),\n):\n    \"\"\"\n    Starts a hot-reloading development server with API, UI, and agent processes.\n\n    Each service has an individual command if you wish to start them separately.\n    Each service can be excluded here as well.\n    \"\"\"\n    async with anyio.create_task_group() as tg:\n        if not exclude_api:\n            tg.start_soon(\n                partial(\n                    api,\n                    host=PREFECT_SERVER_API_HOST.value(),\n                    port=PREFECT_SERVER_API_PORT.value(),\n                )\n            )\n        if not exclude_ui:\n            tg.start_soon(ui)\n        if not exclude_agent:\n            # Hook the agent to the hosted API if running\n            if not exclude_api:\n                host = f\"http://{PREFECT_SERVER_API_HOST.value()}:{PREFECT_SERVER_API_PORT.value()}/api\"  # noqa\n            else:\n                host = PREFECT_API_URL.value()\n            tg.start_soon(agent, host, work_queues)\n
    ","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.ui","title":"ui async","text":"

    Starts a hot-reloading development UI.

    Source code in prefect/cli/dev.py
    @dev_app.command()\nasync def ui():\n    \"\"\"\n    Starts a hot-reloading development UI.\n    \"\"\"\n    exit_with_error_if_not_editable_install()\n    with tmpchdir(prefect.__development_base_path__ / \"ui\"):\n        app.console.print(\"Installing npm packages...\")\n        await run_process([\"npm\", \"install\"], stream_output=True)\n\n        app.console.print(\"Starting UI development server...\")\n        await run_process(command=[\"npm\", \"run\", \"serve\"], stream_output=True)\n
    ","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/events/","title":"Events","text":"","tags":["Python API","CLI","Events"]},{"location":"api-ref/prefect/cli/events/#prefect.cli.events","title":"prefect.cli.events","text":"","tags":["Python API","CLI","Events"]},{"location":"api-ref/prefect/cli/events/#prefect.cli.events.stream","title":"stream async","text":"

    Subscribes to the event stream of a workspace, printing each event as it is received. By default, events are printed as JSON, but can be printed as text by passing --format text.
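    For example, to print a single event as plain text and then exit:

    $ prefect events stream --format text --run-once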

    Source code in prefect/cli/events.py
    @events_app.command()\nasync def stream(\n    format: StreamFormat = typer.Option(\n        StreamFormat.json, \"--format\", help=\"Output format (json or text)\"\n    ),\n    output_file: str = typer.Option(\n        None, \"--output-file\", help=\"File to write events to\"\n    ),\n    account: bool = typer.Option(\n        False,\n        \"--account\",\n        help=\"Stream events for entire account, including audit logs\",\n    ),\n    run_once: bool = typer.Option(False, \"--run-once\", help=\"Stream only one event\"),\n):\n    \"\"\"Subscribes to the event stream of a workspace, printing each event\n    as it is received. By default, events are printed as JSON, but can be\n    printed as text by passing `--format text`.\n    \"\"\"\n\n    try:\n        if account:\n            events_subscriber = PrefectCloudAccountEventSubscriber()\n        else:\n            events_subscriber = get_events_subscriber()\n\n        app.console.print(\"Subscribing to event stream...\")\n        async with events_subscriber as subscriber:\n            async for event in subscriber:\n                await handle_event(event, format, output_file)\n                if run_once:\n                    typer.Exit(0)\n    except Exception as exc:\n        handle_error(exc)\n
    ","tags":["Python API","CLI","Events"]},{"location":"api-ref/prefect/cli/flow/","title":"flow","text":"","tags":["Python API","flows","CLI"]},{"location":"api-ref/prefect/cli/flow/#prefect.cli.flow","title":"prefect.cli.flow","text":"

    Command line interface for working with flows.

    ","tags":["Python API","flows","CLI"]},{"location":"api-ref/prefect/cli/flow/#prefect.cli.flow.ls","title":"ls async","text":"

    View flows.

    Source code in prefect/cli/flow.py
    @flow_app.command()\nasync def ls(\n    limit: int = 15,\n):\n    \"\"\"\n    View flows.\n    \"\"\"\n    async with get_client() as client:\n        flows = await client.read_flows(\n            limit=limit,\n            sort=FlowSort.CREATED_DESC,\n        )\n\n    table = Table(title=\"Flows\")\n    table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Name\", style=\"green\", no_wrap=True)\n    table.add_column(\"Created\", no_wrap=True)\n\n    for flow in flows:\n        table.add_row(\n            str(flow.id),\n            str(flow.name),\n            str(flow.created),\n        )\n\n    app.console.print(table)\n
    ","tags":["Python API","flows","CLI"]},{"location":"api-ref/prefect/cli/flow/#prefect.cli.flow.serve","title":"serve async","text":"

    Serve a flow via an entrypoint.
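    For example, serving a hypothetical flow from a local file on a daily cron schedule (entrypoint format and flags as documented in the options below):

    $ prefect flow serve ./path/to/file.py:flow_func_name --name my-deployment --cron '0 9 * * *'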

    Source code in prefect/cli/flow.py
    @flow_app.command()\nasync def serve(\n    entrypoint: str = typer.Argument(\n        ...,\n        help=(\n            \"The path to a file containing a flow and the name of the flow function in\"\n            \" the format `./path/to/file.py:flow_func_name`.\"\n        ),\n    ),\n    name: str = typer.Option(\n        ...,\n        \"--name\",\n        \"-n\",\n        help=\"The name to give the deployment created for the flow.\",\n    ),\n    description: Optional[str] = typer.Option(\n        None,\n        \"--description\",\n        \"-d\",\n        help=(\n            \"The description to give the created deployment. If not provided, the\"\n            \" description will be populated from the flow's description.\"\n        ),\n    ),\n    version: Optional[str] = typer.Option(\n        None, \"-v\", \"--version\", help=\"A version to give the created deployment.\"\n    ),\n    tags: Optional[List[str]] = typer.Option(\n        None,\n        \"-t\",\n        \"--tag\",\n        help=\"One or more optional tags to apply to the created deployment.\",\n    ),\n    cron: Optional[str] = typer.Option(\n        None,\n        \"--cron\",\n        help=(\n            \"A cron string that will be used to set a schedule for the created\"\n            \" deployment.\"\n        ),\n    ),\n    interval: Optional[int] = typer.Option(\n        None,\n        \"--interval\",\n        help=(\n            \"An integer specifying an interval (in seconds) between scheduled runs of\"\n            \" the flow.\"\n        ),\n    ),\n    interval_anchor: Optional[str] = typer.Option(\n        None, \"--anchor-date\", help=\"The start date for an interval schedule.\"\n    ),\n    rrule: Optional[str] = typer.Option(\n        None,\n        \"--rrule\",\n        help=\"An RRule that will be used to set a schedule for the created deployment.\",\n    ),\n    timezone: Optional[str] = typer.Option(\n        None,\n        \"--timezone\",\n        help=\"Timezone to used scheduling flow runs e.g. 'America/New_York'\",\n    ),\n    pause_on_shutdown: bool = typer.Option(\n        True,\n        help=(\n            \"If set, provided schedule will be paused when the serve command is\"\n            \" stopped. 
If not set, the schedules will continue running.\"\n        ),\n    ),\n):\n    \"\"\"\n    Serve a flow via an entrypoint.\n    \"\"\"\n    runner = Runner(name=name, pause_on_shutdown=pause_on_shutdown)\n    try:\n        schedules = []\n        if interval or cron or rrule:\n            schedule = construct_schedule(\n                interval=interval,\n                cron=cron,\n                rrule=rrule,\n                timezone=timezone,\n                anchor_date=interval_anchor,\n            )\n            schedules = [MinimalDeploymentSchedule(schedule=schedule, active=True)]\n\n        runner_deployment = RunnerDeployment.from_entrypoint(\n            entrypoint=entrypoint,\n            name=name,\n            schedules=schedules,\n            description=description,\n            tags=tags or [],\n            version=version,\n        )\n    except (MissingFlowError, ValueError) as exc:\n        exit_with_error(str(exc))\n    deployment_id = await runner.add_deployment(runner_deployment)\n\n    help_message = (\n        f\"[green]Your flow {runner_deployment.flow_name!r} is being served and polling\"\n        \" for scheduled runs!\\n[/]\\nTo trigger a run for this flow, use the following\"\n        \" command:\\n[blue]\\n\\t$ prefect deployment run\"\n        f\" '{runner_deployment.flow_name}/{name}'\\n[/]\"\n    )\n    if PREFECT_UI_URL:\n        help_message += (\n            \"\\nYou can also run your flow via the Prefect UI:\"\n            f\" [blue]{PREFECT_UI_URL.value()}/deployments/deployment/{deployment_id}[/]\\n\"\n        )\n\n    app.console.print(help_message, soft_wrap=True)\n    await runner.start()\n
    ","tags":["Python API","flows","CLI"]},{"location":"api-ref/prefect/cli/flow_run/","title":"flow_run","text":"","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run","title":"prefect.cli.flow_run","text":"

    Command line interface for working with flow runs

    ","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.cancel","title":"cancel async","text":"

    Cancel a flow run by ID.

    Source code in prefect/cli/flow_run.py
    @flow_run_app.command()\nasync def cancel(id: UUID):\n    \"\"\"Cancel a flow run by ID.\"\"\"\n    async with get_client() as client:\n        cancelling_state = State(type=StateType.CANCELLING)\n        try:\n            result = await client.set_flow_run_state(\n                flow_run_id=id, state=cancelling_state\n            )\n        except ObjectNotFound:\n            exit_with_error(f\"Flow run '{id}' not found!\")\n\n    if result.status == SetStateStatus.ABORT:\n        exit_with_error(\n            f\"Flow run '{id}' was unable to be cancelled. Reason:\"\n            f\" '{result.details.reason}'\"\n        )\n\n    exit_with_success(f\"Flow run '{id}' was successfully scheduled for cancellation.\")\n
    ","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.delete","title":"delete async","text":"

    Delete a flow run by ID.

    Source code in prefect/cli/flow_run.py
    @flow_run_app.command()\nasync def delete(id: UUID):\n    \"\"\"\n    Delete a flow run by ID.\n    \"\"\"\n    async with get_client() as client:\n        try:\n            await client.delete_flow_run(id)\n        except ObjectNotFound:\n            exit_with_error(f\"Flow run '{id}' not found!\")\n\n    exit_with_success(f\"Successfully deleted flow run '{id}'.\")\n
    ","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.inspect","title":"inspect async","text":"

    View details about a flow run.

    Source code in prefect/cli/flow_run.py
    @flow_run_app.command()\nasync def inspect(id: UUID):\n    \"\"\"\n    View details about a flow run.\n    \"\"\"\n    async with get_client() as client:\n        try:\n            flow_run = await client.read_flow_run(id)\n        except httpx.HTTPStatusError as exc:\n            if exc.response.status_code == status.HTTP_404_NOT_FOUND:\n                exit_with_error(f\"Flow run {id!r} not found!\")\n            else:\n                raise\n\n    app.console.print(Pretty(flow_run))\n
    ","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.logs","title":"logs async","text":"

    View logs for a flow run.
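    For example, to show only the most recent logs of a flow run (placeholder ID shown):

    $ prefect flow-run logs <flow-run-id> --tail --num-logs 20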

    Source code in prefect/cli/flow_run.py
    @flow_run_app.command()\nasync def logs(\n    id: UUID,\n    head: bool = typer.Option(\n        False,\n        \"--head\",\n        \"-h\",\n        help=(\n            f\"Show the first {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS} logs instead of\"\n            \" all logs.\"\n        ),\n    ),\n    num_logs: int = typer.Option(\n        None,\n        \"--num-logs\",\n        \"-n\",\n        help=(\n            \"Number of logs to show when using the --head or --tail flag. If None,\"\n            f\" defaults to {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS}.\"\n        ),\n        min=1,\n    ),\n    reverse: bool = typer.Option(\n        False,\n        \"--reverse\",\n        \"-r\",\n        help=\"Reverse the logs order to print the most recent logs first\",\n    ),\n    tail: bool = typer.Option(\n        False,\n        \"--tail\",\n        \"-t\",\n        help=(\n            f\"Show the last {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS} logs instead of\"\n            \" all logs.\"\n        ),\n    ),\n):\n    \"\"\"\n    View logs for a flow run.\n    \"\"\"\n    # Pagination - API returns max 200 (LOGS_DEFAULT_PAGE_SIZE) logs at a time\n    offset = 0\n    more_logs = True\n    num_logs_returned = 0\n\n    # if head and tail flags are being used together\n    if head and tail:\n        exit_with_error(\"Please provide either a `head` or `tail` option but not both.\")\n\n    user_specified_num_logs = (\n        num_logs or LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS\n        if head or tail or num_logs\n        else None\n    )\n\n    # if using tail update offset according to LOGS_DEFAULT_PAGE_SIZE\n    if tail:\n        offset = max(0, user_specified_num_logs - LOGS_DEFAULT_PAGE_SIZE)\n\n    log_filter = LogFilter(flow_run_id={\"any_\": [id]})\n\n    async with get_client() as client:\n        # Get the flow run\n        try:\n            flow_run = await client.read_flow_run(id)\n        except ObjectNotFound:\n            exit_with_error(f\"Flow run {str(id)!r} not found!\")\n\n        while more_logs:\n            num_logs_to_return_from_page = (\n                LOGS_DEFAULT_PAGE_SIZE\n                if user_specified_num_logs is None\n                else min(\n                    LOGS_DEFAULT_PAGE_SIZE, user_specified_num_logs - num_logs_returned\n                )\n            )\n\n            # Get the next page of logs\n            page_logs = await client.read_logs(\n                log_filter=log_filter,\n                limit=num_logs_to_return_from_page,\n                offset=offset,\n                sort=(\n                    LogSort.TIMESTAMP_DESC if reverse or tail else LogSort.TIMESTAMP_ASC\n                ),\n            )\n\n            for log in reversed(page_logs) if tail and not reverse else page_logs:\n                app.console.print(\n                    # Print following the flow run format (declared in logging.yml)\n                    (\n                        f\"{pendulum.instance(log.timestamp).to_datetime_string()}.{log.timestamp.microsecond // 1000:03d} |\"\n                        f\" {logging.getLevelName(log.level):7s} | Flow run\"\n                        f\" {flow_run.name!r} - {log.message}\"\n                    ),\n                    soft_wrap=True,\n                )\n\n            # Update the number of logs retrieved\n            num_logs_returned += num_logs_to_return_from_page\n\n            if tail:\n                #  If the current offset is not 0, update the offset for the next page\n                if offset != 0:\n                   
 offset = (\n                        0\n                        # Reset the offset to 0 if there are less logs than the LOGS_DEFAULT_PAGE_SIZE to get the remaining log\n                        if offset < LOGS_DEFAULT_PAGE_SIZE\n                        else offset - LOGS_DEFAULT_PAGE_SIZE\n                    )\n                else:\n                    more_logs = False\n            else:\n                if len(page_logs) == LOGS_DEFAULT_PAGE_SIZE:\n                    offset += LOGS_DEFAULT_PAGE_SIZE\n                else:\n                    # No more logs to show, exit\n                    more_logs = False\n
    ","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.ls","title":"ls async","text":"

    View recent flow runs or flow runs for specific flows.

    Arguments:

    flow_name: Name of the flow

    limit: Maximum number of flow runs to list. Defaults to 15.

    state: Name of the flow run's state. Can be provided multiple times. Options are 'SCHEDULED', 'PENDING', 'RUNNING', 'COMPLETED', 'FAILED', 'CRASHED', 'CANCELLING', 'CANCELLED', 'PAUSED', 'SUSPENDED', 'AWAITINGRETRY', 'RETRYING', and 'LATE'.

    state_type: Type of the flow run's state. Can be provided multiple times. Options are 'SCHEDULED', 'PENDING', 'RUNNING', 'COMPLETED', 'FAILED', 'CRASHED', 'CANCELLING', 'CANCELLED', and 'PAUSED'.

    Examples:

    $ prefect flow-runs ls --state Running

    $ prefect flow-runs ls --state Running --state late

    $ prefect flow-runs ls --state-type RUNNING

    $ prefect flow-runs ls --state-type RUNNING --state-type FAILED

    Source code in prefect/cli/flow_run.py
    @flow_run_app.command()\nasync def ls(\n    flow_name: List[str] = typer.Option(None, help=\"Name of the flow\"),\n    limit: int = typer.Option(15, help=\"Maximum number of flow runs to list\"),\n    state: List[str] = typer.Option(None, help=\"Name of the flow run's state\"),\n    state_type: List[str] = typer.Option(None, help=\"Type of the flow run's state\"),\n):\n    \"\"\"\n    View recent flow runs or flow runs for specific flows.\n\n    Arguments:\n\n        flow_name: Name of the flow\n\n        limit: Maximum number of flow runs to list. Defaults to 15.\n\n        state: Name of the flow run's state. Can be provided multiple times. Options are 'SCHEDULED', 'PENDING', 'RUNNING', 'COMPLETED', 'FAILED', 'CRASHED', 'CANCELLING', 'CANCELLED', 'PAUSED', 'SUSPENDED', 'AWAITINGRETRY', 'RETRYING', and 'LATE'.\n\n        state_type: Type of the flow run's state. Can be provided multiple times. Options are 'SCHEDULED', 'PENDING', 'RUNNING', 'COMPLETED', 'FAILED', 'CRASHED', 'CANCELLING', 'CANCELLED', 'CRASHED', and 'PAUSED'.\n\n    Examples:\n\n    $ prefect flow-runs ls --state Running\n\n    $ prefect flow-runs ls --state Running --state late\n\n    $ prefect flow-runs ls --state-type RUNNING\n\n    $ prefect flow-runs ls --state-type RUNNING --state-type FAILED\n    \"\"\"\n\n    # Handling `state` and `state_type` argument validity in the function instead of by specifying\n    # List[StateType] and List[StateName] in the type hints, allows users to provide\n    # case-insensitive arguments for `state` and `state_type`.\n\n    prefect_state_names = {\n        \"SCHEDULED\": \"Scheduled\",\n        \"PENDING\": \"Pending\",\n        \"RUNNING\": \"Running\",\n        \"COMPLETED\": \"Completed\",\n        \"FAILED\": \"Failed\",\n        \"CANCELLED\": \"Cancelled\",\n        \"CRASHED\": \"Crashed\",\n        \"PAUSED\": \"Paused\",\n        \"CANCELLING\": \"Cancelling\",\n        \"SUSPENDED\": \"Suspended\",\n        \"AWAITINGRETRY\": \"AwaitingRetry\",\n        \"RETRYING\": \"Retrying\",\n        \"LATE\": \"Late\",\n    }\n\n    state_filter = {}\n    formatted_states = []\n\n    if state:\n        for s in state:\n            uppercased_state = s.upper()\n            if uppercased_state in prefect_state_names:\n                capitalized_state = prefect_state_names[uppercased_state]\n                formatted_states.append(capitalized_state)\n            else:\n                # Do not change the case of the state name if it is not one of the official Prefect state names\n                formatted_states.append(s)\n                logger.warning(\n                    f\"State name {repr(s)} is not one of the official Prefect state names.\"\n                )\n\n        state_filter[\"name\"] = {\"any_\": formatted_states}\n\n    if state_type:\n        upper_cased_states = [s.upper() for s in state_type]\n        if not all(s in StateType.__members__ for s in upper_cased_states):\n            exit_with_error(\n                f\"Invalid state type. 
Options are {', '.join(StateType.__members__)}.\"\n            )\n\n        state_filter[\"type\"] = {\n            \"any_\": [StateType[s].value for s in upper_cased_states]\n        }\n\n    async with get_client() as client:\n        flow_runs = await client.read_flow_runs(\n            flow_filter=FlowFilter(name={\"any_\": flow_name}) if flow_name else None,\n            flow_run_filter=FlowRunFilter(state=state_filter) if state_filter else None,\n            limit=limit,\n            sort=FlowRunSort.EXPECTED_START_TIME_DESC,\n        )\n        flows_by_id = {\n            flow.id: flow\n            for flow in await client.read_flows(\n                flow_filter=FlowFilter(id={\"any_\": [run.flow_id for run in flow_runs]})\n            )\n        }\n\n        if not flow_runs:\n            exit_with_success(\"No flow runs found.\")\n\n    table = Table(title=\"Flow Runs\")\n    table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Flow\", style=\"blue\", no_wrap=True)\n    table.add_column(\"Name\", style=\"green\", no_wrap=True)\n    table.add_column(\"State\", no_wrap=True)\n    table.add_column(\"When\", style=\"bold\", no_wrap=True)\n\n    for flow_run in sorted(flow_runs, key=lambda d: d.created, reverse=True):\n        flow = flows_by_id[flow_run.flow_id]\n        timestamp = (\n            flow_run.state.state_details.scheduled_time\n            if flow_run.state.is_scheduled()\n            else flow_run.state.timestamp\n        )\n        table.add_row(\n            str(flow_run.id),\n            str(flow.name),\n            str(flow_run.name),\n            str(flow_run.state.type.value),\n            pendulum.instance(timestamp).diff_for_humans(),\n        )\n\n    app.console.print(table)\n
    ","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/kubernetes/","title":"kubernetes","text":"","tags":["Python API","kubernetes","CLI"]},{"location":"api-ref/prefect/cli/kubernetes/#prefect.cli.kubernetes","title":"prefect.cli.kubernetes","text":"

    Command line interface for working with Prefect on Kubernetes

    ","tags":["Python API","kubernetes","CLI"]},{"location":"api-ref/prefect/cli/kubernetes/#prefect.cli.kubernetes.manifest_agent","title":"manifest_agent","text":"

    Generates a manifest for deploying a Prefect agent on Kubernetes.

    Example

    $ prefect kubernetes manifest agent | kubectl apply -f -
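
    To target a specific namespace and work queue (the namespace name here is illustrative; --namespace and --work-queue are defined in the command's options):

    $ prefect kubernetes manifest agent --namespace prefect --work-queue kubernetes | kubectl apply -f -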

    Source code in prefect/cli/kubernetes.py
    @manifest_app.command(\"agent\")\ndef manifest_agent(\n    api_url: str = SettingsOption(PREFECT_API_URL),\n    api_key: str = SettingsOption(PREFECT_API_KEY),\n    image_tag: str = typer.Option(\n        get_prefect_image_name(),\n        \"-i\",\n        \"--image-tag\",\n        help=\"The tag of a Docker image to use for the Agent.\",\n    ),\n    namespace: str = typer.Option(\n        \"default\",\n        \"-n\",\n        \"--namespace\",\n        help=\"A Kubernetes namespace to create agent in.\",\n    ),\n    work_queue: str = typer.Option(\n        \"kubernetes\",\n        \"-q\",\n        \"--work-queue\",\n        help=\"A work queue name for the agent to pull from.\",\n    ),\n):\n    \"\"\"\n    Generates a manifest for deploying Agent on Kubernetes.\n\n    Example:\n        $ prefect kubernetes manifest agent | kubectl apply -f -\n    \"\"\"\n\n    template = Template(\n        (\n            prefect.__module_path__ / \"cli\" / \"templates\" / \"kubernetes-agent.yaml\"\n        ).read_text()\n    )\n    manifest = template.substitute(\n        {\n            \"api_url\": api_url,\n            \"api_key\": api_key,\n            \"image_name\": image_tag,\n            \"namespace\": namespace,\n            \"work_queue\": work_queue,\n        }\n    )\n    print(manifest)\n
    ","tags":["Python API","kubernetes","CLI"]},{"location":"api-ref/prefect/cli/kubernetes/#prefect.cli.kubernetes.manifest_flow_run_job","title":"manifest_flow_run_job async","text":"

    Prints the default KubernetesJob Job manifest.

    Use this file to fully customize your KubernetesJob deployments.

    Example:

    $ prefect kubernetes manifest flow-run-job

    Output, a YAML file:

    apiVersion: batch/v1
    kind: Job
    ...

    Source code in prefect/cli/kubernetes.py
    @manifest_app.command(\"flow-run-job\")\nasync def manifest_flow_run_job():\n    \"\"\"\n    Prints the default KubernetesJob Job manifest.\n\n    Use this file to fully customize your `KubernetesJob` deployments.\n\n    \\b\n    Example:\n        \\b\n        $ prefect kubernetes manifest flow-run-job\n\n    \\b\n    Output, a YAML file:\n        \\b\n        apiVersion: batch/v1\n        kind: Job\n        ...\n    \"\"\"\n\n    KubernetesJob.base_job_manifest()\n\n    output = yaml.dump(KubernetesJob.base_job_manifest())\n\n    # add some commentary where appropriate\n    output = output.replace(\n        \"metadata:\\n  labels:\",\n        \"metadata:\\n  # labels are required, even if empty\\n  labels:\",\n    )\n    output = output.replace(\n        \"containers:\\n\",\n        \"containers:  # the first container is required\\n\",\n    )\n    output = output.replace(\n        \"env: []\\n\",\n        \"env: []  # env is required, even if empty\\n\",\n    )\n\n    print(output)\n
    ","tags":["Python API","kubernetes","CLI"]},{"location":"api-ref/prefect/cli/kubernetes/#prefect.cli.kubernetes.manifest_server","title":"manifest_server","text":"

    Generates a manifest for deploying Prefect on Kubernetes.

    Example

    $ prefect kubernetes manifest server | kubectl apply -f -
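
    To generate the manifest for a specific namespace (the namespace name here is illustrative; --namespace is defined in the command's options):

    $ prefect kubernetes manifest server --namespace prefect | kubectl apply -f -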

    Source code in prefect/cli/kubernetes.py
    @manifest_app.command(\"server\")\ndef manifest_server(\n    image_tag: str = typer.Option(\n        get_prefect_image_name(),\n        \"-i\",\n        \"--image-tag\",\n        help=\"The tag of a Docker image to use for the server.\",\n    ),\n    namespace: str = typer.Option(\n        \"default\",\n        \"-n\",\n        \"--namespace\",\n        help=\"A Kubernetes namespace to create the server in.\",\n    ),\n    log_level: str = SettingsOption(PREFECT_LOGGING_SERVER_LEVEL),\n):\n    \"\"\"\n    Generates a manifest for deploying Prefect on Kubernetes.\n\n    Example:\n        $ prefect kubernetes manifest server | kubectl apply -f -\n    \"\"\"\n\n    template = Template(\n        (\n            prefect.__module_path__ / \"cli\" / \"templates\" / \"kubernetes-server.yaml\"\n        ).read_text()\n    )\n    manifest = template.substitute(\n        {\n            \"image_name\": image_tag,\n            \"namespace\": namespace,\n            \"log_level\": log_level,\n        }\n    )\n    print(manifest)\n
    ","tags":["Python API","kubernetes","CLI"]},{"location":"api-ref/prefect/cli/profile/","title":"profile","text":"","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile","title":"prefect.cli.profile","text":"

    Command line interface for working with profiles.

    ","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.create","title":"create","text":"

    Create a new profile.
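
    For example, to create a new profile copied from an existing one (the profile names are illustrative; --from is the copy option shown in the source):

    $ prefect profile create staging --from default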

    Source code in prefect/cli/profile.py
    @profile_app.command()\ndef create(\n    name: str,\n    from_name: str = typer.Option(None, \"--from\", help=\"Copy an existing profile.\"),\n):\n    \"\"\"\n    Create a new profile.\n    \"\"\"\n\n    profiles = prefect.settings.load_profiles()\n    if name in profiles:\n        app.console.print(\n            textwrap.dedent(\n                f\"\"\"\n                [red]Profile {name!r} already exists.[/red]\n                To create a new profile, remove the existing profile first:\n\n                    prefect profile delete {name!r}\n                \"\"\"\n            ).strip()\n        )\n        raise typer.Exit(1)\n\n    if from_name:\n        if from_name not in profiles:\n            exit_with_error(f\"Profile {from_name!r} not found.\")\n\n        # Create a copy of the profile with a new name and add to the collection\n        profiles.add_profile(profiles[from_name].copy(update={\"name\": name}))\n    else:\n        profiles.add_profile(prefect.settings.Profile(name=name, settings={}))\n\n    prefect.settings.save_profiles(profiles)\n\n    app.console.print(\n        textwrap.dedent(\n            f\"\"\"\n            Created profile with properties:\n                name - {name!r}\n                from name - {from_name or None}\n\n            Use created profile for future, subsequent commands:\n                prefect profile use {name!r}\n\n            Use created profile temporarily for a single command:\n                prefect -p {name!r} config view\n            \"\"\"\n        )\n    )\n
    ","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.delete","title":"delete","text":"

    Delete the given profile.

    Source code in prefect/cli/profile.py
    @profile_app.command()\ndef delete(name: str):\n    \"\"\"\n    Delete the given profile.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    if name not in profiles:\n        exit_with_error(f\"Profile {name!r} not found.\")\n\n    current_profile = prefect.context.get_settings_context().profile\n    if current_profile.name == name:\n        exit_with_error(\n            f\"Profile {name!r} is the active profile. You must switch profiles before\"\n            \" it can be deleted.\"\n        )\n\n    profiles.remove_profile(name)\n\n    verb = \"Removed\"\n    if name == \"default\":\n        verb = \"Reset\"\n\n    prefect.settings.save_profiles(profiles)\n    exit_with_success(f\"{verb} profile {name!r}.\")\n
    ","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.inspect","title":"inspect","text":"

    Display settings from a given profile; defaults to active.
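
    For example (the profile name is illustrative):

    $ prefect profile inspect

    $ prefect profile inspect staging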

    Source code in prefect/cli/profile.py
    @profile_app.command()\ndef inspect(\n    name: Optional[str] = typer.Argument(\n        None, help=\"Name of profile to inspect; defaults to active profile.\"\n    ),\n):\n    \"\"\"\n    Display settings from a given profile; defaults to active.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    if name is None:\n        current_profile = prefect.context.get_settings_context().profile\n        if not current_profile:\n            exit_with_error(\"No active profile set - please provide a name to inspect.\")\n        name = current_profile.name\n        print(f\"No name provided, defaulting to {name!r}\")\n    if name not in profiles:\n        exit_with_error(f\"Profile {name!r} not found.\")\n\n    if not profiles[name].settings:\n        # TODO: Consider instructing on how to add settings.\n        print(f\"Profile {name!r} is empty.\")\n\n    for setting, value in profiles[name].settings.items():\n        app.console.print(f\"{setting.name}='{value}'\")\n
    ","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.ls","title":"ls","text":"

    List profile names.

    Source code in prefect/cli/profile.py
    @profile_app.command()\ndef ls():\n    \"\"\"\n    List profile names.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    current_profile = prefect.context.get_settings_context().profile\n    current_name = current_profile.name if current_profile is not None else None\n\n    table = Table(caption=\"* active profile\")\n    table.add_column(\n        \"[#024dfd]Available Profiles:\", justify=\"right\", style=\"#8ea0ae\", no_wrap=True\n    )\n\n    for name in profiles:\n        if name == current_name:\n            table.add_row(f\"[green]  * {name}[/green]\")\n        else:\n            table.add_row(f\"  {name}\")\n    app.console.print(table)\n
    ","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.rename","title":"rename","text":"

    Change the name of a profile.
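
    For example (the profile names are illustrative):

    $ prefect profile rename staging production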

    Source code in prefect/cli/profile.py
    @profile_app.command()\ndef rename(name: str, new_name: str):\n    \"\"\"\n    Change the name of a profile.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    if name not in profiles:\n        exit_with_error(f\"Profile {name!r} not found.\")\n\n    if new_name in profiles:\n        exit_with_error(f\"Profile {new_name!r} already exists.\")\n\n    profiles.add_profile(profiles[name].copy(update={\"name\": new_name}))\n    profiles.remove_profile(name)\n\n    # If the active profile was renamed switch the active profile to the new name.\n    prefect.context.get_settings_context().profile\n    if profiles.active_name == name:\n        profiles.set_active(new_name)\n    if os.environ.get(\"PREFECT_PROFILE\") == name:\n        app.console.print(\n            f\"You have set your current profile to {name!r} with the \"\n            \"PREFECT_PROFILE environment variable. You must update this variable to \"\n            f\"{new_name!r} to continue using the profile.\"\n        )\n\n    prefect.settings.save_profiles(profiles)\n    exit_with_success(f\"Renamed profile {name!r} to {new_name!r}.\")\n
    ","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.use","title":"use async","text":"

    Set the given profile to active.
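
    For example (the profile name is illustrative):

    $ prefect profile use staging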

    Source code in prefect/cli/profile.py
    @profile_app.command()\nasync def use(name: str):\n    \"\"\"\n    Set the given profile to active.\n    \"\"\"\n    status_messages = {\n        ConnectionStatus.CLOUD_CONNECTED: (\n            exit_with_success,\n            f\"Connected to Prefect Cloud using profile {name!r}\",\n        ),\n        ConnectionStatus.CLOUD_ERROR: (\n            exit_with_error,\n            f\"Error connecting to Prefect Cloud using profile {name!r}\",\n        ),\n        ConnectionStatus.CLOUD_UNAUTHORIZED: (\n            exit_with_error,\n            f\"Error authenticating with Prefect Cloud using profile {name!r}\",\n        ),\n        ConnectionStatus.ORION_CONNECTED: (\n            exit_with_success,\n            f\"Connected to Prefect server using profile {name!r}\",\n        ),\n        ConnectionStatus.ORION_ERROR: (\n            exit_with_error,\n            f\"Error connecting to Prefect server using profile {name!r}\",\n        ),\n        ConnectionStatus.EPHEMERAL: (\n            exit_with_success,\n            (\n                f\"No Prefect server specified using profile {name!r} - the API will run\"\n                \" in ephemeral mode.\"\n            ),\n        ),\n        ConnectionStatus.INVALID_API: (\n            exit_with_error,\n            \"Error connecting to Prefect API URL\",\n        ),\n    }\n\n    profiles = prefect.settings.load_profiles()\n    if name not in profiles.names:\n        exit_with_error(f\"Profile {name!r} not found.\")\n\n    profiles.set_active(name)\n    prefect.settings.save_profiles(profiles)\n\n    with Progress(\n        SpinnerColumn(),\n        TextColumn(\"[progress.description]{task.description}\"),\n        transient=False,\n    ) as progress:\n        progress.add_task(\n            description=\"Checking API connectivity...\",\n            total=None,\n        )\n\n        with use_profile(name, include_current_context=False):\n            connection_status = await check_orion_connection()\n\n        exit_method, msg = status_messages[connection_status]\n\n    exit_method(msg)\n
    ","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/root/","title":"root","text":"","tags":["Python API","CLI"]},{"location":"api-ref/prefect/cli/root/#prefect.cli.root","title":"prefect.cli.root","text":"

    Base prefect command-line application

    ","tags":["Python API","CLI"]},{"location":"api-ref/prefect/cli/root/#prefect.cli.root.version","title":"version async","text":"

    Get the current Prefect version.
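
    For example:

    $ prefect version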

    Source code in prefect/cli/root.py
    @app.command()\nasync def version():\n    \"\"\"Get the current Prefect version.\"\"\"\n    import sqlite3\n\n    from prefect.server.utilities.database import get_dialect\n    from prefect.settings import PREFECT_API_DATABASE_CONNECTION_URL\n\n    version_info = {\n        \"Version\": prefect.__version__,\n        \"API version\": SERVER_API_VERSION,\n        \"Python version\": platform.python_version(),\n        \"Git commit\": prefect.__version_info__[\"full-revisionid\"][:8],\n        \"Built\": pendulum.parse(\n            prefect.__version_info__[\"date\"]\n        ).to_day_datetime_string(),\n        \"OS/Arch\": f\"{sys.platform}/{platform.machine()}\",\n        \"Profile\": prefect.context.get_settings_context().profile.name,\n    }\n\n    server_type: str\n\n    try:\n        # We do not context manage the client because when using an ephemeral app we do not\n        # want to create the database or run migrations\n        client = prefect.get_client()\n        server_type = client.server_type.value\n    except Exception:\n        server_type = \"<client error>\"\n\n    version_info[\"Server type\"] = server_type.lower()\n\n    # TODO: Consider adding an API route to retrieve this information?\n    if server_type == ServerType.EPHEMERAL.value:\n        database = get_dialect(PREFECT_API_DATABASE_CONNECTION_URL.value()).name\n        version_info[\"Server\"] = {\"Database\": database}\n        if database == \"sqlite\":\n            version_info[\"Server\"][\"SQLite version\"] = sqlite3.sqlite_version\n\n    def display(object: dict, nesting: int = 0):\n        # Recursive display of a dictionary with nesting\n        for key, value in object.items():\n            key += \":\"\n            if isinstance(value, dict):\n                app.console.print(key)\n                return display(value, nesting + 2)\n            prefix = \" \" * nesting\n            app.console.print(f\"{prefix}{key.ljust(20 - len(prefix))} {value}\")\n\n    display(version_info)\n
    ","tags":["Python API","CLI"]},{"location":"api-ref/prefect/cli/server/","title":"server","text":"","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server","title":"prefect.cli.server","text":"

    Command line interface for working with Prefect

    ","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.downgrade","title":"downgrade async","text":"

    Downgrade the Prefect database
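
    For example, to run all down migrations without a confirmation prompt (per the -r help text, 'base' runs all migrations):

    $ prefect server database downgrade -y -r base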

    Source code in prefect/cli/server.py
    @database_app.command()\nasync def downgrade(\n    yes: bool = typer.Option(False, \"--yes\", \"-y\"),\n    revision: str = typer.Option(\n        \"-1\",\n        \"-r\",\n        help=(\n            \"The revision to pass to `alembic downgrade`. If not provided, \"\n            \"downgrades to the most recent revision. Use 'base' to run all \"\n            \"migrations.\"\n        ),\n    ),\n    dry_run: bool = typer.Option(\n        False,\n        help=(\n            \"Flag to show what migrations would be made without applying them. Will\"\n            \" emit sql statements to stdout.\"\n        ),\n    ),\n):\n    \"\"\"Downgrade the Prefect database\"\"\"\n    from prefect.server.database.alembic_commands import alembic_downgrade\n    from prefect.server.database.dependencies import provide_database_interface\n\n    db = provide_database_interface()\n\n    engine = await db.engine()\n\n    if not yes:\n        confirm = typer.confirm(\n            \"Are you sure you want to downgrade the Prefect \"\n            f\"database at {engine.url!r}?\"\n        )\n        if not confirm:\n            exit_with_error(\"Database downgrade aborted!\")\n\n    app.console.print(\"Running downgrade migrations ...\")\n    await run_sync_in_worker_thread(\n        alembic_downgrade, revision=revision, dry_run=dry_run\n    )\n    app.console.print(\"Migrations succeeded!\")\n    exit_with_success(f\"Prefect database at {engine.url!r} downgraded!\")\n
    ","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.reset","title":"reset async","text":"

    Drop and recreate all Prefect database tables
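
    For example, to reset the database without a confirmation prompt:

    $ prefect server database reset -y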

    Source code in prefect/cli/server.py
    @database_app.command()\nasync def reset(yes: bool = typer.Option(False, \"--yes\", \"-y\")):\n    \"\"\"Drop and recreate all Prefect database tables\"\"\"\n    from prefect.server.database.dependencies import provide_database_interface\n\n    db = provide_database_interface()\n    engine = await db.engine()\n    if not yes:\n        confirm = typer.confirm(\n            \"Are you sure you want to reset the Prefect database located \"\n            f'at \"{engine.url!r}\"? This will drop and recreate all tables.'\n        )\n        if not confirm:\n            exit_with_error(\"Database reset aborted\")\n    app.console.print(\"Downgrading database...\")\n    await db.drop_db()\n    app.console.print(\"Upgrading database...\")\n    await db.create_db()\n    exit_with_success(f'Prefect database \"{engine.url!r}\" reset!')\n
    ","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.revision","title":"revision async","text":"

    Create a new migration for the Prefect database
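
    For example (the message text is illustrative; -m/--message is defined in the command's options):

    $ prefect server database revision -m \"describe the change\"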

    Source code in prefect/cli/server.py
    @database_app.command()\nasync def revision(\n    message: str = typer.Option(\n        None,\n        \"--message\",\n        \"-m\",\n        help=\"A message to describe the migration.\",\n    ),\n    autogenerate: bool = False,\n):\n    \"\"\"Create a new migration for the Prefect database\"\"\"\n    from prefect.server.database.alembic_commands import alembic_revision\n\n    app.console.print(\"Running migration file creation ...\")\n    await run_sync_in_worker_thread(\n        alembic_revision,\n        message=message,\n        autogenerate=autogenerate,\n    )\n    exit_with_success(\"Creating new migration file succeeded!\")\n
    ","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.stamp","title":"stamp async","text":"

    Stamp the revision table with the given revision; don't run any migrations

    Source code in prefect/cli/server.py
    @database_app.command()\nasync def stamp(revision: str):\n    \"\"\"Stamp the revision table with the given revision; don't run any migrations\"\"\"\n    from prefect.server.database.alembic_commands import alembic_stamp\n\n    app.console.print(\"Stamping database with revision ...\")\n    await run_sync_in_worker_thread(alembic_stamp, revision=revision)\n    exit_with_success(\"Stamping database with revision succeeded!\")\n
    ","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.start","title":"start async","text":"

    Start a Prefect server instance
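
    A sketch of starting the server with an explicit host, port, and log level (flag spellings assume the settings-derived option names shown in the source):

    $ prefect server start --host 127.0.0.1 --port 4200 --log-level WARNING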

    Source code in prefect/cli/server.py
    @server_app.command()\nasync def start(\n    host: str = SettingsOption(PREFECT_SERVER_API_HOST),\n    port: int = SettingsOption(PREFECT_SERVER_API_PORT),\n    keep_alive_timeout: int = SettingsOption(PREFECT_SERVER_API_KEEPALIVE_TIMEOUT),\n    log_level: str = SettingsOption(PREFECT_LOGGING_SERVER_LEVEL),\n    scheduler: bool = SettingsOption(PREFECT_API_SERVICES_SCHEDULER_ENABLED),\n    analytics: bool = SettingsOption(\n        PREFECT_SERVER_ANALYTICS_ENABLED, \"--analytics-on/--analytics-off\"\n    ),\n    late_runs: bool = SettingsOption(PREFECT_API_SERVICES_LATE_RUNS_ENABLED),\n    ui: bool = SettingsOption(PREFECT_UI_ENABLED),\n):\n    \"\"\"Start a Prefect server instance\"\"\"\n\n    server_env = os.environ.copy()\n    server_env[\"PREFECT_API_SERVICES_SCHEDULER_ENABLED\"] = str(scheduler)\n    server_env[\"PREFECT_SERVER_ANALYTICS_ENABLED\"] = str(analytics)\n    server_env[\"PREFECT_API_SERVICES_LATE_RUNS_ENABLED\"] = str(late_runs)\n    server_env[\"PREFECT_API_SERVICES_UI\"] = str(ui)\n    server_env[\"PREFECT_LOGGING_SERVER_LEVEL\"] = log_level\n\n    base_url = f\"http://{host}:{port}\"\n\n    async with anyio.create_task_group() as tg:\n        app.console.print(generate_welcome_blurb(base_url, ui_enabled=ui))\n        app.console.print(\"\\n\")\n\n        server_process_id = await tg.start(\n            partial(\n                run_process,\n                command=[\n                    get_sys_executable(),\n                    \"-m\",\n                    \"uvicorn\",\n                    \"--app-dir\",\n                    # quote wrapping needed for windows paths with spaces\n                    f'\"{prefect.__module_path__.parent}\"',\n                    \"--factory\",\n                    \"prefect.server.api.server:create_app\",\n                    \"--host\",\n                    str(host),\n                    \"--port\",\n                    str(port),\n                    \"--timeout-keep-alive\",\n                    str(keep_alive_timeout),\n                ],\n                env=server_env,\n                stream_output=True,\n            )\n        )\n\n        # Explicitly handle the interrupt signal here, as it will allow us to\n        # cleanly stop the uvicorn server. Failing to do that may cause a\n        # large amount of anyio error traces on the terminal, because the\n        # SIGINT is handled by Typer/Click in this process (the parent process)\n        # and will start shutting down subprocesses:\n        # https://github.com/PrefectHQ/server/issues/2475\n\n        setup_signal_handlers_server(\n            server_process_id, \"the Prefect server\", app.console.print\n        )\n\n    app.console.print(\"Server stopped!\")\n
    ","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.upgrade","title":"upgrade async","text":"

    Upgrade the Prefect database
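
    For example, to preview the SQL an upgrade would run without applying it (the --dry-run flag corresponds to the dry_run option shown in the source):

    $ prefect server database upgrade -y --dry-run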

    Source code in prefect/cli/server.py
    @database_app.command()\nasync def upgrade(\n    yes: bool = typer.Option(False, \"--yes\", \"-y\"),\n    revision: str = typer.Option(\n        \"head\",\n        \"-r\",\n        help=(\n            \"The revision to pass to `alembic upgrade`. If not provided, runs all\"\n            \" migrations.\"\n        ),\n    ),\n    dry_run: bool = typer.Option(\n        False,\n        help=(\n            \"Flag to show what migrations would be made without applying them. Will\"\n            \" emit sql statements to stdout.\"\n        ),\n    ),\n):\n    \"\"\"Upgrade the Prefect database\"\"\"\n    from prefect.server.database.alembic_commands import alembic_upgrade\n    from prefect.server.database.dependencies import provide_database_interface\n\n    db = provide_database_interface()\n    engine = await db.engine()\n\n    if not yes:\n        confirm = typer.confirm(\n            f\"Are you sure you want to upgrade the Prefect database at {engine.url!r}?\"\n        )\n        if not confirm:\n            exit_with_error(\"Database upgrade aborted!\")\n\n    app.console.print(\"Running upgrade migrations ...\")\n    await run_sync_in_worker_thread(alembic_upgrade, revision=revision, dry_run=dry_run)\n    app.console.print(\"Migrations succeeded!\")\n    exit_with_success(f\"Prefect database at {engine.url!r} upgraded!\")\n
    ","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/shell/","title":"shell","text":"","tags":["Python API","CLI","shell","command line","terminal"]},{"location":"api-ref/prefect/cli/shell/#prefect.cli.shell","title":"prefect.cli.shell","text":"

    Provides a set of tools for executing shell commands as Prefect flows. Includes functionality for running shell commands ad hoc or serving them as Prefect flows, with options for logging output, scheduling, and deployment customization.

    ","tags":["Python API","CLI","shell","command line","terminal"]},{"location":"api-ref/prefect/cli/shell/#prefect.cli.shell.output_collect","title":"output_collect","text":"

    Collects output from a subprocess pipe and stores it in a container list.

    Parameters:

    Name Type Description Default pipe

    The output pipe of the subprocess, either stdout or stderr.

    required container

    A list to store the collected output lines.

    required Source code in prefect/cli/shell.py
    def output_collect(pipe, container):\n    \"\"\"\n    Collects output from a subprocess pipe and stores it in a container list.\n\n    Args:\n        pipe: The output pipe of the subprocess, either stdout or stderr.\n        container: A list to store the collected output lines.\n    \"\"\"\n    for line in iter(pipe.readline, \"\"):\n        container.append(line)\n
    ","tags":["Python API","CLI","shell","command line","terminal"]},{"location":"api-ref/prefect/cli/shell/#prefect.cli.shell.output_stream","title":"output_stream","text":"

    Read from a pipe line by line and log using the provided logging function.

    Parameters:

    Name Type Description Default pipe IO

    A file-like object for reading process output.

    required logger_function function

    A logging function from the logger.

    required Source code in prefect/cli/shell.py
    def output_stream(pipe, logger_function):\n    \"\"\"\n    Read from a pipe line by line and log using the provided logging function.\n\n    Args:\n        pipe (IO): A file-like object for reading process output.\n        logger_function (function): A logging function from the logger.\n    \"\"\"\n    with pipe:\n        for line in iter(pipe.readline, \"\"):\n            logger_function(line.strip())\n
    ","tags":["Python API","CLI","shell","command line","terminal"]},{"location":"api-ref/prefect/cli/shell/#prefect.cli.shell.run_shell_process","title":"run_shell_process","text":"

    Executes the specified shell command and logs its output.

    This function is designed to be used within Prefect flows to run shell commands as part of task execution. It handles both the execution of the command and the collection of its output for logging purposes.

    Parameters:

    Name Type Description Default command str

    The shell command to execute.

    required log_output bool

    If True, the output of the command (both stdout and stderr) is logged to Prefect. Defaults to True.

    True stream_stdout bool

    If True, the stdout of the command is streamed to Prefect logs. Defaults to False.

    False log_stderr bool

    If True, the stderr of the command is logged to Prefect logs. Defaults to False.

    False Source code in prefect/cli/shell.py
    @flow\ndef run_shell_process(\n    command: str,\n    log_output: bool = True,\n    stream_stdout: bool = False,\n    log_stderr: bool = False,\n):\n    \"\"\"\n    Asynchronously executes the specified shell command and logs its output.\n\n    This function is designed to be used within Prefect flows to run shell commands as part of task execution.\n    It handles both the execution of the command and the collection of its output for logging purposes.\n\n    Args:\n        command (str): The shell command to execute.\n        log_output (bool, optional): If True, the output of the command (both stdout and stderr) is logged to Prefect.\n                                     Defaults to True\n        stream_stdout (bool, optional): If True, the stdout of the command is streamed to Prefect logs. Defaults to False.\n        log_stderr (bool, optional): If True, the stderr of the command is logged to Prefect logs. Defaults to False.\n\n\n    \"\"\"\n\n    logger = get_run_logger() if log_output else logging.getLogger(\"prefect\")\n\n    # Containers for log batching\n    stdout_container, stderr_container = [], []\n    with subprocess.Popen(\n        command,\n        stdout=subprocess.PIPE,\n        stderr=subprocess.PIPE,\n        shell=True,\n        text=True,\n        bufsize=1,\n        universal_newlines=True,\n    ) as proc:\n        # Create threads for collecting stdout and stderr\n        if stream_stdout:\n            stdout_logger = logger.info\n            output = output_stream\n        else:\n            stdout_logger = stdout_container\n            output = output_collect\n\n        stdout_thread = threading.Thread(\n            target=output, args=(proc.stdout, stdout_logger)\n        )\n\n        stderr_thread = threading.Thread(\n            target=output_collect, args=(proc.stderr, stderr_container)\n        )\n\n        stdout_thread.start()\n        stderr_thread.start()\n\n        stdout_thread.join()\n        stderr_thread.join()\n\n        proc.wait()\n        if stdout_container:\n            logger.info(\"\".join(stdout_container).strip())\n\n        if stderr_container and log_stderr:\n            logger.error(\"\".join(stderr_container).strip())\n            # Suppress traceback\n        if proc.returncode != 0:\n            logger.error(\"\".join(stderr_container).strip())\n            sys.tracebacklimit = 0\n            raise FailedRun(f\"Command failed with exit code {proc.returncode}\")\n
    ","tags":["Python API","CLI","shell","command line","terminal"]},{"location":"api-ref/prefect/cli/shell/#prefect.cli.shell.serve","title":"serve async","text":"

    Creates and serves a Prefect deployment that runs a specified shell command according to a cron schedule or ad hoc.

    This function allows users to integrate shell command execution into Prefect workflows seamlessly. It provides options for scheduled execution via cron expressions, flow and deployment naming for better management, and the application of tags for easier categorization and filtering within the Prefect UI. Additionally, it supports streaming command output to Prefect logs, setting concurrency limits to control flow execution, and optionally running the deployment once for ad-hoc tasks.
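
    A hedged example that serves a shell command on a daily schedule (the command, flow name, and schedule are illustrative; flag spellings assume Typer's default option naming for the arguments shown in the source):

    $ prefect shell serve \"curl -s http://example.com\" --flow-name check-website --cron-schedule \"0 9 * * *\"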

    Parameters:

    Name Type Description Default command str

    The shell command the flow will execute.

    required name str

    The name assigned to the flow. This is required.

    required deployment_tags List[str]

    Optional tags for the deployment to facilitate filtering and organization.

    Option(None, '--tag', help='Tag for the deployment (can be provided multiple times)') log_output bool

    If True, streams the output of the shell command to the Prefect logs. Defaults to True.

    Option(True, help='Stream the output of the command', hidden=True) cron_schedule str

    A cron expression that defines when the flow will run. If not provided, the flow can be triggered manually.

    Option(None, help='Cron schedule for the flow') timezone str

    The timezone for the cron schedule. This is important if the schedule should align with local time.

    Option(None, help='Timezone for the schedule') concurrency_limit int

    The maximum number of instances of the flow that can run simultaneously.

    Option(None, min=1, help='The maximum number of flow runs that can execute at the same time') deployment_name str

    The name of the deployment. This helps distinguish deployments within the Prefect platform.

    Option('CLI Runner Deployment', help='Name of the deployment') run_once bool

    When True, the flow will only run once upon deployment initiation, rather than continuously.

    Option(False, help='Run the agent loop once, instead of forever.') Source code in prefect/cli/shell.py
    @shell_app.command(\"serve\")\nasync def serve(\n    command: str,\n    flow_name: str = typer.Option(..., help=\"Name of the flow\"),\n    deployment_name: str = typer.Option(\n        \"CLI Runner Deployment\", help=\"Name of the deployment\"\n    ),\n    deployment_tags: List[str] = typer.Option(\n        None, \"--tag\", help=\"Tag for the deployment (can be provided multiple times)\"\n    ),\n    log_output: bool = typer.Option(\n        True, help=\"Stream the output of the command\", hidden=True\n    ),\n    stream_stdout: bool = typer.Option(True, help=\"Stream the output of the command\"),\n    cron_schedule: str = typer.Option(None, help=\"Cron schedule for the flow\"),\n    timezone: str = typer.Option(None, help=\"Timezone for the schedule\"),\n    concurrency_limit: int = typer.Option(\n        None,\n        min=1,\n        help=\"The maximum number of flow runs that can execute at the same time\",\n    ),\n    run_once: bool = typer.Option(\n        False, help=\"Run the agent loop once, instead of forever.\"\n    ),\n):\n    \"\"\"\n    Creates and serves a Prefect deployment that runs a specified shell command according to a cron schedule or ad hoc.\n\n    This function allows users to integrate shell command execution into Prefect workflows seamlessly. It provides options for\n    scheduled execution via cron expressions, flow and deployment naming for better management, and the application of tags for\n    easier categorization and filtering within the Prefect UI. Additionally, it supports streaming command output to Prefect logs,\n    setting concurrency limits to control flow execution, and optionally running the deployment once for ad-hoc tasks.\n\n    Args:\n        command (str): The shell command the flow will execute.\n        name (str): The name assigned to the flow. This is required..\n        deployment_tags (List[str], optional): Optional tags for the deployment to facilitate filtering and organization.\n        log_output (bool, optional): If True, streams the output of the shell command to the Prefect logs. Defaults to True.\n        cron_schedule (str, optional): A cron expression that defines when the flow will run. If not provided, the flow can be triggered manually.\n        timezone (str, optional): The timezone for the cron schedule. This is important if the schedule should align with local time.\n        concurrency_limit (int, optional): The maximum number of instances of the flow that can run simultaneously.\n        deployment_name (str, optional): The name of the deployment. 
This helps distinguish deployments within the Prefect platform.\n        run_once (bool, optional): When True, the flow will only run once upon deployment initiation, rather than continuously.\n    \"\"\"\n    schedule = (\n        CronSchedule(cron=cron_schedule, timezone=timezone) if cron_schedule else None\n    )\n    defined_flow = run_shell_process.with_options(name=flow_name)\n\n    runner_deployment = await defined_flow.to_deployment(\n        name=deployment_name,\n        parameters={\n            \"command\": command,\n            \"log_output\": log_output,\n            \"stream_stdout\": stream_stdout,\n        },\n        entrypoint_type=EntrypointType.MODULE_PATH,\n        schedule=schedule,\n        tags=(deployment_tags or []) + [\"shell\"],\n    )\n\n    runner = Runner(name=flow_name)\n    deployment_id = await runner.add_deployment(runner_deployment)\n    help_message = (\n        f\"[green]Your flow {runner_deployment.flow_name!r} is being served and polling\"\n        \" for scheduled runs!\\n[/]\\nTo trigger a run for this flow, use the following\"\n        \" command:\\n[blue]\\n\\t$ prefect deployment run\"\n        f\" '{runner_deployment.flow_name}/{deployment_name}'\\n[/]\"\n    )\n    if PREFECT_UI_URL:\n        help_message += (\n            \"\\nYou can also run your flow via the Prefect UI:\"\n            f\" [blue]{PREFECT_UI_URL.value()}/deployments/deployment/{deployment_id}[/]\\n\"\n        )\n\n    app.console.print(help_message, soft_wrap=True)\n    await runner.start(run_once=run_once)\n
    ","tags":["Python API","CLI","shell","command line","terminal"]},{"location":"api-ref/prefect/cli/shell/#prefect.cli.shell.watch","title":"watch async","text":"

    Executes a shell command and observes it as a Prefect flow.
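
    A hedged example (the command and flow name are illustrative; flag spellings assume Typer's default option naming for the arguments shown in the source):

    $ prefect shell watch \"ls -la\" --flow-name list-files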

    Parameters:

    Name Type Description Default command str

    The shell command to be executed.

    required log_output bool

    If True, logs the command's output. Defaults to True.

    Option(True, help='Log the output of the command to Prefect logs.') flow_run_name str

    An optional name for the flow run.

    Option(None, help='Name of the flow run.') flow_name str

    An optional name for the flow. Useful for identification in the Prefect UI.

    Option('Shell Command', help='Name of the flow.') tag List[str]

    An optional list of tags for categorizing and filtering flows in the Prefect UI.

    None Source code in prefect/cli/shell.py
    @shell_app.command(\"watch\")\nasync def watch(\n    command: str,\n    log_output: bool = typer.Option(\n        True, help=\"Log the output of the command to Prefect logs.\"\n    ),\n    flow_run_name: str = typer.Option(None, help=\"Name of the flow run.\"),\n    flow_name: str = typer.Option(\"Shell Command\", help=\"Name of the flow.\"),\n    stream_stdout: bool = typer.Option(True, help=\"Stream the output of the command.\"),\n    tag: Annotated[\n        Optional[List[str]], typer.Option(help=\"Optional tags for the flow run.\")\n    ] = None,\n):\n    \"\"\"\n    Executes a shell command and observes it as Prefect flow.\n\n    Args:\n        command (str): The shell command to be executed.\n        log_output (bool, optional): If True, logs the command's output. Defaults to True.\n        flow_run_name (str, optional): An optional name for the flow run.\n        flow_name (str, optional): An optional name for the flow. Useful for identification in the Prefect UI.\n        tag (List[str], optional): An optional list of tags for categorizing and filtering flows in the Prefect UI.\n    \"\"\"\n    tag = (tag or []) + [\"shell\"]\n\n    # Call the shell_run_command flow with provided arguments\n    defined_flow = run_shell_process.with_options(\n        name=flow_name, flow_run_name=flow_run_name\n    )\n    with tags(*tag):\n        defined_flow(\n            command=command, log_output=log_output, stream_stdout=stream_stdout\n        )\n
    ","tags":["Python API","CLI","shell","command line","terminal"]},{"location":"api-ref/prefect/cli/variable/","title":"variable","text":"","tags":["Python API","variables","CLI"]},{"location":"api-ref/prefect/cli/variable/#prefect.cli.variable","title":"prefect.cli.variable","text":"","tags":["Python API","variables","CLI"]},{"location":"api-ref/prefect/cli/variable/#prefect.cli.variable.delete","title":"delete async","text":"

    Delete a variable.

    Parameters:

    Name Type Description Default name str

    the name of the variable to delete

    required Source code in prefect/cli/variable.py
    @variable_app.command(\"delete\")\nasync def delete(\n    name: str,\n):\n    \"\"\"\n    Delete a variable.\n\n    Arguments:\n        name: the name of the variable to delete\n    \"\"\"\n\n    async with get_client() as client:\n        try:\n            await client.delete_variable_by_name(\n                name=name,\n            )\n        except ObjectNotFound:\n            exit_with_error(f\"Variable {name!r} not found.\")\n\n        exit_with_success(f\"Deleted variable {name!r}.\")\n
    ","tags":["Python API","variables","CLI"]},{"location":"api-ref/prefect/cli/variable/#prefect.cli.variable.inspect","title":"inspect async","text":"

    View details about a variable.

    Parameters:

    Name Type Description Default name str

    the name of the variable to inspect

    required Source code in prefect/cli/variable.py
    @variable_app.command(\"inspect\")\nasync def inspect(\n    name: str,\n):\n    \"\"\"\n    View details about a variable.\n\n    Arguments:\n        name: the name of the variable to inspect\n    \"\"\"\n\n    async with get_client() as client:\n        variable = await client.read_variable_by_name(\n            name=name,\n        )\n        if not variable:\n            exit_with_error(f\"Variable {name!r} not found.\")\n\n        app.console.print(Pretty(variable))\n
    ","tags":["Python API","variables","CLI"]},{"location":"api-ref/prefect/cli/variable/#prefect.cli.variable.list_variables","title":"list_variables async","text":"

    List variables.
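
    For example, to cap the number of variables returned (--limit is defined in the command's options):

    $ prefect variable ls --limit 50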

    Source code in prefect/cli/variable.py
    @variable_app.command(\"ls\")\nasync def list_variables(\n    limit: int = typer.Option(\n        100,\n        \"--limit\",\n        help=\"The maximum number of variables to return.\",\n    ),\n):\n    \"\"\"\n    List variables.\n    \"\"\"\n    async with get_client() as client:\n        variables = await client.read_variables(\n            limit=limit,\n        )\n\n        table = Table(\n            title=\"Variables\",\n            caption=\"List Variables using `prefect variable ls`\",\n            show_header=True,\n        )\n\n        table.add_column(\"Name\", style=\"blue\", no_wrap=True)\n        # values can be up 5000 characters so truncate early\n        table.add_column(\"Value\", style=\"blue\", no_wrap=True, max_width=50)\n        table.add_column(\"Created\", style=\"blue\", no_wrap=True)\n        table.add_column(\"Updated\", style=\"blue\", no_wrap=True)\n\n        for variable in sorted(variables, key=lambda x: f\"{x.name}\"):\n            table.add_row(\n                variable.name,\n                variable.value,\n                pendulum.instance(variable.created).diff_for_humans(),\n                pendulum.instance(variable.updated).diff_for_humans(),\n            )\n\n        app.console.print(table)\n
    ","tags":["Python API","variables","CLI"]},{"location":"api-ref/prefect/cli/work_pool/","title":"work_pool","text":"","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool","title":"prefect.cli.work_pool","text":"

    Command line interface for working with work pools.

    ","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.clear_concurrency_limit","title":"clear_concurrency_limit async","text":"

    Clear the concurrency limit for a work pool.

    Examples:

    $ prefect work-pool clear-concurrency-limit \"my-pool\"

    Source code in prefect/cli/work_pool.py
    @work_pool_app.command()\nasync def clear_concurrency_limit(\n    name: str = typer.Argument(..., help=\"The name of the work pool to update.\"),\n):\n    \"\"\"\n    Clear the concurrency limit for a work pool.\n\n    \\b\n    Examples:\n        $ prefect work-pool clear-concurrency-limit \"my-pool\"\n\n    \"\"\"\n    async with get_client() as client:\n        try:\n            await client.update_work_pool(\n                work_pool_name=name,\n                work_pool=WorkPoolUpdate(\n                    concurrency_limit=None,\n                ),\n            )\n        except ObjectNotFound as exc:\n            exit_with_error(exc)\n\n        exit_with_success(f\"Cleared concurrency limit for work pool {name!r}\")\n
    ","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.create","title":"create async","text":"

    Create a new work pool.

    Examples:

    Create a Kubernetes work pool in a paused state:

    $ prefect work-pool create \"my-pool\" --type kubernetes --paused

    Create a Docker work pool with a custom base job template:

    $ prefect work-pool create \"my-pool\" --type docker --base-job-template ./base-job-template.json

    Source code in prefect/cli/work_pool.py
    @work_pool_app.command()\nasync def create(\n    name: str = typer.Argument(..., help=\"The name of the work pool.\"),\n    base_job_template: typer.FileText = typer.Option(\n        None,\n        \"--base-job-template\",\n        help=(\n            \"The path to a JSON file containing the base job template to use. If\"\n            \" unspecified, Prefect will use the default base job template for the given\"\n            \" worker type.\"\n        ),\n    ),\n    paused: bool = typer.Option(\n        False,\n        \"--paused\",\n        help=\"Whether or not to create the work pool in a paused state.\",\n    ),\n    type: str = typer.Option(\n        None, \"-t\", \"--type\", help=\"The type of work pool to create.\"\n    ),\n    set_as_default: bool = typer.Option(\n        False,\n        \"--set-as-default\",\n        help=(\n            \"Whether or not to use the created work pool as the local default for\"\n            \" deployment.\"\n        ),\n    ),\n    provision_infrastructure: bool = typer.Option(\n        False,\n        \"--provision-infrastructure\",\n        \"--provision-infra\",\n        help=(\n            \"Whether or not to provision infrastructure for the work pool if supported\"\n            \" for the given work pool type.\"\n        ),\n    ),\n):\n    \"\"\"\n    Create a new work pool.\n\n    \\b\n    Examples:\n        \\b\n        Create a Kubernetes work pool in a paused state:\n            \\b\n            $ prefect work-pool create \"my-pool\" --type kubernetes --paused\n        \\b\n        Create a Docker work pool with a custom base job template:\n            \\b\n            $ prefect work-pool create \"my-pool\" --type docker --base-job-template ./base-job-template.json\n\n    \"\"\"\n    if not name.lower().strip(\"'\\\" \"):\n        exit_with_error(\"Work pool name cannot be empty.\")\n    async with get_client() as client:\n        try:\n            await client.read_work_pool(work_pool_name=name)\n        except ObjectNotFound:\n            pass\n        else:\n            exit_with_error(\n                f\"Work pool named {name!r} already exists. 
Please try creating your\"\n                \" work pool again with a different name.\"\n            )\n\n        if type is None:\n            async with get_collections_metadata_client() as collections_client:\n                if not is_interactive():\n                    exit_with_error(\n                        \"When not using an interactive terminal, you must supply a\"\n                        \" `--type` value.\"\n                    )\n                worker_metadata = await collections_client.read_worker_metadata()\n\n                # Retrieve only push pools if provisioning infrastructure\n                data = [\n                    worker\n                    for collection in worker_metadata.values()\n                    for worker in collection.values()\n                    if provision_infrastructure\n                    and has_provisioner_for_type(worker[\"type\"])\n                    or not provision_infrastructure\n                ]\n                worker = prompt_select_from_table(\n                    app.console,\n                    \"What type of work pool infrastructure would you like to use?\",\n                    columns=[\n                        {\"header\": \"Infrastructure Type\", \"key\": \"display_name\"},\n                        {\"header\": \"Description\", \"key\": \"description\"},\n                    ],\n                    data=data,\n                    table_kwargs={\"show_lines\": True},\n                )\n                type = worker[\"type\"]\n\n        available_work_pool_types = await get_available_work_pool_types()\n        if type not in available_work_pool_types:\n            exit_with_error(\n                f\"Unknown work pool type {type!r}. \"\n                \"Please choose from\"\n                f\" {', '.join(available_work_pool_types)}.\"\n            )\n\n        if base_job_template is None:\n            template_contents = (\n                await get_default_base_job_template_for_infrastructure_type(type)\n            )\n        else:\n            template_contents = json.load(base_job_template)\n\n        if provision_infrastructure:\n            try:\n                provisioner = get_infrastructure_provisioner_for_work_pool_type(type)\n                provisioner.console = app.console\n                template_contents = await provisioner.provision(\n                    work_pool_name=name, base_job_template=template_contents\n                )\n            except ValueError as exc:\n                print(exc)\n                app.console.print(\n                    (\n                        \"Automatic infrastructure provisioning is not supported for\"\n                        f\" {type!r} work pools.\"\n                    ),\n                    style=\"yellow\",\n                )\n            except RuntimeError as exc:\n                exit_with_error(f\"Failed to provision infrastructure: {exc}\")\n\n        try:\n            wp = WorkPoolCreate(\n                name=name,\n                type=type,\n                base_job_template=template_contents,\n                is_paused=paused,\n            )\n            work_pool = await client.create_work_pool(work_pool=wp)\n            app.console.print(f\"Created work pool {work_pool.name!r}!\\n\", style=\"green\")\n            if (\n                not work_pool.is_paused\n                and not work_pool.is_managed_pool\n                and not work_pool.is_push_pool\n            ):\n                app.console.print(\"To start a worker for this work 
pool, run:\\n\")\n                app.console.print(\n                    f\"\\t[blue]prefect worker start --pool {work_pool.name}[/]\\n\"\n                )\n            if set_as_default:\n                set_work_pool_as_default(work_pool.name)\n            exit_with_success(\"\")\n        except ObjectAlreadyExists:\n            exit_with_error(\n                f\"Work pool named {name!r} already exists. Please try creating your\"\n                \" work pool again with a different name.\"\n            )\n
    ","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.delete","title":"delete async","text":"

    Delete a work pool.

    Examples:

    $ prefect work-pool delete \"my-pool\"

    Source code in prefect/cli/work_pool.py
    @work_pool_app.command()\nasync def delete(\n    name: str = typer.Argument(..., help=\"The name of the work pool to delete.\"),\n):\n    \"\"\"\n    Delete a work pool.\n\n    \\b\n    Examples:\n        $ prefect work-pool delete \"my-pool\"\n\n    \"\"\"\n    async with get_client() as client:\n        try:\n            await client.delete_work_pool(work_pool_name=name)\n        except ObjectNotFound as exc:\n            exit_with_error(exc)\n\n        exit_with_success(f\"Deleted work pool {name!r}\")\n
    ","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.get_default_base_job_template","title":"get_default_base_job_template async","text":"

    Get the default base job template for a given work pool type.

    Examples:

    $ prefect work-pool get-default-base-job-template --type kubernetes
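
    To write the template to a file instead of stdout (the filename is illustrative; -f/--file is defined in the command's options):

    $ prefect work-pool get-default-base-job-template --type kubernetes --file base-job-template.json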

    Source code in prefect/cli/work_pool.py
    @work_pool_app.command()\nasync def get_default_base_job_template(\n    type: str = typer.Option(\n        None,\n        \"-t\",\n        \"--type\",\n        help=\"The type of work pool for which to get the default base job template.\",\n    ),\n    file: str = typer.Option(\n        None, \"-f\", \"--file\", help=\"If set, write the output to a file.\"\n    ),\n):\n    \"\"\"\n    Get the default base job template for a given work pool type.\n\n    \\b\n    Examples:\n        $ prefect work-pool get-default-base-job-template --type kubernetes\n    \"\"\"\n    base_job_template = await get_default_base_job_template_for_infrastructure_type(\n        type\n    )\n    if base_job_template is None:\n        exit_with_error(\n            f\"Unknown work pool type {type!r}. \"\n            \"Please choose from\"\n            f\" {', '.join(await get_available_work_pool_types())}.\"\n        )\n\n    if file is None:\n        print(json.dumps(base_job_template, indent=2))\n    else:\n        with open(file, mode=\"w\") as f:\n            json.dump(base_job_template, fp=f, indent=2)\n
    ","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.has_provisioner_for_type","title":"has_provisioner_for_type","text":"

    Check if there is a provisioner for the given work pool type.

    Parameters:

    Name Type Description Default work_pool_type str

    The type of the work pool.

    required

    Returns:

    Name Type Description bool bool

    True if a provisioner exists for the given type, False otherwise.

    Source code in prefect/cli/work_pool.py
    def has_provisioner_for_type(work_pool_type: str) -> bool:\n    \"\"\"\n    Check if there is a provisioner for the given work pool type.\n\n    Args:\n        work_pool_type (str): The type of the work pool.\n\n    Returns:\n        bool: True if a provisioner exists for the given type, False otherwise.\n    \"\"\"\n    return work_pool_type in _provisioners\n
    ","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.inspect","title":"inspect async","text":"

    Inspect a work pool.

    Examples: $ prefect work-pool inspect \"my-pool\"

    Source code in prefect/cli/work_pool.py
    @work_pool_app.command()\nasync def inspect(\n    name: str = typer.Argument(..., help=\"The name of the work pool to inspect.\"),\n):\n    \"\"\"\n    Inspect a work pool.\n\n    \\b\n    Examples:\n        $ prefect work-pool inspect \"my-pool\"\n\n    \"\"\"\n    async with get_client() as client:\n        try:\n            pool = await client.read_work_pool(work_pool_name=name)\n            app.console.print(Pretty(pool))\n        except ObjectNotFound:\n            exit_with_error(f\"Work pool {name!r} not found!\")\n
    ","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.ls","title":"ls async","text":"

    List work pools.

    Examples: $ prefect work-pool ls

    Source code in prefect/cli/work_pool.py
    @work_pool_app.command()\nasync def ls(\n    verbose: bool = typer.Option(\n        False,\n        \"--verbose\",\n        \"-v\",\n        help=\"Show additional information about work pools.\",\n    ),\n):\n    \"\"\"\n    List work pools.\n\n    \\b\n    Examples:\n        $ prefect work-pool ls\n    \"\"\"\n    table = Table(\n        title=\"Work Pools\", caption=\"(**) denotes a paused pool\", caption_style=\"red\"\n    )\n    table.add_column(\"Name\", style=\"green\", no_wrap=True)\n    table.add_column(\"Type\", style=\"magenta\", no_wrap=True)\n    table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n    if verbose:\n        table.add_column(\"Base Job Template\", style=\"magenta\", no_wrap=True)\n\n    async with get_client() as client:\n        pools = await client.read_work_pools()\n\n    def sort_by_created_key(q):\n        return pendulum.now(\"utc\") - q.created\n\n    for pool in sorted(pools, key=sort_by_created_key):\n        row = [\n            f\"{pool.name} [red](**)\" if pool.is_paused else pool.name,\n            str(pool.type),\n            str(pool.id),\n            (\n                f\"[red]{pool.concurrency_limit}\"\n                if pool.concurrency_limit is not None\n                else \"[blue]None\"\n            ),\n        ]\n        if verbose:\n            row.append(str(pool.base_job_template))\n        table.add_row(*row)\n\n    app.console.print(table)\n
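
    The table above is built from the client call read_work_pools; a minimal programmatic sketch that prints the same fields:

    import asyncio
    from prefect import get_client

    async def list_pools() -> None:
        async with get_client() as client:
            pools = await client.read_work_pools()
        for pool in pools:
            # Same fields the CLI table renders
            print(pool.name, pool.type, pool.id, pool.is_paused, pool.concurrency_limit)

    asyncio.run(list_pools())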
    ","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.pause","title":"pause async","text":"

    Pause a work pool.

    Examples: $ prefect work-pool pause \"my-pool\"

    Source code in prefect/cli/work_pool.py
    @work_pool_app.command()\nasync def pause(\n    name: str = typer.Argument(..., help=\"The name of the work pool to pause.\"),\n):\n    \"\"\"\n    Pause a work pool.\n\n    \\b\n    Examples:\n        $ prefect work-pool pause \"my-pool\"\n\n    \"\"\"\n    async with get_client() as client:\n        try:\n            await client.update_work_pool(\n                work_pool_name=name,\n                work_pool=WorkPoolUpdate(\n                    is_paused=True,\n                ),\n            )\n        except ObjectNotFound as exc:\n            exit_with_error(exc)\n\n        exit_with_success(f\"Paused work pool {name!r}\")\n
    ","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.preview","title":"preview async","text":"

    Preview the work pool's scheduled work for all queues.

    Examples: $ prefect work-pool preview \"my-pool\" --hours 24

    Source code in prefect/cli/work_pool.py
    @work_pool_app.command()\nasync def preview(\n    name: str = typer.Argument(None, help=\"The name or ID of the work pool to preview\"),\n    hours: int = typer.Option(\n        None,\n        \"-h\",\n        \"--hours\",\n        help=\"The number of hours to look ahead; defaults to 1 hour\",\n    ),\n):\n    \"\"\"\n    Preview the work pool's scheduled work for all queues.\n\n    \\b\n    Examples:\n        $ prefect work-pool preview \"my-pool\" --hours 24\n\n    \"\"\"\n    if hours is None:\n        hours = 1\n\n    async with get_client() as client:\n        try:\n            responses = await client.get_scheduled_flow_runs_for_work_pool(\n                work_pool_name=name,\n            )\n        except ObjectNotFound as exc:\n            exit_with_error(exc)\n\n    runs = [response.flow_run for response in responses]\n    table = Table(caption=\"(**) denotes a late run\", caption_style=\"red\")\n\n    table.add_column(\n        \"Scheduled Start Time\", justify=\"left\", style=\"yellow\", no_wrap=True\n    )\n    table.add_column(\"Run ID\", justify=\"left\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Name\", style=\"green\", no_wrap=True)\n    table.add_column(\"Deployment ID\", style=\"blue\", no_wrap=True)\n\n    pendulum.now(\"utc\").add(hours=hours or 1)\n\n    now = pendulum.now(\"utc\")\n\n    def sort_by_created_key(r):\n        return now - r.created\n\n    for run in sorted(runs, key=sort_by_created_key):\n        table.add_row(\n            (\n                f\"{run.expected_start_time} [red](**)\"\n                if run.expected_start_time < now\n                else f\"{run.expected_start_time}\"\n            ),\n            str(run.id),\n            run.name,\n            str(run.deployment_id),\n        )\n\n    if runs:\n        app.console.print(table)\n    else:\n        app.console.print(\n            (\n                \"No runs found - try increasing how far into the future you preview\"\n                \" with the --hours flag\"\n            ),\n            style=\"yellow\",\n        )\n
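
    A minimal sketch of the underlying client call used by the command above (my-pool is a placeholder):

    import asyncio
    from prefect import get_client

    async def preview_pool(name: str) -> None:
        async with get_client() as client:
            responses = await client.get_scheduled_flow_runs_for_work_pool(work_pool_name=name)
        for response in responses:
            run = response.flow_run
            # Same columns the CLI table renders
            print(run.expected_start_time, run.id, run.name, run.deployment_id)

    asyncio.run(preview_pool('my-pool'))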
    ","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.provision_infrastructure","title":"provision_infrastructure async","text":"

    Provision infrastructure for a work pool.

    Examples: $ prefect work-pool provision-infrastructure \"my-pool\"

    $ prefect work-pool provision-infra \"my-pool\"
    Source code in prefect/cli/work_pool.py
    @work_pool_app.command(aliases=[\"provision-infra\"])\nasync def provision_infrastructure(\n    name: str = typer.Argument(\n        ..., help=\"The name of the work pool to provision infrastructure for.\"\n    ),\n):\n    \"\"\"\n    Provision infrastructure for a work pool.\n\n    \\b\n    Examples:\n        $ prefect work-pool provision-infrastructure \"my-pool\"\n\n        $ prefect work-pool provision-infra \"my-pool\"\n\n    \"\"\"\n    async with get_client() as client:\n        try:\n            work_pool = await client.read_work_pool(work_pool_name=name)\n            if not work_pool.is_push_pool:\n                exit_with_error(\n                    f\"Work pool {name!r} is not a push pool type. \"\n                    \"Please try provisioning infrastructure for a push pool.\"\n                )\n        except ObjectNotFound:\n            exit_with_error(f\"Work pool {name!r} does not exist.\")\n        except Exception as exc:\n            exit_with_error(f\"Failed to read work pool {name!r}: {exc}\")\n\n        try:\n            provisioner = get_infrastructure_provisioner_for_work_pool_type(\n                work_pool.type\n            )\n            provisioner.console = app.console\n            new_base_job_template = await provisioner.provision(\n                work_pool_name=name, base_job_template=work_pool.base_job_template\n            )\n\n            await client.update_work_pool(\n                work_pool_name=name,\n                work_pool=WorkPoolUpdate(\n                    base_job_template=new_base_job_template,\n                ),\n            )\n\n        except ValueError as exc:\n            app.console.print(f\"Error: {exc}\")\n            app.console.print(\n                (\n                    \"Automatic infrastructure provisioning is not supported for\"\n                    f\" {work_pool.type!r} work pools.\"\n                ),\n                style=\"yellow\",\n            )\n        except RuntimeError as exc:\n            exit_with_error(\n                f\"Failed to provision infrastructure for '{name}' work pool: {exc}\"\n            )\n
    ","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.resume","title":"resume async","text":"

    Resume a work pool.

    Examples: $ prefect work-pool resume \"my-pool\"

    Source code in prefect/cli/work_pool.py
    @work_pool_app.command()\nasync def resume(\n    name: str = typer.Argument(..., help=\"The name of the work pool to resume.\"),\n):\n    \"\"\"\n    Resume a work pool.\n\n    \\b\n    Examples:\n        $ prefect work-pool resume \"my-pool\"\n\n    \"\"\"\n    async with get_client() as client:\n        try:\n            await client.update_work_pool(\n                work_pool_name=name,\n                work_pool=WorkPoolUpdate(\n                    is_paused=False,\n                ),\n            )\n        except ObjectNotFound as exc:\n            exit_with_error(exc)\n\n        exit_with_success(f\"Resumed work pool {name!r}\")\n
    ","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.set_concurrency_limit","title":"set_concurrency_limit async","text":"

    Set the concurrency limit for a work pool.

    Examples: $ prefect work-pool set-concurrency-limit \"my-pool\" 10

    Source code in prefect/cli/work_pool.py
    @work_pool_app.command()\nasync def set_concurrency_limit(\n    name: str = typer.Argument(..., help=\"The name of the work pool to update.\"),\n    concurrency_limit: int = typer.Argument(\n        ..., help=\"The new concurrency limit for the work pool.\"\n    ),\n):\n    \"\"\"\n    Set the concurrency limit for a work pool.\n\n    \\b\n    Examples:\n        $ prefect work-pool set-concurrency-limit \"my-pool\" 10\n\n    \"\"\"\n    async with get_client() as client:\n        try:\n            await client.update_work_pool(\n                work_pool_name=name,\n                work_pool=WorkPoolUpdate(\n                    concurrency_limit=concurrency_limit,\n                ),\n            )\n        except ObjectNotFound as exc:\n            exit_with_error(exc)\n\n        exit_with_success(\n            f\"Set concurrency limit for work pool {name!r} to {concurrency_limit}\"\n        )\n
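
    The command wraps a single client update; a minimal sketch, assuming the WorkPoolUpdate import path noted below (my-pool and the limit of 10 are placeholders):

    import asyncio
    from prefect import get_client
    from prefect.client.schemas.actions import WorkPoolUpdate  # assumed import path

    async def set_limit(name: str, limit: int) -> None:
        async with get_client() as client:
            await client.update_work_pool(
                work_pool_name=name,
                work_pool=WorkPoolUpdate(concurrency_limit=limit),
            )

    asyncio.run(set_limit('my-pool', 10))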
    ","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_pool/#prefect.cli.work_pool.update","title":"update async","text":"

    Update a work pool.

    Examples: $ prefect work-pool update \"my-pool\"

    Source code in prefect/cli/work_pool.py
    @work_pool_app.command()\nasync def update(\n    name: str = typer.Argument(..., help=\"The name of the work pool to update.\"),\n    base_job_template: typer.FileText = typer.Option(\n        None,\n        \"--base-job-template\",\n        help=(\n            \"The path to a JSON file containing the base job template to use. If\"\n            \" unspecified, Prefect will use the default base job template for the given\"\n            \" worker type. If None, the base job template will not be modified.\"\n        ),\n    ),\n    concurrency_limit: int = typer.Option(\n        None,\n        \"--concurrency-limit\",\n        help=(\n            \"The concurrency limit for the work pool. If None, the concurrency limit\"\n            \" will not be modified.\"\n        ),\n    ),\n    description: str = typer.Option(\n        None,\n        \"--description\",\n        help=(\n            \"The description for the work pool. If None, the description will not be\"\n            \" modified.\"\n        ),\n    ),\n):\n    \"\"\"\n    Update a work pool.\n\n    \\b\n    Examples:\n        $ prefect work-pool update \"my-pool\"\n\n    \"\"\"\n    wp = WorkPoolUpdate()\n    if base_job_template:\n        wp.base_job_template = json.load(base_job_template)\n    if concurrency_limit:\n        wp.concurrency_limit = concurrency_limit\n    if description:\n        wp.description = description\n\n    async with get_client() as client:\n        try:\n            await client.update_work_pool(\n                work_pool_name=name,\n                work_pool=wp,\n            )\n        except ObjectNotFound:\n            exit_with_error(\"Work pool named {name!r} does not exist.\")\n\n        exit_with_success(f\"Updated work pool {name!r}\")\n
    ","tags":["Python API","work pools","CLI"]},{"location":"api-ref/prefect/cli/work_queue/","title":"work_queue","text":"","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue","title":"prefect.cli.work_queue","text":"

    Command line interface for working with work queues.

    ","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.clear_concurrency_limit","title":"clear_concurrency_limit async","text":"

    Clear any concurrency limits from a work queue.

    Source code in prefect/cli/work_queue.py
    @work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def clear_concurrency_limit(\n    name: str = typer.Argument(..., help=\"The name or ID of the work queue to clear\"),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n    \"\"\"\n    Clear any concurrency limits from a work queue.\n    \"\"\"\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n    async with get_client() as client:\n        try:\n            await client.update_work_queue(\n                id=queue_id,\n                concurrency_limit=None,\n            )\n        except ObjectNotFound:\n            if pool:\n                error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n            else:\n                error_message = f\"No work queue found: {name!r}\"\n            exit_with_error(error_message)\n\n    if pool:\n        success_message = (\n            f\"Concurrency limits removed on work queue {name!r} in work pool {pool!r}\"\n        )\n    else:\n        success_message = f\"Concurrency limits removed on work queue {name!r}\"\n    exit_with_success(success_message)\n
    ","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.create","title":"create async","text":"

    Create a work queue.

    Source code in prefect/cli/work_queue.py
    @work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def create(\n    name: str = typer.Argument(..., help=\"The unique name to assign this work queue\"),\n    limit: int = typer.Option(\n        None, \"-l\", \"--limit\", help=\"The concurrency limit to set on the queue.\"\n    ),\n    tags: List[str] = typer.Option(\n        None,\n        \"-t\",\n        \"--tag\",\n        help=(\n            \"DEPRECATED: One or more optional tags. This option will be removed on\"\n            \" 2023-02-23.\"\n        ),\n    ),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool to create the work queue in.\",\n    ),\n    priority: Optional[int] = typer.Option(\n        None,\n        \"-q\",\n        \"--priority\",\n        help=\"The associated priority for the created work queue\",\n    ),\n):\n    \"\"\"\n    Create a work queue.\n    \"\"\"\n    if tags:\n        app.console.print(\n            (\n                \"Supplying `tags` for work queues is deprecated. This work \"\n                \"queue will use legacy tag-matching behavior. \"\n                \"This option will be removed on 2023-02-23.\"\n            ),\n            style=\"red\",\n        )\n\n    if pool and tags:\n        exit_with_error(\n            \"Work queues created with tags cannot specify work pools or set priorities.\"\n        )\n\n    async with get_client() as client:\n        try:\n            result = await client.create_work_queue(\n                name=name, tags=tags or None, work_pool_name=pool, priority=priority\n            )\n            if limit is not None:\n                await client.update_work_queue(\n                    id=result.id,\n                    concurrency_limit=limit,\n                )\n        except ObjectAlreadyExists:\n            exit_with_error(f\"Work queue with name: {name!r} already exists.\")\n        except ObjectNotFound:\n            exit_with_error(f\"Work pool with name: {pool!r} not found.\")\n\n    if tags:\n        tags_message = f\"tags - {', '.join(sorted(tags))}\\n\" or \"\"\n        output_msg = dedent(\n            f\"\"\"\n            Created work queue with properties:\n                name - {name!r}\n                id - {result.id}\n                concurrency limit - {limit}\n                {tags_message}\n            Start an agent to pick up flow runs from the work queue:\n                prefect agent start -q '{result.name}'\n\n            Inspect the work queue:\n                prefect work-queue inspect '{result.name}'\n            \"\"\"\n        )\n    else:\n        if not pool:\n            # specify the default work pool name after work queue creation to allow the server\n            # to handle a bunch of logic associated with agents without work pools\n            pool = DEFAULT_AGENT_WORK_POOL_NAME\n        output_msg = dedent(\n            f\"\"\"\n            Created work queue with properties:\n                name - {name!r}\n                work pool - {pool!r}\n                id - {result.id}\n                concurrency limit - {limit}\n            Start an agent to pick up flow runs from the work queue:\n                prefect agent start -q '{result.name} -p {pool}'\n\n            Inspect the work queue:\n                prefect work-queue inspect '{result.name}'\n            \"\"\"\n        )\n    exit_with_success(output_msg)\n
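
    A minimal programmatic sketch of the same flow, creating a queue in a work pool and then applying a concurrency limit (names and the limit are placeholders):

    import asyncio
    from prefect import get_client

    async def create_queue(name: str, pool: str, limit: int) -> None:
        async with get_client() as client:
            # Create the queue, then set its limit with a follow-up update, as the CLI does
            queue = await client.create_work_queue(name=name, work_pool_name=pool)
            await client.update_work_queue(id=queue.id, concurrency_limit=limit)
            print(queue.id)

    asyncio.run(create_queue('my-queue', 'my-pool', 5))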
    ","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.delete","title":"delete async","text":"

    Delete a work queue by ID.

    Source code in prefect/cli/work_queue.py
    @work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def delete(\n    name: str = typer.Argument(..., help=\"The name or ID of the work queue to delete\"),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool containing the work queue to delete.\",\n    ),\n):\n    \"\"\"\n    Delete a work queue by ID.\n    \"\"\"\n\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n    async with get_client() as client:\n        try:\n            await client.delete_work_queue_by_id(id=queue_id)\n        except ObjectNotFound:\n            if pool:\n                error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n            else:\n                error_message = f\"No work queue found: {name!r}\"\n            exit_with_error(error_message)\n    if pool:\n        success_message = (\n            f\"Successfully deleted work queue {name!r} in work pool {pool!r}\"\n        )\n    else:\n        success_message = f\"Successfully deleted work queue {name!r}\"\n    exit_with_success(success_message)\n
    ","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.inspect","title":"inspect async","text":"

    Inspect a work queue by ID.

    Source code in prefect/cli/work_queue.py
    @work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def inspect(\n    name: str = typer.Argument(\n        None, help=\"The name or ID of the work queue to inspect\"\n    ),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n    \"\"\"\n    Inspect a work queue by ID.\n    \"\"\"\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n    async with get_client() as client:\n        try:\n            result = await client.read_work_queue(id=queue_id)\n            app.console.print(Pretty(result))\n        except ObjectNotFound:\n            if pool:\n                error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n            else:\n                error_message = f\"No work queue found: {name!r}\"\n            exit_with_error(error_message)\n\n        try:\n            status = await client.read_work_queue_status(id=queue_id)\n            app.console.print(Pretty(status))\n        except ObjectNotFound:\n            pass\n
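
    A minimal sketch that inspects every queue in a pool with the same client calls used above (my-pool is a placeholder):

    import asyncio
    from prefect import get_client

    async def inspect_queues(pool: str) -> None:
        async with get_client() as client:
            queues = await client.read_work_queues(work_pool_name=pool)
            for queue in queues:
                # Status is reported separately from the queue record itself
                status = await client.read_work_queue_status(id=queue.id)
                print(queue.name, queue.priority, queue.concurrency_limit, status)

    asyncio.run(inspect_queues('my-pool'))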
    ","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.ls","title":"ls async","text":"

    View all work queues.

    Source code in prefect/cli/work_queue.py
    @work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def ls(\n    verbose: bool = typer.Option(\n        False, \"--verbose\", \"-v\", help=\"Display more information.\"\n    ),\n    work_queue_prefix: str = typer.Option(\n        None,\n        \"--match\",\n        \"-m\",\n        help=(\n            \"Will match work queues with names that start with the specified prefix\"\n            \" string\"\n        ),\n    ),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool containing the work queues to list.\",\n    ),\n):\n    \"\"\"\n    View all work queues.\n    \"\"\"\n    if not pool and not experiment_enabled(\"work_pools\"):\n        table = Table(\n            title=\"Work Queues\",\n            caption=\"(**) denotes a paused queue\",\n            caption_style=\"red\",\n        )\n        table.add_column(\"Name\", style=\"green\", no_wrap=True)\n        table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n        table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n        if verbose:\n            table.add_column(\"Filter (Deprecated)\", style=\"magenta\", no_wrap=True)\n\n        async with get_client() as client:\n            if work_queue_prefix is not None:\n                queues = await client.match_work_queues([work_queue_prefix])\n            else:\n                queues = await client.read_work_queues()\n\n            def sort_by_created_key(q):\n                return pendulum.now(\"utc\") - q.created\n\n            for queue in sorted(queues, key=sort_by_created_key):\n                row = [\n                    f\"{queue.name} [red](**)\" if queue.is_paused else queue.name,\n                    str(queue.id),\n                    (\n                        f\"[red]{queue.concurrency_limit}\"\n                        if queue.concurrency_limit is not None\n                        else \"[blue]None\"\n                    ),\n                ]\n                if verbose and queue.filter is not None:\n                    row.append(queue.filter.json())\n                table.add_row(*row)\n    elif not pool:\n        table = Table(\n            title=\"Work Queues\",\n            caption=\"(**) denotes a paused queue\",\n            caption_style=\"red\",\n        )\n        table.add_column(\"Name\", style=\"green\", no_wrap=True)\n        table.add_column(\"Pool\", style=\"magenta\", no_wrap=True)\n        table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n        table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n        if verbose:\n            table.add_column(\"Filter (Deprecated)\", style=\"magenta\", no_wrap=True)\n\n        async with get_client() as client:\n            if work_queue_prefix is not None:\n                queues = await client.match_work_queues([work_queue_prefix])\n            else:\n                queues = await client.read_work_queues()\n\n            pool_ids = [q.work_pool_id for q in queues]\n            wp_filter = WorkPoolFilter(id=WorkPoolFilterId(any_=pool_ids))\n            pools = await client.read_work_pools(work_pool_filter=wp_filter)\n            pool_id_name_map = {p.id: p.name for p in pools}\n\n            def sort_by_created_key(q):\n                return pendulum.now(\"utc\") - q.created\n\n            for queue in sorted(queues, key=sort_by_created_key):\n                row = [\n            
        f\"{queue.name} [red](**)\" if queue.is_paused else queue.name,\n                    pool_id_name_map[queue.work_pool_id],\n                    str(queue.id),\n                    (\n                        f\"[red]{queue.concurrency_limit}\"\n                        if queue.concurrency_limit is not None\n                        else \"[blue]None\"\n                    ),\n                ]\n                if verbose and queue.filter is not None:\n                    row.append(queue.filter.json())\n                table.add_row(*row)\n\n    else:\n        table = Table(\n            title=f\"Work Queues in Work Pool {pool!r}\",\n            caption=\"(**) denotes a paused queue\",\n            caption_style=\"red\",\n        )\n        table.add_column(\"Name\", style=\"green\", no_wrap=True)\n        table.add_column(\"Priority\", style=\"magenta\", no_wrap=True)\n        table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n        if verbose:\n            table.add_column(\"Description\", style=\"cyan\", no_wrap=False)\n\n        async with get_client() as client:\n            try:\n                queues = await client.read_work_queues(work_pool_name=pool)\n            except ObjectNotFound:\n                exit_with_error(f\"No work pool found: {pool!r}\")\n\n            def sort_by_created_key(q):\n                return pendulum.now(\"utc\") - q.created\n\n            for queue in sorted(queues, key=sort_by_created_key):\n                row = [\n                    f\"{queue.name} [red](**)\" if queue.is_paused else queue.name,\n                    f\"{queue.priority}\",\n                    (\n                        f\"[red]{queue.concurrency_limit}\"\n                        if queue.concurrency_limit is not None\n                        else \"[blue]None\"\n                    ),\n                ]\n                if verbose:\n                    row.append(queue.description)\n                table.add_row(*row)\n\n    app.console.print(table)\n
    ","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.pause","title":"pause async","text":"

    Pause a work queue.

    Source code in prefect/cli/work_queue.py
    @work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def pause(\n    name: str = typer.Argument(..., help=\"The name or ID of the work queue to pause\"),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n    \"\"\"\n    Pause a work queue.\n    \"\"\"\n\n    if not pool and not typer.confirm(\n        f\"You have not specified a work pool. Are you sure you want to pause {name} work queue in `{DEFAULT_AGENT_WORK_POOL_NAME}`?\"\n    ):\n        exit_with_error(\"Work queue pause aborted!\")\n\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n\n    async with get_client() as client:\n        try:\n            await client.update_work_queue(\n                id=queue_id,\n                is_paused=True,\n            )\n        except ObjectNotFound:\n            if pool:\n                error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n            else:\n                error_message = f\"No work queue found: {name!r}\"\n            exit_with_error(error_message)\n\n    if pool:\n        success_message = f\"Work queue {name!r} in work pool {pool!r} paused\"\n    else:\n        success_message = f\"Work queue {name!r} paused\"\n    exit_with_success(success_message)\n
    ","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.preview","title":"preview async","text":"

    Preview a work queue.

    Source code in prefect/cli/work_queue.py
    @work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def preview(\n    name: str = typer.Argument(\n        None, help=\"The name or ID of the work queue to preview\"\n    ),\n    hours: int = typer.Option(\n        None,\n        \"-h\",\n        \"--hours\",\n        help=\"The number of hours to look ahead; defaults to 1 hour\",\n    ),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n    \"\"\"\n    Preview a work queue.\n    \"\"\"\n    if pool:\n        title = f\"Preview of Work Queue {name!r} in Work Pool {pool!r}\"\n    else:\n        title = f\"Preview of Work Queue {name!r}\"\n\n    table = Table(title=title, caption=\"(**) denotes a late run\", caption_style=\"red\")\n    table.add_column(\n        \"Scheduled Start Time\", justify=\"left\", style=\"yellow\", no_wrap=True\n    )\n    table.add_column(\"Run ID\", justify=\"left\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Name\", style=\"green\", no_wrap=True)\n    table.add_column(\"Deployment ID\", style=\"blue\", no_wrap=True)\n\n    window = pendulum.now(\"utc\").add(hours=hours or 1)\n\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name, work_pool_name=pool\n    )\n    async with get_client() as client:\n        if pool:\n            try:\n                responses = await client.get_scheduled_flow_runs_for_work_pool(\n                    work_pool_name=pool,\n                    work_queue_names=[name],\n                )\n                runs = [response.flow_run for response in responses]\n            except ObjectNotFound:\n                exit_with_error(f\"No work queue found: {name!r} in work pool {pool!r}\")\n        else:\n            try:\n                runs = await client.get_runs_in_work_queue(\n                    queue_id,\n                    limit=10,\n                    scheduled_before=window,\n                )\n            except ObjectNotFound:\n                exit_with_error(f\"No work queue found: {name!r}\")\n    now = pendulum.now(\"utc\")\n\n    def sort_by_created_key(r):\n        return now - r.created\n\n    for run in sorted(runs, key=sort_by_created_key):\n        table.add_row(\n            (\n                f\"{run.expected_start_time} [red](**)\"\n                if run.expected_start_time < now\n                else f\"{run.expected_start_time}\"\n            ),\n            str(run.id),\n            run.name,\n            str(run.deployment_id),\n        )\n\n    if runs:\n        app.console.print(table)\n    else:\n        app.console.print(\n            (\n                \"No runs found - try increasing how far into the future you preview\"\n                \" with the --hours flag\"\n            ),\n            style=\"yellow\",\n        )\n
    ","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.read_wq_runs","title":"read_wq_runs async","text":"

    Get runs in a work queue. Note that this will trigger an artificial poll of the work queue.

    Source code in prefect/cli/work_queue.py
    @work_app.command(\"read-runs\")\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def read_wq_runs(\n    name: str = typer.Argument(..., help=\"The name or ID of the work queue to poll\"),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool containing the work queue to poll.\",\n    ),\n):\n    \"\"\"\n    Get runs in a work queue. Note that this will trigger an artificial poll of\n    the work queue.\n    \"\"\"\n\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n    async with get_client() as client:\n        try:\n            runs = await client.get_runs_in_work_queue(id=queue_id)\n        except ObjectNotFound:\n            if pool:\n                error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n            else:\n                error_message = f\"No work queue found: {name!r}\"\n            exit_with_error(error_message)\n    success_message = (\n        f\"Read {len(runs)} runs for work queue {name!r} in work pool {pool}: {runs}\"\n    )\n    exit_with_success(success_message)\n
    ","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.resume","title":"resume async","text":"

    Resume a paused work queue.

    Source code in prefect/cli/work_queue.py
    @work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def resume(\n    name: str = typer.Argument(..., help=\"The name or ID of the work queue to resume\"),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n    \"\"\"\n    Resume a paused work queue.\n    \"\"\"\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n\n    async with get_client() as client:\n        try:\n            await client.update_work_queue(\n                id=queue_id,\n                is_paused=False,\n            )\n        except ObjectNotFound:\n            if pool:\n                error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n            else:\n                error_message = f\"No work queue found: {name!r}\"\n            exit_with_error(error_message)\n\n    if pool:\n        success_message = f\"Work queue {name!r} in work pool {pool!r} resumed\"\n    else:\n        success_message = f\"Work queue {name!r} resumed\"\n    exit_with_success(success_message)\n
    ","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.set_concurrency_limit","title":"set_concurrency_limit async","text":"

    Set a concurrency limit on a work queue.

    Source code in prefect/cli/work_queue.py
    @work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def set_concurrency_limit(\n    name: str = typer.Argument(..., help=\"The name or ID of the work queue\"),\n    limit: int = typer.Argument(..., help=\"The concurrency limit to set on the queue.\"),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n    \"\"\"\n    Set a concurrency limit on a work queue.\n    \"\"\"\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n\n    async with get_client() as client:\n        try:\n            await client.update_work_queue(\n                id=queue_id,\n                concurrency_limit=limit,\n            )\n        except ObjectNotFound:\n            if pool:\n                error_message = (\n                    f\"No work queue named {name!r} found in work pool {pool!r}.\"\n                )\n            else:\n                error_message = f\"No work queue named {name!r} found.\"\n            exit_with_error(error_message)\n\n    if pool:\n        success_message = (\n            f\"Concurrency limit of {limit} set on work queue {name!r} in work pool\"\n            f\" {pool!r}\"\n        )\n    else:\n        success_message = f\"Concurrency limit of {limit} set on work queue {name!r}\"\n    exit_with_success(success_message)\n
    ","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/worker/","title":"worker","text":"","tags":["Python API","workers","CLI"]},{"location":"api-ref/prefect/cli/worker/#prefect.cli.worker","title":"prefect.cli.worker","text":"","tags":["Python API","workers","CLI"]},{"location":"api-ref/prefect/cli/worker/#prefect.cli.worker.start","title":"start async","text":"

    Start a worker process to poll a work pool for flow runs.

    Source code in prefect/cli/worker.py
    @worker_app.command()\nasync def start(\n    worker_name: str = typer.Option(\n        None,\n        \"-n\",\n        \"--name\",\n        help=(\n            \"The name to give to the started worker. If not provided, a unique name\"\n            \" will be generated.\"\n        ),\n    ),\n    work_pool_name: str = typer.Option(\n        ...,\n        \"-p\",\n        \"--pool\",\n        help=\"The work pool the started worker should poll.\",\n        prompt=True,\n    ),\n    work_queues: List[str] = typer.Option(\n        None,\n        \"-q\",\n        \"--work-queue\",\n        help=(\n            \"One or more work queue names for the worker to pull from. If not provided,\"\n            \" the worker will pull from all work queues in the work pool.\"\n        ),\n    ),\n    worker_type: Optional[str] = typer.Option(\n        None,\n        \"-t\",\n        \"--type\",\n        help=(\n            \"The type of worker to start. If not provided, the worker type will be\"\n            \" inferred from the work pool.\"\n        ),\n    ),\n    prefetch_seconds: int = SettingsOption(\n        PREFECT_WORKER_PREFETCH_SECONDS,\n        help=\"Number of seconds to look into the future for scheduled flow runs.\",\n    ),\n    run_once: bool = typer.Option(\n        False, help=\"Only run worker polling once. By default, the worker runs forever.\"\n    ),\n    limit: int = typer.Option(\n        None,\n        \"-l\",\n        \"--limit\",\n        help=\"Maximum number of flow runs to start simultaneously.\",\n    ),\n    with_healthcheck: bool = typer.Option(\n        False, help=\"Start a healthcheck server for the worker.\"\n    ),\n    install_policy: InstallPolicy = typer.Option(\n        InstallPolicy.PROMPT.value,\n        \"--install-policy\",\n        help=\"Install policy to use workers from Prefect integration packages.\",\n        case_sensitive=False,\n    ),\n    base_job_template: typer.FileText = typer.Option(\n        None,\n        \"--base-job-template\",\n        help=(\n            \"The path to a JSON file containing the base job template to use. If\"\n            \" unspecified, Prefect will use the default base job template for the given\"\n            \" worker type. If the work pool already exists, this will be ignored.\"\n        ),\n    ),\n):\n    \"\"\"\n    Start a worker process to poll a work pool for flow runs.\n    \"\"\"\n\n    is_paused = await _check_work_pool_paused(work_pool_name)\n    if is_paused:\n        app.console.print(\n            (\n                f\"The work pool {work_pool_name!r} is currently paused. This worker\"\n                \" will not execute any flow runs until the work pool is unpaused.\"\n            ),\n            style=\"yellow\",\n        )\n\n    is_queues_paused = await _check_work_queues_paused(\n        work_pool_name,\n        work_queues,\n    )\n    if is_queues_paused:\n        queue_scope = (\n            \"All work queues\" if not work_queues else \"Specified work queue(s)\"\n        )\n        app.console.print(\n            (\n                f\"{queue_scope} in the work pool {work_pool_name!r} are currently\"\n                \" paused. This worker will not execute any flow runs until the work\"\n                \" queues are unpaused.\"\n            ),\n            style=\"yellow\",\n        )\n\n    worker_cls = await _get_worker_class(worker_type, work_pool_name, install_policy)\n\n    if worker_cls is None:\n        exit_with_error(\n            \"Unable to start worker. 
Please ensure you have the necessary dependencies\"\n            \" installed to run your desired worker type.\"\n        )\n\n    worker_process_id = os.getpid()\n    setup_signal_handlers_worker(\n        worker_process_id, f\"the {worker_type} worker\", app.console.print\n    )\n\n    template_contents = None\n    if base_job_template is not None:\n        template_contents = json.load(fp=base_job_template)\n\n    async with worker_cls(\n        name=worker_name,\n        work_pool_name=work_pool_name,\n        work_queues=work_queues,\n        limit=limit,\n        prefetch_seconds=prefetch_seconds,\n        heartbeat_interval_seconds=PREFECT_WORKER_HEARTBEAT_SECONDS.value(),\n        base_job_template=template_contents,\n    ) as worker:\n        app.console.print(f\"Worker {worker.name!r} started!\", style=\"green\")\n        async with anyio.create_task_group() as tg:\n            # wait for an initial heartbeat to configure the worker\n            await worker.sync_with_backend()\n            # schedule the scheduled flow run polling loop\n            tg.start_soon(\n                partial(\n                    critical_service_loop,\n                    workload=worker.get_and_submit_flow_runs,\n                    interval=PREFECT_WORKER_QUERY_SECONDS.value(),\n                    run_once=run_once,\n                    printer=app.console.print,\n                    jitter_range=0.3,\n                    backoff=4,  # Up to ~1 minute interval during backoff\n                )\n            )\n            # schedule the sync loop\n            tg.start_soon(\n                partial(\n                    critical_service_loop,\n                    workload=worker.sync_with_backend,\n                    interval=worker.heartbeat_interval_seconds,\n                    run_once=run_once,\n                    printer=app.console.print,\n                    jitter_range=0.3,\n                    backoff=4,\n                )\n            )\n            tg.start_soon(\n                partial(\n                    critical_service_loop,\n                    workload=worker.check_for_cancelled_flow_runs,\n                    interval=PREFECT_WORKER_QUERY_SECONDS.value() * 2,\n                    run_once=run_once,\n                    printer=app.console.print,\n                    jitter_range=0.3,\n                    backoff=4,\n                )\n            )\n\n            started_event = await worker._emit_worker_started_event()\n\n            # if --with-healthcheck was passed, start the healthcheck server\n            if with_healthcheck:\n                # we'll start the ASGI server in a separate thread so that\n                # uvicorn does not block the main thread\n                server_thread = threading.Thread(\n                    name=\"healthcheck-server-thread\",\n                    target=partial(\n                        start_healthcheck_server,\n                        worker=worker,\n                        query_interval_seconds=PREFECT_WORKER_QUERY_SECONDS.value(),\n                    ),\n                    daemon=True,\n                )\n                server_thread.start()\n\n    await worker._emit_worker_stopped_event(started_event)\n    app.console.print(f\"Worker {worker.name!r} stopped!\")\n
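
    For comparison, a heavily simplified single-poll sketch outside the CLI, assuming a process-type work pool and the ProcessWorker class from prefect.workers.process (the CLI resolves the worker class from the pool type and runs the full polling loops shown above):

    import asyncio
    from prefect.workers.process import ProcessWorker  # assumed worker type

    async def poll_once(pool: str) -> None:
        async with ProcessWorker(work_pool_name=pool) as worker:
            await worker.sync_with_backend()         # register the worker with the API
            await worker.get_and_submit_flow_runs()  # one scheduled-run poll, no service loop

    asyncio.run(poll_once('my-process-pool'))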
    ","tags":["Python API","workers","CLI"]},{"location":"api-ref/prefect/client/base/","title":"base","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base","title":"prefect.client.base","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectResponse","title":"PrefectResponse","text":"

    Bases: Response

    A Prefect wrapper for the httpx.Response class.

    Provides more informative error messages.

    Source code in prefect/client/base.py
    class PrefectResponse(httpx.Response):\n    \"\"\"\n    A Prefect wrapper for the `httpx.Response` class.\n\n    Provides more informative error messages.\n    \"\"\"\n\n    def raise_for_status(self) -> None:\n        \"\"\"\n        Raise an exception if the response contains an HTTPStatusError.\n\n        The `PrefectHTTPStatusError` contains useful additional information that\n        is not contained in the `HTTPStatusError`.\n        \"\"\"\n        try:\n            return super().raise_for_status()\n        except HTTPStatusError as exc:\n            raise PrefectHTTPStatusError.from_httpx_error(exc) from exc.__cause__\n\n    @classmethod\n    def from_httpx_response(cls: Type[Self], response: httpx.Response) -> Self:\n        \"\"\"\n        Create a `PrefectReponse` from an `httpx.Response`.\n\n        By changing the `__class__` attribute of the Response, we change the method\n        resolution order to look for methods defined in PrefectResponse, while leaving\n        everything else about the original Response instance intact.\n        \"\"\"\n        new_response = copy.copy(response)\n        new_response.__class__ = cls\n        return new_response\n
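
    A minimal sketch of wrapping a plain httpx.Response so that raise_for_status raises the richer Prefect error:

    import httpx
    from prefect.client.base import PrefectResponse

    # Build a synthetic 404 response, then rewrap it as a PrefectResponse
    raw = httpx.Response(404, request=httpx.Request('GET', 'https://example.invalid'))
    response = PrefectResponse.from_httpx_response(raw)
    response.raise_for_status()  # raises PrefectHTTPStatusError instead of httpx.HTTPStatusError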
    ","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectResponse.raise_for_status","title":"raise_for_status","text":"

    Raise an exception if the response contains an HTTPStatusError.

    The PrefectHTTPStatusError contains useful additional information that is not contained in the HTTPStatusError.

    Source code in prefect/client/base.py
    def raise_for_status(self) -> None:\n    \"\"\"\n    Raise an exception if the response contains an HTTPStatusError.\n\n    The `PrefectHTTPStatusError` contains useful additional information that\n    is not contained in the `HTTPStatusError`.\n    \"\"\"\n    try:\n        return super().raise_for_status()\n    except HTTPStatusError as exc:\n        raise PrefectHTTPStatusError.from_httpx_error(exc) from exc.__cause__\n
    ","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectResponse.from_httpx_response","title":"from_httpx_response classmethod","text":"

    Create a PrefectResponse from an httpx.Response.

    By changing the __class__ attribute of the Response, we change the method resolution order to look for methods defined in PrefectResponse, while leaving everything else about the original Response instance intact.

    Source code in prefect/client/base.py
    @classmethod\ndef from_httpx_response(cls: Type[Self], response: httpx.Response) -> Self:\n    \"\"\"\n    Create a `PrefectReponse` from an `httpx.Response`.\n\n    By changing the `__class__` attribute of the Response, we change the method\n    resolution order to look for methods defined in PrefectResponse, while leaving\n    everything else about the original Response instance intact.\n    \"\"\"\n    new_response = copy.copy(response)\n    new_response.__class__ = cls\n    return new_response\n
    ","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectHttpxClient","title":"PrefectHttpxClient","text":"

    Bases: AsyncClient

    A Prefect wrapper for the async httpx client with support for retry-after headers for the provided status codes (typically 429, 502 and 503).

    Additionally, this client will always call raise_for_status on responses.

    For more details on rate limit headers, see: Configuring Cloudflare Rate Limiting

    Source code in prefect/client/base.py
    class PrefectHttpxClient(httpx.AsyncClient):\n    \"\"\"\n    A Prefect wrapper for the async httpx client with support for retry-after headers\n    for the provided status codes (typically 429, 502 and 503).\n\n    Additionally, this client will always call `raise_for_status` on responses.\n\n    For more details on rate limit headers, see:\n    [Configuring Cloudflare Rate Limiting](https://support.cloudflare.com/hc/en-us/articles/115001635128-Configuring-Rate-Limiting-from-UI)\n    \"\"\"\n\n    def __init__(\n        self,\n        *args,\n        enable_csrf_support: bool = False,\n        raise_on_all_errors: bool = True,\n        **kwargs,\n    ):\n        self.enable_csrf_support: bool = enable_csrf_support\n        self.csrf_token: Optional[str] = None\n        self.csrf_token_expiration: Optional[datetime] = None\n        self.csrf_client_id: uuid.UUID = uuid.uuid4()\n        self.raise_on_all_errors: bool = raise_on_all_errors\n\n        super().__init__(*args, **kwargs)\n\n        user_agent = (\n            f\"prefect/{prefect.__version__} (API {constants.SERVER_API_VERSION})\"\n        )\n        self.headers[\"User-Agent\"] = user_agent\n\n    async def _send_with_retry(\n        self,\n        request: Request,\n        send: Callable[[Request], Awaitable[Response]],\n        send_args: Tuple,\n        send_kwargs: Dict,\n        retry_codes: Set[int] = set(),\n        retry_exceptions: Tuple[Exception, ...] = tuple(),\n    ):\n        \"\"\"\n        Send a request and retry it if it fails.\n\n        Sends the provided request and retries it up to PREFECT_CLIENT_MAX_RETRIES times\n        if the request either raises an exception listed in `retry_exceptions` or\n        receives a response with a status code listed in `retry_codes`.\n\n        Retries will be delayed based on either the retry header (preferred) or\n        exponential backoff if a retry header is not provided.\n        \"\"\"\n        try_count = 0\n        response = None\n\n        is_change_request = request.method.lower() in {\"post\", \"put\", \"patch\", \"delete\"}\n\n        if self.enable_csrf_support and is_change_request:\n            await self._add_csrf_headers(request=request)\n\n        while try_count <= PREFECT_CLIENT_MAX_RETRIES.value():\n            try_count += 1\n            retry_seconds = None\n            exc_info = None\n\n            try:\n                response = await send(request, *send_args, **send_kwargs)\n            except retry_exceptions:  # type: ignore\n                if try_count > PREFECT_CLIENT_MAX_RETRIES.value():\n                    raise\n                # Otherwise, we will ignore this error but capture the info for logging\n                exc_info = sys.exc_info()\n            else:\n                # We got a response; check if it's a CSRF error, otherwise\n                # return immediately if it is not retryable\n                if (\n                    response.status_code == status.HTTP_403_FORBIDDEN\n                    and \"Invalid CSRF token\" in response.text\n                ):\n                    # We got a CSRF error, clear the token and try again\n                    self.csrf_token = None\n                    await self._add_csrf_headers(request)\n                elif response.status_code not in retry_codes:\n                    return response\n\n                if \"Retry-After\" in response.headers:\n                    retry_seconds = float(response.headers[\"Retry-After\"])\n\n            # Use an exponential back-off if not set 
in a header\n            if retry_seconds is None:\n                retry_seconds = 2**try_count\n\n            # Add jitter\n            jitter_factor = PREFECT_CLIENT_RETRY_JITTER_FACTOR.value()\n            if retry_seconds > 0 and jitter_factor > 0:\n                if response is not None and \"Retry-After\" in response.headers:\n                    # Always wait for _at least_ retry seconds if requested by the API\n                    retry_seconds = bounded_poisson_interval(\n                        retry_seconds, retry_seconds * (1 + jitter_factor)\n                    )\n                else:\n                    # Otherwise, use a symmetrical jitter\n                    retry_seconds = clamped_poisson_interval(\n                        retry_seconds, jitter_factor\n                    )\n\n            logger.debug(\n                (\n                    \"Encountered retryable exception during request. \"\n                    if exc_info\n                    else (\n                        \"Received response with retryable status code\"\n                        f\" {response.status_code}. \"\n                    )\n                )\n                + f\"Another attempt will be made in {retry_seconds}s. \"\n                \"This is attempt\"\n                f\" {try_count}/{PREFECT_CLIENT_MAX_RETRIES.value() + 1}.\",\n                exc_info=exc_info,\n            )\n            await anyio.sleep(retry_seconds)\n\n        assert (\n            response is not None\n        ), \"Retry handling ended without response or exception\"\n\n        # We ran out of retries, return the failed response\n        return response\n\n    async def send(self, request: Request, *args, **kwargs) -> Response:\n        \"\"\"\n        Send a request with automatic retry behavior for the following status codes:\n\n        - 403 Forbidden, if the request failed due to CSRF protection\n        - 408 Request Timeout\n        - 429 CloudFlare-style rate limiting\n        - 502 Bad Gateway\n        - 503 Service unavailable\n        - Any additional status codes provided in `PREFECT_CLIENT_RETRY_EXTRA_CODES`\n        \"\"\"\n\n        super_send = super().send\n        response = await self._send_with_retry(\n            request=request,\n            send=super_send,\n            send_args=args,\n            send_kwargs=kwargs,\n            retry_codes={\n                status.HTTP_429_TOO_MANY_REQUESTS,\n                status.HTTP_503_SERVICE_UNAVAILABLE,\n                status.HTTP_502_BAD_GATEWAY,\n                status.HTTP_408_REQUEST_TIMEOUT,\n                *PREFECT_CLIENT_RETRY_EXTRA_CODES.value(),\n            },\n            retry_exceptions=(\n                httpx.ReadTimeout,\n                httpx.PoolTimeout,\n                httpx.ConnectTimeout,\n                # `ConnectionResetError` when reading socket raises as a `ReadError`\n                httpx.ReadError,\n                # Sockets can be closed during writes resulting in a `WriteError`\n                httpx.WriteError,\n                # Uvicorn bug, see https://github.com/PrefectHQ/prefect/issues/7512\n                httpx.RemoteProtocolError,\n                # HTTP2 bug, see https://github.com/PrefectHQ/prefect/issues/7442\n                httpx.LocalProtocolError,\n            ),\n        )\n\n        # Convert to a Prefect response to add nicer errors messages\n        response = PrefectResponse.from_httpx_response(response)\n\n        if self.raise_on_all_errors:\n            response.raise_for_status()\n\n      
  return response\n\n    async def _add_csrf_headers(self, request: Request):\n        now = datetime.now(timezone.utc)\n\n        if not self.enable_csrf_support:\n            return\n\n        if not self.csrf_token or (\n            self.csrf_token_expiration and now > self.csrf_token_expiration\n        ):\n            token_request = self.build_request(\n                \"GET\", f\"/csrf-token?client={self.csrf_client_id}\"\n            )\n\n            try:\n                token_response = await self.send(token_request)\n            except PrefectHTTPStatusError as exc:\n                old_server = exc.response.status_code == status.HTTP_404_NOT_FOUND\n                unconfigured_server = (\n                    exc.response.status_code == status.HTTP_422_UNPROCESSABLE_ENTITY\n                    and \"CSRF protection is disabled.\" in exc.response.text\n                )\n\n                if old_server or unconfigured_server:\n                    # The token endpoint is either unavailable, suggesting an\n                    # older server, or CSRF protection is disabled. In either\n                    # case we should disable CSRF support.\n                    self.enable_csrf_support = False\n                    return\n\n                raise\n\n            token: CsrfToken = CsrfToken.parse_obj(token_response.json())\n            self.csrf_token = token.token\n            self.csrf_token_expiration = token.expiration\n\n        request.headers[\"Prefect-Csrf-Token\"] = self.csrf_token\n        request.headers[\"Prefect-Csrf-Client\"] = str(self.csrf_client_id)\n
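
    A minimal usage sketch, assuming a Prefect server listening on the default local address; retry handling and response wrapping all happen inside send:

    import asyncio
    from prefect.client.base import PrefectHttpxClient

    async def main() -> None:
        # Behaves like httpx.AsyncClient, with retry handling and
        # raise_for_status applied to every response by default
        async with PrefectHttpxClient(base_url='http://127.0.0.1:4200/api') as client:
            response = await client.get('/health')
            print(response.status_code)

    asyncio.run(main())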
    ","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectHttpxClient.send","title":"send async","text":"

    Send a request with automatic retry behavior for the following status codes:

    • 403 Forbidden, if the request failed due to CSRF protection
    • 408 Request Timeout
    • 429 CloudFlare-style rate limiting
    • 502 Bad Gateway
    • 503 Service unavailable
    • Any additional status codes provided in PREFECT_CLIENT_RETRY_EXTRA_CODES
    Source code in prefect/client/base.py
    async def send(self, request: Request, *args, **kwargs) -> Response:\n    \"\"\"\n    Send a request with automatic retry behavior for the following status codes:\n\n    - 403 Forbidden, if the request failed due to CSRF protection\n    - 408 Request Timeout\n    - 429 CloudFlare-style rate limiting\n    - 502 Bad Gateway\n    - 503 Service unavailable\n    - Any additional status codes provided in `PREFECT_CLIENT_RETRY_EXTRA_CODES`\n    \"\"\"\n\n    super_send = super().send\n    response = await self._send_with_retry(\n        request=request,\n        send=super_send,\n        send_args=args,\n        send_kwargs=kwargs,\n        retry_codes={\n            status.HTTP_429_TOO_MANY_REQUESTS,\n            status.HTTP_503_SERVICE_UNAVAILABLE,\n            status.HTTP_502_BAD_GATEWAY,\n            status.HTTP_408_REQUEST_TIMEOUT,\n            *PREFECT_CLIENT_RETRY_EXTRA_CODES.value(),\n        },\n        retry_exceptions=(\n            httpx.ReadTimeout,\n            httpx.PoolTimeout,\n            httpx.ConnectTimeout,\n            # `ConnectionResetError` when reading socket raises as a `ReadError`\n            httpx.ReadError,\n            # Sockets can be closed during writes resulting in a `WriteError`\n            httpx.WriteError,\n            # Uvicorn bug, see https://github.com/PrefectHQ/prefect/issues/7512\n            httpx.RemoteProtocolError,\n            # HTTP2 bug, see https://github.com/PrefectHQ/prefect/issues/7442\n            httpx.LocalProtocolError,\n        ),\n    )\n\n    # Convert to a Prefect response to add nicer errors messages\n    response = PrefectResponse.from_httpx_response(response)\n\n    if self.raise_on_all_errors:\n        response.raise_for_status()\n\n    return response\n
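    For illustration, a minimal usage sketch of the retrying client. The base URL and endpoint below are assumptions for a local Prefect server, not part of this reference:

    import asyncio

    from prefect.client.base import PrefectHttpxClient

    async def main():
        # Retries 408, 429, 502, and 503 responses (plus any codes listed in
        # PREFECT_CLIENT_RETRY_EXTRA_CODES) with exponential backoff and jitter
        # before returning the final response.
        async with PrefectHttpxClient(base_url="http://127.0.0.1:4200/api") as client:
            response = await client.get("/health")
            print(response.status_code)

    asyncio.run(main())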
    ","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.app_lifespan_context","title":"app_lifespan_context async","text":"

    A context manager that calls startup/shutdown hooks for the given application.

    Lifespan contexts are cached per application to avoid calling the lifespan hooks more than once if the context is entered in nested code. A no-op context will be returned if the context for the given application is already being managed.

    This manager is robust to concurrent access within the event loop. For example, if you have concurrent contexts for the same application, it is guaranteed that startup hooks will be called before their context starts and shutdown hooks will only be called after their context exits.

    A reference count is used to support nested use of clients without running lifespan hooks excessively. The first client context entered will create and enter a lifespan context. Each subsequent client will increment a reference count but will not create a new lifespan context. When each client context exits, the reference count is decremented. When the last client context exits, the lifespan will be closed.

    In simple nested cases, the first client context will be the one to exit the lifespan. However, if client contexts are entered concurrently, they may not exit in a consistent order. If the first client context were responsible for closing the lifespan, it would have to wait for all other client contexts to exit to avoid firing shutdown hooks while the application is in use. Waiting for the other clients to exit can introduce deadlocks, so, instead, the first client exits without closing the lifespan context, and reference counts are used to ensure the lifespan is closed once all of the clients are done.

    Source code in prefect/client/base.py
    @asynccontextmanager\nasync def app_lifespan_context(app: ASGIApp) -> AsyncGenerator[None, None]:\n    \"\"\"\n    A context manager that calls startup/shutdown hooks for the given application.\n\n    Lifespan contexts are cached per application to avoid calling the lifespan hooks\n    more than once if the context is entered in nested code. A no-op context will be\n    returned if the context for the given application is already being managed.\n\n    This manager is robust to concurrent access within the event loop. For example,\n    if you have concurrent contexts for the same application, it is guaranteed that\n    startup hooks will be called before their context starts and shutdown hooks will\n    only be called after their context exits.\n\n    A reference count is used to support nested use of clients without running\n    lifespan hooks excessively. The first client context entered will create and enter\n    a lifespan context. Each subsequent client will increment a reference count but will\n    not create a new lifespan context. When each client context exits, the reference\n    count is decremented. When the last client context exits, the lifespan will be\n    closed.\n\n    In simple nested cases, the first client context will be the one to exit the\n    lifespan. However, if client contexts are entered concurrently they may not exit\n    in a consistent order. If the first client context was responsible for closing\n    the lifespan, it would have to wait until all other client contexts to exit to\n    avoid firing shutdown hooks while the application is in use. Waiting for the other\n    clients to exit can introduce deadlocks, so, instead, the first client will exit\n    without closing the lifespan context and reference counts will be used to ensure\n    the lifespan is closed once all of the clients are done.\n    \"\"\"\n    # TODO: A deadlock has been observed during multithreaded use of clients while this\n    #       lifespan context is being used. This has only been reproduced on Python 3.7\n    #       and while we hope to discourage using multiple event loops in threads, this\n    #       bug may emerge again.\n    #       See https://github.com/PrefectHQ/orion/pull/1696\n    thread_id = threading.get_ident()\n\n    # The id of the application is used instead of the hash so each application instance\n    # is managed independently even if they share the same settings. 
We include the\n    # thread id since applications are managed separately per thread.\n    key = (thread_id, id(app))\n\n    # On exception, this will be populated with exception details\n    exc_info = (None, None, None)\n\n    # Get a lock unique to this thread since anyio locks are not threadsafe\n    lock = APP_LIFESPANS_LOCKS[thread_id]\n\n    async with lock:\n        if key in APP_LIFESPANS:\n            # The lifespan is already being managed, just increment the reference count\n            APP_LIFESPANS_REF_COUNTS[key] += 1\n        else:\n            # Create a new lifespan manager\n            APP_LIFESPANS[key] = context = LifespanManager(\n                app, startup_timeout=30, shutdown_timeout=30\n            )\n            APP_LIFESPANS_REF_COUNTS[key] = 1\n\n            # Ensure we enter the context before releasing the lock so startup hooks\n            # are complete before another client can be used\n            await context.__aenter__()\n\n    try:\n        yield\n    except BaseException:\n        exc_info = sys.exc_info()\n        raise\n    finally:\n        # If we do not shield against anyio cancellation, the lock will return\n        # immediately and the code in its context will not run, leaving the lifespan\n        # open\n        with anyio.CancelScope(shield=True):\n            async with lock:\n                # After the consumer exits the context, decrement the reference count\n                APP_LIFESPANS_REF_COUNTS[key] -= 1\n\n                # If this the last context to exit, close the lifespan\n                if APP_LIFESPANS_REF_COUNTS[key] <= 0:\n                    APP_LIFESPANS_REF_COUNTS.pop(key)\n                    context = APP_LIFESPANS.pop(key)\n                    await context.__aexit__(*exc_info)\n
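    As a rough sketch of the reference-counted behavior described above, assuming a FastAPI application and these import paths:

    import asyncio

    from fastapi import FastAPI

    from prefect.client.base import app_lifespan_context

    app = FastAPI()

    async def main():
        # Nested entries for the same app share one lifespan: startup hooks run
        # on the first entry and shutdown hooks run only after the last context exits.
        async with app_lifespan_context(app):
            async with app_lifespan_context(app):
                pass

    asyncio.run(main())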
    ","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/","title":"cloud","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud","title":"prefect.client.cloud","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.CloudUnauthorizedError","title":"CloudUnauthorizedError","text":"

    Bases: PrefectException

    Raised when the CloudClient receives a 401 or 403 from the Cloud API.

    Source code in prefect/client/cloud.py
    class CloudUnauthorizedError(PrefectException):\n    \"\"\"\n    Raised when the CloudClient receives a 401 or 403 from the Cloud API.\n    \"\"\"\n
    ","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.CloudClient","title":"CloudClient","text":"Source code in prefect/client/cloud.py
    class CloudClient:\n    def __init__(\n        self,\n        host: str,\n        api_key: str,\n        httpx_settings: Optional[Dict[str, Any]] = None,\n    ) -> None:\n        httpx_settings = httpx_settings or dict()\n        httpx_settings.setdefault(\"headers\", dict())\n        httpx_settings[\"headers\"].setdefault(\"Authorization\", f\"Bearer {api_key}\")\n\n        httpx_settings.setdefault(\"base_url\", host)\n        if not PREFECT_UNIT_TEST_MODE.value():\n            httpx_settings.setdefault(\"follow_redirects\", True)\n        self._client = PrefectHttpxClient(**httpx_settings, enable_csrf_support=False)\n\n    async def api_healthcheck(self):\n        \"\"\"\n        Attempts to connect to the Cloud API and raises the encountered exception if not\n        successful.\n\n        If successful, returns `None`.\n        \"\"\"\n        with anyio.fail_after(10):\n            await self.read_workspaces()\n\n    async def read_workspaces(self) -> List[Workspace]:\n        workspaces = pydantic.parse_obj_as(\n            List[Workspace], await self.get(\"/me/workspaces\")\n        )\n        return workspaces\n\n    async def read_worker_metadata(self) -> Dict[str, Any]:\n        configured_url = prefect.settings.PREFECT_API_URL.value()\n        account_id, workspace_id = re.findall(PARSE_API_URL_REGEX, configured_url)[0]\n        return await self.get(\n            f\"accounts/{account_id}/workspaces/{workspace_id}/collections/work_pool_types\"\n        )\n\n    async def __aenter__(self):\n        await self._client.__aenter__()\n        return self\n\n    async def __aexit__(self, *exc_info):\n        return await self._client.__aexit__(*exc_info)\n\n    def __enter__(self):\n        raise RuntimeError(\n            \"The `CloudClient` must be entered with an async context. Use 'async \"\n            \"with CloudClient(...)' not 'with CloudClient(...)'\"\n        )\n\n    def __exit__(self, *_):\n        assert False, \"This should never be called but must be defined for __enter__\"\n\n    async def get(self, route, **kwargs):\n        return await self.request(\"GET\", route, **kwargs)\n\n    async def request(self, method, route, **kwargs):\n        try:\n            res = await self._client.request(method, route, **kwargs)\n            res.raise_for_status()\n        except httpx.HTTPStatusError as exc:\n            if exc.response.status_code in (\n                status.HTTP_401_UNAUTHORIZED,\n                status.HTTP_403_FORBIDDEN,\n            ):\n                raise CloudUnauthorizedError\n            else:\n                raise exc\n\n        if res.status_code == status.HTTP_204_NO_CONTENT:\n            return\n\n        return res.json()\n
    ","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.CloudClient.api_healthcheck","title":"api_healthcheck async","text":"

    Attempts to connect to the Cloud API and raises the encountered exception if not successful.

    If successful, returns None.

    Source code in prefect/client/cloud.py
    async def api_healthcheck(self):\n    \"\"\"\n    Attempts to connect to the Cloud API and raises the encountered exception if not\n    successful.\n\n    If successful, returns `None`.\n    \"\"\"\n    with anyio.fail_after(10):\n        await self.read_workspaces()\n
    ","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.get_cloud_client","title":"get_cloud_client","text":"

    Creates a CloudClient for the Prefect Cloud API, defaulting the host to the configured PREFECT_CLOUD_API_URL (or inferring it from PREFECT_API_URL when infer_cloud_url is True) and the API key to PREFECT_API_KEY.

    Source code in prefect/client/cloud.py
    def get_cloud_client(\n    host: Optional[str] = None,\n    api_key: Optional[str] = None,\n    httpx_settings: Optional[dict] = None,\n    infer_cloud_url: bool = False,\n) -> \"CloudClient\":\n    \"\"\"\n    Needs a docstring.\n    \"\"\"\n    if httpx_settings is not None:\n        httpx_settings = httpx_settings.copy()\n\n    if infer_cloud_url is False:\n        host = host or PREFECT_CLOUD_API_URL.value()\n    else:\n        configured_url = prefect.settings.PREFECT_API_URL.value()\n        host = re.sub(PARSE_API_URL_REGEX, \"\", configured_url)\n\n    return CloudClient(\n        host=host,\n        api_key=api_key or PREFECT_API_KEY.value(),\n        httpx_settings=httpx_settings,\n    )\n
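    A minimal sketch of using the returned client, assuming PREFECT_API_KEY and PREFECT_CLOUD_API_URL are configured in the active profile:

    import asyncio

    from prefect.client.cloud import CloudUnauthorizedError, get_cloud_client

    async def main():
        async with get_cloud_client() as client:
            try:
                # Returns None when healthy; raises the encountered exception otherwise.
                await client.api_healthcheck()
            except CloudUnauthorizedError:
                print("The API key was rejected with a 401 or 403.")

    asyncio.run(main())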
    ","tags":["Python API"]},{"location":"api-ref/prefect/client/orchestration/","title":"orchestration","text":"

    Asynchronous client implementation for communicating with the Prefect REST API.

    Explore the client by communicating with an in-memory webserver \u2014 no setup required:

    $ # start python REPL with native await functionality\n$ python -m asyncio\n>>> from prefect import get_client\n>>> async with get_client() as client:\n...     response = await client.hello()\n...     print(response.json())\n\ud83d\udc4b\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration","title":"prefect.client.orchestration","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient","title":"PrefectClient","text":"

    An asynchronous client for interacting with the Prefect REST API.

    Parameters:

    Name | Type | Description | Default
    api | Union[str, ASGIApp] | the REST API URL or FastAPI application to connect to | required
    api_key | str | An optional API key for authentication. | None
    api_version | str | The API version this client is compatible with. | None
    httpx_settings | Optional[Dict[str, Any]] | An optional dictionary of settings to pass to the underlying httpx.AsyncClient | None
    Say hello to a Prefect REST API\n\n<div class=\"terminal\">\n```\n>>> async with get_client() as client:\n>>>     response = await client.hello()\n>>>\n>>> print(response.json())\n\ud83d\udc4b\n```\n</div>\n
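    A minimal construction sketch using the arguments above; the URL and settings values are illustrative assumptions:

    from prefect.client.orchestration import PrefectClient

    client = PrefectClient(
        api="http://127.0.0.1:4200/api",
        api_key=None,                    # only needed when authenticating to Prefect Cloud
        httpx_settings={"timeout": 30},  # forwarded to the underlying httpx.AsyncClient
    )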
    Source code in prefect/client/orchestration.py
    class PrefectClient:\n    \"\"\"\n    An asynchronous client for interacting with the [Prefect REST API](/api-ref/rest-api/).\n\n    Args:\n        api: the REST API URL or FastAPI application to connect to\n        api_key: An optional API key for authentication.\n        api_version: The API version this client is compatible with.\n        httpx_settings: An optional dictionary of settings to pass to the underlying\n            `httpx.AsyncClient`\n\n    Examples:\n\n        Say hello to a Prefect REST API\n\n        <div class=\"terminal\">\n        ```\n        >>> async with get_client() as client:\n        >>>     response = await client.hello()\n        >>>\n        >>> print(response.json())\n        \ud83d\udc4b\n        ```\n        </div>\n    \"\"\"\n\n    def __init__(\n        self,\n        api: Union[str, ASGIApp],\n        *,\n        api_key: str = None,\n        api_version: str = None,\n        httpx_settings: Optional[Dict[str, Any]] = None,\n    ) -> None:\n        httpx_settings = httpx_settings.copy() if httpx_settings else {}\n        httpx_settings.setdefault(\"headers\", {})\n\n        if PREFECT_API_TLS_INSECURE_SKIP_VERIFY:\n            httpx_settings.setdefault(\"verify\", False)\n        else:\n            cert_file = PREFECT_API_SSL_CERT_FILE.value()\n            if not cert_file:\n                cert_file = certifi.where()\n            httpx_settings.setdefault(\"verify\", cert_file)\n\n        if api_version is None:\n            api_version = SERVER_API_VERSION\n        httpx_settings[\"headers\"].setdefault(\"X-PREFECT-API-VERSION\", api_version)\n        if api_key:\n            httpx_settings[\"headers\"].setdefault(\"Authorization\", f\"Bearer {api_key}\")\n\n        # Context management\n        self._exit_stack = AsyncExitStack()\n        self._ephemeral_app: Optional[ASGIApp] = None\n        self.manage_lifespan = True\n        self.server_type: ServerType\n\n        # Only set if this client started the lifespan of the application\n        self._ephemeral_lifespan: Optional[LifespanManager] = None\n\n        self._closed = False\n        self._started = False\n\n        # Connect to an external application\n        if isinstance(api, str):\n            if httpx_settings.get(\"app\"):\n                raise ValueError(\n                    \"Invalid httpx settings: `app` cannot be set when providing an \"\n                    \"api url. `app` is only for use with ephemeral instances. 
Provide \"\n                    \"it as the `api` parameter instead.\"\n                )\n            httpx_settings.setdefault(\"base_url\", api)\n\n            # See https://www.python-httpx.org/advanced/#pool-limit-configuration\n            httpx_settings.setdefault(\n                \"limits\",\n                httpx.Limits(\n                    # We see instability when allowing the client to open many connections at once.\n                    # Limiting concurrency results in more stable performance.\n                    max_connections=16,\n                    max_keepalive_connections=8,\n                    # The Prefect Cloud LB will keep connections alive for 30s.\n                    # Only allow the client to keep connections alive for 25s.\n                    keepalive_expiry=25,\n                ),\n            )\n\n            # See https://www.python-httpx.org/http2/\n            # Enabling HTTP/2 support on the client does not necessarily mean that your requests\n            # and responses will be transported over HTTP/2, since both the client and the server\n            # need to support HTTP/2. If you connect to a server that only supports HTTP/1.1 the\n            # client will use a standard HTTP/1.1 connection instead.\n            httpx_settings.setdefault(\"http2\", PREFECT_API_ENABLE_HTTP2.value())\n\n            self.server_type = (\n                ServerType.CLOUD\n                if api.startswith(PREFECT_CLOUD_API_URL.value())\n                else ServerType.SERVER\n            )\n\n        # Connect to an in-process application\n        elif isinstance(api, ASGIApp):\n            self._ephemeral_app = api\n            self.server_type = ServerType.EPHEMERAL\n\n            # When using an ephemeral server, server-side exceptions can be raised\n            # client-side breaking all of our response error code handling. To work\n            # around this, we create an ASGI transport with application exceptions\n            # disabled instead of using the application directly.\n            # refs:\n            # - https://github.com/PrefectHQ/prefect/pull/9637\n            # - https://github.com/encode/starlette/blob/d3a11205ed35f8e5a58a711db0ff59c86fa7bb31/starlette/middleware/errors.py#L184\n            # - https://github.com/tiangolo/fastapi/blob/8cc967a7605d3883bd04ceb5d25cc94ae079612f/fastapi/applications.py#L163-L164\n            httpx_settings.setdefault(\n                \"transport\",\n                httpx.ASGITransport(\n                    app=self._ephemeral_app, raise_app_exceptions=False\n                ),\n            )\n            httpx_settings.setdefault(\"base_url\", \"http://ephemeral-prefect/api\")\n\n        else:\n            raise TypeError(\n                f\"Unexpected type {type(api).__name__!r} for argument `api`. 
Expected\"\n                \" 'str' or 'ASGIApp/FastAPI'\"\n            )\n\n        # See https://www.python-httpx.org/advanced/#timeout-configuration\n        httpx_settings.setdefault(\n            \"timeout\",\n            httpx.Timeout(\n                connect=PREFECT_API_REQUEST_TIMEOUT.value(),\n                read=PREFECT_API_REQUEST_TIMEOUT.value(),\n                write=PREFECT_API_REQUEST_TIMEOUT.value(),\n                pool=PREFECT_API_REQUEST_TIMEOUT.value(),\n            ),\n        )\n\n        if not PREFECT_UNIT_TEST_MODE:\n            httpx_settings.setdefault(\"follow_redirects\", True)\n\n        enable_csrf_support = (\n            self.server_type != ServerType.CLOUD\n            and PREFECT_CLIENT_CSRF_SUPPORT_ENABLED.value()\n        )\n\n        self._client = PrefectHttpxClient(\n            **httpx_settings, enable_csrf_support=enable_csrf_support\n        )\n        self._loop = None\n\n        # See https://www.python-httpx.org/advanced/#custom-transports\n        #\n        # If we're using an HTTP/S client (not the ephemeral client), adjust the\n        # transport to add retries _after_ it is instantiated. If we alter the transport\n        # before instantiation, the transport will not be aware of proxies unless we\n        # reproduce all of the logic to make it so.\n        #\n        # Only alter the transport to set our default of 3 retries, don't modify any\n        # transport a user may have provided via httpx_settings.\n        #\n        # Making liberal use of getattr and isinstance checks here to avoid any\n        # surprises if the internals of httpx or httpcore change on us\n        if isinstance(api, str) and not httpx_settings.get(\"transport\"):\n            transport_for_url = getattr(self._client, \"_transport_for_url\", None)\n            if callable(transport_for_url):\n                server_transport = transport_for_url(httpx.URL(api))\n                if isinstance(server_transport, httpx.AsyncHTTPTransport):\n                    pool = getattr(server_transport, \"_pool\", None)\n                    if isinstance(pool, httpcore.AsyncConnectionPool):\n                        pool._retries = 3\n\n        self.logger = get_logger(\"client\")\n\n    @property\n    def api_url(self) -> httpx.URL:\n        \"\"\"\n        Get the base URL for the API.\n        \"\"\"\n        return self._client.base_url\n\n    # API methods ----------------------------------------------------------------------\n\n    async def api_healthcheck(self) -> Optional[Exception]:\n        \"\"\"\n        Attempts to connect to the API and returns the encountered exception if not\n        successful.\n\n        If successful, returns `None`.\n        \"\"\"\n        try:\n            await self._client.get(\"/health\")\n            return None\n        except Exception as exc:\n            return exc\n\n    async def hello(self) -> httpx.Response:\n        \"\"\"\n        Send a GET request to /hello for testing purposes.\n        \"\"\"\n        return await self._client.get(\"/hello\")\n\n    async def create_flow(self, flow: \"FlowObject\") -> UUID:\n        \"\"\"\n        Create a flow in the Prefect API.\n\n        Args:\n            flow: a [Flow][prefect.flows.Flow] object\n\n        Raises:\n            httpx.RequestError: if a flow was not created for any reason\n\n        Returns:\n            the ID of the flow in the backend\n        \"\"\"\n        return await self.create_flow_from_name(flow.name)\n\n    async def create_flow_from_name(self, 
flow_name: str) -> UUID:\n        \"\"\"\n        Create a flow in the Prefect API.\n\n        Args:\n            flow_name: the name of the new flow\n\n        Raises:\n            httpx.RequestError: if a flow was not created for any reason\n\n        Returns:\n            the ID of the flow in the backend\n        \"\"\"\n        flow_data = FlowCreate(name=flow_name)\n        response = await self._client.post(\n            \"/flows/\", json=flow_data.dict(json_compatible=True)\n        )\n\n        flow_id = response.json().get(\"id\")\n        if not flow_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        # Return the id of the created flow\n        return UUID(flow_id)\n\n    async def read_flow(self, flow_id: UUID) -> Flow:\n        \"\"\"\n        Query the Prefect API for a flow by id.\n\n        Args:\n            flow_id: the flow ID of interest\n\n        Returns:\n            a [Flow model][prefect.client.schemas.objects.Flow] representation of the flow\n        \"\"\"\n        response = await self._client.get(f\"/flows/{flow_id}\")\n        return Flow.parse_obj(response.json())\n\n    async def read_flows(\n        self,\n        *,\n        flow_filter: FlowFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        deployment_filter: DeploymentFilter = None,\n        work_pool_filter: WorkPoolFilter = None,\n        work_queue_filter: WorkQueueFilter = None,\n        sort: FlowSort = None,\n        limit: int = None,\n        offset: int = 0,\n    ) -> List[Flow]:\n        \"\"\"\n        Query the Prefect API for flows. Only flows matching all criteria will\n        be returned.\n\n        Args:\n            flow_filter: filter criteria for flows\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            deployment_filter: filter criteria for deployments\n            work_pool_filter: filter criteria for work pools\n            work_queue_filter: filter criteria for work pool queues\n            sort: sort criteria for the flows\n            limit: limit for the flow query\n            offset: offset for the flow query\n\n        Returns:\n            a list of Flow model representations of the flows\n        \"\"\"\n        body = {\n            \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n                if flow_run_filter\n                else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"deployments\": (\n                deployment_filter.dict(json_compatible=True)\n                if deployment_filter\n                else None\n            ),\n            \"work_pools\": (\n                work_pool_filter.dict(json_compatible=True)\n                if work_pool_filter\n                else None\n            ),\n            \"work_queues\": (\n                work_queue_filter.dict(json_compatible=True)\n                if work_queue_filter\n                else None\n            ),\n            \"sort\": sort,\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n\n        response = await self._client.post(\"/flows/filter\", json=body)\n        return pydantic.parse_obj_as(List[Flow], response.json())\n\n    
async def read_flow_by_name(\n        self,\n        flow_name: str,\n    ) -> Flow:\n        \"\"\"\n        Query the Prefect API for a flow by name.\n\n        Args:\n            flow_name: the name of a flow\n\n        Returns:\n            a fully hydrated Flow model\n        \"\"\"\n        response = await self._client.get(f\"/flows/name/{flow_name}\")\n        return Flow.parse_obj(response.json())\n\n    async def create_flow_run_from_deployment(\n        self,\n        deployment_id: UUID,\n        *,\n        parameters: Optional[Dict[str, Any]] = None,\n        context: Optional[Dict[str, Any]] = None,\n        state: prefect.states.State = None,\n        name: str = None,\n        tags: Iterable[str] = None,\n        idempotency_key: str = None,\n        parent_task_run_id: UUID = None,\n        work_queue_name: str = None,\n        job_variables: Optional[Dict[str, Any]] = None,\n    ) -> FlowRun:\n        \"\"\"\n        Create a flow run for a deployment.\n\n        Args:\n            deployment_id: The deployment ID to create the flow run from\n            parameters: Parameter overrides for this flow run. Merged with the\n                deployment defaults\n            context: Optional run context data\n            state: The initial state for the run. If not provided, defaults to\n                `Scheduled` for now. Should always be a `Scheduled` type.\n            name: An optional name for the flow run. If not provided, the server will\n                generate a name.\n            tags: An optional iterable of tags to apply to the flow run; these tags\n                are merged with the deployment's tags.\n            idempotency_key: Optional idempotency key for creation of the flow run.\n                If the key matches the key of an existing flow run, the existing run will\n                be returned instead of creating a new one.\n            parent_task_run_id: if a subflow run is being created, the placeholder task\n                run identifier in the parent flow\n            work_queue_name: An optional work queue name to add this run to. If not provided,\n                will default to the deployment's set work queue.  
If one is provided that does not\n                exist, a new work queue will be created within the deployment's work pool.\n            job_variables: Optional variables that will be supplied to the flow run job.\n\n        Raises:\n            httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n        Returns:\n            The flow run model\n        \"\"\"\n        parameters = parameters or {}\n        context = context or {}\n        state = state or prefect.states.Scheduled()\n        tags = tags or []\n\n        flow_run_create = DeploymentFlowRunCreate(\n            parameters=parameters,\n            context=context,\n            state=state.to_state_create(),\n            tags=tags,\n            name=name,\n            idempotency_key=idempotency_key,\n            parent_task_run_id=parent_task_run_id,\n            job_variables=job_variables,\n        )\n\n        # done separately to avoid including this field in payloads sent to older API versions\n        if work_queue_name:\n            flow_run_create.work_queue_name = work_queue_name\n\n        response = await self._client.post(\n            f\"/deployments/{deployment_id}/create_flow_run\",\n            json=flow_run_create.dict(json_compatible=True, exclude_unset=True),\n        )\n        return FlowRun.parse_obj(response.json())\n\n    async def create_flow_run(\n        self,\n        flow: \"FlowObject\",\n        name: Optional[str] = None,\n        parameters: Optional[Dict[str, Any]] = None,\n        context: Optional[Dict[str, Any]] = None,\n        tags: Optional[Iterable[str]] = None,\n        parent_task_run_id: Optional[UUID] = None,\n        state: Optional[\"prefect.states.State\"] = None,\n    ) -> FlowRun:\n        \"\"\"\n        Create a flow run for a flow.\n\n        Args:\n            flow: The flow model to create the flow run for\n            name: An optional name for the flow run\n            parameters: Parameter overrides for this flow run.\n            context: Optional run context data\n            tags: a list of tags to apply to this flow run\n            parent_task_run_id: if a subflow run is being created, the placeholder task\n                run identifier in the parent flow\n            state: The initial state for the run. If not provided, defaults to\n                `Scheduled` for now. 
Should always be a `Scheduled` type.\n\n        Raises:\n            httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n        Returns:\n            The flow run model\n        \"\"\"\n        parameters = parameters or {}\n        context = context or {}\n\n        if state is None:\n            state = prefect.states.Pending()\n\n        # Retrieve the flow id\n        flow_id = await self.create_flow(flow)\n\n        flow_run_create = FlowRunCreate(\n            flow_id=flow_id,\n            flow_version=flow.version,\n            name=name,\n            parameters=parameters,\n            context=context,\n            tags=list(tags or []),\n            parent_task_run_id=parent_task_run_id,\n            state=state.to_state_create(),\n            empirical_policy=FlowRunPolicy(\n                retries=flow.retries,\n                retry_delay=flow.retry_delay_seconds,\n            ),\n        )\n\n        flow_run_create_json = flow_run_create.dict(json_compatible=True)\n        response = await self._client.post(\"/flow_runs/\", json=flow_run_create_json)\n        flow_run = FlowRun.parse_obj(response.json())\n\n        # Restore the parameters to the local objects to retain expectations about\n        # Python objects\n        flow_run.parameters = parameters\n\n        return flow_run\n\n    async def update_flow_run(\n        self,\n        flow_run_id: UUID,\n        flow_version: Optional[str] = None,\n        parameters: Optional[dict] = None,\n        name: Optional[str] = None,\n        tags: Optional[Iterable[str]] = None,\n        empirical_policy: Optional[FlowRunPolicy] = None,\n        infrastructure_pid: Optional[str] = None,\n        job_variables: Optional[dict] = None,\n    ) -> httpx.Response:\n        \"\"\"\n        Update a flow run's details.\n\n        Args:\n            flow_run_id: The identifier for the flow run to update.\n            flow_version: A new version string for the flow run.\n            parameters: A dictionary of parameter values for the flow run. This will not\n                be merged with any existing parameters.\n            name: A new name for the flow run.\n            empirical_policy: A new flow run orchestration policy. This will not be\n                merged with any existing policy.\n            tags: An iterable of new tags for the flow run. 
These will not be merged with\n                any existing tags.\n            infrastructure_pid: The id of flow run as returned by an\n                infrastructure block.\n\n        Returns:\n            an `httpx.Response` object from the PATCH request\n        \"\"\"\n        params = {}\n        if flow_version is not None:\n            params[\"flow_version\"] = flow_version\n        if parameters is not None:\n            params[\"parameters\"] = parameters\n        if name is not None:\n            params[\"name\"] = name\n        if tags is not None:\n            params[\"tags\"] = tags\n        if empirical_policy is not None:\n            params[\"empirical_policy\"] = empirical_policy\n        if infrastructure_pid:\n            params[\"infrastructure_pid\"] = infrastructure_pid\n        if job_variables is not None:\n            params[\"job_variables\"] = job_variables\n\n        flow_run_data = FlowRunUpdate(**params)\n\n        return await self._client.patch(\n            f\"/flow_runs/{flow_run_id}\",\n            json=flow_run_data.dict(json_compatible=True, exclude_unset=True),\n        )\n\n    async def delete_flow_run(\n        self,\n        flow_run_id: UUID,\n    ) -> None:\n        \"\"\"\n        Delete a flow run by UUID.\n\n        Args:\n            flow_run_id: The flow run UUID of interest.\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If requests fails\n        \"\"\"\n        try:\n            await self._client.delete(f\"/flow_runs/{flow_run_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def create_concurrency_limit(\n        self,\n        tag: str,\n        concurrency_limit: int,\n    ) -> UUID:\n        \"\"\"\n        Create a tag concurrency limit in the Prefect API. 
These limits govern concurrently\n        running tasks.\n\n        Args:\n            tag: a tag the concurrency limit is applied to\n            concurrency_limit: the maximum number of concurrent task runs for a given tag\n\n        Raises:\n            httpx.RequestError: if the concurrency limit was not created for any reason\n\n        Returns:\n            the ID of the concurrency limit in the backend\n        \"\"\"\n\n        concurrency_limit_create = ConcurrencyLimitCreate(\n            tag=tag,\n            concurrency_limit=concurrency_limit,\n        )\n        response = await self._client.post(\n            \"/concurrency_limits/\",\n            json=concurrency_limit_create.dict(json_compatible=True),\n        )\n\n        concurrency_limit_id = response.json().get(\"id\")\n\n        if not concurrency_limit_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        return UUID(concurrency_limit_id)\n\n    async def read_concurrency_limit_by_tag(\n        self,\n        tag: str,\n    ):\n        \"\"\"\n        Read the concurrency limit set on a specific tag.\n\n        Args:\n            tag: a tag the concurrency limit is applied to\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: if the concurrency limit was not created for any reason\n\n        Returns:\n            the concurrency limit set on a specific tag\n        \"\"\"\n        try:\n            response = await self._client.get(\n                f\"/concurrency_limits/tag/{tag}\",\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n        concurrency_limit_id = response.json().get(\"id\")\n\n        if not concurrency_limit_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        concurrency_limit = ConcurrencyLimit.parse_obj(response.json())\n        return concurrency_limit\n\n    async def read_concurrency_limits(\n        self,\n        limit: int,\n        offset: int,\n    ):\n        \"\"\"\n        Lists concurrency limits set on task run tags.\n\n        Args:\n            limit: the maximum number of concurrency limits returned\n            offset: the concurrency limit query offset\n\n        Returns:\n            a list of concurrency limits\n        \"\"\"\n\n        body = {\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n\n        response = await self._client.post(\"/concurrency_limits/filter\", json=body)\n        return pydantic.parse_obj_as(List[ConcurrencyLimit], response.json())\n\n    async def reset_concurrency_limit_by_tag(\n        self,\n        tag: str,\n        slot_override: Optional[List[Union[UUID, str]]] = None,\n    ):\n        \"\"\"\n        Resets the concurrency limit slots set on a specific tag.\n\n        Args:\n            tag: a tag the concurrency limit is applied to\n            slot_override: a list of task run IDs that are currently using a\n                concurrency slot, please check that any task run IDs included in\n                `slot_override` are currently running, otherwise those concurrency\n                slots will never be released.\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If request fails\n\n        \"\"\"\n  
      if slot_override is not None:\n            slot_override = [str(slot) for slot in slot_override]\n\n        try:\n            await self._client.post(\n                f\"/concurrency_limits/tag/{tag}/reset\",\n                json=dict(slot_override=slot_override),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def delete_concurrency_limit_by_tag(\n        self,\n        tag: str,\n    ):\n        \"\"\"\n        Delete the concurrency limit set on a specific tag.\n\n        Args:\n            tag: a tag the concurrency limit is applied to\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If request fails\n\n        \"\"\"\n        try:\n            await self._client.delete(\n                f\"/concurrency_limits/tag/{tag}\",\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def create_work_queue(\n        self,\n        name: str,\n        tags: Optional[List[str]] = None,\n        description: Optional[str] = None,\n        is_paused: Optional[bool] = None,\n        concurrency_limit: Optional[int] = None,\n        priority: Optional[int] = None,\n        work_pool_name: Optional[str] = None,\n    ) -> WorkQueue:\n        \"\"\"\n        Create a work queue.\n\n        Args:\n            name: a unique name for the work queue\n            tags: DEPRECATED: an optional list of tags to filter on; only work scheduled with these tags\n                will be included in the queue. This option will be removed on 2023-02-23.\n            description: An optional description for the work queue.\n            is_paused: Whether or not the work queue is paused.\n            concurrency_limit: An optional concurrency limit for the work queue.\n            priority: The queue's priority. 
Lower values are higher priority (1 is the highest).\n            work_pool_name: The name of the work pool to use for this queue.\n\n        Raises:\n            prefect.exceptions.ObjectAlreadyExists: If request returns 409\n            httpx.RequestError: If request fails\n\n        Returns:\n            The created work queue\n        \"\"\"\n        if tags:\n            warnings.warn(\n                (\n                    \"The use of tags for creating work queue filters is deprecated.\"\n                    \" This option will be removed on 2023-02-23.\"\n                ),\n                DeprecationWarning,\n            )\n            filter = QueueFilter(tags=tags)\n        else:\n            filter = None\n        create_model = WorkQueueCreate(name=name, filter=filter)\n        if description is not None:\n            create_model.description = description\n        if is_paused is not None:\n            create_model.is_paused = is_paused\n        if concurrency_limit is not None:\n            create_model.concurrency_limit = concurrency_limit\n        if priority is not None:\n            create_model.priority = priority\n\n        data = create_model.dict(json_compatible=True)\n        try:\n            if work_pool_name is not None:\n                response = await self._client.post(\n                    f\"/work_pools/{work_pool_name}/queues\", json=data\n                )\n            else:\n                response = await self._client.post(\"/work_queues/\", json=data)\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_409_CONFLICT:\n                raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n            elif e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return WorkQueue.parse_obj(response.json())\n\n    async def read_work_queue_by_name(\n        self,\n        name: str,\n        work_pool_name: Optional[str] = None,\n    ) -> WorkQueue:\n        \"\"\"\n        Read a work queue by name.\n\n        Args:\n            name (str): a unique name for the work queue\n            work_pool_name (str, optional): the name of the work pool\n                the queue belongs to.\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: if no work queue is found\n            httpx.HTTPStatusError: other status errors\n\n        Returns:\n            WorkQueue: a work queue API object\n        \"\"\"\n        try:\n            if work_pool_name is not None:\n                response = await self._client.get(\n                    f\"/work_pools/{work_pool_name}/queues/{name}\"\n                )\n            else:\n                response = await self._client.get(f\"/work_queues/name/{name}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n        return WorkQueue.parse_obj(response.json())\n\n    async def update_work_queue(self, id: UUID, **kwargs):\n        \"\"\"\n        Update properties of a work queue.\n\n        Args:\n            id: the ID of the work queue to update\n            **kwargs: the fields to update\n\n        Raises:\n            ValueError: if no kwargs are provided\n            prefect.exceptions.ObjectNotFound: if request returns 404\n            
httpx.RequestError: if the request fails\n\n        \"\"\"\n        if not kwargs:\n            raise ValueError(\"No fields provided to update.\")\n\n        data = WorkQueueUpdate(**kwargs).dict(json_compatible=True, exclude_unset=True)\n        try:\n            await self._client.patch(f\"/work_queues/{id}\", json=data)\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def get_runs_in_work_queue(\n        self,\n        id: UUID,\n        limit: int = 10,\n        scheduled_before: datetime.datetime = None,\n    ) -> List[FlowRun]:\n        \"\"\"\n        Read flow runs off a work queue.\n\n        Args:\n            id: the id of the work queue to read from\n            limit: a limit on the number of runs to return\n            scheduled_before: a timestamp; only runs scheduled before this time will be returned.\n                Defaults to now.\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If request fails\n\n        Returns:\n            List[FlowRun]: a list of FlowRun objects read from the queue\n        \"\"\"\n        if scheduled_before is None:\n            scheduled_before = pendulum.now(\"UTC\")\n\n        try:\n            response = await self._client.post(\n                f\"/work_queues/{id}/get_runs\",\n                json={\n                    \"limit\": limit,\n                    \"scheduled_before\": scheduled_before.isoformat(),\n                },\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return pydantic.parse_obj_as(List[FlowRun], response.json())\n\n    async def read_work_queue(\n        self,\n        id: UUID,\n    ) -> WorkQueue:\n        \"\"\"\n        Read a work queue.\n\n        Args:\n            id: the id of the work queue to load\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If request fails\n\n        Returns:\n            WorkQueue: an instantiated WorkQueue object\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/work_queues/{id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return WorkQueue.parse_obj(response.json())\n\n    async def read_work_queue_status(\n        self,\n        id: UUID,\n    ) -> WorkQueueStatusDetail:\n        \"\"\"\n        Read a work queue status.\n\n        Args:\n            id: the id of the work queue to load\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If request fails\n\n        Returns:\n            WorkQueueStatus: an instantiated WorkQueueStatus object\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/work_queues/{id}/status\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            
else:\n                raise\n        return WorkQueueStatusDetail.parse_obj(response.json())\n\n    async def match_work_queues(\n        self,\n        prefixes: List[str],\n        work_pool_name: Optional[str] = None,\n    ) -> List[WorkQueue]:\n        \"\"\"\n        Query the Prefect API for work queues with names with a specific prefix.\n\n        Args:\n            prefixes: a list of strings used to match work queue name prefixes\n            work_pool_name: an optional work pool name to scope the query to\n\n        Returns:\n            a list of WorkQueue model representations\n                of the work queues\n        \"\"\"\n        page_length = 100\n        current_page = 0\n        work_queues = []\n\n        while True:\n            new_queues = await self.read_work_queues(\n                work_pool_name=work_pool_name,\n                offset=current_page * page_length,\n                limit=page_length,\n                work_queue_filter=WorkQueueFilter(\n                    name=WorkQueueFilterName(startswith_=prefixes)\n                ),\n            )\n            if not new_queues:\n                break\n            work_queues += new_queues\n            current_page += 1\n\n        return work_queues\n\n    async def delete_work_queue_by_id(\n        self,\n        id: UUID,\n    ):\n        \"\"\"\n        Delete a work queue by its ID.\n\n        Args:\n            id: the id of the work queue to delete\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If requests fails\n        \"\"\"\n        try:\n            await self._client.delete(\n                f\"/work_queues/{id}\",\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def create_block_type(self, block_type: BlockTypeCreate) -> BlockType:\n        \"\"\"\n        Create a block type in the Prefect API.\n        \"\"\"\n        try:\n            response = await self._client.post(\n                \"/block_types/\",\n                json=block_type.dict(\n                    json_compatible=True, exclude_unset=True, exclude={\"id\"}\n                ),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_409_CONFLICT:\n                raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n            else:\n                raise\n        return BlockType.parse_obj(response.json())\n\n    async def create_block_schema(self, block_schema: BlockSchemaCreate) -> BlockSchema:\n        \"\"\"\n        Create a block schema in the Prefect API.\n        \"\"\"\n        try:\n            response = await self._client.post(\n                \"/block_schemas/\",\n                json=block_schema.dict(\n                    json_compatible=True,\n                    exclude_unset=True,\n                    exclude={\"id\", \"block_type\", \"checksum\"},\n                ),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_409_CONFLICT:\n                raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n            else:\n                raise\n        return BlockSchema.parse_obj(response.json())\n\n    async def create_block_document(\n        self,\n        block_document: 
Union[BlockDocument, BlockDocumentCreate],\n        include_secrets: bool = True,\n    ) -> BlockDocument:\n        \"\"\"\n        Create a block document in the Prefect API. This data is used to configure a\n        corresponding Block.\n\n        Args:\n            include_secrets (bool): whether to include secret values\n                on the stored Block, corresponding to Pydantic's `SecretStr` and\n                `SecretBytes` fields. Note Blocks may not work as expected if\n                this is set to `False`.\n        \"\"\"\n        if isinstance(block_document, BlockDocument):\n            block_document = BlockDocumentCreate.parse_obj(\n                block_document.dict(\n                    json_compatible=True,\n                    include_secrets=include_secrets,\n                    exclude_unset=True,\n                    exclude={\"id\", \"block_schema\", \"block_type\"},\n                ),\n            )\n\n        try:\n            response = await self._client.post(\n                \"/block_documents/\",\n                json=block_document.dict(\n                    json_compatible=True,\n                    include_secrets=include_secrets,\n                    exclude_unset=True,\n                    exclude={\"id\", \"block_schema\", \"block_type\"},\n                ),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_409_CONFLICT:\n                raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n            else:\n                raise\n        return BlockDocument.parse_obj(response.json())\n\n    async def update_block_document(\n        self,\n        block_document_id: UUID,\n        block_document: BlockDocumentUpdate,\n    ):\n        \"\"\"\n        Update a block document in the Prefect API.\n        \"\"\"\n        try:\n            await self._client.patch(\n                f\"/block_documents/{block_document_id}\",\n                json=block_document.dict(\n                    json_compatible=True,\n                    exclude_unset=True,\n                    include={\"data\", \"merge_existing_data\", \"block_schema_id\"},\n                    include_secrets=True,\n                ),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def delete_block_document(self, block_document_id: UUID):\n        \"\"\"\n        Delete a block document.\n        \"\"\"\n        try:\n            await self._client.delete(f\"/block_documents/{block_document_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_block_type_by_slug(self, slug: str) -> BlockType:\n        \"\"\"\n        Read a block type by its slug.\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/block_types/slug/{slug}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return BlockType.parse_obj(response.json())\n\n    async def read_block_schema_by_checksum(\n        self, checksum: str, version: 
Optional[str] = None\n    ) -> BlockSchema:\n        \"\"\"\n        Look up a block schema checksum\n        \"\"\"\n        try:\n            url = f\"/block_schemas/checksum/{checksum}\"\n            if version is not None:\n                url = f\"{url}?version={version}\"\n            response = await self._client.get(url)\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return BlockSchema.parse_obj(response.json())\n\n    async def update_block_type(self, block_type_id: UUID, block_type: BlockTypeUpdate):\n        \"\"\"\n        Update a block document in the Prefect API.\n        \"\"\"\n        try:\n            await self._client.patch(\n                f\"/block_types/{block_type_id}\",\n                json=block_type.dict(\n                    json_compatible=True,\n                    exclude_unset=True,\n                    include=BlockTypeUpdate.updatable_fields(),\n                    include_secrets=True,\n                ),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def delete_block_type(self, block_type_id: UUID):\n        \"\"\"\n        Delete a block type.\n        \"\"\"\n        try:\n            await self._client.delete(f\"/block_types/{block_type_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            elif (\n                e.response.status_code == status.HTTP_403_FORBIDDEN\n                and e.response.json()[\"detail\"]\n                == \"protected block types cannot be deleted.\"\n            ):\n                raise prefect.exceptions.ProtectedBlockError(\n                    \"Protected block types cannot be deleted.\"\n                ) from e\n            else:\n                raise\n\n    async def read_block_types(self) -> List[BlockType]:\n        \"\"\"\n        Read all block types\n        Raises:\n            httpx.RequestError: if the block types were not found\n\n        Returns:\n            List of BlockTypes.\n        \"\"\"\n        response = await self._client.post(\"/block_types/filter\", json={})\n        return pydantic.parse_obj_as(List[BlockType], response.json())\n\n    async def read_block_schemas(self) -> List[BlockSchema]:\n        \"\"\"\n        Read all block schemas\n        Raises:\n            httpx.RequestError: if a valid block schema was not found\n\n        Returns:\n            A BlockSchema.\n        \"\"\"\n        response = await self._client.post(\"/block_schemas/filter\", json={})\n        return pydantic.parse_obj_as(List[BlockSchema], response.json())\n\n    async def get_most_recent_block_schema_for_block_type(\n        self,\n        block_type_id: UUID,\n    ) -> Optional[BlockSchema]:\n        \"\"\"\n        Fetches the most recent block schema for a specified block type ID.\n\n        Args:\n            block_type_id: The ID of the block type.\n\n        Raises:\n            httpx.RequestError: If the request fails for any reason.\n\n        Returns:\n            The most recent block schema or None.\n        \"\"\"\n        try:\n            response = await 
self._client.post(\n                \"/block_schemas/filter\",\n                json={\n                    \"block_schemas\": {\"block_type_id\": {\"any_\": [str(block_type_id)]}},\n                    \"limit\": 1,\n                },\n            )\n        except httpx.HTTPStatusError:\n            raise\n        return BlockSchema.parse_obj(response.json()[0]) if response.json() else None\n\n    async def read_block_document(\n        self,\n        block_document_id: UUID,\n        include_secrets: bool = True,\n    ):\n        \"\"\"\n        Read the block document with the specified ID.\n\n        Args:\n            block_document_id: the block document id\n            include_secrets (bool): whether to include secret values\n                on the Block, corresponding to Pydantic's `SecretStr` and\n                `SecretBytes` fields. These fields are automatically obfuscated\n                by Pydantic, but users can additionally choose not to receive\n                their values from the API. Note that any business logic on the\n                Block may not work if this is `False`.\n\n        Raises:\n            httpx.RequestError: if the block document was not found for any reason\n\n        Returns:\n            A block document or None.\n        \"\"\"\n        assert (\n            block_document_id is not None\n        ), \"Unexpected ID on block document. Was it persisted?\"\n        try:\n            response = await self._client.get(\n                f\"/block_documents/{block_document_id}\",\n                params=dict(include_secrets=include_secrets),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return BlockDocument.parse_obj(response.json())\n\n    async def read_block_document_by_name(\n        self,\n        name: str,\n        block_type_slug: str,\n        include_secrets: bool = True,\n    ) -> BlockDocument:\n        \"\"\"\n        Read the block document with the specified name that corresponds to a\n        specific block type name.\n\n        Args:\n            name: The block document name.\n            block_type_slug: The block type slug.\n            include_secrets (bool): whether to include secret values\n                on the Block, corresponding to Pydantic's `SecretStr` and\n                `SecretBytes` fields. These fields are automatically obfuscated\n                by Pydantic, but users can additionally choose not to receive\n                their values from the API. 
Note that any business logic on the\n                Block may not work if this is `False`.\n\n        Raises:\n            httpx.RequestError: if the block document was not found for any reason\n\n        Returns:\n            A block document or None.\n        \"\"\"\n        try:\n            response = await self._client.get(\n                f\"/block_types/slug/{block_type_slug}/block_documents/name/{name}\",\n                params=dict(include_secrets=include_secrets),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return BlockDocument.parse_obj(response.json())\n\n    async def read_block_documents(\n        self,\n        block_schema_type: Optional[str] = None,\n        offset: Optional[int] = None,\n        limit: Optional[int] = None,\n        include_secrets: bool = True,\n    ):\n        \"\"\"\n        Read block documents\n\n        Args:\n            block_schema_type: an optional block schema type\n            offset: an offset\n            limit: the number of blocks to return\n            include_secrets (bool): whether to include secret values\n                on the Block, corresponding to Pydantic's `SecretStr` and\n                `SecretBytes` fields. These fields are automatically obfuscated\n                by Pydantic, but users can additionally choose not to receive\n                their values from the API. Note that any business logic on the\n                Block may not work if this is `False`.\n\n        Returns:\n            A list of block documents\n        \"\"\"\n        response = await self._client.post(\n            \"/block_documents/filter\",\n            json=dict(\n                block_schema_type=block_schema_type,\n                offset=offset,\n                limit=limit,\n                include_secrets=include_secrets,\n            ),\n        )\n        return pydantic.parse_obj_as(List[BlockDocument], response.json())\n\n    async def read_block_documents_by_type(\n        self,\n        block_type_slug: str,\n        offset: Optional[int] = None,\n        limit: Optional[int] = None,\n        include_secrets: bool = True,\n    ) -> List[BlockDocument]:\n        \"\"\"Retrieve block documents by block type slug.\n\n        Args:\n            block_type_slug: The block type slug.\n            offset: an offset\n            limit: the number of blocks to return\n            include_secrets: whether to include secret values\n\n        Returns:\n            A list of block documents\n        \"\"\"\n        response = await self._client.get(\n            f\"/block_types/slug/{block_type_slug}/block_documents\",\n            params=dict(\n                offset=offset,\n                limit=limit,\n                include_secrets=include_secrets,\n            ),\n        )\n\n        return pydantic.parse_obj_as(List[BlockDocument], response.json())\n\n    async def create_deployment(\n        self,\n        flow_id: UUID,\n        name: str,\n        version: str = None,\n        schedule: SCHEDULE_TYPES = None,\n        schedules: List[DeploymentScheduleCreate] = None,\n        parameters: Optional[Dict[str, Any]] = None,\n        description: str = None,\n        work_queue_name: str = None,\n        work_pool_name: str = None,\n        tags: List[str] = None,\n        storage_document_id: UUID = None,\n        manifest_path: str 
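A short sketch of reading block documents back with and without secret values, under the same assumptions as above; the block type slug and document name are placeholders.

# Sketch: read one block document by name, then list all documents of a type.
from prefect.client.orchestration import get_client

async def show_block_documents() -> None:
    async with get_client() as client:
        doc = await client.read_block_document_by_name(
            name="my-creds", block_type_slug="secret", include_secrets=False
        )
        print(doc.data)  # secret fields stay obfuscated when include_secrets=False

        docs = await client.read_block_documents_by_type(
            block_type_slug="secret", limit=10
        )
        print([d.name for d in docs])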
= None,\n        path: str = None,\n        entrypoint: str = None,\n        infrastructure_document_id: UUID = None,\n        infra_overrides: Optional[Dict[str, Any]] = None,  # for backwards compat\n        parameter_openapi_schema: Optional[Dict[str, Any]] = None,\n        is_schedule_active: Optional[bool] = None,\n        paused: Optional[bool] = None,\n        pull_steps: Optional[List[dict]] = None,\n        enforce_parameter_schema: Optional[bool] = None,\n        job_variables: Optional[Dict[str, Any]] = None,\n    ) -> UUID:\n        \"\"\"\n        Create a deployment.\n\n        Args:\n            flow_id: the flow ID to create a deployment for\n            name: the name of the deployment\n            version: an optional version string for the deployment\n            schedule: an optional schedule to apply to the deployment\n            tags: an optional list of tags to apply to the deployment\n            storage_document_id: an reference to the storage block document\n                used for the deployed flow\n            infrastructure_document_id: an reference to the infrastructure block document\n                to use for this deployment\n            job_variables: A dictionary of dot delimited infrastructure overrides that\n                will be applied at runtime; for example `env.CONFIG_KEY=config_value` or\n                `namespace='prefect'`. This argument was previously named `infra_overrides`.\n                Both arguments are supported for backwards compatibility.\n\n        Raises:\n            httpx.RequestError: if the deployment was not created for any reason\n\n        Returns:\n            the ID of the deployment in the backend\n        \"\"\"\n        jv = handle_deprecated_infra_overrides_parameter(job_variables, infra_overrides)\n\n        deployment_create = DeploymentCreate(\n            flow_id=flow_id,\n            name=name,\n            version=version,\n            parameters=dict(parameters or {}),\n            tags=list(tags or []),\n            work_queue_name=work_queue_name,\n            description=description,\n            storage_document_id=storage_document_id,\n            path=path,\n            entrypoint=entrypoint,\n            manifest_path=manifest_path,  # for backwards compat\n            infrastructure_document_id=infrastructure_document_id,\n            job_variables=jv,\n            parameter_openapi_schema=parameter_openapi_schema,\n            is_schedule_active=is_schedule_active,\n            paused=paused,\n            schedule=schedule,\n            schedules=schedules or [],\n            pull_steps=pull_steps,\n            enforce_parameter_schema=enforce_parameter_schema,\n        )\n\n        if work_pool_name is not None:\n            deployment_create.work_pool_name = work_pool_name\n\n        # Exclude newer fields that are not set to avoid compatibility issues\n        exclude = {\n            field\n            for field in [\"work_pool_name\", \"work_queue_name\"]\n            if field not in deployment_create.__fields_set__\n        }\n\n        if deployment_create.is_schedule_active is None:\n            exclude.add(\"is_schedule_active\")\n\n        if deployment_create.paused is None:\n            exclude.add(\"paused\")\n\n        if deployment_create.pull_steps is None:\n            exclude.add(\"pull_steps\")\n\n        if deployment_create.enforce_parameter_schema is None:\n            exclude.add(\"enforce_parameter_schema\")\n\n        json = deployment_create.dict(json_compatible=True, 
exclude=exclude)\n        response = await self._client.post(\n            \"/deployments/\",\n            json=json,\n        )\n        deployment_id = response.json().get(\"id\")\n        if not deployment_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        return UUID(deployment_id)\n\n    async def update_schedule(self, deployment_id: UUID, active: bool = True):\n        path = \"set_schedule_active\" if active else \"set_schedule_inactive\"\n        await self._client.post(\n            f\"/deployments/{deployment_id}/{path}\",\n        )\n\n    async def set_deployment_paused_state(self, deployment_id: UUID, paused: bool):\n        await self._client.patch(\n            f\"/deployments/{deployment_id}\", json={\"paused\": paused}\n        )\n\n    async def update_deployment(\n        self,\n        deployment: Deployment,\n        schedule: SCHEDULE_TYPES = None,\n        is_schedule_active: bool = None,\n    ):\n        deployment_update = DeploymentUpdate(\n            version=deployment.version,\n            schedule=schedule if schedule is not None else deployment.schedule,\n            is_schedule_active=(\n                is_schedule_active\n                if is_schedule_active is not None\n                else deployment.is_schedule_active\n            ),\n            description=deployment.description,\n            work_queue_name=deployment.work_queue_name,\n            tags=deployment.tags,\n            manifest_path=deployment.manifest_path,\n            path=deployment.path,\n            entrypoint=deployment.entrypoint,\n            parameters=deployment.parameters,\n            storage_document_id=deployment.storage_document_id,\n            infrastructure_document_id=deployment.infrastructure_document_id,\n            job_variables=deployment.job_variables,\n            enforce_parameter_schema=deployment.enforce_parameter_schema,\n        )\n\n        if getattr(deployment, \"work_pool_name\", None) is not None:\n            deployment_update.work_pool_name = deployment.work_pool_name\n\n        exclude = set()\n        if deployment.enforce_parameter_schema is None:\n            exclude.add(\"enforce_parameter_schema\")\n\n        await self._client.patch(\n            f\"/deployments/{deployment.id}\",\n            json=deployment_update.dict(json_compatible=True, exclude=exclude),\n        )\n\n    async def _create_deployment_from_schema(self, schema: DeploymentCreate) -> UUID:\n        \"\"\"\n        Create a deployment from a prepared `DeploymentCreate` schema.\n        \"\"\"\n        # TODO: We are likely to remove this method once we have considered the\n        #       packaging interface for deployments further.\n        response = await self._client.post(\n            \"/deployments/\", json=schema.dict(json_compatible=True)\n        )\n        deployment_id = response.json().get(\"id\")\n        if not deployment_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        return UUID(deployment_id)\n\n    async def read_deployment(\n        self,\n        deployment_id: UUID,\n    ) -> DeploymentResponse:\n        \"\"\"\n        Query the Prefect API for a deployment by id.\n\n        Args:\n            deployment_id: the deployment ID of interest\n\n        Returns:\n            a [Deployment model][prefect.client.schemas.objects.Deployment] representation of the deployment\n        \"\"\"\n        try:\n            response = await 
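A sketch of registering a deployment for an already-registered flow and then pausing its schedule with `set_deployment_paused_state`; the flow id, entrypoint, work pool name, and parameters are placeholders that must match resources in your own workspace.

# Sketch: create a deployment, then pause scheduling without deleting it.
from uuid import UUID
from prefect.client.orchestration import get_client

async def register_deployment(flow_id: UUID) -> UUID:
    async with get_client() as client:
        deployment_id = await client.create_deployment(
            flow_id=flow_id,
            name="nightly",                        # hypothetical deployment name
            entrypoint="flows/etl.py:etl",         # hypothetical entrypoint
            work_pool_name="default-agent-pool",   # assumes this pool exists
            parameters={"window_days": 1},
            tags=["nightly"],
            paused=False,
        )
        await client.set_deployment_paused_state(deployment_id, paused=True)
        return deployment_id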
self._client.get(f\"/deployments/{deployment_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return DeploymentResponse.parse_obj(response.json())\n\n    async def read_deployment_by_name(\n        self,\n        name: str,\n    ) -> DeploymentResponse:\n        \"\"\"\n        Query the Prefect API for a deployment by name.\n\n        Args:\n            name: A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If request fails\n\n        Returns:\n            a Deployment model representation of the deployment\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/deployments/name/{name}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n        return DeploymentResponse.parse_obj(response.json())\n\n    async def read_deployments(\n        self,\n        *,\n        flow_filter: FlowFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        deployment_filter: DeploymentFilter = None,\n        work_pool_filter: WorkPoolFilter = None,\n        work_queue_filter: WorkQueueFilter = None,\n        limit: int = None,\n        sort: DeploymentSort = None,\n        offset: int = 0,\n    ) -> List[DeploymentResponse]:\n        \"\"\"\n        Query the Prefect API for deployments. 
Only deployments matching all\n        the provided criteria will be returned.\n\n        Args:\n            flow_filter: filter criteria for flows\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            deployment_filter: filter criteria for deployments\n            work_pool_filter: filter criteria for work pools\n            work_queue_filter: filter criteria for work pool queues\n            limit: a limit for the deployment query\n            offset: an offset for the deployment query\n\n        Returns:\n            a list of Deployment model representations\n                of the deployments\n        \"\"\"\n        body = {\n            \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n                if flow_run_filter\n                else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"deployments\": (\n                deployment_filter.dict(json_compatible=True)\n                if deployment_filter\n                else None\n            ),\n            \"work_pools\": (\n                work_pool_filter.dict(json_compatible=True)\n                if work_pool_filter\n                else None\n            ),\n            \"work_pool_queues\": (\n                work_queue_filter.dict(json_compatible=True)\n                if work_queue_filter\n                else None\n            ),\n            \"limit\": limit,\n            \"offset\": offset,\n            \"sort\": sort,\n        }\n\n        response = await self._client.post(\"/deployments/filter\", json=body)\n        return pydantic.parse_obj_as(List[DeploymentResponse], response.json())\n\n    async def delete_deployment(\n        self,\n        deployment_id: UUID,\n    ):\n        \"\"\"\n        Delete deployment by id.\n\n        Args:\n            deployment_id: The deployment id of interest.\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If requests fails\n        \"\"\"\n        try:\n            await self._client.delete(f\"/deployments/{deployment_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def create_deployment_schedules(\n        self,\n        deployment_id: UUID,\n        schedules: List[Tuple[SCHEDULE_TYPES, bool]],\n    ) -> List[DeploymentSchedule]:\n        \"\"\"\n        Create deployment schedules.\n\n        Args:\n            deployment_id: the deployment ID\n            schedules: a list of tuples containing the schedule to create\n                       and whether or not it should be active.\n\n        Raises:\n            httpx.RequestError: if the schedules were not created for any reason\n\n        Returns:\n            the list of schedules created in the backend\n        \"\"\"\n        deployment_schedule_create = [\n            DeploymentScheduleCreate(schedule=schedule[0], active=schedule[1])\n            for schedule in schedules\n        ]\n\n        json = [\n            deployment_schedule_create.dict(json_compatible=True)\n            for deployment_schedule_create in 
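A sketch of the filtered deployment query, assuming the filter and sort classes live in `prefect.client.schemas.filters` and `prefect.client.schemas.sorting` as in typical Prefect 2.x; the tag is a placeholder.

# Sketch: list deployments carrying a given tag, newest first.
from prefect.client.orchestration import get_client
from prefect.client.schemas.filters import DeploymentFilter, DeploymentFilterTags
from prefect.client.schemas.sorting import DeploymentSort

async def nightly_deployments():
    async with get_client() as client:
        return await client.read_deployments(
            deployment_filter=DeploymentFilter(
                tags=DeploymentFilterTags(all_=["nightly"])
            ),
            sort=DeploymentSort.CREATED_DESC,
            limit=20,
        )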
deployment_schedule_create\n        ]\n        response = await self._client.post(\n            f\"/deployments/{deployment_id}/schedules\", json=json\n        )\n        return pydantic.parse_obj_as(List[DeploymentSchedule], response.json())\n\n    async def read_deployment_schedules(\n        self,\n        deployment_id: UUID,\n    ) -> List[DeploymentSchedule]:\n        \"\"\"\n        Query the Prefect API for a deployment's schedules.\n\n        Args:\n            deployment_id: the deployment ID\n\n        Returns:\n            a list of DeploymentSchedule model representations of the deployment schedules\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/deployments/{deployment_id}/schedules\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return pydantic.parse_obj_as(List[DeploymentSchedule], response.json())\n\n    async def update_deployment_schedule(\n        self,\n        deployment_id: UUID,\n        schedule_id: UUID,\n        active: Optional[bool] = None,\n        schedule: Optional[SCHEDULE_TYPES] = None,\n    ):\n        \"\"\"\n        Update a deployment schedule by ID.\n\n        Args:\n            deployment_id: the deployment ID\n            schedule_id: the deployment schedule ID of interest\n            active: whether or not the schedule should be active\n            schedule: the cron, rrule, or interval schedule this deployment schedule should use\n        \"\"\"\n        kwargs = {}\n        if active is not None:\n            kwargs[\"active\"] = active\n        elif schedule is not None:\n            kwargs[\"schedule\"] = schedule\n\n        deployment_schedule_update = DeploymentScheduleUpdate(**kwargs)\n        json = deployment_schedule_update.dict(json_compatible=True, exclude_unset=True)\n\n        try:\n            await self._client.patch(\n                f\"/deployments/{deployment_id}/schedules/{schedule_id}\", json=json\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def delete_deployment_schedule(\n        self,\n        deployment_id: UUID,\n        schedule_id: UUID,\n    ) -> None:\n        \"\"\"\n        Delete a deployment schedule.\n\n        Args:\n            deployment_id: the deployment ID\n            schedule_id: the ID of the deployment schedule to delete.\n\n        Raises:\n            httpx.RequestError: if the schedules were not deleted for any reason\n        \"\"\"\n        try:\n            await self._client.delete(\n                f\"/deployments/{deployment_id}/schedules/{schedule_id}\"\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_flow_run(self, flow_run_id: UUID) -> FlowRun:\n        \"\"\"\n        Query the Prefect API for a flow run by id.\n\n        Args:\n            flow_run_id: the flow run ID of interest\n\n        Returns:\n            a Flow Run model representation of the flow run\n        \"\"\"\n        try:\n            response = await 
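A sketch of the schedule helpers: attach one cron schedule to a deployment, then deactivate it by id. `CronSchedule` is assumed to come from `prefect.client.schemas.schedules`; the deployment id is a placeholder.

# Sketch: create a deployment schedule as a (schedule, active) tuple, then disable it.
from prefect.client.orchestration import get_client
from prefect.client.schemas.schedules import CronSchedule

async def manage_schedules(deployment_id):
    async with get_client() as client:
        created = await client.create_deployment_schedules(
            deployment_id,
            schedules=[(CronSchedule(cron="0 6 * * *"), True)],
        )
        schedule_id = created[0].id
        await client.update_deployment_schedule(
            deployment_id, schedule_id, active=False
        )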
self._client.get(f\"/flow_runs/{flow_run_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return FlowRun.parse_obj(response.json())\n\n    async def resume_flow_run(\n        self, flow_run_id: UUID, run_input: Optional[Dict] = None\n    ) -> OrchestrationResult:\n        \"\"\"\n        Resumes a paused flow run.\n\n        Args:\n            flow_run_id: the flow run ID of interest\n            run_input: the input to resume the flow run with\n\n        Returns:\n            an OrchestrationResult model representation of state orchestration output\n        \"\"\"\n        try:\n            response = await self._client.post(\n                f\"/flow_runs/{flow_run_id}/resume\", json={\"run_input\": run_input}\n            )\n        except httpx.HTTPStatusError:\n            raise\n\n        return OrchestrationResult.parse_obj(response.json())\n\n    async def read_flow_runs(\n        self,\n        *,\n        flow_filter: FlowFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        deployment_filter: DeploymentFilter = None,\n        work_pool_filter: WorkPoolFilter = None,\n        work_queue_filter: WorkQueueFilter = None,\n        sort: FlowRunSort = None,\n        limit: int = None,\n        offset: int = 0,\n    ) -> List[FlowRun]:\n        \"\"\"\n        Query the Prefect API for flow runs. Only flow runs matching all criteria will\n        be returned.\n\n        Args:\n            flow_filter: filter criteria for flows\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            deployment_filter: filter criteria for deployments\n            work_pool_filter: filter criteria for work pools\n            work_queue_filter: filter criteria for work pool queues\n            sort: sort criteria for the flow runs\n            limit: limit for the flow run query\n            offset: offset for the flow run query\n\n        Returns:\n            a list of Flow Run model representations\n                of the flow runs\n        \"\"\"\n        body = {\n            \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n                if flow_run_filter\n                else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"deployments\": (\n                deployment_filter.dict(json_compatible=True)\n                if deployment_filter\n                else None\n            ),\n            \"work_pools\": (\n                work_pool_filter.dict(json_compatible=True)\n                if work_pool_filter\n                else None\n            ),\n            \"work_pool_queues\": (\n                work_queue_filter.dict(json_compatible=True)\n                if work_queue_filter\n                else None\n            ),\n            \"sort\": sort,\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n\n        response = await self._client.post(\"/flow_runs/filter\", json=body)\n        return pydantic.parse_obj_as(List[FlowRun], response.json())\n\n    async def set_flow_run_state(\n        self,\n   
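A sketch of reading flow runs, both a single run by id and a filtered listing; the filter classes and `StateType` import paths are assumptions based on standard Prefect 2.x layout, and the run id is a placeholder.

# Sketch: fetch one flow run, then list recent failed runs.
from prefect.client.orchestration import get_client
from prefect.client.schemas.filters import (
    FlowRunFilter,
    FlowRunFilterState,
    FlowRunFilterStateType,
)
from prefect.client.schemas.objects import StateType

async def failed_runs(flow_run_id):
    async with get_client() as client:
        run = await client.read_flow_run(flow_run_id)
        print(run.name, run.state.type if run.state else None)

        return await client.read_flow_runs(
            flow_run_filter=FlowRunFilter(
                state=FlowRunFilterState(
                    type=FlowRunFilterStateType(any_=[StateType.FAILED])
                )
            ),
            limit=10,
        )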
     flow_run_id: UUID,\n        state: \"prefect.states.State\",\n        force: bool = False,\n    ) -> OrchestrationResult:\n        \"\"\"\n        Set the state of a flow run.\n\n        Args:\n            flow_run_id: the id of the flow run\n            state: the state to set\n            force: if True, disregard orchestration logic when setting the state,\n                forcing the Prefect API to accept the state\n\n        Returns:\n            an OrchestrationResult model representation of state orchestration output\n        \"\"\"\n        state_create = state.to_state_create()\n        state_create.state_details.flow_run_id = flow_run_id\n        state_create.state_details.transition_id = uuid4()\n        try:\n            response = await self._client.post(\n                f\"/flow_runs/{flow_run_id}/set_state\",\n                json=dict(state=state_create.dict(json_compatible=True), force=force),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n        return OrchestrationResult.parse_obj(response.json())\n\n    async def read_flow_run_states(\n        self, flow_run_id: UUID\n    ) -> List[prefect.states.State]:\n        \"\"\"\n        Query for the states of a flow run\n\n        Args:\n            flow_run_id: the id of the flow run\n\n        Returns:\n            a list of State model representations\n                of the flow run states\n        \"\"\"\n        response = await self._client.get(\n            \"/flow_run_states/\", params=dict(flow_run_id=str(flow_run_id))\n        )\n        return pydantic.parse_obj_as(List[prefect.states.State], response.json())\n\n    async def set_task_run_name(self, task_run_id: UUID, name: str):\n        task_run_data = TaskRunUpdate(name=name)\n        return await self._client.patch(\n            f\"/task_runs/{task_run_id}\",\n            json=task_run_data.dict(json_compatible=True, exclude_unset=True),\n        )\n\n    async def create_task_run(\n        self,\n        task: \"TaskObject[P, R]\",\n        flow_run_id: Optional[UUID],\n        dynamic_key: str,\n        name: Optional[str] = None,\n        extra_tags: Optional[Iterable[str]] = None,\n        state: Optional[prefect.states.State[R]] = None,\n        task_inputs: Optional[\n            Dict[\n                str,\n                List[\n                    Union[\n                        TaskRunResult,\n                        Parameter,\n                        Constant,\n                    ]\n                ],\n            ]\n        ] = None,\n    ) -> TaskRun:\n        \"\"\"\n        Create a task run\n\n        Args:\n            task: The Task to run\n            flow_run_id: The flow run id with which to associate the task run\n            dynamic_key: A key unique to this particular run of a Task within the flow\n            name: An optional name for the task run\n            extra_tags: an optional list of extra tags to apply to the task run in\n                addition to `task.tags`\n            state: The initial state for the run. If not provided, defaults to\n                `Pending` for now. 
Should always be a `Scheduled` type.\n            task_inputs: the set of inputs passed to the task\n\n        Returns:\n            The created task run.\n        \"\"\"\n        tags = set(task.tags).union(extra_tags or [])\n\n        if state is None:\n            state = prefect.states.Pending()\n\n        task_run_data = TaskRunCreate(\n            name=name,\n            flow_run_id=flow_run_id,\n            task_key=task.task_key,\n            dynamic_key=dynamic_key,\n            tags=list(tags),\n            task_version=task.version,\n            empirical_policy=TaskRunPolicy(\n                retries=task.retries,\n                retry_delay=task.retry_delay_seconds,\n                retry_jitter_factor=task.retry_jitter_factor,\n            ),\n            state=state.to_state_create(),\n            task_inputs=task_inputs or {},\n        )\n\n        response = await self._client.post(\n            \"/task_runs/\", json=task_run_data.dict(json_compatible=True)\n        )\n        return TaskRun.parse_obj(response.json())\n\n    async def read_task_run(self, task_run_id: UUID) -> TaskRun:\n        \"\"\"\n        Query the Prefect API for a task run by id.\n\n        Args:\n            task_run_id: the task run ID of interest\n\n        Returns:\n            a Task Run model representation of the task run\n        \"\"\"\n        response = await self._client.get(f\"/task_runs/{task_run_id}\")\n        return TaskRun.parse_obj(response.json())\n\n    async def read_task_runs(\n        self,\n        *,\n        flow_filter: FlowFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        deployment_filter: DeploymentFilter = None,\n        sort: TaskRunSort = None,\n        limit: int = None,\n        offset: int = 0,\n    ) -> List[TaskRun]:\n        \"\"\"\n        Query the Prefect API for task runs. 
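A sketch of forcing a state transition and inspecting the run's state history with the two methods above; it assumes `prefect.states.Completed` and a placeholder run id, and `force=True` bypasses orchestration rules as described in the docstring.

# Sketch: force a flow run into Completed, then print its state history.
from prefect.client.orchestration import get_client
from prefect.states import Completed

async def force_complete(flow_run_id):
    async with get_client() as client:
        result = await client.set_flow_run_state(
            flow_run_id, state=Completed(), force=True
        )
        print(result.status)
        for state in await client.read_flow_run_states(flow_run_id):
            print(state.timestamp, state.type, state.name)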
Only task runs matching all criteria will\n        be returned.\n\n        Args:\n            flow_filter: filter criteria for flows\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            deployment_filter: filter criteria for deployments\n            sort: sort criteria for the task runs\n            limit: a limit for the task run query\n            offset: an offset for the task run query\n\n        Returns:\n            a list of Task Run model representations\n                of the task runs\n        \"\"\"\n        body = {\n            \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n                if flow_run_filter\n                else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"deployments\": (\n                deployment_filter.dict(json_compatible=True)\n                if deployment_filter\n                else None\n            ),\n            \"sort\": sort,\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n        response = await self._client.post(\"/task_runs/filter\", json=body)\n        return pydantic.parse_obj_as(List[TaskRun], response.json())\n\n    async def delete_task_run(self, task_run_id: UUID) -> None:\n        \"\"\"\n        Delete a task run by id.\n\n        Args:\n            task_run_id: the task run ID of interest\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If requests fails\n        \"\"\"\n        try:\n            await self._client.delete(f\"/task_runs/{task_run_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def set_task_run_state(\n        self,\n        task_run_id: UUID,\n        state: prefect.states.State,\n        force: bool = False,\n    ) -> OrchestrationResult:\n        \"\"\"\n        Set the state of a task run.\n\n        Args:\n            task_run_id: the id of the task run\n            state: the state to set\n            force: if True, disregard orchestration logic when setting the state,\n                forcing the Prefect API to accept the state\n\n        Returns:\n            an OrchestrationResult model representation of state orchestration output\n        \"\"\"\n        state_create = state.to_state_create()\n        state_create.state_details.task_run_id = task_run_id\n        response = await self._client.post(\n            f\"/task_runs/{task_run_id}/set_state\",\n            json=dict(state=state_create.dict(json_compatible=True), force=force),\n        )\n        return OrchestrationResult.parse_obj(response.json())\n\n    async def read_task_run_states(\n        self, task_run_id: UUID\n    ) -> List[prefect.states.State]:\n        \"\"\"\n        Query for the states of a task run\n\n        Args:\n            task_run_id: the id of the task run\n\n        Returns:\n            a list of State model representations of the task run states\n        \"\"\"\n        response = await self._client.get(\n            \"/task_run_states/\", params=dict(task_run_id=str(task_run_id))\n        )\n        
return pydantic.parse_obj_as(List[prefect.states.State], response.json())\n\n    async def create_logs(self, logs: Iterable[Union[LogCreate, dict]]) -> None:\n        \"\"\"\n        Create logs for a flow or task run\n\n        Args:\n            logs: An iterable of `LogCreate` objects or already json-compatible dicts\n        \"\"\"\n        serialized_logs = [\n            log.dict(json_compatible=True) if isinstance(log, LogCreate) else log\n            for log in logs\n        ]\n        await self._client.post(\"/logs/\", json=serialized_logs)\n\n    async def create_flow_run_notification_policy(\n        self,\n        block_document_id: UUID,\n        is_active: bool = True,\n        tags: List[str] = None,\n        state_names: List[str] = None,\n        message_template: Optional[str] = None,\n    ) -> UUID:\n        \"\"\"\n        Create a notification policy for flow runs\n\n        Args:\n            block_document_id: The block document UUID\n            is_active: Whether the notification policy is active\n            tags: List of flow tags\n            state_names: List of state names\n            message_template: Notification message template\n        \"\"\"\n        if tags is None:\n            tags = []\n        if state_names is None:\n            state_names = []\n\n        policy = FlowRunNotificationPolicyCreate(\n            block_document_id=block_document_id,\n            is_active=is_active,\n            tags=tags,\n            state_names=state_names,\n            message_template=message_template,\n        )\n        response = await self._client.post(\n            \"/flow_run_notification_policies/\",\n            json=policy.dict(json_compatible=True),\n        )\n\n        policy_id = response.json().get(\"id\")\n        if not policy_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        return UUID(policy_id)\n\n    async def delete_flow_run_notification_policy(\n        self,\n        id: UUID,\n    ) -> None:\n        \"\"\"\n        Delete a flow run notification policy by id.\n\n        Args:\n            id: UUID of the flow run notification policy to delete.\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If requests fails\n        \"\"\"\n        try:\n            await self._client.delete(f\"/flow_run_notification_policies/{id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def update_flow_run_notification_policy(\n        self,\n        id: UUID,\n        block_document_id: Optional[UUID] = None,\n        is_active: Optional[bool] = None,\n        tags: Optional[List[str]] = None,\n        state_names: Optional[List[str]] = None,\n        message_template: Optional[str] = None,\n    ) -> None:\n        \"\"\"\n        Update a notification policy for flow runs\n\n        Args:\n            id: UUID of the notification policy\n            block_document_id: The block document UUID\n            is_active: Whether the notification policy is active\n            tags: List of flow tags\n            state_names: List of state names\n            message_template: Notification message template\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If requests fails\n        \"\"\"\n  
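A sketch of wiring up a flow run notification policy against an existing notification block document; the block document id, state names, and message template are placeholders and must correspond to a block and template variables accepted by your server.

# Sketch: notify on failed or crashed flow runs via an existing notification block.
from prefect.client.orchestration import get_client

async def notify_on_failure(block_document_id):
    async with get_client() as client:
        policy_id = await client.create_flow_run_notification_policy(
            block_document_id=block_document_id,
            state_names=["Failed", "Crashed"],
            tags=[],
            message_template="Flow run {flow_run_name} entered {flow_run_state_name}",
        )
        return policy_id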
      params = {}\n        if block_document_id is not None:\n            params[\"block_document_id\"] = block_document_id\n        if is_active is not None:\n            params[\"is_active\"] = is_active\n        if tags is not None:\n            params[\"tags\"] = tags\n        if state_names is not None:\n            params[\"state_names\"] = state_names\n        if message_template is not None:\n            params[\"message_template\"] = message_template\n\n        policy = FlowRunNotificationPolicyUpdate(**params)\n\n        try:\n            await self._client.patch(\n                f\"/flow_run_notification_policies/{id}\",\n                json=policy.dict(json_compatible=True, exclude_unset=True),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_flow_run_notification_policies(\n        self,\n        flow_run_notification_policy_filter: FlowRunNotificationPolicyFilter,\n        limit: Optional[int] = None,\n        offset: int = 0,\n    ) -> List[FlowRunNotificationPolicy]:\n        \"\"\"\n        Query the Prefect API for flow run notification policies. Only policies matching all criteria will\n        be returned.\n\n        Args:\n            flow_run_notification_policy_filter: filter criteria for notification policies\n            limit: a limit for the notification policies query\n            offset: an offset for the notification policies query\n\n        Returns:\n            a list of FlowRunNotificationPolicy model representations\n                of the notification policies\n        \"\"\"\n        body = {\n            \"flow_run_notification_policy_filter\": (\n                flow_run_notification_policy_filter.dict(json_compatible=True)\n                if flow_run_notification_policy_filter\n                else None\n            ),\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n        response = await self._client.post(\n            \"/flow_run_notification_policies/filter\", json=body\n        )\n        return pydantic.parse_obj_as(List[FlowRunNotificationPolicy], response.json())\n\n    async def read_logs(\n        self,\n        log_filter: LogFilter = None,\n        limit: int = None,\n        offset: int = None,\n        sort: LogSort = LogSort.TIMESTAMP_ASC,\n    ) -> List[Log]:\n        \"\"\"\n        Read flow and task run logs.\n        \"\"\"\n        body = {\n            \"logs\": log_filter.dict(json_compatible=True) if log_filter else None,\n            \"limit\": limit,\n            \"offset\": offset,\n            \"sort\": sort,\n        }\n\n        response = await self._client.post(\"/logs/filter\", json=body)\n        return pydantic.parse_obj_as(List[Log], response.json())\n\n    async def resolve_datadoc(self, datadoc: DataDocument) -> Any:\n        \"\"\"\n        Recursively decode possibly nested data documents.\n\n        \"server\" encoded documents will be retrieved from the server.\n\n        Args:\n            datadoc: The data document to resolve\n\n        Returns:\n            a decoded object, the innermost data\n        \"\"\"\n        if not isinstance(datadoc, DataDocument):\n            raise TypeError(\n                f\"`resolve_datadoc` received invalid type {type(datadoc).__name__}\"\n            )\n\n        async def resolve_inner(data):\n            if 
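A sketch of pulling recent log lines for one flow run with `read_logs`; the `LogFilter`/`LogFilterFlowRunId` import path is an assumption and the run id is a placeholder.

# Sketch: tail the last 50 log lines of a flow run.
from prefect.client.orchestration import get_client
from prefect.client.schemas.filters import LogFilter, LogFilterFlowRunId
from prefect.client.schemas.sorting import LogSort

async def tail_logs(flow_run_id):
    async with get_client() as client:
        logs = await client.read_logs(
            log_filter=LogFilter(flow_run_id=LogFilterFlowRunId(any_=[flow_run_id])),
            sort=LogSort.TIMESTAMP_DESC,
            limit=50,
        )
        for log in reversed(logs):
            print(log.timestamp, log.level, log.message)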
isinstance(data, bytes):\n                try:\n                    data = DataDocument.parse_raw(data)\n                except pydantic.ValidationError:\n                    return data\n\n            if isinstance(data, DataDocument):\n                return await resolve_inner(data.decode())\n\n            return data\n\n        return await resolve_inner(datadoc)\n\n    async def send_worker_heartbeat(\n        self,\n        work_pool_name: str,\n        worker_name: str,\n        heartbeat_interval_seconds: Optional[float] = None,\n    ):\n        \"\"\"\n        Sends a worker heartbeat for a given work pool.\n\n        Args:\n            work_pool_name: The name of the work pool to heartbeat against.\n            worker_name: The name of the worker sending the heartbeat.\n        \"\"\"\n        await self._client.post(\n            f\"/work_pools/{work_pool_name}/workers/heartbeat\",\n            json={\n                \"name\": worker_name,\n                \"heartbeat_interval_seconds\": heartbeat_interval_seconds,\n            },\n        )\n\n    async def read_workers_for_work_pool(\n        self,\n        work_pool_name: str,\n        worker_filter: Optional[WorkerFilter] = None,\n        offset: Optional[int] = None,\n        limit: Optional[int] = None,\n    ) -> List[Worker]:\n        \"\"\"\n        Reads workers for a given work pool.\n\n        Args:\n            work_pool_name: The name of the work pool for which to get\n                member workers.\n            worker_filter: Criteria by which to filter workers.\n            limit: Limit for the worker query.\n            offset: Limit for the worker query.\n        \"\"\"\n        response = await self._client.post(\n            f\"/work_pools/{work_pool_name}/workers/filter\",\n            json={\n                \"worker_filter\": (\n                    worker_filter.dict(json_compatible=True, exclude_unset=True)\n                    if worker_filter\n                    else None\n                ),\n                \"offset\": offset,\n                \"limit\": limit,\n            },\n        )\n\n        return pydantic.parse_obj_as(List[Worker], response.json())\n\n    async def read_work_pool(self, work_pool_name: str) -> WorkPool:\n        \"\"\"\n        Reads information for a given work pool\n\n        Args:\n            work_pool_name: The name of the work pool to for which to get\n                information.\n\n        Returns:\n            Information about the requested work pool.\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/work_pools/{work_pool_name}\")\n            return pydantic.parse_obj_as(WorkPool, response.json())\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_work_pools(\n        self,\n        limit: Optional[int] = None,\n        offset: int = 0,\n        work_pool_filter: Optional[WorkPoolFilter] = None,\n    ) -> List[WorkPool]:\n        \"\"\"\n        Reads work pools.\n\n        Args:\n            limit: Limit for the work pool query.\n            offset: Offset for the work pool query.\n            work_pool_filter: Criteria by which to filter work pools.\n\n        Returns:\n            A list of work pools.\n        \"\"\"\n\n        body = {\n            \"limit\": limit,\n            \"offset\": offset,\n            \"work_pools\": (\n  
              work_pool_filter.dict(json_compatible=True)\n                if work_pool_filter\n                else None\n            ),\n        }\n        response = await self._client.post(\"/work_pools/filter\", json=body)\n        return pydantic.parse_obj_as(List[WorkPool], response.json())\n\n    async def create_work_pool(\n        self,\n        work_pool: WorkPoolCreate,\n    ) -> WorkPool:\n        \"\"\"\n        Creates a work pool with the provided configuration.\n\n        Args:\n            work_pool: Desired configuration for the new work pool.\n\n        Returns:\n            Information about the newly created work pool.\n        \"\"\"\n        try:\n            response = await self._client.post(\n                \"/work_pools/\",\n                json=work_pool.dict(json_compatible=True, exclude_unset=True),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_409_CONFLICT:\n                raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n            else:\n                raise\n\n        return pydantic.parse_obj_as(WorkPool, response.json())\n\n    async def update_work_pool(\n        self,\n        work_pool_name: str,\n        work_pool: WorkPoolUpdate,\n    ):\n        \"\"\"\n        Updates a work pool.\n\n        Args:\n            work_pool_name: Name of the work pool to update.\n            work_pool: Fields to update in the work pool.\n        \"\"\"\n        try:\n            await self._client.patch(\n                f\"/work_pools/{work_pool_name}\",\n                json=work_pool.dict(json_compatible=True, exclude_unset=True),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def delete_work_pool(\n        self,\n        work_pool_name: str,\n    ):\n        \"\"\"\n        Deletes a work pool.\n\n        Args:\n            work_pool_name: Name of the work pool to delete.\n        \"\"\"\n        try:\n            await self._client.delete(f\"/work_pools/{work_pool_name}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_work_queues(\n        self,\n        work_pool_name: Optional[str] = None,\n        work_queue_filter: Optional[WorkQueueFilter] = None,\n        limit: Optional[int] = None,\n        offset: Optional[int] = None,\n    ) -> List[WorkQueue]:\n        \"\"\"\n        Retrieves queues for a work pool.\n\n        Args:\n            work_pool_name: Name of the work pool for which to get queues.\n            work_queue_filter: Criteria by which to filter queues.\n            limit: Limit for the queue query.\n            offset: Limit for the queue query.\n\n        Returns:\n            List of queues for the specified work pool.\n        \"\"\"\n        json = {\n            \"work_queues\": (\n                work_queue_filter.dict(json_compatible=True, exclude_unset=True)\n                if work_queue_filter\n                else None\n            ),\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n\n        if work_pool_name:\n            try:\n                response = await self._client.post(\n                    
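A sketch of creating a work pool, reading it back, and pausing it via `update_work_pool`; the `WorkPoolCreate`/`WorkPoolUpdate` import path and the pool name "etl-pool" are assumptions.

# Sketch: provision a process-type work pool, then pause it.
from prefect.client.orchestration import get_client
from prefect.client.schemas.actions import WorkPoolCreate, WorkPoolUpdate

async def provision_pool():
    async with get_client() as client:
        pool = await client.create_work_pool(
            WorkPoolCreate(name="etl-pool", type="process")
        )
        print(pool.id, pool.type, pool.is_paused)
        await client.update_work_pool("etl-pool", WorkPoolUpdate(is_paused=True))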
f\"/work_pools/{work_pool_name}/queues/filter\",\n                    json=json,\n                )\n            except httpx.HTTPStatusError as e:\n                if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                    raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n                else:\n                    raise\n        else:\n            response = await self._client.post(\"/work_queues/filter\", json=json)\n\n        return pydantic.parse_obj_as(List[WorkQueue], response.json())\n\n    async def get_scheduled_flow_runs_for_deployments(\n        self,\n        deployment_ids: List[UUID],\n        scheduled_before: Optional[datetime.datetime] = None,\n        limit: Optional[int] = None,\n    ):\n        body: Dict[str, Any] = dict(deployment_ids=[str(id) for id in deployment_ids])\n        if scheduled_before:\n            body[\"scheduled_before\"] = str(scheduled_before)\n        if limit:\n            body[\"limit\"] = limit\n\n        response = await self._client.post(\n            \"/deployments/get_scheduled_flow_runs\",\n            json=body,\n        )\n\n        return pydantic.parse_obj_as(List[FlowRunResponse], response.json())\n\n    async def get_scheduled_flow_runs_for_work_pool(\n        self,\n        work_pool_name: str,\n        work_queue_names: Optional[List[str]] = None,\n        scheduled_before: Optional[datetime.datetime] = None,\n    ) -> List[WorkerFlowRunResponse]:\n        \"\"\"\n        Retrieves scheduled flow runs for the provided set of work pool queues.\n\n        Args:\n            work_pool_name: The name of the work pool that the work pool\n                queues are associated with.\n            work_queue_names: The names of the work pool queues from which\n                to get scheduled flow runs.\n            scheduled_before: Datetime used to filter returned flow runs. 
Flow runs\n                scheduled for after the given datetime string will not be returned.\n\n        Returns:\n            A list of worker flow run responses containing information about the\n            retrieved flow runs.\n        \"\"\"\n        body: Dict[str, Any] = {}\n        if work_queue_names is not None:\n            body[\"work_queue_names\"] = list(work_queue_names)\n        if scheduled_before:\n            body[\"scheduled_before\"] = str(scheduled_before)\n\n        response = await self._client.post(\n            f\"/work_pools/{work_pool_name}/get_scheduled_flow_runs\",\n            json=body,\n        )\n        return pydantic.parse_obj_as(List[WorkerFlowRunResponse], response.json())\n\n    async def create_artifact(\n        self,\n        artifact: ArtifactCreate,\n    ) -> Artifact:\n        \"\"\"\n        Creates an artifact with the provided configuration.\n\n        Args:\n            artifact: Desired configuration for the new artifact.\n        Returns:\n            Information about the newly created artifact.\n        \"\"\"\n\n        response = await self._client.post(\n            \"/artifacts/\",\n            json=artifact.dict(json_compatible=True, exclude_unset=True),\n        )\n\n        return pydantic.parse_obj_as(Artifact, response.json())\n\n    async def read_artifacts(\n        self,\n        *,\n        artifact_filter: ArtifactFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        sort: ArtifactSort = None,\n        limit: int = None,\n        offset: int = 0,\n    ) -> List[Artifact]:\n        \"\"\"\n        Query the Prefect API for artifacts. Only artifacts matching all criteria will\n        be returned.\n        Args:\n            artifact_filter: filter criteria for artifacts\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            sort: sort criteria for the artifacts\n            limit: limit for the artifact query\n            offset: offset for the artifact query\n        Returns:\n            a list of Artifact model representations of the artifacts\n        \"\"\"\n        body = {\n            \"artifacts\": (\n                artifact_filter.dict(json_compatible=True) if artifact_filter else None\n            ),\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"sort\": sort,\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n        response = await self._client.post(\"/artifacts/filter\", json=body)\n        return pydantic.parse_obj_as(List[Artifact], response.json())\n\n    async def read_latest_artifacts(\n        self,\n        *,\n        artifact_filter: ArtifactCollectionFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        sort: ArtifactCollectionSort = None,\n        limit: int = None,\n        offset: int = 0,\n    ) -> List[ArtifactCollection]:\n        \"\"\"\n        Query the Prefect API for artifacts. 
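A sketch of publishing a markdown artifact tied to a flow run and listing the latest artifact per key; the `ArtifactCreate` import path, the key, and the run id are assumptions.

# Sketch: create a markdown artifact, then read the latest artifacts.
from prefect.client.orchestration import get_client
from prefect.client.schemas.actions import ArtifactCreate

async def publish_report(flow_run_id):
    async with get_client() as client:
        artifact = await client.create_artifact(
            ArtifactCreate(
                key="nightly-report",          # hypothetical key
                type="markdown",
                data="# Nightly report\n\nAll checks passed.",
                flow_run_id=flow_run_id,
            )
        )
        latest = await client.read_latest_artifacts(limit=5)
        return artifact, latest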
Only artifacts matching all criteria will\n        be returned.\n        Args:\n            artifact_filter: filter criteria for artifacts\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            sort: sort criteria for the artifacts\n            limit: limit for the artifact query\n            offset: offset for the artifact query\n        Returns:\n            a list of Artifact model representations of the artifacts\n        \"\"\"\n        body = {\n            \"artifacts\": (\n                artifact_filter.dict(json_compatible=True) if artifact_filter else None\n            ),\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"sort\": sort,\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n        response = await self._client.post(\"/artifacts/latest/filter\", json=body)\n        return pydantic.parse_obj_as(List[ArtifactCollection], response.json())\n\n    async def delete_artifact(self, artifact_id: UUID) -> None:\n        \"\"\"\n        Deletes an artifact with the provided id.\n\n        Args:\n            artifact_id: The id of the artifact to delete.\n        \"\"\"\n        try:\n            await self._client.delete(f\"/artifacts/{artifact_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def create_variable(self, variable: VariableCreate) -> Variable:\n        \"\"\"\n        Creates an variable with the provided configuration.\n\n        Args:\n            variable: Desired configuration for the new variable.\n        Returns:\n            Information about the newly created variable.\n        \"\"\"\n        response = await self._client.post(\n            \"/variables/\",\n            json=variable.dict(json_compatible=True, exclude_unset=True),\n        )\n        return Variable(**response.json())\n\n    async def update_variable(self, variable: VariableUpdate) -> None:\n        \"\"\"\n        Updates a variable with the provided configuration.\n\n        Args:\n            variable: Desired configuration for the updated variable.\n        Returns:\n            Information about the updated variable.\n        \"\"\"\n        await self._client.patch(\n            f\"/variables/name/{variable.name}\",\n            json=variable.dict(json_compatible=True, exclude_unset=True),\n        )\n\n    async def read_variable_by_name(self, name: str) -> Optional[Variable]:\n        \"\"\"Reads a variable by name. 
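A sketch of the variable lifecycle through the client, assuming `VariableCreate`/`VariableUpdate` live in `prefect.client.schemas.actions`; the name and values are placeholders (variable values are stored as strings here).

# Sketch: create, read, update, and delete a variable by name.
from prefect.client.orchestration import get_client
from prefect.client.schemas.actions import VariableCreate, VariableUpdate

async def manage_variable():
    async with get_client() as client:
        await client.create_variable(VariableCreate(name="batch_size", value="100"))
        var = await client.read_variable_by_name("batch_size")
        print(var.value if var else "not found")
        await client.update_variable(VariableUpdate(name="batch_size", value="250"))
        await client.delete_variable_by_name("batch_size")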
Returns None if no variable is found.\"\"\"\n        try:\n            response = await self._client.get(f\"/variables/name/{name}\")\n            return Variable(**response.json())\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                return None\n            else:\n                raise\n\n    async def delete_variable_by_name(self, name: str):\n        \"\"\"Deletes a variable by name.\"\"\"\n        try:\n            await self._client.delete(f\"/variables/name/{name}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_variables(self, limit: int = None) -> List[Variable]:\n        \"\"\"Reads all variables.\"\"\"\n        response = await self._client.post(\"/variables/filter\", json={\"limit\": limit})\n        return pydantic.parse_obj_as(List[Variable], response.json())\n\n    async def read_worker_metadata(self) -> Dict[str, Any]:\n        \"\"\"Reads worker metadata stored in Prefect collection registry.\"\"\"\n        response = await self._client.get(\"collections/views/aggregate-worker-metadata\")\n        response.raise_for_status()\n        return response.json()\n\n    async def increment_concurrency_slots(\n        self, names: List[str], slots: int, mode: str\n    ) -> httpx.Response:\n        return await self._client.post(\n            \"/v2/concurrency_limits/increment\",\n            json={\"names\": names, \"slots\": slots, \"mode\": mode},\n        )\n\n    async def release_concurrency_slots(\n        self, names: List[str], slots: int, occupancy_seconds: float\n    ) -> httpx.Response:\n        return await self._client.post(\n            \"/v2/concurrency_limits/decrement\",\n            json={\n                \"names\": names,\n                \"slots\": slots,\n                \"occupancy_seconds\": occupancy_seconds,\n            },\n        )\n\n    async def create_global_concurrency_limit(\n        self, concurrency_limit: GlobalConcurrencyLimitCreate\n    ) -> UUID:\n        response = await self._client.post(\n            \"/v2/concurrency_limits/\",\n            json=concurrency_limit.dict(json_compatible=True, exclude_unset=True),\n        )\n        return UUID(response.json()[\"id\"])\n\n    async def update_global_concurrency_limit(\n        self, name: str, concurrency_limit: GlobalConcurrencyLimitUpdate\n    ) -> httpx.Response:\n        try:\n            response = await self._client.patch(\n                f\"/v2/concurrency_limits/{name}\",\n                json=concurrency_limit.dict(json_compatible=True, exclude_unset=True),\n            )\n            return response\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def delete_global_concurrency_limit_by_name(\n        self, name: str\n    ) -> httpx.Response:\n        try:\n            response = await self._client.delete(f\"/v2/concurrency_limits/{name}\")\n            return response\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def 
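A sketch of the v2 global concurrency limit methods; the `GlobalConcurrencyLimitCreate` import path and the limit name are assumptions.

# Sketch: create a global concurrency limit, read it back, and list existing limits.
from prefect.client.orchestration import get_client
from prefect.client.schemas.actions import GlobalConcurrencyLimitCreate

async def setup_limit():
    async with get_client() as client:
        limit_id = await client.create_global_concurrency_limit(
            GlobalConcurrencyLimitCreate(name="database", limit=4)
        )
        limit = await client.read_global_concurrency_limit_by_name("database")
        print(limit_id, limit.limit, limit.active_slots)
        return await client.read_global_concurrency_limits(limit=10)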
read_global_concurrency_limit_by_name(\n        self, name: str\n    ) -> GlobalConcurrencyLimitResponse:\n        try:\n            response = await self._client.get(f\"/v2/concurrency_limits/{name}\")\n            return GlobalConcurrencyLimitResponse.parse_obj(response.json())\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_global_concurrency_limits(\n        self, limit: int = 10, offset: int = 0\n    ) -> List[GlobalConcurrencyLimitResponse]:\n        response = await self._client.post(\n            \"/v2/concurrency_limits/filter\",\n            json={\n                \"limit\": limit,\n                \"offset\": offset,\n            },\n        )\n        return pydantic.parse_obj_as(\n            List[GlobalConcurrencyLimitResponse], response.json()\n        )\n\n    async def create_flow_run_input(\n        self, flow_run_id: UUID, key: str, value: str, sender: Optional[str] = None\n    ):\n        \"\"\"\n        Creates a flow run input.\n\n        Args:\n            flow_run_id: The flow run id.\n            key: The input key.\n            value: The input value.\n            sender: The sender of the input.\n        \"\"\"\n\n        # Initialize the input to ensure that the key is valid.\n        FlowRunInput(flow_run_id=flow_run_id, key=key, value=value)\n\n        response = await self._client.post(\n            f\"/flow_runs/{flow_run_id}/input\",\n            json={\"key\": key, \"value\": value, \"sender\": sender},\n        )\n        response.raise_for_status()\n\n    async def filter_flow_run_input(\n        self, flow_run_id: UUID, key_prefix: str, limit: int, exclude_keys: Set[str]\n    ) -> List[FlowRunInput]:\n        response = await self._client.post(\n            f\"/flow_runs/{flow_run_id}/input/filter\",\n            json={\n                \"prefix\": key_prefix,\n                \"limit\": limit,\n                \"exclude_keys\": list(exclude_keys),\n            },\n        )\n        response.raise_for_status()\n        return pydantic.parse_obj_as(List[FlowRunInput], response.json())\n\n    async def read_flow_run_input(self, flow_run_id: UUID, key: str) -> str:\n        \"\"\"\n        Reads a flow run input.\n\n        Args:\n            flow_run_id: The flow run id.\n            key: The input key.\n        \"\"\"\n        response = await self._client.get(f\"/flow_runs/{flow_run_id}/input/{key}\")\n        response.raise_for_status()\n        return response.content.decode()\n\n    async def delete_flow_run_input(self, flow_run_id: UUID, key: str):\n        \"\"\"\n        Deletes a flow run input.\n\n        Args:\n            flow_run_id: The flow run id.\n            key: The input key.\n        \"\"\"\n        response = await self._client.delete(f\"/flow_runs/{flow_run_id}/input/{key}\")\n        response.raise_for_status()\n\n    def _raise_for_unsupported_automations(self) -> NoReturn:\n        if not PREFECT_EXPERIMENTAL_EVENTS:\n            raise RuntimeError(\n                \"The current server and client configuration does not support \"\n                \"events.  
Enable experimental events support with the \"\n                \"PREFECT_EXPERIMENTAL_EVENTS setting.\"\n            )\n        else:\n            raise RuntimeError(\n                \"The current server and client configuration does not support \"\n                \"automations.  Enable experimental automations with the \"\n                \"PREFECT_API_SERVICES_TRIGGERS_ENABLED setting.\"\n            )\n\n    async def create_automation(self, automation: AutomationCore) -> UUID:\n        \"\"\"Creates an automation in Prefect Cloud.\"\"\"\n        if not self.server_type.supports_automations():\n            self._raise_for_unsupported_automations()\n\n        response = await self._client.post(\n            \"/automations/\",\n            json=automation.dict(json_compatible=True),\n        )\n\n        return UUID(response.json()[\"id\"])\n\n    async def update_automation(self, automation_id: UUID, automation: AutomationCore):\n        \"\"\"Updates an automation in Prefect Cloud.\"\"\"\n        if not self.server_type.supports_automations():\n            self._raise_for_unsupported_automations()\n        response = await self._client.put(\n            f\"/automations/{automation_id}\",\n            json=automation.dict(json_compatible=True, exclude_unset=True),\n        )\n        response.raise_for_status\n\n    async def read_automations(self) -> List[Automation]:\n        if not self.server_type.supports_automations():\n            self._raise_for_unsupported_automations()\n\n        response = await self._client.post(\"/automations/filter\")\n        response.raise_for_status()\n        return pydantic.parse_obj_as(List[Automation], response.json())\n\n    async def find_automation(\n        self, id_or_name: Union[str, UUID], exit_if_not_found: bool = True\n    ) -> Optional[Automation]:\n        if isinstance(id_or_name, str):\n            try:\n                id = UUID(id_or_name)\n            except ValueError:\n                id = None\n        elif isinstance(id_or_name, UUID):\n            id = id_or_name\n\n        if id:\n            try:\n                automation = await self.read_automation(id)\n                return automation\n            except prefect.exceptions.HTTPStatusError as e:\n                if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                    raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n\n        automations = await self.read_automations()\n\n        # Look for it by an exact name\n        for automation in automations:\n            if automation.name == id_or_name:\n                return automation\n\n        # Look for it by a case-insensitive name\n        for automation in automations:\n            if automation.name.lower() == id_or_name.lower():\n                return automation\n\n        return None\n\n    async def read_automation(self, automation_id: UUID) -> Optional[Automation]:\n        if not self.server_type.supports_automations():\n            self._raise_for_unsupported_automations()\n\n        response = await self._client.get(f\"/automations/{automation_id}\")\n        if response.status_code == 404:\n            return None\n        response.raise_for_status()\n        return Automation.parse_obj(response.json())\n\n    async def read_automations_by_name(self, name: str) -> List[Automation]:\n        \"\"\"\n        Query the Prefect API for an automation by name. 
Only automations matching the provided name will be returned.\n\n        Args:\n            name: the name of the automation to query\n\n        Returns:\n            a list of Automation model representations of the automations\n        \"\"\"\n        if not self.server_type.supports_automations():\n            self._raise_for_unsupported_automations()\n        automation_filter = filters.AutomationFilter(name=dict(any_=[name]))\n\n        response = await self._client.post(\n            \"/automations/filter\",\n            json={\n                \"sort\": sorting.AutomationSort.UPDATED_DESC,\n                \"automations\": automation_filter.dict(json_compatible=True)\n                if automation_filter\n                else None,\n            },\n        )\n\n        response.raise_for_status()\n\n        return pydantic.parse_obj_as(List[Automation], response.json())\n\n    async def pause_automation(self, automation_id: UUID):\n        if not self.server_type.supports_automations():\n            self._raise_for_unsupported_automations()\n\n        response = await self._client.patch(\n            f\"/automations/{automation_id}\", json={\"enabled\": False}\n        )\n        response.raise_for_status()\n\n    async def resume_automation(self, automation_id: UUID):\n        if not self.server_type.supports_automations():\n            self._raise_for_unsupported_automations()\n\n        response = await self._client.patch(\n            f\"/automations/{automation_id}\", json={\"enabled\": True}\n        )\n        response.raise_for_status()\n\n    async def delete_automation(self, automation_id: UUID):\n        if not self.server_type.supports_automations():\n            self._raise_for_unsupported_automations()\n\n        response = await self._client.delete(f\"/automations/{automation_id}\")\n        if response.status_code == 404:\n            return\n\n        response.raise_for_status()\n\n    async def read_resource_related_automations(\n        self, resource_id: str\n    ) -> List[Automation]:\n        if not self.server_type.supports_automations():\n            self._raise_for_unsupported_automations()\n\n        response = await self._client.get(f\"/automations/related-to/{resource_id}\")\n        response.raise_for_status()\n        return pydantic.parse_obj_as(List[Automation], response.json())\n\n    async def delete_resource_owned_automations(self, resource_id: str):\n        if not self.server_type.supports_automations():\n            self._raise_for_unsupported_automations()\n\n        await self._client.delete(f\"/automations/owned-by/{resource_id}\")\n\n    async def __aenter__(self):\n        \"\"\"\n        Start the client.\n\n        If the client is already started, this will raise an exception.\n\n        If the client is already closed, this will raise an exception. Use a new client\n        instance instead.\n        \"\"\"\n        if self._closed:\n            # httpx.AsyncClient does not allow reuse so we will not either.\n            raise RuntimeError(\n                \"The client cannot be started again after closing. 
\"\n                \"Retrieve a new client with `get_client()` instead.\"\n            )\n\n        if self._started:\n            # httpx.AsyncClient does not allow reentrancy so we will not either.\n            raise RuntimeError(\"The client cannot be started more than once.\")\n\n        self._loop = asyncio.get_running_loop()\n        await self._exit_stack.__aenter__()\n\n        # Enter a lifespan context if using an ephemeral application.\n        # See https://github.com/encode/httpx/issues/350\n        if self._ephemeral_app and self.manage_lifespan:\n            self._ephemeral_lifespan = await self._exit_stack.enter_async_context(\n                app_lifespan_context(self._ephemeral_app)\n            )\n\n        if self._ephemeral_app:\n            self.logger.debug(\n                \"Using ephemeral application with database at \"\n                f\"{PREFECT_API_DATABASE_CONNECTION_URL.value()}\"\n            )\n        else:\n            self.logger.debug(f\"Connecting to API at {self.api_url}\")\n\n        # Enter the httpx client's context\n        await self._exit_stack.enter_async_context(self._client)\n\n        self._started = True\n\n        return self\n\n    async def __aexit__(self, *exc_info):\n        \"\"\"\n        Shutdown the client.\n        \"\"\"\n        self._closed = True\n        return await self._exit_stack.__aexit__(*exc_info)\n\n    def __enter__(self):\n        raise RuntimeError(\n            \"The `PrefectClient` must be entered with an async context. Use 'async \"\n            \"with PrefectClient(...)' not 'with PrefectClient(...)'\"\n        )\n\n    def __exit__(self, *_):\n        assert False, \"This should never be called but must be defined for __enter__\"\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.api_url","title":"api_url: httpx.URL property","text":"

    Get the base URL for the API.

    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.api_healthcheck","title":"api_healthcheck async","text":"

    Attempts to connect to the API and returns the encountered exception if not successful.

    If successful, returns None.

    Source code in prefect/client/orchestration.py
    async def api_healthcheck(self) -> Optional[Exception]:\n    \"\"\"\n    Attempts to connect to the API and returns the encountered exception if not\n    successful.\n\n    If successful, returns `None`.\n    \"\"\"\n    try:\n        await self._client.get(\"/health\")\n        return None\n    except Exception as exc:\n        return exc\n
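A minimal usage sketch of api_healthcheck, assuming get_client is importable from the top-level prefect package (the module itself refers callers to get_client()); everything else here is illustrative:

import asyncio

from prefect import get_client  # assumed import path for get_client()


async def check_api() -> None:
    async with get_client() as client:
        # Returns None when the /health endpoint responds, otherwise the exception encountered
        exc = await client.api_healthcheck()
        if exc is None:
            print("Prefect API is reachable")
        else:
            print(f"Prefect API is unreachable: {exc!r}")


asyncio.run(check_api())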
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.hello","title":"hello async","text":"

    Send a GET request to /hello for testing purposes.

    Source code in prefect/client/orchestration.py
    async def hello(self) -> httpx.Response:\n    \"\"\"\n    Send a GET request to /hello for testing purposes.\n    \"\"\"\n    return await self._client.get(\"/hello\")\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow","title":"create_flow async","text":"

    Create a flow in the Prefect API.

    Parameters:

    Name Type Description Default flow Flow

    a Flow object

    required

    Raises:

    Type Description RequestError

    if a flow was not created for any reason

    Returns:

    Type Description UUID

    the ID of the flow in the backend

    Source code in prefect/client/orchestration.py
    async def create_flow(self, flow: \"FlowObject\") -> UUID:\n    \"\"\"\n    Create a flow in the Prefect API.\n\n    Args:\n        flow: a [Flow][prefect.flows.Flow] object\n\n    Raises:\n        httpx.RequestError: if a flow was not created for any reason\n\n    Returns:\n        the ID of the flow in the backend\n    \"\"\"\n    return await self.create_flow_from_name(flow.name)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_from_name","title":"create_flow_from_name async","text":"

    Create a flow in the Prefect API.

    Parameters:

    Name Type Description Default flow_name str

    the name of the new flow

    required

    Raises:

    Type Description RequestError

    if a flow was not created for any reason

    Returns:

    Type Description UUID

    the ID of the flow in the backend

    Source code in prefect/client/orchestration.py
    async def create_flow_from_name(self, flow_name: str) -> UUID:\n    \"\"\"\n    Create a flow in the Prefect API.\n\n    Args:\n        flow_name: the name of the new flow\n\n    Raises:\n        httpx.RequestError: if a flow was not created for any reason\n\n    Returns:\n        the ID of the flow in the backend\n    \"\"\"\n    flow_data = FlowCreate(name=flow_name)\n    response = await self._client.post(\n        \"/flows/\", json=flow_data.dict(json_compatible=True)\n    )\n\n    flow_id = response.json().get(\"id\")\n    if not flow_id:\n        raise httpx.RequestError(f\"Malformed response: {response}\")\n\n    # Return the id of the created flow\n    return UUID(flow_id)\n
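A short sketch of registering a flow by name only; the flow name is hypothetical and get_client is assumed to come from the top-level prefect package:

import asyncio

from prefect import get_client  # assumed import path


async def register_flow() -> None:
    async with get_client() as client:
        # "my-etl-flow" is a hypothetical flow name
        flow_id = await client.create_flow_from_name("my-etl-flow")
        print(flow_id)  # UUID of the flow record in the backend


asyncio.run(register_flow())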
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow","title":"read_flow async","text":"

    Query the Prefect API for a flow by id.

    Parameters:

    Name Type Description Default flow_id UUID

    the flow ID of interest

    required

    Returns:

    Type Description Flow

    a Flow model representation of the flow

    Source code in prefect/client/orchestration.py
    async def read_flow(self, flow_id: UUID) -> Flow:\n    \"\"\"\n    Query the Prefect API for a flow by id.\n\n    Args:\n        flow_id: the flow ID of interest\n\n    Returns:\n        a [Flow model][prefect.client.schemas.objects.Flow] representation of the flow\n    \"\"\"\n    response = await self._client.get(f\"/flows/{flow_id}\")\n    return Flow.parse_obj(response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flows","title":"read_flows async","text":"

    Query the Prefect API for flows. Only flows matching all criteria will be returned.

    Parameters:

    Name Type Description Default flow_filter FlowFilter

    filter criteria for flows

    None flow_run_filter FlowRunFilter

    filter criteria for flow runs

    None task_run_filter TaskRunFilter

    filter criteria for task runs

    None deployment_filter DeploymentFilter

    filter criteria for deployments

    None work_pool_filter WorkPoolFilter

    filter criteria for work pools

    None work_queue_filter WorkQueueFilter

    filter criteria for work pool queues

    None sort FlowSort

    sort criteria for the flows

    None limit int

    limit for the flow query

    None offset int

    offset for the flow query

    0

    Returns:

    Type Description List[Flow]

    a list of Flow model representations of the flows

    Source code in prefect/client/orchestration.py
    async def read_flows(\n    self,\n    *,\n    flow_filter: FlowFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    deployment_filter: DeploymentFilter = None,\n    work_pool_filter: WorkPoolFilter = None,\n    work_queue_filter: WorkQueueFilter = None,\n    sort: FlowSort = None,\n    limit: int = None,\n    offset: int = 0,\n) -> List[Flow]:\n    \"\"\"\n    Query the Prefect API for flows. Only flows matching all criteria will\n    be returned.\n\n    Args:\n        flow_filter: filter criteria for flows\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        deployment_filter: filter criteria for deployments\n        work_pool_filter: filter criteria for work pools\n        work_queue_filter: filter criteria for work pool queues\n        sort: sort criteria for the flows\n        limit: limit for the flow query\n        offset: offset for the flow query\n\n    Returns:\n        a list of Flow model representations of the flows\n    \"\"\"\n    body = {\n        \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n            if flow_run_filter\n            else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"deployments\": (\n            deployment_filter.dict(json_compatible=True)\n            if deployment_filter\n            else None\n        ),\n        \"work_pools\": (\n            work_pool_filter.dict(json_compatible=True)\n            if work_pool_filter\n            else None\n        ),\n        \"work_queues\": (\n            work_queue_filter.dict(json_compatible=True)\n            if work_queue_filter\n            else None\n        ),\n        \"sort\": sort,\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n\n    response = await self._client.post(\"/flows/filter\", json=body)\n    return pydantic.parse_obj_as(List[Flow], response.json())\n
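A sketch of listing flows without any filters, using only the limit and offset arguments shown above; get_client is an assumed import:

import asyncio

from prefect import get_client  # assumed import path


async def list_flows() -> None:
    async with get_client() as client:
        # Omit all filters to list flows, up to 20 at a time
        flows = await client.read_flows(limit=20, offset=0)
        for f in flows:
            print(f.id, f.name)


asyncio.run(list_flows())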
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_by_name","title":"read_flow_by_name async","text":"

    Query the Prefect API for a flow by name.

    Parameters:

    Name Type Description Default flow_name str

    the name of a flow

    required

    Returns:

    Type Description Flow

    a fully hydrated Flow model

    Source code in prefect/client/orchestration.py
    async def read_flow_by_name(\n    self,\n    flow_name: str,\n) -> Flow:\n    \"\"\"\n    Query the Prefect API for a flow by name.\n\n    Args:\n        flow_name: the name of a flow\n\n    Returns:\n        a fully hydrated Flow model\n    \"\"\"\n    response = await self._client.get(f\"/flows/name/{flow_name}\")\n    return Flow.parse_obj(response.json())\n
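A sketch of looking up a single flow by name; the flow name is illustrative:

import asyncio

from prefect import get_client  # assumed import path


async def show_flow() -> None:
    async with get_client() as client:
        flow = await client.read_flow_by_name("my-etl-flow")  # hypothetical flow name
        print(flow.id, flow.name)


asyncio.run(show_flow())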
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_run_from_deployment","title":"create_flow_run_from_deployment async","text":"

    Create a flow run for a deployment.

    Parameters:

    Name Type Description Default deployment_id UUID

    The deployment ID to create the flow run from

    required parameters Optional[Dict[str, Any]]

    Parameter overrides for this flow run. Merged with the deployment defaults

    None context Optional[Dict[str, Any]]

    Optional run context data

    None state State

    The initial state for the run. If not provided, defaults to Scheduled for now. Should always be a Scheduled type.

    None name str

    An optional name for the flow run. If not provided, the server will generate a name.

    None tags Iterable[str]

    An optional iterable of tags to apply to the flow run; these tags are merged with the deployment's tags.

    None idempotency_key str

    Optional idempotency key for creation of the flow run. If the key matches the key of an existing flow run, the existing run will be returned instead of creating a new one.

    None parent_task_run_id UUID

    if a subflow run is being created, the placeholder task run identifier in the parent flow

    None work_queue_name str

    An optional work queue name to add this run to. If not provided, will default to the deployment's set work queue. If one is provided that does not exist, a new work queue will be created within the deployment's work pool.

    None job_variables Optional[Dict[str, Any]]

    Optional variables that will be supplied to the flow run job.

    None

    Raises:

    Type Description RequestError

    if the Prefect API does not successfully create a run for any reason

    Returns:

    Type Description FlowRun

    The flow run model

    Source code in prefect/client/orchestration.py
    async def create_flow_run_from_deployment(\n    self,\n    deployment_id: UUID,\n    *,\n    parameters: Optional[Dict[str, Any]] = None,\n    context: Optional[Dict[str, Any]] = None,\n    state: prefect.states.State = None,\n    name: str = None,\n    tags: Iterable[str] = None,\n    idempotency_key: str = None,\n    parent_task_run_id: UUID = None,\n    work_queue_name: str = None,\n    job_variables: Optional[Dict[str, Any]] = None,\n) -> FlowRun:\n    \"\"\"\n    Create a flow run for a deployment.\n\n    Args:\n        deployment_id: The deployment ID to create the flow run from\n        parameters: Parameter overrides for this flow run. Merged with the\n            deployment defaults\n        context: Optional run context data\n        state: The initial state for the run. If not provided, defaults to\n            `Scheduled` for now. Should always be a `Scheduled` type.\n        name: An optional name for the flow run. If not provided, the server will\n            generate a name.\n        tags: An optional iterable of tags to apply to the flow run; these tags\n            are merged with the deployment's tags.\n        idempotency_key: Optional idempotency key for creation of the flow run.\n            If the key matches the key of an existing flow run, the existing run will\n            be returned instead of creating a new one.\n        parent_task_run_id: if a subflow run is being created, the placeholder task\n            run identifier in the parent flow\n        work_queue_name: An optional work queue name to add this run to. If not provided,\n            will default to the deployment's set work queue.  If one is provided that does not\n            exist, a new work queue will be created within the deployment's work pool.\n        job_variables: Optional variables that will be supplied to the flow run job.\n\n    Raises:\n        httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n    Returns:\n        The flow run model\n    \"\"\"\n    parameters = parameters or {}\n    context = context or {}\n    state = state or prefect.states.Scheduled()\n    tags = tags or []\n\n    flow_run_create = DeploymentFlowRunCreate(\n        parameters=parameters,\n        context=context,\n        state=state.to_state_create(),\n        tags=tags,\n        name=name,\n        idempotency_key=idempotency_key,\n        parent_task_run_id=parent_task_run_id,\n        job_variables=job_variables,\n    )\n\n    # done separately to avoid including this field in payloads sent to older API versions\n    if work_queue_name:\n        flow_run_create.work_queue_name = work_queue_name\n\n    response = await self._client.post(\n        f\"/deployments/{deployment_id}/create_flow_run\",\n        json=flow_run_create.dict(json_compatible=True, exclude_unset=True),\n    )\n    return FlowRun.parse_obj(response.json())\n
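A sketch of triggering a run for an existing deployment; the deployment ID, parameter override, and tag are placeholders you would supply yourself:

import asyncio
from uuid import UUID

from prefect import get_client  # assumed import path


async def trigger_run(deployment_id: UUID) -> None:
    async with get_client() as client:
        flow_run = await client.create_flow_run_from_deployment(
            deployment_id,
            parameters={"batch_size": 100},  # hypothetical parameter override
            tags=["ad-hoc"],                 # merged with the deployment's tags
        )
        print(flow_run.id, flow_run.name)


# asyncio.run(trigger_run(UUID("...")))  # supply a real deployment ID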
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_run","title":"create_flow_run async","text":"

    Create a flow run for a flow.

    Parameters:

    Name Type Description Default flow Flow

    The flow model to create the flow run for

    required name Optional[str]

    An optional name for the flow run

    None parameters Optional[Dict[str, Any]]

    Parameter overrides for this flow run.

    None context Optional[Dict[str, Any]]

    Optional run context data

    None tags Optional[Iterable[str]]

    a list of tags to apply to this flow run

    None parent_task_run_id Optional[UUID]

    if a subflow run is being created, the placeholder task run identifier in the parent flow

    None state Optional[State]

    The initial state for the run. If not provided, defaults to Scheduled for now. Should always be a Scheduled type.

    None

    Raises:

    Type Description RequestError

    if the Prefect API does not successfully create a run for any reason

    Returns:

    Type Description FlowRun

    The flow run model

    Source code in prefect/client/orchestration.py
    async def create_flow_run(\n    self,\n    flow: \"FlowObject\",\n    name: Optional[str] = None,\n    parameters: Optional[Dict[str, Any]] = None,\n    context: Optional[Dict[str, Any]] = None,\n    tags: Optional[Iterable[str]] = None,\n    parent_task_run_id: Optional[UUID] = None,\n    state: Optional[\"prefect.states.State\"] = None,\n) -> FlowRun:\n    \"\"\"\n    Create a flow run for a flow.\n\n    Args:\n        flow: The flow model to create the flow run for\n        name: An optional name for the flow run\n        parameters: Parameter overrides for this flow run.\n        context: Optional run context data\n        tags: a list of tags to apply to this flow run\n        parent_task_run_id: if a subflow run is being created, the placeholder task\n            run identifier in the parent flow\n        state: The initial state for the run. If not provided, defaults to\n            `Scheduled` for now. Should always be a `Scheduled` type.\n\n    Raises:\n        httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n    Returns:\n        The flow run model\n    \"\"\"\n    parameters = parameters or {}\n    context = context or {}\n\n    if state is None:\n        state = prefect.states.Pending()\n\n    # Retrieve the flow id\n    flow_id = await self.create_flow(flow)\n\n    flow_run_create = FlowRunCreate(\n        flow_id=flow_id,\n        flow_version=flow.version,\n        name=name,\n        parameters=parameters,\n        context=context,\n        tags=list(tags or []),\n        parent_task_run_id=parent_task_run_id,\n        state=state.to_state_create(),\n        empirical_policy=FlowRunPolicy(\n            retries=flow.retries,\n            retry_delay=flow.retry_delay_seconds,\n        ),\n    )\n\n    flow_run_create_json = flow_run_create.dict(json_compatible=True)\n    response = await self._client.post(\"/flow_runs/\", json=flow_run_create_json)\n    flow_run = FlowRun.parse_obj(response.json())\n\n    # Restore the parameters to the local objects to retain expectations about\n    # Python objects\n    flow_run.parameters = parameters\n\n    return flow_run\n
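A sketch of creating a run directly from a Flow object; it assumes the flow decorator and get_client are importable from prefect, and that the decorated function is acceptable as the FlowObject this method expects:

import asyncio

from prefect import flow, get_client  # assumed import paths


@flow
def say_hello(name: str = "world"):
    print(f"hello {name}")


async def register_run() -> None:
    async with get_client() as client:
        # Registers the flow (via create_flow) and creates a run with the given parameters
        flow_run = await client.create_flow_run(say_hello, parameters={"name": "prefect"})
        print(flow_run.id, flow_run.state)


asyncio.run(register_run())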
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_flow_run","title":"update_flow_run async","text":"

    Update a flow run's details.

    Parameters:

    Name Type Description Default flow_run_id UUID

    The identifier for the flow run to update.

    required flow_version Optional[str]

    A new version string for the flow run.

    None parameters Optional[dict]

    A dictionary of parameter values for the flow run. This will not be merged with any existing parameters.

    None name Optional[str]

    A new name for the flow run.

    None empirical_policy Optional[FlowRunPolicy]

    A new flow run orchestration policy. This will not be merged with any existing policy.

    None tags Optional[Iterable[str]]

    An iterable of new tags for the flow run. These will not be merged with any existing tags.

    None infrastructure_pid Optional[str]

    The id of the flow run as returned by an infrastructure block.

    None

    Returns:

    Type Description Response

    an httpx.Response object from the PATCH request

    Source code in prefect/client/orchestration.py
    async def update_flow_run(\n    self,\n    flow_run_id: UUID,\n    flow_version: Optional[str] = None,\n    parameters: Optional[dict] = None,\n    name: Optional[str] = None,\n    tags: Optional[Iterable[str]] = None,\n    empirical_policy: Optional[FlowRunPolicy] = None,\n    infrastructure_pid: Optional[str] = None,\n    job_variables: Optional[dict] = None,\n) -> httpx.Response:\n    \"\"\"\n    Update a flow run's details.\n\n    Args:\n        flow_run_id: The identifier for the flow run to update.\n        flow_version: A new version string for the flow run.\n        parameters: A dictionary of parameter values for the flow run. This will not\n            be merged with any existing parameters.\n        name: A new name for the flow run.\n        empirical_policy: A new flow run orchestration policy. This will not be\n            merged with any existing policy.\n        tags: An iterable of new tags for the flow run. These will not be merged with\n            any existing tags.\n        infrastructure_pid: The id of flow run as returned by an\n            infrastructure block.\n\n    Returns:\n        an `httpx.Response` object from the PATCH request\n    \"\"\"\n    params = {}\n    if flow_version is not None:\n        params[\"flow_version\"] = flow_version\n    if parameters is not None:\n        params[\"parameters\"] = parameters\n    if name is not None:\n        params[\"name\"] = name\n    if tags is not None:\n        params[\"tags\"] = tags\n    if empirical_policy is not None:\n        params[\"empirical_policy\"] = empirical_policy\n    if infrastructure_pid:\n        params[\"infrastructure_pid\"] = infrastructure_pid\n    if job_variables is not None:\n        params[\"job_variables\"] = job_variables\n\n    flow_run_data = FlowRunUpdate(**params)\n\n    return await self._client.patch(\n        f\"/flow_runs/{flow_run_id}\",\n        json=flow_run_data.dict(json_compatible=True, exclude_unset=True),\n    )\n
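A sketch of patching a flow run's name and tags; the values are illustrative, and note that the tags replace rather than merge with existing tags, as documented above:

import asyncio
from uuid import UUID

from prefect import get_client  # assumed import path


async def rename_run(flow_run_id: UUID) -> None:
    async with get_client() as client:
        response = await client.update_flow_run(
            flow_run_id,
            name="backfill-2024-05-01",  # hypothetical new name
            tags=["backfill"],           # replaces any existing tags
        )
        response.raise_for_status()


# asyncio.run(rename_run(UUID("...")))  # supply a real flow run ID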
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_flow_run","title":"delete_flow_run async","text":"

    Delete a flow run by UUID.

    Parameters:

    Name Type Description Default flow_run_id UUID

    The flow run UUID of interest.

    required Source code in prefect/client/orchestration.py
    async def delete_flow_run(\n    self,\n    flow_run_id: UUID,\n) -> None:\n    \"\"\"\n    Delete a flow run by UUID.\n\n    Args:\n        flow_run_id: The flow run UUID of interest.\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If requests fails\n    \"\"\"\n    try:\n        await self._client.delete(f\"/flow_runs/{flow_run_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
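A sketch of deleting a run and handling the 404-to-ObjectNotFound translation shown above:

import asyncio
from uuid import UUID

from prefect import get_client  # assumed import path
from prefect.exceptions import ObjectNotFound


async def remove_run(flow_run_id: UUID) -> None:
    async with get_client() as client:
        try:
            await client.delete_flow_run(flow_run_id)
        except ObjectNotFound:
            print("flow run was already deleted")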
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_concurrency_limit","title":"create_concurrency_limit async","text":"

    Create a tag concurrency limit in the Prefect API. These limits govern concurrently running tasks.

    Parameters:

    Name Type Description Default tag str

    a tag the concurrency limit is applied to

    required concurrency_limit int

    the maximum number of concurrent task runs for a given tag

    required

    Raises:

    Type Description RequestError

    if the concurrency limit was not created for any reason

    Returns:

    Type Description UUID

    the ID of the concurrency limit in the backend

    Source code in prefect/client/orchestration.py
    async def create_concurrency_limit(\n    self,\n    tag: str,\n    concurrency_limit: int,\n) -> UUID:\n    \"\"\"\n    Create a tag concurrency limit in the Prefect API. These limits govern concurrently\n    running tasks.\n\n    Args:\n        tag: a tag the concurrency limit is applied to\n        concurrency_limit: the maximum number of concurrent task runs for a given tag\n\n    Raises:\n        httpx.RequestError: if the concurrency limit was not created for any reason\n\n    Returns:\n        the ID of the concurrency limit in the backend\n    \"\"\"\n\n    concurrency_limit_create = ConcurrencyLimitCreate(\n        tag=tag,\n        concurrency_limit=concurrency_limit,\n    )\n    response = await self._client.post(\n        \"/concurrency_limits/\",\n        json=concurrency_limit_create.dict(json_compatible=True),\n    )\n\n    concurrency_limit_id = response.json().get(\"id\")\n\n    if not concurrency_limit_id:\n        raise httpx.RequestError(f\"Malformed response: {response}\")\n\n    return UUID(concurrency_limit_id)\n
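A sketch of creating a tag-based concurrency limit; the tag name and limit value are illustrative:

import asyncio

from prefect import get_client  # assumed import path


async def limit_database_tasks() -> None:
    async with get_client() as client:
        # Allow at most 5 concurrent task runs carrying the (hypothetical) "database" tag
        limit_id = await client.create_concurrency_limit(tag="database", concurrency_limit=5)
        print(limit_id)


asyncio.run(limit_database_tasks())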
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_concurrency_limit_by_tag","title":"read_concurrency_limit_by_tag async","text":"

    Read the concurrency limit set on a specific tag.

    Parameters:

    Name Type Description Default tag str

    a tag the concurrency limit is applied to

    required

    Raises:

    Type Description ObjectNotFound

    If request returns 404

    RequestError

    if the request fails for any other reason

    Returns:

    Type Description

    the concurrency limit set on a specific tag

    Source code in prefect/client/orchestration.py
    async def read_concurrency_limit_by_tag(\n    self,\n    tag: str,\n):\n    \"\"\"\n    Read the concurrency limit set on a specific tag.\n\n    Args:\n        tag: a tag the concurrency limit is applied to\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: if the concurrency limit was not created for any reason\n\n    Returns:\n        the concurrency limit set on a specific tag\n    \"\"\"\n    try:\n        response = await self._client.get(\n            f\"/concurrency_limits/tag/{tag}\",\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n\n    concurrency_limit_id = response.json().get(\"id\")\n\n    if not concurrency_limit_id:\n        raise httpx.RequestError(f\"Malformed response: {response}\")\n\n    concurrency_limit = ConcurrencyLimit.parse_obj(response.json())\n    return concurrency_limit\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_concurrency_limits","title":"read_concurrency_limits async","text":"

    Lists concurrency limits set on task run tags.

    Parameters:

    Name Type Description Default limit int

    the maximum number of concurrency limits returned

    required offset int

    the concurrency limit query offset

    required

    Returns:

    Type Description

    a list of concurrency limits

    Source code in prefect/client/orchestration.py
    async def read_concurrency_limits(\n    self,\n    limit: int,\n    offset: int,\n):\n    \"\"\"\n    Lists concurrency limits set on task run tags.\n\n    Args:\n        limit: the maximum number of concurrency limits returned\n        offset: the concurrency limit query offset\n\n    Returns:\n        a list of concurrency limits\n    \"\"\"\n\n    body = {\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n\n    response = await self._client.post(\"/concurrency_limits/filter\", json=body)\n    return pydantic.parse_obj_as(List[ConcurrencyLimit], response.json())\n
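A sketch of paging through existing tag limits; note that both limit and offset are required by this method:

import asyncio

from prefect import get_client  # assumed import path


async def list_limits() -> None:
    async with get_client() as client:
        limits = await client.read_concurrency_limits(limit=50, offset=0)
        for cl in limits:
            print(cl.tag, cl.concurrency_limit)


asyncio.run(list_limits())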
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.reset_concurrency_limit_by_tag","title":"reset_concurrency_limit_by_tag async","text":"

    Resets the concurrency limit slots set on a specific tag.

    Parameters:

    Name Type Description Default tag str

    a tag the concurrency limit is applied to

    required slot_override Optional[List[Union[UUID, str]]]

    a list of task run IDs that are currently using a concurrency slot. Check that any task run IDs included in slot_override are currently running; otherwise, those concurrency slots will never be released.

    None

    Raises:

    Type Description ObjectNotFound

    If request returns 404

    RequestError

    If request fails

    Source code in prefect/client/orchestration.py
    async def reset_concurrency_limit_by_tag(\n    self,\n    tag: str,\n    slot_override: Optional[List[Union[UUID, str]]] = None,\n):\n    \"\"\"\n    Resets the concurrency limit slots set on a specific tag.\n\n    Args:\n        tag: a tag the concurrency limit is applied to\n        slot_override: a list of task run IDs that are currently using a\n            concurrency slot, please check that any task run IDs included in\n            `slot_override` are currently running, otherwise those concurrency\n            slots will never be released.\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If request fails\n\n    \"\"\"\n    if slot_override is not None:\n        slot_override = [str(slot) for slot in slot_override]\n\n    try:\n        await self._client.post(\n            f\"/concurrency_limits/tag/{tag}/reset\",\n            json=dict(slot_override=slot_override),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
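A sketch of resetting a tag's slots; the tag is hypothetical, and no slot_override is passed, so all recorded slots are cleared:

import asyncio

from prefect import get_client  # assumed import path


async def reset_database_limit() -> None:
    async with get_client() as client:
        # Clears the active slots recorded for the (hypothetical) "database" tag
        await client.reset_concurrency_limit_by_tag(tag="database")


asyncio.run(reset_database_limit())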
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_concurrency_limit_by_tag","title":"delete_concurrency_limit_by_tag async","text":"

    Delete the concurrency limit set on a specific tag.

    Parameters:

    Name Type Description Default tag str

    a tag the concurrency limit is applied to

    required

    Raises:

    Type Description ObjectNotFound

    If request returns 404

    RequestError

    If request fails

    Source code in prefect/client/orchestration.py
    async def delete_concurrency_limit_by_tag(\n    self,\n    tag: str,\n):\n    \"\"\"\n    Delete the concurrency limit set on a specific tag.\n\n    Args:\n        tag: a tag the concurrency limit is applied to\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If request fails\n\n    \"\"\"\n    try:\n        await self._client.delete(\n            f\"/concurrency_limits/tag/{tag}\",\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_work_queue","title":"create_work_queue async","text":"

    Create a work queue.

    Parameters:

    Name Type Description Default name str

    a unique name for the work queue

    required tags Optional[List[str]]

    DEPRECATED: an optional list of tags to filter on; only work scheduled with these tags will be included in the queue. This option will be removed on 2023-02-23.

    None description Optional[str]

    An optional description for the work queue.

    None is_paused Optional[bool]

    Whether or not the work queue is paused.

    None concurrency_limit Optional[int]

    An optional concurrency limit for the work queue.

    None priority Optional[int]

    The queue's priority. Lower values are higher priority (1 is the highest).

    None work_pool_name Optional[str]

    The name of the work pool to use for this queue.

    None

    Raises:

    Type Description ObjectAlreadyExists

    If request returns 409

    RequestError

    If request fails

    Returns:

    Type Description WorkQueue

    The created work queue

    Source code in prefect/client/orchestration.py
    async def create_work_queue(\n    self,\n    name: str,\n    tags: Optional[List[str]] = None,\n    description: Optional[str] = None,\n    is_paused: Optional[bool] = None,\n    concurrency_limit: Optional[int] = None,\n    priority: Optional[int] = None,\n    work_pool_name: Optional[str] = None,\n) -> WorkQueue:\n    \"\"\"\n    Create a work queue.\n\n    Args:\n        name: a unique name for the work queue\n        tags: DEPRECATED: an optional list of tags to filter on; only work scheduled with these tags\n            will be included in the queue. This option will be removed on 2023-02-23.\n        description: An optional description for the work queue.\n        is_paused: Whether or not the work queue is paused.\n        concurrency_limit: An optional concurrency limit for the work queue.\n        priority: The queue's priority. Lower values are higher priority (1 is the highest).\n        work_pool_name: The name of the work pool to use for this queue.\n\n    Raises:\n        prefect.exceptions.ObjectAlreadyExists: If request returns 409\n        httpx.RequestError: If request fails\n\n    Returns:\n        The created work queue\n    \"\"\"\n    if tags:\n        warnings.warn(\n            (\n                \"The use of tags for creating work queue filters is deprecated.\"\n                \" This option will be removed on 2023-02-23.\"\n            ),\n            DeprecationWarning,\n        )\n        filter = QueueFilter(tags=tags)\n    else:\n        filter = None\n    create_model = WorkQueueCreate(name=name, filter=filter)\n    if description is not None:\n        create_model.description = description\n    if is_paused is not None:\n        create_model.is_paused = is_paused\n    if concurrency_limit is not None:\n        create_model.concurrency_limit = concurrency_limit\n    if priority is not None:\n        create_model.priority = priority\n\n    data = create_model.dict(json_compatible=True)\n    try:\n        if work_pool_name is not None:\n            response = await self._client.post(\n                f\"/work_pools/{work_pool_name}/queues\", json=data\n            )\n        else:\n            response = await self._client.post(\"/work_queues/\", json=data)\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_409_CONFLICT:\n            raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n        elif e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return WorkQueue.parse_obj(response.json())\n
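A sketch of creating a queue inside an existing work pool; the queue and pool names are placeholders, and the pool must already exist or ObjectNotFound is raised as documented:

import asyncio

from prefect import get_client  # assumed import path


async def make_queue() -> None:
    async with get_client() as client:
        queue = await client.create_work_queue(
            name="etl-queue",               # hypothetical queue name
            work_pool_name="default-pool",  # hypothetical, pre-existing work pool
            concurrency_limit=10,
        )
        print(queue.id, queue.name)


asyncio.run(make_queue())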
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_queue_by_name","title":"read_work_queue_by_name async","text":"

    Read a work queue by name.

    Parameters:

    Name Type Description Default name str

    a unique name for the work queue

    required work_pool_name str

    the name of the work pool the queue belongs to.

    None

    Raises:

    Type Description ObjectNotFound

    if no work queue is found

    HTTPStatusError

    other status errors

    Returns:

    Name Type Description WorkQueue WorkQueue

    a work queue API object

    Source code in prefect/client/orchestration.py
    async def read_work_queue_by_name(\n    self,\n    name: str,\n    work_pool_name: Optional[str] = None,\n) -> WorkQueue:\n    \"\"\"\n    Read a work queue by name.\n\n    Args:\n        name (str): a unique name for the work queue\n        work_pool_name (str, optional): the name of the work pool\n            the queue belongs to.\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: if no work queue is found\n        httpx.HTTPStatusError: other status errors\n\n    Returns:\n        WorkQueue: a work queue API object\n    \"\"\"\n    try:\n        if work_pool_name is not None:\n            response = await self._client.get(\n                f\"/work_pools/{work_pool_name}/queues/{name}\"\n            )\n        else:\n            response = await self._client.get(f\"/work_queues/name/{name}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n\n    return WorkQueue.parse_obj(response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_work_queue","title":"update_work_queue async","text":"

    Update properties of a work queue.

    Parameters:

    Name Type Description Default id UUID

    the ID of the work queue to update

    required **kwargs

    the fields to update

    {}

    Raises:

    Type Description ValueError

    if no kwargs are provided

    ObjectNotFound

    if request returns 404

    RequestError

    if the request fails

    Source code in prefect/client/orchestration.py
    async def update_work_queue(self, id: UUID, **kwargs):\n    \"\"\"\n    Update properties of a work queue.\n\n    Args:\n        id: the ID of the work queue to update\n        **kwargs: the fields to update\n\n    Raises:\n        ValueError: if no kwargs are provided\n        prefect.exceptions.ObjectNotFound: if request returns 404\n        httpx.RequestError: if the request fails\n\n    \"\"\"\n    if not kwargs:\n        raise ValueError(\"No fields provided to update.\")\n\n    data = WorkQueueUpdate(**kwargs).dict(json_compatible=True, exclude_unset=True)\n    try:\n        await self._client.patch(f\"/work_queues/{id}\", json=data)\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
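A sketch of pausing a queue through the generic kwargs interface; is_paused is assumed here to be a valid WorkQueueUpdate field:

import asyncio
from uuid import UUID

from prefect import get_client  # assumed import path


async def pause_queue(queue_id: UUID) -> None:
    async with get_client() as client:
        # kwargs are forwarded to WorkQueueUpdate; is_paused is assumed to be one of its fields
        await client.update_work_queue(queue_id, is_paused=True)


# asyncio.run(pause_queue(UUID("...")))  # supply a real work queue ID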
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.get_runs_in_work_queue","title":"get_runs_in_work_queue async","text":"

    Read flow runs off a work queue.

    Parameters:

    Name Type Description Default id UUID

    the id of the work queue to read from

    required limit int

    a limit on the number of runs to return

    10 scheduled_before datetime

    a timestamp; only runs scheduled before this time will be returned. Defaults to now.

    None

    Raises:

    Type Description ObjectNotFound

    If request returns 404

    RequestError

    If request fails

    Returns:

    Type Description List[FlowRun]

    List[FlowRun]: a list of FlowRun objects read from the queue

    Source code in prefect/client/orchestration.py
    async def get_runs_in_work_queue(\n    self,\n    id: UUID,\n    limit: int = 10,\n    scheduled_before: datetime.datetime = None,\n) -> List[FlowRun]:\n    \"\"\"\n    Read flow runs off a work queue.\n\n    Args:\n        id: the id of the work queue to read from\n        limit: a limit on the number of runs to return\n        scheduled_before: a timestamp; only runs scheduled before this time will be returned.\n            Defaults to now.\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If request fails\n\n    Returns:\n        List[FlowRun]: a list of FlowRun objects read from the queue\n    \"\"\"\n    if scheduled_before is None:\n        scheduled_before = pendulum.now(\"UTC\")\n\n    try:\n        response = await self._client.post(\n            f\"/work_queues/{id}/get_runs\",\n            json={\n                \"limit\": limit,\n                \"scheduled_before\": scheduled_before.isoformat(),\n            },\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return pydantic.parse_obj_as(List[FlowRun], response.json())\n
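A sketch of polling a queue for runs scheduled up to now; the queue ID would come from read_work_queue_by_name or a similar lookup:

import asyncio
from uuid import UUID

from prefect import get_client  # assumed import path


async def poll_queue(queue_id: UUID) -> None:
    async with get_client() as client:
        # scheduled_before defaults to the current time when omitted
        runs = await client.get_runs_in_work_queue(queue_id, limit=5)
        for run in runs:
            print(run.id, run.name)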
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_queue","title":"read_work_queue async","text":"

    Read a work queue.

    Parameters:

    Name Type Description Default id UUID

    the id of the work queue to load

    required

    Raises:

    Type Description ObjectNotFound

    If request returns 404

    RequestError

    If request fails

    Returns:

    Name Type Description WorkQueue WorkQueue

    an instantiated WorkQueue object

    Source code in prefect/client/orchestration.py
    async def read_work_queue(\n    self,\n    id: UUID,\n) -> WorkQueue:\n    \"\"\"\n    Read a work queue.\n\n    Args:\n        id: the id of the work queue to load\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If request fails\n\n    Returns:\n        WorkQueue: an instantiated WorkQueue object\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/work_queues/{id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return WorkQueue.parse_obj(response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_queue_status","title":"read_work_queue_status async","text":"

    Read a work queue status.

    Parameters:

    Name Type Description Default id UUID

    the id of the work queue to load

    required

    Raises:

    Type Description ObjectNotFound

    If request returns 404

    RequestError

    If request fails

    Returns:

    Name Type Description WorkQueueStatus WorkQueueStatusDetail

    an instantiated WorkQueueStatusDetail object

    Source code in prefect/client/orchestration.py
    async def read_work_queue_status(\n    self,\n    id: UUID,\n) -> WorkQueueStatusDetail:\n    \"\"\"\n    Read a work queue status.\n\n    Args:\n        id: the id of the work queue to load\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If request fails\n\n    Returns:\n        WorkQueueStatus: an instantiated WorkQueueStatus object\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/work_queues/{id}/status\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return WorkQueueStatusDetail.parse_obj(response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.match_work_queues","title":"match_work_queues async","text":"

    Query the Prefect API for work queues with names with a specific prefix.

    Parameters:

    Name Type Description Default prefixes List[str]

    a list of strings used to match work queue name prefixes

    required work_pool_name Optional[str]

    an optional work pool name to scope the query to

    None

    Returns:

    Type Description List[WorkQueue]

    a list of WorkQueue model representations of the work queues

    Source code in prefect/client/orchestration.py
    async def match_work_queues(\n    self,\n    prefixes: List[str],\n    work_pool_name: Optional[str] = None,\n) -> List[WorkQueue]:\n    \"\"\"\n    Query the Prefect API for work queues with names with a specific prefix.\n\n    Args:\n        prefixes: a list of strings used to match work queue name prefixes\n        work_pool_name: an optional work pool name to scope the query to\n\n    Returns:\n        a list of WorkQueue model representations\n            of the work queues\n    \"\"\"\n    page_length = 100\n    current_page = 0\n    work_queues = []\n\n    while True:\n        new_queues = await self.read_work_queues(\n            work_pool_name=work_pool_name,\n            offset=current_page * page_length,\n            limit=page_length,\n            work_queue_filter=WorkQueueFilter(\n                name=WorkQueueFilterName(startswith_=prefixes)\n            ),\n        )\n        if not new_queues:\n            break\n        work_queues += new_queues\n        current_page += 1\n\n    return work_queues\n
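A sketch of prefix-matching queue names; the prefix and pool name are illustrative:

import asyncio

from prefect import get_client  # assumed import path


async def find_etl_queues() -> None:
    async with get_client() as client:
        # All queues whose names start with the (hypothetical) "etl-" prefix
        queues = await client.match_work_queues(["etl-"], work_pool_name="default-pool")
        print([q.name for q in queues])


asyncio.run(find_etl_queues())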
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_work_queue_by_id","title":"delete_work_queue_by_id async","text":"

    Delete a work queue by its ID.

    Parameters:

    Name Type Description Default id UUID

    the id of the work queue to delete

    required

    Raises:

    Type Description ObjectNotFound

    If request returns 404

    RequestError

    If the request fails

    Source code in prefect/client/orchestration.py
    async def delete_work_queue_by_id(\n    self,\n    id: UUID,\n):\n    \"\"\"\n    Delete a work queue by its ID.\n\n    Args:\n        id: the id of the work queue to delete\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If requests fails\n    \"\"\"\n    try:\n        await self._client.delete(\n            f\"/work_queues/{id}\",\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_block_type","title":"create_block_type async","text":"

    Create a block type in the Prefect API.

    Source code in prefect/client/orchestration.py
    async def create_block_type(self, block_type: BlockTypeCreate) -> BlockType:\n    \"\"\"\n    Create a block type in the Prefect API.\n    \"\"\"\n    try:\n        response = await self._client.post(\n            \"/block_types/\",\n            json=block_type.dict(\n                json_compatible=True, exclude_unset=True, exclude={\"id\"}\n            ),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_409_CONFLICT:\n            raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n        else:\n            raise\n    return BlockType.parse_obj(response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_block_schema","title":"create_block_schema async","text":"

    Create a block schema in the Prefect API.

    Source code in prefect/client/orchestration.py
    async def create_block_schema(self, block_schema: BlockSchemaCreate) -> BlockSchema:\n    \"\"\"\n    Create a block schema in the Prefect API.\n    \"\"\"\n    try:\n        response = await self._client.post(\n            \"/block_schemas/\",\n            json=block_schema.dict(\n                json_compatible=True,\n                exclude_unset=True,\n                exclude={\"id\", \"block_type\", \"checksum\"},\n            ),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_409_CONFLICT:\n            raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n        else:\n            raise\n    return BlockSchema.parse_obj(response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_block_document","title":"create_block_document async","text":"

    Create a block document in the Prefect API. This data is used to configure a corresponding Block.

    Parameters:

    Name Type Description Default include_secrets bool

    whether to include secret values on the stored Block, corresponding to Pydantic's SecretStr and SecretBytes fields. Note Blocks may not work as expected if this is set to False.

    True Source code in prefect/client/orchestration.py
    async def create_block_document(\n    self,\n    block_document: Union[BlockDocument, BlockDocumentCreate],\n    include_secrets: bool = True,\n) -> BlockDocument:\n    \"\"\"\n    Create a block document in the Prefect API. This data is used to configure a\n    corresponding Block.\n\n    Args:\n        include_secrets (bool): whether to include secret values\n            on the stored Block, corresponding to Pydantic's `SecretStr` and\n            `SecretBytes` fields. Note Blocks may not work as expected if\n            this is set to `False`.\n    \"\"\"\n    if isinstance(block_document, BlockDocument):\n        block_document = BlockDocumentCreate.parse_obj(\n            block_document.dict(\n                json_compatible=True,\n                include_secrets=include_secrets,\n                exclude_unset=True,\n                exclude={\"id\", \"block_schema\", \"block_type\"},\n            ),\n        )\n\n    try:\n        response = await self._client.post(\n            \"/block_documents/\",\n            json=block_document.dict(\n                json_compatible=True,\n                include_secrets=include_secrets,\n                exclude_unset=True,\n                exclude={\"id\", \"block_schema\", \"block_type\"},\n            ),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_409_CONFLICT:\n            raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n        else:\n            raise\n    return BlockDocument.parse_obj(response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_block_document","title":"update_block_document async","text":"

    Update a block document in the Prefect API.

    Source code in prefect/client/orchestration.py
    async def update_block_document(\n    self,\n    block_document_id: UUID,\n    block_document: BlockDocumentUpdate,\n):\n    \"\"\"\n    Update a block document in the Prefect API.\n    \"\"\"\n    try:\n        await self._client.patch(\n            f\"/block_documents/{block_document_id}\",\n            json=block_document.dict(\n                json_compatible=True,\n                exclude_unset=True,\n                include={\"data\", \"merge_existing_data\", \"block_schema_id\"},\n                include_secrets=True,\n            ),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_block_document","title":"delete_block_document async","text":"

    Delete a block document.

    Source code in prefect/client/orchestration.py
    async def delete_block_document(self, block_document_id: UUID):\n    \"\"\"\n    Delete a block document.\n    \"\"\"\n    try:\n        await self._client.delete(f\"/block_documents/{block_document_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_type_by_slug","title":"read_block_type_by_slug async","text":"

    Read a block type by its slug.

    Source code in prefect/client/orchestration.py
    async def read_block_type_by_slug(self, slug: str) -> BlockType:\n    \"\"\"\n    Read a block type by its slug.\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/block_types/slug/{slug}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return BlockType.parse_obj(response.json())\n
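    A minimal usage sketch; the "slack-webhook" slug is only an example of a registered block type slug:

    import asyncio
    from prefect import get_client

    async def main():
        async with get_client() as client:
            # the slug value is illustrative; any registered block type slug works
            block_type = await client.read_block_type_by_slug("slack-webhook")
            print(block_type.id, block_type.name)

    asyncio.run(main())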
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_schema_by_checksum","title":"read_block_schema_by_checksum async","text":"

    Look up a block schema by its checksum

    Source code in prefect/client/orchestration.py
    async def read_block_schema_by_checksum(\n    self, checksum: str, version: Optional[str] = None\n) -> BlockSchema:\n    \"\"\"\n    Look up a block schema checksum\n    \"\"\"\n    try:\n        url = f\"/block_schemas/checksum/{checksum}\"\n        if version is not None:\n            url = f\"{url}?version={version}\"\n        response = await self._client.get(url)\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return BlockSchema.parse_obj(response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_block_type","title":"update_block_type async","text":"

    Update a block type in the Prefect API.

    Source code in prefect/client/orchestration.py
    async def update_block_type(self, block_type_id: UUID, block_type: BlockTypeUpdate):\n    \"\"\"\n    Update a block document in the Prefect API.\n    \"\"\"\n    try:\n        await self._client.patch(\n            f\"/block_types/{block_type_id}\",\n            json=block_type.dict(\n                json_compatible=True,\n                exclude_unset=True,\n                include=BlockTypeUpdate.updatable_fields(),\n                include_secrets=True,\n            ),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_block_type","title":"delete_block_type async","text":"

    Delete a block type.

    Source code in prefect/client/orchestration.py
    async def delete_block_type(self, block_type_id: UUID):\n    \"\"\"\n    Delete a block type.\n    \"\"\"\n    try:\n        await self._client.delete(f\"/block_types/{block_type_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        elif (\n            e.response.status_code == status.HTTP_403_FORBIDDEN\n            and e.response.json()[\"detail\"]\n            == \"protected block types cannot be deleted.\"\n        ):\n            raise prefect.exceptions.ProtectedBlockError(\n                \"Protected block types cannot be deleted.\"\n            ) from e\n        else:\n            raise\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_types","title":"read_block_types async","text":"

    Read all block types. Raises: httpx.RequestError: if the block types were not found

    Returns:

    Type Description List[BlockType]

    List of BlockTypes.

    Source code in prefect/client/orchestration.py
    async def read_block_types(self) -> List[BlockType]:\n    \"\"\"\n    Read all block types\n    Raises:\n        httpx.RequestError: if the block types were not found\n\n    Returns:\n        List of BlockTypes.\n    \"\"\"\n    response = await self._client.post(\"/block_types/filter\", json={})\n    return pydantic.parse_obj_as(List[BlockType], response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_schemas","title":"read_block_schemas async","text":"

    Read all block schemas. Raises: httpx.RequestError: if a valid block schema was not found

    Returns:

    Type Description List[BlockSchema]

    A list of BlockSchemas.

    Source code in prefect/client/orchestration.py
    async def read_block_schemas(self) -> List[BlockSchema]:\n    \"\"\"\n    Read all block schemas\n    Raises:\n        httpx.RequestError: if a valid block schema was not found\n\n    Returns:\n        A BlockSchema.\n    \"\"\"\n    response = await self._client.post(\"/block_schemas/filter\", json={})\n    return pydantic.parse_obj_as(List[BlockSchema], response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.get_most_recent_block_schema_for_block_type","title":"get_most_recent_block_schema_for_block_type async","text":"

    Fetches the most recent block schema for a specified block type ID.

    Parameters:

    Name Type Description Default block_type_id UUID

    The ID of the block type.

    required

    Raises:

    Type Description RequestError

    If the request fails for any reason.

    Returns:

    Type Description Optional[BlockSchema]

    The most recent block schema or None.

    Source code in prefect/client/orchestration.py
    async def get_most_recent_block_schema_for_block_type(\n    self,\n    block_type_id: UUID,\n) -> Optional[BlockSchema]:\n    \"\"\"\n    Fetches the most recent block schema for a specified block type ID.\n\n    Args:\n        block_type_id: The ID of the block type.\n\n    Raises:\n        httpx.RequestError: If the request fails for any reason.\n\n    Returns:\n        The most recent block schema or None.\n    \"\"\"\n    try:\n        response = await self._client.post(\n            \"/block_schemas/filter\",\n            json={\n                \"block_schemas\": {\"block_type_id\": {\"any_\": [str(block_type_id)]}},\n                \"limit\": 1,\n            },\n        )\n    except httpx.HTTPStatusError:\n        raise\n    return BlockSchema.parse_obj(response.json()[0]) if response.json() else None\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_document","title":"read_block_document async","text":"

    Read the block document with the specified ID.

    Parameters:

    Name Type Description Default block_document_id UUID

    the block document id

    required include_secrets bool

    whether to include secret values on the Block, corresponding to Pydantic's SecretStr and SecretBytes fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is False.

    True

    Raises:

    Type Description RequestError

    if the block document was not found for any reason

    Returns:

    Type Description

    A block document or None.

    Source code in prefect/client/orchestration.py
    async def read_block_document(\n    self,\n    block_document_id: UUID,\n    include_secrets: bool = True,\n):\n    \"\"\"\n    Read the block document with the specified ID.\n\n    Args:\n        block_document_id: the block document id\n        include_secrets (bool): whether to include secret values\n            on the Block, corresponding to Pydantic's `SecretStr` and\n            `SecretBytes` fields. These fields are automatically obfuscated\n            by Pydantic, but users can additionally choose not to receive\n            their values from the API. Note that any business logic on the\n            Block may not work if this is `False`.\n\n    Raises:\n        httpx.RequestError: if the block document was not found for any reason\n\n    Returns:\n        A block document or None.\n    \"\"\"\n    assert (\n        block_document_id is not None\n    ), \"Unexpected ID on block document. Was it persisted?\"\n    try:\n        response = await self._client.get(\n            f\"/block_documents/{block_document_id}\",\n            params=dict(include_secrets=include_secrets),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return BlockDocument.parse_obj(response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_document_by_name","title":"read_block_document_by_name async","text":"

    Read the block document with the specified name that corresponds to a specific block type name.

    Parameters:

    Name Type Description Default name str

    The block document name.

    required block_type_slug str

    The block type slug.

    required include_secrets bool

    whether to include secret values on the Block, corresponding to Pydantic's SecretStr and SecretBytes fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is False.

    True

    Raises:

    Type Description RequestError

    if the block document was not found for any reason

    Returns:

    Type Description BlockDocument

    A block document or None.

    Source code in prefect/client/orchestration.py
    async def read_block_document_by_name(\n    self,\n    name: str,\n    block_type_slug: str,\n    include_secrets: bool = True,\n) -> BlockDocument:\n    \"\"\"\n    Read the block document with the specified name that corresponds to a\n    specific block type name.\n\n    Args:\n        name: The block document name.\n        block_type_slug: The block type slug.\n        include_secrets (bool): whether to include secret values\n            on the Block, corresponding to Pydantic's `SecretStr` and\n            `SecretBytes` fields. These fields are automatically obfuscated\n            by Pydantic, but users can additionally choose not to receive\n            their values from the API. Note that any business logic on the\n            Block may not work if this is `False`.\n\n    Raises:\n        httpx.RequestError: if the block document was not found for any reason\n\n    Returns:\n        A block document or None.\n    \"\"\"\n    try:\n        response = await self._client.get(\n            f\"/block_types/slug/{block_type_slug}/block_documents/name/{name}\",\n            params=dict(include_secrets=include_secrets),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return BlockDocument.parse_obj(response.json())\n
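    A minimal sketch, assuming a Secret block named "my-api-key" was saved earlier (both names are illustrative):

    import asyncio
    from prefect import get_client

    async def main():
        async with get_client() as client:
            # name and block_type_slug must match a previously saved block document
            doc = await client.read_block_document_by_name(
                name="my-api-key", block_type_slug="secret"
            )
            # secret fields are included because include_secrets defaults to True
            print(doc.data)

    asyncio.run(main())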
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_documents","title":"read_block_documents async","text":"

    Read block documents

    Parameters:

    Name Type Description Default block_schema_type Optional[str]

    an optional block schema type

    None offset Optional[int]

    an offset

    None limit Optional[int]

    the number of blocks to return

    None include_secrets bool

    whether to include secret values on the Block, corresponding to Pydantic's SecretStr and SecretBytes fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is False.

    True

    Returns:

    Type Description

    A list of block documents

    Source code in prefect/client/orchestration.py
    async def read_block_documents(\n    self,\n    block_schema_type: Optional[str] = None,\n    offset: Optional[int] = None,\n    limit: Optional[int] = None,\n    include_secrets: bool = True,\n):\n    \"\"\"\n    Read block documents\n\n    Args:\n        block_schema_type: an optional block schema type\n        offset: an offset\n        limit: the number of blocks to return\n        include_secrets (bool): whether to include secret values\n            on the Block, corresponding to Pydantic's `SecretStr` and\n            `SecretBytes` fields. These fields are automatically obfuscated\n            by Pydantic, but users can additionally choose not to receive\n            their values from the API. Note that any business logic on the\n            Block may not work if this is `False`.\n\n    Returns:\n        A list of block documents\n    \"\"\"\n    response = await self._client.post(\n        \"/block_documents/filter\",\n        json=dict(\n            block_schema_type=block_schema_type,\n            offset=offset,\n            limit=limit,\n            include_secrets=include_secrets,\n        ),\n    )\n    return pydantic.parse_obj_as(List[BlockDocument], response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_documents_by_type","title":"read_block_documents_by_type async","text":"

    Retrieve block documents by block type slug.

    Parameters:

    Name Type Description Default block_type_slug str

    The block type slug.

    required offset Optional[int]

    an offset

    None limit Optional[int]

    the number of blocks to return

    None include_secrets bool

    whether to include secret values

    True

    Returns:

    Type Description List[BlockDocument]

    A list of block documents

    Source code in prefect/client/orchestration.py
    async def read_block_documents_by_type(\n    self,\n    block_type_slug: str,\n    offset: Optional[int] = None,\n    limit: Optional[int] = None,\n    include_secrets: bool = True,\n) -> List[BlockDocument]:\n    \"\"\"Retrieve block documents by block type slug.\n\n    Args:\n        block_type_slug: The block type slug.\n        offset: an offset\n        limit: the number of blocks to return\n        include_secrets: whether to include secret values\n\n    Returns:\n        A list of block documents\n    \"\"\"\n    response = await self._client.get(\n        f\"/block_types/slug/{block_type_slug}/block_documents\",\n        params=dict(\n            offset=offset,\n            limit=limit,\n            include_secrets=include_secrets,\n        ),\n    )\n\n    return pydantic.parse_obj_as(List[BlockDocument], response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_deployment","title":"create_deployment async","text":"

    Create a deployment.

    Parameters:

    Name Type Description Default flow_id UUID

    the flow ID to create a deployment for

    required name str

    the name of the deployment

    required version str

    an optional version string for the deployment

    None schedule SCHEDULE_TYPES

    an optional schedule to apply to the deployment

    None tags List[str]

    an optional list of tags to apply to the deployment

    None storage_document_id UUID

    a reference to the storage block document used for the deployed flow

    None infrastructure_document_id UUID

    a reference to the infrastructure block document to use for this deployment

    None job_variables Optional[Dict[str, Any]]

    A dictionary of dot delimited infrastructure overrides that will be applied at runtime; for example env.CONFIG_KEY=config_value or namespace='prefect'. This argument was previously named infra_overrides. Both arguments are supported for backwards compatibility.

    None

    Raises:

    Type Description RequestError

    if the deployment was not created for any reason

    Returns:

    Type Description UUID

    the ID of the deployment in the backend

    Source code in prefect/client/orchestration.py
    async def create_deployment(\n    self,\n    flow_id: UUID,\n    name: str,\n    version: str = None,\n    schedule: SCHEDULE_TYPES = None,\n    schedules: List[DeploymentScheduleCreate] = None,\n    parameters: Optional[Dict[str, Any]] = None,\n    description: str = None,\n    work_queue_name: str = None,\n    work_pool_name: str = None,\n    tags: List[str] = None,\n    storage_document_id: UUID = None,\n    manifest_path: str = None,\n    path: str = None,\n    entrypoint: str = None,\n    infrastructure_document_id: UUID = None,\n    infra_overrides: Optional[Dict[str, Any]] = None,  # for backwards compat\n    parameter_openapi_schema: Optional[Dict[str, Any]] = None,\n    is_schedule_active: Optional[bool] = None,\n    paused: Optional[bool] = None,\n    pull_steps: Optional[List[dict]] = None,\n    enforce_parameter_schema: Optional[bool] = None,\n    job_variables: Optional[Dict[str, Any]] = None,\n) -> UUID:\n    \"\"\"\n    Create a deployment.\n\n    Args:\n        flow_id: the flow ID to create a deployment for\n        name: the name of the deployment\n        version: an optional version string for the deployment\n        schedule: an optional schedule to apply to the deployment\n        tags: an optional list of tags to apply to the deployment\n        storage_document_id: an reference to the storage block document\n            used for the deployed flow\n        infrastructure_document_id: an reference to the infrastructure block document\n            to use for this deployment\n        job_variables: A dictionary of dot delimited infrastructure overrides that\n            will be applied at runtime; for example `env.CONFIG_KEY=config_value` or\n            `namespace='prefect'`. This argument was previously named `infra_overrides`.\n            Both arguments are supported for backwards compatibility.\n\n    Raises:\n        httpx.RequestError: if the deployment was not created for any reason\n\n    Returns:\n        the ID of the deployment in the backend\n    \"\"\"\n    jv = handle_deprecated_infra_overrides_parameter(job_variables, infra_overrides)\n\n    deployment_create = DeploymentCreate(\n        flow_id=flow_id,\n        name=name,\n        version=version,\n        parameters=dict(parameters or {}),\n        tags=list(tags or []),\n        work_queue_name=work_queue_name,\n        description=description,\n        storage_document_id=storage_document_id,\n        path=path,\n        entrypoint=entrypoint,\n        manifest_path=manifest_path,  # for backwards compat\n        infrastructure_document_id=infrastructure_document_id,\n        job_variables=jv,\n        parameter_openapi_schema=parameter_openapi_schema,\n        is_schedule_active=is_schedule_active,\n        paused=paused,\n        schedule=schedule,\n        schedules=schedules or [],\n        pull_steps=pull_steps,\n        enforce_parameter_schema=enforce_parameter_schema,\n    )\n\n    if work_pool_name is not None:\n        deployment_create.work_pool_name = work_pool_name\n\n    # Exclude newer fields that are not set to avoid compatibility issues\n    exclude = {\n        field\n        for field in [\"work_pool_name\", \"work_queue_name\"]\n        if field not in deployment_create.__fields_set__\n    }\n\n    if deployment_create.is_schedule_active is None:\n        exclude.add(\"is_schedule_active\")\n\n    if deployment_create.paused is None:\n        exclude.add(\"paused\")\n\n    if deployment_create.pull_steps is None:\n        exclude.add(\"pull_steps\")\n\n    if 
deployment_create.enforce_parameter_schema is None:\n        exclude.add(\"enforce_parameter_schema\")\n\n    json = deployment_create.dict(json_compatible=True, exclude=exclude)\n    response = await self._client.post(\n        \"/deployments/\",\n        json=json,\n    )\n    deployment_id = response.json().get(\"id\")\n    if not deployment_id:\n        raise httpx.RequestError(f\"Malformed response: {response}\")\n\n    return UUID(deployment_id)\n
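    A minimal sketch of creating a deployment directly through the client; the flow_id placeholder must be replaced with the UUID of a flow already registered with the API, and the deployment name is illustrative (deployments are more commonly created with prefect deploy):

    import asyncio
    from uuid import UUID
    from prefect import get_client

    async def main():
        async with get_client() as client:
            flow_id = UUID("00000000-0000-0000-0000-000000000000")  # placeholder flow ID
            deployment_id = await client.create_deployment(
                flow_id=flow_id,
                name="my-deployment",  # illustrative deployment name
                tags=["example"],
            )
            print(deployment_id)

    asyncio.run(main())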
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_deployment","title":"read_deployment async","text":"

    Query the Prefect API for a deployment by id.

    Parameters:

    Name Type Description Default deployment_id UUID

    the deployment ID of interest

    required

    Returns:

    Type Description DeploymentResponse

    a Deployment model representation of the deployment

    Source code in prefect/client/orchestration.py
    async def read_deployment(\n    self,\n    deployment_id: UUID,\n) -> DeploymentResponse:\n    \"\"\"\n    Query the Prefect API for a deployment by id.\n\n    Args:\n        deployment_id: the deployment ID of interest\n\n    Returns:\n        a [Deployment model][prefect.client.schemas.objects.Deployment] representation of the deployment\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/deployments/{deployment_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return DeploymentResponse.parse_obj(response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_deployment_by_name","title":"read_deployment_by_name async","text":"

    Query the Prefect API for a deployment by name.

    Parameters:

    Name Type Description Default name str

    A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME> required

    Raises:

    Type Description ObjectNotFound

    If request returns 404

    RequestError

    If request fails

    Returns:

    Type Description DeploymentResponse

    a Deployment model representation of the deployment

    Source code in prefect/client/orchestration.py
    async def read_deployment_by_name(\n    self,\n    name: str,\n) -> DeploymentResponse:\n    \"\"\"\n    Query the Prefect API for a deployment by name.\n\n    Args:\n        name: A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If request fails\n\n    Returns:\n        a Deployment model representation of the deployment\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/deployments/name/{name}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n\n    return DeploymentResponse.parse_obj(response.json())\n
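    A minimal sketch; "my-flow/my-deployment" is an illustrative name following the <FLOW_NAME>/<DEPLOYMENT_NAME> format described above:

    import asyncio
    from prefect import get_client
    from prefect.exceptions import ObjectNotFound

    async def main():
        async with get_client() as client:
            try:
                deployment = await client.read_deployment_by_name("my-flow/my-deployment")
                print(deployment.id)
            except ObjectNotFound:
                print("no deployment with that name")

    asyncio.run(main())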
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_deployments","title":"read_deployments async","text":"

    Query the Prefect API for deployments. Only deployments matching all the provided criteria will be returned.

    Parameters:

    Name Type Description Default flow_filter FlowFilter

    filter criteria for flows

    None flow_run_filter FlowRunFilter

    filter criteria for flow runs

    None task_run_filter TaskRunFilter

    filter criteria for task runs

    None deployment_filter DeploymentFilter

    filter criteria for deployments

    None work_pool_filter WorkPoolFilter

    filter criteria for work pools

    None work_queue_filter WorkQueueFilter

    filter criteria for work pool queues

    None limit int

    a limit for the deployment query

    None offset int

    an offset for the deployment query

    0

    Returns:

    Type Description List[DeploymentResponse]

    a list of Deployment model representations of the deployments

    Source code in prefect/client/orchestration.py
    async def read_deployments(\n    self,\n    *,\n    flow_filter: FlowFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    deployment_filter: DeploymentFilter = None,\n    work_pool_filter: WorkPoolFilter = None,\n    work_queue_filter: WorkQueueFilter = None,\n    limit: int = None,\n    sort: DeploymentSort = None,\n    offset: int = 0,\n) -> List[DeploymentResponse]:\n    \"\"\"\n    Query the Prefect API for deployments. Only deployments matching all\n    the provided criteria will be returned.\n\n    Args:\n        flow_filter: filter criteria for flows\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        deployment_filter: filter criteria for deployments\n        work_pool_filter: filter criteria for work pools\n        work_queue_filter: filter criteria for work pool queues\n        limit: a limit for the deployment query\n        offset: an offset for the deployment query\n\n    Returns:\n        a list of Deployment model representations\n            of the deployments\n    \"\"\"\n    body = {\n        \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n            if flow_run_filter\n            else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"deployments\": (\n            deployment_filter.dict(json_compatible=True)\n            if deployment_filter\n            else None\n        ),\n        \"work_pools\": (\n            work_pool_filter.dict(json_compatible=True)\n            if work_pool_filter\n            else None\n        ),\n        \"work_pool_queues\": (\n            work_queue_filter.dict(json_compatible=True)\n            if work_queue_filter\n            else None\n        ),\n        \"limit\": limit,\n        \"offset\": offset,\n        \"sort\": sort,\n    }\n\n    response = await self._client.post(\"/deployments/filter\", json=body)\n    return pydantic.parse_obj_as(List[DeploymentResponse], response.json())\n
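    A minimal sketch of filtering deployments by flow name; the flow name is illustrative and the FlowFilter/FlowFilterName import path assumes the client filter schemas referenced throughout this page:

    import asyncio
    from prefect import get_client
    from prefect.client.schemas.filters import FlowFilter, FlowFilterName

    async def main():
        async with get_client() as client:
            # only deployments belonging to a flow named "my-flow" (illustrative)
            deployments = await client.read_deployments(
                flow_filter=FlowFilter(name=FlowFilterName(any_=["my-flow"])),
                limit=10,
            )
            for deployment in deployments:
                print(deployment.id, deployment.name)

    asyncio.run(main())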
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_deployment","title":"delete_deployment async","text":"

    Delete deployment by id.

    Parameters:

    Name Type Description Default deployment_id UUID

    The deployment id of interest.

    required Source code in prefect/client/orchestration.py
    async def delete_deployment(\n    self,\n    deployment_id: UUID,\n):\n    \"\"\"\n    Delete deployment by id.\n\n    Args:\n        deployment_id: The deployment id of interest.\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If requests fails\n    \"\"\"\n    try:\n        await self._client.delete(f\"/deployments/{deployment_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_deployment_schedules","title":"create_deployment_schedules async","text":"

    Create deployment schedules.

    Parameters:

    Name Type Description Default deployment_id UUID

    the deployment ID

    required schedules List[Tuple[SCHEDULE_TYPES, bool]]

    a list of tuples containing the schedule to create and whether or not it should be active.

    required

    Raises:

    Type Description RequestError

    if the schedules were not created for any reason

    Returns:

    Type Description List[DeploymentSchedule]

    the list of schedules created in the backend

    Source code in prefect/client/orchestration.py
    async def create_deployment_schedules(\n    self,\n    deployment_id: UUID,\n    schedules: List[Tuple[SCHEDULE_TYPES, bool]],\n) -> List[DeploymentSchedule]:\n    \"\"\"\n    Create deployment schedules.\n\n    Args:\n        deployment_id: the deployment ID\n        schedules: a list of tuples containing the schedule to create\n                   and whether or not it should be active.\n\n    Raises:\n        httpx.RequestError: if the schedules were not created for any reason\n\n    Returns:\n        the list of schedules created in the backend\n    \"\"\"\n    deployment_schedule_create = [\n        DeploymentScheduleCreate(schedule=schedule[0], active=schedule[1])\n        for schedule in schedules\n    ]\n\n    json = [\n        deployment_schedule_create.dict(json_compatible=True)\n        for deployment_schedule_create in deployment_schedule_create\n    ]\n    response = await self._client.post(\n        f\"/deployments/{deployment_id}/schedules\", json=json\n    )\n    return pydantic.parse_obj_as(List[DeploymentSchedule], response.json())\n
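    A minimal sketch, assuming the deployment ID placeholder is replaced with a real one and that CronSchedule comes from the client schedule schemas:

    import asyncio
    from uuid import UUID
    from prefect import get_client
    from prefect.client.schemas.schedules import CronSchedule

    async def main():
        async with get_client() as client:
            deployment_id = UUID("00000000-0000-0000-0000-000000000000")  # placeholder deployment ID
            schedules = await client.create_deployment_schedules(
                deployment_id,
                # each tuple is (schedule, active)
                [(CronSchedule(cron="0 9 * * *", timezone="UTC"), True)],
            )
            print([schedule.id for schedule in schedules])

    asyncio.run(main())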
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_deployment_schedules","title":"read_deployment_schedules async","text":"

    Query the Prefect API for a deployment's schedules.

    Parameters:

    Name Type Description Default deployment_id UUID

    the deployment ID

    required

    Returns:

    Type Description List[DeploymentSchedule]

    a list of DeploymentSchedule model representations of the deployment schedules

    Source code in prefect/client/orchestration.py
    async def read_deployment_schedules(\n    self,\n    deployment_id: UUID,\n) -> List[DeploymentSchedule]:\n    \"\"\"\n    Query the Prefect API for a deployment's schedules.\n\n    Args:\n        deployment_id: the deployment ID\n\n    Returns:\n        a list of DeploymentSchedule model representations of the deployment schedules\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/deployments/{deployment_id}/schedules\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return pydantic.parse_obj_as(List[DeploymentSchedule], response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_deployment_schedule","title":"update_deployment_schedule async","text":"

    Update a deployment schedule by ID.

    Parameters:

    Name Type Description Default deployment_id UUID

    the deployment ID

    required schedule_id UUID

    the deployment schedule ID of interest

    required active Optional[bool]

    whether or not the schedule should be active

    None schedule Optional[SCHEDULE_TYPES]

    the cron, rrule, or interval schedule this deployment schedule should use

    None Source code in prefect/client/orchestration.py
    async def update_deployment_schedule(\n    self,\n    deployment_id: UUID,\n    schedule_id: UUID,\n    active: Optional[bool] = None,\n    schedule: Optional[SCHEDULE_TYPES] = None,\n):\n    \"\"\"\n    Update a deployment schedule by ID.\n\n    Args:\n        deployment_id: the deployment ID\n        schedule_id: the deployment schedule ID of interest\n        active: whether or not the schedule should be active\n        schedule: the cron, rrule, or interval schedule this deployment schedule should use\n    \"\"\"\n    kwargs = {}\n    if active is not None:\n        kwargs[\"active\"] = active\n    elif schedule is not None:\n        kwargs[\"schedule\"] = schedule\n\n    deployment_schedule_update = DeploymentScheduleUpdate(**kwargs)\n    json = deployment_schedule_update.dict(json_compatible=True, exclude_unset=True)\n\n    try:\n        await self._client.patch(\n            f\"/deployments/{deployment_id}/schedules/{schedule_id}\", json=json\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_deployment_schedule","title":"delete_deployment_schedule async","text":"

    Delete a deployment schedule.

    Parameters:

    Name Type Description Default deployment_id UUID

    the deployment ID

    required schedule_id UUID

    the ID of the deployment schedule to delete.

    required

    Raises:

    Type Description RequestError

    if the schedules were not deleted for any reason

    Source code in prefect/client/orchestration.py
    async def delete_deployment_schedule(\n    self,\n    deployment_id: UUID,\n    schedule_id: UUID,\n) -> None:\n    \"\"\"\n    Delete a deployment schedule.\n\n    Args:\n        deployment_id: the deployment ID\n        schedule_id: the ID of the deployment schedule to delete.\n\n    Raises:\n        httpx.RequestError: if the schedules were not deleted for any reason\n    \"\"\"\n    try:\n        await self._client.delete(\n            f\"/deployments/{deployment_id}/schedules/{schedule_id}\"\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_run","title":"read_flow_run async","text":"

    Query the Prefect API for a flow run by id.

    Parameters:

    Name Type Description Default flow_run_id UUID

    the flow run ID of interest

    required

    Returns:

    Type Description FlowRun

    a Flow Run model representation of the flow run

    Source code in prefect/client/orchestration.py
    async def read_flow_run(self, flow_run_id: UUID) -> FlowRun:\n    \"\"\"\n    Query the Prefect API for a flow run by id.\n\n    Args:\n        flow_run_id: the flow run ID of interest\n\n    Returns:\n        a Flow Run model representation of the flow run\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/flow_runs/{flow_run_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return FlowRun.parse_obj(response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.resume_flow_run","title":"resume_flow_run async","text":"

    Resumes a paused flow run.

    Parameters:

    Name Type Description Default flow_run_id UUID

    the flow run ID of interest

    required run_input Optional[Dict]

    the input to resume the flow run with

    None

    Returns:

    Type Description OrchestrationResult

    an OrchestrationResult model representation of state orchestration output

    Source code in prefect/client/orchestration.py
    async def resume_flow_run(\n    self, flow_run_id: UUID, run_input: Optional[Dict] = None\n) -> OrchestrationResult:\n    \"\"\"\n    Resumes a paused flow run.\n\n    Args:\n        flow_run_id: the flow run ID of interest\n        run_input: the input to resume the flow run with\n\n    Returns:\n        an OrchestrationResult model representation of state orchestration output\n    \"\"\"\n    try:\n        response = await self._client.post(\n            f\"/flow_runs/{flow_run_id}/resume\", json={\"run_input\": run_input}\n        )\n    except httpx.HTTPStatusError:\n        raise\n\n    return OrchestrationResult.parse_obj(response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_runs","title":"read_flow_runs async","text":"

    Query the Prefect API for flow runs. Only flow runs matching all criteria will be returned.

    Parameters:

    Name Type Description Default flow_filter FlowFilter

    filter criteria for flows

    None flow_run_filter FlowRunFilter

    filter criteria for flow runs

    None task_run_filter TaskRunFilter

    filter criteria for task runs

    None deployment_filter DeploymentFilter

    filter criteria for deployments

    None work_pool_filter WorkPoolFilter

    filter criteria for work pools

    None work_queue_filter WorkQueueFilter

    filter criteria for work pool queues

    None sort FlowRunSort

    sort criteria for the flow runs

    None limit int

    limit for the flow run query

    None offset int

    offset for the flow run query

    0

    Returns:

    Type Description List[FlowRun]

    a list of Flow Run model representations of the flow runs

    Source code in prefect/client/orchestration.py
    async def read_flow_runs(\n    self,\n    *,\n    flow_filter: FlowFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    deployment_filter: DeploymentFilter = None,\n    work_pool_filter: WorkPoolFilter = None,\n    work_queue_filter: WorkQueueFilter = None,\n    sort: FlowRunSort = None,\n    limit: int = None,\n    offset: int = 0,\n) -> List[FlowRun]:\n    \"\"\"\n    Query the Prefect API for flow runs. Only flow runs matching all criteria will\n    be returned.\n\n    Args:\n        flow_filter: filter criteria for flows\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        deployment_filter: filter criteria for deployments\n        work_pool_filter: filter criteria for work pools\n        work_queue_filter: filter criteria for work pool queues\n        sort: sort criteria for the flow runs\n        limit: limit for the flow run query\n        offset: offset for the flow run query\n\n    Returns:\n        a list of Flow Run model representations\n            of the flow runs\n    \"\"\"\n    body = {\n        \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n            if flow_run_filter\n            else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"deployments\": (\n            deployment_filter.dict(json_compatible=True)\n            if deployment_filter\n            else None\n        ),\n        \"work_pools\": (\n            work_pool_filter.dict(json_compatible=True)\n            if work_pool_filter\n            else None\n        ),\n        \"work_pool_queues\": (\n            work_queue_filter.dict(json_compatible=True)\n            if work_queue_filter\n            else None\n        ),\n        \"sort\": sort,\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n\n    response = await self._client.post(\"/flow_runs/filter\", json=body)\n    return pydantic.parse_obj_as(List[FlowRun], response.json())\n
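    A minimal sketch that lists the five most recently scheduled flow runs; the FlowRunSort import path and member name are assumptions based on the client sorting schemas:

    import asyncio
    from prefect import get_client
    from prefect.client.schemas.sorting import FlowRunSort

    async def main():
        async with get_client() as client:
            runs = await client.read_flow_runs(
                sort=FlowRunSort.EXPECTED_START_TIME_DESC,
                limit=5,
            )
            for run in runs:
                print(run.name, run.state_name)

    asyncio.run(main())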
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.set_flow_run_state","title":"set_flow_run_state async","text":"

    Set the state of a flow run.

    Parameters:

    Name Type Description Default flow_run_id UUID

    the id of the flow run

    required state State

    the state to set

    required force bool

    if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state

    False

    Returns:

    Type Description OrchestrationResult

    an OrchestrationResult model representation of state orchestration output

    Source code in prefect/client/orchestration.py
    async def set_flow_run_state(\n    self,\n    flow_run_id: UUID,\n    state: \"prefect.states.State\",\n    force: bool = False,\n) -> OrchestrationResult:\n    \"\"\"\n    Set the state of a flow run.\n\n    Args:\n        flow_run_id: the id of the flow run\n        state: the state to set\n        force: if True, disregard orchestration logic when setting the state,\n            forcing the Prefect API to accept the state\n\n    Returns:\n        an OrchestrationResult model representation of state orchestration output\n    \"\"\"\n    state_create = state.to_state_create()\n    state_create.state_details.flow_run_id = flow_run_id\n    state_create.state_details.transition_id = uuid4()\n    try:\n        response = await self._client.post(\n            f\"/flow_runs/{flow_run_id}/set_state\",\n            json=dict(state=state_create.dict(json_compatible=True), force=force),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n\n    return OrchestrationResult.parse_obj(response.json())\n
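    A minimal sketch that forces a flow run into a Cancelled state; the flow run ID placeholder must be replaced, and Cancelled is one of the state constructors exposed by prefect.states:

    import asyncio
    from uuid import UUID
    from prefect import get_client
    from prefect.states import Cancelled

    async def main():
        async with get_client() as client:
            flow_run_id = UUID("00000000-0000-0000-0000-000000000000")  # placeholder flow run ID
            result = await client.set_flow_run_state(
                flow_run_id=flow_run_id,
                state=Cancelled(),
                force=True,  # bypass orchestration rules
            )
            print(result.status)

    asyncio.run(main())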
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_run_states","title":"read_flow_run_states async","text":"

    Query for the states of a flow run

    Parameters:

    Name Type Description Default flow_run_id UUID

    the id of the flow run

    required

    Returns:

    Type Description List[State]

    a list of State model representations of the flow run states

    Source code in prefect/client/orchestration.py
    async def read_flow_run_states(\n    self, flow_run_id: UUID\n) -> List[prefect.states.State]:\n    \"\"\"\n    Query for the states of a flow run\n\n    Args:\n        flow_run_id: the id of the flow run\n\n    Returns:\n        a list of State model representations\n            of the flow run states\n    \"\"\"\n    response = await self._client.get(\n        \"/flow_run_states/\", params=dict(flow_run_id=str(flow_run_id))\n    )\n    return pydantic.parse_obj_as(List[prefect.states.State], response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_task_run","title":"create_task_run async","text":"

    Create a task run

    Parameters:

    Name Type Description Default task Task[P, R]

    The Task to run

    required flow_run_id Optional[UUID]

    The flow run id with which to associate the task run

    required dynamic_key str

    A key unique to this particular run of a Task within the flow

    required name Optional[str]

    An optional name for the task run

    None extra_tags Optional[Iterable[str]]

    an optional list of extra tags to apply to the task run in addition to task.tags

    None state Optional[State[R]]

    The initial state for the run. If not provided, defaults to Pending for now. Should always be a Scheduled type.

    None task_inputs Optional[Dict[str, List[Union[TaskRunResult, Parameter, Constant]]]]

    the set of inputs passed to the task

    None

    Returns:

    Type Description TaskRun

    The created task run.

    Source code in prefect/client/orchestration.py
    async def create_task_run(\n    self,\n    task: \"TaskObject[P, R]\",\n    flow_run_id: Optional[UUID],\n    dynamic_key: str,\n    name: Optional[str] = None,\n    extra_tags: Optional[Iterable[str]] = None,\n    state: Optional[prefect.states.State[R]] = None,\n    task_inputs: Optional[\n        Dict[\n            str,\n            List[\n                Union[\n                    TaskRunResult,\n                    Parameter,\n                    Constant,\n                ]\n            ],\n        ]\n    ] = None,\n) -> TaskRun:\n    \"\"\"\n    Create a task run\n\n    Args:\n        task: The Task to run\n        flow_run_id: The flow run id with which to associate the task run\n        dynamic_key: A key unique to this particular run of a Task within the flow\n        name: An optional name for the task run\n        extra_tags: an optional list of extra tags to apply to the task run in\n            addition to `task.tags`\n        state: The initial state for the run. If not provided, defaults to\n            `Pending` for now. Should always be a `Scheduled` type.\n        task_inputs: the set of inputs passed to the task\n\n    Returns:\n        The created task run.\n    \"\"\"\n    tags = set(task.tags).union(extra_tags or [])\n\n    if state is None:\n        state = prefect.states.Pending()\n\n    task_run_data = TaskRunCreate(\n        name=name,\n        flow_run_id=flow_run_id,\n        task_key=task.task_key,\n        dynamic_key=dynamic_key,\n        tags=list(tags),\n        task_version=task.version,\n        empirical_policy=TaskRunPolicy(\n            retries=task.retries,\n            retry_delay=task.retry_delay_seconds,\n            retry_jitter_factor=task.retry_jitter_factor,\n        ),\n        state=state.to_state_create(),\n        task_inputs=task_inputs or {},\n    )\n\n    response = await self._client.post(\n        \"/task_runs/\", json=task_run_data.dict(json_compatible=True)\n    )\n    return TaskRun.parse_obj(response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_task_run","title":"read_task_run async","text":"

    Query the Prefect API for a task run by id.

    Parameters:

    Name Type Description Default task_run_id UUID

    the task run ID of interest

    required

    Returns:

    Type Description TaskRun

    a Task Run model representation of the task run

    Source code in prefect/client/orchestration.py
    async def read_task_run(self, task_run_id: UUID) -> TaskRun:\n    \"\"\"\n    Query the Prefect API for a task run by id.\n\n    Args:\n        task_run_id: the task run ID of interest\n\n    Returns:\n        a Task Run model representation of the task run\n    \"\"\"\n    response = await self._client.get(f\"/task_runs/{task_run_id}\")\n    return TaskRun.parse_obj(response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_task_runs","title":"read_task_runs async","text":"

    Query the Prefect API for task runs. Only task runs matching all criteria will be returned.

    Parameters:

    Name Type Description Default flow_filter FlowFilter

    filter criteria for flows

    None flow_run_filter FlowRunFilter

    filter criteria for flow runs

    None task_run_filter TaskRunFilter

    filter criteria for task runs

    None deployment_filter DeploymentFilter

    filter criteria for deployments

    None sort TaskRunSort

    sort criteria for the task runs

    None limit int

    a limit for the task run query

    None offset int

    an offset for the task run query

    0

    Returns:

    Type Description List[TaskRun]

    a list of Task Run model representations of the task runs

    Source code in prefect/client/orchestration.py
    async def read_task_runs(\n    self,\n    *,\n    flow_filter: FlowFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    deployment_filter: DeploymentFilter = None,\n    sort: TaskRunSort = None,\n    limit: int = None,\n    offset: int = 0,\n) -> List[TaskRun]:\n    \"\"\"\n    Query the Prefect API for task runs. Only task runs matching all criteria will\n    be returned.\n\n    Args:\n        flow_filter: filter criteria for flows\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        deployment_filter: filter criteria for deployments\n        sort: sort criteria for the task runs\n        limit: a limit for the task run query\n        offset: an offset for the task run query\n\n    Returns:\n        a list of Task Run model representations\n            of the task runs\n    \"\"\"\n    body = {\n        \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n            if flow_run_filter\n            else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"deployments\": (\n            deployment_filter.dict(json_compatible=True)\n            if deployment_filter\n            else None\n        ),\n        \"sort\": sort,\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n    response = await self._client.post(\"/task_runs/filter\", json=body)\n    return pydantic.parse_obj_as(List[TaskRun], response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_task_run","title":"delete_task_run async","text":"

    Delete a task run by id.

    Parameters:

    Name Type Description Default task_run_id UUID

    the task run ID of interest

    required Source code in prefect/client/orchestration.py
    async def delete_task_run(self, task_run_id: UUID) -> None:\n    \"\"\"\n    Delete a task run by id.\n\n    Args:\n        task_run_id: the task run ID of interest\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If requests fails\n    \"\"\"\n    try:\n        await self._client.delete(f\"/task_runs/{task_run_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.set_task_run_state","title":"set_task_run_state async","text":"

    Set the state of a task run.

    Parameters:

    Name Type Description Default task_run_id UUID

    the id of the task run

    required state State

    the state to set

    required force bool

    if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state

    False

    Returns:

    Type Description OrchestrationResult

    an OrchestrationResult model representation of state orchestration output

    Source code in prefect/client/orchestration.py
    async def set_task_run_state(\n    self,\n    task_run_id: UUID,\n    state: prefect.states.State,\n    force: bool = False,\n) -> OrchestrationResult:\n    \"\"\"\n    Set the state of a task run.\n\n    Args:\n        task_run_id: the id of the task run\n        state: the state to set\n        force: if True, disregard orchestration logic when setting the state,\n            forcing the Prefect API to accept the state\n\n    Returns:\n        an OrchestrationResult model representation of state orchestration output\n    \"\"\"\n    state_create = state.to_state_create()\n    state_create.state_details.task_run_id = task_run_id\n    response = await self._client.post(\n        f\"/task_runs/{task_run_id}/set_state\",\n        json=dict(state=state_create.dict(json_compatible=True), force=force),\n    )\n    return OrchestrationResult.parse_obj(response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_task_run_states","title":"read_task_run_states async","text":"

    Query for the states of a task run

    Parameters:

    Name Type Description Default task_run_id UUID

    the id of the task run

    required

    Returns:

    Type Description List[State]

    a list of State model representations of the task run states

    Source code in prefect/client/orchestration.py
    async def read_task_run_states(\n    self, task_run_id: UUID\n) -> List[prefect.states.State]:\n    \"\"\"\n    Query for the states of a task run\n\n    Args:\n        task_run_id: the id of the task run\n\n    Returns:\n        a list of State model representations of the task run states\n    \"\"\"\n    response = await self._client.get(\n        \"/task_run_states/\", params=dict(task_run_id=str(task_run_id))\n    )\n    return pydantic.parse_obj_as(List[prefect.states.State], response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_logs","title":"create_logs async","text":"

    Create logs for a flow or task run

    Parameters:

    Name Type Description Default logs Iterable[Union[LogCreate, dict]]

    An iterable of LogCreate objects or already json-compatible dicts

    required Source code in prefect/client/orchestration.py
    async def create_logs(self, logs: Iterable[Union[LogCreate, dict]]) -> None:\n    \"\"\"\n    Create logs for a flow or task run\n\n    Args:\n        logs: An iterable of `LogCreate` objects or already json-compatible dicts\n    \"\"\"\n    serialized_logs = [\n        log.dict(json_compatible=True) if isinstance(log, LogCreate) else log\n        for log in logs\n    ]\n    await self._client.post(\"/logs/\", json=serialized_logs)\n
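    A minimal sketch of sending a custom log record with `create_logs`, using the "already json-compatible dict" form; the field names (name, level, message, timestamp, flow_run_id) are assumed to mirror `LogCreate`, and the flow run id is a placeholder:

```python
import asyncio
import datetime

from prefect import get_client


async def main():
    async with get_client() as client:
        await client.create_logs(
            [
                {
                    # Field names assumed to match LogCreate; the id is a placeholder.
                    "name": "prefect.example",
                    "level": 20,  # logging.INFO
                    "message": "hello from a custom log",
                    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                    "flow_run_id": "00000000-0000-0000-0000-000000000000",
                }
            ]
        )


asyncio.run(main())
```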
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_run_notification_policy","title":"create_flow_run_notification_policy async","text":"

    Create a notification policy for flow runs

    Parameters:

    Name Type Description Default block_document_id UUID

    The block document UUID

    required is_active bool

    Whether the notification policy is active

    True tags List[str]

    List of flow tags

    None state_names List[str]

    List of state names

    None message_template Optional[str]

    Notification message template

    None Source code in prefect/client/orchestration.py
    async def create_flow_run_notification_policy(\n    self,\n    block_document_id: UUID,\n    is_active: bool = True,\n    tags: List[str] = None,\n    state_names: List[str] = None,\n    message_template: Optional[str] = None,\n) -> UUID:\n    \"\"\"\n    Create a notification policy for flow runs\n\n    Args:\n        block_document_id: The block document UUID\n        is_active: Whether the notification policy is active\n        tags: List of flow tags\n        state_names: List of state names\n        message_template: Notification message template\n    \"\"\"\n    if tags is None:\n        tags = []\n    if state_names is None:\n        state_names = []\n\n    policy = FlowRunNotificationPolicyCreate(\n        block_document_id=block_document_id,\n        is_active=is_active,\n        tags=tags,\n        state_names=state_names,\n        message_template=message_template,\n    )\n    response = await self._client.post(\n        \"/flow_run_notification_policies/\",\n        json=policy.dict(json_compatible=True),\n    )\n\n    policy_id = response.json().get(\"id\")\n    if not policy_id:\n        raise httpx.RequestError(f\"Malformed response: {response}\")\n\n    return UUID(policy_id)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_flow_run_notification_policy","title":"delete_flow_run_notification_policy async","text":"

    Delete a flow run notification policy by id.

    Parameters:

    Name Type Description Default id UUID

    UUID of the flow run notification policy to delete.

    required Source code in prefect/client/orchestration.py
    async def delete_flow_run_notification_policy(\n    self,\n    id: UUID,\n) -> None:\n    \"\"\"\n    Delete a flow run notification policy by id.\n\n    Args:\n        id: UUID of the flow run notification policy to delete.\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If requests fails\n    \"\"\"\n    try:\n        await self._client.delete(f\"/flow_run_notification_policies/{id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_flow_run_notification_policy","title":"update_flow_run_notification_policy async","text":"

    Update a notification policy for flow runs

    Parameters:

    Name Type Description Default id UUID

    UUID of the notification policy

    required block_document_id Optional[UUID]

    The block document UUID

    None is_active Optional[bool]

    Whether the notification policy is active

    None tags Optional[List[str]]

    List of flow tags

    None state_names Optional[List[str]]

    List of state names

    None message_template Optional[str]

    Notification message template

    None Source code in prefect/client/orchestration.py
    async def update_flow_run_notification_policy(\n    self,\n    id: UUID,\n    block_document_id: Optional[UUID] = None,\n    is_active: Optional[bool] = None,\n    tags: Optional[List[str]] = None,\n    state_names: Optional[List[str]] = None,\n    message_template: Optional[str] = None,\n) -> None:\n    \"\"\"\n    Update a notification policy for flow runs\n\n    Args:\n        id: UUID of the notification policy\n        block_document_id: The block document UUID\n        is_active: Whether the notification policy is active\n        tags: List of flow tags\n        state_names: List of state names\n        message_template: Notification message template\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If requests fails\n    \"\"\"\n    params = {}\n    if block_document_id is not None:\n        params[\"block_document_id\"] = block_document_id\n    if is_active is not None:\n        params[\"is_active\"] = is_active\n    if tags is not None:\n        params[\"tags\"] = tags\n    if state_names is not None:\n        params[\"state_names\"] = state_names\n    if message_template is not None:\n        params[\"message_template\"] = message_template\n\n    policy = FlowRunNotificationPolicyUpdate(**params)\n\n    try:\n        await self._client.patch(\n            f\"/flow_run_notification_policies/{id}\",\n            json=policy.dict(json_compatible=True, exclude_unset=True),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_run_notification_policies","title":"read_flow_run_notification_policies async","text":"

    Query the Prefect API for flow run notification policies. Only policies matching all criteria will be returned.

    Parameters:

    Name Type Description Default flow_run_notification_policy_filter FlowRunNotificationPolicyFilter

    filter criteria for notification policies

    required limit Optional[int]

    a limit for the notification policies query

    None offset int

    an offset for the notification policies query

    0

    Returns:

    Type Description List[FlowRunNotificationPolicy]

    a list of FlowRunNotificationPolicy model representations of the notification policies

    Source code in prefect/client/orchestration.py
    async def read_flow_run_notification_policies(\n    self,\n    flow_run_notification_policy_filter: FlowRunNotificationPolicyFilter,\n    limit: Optional[int] = None,\n    offset: int = 0,\n) -> List[FlowRunNotificationPolicy]:\n    \"\"\"\n    Query the Prefect API for flow run notification policies. Only policies matching all criteria will\n    be returned.\n\n    Args:\n        flow_run_notification_policy_filter: filter criteria for notification policies\n        limit: a limit for the notification policies query\n        offset: an offset for the notification policies query\n\n    Returns:\n        a list of FlowRunNotificationPolicy model representations\n            of the notification policies\n    \"\"\"\n    body = {\n        \"flow_run_notification_policy_filter\": (\n            flow_run_notification_policy_filter.dict(json_compatible=True)\n            if flow_run_notification_policy_filter\n            else None\n        ),\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n    response = await self._client.post(\n        \"/flow_run_notification_policies/filter\", json=body\n    )\n    return pydantic.parse_obj_as(List[FlowRunNotificationPolicy], response.json())\n
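    A minimal sketch of the flow run notification policy methods above; the block document id is a placeholder for an existing notification block, and the filter class import path is an assumption:

```python
import asyncio
from uuid import UUID

from prefect import get_client
from prefect.client.schemas.filters import FlowRunNotificationPolicyFilter  # assumed path


async def main():
    async with get_client() as client:
        # Placeholder: the id of a pre-configured notification block document.
        block_document_id = UUID("00000000-0000-0000-0000-000000000000")

        policy_id = await client.create_flow_run_notification_policy(
            block_document_id=block_document_id,
            state_names=["Failed", "Crashed"],
        )

        # Deactivate the policy, list policies, then remove it.
        await client.update_flow_run_notification_policy(policy_id, is_active=False)

        policies = await client.read_flow_run_notification_policies(
            FlowRunNotificationPolicyFilter(), limit=10
        )
        print(len(policies))

        await client.delete_flow_run_notification_policy(policy_id)


asyncio.run(main())
```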
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_logs","title":"read_logs async","text":"

    Read flow and task run logs.

    Source code in prefect/client/orchestration.py
    async def read_logs(\n    self,\n    log_filter: LogFilter = None,\n    limit: int = None,\n    offset: int = None,\n    sort: LogSort = LogSort.TIMESTAMP_ASC,\n) -> List[Log]:\n    \"\"\"\n    Read flow and task run logs.\n    \"\"\"\n    body = {\n        \"logs\": log_filter.dict(json_compatible=True) if log_filter else None,\n        \"limit\": limit,\n        \"offset\": offset,\n        \"sort\": sort,\n    }\n\n    response = await self._client.post(\"/logs/filter\", json=body)\n    return pydantic.parse_obj_as(List[Log], response.json())\n
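    A minimal sketch of `read_logs`; omitting `log_filter` returns logs across all runs, and a `LogFilter` can be passed to narrow the query:

```python
import asyncio

from prefect import get_client


async def main():
    async with get_client() as client:
        # Read up to 20 log records, oldest first (the default sort).
        logs = await client.read_logs(limit=20)
        for log in logs:
            print(log.timestamp, log.level, log.message)


asyncio.run(main())
```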
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.resolve_datadoc","title":"resolve_datadoc async","text":"

    Recursively decode possibly nested data documents.

    \"server\" encoded documents will be retrieved from the server.

    Parameters:

    Name Type Description Default datadoc DataDocument

    The data document to resolve

    required

    Returns:

    Type Description Any

    a decoded object, the innermost data

    Source code in prefect/client/orchestration.py
    async def resolve_datadoc(self, datadoc: DataDocument) -> Any:\n    \"\"\"\n    Recursively decode possibly nested data documents.\n\n    \"server\" encoded documents will be retrieved from the server.\n\n    Args:\n        datadoc: The data document to resolve\n\n    Returns:\n        a decoded object, the innermost data\n    \"\"\"\n    if not isinstance(datadoc, DataDocument):\n        raise TypeError(\n            f\"`resolve_datadoc` received invalid type {type(datadoc).__name__}\"\n        )\n\n    async def resolve_inner(data):\n        if isinstance(data, bytes):\n            try:\n                data = DataDocument.parse_raw(data)\n            except pydantic.ValidationError:\n                return data\n\n        if isinstance(data, DataDocument):\n            return await resolve_inner(data.decode())\n\n        return data\n\n    return await resolve_inner(datadoc)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.send_worker_heartbeat","title":"send_worker_heartbeat async","text":"

    Sends a worker heartbeat for a given work pool.

    Parameters:

    Name Type Description Default work_pool_name str

    The name of the work pool to heartbeat against.

    required worker_name str

    The name of the worker sending the heartbeat.

    required Source code in prefect/client/orchestration.py
    async def send_worker_heartbeat(\n    self,\n    work_pool_name: str,\n    worker_name: str,\n    heartbeat_interval_seconds: Optional[float] = None,\n):\n    \"\"\"\n    Sends a worker heartbeat for a given work pool.\n\n    Args:\n        work_pool_name: The name of the work pool to heartbeat against.\n        worker_name: The name of the worker sending the heartbeat.\n    \"\"\"\n    await self._client.post(\n        f\"/work_pools/{work_pool_name}/workers/heartbeat\",\n        json={\n            \"name\": worker_name,\n            \"heartbeat_interval_seconds\": heartbeat_interval_seconds,\n        },\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_workers_for_work_pool","title":"read_workers_for_work_pool async","text":"

    Reads workers for a given work pool.

    Parameters:

    Name Type Description Default work_pool_name str

    The name of the work pool for which to get member workers.

    required worker_filter Optional[WorkerFilter]

    Criteria by which to filter workers.

    None limit Optional[int]

    Limit for the worker query.

    None offset Optional[int]

    Offset for the worker query.

    None Source code in prefect/client/orchestration.py
    async def read_workers_for_work_pool(\n    self,\n    work_pool_name: str,\n    worker_filter: Optional[WorkerFilter] = None,\n    offset: Optional[int] = None,\n    limit: Optional[int] = None,\n) -> List[Worker]:\n    \"\"\"\n    Reads workers for a given work pool.\n\n    Args:\n        work_pool_name: The name of the work pool for which to get\n            member workers.\n        worker_filter: Criteria by which to filter workers.\n        limit: Limit for the worker query.\n        offset: Limit for the worker query.\n    \"\"\"\n    response = await self._client.post(\n        f\"/work_pools/{work_pool_name}/workers/filter\",\n        json={\n            \"worker_filter\": (\n                worker_filter.dict(json_compatible=True, exclude_unset=True)\n                if worker_filter\n                else None\n            ),\n            \"offset\": offset,\n            \"limit\": limit,\n        },\n    )\n\n    return pydantic.parse_obj_as(List[Worker], response.json())\n
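    A minimal sketch of `send_worker_heartbeat` and `read_workers_for_work_pool`; the pool and worker names are placeholders:

```python
import asyncio

from prefect import get_client


async def main():
    async with get_client() as client:
        # "my-pool" and "my-worker" are placeholder names.
        await client.send_worker_heartbeat(
            work_pool_name="my-pool",
            worker_name="my-worker",
            heartbeat_interval_seconds=30,
        )

        workers = await client.read_workers_for_work_pool("my-pool", limit=10)
        for worker in workers:
            print(worker.name, worker.last_heartbeat_time)


asyncio.run(main())
```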
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_pool","title":"read_work_pool async","text":"

    Reads information for a given work pool

    Parameters:

    Name Type Description Default work_pool_name str

    The name of the work pool for which to get information.

    required

    Returns:

    Type Description WorkPool

    Information about the requested work pool.

    Source code in prefect/client/orchestration.py
    async def read_work_pool(self, work_pool_name: str) -> WorkPool:\n    \"\"\"\n    Reads information for a given work pool\n\n    Args:\n        work_pool_name: The name of the work pool to for which to get\n            information.\n\n    Returns:\n        Information about the requested work pool.\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/work_pools/{work_pool_name}\")\n        return pydantic.parse_obj_as(WorkPool, response.json())\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_pools","title":"read_work_pools async","text":"

    Reads work pools.

    Parameters:

    Name Type Description Default limit Optional[int]

    Limit for the work pool query.

    None offset int

    Offset for the work pool query.

    0 work_pool_filter Optional[WorkPoolFilter]

    Criteria by which to filter work pools.

    None

    Returns:

    Type Description List[WorkPool]

    A list of work pools.

    Source code in prefect/client/orchestration.py
    async def read_work_pools(\n    self,\n    limit: Optional[int] = None,\n    offset: int = 0,\n    work_pool_filter: Optional[WorkPoolFilter] = None,\n) -> List[WorkPool]:\n    \"\"\"\n    Reads work pools.\n\n    Args:\n        limit: Limit for the work pool query.\n        offset: Offset for the work pool query.\n        work_pool_filter: Criteria by which to filter work pools.\n\n    Returns:\n        A list of work pools.\n    \"\"\"\n\n    body = {\n        \"limit\": limit,\n        \"offset\": offset,\n        \"work_pools\": (\n            work_pool_filter.dict(json_compatible=True)\n            if work_pool_filter\n            else None\n        ),\n    }\n    response = await self._client.post(\"/work_pools/filter\", json=body)\n    return pydantic.parse_obj_as(List[WorkPool], response.json())\n
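    A minimal sketch of listing work pools and then fetching a single pool by name; "my-pool" is a placeholder:

```python
import asyncio

from prefect import get_client


async def main():
    async with get_client() as client:
        pools = await client.read_work_pools(limit=50)
        print([pool.name for pool in pools])

        pool = await client.read_work_pool("my-pool")
        print(pool.type, pool.is_paused)


asyncio.run(main())
```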
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_work_pool","title":"create_work_pool async","text":"

    Creates a work pool with the provided configuration.

    Parameters:

    Name Type Description Default work_pool WorkPoolCreate

    Desired configuration for the new work pool.

    required

    Returns:

    Type Description WorkPool

    Information about the newly created work pool.

    Source code in prefect/client/orchestration.py
    async def create_work_pool(\n    self,\n    work_pool: WorkPoolCreate,\n) -> WorkPool:\n    \"\"\"\n    Creates a work pool with the provided configuration.\n\n    Args:\n        work_pool: Desired configuration for the new work pool.\n\n    Returns:\n        Information about the newly created work pool.\n    \"\"\"\n    try:\n        response = await self._client.post(\n            \"/work_pools/\",\n            json=work_pool.dict(json_compatible=True, exclude_unset=True),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_409_CONFLICT:\n            raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n        else:\n            raise\n\n    return pydantic.parse_obj_as(WorkPool, response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_work_pool","title":"update_work_pool async","text":"

    Updates a work pool.

    Parameters:

    Name Type Description Default work_pool_name str

    Name of the work pool to update.

    required work_pool WorkPoolUpdate

    Fields to update in the work pool.

    required Source code in prefect/client/orchestration.py
    async def update_work_pool(\n    self,\n    work_pool_name: str,\n    work_pool: WorkPoolUpdate,\n):\n    \"\"\"\n    Updates a work pool.\n\n    Args:\n        work_pool_name: Name of the work pool to update.\n        work_pool: Fields to update in the work pool.\n    \"\"\"\n    try:\n        await self._client.patch(\n            f\"/work_pools/{work_pool_name}\",\n            json=work_pool.dict(json_compatible=True, exclude_unset=True),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_work_pool","title":"delete_work_pool async","text":"

    Deletes a work pool.

    Parameters:

    Name Type Description Default work_pool_name str

    Name of the work pool to delete.

    required Source code in prefect/client/orchestration.py
    async def delete_work_pool(\n    self,\n    work_pool_name: str,\n):\n    \"\"\"\n    Deletes a work pool.\n\n    Args:\n        work_pool_name: Name of the work pool to delete.\n    \"\"\"\n    try:\n        await self._client.delete(f\"/work_pools/{work_pool_name}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
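    A minimal sketch of the work pool create/update/delete methods above; the pool name is a placeholder and the `WorkPoolCreate`/`WorkPoolUpdate` field names are taken from the action schemas:

```python
import asyncio

from prefect import get_client
from prefect.client.schemas.actions import WorkPoolCreate, WorkPoolUpdate


async def main():
    async with get_client() as client:
        # "demo-pool" is a placeholder; "process" is one of the built-in pool types.
        pool = await client.create_work_pool(
            WorkPoolCreate(name="demo-pool", type="process")
        )
        print(pool.id)

        # Pause the pool, then remove it.
        await client.update_work_pool("demo-pool", WorkPoolUpdate(is_paused=True))
        await client.delete_work_pool("demo-pool")


asyncio.run(main())
```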
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_queues","title":"read_work_queues async","text":"

    Retrieves queues for a work pool.

    Parameters:

    Name Type Description Default work_pool_name Optional[str]

    Name of the work pool for which to get queues.

    None work_queue_filter Optional[WorkQueueFilter]

    Criteria by which to filter queues.

    None limit Optional[int]

    Limit for the queue query.

    None offset Optional[int]

    Offset for the queue query.

    None

    Returns:

    Type Description List[WorkQueue]

    List of queues for the specified work pool.

    Source code in prefect/client/orchestration.py
    async def read_work_queues(\n    self,\n    work_pool_name: Optional[str] = None,\n    work_queue_filter: Optional[WorkQueueFilter] = None,\n    limit: Optional[int] = None,\n    offset: Optional[int] = None,\n) -> List[WorkQueue]:\n    \"\"\"\n    Retrieves queues for a work pool.\n\n    Args:\n        work_pool_name: Name of the work pool for which to get queues.\n        work_queue_filter: Criteria by which to filter queues.\n        limit: Limit for the queue query.\n        offset: Limit for the queue query.\n\n    Returns:\n        List of queues for the specified work pool.\n    \"\"\"\n    json = {\n        \"work_queues\": (\n            work_queue_filter.dict(json_compatible=True, exclude_unset=True)\n            if work_queue_filter\n            else None\n        ),\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n\n    if work_pool_name:\n        try:\n            response = await self._client.post(\n                f\"/work_pools/{work_pool_name}/queues/filter\",\n                json=json,\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n    else:\n        response = await self._client.post(\"/work_queues/filter\", json=json)\n\n    return pydantic.parse_obj_as(List[WorkQueue], response.json())\n
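    A minimal sketch of `read_work_queues`; "my-pool" is a placeholder, and omitting `work_pool_name` queries queues across all pools:

```python
import asyncio

from prefect import get_client


async def main():
    async with get_client() as client:
        queues = await client.read_work_queues(work_pool_name="my-pool", limit=20)
        for queue in queues:
            print(queue.name, queue.priority)


asyncio.run(main())
```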
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.get_scheduled_flow_runs_for_work_pool","title":"get_scheduled_flow_runs_for_work_pool async","text":"

    Retrieves scheduled flow runs for the provided set of work pool queues.

    Parameters:

    Name Type Description Default work_pool_name str

    The name of the work pool that the work pool queues are associated with.

    required work_queue_names Optional[List[str]]

    The names of the work pool queues from which to get scheduled flow runs.

    None scheduled_before Optional[datetime]

    Datetime used to filter returned flow runs. Flow runs scheduled for after the given datetime string will not be returned.

    None

    Returns:

    Type Description List[WorkerFlowRunResponse]

    A list of worker flow run responses containing information about the retrieved flow runs.

    Source code in prefect/client/orchestration.py
    async def get_scheduled_flow_runs_for_work_pool(\n    self,\n    work_pool_name: str,\n    work_queue_names: Optional[List[str]] = None,\n    scheduled_before: Optional[datetime.datetime] = None,\n) -> List[WorkerFlowRunResponse]:\n    \"\"\"\n    Retrieves scheduled flow runs for the provided set of work pool queues.\n\n    Args:\n        work_pool_name: The name of the work pool that the work pool\n            queues are associated with.\n        work_queue_names: The names of the work pool queues from which\n            to get scheduled flow runs.\n        scheduled_before: Datetime used to filter returned flow runs. Flow runs\n            scheduled for after the given datetime string will not be returned.\n\n    Returns:\n        A list of worker flow run responses containing information about the\n        retrieved flow runs.\n    \"\"\"\n    body: Dict[str, Any] = {}\n    if work_queue_names is not None:\n        body[\"work_queue_names\"] = list(work_queue_names)\n    if scheduled_before:\n        body[\"scheduled_before\"] = str(scheduled_before)\n\n    response = await self._client.post(\n        f\"/work_pools/{work_pool_name}/get_scheduled_flow_runs\",\n        json=body,\n    )\n    return pydantic.parse_obj_as(List[WorkerFlowRunResponse], response.json())\n
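    A minimal sketch of `get_scheduled_flow_runs_for_work_pool`, polling two queues for runs scheduled within the next hour; the pool and queue names are placeholders and the response attributes follow the `WorkerFlowRunResponse` schema:

```python
import asyncio
import datetime

from prefect import get_client


async def main():
    async with get_client() as client:
        responses = await client.get_scheduled_flow_runs_for_work_pool(
            work_pool_name="my-pool",
            work_queue_names=["default", "high-priority"],
            scheduled_before=datetime.datetime.now(datetime.timezone.utc)
            + datetime.timedelta(hours=1),
        )
        for item in responses:
            print(item.flow_run.id, item.flow_run.expected_start_time)


asyncio.run(main())
```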
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_artifact","title":"create_artifact async","text":"

    Creates an artifact with the provided configuration.

    Parameters:

    Name Type Description Default artifact ArtifactCreate

    Desired configuration for the new artifact.

    required Source code in prefect/client/orchestration.py
    async def create_artifact(\n    self,\n    artifact: ArtifactCreate,\n) -> Artifact:\n    \"\"\"\n    Creates an artifact with the provided configuration.\n\n    Args:\n        artifact: Desired configuration for the new artifact.\n    Returns:\n        Information about the newly created artifact.\n    \"\"\"\n\n    response = await self._client.post(\n        \"/artifacts/\",\n        json=artifact.dict(json_compatible=True, exclude_unset=True),\n    )\n\n    return pydantic.parse_obj_as(Artifact, response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_artifacts","title":"read_artifacts async","text":"

    Query the Prefect API for artifacts. Only artifacts matching all criteria will be returned.

    Parameters:

    artifact_filter: filter criteria for artifacts
    flow_run_filter: filter criteria for flow runs
    task_run_filter: filter criteria for task runs
    sort: sort criteria for the artifacts
    limit: limit for the artifact query
    offset: offset for the artifact query

    Returns:

    a list of Artifact model representations of the artifacts

    Source code in prefect/client/orchestration.py
    async def read_artifacts(\n    self,\n    *,\n    artifact_filter: ArtifactFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    sort: ArtifactSort = None,\n    limit: int = None,\n    offset: int = 0,\n) -> List[Artifact]:\n    \"\"\"\n    Query the Prefect API for artifacts. Only artifacts matching all criteria will\n    be returned.\n    Args:\n        artifact_filter: filter criteria for artifacts\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        sort: sort criteria for the artifacts\n        limit: limit for the artifact query\n        offset: offset for the artifact query\n    Returns:\n        a list of Artifact model representations of the artifacts\n    \"\"\"\n    body = {\n        \"artifacts\": (\n            artifact_filter.dict(json_compatible=True) if artifact_filter else None\n        ),\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"sort\": sort,\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n    response = await self._client.post(\"/artifacts/filter\", json=body)\n    return pydantic.parse_obj_as(List[Artifact], response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_latest_artifacts","title":"read_latest_artifacts async","text":"

    Query the Prefect API for artifacts. Only artifacts matching all criteria will be returned.

    Parameters:

    artifact_filter: filter criteria for artifacts
    flow_run_filter: filter criteria for flow runs
    task_run_filter: filter criteria for task runs
    sort: sort criteria for the artifacts
    limit: limit for the artifact query
    offset: offset for the artifact query

    Returns:

    a list of Artifact model representations of the artifacts

    Source code in prefect/client/orchestration.py
    async def read_latest_artifacts(\n    self,\n    *,\n    artifact_filter: ArtifactCollectionFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    sort: ArtifactCollectionSort = None,\n    limit: int = None,\n    offset: int = 0,\n) -> List[ArtifactCollection]:\n    \"\"\"\n    Query the Prefect API for artifacts. Only artifacts matching all criteria will\n    be returned.\n    Args:\n        artifact_filter: filter criteria for artifacts\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        sort: sort criteria for the artifacts\n        limit: limit for the artifact query\n        offset: offset for the artifact query\n    Returns:\n        a list of Artifact model representations of the artifacts\n    \"\"\"\n    body = {\n        \"artifacts\": (\n            artifact_filter.dict(json_compatible=True) if artifact_filter else None\n        ),\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"sort\": sort,\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n    response = await self._client.post(\"/artifacts/latest/filter\", json=body)\n    return pydantic.parse_obj_as(List[ArtifactCollection], response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_artifact","title":"delete_artifact async","text":"

    Deletes an artifact with the provided id.

    Parameters:

    Name Type Description Default artifact_id UUID

    The id of the artifact to delete.

    required Source code in prefect/client/orchestration.py
    async def delete_artifact(self, artifact_id: UUID) -> None:\n    \"\"\"\n    Deletes an artifact with the provided id.\n\n    Args:\n        artifact_id: The id of the artifact to delete.\n    \"\"\"\n    try:\n        await self._client.delete(f\"/artifacts/{artifact_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
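    A minimal sketch of the artifact methods above; the artifact key is a placeholder and the `ArtifactCreate` field names are taken from the action schema:

```python
import asyncio

from prefect import get_client
from prefect.client.schemas.actions import ArtifactCreate


async def main():
    async with get_client() as client:
        # Create a simple markdown artifact.
        artifact = await client.create_artifact(
            ArtifactCreate(key="example-report", type="markdown", data="# Hello")
        )

        # Query artifacts, then clean up.
        artifacts = await client.read_artifacts(limit=10)
        print([a.key for a in artifacts])

        await client.delete_artifact(artifact.id)


asyncio.run(main())
```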
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_variable","title":"create_variable async","text":"

    Creates a variable with the provided configuration.

    Parameters:

    Name Type Description Default variable VariableCreate

    Desired configuration for the new variable.

    required Source code in prefect/client/orchestration.py
    async def create_variable(self, variable: VariableCreate) -> Variable:\n    \"\"\"\n    Creates an variable with the provided configuration.\n\n    Args:\n        variable: Desired configuration for the new variable.\n    Returns:\n        Information about the newly created variable.\n    \"\"\"\n    response = await self._client.post(\n        \"/variables/\",\n        json=variable.dict(json_compatible=True, exclude_unset=True),\n    )\n    return Variable(**response.json())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_variable","title":"update_variable async","text":"

    Updates a variable with the provided configuration.

    Parameters:

    Name Type Description Default variable VariableUpdate

    Desired configuration for the updated variable.

    required Source code in prefect/client/orchestration.py
    async def update_variable(self, variable: VariableUpdate) -> None:\n    \"\"\"\n    Updates a variable with the provided configuration.\n\n    Args:\n        variable: Desired configuration for the updated variable.\n    Returns:\n        Information about the updated variable.\n    \"\"\"\n    await self._client.patch(\n        f\"/variables/name/{variable.name}\",\n        json=variable.dict(json_compatible=True, exclude_unset=True),\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_variable_by_name","title":"read_variable_by_name async","text":"

    Reads a variable by name. Returns None if no variable is found.

    Source code in prefect/client/orchestration.py
    async def read_variable_by_name(self, name: str) -> Optional[Variable]:\n    \"\"\"Reads a variable by name. Returns None if no variable is found.\"\"\"\n    try:\n        response = await self._client.get(f\"/variables/name/{name}\")\n        return Variable(**response.json())\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            return None\n        else:\n            raise\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_variable_by_name","title":"delete_variable_by_name async","text":"

    Deletes a variable by name.

    Source code in prefect/client/orchestration.py
    async def delete_variable_by_name(self, name: str):\n    \"\"\"Deletes a variable by name.\"\"\"\n    try:\n        await self._client.delete(f\"/variables/name/{name}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_variables","title":"read_variables async","text":"

    Reads all variables.

    Source code in prefect/client/orchestration.py
    async def read_variables(self, limit: int = None) -> List[Variable]:\n    \"\"\"Reads all variables.\"\"\"\n    response = await self._client.post(\"/variables/filter\", json={\"limit\": limit})\n    return pydantic.parse_obj_as(List[Variable], response.json())\n
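    A minimal sketch of the variable methods above; the variable name and values are placeholders:

```python
import asyncio

from prefect import get_client
from prefect.client.schemas.actions import VariableCreate, VariableUpdate


async def main():
    async with get_client() as client:
        await client.create_variable(VariableCreate(name="greeting", value="hello"))

        await client.update_variable(VariableUpdate(name="greeting", value="hi"))

        variable = await client.read_variable_by_name("greeting")
        print(variable.value if variable else "not found")

        await client.delete_variable_by_name("greeting")


asyncio.run(main())
```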
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_worker_metadata","title":"read_worker_metadata async","text":"

    Reads worker metadata stored in the Prefect collection registry.

    Source code in prefect/client/orchestration.py
    async def read_worker_metadata(self) -> Dict[str, Any]:\n    \"\"\"Reads worker metadata stored in Prefect collection registry.\"\"\"\n    response = await self._client.get(\"collections/views/aggregate-worker-metadata\")\n    response.raise_for_status()\n    return response.json()\n
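    A minimal sketch of `read_worker_metadata`:

```python
import asyncio

from prefect import get_client


async def main():
    async with get_client() as client:
        metadata = await client.read_worker_metadata()
        # The response maps collection names to the worker types they provide.
        print(list(metadata))


asyncio.run(main())
```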
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_run_input","title":"create_flow_run_input async","text":"

    Creates a flow run input.

    Parameters:

    Name Type Description Default flow_run_id UUID

    The flow run id.

    required key str

    The input key.

    required value str

    The input value.

    required sender Optional[str]

    The sender of the input.

    None Source code in prefect/client/orchestration.py
    async def create_flow_run_input(\n    self, flow_run_id: UUID, key: str, value: str, sender: Optional[str] = None\n):\n    \"\"\"\n    Creates a flow run input.\n\n    Args:\n        flow_run_id: The flow run id.\n        key: The input key.\n        value: The input value.\n        sender: The sender of the input.\n    \"\"\"\n\n    # Initialize the input to ensure that the key is valid.\n    FlowRunInput(flow_run_id=flow_run_id, key=key, value=value)\n\n    response = await self._client.post(\n        f\"/flow_runs/{flow_run_id}/input\",\n        json={\"key\": key, \"value\": value, \"sender\": sender},\n    )\n    response.raise_for_status()\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_run_input","title":"read_flow_run_input async","text":"

    Reads a flow run input.

    Parameters:

    Name Type Description Default flow_run_id UUID

    The flow run id.

    required key str

    The input key.

    required Source code in prefect/client/orchestration.py
    async def read_flow_run_input(self, flow_run_id: UUID, key: str) -> str:\n    \"\"\"\n    Reads a flow run input.\n\n    Args:\n        flow_run_id: The flow run id.\n        key: The input key.\n    \"\"\"\n    response = await self._client.get(f\"/flow_runs/{flow_run_id}/input/{key}\")\n    response.raise_for_status()\n    return response.content.decode()\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_flow_run_input","title":"delete_flow_run_input async","text":"

    Deletes a flow run input.

    Parameters:

    Name Type Description Default flow_run_id UUID

    The flow run id.

    required key str

    The input key.

    required Source code in prefect/client/orchestration.py
    async def delete_flow_run_input(self, flow_run_id: UUID, key: str):\n    \"\"\"\n    Deletes a flow run input.\n\n    Args:\n        flow_run_id: The flow run id.\n        key: The input key.\n    \"\"\"\n    response = await self._client.delete(f\"/flow_runs/{flow_run_id}/input/{key}\")\n    response.raise_for_status()\n
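    A minimal sketch of the flow run input methods above; the flow run id and input key are placeholders, and values are stored as strings:

```python
import asyncio
from uuid import UUID

from prefect import get_client

# Placeholder flow run id for illustration.
FLOW_RUN_ID = UUID("00000000-0000-0000-0000-000000000000")


async def main():
    async with get_client() as client:
        # Values are plain strings; JSON-encode structured data yourself.
        await client.create_flow_run_input(FLOW_RUN_ID, key="approval", value='"yes"')

        value = await client.read_flow_run_input(FLOW_RUN_ID, key="approval")
        print(value)

        await client.delete_flow_run_input(FLOW_RUN_ID, key="approval")


asyncio.run(main())
```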
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_automation","title":"create_automation async","text":"

    Creates an automation in Prefect Cloud.

    Source code in prefect/client/orchestration.py
    async def create_automation(self, automation: AutomationCore) -> UUID:\n    \"\"\"Creates an automation in Prefect Cloud.\"\"\"\n    if not self.server_type.supports_automations():\n        self._raise_for_unsupported_automations()\n\n    response = await self._client.post(\n        \"/automations/\",\n        json=automation.dict(json_compatible=True),\n    )\n\n    return UUID(response.json()[\"id\"])\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_automation","title":"update_automation async","text":"

    Updates an automation in Prefect Cloud.

    Source code in prefect/client/orchestration.py
    async def update_automation(self, automation_id: UUID, automation: AutomationCore):\n    \"\"\"Updates an automation in Prefect Cloud.\"\"\"\n    if not self.server_type.supports_automations():\n        self._raise_for_unsupported_automations()\n    response = await self._client.put(\n        f\"/automations/{automation_id}\",\n        json=automation.dict(json_compatible=True, exclude_unset=True),\n    )\n    response.raise_for_status()\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_automations_by_name","title":"read_automations_by_name async","text":"

    Query the Prefect API for an automation by name. Only automations matching the provided name will be returned.

    Parameters:

    Name Type Description Default name str

    the name of the automation to query

    required

    Returns:

    Type Description List[Automation]

    a list of Automation model representations of the automations

    Source code in prefect/client/orchestration.py
    async def read_automations_by_name(self, name: str) -> List[Automation]:\n    \"\"\"\n    Query the Prefect API for an automation by name. Only automations matching the provided name will be returned.\n\n    Args:\n        name: the name of the automation to query\n\n    Returns:\n        a list of Automation model representations of the automations\n    \"\"\"\n    if not self.server_type.supports_automations():\n        self._raise_for_unsupported_automations()\n    automation_filter = filters.AutomationFilter(name=dict(any_=[name]))\n\n    response = await self._client.post(\n        \"/automations/filter\",\n        json={\n            \"sort\": sorting.AutomationSort.UPDATED_DESC,\n            \"automations\": automation_filter.dict(json_compatible=True)\n            if automation_filter\n            else None,\n        },\n    )\n\n    response.raise_for_status()\n\n    return pydantic.parse_obj_as(List[Automation], response.json())\n
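    A minimal sketch of the read path for automations; constructing an `AutomationCore` for `create_automation` depends on the events schemas, so this only queries by name ("my-automation" is a placeholder) and requires Prefect Cloud or a server that supports automations:

```python
import asyncio

from prefect import get_client


async def main():
    async with get_client() as client:
        automations = await client.read_automations_by_name("my-automation")
        for automation in automations:
            print(automation.id, automation.name)


asyncio.run(main())
```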
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.SyncPrefectClient","title":"SyncPrefectClient","text":"

    A synchronous client for interacting with the Prefect REST API.

    Parameters:

    Name Type Description Default api Union[str, ASGIApp]

    the REST API URL or FastAPI application to connect to

    required api_key Optional[str]

    An optional API key for authentication.

    None api_version Optional[str]

    The API version this client is compatible with.

    None httpx_settings Optional[Dict[str, Any]]

    An optional dictionary of settings to pass to the underlying httpx.AsyncClient

    None
    Say hello to a Prefect REST API

    >>> with get_client(sync_client=True) as client:
    >>>     response = client.hello()
    >>>
    >>> print(response.json())
    👋
    Source code in prefect/client/orchestration.py
    class SyncPrefectClient:\n    \"\"\"\n    A synchronous client for interacting with the [Prefect REST API](/api-ref/rest-api/).\n\n    Args:\n        api: the REST API URL or FastAPI application to connect to\n        api_key: An optional API key for authentication.\n        api_version: The API version this client is compatible with.\n        httpx_settings: An optional dictionary of settings to pass to the underlying\n            `httpx.AsyncClient`\n\n    Examples:\n\n        Say hello to a Prefect REST API\n\n        <div class=\"terminal\">\n        ```\n        >>> with get_client(sync_client=True) as client:\n        >>>     response = client.hello()\n        >>>\n        >>> print(response.json())\n        \ud83d\udc4b\n        ```\n        </div>\n    \"\"\"\n\n    def __init__(\n        self,\n        api: Union[str, ASGIApp],\n        *,\n        api_key: Optional[str] = None,\n        api_version: Optional[str] = None,\n        httpx_settings: Optional[Dict[str, Any]] = None,\n    ) -> None:\n        self._prefect_client = PrefectClient(\n            api=api,\n            api_key=api_key,\n            api_version=api_version,\n            httpx_settings=httpx_settings,\n        )\n\n    def __enter__(self):\n        run_sync(self._prefect_client.__aenter__())\n        return self\n\n    def __exit__(self, *exc_info):\n        return run_sync(self._prefect_client.__aexit__(*exc_info))\n\n    async def __aenter__(self):\n        raise RuntimeError(\n            \"The `SyncPrefectClient` must be entered with a sync context. Use '\"\n            \"with SyncPrefectClient(...)' not 'async with SyncPrefectClient(...)'\"\n        )\n\n    async def __aexit__(self, *_):\n        assert False, \"This should never be called but must be defined for __aenter__\"\n\n    def hello(self) -> httpx.Response:\n        \"\"\"\n        Send a GET request to /hello for testing purposes.\n        \"\"\"\n        return run_sync(self._prefect_client.hello())\n\n    def create_task_run(\n        self,\n        task: \"TaskObject[P, R]\",\n        flow_run_id: Optional[UUID],\n        dynamic_key: str,\n        name: Optional[str] = None,\n        extra_tags: Optional[Iterable[str]] = None,\n        state: Optional[prefect.states.State[R]] = None,\n        task_inputs: Optional[\n            Dict[\n                str,\n                List[\n                    Union[\n                        TaskRunResult,\n                        Parameter,\n                        Constant,\n                    ]\n                ],\n            ]\n        ] = None,\n    ) -> TaskRun:\n        \"\"\"\n        Create a task run\n\n        Args:\n            task: The Task to run\n            flow_run_id: The flow run id with which to associate the task run\n            dynamic_key: A key unique to this particular run of a Task within the flow\n            name: An optional name for the task run\n            extra_tags: an optional list of extra tags to apply to the task run in\n                addition to `task.tags`\n            state: The initial state for the run. If not provided, defaults to\n                `Pending` for now. 
Should always be a `Scheduled` type.\n            task_inputs: the set of inputs passed to the task\n\n        Returns:\n            The created task run.\n        \"\"\"\n        return run_sync(\n            self._prefect_client.create_task_run(\n                task=task,\n                flow_run_id=flow_run_id,\n                dynamic_key=dynamic_key,\n                name=name,\n                extra_tags=extra_tags,\n                state=state,\n                task_inputs=task_inputs,\n            )\n        )\n\n    def set_task_run_state(\n        self,\n        task_run_id: UUID,\n        state: prefect.states.State,\n        force: bool = False,\n    ) -> OrchestrationResult:\n        \"\"\"\n        Set the state of a task run.\n\n        Args:\n            task_run_id: the id of the task run\n            state: the state to set\n            force: if True, disregard orchestration logic when setting the state,\n                forcing the Prefect API to accept the state\n\n        Returns:\n            an OrchestrationResult model representation of state orchestration output\n        \"\"\"\n        return run_sync(\n            self._prefect_client.set_task_run_state(\n                task_run_id=task_run_id,\n                state=state,\n                force=force,\n            )\n        )\n\n    def create_flow_run(\n        self,\n        flow_id: UUID,\n        parameters: Optional[Dict[str, Any]] = None,\n        context: Optional[Dict[str, Any]] = None,\n        scheduled_start_time: Optional[datetime.datetime] = None,\n        run_name: Optional[str] = None,\n        labels: Optional[List[str]] = None,\n        parameters_json: Optional[str] = None,\n        run_config: Optional[Dict[str, Any]] = None,\n        idempotency_key: Optional[str] = None,\n    ) -> FlowRunResponse:\n        \"\"\"\n        Create a new flow run.\n\n        Args:\n            - flow_id (UUID): the ID of the flow to create a run for\n            - parameters (Optional[Dict[str, Any]]): a dictionary of parameter values to pass to the flow\n            - context (Optional[Dict[str, Any]]): a dictionary of context values to pass to the flow\n            - scheduled_start_time (Optional[datetime.datetime]): the scheduled start time for the flow run\n            - run_name (Optional[str]): a name to assign to the flow run\n            - labels (Optional[List[str]]): a list of labels to assign to the flow run\n            - parameters_json (Optional[str]): a JSON string of parameter values to pass to the flow\n            - run_config (Optional[Dict[str, Any]]): a dictionary of run configuration options\n            - idempotency_key (Optional[str]): a key to ensure idempotency when creating the flow run\n\n        Returns:\n            - FlowRunResponse: the created flow run\n        \"\"\"\n        return run_sync(\n            self._prefect_client.create_flow_run(\n                flow_id=flow_id,\n                parameters=parameters,\n                context=context,\n                scheduled_start_time=scheduled_start_time,\n                run_name=run_name,\n                labels=labels,\n                parameters_json=parameters_json,\n                run_config=run_config,\n                idempotency_key=idempotency_key,\n            )\n        )\n\n    async def set_flow_run_state(\n        self,\n        flow_run_id: UUID,\n        state: \"prefect.states.State\",\n        force: bool = False,\n    ) -> OrchestrationResult:\n        \"\"\"\n        Set the state of a flow 
run.\n\n        Args:\n            flow_run_id: the id of the flow run\n            state: the state to set\n            force: if True, disregard orchestration logic when setting the state,\n                forcing the Prefect API to accept the state\n\n        Returns:\n            an OrchestrationResult model representation of state orchestration output\n        \"\"\"\n        return run_sync(\n            self._prefect_client.set_flow_run_state(\n                flow_run_id=flow_run_id,\n                state=state,\n                force=force,\n            )\n        )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.SyncPrefectClient.hello","title":"hello","text":"

    Send a GET request to /hello for testing purposes.

    Source code in prefect/client/orchestration.py
    def hello(self) -> httpx.Response:\n    \"\"\"\n    Send a GET request to /hello for testing purposes.\n    \"\"\"\n    return run_sync(self._prefect_client.hello())\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.SyncPrefectClient.create_task_run","title":"create_task_run","text":"

    Create a task run

    Parameters:

    Name Type Description Default task Task[P, R]

    The Task to run

    required flow_run_id Optional[UUID]

    The flow run id with which to associate the task run

    required dynamic_key str

    A key unique to this particular run of a Task within the flow

    required name Optional[str]

    An optional name for the task run

    None extra_tags Optional[Iterable[str]]

    an optional list of extra tags to apply to the task run in addition to task.tags

    None state Optional[State[R]]

    The initial state for the run. If not provided, defaults to Pending for now. Should always be a Scheduled type.

    None task_inputs Optional[Dict[str, List[Union[TaskRunResult, Parameter, Constant]]]]

    the set of inputs passed to the task

    None

    Returns:

    Type Description TaskRun

    The created task run.

    Source code in prefect/client/orchestration.py
    def create_task_run(\n    self,\n    task: \"TaskObject[P, R]\",\n    flow_run_id: Optional[UUID],\n    dynamic_key: str,\n    name: Optional[str] = None,\n    extra_tags: Optional[Iterable[str]] = None,\n    state: Optional[prefect.states.State[R]] = None,\n    task_inputs: Optional[\n        Dict[\n            str,\n            List[\n                Union[\n                    TaskRunResult,\n                    Parameter,\n                    Constant,\n                ]\n            ],\n        ]\n    ] = None,\n) -> TaskRun:\n    \"\"\"\n    Create a task run\n\n    Args:\n        task: The Task to run\n        flow_run_id: The flow run id with which to associate the task run\n        dynamic_key: A key unique to this particular run of a Task within the flow\n        name: An optional name for the task run\n        extra_tags: an optional list of extra tags to apply to the task run in\n            addition to `task.tags`\n        state: The initial state for the run. If not provided, defaults to\n            `Pending` for now. Should always be a `Scheduled` type.\n        task_inputs: the set of inputs passed to the task\n\n    Returns:\n        The created task run.\n    \"\"\"\n    return run_sync(\n        self._prefect_client.create_task_run(\n            task=task,\n            flow_run_id=flow_run_id,\n            dynamic_key=dynamic_key,\n            name=name,\n            extra_tags=extra_tags,\n            state=state,\n            task_inputs=task_inputs,\n        )\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.SyncPrefectClient.set_task_run_state","title":"set_task_run_state","text":"

    Set the state of a task run.

    Parameters:

    Name Type Description Default task_run_id UUID

    the id of the task run

    required state State

    the state to set

    required force bool

    if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state

    False

    Returns:

    Type Description OrchestrationResult

    an OrchestrationResult model representation of state orchestration output

    Source code in prefect/client/orchestration.py
    def set_task_run_state(\n    self,\n    task_run_id: UUID,\n    state: prefect.states.State,\n    force: bool = False,\n) -> OrchestrationResult:\n    \"\"\"\n    Set the state of a task run.\n\n    Args:\n        task_run_id: the id of the task run\n        state: the state to set\n        force: if True, disregard orchestration logic when setting the state,\n            forcing the Prefect API to accept the state\n\n    Returns:\n        an OrchestrationResult model representation of state orchestration output\n    \"\"\"\n    return run_sync(\n        self._prefect_client.set_task_run_state(\n            task_run_id=task_run_id,\n            state=state,\n            force=force,\n        )\n    )\n
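    A minimal sketch of the same call through the synchronous client; the task run id is a placeholder:

```python
from uuid import UUID

from prefect import get_client
from prefect.states import Failed

# Placeholder task run id.
TASK_RUN_ID = UUID("00000000-0000-0000-0000-000000000000")

# The sync client mirrors the async client but is used in a regular
# `with` block, so no event loop is required.
with get_client(sync_client=True) as client:
    result = client.set_task_run_state(TASK_RUN_ID, Failed(), force=True)
    print(result.status)
```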
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.SyncPrefectClient.create_flow_run","title":"create_flow_run","text":"

    Create a new flow run.

    Parameters:

    Name Type Description Default flow_id UUID

    the ID of the flow to create a run for

    required parameters Optional[Dict[str, Any]]

    a dictionary of parameter values to pass to the flow

    None context Optional[Dict[str, Any]]

    a dictionary of context values to pass to the flow

    None scheduled_start_time Optional[datetime.datetime]

    the scheduled start time for the flow run

    None run_name Optional[str]

    a name to assign to the flow run

    None labels Optional[List[str]]

    a list of labels to assign to the flow run

    None parameters_json Optional[str]

    a JSON string of parameter values to pass to the flow

    None run_config Optional[Dict[str, Any]]

    a dictionary of run configuration options

    None idempotency_key Optional[str]

    a key to ensure idempotency when creating the flow run

    None

    Returns:

    Type Description FlowRunResponse

    the created flow run
    Source code in prefect/client/orchestration.py
    def create_flow_run(\n    self,\n    flow_id: UUID,\n    parameters: Optional[Dict[str, Any]] = None,\n    context: Optional[Dict[str, Any]] = None,\n    scheduled_start_time: Optional[datetime.datetime] = None,\n    run_name: Optional[str] = None,\n    labels: Optional[List[str]] = None,\n    parameters_json: Optional[str] = None,\n    run_config: Optional[Dict[str, Any]] = None,\n    idempotency_key: Optional[str] = None,\n) -> FlowRunResponse:\n    \"\"\"\n    Create a new flow run.\n\n    Args:\n        - flow_id (UUID): the ID of the flow to create a run for\n        - parameters (Optional[Dict[str, Any]]): a dictionary of parameter values to pass to the flow\n        - context (Optional[Dict[str, Any]]): a dictionary of context values to pass to the flow\n        - scheduled_start_time (Optional[datetime.datetime]): the scheduled start time for the flow run\n        - run_name (Optional[str]): a name to assign to the flow run\n        - labels (Optional[List[str]]): a list of labels to assign to the flow run\n        - parameters_json (Optional[str]): a JSON string of parameter values to pass to the flow\n        - run_config (Optional[Dict[str, Any]]): a dictionary of run configuration options\n        - idempotency_key (Optional[str]): a key to ensure idempotency when creating the flow run\n\n    Returns:\n        - FlowRunResponse: the created flow run\n    \"\"\"\n    return run_sync(\n        self._prefect_client.create_flow_run(\n            flow_id=flow_id,\n            parameters=parameters,\n            context=context,\n            scheduled_start_time=scheduled_start_time,\n            run_name=run_name,\n            labels=labels,\n            parameters_json=parameters_json,\n            run_config=run_config,\n            idempotency_key=idempotency_key,\n        )\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.SyncPrefectClient.set_flow_run_state","title":"set_flow_run_state async","text":"

    Set the state of a flow run.

    Parameters:

    Name Type Description Default flow_run_id UUID

    the id of the flow run

    required state State

    the state to set

    required force bool

    if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state

    False

    Returns:

    Type Description OrchestrationResult

    an OrchestrationResult model representation of state orchestration output

    Source code in prefect/client/orchestration.py
    async def set_flow_run_state(\n    self,\n    flow_run_id: UUID,\n    state: \"prefect.states.State\",\n    force: bool = False,\n) -> OrchestrationResult:\n    \"\"\"\n    Set the state of a flow run.\n\n    Args:\n        flow_run_id: the id of the flow run\n        state: the state to set\n        force: if True, disregard orchestration logic when setting the state,\n            forcing the Prefect API to accept the state\n\n    Returns:\n        an OrchestrationResult model representation of state orchestration output\n    \"\"\"\n    return run_sync(\n        self._prefect_client.set_flow_run_state(\n            flow_run_id=flow_run_id,\n            state=state,\n            force=force,\n        )\n    )\n
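    Because this wrapper delegates to the underlying asynchronous client, a hedged sketch of the equivalent async call is shown below. Cancelled from prefect.states is an assumption (any prefect.states.State works), and the flow run id is a placeholder.

    ```python
    from uuid import UUID

    from prefect.client.orchestration import get_client
    from prefect.states import Cancelled  # assumed helper; any State instance works

    async def cancel_run(flow_run_id: UUID) -> None:
        async with get_client() as client:
            result = await client.set_flow_run_state(
                flow_run_id=flow_run_id,
                state=Cancelled(),
                force=True,  # bypass orchestration rules and accept the state as-is
            )
            print(result)
    ```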
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.get_client","title":"get_client","text":"

    Retrieve an HTTP client for communicating with the Prefect REST API.

    The client must be context managed; for example:

    async with get_client() as client:\n    await client.hello()\n

    To return a synchronous client, pass sync_client=True:

    with get_client(sync_client=True) as client:\n    client.hello()\n
    Source code in prefect/client/orchestration.py
    def get_client(\n    httpx_settings: Optional[Dict[str, Any]] = None, sync_client: bool = False\n) -> \"PrefectClient\":\n    \"\"\"\n    Retrieve a HTTP client for communicating with the Prefect REST API.\n\n    The client must be context managed; for example:\n\n    ```python\n    async with get_client() as client:\n        await client.hello()\n    ```\n\n    To return a synchronous client, pass sync_client=True:\n\n    ```python\n    with get_client(sync_client=True) as client:\n        client.hello()\n    ```\n    \"\"\"\n    ctx = prefect.context.get_settings_context()\n    api = PREFECT_API_URL.value()\n\n    if not api:\n        # create an ephemeral API if none was provided\n        from prefect.server.api.server import create_app\n\n        api = create_app(ctx.settings, ephemeral=True)\n\n    if sync_client:\n        return SyncPrefectClient(\n            api,\n            api_key=PREFECT_API_KEY.value(),\n            httpx_settings=httpx_settings,\n        )\n    else:\n        return PrefectClient(\n            api,\n            api_key=PREFECT_API_KEY.value(),\n            httpx_settings=httpx_settings,\n        )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_1","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions","title":"prefect.client.schemas.actions","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.StateCreate","title":"StateCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a new state.

    Source code in prefect/client/schemas/actions.py
    class StateCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a new state.\"\"\"\n\n    type: StateType\n    name: Optional[str] = Field(default=None)\n    message: Optional[str] = Field(default=None, examples=[\"Run started\"])\n    state_details: StateDetails = Field(default_factory=StateDetails)\n    data: Union[\"BaseResult[R]\", \"DataDocument[R]\", Any] = Field(\n        default=None,\n    )\n
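    An illustrative payload, assuming StateType is importable from prefect.client.schemas.objects and has a COMPLETED member:

    ```python
    from prefect.client.schemas.actions import StateCreate
    from prefect.client.schemas.objects import StateType  # assumed import path

    payload = StateCreate(
        type=StateType.COMPLETED,
        name="Completed",
        message="All tasks finished",  # free-text message, as on the field above
    )
    print(payload.dict())
    ```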
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowCreate","title":"FlowCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a flow.

    Source code in prefect/client/schemas/actions.py
    class FlowCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a flow.\"\"\"\n\n    name: str = Field(\n        default=..., description=\"The name of the flow\", examples=[\"my-flow\"]\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of flow tags\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowUpdate","title":"FlowUpdate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to update a flow.

    Source code in prefect/client/schemas/actions.py
    class FlowUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a flow.\"\"\"\n\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of flow tags\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.DeploymentCreate","title":"DeploymentCreate","text":"

    Bases: DeprecatedInfraOverridesField, ActionBaseModel

    Data used by the Prefect REST API to create a deployment.

    Source code in prefect/client/schemas/actions.py
    class DeploymentCreate(DeprecatedInfraOverridesField, ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a deployment.\"\"\"\n\n    @root_validator(pre=True)\n    def remove_old_fields(cls, values):\n        return remove_old_deployment_fields(values)\n\n    name: str = Field(..., description=\"The name of the deployment.\")\n    flow_id: UUID = Field(..., description=\"The ID of the flow to deploy.\")\n    is_schedule_active: Optional[bool] = Field(None)\n    paused: Optional[bool] = Field(None)\n    schedules: List[DeploymentScheduleCreate] = Field(\n        default_factory=list,\n        description=\"A list of schedules for the deployment.\",\n    )\n    enforce_parameter_schema: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Whether or not the deployment should enforce the parameter schema.\"\n        ),\n    )\n    parameter_openapi_schema: Optional[Dict[str, Any]] = Field(None)\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Parameters for flow runs scheduled by the deployment.\",\n    )\n    tags: List[str] = Field(default_factory=list)\n    pull_steps: Optional[List[dict]] = Field(None)\n\n    manifest_path: Optional[str] = Field(None)\n    work_queue_name: Optional[str] = Field(None)\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the deployment's work pool.\",\n        examples=[\"my-work-pool\"],\n    )\n    storage_document_id: Optional[UUID] = Field(None)\n    infrastructure_document_id: Optional[UUID] = Field(None)\n    schedule: Optional[SCHEDULE_TYPES] = Field(None)\n    description: Optional[str] = Field(None)\n    path: Optional[str] = Field(None)\n    version: Optional[str] = Field(None)\n    entrypoint: Optional[str] = Field(None)\n    job_variables: Optional[Dict[str, Any]] = Field(\n        default_factory=dict,\n        description=\"Overrides to apply to flow run infrastructure at runtime.\",\n    )\n\n    def check_valid_configuration(self, base_job_template: dict):\n        \"\"\"Check that the combination of base_job_template defaults\n        and job_variables conforms to the specified schema.\n        \"\"\"\n        variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n        if variables_schema is not None:\n            # jsonschema considers required fields, even if that field has a default,\n            # to still be required. To get around this we remove the fields from\n            # required if there is a default present.\n            required = variables_schema.get(\"required\")\n            properties = variables_schema.get(\"properties\")\n            if required is not None and properties is not None:\n                for k, v in properties.items():\n                    if \"default\" in v and k in required:\n                        required.remove(k)\n\n            jsonschema.validate(self.job_variables, variables_schema)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.DeploymentCreate.check_valid_configuration","title":"check_valid_configuration","text":"

    Check that the combination of base_job_template defaults and job_variables conforms to the specified schema.

    Source code in prefect/client/schemas/actions.py
    def check_valid_configuration(self, base_job_template: dict):\n    \"\"\"Check that the combination of base_job_template defaults\n    and job_variables conforms to the specified schema.\n    \"\"\"\n    variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n    if variables_schema is not None:\n        # jsonschema considers required fields, even if that field has a default,\n        # to still be required. To get around this we remove the fields from\n        # required if there is a default present.\n        required = variables_schema.get(\"required\")\n        properties = variables_schema.get(\"properties\")\n        if required is not None and properties is not None:\n            for k, v in properties.items():\n                if \"default\" in v and k in required:\n                    required.remove(k)\n\n        jsonschema.validate(self.job_variables, variables_schema)\n
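    A sketch of what this check enforces, using a made-up base job template: job_variables must satisfy the template's variables schema, but fields that carry a default (cpu below) are dropped from the required list before validation.

    ```python
    from uuid import uuid4

    from prefect.client.schemas.actions import DeploymentCreate

    base_job_template = {
        "variables": {
            "type": "object",
            "required": ["image", "cpu"],
            "properties": {
                "image": {"type": "string"},
                "cpu": {"type": "integer", "default": 1},  # has a default => treated as optional
            },
        }
    }

    deployment = DeploymentCreate(
        name="my-deployment",
        flow_id=uuid4(),
        job_variables={"image": "prefecthq/prefect:2-latest"},  # cpu intentionally omitted
    )

    # Raises jsonschema.ValidationError if job_variables do not conform.
    deployment.check_valid_configuration(base_job_template)
    ```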
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.DeploymentUpdate","title":"DeploymentUpdate","text":"

    Bases: DeprecatedInfraOverridesField, ActionBaseModel

    Data used by the Prefect REST API to update a deployment.

    Source code in prefect/client/schemas/actions.py
    class DeploymentUpdate(DeprecatedInfraOverridesField, ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a deployment.\"\"\"\n\n    @root_validator(pre=True)\n    def remove_old_fields(cls, values):\n        return remove_old_deployment_fields(values)\n\n    @validator(\"schedule\")\n    def validate_none_schedule(cls, v):\n        return return_none_schedule(v)\n\n    version: Optional[str] = Field(None)\n    schedule: Optional[SCHEDULE_TYPES] = Field(None)\n    description: Optional[str] = Field(None)\n    is_schedule_active: bool = Field(None)\n    parameters: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=\"Parameters for flow runs scheduled by the deployment.\",\n    )\n    tags: List[str] = Field(default_factory=list)\n    work_queue_name: Optional[str] = Field(None)\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the deployment's work pool.\",\n        examples=[\"my-work-pool\"],\n    )\n    path: Optional[str] = Field(None)\n    job_variables: Optional[Dict[str, Any]] = Field(\n        default_factory=dict,\n        description=\"Overrides to apply to flow run infrastructure at runtime.\",\n    )\n    entrypoint: Optional[str] = Field(None)\n    manifest_path: Optional[str] = Field(None)\n    storage_document_id: Optional[UUID] = Field(None)\n    infrastructure_document_id: Optional[UUID] = Field(None)\n    enforce_parameter_schema: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Whether or not the deployment should enforce the parameter schema.\"\n        ),\n    )\n\n    def check_valid_configuration(self, base_job_template: dict):\n        \"\"\"Check that the combination of base_job_template defaults\n        and job_variables conforms to the specified schema.\n        \"\"\"\n        variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n        if variables_schema is not None:\n            # jsonschema considers required fields, even if that field has a default,\n            # to still be required. To get around this we remove the fields from\n            # required if there is a default present.\n            required = variables_schema.get(\"required\")\n            properties = variables_schema.get(\"properties\")\n            if required is not None and properties is not None:\n                for k, v in properties.items():\n                    if \"default\" in v and k in required:\n                        required.remove(k)\n\n        if variables_schema is not None:\n            jsonschema.validate(self.job_variables, variables_schema)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.DeploymentUpdate.check_valid_configuration","title":"check_valid_configuration","text":"

    Check that the combination of base_job_template defaults and job_variables conforms to the specified schema.

    Source code in prefect/client/schemas/actions.py
    def check_valid_configuration(self, base_job_template: dict):\n    \"\"\"Check that the combination of base_job_template defaults\n    and job_variables conforms to the specified schema.\n    \"\"\"\n    variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n    if variables_schema is not None:\n        # jsonschema considers required fields, even if that field has a default,\n        # to still be required. To get around this we remove the fields from\n        # required if there is a default present.\n        required = variables_schema.get(\"required\")\n        properties = variables_schema.get(\"properties\")\n        if required is not None and properties is not None:\n            for k, v in properties.items():\n                if \"default\" in v and k in required:\n                    required.remove(k)\n\n    if variables_schema is not None:\n        jsonschema.validate(self.job_variables, variables_schema)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowRunUpdate","title":"FlowRunUpdate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to update a flow run.

    Source code in prefect/client/schemas/actions.py
    class FlowRunUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a flow run.\"\"\"\n\n    name: Optional[str] = Field(None)\n    flow_version: Optional[str] = Field(None)\n    parameters: Optional[Dict[str, Any]] = Field(None)\n    empirical_policy: objects.FlowRunPolicy = Field(\n        default_factory=objects.FlowRunPolicy\n    )\n    tags: List[str] = Field(default_factory=list)\n    infrastructure_pid: Optional[str] = Field(None)\n    job_variables: Optional[Dict[str, Any]] = Field(None)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.TaskRunCreate","title":"TaskRunCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a task run

    Source code in prefect/client/schemas/actions.py
    class TaskRunCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a task run\"\"\"\n\n    # TaskRunCreate states must be provided as StateCreate objects\n    state: Optional[StateCreate] = Field(\n        default=None, description=\"The state of the task run to create\"\n    )\n\n    name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the task run\",\n    )\n    flow_run_id: Optional[UUID] = Field(None)\n    task_key: str = Field(\n        default=..., description=\"A unique identifier for the task being run.\"\n    )\n    dynamic_key: str = Field(\n        default=...,\n        description=(\n            \"A dynamic key used to differentiate between multiple runs of the same task\"\n            \" within the same flow run.\"\n        ),\n    )\n    cache_key: Optional[str] = Field(None)\n    cache_expiration: Optional[objects.DateTimeTZ] = Field(None)\n    task_version: Optional[str] = Field(None)\n    empirical_policy: objects.TaskRunPolicy = Field(\n        default_factory=objects.TaskRunPolicy,\n    )\n    tags: List[str] = Field(default_factory=list)\n    task_inputs: Dict[\n        str,\n        List[\n            Union[\n                objects.TaskRunResult,\n                objects.Parameter,\n                objects.Constant,\n            ]\n        ],\n    ] = Field(default_factory=dict)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.TaskRunUpdate","title":"TaskRunUpdate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to update a task run

    Source code in prefect/client/schemas/actions.py
    class TaskRunUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a task run\"\"\"\n\n    name: Optional[str] = Field(None)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowRunCreate","title":"FlowRunCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a flow run.

    Source code in prefect/client/schemas/actions.py
    class FlowRunCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a flow run.\"\"\"\n\n    # FlowRunCreate states must be provided as StateCreate objects\n    state: Optional[StateCreate] = Field(\n        default=None, description=\"The state of the flow run to create\"\n    )\n\n    name: Optional[str] = Field(default=None, description=\"The name of the flow run.\")\n    flow_id: UUID = Field(default=..., description=\"The id of the flow being run.\")\n    deployment_id: Optional[UUID] = Field(None)\n    flow_version: Optional[str] = Field(None)\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The parameters for the flow run.\"\n    )\n    context: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The context for the flow run.\"\n    )\n    parent_task_run_id: Optional[UUID] = Field(None)\n    infrastructure_document_id: Optional[UUID] = Field(None)\n    empirical_policy: objects.FlowRunPolicy = Field(\n        default_factory=objects.FlowRunPolicy\n    )\n    tags: List[str] = Field(default_factory=list)\n    idempotency_key: Optional[str] = Field(None)\n\n    class Config(ActionBaseModel.Config):\n        json_dumps = orjson_dumps_extra_compatible\n
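    An illustrative FlowRunCreate payload; the flow id is a placeholder for an existing flow.

    ```python
    from uuid import UUID

    from prefect.client.schemas.actions import FlowRunCreate

    flow_run = FlowRunCreate(
        flow_id=UUID("00000000-0000-0000-0000-000000000000"),  # placeholder
        name="ad-hoc-run",
        parameters={"user_id": 42},
        context={"triggered_by": "example"},
        tags=["ad-hoc"],
        idempotency_key="ad-hoc-run-42",
    )
    print(flow_run.json())
    ```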
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.DeploymentFlowRunCreate","title":"DeploymentFlowRunCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a flow run from a deployment.

    Source code in prefect/client/schemas/actions.py
    class DeploymentFlowRunCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a flow run from a deployment.\"\"\"\n\n    # FlowRunCreate states must be provided as StateCreate objects\n    state: Optional[StateCreate] = Field(\n        default=None, description=\"The state of the flow run to create\"\n    )\n\n    name: Optional[str] = Field(default=None, description=\"The name of the flow run.\")\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The parameters for the flow run.\"\n    )\n    context: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The context for the flow run.\"\n    )\n    infrastructure_document_id: Optional[UUID] = Field(None)\n    empirical_policy: objects.FlowRunPolicy = Field(\n        default_factory=objects.FlowRunPolicy\n    )\n    tags: List[str] = Field(default_factory=list)\n    idempotency_key: Optional[str] = Field(None)\n    parent_task_run_id: Optional[UUID] = Field(None)\n    work_queue_name: Optional[str] = Field(None)\n    job_variables: Optional[dict] = Field(None)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.SavedSearchCreate","title":"SavedSearchCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a saved search.

    Source code in prefect/client/schemas/actions.py
    class SavedSearchCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a saved search.\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the saved search.\")\n    filters: List[objects.SavedSearchFilter] = Field(\n        default_factory=list, description=\"The filter set for the saved search.\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.ConcurrencyLimitCreate","title":"ConcurrencyLimitCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a concurrency limit.

    Source code in prefect/client/schemas/actions.py
    class ConcurrencyLimitCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a concurrency limit.\"\"\"\n\n    tag: str = Field(\n        default=..., description=\"A tag the concurrency limit is applied to.\"\n    )\n    concurrency_limit: int = Field(default=..., description=\"The concurrency limit.\")\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockTypeCreate","title":"BlockTypeCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a block type.

    Source code in prefect/client/schemas/actions.py
    class BlockTypeCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a block type.\"\"\"\n\n    name: str = Field(default=..., description=\"A block type's name\")\n    slug: str = Field(default=..., description=\"A block type's slug\")\n    logo_url: Optional[objects.HttpUrl] = Field(\n        default=None, description=\"Web URL for the block type's logo\"\n    )\n    documentation_url: Optional[objects.HttpUrl] = Field(\n        default=None, description=\"Web URL for the block type's documentation\"\n    )\n    description: Optional[str] = Field(\n        default=None,\n        description=\"A short blurb about the corresponding block's intended use\",\n    )\n    code_example: Optional[str] = Field(\n        default=None,\n        description=\"A code snippet demonstrating use of the corresponding block\",\n    )\n\n    # validators\n    _validate_slug_format = validator(\"slug\", allow_reuse=True)(\n        validate_block_type_slug\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockTypeUpdate","title":"BlockTypeUpdate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to update a block type.

    Source code in prefect/client/schemas/actions.py
    class BlockTypeUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a block type.\"\"\"\n\n    logo_url: Optional[objects.HttpUrl] = Field(None)\n    documentation_url: Optional[objects.HttpUrl] = Field(None)\n    description: Optional[str] = Field(None)\n    code_example: Optional[str] = Field(None)\n\n    @classmethod\n    def updatable_fields(cls) -> set:\n        return get_class_fields_only(cls)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockSchemaCreate","title":"BlockSchemaCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a block schema.

    Source code in prefect/client/schemas/actions.py
    class BlockSchemaCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a block schema.\"\"\"\n\n    fields: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The block schema's field schema\"\n    )\n    block_type_id: Optional[UUID] = Field(None)\n    capabilities: List[str] = Field(\n        default_factory=list,\n        description=\"A list of Block capabilities\",\n    )\n    version: str = Field(\n        default=objects.DEFAULT_BLOCK_SCHEMA_VERSION,\n        description=\"Human readable identifier for the block schema\",\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockDocumentCreate","title":"BlockDocumentCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a block document.

    Source code in prefect/client/schemas/actions.py
    class BlockDocumentCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a block document.\"\"\"\n\n    name: Optional[str] = Field(\n        default=None, description=\"The name of the block document\"\n    )\n    data: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The block document's data\"\n    )\n    block_schema_id: UUID = Field(\n        default=..., description=\"The block schema ID for the block document\"\n    )\n    block_type_id: UUID = Field(\n        default=..., description=\"The block type ID for the block document\"\n    )\n    is_anonymous: bool = Field(\n        default=False,\n        description=(\n            \"Whether the block is anonymous (anonymous blocks are usually created by\"\n            \" Prefect automatically)\"\n        ),\n    )\n\n    _validate_name_format = validator(\"name\", allow_reuse=True)(\n        validate_block_document_name\n    )\n\n    @root_validator\n    def validate_name_is_present_if_not_anonymous(cls, values):\n        return validate_name_present_on_nonanonymous_blocks(values)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockDocumentUpdate","title":"BlockDocumentUpdate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to update a block document.

    Source code in prefect/client/schemas/actions.py
    class BlockDocumentUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a block document.\"\"\"\n\n    block_schema_id: Optional[UUID] = Field(\n        default=None, description=\"A block schema ID\"\n    )\n    data: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The block document's data\"\n    )\n    merge_existing_data: bool = Field(\n        default=True,\n        description=\"Whether to merge the existing data with the new data or replace it\",\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.BlockDocumentReferenceCreate","title":"BlockDocumentReferenceCreate","text":"

    Bases: ActionBaseModel

    Data used to create block document reference.

    Source code in prefect/client/schemas/actions.py
    class BlockDocumentReferenceCreate(ActionBaseModel):\n    \"\"\"Data used to create block document reference.\"\"\"\n\n    id: UUID = Field(default_factory=uuid4)\n    parent_block_document_id: UUID = Field(\n        default=..., description=\"ID of block document the reference is nested within\"\n    )\n    reference_block_document_id: UUID = Field(\n        default=..., description=\"ID of the nested block document\"\n    )\n    name: str = Field(\n        default=..., description=\"The name that the reference is nested under\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.LogCreate","title":"LogCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a log.

    Source code in prefect/client/schemas/actions.py
    class LogCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a log.\"\"\"\n\n    name: str = Field(default=..., description=\"The logger name.\")\n    level: int = Field(default=..., description=\"The log level.\")\n    message: str = Field(default=..., description=\"The log message.\")\n    timestamp: DateTimeTZ = Field(default=..., description=\"The log timestamp.\")\n    flow_run_id: Optional[UUID] = Field(None)\n    task_run_id: Optional[UUID] = Field(None)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.WorkPoolCreate","title":"WorkPoolCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a work pool.

    Source code in prefect/client/schemas/actions.py
    class WorkPoolCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a work pool.\"\"\"\n\n    name: str = Field(\n        description=\"The name of the work pool.\",\n    )\n    description: Optional[str] = Field(None)\n    type: str = Field(\n        description=\"The work pool type.\", default=\"prefect-agent\"\n    )  # TODO: change default\n    base_job_template: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"The base job template for the work pool.\",\n    )\n    is_paused: bool = Field(\n        default=False,\n        description=\"Whether the work pool is paused.\",\n    )\n    concurrency_limit: Optional[NonNegativeInteger] = Field(\n        default=None, description=\"A concurrency limit for the work pool.\"\n    )\n
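    An illustrative payload; the "process" work pool type is an assumption (the model itself defaults to "prefect-agent").

    ```python
    from prefect.client.schemas.actions import WorkPoolCreate

    pool = WorkPoolCreate(
        name="local-pool",
        type="process",       # assumed work pool type
        description="Runs flows as local subprocesses",
        concurrency_limit=5,  # at most five concurrent runs across the pool
    )
    ```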
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.WorkPoolUpdate","title":"WorkPoolUpdate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to update a work pool.

    Source code in prefect/client/schemas/actions.py
    class WorkPoolUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a work pool.\"\"\"\n\n    description: Optional[str] = Field(None)\n    is_paused: Optional[bool] = Field(None)\n    base_job_template: Optional[Dict[str, Any]] = Field(None)\n    concurrency_limit: Optional[int] = Field(None)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.WorkQueueCreate","title":"WorkQueueCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a work queue.

    Source code in prefect/client/schemas/actions.py
    class WorkQueueCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a work queue.\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the work queue.\")\n    description: Optional[str] = Field(None)\n    is_paused: bool = Field(\n        default=False,\n        description=\"Whether the work queue is paused.\",\n    )\n    concurrency_limit: Optional[int] = Field(\n        default=None,\n        description=\"A concurrency limit for the work queue.\",\n    )\n    priority: Optional[int] = Field(\n        default=None,\n        description=(\n            \"The queue's priority. Lower values are higher priority (1 is the highest).\"\n        ),\n    )\n\n    # DEPRECATED\n\n    filter: Optional[objects.QueueFilter] = Field(\n        None,\n        description=\"DEPRECATED: Filter criteria for the work queue.\",\n        deprecated=True,\n    )\n
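    An illustrative payload; priority 1 is the highest priority, as noted on the field above.

    ```python
    from prefect.client.schemas.actions import WorkQueueCreate

    queue = WorkQueueCreate(
        name="high-priority",
        description="Urgent runs only",
        priority=1,           # lower values are higher priority
        concurrency_limit=2,  # at most two runs from this queue at once
    )
    ```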
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.WorkQueueUpdate","title":"WorkQueueUpdate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to update a work queue.

    Source code in prefect/client/schemas/actions.py
    class WorkQueueUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a work queue.\"\"\"\n\n    name: Optional[str] = Field(None)\n    description: Optional[str] = Field(None)\n    is_paused: bool = Field(\n        default=False, description=\"Whether or not the work queue is paused.\"\n    )\n    concurrency_limit: Optional[int] = Field(None)\n    priority: Optional[int] = Field(None)\n    last_polled: Optional[DateTimeTZ] = Field(None)\n\n    # DEPRECATED\n\n    filter: Optional[objects.QueueFilter] = Field(\n        None,\n        description=\"DEPRECATED: Filter criteria for the work queue.\",\n        deprecated=True,\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowRunNotificationPolicyCreate","title":"FlowRunNotificationPolicyCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a flow run notification policy.

    Source code in prefect/client/schemas/actions.py
    class FlowRunNotificationPolicyCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a flow run notification policy.\"\"\"\n\n    is_active: bool = Field(\n        default=True, description=\"Whether the policy is currently active\"\n    )\n    state_names: List[str] = Field(\n        default=..., description=\"The flow run states that trigger notifications\"\n    )\n    tags: List[str] = Field(\n        default=...,\n        description=\"The flow run tags that trigger notifications (set [] to disable)\",\n    )\n    block_document_id: UUID = Field(\n        default=..., description=\"The block document ID used for sending notifications\"\n    )\n    message_template: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A templatable notification message. Use {braces} to add variables.\"\n            \" Valid variables include:\"\n            f\" {listrepr(sorted(objects.FLOW_RUN_NOTIFICATION_TEMPLATE_KWARGS), sep=', ')}\"\n        ),\n        examples=[\n            \"Flow run {flow_run_name} with id {flow_run_id} entered state\"\n            \" {flow_run_state_name}.\"\n        ],\n    )\n\n    @validator(\"message_template\")\n    def validate_message_template_variables(cls, v):\n        return validate_message_template_variables(v)\n
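    An illustrative policy payload; the block document id is a placeholder for an existing notification block, and the template reuses the example variables shown on the field above.

    ```python
    from uuid import UUID

    from prefect.client.schemas.actions import FlowRunNotificationPolicyCreate

    policy = FlowRunNotificationPolicyCreate(
        state_names=["Failed", "Crashed"],  # states that trigger a notification
        tags=[],                            # [] disables tag-based filtering
        block_document_id=UUID("00000000-0000-0000-0000-000000000000"),  # placeholder
        message_template=(
            "Flow run {flow_run_name} with id {flow_run_id} entered state"
            " {flow_run_state_name}."
        ),
    )
    ```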
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.FlowRunNotificationPolicyUpdate","title":"FlowRunNotificationPolicyUpdate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to update a flow run notification policy.

    Source code in prefect/client/schemas/actions.py
    class FlowRunNotificationPolicyUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a flow run notification policy.\"\"\"\n\n    is_active: Optional[bool] = Field(None)\n    state_names: Optional[List[str]] = Field(None)\n    tags: Optional[List[str]] = Field(None)\n    block_document_id: Optional[UUID] = Field(None)\n    message_template: Optional[str] = Field(None)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.ArtifactCreate","title":"ArtifactCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create an artifact.

    Source code in prefect/client/schemas/actions.py
    class ArtifactCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create an artifact.\"\"\"\n\n    key: Optional[str] = Field(None)\n    type: Optional[str] = Field(None)\n    description: Optional[str] = Field(None)\n    data: Optional[Union[Dict[str, Any], Any]] = Field(None)\n    metadata_: Optional[Dict[str, str]] = Field(None)\n    flow_run_id: Optional[UUID] = Field(None)\n    task_run_id: Optional[UUID] = Field(None)\n\n    _validate_artifact_format = validator(\"key\", allow_reuse=True)(\n        validate_artifact_key\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.ArtifactUpdate","title":"ArtifactUpdate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to update an artifact.

    Source code in prefect/client/schemas/actions.py
    class ArtifactUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update an artifact.\"\"\"\n\n    data: Optional[Union[Dict[str, Any], Any]] = Field(None)\n    description: Optional[str] = Field(None)\n    metadata_: Optional[Dict[str, str]] = Field(None)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.VariableCreate","title":"VariableCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a Variable.

    Source code in prefect/client/schemas/actions.py
    class VariableCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a Variable.\"\"\"\n\n    name: str = Field(\n        default=...,\n        description=\"The name of the variable\",\n        examples=[\"my_variable\"],\n        max_length=objects.MAX_VARIABLE_NAME_LENGTH,\n    )\n    value: str = Field(\n        default=...,\n        description=\"The value of the variable\",\n        examples=[\"my-value\"],\n        max_length=objects.MAX_VARIABLE_VALUE_LENGTH,\n    )\n    tags: Optional[List[str]] = Field(default=None)\n\n    # validators\n    _validate_name_format = validator(\"name\", allow_reuse=True)(validate_variable_name)\n
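    A minimal example; note that variable values are strings.

    ```python
    from prefect.client.schemas.actions import VariableCreate

    variable = VariableCreate(
        name="my_variable",  # must satisfy the variable-name validator
        value="my-value",
        tags=["example"],
    )
    ```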
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.VariableUpdate","title":"VariableUpdate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to update a Variable.

    Source code in prefect/client/schemas/actions.py
    class VariableUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a Variable.\"\"\"\n\n    name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the variable\",\n        examples=[\"my_variable\"],\n        max_length=objects.MAX_VARIABLE_NAME_LENGTH,\n    )\n    value: Optional[str] = Field(\n        default=None,\n        description=\"The value of the variable\",\n        examples=[\"my-value\"],\n        max_length=objects.MAX_VARIABLE_NAME_LENGTH,\n    )\n    tags: Optional[List[str]] = Field(default=None)\n\n    # validators\n    _validate_name_format = validator(\"name\", allow_reuse=True)(validate_variable_name)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.GlobalConcurrencyLimitCreate","title":"GlobalConcurrencyLimitCreate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to create a global concurrency limit.

    Source code in prefect/client/schemas/actions.py
    class GlobalConcurrencyLimitCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a global concurrency limit.\"\"\"\n\n    name: str = Field(description=\"The name of the global concurrency limit.\")\n    limit: int = Field(\n        description=(\n            \"The maximum number of slots that can be occupied on this concurrency\"\n            \" limit.\"\n        )\n    )\n    active: Optional[bool] = Field(\n        default=True,\n        description=\"Whether or not the concurrency limit is in an active state.\",\n    )\n    active_slots: Optional[int] = Field(\n        default=0,\n        description=\"Number of tasks currently using a concurrency slot.\",\n    )\n    slot_decay_per_second: Optional[float] = Field(\n        default=0.0,\n        description=(\n            \"Controls the rate at which slots are released when the concurrency limit\"\n            \" is used as a rate limit.\"\n        ),\n    )\n
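    A sketch of a limit used as a rate limit: with slot_decay_per_second set, occupied slots free themselves over time.

    ```python
    from prefect.client.schemas.actions import GlobalConcurrencyLimitCreate

    rate_limit = GlobalConcurrencyLimitCreate(
        name="api-calls",
        limit=10,                   # at most 10 slots occupied at once
        slot_decay_per_second=1.0,  # roughly one slot released per second
    )
    ```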
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.actions.GlobalConcurrencyLimitUpdate","title":"GlobalConcurrencyLimitUpdate","text":"

    Bases: ActionBaseModel

    Data used by the Prefect REST API to update a global concurrency limit.

    Source code in prefect/client/schemas/actions.py
    class GlobalConcurrencyLimitUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a global concurrency limit.\"\"\"\n\n    name: Optional[str] = Field(None)\n    limit: Optional[NonNegativeInteger] = Field(None)\n    active: Optional[NonNegativeInteger] = Field(None)\n    active_slots: Optional[NonNegativeInteger] = Field(None)\n    slot_decay_per_second: Optional[NonNegativeFloat] = Field(None)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_2","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters","title":"prefect.client.schemas.filters","text":"

    Schemas that define Prefect REST API filtering operations.

    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.Operator","title":"Operator","text":"

    Bases: AutoEnum

    Operators for combining filter criteria.

    Source code in prefect/client/schemas/filters.py
    class Operator(AutoEnum):\n    \"\"\"Operators for combining filter criteria.\"\"\"\n\n    and_ = AutoEnum.auto()\n    or_ = AutoEnum.auto()\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.OperatorMixin","title":"OperatorMixin","text":"

    Base model for Prefect filters that combines criteria with a user-provided operator

    Source code in prefect/client/schemas/filters.py
    class OperatorMixin:\n    \"\"\"Base model for Prefect filters that combines criteria with a user-provided operator\"\"\"\n\n    operator: Operator = Field(\n        default=Operator.and_,\n        description=\"Operator for combining filter criteria. Defaults to 'and_'.\",\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowFilterId","title":"FlowFilterId","text":"

    Bases: PrefectBaseModel

    Filter by Flow.id.

    Source code in prefect/client/schemas/filters.py
    class FlowFilterId(PrefectBaseModel):\n    \"\"\"Filter by `Flow.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow ids to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowFilterName","title":"FlowFilterName","text":"

    Bases: PrefectBaseModel

    Filter by Flow.name.

    Source code in prefect/client/schemas/filters.py
    class FlowFilterName(PrefectBaseModel):\n    \"\"\"Filter by `Flow.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of flow names to include\",\n        examples=[[\"my-flow-1\", \"my-flow-2\"]],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        examples=[\"marvin\"],\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowFilterTags","title":"FlowFilterTags","text":"

    Bases: PrefectBaseModel, OperatorMixin

    Filter by Flow.tags.

    Source code in prefect/client/schemas/filters.py
    class FlowFilterTags(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter by `Flow.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"tag-1\", \"tag-2\"]],\n        description=(\n            \"A list of tags. Flows will be returned only if their tags are a superset\"\n            \" of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include flows without tags\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowFilter","title":"FlowFilter","text":"

    Bases: PrefectBaseModel, OperatorMixin

    Filter for flows. Only flows matching all criteria will be returned.

    Source code in prefect/client/schemas/filters.py
    class FlowFilter(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter for flows. Only flows matching all criteria will be returned.\"\"\"\n\n    id: Optional[FlowFilterId] = Field(\n        default=None, description=\"Filter criteria for `Flow.id`\"\n    )\n    name: Optional[FlowFilterName] = Field(\n        default=None, description=\"Filter criteria for `Flow.name`\"\n    )\n    tags: Optional[FlowFilterTags] = Field(\n        default=None, description=\"Filter criteria for `Flow.tags`\"\n    )\n
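    An illustrative filter that matches flows whose name contains "etl" and whose tags are a superset of ["prod"]:

    ```python
    from prefect.client.schemas.filters import (
        FlowFilter,
        FlowFilterName,
        FlowFilterTags,
    )

    flow_filter = FlowFilter(
        name=FlowFilterName(like_="etl"),
        tags=FlowFilterTags(all_=["prod"]),
    )
    ```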
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterId","title":"FlowRunFilterId","text":"

    Bases: PrefectBaseModel

    Filter by FlowRun.id.

    Source code in prefect/client/schemas/filters.py
    class FlowRunFilterId(PrefectBaseModel):\n    \"\"\"Filter by FlowRun.id.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run ids to include\"\n    )\n    not_any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run ids to exclude\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterName","title":"FlowRunFilterName","text":"

    Bases: PrefectBaseModel

    Filter by FlowRun.name.

    Source code in prefect/client/schemas/filters.py
    class FlowRunFilterName(PrefectBaseModel):\n    \"\"\"Filter by `FlowRun.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of flow run names to include\",\n        examples=[[\"my-flow-run-1\", \"my-flow-run-2\"]],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        examples=[\"marvin\"],\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterTags","title":"FlowRunFilterTags","text":"

    Bases: PrefectBaseModel, OperatorMixin

    Filter by FlowRun.tags.

    Source code in prefect/client/schemas/filters.py
    class FlowRunFilterTags(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter by `FlowRun.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"tag-1\", \"tag-2\"]],\n        description=(\n            \"A list of tags. Flow runs will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include flow runs without tags\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterDeploymentId","title":"FlowRunFilterDeploymentId","text":"

    Bases: PrefectBaseModel, OperatorMixin

    Filter by FlowRun.deployment_id.

    Source code in prefect/client/schemas/filters.py
    class FlowRunFilterDeploymentId(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter by `FlowRun.deployment_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run deployment ids to include\"\n    )\n    is_null_: Optional[bool] = Field(\n        default=None,\n        description=\"If true, only include flow runs without deployment ids\",\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterWorkQueueName","title":"FlowRunFilterWorkQueueName","text":"

    Bases: PrefectBaseModel, OperatorMixin

    Filter by FlowRun.work_queue_name.

    Source code in prefect/client/schemas/filters.py
    class FlowRunFilterWorkQueueName(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter by `FlowRun.work_queue_name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of work queue names to include\",\n        examples=[[\"work_queue_1\", \"work_queue_2\"]],\n    )\n    is_null_: Optional[bool] = Field(\n        default=None,\n        description=\"If true, only include flow runs without work queue names\",\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterStateType","title":"FlowRunFilterStateType","text":"

    Bases: PrefectBaseModel

    Filter by FlowRun.state_type.

    Source code in prefect/client/schemas/filters.py
    class FlowRunFilterStateType(PrefectBaseModel):\n    \"\"\"Filter by `FlowRun.state_type`.\"\"\"\n\n    any_: Optional[List[StateType]] = Field(\n        default=None, description=\"A list of flow run state types to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterFlowVersion","title":"FlowRunFilterFlowVersion","text":"

    Bases: PrefectBaseModel

    Filter by FlowRun.flow_version.

    Source code in prefect/client/schemas/filters.py
    class FlowRunFilterFlowVersion(PrefectBaseModel):\n    \"\"\"Filter by `FlowRun.flow_version`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of flow run flow_versions to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterStartTime","title":"FlowRunFilterStartTime","text":"

    Bases: PrefectBaseModel

    Filter by FlowRun.start_time.

    Source code in prefect/client/schemas/filters.py
    class FlowRunFilterStartTime(PrefectBaseModel):\n    \"\"\"Filter by `FlowRun.start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs starting at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs starting at or after this time\",\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only return flow runs without a start time\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterExpectedStartTime","title":"FlowRunFilterExpectedStartTime","text":"

    Bases: PrefectBaseModel

    Filter by FlowRun.expected_start_time.

    Source code in prefect/client/schemas/filters.py
    class FlowRunFilterExpectedStartTime(PrefectBaseModel):\n    \"\"\"Filter by `FlowRun.expected_start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs scheduled to start at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs scheduled to start at or after this time\",\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterNextScheduledStartTime","title":"FlowRunFilterNextScheduledStartTime","text":"

    Bases: PrefectBaseModel

    Filter by FlowRun.next_scheduled_start_time.

    Source code in prefect/client/schemas/filters.py
    class FlowRunFilterNextScheduledStartTime(PrefectBaseModel):\n    \"\"\"Filter by `FlowRun.next_scheduled_start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include flow runs with a next_scheduled_start_time or before this\"\n            \" time\"\n        ),\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include flow runs with a next_scheduled_start_time at or after this\"\n            \" time\"\n        ),\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterParentFlowRunId","title":"FlowRunFilterParentFlowRunId","text":"

    Bases: PrefectBaseModel, OperatorMixin

    Filter for subflows of the given flow runs

    Source code in prefect/client/schemas/filters.py
    class FlowRunFilterParentFlowRunId(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter for subflows of the given flow runs\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run parents to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterParentTaskRunId","title":"FlowRunFilterParentTaskRunId","text":"

    Bases: PrefectBaseModel, OperatorMixin

    Filter by FlowRun.parent_task_run_id.

    Source code in prefect/client/schemas/filters.py
    class FlowRunFilterParentTaskRunId(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter by `FlowRun.parent_task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run parent_task_run_ids to include\"\n    )\n    is_null_: Optional[bool] = Field(\n        default=None,\n        description=\"If true, only include flow runs without parent_task_run_id\",\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilterIdempotencyKey","title":"FlowRunFilterIdempotencyKey","text":"

    Bases: PrefectBaseModel

    Filter by FlowRun.idempotency_key.

    Source code in prefect/client/schemas/filters.py
    class FlowRunFilterIdempotencyKey(PrefectBaseModel):\n    \"\"\"Filter by FlowRun.idempotency_key.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of flow run idempotency keys to include\"\n    )\n    not_any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of flow run idempotency keys to exclude\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunFilter","title":"FlowRunFilter","text":"

    Bases: PrefectBaseModel, OperatorMixin

    Filter flow runs. Only flow runs matching all criteria will be returned

    Source code in prefect/client/schemas/filters.py
    class FlowRunFilter(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter flow runs. Only flow runs matching all criteria will be returned\"\"\"\n\n    id: Optional[FlowRunFilterId] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.id`\"\n    )\n    name: Optional[FlowRunFilterName] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.name`\"\n    )\n    tags: Optional[FlowRunFilterTags] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.tags`\"\n    )\n    deployment_id: Optional[FlowRunFilterDeploymentId] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.deployment_id`\"\n    )\n    work_queue_name: Optional[FlowRunFilterWorkQueueName] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.work_queue_name\"\n    )\n    state: Optional[FlowRunFilterState] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.state`\"\n    )\n    flow_version: Optional[FlowRunFilterFlowVersion] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.flow_version`\"\n    )\n    start_time: Optional[FlowRunFilterStartTime] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.start_time`\"\n    )\n    expected_start_time: Optional[FlowRunFilterExpectedStartTime] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.expected_start_time`\"\n    )\n    next_scheduled_start_time: Optional[FlowRunFilterNextScheduledStartTime] = Field(\n        default=None,\n        description=\"Filter criteria for `FlowRun.next_scheduled_start_time`\",\n    )\n    parent_flow_run_id: Optional[FlowRunFilterParentFlowRunId] = Field(\n        default=None, description=\"Filter criteria for subflows of the given flow runs\"\n    )\n    parent_task_run_id: Optional[FlowRunFilterParentTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.parent_task_run_id`\"\n    )\n    idempotency_key: Optional[FlowRunFilterIdempotencyKey] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.idempotency_key`\"\n    )\n
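    A hedged sketch combining two of the criteria above: flow runs expected to start within the last day that have not actually started yet. pendulum is assumed to be available for timezone-aware datetimes.

    ```python
    import pendulum  # assumed available; any timezone-aware datetime should also work

    from prefect.client.schemas.filters import (
        FlowRunFilter,
        FlowRunFilterExpectedStartTime,
        FlowRunFilterStartTime,
    )

    flow_run_filter = FlowRunFilter(
        expected_start_time=FlowRunFilterExpectedStartTime(
            after_=pendulum.now("UTC").subtract(days=1),
        ),
        start_time=FlowRunFilterStartTime(is_null_=True),  # no start time recorded yet
    )
    ```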
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterFlowRunId","title":"TaskRunFilterFlowRunId","text":"

    Bases: PrefectBaseModel

    Filter by TaskRun.flow_run_id.

    Source code in prefect/client/schemas/filters.py
    class TaskRunFilterFlowRunId(PrefectBaseModel):\n    \"\"\"Filter by `TaskRun.flow_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run ids to include\"\n    )\n\n    is_null_: bool = Field(\n        default=False,\n        description=\"If true, only include task runs without a flow run id\",\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterId","title":"TaskRunFilterId","text":"

    Bases: PrefectBaseModel

    Filter by TaskRun.id.

    Source code in prefect/client/schemas/filters.py
    class TaskRunFilterId(PrefectBaseModel):\n    \"\"\"Filter by `TaskRun.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run ids to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterName","title":"TaskRunFilterName","text":"

    Bases: PrefectBaseModel

    Filter by TaskRun.name.

    Source code in prefect/client/schemas/filters.py
    class TaskRunFilterName(PrefectBaseModel):\n    \"\"\"Filter by `TaskRun.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of task run names to include\",\n        examples=[[\"my-task-run-1\", \"my-task-run-2\"]],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        examples=[\"marvin\"],\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterTags","title":"TaskRunFilterTags","text":"

    Bases: PrefectBaseModel, OperatorMixin

    Filter by TaskRun.tags.

    Source code in prefect/client/schemas/filters.py
    class TaskRunFilterTags(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter by `TaskRun.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"tag-1\", \"tag-2\"]],\n        description=(\n            \"A list of tags. Task runs will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include task runs without tags\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterStateType","title":"TaskRunFilterStateType","text":"

    Bases: PrefectBaseModel

    Filter by TaskRun.state_type.

    Source code in prefect/client/schemas/filters.py
    class TaskRunFilterStateType(PrefectBaseModel):\n    \"\"\"Filter by `TaskRun.state_type`.\"\"\"\n\n    any_: Optional[List[StateType]] = Field(\n        default=None, description=\"A list of task run state types to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterSubFlowRuns","title":"TaskRunFilterSubFlowRuns","text":"

    Bases: PrefectBaseModel

    Filter by TaskRun.subflow_run.

    Source code in prefect/client/schemas/filters.py
    class TaskRunFilterSubFlowRuns(PrefectBaseModel):\n    \"\"\"Filter by `TaskRun.subflow_run`.\"\"\"\n\n    exists_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"If true, only include task runs that are subflow run parents; if false,\"\n            \" exclude parent task runs\"\n        ),\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilterStartTime","title":"TaskRunFilterStartTime","text":"

    Bases: PrefectBaseModel

    Filter by TaskRun.start_time.

    Source code in prefect/client/schemas/filters.py
    class TaskRunFilterStartTime(PrefectBaseModel):\n    \"\"\"Filter by `TaskRun.start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include task runs starting at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include task runs starting at or after this time\",\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only return task runs without a start time\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.TaskRunFilter","title":"TaskRunFilter","text":"

    Bases: PrefectBaseModel, OperatorMixin

    Filter task runs. Only task runs matching all criteria will be returned

    Source code in prefect/client/schemas/filters.py
    class TaskRunFilter(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter task runs. Only task runs matching all criteria will be returned\"\"\"\n\n    id: Optional[TaskRunFilterId] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.id`\"\n    )\n    name: Optional[TaskRunFilterName] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.name`\"\n    )\n    tags: Optional[TaskRunFilterTags] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.tags`\"\n    )\n    state: Optional[TaskRunFilterState] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.state`\"\n    )\n    start_time: Optional[TaskRunFilterStartTime] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.start_time`\"\n    )\n    subflow_runs: Optional[TaskRunFilterSubFlowRuns] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.subflow_run`\"\n    )\n    flow_run_id: Optional[TaskRunFilterFlowRunId] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.flow_run_id`\"\n    )\n
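A minimal sketch of querying task runs with this filter, assuming the Prefect 2.x client's `read_task_runs` keyword argument; the name pattern and one-day window are illustrative:

```python
import pendulum

from prefect.client.orchestration import get_client
from prefect.client.schemas.filters import (
    TaskRunFilter,
    TaskRunFilterName,
    TaskRunFilterStartTime,
)

async def recent_marvin_task_runs():
    task_run_filter = TaskRunFilter(
        # Case-insensitive partial match on the task run name.
        name=TaskRunFilterName(like_="marvin"),
        # Only task runs that started within the last day.
        start_time=TaskRunFilterStartTime(after_=pendulum.now("UTC").subtract(days=1)),
    )
    async with get_client() as client:
        return await client.read_task_runs(task_run_filter=task_run_filter)
```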
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilterId","title":"DeploymentFilterId","text":"

    Bases: PrefectBaseModel

    Filter by Deployment.id.

    Source code in prefect/client/schemas/filters.py
    class DeploymentFilterId(PrefectBaseModel):\n    \"\"\"Filter by `Deployment.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of deployment ids to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilterName","title":"DeploymentFilterName","text":"

    Bases: PrefectBaseModel

    Filter by Deployment.name.

    Source code in prefect/client/schemas/filters.py
    class DeploymentFilterName(PrefectBaseModel):\n    \"\"\"Filter by `Deployment.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of deployment names to include\",\n        examples=[[\"my-deployment-1\", \"my-deployment-2\"]],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        examples=[\"marvin\"],\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilterWorkQueueName","title":"DeploymentFilterWorkQueueName","text":"

    Bases: PrefectBaseModel

    Filter by Deployment.work_queue_name.

    Source code in prefect/client/schemas/filters.py
    class DeploymentFilterWorkQueueName(PrefectBaseModel):\n    \"\"\"Filter by `Deployment.work_queue_name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of work queue names to include\",\n        examples=[[\"work_queue_1\", \"work_queue_2\"]],\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilterIsScheduleActive","title":"DeploymentFilterIsScheduleActive","text":"

    Bases: PrefectBaseModel

    Filter by Deployment.is_schedule_active.

    Source code in prefect/client/schemas/filters.py
    class DeploymentFilterIsScheduleActive(PrefectBaseModel):\n    \"\"\"Filter by `Deployment.is_schedule_active`.\"\"\"\n\n    eq_: Optional[bool] = Field(\n        default=None,\n        description=\"Only returns where deployment schedule is/is not active\",\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilterTags","title":"DeploymentFilterTags","text":"

    Bases: PrefectBaseModel, OperatorMixin

    Filter by Deployment.tags.

    Source code in prefect/client/schemas/filters.py
    class DeploymentFilterTags(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter by `Deployment.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"tag-1\", \"tag-2\"]],\n        description=(\n            \"A list of tags. Deployments will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include deployments without tags\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.DeploymentFilter","title":"DeploymentFilter","text":"

    Bases: PrefectBaseModel, OperatorMixin

    Filter for deployments. Only deployments matching all criteria will be returned.

    Source code in prefect/client/schemas/filters.py
    class DeploymentFilter(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter for deployments. Only deployments matching all criteria will be returned.\"\"\"\n\n    id: Optional[DeploymentFilterId] = Field(\n        default=None, description=\"Filter criteria for `Deployment.id`\"\n    )\n    name: Optional[DeploymentFilterName] = Field(\n        default=None, description=\"Filter criteria for `Deployment.name`\"\n    )\n    is_schedule_active: Optional[DeploymentFilterIsScheduleActive] = Field(\n        default=None, description=\"Filter criteria for `Deployment.is_schedule_active`\"\n    )\n    tags: Optional[DeploymentFilterTags] = Field(\n        default=None, description=\"Filter criteria for `Deployment.tags`\"\n    )\n    work_queue_name: Optional[DeploymentFilterWorkQueueName] = Field(\n        default=None, description=\"Filter criteria for `Deployment.work_queue_name`\"\n    )\n
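As a hedged example of combining the deployment sub-filters defined above, assuming the Prefect 2.x client's `read_deployments` keyword argument; the queue name is a placeholder:

```python
from prefect.client.orchestration import get_client
from prefect.client.schemas.filters import (
    DeploymentFilter,
    DeploymentFilterIsScheduleActive,
    DeploymentFilterWorkQueueName,
)

async def active_deployments_on_queue(queue_name: str):
    deployment_filter = DeploymentFilter(
        # Only deployments with an active schedule...
        is_schedule_active=DeploymentFilterIsScheduleActive(eq_=True),
        # ...that target the given work queue.
        work_queue_name=DeploymentFilterWorkQueueName(any_=[queue_name]),
    )
    async with get_client() as client:
        return await client.read_deployments(deployment_filter=deployment_filter)
```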
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilterName","title":"LogFilterName","text":"

    Bases: PrefectBaseModel

    Filter by Log.name.

    Source code in prefect/client/schemas/filters.py
    class LogFilterName(PrefectBaseModel):\n    \"\"\"Filter by `Log.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of log names to include\",\n        examples=[[\"prefect.logger.flow_runs\", \"prefect.logger.task_runs\"]],\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilterLevel","title":"LogFilterLevel","text":"

    Bases: PrefectBaseModel

    Filter by Log.level.

    Source code in prefect/client/schemas/filters.py
    class LogFilterLevel(PrefectBaseModel):\n    \"\"\"Filter by `Log.level`.\"\"\"\n\n    ge_: Optional[int] = Field(\n        default=None,\n        description=\"Include logs with a level greater than or equal to this level\",\n        examples=[20],\n    )\n\n    le_: Optional[int] = Field(\n        default=None,\n        description=\"Include logs with a level less than or equal to this level\",\n        examples=[50],\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilterTimestamp","title":"LogFilterTimestamp","text":"

    Bases: PrefectBaseModel

    Filter by Log.timestamp.

    Source code in prefect/client/schemas/filters.py
    class LogFilterTimestamp(PrefectBaseModel):\n    \"\"\"Filter by `Log.timestamp`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include logs with a timestamp at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include logs with a timestamp at or after this time\",\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilterFlowRunId","title":"LogFilterFlowRunId","text":"

    Bases: PrefectBaseModel

    Filter by Log.flow_run_id.

    Source code in prefect/client/schemas/filters.py
    class LogFilterFlowRunId(PrefectBaseModel):\n    \"\"\"Filter by `Log.flow_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run IDs to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilterTaskRunId","title":"LogFilterTaskRunId","text":"

    Bases: PrefectBaseModel

    Filter by Log.task_run_id.

    Source code in prefect/client/schemas/filters.py
    class LogFilterTaskRunId(PrefectBaseModel):\n    \"\"\"Filter by `Log.task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run IDs to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.LogFilter","title":"LogFilter","text":"

    Bases: PrefectBaseModel, OperatorMixin

    Filter logs. Only logs matching all criteria will be returned

    Source code in prefect/client/schemas/filters.py
    class LogFilter(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter logs. Only logs matching all criteria will be returned\"\"\"\n\n    level: Optional[LogFilterLevel] = Field(\n        default=None, description=\"Filter criteria for `Log.level`\"\n    )\n    timestamp: Optional[LogFilterTimestamp] = Field(\n        default=None, description=\"Filter criteria for `Log.timestamp`\"\n    )\n    flow_run_id: Optional[LogFilterFlowRunId] = Field(\n        default=None, description=\"Filter criteria for `Log.flow_run_id`\"\n    )\n    task_run_id: Optional[LogFilterTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `Log.task_run_id`\"\n    )\n
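A short sketch of retrieving logs with this filter; the `read_logs` keyword argument is assumed from the Prefect 2.x client, and the level uses the standard-library numeric levels that `LogFilterLevel` expects:

```python
import logging

from prefect.client.orchestration import get_client
from prefect.client.schemas.filters import (
    LogFilter,
    LogFilterFlowRunId,
    LogFilterLevel,
)

async def warning_logs_for_flow_run(flow_run_id):
    log_filter = LogFilter(
        # WARNING (30) and above...
        level=LogFilterLevel(ge_=logging.WARNING),
        # ...for a single flow run.
        flow_run_id=LogFilterFlowRunId(any_=[flow_run_id]),
    )
    async with get_client() as client:
        return await client.read_logs(log_filter=log_filter)
```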
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FilterSet","title":"FilterSet","text":"

    Bases: PrefectBaseModel

    A collection of filters for common objects

    Source code in prefect/client/schemas/filters.py
    class FilterSet(PrefectBaseModel):\n    \"\"\"A collection of filters for common objects\"\"\"\n\n    flows: FlowFilter = Field(\n        default_factory=FlowFilter, description=\"Filters that apply to flows\"\n    )\n    flow_runs: FlowRunFilter = Field(\n        default_factory=FlowRunFilter, description=\"Filters that apply to flow runs\"\n    )\n    task_runs: TaskRunFilter = Field(\n        default_factory=TaskRunFilter, description=\"Filters that apply to task runs\"\n    )\n    deployments: DeploymentFilter = Field(\n        default_factory=DeploymentFilter,\n        description=\"Filters that apply to deployments\",\n    )\n
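Because every field has a `default_factory`, a `FilterSet` can be built with only the filters you care about, and the rest fall back to empty filters. A small illustrative sketch:

```python
from prefect.client.schemas.filters import (
    FilterSet,
    FlowRunFilter,
    FlowRunFilterTags,
    TaskRunFilter,
    TaskRunFilterTags,
)

# Bundle related filters so they can be stored or passed around together;
# fields left untouched default to empty (match-everything) filters.
filter_set = FilterSet(
    flow_runs=FlowRunFilter(tags=FlowRunFilterTags(all_=["prod"])),
    task_runs=TaskRunFilter(tags=TaskRunFilterTags(all_=["prod"])),
)
```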
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockTypeFilterName","title":"BlockTypeFilterName","text":"

    Bases: PrefectBaseModel

    Filter by BlockType.name

    Source code in prefect/client/schemas/filters.py
    class BlockTypeFilterName(PrefectBaseModel):\n    \"\"\"Filter by `BlockType.name`\"\"\"\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        examples=[\"marvin\"],\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockTypeFilterSlug","title":"BlockTypeFilterSlug","text":"

    Bases: PrefectBaseModel

    Filter by BlockType.slug

    Source code in prefect/client/schemas/filters.py
    class BlockTypeFilterSlug(PrefectBaseModel):\n    \"\"\"Filter by `BlockType.slug`\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of slugs to match\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockTypeFilter","title":"BlockTypeFilter","text":"

    Bases: PrefectBaseModel

    Filter BlockTypes

    Source code in prefect/client/schemas/filters.py
    class BlockTypeFilter(PrefectBaseModel):\n    \"\"\"Filter BlockTypes\"\"\"\n\n    name: Optional[BlockTypeFilterName] = Field(\n        default=None, description=\"Filter criteria for `BlockType.name`\"\n    )\n\n    slug: Optional[BlockTypeFilterSlug] = Field(\n        default=None, description=\"Filter criteria for `BlockType.slug`\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockSchemaFilterBlockTypeId","title":"BlockSchemaFilterBlockTypeId","text":"

    Bases: PrefectBaseModel

    Filter by BlockSchema.block_type_id.

    Source code in prefect/client/schemas/filters.py
    class BlockSchemaFilterBlockTypeId(PrefectBaseModel):\n    \"\"\"Filter by `BlockSchema.block_type_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of block type ids to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockSchemaFilterId","title":"BlockSchemaFilterId","text":"

    Bases: PrefectBaseModel

    Filter by BlockSchema.id

    Source code in prefect/client/schemas/filters.py
    class BlockSchemaFilterId(PrefectBaseModel):\n    \"\"\"Filter by BlockSchema.id\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of IDs to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockSchemaFilterCapabilities","title":"BlockSchemaFilterCapabilities","text":"

    Bases: PrefectBaseModel

    Filter by BlockSchema.capabilities

    Source code in prefect/client/schemas/filters.py
    class BlockSchemaFilterCapabilities(PrefectBaseModel):\n    \"\"\"Filter by `BlockSchema.capabilities`\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"write-storage\", \"read-storage\"]],\n        description=(\n            \"A list of block capabilities. Block entities will be returned only if an\"\n            \" associated block schema has a superset of the defined capabilities.\"\n        ),\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockSchemaFilterVersion","title":"BlockSchemaFilterVersion","text":"

    Bases: PrefectBaseModel

    Filter by BlockSchema.version.

    Source code in prefect/client/schemas/filters.py
    class BlockSchemaFilterVersion(PrefectBaseModel):\n    \"\"\"Filter by `BlockSchema.capabilities`\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"2.0.0\", \"2.1.0\"]],\n        description=\"A list of block schema versions.\",\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockSchemaFilter","title":"BlockSchemaFilter","text":"

    Bases: PrefectBaseModel, OperatorMixin

    Filter BlockSchemas

    Source code in prefect/client/schemas/filters.py
    class BlockSchemaFilter(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter BlockSchemas\"\"\"\n\n    block_type_id: Optional[BlockSchemaFilterBlockTypeId] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.block_type_id`\"\n    )\n    block_capabilities: Optional[BlockSchemaFilterCapabilities] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.capabilities`\"\n    )\n    id: Optional[BlockSchemaFilterId] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.id`\"\n    )\n    version: Optional[BlockSchemaFilterVersion] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.version`\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockDocumentFilterIsAnonymous","title":"BlockDocumentFilterIsAnonymous","text":"

    Bases: PrefectBaseModel

    Filter by BlockDocument.is_anonymous.

    Source code in prefect/client/schemas/filters.py
    class BlockDocumentFilterIsAnonymous(PrefectBaseModel):\n    \"\"\"Filter by `BlockDocument.is_anonymous`.\"\"\"\n\n    eq_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Filter block documents for only those that are or are not anonymous.\"\n        ),\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockDocumentFilterBlockTypeId","title":"BlockDocumentFilterBlockTypeId","text":"

    Bases: PrefectBaseModel

    Filter by BlockDocument.block_type_id.

    Source code in prefect/client/schemas/filters.py
    class BlockDocumentFilterBlockTypeId(PrefectBaseModel):\n    \"\"\"Filter by `BlockDocument.block_type_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of block type ids to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockDocumentFilterId","title":"BlockDocumentFilterId","text":"

    Bases: PrefectBaseModel

    Filter by BlockDocument.id.

    Source code in prefect/client/schemas/filters.py
    class BlockDocumentFilterId(PrefectBaseModel):\n    \"\"\"Filter by `BlockDocument.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of block ids to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockDocumentFilterName","title":"BlockDocumentFilterName","text":"

    Bases: PrefectBaseModel

    Filter by BlockDocument.name.

    Source code in prefect/client/schemas/filters.py
    class BlockDocumentFilterName(PrefectBaseModel):\n    \"\"\"Filter by `BlockDocument.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of block names to include\"\n    )\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match block names against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        examples=[\"my-block%\"],\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.BlockDocumentFilter","title":"BlockDocumentFilter","text":"

    Bases: PrefectBaseModel, OperatorMixin

    Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned

    Source code in prefect/client/schemas/filters.py
    class BlockDocumentFilter(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned\"\"\"\n\n    id: Optional[BlockDocumentFilterId] = Field(\n        default=None, description=\"Filter criteria for `BlockDocument.id`\"\n    )\n    is_anonymous: Optional[BlockDocumentFilterIsAnonymous] = Field(\n        # default is to exclude anonymous blocks\n        BlockDocumentFilterIsAnonymous(eq_=False),\n        description=(\n            \"Filter criteria for `BlockDocument.is_anonymous`. \"\n            \"Defaults to excluding anonymous blocks.\"\n        ),\n    )\n    block_type_id: Optional[BlockDocumentFilterBlockTypeId] = Field(\n        default=None, description=\"Filter criteria for `BlockDocument.block_type_id`\"\n    )\n    name: Optional[BlockDocumentFilterName] = Field(\n        default=None, description=\"Filter criteria for `BlockDocument.name`\"\n    )\n
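Note the default shown in the source: anonymous block documents are excluded unless the `is_anonymous` criterion is relaxed. A construction-only sketch (the name pattern is illustrative):

```python
from prefect.client.schemas.filters import (
    BlockDocumentFilter,
    BlockDocumentFilterIsAnonymous,
    BlockDocumentFilterName,
)

# is_anonymous defaults to eq_=False (anonymous blocks excluded);
# passing eq_=None removes that constraint so all blocks can match.
block_document_filter = BlockDocumentFilter(
    name=BlockDocumentFilterName(like_="my-block%"),
    is_anonymous=BlockDocumentFilterIsAnonymous(eq_=None),
)
```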
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunNotificationPolicyFilterIsActive","title":"FlowRunNotificationPolicyFilterIsActive","text":"

    Bases: PrefectBaseModel

    Filter by FlowRunNotificationPolicy.is_active.

    Source code in prefect/client/schemas/filters.py
    class FlowRunNotificationPolicyFilterIsActive(PrefectBaseModel):\n    \"\"\"Filter by `FlowRunNotificationPolicy.is_active`.\"\"\"\n\n    eq_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Filter notification policies for only those that are or are not active.\"\n        ),\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.FlowRunNotificationPolicyFilter","title":"FlowRunNotificationPolicyFilter","text":"

    Bases: PrefectBaseModel

    Filter FlowRunNotificationPolicies.

    Source code in prefect/client/schemas/filters.py
    class FlowRunNotificationPolicyFilter(PrefectBaseModel):\n    \"\"\"Filter FlowRunNotificationPolicies.\"\"\"\n\n    is_active: Optional[FlowRunNotificationPolicyFilterIsActive] = Field(\n        default=FlowRunNotificationPolicyFilterIsActive(eq_=False),\n        description=\"Filter criteria for `FlowRunNotificationPolicy.is_active`. \",\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkQueueFilterId","title":"WorkQueueFilterId","text":"

    Bases: PrefectBaseModel

    Filter by WorkQueue.id.

    Source code in prefect/client/schemas/filters.py
    class WorkQueueFilterId(PrefectBaseModel):\n    \"\"\"Filter by `WorkQueue.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None,\n        description=\"A list of work queue ids to include\",\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkQueueFilterName","title":"WorkQueueFilterName","text":"

    Bases: PrefectBaseModel

    Filter by WorkQueue.name.

    Source code in prefect/client/schemas/filters.py
    class WorkQueueFilterName(PrefectBaseModel):\n    \"\"\"Filter by `WorkQueue.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of work queue names to include\",\n        examples=[[\"wq-1\", \"wq-2\"]],\n    )\n\n    startswith_: Optional[List[str]] = Field(\n        default=None,\n        description=(\n            \"A list of case-insensitive starts-with matches. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', and 'Marvin-robot', but not 'sad-marvin'.\"\n        ),\n        examples=[[\"marvin\", \"Marvin-robot\"]],\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkQueueFilter","title":"WorkQueueFilter","text":"

    Bases: PrefectBaseModel, OperatorMixin

    Filter work queues. Only work queues matching all criteria will be returned

    Source code in prefect/client/schemas/filters.py
    class WorkQueueFilter(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter work queues. Only work queues matching all criteria will be\n    returned\"\"\"\n\n    id: Optional[WorkQueueFilterId] = Field(\n        default=None, description=\"Filter criteria for `WorkQueue.id`\"\n    )\n\n    name: Optional[WorkQueueFilterName] = Field(\n        default=None, description=\"Filter criteria for `WorkQueue.name`\"\n    )\n
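A hedged sketch of filtering work queues by name prefix; the `read_work_queues` keyword argument is an assumption about the Prefect 2.x client, and the prefix value is a placeholder:

```python
from prefect.client.orchestration import get_client
from prefect.client.schemas.filters import (
    WorkQueueFilter,
    WorkQueueFilterName,
)

async def queues_starting_with(prefix: str):
    work_queue_filter = WorkQueueFilter(
        # Case-insensitive starts-with match on the queue name.
        name=WorkQueueFilterName(startswith_=[prefix]),
    )
    async with get_client() as client:
        return await client.read_work_queues(work_queue_filter=work_queue_filter)
```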
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkPoolFilterId","title":"WorkPoolFilterId","text":"

    Bases: PrefectBaseModel

    Filter by WorkPool.id.

    Source code in prefect/client/schemas/filters.py
    class WorkPoolFilterId(PrefectBaseModel):\n    \"\"\"Filter by `WorkPool.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of work pool ids to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkPoolFilterName","title":"WorkPoolFilterName","text":"

    Bases: PrefectBaseModel

    Filter by WorkPool.name.

    Source code in prefect/client/schemas/filters.py
    class WorkPoolFilterName(PrefectBaseModel):\n    \"\"\"Filter by `WorkPool.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of work pool names to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkPoolFilterType","title":"WorkPoolFilterType","text":"

    Bases: PrefectBaseModel

    Filter by WorkPool.type.

    Source code in prefect/client/schemas/filters.py
    class WorkPoolFilterType(PrefectBaseModel):\n    \"\"\"Filter by `WorkPool.type`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of work pool types to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkerFilterWorkPoolId","title":"WorkerFilterWorkPoolId","text":"

    Bases: PrefectBaseModel

    Filter by Worker.worker_config_id.

    Source code in prefect/client/schemas/filters.py
    class WorkerFilterWorkPoolId(PrefectBaseModel):\n    \"\"\"Filter by `Worker.worker_config_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of work pool ids to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.WorkerFilterLastHeartbeatTime","title":"WorkerFilterLastHeartbeatTime","text":"

    Bases: PrefectBaseModel

    Filter by Worker.last_heartbeat_time.

    Source code in prefect/client/schemas/filters.py
    class WorkerFilterLastHeartbeatTime(PrefectBaseModel):\n    \"\"\"Filter by `Worker.last_heartbeat_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include processes whose last heartbeat was at or before this time\"\n        ),\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include processes whose last heartbeat was at or after this time\"\n        ),\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilterId","title":"ArtifactFilterId","text":"

    Bases: PrefectBaseModel

    Filter by Artifact.id.

    Source code in prefect/client/schemas/filters.py
    class ArtifactFilterId(PrefectBaseModel):\n    \"\"\"Filter by `Artifact.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of artifact ids to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilterKey","title":"ArtifactFilterKey","text":"

    Bases: PrefectBaseModel

    Filter by Artifact.key.

    Source code in prefect/client/schemas/filters.py
    class ArtifactFilterKey(PrefectBaseModel):\n    \"\"\"Filter by `Artifact.key`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact keys to include\"\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match artifact keys against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        examples=[\"my-artifact-%\"],\n    )\n\n    exists_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"If `true`, only include artifacts with a non-null key. If `false`, \"\n            \"only include artifacts with a null key.\"\n        ),\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilterFlowRunId","title":"ArtifactFilterFlowRunId","text":"

    Bases: PrefectBaseModel

    Filter by Artifact.flow_run_id.

    Source code in prefect/client/schemas/filters.py
    class ArtifactFilterFlowRunId(PrefectBaseModel):\n    \"\"\"Filter by `Artifact.flow_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run IDs to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilterTaskRunId","title":"ArtifactFilterTaskRunId","text":"

    Bases: PrefectBaseModel

    Filter by Artifact.task_run_id.

    Source code in prefect/client/schemas/filters.py
    class ArtifactFilterTaskRunId(PrefectBaseModel):\n    \"\"\"Filter by `Artifact.task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run IDs to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilterType","title":"ArtifactFilterType","text":"

    Bases: PrefectBaseModel

    Filter by Artifact.type.

    Source code in prefect/client/schemas/filters.py
    class ArtifactFilterType(PrefectBaseModel):\n    \"\"\"Filter by `Artifact.type`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to include\"\n    )\n    not_any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to exclude\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactFilter","title":"ArtifactFilter","text":"

    Bases: PrefectBaseModel, OperatorMixin

    Filter artifacts. Only artifacts matching all criteria will be returned

    Source code in prefect/client/schemas/filters.py
    class ArtifactFilter(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter artifacts. Only artifacts matching all criteria will be returned\"\"\"\n\n    id: Optional[ArtifactFilterId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.id`\"\n    )\n    key: Optional[ArtifactFilterKey] = Field(\n        default=None, description=\"Filter criteria for `Artifact.key`\"\n    )\n    flow_run_id: Optional[ArtifactFilterFlowRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.flow_run_id`\"\n    )\n    task_run_id: Optional[ArtifactFilterTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.task_run_id`\"\n    )\n    type: Optional[ArtifactFilterType] = Field(\n        default=None, description=\"Filter criteria for `Artifact.type`\"\n    )\n
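A sketch of querying artifacts with this filter; the `read_artifacts` keyword argument is assumed from the Prefect 2.x client, and the key pattern and artifact type are illustrative:

```python
from prefect.client.orchestration import get_client
from prefect.client.schemas.filters import (
    ArtifactFilter,
    ArtifactFilterKey,
    ArtifactFilterType,
)

async def table_artifacts_with_key_prefix():
    artifact_filter = ArtifactFilter(
        # SQL-style wildcard match on the artifact key.
        key=ArtifactFilterKey(like_="my-artifact-%"),
        type=ArtifactFilterType(any_=["table"]),
    )
    async with get_client() as client:
        return await client.read_artifacts(artifact_filter=artifact_filter)
```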
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilterLatestId","title":"ArtifactCollectionFilterLatestId","text":"

    Bases: PrefectBaseModel

    Filter by ArtifactCollection.latest_id.

    Source code in prefect/client/schemas/filters.py
    class ArtifactCollectionFilterLatestId(PrefectBaseModel):\n    \"\"\"Filter by `ArtifactCollection.latest_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of artifact ids to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilterKey","title":"ArtifactCollectionFilterKey","text":"

    Bases: PrefectBaseModel

    Filter by ArtifactCollection.key.

    Source code in prefect/client/schemas/filters.py
    class ArtifactCollectionFilterKey(PrefectBaseModel):\n    \"\"\"Filter by `ArtifactCollection.key`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact keys to include\"\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match artifact keys against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        examples=[\"my-artifact-%\"],\n    )\n\n    exists_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"If `true`, only include artifacts with a non-null key. If `false`, \"\n            \"only include artifacts with a null key. Should return all rows in \"\n            \"the ArtifactCollection table if specified.\"\n        ),\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilterFlowRunId","title":"ArtifactCollectionFilterFlowRunId","text":"

    Bases: PrefectBaseModel

    Filter by ArtifactCollection.flow_run_id.

    Source code in prefect/client/schemas/filters.py
    class ArtifactCollectionFilterFlowRunId(PrefectBaseModel):\n    \"\"\"Filter by `ArtifactCollection.flow_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run IDs to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilterTaskRunId","title":"ArtifactCollectionFilterTaskRunId","text":"

    Bases: PrefectBaseModel

    Filter by ArtifactCollection.task_run_id.

    Source code in prefect/client/schemas/filters.py
    class ArtifactCollectionFilterTaskRunId(PrefectBaseModel):\n    \"\"\"Filter by `ArtifactCollection.task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run IDs to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilterType","title":"ArtifactCollectionFilterType","text":"

    Bases: PrefectBaseModel

    Filter by ArtifactCollection.type.

    Source code in prefect/client/schemas/filters.py
    class ArtifactCollectionFilterType(PrefectBaseModel):\n    \"\"\"Filter by `ArtifactCollection.type`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to include\"\n    )\n    not_any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to exclude\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.ArtifactCollectionFilter","title":"ArtifactCollectionFilter","text":"

    Bases: PrefectBaseModel, OperatorMixin

    Filter artifact collections. Only artifact collections matching all criteria will be returned

    Source code in prefect/client/schemas/filters.py
    class ArtifactCollectionFilter(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter artifact collections. Only artifact collections matching all criteria will be returned\"\"\"\n\n    latest_id: Optional[ArtifactCollectionFilterLatestId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.id`\"\n    )\n    key: Optional[ArtifactCollectionFilterKey] = Field(\n        default=None, description=\"Filter criteria for `Artifact.key`\"\n    )\n    flow_run_id: Optional[ArtifactCollectionFilterFlowRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.flow_run_id`\"\n    )\n    task_run_id: Optional[ArtifactCollectionFilterTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.task_run_id`\"\n    )\n    type: Optional[ArtifactCollectionFilterType] = Field(\n        default=None, description=\"Filter criteria for `Artifact.type`\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.VariableFilterId","title":"VariableFilterId","text":"

    Bases: PrefectBaseModel

    Filter by Variable.id.

    Source code in prefect/client/schemas/filters.py
    class VariableFilterId(PrefectBaseModel):\n    \"\"\"Filter by `Variable.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of variable ids to include\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.VariableFilterName","title":"VariableFilterName","text":"

    Bases: PrefectBaseModel

    Filter by Variable.name.

    Source code in prefect/client/schemas/filters.py
    class VariableFilterName(PrefectBaseModel):\n    \"\"\"Filter by `Variable.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of variables names to include\"\n    )\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match variable names against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        examples=[\"my_variable_%\"],\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.VariableFilterValue","title":"VariableFilterValue","text":"

    Bases: PrefectBaseModel

    Filter by Variable.value.

    Source code in prefect/client/schemas/filters.py
    class VariableFilterValue(PrefectBaseModel):\n    \"\"\"Filter by `Variable.value`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of variables value to include\"\n    )\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match variable value against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        examples=[\"my-value-%\"],\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.VariableFilterTags","title":"VariableFilterTags","text":"

    Bases: PrefectBaseModel, OperatorMixin

    Filter by Variable.tags.

    Source code in prefect/client/schemas/filters.py
    class VariableFilterTags(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter by `Variable.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"tag-1\", \"tag-2\"]],\n        description=(\n            \"A list of tags. Variables will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include Variables without tags\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.filters.VariableFilter","title":"VariableFilter","text":"

    Bases: PrefectBaseModel, OperatorMixin

    Filter variables. Only variables matching all criteria will be returned

    Source code in prefect/client/schemas/filters.py
    class VariableFilter(PrefectBaseModel, OperatorMixin):\n    \"\"\"Filter variables. Only variables matching all criteria will be returned\"\"\"\n\n    id: Optional[VariableFilterId] = Field(\n        default=None, description=\"Filter criteria for `Variable.id`\"\n    )\n    name: Optional[VariableFilterName] = Field(\n        default=None, description=\"Filter criteria for `Variable.name`\"\n    )\n    value: Optional[VariableFilterValue] = Field(\n        default=None, description=\"Filter criteria for `Variable.value`\"\n    )\n    tags: Optional[VariableFilterTags] = Field(\n        default=None, description=\"Filter criteria for `Variable.tags`\"\n    )\n
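A construction-only sketch of a variable filter, mirroring the field names in the source above; the name pattern and tag are illustrative, and the filter would typically be sent to the server's variable filter route:

```python
from prefect.client.schemas.filters import (
    VariableFilter,
    VariableFilterName,
    VariableFilterTags,
)

# Match variables whose name starts with "my_variable_" and that carry the given tag.
variable_filter = VariableFilter(
    name=VariableFilterName(like_="my_variable_%"),
    tags=VariableFilterTags(all_=["config"]),
)
```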
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_3","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects","title":"prefect.client.schemas.objects","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.StateType","title":"StateType","text":"

    Bases: AutoEnum

    Enumeration of state types.

    Source code in prefect/client/schemas/objects.py
    class StateType(AutoEnum):\n    \"\"\"Enumeration of state types.\"\"\"\n\n    SCHEDULED = AutoEnum.auto()\n    PENDING = AutoEnum.auto()\n    RUNNING = AutoEnum.auto()\n    COMPLETED = AutoEnum.auto()\n    FAILED = AutoEnum.auto()\n    CANCELLED = AutoEnum.auto()\n    CRASHED = AutoEnum.auto()\n    PAUSED = AutoEnum.auto()\n    CANCELLING = AutoEnum.auto()\n
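As a small illustration of how the enum is used, the hypothetical `is_terminal` helper below mirrors `State.is_final()` shown later in this reference (the four types that end a run):

```python
from prefect.client.schemas.objects import StateType

def is_terminal(state_type: StateType) -> bool:
    # Mirrors State.is_final(): these four types end a run.
    return state_type in {
        StateType.COMPLETED,
        StateType.FAILED,
        StateType.CANCELLED,
        StateType.CRASHED,
    }

assert is_terminal(StateType.COMPLETED)
assert not is_terminal(StateType.RUNNING)
```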
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkPoolStatus","title":"WorkPoolStatus","text":"

    Bases: AutoEnum

    Enumeration of work pool statuses.

    Source code in prefect/client/schemas/objects.py
    class WorkPoolStatus(AutoEnum):\n    \"\"\"Enumeration of work pool statuses.\"\"\"\n\n    READY = AutoEnum.auto()\n    NOT_READY = AutoEnum.auto()\n    PAUSED = AutoEnum.auto()\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkerStatus","title":"WorkerStatus","text":"

    Bases: AutoEnum

    Enumeration of worker statuses.

    Source code in prefect/client/schemas/objects.py
    class WorkerStatus(AutoEnum):\n    \"\"\"Enumeration of worker statuses.\"\"\"\n\n    ONLINE = AutoEnum.auto()\n    OFFLINE = AutoEnum.auto()\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.DeploymentStatus","title":"DeploymentStatus","text":"

    Bases: AutoEnum

    Enumeration of deployment statuses.

    Source code in prefect/client/schemas/objects.py
    class DeploymentStatus(AutoEnum):\n    \"\"\"Enumeration of deployment statuses.\"\"\"\n\n    READY = AutoEnum.auto()\n    NOT_READY = AutoEnum.auto()\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkQueueStatus","title":"WorkQueueStatus","text":"

    Bases: AutoEnum

    Enumeration of work queue statuses.

    Source code in prefect/client/schemas/objects.py
    class WorkQueueStatus(AutoEnum):\n    \"\"\"Enumeration of work queue statuses.\"\"\"\n\n    READY = AutoEnum.auto()\n    NOT_READY = AutoEnum.auto()\n    PAUSED = AutoEnum.auto()\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.State","title":"State","text":"

    Bases: ObjectBaseModel, Generic[R]

    The state of a run.

    Source code in prefect/client/schemas/objects.py
    class State(ObjectBaseModel, Generic[R]):\n    \"\"\"\n    The state of a run.\n    \"\"\"\n\n    type: StateType\n    name: Optional[str] = Field(default=None)\n    timestamp: DateTimeTZ = Field(default_factory=lambda: pendulum.now(\"UTC\"))\n    message: Optional[str] = Field(default=None, examples=[\"Run started\"])\n    state_details: StateDetails = Field(default_factory=StateDetails)\n    data: Union[\"BaseResult[R]\", \"DataDocument[R]\", Any] = Field(\n        default=None,\n    )\n\n    @overload\n    def result(self: \"State[R]\", raise_on_failure: bool = True) -> R:\n        ...\n\n    @overload\n    def result(self: \"State[R]\", raise_on_failure: bool = False) -> Union[R, Exception]:\n        ...\n\n    def result(\n        self, raise_on_failure: bool = True, fetch: Optional[bool] = None\n    ) -> Union[R, Exception]:\n        \"\"\"\n        Retrieve the result attached to this state.\n\n        Args:\n            raise_on_failure: a boolean specifying whether to raise an exception\n                if the state is of type `FAILED` and the underlying data is an exception\n            fetch: a boolean specifying whether to resolve references to persisted\n                results into data. For synchronous users, this defaults to `True`.\n                For asynchronous users, this defaults to `False` for backwards\n                compatibility.\n\n        Raises:\n            TypeError: If the state is failed but the result is not an exception.\n\n        Returns:\n            The result of the run\n\n        Examples:\n            >>> from prefect import flow, task\n            >>> @task\n            >>> def my_task(x):\n            >>>     return x\n\n            Get the result from a task future in a flow\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     future = my_task(\"hello\")\n            >>>     state = future.wait()\n            >>>     result = state.result()\n            >>>     print(result)\n            >>> my_flow()\n            hello\n\n            Get the result from a flow state\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     return \"hello\"\n            >>> my_flow(return_state=True).result()\n            hello\n\n            Get the result from a failed state\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     raise ValueError(\"oh no!\")\n            >>> state = my_flow(return_state=True)  # Error is wrapped in FAILED state\n            >>> state.result()  # Raises `ValueError`\n\n            Get the result from a failed state without erroring\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     raise ValueError(\"oh no!\")\n            >>> state = my_flow(return_state=True)\n            >>> result = state.result(raise_on_failure=False)\n            >>> print(result)\n            ValueError(\"oh no!\")\n\n\n            Get the result from a flow state in an async context\n\n            >>> @flow\n            >>> async def my_flow():\n            >>>     return \"hello\"\n            >>> state = await my_flow(return_state=True)\n            >>> await state.result()\n            hello\n        \"\"\"\n        from prefect.states import get_state_result\n\n        return get_state_result(self, raise_on_failure=raise_on_failure, fetch=fetch)\n\n    def to_state_create(self):\n        \"\"\"\n        Convert this state to a `StateCreate` type which can be used to set the state of\n        a run in the API.\n\n        This method will drop this 
state's `data` if it is not a result type. Only\n        results should be sent to the API. Other data is only available locally.\n        \"\"\"\n        from prefect.client.schemas.actions import StateCreate\n        from prefect.results import BaseResult\n\n        return StateCreate(\n            type=self.type,\n            name=self.name,\n            message=self.message,\n            data=self.data if isinstance(self.data, BaseResult) else None,\n            state_details=self.state_details,\n        )\n\n    @validator(\"name\", always=True)\n    def default_name_from_type(cls, v, *, values, **kwargs):\n        return get_or_create_state_name(v, values)\n\n    @root_validator\n    def default_scheduled_start_time(cls, values):\n        \"\"\"\n        TODO: This should throw an error instead of setting a default but is out of\n              scope for https://github.com/PrefectHQ/orion/pull/174/ and can be rolled\n              into work refactoring state initialization\n        \"\"\"\n        if values.get(\"type\") == StateType.SCHEDULED:\n            state_details = values.setdefault(\n                \"state_details\", cls.__fields__[\"state_details\"].get_default()\n            )\n            if not state_details.scheduled_time:\n                state_details.scheduled_time = pendulum.now(\"utc\")\n        return values\n\n    def is_scheduled(self) -> bool:\n        return self.type == StateType.SCHEDULED\n\n    def is_pending(self) -> bool:\n        return self.type == StateType.PENDING\n\n    def is_running(self) -> bool:\n        return self.type == StateType.RUNNING\n\n    def is_completed(self) -> bool:\n        return self.type == StateType.COMPLETED\n\n    def is_failed(self) -> bool:\n        return self.type == StateType.FAILED\n\n    def is_crashed(self) -> bool:\n        return self.type == StateType.CRASHED\n\n    def is_cancelled(self) -> bool:\n        return self.type == StateType.CANCELLED\n\n    def is_cancelling(self) -> bool:\n        return self.type == StateType.CANCELLING\n\n    def is_final(self) -> bool:\n        return self.type in {\n            StateType.CANCELLED,\n            StateType.FAILED,\n            StateType.COMPLETED,\n            StateType.CRASHED,\n        }\n\n    def is_paused(self) -> bool:\n        return self.type == StateType.PAUSED\n\n    def copy(\n        self,\n        *,\n        update: Optional[Dict[str, Any]] = None,\n        reset_fields: bool = False,\n        **kwargs,\n    ):\n        \"\"\"\n        Copying API models should return an object that could be inserted into the\n        database again. 
The 'timestamp' is reset using the default factory.\n        \"\"\"\n        update = update or {}\n        update.setdefault(\"timestamp\", self.__fields__[\"timestamp\"].get_default())\n        return super().copy(reset_fields=reset_fields, update=update, **kwargs)\n\n    def __repr__(self) -> str:\n        \"\"\"\n        Generates a complete state representation appropriate for introspection\n        and debugging, including the result:\n\n        `MyCompletedState(message=\"my message\", type=COMPLETED, result=...)`\n        \"\"\"\n        from prefect.deprecated.data_documents import DataDocument\n\n        if isinstance(self.data, DataDocument):\n            result = self.data.decode()\n        else:\n            result = self.data\n\n        display = dict(\n            message=repr(self.message),\n            type=str(self.type.value),\n            result=repr(result),\n        )\n\n        return f\"{self.name}({', '.join(f'{k}={v}' for k, v in display.items())})\"\n\n    def __str__(self) -> str:\n        \"\"\"\n        Generates a simple state representation appropriate for logging:\n\n        `MyCompletedState(\"my message\", type=COMPLETED)`\n        \"\"\"\n\n        display = []\n\n        if self.message:\n            display.append(repr(self.message))\n\n        if self.type.value.lower() != self.name.lower():\n            display.append(f\"type={self.type.value}\")\n\n        return f\"{self.name}({', '.join(display)})\"\n\n    def __hash__(self) -> int:\n        return hash(\n            (\n                getattr(self.state_details, \"flow_run_id\", None),\n                getattr(self.state_details, \"task_run_id\", None),\n                self.timestamp,\n                self.type,\n            )\n        )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.State.result","title":"result","text":"

    Retrieve the result attached to this state.

    Parameters:

    - raise_on_failure (bool, default True): a boolean specifying whether to raise an exception if the state is of type FAILED and the underlying data is an exception

    - fetch (Optional[bool], default None): a boolean specifying whether to resolve references to persisted results into data. For synchronous users, this defaults to True. For asynchronous users, this defaults to False for backwards compatibility.

    Raises:

    - TypeError: If the state is failed but the result is not an exception.

    Returns:

    - Union[R, Exception]: The result of the run

    Examples:

    >>> from prefect import flow, task\n>>> @task\n>>> def my_task(x):\n>>>     return x\n

    Get the result from a task future in a flow

    >>> @flow\n>>> def my_flow():\n>>>     future = my_task(\"hello\")\n>>>     state = future.wait()\n>>>     result = state.result()\n>>>     print(result)\n>>> my_flow()\nhello\n

    Get the result from a flow state

    >>> @flow\n>>> def my_flow():\n>>>     return \"hello\"\n>>> my_flow(return_state=True).result()\nhello\n

    Get the result from a failed state

    >>> @flow\n>>> def my_flow():\n>>>     raise ValueError(\"oh no!\")\n>>> state = my_flow(return_state=True)  # Error is wrapped in FAILED state\n>>> state.result()  # Raises `ValueError`\n

    Get the result from a failed state without erroring

    >>> @flow\n>>> def my_flow():\n>>>     raise ValueError(\"oh no!\")\n>>> state = my_flow(return_state=True)\n>>> result = state.result(raise_on_failure=False)\n>>> print(result)\nValueError(\"oh no!\")\n

    Get the result from a flow state in an async context

    >>> @flow\n>>> async def my_flow():\n>>>     return \"hello\"\n>>> state = await my_flow(return_state=True)\n>>> await state.result()\nhello\n
    Source code in prefect/client/schemas/objects.py
    def result(\n    self, raise_on_failure: bool = True, fetch: Optional[bool] = None\n) -> Union[R, Exception]:\n    \"\"\"\n    Retrieve the result attached to this state.\n\n    Args:\n        raise_on_failure: a boolean specifying whether to raise an exception\n            if the state is of type `FAILED` and the underlying data is an exception\n        fetch: a boolean specifying whether to resolve references to persisted\n            results into data. For synchronous users, this defaults to `True`.\n            For asynchronous users, this defaults to `False` for backwards\n            compatibility.\n\n    Raises:\n        TypeError: If the state is failed but the result is not an exception.\n\n    Returns:\n        The result of the run\n\n    Examples:\n        >>> from prefect import flow, task\n        >>> @task\n        >>> def my_task(x):\n        >>>     return x\n\n        Get the result from a task future in a flow\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task(\"hello\")\n        >>>     state = future.wait()\n        >>>     result = state.result()\n        >>>     print(result)\n        >>> my_flow()\n        hello\n\n        Get the result from a flow state\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     return \"hello\"\n        >>> my_flow(return_state=True).result()\n        hello\n\n        Get the result from a failed state\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     raise ValueError(\"oh no!\")\n        >>> state = my_flow(return_state=True)  # Error is wrapped in FAILED state\n        >>> state.result()  # Raises `ValueError`\n\n        Get the result from a failed state without erroring\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     raise ValueError(\"oh no!\")\n        >>> state = my_flow(return_state=True)\n        >>> result = state.result(raise_on_failure=False)\n        >>> print(result)\n        ValueError(\"oh no!\")\n\n\n        Get the result from a flow state in an async context\n\n        >>> @flow\n        >>> async def my_flow():\n        >>>     return \"hello\"\n        >>> state = await my_flow(return_state=True)\n        >>> await state.result()\n        hello\n    \"\"\"\n    from prefect.states import get_state_result\n\n    return get_state_result(self, raise_on_failure=raise_on_failure, fetch=fetch)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.State.to_state_create","title":"to_state_create","text":"

    Convert this state to a StateCreate type which can be used to set the state of a run in the API.

    This method will drop this state's data if it is not a result type. Only results should be sent to the API. Other data is only available locally.

    Source code in prefect/client/schemas/objects.py
    def to_state_create(self):\n    \"\"\"\n    Convert this state to a `StateCreate` type which can be used to set the state of\n    a run in the API.\n\n    This method will drop this state's `data` if it is not a result type. Only\n    results should be sent to the API. Other data is only available locally.\n    \"\"\"\n    from prefect.client.schemas.actions import StateCreate\n    from prefect.results import BaseResult\n\n    return StateCreate(\n        type=self.type,\n        name=self.name,\n        message=self.message,\n        data=self.data if isinstance(self.data, BaseResult) else None,\n        state_details=self.state_details,\n    )\n
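
    A rough, hedged usage sketch (it assumes the `Completed` state helper from `prefect.states`, which is not documented on this page): build a state locally, then convert it before sending it to the API. Plain, non-result `data` is dropped during the conversion.

    from prefect.states import Completed

    # Sketch only: build a COMPLETED state locally and convert it to a
    # StateCreate payload. Non-result data is dropped here, since only
    # results are sent to the API.
    state = Completed(message="done")
    state_create = state.to_state_create()
    print(state_create.type, state_create.name, state_create.data)  # data is None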
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.State.default_scheduled_start_time","title":"default_scheduled_start_time","text":"This should throw an error instead of setting a default but is out of

    scope for https://github.com/PrefectHQ/orion/pull/174/ and can be rolled into work refactoring state initialization

    Source code in prefect/client/schemas/objects.py
    @root_validator\ndef default_scheduled_start_time(cls, values):\n    \"\"\"\n    TODO: This should throw an error instead of setting a default but is out of\n          scope for https://github.com/PrefectHQ/orion/pull/174/ and can be rolled\n          into work refactoring state initialization\n    \"\"\"\n    if values.get(\"type\") == StateType.SCHEDULED:\n        state_details = values.setdefault(\n            \"state_details\", cls.__fields__[\"state_details\"].get_default()\n        )\n        if not state_details.scheduled_time:\n            state_details.scheduled_time = pendulum.now(\"utc\")\n    return values\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRunPolicy","title":"FlowRunPolicy","text":"

    Bases: PrefectBaseModel

    Defines how a flow run should be orchestrated.

    Source code in prefect/client/schemas/objects.py
    class FlowRunPolicy(PrefectBaseModel):\n    \"\"\"Defines of how a flow run should be orchestrated.\"\"\"\n\n    max_retries: int = Field(\n        default=0,\n        description=(\n            \"The maximum number of retries. Field is not used. Please use `retries`\"\n            \" instead.\"\n        ),\n        deprecated=True,\n    )\n    retry_delay_seconds: float = Field(\n        default=0,\n        description=(\n            \"The delay between retries. Field is not used. Please use `retry_delay`\"\n            \" instead.\"\n        ),\n        deprecated=True,\n    )\n    retries: Optional[int] = Field(default=None, description=\"The number of retries.\")\n    retry_delay: Optional[int] = Field(\n        default=None, description=\"The delay time between retries, in seconds.\"\n    )\n    pause_keys: Optional[set] = Field(\n        default_factory=set, description=\"Tracks pauses this run has observed.\"\n    )\n    resuming: Optional[bool] = Field(\n        default=False, description=\"Indicates if this run is resuming from a pause.\"\n    )\n\n    @root_validator\n    def populate_deprecated_fields(cls, values):\n        return set_run_policy_deprecated_fields(values)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRun","title":"FlowRun","text":"

    Bases: ObjectBaseModel

    Source code in prefect/client/schemas/objects.py
    class FlowRun(ObjectBaseModel):\n    name: str = Field(\n        default_factory=lambda: generate_slug(2),\n        description=(\n            \"The name of the flow run. Defaults to a random slug if not specified.\"\n        ),\n        examples=[\"my-flow-run\"],\n    )\n    flow_id: UUID = Field(default=..., description=\"The id of the flow being run.\")\n    state_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the flow run's current state.\"\n    )\n    deployment_id: Optional[UUID] = Field(\n        default=None,\n        description=(\n            \"The id of the deployment associated with this flow run, if available.\"\n        ),\n    )\n    deployment_version: Optional[str] = Field(\n        default=None,\n        description=\"The version of the deployment associated with this flow run.\",\n        examples=[\"1.0\"],\n    )\n    work_queue_name: Optional[str] = Field(\n        default=None, description=\"The work queue that handled this flow run.\"\n    )\n    flow_version: Optional[str] = Field(\n        default=None,\n        description=\"The version of the flow executed in this flow run.\",\n        examples=[\"1.0\"],\n    )\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict, description=\"Parameters for the flow run.\"\n    )\n    idempotency_key: Optional[str] = Field(\n        default=None,\n        description=(\n            \"An optional idempotency key for the flow run. Used to ensure the same flow\"\n            \" run is not created multiple times.\"\n        ),\n    )\n    context: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Additional context for the flow run.\",\n        examples=[{\"my_var\": \"my_val\"}],\n    )\n    empirical_policy: FlowRunPolicy = Field(\n        default_factory=FlowRunPolicy,\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags on the flow run\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    parent_task_run_id: Optional[UUID] = Field(\n        default=None,\n        description=(\n            \"If the flow run is a subflow, the id of the 'dummy' task in the parent\"\n            \" flow used to track subflow state.\"\n        ),\n    )\n    run_count: int = Field(\n        default=0, description=\"The number of times the flow run was executed.\"\n    )\n    expected_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The flow run's expected start time.\",\n    )\n    next_scheduled_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The next time the flow run is scheduled to start.\",\n    )\n    start_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual start time.\"\n    )\n    end_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual end time.\"\n    )\n    total_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=(\n            \"Total run time. 
If the flow run was executed multiple times, the time of\"\n            \" each run will be summed.\"\n        ),\n    )\n    estimated_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"A real-time estimate of the total run time.\",\n    )\n    estimated_start_time_delta: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"The difference between actual and expected start time.\",\n    )\n    auto_scheduled: bool = Field(\n        default=False,\n        description=\"Whether or not the flow run was automatically scheduled.\",\n    )\n    infrastructure_document_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The block document defining infrastructure to use this flow run.\",\n    )\n    infrastructure_pid: Optional[str] = Field(\n        default=None,\n        description=\"The id of the flow run as returned by an infrastructure block.\",\n    )\n    created_by: Optional[CreatedBy] = Field(\n        default=None,\n        description=\"Optional information about the creator of this flow run.\",\n    )\n    work_queue_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the run's work pool queue.\"\n    )\n\n    work_pool_id: Optional[UUID] = Field(\n        description=\"The work pool with which the queue is associated.\"\n    )\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the flow run's work pool.\",\n        examples=[\"my-work-pool\"],\n    )\n    state: Optional[State] = Field(\n        default=None,\n        description=\"The state of the flow run.\",\n        examples=[State(type=StateType.COMPLETED)],\n    )\n    job_variables: Optional[dict] = Field(\n        default=None, description=\"Job variables for the flow run.\"\n    )\n\n    # These are server-side optimizations and should not be present on client models\n    # TODO: Deprecate these fields\n\n    state_type: Optional[StateType] = Field(\n        default=None, description=\"The type of the current flow run state.\"\n    )\n    state_name: Optional[str] = Field(\n        default=None, description=\"The name of the current flow run state.\"\n    )\n\n    def __eq__(self, other: Any) -> bool:\n        \"\"\"\n        Check for \"equality\" to another flow run schema\n\n        Estimates times are rolling and will always change with repeated queries for\n        a flow run so we ignore them during equality checks.\n        \"\"\"\n        if isinstance(other, FlowRun):\n            exclude_fields = {\"estimated_run_time\", \"estimated_start_time_delta\"}\n            return self.dict(exclude=exclude_fields) == other.dict(\n                exclude=exclude_fields\n            )\n        return super().__eq__(other)\n\n    @validator(\"name\", pre=True)\n    def set_default_name(cls, name):\n        return get_or_create_run_name(name)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.TaskRunPolicy","title":"TaskRunPolicy","text":"

    Bases: PrefectBaseModel

    Defines how a task run should retry.

    Source code in prefect/client/schemas/objects.py
    class TaskRunPolicy(PrefectBaseModel):\n    \"\"\"Defines of how a task run should retry.\"\"\"\n\n    max_retries: int = Field(\n        default=0,\n        description=(\n            \"The maximum number of retries. Field is not used. Please use `retries`\"\n            \" instead.\"\n        ),\n        deprecated=True,\n    )\n    retry_delay_seconds: float = Field(\n        default=0,\n        description=(\n            \"The delay between retries. Field is not used. Please use `retry_delay`\"\n            \" instead.\"\n        ),\n        deprecated=True,\n    )\n    retries: Optional[int] = Field(default=None, description=\"The number of retries.\")\n    retry_delay: Union[None, int, List[int]] = Field(\n        default=None,\n        description=\"A delay time or list of delay times between retries, in seconds.\",\n    )\n    retry_jitter_factor: Optional[float] = Field(\n        default=None, description=\"Determines the amount a retry should jitter\"\n    )\n\n    @root_validator\n    def populate_deprecated_fields(cls, values):\n        return set_run_policy_deprecated_fields(values)\n\n    @validator(\"retry_delay\")\n    def validate_configured_retry_delays(cls, v):\n        return list_length_50_or_less(v)\n\n    @validator(\"retry_jitter_factor\")\n    def validate_jitter_factor(cls, v):\n        return validate_not_negative(v)\n
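
    As a brief, hedged sketch of how these fields fit together (the values are illustrative only): a policy with three retries and per-retry delays can be built directly, and the validators above cap the delay list at 50 entries and reject negative jitter factors.

    from prefect.client.schemas.objects import TaskRunPolicy

    # Illustrative values: three retries with increasing delays and 50% jitter.
    policy = TaskRunPolicy(retries=3, retry_delay=[1, 10, 100], retry_jitter_factor=0.5)
    print(policy.retries, policy.retry_delay)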
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.TaskRunInput","title":"TaskRunInput","text":"

    Bases: PrefectBaseModel

    Base class for classes that represent inputs to task runs, which could include constants, parameters, or other task runs.

    Source code in prefect/client/schemas/objects.py
    class TaskRunInput(PrefectBaseModel):\n    \"\"\"\n    Base class for classes that represent inputs to task runs, which\n    could include, constants, parameters, or other task runs.\n    \"\"\"\n\n    # freeze TaskRunInputs to allow them to be placed in sets\n    class Config:\n        frozen = True\n\n    input_type: str\n
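
    Because the nested Config freezes instances, task run inputs are hashable and can be placed in sets. A minimal sketch using the subclasses documented below (TaskRunResult and Parameter):

    from uuid import uuid4
    from prefect.client.schemas.objects import Parameter, TaskRunResult

    # Frozen models are hashable, so they can live in a set of inputs.
    inputs = {TaskRunResult(id=uuid4()), Parameter(name="x")}
    print(len(inputs))  # 2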
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.TaskRunResult","title":"TaskRunResult","text":"

    Bases: TaskRunInput

    Represents a task run result input to another task run.

    Source code in prefect/client/schemas/objects.py
    class TaskRunResult(TaskRunInput):\n    \"\"\"Represents a task run result input to another task run.\"\"\"\n\n    input_type: Literal[\"task_run\"] = \"task_run\"\n    id: UUID\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Parameter","title":"Parameter","text":"

    Bases: TaskRunInput

    Represents a parameter input to a task run.

    Source code in prefect/client/schemas/objects.py
    class Parameter(TaskRunInput):\n    \"\"\"Represents a parameter input to a task run.\"\"\"\n\n    input_type: Literal[\"parameter\"] = \"parameter\"\n    name: str\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Constant","title":"Constant","text":"

    Bases: TaskRunInput

    Represents a constant input value to a task run.

    Source code in prefect/client/schemas/objects.py
    class Constant(TaskRunInput):\n    \"\"\"Represents constant input value to a task run.\"\"\"\n\n    input_type: Literal[\"constant\"] = \"constant\"\n    type: str\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Workspace","title":"Workspace","text":"

    Bases: PrefectBaseModel

    A Prefect Cloud workspace.

    Expected payload for each workspace returned by the me/workspaces route.

    Source code in prefect/client/schemas/objects.py
    class Workspace(PrefectBaseModel):\n    \"\"\"\n    A Prefect Cloud workspace.\n\n    Expected payload for each workspace returned by the `me/workspaces` route.\n    \"\"\"\n\n    account_id: UUID = Field(..., description=\"The account id of the workspace.\")\n    account_name: str = Field(..., description=\"The account name.\")\n    account_handle: str = Field(..., description=\"The account's unique handle.\")\n    workspace_id: UUID = Field(..., description=\"The workspace id.\")\n    workspace_name: str = Field(..., description=\"The workspace name.\")\n    workspace_description: str = Field(..., description=\"Description of the workspace.\")\n    workspace_handle: str = Field(..., description=\"The workspace's unique handle.\")\n\n    class Config:\n        extra = \"ignore\"\n\n    @property\n    def handle(self) -> str:\n        \"\"\"\n        The full handle of the workspace as `account_handle` / `workspace_handle`\n        \"\"\"\n        return self.account_handle + \"/\" + self.workspace_handle\n\n    def api_url(self) -> str:\n        \"\"\"\n        Generate the API URL for accessing this workspace\n        \"\"\"\n        return (\n            f\"{PREFECT_CLOUD_API_URL.value()}\"\n            f\"/accounts/{self.account_id}\"\n            f\"/workspaces/{self.workspace_id}\"\n        )\n\n    def ui_url(self) -> str:\n        \"\"\"\n        Generate the UI URL for accessing this workspace\n        \"\"\"\n        return (\n            f\"{PREFECT_CLOUD_UI_URL.value()}\"\n            f\"/account/{self.account_id}\"\n            f\"/workspace/{self.workspace_id}\"\n        )\n\n    def __hash__(self):\n        return hash(self.handle)\n
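
    A hedged construction sketch (all field values below are made up for illustration): the handle property and api_url method combine the account and workspace identifiers.

    from uuid import uuid4
    from prefect.client.schemas.objects import Workspace

    ws = Workspace(
        account_id=uuid4(),
        account_name="Acme",
        account_handle="acme",
        workspace_id=uuid4(),
        workspace_name="production",
        workspace_description="Example workspace",
        workspace_handle="prod",
    )
    print(ws.handle)     # "acme/prod"
    print(ws.api_url())  # ".../accounts/<account_id>/workspaces/<workspace_id>"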
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Workspace.handle","title":"handle: str property","text":"

    The full handle of the workspace as account_handle / workspace_handle

    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Workspace.api_url","title":"api_url","text":"

    Generate the API URL for accessing this workspace

    Source code in prefect/client/schemas/objects.py
    def api_url(self) -> str:\n    \"\"\"\n    Generate the API URL for accessing this workspace\n    \"\"\"\n    return (\n        f\"{PREFECT_CLOUD_API_URL.value()}\"\n        f\"/accounts/{self.account_id}\"\n        f\"/workspaces/{self.workspace_id}\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Workspace.ui_url","title":"ui_url","text":"

    Generate the UI URL for accessing this workspace

    Source code in prefect/client/schemas/objects.py
    def ui_url(self) -> str:\n    \"\"\"\n    Generate the UI URL for accessing this workspace\n    \"\"\"\n    return (\n        f\"{PREFECT_CLOUD_UI_URL.value()}\"\n        f\"/account/{self.account_id}\"\n        f\"/workspace/{self.workspace_id}\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.BlockType","title":"BlockType","text":"

    Bases: ObjectBaseModel

    An ORM representation of a block type

    Source code in prefect/client/schemas/objects.py
    class BlockType(ObjectBaseModel):\n    \"\"\"An ORM representation of a block type\"\"\"\n\n    name: str = Field(default=..., description=\"A block type's name\")\n    slug: str = Field(default=..., description=\"A block type's slug\")\n    logo_url: Optional[HttpUrl] = Field(\n        default=None, description=\"Web URL for the block type's logo\"\n    )\n    documentation_url: Optional[HttpUrl] = Field(\n        default=None, description=\"Web URL for the block type's documentation\"\n    )\n    description: Optional[str] = Field(\n        default=None,\n        description=\"A short blurb about the corresponding block's intended use\",\n    )\n    code_example: Optional[str] = Field(\n        default=None,\n        description=\"A code snippet demonstrating use of the corresponding block\",\n    )\n    is_protected: bool = Field(\n        default=False, description=\"Protected block types cannot be modified via API.\"\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.BlockDocument","title":"BlockDocument","text":"

    Bases: ObjectBaseModel

    An ORM representation of a block document.

    Source code in prefect/client/schemas/objects.py
    class BlockDocument(ObjectBaseModel):\n    \"\"\"An ORM representation of a block document.\"\"\"\n\n    name: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The block document's name. Not required for anonymous block documents.\"\n        ),\n    )\n    data: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The block document's data\"\n    )\n    block_schema_id: UUID = Field(default=..., description=\"A block schema ID\")\n    block_schema: Optional[BlockSchema] = Field(\n        default=None, description=\"The associated block schema\"\n    )\n    block_type_id: UUID = Field(default=..., description=\"A block type ID\")\n    block_type_name: Optional[str] = Field(None, description=\"A block type name\")\n    block_type: Optional[BlockType] = Field(\n        default=None, description=\"The associated block type\"\n    )\n    block_document_references: Dict[str, Dict[str, Any]] = Field(\n        default_factory=dict, description=\"Record of the block document's references\"\n    )\n    is_anonymous: bool = Field(\n        default=False,\n        description=(\n            \"Whether the block is anonymous (anonymous blocks are usually created by\"\n            \" Prefect automatically)\"\n        ),\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        # the BlockDocumentCreate subclass allows name=None\n        # and will inherit this validator\n        return raise_on_name_with_banned_characters(v)\n\n    @root_validator\n    def validate_name_is_present_if_not_anonymous(cls, values):\n        return validate_name_present_on_nonanonymous_blocks(values)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Flow","title":"Flow","text":"

    Bases: ObjectBaseModel

    An ORM representation of flow data.

    Source code in prefect/client/schemas/objects.py
    class Flow(ObjectBaseModel):\n    \"\"\"An ORM representation of flow data.\"\"\"\n\n    name: str = Field(\n        default=..., description=\"The name of the flow\", examples=[\"my-flow\"]\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of flow tags\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Deployment","title":"Deployment","text":"

    Bases: DeprecatedInfraOverridesField, ObjectBaseModel

    An ORM representation of deployment data.

    Source code in prefect/client/schemas/objects.py
    class Deployment(DeprecatedInfraOverridesField, ObjectBaseModel):\n    \"\"\"An ORM representation of deployment data.\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the deployment.\")\n    version: Optional[str] = Field(\n        default=None, description=\"An optional version for the deployment.\"\n    )\n    description: Optional[str] = Field(\n        default=None, description=\"A description for the deployment.\"\n    )\n    flow_id: UUID = Field(\n        default=..., description=\"The flow id associated with the deployment.\"\n    )\n    schedule: Optional[SCHEDULE_TYPES] = Field(\n        default=None, description=\"A schedule for the deployment.\"\n    )\n    is_schedule_active: bool = Field(\n        default=True, description=\"Whether or not the deployment schedule is active.\"\n    )\n    paused: bool = Field(\n        default=False, description=\"Whether or not the deployment is paused.\"\n    )\n    schedules: List[DeploymentSchedule] = Field(\n        default_factory=list, description=\"A list of schedules for the deployment.\"\n    )\n    job_variables: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Overrides to apply to flow run infrastructure at runtime.\",\n    )\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Parameters for flow runs scheduled by the deployment.\",\n    )\n    pull_steps: Optional[List[dict]] = Field(\n        default=None,\n        description=\"Pull steps for cloning and running this deployment.\",\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags for the deployment\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    work_queue_name: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The work queue for the deployment. 
If no work queue is set, work will not\"\n            \" be scheduled.\"\n        ),\n    )\n    last_polled: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The last time the deployment was polled for status updates.\",\n    )\n    parameter_openapi_schema: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=\"The parameter schema of the flow, including defaults.\",\n    )\n    path: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the working directory for the workflow, relative to remote\"\n            \" storage or an absolute path.\"\n        ),\n    )\n    entrypoint: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the entrypoint for the workflow, relative to the `path`.\"\n        ),\n    )\n    manifest_path: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the flow's manifest file, relative to the chosen storage.\"\n        ),\n    )\n    storage_document_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The block document defining storage used for this flow.\",\n    )\n    infrastructure_document_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The block document defining infrastructure to use for flow runs.\",\n    )\n    created_by: Optional[CreatedBy] = Field(\n        default=None,\n        description=\"Optional information about the creator of this deployment.\",\n    )\n    updated_by: Optional[UpdatedBy] = Field(\n        default=None,\n        description=\"Optional information about the updater of this deployment.\",\n    )\n    work_queue_id: UUID = Field(\n        default=None,\n        description=(\n            \"The id of the work pool queue to which this deployment is assigned.\"\n        ),\n    )\n    enforce_parameter_schema: bool = Field(\n        default=False,\n        description=(\n            \"Whether or not the deployment should enforce the parameter schema.\"\n        ),\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.ConcurrencyLimit","title":"ConcurrencyLimit","text":"

    Bases: ObjectBaseModel

    An ORM representation of a concurrency limit.

    Source code in prefect/client/schemas/objects.py
    class ConcurrencyLimit(ObjectBaseModel):\n    \"\"\"An ORM representation of a concurrency limit.\"\"\"\n\n    tag: str = Field(\n        default=..., description=\"A tag the concurrency limit is applied to.\"\n    )\n    concurrency_limit: int = Field(default=..., description=\"The concurrency limit.\")\n    active_slots: List[UUID] = Field(\n        default_factory=list,\n        description=\"A list of active run ids using a concurrency slot\",\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.BlockSchema","title":"BlockSchema","text":"

    Bases: ObjectBaseModel

    An ORM representation of a block schema.

    Source code in prefect/client/schemas/objects.py
    class BlockSchema(ObjectBaseModel):\n    \"\"\"An ORM representation of a block schema.\"\"\"\n\n    checksum: str = Field(default=..., description=\"The block schema's unique checksum\")\n    fields: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The block schema's field schema\"\n    )\n    block_type_id: Optional[UUID] = Field(default=..., description=\"A block type ID\")\n    block_type: Optional[BlockType] = Field(\n        default=None, description=\"The associated block type\"\n    )\n    capabilities: List[str] = Field(\n        default_factory=list,\n        description=\"A list of Block capabilities\",\n    )\n    version: str = Field(\n        default=DEFAULT_BLOCK_SCHEMA_VERSION,\n        description=\"Human readable identifier for the block schema\",\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.BlockSchemaReference","title":"BlockSchemaReference","text":"

    Bases: ObjectBaseModel

    An ORM representation of a block schema reference.

    Source code in prefect/client/schemas/objects.py
    class BlockSchemaReference(ObjectBaseModel):\n    \"\"\"An ORM representation of a block schema reference.\"\"\"\n\n    parent_block_schema_id: UUID = Field(\n        default=..., description=\"ID of block schema the reference is nested within\"\n    )\n    parent_block_schema: Optional[BlockSchema] = Field(\n        default=None, description=\"The block schema the reference is nested within\"\n    )\n    reference_block_schema_id: UUID = Field(\n        default=..., description=\"ID of the nested block schema\"\n    )\n    reference_block_schema: Optional[BlockSchema] = Field(\n        default=None, description=\"The nested block schema\"\n    )\n    name: str = Field(\n        default=..., description=\"The name that the reference is nested under\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.BlockDocumentReference","title":"BlockDocumentReference","text":"

    Bases: ObjectBaseModel

    An ORM representation of a block document reference.

    Source code in prefect/client/schemas/objects.py
    class BlockDocumentReference(ObjectBaseModel):\n    \"\"\"An ORM representation of a block document reference.\"\"\"\n\n    parent_block_document_id: UUID = Field(\n        default=..., description=\"ID of block document the reference is nested within\"\n    )\n    parent_block_document: Optional[BlockDocument] = Field(\n        default=None, description=\"The block document the reference is nested within\"\n    )\n    reference_block_document_id: UUID = Field(\n        default=..., description=\"ID of the nested block document\"\n    )\n    reference_block_document: Optional[BlockDocument] = Field(\n        default=None, description=\"The nested block document\"\n    )\n    name: str = Field(\n        default=..., description=\"The name that the reference is nested under\"\n    )\n\n    @root_validator\n    def validate_parent_and_ref_are_different(cls, values):\n        return validate_parent_and_ref_diff(values)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.SavedSearchFilter","title":"SavedSearchFilter","text":"

    Bases: PrefectBaseModel

    A filter for a saved search model. Intended for use by the Prefect UI.

    Source code in prefect/client/schemas/objects.py
    class SavedSearchFilter(PrefectBaseModel):\n    \"\"\"A filter for a saved search model. Intended for use by the Prefect UI.\"\"\"\n\n    object: str = Field(default=..., description=\"The object over which to filter.\")\n    property: str = Field(\n        default=..., description=\"The property of the object on which to filter.\"\n    )\n    type: str = Field(default=..., description=\"The type of the property.\")\n    operation: str = Field(\n        default=...,\n        description=\"The operator to apply to the object. For example, `equals`.\",\n    )\n    value: Any = Field(\n        default=..., description=\"A JSON-compatible value for the filter.\"\n    )\n
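
    A hypothetical filter (the field values are illustrative, not a documented query): match flow runs whose state name equals "Failed".

    from prefect.client.schemas.objects import SavedSearchFilter

    f = SavedSearchFilter(
        object="flow_run",
        property="state_name",
        type="string",
        operation="equals",
        value="Failed",
    )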
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.SavedSearch","title":"SavedSearch","text":"

    Bases: ObjectBaseModel

    An ORM representation of saved search data. Represents a set of filter criteria.

    Source code in prefect/client/schemas/objects.py
    class SavedSearch(ObjectBaseModel):\n    \"\"\"An ORM representation of saved search data. Represents a set of filter criteria.\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the saved search.\")\n    filters: List[SavedSearchFilter] = Field(\n        default_factory=list, description=\"The filter set for the saved search.\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Log","title":"Log","text":"

    Bases: ObjectBaseModel

    An ORM representation of log data.

    Source code in prefect/client/schemas/objects.py
    class Log(ObjectBaseModel):\n    \"\"\"An ORM representation of log data.\"\"\"\n\n    name: str = Field(default=..., description=\"The logger name.\")\n    level: int = Field(default=..., description=\"The log level.\")\n    message: str = Field(default=..., description=\"The log message.\")\n    timestamp: DateTimeTZ = Field(default=..., description=\"The log timestamp.\")\n    flow_run_id: Optional[UUID] = Field(\n        default=None, description=\"The flow run ID associated with the log.\"\n    )\n    task_run_id: Optional[UUID] = Field(\n        default=None, description=\"The task run ID associated with the log.\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.QueueFilter","title":"QueueFilter","text":"

    Bases: PrefectBaseModel

    Filter criteria definition for a work queue.

    Source code in prefect/client/schemas/objects.py
    class QueueFilter(PrefectBaseModel):\n    \"\"\"Filter criteria definition for a work queue.\"\"\"\n\n    tags: Optional[List[str]] = Field(\n        default=None,\n        description=\"Only include flow runs with these tags in the work queue.\",\n    )\n    deployment_ids: Optional[List[UUID]] = Field(\n        default=None,\n        description=\"Only include flow runs from these deployments in the work queue.\",\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkQueue","title":"WorkQueue","text":"

    Bases: ObjectBaseModel

    An ORM representation of a work queue

    Source code in prefect/client/schemas/objects.py
    class WorkQueue(ObjectBaseModel):\n    \"\"\"An ORM representation of a work queue\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the work queue.\")\n    description: Optional[str] = Field(\n        default=\"\", description=\"An optional description for the work queue.\"\n    )\n    is_paused: bool = Field(\n        default=False, description=\"Whether or not the work queue is paused.\"\n    )\n    concurrency_limit: Optional[NonNegativeInteger] = Field(\n        default=None, description=\"An optional concurrency limit for the work queue.\"\n    )\n    priority: PositiveInteger = Field(\n        default=1,\n        description=(\n            \"The queue's priority. Lower values are higher priority (1 is the highest).\"\n        ),\n    )\n    work_pool_name: Optional[str] = Field(default=None)\n    # Will be required after a future migration\n    work_pool_id: Optional[UUID] = Field(\n        description=\"The work pool with which the queue is associated.\"\n    )\n    filter: Optional[QueueFilter] = Field(\n        default=None,\n        description=\"DEPRECATED: Filter criteria for the work queue.\",\n        deprecated=True,\n    )\n    last_polled: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The last time an agent polled this queue for work.\"\n    )\n    status: Optional[WorkQueueStatus] = Field(\n        default=None, description=\"The queue status.\"\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkQueueHealthPolicy","title":"WorkQueueHealthPolicy","text":"

    Bases: PrefectBaseModel

    Source code in prefect/client/schemas/objects.py
    class WorkQueueHealthPolicy(PrefectBaseModel):\n    maximum_late_runs: Optional[int] = Field(\n        default=0,\n        description=(\n            \"The maximum number of late runs in the work queue before it is deemed\"\n            \" unhealthy. Defaults to `0`.\"\n        ),\n    )\n    maximum_seconds_since_last_polled: Optional[int] = Field(\n        default=60,\n        description=(\n            \"The maximum number of time in seconds elapsed since work queue has been\"\n            \" polled before it is deemed unhealthy. Defaults to `60`.\"\n        ),\n    )\n\n    def evaluate_health_status(\n        self, late_runs_count: int, last_polled: Optional[DateTimeTZ] = None\n    ) -> bool:\n        \"\"\"\n        Given empirical information about the state of the work queue, evaluate its health status.\n\n        Args:\n            late_runs: the count of late runs for the work queue.\n            last_polled: the last time the work queue was polled, if available.\n\n        Returns:\n            bool: whether or not the work queue is healthy.\n        \"\"\"\n        healthy = True\n        if (\n            self.maximum_late_runs is not None\n            and late_runs_count > self.maximum_late_runs\n        ):\n            healthy = False\n\n        if self.maximum_seconds_since_last_polled is not None:\n            if (\n                last_polled is None\n                or pendulum.now(\"UTC\").diff(last_polled).in_seconds()\n                > self.maximum_seconds_since_last_polled\n            ):\n                healthy = False\n\n        return healthy\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkQueueHealthPolicy.evaluate_health_status","title":"evaluate_health_status","text":"

    Given empirical information about the state of the work queue, evaluate its health status.

    Parameters:

    late_runs (required): the count of late runs for the work queue.

    last_polled (Optional[DateTimeTZ], default: None): the last time the work queue was polled, if available.

    Returns:

    bool: whether or not the work queue is healthy.

    Source code in prefect/client/schemas/objects.py
    def evaluate_health_status(\n    self, late_runs_count: int, last_polled: Optional[DateTimeTZ] = None\n) -> bool:\n    \"\"\"\n    Given empirical information about the state of the work queue, evaluate its health status.\n\n    Args:\n        late_runs: the count of late runs for the work queue.\n        last_polled: the last time the work queue was polled, if available.\n\n    Returns:\n        bool: whether or not the work queue is healthy.\n    \"\"\"\n    healthy = True\n    if (\n        self.maximum_late_runs is not None\n        and late_runs_count > self.maximum_late_runs\n    ):\n        healthy = False\n\n    if self.maximum_seconds_since_last_polled is not None:\n        if (\n            last_polled is None\n            or pendulum.now(\"UTC\").diff(last_polled).in_seconds()\n            > self.maximum_seconds_since_last_polled\n        ):\n            healthy = False\n\n    return healthy\n
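
    A short, hedged sketch of the two unhealthy conditions (late runs above the limit, or a stale poll), using pendulum, which Prefect already depends on:

    import pendulum
    from prefect.client.schemas.objects import WorkQueueHealthPolicy

    policy = WorkQueueHealthPolicy(maximum_late_runs=0, maximum_seconds_since_last_polled=60)

    # Healthy: no late runs and polled 10 seconds ago.
    print(policy.evaluate_health_status(
        late_runs_count=0, last_polled=pendulum.now("UTC").subtract(seconds=10)
    ))  # True

    # Unhealthy: the last poll was five minutes ago.
    print(policy.evaluate_health_status(
        late_runs_count=0, last_polled=pendulum.now("UTC").subtract(minutes=5)
    ))  # False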
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRunNotificationPolicy","title":"FlowRunNotificationPolicy","text":"

    Bases: ObjectBaseModel

    An ORM representation of a flow run notification.

    Source code in prefect/client/schemas/objects.py
    class FlowRunNotificationPolicy(ObjectBaseModel):\n    \"\"\"An ORM representation of a flow run notification.\"\"\"\n\n    is_active: bool = Field(\n        default=True, description=\"Whether the policy is currently active\"\n    )\n    state_names: List[str] = Field(\n        default=..., description=\"The flow run states that trigger notifications\"\n    )\n    tags: List[str] = Field(\n        default=...,\n        description=\"The flow run tags that trigger notifications (set [] to disable)\",\n    )\n    block_document_id: UUID = Field(\n        default=..., description=\"The block document ID used for sending notifications\"\n    )\n    message_template: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A templatable notification message. Use {braces} to add variables.\"\n            \" Valid variables include:\"\n            f\" {listrepr(sorted(FLOW_RUN_NOTIFICATION_TEMPLATE_KWARGS), sep=', ')}\"\n        ),\n        examples=[\n            \"Flow run {flow_run_name} with id {flow_run_id} entered state\"\n            \" {flow_run_state_name}.\"\n        ],\n    )\n\n    @validator(\"message_template\")\n    def validate_message_template_variables(cls, v):\n        return validate_message_template_variables(v)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Agent","title":"Agent","text":"

    Bases: ObjectBaseModel

    An ORM representation of an agent

    Source code in prefect/client/schemas/objects.py
    class Agent(ObjectBaseModel):\n    \"\"\"An ORM representation of an agent\"\"\"\n\n    name: str = Field(\n        default_factory=lambda: generate_slug(2),\n        description=(\n            \"The name of the agent. If a name is not provided, it will be\"\n            \" auto-generated.\"\n        ),\n    )\n    work_queue_id: UUID = Field(\n        default=..., description=\"The work queue with which the agent is associated.\"\n    )\n    last_activity_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The last time this agent polled for work.\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.WorkPool","title":"WorkPool","text":"

    Bases: ObjectBaseModel

    An ORM representation of a work pool

    Source code in prefect/client/schemas/objects.py
    class WorkPool(ObjectBaseModel):\n    \"\"\"An ORM representation of a work pool\"\"\"\n\n    name: str = Field(\n        description=\"The name of the work pool.\",\n    )\n    description: Optional[str] = Field(\n        default=None, description=\"A description of the work pool.\"\n    )\n    type: str = Field(description=\"The work pool type.\")\n    base_job_template: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The work pool's base job template.\"\n    )\n    is_paused: bool = Field(\n        default=False,\n        description=\"Pausing the work pool stops the delivery of all work.\",\n    )\n    concurrency_limit: Optional[NonNegativeInteger] = Field(\n        default=None, description=\"A concurrency limit for the work pool.\"\n    )\n    status: Optional[WorkPoolStatus] = Field(\n        default=None, description=\"The current status of the work pool.\"\n    )\n\n    # this required field has a default of None so that the custom validator\n    # below will be called and produce a more helpful error message\n    default_queue_id: UUID = Field(\n        None, description=\"The id of the pool's default queue.\"\n    )\n\n    @property\n    def is_push_pool(self) -> bool:\n        return self.type.endswith(\":push\")\n\n    @property\n    def is_managed_pool(self) -> bool:\n        return self.type.endswith(\":managed\")\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n\n    @validator(\"default_queue_id\", always=True)\n    def helpful_error_for_missing_default_queue_id(cls, v):\n        return validate_default_queue_id_not_none(v)\n
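
    A hedged sketch of the convenience properties (the pool type string is illustrative; default_queue_id is supplied because the validator above rejects a missing value):

    from uuid import uuid4
    from prefect.client.schemas.objects import WorkPool

    pool = WorkPool(name="my-pool", type="cloud-run:push", default_queue_id=uuid4())
    print(pool.is_push_pool)     # True
    print(pool.is_managed_pool)  # False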
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.Worker","title":"Worker","text":"

    Bases: ObjectBaseModel

    An ORM representation of a worker

    Source code in prefect/client/schemas/objects.py
    class Worker(ObjectBaseModel):\n    \"\"\"An ORM representation of a worker\"\"\"\n\n    name: str = Field(description=\"The name of the worker.\")\n    work_pool_id: UUID = Field(\n        description=\"The work pool with which the queue is associated.\"\n    )\n    last_heartbeat_time: datetime.datetime = Field(\n        None, description=\"The last time the worker process sent a heartbeat.\"\n    )\n    heartbeat_interval_seconds: Optional[int] = Field(\n        default=None,\n        description=(\n            \"The number of seconds to expect between heartbeats sent by the worker.\"\n        ),\n    )\n    status: WorkerStatus = Field(\n        WorkerStatus.OFFLINE,\n        description=\"Current status of the worker.\",\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRunInput","title":"FlowRunInput","text":"

    Bases: ObjectBaseModel

    Source code in prefect/client/schemas/objects.py
    class FlowRunInput(ObjectBaseModel):\n    flow_run_id: UUID = Field(description=\"The flow run ID associated with the input.\")\n    key: str = Field(description=\"The key of the input.\")\n    value: str = Field(description=\"The value of the input.\")\n    sender: Optional[str] = Field(description=\"The sender of the input.\")\n\n    @property\n    def decoded_value(self) -> Any:\n        \"\"\"\n        Decode the value of the input.\n\n        Returns:\n            Any: the decoded value\n        \"\"\"\n        return orjson.loads(self.value)\n\n    @validator(\"key\", check_fields=False)\n    def validate_name_characters(cls, v):\n        raise_on_name_alphanumeric_dashes_only(v)\n        return v\n
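
    A minimal sketch of decoded_value (the key and value below are made up): the stored value is a JSON string, and the property decodes it with orjson.

    from uuid import uuid4
    from prefect.client.schemas.objects import FlowRunInput

    fri = FlowRunInput(
        flow_run_id=uuid4(),
        key="approval",
        value='{"approved": true}',
        sender=None,
    )
    print(fri.decoded_value)  # {'approved': True}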
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.FlowRunInput.decoded_value","title":"decoded_value: Any property","text":"

    Decode the value of the input.

    Returns:

    Any: the decoded value.

    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.objects.GlobalConcurrencyLimit","title":"GlobalConcurrencyLimit","text":"

    Bases: ObjectBaseModel

    An ORM representation of a global concurrency limit

    Source code in prefect/client/schemas/objects.py
    class GlobalConcurrencyLimit(ObjectBaseModel):\n    \"\"\"An ORM representation of a global concurrency limit\"\"\"\n\n    name: str = Field(description=\"The name of the global concurrency limit.\")\n    limit: int = Field(\n        description=(\n            \"The maximum number of slots that can be occupied on this concurrency\"\n            \" limit.\"\n        )\n    )\n    active: Optional[bool] = Field(\n        default=True,\n        description=\"Whether or not the concurrency limit is in an active state.\",\n    )\n    active_slots: Optional[int] = Field(\n        default=0,\n        description=\"Number of tasks currently using a concurrency slot.\",\n    )\n    slot_decay_per_second: Optional[float] = Field(\n        default=0.0,\n        description=(\n            \"Controls the rate at which slots are released when the concurrency limit\"\n            \" is used as a rate limit.\"\n        ),\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_4","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses","title":"prefect.client.schemas.responses","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.SetStateStatus","title":"SetStateStatus","text":"

    Bases: AutoEnum

    Enumerates return statuses for setting run states.

    Source code in prefect/client/schemas/responses.py
    class SetStateStatus(AutoEnum):\n    \"\"\"Enumerates return statuses for setting run states.\"\"\"\n\n    ACCEPT = AutoEnum.auto()\n    REJECT = AutoEnum.auto()\n    ABORT = AutoEnum.auto()\n    WAIT = AutoEnum.auto()\n
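
    Since AutoEnum members use their own names as their values, the statuses serialize as plain strings; a small sketch:

    from prefect.client.schemas.responses import SetStateStatus

    print(SetStateStatus.ACCEPT.value)        # "ACCEPT"
    print([s.value for s in SetStateStatus])  # ["ACCEPT", "REJECT", "ABORT", "WAIT"]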
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.StateAcceptDetails","title":"StateAcceptDetails","text":"

    Bases: PrefectBaseModel

    Details associated with an ACCEPT state transition.

    Source code in prefect/client/schemas/responses.py
    class StateAcceptDetails(PrefectBaseModel):\n    \"\"\"Details associated with an ACCEPT state transition.\"\"\"\n\n    type: Literal[\"accept_details\"] = Field(\n        default=\"accept_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.StateRejectDetails","title":"StateRejectDetails","text":"

    Bases: PrefectBaseModel

    Details associated with a REJECT state transition.

    Source code in prefect/client/schemas/responses.py
    class StateRejectDetails(PrefectBaseModel):\n    \"\"\"Details associated with a REJECT state transition.\"\"\"\n\n    type: Literal[\"reject_details\"] = Field(\n        default=\"reject_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n    reason: Optional[str] = Field(\n        default=None, description=\"The reason why the state transition was rejected.\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.StateAbortDetails","title":"StateAbortDetails","text":"

    Bases: PrefectBaseModel

    Details associated with an ABORT state transition.

    Source code in prefect/client/schemas/responses.py
    class StateAbortDetails(PrefectBaseModel):\n    \"\"\"Details associated with an ABORT state transition.\"\"\"\n\n    type: Literal[\"abort_details\"] = Field(\n        default=\"abort_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n    reason: Optional[str] = Field(\n        default=None, description=\"The reason why the state transition was aborted.\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.StateWaitDetails","title":"StateWaitDetails","text":"

    Bases: PrefectBaseModel

    Details associated with a WAIT state transition.

    Source code in prefect/client/schemas/responses.py
    class StateWaitDetails(PrefectBaseModel):\n    \"\"\"Details associated with a WAIT state transition.\"\"\"\n\n    type: Literal[\"wait_details\"] = Field(\n        default=\"wait_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n    delay_seconds: int = Field(\n        default=...,\n        description=(\n            \"The length of time in seconds the client should wait before transitioning\"\n            \" states.\"\n        ),\n    )\n    reason: Optional[str] = Field(\n        default=None, description=\"The reason why the state transition should wait.\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.HistoryResponseState","title":"HistoryResponseState","text":"

    Bases: PrefectBaseModel

    Represents a single state's history over an interval.

    Source code in prefect/client/schemas/responses.py
    class HistoryResponseState(PrefectBaseModel):\n    \"\"\"Represents a single state's history over an interval.\"\"\"\n\n    state_type: objects.StateType = Field(default=..., description=\"The state type.\")\n    state_name: str = Field(default=..., description=\"The state name.\")\n    count_runs: int = Field(\n        default=...,\n        description=\"The number of runs in the specified state during the interval.\",\n    )\n    sum_estimated_run_time: datetime.timedelta = Field(\n        default=...,\n        description=\"The total estimated run time of all runs during the interval.\",\n    )\n    sum_estimated_lateness: datetime.timedelta = Field(\n        default=...,\n        description=(\n            \"The sum of differences between actual and expected start time during the\"\n            \" interval.\"\n        ),\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.HistoryResponse","title":"HistoryResponse","text":"

    Bases: PrefectBaseModel

    Represents a history of aggregated states over an interval.

    Source code in prefect/client/schemas/responses.py
    class HistoryResponse(PrefectBaseModel):\n    \"\"\"Represents a history of aggregation states over an interval\"\"\"\n\n    interval_start: DateTimeTZ = Field(\n        default=..., description=\"The start date of the interval.\"\n    )\n    interval_end: DateTimeTZ = Field(\n        default=..., description=\"The end date of the interval.\"\n    )\n    states: List[HistoryResponseState] = Field(\n        default=..., description=\"A list of state histories during the interval.\"\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.OrchestrationResult","title":"OrchestrationResult","text":"

    Bases: PrefectBaseModel

    A container for the output of state orchestration.

    Source code in prefect/client/schemas/responses.py
    class OrchestrationResult(PrefectBaseModel):\n    \"\"\"\n    A container for the output of state orchestration.\n    \"\"\"\n\n    state: Optional[objects.State]\n    status: SetStateStatus\n    details: StateResponseDetails\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.FlowRunResponse","title":"FlowRunResponse","text":"

    Bases: ObjectBaseModel

    Source code in prefect/client/schemas/responses.py
    class FlowRunResponse(ObjectBaseModel):\n    name: str = Field(\n        default_factory=lambda: generate_slug(2),\n        description=(\n            \"The name of the flow run. Defaults to a random slug if not specified.\"\n        ),\n        examples=[\"my-flow-run\"],\n    )\n    flow_id: UUID = Field(default=..., description=\"The id of the flow being run.\")\n    state_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the flow run's current state.\"\n    )\n    deployment_id: Optional[UUID] = Field(\n        default=None,\n        description=(\n            \"The id of the deployment associated with this flow run, if available.\"\n        ),\n    )\n    deployment_version: Optional[str] = Field(\n        default=None,\n        description=\"The version of the deployment associated with this flow run.\",\n        examples=[\"1.0\"],\n    )\n    work_queue_name: Optional[str] = Field(\n        default=None, description=\"The work queue that handled this flow run.\"\n    )\n    flow_version: Optional[str] = Field(\n        default=None,\n        description=\"The version of the flow executed in this flow run.\",\n        examples=[\"1.0\"],\n    )\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict, description=\"Parameters for the flow run.\"\n    )\n    idempotency_key: Optional[str] = Field(\n        default=None,\n        description=(\n            \"An optional idempotency key for the flow run. Used to ensure the same flow\"\n            \" run is not created multiple times.\"\n        ),\n    )\n    context: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Additional context for the flow run.\",\n        examples=[{\"my_var\": \"my_val\"}],\n    )\n    empirical_policy: objects.FlowRunPolicy = Field(\n        default_factory=objects.FlowRunPolicy,\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags on the flow run\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    parent_task_run_id: Optional[UUID] = Field(\n        default=None,\n        description=(\n            \"If the flow run is a subflow, the id of the 'dummy' task in the parent\"\n            \" flow used to track subflow state.\"\n        ),\n    )\n    run_count: int = Field(\n        default=0, description=\"The number of times the flow run was executed.\"\n    )\n    expected_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The flow run's expected start time.\",\n    )\n    next_scheduled_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The next time the flow run is scheduled to start.\",\n    )\n    start_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual start time.\"\n    )\n    end_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual end time.\"\n    )\n    total_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=(\n            \"Total run time. 
If the flow run was executed multiple times, the time of\"\n            \" each run will be summed.\"\n        ),\n    )\n    estimated_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"A real-time estimate of the total run time.\",\n    )\n    estimated_start_time_delta: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"The difference between actual and expected start time.\",\n    )\n    auto_scheduled: bool = Field(\n        default=False,\n        description=\"Whether or not the flow run was automatically scheduled.\",\n    )\n    infrastructure_document_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The block document defining infrastructure to use this flow run.\",\n    )\n    infrastructure_pid: Optional[str] = Field(\n        default=None,\n        description=\"The id of the flow run as returned by an infrastructure block.\",\n    )\n    created_by: Optional[CreatedBy] = Field(\n        default=None,\n        description=\"Optional information about the creator of this flow run.\",\n    )\n    work_queue_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the run's work pool queue.\"\n    )\n\n    work_pool_id: Optional[UUID] = Field(\n        description=\"The work pool with which the queue is associated.\"\n    )\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the flow run's work pool.\",\n        examples=[\"my-work-pool\"],\n    )\n    state: Optional[objects.State] = Field(\n        default=None,\n        description=\"The state of the flow run.\",\n        examples=[objects.State(type=objects.StateType.COMPLETED)],\n    )\n    job_variables: Optional[dict] = Field(\n        default=None, description=\"Job variables for the flow run.\"\n    )\n\n    # These are server-side optimizations and should not be present on client models\n    # TODO: Deprecate these fields\n\n    state_type: Optional[objects.StateType] = Field(\n        default=None, description=\"The type of the current flow run state.\"\n    )\n    state_name: Optional[str] = Field(\n        default=None, description=\"The name of the current flow run state.\"\n    )\n\n    def __eq__(self, other: Any) -> bool:\n        \"\"\"\n        Check for \"equality\" to another flow run schema\n\n        Estimates times are rolling and will always change with repeated queries for\n        a flow run so we ignore them during equality checks.\n        \"\"\"\n        if isinstance(other, objects.FlowRun):\n            exclude_fields = {\"estimated_run_time\", \"estimated_start_time_delta\"}\n            return self.dict(exclude=exclude_fields) == other.dict(\n                exclude=exclude_fields\n            )\n        return super().__eq__(other)\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.responses.GlobalConcurrencyLimitResponse","title":"GlobalConcurrencyLimitResponse","text":"

    Bases: ObjectBaseModel

    A response object for global concurrency limits.

    Source code in prefect/client/schemas/responses.py
    class GlobalConcurrencyLimitResponse(ObjectBaseModel):\n    \"\"\"\n    A response object for global concurrency limits.\n    \"\"\"\n\n    active: bool = Field(\n        default=True, description=\"Whether the global concurrency limit is active.\"\n    )\n    name: str = Field(\n        default=..., description=\"The name of the global concurrency limit.\"\n    )\n    limit: int = Field(default=..., description=\"The concurrency limit.\")\n    active_slots: int = Field(default=..., description=\"The number of active slots.\")\n    slot_decay_per_second: float = Field(\n        default=2.0,\n        description=\"The decay rate for active slots when used as a rate limit.\",\n    )\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_5","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules","title":"prefect.client.schemas.schedules","text":"

    Schedule schemas

    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules.IntervalSchedule","title":"IntervalSchedule","text":"

    Bases: PrefectBaseModel

    A schedule formed by adding interval increments to an anchor_date. If no anchor_date is supplied, the current UTC time is used. If a timezone-naive datetime is provided for anchor_date, it is assumed to be in the schedule's timezone (or UTC). Even if supplied with an IANA timezone, anchor dates are always stored as UTC offsets, so a timezone can be provided to determine localization behaviors like DST boundary handling. If none is provided it will be inferred from the anchor date.

    NOTE: If the IntervalSchedule anchor_date or timezone is provided in a DST-observing timezone, then the schedule will adjust itself appropriately. Intervals greater than 24 hours will follow DST conventions, while intervals of less than 24 hours will follow UTC intervals. For example, an hourly schedule will fire every UTC hour, even across DST boundaries. When clocks are set back, this will result in two runs that appear to both be scheduled for 1am local time, even though they are an hour apart in UTC time. For longer intervals, like a daily schedule, the interval schedule will adjust for DST boundaries so that the clock-hour remains constant. This means that a daily schedule that always fires at 9am will observe DST and continue to fire at 9am in the local time zone.

    Parameters:

    Name Type Description Default interval timedelta

    an interval to schedule on

    required anchor_date DateTimeTZ

    an anchor date to schedule increments against; if not provided, the current timestamp will be used

    required timezone str

    a valid timezone string

    required Source code in prefect/client/schemas/schedules.py
    class IntervalSchedule(PrefectBaseModel):\n    \"\"\"\n    A schedule formed by adding `interval` increments to an `anchor_date`. If no\n    `anchor_date` is supplied, the current UTC time is used.  If a\n    timezone-naive datetime is provided for `anchor_date`, it is assumed to be\n    in the schedule's timezone (or UTC). Even if supplied with an IANA timezone,\n    anchor dates are always stored as UTC offsets, so a `timezone` can be\n    provided to determine localization behaviors like DST boundary handling. If\n    none is provided it will be inferred from the anchor date.\n\n    NOTE: If the `IntervalSchedule` `anchor_date` or `timezone` is provided in a\n    DST-observing timezone, then the schedule will adjust itself appropriately.\n    Intervals greater than 24 hours will follow DST conventions, while intervals\n    of less than 24 hours will follow UTC intervals. For example, an hourly\n    schedule will fire every UTC hour, even across DST boundaries. When clocks\n    are set back, this will result in two runs that *appear* to both be\n    scheduled for 1am local time, even though they are an hour apart in UTC\n    time. For longer intervals, like a daily schedule, the interval schedule\n    will adjust for DST boundaries so that the clock-hour remains constant. This\n    means that a daily schedule that always fires at 9am will observe DST and\n    continue to fire at 9am in the local time zone.\n\n    Args:\n        interval (datetime.timedelta): an interval to schedule on\n        anchor_date (DateTimeTZ, optional): an anchor date to schedule increments against;\n            if not provided, the current timestamp will be used\n        timezone (str, optional): a valid timezone string\n    \"\"\"\n\n    class Config:\n        extra = \"forbid\"\n        exclude_none = True\n\n    interval: PositiveDuration\n    anchor_date: DateTimeTZ = None\n    timezone: Optional[str] = Field(default=None, examples=[\"America/New_York\"])\n\n    @validator(\"anchor_date\", always=True)\n    def validate_anchor_date(cls, v):\n        return default_anchor_date(v)\n\n    @validator(\"timezone\", always=True)\n    def validate_default_timezone(cls, v, values):\n        return default_timezone(v, values=values)\n
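
    A minimal sketch (not part of the source listing above): an interval schedule that fires every 10 minutes and is localized to a DST-observing timezone, so boundary handling follows the behavior described above.

    import datetime

    from prefect.client.schemas.schedules import IntervalSchedule

    # Fires every 10 minutes; the timezone governs DST boundary behavior.
    every_ten_minutes = IntervalSchedule(
        interval=datetime.timedelta(minutes=10),
        timezone="America/New_York",
    )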
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules.CronSchedule","title":"CronSchedule","text":"

    Bases: PrefectBaseModel

    Cron schedule

    NOTE: If the timezone is a DST-observing one, then the schedule will adjust itself appropriately. Cron's rules for DST are based on schedule times, not intervals. This means that an hourly cron schedule will fire on every new schedule hour, not every elapsed hour; for example, when clocks are set back this will result in a two-hour pause as the schedule will fire the first time 1am is reached and the first time 2am is reached, 120 minutes later. Longer schedules, such as one that fires at 9am every morning, will automatically adjust for DST.

    Parameters:

    Name Type Description Default cron str

    a valid cron string

    required timezone str

    a valid timezone string in IANA tzdata format (for example, America/New_York).

    required day_or bool

    Control how croniter handles day and day_of_week entries. Defaults to True, matching cron, which connects those values using OR. If the switch is set to False, the values are connected using AND. This behaves like fcron and enables you to, for example, define a job that executes on the second Friday of each month by setting both the day of month and the weekday.

    required Source code in prefect/client/schemas/schedules.py
    class CronSchedule(PrefectBaseModel):\n    \"\"\"\n    Cron schedule\n\n    NOTE: If the timezone is a DST-observing one, then the schedule will adjust\n    itself appropriately. Cron's rules for DST are based on schedule times, not\n    intervals. This means that an hourly cron schedule will fire on every new\n    schedule hour, not every elapsed hour; for example, when clocks are set back\n    this will result in a two-hour pause as the schedule will fire *the first\n    time* 1am is reached and *the first time* 2am is reached, 120 minutes later.\n    Longer schedules, such as one that fires at 9am every morning, will\n    automatically adjust for DST.\n\n    Args:\n        cron (str): a valid cron string\n        timezone (str): a valid timezone string in IANA tzdata format (for example,\n            America/New_York).\n        day_or (bool, optional): Control how croniter handles `day` and `day_of_week`\n            entries. Defaults to True, matching cron which connects those values using\n            OR. If the switch is set to False, the values are connected using AND. This\n            behaves like fcron and enables you to e.g. define a job that executes each\n            2nd friday of a month by setting the days of month and the weekday.\n\n    \"\"\"\n\n    class Config:\n        extra = \"forbid\"\n\n    cron: str = Field(default=..., examples=[\"0 0 * * *\"])\n    timezone: Optional[str] = Field(default=None, examples=[\"America/New_York\"])\n    day_or: bool = Field(\n        default=True,\n        description=(\n            \"Control croniter behavior for handling day and day_of_week entries.\"\n        ),\n    )\n\n    @validator(\"timezone\")\n    def valid_timezone(cls, v):\n        return default_timezone(v)\n\n    @validator(\"cron\")\n    def valid_cron_string(cls, v):\n        return validate_cron_string(v)\n
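
    A minimal sketch (not part of the source listing above): a cron schedule that fires at 9am local time on weekdays and adjusts across DST boundaries as described above.

    from prefect.client.schemas.schedules import CronSchedule

    # 09:00 every Monday through Friday in the given IANA timezone.
    weekday_mornings = CronSchedule(cron="0 9 * * 1-5", timezone="America/New_York")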
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules.RRuleSchedule","title":"RRuleSchedule","text":"

    Bases: PrefectBaseModel

    RRule schedule, based on the iCalendar standard (RFC 5545) as implemented in dateutil.rrule.

    RRules are appropriate for any kind of calendar-date manipulation, including irregular intervals, repetition, exclusions, week day or day-of-month adjustments, and more.

    Note that as a calendar-oriented standard, RRuleSchedules are sensitive to the initial timezone provided. A 9am daily schedule with a daylight saving time-aware start date will maintain a local 9am time through DST boundaries; a 9am daily schedule with a UTC start date will maintain a 9am UTC time.

    Parameters:

    Name Type Description Default rrule str

    a valid RRule string

    required timezone str

    a valid timezone string

    required Source code in prefect/client/schemas/schedules.py
    class RRuleSchedule(PrefectBaseModel):\n    \"\"\"\n    RRule schedule, based on the iCalendar standard\n    ([RFC 5545](https://datatracker.ietf.org/doc/html/rfc5545)) as\n    implemented in `dateutils.rrule`.\n\n    RRules are appropriate for any kind of calendar-date manipulation, including\n    irregular intervals, repetition, exclusions, week day or day-of-month\n    adjustments, and more.\n\n    Note that as a calendar-oriented standard, `RRuleSchedules` are sensitive to\n    to the initial timezone provided. A 9am daily schedule with a daylight saving\n    time-aware start date will maintain a local 9am time through DST boundaries;\n    a 9am daily schedule with a UTC start date will maintain a 9am UTC time.\n\n    Args:\n        rrule (str): a valid RRule string\n        timezone (str, optional): a valid timezone string\n    \"\"\"\n\n    class Config:\n        extra = \"forbid\"\n\n    rrule: str\n    timezone: Optional[str] = Field(default=None, examples=[\"America/New_York\"])\n\n    @validator(\"rrule\")\n    def validate_rrule_str(cls, v):\n        return validate_rrule_string(v)\n\n    @classmethod\n    def from_rrule(cls, rrule: dateutil.rrule.rrule):\n        if isinstance(rrule, dateutil.rrule.rrule):\n            if rrule._dtstart.tzinfo is not None:\n                timezone = rrule._dtstart.tzinfo.name\n            else:\n                timezone = \"UTC\"\n            return RRuleSchedule(rrule=str(rrule), timezone=timezone)\n        elif isinstance(rrule, dateutil.rrule.rruleset):\n            dtstarts = [rr._dtstart for rr in rrule._rrule if rr._dtstart is not None]\n            unique_dstarts = set(pendulum.instance(d).in_tz(\"UTC\") for d in dtstarts)\n            unique_timezones = set(d.tzinfo for d in dtstarts if d.tzinfo is not None)\n\n            if len(unique_timezones) > 1:\n                raise ValueError(\n                    f\"rruleset has too many dtstart timezones: {unique_timezones}\"\n                )\n\n            if len(unique_dstarts) > 1:\n                raise ValueError(f\"rruleset has too many dtstarts: {unique_dstarts}\")\n\n            if unique_dstarts and unique_timezones:\n                timezone = dtstarts[0].tzinfo.name\n            else:\n                timezone = \"UTC\"\n\n            rruleset_string = \"\"\n            if rrule._rrule:\n                rruleset_string += \"\\n\".join(str(r) for r in rrule._rrule)\n            if rrule._exrule:\n                rruleset_string += \"\\n\" if rruleset_string else \"\"\n                rruleset_string += \"\\n\".join(str(r) for r in rrule._exrule).replace(\n                    \"RRULE\", \"EXRULE\"\n                )\n            if rrule._rdate:\n                rruleset_string += \"\\n\" if rruleset_string else \"\"\n                rruleset_string += \"RDATE:\" + \",\".join(\n                    rd.strftime(\"%Y%m%dT%H%M%SZ\") for rd in rrule._rdate\n                )\n            if rrule._exdate:\n                rruleset_string += \"\\n\" if rruleset_string else \"\"\n                rruleset_string += \"EXDATE:\" + \",\".join(\n                    exd.strftime(\"%Y%m%dT%H%M%SZ\") for exd in rrule._exdate\n                )\n            return RRuleSchedule(rrule=rruleset_string, timezone=timezone)\n        else:\n            raise ValueError(f\"Invalid RRule object: {rrule}\")\n\n    def to_rrule(self) -> dateutil.rrule.rrule:\n        \"\"\"\n        Since rrule doesn't properly serialize/deserialize timezones, we localize dates\n        here\n        \"\"\"\n        
rrule = dateutil.rrule.rrulestr(\n            self.rrule,\n            dtstart=DEFAULT_ANCHOR_DATE,\n            cache=True,\n        )\n        timezone = dateutil.tz.gettz(self.timezone)\n        if isinstance(rrule, dateutil.rrule.rrule):\n            kwargs = dict(dtstart=rrule._dtstart.replace(tzinfo=timezone))\n            if rrule._until:\n                kwargs.update(\n                    until=rrule._until.replace(tzinfo=timezone),\n                )\n            return rrule.replace(**kwargs)\n        elif isinstance(rrule, dateutil.rrule.rruleset):\n            # update rrules\n            localized_rrules = []\n            for rr in rrule._rrule:\n                kwargs = dict(dtstart=rr._dtstart.replace(tzinfo=timezone))\n                if rr._until:\n                    kwargs.update(\n                        until=rr._until.replace(tzinfo=timezone),\n                    )\n                localized_rrules.append(rr.replace(**kwargs))\n            rrule._rrule = localized_rrules\n\n            # update exrules\n            localized_exrules = []\n            for exr in rrule._exrule:\n                kwargs = dict(dtstart=exr._dtstart.replace(tzinfo=timezone))\n                if exr._until:\n                    kwargs.update(\n                        until=exr._until.replace(tzinfo=timezone),\n                    )\n                localized_exrules.append(exr.replace(**kwargs))\n            rrule._exrule = localized_exrules\n\n            # update rdates\n            localized_rdates = []\n            for rd in rrule._rdate:\n                localized_rdates.append(rd.replace(tzinfo=timezone))\n            rrule._rdate = localized_rdates\n\n            # update exdates\n            localized_exdates = []\n            for exd in rrule._exdate:\n                localized_exdates.append(exd.replace(tzinfo=timezone))\n            rrule._exdate = localized_exdates\n\n            return rrule\n\n    @validator(\"timezone\", always=True)\n    def valid_timezone(cls, v):\n        return validate_rrule_timezone(v)\n
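
    A minimal sketch (not part of the source listing above) of a calendar-oriented rule that a plain interval or cron expression cannot express: the first Monday of each month at 9am local time. The RRule string is an assumed example.

    from prefect.client.schemas.schedules import RRuleSchedule

    # First Monday of every month at 09:00 in the given timezone.
    monthly_kickoff = RRuleSchedule(
        rrule="FREQ=MONTHLY;BYDAY=MO;BYSETPOS=1;BYHOUR=9;BYMINUTE=0",
        timezone="America/New_York",
    )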
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules.RRuleSchedule.to_rrule","title":"to_rrule","text":"

    Since rrule doesn't properly serialize/deserialize timezones, we localize dates here

    Source code in prefect/client/schemas/schedules.py
    def to_rrule(self) -> dateutil.rrule.rrule:\n    \"\"\"\n    Since rrule doesn't properly serialize/deserialize timezones, we localize dates\n    here\n    \"\"\"\n    rrule = dateutil.rrule.rrulestr(\n        self.rrule,\n        dtstart=DEFAULT_ANCHOR_DATE,\n        cache=True,\n    )\n    timezone = dateutil.tz.gettz(self.timezone)\n    if isinstance(rrule, dateutil.rrule.rrule):\n        kwargs = dict(dtstart=rrule._dtstart.replace(tzinfo=timezone))\n        if rrule._until:\n            kwargs.update(\n                until=rrule._until.replace(tzinfo=timezone),\n            )\n        return rrule.replace(**kwargs)\n    elif isinstance(rrule, dateutil.rrule.rruleset):\n        # update rrules\n        localized_rrules = []\n        for rr in rrule._rrule:\n            kwargs = dict(dtstart=rr._dtstart.replace(tzinfo=timezone))\n            if rr._until:\n                kwargs.update(\n                    until=rr._until.replace(tzinfo=timezone),\n                )\n            localized_rrules.append(rr.replace(**kwargs))\n        rrule._rrule = localized_rrules\n\n        # update exrules\n        localized_exrules = []\n        for exr in rrule._exrule:\n            kwargs = dict(dtstart=exr._dtstart.replace(tzinfo=timezone))\n            if exr._until:\n                kwargs.update(\n                    until=exr._until.replace(tzinfo=timezone),\n                )\n            localized_exrules.append(exr.replace(**kwargs))\n        rrule._exrule = localized_exrules\n\n        # update rdates\n        localized_rdates = []\n        for rd in rrule._rdate:\n            localized_rdates.append(rd.replace(tzinfo=timezone))\n        rrule._rdate = localized_rdates\n\n        # update exdates\n        localized_exdates = []\n        for exd in rrule._exdate:\n            localized_exdates.append(exd.replace(tzinfo=timezone))\n        rrule._exdate = localized_exdates\n\n        return rrule\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.schedules.construct_schedule","title":"construct_schedule","text":"

    Construct a schedule from the provided arguments.

    Parameters:

    Name Type Description Default interval Optional[Union[int, float, timedelta]]

    An interval on which to schedule runs. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.

    None anchor_date Optional[Union[datetime, str]]

    The start date for an interval schedule.

    None cron Optional[str]

    A cron schedule for runs.

    None rrule Optional[str]

    An rrule schedule of when to execute runs of this flow.

    None timezone Optional[str]

    A timezone to use for the schedule. Defaults to UTC.

    None Source code in prefect/client/schemas/schedules.py
    def construct_schedule(\n    interval: Optional[Union[int, float, datetime.timedelta]] = None,\n    anchor_date: Optional[Union[datetime.datetime, str]] = None,\n    cron: Optional[str] = None,\n    rrule: Optional[str] = None,\n    timezone: Optional[str] = None,\n) -> SCHEDULE_TYPES:\n    \"\"\"\n    Construct a schedule from the provided arguments.\n\n    Args:\n        interval: An interval on which to schedule runs. Accepts either a number\n            or a timedelta object. If a number is given, it will be interpreted as seconds.\n        anchor_date: The start date for an interval schedule.\n        cron: A cron schedule for runs.\n        rrule: An rrule schedule of when to execute runs of this flow.\n        timezone: A timezone to use for the schedule. Defaults to UTC.\n    \"\"\"\n    num_schedules = sum(1 for entry in (interval, cron, rrule) if entry is not None)\n    if num_schedules > 1:\n        raise ValueError(\"Only one of interval, cron, or rrule can be provided.\")\n\n    if anchor_date and not interval:\n        raise ValueError(\n            \"An anchor date can only be provided with an interval schedule\"\n        )\n\n    if timezone and not (interval or cron or rrule):\n        raise ValueError(\n            \"A timezone can only be provided with interval, cron, or rrule\"\n        )\n\n    schedule = None\n    if interval:\n        if isinstance(interval, (int, float)):\n            interval = datetime.timedelta(seconds=interval)\n        schedule = IntervalSchedule(\n            interval=interval, anchor_date=anchor_date, timezone=timezone\n        )\n    elif cron:\n        schedule = CronSchedule(cron=cron, timezone=timezone)\n    elif rrule:\n        schedule = RRuleSchedule(rrule=rrule, timezone=timezone)\n\n    if schedule is None:\n        raise ValueError(\"Either interval, cron, or rrule must be provided\")\n\n    return schedule\n
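
    A minimal sketch (not part of the source listing above) showing that exactly one of interval, cron, or rrule may be supplied, with numeric intervals interpreted as seconds.

    from prefect.client.schemas.schedules import construct_schedule

    hourly = construct_schedule(interval=3600)  # 3600 seconds -> an hourly IntervalSchedule
    nightly = construct_schedule(cron="0 0 * * *", timezone="UTC")  # a CronSchedule
    # Passing more than one of interval, cron, or rrule raises ValueError.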
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#_6","title":"schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting","title":"prefect.client.schemas.sorting","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.FlowRunSort","title":"FlowRunSort","text":"

    Bases: AutoEnum

    Defines flow run sorting options.

    Source code in prefect/client/schemas/sorting.py
    class FlowRunSort(AutoEnum):\n    \"\"\"Defines flow run sorting options.\"\"\"\n\n    ID_DESC = AutoEnum.auto()\n    START_TIME_ASC = AutoEnum.auto()\n    START_TIME_DESC = AutoEnum.auto()\n    EXPECTED_START_TIME_ASC = AutoEnum.auto()\n    EXPECTED_START_TIME_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n    NEXT_SCHEDULED_START_TIME_ASC = AutoEnum.auto()\n    END_TIME_DESC = AutoEnum.auto()\n
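
    A minimal sketch (an assumption, not part of the source listing above): passing a sort option to the orchestration client, whose read_flow_runs method accepts a sort argument in recent releases.

    from prefect.client.orchestration import get_client
    from prefect.client.schemas.sorting import FlowRunSort

    async def most_recent_runs():
        async with get_client() as client:
            # Most recently started runs first.
            return await client.read_flow_runs(
                sort=FlowRunSort.START_TIME_DESC, limit=10
            )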
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.TaskRunSort","title":"TaskRunSort","text":"

    Bases: AutoEnum

    Defines task run sorting options.

    Source code in prefect/client/schemas/sorting.py
    class TaskRunSort(AutoEnum):\n    \"\"\"Defines task run sorting options.\"\"\"\n\n    ID_DESC = AutoEnum.auto()\n    EXPECTED_START_TIME_ASC = AutoEnum.auto()\n    EXPECTED_START_TIME_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n    NEXT_SCHEDULED_START_TIME_ASC = AutoEnum.auto()\n    END_TIME_DESC = AutoEnum.auto()\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.AutomationSort","title":"AutomationSort","text":"

    Bases: AutoEnum

    Defines automation sorting options.

    Source code in prefect/client/schemas/sorting.py
    class AutomationSort(AutoEnum):\n    \"\"\"Defines automation sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.LogSort","title":"LogSort","text":"

    Bases: AutoEnum

    Defines log sorting options.

    Source code in prefect/client/schemas/sorting.py
    class LogSort(AutoEnum):\n    \"\"\"Defines log sorting options.\"\"\"\n\n    TIMESTAMP_ASC = AutoEnum.auto()\n    TIMESTAMP_DESC = AutoEnum.auto()\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.FlowSort","title":"FlowSort","text":"

    Bases: AutoEnum

    Defines flow sorting options.

    Source code in prefect/client/schemas/sorting.py
    class FlowSort(AutoEnum):\n    \"\"\"Defines flow sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.DeploymentSort","title":"DeploymentSort","text":"

    Bases: AutoEnum

    Defines deployment sorting options.

    Source code in prefect/client/schemas/sorting.py
    class DeploymentSort(AutoEnum):\n    \"\"\"Defines deployment sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.ArtifactSort","title":"ArtifactSort","text":"

    Bases: AutoEnum

    Defines artifact sorting options.

    Source code in prefect/client/schemas/sorting.py
    class ArtifactSort(AutoEnum):\n    \"\"\"Defines artifact sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    ID_DESC = AutoEnum.auto()\n    KEY_DESC = AutoEnum.auto()\n    KEY_ASC = AutoEnum.auto()\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.ArtifactCollectionSort","title":"ArtifactCollectionSort","text":"

    Bases: AutoEnum

    Defines artifact collection sorting options.

    Source code in prefect/client/schemas/sorting.py
    class ArtifactCollectionSort(AutoEnum):\n    \"\"\"Defines artifact collection sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    ID_DESC = AutoEnum.auto()\n    KEY_DESC = AutoEnum.auto()\n    KEY_ASC = AutoEnum.auto()\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.VariableSort","title":"VariableSort","text":"

    Bases: AutoEnum

    Defines variables sorting options.

    Source code in prefect/client/schemas/sorting.py
    class VariableSort(AutoEnum):\n    \"\"\"Defines variables sorting options.\"\"\"\n\n    CREATED_DESC = \"CREATED_DESC\"\n    UPDATED_DESC = \"UPDATED_DESC\"\n    NAME_DESC = \"NAME_DESC\"\n    NAME_ASC = \"NAME_ASC\"\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas.sorting.BlockDocumentSort","title":"BlockDocumentSort","text":"

    Bases: AutoEnum

    Defines block document sorting options.

    Source code in prefect/client/schemas/sorting.py
    class BlockDocumentSort(AutoEnum):\n    \"\"\"Defines block document sorting options.\"\"\"\n\n    NAME_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    BLOCK_TYPE_AND_NAME_ASC = AutoEnum.auto()\n
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/utilities/","title":"utilities","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/utilities/#prefect.client.utilities","title":"prefect.client.utilities","text":"

    Utilities for working with clients.

    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/utilities/#prefect.client.utilities.get_or_create_client","title":"get_or_create_client","text":"

    Returns the provided client, infers a client from context if available, or creates a new client.

    Parameters:

    Name Type Description Default client PrefectClient

    an optional client to use

    None

    Returns:

    Type Description Tuple[PrefectClient, bool]
    tuple: a tuple of the client and a boolean indicating if the client was inferred from context
    Source code in prefect/client/utilities.py
    def get_or_create_client(\n    client: Optional[\"PrefectClient\"] = None,\n) -> Tuple[\"PrefectClient\", bool]:\n    \"\"\"\n    Returns provided client, infers a client from context if available, or creates a new client.\n\n    Args:\n        - client (PrefectClient, optional): an optional client to use\n\n    Returns:\n        - tuple: a tuple of the client and a boolean indicating if the client was inferred from context\n    \"\"\"\n    if client is not None:\n        return client, True\n    from prefect._internal.concurrency.event_loop import get_running_loop\n    from prefect.context import FlowRunContext, TaskRunContext\n\n    flow_run_context = FlowRunContext.get()\n    task_run_context = TaskRunContext.get()\n\n    if (\n        flow_run_context\n        and getattr(flow_run_context.client, \"_loop\") == get_running_loop()\n    ):\n        return flow_run_context.client, True\n    elif (\n        task_run_context\n        and getattr(task_run_context.client, \"_loop\") == get_running_loop()\n    ):\n        return task_run_context.client, True\n    else:\n        from prefect.client.orchestration import get_client as get_httpx_client\n\n        return get_httpx_client(), False\n
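
    A minimal sketch (not part of the source listing above): outside of a flow or task run context the helper returns a fresh client and the boolean is False, signalling that the caller owns the client's lifecycle.

    from prefect.client.utilities import get_or_create_client

    client, inferred = get_or_create_client()
    if not inferred:
        # A new client was created; the caller is responsible for managing it,
        # for example by entering `async with client:` before making requests.
        pass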
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/utilities/#prefect.client.utilities.inject_client","title":"inject_client","text":"

    Simple helper to provide a context-managed client to an asynchronous function.

    The decorated function must take a client kwarg; if a client is passed when called, it will be used instead of creating a new one, but it will not be context managed, as it is assumed that the caller is managing the context.

    Source code in prefect/client/utilities.py
    def inject_client(\n    fn: Callable[P, Coroutine[Any, Any, Any]],\n) -> Callable[P, Coroutine[Any, Any, Any]]:\n    \"\"\"\n    Simple helper to provide a context managed client to a asynchronous function.\n\n    The decorated function _must_ take a `client` kwarg and if a client is passed when\n    called it will be used instead of creating a new one, but it will not be context\n    managed as it is assumed that the caller is managing the context.\n    \"\"\"\n\n    @wraps(fn)\n    async def with_injected_client(*args: P.args, **kwargs: P.kwargs) -> Any:\n        client = cast(Optional[\"PrefectClient\"], kwargs.pop(\"client\", None))\n        client, inferred = get_or_create_client(client)\n        if not inferred:\n            context = client\n        else:\n            from prefect.utilities.asyncutils import asyncnullcontext\n\n            context = asyncnullcontext()\n        async with context as new_client:\n            kwargs.setdefault(\"client\", new_client or client)\n            return await fn(*args, **kwargs)\n\n    return with_injected_client\n
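
    A minimal sketch (not part of the source listing above): the decorated coroutine declares a client keyword argument, and a context-managed client is supplied automatically when the caller does not pass one.

    from prefect.client.utilities import inject_client

    @inject_client
    async def count_flows(client=None):
        # `client` is injected (and context managed) when not provided by the caller.
        flows = await client.read_flows()
        return len(flows)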
    ","tags":["Python API","REST API"]},{"location":"api-ref/prefect/concurrency/asyncio/","title":"asyncio","text":"","tags":["Python API","concurrency","asyncio"]},{"location":"api-ref/prefect/concurrency/asyncio/#prefect.concurrency.asyncio","title":"prefect.concurrency.asyncio","text":"","tags":["Python API","concurrency","asyncio"]},{"location":"api-ref/prefect/concurrency/asyncio/#prefect.concurrency.asyncio.ConcurrencySlotAcquisitionError","title":"ConcurrencySlotAcquisitionError","text":"

    Bases: Exception

    Raised when an unhandleable error occurs while acquiring concurrency slots.

    Source code in prefect/concurrency/asyncio.py
    class ConcurrencySlotAcquisitionError(Exception):\n    \"\"\"Raised when an unhandlable occurs while acquiring concurrency slots.\"\"\"\n
    ","tags":["Python API","concurrency","asyncio"]},{"location":"api-ref/prefect/concurrency/asyncio/#prefect.concurrency.asyncio.rate_limit","title":"rate_limit async","text":"

    Block execution until an occupy number of slots of the concurrency limits given in names are acquired. Requires that all given concurrency limits have a slot decay.

    Parameters:

    Name Type Description Default names Union[str, List[str]]

    The names of the concurrency limits to acquire slots from.

    required occupy int

    The number of slots to acquire and hold from each limit.

    1 Source code in prefect/concurrency/asyncio.py
    async def rate_limit(names: Union[str, List[str]], occupy: int = 1):\n    \"\"\"Block execution until an `occupy` number of slots of the concurrency\n    limits given in `names` are acquired. Requires that all given concurrency\n    limits have a slot decay.\n\n    Args:\n        names: The names of the concurrency limits to acquire slots from.\n        occupy: The number of slots to acquire and hold from each limit.\n    \"\"\"\n    names = names if isinstance(names, list) else [names]\n    limits = await _acquire_concurrency_slots(names, occupy, mode=\"rate_limit\")\n    _emit_concurrency_acquisition_events(limits, occupy)\n
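
    A minimal sketch (not part of the source listing above), assuming a global concurrency limit named "api-calls" with a slot decay has already been configured.

    from prefect.concurrency.asyncio import rate_limit

    async def call_external_api():
        # Blocks until a slot from the "api-calls" limit is acquired.
        await rate_limit("api-calls")
        ...  # perform the rate-limited request here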
    ","tags":["Python API","concurrency","asyncio"]},{"location":"api-ref/prefect/concurrency/events/","title":"events","text":"","tags":["Python API","concurrency"]},{"location":"api-ref/prefect/concurrency/events/#prefect.concurrency.events","title":"prefect.concurrency.events","text":"","tags":["Python API","concurrency"]},{"location":"api-ref/prefect/concurrency/services/","title":"services","text":"","tags":["Python API","concurrency"]},{"location":"api-ref/prefect/concurrency/services/#prefect.concurrency.services","title":"prefect.concurrency.services","text":"","tags":["Python API","concurrency"]},{"location":"api-ref/prefect/concurrency/sync/","title":"sync","text":"","tags":["Python API","concurrency","sync"]},{"location":"api-ref/prefect/concurrency/sync/#prefect.concurrency.sync","title":"prefect.concurrency.sync","text":"","tags":["Python API","concurrency","sync"]},{"location":"api-ref/prefect/concurrency/sync/#prefect.concurrency.sync.rate_limit","title":"rate_limit","text":"

    Block execution until an occupy number of slots of the concurrency limits given in names are acquired. Requires that all given concurrency limits have a slot decay.

    Parameters:

    Name Type Description Default names Union[str, List[str]]

    The names of the concurrency limits to acquire slots from.

    required occupy int

    The number of slots to acquire and hold from each limit.

    1 Source code in prefect/concurrency/sync.py
    def rate_limit(names: Union[str, List[str]], occupy: int = 1):\n    \"\"\"Block execution until an `occupy` number of slots of the concurrency\n    limits given in `names` are acquired. Requires that all given concurrency\n    limits have a slot decay.\n\n    Args:\n        names: The names of the concurrency limits to acquire slots from.\n        occupy: The number of slots to acquire and hold from each limit.\n    \"\"\"\n    names = names if isinstance(names, list) else [names]\n    limits = _call_async_function_from_sync(\n        _acquire_concurrency_slots, names, occupy, mode=\"rate_limit\"\n    )\n    _emit_concurrency_acquisition_events(limits, occupy)\n
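
    A minimal sketch (not part of the source listing above) of the synchronous variant, again assuming an "api-calls" limit with slot decay exists.

    from prefect.concurrency.sync import rate_limit

    def call_external_api():
        # Blocks until a slot from the "api-calls" limit is acquired.
        rate_limit("api-calls")
        ...  # perform the rate-limited request here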
    ","tags":["Python API","concurrency","sync"]},{"location":"api-ref/prefect/deployments/base/","title":"base","text":"","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base","title":"prefect.deployments.base","text":"

    Core primitives for managing Prefect deployments via prefect deploy, providing a minimally opinionated build system for managing flows and deployments.

    To get started, follow along with the deployments tutorial.

    ","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.configure_project_by_recipe","title":"configure_project_by_recipe","text":"

    Given a recipe name, returns a dictionary representing base configuration options.

    Parameters:

    Name Type Description Default recipe str

    the name of the recipe to use

    required formatting_kwargs dict

    additional keyword arguments to format the recipe

    {}

    Raises:

    Type Description ValueError

    if the provided recipe name does not exist.

    Source code in prefect/deployments/base.py
    def configure_project_by_recipe(recipe: str, **formatting_kwargs) -> dict:\n    \"\"\"\n    Given a recipe name, returns a dictionary representing base configuration options.\n\n    Args:\n        recipe (str): the name of the recipe to use\n        formatting_kwargs (dict, optional): additional keyword arguments to format the recipe\n\n    Raises:\n        ValueError: if provided recipe name does not exist.\n    \"\"\"\n    # load the recipe\n    recipe_path = Path(__file__).parent / \"recipes\" / recipe / \"prefect.yaml\"\n\n    if not recipe_path.exists():\n        raise ValueError(f\"Unknown recipe {recipe!r} provided.\")\n\n    with recipe_path.open(mode=\"r\") as f:\n        config = yaml.safe_load(f)\n\n    config = apply_values(\n        template=config, values=formatting_kwargs, remove_notset=False\n    )\n\n    return config\n
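
    A minimal sketch (not part of the source listing above), assuming the built-in "git" recipe; the repository URL and branch are hypothetical formatting values.

    from prefect.deployments.base import configure_project_by_recipe

    config = configure_project_by_recipe(
        recipe="git",
        repository="https://github.com/example-org/example-repo",  # hypothetical
        branch="main",
    )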
    ","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.create_default_prefect_yaml","title":"create_default_prefect_yaml","text":"

    Creates a default prefect.yaml file in the provided path if one does not already exist; returns a boolean specifying whether a file was created.

    Parameters:

    Name Type Description Default name str

    the name of the project; if not provided, the current directory name will be used

    None contents dict

    a dictionary of contents to write to the file; if not provided, defaults will be used

    None Source code in prefect/deployments/base.py
    def create_default_prefect_yaml(\n    path: str, name: str = None, contents: Optional[Dict[str, Any]] = None\n) -> bool:\n    \"\"\"\n    Creates default `prefect.yaml` file in the provided path if one does not already exist;\n    returns boolean specifying whether a file was created.\n\n    Args:\n        name (str, optional): the name of the project; if not provided, the current directory name\n            will be used\n        contents (dict, optional): a dictionary of contents to write to the file; if not provided,\n            defaults will be used\n    \"\"\"\n    path = Path(path)\n    prefect_file = path / \"prefect.yaml\"\n    if prefect_file.exists():\n        return False\n    default_file = Path(__file__).parent / \"templates\" / \"prefect.yaml\"\n\n    with default_file.open(mode=\"r\") as df:\n        default_contents = yaml.safe_load(df)\n\n    import prefect\n\n    contents[\"prefect-version\"] = prefect.__version__\n    contents[\"name\"] = name\n\n    with prefect_file.open(mode=\"w\") as f:\n        # write header\n        f.write(\n            \"# Welcome to your prefect.yaml file! You can use this file for storing and\"\n            \" managing\\n# configuration for deploying your flows. We recommend\"\n            \" committing this file to source\\n# control along with your flow code.\\n\\n\"\n        )\n\n        f.write(\"# Generic metadata about this project\\n\")\n        yaml.dump({\"name\": contents[\"name\"]}, f, sort_keys=False)\n        yaml.dump({\"prefect-version\": contents[\"prefect-version\"]}, f, sort_keys=False)\n        f.write(\"\\n\")\n\n        # build\n        f.write(\"# build section allows you to manage and build docker images\\n\")\n        yaml.dump(\n            {\"build\": contents.get(\"build\", default_contents.get(\"build\"))},\n            f,\n            sort_keys=False,\n        )\n        f.write(\"\\n\")\n\n        # push\n        f.write(\n            \"# push section allows you to manage if and how this project is uploaded to\"\n            \" remote locations\\n\"\n        )\n        yaml.dump(\n            {\"push\": contents.get(\"push\", default_contents.get(\"push\"))},\n            f,\n            sort_keys=False,\n        )\n        f.write(\"\\n\")\n\n        # pull\n        f.write(\n            \"# pull section allows you to provide instructions for cloning this project\"\n            \" in remote locations\\n\"\n        )\n        yaml.dump(\n            {\"pull\": contents.get(\"pull\", default_contents.get(\"pull\"))},\n            f,\n            sort_keys=False,\n        )\n        f.write(\"\\n\")\n\n        # deployments\n        f.write(\n            \"# the deployments section allows you to provide configuration for\"\n            \" deploying flows\\n\"\n        )\n        yaml.dump(\n            {\n                \"deployments\": contents.get(\n                    \"deployments\", default_contents.get(\"deployments\")\n                )\n            },\n            f,\n            sort_keys=False,\n        )\n    return True\n
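
    A minimal sketch (not part of the source listing above): seed a prefect.yaml in the current directory from a recipe-derived configuration; the function returns False when a prefect.yaml already exists.

    from prefect.deployments.base import (
        configure_project_by_recipe,
        create_default_prefect_yaml,
    )

    contents = configure_project_by_recipe(recipe="local")
    created = create_default_prefect_yaml(".", name="my-project", contents=contents)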
    ","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.initialize_project","title":"initialize_project","text":"

    Initializes a basic project structure with base files. If no name is provided, the name of the current directory is used. If no recipe is provided, one is inferred.

    Parameters:

    Name Type Description Default name str

    the name of the project; if not provided, the current directory name is used

    None recipe str

    the name of the recipe to use; if not provided, one is inferred

    None inputs dict

    a dictionary of inputs to use when formatting the recipe

    None

    Returns:

    Type Description List[str]

    List[str]: a list of files / directories that were created

    Source code in prefect/deployments/base.py
    def initialize_project(\n    name: str = None, recipe: str = None, inputs: Optional[Dict[str, Any]] = None\n) -> List[str]:\n    \"\"\"\n    Initializes a basic project structure with base files.  If no name is provided, the name\n    of the current directory is used.  If no recipe is provided, one is inferred.\n\n    Args:\n        name (str, optional): the name of the project; if not provided, the current directory name\n        recipe (str, optional): the name of the recipe to use; if not provided, one is inferred\n        inputs (dict, optional): a dictionary of inputs to use when formatting the recipe\n\n    Returns:\n        List[str]: a list of files / directories that were created\n    \"\"\"\n    # determine if in git repo or use directory name as a default\n    is_git_based = False\n    formatting_kwargs = {\"directory\": str(Path(\".\").absolute().resolve())}\n    dir_name = os.path.basename(os.getcwd())\n\n    remote_url = _get_git_remote_origin_url()\n    if remote_url:\n        formatting_kwargs[\"repository\"] = remote_url\n        is_git_based = True\n        branch = _get_git_branch()\n        formatting_kwargs[\"branch\"] = branch or \"main\"\n\n    formatting_kwargs[\"name\"] = dir_name\n\n    has_dockerfile = Path(\"Dockerfile\").exists()\n\n    if has_dockerfile:\n        formatting_kwargs[\"dockerfile\"] = \"Dockerfile\"\n    elif recipe is not None and \"docker\" in recipe:\n        formatting_kwargs[\"dockerfile\"] = \"auto\"\n\n    # hand craft a pull step\n    if is_git_based and recipe is None:\n        if has_dockerfile:\n            recipe = \"docker-git\"\n        else:\n            recipe = \"git\"\n    elif recipe is None and has_dockerfile:\n        recipe = \"docker\"\n    elif recipe is None:\n        recipe = \"local\"\n\n    formatting_kwargs.update(inputs or {})\n    configuration = configure_project_by_recipe(recipe=recipe, **formatting_kwargs)\n\n    project_name = name or dir_name\n\n    files = []\n    if create_default_ignore_file(\".\"):\n        files.append(\".prefectignore\")\n    if create_default_prefect_yaml(\".\", name=project_name, contents=configuration):\n        files.append(\"prefect.yaml\")\n\n    return files\n
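
    A minimal sketch (not part of the source listing above): initialize a project in the current working directory, letting the recipe be inferred from the presence of a git remote and/or a Dockerfile; the project name "my-project" is hypothetical.

    from prefect.deployments.base import initialize_project

    created_files = initialize_project(name="my-project")
    # e.g. [".prefectignore", "prefect.yaml"], depending on what already exists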
    ","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/deployments/","title":"deployments","text":"","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments","title":"prefect.deployments.deployments","text":"

    Objects for specifying deployments and utilities for loading flows from deployments.

    ","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment","title":"Deployment","text":"

    Bases: DeprecatedInfraOverridesField, BaseModel

    DEPRECATION WARNING:

    This class is deprecated as of March 2024 and will not be available after September 2024. It has been replaced by flow.deploy, which offers enhanced functionality and a better user experience. For upgrade instructions, see https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.

    A Prefect Deployment definition, used for specifying and building deployments.

    Parameters:

    Name Type Description Default name

    A name for the deployment (required).

    required version

    An optional version for the deployment; defaults to the flow's version

    required description

    An optional description of the deployment; defaults to the flow's description

    required tags

    An optional list of tags to associate with this deployment; note that tags are used only for organizational purposes. For delegating work to agents, see work_queue_name.

    required schedule

    A schedule to run this deployment on, once registered (deprecated)

    required is_schedule_active

    Whether or not the schedule is active (deprecated)

    required schedules

    A list of schedules to run this deployment on

    required work_queue_name

    The work queue that will handle this deployment's runs

    required work_pool_name

    The work pool for the deployment

    required flow_name

    The name of the flow this deployment encapsulates

    required parameters

    A dictionary of parameter values to pass to runs created from this deployment

    required infrastructure

    An optional infrastructure block used to configure infrastructure for runs; if not provided, will default to running this deployment in Agent subprocesses

    required job_variables

    A dictionary of dot delimited infrastructure overrides that will be applied at runtime; for example env.CONFIG_KEY=config_value or namespace='prefect'

    required storage

    An optional remote storage block used to store and retrieve this workflow; if not provided, will default to referencing this flow by its local path

    required path

    The path to the working directory for the workflow, relative to remote storage or, if stored on a local filesystem, an absolute path

    required entrypoint

    The path to the entrypoint for the workflow, always relative to the path

    required parameter_openapi_schema

    The parameter schema of the flow, including defaults.

    required enforce_parameter_schema

    Whether or not the Prefect API should enforce the parameter schema for this deployment.

    required
    Create a new deployment using configuration defaults for an imported flow:\n\n>>> from my_project.flows import my_flow\n>>> from prefect.deployments import Deployment\n>>>\n>>> deployment = Deployment.build_from_flow(\n...     flow=my_flow,\n...     name=\"example\",\n...     version=\"1\",\n...     tags=[\"demo\"],\n>>> )\n>>> deployment.apply()\n\nCreate a new deployment with custom storage and an infrastructure override:\n\n>>> from my_project.flows import my_flow\n>>> from prefect.deployments import Deployment\n>>> from prefect.filesystems import S3\n\n>>> storage = S3.load(\"dev-bucket\") # load a pre-defined block\n>>> deployment = Deployment.build_from_flow(\n...     flow=my_flow,\n...     name=\"s3-example\",\n...     version=\"2\",\n...     tags=[\"aws\"],\n...     storage=storage,\n...     job_variables={\"env.PREFECT_LOGGING_LEVEL\": \"DEBUG\"},\n>>> )\n>>> deployment.apply()\n
    Source code in prefect/deployments/deployments.py
    @deprecated_class(\n    start_date=\"Mar 2024\",\n    help=\"Use `flow.deploy` to deploy your flows instead.\"\n    \" Refer to the upgrade guide for more information:\"\n    \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\",\n)\nclass Deployment(DeprecatedInfraOverridesField, BaseModel):\n    \"\"\"\n    DEPRECATION WARNING:\n\n    This class is deprecated as of March 2024 and will not be available after September 2024.\n    It has been replaced by `flow.deploy`, which offers enhanced functionality and better a better user experience.\n    For upgrade instructions, see https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\n\n    A Prefect Deployment definition, used for specifying and building deployments.\n\n    Args:\n        name: A name for the deployment (required).\n        version: An optional version for the deployment; defaults to the flow's version\n        description: An optional description of the deployment; defaults to the flow's\n            description\n        tags: An optional list of tags to associate with this deployment; note that tags\n            are used only for organizational purposes. For delegating work to agents,\n            see `work_queue_name`.\n        schedule: A schedule to run this deployment on, once registered (deprecated)\n        is_schedule_active: Whether or not the schedule is active (deprecated)\n        schedules: A list of schedules to run this deployment on\n        work_queue_name: The work queue that will handle this deployment's runs\n        work_pool_name: The work pool for the deployment\n        flow_name: The name of the flow this deployment encapsulates\n        parameters: A dictionary of parameter values to pass to runs created from this\n            deployment\n        infrastructure: An optional infrastructure block used to configure\n            infrastructure for runs; if not provided, will default to running this\n            deployment in Agent subprocesses\n        job_variables: A dictionary of dot delimited infrastructure overrides that\n            will be applied at runtime; for example `env.CONFIG_KEY=config_value` or\n            `namespace='prefect'`\n        storage: An optional remote storage block used to store and retrieve this\n            workflow; if not provided, will default to referencing this flow by its\n            local path\n        path: The path to the working directory for the workflow, relative to remote\n            storage or, if stored on a local filesystem, an absolute path\n        entrypoint: The path to the entrypoint for the workflow, always relative to the\n            `path`\n        parameter_openapi_schema: The parameter schema of the flow, including defaults.\n        enforce_parameter_schema: Whether or not the Prefect API should enforce the\n            parameter schema for this deployment.\n\n    Examples:\n\n        Create a new deployment using configuration defaults for an imported flow:\n\n        >>> from my_project.flows import my_flow\n        >>> from prefect.deployments import Deployment\n        >>>\n        >>> deployment = Deployment.build_from_flow(\n        ...     flow=my_flow,\n        ...     name=\"example\",\n        ...     version=\"1\",\n        ...     
tags=[\"demo\"],\n        >>> )\n        >>> deployment.apply()\n\n        Create a new deployment with custom storage and an infrastructure override:\n\n        >>> from my_project.flows import my_flow\n        >>> from prefect.deployments import Deployment\n        >>> from prefect.filesystems import S3\n\n        >>> storage = S3.load(\"dev-bucket\") # load a pre-defined block\n        >>> deployment = Deployment.build_from_flow(\n        ...     flow=my_flow,\n        ...     name=\"s3-example\",\n        ...     version=\"2\",\n        ...     tags=[\"aws\"],\n        ...     storage=storage,\n        ...     job_variables=dict(\"env.PREFECT_LOGGING_LEVEL\"=\"DEBUG\"),\n        >>> )\n        >>> deployment.apply()\n\n    \"\"\"\n\n    class Config:\n        json_encoders = {SecretDict: lambda v: v.dict()}\n        validate_assignment = True\n        extra = \"forbid\"\n\n    @property\n    def _editable_fields(self) -> List[str]:\n        editable_fields = [\n            \"name\",\n            \"description\",\n            \"version\",\n            \"work_queue_name\",\n            \"work_pool_name\",\n            \"tags\",\n            \"parameters\",\n            \"schedule\",\n            \"schedules\",\n            \"is_schedule_active\",\n            # The `infra_overrides` field has been renamed to `job_variables`.\n            # We will continue writing it in the YAML file as `infra_overrides`\n            # instead of `job_variables` for better backwards compat, but we'll\n            # accept either `job_variables` or `infra_overrides` when we read\n            # the file.\n            \"infra_overrides\",\n        ]\n\n        # if infrastructure is baked as a pre-saved block, then\n        # editing its fields will not update anything\n        if self.infrastructure._block_document_id:\n            return editable_fields\n        else:\n            return editable_fields + [\"infrastructure\"]\n\n    @property\n    def location(self) -> str:\n        \"\"\"\n        The 'location' that this deployment points to is given by `path` alone\n        in the case of no remote storage, and otherwise by `storage.basepath / path`.\n\n        The underlying flow entrypoint is interpreted relative to this location.\n        \"\"\"\n        location = \"\"\n        if self.storage:\n            location = (\n                self.storage.basepath + \"/\"\n                if not self.storage.basepath.endswith(\"/\")\n                else \"\"\n            )\n        if self.path:\n            location += self.path\n        return location\n\n    @sync_compatible\n    async def to_yaml(self, path: Path) -> None:\n        yaml_dict = self._yaml_dict()\n        schema = self.schema()\n\n        with open(path, \"w\") as f:\n            # write header\n            f.write(\n                \"###\\n### A complete description of a Prefect Deployment for flow\"\n                f\" {self.flow_name!r}\\n###\\n\"\n            )\n\n            # write editable fields\n            for field in self._editable_fields:\n                # write any comments\n                if schema[\"properties\"][field].get(\"yaml_comment\"):\n                    f.write(f\"# {schema['properties'][field]['yaml_comment']}\\n\")\n                # write the field\n                yaml.dump({field: yaml_dict[field]}, f, sort_keys=False)\n\n            # write non-editable fields, excluding `job_variables` because we'll\n            # continue writing it as `infra_overrides` for better backwards compat\n            # 
with the existing file format.\n            f.write(\"\\n###\\n### DO NOT EDIT BELOW THIS LINE\\n###\\n\")\n            yaml.dump(\n                {\n                    k: v\n                    for k, v in yaml_dict.items()\n                    if k not in self._editable_fields and k != \"job_variables\"\n                },\n                f,\n                sort_keys=False,\n            )\n\n    def _yaml_dict(self) -> dict:\n        \"\"\"\n        Returns a YAML-compatible representation of this deployment as a dictionary.\n        \"\"\"\n        # avoids issues with UUIDs showing up in YAML\n        all_fields = json.loads(\n            self.json(\n                exclude={\n                    \"storage\": {\"_filesystem\", \"filesystem\", \"_remote_file_system\"}\n                }\n            )\n        )\n        if all_fields[\"storage\"]:\n            all_fields[\"storage\"][\n                \"_block_type_slug\"\n            ] = self.storage.get_block_type_slug()\n        if all_fields[\"infrastructure\"]:\n            all_fields[\"infrastructure\"][\n                \"_block_type_slug\"\n            ] = self.infrastructure.get_block_type_slug()\n        return all_fields\n\n    @classmethod\n    def _validate_schedule(cls, value):\n        \"\"\"We do not support COUNT-based (# of occurrences) RRule schedules for deployments.\"\"\"\n        if value:\n            rrule_value = getattr(value, \"rrule\", None)\n            if rrule_value and \"COUNT\" in rrule_value.upper():\n                raise ValueError(\n                    \"RRule schedules with `COUNT` are not supported. Please use `UNTIL`\"\n                    \" or the `/deployments/{id}/schedule` endpoint to schedule a fixed\"\n                    \" number of flow runs.\"\n                )\n\n    # top level metadata\n    name: str = Field(..., description=\"The name of the deployment.\")\n    description: Optional[str] = Field(\n        default=None, description=\"An optional description of the deployment.\"\n    )\n    version: Optional[str] = Field(\n        default=None, description=\"An optional version for the deployment.\"\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"One of more tags to apply to this deployment.\",\n    )\n    schedule: Optional[SCHEDULE_TYPES] = Field(default=None)\n    schedules: List[MinimalDeploymentSchedule] = Field(\n        default_factory=list,\n        description=\"The schedules to run this deployment on.\",\n    )\n    is_schedule_active: Optional[bool] = Field(\n        default=None, description=\"Whether or not the schedule is active.\"\n    )\n    flow_name: Optional[str] = Field(default=None, description=\"The name of the flow.\")\n    work_queue_name: Optional[str] = Field(\n        \"default\",\n        description=\"The work queue for the deployment.\",\n        yaml_comment=\"The work queue that will handle this deployment's runs\",\n    )\n    work_pool_name: Optional[str] = Field(\n        default=None, description=\"The work pool for the deployment\"\n    )\n    # flow data\n    parameters: Dict[str, Any] = Field(default_factory=dict)\n    manifest_path: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the flow's manifest file, relative to the chosen storage.\"\n        ),\n    )\n    infrastructure: Infrastructure = Field(default_factory=Process)\n    job_variables: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Overrides to apply to the 
base infrastructure block at runtime.\",\n    )\n    storage: Optional[Block] = Field(\n        None,\n        help=\"The remote storage to use for this workflow.\",\n    )\n    path: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the working directory for the workflow, relative to remote\"\n            \" storage or an absolute path.\"\n        ),\n    )\n    entrypoint: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the entrypoint for the workflow, relative to the `path`.\"\n        ),\n    )\n    parameter_openapi_schema: ParameterSchema = Field(\n        default_factory=ParameterSchema,\n        description=\"The parameter schema of the flow, including defaults.\",\n    )\n    timestamp: datetime = Field(default_factory=partial(pendulum.now, \"UTC\"))\n    triggers: List[Union[DeploymentTriggerTypes, TriggerTypes]] = Field(\n        default_factory=list,\n        description=\"The triggers that should cause this deployment to run.\",\n    )\n    # defaults to None to allow for backwards compatibility\n    enforce_parameter_schema: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Whether or not the Prefect API should enforce the parameter schema for\"\n            \" this deployment.\"\n        ),\n    )\n\n    @validator(\"infrastructure\", pre=True)\n    def validate_infrastructure_capabilities(cls, value):\n        return infrastructure_must_have_capabilities(value)\n\n    @validator(\"storage\", pre=True)\n    def validate_storage(cls, value):\n        return storage_must_have_capabilities(value)\n\n    @validator(\"parameter_openapi_schema\", pre=True)\n    def validate_parameter_openapi_schema(cls, value):\n        return handle_openapi_schema(value)\n\n    @validator(\"triggers\")\n    def validate_triggers(cls, field_value, values):\n        return validate_automation_names(field_value, values)\n\n    @root_validator(pre=True)\n    def validate_schedule(cls, values):\n        return validate_deprecated_schedule_fields(values, logger)\n\n    @root_validator(pre=True)\n    def validate_backwards_compatibility_for_schedule(cls, values):\n        return reconcile_schedules(cls, values)\n\n    @classmethod\n    @sync_compatible\n    async def load_from_yaml(cls, path: str):\n        data = yaml.safe_load(await anyio.Path(path).read_bytes())\n        # load blocks from server to ensure secret values are properly hydrated\n        if data.get(\"storage\"):\n            block_doc_name = data[\"storage\"].get(\"_block_document_name\")\n            # if no doc name, this block is not stored on the server\n            if block_doc_name:\n                block_slug = data[\"storage\"][\"_block_type_slug\"]\n                block = await Block.load(f\"{block_slug}/{block_doc_name}\")\n                data[\"storage\"] = block\n\n        if data.get(\"infrastructure\"):\n            block_doc_name = data[\"infrastructure\"].get(\"_block_document_name\")\n            # if no doc name, this block is not stored on the server\n            if block_doc_name:\n                block_slug = data[\"infrastructure\"][\"_block_type_slug\"]\n                block = await Block.load(f\"{block_slug}/{block_doc_name}\")\n                data[\"infrastructure\"] = block\n\n            return cls(**data)\n\n    @sync_compatible\n    async def load(self) -> bool:\n        \"\"\"\n        Queries the API for a deployment with this name for this flow, and if found,\n        
prepopulates any settings that were not set at initialization.\n\n        Returns a boolean specifying whether a load was successful or not.\n\n        Raises:\n            - ValueError: if both name and flow name are not set\n        \"\"\"\n        if not self.name or not self.flow_name:\n            raise ValueError(\"Both a deployment name and flow name must be provided.\")\n        async with get_client() as client:\n            try:\n                deployment = await client.read_deployment_by_name(\n                    f\"{self.flow_name}/{self.name}\"\n                )\n                if deployment.storage_document_id:\n                    Block._from_block_document(\n                        await client.read_block_document(deployment.storage_document_id)\n                    )\n\n                excluded_fields = self.__fields_set__.union(\n                    {\n                        \"infrastructure\",\n                        \"storage\",\n                        \"timestamp\",\n                        \"triggers\",\n                        \"enforce_parameter_schema\",\n                        \"schedules\",\n                        \"schedule\",\n                        \"is_schedule_active\",\n                    }\n                )\n                for field in set(self.__fields__.keys()) - excluded_fields:\n                    new_value = getattr(deployment, field)\n                    setattr(self, field, new_value)\n\n                if \"schedules\" not in self.__fields_set__:\n                    self.schedules = [\n                        MinimalDeploymentSchedule(\n                            **schedule.dict(include={\"schedule\", \"active\"})\n                        )\n                        for schedule in deployment.schedules\n                    ]\n\n                # The API server generates the \"schedule\" field from the\n                # current list of schedules, so if the user has locally set\n                # \"schedules\" to anything, we should avoid sending \"schedule\"\n                # and let the API server generate a new value if necessary.\n                if \"schedules\" in self.__fields_set__:\n                    self.schedule = None\n                    self.is_schedule_active = None\n                else:\n                    # The user isn't using \"schedules,\" so we should\n                    # populate \"schedule\" and \"is_schedule_active\" from the\n                    # API's version of the deployment, unless the user gave\n                    # us these fields in __init__().\n                    if \"schedule\" not in self.__fields_set__:\n                        self.schedule = deployment.schedule\n                    if \"is_schedule_active\" not in self.__fields_set__:\n                        self.is_schedule_active = deployment.is_schedule_active\n\n                if \"infrastructure\" not in self.__fields_set__:\n                    if deployment.infrastructure_document_id:\n                        self.infrastructure = Block._from_block_document(\n                            await client.read_block_document(\n                                deployment.infrastructure_document_id\n                            )\n                        )\n                if \"storage\" not in self.__fields_set__:\n                    if deployment.storage_document_id:\n                        self.storage = Block._from_block_document(\n                            await client.read_block_document(\n                                
deployment.storage_document_id\n                            )\n                        )\n            except ObjectNotFound:\n                return False\n        return True\n\n    @sync_compatible\n    async def update(self, ignore_none: bool = False, **kwargs):\n        \"\"\"\n        Performs an in-place update with the provided settings.\n\n        Args:\n            ignore_none: if True, all `None` values are ignored when performing the\n                update\n        \"\"\"\n        unknown_keys = set(kwargs.keys()) - set(self.dict().keys())\n        if unknown_keys:\n            raise ValueError(\n                f\"Received unexpected attributes: {', '.join(unknown_keys)}\"\n            )\n        for key, value in kwargs.items():\n            if ignore_none and value is None:\n                continue\n            setattr(self, key, value)\n\n    @sync_compatible\n    async def upload_to_storage(\n        self, storage_block: str = None, ignore_file: str = \".prefectignore\"\n    ) -> Optional[int]:\n        \"\"\"\n        Uploads the workflow this deployment represents using a provided storage block;\n        if no block is provided, defaults to configuring self for local storage.\n\n        Args:\n            storage_block: a string reference a remote storage block slug `$type/$name`;\n                if provided, used to upload the workflow's project\n            ignore_file: an optional path to a `.prefectignore` file that specifies\n                filename patterns to ignore when uploading to remote storage; if not\n                provided, looks for `.prefectignore` in the current working directory\n        \"\"\"\n        file_count = None\n        if storage_block:\n            storage = await Block.load(storage_block)\n\n            if \"put-directory\" not in storage.get_block_capabilities():\n                raise BlockMissingCapabilities(\n                    f\"Storage block {storage!r} missing 'put-directory' capability.\"\n                )\n\n            self.storage = storage\n\n            # upload current directory to storage location\n            file_count = await self.storage.put_directory(\n                ignore_file=ignore_file, to_path=self.path\n            )\n        elif self.storage:\n            if \"put-directory\" not in self.storage.get_block_capabilities():\n                raise BlockMissingCapabilities(\n                    f\"Storage block {self.storage!r} missing 'put-directory'\"\n                    \" capability.\"\n                )\n\n            file_count = await self.storage.put_directory(\n                ignore_file=ignore_file, to_path=self.path\n            )\n\n        # persists storage now in case it contains secret values\n        if self.storage and not self.storage._block_document_id:\n            await self.storage._save(is_anonymous=True)\n\n        return file_count\n\n    @sync_compatible\n    async def apply(\n        self, upload: bool = False, work_queue_concurrency: int = None\n    ) -> UUID:\n        \"\"\"\n        Registers this deployment with the API and returns the deployment's ID.\n\n        Args:\n            upload: if True, deployment files are automatically uploaded to remote\n                storage\n            work_queue_concurrency: If provided, sets the concurrency limit on the\n                deployment's work queue\n        \"\"\"\n        if not self.name or not self.flow_name:\n            raise ValueError(\"Both a deployment name and flow name must be set.\")\n        async with 
get_client() as client:\n            # prep IDs\n            flow_id = await client.create_flow_from_name(self.flow_name)\n\n            infrastructure_document_id = self.infrastructure._block_document_id\n            if not infrastructure_document_id:\n                # if not building off a block, will create an anonymous block\n                self.infrastructure = self.infrastructure.copy()\n                infrastructure_document_id = await self.infrastructure._save(\n                    is_anonymous=True,\n                )\n\n            if upload:\n                await self.upload_to_storage()\n\n            if self.work_queue_name and work_queue_concurrency is not None:\n                try:\n                    res = await client.create_work_queue(\n                        name=self.work_queue_name, work_pool_name=self.work_pool_name\n                    )\n                except ObjectAlreadyExists:\n                    res = await client.read_work_queue_by_name(\n                        name=self.work_queue_name, work_pool_name=self.work_pool_name\n                    )\n                await client.update_work_queue(\n                    res.id, concurrency_limit=work_queue_concurrency\n                )\n\n            if self.schedule:\n                logger.info(\n                    \"Interpreting the deprecated `schedule` field as an entry in \"\n                    \"`schedules`.\"\n                )\n                schedules = [\n                    DeploymentScheduleCreate(\n                        schedule=self.schedule, active=self.is_schedule_active\n                    )\n                ]\n            elif self.schedules:\n                schedules = [\n                    DeploymentScheduleCreate(**schedule.dict())\n                    for schedule in self.schedules\n                ]\n            else:\n                schedules = None\n\n            # we assume storage was already saved\n            storage_document_id = getattr(self.storage, \"_block_document_id\", None)\n            deployment_id = await client.create_deployment(\n                flow_id=flow_id,\n                name=self.name,\n                work_queue_name=self.work_queue_name,\n                work_pool_name=self.work_pool_name,\n                version=self.version,\n                schedules=schedules,\n                is_schedule_active=self.is_schedule_active,\n                parameters=self.parameters,\n                description=self.description,\n                tags=self.tags,\n                manifest_path=self.manifest_path,  # allows for backwards YAML compat\n                path=self.path,\n                entrypoint=self.entrypoint,\n                job_variables=self.job_variables,\n                storage_document_id=storage_document_id,\n                infrastructure_document_id=infrastructure_document_id,\n                parameter_openapi_schema=self.parameter_openapi_schema.dict(),\n                enforce_parameter_schema=self.enforce_parameter_schema,\n            )\n\n            if client.server_type.supports_automations():\n                try:\n                    # The triggers defined in the deployment spec are, essentially,\n                    # anonymous and attempting truly sync them with cloud is not\n                    # feasible. 
Instead, we remove all automations that are owned\n                    # by the deployment, meaning that they were created via this\n                    # mechanism below, and then recreate them.\n                    await client.delete_resource_owned_automations(\n                        f\"prefect.deployment.{deployment_id}\"\n                    )\n                except PrefectHTTPStatusError as e:\n                    if e.response.status_code == 404:\n                        # This Prefect server does not support automations, so we can safely\n                        # ignore this 404 and move on.\n                        return deployment_id\n                    raise e\n\n                for trigger in self.triggers:\n                    trigger.set_deployment_id(deployment_id)\n                    await client.create_automation(trigger.as_automation())\n\n            return deployment_id\n\n    @classmethod\n    @sync_compatible\n    async def build_from_flow(\n        cls,\n        flow: Flow,\n        name: str,\n        output: str = None,\n        skip_upload: bool = False,\n        ignore_file: str = \".prefectignore\",\n        apply: bool = False,\n        load_existing: bool = True,\n        schedules: Optional[FlexibleScheduleList] = None,\n        **kwargs,\n    ) -> \"Deployment\":\n        \"\"\"\n        Configure a deployment for a given flow.\n\n        Args:\n            flow: A flow function to deploy\n            name: A name for the deployment\n            output (optional): if provided, the full deployment specification will be\n                written as a YAML file in the location specified by `output`\n            skip_upload: if True, deployment files are not automatically uploaded to\n                remote storage\n            ignore_file: an optional path to a `.prefectignore` file that specifies\n                filename patterns to ignore when uploading to remote storage; if not\n                provided, looks for `.prefectignore` in the current working directory\n            apply: if True, the deployment is automatically registered with the API\n            load_existing: if True, load any settings that may already be configured for\n                the named deployment server-side (e.g., schedules, default parameter\n                values, etc.)\n            schedules: An optional list of schedules. Each item in the list can be:\n                  - An instance of `MinimalDeploymentSchedule`.\n                  - A dictionary with a `schedule` key, and optionally, an\n                    `active` key. 
The `schedule` key should correspond to a\n                    schedule type, and `active` is a boolean indicating whether\n                    the schedule is active or not.\n                  - An instance of one of the predefined schedule types:\n                    `IntervalSchedule`, `CronSchedule`, or `RRuleSchedule`.\n            **kwargs: other keyword arguments to pass to the constructor for the\n                `Deployment` class\n        \"\"\"\n        if not name:\n            raise ValueError(\"A deployment name must be provided.\")\n\n        # note that `deployment.load` only updates settings that were *not*\n        # provided at initialization\n\n        deployment_args = {\n            \"name\": name,\n            \"flow_name\": flow.name,\n            **kwargs,\n        }\n\n        if schedules is not None:\n            deployment_args[\"schedules\"] = schedules\n\n        deployment = cls(**deployment_args)\n        deployment.flow_name = flow.name\n        if not deployment.entrypoint:\n            ## first see if an entrypoint can be determined\n            flow_file = getattr(flow, \"__globals__\", {}).get(\"__file__\")\n            mod_name = getattr(flow, \"__module__\", None)\n            if not flow_file:\n                if not mod_name:\n                    # todo, check if the file location was manually set already\n                    raise ValueError(\"Could not determine flow's file location.\")\n                module = importlib.import_module(mod_name)\n                flow_file = getattr(module, \"__file__\", None)\n                if not flow_file:\n                    raise ValueError(\"Could not determine flow's file location.\")\n\n            # set entrypoint\n            entry_path = Path(flow_file).absolute().relative_to(Path(\".\").absolute())\n            deployment.entrypoint = f\"{entry_path}:{flow.fn.__name__}\"\n\n        if load_existing:\n            await deployment.load()\n\n        # set a few attributes for this flow object\n        deployment.parameter_openapi_schema = parameter_schema(flow)\n\n        # ensure the ignore file exists\n        if not Path(ignore_file).exists():\n            Path(ignore_file).touch()\n\n        if not deployment.version:\n            deployment.version = flow.version\n        if not deployment.description:\n            deployment.description = flow.description\n\n        # proxy for whether infra is docker-based\n        is_docker_based = hasattr(deployment.infrastructure, \"image\")\n\n        if not deployment.storage and not is_docker_based and not deployment.path:\n            deployment.path = str(Path(\".\").absolute())\n        elif not deployment.storage and is_docker_based:\n            # only update if a path is not already set\n            if not deployment.path:\n                deployment.path = \"/opt/prefect/flows\"\n\n        if not skip_upload:\n            if (\n                deployment.storage\n                and \"put-directory\" in deployment.storage.get_block_capabilities()\n            ):\n                await deployment.upload_to_storage(ignore_file=ignore_file)\n\n        if output:\n            await deployment.to_yaml(output)\n\n        if apply:\n            await deployment.apply()\n\n        return deployment\n
    ","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.location","title":"location: str property","text":"

    The 'location' that this deployment points to is given by path alone in the case of no remote storage, and otherwise by storage.basepath / path.

    The underlying flow entrypoint is interpreted relative to this location.

    ","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.apply","title":"apply async","text":"

    Registers this deployment with the API and returns the deployment's ID.

    Parameters:

    upload (bool, default False): if True, deployment files are automatically uploaded to remote storage
    work_queue_concurrency (int, default None): If provided, sets the concurrency limit on the deployment's work queue

    Source code in prefect/deployments/deployments.py
    @sync_compatible\nasync def apply(\n    self, upload: bool = False, work_queue_concurrency: int = None\n) -> UUID:\n    \"\"\"\n    Registers this deployment with the API and returns the deployment's ID.\n\n    Args:\n        upload: if True, deployment files are automatically uploaded to remote\n            storage\n        work_queue_concurrency: If provided, sets the concurrency limit on the\n            deployment's work queue\n    \"\"\"\n    if not self.name or not self.flow_name:\n        raise ValueError(\"Both a deployment name and flow name must be set.\")\n    async with get_client() as client:\n        # prep IDs\n        flow_id = await client.create_flow_from_name(self.flow_name)\n\n        infrastructure_document_id = self.infrastructure._block_document_id\n        if not infrastructure_document_id:\n            # if not building off a block, will create an anonymous block\n            self.infrastructure = self.infrastructure.copy()\n            infrastructure_document_id = await self.infrastructure._save(\n                is_anonymous=True,\n            )\n\n        if upload:\n            await self.upload_to_storage()\n\n        if self.work_queue_name and work_queue_concurrency is not None:\n            try:\n                res = await client.create_work_queue(\n                    name=self.work_queue_name, work_pool_name=self.work_pool_name\n                )\n            except ObjectAlreadyExists:\n                res = await client.read_work_queue_by_name(\n                    name=self.work_queue_name, work_pool_name=self.work_pool_name\n                )\n            await client.update_work_queue(\n                res.id, concurrency_limit=work_queue_concurrency\n            )\n\n        if self.schedule:\n            logger.info(\n                \"Interpreting the deprecated `schedule` field as an entry in \"\n                \"`schedules`.\"\n            )\n            schedules = [\n                DeploymentScheduleCreate(\n                    schedule=self.schedule, active=self.is_schedule_active\n                )\n            ]\n        elif self.schedules:\n            schedules = [\n                DeploymentScheduleCreate(**schedule.dict())\n                for schedule in self.schedules\n            ]\n        else:\n            schedules = None\n\n        # we assume storage was already saved\n        storage_document_id = getattr(self.storage, \"_block_document_id\", None)\n        deployment_id = await client.create_deployment(\n            flow_id=flow_id,\n            name=self.name,\n            work_queue_name=self.work_queue_name,\n            work_pool_name=self.work_pool_name,\n            version=self.version,\n            schedules=schedules,\n            is_schedule_active=self.is_schedule_active,\n            parameters=self.parameters,\n            description=self.description,\n            tags=self.tags,\n            manifest_path=self.manifest_path,  # allows for backwards YAML compat\n            path=self.path,\n            entrypoint=self.entrypoint,\n            job_variables=self.job_variables,\n            storage_document_id=storage_document_id,\n            infrastructure_document_id=infrastructure_document_id,\n            parameter_openapi_schema=self.parameter_openapi_schema.dict(),\n            enforce_parameter_schema=self.enforce_parameter_schema,\n        )\n\n        if client.server_type.supports_automations():\n            try:\n                # The triggers defined in the deployment spec are, essentially,\n  
              # anonymous and attempting truly sync them with cloud is not\n                # feasible. Instead, we remove all automations that are owned\n                # by the deployment, meaning that they were created via this\n                # mechanism below, and then recreate them.\n                await client.delete_resource_owned_automations(\n                    f\"prefect.deployment.{deployment_id}\"\n                )\n            except PrefectHTTPStatusError as e:\n                if e.response.status_code == 404:\n                    # This Prefect server does not support automations, so we can safely\n                    # ignore this 404 and move on.\n                    return deployment_id\n                raise e\n\n            for trigger in self.triggers:\n                trigger.set_deployment_id(deployment_id)\n                await client.create_automation(trigger.as_automation())\n\n        return deployment_id\n
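    A minimal usage sketch for apply, assuming a flow importable from a hypothetical my_project.flows module; the deployment is built locally, then registered, with an optional concurrency cap on its work queue:

    from my_project.flows import my_flow  # hypothetical module
    from prefect.deployments import Deployment

    deployment = Deployment.build_from_flow(flow=my_flow, name="example")
    # Register (or update) the deployment with the API and cap its work queue
    # at 10 concurrent runs; returns the deployment's UUID.
    deployment_id = deployment.apply(work_queue_concurrency=10)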
    ","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.build_from_flow","title":"build_from_flow async classmethod","text":"

    Configure a deployment for a given flow.

    Parameters:

    flow (Flow, required): A flow function to deploy
    name (str, required): A name for the deployment
    output (optional, default None): if provided, the full deployment specification will be written as a YAML file in the location specified by output
    skip_upload (bool, default False): if True, deployment files are not automatically uploaded to remote storage
    ignore_file (str, default '.prefectignore'): an optional path to a .prefectignore file that specifies filename patterns to ignore when uploading to remote storage; if not provided, looks for .prefectignore in the current working directory
    apply (bool, default False): if True, the deployment is automatically registered with the API
    load_existing (bool, default True): if True, load any settings that may already be configured for the named deployment server-side (e.g., schedules, default parameter values, etc.)
    schedules (Optional[FlexibleScheduleList], default None): An optional list of schedules. Each item in the list can be: an instance of MinimalDeploymentSchedule; a dictionary with a schedule key, and optionally, an active key (the schedule key should correspond to a schedule type, and active is a boolean indicating whether the schedule is active or not); or an instance of one of the predefined schedule types: IntervalSchedule, CronSchedule, or RRuleSchedule.
    **kwargs (default {}): other keyword arguments to pass to the constructor for the Deployment class

    Source code in prefect/deployments/deployments.py
    @classmethod\n@sync_compatible\nasync def build_from_flow(\n    cls,\n    flow: Flow,\n    name: str,\n    output: str = None,\n    skip_upload: bool = False,\n    ignore_file: str = \".prefectignore\",\n    apply: bool = False,\n    load_existing: bool = True,\n    schedules: Optional[FlexibleScheduleList] = None,\n    **kwargs,\n) -> \"Deployment\":\n    \"\"\"\n    Configure a deployment for a given flow.\n\n    Args:\n        flow: A flow function to deploy\n        name: A name for the deployment\n        output (optional): if provided, the full deployment specification will be\n            written as a YAML file in the location specified by `output`\n        skip_upload: if True, deployment files are not automatically uploaded to\n            remote storage\n        ignore_file: an optional path to a `.prefectignore` file that specifies\n            filename patterns to ignore when uploading to remote storage; if not\n            provided, looks for `.prefectignore` in the current working directory\n        apply: if True, the deployment is automatically registered with the API\n        load_existing: if True, load any settings that may already be configured for\n            the named deployment server-side (e.g., schedules, default parameter\n            values, etc.)\n        schedules: An optional list of schedules. Each item in the list can be:\n              - An instance of `MinimalDeploymentSchedule`.\n              - A dictionary with a `schedule` key, and optionally, an\n                `active` key. The `schedule` key should correspond to a\n                schedule type, and `active` is a boolean indicating whether\n                the schedule is active or not.\n              - An instance of one of the predefined schedule types:\n                `IntervalSchedule`, `CronSchedule`, or `RRuleSchedule`.\n        **kwargs: other keyword arguments to pass to the constructor for the\n            `Deployment` class\n    \"\"\"\n    if not name:\n        raise ValueError(\"A deployment name must be provided.\")\n\n    # note that `deployment.load` only updates settings that were *not*\n    # provided at initialization\n\n    deployment_args = {\n        \"name\": name,\n        \"flow_name\": flow.name,\n        **kwargs,\n    }\n\n    if schedules is not None:\n        deployment_args[\"schedules\"] = schedules\n\n    deployment = cls(**deployment_args)\n    deployment.flow_name = flow.name\n    if not deployment.entrypoint:\n        ## first see if an entrypoint can be determined\n        flow_file = getattr(flow, \"__globals__\", {}).get(\"__file__\")\n        mod_name = getattr(flow, \"__module__\", None)\n        if not flow_file:\n            if not mod_name:\n                # todo, check if the file location was manually set already\n                raise ValueError(\"Could not determine flow's file location.\")\n            module = importlib.import_module(mod_name)\n            flow_file = getattr(module, \"__file__\", None)\n            if not flow_file:\n                raise ValueError(\"Could not determine flow's file location.\")\n\n        # set entrypoint\n        entry_path = Path(flow_file).absolute().relative_to(Path(\".\").absolute())\n        deployment.entrypoint = f\"{entry_path}:{flow.fn.__name__}\"\n\n    if load_existing:\n        await deployment.load()\n\n    # set a few attributes for this flow object\n    deployment.parameter_openapi_schema = parameter_schema(flow)\n\n    # ensure the ignore file exists\n    if not Path(ignore_file).exists():\n 
       Path(ignore_file).touch()\n\n    if not deployment.version:\n        deployment.version = flow.version\n    if not deployment.description:\n        deployment.description = flow.description\n\n    # proxy for whether infra is docker-based\n    is_docker_based = hasattr(deployment.infrastructure, \"image\")\n\n    if not deployment.storage and not is_docker_based and not deployment.path:\n        deployment.path = str(Path(\".\").absolute())\n    elif not deployment.storage and is_docker_based:\n        # only update if a path is not already set\n        if not deployment.path:\n            deployment.path = \"/opt/prefect/flows\"\n\n    if not skip_upload:\n        if (\n            deployment.storage\n            and \"put-directory\" in deployment.storage.get_block_capabilities()\n        ):\n            await deployment.upload_to_storage(ignore_file=ignore_file)\n\n    if output:\n        await deployment.to_yaml(output)\n\n    if apply:\n        await deployment.apply()\n\n    return deployment\n
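    An illustrative sketch of build_from_flow with a schedule attached through the schedules argument; the flow module and the IntervalSchedule import path are assumptions, not confirmed by this page:

    from datetime import timedelta

    from my_project.flows import my_flow  # hypothetical module
    from prefect.client.schemas.schedules import IntervalSchedule  # assumed import path
    from prefect.deployments import Deployment

    deployment = Deployment.build_from_flow(
        flow=my_flow,
        name="hourly",
        # a dict item with "schedule" and "active" keys, per the docstring above
        schedules=[{"schedule": IntervalSchedule(interval=timedelta(hours=1)), "active": True}],
        apply=True,  # register with the API immediately
    )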
    ","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.load","title":"load async","text":"

    Queries the API for a deployment with this name for this flow, and if found, prepopulates any settings that were not set at initialization.

    Returns a boolean specifying whether a load was successful or not.

    Raises:

    ValueError: if the deployment name or flow name is not set

    Source code in prefect/deployments/deployments.py
    @sync_compatible\nasync def load(self) -> bool:\n    \"\"\"\n    Queries the API for a deployment with this name for this flow, and if found,\n    prepopulates any settings that were not set at initialization.\n\n    Returns a boolean specifying whether a load was successful or not.\n\n    Raises:\n        - ValueError: if both name and flow name are not set\n    \"\"\"\n    if not self.name or not self.flow_name:\n        raise ValueError(\"Both a deployment name and flow name must be provided.\")\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(\n                f\"{self.flow_name}/{self.name}\"\n            )\n            if deployment.storage_document_id:\n                Block._from_block_document(\n                    await client.read_block_document(deployment.storage_document_id)\n                )\n\n            excluded_fields = self.__fields_set__.union(\n                {\n                    \"infrastructure\",\n                    \"storage\",\n                    \"timestamp\",\n                    \"triggers\",\n                    \"enforce_parameter_schema\",\n                    \"schedules\",\n                    \"schedule\",\n                    \"is_schedule_active\",\n                }\n            )\n            for field in set(self.__fields__.keys()) - excluded_fields:\n                new_value = getattr(deployment, field)\n                setattr(self, field, new_value)\n\n            if \"schedules\" not in self.__fields_set__:\n                self.schedules = [\n                    MinimalDeploymentSchedule(\n                        **schedule.dict(include={\"schedule\", \"active\"})\n                    )\n                    for schedule in deployment.schedules\n                ]\n\n            # The API server generates the \"schedule\" field from the\n            # current list of schedules, so if the user has locally set\n            # \"schedules\" to anything, we should avoid sending \"schedule\"\n            # and let the API server generate a new value if necessary.\n            if \"schedules\" in self.__fields_set__:\n                self.schedule = None\n                self.is_schedule_active = None\n            else:\n                # The user isn't using \"schedules,\" so we should\n                # populate \"schedule\" and \"is_schedule_active\" from the\n                # API's version of the deployment, unless the user gave\n                # us these fields in __init__().\n                if \"schedule\" not in self.__fields_set__:\n                    self.schedule = deployment.schedule\n                if \"is_schedule_active\" not in self.__fields_set__:\n                    self.is_schedule_active = deployment.is_schedule_active\n\n            if \"infrastructure\" not in self.__fields_set__:\n                if deployment.infrastructure_document_id:\n                    self.infrastructure = Block._from_block_document(\n                        await client.read_block_document(\n                            deployment.infrastructure_document_id\n                        )\n                    )\n            if \"storage\" not in self.__fields_set__:\n                if deployment.storage_document_id:\n                    self.storage = Block._from_block_document(\n                        await client.read_block_document(\n                            deployment.storage_document_id\n                        )\n                    )\n        except ObjectNotFound:\n   
         return False\n    return True\n
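    A small sketch of load, assuming a deployment named "example" already exists on the server for a flow called "my-flow"; any fields not set at initialization are filled in from the API copy:

    from prefect.deployments import Deployment

    deployment = Deployment(name="example", flow_name="my-flow")
    found = deployment.load()  # returns False if no matching deployment exists
    if found:
        print(deployment.parameters, deployment.tags)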
    ","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.update","title":"update async","text":"

    Performs an in-place update with the provided settings.

    Parameters:

    ignore_none (bool, default False): if True, all None values are ignored when performing the update

    Source code in prefect/deployments/deployments.py
    @sync_compatible\nasync def update(self, ignore_none: bool = False, **kwargs):\n    \"\"\"\n    Performs an in-place update with the provided settings.\n\n    Args:\n        ignore_none: if True, all `None` values are ignored when performing the\n            update\n    \"\"\"\n    unknown_keys = set(kwargs.keys()) - set(self.dict().keys())\n    if unknown_keys:\n        raise ValueError(\n            f\"Received unexpected attributes: {', '.join(unknown_keys)}\"\n        )\n    for key, value in kwargs.items():\n        if ignore_none and value is None:\n            continue\n        setattr(self, key, value)\n
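    A brief sketch of update; with ignore_none=True, None values are skipped so existing settings are preserved:

    from prefect.deployments import Deployment

    deployment = Deployment(name="example", flow_name="my-flow")
    # `description=None` is ignored here, so any existing description is left untouched.
    deployment.update(version="2", description=None, tags=["prod"], ignore_none=True)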
    ","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.upload_to_storage","title":"upload_to_storage async","text":"

    Uploads the workflow this deployment represents using a provided storage block; if no block is provided, defaults to configuring self for local storage.

    Parameters:

    storage_block (str, default None): a string reference to a remote storage block slug $type/$name; if provided, used to upload the workflow's project
    ignore_file (str, default '.prefectignore'): an optional path to a .prefectignore file that specifies filename patterns to ignore when uploading to remote storage; if not provided, looks for .prefectignore in the current working directory

    Source code in prefect/deployments/deployments.py
    @sync_compatible\nasync def upload_to_storage(\n    self, storage_block: str = None, ignore_file: str = \".prefectignore\"\n) -> Optional[int]:\n    \"\"\"\n    Uploads the workflow this deployment represents using a provided storage block;\n    if no block is provided, defaults to configuring self for local storage.\n\n    Args:\n        storage_block: a string reference a remote storage block slug `$type/$name`;\n            if provided, used to upload the workflow's project\n        ignore_file: an optional path to a `.prefectignore` file that specifies\n            filename patterns to ignore when uploading to remote storage; if not\n            provided, looks for `.prefectignore` in the current working directory\n    \"\"\"\n    file_count = None\n    if storage_block:\n        storage = await Block.load(storage_block)\n\n        if \"put-directory\" not in storage.get_block_capabilities():\n            raise BlockMissingCapabilities(\n                f\"Storage block {storage!r} missing 'put-directory' capability.\"\n            )\n\n        self.storage = storage\n\n        # upload current directory to storage location\n        file_count = await self.storage.put_directory(\n            ignore_file=ignore_file, to_path=self.path\n        )\n    elif self.storage:\n        if \"put-directory\" not in self.storage.get_block_capabilities():\n            raise BlockMissingCapabilities(\n                f\"Storage block {self.storage!r} missing 'put-directory'\"\n                \" capability.\"\n            )\n\n        file_count = await self.storage.put_directory(\n            ignore_file=ignore_file, to_path=self.path\n        )\n\n    # persists storage now in case it contains secret values\n    if self.storage and not self.storage._block_document_id:\n        await self.storage._save(is_anonymous=True)\n\n    return file_count\n
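    An illustrative sketch, assuming an S3 storage block named "dev-bucket" has already been saved; the block slug follows the $type/$name form described above:

    from prefect.deployments import Deployment

    deployment = Deployment(name="s3-example", flow_name="my-flow")
    # Upload the current directory to the pre-saved S3 block and attach it to this deployment.
    file_count = deployment.upload_to_storage("s3/dev-bucket")
    print(f"uploaded {file_count} files")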
    ","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.load_deployments_from_yaml","title":"load_deployments_from_yaml","text":"

    Load deployments from a yaml file.

    Source code in prefect/deployments/deployments.py
    @deprecated_callable(start_date=\"Mar 2024\")\ndef load_deployments_from_yaml(\n    path: str,\n) -> PrefectObjectRegistry:\n    \"\"\"\n    Load deployments from a yaml file.\n    \"\"\"\n    with open(path, \"r\") as f:\n        contents = f.read()\n\n    # Parse into a yaml tree to retrieve separate documents\n    nodes = yaml.compose_all(contents)\n\n    with PrefectObjectRegistry(capture_failures=True) as registry:\n        for node in nodes:\n            with tmpchdir(path):\n                deployment_dict = yaml.safe_load(yaml.serialize(node))\n                # The return value is not necessary, just instantiating the Deployment\n                # is enough to get it recorded on the registry\n                parse_obj_as(Deployment, deployment_dict)\n\n    return registry\n
    ","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.load_flow_from_flow_run","title":"load_flow_from_flow_run async","text":"

    Load a flow from the location/script provided in a deployment's storage document.

    If ignore_storage=True is provided, no pull from remote storage occurs. This flag is largely for testing, and assumes the flow is already available locally.

    Source code in prefect/deployments/deployments.py
    @inject_client\nasync def load_flow_from_flow_run(\n    flow_run: FlowRun,\n    client: PrefectClient,\n    ignore_storage: bool = False,\n    storage_base_path: Optional[str] = None,\n) -> Flow:\n    \"\"\"\n    Load a flow from the location/script provided in a deployment's storage document.\n\n    If `ignore_storage=True` is provided, no pull from remote storage occurs.  This flag\n    is largely for testing, and assumes the flow is already available locally.\n    \"\"\"\n    deployment = await client.read_deployment(flow_run.deployment_id)\n\n    if deployment.entrypoint is None:\n        raise ValueError(\n            f\"Deployment {deployment.id} does not have an entrypoint and can not be run.\"\n        )\n\n    run_logger = flow_run_logger(flow_run)\n\n    runner_storage_base_path = storage_base_path or os.environ.get(\n        \"PREFECT__STORAGE_BASE_PATH\"\n    )\n\n    # If there's no colon, assume it's a module path\n    if \":\" not in deployment.entrypoint:\n        run_logger.debug(\n            f\"Importing flow code from module path {deployment.entrypoint}\"\n        )\n        flow = await run_sync_in_worker_thread(\n            load_flow_from_entrypoint, deployment.entrypoint\n        )\n        return flow\n\n    if not ignore_storage and not deployment.pull_steps:\n        sys.path.insert(0, \".\")\n        if deployment.storage_document_id:\n            storage_document = await client.read_block_document(\n                deployment.storage_document_id\n            )\n            storage_block = Block._from_block_document(storage_document)\n        else:\n            basepath = deployment.path or Path(deployment.manifest_path).parent\n            if runner_storage_base_path:\n                basepath = str(basepath).replace(\n                    \"$STORAGE_BASE_PATH\", runner_storage_base_path\n                )\n            storage_block = LocalFileSystem(basepath=basepath)\n\n        from_path = (\n            str(deployment.path).replace(\"$STORAGE_BASE_PATH\", runner_storage_base_path)\n            if runner_storage_base_path and deployment.path\n            else deployment.path\n        )\n        run_logger.info(f\"Downloading flow code from storage at {from_path!r}\")\n        await storage_block.get_directory(from_path=from_path, local_path=\".\")\n\n    if deployment.pull_steps:\n        run_logger.debug(f\"Running {len(deployment.pull_steps)} deployment pull steps\")\n        output = await run_steps(deployment.pull_steps)\n        if output.get(\"directory\"):\n            run_logger.debug(f\"Changing working directory to {output['directory']!r}\")\n            os.chdir(output[\"directory\"])\n\n    import_path = relative_path_to_current_platform(deployment.entrypoint)\n    # for backwards compat\n    if deployment.manifest_path:\n        with open(deployment.manifest_path, \"r\") as f:\n            import_path = json.load(f)[\"import_path\"]\n            import_path = (\n                Path(deployment.manifest_path).parent / import_path\n            ).absolute()\n    run_logger.debug(f\"Importing flow code from '{import_path}'\")\n\n    flow = await run_sync_in_worker_thread(load_flow_from_entrypoint, str(import_path))\n\n    return flow\n
    ","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.run_deployment","title":"run_deployment async","text":"

    Create a flow run for a deployment and return it after completion or a timeout.

    By default, this function blocks until the flow run finishes executing. Specify a timeout (in seconds) to wait for the flow run to execute before returning flow run metadata. To return immediately, without waiting for the flow run to execute, set timeout=0.

    Note that if you specify a timeout, this function will return the flow run metadata whether or not the flow run finished executing.

    If called within a flow or task, the flow run this function creates will be linked to the current flow run as a subflow. Disable this behavior by passing as_subflow=False.

    Parameters:

    name (Union[str, UUID], required): The deployment id or deployment name in the form: \"flow name/deployment name\"
    parameters (Optional[dict], default None): Parameter overrides for this flow run. Merged with the deployment defaults.
    scheduled_time (Optional[datetime], default None): The time to schedule the flow run for, defaults to scheduling the flow run to start now.
    flow_run_name (Optional[str], default None): A name for the created flow run
    timeout (Optional[float], default None): The amount of time to wait (in seconds) for the flow run to complete before returning. Setting timeout to 0 will return the flow run metadata immediately. Setting timeout to None will allow this function to poll indefinitely. Defaults to None.
    poll_interval (Optional[float], default 5): The number of seconds between polls
    tags (Optional[Iterable[str]], default None): A list of tags to associate with this flow run; tags can be used in automations and for organizational purposes.
    idempotency_key (Optional[str], default None): A unique value to recognize retries of the same run, and prevent creating multiple flow runs.
    work_queue_name (Optional[str], default None): The name of a work queue to use for this run. Defaults to the default work queue for the deployment.
    as_subflow (Optional[bool], default True): Whether to link the flow run as a subflow of the current flow or task run.
    job_variables (Optional[dict], default None): A dictionary of dot delimited infrastructure overrides that will be applied at runtime; for example env.CONFIG_KEY=config_value or namespace='prefect'

    Source code in prefect/deployments/deployments.py
    @sync_compatible\n@deprecated_parameter(\n    \"infra_overrides\",\n    start_date=\"Apr 2024\",\n    help=\"Use `job_variables` instead.\",\n)\n@inject_client\nasync def run_deployment(\n    name: Union[str, UUID],\n    client: Optional[PrefectClient] = None,\n    parameters: Optional[dict] = None,\n    scheduled_time: Optional[datetime] = None,\n    flow_run_name: Optional[str] = None,\n    timeout: Optional[float] = None,\n    poll_interval: Optional[float] = 5,\n    tags: Optional[Iterable[str]] = None,\n    idempotency_key: Optional[str] = None,\n    work_queue_name: Optional[str] = None,\n    as_subflow: Optional[bool] = True,\n    infra_overrides: Optional[dict] = None,\n    job_variables: Optional[dict] = None,\n) -> FlowRun:\n    \"\"\"\n    Create a flow run for a deployment and return it after completion or a timeout.\n\n    By default, this function blocks until the flow run finishes executing.\n    Specify a timeout (in seconds) to wait for the flow run to execute before\n    returning flow run metadata. To return immediately, without waiting for the\n    flow run to execute, set `timeout=0`.\n\n    Note that if you specify a timeout, this function will return the flow run\n    metadata whether or not the flow run finished executing.\n\n    If called within a flow or task, the flow run this function creates will\n    be linked to the current flow run as a subflow. Disable this behavior by\n    passing `as_subflow=False`.\n\n    Args:\n        name: The deployment id or deployment name in the form:\n            `\"flow name/deployment name\"`\n        parameters: Parameter overrides for this flow run. Merged with the deployment\n            defaults.\n        scheduled_time: The time to schedule the flow run for, defaults to scheduling\n            the flow run to start now.\n        flow_run_name: A name for the created flow run\n        timeout: The amount of time to wait (in seconds) for the flow run to\n            complete before returning. Setting `timeout` to 0 will return the flow\n            run metadata immediately. Setting `timeout` to None will allow this\n            function to poll indefinitely. Defaults to None.\n        poll_interval: The number of seconds between polls\n        tags: A list of tags to associate with this flow run; tags can be used in\n            automations and for organizational purposes.\n        idempotency_key: A unique value to recognize retries of the same run, and\n            prevent creating multiple flow runs.\n        work_queue_name: The name of a work queue to use for this run. 
Defaults to\n            the default work queue for the deployment.\n        as_subflow: Whether to link the flow run as a subflow of the current\n            flow or task run.\n        job_variables: A dictionary of dot delimited infrastructure overrides that\n            will be applied at runtime; for example `env.CONFIG_KEY=config_value` or\n            `namespace='prefect'`\n    \"\"\"\n    if timeout is not None and timeout < 0:\n        raise ValueError(\"`timeout` cannot be negative\")\n\n    if scheduled_time is None:\n        scheduled_time = pendulum.now(\"UTC\")\n\n    jv = handle_deprecated_infra_overrides_parameter(job_variables, infra_overrides)\n\n    parameters = parameters or {}\n\n    deployment_id = None\n\n    if isinstance(name, UUID):\n        deployment_id = name\n    else:\n        try:\n            deployment_id = UUID(name)\n        except ValueError:\n            pass\n\n    if deployment_id:\n        deployment = await client.read_deployment(deployment_id=deployment_id)\n    else:\n        deployment = await client.read_deployment_by_name(name)\n\n    flow_run_ctx = FlowRunContext.get()\n    task_run_ctx = TaskRunContext.get()\n    if as_subflow and (flow_run_ctx or task_run_ctx):\n        # This was called from a flow. Link the flow run as a subflow.\n        from prefect.engine import (\n            Pending,\n            _dynamic_key_for_task_run,\n            collect_task_run_inputs,\n        )\n\n        task_inputs = {\n            k: await collect_task_run_inputs(v) for k, v in parameters.items()\n        }\n\n        if deployment_id:\n            flow = await client.read_flow(deployment.flow_id)\n            deployment_name = f\"{flow.name}/{deployment.name}\"\n        else:\n            deployment_name = name\n\n        # Generate a task in the parent flow run to represent the result of the subflow\n        dummy_task = Task(\n            name=deployment_name,\n            fn=lambda: None,\n            version=deployment.version,\n        )\n        # Override the default task key to include the deployment name\n        dummy_task.task_key = f\"{__name__}.run_deployment.{slugify(deployment_name)}\"\n        flow_run_id = (\n            flow_run_ctx.flow_run.id\n            if flow_run_ctx\n            else task_run_ctx.task_run.flow_run_id\n        )\n        dynamic_key = (\n            _dynamic_key_for_task_run(flow_run_ctx, dummy_task)\n            if flow_run_ctx\n            else task_run_ctx.task_run.dynamic_key\n        )\n        parent_task_run = await client.create_task_run(\n            task=dummy_task,\n            flow_run_id=flow_run_id,\n            dynamic_key=dynamic_key,\n            task_inputs=task_inputs,\n            state=Pending(),\n        )\n        parent_task_run_id = parent_task_run.id\n    else:\n        parent_task_run_id = None\n\n    flow_run = await client.create_flow_run_from_deployment(\n        deployment.id,\n        parameters=parameters,\n        state=Scheduled(scheduled_time=scheduled_time),\n        name=flow_run_name,\n        tags=tags,\n        idempotency_key=idempotency_key,\n        parent_task_run_id=parent_task_run_id,\n        work_queue_name=work_queue_name,\n        job_variables=jv,\n    )\n\n    flow_run_id = flow_run.id\n\n    if timeout == 0:\n        return flow_run\n\n    with anyio.move_on_after(timeout):\n        while True:\n            flow_run = await client.read_flow_run(flow_run_id)\n            flow_state = flow_run.state\n            if flow_state and flow_state.is_final():\n          
      return flow_run\n            await anyio.sleep(poll_interval)\n\n    return flow_run\n
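    A short sketch of run_deployment, assuming a deployment "my-flow/my-deployment" already exists; timeout=0 returns immediately with the scheduled run's metadata instead of waiting for completion:

    from prefect.deployments import run_deployment

    flow_run = run_deployment(
        name="my-flow/my-deployment",   # "flow name/deployment name" or a deployment UUID
        parameters={"sleep": 5},        # hypothetical parameter override
        timeout=0,                      # do not block on the run finishing
    )
    print(flow_run.id, flow_run.state)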
    ","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/runner/","title":"runner","text":"","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner","title":"prefect.deployments.runner","text":"

    Objects for creating and configuring deployments for flows using serve functionality.

    Example
    import time\nfrom prefect import flow, serve\n\n\n@flow\ndef slow_flow(sleep: int = 60):\n    \"Sleepy flow - sleeps the provided amount of time (in seconds).\"\n    time.sleep(sleep)\n\n\n@flow\ndef fast_flow():\n    \"Fastest flow this side of the Mississippi.\"\n    return\n\n\nif __name__ == \"__main__\":\n    # to_deployment creates RunnerDeployment instances\n    slow_deploy = slow_flow.to_deployment(name=\"sleeper\", interval=45)\n    fast_deploy = fast_flow.to_deployment(name=\"fast\")\n\n    serve(slow_deploy, fast_deploy)\n
    ","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.DeploymentApplyError","title":"DeploymentApplyError","text":"

    Bases: RuntimeError

    Raised when an error occurs while applying a deployment.

    Source code in prefect/deployments/runner.py
    class DeploymentApplyError(RuntimeError):\n    \"\"\"\n    Raised when an error occurs while applying a deployment.\n    \"\"\"\n
    ","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.DeploymentImage","title":"DeploymentImage","text":"

    Configuration used to build and push a Docker image for a deployment.

    Attributes:

    name: The name of the Docker image to build, including the registry and repository.
    tag: The tag to apply to the built image.
    dockerfile: The path to the Dockerfile to use for building the image. If not provided, a default Dockerfile will be generated.
    **build_kwargs: Additional keyword arguments to pass to the Docker build request. See the docker-py documentation for more information.

    Source code in prefect/deployments/runner.py
    class DeploymentImage:\n    \"\"\"\n    Configuration used to build and push a Docker image for a deployment.\n\n    Attributes:\n        name: The name of the Docker image to build, including the registry and\n            repository.\n        tag: The tag to apply to the built image.\n        dockerfile: The path to the Dockerfile to use for building the image. If\n            not provided, a default Dockerfile will be generated.\n        **build_kwargs: Additional keyword arguments to pass to the Docker build request.\n            See the [`docker-py` documentation](https://docker-py.readthedocs.io/en/stable/images.html#docker.models.images.ImageCollection.build)\n            for more information.\n\n    \"\"\"\n\n    def __init__(self, name, tag=None, dockerfile=\"auto\", **build_kwargs):\n        image_name, image_tag = parse_image_tag(name)\n        if tag and image_tag:\n            raise ValueError(\n                f\"Only one tag can be provided - both {image_tag!r} and {tag!r} were\"\n                \" provided as tags.\"\n            )\n        namespace, repository = split_repository_path(image_name)\n        # if the provided image name does not include a namespace (registry URL or user/org name),\n        # use the default namespace\n        if not namespace:\n            namespace = PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE.value()\n        # join the namespace and repository to create the full image name\n        # ignore namespace if it is None\n        self.name = \"/\".join(filter(None, [namespace, repository]))\n        self.tag = tag or image_tag or slugify(pendulum.now(\"utc\").isoformat())\n        self.dockerfile = dockerfile\n        self.build_kwargs = build_kwargs\n\n    @property\n    def reference(self):\n        return f\"{self.name}:{self.tag}\"\n\n    def build(self):\n        full_image_name = self.reference\n        build_kwargs = self.build_kwargs.copy()\n        build_kwargs[\"context\"] = Path.cwd()\n        build_kwargs[\"tag\"] = full_image_name\n        build_kwargs[\"pull\"] = build_kwargs.get(\"pull\", True)\n\n        if self.dockerfile == \"auto\":\n            with generate_default_dockerfile():\n                build_image(**build_kwargs)\n        else:\n            build_kwargs[\"dockerfile\"] = self.dockerfile\n            build_image(**build_kwargs)\n\n    def push(self):\n        with docker_client() as client:\n            events = client.api.push(\n                repository=self.name, tag=self.tag, stream=True, decode=True\n            )\n            for event in events:\n                if \"error\" in event:\n                    raise PushError(event[\"error\"])\n
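    A sketch of configuring a build with DeploymentImage; the registry/repository name and the extra build argument are assumptions used only to show where the values go:

    from prefect.deployments.runner import DeploymentImage

    image = DeploymentImage(
        name="registry.example.com/my-org/my-image",   # hypothetical registry/repository
        tag="v1",
        dockerfile="auto",                              # generate a default Dockerfile
        buildargs={"PYTHON_VERSION": "3.11"},           # extra kwargs forwarded to the Docker build
    )
    print(image.reference)  # "registry.example.com/my-org/my-image:v1"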
    ","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.EntrypointType","title":"EntrypointType","text":"

    Bases: Enum

    Enum representing an entrypoint type.

    File path entrypoints are in the format: path/to/file.py:function_name. Module path entrypoints are in the format: path.to.module.function_name.

    Source code in prefect/deployments/runner.py
    class EntrypointType(enum.Enum):\n    \"\"\"\n    Enum representing a entrypoint type.\n\n    File path entrypoints are in the format: `path/to/file.py:function_name`.\n    Module path entrypoints are in the format: `path.to.module.function_name`.\n    \"\"\"\n\n    FILE_PATH = \"file_path\"\n    MODULE_PATH = \"module_path\"\n
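    The two entrypoint styles look like this in practice; the file and module names below are illustrative only.

```python
from prefect.deployments.runner import EntrypointType

# File path entrypoints reference a file and the flow function inside it.
file_entrypoint = "flows/etl.py:daily_etl"          # matches EntrypointType.FILE_PATH
# Module path entrypoints reference an importable dotted path.
module_entrypoint = "my_package.flows.daily_etl"    # matches EntrypointType.MODULE_PATH

print(EntrypointType.FILE_PATH.value)    # "file_path"
print(EntrypointType.MODULE_PATH.value)  # "module_path"
```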
    ","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment","title":"RunnerDeployment","text":"

    Bases: BaseModel

    A Prefect RunnerDeployment definition, used for specifying and building deployments.

    Attributes:

    Name Type Description name str

    A name for the deployment (required).

    version Optional[str]

    An optional version for the deployment; defaults to the flow's version

    description Optional[str]

    An optional description of the deployment; defaults to the flow's description

    tags List[str]

    An optional list of tags to associate with this deployment; note that tags are used only for organizational purposes. For delegating work to agents, see work_queue_name.

    schedule Optional[SCHEDULE_TYPES]

    A schedule to run this deployment on, once registered

    is_schedule_active Optional[bool]

    Whether or not the schedule is active

    parameters Dict[str, Any]

    A dictionary of parameter values to pass to runs created from this deployment

    path Dict[str, Any]

    The path to the working directory for the workflow, relative to remote storage or, if stored on a local filesystem, an absolute path

    entrypoint Optional[str]

    The path to the entrypoint for the workflow, always relative to the path

    parameter_openapi_schema Optional[str]

    The parameter schema of the flow, including defaults.

    enforce_parameter_schema bool

    Whether or not the Prefect API should enforce the parameter schema for this deployment.

    work_pool_name Optional[str]

    The name of the work pool to use for this deployment.

    work_queue_name Optional[str]

    The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used.

    job_variables Dict[str, Any]

    Settings used to override the default values specified in the base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.

    Source code in prefect/deployments/runner.py
    class RunnerDeployment(BaseModel):\n    \"\"\"\n    A Prefect RunnerDeployment definition, used for specifying and building deployments.\n\n    Attributes:\n        name: A name for the deployment (required).\n        version: An optional version for the deployment; defaults to the flow's version\n        description: An optional description of the deployment; defaults to the flow's\n            description\n        tags: An optional list of tags to associate with this deployment; note that tags\n            are used only for organizational purposes. For delegating work to agents,\n            see `work_queue_name`.\n        schedule: A schedule to run this deployment on, once registered\n        is_schedule_active: Whether or not the schedule is active\n        parameters: A dictionary of parameter values to pass to runs created from this\n            deployment\n        path: The path to the working directory for the workflow, relative to remote\n            storage or, if stored on a local filesystem, an absolute path\n        entrypoint: The path to the entrypoint for the workflow, always relative to the\n            `path`\n        parameter_openapi_schema: The parameter schema of the flow, including defaults.\n        enforce_parameter_schema: Whether or not the Prefect API should enforce the\n            parameter schema for this deployment.\n        work_pool_name: The name of the work pool to use for this deployment.\n        work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n            If not provided the default work queue for the work pool will be used.\n        job_variables: Settings used to override the values specified default base job template\n            of the chosen work pool. Refer to the base job template of the chosen work pool for\n            available settings.\n    \"\"\"\n\n    class Config:\n        arbitrary_types_allowed = True\n\n    name: str = Field(..., description=\"The name of the deployment.\")\n    flow_name: Optional[str] = Field(\n        None, description=\"The name of the underlying flow; typically inferred.\"\n    )\n    description: Optional[str] = Field(\n        default=None, description=\"An optional description of the deployment.\"\n    )\n    version: Optional[str] = Field(\n        default=None, description=\"An optional version for the deployment.\"\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"One of more tags to apply to this deployment.\",\n    )\n    schedules: Optional[List[MinimalDeploymentSchedule]] = Field(\n        default=None,\n        description=\"The schedules that should cause this deployment to run.\",\n    )\n    schedule: Optional[SCHEDULE_TYPES] = None\n    paused: Optional[bool] = Field(\n        default=None, description=\"Whether or not the deployment is paused.\"\n    )\n    is_schedule_active: Optional[bool] = Field(\n        default=None, description=\"DEPRECATED: Whether or not the schedule is active.\"\n    )\n    parameters: Dict[str, Any] = Field(default_factory=dict)\n    entrypoint: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the entrypoint for the workflow, relative to the `path`.\"\n        ),\n    )\n    triggers: List[Union[DeploymentTriggerTypes, TriggerTypes]] = Field(\n        default_factory=list,\n        description=\"The triggers that should cause this deployment to run.\",\n    )\n    enforce_parameter_schema: bool = Field(\n        default=False,\n        
description=(\n            \"Whether or not the Prefect API should enforce the parameter schema for\"\n            \" this deployment.\"\n        ),\n    )\n    storage: Optional[RunnerStorage] = Field(\n        default=None,\n        description=(\n            \"The storage object used to retrieve flow code for this deployment.\"\n        ),\n    )\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The name of the work pool to use for this deployment. Only used when\"\n            \" the deployment is registered with a built runner.\"\n        ),\n    )\n    work_queue_name: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The name of the work queue to use for this deployment. Only used when\"\n            \" the deployment is registered with a built runner.\"\n        ),\n    )\n    job_variables: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=(\n            \"Job variables used to override the default values of a work pool\"\n            \" base job template. Only used when the deployment is registered with\"\n            \" a built runner.\"\n        ),\n    )\n    _entrypoint_type: EntrypointType = PrivateAttr(\n        default=EntrypointType.FILE_PATH,\n    )\n    _path: Optional[str] = PrivateAttr(\n        default=None,\n    )\n    _parameter_openapi_schema: ParameterSchema = PrivateAttr(\n        default_factory=ParameterSchema,\n    )\n\n    @property\n    def entrypoint_type(self) -> EntrypointType:\n        return self._entrypoint_type\n\n    @validator(\"triggers\", allow_reuse=True)\n    def validate_automation_names(cls, field_value, values):\n        \"\"\"Ensure that each trigger has a name for its automation if none is provided.\"\"\"\n        return validate_automation_names(field_value, values)\n\n    @root_validator(pre=True)\n    def reconcile_paused(cls, values):\n        return reconcile_paused_deployment(values)\n\n    @root_validator(pre=True)\n    def reconcile_schedules(cls, values):\n        return reconcile_schedules_runner(values)\n\n    @sync_compatible\n    async def apply(\n        self, work_pool_name: Optional[str] = None, image: Optional[str] = None\n    ) -> UUID:\n        \"\"\"\n        Registers this deployment with the API and returns the deployment's ID.\n\n        Args:\n            work_pool_name: The name of the work pool to use for this\n                deployment.\n            image: The registry, name, and tag of the Docker image to\n                use for this deployment. 
Only used when the deployment is\n                deployed to a work pool.\n\n        Returns:\n            The ID of the created deployment.\n        \"\"\"\n\n        work_pool_name = work_pool_name or self.work_pool_name\n\n        if image and not work_pool_name:\n            raise ValueError(\n                \"An image can only be provided when registering a deployment with a\"\n                \" work pool.\"\n            )\n\n        if self.work_queue_name and not work_pool_name:\n            raise ValueError(\n                \"A work queue can only be provided when registering a deployment with\"\n                \" a work pool.\"\n            )\n\n        if self.job_variables and not work_pool_name:\n            raise ValueError(\n                \"Job variables can only be provided when registering a deployment\"\n                \" with a work pool.\"\n            )\n\n        async with get_client() as client:\n            flow_id = await client.create_flow_from_name(self.flow_name)\n\n            create_payload = dict(\n                flow_id=flow_id,\n                name=self.name,\n                work_queue_name=self.work_queue_name,\n                work_pool_name=work_pool_name,\n                version=self.version,\n                paused=self.paused,\n                schedules=self.schedules,\n                parameters=self.parameters,\n                description=self.description,\n                tags=self.tags,\n                path=self._path,\n                entrypoint=self.entrypoint,\n                storage_document_id=None,\n                infrastructure_document_id=None,\n                parameter_openapi_schema=self._parameter_openapi_schema.dict(),\n                enforce_parameter_schema=self.enforce_parameter_schema,\n            )\n\n            if work_pool_name:\n                create_payload[\"job_variables\"] = self.job_variables\n                if image:\n                    create_payload[\"job_variables\"][\"image\"] = image\n                create_payload[\"path\"] = None if self.storage else self._path\n                create_payload[\"pull_steps\"] = (\n                    [self.storage.to_pull_step()] if self.storage else []\n                )\n\n            try:\n                deployment_id = await client.create_deployment(**create_payload)\n            except Exception as exc:\n                if isinstance(exc, PrefectHTTPStatusError):\n                    detail = exc.response.json().get(\"detail\")\n                    if detail:\n                        raise DeploymentApplyError(detail) from exc\n                raise DeploymentApplyError(\n                    f\"Error while applying deployment: {str(exc)}\"\n                ) from exc\n\n            if client.server_type.supports_automations():\n                try:\n                    # The triggers defined in the deployment spec are, essentially,\n                    # anonymous and attempting truly sync them with cloud is not\n                    # feasible. 
Instead, we remove all automations that are owned\n                    # by the deployment, meaning that they were created via this\n                    # mechanism below, and then recreate them.\n                    await client.delete_resource_owned_automations(\n                        f\"prefect.deployment.{deployment_id}\"\n                    )\n                except PrefectHTTPStatusError as e:\n                    if e.response.status_code == 404:\n                        # This Prefect server does not support automations, so we can safely\n                        # ignore this 404 and move on.\n                        return deployment_id\n                    raise e\n\n                for trigger in self.triggers:\n                    trigger.set_deployment_id(deployment_id)\n                    await client.create_automation(trigger.as_automation())\n\n            return deployment_id\n\n    @staticmethod\n    def _construct_deployment_schedules(\n        interval: Optional[\n            Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n        ] = None,\n        anchor_date: Optional[Union[datetime, str]] = None,\n        cron: Optional[Union[Iterable[str], str]] = None,\n        rrule: Optional[Union[Iterable[str], str]] = None,\n        timezone: Optional[str] = None,\n        schedule: Optional[SCHEDULE_TYPES] = None,\n        schedules: Optional[FlexibleScheduleList] = None,\n    ) -> Union[List[MinimalDeploymentSchedule], FlexibleScheduleList]:\n        \"\"\"\n        Construct a schedule or schedules from the provided arguments.\n\n        This method serves as a unified interface for creating deployment\n        schedules. If `schedules` is provided, it is directly returned. If\n        `schedule` is provided, it is encapsulated in a list and returned. If\n        `interval`, `cron`, or `rrule` are provided, they are used to construct\n        schedule objects.\n\n        Args:\n            interval: An interval on which to schedule runs, either as a single\n              value or as a list of values. Accepts numbers (interpreted as\n              seconds) or `timedelta` objects. Each value defines a separate\n              scheduling interval.\n            anchor_date: The anchor date from which interval schedules should\n              start. This applies to all intervals if a list is provided.\n            cron: A cron expression or a list of cron expressions defining cron\n              schedules. Each expression defines a separate cron schedule.\n            rrule: An rrule string or a list of rrule strings for scheduling.\n              Each string defines a separate recurrence rule.\n            timezone: The timezone to apply to the cron or rrule schedules.\n              This is a single value applied uniformly to all schedules.\n            schedule: A singular schedule object, used for advanced scheduling\n              options like specifying a timezone. This is returned as a list\n              containing this single schedule.\n            schedules: A pre-defined list of schedule objects. 
If provided,\n              this list is returned as-is, bypassing other schedule construction\n              logic.\n        \"\"\"\n\n        num_schedules = sum(\n            1\n            for entry in (interval, cron, rrule, schedule, schedules)\n            if entry is not None\n        )\n        if num_schedules > 1:\n            raise ValueError(\n                \"Only one of interval, cron, rrule, schedule, or schedules can be provided.\"\n            )\n        elif num_schedules == 0:\n            return []\n\n        if schedules is not None:\n            return schedules\n        elif interval or cron or rrule:\n            # `interval`, `cron`, and `rrule` can be lists of values. This\n            # block figures out which one is not None and uses that to\n            # construct the list of schedules via `construct_schedule`.\n            parameters = [(\"interval\", interval), (\"cron\", cron), (\"rrule\", rrule)]\n            schedule_type, value = [\n                param for param in parameters if param[1] is not None\n            ][0]\n\n            if not isiterable(value):\n                value = [value]\n\n            return [\n                create_minimal_deployment_schedule(\n                    construct_schedule(\n                        **{\n                            schedule_type: v,\n                            \"timezone\": timezone,\n                            \"anchor_date\": anchor_date,\n                        }\n                    )\n                )\n                for v in value\n            ]\n        else:\n            return [create_minimal_deployment_schedule(schedule)]\n\n    def _set_defaults_from_flow(self, flow: \"Flow\"):\n        self._parameter_openapi_schema = parameter_schema(flow)\n\n        if not self.version:\n            self.version = flow.version\n        if not self.description:\n            self.description = flow.description\n\n    @classmethod\n    def from_flow(\n        cls,\n        flow: \"Flow\",\n        name: str,\n        interval: Optional[\n            Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n        ] = None,\n        cron: Optional[Union[Iterable[str], str]] = None,\n        rrule: Optional[Union[Iterable[str], str]] = None,\n        paused: Optional[bool] = None,\n        schedules: Optional[FlexibleScheduleList] = None,\n        schedule: Optional[SCHEDULE_TYPES] = None,\n        is_schedule_active: Optional[bool] = None,\n        parameters: Optional[dict] = None,\n        triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n        description: Optional[str] = None,\n        tags: Optional[List[str]] = None,\n        version: Optional[str] = None,\n        enforce_parameter_schema: bool = False,\n        work_pool_name: Optional[str] = None,\n        work_queue_name: Optional[str] = None,\n        job_variables: Optional[Dict[str, Any]] = None,\n        entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n    ) -> \"RunnerDeployment\":\n        \"\"\"\n        Configure a deployment for a given flow.\n\n        Args:\n            flow: A flow function to deploy\n            name: A name for the deployment\n            interval: An interval on which to execute the current flow. Accepts either a number\n                or a timedelta object. 
If a number is given, it will be interpreted as seconds.\n            cron: A cron schedule of when to execute runs of this flow.\n            rrule: An rrule schedule of when to execute runs of this flow.\n            paused: Whether or not to set this deployment as paused.\n            schedules: A list of schedule objects defining when to execute runs of this deployment.\n                Used to define multiple schedules or additional scheduling options like `timezone`.\n            schedule: A schedule object of when to execute runs of this flow. Used for\n                advanced scheduling options like timezone.\n            is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n                not provided when creating a deployment, the schedule will be set as active. If not\n                provided when updating a deployment, the schedule's activation will not be changed.\n            triggers: A list of triggers that should kick of a run of this flow.\n            parameters: A dictionary of default parameter values to pass to runs of this flow.\n            description: A description for the created deployment. Defaults to the flow's\n                description if not provided.\n            tags: A list of tags to associate with the created deployment for organizational\n                purposes.\n            version: A version for the created deployment. Defaults to the flow's version.\n            enforce_parameter_schema: Whether or not the Prefect API should enforce the\n                parameter schema for this deployment.\n            work_pool_name: The name of the work pool to use for this deployment.\n            work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n                If not provided the default work queue for the work pool will be used.\n            job_variables: Settings used to override the values specified default base job template\n                of the chosen work pool. Refer to the base job template of the chosen work pool for\n                available settings.\n        \"\"\"\n        constructed_schedules = cls._construct_deployment_schedules(\n            interval=interval,\n            cron=cron,\n            rrule=rrule,\n            schedule=schedule,\n            schedules=schedules,\n        )\n\n        job_variables = job_variables or {}\n\n        deployment = cls(\n            name=Path(name).stem,\n            flow_name=flow.name,\n            schedule=schedule,\n            schedules=constructed_schedules,\n            is_schedule_active=is_schedule_active,\n            paused=paused,\n            tags=tags or [],\n            triggers=triggers or [],\n            parameters=parameters or {},\n            description=description,\n            version=version,\n            enforce_parameter_schema=enforce_parameter_schema,\n            work_pool_name=work_pool_name,\n            work_queue_name=work_queue_name,\n            job_variables=job_variables,\n        )\n\n        if not deployment.entrypoint:\n            no_file_location_error = (\n                \"Flows defined interactively cannot be deployed. 
Check out the\"\n                \" quickstart guide for help getting started:\"\n                \" https://docs.prefect.io/latest/getting-started/quickstart\"\n            )\n            ## first see if an entrypoint can be determined\n            flow_file = getattr(flow, \"__globals__\", {}).get(\"__file__\")\n            mod_name = getattr(flow, \"__module__\", None)\n            if entrypoint_type == EntrypointType.MODULE_PATH:\n                if mod_name:\n                    deployment.entrypoint = f\"{mod_name}.{flow.__name__}\"\n                else:\n                    raise ValueError(\n                        \"Unable to determine module path for provided flow.\"\n                    )\n            else:\n                if not flow_file:\n                    if not mod_name:\n                        raise ValueError(no_file_location_error)\n                    try:\n                        module = importlib.import_module(mod_name)\n                        flow_file = getattr(module, \"__file__\", None)\n                    except ModuleNotFoundError as exc:\n                        if \"__prefect_loader__\" in str(exc):\n                            raise ValueError(\n                                \"Cannot create a RunnerDeployment from a flow that has been\"\n                                \" loaded from an entrypoint. To deploy a flow via\"\n                                \" entrypoint, use RunnerDeployment.from_entrypoint instead.\"\n                            )\n                        raise ValueError(no_file_location_error)\n                    if not flow_file:\n                        raise ValueError(no_file_location_error)\n\n                # set entrypoint\n                entry_path = (\n                    Path(flow_file).absolute().relative_to(Path.cwd().absolute())\n                )\n                deployment.entrypoint = f\"{entry_path}:{flow.fn.__name__}\"\n\n        if entrypoint_type == EntrypointType.FILE_PATH and not deployment._path:\n            deployment._path = \".\"\n\n        deployment._entrypoint_type = entrypoint_type\n\n        cls._set_defaults_from_flow(deployment, flow)\n\n        return deployment\n\n    @classmethod\n    def from_entrypoint(\n        cls,\n        entrypoint: str,\n        name: str,\n        interval: Optional[\n            Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n        ] = None,\n        cron: Optional[Union[Iterable[str], str]] = None,\n        rrule: Optional[Union[Iterable[str], str]] = None,\n        paused: Optional[bool] = None,\n        schedules: Optional[FlexibleScheduleList] = None,\n        schedule: Optional[SCHEDULE_TYPES] = None,\n        is_schedule_active: Optional[bool] = None,\n        parameters: Optional[dict] = None,\n        triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n        description: Optional[str] = None,\n        tags: Optional[List[str]] = None,\n        version: Optional[str] = None,\n        enforce_parameter_schema: bool = False,\n        work_pool_name: Optional[str] = None,\n        work_queue_name: Optional[str] = None,\n        job_variables: Optional[Dict[str, Any]] = None,\n    ) -> \"RunnerDeployment\":\n        \"\"\"\n        Configure a deployment for a given flow located at a given entrypoint.\n\n        Args:\n            entrypoint:  The path to a file containing a flow and the name of the flow function in\n                the format `./path/to/file.py:flow_func_name`.\n            name: A name for the 
deployment\n            interval: An interval on which to execute the current flow. Accepts either a number\n                or a timedelta object. If a number is given, it will be interpreted as seconds.\n            cron: A cron schedule of when to execute runs of this flow.\n            rrule: An rrule schedule of when to execute runs of this flow.\n            paused: Whether or not to set this deployment as paused.\n            schedules: A list of schedule objects defining when to execute runs of this deployment.\n                Used to define multiple schedules or additional scheduling options like `timezone`.\n            schedule: A schedule object of when to execute runs of this flow. Used for\n                advanced scheduling options like timezone.\n            is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n                not provided when creating a deployment, the schedule will be set as active. If not\n                provided when updating a deployment, the schedule's activation will not be changed.\n            triggers: A list of triggers that should kick of a run of this flow.\n            parameters: A dictionary of default parameter values to pass to runs of this flow.\n            description: A description for the created deployment. Defaults to the flow's\n                description if not provided.\n            tags: A list of tags to associate with the created deployment for organizational\n                purposes.\n            version: A version for the created deployment. Defaults to the flow's version.\n            enforce_parameter_schema: Whether or not the Prefect API should enforce the\n                parameter schema for this deployment.\n            work_pool_name: The name of the work pool to use for this deployment.\n            work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n                If not provided the default work queue for the work pool will be used.\n            job_variables: Settings used to override the values specified default base job template\n                of the chosen work pool. 
Refer to the base job template of the chosen work pool for\n                available settings.\n        \"\"\"\n        from prefect.flows import load_flow_from_entrypoint\n\n        job_variables = job_variables or {}\n        flow = load_flow_from_entrypoint(entrypoint)\n\n        constructed_schedules = cls._construct_deployment_schedules(\n            interval=interval,\n            cron=cron,\n            rrule=rrule,\n            schedule=schedule,\n            schedules=schedules,\n        )\n\n        deployment = cls(\n            name=Path(name).stem,\n            flow_name=flow.name,\n            schedule=schedule,\n            schedules=constructed_schedules,\n            paused=paused,\n            is_schedule_active=is_schedule_active,\n            tags=tags or [],\n            triggers=triggers or [],\n            parameters=parameters or {},\n            description=description,\n            version=version,\n            entrypoint=entrypoint,\n            enforce_parameter_schema=enforce_parameter_schema,\n            work_pool_name=work_pool_name,\n            work_queue_name=work_queue_name,\n            job_variables=job_variables,\n        )\n        deployment._path = str(Path.cwd())\n\n        cls._set_defaults_from_flow(deployment, flow)\n\n        return deployment\n\n    @classmethod\n    @sync_compatible\n    async def from_storage(\n        cls,\n        storage: RunnerStorage,\n        entrypoint: str,\n        name: str,\n        interval: Optional[\n            Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n        ] = None,\n        cron: Optional[Union[Iterable[str], str]] = None,\n        rrule: Optional[Union[Iterable[str], str]] = None,\n        paused: Optional[bool] = None,\n        schedules: Optional[FlexibleScheduleList] = None,\n        schedule: Optional[SCHEDULE_TYPES] = None,\n        is_schedule_active: Optional[bool] = None,\n        parameters: Optional[dict] = None,\n        triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n        description: Optional[str] = None,\n        tags: Optional[List[str]] = None,\n        version: Optional[str] = None,\n        enforce_parameter_schema: bool = False,\n        work_pool_name: Optional[str] = None,\n        work_queue_name: Optional[str] = None,\n        job_variables: Optional[Dict[str, Any]] = None,\n    ):\n        \"\"\"\n        Create a RunnerDeployment from a flow located at a given entrypoint and stored in a\n        local storage location.\n\n        Args:\n            entrypoint:  The path to a file containing a flow and the name of the flow function in\n                the format `./path/to/file.py:flow_func_name`.\n            name: A name for the deployment\n            storage: A storage object to use for retrieving flow code. If not provided, a\n                URL must be provided.\n            interval: An interval on which to execute the current flow. Accepts either a number\n                or a timedelta object. If a number is given, it will be interpreted as seconds.\n            cron: A cron schedule of when to execute runs of this flow.\n            rrule: An rrule schedule of when to execute runs of this flow.\n            schedule: A schedule object of when to execute runs of this flow. Used for\n                advanced scheduling options like timezone.\n            is_schedule_active: Whether or not to set the schedule for this deployment as active. 
If\n                not provided when creating a deployment, the schedule will be set as active. If not\n                provided when updating a deployment, the schedule's activation will not be changed.\n            triggers: A list of triggers that should kick of a run of this flow.\n            parameters: A dictionary of default parameter values to pass to runs of this flow.\n            description: A description for the created deployment. Defaults to the flow's\n                description if not provided.\n            tags: A list of tags to associate with the created deployment for organizational\n                purposes.\n            version: A version for the created deployment. Defaults to the flow's version.\n            enforce_parameter_schema: Whether or not the Prefect API should enforce the\n                parameter schema for this deployment.\n            work_pool_name: The name of the work pool to use for this deployment.\n            work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n                If not provided the default work queue for the work pool will be used.\n            job_variables: Settings used to override the values specified default base job template\n                of the chosen work pool. Refer to the base job template of the chosen work pool for\n                available settings.\n        \"\"\"\n        from prefect.flows import load_flow_from_entrypoint\n\n        constructed_schedules = cls._construct_deployment_schedules(\n            interval=interval,\n            cron=cron,\n            rrule=rrule,\n            schedule=schedule,\n            schedules=schedules,\n        )\n\n        job_variables = job_variables or {}\n\n        with tempfile.TemporaryDirectory() as tmpdir:\n            storage.set_base_path(Path(tmpdir))\n            await storage.pull_code()\n\n            full_entrypoint = str(storage.destination / entrypoint)\n            flow = await from_async.wait_for_call_in_new_thread(\n                create_call(load_flow_from_entrypoint, full_entrypoint)\n            )\n\n        deployment = cls(\n            name=Path(name).stem,\n            flow_name=flow.name,\n            schedule=schedule,\n            schedules=constructed_schedules,\n            paused=paused,\n            is_schedule_active=is_schedule_active,\n            tags=tags or [],\n            triggers=triggers or [],\n            parameters=parameters or {},\n            description=description,\n            version=version,\n            entrypoint=entrypoint,\n            enforce_parameter_schema=enforce_parameter_schema,\n            storage=storage,\n            work_pool_name=work_pool_name,\n            work_queue_name=work_queue_name,\n            job_variables=job_variables,\n        )\n        deployment._path = str(storage.destination).replace(\n            tmpdir, \"$STORAGE_BASE_PATH\"\n        )\n\n        cls._set_defaults_from_flow(deployment, flow)\n\n        return deployment\n
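    As a rough sketch, a `RunnerDeployment` is typically built from a flow and then registered with `apply`. The flow, schedule, and tag below are illustrative; this assumes the code is saved to a file (so an entrypoint can be resolved) and that a Prefect API is reachable when `apply` runs.

```python
from prefect import flow
from prefect.deployments.runner import RunnerDeployment

@flow
def say_hello(name: str = "world"):
    print(f"Hello, {name}!")

if __name__ == "__main__":
    # Build a deployment definition from the flow and register it with the API.
    deployment = RunnerDeployment.from_flow(
        say_hello,
        name="hello-deployment",
        interval=3600,        # run every hour (numbers are interpreted as seconds)
        tags=["example"],
    )
    deployment_id = deployment.apply()  # sync-compatible; returns the deployment's UUID
    print(deployment_id)
```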
    ","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment.apply","title":"apply async","text":"

    Registers this deployment with the API and returns the deployment's ID.

    Parameters:

    Name Type Description Default work_pool_name Optional[str]

    The name of the work pool to use for this deployment.

    None image Optional[str]

    The registry, name, and tag of the Docker image to use for this deployment. Only used when the deployment is deployed to a work pool.

    None

    Returns:

    Type Description UUID

    The ID of the created deployment.

    Source code in prefect/deployments/runner.py
    @sync_compatible\nasync def apply(\n    self, work_pool_name: Optional[str] = None, image: Optional[str] = None\n) -> UUID:\n    \"\"\"\n    Registers this deployment with the API and returns the deployment's ID.\n\n    Args:\n        work_pool_name: The name of the work pool to use for this\n            deployment.\n        image: The registry, name, and tag of the Docker image to\n            use for this deployment. Only used when the deployment is\n            deployed to a work pool.\n\n    Returns:\n        The ID of the created deployment.\n    \"\"\"\n\n    work_pool_name = work_pool_name or self.work_pool_name\n\n    if image and not work_pool_name:\n        raise ValueError(\n            \"An image can only be provided when registering a deployment with a\"\n            \" work pool.\"\n        )\n\n    if self.work_queue_name and not work_pool_name:\n        raise ValueError(\n            \"A work queue can only be provided when registering a deployment with\"\n            \" a work pool.\"\n        )\n\n    if self.job_variables and not work_pool_name:\n        raise ValueError(\n            \"Job variables can only be provided when registering a deployment\"\n            \" with a work pool.\"\n        )\n\n    async with get_client() as client:\n        flow_id = await client.create_flow_from_name(self.flow_name)\n\n        create_payload = dict(\n            flow_id=flow_id,\n            name=self.name,\n            work_queue_name=self.work_queue_name,\n            work_pool_name=work_pool_name,\n            version=self.version,\n            paused=self.paused,\n            schedules=self.schedules,\n            parameters=self.parameters,\n            description=self.description,\n            tags=self.tags,\n            path=self._path,\n            entrypoint=self.entrypoint,\n            storage_document_id=None,\n            infrastructure_document_id=None,\n            parameter_openapi_schema=self._parameter_openapi_schema.dict(),\n            enforce_parameter_schema=self.enforce_parameter_schema,\n        )\n\n        if work_pool_name:\n            create_payload[\"job_variables\"] = self.job_variables\n            if image:\n                create_payload[\"job_variables\"][\"image\"] = image\n            create_payload[\"path\"] = None if self.storage else self._path\n            create_payload[\"pull_steps\"] = (\n                [self.storage.to_pull_step()] if self.storage else []\n            )\n\n        try:\n            deployment_id = await client.create_deployment(**create_payload)\n        except Exception as exc:\n            if isinstance(exc, PrefectHTTPStatusError):\n                detail = exc.response.json().get(\"detail\")\n                if detail:\n                    raise DeploymentApplyError(detail) from exc\n            raise DeploymentApplyError(\n                f\"Error while applying deployment: {str(exc)}\"\n            ) from exc\n\n        if client.server_type.supports_automations():\n            try:\n                # The triggers defined in the deployment spec are, essentially,\n                # anonymous and attempting truly sync them with cloud is not\n                # feasible. 
Instead, we remove all automations that are owned\n                # by the deployment, meaning that they were created via this\n                # mechanism below, and then recreate them.\n                await client.delete_resource_owned_automations(\n                    f\"prefect.deployment.{deployment_id}\"\n                )\n            except PrefectHTTPStatusError as e:\n                if e.response.status_code == 404:\n                    # This Prefect server does not support automations, so we can safely\n                    # ignore this 404 and move on.\n                    return deployment_id\n                raise e\n\n            for trigger in self.triggers:\n                trigger.set_deployment_id(deployment_id)\n                await client.create_automation(trigger.as_automation())\n\n        return deployment_id\n
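    A sketch of registering a deployment against a work pool with an explicit image. The pool name and image reference are hypothetical placeholders for infrastructure you already have, and a Prefect API must be reachable.

```python
from prefect import flow
from prefect.deployments.runner import RunnerDeployment

@flow
def nightly_report():
    print("Generating report")

if __name__ == "__main__":
    deployment = RunnerDeployment.from_flow(nightly_report, name="nightly-report")
    # "docker-pool" must be an existing work pool and the image must be one your
    # workers can pull; both are only used when registering with a work pool.
    deployment_id = deployment.apply(
        work_pool_name="docker-pool",
        image="registry.example.com/data-team/reports:v1",
    )
    print(f"Registered deployment {deployment_id}")
```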
    ","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment.from_entrypoint","title":"from_entrypoint classmethod","text":"

    Configure a deployment for a given flow located at a given entrypoint.

    Parameters:

    Name Type Description Default entrypoint str

    The path to a file containing a flow and the name of the flow function in the format ./path/to/file.py:flow_func_name.

    required name str

    A name for the deployment

    required interval Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]]

    An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.

    None cron Optional[Union[Iterable[str], str]]

    A cron schedule of when to execute runs of this flow.

    None rrule Optional[Union[Iterable[str], str]]

    An rrule schedule of when to execute runs of this flow.

    None paused Optional[bool]

    Whether or not to set this deployment as paused.

    None schedules Optional[FlexibleScheduleList]

    A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like timezone.

    None schedule Optional[SCHEDULE_TYPES]

    A schedule object of when to execute runs of this flow. Used for advanced scheduling options like timezone.

    None is_schedule_active Optional[bool]

    Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.

    None triggers Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]]

    A list of triggers that should kick off a run of this flow.

    None parameters Optional[dict]

    A dictionary of default parameter values to pass to runs of this flow.

    None description Optional[str]

    A description for the created deployment. Defaults to the flow's description if not provided.

    None tags Optional[List[str]]

    A list of tags to associate with the created deployment for organizational purposes.

    None version Optional[str]

    A version for the created deployment. Defaults to the flow's version.

    None enforce_parameter_schema bool

    Whether or not the Prefect API should enforce the parameter schema for this deployment.

    False work_pool_name Optional[str]

    The name of the work pool to use for this deployment.

    None work_queue_name Optional[str]

    The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used.

    None job_variables Optional[Dict[str, Any]]

    Settings used to override the default values specified in the base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.

    None Source code in prefect/deployments/runner.py
    @classmethod\ndef from_entrypoint(\n    cls,\n    entrypoint: str,\n    name: str,\n    interval: Optional[\n        Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n    ] = None,\n    cron: Optional[Union[Iterable[str], str]] = None,\n    rrule: Optional[Union[Iterable[str], str]] = None,\n    paused: Optional[bool] = None,\n    schedules: Optional[FlexibleScheduleList] = None,\n    schedule: Optional[SCHEDULE_TYPES] = None,\n    is_schedule_active: Optional[bool] = None,\n    parameters: Optional[dict] = None,\n    triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n    description: Optional[str] = None,\n    tags: Optional[List[str]] = None,\n    version: Optional[str] = None,\n    enforce_parameter_schema: bool = False,\n    work_pool_name: Optional[str] = None,\n    work_queue_name: Optional[str] = None,\n    job_variables: Optional[Dict[str, Any]] = None,\n) -> \"RunnerDeployment\":\n    \"\"\"\n    Configure a deployment for a given flow located at a given entrypoint.\n\n    Args:\n        entrypoint:  The path to a file containing a flow and the name of the flow function in\n            the format `./path/to/file.py:flow_func_name`.\n        name: A name for the deployment\n        interval: An interval on which to execute the current flow. Accepts either a number\n            or a timedelta object. If a number is given, it will be interpreted as seconds.\n        cron: A cron schedule of when to execute runs of this flow.\n        rrule: An rrule schedule of when to execute runs of this flow.\n        paused: Whether or not to set this deployment as paused.\n        schedules: A list of schedule objects defining when to execute runs of this deployment.\n            Used to define multiple schedules or additional scheduling options like `timezone`.\n        schedule: A schedule object of when to execute runs of this flow. Used for\n            advanced scheduling options like timezone.\n        is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n            not provided when creating a deployment, the schedule will be set as active. If not\n            provided when updating a deployment, the schedule's activation will not be changed.\n        triggers: A list of triggers that should kick of a run of this flow.\n        parameters: A dictionary of default parameter values to pass to runs of this flow.\n        description: A description for the created deployment. Defaults to the flow's\n            description if not provided.\n        tags: A list of tags to associate with the created deployment for organizational\n            purposes.\n        version: A version for the created deployment. Defaults to the flow's version.\n        enforce_parameter_schema: Whether or not the Prefect API should enforce the\n            parameter schema for this deployment.\n        work_pool_name: The name of the work pool to use for this deployment.\n        work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n            If not provided the default work queue for the work pool will be used.\n        job_variables: Settings used to override the values specified default base job template\n            of the chosen work pool. 
Refer to the base job template of the chosen work pool for\n            available settings.\n    \"\"\"\n    from prefect.flows import load_flow_from_entrypoint\n\n    job_variables = job_variables or {}\n    flow = load_flow_from_entrypoint(entrypoint)\n\n    constructed_schedules = cls._construct_deployment_schedules(\n        interval=interval,\n        cron=cron,\n        rrule=rrule,\n        schedule=schedule,\n        schedules=schedules,\n    )\n\n    deployment = cls(\n        name=Path(name).stem,\n        flow_name=flow.name,\n        schedule=schedule,\n        schedules=constructed_schedules,\n        paused=paused,\n        is_schedule_active=is_schedule_active,\n        tags=tags or [],\n        triggers=triggers or [],\n        parameters=parameters or {},\n        description=description,\n        version=version,\n        entrypoint=entrypoint,\n        enforce_parameter_schema=enforce_parameter_schema,\n        work_pool_name=work_pool_name,\n        work_queue_name=work_queue_name,\n        job_variables=job_variables,\n    )\n    deployment._path = str(Path.cwd())\n\n    cls._set_defaults_from_flow(deployment, flow)\n\n    return deployment\n
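    A sketch of building a deployment from an entrypoint string. It assumes a file `flows/etl.py` exists and defines a flow function named `daily_etl`; the schedule and default parameters are illustrative.

```python
from prefect.deployments.runner import RunnerDeployment

# Assumes flows/etl.py defines a @flow-decorated function named daily_etl.
deployment = RunnerDeployment.from_entrypoint(
    entrypoint="flows/etl.py:daily_etl",
    name="daily-etl",
    cron="0 6 * * *",                                   # every day at 06:00
    parameters={"source": "s3://example-bucket/raw"},   # illustrative defaults
    tags=["etl"],
)
deployment.apply()  # registers the deployment; requires a reachable Prefect API
```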
    ","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment.from_flow","title":"from_flow classmethod","text":"

    Configure a deployment for a given flow.

    Parameters:

    Name Type Description Default flow Flow

    A flow function to deploy

    required name str

    A name for the deployment

    required interval Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]]

    An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.

    None cron Optional[Union[Iterable[str], str]]

    A cron schedule of when to execute runs of this flow.

    None rrule Optional[Union[Iterable[str], str]]

    An rrule schedule of when to execute runs of this flow.

    None paused Optional[bool]

    Whether or not to set this deployment as paused.

    None schedules Optional[FlexibleScheduleList]

    A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like timezone.

    None schedule Optional[SCHEDULE_TYPES]

    A schedule object of when to execute runs of this flow. Used for advanced scheduling options like timezone.

    None is_schedule_active Optional[bool]

    Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.

    None triggers Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]]

    A list of triggers that should kick off a run of this flow.

    None parameters Optional[dict]

    A dictionary of default parameter values to pass to runs of this flow.

    None description Optional[str]

    A description for the created deployment. Defaults to the flow's description if not provided.

    None tags Optional[List[str]]

    A list of tags to associate with the created deployment for organizational purposes.

    None version Optional[str]

    A version for the created deployment. Defaults to the flow's version.

    None enforce_parameter_schema bool

    Whether or not the Prefect API should enforce the parameter schema for this deployment.

    False work_pool_name Optional[str]

    The name of the work pool to use for this deployment.

    None work_queue_name Optional[str]

    The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used.

    None job_variables Optional[Dict[str, Any]]

    Settings used to override the default values specified in the base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.

    None Source code in prefect/deployments/runner.py
    @classmethod\ndef from_flow(\n    cls,\n    flow: \"Flow\",\n    name: str,\n    interval: Optional[\n        Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n    ] = None,\n    cron: Optional[Union[Iterable[str], str]] = None,\n    rrule: Optional[Union[Iterable[str], str]] = None,\n    paused: Optional[bool] = None,\n    schedules: Optional[FlexibleScheduleList] = None,\n    schedule: Optional[SCHEDULE_TYPES] = None,\n    is_schedule_active: Optional[bool] = None,\n    parameters: Optional[dict] = None,\n    triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n    description: Optional[str] = None,\n    tags: Optional[List[str]] = None,\n    version: Optional[str] = None,\n    enforce_parameter_schema: bool = False,\n    work_pool_name: Optional[str] = None,\n    work_queue_name: Optional[str] = None,\n    job_variables: Optional[Dict[str, Any]] = None,\n    entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n) -> \"RunnerDeployment\":\n    \"\"\"\n    Configure a deployment for a given flow.\n\n    Args:\n        flow: A flow function to deploy\n        name: A name for the deployment\n        interval: An interval on which to execute the current flow. Accepts either a number\n            or a timedelta object. If a number is given, it will be interpreted as seconds.\n        cron: A cron schedule of when to execute runs of this flow.\n        rrule: An rrule schedule of when to execute runs of this flow.\n        paused: Whether or not to set this deployment as paused.\n        schedules: A list of schedule objects defining when to execute runs of this deployment.\n            Used to define multiple schedules or additional scheduling options like `timezone`.\n        schedule: A schedule object of when to execute runs of this flow. Used for\n            advanced scheduling options like timezone.\n        is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n            not provided when creating a deployment, the schedule will be set as active. If not\n            provided when updating a deployment, the schedule's activation will not be changed.\n        triggers: A list of triggers that should kick of a run of this flow.\n        parameters: A dictionary of default parameter values to pass to runs of this flow.\n        description: A description for the created deployment. Defaults to the flow's\n            description if not provided.\n        tags: A list of tags to associate with the created deployment for organizational\n            purposes.\n        version: A version for the created deployment. Defaults to the flow's version.\n        enforce_parameter_schema: Whether or not the Prefect API should enforce the\n            parameter schema for this deployment.\n        work_pool_name: The name of the work pool to use for this deployment.\n        work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n            If not provided the default work queue for the work pool will be used.\n        job_variables: Settings used to override the values specified default base job template\n            of the chosen work pool. 
Refer to the base job template of the chosen work pool for\n            available settings.\n    \"\"\"\n    constructed_schedules = cls._construct_deployment_schedules(\n        interval=interval,\n        cron=cron,\n        rrule=rrule,\n        schedule=schedule,\n        schedules=schedules,\n    )\n\n    job_variables = job_variables or {}\n\n    deployment = cls(\n        name=Path(name).stem,\n        flow_name=flow.name,\n        schedule=schedule,\n        schedules=constructed_schedules,\n        is_schedule_active=is_schedule_active,\n        paused=paused,\n        tags=tags or [],\n        triggers=triggers or [],\n        parameters=parameters or {},\n        description=description,\n        version=version,\n        enforce_parameter_schema=enforce_parameter_schema,\n        work_pool_name=work_pool_name,\n        work_queue_name=work_queue_name,\n        job_variables=job_variables,\n    )\n\n    if not deployment.entrypoint:\n        no_file_location_error = (\n            \"Flows defined interactively cannot be deployed. Check out the\"\n            \" quickstart guide for help getting started:\"\n            \" https://docs.prefect.io/latest/getting-started/quickstart\"\n        )\n        ## first see if an entrypoint can be determined\n        flow_file = getattr(flow, \"__globals__\", {}).get(\"__file__\")\n        mod_name = getattr(flow, \"__module__\", None)\n        if entrypoint_type == EntrypointType.MODULE_PATH:\n            if mod_name:\n                deployment.entrypoint = f\"{mod_name}.{flow.__name__}\"\n            else:\n                raise ValueError(\n                    \"Unable to determine module path for provided flow.\"\n                )\n        else:\n            if not flow_file:\n                if not mod_name:\n                    raise ValueError(no_file_location_error)\n                try:\n                    module = importlib.import_module(mod_name)\n                    flow_file = getattr(module, \"__file__\", None)\n                except ModuleNotFoundError as exc:\n                    if \"__prefect_loader__\" in str(exc):\n                        raise ValueError(\n                            \"Cannot create a RunnerDeployment from a flow that has been\"\n                            \" loaded from an entrypoint. To deploy a flow via\"\n                            \" entrypoint, use RunnerDeployment.from_entrypoint instead.\"\n                        )\n                    raise ValueError(no_file_location_error)\n                if not flow_file:\n                    raise ValueError(no_file_location_error)\n\n            # set entrypoint\n            entry_path = (\n                Path(flow_file).absolute().relative_to(Path.cwd().absolute())\n            )\n            deployment.entrypoint = f\"{entry_path}:{flow.fn.__name__}\"\n\n    if entrypoint_type == EntrypointType.FILE_PATH and not deployment._path:\n        deployment._path = \".\"\n\n    deployment._entrypoint_type = entrypoint_type\n\n    cls._set_defaults_from_flow(deployment, flow)\n\n    return deployment\n
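    A sketch of `from_flow` with a `timedelta` interval and an explicit entrypoint type. The flow and parameter values are illustrative, and the script must live in a file under the current working directory so a file-path entrypoint can be determined.

```python
from datetime import timedelta

from prefect import flow
from prefect.deployments.runner import EntrypointType, RunnerDeployment

@flow
def process_orders(batch_size: int = 100):
    print(f"Processing {batch_size} orders")

if __name__ == "__main__":
    deployment = RunnerDeployment.from_flow(
        process_orders,
        name="process-orders",
        interval=timedelta(minutes=30),            # a timedelta works as well as seconds
        entrypoint_type=EntrypointType.FILE_PATH,  # default; MODULE_PATH uses a dotted path
        parameters={"batch_size": 500},
    )
    print(deployment.entrypoint)  # e.g. "orders.py:process_orders", relative to the CWD
```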
    ","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment.from_storage","title":"from_storage async classmethod","text":"

    Create a RunnerDeployment from a flow located at a given entrypoint and stored in a local storage location.

    Parameters:

    Name Type Description Default entrypoint str

    The path to a file containing a flow and the name of the flow function in the format ./path/to/file.py:flow_func_name.

    required name str

    A name for the deployment

    required storage RunnerStorage

    A storage object to use for retrieving flow code. If not provided, a URL must be provided.

    required interval Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]]

    An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.

    None cron Optional[Union[Iterable[str], str]]

    A cron schedule of when to execute runs of this flow.

    None rrule Optional[Union[Iterable[str], str]]

    An rrule schedule of when to execute runs of this flow.

    None schedule Optional[SCHEDULE_TYPES]

    A schedule object of when to execute runs of this flow. Used for advanced scheduling options like timezone.

    None is_schedule_active Optional[bool]

    Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.

    None triggers Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]]

    A list of triggers that should kick off a run of this flow.

    None parameters Optional[dict]

    A dictionary of default parameter values to pass to runs of this flow.

    None description Optional[str]

    A description for the created deployment. Defaults to the flow's description if not provided.

    None tags Optional[List[str]]

    A list of tags to associate with the created deployment for organizational purposes.

    None version Optional[str]

    A version for the created deployment. Defaults to the flow's version.

    None enforce_parameter_schema bool

    Whether or not the Prefect API should enforce the parameter schema for this deployment.

    False work_pool_name Optional[str]

    The name of the work pool to use for this deployment.

    None work_queue_name Optional[str]

    The name of the work queue to use for this deployment's scheduled runs. If not provided the default work queue for the work pool will be used.

    None job_variables Optional[Dict[str, Any]]

    Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.

    None Source code in prefect/deployments/runner.py
    @classmethod\n@sync_compatible\nasync def from_storage(\n    cls,\n    storage: RunnerStorage,\n    entrypoint: str,\n    name: str,\n    interval: Optional[\n        Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]\n    ] = None,\n    cron: Optional[Union[Iterable[str], str]] = None,\n    rrule: Optional[Union[Iterable[str], str]] = None,\n    paused: Optional[bool] = None,\n    schedules: Optional[FlexibleScheduleList] = None,\n    schedule: Optional[SCHEDULE_TYPES] = None,\n    is_schedule_active: Optional[bool] = None,\n    parameters: Optional[dict] = None,\n    triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n    description: Optional[str] = None,\n    tags: Optional[List[str]] = None,\n    version: Optional[str] = None,\n    enforce_parameter_schema: bool = False,\n    work_pool_name: Optional[str] = None,\n    work_queue_name: Optional[str] = None,\n    job_variables: Optional[Dict[str, Any]] = None,\n):\n    \"\"\"\n    Create a RunnerDeployment from a flow located at a given entrypoint and stored in a\n    local storage location.\n\n    Args:\n        entrypoint:  The path to a file containing a flow and the name of the flow function in\n            the format `./path/to/file.py:flow_func_name`.\n        name: A name for the deployment\n        storage: A storage object to use for retrieving flow code. If not provided, a\n            URL must be provided.\n        interval: An interval on which to execute the current flow. Accepts either a number\n            or a timedelta object. If a number is given, it will be interpreted as seconds.\n        cron: A cron schedule of when to execute runs of this flow.\n        rrule: An rrule schedule of when to execute runs of this flow.\n        schedule: A schedule object of when to execute runs of this flow. Used for\n            advanced scheduling options like timezone.\n        is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n            not provided when creating a deployment, the schedule will be set as active. If not\n            provided when updating a deployment, the schedule's activation will not be changed.\n        triggers: A list of triggers that should kick of a run of this flow.\n        parameters: A dictionary of default parameter values to pass to runs of this flow.\n        description: A description for the created deployment. Defaults to the flow's\n            description if not provided.\n        tags: A list of tags to associate with the created deployment for organizational\n            purposes.\n        version: A version for the created deployment. Defaults to the flow's version.\n        enforce_parameter_schema: Whether or not the Prefect API should enforce the\n            parameter schema for this deployment.\n        work_pool_name: The name of the work pool to use for this deployment.\n        work_queue_name: The name of the work queue to use for this deployment's scheduled runs.\n            If not provided the default work queue for the work pool will be used.\n        job_variables: Settings used to override the values specified default base job template\n            of the chosen work pool. 
Refer to the base job template of the chosen work pool for\n            available settings.\n    \"\"\"\n    from prefect.flows import load_flow_from_entrypoint\n\n    constructed_schedules = cls._construct_deployment_schedules(\n        interval=interval,\n        cron=cron,\n        rrule=rrule,\n        schedule=schedule,\n        schedules=schedules,\n    )\n\n    job_variables = job_variables or {}\n\n    with tempfile.TemporaryDirectory() as tmpdir:\n        storage.set_base_path(Path(tmpdir))\n        await storage.pull_code()\n\n        full_entrypoint = str(storage.destination / entrypoint)\n        flow = await from_async.wait_for_call_in_new_thread(\n            create_call(load_flow_from_entrypoint, full_entrypoint)\n        )\n\n    deployment = cls(\n        name=Path(name).stem,\n        flow_name=flow.name,\n        schedule=schedule,\n        schedules=constructed_schedules,\n        paused=paused,\n        is_schedule_active=is_schedule_active,\n        tags=tags or [],\n        triggers=triggers or [],\n        parameters=parameters or {},\n        description=description,\n        version=version,\n        entrypoint=entrypoint,\n        enforce_parameter_schema=enforce_parameter_schema,\n        storage=storage,\n        work_pool_name=work_pool_name,\n        work_queue_name=work_queue_name,\n        job_variables=job_variables,\n    )\n    deployment._path = str(storage.destination).replace(\n        tmpdir, \"$STORAGE_BASE_PATH\"\n    )\n\n    cls._set_defaults_from_flow(deployment, flow)\n\n    return deployment\n
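
    A minimal sketch, not taken from the source, of building and applying a deployment from Git-based storage; the repository URL, entrypoint, schedule, and work pool name are placeholders, and running it assumes git is available locally and a Prefect API is reachable:

    from prefect.deployments.runner import RunnerDeployment\nfrom prefect.runner.storage import GitRepository\n\nif __name__ == \"__main__\":\n    storage = GitRepository(\n        url=\"https://github.com/org/repo.git\",\n        branch=\"main\",\n    )\n    deployment = RunnerDeployment.from_storage(\n        storage=storage,\n        entrypoint=\"flows.py:my_flow\",\n        name=\"example-from-storage\",\n        cron=\"0 * * * *\",\n        work_pool_name=\"my-work-pool\",\n    )\n    deployment.apply()\n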
    ","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.RunnerDeployment.validate_automation_names","title":"validate_automation_names","text":"

    Ensure that each trigger has a name for its automation if none is provided.

    Source code in prefect/deployments/runner.py
    @validator(\"triggers\", allow_reuse=True)\ndef validate_automation_names(cls, field_value, values):\n    \"\"\"Ensure that each trigger has a name for its automation if none is provided.\"\"\"\n    return validate_automation_names(field_value, values)\n
    ","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/runner/#prefect.deployments.runner.deploy","title":"deploy async","text":"

    Deploy the provided list of deployments to dynamic infrastructure via a work pool.

    By default, calling this function will build a Docker image for the deployments, push it to a registry, and create each deployment via the Prefect API that will run the corresponding flow on the given schedule.

    If you want to use an existing image, you can pass build=False to skip building and pushing an image.

    Parameters:

    Name Type Description Default *deployments RunnerDeployment

    A list of deployments to deploy.

    () work_pool_name Optional[str]

    The name of the work pool to use for these deployments. Defaults to the value of PREFECT_DEFAULT_WORK_POOL_NAME.

    None image Optional[Union[str, DeploymentImage]]

    The name of the Docker image to build, including the registry and repository. Pass a DeploymentImage instance to customize the Dockerfile used and build arguments.

    None build bool

    Whether or not to build a new image for the flow. If False, the provided image will be used as-is and pulled at runtime.

    True push bool

    Whether or not to push the built image to a registry.

    True print_next_steps_message bool

    Whether or not to print a message with next steps after deploying the deployments.

    True

    Returns:

    Type Description List[UUID]

    A list of deployment IDs for the created/updated deployments.

    Examples:

    Deploy a group of flows to a work pool:

    from prefect import deploy, flow\n\n@flow(log_prints=True)\ndef local_flow():\n    print(\"I'm a locally defined flow!\")\n\nif __name__ == \"__main__\":\n    deploy(\n        local_flow.to_deployment(name=\"example-deploy-local-flow\"),\n        flow.from_source(\n            source=\"https://github.com/org/repo.git\",\n            entrypoint=\"flows.py:my_flow\",\n        ).to_deployment(\n            name=\"example-deploy-remote-flow\",\n        ),\n        work_pool_name=\"my-work-pool\",\n        image=\"my-registry/my-image:dev\",\n    )\n
    Source code in prefect/deployments/runner.py
    @sync_compatible\nasync def deploy(\n    *deployments: RunnerDeployment,\n    work_pool_name: Optional[str] = None,\n    image: Optional[Union[str, DeploymentImage]] = None,\n    build: bool = True,\n    push: bool = True,\n    print_next_steps_message: bool = True,\n    ignore_warnings: bool = False,\n) -> List[UUID]:\n    \"\"\"\n    Deploy the provided list of deployments to dynamic infrastructure via a\n    work pool.\n\n    By default, calling this function will build a Docker image for the deployments, push it to a\n    registry, and create each deployment via the Prefect API that will run the corresponding\n    flow on the given schedule.\n\n    If you want to use an existing image, you can pass `build=False` to skip building and pushing\n    an image.\n\n    Args:\n        *deployments: A list of deployments to deploy.\n        work_pool_name: The name of the work pool to use for these deployments. Defaults to\n            the value of `PREFECT_DEFAULT_WORK_POOL_NAME`.\n        image: The name of the Docker image to build, including the registry and\n            repository. Pass a DeploymentImage instance to customize the Dockerfile used\n            and build arguments.\n        build: Whether or not to build a new image for the flow. If False, the provided\n            image will be used as-is and pulled at runtime.\n        push: Whether or not to skip pushing the built image to a registry.\n        print_next_steps_message: Whether or not to print a message with next steps\n            after deploying the deployments.\n\n    Returns:\n        A list of deployment IDs for the created/updated deployments.\n\n    Examples:\n        Deploy a group of flows to a work pool:\n\n        ```python\n        from prefect import deploy, flow\n\n        @flow(log_prints=True)\n        def local_flow():\n            print(\"I'm a locally defined flow!\")\n\n        if __name__ == \"__main__\":\n            deploy(\n                local_flow.to_deployment(name=\"example-deploy-local-flow\"),\n                flow.from_source(\n                    source=\"https://github.com/org/repo.git\",\n                    entrypoint=\"flows.py:my_flow\",\n                ).to_deployment(\n                    name=\"example-deploy-remote-flow\",\n                ),\n                work_pool_name=\"my-work-pool\",\n                image=\"my-registry/my-image:dev\",\n            )\n        ```\n    \"\"\"\n    work_pool_name = work_pool_name or PREFECT_DEFAULT_WORK_POOL_NAME.value()\n\n    if not image and not all(\n        d.storage or d.entrypoint_type == EntrypointType.MODULE_PATH\n        for d in deployments\n    ):\n        raise ValueError(\n            \"Either an image or remote storage location must be provided when deploying\"\n            \" a deployment.\"\n        )\n\n    if not work_pool_name:\n        raise ValueError(\n            \"A work pool name must be provided when deploying a deployment. Either\"\n            \" provide a work pool name when calling `deploy` or set\"\n            \" `PREFECT_DEFAULT_WORK_POOL_NAME` in your profile.\"\n        )\n\n    if image and isinstance(image, str):\n        image_name, image_tag = parse_image_tag(image)\n        image = DeploymentImage(name=image_name, tag=image_tag)\n\n    try:\n        async with get_client() as client:\n            work_pool = await client.read_work_pool(work_pool_name)\n    except ObjectNotFound as exc:\n        raise ValueError(\n            f\"Could not find work pool {work_pool_name!r}. 
Please create it before\"\n            \" deploying this flow.\"\n        ) from exc\n\n    is_docker_based_work_pool = get_from_dict(\n        work_pool.base_job_template, \"variables.properties.image\", False\n    )\n    is_block_based_work_pool = get_from_dict(\n        work_pool.base_job_template, \"variables.properties.block\", False\n    )\n    # carve out an exception for block based work pools that only have a block in their base job template\n    console = Console()\n    if not is_docker_based_work_pool and not is_block_based_work_pool:\n        if image:\n            raise ValueError(\n                f\"Work pool {work_pool_name!r} does not support custom Docker images.\"\n                \" Please use a work pool with an `image` variable in its base job template\"\n                \" or specify a remote storage location for the flow with `.from_source`.\"\n                \" If you are attempting to deploy a flow to a local process work pool,\"\n                \" consider using `flow.serve` instead. See the documentation for more\"\n                \" information: https://docs.prefect.io/latest/concepts/flows/#serving-a-flow\"\n            )\n        elif work_pool.type == \"process\" and not ignore_warnings:\n            console.print(\n                \"Looks like you're deploying to a process work pool. If you're creating a\"\n                \" deployment for local development, calling `.serve` on your flow is a great\"\n                \" way to get started. See the documentation for more information:\"\n                \" https://docs.prefect.io/latest/concepts/flows/#serving-a-flow. \"\n                \" Set `ignore_warnings=True` to suppress this message.\",\n                style=\"yellow\",\n            )\n\n    is_managed_pool = work_pool.is_managed_pool\n    if is_managed_pool:\n        build = False\n        push = False\n\n    if image and build:\n        with Progress(\n            SpinnerColumn(),\n            TextColumn(f\"Building image {image.reference}...\"),\n            transient=True,\n            console=console,\n        ) as progress:\n            docker_build_task = progress.add_task(\"docker_build\", total=1)\n            image.build()\n\n            progress.update(docker_build_task, completed=1)\n            console.print(\n                f\"Successfully built image {image.reference!r}\", style=\"green\"\n            )\n\n    if image and build and push:\n        with Progress(\n            SpinnerColumn(),\n            TextColumn(\"Pushing image...\"),\n            transient=True,\n            console=console,\n        ) as progress:\n            docker_push_task = progress.add_task(\"docker_push\", total=1)\n\n            image.push()\n\n            progress.update(docker_push_task, completed=1)\n\n        console.print(f\"Successfully pushed image {image.reference!r}\", style=\"green\")\n\n    deployment_exceptions = []\n    deployment_ids = []\n    image_ref = image.reference if image else None\n    for deployment in track(\n        deployments,\n        description=\"Creating/updating deployments...\",\n        console=console,\n        transient=True,\n    ):\n        try:\n            deployment_ids.append(\n                await deployment.apply(image=image_ref, work_pool_name=work_pool_name)\n            )\n        except Exception as exc:\n            if len(deployments) == 1:\n                raise\n            deployment_exceptions.append({\"deployment\": deployment, \"exc\": exc})\n\n    if deployment_exceptions:\n        
console.print(\n            \"Encountered errors while creating/updating deployments:\\n\",\n            style=\"orange_red1\",\n        )\n    else:\n        console.print(\"Successfully created/updated all deployments!\\n\", style=\"green\")\n\n    complete_failure = len(deployment_exceptions) == len(deployments)\n\n    table = Table(\n        title=\"Deployments\",\n        show_lines=True,\n    )\n\n    table.add_column(header=\"Name\", style=\"blue\", no_wrap=True)\n    table.add_column(header=\"Status\", style=\"blue\", no_wrap=True)\n    table.add_column(header=\"Details\", style=\"blue\")\n\n    for deployment in deployments:\n        errored_deployment = next(\n            (d for d in deployment_exceptions if d[\"deployment\"] == deployment),\n            None,\n        )\n        if errored_deployment:\n            table.add_row(\n                f\"{deployment.flow_name}/{deployment.name}\",\n                \"failed\",\n                str(errored_deployment[\"exc\"]),\n                style=\"red\",\n            )\n        else:\n            table.add_row(f\"{deployment.flow_name}/{deployment.name}\", \"applied\")\n    console.print(table)\n\n    if print_next_steps_message and not complete_failure:\n        if not work_pool.is_push_pool and not work_pool.is_managed_pool:\n            console.print(\n                \"\\nTo execute flow runs from these deployments, start a worker in a\"\n                \" separate terminal that pulls work from the\"\n                f\" {work_pool_name!r} work pool:\"\n            )\n            console.print(\n                f\"\\n\\t$ prefect worker start --pool {work_pool_name!r}\",\n                style=\"blue\",\n            )\n        console.print(\n            \"\\nTo trigger any of these deployments, use the\"\n            \" following command:\\n[blue]\\n\\t$ prefect deployment run\"\n            \" [DEPLOYMENT_NAME]\\n[/]\"\n        )\n\n        if PREFECT_UI_URL:\n            console.print(\n                \"\\nYou can also trigger your deployments via the Prefect UI:\"\n                f\" [blue]{PREFECT_UI_URL.value()}/deployments[/]\\n\"\n            )\n\n    return deployment_ids\n
    ","tags":["Python API","flow runs","deployments","runners"]},{"location":"api-ref/prefect/deployments/steps/core/","title":"steps.core","text":"","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/steps/core/#prefect.deployments.steps.core","title":"prefect.deployments.steps.core","text":"

    Core primitives for running Prefect deployment steps.

    Deployment steps are YAML representations of Python functions along with their inputs.

    Whenever a step is run, the following actions are taken:

    • The step's inputs and block / variable references are resolved (see the prefect deploy documentation for more details)
    • The step's function is imported; if it cannot be found, the requires keyword is used to install the necessary packages
    • The step's function is called with the resolved inputs
    • The step's output is returned and used to resolve inputs for subsequent steps
    ","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/steps/core/#prefect.deployments.steps.core.StepExecutionError","title":"StepExecutionError","text":"

    Bases: Exception

    Raised when a step fails to execute.

    Source code in prefect/deployments/steps/core.py
    class StepExecutionError(Exception):\n    \"\"\"\n    Raised when a step fails to execute.\n    \"\"\"\n
    ","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/steps/core/#prefect.deployments.steps.core.run_step","title":"run_step async","text":"

    Runs a step, returns the step's output.

    Steps are assumed to be in the format {\"importable.func.name\": {\"kwarg1\": \"value1\", ...}}.

    The 'id' and 'requires' keywords are reserved for specific purposes and will be removed from the inputs before passing to the step function:

    The 'id' keyword gives the step a name that later steps can use to reference its outputs, and the 'requires' keyword specifies packages that should be installed before running the step.

    Source code in prefect/deployments/steps/core.py
    async def run_step(step: Dict, upstream_outputs: Optional[Dict] = None) -> Dict:\n    \"\"\"\n    Runs a step, returns the step's output.\n\n    Steps are assumed to be in the format `{\"importable.func.name\": {\"kwarg1\": \"value1\", ...}}`.\n\n    The 'id and 'requires' keywords are reserved for specific purposes and will be removed from the\n    inputs before passing to the step function:\n\n    This keyword is used to specify packages that should be installed before running the step.\n    \"\"\"\n    fqn, inputs = _get_step_fully_qualified_name_and_inputs(step)\n    upstream_outputs = upstream_outputs or {}\n\n    if len(step.keys()) > 1:\n        raise ValueError(\n            f\"Step has unexpected additional keys: {', '.join(list(step.keys())[1:])}\"\n        )\n\n    keywords = {\n        keyword: inputs.pop(keyword)\n        for keyword in RESERVED_KEYWORDS\n        if keyword in inputs\n    }\n\n    inputs = apply_values(inputs, upstream_outputs)\n    inputs = await resolve_block_document_references(inputs)\n    inputs = await resolve_variables(inputs)\n    inputs = apply_values(inputs, os.environ)\n    step_func = _get_function_for_step(fqn, requires=keywords.get(\"requires\"))\n    result = await from_async.call_soon_in_new_thread(\n        Call.new(step_func, **inputs)\n    ).aresult()\n    return result\n
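
    A minimal sketch, not taken from the source, of invoking run_step programmatically with a run_shell_script step; the command shown is illustrative and assumes git is available on the PATH:

    import asyncio\n\nfrom prefect.deployments.steps.core import run_step\n\nstep = {\n    \"prefect.deployments.steps.run_shell_script\": {\n        \"script\": \"git rev-parse --short HEAD\",\n        \"stream_output\": False,\n    }\n}\n\nif __name__ == \"__main__\":\n    outputs = asyncio.run(run_step(step))\n    print(outputs[\"stdout\"])  # the short commit hash\n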
    ","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/steps/pull/","title":"steps.pull","text":"","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull","title":"prefect.deployments.steps.pull","text":"

    Core set of steps for specifying a Prefect project pull step.

    ","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.git_clone","title":"git_clone async","text":"

    Clones a git repository into the current working directory.

    Parameters:

    Name Type Description Default repository str

    the URL of the repository to clone

    required branch Optional[str]

    the branch to clone; if not provided, the default branch will be used

    None include_submodules bool

    whether to include git submodules when cloning the repository

    False access_token Optional[str]

    an access token to use for cloning the repository; if not provided the repository will be cloned using the default git credentials

    None credentials Optional[Block]

    a GitHubCredentials, GitLabCredentials, or BitBucketCredentials block can be used to specify the credentials to use for cloning the repository.

    None

    Returns:

    Name Type Description dict dict

    a dictionary containing a directory key of the new directory that was created

    Raises:

    Type Description CalledProcessError

    if the git clone command fails for any reason

    Examples:

    Clone a public repository:

    pull:\n    - prefect.deployments.steps.git_clone:\n        repository: https://github.com/PrefectHQ/prefect.git\n

    Clone a branch of a public repository:

    pull:\n    - prefect.deployments.steps.git_clone:\n        repository: https://github.com/PrefectHQ/prefect.git\n        branch: my-branch\n

    Clone a private repository using a GitHubCredentials block:

    pull:\n    - prefect.deployments.steps.git_clone:\n        repository: https://github.com/org/repo.git\n        credentials: \"{{ prefect.blocks.github-credentials.my-github-credentials-block }}\"\n

    Clone a private repository using an access token:

    pull:\n    - prefect.deployments.steps.git_clone:\n        repository: https://github.com/org/repo.git\n        access_token: \"{{ prefect.blocks.secret.github-access-token }}\" # Requires creation of a Secret block\n
    Note that you will need to create a Secret block to store the value of your git credentials. You can also store a username/password combo or token prefix (e.g. x-token-auth) in your secret block. Refer to your git provider's documentation for the correct authentication schema.

    Clone a repository with submodules:

    pull:\n    - prefect.deployments.steps.git_clone:\n        repository: https://github.com/org/repo.git\n        include_submodules: true\n

    Clone a repository with an SSH key (note that the SSH key must be added to the worker before executing flows):

    pull:\n    - prefect.deployments.steps.git_clone:\n        repository: git@github.com:org/repo.git\n

    Source code in prefect/deployments/steps/pull.py
    @sync_compatible\nasync def git_clone(\n    repository: str,\n    branch: Optional[str] = None,\n    include_submodules: bool = False,\n    access_token: Optional[str] = None,\n    credentials: Optional[Block] = None,\n) -> dict:\n    \"\"\"\n    Clones a git repository into the current working directory.\n\n    Args:\n        repository: the URL of the repository to clone\n        branch: the branch to clone; if not provided, the default branch will be used\n        include_submodules (bool): whether to include git submodules when cloning the repository\n        access_token: an access token to use for cloning the repository; if not provided\n            the repository will be cloned using the default git credentials\n        credentials: a GitHubCredentials, GitLabCredentials, or BitBucketCredentials block can be used to specify the\n            credentials to use for cloning the repository.\n\n    Returns:\n        dict: a dictionary containing a `directory` key of the new directory that was created\n\n    Raises:\n        subprocess.CalledProcessError: if the git clone command fails for any reason\n\n    Examples:\n        Clone a public repository:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: https://github.com/PrefectHQ/prefect.git\n        ```\n\n        Clone a branch of a public repository:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: https://github.com/PrefectHQ/prefect.git\n                branch: my-branch\n        ```\n\n        Clone a private repository using a GitHubCredentials block:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: https://github.com/org/repo.git\n                credentials: \"{{ prefect.blocks.github-credentials.my-github-credentials-block }}\"\n        ```\n\n        Clone a private repository using an access token:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: https://github.com/org/repo.git\n                access_token: \"{{ prefect.blocks.secret.github-access-token }}\" # Requires creation of a Secret block\n        ```\n        Note that you will need to [create a Secret block](/concepts/blocks/#using-existing-block-types) to store the\n        value of your git credentials. You can also store a username/password combo or token prefix (e.g. `x-token-auth`)\n        in your secret block. 
Refer to your git providers documentation for the correct authentication schema.\n\n        Clone a repository with submodules:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: https://github.com/org/repo.git\n                include_submodules: true\n        ```\n\n        Clone a repository with an SSH key (note that the SSH key must be added to the worker\n        before executing flows):\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: git@github.com:org/repo.git\n        ```\n    \"\"\"\n    if access_token and credentials:\n        raise ValueError(\n            \"Please provide either an access token or credentials but not both.\"\n        )\n\n    credentials = {\"access_token\": access_token} if access_token else credentials\n\n    storage = GitRepository(\n        url=repository,\n        credentials=credentials,\n        branch=branch,\n        include_submodules=include_submodules,\n    )\n\n    await storage.pull_code()\n\n    directory = str(storage.destination.relative_to(Path.cwd()))\n    deployment_logger.info(f\"Cloned repository {repository!r} into {directory!r}\")\n    return {\"directory\": directory}\n
    ","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.git_clone_project","title":"git_clone_project async","text":"

    Deprecated. Use git_clone instead.

    Source code in prefect/deployments/steps/pull.py
    @deprecated_callable(start_date=\"Jun 2023\", help=\"Use 'git clone' instead.\")\n@sync_compatible\nasync def git_clone_project(\n    repository: str,\n    branch: Optional[str] = None,\n    include_submodules: bool = False,\n    access_token: Optional[str] = None,\n) -> dict:\n    \"\"\"Deprecated. Use `git_clone` instead.\"\"\"\n    return await git_clone(\n        repository=repository,\n        branch=branch,\n        include_submodules=include_submodules,\n        access_token=access_token,\n    )\n
    ","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.pull_from_remote_storage","title":"pull_from_remote_storage async","text":"

    Pulls code from a remote storage location into the current working directory.

    Works with protocols supported by fsspec.

    Parameters:

    Name Type Description Default url str

    the URL of the remote storage location. Should be a valid fsspec URL. Some protocols may require an additional fsspec dependency to be installed. Refer to the fsspec docs for more details.

    required **settings Any

    any additional settings to pass to the fsspec filesystem class.

    {}

    Returns:

    Name Type Description dict

    a dictionary containing a directory key of the new directory that was created

    Examples:

    Pull code from a remote storage location:

    pull:\n    - prefect.deployments.steps.pull_from_remote_storage:\n        url: s3://my-bucket/my-folder\n

    Pull code from a remote storage location with additional settings:

    pull:\n    - prefect.deployments.steps.pull_from_remote_storage:\n        url: s3://my-bucket/my-folder\n        key: {{ prefect.blocks.secret.my-aws-access-key }}\n        secret: {{ prefect.blocks.secret.my-aws-secret-key }}\n

    Source code in prefect/deployments/steps/pull.py
    async def pull_from_remote_storage(url: str, **settings: Any):\n    \"\"\"\n    Pulls code from a remote storage location into the current working directory.\n\n    Works with protocols supported by `fsspec`.\n\n    Args:\n        url (str): the URL of the remote storage location. Should be a valid `fsspec` URL.\n            Some protocols may require an additional `fsspec` dependency to be installed.\n            Refer to the [`fsspec` docs](https://filesystem-spec.readthedocs.io/en/latest/api.html#other-known-implementations)\n            for more details.\n        **settings (Any): any additional settings to pass the `fsspec` filesystem class.\n\n    Returns:\n        dict: a dictionary containing a `directory` key of the new directory that was created\n\n    Examples:\n        Pull code from a remote storage location:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.pull_from_remote_storage:\n                url: s3://my-bucket/my-folder\n        ```\n\n        Pull code from a remote storage location with additional settings:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.pull_from_remote_storage:\n                url: s3://my-bucket/my-folder\n                key: {{ prefect.blocks.secret.my-aws-access-key }}}\n                secret: {{ prefect.blocks.secret.my-aws-secret-key }}}\n        ```\n    \"\"\"\n    storage = RemoteStorage(url, **settings)\n\n    await storage.pull_code()\n\n    directory = str(storage.destination.relative_to(Path.cwd()))\n    deployment_logger.info(f\"Pulled code from {url!r} into {directory!r}\")\n    return {\"directory\": directory}\n
    ","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.pull_with_block","title":"pull_with_block async","text":"

    Pulls code using a block.

    Parameters:

    Name Type Description Default block_document_name str

    The name of the block document to use

    required block_type_slug str

    The slug of the type of block to use

    required Source code in prefect/deployments/steps/pull.py
    async def pull_with_block(block_document_name: str, block_type_slug: str):\n    \"\"\"\n    Pulls code using a block.\n\n    Args:\n        block_document_name: The name of the block document to use\n        block_type_slug: The slug of the type of block to use\n    \"\"\"\n    full_slug = f\"{block_type_slug}/{block_document_name}\"\n    try:\n        block = await Block.load(full_slug)\n    except Exception:\n        deployment_logger.exception(\"Unable to load block '%s'\", full_slug)\n        raise\n\n    try:\n        storage = BlockStorageAdapter(block)\n    except Exception:\n        deployment_logger.exception(\n            \"Unable to create storage adapter for block '%s'\", full_slug\n        )\n        raise\n\n    await storage.pull_code()\n\n    directory = str(storage.destination.relative_to(Path.cwd()))\n    deployment_logger.info(\n        \"Pulled code using block '%s' into '%s'\", full_slug, directory\n    )\n    return {\"directory\": directory}\n
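
    A hedged sketch, not taken from the source, of calling pull_with_block directly; the block name and slug are hypothetical and assume a storage-capable block (for example an s3-bucket block from prefect-aws) has already been saved as my-code-bucket:

    import asyncio\n\nfrom prefect.deployments.steps.pull import pull_with_block\n\nif __name__ == \"__main__\":\n    # assumes a block with a get_directory implementation (e.g. an s3-bucket block\n    # from prefect-aws) has already been saved as \"my-code-bucket\"\n    result = asyncio.run(\n        pull_with_block(\n            block_document_name=\"my-code-bucket\",\n            block_type_slug=\"s3-bucket\",\n        )\n    )\n    print(result[\"directory\"])\n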
    ","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.set_working_directory","title":"set_working_directory","text":"

    Sets the working directory; works with both absolute and relative paths.

    Parameters:

    Name Type Description Default directory str

    the directory to set as the working directory

    required

    Returns:

    Name Type Description dict dict

    a dictionary containing a directory key of the directory that was set

    Source code in prefect/deployments/steps/pull.py
    def set_working_directory(directory: str) -> dict:\n    \"\"\"\n    Sets the working directory; works with both absolute and relative paths.\n\n    Args:\n        directory (str): the directory to set as the working directory\n\n    Returns:\n        dict: a dictionary containing a `directory` key of the\n            directory that was set\n    \"\"\"\n    os.chdir(directory)\n    return dict(directory=directory)\n
    ","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/utility/","title":"steps.utility","text":"","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility","title":"prefect.deployments.steps.utility","text":"

    Utility project steps that are useful for managing a project's deployment lifecycle.

    Steps within this module can be used within a build, push, or pull deployment action.

    Example

    Use the run_shell_script step to retrieve the short Git commit hash of the current repository and use it as a Docker image tag:

    build:\n    - prefect.deployments.steps.run_shell_script:\n        id: get-commit-hash\n        script: git rev-parse --short HEAD\n        stream_output: false\n    - prefect_docker.deployments.steps.build_docker_image:\n        requires: prefect-docker\n        image_name: my-image\n        image_tag: \"{{ get-commit-hash.stdout }}\"\n        dockerfile: auto\n

    ","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility.RunShellScriptResult","title":"RunShellScriptResult","text":"

    Bases: TypedDict

    The result of a run_shell_script step.

    Attributes:

    Name Type Description stdout str

    The captured standard output of the script.

    stderr str

    The captured standard error of the script.

    Source code in prefect/deployments/steps/utility.py
    class RunShellScriptResult(TypedDict):\n    \"\"\"\n    The result of a `run_shell_script` step.\n\n    Attributes:\n        stdout: The captured standard output of the script.\n        stderr: The captured standard error of the script.\n    \"\"\"\n\n    stdout: str\n    stderr: str\n
    ","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility.pip_install_requirements","title":"pip_install_requirements async","text":"

    Installs dependencies from a requirements.txt file.

    Parameters:

    Name Type Description Default requirements_file str

    The requirements.txt to use for installation.

    'requirements.txt' directory Optional[str]

    The directory the requirements.txt file is in. Defaults to the current working directory.

    None stream_output bool

    Whether the output from pip install should be streamed to the console

    True

    Returns:

    Type Description

    A dictionary with the keys stdout and stderr containing the output of the pip install command

    Raises:

    Type Description CalledProcessError

    if the pip install command fails for any reason

    Example
    pull:\n    - prefect.deployments.steps.git_clone:\n        id: clone-step\n        repository: https://github.com/org/repo.git\n    - prefect.deployments.steps.pip_install_requirements:\n        directory: {{ clone-step.directory }}\n        requirements_file: requirements.txt\n        stream_output: False\n
    Source code in prefect/deployments/steps/utility.py
    async def pip_install_requirements(\n    directory: Optional[str] = None,\n    requirements_file: str = \"requirements.txt\",\n    stream_output: bool = True,\n):\n    \"\"\"\n    Installs dependencies from a requirements.txt file.\n\n    Args:\n        requirements_file: The requirements.txt to use for installation.\n        directory: The directory the requirements.txt file is in. Defaults to\n            the current working directory.\n        stream_output: Whether to stream the output from pip install should be\n            streamed to the console\n\n    Returns:\n        A dictionary with the keys `stdout` and `stderr` containing the output\n            the `pip install` command\n\n    Raises:\n        subprocess.CalledProcessError: if the pip install command fails for any reason\n\n    Example:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                id: clone-step\n                repository: https://github.com/org/repo.git\n            - prefect.deployments.steps.pip_install_requirements:\n                directory: {{ clone-step.directory }}\n                requirements_file: requirements.txt\n                stream_output: False\n        ```\n    \"\"\"\n    stdout_sink = io.StringIO()\n    stderr_sink = io.StringIO()\n\n    async with open_process(\n        [get_sys_executable(), \"-m\", \"pip\", \"install\", \"-r\", requirements_file],\n        stdout=subprocess.PIPE,\n        stderr=subprocess.PIPE,\n        cwd=directory,\n    ) as process:\n        await _stream_capture_process_output(\n            process,\n            stdout_sink=stdout_sink,\n            stderr_sink=stderr_sink,\n            stream_output=stream_output,\n        )\n        await process.wait()\n\n        if process.returncode != 0:\n            raise RuntimeError(\n                f\"pip_install_requirements failed with error code {process.returncode}:\"\n                f\" {stderr_sink.getvalue()}\"\n            )\n\n    return {\n        \"stdout\": stdout_sink.getvalue().strip(),\n        \"stderr\": stderr_sink.getvalue().strip(),\n    }\n
    ","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility.run_shell_script","title":"run_shell_script async","text":"

    Runs one or more shell commands in a subprocess. Returns the standard output and standard error of the script.

    Parameters:

    Name Type Description Default script str

    The script to run

    required directory Optional[str]

    The directory to run the script in. Defaults to the current working directory.

    None env Optional[Dict[str, str]]

    A dictionary of environment variables to set for the script

    None stream_output bool

    Whether to stream the output of the script to stdout/stderr

    True expand_env_vars bool

    Whether to expand environment variables in the script before running it

    False

    Returns:

    Type Description RunShellScriptResult

    A dictionary with the keys stdout and stderr containing the output of the script

    Examples:

    Retrieve the short Git commit hash of the current repository to use as a Docker image tag:

    build:\n    - prefect.deployments.steps.run_shell_script:\n        id: get-commit-hash\n        script: git rev-parse --short HEAD\n        stream_output: false\n    - prefect_docker.deployments.steps.build_docker_image:\n        requires: prefect-docker\n        image_name: my-image\n        image_tag: \"{{ get-commit-hash.stdout }}\"\n        dockerfile: auto\n

    Run a multi-line shell script:

    build:\n    - prefect.deployments.steps.run_shell_script:\n        script: |\n            echo \"Hello\"\n            echo \"World\"\n

    Run a shell script with environment variables:

    build:\n    - prefect.deployments.steps.run_shell_script:\n        script: echo \"Hello $NAME\"\n        env:\n            NAME: World\n

    Run a shell script with environment variables expanded from the current environment:

    pull:\n    - prefect.deployments.steps.run_shell_script:\n        script: |\n            echo \"User: $USER\"\n            echo \"Home Directory: $HOME\"\n        stream_output: true\n        expand_env_vars: true\n

    Run a shell script in a specific directory:

    build:\n    - prefect.deployments.steps.run_shell_script:\n        script: echo \"Hello\"\n        directory: /path/to/directory\n

    Run a script stored in a file:

    build:\n    - prefect.deployments.steps.run_shell_script:\n        script: \"bash path/to/script.sh\"\n

    Source code in prefect/deployments/steps/utility.py
    async def run_shell_script(\n    script: str,\n    directory: Optional[str] = None,\n    env: Optional[Dict[str, str]] = None,\n    stream_output: bool = True,\n    expand_env_vars: bool = False,\n) -> RunShellScriptResult:\n    \"\"\"\n    Runs one or more shell commands in a subprocess. Returns the standard\n    output and standard error of the script.\n\n    Args:\n        script: The script to run\n        directory: The directory to run the script in. Defaults to the current\n            working directory.\n        env: A dictionary of environment variables to set for the script\n        stream_output: Whether to stream the output of the script to\n            stdout/stderr\n        expand_env_vars: Whether to expand environment variables in the script\n            before running it\n\n    Returns:\n        A dictionary with the keys `stdout` and `stderr` containing the output\n            of the script\n\n    Examples:\n        Retrieve the short Git commit hash of the current repository to use as\n            a Docker image tag:\n        ```yaml\n        build:\n            - prefect.deployments.steps.run_shell_script:\n                id: get-commit-hash\n                script: git rev-parse --short HEAD\n                stream_output: false\n            - prefect_docker.deployments.steps.build_docker_image:\n                requires: prefect-docker\n                image_name: my-image\n                image_tag: \"{{ get-commit-hash.stdout }}\"\n                dockerfile: auto\n        ```\n\n        Run a multi-line shell script:\n        ```yaml\n        build:\n            - prefect.deployments.steps.run_shell_script:\n                script: |\n                    echo \"Hello\"\n                    echo \"World\"\n        ```\n\n        Run a shell script with environment variables:\n        ```yaml\n        build:\n            - prefect.deployments.steps.run_shell_script:\n                script: echo \"Hello $NAME\"\n                env:\n                    NAME: World\n        ```\n\n        Run a shell script with environment variables expanded\n            from the current environment:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.run_shell_script:\n                script: |\n                    echo \"User: $USER\"\n                    echo \"Home Directory: $HOME\"\n                stream_output: true\n                expand_env_vars: true\n        ```\n\n        Run a shell script in a specific directory:\n        ```yaml\n        build:\n            - prefect.deployments.steps.run_shell_script:\n                script: echo \"Hello\"\n                directory: /path/to/directory\n        ```\n\n        Run a script stored in a file:\n        ```yaml\n        build:\n            - prefect.deployments.steps.run_shell_script:\n                script: \"bash path/to/script.sh\"\n        ```\n    \"\"\"\n    current_env = os.environ.copy()\n    current_env.update(env or {})\n\n    commands = script.splitlines()\n    stdout_sink = io.StringIO()\n    stderr_sink = io.StringIO()\n\n    for command in commands:\n        if expand_env_vars:\n            # Expand environment variables in command and provided environment\n            command = string.Template(command).safe_substitute(current_env)\n        split_command = shlex.split(command, posix=sys.platform != \"win32\")\n        if not split_command:\n            continue\n        async with open_process(\n            split_command,\n            stdout=subprocess.PIPE,\n            
stderr=subprocess.PIPE,\n            cwd=directory,\n            env=current_env,\n        ) as process:\n            await _stream_capture_process_output(\n                process,\n                stdout_sink=stdout_sink,\n                stderr_sink=stderr_sink,\n                stream_output=stream_output,\n            )\n\n            await process.wait()\n\n            if process.returncode != 0:\n                raise RuntimeError(\n                    f\"`run_shell_script` failed with error code {process.returncode}:\"\n                    f\" {stderr_sink.getvalue()}\"\n                )\n\n    return {\n        \"stdout\": stdout_sink.getvalue().strip(),\n        \"stderr\": stderr_sink.getvalue().strip(),\n    }\n
    ","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/input/actions/","title":"actions","text":"","tags":["flow run","pause","suspend","input"]},{"location":"api-ref/prefect/input/actions/#prefect.input.actions","title":"prefect.input.actions","text":"","tags":["flow run","pause","suspend","input"]},{"location":"api-ref/prefect/input/actions/#prefect.input.actions.create_flow_run_input","title":"create_flow_run_input async","text":"

    Create a new flow run input. The given value will be serialized to JSON and stored as a flow run input value.

    Parameters:

    Name Type Description Default - key (str

    the flow run input key

    required - value (Any

    the flow run input value

    required - flow_run_id (UUID

    the optional flow run ID. If not given, it defaults to pulling the flow run ID from the current context.

    required Source code in prefect/input/actions.py
    @sync_compatible\n@client_injector\nasync def create_flow_run_input(\n    client: \"PrefectClient\",\n    key: str,\n    value: Any,\n    flow_run_id: Optional[UUID] = None,\n    sender: Optional[str] = None,\n):\n    \"\"\"\n    Create a new flow run input. The given `value` will be serialized to JSON\n    and stored as a flow run input value.\n\n    Args:\n        - key (str): the flow run input key\n        - value (Any): the flow run input value\n        - flow_run_id (UUID): the, optional, flow run ID. If not given will\n          default to pulling the flow run ID from the current context.\n    \"\"\"\n    flow_run_id = ensure_flow_run_id(flow_run_id)\n\n    await client.create_flow_run_input(\n        flow_run_id=flow_run_id,\n        key=key,\n        sender=sender,\n        value=orjson.dumps(value).decode(),\n    )\n
    ","tags":["flow run","pause","suspend","input"]},{"location":"api-ref/prefect/input/actions/#prefect.input.actions.delete_flow_run_input","title":"delete_flow_run_input async","text":"

    Delete a flow run input.

    Parameters:

    Name Type Description Default - flow_run_id (UUID

    the flow run ID

    required - key (str

    the flow run input key

    required Source code in prefect/input/actions.py
    @sync_compatible\n@client_injector\nasync def delete_flow_run_input(\n    client: \"PrefectClient\", key: str, flow_run_id: Optional[UUID] = None\n):\n    \"\"\"Delete a flow run input.\n\n    Args:\n        - flow_run_id (UUID): the flow run ID\n        - key (str): the flow run input key\n    \"\"\"\n\n    flow_run_id = ensure_flow_run_id(flow_run_id)\n\n    await client.delete_flow_run_input(flow_run_id=flow_run_id, key=key)\n
    ","tags":["flow run","pause","suspend","input"]},{"location":"api-ref/prefect/input/actions/#prefect.input.actions.read_flow_run_input","title":"read_flow_run_input async","text":"

    Read a flow run input.

    Parameters:

    Name Type Description Default - key (str

    the flow run input key

    required - flow_run_id (UUID

    the flow run ID

    required Source code in prefect/input/actions.py
    @sync_compatible\n@client_injector\nasync def read_flow_run_input(\n    client: \"PrefectClient\", key: str, flow_run_id: Optional[UUID] = None\n) -> Any:\n    \"\"\"Read a flow run input.\n\n    Args:\n        - key (str): the flow run input key\n        - flow_run_id (UUID): the flow run ID\n    \"\"\"\n    flow_run_id = ensure_flow_run_id(flow_run_id)\n\n    try:\n        value = await client.read_flow_run_input(flow_run_id=flow_run_id, key=key)\n    except PrefectHTTPStatusError as exc:\n        if exc.response.status_code == 404:\n            return None\n        raise\n    else:\n        return orjson.loads(value)\n
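
    A minimal sketch, not taken from the source, exercising create, read, and delete together inside a flow so the flow run ID comes from the current context; the key and value are illustrative:

    from prefect import flow\nfrom prefect.input.actions import (\n    create_flow_run_input,\n    delete_flow_run_input,\n    read_flow_run_input,\n)\n\n@flow\ndef input_round_trip():\n    # the flow run ID is pulled from the current context\n    create_flow_run_input(key=\"approval\", value={\"approved\": True})\n    print(read_flow_run_input(key=\"approval\"))  # {'approved': True}\n    delete_flow_run_input(key=\"approval\")\n\nif __name__ == \"__main__\":\n    input_round_trip()\n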
    ","tags":["flow run","pause","suspend","input"]},{"location":"api-ref/prefect/input/run_input/","title":"run_input","text":"","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input","title":"prefect.input.run_input","text":"

    This module contains functions that allow sending type-checked RunInput data to flows at runtime. Flows can send back responses, establishing two-way channels with senders. These functions are particularly useful for systems that require ongoing data transfer or need to react to input quickly, supporting real-time interaction and efficient data handling. They facilitate dynamic communication within distributed or microservices-oriented systems, making them well suited to scenarios that require continuous data synchronization and processing.

    The following is an example of two flows. One sends a random number to the other and waits for a response. The other receives the number, squares it, and sends the result back. The sender flow then prints the result.

    Sender flow:

    import random\nfrom uuid import UUID\nfrom prefect import flow, get_run_logger\nfrom prefect.input import RunInput\n\nclass NumberData(RunInput):\n    number: int\n\n\n@flow\nasync def sender_flow(receiver_flow_run_id: UUID):\n    logger = get_run_logger()\n\n    the_number = random.randint(1, 100)\n\n    await NumberData(number=the_number).send_to(receiver_flow_run_id)\n\n    receiver = NumberData.receive(flow_run_id=receiver_flow_run_id)\n    squared = await receiver.next()\n\n    logger.info(f\"{the_number} squared is {squared.number}\")\n

    Receiver flow:

    import random\nfrom uuid import UUID\nfrom prefect import flow, get_run_logger\nfrom prefect.input import RunInput\n\nclass NumberData(RunInput):\n    number: int\n\n\n@flow\nasync def receiver_flow():\n    async for data in NumberData.receive():\n        squared = data.number ** 2\n        data.respond(NumberData(number=squared))\n

    ","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.AutomaticRunInput","title":"AutomaticRunInput","text":"

    Bases: RunInput, Generic[T]

    Source code in prefect/input/run_input.py
    class AutomaticRunInput(RunInput, Generic[T]):\n    value: T\n\n    @classmethod\n    @sync_compatible\n    async def load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> T:\n        \"\"\"\n        Load the run input response from the given key.\n\n        Args:\n            - keyset (Keyset): the keyset to load the input for\n            - flow_run_id (UUID, optional): the flow run ID to load the input for\n        \"\"\"\n        instance = await super().load(keyset, flow_run_id=flow_run_id)\n        return instance.value\n\n    @classmethod\n    def subclass_from_type(cls, _type: Type[T]) -> Type[\"AutomaticRunInput[T]\"]:\n        \"\"\"\n        Create a new `AutomaticRunInput` subclass from the given type.\n\n        This method uses the type's name as a key prefix to identify related\n        flow run inputs. This helps in ensuring that values saved under a type\n        (like List[int]) are retrievable under the generic type name (like \"list\").\n        \"\"\"\n        fields: Dict[str, Any] = {\"value\": (_type, ...)}\n\n        # Explanation for using getattr for type name extraction:\n        # - \"__name__\": This is the usual attribute for getting the name of\n        #   most types.\n        # - \"_name\": Used as a fallback, some type annotations in Python 3.9\n        #   and earlier might only have this attribute instead of __name__.\n        # - If neither is available, defaults to an empty string to prevent\n        #   errors, but typically we should find at least one valid name\n        #   attribute. This will match all automatic inputs sent to the flow\n        #   run, rather than a specific type.\n        #\n        # This approach ensures compatibility across Python versions and\n        # handles various edge cases in type annotations.\n\n        type_prefix: str = getattr(\n            _type, \"__name__\", getattr(_type, \"_name\", \"\")\n        ).lower()\n\n        class_name = f\"{type_prefix}AutomaticRunInput\"\n\n        # Creating a new Pydantic model class dynamically with the name based\n        # on the type prefix.\n        new_cls: Type[\"AutomaticRunInput\"] = pydantic.create_model(\n            class_name, **fields, __base__=AutomaticRunInput\n        )\n        return new_cls\n\n    @classmethod\n    def receive(cls, *args, **kwargs):\n        if kwargs.get(\"key_prefix\") is None:\n            kwargs[\"key_prefix\"] = f\"{cls.__name__.lower()}-auto\"\n\n        return GetAutomaticInputHandler(run_input_cls=cls, *args, **kwargs)\n
    ","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.AutomaticRunInput.load","title":"load async classmethod","text":"

    Load the run input response from the given key.

    Parameters:

    Name Type Description Default - keyset (Keyset

    the keyset to load the input for

    required - flow_run_id (UUID

    the flow run ID to load the input for

    required Source code in prefect/input/run_input.py
    @classmethod\n@sync_compatible\nasync def load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> T:\n    \"\"\"\n    Load the run input response from the given key.\n\n    Args:\n        - keyset (Keyset): the keyset to load the input for\n        - flow_run_id (UUID, optional): the flow run ID to load the input for\n    \"\"\"\n    instance = await super().load(keyset, flow_run_id=flow_run_id)\n    return instance.value\n
    ","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.AutomaticRunInput.subclass_from_type","title":"subclass_from_type classmethod","text":"

    Create a new AutomaticRunInput subclass from the given type.

    This method uses the type's name as a key prefix to identify related flow run inputs. This helps in ensuring that values saved under a type (like List[int]) are retrievable under the generic type name (like \"list\").

    Source code in prefect/input/run_input.py
    @classmethod\ndef subclass_from_type(cls, _type: Type[T]) -> Type[\"AutomaticRunInput[T]\"]:\n    \"\"\"\n    Create a new `AutomaticRunInput` subclass from the given type.\n\n    This method uses the type's name as a key prefix to identify related\n    flow run inputs. This helps in ensuring that values saved under a type\n    (like List[int]) are retrievable under the generic type name (like \"list\").\n    \"\"\"\n    fields: Dict[str, Any] = {\"value\": (_type, ...)}\n\n    # Explanation for using getattr for type name extraction:\n    # - \"__name__\": This is the usual attribute for getting the name of\n    #   most types.\n    # - \"_name\": Used as a fallback, some type annotations in Python 3.9\n    #   and earlier might only have this attribute instead of __name__.\n    # - If neither is available, defaults to an empty string to prevent\n    #   errors, but typically we should find at least one valid name\n    #   attribute. This will match all automatic inputs sent to the flow\n    #   run, rather than a specific type.\n    #\n    # This approach ensures compatibility across Python versions and\n    # handles various edge cases in type annotations.\n\n    type_prefix: str = getattr(\n        _type, \"__name__\", getattr(_type, \"_name\", \"\")\n    ).lower()\n\n    class_name = f\"{type_prefix}AutomaticRunInput\"\n\n    # Creating a new Pydantic model class dynamically with the name based\n    # on the type prefix.\n    new_cls: Type[\"AutomaticRunInput\"] = pydantic.create_model(\n        class_name, **fields, __base__=AutomaticRunInput\n    )\n    return new_cls\n
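
    A small illustrative sketch (not from the library source): building a typed subclass from a plain type; the printed class name follows from the naming logic above.

    ```python\nfrom prefect.input.run_input import AutomaticRunInput\n\n# Illustrative sketch: build an AutomaticRunInput subclass whose `value`\n# field is typed as int; the lowercased type name becomes the key prefix.\nIntInput = AutomaticRunInput.subclass_from_type(int)\nprint(IntInput.__name__)  # \"intAutomaticRunInput\"\n```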
    ","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput","title":"RunInput","text":"

    Bases: BaseModel

    Source code in prefect/input/run_input.py
    class RunInput(pydantic.BaseModel):\n    class Config:\n        extra = \"forbid\"\n\n    _description: Optional[str] = pydantic.PrivateAttr(default=None)\n    _metadata: RunInputMetadata = pydantic.PrivateAttr()\n\n    @property\n    def metadata(self) -> RunInputMetadata:\n        return self._metadata\n\n    @classmethod\n    def keyset_from_type(cls) -> Keyset:\n        return keyset_from_base_key(cls.__name__.lower())\n\n    @classmethod\n    @sync_compatible\n    async def save(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None):\n        \"\"\"\n        Save the run input response to the given key.\n\n        Args:\n            - keyset (Keyset): the keyset to save the input for\n            - flow_run_id (UUID, optional): the flow run ID to save the input for\n        \"\"\"\n\n        if HAS_PYDANTIC_V2:\n            schema = create_v2_schema(cls.__name__, model_base=cls)\n        else:\n            schema = cls.schema(by_alias=True)\n\n        await create_flow_run_input(\n            key=keyset[\"schema\"], value=schema, flow_run_id=flow_run_id\n        )\n\n        description = cls._description if isinstance(cls._description, str) else None\n        if description:\n            await create_flow_run_input(\n                key=keyset[\"description\"],\n                value=description,\n                flow_run_id=flow_run_id,\n            )\n\n    @classmethod\n    @sync_compatible\n    async def load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None):\n        \"\"\"\n        Load the run input response from the given key.\n\n        Args:\n            - keyset (Keyset): the keyset to load the input for\n            - flow_run_id (UUID, optional): the flow run ID to load the input for\n        \"\"\"\n        flow_run_id = ensure_flow_run_id(flow_run_id)\n        value = await read_flow_run_input(keyset[\"response\"], flow_run_id=flow_run_id)\n        if value:\n            instance = cls(**value)\n        else:\n            instance = cls()\n        instance._metadata = RunInputMetadata(\n            key=keyset[\"response\"], sender=None, receiver=flow_run_id\n        )\n        return instance\n\n    @classmethod\n    def load_from_flow_run_input(cls, flow_run_input: \"FlowRunInput\"):\n        \"\"\"\n        Load the run input from a FlowRunInput object.\n\n        Args:\n            - flow_run_input (FlowRunInput): the flow run input to load the input for\n        \"\"\"\n        instance = cls(**flow_run_input.decoded_value)\n        instance._metadata = RunInputMetadata(\n            key=flow_run_input.key,\n            sender=flow_run_input.sender,\n            receiver=flow_run_input.flow_run_id,\n        )\n        return instance\n\n    @classmethod\n    def with_initial_data(\n        cls: Type[R], description: Optional[str] = None, **kwargs: Any\n    ) -> Type[R]:\n        \"\"\"\n        Create a new `RunInput` subclass with the given initial data as field\n        defaults.\n\n        Args:\n            - description (str, optional): a description to show when resuming\n                a flow run that requires input\n            - kwargs (Any): the initial data to populate the subclass\n        \"\"\"\n        fields: Dict[str, Any] = {}\n        for key, value in kwargs.items():\n            fields[key] = (type(value), value)\n        model = pydantic.create_model(cls.__name__, **fields, __base__=cls)\n\n        if description is not None:\n            model._description = description\n\n        return model\n\n    @sync_compatible\n    async 
def respond(\n        self,\n        run_input: \"RunInput\",\n        sender: Optional[str] = None,\n        key_prefix: Optional[str] = None,\n    ):\n        flow_run_id = None\n        if self.metadata.sender and self.metadata.sender.startswith(\"prefect.flow-run\"):\n            _, _, id = self.metadata.sender.rpartition(\".\")\n            flow_run_id = UUID(id)\n\n        if not flow_run_id:\n            raise RuntimeError(\n                \"Cannot respond to an input that was not sent by a flow run.\"\n            )\n\n        await _send_input(\n            flow_run_id=flow_run_id,\n            run_input=run_input,\n            sender=sender,\n            key_prefix=key_prefix,\n        )\n\n    @sync_compatible\n    async def send_to(\n        self,\n        flow_run_id: UUID,\n        sender: Optional[str] = None,\n        key_prefix: Optional[str] = None,\n    ):\n        await _send_input(\n            flow_run_id=flow_run_id,\n            run_input=self,\n            sender=sender,\n            key_prefix=key_prefix,\n        )\n\n    @classmethod\n    def receive(\n        cls,\n        timeout: Optional[float] = 3600,\n        poll_interval: float = 10,\n        raise_timeout_error: bool = False,\n        exclude_keys: Optional[Set[str]] = None,\n        key_prefix: Optional[str] = None,\n        flow_run_id: Optional[UUID] = None,\n    ):\n        if key_prefix is None:\n            key_prefix = f\"{cls.__name__.lower()}-auto\"\n\n        return GetInputHandler(\n            run_input_cls=cls,\n            key_prefix=key_prefix,\n            timeout=timeout,\n            poll_interval=poll_interval,\n            raise_timeout_error=raise_timeout_error,\n            exclude_keys=exclude_keys,\n            flow_run_id=flow_run_id,\n        )\n\n    @classmethod\n    def subclass_from_base_model_type(\n        cls, model_cls: Type[pydantic.BaseModel]\n    ) -> Type[\"RunInput\"]:\n        \"\"\"\n        Create a new `RunInput` subclass from the given `pydantic.BaseModel`\n        subclass.\n\n        Args:\n            - model_cls (pydantic.BaseModel subclass): the class from which\n                to create the new `RunInput` subclass\n        \"\"\"\n        return type(f\"{model_cls.__name__}RunInput\", (RunInput, model_cls), {})  # type: ignore\n
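
    As a hedged usage sketch (patterned on Prefect's flow run input guide; exact imports and the wait_for_input argument may vary by Prefect version), a flow can pause and wait for a typed RunInput:

    ```python\nfrom prefect import flow, get_run_logger, pause_flow_run\nfrom prefect.input import RunInput\n\nclass UserNameInput(RunInput):\n    name: str\n\n@flow\nasync def greet_user():\n    logger = get_run_logger()\n    # Pause the run until a UserNameInput is submitted via the UI or API.\n    user_input = await pause_flow_run(wait_for_input=UserNameInput)\n    logger.info(f\"Hello, {user_input.name}!\")\n```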
    ","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput.load","title":"load async classmethod","text":"

    Load the run input response from the given key.

    Parameters:

    - keyset (Keyset): the keyset to load the input for (required)
    - flow_run_id (UUID, optional): the flow run ID to load the input for

    Source code in prefect/input/run_input.py
    @classmethod\n@sync_compatible\nasync def load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None):\n    \"\"\"\n    Load the run input response from the given key.\n\n    Args:\n        - keyset (Keyset): the keyset to load the input for\n        - flow_run_id (UUID, optional): the flow run ID to load the input for\n    \"\"\"\n    flow_run_id = ensure_flow_run_id(flow_run_id)\n    value = await read_flow_run_input(keyset[\"response\"], flow_run_id=flow_run_id)\n    if value:\n        instance = cls(**value)\n    else:\n        instance = cls()\n    instance._metadata = RunInputMetadata(\n        key=keyset[\"response\"], sender=None, receiver=flow_run_id\n    )\n    return instance\n
    ","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput.load_from_flow_run_input","title":"load_from_flow_run_input classmethod","text":"

    Load the run input from a FlowRunInput object.

    Parameters:

    - flow_run_input (FlowRunInput): the flow run input to load the input for (required)

    Source code in prefect/input/run_input.py
    @classmethod\ndef load_from_flow_run_input(cls, flow_run_input: \"FlowRunInput\"):\n    \"\"\"\n    Load the run input from a FlowRunInput object.\n\n    Args:\n        - flow_run_input (FlowRunInput): the flow run input to load the input for\n    \"\"\"\n    instance = cls(**flow_run_input.decoded_value)\n    instance._metadata = RunInputMetadata(\n        key=flow_run_input.key,\n        sender=flow_run_input.sender,\n        receiver=flow_run_input.flow_run_id,\n    )\n    return instance\n
    ","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput.save","title":"save async classmethod","text":"

    Save the run input response to the given key.

    Parameters:

    - keyset (Keyset): the keyset to save the input for (required)
    - flow_run_id (UUID, optional): the flow run ID to save the input for

    Source code in prefect/input/run_input.py
    @classmethod\n@sync_compatible\nasync def save(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None):\n    \"\"\"\n    Save the run input response to the given key.\n\n    Args:\n        - keyset (Keyset): the keyset to save the input for\n        - flow_run_id (UUID, optional): the flow run ID to save the input for\n    \"\"\"\n\n    if HAS_PYDANTIC_V2:\n        schema = create_v2_schema(cls.__name__, model_base=cls)\n    else:\n        schema = cls.schema(by_alias=True)\n\n    await create_flow_run_input(\n        key=keyset[\"schema\"], value=schema, flow_run_id=flow_run_id\n    )\n\n    description = cls._description if isinstance(cls._description, str) else None\n    if description:\n        await create_flow_run_input(\n            key=keyset[\"description\"],\n            value=description,\n            flow_run_id=flow_run_id,\n        )\n
    ","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput.subclass_from_base_model_type","title":"subclass_from_base_model_type classmethod","text":"

    Create a new RunInput subclass from the given pydantic.BaseModel subclass.

    Parameters:

    - model_cls (pydantic.BaseModel subclass): the class from which to create the new RunInput subclass (required)

    Source code in prefect/input/run_input.py
    @classmethod\ndef subclass_from_base_model_type(\n    cls, model_cls: Type[pydantic.BaseModel]\n) -> Type[\"RunInput\"]:\n    \"\"\"\n    Create a new `RunInput` subclass from the given `pydantic.BaseModel`\n    subclass.\n\n    Args:\n        - model_cls (pydantic.BaseModel subclass): the class from which\n            to create the new `RunInput` subclass\n    \"\"\"\n    return type(f\"{model_cls.__name__}RunInput\", (RunInput, model_cls), {})  # type: ignore\n
    ","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.RunInput.with_initial_data","title":"with_initial_data classmethod","text":"

    Create a new RunInput subclass with the given initial data as field defaults.

    Parameters:

    - description (str, optional): a description to show when resuming a flow run that requires input
    - kwargs (Any): the initial data to populate the subclass

    Source code in prefect/input/run_input.py
    @classmethod\ndef with_initial_data(\n    cls: Type[R], description: Optional[str] = None, **kwargs: Any\n) -> Type[R]:\n    \"\"\"\n    Create a new `RunInput` subclass with the given initial data as field\n    defaults.\n\n    Args:\n        - description (str, optional): a description to show when resuming\n            a flow run that requires input\n        - kwargs (Any): the initial data to populate the subclass\n    \"\"\"\n    fields: Dict[str, Any] = {}\n    for key, value in kwargs.items():\n        fields[key] = (type(value), value)\n    model = pydantic.create_model(cls.__name__, **fields, __base__=cls)\n\n    if description is not None:\n        model._description = description\n\n    return model\n
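
    A brief sketch (hypothetical class and field names) of seeding defaults for a run input form:

    ```python\nfrom prefect.input import RunInput\n\nclass UserNameInput(RunInput):\n    name: str\n\n# Sketch: a subclass whose `name` field defaults to \"anonymous\" and which carries\n# a description to display when a paused flow run asks for this input.\nDefaultedInput = UserNameInput.with_initial_data(\n    description=\"Who should we greet?\", name=\"anonymous\"\n)\n```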
    ","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.keyset_from_base_key","title":"keyset_from_base_key","text":"

    Get the keyset for the given base key.

    Parameters:

    - base_key (str): the base key to get the keyset for (required)

    Returns:

    - Keyset (Dict[str, str]): the keyset

    Source code in prefect/input/run_input.py
    def keyset_from_base_key(base_key: str) -> Keyset:\n    \"\"\"\n    Get the keyset for the given base key.\n\n    Args:\n        - base_key (str): the base key to get the keyset for\n\n    Returns:\n        - Dict[str, str]: the keyset\n    \"\"\"\n    return {\n        \"description\": f\"{base_key}-description\",\n        \"response\": f\"{base_key}-response\",\n        \"schema\": f\"{base_key}-schema\",\n    }\n
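
    For example (output inferred directly from the function body above):

    ```python\nfrom prefect.input.run_input import keyset_from_base_key\n\nprint(keyset_from_base_key(\"username\"))\n# {'description': 'username-description',\n#  'response': 'username-response',\n#  'schema': 'username-schema'}\n```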
    ","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.keyset_from_paused_state","title":"keyset_from_paused_state","text":"

    Get the keyset for the given Paused state.

    Parameters:

    - state (State): the state to get the keyset for (required)

    Source code in prefect/input/run_input.py
    def keyset_from_paused_state(state: \"State\") -> Keyset:\n    \"\"\"\n    Get the keyset for the given Paused state.\n\n    Args:\n        - state (State): the state to get the keyset for\n    \"\"\"\n\n    if not state.is_paused():\n        raise RuntimeError(f\"{state.type.value!r} is unsupported.\")\n\n    state_name = state.name or \"\"\n    base_key = f\"{state_name.lower()}-{str(state.state_details.pause_key)}\"\n    return keyset_from_base_key(base_key)\n
    ","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/input/run_input/#prefect.input.run_input.run_input_subclass_from_type","title":"run_input_subclass_from_type","text":"

    Create a new RunInput subclass from the given type.

    Source code in prefect/input/run_input.py
    def run_input_subclass_from_type(\n    _type: Union[Type[R], Type[T], pydantic.BaseModel],\n) -> Union[Type[AutomaticRunInput[T]], Type[R]]:\n    \"\"\"\n    Create a new `RunInput` subclass from the given type.\n    \"\"\"\n    if isclass(_type):\n        if issubclass(_type, RunInput):\n            return cast(Type[R], _type)\n        elif issubclass(_type, pydantic.BaseModel):\n            return cast(Type[R], RunInput.subclass_from_base_model_type(_type))\n\n    # Could be something like a typing._GenericAlias or any other type that\n    # isn't a `RunInput` subclass or `pydantic.BaseModel` subclass. Try passing\n    # it to AutomaticRunInput to see if we can create a model from it.\n    return cast(\n        Type[AutomaticRunInput[T]],\n        AutomaticRunInput.subclass_from_type(cast(Type[T], _type)),\n    )\n
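
    A short sketch (hypothetical model) of the dispatch described above:

    ```python\nimport pydantic\n\nfrom prefect.input.run_input import RunInput, run_input_subclass_from_type\n\nclass Approval(pydantic.BaseModel):\n    approved: bool\n\n# An existing RunInput subclass is returned as-is, a BaseModel subclass is wrapped\n# in a RunInput subclass, and any other type falls back to AutomaticRunInput.\nApprovalInput = run_input_subclass_from_type(Approval)\nprint(issubclass(ApprovalInput, RunInput))  # True\n```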
    ","tags":["flow run","pause","suspend","input","run input"]},{"location":"api-ref/prefect/logging/configuration/","title":"configuration","text":"

    \"\"\"

    ","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/configuration/#prefect.logging.configuration","title":"prefect.logging.configuration","text":"","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/configuration/#prefect.logging.configuration.load_logging_config","title":"load_logging_config","text":"

    Loads logging configuration from a path, allowing overrides from the environment.

    Source code in prefect/logging/configuration.py
    def load_logging_config(path: Path) -> dict:\n    \"\"\"\n    Loads logging configuration from a path allowing override from the environment\n    \"\"\"\n    template = string.Template(path.read_text())\n    with warnings.catch_warnings():\n        warnings.filterwarnings(\"ignore\", category=DeprecationWarning)\n        config = yaml.safe_load(\n            # Substitute settings into the template in format $SETTING / ${SETTING}\n            template.substitute(\n                {\n                    setting.name: str(setting.value())\n                    for setting in SETTING_VARIABLES.values()\n                    if setting.value() is not None\n                }\n            )\n        )\n\n    # Load overrides from the environment\n    flat_config = dict_to_flatdict(config)\n\n    for key_tup, val in flat_config.items():\n        env_val = os.environ.get(\n            # Generate a valid environment variable with nesting indicated with '_'\n            to_envvar(\"PREFECT_LOGGING_\" + \"_\".join(key_tup)).upper()\n        )\n        if env_val:\n            val = env_val\n\n        # reassign the updated value\n        flat_config[key_tup] = val\n\n    return flatdict_to_dict(flat_config)\n
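
    A hedged sketch of calling this directly, assuming the packaged default logging.yml (the DEFAULT_LOGGING_SETTINGS_PATH referenced by setup_logging below):

    ```python\nfrom prefect.logging.configuration import (\n    DEFAULT_LOGGING_SETTINGS_PATH,\n    load_logging_config,\n)\n\n# Load the packaged logging.yml with $SETTING placeholders substituted from\n# Prefect settings and any PREFECT_LOGGING_* environment overrides applied.\nconfig = load_logging_config(DEFAULT_LOGGING_SETTINGS_PATH)\nprint(sorted(config))  # top-level keys of the logging dictConfig\n```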
    ","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/configuration/#prefect.logging.configuration.setup_logging","title":"setup_logging","text":"

    Sets up logging.

    Returns the config used.

    Source code in prefect/logging/configuration.py
    def setup_logging(incremental: Optional[bool] = None) -> dict:\n    \"\"\"\n    Sets up logging.\n\n    Returns the config used.\n    \"\"\"\n    global PROCESS_LOGGING_CONFIG\n\n    # If the user has specified a logging path and it exists we will ignore the\n    # default entirely rather than dealing with complex merging\n    config = load_logging_config(\n        (\n            PREFECT_LOGGING_SETTINGS_PATH.value()\n            if PREFECT_LOGGING_SETTINGS_PATH.value().exists()\n            else DEFAULT_LOGGING_SETTINGS_PATH\n        )\n    )\n\n    incremental = (\n        incremental if incremental is not None else bool(PROCESS_LOGGING_CONFIG)\n    )\n\n    # Perform an incremental update if setup has already been run\n    config.setdefault(\"incremental\", incremental)\n\n    try:\n        logging.config.dictConfig(config)\n    except ValueError:\n        if incremental:\n            setup_logging(incremental=False)\n\n    # Copy configuration of the 'prefect.extra' logger to the extra loggers\n    extra_config = logging.getLogger(\"prefect.extra\")\n\n    for logger_name in PREFECT_LOGGING_EXTRA_LOGGERS.value():\n        logger = logging.getLogger(logger_name)\n        for handler in extra_config.handlers:\n            if not config[\"incremental\"]:\n                logger.addHandler(handler)\n            if logger.level == logging.NOTSET:\n                logger.setLevel(extra_config.level)\n            logger.propagate = extra_config.propagate\n\n    PROCESS_LOGGING_CONFIG = config\n\n    return config\n
    ","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/formatters/","title":"formatters","text":"

    \"\"\"

    ","tags":["Python API","logging","formatters"]},{"location":"api-ref/prefect/logging/formatters/#prefect.logging.formatters","title":"prefect.logging.formatters","text":"","tags":["Python API","logging","formatters"]},{"location":"api-ref/prefect/logging/formatters/#prefect.logging.formatters.JsonFormatter","title":"JsonFormatter","text":"

    Bases: Formatter

    Formats log records as a JSON string.

    The format may be specified as \"pretty\" to format the JSON with indents and newlines.

    Source code in prefect/logging/formatters.py
    class JsonFormatter(logging.Formatter):\n    \"\"\"\n    Formats log records as a JSON string.\n\n    The format may be specified as \"pretty\" to format the JSON with indents and\n    newlines.\n    \"\"\"\n\n    def __init__(self, fmt, dmft, style) -> None:  # noqa\n        super().__init__()\n\n        if fmt not in [\"pretty\", \"default\"]:\n            raise ValueError(\"Format must be either 'pretty' or 'default'.\")\n\n        self.serializer = JSONSerializer(\n            jsonlib=\"orjson\",\n            object_encoder=\"pydantic.json.pydantic_encoder\",\n            dumps_kwargs={\"option\": orjson.OPT_INDENT_2} if fmt == \"pretty\" else {},\n        )\n\n    def format(self, record: logging.LogRecord) -> str:\n        record_dict = record.__dict__.copy()\n\n        # GCP severity detection compatibility\n        record_dict.setdefault(\"severity\", record.levelname)\n\n        # replace any exception tuples returned by `sys.exc_info()`\n        # with a JSON-serializable `dict`.\n        if record.exc_info:\n            record_dict[\"exc_info\"] = format_exception_info(record.exc_info)\n\n        log_json_bytes = self.serializer.dumps(record_dict)\n\n        # JSONSerializer returns bytes; decode to string to conform to\n        # the `logging.Formatter.format` interface\n        return log_json_bytes.decode()\n
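
    A minimal sketch of wiring the formatter into a standard library handler (positional arguments mirror the constructor above; only fmt is consulted):

    ```python\nimport logging\n\nfrom prefect.logging.formatters import JsonFormatter\n\nhandler = logging.StreamHandler()\n# \"pretty\" indents the JSON output; \"default\" emits a compact single line.\nhandler.setFormatter(JsonFormatter(\"pretty\", None, \"%\"))\n\nlogger = logging.getLogger(\"my_app\")\nlogger.addHandler(handler)\nlogger.warning(\"something happened\")  # emitted as a JSON document\n```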
    ","tags":["Python API","logging","formatters"]},{"location":"api-ref/prefect/logging/formatters/#prefect.logging.formatters.PrefectFormatter","title":"PrefectFormatter","text":"

    Bases: Formatter

    Source code in prefect/logging/formatters.py
    class PrefectFormatter(logging.Formatter):\n    def __init__(\n        self,\n        format=None,\n        datefmt=None,\n        style=\"%\",\n        validate=True,\n        *,\n        defaults=None,\n        task_run_fmt: str = None,\n        flow_run_fmt: str = None,\n    ) -> None:\n        \"\"\"\n        Implementation of the standard Python formatter with support for multiple\n        message formats.\n\n        \"\"\"\n        # See https://github.com/python/cpython/blob/c8c6113398ee9a7867fe9b08bc539cceb61e2aaa/Lib/logging/__init__.py#L546\n        # for implementation details\n\n        init_kwargs = {}\n        style_kwargs = {}\n\n        # defaults added in 3.10\n        if sys.version_info >= (3, 10):\n            init_kwargs[\"defaults\"] = defaults\n            style_kwargs[\"defaults\"] = defaults\n\n        # validate added in 3.8\n        if sys.version_info >= (3, 8):\n            init_kwargs[\"validate\"] = validate\n        else:\n            validate = False\n\n        super().__init__(format, datefmt, style, **init_kwargs)\n\n        self.flow_run_fmt = flow_run_fmt\n        self.task_run_fmt = task_run_fmt\n\n        # Retrieve the style class from the base class to avoid importing private\n        # `_STYLES` mapping\n        style_class = type(self._style)\n\n        self._flow_run_style = (\n            style_class(flow_run_fmt, **style_kwargs) if flow_run_fmt else self._style\n        )\n        self._task_run_style = (\n            style_class(task_run_fmt, **style_kwargs) if task_run_fmt else self._style\n        )\n        if validate:\n            self._flow_run_style.validate()\n            self._task_run_style.validate()\n\n    def formatMessage(self, record: logging.LogRecord):\n        if record.name == \"prefect.flow_runs\":\n            style = self._flow_run_style\n        elif record.name == \"prefect.task_runs\":\n            style = self._task_run_style\n        else:\n            style = self._style\n\n        return style.format(record)\n
    ","tags":["Python API","logging","formatters"]},{"location":"api-ref/prefect/logging/handlers/","title":"handlers","text":"

    \"\"\"

    ","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers","title":"prefect.logging.handlers","text":"","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.APILogHandler","title":"APILogHandler","text":"

    Bases: Handler

    A logging handler that sends logs to the Prefect API.

    Sends log records to the APILogWorker which manages sending batches of logs in the background.

    Source code in prefect/logging/handlers.py
    class APILogHandler(logging.Handler):\n    \"\"\"\n    A logging handler that sends logs to the Prefect API.\n\n    Sends log records to the `APILogWorker` which manages sending batches of logs in\n    the background.\n    \"\"\"\n\n    @classmethod\n    def flush(cls):\n        \"\"\"\n        Tell the `APILogWorker` to send any currently enqueued logs and block until\n        completion.\n\n        Use `aflush` from async contexts instead.\n        \"\"\"\n        loop = get_running_loop()\n        if loop:\n            if in_global_loop():  # Guard against internal misuse\n                raise RuntimeError(\n                    \"Cannot call `APILogWorker.flush` from the global event loop; it\"\n                    \" would block the event loop and cause a deadlock. Use\"\n                    \" `APILogWorker.aflush` instead.\"\n                )\n\n            # Not ideal, but this method is called by the stdlib and cannot return a\n            # coroutine so we just schedule the drain in a new thread and continue\n            from_sync.call_soon_in_new_thread(create_call(APILogWorker.drain_all))\n            return None\n        else:\n            # We set a timeout of 5s because we don't want to block forever if the worker\n            # is stuck. This can occur when the handler is being shutdown and the\n            # `logging._lock` is held but the worker is attempting to emit logs resulting\n            # in a deadlock.\n            return APILogWorker.drain_all(timeout=5)\n\n    @classmethod\n    def aflush(cls):\n        \"\"\"\n        Tell the `APILogWorker` to send any currently enqueued logs and block until\n        completion.\n\n        If called in a synchronous context, will only block up to 5s before returning.\n        \"\"\"\n\n        if not get_running_loop():\n            raise RuntimeError(\n                \"`aflush` cannot be used from a synchronous context; use `flush`\"\n                \" instead.\"\n            )\n\n        return APILogWorker.drain_all()\n\n    def emit(self, record: logging.LogRecord):\n        \"\"\"\n        Send a log to the `APILogWorker`\n        \"\"\"\n        try:\n            profile = prefect.context.get_settings_context()\n\n            if not PREFECT_LOGGING_TO_API_ENABLED.value_from(profile.settings):\n                return  # Respect the global settings toggle\n            if not getattr(record, \"send_to_api\", True):\n                return  # Do not send records that have opted out\n            if not getattr(record, \"send_to_orion\", True):\n                return  # Backwards compatibility\n\n            log = self.prepare(record)\n            APILogWorker.instance().send(log)\n\n        except Exception:\n            self.handleError(record)\n\n    def handleError(self, record: logging.LogRecord) -> None:\n        _, exc, _ = sys.exc_info()\n\n        if isinstance(exc, MissingContextError):\n            log_handling_when_missing_flow = (\n                PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW.value()\n            )\n            if log_handling_when_missing_flow == \"warn\":\n                # Warn when a logger is used outside of a run context, the stack level here\n                # gets us to the user logging call\n                warnings.warn(str(exc), stacklevel=8)\n                return\n            elif log_handling_when_missing_flow == \"ignore\":\n                return\n            else:\n                raise exc\n\n        # Display a longer traceback for other errors\n        return 
super().handleError(record)\n\n    def prepare(self, record: logging.LogRecord) -> Dict[str, Any]:\n        \"\"\"\n        Convert a `logging.LogRecord` to the API `LogCreate` schema and serialize.\n\n        This infers the linked flow or task run from the log record or the current\n        run context.\n\n        If a flow run id cannot be found, the log will be dropped.\n\n        Logs exceeding the maximum size will be dropped.\n        \"\"\"\n        flow_run_id = getattr(record, \"flow_run_id\", None)\n        task_run_id = getattr(record, \"task_run_id\", None)\n\n        if not flow_run_id:\n            try:\n                context = prefect.context.get_run_context()\n            except MissingContextError:\n                raise MissingContextError(\n                    f\"Logger {record.name!r} attempted to send logs to the API without\"\n                    \" a flow run id. The API log handler can only send logs within\"\n                    \" flow run contexts unless the flow run id is manually provided.\"\n                ) from None\n\n            if hasattr(context, \"flow_run\"):\n                flow_run_id = context.flow_run.id\n            elif hasattr(context, \"task_run\"):\n                flow_run_id = context.task_run.flow_run_id\n                task_run_id = task_run_id or context.task_run.id\n            else:\n                raise ValueError(\n                    \"Encountered malformed run context. Does not contain flow or task \"\n                    \"run information.\"\n                )\n\n        # Parsing to a `LogCreate` object here gives us nice parsing error messages\n        # from the standard lib `handleError` method if something goes wrong and\n        # prevents malformed logs from entering the queue\n        try:\n            is_uuid_like = isinstance(flow_run_id, uuid.UUID) or (\n                isinstance(flow_run_id, str) and uuid.UUID(flow_run_id)\n            )\n        except ValueError:\n            is_uuid_like = False\n\n        log = LogCreate(\n            flow_run_id=flow_run_id if is_uuid_like else None,\n            task_run_id=task_run_id,\n            name=record.name,\n            level=record.levelno,\n            timestamp=pendulum.from_timestamp(\n                getattr(record, \"created\", None) or time.time()\n            ),\n            message=self.format(record),\n        ).dict(json_compatible=True)\n\n        log_size = log[\"__payload_size__\"] = self._get_payload_size(log)\n        if log_size > PREFECT_LOGGING_TO_API_MAX_LOG_SIZE.value():\n            raise ValueError(\n                f\"Log of size {log_size} is greater than the max size of \"\n                f\"{PREFECT_LOGGING_TO_API_MAX_LOG_SIZE.value()}\"\n            )\n\n        return log\n\n    def _get_payload_size(self, log: Dict[str, Any]) -> int:\n        return len(json.dumps(log).encode())\n
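
    For example, a hedged sketch of draining queued logs at the end of a script, based on the flush and aflush docstrings below:

    ```python\nfrom prefect.logging.handlers import APILogHandler\n\n# Synchronous contexts: block (up to ~5 seconds) until enqueued logs are sent.\nAPILogHandler.flush()\n\n# Async contexts: await the drain instead.\n# await APILogHandler.aflush()\n```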
    ","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.APILogHandler.aflush","title":"aflush classmethod","text":"

    Tell the APILogWorker to send any currently enqueued logs and block until completion.

    If called in a synchronous context, will only block up to 5s before returning.

    Source code in prefect/logging/handlers.py
    @classmethod\ndef aflush(cls):\n    \"\"\"\n    Tell the `APILogWorker` to send any currently enqueued logs and block until\n    completion.\n\n    If called in a synchronous context, will only block up to 5s before returning.\n    \"\"\"\n\n    if not get_running_loop():\n        raise RuntimeError(\n            \"`aflush` cannot be used from a synchronous context; use `flush`\"\n            \" instead.\"\n        )\n\n    return APILogWorker.drain_all()\n
    ","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.APILogHandler.emit","title":"emit","text":"

    Send a log to the APILogWorker

    Source code in prefect/logging/handlers.py
    def emit(self, record: logging.LogRecord):\n    \"\"\"\n    Send a log to the `APILogWorker`\n    \"\"\"\n    try:\n        profile = prefect.context.get_settings_context()\n\n        if not PREFECT_LOGGING_TO_API_ENABLED.value_from(profile.settings):\n            return  # Respect the global settings toggle\n        if not getattr(record, \"send_to_api\", True):\n            return  # Do not send records that have opted out\n        if not getattr(record, \"send_to_orion\", True):\n            return  # Backwards compatibility\n\n        log = self.prepare(record)\n        APILogWorker.instance().send(log)\n\n    except Exception:\n        self.handleError(record)\n
    ","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.APILogHandler.flush","title":"flush classmethod","text":"

    Tell the APILogWorker to send any currently enqueued logs and block until completion.

    Use aflush from async contexts instead.

    Source code in prefect/logging/handlers.py
    @classmethod\ndef flush(cls):\n    \"\"\"\n    Tell the `APILogWorker` to send any currently enqueued logs and block until\n    completion.\n\n    Use `aflush` from async contexts instead.\n    \"\"\"\n    loop = get_running_loop()\n    if loop:\n        if in_global_loop():  # Guard against internal misuse\n            raise RuntimeError(\n                \"Cannot call `APILogWorker.flush` from the global event loop; it\"\n                \" would block the event loop and cause a deadlock. Use\"\n                \" `APILogWorker.aflush` instead.\"\n            )\n\n        # Not ideal, but this method is called by the stdlib and cannot return a\n        # coroutine so we just schedule the drain in a new thread and continue\n        from_sync.call_soon_in_new_thread(create_call(APILogWorker.drain_all))\n        return None\n    else:\n        # We set a timeout of 5s because we don't want to block forever if the worker\n        # is stuck. This can occur when the handler is being shutdown and the\n        # `logging._lock` is held but the worker is attempting to emit logs resulting\n        # in a deadlock.\n        return APILogWorker.drain_all(timeout=5)\n
    ","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.APILogHandler.prepare","title":"prepare","text":"

    Convert a logging.LogRecord to the API LogCreate schema and serialize.

    This infers the linked flow or task run from the log record or the current run context.

    If a flow run id cannot be found, the log will be dropped.

    Logs exceeding the maximum size will be dropped.

    Source code in prefect/logging/handlers.py
    def prepare(self, record: logging.LogRecord) -> Dict[str, Any]:\n    \"\"\"\n    Convert a `logging.LogRecord` to the API `LogCreate` schema and serialize.\n\n    This infers the linked flow or task run from the log record or the current\n    run context.\n\n    If a flow run id cannot be found, the log will be dropped.\n\n    Logs exceeding the maximum size will be dropped.\n    \"\"\"\n    flow_run_id = getattr(record, \"flow_run_id\", None)\n    task_run_id = getattr(record, \"task_run_id\", None)\n\n    if not flow_run_id:\n        try:\n            context = prefect.context.get_run_context()\n        except MissingContextError:\n            raise MissingContextError(\n                f\"Logger {record.name!r} attempted to send logs to the API without\"\n                \" a flow run id. The API log handler can only send logs within\"\n                \" flow run contexts unless the flow run id is manually provided.\"\n            ) from None\n\n        if hasattr(context, \"flow_run\"):\n            flow_run_id = context.flow_run.id\n        elif hasattr(context, \"task_run\"):\n            flow_run_id = context.task_run.flow_run_id\n            task_run_id = task_run_id or context.task_run.id\n        else:\n            raise ValueError(\n                \"Encountered malformed run context. Does not contain flow or task \"\n                \"run information.\"\n            )\n\n    # Parsing to a `LogCreate` object here gives us nice parsing error messages\n    # from the standard lib `handleError` method if something goes wrong and\n    # prevents malformed logs from entering the queue\n    try:\n        is_uuid_like = isinstance(flow_run_id, uuid.UUID) or (\n            isinstance(flow_run_id, str) and uuid.UUID(flow_run_id)\n        )\n    except ValueError:\n        is_uuid_like = False\n\n    log = LogCreate(\n        flow_run_id=flow_run_id if is_uuid_like else None,\n        task_run_id=task_run_id,\n        name=record.name,\n        level=record.levelno,\n        timestamp=pendulum.from_timestamp(\n            getattr(record, \"created\", None) or time.time()\n        ),\n        message=self.format(record),\n    ).dict(json_compatible=True)\n\n    log_size = log[\"__payload_size__\"] = self._get_payload_size(log)\n    if log_size > PREFECT_LOGGING_TO_API_MAX_LOG_SIZE.value():\n        raise ValueError(\n            f\"Log of size {log_size} is greater than the max size of \"\n            f\"{PREFECT_LOGGING_TO_API_MAX_LOG_SIZE.value()}\"\n        )\n\n    return log\n
    ","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/handlers/#prefect.logging.handlers.PrefectConsoleHandler","title":"PrefectConsoleHandler","text":"

    Bases: StreamHandler

    Source code in prefect/logging/handlers.py
    class PrefectConsoleHandler(logging.StreamHandler):\n    def __init__(\n        self,\n        stream=None,\n        highlighter: Highlighter = PrefectConsoleHighlighter,\n        styles: Dict[str, str] = None,\n        level: Union[int, str] = logging.NOTSET,\n    ):\n        \"\"\"\n        The default console handler for Prefect, which highlights log levels,\n        web and file URLs, flow and task (run) names, and state types in the\n        local console (terminal).\n\n        Highlighting can be toggled on/off with the PREFECT_LOGGING_COLORS setting.\n        For finer control, use logging.yml to add or remove styles, and/or\n        adjust colors.\n        \"\"\"\n        super().__init__(stream=stream)\n\n        styled_console = PREFECT_LOGGING_COLORS.value()\n        markup_console = PREFECT_LOGGING_MARKUP.value()\n        if styled_console:\n            highlighter = highlighter()\n            theme = Theme(styles, inherit=False)\n        else:\n            highlighter = NullHighlighter()\n            theme = Theme(inherit=False)\n\n        self.level = level\n        self.console = Console(\n            highlighter=highlighter,\n            theme=theme,\n            file=self.stream,\n            markup=markup_console,\n        )\n\n    def emit(self, record: logging.LogRecord):\n        try:\n            message = self.format(record)\n            self.console.print(message, soft_wrap=True)\n        except RecursionError:\n            # This was copied over from logging.StreamHandler().emit()\n            # https://bugs.python.org/issue36272\n            raise\n        except Exception:\n            self.handleError(record)\n
    ","tags":["Python API","logging","handlers"]},{"location":"api-ref/prefect/logging/highlighters/","title":"highlighters","text":"

    \"\"\"

    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers","title":"prefect.logging.loggers","text":"","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.LogEavesdropper","title":"LogEavesdropper","text":"

    Bases: Handler

    A context manager that collects logs for the duration of the context

    Example:\n\n    ```python\n    import logging\n    from prefect.logging import LogEavesdropper\n\n    with LogEavesdropper(\"my_logger\") as eavesdropper:\n        logging.getLogger(\"my_logger\").info(\"Hello, world!\")\n        logging.getLogger(\"my_logger.child_module\").info(\"Another one!\")\n\n    print(eavesdropper.text())\n\n    # Outputs: \"Hello, world!\\nAnother one!\"\n    ```

    Source code in prefect/logging/loggers.py
    class LogEavesdropper(logging.Handler):\n    \"\"\"A context manager that collects logs for the duration of the context\n\n    Example:\n\n        ```python\n        import logging\n        from prefect.logging import LogEavesdropper\n\n        with LogEavesdropper(\"my_logger\") as eavesdropper:\n            logging.getLogger(\"my_logger\").info(\"Hello, world!\")\n            logging.getLogger(\"my_logger.child_module\").info(\"Another one!\")\n\n        print(eavesdropper.text())\n\n        # Outputs: \"Hello, world!\\nAnother one!\"\n    \"\"\"\n\n    _target_logger: logging.Logger\n    _lines: List[str]\n\n    def __init__(self, eavesdrop_on: str, level: int = logging.NOTSET):\n        \"\"\"\n        Args:\n            eavesdrop_on (str): the name of the logger to eavesdrop on\n            level (int): the minimum log level to eavesdrop on; if omitted, all levels\n                are captured\n        \"\"\"\n\n        super().__init__(level=level)\n        self.eavesdrop_on = eavesdrop_on\n        self._target_logger = None\n\n        # It's important that we use a very minimalistic formatter for use cases where\n        # we may present these logs back to the user.  We shouldn't leak filenames,\n        # versions, or other environmental information.\n        self.formatter = logging.Formatter(\"[%(levelname)s]: %(message)s\")\n\n    def __enter__(self) -> Self:\n        self._target_logger = logging.getLogger(self.eavesdrop_on)\n        self._original_level = self._target_logger.level\n        self._target_logger.level = self.level\n        self._target_logger.addHandler(self)\n        self._lines = []\n        return self\n\n    def __exit__(self, *_):\n        if self._target_logger:\n            self._target_logger.removeHandler(self)\n            self._target_logger.level = self._original_level\n\n    def emit(self, record: LogRecord) -> None:\n        \"\"\"The logging.Handler implementation, not intended to be called directly.\"\"\"\n        self._lines.append(self.format(record))\n\n    def text(self) -> str:\n        \"\"\"Return the collected logs as a single newline-delimited string\"\"\"\n        return \"\\n\".join(self._lines)\n
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.LogEavesdropper.emit","title":"emit","text":"

    The logging.Handler implementation, not intended to be called directly.

    Source code in prefect/logging/loggers.py
    def emit(self, record: LogRecord) -> None:\n    \"\"\"The logging.Handler implementation, not intended to be called directly.\"\"\"\n    self._lines.append(self.format(record))\n
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.LogEavesdropper.text","title":"text","text":"

    Return the collected logs as a single newline-delimited string

    Source code in prefect/logging/loggers.py
    def text(self) -> str:\n    \"\"\"Return the collected logs as a single newline-delimited string\"\"\"\n    return \"\\n\".join(self._lines)\n
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.PrefectLogAdapter","title":"PrefectLogAdapter","text":"

    Bases: LoggerAdapter

    Adapter that ensures extra kwargs are passed through correctly; without this the extra fields set on the adapter would overshadow any provided on a log-by-log basis.

    See https://bugs.python.org/issue32732 \u2014 the Python team has declared that this is not a bug in the LoggingAdapter and subclassing is the intended workaround.

    Source code in prefect/logging/loggers.py
    class PrefectLogAdapter(logging.LoggerAdapter):\n    \"\"\"\n    Adapter that ensures extra kwargs are passed through correctly; without this\n    the `extra` fields set on the adapter would overshadow any provided on a\n    log-by-log basis.\n\n    See https://bugs.python.org/issue32732 \u2014 the Python team has declared that this is\n    not a bug in the LoggingAdapter and subclassing is the intended workaround.\n    \"\"\"\n\n    def process(self, msg, kwargs):\n        kwargs[\"extra\"] = {**(self.extra or {}), **(kwargs.get(\"extra\") or {})}\n\n        from prefect._internal.compatibility.deprecated import (\n            PrefectDeprecationWarning,\n            generate_deprecation_message,\n        )\n\n        if \"send_to_orion\" in kwargs[\"extra\"]:\n            warnings.warn(\n                generate_deprecation_message(\n                    'The \"send_to_orion\" option',\n                    start_date=\"May 2023\",\n                    help='Use \"send_to_api\" instead.',\n                ),\n                PrefectDeprecationWarning,\n                stacklevel=4,\n            )\n\n        return (msg, kwargs)\n\n    def getChild(\n        self, suffix: str, extra: Optional[Dict[str, str]] = None\n    ) -> \"PrefectLogAdapter\":\n        if extra is None:\n            extra = {}\n\n        return PrefectLogAdapter(\n            self.logger.getChild(suffix),\n            extra={\n                **self.extra,\n                **extra,\n            },\n        )\n
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.disable_logger","title":"disable_logger","text":"

    Gets a logger by name and disables it within the context manager. Upon exiting the context manager, the logger is returned to its original state.

    Source code in prefect/logging/loggers.py
    @contextmanager\ndef disable_logger(name: str):\n    \"\"\"\n    Get a logger by name and disables it within the context manager.\n    Upon exiting the context manager, the logger is returned to its\n    original state.\n    \"\"\"\n    logger = logging.getLogger(name=name)\n\n    # determine if it's already disabled\n    base_state = logger.disabled\n    try:\n        # disable the logger\n        logger.disabled = True\n        yield\n    finally:\n        # return to base state\n        logger.disabled = base_state\n
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.disable_run_logger","title":"disable_run_logger","text":"

    Gets both the prefect.flow_run and prefect.task_run loggers and disables them within the context manager. Upon exiting the context manager, both loggers are returned to their original state.

    Source code in prefect/logging/loggers.py
    @contextmanager\ndef disable_run_logger():\n    \"\"\"\n    Gets both `prefect.flow_run` and `prefect.task_run` and disables them\n    within the context manager. Upon exiting the context manager, both loggers\n    are returned to its original state.\n    \"\"\"\n    with disable_logger(\"prefect.flow_run\"), disable_logger(\"prefect.task_run\"):\n        yield\n
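
    A brief sketch of the common testing pattern (the task and assertion are hypothetical): calling a task's underlying function outside of a run with the run loggers silenced.

    ```python\nfrom prefect import task\nfrom prefect.logging.loggers import disable_run_logger, get_run_logger\n\n@task\ndef add(x: int, y: int) -> int:\n    get_run_logger().info(\"adding\")\n    return x + y\n\n# With both run loggers disabled, get_run_logger() falls back to the \"null\"\n# logger instead of raising, so the bare function can be exercised directly.\nwith disable_run_logger():\n    assert add.fn(2, 3) == 5\n```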
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.flow_run_logger","title":"flow_run_logger","text":"

    Create a flow run logger with the run's metadata attached.

    Additional keyword arguments can be provided to attach custom data to the log records.

    If the flow run context is available, see get_run_logger instead.

    Source code in prefect/logging/loggers.py
    def flow_run_logger(\n    flow_run: Union[\"FlowRun\", \"ClientFlowRun\"],\n    flow: Optional[\"Flow\"] = None,\n    **kwargs: str,\n):\n    \"\"\"\n    Create a flow run logger with the run's metadata attached.\n\n    Additional keyword arguments can be provided to attach custom data to the log\n    records.\n\n    If the flow run context is available, see `get_run_logger` instead.\n    \"\"\"\n    return PrefectLogAdapter(\n        get_logger(\"prefect.flow_runs\"),\n        extra={\n            **{\n                \"flow_run_name\": flow_run.name if flow_run else \"<unknown>\",\n                \"flow_run_id\": str(flow_run.id) if flow_run else \"<unknown>\",\n                \"flow_name\": flow.name if flow else \"<unknown>\",\n            },\n            **kwargs,\n        },\n    )\n
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.get_logger","title":"get_logger cached","text":"

    Get a prefect logger. These loggers are intended for internal use within the prefect package.

    See get_run_logger for retrieving loggers for use within task or flow runs. By default, only run-related loggers are connected to the APILogHandler.

    Source code in prefect/logging/loggers.py
    @lru_cache()\ndef get_logger(name: str = None) -> logging.Logger:\n    \"\"\"\n    Get a `prefect` logger. These loggers are intended for internal use within the\n    `prefect` package.\n\n    See `get_run_logger` for retrieving loggers for use within task or flow runs.\n    By default, only run-related loggers are connected to the `APILogHandler`.\n    \"\"\"\n    parent_logger = logging.getLogger(\"prefect\")\n\n    if name:\n        # Append the name if given but allow explicit full names e.g. \"prefect.test\"\n        # should not become \"prefect.prefect.test\"\n        if not name.startswith(parent_logger.name + \".\"):\n            logger = parent_logger.getChild(name)\n        else:\n            logger = logging.getLogger(name)\n    else:\n        logger = parent_logger\n\n    # Prevent the current API key from being logged in plain text\n    obfuscate_api_key_filter = ObfuscateApiKeyFilter()\n    logger.addFilter(obfuscate_api_key_filter)\n\n    return logger\n
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.get_run_logger","title":"get_run_logger","text":"

    Get a Prefect logger for the current task run or flow run.

    The logger will be named either prefect.task_runs or prefect.flow_runs. Contextual data about the run will be attached to the log records.

    These loggers are connected to the APILogHandler by default to send log records to the API.

    Parameters:

    - context (RunContext, optional): A specific context may be provided as an override. By default, the context is inferred from global state and this should not be needed. Default: None.
    - **kwargs (str): Additional keyword arguments will be attached to the log records in addition to the run metadata. Default: {}.

    Raises:

    - RuntimeError: If no context can be found

    Source code in prefect/logging/loggers.py
    def get_run_logger(\n    context: \"RunContext\" = None, **kwargs: str\n) -> Union[logging.Logger, logging.LoggerAdapter]:\n    \"\"\"\n    Get a Prefect logger for the current task run or flow run.\n\n    The logger will be named either `prefect.task_runs` or `prefect.flow_runs`.\n    Contextual data about the run will be attached to the log records.\n\n    These loggers are connected to the `APILogHandler` by default to send log records to\n    the API.\n\n    Arguments:\n        context: A specific context may be provided as an override. By default, the\n            context is inferred from global state and this should not be needed.\n        **kwargs: Additional keyword arguments will be attached to the log records in\n            addition to the run metadata\n\n    Raises:\n        RuntimeError: If no context can be found\n    \"\"\"\n    # Check for existing contexts\n    task_run_context = prefect.context.TaskRunContext.get()\n    flow_run_context = prefect.context.FlowRunContext.get()\n\n    # Apply the context override\n    if context:\n        if isinstance(context, prefect.context.FlowRunContext):\n            flow_run_context = context\n        elif isinstance(context, prefect.context.TaskRunContext):\n            task_run_context = context\n        else:\n            raise TypeError(\n                f\"Received unexpected type {type(context).__name__!r} for context. \"\n                \"Expected one of 'None', 'FlowRunContext', or 'TaskRunContext'.\"\n            )\n\n    # Determine if this is a task or flow run logger\n    if task_run_context:\n        logger = task_run_logger(\n            task_run=task_run_context.task_run,\n            task=task_run_context.task,\n            flow_run=flow_run_context.flow_run if flow_run_context else None,\n            flow=flow_run_context.flow if flow_run_context else None,\n            **kwargs,\n        )\n    elif flow_run_context:\n        logger = flow_run_logger(\n            flow_run=flow_run_context.flow_run, flow=flow_run_context.flow, **kwargs\n        )\n    elif (\n        get_logger(\"prefect.flow_run\").disabled\n        and get_logger(\"prefect.task_run\").disabled\n    ):\n        logger = logging.getLogger(\"null\")\n    else:\n        raise MissingContextError(\"There is no active flow or task run context.\")\n\n    return logger\n
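
    For instance (a minimal sketch; the extra keyword is an arbitrary example):

    ```python\nfrom prefect import flow, task, get_run_logger\n\n@task\ndef my_task():\n    get_run_logger().info(\"logged with task run metadata attached\")\n\n@flow\ndef my_flow():\n    # Extra kwargs are attached to every record emitted through this logger.\n    logger = get_run_logger(environment=\"staging\")\n    logger.info(\"logged with flow run metadata attached\")\n    my_task()\n\nif __name__ == \"__main__\":\n    my_flow()\n```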
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.patch_print","title":"patch_print","text":"

    Patches the Python builtin print method to use print_as_log

    Source code in prefect/logging/loggers.py
    @contextmanager\ndef patch_print():\n    \"\"\"\n    Patches the Python builtin `print` method to use `print_as_log`\n    \"\"\"\n    import builtins\n\n    original = builtins.print\n\n    try:\n        builtins.print = print_as_log\n        yield\n    finally:\n        builtins.print = original\n
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.print_as_log","title":"print_as_log","text":"

    A patch for print to send printed messages to the Prefect run logger.

    If no run is active, print will behave as if it were not patched.

    If print sends data to a file other than sys.stdout or sys.stderr, it will not be forwarded to the Prefect logger either.

    Source code in prefect/logging/loggers.py
    def print_as_log(*args, **kwargs):\n    \"\"\"\n    A patch for `print` to send printed messages to the Prefect run logger.\n\n    If no run is active, `print` will behave as if it were not patched.\n\n    If `print` sends data to a file other than `sys.stdout` or `sys.stderr`, it will\n    not be forwarded to the Prefect logger either.\n    \"\"\"\n    from prefect.context import FlowRunContext, TaskRunContext\n\n    context = TaskRunContext.get() or FlowRunContext.get()\n    if (\n        not context\n        or not context.log_prints\n        or kwargs.get(\"file\") not in {None, sys.stdout, sys.stderr}\n    ):\n        return print(*args, **kwargs)\n\n    logger = get_run_logger()\n\n    # Print to an in-memory buffer; so we do not need to implement `print`\n    buffer = io.StringIO()\n    kwargs[\"file\"] = buffer\n    print(*args, **kwargs)\n\n    # Remove trailing whitespace to prevent duplicates\n    logger.info(buffer.getvalue().rstrip())\n
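
    A short sketch of the usual entry point for this behavior: the flow decorator's log_prints option, which routes print() through print_as_log for the duration of the run.

    ```python\nfrom prefect import flow\n\n@flow(log_prints=True)\ndef my_flow():\n    # Forwarded to the Prefect run logger rather than only to stdout.\n    print(\"hello from a flow run\")\n\nif __name__ == \"__main__\":\n    my_flow()\n```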
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/highlighters/#prefect.logging.loggers.task_run_logger","title":"task_run_logger","text":"

    Create a task run logger with the run's metadata attached.

    Additional keyword arguments can be provided to attach custom data to the log records.

    If the task run context is available, see get_run_logger instead.

    If only the flow run context is available, it will be used for default values of flow_run and flow.

    Source code in prefect/logging/loggers.py
    def task_run_logger(\n    task_run: \"TaskRun\",\n    task: \"Task\" = None,\n    flow_run: \"FlowRun\" = None,\n    flow: \"Flow\" = None,\n    **kwargs: str,\n):\n    \"\"\"\n    Create a task run logger with the run's metadata attached.\n\n    Additional keyword arguments can be provided to attach custom data to the log\n    records.\n\n    If the task run context is available, see `get_run_logger` instead.\n\n    If only the flow run context is available, it will be used for default values\n    of `flow_run` and `flow`.\n    \"\"\"\n    if not flow_run or not flow:\n        flow_run_context = prefect.context.FlowRunContext.get()\n        if flow_run_context:\n            flow_run = flow_run or flow_run_context.flow_run\n            flow = flow or flow_run_context.flow\n\n    return PrefectLogAdapter(\n        get_logger(\"prefect.task_runs\"),\n        extra={\n            **{\n                \"task_run_id\": str(task_run.id),\n                \"flow_run_id\": str(task_run.flow_run_id),\n                \"task_run_name\": task_run.name,\n                \"task_name\": task.name if task else \"<unknown>\",\n                \"flow_run_name\": flow_run.name if flow_run else \"<unknown>\",\n                \"flow_name\": flow.name if flow else \"<unknown>\",\n            },\n            **kwargs,\n        },\n    )\n
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/","title":"loggers","text":"

    \"\"\"

    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers","title":"prefect.logging.loggers","text":"","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.LogEavesdropper","title":"LogEavesdropper","text":"

    Bases: Handler

    A context manager that collects logs for the duration of the context

    Example:\n\n    ```python\n    import logging\n    from prefect.logging import LogEavesdropper\n\n    with LogEavesdropper(\"my_logger\") as eavesdropper:\n        logging.getLogger(\"my_logger\").info(\"Hello, world!\")\n        logging.getLogger(\"my_logger.child_module\").info(\"Another one!\")\n\n    print(eavesdropper.text())\n\n    # Outputs: \"Hello, world!\n

    Another one!\"

    Source code in prefect/logging/loggers.py
    class LogEavesdropper(logging.Handler):\n    \"\"\"A context manager that collects logs for the duration of the context\n\n    Example:\n\n        ```python\n        import logging\n        from prefect.logging import LogEavesdropper\n\n        with LogEavesdropper(\"my_logger\") as eavesdropper:\n            logging.getLogger(\"my_logger\").info(\"Hello, world!\")\n            logging.getLogger(\"my_logger.child_module\").info(\"Another one!\")\n\n        print(eavesdropper.text())\n\n        # Outputs: \"Hello, world!\\nAnother one!\"\n    \"\"\"\n\n    _target_logger: logging.Logger\n    _lines: List[str]\n\n    def __init__(self, eavesdrop_on: str, level: int = logging.NOTSET):\n        \"\"\"\n        Args:\n            eavesdrop_on (str): the name of the logger to eavesdrop on\n            level (int): the minimum log level to eavesdrop on; if omitted, all levels\n                are captured\n        \"\"\"\n\n        super().__init__(level=level)\n        self.eavesdrop_on = eavesdrop_on\n        self._target_logger = None\n\n        # It's important that we use a very minimalistic formatter for use cases where\n        # we may present these logs back to the user.  We shouldn't leak filenames,\n        # versions, or other environmental information.\n        self.formatter = logging.Formatter(\"[%(levelname)s]: %(message)s\")\n\n    def __enter__(self) -> Self:\n        self._target_logger = logging.getLogger(self.eavesdrop_on)\n        self._original_level = self._target_logger.level\n        self._target_logger.level = self.level\n        self._target_logger.addHandler(self)\n        self._lines = []\n        return self\n\n    def __exit__(self, *_):\n        if self._target_logger:\n            self._target_logger.removeHandler(self)\n            self._target_logger.level = self._original_level\n\n    def emit(self, record: LogRecord) -> None:\n        \"\"\"The logging.Handler implementation, not intended to be called directly.\"\"\"\n        self._lines.append(self.format(record))\n\n    def text(self) -> str:\n        \"\"\"Return the collected logs as a single newline-delimited string\"\"\"\n        return \"\\n\".join(self._lines)\n
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.LogEavesdropper.emit","title":"emit","text":"

    The logging.Handler implementation, not intended to be called directly.

    Source code in prefect/logging/loggers.py
    def emit(self, record: LogRecord) -> None:\n    \"\"\"The logging.Handler implementation, not intended to be called directly.\"\"\"\n    self._lines.append(self.format(record))\n
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.LogEavesdropper.text","title":"text","text":"

    Return the collected logs as a single newline-delimited string

    Source code in prefect/logging/loggers.py
    def text(self) -> str:\n    \"\"\"Return the collected logs as a single newline-delimited string\"\"\"\n    return \"\\n\".join(self._lines)\n
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.PrefectLogAdapter","title":"PrefectLogAdapter","text":"

    Bases: LoggerAdapter

    Adapter that ensures extra kwargs are passed through correctly; without this the extra fields set on the adapter would overshadow any provided on a log-by-log basis.

    See https://bugs.python.org/issue32732 \u2014 the Python team has declared that this is not a bug in the LoggingAdapter and subclassing is the intended workaround.

    Source code in prefect/logging/loggers.py
    class PrefectLogAdapter(logging.LoggerAdapter):\n    \"\"\"\n    Adapter that ensures extra kwargs are passed through correctly; without this\n    the `extra` fields set on the adapter would overshadow any provided on a\n    log-by-log basis.\n\n    See https://bugs.python.org/issue32732 \u2014 the Python team has declared that this is\n    not a bug in the LoggingAdapter and subclassing is the intended workaround.\n    \"\"\"\n\n    def process(self, msg, kwargs):\n        kwargs[\"extra\"] = {**(self.extra or {}), **(kwargs.get(\"extra\") or {})}\n\n        from prefect._internal.compatibility.deprecated import (\n            PrefectDeprecationWarning,\n            generate_deprecation_message,\n        )\n\n        if \"send_to_orion\" in kwargs[\"extra\"]:\n            warnings.warn(\n                generate_deprecation_message(\n                    'The \"send_to_orion\" option',\n                    start_date=\"May 2023\",\n                    help='Use \"send_to_api\" instead.',\n                ),\n                PrefectDeprecationWarning,\n                stacklevel=4,\n            )\n\n        return (msg, kwargs)\n\n    def getChild(\n        self, suffix: str, extra: Optional[Dict[str, str]] = None\n    ) -> \"PrefectLogAdapter\":\n        if extra is None:\n            extra = {}\n\n        return PrefectLogAdapter(\n            self.logger.getChild(suffix),\n            extra={\n                **self.extra,\n                **extra,\n            },\n        )\n
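    A minimal sketch of the merge behaviour described above (the logger name and field values are illustrative, not part of the Prefect API): fields passed per call sit alongside the adapter's own `extra` instead of replacing it.

```python
import logging

from prefect.logging.loggers import PrefectLogAdapter

# Adapter-level fields (e.g. run metadata) are merged with per-call extras.
adapter = PrefectLogAdapter(
    logging.getLogger("prefect.example"), extra={"flow_run_id": "example-id"}
)
adapter.info("starting work", extra={"attempt": 1})  # record carries both fields
```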
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.disable_logger","title":"disable_logger","text":"

    Gets a logger by name and disables it within the context manager. Upon exiting the context manager, the logger is returned to its original state.

    Source code in prefect/logging/loggers.py
    @contextmanager\ndef disable_logger(name: str):\n    \"\"\"\n    Get a logger by name and disables it within the context manager.\n    Upon exiting the context manager, the logger is returned to its\n    original state.\n    \"\"\"\n    logger = logging.getLogger(name=name)\n\n    # determine if it's already disabled\n    base_state = logger.disabled\n    try:\n        # disable the logger\n        logger.disabled = True\n        yield\n    finally:\n        # return to base state\n        logger.disabled = base_state\n
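    For example, a minimal sketch that silences a single logger for the duration of a block (the logger name is just an illustration):

```python
import logging

from prefect.logging.loggers import disable_logger

with disable_logger("prefect.flow_runs"):
    # Records emitted while the context is active are dropped.
    logging.getLogger("prefect.flow_runs").info("suppressed")
# On exit the logger's previous enabled/disabled state is restored.
```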
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.disable_run_logger","title":"disable_run_logger","text":"

    Gets both the prefect.flow_run and prefect.task_run loggers and disables them within the context manager. Upon exiting the context manager, both loggers are returned to their original states.

    Source code in prefect/logging/loggers.py
    @contextmanager\ndef disable_run_logger():\n    \"\"\"\n    Gets both `prefect.flow_run` and `prefect.task_run` and disables them\n    within the context manager. Upon exiting the context manager, both loggers\n    are returned to its original state.\n    \"\"\"\n    with disable_logger(\"prefect.flow_run\"), disable_logger(\"prefect.task_run\"):\n        yield\n
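    This is handy when exercising task or flow bodies outside of a run, for example in unit tests. A minimal sketch under that assumption (the task below is illustrative); with the run loggers disabled, `get_run_logger` returns a no-op "null" logger instead of raising:

```python
from prefect import get_run_logger, task
from prefect.logging.loggers import disable_run_logger

@task
def double(x: int) -> int:
    get_run_logger().info("doubling %s", x)  # would raise outside a run context
    return x * 2

# Call the underlying function directly without an active flow or task run.
with disable_run_logger():
    assert double.fn(2) == 4
```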
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.flow_run_logger","title":"flow_run_logger","text":"

    Create a flow run logger with the run's metadata attached.

    Additional keyword arguments can be provided to attach custom data to the log records.

    If the flow run context is available, see get_run_logger instead.

    Source code in prefect/logging/loggers.py
    def flow_run_logger(\n    flow_run: Union[\"FlowRun\", \"ClientFlowRun\"],\n    flow: Optional[\"Flow\"] = None,\n    **kwargs: str,\n):\n    \"\"\"\n    Create a flow run logger with the run's metadata attached.\n\n    Additional keyword arguments can be provided to attach custom data to the log\n    records.\n\n    If the flow run context is available, see `get_run_logger` instead.\n    \"\"\"\n    return PrefectLogAdapter(\n        get_logger(\"prefect.flow_runs\"),\n        extra={\n            **{\n                \"flow_run_name\": flow_run.name if flow_run else \"<unknown>\",\n                \"flow_run_id\": str(flow_run.id) if flow_run else \"<unknown>\",\n                \"flow_name\": flow.name if flow else \"<unknown>\",\n            },\n            **kwargs,\n        },\n    )\n
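    One place a flow run object is readily available is a state-change hook, so here is a hedged sketch of using this helper there (the hook, flow, and extra field are illustrative):

```python
from prefect import flow
from prefect.logging.loggers import flow_run_logger

def log_completion(flow, flow_run, state):
    # Hooks receive the flow and flow run, so a run-scoped logger can be built here.
    flow_run_logger(flow_run=flow_run, flow=flow, hook="on_completion").info("finished")

@flow(on_completion=[log_completion])
def my_flow():
    return 1

if __name__ == "__main__":
    my_flow()
```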
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.get_logger","title":"get_logger cached","text":"

    Get a prefect logger. These loggers are intended for internal use within the prefect package.

    See get_run_logger for retrieving loggers for use within task or flow runs. By default, only run-related loggers are connected to the APILogHandler.

    Source code in prefect/logging/loggers.py
    @lru_cache()\ndef get_logger(name: str = None) -> logging.Logger:\n    \"\"\"\n    Get a `prefect` logger. These loggers are intended for internal use within the\n    `prefect` package.\n\n    See `get_run_logger` for retrieving loggers for use within task or flow runs.\n    By default, only run-related loggers are connected to the `APILogHandler`.\n    \"\"\"\n    parent_logger = logging.getLogger(\"prefect\")\n\n    if name:\n        # Append the name if given but allow explicit full names e.g. \"prefect.test\"\n        # should not become \"prefect.prefect.test\"\n        if not name.startswith(parent_logger.name + \".\"):\n            logger = parent_logger.getChild(name)\n        else:\n            logger = logging.getLogger(name)\n    else:\n        logger = parent_logger\n\n    # Prevent the current API key from being logged in plain text\n    obfuscate_api_key_filter = ObfuscateApiKeyFilter()\n    logger.addFilter(obfuscate_api_key_filter)\n\n    return logger\n
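    A short sketch of the intended internal-style use (the child logger name is illustrative):

```python
from prefect.logging.loggers import get_logger

# Child loggers are namespaced under "prefect"; they are not connected to the
# APILogHandler, so records stay local.
logger = get_logger("my_integration")
logger.debug("connected to upstream system")
```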
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.get_run_logger","title":"get_run_logger","text":"

    Get a Prefect logger for the current task run or flow run.

    The logger will be named either prefect.task_runs or prefect.flow_runs. Contextual data about the run will be attached to the log records.

    These loggers are connected to the APILogHandler by default to send log records to the API.

    Parameters:

    context (RunContext): A specific context may be provided as an override. By default, the context is inferred from global state and this should not be needed. Defaults to None.

    **kwargs (str): Additional keyword arguments will be attached to the log records in addition to the run metadata. Defaults to {}.

    Raises:

    RuntimeError: If no context can be found

    Source code in prefect/logging/loggers.py
    def get_run_logger(\n    context: \"RunContext\" = None, **kwargs: str\n) -> Union[logging.Logger, logging.LoggerAdapter]:\n    \"\"\"\n    Get a Prefect logger for the current task run or flow run.\n\n    The logger will be named either `prefect.task_runs` or `prefect.flow_runs`.\n    Contextual data about the run will be attached to the log records.\n\n    These loggers are connected to the `APILogHandler` by default to send log records to\n    the API.\n\n    Arguments:\n        context: A specific context may be provided as an override. By default, the\n            context is inferred from global state and this should not be needed.\n        **kwargs: Additional keyword arguments will be attached to the log records in\n            addition to the run metadata\n\n    Raises:\n        RuntimeError: If no context can be found\n    \"\"\"\n    # Check for existing contexts\n    task_run_context = prefect.context.TaskRunContext.get()\n    flow_run_context = prefect.context.FlowRunContext.get()\n\n    # Apply the context override\n    if context:\n        if isinstance(context, prefect.context.FlowRunContext):\n            flow_run_context = context\n        elif isinstance(context, prefect.context.TaskRunContext):\n            task_run_context = context\n        else:\n            raise TypeError(\n                f\"Received unexpected type {type(context).__name__!r} for context. \"\n                \"Expected one of 'None', 'FlowRunContext', or 'TaskRunContext'.\"\n            )\n\n    # Determine if this is a task or flow run logger\n    if task_run_context:\n        logger = task_run_logger(\n            task_run=task_run_context.task_run,\n            task=task_run_context.task,\n            flow_run=flow_run_context.flow_run if flow_run_context else None,\n            flow=flow_run_context.flow if flow_run_context else None,\n            **kwargs,\n        )\n    elif flow_run_context:\n        logger = flow_run_logger(\n            flow_run=flow_run_context.flow_run, flow=flow_run_context.flow, **kwargs\n        )\n    elif (\n        get_logger(\"prefect.flow_run\").disabled\n        and get_logger(\"prefect.task_run\").disabled\n    ):\n        logger = logging.getLogger(\"null\")\n    else:\n        raise MissingContextError(\"There is no active flow or task run context.\")\n\n    return logger\n
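    A minimal sketch of the usual call sites, inside a task and inside a flow:

```python
from prefect import flow, get_run_logger, task

@task
def greet(name: str):
    get_run_logger().info("greeting %s", name)  # logger named prefect.task_runs

@flow
def hello(name: str = "world"):
    get_run_logger().info("starting flow")  # logger named prefect.flow_runs
    greet(name)

if __name__ == "__main__":
    hello()
```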
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.patch_print","title":"patch_print","text":"

    Patches the Python builtin print method to use print_as_log

    Source code in prefect/logging/loggers.py
    @contextmanager\ndef patch_print():\n    \"\"\"\n    Patches the Python builtin `print` method to use `print_as_log`\n    \"\"\"\n    import builtins\n\n    original = builtins.print\n\n    try:\n        builtins.print = print_as_log\n        yield\n    finally:\n        builtins.print = original\n
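    In practice the engine applies this for you when a flow or task sets `log_prints=True`; the sketch below exercises the context manager directly only to illustrate it:

```python
from prefect import flow
from prefect.logging.loggers import patch_print

@flow(log_prints=True)
def my_flow():
    # Redundant here (log_prints=True already patches print); shown only to
    # illustrate the context manager itself.
    with patch_print():
        print("this line is emitted as a Prefect log record")

if __name__ == "__main__":
    my_flow()
```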
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.print_as_log","title":"print_as_log","text":"

    A patch for print to send printed messages to the Prefect run logger.

    If no run is active, print will behave as if it were not patched.

    If print sends data to a file other than sys.stdout or sys.stderr, it will not be forwarded to the Prefect logger either.

    Source code in prefect/logging/loggers.py
    def print_as_log(*args, **kwargs):\n    \"\"\"\n    A patch for `print` to send printed messages to the Prefect run logger.\n\n    If no run is active, `print` will behave as if it were not patched.\n\n    If `print` sends data to a file other than `sys.stdout` or `sys.stderr`, it will\n    not be forwarded to the Prefect logger either.\n    \"\"\"\n    from prefect.context import FlowRunContext, TaskRunContext\n\n    context = TaskRunContext.get() or FlowRunContext.get()\n    if (\n        not context\n        or not context.log_prints\n        or kwargs.get(\"file\") not in {None, sys.stdout, sys.stderr}\n    ):\n        return print(*args, **kwargs)\n\n    logger = get_run_logger()\n\n    # Print to an in-memory buffer; so we do not need to implement `print`\n    buffer = io.StringIO()\n    kwargs[\"file\"] = buffer\n    print(*args, **kwargs)\n\n    # Remove trailing whitespace to prevent duplicates\n    logger.info(buffer.getvalue().rstrip())\n
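    A short sketch of calling the patched form directly; the guard clauses above mean it quietly falls back to the builtin `print` when no run is active or `log_prints` is disabled:

```python
from prefect import flow
from prefect.logging.loggers import print_as_log

@flow(log_prints=True)
def my_flow():
    print_as_log("sent to the flow run logger")

if __name__ == "__main__":
    my_flow()
    print_as_log("no active run, so this behaves like a normal print")
```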
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/logging/loggers/#prefect.logging.loggers.task_run_logger","title":"task_run_logger","text":"

    Create a task run logger with the run's metadata attached.

    Additional keyword arguments can be provided to attach custom data to the log records.

    If the task run context is available, see get_run_logger instead.

    If only the flow run context is available, it will be used for default values of flow_run and flow.

    Source code in prefect/logging/loggers.py
    def task_run_logger(\n    task_run: \"TaskRun\",\n    task: \"Task\" = None,\n    flow_run: \"FlowRun\" = None,\n    flow: \"Flow\" = None,\n    **kwargs: str,\n):\n    \"\"\"\n    Create a task run logger with the run's metadata attached.\n\n    Additional keyword arguments can be provided to attach custom data to the log\n    records.\n\n    If the task run context is available, see `get_run_logger` instead.\n\n    If only the flow run context is available, it will be used for default values\n    of `flow_run` and `flow`.\n    \"\"\"\n    if not flow_run or not flow:\n        flow_run_context = prefect.context.FlowRunContext.get()\n        if flow_run_context:\n            flow_run = flow_run or flow_run_context.flow_run\n            flow = flow or flow_run_context.flow\n\n    return PrefectLogAdapter(\n        get_logger(\"prefect.task_runs\"),\n        extra={\n            **{\n                \"task_run_id\": str(task_run.id),\n                \"flow_run_id\": str(task_run.flow_run_id),\n                \"task_run_name\": task_run.name,\n                \"task_name\": task.name if task else \"<unknown>\",\n                \"flow_run_name\": flow_run.name if flow_run else \"<unknown>\",\n                \"flow_name\": flow.name if flow else \"<unknown>\",\n            },\n            **kwargs,\n        },\n    )\n
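    As with the flow run variant, a task run object is readily available inside a task state hook; a hedged sketch (the hook and task are illustrative):

```python
from prefect import flow, task
from prefect.logging.loggers import task_run_logger

def log_failure(task, task_run, state):
    # Task hooks receive the task and task run, so a run-scoped logger fits here.
    task_run_logger(task_run=task_run, task=task).error("task run failed")

@task(on_failure=[log_failure])
def fragile():
    raise ValueError("boom")

@flow
def my_flow():
    # return_state=True captures the failure without failing the whole flow.
    fragile(return_state=True)

if __name__ == "__main__":
    my_flow()
```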
    ","tags":["Python API","logging","loggers"]},{"location":"api-ref/prefect/runner/runner/","title":"runner","text":"","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner","title":"prefect.runner.runner","text":"

    Runners are responsible for managing the execution of deployments created and managed by either flow.serve or the serve utility.

    Example
    import time\nfrom prefect import flow, serve\n\n\n@flow\ndef slow_flow(sleep: int = 60):\n    \"Sleepy flow - sleeps the provided amount of time (in seconds).\"\n    time.sleep(sleep)\n\n\n@flow\ndef fast_flow():\n    \"Fastest flow this side of the Mississippi.\"\n    return\n\n\nif __name__ == \"__main__\":\n    slow_deploy = slow_flow.to_deployment(name=\"sleeper\", interval=45)\n    fast_deploy = fast_flow.to_deployment(name=\"fast\")\n\n    # serve generates a Runner instance\n    serve(slow_deploy, fast_deploy)\n
    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner","title":"Runner","text":"Source code in prefect/runner/runner.py
    class Runner:\n    def __init__(\n        self,\n        name: Optional[str] = None,\n        query_seconds: Optional[float] = None,\n        prefetch_seconds: float = 10,\n        limit: Optional[int] = None,\n        pause_on_shutdown: bool = True,\n        webserver: bool = False,\n    ):\n        \"\"\"\n        Responsible for managing the execution of remotely initiated flow runs.\n\n        Args:\n            name: The name of the runner. If not provided, a random one\n                will be generated. If provided, it cannot contain '/' or '%'.\n            query_seconds: The number of seconds to wait between querying for\n                scheduled flow runs; defaults to `PREFECT_RUNNER_POLL_FREQUENCY`\n            prefetch_seconds: The number of seconds to prefetch flow runs for.\n            limit: The maximum number of flow runs this runner should be running at\n            pause_on_shutdown: A boolean for whether or not to automatically pause\n                deployment schedules on shutdown; defaults to `True`\n            webserver: a boolean flag for whether to start a webserver for this runner\n\n        Examples:\n            Set up a Runner to manage the execute of scheduled flow runs for two flows:\n                ```python\n                from prefect import flow, Runner\n\n                @flow\n                def hello_flow(name):\n                    print(f\"hello {name}\")\n\n                @flow\n                def goodbye_flow(name):\n                    print(f\"goodbye {name}\")\n\n                if __name__ == \"__main__\"\n                    runner = Runner(name=\"my-runner\")\n\n                    # Will be runnable via the API\n                    runner.add_flow(hello_flow)\n\n                    # Run on a cron schedule\n                    runner.add_flow(goodbye_flow, schedule={\"cron\": \"0 * * * *\"})\n\n                    runner.start()\n                ```\n        \"\"\"\n        if name and (\"/\" in name or \"%\" in name):\n            raise ValueError(\"Runner name cannot contain '/' or '%'\")\n        self.name = Path(name).stem if name is not None else f\"runner-{uuid4()}\"\n        self._logger = get_logger(\"runner\")\n\n        self.started = False\n        self.stopping = False\n        self.pause_on_shutdown = pause_on_shutdown\n        self.limit = limit or PREFECT_RUNNER_PROCESS_LIMIT.value()\n        self.webserver = webserver\n\n        self.query_seconds = query_seconds or PREFECT_RUNNER_POLL_FREQUENCY.value()\n        self._prefetch_seconds = prefetch_seconds\n\n        self._runs_task_group: anyio.abc.TaskGroup = anyio.create_task_group()\n        self._loops_task_group: anyio.abc.TaskGroup = anyio.create_task_group()\n\n        self._limiter: Optional[anyio.CapacityLimiter] = anyio.CapacityLimiter(\n            self.limit\n        )\n        self._client = get_client()\n        self._submitting_flow_run_ids = set()\n        self._cancelling_flow_run_ids = set()\n        self._scheduled_task_scopes = set()\n        self._deployment_ids: Set[UUID] = set()\n        self._flow_run_process_map = dict()\n\n        self._tmp_dir: Path = (\n            Path(tempfile.gettempdir()) / \"runner_storage\" / str(uuid4())\n        )\n        self._storage_objs: List[RunnerStorage] = []\n        self._deployment_storage_map: Dict[UUID, RunnerStorage] = {}\n        self._loop = asyncio.get_event_loop()\n\n    @sync_compatible\n    async def add_deployment(\n        self,\n        deployment: RunnerDeployment,\n    ) -> UUID:\n        
\"\"\"\n        Registers the deployment with the Prefect API and will monitor for work once\n        the runner is started.\n\n        Args:\n            deployment: A deployment for the runner to register.\n        \"\"\"\n        deployment_id = await deployment.apply()\n        storage = deployment.storage\n        if storage is not None:\n            storage = await self._add_storage(storage)\n            self._deployment_storage_map[deployment_id] = storage\n        self._deployment_ids.add(deployment_id)\n\n        return deployment_id\n\n    @sync_compatible\n    async def add_flow(\n        self,\n        flow: Flow,\n        name: str = None,\n        interval: Optional[\n            Union[\n                Iterable[Union[int, float, datetime.timedelta]],\n                int,\n                float,\n                datetime.timedelta,\n            ]\n        ] = None,\n        cron: Optional[Union[Iterable[str], str]] = None,\n        rrule: Optional[Union[Iterable[str], str]] = None,\n        paused: Optional[bool] = None,\n        schedules: Optional[FlexibleScheduleList] = None,\n        schedule: Optional[SCHEDULE_TYPES] = None,\n        is_schedule_active: Optional[bool] = None,\n        parameters: Optional[dict] = None,\n        triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n        description: Optional[str] = None,\n        tags: Optional[List[str]] = None,\n        version: Optional[str] = None,\n        enforce_parameter_schema: bool = False,\n        entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n    ) -> UUID:\n        \"\"\"\n        Provides a flow to the runner to be run based on the provided configuration.\n\n        Will create a deployment for the provided flow and register the deployment\n        with the runner.\n\n        Args:\n            flow: A flow for the runner to run.\n            name: The name to give the created deployment. Will default to the name\n                of the runner.\n            interval: An interval on which to execute the current flow. Accepts either a number\n                or a timedelta object. If a number is given, it will be interpreted as seconds.\n            cron: A cron schedule of when to execute runs of this flow.\n            rrule: An rrule schedule of when to execute runs of this flow.\n            schedule: A schedule object of when to execute runs of this flow. Used for\n                advanced scheduling options like timezone.\n            is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n                not provided when creating a deployment, the schedule will be set as active. If not\n                provided when updating a deployment, the schedule's activation will not be changed.\n            triggers: A list of triggers that should kick of a run of this flow.\n            parameters: A dictionary of default parameter values to pass to runs of this flow.\n            description: A description for the created deployment. Defaults to the flow's\n                description if not provided.\n            tags: A list of tags to associate with the created deployment for organizational\n                purposes.\n            version: A version for the created deployment. Defaults to the flow's version.\n            entrypoint_type: Type of entrypoint to use for the deployment. 
When using a module path\n                entrypoint, ensure that the module will be importable in the execution environment.\n        \"\"\"\n        api = PREFECT_API_URL.value()\n        if any([interval, cron, rrule]) and not api:\n            self._logger.warning(\n                \"Cannot schedule flows on an ephemeral server; run `prefect server\"\n                \" start` to start the scheduler.\"\n            )\n        name = self.name if name is None else name\n\n        deployment = await flow.to_deployment(\n            name=name,\n            interval=interval,\n            cron=cron,\n            rrule=rrule,\n            schedules=schedules,\n            schedule=schedule,\n            paused=paused,\n            is_schedule_active=is_schedule_active,\n            triggers=triggers,\n            parameters=parameters,\n            description=description,\n            tags=tags,\n            version=version,\n            enforce_parameter_schema=enforce_parameter_schema,\n            entrypoint_type=entrypoint_type,\n        )\n        return await self.add_deployment(deployment)\n\n    @sync_compatible\n    async def _add_storage(self, storage: RunnerStorage) -> RunnerStorage:\n        \"\"\"\n        Adds a storage object to the runner. The storage object will be used to pull\n        code to the runner's working directory before the runner starts.\n\n        Args:\n            storage: The storage object to add to the runner.\n        Returns:\n            The updated storage object that was added to the runner.\n        \"\"\"\n        if storage not in self._storage_objs:\n            storage_copy = deepcopy(storage)\n            storage_copy.set_base_path(self._tmp_dir)\n\n            self._logger.debug(\n                f\"Adding storage {storage_copy!r} to runner at\"\n                f\" {str(storage_copy.destination)!r}\"\n            )\n            self._storage_objs.append(storage_copy)\n\n            return storage_copy\n        else:\n            return next(s for s in self._storage_objs if s == storage)\n\n    def handle_sigterm(self, signum, frame):\n        \"\"\"\n        Gracefully shuts down the runner when a SIGTERM is received.\n        \"\"\"\n        self._logger.info(\"SIGTERM received, initiating graceful shutdown...\")\n        from_sync.call_in_loop_thread(create_call(self.stop))\n\n        sys.exit(0)\n\n    @sync_compatible\n    async def start(\n        self, run_once: bool = False, webserver: Optional[bool] = None\n    ) -> None:\n        \"\"\"\n        Starts a runner.\n\n        The runner will begin monitoring for and executing any scheduled work for all added flows.\n\n        Args:\n            run_once: If True, the runner will through one query loop and then exit.\n            webserver: a boolean for whether to start a webserver for this runner. 
If provided,\n                overrides the default on the runner\n\n        Examples:\n            Initialize a Runner, add two flows, and serve them by starting the Runner:\n\n            ```python\n            from prefect import flow, Runner\n\n            @flow\n            def hello_flow(name):\n                print(f\"hello {name}\")\n\n            @flow\n            def goodbye_flow(name):\n                print(f\"goodbye {name}\")\n\n            if __name__ == \"__main__\"\n                runner = Runner(name=\"my-runner\")\n\n                # Will be runnable via the API\n                runner.add_flow(hello_flow)\n\n                # Run on a cron schedule\n                runner.add_flow(goodbye_flow, schedule={\"cron\": \"0 * * * *\"})\n\n                runner.start()\n            ```\n        \"\"\"\n        _register_signal(signal.SIGTERM, self.handle_sigterm)\n\n        webserver = webserver if webserver is not None else self.webserver\n\n        if webserver or PREFECT_RUNNER_SERVER_ENABLE.value():\n            # we'll start the ASGI server in a separate thread so that\n            # uvicorn does not block the main thread\n            server_thread = threading.Thread(\n                name=\"runner-server-thread\",\n                target=partial(\n                    start_webserver,\n                    runner=self,\n                ),\n                daemon=True,\n            )\n            server_thread.start()\n\n        async with self as runner:\n            async with self._loops_task_group as tg:\n                for storage in self._storage_objs:\n                    if storage.pull_interval:\n                        tg.start_soon(\n                            partial(\n                                critical_service_loop,\n                                workload=storage.pull_code,\n                                interval=storage.pull_interval,\n                                run_once=run_once,\n                                jitter_range=0.3,\n                            )\n                        )\n                    else:\n                        tg.start_soon(storage.pull_code)\n                tg.start_soon(\n                    partial(\n                        critical_service_loop,\n                        workload=runner._get_and_submit_flow_runs,\n                        interval=self.query_seconds,\n                        run_once=run_once,\n                        jitter_range=0.3,\n                    )\n                )\n                tg.start_soon(\n                    partial(\n                        critical_service_loop,\n                        workload=runner._check_for_cancelled_flow_runs,\n                        interval=self.query_seconds * 2,\n                        run_once=run_once,\n                        jitter_range=0.3,\n                    )\n                )\n\n    def execute_in_background(self, func, *args, **kwargs):\n        \"\"\"\n        Executes a function in the background.\n        \"\"\"\n\n        return asyncio.run_coroutine_threadsafe(func(*args, **kwargs), self._loop)\n\n    async def cancel_all(self):\n        runs_to_cancel = []\n\n        # done to avoid dictionary size changing during iteration\n        for info in self._flow_run_process_map.values():\n            runs_to_cancel.append(info[\"flow_run\"])\n        if runs_to_cancel:\n            for run in runs_to_cancel:\n                try:\n                    await self._cancel_run(run, state_msg=\"Runner is shutting down.\")\n         
       except Exception:\n                    self._logger.exception(\n                        f\"Exception encountered while cancelling {run.id}\",\n                        exc_info=True,\n                    )\n\n    @sync_compatible\n    async def stop(self):\n        \"\"\"Stops the runner's polling cycle.\"\"\"\n        if not self.started:\n            raise RuntimeError(\n                \"Runner has not yet started. Please start the runner by calling\"\n                \" .start()\"\n            )\n\n        self.started = False\n        self.stopping = True\n        await self.cancel_all()\n        try:\n            self._loops_task_group.cancel_scope.cancel()\n        except Exception:\n            self._logger.exception(\n                \"Exception encountered while shutting down\", exc_info=True\n            )\n\n    async def execute_flow_run(\n        self, flow_run_id: UUID, entrypoint: Optional[str] = None\n    ):\n        \"\"\"\n        Executes a single flow run with the given ID.\n\n        Execution will wait to monitor for cancellation requests. Exits once\n        the flow run process has exited.\n        \"\"\"\n        self.pause_on_shutdown = False\n        context = self if not self.started else asyncnullcontext()\n\n        async with context:\n            if not self._acquire_limit_slot(flow_run_id):\n                return\n\n            async with anyio.create_task_group() as tg:\n                with anyio.CancelScope():\n                    self._submitting_flow_run_ids.add(flow_run_id)\n                    flow_run = await self._client.read_flow_run(flow_run_id)\n\n                    pid = await self._runs_task_group.start(\n                        partial(\n                            self._submit_run_and_capture_errors,\n                            flow_run=flow_run,\n                            entrypoint=entrypoint,\n                        ),\n                    )\n\n                    self._flow_run_process_map[flow_run.id] = dict(\n                        pid=pid, flow_run=flow_run\n                    )\n\n                    # We want this loop to stop when the flow run process exits\n                    # so we'll check if the flow run process is still alive on\n                    # each iteration and cancel the task group if it is not.\n                    workload = partial(\n                        self._check_for_cancelled_flow_runs,\n                        should_stop=lambda: not self._flow_run_process_map,\n                        on_stop=tg.cancel_scope.cancel,\n                    )\n\n                    tg.start_soon(\n                        partial(\n                            critical_service_loop,\n                            workload=workload,\n                            interval=self.query_seconds,\n                            jitter_range=0.3,\n                        )\n                    )\n\n    def _get_flow_run_logger(self, flow_run: \"FlowRun\") -> PrefectLogAdapter:\n        return flow_run_logger(flow_run=flow_run).getChild(\n            \"runner\",\n            extra={\n                \"runner_name\": self.name,\n            },\n        )\n\n    async def _run_process(\n        self,\n        flow_run: \"FlowRun\",\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n        entrypoint: Optional[str] = None,\n    ):\n        \"\"\"\n        Runs the given flow run in a subprocess.\n\n        Args:\n            flow_run: Flow run to execute via process. 
The ID of this flow run\n                is stored in the PREFECT__FLOW_RUN_ID environment variable to\n                allow the engine to retrieve the corresponding flow's code and\n                begin execution.\n            task_status: anyio task status used to send a message to the caller\n                than the flow run process has started.\n        \"\"\"\n        command = f\"{shlex.quote(sys.executable)} -m prefect.engine\"\n\n        flow_run_logger = self._get_flow_run_logger(flow_run)\n\n        # We must add creationflags to a dict so it is only passed as a function\n        # parameter on Windows, because the presence of creationflags causes\n        # errors on Unix even if set to None\n        kwargs: Dict[str, object] = {}\n        if sys.platform == \"win32\":\n            kwargs[\"creationflags\"] = subprocess.CREATE_NEW_PROCESS_GROUP\n\n        _use_threaded_child_watcher()\n        flow_run_logger.info(\"Opening process...\")\n\n        env = get_current_settings().to_environment_variables(exclude_unset=True)\n        env.update(\n            {\n                **{\n                    \"PREFECT__FLOW_RUN_ID\": str(flow_run.id),\n                    \"PREFECT__STORAGE_BASE_PATH\": str(self._tmp_dir),\n                    \"PREFECT__ENABLE_CANCELLATION_AND_CRASHED_HOOKS\": \"false\",\n                },\n                **({\"PREFECT__FLOW_ENTRYPOINT\": entrypoint} if entrypoint else {}),\n            }\n        )\n        env.update(**os.environ)  # is this really necessary??\n\n        storage = self._deployment_storage_map.get(flow_run.deployment_id)\n        if storage and storage.pull_interval:\n            # perform an adhoc pull of code before running the flow if an\n            # adhoc pull hasn't been performed in the last pull_interval\n            # TODO: Explore integrating this behavior with global concurrency.\n            last_adhoc_pull = getattr(storage, \"last_adhoc_pull\", None)\n            if (\n                last_adhoc_pull is None\n                or last_adhoc_pull\n                < datetime.datetime.now()\n                - datetime.timedelta(seconds=storage.pull_interval)\n            ):\n                self._logger.debug(\n                    \"Performing adhoc pull of code for flow run %s with storage %r\",\n                    flow_run.id,\n                    storage,\n                )\n                await storage.pull_code()\n                setattr(storage, \"last_adhoc_pull\", datetime.datetime.now())\n\n        process = await run_process(\n            shlex.split(command),\n            stream_output=True,\n            task_status=task_status,\n            env=env,\n            **kwargs,\n            cwd=storage.destination if storage else None,\n        )\n\n        # Use the pid for display if no name was given\n\n        if process.returncode:\n            help_message = None\n            level = logging.ERROR\n            if process.returncode == -9:\n                level = logging.INFO\n                help_message = (\n                    \"This indicates that the process exited due to a SIGKILL signal. 
\"\n                    \"Typically, this is either caused by manual cancellation or \"\n                    \"high memory usage causing the operating system to \"\n                    \"terminate the process.\"\n                )\n            if process.returncode == -15:\n                level = logging.INFO\n                help_message = (\n                    \"This indicates that the process exited due to a SIGTERM signal. \"\n                    \"Typically, this is caused by manual cancellation.\"\n                )\n            elif process.returncode == 247:\n                help_message = (\n                    \"This indicates that the process was terminated due to high \"\n                    \"memory usage.\"\n                )\n            elif (\n                sys.platform == \"win32\" and process.returncode == STATUS_CONTROL_C_EXIT\n            ):\n                level = logging.INFO\n                help_message = (\n                    \"Process was terminated due to a Ctrl+C or Ctrl+Break signal. \"\n                    \"Typically, this is caused by manual cancellation.\"\n                )\n\n            flow_run_logger.log(\n                level,\n                f\"Process for flow run {flow_run.name!r} exited with status code:\"\n                f\" {process.returncode}\"\n                + (f\"; {help_message}\" if help_message else \"\"),\n            )\n        else:\n            flow_run_logger.info(\n                f\"Process for flow run {flow_run.name!r} exited cleanly.\"\n            )\n\n        return process.returncode\n\n    async def _kill_process(\n        self,\n        pid: int,\n        grace_seconds: int = 30,\n    ):\n        \"\"\"\n        Kills a given flow run process.\n\n        Args:\n            pid: ID of the process to kill\n            grace_seconds: Number of seconds to wait for the process to end.\n        \"\"\"\n        # In a non-windows environment first send a SIGTERM, then, after\n        # `grace_seconds` seconds have passed subsequent send SIGKILL. In\n        # Windows we use CTRL_BREAK_EVENT as SIGTERM is useless:\n        # https://bugs.python.org/issue26350\n        if sys.platform == \"win32\":\n            try:\n                os.kill(pid, signal.CTRL_BREAK_EVENT)\n            except (ProcessLookupError, WindowsError):\n                raise RuntimeError(\n                    f\"Unable to kill process {pid!r}: The process was not found.\"\n                )\n        else:\n            try:\n                os.kill(pid, signal.SIGTERM)\n            except ProcessLookupError:\n                raise RuntimeError(\n                    f\"Unable to kill process {pid!r}: The process was not found.\"\n                )\n\n            # Throttle how often we check if the process is still alive to keep\n            # from making too many system calls in a short period of time.\n            check_interval = max(grace_seconds / 10, 1)\n\n            with anyio.move_on_after(grace_seconds):\n                while True:\n                    await anyio.sleep(check_interval)\n\n                    # Detect if the process is still alive. 
If not do an early\n                    # return as the process respected the SIGTERM from above.\n                    try:\n                        os.kill(pid, 0)\n                    except ProcessLookupError:\n                        return\n\n            try:\n                os.kill(pid, signal.SIGKILL)\n            except OSError:\n                # We shouldn't ever end up here, but it's possible that the\n                # process ended right after the check above.\n                return\n\n    async def _pause_schedules(self):\n        \"\"\"\n        Pauses all deployment schedules.\n        \"\"\"\n        self._logger.info(\"Pausing all deployments...\")\n        for deployment_id in self._deployment_ids:\n            self._logger.debug(f\"Pausing deployment '{deployment_id}'\")\n            await self._client.set_deployment_paused_state(deployment_id, True)\n        self._logger.info(\"All deployments have been paused!\")\n\n    async def _get_and_submit_flow_runs(self):\n        if self.stopping:\n            return\n        runs_response = await self._get_scheduled_flow_runs()\n        self.last_polled = pendulum.now(\"UTC\")\n        return await self._submit_scheduled_flow_runs(flow_run_response=runs_response)\n\n    async def _check_for_cancelled_flow_runs(\n        self, should_stop: Callable = lambda: False, on_stop: Callable = lambda: None\n    ):\n        \"\"\"\n        Checks for flow runs with CANCELLING a cancelling state and attempts to\n        cancel them.\n\n        Args:\n            should_stop: A callable that returns a boolean indicating whether or not\n                the runner should stop checking for cancelled flow runs.\n            on_stop: A callable that is called when the runner should stop checking\n                for cancelled flow runs.\n        \"\"\"\n        if self.stopping:\n            return\n        if not self.started:\n            raise RuntimeError(\n                \"Runner is not set up. Please make sure you are running this runner \"\n                \"as an async context manager.\"\n            )\n\n        if should_stop():\n            self._logger.debug(\n                \"Runner has no active flow runs or deployments. 
Sending message to loop\"\n                \" service that no further cancellation checks are needed.\"\n            )\n            on_stop()\n\n        self._logger.debug(\"Checking for cancelled flow runs...\")\n\n        named_cancelling_flow_runs = await self._client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=FlowRunFilterState(\n                    type=FlowRunFilterStateType(any_=[StateType.CANCELLED]),\n                    name=FlowRunFilterStateName(any_=[\"Cancelling\"]),\n                ),\n                # Avoid duplicate cancellation calls\n                id=FlowRunFilterId(\n                    any_=list(\n                        self._flow_run_process_map.keys()\n                        - self._cancelling_flow_run_ids\n                    )\n                ),\n            ),\n        )\n\n        typed_cancelling_flow_runs = await self._client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=FlowRunFilterState(\n                    type=FlowRunFilterStateType(any_=[StateType.CANCELLING]),\n                ),\n                # Avoid duplicate cancellation calls\n                id=FlowRunFilterId(\n                    any_=list(\n                        self._flow_run_process_map.keys()\n                        - self._cancelling_flow_run_ids\n                    )\n                ),\n            ),\n        )\n\n        cancelling_flow_runs = named_cancelling_flow_runs + typed_cancelling_flow_runs\n\n        if cancelling_flow_runs:\n            self._logger.info(\n                f\"Found {len(cancelling_flow_runs)} flow runs awaiting cancellation.\"\n            )\n\n        for flow_run in cancelling_flow_runs:\n            self._cancelling_flow_run_ids.add(flow_run.id)\n            self._runs_task_group.start_soon(self._cancel_run, flow_run)\n\n        return cancelling_flow_runs\n\n    async def _cancel_run(self, flow_run: \"FlowRun\", state_msg: Optional[str] = None):\n        run_logger = self._get_flow_run_logger(flow_run)\n\n        pid = self._flow_run_process_map.get(flow_run.id, {}).get(\"pid\")\n        if not pid:\n            await self._run_on_cancellation_hooks(flow_run, flow_run.state)\n            await self._mark_flow_run_as_cancelled(\n                flow_run,\n                state_updates={\n                    \"message\": (\n                        \"Could not find process ID for flow run\"\n                        \" and cancellation cannot be guaranteed.\"\n                    )\n                },\n            )\n            return\n\n        try:\n            await self._kill_process(pid)\n        except RuntimeError as exc:\n            self._logger.warning(f\"{exc} Marking flow run as cancelled.\")\n            await self._run_on_cancellation_hooks(flow_run, flow_run.state)\n            await self._mark_flow_run_as_cancelled(flow_run)\n        except Exception:\n            run_logger.exception(\n                \"Encountered exception while killing process for flow run \"\n                f\"'{flow_run.id}'. 
Flow run may not be cancelled.\"\n            )\n            # We will try again on generic exceptions\n            self._cancelling_flow_run_ids.remove(flow_run.id)\n        else:\n            await self._run_on_cancellation_hooks(flow_run, flow_run.state)\n            await self._mark_flow_run_as_cancelled(\n                flow_run,\n                state_updates={\n                    \"message\": state_msg or \"Flow run was cancelled successfully.\"\n                },\n            )\n            run_logger.info(f\"Cancelled flow run '{flow_run.name}'!\")\n\n    async def _get_scheduled_flow_runs(\n        self,\n    ) -> List[\"FlowRun\"]:\n        \"\"\"\n        Retrieve scheduled flow runs for this runner.\n        \"\"\"\n        scheduled_before = pendulum.now(\"utc\").add(seconds=int(self._prefetch_seconds))\n        self._logger.debug(\n            f\"Querying for flow runs scheduled before {scheduled_before}\"\n        )\n\n        scheduled_flow_runs = (\n            await self._client.get_scheduled_flow_runs_for_deployments(\n                deployment_ids=list(self._deployment_ids),\n                scheduled_before=scheduled_before,\n            )\n        )\n        self._logger.debug(f\"Discovered {len(scheduled_flow_runs)} scheduled_flow_runs\")\n        return scheduled_flow_runs\n\n    def has_slots_available(self) -> bool:\n        \"\"\"\n        Determine if the flow run limit has been reached.\n\n        Returns:\n            - bool: True if the limit has not been reached, False otherwise.\n        \"\"\"\n        return self._limiter.available_tokens > 0\n\n    def _acquire_limit_slot(self, flow_run_id: str) -> bool:\n        \"\"\"\n        Enforces flow run limit set on runner.\n\n        Returns:\n            - bool: True if a slot was acquired, False otherwise.\n        \"\"\"\n        try:\n            if self._limiter:\n                self._limiter.acquire_on_behalf_of_nowait(flow_run_id)\n                self._logger.debug(\"Limit slot acquired for flow run '%s'\", flow_run_id)\n            return True\n        except RuntimeError as exc:\n            if (\n                \"this borrower is already holding one of this CapacityLimiter's tokens\"\n                in str(exc)\n            ):\n                self._logger.warning(\n                    f\"Duplicate submission of flow run '{flow_run_id}' detected. Runner\"\n                    \" will not re-submit flow run.\"\n                )\n                return False\n            else:\n                raise\n        except anyio.WouldBlock:\n            self._logger.info(\n                f\"Flow run limit reached; {self._limiter.borrowed_tokens} flow runs\"\n                \" in progress. 
You can control this limit by passing a `limit` value\"\n                \" to `serve` or adjusting the PREFECT_RUNNER_PROCESS_LIMIT setting.\"\n            )\n            return False\n\n    def _release_limit_slot(self, flow_run_id: str) -> None:\n        \"\"\"\n        Frees up a slot taken by the given flow run id.\n        \"\"\"\n        if self._limiter:\n            self._limiter.release_on_behalf_of(flow_run_id)\n            self._logger.debug(\"Limit slot released for flow run '%s'\", flow_run_id)\n\n    async def _submit_scheduled_flow_runs(\n        self,\n        flow_run_response: List[\"FlowRun\"],\n        entrypoints: Optional[List[str]] = None,\n    ) -> List[\"FlowRun\"]:\n        \"\"\"\n        Takes a list of FlowRuns and submits the referenced flow runs\n        for execution by the runner.\n        \"\"\"\n        submittable_flow_runs = flow_run_response\n        submittable_flow_runs.sort(key=lambda run: run.next_scheduled_start_time)\n        for i, flow_run in enumerate(submittable_flow_runs):\n            if flow_run.id in self._submitting_flow_run_ids:\n                continue\n\n            if self._acquire_limit_slot(flow_run.id):\n                run_logger = self._get_flow_run_logger(flow_run)\n                run_logger.info(\n                    f\"Runner '{self.name}' submitting flow run '{flow_run.id}'\"\n                )\n                self._submitting_flow_run_ids.add(flow_run.id)\n                self._runs_task_group.start_soon(\n                    partial(\n                        self._submit_run,\n                        flow_run=flow_run,\n                        entrypoint=(\n                            entrypoints[i] if entrypoints else None\n                        ),  # TODO: avoid relying on index\n                    )\n                )\n            else:\n                break\n\n        return list(\n            filter(\n                lambda run: run.id in self._submitting_flow_run_ids,\n                submittable_flow_runs,\n            )\n        )\n\n    async def _submit_run(self, flow_run: \"FlowRun\", entrypoint: Optional[str] = None):\n        \"\"\"\n        Submits a given flow run for execution by the runner.\n        \"\"\"\n        run_logger = self._get_flow_run_logger(flow_run)\n\n        ready_to_submit = await self._propose_pending_state(flow_run)\n\n        if ready_to_submit:\n            readiness_result = await self._runs_task_group.start(\n                partial(\n                    self._submit_run_and_capture_errors,\n                    flow_run=flow_run,\n                    entrypoint=entrypoint,\n                ),\n            )\n\n            if readiness_result and not isinstance(readiness_result, Exception):\n                self._flow_run_process_map[flow_run.id] = dict(\n                    pid=readiness_result, flow_run=flow_run\n                )\n\n            run_logger.info(f\"Completed submission of flow run '{flow_run.id}'\")\n        else:\n            # If the run is not ready to submit, release the concurrency slot\n            self._release_limit_slot(flow_run.id)\n\n        self._submitting_flow_run_ids.remove(flow_run.id)\n\n    async def _submit_run_and_capture_errors(\n        self,\n        flow_run: \"FlowRun\",\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n        entrypoint: Optional[str] = None,\n    ) -> Union[Optional[int], Exception]:\n        run_logger = self._get_flow_run_logger(flow_run)\n\n        try:\n            status_code = await 
self._run_process(\n                flow_run=flow_run,\n                task_status=task_status,\n                entrypoint=entrypoint,\n            )\n        except Exception as exc:\n            if not task_status._future.done():\n                # This flow run was being submitted and did not start successfully\n                run_logger.exception(\n                    f\"Failed to start process for flow run '{flow_run.id}'.\"\n                )\n                # Mark the task as started to prevent agent crash\n                task_status.started(exc)\n                await self._propose_crashed_state(\n                    flow_run, \"Flow run process could not be started\"\n                )\n            else:\n                run_logger.exception(\n                    f\"An error occurred while monitoring flow run '{flow_run.id}'. \"\n                    \"The flow run will not be marked as failed, but an issue may have \"\n                    \"occurred.\"\n                )\n            return exc\n        finally:\n            self._release_limit_slot(flow_run.id)\n            self._flow_run_process_map.pop(flow_run.id, None)\n\n        if status_code != 0:\n            await self._propose_crashed_state(\n                flow_run,\n                f\"Flow run process exited with non-zero status code {status_code}.\",\n            )\n\n        api_flow_run = await self._client.read_flow_run(flow_run_id=flow_run.id)\n        terminal_state = api_flow_run.state\n        if terminal_state.is_crashed():\n            await self._run_on_crashed_hooks(flow_run=flow_run, state=terminal_state)\n\n        return status_code\n\n    async def _propose_pending_state(self, flow_run: \"FlowRun\") -> bool:\n        run_logger = self._get_flow_run_logger(flow_run)\n        state = flow_run.state\n        try:\n            state = await propose_state(\n                self._client, Pending(), flow_run_id=flow_run.id\n            )\n        except Abort as exc:\n            run_logger.info(\n                (\n                    f\"Aborted submission of flow run '{flow_run.id}'. 
\"\n                    f\"Server sent an abort signal: {exc}\"\n                ),\n            )\n            return False\n        except Exception:\n            run_logger.exception(\n                f\"Failed to update state of flow run '{flow_run.id}'\",\n            )\n            return False\n\n        if not state.is_pending():\n            run_logger.info(\n                (\n                    f\"Aborted submission of flow run '{flow_run.id}': \"\n                    f\"Server returned a non-pending state {state.type.value!r}\"\n                ),\n            )\n            return False\n\n        return True\n\n    async def _propose_failed_state(self, flow_run: \"FlowRun\", exc: Exception) -> None:\n        run_logger = self._get_flow_run_logger(flow_run)\n        try:\n            await propose_state(\n                self._client,\n                await exception_to_failed_state(message=\"Submission failed.\", exc=exc),\n                flow_run_id=flow_run.id,\n            )\n        except Abort:\n            # We've already failed, no need to note the abort but we don't want it to\n            # raise in the agent process\n            pass\n        except Exception:\n            run_logger.error(\n                f\"Failed to update state of flow run '{flow_run.id}'\",\n                exc_info=True,\n            )\n\n    async def _propose_crashed_state(self, flow_run: \"FlowRun\", message: str) -> None:\n        run_logger = self._get_flow_run_logger(flow_run)\n        try:\n            state = await propose_state(\n                self._client,\n                Crashed(message=message),\n                flow_run_id=flow_run.id,\n            )\n        except Abort:\n            # Flow run already marked as failed\n            pass\n        except Exception:\n            run_logger.exception(f\"Failed to update state of flow run '{flow_run.id}'\")\n        else:\n            if state.is_crashed():\n                run_logger.info(\n                    f\"Reported flow run '{flow_run.id}' as crashed: {message}\"\n                )\n\n    async def _mark_flow_run_as_cancelled(\n        self, flow_run: \"FlowRun\", state_updates: Optional[dict] = None\n    ) -> None:\n        state_updates = state_updates or {}\n        state_updates.setdefault(\"name\", \"Cancelled\")\n        state_updates.setdefault(\"type\", StateType.CANCELLED)\n        state = flow_run.state.copy(update=state_updates)\n\n        await self._client.set_flow_run_state(flow_run.id, state, force=True)\n\n        # Do not remove the flow run from the cancelling set immediately because\n        # the API caches responses for the `read_flow_runs` and we do not want to\n        # duplicate cancellations.\n        await self._schedule_task(\n            60 * 10, self._cancelling_flow_run_ids.remove, flow_run.id\n        )\n\n    async def _schedule_task(self, __in_seconds: int, fn, *args, **kwargs):\n        \"\"\"\n        Schedule a background task to start after some time.\n\n        These tasks will be run immediately when the runner exits instead of waiting.\n\n        The function may be async or sync. 
Async functions will be awaited.\n        \"\"\"\n\n        async def wrapper(task_status):\n            # If we are shutting down, do not sleep; otherwise sleep until the scheduled\n            # time or shutdown\n            if self.started:\n                with anyio.CancelScope() as scope:\n                    self._scheduled_task_scopes.add(scope)\n                    task_status.started()\n                    await anyio.sleep(__in_seconds)\n\n                self._scheduled_task_scopes.remove(scope)\n            else:\n                task_status.started()\n\n            result = fn(*args, **kwargs)\n            if inspect.iscoroutine(result):\n                await result\n\n        await self._runs_task_group.start(wrapper)\n\n    async def _run_on_cancellation_hooks(\n        self,\n        flow_run: \"FlowRun\",\n        state: State,\n    ) -> None:\n        \"\"\"\n        Run the hooks for a flow.\n        \"\"\"\n        if state.is_cancelling():\n            flow = await load_flow_from_flow_run(\n                flow_run, client=self._client, storage_base_path=str(self._tmp_dir)\n            )\n            hooks = flow.on_cancellation or []\n\n            await _run_hooks(hooks, flow_run, flow, state)\n\n    async def _run_on_crashed_hooks(\n        self,\n        flow_run: \"FlowRun\",\n        state: State,\n    ) -> None:\n        \"\"\"\n        Run the hooks for a flow.\n        \"\"\"\n        if state.is_crashed():\n            flow = await load_flow_from_flow_run(\n                flow_run, client=self._client, storage_base_path=str(self._tmp_dir)\n            )\n            hooks = flow.on_crashed or []\n\n            await _run_hooks(hooks, flow_run, flow, state)\n\n    async def __aenter__(self):\n        self._logger.debug(\"Starting runner...\")\n        self._client = get_client()\n        self._tmp_dir.mkdir(parents=True)\n        await self._client.__aenter__()\n        await self._runs_task_group.__aenter__()\n\n        self.started = True\n        return self\n\n    async def __aexit__(self, *exc_info):\n        self._logger.debug(\"Stopping runner...\")\n        if self.pause_on_shutdown:\n            await self._pause_schedules()\n        self.started = False\n        for scope in self._scheduled_task_scopes:\n            scope.cancel()\n        if self._runs_task_group:\n            await self._runs_task_group.__aexit__(*exc_info)\n        if self._client:\n            await self._client.__aexit__(*exc_info)\n        shutil.rmtree(str(self._tmp_dir))\n\n    def __repr__(self):\n        return f\"Runner(name={self.name!r})\"\n
    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.add_deployment","title":"add_deployment async","text":"

    Registers the deployment with the Prefect API; the runner will monitor it for work once started.

    Parameters:

    Name Type Description Default deployment RunnerDeployment

    A deployment for the runner to register.

    required Source code in prefect/runner/runner.py
    @sync_compatible\nasync def add_deployment(\n    self,\n    deployment: RunnerDeployment,\n) -> UUID:\n    \"\"\"\n    Registers the deployment with the Prefect API and will monitor for work once\n    the runner is started.\n\n    Args:\n        deployment: A deployment for the runner to register.\n    \"\"\"\n    deployment_id = await deployment.apply()\n    storage = deployment.storage\n    if storage is not None:\n        storage = await self._add_storage(storage)\n        self._deployment_storage_map[deployment_id] = storage\n    self._deployment_ids.add(deployment_id)\n\n    return deployment_id\n
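
    A minimal usage sketch (not from the source): the flow, runner name, and deployment name below are illustrative, and to_deployment is used to build the RunnerDeployment that gets registered.

    from prefect import flow, Runner\n\n@flow\ndef my_flow():\n    print(\"hello from my_flow\")\n\nif __name__ == \"__main__\":\n    runner = Runner(name=\"example-runner\")\n    # Build a RunnerDeployment from the flow, then register it with this runner\n    deployment = my_flow.to_deployment(name=\"example-deployment\")\n    runner.add_deployment(deployment)\n    # Start polling for scheduled work for the registered deployment\n    runner.start()\n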
    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.add_flow","title":"add_flow async","text":"

    Provides a flow to the runner to be run based on the provided configuration.

    Will create a deployment for the provided flow and register the deployment with the runner.

    Parameters:

    Name Type Description Default flow Flow

    A flow for the runner to run.

    required name str

    The name to give the created deployment. Will default to the name of the runner.

    None interval Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]]

    An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.

    None cron Optional[Union[Iterable[str], str]]

    A cron schedule of when to execute runs of this flow.

    None rrule Optional[Union[Iterable[str], str]]

    An rrule schedule of when to execute runs of this flow.

    None schedule Optional[SCHEDULE_TYPES]

    A schedule object of when to execute runs of this flow. Used for advanced scheduling options like timezone.

    None is_schedule_active Optional[bool]

    Whether or not to set the schedule for this deployment as active. If not provided when creating a deployment, the schedule will be set as active. If not provided when updating a deployment, the schedule's activation will not be changed.

    None triggers Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]]

    A list of triggers that should kick off a run of this flow.

    None parameters Optional[dict]

    A dictionary of default parameter values to pass to runs of this flow.

    None description Optional[str]

    A description for the created deployment. Defaults to the flow's description if not provided.

    None tags Optional[List[str]]

    A list of tags to associate with the created deployment for organizational purposes.

    None version Optional[str]

    A version for the created deployment. Defaults to the flow's version.

    None entrypoint_type EntrypointType

    Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.

    FILE_PATH Source code in prefect/runner/runner.py
    @sync_compatible\nasync def add_flow(\n    self,\n    flow: Flow,\n    name: str = None,\n    interval: Optional[\n        Union[\n            Iterable[Union[int, float, datetime.timedelta]],\n            int,\n            float,\n            datetime.timedelta,\n        ]\n    ] = None,\n    cron: Optional[Union[Iterable[str], str]] = None,\n    rrule: Optional[Union[Iterable[str], str]] = None,\n    paused: Optional[bool] = None,\n    schedules: Optional[FlexibleScheduleList] = None,\n    schedule: Optional[SCHEDULE_TYPES] = None,\n    is_schedule_active: Optional[bool] = None,\n    parameters: Optional[dict] = None,\n    triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None,\n    description: Optional[str] = None,\n    tags: Optional[List[str]] = None,\n    version: Optional[str] = None,\n    enforce_parameter_schema: bool = False,\n    entrypoint_type: EntrypointType = EntrypointType.FILE_PATH,\n) -> UUID:\n    \"\"\"\n    Provides a flow to the runner to be run based on the provided configuration.\n\n    Will create a deployment for the provided flow and register the deployment\n    with the runner.\n\n    Args:\n        flow: A flow for the runner to run.\n        name: The name to give the created deployment. Will default to the name\n            of the runner.\n        interval: An interval on which to execute the current flow. Accepts either a number\n            or a timedelta object. If a number is given, it will be interpreted as seconds.\n        cron: A cron schedule of when to execute runs of this flow.\n        rrule: An rrule schedule of when to execute runs of this flow.\n        schedule: A schedule object of when to execute runs of this flow. Used for\n            advanced scheduling options like timezone.\n        is_schedule_active: Whether or not to set the schedule for this deployment as active. If\n            not provided when creating a deployment, the schedule will be set as active. If not\n            provided when updating a deployment, the schedule's activation will not be changed.\n        triggers: A list of triggers that should kick of a run of this flow.\n        parameters: A dictionary of default parameter values to pass to runs of this flow.\n        description: A description for the created deployment. Defaults to the flow's\n            description if not provided.\n        tags: A list of tags to associate with the created deployment for organizational\n            purposes.\n        version: A version for the created deployment. Defaults to the flow's version.\n        entrypoint_type: Type of entrypoint to use for the deployment. 
When using a module path\n            entrypoint, ensure that the module will be importable in the execution environment.\n    \"\"\"\n    api = PREFECT_API_URL.value()\n    if any([interval, cron, rrule]) and not api:\n        self._logger.warning(\n            \"Cannot schedule flows on an ephemeral server; run `prefect server\"\n            \" start` to start the scheduler.\"\n        )\n    name = self.name if name is None else name\n\n    deployment = await flow.to_deployment(\n        name=name,\n        interval=interval,\n        cron=cron,\n        rrule=rrule,\n        schedules=schedules,\n        schedule=schedule,\n        paused=paused,\n        is_schedule_active=is_schedule_active,\n        triggers=triggers,\n        parameters=parameters,\n        description=description,\n        tags=tags,\n        version=version,\n        enforce_parameter_schema=enforce_parameter_schema,\n        entrypoint_type=entrypoint_type,\n    )\n    return await self.add_deployment(deployment)\n
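
    For illustration, a hedged sketch (not from the source) of scheduling a flow directly on a runner; the flow, runner name, and 24-hour interval are arbitrary examples.

    import datetime\n\nfrom prefect import flow, Runner\n\n@flow\ndef nightly_report(name: str = \"world\"):\n    print(f\"report for {name}\")\n\nif __name__ == \"__main__\":\n    runner = Runner(name=\"report-runner\")\n    # add_flow creates and registers a deployment for the flow; a numeric\n    # interval is interpreted as seconds, a timedelta works as-is\n    runner.add_flow(nightly_report, name=\"nightly-report\", interval=datetime.timedelta(hours=24))\n    runner.start()\n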
    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.execute_flow_run","title":"execute_flow_run async","text":"

    Executes a single flow run with the given ID.

    Execution blocks while monitoring for cancellation requests and exits once the flow run process has exited.

    Source code in prefect/runner/runner.py
    async def execute_flow_run(\n    self, flow_run_id: UUID, entrypoint: Optional[str] = None\n):\n    \"\"\"\n    Executes a single flow run with the given ID.\n\n    Execution will wait to monitor for cancellation requests. Exits once\n    the flow run process has exited.\n    \"\"\"\n    self.pause_on_shutdown = False\n    context = self if not self.started else asyncnullcontext()\n\n    async with context:\n        if not self._acquire_limit_slot(flow_run_id):\n            return\n\n        async with anyio.create_task_group() as tg:\n            with anyio.CancelScope():\n                self._submitting_flow_run_ids.add(flow_run_id)\n                flow_run = await self._client.read_flow_run(flow_run_id)\n\n                pid = await self._runs_task_group.start(\n                    partial(\n                        self._submit_run_and_capture_errors,\n                        flow_run=flow_run,\n                        entrypoint=entrypoint,\n                    ),\n                )\n\n                self._flow_run_process_map[flow_run.id] = dict(\n                    pid=pid, flow_run=flow_run\n                )\n\n                # We want this loop to stop when the flow run process exits\n                # so we'll check if the flow run process is still alive on\n                # each iteration and cancel the task group if it is not.\n                workload = partial(\n                    self._check_for_cancelled_flow_runs,\n                    should_stop=lambda: not self._flow_run_process_map,\n                    on_stop=tg.cancel_scope.cancel,\n                )\n\n                tg.start_soon(\n                    partial(\n                        critical_service_loop,\n                        workload=workload,\n                        interval=self.query_seconds,\n                        jitter_range=0.3,\n                    )\n                )\n
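
    A hedged sketch (not from the source) of driving a single run; the UUID below is a placeholder for the ID of an existing flow run, and the method is awaited because it is a plain coroutine.

    import asyncio\nfrom uuid import UUID\n\nfrom prefect import Runner\n\nasync def main():\n    runner = Runner(name=\"one-shot-runner\")\n    # Replace the placeholder with the ID of a real scheduled flow run\n    await runner.execute_flow_run(UUID(\"00000000-0000-0000-0000-000000000000\"))\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n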
    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.execute_in_background","title":"execute_in_background","text":"

    Executes a function in the background.

    Source code in prefect/runner/runner.py
    def execute_in_background(self, func, *args, **kwargs):\n    \"\"\"\n    Executes a function in the background.\n    \"\"\"\n\n    return asyncio.run_coroutine_threadsafe(func(*args, **kwargs), self._loop)\n
    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.handle_sigterm","title":"handle_sigterm","text":"

    Gracefully shuts down the runner when a SIGTERM is received.

    Source code in prefect/runner/runner.py
    def handle_sigterm(self, signum, frame):\n    \"\"\"\n    Gracefully shuts down the runner when a SIGTERM is received.\n    \"\"\"\n    self._logger.info(\"SIGTERM received, initiating graceful shutdown...\")\n    from_sync.call_in_loop_thread(create_call(self.stop))\n\n    sys.exit(0)\n
    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.has_slots_available","title":"has_slots_available","text":"

    Determine if the flow run limit has been reached.

    Returns:

    Type Description bool
    • bool: True if the limit has not been reached, False otherwise.
    Source code in prefect/runner/runner.py
    def has_slots_available(self) -> bool:\n    \"\"\"\n    Determine if the flow run limit has been reached.\n\n    Returns:\n        - bool: True if the limit has not been reached, False otherwise.\n    \"\"\"\n    return self._limiter.available_tokens > 0\n
    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.start","title":"start async","text":"

    Starts a runner.

    The runner will begin monitoring for and executing any scheduled work for all added flows.

    Parameters:

    Name Type Description Default run_once bool

    If True, the runner will run through one query loop and then exit.

    False webserver Optional[bool]

    A boolean for whether to start a webserver for this runner. If provided, this overrides the default configured on the runner.

    None

    Examples:

    Initialize a Runner, add two flows, and serve them by starting the Runner:

    from prefect import flow, Runner\n\n@flow\ndef hello_flow(name):\n    print(f\"hello {name}\")\n\n@flow\ndef goodbye_flow(name):\n    print(f\"goodbye {name}\")\n\nif __name__ == \"__main__\":\n    runner = Runner(name=\"my-runner\")\n\n    # Will be runnable via the API\n    runner.add_flow(hello_flow)\n\n    # Run on a cron schedule\n    runner.add_flow(goodbye_flow, schedule={\"cron\": \"0 * * * *\"})\n\n    runner.start()\n
    Source code in prefect/runner/runner.py
    @sync_compatible\nasync def start(\n    self, run_once: bool = False, webserver: Optional[bool] = None\n) -> None:\n    \"\"\"\n    Starts a runner.\n\n    The runner will begin monitoring for and executing any scheduled work for all added flows.\n\n    Args:\n        run_once: If True, the runner will through one query loop and then exit.\n        webserver: a boolean for whether to start a webserver for this runner. If provided,\n            overrides the default on the runner\n\n    Examples:\n        Initialize a Runner, add two flows, and serve them by starting the Runner:\n\n        ```python\n        from prefect import flow, Runner\n\n        @flow\n        def hello_flow(name):\n            print(f\"hello {name}\")\n\n        @flow\n        def goodbye_flow(name):\n            print(f\"goodbye {name}\")\n\n        if __name__ == \"__main__\"\n            runner = Runner(name=\"my-runner\")\n\n            # Will be runnable via the API\n            runner.add_flow(hello_flow)\n\n            # Run on a cron schedule\n            runner.add_flow(goodbye_flow, schedule={\"cron\": \"0 * * * *\"})\n\n            runner.start()\n        ```\n    \"\"\"\n    _register_signal(signal.SIGTERM, self.handle_sigterm)\n\n    webserver = webserver if webserver is not None else self.webserver\n\n    if webserver or PREFECT_RUNNER_SERVER_ENABLE.value():\n        # we'll start the ASGI server in a separate thread so that\n        # uvicorn does not block the main thread\n        server_thread = threading.Thread(\n            name=\"runner-server-thread\",\n            target=partial(\n                start_webserver,\n                runner=self,\n            ),\n            daemon=True,\n        )\n        server_thread.start()\n\n    async with self as runner:\n        async with self._loops_task_group as tg:\n            for storage in self._storage_objs:\n                if storage.pull_interval:\n                    tg.start_soon(\n                        partial(\n                            critical_service_loop,\n                            workload=storage.pull_code,\n                            interval=storage.pull_interval,\n                            run_once=run_once,\n                            jitter_range=0.3,\n                        )\n                    )\n                else:\n                    tg.start_soon(storage.pull_code)\n            tg.start_soon(\n                partial(\n                    critical_service_loop,\n                    workload=runner._get_and_submit_flow_runs,\n                    interval=self.query_seconds,\n                    run_once=run_once,\n                    jitter_range=0.3,\n                )\n            )\n            tg.start_soon(\n                partial(\n                    critical_service_loop,\n                    workload=runner._check_for_cancelled_flow_runs,\n                    interval=self.query_seconds * 2,\n                    run_once=run_once,\n                    jitter_range=0.3,\n                )\n            )\n
    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.Runner.stop","title":"stop async","text":"

    Stops the runner's polling cycle.

    Source code in prefect/runner/runner.py
    @sync_compatible\nasync def stop(self):\n    \"\"\"Stops the runner's polling cycle.\"\"\"\n    if not self.started:\n        raise RuntimeError(\n            \"Runner has not yet started. Please start the runner by calling\"\n            \" .start()\"\n        )\n\n    self.started = False\n    self.stopping = True\n    await self.cancel_all()\n    try:\n        self._loops_task_group.cancel_scope.cancel()\n    except Exception:\n        self._logger.exception(\n            \"Exception encountered while shutting down\", exc_info=True\n        )\n
    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/runner/#prefect.runner.runner.serve","title":"serve async","text":"

    Serve the provided list of deployments.

    Parameters:

    Name Type Description Default *args RunnerDeployment

    A list of deployments to serve.

    () pause_on_shutdown bool

    A boolean for whether or not to automatically pause deployment schedules on shutdown.

    True print_starting_message bool

    Whether or not to print a message to the console on startup.

    True limit Optional[int]

    The maximum number of runs that can be executed concurrently.

    None **kwargs

    Additional keyword arguments to pass to the runner.

    {}

    Examples:

    Prepare two deployments and serve them:

    import datetime\n\nfrom prefect import flow, serve\n\n@flow\ndef my_flow(name):\n    print(f\"hello {name}\")\n\n@flow\ndef my_other_flow(name):\n    print(f\"goodbye {name}\")\n\nif __name__ == \"__main__\":\n    # Run once a day\n    hello_deploy = my_flow.to_deployment(\n        \"hello\", tags=[\"dev\"], interval=datetime.timedelta(days=1)\n    )\n\n    # Run every Sunday at 4:00 AM\n    bye_deploy = my_other_flow.to_deployment(\n        \"goodbye\", tags=[\"dev\"], cron=\"0 4 * * sun\"\n    )\n\n    serve(hello_deploy, bye_deploy)\n
    Source code in prefect/runner/runner.py
    @sync_compatible\nasync def serve(\n    *args: RunnerDeployment,\n    pause_on_shutdown: bool = True,\n    print_starting_message: bool = True,\n    limit: Optional[int] = None,\n    **kwargs,\n):\n    \"\"\"\n    Serve the provided list of deployments.\n\n    Args:\n        *args: A list of deployments to serve.\n        pause_on_shutdown: A boolean for whether or not to automatically pause\n            deployment schedules on shutdown.\n        print_starting_message: Whether or not to print message to the console\n            on startup.\n        limit: The maximum number of runs that can be executed concurrently.\n        **kwargs: Additional keyword arguments to pass to the runner.\n\n    Examples:\n        Prepare two deployments and serve them:\n\n        ```python\n        import datetime\n\n        from prefect import flow, serve\n\n        @flow\n        def my_flow(name):\n            print(f\"hello {name}\")\n\n        @flow\n        def my_other_flow(name):\n            print(f\"goodbye {name}\")\n\n        if __name__ == \"__main__\":\n            # Run once a day\n            hello_deploy = my_flow.to_deployment(\n                \"hello\", tags=[\"dev\"], interval=datetime.timedelta(days=1)\n            )\n\n            # Run every Sunday at 4:00 AM\n            bye_deploy = my_other_flow.to_deployment(\n                \"goodbye\", tags=[\"dev\"], cron=\"0 4 * * sun\"\n            )\n\n            serve(hello_deploy, bye_deploy)\n        ```\n    \"\"\"\n    runner = Runner(pause_on_shutdown=pause_on_shutdown, limit=limit, **kwargs)\n    for deployment in args:\n        await runner.add_deployment(deployment)\n\n    if print_starting_message:\n        help_message_top = (\n            \"[green]Your deployments are being served and polling for\"\n            \" scheduled runs!\\n[/]\"\n        )\n\n        table = Table(title=\"Deployments\", show_header=False)\n\n        table.add_column(style=\"blue\", no_wrap=True)\n\n        for deployment in args:\n            table.add_row(f\"{deployment.flow_name}/{deployment.name}\")\n\n        help_message_bottom = (\n            \"\\nTo trigger any of these deployments, use the\"\n            \" following command:\\n[blue]\\n\\t$ prefect deployment run\"\n            \" [DEPLOYMENT_NAME]\\n[/]\"\n        )\n        if PREFECT_UI_URL:\n            help_message_bottom += (\n                \"\\nYou can also trigger your deployments via the Prefect UI:\"\n                f\" [blue]{PREFECT_UI_URL.value()}/deployments[/]\\n\"\n            )\n\n        console = Console()\n        console.print(\n            Group(help_message_top, table, help_message_bottom), soft_wrap=True\n        )\n\n    await runner.start()\n
    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/server/","title":"server","text":"","tags":["Python API","server"]},{"location":"api-ref/prefect/runner/server/#prefect.runner.server","title":"prefect.runner.server","text":"","tags":["Python API","server"]},{"location":"api-ref/prefect/runner/server/#prefect.runner.server.build_server","title":"build_server async","text":"

    Build a FastAPI server for a runner.

    Parameters:

    Name Type Description Default runner Runner

    the runner this server interacts with and monitors

    required Source code in prefect/runner/server.py
    @sync_compatible\nasync def build_server(runner: \"Runner\") -> FastAPI:\n    \"\"\"\n    Build a FastAPI server for a runner.\n\n    Args:\n        runner (Runner): the runner this server interacts with and monitors\n        log_level (str): the log level to use for the server\n    \"\"\"\n    webserver = FastAPI()\n    router = APIRouter()\n\n    router.add_api_route(\n        \"/health\", perform_health_check(runner=runner), methods=[\"GET\"]\n    )\n    router.add_api_route(\"/run_count\", run_count(runner=runner), methods=[\"GET\"])\n    router.add_api_route(\"/shutdown\", shutdown(runner=runner), methods=[\"POST\"])\n    webserver.include_router(router)\n\n    if PREFECT_EXPERIMENTAL_ENABLE_EXTRA_RUNNER_ENDPOINTS.value():\n        deployments_router, deployment_schemas = await get_deployment_router(runner)\n        webserver.include_router(deployments_router)\n\n        subflow_schemas = await get_subflow_schemas(runner)\n        webserver.add_api_route(\n            \"/flow/run\",\n            _build_generic_endpoint_for_flows(runner=runner, schemas=subflow_schemas),\n            methods=[\"POST\"],\n            name=\"Run flow in background\",\n            description=\"Trigger any flow run as a background task on the runner.\",\n            summary=\"Run flow\",\n        )\n\n        def customize_openapi():\n            if webserver.openapi_schema:\n                return webserver.openapi_schema\n\n            openapi_schema = inject_schemas_into_openapi(webserver, deployment_schemas)\n            webserver.openapi_schema = openapi_schema\n            return webserver.openapi_schema\n\n        webserver.openapi = customize_openapi\n\n    return webserver\n
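
    A minimal sketch (not from the source) of serving the app returned by build_server with uvicorn; the host and port are arbitrary, and start_webserver below wraps this same pattern using the runner server settings.

    import uvicorn\n\nfrom prefect import Runner\nfrom prefect.runner.server import build_server\n\nrunner = Runner(name=\"api-runner\")\n# build_server is sync-compatible, so it can be called directly in a sync context\nwebserver = build_server(runner)\nuvicorn.run(webserver, host=\"127.0.0.1\", port=8080)\n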
    ","tags":["Python API","server"]},{"location":"api-ref/prefect/runner/server/#prefect.runner.server.get_subflow_schemas","title":"get_subflow_schemas async","text":"

    Load available subflow schemas by filtering for only those subflows in the deployment entrypoint's import space.

    Source code in prefect/runner/server.py
    async def get_subflow_schemas(runner: \"Runner\") -> Dict[str, Dict]:\n    \"\"\"\n    Load available subflow schemas by filtering for only those subflows in the\n    deployment entrypoint's import space.\n    \"\"\"\n    schemas = {}\n    async with get_client() as client:\n        for deployment_id in runner._deployment_ids:\n            deployment = await client.read_deployment(deployment_id)\n            if deployment.entrypoint is None:\n                continue\n\n            script = deployment.entrypoint.split(\":\")[0]\n            subflows = load_flows_from_script(script)\n            for flow in subflows:\n                schemas[flow.name] = flow.parameters.dict()\n\n    return schemas\n
    ","tags":["Python API","server"]},{"location":"api-ref/prefect/runner/server/#prefect.runner.server.start_webserver","title":"start_webserver","text":"

    Run a FastAPI server for a runner.

    Parameters:

    Name Type Description Default runner Runner

    the runner this server interacts with and monitors

    required log_level str

    the log level to use for the server

    None Source code in prefect/runner/server.py
    def start_webserver(runner: \"Runner\", log_level: Optional[str] = None) -> None:\n    \"\"\"\n    Run a FastAPI server for a runner.\n\n    Args:\n        runner (Runner): the runner this server interacts with and monitors\n        log_level (str): the log level to use for the server\n    \"\"\"\n    host = PREFECT_RUNNER_SERVER_HOST.value()\n    port = PREFECT_RUNNER_SERVER_PORT.value()\n    log_level = log_level or PREFECT_RUNNER_SERVER_LOG_LEVEL.value()\n    webserver = build_server(runner)\n    uvicorn.run(webserver, host=host, port=port, log_level=log_level)\n
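
    In practice the runner usually starts this server itself; a hedged sketch (not from the source), with an illustrative flow and runner name, where passing webserver=True to Runner.start launches the server in a background thread using the PREFECT_RUNNER_SERVER_HOST and PREFECT_RUNNER_SERVER_PORT settings.

    from prefect import flow, Runner\n\n@flow\ndef ping():\n    print(\"pong\")\n\nif __name__ == \"__main__\":\n    runner = Runner(name=\"webserver-runner\")\n    runner.add_flow(ping)\n    # Starts the FastAPI server in a daemon thread alongside the polling loops\n    runner.start(webserver=True)\n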
    ","tags":["Python API","server"]},{"location":"api-ref/prefect/runner/storage/","title":"storage","text":"","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage","title":"prefect.runner.storage","text":"","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.BlockStorageAdapter","title":"BlockStorageAdapter","text":"

    A storage adapter that allows a storage block object to be used as a runner storage object.

    Source code in prefect/runner/storage.py
    class BlockStorageAdapter:\n    \"\"\"\n    A storage adapter for a storage block object to allow it to be used as a\n    runner storage object.\n    \"\"\"\n\n    def __init__(\n        self,\n        block: Union[ReadableDeploymentStorage, WritableDeploymentStorage],\n        pull_interval: Optional[int] = 60,\n    ):\n        self._block = block\n        self._pull_interval = pull_interval\n        self._storage_base_path = Path.cwd()\n        if not isinstance(block, Block):\n            raise TypeError(\n                f\"Expected a block object. Received a {type(block).__name__!r} object.\"\n            )\n        if not hasattr(block, \"get_directory\"):\n            raise ValueError(\"Provided block must have a `get_directory` method.\")\n\n        self._name = (\n            f\"{block.get_block_type_slug()}-{block._block_document_name}\"\n            if block._block_document_name\n            else str(uuid4())\n        )\n\n    def set_base_path(self, path: Path):\n        self._storage_base_path = path\n\n    @property\n    def pull_interval(self) -> Optional[int]:\n        return self._pull_interval\n\n    @property\n    def destination(self) -> Path:\n        return self._storage_base_path / self._name\n\n    async def pull_code(self):\n        if not self.destination.exists():\n            self.destination.mkdir(parents=True, exist_ok=True)\n        await self._block.get_directory(local_path=str(self.destination))\n\n    def to_pull_step(self) -> dict:\n        # Give blocks the change to implement their own pull step\n        if hasattr(self._block, \"get_pull_step\"):\n            return self._block.get_pull_step()\n        else:\n            if not self._block._block_document_name:\n                raise BlockNotSavedError(\n                    \"Block must be saved with `.save()` before it can be converted to a\"\n                    \" pull step.\"\n                )\n            return {\n                \"prefect.deployments.steps.pull_with_block\": {\n                    \"block_type_slug\": self._block.get_block_type_slug(),\n                    \"block_document_name\": self._block._block_document_name,\n                }\n            }\n\n    def __eq__(self, __value) -> bool:\n        if isinstance(__value, BlockStorageAdapter):\n            return self._block == __value._block\n        return False\n
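
    A hedged sketch (not from the source): wrapping a filesystem block so a runner can pull flow code from it. LocalFileSystem and the base path are illustrative; any block exposing a get_directory method should work.

    from prefect.filesystems import LocalFileSystem\nfrom prefect.runner.storage import BlockStorageAdapter\n\nblock = LocalFileSystem(basepath=\"/tmp/flows\")\n# Poll the block for updated flow code every two minutes\nstorage = BlockStorageAdapter(block, pull_interval=120)\n\nawait storage.pull_code()\n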
    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.GitRepository","title":"GitRepository","text":"

    Pulls the contents of a git repository to the local filesystem.

    Parameters:

    Name Type Description Default url str

    The URL of the git repository to pull from

    required credentials Union[GitCredentials, Block, Dict[str, Any], None]

    A dictionary of credentials to use when pulling from the repository. If a username is provided, an access token must also be provided.

    None name Optional[str]

    The name of the repository. If not provided, the name will be inferred from the repository URL.

    None branch Optional[str]

    The branch to pull from. Defaults to \"main\".

    None pull_interval Optional[int]

    The interval in seconds at which to pull contents from remote storage to local storage. If None, remote storage will perform a one-time sync.

    60

    Examples:

    Pull the contents of a private git repository to the local filesystem:

    from prefect.runner.storage import GitRepository\n\nstorage = GitRepository(\n    url=\"https://github.com/org/repo.git\",\n    credentials={\"username\": \"oauth2\", \"access_token\": \"my-access-token\"},\n)\n\nawait storage.pull_code()\n
    Source code in prefect/runner/storage.py
    class GitRepository:\n    \"\"\"\n    Pulls the contents of a git repository to the local filesystem.\n\n    Parameters:\n        url: The URL of the git repository to pull from\n        credentials: A dictionary of credentials to use when pulling from the\n            repository. If a username is provided, an access token must also be\n            provided.\n        name: The name of the repository. If not provided, the name will be\n            inferred from the repository URL.\n        branch: The branch to pull from. Defaults to \"main\".\n        pull_interval: The interval in seconds at which to pull contents from\n            remote storage to local storage. If None, remote storage will perform\n            a one-time sync.\n\n    Examples:\n        Pull the contents of a private git repository to the local filesystem:\n\n        ```python\n        from prefect.runner.storage import GitRepository\n\n        storage = GitRepository(\n            url=\"https://github.com/org/repo.git\",\n            credentials={\"username\": \"oauth2\", \"access_token\": \"my-access-token\"},\n        )\n\n        await storage.pull_code()\n        ```\n    \"\"\"\n\n    def __init__(\n        self,\n        url: str,\n        credentials: Union[GitCredentials, Block, Dict[str, Any], None] = None,\n        name: Optional[str] = None,\n        branch: Optional[str] = None,\n        include_submodules: bool = False,\n        pull_interval: Optional[int] = 60,\n    ):\n        if credentials is None:\n            credentials = {}\n\n        if (\n            isinstance(credentials, dict)\n            and credentials.get(\"username\")\n            and not (credentials.get(\"access_token\") or credentials.get(\"password\"))\n        ):\n            raise ValueError(\n                \"If a username is provided, an access token or password must also be\"\n                \" provided.\"\n            )\n        self._url = url\n        self._branch = branch\n        self._credentials = credentials\n        self._include_submodules = include_submodules\n        repo_name = urlparse(url).path.split(\"/\")[-1].replace(\".git\", \"\")\n        default_name = f\"{repo_name}-{branch}\" if branch else repo_name\n        self._name = name or default_name\n        self._logger = get_logger(f\"runner.storage.git-repository.{self._name}\")\n        self._storage_base_path = Path.cwd()\n        self._pull_interval = pull_interval\n\n    @property\n    def destination(self) -> Path:\n        return self._storage_base_path / self._name\n\n    def set_base_path(self, path: Path):\n        self._storage_base_path = path\n\n    @property\n    def pull_interval(self) -> Optional[int]:\n        return self._pull_interval\n\n    @property\n    def _repository_url_with_credentials(self) -> str:\n        if not self._credentials:\n            return self._url\n\n        url_components = urlparse(self._url)\n\n        credentials = (\n            self._credentials.dict()\n            if isinstance(self._credentials, Block)\n            else deepcopy(self._credentials)\n        )\n\n        for k, v in credentials.items():\n            if isinstance(v, Secret):\n                credentials[k] = v.get()\n            elif isinstance(v, SecretStr):\n                credentials[k] = v.get_secret_value()\n\n        formatted_credentials = _format_token_from_credentials(\n            urlparse(self._url).netloc, credentials\n        )\n        if url_components.scheme == \"https\" and formatted_credentials is not None:\n            
updated_components = url_components._replace(\n                netloc=f\"{formatted_credentials}@{url_components.netloc}\"\n            )\n            repository_url = urlunparse(updated_components)\n        else:\n            repository_url = self._url\n\n        return repository_url\n\n    async def pull_code(self):\n        \"\"\"\n        Pulls the contents of the configured repository to the local filesystem.\n        \"\"\"\n        self._logger.debug(\n            \"Pulling contents from repository '%s' to '%s'...\",\n            self._name,\n            self.destination,\n        )\n\n        git_dir = self.destination / \".git\"\n\n        if git_dir.exists():\n            # Check if the existing repository matches the configured repository\n            result = await run_process(\n                [\"git\", \"config\", \"--get\", \"remote.origin.url\"],\n                cwd=str(self.destination),\n            )\n            existing_repo_url = None\n            if result.stdout is not None:\n                existing_repo_url = _strip_auth_from_url(result.stdout.decode().strip())\n\n            if existing_repo_url != self._url:\n                raise ValueError(\n                    f\"The existing repository at {str(self.destination)} \"\n                    f\"does not match the configured repository {self._url}\"\n                )\n\n            self._logger.debug(\"Pulling latest changes from origin/%s\", self._branch)\n            # Update the existing repository\n            cmd = [\"git\", \"pull\", \"origin\"]\n            if self._branch:\n                cmd += [self._branch]\n            if self._include_submodules:\n                cmd += [\"--recurse-submodules\"]\n            cmd += [\"--depth\", \"1\"]\n            try:\n                await run_process(cmd, cwd=self.destination)\n                self._logger.debug(\"Successfully pulled latest changes\")\n            except subprocess.CalledProcessError as exc:\n                self._logger.error(\n                    f\"Failed to pull latest changes with exit code {exc}\"\n                )\n                shutil.rmtree(self.destination)\n                await self._clone_repo()\n\n        else:\n            await self._clone_repo()\n\n    async def _clone_repo(self):\n        \"\"\"\n        Clones the repository into the local destination.\n        \"\"\"\n        self._logger.debug(\"Cloning repository %s\", self._url)\n\n        repository_url = self._repository_url_with_credentials\n\n        cmd = [\n            \"git\",\n            \"clone\",\n            repository_url,\n        ]\n        if self._branch:\n            cmd += [\"--branch\", self._branch]\n        if self._include_submodules:\n            cmd += [\"--recurse-submodules\"]\n\n        # Limit git history and set path to clone to\n        cmd += [\"--depth\", \"1\", str(self.destination)]\n\n        try:\n            await run_process(cmd)\n        except subprocess.CalledProcessError as exc:\n            # Hide the command used to avoid leaking the access token\n            exc_chain = None if self._credentials else exc\n            raise RuntimeError(\n                f\"Failed to clone repository {self._url!r} with exit code\"\n                f\" {exc.returncode}.\"\n            ) from exc_chain\n\n    def __eq__(self, __value) -> bool:\n        if isinstance(__value, GitRepository):\n            return (\n                self._url == __value._url\n                and self._branch == __value._branch\n                and self._name == 
__value._name\n            )\n        return False\n\n    def __repr__(self) -> str:\n        return (\n            f\"GitRepository(name={self._name!r} repository={self._url!r},\"\n            f\" branch={self._branch!r})\"\n        )\n\n    def to_pull_step(self) -> Dict:\n        pull_step = {\n            \"prefect.deployments.steps.git_clone\": {\n                \"repository\": self._url,\n                \"branch\": self._branch,\n            }\n        }\n        if isinstance(self._credentials, Block):\n            pull_step[\"prefect.deployments.steps.git_clone\"][\n                \"credentials\"\n            ] = f\"{{{{ {self._credentials.get_block_placeholder()} }}}}\"\n        elif isinstance(self._credentials, dict):\n            if isinstance(self._credentials.get(\"access_token\"), Secret):\n                pull_step[\"prefect.deployments.steps.git_clone\"][\"credentials\"] = {\n                    **self._credentials,\n                    \"access_token\": (\n                        \"{{\"\n                        f\" {self._credentials['access_token'].get_block_placeholder()} }}}}\"\n                    ),\n                }\n            elif self._credentials.get(\"access_token\") is not None:\n                raise ValueError(\n                    \"Please save your access token as a Secret block before converting\"\n                    \" this storage object to a pull step.\"\n                )\n\n        return pull_step\n
    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.GitRepository.pull_code","title":"pull_code async","text":"

    Pulls the contents of the configured repository to the local filesystem.

    Source code in prefect/runner/storage.py
    async def pull_code(self):\n    \"\"\"\n    Pulls the contents of the configured repository to the local filesystem.\n    \"\"\"\n    self._logger.debug(\n        \"Pulling contents from repository '%s' to '%s'...\",\n        self._name,\n        self.destination,\n    )\n\n    git_dir = self.destination / \".git\"\n\n    if git_dir.exists():\n        # Check if the existing repository matches the configured repository\n        result = await run_process(\n            [\"git\", \"config\", \"--get\", \"remote.origin.url\"],\n            cwd=str(self.destination),\n        )\n        existing_repo_url = None\n        if result.stdout is not None:\n            existing_repo_url = _strip_auth_from_url(result.stdout.decode().strip())\n\n        if existing_repo_url != self._url:\n            raise ValueError(\n                f\"The existing repository at {str(self.destination)} \"\n                f\"does not match the configured repository {self._url}\"\n            )\n\n        self._logger.debug(\"Pulling latest changes from origin/%s\", self._branch)\n        # Update the existing repository\n        cmd = [\"git\", \"pull\", \"origin\"]\n        if self._branch:\n            cmd += [self._branch]\n        if self._include_submodules:\n            cmd += [\"--recurse-submodules\"]\n        cmd += [\"--depth\", \"1\"]\n        try:\n            await run_process(cmd, cwd=self.destination)\n            self._logger.debug(\"Successfully pulled latest changes\")\n        except subprocess.CalledProcessError as exc:\n            self._logger.error(\n                f\"Failed to pull latest changes with exit code {exc}\"\n            )\n            shutil.rmtree(self.destination)\n            await self._clone_repo()\n\n    else:\n        await self._clone_repo()\n
    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RemoteStorage","title":"RemoteStorage","text":"

    Pulls the contents of a remote storage location to the local filesystem.

    Parameters:

    Name Type Description Default url str

    The URL of the remote storage location to pull from. Supports fsspec URLs. Some protocols may require an additional fsspec dependency to be installed. Refer to the fsspec docs for more details.

    required pull_interval Optional[int]

    The interval in seconds at which to pull contents from remote storage to local storage. If None, remote storage will perform a one-time sync.

    60 **settings Any

    Any additional settings to pass to the fsspec filesystem class.

    {}

    Examples:

    Pull the contents of a remote storage location to the local filesystem:

    from prefect.runner.storage import RemoteStorage\n\nstorage = RemoteStorage(url=\"s3://my-bucket/my-folder\")\n\nawait storage.pull_code()\n

    Pull the contents of a remote storage location to the local filesystem with additional settings:

    from prefect.runner.storage import RemoteStorage\nfrom prefect.blocks.system import Secret\n\nstorage = RemoteStorage(\n    url=\"s3://my-bucket/my-folder\",\n    # Use Secret blocks to keep credentials out of your code\n    key=Secret.load(\"my-aws-access-key\"),\n    secret=Secret.load(\"my-aws-secret-key\"),\n)\n\nawait storage.pull_code()\n
    Source code in prefect/runner/storage.py
    class RemoteStorage:\n    \"\"\"\n    Pulls the contents of a remote storage location to the local filesystem.\n\n    Parameters:\n        url: The URL of the remote storage location to pull from. Supports\n            `fsspec` URLs. Some protocols may require an additional `fsspec`\n            dependency to be installed. Refer to the\n            [`fsspec` docs](https://filesystem-spec.readthedocs.io/en/latest/api.html#other-known-implementations)\n            for more details.\n        pull_interval: The interval in seconds at which to pull contents from\n            remote storage to local storage. If None, remote storage will perform\n            a one-time sync.\n        **settings: Any additional settings to pass the `fsspec` filesystem class.\n\n    Examples:\n        Pull the contents of a remote storage location to the local filesystem:\n\n        ```python\n        from prefect.runner.storage import RemoteStorage\n\n        storage = RemoteStorage(url=\"s3://my-bucket/my-folder\")\n\n        await storage.pull_code()\n        ```\n\n        Pull the contents of a remote storage location to the local filesystem\n        with additional settings:\n\n        ```python\n        from prefect.runner.storage import RemoteStorage\n        from prefect.blocks.system import Secret\n\n        storage = RemoteStorage(\n            url=\"s3://my-bucket/my-folder\",\n            # Use Secret blocks to keep credentials out of your code\n            key=Secret.load(\"my-aws-access-key\"),\n            secret=Secret.load(\"my-aws-secret-key\"),\n        )\n\n        await storage.pull_code()\n        ```\n    \"\"\"\n\n    def __init__(\n        self,\n        url: str,\n        pull_interval: Optional[int] = 60,\n        **settings: Any,\n    ):\n        self._url = url\n        self._settings = settings\n        self._logger = get_logger(\"runner.storage.remote-storage\")\n        self._storage_base_path = Path.cwd()\n        self._pull_interval = pull_interval\n\n    @staticmethod\n    def _get_required_package_for_scheme(scheme: str) -> Optional[str]:\n        # attempt to discover the package name for the given scheme\n        # from fsspec's registry\n        known_implementation = fsspec.registry.get(scheme)\n        if known_implementation:\n            return known_implementation.__module__.split(\".\")[0]\n        # if we don't know the implementation, try to guess it for some\n        # common schemes\n        elif scheme == \"s3\":\n            return \"s3fs\"\n        elif scheme == \"gs\" or scheme == \"gcs\":\n            return \"gcsfs\"\n        elif scheme == \"abfs\" or scheme == \"az\":\n            return \"adlfs\"\n        else:\n            return None\n\n    @property\n    def _filesystem(self) -> fsspec.AbstractFileSystem:\n        scheme, _, _, _, _ = urlsplit(self._url)\n\n        def replace_blocks_with_values(obj: Any) -> Any:\n            if isinstance(obj, Block):\n                if hasattr(obj, \"get\"):\n                    return obj.get()\n                if hasattr(obj, \"value\"):\n                    return obj.value\n                else:\n                    return obj.dict()\n            return obj\n\n        settings_with_block_values = visit_collection(\n            self._settings, replace_blocks_with_values, return_data=True\n        )\n\n        return fsspec.filesystem(scheme, **settings_with_block_values)\n\n    def set_base_path(self, path: Path):\n        self._storage_base_path = path\n\n    @property\n    def pull_interval(self) -> 
Optional[int]:\n        \"\"\"\n        The interval at which contents from remote storage should be pulled to\n        local storage. If None, remote storage will perform a one-time sync.\n        \"\"\"\n        return self._pull_interval\n\n    @property\n    def destination(self) -> Path:\n        \"\"\"\n        The local file path to pull contents from remote storage to.\n        \"\"\"\n        return self._storage_base_path / self._remote_path\n\n    @property\n    def _remote_path(self) -> Path:\n        \"\"\"\n        The remote file path to pull contents from remote storage to.\n        \"\"\"\n        _, netloc, urlpath, _, _ = urlsplit(self._url)\n        return Path(netloc) / Path(urlpath.lstrip(\"/\"))\n\n    async def pull_code(self):\n        \"\"\"\n        Pulls contents from remote storage to the local filesystem.\n        \"\"\"\n        self._logger.debug(\n            \"Pulling contents from remote storage '%s' to '%s'...\",\n            self._url,\n            self.destination,\n        )\n\n        if not self.destination.exists():\n            self.destination.mkdir(parents=True, exist_ok=True)\n\n        remote_path = str(self._remote_path) + \"/\"\n\n        try:\n            await from_async.wait_for_call_in_new_thread(\n                create_call(\n                    self._filesystem.get,\n                    remote_path,\n                    str(self.destination),\n                    recursive=True,\n                )\n            )\n        except Exception as exc:\n            raise RuntimeError(\n                f\"Failed to pull contents from remote storage {self._url!r} to\"\n                f\" {self.destination!r}\"\n            ) from exc\n\n    def to_pull_step(self) -> dict:\n        \"\"\"\n        Returns a dictionary representation of the storage object that can be\n        used as a deployment pull step.\n        \"\"\"\n\n        def replace_block_with_placeholder(obj: Any) -> Any:\n            if isinstance(obj, Block):\n                return f\"{{{{ {obj.get_block_placeholder()} }}}}\"\n            return obj\n\n        settings_with_placeholders = visit_collection(\n            self._settings, replace_block_with_placeholder, return_data=True\n        )\n        required_package = self._get_required_package_for_scheme(\n            urlparse(self._url).scheme\n        )\n        step = {\n            \"prefect.deployments.steps.pull_from_remote_storage\": {\n                \"url\": self._url,\n                **settings_with_placeholders,\n            }\n        }\n        if required_package:\n            step[\"prefect.deployments.steps.pull_from_remote_storage\"][\n                \"requires\"\n            ] = required_package\n        return step\n\n    def __eq__(self, __value) -> bool:\n        \"\"\"\n        Equality check for runner storage objects.\n        \"\"\"\n        if isinstance(__value, RemoteStorage):\n            return self._url == __value._url and self._settings == __value._settings\n        return False\n\n    def __repr__(self) -> str:\n        return f\"RemoteStorage(url={self._url!r})\"\n
    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RemoteStorage.destination","title":"destination: Path property","text":"

    The local file path to pull contents from remote storage to.

    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RemoteStorage.pull_interval","title":"pull_interval: Optional[int] property","text":"

    The interval at which contents from remote storage should be pulled to local storage. If None, remote storage will perform a one-time sync.

    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RemoteStorage.pull_code","title":"pull_code async","text":"

    Pulls contents from remote storage to the local filesystem.

    Source code in prefect/runner/storage.py
    async def pull_code(self):\n    \"\"\"\n    Pulls contents from remote storage to the local filesystem.\n    \"\"\"\n    self._logger.debug(\n        \"Pulling contents from remote storage '%s' to '%s'...\",\n        self._url,\n        self.destination,\n    )\n\n    if not self.destination.exists():\n        self.destination.mkdir(parents=True, exist_ok=True)\n\n    remote_path = str(self._remote_path) + \"/\"\n\n    try:\n        await from_async.wait_for_call_in_new_thread(\n            create_call(\n                self._filesystem.get,\n                remote_path,\n                str(self.destination),\n                recursive=True,\n            )\n        )\n    except Exception as exc:\n        raise RuntimeError(\n            f\"Failed to pull contents from remote storage {self._url!r} to\"\n            f\" {self.destination!r}\"\n        ) from exc\n
    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RemoteStorage.to_pull_step","title":"to_pull_step","text":"

    Returns a dictionary representation of the storage object that can be used as a deployment pull step.

    Source code in prefect/runner/storage.py
    def to_pull_step(self) -> dict:\n    \"\"\"\n    Returns a dictionary representation of the storage object that can be\n    used as a deployment pull step.\n    \"\"\"\n\n    def replace_block_with_placeholder(obj: Any) -> Any:\n        if isinstance(obj, Block):\n            return f\"{{{{ {obj.get_block_placeholder()} }}}}\"\n        return obj\n\n    settings_with_placeholders = visit_collection(\n        self._settings, replace_block_with_placeholder, return_data=True\n    )\n    required_package = self._get_required_package_for_scheme(\n        urlparse(self._url).scheme\n    )\n    step = {\n        \"prefect.deployments.steps.pull_from_remote_storage\": {\n            \"url\": self._url,\n            **settings_with_placeholders,\n        }\n    }\n    if required_package:\n        step[\"prefect.deployments.steps.pull_from_remote_storage\"][\n            \"requires\"\n        ] = required_package\n    return step\n
    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage","title":"RunnerStorage","text":"

    Bases: Protocol

    A storage interface for a runner to use to retrieve remotely stored flow code.

    Source code in prefect/runner/storage.py
    @runtime_checkable\nclass RunnerStorage(Protocol):\n    \"\"\"\n    A storage interface for a runner to use to retrieve\n    remotely stored flow code.\n    \"\"\"\n\n    def set_base_path(self, path: Path):\n        \"\"\"\n        Sets the base path to use when pulling contents from remote storage to\n        local storage.\n        \"\"\"\n        ...\n\n    @property\n    def pull_interval(self) -> Optional[int]:\n        \"\"\"\n        The interval at which contents from remote storage should be pulled to\n        local storage. If None, remote storage will perform a one-time sync.\n        \"\"\"\n        ...\n\n    @property\n    def destination(self) -> Path:\n        \"\"\"\n        The local file path to pull contents from remote storage to.\n        \"\"\"\n        ...\n\n    async def pull_code(self):\n        \"\"\"\n        Pulls contents from remote storage to the local filesystem.\n        \"\"\"\n        ...\n\n    def to_pull_step(self) -> dict:\n        \"\"\"\n        Returns a dictionary representation of the storage object that can be\n        used as a deployment pull step.\n        \"\"\"\n        ...\n\n    def __eq__(self, __value) -> bool:\n        \"\"\"\n        Equality check for runner storage objects.\n        \"\"\"\n        ...\n
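
    For illustration only, a hypothetical no-op implementation that satisfies the protocol; the class name and the set_working_directory pull step are assumptions, not part of the source.

    from pathlib import Path\nfrom typing import Optional\n\nfrom prefect.runner.storage import RunnerStorage\n\nclass LocalOnlyStorage:\n    \"\"\"Hypothetical storage that assumes flow code is already on the local filesystem.\"\"\"\n\n    def __init__(self, path: str = \".\"):\n        self._base = Path(path)\n\n    def set_base_path(self, path: Path):\n        self._base = path\n\n    @property\n    def pull_interval(self) -> Optional[int]:\n        return None  # one-time sync only\n\n    @property\n    def destination(self) -> Path:\n        return self._base\n\n    async def pull_code(self):\n        pass  # nothing to pull; code already lives locally\n\n    def to_pull_step(self) -> dict:\n        return {\n            \"prefect.deployments.steps.set_working_directory\": {\n                \"directory\": str(self._base)\n            }\n        }\n\n# RunnerStorage is runtime-checkable, so structural conformance can be verified\nassert isinstance(LocalOnlyStorage(), RunnerStorage)\n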
    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage.destination","title":"destination: Path property","text":"

    The local file path to pull contents from remote storage to.

    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage.pull_interval","title":"pull_interval: Optional[int] property","text":"

    The interval at which contents from remote storage should be pulled to local storage. If None, remote storage will perform a one-time sync.

    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage.pull_code","title":"pull_code async","text":"

    Pulls contents from remote storage to the local filesystem.

    Source code in prefect/runner/storage.py
    async def pull_code(self):\n    \"\"\"\n    Pulls contents from remote storage to the local filesystem.\n    \"\"\"\n    ...\n
    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage.set_base_path","title":"set_base_path","text":"

    Sets the base path to use when pulling contents from remote storage to local storage.

    Source code in prefect/runner/storage.py
    def set_base_path(self, path: Path):\n    \"\"\"\n    Sets the base path to use when pulling contents from remote storage to\n    local storage.\n    \"\"\"\n    ...\n
    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.RunnerStorage.to_pull_step","title":"to_pull_step","text":"

    Returns a dictionary representation of the storage object that can be used as a deployment pull step.

    Source code in prefect/runner/storage.py
    def to_pull_step(self) -> dict:\n    \"\"\"\n    Returns a dictionary representation of the storage object that can be\n    used as a deployment pull step.\n    \"\"\"\n    ...\n
    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/storage/#prefect.runner.storage.create_storage_from_url","title":"create_storage_from_url","text":"

    Creates a storage object from a URL.

    Parameters:

    Name Type Description Default url str

    The URL to create a storage object from. Supports git and fsspec URLs.

    required pull_interval Optional[int]

    The interval at which to pull contents from remote storage to local storage

    60

    Returns:

    Name Type Description RunnerStorage RunnerStorage

    A runner storage compatible object

    Source code in prefect/runner/storage.py
    def create_storage_from_url(\n    url: str, pull_interval: Optional[int] = 60\n) -> RunnerStorage:\n    \"\"\"\n    Creates a storage object from a URL.\n\n    Args:\n        url: The URL to create a storage object from. Supports git and `fsspec`\n            URLs.\n        pull_interval: The interval at which to pull contents from remote storage to\n            local storage\n\n    Returns:\n        RunnerStorage: A runner storage compatible object\n    \"\"\"\n    parsed_url = urlparse(url)\n    if parsed_url.scheme == \"git\" or parsed_url.path.endswith(\".git\"):\n        return GitRepository(url=url, pull_interval=pull_interval)\n    else:\n        return RemoteStorage(url=url, pull_interval=pull_interval)\n
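    Example usage (a minimal sketch; the repository and bucket URLs shown are hypothetical):
    from prefect.runner.storage import create_storage_from_url\n\n# URLs with a git scheme or a path ending in .git produce a GitRepository\ngit_storage = create_storage_from_url(\"https://github.com/org/repo.git\")\n\n# any other URL falls back to RemoteStorage backed by fsspec\nbucket_storage = create_storage_from_url(\"s3://my-bucket/flows\", pull_interval=120)\n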
    ","tags":["Python API","runner"]},{"location":"api-ref/prefect/runner/utils/","title":"utils","text":"","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runner/utils/#prefect.runner.utils","title":"prefect.runner.utils","text":"","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runner/utils/#prefect.runner.utils.inject_schemas_into_openapi","title":"inject_schemas_into_openapi","text":"

    Augments the webserver's OpenAPI schema with additional schemas from deployments / flows / tasks.

    Parameters:

    Name Type Description Default webserver FastAPI

    The FastAPI instance representing the webserver.

    required schemas_to_inject Dict[str, Any]

    A dictionary of OpenAPI schemas to integrate.

    required

    Returns:

    Type Description Dict[str, Any]

    The augmented OpenAPI schema dictionary.

    Source code in prefect/runner/utils.py
    def inject_schemas_into_openapi(\n    webserver: FastAPI, schemas_to_inject: Dict[str, Any]\n) -> Dict[str, Any]:\n    \"\"\"\n    Augments the webserver's OpenAPI schema with additional schemas from deployments / flows / tasks.\n\n    Args:\n        webserver: The FastAPI instance representing the webserver.\n        schemas_to_inject: A dictionary of OpenAPI schemas to integrate.\n\n    Returns:\n        The augmented OpenAPI schema dictionary.\n    \"\"\"\n    openapi_schema = get_openapi(\n        title=\"FastAPI Prefect Runner\", version=PREFECT_VERSION, routes=webserver.routes\n    )\n\n    augmented_schema = merge_definitions(schemas_to_inject, openapi_schema)\n    return update_refs_to_components(augmented_schema)\n
    ","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runner/utils/#prefect.runner.utils.merge_definitions","title":"merge_definitions","text":"

    Integrates definitions from injected schemas into the OpenAPI components.

    Parameters:

    Name Type Description Default injected_schemas Dict[str, Any]

    A dictionary of deployment-specific schemas.

    required openapi_schema Dict[str, Any]

    The base OpenAPI schema to update.

    required Source code in prefect/runner/utils.py
    def merge_definitions(\n    injected_schemas: Dict[str, Any], openapi_schema: Dict[str, Any]\n) -> Dict[str, Any]:\n    \"\"\"\n    Integrates definitions from injected schemas into the OpenAPI components.\n\n    Args:\n        injected_schemas: A dictionary of deployment-specific schemas.\n        openapi_schema: The base OpenAPI schema to update.\n    \"\"\"\n    openapi_schema_copy = deepcopy(openapi_schema)\n    components = openapi_schema_copy.setdefault(\"components\", {}).setdefault(\n        \"schemas\", {}\n    )\n    for definitions in injected_schemas.values():\n        if \"definitions\" in definitions:\n            for def_name, def_schema in definitions[\"definitions\"].items():\n                def_schema_copy = deepcopy(def_schema)\n                update_refs_in_schema(def_schema_copy, \"#/components/schemas/\")\n                components[def_name] = def_schema_copy\n    return openapi_schema_copy\n
    ","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runner/utils/#prefect.runner.utils.update_refs_in_schema","title":"update_refs_in_schema","text":"

    Recursively replaces $ref with a new reference base in a schema item.

    Parameters:

    Name Type Description Default schema_item Any

    A schema or part of a schema to update references in.

    required new_ref str

    The new base string to replace in $ref values.

    required Source code in prefect/runner/utils.py
    def update_refs_in_schema(schema_item: Any, new_ref: str) -> None:\n    \"\"\"\n    Recursively replaces `$ref` with a new reference base in a schema item.\n\n    Args:\n        schema_item: A schema or part of a schema to update references in.\n        new_ref: The new base string to replace in `$ref` values.\n    \"\"\"\n    if isinstance(schema_item, dict):\n        if \"$ref\" in schema_item:\n            schema_item[\"$ref\"] = schema_item[\"$ref\"].replace(\"#/definitions/\", new_ref)\n        for value in schema_item.values():\n            update_refs_in_schema(value, new_ref)\n    elif isinstance(schema_item, list):\n        for item in schema_item:\n            update_refs_in_schema(item, new_ref)\n
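    For example, a small sketch showing how definition references are rewritten in place:
    from prefect.runner.utils import update_refs_in_schema\n\nschema = {\"properties\": {\"config\": {\"$ref\": \"#/definitions/Config\"}}}\nupdate_refs_in_schema(schema, \"#/components/schemas/\")\n# schema[\"properties\"][\"config\"][\"$ref\"] is now \"#/components/schemas/Config\"\n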
    ","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runner/utils/#prefect.runner.utils.update_refs_to_components","title":"update_refs_to_components","text":"

    Updates all $ref fields in the OpenAPI schema to reference the components section.

    Parameters:

    Name Type Description Default openapi_schema Dict[str, Any]

    The OpenAPI schema to modify $ref fields in.

    required Source code in prefect/runner/utils.py
    def update_refs_to_components(openapi_schema: Dict[str, Any]) -> Dict[str, Any]:\n    \"\"\"\n    Updates all `$ref` fields in the OpenAPI schema to reference the components section.\n\n    Args:\n        openapi_schema: The OpenAPI schema to modify `$ref` fields in.\n    \"\"\"\n    for path_item in openapi_schema.get(\"paths\", {}).values():\n        for operation in path_item.values():\n            schema = (\n                operation.get(\"requestBody\", {})\n                .get(\"content\", {})\n                .get(\"application/json\", {})\n                .get(\"schema\", {})\n            )\n            update_refs_in_schema(schema, \"#/components/schemas/\")\n\n    for definition in openapi_schema.get(\"definitions\", {}).values():\n        update_refs_in_schema(definition, \"#/components/schemas/\")\n\n    return openapi_schema\n
    ","tags":["Python API","runner","utilities"]},{"location":"api-ref/prefect/runtime/deployment/","title":"deployment","text":"","tags":["Python API","deployment context","context"]},{"location":"api-ref/prefect/runtime/deployment/#prefect.runtime.deployment","title":"prefect.runtime.deployment","text":"

    Access attributes of the current deployment run dynamically.

    Note that if a deployment is not currently being run, all attributes will return empty values.

    You can mock the runtime attributes for testing purposes by setting environment variables prefixed with PREFECT__RUNTIME__DEPLOYMENT.

    Example usage
    from prefect.runtime import deployment\n\ndef get_task_runner():\n    task_runner_config = deployment.parameters.get(\"runner_config\", \"default config here\")\n    return DummyTaskRunner(task_runner_specs=task_runner_config)\n
    Available attributes
    • id: the deployment's unique ID
    • name: the deployment's name
    • version: the deployment's version
    • flow_run_id: the current flow run ID for this deployment
    • parameters: the parameters that were passed to this run; note that these do not necessarily include default values set on the flow function, only the parameter values set on the deployment object or those directly provided via API for this run
    ","tags":["Python API","deployment context","context"]},{"location":"api-ref/prefect/runtime/flow_run/","title":"flow_run","text":"","tags":["Python API","flow run context","context"]},{"location":"api-ref/prefect/runtime/flow_run/#prefect.runtime.flow_run","title":"prefect.runtime.flow_run","text":"

    Access attributes of the current flow run dynamically.

    Note that if a flow run cannot be discovered, all attributes will return empty values.

    You can mock the runtime attributes for testing purposes by setting environment variables prefixed with PREFECT__RUNTIME__FLOW_RUN.

    Available attributes
    • id: the flow run's unique ID
    • tags: the flow run's set of tags
    • scheduled_start_time: the flow run's expected scheduled start time; defaults to now if not present
    • name: the name of the flow run
    • flow_name: the name of the flow
    • parameters: the parameters that were passed to this run; note that these do not necessarily include default values set on the flow function, only the parameter values explicitly passed for the run
    • parent_flow_run_id: the ID of the flow run that triggered this run, if any
    • parent_deployment_id: the ID of the deployment that triggered this run, if any
    • run_count: the number of times this flow run has been run
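    For example, a minimal sketch of reading these attributes from inside a flow (the flow shown is illustrative):
    from prefect import flow\nfrom prefect.runtime import flow_run\n\n@flow\ndef report():\n    # empty values are returned if no flow run can be discovered\n    print(flow_run.flow_name, flow_run.name, flow_run.id)\n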
    ","tags":["Python API","flow run context","context"]},{"location":"api-ref/prefect/runtime/task_run/","title":"task_run","text":"","tags":["Python API","task run context","context","task run"]},{"location":"api-ref/prefect/runtime/task_run/#prefect.runtime.task_run","title":"prefect.runtime.task_run","text":"

    Access attributes of the current task run dynamically.

    Note that if a task run cannot be discovered, all attributes will return empty values.

    You can mock the runtime attributes for testing purposes by setting environment variables prefixed with PREFECT__RUNTIME__TASK_RUN.

    Available attributes
    • id: the task run's unique ID
    • name: the name of the task run
    • tags: the task run's set of tags
    • parameters: the parameters the task was called with
    • run_count: the number of times this task run has been run
    • task_name: the name of the task
    ","tags":["Python API","task run context","context","task run"]},{"location":"api-ref/prefect/utilities/annotations/","title":"annotations","text":"","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations","title":"prefect.utilities.annotations","text":"","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.BaseAnnotation","title":"BaseAnnotation","text":"

    Bases: namedtuple('BaseAnnotation', field_names='value'), ABC, Generic[T]

    Base class for Prefect annotation types.

    Inherits from namedtuple for unpacking support in other tools.

    Source code in prefect/utilities/annotations.py
    class BaseAnnotation(\n    namedtuple(\"BaseAnnotation\", field_names=\"value\"), ABC, Generic[T]\n):\n    \"\"\"\n    Base class for Prefect annotation types.\n\n    Inherits from `namedtuple` for unpacking support in another tools.\n    \"\"\"\n\n    def unwrap(self) -> T:\n        if sys.version_info < (3, 8):\n            # cannot simply return self.value due to recursion error in Python 3.7\n            # also _asdict does not follow convention; it's not an internal method\n            # https://stackoverflow.com/a/26180604\n            return self._asdict()[\"value\"]\n        else:\n            return self.value\n\n    def rewrap(self, value: T) -> \"BaseAnnotation[T]\":\n        return type(self)(value)\n\n    def __eq__(self, other: object) -> bool:\n        if not type(self) == type(other):\n            return False\n        return self.unwrap() == other.unwrap()\n\n    def __repr__(self) -> str:\n        return f\"{type(self).__name__}({self.value!r})\"\n
    ","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.NotSet","title":"NotSet","text":"

    Singleton to distinguish None from a value that is not provided by the user.

    Source code in prefect/utilities/annotations.py
    class NotSet:\n    \"\"\"\n    Singleton to distinguish `None` from a value that is not provided by the user.\n    \"\"\"\n
    ","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.allow_failure","title":"allow_failure","text":"

    Bases: BaseAnnotation[T]

    Wrapper for states or futures.

    Indicates that the upstream run for this input can be failed.

    Generally, Prefect will not allow a downstream run to start if any of its inputs are failed. This annotation allows you to opt into receiving a failed input downstream.

    If the input is from a failed run, the attached exception will be passed to your function.

    Source code in prefect/utilities/annotations.py
    class allow_failure(BaseAnnotation[T]):\n    \"\"\"\n    Wrapper for states or futures.\n\n    Indicates that the upstream run for this input can be failed.\n\n    Generally, Prefect will not allow a downstream run to start if any of its inputs\n    are failed. This annotation allows you to opt into receiving a failed input\n    downstream.\n\n    If the input is from a failed run, the attached exception will be passed to your\n    function.\n    \"\"\"\n
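    A minimal sketch of opting a downstream task into a failed upstream input (the tasks shown are hypothetical):
    from prefect import flow, task\nfrom prefect.utilities.annotations import allow_failure\n\n@task\ndef may_fail():\n    raise ValueError(\"boom\")\n\n@task\ndef handle(upstream_result):\n    # receives the exception from the failed upstream run\n    ...\n\n@flow\ndef my_flow():\n    future = may_fail.submit()\n    handle.submit(allow_failure(future))\n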
    ","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.quote","title":"quote","text":"

    Bases: BaseAnnotation[T]

    Simple wrapper to mark an expression as a different type so it will not be coerced by Prefect. For example, if you want to return a state from a flow without having the flow assume that state.

    quote will also instruct Prefect to skip introspection of the wrapped object when it is passed as a flow or task parameter. Parameter introspection can be a significant performance hit when the object is a large collection, e.g. a large dictionary or DataFrame, and each element needs to be visited. Wrapping the object disables task dependency tracking for it but will likely improve performance.

    @task\ndef my_task(df):\n    ...\n\n@flow\ndef my_flow():\n    my_task(quote(df))\n
    Source code in prefect/utilities/annotations.py
    class quote(BaseAnnotation[T]):\n    \"\"\"\n    Simple wrapper to mark an expression as a different type so it will not be coerced\n    by Prefect. For example, if you want to return a state from a flow without having\n    the flow assume that state.\n\n    quote will also instruct prefect to ignore introspection of the wrapped object\n    when passed as flow or task parameter. Parameter introspection can be a\n    significant performance hit when the object is a large collection,\n    e.g. a large dictionary or DataFrame, and each element needs to be visited. This\n    will disable task dependency tracking for the wrapped object, but likely will\n    increase performance.\n\n    ```\n    @task\n    def my_task(df):\n        ...\n\n    @flow\n    def my_flow():\n        my_task(quote(df))\n    ```\n    \"\"\"\n\n    def unquote(self) -> T:\n        return self.unwrap()\n
    ","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.unmapped","title":"unmapped","text":"

    Bases: BaseAnnotation[T]

    Wrapper for iterables.

    Indicates that this input should be sent as-is to all runs created during a mapping operation instead of being split.

    Source code in prefect/utilities/annotations.py
    class unmapped(BaseAnnotation[T]):\n    \"\"\"\n    Wrapper for iterables.\n\n    Indicates that this input should be sent as-is to all runs created during a mapping\n    operation instead of being split.\n    \"\"\"\n\n    def __getitem__(self, _) -> T:\n        # Internally, this acts as an infinite array where all items are the same value\n        return self.unwrap()\n
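    A minimal sketch of using unmapped with a mapped task (the tasks shown are hypothetical):
    from prefect import flow, task\nfrom prefect.utilities.annotations import unmapped\n\n@task\ndef add(x, y):\n    return x + y\n\n@flow\ndef my_flow():\n    # y is sent as-is to every mapped run instead of being split\n    return add.map([1, 2, 3], unmapped(10))\n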
    ","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/asyncutils/","title":"asyncutils","text":"","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils","title":"prefect.utilities.asyncutils","text":"

    Utilities for interoperability with async functions and workers from various contexts.

    ","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.GatherIncomplete","title":"GatherIncomplete","text":"

    Bases: RuntimeError

    Raised when a gather result is retrieved before the task has completed

    Source code in prefect/utilities/asyncutils.py
    class GatherIncomplete(RuntimeError):\n    \"\"\"Used to indicate retrieving gather results before completion\"\"\"\n
    ","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.GatherTaskGroup","title":"GatherTaskGroup","text":"

    Bases: TaskGroup

    A task group that gathers results.

    AnyIO does not include support for gather. This class extends the TaskGroup interface to allow simple gathering.

    See https://github.com/agronholm/anyio/issues/100

    This class should be instantiated with create_gather_task_group.

    Source code in prefect/utilities/asyncutils.py
    class GatherTaskGroup(anyio.abc.TaskGroup):\n    \"\"\"\n    A task group that gathers results.\n\n    AnyIO does not include support `gather`. This class extends the `TaskGroup`\n    interface to allow simple gathering.\n\n    See https://github.com/agronholm/anyio/issues/100\n\n    This class should be instantiated with `create_gather_task_group`.\n    \"\"\"\n\n    def __init__(self, task_group: anyio.abc.TaskGroup):\n        self._results: Dict[UUID, Any] = {}\n        # The concrete task group implementation to use\n        self._task_group: anyio.abc.TaskGroup = task_group\n\n    async def _run_and_store(self, key, fn, args):\n        self._results[key] = await fn(*args)\n\n    def start_soon(self, fn, *args) -> UUID:\n        key = uuid4()\n        # Put a placeholder in-case the result is retrieved earlier\n        self._results[key] = GatherIncomplete\n        self._task_group.start_soon(self._run_and_store, key, fn, args)\n        return key\n\n    async def start(self, fn, *args):\n        \"\"\"\n        Since `start` returns the result of `task_status.started()` but here we must\n        return the key instead, we just won't support this method for now.\n        \"\"\"\n        raise RuntimeError(\"`GatherTaskGroup` does not support `start`.\")\n\n    def get_result(self, key: UUID) -> Any:\n        result = self._results[key]\n        if result is GatherIncomplete:\n            raise GatherIncomplete(\n                \"Task is not complete. \"\n                \"Results should not be retrieved until the task group exits.\"\n            )\n        return result\n\n    async def __aenter__(self):\n        await self._task_group.__aenter__()\n        return self\n\n    async def __aexit__(self, *tb):\n        try:\n            retval = await self._task_group.__aexit__(*tb)\n            return retval\n        finally:\n            del self._task_group\n
    ","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.GatherTaskGroup.start","title":"start async","text":"

    start returns the result of task_status.started(), but here we must return the key instead, so this method is not supported for now.

    Source code in prefect/utilities/asyncutils.py
    async def start(self, fn, *args):\n    \"\"\"\n    Since `start` returns the result of `task_status.started()` but here we must\n    return the key instead, we just won't support this method for now.\n    \"\"\"\n    raise RuntimeError(\"`GatherTaskGroup` does not support `start`.\")\n
    ","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.add_event_loop_shutdown_callback","title":"add_event_loop_shutdown_callback async","text":"

    Registers the given callable as a callback to run on event loop closure. The callable must be a coroutine function and will be awaited when the current event loop is shutting down.

    Requires use of asyncio.run() which waits for async generator shutdown by default or explicit call of asyncio.shutdown_asyncgens(). If the application is entered with asyncio.run_until_complete() and the user calls asyncio.close() without the generator shutdown call, this will not trigger callbacks.

    asyncio does not provide any other way to clean up a resource when the event loop is about to close.

    Source code in prefect/utilities/asyncutils.py
    async def add_event_loop_shutdown_callback(coroutine_fn: Callable[[], Awaitable]):\n    \"\"\"\n    Adds a callback to the given callable on event loop closure. The callable must be\n    a coroutine function. It will be awaited when the current event loop is shutting\n    down.\n\n    Requires use of `asyncio.run()` which waits for async generator shutdown by\n    default or explicit call of `asyncio.shutdown_asyncgens()`. If the application\n    is entered with `asyncio.run_until_complete()` and the user calls\n    `asyncio.close()` without the generator shutdown call, this will not trigger\n    callbacks.\n\n    asyncio does not provided _any_ other way to clean up a resource when the event\n    loop is about to close.\n    \"\"\"\n\n    async def on_shutdown(key):\n        # It appears that EVENT_LOOP_GC_REFS is somehow being garbage collected early.\n        # We hold a reference to it so as to preserve it, at least for the lifetime of\n        # this coroutine. See the issue below for the initial report/discussion:\n        # https://github.com/PrefectHQ/prefect/issues/7709#issuecomment-1560021109\n        _ = EVENT_LOOP_GC_REFS\n        try:\n            yield\n        except GeneratorExit:\n            await coroutine_fn()\n            # Remove self from the garbage collection set\n            EVENT_LOOP_GC_REFS.pop(key)\n\n    # Create the iterator and store it in a global variable so it is not garbage\n    # collected. If the iterator is garbage collected before the event loop closes, the\n    # callback will not run. Since this function does not know the scope of the event\n    # loop that is calling it, a reference with global scope is necessary to ensure\n    # garbage collection does not occur until after event loop closure.\n    key = id(on_shutdown)\n    EVENT_LOOP_GC_REFS[key] = on_shutdown(key)\n\n    # Begin iterating so it will be cleaned up as an incomplete generator\n    try:\n        await EVENT_LOOP_GC_REFS[key].__anext__()\n    # There is a poorly understood edge case we've seen in CI where the key is\n    # removed from the dict before we begin generator iteration.\n    except KeyError:\n        logger.warning(\"The event loop shutdown callback was not properly registered. \")\n        pass\n
    ","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.create_gather_task_group","title":"create_gather_task_group","text":"

    Create a new task group that gathers results

    Source code in prefect/utilities/asyncutils.py
    def create_gather_task_group() -> GatherTaskGroup:\n    \"\"\"Create a new task group that gathers results\"\"\"\n    # This function matches the AnyIO API which uses callables since the concrete\n    # task group class depends on the async library being used and cannot be\n    # determined until runtime\n    return GatherTaskGroup(anyio.create_task_group())\n
    ","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.gather","title":"gather async","text":"

    Run calls concurrently and gather their results.

    Unlike asyncio.gather, this expects to receive callables, not coroutines. This matches anyio semantics.

    Source code in prefect/utilities/asyncutils.py
    async def gather(*calls: Callable[[], Coroutine[Any, Any, T]]) -> List[T]:\n    \"\"\"\n    Run calls concurrently and gather their results.\n\n    Unlike `asyncio.gather` this expects to receive _callables_ not _coroutines_.\n    This matches `anyio` semantics.\n    \"\"\"\n    keys = []\n    async with create_gather_task_group() as tg:\n        for call in calls:\n            keys.append(tg.start_soon(call))\n    return [tg.get_result(key) for key in keys]\n
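    For example, a minimal sketch that gathers two calls; results are returned in the order the calls were supplied:
    from functools import partial\nfrom prefect.utilities.asyncutils import gather\n\nasync def double(x: int) -> int:\n    return x * 2\n\nasync def main():\n    return await gather(partial(double, 1), partial(double, 2))  # [2, 4]\n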
    ","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.is_async_fn","title":"is_async_fn","text":"

    Returns True if a function returns a coroutine.

    See https://github.com/microsoft/pyright/issues/2142 for an example use

    Source code in prefect/utilities/asyncutils.py
    def is_async_fn(\n    func: Union[Callable[P, R], Callable[P, Awaitable[R]]],\n) -> TypeGuard[Callable[P, Awaitable[R]]]:\n    \"\"\"\n    Returns `True` if a function returns a coroutine.\n\n    See https://github.com/microsoft/pyright/issues/2142 for an example use\n    \"\"\"\n    while hasattr(func, \"__wrapped__\"):\n        func = func.__wrapped__\n\n    return inspect.iscoroutinefunction(func)\n
    ","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.is_async_gen_fn","title":"is_async_gen_fn","text":"

    Returns True if a function is an async generator.

    Source code in prefect/utilities/asyncutils.py
    def is_async_gen_fn(func):\n    \"\"\"\n    Returns `True` if a function is an async generator.\n    \"\"\"\n    while hasattr(func, \"__wrapped__\"):\n        func = func.__wrapped__\n\n    return inspect.isasyncgenfunction(func)\n
    ","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.raise_async_exception_in_thread","title":"raise_async_exception_in_thread","text":"

    Raise an exception in a thread asynchronously.

    This will not interrupt long-running system calls like sleep or wait.

    Source code in prefect/utilities/asyncutils.py
    def raise_async_exception_in_thread(thread: Thread, exc_type: Type[BaseException]):\n    \"\"\"\n    Raise an exception in a thread asynchronously.\n\n    This will not interrupt long-running system calls like `sleep` or `wait`.\n    \"\"\"\n    ret = ctypes.pythonapi.PyThreadState_SetAsyncExc(\n        ctypes.c_long(thread.ident), ctypes.py_object(exc_type)\n    )\n    if ret == 0:\n        raise ValueError(\"Thread not found.\")\n
    ","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.run_async_from_worker_thread","title":"run_async_from_worker_thread","text":"

    Runs an async function in the main thread's event loop, blocking the worker thread until completion

    Source code in prefect/utilities/asyncutils.py
    def run_async_from_worker_thread(\n    __fn: Callable[..., Awaitable[T]], *args: Any, **kwargs: Any\n) -> T:\n    \"\"\"\n    Runs an async function in the main thread's event loop, blocking the worker\n    thread until completion\n    \"\"\"\n    call = partial(__fn, *args, **kwargs)\n    return anyio.from_thread.run(call)\n
    ","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.run_sync","title":"run_sync","text":"

    Runs a coroutine from a synchronous context. A thread will be spawned to run the event loop if necessary, which allows coroutines to run in environments like Jupyter notebooks where the event loop runs on the main thread.

    Parameters:

    Name Type Description Default coroutine Coroutine[Any, Any, T]

    The coroutine to run.

    required

    Returns:

    Type Description T

    The return value of the coroutine.

    Example

    Basic usage:

    async def my_async_function(x: int) -> int:\n    return x + 1\n\nrun_sync(my_async_function(1))\n

    Source code in prefect/utilities/asyncutils.py
    def run_sync(coroutine: Coroutine[Any, Any, T]) -> T:\n    \"\"\"\n    Runs a coroutine from a synchronous context. A thread will be spawned\n    to run the event loop if necessary, which allows coroutines to run in\n    environments like Jupyter notebooks where the event loop runs on the main\n    thread.\n\n    Args:\n        coroutine: The coroutine to run.\n\n    Returns:\n        The return value of the coroutine.\n\n    Example:\n        Basic usage:\n        ```python\n        async def my_async_function(x: int) -> int:\n            return x + 1\n\n        run_sync(my_async_function(1))\n        ```\n    \"\"\"\n    # ensure context variables are properly copied to the async frame\n    context = copy_context()\n    try:\n        loop = asyncio.get_running_loop()\n    except RuntimeError:\n        loop = None\n\n    if loop and loop.is_running():\n        with ThreadPoolExecutor() as executor:\n            future = executor.submit(context.run, asyncio.run, coroutine)\n            return cast(T, future.result())\n    else:\n        return context.run(asyncio.run, coroutine)\n
    ","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.run_sync_in_interruptible_worker_thread","title":"run_sync_in_interruptible_worker_thread async","text":"

    Runs a sync function in a new interruptible worker thread so that the main thread's event loop is not blocked

    Unlike the anyio function, this performs best-effort cancellation of the thread using the C API. Cancellation will not interrupt system calls like sleep.

    Source code in prefect/utilities/asyncutils.py
    async def run_sync_in_interruptible_worker_thread(\n    __fn: Callable[..., T], *args: Any, **kwargs: Any\n) -> T:\n    \"\"\"\n    Runs a sync function in a new interruptible worker thread so that the main\n    thread's event loop is not blocked\n\n    Unlike the anyio function, this performs best-effort cancellation of the\n    thread using the C API. Cancellation will not interrupt system calls like\n    `sleep`.\n    \"\"\"\n\n    class NotSet:\n        pass\n\n    thread: Thread = None\n    result = NotSet\n    event = asyncio.Event()\n    loop = asyncio.get_running_loop()\n\n    def capture_worker_thread_and_result():\n        # Captures the worker thread that AnyIO is using to execute the function so\n        # the main thread can perform actions on it\n        nonlocal thread, result\n        try:\n            thread = threading.current_thread()\n            result = __fn(*args, **kwargs)\n        except BaseException as exc:\n            result = exc\n            raise\n        finally:\n            loop.call_soon_threadsafe(event.set)\n\n    async def send_interrupt_to_thread():\n        # This task waits until the result is returned from the thread, if cancellation\n        # occurs during that time, we will raise the exception in the thread as well\n        try:\n            await event.wait()\n        except anyio.get_cancelled_exc_class():\n            # NOTE: We could send a SIGINT here which allow us to interrupt system\n            # calls but the interrupt bubbles from the child thread into the main thread\n            # and there is not a clear way to prevent it.\n            raise_async_exception_in_thread(thread, anyio.get_cancelled_exc_class())\n            raise\n\n    async with anyio.create_task_group() as tg:\n        tg.start_soon(send_interrupt_to_thread)\n        tg.start_soon(\n            partial(\n                anyio.to_thread.run_sync,\n                capture_worker_thread_and_result,\n                cancellable=True,\n                limiter=get_thread_limiter(),\n            )\n        )\n\n    assert result is not NotSet\n    return result\n
    ","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.run_sync_in_worker_thread","title":"run_sync_in_worker_thread async","text":"

    Runs a sync function in a new worker thread so that the main thread's event loop is not blocked

    Unlike the anyio function, this defaults to a cancellable thread and does not allow passing arguments to the anyio function so users can pass kwargs to their function.

    Note that cancellation of threads will not result in interrupted computation; the thread may continue running and the outcome will just be ignored.

    Source code in prefect/utilities/asyncutils.py
    async def run_sync_in_worker_thread(\n    __fn: Callable[..., T], *args: Any, **kwargs: Any\n) -> T:\n    \"\"\"\n    Runs a sync function in a new worker thread so that the main thread's event loop\n    is not blocked\n\n    Unlike the anyio function, this defaults to a cancellable thread and does not allow\n    passing arguments to the anyio function so users can pass kwargs to their function.\n\n    Note that cancellation of threads will not result in interrupted computation, the\n    thread may continue running \u2014 the outcome will just be ignored.\n    \"\"\"\n    call = partial(__fn, *args, **kwargs)\n    return await anyio.to_thread.run_sync(\n        call, cancellable=True, limiter=get_thread_limiter()\n    )\n
    ","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.sync","title":"sync","text":"

    Call an async function from a synchronous context. Block until completion.

    If in an asynchronous context, we will run the code in a separate loop instead of failing, but a warning will be displayed since this is not recommended.

    Source code in prefect/utilities/asyncutils.py
    def sync(__async_fn: Callable[P, Awaitable[T]], *args: P.args, **kwargs: P.kwargs) -> T:\n    \"\"\"\n    Call an async function from a synchronous context. Block until completion.\n\n    If in an asynchronous context, we will run the code in a separate loop instead of\n    failing but a warning will be displayed since this is not recommended.\n    \"\"\"\n    if in_async_main_thread():\n        warnings.warn(\n            \"`sync` called from an asynchronous context; \"\n            \"you should `await` the async function directly instead.\"\n        )\n        with anyio.start_blocking_portal() as portal:\n            return portal.call(partial(__async_fn, *args, **kwargs))\n    elif in_async_worker_thread():\n        # In a sync context but we can access the event loop thread; send the async\n        # call to the parent\n        return run_async_from_worker_thread(__async_fn, *args, **kwargs)\n    else:\n        # In a sync context and there is no event loop; just create an event loop\n        # to run the async code then tear it down\n        return run_async_in_new_loop(__async_fn, *args, **kwargs)\n
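    For example, a minimal sketch of calling an async function from synchronous code:
    from prefect.utilities.asyncutils import sync\n\nasync def add(a: int, b: int) -> int:\n    return a + b\n\nresult = sync(add, 1, 2)  # 3\n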
    ","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.sync_compatible","title":"sync_compatible","text":"

    Converts an async function into a dual async and sync function.

    When the returned function is called, we will attempt to determine the best way to enter the async function.

    • If in a thread with a running event loop, we will return the coroutine for the caller to await. This is normal async behavior.
    • If in a blocking worker thread with access to an event loop in another thread, we will submit the async method to the event loop.
    • If we cannot find an event loop, we will create a new one and run the async method then tear down the loop.
    Source code in prefect/utilities/asyncutils.py
    def sync_compatible(async_fn: T) -> T:\n    \"\"\"\n    Converts an async function into a dual async and sync function.\n\n    When the returned function is called, we will attempt to determine the best way\n    to enter the async function.\n\n    - If in a thread with a running event loop, we will return the coroutine for the\n        caller to await. This is normal async behavior.\n    - If in a blocking worker thread with access to an event loop in another thread, we\n        will submit the async method to the event loop.\n    - If we cannot find an event loop, we will create a new one and run the async method\n        then tear down the loop.\n    \"\"\"\n\n    @wraps(async_fn)\n    def coroutine_wrapper(*args, **kwargs):\n        from prefect._internal.concurrency.api import create_call, from_sync\n        from prefect._internal.concurrency.calls import get_current_call, logger\n        from prefect._internal.concurrency.event_loop import get_running_loop\n        from prefect._internal.concurrency.threads import get_global_loop\n        from prefect.settings import PREFECT_EXPERIMENTAL_DISABLE_SYNC_COMPAT\n\n        if PREFECT_EXPERIMENTAL_DISABLE_SYNC_COMPAT:\n            return async_fn(*args, **kwargs)\n\n        global_thread_portal = get_global_loop()\n        current_thread = threading.current_thread()\n        current_call = get_current_call()\n        current_loop = get_running_loop()\n\n        if current_thread.ident == global_thread_portal.thread.ident:\n            logger.debug(f\"{async_fn} --> return coroutine for internal await\")\n            # In the prefect async context; return the coro for us to await\n            return async_fn(*args, **kwargs)\n        elif in_async_main_thread() and (\n            not current_call or is_async_fn(current_call.fn)\n        ):\n            # In the main async context; return the coro for them to await\n            logger.debug(f\"{async_fn} --> return coroutine for user await\")\n            return async_fn(*args, **kwargs)\n        elif in_async_worker_thread():\n            # In a sync context but we can access the event loop thread; send the async\n            # call to the parent\n            return run_async_from_worker_thread(async_fn, *args, **kwargs)\n        elif current_loop is not None:\n            logger.debug(f\"{async_fn} --> run async in global loop portal\")\n            # An event loop is already present but we are in a sync context, run the\n            # call in Prefect's event loop thread\n            return from_sync.call_soon_in_loop_thread(\n                create_call(async_fn, *args, **kwargs)\n            ).result()\n        else:\n            logger.debug(f\"{async_fn} --> run async in new loop\")\n            # Run in a new event loop, but use a `Call` for nested context detection\n            call = create_call(async_fn, *args, **kwargs)\n            return call()\n\n    # TODO: This is breaking type hints on the callable... mypy is behind the curve\n    #       on argument annotations. We can still fix this for editors though.\n    if is_async_fn(async_fn):\n        wrapper = coroutine_wrapper\n    elif is_async_gen_fn(async_fn):\n        raise ValueError(\"Async generators cannot yet be marked as `sync_compatible`\")\n    else:\n        raise TypeError(\"The decorated function must be async.\")\n\n    wrapper.aio = async_fn\n    return wrapper\n
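    A minimal sketch of decorating an async function so it can be called from both contexts (the function shown is hypothetical):
    from prefect.utilities.asyncutils import sync_compatible\n\n@sync_compatible\nasync def get_value() -> int:\n    return 42\n\nvalue = get_value()  # blocks and returns 42 when called from a sync context\n# in an async context, the call returns a coroutine for the caller to await\n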
    ","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/callables/","title":"callables","text":"","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables","title":"prefect.utilities.callables","text":"

    Utilities for working with Python callables.

    ","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.ParameterSchema","title":"ParameterSchema","text":"

    Bases: BaseModel

    Simple data model corresponding to an OpenAPI Schema.

    Source code in prefect/utilities/callables.py
    class ParameterSchema(pydantic.BaseModel):\n    \"\"\"Simple data model corresponding to an OpenAPI `Schema`.\"\"\"\n\n    title: Literal[\"Parameters\"] = \"Parameters\"\n    type: Literal[\"object\"] = \"object\"\n    properties: Dict[str, Any] = pydantic.Field(default_factory=dict)\n    required: List[str] = None\n    definitions: Optional[Dict[str, Any]] = None\n\n    def dict(self, *args, **kwargs):\n        \"\"\"Exclude `None` fields by default to comply with\n        the OpenAPI spec.\n        \"\"\"\n        kwargs.setdefault(\"exclude_none\", True)\n        return super().dict(*args, **kwargs)\n
    ","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.ParameterSchema.dict","title":"dict","text":"

    Exclude None fields by default to comply with the OpenAPI spec.

    Source code in prefect/utilities/callables.py
    def dict(self, *args, **kwargs):\n    \"\"\"Exclude `None` fields by default to comply with\n    the OpenAPI spec.\n    \"\"\"\n    kwargs.setdefault(\"exclude_none\", True)\n    return super().dict(*args, **kwargs)\n
    ","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.call_with_parameters","title":"call_with_parameters","text":"

    Call a function with parameters extracted with get_call_parameters

    The function must have an identical signature to the original function or this will fail. If you need to send parameters to a function with a different signature, extract the args/kwargs using parameters_to_positional_and_keyword directly.

    Source code in prefect/utilities/callables.py
    def call_with_parameters(fn: Callable, parameters: Dict[str, Any]):\n    \"\"\"\n    Call a function with parameters extracted with `get_call_parameters`\n\n    The function _must_ have an identical signature to the original function or this\n    will fail. If you need to send to a function with a different signature, extract\n    the args/kwargs using `parameters_to_positional_and_keyword` directly\n    \"\"\"\n    args, kwargs = parameters_to_args_kwargs(fn, parameters)\n    return fn(*args, **kwargs)\n
    ","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.cloudpickle_wrapped_call","title":"cloudpickle_wrapped_call","text":"

    Serializes a function call using cloudpickle, then returns a callable which will execute that call and return a cloudpickle-serialized return value.

    This is particularly useful for sending calls to libraries that only use the Python built-in pickler (e.g. anyio.to_process and multiprocessing) but may require a wider range of pickling support.

    Source code in prefect/utilities/callables.py
    def cloudpickle_wrapped_call(\n    __fn: Callable, *args: Any, **kwargs: Any\n) -> Callable[[], bytes]:\n    \"\"\"\n    Serializes a function call using cloudpickle then returns a callable which will\n    execute that call and return a cloudpickle serialized return value\n\n    This is particularly useful for sending calls to libraries that only use the Python\n    built-in pickler (e.g. `anyio.to_process` and `multiprocessing`) but may require\n    a wider range of pickling support.\n    \"\"\"\n    payload = cloudpickle.dumps((__fn, args, kwargs))\n    return partial(_run_serialized_call, payload)\n
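    For example, a minimal sketch of wrapping a call and recovering its return value:
    import cloudpickle\nfrom prefect.utilities.callables import cloudpickle_wrapped_call\n\ndef add(a, b):\n    return a + b\n\nwrapped = cloudpickle_wrapped_call(add, 1, b=2)\ncloudpickle.loads(wrapped())  # 3\n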
    ","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.collapse_variadic_parameters","title":"collapse_variadic_parameters","text":"

    Given a parameter dictionary, move any parameters not present in the signature into the variadic keyword argument.

    Example:

    ```python\ndef foo(a, b, **kwargs):\n    pass\n\nparameters = {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\ncollapse_variadic_parameters(foo, parameters)\n# {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\n```\n
    Source code in prefect/utilities/callables.py
    def collapse_variadic_parameters(\n    fn: Callable, parameters: Dict[str, Any]\n) -> Dict[str, Any]:\n    \"\"\"\n    Given a parameter dictionary, move any parameters stored not present in the\n    signature into the variadic keyword argument.\n\n    Example:\n\n        ```python\n        def foo(a, b, **kwargs):\n            pass\n\n        parameters = {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\n        collapse_variadic_parameters(foo, parameters)\n        # {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\n        ```\n    \"\"\"\n    signature_parameters = inspect.signature(fn).parameters\n    variadic_key = None\n    for key, parameter in signature_parameters.items():\n        if parameter.kind == parameter.VAR_KEYWORD:\n            variadic_key = key\n            break\n\n    missing_parameters = set(parameters.keys()) - set(signature_parameters.keys())\n\n    if not variadic_key and missing_parameters:\n        raise ValueError(\n            f\"Signature for {fn} does not include any variadic keyword argument \"\n            \"but parameters were given that are not present in the signature.\"\n        )\n\n    if variadic_key and not missing_parameters:\n        # variadic key is present but no missing parameters, return parameters unchanged\n        return parameters\n\n    new_parameters = parameters.copy()\n    if variadic_key:\n        new_parameters[variadic_key] = {}\n\n    for key in missing_parameters:\n        new_parameters[variadic_key][key] = new_parameters.pop(key)\n\n    return new_parameters\n
    ","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.explode_variadic_parameter","title":"explode_variadic_parameter","text":"

    Given a parameter dictionary, move any parameters stored in a variadic keyword argument parameter (i.e. **kwargs) into the top level.

    Example:

    ```python\ndef foo(a, b, **kwargs):\n    pass\n\nparameters = {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\nexplode_variadic_parameter(foo, parameters)\n# {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\n```\n
    Source code in prefect/utilities/callables.py
    def explode_variadic_parameter(\n    fn: Callable, parameters: Dict[str, Any]\n) -> Dict[str, Any]:\n    \"\"\"\n    Given a parameter dictionary, move any parameters stored in a variadic keyword\n    argument parameter (i.e. **kwargs) into the top level.\n\n    Example:\n\n        ```python\n        def foo(a, b, **kwargs):\n            pass\n\n        parameters = {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\n        explode_variadic_parameter(foo, parameters)\n        # {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\n        ```\n    \"\"\"\n    variadic_key = None\n    for key, parameter in inspect.signature(fn).parameters.items():\n        if parameter.kind == parameter.VAR_KEYWORD:\n            variadic_key = key\n            break\n\n    if not variadic_key:\n        return parameters\n\n    new_parameters = parameters.copy()\n    for key, value in new_parameters.pop(variadic_key, {}).items():\n        new_parameters[key] = value\n\n    return new_parameters\n
    ","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.get_call_parameters","title":"get_call_parameters","text":"

    Bind a call to a function to get parameter/value mapping. Default values on the signature will be included if not overridden.

    Raises a ParameterBindError if the arguments/kwargs are not valid for the function

    Source code in prefect/utilities/callables.py
    def get_call_parameters(\n    fn: Callable,\n    call_args: Tuple[Any, ...],\n    call_kwargs: Dict[str, Any],\n    apply_defaults: bool = True,\n) -> Dict[str, Any]:\n    \"\"\"\n    Bind a call to a function to get parameter/value mapping. Default values on the\n    signature will be included if not overridden.\n\n    Raises a ParameterBindError if the arguments/kwargs are not valid for the function\n    \"\"\"\n    try:\n        bound_signature = inspect.signature(fn).bind(*call_args, **call_kwargs)\n    except TypeError as exc:\n        raise ParameterBindError.from_bind_failure(fn, exc, call_args, call_kwargs)\n\n    if apply_defaults:\n        bound_signature.apply_defaults()\n\n    # We cast from `OrderedDict` to `dict` because Dask will not convert futures in an\n    # ordered dictionary to values during execution; this is the default behavior in\n    # Python 3.9 anyway.\n    return dict(bound_signature.arguments)\n
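    For example, a small sketch showing default values being applied when binding (the function shown is hypothetical):
    from prefect.utilities.callables import get_call_parameters\n\ndef greet(name, punctuation=\"!\"):\n    ...\n\nget_call_parameters(greet, (\"World\",), {})\n# {\"name\": \"World\", \"punctuation\": \"!\"}\n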
    ","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.get_parameter_defaults","title":"get_parameter_defaults","text":"

    Get default parameter values for a callable.

    Source code in prefect/utilities/callables.py
    def get_parameter_defaults(\n    fn: Callable,\n) -> Dict[str, Any]:\n    \"\"\"\n    Get default parameter values for a callable.\n    \"\"\"\n    signature = inspect.signature(fn)\n\n    parameter_defaults = {}\n\n    for name, param in signature.parameters.items():\n        if param.default is not signature.empty:\n            parameter_defaults[name] = param.default\n\n    return parameter_defaults\n
    ","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.parameter_docstrings","title":"parameter_docstrings","text":"

    Given a docstring in Google docstring format, parse the parameter section and return a dictionary that maps parameter names to docstrings.

    Parameters:

    Name Type Description Default docstring Optional[str]

    The function's docstring.

    required

    Returns:

    Type Description Dict[str, str]

    Mapping from parameter names to docstrings.

    Source code in prefect/utilities/callables.py
    def parameter_docstrings(docstring: Optional[str]) -> Dict[str, str]:\n    \"\"\"\n    Given a docstring in Google docstring format, parse the parameter section\n    and return a dictionary that maps parameter names to docstring.\n\n    Args:\n        docstring: The function's docstring.\n\n    Returns:\n        Mapping from parameter names to docstrings.\n    \"\"\"\n    param_docstrings = {}\n\n    if not docstring:\n        return param_docstrings\n\n    with disable_logger(\"griffe.docstrings.google\"), disable_logger(\n        \"griffe.agents.nodes\"\n    ):\n        parsed = parse(Docstring(docstring), Parser.google)\n        for section in parsed:\n            if section.kind != DocstringSectionKind.parameters:\n                continue\n            param_docstrings = {\n                parameter.name: parameter.description for parameter in section.value\n            }\n\n    return param_docstrings\n
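    For example, a minimal sketch with a Google-style docstring (the parameter names are illustrative):
    from prefect.utilities.callables import parameter_docstrings\n\ndocstring = \"\"\"\nArgs:\n    name: The name to greet.\n    punctuation: Trailing punctuation to append.\n\"\"\"\n\nparameter_docstrings(docstring)\n# returns a mapping like {\"name\": \"The name to greet.\", ...}\n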
    ","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.parameter_schema","title":"parameter_schema","text":"

    Given a function, generates an OpenAPI-compatible description of the function's arguments, including:
    • name
    • typing information
    • whether it is required
    • a default value
    • additional constraints (like possible enum values)

    Parameters:

    Name Type Description Default fn Callable

    The function whose arguments will be serialized

    required

    Returns:

    Name Type Description ParameterSchema ParameterSchema

    the argument schema

    Source code in prefect/utilities/callables.py
    def parameter_schema(fn: Callable) -> ParameterSchema:\n    \"\"\"Given a function, generates an OpenAPI-compatible description\n    of the function's arguments, including:\n        - name\n        - typing information\n        - whether it is required\n        - a default value\n        - additional constraints (like possible enum values)\n\n    Args:\n        fn (Callable): The function whose arguments will be serialized\n\n    Returns:\n        ParameterSchema: the argument schema\n    \"\"\"\n    try:\n        signature = inspect.signature(fn, eval_str=True)  # novm\n    except (NameError, TypeError):\n        # `eval_str` is not available in Python < 3.10\n        signature = inspect.signature(fn)\n\n    model_fields = {}\n    aliases = {}\n    docstrings = parameter_docstrings(inspect.getdoc(fn))\n\n    class ModelConfig:\n        arbitrary_types_allowed = True\n\n    if HAS_PYDANTIC_V2 and not has_v1_type_as_param(signature):\n        create_schema = create_v2_schema\n        process_params = process_v2_params\n    else:\n        create_schema = create_v1_schema\n        process_params = process_v1_params\n\n    for position, param in enumerate(signature.parameters.values()):\n        name, type_, field = process_params(\n            param, position=position, docstrings=docstrings, aliases=aliases\n        )\n        # Generate a Pydantic model at each step so we can check if this parameter\n        # type supports schema generation\n        try:\n            create_schema(\n                \"CheckParameter\", model_cfg=ModelConfig, **{name: (type_, field)}\n            )\n        except (ValueError, TypeError):\n            # This field's type is not valid for schema creation, update it to `Any`\n            type_ = Any\n        model_fields[name] = (type_, field)\n\n    # Generate the final model and schema\n    schema = create_schema(\"Parameters\", model_cfg=ModelConfig, **model_fields)\n    return ParameterSchema(**schema)\n
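    For example, a minimal sketch generating a schema for a simple function (the function shown is hypothetical):
    from prefect.utilities.callables import parameter_schema\n\ndef greet(name: str, punctuation: str = \"!\"):\n    ...\n\nschema = parameter_schema(greet)\nschema.properties  # includes entries for \"name\" and \"punctuation\"\nschema.required    # [\"name\"]\n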
    ","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.parameters_to_args_kwargs","title":"parameters_to_args_kwargs","text":"

    Convert a parameters dictionary to positional and keyword arguments

    The function must have an identical signature to the original function or this will return an empty tuple and dict.

    Source code in prefect/utilities/callables.py
    def parameters_to_args_kwargs(\n    fn: Callable,\n    parameters: Dict[str, Any],\n) -> Tuple[Tuple[Any, ...], Dict[str, Any]]:\n    \"\"\"\n    Convert a `parameters` dictionary to positional and keyword arguments\n\n    The function _must_ have an identical signature to the original function or this\n    will return an empty tuple and dict.\n    \"\"\"\n    function_params = dict(inspect.signature(fn).parameters).keys()\n    # Check for parameters that are not present in the function signature\n    unknown_params = parameters.keys() - function_params\n    if unknown_params:\n        raise SignatureMismatchError.from_bad_params(\n            list(function_params), list(parameters.keys())\n        )\n    bound_signature = inspect.signature(fn).bind_partial()\n    bound_signature.arguments = parameters\n\n    return bound_signature.args, bound_signature.kwargs\n
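    For example, a small sketch converting a parameter mapping back to call arguments (the function shown is hypothetical):
    from prefect.utilities.callables import parameters_to_args_kwargs\n\ndef add(a, b=2):\n    return a + b\n\nargs, kwargs = parameters_to_args_kwargs(add, {\"a\": 1, \"b\": 3})\nadd(*args, **kwargs)  # 4\n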
    ","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.raise_for_reserved_arguments","title":"raise_for_reserved_arguments","text":"

    Raise a ReservedArgumentError if fn has any parameters that conflict with the names contained in reserved_arguments.

    Source code in prefect/utilities/callables.py
    def raise_for_reserved_arguments(fn: Callable, reserved_arguments: Iterable[str]):\n    \"\"\"Raise a ReservedArgumentError if `fn` has any parameters that conflict\n    with the names contained in `reserved_arguments`.\"\"\"\n    function_paremeters = inspect.signature(fn).parameters\n\n    for argument in reserved_arguments:\n        if argument in function_paremeters:\n            raise ReservedArgumentError(\n                f\"{argument!r} is a reserved argument name and cannot be used.\"\n            )\n
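    A short sketch, assuming ReservedArgumentError is importable from prefect.exceptions and using a made-up function for illustration:

    ```python
    from prefect.exceptions import ReservedArgumentError
    from prefect.utilities.callables import raise_for_reserved_arguments

    def my_callable(data, wait_for=None):
        ...

    try:
        raise_for_reserved_arguments(my_callable, ["wait_for", "return_state"])
    except ReservedArgumentError as exc:
        # 'wait_for' collides with a reserved name, so the check raises
        print(exc)
    ```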
    ","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/collections/","title":"collections","text":"","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections","title":"prefect.utilities.collections","text":"

    Utilities for extensions of and operations on Python collections.

    ","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.AutoEnum","title":"AutoEnum","text":"

    Bases: str, Enum

    An enum class that automatically generates value from variable names.

    This guards against common errors where variable names are updated but values are not.

    In addition, because AutoEnums inherit from str, they are automatically JSON-serializable.

    See https://docs.python.org/3/library/enum.html#using-automatic-values

    Example
    class MyEnum(AutoEnum):\n    RED = AutoEnum.auto() # equivalent to RED = 'RED'\n    BLUE = AutoEnum.auto() # equivalent to BLUE = 'BLUE'\n
    Source code in prefect/utilities/collections.py
    class AutoEnum(str, Enum):\n    \"\"\"\n    An enum class that automatically generates value from variable names.\n\n    This guards against common errors where variable names are updated but values are\n    not.\n\n    In addition, because AutoEnums inherit from `str`, they are automatically\n    JSON-serializable.\n\n    See https://docs.python.org/3/library/enum.html#using-automatic-values\n\n    Example:\n        ```python\n        class MyEnum(AutoEnum):\n            RED = AutoEnum.auto() # equivalent to RED = 'RED'\n            BLUE = AutoEnum.auto() # equivalent to BLUE = 'BLUE'\n        ```\n    \"\"\"\n\n    def _generate_next_value_(name, start, count, last_values):\n        return name\n\n    @staticmethod\n    def auto():\n        \"\"\"\n        Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n        \"\"\"\n        return auto()\n\n    def __repr__(self) -> str:\n        return f\"{type(self).__name__}.{self.value}\"\n
    ","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.AutoEnum.auto","title":"auto staticmethod","text":"

    Exposes enum.auto() to avoid requiring a second import to use AutoEnum

    Source code in prefect/utilities/collections.py
    @staticmethod\ndef auto():\n    \"\"\"\n    Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n    \"\"\"\n    return auto()\n
    ","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.StopVisiting","title":"StopVisiting","text":"

    Bases: BaseException

    A special exception used to stop recursive visits in visit_collection.

    When raised, the expression is returned without modification and recursive visits in that path will end.

    Source code in prefect/utilities/collections.py
    class StopVisiting(BaseException):\n    \"\"\"\n    A special exception used to stop recursive visits in `visit_collection`.\n\n    When raised, the expression is returned without modification and recursive visits\n    in that path will end.\n    \"\"\"\n
    ","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.batched_iterable","title":"batched_iterable","text":"

    Yield batches of a certain size from an iterable

    Parameters:

    Name Type Description Default iterable Iterable

    An iterable

    required size int

    The batch size to return

    required

    Yields:

    Name Type Description tuple T

    A batch of the iterable

    Source code in prefect/utilities/collections.py
    def batched_iterable(iterable: Iterable[T], size: int) -> Iterator[Tuple[T, ...]]:\n    \"\"\"\n    Yield batches of a certain size from an iterable\n\n    Args:\n        iterable (Iterable): An iterable\n        size (int): The batch size to return\n\n    Yields:\n        tuple: A batch of the iterable\n    \"\"\"\n    it = iter(iterable)\n    while True:\n        batch = tuple(itertools.islice(it, size))\n        if not batch:\n            break\n        yield batch\n
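    For example, batching a small range (a minimal, self-contained sketch):

    ```python
    from prefect.utilities.collections import batched_iterable

    for batch in batched_iterable(range(7), size=3):
        print(batch)
    # (0, 1, 2)
    # (3, 4, 5)
    # (6,)
    ```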
    ","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.dict_to_flatdict","title":"dict_to_flatdict","text":"

    Converts a (nested) dictionary to a flattened representation.

    Each key of the flat dict will be a CompoundKey tuple containing the \"chain of keys\" for the corresponding value.

    Parameters:

    Name Type Description Default dct dict

    The dictionary to flatten

    required _parent Tuple

    The current parent for recursion

    None

    Returns:

    Type Description Dict[Tuple[KT, ...], Any]

    A flattened dict of the same type as dct

    Source code in prefect/utilities/collections.py
    def dict_to_flatdict(\n    dct: Dict[KT, Union[Any, Dict[KT, Any]]], _parent: Tuple[KT, ...] = None\n) -> Dict[Tuple[KT, ...], Any]:\n    \"\"\"Converts a (nested) dictionary to a flattened representation.\n\n    Each key of the flat dict will be a CompoundKey tuple containing the \"chain of keys\"\n    for the corresponding value.\n\n    Args:\n        dct (dict): The dictionary to flatten\n        _parent (Tuple, optional): The current parent for recursion\n\n    Returns:\n        A flattened dict of the same type as dct\n    \"\"\"\n    typ = cast(Type[Dict[Tuple[KT, ...], Any]], type(dct))\n    items: List[Tuple[Tuple[KT, ...], Any]] = []\n    parent = _parent or tuple()\n\n    for k, v in dct.items():\n        k_parent = tuple(parent + (k,))\n        # if v is a non-empty dict, recurse\n        if isinstance(v, dict) and v:\n            items.extend(dict_to_flatdict(v, _parent=k_parent).items())\n        else:\n            items.append((k_parent, v))\n    return typ(items)\n
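    A small illustration with made-up data:

    ```python
    from prefect.utilities.collections import dict_to_flatdict

    nested = {"a": {"b": 1, "c": {"d": 2}}, "e": 3}
    flat = dict_to_flatdict(nested)
    # Each key is the tuple "chain of keys" leading to the value:
    # {('a', 'b'): 1, ('a', 'c', 'd'): 2, ('e',): 3}
    print(flat)
    ```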
    ","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.extract_instances","title":"extract_instances","text":"

    Extract objects from an iterable and return a dict of type -> instances

    Parameters:

    Name Type Description Default objects Iterable

    An iterable of objects

    required types Union[Type[T], Tuple[Type[T], ...]]

    A type or tuple of types to extract, defaults to all objects

    object

    Returns:

    Type Description Union[List[T], Dict[Type[T], T]]

    If a single type is given: a list of instances of that type

    Union[List[T], Dict[Type[T], T]]

    If a tuple of types is given: a mapping of type to a list of instances

    Source code in prefect/utilities/collections.py
    def extract_instances(\n    objects: Iterable,\n    types: Union[Type[T], Tuple[Type[T], ...]] = object,\n) -> Union[List[T], Dict[Type[T], T]]:\n    \"\"\"\n    Extract objects from a file and returns a dict of type -> instances\n\n    Args:\n        objects: An iterable of objects\n        types: A type or tuple of types to extract, defaults to all objects\n\n    Returns:\n        If a single type is given: a list of instances of that type\n        If a tuple of types is given: a mapping of type to a list of instances\n    \"\"\"\n    types = ensure_iterable(types)\n\n    # Create a mapping of type -> instance from the exec values\n    ret = defaultdict(list)\n\n    for o in objects:\n        # We iterate here so that the key is the passed type rather than type(o)\n        for type_ in types:\n            if isinstance(o, type_):\n                ret[type_].append(o)\n\n    if len(types) == 1:\n        return ret[types[0]]\n\n    return ret\n
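    A quick sketch of both return shapes, using toy objects:

    ```python
    from prefect.utilities.collections import extract_instances

    objects = [1, "a", 2.0, "b", 3]

    # A single type returns a flat list of matching instances
    print(extract_instances(objects, str))        # ['a', 'b']

    # A tuple of types returns a mapping of type -> list of instances,
    # roughly {int: [1, 3], str: ['a', 'b']}
    print(extract_instances(objects, (int, str)))
    ```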
    ","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.flatdict_to_dict","title":"flatdict_to_dict","text":"

    Converts a flattened dictionary back to a nested dictionary.

    Parameters:

    Name Type Description Default dct dict

    The dictionary to be nested. Each key should be a tuple of keys as generated by dict_to_flatdict

    required

    Returns: A nested dict of the same type as dct

    Source code in prefect/utilities/collections.py
    def flatdict_to_dict(\n    dct: Dict[Tuple[KT, ...], VT],\n) -> Dict[KT, Union[VT, Dict[KT, VT]]]:\n    \"\"\"Converts a flattened dictionary back to a nested dictionary.\n\n    Args:\n        dct (dict): The dictionary to be nested. Each key should be a tuple of keys\n            as generated by `dict_to_flatdict`\n\n    Returns\n        A nested dict of the same type as dct\n    \"\"\"\n    typ = type(dct)\n    result = cast(Dict[KT, Union[VT, Dict[KT, VT]]], typ())\n    for key_tuple, value in dct.items():\n        current_dict = result\n        for prefix_key in key_tuple[:-1]:\n            # Build nested dictionaries up for the current key tuple\n            # Use `setdefault` in case the nested dict has already been created\n            current_dict = current_dict.setdefault(prefix_key, typ())  # type: ignore\n        # Set the value\n        current_dict[key_tuple[-1]] = value\n\n    return result\n
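    A brief sketch showing nesting restored from compound keys, plus a round trip with dict_to_flatdict:

    ```python
    from prefect.utilities.collections import dict_to_flatdict, flatdict_to_dict

    flat = {("a", "b"): 1, ("a", "c", "d"): 2, ("e",): 3}
    print(flatdict_to_dict(flat))
    # {'a': {'b': 1, 'c': {'d': 2}}, 'e': 3}

    # Round-tripping through dict_to_flatdict recovers the original structure
    nested = {"a": {"b": 1}}
    assert flatdict_to_dict(dict_to_flatdict(nested)) == nested
    ```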
    ","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.get_from_dict","title":"get_from_dict","text":"

    Fetch a value from a nested dictionary or list using a sequence of keys.

    This function lets you fetch a value from a deeply nested structure of dictionaries and lists using either a dot-separated string or a list of keys. If a requested key does not exist, the function returns the provided default value.

    Parameters:

    Name Type Description Default dct Dict

    The nested dictionary or list from which to fetch the value.

    required keys Union[str, List[str]]

    The sequence of keys to use for access. Can be a dot-separated string or a list of keys. List indices can be included in the sequence as either integer keys or as string indices in square brackets.

    required default Any

    The default value to return if the requested key path does not exist. Defaults to None.

    None

    Returns:

    Type Description Any

    The fetched value if the key exists, or the default value if it does not.

    >>> get_from_dict({'a': {'b': {'c': [1, 2, 3, 4]}}}, 'a.b.c[1]')
    2
    >>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, ['a', 'b', 1, 'c', 1])
    2
    >>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, 'a.b.1.c.2', 'default')
    'default'

    Source code in prefect/utilities/collections.py
    def get_from_dict(dct: Dict, keys: Union[str, List[str]], default: Any = None) -> Any:\n    \"\"\"\n    Fetch a value from a nested dictionary or list using a sequence of keys.\n\n    This function allows to fetch a value from a deeply nested structure\n    of dictionaries and lists using either a dot-separated string or a list\n    of keys. If a requested key does not exist, the function returns the\n    provided default value.\n\n    Args:\n        dct: The nested dictionary or list from which to fetch the value.\n        keys: The sequence of keys to use for access. Can be a\n            dot-separated string or a list of keys. List indices can be included\n            in the sequence as either integer keys or as string indices in square\n            brackets.\n        default: The default value to return if the requested key path does not\n            exist. Defaults to None.\n\n    Returns:\n        The fetched value if the key exists, or the default value if it does not.\n\n    Examples:\n    >>> get_from_dict({'a': {'b': {'c': [1, 2, 3, 4]}}}, 'a.b.c[1]')\n    2\n    >>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, ['a', 'b', 1, 'c', 1])\n    2\n    >>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, 'a.b.1.c.2', 'default')\n    'default'\n    \"\"\"\n    if isinstance(keys, str):\n        keys = keys.replace(\"[\", \".\").replace(\"]\", \"\").split(\".\")\n    try:\n        for key in keys:\n            try:\n                # Try to cast to int to handle list indices\n                key = int(key)\n            except ValueError:\n                # If it's not an int, use the key as-is\n                # for dict lookup\n                pass\n            dct = dct[key]\n        return dct\n    except (TypeError, KeyError, IndexError):\n        return default\n
    ","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.isiterable","title":"isiterable","text":"

    Return a boolean indicating if an object is iterable.

    Excludes types that are iterable but typically used as singletons: - str - bytes - IO objects

    Source code in prefect/utilities/collections.py
    def isiterable(obj: Any) -> bool:\n    \"\"\"\n    Return a boolean indicating if an object is iterable.\n\n    Excludes types that are iterable but typically used as singletons:\n    - str\n    - bytes\n    - IO objects\n    \"\"\"\n    try:\n        iter(obj)\n    except TypeError:\n        return False\n    else:\n        return not isinstance(obj, (str, bytes, io.IOBase))\n
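    For instance:

    ```python
    from prefect.utilities.collections import isiterable

    print(isiterable([1, 2, 3]))              # True
    print(isiterable(x for x in range(3)))    # True -- generators are iterable
    print(isiterable("a string"))             # False -- strings are excluded
    print(isiterable(42))                     # False
    ```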
    ","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.remove_nested_keys","title":"remove_nested_keys","text":"

    Recurses a dictionary and returns a copy without any keys that match an entry in keys_to_remove. Returns obj unchanged if it is not a dictionary.

    Parameters:

    Name Type Description Default keys_to_remove List[Hashable]

    A list of keys to remove from obj. The obj argument is the object to remove keys from.

    required

    Returns:

    Type Description

    obj without keys matching an entry in keys_to_remove if obj is a dictionary. obj if obj is not a dictionary.

    Source code in prefect/utilities/collections.py
    def remove_nested_keys(keys_to_remove: List[Hashable], obj):\n    \"\"\"\n    Recurses a dictionary returns a copy without all keys that match an entry in\n    `key_to_remove`. Return `obj` unchanged if not a dictionary.\n\n    Args:\n        keys_to_remove: A list of keys to remove from obj obj: The object to remove keys\n            from.\n\n    Returns:\n        `obj` without keys matching an entry in `keys_to_remove` if `obj` is a\n            dictionary. `obj` if `obj` is not a dictionary.\n    \"\"\"\n    if not isinstance(obj, dict):\n        return obj\n    return {\n        key: remove_nested_keys(keys_to_remove, value)\n        for key, value in obj.items()\n        if key not in keys_to_remove\n    }\n
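    A small sketch with made-up configuration data:

    ```python
    from prefect.utilities.collections import remove_nested_keys

    config = {
        "name": "example",
        "auth": {"token": "secret", "user": "me"},
        "retries": 3,
    }
    # The matching key is stripped at every level of nesting
    print(remove_nested_keys(["token"], config))
    # {'name': 'example', 'auth': {'user': 'me'}, 'retries': 3}
    ```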
    ","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.visit_collection","title":"visit_collection","text":"

    This function visits every element of an arbitrary Python collection. If an element is a Python collection, it will be visited recursively. If an element is not a collection, visit_fn will be called with the element. The return value of visit_fn can be used to alter the element if return_data is set.

    Note that when using return_data a copy of each collection is created to avoid mutating the original object. This may have significant performance penalties and should only be used if you intend to transform the collection.

    Supported types:
    - List
    - Tuple
    - Set
    - Dict (note: keys are also visited recursively)
    - Dataclass
    - Pydantic model
    - Prefect annotations

    Parameters:

    Name Type Description Default expr Any

    a Python object or expression

    required visit_fn Callable[[Any], Awaitable[Any]]

    an async function that will be applied to every non-collection element of expr.

    required return_data bool

    if True, a copy of expr containing data modified by visit_fn will be returned. This is slower than return_data=False (the default).

    False max_depth int

    Controls the depth of recursive visitation. If set to zero, no recursion will occur. If set to a positive integer N, visitation will only descend to N layers deep. If set to any negative integer, no limit will be enforced and recursion will continue until terminal items are reached. By default, recursion is unlimited.

    -1 context Optional[dict]

    An optional dictionary. If passed, the context will be sent to each call to the visit_fn. The context can be mutated by each visitor and will be available for later visits to expressions at the given depth. Values will not be available \"up\" a level from a given expression.

    The context will be automatically populated with an 'annotation' key when visiting collections within a BaseAnnotation type. This requires the caller to pass context={} and will not be activated by default.

    None remove_annotations bool

    If set, annotations will be replaced by their contents. By default, annotations are preserved but their contents are visited.

    False Source code in prefect/utilities/collections.py
    def visit_collection(\n    expr,\n    visit_fn: Callable[[Any], Any],\n    return_data: bool = False,\n    max_depth: int = -1,\n    context: Optional[dict] = None,\n    remove_annotations: bool = False,\n):\n    \"\"\"\n    This function visits every element of an arbitrary Python collection. If an element\n    is a Python collection, it will be visited recursively. If an element is not a\n    collection, `visit_fn` will be called with the element. The return value of\n    `visit_fn` can be used to alter the element if `return_data` is set.\n\n    Note that when using `return_data` a copy of each collection is created to avoid\n    mutating the original object. This may have significant performance penalties and\n    should only be used if you intend to transform the collection.\n\n    Supported types:\n    - List\n    - Tuple\n    - Set\n    - Dict (note: keys are also visited recursively)\n    - Dataclass\n    - Pydantic model\n    - Prefect annotations\n\n    Args:\n        expr (Any): a Python object or expression\n        visit_fn (Callable[[Any], Awaitable[Any]]): an async function that\n            will be applied to every non-collection element of expr.\n        return_data (bool): if `True`, a copy of `expr` containing data modified\n            by `visit_fn` will be returned. This is slower than `return_data=False`\n            (the default).\n        max_depth: Controls the depth of recursive visitation. If set to zero, no\n            recursion will occur. If set to a positive integer N, visitation will only\n            descend to N layers deep. If set to any negative integer, no limit will be\n            enforced and recursion will continue until terminal items are reached. By\n            default, recursion is unlimited.\n        context: An optional dictionary. If passed, the context will be sent to each\n            call to the `visit_fn`. The context can be mutated by each visitor and will\n            be available for later visits to expressions at the given depth. Values\n            will not be available \"up\" a level from a given expression.\n\n            The context will be automatically populated with an 'annotation' key when\n            visiting collections within a `BaseAnnotation` type. This requires the\n            caller to pass `context={}` and will not be activated by default.\n        remove_annotations: If set, annotations will be replaced by their contents. 
By\n            default, annotations are preserved but their contents are visited.\n    \"\"\"\n\n    def visit_nested(expr):\n        # Utility for a recursive call, preserving options and updating the depth.\n        return visit_collection(\n            expr,\n            visit_fn=visit_fn,\n            return_data=return_data,\n            remove_annotations=remove_annotations,\n            max_depth=max_depth - 1,\n            # Copy the context on nested calls so it does not \"propagate up\"\n            context=context.copy() if context is not None else None,\n        )\n\n    def visit_expression(expr):\n        if context is not None:\n            return visit_fn(expr, context)\n        else:\n            return visit_fn(expr)\n\n    # Visit every expression\n    try:\n        result = visit_expression(expr)\n    except StopVisiting:\n        max_depth = 0\n        result = expr\n\n    if return_data:\n        # Only mutate the expression while returning data, otherwise it could be null\n        expr = result\n\n    # Then, visit every child of the expression recursively\n\n    # If we have reached the maximum depth, do not perform any recursion\n    if max_depth == 0:\n        return result if return_data else None\n\n    # Get the expression type; treat iterators like lists\n    typ = list if isinstance(expr, IteratorABC) and isiterable(expr) else type(expr)\n    typ = cast(type, typ)  # mypy treats this as 'object' otherwise and complains\n\n    # Then visit every item in the expression if it is a collection\n    if isinstance(expr, Mock):\n        # Do not attempt to recurse into mock objects\n        result = expr\n\n    elif isinstance(expr, BaseAnnotation):\n        if context is not None:\n            context[\"annotation\"] = expr\n        value = visit_nested(expr.unwrap())\n\n        if remove_annotations:\n            result = value if return_data else None\n        else:\n            result = expr.rewrap(value) if return_data else None\n\n    elif typ in (list, tuple, set):\n        items = [visit_nested(o) for o in expr]\n        result = typ(items) if return_data else None\n\n    elif typ in (dict, OrderedDict):\n        assert isinstance(expr, (dict, OrderedDict))  # typecheck assertion\n        items = [(visit_nested(k), visit_nested(v)) for k, v in expr.items()]\n        result = typ(items) if return_data else None\n\n    elif is_dataclass(expr) and not isinstance(expr, type):\n        values = [visit_nested(getattr(expr, f.name)) for f in fields(expr)]\n        items = {field.name: value for field, value in zip(fields(expr), values)}\n        result = typ(**items) if return_data else None\n\n    elif isinstance(expr, pydantic.BaseModel):\n        # NOTE: This implementation *does not* traverse private attributes\n        # Pydantic does not expose extras in `__fields__` so we use `__fields_set__`\n        # as well to get all of the relevant attributes\n        # Check for presence of attrs even if they're in the field set due to pydantic#4916\n        model_fields = {\n            f for f in expr.__fields_set__.union(expr.__fields__) if hasattr(expr, f)\n        }\n        items = [visit_nested(getattr(expr, key)) for key in model_fields]\n\n        if return_data:\n            # Collect fields with aliases so reconstruction can use the correct field name\n            aliases = {\n                key: value.alias\n                for key, value in expr.__fields__.items()\n                if value.has_alias\n            }\n\n            model_instance = typ(\n   
             **{\n                    aliases.get(key) or key: value\n                    for key, value in zip(model_fields, items)\n                }\n            )\n\n            # Private attributes are not included in `__fields_set__` but we do not want\n            # to drop them from the model so we restore them after constructing a new\n            # model\n            for attr in expr.__private_attributes__:\n                # Use `object.__setattr__` to avoid errors on immutable models\n                object.__setattr__(model_instance, attr, getattr(expr, attr))\n\n            # Preserve data about which fields were explicitly set on the original model\n            object.__setattr__(model_instance, \"__fields_set__\", expr.__fields_set__)\n            result = model_instance\n        else:\n            result = None\n\n    else:\n        result = result if return_data else None\n\n    return result\n
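    As a minimal sketch, here is a visitor that doubles every integer in a nested structure while leaving everything else (including dict keys) untouched; return_data=True requests the transformed copy:

    ```python
    from prefect.utilities.collections import visit_collection

    def double_ints(value):
        # Called for every non-collection element; collections are recursed into
        return value * 2 if isinstance(value, int) else value

    data = {"a": [1, 2, {"b": 3}], "c": "text"}
    result = visit_collection(data, visit_fn=double_ints, return_data=True)
    print(result)  # {'a': [2, 4, {'b': 6}], 'c': 'text'}
    ```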
    ","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/compat/","title":"compat","text":"","tags":["Python API","utilities","compatibility"]},{"location":"api-ref/prefect/utilities/compat/#prefect.utilities.compat","title":"prefect.utilities.compat","text":"

    Utilities for Python version compatibility

    ","tags":["Python API","utilities","compatibility"]},{"location":"api-ref/prefect/utilities/context/","title":"context","text":"","tags":["Python API","utilities","context"]},{"location":"api-ref/prefect/utilities/context/#prefect.utilities.context","title":"prefect.utilities.context","text":"","tags":["Python API","utilities","context"]},{"location":"api-ref/prefect/utilities/context/#prefect.utilities.context.get_task_and_flow_run_ids","title":"get_task_and_flow_run_ids","text":"

    Get the task run and flow run ids from the context, if available.

    Returns:

    Type Description Tuple[Optional[UUID], Optional[UUID]]

    Tuple[Optional[UUID], Optional[UUID]]: a tuple of the task run id and flow run id

    Source code in prefect/utilities/context.py
    def get_task_and_flow_run_ids() -> Tuple[Optional[UUID], Optional[UUID]]:\n    \"\"\"\n    Get the task run and flow run ids from the context, if available.\n\n    Returns:\n        Tuple[Optional[UUID], Optional[UUID]]: a tuple of the task run id and flow run id\n    \"\"\"\n    return get_task_run_id(), get_flow_run_id()\n
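    Outside of a flow or task run context there is nothing to look up, so both values come back as None (a trivial sketch):

    ```python
    from prefect.utilities.context import get_task_and_flow_run_ids

    task_run_id, flow_run_id = get_task_and_flow_run_ids()
    print(task_run_id, flow_run_id)  # None None outside of a run context
    ```

    Inside a running task, both IDs would be populated from the active run contexts.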
    ","tags":["Python API","utilities","context"]},{"location":"api-ref/prefect/utilities/dispatch/","title":"dispatch","text":"","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch","title":"prefect.utilities.dispatch","text":"

    Provides methods for performing dynamic dispatch, routing actions on a base type to one of its registered subtypes.

    Example:

    @register_base_type\nclass Base:\n    @classmethod\n    def __dispatch_key__(cls):\n        return cls.__name__.lower()\n\n\nclass Foo(Base):\n    ...\n\nkey = get_dispatch_key(Foo)  # 'foo'\nlookup_type(Base, key) # Foo\n
    ","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch.get_dispatch_key","title":"get_dispatch_key","text":"

    Retrieve the unique dispatch key for a class type or instance.

    This key is defined at the __dispatch_key__ attribute. If it is a callable, it will be resolved.

    If allow_missing is False, an exception will be raised if the attribute is not defined or the key is null. If True, None will be returned in these cases.

    Source code in prefect/utilities/dispatch.py
    def get_dispatch_key(\n    cls_or_instance: Any, allow_missing: bool = False\n) -> Optional[str]:\n    \"\"\"\n    Retrieve the unique dispatch key for a class type or instance.\n\n    This key is defined at the `__dispatch_key__` attribute. If it is a callable, it\n    will be resolved.\n\n    If `allow_missing` is `False`, an exception will be raised if the attribute is not\n    defined or the key is null. If `True`, `None` will be returned in these cases.\n    \"\"\"\n    dispatch_key = getattr(cls_or_instance, \"__dispatch_key__\", None)\n\n    type_name = (\n        cls_or_instance.__name__\n        if isinstance(cls_or_instance, type)\n        else type(cls_or_instance).__name__\n    )\n\n    if dispatch_key is None:\n        if allow_missing:\n            return None\n        raise ValueError(\n            f\"Type {type_name!r} does not define a value for \"\n            \"'__dispatch_key__' which is required for registry lookup.\"\n        )\n\n    if callable(dispatch_key):\n        dispatch_key = dispatch_key()\n\n    if allow_missing and dispatch_key is None:\n        return None\n\n    if not isinstance(dispatch_key, str):\n        raise TypeError(\n            f\"Type {type_name!r} has a '__dispatch_key__' of type \"\n            f\"{type(dispatch_key).__name__} but a type of 'str' is required.\"\n        )\n\n    return dispatch_key\n
    ","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch.get_registry_for_type","title":"get_registry_for_type","text":"

    Get the first matching registry for a class or any of its base classes.

    If not found, None is returned.

    Source code in prefect/utilities/dispatch.py
    def get_registry_for_type(cls: T) -> Optional[Dict[str, T]]:\n    \"\"\"\n    Get the first matching registry for a class or any of its base classes.\n\n    If not found, `None` is returned.\n    \"\"\"\n    return next(\n        filter(\n            lambda registry: registry is not None,\n            (_TYPE_REGISTRIES.get(cls) for cls in cls.mro()),\n        ),\n        None,\n    )\n
    ","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch.lookup_type","title":"lookup_type","text":"

    Look up a dispatch key in the type registry for the given class.

    Source code in prefect/utilities/dispatch.py
    def lookup_type(cls: T, dispatch_key: str) -> T:\n    \"\"\"\n    Look up a dispatch key in the type registry for the given class.\n    \"\"\"\n    # Get the first matching registry for the class or one of its bases\n    registry = get_registry_for_type(cls)\n\n    # Look up this type in the registry\n    subcls = registry.get(dispatch_key)\n\n    if subcls is None:\n        raise KeyError(\n            f\"No class found for dispatch key {dispatch_key!r} in registry for type \"\n            f\"{cls.__name__!r}.\"\n        )\n\n    return subcls\n
    ","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch.register_base_type","title":"register_base_type","text":"

    Register a base type allowing child types to be registered for dispatch with register_type.

    The base class may or may not define a __dispatch_key__ to allow lookups of the base type.

    Source code in prefect/utilities/dispatch.py
    def register_base_type(cls: T) -> T:\n    \"\"\"\n    Register a base type allowing child types to be registered for dispatch with\n    `register_type`.\n\n    The base class may or may not define a `__dispatch_key__` to allow lookups of the\n    base type.\n    \"\"\"\n    registry = _TYPE_REGISTRIES.setdefault(cls, {})\n    base_key = get_dispatch_key(cls, allow_missing=True)\n    if base_key is not None:\n        registry[base_key] = cls\n\n    # Add automatic subtype registration\n    cls.__init_subclass_original__ = getattr(cls, \"__init_subclass__\")\n    cls.__init_subclass__ = _register_subclass_of_base_type\n\n    return cls\n
    ","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dispatch/#prefect.utilities.dispatch.register_type","title":"register_type","text":"

    Register a type for lookup with dispatch.

    The type or one of its parents must define a unique __dispatch_key__.

    One of the class's base types must be registered using register_base_type.

    Source code in prefect/utilities/dispatch.py
    def register_type(cls: T) -> T:\n    \"\"\"\n    Register a type for lookup with dispatch.\n\n    The type or one of its parents must define a unique `__dispatch_key__`.\n\n    One of the classes base types must be registered using `register_base_type`.\n    \"\"\"\n    # Lookup the registry for this type\n    registry = get_registry_for_type(cls)\n\n    # Check if a base type is registered\n    if registry is None:\n        # Include a description of registered base types\n        known = \", \".join(repr(base.__name__) for base in _TYPE_REGISTRIES)\n        known_message = (\n            f\" Did you mean to inherit from one of the following known types: {known}.\"\n            if known\n            else \"\"\n        )\n\n        # And a list of all base types for the type they tried to register\n        bases = \", \".join(\n            repr(base.__name__) for base in cls.mro() if base not in (object, cls)\n        )\n\n        raise ValueError(\n            f\"No registry found for type {cls.__name__!r} with bases {bases}.\"\n            + known_message\n        )\n\n    key = get_dispatch_key(cls)\n    existing_value = registry.get(key)\n    if existing_value is not None and id(existing_value) != id(cls):\n        # Get line numbers for debugging\n        file = inspect.getsourcefile(cls)\n        line_number = inspect.getsourcelines(cls)[1]\n        existing_file = inspect.getsourcefile(existing_value)\n        existing_line_number = inspect.getsourcelines(existing_value)[1]\n        warnings.warn(\n            f\"Type {cls.__name__!r} at {file}:{line_number} has key {key!r} that \"\n            f\"matches existing registered type {existing_value.__name__!r} from \"\n            f\"{existing_file}:{existing_line_number}. The existing type will be \"\n            \"overridden.\"\n        )\n\n    # Add to the registry\n    registry[key] = cls\n\n    return cls\n
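    A compact sketch tying register_base_type, register_type, and lookup_type together; Shape and Circle are illustrative names invented for this example:

    ```python
    from prefect.utilities.dispatch import (
        lookup_type,
        register_base_type,
        register_type,
    )

    @register_base_type
    class Shape:
        @classmethod
        def __dispatch_key__(cls):
            return cls.__name__.lower()

    @register_type
    class Circle(Shape):
        ...

    # The subclass can now be recovered from the base type by its dispatch key
    assert lookup_type(Shape, "circle") is Circle
    ```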
    ","tags":["Python API","dispatch"]},{"location":"api-ref/prefect/utilities/dockerutils/","title":"dockerutils","text":"","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils","title":"prefect.utilities.dockerutils","text":"","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.BuildError","title":"BuildError","text":"

    Bases: Exception

    Raised when a Docker build fails

    Source code in prefect/utilities/dockerutils.py
    class BuildError(Exception):\n    \"\"\"Raised when a Docker build fails\"\"\"\n
    ","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder","title":"ImageBuilder","text":"

    An interface for preparing Docker build contexts and building images

    Source code in prefect/utilities/dockerutils.py
    class ImageBuilder:\n    \"\"\"An interface for preparing Docker build contexts and building images\"\"\"\n\n    base_directory: Path\n    context: Optional[Path]\n    platform: Optional[str]\n    dockerfile_lines: List[str]\n\n    def __init__(\n        self,\n        base_image: str,\n        base_directory: Path = None,\n        platform: str = None,\n        context: Path = None,\n    ):\n        \"\"\"Create an ImageBuilder\n\n        Args:\n            base_image: the base image to use\n            base_directory: the starting point on your host for relative file locations,\n                defaulting to the current directory\n            context: use this path as the build context (if not provided, will create a\n                temporary directory for the context)\n\n        Returns:\n            The image ID\n        \"\"\"\n        self.base_directory = base_directory or context or Path().absolute()\n        self.temporary_directory = None\n        self.context = context\n        self.platform = platform\n        self.dockerfile_lines = []\n\n        if self.context:\n            dockerfile_path: Path = self.context / \"Dockerfile\"\n            if dockerfile_path.exists():\n                raise ValueError(f\"There is already a Dockerfile at {context}\")\n\n        self.add_line(f\"FROM {base_image}\")\n\n    def __enter__(self) -> Self:\n        if self.context and not self.temporary_directory:\n            return self\n\n        self.temporary_directory = TemporaryDirectory()\n        self.context = Path(self.temporary_directory.__enter__())\n        return self\n\n    def __exit__(\n        self, exc: Type[BaseException], value: BaseException, traceback: TracebackType\n    ) -> None:\n        if not self.temporary_directory:\n            return\n\n        self.temporary_directory.__exit__(exc, value, traceback)\n        self.temporary_directory = None\n        self.context = None\n\n    def add_line(self, line: str) -> None:\n        \"\"\"Add a line to this image's Dockerfile\"\"\"\n        self.add_lines([line])\n\n    def add_lines(self, lines: Iterable[str]) -> None:\n        \"\"\"Add lines to this image's Dockerfile\"\"\"\n        self.dockerfile_lines.extend(lines)\n\n    def copy(self, source: Union[str, Path], destination: Union[str, PurePosixPath]):\n        \"\"\"Copy a file to this image\"\"\"\n        if not self.context:\n            raise Exception(\"No context available\")\n\n        if not isinstance(destination, PurePosixPath):\n            destination = PurePosixPath(destination)\n\n        if not isinstance(source, Path):\n            source = Path(source)\n\n        if source.is_absolute():\n            source = source.resolve().relative_to(self.base_directory)\n\n        if self.temporary_directory:\n            os.makedirs(self.context / source.parent, exist_ok=True)\n\n            if source.is_dir():\n                shutil.copytree(self.base_directory / source, self.context / source)\n            else:\n                shutil.copy2(self.base_directory / source, self.context / source)\n\n        self.add_line(f\"COPY {source} {destination}\")\n\n    def write_text(self, text: str, destination: Union[str, PurePosixPath]):\n        if not self.context:\n            raise Exception(\"No context available\")\n\n        if not isinstance(destination, PurePosixPath):\n            destination = PurePosixPath(destination)\n\n        source_hash = hashlib.sha256(text.encode()).hexdigest()\n        (self.context / f\".{source_hash}\").write_text(text)\n      
  self.add_line(f\"COPY .{source_hash} {destination}\")\n\n    def build(\n        self, pull: bool = False, stream_progress_to: Optional[TextIO] = None\n    ) -> str:\n        \"\"\"Build the Docker image from the current state of the ImageBuilder\n\n        Args:\n            pull: True to pull the base image during the build\n            stream_progress_to: an optional stream (like sys.stdout, or an io.TextIO)\n                that will collect the build output as it is reported by Docker\n\n        Returns:\n            The image ID\n        \"\"\"\n        dockerfile_path: Path = self.context / \"Dockerfile\"\n\n        with dockerfile_path.open(\"w\") as dockerfile:\n            dockerfile.writelines(line + \"\\n\" for line in self.dockerfile_lines)\n\n        try:\n            return build_image(\n                self.context,\n                platform=self.platform,\n                pull=pull,\n                stream_progress_to=stream_progress_to,\n            )\n        finally:\n            os.unlink(dockerfile_path)\n\n    def assert_has_line(self, line: str) -> None:\n        \"\"\"Asserts that the given line is in the Dockerfile\"\"\"\n        all_lines = \"\\n\".join(\n            [f\"  {i+1:>3}: {line}\" for i, line in enumerate(self.dockerfile_lines)]\n        )\n        message = (\n            f\"Expected {line!r} not found in Dockerfile.  Dockerfile:\\n{all_lines}\"\n        )\n        assert line in self.dockerfile_lines, message\n\n    def assert_line_absent(self, line: str) -> None:\n        \"\"\"Asserts that the given line is absent from the Dockerfile\"\"\"\n        if line not in self.dockerfile_lines:\n            return\n\n        i = self.dockerfile_lines.index(line)\n\n        surrounding_lines = \"\\n\".join(\n            [\n                f\"  {i+1:>3}: {line}\"\n                for i, line in enumerate(self.dockerfile_lines[i - 2 : i + 2])\n            ]\n        )\n        message = (\n            f\"Unexpected {line!r} found in Dockerfile at line {i+1}.  \"\n            f\"Surrounding lines:\\n{surrounding_lines}\"\n        )\n\n        assert line not in self.dockerfile_lines, message\n\n    def assert_line_before(self, first: str, second: str) -> None:\n        \"\"\"Asserts that the first line appears before the second line\"\"\"\n        self.assert_has_line(first)\n        self.assert_has_line(second)\n\n        first_index = self.dockerfile_lines.index(first)\n        second_index = self.dockerfile_lines.index(second)\n\n        surrounding_lines = \"\\n\".join(\n            [\n                f\"  {i+1:>3}: {line}\"\n                for i, line in enumerate(\n                    self.dockerfile_lines[second_index - 2 : first_index + 2]\n                )\n            ]\n        )\n\n        message = (\n            f\"Expected {first!r} to appear before {second!r} in the Dockerfile, but \"\n            f\"{first!r} was at line {first_index+1} and {second!r} as at line \"\n            f\"{second_index+1}.  
Surrounding lines:\\n{surrounding_lines}\"\n        )\n\n        assert first_index < second_index, message\n\n    def assert_line_after(self, second: str, first: str) -> None:\n        \"\"\"Asserts that the second line appears after the first line\"\"\"\n        self.assert_line_before(first, second)\n\n    def assert_has_file(self, source: Path, container_path: PurePosixPath) -> None:\n        \"\"\"Asserts that the given file or directory will be copied into the container\n        at the given path\"\"\"\n        if source.is_absolute():\n            source = source.relative_to(self.base_directory)\n\n        self.assert_has_line(f\"COPY {source} {container_path}\")\n
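    A rough sketch of assembling a Dockerfile in memory; the base image name is arbitrary, and the final build call is left commented out because it needs a running Docker daemon:

    ```python
    from prefect.utilities.dockerutils import ImageBuilder

    with ImageBuilder("python:3.11-slim") as builder:
        builder.add_lines(
            [
                "RUN pip install prefect",
                "WORKDIR /opt/app",
            ]
        )
        # The assert_* helpers are handy in tests for checking the generated Dockerfile
        builder.assert_line_before("RUN pip install prefect", "WORKDIR /opt/app")

        # image_id = builder.build(stream_progress_to=sys.stdout)  # requires Docker
    ```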
    ","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.add_line","title":"add_line","text":"

    Add a line to this image's Dockerfile

    Source code in prefect/utilities/dockerutils.py
    def add_line(self, line: str) -> None:\n    \"\"\"Add a line to this image's Dockerfile\"\"\"\n    self.add_lines([line])\n
    ","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.add_lines","title":"add_lines","text":"

    Add lines to this image's Dockerfile

    Source code in prefect/utilities/dockerutils.py
    def add_lines(self, lines: Iterable[str]) -> None:\n    \"\"\"Add lines to this image's Dockerfile\"\"\"\n    self.dockerfile_lines.extend(lines)\n
    ","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.assert_has_file","title":"assert_has_file","text":"

    Asserts that the given file or directory will be copied into the container at the given path

    Source code in prefect/utilities/dockerutils.py
    def assert_has_file(self, source: Path, container_path: PurePosixPath) -> None:\n    \"\"\"Asserts that the given file or directory will be copied into the container\n    at the given path\"\"\"\n    if source.is_absolute():\n        source = source.relative_to(self.base_directory)\n\n    self.assert_has_line(f\"COPY {source} {container_path}\")\n
    ","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.assert_has_line","title":"assert_has_line","text":"

    Asserts that the given line is in the Dockerfile

    Source code in prefect/utilities/dockerutils.py
    def assert_has_line(self, line: str) -> None:\n    \"\"\"Asserts that the given line is in the Dockerfile\"\"\"\n    all_lines = \"\\n\".join(\n        [f\"  {i+1:>3}: {line}\" for i, line in enumerate(self.dockerfile_lines)]\n    )\n    message = (\n        f\"Expected {line!r} not found in Dockerfile.  Dockerfile:\\n{all_lines}\"\n    )\n    assert line in self.dockerfile_lines, message\n
    ","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.assert_line_absent","title":"assert_line_absent","text":"

    Asserts that the given line is absent from the Dockerfile

    Source code in prefect/utilities/dockerutils.py
    def assert_line_absent(self, line: str) -> None:\n    \"\"\"Asserts that the given line is absent from the Dockerfile\"\"\"\n    if line not in self.dockerfile_lines:\n        return\n\n    i = self.dockerfile_lines.index(line)\n\n    surrounding_lines = \"\\n\".join(\n        [\n            f\"  {i+1:>3}: {line}\"\n            for i, line in enumerate(self.dockerfile_lines[i - 2 : i + 2])\n        ]\n    )\n    message = (\n        f\"Unexpected {line!r} found in Dockerfile at line {i+1}.  \"\n        f\"Surrounding lines:\\n{surrounding_lines}\"\n    )\n\n    assert line not in self.dockerfile_lines, message\n
    ","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.assert_line_after","title":"assert_line_after","text":"

    Asserts that the second line appears after the first line

    Source code in prefect/utilities/dockerutils.py
    def assert_line_after(self, second: str, first: str) -> None:\n    \"\"\"Asserts that the second line appears after the first line\"\"\"\n    self.assert_line_before(first, second)\n
    ","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.assert_line_before","title":"assert_line_before","text":"

    Asserts that the first line appears before the second line

    Source code in prefect/utilities/dockerutils.py
    def assert_line_before(self, first: str, second: str) -> None:\n    \"\"\"Asserts that the first line appears before the second line\"\"\"\n    self.assert_has_line(first)\n    self.assert_has_line(second)\n\n    first_index = self.dockerfile_lines.index(first)\n    second_index = self.dockerfile_lines.index(second)\n\n    surrounding_lines = \"\\n\".join(\n        [\n            f\"  {i+1:>3}: {line}\"\n            for i, line in enumerate(\n                self.dockerfile_lines[second_index - 2 : first_index + 2]\n            )\n        ]\n    )\n\n    message = (\n        f\"Expected {first!r} to appear before {second!r} in the Dockerfile, but \"\n        f\"{first!r} was at line {first_index+1} and {second!r} as at line \"\n        f\"{second_index+1}.  Surrounding lines:\\n{surrounding_lines}\"\n    )\n\n    assert first_index < second_index, message\n
    ","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.build","title":"build","text":"

    Build the Docker image from the current state of the ImageBuilder

    Parameters:

    Name Type Description Default pull bool

    True to pull the base image during the build

    False stream_progress_to Optional[TextIO]

    an optional stream (like sys.stdout, or an io.TextIO) that will collect the build output as it is reported by Docker

    None

    Returns:

    Type Description str

    The image ID

    Source code in prefect/utilities/dockerutils.py
    def build(\n    self, pull: bool = False, stream_progress_to: Optional[TextIO] = None\n) -> str:\n    \"\"\"Build the Docker image from the current state of the ImageBuilder\n\n    Args:\n        pull: True to pull the base image during the build\n        stream_progress_to: an optional stream (like sys.stdout, or an io.TextIO)\n            that will collect the build output as it is reported by Docker\n\n    Returns:\n        The image ID\n    \"\"\"\n    dockerfile_path: Path = self.context / \"Dockerfile\"\n\n    with dockerfile_path.open(\"w\") as dockerfile:\n        dockerfile.writelines(line + \"\\n\" for line in self.dockerfile_lines)\n\n    try:\n        return build_image(\n            self.context,\n            platform=self.platform,\n            pull=pull,\n            stream_progress_to=stream_progress_to,\n        )\n    finally:\n        os.unlink(dockerfile_path)\n
    ","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.ImageBuilder.copy","title":"copy","text":"

    Copy a file to this image

    Source code in prefect/utilities/dockerutils.py
    def copy(self, source: Union[str, Path], destination: Union[str, PurePosixPath]):\n    \"\"\"Copy a file to this image\"\"\"\n    if not self.context:\n        raise Exception(\"No context available\")\n\n    if not isinstance(destination, PurePosixPath):\n        destination = PurePosixPath(destination)\n\n    if not isinstance(source, Path):\n        source = Path(source)\n\n    if source.is_absolute():\n        source = source.resolve().relative_to(self.base_directory)\n\n    if self.temporary_directory:\n        os.makedirs(self.context / source.parent, exist_ok=True)\n\n        if source.is_dir():\n            shutil.copytree(self.base_directory / source, self.context / source)\n        else:\n            shutil.copy2(self.base_directory / source, self.context / source)\n\n    self.add_line(f\"COPY {source} {destination}\")\n
    ","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.PushError","title":"PushError","text":"

    Bases: Exception

    Raised when a Docker image push fails

    Source code in prefect/utilities/dockerutils.py
    class PushError(Exception):\n    \"\"\"Raised when a Docker image push fails\"\"\"\n
    ","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.build_image","title":"build_image","text":"

    Builds a Docker image, returning the image ID

    Parameters:

    Name Type Description Default context Path

    the root directory for the Docker build context

    required dockerfile str

    the path to the Dockerfile, relative to the context

    'Dockerfile' tag Optional[str]

    the tag to give this image

    None pull bool

    True to pull the base image during the build

    False stream_progress_to Optional[TextIO]

    an optional stream (like sys.stdout, or an io.TextIO) that will collect the build output as it is reported by Docker

    None

    Returns:

    Type Description str

    The image ID

    Source code in prefect/utilities/dockerutils.py
    @silence_docker_warnings()\ndef build_image(\n    context: Path,\n    dockerfile: str = \"Dockerfile\",\n    tag: Optional[str] = None,\n    pull: bool = False,\n    platform: str = None,\n    stream_progress_to: Optional[TextIO] = None,\n    **kwargs,\n) -> str:\n    \"\"\"Builds a Docker image, returning the image ID\n\n    Args:\n        context: the root directory for the Docker build context\n        dockerfile: the path to the Dockerfile, relative to the context\n        tag: the tag to give this image\n        pull: True to pull the base image during the build\n        stream_progress_to: an optional stream (like sys.stdout, or an io.TextIO) that\n            will collect the build output as it is reported by Docker\n\n    Returns:\n        The image ID\n    \"\"\"\n\n    if not context:\n        raise ValueError(\"context required to build an image\")\n\n    if not Path(context).exists():\n        raise ValueError(f\"Context path {context} does not exist\")\n\n    kwargs = {key: kwargs[key] for key in kwargs if key not in [\"decode\", \"labels\"]}\n\n    image_id = None\n    with docker_client() as client:\n        events = client.api.build(\n            path=str(context),\n            tag=tag,\n            dockerfile=dockerfile,\n            pull=pull,\n            decode=True,\n            labels=IMAGE_LABELS,\n            platform=platform,\n            **kwargs,\n        )\n\n        try:\n            for event in events:\n                if \"stream\" in event:\n                    if not stream_progress_to:\n                        continue\n                    stream_progress_to.write(event[\"stream\"])\n                    stream_progress_to.flush()\n                elif \"aux\" in event:\n                    image_id = event[\"aux\"][\"ID\"]\n                elif \"error\" in event:\n                    raise BuildError(event[\"error\"])\n                elif \"message\" in event:\n                    raise BuildError(event[\"message\"])\n        except docker.errors.APIError as e:\n            raise BuildError(e.explanation) from e\n\n    assert image_id, \"The Docker daemon did not return an image ID\"\n    return image_id\n
    ","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.docker_client","title":"docker_client","text":"

    Get the environmentally-configured Docker client

    Source code in prefect/utilities/dockerutils.py
    @contextmanager\ndef docker_client() -> Generator[\"DockerClient\", None, None]:\n    \"\"\"Get the environmentally-configured Docker client\"\"\"\n    client = None\n    try:\n        with silence_docker_warnings():\n            client = docker.DockerClient.from_env()\n\n            yield client\n    except docker.errors.DockerException as exc:\n        raise RuntimeError(\n            \"This error is often thrown because Docker is not running. Please ensure Docker is running.\"\n        ) from exc\n    finally:\n        client is not None and client.close()\n
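    For example (this only works with a Docker daemon running; otherwise the RuntimeError described in the source is raised):

    ```python
    from prefect.utilities.dockerutils import docker_client

    with docker_client() as client:
        # `client` is a standard docker-py DockerClient
        print(client.version().get("Version"))
    ```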
    ","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.format_outlier_version_name","title":"format_outlier_version_name","text":"

    Formats outlier Docker version names to pass packaging.version.parse validation.
    - Current cases are simple, but this creates a stub for more complicated formatting if eventually needed.
    - Example outlier versions that throw a parsing exception:
      - "20.10.0-ce" (variant of community edition label)
      - "20.10.0-ee" (variant of enterprise edition label)

    Parameters:

    Name Type Description Default version str

    raw docker version value

    required

    Returns:

    Name Type Description str

    value that can pass packaging.version.parse validation

    Source code in prefect/utilities/dockerutils.py
    def format_outlier_version_name(version: str):\n    \"\"\"\n    Formats outlier docker version names to pass `packaging.version.parse` validation\n    - Current cases are simple, but creates stub for more complicated formatting if eventually needed.\n    - Example outlier versions that throw a parsing exception:\n      - \"20.10.0-ce\" (variant of community edition label)\n      - \"20.10.0-ee\" (variant of enterprise edition label)\n\n    Args:\n        version (str): raw docker version value\n\n    Returns:\n        str: value that can pass `packaging.version.parse` validation\n    \"\"\"\n    return version.replace(\"-ce\", \"\").replace(\"-ee\", \"\")\n
    ","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.generate_default_dockerfile","title":"generate_default_dockerfile","text":"

    Generates a default Dockerfile used for deploying flows. The Dockerfile is written to a temporary file and yielded. The temporary file is removed after the context manager exits.

    Parameters:

    Name Type Description Default - context

    The context to use for the Dockerfile. Defaults to the current working directory.

    required Source code in prefect/utilities/dockerutils.py
    @contextmanager\ndef generate_default_dockerfile(context: Optional[Path] = None):\n    \"\"\"\n    Generates a default Dockerfile used for deploying flows. The Dockerfile is written\n    to a temporary file and yielded. The temporary file is removed after the context\n    manager exits.\n\n    Args:\n        - context: The context to use for the Dockerfile. Defaults to\n            the current working directory.\n    \"\"\"\n    if not context:\n        context = Path.cwd()\n    lines = []\n    base_image = get_prefect_image_name()\n    lines.append(f\"FROM {base_image}\")\n    dir_name = context.name\n\n    if (context / \"requirements.txt\").exists():\n        lines.append(f\"COPY requirements.txt /opt/prefect/{dir_name}/requirements.txt\")\n        lines.append(\n            f\"RUN python -m pip install -r /opt/prefect/{dir_name}/requirements.txt\"\n        )\n\n    lines.append(f\"COPY . /opt/prefect/{dir_name}/\")\n    lines.append(f\"WORKDIR /opt/prefect/{dir_name}/\")\n\n    temp_dockerfile = context / \"Dockerfile\"\n    if Path(temp_dockerfile).exists():\n        raise RuntimeError(\n            \"Failed to generate Dockerfile. Dockerfile already exists in the\"\n            \" current directory.\"\n        )\n\n    with Path(temp_dockerfile).open(\"w\") as f:\n        f.writelines(line + \"\\n\" for line in lines)\n\n    try:\n        yield temp_dockerfile\n    finally:\n        temp_dockerfile.unlink()\n
    ","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.get_prefect_image_name","title":"get_prefect_image_name","text":"

    Get the Prefect image name matching the current Prefect and Python versions.

    Parameters:

    Name Type Description Default prefect_version str

    An optional override for the Prefect version.

    None python_version str

    An optional override for the Python version; must be at the minor level e.g. '3.9'.

    None flavor str

    An optional alternative image flavor to build, like 'conda'

    None Source code in prefect/utilities/dockerutils.py
    def get_prefect_image_name(\n    prefect_version: str = None, python_version: str = None, flavor: str = None\n) -> str:\n    \"\"\"\n    Get the Prefect image name matching the current Prefect and Python versions.\n\n    Args:\n        prefect_version: An optional override for the Prefect version.\n        python_version: An optional override for the Python version; must be at the\n            minor level e.g. '3.9'.\n        flavor: An optional alternative image flavor to build, like 'conda'\n    \"\"\"\n    parsed_version = (prefect_version or prefect.__version__).split(\"+\")\n    is_prod_build = len(parsed_version) == 1\n    prefect_version = (\n        parsed_version[0]\n        if is_prod_build\n        else \"sha-\" + prefect.__version_info__[\"full-revisionid\"][:7]\n    )\n\n    python_version = python_version or python_version_minor()\n\n    tag = slugify(\n        f\"{prefect_version}-python{python_version}\" + (f\"-{flavor}\" if flavor else \"\"),\n        lowercase=False,\n        max_length=128,\n        # Docker allows these characters for tag names\n        regex_pattern=r\"[^a-zA-Z0-9_.-]+\",\n    )\n\n    image = \"prefect\" if is_prod_build else \"prefect-dev\"\n    return f\"prefecthq/{image}:{tag}\"\n
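For example, with explicit overrides (assuming a release build; development builds use the prefecthq/prefect-dev repository with a sha- tag instead):

>>> get_prefect_image_name(prefect_version=\"2.10.0\", python_version=\"3.10\")\n'prefecthq/prefect:2.10.0-python3.10'\n>>> get_prefect_image_name(prefect_version=\"2.10.0\", python_version=\"3.10\", flavor=\"conda\")\n'prefecthq/prefect:2.10.0-python3.10-conda'\n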
    ","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.parse_image_tag","title":"parse_image_tag","text":"

    Parse Docker Image String

    • If a tag exists, this function parses and returns the image registry and tag, separately as a tuple.
    • Example 1: 'prefecthq/prefect:latest' -> ('prefecthq/prefect', 'latest')
    • Example 2: 'hostname.io:5050/folder/subfolder:latest' -> ('hostname.io:5050/folder/subfolder', 'latest')
    • Supports parsing Docker Image strings that follow Docker Image Specification v1.1.0
    • Image building tools typically enforce this standard

    Parameters:

    Name Type Description Default name str

    Name of Docker Image

    required Return Source code in prefect/utilities/dockerutils.py
    def parse_image_tag(name: str) -> Tuple[str, Optional[str]]:\n    \"\"\"\n    Parse Docker Image String\n\n    - If a tag exists, this function parses and returns the image registry and tag,\n      separately as a tuple.\n      - Example 1: 'prefecthq/prefect:latest' -> ('prefecthq/prefect', 'latest')\n      - Example 2: 'hostname.io:5050/folder/subfolder:latest' -> ('hostname.io:5050/folder/subfolder', 'latest')\n    - Supports parsing Docker Image strings that follow Docker Image Specification v1.1.0\n      - Image building tools typically enforce this standard\n\n    Args:\n        name (str): Name of Docker Image\n\n    Return:\n        tuple: image registry, image tag\n    \"\"\"\n    tag = None\n    name_parts = name.split(\"/\")\n    # First handles the simplest image names (DockerHub-based, index-free, potentionally with a tag)\n    # - Example: simplename:latest\n    if len(name_parts) == 1:\n        if \":\" in name_parts[0]:\n            image_name, tag = name_parts[0].split(\":\")\n        else:\n            image_name = name_parts[0]\n    else:\n        # 1. Separates index (hostname.io or prefecthq) from path:tag (folder/subfolder:latest or prefect:latest)\n        # 2. Separates path and tag (if tag exists)\n        # 3. Reunites index and path (without tag) as image name\n        index_name = name_parts[0]\n        image_path = \"/\".join(name_parts[1:])\n        if \":\" in image_path:\n            image_path, tag = image_path.split(\":\")\n        image_name = f\"{index_name}/{image_path}\"\n    return image_name, tag\n
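The docstring examples above, plus the tag-free case, as doctest-style calls:

>>> parse_image_tag(\"prefecthq/prefect:latest\")\n('prefecthq/prefect', 'latest')\n>>> parse_image_tag(\"hostname.io:5050/folder/subfolder:latest\")\n('hostname.io:5050/folder/subfolder', 'latest')\n>>> parse_image_tag(\"simplename\")\n('simplename', None)\n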
    ","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.push_image","title":"push_image","text":"

    Pushes a local image to a Docker registry, returning the registry-qualified tag for that image

    This assumes that the environment's Docker daemon is already authenticated to the given registry, and currently makes no attempt to authenticate.

    Parameters:

    Name Type Description Default image_id str

    a Docker image ID

    required registry_url str

    the URL of a Docker registry

    required name str

    the name of this image

    required tag str

    the tag to give this image (defaults to a short representation of the image's ID)

    None stream_progress_to Optional[TextIO]

    an optional stream (like sys.stdout, or an io.TextIO) that will collect the build output as it is reported by Docker

    None

    Returns:

    Type Description str

    A registry-qualified tag, like my-registry.example.com/my-image:abcdefg

    Source code in prefect/utilities/dockerutils.py
    @silence_docker_warnings()\ndef push_image(\n    image_id: str,\n    registry_url: str,\n    name: str,\n    tag: Optional[str] = None,\n    stream_progress_to: Optional[TextIO] = None,\n) -> str:\n    \"\"\"Pushes a local image to a Docker registry, returning the registry-qualified tag\n    for that image\n\n    This assumes that the environment's Docker daemon is already authenticated to the\n    given registry, and currently makes no attempt to authenticate.\n\n    Args:\n        image_id (str): a Docker image ID\n        registry_url (str): the URL of a Docker registry\n        name (str): the name of this image\n        tag (str): the tag to give this image (defaults to a short representation of\n            the image's ID)\n        stream_progress_to: an optional stream (like sys.stdout, or an io.TextIO) that\n            will collect the build output as it is reported by Docker\n\n    Returns:\n        A registry-qualified tag, like my-registry.example.com/my-image:abcdefg\n    \"\"\"\n\n    if not tag:\n        tag = slugify(pendulum.now(\"utc\").isoformat())\n\n    _, registry, _, _, _ = urlsplit(registry_url)\n    repository = f\"{registry}/{name}\"\n\n    with docker_client() as client:\n        image: \"docker.Image\" = client.images.get(image_id)\n        image.tag(repository, tag=tag)\n        events = client.api.push(repository, tag=tag, stream=True, decode=True)\n        try:\n            for event in events:\n                if \"status\" in event:\n                    if not stream_progress_to:\n                        continue\n                    stream_progress_to.write(event[\"status\"])\n                    if \"progress\" in event:\n                        stream_progress_to.write(\" \" + event[\"progress\"])\n                    stream_progress_to.write(\"\\n\")\n                    stream_progress_to.flush()\n                elif \"error\" in event:\n                    raise PushError(event[\"error\"])\n        finally:\n            client.api.remove_image(f\"{repository}:{tag}\", noprune=True)\n\n    return f\"{repository}:{tag}\"\n
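A minimal usage sketch; the image ID and registry URL are hypothetical placeholders, and the Docker daemon is assumed to already be authenticated to the registry:

>>> import sys\n>>> tag = push_image(\n...     image_id=\"sha256:0123abcd\",\n...     registry_url=\"https://registry.example.com\",\n...     name=\"my-flow\",\n...     stream_progress_to=sys.stdout,\n... )  # returns a registry-qualified tag such as 'registry.example.com/my-flow:<timestamp-slug>'\n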
    ","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.split_repository_path","title":"split_repository_path","text":"

    Splits a Docker repository path into its namespace and repository components.

    Parameters:

    Name Type Description Default repository_path str

    The Docker repository path to split.

    required

    Returns:

    Type Description Tuple[Optional[str], str]

Tuple[Optional[str], str]: A tuple containing the namespace and repository components.

• namespace (Optional[str]): The Docker namespace, combining the registry and organization. None if not present.
• repository (Optional[str]): The repository name.

    Source code in prefect/utilities/dockerutils.py
    def split_repository_path(repository_path: str) -> Tuple[Optional[str], str]:\n    \"\"\"\n    Splits a Docker repository path into its namespace and repository components.\n\n    Args:\n        repository_path: The Docker repository path to split.\n\n    Returns:\n        Tuple[Optional[str], str]: A tuple containing the namespace and repository components.\n            - namespace (Optional[str]): The Docker namespace, combining the registry and organization. None if not present.\n            - repository (Optionals[str]): The repository name.\n    \"\"\"\n    parts = repository_path.split(\"/\", 2)\n\n    # Check if the path includes a registry and organization or just organization/repository\n    if len(parts) == 3 or (len(parts) == 2 and (\".\" in parts[0] or \":\" in parts[0])):\n        # Namespace includes registry and organization\n        namespace = \"/\".join(parts[:-1])\n        repository = parts[-1]\n    elif len(parts) == 2:\n        # Only organization/repository provided, so namespace is just the first part\n        namespace = parts[0]\n        repository = parts[1]\n    else:\n        # No namespace provided\n        namespace = None\n        repository = parts[0]\n\n    return namespace, repository\n
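Illustrative calls for each branch (the registry hostname and names are placeholders):

>>> split_repository_path(\"registry.example.com/my-org/my-repo\")\n('registry.example.com/my-org', 'my-repo')\n>>> split_repository_path(\"my-org/my-repo\")\n('my-org', 'my-repo')\n>>> split_repository_path(\"my-repo\")\n(None, 'my-repo')\n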
    ","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/dockerutils/#prefect.utilities.dockerutils.to_run_command","title":"to_run_command","text":"

    Convert a process-style list of command arguments to a single Dockerfile RUN instruction.

    Source code in prefect/utilities/dockerutils.py
    def to_run_command(command: List[str]) -> str:\n    \"\"\"\n    Convert a process-style list of command arguments to a single Dockerfile RUN\n    instruction.\n    \"\"\"\n    if not command:\n        return \"\"\n\n    run_command = f\"RUN {command[0]}\"\n    if len(command) > 1:\n        run_command += \" \" + \" \".join([repr(arg) for arg in command[1:]])\n\n    # TODO: Consider performing text-wrapping to improve readability of the generated\n    #       Dockerfile\n    # return textwrap.wrap(\n    #     run_command,\n    #     subsequent_indent=\" \" * 4,\n    #     break_on_hyphens=False,\n    #     break_long_words=False\n    # )\n\n    return run_command\n
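For example (every argument after the first is rendered with repr, so it is single-quoted in the generated instruction):

>>> to_run_command([\"pip\", \"install\", \"-r\", \"requirements.txt\"])\n\"RUN pip 'install' '-r' 'requirements.txt'\"\n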
    ","tags":["Python API","Docker"]},{"location":"api-ref/prefect/utilities/filesystem/","title":"filesystem","text":"","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem","title":"prefect.utilities.filesystem","text":"

    Utilities for working with file systems

    ","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.create_default_ignore_file","title":"create_default_ignore_file","text":"

Creates a default ignore file in the provided path if one does not already exist; returns a boolean specifying whether a file was created.

    Source code in prefect/utilities/filesystem.py
    def create_default_ignore_file(path: str) -> bool:\n    \"\"\"\n    Creates default ignore file in the provided path if one does not already exist; returns boolean specifying\n    whether a file was created.\n    \"\"\"\n    path = pathlib.Path(path)\n    ignore_file = path / \".prefectignore\"\n    if ignore_file.exists():\n        return False\n    default_file = pathlib.Path(prefect.__module_path__) / \".prefectignore\"\n    with ignore_file.open(mode=\"w\") as f:\n        f.write(default_file.read_text())\n    return True\n
    ","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.filename","title":"filename","text":"

    Extract the file name from a path with remote file system support

    Source code in prefect/utilities/filesystem.py
    def filename(path: str) -> str:\n    \"\"\"Extract the file name from a path with remote file system support\"\"\"\n    try:\n        of: OpenFile = fsspec.open(path)\n        sep = of.fs.sep\n    except (ImportError, AttributeError):\n        sep = \"\\\\\" if \"\\\\\" in path else \"/\"\n    return path.split(sep)[-1]\n
    ","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.filter_files","title":"filter_files","text":"

This function accepts a root directory path and a list of file patterns to ignore, and returns the set of files that excludes those that should be ignored.

    The specification matches that of .gitignore files.

    Source code in prefect/utilities/filesystem.py
    def filter_files(\n    root: str = \".\", ignore_patterns: list = None, include_dirs: bool = True\n) -> set:\n    \"\"\"\n    This function accepts a root directory path and a list of file patterns to ignore, and returns\n    a list of files that excludes those that should be ignored.\n\n    The specification matches that of [.gitignore files](https://git-scm.com/docs/gitignore).\n    \"\"\"\n    if ignore_patterns is None:\n        ignore_patterns = []\n    spec = pathspec.PathSpec.from_lines(\"gitwildmatch\", ignore_patterns)\n    ignored_files = {p.path for p in spec.match_tree_entries(root)}\n    if include_dirs:\n        all_files = {p.path for p in pathspec.util.iter_tree_entries(root)}\n    else:\n        all_files = set(pathspec.util.iter_tree_files(root))\n    included_files = all_files - ignored_files\n    return included_files\n
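A minimal sketch; the ignore patterns are illustrative and use .gitignore syntax:

>>> included = filter_files(\".\", ignore_patterns=[\"__pycache__/\", \"*.pyc\"])  # set of paths not matched by the patterns\n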
    ","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.get_open_file_limit","title":"get_open_file_limit","text":"

    Get the maximum number of open files allowed for the current process

    Source code in prefect/utilities/filesystem.py
    def get_open_file_limit() -> int:\n    \"\"\"Get the maximum number of open files allowed for the current process\"\"\"\n\n    try:\n        if os.name == \"nt\":\n            import ctypes\n\n            return ctypes.cdll.ucrtbase._getmaxstdio()\n        else:\n            import resource\n\n            soft_limit, _ = resource.getrlimit(resource.RLIMIT_NOFILE)\n            return soft_limit\n    except Exception:\n        # Catch all exceptions, as ctypes can raise several errors\n        # depending on what went wrong. Return a safe default if we\n        # can't get the limit from the OS.\n        return 200\n
    ","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.is_local_path","title":"is_local_path","text":"

    Check if the given path points to a local or remote file system

    Source code in prefect/utilities/filesystem.py
    def is_local_path(path: Union[str, pathlib.Path, OpenFile]):\n    \"\"\"Check if the given path points to a local or remote file system\"\"\"\n    if isinstance(path, str):\n        try:\n            of = fsspec.open(path)\n        except ImportError:\n            # The path is a remote file system that uses a lib that is not installed\n            return False\n    elif isinstance(path, pathlib.Path):\n        return True\n    elif isinstance(path, OpenFile):\n        of = path\n    else:\n        raise TypeError(f\"Invalid path of type {type(path).__name__!r}\")\n\n    return type(of.fs) == LocalFileSystem\n
    ","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.relative_path_to_current_platform","title":"relative_path_to_current_platform","text":"

    Converts a relative path generated on any platform to a relative path for the current platform.

    Source code in prefect/utilities/filesystem.py
    def relative_path_to_current_platform(path_str: str) -> Path:\n    \"\"\"\n    Converts a relative path generated on any platform to a relative path for the\n    current platform.\n    \"\"\"\n\n    return Path(PureWindowsPath(path_str).as_posix())\n
    ","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.tmpchdir","title":"tmpchdir","text":"

    Change current-working directories for the duration of the context

    Source code in prefect/utilities/filesystem.py
    @contextmanager\ndef tmpchdir(path: str):\n    \"\"\"\n    Change current-working directories for the duration of the context\n    \"\"\"\n    path = os.path.abspath(path)\n    if os.path.isfile(path) or (not os.path.exists(path) and not path.endswith(\"/\")):\n        path = os.path.dirname(path)\n\n    owd = os.getcwd()\n\n    with chdir_lock:\n        try:\n            os.chdir(path)\n            yield path\n        finally:\n            os.chdir(owd)\n
    ","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.to_display_path","title":"to_display_path","text":"

    Convert a path to a displayable path. The absolute path or relative path to the current (or given) directory will be returned, whichever is shorter.

    Source code in prefect/utilities/filesystem.py
    def to_display_path(\n    path: Union[pathlib.Path, str], relative_to: Union[pathlib.Path, str] = None\n) -> str:\n    \"\"\"\n    Convert a path to a displayable path. The absolute path or relative path to the\n    current (or given) directory will be returned, whichever is shorter.\n    \"\"\"\n    path, relative_to = (\n        pathlib.Path(path).resolve(),\n        pathlib.Path(relative_to or \".\").resolve(),\n    )\n    relative_path = str(path.relative_to(relative_to))\n    absolute_path = str(path)\n    return relative_path if len(relative_path) < len(absolute_path) else absolute_path\n
    ","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/hashing/","title":"hashing","text":"","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing","title":"prefect.utilities.hashing","text":"","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing.file_hash","title":"file_hash","text":"

    Given a path to a file, produces a stable hash of the file contents.

    Parameters:

    Name Type Description Default path str

    the path to a file

    required hash_algo

    Hash algorithm from hashlib to use.

    _md5

    Returns:

    Name Type Description str str

    a hash of the file contents

    Source code in prefect/utilities/hashing.py
    def file_hash(path: str, hash_algo=_md5) -> str:\n    \"\"\"Given a path to a file, produces a stable hash of the file contents.\n\n    Args:\n        path (str): the path to a file\n        hash_algo: Hash algorithm from hashlib to use.\n\n    Returns:\n        str: a hash of the file contents\n    \"\"\"\n    contents = Path(path).read_bytes()\n    return stable_hash(contents, hash_algo=hash_algo)\n
    ","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing.hash_objects","title":"hash_objects","text":"

    Attempt to hash objects by dumping to JSON or serializing with cloudpickle. On failure of both, None will be returned

    Source code in prefect/utilities/hashing.py
    def hash_objects(*args, hash_algo=_md5, **kwargs) -> Optional[str]:\n    \"\"\"\n    Attempt to hash objects by dumping to JSON or serializing with cloudpickle.\n    On failure of both, `None` will be returned\n    \"\"\"\n    try:\n        serializer = JSONSerializer(dumps_kwargs={\"sort_keys\": True})\n        return stable_hash(serializer.dumps((args, kwargs)), hash_algo=hash_algo)\n    except Exception:\n        pass\n\n    try:\n        return stable_hash(cloudpickle.dumps((args, kwargs)), hash_algo=hash_algo)\n    except Exception:\n        pass\n\n    return None\n
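A minimal sketch; JSON-serializable arguments produce a stable key, while arguments that neither JSON nor cloudpickle can handle yield None:

>>> key = hash_objects({\"name\": \"demo\"}, retries=3)\n>>> key is not None\nTrue\n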
    ","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing.stable_hash","title":"stable_hash","text":"

Given some arguments, produces a stable hash of their contents.

    Supports bytes and strings. Strings will be UTF-8 encoded.

    Parameters:

    Name Type Description Default *args Union[str, bytes]

    Items to include in the hash.

    () hash_algo

    Hash algorithm from hashlib to use.

    _md5

    Returns:

    Type Description str

    A hex hash.

    Source code in prefect/utilities/hashing.py
    def stable_hash(*args: Union[str, bytes], hash_algo=_md5) -> str:\n    \"\"\"Given some arguments, produces a stable 64-bit hash of their contents.\n\n    Supports bytes and strings. Strings will be UTF-8 encoded.\n\n    Args:\n        *args: Items to include in the hash.\n        hash_algo: Hash algorithm from hashlib to use.\n\n    Returns:\n        A hex hash.\n    \"\"\"\n    h = hash_algo()\n    for a in args:\n        if isinstance(a, str):\n            a = a.encode()\n        h.update(a)\n    return h.hexdigest()\n
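For example, mixing strings and bytes (the digest shown assumes the default md5 algorithm):

>>> stable_hash(\"foo\", b\"bar\")\n'3858f62230ac3c915f300c664312c63f'\n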
    ","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/importtools/","title":"importtools","text":"","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools","title":"prefect.utilities.importtools","text":"","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.AliasedModuleDefinition","title":"AliasedModuleDefinition","text":"

    Bases: NamedTuple

    A definition for the AliasedModuleFinder.

    Parameters:

    Name Type Description Default alias

    The import name to create

    required real

    The import name of the module to reference for the alias

    required callback

    A function to call when the alias module is loaded

    required Source code in prefect/utilities/importtools.py
    class AliasedModuleDefinition(NamedTuple):\n    \"\"\"\n    A definition for the `AliasedModuleFinder`.\n\n    Args:\n        alias: The import name to create\n        real: The import name of the module to reference for the alias\n        callback: A function to call when the alias module is loaded\n    \"\"\"\n\n    alias: str\n    real: str\n    callback: Optional[Callable[[str], None]]\n
    ","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.AliasedModuleFinder","title":"AliasedModuleFinder","text":"

    Bases: MetaPathFinder

    Source code in prefect/utilities/importtools.py
    class AliasedModuleFinder(MetaPathFinder):\n    def __init__(self, aliases: Iterable[AliasedModuleDefinition]):\n        \"\"\"\n        See `AliasedModuleDefinition` for alias specification.\n\n        Aliases apply to all modules nested within an alias.\n        \"\"\"\n        self.aliases = aliases\n\n    def find_spec(\n        self,\n        fullname: str,\n        path=None,\n        target=None,\n    ) -> Optional[ModuleSpec]:\n        \"\"\"\n        The fullname is the imported path, e.g. \"foo.bar\". If there is an alias \"phi\"\n        for \"foo\" then on import of \"phi.bar\" we will find the spec for \"foo.bar\" and\n        create a new spec for \"phi.bar\" that points to \"foo.bar\".\n        \"\"\"\n        for alias, real, callback in self.aliases:\n            if fullname.startswith(alias):\n                # Retrieve the spec of the real module\n                real_spec = importlib.util.find_spec(fullname.replace(alias, real, 1))\n                # Create a new spec for the alias\n                return ModuleSpec(\n                    fullname,\n                    AliasedModuleLoader(fullname, callback, real_spec),\n                    origin=real_spec.origin,\n                    is_package=real_spec.submodule_search_locations is not None,\n                )\n
    ","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.AliasedModuleFinder.find_spec","title":"find_spec","text":"

    The fullname is the imported path, e.g. \"foo.bar\". If there is an alias \"phi\" for \"foo\" then on import of \"phi.bar\" we will find the spec for \"foo.bar\" and create a new spec for \"phi.bar\" that points to \"foo.bar\".

    Source code in prefect/utilities/importtools.py
    def find_spec(\n    self,\n    fullname: str,\n    path=None,\n    target=None,\n) -> Optional[ModuleSpec]:\n    \"\"\"\n    The fullname is the imported path, e.g. \"foo.bar\". If there is an alias \"phi\"\n    for \"foo\" then on import of \"phi.bar\" we will find the spec for \"foo.bar\" and\n    create a new spec for \"phi.bar\" that points to \"foo.bar\".\n    \"\"\"\n    for alias, real, callback in self.aliases:\n        if fullname.startswith(alias):\n            # Retrieve the spec of the real module\n            real_spec = importlib.util.find_spec(fullname.replace(alias, real, 1))\n            # Create a new spec for the alias\n            return ModuleSpec(\n                fullname,\n                AliasedModuleLoader(fullname, callback, real_spec),\n                origin=real_spec.origin,\n                is_package=real_spec.submodule_search_locations is not None,\n            )\n
    ","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.DelayedImportErrorModule","title":"DelayedImportErrorModule","text":"

    Bases: ModuleType

    A fake module returned by lazy_import when the module cannot be found. When any of the module's attributes are accessed, we will throw a ModuleNotFoundError.

    Adapted from lazy_loader

    Source code in prefect/utilities/importtools.py
    class DelayedImportErrorModule(ModuleType):\n    \"\"\"\n    A fake module returned by `lazy_import` when the module cannot be found. When any\n    of the module's attributes are accessed, we will throw a `ModuleNotFoundError`.\n\n    Adapted from [lazy_loader][1]\n\n    [1]: https://github.com/scientific-python/lazy_loader\n    \"\"\"\n\n    def __init__(self, frame_data, help_message, *args, **kwargs):\n        self.__frame_data = frame_data\n        self.__help_message = (\n            help_message or \"Import errors for this module are only reported when used.\"\n        )\n        super().__init__(*args, **kwargs)\n\n    def __getattr__(self, attr):\n        if attr in (\"__class__\", \"__file__\", \"__frame_data\", \"__help_message\"):\n            super().__getattr__(attr)\n        else:\n            fd = self.__frame_data\n            raise ModuleNotFoundError(\n                f\"No module named '{fd['spec']}'\\n\\nThis module was originally imported\"\n                f\" at:\\n  File \\\"{fd['filename']}\\\", line {fd['lineno']}, in\"\n                f\" {fd['function']}\\n\\n    {''.join(fd['code_context']).strip()}\\n\"\n                + self.__help_message\n            )\n
    ","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.from_qualified_name","title":"from_qualified_name","text":"

    Import an object given a fully-qualified name.

    Parameters:

    Name Type Description Default name str

    The fully-qualified name of the object to import.

    required

    Returns:

    Type Description Any

    the imported object

    Examples:

    >>> obj = from_qualified_name(\"random.randint\")\n>>> import random\n>>> obj == random.randint\nTrue\n
    Source code in prefect/utilities/importtools.py
    def from_qualified_name(name: str) -> Any:\n    \"\"\"\n    Import an object given a fully-qualified name.\n\n    Args:\n        name: The fully-qualified name of the object to import.\n\n    Returns:\n        the imported object\n\n    Examples:\n        >>> obj = from_qualified_name(\"random.randint\")\n        >>> import random\n        >>> obj == random.randint\n        True\n    \"\"\"\n    # Try importing it first so we support \"module\" or \"module.sub_module\"\n    try:\n        module = importlib.import_module(name)\n        return module\n    except ImportError:\n        # If no subitem was included raise the import error\n        if \".\" not in name:\n            raise\n\n    # Otherwise, we'll try to load it as an attribute of a module\n    mod_name, attr_name = name.rsplit(\".\", 1)\n    module = importlib.import_module(mod_name)\n    return getattr(module, attr_name)\n
    ","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.import_object","title":"import_object","text":"

    Load an object from an import path.

Import paths can be formatted as one of:

• module.object
• module:object
• /path/to/script.py:object

    This function is not thread safe as it modifies the 'sys' module during execution.

    Source code in prefect/utilities/importtools.py
    def import_object(import_path: str):\n    \"\"\"\n    Load an object from an import path.\n\n    Import paths can be formatted as one of:\n    - module.object\n    - module:object\n    - /path/to/script.py:object\n\n    This function is not thread safe as it modifies the 'sys' module during execution.\n    \"\"\"\n    if \".py:\" in import_path:\n        script_path, object_name = import_path.rsplit(\":\", 1)\n        module = load_script_as_module(script_path)\n    else:\n        if \":\" in import_path:\n            module_name, object_name = import_path.rsplit(\":\", 1)\n        elif \".\" in import_path:\n            module_name, object_name = import_path.rsplit(\".\", 1)\n        else:\n            raise ValueError(\n                f\"Invalid format for object import. Received {import_path!r}.\"\n            )\n\n        module = load_module(module_name)\n\n    return getattr(module, object_name)\n
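Illustrative calls using standard-library targets for the module:object and module.object forms:

>>> import_object(\"math:pi\")\n3.141592653589793\n>>> import_object(\"collections.OrderedDict\")\n<class 'collections.OrderedDict'>\n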
    ","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.lazy_import","title":"lazy_import","text":"

    Create a lazily-imported module to use in place of the module of the given name. Use this to retain module-level imports for libraries that we don't want to actually import until they are needed.

    Adapted from the Python documentation and lazy_loader

    Source code in prefect/utilities/importtools.py
    def lazy_import(\n    name: str, error_on_import: bool = False, help_message: str = \"\"\n) -> ModuleType:\n    \"\"\"\n    Create a lazily-imported module to use in place of the module of the given name.\n    Use this to retain module-level imports for libraries that we don't want to\n    actually import until they are needed.\n\n    Adapted from the [Python documentation][1] and [lazy_loader][2]\n\n    [1]: https://docs.python.org/3/library/importlib.html#implementing-lazy-imports\n    [2]: https://github.com/scientific-python/lazy_loader\n    \"\"\"\n\n    try:\n        return sys.modules[name]\n    except KeyError:\n        pass\n\n    spec = importlib.util.find_spec(name)\n    if spec is None:\n        if error_on_import:\n            raise ModuleNotFoundError(f\"No module named '{name}'.\\n{help_message}\")\n        else:\n            try:\n                parent = inspect.stack()[1]\n                frame_data = {\n                    \"spec\": name,\n                    \"filename\": parent.filename,\n                    \"lineno\": parent.lineno,\n                    \"function\": parent.function,\n                    \"code_context\": parent.code_context,\n                }\n                return DelayedImportErrorModule(\n                    frame_data, help_message, \"DelayedImportErrorModule\"\n                )\n            finally:\n                del parent\n\n    module = importlib.util.module_from_spec(spec)\n    sys.modules[name] = module\n\n    loader = importlib.util.LazyLoader(spec.loader)\n    loader.exec_module(module)\n\n    return module\n
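A minimal sketch; the module name and help message are illustrative, and no import cost is paid until an attribute is first accessed:

>>> np = lazy_import(\"numpy\", help_message=\"Install numpy to use arrays.\")  # cheap; the real import is deferred\n>>> arr = np.array([1, 2, 3])  # numpy is imported here, or a ModuleNotFoundError with the help message is raised\n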
    ","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.load_module","title":"load_module","text":"

    Import a module with support for relative imports within the module.

    Source code in prefect/utilities/importtools.py
    def load_module(module_name: str) -> ModuleType:\n    \"\"\"\n    Import a module with support for relative imports within the module.\n    \"\"\"\n    # Ensure relative imports within the imported module work if the user is in the\n    # correct working directory\n    working_directory = os.getcwd()\n    sys.path.insert(0, working_directory)\n\n    try:\n        return importlib.import_module(module_name)\n    finally:\n        sys.path.remove(working_directory)\n
    ","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.load_script_as_module","title":"load_script_as_module","text":"

    Execute a script at the given path.

    Sets the module name to __prefect_loader__.

    If an exception occurs during execution of the script, a prefect.exceptions.ScriptError is created to wrap the exception and raised.

For the duration of this function call, sys is modified to support loading. These changes are reverted after completion, but this function is not thread safe and use of it in threaded contexts may result in undesirable behavior.

    See https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly

    Source code in prefect/utilities/importtools.py
    def load_script_as_module(path: str) -> ModuleType:\n    \"\"\"\n    Execute a script at the given path.\n\n    Sets the module name to `__prefect_loader__`.\n\n    If an exception occurs during execution of the script, a\n    `prefect.exceptions.ScriptError` is created to wrap the exception and raised.\n\n    During the duration of this function call, `sys` is modified to support loading.\n    These changes are reverted after completion, but this function is not thread safe\n    and use of it in threaded contexts may result in undesirable behavior.\n\n    See https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly\n    \"\"\"\n    # We will add the parent directory to search locations to support relative imports\n    # during execution of the script\n    if not path.endswith(\".py\"):\n        raise ValueError(f\"The provided path does not point to a python file: {path!r}\")\n\n    parent_path = str(Path(path).resolve().parent)\n    working_directory = os.getcwd()\n\n    spec = importlib.util.spec_from_file_location(\n        \"__prefect_loader__\",\n        path,\n        # Support explicit relative imports i.e. `from .foo import bar`\n        submodule_search_locations=[parent_path, working_directory],\n    )\n    module = importlib.util.module_from_spec(spec)\n    sys.modules[\"__prefect_loader__\"] = module\n\n    # Support implicit relative imports i.e. `from foo import bar`\n    sys.path.insert(0, working_directory)\n    sys.path.insert(0, parent_path)\n    try:\n        spec.loader.exec_module(module)\n    except Exception as exc:\n        raise ScriptError(user_exc=exc, path=path) from exc\n    finally:\n        sys.modules.pop(\"__prefect_loader__\")\n        sys.path.remove(parent_path)\n        sys.path.remove(working_directory)\n\n    return module\n
    ","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.objects_from_script","title":"objects_from_script","text":"

    Run a python script and return all the global variables

    Supports remote paths by copying to a local temporary file.

    WARNING: The Python documentation does not recommend using runpy for this pattern.

    Furthermore, any functions and classes defined by the executed code are not guaranteed to work correctly after a runpy function has returned. If that limitation is not acceptable for a given use case, importlib is likely to be a more suitable choice than this module.

The function load_script_as_module uses importlib instead and should be preferred for loading objects from scripts.

    Parameters:

    Name Type Description Default path str

    The path to the script to run

    required text Union[str, bytes]

    Optionally, the text of the script. Skips loading the contents if given.

    None

    Returns:

    Type Description Dict[str, Any]

    A dictionary mapping variable name to value

    Raises:

    Type Description ScriptError

    if the script raises an exception during execution

    Source code in prefect/utilities/importtools.py
    def objects_from_script(path: str, text: Union[str, bytes] = None) -> Dict[str, Any]:\n    \"\"\"\n    Run a python script and return all the global variables\n\n    Supports remote paths by copying to a local temporary file.\n\n    WARNING: The Python documentation does not recommend using runpy for this pattern.\n\n    > Furthermore, any functions and classes defined by the executed code are not\n    > guaranteed to work correctly after a runpy function has returned. If that\n    > limitation is not acceptable for a given use case, importlib is likely to be a\n    > more suitable choice than this module.\n\n    The function `load_script_as_module` uses importlib instead and should be used\n    instead for loading objects from scripts.\n\n    Args:\n        path: The path to the script to run\n        text: Optionally, the text of the script. Skips loading the contents if given.\n\n    Returns:\n        A dictionary mapping variable name to value\n\n    Raises:\n        ScriptError: if the script raises an exception during execution\n    \"\"\"\n\n    def run_script(run_path: str):\n        # Cast to an absolute path before changing directories to ensure relative paths\n        # are not broken\n        abs_run_path = os.path.abspath(run_path)\n        with tmpchdir(run_path):\n            try:\n                return runpy.run_path(abs_run_path)\n            except Exception as exc:\n                raise ScriptError(user_exc=exc, path=path) from exc\n\n    if text:\n        with NamedTemporaryFile(\n            mode=\"wt\" if isinstance(text, str) else \"wb\",\n            prefix=f\"run-{filename(path)}\",\n            suffix=\".py\",\n        ) as tmpfile:\n            tmpfile.write(text)\n            tmpfile.flush()\n            return run_script(tmpfile.name)\n\n    else:\n        if not is_local_path(path):\n            # Remote paths need to be local to run\n            with fsspec.open(path) as f:\n                contents = f.read()\n            return objects_from_script(path, contents)\n        else:\n            return run_script(path)\n
    ","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/importtools/#prefect.utilities.importtools.to_qualified_name","title":"to_qualified_name","text":"

    Given an object, returns its fully-qualified name: a string that represents its Python import path.

    Parameters:

    Name Type Description Default obj Any

    an importable Python object

    required

    Returns:

    Name Type Description str str

    the qualified name

    Source code in prefect/utilities/importtools.py
    def to_qualified_name(obj: Any) -> str:\n    \"\"\"\n    Given an object, returns its fully-qualified name: a string that represents its\n    Python import path.\n\n    Args:\n        obj (Any): an importable Python object\n\n    Returns:\n        str: the qualified name\n    \"\"\"\n    if sys.version_info < (3, 10):\n        # These attributes are only available in Python 3.10+\n        if isinstance(obj, (classmethod, staticmethod)):\n            obj = obj.__func__\n    return obj.__module__ + \".\" + obj.__qualname__\n
    ","tags":["Python API","imports"]},{"location":"api-ref/prefect/utilities/math/","title":"math","text":"","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/math/#prefect.utilities.math","title":"prefect.utilities.math","text":"","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/math/#prefect.utilities.math.bounded_poisson_interval","title":"bounded_poisson_interval","text":"

    Bounds Poisson \"inter-arrival times\" to a range.

Unlike clamped_poisson_interval, this does not take a target average interval. Instead, the lower and upper bounds are predetermined and the average is calculated as their midpoint. This allows Poisson intervals to be used in cases where a lower bound must be enforced.

    Source code in prefect/utilities/math.py
    def bounded_poisson_interval(lower_bound, upper_bound):\n    \"\"\"\n    Bounds Poisson \"inter-arrival times\" to a range.\n\n    Unlike `clamped_poisson_interval` this does not take a target average interval.\n    Instead, the interval is predetermined and the average is calculated as their\n    midpoint. This allows Poisson intervals to be used in cases where a lower bound\n    must be enforced.\n    \"\"\"\n    average = (float(lower_bound) + float(upper_bound)) / 2.0\n    upper_rv = exponential_cdf(upper_bound, average)\n    lower_rv = exponential_cdf(lower_bound, average)\n    return poisson_interval(average, lower_rv, upper_rv)\n
    ","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/math/#prefect.utilities.math.clamped_poisson_interval","title":"clamped_poisson_interval","text":"

    Bounds Poisson \"inter-arrival times\" to a range defined by the clamping factor.

    The upper bound for this random variate is: average_interval * (1 + clamping_factor). A lower bound is picked so that the average interval remains approximately fixed.

    Source code in prefect/utilities/math.py
    def clamped_poisson_interval(average_interval, clamping_factor=0.3):\n    \"\"\"\n    Bounds Poisson \"inter-arrival times\" to a range defined by the clamping factor.\n\n    The upper bound for this random variate is: average_interval * (1 + clamping_factor).\n    A lower bound is picked so that the average interval remains approximately fixed.\n    \"\"\"\n    if clamping_factor <= 0:\n        raise ValueError(\"`clamping_factor` must be >= 0.\")\n\n    upper_clamp_multiple = 1 + clamping_factor\n    upper_bound = average_interval * upper_clamp_multiple\n    lower_bound = max(0, average_interval * lower_clamp_multiple(upper_clamp_multiple))\n\n    upper_rv = exponential_cdf(upper_bound, average_interval)\n    lower_rv = exponential_cdf(lower_bound, average_interval)\n    return poisson_interval(average_interval, lower_rv, upper_rv)\n
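A minimal sketch of using the clamped interval as a jittered polling delay (the 10-second average is illustrative):

>>> import time\n>>> delay = clamped_poisson_interval(10, clamping_factor=0.3)  # averages roughly 10s, never exceeds ~13s\n>>> time.sleep(delay)\n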
    ","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/math/#prefect.utilities.math.lower_clamp_multiple","title":"lower_clamp_multiple","text":"

    Computes a lower clamp multiple that can be used to bound a random variate drawn from an exponential distribution.

    Given an upper clamp multiple k (and corresponding upper bound k * average_interval), this function computes a lower clamp multiple c (corresponding to a lower bound c * average_interval) where the probability mass between the lower bound and the median is equal to the probability mass between the median and the upper bound.

    Source code in prefect/utilities/math.py
    def lower_clamp_multiple(k):\n    \"\"\"\n    Computes a lower clamp multiple that can be used to bound a random variate drawn\n    from an exponential distribution.\n\n    Given an upper clamp multiple `k` (and corresponding upper bound k * average_interval),\n    this function computes a lower clamp multiple `c` (corresponding to a lower bound\n    c * average_interval) where the probability mass between the lower bound and the\n    median is equal to the probability mass between the median and the upper bound.\n    \"\"\"\n    if k >= 50:\n        # return 0 for large values of `k` to prevent numerical overflow\n        return 0.0\n\n    return math.log(max(2**k / (2**k - 1), 1e-10), 2)\n
    ","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/math/#prefect.utilities.math.poisson_interval","title":"poisson_interval","text":"

    Generates an \"inter-arrival time\" for a Poisson process.

    Draws a random variable from an exponential distribution using the inverse-CDF method. Can optionally be passed a lower and upper bound between (0, 1] to clamp the potential output values.

    Source code in prefect/utilities/math.py
    def poisson_interval(average_interval, lower=0, upper=1):\n    \"\"\"\n    Generates an \"inter-arrival time\" for a Poisson process.\n\n    Draws a random variable from an exponential distribution using the inverse-CDF\n    method. Can optionally be passed a lower and upper bound between (0, 1] to clamp\n    the potential output values.\n    \"\"\"\n\n    # note that we ensure the argument to the logarithm is stabilized to prevent\n    # calling log(0), which results in a DomainError\n    return -math.log(max(1 - random.uniform(lower, upper), 1e-10)) * average_interval\n
    ","tags":["Python API","math"]},{"location":"api-ref/prefect/utilities/names/","title":"names","text":"","tags":["Python API","names"]},{"location":"api-ref/prefect/utilities/names/#prefect.utilities.names","title":"prefect.utilities.names","text":"","tags":["Python API","names"]},{"location":"api-ref/prefect/utilities/names/#prefect.utilities.names.generate_slug","title":"generate_slug","text":"

    Generates a random slug.

    Parameters:

Name Type Description Default n_words int

    the number of words in the slug

    required Source code in prefect/utilities/names.py
    def generate_slug(n_words: int) -> str:\n    \"\"\"\n    Generates a random slug.\n\n    Args:\n        - n_words (int): the number of words in the slug\n    \"\"\"\n    words = coolname.generate(n_words)\n\n    # regenerate words if they include ignored words\n    while IGNORE_LIST.intersection(words):\n        words = coolname.generate(n_words)\n\n    return \"-\".join(words)\n
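For example (output is random; the slug in the comment is purely illustrative):

>>> generate_slug(3)  # e.g. 'hilarious-amber-mongoose'\n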
    ","tags":["Python API","names"]},{"location":"api-ref/prefect/utilities/names/#prefect.utilities.names.obfuscate","title":"obfuscate","text":"

    Obfuscates any data type's string representation. See obfuscate_string.

    Source code in prefect/utilities/names.py
    def obfuscate(s: Any, show_tail=False) -> str:\n    \"\"\"\n    Obfuscates any data type's string representation. See `obfuscate_string`.\n    \"\"\"\n    if s is None:\n        return OBFUSCATED_PREFIX + \"*\" * 4\n\n    return obfuscate_string(str(s), show_tail=show_tail)\n
    ","tags":["Python API","names"]},{"location":"api-ref/prefect/utilities/names/#prefect.utilities.names.obfuscate_string","title":"obfuscate_string","text":"

    Obfuscates a string by returning a new string of 8 characters. If the input string is longer than 10 characters and show_tail is True, then up to 4 of its final characters will become final characters of the obfuscated string; all other characters are \"*\".

    \"abc\" -> \"*\" \"abcdefgh\" -> \"*\" \"abcdefghijk\" -> \"*k\" \"abcdefghijklmnopqrs\" -> \"****pqrs\"

    Source code in prefect/utilities/names.py
    def obfuscate_string(s: str, show_tail=False) -> str:\n    \"\"\"\n    Obfuscates a string by returning a new string of 8 characters. If the input\n    string is longer than 10 characters and show_tail is True, then up to 4 of\n    its final characters will become final characters of the obfuscated string;\n    all other characters are \"*\".\n\n    \"abc\"      -> \"********\"\n    \"abcdefgh\" -> \"********\"\n    \"abcdefghijk\" -> \"*******k\"\n    \"abcdefghijklmnopqrs\" -> \"****pqrs\"\n    \"\"\"\n    result = OBFUSCATED_PREFIX + \"*\" * 4\n    # take up to 4 characters, but only after the 10th character\n    suffix = s[10:][-4:]\n    if suffix and show_tail:\n        result = f\"{result[:-len(suffix)]}{suffix}\"\n    return result\n
    ","tags":["Python API","names"]},{"location":"api-ref/prefect/utilities/processutils/","title":"processutils","text":""},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils","title":"prefect.utilities.processutils","text":""},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.forward_signal_handler","title":"forward_signal_handler","text":"

    Forward subsequent signum events (e.g. interrupts) to respective signums.

    Source code in prefect/utilities/processutils.py
    def forward_signal_handler(\n    pid: int, signum: int, *signums: int, process_name: str, print_fn: Callable\n):\n    \"\"\"Forward subsequent signum events (e.g. interrupts) to respective signums.\"\"\"\n    current_signal, future_signals = signums[0], signums[1:]\n\n    # avoid RecursionError when setting up a direct signal forward to the same signal for the main pid\n    avoid_infinite_recursion = signum == current_signal and pid == os.getpid()\n    if avoid_infinite_recursion:\n        # store the vanilla handler so it can be temporarily restored below\n        original_handler = signal.getsignal(current_signal)\n\n    def handler(*args):\n        print_fn(\n            f\"Received {getattr(signum, 'name', signum)}. \"\n            f\"Sending {getattr(current_signal, 'name', current_signal)} to\"\n            f\" {process_name} (PID {pid})...\"\n        )\n        if avoid_infinite_recursion:\n            signal.signal(current_signal, original_handler)\n        os.kill(pid, current_signal)\n        if future_signals:\n            forward_signal_handler(\n                pid,\n                signum,\n                *future_signals,\n                process_name=process_name,\n                print_fn=print_fn,\n            )\n\n    # register current and future signal handlers\n    _register_signal(signum, handler)\n
    "},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.open_process","title":"open_process async","text":"

Like anyio.open_process but with:

• Support for Windows command joining
• Termination of the process on exception during yield
• Forced cleanup of process resources during cancellation

    Source code in prefect/utilities/processutils.py
    @asynccontextmanager\nasync def open_process(command: List[str], **kwargs):\n    \"\"\"\n    Like `anyio.open_process` but with:\n    - Support for Windows command joining\n    - Termination of the process on exception during yield\n    - Forced cleanup of process resources during cancellation\n    \"\"\"\n    # Passing a string to open_process is equivalent to shell=True which is\n    # generally necessary for Unix-like commands on Windows but otherwise should\n    # be avoided\n    if not isinstance(command, list):\n        raise TypeError(\n            \"The command passed to open process must be a list. You passed the command\"\n            f\"'{command}', which is type '{type(command)}'.\"\n        )\n\n    if sys.platform == \"win32\":\n        command = \" \".join(command)\n        process = await _open_anyio_process(command, **kwargs)\n    else:\n        process = await anyio.open_process(command, **kwargs)\n\n    # if there's a creationflags kwarg and it contains CREATE_NEW_PROCESS_GROUP,\n    # use SetConsoleCtrlHandler to handle CTRL-C\n    win32_process_group = False\n    if (\n        sys.platform == \"win32\"\n        and \"creationflags\" in kwargs\n        and kwargs[\"creationflags\"] & subprocess.CREATE_NEW_PROCESS_GROUP\n    ):\n        win32_process_group = True\n        _windows_process_group_pids.add(process.pid)\n        # Add a handler for CTRL-C. Re-adding the handler is safe as Windows\n        # will not add a duplicate handler if _win32_ctrl_handler is\n        # already registered.\n        windll.kernel32.SetConsoleCtrlHandler(_win32_ctrl_handler, 1)\n\n    try:\n        async with process:\n            yield process\n    finally:\n        try:\n            process.terminate()\n            if win32_process_group:\n                _windows_process_group_pids.remove(process.pid)\n\n        except OSError:\n            # Occurs if the process is already terminated\n            pass\n\n        # Ensure the process resource is closed. If not shielded from cancellation,\n        # this resource can be left open and the subprocess output can appear after\n        # the parent process has exited.\n        with anyio.CancelScope(shield=True):\n            await process.aclose()\n
    "},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.run_process","title":"run_process async","text":"

    Like anyio.run_process but with:

    • Use of our open_process utility to ensure resources are cleaned up
    • Simple stream_output support to connect the subprocess to the parent stdout/err
    • Support for submission with TaskGroup.start marking as 'started' after the process has been created. When used, the PID is returned to the task status.
    Source code in prefect/utilities/processutils.py
    async def run_process(\n    command: List[str],\n    stream_output: Union[bool, Tuple[Optional[TextSink], Optional[TextSink]]] = False,\n    task_status: Optional[anyio.abc.TaskStatus] = None,\n    task_status_handler: Optional[Callable[[anyio.abc.Process], Any]] = None,\n    **kwargs,\n):\n    \"\"\"\n    Like `anyio.run_process` but with:\n\n    - Use of our `open_process` utility to ensure resources are cleaned up\n    - Simple `stream_output` support to connect the subprocess to the parent stdout/err\n    - Support for submission with `TaskGroup.start` marking as 'started' after the\n        process has been created. When used, the PID is returned to the task status.\n\n    \"\"\"\n    if stream_output is True:\n        stream_output = (sys.stdout, sys.stderr)\n\n    async with open_process(\n        command,\n        stdout=subprocess.PIPE if stream_output else subprocess.DEVNULL,\n        stderr=subprocess.PIPE if stream_output else subprocess.DEVNULL,\n        **kwargs,\n    ) as process:\n        if task_status is not None:\n            if not task_status_handler:\n\n                def task_status_handler(process):\n                    return process.pid\n\n            task_status.started(task_status_handler(process))\n\n        if stream_output:\n            await consume_process_output(\n                process, stdout_sink=stream_output[0], stderr_sink=stream_output[1]\n            )\n\n        await process.wait()\n\n    return process\n
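A minimal, self-contained sketch (the echo command is illustrative); run_process is a coroutine, so it is driven with anyio.run here:

import anyio\nfrom prefect.utilities.processutils import run_process\n\nasync def main():\n    # stream the subprocess output to this process's stdout/stderr\n    process = await run_process([\"echo\", \"hello\"], stream_output=True)\n    print(process.returncode)\n\nanyio.run(main)\n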
    "},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.setup_signal_handlers_agent","title":"setup_signal_handlers_agent","text":"

    Handle interrupts of the agent gracefully.

    Source code in prefect/utilities/processutils.py
    def setup_signal_handlers_agent(pid: int, process_name: str, print_fn: Callable):\n    \"\"\"Handle interrupts of the agent gracefully.\"\"\"\n    setup_handler = partial(\n        forward_signal_handler, pid, process_name=process_name, print_fn=print_fn\n    )\n    # when agent receives SIGINT, it stops dequeueing new FlowRuns, and runs until the subprocesses finish\n    # the signal is not forwarded to subprocesses, so they can continue to run and hopefully still complete\n    if sys.platform == \"win32\":\n        # on Windows, use CTRL_BREAK_EVENT as SIGTERM is useless:\n        # https://bugs.python.org/issue26350\n        setup_handler(signal.SIGINT, signal.CTRL_BREAK_EVENT)\n    else:\n        # forward first SIGINT directly, send SIGKILL on subsequent interrupt\n        setup_handler(signal.SIGINT, signal.SIGINT, signal.SIGKILL)\n        # first SIGTERM: send SIGINT, send SIGKILL on subsequent SIGTERM\n        setup_handler(signal.SIGTERM, signal.SIGINT, signal.SIGKILL)\n
    "},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.setup_signal_handlers_server","title":"setup_signal_handlers_server","text":"

    Handle interrupts of the server gracefully.

    Source code in prefect/utilities/processutils.py
    def setup_signal_handlers_server(pid: int, process_name: str, print_fn: Callable):\n    \"\"\"Handle interrupts of the server gracefully.\"\"\"\n    setup_handler = partial(\n        forward_signal_handler, pid, process_name=process_name, print_fn=print_fn\n    )\n    # when server receives a signal, it needs to be propagated to the uvicorn subprocess\n    if sys.platform == \"win32\":\n        # on Windows, use CTRL_BREAK_EVENT as SIGTERM is useless:\n        # https://bugs.python.org/issue26350\n        setup_handler(signal.SIGINT, signal.CTRL_BREAK_EVENT)\n    else:\n        # first interrupt: SIGTERM, second interrupt: SIGKILL\n        setup_handler(signal.SIGINT, signal.SIGTERM, signal.SIGKILL)\n        # forward first SIGTERM directly, send SIGKILL on subsequent SIGTERM\n        setup_handler(signal.SIGTERM, signal.SIGTERM, signal.SIGKILL)\n
    "},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.setup_signal_handlers_worker","title":"setup_signal_handlers_worker","text":"

    Handle interrupts of workers gracefully.

    Source code in prefect/utilities/processutils.py
    def setup_signal_handlers_worker(pid: int, process_name: str, print_fn: Callable):\n    \"\"\"Handle interrupts of workers gracefully.\"\"\"\n    setup_handler = partial(\n        forward_signal_handler, pid, process_name=process_name, print_fn=print_fn\n    )\n    # when agent receives SIGINT, it stops dequeueing new FlowRuns, and runs until the subprocesses finish\n    # the signal is not forwarded to subprocesses, so they can continue to run and hopefully still complete\n    if sys.platform == \"win32\":\n        # on Windows, use CTRL_BREAK_EVENT as SIGTERM is useless:\n        # https://bugs.python.org/issue26350\n        setup_handler(signal.SIGINT, signal.CTRL_BREAK_EVENT)\n    else:\n        # forward first SIGINT directly, send SIGKILL on subsequent interrupt\n        setup_handler(signal.SIGINT, signal.SIGINT, signal.SIGKILL)\n        # first SIGTERM: send SIGINT, send SIGKILL on subsequent SIGTERM\n        setup_handler(signal.SIGTERM, signal.SIGINT, signal.SIGKILL)\n
    "},{"location":"api-ref/prefect/utilities/pydantic/","title":"pydantic","text":"","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/pydantic/#prefect.utilities.pydantic","title":"prefect.utilities.pydantic","text":"","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/pydantic/#prefect.utilities.pydantic.PartialModel","title":"PartialModel","text":"

    Bases: Generic[M]

    A utility for creating a Pydantic model in several steps.

    Fields may be set at initialization, via attribute assignment, or at finalization when the concrete model is returned.

    Pydantic validation does not occur until finalization.

    Each field can only be set once and a ValueError will be raised on assignment if a field already has a value.

    Example

class MyModel(pydantic_v1.BaseModel):
    x: int
    y: str
    z: float

partial_model = PartialModel(MyModel, x=1)
partial_model.y = "two"
model = partial_model.finalize(z=3.0)

    Source code in prefect/utilities/pydantic.py
    class PartialModel(Generic[M]):\n    \"\"\"\n    A utility for creating a Pydantic model in several steps.\n\n    Fields may be set at initialization, via attribute assignment, or at finalization\n    when the concrete model is returned.\n\n    Pydantic validation does not occur until finalization.\n\n    Each field can only be set once and a `ValueError` will be raised on assignment if\n    a field already has a value.\n\n    Example:\n        >>> class MyModel(pydantic_v1.BaseModel):\n        >>>     x: int\n        >>>     y: str\n        >>>     z: float\n        >>>\n        >>> partial_model = PartialModel(MyModel, x=1)\n        >>> partial_model.y = \"two\"\n        >>> model = partial_model.finalize(z=3.0)\n    \"\"\"\n\n    def __init__(self, __model_cls: Type[M], **kwargs: Any) -> None:\n        self.fields = kwargs\n        # Set fields first to avoid issues if `fields` is also set on the `model_cls`\n        # in our custom `setattr` implementation.\n        self.model_cls = __model_cls\n\n        for name in kwargs.keys():\n            self.raise_if_not_in_model(name)\n\n    def finalize(self, **kwargs: Any) -> M:\n        for name in kwargs.keys():\n            self.raise_if_already_set(name)\n            self.raise_if_not_in_model(name)\n        return self.model_cls(**self.fields, **kwargs)\n\n    def raise_if_already_set(self, name):\n        if name in self.fields:\n            raise ValueError(f\"Field {name!r} has already been set.\")\n\n    def raise_if_not_in_model(self, name):\n        if name not in self.model_cls.__fields__:\n            raise ValueError(f\"Field {name!r} is not present in the model.\")\n\n    def __setattr__(self, __name: str, __value: Any) -> None:\n        if __name in {\"fields\", \"model_cls\"}:\n            return super().__setattr__(__name, __value)\n\n        self.raise_if_already_set(__name)\n        self.raise_if_not_in_model(__name)\n        self.fields[__name] = __value\n\n    def __repr__(self) -> str:\n        dsp_fields = \", \".join(\n            f\"{key}={repr(value)}\" for key, value in self.fields.items()\n        )\n        return f\"PartialModel(cls={self.model_cls.__name__}, {dsp_fields})\"\n
    ","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/pydantic/#prefect.utilities.pydantic.add_cloudpickle_reduction","title":"add_cloudpickle_reduction","text":"

    Adds a __reduce__ implementation to the given class that ensures it is cloudpickle compatible.

    Workaround for issues with cloudpickle when using cythonized Pydantic, which throws exceptions when attempting to pickle a class that has \"compiled\" validator methods dynamically attached to it.

    We cannot define this utility in the model class itself because the class is the type that contains unserializable methods.

    Any model using some features of Pydantic (e.g. Path validation) with a Cython compiled Pydantic installation may encounter pickling issues.

    See related issue at https://github.com/cloudpipe/cloudpickle/issues/408

    Source code in prefect/utilities/pydantic.py
    def add_cloudpickle_reduction(__model_cls: Type[M] = None, **kwargs: Any):\n    \"\"\"\n    Adds a `__reducer__` to the given class that ensures it is cloudpickle compatible.\n\n    Workaround for issues with cloudpickle when using cythonized pydantic which\n    throws exceptions when attempting to pickle the class which has \"compiled\"\n    validator methods dynamically attached to it.\n\n    We cannot define this utility in the model class itself because the class is the\n    type that contains unserializable methods.\n\n    Any model using some features of Pydantic (e.g. `Path` validation) with a Cython\n    compiled Pydantic installation may encounter pickling issues.\n\n    See related issue at https://github.com/cloudpipe/cloudpickle/issues/408\n    \"\"\"\n    if __model_cls:\n        __model_cls.__reduce__ = _reduce_model\n        __model_cls.__reduce_kwargs__ = kwargs\n        return __model_cls\n    else:\n        return cast(\n            Callable[[Type[M]], Type[M]],\n            partial(\n                add_cloudpickle_reduction,\n                **kwargs,\n            ),\n        )\n
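A small usage sketch, not taken from the module: the Settings model and its field are hypothetical, and the pydantic_v1 import shim assumes either Pydantic 2 (with the v1 compatibility layer) or Pydantic 1 is installed alongside cloudpickle.

import cloudpickle

try:
    import pydantic.v1 as pydantic_v1  # Pydantic 2 with the v1 compatibility layer
except ImportError:
    import pydantic as pydantic_v1  # Pydantic 1

from prefect.utilities.pydantic import add_cloudpickle_reduction


@add_cloudpickle_reduction
class Settings(pydantic_v1.BaseModel):
    name: str = "demo"


# The reduction hook lets cloudpickle round-trip instances even when Pydantic
# is a Cython-compiled installation.
restored = cloudpickle.loads(cloudpickle.dumps(Settings(name="example")))
print(restored.name)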
    ","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/pydantic/#prefect.utilities.pydantic.add_type_dispatch","title":"add_type_dispatch","text":"

    Extend a Pydantic model to add a 'type' field that is used as a discriminator field to dynamically determine the subtype when deserializing models.

    This allows automatic resolution to subtypes of the decorated model.

    If a type field already exists, it should be a string literal field that has a constant value for each subclass. The default value of this field will be used as the dispatch key.

    If a type field does not exist, one will be added. In this case, the value of the field will be set to the value of the __dispatch_key__. The base class should define a __dispatch_key__ class method that is used to determine the unique key for each subclass. Alternatively, each subclass can define the __dispatch_key__ as a string literal.

    The base class must not define a 'type' field. If it is not desirable to add a field to the model and the dispatch key can be tracked separately, the lower level utilities in prefect.utilities.dispatch should be used directly.

    Source code in prefect/utilities/pydantic.py
    def add_type_dispatch(model_cls: Type[M]) -> Type[M]:\n    \"\"\"\n    Extend a Pydantic model to add a 'type' field that is used as a discriminator field\n    to dynamically determine the subtype that when deserializing models.\n\n    This allows automatic resolution to subtypes of the decorated model.\n\n    If a type field already exists, it should be a string literal field that has a\n    constant value for each subclass. The default value of this field will be used as\n    the dispatch key.\n\n    If a type field does not exist, one will be added. In this case, the value of the\n    field will be set to the value of the `__dispatch_key__`. The base class should\n    define a `__dispatch_key__` class method that is used to determine the unique key\n    for each subclass. Alternatively, each subclass can define the `__dispatch_key__`\n    as a string literal.\n\n    The base class must not define a 'type' field. If it is not desirable to add a field\n    to the model and the dispatch key can be tracked separately, the lower level\n    utilities in `prefect.utilities.dispatch` should be used directly.\n    \"\"\"\n    defines_dispatch_key = hasattr(\n        model_cls, \"__dispatch_key__\"\n    ) or \"__dispatch_key__\" in getattr(model_cls, \"__annotations__\", {})\n\n    defines_type_field = \"type\" in model_cls.__fields__\n\n    if not defines_dispatch_key and not defines_type_field:\n        raise ValueError(\n            f\"Model class {model_cls.__name__!r} does not define a `__dispatch_key__` \"\n            \"or a type field. One of these is required for dispatch.\"\n        )\n\n    elif defines_dispatch_key and not defines_type_field:\n        # Add a type field to store the value of the dispatch key\n        model_cls.__fields__[\"type\"] = pydantic_v1.fields.ModelField(\n            name=\"type\",\n            type_=str,\n            required=True,\n            class_validators=None,\n            model_config=model_cls.__config__,\n        )\n\n    elif not defines_dispatch_key and defines_type_field:\n        field_type_annotation = model_cls.__fields__[\"type\"].type_\n        if field_type_annotation != str:\n            raise TypeError(\n                f\"Model class {model_cls.__name__!r} defines a 'type' field with \"\n                f\"type {field_type_annotation.__name__!r} but it must be 'str'.\"\n            )\n\n        # Set the dispatch key to retrieve the value from the type field\n        @classmethod\n        def dispatch_key_from_type_field(cls):\n            return cls.__fields__[\"type\"].default\n\n        model_cls.__dispatch_key__ = dispatch_key_from_type_field\n\n    else:\n        raise ValueError(\n            f\"Model class {model_cls.__name__!r} defines a `__dispatch_key__` \"\n            \"and a type field. 
Only one of these may be defined for dispatch.\"\n        )\n\n    cls_init = model_cls.__init__\n    cls_new = model_cls.__new__\n\n    def __init__(__pydantic_self__, **data: Any) -> None:\n        type_string = (\n            get_dispatch_key(__pydantic_self__)\n            if type(__pydantic_self__) != model_cls\n            else \"__base__\"\n        )\n        data.setdefault(\"type\", type_string)\n        cls_init(__pydantic_self__, **data)\n\n    def __new__(cls: Type[Self], **kwargs) -> Self:\n        if \"type\" in kwargs:\n            try:\n                subcls = lookup_type(cls, dispatch_key=kwargs[\"type\"])\n            except KeyError as exc:\n                raise pydantic_v1.ValidationError(errors=[exc], model=cls)\n            return cls_new(subcls)\n        else:\n            return cls_new(cls)\n\n    model_cls.__init__ = __init__\n    model_cls.__new__ = __new__\n\n    register_base_type(model_cls)\n\n    return model_cls\n
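An illustrative sketch (the Shape and Circle models are hypothetical, not from the Prefect codebase) of the branch where a 'type' field already exists: the subclass's default value for type becomes its dispatch key, so constructing the base class with type="circle" resolves to the Circle subclass.

try:
    import pydantic.v1 as pydantic_v1  # Pydantic 2 with the v1 compatibility layer
except ImportError:
    import pydantic as pydantic_v1  # Pydantic 1

from prefect.utilities.pydantic import add_type_dispatch


@add_type_dispatch
class Shape(pydantic_v1.BaseModel):
    # The existing string 'type' field is used as the discriminator.
    type: str


class Circle(Shape):
    type: str = "circle"
    radius: float = 1.0


# Deserializing via the base class dispatches to the registered subclass.
shape = Shape(type="circle", radius=2.0)
print(type(shape).__name__, shape.radius)  # Circle 2.0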
    ","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/pydantic/#prefect.utilities.pydantic.get_class_fields_only","title":"get_class_fields_only","text":"

    Gets all the field names defined on the model class but not any parent classes. Any fields that are on the parent but redefined on the subclass are included.

    Source code in prefect/utilities/pydantic.py
    def get_class_fields_only(model: Type[pydantic_v1.BaseModel]) -> set:\n    \"\"\"\n    Gets all the field names defined on the model class but not any parent classes.\n    Any fields that are on the parent but redefined on the subclass are included.\n    \"\"\"\n    subclass_class_fields = set(model.__annotations__.keys())\n    parent_class_fields = set()\n\n    for base in model.__class__.__bases__:\n        if issubclass(base, pydantic_v1.BaseModel):\n            parent_class_fields.update(base.__annotations__.keys())\n\n    return (subclass_class_fields - parent_class_fields) | (\n        subclass_class_fields & parent_class_fields\n    )\n
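A short, hypothetical example of the behavior described above (Parent and Child are illustrative models only):

try:
    import pydantic.v1 as pydantic_v1
except ImportError:
    import pydantic as pydantic_v1

from prefect.utilities.pydantic import get_class_fields_only


class Parent(pydantic_v1.BaseModel):
    a: int
    b: int


class Child(Parent):
    b: int  # redefined on the subclass, so it is included
    c: int


print(get_class_fields_only(Child))  # {'b', 'c'} -- 'a' is defined only on Parent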
    ","tags":["Python API","pydantic"]},{"location":"api-ref/prefect/utilities/render_swagger/","title":"render_swagger","text":"","tags":["Python API","Swagger"]},{"location":"api-ref/prefect/utilities/render_swagger/#prefect.utilities.render_swagger","title":"prefect.utilities.render_swagger","text":"","tags":["Python API","Swagger"]},{"location":"api-ref/prefect/utilities/render_swagger/#prefect.utilities.render_swagger.swagger_lib","title":"swagger_lib","text":"

    Provides the Swagger UI library assets (JS and CSS) that are actually used, honoring any overrides supplied via extra_javascript and extra_css.

    Source code in prefect/utilities/render_swagger.py
    def swagger_lib(config) -> dict:\n    \"\"\"\n    Provides the actual swagger library used\n    \"\"\"\n    lib_swagger = {\n        \"css\": \"https://unpkg.com/swagger-ui-dist@3/swagger-ui.css\",\n        \"js\": \"https://unpkg.com/swagger-ui-dist@3/swagger-ui-bundle.js\",\n    }\n\n    extra_javascript = config.get(\"extra_javascript\", [])\n    extra_css = config.get(\"extra_css\", [])\n    for lib in extra_javascript:\n        if os.path.basename(urllib.parse.urlparse(lib).path) == \"swagger-ui-bundle.js\":\n            lib_swagger[\"js\"] = lib\n            break\n\n    for css in extra_css:\n        if os.path.basename(urllib.parse.urlparse(css).path) == \"swagger-ui.css\":\n            lib_swagger[\"css\"] = css\n            break\n    return lib_swagger\n
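For illustration only: a plain dict stands in for the MkDocs config object (which exposes the same .get interface), and the override paths are hypothetical vendored copies of the assets.

from prefect.utilities.render_swagger import swagger_lib

# No overrides: the unpkg-hosted swagger-ui-bundle.js and swagger-ui.css are used.
print(swagger_lib({}))

# Hypothetical overrides pointing at locally vendored copies of the assets.
config = {
    "extra_javascript": ["assets/swagger-ui-bundle.js"],
    "extra_css": ["assets/swagger-ui.css"],
}
print(swagger_lib(config))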
    ","tags":["Python API","Swagger"]},{"location":"api-ref/prefect/utilities/services/","title":"services","text":"","tags":["Python API","services"]},{"location":"api-ref/prefect/utilities/services/#prefect.utilities.services","title":"prefect.utilities.services","text":"","tags":["Python API","services"]},{"location":"api-ref/prefect/utilities/services/#prefect.utilities.services.critical_service_loop","title":"critical_service_loop async","text":"

    Runs the given workload function on the specified interval, while being forgiving of intermittent issues like temporary HTTP errors. If more than a certain number of consecutive errors occur, print a summary of up to memory recent exceptions to printer, then begin backoff.

    The loop will exit after reaching the consecutive error limit backoff times. On each backoff, the interval will be doubled. On a successful loop, the backoff will be reset.

    Parameters:

    - workload (Callable[..., Coroutine]): the function to call. Required.
    - interval (float): how frequently to call it. Required.
    - memory (int): how many recent errors to remember. Default: 10.
    - consecutive (int): how many consecutive errors must we see before we begin backoff. Default: 3.
    - backoff (int): how many times we should allow consecutive errors before exiting. Default: 1.
    - printer (Callable[..., None]): a print-like function where errors will be reported. Default: print.
    - run_once (bool): if set, the loop will only run once then return. Default: False.
    - jitter_range (float): if set, the interval will be a random variable (rv) drawn from a clamped Poisson distribution where lambda = interval and the rv is bound between interval * (1 - range) < rv < interval * (1 + range). Default: None.

    Source code in prefect/utilities/services.py
    async def critical_service_loop(\n    workload: Callable[..., Coroutine],\n    interval: float,\n    memory: int = 10,\n    consecutive: int = 3,\n    backoff: int = 1,\n    printer: Callable[..., None] = print,\n    run_once: bool = False,\n    jitter_range: float = None,\n):\n    \"\"\"\n    Runs the given `workload` function on the specified `interval`, while being\n    forgiving of intermittent issues like temporary HTTP errors.  If more than a certain\n    number of `consecutive` errors occur, print a summary of up to `memory` recent\n    exceptions to `printer`, then begin backoff.\n\n    The loop will exit after reaching the consecutive error limit `backoff` times.\n    On each backoff, the interval will be doubled. On a successful loop, the backoff\n    will be reset.\n\n    Args:\n        workload: the function to call\n        interval: how frequently to call it\n        memory: how many recent errors to remember\n        consecutive: how many consecutive errors must we see before we begin backoff\n        backoff: how many times we should allow consecutive errors before exiting\n        printer: a `print`-like function where errors will be reported\n        run_once: if set, the loop will only run once then return\n        jitter_range: if set, the interval will be a random variable (rv) drawn from\n            a clamped Poisson distribution where lambda = interval and the rv is bound\n            between `interval * (1 - range) < rv < interval * (1 + range)`\n    \"\"\"\n\n    track_record: Deque[bool] = deque([True] * consecutive, maxlen=consecutive)\n    failures: Deque[Tuple[Exception, TracebackType]] = deque(maxlen=memory)\n    backoff_count = 0\n\n    while True:\n        try:\n            workload_display_name = (\n                workload.__name__ if hasattr(workload, \"__name__\") else workload\n            )\n            logger.debug(f\"Starting run of {workload_display_name!r}\")\n            await workload()\n\n            # Reset the backoff count on success; we may want to consider resetting\n            # this only if the track record is _all_ successful to avoid ending backoff\n            # prematurely\n            if backoff_count > 0:\n                printer(\"Resetting backoff due to successful run.\")\n                backoff_count = 0\n\n            track_record.append(True)\n        except httpx.TransportError as exc:\n            # httpx.TransportError is the base class for any kind of communications\n            # error, like timeouts, connection failures, etc.  This does _not_ cover\n            # routine HTTP error codes (even 5xx errors like 502/503) so this\n            # handler should not be attempting to cover cases where the Prefect server\n            # or Prefect Cloud is having an outage (which will be covered by the\n            # exception clause below)\n            track_record.append(False)\n            failures.append((exc, sys.exc_info()[-1]))\n            logger.debug(\n                f\"Run of {workload!r} failed with TransportError\", exc_info=exc\n            )\n        except httpx.HTTPStatusError as exc:\n            if exc.response.status_code >= 500:\n                # 5XX codes indicate a potential outage of the Prefect API which is\n                # likely to be temporary and transient.  
Don't quit over these unless\n                # it is prolonged.\n                track_record.append(False)\n                failures.append((exc, sys.exc_info()[-1]))\n                logger.debug(\n                    f\"Run of {workload!r} failed with HTTPStatusError\", exc_info=exc\n                )\n            else:\n                raise\n\n        # Decide whether to exit now based on recent history.\n        #\n        # Given some typical background error rate of, say, 1%, we may still observe\n        # quite a few errors in our recent samples, but this is not necessarily a cause\n        # for concern.\n        #\n        # Imagine two distributions that could reflect our situation at any time: the\n        # everything-is-fine distribution of errors, and the everything-is-on-fire\n        # distribution of errors. We are trying to determine which of those two worlds\n        # we are currently experiencing.  We compare the likelihood that we'd draw N\n        # consecutive errors from each.  In the everything-is-fine distribution, that\n        # would be a very low-probability occurrence, but in the everything-is-on-fire\n        # distribution, that is a high-probability occurrence.\n        #\n        # Remarkably, we only need to look back for a small number of consecutive\n        # errors to have reasonable confidence that this is indeed an anomaly.\n        # @anticorrelator and @chrisguidry estimated that we should only need to look\n        # back for 3 consecutive errors.\n        if not any(track_record):\n            # We've failed enough times to be sure something is wrong, the writing is\n            # on the wall.  Let's explain what we've seen and exit.\n            printer(\n                f\"\\nFailed the last {consecutive} attempts. \"\n                \"Please check your environment and configuration.\"\n            )\n\n            printer(\"Examples of recent errors:\\n\")\n\n            failures_by_type = distinct(\n                reversed(failures),\n                key=lambda pair: type(pair[0]),  # Group by the type of exception\n            )\n            for exception, traceback in failures_by_type:\n                printer(\"\".join(format_exception(None, exception, traceback)))\n                printer()\n\n            backoff_count += 1\n\n            if backoff_count >= backoff:\n                raise RuntimeError(\"Service exceeded error threshold.\")\n\n            # Reset the track record\n            track_record.extend([True] * consecutive)\n            failures.clear()\n            printer(\n                \"Backing off due to consecutive errors, using increased interval of \"\n                f\" {interval * 2**backoff_count}s.\"\n            )\n\n        if run_once:\n            return\n\n        if jitter_range is not None:\n            sleep = clamped_poisson_interval(interval, clamping_factor=jitter_range)\n        else:\n            sleep = interval * 2**backoff_count\n\n        await anyio.sleep(sleep)\n
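A minimal usage sketch: the heartbeat workload is hypothetical, and run_once=True is set so the example exits after a single pass instead of looping forever.

import anyio

from prefect.utilities.services import critical_service_loop


async def heartbeat():
    # Stand-in for a real workload such as polling the Prefect API.
    print("polling...")


async def main():
    await critical_service_loop(workload=heartbeat, interval=2, run_once=True)


anyio.run(main)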
    ","tags":["Python API","services"]},{"location":"api-ref/prefect/utilities/slugify/","title":"slugify","text":"","tags":["Python API","slugify"]},{"location":"api-ref/prefect/utilities/slugify/#prefect.utilities.slugify","title":"prefect.utilities.slugify","text":"","tags":["Python API","slugify"]},{"location":"api-ref/prefect/utilities/templating/","title":"templating","text":"","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating","title":"prefect.utilities.templating","text":"","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.apply_values","title":"apply_values","text":"

    Replaces placeholders in a template with values from a supplied dictionary.

    Will recursively replace placeholders in dictionaries and lists.

    If a value has no placeholders, it will be returned unchanged.

    If a template contains only a single placeholder, the placeholder will be fully replaced with the value.

    If a template contains text before or after a placeholder or there are multiple placeholders, the placeholders will be replaced with the corresponding variable values.

    If a template contains a placeholder that is not in values, NotSet will be returned to signify that no placeholder replacement occurred. If template is a dictionary that contains a key with a value of NotSet, the key will be removed in the return value unless remove_notset is set to False.

    Parameters:

    - template (T): template to discover and replace values in. Required.
    - values (Dict[str, Any]): The values to apply to placeholders in the template. Required.
    - remove_notset (bool): If True, remove keys with an unset value. Default: True.

    Returns:

    - Union[T, Type[NotSet]]: The template with the values applied

    Source code in prefect/utilities/templating.py
    def apply_values(\n    template: T, values: Dict[str, Any], remove_notset: bool = True\n) -> Union[T, Type[NotSet]]:\n    \"\"\"\n    Replaces placeholders in a template with values from a supplied dictionary.\n\n    Will recursively replace placeholders in dictionaries and lists.\n\n    If a value has no placeholders, it will be returned unchanged.\n\n    If a template contains only a single placeholder, the placeholder will be\n    fully replaced with the value.\n\n    If a template contains text before or after a placeholder or there are\n    multiple placeholders, the placeholders will be replaced with the\n    corresponding variable values.\n\n    If a template contains a placeholder that is not in `values`, NotSet will\n    be returned to signify that no placeholder replacement occurred. If\n    `template` is a dictionary that contains a key with a value of NotSet,\n    the key will be removed in the return value unless `remove_notset` is set to False.\n\n    Args:\n        template: template to discover and replace values in\n        values: The values to apply to placeholders in the template\n        remove_notset: If True, remove keys with an unset value\n\n    Returns:\n        The template with the values applied\n    \"\"\"\n    if isinstance(template, (int, float, bool, type(NotSet), type(None))):\n        return template\n    if isinstance(template, str):\n        placeholders = find_placeholders(template)\n        if not placeholders:\n            # If there are no values, we can just use the template\n            return template\n        elif (\n            len(placeholders) == 1\n            and list(placeholders)[0].full_match == template\n            and list(placeholders)[0].type is PlaceholderType.STANDARD\n        ):\n            # If there is only one variable with no surrounding text,\n            # we can replace it. 
If there is no variable value, we\n            # return NotSet to indicate that the value should not be included.\n            return get_from_dict(values, list(placeholders)[0].name, NotSet)\n        else:\n            for full_match, name, placeholder_type in placeholders:\n                if placeholder_type is PlaceholderType.STANDARD:\n                    value = get_from_dict(values, name, NotSet)\n                elif placeholder_type is PlaceholderType.ENV_VAR:\n                    name = name.lstrip(ENV_VAR_PLACEHOLDER_PREFIX)\n                    value = os.environ.get(name, NotSet)\n                else:\n                    continue\n\n                if value is NotSet and not remove_notset:\n                    continue\n                elif value is NotSet:\n                    template = template.replace(full_match, \"\")\n                else:\n                    template = template.replace(full_match, str(value))\n\n            return template\n    elif isinstance(template, dict):\n        updated_template = {}\n        for key, value in template.items():\n            updated_value = apply_values(value, values, remove_notset=remove_notset)\n            if updated_value is not NotSet:\n                updated_template[key] = updated_value\n            elif not remove_notset:\n                updated_template[key] = value\n\n        return updated_template\n    elif isinstance(template, list):\n        updated_list = []\n        for value in template:\n            updated_value = apply_values(value, values, remove_notset=remove_notset)\n            if updated_value is not NotSet:\n                updated_list.append(updated_value)\n        return updated_list\n    else:\n        raise ValueError(f\"Unexpected template type {type(template).__name__!r}\")\n
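A short illustration of the replacement rules above, using a hypothetical job template:

from prefect.utilities.templating import apply_values

template = {
    "image": "{{ image_name }}",      # single placeholder: replaced with the raw value
    "command": "echo {{ message }}",  # surrounding text: string substitution
    "memory": "{{ memory_limit }}",   # missing from values: key removed by default
}
values = {"image_name": "python:3.11", "message": "hello"}

print(apply_values(template, values))
# {'image': 'python:3.11', 'command': 'echo hello'}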
    ","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.determine_placeholder_type","title":"determine_placeholder_type","text":"

    Determines the type of a placeholder based on its name.

    Parameters:

    - name (str): The name of the placeholder. Required.

    Returns:

    - PlaceholderType: The type of the placeholder

    Source code in prefect/utilities/templating.py
    def determine_placeholder_type(name: str) -> PlaceholderType:\n    \"\"\"\n    Determines the type of a placeholder based on its name.\n\n    Args:\n        name: The name of the placeholder\n\n    Returns:\n        The type of the placeholder\n    \"\"\"\n    if name.startswith(BLOCK_DOCUMENT_PLACEHOLDER_PREFIX):\n        return PlaceholderType.BLOCK_DOCUMENT\n    elif name.startswith(VARIABLE_PLACEHOLDER_PREFIX):\n        return PlaceholderType.VARIABLE\n    elif name.startswith(ENV_VAR_PLACEHOLDER_PREFIX):\n        return PlaceholderType.ENV_VAR\n    else:\n        return PlaceholderType.STANDARD\n
    ","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.find_placeholders","title":"find_placeholders","text":"

    Finds all placeholders in a template.

    Parameters:

    - template (T): template to discover placeholders in. Required.

    Returns:

    - Set[Placeholder]: A set of all placeholders in the template

    Source code in prefect/utilities/templating.py
    def find_placeholders(template: T) -> Set[Placeholder]:\n    \"\"\"\n    Finds all placeholders in a template.\n\n    Args:\n        template: template to discover placeholders in\n\n    Returns:\n        A set of all placeholders in the template\n    \"\"\"\n    if isinstance(template, (int, float, bool)):\n        return set()\n    if isinstance(template, str):\n        result = PLACEHOLDER_CAPTURE_REGEX.findall(template)\n        return {\n            Placeholder(full_match, name, determine_placeholder_type(name))\n            for full_match, name in result\n        }\n    elif isinstance(template, dict):\n        return set().union(\n            *[find_placeholders(value) for key, value in template.items()]\n        )\n    elif isinstance(template, list):\n        return set().union(*[find_placeholders(item) for item in template])\n    else:\n        raise ValueError(f\"Unexpected type: {type(template)}\")\n
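A small illustrative call (the template keys are arbitrary) showing that placeholders are discovered recursively and returned with their inferred types:

from prefect.utilities.templating import find_placeholders

template = {"command": "echo {{ message }}", "env": ["{{ greeting }}"]}

for placeholder in find_placeholders(template):
    # Each Placeholder carries the full match, the name, and a PlaceholderType.
    print(placeholder.name, placeholder.type)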
    ","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.resolve_block_document_references","title":"resolve_block_document_references async","text":"

    Resolve block document references in a template by replacing each reference with the data of the block document.

    Recursively searches for block document references in dictionaries and lists.

    Identifies block document references as dictionaries with the following structure:

    {\n    \"$ref\": {\n        \"block_document_id\": <block_document_id>\n    }\n}\n
    where <block_document_id> is the ID of the block document to resolve.

    Once the block document is retrieved from the API, the data of the block document is used to replace the reference.

    ","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.resolve_block_document_references--accessing-values","title":"Accessing Values:","text":"

    To access different values in a block document, use dot notation combined with the block document's prefix, slug, and block name.

    For a block document with the structure:

    {\n    \"value\": {\n        \"key\": {\n            \"nested-key\": \"nested-value\"\n        },\n        \"list\": [\n            {\"list-key\": \"list-value\"},\n            1,\n            2\n        ]\n    }\n}\n
    examples of value resolution are as follows:

    1. Accessing a nested dictionary: Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.key Example: Returns {\"nested-key\": \"nested-value\"}

    2. Accessing a specific nested value: Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.key.nested-key Example: Returns \"nested-value\"

    3. Accessing a list element's key-value: Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.list[0].list-key Example: Returns \"list-value\"","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.resolve_block_document_references--default-resolution-for-system-blocks","title":"Default Resolution for System Blocks:","text":"

      For system blocks, which only contain a value attribute, this attribute is resolved by default.

      Parameters:

      - template (T): The template to resolve block documents in. Required.

      Returns:

      - Union[T, Dict[str, Any]]: The template with block documents resolved

      Source code in prefect/utilities/templating.py
      @inject_client\nasync def resolve_block_document_references(\n    template: T, client: \"PrefectClient\" = None\n) -> Union[T, Dict[str, Any]]:\n    \"\"\"\n    Resolve block document references in a template by replacing each reference with\n    the data of the block document.\n\n    Recursively searches for block document references in dictionaries and lists.\n\n    Identifies block document references by the as dictionary with the following\n    structure:\n    ```\n    {\n        \"$ref\": {\n            \"block_document_id\": <block_document_id>\n        }\n    }\n    ```\n    where `<block_document_id>` is the ID of the block document to resolve.\n\n    Once the block document is retrieved from the API, the data of the block document\n    is used to replace the reference.\n\n    Accessing Values:\n    -----------------\n    To access different values in a block document, use dot notation combined with the block document's prefix, slug, and block name.\n\n    For a block document with the structure:\n    ```json\n    {\n        \"value\": {\n            \"key\": {\n                \"nested-key\": \"nested-value\"\n            },\n            \"list\": [\n                {\"list-key\": \"list-value\"},\n                1,\n                2\n            ]\n        }\n    }\n    ```\n    examples of value resolution are as follows:\n\n    1. Accessing a nested dictionary:\n       Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.key\n       Example: Returns {\"nested-key\": \"nested-value\"}\n\n    2. Accessing a specific nested value:\n       Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.key.nested-key\n       Example: Returns \"nested-value\"\n\n    3. Accessing a list element's key-value:\n       Format: prefect.blocks.<block_type_slug>.<block_document_name>.value.list[0].list-key\n       Example: Returns \"list-value\"\n\n    Default Resolution for System Blocks:\n    -------------------------------------\n    For system blocks, which only contain a `value` attribute, this attribute is resolved by default.\n\n    Args:\n        template: The template to resolve block documents in\n\n    Returns:\n        The template with block documents resolved\n    \"\"\"\n    if isinstance(template, dict):\n        block_document_id = template.get(\"$ref\", {}).get(\"block_document_id\")\n        if block_document_id:\n            block_document = await client.read_block_document(block_document_id)\n            return block_document.data\n        updated_template = {}\n        for key, value in template.items():\n            updated_value = await resolve_block_document_references(\n                value, client=client\n            )\n            updated_template[key] = updated_value\n        return updated_template\n    elif isinstance(template, list):\n        return [\n            await resolve_block_document_references(item, client=client)\n            for item in template\n        ]\n    elif isinstance(template, str):\n        placeholders = find_placeholders(template)\n        has_block_document_placeholder = any(\n            placeholder.type is PlaceholderType.BLOCK_DOCUMENT\n            for placeholder in placeholders\n        )\n        if len(placeholders) == 0 or not has_block_document_placeholder:\n            return template\n        elif (\n            len(placeholders) == 1\n            and list(placeholders)[0].full_match == template\n            and list(placeholders)[0].type is PlaceholderType.BLOCK_DOCUMENT\n        ):\n         
   # value_keypath will be a list containing a dot path if additional\n            # attributes are accessed and an empty list otherwise.\n            block_type_slug, block_document_name, *value_keypath = (\n                list(placeholders)[0]\n                .name.replace(BLOCK_DOCUMENT_PLACEHOLDER_PREFIX, \"\")\n                .split(\".\", 2)\n            )\n            block_document = await client.read_block_document_by_name(\n                name=block_document_name, block_type_slug=block_type_slug\n            )\n            value = block_document.data\n\n            # resolving system blocks to their data for backwards compatibility\n            if len(value) == 1 and \"value\" in value:\n                # only resolve the value if the keypath is not already pointing to \"value\"\n                if len(value_keypath) == 0 or value_keypath[0][:5] != \"value\":\n                    value = value[\"value\"]\n\n            # resolving keypath/block attributes\n            if len(value_keypath) > 0:\n                value_keypath: str = value_keypath[0]\n                value = get_from_dict(value, value_keypath, default=NotSet)\n                if value is NotSet:\n                    raise ValueError(\n                        f\"Invalid template: {template!r}. Could not resolve the\"\n                        \" keypath in the block document data.\"\n                    )\n\n            return value\n        else:\n            raise ValueError(\n                f\"Invalid template: {template!r}. Only a single block placeholder is\"\n                \" allowed in a string and no surrounding text is allowed.\"\n            )\n\n    return template\n
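A hedged usage sketch: it assumes a reachable Prefect API and a previously saved Secret block named "my-secret"; both names are illustrative.

import asyncio

from prefect.utilities.templating import resolve_block_document_references


async def main():
    template = {"password": "{{ prefect.blocks.secret.my-secret }}"}
    # The client is injected automatically when not supplied.
    resolved = await resolve_block_document_references(template)
    print(resolved)


asyncio.run(main())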
      ","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/templating/#prefect.utilities.templating.resolve_variables","title":"resolve_variables async","text":"

      Resolve variables in a template by replacing each variable placeholder with the value of the variable.

      Recursively searches for variable placeholders in dictionaries and lists.

      Strips variable placeholders if the variable is not found.

      Parameters:

      - template (T): The template to resolve variables in. Required.

      Returns:

      - The template with variables resolved

      Source code in prefect/utilities/templating.py
      @inject_client\nasync def resolve_variables(template: T, client: \"PrefectClient\" = None):\n    \"\"\"\n    Resolve variables in a template by replacing each variable placeholder with the\n    value of the variable.\n\n    Recursively searches for variable placeholders in dictionaries and lists.\n\n    Strips variable placeholders if the variable is not found.\n\n    Args:\n        template: The template to resolve variables in\n\n    Returns:\n        The template with variables resolved\n    \"\"\"\n    if isinstance(template, str):\n        placeholders = find_placeholders(template)\n        has_variable_placeholder = any(\n            placeholder.type is PlaceholderType.VARIABLE for placeholder in placeholders\n        )\n        if not placeholders or not has_variable_placeholder:\n            # If there are no values, we can just use the template\n            return template\n        elif (\n            len(placeholders) == 1\n            and list(placeholders)[0].full_match == template\n            and list(placeholders)[0].type is PlaceholderType.VARIABLE\n        ):\n            variable_name = list(placeholders)[0].name.replace(\n                VARIABLE_PLACEHOLDER_PREFIX, \"\"\n            )\n            variable = await client.read_variable_by_name(name=variable_name)\n            if variable is None:\n                return \"\"\n            else:\n                return variable.value\n        else:\n            for full_match, name, placeholder_type in placeholders:\n                if placeholder_type is PlaceholderType.VARIABLE:\n                    variable_name = name.replace(VARIABLE_PLACEHOLDER_PREFIX, \"\")\n                    variable = await client.read_variable_by_name(name=variable_name)\n                    if variable is None:\n                        template = template.replace(full_match, \"\")\n                    else:\n                        template = template.replace(full_match, variable.value)\n            return template\n    elif isinstance(template, dict):\n        return {\n            key: await resolve_variables(value, client=client)\n            for key, value in template.items()\n        }\n    elif isinstance(template, list):\n        return [await resolve_variables(item, client=client) for item in template]\n    else:\n        return template\n
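A hedged sketch along the same lines, assuming a reachable Prefect API and an existing Prefect variable named "environment" (illustrative); unknown variables are stripped rather than raising.

import asyncio

from prefect.utilities.templating import resolve_variables


async def main():
    template = {"env": "{{ prefect.variables.environment }}"}
    print(await resolve_variables(template))


asyncio.run(main())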
      ","tags":["Python API","templating"]},{"location":"api-ref/prefect/utilities/text/","title":"text","text":"","tags":["Python API","text"]},{"location":"api-ref/prefect/utilities/text/#prefect.utilities.text","title":"prefect.utilities.text","text":"","tags":["Python API","text"]},{"location":"api-ref/prefect/utilities/visualization/","title":"visualization","text":"","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization","title":"prefect.utilities.visualization","text":"

      Utilities for working with Flow.visualize()

      ","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization.TaskVizTracker","title":"TaskVizTracker","text":"Source code in prefect/utilities/visualization.py
      class TaskVizTracker:\n    def __init__(self):\n        self.tasks = []\n        self.dynamic_task_counter = {}\n        self.object_id_to_task = {}\n\n    def add_task(self, task: VizTask):\n        if task.name not in self.dynamic_task_counter:\n            self.dynamic_task_counter[task.name] = 0\n        else:\n            self.dynamic_task_counter[task.name] += 1\n\n        task.name = f\"{task.name}-{self.dynamic_task_counter[task.name]}\"\n        self.tasks.append(task)\n\n    def __enter__(self):\n        TaskVizTrackerState.current = self\n        return self\n\n    def __exit__(self, exc_type, exc_val, exc_tb):\n        TaskVizTrackerState.current = None\n\n    def link_viz_return_value_to_viz_task(\n        self, viz_return_value: Any, viz_task: VizTask\n    ) -> None:\n        \"\"\"\n        We cannot track booleans, Ellipsis, None, NotImplemented, or the integers from -5 to 256\n        because they are singletons.\n        \"\"\"\n        from prefect.utilities.engine import UNTRACKABLE_TYPES\n\n        if (type(viz_return_value) in UNTRACKABLE_TYPES) or (\n            isinstance(viz_return_value, int) and (-5 <= viz_return_value <= 256)\n        ):\n            return\n        self.object_id_to_task[id(viz_return_value)] = viz_task\n
      ","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization.TaskVizTracker.link_viz_return_value_to_viz_task","title":"link_viz_return_value_to_viz_task","text":"

      We cannot track booleans, Ellipsis, None, NotImplemented, or the integers from -5 to 256 because they are singletons.

      Source code in prefect/utilities/visualization.py
      def link_viz_return_value_to_viz_task(\n    self, viz_return_value: Any, viz_task: VizTask\n) -> None:\n    \"\"\"\n    We cannot track booleans, Ellipsis, None, NotImplemented, or the integers from -5 to 256\n    because they are singletons.\n    \"\"\"\n    from prefect.utilities.engine import UNTRACKABLE_TYPES\n\n    if (type(viz_return_value) in UNTRACKABLE_TYPES) or (\n        isinstance(viz_return_value, int) and (-5 <= viz_return_value <= 256)\n    ):\n        return\n    self.object_id_to_task[id(viz_return_value)] = viz_task\n
      ","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization.build_task_dependencies","title":"build_task_dependencies","text":"

      Constructs a Graphviz directed graph object that represents the dependencies between tasks in the given TaskVizTracker.

      Parameters: - task_run_tracker (TaskVizTracker): An object containing tasks and their dependencies.

      Returns: - graphviz.Digraph: A directed graph object depicting the relationships and dependencies between tasks.

      Raises: - GraphvizImportError: If there's an ImportError related to graphviz. - FlowVisualizationError: If there's any other error during the visualization process or if return values of tasks are directly accessed without specifying a viz_return_value.

      Source code in prefect/utilities/visualization.py
      def build_task_dependencies(task_run_tracker: TaskVizTracker):\n    \"\"\"\n    Constructs a Graphviz directed graph object that represents the dependencies\n    between tasks in the given TaskVizTracker.\n\n    Parameters:\n    - task_run_tracker (TaskVizTracker): An object containing tasks and their\n      dependencies.\n\n    Returns:\n    - graphviz.Digraph: A directed graph object depicting the relationships and\n      dependencies between tasks.\n\n    Raises:\n    - GraphvizImportError: If there's an ImportError related to graphviz.\n    - FlowVisualizationError: If there's any other error during the visualization\n      process or if return values of tasks are directly accessed without\n      specifying a `viz_return_value`.\n    \"\"\"\n    try:\n        g = graphviz.Digraph()\n        for task in task_run_tracker.tasks:\n            g.node(task.name)\n            for upstream in task.upstream_tasks:\n                g.edge(upstream.name, task.name)\n        return g\n    except ImportError as exc:\n        raise GraphvizImportError from exc\n    except Exception:\n        raise FlowVisualizationError(\n            \"Something went wrong building the flow's visualization.\"\n            \" If you're interacting with the return value of a task\"\n            \" directly inside of your flow, you must set a set a `viz_return_value`\"\n            \", for example `@task(viz_return_value=[1, 2, 3])`.\"\n        )\n
      ","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization.track_viz_task","title":"track_viz_task","text":"

      Return a result if sync, otherwise return a coroutine that returns the result

      Source code in prefect/utilities/visualization.py
      def track_viz_task(\n    is_async: bool,\n    task_name: str,\n    parameters: dict,\n    viz_return_value: Optional[Any] = None,\n):\n    \"\"\"Return a result if sync otherwise return a coroutine that returns the result\"\"\"\n    if is_async:\n        return from_async.wait_for_call_in_loop_thread(\n            partial(_track_viz_task, task_name, parameters, viz_return_value)\n        )\n    else:\n        return _track_viz_task(task_name, parameters, viz_return_value)\n
      ","tags":["Python API","visualization"]},{"location":"api-ref/prefect/utilities/visualization/#prefect.utilities.visualization.visualize_task_dependencies","title":"visualize_task_dependencies","text":"

      Renders and displays a Graphviz directed graph representing task dependencies.

      The graph is rendered in PNG format and saved with the name specified by flow_run_name. After rendering, the visualization is opened and displayed.

      Parameters: - graph (graphviz.Digraph): The directed graph object to visualize. - flow_run_name (str): The name to use when saving the rendered graph image.

      Raises: - GraphvizExecutableNotFoundError: If Graphviz isn't found on the system. - FlowVisualizationError: If there's any other error during the visualization process or if return values of tasks are directly accessed without specifying a viz_return_value.

      Source code in prefect/utilities/visualization.py
      def visualize_task_dependencies(graph: graphviz.Digraph, flow_run_name: str):\n    \"\"\"\n    Renders and displays a Graphviz directed graph representing task dependencies.\n\n    The graph is rendered in PNG format and saved with the name specified by\n    flow_run_name. After rendering, the visualization is opened and displayed.\n\n    Parameters:\n    - graph (graphviz.Digraph): The directed graph object to visualize.\n    - flow_run_name (str): The name to use when saving the rendered graph image.\n\n    Raises:\n    - GraphvizExecutableNotFoundError: If Graphviz isn't found on the system.\n    - FlowVisualizationError: If there's any other error during the visualization\n      process or if return values of tasks are directly accessed without\n      specifying a `viz_return_value`.\n    \"\"\"\n    try:\n        graph.render(filename=flow_run_name, view=True, format=\"png\", cleanup=True)\n    except graphviz.backend.ExecutableNotFound as exc:\n        msg = (\n            \"It appears you do not have Graphviz installed, or it is not on your \"\n            \"PATH. Please install Graphviz from http://www.graphviz.org/download/. \"\n            \"Note: Just installing the `graphviz` python package is not \"\n            \"sufficient.\"\n        )\n        raise GraphvizExecutableNotFoundError(msg) from exc\n    except Exception:\n        raise FlowVisualizationError(\n            \"Something went wrong building the flow's visualization.\"\n            \" If you're interacting with the return value of a task\"\n            \" directly inside of your flow, you must set a set a `viz_return_value`\"\n            \", for example `@task(viz_return_value=[1, 2, 3])`.\"\n        )\n
      ","tags":["Python API","visualization"]},{"location":"api-ref/prefect/workers/base/","title":"base","text":"","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base","title":"prefect.workers.base","text":"","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration","title":"BaseJobConfiguration","text":"

      Bases: BaseModel

      Source code in prefect/workers/base.py
      class BaseJobConfiguration(BaseModel):\n    command: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The command to use when starting a flow run. \"\n            \"In most cases, this should be left blank and the command \"\n            \"will be automatically generated by the worker.\"\n        ),\n    )\n    env: Dict[str, Optional[str]] = Field(\n        default_factory=dict,\n        title=\"Environment Variables\",\n        description=\"Environment variables to set when starting a flow run.\",\n    )\n    labels: Dict[str, str] = Field(\n        default_factory=dict,\n        description=(\n            \"Labels applied to infrastructure created by the worker using \"\n            \"this job configuration.\"\n        ),\n    )\n    name: Optional[str] = Field(\n        default=None,\n        description=(\n            \"Name given to infrastructure created by the worker using this \"\n            \"job configuration.\"\n        ),\n    )\n\n    _related_objects: Dict[str, Any] = PrivateAttr(default_factory=dict)\n\n    @property\n    def is_using_a_runner(self):\n        return self.command is not None and \"prefect flow-run execute\" in self.command\n\n    @validator(\"command\")\n    def _coerce_command(cls, v):\n        return return_v_or_none(v)\n\n    @staticmethod\n    def _get_base_config_defaults(variables: dict) -> dict:\n        \"\"\"Get default values from base config for all variables that have them.\"\"\"\n        defaults = dict()\n        for variable_name, attrs in variables.items():\n            if \"default\" in attrs:\n                defaults[variable_name] = attrs[\"default\"]\n\n        return defaults\n\n    @classmethod\n    @inject_client\n    async def from_template_and_values(\n        cls, base_job_template: dict, values: dict, client: \"PrefectClient\" = None\n    ):\n        \"\"\"Creates a valid worker configuration object from the provided base\n        configuration and overrides.\n\n        Important: this method expects that the base_job_template was already\n        validated server-side.\n        \"\"\"\n        job_config: Dict[str, Any] = base_job_template[\"job_configuration\"]\n        variables_schema = base_job_template[\"variables\"]\n        variables = cls._get_base_config_defaults(\n            variables_schema.get(\"properties\", {})\n        )\n        variables.update(values)\n\n        populated_configuration = apply_values(template=job_config, values=variables)\n        populated_configuration = await resolve_block_document_references(\n            template=populated_configuration, client=client\n        )\n        populated_configuration = await resolve_variables(\n            template=populated_configuration, client=client\n        )\n        return cls(**populated_configuration)\n\n    @classmethod\n    def json_template(cls) -> dict:\n        \"\"\"Returns a dict with job configuration as keys and the corresponding templates as values\n\n        Defaults to using the job configuration parameter name as the template variable name.\n\n        e.g.\n        {\n            key1: '{{ key1 }}',     # default variable template\n            key2: '{{ template2 }}', # `template2` specifically provide as template\n        }\n        \"\"\"\n        configuration = {}\n        properties = cls.schema()[\"properties\"]\n        for k, v in properties.items():\n            if v.get(\"template\"):\n                template = v[\"template\"]\n            else:\n                template = \"{{ \" + k + 
\" }}\"\n            configuration[k] = template\n\n        return configuration\n\n    def prepare_for_flow_run(\n        self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"DeploymentResponse\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ):\n        \"\"\"\n        Prepare the job configuration for a flow run.\n\n        This method is called by the worker before starting a flow run. It\n        should be used to set any configuration values that are dependent on\n        the flow run.\n\n        Args:\n            flow_run: The flow run to be executed.\n            deployment: The deployment that the flow run is associated with.\n            flow: The flow that the flow run is associated with.\n        \"\"\"\n\n        self._related_objects = {\n            \"deployment\": deployment,\n            \"flow\": flow,\n            \"flow-run\": flow_run,\n        }\n        if deployment is not None:\n            deployment_labels = self._base_deployment_labels(deployment)\n        else:\n            deployment_labels = {}\n\n        if flow is not None:\n            flow_labels = self._base_flow_labels(flow)\n        else:\n            flow_labels = {}\n\n        env = {\n            **self._base_environment(),\n            **self._base_flow_run_environment(flow_run),\n            **self.env,\n        }\n        self.env = {key: value for key, value in env.items() if value is not None}\n        self.labels = {\n            **self._base_flow_run_labels(flow_run),\n            **deployment_labels,\n            **flow_labels,\n            **self.labels,\n        }\n        self.name = self.name or flow_run.name\n        self.command = self.command or self._base_flow_run_command()\n\n    @staticmethod\n    def _base_flow_run_command() -> str:\n        \"\"\"\n        Generate a command for a flow run job.\n        \"\"\"\n        if experiment_enabled(\"enhanced_cancellation\"):\n            if (\n                PREFECT_EXPERIMENTAL_WARN\n                and PREFECT_EXPERIMENTAL_WARN_ENHANCED_CANCELLATION\n            ):\n                warnings.warn(\n                    EXPERIMENTAL_WARNING.format(\n                        feature=\"Enhanced flow run cancellation\",\n                        group=\"enhanced_cancellation\",\n                        help=\"\",\n                    ),\n                    ExperimentalFeature,\n                    stacklevel=3,\n                )\n            return \"prefect flow-run execute\"\n        return \"python -m prefect.engine\"\n\n    @staticmethod\n    def _base_flow_run_labels(flow_run: \"FlowRun\") -> Dict[str, str]:\n        \"\"\"\n        Generate a dictionary of labels for a flow run job.\n        \"\"\"\n        return {\n            \"prefect.io/flow-run-id\": str(flow_run.id),\n            \"prefect.io/flow-run-name\": flow_run.name,\n            \"prefect.io/version\": prefect.__version__,\n        }\n\n    @classmethod\n    def _base_environment(cls) -> Dict[str, str]:\n        \"\"\"\n        Environment variables that should be passed to all created infrastructure.\n\n        These values should be overridable with the `env` field.\n        \"\"\"\n        return get_current_settings().to_environment_variables(exclude_unset=True)\n\n    @staticmethod\n    def _base_flow_run_environment(flow_run: \"FlowRun\") -> Dict[str, str]:\n        \"\"\"\n        Generate a dictionary of environment variables for a flow run job.\n        \"\"\"\n        return {\"PREFECT__FLOW_RUN_ID\": str(flow_run.id)}\n\n    
@staticmethod\n    def _base_deployment_labels(deployment: \"DeploymentResponse\") -> Dict[str, str]:\n        labels = {\n            \"prefect.io/deployment-id\": str(deployment.id),\n            \"prefect.io/deployment-name\": deployment.name,\n        }\n        if deployment.updated is not None:\n            labels[\"prefect.io/deployment-updated\"] = deployment.updated.in_timezone(\n                \"utc\"\n            ).to_iso8601_string()\n        return labels\n\n    @staticmethod\n    def _base_flow_labels(flow: \"Flow\") -> Dict[str, str]:\n        return {\n            \"prefect.io/flow-id\": str(flow.id),\n            \"prefect.io/flow-name\": flow.name,\n        }\n\n    def _related_resources(self) -> List[RelatedResource]:\n        tags = set()\n        related = []\n\n        for kind, obj in self._related_objects.items():\n            if obj is None:\n                continue\n            if hasattr(obj, \"tags\"):\n                tags.update(obj.tags)\n            related.append(object_as_related_resource(kind=kind, role=kind, object=obj))\n\n        return related + tags_as_related_resources(tags)\n
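Worker implementations typically extend this class with their own fields. A minimal sketch, assuming only that Prefect is installed; the class name and the `image` field below are illustrative and not part of the source above:

```python
# Hedged sketch of extending BaseJobConfiguration with a worker-specific field.
# `MyJobConfiguration` and `image` are illustrative names, not part of Prefect.
from typing import Optional

from prefect.workers.base import BaseJobConfiguration


class MyJobConfiguration(BaseJobConfiguration):
    # A plain annotated attribute keeps the sketch independent of the pydantic version.
    image: Optional[str] = None  # e.g. a container image for created jobs


config = MyJobConfiguration(image="registry.example.com/flows:latest")
print(config.env, config.labels, config.image)
```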
      ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration.from_template_and_values","title":"from_template_and_values async classmethod","text":"

      Creates a valid worker configuration object from the provided base configuration and overrides.

      Important: this method expects that the base_job_template was already validated server-side.

      Source code in prefect/workers/base.py
      @classmethod\n@inject_client\nasync def from_template_and_values(\n    cls, base_job_template: dict, values: dict, client: \"PrefectClient\" = None\n):\n    \"\"\"Creates a valid worker configuration object from the provided base\n    configuration and overrides.\n\n    Important: this method expects that the base_job_template was already\n    validated server-side.\n    \"\"\"\n    job_config: Dict[str, Any] = base_job_template[\"job_configuration\"]\n    variables_schema = base_job_template[\"variables\"]\n    variables = cls._get_base_config_defaults(\n        variables_schema.get(\"properties\", {})\n    )\n    variables.update(values)\n\n    populated_configuration = apply_values(template=job_config, values=variables)\n    populated_configuration = await resolve_block_document_references(\n        template=populated_configuration, client=client\n    )\n    populated_configuration = await resolve_variables(\n        template=populated_configuration, client=client\n    )\n    return cls(**populated_configuration)\n
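As a rough usage sketch, assuming a reachable Prefect API (or the default ephemeral server settings); the base job template here is just the default one generated from `BaseJobConfiguration` itself:

```python
# Hedged sketch: build a configuration from a base job template plus overrides.
# Assumes a configured Prefect API or the default ephemeral server.
import asyncio

from prefect.workers.base import BaseJobConfiguration

base_job_template = {
    "job_configuration": BaseJobConfiguration.json_template(),
    "variables": BaseJobConfiguration.schema(),
}

config = asyncio.run(
    BaseJobConfiguration.from_template_and_values(
        base_job_template=base_job_template,
        values={"name": "example-job"},  # overrides applied on top of schema defaults
    )
)
print(config.name)  # "example-job"
```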
      ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration.json_template","title":"json_template classmethod","text":"

Returns a dict with job configuration fields as keys and the corresponding templates as values

      Defaults to using the job configuration parameter name as the template variable name.

For example: key1: '{{ key1 }}' (the default variable template) and key2: '{{ template2 }}' (where 'template2' is explicitly provided as the template).

      Source code in prefect/workers/base.py
      @classmethod\ndef json_template(cls) -> dict:\n    \"\"\"Returns a dict with job configuration as keys and the corresponding templates as values\n\n    Defaults to using the job configuration parameter name as the template variable name.\n\n    e.g.\n    {\n        key1: '{{ key1 }}',     # default variable template\n        key2: '{{ template2 }}', # `template2` specifically provide as template\n    }\n    \"\"\"\n    configuration = {}\n    properties = cls.schema()[\"properties\"]\n    for k, v in properties.items():\n        if v.get(\"template\"):\n            template = v[\"template\"]\n        else:\n            template = \"{{ \" + k + \" }}\"\n        configuration[k] = template\n\n    return configuration\n
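Calling it on the base class yields one placeholder per field; a minimal sketch (key order may differ):

```python
# Minimal sketch: each job configuration field maps to a Jinja-style placeholder.
from prefect.workers.base import BaseJobConfiguration

template = BaseJobConfiguration.json_template()
print(template)
# {'command': '{{ command }}', 'env': '{{ env }}', 'labels': '{{ labels }}', 'name': '{{ name }}'}
```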
      ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration.prepare_for_flow_run","title":"prepare_for_flow_run","text":"

      Prepare the job configuration for a flow run.

      This method is called by the worker before starting a flow run. It should be used to set any configuration values that are dependent on the flow run.

      Parameters:

- flow_run (FlowRun): The flow run to be executed. Required.
- deployment (Optional[DeploymentResponse]): The deployment that the flow run is associated with. Defaults to None.
- flow (Optional[Flow]): The flow that the flow run is associated with. Defaults to None.

Source code in prefect/workers/base.py
      def prepare_for_flow_run(\n    self,\n    flow_run: \"FlowRun\",\n    deployment: Optional[\"DeploymentResponse\"] = None,\n    flow: Optional[\"Flow\"] = None,\n):\n    \"\"\"\n    Prepare the job configuration for a flow run.\n\n    This method is called by the worker before starting a flow run. It\n    should be used to set any configuration values that are dependent on\n    the flow run.\n\n    Args:\n        flow_run: The flow run to be executed.\n        deployment: The deployment that the flow run is associated with.\n        flow: The flow that the flow run is associated with.\n    \"\"\"\n\n    self._related_objects = {\n        \"deployment\": deployment,\n        \"flow\": flow,\n        \"flow-run\": flow_run,\n    }\n    if deployment is not None:\n        deployment_labels = self._base_deployment_labels(deployment)\n    else:\n        deployment_labels = {}\n\n    if flow is not None:\n        flow_labels = self._base_flow_labels(flow)\n    else:\n        flow_labels = {}\n\n    env = {\n        **self._base_environment(),\n        **self._base_flow_run_environment(flow_run),\n        **self.env,\n    }\n    self.env = {key: value for key, value in env.items() if value is not None}\n    self.labels = {\n        **self._base_flow_run_labels(flow_run),\n        **deployment_labels,\n        **flow_labels,\n        **self.labels,\n    }\n    self.name = self.name or flow_run.name\n    self.command = self.command or self._base_flow_run_command()\n
      ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker","title":"BaseWorker","text":"

      Bases: ABC

      Source code in prefect/workers/base.py
      @register_base_type\nclass BaseWorker(abc.ABC):\n    type: str\n    job_configuration: Type[BaseJobConfiguration] = BaseJobConfiguration\n    job_configuration_variables: Optional[Type[BaseVariables]] = None\n\n    _documentation_url = \"\"\n    _logo_url = \"\"\n    _description = \"\"\n\n    def __init__(\n        self,\n        work_pool_name: str,\n        work_queues: Optional[List[str]] = None,\n        name: Optional[str] = None,\n        prefetch_seconds: Optional[float] = None,\n        create_pool_if_not_found: bool = True,\n        limit: Optional[int] = None,\n        heartbeat_interval_seconds: Optional[int] = None,\n        *,\n        base_job_template: Optional[Dict[str, Any]] = None,\n    ):\n        \"\"\"\n        Base class for all Prefect workers.\n\n        Args:\n            name: The name of the worker. If not provided, a random one\n                will be generated. If provided, it cannot contain '/' or '%'.\n                The name is used to identify the worker in the UI; if two\n                processes have the same name, they will be treated as the same\n                worker.\n            work_pool_name: The name of the work pool to poll.\n            work_queues: A list of work queues to poll. If not provided, all\n                work queue in the work pool will be polled.\n            prefetch_seconds: The number of seconds to prefetch flow runs for.\n            create_pool_if_not_found: Whether to create the work pool\n                if it is not found. Defaults to `True`, but can be set to `False` to\n                ensure that work pools are not created accidentally.\n            limit: The maximum number of flow runs this worker should be running at\n                a given time.\n            base_job_template: If creating the work pool, provide the base job\n                template to use. 
Logs a warning if the pool already exists.\n        \"\"\"\n        if name and (\"/\" in name or \"%\" in name):\n            raise ValueError(\"Worker name cannot contain '/' or '%'\")\n        self.name = name or f\"{self.__class__.__name__} {uuid4()}\"\n        self._logger = get_logger(f\"worker.{self.__class__.type}.{self.name.lower()}\")\n\n        self.is_setup = False\n        self._create_pool_if_not_found = create_pool_if_not_found\n        self._base_job_template = base_job_template\n        self._work_pool_name = work_pool_name\n        self._work_queues: Set[str] = set(work_queues) if work_queues else set()\n\n        self._prefetch_seconds: float = (\n            prefetch_seconds or PREFECT_WORKER_PREFETCH_SECONDS.value()\n        )\n        self.heartbeat_interval_seconds = (\n            heartbeat_interval_seconds or PREFECT_WORKER_HEARTBEAT_SECONDS.value()\n        )\n\n        self._work_pool: Optional[WorkPool] = None\n        self._runs_task_group: Optional[anyio.abc.TaskGroup] = None\n        self._client: Optional[PrefectClient] = None\n        self._last_polled_time: pendulum.DateTime = pendulum.now(\"utc\")\n        self._limit = limit\n        self._limiter: Optional[anyio.CapacityLimiter] = None\n        self._submitting_flow_run_ids = set()\n        self._cancelling_flow_run_ids = set()\n        self._scheduled_task_scopes = set()\n\n    @classmethod\n    def get_documentation_url(cls) -> str:\n        return cls._documentation_url\n\n    @classmethod\n    def get_logo_url(cls) -> str:\n        return cls._logo_url\n\n    @classmethod\n    def get_description(cls) -> str:\n        return cls._description\n\n    @classmethod\n    def get_default_base_job_template(cls) -> Dict:\n        if cls.job_configuration_variables is None:\n            schema = cls.job_configuration.schema()\n            # remove \"template\" key from all dicts in schema['properties'] because it is not a\n            # relevant field\n            for key, value in schema[\"properties\"].items():\n                if isinstance(value, dict):\n                    schema[\"properties\"][key].pop(\"template\", None)\n            variables_schema = schema\n        else:\n            variables_schema = cls.job_configuration_variables.schema()\n        variables_schema.pop(\"title\", None)\n        return {\n            \"job_configuration\": cls.job_configuration.json_template(),\n            \"variables\": variables_schema,\n        }\n\n    @staticmethod\n    def get_worker_class_from_type(type: str) -> Optional[Type[\"BaseWorker\"]]:\n        \"\"\"\n        Returns the worker class for a given worker type. 
If the worker type\n        is not recognized, returns None.\n        \"\"\"\n        load_prefect_collections()\n        worker_registry = get_registry_for_type(BaseWorker)\n        if worker_registry is not None:\n            return worker_registry.get(type)\n\n    @staticmethod\n    def get_all_available_worker_types() -> List[str]:\n        \"\"\"\n        Returns all worker types available in the local registry.\n        \"\"\"\n        load_prefect_collections()\n        worker_registry = get_registry_for_type(BaseWorker)\n        if worker_registry is not None:\n            return list(worker_registry.keys())\n        return []\n\n    def get_name_slug(self):\n        return slugify(self.name)\n\n    def get_flow_run_logger(self, flow_run: \"FlowRun\") -> PrefectLogAdapter:\n        return flow_run_logger(flow_run=flow_run).getChild(\n            \"worker\",\n            extra={\n                \"worker_name\": self.name,\n                \"work_pool_name\": (\n                    self._work_pool_name if self._work_pool else \"<unknown>\"\n                ),\n                \"work_pool_id\": str(getattr(self._work_pool, \"id\", \"unknown\")),\n            },\n        )\n\n    @abc.abstractmethod\n    async def run(\n        self,\n        flow_run: \"FlowRun\",\n        configuration: BaseJobConfiguration,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> BaseWorkerResult:\n        \"\"\"\n        Runs a given flow run on the current worker.\n        \"\"\"\n        raise NotImplementedError(\n            \"Workers must implement a method for running submitted flow runs\"\n        )\n\n    async def kill_infrastructure(\n        self,\n        infrastructure_pid: str,\n        configuration: BaseJobConfiguration,\n        grace_seconds: int = 30,\n    ):\n        \"\"\"\n        Method for killing infrastructure created by a worker. 
Should be implemented by\n        individual workers if they support killing infrastructure.\n        \"\"\"\n        raise NotImplementedError(\n            \"This worker does not support killing infrastructure.\"\n        )\n\n    @classmethod\n    def __dispatch_key__(cls):\n        if cls.__name__ == \"BaseWorker\":\n            return None  # The base class is abstract\n        return cls.type\n\n    async def setup(self):\n        \"\"\"Prepares the worker to run.\"\"\"\n        self._logger.debug(\"Setting up worker...\")\n        self._runs_task_group = anyio.create_task_group()\n        self._limiter = (\n            anyio.CapacityLimiter(self._limit) if self._limit is not None else None\n        )\n        self._client = get_client()\n        await self._client.__aenter__()\n        await self._runs_task_group.__aenter__()\n\n        self.is_setup = True\n\n    async def teardown(self, *exc_info):\n        \"\"\"Cleans up resources after the worker is stopped.\"\"\"\n        self._logger.debug(\"Tearing down worker...\")\n        self.is_setup = False\n        for scope in self._scheduled_task_scopes:\n            scope.cancel()\n        if self._runs_task_group:\n            await self._runs_task_group.__aexit__(*exc_info)\n        if self._client:\n            await self._client.__aexit__(*exc_info)\n        self._runs_task_group = None\n        self._client = None\n\n    def is_worker_still_polling(self, query_interval_seconds: int) -> bool:\n        \"\"\"\n        This method is invoked by a webserver healthcheck handler\n        and returns a boolean indicating if the worker has recorded a\n        scheduled flow run poll within a variable amount of time.\n\n        The `query_interval_seconds` is the same value that is used by\n        the loop services - we will evaluate if the _last_polled_time\n        was within that interval x 30 (so 10s -> 5m)\n\n        The instance property `self._last_polled_time`\n        is currently set/updated in `get_and_submit_flow_runs()`\n        \"\"\"\n        threshold_seconds = query_interval_seconds * 30\n\n        seconds_since_last_poll = (\n            pendulum.now(\"utc\") - self._last_polled_time\n        ).in_seconds()\n\n        is_still_polling = seconds_since_last_poll <= threshold_seconds\n\n        if not is_still_polling:\n            self._logger.error(\n                f\"Worker has not polled in the last {seconds_since_last_poll} seconds \"\n                \"and should be restarted\"\n            )\n\n        return is_still_polling\n\n    async def get_and_submit_flow_runs(self):\n        runs_response = await self._get_scheduled_flow_runs()\n\n        self._last_polled_time = pendulum.now(\"utc\")\n\n        return await self._submit_scheduled_flow_runs(flow_run_response=runs_response)\n\n    async def check_for_cancelled_flow_runs(self):\n        if not self.is_setup:\n            raise RuntimeError(\n                \"Worker is not set up. 
Please make sure you are running this worker \"\n                \"as an async context manager.\"\n            )\n\n        self._logger.debug(\"Checking for cancelled flow runs...\")\n\n        work_queue_filter = (\n            WorkQueueFilter(name=WorkQueueFilterName(any_=list(self._work_queues)))\n            if self._work_queues\n            else None\n        )\n\n        named_cancelling_flow_runs = await self._client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=FlowRunFilterState(\n                    type=FlowRunFilterStateType(any_=[StateType.CANCELLED]),\n                    name=FlowRunFilterStateName(any_=[\"Cancelling\"]),\n                ),\n                # Avoid duplicate cancellation calls\n                id=FlowRunFilterId(not_any_=list(self._cancelling_flow_run_ids)),\n            ),\n            work_pool_filter=WorkPoolFilter(\n                name=WorkPoolFilterName(any_=[self._work_pool_name])\n            ),\n            work_queue_filter=work_queue_filter,\n        )\n\n        typed_cancelling_flow_runs = await self._client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=FlowRunFilterState(\n                    type=FlowRunFilterStateType(any_=[StateType.CANCELLING]),\n                ),\n                # Avoid duplicate cancellation calls\n                id=FlowRunFilterId(not_any_=list(self._cancelling_flow_run_ids)),\n            ),\n            work_pool_filter=WorkPoolFilter(\n                name=WorkPoolFilterName(any_=[self._work_pool_name])\n            ),\n            work_queue_filter=work_queue_filter,\n        )\n\n        cancelling_flow_runs = named_cancelling_flow_runs + typed_cancelling_flow_runs\n\n        if cancelling_flow_runs:\n            self._logger.info(\n                f\"Found {len(cancelling_flow_runs)} flow runs awaiting cancellation.\"\n            )\n\n        for flow_run in cancelling_flow_runs:\n            self._cancelling_flow_run_ids.add(flow_run.id)\n            self._runs_task_group.start_soon(self.cancel_run, flow_run)\n\n        return cancelling_flow_runs\n\n    async def cancel_run(self, flow_run: \"FlowRun\"):\n        run_logger = self.get_flow_run_logger(flow_run)\n\n        try:\n            configuration = await self._get_configuration(flow_run)\n        except ObjectNotFound:\n            self._logger.warning(\n                f\"Flow run {flow_run.id!r} cannot be cancelled by this worker:\"\n                f\" associated deployment {flow_run.deployment_id!r} does not exist.\"\n            )\n            await self._mark_flow_run_as_cancelled(\n                flow_run,\n                state_updates={\n                    \"message\": (\n                        \"This flow run is missing infrastructure configuration information\"\n                        \" and cancellation cannot be guaranteed.\"\n                    )\n                },\n            )\n            return\n        else:\n            if configuration.is_using_a_runner:\n                self._logger.info(\n                    f\"Skipping cancellation because flow run {str(flow_run.id)!r} is\"\n                    \" using enhanced cancellation. A dedicated runner will handle\"\n                    \" cancellation.\"\n                )\n                return\n\n        if not flow_run.infrastructure_pid:\n            run_logger.error(\n                f\"Flow run '{flow_run.id}' does not have an infrastructure pid\"\n                \" attached. 
Cancellation cannot be guaranteed.\"\n            )\n            await self._mark_flow_run_as_cancelled(\n                flow_run,\n                state_updates={\n                    \"message\": (\n                        \"This flow run is missing infrastructure tracking information\"\n                        \" and cancellation cannot be guaranteed.\"\n                    )\n                },\n            )\n            return\n\n        try:\n            await self.kill_infrastructure(\n                infrastructure_pid=flow_run.infrastructure_pid,\n                configuration=configuration,\n            )\n        except NotImplementedError:\n            self._logger.error(\n                f\"Worker type {self.type!r} does not support killing created \"\n                \"infrastructure. Cancellation cannot be guaranteed.\"\n            )\n        except InfrastructureNotFound as exc:\n            self._logger.warning(f\"{exc} Marking flow run as cancelled.\")\n            await self._mark_flow_run_as_cancelled(flow_run)\n        except InfrastructureNotAvailable as exc:\n            self._logger.warning(f\"{exc} Flow run cannot be cancelled by this worker.\")\n        except Exception:\n            run_logger.exception(\n                \"Encountered exception while killing infrastructure for flow run \"\n                f\"'{flow_run.id}'. Flow run may not be cancelled.\"\n            )\n            # We will try again on generic exceptions\n            self._cancelling_flow_run_ids.remove(flow_run.id)\n            return\n        else:\n            self._emit_flow_run_cancelled_event(\n                flow_run=flow_run, configuration=configuration\n            )\n            await self._mark_flow_run_as_cancelled(flow_run)\n            run_logger.info(f\"Cancelled flow run '{flow_run.id}'!\")\n\n    async def _update_local_work_pool_info(self):\n        try:\n            work_pool = await self._client.read_work_pool(\n                work_pool_name=self._work_pool_name\n            )\n        except ObjectNotFound:\n            if self._create_pool_if_not_found:\n                wp = WorkPoolCreate(\n                    name=self._work_pool_name,\n                    type=self.type,\n                )\n                if self._base_job_template is not None:\n                    wp.base_job_template = self._base_job_template\n\n                work_pool = await self._client.create_work_pool(work_pool=wp)\n                self._logger.info(f\"Work pool {self._work_pool_name!r} created.\")\n            else:\n                self._logger.warning(f\"Work pool {self._work_pool_name!r} not found!\")\n                if self._base_job_template is not None:\n                    self._logger.warning(\n                        \"Ignoring supplied base job template because the work pool\"\n                        \" already exists\"\n                    )\n                return\n\n        # if the remote config type changes (or if it's being loaded for the\n        # first time), check if it matches the local type and warn if not\n        if getattr(self._work_pool, \"type\", 0) != work_pool.type:\n            if work_pool.type != self.__class__.type:\n                self._logger.warning(\n                    \"Worker type mismatch! This worker process expects type \"\n                    f\"{self.type!r} but received {work_pool.type!r}\"\n                    \" from the server. 
Unexpected behavior may occur.\"\n                )\n\n        # once the work pool is loaded, verify that it has a `base_job_template` and\n        # set it if not\n        if not work_pool.base_job_template:\n            job_template = self.__class__.get_default_base_job_template()\n            await self._set_work_pool_template(work_pool, job_template)\n            work_pool.base_job_template = job_template\n\n        self._work_pool = work_pool\n\n    async def _send_worker_heartbeat(self):\n        if self._work_pool:\n            await self._client.send_worker_heartbeat(\n                work_pool_name=self._work_pool_name,\n                worker_name=self.name,\n                heartbeat_interval_seconds=self.heartbeat_interval_seconds,\n            )\n\n    async def sync_with_backend(self):\n        \"\"\"\n        Updates the worker's local information about it's current work pool and\n        queues. Sends a worker heartbeat to the API.\n        \"\"\"\n        await self._update_local_work_pool_info()\n\n        await self._send_worker_heartbeat()\n\n        self._logger.debug(\"Worker synchronized with the Prefect API server.\")\n\n    async def _get_scheduled_flow_runs(\n        self,\n    ) -> List[\"WorkerFlowRunResponse\"]:\n        \"\"\"\n        Retrieve scheduled flow runs from the work pool's queues.\n        \"\"\"\n        scheduled_before = pendulum.now(\"utc\").add(seconds=int(self._prefetch_seconds))\n        self._logger.debug(\n            f\"Querying for flow runs scheduled before {scheduled_before}\"\n        )\n        try:\n            scheduled_flow_runs = (\n                await self._client.get_scheduled_flow_runs_for_work_pool(\n                    work_pool_name=self._work_pool_name,\n                    scheduled_before=scheduled_before,\n                    work_queue_names=list(self._work_queues),\n                )\n            )\n            self._logger.debug(\n                f\"Discovered {len(scheduled_flow_runs)} scheduled_flow_runs\"\n            )\n            return scheduled_flow_runs\n        except ObjectNotFound:\n            # the pool doesn't exist; it will be created on the next\n            # heartbeat (or an appropriate warning will be logged)\n            return []\n\n    async def _submit_scheduled_flow_runs(\n        self, flow_run_response: List[\"WorkerFlowRunResponse\"]\n    ) -> List[\"FlowRun\"]:\n        \"\"\"\n        Takes a list of WorkerFlowRunResponses and submits the referenced flow runs\n        for execution by the worker.\n        \"\"\"\n        submittable_flow_runs = [entry.flow_run for entry in flow_run_response]\n        submittable_flow_runs.sort(key=lambda run: run.next_scheduled_start_time)\n        for flow_run in submittable_flow_runs:\n            if flow_run.id in self._submitting_flow_run_ids:\n                continue\n\n            try:\n                if self._limiter:\n                    self._limiter.acquire_on_behalf_of_nowait(flow_run.id)\n            except anyio.WouldBlock:\n                self._logger.info(\n                    f\"Flow run limit reached; {self._limiter.borrowed_tokens} flow runs\"\n                    \" in progress.\"\n                )\n                break\n            else:\n                run_logger = self.get_flow_run_logger(flow_run)\n                run_logger.info(\n                    f\"Worker '{self.name}' submitting flow run '{flow_run.id}'\"\n                )\n                self._submitting_flow_run_ids.add(flow_run.id)\n                
self._runs_task_group.start_soon(\n                    self._submit_run,\n                    flow_run,\n                )\n\n        return list(\n            filter(\n                lambda run: run.id in self._submitting_flow_run_ids,\n                submittable_flow_runs,\n            )\n        )\n\n    async def _check_flow_run(self, flow_run: \"FlowRun\") -> None:\n        \"\"\"\n        Performs a check on a submitted flow run to warn the user if the flow run\n        was created from a deployment with a storage block.\n        \"\"\"\n        if flow_run.deployment_id:\n            deployment = await self._client.read_deployment(flow_run.deployment_id)\n            if deployment.storage_document_id:\n                raise ValueError(\n                    f\"Flow run {flow_run.id!r} was created from deployment\"\n                    f\" {deployment.name!r} which is configured with a storage block.\"\n                    \" Please use an\"\n                    \" agent to execute this flow run.\"\n                )\n\n    async def _submit_run(self, flow_run: \"FlowRun\") -> None:\n        \"\"\"\n        Submits a given flow run for execution by the worker.\n        \"\"\"\n        run_logger = self.get_flow_run_logger(flow_run)\n\n        try:\n            await self._check_flow_run(flow_run)\n        except (ValueError, ObjectNotFound):\n            self._logger.exception(\n                (\n                    \"Flow run %s did not pass checks and will not be submitted for\"\n                    \" execution\"\n                ),\n                flow_run.id,\n            )\n            self._submitting_flow_run_ids.remove(flow_run.id)\n            return\n\n        ready_to_submit = await self._propose_pending_state(flow_run)\n\n        if ready_to_submit:\n            readiness_result = await self._runs_task_group.start(\n                self._submit_run_and_capture_errors, flow_run\n            )\n\n            if readiness_result and not isinstance(readiness_result, Exception):\n                try:\n                    await self._client.update_flow_run(\n                        flow_run_id=flow_run.id,\n                        infrastructure_pid=str(readiness_result),\n                    )\n                except Exception:\n                    run_logger.exception(\n                        \"An error occurred while setting the `infrastructure_pid` on \"\n                        f\"flow run {flow_run.id!r}. 
The flow run will \"\n                        \"not be cancellable.\"\n                    )\n\n            run_logger.info(f\"Completed submission of flow run '{flow_run.id}'\")\n\n        else:\n            # If the run is not ready to submit, release the concurrency slot\n            if self._limiter:\n                self._limiter.release_on_behalf_of(flow_run.id)\n\n        self._submitting_flow_run_ids.remove(flow_run.id)\n\n    async def _submit_run_and_capture_errors(\n        self, flow_run: \"FlowRun\", task_status: anyio.abc.TaskStatus = None\n    ) -> Union[BaseWorkerResult, Exception]:\n        run_logger = self.get_flow_run_logger(flow_run)\n\n        try:\n            configuration = await self._get_configuration(flow_run)\n            submitted_event = self._emit_flow_run_submitted_event(configuration)\n            result = await self.run(\n                flow_run=flow_run,\n                task_status=task_status,\n                configuration=configuration,\n            )\n        except Exception as exc:\n            if not task_status._future.done():\n                # This flow run was being submitted and did not start successfully\n                run_logger.exception(\n                    f\"Failed to submit flow run '{flow_run.id}' to infrastructure.\"\n                )\n                # Mark the task as started to prevent agent crash\n                task_status.started(exc)\n                await self._propose_crashed_state(\n                    flow_run, \"Flow run could not be submitted to infrastructure\"\n                )\n            else:\n                run_logger.exception(\n                    f\"An error occurred while monitoring flow run '{flow_run.id}'. \"\n                    \"The flow run will not be marked as failed, but an issue may have \"\n                    \"occurred.\"\n                )\n            return exc\n        finally:\n            if self._limiter:\n                self._limiter.release_on_behalf_of(flow_run.id)\n\n        if not task_status._future.done():\n            run_logger.error(\n                f\"Infrastructure returned without reporting flow run '{flow_run.id}' \"\n                \"as started or raising an error. This behavior is not expected and \"\n                \"generally indicates improper implementation of infrastructure. 
The \"\n                \"flow run will not be marked as failed, but an issue may have occurred.\"\n            )\n            # Mark the task as started to prevent agent crash\n            task_status.started()\n\n        if result.status_code != 0:\n            await self._propose_crashed_state(\n                flow_run,\n                (\n                    \"Flow run infrastructure exited with non-zero status code\"\n                    f\" {result.status_code}.\"\n                ),\n            )\n\n        self._emit_flow_run_executed_event(result, configuration, submitted_event)\n\n        return result\n\n    def get_status(self):\n        \"\"\"\n        Retrieves the status of the current worker including its name, current worker\n        pool, the work pool queues it is polling, and its local settings.\n        \"\"\"\n        return {\n            \"name\": self.name,\n            \"work_pool\": (\n                self._work_pool.dict(json_compatible=True)\n                if self._work_pool is not None\n                else None\n            ),\n            \"settings\": {\n                \"prefetch_seconds\": self._prefetch_seconds,\n            },\n        }\n\n    async def _get_configuration(\n        self,\n        flow_run: \"FlowRun\",\n    ) -> BaseJobConfiguration:\n        deployment = await self._client.read_deployment(flow_run.deployment_id)\n        flow = await self._client.read_flow(flow_run.flow_id)\n\n        deployment_vars = deployment.job_variables or {}\n        flow_run_vars = flow_run.job_variables or {}\n        job_variables = {**deployment_vars, **flow_run_vars}\n\n        configuration = await self.job_configuration.from_template_and_values(\n            base_job_template=self._work_pool.base_job_template,\n            values=job_variables,\n            client=self._client,\n        )\n        configuration.prepare_for_flow_run(\n            flow_run=flow_run, deployment=deployment, flow=flow\n        )\n        return configuration\n\n    async def _propose_pending_state(self, flow_run: \"FlowRun\") -> bool:\n        run_logger = self.get_flow_run_logger(flow_run)\n        state = flow_run.state\n        try:\n            state = await propose_state(\n                self._client, Pending(), flow_run_id=flow_run.id\n            )\n        except Abort as exc:\n            run_logger.info(\n                (\n                    f\"Aborted submission of flow run '{flow_run.id}'. 
\"\n                    f\"Server sent an abort signal: {exc}\"\n                ),\n            )\n            return False\n        except Exception:\n            run_logger.exception(\n                f\"Failed to update state of flow run '{flow_run.id}'\",\n            )\n            return False\n\n        if not state.is_pending():\n            run_logger.info(\n                (\n                    f\"Aborted submission of flow run '{flow_run.id}': \"\n                    f\"Server returned a non-pending state {state.type.value!r}\"\n                ),\n            )\n            return False\n\n        return True\n\n    async def _propose_failed_state(self, flow_run: \"FlowRun\", exc: Exception) -> None:\n        run_logger = self.get_flow_run_logger(flow_run)\n        try:\n            await propose_state(\n                self._client,\n                await exception_to_failed_state(message=\"Submission failed.\", exc=exc),\n                flow_run_id=flow_run.id,\n            )\n        except Abort:\n            # We've already failed, no need to note the abort but we don't want it to\n            # raise in the agent process\n            pass\n        except Exception:\n            run_logger.error(\n                f\"Failed to update state of flow run '{flow_run.id}'\",\n                exc_info=True,\n            )\n\n    async def _propose_crashed_state(self, flow_run: \"FlowRun\", message: str) -> None:\n        run_logger = self.get_flow_run_logger(flow_run)\n        try:\n            state = await propose_state(\n                self._client,\n                Crashed(message=message),\n                flow_run_id=flow_run.id,\n            )\n        except Abort:\n            # Flow run already marked as failed\n            pass\n        except Exception:\n            run_logger.exception(f\"Failed to update state of flow run '{flow_run.id}'\")\n        else:\n            if state.is_crashed():\n                run_logger.info(\n                    f\"Reported flow run '{flow_run.id}' as crashed: {message}\"\n                )\n\n    async def _mark_flow_run_as_cancelled(\n        self, flow_run: \"FlowRun\", state_updates: Optional[dict] = None\n    ) -> None:\n        state_updates = state_updates or {}\n        state_updates.setdefault(\"name\", \"Cancelled\")\n        state_updates.setdefault(\"type\", StateType.CANCELLED)\n        state = flow_run.state.copy(update=state_updates)\n\n        await self._client.set_flow_run_state(flow_run.id, state, force=True)\n\n        # Do not remove the flow run from the cancelling set immediately because\n        # the API caches responses for the `read_flow_runs` and we do not want to\n        # duplicate cancellations.\n        await self._schedule_task(\n            60 * 10, self._cancelling_flow_run_ids.remove, flow_run.id\n        )\n\n    async def _set_work_pool_template(self, work_pool, job_template):\n        \"\"\"Updates the `base_job_template` for the worker's work pool server side.\"\"\"\n        await self._client.update_work_pool(\n            work_pool_name=work_pool.name,\n            work_pool=WorkPoolUpdate(\n                base_job_template=job_template,\n            ),\n        )\n\n    async def _schedule_task(self, __in_seconds: int, fn, *args, **kwargs):\n        \"\"\"\n        Schedule a background task to start after some time.\n\n        These tasks will be run immediately when the worker exits instead of waiting.\n\n        The function may be async or sync. 
Async functions will be awaited.\n        \"\"\"\n\n        async def wrapper(task_status):\n            # If we are shutting down, do not sleep; otherwise sleep until the scheduled\n            # time or shutdown\n            if self.is_setup:\n                with anyio.CancelScope() as scope:\n                    self._scheduled_task_scopes.add(scope)\n                    task_status.started()\n                    await anyio.sleep(__in_seconds)\n\n                self._scheduled_task_scopes.remove(scope)\n            else:\n                task_status.started()\n\n            result = fn(*args, **kwargs)\n            if inspect.iscoroutine(result):\n                await result\n\n        await self._runs_task_group.start(wrapper)\n\n    async def __aenter__(self):\n        self._logger.debug(\"Entering worker context...\")\n        await self.setup()\n        return self\n\n    async def __aexit__(self, *exc_info):\n        self._logger.debug(\"Exiting worker context...\")\n        await self.teardown(*exc_info)\n\n    def __repr__(self):\n        return f\"Worker(pool={self._work_pool_name!r}, name={self.name!r})\"\n\n    def _event_resource(self):\n        return {\n            \"prefect.resource.id\": f\"prefect.worker.{self.type}.{self.get_name_slug()}\",\n            \"prefect.resource.name\": self.name,\n            \"prefect.version\": prefect.__version__,\n            \"prefect.worker-type\": self.type,\n        }\n\n    def _event_related_resources(\n        self,\n        configuration: Optional[BaseJobConfiguration] = None,\n        include_self: bool = False,\n    ) -> List[RelatedResource]:\n        related = []\n        if configuration:\n            related += configuration._related_resources()\n\n        if self._work_pool:\n            related.append(\n                object_as_related_resource(\n                    kind=\"work-pool\", role=\"work-pool\", object=self._work_pool\n                )\n            )\n\n        if include_self:\n            worker_resource = self._event_resource()\n            worker_resource[\"prefect.resource.role\"] = \"worker\"\n            related.append(RelatedResource.parse_obj(worker_resource))\n\n        return related\n\n    def _emit_flow_run_submitted_event(\n        self, configuration: BaseJobConfiguration\n    ) -> Event:\n        return emit_event(\n            event=\"prefect.worker.submitted-flow-run\",\n            resource=self._event_resource(),\n            related=self._event_related_resources(configuration=configuration),\n        )\n\n    def _emit_flow_run_executed_event(\n        self,\n        result: BaseWorkerResult,\n        configuration: BaseJobConfiguration,\n        submitted_event: Event,\n    ):\n        related = self._event_related_resources(configuration=configuration)\n\n        for resource in related:\n            if resource.role == \"flow-run\":\n                resource[\"prefect.infrastructure.identifier\"] = str(result.identifier)\n                resource[\"prefect.infrastructure.status-code\"] = str(result.status_code)\n\n        emit_event(\n            event=\"prefect.worker.executed-flow-run\",\n            resource=self._event_resource(),\n            related=related,\n            follows=submitted_event,\n        )\n\n    async def _emit_worker_started_event(self) -> Event:\n        return emit_event(\n            \"prefect.worker.started\",\n            resource=self._event_resource(),\n            related=self._event_related_resources(),\n        )\n\n    async def 
_emit_worker_stopped_event(self, started_event: Event):\n        emit_event(\n            \"prefect.worker.stopped\",\n            resource=self._event_resource(),\n            related=self._event_related_resources(),\n            follows=started_event,\n        )\n\n    def _emit_flow_run_cancelled_event(\n        self, flow_run: \"FlowRun\", configuration: BaseJobConfiguration\n    ):\n        related = self._event_related_resources(configuration=configuration)\n\n        for resource in related:\n            if resource.role == \"flow-run\":\n                resource[\"prefect.infrastructure.identifier\"] = str(\n                    flow_run.infrastructure_pid\n                )\n\n        emit_event(\n            event=\"prefect.worker.cancelled-flow-run\",\n            resource=self._event_resource(),\n            related=related,\n        )\n
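New worker types subclass BaseWorker and implement run(). A minimal sketch; the class names and the returned identifier are illustrative assumptions, not part of Prefect itself:

```python
# Hedged sketch of a custom worker type built on BaseWorker.
from typing import Optional

import anyio.abc

from prefect.workers.base import BaseJobConfiguration, BaseWorker, BaseWorkerResult


class MyWorkerResult(BaseWorkerResult):
    """Result returned once the created infrastructure finishes."""


class MyWorker(BaseWorker):
    type = "my-worker"  # the work pool type this worker class serves
    job_configuration = BaseJobConfiguration

    async def run(
        self,
        flow_run,
        configuration: BaseJobConfiguration,
        task_status: Optional[anyio.abc.TaskStatus] = None,
    ) -> MyWorkerResult:
        # Create infrastructure for the flow run here, report readiness so the
        # worker can track it, then return the exit status.
        if task_status:
            task_status.started("illustrative-infrastructure-id")
        return MyWorkerResult(
            identifier="illustrative-infrastructure-id", status_code=0
        )
```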
      ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.get_all_available_worker_types","title":"get_all_available_worker_types staticmethod","text":"

      Returns all worker types available in the local registry.

      Source code in prefect/workers/base.py
      @staticmethod\ndef get_all_available_worker_types() -> List[str]:\n    \"\"\"\n    Returns all worker types available in the local registry.\n    \"\"\"\n    load_prefect_collections()\n    worker_registry = get_registry_for_type(BaseWorker)\n    if worker_registry is not None:\n        return list(worker_registry.keys())\n    return []\n
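A quick usage sketch; the returned list depends on which Prefect collections are installed locally:

```python
from prefect.workers.base import BaseWorker

# Loads installed Prefect collections, then lists registered worker types,
# e.g. ["process", ...] depending on the local environment.
print(BaseWorker.get_all_available_worker_types())
```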
      ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.get_status","title":"get_status","text":"

Retrieves the status of the current worker, including its name, current work pool, the work pool queues it is polling, and its local settings.

      Source code in prefect/workers/base.py
      def get_status(self):\n    \"\"\"\n    Retrieves the status of the current worker including its name, current worker\n    pool, the work pool queues it is polling, and its local settings.\n    \"\"\"\n    return {\n        \"name\": self.name,\n        \"work_pool\": (\n            self._work_pool.dict(json_compatible=True)\n            if self._work_pool is not None\n            else None\n        ),\n        \"settings\": {\n            \"prefetch_seconds\": self._prefetch_seconds,\n        },\n    }\n
      ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.get_worker_class_from_type","title":"get_worker_class_from_type staticmethod","text":"

      Returns the worker class for a given worker type. If the worker type is not recognized, returns None.

      Source code in prefect/workers/base.py
      @staticmethod\ndef get_worker_class_from_type(type: str) -> Optional[Type[\"BaseWorker\"]]:\n    \"\"\"\n    Returns the worker class for a given worker type. If the worker type\n    is not recognized, returns None.\n    \"\"\"\n    load_prefect_collections()\n    worker_registry = get_registry_for_type(BaseWorker)\n    if worker_registry is not None:\n        return worker_registry.get(type)\n
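For example, assuming the built-in "process" worker type is registered locally:

```python
from prefect.workers.base import BaseWorker

worker_cls = BaseWorker.get_worker_class_from_type("process")
print(worker_cls)  # <class 'prefect.workers.process.ProcessWorker'> if registered, else None
```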
      ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.is_worker_still_polling","title":"is_worker_still_polling","text":"

      This method is invoked by a webserver healthcheck handler and returns a boolean indicating if the worker has recorded a scheduled flow run poll within a variable amount of time.

The query_interval_seconds is the same value used by the loop services; the check evaluates whether _last_polled_time falls within that interval multiplied by 30 (for example, 10s -> 5m)

      The instance property self._last_polled_time is currently set/updated in get_and_submit_flow_runs()

      Source code in prefect/workers/base.py
      def is_worker_still_polling(self, query_interval_seconds: int) -> bool:\n    \"\"\"\n    This method is invoked by a webserver healthcheck handler\n    and returns a boolean indicating if the worker has recorded a\n    scheduled flow run poll within a variable amount of time.\n\n    The `query_interval_seconds` is the same value that is used by\n    the loop services - we will evaluate if the _last_polled_time\n    was within that interval x 30 (so 10s -> 5m)\n\n    The instance property `self._last_polled_time`\n    is currently set/updated in `get_and_submit_flow_runs()`\n    \"\"\"\n    threshold_seconds = query_interval_seconds * 30\n\n    seconds_since_last_poll = (\n        pendulum.now(\"utc\") - self._last_polled_time\n    ).in_seconds()\n\n    is_still_polling = seconds_since_last_poll <= threshold_seconds\n\n    if not is_still_polling:\n        self._logger.error(\n            f\"Worker has not polled in the last {seconds_since_last_poll} seconds \"\n            \"and should be restarted\"\n        )\n\n    return is_still_polling\n
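The underlying arithmetic, as a small illustrative sketch (the 10-second interval is an assumed example value):

```python
import pendulum

query_interval_seconds = 10  # assumed loop-service polling interval
threshold_seconds = query_interval_seconds * 30  # 300 seconds -> 5 minutes

last_polled_time = pendulum.now("utc").subtract(minutes=2)
seconds_since_last_poll = (pendulum.now("utc") - last_polled_time).in_seconds()

print(seconds_since_last_poll <= threshold_seconds)  # True -> worker still polling
```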
      ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.kill_infrastructure","title":"kill_infrastructure async","text":"

      Method for killing infrastructure created by a worker. Should be implemented by individual workers if they support killing infrastructure.

      Source code in prefect/workers/base.py
      async def kill_infrastructure(\n    self,\n    infrastructure_pid: str,\n    configuration: BaseJobConfiguration,\n    grace_seconds: int = 30,\n):\n    \"\"\"\n    Method for killing infrastructure created by a worker. Should be implemented by\n    individual workers if they support killing infrastructure.\n    \"\"\"\n    raise NotImplementedError(\n        \"This worker does not support killing infrastructure.\"\n    )\n
      ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.run","title":"run abstractmethod async","text":"

      Runs a given flow run on the current worker.

      Source code in prefect/workers/base.py
      @abc.abstractmethod\nasync def run(\n    self,\n    flow_run: \"FlowRun\",\n    configuration: BaseJobConfiguration,\n    task_status: Optional[anyio.abc.TaskStatus] = None,\n) -> BaseWorkerResult:\n    \"\"\"\n    Runs a given flow run on the current worker.\n    \"\"\"\n    raise NotImplementedError(\n        \"Workers must implement a method for running submitted flow runs\"\n    )\n
      ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.setup","title":"setup async","text":"

      Prepares the worker to run.

      Source code in prefect/workers/base.py
      async def setup(self):\n    \"\"\"Prepares the worker to run.\"\"\"\n    self._logger.debug(\"Setting up worker...\")\n    self._runs_task_group = anyio.create_task_group()\n    self._limiter = (\n        anyio.CapacityLimiter(self._limit) if self._limit is not None else None\n    )\n    self._client = get_client()\n    await self._client.__aenter__()\n    await self._runs_task_group.__aenter__()\n\n    self.is_setup = True\n
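Workers are intended to be used as async context managers so setup() and teardown() run automatically. A rough sketch using the built-in process worker; the work pool name is an assumption and a reachable Prefect API is required:

```python
# Hedged sketch: the async context manager drives setup()/teardown().
import asyncio

from prefect.workers.process import ProcessWorker


async def main():
    async with ProcessWorker(work_pool_name="my-work-pool") as worker:
        await worker.sync_with_backend()         # refresh work pool info + heartbeat
        await worker.get_and_submit_flow_runs()  # poll once and submit scheduled runs
        print(worker.get_status())


asyncio.run(main())
```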
      ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.sync_with_backend","title":"sync_with_backend async","text":"

Updates the worker's local information about its current work pool and queues. Sends a worker heartbeat to the API.

      Source code in prefect/workers/base.py
      async def sync_with_backend(self):\n    \"\"\"\n    Updates the worker's local information about it's current work pool and\n    queues. Sends a worker heartbeat to the API.\n    \"\"\"\n    await self._update_local_work_pool_info()\n\n    await self._send_worker_heartbeat()\n\n    self._logger.debug(\"Worker synchronized with the Prefect API server.\")\n
      ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.teardown","title":"teardown async","text":"

      Cleans up resources after the worker is stopped.

      Source code in prefect/workers/base.py
      async def teardown(self, *exc_info):\n    \"\"\"Cleans up resources after the worker is stopped.\"\"\"\n    self._logger.debug(\"Tearing down worker...\")\n    self.is_setup = False\n    for scope in self._scheduled_task_scopes:\n        scope.cancel()\n    if self._runs_task_group:\n        await self._runs_task_group.__aexit__(*exc_info)\n    if self._client:\n        await self._client.__aexit__(*exc_info)\n    self._runs_task_group = None\n    self._client = None\n
      ","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/block/","title":"block","text":"","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/block/#prefect.workers.block","title":"prefect.workers.block","text":"","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/block/#prefect.workers.block.BlockWorkerJobConfiguration","title":"BlockWorkerJobConfiguration","text":"

      Bases: BaseModel

      Source code in prefect/workers/block.py
      class BlockWorkerJobConfiguration(BaseModel):\n    block: Block = Field(\n        default=..., description=\"The infrastructure block to use for job creation.\"\n    )\n\n    @validator(\"block\")\n    def _validate_infrastructure_block(cls, v):\n        return validate_block_is_infrastructure(v)\n\n    _related_objects: Dict[str, Any] = PrivateAttr(default_factory=dict)\n\n    @property\n    def is_using_a_runner(self):\n        return (\n            self.block.command is not None\n            and \"prefect flow-run execute\" in shlex.join(self.block.command)\n        )\n\n    @staticmethod\n    def _get_base_config_defaults(variables: dict) -> dict:\n        \"\"\"Get default values from base config for all variables that have them.\"\"\"\n        defaults = dict()\n        for variable_name, attrs in variables.items():\n            if \"default\" in attrs:\n                defaults[variable_name] = attrs[\"default\"]\n\n        return defaults\n\n    @classmethod\n    @inject_client\n    async def from_template_and_values(\n        cls, base_job_template: dict, values: dict, client: \"PrefectClient\" = None\n    ):\n        \"\"\"Creates a valid worker configuration object from the provided base\n        configuration and overrides.\n\n        Important: this method expects that the base_job_template was already\n        validated server-side.\n        \"\"\"\n        job_config: Dict[str, Any] = base_job_template[\"job_configuration\"]\n        variables_schema = base_job_template[\"variables\"]\n        variables = cls._get_base_config_defaults(\n            variables_schema.get(\"properties\", {})\n        )\n        variables.update(values)\n\n        populated_configuration = apply_values(template=job_config, values=variables)\n\n        block_document_id = get_from_dict(\n            populated_configuration, \"block.$ref.block_document_id\"\n        )\n        if not block_document_id:\n            raise ValueError(\n                \"Base job template is invalid for this worker type because it does not\"\n                \" contain a block_document_id after variable resolution.\"\n            )\n\n        block_document = await client.read_block_document(\n            block_document_id=block_document_id\n        )\n        infrastructure_block = Block._from_block_document(block_document)\n\n        populated_configuration[\"block\"] = infrastructure_block\n\n        return cls(**populated_configuration)\n\n    @classmethod\n    def json_template(cls) -> dict:\n        \"\"\"Returns a dict with job configuration as keys and the corresponding templates as values\n\n        Defaults to using the job configuration parameter name as the template variable name.\n\n        e.g.\n        {\n            key1: '{{ key1 }}',     # default variable template\n            key2: '{{ template2 }}', # `template2` specifically provide as template\n        }\n        \"\"\"\n        configuration = {}\n        properties = cls.schema()[\"properties\"]\n        for k, v in properties.items():\n            if v.get(\"template\"):\n                template = v[\"template\"]\n            else:\n                template = \"{{ \" + k + \" }}\"\n            configuration[k] = template\n\n        return configuration\n\n    def _related_resources(self) -> List[RelatedResource]:\n        tags = set()\n        related = []\n\n        for kind, obj in self._related_objects.items():\n            if obj is None:\n                continue\n            if hasattr(obj, \"tags\"):\n                
tags.update(obj.tags)\n            related.append(object_as_related_resource(kind=kind, role=kind, object=obj))\n\n        return related + tags_as_related_resources(tags)\n\n    def prepare_for_flow_run(\n        self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"DeploymentResponse\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ):\n        self.block = self.block.prepare_for_flow_run(\n            flow_run=flow_run, deployment=deployment, flow=flow\n        )\n
      ","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/block/#prefect.workers.block.BlockWorkerJobConfiguration.from_template_and_values","title":"from_template_and_values async classmethod","text":"

      Creates a valid worker configuration object from the provided base configuration and overrides.

      Important: this method expects that the base_job_template was already validated server-side.

      Source code in prefect/workers/block.py
      @classmethod\n@inject_client\nasync def from_template_and_values(\n    cls, base_job_template: dict, values: dict, client: \"PrefectClient\" = None\n):\n    \"\"\"Creates a valid worker configuration object from the provided base\n    configuration and overrides.\n\n    Important: this method expects that the base_job_template was already\n    validated server-side.\n    \"\"\"\n    job_config: Dict[str, Any] = base_job_template[\"job_configuration\"]\n    variables_schema = base_job_template[\"variables\"]\n    variables = cls._get_base_config_defaults(\n        variables_schema.get(\"properties\", {})\n    )\n    variables.update(values)\n\n    populated_configuration = apply_values(template=job_config, values=variables)\n\n    block_document_id = get_from_dict(\n        populated_configuration, \"block.$ref.block_document_id\"\n    )\n    if not block_document_id:\n        raise ValueError(\n            \"Base job template is invalid for this worker type because it does not\"\n            \" contain a block_document_id after variable resolution.\"\n        )\n\n    block_document = await client.read_block_document(\n        block_document_id=block_document_id\n    )\n    infrastructure_block = Block._from_block_document(block_document)\n\n    populated_configuration[\"block\"] = infrastructure_block\n\n    return cls(**populated_configuration)\n
      ","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/block/#prefect.workers.block.BlockWorkerJobConfiguration.json_template","title":"json_template classmethod","text":"

Returns a dict with job configuration fields as keys and the corresponding templates as values

      Defaults to using the job configuration parameter name as the template variable name.

For example: key1: '{{ key1 }}' (the default variable template) and key2: '{{ template2 }}' (where 'template2' is explicitly provided as the template).

      Source code in prefect/workers/block.py
      @classmethod\ndef json_template(cls) -> dict:\n    \"\"\"Returns a dict with job configuration as keys and the corresponding templates as values\n\n    Defaults to using the job configuration parameter name as the template variable name.\n\n    e.g.\n    {\n        key1: '{{ key1 }}',     # default variable template\n        key2: '{{ template2 }}', # `template2` specifically provide as template\n    }\n    \"\"\"\n    configuration = {}\n    properties = cls.schema()[\"properties\"]\n    for k, v in properties.items():\n        if v.get(\"template\"):\n            template = v[\"template\"]\n        else:\n            template = \"{{ \" + k + \" }}\"\n        configuration[k] = template\n\n    return configuration\n
      ","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/block/#prefect.workers.block.BlockWorkerResult","title":"BlockWorkerResult","text":"

      Bases: BaseWorkerResult

      Result of a block worker job

      Source code in prefect/workers/block.py
      class BlockWorkerResult(BaseWorkerResult):\n    \"\"\"Result of a block worker job\"\"\"\n
      ","tags":["Python API","workers","block"]},{"location":"api-ref/prefect/workers/process/","title":"process","text":"","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/process/#prefect.workers.process","title":"prefect.workers.process","text":"

      Module containing the Process worker used for executing flow runs as subprocesses.

      To start a Process worker, run the following command:

      prefect worker start --pool 'my-work-pool' --type process\n

      Replace my-work-pool with the name of the work pool you want the worker to poll for flow runs.

For more information about work pools and workers, check out the Prefect docs.

      ","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/process/#prefect.workers.process.ProcessJobConfiguration","title":"ProcessJobConfiguration","text":"

      Bases: BaseJobConfiguration

      Source code in prefect/workers/process.py
      class ProcessJobConfiguration(BaseJobConfiguration):\n    stream_output: bool = Field(default=True)\n    working_dir: Optional[Path] = Field(default=None)\n\n    @validator(\"working_dir\")\n    def validate_command(cls, v):\n        return validate_command(v)\n\n    def prepare_for_flow_run(\n        self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"DeploymentResponse\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ):\n        super().prepare_for_flow_run(flow_run, deployment, flow)\n\n        self.env = {**os.environ, **self.env}\n        self.command = (\n            f\"{get_sys_executable()} -m prefect.engine\"\n            if self.command == self._base_flow_run_command()\n            else self.command\n        )\n\n    def _base_flow_run_command(self) -> str:\n        \"\"\"\n        Override the base flow run command because enhanced cancellation doesn't\n        work with the process worker.\n        \"\"\"\n        return \"python -m prefect.engine\"\n
      ","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/process/#prefect.workers.process.ProcessWorkerResult","title":"ProcessWorkerResult","text":"

      Bases: BaseWorkerResult

      Contains information about the final state of a completed process

      Source code in prefect/workers/process.py
      class ProcessWorkerResult(BaseWorkerResult):\n    \"\"\"Contains information about the final state of a completed process\"\"\"\n
      ","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/server/","title":"server","text":"","tags":["Python API","workers","server"]},{"location":"api-ref/prefect/workers/server/#prefect.workers.server","title":"prefect.workers.server","text":"","tags":["Python API","workers","server"]},{"location":"api-ref/prefect/workers/server/#prefect.workers.server.start_healthcheck_server","title":"start_healthcheck_server","text":"

      Run a healthcheck FastAPI server for a worker.

Parameters:

• worker (BaseWorker | ProcessWorker): the worker whose health we will check. Required.
• log_level (str): the log level to use for the server. Defaults to 'error'.

Source code in prefect/workers/server.py
      def start_healthcheck_server(\n    worker: Union[BaseWorker, ProcessWorker],\n    query_interval_seconds: int,\n    log_level: str = \"error\",\n) -> None:\n    \"\"\"\n    Run a healthcheck FastAPI server for a worker.\n\n    Args:\n        worker (BaseWorker | ProcessWorker): the worker whose health we will check\n        log_level (str): the log level to use for the server\n    \"\"\"\n    webserver = FastAPI()\n    router = APIRouter()\n\n    def perform_health_check():\n        did_recently_poll = worker.is_worker_still_polling(\n            query_interval_seconds=query_interval_seconds\n        )\n\n        if not did_recently_poll:\n            return JSONResponse(\n                status_code=status.HTTP_503_SERVICE_UNAVAILABLE,\n                content={\"message\": \"Worker may be unresponsive at this time\"},\n            )\n        return JSONResponse(status_code=status.HTTP_200_OK, content={\"message\": \"OK\"})\n\n    router.add_api_route(\"/health\", perform_health_check, methods=[\"GET\"])\n\n    webserver.include_router(router)\n\n    uvicorn.run(\n        webserver,\n        host=PREFECT_WORKER_WEBSERVER_HOST.value(),\n        port=PREFECT_WORKER_WEBSERVER_PORT.value(),\n        log_level=log_level,\n    )\n
      ","tags":["Python API","workers","server"]},{"location":"api-ref/prefect/workers/utilities/","title":"utilities","text":"","tags":["Python API","workers","utilities"]},{"location":"api-ref/prefect/workers/utilities/#prefect.workers.utilities","title":"prefect.workers.utilities","text":"","tags":["Python API","workers","utilities"]},{"location":"api-ref/python/","title":"Python SDK","text":"

      The Prefect Python SDK is used to build, test, and execute workflows against the Prefect API.

      Explore the modules in the navigation bar to the left to learn more.

      ","tags":["API","Python SDK"]},{"location":"api-ref/rest-api/","title":"REST API","text":"

      The Prefect REST API is used for communicating data from clients to a Prefect server so that orchestration can be performed. This API is consumed by clients such as the Prefect Python SDK or the server dashboard.

      Prefect Cloud and a locally hosted Prefect server each provide a REST API.

      • Prefect Cloud:
        • Interactive Prefect Cloud REST API documentation
        • Finding your Prefect Cloud details
• A locally hosted open-source Prefect server:
        • Interactive REST API documentation for a locally hosted open-source Prefect server is available at http://localhost:4200/docs or the /docs endpoint of the PREFECT_API_URL you have configured to access the server. You must have the server running with prefect server start to access the interactive documentation.
        • Prefect REST API documentation
      ","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#interacting-with-the-rest-api","title":"Interacting with the REST API","text":"

      You have many options to interact with the Prefect REST API:

      • Create an instance of PrefectClient
      • Use your favorite Python HTTP library such as Requests or HTTPX
      • Use an HTTP library in your language of choice
      • Use curl from the command line
      ","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#prefectclient-with-a-prefect-server","title":"PrefectClient with a Prefect server","text":"

      This example uses PrefectClient with a locally hosted Prefect server:

import asyncio\nfrom prefect.client import get_client\n\nasync def get_flows():\n    client = get_client()\n    r = await client.read_flows(limit=5)\n    return r\n\nif __name__ == \"__main__\":\n    r = asyncio.run(get_flows())\n\n    for flow in r:\n        print(flow.name, flow.id)\n

      Output:

      cat-facts 58ed68b1-0201-4f37-adef-0ea24bd2a022\ndog-facts e7c0403d-44e7-45cf-a6c8-79117b7f3766\nsloth-facts 771c0574-f5bf-4f59-a69d-3be3e061a62d\ncapybara-facts fbadaf8b-584f-48b9-b092-07d351edd424\nlemur-facts 53f710e7-3b0f-4b2f-ab6b-44934111818c\n
      ","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#requests-with-prefect","title":"Requests with Prefect","text":"

      This example uses the Requests library with Prefect Cloud to return the five newest artifacts.

      import requests\n\nPREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/abc-my-cloud-account-id-is-here/workspaces/123-my-workspace-id-is-here\"\nPREFECT_API_KEY=\"123abc_my_api_key_goes_here\"\ndata = {\n    \"sort\": \"CREATED_DESC\",\n    \"limit\": 5,\n    \"artifacts\": {\n        \"key\": {\n            \"exists_\": True\n        }\n    }\n}\n\nheaders = {\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"}\nendpoint = f\"{PREFECT_API_URL}/artifacts/filter\"\n\nresponse = requests.post(endpoint, headers=headers, json=data)\nassert response.status_code == 200\nfor artifact in response.json():\n    print(artifact)\n
      ","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#curl-with-prefect-cloud","title":"curl with Prefect Cloud","text":"

      This example uses curl with Prefect Cloud to create a flow run:

      ACCOUNT_ID=\"abc-my-cloud-account-id-goes-here\"\nWORKSPACE_ID=\"123-my-workspace-id-goes-here\"\nPREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/$ACCOUNT_ID/workspaces/$WORKSPACE_ID\"\nPREFECT_API_KEY=\"123abc_my_api_key_goes_here\"\nDEPLOYMENT_ID=\"my_deployment_id\"\n\ncurl --location --request POST \"$PREFECT_API_URL/deployments/$DEPLOYMENT_ID/create_flow_run\" \\\n  --header \"Content-Type: application/json\" \\\n  --header \"Authorization: Bearer $PREFECT_API_KEY\" \\\n  --header \"X-PREFECT-API-VERSION: 0.8.4\" \\\n  --data-raw \"{}\"\n

Note that in this example --data-raw \"{}\" is required and is where you can specify other aspects of the flow run, such as the state. Windows users should substitute ^ for \\ in multi-line commands.
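
To customize the created flow run, replace the empty object with a JSON body. As a sketch, the same request can be made with the Requests library; the parameters and tags fields map to the flow run creation schema used by this endpoint, while the parameter name greeting and the tag ad-hoc are hypothetical placeholders.

import requests\n\n# Hypothetical account, workspace, key, and deployment IDs; replace with your own.\nPREFECT_API_URL = \"https://api.prefect.cloud/api/accounts/abc-my-cloud-account-id-goes-here/workspaces/123-my-workspace-id-goes-here\"\nPREFECT_API_KEY = \"123abc_my_api_key_goes_here\"\nDEPLOYMENT_ID = \"my_deployment_id\"\n\n# \"parameters\" and \"tags\" are fields of the flow run creation schema; the\n# parameter name \"greeting\" and tag \"ad-hoc\" are placeholders for illustration.\ndata = {\n    \"parameters\": {\"greeting\": \"hello\"},\n    \"tags\": [\"ad-hoc\"],\n}\n\nheaders = {\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"}\nendpoint = f\"{PREFECT_API_URL}/deployments/{DEPLOYMENT_ID}/create_flow_run\"\n\nresponse = requests.post(endpoint, headers=headers, json=data)\nresponse.raise_for_status()\nprint(response.json()[\"id\"])\n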

      ","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#finding-your-prefect-cloud-details","title":"Finding your Prefect Cloud details","text":"

      When working with the Prefect Cloud REST API you will need your Account ID and often the Workspace ID for the workspace you want to interact with. You can find both IDs for a Prefect profile in the CLI with prefect profile inspect my_profile. This command will also display your Prefect API key, as shown below:

      PREFECT_API_URL='https://api.prefect.cloud/api/accounts/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here'\nPREFECT_API_KEY='123abc_my_api_key_is_here'\n

      Alternatively, view your Account ID and Workspace ID in your browser URL. For example: https://app.prefect.cloud/account/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here.

      ","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#rest-guidelines","title":"REST Guidelines","text":"

      The REST APIs adhere to the following guidelines:

      • Collection names are pluralized (for example, /flows or /runs).
      • We indicate variable placeholders with colons: GET /flows/:id.
      • We use snake case for route names: GET /task_runs.
      • We avoid nested resources unless there is no possibility of accessing the child resource outside the parent context. For example, we query /task_runs with a flow run filter instead of accessing /flow_runs/:id/task_runs.
      • The API is hosted with an /api/:version prefix that (optionally) allows versioning in the future. By convention, we treat that as part of the base URL and do not include that in API examples.
      • Filtering, sorting, and pagination parameters are provided in the request body of POST requests where applicable.
        • Pagination parameters are limit and offset.
        • Sorting is specified with a single sort parameter.
        • See more information on filtering below.
      ","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#http-verbs","title":"HTTP verbs","text":"
      • GET, PUT and DELETE requests are always idempotent. POST and PATCH are not guaranteed to be idempotent.
      • GET requests cannot receive information from the request body.
      • POST requests can receive information from the request body.
      • POST /collection creates a new member of the collection.
      • GET /collection lists all members of the collection.
      • GET /collection/:id gets a specific member of the collection by ID.
      • DELETE /collection/:id deletes a specific member of the collection.
      • PUT /collection/:id creates or replaces a specific member of the collection.
      • PATCH /collection/:id partially updates a specific member of the collection.
      • POST /collection/action is how we implement non-CRUD actions. For example, to set a flow run's state, we use POST /flow_runs/:id/set_state.
      • POST /collection/action may also be used for read-only queries. This is to allow us to send complex arguments as body arguments (which often cannot be done via GET). Examples include POST /flow_runs/filter, POST /flow_runs/count, and POST /flow_runs/history.
      ","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#filtering","title":"Filtering","text":"

      Objects can be filtered by providing filter criteria in the body of a POST request. When multiple criteria are specified, logical AND will be applied to the criteria.

      Filter criteria are structured as follows:

      {\n    \"objects\": {\n        \"object_field\": {\n            \"field_operator_\": <field_value>\n        }\n    }\n}\n

      In this example, objects is the name of the collection to filter over (for example, flows). The collection can be either the object being queried for (flows for POST /flows/filter) or a related object (flow_runs for POST /flows/filter).

      object_field is the name of the field over which to filter (name for flows). Note that some objects may have nested object fields, such as {flow_run: {state: {type: {any_: []}}}}.

      field_operator_ is the operator to apply to a field when filtering. Common examples include:

      • any_: return objects where this field matches any of the following values.
• is_null_: return objects where this field is null (pass false to return objects where this field is not null).
      • eq_: return objects where this field is equal to the following value.
      • all_: return objects where this field matches all of the following values.
      • before_: return objects where this datetime field is less than or equal to the following value.
      • after_: return objects where this datetime field is greater than or equal to the following value.

For example, to query for flows that have the tag \"database\" and at least one failed flow run, send a POST to /flows/filter with the following request body:

      {\n    \"flows\": {\n        \"tags\": {\n            \"all_\": [\"database\"]\n        }\n    },\n    \"flow_runs\": {\n        \"state\": {\n            \"type\": {\n              \"any_\": [\"FAILED\"]\n            }\n        }\n    }\n}\n
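
You can send this filter body to the API with any HTTP client. The following sketch assumes a locally hosted Prefect server at the default API URL and includes the limit and offset pagination parameters described above.

import requests\n\n# Assumes a locally hosted Prefect server at the default API URL; adjust as needed.\nPREFECT_API_URL = \"http://localhost:4200/api\"\n\nfilter_body = {\n    \"flows\": {\"tags\": {\"all_\": [\"database\"]}},\n    \"flow_runs\": {\"state\": {\"type\": {\"any_\": [\"FAILED\"]}}},\n    # Pagination parameters are provided in the same request body.\n    \"limit\": 5,\n    \"offset\": 0,\n}\n\nresponse = requests.post(f\"{PREFECT_API_URL}/flows/filter\", json=filter_body)\nresponse.raise_for_status()\nfor flow in response.json():\n    print(flow[\"name\"], flow[\"id\"])\n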
      ","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#openapi","title":"OpenAPI","text":"

      The Prefect REST API can be fully described with an OpenAPI 3.0 compliant document. OpenAPI is a standard specification for describing REST APIs.

      To generate the Prefect server's complete OpenAPI document, run the following commands in an interactive Python session:

      from prefect.server.api.server import create_app\n\napp = create_app()\nopenapi_doc = app.openapi()\n

      This document allows you to generate your own API client, explore the API using an API inspection tool, or write tests to ensure API compliance.
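
As a minimal sketch, assuming you want the schema on disk for a client generator or inspection tool, you can serialize the generated document with the standard library:

import json\n\nfrom prefect.server.api.server import create_app\n\napp = create_app()\nopenapi_doc = app.openapi()\n\n# Write the schema to disk; the filename is arbitrary.\nwith open(\"prefect-openapi.json\", \"w\") as f:\n    json.dump(openapi_doc, f, indent=2)\n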

      ","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/server/","title":"Server API","text":"

      The Prefect server API is used by the server to work with workflow metadata and enforce orchestration logic. This API is primarily used by Prefect developers.

      Select links in the left navigation menu to explore.

      ","tags":["API","Server API"]},{"location":"api-ref/server/api/admin/","title":"server.api.admin","text":"","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin","title":"prefect.server.api.admin","text":"

      Routes for admin-level interactions with the Prefect REST API.

      ","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.clear_database","title":"clear_database async","text":"

      Clear all database tables without dropping them.

      Source code in prefect/server/api/admin.py
      @router.post(\"/database/clear\", status_code=status.HTTP_204_NO_CONTENT)\nasync def clear_database(\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    confirm: bool = Body(\n        False,\n        embed=True,\n        description=\"Pass confirm=True to confirm you want to modify the database.\",\n    ),\n    response: Response = None,\n):\n    \"\"\"Clear all database tables without dropping them.\"\"\"\n    if not confirm:\n        response.status_code = status.HTTP_400_BAD_REQUEST\n        return\n    async with db.session_context(begin_transaction=True) as session:\n        # work pool has a circular dependency on pool queue; delete it first\n        await session.execute(db.WorkPool.__table__.delete())\n        for table in reversed(db.Base.metadata.sorted_tables):\n            await session.execute(table.delete())\n
      ","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.create_database","title":"create_database async","text":"

      Create all database objects.

      Source code in prefect/server/api/admin.py
      @router.post(\"/database/create\", status_code=status.HTTP_204_NO_CONTENT)\nasync def create_database(\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    confirm: bool = Body(\n        False,\n        embed=True,\n        description=\"Pass confirm=True to confirm you want to modify the database.\",\n    ),\n    response: Response = None,\n):\n    \"\"\"Create all database objects.\"\"\"\n    if not confirm:\n        response.status_code = status.HTTP_400_BAD_REQUEST\n        return\n\n    await db.create_db()\n
      ","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.drop_database","title":"drop_database async","text":"

      Drop all database objects.

      Source code in prefect/server/api/admin.py
      @router.post(\"/database/drop\", status_code=status.HTTP_204_NO_CONTENT)\nasync def drop_database(\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    confirm: bool = Body(\n        False,\n        embed=True,\n        description=\"Pass confirm=True to confirm you want to modify the database.\",\n    ),\n    response: Response = None,\n):\n    \"\"\"Drop all database objects.\"\"\"\n    if not confirm:\n        response.status_code = status.HTTP_400_BAD_REQUEST\n        return\n\n    await db.drop_db()\n
      ","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.read_settings","title":"read_settings async","text":"

      Get the current Prefect REST API settings.

      Secret setting values will be obfuscated.

      Source code in prefect/server/api/admin.py
      @router.get(\"/settings\")\nasync def read_settings() -> prefect.settings.Settings:\n    \"\"\"\n    Get the current Prefect REST API settings.\n\n    Secret setting values will be obfuscated.\n    \"\"\"\n    return prefect.settings.get_current_settings().with_obfuscated_secrets()\n
      ","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.read_version","title":"read_version async","text":"

      Returns the Prefect version number

      Source code in prefect/server/api/admin.py
      @router.get(\"/version\")\nasync def read_version() -> str:\n    \"\"\"Returns the Prefect version number\"\"\"\n    return prefect.__version__\n
      ","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/csrf_token/","title":"server.api.csrf_token","text":"","tags":["Prefect API","csrf","security"]},{"location":"api-ref/server/api/csrf_token/#prefect.server.api.csrf_token","title":"prefect.server.api.csrf_token","text":"","tags":["Prefect API","csrf","security"]},{"location":"api-ref/server/api/csrf_token/#prefect.server.api.csrf_token.create_csrf_token","title":"create_csrf_token async","text":"

      Create or update a CSRF token for a client

      Source code in prefect/server/api/csrf_token.py
      @router.get(\"\")\nasync def create_csrf_token(\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    client: str = Query(..., description=\"The client to create a CSRF token for\"),\n) -> schemas.core.CsrfToken:\n    \"\"\"Create or update a CSRF token for a client\"\"\"\n    if PREFECT_SERVER_CSRF_PROTECTION_ENABLED.value() is False:\n        raise HTTPException(\n            status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n            detail=\"CSRF protection is disabled.\",\n        )\n\n    async with db.session_context(begin_transaction=True) as session:\n        token = await models.csrf_token.create_or_update_csrf_token(\n            session=session, client=client\n        )\n        await models.csrf_token.delete_expired_tokens(session=session)\n\n    return token\n
      ","tags":["Prefect API","csrf","security"]},{"location":"api-ref/server/api/dependencies/","title":"server.api.dependencies","text":"","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies","title":"prefect.server.api.dependencies","text":"

      Utilities for injecting FastAPI dependencies.

      ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies.EnforceMinimumAPIVersion","title":"EnforceMinimumAPIVersion","text":"

      FastAPI Dependency used to check compatibility between the version of the api and a given request.

      Looks for the header 'X-PREFECT-API-VERSION' in the request and compares it to the api's version. Rejects requests that are lower than the minimum version.

      Source code in prefect/server/api/dependencies.py
      class EnforceMinimumAPIVersion:\n    \"\"\"\n    FastAPI Dependency used to check compatibility between the version of the api\n    and a given request.\n\n    Looks for the header 'X-PREFECT-API-VERSION' in the request and compares it\n    to the api's version. Rejects requests that are lower than the minimum version.\n    \"\"\"\n\n    def __init__(self, minimum_api_version: str, logger: logging.Logger):\n        self.minimum_api_version = minimum_api_version\n        versions = [int(v) for v in minimum_api_version.split(\".\")]\n        self.api_major = versions[0]\n        self.api_minor = versions[1]\n        self.api_patch = versions[2]\n        self.logger = logger\n\n    async def __call__(\n        self,\n        x_prefect_api_version: str = Header(None),\n    ):\n        request_version = x_prefect_api_version\n\n        # if no version header, assume latest and continue\n        if not request_version:\n            return\n\n        # parse version\n        try:\n            major, minor, patch = [int(v) for v in request_version.split(\".\")]\n        except ValueError:\n            await self._notify_of_invalid_value(request_version)\n            raise HTTPException(\n                status_code=status.HTTP_400_BAD_REQUEST,\n                detail=(\n                    \"Invalid X-PREFECT-API-VERSION header format.\"\n                    f\"Expected header in format 'x.y.z' but received {request_version}\"\n                ),\n            )\n\n        if (major, minor, patch) < (self.api_major, self.api_minor, self.api_patch):\n            await self._notify_of_outdated_version(request_version)\n            raise HTTPException(\n                status_code=status.HTTP_400_BAD_REQUEST,\n                detail=(\n                    f\"The request specified API version {request_version} but this \"\n                    f\"server requires version {self.minimum_api_version} or higher.\"\n                ),\n            )\n\n    async def _notify_of_invalid_value(self, request_version: str):\n        self.logger.error(\n            f\"Invalid X-PREFECT-API-VERSION header format: '{request_version}'\"\n        )\n\n    async def _notify_of_outdated_version(self, request_version: str):\n        self.logger.error(\n            f\"X-PREFECT-API-VERSION header specifies version '{request_version}' \"\n            f\"but minimum allowed version is '{self.minimum_api_version}'\"\n        )\n
      ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies.LimitBody","title":"LimitBody","text":"

      A fastapi.Depends factory for pulling a limit: int parameter from the request body while determining the default from the current settings.

      Source code in prefect/server/api/dependencies.py
      def LimitBody() -> Depends:\n    \"\"\"\n    A `fastapi.Depends` factory for pulling a `limit: int` parameter from the\n    request body while determining the default from the current settings.\n    \"\"\"\n\n    def get_limit(\n        limit: int = Body(\n            None,\n            description=\"Defaults to PREFECT_API_DEFAULT_LIMIT if not provided.\",\n        ),\n    ):\n        default_limit = PREFECT_API_DEFAULT_LIMIT.value()\n        limit = limit if limit is not None else default_limit\n        if not limit >= 0:\n            raise HTTPException(\n                status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n                detail=\"Invalid limit: must be greater than or equal to 0.\",\n            )\n        if limit > default_limit:\n            raise HTTPException(\n                status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n                detail=f\"Invalid limit: must be less than or equal to {default_limit}.\",\n            )\n        return limit\n\n    return Depends(get_limit)\n
      ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies.get_created_by","title":"get_created_by","text":"

      A dependency that returns the provenance information to use when creating objects during this API call.

      Source code in prefect/server/api/dependencies.py
      def get_created_by(\n    prefect_automation_id: Optional[UUID] = Header(None, include_in_schema=False),\n    prefect_automation_name: Optional[str] = Header(None, include_in_schema=False),\n) -> Optional[schemas.core.CreatedBy]:\n    \"\"\"A dependency that returns the provenance information to use when creating objects\n    during this API call.\"\"\"\n    if prefect_automation_id and prefect_automation_name:\n        try:\n            display_value = b64decode(prefect_automation_name.encode()).decode()\n        except Exception:\n            display_value = None\n\n        if display_value:\n            return schemas.core.CreatedBy(\n                id=prefect_automation_id,\n                type=\"AUTOMATION\",\n                display_value=display_value,\n            )\n\n    return None\n
      ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies.get_updated_by","title":"get_updated_by","text":"

      A dependency that returns the provenance information to use when updating objects during this API call.

      Source code in prefect/server/api/dependencies.py
      def get_updated_by(\n    prefect_automation_id: Optional[UUID] = Header(None, include_in_schema=False),\n    prefect_automation_name: Optional[str] = Header(None, include_in_schema=False),\n) -> Optional[schemas.core.UpdatedBy]:\n    \"\"\"A dependency that returns the provenance information to use when updating objects\n    during this API call.\"\"\"\n    if prefect_automation_id and prefect_automation_name:\n        return schemas.core.UpdatedBy(\n            id=prefect_automation_id,\n            type=\"AUTOMATION\",\n            display_value=prefect_automation_name,\n        )\n\n    return None\n
      ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies.is_ephemeral_request","title":"is_ephemeral_request","text":"

      A dependency that returns whether the request is to an ephemeral server.

      Source code in prefect/server/api/dependencies.py
      def is_ephemeral_request(request: Request):\n    \"\"\"\n    A dependency that returns whether the request is to an ephemeral server.\n    \"\"\"\n    return \"ephemeral-prefect\" in str(request.base_url)\n
      ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/deployments/","title":"server.api.deployments","text":"","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments","title":"prefect.server.api.deployments","text":"

      Routes for interacting with Deployment objects.

      ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.count_deployments","title":"count_deployments async","text":"

      Count deployments.

      Source code in prefect/server/api/deployments.py
      @router.post(\"/count\")\nasync def count_deployments(\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_pool_queues: schemas.filters.WorkQueueFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> int:\n    \"\"\"\n    Count deployments.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.deployments.count_deployments(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_pool_queues,\n        )\n
      ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.create_deployment","title":"create_deployment async","text":"

      Gracefully creates a new deployment from the provided schema. If a deployment with the same name and flow_id already exists, the deployment is updated.

      If the deployment has an active schedule, flow runs will be scheduled. When upserting, any scheduled runs from the existing deployment will be deleted.

      Source code in prefect/server/api/deployments.py
      @router.post(\"/\")\nasync def create_deployment(\n    deployment: schemas.actions.DeploymentCreate,\n    response: Response,\n    worker_lookups: WorkerLookups = Depends(WorkerLookups),\n    created_by: Optional[schemas.core.CreatedBy] = Depends(dependencies.get_created_by),\n    updated_by: Optional[schemas.core.UpdatedBy] = Depends(dependencies.get_updated_by),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.DeploymentResponse:\n    \"\"\"\n    Gracefully creates a new deployment from the provided schema. If a deployment with\n    the same name and flow_id already exists, the deployment is updated.\n\n    If the deployment has an active schedule, flow runs will be scheduled.\n    When upserting, any scheduled runs from the existing deployment will be deleted.\n    \"\"\"\n\n    data = deployment.dict(exclude_unset=True)\n    data[\"created_by\"] = created_by.dict() if created_by else None\n    data[\"updated_by\"] = updated_by.dict() if created_by else None\n\n    async with db.session_context(begin_transaction=True) as session:\n        if (\n            deployment.work_pool_name\n            and deployment.work_pool_name != DEFAULT_AGENT_WORK_POOL_NAME\n        ):\n            # Make sure that deployment is valid before beginning creation process\n            work_pool = await models.workers.read_work_pool_by_name(\n                session=session, work_pool_name=deployment.work_pool_name\n            )\n            if work_pool is None:\n                raise HTTPException(\n                    status_code=status.HTTP_404_NOT_FOUND,\n                    detail=f'Work pool \"{deployment.work_pool_name}\" not found.',\n                )\n            try:\n                deployment.check_valid_configuration(work_pool.base_job_template)\n            except (MissingVariableError, jsonschema.exceptions.ValidationError) as exc:\n                raise HTTPException(\n                    status_code=status.HTTP_409_CONFLICT,\n                    detail=f\"Error creating deployment: {exc!r}\",\n                )\n\n        # hydrate the input model into a full model\n        deployment_dict = deployment.dict(exclude={\"work_pool_name\"})\n        if deployment.work_pool_name and deployment.work_queue_name:\n            # If a specific pool name/queue name combination was provided, get the\n            # ID for that work pool queue.\n            deployment_dict[\n                \"work_queue_id\"\n            ] = await worker_lookups._get_work_queue_id_from_name(\n                session=session,\n                work_pool_name=deployment.work_pool_name,\n                work_queue_name=deployment.work_queue_name,\n                create_queue_if_not_found=True,\n            )\n        elif deployment.work_pool_name:\n            # If just a pool name was provided, get the ID for its default\n            # work pool queue.\n            deployment_dict[\n                \"work_queue_id\"\n            ] = await worker_lookups._get_default_work_queue_id_from_work_pool_name(\n                session=session,\n                work_pool_name=deployment.work_pool_name,\n            )\n        elif deployment.work_queue_name:\n            # If just a queue name was provided, ensure that the queue exists and\n            # get its ID.\n            work_queue = await models.work_queues.ensure_work_queue_exists(\n                session=session, name=deployment.work_queue_name\n            )\n            deployment_dict[\"work_queue_id\"] = 
work_queue.id\n\n        deployment = schemas.core.Deployment(**deployment_dict)\n        # check to see if relevant blocks exist, allowing us throw a useful error message\n        # for debugging\n        if deployment.infrastructure_document_id is not None:\n            infrastructure_block = (\n                await models.block_documents.read_block_document_by_id(\n                    session=session,\n                    block_document_id=deployment.infrastructure_document_id,\n                )\n            )\n            if not infrastructure_block:\n                raise HTTPException(\n                    status_code=status.HTTP_409_CONFLICT,\n                    detail=(\n                        \"Error creating deployment. Could not find infrastructure\"\n                        f\" block with id: {deployment.infrastructure_document_id}. This\"\n                        \" usually occurs when applying a deployment specification that\"\n                        \" was built against a different Prefect database / workspace.\"\n                    ),\n                )\n\n        if deployment.storage_document_id is not None:\n            storage_block = await models.block_documents.read_block_document_by_id(\n                session=session,\n                block_document_id=deployment.storage_document_id,\n            )\n            if not storage_block:\n                raise HTTPException(\n                    status_code=status.HTTP_409_CONFLICT,\n                    detail=(\n                        \"Error creating deployment. Could not find storage block with\"\n                        f\" id: {deployment.storage_document_id}. This usually occurs\"\n                        \" when applying a deployment specification that was built\"\n                        \" against a different Prefect database / workspace.\"\n                    ),\n                )\n\n        # Ensure that `paused` and `is_schedule_active` are consistent.\n        if \"paused\" in data:\n            deployment.is_schedule_active = not data[\"paused\"]\n        elif \"is_schedule_active\" in data:\n            deployment.paused = not data[\"is_schedule_active\"]\n\n        now = pendulum.now(\"UTC\")\n        model = await models.deployments.create_deployment(\n            session=session, deployment=deployment\n        )\n\n        if model.created >= now:\n            response.status_code = status.HTTP_201_CREATED\n\n        return schemas.responses.DeploymentResponse.from_orm(model)\n
      ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.create_flow_run_from_deployment","title":"create_flow_run_from_deployment async","text":"

      Create a flow run from a deployment.

      Any parameters not provided will be inferred from the deployment's parameters. If tags are not provided, the deployment's tags will be used.

      If no state is provided, the flow run will be created in a SCHEDULED state.

      Source code in prefect/server/api/deployments.py
      @router.post(\"/{id}/create_flow_run\")\nasync def create_flow_run_from_deployment(\n    flow_run: schemas.actions.DeploymentFlowRunCreate,\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    created_by: Optional[schemas.core.CreatedBy] = Depends(dependencies.get_created_by),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    worker_lookups: WorkerLookups = Depends(WorkerLookups),\n    response: Response = None,\n) -> schemas.responses.FlowRunResponse:\n    \"\"\"\n    Create a flow run from a deployment.\n\n    Any parameters not provided will be inferred from the deployment's parameters.\n    If tags are not provided, the deployment's tags will be used.\n\n    If no state is provided, the flow run will be created in a SCHEDULED state.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        # get relevant info from the deployment\n        deployment = await models.deployments.read_deployment(\n            session=session, deployment_id=deployment_id\n        )\n\n        if not deployment:\n            raise HTTPException(\n                status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n            )\n\n        try:\n            dehydrated_params = deployment.parameters\n            dehydrated_params.update(flow_run.parameters or {})\n            ctx = await HydrationContext.build(session=session, raise_on_error=True)\n            parameters = hydrate(dehydrated_params, ctx)\n        except HydrationError as exc:\n            raise HTTPException(\n                status.HTTP_400_BAD_REQUEST,\n                detail=f\"Error hydrating flow run parameters: {exc}\",\n            )\n\n        if deployment.enforce_parameter_schema:\n            if not isinstance(deployment.parameter_openapi_schema, dict):\n                raise HTTPException(\n                    status.HTTP_409_CONFLICT,\n                    detail=(\n                        \"Error updating deployment: Cannot update parameters because\"\n                        \" parameter schema enforcement is enabled and the deployment\"\n                        \" does not have a valid parameter schema.\"\n                    ),\n                )\n            try:\n                validate(\n                    parameters, deployment.parameter_openapi_schema, raise_on_error=True\n                )\n            except ValidationError as exc:\n                raise HTTPException(\n                    status.HTTP_409_CONFLICT,\n                    detail=f\"Error creating flow run: {exc}\",\n                )\n            except CircularSchemaRefError:\n                raise HTTPException(\n                    status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n                    detail=\"Invalid schema: Unable to validate schema with circular references.\",\n                )\n\n        await validate_job_variables_for_flow_run(flow_run, deployment, session)\n\n        work_queue_name = deployment.work_queue_name\n        work_queue_id = deployment.work_queue_id\n\n        if flow_run.work_queue_name:\n            # can't mutate the ORM model or else it will commit the changes back\n            work_queue_id = await worker_lookups._get_work_queue_id_from_name(\n                session=session,\n                work_pool_name=deployment.work_queue.work_pool.name,\n                work_queue_name=flow_run.work_queue_name,\n                create_queue_if_not_found=True,\n            )\n            work_queue_name = 
flow_run.work_queue_name\n\n        # hydrate the input model into a full flow run / state model\n        flow_run = schemas.core.FlowRun(\n            **flow_run.dict(\n                exclude={\n                    \"parameters\",\n                    \"tags\",\n                    \"infrastructure_document_id\",\n                    \"work_queue_name\",\n                }\n            ),\n            flow_id=deployment.flow_id,\n            deployment_id=deployment.id,\n            deployment_version=deployment.version,\n            parameters=parameters,\n            tags=set(deployment.tags).union(flow_run.tags),\n            infrastructure_document_id=(\n                flow_run.infrastructure_document_id\n                or deployment.infrastructure_document_id\n            ),\n            work_queue_name=work_queue_name,\n            work_queue_id=work_queue_id,\n            created_by=created_by,\n        )\n\n        if not flow_run.state:\n            flow_run.state = schemas.states.Scheduled()\n\n        now = pendulum.now(\"UTC\")\n        model = await models.flow_runs.create_flow_run(\n            session=session, flow_run=flow_run\n        )\n        if model.created >= now:\n            response.status_code = status.HTTP_201_CREATED\n        return schemas.responses.FlowRunResponse.from_orm(model)\n
      ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.delete_deployment","title":"delete_deployment async","text":"

      Delete a deployment by id.

      Source code in prefect/server/api/deployments.py
      @router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_deployment(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n    \"\"\"\n    Delete a deployment by id.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.deployments.delete_deployment(\n            session=session, deployment_id=deployment_id\n        )\n    if not result:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n        )\n
      ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.get_scheduled_flow_runs_for_deployments","title":"get_scheduled_flow_runs_for_deployments async","text":"

      Get scheduled runs for a set of deployments. Used by a runner to poll for work.

      Source code in prefect/server/api/deployments.py
      @router.post(\"/get_scheduled_flow_runs\")\nasync def get_scheduled_flow_runs_for_deployments(\n    background_tasks: BackgroundTasks,\n    deployment_ids: List[UUID] = Body(\n        default=..., description=\"The deployment IDs to get scheduled runs for\"\n    ),\n    scheduled_before: DateTimeTZ = Body(\n        None, description=\"The maximum time to look for scheduled flow runs\"\n    ),\n    limit: int = dependencies.LimitBody(),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.FlowRunResponse]:\n    \"\"\"\n    Get scheduled runs for a set of deployments. Used by a runner to poll for work.\n    \"\"\"\n    async with db.session_context() as session:\n        orm_flow_runs = await models.flow_runs.read_flow_runs(\n            session=session,\n            limit=limit,\n            deployment_filter=schemas.filters.DeploymentFilter(\n                id=schemas.filters.DeploymentFilterId(any_=deployment_ids),\n            ),\n            flow_run_filter=schemas.filters.FlowRunFilter(\n                next_scheduled_start_time=schemas.filters.FlowRunFilterNextScheduledStartTime(\n                    before_=scheduled_before\n                ),\n                state=schemas.filters.FlowRunFilterState(\n                    type=schemas.filters.FlowRunFilterStateType(\n                        any_=[schemas.states.StateType.SCHEDULED]\n                    )\n                ),\n            ),\n            sort=schemas.sorting.FlowRunSort.NEXT_SCHEDULED_START_TIME_ASC,\n        )\n\n        flow_run_responses = [\n            schemas.responses.FlowRunResponse.from_orm(orm_flow_run=orm_flow_run)\n            for orm_flow_run in orm_flow_runs\n        ]\n\n    background_tasks.add_task(\n        mark_deployments_ready,\n        deployment_ids=deployment_ids,\n    )\n\n    return flow_run_responses\n
      ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.pause_deployment","title":"pause_deployment async","text":"

      Set a deployment schedule to inactive. Any auto-scheduled runs still in a Scheduled state will be deleted.

      Source code in prefect/server/api/deployments.py
      @router.post(\"/{id:uuid}/set_schedule_inactive\")  # Legacy route\n@router.post(\"/{id:uuid}/pause_deployment\")\nasync def pause_deployment(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> None:\n    \"\"\"\n    Set a deployment schedule to inactive. Any auto-scheduled runs still in a Scheduled\n    state will be deleted.\n    \"\"\"\n    async with db.session_context(begin_transaction=False) as session:\n        deployment = await models.deployments.read_deployment(\n            session=session, deployment_id=deployment_id\n        )\n        if not deployment:\n            raise HTTPException(\n                status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n            )\n        deployment.is_schedule_active = False\n        deployment.paused = True\n\n        # Ensure that we're updating the replicated schedule's `active` field,\n        # if there is only a single schedule. This is support for legacy\n        # clients.\n\n        number_of_schedules = len(deployment.schedules)\n\n        if number_of_schedules == 1:\n            deployment.schedules[0].active = False\n        elif number_of_schedules > 1:\n            raise _multiple_schedules_error(deployment_id)\n\n        # commit here to make the inactive schedule \"visible\" to the scheduler service\n        await session.commit()\n\n        # delete any auto scheduled runs\n        await models.deployments._delete_scheduled_runs(\n            session=session,\n            deployment_id=deployment_id,\n            auto_scheduled_only=True,\n        )\n\n        await session.commit()\n
      ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.read_deployment","title":"read_deployment async","text":"

      Get a deployment by id.

      Source code in prefect/server/api/deployments.py
      @router.get(\"/{id}\")\nasync def read_deployment(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.DeploymentResponse:\n    \"\"\"\n    Get a deployment by id.\n    \"\"\"\n    async with db.session_context() as session:\n        deployment = await models.deployments.read_deployment(\n            session=session, deployment_id=deployment_id\n        )\n        if not deployment:\n            raise HTTPException(\n                status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n            )\n        return schemas.responses.DeploymentResponse.from_orm(deployment)\n
      ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.read_deployment_by_name","title":"read_deployment_by_name async","text":"

      Get a deployment using the name of the flow and the deployment.

      Source code in prefect/server/api/deployments.py
      @router.get(\"/name/{flow_name}/{deployment_name}\")\nasync def read_deployment_by_name(\n    flow_name: str = Path(..., description=\"The name of the flow\"),\n    deployment_name: str = Path(..., description=\"The name of the deployment\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.DeploymentResponse:\n    \"\"\"\n    Get a deployment using the name of the flow and the deployment.\n    \"\"\"\n    async with db.session_context() as session:\n        deployment = await models.deployments.read_deployment_by_name(\n            session=session, name=deployment_name, flow_name=flow_name\n        )\n        if not deployment:\n            raise HTTPException(\n                status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n            )\n        return schemas.responses.DeploymentResponse.from_orm(deployment)\n
      ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.read_deployments","title":"read_deployments async","text":"

      Query for deployments.

      Source code in prefect/server/api/deployments.py
      @router.post(\"/filter\")\nasync def read_deployments(\n    limit: int = dependencies.LimitBody(),\n    offset: int = Body(0, ge=0),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_pool_queues: schemas.filters.WorkQueueFilter = None,\n    sort: schemas.sorting.DeploymentSort = Body(\n        schemas.sorting.DeploymentSort.NAME_ASC\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.DeploymentResponse]:\n    \"\"\"\n    Query for deployments.\n    \"\"\"\n    async with db.session_context() as session:\n        response = await models.deployments.read_deployments(\n            session=session,\n            offset=offset,\n            sort=sort,\n            limit=limit,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_pool_queues,\n        )\n        return [\n            schemas.responses.DeploymentResponse.from_orm(orm_deployment=deployment)\n            for deployment in response\n        ]\n
      ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.resume_deployment","title":"resume_deployment async","text":"

      Set a deployment schedule to active. Runs will be scheduled immediately.

      Source code in prefect/server/api/deployments.py
      @router.post(\"/{id:uuid}/set_schedule_active\")  # Legacy route\n@router.post(\"/{id:uuid}/resume_deployment\")\nasync def resume_deployment(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> None:\n    \"\"\"\n    Set a deployment schedule to active. Runs will be scheduled immediately.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        deployment = await models.deployments.read_deployment(\n            session=session, deployment_id=deployment_id\n        )\n        if not deployment:\n            raise HTTPException(\n                status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n            )\n        deployment.is_schedule_active = True\n        deployment.paused = False\n\n        # Ensure that we're updating the replicated schedule's `active` field,\n        # if there is only a single schedule. This is support for legacy\n        # clients.\n\n        number_of_schedules = len(deployment.schedules)\n\n        if number_of_schedules == 1:\n            deployment.schedules[0].active = True\n        elif number_of_schedules > 1:\n            raise _multiple_schedules_error(deployment_id)\n
      ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.schedule_deployment","title":"schedule_deployment async","text":"

      Schedule runs for a deployment. For backfills, provide start/end times in the past.

      This function will generate the minimum number of runs that satisfy the min and max times, and the min and max counts. Specifically, the following order will be respected.

      - Runs will be generated starting on or after the `start_time`\n- No more than `max_runs` runs will be generated\n- No runs will be generated after `end_time` is reached\n- At least `min_runs` runs will be generated\n- Runs will be generated until at least `start_time + min_time` is reached\n
      Source code in prefect/server/api/deployments.py
      @router.post(\"/{id}/schedule\")\nasync def schedule_deployment(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    start_time: DateTimeTZ = Body(None, description=\"The earliest date to schedule\"),\n    end_time: DateTimeTZ = Body(None, description=\"The latest date to schedule\"),\n    min_time: datetime.timedelta = Body(\n        None,\n        description=(\n            \"Runs will be scheduled until at least this long after the `start_time`\"\n        ),\n    ),\n    min_runs: int = Body(None, description=\"The minimum number of runs to schedule\"),\n    max_runs: int = Body(None, description=\"The maximum number of runs to schedule\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> None:\n    \"\"\"\n    Schedule runs for a deployment. For backfills, provide start/end times in the past.\n\n    This function will generate the minimum number of runs that satisfy the min\n    and max times, and the min and max counts. Specifically, the following order\n    will be respected.\n\n        - Runs will be generated starting on or after the `start_time`\n        - No more than `max_runs` runs will be generated\n        - No runs will be generated after `end_time` is reached\n        - At least `min_runs` runs will be generated\n        - Runs will be generated until at least `start_time + min_time` is reached\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        await models.deployments.schedule_runs(\n            session=session,\n            deployment_id=deployment_id,\n            start_time=start_time,\n            min_time=min_time,\n            end_time=end_time,\n            min_runs=min_runs,\n            max_runs=max_runs,\n        )\n
      ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.work_queue_check_for_deployment","title":"work_queue_check_for_deployment async","text":"

Get a list of work queues that are able to pick up the specified deployment.

      This endpoint is intended to be used by the UI to provide users warnings about deployments that are unable to be executed because there are no work queues that will pick up their runs, based on existing filter criteria. It may be deprecated in the future because there is not a strict relationship between work queues and deployments.

      Source code in prefect/server/api/deployments.py
      @router.get(\"/{id}/work_queue_check\", deprecated=True)\nasync def work_queue_check_for_deployment(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.WorkQueue]:\n    \"\"\"\n    Get list of work-queues that are able to pick up the specified deployment.\n\n    This endpoint is intended to be used by the UI to provide users warnings\n    about deployments that are unable to be executed because there are no work\n    queues that will pick up their runs, based on existing filter criteria. It\n    may be deprecated in the future because there is not a strict relationship\n    between work queues and deployments.\n    \"\"\"\n    try:\n        async with db.session_context() as session:\n            work_queues = await models.deployments.check_work_queues_for_deployment(\n                session=session, deployment_id=deployment_id\n            )\n    except ObjectNotFoundError:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n        )\n    return work_queues\n
      ","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/flow_run_states/","title":"server.api.flow_run_states","text":"","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_run_states/#prefect.server.api.flow_run_states","title":"prefect.server.api.flow_run_states","text":"

      Routes for interacting with flow run state objects.

      ","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_run_states/#prefect.server.api.flow_run_states.read_flow_run_state","title":"read_flow_run_state async","text":"

      Get a flow run state by id.

      Source code in prefect/server/api/flow_run_states.py
      @router.get(\"/{id}\")\nasync def read_flow_run_state(\n    flow_run_state_id: UUID = Path(\n        ..., description=\"The flow run state id\", alias=\"id\"\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.states.State:\n    \"\"\"\n    Get a flow run state by id.\n    \"\"\"\n    async with db.session_context() as session:\n        flow_run_state = await models.flow_run_states.read_flow_run_state(\n            session=session, flow_run_state_id=flow_run_state_id\n        )\n    if not flow_run_state:\n        raise HTTPException(\n            status.HTTP_404_NOT_FOUND, detail=\"Flow run state not found\"\n        )\n    return flow_run_state\n
      ","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_run_states/#prefect.server.api.flow_run_states.read_flow_run_states","title":"read_flow_run_states async","text":"

      Get states associated with a flow run.

      Source code in prefect/server/api/flow_run_states.py
      @router.get(\"/\")\nasync def read_flow_run_states(\n    flow_run_id: UUID,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.states.State]:\n    \"\"\"\n    Get states associated with a flow run.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.flow_run_states.read_flow_run_states(\n            session=session, flow_run_id=flow_run_id\n        )\n
      ","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_runs/","title":"server.api.flow_runs","text":"","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs","title":"prefect.server.api.flow_runs","text":"

      Routes for interacting with flow run objects.

      ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.average_flow_run_lateness","title":"average_flow_run_lateness async","text":"

      Query for average flow-run lateness in seconds.

      Source code in prefect/server/api/flow_runs.py
      @router.post(\"/lateness\")\nasync def average_flow_run_lateness(\n    flows: Optional[schemas.filters.FlowFilter] = None,\n    flow_runs: Optional[schemas.filters.FlowRunFilter] = None,\n    task_runs: Optional[schemas.filters.TaskRunFilter] = None,\n    deployments: Optional[schemas.filters.DeploymentFilter] = None,\n    work_pools: Optional[schemas.filters.WorkPoolFilter] = None,\n    work_pool_queues: Optional[schemas.filters.WorkQueueFilter] = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> Optional[float]:\n    \"\"\"\n    Query for average flow-run lateness in seconds.\n    \"\"\"\n    async with db.session_context() as session:\n        if db.dialect.name == \"sqlite\":\n            # Since we want an _average_ of the lateness we're unable to use\n            # the existing FlowRun.expected_start_time_delta property as it\n            # returns a timedelta and SQLite is unable to properly deal with it\n            # and always returns 1970.0 as the average. This copies the same\n            # logic but ensures that it returns the number of seconds instead\n            # so it's compatible with SQLite.\n            base_query = sa.case(\n                (\n                    db.FlowRun.start_time > db.FlowRun.expected_start_time,\n                    sa.func.strftime(\"%s\", db.FlowRun.start_time)\n                    - sa.func.strftime(\"%s\", db.FlowRun.expected_start_time),\n                ),\n                (\n                    db.FlowRun.start_time.is_(None)\n                    & db.FlowRun.state_type.notin_(schemas.states.TERMINAL_STATES)\n                    & (db.FlowRun.expected_start_time < sa.func.datetime(\"now\")),\n                    sa.func.strftime(\"%s\", sa.func.datetime(\"now\"))\n                    - sa.func.strftime(\"%s\", db.FlowRun.expected_start_time),\n                ),\n                else_=0,\n            )\n        else:\n            base_query = db.FlowRun.estimated_start_time_delta\n\n        query = await models.flow_runs._apply_flow_run_filters(\n            sa.select(sa.func.avg(base_query)),\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_pool_queues,\n        )\n        result = await session.execute(query)\n\n        avg_lateness = result.scalar()\n\n        if avg_lateness is None:\n            return None\n        elif isinstance(avg_lateness, datetime.timedelta):\n            return avg_lateness.total_seconds()\n        else:\n            return avg_lateness\n
      ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.count_flow_runs","title":"count_flow_runs async","text":"

Count flow runs matching the provided filters.

      Source code in prefect/server/api/flow_runs.py
      @router.post(\"/count\")\nasync def count_flow_runs(\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_pool_queues: schemas.filters.WorkQueueFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> int:\n    \"\"\"\n    Query for flow runs.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.flow_runs.count_flow_runs(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_pool_queues,\n        )\n
      ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.create_flow_run","title":"create_flow_run async","text":"

      Create a flow run. If a flow run with the same flow_id and idempotency key already exists, the existing flow run will be returned.

      If no state is provided, the flow run will be created in a PENDING state.

      Source code in prefect/server/api/flow_runs.py
      @router.post(\"/\")\nasync def create_flow_run(\n    flow_run: schemas.actions.FlowRunCreate,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    response: Response = None,\n    created_by: Optional[schemas.core.CreatedBy] = Depends(dependencies.get_created_by),\n    orchestration_parameters: Dict[str, Any] = Depends(\n        orchestration_dependencies.provide_flow_orchestration_parameters\n    ),\n    api_version=Depends(dependencies.provide_request_api_version),\n) -> schemas.responses.FlowRunResponse:\n    \"\"\"\n    Create a flow run. If a flow run with the same flow_id and\n    idempotency key already exists, the existing flow run will be returned.\n\n    If no state is provided, the flow run will be created in a PENDING state.\n    \"\"\"\n    # hydrate the input model into a full flow run / state model\n    flow_run = schemas.core.FlowRun(**flow_run.dict(), created_by=created_by)\n\n    # pass the request version to the orchestration engine to support compatibility code\n    orchestration_parameters.update({\"api-version\": api_version})\n\n    if not flow_run.state:\n        flow_run.state = schemas.states.Pending()\n\n    now = pendulum.now(\"UTC\")\n\n    async with db.session_context(begin_transaction=True) as session:\n        model = await models.flow_runs.create_flow_run(\n            session=session,\n            flow_run=flow_run,\n            orchestration_parameters=orchestration_parameters,\n        )\n        if model.created >= now:\n            response.status_code = status.HTTP_201_CREATED\n\n        return schemas.responses.FlowRunResponse.from_orm(model)\n
      ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.create_flow_run_input","title":"create_flow_run_input async","text":"

      Create a key/value input for a flow run.

      Source code in prefect/server/api/flow_runs.py
      @router.post(\"/{id}/input\", status_code=status.HTTP_201_CREATED)\nasync def create_flow_run_input(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    key: str = Body(..., description=\"The input key\"),\n    value: bytes = Body(..., description=\"The value of the input\"),\n    sender: Optional[str] = Body(None, description=\"The sender of the input\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n    \"\"\"\n    Create a key/value input for a flow run.\n    \"\"\"\n    async with db.session_context() as session:\n        try:\n            await models.flow_run_input.create_flow_run_input(\n                session=session,\n                flow_run_input=schemas.core.FlowRunInput(\n                    flow_run_id=flow_run_id,\n                    key=key,\n                    sender=sender,\n                    value=value.decode(),\n                ),\n            )\n            await session.commit()\n\n        except IntegrityError as exc:\n            if \"unique constraint\" in str(exc).lower():\n                raise HTTPException(\n                    status_code=status.HTTP_409_CONFLICT,\n                    detail=\"A flow run input with this key already exists.\",\n                )\n            else:\n                raise HTTPException(\n                    status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\"\n                )\n
      ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.delete_flow_run","title":"delete_flow_run async","text":"

      Delete a flow run by id.

      Source code in prefect/server/api/flow_runs.py
      @router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_flow_run(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n    \"\"\"\n    Delete a flow run by id.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.flow_runs.delete_flow_run(\n            session=session, flow_run_id=flow_run_id\n        )\n    if not result:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\"\n        )\n
      ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.delete_flow_run_input","title":"delete_flow_run_input async","text":"

Delete a flow run input.

      Source code in prefect/server/api/flow_runs.py
      @router.delete(\"/{id}/input/{key}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_flow_run_input(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    key: str = Path(..., description=\"The input key\", alias=\"key\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n    \"\"\"\n    Delete a flow run input\n    \"\"\"\n\n    async with db.session_context() as session:\n        deleted = await models.flow_run_input.delete_flow_run_input(\n            session=session, flow_run_id=flow_run_id, key=key\n        )\n        await session.commit()\n\n        if not deleted:\n            raise HTTPException(\n                status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run input not found\"\n            )\n
      ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.filter_flow_run_input","title":"filter_flow_run_input async","text":"

Filter flow run inputs by key prefix.

      Source code in prefect/server/api/flow_runs.py
      @router.post(\"/{id}/input/filter\")\nasync def filter_flow_run_input(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    prefix: str = Body(..., description=\"The input key prefix\", embed=True),\n    limit: int = Body(\n        1, description=\"The maximum number of results to return\", embed=True\n    ),\n    exclude_keys: List[str] = Body(\n        [], description=\"Exclude inputs with these keys\", embed=True\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.FlowRunInput]:\n    \"\"\"\n    Filter flow run inputs by key prefix\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.flow_run_input.filter_flow_run_input(\n            session=session,\n            flow_run_id=flow_run_id,\n            prefix=prefix,\n            limit=limit,\n            exclude_keys=exclude_keys,\n        )\n
      ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.flow_run_history","title":"flow_run_history async","text":"

      Query for flow run history data across a given range and interval.

      Source code in prefect/server/api/flow_runs.py
      @router.post(\"/history\")\nasync def flow_run_history(\n    history_start: DateTimeTZ = Body(..., description=\"The history's start time.\"),\n    history_end: DateTimeTZ = Body(..., description=\"The history's end time.\"),\n    history_interval: datetime.timedelta = Body(\n        ...,\n        description=(\n            \"The size of each history interval, in seconds. Must be at least 1 second.\"\n        ),\n        alias=\"history_interval_seconds\",\n    ),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_queues: schemas.filters.WorkQueueFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.HistoryResponse]:\n    \"\"\"\n    Query for flow run history data across a given range and interval.\n    \"\"\"\n    if history_interval < datetime.timedelta(seconds=1):\n        raise HTTPException(\n            status.HTTP_422_UNPROCESSABLE_ENTITY,\n            detail=\"History interval must not be less than 1 second.\",\n        )\n\n    async with db.session_context() as session:\n        return await run_history(\n            session=session,\n            run_type=\"flow_run\",\n            history_start=history_start,\n            history_end=history_end,\n            history_interval=history_interval,\n            flows=flows,\n            flow_runs=flow_runs,\n            task_runs=task_runs,\n            deployments=deployments,\n            work_pools=work_pools,\n            work_queues=work_queues,\n        )\n
      ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_run","title":"read_flow_run async","text":"

      Get a flow run by id.

      Source code in prefect/server/api/flow_runs.py
      @router.get(\"/{id}\")\nasync def read_flow_run(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.FlowRunResponse:\n    \"\"\"\n    Get a flow run by id.\n    \"\"\"\n    async with db.session_context() as session:\n        flow_run = await models.flow_runs.read_flow_run(\n            session=session, flow_run_id=flow_run_id\n        )\n        if not flow_run:\n            raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\")\n        return schemas.responses.FlowRunResponse.from_orm(flow_run)\n
      ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_run_graph_v1","title":"read_flow_run_graph_v1 async","text":"

      Get a task run dependency map for a given flow run.

      Source code in prefect/server/api/flow_runs.py
      @router.get(\"/{id}/graph\")\nasync def read_flow_run_graph_v1(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[DependencyResult]:\n    \"\"\"\n    Get a task run dependency map for a given flow run.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.flow_runs.read_task_run_dependencies(\n            session=session, flow_run_id=flow_run_id\n        )\n
      ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_run_graph_v2","title":"read_flow_run_graph_v2 async","text":"

Get a graph of the tasks and subflow runs for the given flow run.

      Source code in prefect/server/api/flow_runs.py
      @router.get(\"/{id:uuid}/graph-v2\")\nasync def read_flow_run_graph_v2(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    since: datetime.datetime = Query(\n        datetime.datetime.min,\n        description=\"Only include runs that start or end after this time.\",\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> Graph:\n    \"\"\"\n    Get a graph of the tasks and subflow runs for the given flow run\n    \"\"\"\n    async with db.session_context() as session:\n        try:\n            return await read_flow_run_graph(\n                session=session,\n                flow_run_id=flow_run_id,\n                since=since,\n            )\n        except FlowRunGraphTooLarge as e:\n            raise HTTPException(\n                status_code=status.HTTP_400_BAD_REQUEST,\n                detail=str(e),\n            )\n
      ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_run_input","title":"read_flow_run_input async","text":"

Get the value of a flow run input.

      Source code in prefect/server/api/flow_runs.py
      @router.get(\"/{id}/input/{key}\")\nasync def read_flow_run_input(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    key: str = Path(..., description=\"The input key\", alias=\"key\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> PlainTextResponse:\n    \"\"\"\n    Create a value from a flow run input\n    \"\"\"\n\n    async with db.session_context() as session:\n        flow_run_input = await models.flow_run_input.read_flow_run_input(\n            session=session, flow_run_id=flow_run_id, key=key\n        )\n\n    if flow_run_input:\n        return PlainTextResponse(flow_run_input.value)\n    else:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run input not found\"\n        )\n
      ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_runs","title":"read_flow_runs async","text":"

      Query for flow runs.

      Source code in prefect/server/api/flow_runs.py
      @router.post(\"/filter\", response_class=ORJSONResponse)\nasync def read_flow_runs(\n    sort: schemas.sorting.FlowRunSort = Body(schemas.sorting.FlowRunSort.ID_DESC),\n    limit: int = dependencies.LimitBody(),\n    offset: int = Body(0, ge=0),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_pool_queues: schemas.filters.WorkQueueFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.FlowRunResponse]:\n    \"\"\"\n    Query for flow runs.\n    \"\"\"\n    async with db.session_context() as session:\n        db_flow_runs = await models.flow_runs.read_flow_runs(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_pool_queues,\n            offset=offset,\n            limit=limit,\n            sort=sort,\n        )\n\n        # Instead of relying on fastapi.encoders.jsonable_encoder to convert the\n        # response to JSON, we do so more efficiently ourselves.\n        # In particular, the FastAPI encoder is very slow for large, nested objects.\n        # See: https://github.com/tiangolo/fastapi/issues/1224\n        encoded = [\n            schemas.responses.FlowRunResponse.from_orm(fr).dict(json_compatible=True)\n            for fr in db_flow_runs\n        ]\n        return ORJSONResponse(content=encoded)\n
      ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.resume_flow_run","title":"resume_flow_run async","text":"

      Resume a paused flow run.

      Source code in prefect/server/api/flow_runs.py
      @router.post(\"/{id}/resume\")\nasync def resume_flow_run(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    run_input: Optional[Dict] = Body(default=None, embed=True),\n    response: Response = None,\n    flow_policy: BaseOrchestrationPolicy = Depends(\n        orchestration_dependencies.provide_flow_policy\n    ),\n    task_policy: BaseOrchestrationPolicy = Depends(\n        orchestration_dependencies.provide_task_policy\n    ),\n    orchestration_parameters: Dict[str, Any] = Depends(\n        orchestration_dependencies.provide_flow_orchestration_parameters\n    ),\n    api_version=Depends(dependencies.provide_request_api_version),\n) -> OrchestrationResult:\n    \"\"\"\n    Resume a paused flow run.\n    \"\"\"\n    now = pendulum.now(\"UTC\")\n\n    async with db.session_context(begin_transaction=True) as session:\n        flow_run = await models.flow_runs.read_flow_run(session, flow_run_id)\n        state = flow_run.state\n\n        if state is None or state.type != schemas.states.StateType.PAUSED:\n            result = OrchestrationResult(\n                state=None,\n                status=schemas.responses.SetStateStatus.ABORT,\n                details=schemas.responses.StateAbortDetails(\n                    reason=\"Cannot resume a flow run that is not paused.\"\n                ),\n            )\n            return result\n\n        orchestration_parameters.update({\"api-version\": api_version})\n\n        keyset = state.state_details.run_input_keyset\n\n        if keyset:\n            run_input = run_input or {}\n\n            try:\n                hydration_context = await schema_tools.HydrationContext.build(\n                    session=session, raise_on_error=True\n                )\n                run_input = schema_tools.hydrate(run_input, hydration_context) or {}\n            except schema_tools.HydrationError as exc:\n                return OrchestrationResult(\n                    state=state,\n                    status=schemas.responses.SetStateStatus.REJECT,\n                    details=schemas.responses.StateAbortDetails(\n                        reason=f\"Error hydrating run input: {exc}\",\n                    ),\n                )\n\n            schema_json = await models.flow_run_input.read_flow_run_input(\n                session=session, flow_run_id=flow_run.id, key=keyset[\"schema\"]\n            )\n\n            if schema_json is None:\n                return OrchestrationResult(\n                    state=state,\n                    status=schemas.responses.SetStateStatus.REJECT,\n                    details=schemas.responses.StateAbortDetails(\n                        reason=\"Run input schema not found.\"\n                    ),\n                )\n\n            try:\n                schema = orjson.loads(schema_json.value)\n            except orjson.JSONDecodeError:\n                return OrchestrationResult(\n                    state=state,\n                    status=schemas.responses.SetStateStatus.REJECT,\n                    details=schemas.responses.StateAbortDetails(\n                        reason=\"Run input schema is not valid JSON.\"\n                    ),\n                )\n\n            try:\n                schema_tools.validate(run_input, schema, raise_on_error=True)\n            except schema_tools.ValidationError as exc:\n                return OrchestrationResult(\n                    state=state,\n                    
status=schemas.responses.SetStateStatus.REJECT,\n                    details=schemas.responses.StateAbortDetails(\n                        reason=f\"Reason: {exc}\"\n                    ),\n                )\n            except schema_tools.CircularSchemaRefError:\n                return OrchestrationResult(\n                    state=state,\n                    status=schemas.responses.SetStateStatus.REJECT,\n                    details=schemas.responses.StateAbortDetails(\n                        reason=\"Invalid schema: Unable to validate schema with circular references.\",\n                    ),\n                )\n\n        if state.state_details.pause_reschedule:\n            orchestration_result = await models.flow_runs.set_flow_run_state(\n                session=session,\n                flow_run_id=flow_run_id,\n                state=schemas.states.Scheduled(\n                    name=\"Resuming\", scheduled_time=pendulum.now(\"UTC\")\n                ),\n                flow_policy=flow_policy,\n                orchestration_parameters=orchestration_parameters,\n            )\n        else:\n            orchestration_result = await models.flow_runs.set_flow_run_state(\n                session=session,\n                flow_run_id=flow_run_id,\n                state=schemas.states.Running(),\n                flow_policy=flow_policy,\n                orchestration_parameters=orchestration_parameters,\n            )\n\n        if (\n            keyset\n            and run_input\n            and orchestration_result.status == schemas.responses.SetStateStatus.ACCEPT\n        ):\n            # The state change is accepted, go ahead and store the validated\n            # run input.\n            await models.flow_run_input.create_flow_run_input(\n                session=session,\n                flow_run_input=schemas.core.FlowRunInput(\n                    flow_run_id=flow_run_id,\n                    key=keyset[\"response\"],\n                    value=orjson.dumps(run_input).decode(\"utf-8\"),\n                ),\n            )\n\n        # set the 201 if a new state was created\n        if orchestration_result.state and orchestration_result.state.timestamp >= now:\n            response.status_code = status.HTTP_201_CREATED\n        else:\n            response.status_code = status.HTTP_200_OK\n\n        return orchestration_result\n
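When a paused run is waiting on input, resuming it means posting the embedded run_input payload so it can be validated against the stored schema. The URL, flow run id, and input keys below are illustrative; the real keys depend on the flow's pause schema.

import httpx

API = "http://127.0.0.1:4200/api"                       # assumed default local server
flow_run_id = "00000000-0000-0000-0000-000000000000"    # placeholder

resp = httpx.post(
    f"{API}/flow_runs/{flow_run_id}/resume",
    json={"run_input": {"approved": True}},   # hypothetical pause input
)
result = resp.json()
print(result["status"])   # ACCEPT, REJECT, or ABORT per the orchestration result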
      ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.set_flow_run_state","title":"set_flow_run_state async","text":"

      Set a flow run state, invoking any orchestration rules.

      Source code in prefect/server/api/flow_runs.py
      @router.post(\"/{id}/set_state\")\nasync def set_flow_run_state(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    state: schemas.actions.StateCreate = Body(..., description=\"The intended state.\"),\n    force: bool = Body(\n        False,\n        description=(\n            \"If false, orchestration rules will be applied that may alter or prevent\"\n            \" the state transition. If True, orchestration rules are not applied.\"\n        ),\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    response: Response = None,\n    flow_policy: BaseOrchestrationPolicy = Depends(\n        orchestration_dependencies.provide_flow_policy\n    ),\n    orchestration_parameters: Dict[str, Any] = Depends(\n        orchestration_dependencies.provide_flow_orchestration_parameters\n    ),\n    api_version=Depends(dependencies.provide_request_api_version),\n) -> OrchestrationResult:\n    \"\"\"Set a flow run state, invoking any orchestration rules.\"\"\"\n\n    # pass the request version to the orchestration engine to support compatibility code\n    orchestration_parameters.update({\"api-version\": api_version})\n\n    now = pendulum.now(\"UTC\")\n\n    # create the state\n    async with db.session_context(\n        begin_transaction=True, with_for_update=True\n    ) as session:\n        orchestration_result = await models.flow_runs.set_flow_run_state(\n            session=session,\n            flow_run_id=flow_run_id,\n            # convert to a full State object\n            state=schemas.states.State.parse_obj(state),\n            force=force,\n            flow_policy=flow_policy,\n            orchestration_parameters=orchestration_parameters,\n        )\n\n    # set the 201 if a new state was created\n    if orchestration_result.state and orchestration_result.state.timestamp >= now:\n        response.status_code = status.HTTP_201_CREATED\n    else:\n        response.status_code = status.HTTP_200_OK\n\n    return orchestration_result\n
      ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.update_flow_run","title":"update_flow_run async","text":"

      Updates a flow run.

      Source code in prefect/server/api/flow_runs.py
      @router.patch(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def update_flow_run(\n    flow_run: schemas.actions.FlowRunUpdate,\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n    \"\"\"\n    Updates a flow run.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        if flow_run.job_variables is not None:\n            this_run = await models.flow_runs.read_flow_run(\n                session, flow_run_id=flow_run_id\n            )\n            if this_run is None:\n                raise HTTPException(\n                    status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\"\n                )\n            if not this_run.state:\n                raise HTTPException(\n                    status.HTTP_400_BAD_REQUEST,\n                    detail=\"Flow run state is required to update job variables but none exists\",\n                )\n            if this_run.state.type != schemas.states.StateType.SCHEDULED:\n                raise HTTPException(\n                    status_code=status.HTTP_400_BAD_REQUEST,\n                    detail=f\"Job variables for a flow run in state {this_run.state.type.name} cannot be updated\",\n                )\n            if this_run.deployment_id is None:\n                raise HTTPException(\n                    status_code=status.HTTP_400_BAD_REQUEST,\n                    detail=\"A deployment for the flow run could not be found\",\n                )\n\n            deployment = await models.deployments.read_deployment(\n                session=session, deployment_id=this_run.deployment_id\n            )\n            if deployment is None:\n                raise HTTPException(\n                    status_code=status.HTTP_400_BAD_REQUEST,\n                    detail=\"A deployment for the flow run could not be found\",\n                )\n\n            await validate_job_variables_for_flow_run(flow_run, deployment, session)\n\n        result = await models.flow_runs.update_flow_run(\n            session=session, flow_run=flow_run, flow_run_id=flow_run_id\n        )\n    if not result:\n        raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\")\n
      ","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flows/","title":"server.api.flows","text":"","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows","title":"prefect.server.api.flows","text":"

      Routes for interacting with flow objects.

      ","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.count_flows","title":"count_flows async","text":"

      Count flows.

      Source code in prefect/server/api/flows.py
      @router.post(\"/count\")\nasync def count_flows(\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> int:\n    \"\"\"\n    Count flows.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.flows.count_flows(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n        )\n
      ","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.create_flow","title":"create_flow async","text":"

      Gracefully creates a new flow from the provided schema. If a flow with the same name already exists, the existing flow is returned.

      Source code in prefect/server/api/flows.py
      @router.post(\"/\")\nasync def create_flow(\n    flow: schemas.actions.FlowCreate,\n    response: Response,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.Flow:\n    \"\"\"Gracefully creates a new flow from the provided schema. If a flow with the\n    same name already exists, the existing flow is returned.\n    \"\"\"\n    # hydrate the input model into a full flow model\n    flow = schemas.core.Flow(**flow.dict())\n\n    now = pendulum.now(\"UTC\")\n\n    async with db.session_context(begin_transaction=True) as session:\n        model = await models.flows.create_flow(session=session, flow=flow)\n\n    if model.created >= now:\n        response.status_code = status.HTTP_201_CREATED\n    return model\n
      ","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.delete_flow","title":"delete_flow async","text":"

      Delete a flow by id.

      Source code in prefect/server/api/flows.py
      @router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_flow(\n    flow_id: UUID = Path(..., description=\"The flow id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n    \"\"\"\n    Delete a flow by id.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.flows.delete_flow(session=session, flow_id=flow_id)\n    if not result:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n        )\n
      ","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.read_flow","title":"read_flow async","text":"

      Get a flow by id.

      Source code in prefect/server/api/flows.py
      @router.get(\"/{id}\")\nasync def read_flow(\n    flow_id: UUID = Path(..., description=\"The flow id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.Flow:\n    \"\"\"\n    Get a flow by id.\n    \"\"\"\n    async with db.session_context() as session:\n        flow = await models.flows.read_flow(session=session, flow_id=flow_id)\n    if not flow:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n        )\n    return flow\n
      ","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.read_flow_by_name","title":"read_flow_by_name async","text":"

      Get a flow by name.

      Source code in prefect/server/api/flows.py
      @router.get(\"/name/{name}\")\nasync def read_flow_by_name(\n    name: str = Path(..., description=\"The name of the flow\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.Flow:\n    \"\"\"\n    Get a flow by name.\n    \"\"\"\n    async with db.session_context() as session:\n        flow = await models.flows.read_flow_by_name(session=session, name=name)\n    if not flow:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n        )\n    return flow\n
      ","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.read_flows","title":"read_flows async","text":"

      Query for flows.

      Source code in prefect/server/api/flows.py
      @router.post(\"/filter\")\nasync def read_flows(\n    limit: int = dependencies.LimitBody(),\n    offset: int = Body(0, ge=0),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    sort: schemas.sorting.FlowSort = Body(schemas.sorting.FlowSort.NAME_ASC),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.Flow]:\n    \"\"\"\n    Query for flows.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.flows.read_flows(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            sort=sort,\n            offset=offset,\n            limit=limit,\n        )\n
      ","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.update_flow","title":"update_flow async","text":"

      Updates a flow.

      Source code in prefect/server/api/flows.py
      @router.patch(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def update_flow(\n    flow: schemas.actions.FlowUpdate,\n    flow_id: UUID = Path(..., description=\"The flow id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n    \"\"\"\n    Updates a flow.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.flows.update_flow(\n            session=session, flow=flow, flow_id=flow_id\n        )\n    if not result:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n        )\n
      ","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/middleware/","title":"server.api.middleware","text":"","tags":["Prefect API","middleware"]},{"location":"api-ref/server/api/middleware/#prefect.server.api.middleware","title":"prefect.server.api.middleware","text":"","tags":["Prefect API","middleware"]},{"location":"api-ref/server/api/middleware/#prefect.server.api.middleware.CsrfMiddleware","title":"CsrfMiddleware","text":"

      Bases: BaseHTTPMiddleware

      Middleware for CSRF protection. This middleware will check for a CSRF token in the headers of any POST, PUT, PATCH, or DELETE request. If the token is not present or does not match the token stored in the database for the client, the request will be rejected with a 403 status code.

      Source code in prefect/server/api/middleware.py
      class CsrfMiddleware(BaseHTTPMiddleware):\n    \"\"\"\n    Middleware for CSRF protection. This middleware will check for a CSRF token\n    in the headers of any POST, PUT, PATCH, or DELETE request. If the token is\n    not present or does not match the token stored in the database for the\n    client, the request will be rejected with a 403 status code.\n    \"\"\"\n\n    async def dispatch(\n        self, request: Request, call_next: NextMiddlewareFunction\n    ) -> Response:\n        \"\"\"\n        Dispatch method for the middleware. This method will check for the\n        presence of a CSRF token in the headers of the request and compare it\n        to the token stored in the database for the client. If the token is not\n        present or does not match, the request will be rejected with a 403\n        status code.\n        \"\"\"\n\n        request_needs_csrf_protection = request.method in {\n            \"POST\",\n            \"PUT\",\n            \"PATCH\",\n            \"DELETE\",\n        }\n\n        if (\n            settings.PREFECT_SERVER_CSRF_PROTECTION_ENABLED.value()\n            and request_needs_csrf_protection\n        ):\n            incoming_token = request.headers.get(\"Prefect-Csrf-Token\")\n            incoming_client = request.headers.get(\"Prefect-Csrf-Client\")\n\n            if incoming_token is None:\n                return JSONResponse(\n                    {\"detail\": \"Missing CSRF token.\"},\n                    status_code=status.HTTP_403_FORBIDDEN,\n                )\n\n            if incoming_client is None:\n                return JSONResponse(\n                    {\"detail\": \"Missing client identifier.\"},\n                    status_code=status.HTTP_403_FORBIDDEN,\n                )\n\n            db = provide_database_interface()\n            async with db.session_context() as session:\n                token = await models.csrf_token.read_token_for_client(\n                    session=session, client=incoming_client\n                )\n\n                if token is None or token.token != incoming_token:\n                    return JSONResponse(\n                        {\"detail\": \"Invalid CSRF token or client identifier.\"},\n                        status_code=status.HTTP_403_FORBIDDEN,\n                        headers={\"Access-Control-Allow-Origin\": \"*\"},\n                    )\n\n        return await call_next(request)\n
      ","tags":["Prefect API","middleware"]},{"location":"api-ref/server/api/middleware/#prefect.server.api.middleware.CsrfMiddleware.dispatch","title":"dispatch async","text":"

      Dispatch method for the middleware. This method will check for the presence of a CSRF token in the headers of the request and compare it to the token stored in the database for the client. If the token is not present or does not match, the request will be rejected with a 403 status code.

      Source code in prefect/server/api/middleware.py
      async def dispatch(\n    self, request: Request, call_next: NextMiddlewareFunction\n) -> Response:\n    \"\"\"\n    Dispatch method for the middleware. This method will check for the\n    presence of a CSRF token in the headers of the request and compare it\n    to the token stored in the database for the client. If the token is not\n    present or does not match, the request will be rejected with a 403\n    status code.\n    \"\"\"\n\n    request_needs_csrf_protection = request.method in {\n        \"POST\",\n        \"PUT\",\n        \"PATCH\",\n        \"DELETE\",\n    }\n\n    if (\n        settings.PREFECT_SERVER_CSRF_PROTECTION_ENABLED.value()\n        and request_needs_csrf_protection\n    ):\n        incoming_token = request.headers.get(\"Prefect-Csrf-Token\")\n        incoming_client = request.headers.get(\"Prefect-Csrf-Client\")\n\n        if incoming_token is None:\n            return JSONResponse(\n                {\"detail\": \"Missing CSRF token.\"},\n                status_code=status.HTTP_403_FORBIDDEN,\n            )\n\n        if incoming_client is None:\n            return JSONResponse(\n                {\"detail\": \"Missing client identifier.\"},\n                status_code=status.HTTP_403_FORBIDDEN,\n            )\n\n        db = provide_database_interface()\n        async with db.session_context() as session:\n            token = await models.csrf_token.read_token_for_client(\n                session=session, client=incoming_client\n            )\n\n            if token is None or token.token != incoming_token:\n                return JSONResponse(\n                    {\"detail\": \"Invalid CSRF token or client identifier.\"},\n                    status_code=status.HTTP_403_FORBIDDEN,\n                    headers={\"Access-Control-Allow-Origin\": \"*\"},\n                )\n\n    return await call_next(request)\n
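For a client to pass this middleware when CSRF protection is enabled, every mutating request must carry both headers. The token-issuing route used below is an assumption about the wider API (it is not defined in this module), and the base URL, client id, and response field name are placeholders.

import uuid
import httpx

API = "http://127.0.0.1:4200/api"   # assumed default local server
client_id = str(uuid.uuid4())       # any stable identifier for this client

# Assumption: the server exposes a csrf-token route keyed by client id,
# returning a body with a "token" field.
token = httpx.get(f"{API}/csrf-token", params={"client": client_id}).json()["token"]

resp = httpx.post(
    f"{API}/flows/",
    json={"name": "csrf-demo"},
    headers={"Prefect-Csrf-Token": token, "Prefect-Csrf-Client": client_id},
)
resp.raise_for_status()   # 403 would indicate a missing or mismatched token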
      ","tags":["Prefect API","middleware"]},{"location":"api-ref/server/api/run_history/","title":"server.api.run_history","text":"","tags":["Prefect API","flow runs","task runs","observability"]},{"location":"api-ref/server/api/run_history/#prefect.server.api.run_history","title":"prefect.server.api.run_history","text":"

      Utilities for querying flow and task run history.

      ","tags":["Prefect API","flow runs","task runs","observability"]},{"location":"api-ref/server/api/run_history/#prefect.server.api.run_history.run_history","title":"run_history async","text":"

Produce a history of runs aggregated by interval and state.

      Source code in prefect/server/api/run_history.py
      @db_injector\nasync def run_history(\n    db: PrefectDBInterface,\n    session: sa.orm.Session,\n    run_type: Literal[\"flow_run\", \"task_run\"],\n    history_start: DateTimeTZ,\n    history_end: DateTimeTZ,\n    history_interval: datetime.timedelta,\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_queues: schemas.filters.WorkQueueFilter = None,\n) -> List[schemas.responses.HistoryResponse]:\n    \"\"\"\n    Produce a history of runs aggregated by interval and state\n    \"\"\"\n\n    # SQLite has issues with very small intervals\n    # (by 0.001 seconds it stops incrementing the interval)\n    if history_interval < datetime.timedelta(seconds=1):\n        raise ValueError(\"History interval must not be less than 1 second.\")\n\n    # prepare run-specific models\n    if run_type == \"flow_run\":\n        run_model = db.FlowRun\n        run_filter_function = models.flow_runs._apply_flow_run_filters\n    elif run_type == \"task_run\":\n        run_model = db.TaskRun\n        run_filter_function = models.task_runs._apply_task_run_filters\n    else:\n        raise ValueError(\n            f\"Unknown run type {run_type!r}. Expected 'flow_run' or 'task_run'.\"\n        )\n\n    # create a CTE for timestamp intervals\n    intervals = db.make_timestamp_intervals(\n        history_start,\n        history_end,\n        history_interval,\n    ).cte(\"intervals\")\n\n    # apply filters to the flow runs (and related states)\n    runs = (\n        await run_filter_function(\n            sa.select(\n                run_model.id,\n                run_model.expected_start_time,\n                run_model.estimated_run_time,\n                run_model.estimated_start_time_delta,\n                run_model.state_type,\n                run_model.state_name,\n            ).select_from(run_model),\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_queues,\n        )\n    ).alias(\"runs\")\n    # outer join intervals to the filtered runs to create a dataset composed of\n    # every interval and the aggregate of all its runs. 
The runs aggregate is represented\n    # by a descriptive JSON object\n    counts = (\n        sa.select(\n            intervals.c.interval_start,\n            intervals.c.interval_end,\n            # build a JSON object, ignoring the case where the count of runs is 0\n            sa.case(\n                (sa.func.count(runs.c.id) == 0, None),\n                else_=db.build_json_object(\n                    \"state_type\",\n                    runs.c.state_type,\n                    \"state_name\",\n                    runs.c.state_name,\n                    \"count_runs\",\n                    sa.func.count(runs.c.id),\n                    # estimated run times only includes positive run times (to avoid any unexpected corner cases)\n                    \"sum_estimated_run_time\",\n                    sa.func.sum(\n                        db.greatest(0, sa.extract(\"epoch\", runs.c.estimated_run_time))\n                    ),\n                    # estimated lateness is the sum of any positive start time deltas\n                    \"sum_estimated_lateness\",\n                    sa.func.sum(\n                        db.greatest(\n                            0, sa.extract(\"epoch\", runs.c.estimated_start_time_delta)\n                        )\n                    ),\n                ),\n            ).label(\"state_agg\"),\n        )\n        .select_from(intervals)\n        .join(\n            runs,\n            sa.and_(\n                runs.c.expected_start_time >= intervals.c.interval_start,\n                runs.c.expected_start_time < intervals.c.interval_end,\n            ),\n            isouter=True,\n        )\n        .group_by(\n            intervals.c.interval_start,\n            intervals.c.interval_end,\n            runs.c.state_type,\n            runs.c.state_name,\n        )\n    ).alias(\"counts\")\n\n    # aggregate all state-aggregate objects into a single array for each interval,\n    # ensuring that intervals with no runs have an empty array\n    query = (\n        sa.select(\n            counts.c.interval_start,\n            counts.c.interval_end,\n            sa.func.coalesce(\n                db.json_arr_agg(db.cast_to_json(counts.c.state_agg)).filter(\n                    counts.c.state_agg.is_not(None)\n                ),\n                sa.text(\"'[]'\"),\n            ).label(\"states\"),\n        )\n        .group_by(counts.c.interval_start, counts.c.interval_end)\n        .order_by(counts.c.interval_start)\n        # return no more than 500 bars\n        .limit(500)\n    )\n\n    # issue the query\n    result = await session.execute(query)\n    records = result.mappings()\n\n    # load and parse the record if the database returns JSON as strings\n    if db.uses_json_strings:\n        records = [dict(r) for r in records]\n        for r in records:\n            r[\"states\"] = json.loads(r[\"states\"])\n\n    return pydantic.parse_obj_as(List[schemas.responses.HistoryResponse], list(records))\n
      ","tags":["Prefect API","flow runs","task runs","observability"]},{"location":"api-ref/server/api/saved_searches/","title":"server.api.saved_searches","text":"","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches","title":"prefect.server.api.saved_searches","text":"

      Routes for interacting with saved search objects.

      ","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.create_saved_search","title":"create_saved_search async","text":"

      Gracefully creates a new saved search from the provided schema.

      If a saved search with the same name already exists, the saved search's fields are replaced.

      Source code in prefect/server/api/saved_searches.py
      @router.put(\"/\")\nasync def create_saved_search(\n    saved_search: schemas.actions.SavedSearchCreate,\n    response: Response,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.SavedSearch:\n    \"\"\"Gracefully creates a new saved search from the provided schema.\n\n    If a saved search with the same name already exists, the saved search's fields are\n    replaced.\n    \"\"\"\n\n    # hydrate the input model into a full model\n    saved_search = schemas.core.SavedSearch(**saved_search.dict())\n\n    now = pendulum.now(\"UTC\")\n\n    async with db.session_context(begin_transaction=True) as session:\n        model = await models.saved_searches.create_saved_search(\n            session=session, saved_search=saved_search\n        )\n\n    if model.created >= now:\n        response.status_code = status.HTTP_201_CREATED\n\n    return model\n
      ","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.delete_saved_search","title":"delete_saved_search async","text":"

      Delete a saved search by id.

      Source code in prefect/server/api/saved_searches.py
      @router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_saved_search(\n    saved_search_id: UUID = Path(..., description=\"The saved search id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n    \"\"\"\n    Delete a saved search by id.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.saved_searches.delete_saved_search(\n            session=session, saved_search_id=saved_search_id\n        )\n    if not result:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Saved search not found\"\n        )\n
      ","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.read_saved_search","title":"read_saved_search async","text":"

      Get a saved search by id.

      Source code in prefect/server/api/saved_searches.py
      @router.get(\"/{id}\")\nasync def read_saved_search(\n    saved_search_id: UUID = Path(..., description=\"The saved search id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.SavedSearch:\n    \"\"\"\n    Get a saved search by id.\n    \"\"\"\n    async with db.session_context() as session:\n        saved_search = await models.saved_searches.read_saved_search(\n            session=session, saved_search_id=saved_search_id\n        )\n    if not saved_search:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Saved search not found\"\n        )\n    return saved_search\n
      ","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.read_saved_searches","title":"read_saved_searches async","text":"

      Query for saved searches.

      Source code in prefect/server/api/saved_searches.py
      @router.post(\"/filter\")\nasync def read_saved_searches(\n    limit: int = dependencies.LimitBody(),\n    offset: int = Body(0, ge=0),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.SavedSearch]:\n    \"\"\"\n    Query for saved searches.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.saved_searches.read_saved_searches(\n            session=session,\n            offset=offset,\n            limit=limit,\n        )\n
      ","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/server/","title":"server.api.server","text":"","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server","title":"prefect.server.api.server","text":"

      Defines the Prefect REST API FastAPI app.

      ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.RequestLimitMiddleware","title":"RequestLimitMiddleware","text":"

      A middleware that limits the number of concurrent requests handled by the API.

This is a blunt tool for limiting concurrent SQLite writes, which cause failures at high volume. Ideally, the limit would only apply to routes that perform writes.

      Source code in prefect/server/api/server.py
      class RequestLimitMiddleware:\n    \"\"\"\n    A middleware that limits the number of concurrent requests handled by the API.\n\n    This is a blunt tool for limiting SQLite concurrent writes which will cause failures\n    at high volume. Ideally, we would only apply the limit to routes that perform\n    writes.\n    \"\"\"\n\n    def __init__(self, app, limit: float):\n        self.app = app\n        self._limiter = anyio.CapacityLimiter(limit)\n\n    async def __call__(self, scope, receive, send) -> None:\n        async with self._limiter:\n            await self.app(scope, receive, send)\n
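The middleware is plain ASGI, so wrapping an app by hand looks like the sketch below. Prefect wires this up itself when serving the API; the app and limit value here are arbitrary and only for illustration.

from fastapi import FastAPI
from prefect.server.api.server import RequestLimitMiddleware

inner = FastAPI()

@inner.get("/ping")
async def ping():
    return "pong"

# At most 100 requests are handled concurrently; the rest wait on the limiter.
app = RequestLimitMiddleware(inner, limit=100)
# Serve with any ASGI server, e.g. `uvicorn my_module:app`.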
      ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.SPAStaticFiles","title":"SPAStaticFiles","text":"

      Bases: StaticFiles

      Implementation of StaticFiles for serving single page applications.

      Adds get_response handling to ensure that when a resource isn't found, the application still returns the index.

      Source code in prefect/server/api/server.py
      class SPAStaticFiles(StaticFiles):\n    \"\"\"\n    Implementation of `StaticFiles` for serving single page applications.\n\n    Adds `get_response` handling to ensure that when a resource isn't found the\n    application still returns the index.\n    \"\"\"\n\n    async def get_response(self, path: str, scope):\n        try:\n            return await super().get_response(path, scope)\n        except HTTPException:\n            return await super().get_response(\"./index.html\", scope)\n
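      A short sketch of mounting this class the way the server mounts its bundled UI; the directory path is a placeholder:

```python
from fastapi import FastAPI

from prefect.server.api.server import SPAStaticFiles

app = FastAPI()

# Any path that doesn't resolve to a real file falls back to index.html,
# so client-side routes survive a hard refresh. "./ui-build" is a placeholder.
app.mount("/", SPAStaticFiles(directory="./ui-build"), name="ui")
```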
      ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.create_api_app","title":"create_api_app","text":"

      Create a FastAPI app that includes the Prefect REST API.

      Parameters:

      Name Type Description Default router_prefix Optional[str]

      a prefix to apply to all included routers

      '' dependencies Optional[List[Depends]]

      a list of global dependencies to add to each Prefect REST API router

      None health_check_path str

      the health check route path

      '/health' fast_api_app_kwargs Optional[Dict[str, Any]]

      kwargs to pass to the FastAPI constructor

      None router_overrides Mapping[str, Optional[APIRouter]]

      a mapping of route prefixes (e.g. \"/admin\") to new routers, allowing the caller to override the default routers. If None is provided as a value, the default router will be dropped from the application.

      None

      Returns:

      Type Description FastAPI

      a FastAPI app that serves the Prefect REST API

      Source code in prefect/server/api/server.py
      def create_api_app(\n    router_prefix: Optional[str] = \"\",\n    dependencies: Optional[List[Depends]] = None,\n    health_check_path: str = \"/health\",\n    version_check_path: str = \"/version\",\n    fast_api_app_kwargs: Optional[Dict[str, Any]] = None,\n    router_overrides: Mapping[str, Optional[APIRouter]] = None,\n) -> FastAPI:\n    \"\"\"\n    Create a FastAPI app that includes the Prefect REST API\n\n    Args:\n        router_prefix: a prefix to apply to all included routers\n        dependencies: a list of global dependencies to add to each Prefect REST API router\n        health_check_path: the health check route path\n        fast_api_app_kwargs: kwargs to pass to the FastAPI constructor\n        router_overrides: a mapping of route prefixes (i.e. \"/admin\") to new routers\n            allowing the caller to override the default routers. If `None` is provided\n            as a value, the default router will be dropped from the application.\n\n    Returns:\n        a FastAPI app that serves the Prefect REST API\n    \"\"\"\n    fast_api_app_kwargs = fast_api_app_kwargs or {}\n    api_app = FastAPI(title=API_TITLE, **fast_api_app_kwargs)\n    api_app.add_middleware(GZipMiddleware)\n\n    @api_app.get(health_check_path, tags=[\"Root\"])\n    async def health_check():\n        return True\n\n    @api_app.get(version_check_path, tags=[\"Root\"])\n    async def orion_info():\n        return SERVER_API_VERSION\n\n    # always include version checking\n    if dependencies is None:\n        dependencies = [Depends(enforce_minimum_version)]\n    else:\n        dependencies.append(Depends(enforce_minimum_version))\n\n    routers = {router.prefix: router for router in API_ROUTERS}\n\n    if router_overrides:\n        for prefix, router in router_overrides.items():\n            # We may want to allow this behavior in the future to inject new routes, but\n            # for now this will be treated an as an exception\n            if prefix not in routers:\n                raise KeyError(\n                    \"Router override provided for prefix that does not exist:\"\n                    f\" {prefix!r}\"\n                )\n\n            # Drop the existing router\n            existing_router = routers.pop(prefix)\n\n            # Replace it with a new router if provided\n            if router is not None:\n                if prefix != router.prefix:\n                    # We may want to allow this behavior in the future, but it will\n                    # break expectations without additional routing and is banned for\n                    # now\n                    raise ValueError(\n                        f\"Router override for {prefix!r} defines a different prefix \"\n                        f\"{router.prefix!r}.\"\n                    )\n\n                existing_paths = method_paths_from_routes(existing_router.routes)\n                new_paths = method_paths_from_routes(router.routes)\n                if not existing_paths.issubset(new_paths):\n                    raise ValueError(\n                        f\"Router override for {prefix!r} is missing paths defined by \"\n                        f\"the original router: {existing_paths.difference(new_paths)}\"\n                    )\n\n                routers[prefix] = router\n\n    for router in routers.values():\n        api_app.include_router(router, prefix=router_prefix, dependencies=dependencies)\n\n    return api_app\n
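      A hedged sketch of the `router_overrides` behavior described above: passing `None` for a prefix drops that default router from the app (the "/admin" prefix is the same illustrative example used in the docstring):

```python
from prefect.server.api.server import create_api_app

# Build the REST API app without the default "/admin" router. Supplying a
# replacement APIRouter instead of None would have to cover every path the
# original router defined, or create_api_app raises a ValueError.
api_app = create_api_app(router_overrides={"/admin": None})
```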
      ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.create_app","title":"create_app","text":"

      Create a FastAPI app that includes the Prefect REST API and UI.

      Parameters:

      Name Type Description Default settings Settings

      The settings to use to create the app. If not set, settings are pulled from the context.

      None ignore_cache bool

      If set, a new application will be created even if the settings match. Otherwise, an application is returned from the cache.

      False ephemeral bool

      If set, the application will be treated as ephemeral. The UI and services will be disabled.

      False Source code in prefect/server/api/server.py
      def create_app(\n    settings: prefect.settings.Settings = None,\n    ephemeral: bool = False,\n    ignore_cache: bool = False,\n) -> FastAPI:\n    \"\"\"\n    Create an FastAPI app that includes the Prefect REST API and UI\n\n    Args:\n        settings: The settings to use to create the app. If not set, settings are pulled\n            from the context.\n        ignore_cache: If set, a new application will be created even if the settings\n            match. Otherwise, an application is returned from the cache.\n        ephemeral: If set, the application will be treated as ephemeral. The UI\n            and services will be disabled.\n    \"\"\"\n    settings = settings or prefect.settings.get_current_settings()\n    cache_key = (settings.hash_key(), ephemeral)\n\n    if cache_key in APP_CACHE and not ignore_cache:\n        return APP_CACHE[cache_key]\n\n    # TODO: Move these startup functions out of this closure into the top-level or\n    #       another dedicated location\n    async def run_migrations():\n        \"\"\"Ensure the database is created and up to date with the current migrations\"\"\"\n        if prefect.settings.PREFECT_API_DATABASE_MIGRATE_ON_START:\n            from prefect.server.database.dependencies import provide_database_interface\n\n            db = provide_database_interface()\n            await db.create_db()\n\n    @_memoize_block_auto_registration\n    async def add_block_types():\n        \"\"\"Add all registered blocks to the database\"\"\"\n        if not prefect.settings.PREFECT_API_BLOCKS_REGISTER_ON_START:\n            return\n\n        from prefect.server.database.dependencies import provide_database_interface\n        from prefect.server.models.block_registration import run_block_auto_registration\n\n        db = provide_database_interface()\n        session = await db.session()\n\n        async with session:\n            await run_block_auto_registration(session=session)\n\n    async def start_services():\n        \"\"\"Start additional services when the Prefect REST API starts up.\"\"\"\n\n        if ephemeral:\n            app.state.services = None\n            return\n\n        service_instances = []\n\n        if prefect.settings.PREFECT_API_SERVICES_SCHEDULER_ENABLED.value():\n            service_instances.append(services.scheduler.Scheduler())\n            service_instances.append(services.scheduler.RecentDeploymentsScheduler())\n\n        if prefect.settings.PREFECT_API_SERVICES_LATE_RUNS_ENABLED.value():\n            service_instances.append(services.late_runs.MarkLateRuns())\n\n        if prefect.settings.PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED.value():\n            service_instances.append(services.pause_expirations.FailExpiredPauses())\n\n        if prefect.settings.PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED.value():\n            service_instances.append(\n                services.cancellation_cleanup.CancellationCleanup()\n            )\n\n        if prefect.settings.PREFECT_SERVER_ANALYTICS_ENABLED.value():\n            service_instances.append(services.telemetry.Telemetry())\n\n        if prefect.settings.PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED.value():\n            service_instances.append(\n                services.flow_run_notifications.FlowRunNotifications()\n            )\n\n        if prefect.settings.PREFECT_API_SERVICES_FOREMAN_ENABLED.value():\n            service_instances.append(services.foreman.Foreman())\n\n        if prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value():\n      
      service_instances.append(services.task_scheduling.TaskSchedulingTimeouts())\n\n        if (\n            prefect.settings.PREFECT_EXPERIMENTAL_EVENTS.value()\n            and prefect.settings.PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED.value()\n        ):\n            service_instances.append(EventLogger())\n\n        if (\n            prefect.settings.PREFECT_EXPERIMENTAL_EVENTS.value()\n            and prefect.settings.PREFECT_API_SERVICES_TRIGGERS_ENABLED.value()\n        ):\n            service_instances.append(ReactiveTriggers())\n            service_instances.append(ProactiveTriggers())\n            service_instances.append(Actions())\n\n        if (\n            prefect.settings.PREFECT_EXPERIMENTAL_EVENTS\n            and prefect.settings.PREFECT_API_SERVICES_EVENT_PERSISTER_ENABLED\n        ):\n            service_instances.append(EventPersister())\n\n        if (\n            prefect.settings.PREFECT_EXPERIMENTAL_EVENTS\n            and prefect.settings.PREFECT_API_EVENTS_STREAM_OUT_ENABLED\n        ):\n            service_instances.append(stream.Distributor())\n\n        loop = asyncio.get_running_loop()\n\n        app.state.services = {\n            service: loop.create_task(service.start()) for service in service_instances\n        }\n\n        for service, task in app.state.services.items():\n            logger.info(f\"{service.name} service scheduled to start in-app\")\n            task.add_done_callback(partial(on_service_exit, service))\n\n    async def stop_services():\n        \"\"\"Ensure services are stopped before the Prefect REST API shuts down.\"\"\"\n        if hasattr(app.state, \"services\") and app.state.services:\n            await asyncio.gather(*[service.stop() for service in app.state.services])\n            try:\n                await asyncio.gather(\n                    *[task.stop() for task in app.state.services.values()]\n                )\n            except Exception:\n                # `on_service_exit` should handle logging exceptions on exit\n                pass\n\n    @asynccontextmanager\n    async def lifespan(app):\n        try:\n            await run_migrations()\n            await add_block_types()\n            await start_services()\n            yield\n        finally:\n            await stop_services()\n\n    def on_service_exit(service, task):\n        \"\"\"\n        Added as a callback for completion of services to log exit\n        \"\"\"\n        try:\n            # Retrieving the result will raise the exception\n            task.result()\n        except Exception:\n            logger.error(f\"{service.name} service failed!\", exc_info=True)\n        else:\n            logger.info(f\"{service.name} service stopped!\")\n\n    app = FastAPI(\n        title=TITLE,\n        version=API_VERSION,\n        lifespan=lifespan,\n    )\n    api_app = create_api_app(\n        fast_api_app_kwargs={\n            \"exception_handlers\": {\n                # NOTE: FastAPI special cases the generic `Exception` handler and\n                #       registers it as a separate middleware from the others\n                Exception: custom_internal_exception_handler,\n                RequestValidationError: validation_exception_handler,\n                sa.exc.IntegrityError: integrity_exception_handler,\n                ObjectNotFoundError: prefect_object_not_found_exception_handler,\n            }\n        },\n    )\n    ui_app = create_ui_app(ephemeral)\n\n    # middleware\n    app.add_middleware(\n        CORSMiddleware,\n        
allow_origins=[\"*\"],\n        allow_methods=[\"*\"],\n        allow_headers=[\"*\"],\n    )\n\n    # Limit the number of concurrent requests when using a SQLite database to reduce\n    # chance of errors where the database cannot be opened due to a high number of\n    # concurrent writes\n    if (\n        get_dialect(prefect.settings.PREFECT_API_DATABASE_CONNECTION_URL.value()).name\n        == \"sqlite\"\n    ):\n        app.add_middleware(RequestLimitMiddleware, limit=100)\n\n    if prefect.settings.PREFECT_SERVER_CSRF_PROTECTION_ENABLED.value():\n        app.add_middleware(api.middleware.CsrfMiddleware)\n\n    api_app.mount(\n        \"/static\",\n        StaticFiles(\n            directory=os.path.join(\n                os.path.dirname(os.path.realpath(__file__)), \"static\"\n            )\n        ),\n        name=\"static\",\n    )\n    app.api_app = api_app\n    app.mount(\"/api\", app=api_app, name=\"api\")\n    app.mount(\"/\", app=ui_app, name=\"ui\")\n\n    def openapi():\n        \"\"\"\n        Convenience method for extracting the user facing OpenAPI schema from the API app.\n\n        This method is attached to the global public app for easy access.\n        \"\"\"\n        partial_schema = get_openapi(\n            title=API_TITLE,\n            version=API_VERSION,\n            routes=api_app.routes,\n        )\n        new_schema = partial_schema.copy()\n        new_schema[\"paths\"] = {}\n        for path, value in partial_schema[\"paths\"].items():\n            new_schema[\"paths\"][f\"/api{path}\"] = value\n\n        new_schema[\"info\"][\"x-logo\"] = {\"url\": \"static/prefect-logo-mark-gradient.png\"}\n        return new_schema\n\n    app.openapi = openapi\n\n    APP_CACHE[cache_key] = app\n    return app\n
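      A minimal sketch of serving the combined app directly; the host and port are assumptions about a local setup rather than anything this module mandates:

```python
import uvicorn

from prefect.server.api.server import create_app

# Settings come from the active Prefect profile when none are passed in.
app = create_app()

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=4200)
```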
      ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.custom_internal_exception_handler","title":"custom_internal_exception_handler async","text":"

      Log a detailed exception for internal server errors before returning.

      Send 503 for errors clients can retry on.

      Source code in prefect/server/api/server.py
      async def custom_internal_exception_handler(request: Request, exc: Exception):\n    \"\"\"\n    Log a detailed exception for internal server errors before returning.\n\n    Send 503 for errors clients can retry on.\n    \"\"\"\n    logger.error(\"Encountered exception in request:\", exc_info=True)\n\n    if is_client_retryable_exception(exc):\n        return JSONResponse(\n            content={\"exception_message\": \"Service Unavailable\"},\n            status_code=status.HTTP_503_SERVICE_UNAVAILABLE,\n        )\n\n    return JSONResponse(\n        content={\"exception_message\": \"Internal Server Error\"},\n        status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,\n    )\n
      ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.integrity_exception_handler","title":"integrity_exception_handler async","text":"

      Capture database integrity errors.

      Source code in prefect/server/api/server.py
      async def integrity_exception_handler(request: Request, exc: Exception):\n    \"\"\"Capture database integrity errors.\"\"\"\n    logger.error(\"Encountered exception in request:\", exc_info=True)\n    return JSONResponse(\n        content={\n            \"detail\": (\n                \"Data integrity conflict. This usually means a \"\n                \"unique or foreign key constraint was violated. \"\n                \"See server logs for details.\"\n            )\n        },\n        status_code=status.HTTP_409_CONFLICT,\n    )\n
      ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.prefect_object_not_found_exception_handler","title":"prefect_object_not_found_exception_handler async","text":"

      Return 404 status code on object not found exceptions.

      Source code in prefect/server/api/server.py
      async def prefect_object_not_found_exception_handler(\n    request: Request, exc: ObjectNotFoundError\n):\n    \"\"\"Return 404 status code on object not found exceptions.\"\"\"\n    return JSONResponse(\n        content={\"exception_message\": str(exc)}, status_code=status.HTTP_404_NOT_FOUND\n    )\n
      ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.replace_placeholder_string_in_files","title":"replace_placeholder_string_in_files","text":"

      Recursively loops through all files in the given directory and replaces a placeholder string.

      Source code in prefect/server/api/server.py
      def replace_placeholder_string_in_files(\n    directory, placeholder, replacement, allowed_extensions=None\n):\n    \"\"\"\n    Recursively loops through all files in the given directory and replaces\n    a placeholder string.\n    \"\"\"\n    if allowed_extensions is None:\n        allowed_extensions = [\".txt\", \".html\", \".css\", \".js\", \".json\", \".txt\"]\n\n    for root, dirs, files in os.walk(directory):\n        for file in files:\n            if any(file.endswith(ext) for ext in allowed_extensions):\n                file_path = os.path.join(root, file)\n\n                with open(file_path, \"r\", encoding=\"utf-8\") as file:\n                    file_data = file.read()\n\n                file_data = file_data.replace(placeholder, replacement)\n\n                with open(file_path, \"w\", encoding=\"utf-8\") as file:\n                    file.write(file_data)\n
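      A small sketch with entirely hypothetical values, showing the helper's call shape:

```python
from prefect.server.api.server import replace_placeholder_string_in_files

# Rewrite a build-time token in every text-like file under ./build.
# Both the directory and the placeholder string here are made up for
# illustration.
replace_placeholder_string_in_files(
    directory="./build",
    placeholder="__BASE_PATH__",
    replacement="/prefect",
)
```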
      ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.validation_exception_handler","title":"validation_exception_handler async","text":"

      Provide a detailed message for request validation errors.

      Source code in prefect/server/api/server.py
      async def validation_exception_handler(request: Request, exc: RequestValidationError):\n    \"\"\"Provide a detailed message for request validation errors.\"\"\"\n    return JSONResponse(\n        status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n        content=jsonable_encoder(\n            {\n                \"exception_message\": \"Invalid request received.\",\n                \"exception_detail\": exc.errors(),\n                \"request_body\": exc.body,\n            }\n        ),\n    )\n
      ","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/task_run_states/","title":"server.api.task_run_states","text":"","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_run_states/#prefect.server.api.task_run_states","title":"prefect.server.api.task_run_states","text":"

      Routes for interacting with task run state objects.

      ","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_run_states/#prefect.server.api.task_run_states.read_task_run_state","title":"read_task_run_state async","text":"

      Get a task run state by id.

      Source code in prefect/server/api/task_run_states.py
      @router.get(\"/{id}\")\nasync def read_task_run_state(\n    task_run_state_id: UUID = Path(\n        ..., description=\"The task run state id\", alias=\"id\"\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.states.State:\n    \"\"\"\n    Get a task run state by id.\n    \"\"\"\n    async with db.session_context() as session:\n        task_run_state = await models.task_run_states.read_task_run_state(\n            session=session, task_run_state_id=task_run_state_id\n        )\n    if not task_run_state:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run state not found\"\n        )\n    return task_run_state\n
      ","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_run_states/#prefect.server.api.task_run_states.read_task_run_states","title":"read_task_run_states async","text":"

      Get states associated with a task run.

      Source code in prefect/server/api/task_run_states.py
      @router.get(\"/\")\nasync def read_task_run_states(\n    task_run_id: UUID,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.states.State]:\n    \"\"\"\n    Get states associated with a task run.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.task_run_states.read_task_run_states(\n            session=session, task_run_id=task_run_id\n        )\n
      ","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_runs/","title":"server.api.task_runs","text":"","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs","title":"prefect.server.api.task_runs","text":"

      Routes for interacting with task run objects.

      ","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.count_task_runs","title":"count_task_runs async","text":"

      Count task runs.

      Source code in prefect/server/api/task_runs.py
      @router.post(\"/count\")\nasync def count_task_runs(\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n) -> int:\n    \"\"\"\n    Count task runs.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.task_runs.count_task_runs(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n        )\n
      ","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.create_task_run","title":"create_task_run async","text":"

      Create a task run. If a task run with the same flow_run_id, task_key, and dynamic_key already exists, the existing task run will be returned.

      If no state is provided, the task run will be created in a PENDING state.

      Source code in prefect/server/api/task_runs.py
      @router.post(\"/\")\nasync def create_task_run(\n    task_run: schemas.actions.TaskRunCreate,\n    response: Response,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    orchestration_parameters: Dict[str, Any] = Depends(\n        orchestration_dependencies.provide_task_orchestration_parameters\n    ),\n) -> schemas.core.TaskRun:\n    \"\"\"\n    Create a task run. If a task run with the same flow_run_id,\n    task_key, and dynamic_key already exists, the existing task\n    run will be returned.\n\n    If no state is provided, the task run will be created in a PENDING state.\n    \"\"\"\n    # hydrate the input model into a full task run / state model\n    task_run = schemas.core.TaskRun(**task_run.dict())\n\n    if not task_run.state:\n        task_run.state = schemas.states.Pending()\n\n    now = pendulum.now(\"UTC\")\n\n    async with db.session_context(begin_transaction=True) as session:\n        model = await models.task_runs.create_task_run(\n            session=session,\n            task_run=task_run,\n            orchestration_parameters=orchestration_parameters,\n        )\n\n    if model.created >= now:\n        response.status_code = status.HTTP_201_CREATED\n\n    new_task_run: schemas.core.TaskRun = schemas.core.TaskRun.from_orm(model)\n\n    return new_task_run\n
      ","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.delete_task_run","title":"delete_task_run async","text":"

      Delete a task run by id.

      Source code in prefect/server/api/task_runs.py
      @router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_task_run(\n    task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n    \"\"\"\n    Delete a task run by id.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.task_runs.delete_task_run(\n            session=session, task_run_id=task_run_id\n        )\n    if not result:\n        raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Task not found\")\n
      ","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.read_task_run","title":"read_task_run async","text":"

      Get a task run by id.

      Source code in prefect/server/api/task_runs.py
      @router.get(\"/{id}\")\nasync def read_task_run(\n    task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.TaskRun:\n    \"\"\"\n    Get a task run by id.\n    \"\"\"\n    async with db.session_context() as session:\n        task_run = await models.task_runs.read_task_run(\n            session=session, task_run_id=task_run_id\n        )\n    if not task_run:\n        raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Task not found\")\n    return task_run\n
      ","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.read_task_runs","title":"read_task_runs async","text":"

      Query for task runs.

      Source code in prefect/server/api/task_runs.py
      @router.post(\"/filter\")\nasync def read_task_runs(\n    sort: schemas.sorting.TaskRunSort = Body(schemas.sorting.TaskRunSort.ID_DESC),\n    limit: int = dependencies.LimitBody(),\n    offset: int = Body(0, ge=0),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.TaskRun]:\n    \"\"\"\n    Query for task runs.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.task_runs.read_task_runs(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            offset=offset,\n            limit=limit,\n            sort=sort,\n        )\n
      ","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.set_task_run_state","title":"set_task_run_state async","text":"

      Set a task run state, invoking any orchestration rules.

      Source code in prefect/server/api/task_runs.py
      @router.post(\"/{id}/set_state\")\nasync def set_task_run_state(\n    task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n    state: schemas.actions.StateCreate = Body(..., description=\"The intended state.\"),\n    force: bool = Body(\n        False,\n        description=(\n            \"If false, orchestration rules will be applied that may alter or prevent\"\n            \" the state transition. If True, orchestration rules are not applied.\"\n        ),\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    response: Response = None,\n    task_policy: BaseOrchestrationPolicy = Depends(\n        orchestration_dependencies.provide_task_policy\n    ),\n    orchestration_parameters: Dict[str, Any] = Depends(\n        orchestration_dependencies.provide_task_orchestration_parameters\n    ),\n) -> OrchestrationResult:\n    \"\"\"Set a task run state, invoking any orchestration rules.\"\"\"\n\n    now = pendulum.now(\"UTC\")\n\n    # create the state\n    async with db.session_context(\n        begin_transaction=True, with_for_update=True\n    ) as session:\n        orchestration_result = await models.task_runs.set_task_run_state(\n            session=session,\n            task_run_id=task_run_id,\n            state=schemas.states.State.parse_obj(\n                state\n            ),  # convert to a full State object\n            force=force,\n            task_policy=task_policy,\n            orchestration_parameters=orchestration_parameters,\n        )\n\n    # set the 201 if a new state was created\n    if orchestration_result.state and orchestration_result.state.timestamp >= now:\n        response.status_code = status.HTTP_201_CREATED\n    else:\n        response.status_code = status.HTTP_200_OK\n\n    return orchestration_result\n
      ","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.task_run_history","title":"task_run_history async","text":"

      Query for task run history data across a given range and interval.

      Source code in prefect/server/api/task_runs.py
      @router.post(\"/history\")\nasync def task_run_history(\n    history_start: DateTimeTZ = Body(..., description=\"The history's start time.\"),\n    history_end: DateTimeTZ = Body(..., description=\"The history's end time.\"),\n    history_interval: datetime.timedelta = Body(\n        ...,\n        description=(\n            \"The size of each history interval, in seconds. Must be at least 1 second.\"\n        ),\n        alias=\"history_interval_seconds\",\n    ),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.HistoryResponse]:\n    \"\"\"\n    Query for task run history data across a given range and interval.\n    \"\"\"\n    if history_interval < datetime.timedelta(seconds=1):\n        raise HTTPException(\n            status.HTTP_422_UNPROCESSABLE_ENTITY,\n            detail=\"History interval must not be less than 1 second.\",\n        )\n\n    async with db.session_context() as session:\n        return await run_history(\n            session=session,\n            run_type=\"task_run\",\n            history_start=history_start,\n            history_end=history_end,\n            history_interval=history_interval,\n            flows=flows,\n            flow_runs=flow_runs,\n            task_runs=task_runs,\n            deployments=deployments,\n        )\n
      ","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.update_task_run","title":"update_task_run async","text":"

      Updates a task run.

      Source code in prefect/server/api/task_runs.py
      @router.patch(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def update_task_run(\n    task_run: schemas.actions.TaskRunUpdate,\n    task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n    \"\"\"\n    Updates a task run.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.task_runs.update_task_run(\n            session=session, task_run=task_run, task_run_id=task_run_id\n        )\n    if not result:\n        raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Task run not found\")\n
      ","tags":["Prefect API","task runs"]},{"location":"api-ref/server/models/csrf_token/","title":"server.models.csrf_token","text":""},{"location":"api-ref/server/models/csrf_token/#prefect.server.models.csrf_token","title":"prefect.server.models.csrf_token","text":""},{"location":"api-ref/server/models/csrf_token/#prefect.server.models.csrf_token.create_or_update_csrf_token","title":"create_or_update_csrf_token async","text":"

      Create or update a CSRF token for a client. If the client already has a token, it will be updated.

      Parameters:

      Name Type Description Default session AsyncSession

      The database session

      required client str

      The client identifier

      required

      Returns:

      Type Description CsrfToken

      core.CsrfToken: The CSRF token

      Source code in prefect/server/models/csrf_token.py
      @db_injector\nasync def create_or_update_csrf_token(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    client: str,\n) -> core.CsrfToken:\n    \"\"\"Create or update a CSRF token for a client. If the client already has a\n    token, it will be updated.\n\n    Args:\n        session (AsyncSession): The database session\n        client (str): The client identifier\n\n    Returns:\n        core.CsrfToken: The CSRF token\n    \"\"\"\n\n    expiration = (\n        datetime.now(timezone.utc)\n        + settings.PREFECT_SERVER_CSRF_TOKEN_EXPIRATION.value()\n    )\n    token = secrets.token_hex(32)\n\n    await session.execute(\n        db.insert(db.CsrfToken)\n        .values(\n            client=client,\n            token=token,\n            expiration=expiration,\n        )\n        .on_conflict_do_update(\n            index_elements=[db.CsrfToken.client],\n            set_={\"token\": token, \"expiration\": expiration},\n        ),\n    )\n\n    # Return the created / updated token object\n    return await read_token_for_client(session=session, client=client)\n
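      A sketch of calling this model function the way the API layer uses sessions elsewhere in this reference; the client identifier is a placeholder:

```python
from prefect.server.database.dependencies import provide_database_interface
from prefect.server.models import csrf_token


async def issue_csrf_token(client: str):
    # Open a transactional session and create or refresh the token for this
    # client; the upsert keys on the client column as shown above.
    db = provide_database_interface()
    async with db.session_context(begin_transaction=True) as session:
        return await csrf_token.create_or_update_csrf_token(
            session=session, client=client
        )
```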
      "},{"location":"api-ref/server/models/csrf_token/#prefect.server.models.csrf_token.delete_expired_tokens","title":"delete_expired_tokens async","text":"

      Delete expired CSRF tokens.

      Parameters:

      Name Type Description Default session AsyncSession

      The database session

      required

      Returns:

      Name Type Description int int

      The number of tokens deleted

      Source code in prefect/server/models/csrf_token.py
      @db_injector\nasync def delete_expired_tokens(db: PrefectDBInterface, session: AsyncSession) -> int:\n    \"\"\"Delete expired CSRF tokens.\n\n    Args:\n        session (AsyncSession): The database session\n\n    Returns:\n        int: The number of tokens deleted\n    \"\"\"\n\n    result = await session.execute(\n        sa.delete(db.CsrfToken).where(\n            db.CsrfToken.expiration < datetime.now(timezone.utc)\n        )\n    )\n    return result.rowcount\n
      "},{"location":"api-ref/server/models/csrf_token/#prefect.server.models.csrf_token.read_token_for_client","title":"read_token_for_client async","text":"

      Read a CSRF token for a client.

      Parameters:

      Name Type Description Default session AsyncSession

      The database session

      required client str

      The client identifier

      required

      Returns:

      Type Description Optional[CsrfToken]

      Optional[core.CsrfToken]: The CSRF token, if it exists and is not expired.

      Source code in prefect/server/models/csrf_token.py
      @db_injector\nasync def read_token_for_client(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    client: str,\n) -> Optional[core.CsrfToken]:\n    \"\"\"Read a CSRF token for a client.\n\n    Args:\n        session (AsyncSession): The database session\n        client (str): The client identifier\n\n    Returns:\n        Optional[core.CsrfToken]: The CSRF token, if it exists and is not\n            expired.\n    \"\"\"\n    token = (\n        await session.execute(\n            sa.select(db.CsrfToken).where(\n                sa.and_(\n                    db.CsrfToken.expiration > datetime.now(timezone.utc),\n                    db.CsrfToken.client == client,\n                )\n            )\n        )\n    ).scalar_one_or_none()\n\n    if token is None:\n        return None\n\n    return core.CsrfToken.from_orm(token)\n
      "},{"location":"api-ref/server/models/deployments/","title":"server.models.deployments","text":""},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments","title":"prefect.server.models.deployments","text":"

      Functions for interacting with deployment ORM objects. Intended for internal use by the Prefect REST API.

      "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.check_work_queues_for_deployment","title":"check_work_queues_for_deployment async","text":"

      Get work queues that can pick up the specified deployment.

      Work queues will pick up a deployment when all of the following are met.

      • The deployment has ALL tags that the work queue has (i.e. the work queue's tags must be a subset of the deployment's tags).
      • The work queue's specified deployment IDs match the deployment's ID, or the work queue does NOT have specified deployment IDs.
      • The work queue's specified flow runners match the deployment's flow runner or the work queue does NOT have a specified flow runner.

      Notes on the query:

      • Our database currently allows either \"null\" or empty lists as null values in filters, so we need to catch both cases with \"or\".
      • json_contains(A, B) should be interpreted as \"True if A contains B\".

      Returns:

      Type Description List[WorkQueue]

      List[db.WorkQueue]: WorkQueues

      Source code in prefect/server/models/deployments.py
      @db_injector\nasync def check_work_queues_for_deployment(\n    db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID\n) -> List[schemas.core.WorkQueue]:\n    \"\"\"\n    Get work queues that can pick up the specified deployment.\n\n    Work queues will pick up a deployment when all of the following are met.\n\n    - The deployment has ALL tags that the work queue has (i.e. the work\n    queue's tags must be a subset of the deployment's tags).\n    - The work queue's specified deployment IDs match the deployment's ID,\n    or the work queue does NOT have specified deployment IDs.\n    - The work queue's specified flow runners match the deployment's flow\n    runner or the work queue does NOT have a specified flow runner.\n\n    Notes on the query:\n\n    - Our database currently allows either \"null\" and empty lists as\n    null values in filters, so we need to catch both cases with \"or\".\n    - `json_contains(A, B)` should be interpreted as \"True if A\n    contains B\".\n\n    Returns:\n        List[db.WorkQueue]: WorkQueues\n    \"\"\"\n    deployment = await session.get(db.Deployment, deployment_id)\n    if not deployment:\n        raise ObjectNotFoundError(f\"Deployment with id {deployment_id} not found\")\n\n    query = (\n        select(db.WorkQueue)\n        # work queue tags are a subset of deployment tags\n        .filter(\n            or_(\n                json_contains(deployment.tags, db.WorkQueue.filter[\"tags\"]),\n                json_contains([], db.WorkQueue.filter[\"tags\"]),\n                json_contains(None, db.WorkQueue.filter[\"tags\"]),\n            )\n        )\n        # deployment_ids is null or contains the deployment's ID\n        .filter(\n            or_(\n                json_contains(\n                    db.WorkQueue.filter[\"deployment_ids\"],\n                    str(deployment.id),\n                ),\n                json_contains(None, db.WorkQueue.filter[\"deployment_ids\"]),\n                json_contains([], db.WorkQueue.filter[\"deployment_ids\"]),\n            )\n        )\n    )\n\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
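      A tiny illustration of the tag rule above, using plain set logic rather than the SQL query:

```python
# A work queue is eligible only when its tags are a subset of the
# deployment's tags.
deployment_tags = {"prod", "etl"}

print({"prod"} <= deployment_tags)         # True  -> queue can pick it up
print({"prod", "gpu"} <= deployment_tags)  # False -> queue is skipped
```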
      "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.count_deployments","title":"count_deployments async","text":"

      Count deployments.

      Parameters:

      Name Type Description Default session AsyncSession

      A database session

      required flow_filter FlowFilter

      only count deployments whose flows match these criteria

      None flow_run_filter FlowRunFilter

      only count deployments whose flow runs match these criteria

      None task_run_filter TaskRunFilter

      only count deployments whose task runs match these criteria

      None deployment_filter DeploymentFilter

      only count deployments that match these filters

      None work_pool_filter WorkPoolFilter

      only count deployments that match these work pool filters

      None work_queue_filter WorkQueueFilter

      only count deployments that match these work pool queue filters

      None

      Returns:

      Name Type Description int int

      the number of deployments matching filters

      Source code in prefect/server/models/deployments.py
      @db_injector\nasync def count_deployments(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    work_pool_filter: schemas.filters.WorkPoolFilter = None,\n    work_queue_filter: schemas.filters.WorkQueueFilter = None,\n) -> int:\n    \"\"\"\n    Count deployments.\n\n    Args:\n        session: A database session\n        flow_filter: only count deployments whose flows match these criteria\n        flow_run_filter: only count deployments whose flow runs match these criteria\n        task_run_filter: only count deployments whose task runs match these criteria\n        deployment_filter: only count deployment that match these filters\n        work_pool_filter: only count deployments that match these work pool filters\n        work_queue_filter: only count deployments that match these work pool queue filters\n\n    Returns:\n        int: the number of deployments matching filters\n    \"\"\"\n\n    query = select(sa.func.count(sa.text(\"*\"))).select_from(db.Deployment)\n\n    query = await _apply_deployment_filters(\n        query=query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n        work_queue_filter=work_queue_filter,\n    )\n\n    result = await session.execute(query)\n    return result.scalar()\n
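      A minimal sketch of calling this model function with no filters, using a read-only session as the route handlers above do:

```python
from prefect.server.database.dependencies import provide_database_interface
from prefect.server.models import deployments


async def total_deployments() -> int:
    # With no filters supplied, this counts every deployment in the database.
    db = provide_database_interface()
    async with db.session_context() as session:
        return await deployments.count_deployments(session=session)
```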
      "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.create_deployment","title":"create_deployment async","text":"

      Upserts a deployment.

      Parameters:

      Name Type Description Default session AsyncSession

      a database session

      required deployment Deployment

      a deployment model

      required

      Returns:

      Type Description Optional[ORMDeployment]

      db.Deployment: the newly-created or updated deployment

      Source code in prefect/server/models/deployments.py
      @db_injector\nasync def create_deployment(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    deployment: schemas.core.Deployment,\n) -> Optional[\"ORMDeployment\"]:\n    \"\"\"Upserts a deployment.\n\n    Args:\n        session: a database session\n        deployment: a deployment model\n\n    Returns:\n        db.Deployment: the newly-created or updated deployment\n\n    \"\"\"\n\n    # set `updated` manually\n    # known limitation of `on_conflict_do_update`, will not use `Column.onupdate`\n    # https://docs.sqlalchemy.org/en/14/dialects/sqlite.html#the-set-clause\n    deployment.updated = pendulum.now(\"UTC\")\n\n    schedules = deployment.schedules\n    insert_values = deployment.dict(\n        shallow=True, exclude_unset=True, exclude={\"schedules\"}\n    )\n\n    # The job_variables field in client and server schemas is named\n    # infra_overrides in the database.\n    job_variables = insert_values.pop(\"job_variables\", None)\n    if job_variables:\n        insert_values[\"infra_overrides\"] = job_variables\n\n    conflict_update_fields = deployment.dict(\n        shallow=True,\n        exclude_unset=True,\n        exclude={\"id\", \"created\", \"created_by\", \"schedules\", \"job_variables\"},\n    )\n    if job_variables:\n        conflict_update_fields[\"infra_overrides\"] = job_variables\n\n    insert_stmt = (\n        db.insert(db.Deployment)\n        .values(**insert_values)\n        .on_conflict_do_update(\n            index_elements=db.deployment_unique_upsert_columns,\n            set_={**conflict_update_fields},\n        )\n    )\n\n    await session.execute(insert_stmt)\n\n    # Get the id of the deployment we just created or updated\n    result = await session.execute(\n        sa.select(db.Deployment.id).where(\n            sa.and_(\n                db.Deployment.flow_id == deployment.flow_id,\n                db.Deployment.name == deployment.name,\n            )\n        )\n    )\n    deployment_id = result.scalar_one_or_none()\n\n    if not deployment_id:\n        return None\n\n    # Because this was possibly an upsert, we need to delete any existing\n    # schedules and any runs from the old deployment.\n\n    await _delete_scheduled_runs(\n        session=session, deployment_id=deployment_id, auto_scheduled_only=True\n    )\n\n    await delete_schedules_for_deployment(session=session, deployment_id=deployment_id)\n\n    if schedules:\n        await create_deployment_schedules(\n            session=session,\n            deployment_id=deployment_id,\n            schedules=[\n                schemas.actions.DeploymentScheduleCreate(\n                    schedule=schedule.schedule,\n                    active=schedule.active,  # type: ignore[call-arg]\n                )\n                for schedule in schedules\n            ],\n        )\n\n    query = (\n        sa.select(db.Deployment)\n        .where(\n            sa.and_(\n                db.Deployment.flow_id == deployment.flow_id,\n                db.Deployment.name == deployment.name,\n            )\n        )\n        .execution_options(populate_existing=True)\n    )\n    result = await session.execute(query)\n    return result.scalar()\n
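      A hedged sketch of the upsert behavior: calling this twice with the same flow id and name updates the existing row instead of inserting a second one. The flow id is a placeholder that must point at an existing flow, and the schema fields shown are only the minimal ones assumed here:

```python
from uuid import UUID

from prefect.server import schemas
from prefect.server.database.dependencies import provide_database_interface
from prefect.server.models import deployments


async def upsert_deployment(flow_id: UUID):
    db = provide_database_interface()
    async with db.session_context(begin_transaction=True) as session:
        # Re-running this with the same flow_id + name hits the
        # on_conflict_do_update path shown above.
        return await deployments.create_deployment(
            session=session,
            deployment=schemas.core.Deployment(name="my-deployment", flow_id=flow_id),
        )
```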
      "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.create_deployment_schedules","title":"create_deployment_schedules async","text":"

      Creates a deployment's schedules.

      Parameters:

      Name Type Description Default session AsyncSession

      A database session

      required deployment_id UUID

      a deployment id

      required schedules List[DeploymentScheduleCreate]

      a list of deployment schedule create actions

      required Source code in prefect/server/models/deployments.py
      @db_injector\nasync def create_deployment_schedules(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    deployment_id: UUID,\n    schedules: List[schemas.actions.DeploymentScheduleCreate],\n) -> List[schemas.core.DeploymentSchedule]:\n    \"\"\"\n    Creates a deployment's schedules.\n\n    Args:\n        session: A database session\n        deployment_id: a deployment id\n        schedules: a list of deployment schedule create actions\n    \"\"\"\n\n    schedules_with_deployment_id = []\n    for schedule in schedules:\n        data = schedule.dict()\n        data[\"deployment_id\"] = deployment_id\n        schedules_with_deployment_id.append(data)\n\n    models = [\n        db.DeploymentSchedule(**schedule) for schedule in schedules_with_deployment_id\n    ]\n    session.add_all(models)\n    await session.flush()\n\n    return [schemas.core.DeploymentSchedule.from_orm(m) for m in models]\n
      "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.delete_deployment","title":"delete_deployment async","text":"

      Delete a deployment by id.

      Parameters:

      Name Type Description Default session AsyncSession

      A database session

      required deployment_id UUID

      a deployment id

      required

      Returns:

      Name Type Description bool bool

      whether or not the deployment was deleted

      Source code in prefect/server/models/deployments.py
      @db_injector\nasync def delete_deployment(\n    db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID\n) -> bool:\n    \"\"\"\n    Delete a deployment by id.\n\n    Args:\n        session: A database session\n        deployment_id: a deployment id\n\n    Returns:\n        bool: whether or not the deployment was deleted\n    \"\"\"\n\n    # delete scheduled runs, both auto- and user- created.\n    await _delete_scheduled_runs(\n        session=session, deployment_id=deployment_id, auto_scheduled_only=False\n    )\n\n    result = await session.execute(\n        delete(db.Deployment).where(db.Deployment.id == deployment_id)\n    )\n    return result.rowcount > 0\n
      "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.delete_deployment_schedule","title":"delete_deployment_schedule async","text":"

      Deletes a deployment schedule.

      Parameters:

      Name Type Description Default session AsyncSession

      A database session

      required deployment_schedule_id UUID

      a deployment schedule id

      required Source code in prefect/server/models/deployments.py
      @db_injector\nasync def delete_deployment_schedule(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    deployment_id: UUID,\n    deployment_schedule_id: UUID,\n) -> bool:\n    \"\"\"\n    Deletes a deployment schedule.\n\n    Args:\n        session: A database session\n        deployment_schedule_id: a deployment schedule id\n    \"\"\"\n\n    result = await session.execute(\n        sa.delete(db.DeploymentSchedule).where(\n            sa.and_(\n                db.DeploymentSchedule.id == deployment_schedule_id,\n                db.DeploymentSchedule.deployment_id == deployment_id,\n            )\n        )\n    )\n\n    return result.rowcount > 0\n
      "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.delete_schedules_for_deployment","title":"delete_schedules_for_deployment async","text":"

      Deletes all schedules for a deployment.

      Parameters:

      Name Type Description Default session AsyncSession

      A database session

      required deployment_id UUID

      a deployment id

      required Source code in prefect/server/models/deployments.py
      @db_injector\nasync def delete_schedules_for_deployment(\n    db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID\n) -> bool:\n    \"\"\"\n    Deletes a deployment schedule.\n\n    Args:\n        session: A database session\n        deployment_id: a deployment id\n    \"\"\"\n\n    result = await session.execute(\n        sa.delete(db.DeploymentSchedule).where(\n            db.DeploymentSchedule.deployment_id == deployment_id\n        )\n    )\n\n    return result.rowcount > 0\n
      "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.read_deployment","title":"read_deployment async","text":"

      Reads a deployment by id.

      Parameters:

      Name Type Description Default session AsyncSession

      A database session

      required deployment_id UUID

      a deployment id

      required

      Returns:

      Type Description Optional[ORMDeployment]

      db.Deployment: the deployment

      Source code in prefect/server/models/deployments.py
      @db_injector\nasync def read_deployment(\n    db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID\n) -> Optional[\"ORMDeployment\"]:\n    \"\"\"Reads a deployment by id.\n\n    Args:\n        session: A database session\n        deployment_id: a deployment id\n\n    Returns:\n        db.Deployment: the deployment\n    \"\"\"\n\n    return await session.get(db.Deployment, deployment_id)\n
      "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.read_deployment_by_name","title":"read_deployment_by_name async","text":"

      Reads a deployment by name.

      Parameters:

      Name Type Description Default session AsyncSession

      A database session

      required name str

      a deployment name

      required flow_name str

      the name of the flow the deployment belongs to

      required

      Returns:

      Type Description Optional[ORMDeployment]

      db.Deployment: the deployment

      Source code in prefect/server/models/deployments.py
      @db_injector\nasync def read_deployment_by_name(\n    db: PrefectDBInterface, session: AsyncSession, name: str, flow_name: str\n) -> Optional[\"ORMDeployment\"]:\n    \"\"\"Reads a deployment by name.\n\n    Args:\n        session: A database session\n        name: a deployment name\n        flow_name: the name of the flow the deployment belongs to\n\n    Returns:\n        db.Deployment: the deployment\n    \"\"\"\n\n    result = await session.execute(\n        select(db.Deployment)\n        .join(db.Flow, db.Deployment.flow_id == db.Flow.id)\n        .where(\n            sa.and_(\n                db.Flow.name == flow_name,\n                db.Deployment.name == name,\n            )\n        )\n        .limit(1)\n    )\n    return result.scalar()\n
      "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.read_deployment_schedules","title":"read_deployment_schedules async","text":"

      Reads a deployment's schedules.

      Parameters:

      Name Type Description Default session AsyncSession

      A database session

      required deployment_id UUID

      a deployment id

      required

      Returns:

      Type Description List[DeploymentSchedule]

      list[schemas.core.DeploymentSchedule]: the deployment's schedules

      Source code in prefect/server/models/deployments.py
      @db_injector\nasync def read_deployment_schedules(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    deployment_id: UUID,\n    deployment_schedule_filter: Optional[\n        schemas.filters.DeploymentScheduleFilter\n    ] = None,\n) -> List[schemas.core.DeploymentSchedule]:\n    \"\"\"\n    Reads a deployment's schedules.\n\n    Args:\n        session: A database session\n        deployment_id: a deployment id\n\n    Returns:\n        list[schemas.core.DeploymentSchedule]: the deployment's schedules\n    \"\"\"\n\n    query = (\n        sa.select(db.DeploymentSchedule)\n        .where(db.DeploymentSchedule.deployment_id == deployment_id)\n        .order_by(db.DeploymentSchedule.updated.desc())\n    )\n\n    if deployment_schedule_filter:\n        query = query.where(deployment_schedule_filter.as_sql_filter(db))\n\n    result = await session.execute(query)\n\n    return [schemas.core.DeploymentSchedule.from_orm(s) for s in result.scalars().all()]\n
      "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.read_deployments","title":"read_deployments async","text":"

      Read deployments.

      Parameters:

      Name Type Description Default session AsyncSession

      A database session

      required offset int

      Query offset

      None limit int

      Query limit

      None flow_filter FlowFilter

      only select deployments whose flows match these criteria

      None flow_run_filter FlowRunFilter

      only select deployments whose flow runs match these criteria

      None task_run_filter TaskRunFilter

      only select deployments whose task runs match these criteria

      None deployment_filter DeploymentFilter

      only select deployments that match these filters

      None work_pool_filter WorkPoolFilter

      only select deployments whose work pools match these criteria

      None work_queue_filter WorkQueueFilter

      only select deployments whose work pool queues match these criteria

      None sort DeploymentSort

      the sort criteria for selected deployments. Defaults to name ASC.

      NAME_ASC

      Returns:

      Type Description Sequence[ORMDeployment]

      List[db.Deployment]: deployments

      Source code in prefect/server/models/deployments.py
      @db_injector\nasync def read_deployments(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    offset: int = None,\n    limit: int = None,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    work_pool_filter: schemas.filters.WorkPoolFilter = None,\n    work_queue_filter: schemas.filters.WorkQueueFilter = None,\n    sort: schemas.sorting.DeploymentSort = schemas.sorting.DeploymentSort.NAME_ASC,\n) -> Sequence[\"ORMDeployment\"]:\n    \"\"\"\n    Read deployments.\n\n    Args:\n        session: A database session\n        offset: Query offset\n        limit: Query limit\n        flow_filter: only select deployments whose flows match these criteria\n        flow_run_filter: only select deployments whose flow runs match these criteria\n        task_run_filter: only select deployments whose task runs match these criteria\n        deployment_filter: only select deployment that match these filters\n        work_pool_filter: only select deployments whose work pools match these criteria\n        work_queue_filter: only select deployments whose work pool queues match these criteria\n        sort: the sort criteria for selected deployments. Defaults to `name` ASC.\n\n    Returns:\n        List[db.Deployment]: deployments\n    \"\"\"\n\n    query = select(db.Deployment).order_by(sort.as_sql_sort(db=db))\n\n    query = await _apply_deployment_filters(\n        query=query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n        work_queue_filter=work_queue_filter,\n    )\n\n    if offset is not None:\n        query = query.offset(offset)\n    if limit is not None:\n        query = query.limit(limit)\n\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
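A hedged usage sketch: paging through deployments with the documented offset, limit, and sort arguments, assuming an open AsyncSession. Filter arguments are omitted, so all deployments are eligible.

```python
from sqlalchemy.ext.asyncio import AsyncSession

from prefect.server import schemas
from prefect.server.models import deployments


async def list_first_page(session: AsyncSession) -> None:
    # Read the first 25 deployments, sorted by name ascending (the default sort).
    page = await deployments.read_deployments(
        session=session,
        offset=0,
        limit=25,
        sort=schemas.sorting.DeploymentSort.NAME_ASC,
    )
    for deployment in page:
        print(deployment.name)
```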
      "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.schedule_runs","title":"schedule_runs async","text":"

      Schedule flow runs for a deployment

      Parameters:

      Name Type Description Default session AsyncSession

      a database session

      required deployment_id UUID

      the id of the deployment to schedule

      required start_time datetime

      the time from which to start scheduling runs

      None end_time datetime

      runs will be scheduled until at most this time

      None min_time timedelta

      runs will be scheduled until at least this far in the future

      None min_runs int

a minimum number of runs to schedule

      None max_runs int

a maximum number of runs to schedule

      None

      This function will generate the minimum number of runs that satisfy the min and max times, and the min and max counts. Specifically, the following order will be respected.

- Runs will be generated starting on or after the `start_time`
- No more than `max_runs` runs will be generated
- No runs will be generated after `end_time` is reached
- At least `min_runs` runs will be generated
- Runs will be generated until at least `start_time` + `min_time` is reached

      Returns:

      Type Description List[UUID]

      a list of flow run ids scheduled for the deployment

      Source code in prefect/server/models/deployments.py
      async def schedule_runs(\n    session: AsyncSession,\n    deployment_id: UUID,\n    start_time: datetime.datetime = None,\n    end_time: datetime.datetime = None,\n    min_time: datetime.timedelta = None,\n    min_runs: int = None,\n    max_runs: int = None,\n    auto_scheduled: bool = True,\n) -> List[UUID]:\n    \"\"\"\n    Schedule flow runs for a deployment\n\n    Args:\n        session: a database session\n        deployment_id: the id of the deployment to schedule\n        start_time: the time from which to start scheduling runs\n        end_time: runs will be scheduled until at most this time\n        min_time: runs will be scheduled until at least this far in the future\n        min_runs: a minimum amount of runs to schedule\n        max_runs: a maximum amount of runs to schedule\n\n    This function will generate the minimum number of runs that satisfy the min\n    and max times, and the min and max counts. Specifically, the following order\n    will be respected.\n\n        - Runs will be generated starting on or after the `start_time`\n        - No more than `max_runs` runs will be generated\n        - No runs will be generated after `end_time` is reached\n        - At least `min_runs` runs will be generated\n        - Runs will be generated until at least `start_time` + `min_time` is reached\n\n    Returns:\n        a list of flow run ids scheduled for the deployment\n    \"\"\"\n    if min_runs is None:\n        min_runs = PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS.value()\n    if max_runs is None:\n        max_runs = PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS.value()\n    if start_time is None:\n        start_time = pendulum.now(\"UTC\")\n    if end_time is None:\n        end_time = start_time + (\n            PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME.value()\n        )\n    if min_time is None:\n        min_time = PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME.value()\n\n    start_time = pendulum.instance(start_time)\n    end_time = pendulum.instance(end_time)\n\n    runs = await _generate_scheduled_flow_runs(\n        session=session,\n        deployment_id=deployment_id,\n        start_time=start_time,\n        end_time=end_time,\n        min_time=min_time,\n        min_runs=min_runs,\n        max_runs=max_runs,\n        auto_scheduled=auto_scheduled,\n    )\n    return await _insert_scheduled_flow_runs(session=session, runs=runs)\n
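The sketch below illustrates the scheduling window described above, assuming an open AsyncSession and a known deployment id; arguments left as None fall back to the PREFECT_API_SERVICES_SCHEDULER_* settings.

```python
from datetime import timedelta
from uuid import UUID

import pendulum
from sqlalchemy.ext.asyncio import AsyncSession

from prefect.server.models import deployments


async def schedule_next_day(session: AsyncSession, deployment_id: UUID) -> None:
    # Generate at most 10 auto-scheduled runs over the next 24 hours.
    now = pendulum.now("UTC")
    run_ids = await deployments.schedule_runs(
        session=session,
        deployment_id=deployment_id,
        start_time=now,
        end_time=now + timedelta(days=1),
        max_runs=10,
    )
    print(f"scheduled {len(run_ids)} flow runs")
```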
      "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.update_deployment","title":"update_deployment async","text":"

      Updates a deployment.

      Parameters:

      Name Type Description Default session AsyncSession

      a database session

      required deployment_id UUID

      the ID of the deployment to modify

      required deployment DeploymentUpdate

      changes to a deployment model

      required

      Returns:

      Name Type Description bool bool

      whether the deployment was updated

      Source code in prefect/server/models/deployments.py
      @db_injector\nasync def update_deployment(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    deployment_id: UUID,\n    deployment: schemas.actions.DeploymentUpdate,\n) -> bool:\n    \"\"\"Updates a deployment.\n\n    Args:\n        session: a database session\n        deployment_id: the ID of the deployment to modify\n        deployment: changes to a deployment model\n\n    Returns:\n        bool: whether the deployment was updated\n\n    \"\"\"\n\n    from prefect.server.api.workers import WorkerLookups\n\n    schedules = deployment.schedules\n\n    # exclude_unset=True allows us to only update values provided by\n    # the user, ignoring any defaults on the model\n    update_data = deployment.dict(\n        shallow=True,\n        exclude_unset=True,\n        exclude={\"work_pool_name\"},\n    )\n\n    # The job_variables field in client and server schemas is named\n    # infra_overrides in the database.\n    job_variables = update_data.pop(\"job_variables\", None)\n    if job_variables:\n        update_data[\"infra_overrides\"] = job_variables\n\n    should_update_schedules = update_data.pop(\"schedules\", None) is not None\n\n    if deployment.work_pool_name and deployment.work_queue_name:\n        # If a specific pool name/queue name combination was provided, get the\n        # ID for that work pool queue.\n        update_data[\n            \"work_queue_id\"\n        ] = await WorkerLookups()._get_work_queue_id_from_name(\n            session=session,\n            work_pool_name=deployment.work_pool_name,\n            work_queue_name=deployment.work_queue_name,\n            create_queue_if_not_found=True,\n        )\n    elif deployment.work_pool_name:\n        # If just a pool name was provided, get the ID for its default\n        # work pool queue.\n        update_data[\n            \"work_queue_id\"\n        ] = await WorkerLookups()._get_default_work_queue_id_from_work_pool_name(\n            session=session,\n            work_pool_name=deployment.work_pool_name,\n        )\n    elif deployment.work_queue_name:\n        # If just a queue name was provided, ensure the queue exists and\n        # get its ID.\n        work_queue = await models.work_queues.ensure_work_queue_exists(\n            session=session, name=update_data[\"work_queue_name\"]\n        )\n        update_data[\"work_queue_id\"] = work_queue.id\n\n    if \"is_schedule_active\" in update_data:\n        update_data[\"paused\"] = not update_data[\"is_schedule_active\"]\n\n    update_stmt = (\n        sa.update(db.Deployment)\n        .where(db.Deployment.id == deployment_id)\n        .values(**update_data)\n    )\n    result = await session.execute(update_stmt)\n\n    # delete any auto scheduled runs that would have reflected the old deployment config\n    await _delete_scheduled_runs(\n        session=session, deployment_id=deployment_id, auto_scheduled_only=True\n    )\n\n    if should_update_schedules:\n        # If schedules were provided, remove the existing schedules and\n        # replace them with the new ones.\n        await delete_schedules_for_deployment(\n            session=session, deployment_id=deployment_id\n        )\n        await create_deployment_schedules(\n            session=session,\n            deployment_id=deployment_id,\n            schedules=[\n                schemas.actions.DeploymentScheduleCreate(\n                    schedule=schedule.schedule,\n                    active=schedule.active,  # type: ignore[call-arg]\n                )\n                for schedule in 
schedules\n            ],\n        )\n\n    return result.rowcount > 0\n
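As a sketch (the pool and queue names are placeholders), the following shows a partial update: only the fields set on the DeploymentUpdate model are written, matching the exclude_unset=True behavior in the implementation above.

```python
from uuid import UUID

from sqlalchemy.ext.asyncio import AsyncSession

from prefect.server import schemas
from prefect.server.models import deployments


async def move_to_default_queue(session: AsyncSession, deployment_id: UUID) -> bool:
    update = schemas.actions.DeploymentUpdate(
        work_pool_name="my-pool",   # placeholder work pool name
        work_queue_name="default",  # placeholder work queue name
    )
    # Returns True when a matching deployment row was updated.
    return await deployments.update_deployment(
        session=session, deployment_id=deployment_id, deployment=update
    )
```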
      "},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.update_deployment_schedule","title":"update_deployment_schedule async","text":"

Updates one of a deployment's schedules.

      Parameters:

      Name Type Description Default session AsyncSession

      A database session

      required deployment_schedule_id UUID

      a deployment schedule id

      required schedule DeploymentScheduleUpdate

      a deployment schedule update action

      required Source code in prefect/server/models/deployments.py
      @db_injector\nasync def update_deployment_schedule(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    deployment_id: UUID,\n    deployment_schedule_id: UUID,\n    schedule: schemas.actions.DeploymentScheduleUpdate,\n) -> bool:\n    \"\"\"\n    Updates a deployment's schedules.\n\n    Args:\n        session: A database session\n        deployment_schedule_id: a deployment schedule id\n        schedule: a deployment schedule update action\n    \"\"\"\n\n    result = await session.execute(\n        sa.update(db.DeploymentSchedule)\n        .where(\n            sa.and_(\n                db.DeploymentSchedule.id == deployment_schedule_id,\n                db.DeploymentSchedule.deployment_id == deployment_id,\n            )\n        )\n        .values(**schedule.dict(exclude_none=True))\n    )\n\n    return result.rowcount > 0\n
      "},{"location":"api-ref/server/models/flow_run_states/","title":"server.models.flow_run_states","text":""},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states","title":"prefect.server.models.flow_run_states","text":"

      Functions for interacting with flow run state ORM objects. Intended for internal use by the Prefect REST API.

      "},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states.delete_flow_run_state","title":"delete_flow_run_state async","text":"

      Delete a flow run state by id.

      Parameters:

      Name Type Description Default session Session

      A database session

      required flow_run_state_id UUID

      a flow run state id

      required

      Returns:

      Name Type Description bool bool

      whether or not the flow run state was deleted

      Source code in prefect/server/models/flow_run_states.py
      @inject_db\nasync def delete_flow_run_state(\n    session: sa.orm.Session, flow_run_state_id: UUID, db: PrefectDBInterface\n) -> bool:\n    \"\"\"\n    Delete a flow run state by id.\n\n    Args:\n        session: A database session\n        flow_run_state_id: a flow run state id\n\n    Returns:\n        bool: whether or not the flow run state was deleted\n    \"\"\"\n\n    result = await session.execute(\n        delete(db.FlowRunState).where(db.FlowRunState.id == flow_run_state_id)\n    )\n    return result.rowcount > 0\n
      "},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states.read_flow_run_state","title":"read_flow_run_state async","text":"

      Reads a flow run state by id.

      Parameters:

      Name Type Description Default session Session

      A database session

      required flow_run_state_id UUID

      a flow run state id

      required

      Returns:

      Type Description

      db.FlowRunState: the flow state

      Source code in prefect/server/models/flow_run_states.py
      @inject_db\nasync def read_flow_run_state(\n    session: sa.orm.Session, flow_run_state_id: UUID, db: PrefectDBInterface\n):\n    \"\"\"\n    Reads a flow run state by id.\n\n    Args:\n        session: A database session\n        flow_run_state_id: a flow run state id\n\n    Returns:\n        db.FlowRunState: the flow state\n    \"\"\"\n\n    return await session.get(db.FlowRunState, flow_run_state_id)\n
      "},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states.read_flow_run_states","title":"read_flow_run_states async","text":"

Reads flow run states for a flow run.

      Parameters:

      Name Type Description Default session Session

      A database session

      required flow_run_id UUID

      the flow run id

      required

      Returns:

      Type Description

      List[db.FlowRunState]: the flow run states

      Source code in prefect/server/models/flow_run_states.py
      @inject_db\nasync def read_flow_run_states(\n    session: sa.orm.Session, flow_run_id: UUID, db: PrefectDBInterface\n):\n    \"\"\"\n    Reads flow runs states for a flow run.\n\n    Args:\n        session: A database session\n        flow_run_id: the flow run id\n\n    Returns:\n        List[db.FlowRunState]: the flow run states\n    \"\"\"\n\n    query = (\n        select(db.FlowRunState)\n        .filter_by(flow_run_id=flow_run_id)\n        .order_by(db.FlowRunState.timestamp)\n    )\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
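A minimal sketch, assuming an open AsyncSession: because the query orders by timestamp, the returned list reads as the run's state history from oldest to newest.

```python
from uuid import UUID

from sqlalchemy.ext.asyncio import AsyncSession

from prefect.server.models import flow_run_states


async def print_state_history(session: AsyncSession, flow_run_id: UUID) -> None:
    states = await flow_run_states.read_flow_run_states(
        session=session, flow_run_id=flow_run_id
    )
    for state in states:
        # Each row is a FlowRunState ORM object ordered by its timestamp.
        print(state.timestamp, state.type, state.name)
```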
      "},{"location":"api-ref/server/models/flow_runs/","title":"server.models.flow_runs","text":""},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs","title":"prefect.server.models.flow_runs","text":"

      Functions for interacting with flow run ORM objects. Intended for internal use by the Prefect REST API.

      "},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.count_flow_runs","title":"count_flow_runs async","text":"

      Count flow runs.

      Parameters:

      Name Type Description Default session AsyncSession

      a database session

      required flow_filter FlowFilter

      only count flow runs whose flows match these filters

      None flow_run_filter FlowRunFilter

      only count flow runs that match these filters

      None task_run_filter TaskRunFilter

      only count flow runs whose task runs match these filters

      None deployment_filter DeploymentFilter

      only count flow runs whose deployments match these filters

      None

      Returns:

      Name Type Description int int

      count of flow runs

      Source code in prefect/server/models/flow_runs.py
      @db_injector\nasync def count_flow_runs(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    work_pool_filter: schemas.filters.WorkPoolFilter = None,\n    work_queue_filter: schemas.filters.WorkQueueFilter = None,\n) -> int:\n    \"\"\"\n    Count flow runs.\n\n    Args:\n        session: a database session\n        flow_filter: only count flow runs whose flows match these filters\n        flow_run_filter: only count flow runs that match these filters\n        task_run_filter: only count flow runs whose task runs match these filters\n        deployment_filter: only count flow runs whose deployments match these filters\n\n    Returns:\n        int: count of flow runs\n    \"\"\"\n\n    query = select(sa.func.count(sa.text(\"*\"))).select_from(db.FlowRun)\n\n    query = await _apply_flow_run_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n        work_queue_filter=work_queue_filter,\n    )\n\n    result = await session.execute(query)\n    return result.scalar()\n
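An illustrative sketch of counting with a filter, reusing the FlowRunFilterId(any_=...) construction that appears elsewhere in this module; all filter arguments are optional and combine with AND.

```python
from typing import List
from uuid import UUID

from sqlalchemy.ext.asyncio import AsyncSession

from prefect.server import schemas
from prefect.server.models import flow_runs


async def count_specific_runs(session: AsyncSession, run_ids: List[UUID]) -> int:
    # Count only the flow runs whose ids appear in run_ids.
    return await flow_runs.count_flow_runs(
        session=session,
        flow_run_filter=schemas.filters.FlowRunFilter(
            id=schemas.filters.FlowRunFilterId(any_=run_ids)
        ),
    )
```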
      "},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.create_flow_run","title":"create_flow_run async","text":"

      Creates a new flow run.

      If the provided flow run has a state attached, it will also be created.

      Parameters:

      Name Type Description Default session AsyncSession

      a database session

      required flow_run FlowRun

      a flow run model

      required

      Returns:

      Type Description ORMFlowRun

      db.FlowRun: the newly-created flow run

      Source code in prefect/server/models/flow_runs.py
      @db_injector\nasync def create_flow_run(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    flow_run: schemas.core.FlowRun,\n    orchestration_parameters: Optional[dict] = None,\n) -> \"ORMFlowRun\":\n    \"\"\"Creates a new flow run.\n\n    If the provided flow run has a state attached, it will also be created.\n\n    Args:\n        session: a database session\n        flow_run: a flow run model\n\n    Returns:\n        db.FlowRun: the newly-created flow run\n    \"\"\"\n    now = pendulum.now(\"UTC\")\n\n    flow_run_dict = dict(\n        **flow_run.dict(\n            shallow=True,\n            exclude={\n                \"created\",\n                \"state\",\n                \"estimated_run_time\",\n                \"estimated_start_time_delta\",\n            },\n            exclude_unset=True,\n        ),\n        created=now,\n    )\n\n    # if no idempotency key was provided, create the run directly\n    if not flow_run.idempotency_key:\n        model = db.FlowRun(**flow_run_dict)\n        session.add(model)\n        await session.flush()\n\n    # otherwise let the database take care of enforcing idempotency\n    else:\n        insert_stmt = (\n            db.insert(db.FlowRun)\n            .values(**flow_run_dict)\n            .on_conflict_do_nothing(\n                index_elements=db.flow_run_unique_upsert_columns,\n            )\n        )\n        await session.execute(insert_stmt)\n\n        # read the run to see if idempotency was applied or not\n        query = (\n            sa.select(db.FlowRun)\n            .where(\n                sa.and_(\n                    db.FlowRun.flow_id == flow_run.flow_id,\n                    db.FlowRun.idempotency_key == flow_run.idempotency_key,\n                )\n            )\n            .limit(1)\n            .execution_options(populate_existing=True)\n            .options(\n                selectinload(db.FlowRun.work_queue).selectinload(db.WorkQueue.work_pool)\n            )\n        )\n        result = await session.execute(query)\n        model = result.scalar()\n\n    # if the flow run was created in this function call then we need to set the\n    # state. If it was created idempotently, the created time won't match.\n    if model.created == now and flow_run.state:\n        await models.flow_runs.set_flow_run_state(\n            session=session,\n            flow_run_id=model.id,\n            state=flow_run.state,\n            force=True,\n            orchestration_parameters=orchestration_parameters,\n        )\n    return model\n
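A sketch of the idempotent path described above: because an idempotency key is set (the key value here is a placeholder), a second call with the same key returns the existing row instead of inserting a duplicate.

```python
from uuid import UUID

from sqlalchemy.ext.asyncio import AsyncSession

from prefect.server import schemas
from prefect.server.models import flow_runs


async def create_run_once(session: AsyncSession, flow_id: UUID):
    run = schemas.core.FlowRun(
        flow_id=flow_id,
        idempotency_key="nightly-load",  # placeholder idempotency key
    )
    # Re-running this with the same key hits the on_conflict_do_nothing branch
    # and returns the previously created flow run.
    return await flow_runs.create_flow_run(session=session, flow_run=run)
```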
      "},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.delete_flow_run","title":"delete_flow_run async","text":"

      Delete a flow run by flow_run_id.

      Parameters:

      Name Type Description Default session AsyncSession

      A database session

      required flow_run_id UUID

      a flow run id

      required

      Returns:

      Name Type Description bool bool

      whether or not the flow run was deleted

      Source code in prefect/server/models/flow_runs.py
      @db_injector\nasync def delete_flow_run(\n    db: PrefectDBInterface, session: AsyncSession, flow_run_id: UUID\n) -> bool:\n    \"\"\"\n    Delete a flow run by flow_run_id.\n\n    Args:\n        session: A database session\n        flow_run_id: a flow run id\n\n    Returns:\n        bool: whether or not the flow run was deleted\n    \"\"\"\n\n    result = await session.execute(\n        delete(db.FlowRun).where(db.FlowRun.id == flow_run_id)\n    )\n    return result.rowcount > 0\n
      "},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.read_flow_run","title":"read_flow_run async","text":"

      Reads a flow run by id.

      Parameters:

      Name Type Description Default session AsyncSession

      A database session

      required flow_run_id UUID

      a flow run id

      required

      Returns:

      Type Description Optional[ORMFlowRun]

      db.FlowRun: the flow run

      Source code in prefect/server/models/flow_runs.py
      @db_injector\nasync def read_flow_run(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    flow_run_id: UUID,\n    for_update: bool = False,\n) -> Optional[\"ORMFlowRun\"]:\n    \"\"\"\n    Reads a flow run by id.\n\n    Args:\n        session: A database session\n        flow_run_id: a flow run id\n\n    Returns:\n        db.FlowRun: the flow run\n    \"\"\"\n    select = (\n        sa.select(db.FlowRun)\n        .where(db.FlowRun.id == flow_run_id)\n        .options(\n            selectinload(db.FlowRun.work_queue).selectinload(db.WorkQueue.work_pool)\n        )\n    )\n\n    if for_update:\n        select = select.with_for_update()\n\n    result = await session.execute(select)\n    return result.scalar()\n
      "},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.read_flow_run_graph","title":"read_flow_run_graph async","text":"

Given a flow run, return the graph of its task and subflow runs. If a since datetime is provided, only return items that may have changed since that time.

      Source code in prefect/server/models/flow_runs.py
      @db_injector\nasync def read_flow_run_graph(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    flow_run_id: UUID,\n    since: datetime.datetime = datetime.datetime.min,\n) -> Graph:\n    \"\"\"Given a flow run, return the graph of it's task and subflow runs. If a `since`\n    datetime is provided, only return items that may have changed since that time.\"\"\"\n    return await db.queries.flow_run_graph_v2(\n        db=db,\n        session=session,\n        flow_run_id=flow_run_id,\n        since=since,\n        max_nodes=PREFECT_API_MAX_FLOW_RUN_GRAPH_NODES.value(),\n        max_artifacts=PREFECT_API_MAX_FLOW_RUN_GRAPH_ARTIFACTS.value(),\n    )\n
      "},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.read_flow_runs","title":"read_flow_runs async","text":"

      Read flow runs.

      Parameters:

      Name Type Description Default session AsyncSession

      a database session

      required columns List

      a list of the flow run ORM columns to load, for performance

      None flow_filter FlowFilter

      only select flow runs whose flows match these filters

      None flow_run_filter FlowRunFilter

only select flow runs that match these filters

      None task_run_filter TaskRunFilter

      only select flow runs whose task runs match these filters

      None deployment_filter DeploymentFilter

      only select flow runs whose deployments match these filters

      None offset int

      Query offset

      None limit int

      Query limit

      None sort FlowRunSort

      Query sort

      ID_DESC

      Returns:

      Type Description Sequence[ORMFlowRun]

      List[db.FlowRun]: flow runs

      Source code in prefect/server/models/flow_runs.py
      @db_injector\nasync def read_flow_runs(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    columns: List = None,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    work_pool_filter: schemas.filters.WorkPoolFilter = None,\n    work_queue_filter: schemas.filters.WorkQueueFilter = None,\n    offset: int = None,\n    limit: int = None,\n    sort: schemas.sorting.FlowRunSort = schemas.sorting.FlowRunSort.ID_DESC,\n) -> Sequence[\"ORMFlowRun\"]:\n    \"\"\"\n    Read flow runs.\n\n    Args:\n        session: a database session\n        columns: a list of the flow run ORM columns to load, for performance\n        flow_filter: only select flow runs whose flows match these filters\n        flow_run_filter: only select flow runs match these filters\n        task_run_filter: only select flow runs whose task runs match these filters\n        deployment_filter: only select flow runs whose deployments match these filters\n        offset: Query offset\n        limit: Query limit\n        sort: Query sort\n\n    Returns:\n        List[db.FlowRun]: flow runs\n    \"\"\"\n    query = (\n        select(db.FlowRun)\n        .order_by(sort.as_sql_sort(db))\n        .options(\n            selectinload(db.FlowRun.work_queue).selectinload(db.WorkQueue.work_pool)\n        )\n    )\n\n    if columns:\n        query = query.options(load_only(*columns))\n\n    query = await _apply_flow_run_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n        work_queue_filter=work_queue_filter,\n    )\n\n    if offset is not None:\n        query = query.offset(offset)\n\n    if limit is not None:\n        query = query.limit(limit)\n\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
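A brief sketch, assuming an open AsyncSession: fetch the ten most recently created flow runs using the default ID_DESC sort.

```python
from sqlalchemy.ext.asyncio import AsyncSession

from prefect.server import schemas
from prefect.server.models import flow_runs


async def latest_runs(session: AsyncSession):
    # The columns argument is omitted, so full ORM rows are loaded.
    return await flow_runs.read_flow_runs(
        session=session,
        limit=10,
        sort=schemas.sorting.FlowRunSort.ID_DESC,
    )
```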
      "},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.read_task_run_dependencies","title":"read_task_run_dependencies async","text":"

      Get a task run dependency map for a given flow run.

      Source code in prefect/server/models/flow_runs.py
      async def read_task_run_dependencies(\n    session: AsyncSession,\n    flow_run_id: UUID,\n) -> List[DependencyResult]:\n    \"\"\"\n    Get a task run dependency map for a given flow run.\n    \"\"\"\n    flow_run = await models.flow_runs.read_flow_run(\n        session=session, flow_run_id=flow_run_id\n    )\n    if not flow_run:\n        raise ObjectNotFoundError(f\"Flow run with id {flow_run_id} not found\")\n\n    task_runs = await models.task_runs.read_task_runs(\n        session=session,\n        flow_run_filter=schemas.filters.FlowRunFilter(\n            id=schemas.filters.FlowRunFilterId(any_=[flow_run_id])\n        ),\n    )\n\n    dependency_graph = []\n\n    for task_run in task_runs:\n        inputs = list(set(chain(*task_run.task_inputs.values())))\n        untrackable_result_status = (\n            False\n            if task_run.state is None\n            else task_run.state.state_details.untrackable_result\n        )\n        dependency_graph.append(\n            {\n                \"id\": task_run.id,\n                \"upstream_dependencies\": inputs,\n                \"state\": task_run.state,\n                \"expected_start_time\": task_run.expected_start_time,\n                \"name\": task_run.name,\n                \"start_time\": task_run.start_time,\n                \"end_time\": task_run.end_time,\n                \"total_run_time\": task_run.total_run_time,\n                \"estimated_run_time\": task_run.estimated_run_time,\n                \"untrackable_result\": untrackable_result_status,\n            }\n        )\n\n    return dependency_graph\n
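A small sketch of consuming the dependency map; each entry is a dictionary with the keys shown in the implementation above, and the call raises ObjectNotFoundError if the flow run does not exist.

```python
from uuid import UUID

from sqlalchemy.ext.asyncio import AsyncSession

from prefect.server.models import flow_runs


async def summarize_dependencies(session: AsyncSession, flow_run_id: UUID) -> None:
    graph = await flow_runs.read_task_run_dependencies(
        session=session, flow_run_id=flow_run_id
    )
    for node in graph:
        # upstream_dependencies holds the task run inputs collected from task_inputs.
        print(node["name"], "<-", len(node["upstream_dependencies"]), "upstream inputs")
```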
      "},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.set_flow_run_state","title":"set_flow_run_state async","text":"

      Creates a new orchestrated flow run state.

Setting a new state on a run is one of the principal actions governed by Prefect's orchestration logic. Setting a new run state does not guarantee creation; instead, it triggers orchestration rules that govern the proposed state input. If the state is considered valid, it is written to the database. Otherwise, a different state, or no state, may be created. A force flag is supplied to bypass a subset of orchestration logic.

      Parameters:

      Name Type Description Default session AsyncSession

      a database session

      required flow_run_id UUID

      the flow run id

      required state State

      a flow run state model

      required force bool

      if False, orchestration rules will be applied that may alter or prevent the state transition. If True, orchestration rules are not applied.

      False

      Returns:

      Type Description OrchestrationResult

      OrchestrationResult object

      Source code in prefect/server/models/flow_runs.py
      async def set_flow_run_state(\n    session: AsyncSession,\n    flow_run_id: UUID,\n    state: schemas.states.State,\n    force: bool = False,\n    flow_policy: BaseOrchestrationPolicy = None,\n    orchestration_parameters: Optional[Dict[str, Any]] = None,\n) -> OrchestrationResult:\n    \"\"\"\n    Creates a new orchestrated flow run state.\n\n    Setting a new state on a run is the one of the principal actions that is governed by\n    Prefect's orchestration logic. Setting a new run state will not guarantee creation,\n    but instead trigger orchestration rules to govern the proposed `state` input. If\n    the state is considered valid, it will be written to the database. Otherwise, a\n    it's possible a different state, or no state, will be created. A `force` flag is\n    supplied to bypass a subset of orchestration logic.\n\n    Args:\n        session: a database session\n        flow_run_id: the flow run id\n        state: a flow run state model\n        force: if False, orchestration rules will be applied that may alter or prevent\n            the state transition. If True, orchestration rules are not applied.\n\n    Returns:\n        OrchestrationResult object\n    \"\"\"\n\n    # load the flow run\n    run = await models.flow_runs.read_flow_run(\n        session=session,\n        flow_run_id=flow_run_id,\n        # Lock the row to prevent orchestration race conditions\n        for_update=True,\n    )\n\n    if not run:\n        raise ObjectNotFoundError(f\"Flow run with id {flow_run_id} not found\")\n\n    initial_state = run.state.as_state() if run.state else None\n    initial_state_type = initial_state.type if initial_state else None\n    proposed_state_type = state.type if state else None\n    intended_transition = (initial_state_type, proposed_state_type)\n\n    if force or flow_policy is None:\n        flow_policy = MinimalFlowPolicy\n\n    orchestration_rules = flow_policy.compile_transition_rules(*intended_transition)\n    global_rules = GlobalFlowPolicy.compile_transition_rules(*intended_transition)\n\n    context = FlowOrchestrationContext(\n        session=session,\n        run=run,\n        initial_state=initial_state,\n        proposed_state=state,\n    )\n\n    if orchestration_parameters is not None:\n        context.parameters = orchestration_parameters\n\n    # apply orchestration rules and create the new flow run state\n    async with contextlib.AsyncExitStack() as stack:\n        for rule in orchestration_rules:\n            context = await stack.enter_async_context(\n                rule(context, *intended_transition)\n            )\n\n        for rule in global_rules:\n            context = await stack.enter_async_context(\n                rule(context, *intended_transition)\n            )\n\n        await context.validate_proposed_state()\n\n    if context.orchestration_error is not None:\n        raise context.orchestration_error\n\n    result = OrchestrationResult(\n        state=context.validated_state,\n        status=context.response_status,\n        details=context.response_details,\n    )\n\n    # if a new state is being set (either ACCEPTED from user or REJECTED\n    # and set by the server), check for any notification policies\n    if result.status in (SetStateStatus.ACCEPT, SetStateStatus.REJECT):\n        await models.flow_run_notification_policies.queue_flow_run_notifications(\n            session=session, flow_run=run\n        )\n\n    return result\n
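The sketch below proposes a CANCELLED state for a run; it assumes the State and StateType constructors from prefect.server.schemas.states, and the returned OrchestrationResult reports whether the transition was accepted, rejected, or rewritten.

```python
from uuid import UUID

from sqlalchemy.ext.asyncio import AsyncSession

from prefect.server import schemas
from prefect.server.models import flow_runs


async def cancel_run(session: AsyncSession, flow_run_id: UUID) -> None:
    # Orchestration rules are applied because force defaults to False.
    result = await flow_runs.set_flow_run_state(
        session=session,
        flow_run_id=flow_run_id,
        state=schemas.states.State(type=schemas.states.StateType.CANCELLED),
    )
    print(result.status, result.state.type if result.state else None)
```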
      "},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.update_flow_run","title":"update_flow_run async","text":"

      Updates a flow run.

      Parameters:

      Name Type Description Default session AsyncSession

      a database session

      required flow_run_id UUID

      the flow run id to update

      required flow_run FlowRunUpdate

      a flow run model

      required

      Returns:

      Name Type Description bool bool

      whether or not matching rows were found to update

      Source code in prefect/server/models/flow_runs.py
      @db_injector\nasync def update_flow_run(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    flow_run_id: UUID,\n    flow_run: schemas.actions.FlowRunUpdate,\n) -> bool:\n    \"\"\"\n    Updates a flow run.\n\n    Args:\n        session: a database session\n        flow_run_id: the flow run id to update\n        flow_run: a flow run model\n\n    Returns:\n        bool: whether or not matching rows were found to update\n    \"\"\"\n    update_stmt = (\n        sa.update(db.FlowRun)\n        .where(db.FlowRun.id == flow_run_id)\n        # exclude_unset=True allows us to only update values provided by\n        # the user, ignoring any defaults on the model\n        .values(**flow_run.dict(shallow=True, exclude_unset=True))\n    )\n    result = await session.execute(update_stmt)\n    return result.rowcount > 0\n
      "},{"location":"api-ref/server/models/flows/","title":"server.models.flows","text":""},{"location":"api-ref/server/models/flows/#prefect.server.models.flows","title":"prefect.server.models.flows","text":"

      Functions for interacting with flow ORM objects. Intended for internal use by the Prefect REST API.

      "},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.count_flows","title":"count_flows async","text":"

      Count flows.

      Parameters:

      Name Type Description Default session AsyncSession

      A database session

      required flow_filter Union[FlowFilter, None]

      only count flows that match these filters

      None flow_run_filter Union[FlowRunFilter, None]

      only count flows whose flow runs match these filters

      None task_run_filter Union[TaskRunFilter, None]

      only count flows whose task runs match these filters

      None deployment_filter Union[DeploymentFilter, None]

      only count flows whose deployments match these filters

      None work_pool_filter Union[WorkPoolFilter, None]

      only count flows whose work pools match these filters

      None

      Returns:

      Name Type Description int int

      count of flows

      Source code in prefect/server/models/flows.py
      @db_injector\nasync def count_flows(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    flow_filter: Union[schemas.filters.FlowFilter, None] = None,\n    flow_run_filter: Union[schemas.filters.FlowRunFilter, None] = None,\n    task_run_filter: Union[schemas.filters.TaskRunFilter, None] = None,\n    deployment_filter: Union[schemas.filters.DeploymentFilter, None] = None,\n    work_pool_filter: Union[schemas.filters.WorkPoolFilter, None] = None,\n) -> int:\n    \"\"\"\n    Count flows.\n\n    Args:\n        session: A database session\n        flow_filter: only count flows that match these filters\n        flow_run_filter: only count flows whose flow runs match these filters\n        task_run_filter: only count flows whose task runs match these filters\n        deployment_filter: only count flows whose deployments match these filters\n        work_pool_filter: only count flows whose work pools match these filters\n\n    Returns:\n        int: count of flows\n    \"\"\"\n\n    query = select(sa.func.count(sa.text(\"*\"))).select_from(db.Flow)\n\n    query = await _apply_flow_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n    )\n\n    result = await session.execute(query)\n    return result.scalar_one()\n
      "},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.create_flow","title":"create_flow async","text":"

      Creates a new flow.

      If a flow with the same name already exists, the existing flow is returned.

      Parameters:

      Name Type Description Default session AsyncSession

      a database session

      required flow Flow

      a flow model

      required

      Returns:

      Type Description ORMFlow

      db.Flow: the newly-created or existing flow

      Source code in prefect/server/models/flows.py
      @db_injector\nasync def create_flow(\n    db: PrefectDBInterface, session: AsyncSession, flow: schemas.core.Flow\n) -> \"ORMFlow\":\n    \"\"\"\n    Creates a new flow.\n\n    If a flow with the same name already exists, the existing flow is returned.\n\n    Args:\n        session: a database session\n        flow: a flow model\n\n    Returns:\n        db.Flow: the newly-created or existing flow\n    \"\"\"\n\n    insert_stmt = (\n        db.insert(db.Flow)\n        .values(**flow.dict(shallow=True, exclude_unset=True))\n        .on_conflict_do_nothing(\n            index_elements=db.flow_unique_upsert_columns,\n        )\n    )\n    await session.execute(insert_stmt)\n\n    query = (\n        sa.select(db.Flow)\n        .where(\n            db.Flow.name == flow.name,\n        )\n        .limit(1)\n        .execution_options(populate_existing=True)\n    )\n    result = await session.execute(query)\n    model = result.scalar_one()\n    return model\n
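A short sketch of the upsert behavior described above (the flow name is a placeholder): calling this twice with the same name returns the same row both times.

```python
from sqlalchemy.ext.asyncio import AsyncSession

from prefect.server import schemas
from prefect.server.models import flows


async def get_or_create_flow(session: AsyncSession):
    # If a flow named "etl" already exists, the existing row is returned.
    return await flows.create_flow(session=session, flow=schemas.core.Flow(name="etl"))
```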
      "},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.delete_flow","title":"delete_flow async","text":"

      Delete a flow by id.

      Parameters:

      Name Type Description Default session AsyncSession

      A database session

      required flow_id UUID

      a flow id

      required

      Returns:

      Name Type Description bool bool

      whether or not the flow was deleted

      Source code in prefect/server/models/flows.py
      @db_injector\nasync def delete_flow(\n    db: PrefectDBInterface, session: AsyncSession, flow_id: UUID\n) -> bool:\n    \"\"\"\n    Delete a flow by id.\n\n    Args:\n        session: A database session\n        flow_id: a flow id\n\n    Returns:\n        bool: whether or not the flow was deleted\n    \"\"\"\n\n    result = await session.execute(delete(db.Flow).where(db.Flow.id == flow_id))\n    return result.rowcount > 0\n
      "},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.read_flow","title":"read_flow async","text":"

      Reads a flow by id.

      Parameters:

      Name Type Description Default session AsyncSession

      A database session

      required flow_id UUID

      a flow id

      required

      Returns:

      Type Description Optional[ORMFlow]

      db.Flow: the flow

      Source code in prefect/server/models/flows.py
      @db_injector\nasync def read_flow(\n    db: PrefectDBInterface, session: AsyncSession, flow_id: UUID\n) -> Optional[\"ORMFlow\"]:\n    \"\"\"\n    Reads a flow by id.\n\n    Args:\n        session: A database session\n        flow_id: a flow id\n\n    Returns:\n        db.Flow: the flow\n    \"\"\"\n    return await session.get(db.Flow, flow_id)\n
      "},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.read_flow_by_name","title":"read_flow_by_name async","text":"

      Reads a flow by name.

      Parameters:

      Name Type Description Default session AsyncSession

      A database session

      required name str

      a flow name

      required

      Returns:

      Type Description Optional[ORMFlow]

      db.Flow: the flow

      Source code in prefect/server/models/flows.py
      @db_injector\nasync def read_flow_by_name(\n    db: PrefectDBInterface, session: AsyncSession, name: str\n) -> Optional[\"ORMFlow\"]:\n    \"\"\"\n    Reads a flow by name.\n\n    Args:\n        session: A database session\n        name: a flow name\n\n    Returns:\n        db.Flow: the flow\n    \"\"\"\n\n    result = await session.execute(select(db.Flow).filter_by(name=name))\n    return result.scalar()\n
      "},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.read_flows","title":"read_flows async","text":"

      Read multiple flows.

      Parameters:

      Name Type Description Default session AsyncSession

      A database session

      required flow_filter Union[FlowFilter, None]

      only select flows that match these filters

      None flow_run_filter Union[FlowRunFilter, None]

      only select flows whose flow runs match these filters

      None task_run_filter Union[TaskRunFilter, None]

      only select flows whose task runs match these filters

      None deployment_filter Union[DeploymentFilter, None]

      only select flows whose deployments match these filters

      None work_pool_filter Union[WorkPoolFilter, None]

      only select flows whose work pools match these filters

      None offset Union[int, None]

      Query offset

      None limit Union[int, None]

      Query limit

      None

      Returns:

      Type Description Sequence[ORMFlow]

      List[db.Flow]: flows

      Source code in prefect/server/models/flows.py
      @db_injector\nasync def read_flows(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    flow_filter: Union[schemas.filters.FlowFilter, None] = None,\n    flow_run_filter: Union[schemas.filters.FlowRunFilter, None] = None,\n    task_run_filter: Union[schemas.filters.TaskRunFilter, None] = None,\n    deployment_filter: Union[schemas.filters.DeploymentFilter, None] = None,\n    work_pool_filter: Union[schemas.filters.WorkPoolFilter, None] = None,\n    sort: schemas.sorting.FlowSort = schemas.sorting.FlowSort.NAME_ASC,\n    offset: Union[int, None] = None,\n    limit: Union[int, None] = None,\n) -> Sequence[\"ORMFlow\"]:\n    \"\"\"\n    Read multiple flows.\n\n    Args:\n        session: A database session\n        flow_filter: only select flows that match these filters\n        flow_run_filter: only select flows whose flow runs match these filters\n        task_run_filter: only select flows whose task runs match these filters\n        deployment_filter: only select flows whose deployments match these filters\n        work_pool_filter: only select flows whose work pools match these filters\n        offset: Query offset\n        limit: Query limit\n\n    Returns:\n        List[db.Flow]: flows\n    \"\"\"\n\n    query = select(db.Flow).order_by(sort.as_sql_sort(db=db))\n\n    query = await _apply_flow_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n    )\n\n    if offset is not None:\n        query = query.offset(offset)\n\n    if limit is not None:\n        query = query.limit(limit)\n\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
      "},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.update_flow","title":"update_flow async","text":"

      Updates a flow.

      Parameters:

      Name Type Description Default session AsyncSession

      a database session

      required flow_id UUID

      the flow id to update

      required flow FlowUpdate

      a flow update model

      required

      Returns:

      Name Type Description bool bool

      whether or not matching rows were found to update

      Source code in prefect/server/models/flows.py
      @db_injector\nasync def update_flow(\n    db: PrefectDBInterface,\n    session: AsyncSession,\n    flow_id: UUID,\n    flow: schemas.actions.FlowUpdate,\n) -> bool:\n    \"\"\"\n    Updates a flow.\n\n    Args:\n        session: a database session\n        flow_id: the flow id to update\n        flow: a flow update model\n\n    Returns:\n        bool: whether or not matching rows were found to update\n    \"\"\"\n    update_stmt = (\n        sa.update(db.Flow)\n        .where(db.Flow.id == flow_id)\n        # exclude_unset=True allows us to only update values provided by\n        # the user, ignoring any defaults on the model\n        .values(**flow.dict(shallow=True, exclude_unset=True))\n    )\n    result = await session.execute(update_stmt)\n    return result.rowcount > 0\n
      "},{"location":"api-ref/server/models/saved_searches/","title":"server.models.saved_searches","text":""},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches","title":"prefect.server.models.saved_searches","text":"

      Functions for interacting with saved search ORM objects. Intended for internal use by the Prefect REST API.

      "},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.create_saved_search","title":"create_saved_search async","text":"

      Upserts a SavedSearch.

      If a SavedSearch with the same name exists, all properties will be updated.

      Parameters:

      Name Type Description Default session Session

      a database session

      required saved_search SavedSearch

      a SavedSearch model

      required

      Returns:

      Type Description

      db.SavedSearch: the newly-created or updated SavedSearch

      Source code in prefect/server/models/saved_searches.py
      @inject_db\nasync def create_saved_search(\n    session: sa.orm.Session,\n    saved_search: schemas.core.SavedSearch,\n    db: PrefectDBInterface,\n):\n    \"\"\"\n    Upserts a SavedSearch.\n\n    If a SavedSearch with the same name exists, all properties will be updated.\n\n    Args:\n        session (sa.orm.Session): a database session\n        saved_search (schemas.core.SavedSearch): a SavedSearch model\n\n    Returns:\n        db.SavedSearch: the newly-created or updated SavedSearch\n\n    \"\"\"\n\n    insert_stmt = (\n        db.insert(db.SavedSearch)\n        .values(**saved_search.dict(shallow=True, exclude_unset=True))\n        .on_conflict_do_update(\n            index_elements=db.saved_search_unique_upsert_columns,\n            set_=saved_search.dict(shallow=True, include={\"filters\"}),\n        )\n    )\n\n    await session.execute(insert_stmt)\n\n    query = (\n        sa.select(db.SavedSearch)\n        .where(\n            db.SavedSearch.name == saved_search.name,\n        )\n        .execution_options(populate_existing=True)\n    )\n    result = await session.execute(query)\n    model = result.scalar()\n\n    return model\n
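An illustrative sketch, assuming an open session; the search name is a placeholder and the empty filters list stands in for whatever filter payload the caller wants to persist.

```python
from sqlalchemy.ext.asyncio import AsyncSession

from prefect.server import schemas
from prefect.server.models import saved_searches


async def save_search(session: AsyncSession):
    search = schemas.core.SavedSearch(
        name="failed-runs",  # placeholder search name
        filters=[],          # placeholder: real searches carry filter entries here
    )
    # Upsert: an existing SavedSearch with this name has its filters replaced.
    return await saved_searches.create_saved_search(
        session=session, saved_search=search
    )
```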
      "},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.delete_saved_search","title":"delete_saved_search async","text":"

      Delete a SavedSearch by id.

      Parameters:

      Name Type Description Default session Session

      A database session

      required saved_search_id str

      a SavedSearch id

      required

      Returns:

      Name Type Description bool bool

      whether or not the SavedSearch was deleted

      Source code in prefect/server/models/saved_searches.py
      @inject_db\nasync def delete_saved_search(\n    session: sa.orm.Session, saved_search_id: UUID, db: PrefectDBInterface\n) -> bool:\n    \"\"\"\n    Delete a SavedSearch by id.\n\n    Args:\n        session (sa.orm.Session): A database session\n        saved_search_id (str): a SavedSearch id\n\n    Returns:\n        bool: whether or not the SavedSearch was deleted\n    \"\"\"\n\n    result = await session.execute(\n        delete(db.SavedSearch).where(db.SavedSearch.id == saved_search_id)\n    )\n    return result.rowcount > 0\n
      "},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.read_saved_search","title":"read_saved_search async","text":"

      Reads a SavedSearch by id.

      Parameters:

      Name Type Description Default session Session

      A database session

      required saved_search_id str

      a SavedSearch id

      required

      Returns:

      Type Description

      db.SavedSearch: the SavedSearch

      Source code in prefect/server/models/saved_searches.py
      @inject_db\nasync def read_saved_search(\n    session: sa.orm.Session, saved_search_id: UUID, db: PrefectDBInterface\n):\n    \"\"\"\n    Reads a SavedSearch by id.\n\n    Args:\n        session (sa.orm.Session): A database session\n        saved_search_id (str): a SavedSearch id\n\n    Returns:\n        db.SavedSearch: the SavedSearch\n    \"\"\"\n\n    return await session.get(db.SavedSearch, saved_search_id)\n
      "},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.read_saved_search_by_name","title":"read_saved_search_by_name async","text":"

      Reads a SavedSearch by name.

      Parameters:

      Name Type Description Default session Session

      A database session

      required name str

      a SavedSearch name

      required

      Returns:

      Type Description

      db.SavedSearch: the SavedSearch

      Source code in prefect/server/models/saved_searches.py
      @inject_db\nasync def read_saved_search_by_name(\n    session: sa.orm.Session, name: str, db: PrefectDBInterface\n):\n    \"\"\"\n    Reads a SavedSearch by name.\n\n    Args:\n        session (sa.orm.Session): A database session\n        name (str): a SavedSearch name\n\n    Returns:\n        db.SavedSearch: the SavedSearch\n    \"\"\"\n    result = await session.execute(\n        select(db.SavedSearch).where(db.SavedSearch.name == name).limit(1)\n    )\n    return result.scalar()\n
      "},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.read_saved_searches","title":"read_saved_searches async","text":"

      Read SavedSearches.

      Parameters:

      Name Type Description Default session Session

      A database session

      required offset int

      Query offset

None limit int

Query limit

None

      Returns:

      Type Description

      List[db.SavedSearch]: SavedSearches

      Source code in prefect/server/models/saved_searches.py
      @inject_db\nasync def read_saved_searches(\n    db: PrefectDBInterface,\n    session: sa.orm.Session,\n    offset: int = None,\n    limit: int = None,\n):\n    \"\"\"\n    Read SavedSearches.\n\n    Args:\n        session (sa.orm.Session): A database session\n        offset (int): Query offset\n        limit(int): Query limit\n\n    Returns:\n        List[db.SavedSearch]: SavedSearches\n    \"\"\"\n\n    query = select(db.SavedSearch).order_by(db.SavedSearch.name)\n\n    if offset is not None:\n        query = query.offset(offset)\n    if limit is not None:\n        query = query.limit(limit)\n\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
      "},{"location":"api-ref/server/models/task_run_states/","title":"server.models.task_run_states","text":""},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states","title":"prefect.server.models.task_run_states","text":"

      Functions for interacting with task run state ORM objects. Intended for internal use by the Prefect REST API.

      "},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states.delete_task_run_state","title":"delete_task_run_state async","text":"

      Delete a task run state by id.

      Parameters:

      Name Type Description Default session Session

      A database session

      required task_run_state_id UUID

      a task run state id

      required

      Returns:

      Name Type Description bool bool

      whether or not the task run state was deleted

      Source code in prefect/server/models/task_run_states.py
      @inject_db\nasync def delete_task_run_state(\n    session: sa.orm.Session, task_run_state_id: UUID, db: PrefectDBInterface\n) -> bool:\n    \"\"\"\n    Delete a task run state by id.\n\n    Args:\n        session: A database session\n        task_run_state_id: a task run state id\n\n    Returns:\n        bool: whether or not the task run state was deleted\n    \"\"\"\n\n    result = await session.execute(\n        delete(db.TaskRunState).where(db.TaskRunState.id == task_run_state_id)\n    )\n    return result.rowcount > 0\n
      "},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states.read_task_run_state","title":"read_task_run_state async","text":"

      Reads a task run state by id.

      Parameters:

      Name Type Description Default session Session

      A database session

      required task_run_state_id UUID

      a task run state id

      required

      Returns:

      Type Description

      db.TaskRunState: the task state

      Source code in prefect/server/models/task_run_states.py
      @inject_db\nasync def read_task_run_state(\n    session: sa.orm.Session, task_run_state_id: UUID, db: PrefectDBInterface\n):\n    \"\"\"\n    Reads a task run state by id.\n\n    Args:\n        session: A database session\n        task_run_state_id: a task run state id\n\n    Returns:\n        db.TaskRunState: the task state\n    \"\"\"\n\n    return await session.get(db.TaskRunState, task_run_state_id)\n
      "},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states.read_task_run_states","title":"read_task_run_states async","text":"

Reads task run states for a task run.

      Parameters:

      Name Type Description Default session Session

      A database session

      required task_run_id UUID

      the task run id

      required

      Returns:

      Type Description

      List[db.TaskRunState]: the task run states

      Source code in prefect/server/models/task_run_states.py
      @inject_db\nasync def read_task_run_states(\n    session: sa.orm.Session, task_run_id: UUID, db: PrefectDBInterface\n):\n    \"\"\"\n    Reads task runs states for a task run.\n\n    Args:\n        session: A database session\n        task_run_id: the task run id\n\n    Returns:\n        List[db.TaskRunState]: the task run states\n    \"\"\"\n\n    query = (\n        select(db.TaskRunState)\n        .filter_by(task_run_id=task_run_id)\n        .order_by(db.TaskRunState.timestamp)\n    )\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
      "},{"location":"api-ref/server/models/task_runs/","title":"server.models.task_runs","text":""},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs","title":"prefect.server.models.task_runs","text":"

      Functions for interacting with task run ORM objects. Intended for internal use by the Prefect REST API.

      "},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.count_task_runs","title":"count_task_runs async","text":"

      Count task runs.

      Parameters:

      Name Type Description Default session Session

      a database session

      required flow_filter FlowFilter

      only count task runs whose flows match these filters

      None flow_run_filter FlowRunFilter

      only count task runs whose flow runs match these filters

      None task_run_filter TaskRunFilter

      only count task runs that match these filters

      None deployment_filter DeploymentFilter

      only count task runs whose deployments match these filters

      None Source code in prefect/server/models/task_runs.py
      @inject_db\nasync def count_task_runs(\n    session: sa.orm.Session,\n    db: PrefectDBInterface,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n) -> int:\n    \"\"\"\n    Count task runs.\n\n    Args:\n        session: a database session\n        flow_filter: only count task runs whose flows match these filters\n        flow_run_filter: only count task runs whose flow runs match these filters\n        task_run_filter: only count task runs that match these filters\n        deployment_filter: only count task runs whose deployments match these filters\n    Returns:\n        int: count of task runs\n    \"\"\"\n\n    query = select(sa.func.count(sa.text(\"*\"))).select_from(db.TaskRun)\n\n    query = await _apply_task_run_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        db=db,\n    )\n\n    result = await session.execute(query)\n    return result.scalar()\n
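A short sketch of counting task runs with a filter, assuming a session obtained as in the earlier sketch. The dict-style filter mirrors the form used elsewhere in this module's source; the filter values are illustrative.

from prefect.server import models, schemas


async def count_failed_task_runs(session) -> int:
    # only count task runs currently in a FAILED state
    return await models.task_runs.count_task_runs(
        session=session,
        task_run_filter=schemas.filters.TaskRunFilter(
            state={"type": {"any_": ["FAILED"]}}
        ),
    )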
      "},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.count_task_runs_by_state","title":"count_task_runs_by_state async","text":"

      Count task runs by state.

      Parameters:

      Name Type Description Default session AsyncSession

      a database session

      required flow_filter Optional[FlowFilter]

      only count task runs whose flows match these filters

      None flow_run_filter Optional[FlowRunFilter]

      only count task runs whose flow runs match these filters

      None task_run_filter Optional[TaskRunFilter]

      only count task runs that match these filters

      None deployment_filter Optional[DeploymentFilter]

      only count task runs whose deployments match these filters

      None Source code in prefect/server/models/task_runs.py
      async def count_task_runs_by_state(\n    session: AsyncSession,\n    db: PrefectDBInterface,\n    flow_filter: Optional[schemas.filters.FlowFilter] = None,\n    flow_run_filter: Optional[schemas.filters.FlowRunFilter] = None,\n    task_run_filter: Optional[schemas.filters.TaskRunFilter] = None,\n    deployment_filter: Optional[schemas.filters.DeploymentFilter] = None,\n) -> schemas.states.CountByState:\n    \"\"\"\n    Count task runs by state.\n\n    Args:\n        session: a database session\n        flow_filter: only count task runs whose flows match these filters\n        flow_run_filter: only count task runs whose flow runs match these filters\n        task_run_filter: only count task runs that match these filters\n        deployment_filter: only count task runs whose deployments match these filters\n    Returns:\n        schemas.states.CountByState: count of task runs by state\n    \"\"\"\n\n    base_query = (\n        select(\n            db.TaskRun.state_type,\n            sa.func.count(sa.text(\"*\")).label(\"count\"),\n        )\n        .select_from(db.TaskRun)\n        .group_by(db.TaskRun.state_type)\n    )\n\n    query = await _apply_task_run_filters(\n        base_query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n    )\n\n    result = await session.execute(query)\n\n    counts = schemas.states.CountByState()\n\n    for row in result:\n        setattr(counts, row.state_type, row.count)\n\n    return counts\n
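A sketch of tallying task runs per state, assuming a session as above. Passing db explicitly and reading the result through attributes named after state types (for example counts.FAILED) are assumptions based on the signature and the assignment shown above.

from prefect.server import models
from prefect.server.database.dependencies import provide_database_interface


async def summarize_task_run_states(session) -> None:
    counts = await models.task_runs.count_task_runs_by_state(
        session=session, db=provide_database_interface()
    )
    # assumed: CountByState exposes one integer count per state type
    print(f"completed={counts.COMPLETED} failed={counts.FAILED}")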
      "},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.create_task_run","title":"create_task_run async","text":"

      Creates a new task run.

      If a task run with the same flow_run_id, task_key, and dynamic_key already exists, the existing task run will be returned. If the provided task run has a state attached, it will also be created.

      Parameters:

      Name Type Description Default session Session

      a database session

      required task_run TaskRun

      a task run model

      required

      Returns:

      Type Description

      db.TaskRun: the newly-created or existing task run

      Source code in prefect/server/models/task_runs.py
      @inject_db\nasync def create_task_run(\n    session: sa.orm.Session,\n    task_run: schemas.core.TaskRun,\n    db: PrefectDBInterface,\n    orchestration_parameters: Optional[Dict[str, Any]] = None,\n):\n    \"\"\"\n    Creates a new task run.\n\n    If a task run with the same flow_run_id, task_key, and dynamic_key already exists,\n    the existing task run will be returned. If the provided task run has a state\n    attached, it will also be created.\n\n    Args:\n        session: a database session\n        task_run: a task run model\n\n    Returns:\n        db.TaskRun: the newly-created or existing task run\n    \"\"\"\n\n    now = pendulum.now(\"UTC\")\n\n    # if a dynamic key exists, we need to guard against conflicts\n    if task_run.flow_run_id:\n        insert_stmt = (\n            db.insert(db.TaskRun)\n            .values(\n                created=now,\n                **task_run.dict(\n                    shallow=True, exclude={\"state\", \"created\"}, exclude_unset=True\n                ),\n            )\n            .on_conflict_do_nothing(\n                index_elements=db.task_run_unique_upsert_columns,\n            )\n        )\n        await session.execute(insert_stmt)\n\n        query = (\n            sa.select(db.TaskRun)\n            .where(\n                sa.and_(\n                    db.TaskRun.flow_run_id == task_run.flow_run_id,\n                    db.TaskRun.task_key == task_run.task_key,\n                    db.TaskRun.dynamic_key == task_run.dynamic_key,\n                )\n            )\n            .limit(1)\n            .execution_options(populate_existing=True)\n        )\n        result = await session.execute(query)\n        model = result.scalar()\n    else:\n        # Upsert on (task_key, dynamic_key) application logic.\n        query = (\n            sa.select(db.TaskRun)\n            .where(\n                sa.and_(\n                    db.TaskRun.flow_run_id.is_(None),\n                    db.TaskRun.task_key == task_run.task_key,\n                    db.TaskRun.dynamic_key == task_run.dynamic_key,\n                )\n            )\n            .limit(1)\n            .execution_options(populate_existing=True)\n        )\n\n        result = await session.execute(query)\n        model = result.scalar()\n\n        if model is None:\n            model = db.TaskRun(\n                created=now,\n                **task_run.dict(\n                    shallow=True, exclude={\"state\", \"created\"}, exclude_unset=True\n                ),\n                state=None,\n            )\n            session.add(model)\n            await session.flush()\n\n    if model.created == now and task_run.state:\n        await models.task_runs.set_task_run_state(\n            session=session,\n            task_run_id=model.id,\n            state=task_run.state,\n            force=True,\n            orchestration_parameters=orchestration_parameters,\n        )\n    return model\n
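A sketch of creating a task run for an existing flow run, assuming a session as above; the field values are placeholders. Because of the upsert behavior described above, calling this twice with the same flow_run_id, task_key, and dynamic_key returns the existing row rather than creating a duplicate.

from prefect.server import models, schemas


async def create_example_task_run(session, flow_run_id):
    # flow_run_id is assumed to reference an existing flow run
    return await models.task_runs.create_task_run(
        session=session,
        task_run=schemas.core.TaskRun(
            flow_run_id=flow_run_id,
            task_key="my_module.my_task",
            dynamic_key="0",
        ),
    )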
      "},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.delete_task_run","title":"delete_task_run async","text":"

      Delete a task run by id.

      Parameters:

      Name Type Description Default session Session

      a database session

      required task_run_id UUID

      the task run id to delete

      required

      Returns:

      Name Type Description bool bool

      whether or not the task run was deleted

      Source code in prefect/server/models/task_runs.py
      @inject_db\nasync def delete_task_run(\n    session: sa.orm.Session, task_run_id: UUID, db: PrefectDBInterface\n) -> bool:\n    \"\"\"\n    Delete a task run by id.\n\n    Args:\n        session: a database session\n        task_run_id: the task run id to delete\n\n    Returns:\n        bool: whether or not the task run was deleted\n    \"\"\"\n\n    result = await session.execute(\n        delete(db.TaskRun).where(db.TaskRun.id == task_run_id)\n    )\n    return result.rowcount > 0\n
      "},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.read_task_run","title":"read_task_run async","text":"

      Read a task run by id.

      Parameters:

      Name Type Description Default session Session

      a database session

      required task_run_id UUID

      the task run id

      required

      Returns:

      Type Description

      db.TaskRun: the task run

      Source code in prefect/server/models/task_runs.py
      @inject_db\nasync def read_task_run(\n    session: sa.orm.Session, task_run_id: UUID, db: PrefectDBInterface\n):\n    \"\"\"\n    Read a task run by id.\n\n    Args:\n        session: a database session\n        task_run_id: the task run id\n\n    Returns:\n        db.TaskRun: the task run\n    \"\"\"\n\n    model = await session.get(db.TaskRun, task_run_id)\n    return model\n
      "},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.read_task_runs","title":"read_task_runs async","text":"

      Read task runs.

      Parameters:

      Name Type Description Default session Session

      a database session

      required flow_filter FlowFilter

      only select task runs whose flows match these filters

      None flow_run_filter FlowRunFilter

      only select task runs whose flow runs match these filters

      None task_run_filter TaskRunFilter

      only select task runs that match these filters

      None deployment_filter DeploymentFilter

      only select task runs whose deployments match these filters

      None offset int

      Query offset

      None limit int

      Query limit

      None sort TaskRunSort

      Query sort

      ID_DESC

      Returns:

      Type Description

      List[db.TaskRun]: the task runs

      Source code in prefect/server/models/task_runs.py
      @inject_db\nasync def read_task_runs(\n    session: sa.orm.Session,\n    db: PrefectDBInterface,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    offset: int = None,\n    limit: int = None,\n    sort: schemas.sorting.TaskRunSort = schemas.sorting.TaskRunSort.ID_DESC,\n):\n    \"\"\"\n    Read task runs.\n\n    Args:\n        session: a database session\n        flow_filter: only select task runs whose flows match these filters\n        flow_run_filter: only select task runs whose flow runs match these filters\n        task_run_filter: only select task runs that match these filters\n        deployment_filter: only select task runs whose deployments match these filters\n        offset: Query offset\n        limit: Query limit\n        sort: Query sort\n\n    Returns:\n        List[db.TaskRun]: the task runs\n    \"\"\"\n\n    query = select(db.TaskRun).order_by(sort.as_sql_sort(db))\n\n    query = await _apply_task_run_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        db=db,\n    )\n\n    if offset is not None:\n        query = query.offset(offset)\n\n    if limit is not None:\n        query = query.limit(limit)\n\n    logger.debug(f\"In read_task_runs, query generated is:\\n{query}\")\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
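A sketch of paging through the task runs of one flow run, assuming a session as above. The filter shape mirrors the dict form used in this module's source; ID_DESC is the documented default sort and is spelled out only for clarity.

from prefect.server import models, schemas


async def list_task_runs_for_flow_run(session, flow_run_id):
    return await models.task_runs.read_task_runs(
        session=session,
        flow_run_filter=schemas.filters.FlowRunFilter(id={"any_": [flow_run_id]}),
        sort=schemas.sorting.TaskRunSort.ID_DESC,
        offset=0,
        limit=50,
    )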
      "},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.set_task_run_state","title":"set_task_run_state async","text":"

      Creates a new orchestrated task run state.

Setting a new state on a run is one of the principal actions governed by Prefect's orchestration logic. Setting a new run state will not guarantee creation, but instead triggers orchestration rules to govern the proposed state input. If the state is considered valid, it will be written to the database. Otherwise, it's possible a different state, or no state, will be created. A force flag is supplied to bypass a subset of orchestration logic.

      Parameters:

      Name Type Description Default session Session

      a database session

      required task_run_id UUID

      the task run id

      required state State

      a task run state model

      required force bool

      if False, orchestration rules will be applied that may alter or prevent the state transition. If True, orchestration rules are not applied.

      False

      Returns:

      Type Description OrchestrationResult

      OrchestrationResult object

      Source code in prefect/server/models/task_runs.py
      async def set_task_run_state(\n    session: sa.orm.Session,\n    task_run_id: UUID,\n    state: schemas.states.State,\n    force: bool = False,\n    task_policy: BaseOrchestrationPolicy = None,\n    orchestration_parameters: Optional[Dict[str, Any]] = None,\n) -> OrchestrationResult:\n    \"\"\"\n    Creates a new orchestrated task run state.\n\n    Setting a new state on a run is the one of the principal actions that is governed by\n    Prefect's orchestration logic. Setting a new run state will not guarantee creation,\n    but instead trigger orchestration rules to govern the proposed `state` input. If\n    the state is considered valid, it will be written to the database. Otherwise, a\n    it's possible a different state, or no state, will be created. A `force` flag is\n    supplied to bypass a subset of orchestration logic.\n\n    Args:\n        session: a database session\n        task_run_id: the task run id\n        state: a task run state model\n        force: if False, orchestration rules will be applied that may alter or prevent\n            the state transition. If True, orchestration rules are not applied.\n\n    Returns:\n        OrchestrationResult object\n    \"\"\"\n\n    # load the task run\n    run = await models.task_runs.read_task_run(session=session, task_run_id=task_run_id)\n\n    if not run:\n        raise ObjectNotFoundError(f\"Task run with id {task_run_id} not found\")\n\n    initial_state = run.state.as_state() if run.state else None\n    initial_state_type = initial_state.type if initial_state else None\n    proposed_state_type = state.type if state else None\n    intended_transition = (initial_state_type, proposed_state_type)\n\n    if run.flow_run_id is None:\n        task_policy = AutonomousTaskPolicy  # CoreTaskPolicy + prevent `Running` -> `Running` transition\n    elif force or task_policy is None:\n        task_policy = MinimalTaskPolicy\n\n    orchestration_rules = task_policy.compile_transition_rules(*intended_transition)\n    global_rules = GlobalTaskPolicy.compile_transition_rules(*intended_transition)\n\n    context = TaskOrchestrationContext(\n        session=session,\n        run=run,\n        initial_state=initial_state,\n        proposed_state=state,\n    )\n\n    if orchestration_parameters is not None:\n        context.parameters = orchestration_parameters\n\n    # apply orchestration rules and create the new task run state\n    async with contextlib.AsyncExitStack() as stack:\n        for rule in orchestration_rules:\n            context = await stack.enter_async_context(\n                rule(context, *intended_transition)\n            )\n\n        for rule in global_rules:\n            context = await stack.enter_async_context(\n                rule(context, *intended_transition)\n            )\n\n        await context.validate_proposed_state()\n\n    if context.orchestration_error is not None:\n        raise context.orchestration_error\n\n    result = OrchestrationResult(\n        state=context.validated_state,\n        status=context.response_status,\n        details=context.response_details,\n    )\n\n    return result\n
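A sketch of proposing a state transition through orchestration, assuming a session as above. Comparing the result against SetStateStatus is an assumption about the shape of OrchestrationResult; force is left at its default so orchestration rules apply.

from prefect.server import models
from prefect.server.schemas import states
from prefect.server.schemas.responses import SetStateStatus


async def complete_task_run(session, task_run_id) -> bool:
    result = await models.task_runs.set_task_run_state(
        session=session,
        task_run_id=task_run_id,
        state=states.Completed(),
    )
    # the proposed state may be accepted, rejected, delayed, or aborted
    return result.status == SetStateStatus.ACCEPT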
      "},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.update_task_run","title":"update_task_run async","text":"

      Updates a task run.

      Parameters:

      Name Type Description Default session AsyncSession

      a database session

      required task_run_id UUID

      the task run id to update

      required task_run TaskRunUpdate

      a task run model

      required

      Returns:

      Name Type Description bool bool

      whether or not matching rows were found to update

      Source code in prefect/server/models/task_runs.py
      @inject_db\nasync def update_task_run(\n    session: AsyncSession,\n    task_run_id: UUID,\n    task_run: schemas.actions.TaskRunUpdate,\n    db: PrefectDBInterface,\n) -> bool:\n    \"\"\"\n    Updates a task run.\n\n    Args:\n        session: a database session\n        task_run_id: the task run id to update\n        task_run: a task run model\n\n    Returns:\n        bool: whether or not matching rows were found to update\n    \"\"\"\n    update_stmt = (\n        sa.update(db.TaskRun)\n        .where(db.TaskRun.id == task_run_id)\n        # exclude_unset=True allows us to only update values provided by\n        # the user, ignoring any defaults on the model\n        .values(**task_run.dict(shallow=True, exclude_unset=True))\n    )\n    result = await session.execute(update_stmt)\n    return result.rowcount > 0\n
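A sketch of renaming a task run, assuming a session as above. Whether name is an updatable field on TaskRunUpdate is an assumption; the boolean only signals that a matching row was found.

from prefect.server import models, schemas


async def rename_task_run(session, task_run_id) -> bool:
    return await models.task_runs.update_task_run(
        session=session,
        task_run_id=task_run_id,
        # assumed: name is accepted by TaskRunUpdate
        task_run=schemas.actions.TaskRunUpdate(name="renamed-task-run"),
    )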
      "},{"location":"api-ref/server/orchestration/core_policy/","title":"server.orchestration.core_policy","text":""},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy","title":"prefect.server.orchestration.core_policy","text":"

      Orchestration logic that fires on state transitions.

      CoreFlowPolicy and CoreTaskPolicy contain all default orchestration rules that Prefect enforces on a state transition.

      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CoreFlowPolicy","title":"CoreFlowPolicy","text":"

      Bases: BaseOrchestrationPolicy

      Orchestration rules that run against flow-run-state transitions in priority order.

      Source code in prefect/server/orchestration/core_policy.py
      class CoreFlowPolicy(BaseOrchestrationPolicy):\n    \"\"\"\n    Orchestration rules that run against flow-run-state transitions in priority order.\n    \"\"\"\n\n    def priority():\n        return [\n            PreventDuplicateTransitions,\n            HandleFlowTerminalStateTransitions,\n            EnforceCancellingToCancelledTransition,\n            BypassCancellingFlowRunsWithNoInfra,\n            PreventPendingTransitions,\n            EnsureOnlyScheduledFlowsMarkedLate,\n            HandlePausingFlows,\n            HandleResumingPausedFlows,\n            CopyScheduledTime,\n            WaitForScheduledTime,\n            RetryFailedFlows,\n        ] + ([InstrumentFlowRunStateTransitions] if PREFECT_EXPERIMENTAL_EVENTS else [])\n
      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CoreTaskPolicy","title":"CoreTaskPolicy","text":"

      Bases: BaseOrchestrationPolicy

      Orchestration rules that run against task-run-state transitions in priority order.

      Source code in prefect/server/orchestration/core_policy.py
      class CoreTaskPolicy(BaseOrchestrationPolicy):\n    \"\"\"\n    Orchestration rules that run against task-run-state transitions in priority order.\n    \"\"\"\n\n    def priority():\n        return [\n            CacheRetrieval,\n            HandleTaskTerminalStateTransitions,\n            PreventRunningTasksFromStoppedFlows,\n            SecureTaskConcurrencySlots,  # retrieve cached states even if slots are full\n            CopyScheduledTime,\n            WaitForScheduledTime,\n            RetryFailedTasks,\n            RenameReruns,\n            UpdateFlowRunTrackerOnTasks,\n            CacheInsertion,\n            ReleaseTaskConcurrencySlots,\n        ]\n
      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.AutonomousTaskPolicy","title":"AutonomousTaskPolicy","text":"

      Bases: BaseOrchestrationPolicy

      Orchestration rules that run against task-run-state transitions in priority order.

      Source code in prefect/server/orchestration/core_policy.py
      class AutonomousTaskPolicy(BaseOrchestrationPolicy):\n    \"\"\"\n    Orchestration rules that run against task-run-state transitions in priority order.\n    \"\"\"\n\n    def priority():\n        return [\n            PreventPendingTransitions,\n            CacheRetrieval,\n            HandleTaskTerminalStateTransitions,\n            SecureTaskConcurrencySlots,  # retrieve cached states even if slots are full\n            CopyScheduledTime,\n            WaitForScheduledTime,\n            RetryFailedTasks,\n            RenameReruns,\n            UpdateFlowRunTrackerOnTasks,\n            CacheInsertion,\n            ReleaseTaskConcurrencySlots,\n            EnqueueScheduledTasks,\n        ]\n
      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.SecureTaskConcurrencySlots","title":"SecureTaskConcurrencySlots","text":"

      Bases: BaseOrchestrationRule

      Checks relevant concurrency slots are available before entering a Running state.

      This rule checks if concurrency limits have been set on the tags associated with a TaskRun. If so, a concurrency slot will be secured against each concurrency limit before being allowed to transition into a running state. If a concurrency limit has been reached, the client will be instructed to delay the transition for the duration specified by the \"PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS\" setting before trying again. If the concurrency limit set on a tag is 0, the transition will be aborted to prevent deadlocks.

      Source code in prefect/server/orchestration/core_policy.py
      class SecureTaskConcurrencySlots(BaseOrchestrationRule):\n    \"\"\"\n    Checks relevant concurrency slots are available before entering a Running state.\n\n    This rule checks if concurrency limits have been set on the tags associated with a\n    TaskRun. If so, a concurrency slot will be secured against each concurrency limit\n    before being allowed to transition into a running state. If a concurrency limit has\n    been reached, the client will be instructed to delay the transition for the duration\n    specified by the \"PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS\" setting\n    before trying again. If the concurrency limit set on a tag is 0, the transition will\n    be aborted to prevent deadlocks.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.RUNNING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        self._applied_limits = []\n        filtered_limits = (\n            await concurrency_limits.filter_concurrency_limits_for_orchestration(\n                context.session, tags=context.run.tags\n            )\n        )\n        run_limits = {limit.tag: limit for limit in filtered_limits}\n        for tag, cl in run_limits.items():\n            limit = cl.concurrency_limit\n            if limit == 0:\n                # limits of 0 will deadlock, and the transition needs to abort\n                for stale_tag in self._applied_limits:\n                    stale_limit = run_limits.get(stale_tag, None)\n                    active_slots = set(stale_limit.active_slots)\n                    active_slots.discard(str(context.run.id))\n                    stale_limit.active_slots = list(active_slots)\n\n                await self.abort_transition(\n                    reason=(\n                        f'The concurrency limit on tag \"{tag}\" is 0 and will deadlock'\n                        \" if the task tries to run again.\"\n                    ),\n                )\n            elif len(cl.active_slots) >= limit:\n                # if the limit has already been reached, delay the transition\n                for stale_tag in self._applied_limits:\n                    stale_limit = run_limits.get(stale_tag, None)\n                    active_slots = set(stale_limit.active_slots)\n                    active_slots.discard(str(context.run.id))\n                    stale_limit.active_slots = list(active_slots)\n\n                await self.delay_transition(\n                    PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS.value(),\n                    f\"Concurrency limit for the {tag} tag has been reached\",\n                )\n            else:\n                # log the TaskRun ID to active_slots\n                self._applied_limits.append(tag)\n                active_slots = set(cl.active_slots)\n                active_slots.add(str(context.run.id))\n                cl.active_slots = list(active_slots)\n\n    async def cleanup(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        for tag in self._applied_limits:\n            cl = await concurrency_limits.read_concurrency_limit_by_tag(\n                context.session, tag\n            )\n            active_slots = set(cl.active_slots)\n            active_slots.discard(str(context.run.id))\n       
     cl.active_slots = list(active_slots)\n
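For context, the client-side counterpart of this rule is a tag-based concurrency limit. A minimal sketch, assuming a limit has already been registered for the tag (for example with the prefect concurrency-limit create CLI command):

from prefect import task


# if a concurrency limit of 3 exists for the "database" tag, this rule
# allows at most three such task runs to enter Running at once
@task(tags=["database"])
def run_query(statement: str):
    ...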
      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.ReleaseTaskConcurrencySlots","title":"ReleaseTaskConcurrencySlots","text":"

      Bases: BaseUniversalTransform

      Releases any concurrency slots held by a run upon exiting a Running or Cancelling state.

      Source code in prefect/server/orchestration/core_policy.py
      class ReleaseTaskConcurrencySlots(BaseUniversalTransform):\n    \"\"\"\n    Releases any concurrency slots held by a run upon exiting a Running or\n    Cancelling state.\n    \"\"\"\n\n    async def after_transition(\n        self,\n        context: OrchestrationContext,\n    ):\n        if self.nullified_transition():\n            return\n\n        if context.validated_state and context.validated_state.type not in [\n            states.StateType.RUNNING,\n            states.StateType.CANCELLING,\n        ]:\n            filtered_limits = (\n                await concurrency_limits.filter_concurrency_limits_for_orchestration(\n                    context.session, tags=context.run.tags\n                )\n            )\n            run_limits = {limit.tag: limit for limit in filtered_limits}\n            for tag, cl in run_limits.items():\n                active_slots = set(cl.active_slots)\n                active_slots.discard(str(context.run.id))\n                cl.active_slots = list(active_slots)\n
      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.AddUnknownResult","title":"AddUnknownResult","text":"

      Bases: BaseOrchestrationRule

      Assign an \"unknown\" result to runs that are forced to complete from a failed or crashed state, if the previous state used a persisted result.

      When we retry a flow run, we retry any task runs that were in a failed or crashed state, but we also retry completed task runs that didn't use a persisted result. This means that without a sentinel value for unknown results, a task run forced into Completed state will always get rerun if the flow run retries because the task run lacks a persisted result. The \"unknown\" sentinel ensures that when we see a completed task run with an unknown result, we know that it was forced to complete and we shouldn't rerun it.

      Flow runs forced into a Completed state have a similar problem: without a sentinel value, attempting to refer to the flow run's result will raise an exception because the flow run has no result. The sentinel ensures that we can distinguish between a flow run that has no result and a flow run that has an unknown result.

      Source code in prefect/server/orchestration/core_policy.py
      class AddUnknownResult(BaseOrchestrationRule):\n    \"\"\"\n    Assign an \"unknown\" result to runs that are forced to complete from a\n    failed or crashed state, if the previous state used a persisted result.\n\n    When we retry a flow run, we retry any task runs that were in a failed or\n    crashed state, but we also retry completed task runs that didn't use a\n    persisted result. This means that without a sentinel value for unknown\n    results, a task run forced into Completed state will always get rerun if the\n    flow run retries because the task run lacks a persisted result. The\n    \"unknown\" sentinel ensures that when we see a completed task run with an\n    unknown result, we know that it was forced to complete and we shouldn't\n    rerun it.\n\n    Flow runs forced into a Completed state have a similar problem: without a\n    sentinel value, attempting to refer to the flow run's result will raise an\n    exception because the flow run has no result. The sentinel ensures that we\n    can distinguish between a flow run that has no result and a flow run that\n    has an unknown result.\n    \"\"\"\n\n    FROM_STATES = [StateType.FAILED, StateType.CRASHED]\n    TO_STATES = [StateType.COMPLETED]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        if (\n            initial_state\n            and initial_state.data\n            and initial_state.data.get(\"type\") == \"reference\"\n        ):\n            unknown_result = await UnknownResult.create()\n            self.context.proposed_state.data = unknown_result.dict()\n
      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CacheInsertion","title":"CacheInsertion","text":"

      Bases: BaseOrchestrationRule

      Caches completed states with cache keys after they are validated.

      Source code in prefect/server/orchestration/core_policy.py
      class CacheInsertion(BaseOrchestrationRule):\n    \"\"\"\n    Caches completed states with cache keys after they are validated.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.COMPLETED]\n\n    @inject_db\n    async def after_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n        db: PrefectDBInterface,\n    ) -> None:\n        if not validated_state or not context.session:\n            return\n\n        cache_key = validated_state.state_details.cache_key\n        if cache_key:\n            new_cache_item = db.TaskRunStateCache(\n                cache_key=cache_key,\n                cache_expiration=validated_state.state_details.cache_expiration,\n                task_run_state_id=validated_state.id,\n            )\n            context.session.add(new_cache_item)\n
      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CacheRetrieval","title":"CacheRetrieval","text":"

      Bases: BaseOrchestrationRule

      Rejects running states if a completed state has been cached.

      This rule rejects transitions into a running state with a cache key if the key has already been associated with a completed state in the cache table. The client will be instructed to transition into the cached completed state instead.

      Source code in prefect/server/orchestration/core_policy.py
      class CacheRetrieval(BaseOrchestrationRule):\n    \"\"\"\n    Rejects running states if a completed state has been cached.\n\n    This rule rejects transitions into a running state with a cache key if the key\n    has already been associated with a completed state in the cache table. The client\n    will be instructed to transition into the cached completed state instead.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.RUNNING]\n\n    @inject_db\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n        db: PrefectDBInterface,\n    ) -> None:\n        cache_key = proposed_state.state_details.cache_key\n        if cache_key and not proposed_state.state_details.refresh_cache:\n            # Check for cached states matching the cache key\n            cached_state_id = (\n                select(db.TaskRunStateCache.task_run_state_id)\n                .where(\n                    sa.and_(\n                        db.TaskRunStateCache.cache_key == cache_key,\n                        sa.or_(\n                            db.TaskRunStateCache.cache_expiration.is_(None),\n                            db.TaskRunStateCache.cache_expiration > pendulum.now(\"utc\"),\n                        ),\n                    ),\n                )\n                .order_by(db.TaskRunStateCache.created.desc())\n                .limit(1)\n            ).scalar_subquery()\n            query = select(db.TaskRunState).where(db.TaskRunState.id == cached_state_id)\n            cached_state = (await context.session.execute(query)).scalar()\n            if cached_state:\n                new_state = cached_state.as_state().copy(reset_fields=True)\n                new_state.name = \"Cached\"\n                await self.reject_transition(\n                    state=new_state, reason=\"Retrieved state from cache\"\n                )\n
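The cache keys this rule matches against are typically produced on the client by a task's cache_key_fn. A minimal client-side sketch using the public task decorator; the hashing function and expiration are illustrative.

from datetime import timedelta

from prefect import task
from prefect.tasks import task_input_hash


# repeated calls with the same inputs within one hour should be rejected
# into the cached Completed state by this rule
@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))
def fetch(url: str) -> str:
    ...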
      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.RetryFailedFlows","title":"RetryFailedFlows","text":"

      Bases: BaseOrchestrationRule

      Rejects failed states and schedules a retry if the retry limit has not been reached.

      This rule rejects transitions into a failed state if retries has been set and the run count has not reached the specified limit. The client will be instructed to transition into a scheduled state to retry flow execution.

      Source code in prefect/server/orchestration/core_policy.py
      class RetryFailedFlows(BaseOrchestrationRule):\n    \"\"\"\n    Rejects failed states and schedules a retry if the retry limit has not been reached.\n\n    This rule rejects transitions into a failed state if `retries` has been\n    set and the run count has not reached the specified limit. The client will be\n    instructed to transition into a scheduled state to retry flow execution.\n    \"\"\"\n\n    FROM_STATES = [StateType.RUNNING]\n    TO_STATES = [StateType.FAILED]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: FlowOrchestrationContext,\n    ) -> None:\n        run_settings = context.run_settings\n        run_count = context.run.run_count\n\n        if run_settings.retries is None or run_count > run_settings.retries:\n            return  # Retry count exceeded, allow transition to failed\n\n        scheduled_start_time = pendulum.now(\"UTC\").add(\n            seconds=run_settings.retry_delay or 0\n        )\n\n        # support old-style flow run retries for older clients\n        # older flow retries require us to loop over failed tasks to update their state\n        # this is not required after API version 0.8.3\n        api_version = context.parameters.get(\"api-version\", None)\n        if api_version and api_version < Version(\"0.8.3\"):\n            failed_task_runs = await models.task_runs.read_task_runs(\n                context.session,\n                flow_run_filter=filters.FlowRunFilter(id={\"any_\": [context.run.id]}),\n                task_run_filter=filters.TaskRunFilter(\n                    state={\"type\": {\"any_\": [\"FAILED\"]}}\n                ),\n            )\n            for run in failed_task_runs:\n                await models.task_runs.set_task_run_state(\n                    context.session,\n                    run.id,\n                    state=states.AwaitingRetry(scheduled_time=scheduled_start_time),\n                    force=True,\n                )\n                # Reset the run count so that the task run retries still work correctly\n                run.run_count = 0\n\n        # Reset pause metadata on retry\n        # Pauses as a concept only exist after API version 0.8.4\n        api_version = context.parameters.get(\"api-version\", None)\n        if api_version is None or api_version >= Version(\"0.8.4\"):\n            updated_policy = context.run.empirical_policy.dict()\n            updated_policy[\"resuming\"] = False\n            updated_policy[\"pause_keys\"] = set()\n            context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n\n        # Generate a new state for the flow\n        retry_state = states.AwaitingRetry(\n            scheduled_time=scheduled_start_time,\n            message=proposed_state.message,\n            data=proposed_state.data,\n        )\n        await self.reject_transition(state=retry_state, reason=\"Retrying\")\n
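On the client, this rule is driven by the retries and retry_delay_seconds options on the flow decorator. A minimal sketch with illustrative values:

from prefect import flow


# a failed run is rejected into AwaitingRetry up to two times,
# scheduled thirty seconds after each failure
@flow(retries=2, retry_delay_seconds=30)
def nightly_etl():
    ...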
      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.RetryFailedTasks","title":"RetryFailedTasks","text":"

      Bases: BaseOrchestrationRule

      Rejects failed states and schedules a retry if the retry limit has not been reached.

      This rule rejects transitions into a failed state if retries has been set, the run count has not reached the specified limit, and the client asserts it is a retriable task run. The client will be instructed to transition into a scheduled state to retry task execution.

      Source code in prefect/server/orchestration/core_policy.py
      class RetryFailedTasks(BaseOrchestrationRule):\n    \"\"\"\n    Rejects failed states and schedules a retry if the retry limit has not been reached.\n\n    This rule rejects transitions into a failed state if `retries` has been\n    set, the run count has not reached the specified limit, and the client\n    asserts it is a retriable task run. The client will be instructed to\n    transition into a scheduled state to retry task execution.\n    \"\"\"\n\n    FROM_STATES = [StateType.RUNNING]\n    TO_STATES = [StateType.FAILED]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        run_settings = context.run_settings\n        run_count = context.run.run_count\n        delay = run_settings.retry_delay\n\n        if isinstance(delay, list):\n            base_delay = delay[min(run_count - 1, len(delay) - 1)]\n        else:\n            base_delay = run_settings.retry_delay or 0\n\n        # guard against negative relative jitter inputs\n        if run_settings.retry_jitter_factor:\n            delay = clamped_poisson_interval(\n                base_delay, clamping_factor=run_settings.retry_jitter_factor\n            )\n        else:\n            delay = base_delay\n\n        # set by user to conditionally retry a task using @task(retry_condition_fn=...)\n        if getattr(proposed_state.state_details, \"retriable\", True) is False:\n            return\n\n        if run_settings.retries is not None and run_count <= run_settings.retries:\n            retry_state = states.AwaitingRetry(\n                scheduled_time=pendulum.now(\"UTC\").add(seconds=delay),\n                message=proposed_state.message,\n                data=proposed_state.data,\n            )\n            await self.reject_transition(state=retry_state, reason=\"Retrying\")\n
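The task-side options this rule reads are retries, retry_delay_seconds (a number or a per-attempt list, as handled above), retry_jitter_factor, and retry_condition_fn. A minimal sketch; the exact signature of the retry-condition callback is an assumption.

from prefect import task


def retry_on_timeout(task, task_run, state) -> bool:
    # assumed callback shape: return True to allow another retry attempt
    return "timeout" in (state.message or "")


@task(
    retries=3,
    retry_delay_seconds=[1, 10, 60],  # per-attempt backoff
    retry_jitter_factor=0.5,
    retry_condition_fn=retry_on_timeout,
)
def call_flaky_api():
    ...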
      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.EnqueueScheduledTasks","title":"EnqueueScheduledTasks","text":"

      Bases: BaseOrchestrationRule

      Enqueues autonomous task runs when they are scheduled

      Source code in prefect/server/orchestration/core_policy.py
      class EnqueueScheduledTasks(BaseOrchestrationRule):\n    \"\"\"\n    Enqueues autonomous task runs when they are scheduled\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.SCHEDULED]\n\n    async def after_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        if not PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value():\n            # Only if task scheduling is enabled\n            return\n\n        if not validated_state:\n            # Only if the transition was valid\n            return\n\n        if context.run.flow_run_id:\n            # Only for autonomous tasks\n            return\n\n        task_run: TaskRun = TaskRun.from_orm(context.run)\n        queue = TaskQueue.for_key(task_run.task_key)\n\n        if validated_state.name == \"AwaitingRetry\":\n            await queue.retry(task_run)\n        else:\n            await queue.enqueue(task_run)\n
      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.RenameReruns","title":"RenameReruns","text":"

      Bases: BaseOrchestrationRule

      Name the states if they have run more than once.

      In the special case where the initial state is an \"AwaitingRetry\" scheduled state, the proposed state will be renamed to \"Retrying\" instead.

      Source code in prefect/server/orchestration/core_policy.py
      class RenameReruns(BaseOrchestrationRule):\n    \"\"\"\n    Name the states if they have run more than once.\n\n    In the special case where the initial state is an \"AwaitingRetry\" scheduled state,\n    the proposed state will be renamed to \"Retrying\" instead.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.RUNNING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        run_count = context.run.run_count\n        if run_count > 0:\n            if initial_state.name == \"AwaitingRetry\":\n                await self.rename_state(\"Retrying\")\n            else:\n                await self.rename_state(\"Rerunning\")\n
      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CopyScheduledTime","title":"CopyScheduledTime","text":"

      Bases: BaseOrchestrationRule

      Ensures scheduled time is copied from scheduled states to pending states.

      If a new scheduled time has been proposed on the pending state, the scheduled time on the scheduled state will be ignored.

      Source code in prefect/server/orchestration/core_policy.py
      class CopyScheduledTime(BaseOrchestrationRule):\n    \"\"\"\n    Ensures scheduled time is copied from scheduled states to pending states.\n\n    If a new scheduled time has been proposed on the pending state, the scheduled time\n    on the scheduled state will be ignored.\n    \"\"\"\n\n    FROM_STATES = [StateType.SCHEDULED]\n    TO_STATES = [StateType.PENDING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        if not proposed_state.state_details.scheduled_time:\n            proposed_state.state_details.scheduled_time = (\n                initial_state.state_details.scheduled_time\n            )\n
      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.WaitForScheduledTime","title":"WaitForScheduledTime","text":"

      Bases: BaseOrchestrationRule

Prevents transitions to running states from happening too early.

This rule enforces that scheduled states only start once their scheduled time has passed, according to the machine clock used by the Prefect REST API instance. It identifies transitions from scheduled states that arrive too early and nullifies them: no state is written to the database, and the client is sent an instruction to wait for delay_seconds before attempting the transition again.

      Source code in prefect/server/orchestration/core_policy.py
      class WaitForScheduledTime(BaseOrchestrationRule):\n    \"\"\"\n    Prevents transitions to running states from happening to early.\n\n    This rule enforces that all scheduled states will only start with the machine clock\n    used by the Prefect REST API instance. This rule will identify transitions from scheduled\n    states that are too early and nullify them. Instead, no state will be written to the\n    database and the client will be sent an instruction to wait for `delay_seconds`\n    before attempting the transition again.\n    \"\"\"\n\n    FROM_STATES = [StateType.SCHEDULED, StateType.PENDING]\n    TO_STATES = [StateType.RUNNING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        scheduled_time = initial_state.state_details.scheduled_time\n        if not scheduled_time:\n            return\n\n        # At this moment, we round delay to the nearest second as the API schema\n        # specifies an integer return value.\n        delay = scheduled_time - pendulum.now(\"UTC\")\n        delay_seconds = delay.in_seconds()\n        delay_seconds += round(delay.microseconds / 1e6)\n        if delay_seconds > 0:\n            await self.delay_transition(\n                delay_seconds, reason=\"Scheduled time is in the future\"\n            )\n
      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandlePausingFlows","title":"HandlePausingFlows","text":"

      Bases: BaseOrchestrationRule

      Governs runs attempting to enter a Paused/Suspended state

      Source code in prefect/server/orchestration/core_policy.py
      class HandlePausingFlows(BaseOrchestrationRule):\n    \"\"\"\n    Governs runs attempting to enter a Paused/Suspended state\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.PAUSED]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        verb = \"suspend\" if proposed_state.name == \"Suspended\" else \"pause\"\n\n        if initial_state is None:\n            await self.abort_transition(f\"Cannot {verb} flows with no state.\")\n            return\n\n        if not initial_state.is_running():\n            await self.reject_transition(\n                state=None,\n                reason=f\"Cannot {verb} flows that are not currently running.\",\n            )\n            return\n\n        self.key = proposed_state.state_details.pause_key\n        if self.key is None:\n            # if no pause key is provided, default to a UUID\n            self.key = str(uuid4())\n\n        if self.key in context.run.empirical_policy.pause_keys:\n            await self.reject_transition(\n                state=None, reason=f\"This {verb} has already fired.\"\n            )\n            return\n\n        if proposed_state.state_details.pause_reschedule:\n            if context.run.parent_task_run_id:\n                await self.abort_transition(\n                    reason=f\"Cannot {verb} subflows.\",\n                )\n                return\n\n            if context.run.deployment_id is None:\n                await self.abort_transition(\n                    reason=f\"Cannot {verb} flows without a deployment.\",\n                )\n                return\n\n    async def after_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        updated_policy = context.run.empirical_policy.dict()\n        updated_policy[\"pause_keys\"].add(self.key)\n        context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n
      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandleResumingPausedFlows","title":"HandleResumingPausedFlows","text":"

      Bases: BaseOrchestrationRule

      Governs runs attempting to leave a Paused state

      Source code in prefect/server/orchestration/core_policy.py
      class HandleResumingPausedFlows(BaseOrchestrationRule):\n    \"\"\"\n    Governs runs attempting to leave a Paused state\n    \"\"\"\n\n    FROM_STATES = [StateType.PAUSED]\n    TO_STATES = ALL_ORCHESTRATION_STATES\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        if not (\n            proposed_state.is_running()\n            or proposed_state.is_scheduled()\n            or proposed_state.is_final()\n        ):\n            await self.reject_transition(\n                state=None,\n                reason=(\n                    f\"This run cannot transition to the {proposed_state.type} state\"\n                    f\" from the {initial_state.type} state.\"\n                ),\n            )\n            return\n\n        verb = \"suspend\" if proposed_state.name == \"Suspended\" else \"pause\"\n\n        if initial_state.state_details.pause_reschedule:\n            if not context.run.deployment_id:\n                await self.reject_transition(\n                    state=None,\n                    reason=(\n                        f\"Cannot reschedule a {proposed_state.name.lower()} flow run\"\n                        \" without a deployment.\"\n                    ),\n                )\n                return\n        pause_timeout = initial_state.state_details.pause_timeout\n        if pause_timeout and pause_timeout < pendulum.now(\"UTC\"):\n            pause_timeout_failure = states.Failed(\n                message=(\n                    f\"The flow was {proposed_state.name.lower()} and never resumed.\"\n                ),\n            )\n            await self.reject_transition(\n                state=pause_timeout_failure,\n                reason=f\"The flow run {verb} has timed out and can no longer resume.\",\n            )\n            return\n\n    async def after_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        updated_policy = context.run.empirical_policy.dict()\n        updated_policy[\"resuming\"] = True\n        context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n
      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.UpdateFlowRunTrackerOnTasks","title":"UpdateFlowRunTrackerOnTasks","text":"

      Bases: BaseOrchestrationRule

      Tracks the flow run attempt a task run state is associated with.

      Source code in prefect/server/orchestration/core_policy.py
      class UpdateFlowRunTrackerOnTasks(BaseOrchestrationRule):\n    \"\"\"\n    Tracks the flow run attempt a task run state is associated with.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.RUNNING]\n\n    async def after_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        if context.run.flow_run_id is not None:\n            self.flow_run = await context.flow_run()\n            if self.flow_run:\n                context.run.flow_run_run_count = self.flow_run.run_count\n            else:\n                raise ObjectNotFoundError(\n                    (\n                        \"Unable to read flow run associated with task run:\"\n                        f\" {context.run.id}, this flow run might have been deleted\"\n                    ),\n                )\n
      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandleTaskTerminalStateTransitions","title":"HandleTaskTerminalStateTransitions","text":"

      Bases: BaseOrchestrationRule

      We do not allow tasks to leave terminal states if: - The task is completed and has a persisted result - The task is going to CANCELLING / PAUSED / CRASHED

      We reset the run count when a task leaves a terminal state for a non-terminal state which resets task run retries; this is particularly relevant for flow run retries.

      Source code in prefect/server/orchestration/core_policy.py
      class HandleTaskTerminalStateTransitions(BaseOrchestrationRule):\n    \"\"\"\n    We do not allow tasks to leave terminal states if:\n    - The task is completed and has a persisted result\n    - The task is going to CANCELLING / PAUSED / CRASHED\n\n    We reset the run count when a task leaves a terminal state for a non-terminal state\n    which resets task run retries; this is particularly relevant for flow run retries.\n    \"\"\"\n\n    FROM_STATES = TERMINAL_STATES\n    TO_STATES = ALL_ORCHESTRATION_STATES\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        self.original_run_count = context.run.run_count\n\n        # Do not allow runs to be marked as crashed, paused, or cancelling if already terminal\n        if proposed_state.type in {\n            StateType.CANCELLING,\n            StateType.PAUSED,\n            StateType.CRASHED,\n        }:\n            await self.abort_transition(f\"Run is already {initial_state.type.value}.\")\n            return\n\n        # Only allow departure from a happily completed state if the result is not persisted\n        if (\n            initial_state.is_completed()\n            and initial_state.data\n            and initial_state.data.get(\"type\") != \"unpersisted\"\n        ):\n            await self.reject_transition(None, \"This run is already completed.\")\n            return\n\n        if not proposed_state.is_final():\n            # Reset run count to reset retries\n            context.run.run_count = 0\n\n        # Change the name of the state to retrying if its a flow run retry\n        if proposed_state.is_running() and context.run.flow_run_id is not None:\n            self.flow_run = await context.flow_run()\n            flow_retrying = context.run.flow_run_run_count < self.flow_run.run_count\n            if flow_retrying:\n                await self.rename_state(\"Retrying\")\n\n    async def cleanup(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: OrchestrationContext,\n    ):\n        # reset run count\n        context.run.run_count = self.original_run_count\n
      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandleFlowTerminalStateTransitions","title":"HandleFlowTerminalStateTransitions","text":"

      Bases: BaseOrchestrationRule

      We do not allow flows to leave terminal states if: - The flow is completed and has a persisted result - The flow is going to CANCELLING / PAUSED / CRASHED - The flow is going to scheduled and has no deployment

      We reset the pause metadata when a flow leaves a terminal state for a non-terminal state. This resets pause behavior during manual flow run retries.

      Source code in prefect/server/orchestration/core_policy.py
      class HandleFlowTerminalStateTransitions(BaseOrchestrationRule):\n    \"\"\"\n    We do not allow flows to leave terminal states if:\n    - The flow is completed and has a persisted result\n    - The flow is going to CANCELLING / PAUSED / CRASHED\n    - The flow is going to scheduled and has no deployment\n\n    We reset the pause metadata when a flow leaves a terminal state for a non-terminal\n    state. This resets pause behavior during manual flow run retries.\n    \"\"\"\n\n    FROM_STATES = TERMINAL_STATES\n    TO_STATES = ALL_ORCHESTRATION_STATES\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: FlowOrchestrationContext,\n    ) -> None:\n        self.original_flow_policy = context.run.empirical_policy.dict()\n\n        # Do not allow runs to be marked as crashed, paused, or cancelling if already terminal\n        if proposed_state.type in {\n            StateType.CANCELLING,\n            StateType.PAUSED,\n            StateType.CRASHED,\n        }:\n            await self.abort_transition(\n                f\"Run is already in terminal state {initial_state.type.value}.\"\n            )\n            return\n\n        # Only allow departure from a happily completed state if the result is not\n        # persisted and the a rerun is being proposed\n        if (\n            initial_state.is_completed()\n            and not proposed_state.is_final()\n            and initial_state.data\n            and initial_state.data.get(\"type\") != \"unpersisted\"\n        ):\n            await self.reject_transition(None, \"Run is already COMPLETED.\")\n            return\n\n        # Do not allows runs to be rescheduled without a deployment\n        if proposed_state.is_scheduled() and not context.run.deployment_id:\n            await self.abort_transition(\n                \"Cannot reschedule a run without an associated deployment.\"\n            )\n            return\n\n        if not proposed_state.is_final():\n            # Reset pause metadata when leaving a terminal state\n            api_version = context.parameters.get(\"api-version\", None)\n            if api_version is None or api_version >= Version(\"0.8.4\"):\n                updated_policy = context.run.empirical_policy.dict()\n                updated_policy[\"resuming\"] = False\n                updated_policy[\"pause_keys\"] = set()\n                context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n\n    async def cleanup(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: OrchestrationContext,\n    ):\n        context.run.empirical_policy = core.FlowRunPolicy(**self.original_flow_policy)\n
      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.PreventPendingTransitions","title":"PreventPendingTransitions","text":"

      Bases: BaseOrchestrationRule

      Prevents transitions to PENDING.

      This rule is only used for flow runs.

      This is intended to prevent race conditions during duplicate submissions of runs. Before a run is submitted to its execution environment, it should be placed in a PENDING state. If two workers attempt to submit the same run, one of them should encounter a PENDING -> PENDING transition and abort orchestration of the run.

Similarly, if the execution environment starts quickly, the run may be in a RUNNING state when the second worker attempts the PENDING transition. We deny these state changes as well to prevent duplicate submission. If a run has transitioned to a RUNNING state, a worker should not attempt to submit it again unless it has moved into a terminal state.

      CANCELLING and CANCELLED runs should not be allowed to transition to PENDING. For re-runs of deployed runs, they should transition to SCHEDULED first. For re-runs of ad-hoc runs, they should transition directly to RUNNING.

      Source code in prefect/server/orchestration/core_policy.py
      class PreventPendingTransitions(BaseOrchestrationRule):\n    \"\"\"\n    Prevents transitions to PENDING.\n\n    This rule is only used for flow runs.\n\n    This is intended to prevent race conditions during duplicate submissions of runs.\n    Before a run is submitted to its execution environment, it should be placed in a\n    PENDING state. If two workers attempt to submit the same run, one of them should\n    encounter a PENDING -> PENDING transition and abort orchestration of the run.\n\n    Similarly, if the execution environment starts quickly the run may be in a RUNNING\n    state when the second worker attempts the PENDING transition. We deny these state\n    changes as well to prevent duplicate submission. If a run has transitioned to a\n    RUNNING state a worker should not attempt to submit it again unless it has moved\n    into a terminal state.\n\n    CANCELLING and CANCELLED runs should not be allowed to transition to PENDING.\n    For re-runs of deployed runs, they should transition to SCHEDULED first.\n    For re-runs of ad-hoc runs, they should transition directly to RUNNING.\n    \"\"\"\n\n    FROM_STATES = [\n        StateType.PENDING,\n        StateType.CANCELLING,\n        StateType.RUNNING,\n        StateType.CANCELLED,\n    ]\n    TO_STATES = [StateType.PENDING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        await self.abort_transition(\n            reason=(\n                f\"This run is in a {initial_state.type.name} state and cannot\"\n                \" transition to a PENDING state.\"\n            )\n        )\n
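The sketch below illustrates the worker-side half of the race described above: a worker proposes a PENDING state before submitting a run and only submits if the transition is accepted. This is a hedged example, not an excerpt of Prefect's worker code; the client methods, import paths, and response fields used here are assumptions based on the behavior documented for this rule.

import asyncio
from uuid import UUID

from prefect.client.orchestration import get_client
from prefect.client.schemas.responses import SetStateStatus
from prefect.states import Pending


async def claim_run_for_submission(flow_run_id: UUID) -> bool:
    # Propose PENDING before submitting the run to its execution environment.
    async with get_client() as client:
        result = await client.set_flow_run_state(
            flow_run_id=flow_run_id, state=Pending()
        )
    # ACCEPT means this worker won the race; an ABORT or REJECT response means
    # another worker already moved the run to PENDING (or beyond), so skip it.
    return result.status == SetStateStatus.ACCEPT


# Example usage (hypothetical run id):
# asyncio.run(claim_run_for_submission(UUID("00000000-0000-0000-0000-000000000000")))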
      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.PreventRunningTasksFromStoppedFlows","title":"PreventRunningTasksFromStoppedFlows","text":"

      Bases: BaseOrchestrationRule

      Prevents running tasks from stopped flows.

A running state implies execution, and the converse holds as well: execution implies a running state. This rule ensures that a flow's tasks cannot run unless the flow itself is running.

      Source code in prefect/server/orchestration/core_policy.py
      class PreventRunningTasksFromStoppedFlows(BaseOrchestrationRule):\n    \"\"\"\n    Prevents running tasks from stopped flows.\n\n    A running state implies execution, but also the converse. This rule ensures that a\n    flow's tasks cannot be run unless the flow is also running.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.RUNNING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        flow_run = await context.flow_run()\n        if flow_run is not None:\n            if flow_run.state is None:\n                await self.abort_transition(\n                    reason=\"The enclosing flow must be running to begin task execution.\"\n                )\n            elif flow_run.state.type == StateType.PAUSED:\n                # Use the flow run's Paused state details to preserve data like\n                # timeouts.\n                paused_state = states.Paused(\n                    name=\"NotReady\",\n                    pause_expiration_time=flow_run.state.state_details.pause_timeout,\n                    reschedule=flow_run.state.state_details.pause_reschedule,\n                )\n                await self.reject_transition(\n                    state=paused_state,\n                    reason=(\n                        \"The flow is paused, new tasks can execute after resuming flow\"\n                        f\" run: {flow_run.id}.\"\n                    ),\n                )\n            elif not flow_run.state.type == StateType.RUNNING:\n                # task runners should abort task run execution\n                await self.abort_transition(\n                    reason=(\n                        \"The enclosing flow must be running to begin task execution.\"\n                    ),\n                )\n
      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.EnforceCancellingToCancelledTransition","title":"EnforceCancellingToCancelledTransition","text":"

      Bases: BaseOrchestrationRule

      Rejects transitions from Cancelling to any terminal state except for Cancelled.

      Source code in prefect/server/orchestration/core_policy.py
      class EnforceCancellingToCancelledTransition(BaseOrchestrationRule):\n    \"\"\"\n    Rejects transitions from Cancelling to any terminal state except for Cancelled.\n    \"\"\"\n\n    FROM_STATES = {StateType.CANCELLED, StateType.CANCELLING}\n    TO_STATES = ALL_ORCHESTRATION_STATES - {StateType.CANCELLED}\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        await self.reject_transition(\n            state=None,\n            reason=(\n                \"Cannot transition flows that are cancelling to a state other \"\n                \"than Cancelled.\"\n            ),\n        )\n        return\n
      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.BypassCancellingFlowRunsWithNoInfra","title":"BypassCancellingFlowRunsWithNoInfra","text":"

      Bases: BaseOrchestrationRule

Rejects transitions from Scheduled to Cancelling, and instead sets the state to Cancelled, if the flow run has no associated infrastructure process ID. It also rejects transitions from Paused to Cancelling if the Paused state's details indicate the flow run has been suspended, exiting the flow and tearing down infrastructure.

The Cancelling state is used to clean up infrastructure. If there is no infrastructure to clean up, we can transition directly to Cancelled. Runs that are AwaitingRetry are in a Scheduled state and may have associated infrastructure.

      Source code in prefect/server/orchestration/core_policy.py
      class BypassCancellingFlowRunsWithNoInfra(BaseOrchestrationRule):\n    \"\"\"Rejects transitions from Scheduled to Cancelling, and instead sets the state to Cancelled,\n    if the flow run has no associated infrastructure process ID. Also Rejects transitions from\n    Paused to Cancelling if the Paused state's details indicates the flow run has been suspended,\n    exiting the flow and tearing down infra.\n\n    The `Cancelling` state is used to clean up infrastructure. If there is not infrastructure\n    to clean up, we can transition directly to `Cancelled`. Runs that are `AwaitingRetry` are\n    a `Scheduled` state that may have associated infrastructure.\n    \"\"\"\n\n    FROM_STATES = {StateType.SCHEDULED, StateType.PAUSED}\n    TO_STATES = {StateType.CANCELLING}\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: FlowOrchestrationContext,\n    ) -> None:\n        if (\n            initial_state.type == states.StateType.SCHEDULED\n            and not context.run.infrastructure_pid\n        ):\n            await self.reject_transition(\n                state=states.Cancelled(),\n                reason=\"Scheduled flow run has no infrastructure to terminate.\",\n            )\n        elif (\n            initial_state.type == states.StateType.PAUSED\n            and initial_state.state_details.pause_reschedule\n        ):\n            await self.reject_transition(\n                state=states.Cancelled(),\n                reason=\"Suspended flow run has no infrastructure to terminate.\",\n            )\n
      "},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.PreventDuplicateTransitions","title":"PreventDuplicateTransitions","text":"

      Bases: BaseOrchestrationRule

      Prevent duplicate transitions from being made right after one another.

This rule allows clients to set an optional transition_id on a state. If the run's next transition has the same transition_id, the transition is rejected and the existing state is returned.

This allows clients to make state transition requests without worrying about the following case:
- A client makes a state transition request
- The server accepts and commits the transition
- The client never receives the response and retries the request

      Source code in prefect/server/orchestration/core_policy.py
      class PreventDuplicateTransitions(BaseOrchestrationRule):\n    \"\"\"\n    Prevent duplicate transitions from being made right after one another.\n\n    This rule allows for clients to set an optional transition_id on a state. If the\n    run's next transition has the same transition_id, the transition will be\n    rejected and the existing state will be returned.\n\n    This allows for clients to make state transition requests without worrying about\n    the following case:\n    - A client making a state transition request\n    - The server accepts transition and commits the transition\n    - The client is unable to receive the response and retries the request\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = ALL_ORCHESTRATION_STATES\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        if (\n            initial_state is None\n            or proposed_state is None\n            or initial_state.state_details is None\n            or proposed_state.state_details is None\n        ):\n            return\n\n        initial_transition_id = getattr(\n            initial_state.state_details, \"transition_id\", None\n        )\n        proposed_transition_id = getattr(\n            proposed_state.state_details, \"transition_id\", None\n        )\n        if (\n            initial_transition_id is not None\n            and proposed_transition_id is not None\n            and initial_transition_id == proposed_transition_id\n        ):\n            await self.reject_transition(\n                # state=None will return the initial (current) state\n                state=None,\n                reason=\"This run has already made this state transition.\",\n            )\n
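A minimal client-side sketch of the transition_id mechanism described above: attaching the same transition_id to a retried request lets the server detect the duplicate and return the already-committed state. Treat the import paths and the presence of a transition_id field on the state details as assumptions drawn from this rule's description, not a verbatim client recipe.

from uuid import UUID, uuid4

from prefect.client.orchestration import get_client
from prefect.states import Completed


async def complete_run_idempotently(flow_run_id: UUID) -> None:
    state = Completed()
    # Reusing this id on a retried request causes the server to reject the
    # duplicate transition and return the existing state instead.
    state.state_details.transition_id = uuid4()

    async with get_client() as client:
        await client.set_flow_run_state(flow_run_id=flow_run_id, state=state)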
      "},{"location":"api-ref/server/orchestration/global_policy/","title":"server.orchestration.global_policy","text":""},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy","title":"prefect.server.orchestration.global_policy","text":"

      Bookkeeping logic that fires on every state transition.

For clarity, GlobalFlowPolicy and GlobalTaskPolicy contain all transition logic implemented using BaseUniversalTransform. None of these operations modify state, and regardless of what orchestration the Prefect REST API might enforce on a transition, the global policies contain Prefect's necessary bookkeeping. Because these transforms record information about the validated state committed to the state database, they should be the most deeply nested contexts in the orchestration loop.

      "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.GlobalFlowPolicy","title":"GlobalFlowPolicy","text":"

      Bases: BaseOrchestrationPolicy

      Global transforms that run against flow-run-state transitions in priority order.

      These transforms are intended to run immediately before and after a state transition is validated.

      Source code in prefect/server/orchestration/global_policy.py
      class GlobalFlowPolicy(BaseOrchestrationPolicy):\n    \"\"\"\n    Global transforms that run against flow-run-state transitions in priority order.\n\n    These transforms are intended to run immediately before and after a state transition\n    is validated.\n    \"\"\"\n\n    def priority():\n        return COMMON_GLOBAL_TRANSFORMS() + [\n            UpdateSubflowParentTask,\n            UpdateSubflowStateDetails,\n            IncrementFlowRunCount,\n            RemoveResumingIndicator,\n        ]\n
      "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.GlobalTaskPolicy","title":"GlobalTaskPolicy","text":"

      Bases: BaseOrchestrationPolicy

      Global transforms that run against task-run-state transitions in priority order.

      These transforms are intended to run immediately before and after a state transition is validated.

      Source code in prefect/server/orchestration/global_policy.py
      class GlobalTaskPolicy(BaseOrchestrationPolicy):\n    \"\"\"\n    Global transforms that run against task-run-state transitions in priority order.\n\n    These transforms are intended to run immediately before and after a state transition\n    is validated.\n    \"\"\"\n\n    def priority():\n        return COMMON_GLOBAL_TRANSFORMS() + [\n            IncrementTaskRunCount,\n        ]\n
      "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetRunStateType","title":"SetRunStateType","text":"

      Bases: BaseUniversalTransform

      Updates the state type of a run on a state transition.

      Source code in prefect/server/orchestration/global_policy.py
      class SetRunStateType(BaseUniversalTransform):\n    \"\"\"\n    Updates the state type of a run on a state transition.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # record the new state's type\n        context.run.state_type = context.proposed_state.type\n
      "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetRunStateName","title":"SetRunStateName","text":"

      Bases: BaseUniversalTransform

      Updates the state name of a run on a state transition.

      Source code in prefect/server/orchestration/global_policy.py
      class SetRunStateName(BaseUniversalTransform):\n    \"\"\"\n    Updates the state name of a run on a state transition.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # record the new state's name\n        context.run.state_name = context.proposed_state.name\n
      "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetStartTime","title":"SetStartTime","text":"

      Bases: BaseUniversalTransform

      Records the time a run enters a running state for the first time.

      Source code in prefect/server/orchestration/global_policy.py
      class SetStartTime(BaseUniversalTransform):\n    \"\"\"\n    Records the time a run enters a running state for the first time.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # if entering a running state and no start time is set...\n        if context.proposed_state.is_running() and context.run.start_time is None:\n            # set the start time\n            context.run.start_time = context.proposed_state.timestamp\n
      "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetRunStateTimestamp","title":"SetRunStateTimestamp","text":"

      Bases: BaseUniversalTransform

      Records the time a run changes states.

      Source code in prefect/server/orchestration/global_policy.py
      class SetRunStateTimestamp(BaseUniversalTransform):\n    \"\"\"\n    Records the time a run changes states.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # record the new state's timestamp\n        context.run.state_timestamp = context.proposed_state.timestamp\n
      "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetEndTime","title":"SetEndTime","text":"

      Bases: BaseUniversalTransform

      Records the time a run enters a terminal state.

With normal client usage, a run will not transition out of a terminal state. However, it's possible to force these transitions manually via the API. When a run leaves a terminal state, the end time is unset.

      Source code in prefect/server/orchestration/global_policy.py
      class SetEndTime(BaseUniversalTransform):\n    \"\"\"\n    Records the time a run enters a terminal state.\n\n    With normal client usage, a run will not transition out of a terminal state.\n    However, it's possible to force these transitions manually via the API. While\n    leaving a terminal state, the end time will be unset.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # if exiting a final state for a non-final state...\n        if (\n            context.initial_state\n            and context.initial_state.is_final()\n            and not context.proposed_state.is_final()\n        ):\n            # clear the end time\n            context.run.end_time = None\n\n        # if entering a final state...\n        if context.proposed_state.is_final():\n            # if the run has a start time and no end time, give it one\n            if context.run.start_time and not context.run.end_time:\n                context.run.end_time = context.proposed_state.timestamp\n
      "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.IncrementRunTime","title":"IncrementRunTime","text":"

      Bases: BaseUniversalTransform

      Records the amount of time a run spends in the running state.

      Source code in prefect/server/orchestration/global_policy.py
      class IncrementRunTime(BaseUniversalTransform):\n    \"\"\"\n    Records the amount of time a run spends in the running state.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # if exiting a running state...\n        if context.initial_state and context.initial_state.is_running():\n            # increment the run time by the time spent in the previous state\n            context.run.total_run_time += (\n                context.proposed_state.timestamp - context.initial_state.timestamp\n            )\n
      "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.IncrementFlowRunCount","title":"IncrementFlowRunCount","text":"

      Bases: BaseUniversalTransform

      Records the number of times a run enters a running state. For use with retries.

      Source code in prefect/server/orchestration/global_policy.py
      class IncrementFlowRunCount(BaseUniversalTransform):\n    \"\"\"\n    Records the number of times a run enters a running state. For use with retries.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # if entering a running state...\n        if context.proposed_state.is_running():\n            # do not increment the run count if resuming a paused flow\n            api_version = context.parameters.get(\"api-version\", None)\n            if api_version is None or api_version >= Version(\"0.8.4\"):\n                if context.run.empirical_policy.resuming:\n                    return\n\n            # increment the run count\n            context.run.run_count += 1\n
      "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.RemoveResumingIndicator","title":"RemoveResumingIndicator","text":"

      Bases: BaseUniversalTransform

      Removes the indicator on a flow run that marks it as resuming.

      Source code in prefect/server/orchestration/global_policy.py
      class RemoveResumingIndicator(BaseUniversalTransform):\n    \"\"\"\n    Removes the indicator on a flow run that marks it as resuming.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        proposed_state = context.proposed_state\n\n        api_version = context.parameters.get(\"api-version\", None)\n        if api_version is None or api_version >= Version(\"0.8.4\"):\n            if proposed_state.is_running() or proposed_state.is_final():\n                if context.run.empirical_policy.resuming:\n                    updated_policy = context.run.empirical_policy.dict()\n                    updated_policy[\"resuming\"] = False\n                    context.run.empirical_policy = FlowRunPolicy(**updated_policy)\n
      "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.IncrementTaskRunCount","title":"IncrementTaskRunCount","text":"

      Bases: BaseUniversalTransform

      Records the number of times a run enters a running state. For use with retries.

      Source code in prefect/server/orchestration/global_policy.py
      class IncrementTaskRunCount(BaseUniversalTransform):\n    \"\"\"\n    Records the number of times a run enters a running state. For use with retries.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # if entering a running state...\n        if context.proposed_state.is_running():\n            # increment the run count\n            context.run.run_count += 1\n
      "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetExpectedStartTime","title":"SetExpectedStartTime","text":"

      Bases: BaseUniversalTransform

      Estimates the time a state is expected to start running if not set.

      For scheduled states, this estimate is simply the scheduled time. For other states, this is set to the time the proposed state was created by Prefect.

      Source code in prefect/server/orchestration/global_policy.py
      class SetExpectedStartTime(BaseUniversalTransform):\n    \"\"\"\n    Estimates the time a state is expected to start running if not set.\n\n    For scheduled states, this estimate is simply the scheduled time. For other states,\n    this is set to the time the proposed state was created by Prefect.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # set expected start time if this is the first state\n        if not context.run.expected_start_time:\n            if context.proposed_state.is_scheduled():\n                context.run.expected_start_time = (\n                    context.proposed_state.state_details.scheduled_time\n                )\n            else:\n                context.run.expected_start_time = context.proposed_state.timestamp\n
      "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetNextScheduledStartTime","title":"SetNextScheduledStartTime","text":"

      Bases: BaseUniversalTransform

      Records the scheduled time on a run.

      When a run enters a scheduled state, run.next_scheduled_start_time is set to the state's scheduled time. When leaving a scheduled state, run.next_scheduled_start_time is unset.

      Source code in prefect/server/orchestration/global_policy.py
      class SetNextScheduledStartTime(BaseUniversalTransform):\n    \"\"\"\n    Records the scheduled time on a run.\n\n    When a run enters a scheduled state, `run.next_scheduled_start_time` is set to\n    the state's scheduled time. When leaving a scheduled state,\n    `run.next_scheduled_start_time` is unset.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # remove the next scheduled start time if exiting a scheduled state\n        if context.initial_state and context.initial_state.is_scheduled():\n            context.run.next_scheduled_start_time = None\n\n        # set next scheduled start time if entering a scheduled state\n        if context.proposed_state.is_scheduled():\n            context.run.next_scheduled_start_time = (\n                context.proposed_state.state_details.scheduled_time\n            )\n
      "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.UpdateSubflowParentTask","title":"UpdateSubflowParentTask","text":"

      Bases: BaseUniversalTransform

      Whenever a subflow changes state, it must update its parent task run's state.

      Source code in prefect/server/orchestration/global_policy.py
      class UpdateSubflowParentTask(BaseUniversalTransform):\n    \"\"\"\n    Whenever a subflow changes state, it must update its parent task run's state.\n    \"\"\"\n\n    async def after_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # only applies to flow runs with a parent task run id\n        if context.run.parent_task_run_id is not None:\n            # avoid mutation of the flow run state\n            subflow_parent_task_state = context.validated_state.copy(\n                reset_fields=True,\n                include={\n                    \"type\",\n                    \"timestamp\",\n                    \"name\",\n                    \"message\",\n                    \"state_details\",\n                    \"data\",\n                },\n            )\n\n            # set the task's \"child flow run id\" to be the subflow run id\n            subflow_parent_task_state.state_details.child_flow_run_id = context.run.id\n\n            await models.task_runs.set_task_run_state(\n                session=context.session,\n                task_run_id=context.run.parent_task_run_id,\n                state=subflow_parent_task_state,\n                force=True,\n            )\n
      "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.UpdateSubflowStateDetails","title":"UpdateSubflowStateDetails","text":"

      Bases: BaseUniversalTransform

Updates a child subflow state's references to the corresponding tracking task run id in the parent flow run.

      Source code in prefect/server/orchestration/global_policy.py
      class UpdateSubflowStateDetails(BaseUniversalTransform):\n    \"\"\"\n    Update a child subflow state's references to a corresponding tracking task run id\n    in the parent flow run\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # only applies to flow runs with a parent task run id\n        if context.run.parent_task_run_id is not None:\n            context.proposed_state.state_details.task_run_id = (\n                context.run.parent_task_run_id\n            )\n
      "},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.UpdateStateDetails","title":"UpdateStateDetails","text":"

      Bases: BaseUniversalTransform

Updates a state's references to the corresponding flow or task run.

      Source code in prefect/server/orchestration/global_policy.py
      class UpdateStateDetails(BaseUniversalTransform):\n    \"\"\"\n    Update a state's references to a corresponding flow- or task- run.\n    \"\"\"\n\n    async def before_transition(\n        self,\n        context: OrchestrationContext,\n    ) -> None:\n        if self.nullified_transition():\n            return\n\n        if isinstance(context, FlowOrchestrationContext):\n            flow_run = await context.flow_run()\n            context.proposed_state.state_details.flow_run_id = flow_run.id\n\n        elif isinstance(context, TaskOrchestrationContext):\n            task_run = await context.task_run()\n            context.proposed_state.state_details.flow_run_id = task_run.flow_run_id\n            context.proposed_state.state_details.task_run_id = task_run.id\n
      "},{"location":"api-ref/server/orchestration/policies/","title":"server.orchestration.policies","text":""},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies","title":"prefect.server.orchestration.policies","text":"

      Policies are collections of orchestration rules and transforms.

Prefect implements (most) orchestration with logic that governs a Prefect flow or task changing state. Policies organize orchestration logic both to provide an ordering mechanism and to provide observability into the orchestration process.

While Prefect's orchestration rules can gracefully run independently of one another, ordering can still have an impact on the observed behavior of the system. For example, it makes no sense to secure a concurrency slot for a run if a cached state exists. Furthermore, policies provide a mechanism to configure and observe exactly what logic will fire against a transition.
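A hedged sketch of how a policy groups rules in priority order, following the BaseOrchestrationPolicy interface documented below. The rule classes reused here come from core_policy as shown earlier on this page; the ordering is illustrative rather than the shipped flow policy.

from prefect.server.orchestration.core_policy import (
    PreventDuplicateTransitions,
    PreventPendingTransitions,
)
from prefect.server.orchestration.policies import BaseOrchestrationPolicy


class MinimalFlowPolicy(BaseOrchestrationPolicy):
    @staticmethod
    def priority():
        # Rules are evaluated in this order for each governed transition.
        return [
            PreventDuplicateTransitions,
            PreventPendingTransitions,
        ]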

      "},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies.BaseOrchestrationPolicy","title":"BaseOrchestrationPolicy","text":"

      Bases: ABC

      An abstract base class used to organize orchestration rules in priority order.

      Different collections of orchestration rules might be used to govern various kinds of transitions. For example, flow-run states and task-run states might require different orchestration logic.

      Source code in prefect/server/orchestration/policies.py
      class BaseOrchestrationPolicy(ABC):\n    \"\"\"\n    An abstract base class used to organize orchestration rules in priority order.\n\n    Different collections of orchestration rules might be used to govern various kinds\n    of transitions. For example, flow-run states and task-run states might require\n    different orchestration logic.\n    \"\"\"\n\n    @staticmethod\n    @abstractmethod\n    def priority():\n        \"\"\"\n        A list of orchestration rules in priority order.\n        \"\"\"\n\n        return []\n\n    @classmethod\n    def compile_transition_rules(cls, from_state=None, to_state=None):\n        \"\"\"\n        Returns rules in policy that are valid for the specified state transition.\n        \"\"\"\n\n        transition_rules = []\n        for rule in cls.priority():\n            if from_state in rule.FROM_STATES and to_state in rule.TO_STATES:\n                transition_rules.append(rule)\n        return transition_rules\n
      "},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies.BaseOrchestrationPolicy.priority","title":"priority abstractmethod staticmethod","text":"

      A list of orchestration rules in priority order.

      Source code in prefect/server/orchestration/policies.py
      @staticmethod\n@abstractmethod\ndef priority():\n    \"\"\"\n    A list of orchestration rules in priority order.\n    \"\"\"\n\n    return []\n
      "},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies.BaseOrchestrationPolicy.compile_transition_rules","title":"compile_transition_rules classmethod","text":"

      Returns rules in policy that are valid for the specified state transition.

      Source code in prefect/server/orchestration/policies.py
      @classmethod\ndef compile_transition_rules(cls, from_state=None, to_state=None):\n    \"\"\"\n    Returns rules in policy that are valid for the specified state transition.\n    \"\"\"\n\n    transition_rules = []\n    for rule in cls.priority():\n        if from_state in rule.FROM_STATES and to_state in rule.TO_STATES:\n            transition_rules.append(rule)\n    return transition_rules\n
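As a quick illustration of compile_transition_rules, the sketch below filters a tiny policy down to the rules that apply to a RUNNING -> PENDING transition. The tiny policy is illustrative only, and the StateType import path is assumed to match the server schemas used elsewhere on this page.

from prefect.server.orchestration.core_policy import PreventPendingTransitions
from prefect.server.orchestration.policies import BaseOrchestrationPolicy
from prefect.server.schemas.states import StateType


class ExamplePolicy(BaseOrchestrationPolicy):
    @staticmethod
    def priority():
        return [PreventPendingTransitions]


# PreventPendingTransitions lists RUNNING in FROM_STATES and PENDING in TO_STATES,
# so it is the only rule returned for this transition.
matching = ExamplePolicy.compile_transition_rules(
    from_state=StateType.RUNNING,
    to_state=StateType.PENDING,
)
print([rule.__name__ for rule in matching])  # ['PreventPendingTransitions']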
      "},{"location":"api-ref/server/orchestration/rules/","title":"server.orchestration.rules","text":""},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules","title":"prefect.server.orchestration.rules","text":"

      Prefect's flow and task-run orchestration machinery.

This module contains all the core concepts necessary to implement Prefect's state orchestration engine. These states correspond to intuitive descriptions of all the points at which a Prefect flow or task can observe user code executing and intervene, if necessary. A detailed description of states can be found in our concept documentation.

Prefect's orchestration engine operates under the assumption that no governed user code will execute without first requesting that the Prefect REST API validate a change in state and record metadata about the run. With all attempts to run user code being checked against a Prefect instance, the Prefect REST API database becomes the unambiguous source of truth for managing the execution of complex interacting workflows. Orchestration rules can be implemented as discrete units of logic that operate against each state transition and can be fully observable, extensible, and customizable -- all without needing to store or parse a single line of user code.
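To make the idea of a discrete, self-contained rule concrete, here is a hedged sketch of a custom BaseOrchestrationRule modeled on the rules shown earlier on this page. The class and its behavior are illustrative only; the import paths follow the server modules documented here and are assumptions rather than a guaranteed public extension API.

from typing import Optional

from prefect.server.orchestration.rules import (
    BaseOrchestrationRule,
    OrchestrationContext,
)
from prefect.server.schemas import states
from prefect.server.schemas.states import StateType


class RejectExampleTransition(BaseOrchestrationRule):
    """Illustrative only: reject any attempt to cancel a run that is still scheduled."""

    FROM_STATES = [StateType.SCHEDULED]
    TO_STATES = [StateType.CANCELLED]

    async def before_transition(
        self,
        initial_state: Optional[states.State],
        proposed_state: Optional[states.State],
        context: OrchestrationContext,
    ) -> None:
        # state=None leaves the run in its current state and returns it to the caller.
        await self.reject_transition(
            state=None,
            reason="This example policy does not allow scheduled runs to be cancelled.",
        )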

      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext","title":"OrchestrationContext","text":"

      Bases: PrefectBaseModel

      A container for a state transition, governed by orchestration rules.

      Note

An OrchestrationContext should not be instantiated directly; instead, use the flow- or task-specific subclasses, FlowOrchestrationContext and TaskOrchestrationContext.

      When a flow- or task- run attempts to change state, Prefect REST API has an opportunity to decide whether this transition can proceed. All the relevant information associated with the state transition is stored in an OrchestrationContext, which is subsequently governed by nested orchestration rules implemented using the BaseOrchestrationRule ABC.

OrchestrationContext introduces the concept of a state being None in the context of an intended state transition. An initial state can be None if a run is attempting to set a state for the first time. The proposed state might be None if a rule governing the transition determines that no state change should occur at all and nothing is written to the database.

Attributes:

- session (Optional[Union[Session, AsyncSession]]): a SQLAlchemy database session
- initial_state (Optional[State]): the initial state of a run
- proposed_state (Optional[State]): the proposed state a run is transitioning into
- validated_state (Optional[State]): a proposed state that has been committed to the database
- rule_signature (List[str]): a record of rules that have fired on entry into a managed context, currently only used for debugging purposes
- finalization_signature (List[str]): a record of rules that have fired on exit from a managed context, currently only used for debugging purposes
- response_status (SetStateStatus): a SetStateStatus object used to build the API response
- response_details (StateResponseDetails): a StateResponseDetails object used to build the API response

Parameters:

- session: a SQLAlchemy database session (required)
- initial_state: the initial state of a run (required)
- proposed_state: the proposed state a run is transitioning into (required)

Source code in prefect/server/orchestration/rules.py
      class OrchestrationContext(PrefectBaseModel):\n    \"\"\"\n    A container for a state transition, governed by orchestration rules.\n\n    Note:\n        An `OrchestrationContext` should not be instantiated directly, instead\n        use the flow- or task- specific subclasses, `FlowOrchestrationContext` and\n        `TaskOrchestrationContext`.\n\n    When a flow- or task- run attempts to change state, Prefect REST API has an opportunity\n    to decide whether this transition can proceed. All the relevant information\n    associated with the state transition is stored in an `OrchestrationContext`,\n    which is subsequently governed by nested orchestration rules implemented using\n    the `BaseOrchestrationRule` ABC.\n\n    `OrchestrationContext` introduces the concept of a state being `None` in the\n    context of an intended state transition. An initial state can be `None` if a run\n    is is attempting to set a state for the first time. The proposed state might be\n    `None` if a rule governing the transition determines that no state change\n    should occur at all and nothing is written to the database.\n\n    Attributes:\n        session: a SQLAlchemy database session\n        initial_state: the initial state of a run\n        proposed_state: the proposed state a run is transitioning into\n        validated_state: a proposed state that has committed to the database\n        rule_signature: a record of rules that have fired on entry into a\n            managed context, currently only used for debugging purposes\n        finalization_signature: a record of rules that have fired on exit from a\n            managed context, currently only used for debugging purposes\n        response_status: a SetStateStatus object used to build the API response\n        response_details:a StateResponseDetails object use to build the API response\n\n    Args:\n        session: a SQLAlchemy database session\n        initial_state: the initial state of a run\n        proposed_state: the proposed state a run is transitioning into\n    \"\"\"\n\n    class Config:\n        arbitrary_types_allowed = True\n\n    session: Optional[Union[sa.orm.Session, AsyncSession]] = ...\n    initial_state: Optional[states.State] = ...\n    proposed_state: Optional[states.State] = ...\n    validated_state: Optional[states.State]\n    rule_signature: List[str] = Field(default_factory=list)\n    finalization_signature: List[str] = Field(default_factory=list)\n    response_status: SetStateStatus = Field(default=SetStateStatus.ACCEPT)\n    response_details: StateResponseDetails = Field(default_factory=StateAcceptDetails)\n    orchestration_error: Optional[Exception] = Field(default=None)\n    parameters: Dict[Any, Any] = Field(default_factory=dict)\n\n    @property\n    def initial_state_type(self) -> Optional[states.StateType]:\n        \"\"\"The state type of `self.initial_state` if it exists.\"\"\"\n\n        return self.initial_state.type if self.initial_state else None\n\n    @property\n    def proposed_state_type(self) -> Optional[states.StateType]:\n        \"\"\"The state type of `self.proposed_state` if it exists.\"\"\"\n\n        return self.proposed_state.type if self.proposed_state else None\n\n    @property\n    def validated_state_type(self) -> Optional[states.StateType]:\n        \"\"\"The state type of `self.validated_state` if it exists.\"\"\"\n        return self.validated_state.type if self.validated_state else None\n\n    def safe_copy(self):\n        \"\"\"\n        Creates a mostly-mutation-safe copy for 
use in orchestration rules.\n\n        Orchestration rules govern state transitions using information stored in\n        an `OrchestrationContext`. However, mutating objects stored on the context\n        directly can have unintended side-effects. To guard against this,\n        `self.safe_copy` can be used to pass information to orchestration rules\n        without risking mutation.\n\n        Returns:\n            A mutation-safe copy of the `OrchestrationContext`\n        \"\"\"\n\n        safe_copy = self.copy()\n\n        safe_copy.initial_state = (\n            self.initial_state.copy() if self.initial_state else None\n        )\n        safe_copy.proposed_state = (\n            self.proposed_state.copy() if self.proposed_state else None\n        )\n        safe_copy.validated_state = (\n            self.validated_state.copy() if self.validated_state else None\n        )\n        safe_copy.parameters = self.parameters.copy()\n        return safe_copy\n\n    def entry_context(self):\n        \"\"\"\n        A convenience method that generates input parameters for orchestration rules.\n\n        An `OrchestrationContext` defines a state transition that is managed by\n        orchestration rules which can fire hooks before a transition has been committed\n        to the database. These hooks have a consistent interface which can be generated\n        with this method.\n        \"\"\"\n\n        safe_context = self.safe_copy()\n        return safe_context.initial_state, safe_context.proposed_state, safe_context\n\n    def exit_context(self):\n        \"\"\"\n        A convenience method that generates input parameters for orchestration rules.\n\n        An `OrchestrationContext` defines a state transition that is managed by\n        orchestration rules which can fire hooks after a transition has been committed\n        to the database. These hooks have a consistent interface which can be generated\n        with this method.\n        \"\"\"\n\n        safe_context = self.safe_copy()\n        return safe_context.initial_state, safe_context.validated_state, safe_context\n
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.initial_state_type","title":"initial_state_type: Optional[states.StateType] property","text":"

      The state type of self.initial_state if it exists.

      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.proposed_state_type","title":"proposed_state_type: Optional[states.StateType] property","text":"

      The state type of self.proposed_state if it exists.

      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.validated_state_type","title":"validated_state_type: Optional[states.StateType] property","text":"

      The state type of self.validated_state if it exists.

      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.safe_copy","title":"safe_copy","text":"

      Creates a mostly-mutation-safe copy for use in orchestration rules.

      Orchestration rules govern state transitions using information stored in an OrchestrationContext. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, self.safe_copy can be used to pass information to orchestration rules without risking mutation.

Returns:

A mutation-safe copy of the OrchestrationContext

      Source code in prefect/server/orchestration/rules.py
      def safe_copy(self):\n    \"\"\"\n    Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n    Orchestration rules govern state transitions using information stored in\n    an `OrchestrationContext`. However, mutating objects stored on the context\n    directly can have unintended side-effects. To guard against this,\n    `self.safe_copy` can be used to pass information to orchestration rules\n    without risking mutation.\n\n    Returns:\n        A mutation-safe copy of the `OrchestrationContext`\n    \"\"\"\n\n    safe_copy = self.copy()\n\n    safe_copy.initial_state = (\n        self.initial_state.copy() if self.initial_state else None\n    )\n    safe_copy.proposed_state = (\n        self.proposed_state.copy() if self.proposed_state else None\n    )\n    safe_copy.validated_state = (\n        self.validated_state.copy() if self.validated_state else None\n    )\n    safe_copy.parameters = self.parameters.copy()\n    return safe_copy\n
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.entry_context","title":"entry_context","text":"

      A convenience method that generates input parameters for orchestration rules.

      An OrchestrationContext defines a state transition that is managed by orchestration rules which can fire hooks before a transition has been committed to the database. These hooks have a consistent interface which can be generated with this method.

      Source code in prefect/server/orchestration/rules.py
      def entry_context(self):\n    \"\"\"\n    A convenience method that generates input parameters for orchestration rules.\n\n    An `OrchestrationContext` defines a state transition that is managed by\n    orchestration rules which can fire hooks before a transition has been committed\n    to the database. These hooks have a consistent interface which can be generated\n    with this method.\n    \"\"\"\n\n    safe_context = self.safe_copy()\n    return safe_context.initial_state, safe_context.proposed_state, safe_context\n
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.exit_context","title":"exit_context","text":"

      A convenience method that generates input parameters for orchestration rules.

      An OrchestrationContext defines a state transition that is managed by orchestration rules which can fire hooks after a transition has been committed to the database. These hooks have a consistent interface which can be generated with this method.

      Source code in prefect/server/orchestration/rules.py
      def exit_context(self):\n    \"\"\"\n    A convenience method that generates input parameters for orchestration rules.\n\n    An `OrchestrationContext` defines a state transition that is managed by\n    orchestration rules which can fire hooks after a transition has been committed\n    to the database. These hooks have a consistent interface which can be generated\n    with this method.\n    \"\"\"\n\n    safe_context = self.safe_copy()\n    return safe_context.initial_state, safe_context.validated_state, safe_context\n
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext","title":"FlowOrchestrationContext","text":"

      Bases: OrchestrationContext

      A container for a flow run state transition, governed by orchestration rules.

      When a flow- run attempts to change state, Prefect REST API has an opportunity to decide whether this transition can proceed. All the relevant information associated with the state transition is stored in an OrchestrationContext, which is subsequently governed by nested orchestration rules implemented using the BaseOrchestrationRule ABC.

FlowOrchestrationContext introduces the concept of a state being None in the context of an intended state transition. An initial state can be None if a run is attempting to set a state for the first time. The proposed state might be None if a rule governing the transition determines that no state change should occur at all and nothing is written to the database.

Attributes:

- session: a SQLAlchemy database session
- run (Any): the flow run attempting to change state
- initial_state (Any): the initial state of the run
- proposed_state (Any): the proposed state the run is transitioning into
- validated_state (Any): a proposed state that has been committed to the database
- rule_signature (Any): a record of rules that have fired on entry into a managed context, currently only used for debugging purposes
- finalization_signature (Any): a record of rules that have fired on exit from a managed context, currently only used for debugging purposes
- response_status (Any): a SetStateStatus object used to build the API response
- response_details (Any): a StateResponseDetails object used to build the API response

Parameters:

- session: a SQLAlchemy database session (required)
- run: the flow run attempting to change state (required)
- initial_state: the initial state of a run (required)
- proposed_state: the proposed state a run is transitioning into (required)

Source code in prefect/server/orchestration/rules.py
      class FlowOrchestrationContext(OrchestrationContext):\n    \"\"\"\n    A container for a flow run state transition, governed by orchestration rules.\n\n    When a flow- run attempts to change state, Prefect REST API has an opportunity\n    to decide whether this transition can proceed. All the relevant information\n    associated with the state transition is stored in an `OrchestrationContext`,\n    which is subsequently governed by nested orchestration rules implemented using\n    the `BaseOrchestrationRule` ABC.\n\n    `FlowOrchestrationContext` introduces the concept of a state being `None` in the\n    context of an intended state transition. An initial state can be `None` if a run\n    is is attempting to set a state for the first time. The proposed state might be\n    `None` if a rule governing the transition determines that no state change\n    should occur at all and nothing is written to the database.\n\n    Attributes:\n        session: a SQLAlchemy database session\n        run: the flow run attempting to change state\n        initial_state: the initial state of the run\n        proposed_state: the proposed state the run is transitioning into\n        validated_state: a proposed state that has committed to the database\n        rule_signature: a record of rules that have fired on entry into a\n            managed context, currently only used for debugging purposes\n        finalization_signature: a record of rules that have fired on exit from a\n            managed context, currently only used for debugging purposes\n        response_status: a SetStateStatus object used to build the API response\n        response_details:a StateResponseDetails object use to build the API response\n\n    Args:\n        session: a SQLAlchemy database session\n        run: the flow run attempting to change state\n        initial_state: the initial state of a run\n        proposed_state: the proposed state a run is transitioning into\n    \"\"\"\n\n    # run: db.FlowRun = ...\n    run: Any = ...\n\n    @inject_db\n    async def validate_proposed_state(\n        self,\n        db: PrefectDBInterface,\n    ):\n        \"\"\"\n        Validates a proposed state by committing it to the database.\n\n        After the `FlowOrchestrationContext` is governed by orchestration rules, the\n        proposed state can be validated: the proposed state is added to the current\n        SQLAlchemy session and is flushed. `self.validated_state` set to the flushed\n        state. 
The state on the run is set to the validated state as well.\n\n        If the proposed state is `None` when this method is called, no state will be\n        written and `self.validated_state` will be set to the run's current state.\n\n        Returns:\n            None\n        \"\"\"\n        # (circular import)\n        from prefect.server.api.server import is_client_retryable_exception\n\n        try:\n            await self._validate_proposed_state()\n            return\n        except Exception as exc:\n            logger.exception(\"Encountered error during state validation\")\n            self.proposed_state = None\n\n            if is_client_retryable_exception(exc):\n                # Do not capture retryable database exceptions, this exception will be\n                # raised as a 503 in the API layer\n                raise\n\n            reason = f\"Error validating state: {exc!r}\"\n            self.response_status = SetStateStatus.ABORT\n            self.response_details = StateAbortDetails(reason=reason)\n\n    @inject_db\n    async def _validate_proposed_state(\n        self,\n        db: PrefectDBInterface,\n    ):\n        if self.proposed_state is None:\n            validated_orm_state = self.run.state\n            # We cannot access `self.run.state.data` directly for unknown reasons\n            state_data = (\n                (\n                    await artifacts.read_artifact(\n                        self.session, self.run.state.result_artifact_id\n                    )\n                ).data\n                if self.run.state.result_artifact_id\n                else None\n            )\n        else:\n            state_payload = self.proposed_state.dict(shallow=True)\n            state_data = state_payload.pop(\"data\", None)\n\n            if state_data is not None and not (\n                isinstance(state_data, dict) and state_data.get(\"type\") == \"unpersisted\"\n            ):\n                state_result_artifact = core.Artifact.from_result(state_data)\n                state_result_artifact.flow_run_id = self.run.id\n                await artifacts.create_artifact(self.session, state_result_artifact)\n                state_payload[\"result_artifact_id\"] = state_result_artifact.id\n\n            validated_orm_state = db.FlowRunState(\n                flow_run_id=self.run.id,\n                **state_payload,\n            )\n\n        self.session.add(validated_orm_state)\n        self.run.set_state(validated_orm_state)\n\n        await self.session.flush()\n        if validated_orm_state:\n            self.validated_state = states.State.from_orm_without_result(\n                validated_orm_state, with_data=state_data\n            )\n        else:\n            self.validated_state = None\n\n    def safe_copy(self):\n        \"\"\"\n        Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n        Orchestration rules govern state transitions using information stored in\n        an `OrchestrationContext`. However, mutating objects stored on the context\n        directly can have unintended side-effects. 
To guard against this,\n        `self.safe_copy` can be used to pass information to orchestration rules\n        without risking mutation.\n\n        Note:\n            `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n        Returns:\n            A mutation-safe copy of `FlowOrchestrationContext`\n        \"\"\"\n\n        return super().safe_copy()\n\n    @property\n    def run_settings(self) -> Dict:\n        \"\"\"Run-level settings used to orchestrate the state transition.\"\"\"\n\n        return self.run.empirical_policy\n\n    async def task_run(self):\n        return None\n\n    async def flow_run(self):\n        return self.run\n
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext.run_settings","title":"run_settings: Dict property","text":"

      Run-level settings used to orchestrate the state transition.

      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext.validate_proposed_state","title":"validate_proposed_state async","text":"

      Validates a proposed state by committing it to the database.

After the FlowOrchestrationContext is governed by orchestration rules, the proposed state can be validated: the proposed state is added to the current SQLAlchemy session and is flushed. self.validated_state is set to the flushed state. The state on the run is set to the validated state as well.

      If the proposed state is None when this method is called, no state will be written and self.validated_state will be set to the run's current state.

      Returns:

      Type Description

      None

      Source code in prefect/server/orchestration/rules.py
      @inject_db\nasync def validate_proposed_state(\n    self,\n    db: PrefectDBInterface,\n):\n    \"\"\"\n    Validates a proposed state by committing it to the database.\n\n    After the `FlowOrchestrationContext` is governed by orchestration rules, the\n    proposed state can be validated: the proposed state is added to the current\n    SQLAlchemy session and is flushed. `self.validated_state` set to the flushed\n    state. The state on the run is set to the validated state as well.\n\n    If the proposed state is `None` when this method is called, no state will be\n    written and `self.validated_state` will be set to the run's current state.\n\n    Returns:\n        None\n    \"\"\"\n    # (circular import)\n    from prefect.server.api.server import is_client_retryable_exception\n\n    try:\n        await self._validate_proposed_state()\n        return\n    except Exception as exc:\n        logger.exception(\"Encountered error during state validation\")\n        self.proposed_state = None\n\n        if is_client_retryable_exception(exc):\n            # Do not capture retryable database exceptions, this exception will be\n            # raised as a 503 in the API layer\n            raise\n\n        reason = f\"Error validating state: {exc!r}\"\n        self.response_status = SetStateStatus.ABORT\n        self.response_details = StateAbortDetails(reason=reason)\n
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext.safe_copy","title":"safe_copy","text":"

      Creates a mostly-mutation-safe copy for use in orchestration rules.

      Orchestration rules govern state transitions using information stored in an OrchestrationContext. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, self.safe_copy can be used to pass information to orchestration rules without risking mutation.

      Note

self.run is an ORM model and is unsafe to mutate even when copied

      Returns:

      Type Description

      A mutation-safe copy of FlowOrchestrationContext

      Source code in prefect/server/orchestration/rules.py
      def safe_copy(self):\n    \"\"\"\n    Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n    Orchestration rules govern state transitions using information stored in\n    an `OrchestrationContext`. However, mutating objects stored on the context\n    directly can have unintended side-effects. To guard against this,\n    `self.safe_copy` can be used to pass information to orchestration rules\n    without risking mutation.\n\n    Note:\n        `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n    Returns:\n        A mutation-safe copy of `FlowOrchestrationContext`\n    \"\"\"\n\n    return super().safe_copy()\n
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext","title":"TaskOrchestrationContext","text":"

      Bases: OrchestrationContext

      A container for a task run state transition, governed by orchestration rules.

When a task run attempts to change state, the Prefect REST API has an opportunity to decide whether this transition can proceed. All the relevant information associated with the state transition is stored in an OrchestrationContext, which is subsequently governed by nested orchestration rules implemented using the BaseOrchestrationRule ABC.

TaskOrchestrationContext introduces the concept of a state being None in the context of an intended state transition. An initial state can be None if a run is attempting to set a state for the first time. The proposed state might be None if a rule governing the transition determines that no state change should occur at all and nothing is written to the database.

      Attributes:

      Name Type Description session

      a SQLAlchemy database session

      run Any

      the task run attempting to change state

      initial_state Any

      the initial state of the run

      proposed_state Any

      the proposed state the run is transitioning into

      validated_state Any

a proposed state that has been committed to the database

      rule_signature Any

      a record of rules that have fired on entry into a managed context, currently only used for debugging purposes

      finalization_signature Any

      a record of rules that have fired on exit from a managed context, currently only used for debugging purposes

      response_status Any

      a SetStateStatus object used to build the API response

      response_details Any

a StateResponseDetails object used to build the API response

      Parameters:

      Name Type Description Default session

      a SQLAlchemy database session

      required run

      the task run attempting to change state

      required initial_state

      the initial state of a run

      required proposed_state

      the proposed state a run is transitioning into

      required Source code in prefect/server/orchestration/rules.py
      class TaskOrchestrationContext(OrchestrationContext):\n    \"\"\"\n    A container for a task run state transition, governed by orchestration rules.\n\n    When a task- run attempts to change state, Prefect REST API has an opportunity\n    to decide whether this transition can proceed. All the relevant information\n    associated with the state transition is stored in an `OrchestrationContext`,\n    which is subsequently governed by nested orchestration rules implemented using\n    the `BaseOrchestrationRule` ABC.\n\n    `TaskOrchestrationContext` introduces the concept of a state being `None` in the\n    context of an intended state transition. An initial state can be `None` if a run\n    is is attempting to set a state for the first time. The proposed state might be\n    `None` if a rule governing the transition determines that no state change\n    should occur at all and nothing is written to the database.\n\n    Attributes:\n        session: a SQLAlchemy database session\n        run: the task run attempting to change state\n        initial_state: the initial state of the run\n        proposed_state: the proposed state the run is transitioning into\n        validated_state: a proposed state that has committed to the database\n        rule_signature: a record of rules that have fired on entry into a\n            managed context, currently only used for debugging purposes\n        finalization_signature: a record of rules that have fired on exit from a\n            managed context, currently only used for debugging purposes\n        response_status: a SetStateStatus object used to build the API response\n        response_details:a StateResponseDetails object use to build the API response\n\n    Args:\n        session: a SQLAlchemy database session\n        run: the task run attempting to change state\n        initial_state: the initial state of a run\n        proposed_state: the proposed state a run is transitioning into\n    \"\"\"\n\n    # run: db.TaskRun = ...\n    run: Any = ...\n\n    @inject_db\n    async def validate_proposed_state(\n        self,\n        db: PrefectDBInterface,\n    ):\n        \"\"\"\n        Validates a proposed state by committing it to the database.\n\n        After the `TaskOrchestrationContext` is governed by orchestration rules, the\n        proposed state can be validated: the proposed state is added to the current\n        SQLAlchemy session and is flushed. `self.validated_state` set to the flushed\n        state. 
The state on the run is set to the validated state as well.\n\n        If the proposed state is `None` when this method is called, no state will be\n        written and `self.validated_state` will be set to the run's current state.\n\n        Returns:\n            None\n        \"\"\"\n        # (circular import)\n        from prefect.server.api.server import is_client_retryable_exception\n\n        try:\n            await self._validate_proposed_state()\n            return\n        except Exception as exc:\n            logger.exception(\"Encountered error during state validation\")\n            self.proposed_state = None\n\n            if is_client_retryable_exception(exc):\n                # Do not capture retryable database exceptions, this exception will be\n                # raised as a 503 in the API layer\n                raise\n\n            reason = f\"Error validating state: {exc!r}\"\n            self.response_status = SetStateStatus.ABORT\n            self.response_details = StateAbortDetails(reason=reason)\n\n    @inject_db\n    async def _validate_proposed_state(\n        self,\n        db: PrefectDBInterface,\n    ):\n        if self.proposed_state is None:\n            validated_orm_state = self.run.state\n            # We cannot access `self.run.state.data` directly for unknown reasons\n            state_data = (\n                (\n                    await artifacts.read_artifact(\n                        self.session, self.run.state.result_artifact_id\n                    )\n                ).data\n                if self.run.state.result_artifact_id\n                else None\n            )\n        else:\n            state_payload = self.proposed_state.dict(shallow=True)\n            state_data = state_payload.pop(\"data\", None)\n\n            if state_data is not None and not (\n                isinstance(state_data, dict) and state_data.get(\"type\") == \"unpersisted\"\n            ):\n                state_result_artifact = core.Artifact.from_result(state_data)\n                state_result_artifact.task_run_id = self.run.id\n\n                if self.run.flow_run_id is not None:\n                    flow_run = await self.flow_run()\n                    state_result_artifact.flow_run_id = flow_run.id\n\n                await artifacts.create_artifact(self.session, state_result_artifact)\n                state_payload[\"result_artifact_id\"] = state_result_artifact.id\n\n            validated_orm_state = db.TaskRunState(\n                task_run_id=self.run.id,\n                **state_payload,\n            )\n\n        self.session.add(validated_orm_state)\n        self.run.set_state(validated_orm_state)\n\n        await self.session.flush()\n        if validated_orm_state:\n            self.validated_state = states.State.from_orm_without_result(\n                validated_orm_state, with_data=state_data\n            )\n        else:\n            self.validated_state = None\n\n    def safe_copy(self):\n        \"\"\"\n        Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n        Orchestration rules govern state transitions using information stored in\n        an `OrchestrationContext`. However, mutating objects stored on the context\n        directly can have unintended side-effects. 
To guard against this,\n        `self.safe_copy` can be used to pass information to orchestration rules\n        without risking mutation.\n\n        Note:\n            `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n        Returns:\n            A mutation-safe copy of `TaskOrchestrationContext`\n        \"\"\"\n\n        return super().safe_copy()\n\n    @property\n    def run_settings(self) -> Dict:\n        \"\"\"Run-level settings used to orchestrate the state transition.\"\"\"\n\n        return self.run.empirical_policy\n\n    async def task_run(self):\n        return self.run\n\n    async def flow_run(self):\n        return await flow_runs.read_flow_run(\n            session=self.session,\n            flow_run_id=self.run.flow_run_id,\n        )\n
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext.run_settings","title":"run_settings: Dict property","text":"

      Run-level settings used to orchestrate the state transition.

      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext.validate_proposed_state","title":"validate_proposed_state async","text":"

      Validates a proposed state by committing it to the database.

After the TaskOrchestrationContext is governed by orchestration rules, the proposed state can be validated: the proposed state is added to the current SQLAlchemy session and is flushed. self.validated_state is set to the flushed state. The state on the run is set to the validated state as well.

      If the proposed state is None when this method is called, no state will be written and self.validated_state will be set to the run's current state.

      Returns:

      Type Description

      None

      Source code in prefect/server/orchestration/rules.py
      @inject_db\nasync def validate_proposed_state(\n    self,\n    db: PrefectDBInterface,\n):\n    \"\"\"\n    Validates a proposed state by committing it to the database.\n\n    After the `TaskOrchestrationContext` is governed by orchestration rules, the\n    proposed state can be validated: the proposed state is added to the current\n    SQLAlchemy session and is flushed. `self.validated_state` set to the flushed\n    state. The state on the run is set to the validated state as well.\n\n    If the proposed state is `None` when this method is called, no state will be\n    written and `self.validated_state` will be set to the run's current state.\n\n    Returns:\n        None\n    \"\"\"\n    # (circular import)\n    from prefect.server.api.server import is_client_retryable_exception\n\n    try:\n        await self._validate_proposed_state()\n        return\n    except Exception as exc:\n        logger.exception(\"Encountered error during state validation\")\n        self.proposed_state = None\n\n        if is_client_retryable_exception(exc):\n            # Do not capture retryable database exceptions, this exception will be\n            # raised as a 503 in the API layer\n            raise\n\n        reason = f\"Error validating state: {exc!r}\"\n        self.response_status = SetStateStatus.ABORT\n        self.response_details = StateAbortDetails(reason=reason)\n
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext.safe_copy","title":"safe_copy","text":"

      Creates a mostly-mutation-safe copy for use in orchestration rules.

      Orchestration rules govern state transitions using information stored in an OrchestrationContext. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, self.safe_copy can be used to pass information to orchestration rules without risking mutation.

      Note

self.run is an ORM model and is unsafe to mutate even when copied

      Returns:

      Type Description

      A mutation-safe copy of TaskOrchestrationContext

      Source code in prefect/server/orchestration/rules.py
      def safe_copy(self):\n    \"\"\"\n    Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n    Orchestration rules govern state transitions using information stored in\n    an `OrchestrationContext`. However, mutating objects stored on the context\n    directly can have unintended side-effects. To guard against this,\n    `self.safe_copy` can be used to pass information to orchestration rules\n    without risking mutation.\n\n    Note:\n        `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n    Returns:\n        A mutation-safe copy of `TaskOrchestrationContext`\n    \"\"\"\n\n    return super().safe_copy()\n
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule","title":"BaseOrchestrationRule","text":"

      Bases: AbstractAsyncContextManager

      An abstract base class used to implement a discrete piece of orchestration logic.

      An OrchestrationRule is a stateful context manager that directly governs a state transition. Complex orchestration is achieved by nesting multiple rules. Each rule runs against an OrchestrationContext that contains the transition details; this context is then passed to subsequent rules. The context can be modified by hooks that fire before and after a new state is validated and committed to the database. These hooks will fire as long as the state transition is considered \"valid\" and govern a transition by either modifying the proposed state before it is validated or by producing a side-effect.

A state transition occurs whenever a flow or task run changes state, prompting the Prefect REST API to decide whether this transition can proceed. The current state of the run is referred to as the \"initial state\", and the state a run is attempting to transition into is the \"proposed state\". Together, the initial state transitioning into the proposed state is the intended transition that is governed by these orchestration rules. After using rules to enter a runtime context, the OrchestrationContext will contain a proposed state that has been governed by each rule, and at that point can validate the proposed state and commit it to the database. The validated state will be set on the context as context.validated_state, and rules will call the self.after_transition hook upon exiting the managed context.

      Examples:

      Create a rule:\n\n>>> class BasicRule(BaseOrchestrationRule):\n>>>     # allowed initial state types\n>>>     FROM_STATES = [StateType.RUNNING]\n>>>     # allowed proposed state types\n>>>     TO_STATES = [StateType.COMPLETED, StateType.FAILED]\n>>>\n>>>     async def before_transition(initial_state, proposed_state, ctx):\n>>>         # side effects and proposed state mutation can happen here\n>>>         ...\n>>>\n>>>     async def after_transition(initial_state, validated_state, ctx):\n>>>         # operations on states that have been validated can happen here\n>>>         ...\n>>>\n>>>     async def cleanup(intitial_state, validated_state, ctx):\n>>>         # reverts side effects generated by `before_transition` if necessary\n>>>         ...\n\nUse a rule:\n\n>>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n>>> async with BasicRule(context, *intended_transition):\n>>>     # context.proposed_state has been governed by BasicRule\n>>>     ...\n\nUse multiple rules:\n\n>>> rules = [BasicRule, BasicRule]\n>>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n>>> async with contextlib.AsyncExitStack() as stack:\n>>>     for rule in rules:\n>>>         stack.enter_async_context(rule(context, *intended_transition))\n>>>\n>>>     # context.proposed_state has been governed by all rules\n>>>     ...\n

      Attributes:

      Name Type Description FROM_STATES Iterable

      list of valid initial state types this rule governs

      TO_STATES Iterable

      list of valid proposed state types this rule governs

      context

      the orchestration context

      from_state_type

      the state type a run is currently in

      to_state_type

      the intended proposed state type prior to any orchestration

      Parameters:

      Name Type Description Default context OrchestrationContext

      A FlowOrchestrationContext or TaskOrchestrationContext that is passed between rules

      required from_state_type Optional[StateType]

The state type of the initial state of a run; if this state type is not contained in FROM_STATES, no hooks will fire

      required to_state_type Optional[StateType]

The state type of the proposed state before orchestration; if this state type is not contained in TO_STATES, no hooks will fire

      required Source code in prefect/server/orchestration/rules.py
      class BaseOrchestrationRule(contextlib.AbstractAsyncContextManager):\n    \"\"\"\n    An abstract base class used to implement a discrete piece of orchestration logic.\n\n    An `OrchestrationRule` is a stateful context manager that directly governs a state\n    transition. Complex orchestration is achieved by nesting multiple rules.\n    Each rule runs against an `OrchestrationContext` that contains the transition\n    details; this context is then passed to subsequent rules. The context can be\n    modified by hooks that fire before and after a new state is validated and committed\n    to the database. These hooks will fire as long as the state transition is\n    considered \"valid\" and govern a transition by either modifying the proposed state\n    before it is validated or by producing a side-effect.\n\n    A state transition occurs whenever a flow- or task- run changes state, prompting\n    Prefect REST API to decide whether or not this transition can proceed. The current state of\n    the run is referred to as the \"initial state\", and the state a run is\n    attempting to transition into is the \"proposed state\". Together, the initial state\n    transitioning into the proposed state is the intended transition that is governed\n    by these orchestration rules. After using rules to enter a runtime context, the\n    `OrchestrationContext` will contain a proposed state that has been governed by\n    each rule, and at that point can validate the proposed state and commit it to\n    the database. The validated state will be set on the context as\n    `context.validated_state`, and rules will call the `self.after_transition` hook\n    upon exiting the managed context.\n\n    Examples:\n\n        Create a rule:\n\n        >>> class BasicRule(BaseOrchestrationRule):\n        >>>     # allowed initial state types\n        >>>     FROM_STATES = [StateType.RUNNING]\n        >>>     # allowed proposed state types\n        >>>     TO_STATES = [StateType.COMPLETED, StateType.FAILED]\n        >>>\n        >>>     async def before_transition(initial_state, proposed_state, ctx):\n        >>>         # side effects and proposed state mutation can happen here\n        >>>         ...\n        >>>\n        >>>     async def after_transition(initial_state, validated_state, ctx):\n        >>>         # operations on states that have been validated can happen here\n        >>>         ...\n        >>>\n        >>>     async def cleanup(intitial_state, validated_state, ctx):\n        >>>         # reverts side effects generated by `before_transition` if necessary\n        >>>         ...\n\n        Use a rule:\n\n        >>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n        >>> async with BasicRule(context, *intended_transition):\n        >>>     # context.proposed_state has been governed by BasicRule\n        >>>     ...\n\n        Use multiple rules:\n\n        >>> rules = [BasicRule, BasicRule]\n        >>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n        >>> async with contextlib.AsyncExitStack() as stack:\n        >>>     for rule in rules:\n        >>>         stack.enter_async_context(rule(context, *intended_transition))\n        >>>\n        >>>     # context.proposed_state has been governed by all rules\n        >>>     ...\n\n    Attributes:\n        FROM_STATES: list of valid initial state types this rule governs\n        TO_STATES: list of valid proposed state types this rule governs\n        context: the orchestration context\n        
from_state_type: the state type a run is currently in\n        to_state_type: the intended proposed state type prior to any orchestration\n\n    Args:\n        context: A `FlowOrchestrationContext` or `TaskOrchestrationContext` that is\n            passed between rules\n        from_state_type: The state type of the initial state of a run, if this\n            state type is not contained in `FROM_STATES`, no hooks will fire\n        to_state_type: The state type of the proposed state before orchestration, if\n            this state type is not contained in `TO_STATES`, no hooks will fire\n    \"\"\"\n\n    FROM_STATES: Iterable = []\n    TO_STATES: Iterable = []\n\n    def __init__(\n        self,\n        context: OrchestrationContext,\n        from_state_type: Optional[states.StateType],\n        to_state_type: Optional[states.StateType],\n    ):\n        self.context = context\n        self.from_state_type = from_state_type\n        self.to_state_type = to_state_type\n        self._invalid_on_entry = None\n\n    async def __aenter__(self) -> OrchestrationContext:\n        \"\"\"\n        Enter an async runtime context governed by this rule.\n\n        The `with` statement will bind a governed `OrchestrationContext` to the target\n        specified by the `as` clause. If the transition proposed by the\n        `OrchestrationContext` is considered invalid on entry, entering this context\n        will do nothing. Otherwise, `self.before_transition` will fire.\n        \"\"\"\n\n        if await self.invalid():\n            pass\n        else:\n            try:\n                entry_context = self.context.entry_context()\n                await self.before_transition(*entry_context)\n                self.context.rule_signature.append(str(self.__class__))\n            except Exception as before_transition_error:\n                reason = (\n                    f\"Aborting orchestration due to error in {self.__class__!r}:\"\n                    f\" !{before_transition_error!r}\"\n                )\n                logger.exception(\n                    f\"Error running before-transition hook in rule {self.__class__!r}:\"\n                    f\" !{before_transition_error!r}\"\n                )\n\n                self.context.proposed_state = None\n                self.context.response_status = SetStateStatus.ABORT\n                self.context.response_details = StateAbortDetails(reason=reason)\n                self.context.orchestration_error = before_transition_error\n\n        return self.context\n\n    async def __aexit__(\n        self,\n        exc_type: Optional[Type[BaseException]],\n        exc_val: Optional[BaseException],\n        exc_tb: Optional[TracebackType],\n    ) -> None:\n        \"\"\"\n        Exit the async runtime context governed by this rule.\n\n        One of three outcomes can happen upon exiting this rule's context depending on\n        the state of the rule. If the rule was found to be invalid on entry, nothing\n        happens. If the rule was valid on entry and continues to be valid on exit,\n        `self.after_transition` will fire. 
If the rule was valid on entry but invalid\n        on exit, the rule will \"fizzle\" and `self.cleanup` will fire in order to revert\n        any side-effects produced by `self.before_transition`.\n        \"\"\"\n\n        exit_context = self.context.exit_context()\n        if await self.invalid():\n            pass\n        elif await self.fizzled():\n            await self.cleanup(*exit_context)\n        else:\n            await self.after_transition(*exit_context)\n            self.context.finalization_signature.append(str(self.__class__))\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        \"\"\"\n        Implements a hook that can fire before a state is committed to the database.\n\n        This hook may produce side-effects or mutate the proposed state of a\n        transition using one of four methods: `self.reject_transition`,\n        `self.delay_transition`, `self.abort_transition`, and `self.rename_state`.\n\n        Note:\n            As currently implemented, the `before_transition` hook is not\n            perfectly isolated from mutating the transition. It is a standard instance\n            method that has access to `self`, and therefore `self.context`. This should\n            never be modified directly. Furthermore, `context.run` is an ORM model, and\n            mutating the run can also cause unintended writes to the database.\n\n        Args:\n            initial_state: The initial state of a transition\n            proposed_state: The proposed state of a transition\n            context: A safe copy of the `OrchestrationContext`, with the exception of\n                `context.run`, mutating this context will have no effect on the broader\n                orchestration environment.\n\n        Returns:\n            None\n        \"\"\"\n\n    async def after_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        \"\"\"\n        Implements a hook that can fire after a state is committed to the database.\n\n        Args:\n            initial_state: The initial state of a transition\n            validated_state: The governed state that has been committed to the database\n            context: A safe copy of the `OrchestrationContext`, with the exception of\n                `context.run`, mutating this context will have no effect on the broader\n                orchestration environment.\n\n        Returns:\n            None\n        \"\"\"\n\n    async def cleanup(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        \"\"\"\n        Implements a hook that can fire after a state is committed to the database.\n\n        The intended use of this method is to revert side-effects produced by\n        `self.before_transition` when the transition is found to be invalid on exit.\n        This allows multiple rules to be gracefully run in sequence, without logic that\n        keeps track of all other rules that might govern a transition.\n\n        Args:\n            initial_state: The initial state of a transition\n            validated_state: The governed state that has been committed to the database\n            context: A safe copy of the `OrchestrationContext`, 
with the exception of\n                `context.run`, mutating this context will have no effect on the broader\n                orchestration environment.\n\n        Returns:\n            None\n        \"\"\"\n\n    async def invalid(self) -> bool:\n        \"\"\"\n        Determines if a rule is invalid.\n\n        Invalid rules do nothing and no hooks fire upon entering or exiting a governed\n        context. Rules are invalid if the transition states types are not contained in\n        `self.FROM_STATES` and `self.TO_STATES`, or if the context is proposing\n        a transition that differs from the transition the rule was instantiated with.\n\n        Returns:\n            True if the rules in invalid, False otherwise.\n        \"\"\"\n        # invalid and fizzled states are mutually exclusive,\n        # `_invalid_on_entry` holds this statefulness\n        if self.from_state_type not in self.FROM_STATES:\n            self._invalid_on_entry = True\n        if self.to_state_type not in self.TO_STATES:\n            self._invalid_on_entry = True\n\n        if self._invalid_on_entry is None:\n            self._invalid_on_entry = await self.invalid_transition()\n        return self._invalid_on_entry\n\n    async def fizzled(self) -> bool:\n        \"\"\"\n        Determines if a rule is fizzled and side-effects need to be reverted.\n\n        Rules are fizzled if the transitions were valid on entry (thus firing\n        `self.before_transition`) but are invalid upon exiting the governed context,\n        most likely caused by another rule mutating the transition.\n\n        Returns:\n            True if the rule is fizzled, False otherwise.\n        \"\"\"\n\n        if self._invalid_on_entry:\n            return False\n        return await self.invalid_transition()\n\n    async def invalid_transition(self) -> bool:\n        \"\"\"\n        Determines if the transition proposed by the `OrchestrationContext` is invalid.\n\n        If the `OrchestrationContext` is attempting to manage a transition with this\n        rule that differs from the transition the rule was instantiated with, the\n        transition is considered to be invalid. Depending on the context, a rule with an\n        invalid transition is either \"invalid\" or \"fizzled\".\n\n        Returns:\n            True if the transition is invalid, False otherwise.\n        \"\"\"\n\n        initial_state_type = self.context.initial_state_type\n        proposed_state_type = self.context.proposed_state_type\n        return (self.from_state_type != initial_state_type) or (\n            self.to_state_type != proposed_state_type\n        )\n\n    async def reject_transition(self, state: Optional[states.State], reason: str):\n        \"\"\"\n        Rejects a proposed transition before the transition is validated.\n\n        This method will reject a proposed transition, mutating the proposed state to\n        the provided `state`. A reason for rejecting the transition is also passed on\n        to the `OrchestrationContext`. Rules that reject the transition will not fizzle,\n        despite the proposed state type changing.\n\n        Args:\n            state: The new proposed state. 
If `None`, the current run state will be\n                returned in the result instead.\n            reason: The reason for rejecting the transition\n        \"\"\"\n\n        # don't run if the transition is already validated\n        if self.context.validated_state:\n            raise RuntimeError(\"The transition is already validated\")\n\n        # the current state will be used if a new one is not provided\n        if state is None:\n            if self.from_state_type is None:\n                raise OrchestrationError(\n                    \"The current run has no state; this transition cannot be \"\n                    \"rejected without providing a new state.\"\n                )\n            self.to_state_type = None\n            self.context.proposed_state = None\n        else:\n            # a rule that mutates state should not fizzle itself\n            self.to_state_type = state.type\n            self.context.proposed_state = state\n\n        self.context.response_status = SetStateStatus.REJECT\n        self.context.response_details = StateRejectDetails(reason=reason)\n\n    async def delay_transition(\n        self,\n        delay_seconds: int,\n        reason: str,\n    ):\n        \"\"\"\n        Delays a proposed transition before the transition is validated.\n\n        This method will delay a proposed transition, setting the proposed state to\n        `None`, signaling to the `OrchestrationContext` that no state should be\n        written to the database. The number of seconds a transition should be delayed is\n        passed to the `OrchestrationContext`. A reason for delaying the transition is\n        also provided. Rules that delay the transition will not fizzle, despite the\n        proposed state type changing.\n\n        Args:\n            delay_seconds: The number of seconds the transition should be delayed\n            reason: The reason for delaying the transition\n        \"\"\"\n\n        # don't run if the transition is already validated\n        if self.context.validated_state:\n            raise RuntimeError(\"The transition is already validated\")\n\n        # a rule that mutates state should not fizzle itself\n        self.to_state_type = None\n        self.context.proposed_state = None\n        self.context.response_status = SetStateStatus.WAIT\n        self.context.response_details = StateWaitDetails(\n            delay_seconds=delay_seconds, reason=reason\n        )\n\n    async def abort_transition(self, reason: str):\n        \"\"\"\n        Aborts a proposed transition before the transition is validated.\n\n        This method will abort a proposed transition, expecting no further action to\n        occur for this run. The proposed state is set to `None`, signaling to the\n        `OrchestrationContext` that no state should be written to the database. A\n        reason for aborting the transition is also provided. 
Rules that abort the\n        transition will not fizzle, despite the proposed state type changing.\n\n        Args:\n            reason: The reason for aborting the transition\n        \"\"\"\n\n        # don't run if the transition is already validated\n        if self.context.validated_state:\n            raise RuntimeError(\"The transition is already validated\")\n\n        # a rule that mutates state should not fizzle itself\n        self.to_state_type = None\n        self.context.proposed_state = None\n        self.context.response_status = SetStateStatus.ABORT\n        self.context.response_details = StateAbortDetails(reason=reason)\n\n    async def rename_state(self, state_name):\n        \"\"\"\n        Sets the \"name\" attribute on a proposed state.\n\n        The name of a state is an annotation intended to provide rich, human-readable\n        context for how a run is progressing. This method only updates the name and not\n        the canonical state TYPE, and will not fizzle or invalidate any other rules\n        that might govern this state transition.\n        \"\"\"\n        if self.context.proposed_state is not None:\n            self.context.proposed_state.name = state_name\n\n    async def update_context_parameters(self, key, value):\n        \"\"\"\n        Updates the \"parameters\" dictionary attribute with the specified key-value pair.\n\n        This mechanism streamlines the process of passing messages and information\n        between orchestration rules if necessary and is simpler and more ephemeral than\n        message-passing via the database or some other side-effect. This mechanism can\n        be used to break up large rules for ease of testing or comprehension, but note\n        that any rules coupled this way (or any other way) are no longer independent and\n        the order in which they appear in the orchestration policy priority will matter.\n        \"\"\"\n\n        self.context.parameters.update({key: value})\n
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.before_transition","title":"before_transition async","text":"

      Implements a hook that can fire before a state is committed to the database.

      This hook may produce side-effects or mutate the proposed state of a transition using one of four methods: self.reject_transition, self.delay_transition, self.abort_transition, and self.rename_state.

      Note

As currently implemented, the before_transition hook is not perfectly isolated from mutating the transition. It is a standard instance method that has access to self, and therefore self.context, which should never be modified directly. Furthermore, context.run is an ORM model, and mutating the run can also cause unintended writes to the database.

      Parameters:

      Name Type Description Default initial_state Optional[State]

      The initial state of a transition

      required proposed_state Optional[State]

      The proposed state of a transition

      required context OrchestrationContext

A safe copy of the OrchestrationContext; with the exception of context.run, mutating this context will have no effect on the broader orchestration environment.

      required

      Returns:

      Type Description None

      None

      Source code in prefect/server/orchestration/rules.py
      async def before_transition(\n    self,\n    initial_state: Optional[states.State],\n    proposed_state: Optional[states.State],\n    context: OrchestrationContext,\n) -> None:\n    \"\"\"\n    Implements a hook that can fire before a state is committed to the database.\n\n    This hook may produce side-effects or mutate the proposed state of a\n    transition using one of four methods: `self.reject_transition`,\n    `self.delay_transition`, `self.abort_transition`, and `self.rename_state`.\n\n    Note:\n        As currently implemented, the `before_transition` hook is not\n        perfectly isolated from mutating the transition. It is a standard instance\n        method that has access to `self`, and therefore `self.context`. This should\n        never be modified directly. Furthermore, `context.run` is an ORM model, and\n        mutating the run can also cause unintended writes to the database.\n\n    Args:\n        initial_state: The initial state of a transition\n        proposed_state: The proposed state of a transition\n        context: A safe copy of the `OrchestrationContext`, with the exception of\n            `context.run`, mutating this context will have no effect on the broader\n            orchestration environment.\n\n    Returns:\n        None\n    \"\"\"\n
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.after_transition","title":"after_transition async","text":"

      Implements a hook that can fire after a state is committed to the database.

      Parameters:

      Name Type Description Default initial_state Optional[State]

      The initial state of a transition

      required validated_state Optional[State]

      The governed state that has been committed to the database

      required context OrchestrationContext

A safe copy of the OrchestrationContext; with the exception of context.run, mutating this context will have no effect on the broader orchestration environment.

      required

      Returns:

      Type Description None

      None

      Source code in prefect/server/orchestration/rules.py
      async def after_transition(\n    self,\n    initial_state: Optional[states.State],\n    validated_state: Optional[states.State],\n    context: OrchestrationContext,\n) -> None:\n    \"\"\"\n    Implements a hook that can fire after a state is committed to the database.\n\n    Args:\n        initial_state: The initial state of a transition\n        validated_state: The governed state that has been committed to the database\n        context: A safe copy of the `OrchestrationContext`, with the exception of\n            `context.run`, mutating this context will have no effect on the broader\n            orchestration environment.\n\n    Returns:\n        None\n    \"\"\"\n
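As an illustration only (not part of the Prefect source), here is a minimal sketch of a rule that uses after_transition for read-only bookkeeping once a state has been committed; the rule name and logger are hypothetical, and the import of the server states module is assumed:

```python
import logging
from typing import Optional

from prefect.server.orchestration.rules import (
    BaseOrchestrationRule,
    OrchestrationContext,
)
from prefect.server.schemas import states

audit_logger = logging.getLogger("orchestration.audit")  # hypothetical logger


class AuditTerminalStates(BaseOrchestrationRule):
    """Hypothetical rule: log every committed transition out of RUNNING."""

    FROM_STATES = [states.StateType.RUNNING]
    TO_STATES = [states.StateType.COMPLETED, states.StateType.FAILED]

    async def after_transition(
        self,
        initial_state: Optional[states.State],
        validated_state: Optional[states.State],
        context: OrchestrationContext,
    ) -> None:
        # The validated state has already been committed; this hook is for
        # bookkeeping and side effects, not for changing the outcome.
        if validated_state is not None:
            audit_logger.info(
                "Run %s finished in state %s", context.run.id, validated_state.type
            )
```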
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.cleanup","title":"cleanup async","text":"

      Implements a hook that can fire after a state is committed to the database.

      The intended use of this method is to revert side-effects produced by self.before_transition when the transition is found to be invalid on exit. This allows multiple rules to be gracefully run in sequence, without logic that keeps track of all other rules that might govern a transition.

      Parameters:

      Name Type Description Default initial_state Optional[State]

      The initial state of a transition

      required validated_state Optional[State]

      The governed state that has been committed to the database

      required context OrchestrationContext

A safe copy of the OrchestrationContext; with the exception of context.run, mutating this context will have no effect on the broader orchestration environment.

      required

      Returns:

      Type Description None

      None

      Source code in prefect/server/orchestration/rules.py
      async def cleanup(\n    self,\n    initial_state: Optional[states.State],\n    validated_state: Optional[states.State],\n    context: OrchestrationContext,\n) -> None:\n    \"\"\"\n    Implements a hook that can fire after a state is committed to the database.\n\n    The intended use of this method is to revert side-effects produced by\n    `self.before_transition` when the transition is found to be invalid on exit.\n    This allows multiple rules to be gracefully run in sequence, without logic that\n    keeps track of all other rules that might govern a transition.\n\n    Args:\n        initial_state: The initial state of a transition\n        validated_state: The governed state that has been committed to the database\n        context: A safe copy of the `OrchestrationContext`, with the exception of\n            `context.run`, mutating this context will have no effect on the broader\n            orchestration environment.\n\n    Returns:\n        None\n    \"\"\"\n
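A hedged sketch of the intended before_transition / cleanup pairing: the rule takes an in-memory slot when it fires and gives the slot back if a later rule invalidates the transition. The counter and rule name are illustrative only; a real implementation would need concurrency-safe, shared storage:

```python
from typing import Optional

from prefect.server.orchestration.rules import (
    BaseOrchestrationRule,
    OrchestrationContext,
)
from prefect.server.schemas import states

# Illustrative process-local counter; not suitable for production use.
ACTIVE_SLOTS = 0


class TakeSlot(BaseOrchestrationRule):
    FROM_STATES = [states.StateType.PENDING]
    TO_STATES = [states.StateType.RUNNING]

    async def before_transition(
        self,
        initial_state: Optional[states.State],
        proposed_state: Optional[states.State],
        context: OrchestrationContext,
    ) -> None:
        global ACTIVE_SLOTS
        ACTIVE_SLOTS += 1  # side effect produced while the transition looks valid

    async def cleanup(
        self,
        initial_state: Optional[states.State],
        validated_state: Optional[states.State],
        context: OrchestrationContext,
    ) -> None:
        global ACTIVE_SLOTS
        ACTIVE_SLOTS -= 1  # revert the side effect if the rule fizzled
```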
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.invalid","title":"invalid async","text":"

      Determines if a rule is invalid.

Invalid rules do nothing and no hooks fire upon entering or exiting a governed context. Rules are invalid if the transition's state types are not contained in self.FROM_STATES and self.TO_STATES, or if the context is proposing a transition that differs from the transition the rule was instantiated with.

      Returns:

      Type Description bool

True if the rule is invalid, False otherwise.

      Source code in prefect/server/orchestration/rules.py
      async def invalid(self) -> bool:\n    \"\"\"\n    Determines if a rule is invalid.\n\n    Invalid rules do nothing and no hooks fire upon entering or exiting a governed\n    context. Rules are invalid if the transition states types are not contained in\n    `self.FROM_STATES` and `self.TO_STATES`, or if the context is proposing\n    a transition that differs from the transition the rule was instantiated with.\n\n    Returns:\n        True if the rules in invalid, False otherwise.\n    \"\"\"\n    # invalid and fizzled states are mutually exclusive,\n    # `_invalid_on_entry` holds this statefulness\n    if self.from_state_type not in self.FROM_STATES:\n        self._invalid_on_entry = True\n    if self.to_state_type not in self.TO_STATES:\n        self._invalid_on_entry = True\n\n    if self._invalid_on_entry is None:\n        self._invalid_on_entry = await self.invalid_transition()\n    return self._invalid_on_entry\n
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.fizzled","title":"fizzled async","text":"

      Determines if a rule is fizzled and side-effects need to be reverted.

      Rules are fizzled if the transitions were valid on entry (thus firing self.before_transition) but are invalid upon exiting the governed context, most likely caused by another rule mutating the transition.

      Returns:

      Type Description bool

      True if the rule is fizzled, False otherwise.

      Source code in prefect/server/orchestration/rules.py
      async def fizzled(self) -> bool:\n    \"\"\"\n    Determines if a rule is fizzled and side-effects need to be reverted.\n\n    Rules are fizzled if the transitions were valid on entry (thus firing\n    `self.before_transition`) but are invalid upon exiting the governed context,\n    most likely caused by another rule mutating the transition.\n\n    Returns:\n        True if the rule is fizzled, False otherwise.\n    \"\"\"\n\n    if self._invalid_on_entry:\n        return False\n    return await self.invalid_transition()\n
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.invalid_transition","title":"invalid_transition async","text":"

      Determines if the transition proposed by the OrchestrationContext is invalid.

      If the OrchestrationContext is attempting to manage a transition with this rule that differs from the transition the rule was instantiated with, the transition is considered to be invalid. Depending on the context, a rule with an invalid transition is either \"invalid\" or \"fizzled\".

      Returns:

      Type Description bool

      True if the transition is invalid, False otherwise.

      Source code in prefect/server/orchestration/rules.py
      async def invalid_transition(self) -> bool:\n    \"\"\"\n    Determines if the transition proposed by the `OrchestrationContext` is invalid.\n\n    If the `OrchestrationContext` is attempting to manage a transition with this\n    rule that differs from the transition the rule was instantiated with, the\n    transition is considered to be invalid. Depending on the context, a rule with an\n    invalid transition is either \"invalid\" or \"fizzled\".\n\n    Returns:\n        True if the transition is invalid, False otherwise.\n    \"\"\"\n\n    initial_state_type = self.context.initial_state_type\n    proposed_state_type = self.context.proposed_state_type\n    return (self.from_state_type != initial_state_type) or (\n        self.to_state_type != proposed_state_type\n    )\n
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.reject_transition","title":"reject_transition async","text":"

      Rejects a proposed transition before the transition is validated.

      This method will reject a proposed transition, mutating the proposed state to the provided state. A reason for rejecting the transition is also passed on to the OrchestrationContext. Rules that reject the transition will not fizzle, despite the proposed state type changing.

      Parameters:

      Name Type Description Default state Optional[State]

      The new proposed state. If None, the current run state will be returned in the result instead.

      required reason str

      The reason for rejecting the transition

      required Source code in prefect/server/orchestration/rules.py
      async def reject_transition(self, state: Optional[states.State], reason: str):\n    \"\"\"\n    Rejects a proposed transition before the transition is validated.\n\n    This method will reject a proposed transition, mutating the proposed state to\n    the provided `state`. A reason for rejecting the transition is also passed on\n    to the `OrchestrationContext`. Rules that reject the transition will not fizzle,\n    despite the proposed state type changing.\n\n    Args:\n        state: The new proposed state. If `None`, the current run state will be\n            returned in the result instead.\n        reason: The reason for rejecting the transition\n    \"\"\"\n\n    # don't run if the transition is already validated\n    if self.context.validated_state:\n        raise RuntimeError(\"The transition is already validated\")\n\n    # the current state will be used if a new one is not provided\n    if state is None:\n        if self.from_state_type is None:\n            raise OrchestrationError(\n                \"The current run has no state; this transition cannot be \"\n                \"rejected without providing a new state.\"\n            )\n        self.to_state_type = None\n        self.context.proposed_state = None\n    else:\n        # a rule that mutates state should not fizzle itself\n        self.to_state_type = state.type\n        self.context.proposed_state = state\n\n    self.context.response_status = SetStateStatus.REJECT\n    self.context.response_details = StateRejectDetails(reason=reason)\n
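A minimal sketch of rejecting a transition; the freeze flag and rule name are hypothetical. Passing state=None relies on the documented fallback of keeping the run's current state:

```python
from typing import Optional

from prefect.server.orchestration.rules import (
    BaseOrchestrationRule,
    OrchestrationContext,
)
from prefect.server.schemas import states

FREEZE_RUNS = True  # hypothetical operator-controlled flag


class HoldRunsWhileFrozen(BaseOrchestrationRule):
    FROM_STATES = [states.StateType.SCHEDULED]
    TO_STATES = [states.StateType.PENDING, states.StateType.RUNNING]

    async def before_transition(
        self,
        initial_state: Optional[states.State],
        proposed_state: Optional[states.State],
        context: OrchestrationContext,
    ) -> None:
        if FREEZE_RUNS:
            # With state=None the run keeps its current state, and the caller
            # receives a REJECT response explaining why.
            await self.reject_transition(
                state=None, reason="Runs are temporarily frozen by an operator."
            )
```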
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.delay_transition","title":"delay_transition async","text":"

      Delays a proposed transition before the transition is validated.

      This method will delay a proposed transition, setting the proposed state to None, signaling to the OrchestrationContext that no state should be written to the database. The number of seconds a transition should be delayed is passed to the OrchestrationContext. A reason for delaying the transition is also provided. Rules that delay the transition will not fizzle, despite the proposed state type changing.

      Parameters:

      Name Type Description Default delay_seconds int

      The number of seconds the transition should be delayed

      required reason str

      The reason for delaying the transition

      required Source code in prefect/server/orchestration/rules.py
      async def delay_transition(\n    self,\n    delay_seconds: int,\n    reason: str,\n):\n    \"\"\"\n    Delays a proposed transition before the transition is validated.\n\n    This method will delay a proposed transition, setting the proposed state to\n    `None`, signaling to the `OrchestrationContext` that no state should be\n    written to the database. The number of seconds a transition should be delayed is\n    passed to the `OrchestrationContext`. A reason for delaying the transition is\n    also provided. Rules that delay the transition will not fizzle, despite the\n    proposed state type changing.\n\n    Args:\n        delay_seconds: The number of seconds the transition should be delayed\n        reason: The reason for delaying the transition\n    \"\"\"\n\n    # don't run if the transition is already validated\n    if self.context.validated_state:\n        raise RuntimeError(\"The transition is already validated\")\n\n    # a rule that mutates state should not fizzle itself\n    self.to_state_type = None\n    self.context.proposed_state = None\n    self.context.response_status = SetStateStatus.WAIT\n    self.context.response_details = StateWaitDetails(\n        delay_seconds=delay_seconds, reason=reason\n    )\n
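A sketch of delaying a transition, here used for a crude concurrency limit; the limit, the in-memory tracker, and the rule name are hypothetical and would need real shared storage in practice:

```python
from typing import Optional

from prefect.server.orchestration.rules import (
    BaseOrchestrationRule,
    OrchestrationContext,
)
from prefect.server.schemas import states

MAX_CONCURRENT_RUNS = 10        # hypothetical limit
currently_running: set = set()  # hypothetical in-memory tracker


class CrudeConcurrencyLimit(BaseOrchestrationRule):
    FROM_STATES = [states.StateType.PENDING]
    TO_STATES = [states.StateType.RUNNING]

    async def before_transition(
        self,
        initial_state: Optional[states.State],
        proposed_state: Optional[states.State],
        context: OrchestrationContext,
    ) -> None:
        if len(currently_running) >= MAX_CONCURRENT_RUNS:
            # No state is written; the caller is told to retry in 30 seconds.
            await self.delay_transition(
                delay_seconds=30, reason="Concurrency limit reached."
            )
        else:
            currently_running.add(context.run.id)
```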
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.abort_transition","title":"abort_transition async","text":"

      Aborts a proposed transition before the transition is validated.

      This method will abort a proposed transition, expecting no further action to occur for this run. The proposed state is set to None, signaling to the OrchestrationContext that no state should be written to the database. A reason for aborting the transition is also provided. Rules that abort the transition will not fizzle, despite the proposed state type changing.

      Parameters:

      Name Type Description Default reason str

      The reason for aborting the transition

      required Source code in prefect/server/orchestration/rules.py
      async def abort_transition(self, reason: str):\n    \"\"\"\n    Aborts a proposed transition before the transition is validated.\n\n    This method will abort a proposed transition, expecting no further action to\n    occur for this run. The proposed state is set to `None`, signaling to the\n    `OrchestrationContext` that no state should be written to the database. A\n    reason for aborting the transition is also provided. Rules that abort the\n    transition will not fizzle, despite the proposed state type changing.\n\n    Args:\n        reason: The reason for aborting the transition\n    \"\"\"\n\n    # don't run if the transition is already validated\n    if self.context.validated_state:\n        raise RuntimeError(\"The transition is already validated\")\n\n    # a rule that mutates state should not fizzle itself\n    self.to_state_type = None\n    self.context.proposed_state = None\n    self.context.response_status = SetStateStatus.ABORT\n    self.context.response_details = StateAbortDetails(reason=reason)\n
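A sketch of aborting a transition outright, for example refusing to restart a run that has already been cancelled; the rule name is illustrative, not a built-in policy:

```python
from typing import Optional

from prefect.server.orchestration.rules import (
    BaseOrchestrationRule,
    OrchestrationContext,
)
from prefect.server.schemas import states


class RefuseToRestartCancelledRuns(BaseOrchestrationRule):
    FROM_STATES = [states.StateType.CANCELLED]
    TO_STATES = [states.StateType.PENDING, states.StateType.RUNNING]

    async def before_transition(
        self,
        initial_state: Optional[states.State],
        proposed_state: Optional[states.State],
        context: OrchestrationContext,
    ) -> None:
        # ABORT signals that no further action is expected for this run.
        await self.abort_transition(
            reason="This run was cancelled and cannot transition to a running state."
        )
```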
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.rename_state","title":"rename_state async","text":"

      Sets the \"name\" attribute on a proposed state.

      The name of a state is an annotation intended to provide rich, human-readable context for how a run is progressing. This method only updates the name and not the canonical state TYPE, and will not fizzle or invalidate any other rules that might govern this state transition.

      Source code in prefect/server/orchestration/rules.py
      async def rename_state(self, state_name):\n    \"\"\"\n    Sets the \"name\" attribute on a proposed state.\n\n    The name of a state is an annotation intended to provide rich, human-readable\n    context for how a run is progressing. This method only updates the name and not\n    the canonical state TYPE, and will not fizzle or invalidate any other rules\n    that might govern this state transition.\n    \"\"\"\n    if self.context.proposed_state is not None:\n        self.context.proposed_state.name = state_name\n
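A sketch of renaming a proposed state without changing its type; the name "Rescheduled" and the rule are illustrative:

```python
from typing import Optional

from prefect.server.orchestration.rules import (
    BaseOrchestrationRule,
    OrchestrationContext,
)
from prefect.server.schemas import states


class LabelReschedules(BaseOrchestrationRule):
    FROM_STATES = [states.StateType.SCHEDULED]
    TO_STATES = [states.StateType.SCHEDULED]

    async def before_transition(
        self,
        initial_state: Optional[states.State],
        proposed_state: Optional[states.State],
        context: OrchestrationContext,
    ) -> None:
        # The state type stays SCHEDULED; only the human-readable name changes.
        await self.rename_state("Rescheduled")
```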
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.update_context_parameters","title":"update_context_parameters async","text":"

      Updates the \"parameters\" dictionary attribute with the specified key-value pair.

This mechanism streamlines the process of passing messages and information between orchestration rules when necessary, and is simpler and more ephemeral than message-passing via the database or some other side effect. It can be used to break up large rules for ease of testing or comprehension, but note that any rules coupled this way (or any other way) are no longer independent, and the order in which they appear in the orchestration policy will matter.

      Source code in prefect/server/orchestration/rules.py
      async def update_context_parameters(self, key, value):\n    \"\"\"\n    Updates the \"parameters\" dictionary attribute with the specified key-value pair.\n\n    This mechanism streamlines the process of passing messages and information\n    between orchestration rules if necessary and is simpler and more ephemeral than\n    message-passing via the database or some other side-effect. This mechanism can\n    be used to break up large rules for ease of testing or comprehension, but note\n    that any rules coupled this way (or any other way) are no longer independent and\n    the order in which they appear in the orchestration policy priority will matter.\n    \"\"\"\n\n    self.context.parameters.update({key: value})\n
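To illustrate the coupling described above, here is a sketch of two hypothetical rules that communicate through the context parameters; both class names and the "proposed-type" key are assumptions, and the first rule must be given higher priority in the orchestration policy than the second:

from typing import Optional

from prefect.server.orchestration.rules import (
    ALL_ORCHESTRATION_STATES,
    BaseOrchestrationRule,
    OrchestrationContext,
)
from prefect.server.schemas import states


class RecordProposedType(BaseOrchestrationRule):
    # Hypothetical first rule: stash a value for a later rule to read.
    FROM_STATES = ALL_ORCHESTRATION_STATES
    TO_STATES = ALL_ORCHESTRATION_STATES

    async def before_transition(
        self,
        initial_state: Optional[states.State],
        proposed_state: Optional[states.State],
        context: OrchestrationContext,
    ) -> None:
        proposed = proposed_state.type.value if proposed_state else None
        await self.update_context_parameters("proposed-type", proposed)


class ReactToRecordedType(BaseOrchestrationRule):
    # Hypothetical second rule: read the stashed value; this depends on
    # RecordProposedType having already run for this transition.
    FROM_STATES = ALL_ORCHESTRATION_STATES
    TO_STATES = ALL_ORCHESTRATION_STATES

    async def before_transition(
        self,
        initial_state: Optional[states.State],
        proposed_state: Optional[states.State],
        context: OrchestrationContext,
    ) -> None:
        if context.parameters.get("proposed-type") == "RUNNING":
            await self.rename_state("Running (audited)")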
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform","title":"BaseUniversalTransform","text":"

      Bases: AbstractAsyncContextManager

      An abstract base class used to implement privileged bookkeeping logic.

      Warning

      In almost all cases, use the BaseOrchestrationRule base class instead.

Unlike orchestration rules implemented with the BaseOrchestrationRule ABC, universal transforms are not stateful and fire their before- and after-transition hooks on every state transition unless the proposed state is None, indicating that no state should be written to the database. Because there are no guardrails in place to prevent directly mutating state or other parts of the orchestration context, universal transforms should be used with care.

Attributes:

FROM_STATES (Iterable): for compatibility with BaseOrchestrationPolicy

TO_STATES (Iterable): for compatibility with BaseOrchestrationPolicy

context: the orchestration context

from_state_type: the state type a run is currently in

to_state_type: the intended proposed state type prior to any orchestration

Parameters:

context (OrchestrationContext): A FlowOrchestrationContext or TaskOrchestrationContext that is passed between transforms. Required.

Source code in prefect/server/orchestration/rules.py
      class BaseUniversalTransform(contextlib.AbstractAsyncContextManager):\n    \"\"\"\n    An abstract base class used to implement privileged bookkeeping logic.\n\n    Warning:\n        In almost all cases, use the `BaseOrchestrationRule` base class instead.\n\n    Beyond the orchestration rules implemented with the `BaseOrchestrationRule` ABC,\n    Universal transforms are not stateful, and fire their before- and after- transition\n    hooks on every state transition unless the proposed state is `None`, indicating that\n    no state should be written to the database. Because there are no guardrails in place\n    to prevent directly mutating state or other parts of the orchestration context,\n    universal transforms should only be used with care.\n\n    Attributes:\n        FROM_STATES: for compatibility with `BaseOrchestrationPolicy`\n        TO_STATES: for compatibility with `BaseOrchestrationPolicy`\n        context: the orchestration context\n        from_state_type: the state type a run is currently in\n        to_state_type: the intended proposed state type prior to any orchestration\n\n    Args:\n        context: A `FlowOrchestrationContext` or `TaskOrchestrationContext` that is\n            passed between transforms\n    \"\"\"\n\n    # `BaseUniversalTransform` will always fire on non-null transitions\n    FROM_STATES: Iterable = ALL_ORCHESTRATION_STATES\n    TO_STATES: Iterable = ALL_ORCHESTRATION_STATES\n\n    def __init__(\n        self,\n        context: OrchestrationContext,\n        from_state_type: Optional[states.StateType],\n        to_state_type: Optional[states.StateType],\n    ):\n        self.context = context\n        self.from_state_type = from_state_type\n        self.to_state_type = to_state_type\n\n    async def __aenter__(self):\n        \"\"\"\n        Enter an async runtime context governed by this transform.\n\n        The `with` statement will bind a governed `OrchestrationContext` to the target\n        specified by the `as` clause. If the transition proposed by the\n        `OrchestrationContext` has been nullified on entry and `context.proposed_state`\n        is `None`, entering this context will do nothing. Otherwise\n        `self.before_transition` will fire.\n        \"\"\"\n\n        await self.before_transition(self.context)\n        self.context.rule_signature.append(str(self.__class__))\n        return self.context\n\n    async def __aexit__(\n        self,\n        exc_type: Optional[Type[BaseException]],\n        exc_val: Optional[BaseException],\n        exc_tb: Optional[TracebackType],\n    ) -> None:\n        \"\"\"\n        Exit the async runtime context governed by this transform.\n\n        If the transition has been nullified or errorred upon exiting this transforms's context,\n        nothing happens. 
Otherwise, `self.after_transition` will fire on every non-null\n        proposed state.\n        \"\"\"\n\n        if not self.exception_in_transition():\n            await self.after_transition(self.context)\n            self.context.finalization_signature.append(str(self.__class__))\n\n    async def before_transition(self, context) -> None:\n        \"\"\"\n        Implements a hook that fires before a state is committed to the database.\n\n        Args:\n            context: the `OrchestrationContext` that contains transition details\n\n        Returns:\n            None\n        \"\"\"\n\n    async def after_transition(self, context) -> None:\n        \"\"\"\n        Implements a hook that can fire after a state is committed to the database.\n\n        Args:\n            context: the `OrchestrationContext` that contains transition details\n\n        Returns:\n            None\n        \"\"\"\n\n    def nullified_transition(self) -> bool:\n        \"\"\"\n        Determines if the transition has been nullified.\n\n        Transitions are nullified if the proposed state is `None`, indicating that\n        nothing should be written to the database.\n\n        Returns:\n            True if the transition is nullified, False otherwise.\n        \"\"\"\n\n        return self.context.proposed_state is None\n\n    def exception_in_transition(self) -> bool:\n        \"\"\"\n        Determines if the transition has encountered an exception.\n\n        Returns:\n            True if the transition is encountered an exception, False otherwise.\n        \"\"\"\n\n        return self.context.orchestration_error is not None\n
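A minimal sketch of a universal transform, assuming only the hooks and helpers documented here; the class, logger name, and logging behavior are illustrative, not part of Prefect:

import logging

from prefect.server.orchestration.rules import (
    BaseUniversalTransform,
    OrchestrationContext,
)

logger = logging.getLogger("transition-audit")  # illustrative logger name


class AuditTransitions(BaseUniversalTransform):
    # Hypothetical bookkeeping transform that logs every non-null transition.
    async def before_transition(self, context: OrchestrationContext) -> None:
        if self.nullified_transition():
            return  # nothing will be written to the database
        logger.debug(
            "Proposed transition: %s -> %s", self.from_state_type, self.to_state_type
        )

    async def after_transition(self, context: OrchestrationContext) -> None:
        if self.nullified_transition():
            return
        logger.debug("Committed state: %s", context.validated_state)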
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.before_transition","title":"before_transition async","text":"

      Implements a hook that fires before a state is committed to the database.

Parameters:

context: the OrchestrationContext that contains transition details. Required.

Returns:

None

Source code in prefect/server/orchestration/rules.py
      async def before_transition(self, context) -> None:\n    \"\"\"\n    Implements a hook that fires before a state is committed to the database.\n\n    Args:\n        context: the `OrchestrationContext` that contains transition details\n\n    Returns:\n        None\n    \"\"\"\n
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.after_transition","title":"after_transition async","text":"

      Implements a hook that can fire after a state is committed to the database.

Parameters:

context: the OrchestrationContext that contains transition details. Required.

Returns:

None

Source code in prefect/server/orchestration/rules.py
      async def after_transition(self, context) -> None:\n    \"\"\"\n    Implements a hook that can fire after a state is committed to the database.\n\n    Args:\n        context: the `OrchestrationContext` that contains transition details\n\n    Returns:\n        None\n    \"\"\"\n
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.nullified_transition","title":"nullified_transition","text":"

      Determines if the transition has been nullified.

      Transitions are nullified if the proposed state is None, indicating that nothing should be written to the database.

Returns:

bool: True if the transition is nullified, False otherwise.

      Source code in prefect/server/orchestration/rules.py
      def nullified_transition(self) -> bool:\n    \"\"\"\n    Determines if the transition has been nullified.\n\n    Transitions are nullified if the proposed state is `None`, indicating that\n    nothing should be written to the database.\n\n    Returns:\n        True if the transition is nullified, False otherwise.\n    \"\"\"\n\n    return self.context.proposed_state is None\n
      "},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.exception_in_transition","title":"exception_in_transition","text":"

      Determines if the transition has encountered an exception.

Returns:

bool: True if the transition has encountered an exception, False otherwise.

      Source code in prefect/server/orchestration/rules.py
def exception_in_transition(self) -> bool:\n    \"\"\"\n    Determines if the transition has encountered an exception.\n\n    Returns:\n        True if the transition has encountered an exception, False otherwise.\n    \"\"\"\n\n    return self.context.orchestration_error is not None\n
      "},{"location":"api-ref/server/schemas/actions/","title":"server.schemas.actions","text":""},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions","title":"prefect.server.schemas.actions","text":"

      Reduced schemas for accepting API actions.

      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactCreate","title":"ArtifactCreate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to create an artifact.

      Source code in prefect/server/schemas/actions.py
      class ArtifactCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create an artifact.\"\"\"\n\n    key: Optional[str] = Field(\n        default=None, description=\"An optional unique reference key for this artifact.\"\n    )\n    type: Optional[str] = Field(\n        default=None,\n        description=(\n            \"An identifier that describes the shape of the data field. e.g. 'result',\"\n            \" 'table', 'markdown'\"\n        ),\n    )\n    description: Optional[str] = Field(\n        default=None, description=\"A markdown-enabled description of the artifact.\"\n    )\n    data: Optional[Union[Dict[str, Any], Any]] = Field(\n        default=None,\n        description=(\n            \"Data associated with the artifact, e.g. a result.; structure depends on\"\n            \" the artifact type.\"\n        ),\n    )\n    metadata_: Optional[Dict[str, str]] = Field(\n        default=None,\n        description=(\n            \"User-defined artifact metadata. Content must be string key and value\"\n            \" pairs.\"\n        ),\n    )\n    flow_run_id: Optional[UUID] = Field(\n        default=None, description=\"The flow run associated with the artifact.\"\n    )\n    task_run_id: Optional[UUID] = Field(\n        default=None, description=\"The task run associated with the artifact.\"\n    )\n\n    @classmethod\n    def from_result(cls, data: Any):\n        artifact_info = dict()\n        if isinstance(data, dict):\n            artifact_key = data.pop(\"artifact_key\", None)\n            if artifact_key:\n                artifact_info[\"key\"] = artifact_key\n\n            artifact_type = data.pop(\"artifact_type\", None)\n            if artifact_type:\n                artifact_info[\"type\"] = artifact_type\n\n            description = data.pop(\"artifact_description\", None)\n            if description:\n                artifact_info[\"description\"] = description\n\n        return cls(data=data, **artifact_info)\n\n    _validate_metadata_length = validator(\"metadata_\")(validate_max_metadata_length)\n\n    _validate_artifact_format = validator(\"key\", allow_reuse=True)(\n        validate_artifact_key\n    )\n
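A sketch of constructing this schema directly; the key, description, and placeholder flow_run_id are invented values:

from uuid import uuid4

from prefect.server.schemas.actions import ArtifactCreate

artifact = ArtifactCreate(
    key="daily-report",          # optional unique reference key
    type="markdown",
    description="Summary of today's run",
    data="# All tasks completed",
    flow_run_id=uuid4(),         # placeholder; a real request would use an existing run ID
)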
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactCreate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
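For example (reusing the ArtifactCreate schema above), the flag only changes the output for models that declare SecretStr or SecretBytes fields; ArtifactCreate has none, so both calls below produce identical JSON:

from prefect.server.schemas.actions import ArtifactCreate

artifact = ArtifactCreate(key="example", type="markdown", data="# hi")
print(artifact.json())
print(artifact.json(include_secrets=True))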
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactUpdate","title":"ArtifactUpdate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to update an artifact.

      Source code in prefect/server/schemas/actions.py
      class ArtifactUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update an artifact.\"\"\"\n\n    data: Optional[Union[Dict[str, Any], Any]] = Field(None)\n    description: Optional[str] = Field(None)\n    metadata_: Optional[Dict[str, str]] = Field(None)\n\n    _validate_metadata_length = validator(\"metadata_\", allow_reuse=True)(\n        validate_max_metadata_length\n    )\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactUpdate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentCreate","title":"BlockDocumentCreate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to create a block document.

      Source code in prefect/server/schemas/actions.py
      class BlockDocumentCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a block document.\"\"\"\n\n    name: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The block document's name. Not required for anonymous block documents.\"\n        ),\n    )\n    data: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The block document's data\"\n    )\n    block_schema_id: UUID = Field(default=..., description=\"A block schema ID\")\n\n    block_type_id: UUID = Field(default=..., description=\"A block type ID\")\n\n    is_anonymous: bool = Field(\n        default=False,\n        description=(\n            \"Whether the block is anonymous (anonymous blocks are usually created by\"\n            \" Prefect automatically)\"\n        ),\n    )\n\n    _validate_name_format = validator(\"name\", allow_reuse=True)(\n        validate_block_document_name\n    )\n\n    @root_validator\n    def validate_name_is_present_if_not_anonymous(cls, values):\n        return validate_name_present_on_nonanonymous_blocks(values)\n
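A sketch of building the create payload; the name, data, and placeholder IDs are invented, and a real request would reference an existing block schema and block type:

from uuid import uuid4

from prefect.server.schemas.actions import BlockDocumentCreate

doc = BlockDocumentCreate(
    name="my-database-credentials",   # named (non-anonymous) blocks require a name
    data={"token": "placeholder-value"},
    block_schema_id=uuid4(),          # placeholder ID
    block_type_id=uuid4(),            # placeholder ID
)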
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentCreate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentReferenceCreate","title":"BlockDocumentReferenceCreate","text":"

      Bases: ActionBaseModel

      Data used to create block document reference.

      Source code in prefect/server/schemas/actions.py
      class BlockDocumentReferenceCreate(ActionBaseModel):\n    \"\"\"Data used to create block document reference.\"\"\"\n\n    id: UUID = Field(\n        default_factory=uuid4, description=\"The block document reference ID\"\n    )\n    parent_block_document_id: UUID = Field(\n        default=..., description=\"ID of the parent block document\"\n    )\n    reference_block_document_id: UUID = Field(\n        default=..., description=\"ID of the nested block document\"\n    )\n    name: str = Field(\n        default=..., description=\"The name that the reference is nested under\"\n    )\n\n    @root_validator\n    def validate_parent_and_ref_are_different(cls, values):\n        return validate_parent_and_ref_diff(values)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentReferenceCreate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentUpdate","title":"BlockDocumentUpdate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to update a block document.

      Source code in prefect/server/schemas/actions.py
      class BlockDocumentUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a block document.\"\"\"\n\n    block_schema_id: Optional[UUID] = Field(\n        default=None, description=\"A block schema ID\"\n    )\n    data: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The block document's data\"\n    )\n    merge_existing_data: bool = True\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentUpdate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockSchemaCreate","title":"BlockSchemaCreate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to create a block schema.

      Source code in prefect/server/schemas/actions.py
      class BlockSchemaCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a block schema.\"\"\"\n\n    fields: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The block schema's field schema\"\n    )\n    block_type_id: Optional[UUID] = Field(default=..., description=\"A block type ID\")\n\n    capabilities: List[str] = Field(\n        default_factory=list,\n        description=\"A list of Block capabilities\",\n    )\n    version: str = Field(\n        default=schemas.core.DEFAULT_BLOCK_SCHEMA_VERSION,\n        description=\"Human readable identifier for the block schema\",\n    )\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockSchemaCreate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeCreate","title":"BlockTypeCreate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to create a block type.

      Source code in prefect/server/schemas/actions.py
      class BlockTypeCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a block type.\"\"\"\n\n    name: str = Field(default=..., description=\"A block type's name\")\n    slug: str = Field(default=..., description=\"A block type's slug\")\n    logo_url: Optional[HttpUrl] = Field(\n        default=None, description=\"Web URL for the block type's logo\"\n    )\n    documentation_url: Optional[HttpUrl] = Field(\n        default=None, description=\"Web URL for the block type's documentation\"\n    )\n    description: Optional[str] = Field(\n        default=None,\n        description=\"A short blurb about the corresponding block's intended use\",\n    )\n    code_example: Optional[str] = Field(\n        default=None,\n        description=\"A code snippet demonstrating use of the corresponding block\",\n    )\n\n    # validators\n    _validate_slug_format = validator(\"slug\", allow_reuse=True)(\n        validate_block_type_slug\n    )\n\n    _validate_name_characters = validator(\"name\", check_fields=False)(\n        raise_on_name_with_banned_characters\n    )\n
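An illustrative payload; all field values are invented:

from prefect.server.schemas.actions import BlockTypeCreate

block_type = BlockTypeCreate(
    name="My Database",
    slug="my-database",               # slugs are lowercase with dashes
    description="Connection settings for an internal database.",
)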
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeCreate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeUpdate","title":"BlockTypeUpdate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to update a block type.

      Source code in prefect/server/schemas/actions.py
      class BlockTypeUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a block type.\"\"\"\n\n    logo_url: Optional[schemas.core.HttpUrl] = Field(None)\n    documentation_url: Optional[schemas.core.HttpUrl] = Field(None)\n    description: Optional[str] = Field(None)\n    code_example: Optional[str] = Field(None)\n\n    @classmethod\n    def updatable_fields(cls) -> set:\n        return get_class_fields_only(cls)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeUpdate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitCreate","title":"ConcurrencyLimitCreate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to create a concurrency limit.

      Source code in prefect/server/schemas/actions.py
      class ConcurrencyLimitCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a concurrency limit.\"\"\"\n\n    tag: str = Field(\n        default=..., description=\"A tag the concurrency limit is applied to.\"\n    )\n    concurrency_limit: int = Field(default=..., description=\"The concurrency limit.\")\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitCreate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitV2Create","title":"ConcurrencyLimitV2Create","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to create a v2 concurrency limit.

      Source code in prefect/server/schemas/actions.py
      class ConcurrencyLimitV2Create(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a v2 concurrency limit.\"\"\"\n\n    active: bool = Field(\n        default=True, description=\"Whether the concurrency limit is active.\"\n    )\n    name: str = Field(default=..., description=\"The name of the concurrency limit.\")\n    limit: int = Field(default=..., description=\"The concurrency limit.\")\n    active_slots: int = Field(default=0, description=\"The number of active slots.\")\n    denied_slots: int = Field(default=0, description=\"The number of denied slots.\")\n    slot_decay_per_second: float = Field(\n        default=0,\n        description=\"The decay rate for active slots when used as a rate limit.\",\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
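A sketch with invented values; slot_decay_per_second is set to a nonzero value to show the rate-limit behavior mentioned in the field description:

from prefect.server.schemas.actions import ConcurrencyLimitV2Create

limit = ConcurrencyLimitV2Create(
    name="database-connections",
    limit=10,
    slot_decay_per_second=0.5,  # nonzero decay makes the limit behave as a rate limit
)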
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitV2Create.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitV2Update","title":"ConcurrencyLimitV2Update","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to update a v2 concurrency limit.

      Source code in prefect/server/schemas/actions.py
      class ConcurrencyLimitV2Update(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a v2 concurrency limit.\"\"\"\n\n    active: Optional[bool] = Field(None)\n    name: Optional[str] = Field(None)\n    limit: Optional[NonNegativeInteger] = Field(None)\n    active_slots: Optional[NonNegativeInteger] = Field(None)\n    denied_slots: Optional[NonNegativeInteger] = Field(None)\n    slot_decay_per_second: Optional[NonNegativeFloat] = Field(None)\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitV2Update.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentCreate","title":"DeploymentCreate","text":"

      Bases: DeprecatedInfraOverridesField, ActionBaseModel

      Data used by the Prefect REST API to create a deployment.

      Source code in prefect/server/schemas/actions.py
      class DeploymentCreate(DeprecatedInfraOverridesField, ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a deployment.\"\"\"\n\n    @root_validator\n    def populate_schedules(cls, values):\n        return set_deployment_schedules(values)\n\n    @root_validator(pre=True)\n    def remove_old_fields(cls, values):\n        return remove_old_deployment_fields(values)\n\n    name: str = Field(\n        default=...,\n        description=\"The name of the deployment.\",\n        examples=[\"my-deployment\"],\n    )\n    flow_id: UUID = Field(\n        default=..., description=\"The ID of the flow associated with the deployment.\"\n    )\n    is_schedule_active: bool = Field(\n        default=True, description=\"Whether the schedule is active.\"\n    )\n    paused: bool = Field(\n        default=False, description=\"Whether or not the deployment is paused.\"\n    )\n    schedules: List[DeploymentScheduleCreate] = Field(\n        default_factory=list,\n        description=\"A list of schedules for the deployment.\",\n    )\n    enforce_parameter_schema: bool = Field(\n        default=False,\n        description=(\n            \"Whether or not the deployment should enforce the parameter schema.\"\n        ),\n    )\n    parameter_openapi_schema: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=\"The parameter schema of the flow, including defaults.\",\n    )\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Parameters for flow runs scheduled by the deployment.\",\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of deployment tags.\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    pull_steps: Optional[List[dict]] = Field(None)\n\n    manifest_path: Optional[str] = Field(None)\n    work_queue_name: Optional[str] = Field(None)\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the deployment's work pool.\",\n        examples=[\"my-work-pool\"],\n    )\n    storage_document_id: Optional[UUID] = Field(None)\n    infrastructure_document_id: Optional[UUID] = Field(None)\n    schedule: Optional[schemas.schedules.SCHEDULE_TYPES] = Field(\n        None, description=\"The schedule for the deployment.\"\n    )\n    description: Optional[str] = Field(None)\n    path: Optional[str] = Field(None)\n    version: Optional[str] = Field(None)\n    entrypoint: Optional[str] = Field(None)\n    job_variables: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Overrides for the flow's infrastructure configuration.\",\n    )\n\n    def check_valid_configuration(self, base_job_template: dict):\n        \"\"\"Check that the combination of base_job_template defaults\n        and job_variables conforms to the specified schema.\n        \"\"\"\n        variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n        if variables_schema is not None:\n            # jsonschema considers required fields, even if that field has a default,\n            # to still be required. 
To get around this we remove the fields from\n            # required if there is a default present.\n            required = variables_schema.get(\"required\")\n            properties = variables_schema.get(\"properties\")\n            if required is not None and properties is not None:\n                for k, v in properties.items():\n                    if \"default\" in v and k in required:\n                        required.remove(k)\n\n            jsonschema.validate(self.job_variables, variables_schema)\n\n    @validator(\"parameters\")\n    def _validate_parameters_conform_to_schema(cls, value, values):\n        return validate_parameters_conform_to_schema(value, values)\n\n    @validator(\"parameter_openapi_schema\")\n    def _validate_parameter_openapi_schema(cls, value, values):\n        return validate_parameter_openapi_schema(value, values)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentCreate.check_valid_configuration","title":"check_valid_configuration","text":"

      Check that the combination of base_job_template defaults and job_variables conforms to the specified schema.

      Source code in prefect/server/schemas/actions.py
      def check_valid_configuration(self, base_job_template: dict):\n    \"\"\"Check that the combination of base_job_template defaults\n    and job_variables conforms to the specified schema.\n    \"\"\"\n    variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n    if variables_schema is not None:\n        # jsonschema considers required fields, even if that field has a default,\n        # to still be required. To get around this we remove the fields from\n        # required if there is a default present.\n        required = variables_schema.get(\"required\")\n        properties = variables_schema.get(\"properties\")\n        if required is not None and properties is not None:\n            for k, v in properties.items():\n                if \"default\" in v and k in required:\n                    required.remove(k)\n\n        jsonschema.validate(self.job_variables, variables_schema)\n
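A sketch of validating job variables against a made-up base job template; the template, deployment name, and placeholder flow ID are assumptions:

from uuid import uuid4

from prefect.server.schemas.actions import DeploymentCreate

base_job_template = {
    "variables": {
        "type": "object",
        "properties": {"cpu": {"type": "integer", "default": 1}},
        "required": ["cpu"],
    }
}

deployment = DeploymentCreate(
    name="my-deployment",
    flow_id=uuid4(),                 # placeholder flow ID
    job_variables={"cpu": 2},
)

# Passes silently; a non-conforming value such as {"cpu": "two"} would raise
# a jsonschema.ValidationError.
deployment.check_valid_configuration(base_job_template)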
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentCreate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentCreate.schema","title":"schema classmethod","text":"

      Don't use the mixin docstring as the description if this class is missing a docstring.

      Source code in prefect/_internal/compatibility/deprecated.py
      @classmethod\ndef schema(\n    cls, by_alias: bool = True, ref_template: str = default_ref_template\n) -> Dict[str, Any]:\n    \"\"\"\n    Don't use the mixin docstring as the description if this class is missing a\n    docstring.\n    \"\"\"\n    schema = super().schema(by_alias=by_alias, ref_template=ref_template)\n\n    if not cls.__doc__:\n        schema.pop(\"description\", None)\n\n    return schema\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentFlowRunCreate","title":"DeploymentFlowRunCreate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to create a flow run from a deployment.

      Source code in prefect/server/schemas/actions.py
      class DeploymentFlowRunCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a flow run from a deployment.\"\"\"\n\n    # FlowRunCreate states must be provided as StateCreate objects\n    state: Optional[StateCreate] = Field(\n        default=None, description=\"The state of the flow run to create\"\n    )\n\n    name: str = Field(\n        default_factory=lambda: generate_slug(2),\n        description=(\n            \"The name of the flow run. Defaults to a random slug if not specified.\"\n        ),\n        examples=[\"my-flow-run\"],\n    )\n    parameters: Dict[str, Any] = Field(default_factory=dict)\n    context: Dict[str, Any] = Field(default_factory=dict)\n    infrastructure_document_id: Optional[UUID] = Field(None)\n    empirical_policy: schemas.core.FlowRunPolicy = Field(\n        default_factory=schemas.core.FlowRunPolicy,\n        description=\"The empirical policy for the flow run.\",\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags for the flow run.\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    idempotency_key: Optional[str] = Field(\n        None,\n        description=(\n            \"An optional idempotency key. If a flow run with the same idempotency key\"\n            \" has already been created, the existing flow run will be returned.\"\n        ),\n    )\n    parent_task_run_id: Optional[UUID] = Field(None)\n    work_queue_name: Optional[str] = Field(None)\n    job_variables: Optional[Dict[str, Any]] = Field(None)\n\n    @validator(\"name\", pre=True)\n    def set_name(cls, name):\n        return get_or_create_run_name(name)\n
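A sketch of the request body for creating a run from a deployment; the parameter names, tags, and idempotency key are invented:

from prefect.server.schemas.actions import DeploymentFlowRunCreate

run = DeploymentFlowRunCreate(
    parameters={"customer_id": 123},
    tags=["ad-hoc"],
    idempotency_key="customer-123-2024-05-03",
)
print(run.name)  # defaults to a random two-word slug when not provided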
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentFlowRunCreate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentUpdate","title":"DeploymentUpdate","text":"

      Bases: DeprecatedInfraOverridesField, ActionBaseModel

      Data used by the Prefect REST API to update a deployment.

      Source code in prefect/server/schemas/actions.py
      class DeploymentUpdate(DeprecatedInfraOverridesField, ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a deployment.\"\"\"\n\n    @root_validator(pre=True)\n    def remove_old_fields(cls, values):\n        return remove_old_deployment_fields(values)\n\n    version: Optional[str] = Field(None)\n    schedule: Optional[schemas.schedules.SCHEDULE_TYPES] = Field(\n        None, description=\"The schedule for the deployment.\"\n    )\n    description: Optional[str] = Field(None)\n    is_schedule_active: bool = Field(\n        default=True, description=\"Whether the schedule is active.\"\n    )\n    paused: bool = Field(\n        default=False, description=\"Whether or not the deployment is paused.\"\n    )\n    schedules: List[DeploymentScheduleCreate] = Field(\n        default_factory=list,\n        description=\"A list of schedules for the deployment.\",\n    )\n    parameters: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=\"Parameters for flow runs scheduled by the deployment.\",\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of deployment tags.\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    work_queue_name: Optional[str] = Field(None)\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the deployment's work pool.\",\n        examples=[\"my-work-pool\"],\n    )\n    path: Optional[str] = Field(None)\n    job_variables: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=\"Overrides for the flow's infrastructure configuration.\",\n    )\n    entrypoint: Optional[str] = Field(None)\n    manifest_path: Optional[str] = Field(None)\n    storage_document_id: Optional[UUID] = Field(None)\n    infrastructure_document_id: Optional[UUID] = Field(None)\n    enforce_parameter_schema: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Whether or not the deployment should enforce the parameter schema.\"\n        ),\n    )\n\n    class Config:\n        allow_population_by_field_name = True\n\n    def check_valid_configuration(self, base_job_template: dict):\n        \"\"\"Check that the combination of base_job_template defaults\n        and job_variables conforms to the specified schema.\n        \"\"\"\n        variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n        if variables_schema is not None:\n            # jsonschema considers required fields, even if that field has a default,\n            # to still be required. To get around this we remove the fields from\n            # required if there is a default present.\n            required = variables_schema.get(\"required\")\n            properties = variables_schema.get(\"properties\")\n            if required is not None and properties is not None:\n                for k, v in properties.items():\n                    if \"default\" in v and k in required:\n                        required.remove(k)\n\n        if variables_schema is not None:\n            jsonschema.validate(self.job_variables, variables_schema)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentUpdate.check_valid_configuration","title":"check_valid_configuration","text":"

      Check that the combination of base_job_template defaults and job_variables conforms to the specified schema.

      Source code in prefect/server/schemas/actions.py
      def check_valid_configuration(self, base_job_template: dict):\n    \"\"\"Check that the combination of base_job_template defaults\n    and job_variables conforms to the specified schema.\n    \"\"\"\n    variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n    if variables_schema is not None:\n        # jsonschema considers required fields, even if that field has a default,\n        # to still be required. To get around this we remove the fields from\n        # required if there is a default present.\n        required = variables_schema.get(\"required\")\n        properties = variables_schema.get(\"properties\")\n        if required is not None and properties is not None:\n            for k, v in properties.items():\n                if \"default\" in v and k in required:\n                    required.remove(k)\n\n    if variables_schema is not None:\n        jsonschema.validate(self.job_variables, variables_schema)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentUpdate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentUpdate.schema","title":"schema classmethod","text":"

      Don't use the mixin docstring as the description if this class is missing a docstring.

      Source code in prefect/_internal/compatibility/deprecated.py
      @classmethod\ndef schema(\n    cls, by_alias: bool = True, ref_template: str = default_ref_template\n) -> Dict[str, Any]:\n    \"\"\"\n    Don't use the mixin docstring as the description if this class is missing a\n    docstring.\n    \"\"\"\n    schema = super().schema(by_alias=by_alias, ref_template=ref_template)\n\n    if not cls.__doc__:\n        schema.pop(\"description\", None)\n\n    return schema\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowCreate","title":"FlowCreate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to create a flow.

      Source code in prefect/server/schemas/actions.py
      class FlowCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a flow.\"\"\"\n\n    name: str = Field(\n        default=..., description=\"The name of the flow\", examples=[\"my-flow\"]\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of flow tags\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowCreate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunCreate","title":"FlowRunCreate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to create a flow run.

      Source code in prefect/server/schemas/actions.py
      class FlowRunCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a flow run.\"\"\"\n\n    # FlowRunCreate states must be provided as StateCreate objects\n    state: Optional[StateCreate] = Field(\n        default=None, description=\"The state of the flow run to create\"\n    )\n\n    name: str = Field(\n        default_factory=lambda: generate_slug(2),\n        description=(\n            \"The name of the flow run. Defaults to a random slug if not specified.\"\n        ),\n        examples=[\"my-flow-run\"],\n    )\n    flow_id: UUID = Field(default=..., description=\"The id of the flow being run.\")\n    flow_version: Optional[str] = Field(\n        default=None, description=\"The version of the flow being run.\"\n    )\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict,\n    )\n    context: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"The context of the flow run.\",\n    )\n    parent_task_run_id: Optional[UUID] = Field(None)\n    infrastructure_document_id: Optional[UUID] = Field(None)\n    empirical_policy: schemas.core.FlowRunPolicy = Field(\n        default_factory=schemas.core.FlowRunPolicy,\n        description=\"The empirical policy for the flow run.\",\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags for the flow run.\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    idempotency_key: Optional[str] = Field(\n        None,\n        description=(\n            \"An optional idempotency key. If a flow run with the same idempotency key\"\n            \" has already been created, the existing flow run will be returned.\"\n        ),\n    )\n\n    # DEPRECATED\n\n    deployment_id: Optional[UUID] = Field(\n        None,\n        description=(\n            \"DEPRECATED: The id of the deployment associated with this flow run, if\"\n            \" available.\"\n        ),\n        deprecated=True,\n    )\n\n    class Config(ActionBaseModel.Config):\n        json_dumps = orjson_dumps_extra_compatible\n\n    @validator(\"name\", pre=True)\n    def set_name(cls, name):\n        return get_or_create_run_name(name)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunCreate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyCreate","title":"FlowRunNotificationPolicyCreate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to create a flow run notification policy.

      Source code in prefect/server/schemas/actions.py
      class FlowRunNotificationPolicyCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a flow run notification policy.\"\"\"\n\n    is_active: bool = Field(\n        default=True, description=\"Whether the policy is currently active\"\n    )\n    state_names: List[str] = Field(\n        default=..., description=\"The flow run states that trigger notifications\"\n    )\n    tags: List[str] = Field(\n        default=...,\n        description=\"The flow run tags that trigger notifications (set [] to disable)\",\n    )\n    block_document_id: UUID = Field(\n        default=..., description=\"The block document ID used for sending notifications\"\n    )\n    message_template: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A templatable notification message. Use {braces} to add variables.\"\n            \" Valid variables include:\"\n            f\" {listrepr(sorted(schemas.core.FLOW_RUN_NOTIFICATION_TEMPLATE_KWARGS), sep=', ')}\"\n        ),\n        examples=[\n            \"Flow run {flow_run_name} with id {flow_run_id} entered state\"\n            \" {flow_run_state_name}.\"\n        ],\n    )\n\n    @validator(\"message_template\")\n    def validate_message_template_variables(cls, v):\n        return validate_message_template_variables(v)\n
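A hedged example of a notification-policy payload; the block document id is a placeholder for an existing notification block, and the template uses the variables listed in the schema's own example.

```python
from uuid import uuid4

from prefect.server.schemas.actions import FlowRunNotificationPolicyCreate

policy = FlowRunNotificationPolicyCreate(
    state_names=["Failed", "Crashed"],   # flow run states that trigger notifications
    tags=[],                             # [] disables tag filtering
    block_document_id=uuid4(),           # placeholder notification block document id
    message_template=(
        "Flow run {flow_run_name} with id {flow_run_id} entered state"
        " {flow_run_state_name}."
    ),
)
```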
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyCreate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyUpdate","title":"FlowRunNotificationPolicyUpdate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to update a flow run notification policy.

      Source code in prefect/server/schemas/actions.py
      class FlowRunNotificationPolicyUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a flow run notification policy.\"\"\"\n\n    is_active: Optional[bool] = Field(None)\n    state_names: Optional[List[str]] = Field(None)\n    tags: Optional[List[str]] = Field(None)\n    block_document_id: Optional[UUID] = Field(None)\n    message_template: Optional[str] = Field(None)\n\n    @validator(\"message_template\")\n    def validate_message_template_variables(cls, v):\n        return validate_message_template_variables(v)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyUpdate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunUpdate","title":"FlowRunUpdate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to update a flow run.

      Source code in prefect/server/schemas/actions.py
      class FlowRunUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a flow run.\"\"\"\n\n    name: Optional[str] = Field(None)\n    flow_version: Optional[str] = Field(None)\n    parameters: Dict[str, Any] = Field(default_factory=dict)\n    empirical_policy: schemas.core.FlowRunPolicy = Field(\n        default_factory=schemas.core.FlowRunPolicy\n    )\n    tags: List[str] = Field(default_factory=list)\n    infrastructure_pid: Optional[str] = Field(None)\n    job_variables: Optional[Dict[str, Any]] = Field(None)\n\n    @validator(\"name\", pre=True)\n    def set_name(cls, name):\n        return get_or_create_run_name(name)\n
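A short sketch of an update payload under the same import assumptions; the image override is hypothetical and only illustrates `job_variables`.

```python
from prefect.server.schemas.actions import FlowRunUpdate

update = FlowRunUpdate(
    name="renamed-run",
    tags=["backfill", "manual"],
    job_variables={"image": "my-registry/flow:latest"},  # hypothetical infra override
)
```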
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunUpdate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowUpdate","title":"FlowUpdate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to update a flow.

      Source code in prefect/server/schemas/actions.py
      class FlowUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a flow.\"\"\"\n\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of flow tags\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowUpdate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.LogCreate","title":"LogCreate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to create a log.

      Source code in prefect/server/schemas/actions.py
      class LogCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a log.\"\"\"\n\n    name: str = Field(default=..., description=\"The logger name.\")\n    level: int = Field(default=..., description=\"The log level.\")\n    message: str = Field(default=..., description=\"The log message.\")\n    timestamp: DateTimeTZ = Field(default=..., description=\"The log timestamp.\")\n    flow_run_id: Optional[UUID] = Field(None)\n    task_run_id: Optional[UUID] = Field(None)\n
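A minimal sketch of a log payload, assuming `pendulum` (a Prefect dependency) for the timezone-aware timestamp that `DateTimeTZ` expects; the flow run id is a placeholder.

```python
import logging
from uuid import uuid4

import pendulum

from prefect.server.schemas.actions import LogCreate

log = LogCreate(
    name="prefect.flow_runs",        # logger name
    level=logging.INFO,              # standard logging level as an int
    message="Flow run started",
    timestamp=pendulum.now("UTC"),   # DateTimeTZ requires a timezone-aware datetime
    flow_run_id=uuid4(),             # placeholder flow run id
)
```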
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.LogCreate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.SavedSearchCreate","title":"SavedSearchCreate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to create a saved search.

      Source code in prefect/server/schemas/actions.py
      class SavedSearchCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a saved search.\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the saved search.\")\n    filters: List[schemas.core.SavedSearchFilter] = Field(\n        default_factory=list, description=\"The filter set for the saved search.\"\n    )\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.SavedSearchCreate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.StateCreate","title":"StateCreate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to create a new state.

      Source code in prefect/server/schemas/actions.py
      class StateCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a new state.\"\"\"\n\n    type: schemas.states.StateType = Field(\n        default=..., description=\"The type of the state to create\"\n    )\n    name: Optional[str] = Field(\n        default=None, description=\"The name of the state to create\"\n    )\n    message: Optional[str] = Field(\n        default=None, description=\"The message of the state to create\"\n    )\n    data: Optional[Any] = Field(\n        default=None, description=\"The data of the state to create\"\n    )\n    state_details: schemas.states.StateDetails = Field(\n        default_factory=schemas.states.StateDetails,\n        description=\"The details of the state to create\",\n    )\n\n    timestamp: Optional[DateTimeTZ] = Field(\n        default=None,\n        repr=False,\n        ignored=True,\n    )\n    id: Optional[UUID] = Field(default=None, repr=False, ignored=True)\n\n    @validator(\"name\", always=True)\n    def default_name_from_type(cls, v, *, values, **kwargs):\n        return get_or_create_state_name(v, values)\n\n    @root_validator\n    def default_scheduled_start_time(cls, values):\n        return set_default_scheduled_time(cls, values)\n
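A brief sketch, assuming `StateType` imports from `prefect.server.schemas.states` (the module referenced by the annotations above); `name` can be omitted because the validator derives it from the type.

```python
from prefect.server.schemas.actions import StateCreate
from prefect.server.schemas.states import StateType

# The always-run validator fills name="Completed" from the type, and the
# root validator only injects a scheduled start time for SCHEDULED states.
state = StateCreate(type=StateType.COMPLETED, message="Backfill finished")
```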
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.StateCreate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunCreate","title":"TaskRunCreate","text":"

      Bases: ActionBaseModel

Data used by the Prefect REST API to create a task run.

      Source code in prefect/server/schemas/actions.py
      class TaskRunCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a task run\"\"\"\n\n    # TaskRunCreate states must be provided as StateCreate objects\n    state: Optional[StateCreate] = Field(\n        default=None, description=\"The state of the task run to create\"\n    )\n\n    name: str = Field(\n        default_factory=lambda: generate_slug(2), examples=[\"my-task-run\"]\n    )\n    flow_run_id: Optional[UUID] = Field(\n        default=None, description=\"The flow run id of the task run.\"\n    )\n    task_key: str = Field(\n        default=..., description=\"A unique identifier for the task being run.\"\n    )\n    dynamic_key: str = Field(\n        default=...,\n        description=(\n            \"A dynamic key used to differentiate between multiple runs of the same task\"\n            \" within the same flow run.\"\n        ),\n    )\n    cache_key: Optional[str] = Field(\n        default=None,\n        description=(\n            \"An optional cache key. If a COMPLETED state associated with this cache key\"\n            \" is found, the cached COMPLETED state will be used instead of executing\"\n            \" the task run.\"\n        ),\n    )\n    cache_expiration: Optional[DateTimeTZ] = Field(\n        default=None, description=\"Specifies when the cached state should expire.\"\n    )\n    task_version: Optional[str] = Field(\n        default=None, description=\"The version of the task being run.\"\n    )\n    empirical_policy: schemas.core.TaskRunPolicy = Field(\n        default_factory=schemas.core.TaskRunPolicy,\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags for the task run.\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    task_inputs: Dict[\n        str,\n        List[\n            Union[\n                schemas.core.TaskRunResult,\n                schemas.core.Parameter,\n                schemas.core.Constant,\n            ]\n        ],\n    ] = Field(\n        default_factory=dict,\n        description=\"The inputs to the task run.\",\n    )\n\n    @validator(\"name\", pre=True)\n    def set_name(cls, name):\n        return get_or_create_run_name(name)\n\n    @validator(\"cache_key\")\n    def validate_cache_key(cls, cache_key):\n        return validate_cache_key_length(cache_key)\n
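A hedged example of a task run payload; the task key, dynamic key, and cache key values are placeholders chosen for illustration.

```python
from uuid import uuid4

from prefect.server.schemas.actions import TaskRunCreate

task_run = TaskRunCreate(
    flow_run_id=uuid4(),               # placeholder parent flow run
    task_key="my_flow.transform",      # unique identifier for the task being run
    dynamic_key="0",                   # differentiates repeat runs of the same task
    cache_key="transform-2024-05-03",  # optional; a matching COMPLETED state is reused
    tags=["docs-example"],
)
```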
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunCreate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunUpdate","title":"TaskRunUpdate","text":"

      Bases: ActionBaseModel

Data used by the Prefect REST API to update a task run.

      Source code in prefect/server/schemas/actions.py
      class TaskRunUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a task run\"\"\"\n\n    name: str = Field(\n        default_factory=lambda: generate_slug(2), examples=[\"my-task-run\"]\n    )\n\n    @validator(\"name\", pre=True)\n    def set_name(cls, name):\n        return get_or_create_run_name(name)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunUpdate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableCreate","title":"VariableCreate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to create a Variable.

      Source code in prefect/server/schemas/actions.py
      class VariableCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a Variable.\"\"\"\n\n    name: str = Field(\n        default=...,\n        description=\"The name of the variable\",\n        examples=[\"my-variable\"],\n        max_length=schemas.core.MAX_VARIABLE_NAME_LENGTH,\n    )\n    value: str = Field(\n        default=...,\n        description=\"The value of the variable\",\n        examples=[\"my-value\"],\n        max_length=schemas.core.MAX_VARIABLE_VALUE_LENGTH,\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of variable tags\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n\n    # validators\n    _validate_name_format = validator(\"name\", allow_reuse=True)(validate_variable_name)\n
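A minimal sketch of a variable payload; the name sticks to lowercase letters and underscores, which is the conservative choice given the `validate_variable_name` validator.

```python
from prefect.server.schemas.actions import VariableCreate

variable = VariableCreate(
    name="smtp_server",           # names are checked by validate_variable_name
    value="smtp.example.com",
    tags=["docs-example"],
)
```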
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableCreate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableUpdate","title":"VariableUpdate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to update a Variable.

      Source code in prefect/server/schemas/actions.py
      class VariableUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a Variable.\"\"\"\n\n    name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the variable\",\n        examples=[\"my-variable\"],\n        max_length=schemas.core.MAX_VARIABLE_NAME_LENGTH,\n    )\n    value: Optional[str] = Field(\n        default=None,\n        description=\"The value of the variable\",\n        examples=[\"my-value\"],\n        max_length=schemas.core.MAX_VARIABLE_VALUE_LENGTH,\n    )\n    tags: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of variable tags\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n\n    # validators\n    _validate_name_format = validator(\"name\", allow_reuse=True)(validate_variable_name)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableUpdate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolCreate","title":"WorkPoolCreate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to create a work pool.

      Source code in prefect/server/schemas/actions.py
      class WorkPoolCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a work pool.\"\"\"\n\n    name: str = Field(..., description=\"The name of the work pool.\")\n    description: Optional[str] = Field(None, description=\"The work pool description.\")\n    type: str = Field(description=\"The work pool type.\", default=\"prefect-agent\")\n    base_job_template: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The work pool's base job template.\"\n    )\n    is_paused: bool = Field(\n        default=False,\n        description=\"Pausing the work pool stops the delivery of all work.\",\n    )\n    concurrency_limit: Optional[NonNegativeInteger] = Field(\n        default=None, description=\"A concurrency limit for the work pool.\"\n    )\n\n    _validate_base_job_template = validator(\"base_job_template\", allow_reuse=True)(\n        validate_base_job_template\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
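A short sketch of a work pool payload; the pool name and type here are illustrative, and `base_job_template` is left at its empty default.

```python
from prefect.server.schemas.actions import WorkPoolCreate

pool = WorkPoolCreate(
    name="my-docker-pool",
    type="docker",                # defaults to "prefect-agent" when omitted
    description="Pool for containerized flow runs",
    concurrency_limit=5,          # optional, must be non-negative
)
```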
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolCreate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolUpdate","title":"WorkPoolUpdate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to update a work pool.

      Source code in prefect/server/schemas/actions.py
      class WorkPoolUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a work pool.\"\"\"\n\n    description: Optional[str] = Field(None)\n    is_paused: Optional[bool] = Field(None)\n    base_job_template: Optional[Dict[str, Any]] = Field(None)\n    concurrency_limit: Optional[NonNegativeInteger] = Field(None)\n\n    _validate_base_job_template = validator(\"base_job_template\", allow_reuse=True)(\n        validate_base_job_template\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolUpdate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueCreate","title":"WorkQueueCreate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to create a work queue.

      Source code in prefect/server/schemas/actions.py
      class WorkQueueCreate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to create a work queue.\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the work queue.\")\n    description: Optional[str] = Field(\n        default=\"\", description=\"An optional description for the work queue.\"\n    )\n    is_paused: bool = Field(\n        default=False, description=\"Whether or not the work queue is paused.\"\n    )\n    concurrency_limit: Optional[NonNegativeInteger] = Field(\n        None, description=\"The work queue's concurrency limit.\"\n    )\n    priority: Optional[PositiveInteger] = Field(\n        None,\n        description=(\n            \"The queue's priority. Lower values are higher priority (1 is the highest).\"\n        ),\n    )\n\n    # DEPRECATED\n\n    filter: Optional[schemas.core.QueueFilter] = Field(\n        None,\n        description=\"DEPRECATED: Filter criteria for the work queue.\",\n        deprecated=True,\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
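A minimal sketch of a work queue payload; the name and limits are illustrative.

```python
from prefect.server.schemas.actions import WorkQueueCreate

queue = WorkQueueCreate(
    name="high-priority",
    description="Queue for time-sensitive runs",
    priority=1,                   # lower values are higher priority; 1 is the highest
    concurrency_limit=10,         # optional, must be non-negative
)
```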
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueCreate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueUpdate","title":"WorkQueueUpdate","text":"

      Bases: ActionBaseModel

      Data used by the Prefect REST API to update a work queue.

      Source code in prefect/server/schemas/actions.py
      class WorkQueueUpdate(ActionBaseModel):\n    \"\"\"Data used by the Prefect REST API to update a work queue.\"\"\"\n\n    name: Optional[str] = Field(None)\n    description: Optional[str] = Field(None)\n    is_paused: bool = Field(\n        default=False, description=\"Whether or not the work queue is paused.\"\n    )\n    concurrency_limit: Optional[NonNegativeInteger] = Field(None)\n    priority: Optional[PositiveInteger] = Field(None)\n    last_polled: Optional[DateTimeTZ] = Field(None)\n\n    # DEPRECATED\n\n    filter: Optional[schemas.core.QueueFilter] = Field(\n        None,\n        description=\"DEPRECATED: Filter criteria for the work queue.\",\n        deprecated=True,\n    )\n
      "},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueUpdate.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/core/","title":"server.schemas.core","text":""},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core","title":"prefect.server.schemas.core","text":"

      Full schemas of Prefect REST API objects.

      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Agent","title":"Agent","text":"

      Bases: ORMBaseModel

An ORM representation of an agent.

      Source code in prefect/server/schemas/core.py
      class Agent(ORMBaseModel):\n    \"\"\"An ORM representation of an agent\"\"\"\n\n    name: str = Field(\n        default_factory=lambda: generate_slug(2),\n        description=(\n            \"The name of the agent. If a name is not provided, it will be\"\n            \" auto-generated.\"\n        ),\n    )\n    work_queue_id: UUID = Field(\n        default=..., description=\"The work queue with which the agent is associated.\"\n    )\n    last_activity_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The last time this agent polled for work.\"\n    )\n
      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockDocument","title":"BlockDocument","text":"

      Bases: ORMBaseModel

      An ORM representation of a block document.

      Source code in prefect/server/schemas/core.py
      class BlockDocument(ORMBaseModel):\n    \"\"\"An ORM representation of a block document.\"\"\"\n\n    name: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The block document's name. Not required for anonymous block documents.\"\n        ),\n    )\n    data: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The block document's data\"\n    )\n    block_schema_id: UUID = Field(default=..., description=\"A block schema ID\")\n    block_schema: Optional[BlockSchema] = Field(\n        default=None, description=\"The associated block schema\"\n    )\n    block_type_id: UUID = Field(default=..., description=\"A block type ID\")\n    block_type_name: Optional[str] = Field(\n        default=None, description=\"The associated block type's name\"\n    )\n    block_type: Optional[BlockType] = Field(\n        default=None, description=\"The associated block type\"\n    )\n    block_document_references: Dict[str, Dict[str, Any]] = Field(\n        default_factory=dict, description=\"Record of the block document's references\"\n    )\n    is_anonymous: bool = Field(\n        default=False,\n        description=(\n            \"Whether the block is anonymous (anonymous blocks are usually created by\"\n            \" Prefect automatically)\"\n        ),\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        # the BlockDocumentCreate subclass allows name=None\n        # and will inherit this validator\n        return raise_on_name_with_banned_characters(v)\n\n    @root_validator\n    def validate_name_is_present_if_not_anonymous(cls, values):\n        return validate_name_present_on_nonanonymous_blocks(values)\n\n    @classmethod\n    async def from_orm_model(\n        cls,\n        session,\n        orm_block_document: \"prefect.server.database.orm_models.ORMBlockDocument\",\n        include_secrets: bool = False,\n    ):\n        data = await orm_block_document.decrypt_data(session=session)\n        # if secrets are not included, obfuscate them based on the schema's\n        # `secret_fields`. Note this walks any nested blocks as well. If the\n        # nested blocks were recovered from named blocks, they will already\n        # be obfuscated, but if nested fields were hardcoded into the parent\n        # blocks data, this is the only opportunity to obfuscate them.\n        if not include_secrets:\n            flat_data = dict_to_flatdict(data)\n            # iterate over the (possibly nested) secret fields\n            # and obfuscate their data\n            for secret_field in orm_block_document.block_schema.fields.get(\n                \"secret_fields\", []\n            ):\n                secret_key = tuple(secret_field.split(\".\"))\n                if flat_data.get(secret_key) is not None:\n                    flat_data[secret_key] = obfuscate_string(flat_data[secret_key])\n                # If a wildcard (*) is in the current secret key path, we take the portion\n                # of the path before the wildcard and compare it to the same level of each\n                # key. 
A match means that the field is nested under the secret key and should\n                # be obfuscated.\n                elif \"*\" in secret_key:\n                    wildcard_index = secret_key.index(\"*\")\n                    for data_key in flat_data.keys():\n                        if secret_key[0:wildcard_index] == data_key[0:wildcard_index]:\n                            flat_data[data_key] = obfuscate(flat_data[data_key])\n            data = flatdict_to_dict(flat_data)\n\n        return cls(\n            id=orm_block_document.id,\n            created=orm_block_document.created,\n            updated=orm_block_document.updated,\n            name=orm_block_document.name,\n            data=data,\n            block_schema_id=orm_block_document.block_schema_id,\n            block_schema=orm_block_document.block_schema,\n            block_type_id=orm_block_document.block_type_id,\n            block_type_name=orm_block_document.block_type_name,\n            block_type=orm_block_document.block_type,\n            is_anonymous=orm_block_document.is_anonymous,\n        )\n
      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockDocumentReference","title":"BlockDocumentReference","text":"

      Bases: ORMBaseModel

      An ORM representation of a block document reference.

      Source code in prefect/server/schemas/core.py
      class BlockDocumentReference(ORMBaseModel):\n    \"\"\"An ORM representation of a block document reference.\"\"\"\n\n    parent_block_document_id: UUID = Field(\n        default=..., description=\"ID of block document the reference is nested within\"\n    )\n    parent_block_document: Optional[BlockDocument] = Field(\n        default=None, description=\"The block document the reference is nested within\"\n    )\n    reference_block_document_id: UUID = Field(\n        default=..., description=\"ID of the nested block document\"\n    )\n    reference_block_document: Optional[BlockDocument] = Field(\n        default=None, description=\"The nested block document\"\n    )\n    name: str = Field(\n        default=..., description=\"The name that the reference is nested under\"\n    )\n\n    @root_validator\n    def validate_parent_and_ref_are_different(cls, values):\n        return validate_parent_and_ref_diff(values)\n
      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockSchema","title":"BlockSchema","text":"

      Bases: ORMBaseModel

      An ORM representation of a block schema.

      Source code in prefect/server/schemas/core.py
      class BlockSchema(ORMBaseModel):\n    \"\"\"An ORM representation of a block schema.\"\"\"\n\n    checksum: str = Field(default=..., description=\"The block schema's unique checksum\")\n    fields: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The block schema's field schema\"\n    )\n    block_type_id: Optional[UUID] = Field(default=..., description=\"A block type ID\")\n    block_type: Optional[BlockType] = Field(\n        default=None, description=\"The associated block type\"\n    )\n    capabilities: List[str] = Field(\n        default_factory=list,\n        description=\"A list of Block capabilities\",\n    )\n    version: str = Field(\n        default=DEFAULT_BLOCK_SCHEMA_VERSION,\n        description=\"Human readable identifier for the block schema\",\n    )\n
      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockSchemaReference","title":"BlockSchemaReference","text":"

      Bases: ORMBaseModel

      An ORM representation of a block schema reference.

      Source code in prefect/server/schemas/core.py
      class BlockSchemaReference(ORMBaseModel):\n    \"\"\"An ORM representation of a block schema reference.\"\"\"\n\n    parent_block_schema_id: UUID = Field(\n        default=..., description=\"ID of block schema the reference is nested within\"\n    )\n    parent_block_schema: Optional[BlockSchema] = Field(\n        default=None, description=\"The block schema the reference is nested within\"\n    )\n    reference_block_schema_id: UUID = Field(\n        default=..., description=\"ID of the nested block schema\"\n    )\n    reference_block_schema: Optional[BlockSchema] = Field(\n        default=None, description=\"The nested block schema\"\n    )\n    name: str = Field(\n        default=..., description=\"The name that the reference is nested under\"\n    )\n
      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockType","title":"BlockType","text":"

      Bases: ORMBaseModel

An ORM representation of a block type.

      Source code in prefect/server/schemas/core.py
      class BlockType(ORMBaseModel):\n    \"\"\"An ORM representation of a block type\"\"\"\n\n    name: str = Field(default=..., description=\"A block type's name\")\n    slug: str = Field(default=..., description=\"A block type's slug\")\n    logo_url: Optional[HttpUrl] = Field(\n        default=None, description=\"Web URL for the block type's logo\"\n    )\n    documentation_url: Optional[HttpUrl] = Field(\n        default=None, description=\"Web URL for the block type's documentation\"\n    )\n    description: Optional[str] = Field(\n        default=None,\n        description=\"A short blurb about the corresponding block's intended use\",\n    )\n    code_example: Optional[str] = Field(\n        default=None,\n        description=\"A code snippet demonstrating use of the corresponding block\",\n    )\n    is_protected: bool = Field(\n        default=False, description=\"Protected block types cannot be modified via API.\"\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.ConcurrencyLimit","title":"ConcurrencyLimit","text":"

      Bases: ORMBaseModel

      An ORM representation of a concurrency limit.

      Source code in prefect/server/schemas/core.py
      class ConcurrencyLimit(ORMBaseModel):\n    \"\"\"An ORM representation of a concurrency limit.\"\"\"\n\n    tag: str = Field(\n        default=..., description=\"A tag the concurrency limit is applied to.\"\n    )\n    concurrency_limit: int = Field(default=..., description=\"The concurrency limit.\")\n    active_slots: List[UUID] = Field(\n        default_factory=list,\n        description=\"A list of active run ids using a concurrency slot\",\n    )\n
      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.ConcurrencyLimitV2","title":"ConcurrencyLimitV2","text":"

      Bases: ORMBaseModel

      An ORM representation of a v2 concurrency limit.

      Source code in prefect/server/schemas/core.py
      class ConcurrencyLimitV2(ORMBaseModel):\n    \"\"\"An ORM representation of a v2 concurrency limit.\"\"\"\n\n    active: bool = Field(\n        default=True, description=\"Whether the concurrency limit is active.\"\n    )\n    name: str = Field(default=..., description=\"The name of the concurrency limit.\")\n    limit: int = Field(default=..., description=\"The concurrency limit.\")\n    active_slots: int = Field(default=0, description=\"The number of active slots.\")\n    denied_slots: int = Field(default=0, description=\"The number of denied slots.\")\n    slot_decay_per_second: float = Field(\n        default=0,\n        description=\"The decay rate for active slots when used as a rate limit.\",\n    )\n    avg_slot_occupancy_seconds: float = Field(\n        default=2.0, description=\"The average amount of time a slot is occupied.\"\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
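A hedged sketch of the two flavours this schema supports, assuming `ORMBaseModel` supplies default `id`/`created`/`updated` values so the model can be built directly from `prefect.server.schemas.core`.

```python
from prefect.server.schemas.core import ConcurrencyLimitV2

# A plain concurrency limit: at most 10 slots occupied at once.
db_limit = ConcurrencyLimitV2(name="database-connections", limit=10)

# Rate-limit style: occupied slots decay over time instead of being
# released explicitly (slot_decay_per_second > 0).
api_limit = ConcurrencyLimitV2(name="external-api", limit=5, slot_decay_per_second=1.0)
```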
      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Configuration","title":"Configuration","text":"

      Bases: ORMBaseModel

      An ORM representation of account info.

      Source code in prefect/server/schemas/core.py
      class Configuration(ORMBaseModel):\n    \"\"\"An ORM representation of account info.\"\"\"\n\n    key: str = Field(default=..., description=\"Account info key\")\n    value: Dict[str, Any] = Field(default=..., description=\"Account info\")\n
      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Deployment","title":"Deployment","text":"

      Bases: DeprecatedInfraOverridesField, ORMBaseModel

      An ORM representation of deployment data.

      Source code in prefect/server/schemas/core.py
      class Deployment(DeprecatedInfraOverridesField, ORMBaseModel):\n    \"\"\"An ORM representation of deployment data.\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the deployment.\")\n    version: Optional[str] = Field(\n        default=None, description=\"An optional version for the deployment.\"\n    )\n    description: Optional[str] = Field(\n        default=None, description=\"A description for the deployment.\"\n    )\n    flow_id: UUID = Field(\n        default=..., description=\"The flow id associated with the deployment.\"\n    )\n    schedule: Optional[schedules.SCHEDULE_TYPES] = Field(\n        default=None, description=\"A schedule for the deployment.\"\n    )\n    is_schedule_active: bool = Field(\n        default=True, description=\"Whether or not the deployment schedule is active.\"\n    )\n    paused: bool = Field(\n        default=False, description=\"Whether or not the deployment is paused.\"\n    )\n    schedules: List[DeploymentSchedule] = Field(\n        default_factory=list, description=\"A list of schedules for the deployment.\"\n    )\n    job_variables: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Overrides to apply to flow run infrastructure at runtime.\",\n    )\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Parameters for flow runs scheduled by the deployment.\",\n    )\n    pull_steps: Optional[List[dict]] = Field(\n        default=None,\n        description=\"Pull steps for cloning and running this deployment.\",\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags for the deployment\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    work_queue_name: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The work queue for the deployment. 
If no work queue is set, work will not\"\n            \" be scheduled.\"\n        ),\n    )\n    last_polled: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The last time the deployment was polled for status updates.\",\n    )\n    parameter_openapi_schema: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=\"The parameter schema of the flow, including defaults.\",\n    )\n    path: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the working directory for the workflow, relative to remote\"\n            \" storage or an absolute path.\"\n        ),\n    )\n    entrypoint: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the entrypoint for the workflow, relative to the `path`.\"\n        ),\n    )\n    manifest_path: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the flow's manifest file, relative to the chosen storage.\"\n        ),\n    )\n    storage_document_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The block document defining storage used for this flow.\",\n    )\n    infrastructure_document_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The block document defining infrastructure to use for flow runs.\",\n    )\n    created_by: Optional[CreatedBy] = Field(\n        default=None,\n        description=\"Optional information about the creator of this deployment.\",\n    )\n    updated_by: Optional[UpdatedBy] = Field(\n        default=None,\n        description=\"Optional information about the updater of this deployment.\",\n    )\n    work_queue_id: UUID = Field(\n        default=None,\n        description=(\n            \"The id of the work pool queue to which this deployment is assigned.\"\n        ),\n    )\n    enforce_parameter_schema: bool = Field(\n        default=False,\n        description=(\n            \"Whether or not the deployment should enforce the parameter schema.\"\n        ),\n    )\n\n    class Config:\n        allow_population_by_field_name = True\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Flow","title":"Flow","text":"

      Bases: ORMBaseModel

      An ORM representation of flow data.

      Source code in prefect/server/schemas/core.py
      class Flow(ORMBaseModel):\n    \"\"\"An ORM representation of flow data.\"\"\"\n\n    name: str = Field(\n        default=..., description=\"The name of the flow\", examples=[\"my-flow\"]\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of flow tags\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRun","title":"FlowRun","text":"

      Bases: ORMBaseModel

      An ORM representation of flow run data.

      Source code in prefect/server/schemas/core.py
      class FlowRun(ORMBaseModel):\n    \"\"\"An ORM representation of flow run data.\"\"\"\n\n    name: str = Field(\n        default_factory=lambda: generate_slug(2),\n        description=(\n            \"The name of the flow run. Defaults to a random slug if not specified.\"\n        ),\n        examples=[\"my-flow-run\"],\n    )\n    flow_id: UUID = Field(default=..., description=\"The id of the flow being run.\")\n    state_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the flow run's current state.\"\n    )\n    deployment_id: Optional[UUID] = Field(\n        default=None,\n        description=(\n            \"The id of the deployment associated with this flow run, if available.\"\n        ),\n    )\n    deployment_version: Optional[str] = Field(\n        default=None,\n        description=\"The version of the deployment associated with this flow run.\",\n        examples=[\"1.0\"],\n    )\n    work_queue_name: Optional[str] = Field(\n        default=None, description=\"The work queue that handled this flow run.\"\n    )\n    flow_version: Optional[str] = Field(\n        default=None,\n        description=\"The version of the flow executed in this flow run.\",\n        examples=[\"1.0\"],\n    )\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict, description=\"Parameters for the flow run.\"\n    )\n    idempotency_key: Optional[str] = Field(\n        default=None,\n        description=(\n            \"An optional idempotency key for the flow run. Used to ensure the same flow\"\n            \" run is not created multiple times.\"\n        ),\n    )\n    context: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Additional context for the flow run.\",\n        examples=[{\"my_var\": \"my_value\"}],\n    )\n    empirical_policy: FlowRunPolicy = Field(\n        default_factory=FlowRunPolicy,\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags on the flow run\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    parent_task_run_id: Optional[UUID] = Field(\n        default=None,\n        description=(\n            \"If the flow run is a subflow, the id of the 'dummy' task in the parent\"\n            \" flow used to track subflow state.\"\n        ),\n    )\n\n    state_type: Optional[states.StateType] = Field(\n        default=None, description=\"The type of the current flow run state.\"\n    )\n    state_name: Optional[str] = Field(\n        default=None, description=\"The name of the current flow run state.\"\n    )\n    run_count: int = Field(\n        default=0, description=\"The number of times the flow run was executed.\"\n    )\n    expected_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The flow run's expected start time.\",\n    )\n    next_scheduled_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The next time the flow run is scheduled to start.\",\n    )\n    start_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual start time.\"\n    )\n    end_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual end time.\"\n    )\n    total_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=(\n            \"Total run time. 
If the flow run was executed multiple times, the time of\"\n            \" each run will be summed.\"\n        ),\n    )\n    estimated_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"A real-time estimate of the total run time.\",\n    )\n    estimated_start_time_delta: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"The difference between actual and expected start time.\",\n    )\n    auto_scheduled: bool = Field(\n        default=False,\n        description=\"Whether or not the flow run was automatically scheduled.\",\n    )\n    infrastructure_document_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The block document defining infrastructure to use this flow run.\",\n    )\n    infrastructure_pid: Optional[str] = Field(\n        default=None,\n        description=\"The id of the flow run as returned by an infrastructure block.\",\n    )\n    created_by: Optional[CreatedBy] = Field(\n        default=None,\n        description=\"Optional information about the creator of this flow run.\",\n    )\n    work_queue_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the run's work pool queue.\"\n    )\n\n    # relationships\n    # flow: Flow = None\n    # task_runs: List[\"TaskRun\"] = Field(default_factory=list)\n    state: Optional[states.State] = Field(\n        default=None, description=\"The current state of the flow run.\"\n    )\n    # parent_task_run: \"TaskRun\" = None\n\n    job_variables: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=\"Variables used as overrides in the base job template\",\n    )\n\n    @validator(\"name\", pre=True)\n    def set_name(cls, name):\n        return get_or_create_run_name(name)\n\n    def __eq__(self, other: Any) -> bool:\n        \"\"\"\n        Check for \"equality\" to another flow run schema\n\n        Estimates times are rolling and will always change with repeated queries for\n        a flow run so we ignore them during equality checks.\n        \"\"\"\n        if isinstance(other, FlowRun):\n            exclude_fields = {\"estimated_run_time\", \"estimated_start_time_delta\"}\n            return self.dict(exclude=exclude_fields) == other.dict(\n                exclude=exclude_fields\n            )\n        return super().__eq__(other)\n
      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRunNotificationPolicy","title":"FlowRunNotificationPolicy","text":"

      Bases: ORMBaseModel

      An ORM representation of a flow run notification.

      Source code in prefect/server/schemas/core.py
      class FlowRunNotificationPolicy(ORMBaseModel):\n    \"\"\"An ORM representation of a flow run notification.\"\"\"\n\n    is_active: bool = Field(\n        default=True, description=\"Whether the policy is currently active\"\n    )\n    state_names: List[str] = Field(\n        default=..., description=\"The flow run states that trigger notifications\"\n    )\n    tags: List[str] = Field(\n        default=...,\n        description=\"The flow run tags that trigger notifications (set [] to disable)\",\n    )\n    block_document_id: UUID = Field(\n        default=..., description=\"The block document ID used for sending notifications\"\n    )\n    message_template: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A templatable notification message. Use {braces} to add variables.\"\n            \" Valid variables include:\"\n            f\" {listrepr(sorted(FLOW_RUN_NOTIFICATION_TEMPLATE_KWARGS), sep=', ')}\"\n        ),\n        examples=[\n            \"Flow run {flow_run_name} with id {flow_run_id} entered state\"\n            \" {flow_run_state_name}.\"\n        ],\n    )\n\n    @validator(\"message_template\")\n    def validate_message_template_variables(cls, v):\n        return validate_message_template_variables(v)\n
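A hedged example of building a notification policy; the block document id is a placeholder for a real notification block, and the state names are illustrative:

from uuid import uuid4
from prefect.server.schemas.core import FlowRunNotificationPolicy

policy = FlowRunNotificationPolicy(
    state_names=["Failed", "Crashed"],   # states that trigger notifications
    tags=[],                             # empty list disables tag filtering
    block_document_id=uuid4(),           # placeholder; use a real block document id
    message_template=(
        "Flow run {flow_run_name} with id {flow_run_id} entered state"
        " {flow_run_state_name}."
    ),
)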
      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRunPolicy","title":"FlowRunPolicy","text":"

      Bases: PrefectBaseModel

Defines how a flow run should retry.

      Source code in prefect/server/schemas/core.py
      class FlowRunPolicy(PrefectBaseModel):\n    \"\"\"Defines of how a flow run should retry.\"\"\"\n\n    # TODO: Determine how to separate between infrastructure and within-process level\n    #       retries\n    max_retries: int = Field(\n        default=0,\n        description=(\n            \"The maximum number of retries. Field is not used. Please use `retries`\"\n            \" instead.\"\n        ),\n        deprecated=True,\n    )\n    retry_delay_seconds: float = Field(\n        default=0,\n        description=(\n            \"The delay between retries. Field is not used. Please use `retry_delay`\"\n            \" instead.\"\n        ),\n        deprecated=True,\n    )\n    retries: Optional[int] = Field(default=None, description=\"The number of retries.\")\n    retry_delay: Optional[int] = Field(\n        default=None, description=\"The delay time between retries, in seconds.\"\n    )\n    pause_keys: Optional[set] = Field(\n        default_factory=set, description=\"Tracks pauses this run has observed.\"\n    )\n    resuming: Optional[bool] = Field(\n        default=False, description=\"Indicates if this run is resuming from a pause.\"\n    )\n\n    @root_validator\n    def populate_deprecated_fields(cls, values):\n        return set_run_policy_deprecated_fields(values)\n
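A small sketch of a retry policy using the non-deprecated retries and retry_delay fields; the values are illustrative:

from prefect.server.schemas.core import FlowRunPolicy

# Retry up to 3 times, waiting 10 seconds between attempts.
policy = FlowRunPolicy(retries=3, retry_delay=10)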
      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Log","title":"Log","text":"

      Bases: ORMBaseModel

      An ORM representation of log data.

      Source code in prefect/server/schemas/core.py
      class Log(ORMBaseModel):\n    \"\"\"An ORM representation of log data.\"\"\"\n\n    name: str = Field(default=..., description=\"The logger name.\")\n    level: int = Field(default=..., description=\"The log level.\")\n    message: str = Field(default=..., description=\"The log message.\")\n    timestamp: DateTimeTZ = Field(default=..., description=\"The log timestamp.\")\n    flow_run_id: Optional[UUID] = Field(\n        default=None, description=\"The flow run ID associated with the log.\"\n    )\n    task_run_id: Optional[UUID] = Field(\n        default=None, description=\"The task run ID associated with the log.\"\n    )\n
      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.QueueFilter","title":"QueueFilter","text":"

      Bases: PrefectBaseModel

      Filter criteria definition for a work queue.

      Source code in prefect/server/schemas/core.py
      class QueueFilter(PrefectBaseModel):\n    \"\"\"Filter criteria definition for a work queue.\"\"\"\n\n    tags: Optional[List[str]] = Field(\n        default=None,\n        description=\"Only include flow runs with these tags in the work queue.\",\n    )\n    deployment_ids: Optional[List[UUID]] = Field(\n        default=None,\n        description=\"Only include flow runs from these deployments in the work queue.\",\n    )\n
      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.SavedSearch","title":"SavedSearch","text":"

      Bases: ORMBaseModel

      An ORM representation of saved search data. Represents a set of filter criteria.

      Source code in prefect/server/schemas/core.py
      class SavedSearch(ORMBaseModel):\n    \"\"\"An ORM representation of saved search data. Represents a set of filter criteria.\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the saved search.\")\n    filters: List[SavedSearchFilter] = Field(\n        default_factory=list, description=\"The filter set for the saved search.\"\n    )\n
      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.SavedSearchFilter","title":"SavedSearchFilter","text":"

      Bases: PrefectBaseModel

      A filter for a saved search model. Intended for use by the Prefect UI.

      Source code in prefect/server/schemas/core.py
      class SavedSearchFilter(PrefectBaseModel):\n    \"\"\"A filter for a saved search model. Intended for use by the Prefect UI.\"\"\"\n\n    object: str = Field(default=..., description=\"The object over which to filter.\")\n    property: str = Field(\n        default=..., description=\"The property of the object on which to filter.\"\n    )\n    type: str = Field(default=..., description=\"The type of the property.\")\n    operation: str = Field(\n        default=...,\n        description=\"The operator to apply to the object. For example, `equals`.\",\n    )\n    value: Any = Field(\n        default=..., description=\"A JSON-compatible value for the filter.\"\n    )\n
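An illustrative SavedSearchFilter and the SavedSearch that carries it; the object, property, and operation values below are placeholders standing in for what the Prefect UI would send:

from prefect.server.schemas.core import SavedSearch, SavedSearchFilter

state_filter = SavedSearchFilter(
    object="flow_run",     # placeholder object name
    property="state",      # placeholder property
    type="string",
    operation="equals",
    value="FAILED",
)
search = SavedSearch(name="failed-flow-runs", filters=[state_filter])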
      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.TaskRun","title":"TaskRun","text":"

      Bases: ORMBaseModel

      An ORM representation of task run data.

      Source code in prefect/server/schemas/core.py
      class TaskRun(ORMBaseModel):\n    \"\"\"An ORM representation of task run data.\"\"\"\n\n    name: str = Field(\n        default_factory=lambda: generate_slug(2), examples=[\"my-task-run\"]\n    )\n    flow_run_id: Optional[UUID] = Field(\n        default=None, description=\"The flow run id of the task run.\"\n    )\n    task_key: str = Field(\n        default=..., description=\"A unique identifier for the task being run.\"\n    )\n    dynamic_key: str = Field(\n        default=...,\n        description=(\n            \"A dynamic key used to differentiate between multiple runs of the same task\"\n            \" within the same flow run.\"\n        ),\n    )\n    cache_key: Optional[str] = Field(\n        default=None,\n        description=(\n            \"An optional cache key. If a COMPLETED state associated with this cache key\"\n            \" is found, the cached COMPLETED state will be used instead of executing\"\n            \" the task run.\"\n        ),\n    )\n    cache_expiration: Optional[DateTimeTZ] = Field(\n        default=None, description=\"Specifies when the cached state should expire.\"\n    )\n    task_version: Optional[str] = Field(\n        default=None, description=\"The version of the task being run.\"\n    )\n    empirical_policy: TaskRunPolicy = Field(\n        default_factory=TaskRunPolicy,\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags for the task run.\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    state_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the current task run state.\"\n    )\n    task_inputs: Dict[str, List[Union[TaskRunResult, Parameter, Constant]]] = Field(\n        default_factory=dict,\n        description=(\n            \"Tracks the source of inputs to a task run. Used for internal bookkeeping.\"\n        ),\n    )\n    state_type: Optional[states.StateType] = Field(\n        default=None, description=\"The type of the current task run state.\"\n    )\n    state_name: Optional[str] = Field(\n        default=None, description=\"The name of the current task run state.\"\n    )\n    run_count: int = Field(\n        default=0, description=\"The number of times the task run has been executed.\"\n    )\n    flow_run_run_count: int = Field(\n        default=0,\n        description=(\n            \"If the parent flow has retried, this indicates the flow retry this run is\"\n            \" associated with.\"\n        ),\n    )\n    expected_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The task run's expected start time.\",\n    )\n\n    # the next scheduled start time will be populated\n    # whenever the run is in a scheduled state\n    next_scheduled_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The next time the task run is scheduled to start.\",\n    )\n    start_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual start time.\"\n    )\n    end_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual end time.\"\n    )\n    total_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=(\n            \"Total run time. 
If the task run was executed multiple times, the time of\"\n            \" each run will be summed.\"\n        ),\n    )\n    estimated_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"A real-time estimate of total run time.\",\n    )\n    estimated_start_time_delta: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"The difference between actual and expected start time.\",\n    )\n\n    # relationships\n    # flow_run: FlowRun = None\n    # subflow_runs: List[FlowRun] = Field(default_factory=list)\n    state: Optional[states.State] = Field(\n        default=None, description=\"The current task run state.\"\n    )\n\n    @validator(\"name\", pre=True)\n    def set_name(cls, name):\n        return get_or_create_run_name(name)\n\n    @validator(\"cache_key\")\n    def validate_cache_key(cls, cache_key):\n        return validate_cache_key_length(cache_key)\n
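A minimal TaskRun construction with placeholder identifiers; task_key and dynamic_key are the only required fields:

from uuid import uuid4
from prefect.server.schemas.core import TaskRun

task_run = TaskRun(
    task_key="my_flow.my_task",  # placeholder key for illustration
    dynamic_key="0",             # distinguishes repeated calls within one flow run
    flow_run_id=uuid4(),         # placeholder flow run id
    tags=["tag-1"],
)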
      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkPool","title":"WorkPool","text":"

      Bases: ORMBaseModel

      An ORM representation of a work pool

      Source code in prefect/server/schemas/core.py
      class WorkPool(ORMBaseModel):\n    \"\"\"An ORM representation of a work pool\"\"\"\n\n    name: str = Field(\n        description=\"The name of the work pool.\",\n    )\n    description: Optional[str] = Field(\n        default=None, description=\"A description of the work pool.\"\n    )\n    type: str = Field(description=\"The work pool type.\")\n    base_job_template: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The work pool's base job template.\"\n    )\n    is_paused: bool = Field(\n        default=False,\n        description=\"Pausing the work pool stops the delivery of all work.\",\n    )\n    concurrency_limit: Optional[NonNegativeInteger] = Field(\n        default=None, description=\"A concurrency limit for the work pool.\"\n    )\n    status: Optional[WorkPoolStatus] = Field(\n        default=None, description=\"The current status of the work pool.\"\n    )\n\n    # this required field has a default of None so that the custom validator\n    # below will be called and produce a more helpful error message\n    default_queue_id: UUID = Field(\n        None, description=\"The id of the pool's default queue.\"\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n\n    @validator(\"default_queue_id\", always=True)\n    def helpful_error_for_missing_default_queue_id(cls, v):\n        return validate_default_queue_id_not_none(v)\n\n    @classmethod\n    def from_orm(cls, work_pool: \"ORMWorkPool\") -> Self:\n        parsed: WorkPool = super().from_orm(work_pool)\n        if work_pool.type == \"prefect-agent\":\n            parsed.status = None\n        return parsed\n
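A sketch of constructing the WorkPool schema; default_queue_id must be supplied (its None default exists only to trigger a clearer validation error), and every value here is a placeholder:

from uuid import uuid4
from prefect.server.schemas.core import WorkPool

pool = WorkPool(
    name="my-process-pool",    # names containing banned characters are rejected by a validator
    type="process",            # placeholder work pool type
    default_queue_id=uuid4(),  # placeholder; normally assigned by the server
)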
      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkQueue","title":"WorkQueue","text":"

      Bases: ORMBaseModel

      An ORM representation of a work queue

      Source code in prefect/server/schemas/core.py
      class WorkQueue(ORMBaseModel):\n    \"\"\"An ORM representation of a work queue\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the work queue.\")\n    description: Optional[str] = Field(\n        default=\"\", description=\"An optional description for the work queue.\"\n    )\n    is_paused: bool = Field(\n        default=False, description=\"Whether or not the work queue is paused.\"\n    )\n    concurrency_limit: Optional[NonNegativeInteger] = Field(\n        default=None, description=\"An optional concurrency limit for the work queue.\"\n    )\n    priority: PositiveInteger = Field(\n        default=1,\n        description=(\n            \"The queue's priority. Lower values are higher priority (1 is the highest).\"\n        ),\n    )\n    # Will be required after a future migration\n    work_pool_id: Optional[UUID] = Field(\n        default=None, description=\"The work pool with which the queue is associated.\"\n    )\n    filter: Optional[QueueFilter] = Field(\n        default=None,\n        description=\"DEPRECATED: Filter criteria for the work queue.\",\n        deprecated=True,\n    )\n    last_polled: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The last time an agent polled this queue for work.\"\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        return raise_on_name_with_banned_characters(v)\n
      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkQueueHealthPolicy","title":"WorkQueueHealthPolicy","text":"

      Bases: PrefectBaseModel

      Source code in prefect/server/schemas/core.py
      class WorkQueueHealthPolicy(PrefectBaseModel):\n    maximum_late_runs: Optional[int] = Field(\n        default=0,\n        description=(\n            \"The maximum number of late runs in the work queue before it is deemed\"\n            \" unhealthy. Defaults to `0`.\"\n        ),\n    )\n    maximum_seconds_since_last_polled: Optional[int] = Field(\n        default=60,\n        description=(\n            \"The maximum number of time in seconds elapsed since work queue has been\"\n            \" polled before it is deemed unhealthy. Defaults to `60`.\"\n        ),\n    )\n\n    def evaluate_health_status(\n        self, late_runs_count: int, last_polled: Optional[DateTimeTZ] = None\n    ) -> bool:\n        \"\"\"\n        Given empirical information about the state of the work queue, evaluate its health status.\n\n        Args:\n            late_runs: the count of late runs for the work queue.\n            last_polled: the last time the work queue was polled, if available.\n\n        Returns:\n            bool: whether or not the work queue is healthy.\n        \"\"\"\n        healthy = True\n        if (\n            self.maximum_late_runs is not None\n            and late_runs_count > self.maximum_late_runs\n        ):\n            healthy = False\n\n        if self.maximum_seconds_since_last_polled is not None:\n            if (\n                last_polled is None\n                or pendulum.now(\"UTC\").diff(last_polled).in_seconds()\n                > self.maximum_seconds_since_last_polled\n            ):\n                healthy = False\n\n        return healthy\n
      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkQueueHealthPolicy.evaluate_health_status","title":"evaluate_health_status","text":"

      Given empirical information about the state of the work queue, evaluate its health status.

Parameters:

late_runs: the count of late runs for the work queue. (required)

last_polled (Optional[DateTimeTZ]): the last time the work queue was polled, if available. Defaults to None.

Returns:

bool (bool): whether or not the work queue is healthy.

      Source code in prefect/server/schemas/core.py
      def evaluate_health_status(\n    self, late_runs_count: int, last_polled: Optional[DateTimeTZ] = None\n) -> bool:\n    \"\"\"\n    Given empirical information about the state of the work queue, evaluate its health status.\n\n    Args:\n        late_runs: the count of late runs for the work queue.\n        last_polled: the last time the work queue was polled, if available.\n\n    Returns:\n        bool: whether or not the work queue is healthy.\n    \"\"\"\n    healthy = True\n    if (\n        self.maximum_late_runs is not None\n        and late_runs_count > self.maximum_late_runs\n    ):\n        healthy = False\n\n    if self.maximum_seconds_since_last_polled is not None:\n        if (\n            last_polled is None\n            or pendulum.now(\"UTC\").diff(last_polled).in_seconds()\n            > self.maximum_seconds_since_last_polled\n        ):\n            healthy = False\n\n    return healthy\n
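A short usage sketch of evaluate_health_status, assuming pendulum is available (Prefect uses it for timestamps); the counts and timestamps are illustrative:

import pendulum
from prefect.server.schemas.core import WorkQueueHealthPolicy

policy = WorkQueueHealthPolicy(maximum_late_runs=0, maximum_seconds_since_last_polled=60)

# Healthy: no late runs and the queue was polled just now.
print(policy.evaluate_health_status(late_runs_count=0, last_polled=pendulum.now("UTC")))  # True

# Unhealthy: the queue has never been polled (last_polled is None).
print(policy.evaluate_health_status(late_runs_count=0))  # False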
      "},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Worker","title":"Worker","text":"

      Bases: ORMBaseModel

      An ORM representation of a worker

      Source code in prefect/server/schemas/core.py
      class Worker(ORMBaseModel):\n    \"\"\"An ORM representation of a worker\"\"\"\n\n    name: str = Field(description=\"The name of the worker.\")\n    work_pool_id: UUID = Field(\n        description=\"The work pool with which the queue is associated.\"\n    )\n    last_heartbeat_time: datetime.datetime = Field(\n        None, description=\"The last time the worker process sent a heartbeat.\"\n    )\n    heartbeat_interval_seconds: Optional[int] = Field(\n        default=None,\n        description=(\n            \"The number of seconds to expect between heartbeats sent by the worker.\"\n        ),\n    )\n
      "},{"location":"api-ref/server/schemas/filters/","title":"server.schemas.filters","text":""},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters","title":"prefect.server.schemas.filters","text":"

      Schemas that define Prefect REST API filtering operations.

      Each filter schema includes logic for transforming itself into a SQL where clause.

      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilter","title":"ArtifactCollectionFilter","text":"

      Bases: PrefectOperatorFilterBaseModel

      Filter artifact collections. Only artifact collections matching all criteria will be returned

      Source code in prefect/server/schemas/filters.py
      class ArtifactCollectionFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter artifact collections. Only artifact collections matching all criteria will be returned\"\"\"\n\n    latest_id: Optional[ArtifactCollectionFilterLatestId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.id`\"\n    )\n    key: Optional[ArtifactCollectionFilterKey] = Field(\n        default=None, description=\"Filter criteria for `Artifact.key`\"\n    )\n    flow_run_id: Optional[ArtifactCollectionFilterFlowRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.flow_run_id`\"\n    )\n    task_run_id: Optional[ArtifactCollectionFilterTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.task_run_id`\"\n    )\n    type: Optional[ArtifactCollectionFilterType] = Field(\n        default=None, description=\"Filter criteria for `Artifact.type`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.latest_id is not None:\n            filters.append(self.latest_id.as_sql_filter(db))\n        if self.key is not None:\n            filters.append(self.key.as_sql_filter(db))\n        if self.flow_run_id is not None:\n            filters.append(self.flow_run_id.as_sql_filter(db))\n        if self.task_run_id is not None:\n            filters.append(self.task_run_id.as_sql_filter(db))\n        if self.type is not None:\n            filters.append(self.type.as_sql_filter(db))\n\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilter.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
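A hedged illustration of the json helper on these schema models; include_secrets only changes the output for models that contain SecretStr or SecretBytes fields, so for this filter both calls return the same JSON:

from prefect.server.schemas.filters import (
    ArtifactCollectionFilter,
    ArtifactCollectionFilterKey,
)

f = ArtifactCollectionFilter(key=ArtifactCollectionFilterKey(like_="report-%"))
print(f.json())                      # secret fields obfuscated (none present here)
print(f.json(include_secrets=True))  # secret fields revealed when present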
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterFlowRunId","title":"ArtifactCollectionFilterFlowRunId","text":"

      Bases: PrefectFilterBaseModel

      Filter by ArtifactCollection.flow_run_id.

      Source code in prefect/server/schemas/filters.py
      class ArtifactCollectionFilterFlowRunId(PrefectFilterBaseModel):\n    \"\"\"Filter by `ArtifactCollection.flow_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.ArtifactCollection.flow_run_id.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterFlowRunId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterKey","title":"ArtifactCollectionFilterKey","text":"

      Bases: PrefectFilterBaseModel

      Filter by ArtifactCollection.key.

      Source code in prefect/server/schemas/filters.py
      class ArtifactCollectionFilterKey(PrefectFilterBaseModel):\n    \"\"\"Filter by `ArtifactCollection.key`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact keys to include\"\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match artifact keys against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        examples=[\"my-artifact-%\"],\n    )\n\n    exists_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"If `true`, only include artifacts with a non-null key. If `false`, \"\n            \"only include artifacts with a null key. Should return all rows in \"\n            \"the ArtifactCollection table if specified.\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.ArtifactCollection.key.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.ArtifactCollection.key.ilike(f\"%{self.like_}%\"))\n        if self.exists_ is not None:\n            filters.append(\n                db.ArtifactCollection.key.isnot(None)\n                if self.exists_\n                else db.ArtifactCollection.key.is_(None)\n            )\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterKey.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterLatestId","title":"ArtifactCollectionFilterLatestId","text":"

      Bases: PrefectFilterBaseModel

      Filter by ArtifactCollection.latest_id.

      Source code in prefect/server/schemas/filters.py
      class ArtifactCollectionFilterLatestId(PrefectFilterBaseModel):\n    \"\"\"Filter by `ArtifactCollection.latest_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of artifact ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.ArtifactCollection.latest_id.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterLatestId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterTaskRunId","title":"ArtifactCollectionFilterTaskRunId","text":"

      Bases: PrefectFilterBaseModel

      Filter by ArtifactCollection.task_run_id.

      Source code in prefect/server/schemas/filters.py
      class ArtifactCollectionFilterTaskRunId(PrefectFilterBaseModel):\n    \"\"\"Filter by `ArtifactCollection.task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.ArtifactCollection.task_run_id.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterTaskRunId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterType","title":"ArtifactCollectionFilterType","text":"

      Bases: PrefectFilterBaseModel

      Filter by ArtifactCollection.type.

      Source code in prefect/server/schemas/filters.py
      class ArtifactCollectionFilterType(PrefectFilterBaseModel):\n    \"\"\"Filter by `ArtifactCollection.type`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to include\"\n    )\n    not_any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to exclude\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.ArtifactCollection.type.in_(self.any_))\n        if self.not_any_ is not None:\n            filters.append(db.ArtifactCollection.type.notin_(self.not_any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterType.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilter","title":"ArtifactFilter","text":"

      Bases: PrefectOperatorFilterBaseModel

      Filter artifacts. Only artifacts matching all criteria will be returned

      Source code in prefect/server/schemas/filters.py
      class ArtifactFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter artifacts. Only artifacts matching all criteria will be returned\"\"\"\n\n    id: Optional[ArtifactFilterId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.id`\"\n    )\n    key: Optional[ArtifactFilterKey] = Field(\n        default=None, description=\"Filter criteria for `Artifact.key`\"\n    )\n    flow_run_id: Optional[ArtifactFilterFlowRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.flow_run_id`\"\n    )\n    task_run_id: Optional[ArtifactFilterTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.task_run_id`\"\n    )\n    type: Optional[ArtifactFilterType] = Field(\n        default=None, description=\"Filter criteria for `Artifact.type`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.key is not None:\n            filters.append(self.key.as_sql_filter(db))\n        if self.flow_run_id is not None:\n            filters.append(self.flow_run_id.as_sql_filter(db))\n        if self.task_run_id is not None:\n            filters.append(self.task_run_id.as_sql_filter(db))\n        if self.type is not None:\n            filters.append(self.type.as_sql_filter(db))\n\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilter.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterFlowRunId","title":"ArtifactFilterFlowRunId","text":"

      Bases: PrefectFilterBaseModel

      Filter by Artifact.flow_run_id.

      Source code in prefect/server/schemas/filters.py
      class ArtifactFilterFlowRunId(PrefectFilterBaseModel):\n    \"\"\"Filter by `Artifact.flow_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Artifact.flow_run_id.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterFlowRunId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterId","title":"ArtifactFilterId","text":"

      Bases: PrefectFilterBaseModel

      Filter by Artifact.id.

      Source code in prefect/server/schemas/filters.py
      class ArtifactFilterId(PrefectFilterBaseModel):\n    \"\"\"Filter by `Artifact.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of artifact ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Artifact.id.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterKey","title":"ArtifactFilterKey","text":"

      Bases: PrefectFilterBaseModel

      Filter by Artifact.key.

      Source code in prefect/server/schemas/filters.py
      class ArtifactFilterKey(PrefectFilterBaseModel):\n    \"\"\"Filter by `Artifact.key`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact keys to include\"\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match artifact keys against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        examples=[\"my-artifact-%\"],\n    )\n\n    exists_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"If `true`, only include artifacts with a non-null key. If `false`, \"\n            \"only include artifacts with a null key.\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Artifact.key.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.Artifact.key.ilike(f\"%{self.like_}%\"))\n        if self.exists_ is not None:\n            filters.append(\n                db.Artifact.key.isnot(None)\n                if self.exists_\n                else db.Artifact.key.is_(None)\n            )\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterKey.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterTaskRunId","title":"ArtifactFilterTaskRunId","text":"

      Bases: PrefectFilterBaseModel

      Filter by Artifact.task_run_id.

      Source code in prefect/server/schemas/filters.py
      class ArtifactFilterTaskRunId(PrefectFilterBaseModel):\n    \"\"\"Filter by `Artifact.task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Artifact.task_run_id.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterTaskRunId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterType","title":"ArtifactFilterType","text":"

      Bases: PrefectFilterBaseModel

      Filter by Artifact.type.

      Source code in prefect/server/schemas/filters.py
      class ArtifactFilterType(PrefectFilterBaseModel):\n    \"\"\"Filter by `Artifact.type`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to include\"\n    )\n    not_any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to exclude\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Artifact.type.in_(self.any_))\n        if self.not_any_ is not None:\n            filters.append(db.Artifact.type.notin_(self.not_any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterType.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilter","title":"BlockDocumentFilter","text":"

      Bases: PrefectOperatorFilterBaseModel

      Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned

      Source code in prefect/server/schemas/filters.py
      class BlockDocumentFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned\"\"\"\n\n    id: Optional[BlockDocumentFilterId] = Field(\n        default=None, description=\"Filter criteria for `BlockDocument.id`\"\n    )\n    is_anonymous: Optional[BlockDocumentFilterIsAnonymous] = Field(\n        # default is to exclude anonymous blocks\n        BlockDocumentFilterIsAnonymous(eq_=False),\n        description=(\n            \"Filter criteria for `BlockDocument.is_anonymous`. \"\n            \"Defaults to excluding anonymous blocks.\"\n        ),\n    )\n    block_type_id: Optional[BlockDocumentFilterBlockTypeId] = Field(\n        default=None, description=\"Filter criteria for `BlockDocument.block_type_id`\"\n    )\n    name: Optional[BlockDocumentFilterName] = Field(\n        default=None, description=\"Filter criteria for `BlockDocument.name`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.is_anonymous is not None:\n            filters.append(self.is_anonymous.as_sql_filter(db))\n        if self.block_type_id is not None:\n            filters.append(self.block_type_id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        return filters\n
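A hedged sketch of composing a BlockDocumentFilter from its sub-filters; the name pattern is a placeholder:

from prefect.server.schemas.filters import (
    BlockDocumentFilter,
    BlockDocumentFilterIsAnonymous,
    BlockDocumentFilterName,
)

block_filter = BlockDocumentFilter(
    name=BlockDocumentFilterName(like_="my-block%"),          # placeholder name pattern
    is_anonymous=BlockDocumentFilterIsAnonymous(eq_=False),   # exclude anonymous blocks (the default)
)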
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilter.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterBlockTypeId","title":"BlockDocumentFilterBlockTypeId","text":"

      Bases: PrefectFilterBaseModel

      Filter by BlockDocument.block_type_id.

      Source code in prefect/server/schemas/filters.py
      class BlockDocumentFilterBlockTypeId(PrefectFilterBaseModel):\n    \"\"\"Filter by `BlockDocument.block_type_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of block type ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockDocument.block_type_id.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterBlockTypeId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterId","title":"BlockDocumentFilterId","text":"

      Bases: PrefectFilterBaseModel

      Filter by BlockDocument.id.

      Source code in prefect/server/schemas/filters.py
      class BlockDocumentFilterId(PrefectFilterBaseModel):\n    \"\"\"Filter by `BlockDocument.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of block ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockDocument.id.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterIsAnonymous","title":"BlockDocumentFilterIsAnonymous","text":"

      Bases: PrefectFilterBaseModel

      Filter by BlockDocument.is_anonymous.

      Source code in prefect/server/schemas/filters.py
      class BlockDocumentFilterIsAnonymous(PrefectFilterBaseModel):\n    \"\"\"Filter by `BlockDocument.is_anonymous`.\"\"\"\n\n    eq_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Filter block documents for only those that are or are not anonymous.\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.eq_ is not None:\n            filters.append(db.BlockDocument.is_anonymous.is_(self.eq_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterIsAnonymous.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterName","title":"BlockDocumentFilterName","text":"

      Bases: PrefectFilterBaseModel

      Filter by BlockDocument.name.

      Source code in prefect/server/schemas/filters.py
      class BlockDocumentFilterName(PrefectFilterBaseModel):\n    \"\"\"Filter by `BlockDocument.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of block names to include\"\n    )\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match block names against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        examples=[\"my-block%\"],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockDocument.name.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.BlockDocument.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterName.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilter","title":"BlockSchemaFilter","text":"

      Bases: PrefectOperatorFilterBaseModel

      Filter BlockSchemas

      Source code in prefect/server/schemas/filters.py
      class BlockSchemaFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter BlockSchemas\"\"\"\n\n    block_type_id: Optional[BlockSchemaFilterBlockTypeId] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.block_type_id`\"\n    )\n    block_capabilities: Optional[BlockSchemaFilterCapabilities] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.capabilities`\"\n    )\n    id: Optional[BlockSchemaFilterId] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.id`\"\n    )\n    version: Optional[BlockSchemaFilterVersion] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.version`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.block_type_id is not None:\n            filters.append(self.block_type_id.as_sql_filter(db))\n        if self.block_capabilities is not None:\n            filters.append(self.block_capabilities.as_sql_filter(db))\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.version is not None:\n            filters.append(self.version.as_sql_filter(db))\n\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilter.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterBlockTypeId","title":"BlockSchemaFilterBlockTypeId","text":"

      Bases: PrefectFilterBaseModel

      Filter by BlockSchema.block_type_id.

      Source code in prefect/server/schemas/filters.py
      class BlockSchemaFilterBlockTypeId(PrefectFilterBaseModel):\n    \"\"\"Filter by `BlockSchema.block_type_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of block type ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockSchema.block_type_id.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterBlockTypeId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterCapabilities","title":"BlockSchemaFilterCapabilities","text":"

      Bases: PrefectFilterBaseModel

      Filter by BlockSchema.capabilities

      Source code in prefect/server/schemas/filters.py
      class BlockSchemaFilterCapabilities(PrefectFilterBaseModel):\n    \"\"\"Filter by `BlockSchema.capabilities`\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"write-storage\", \"read-storage\"]],\n        description=(\n            \"A list of block capabilities. Block entities will be returned only if an\"\n            \" associated block schema has a superset of the defined capabilities.\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.BlockSchema.capabilities, self.all_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterCapabilities.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterId","title":"BlockSchemaFilterId","text":"

      Bases: PrefectFilterBaseModel

      Filter by BlockSchema.id

      Source code in prefect/server/schemas/filters.py
      class BlockSchemaFilterId(PrefectFilterBaseModel):\n    \"\"\"Filter by BlockSchema.id\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockSchema.id.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterVersion","title":"BlockSchemaFilterVersion","text":"

      Bases: PrefectFilterBaseModel

Filter by BlockSchema.version.

      Source code in prefect/server/schemas/filters.py
class BlockSchemaFilterVersion(PrefectFilterBaseModel):\n    \"\"\"Filter by `BlockSchema.version`\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"2.0.0\", \"2.1.0\"]],\n        description=\"A list of block schema versions.\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockSchema.version.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterVersion.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilter","title":"BlockTypeFilter","text":"

      Bases: PrefectFilterBaseModel

      Filter BlockTypes

      Source code in prefect/server/schemas/filters.py
      class BlockTypeFilter(PrefectFilterBaseModel):\n    \"\"\"Filter BlockTypes\"\"\"\n\n    name: Optional[BlockTypeFilterName] = Field(\n        default=None, description=\"Filter criteria for `BlockType.name`\"\n    )\n\n    slug: Optional[BlockTypeFilterSlug] = Field(\n        default=None, description=\"Filter criteria for `BlockType.slug`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.slug is not None:\n            filters.append(self.slug.as_sql_filter(db))\n\n        return filters\n
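A minimal sketch combining the two sub-filters shown above (the slug values are hypothetical placeholders):

from prefect.server.schemas.filters import (
    BlockTypeFilter,
    BlockTypeFilterName,
    BlockTypeFilterSlug,
)

# Hypothetical example: case-insensitive partial match on the block type name
# plus an explicit slug allow-list.
block_type_filter = BlockTypeFilter(
    name=BlockTypeFilterName(like_="marvin"),
    slug=BlockTypeFilterSlug(any_=["slack-webhook", "secret"]),
)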
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilter.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterName","title":"BlockTypeFilterName","text":"

      Bases: PrefectFilterBaseModel

      Filter by BlockType.name

      Source code in prefect/server/schemas/filters.py
      class BlockTypeFilterName(PrefectFilterBaseModel):\n    \"\"\"Filter by `BlockType.name`\"\"\"\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        examples=[\"marvin\"],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.like_ is not None:\n            filters.append(db.BlockType.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterName.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterSlug","title":"BlockTypeFilterSlug","text":"

      Bases: PrefectFilterBaseModel

      Filter by BlockType.slug

      Source code in prefect/server/schemas/filters.py
      class BlockTypeFilterSlug(PrefectFilterBaseModel):\n    \"\"\"Filter by `BlockType.slug`\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of slugs to match\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockType.slug.in_(self.any_))\n\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterSlug.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilter","title":"DeploymentFilter","text":"

      Bases: PrefectOperatorFilterBaseModel

      Filter for deployments. Only deployments matching all criteria will be returned.

      Source code in prefect/server/schemas/filters.py
      class DeploymentFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter for deployments. Only deployments matching all criteria will be returned.\"\"\"\n\n    id: Optional[DeploymentFilterId] = Field(\n        default=None, description=\"Filter criteria for `Deployment.id`\"\n    )\n    name: Optional[DeploymentFilterName] = Field(\n        default=None, description=\"Filter criteria for `Deployment.name`\"\n    )\n    paused: Optional[DeploymentFilterPaused] = Field(\n        default=None, description=\"Filter criteria for `Deployment.paused`\"\n    )\n    is_schedule_active: Optional[DeploymentFilterIsScheduleActive] = Field(\n        default=None, description=\"Filter criteria for `Deployment.is_schedule_active`\"\n    )\n    tags: Optional[DeploymentFilterTags] = Field(\n        default=None, description=\"Filter criteria for `Deployment.tags`\"\n    )\n    work_queue_name: Optional[DeploymentFilterWorkQueueName] = Field(\n        default=None, description=\"Filter criteria for `Deployment.work_queue_name`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.paused is not None:\n            filters.append(self.paused.as_sql_filter(db))\n        if self.is_schedule_active is not None:\n            filters.append(self.is_schedule_active.as_sql_filter(db))\n        if self.tags is not None:\n            filters.append(self.tags.as_sql_filter(db))\n        if self.work_queue_name is not None:\n            filters.append(self.work_queue_name.as_sql_filter(db))\n\n        return filters\n
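A minimal sketch showing how the criteria compose; because this is an operator filter, only deployments matching all of the supplied criteria are returned (the name and tag values are hypothetical):

from prefect.server.schemas.filters import (
    DeploymentFilter,
    DeploymentFilterName,
    DeploymentFilterPaused,
    DeploymentFilterTags,
)

# Hypothetical example: unpaused deployments named "my-deployment-1" that
# carry both tags.
deployment_filter = DeploymentFilter(
    name=DeploymentFilterName(any_=["my-deployment-1"]),
    paused=DeploymentFilterPaused(eq_=False),
    tags=DeploymentFilterTags(all_=["tag-1", "tag-2"]),
)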
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilter.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterId","title":"DeploymentFilterId","text":"

      Bases: PrefectFilterBaseModel

      Filter by Deployment.id.

      Source code in prefect/server/schemas/filters.py
      class DeploymentFilterId(PrefectFilterBaseModel):\n    \"\"\"Filter by `Deployment.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of deployment ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Deployment.id.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterIsScheduleActive","title":"DeploymentFilterIsScheduleActive","text":"

      Bases: PrefectFilterBaseModel

Legacy filter on Deployment.is_schedule_active, which is always the opposite of Deployment.paused.

      Source code in prefect/server/schemas/filters.py
      class DeploymentFilterIsScheduleActive(PrefectFilterBaseModel):\n    \"\"\"Legacy filter to filter by `Deployment.is_schedule_active` which\n    is always the opposite of `Deployment.paused`.\"\"\"\n\n    eq_: Optional[bool] = Field(\n        default=None,\n        description=\"Only returns where deployment schedule is/is not active\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.eq_ is not None:\n            filters.append(db.Deployment.paused.is_not(self.eq_))\n        return filters\n
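Because the legacy field is derived from Deployment.paused (the filter emits paused.is_not(eq_)), the two filters below express the same condition; this is a sketch, not an additional API:

from prefect.server.schemas.filters import (
    DeploymentFilterIsScheduleActive,
    DeploymentFilterPaused,
)

# `is_schedule_active == True` is translated to `paused IS NOT TRUE`,
# so these two filters select the same deployments.
legacy = DeploymentFilterIsScheduleActive(eq_=True)
current = DeploymentFilterPaused(eq_=False)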
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterIsScheduleActive.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterName","title":"DeploymentFilterName","text":"

      Bases: PrefectFilterBaseModel

      Filter by Deployment.name.

      Source code in prefect/server/schemas/filters.py
      class DeploymentFilterName(PrefectFilterBaseModel):\n    \"\"\"Filter by `Deployment.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of deployment names to include\",\n        examples=[[\"my-deployment-1\", \"my-deployment-2\"]],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        examples=[\"marvin\"],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Deployment.name.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.Deployment.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterName.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterPaused","title":"DeploymentFilterPaused","text":"

      Bases: PrefectFilterBaseModel

      Filter by Deployment.paused.

      Source code in prefect/server/schemas/filters.py
      class DeploymentFilterPaused(PrefectFilterBaseModel):\n    \"\"\"Filter by `Deployment.paused`.\"\"\"\n\n    eq_: Optional[bool] = Field(\n        default=None,\n        description=\"Only returns where deployment is/is not paused\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.eq_ is not None:\n            filters.append(db.Deployment.paused.is_(self.eq_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterPaused.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterTags","title":"DeploymentFilterTags","text":"

      Bases: PrefectOperatorFilterBaseModel

      Filter by Deployment.tags.

      Source code in prefect/server/schemas/filters.py
      class DeploymentFilterTags(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `Deployment.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"tag-1\", \"tag-2\"]],\n        description=(\n            \"A list of tags. Deployments will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include deployments without tags\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.Deployment.tags, self.all_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.Deployment.tags == [] if self.is_null_ else db.Deployment.tags != []\n            )\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterTags.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterWorkQueueName","title":"DeploymentFilterWorkQueueName","text":"

      Bases: PrefectFilterBaseModel

      Filter by Deployment.work_queue_name.

      Source code in prefect/server/schemas/filters.py
      class DeploymentFilterWorkQueueName(PrefectFilterBaseModel):\n    \"\"\"Filter by `Deployment.work_queue_name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of work queue names to include\",\n        examples=[[\"work_queue_1\", \"work_queue_2\"]],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Deployment.work_queue_name.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterWorkQueueName.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentScheduleFilter","title":"DeploymentScheduleFilter","text":"

      Bases: PrefectOperatorFilterBaseModel

Filter for deployment schedules. Only deployment schedules matching all criteria will be returned.

      Source code in prefect/server/schemas/filters.py
      class DeploymentScheduleFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter for deployments. Only deployments matching all criteria will be returned.\"\"\"\n\n    active: Optional[DeploymentScheduleFilterActive] = Field(\n        default=None, description=\"Filter criteria for `DeploymentSchedule.active`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.active is not None:\n            filters.append(self.active.as_sql_filter(db))\n\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentScheduleFilter.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentScheduleFilterActive","title":"DeploymentScheduleFilterActive","text":"

      Bases: PrefectFilterBaseModel

      Filter by DeploymentSchedule.active.

      Source code in prefect/server/schemas/filters.py
      class DeploymentScheduleFilterActive(PrefectFilterBaseModel):\n    \"\"\"Filter by `DeploymentSchedule.active`.\"\"\"\n\n    eq_: Optional[bool] = Field(\n        default=None,\n        description=\"Only returns where deployment schedule is/is not active\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.eq_ is not None:\n            filters.append(db.DeploymentSchedule.active.is_(self.eq_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentScheduleFilterActive.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FilterSet","title":"FilterSet","text":"

      Bases: PrefectBaseModel

      A collection of filters for common objects

      Source code in prefect/server/schemas/filters.py
      class FilterSet(PrefectBaseModel):\n    \"\"\"A collection of filters for common objects\"\"\"\n\n    flows: FlowFilter = Field(\n        default_factory=FlowFilter, description=\"Filters that apply to flows\"\n    )\n    flow_runs: FlowRunFilter = Field(\n        default_factory=FlowRunFilter, description=\"Filters that apply to flow runs\"\n    )\n    task_runs: TaskRunFilter = Field(\n        default_factory=TaskRunFilter, description=\"Filters that apply to task runs\"\n    )\n    deployments: DeploymentFilter = Field(\n        default_factory=DeploymentFilter,\n        description=\"Filters that apply to deployments\",\n    )\n
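A minimal sketch of bundling several filters into one set; unspecified members fall back to their default (empty) filters. The tag value is hypothetical:

from prefect.server.schemas.filters import (
    DeploymentFilter,
    DeploymentFilterPaused,
    FilterSet,
    FlowFilter,
    FlowFilterTags,
)

# Hypothetical example: one FilterSet carrying a flow filter and a deployment
# filter; flow run and task run filters keep their defaults.
filter_set = FilterSet(
    flows=FlowFilter(tags=FlowFilterTags(all_=["prod"])),
    deployments=DeploymentFilter(paused=DeploymentFilterPaused(eq_=False)),
)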
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FilterSet.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilter","title":"FlowFilter","text":"

      Bases: PrefectOperatorFilterBaseModel

      Filter for flows. Only flows matching all criteria will be returned.

      Source code in prefect/server/schemas/filters.py
      class FlowFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter for flows. Only flows matching all criteria will be returned.\"\"\"\n\n    id: Optional[FlowFilterId] = Field(\n        default=None, description=\"Filter criteria for `Flow.id`\"\n    )\n    deployment: Optional[FlowFilterDeployment] = Field(\n        default=None, description=\"Filter criteria for Flow deployments\"\n    )\n    name: Optional[FlowFilterName] = Field(\n        default=None, description=\"Filter criteria for `Flow.name`\"\n    )\n    tags: Optional[FlowFilterTags] = Field(\n        default=None, description=\"Filter criteria for `Flow.tags`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.deployment is not None:\n            filters.append(self.deployment.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.tags is not None:\n            filters.append(self.tags.as_sql_filter(db))\n\n        return filters\n
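A minimal sketch using the name and deployment sub-filters shown in this module (the search string is hypothetical):

from prefect.server.schemas.filters import (
    FlowFilter,
    FlowFilterDeployment,
    FlowFilterName,
)

# Hypothetical example: flows whose name contains "etl" (case-insensitive)
# and that have at least one deployment.
flow_filter = FlowFilter(
    name=FlowFilterName(like_="etl"),
    deployment=FlowFilterDeployment(is_null_=False),
)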
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilter.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterDeployment","title":"FlowFilterDeployment","text":"

      Bases: PrefectOperatorFilterBaseModel

Filter flows by deployment

      Source code in prefect/server/schemas/filters.py
      class FlowFilterDeployment(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by flows by deployment\"\"\"\n\n    is_null_: Optional[bool] = Field(\n        default=None,\n        description=\"If true, only include flows without deployments\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.is_null_ is not None:\n            deployments_subquery = (\n                sa.select(db.Deployment.flow_id).distinct().subquery()\n            )\n\n            if self.is_null_:\n                filters.append(\n                    db.Flow.id.not_in(sa.select(deployments_subquery.c.flow_id))\n                )\n            else:\n                filters.append(\n                    db.Flow.id.in_(sa.select(deployments_subquery.c.flow_id))\n                )\n\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterDeployment.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterId","title":"FlowFilterId","text":"

      Bases: PrefectFilterBaseModel

      Filter by Flow.id.

      Source code in prefect/server/schemas/filters.py
      class FlowFilterId(PrefectFilterBaseModel):\n    \"\"\"Filter by `Flow.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Flow.id.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterName","title":"FlowFilterName","text":"

      Bases: PrefectFilterBaseModel

      Filter by Flow.name.

      Source code in prefect/server/schemas/filters.py
      class FlowFilterName(PrefectFilterBaseModel):\n    \"\"\"Filter by `Flow.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of flow names to include\",\n        examples=[[\"my-flow-1\", \"my-flow-2\"]],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        examples=[\"marvin\"],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Flow.name.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.Flow.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterName.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterTags","title":"FlowFilterTags","text":"

      Bases: PrefectOperatorFilterBaseModel

      Filter by Flow.tags.

      Source code in prefect/server/schemas/filters.py
      class FlowFilterTags(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `Flow.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"tag-1\", \"tag-2\"]],\n        description=(\n            \"A list of tags. Flows will be returned only if their tags are a superset\"\n            \" of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include flows without tags\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.Flow.tags, self.all_))\n        if self.is_null_ is not None:\n            filters.append(db.Flow.tags == [] if self.is_null_ else db.Flow.tags != [])\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterTags.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilter","title":"FlowRunFilter","text":"

      Bases: PrefectOperatorFilterBaseModel

      Filter flow runs. Only flow runs matching all criteria will be returned

      Source code in prefect/server/schemas/filters.py
      class FlowRunFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter flow runs. Only flow runs matching all criteria will be returned\"\"\"\n\n    id: Optional[FlowRunFilterId] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.id`\"\n    )\n    name: Optional[FlowRunFilterName] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.name`\"\n    )\n    tags: Optional[FlowRunFilterTags] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.tags`\"\n    )\n    deployment_id: Optional[FlowRunFilterDeploymentId] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.deployment_id`\"\n    )\n    work_queue_name: Optional[FlowRunFilterWorkQueueName] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.work_queue_name\"\n    )\n    state: Optional[FlowRunFilterState] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.state`\"\n    )\n    flow_version: Optional[FlowRunFilterFlowVersion] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.flow_version`\"\n    )\n    start_time: Optional[FlowRunFilterStartTime] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.start_time`\"\n    )\n    expected_start_time: Optional[FlowRunFilterExpectedStartTime] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.expected_start_time`\"\n    )\n    next_scheduled_start_time: Optional[FlowRunFilterNextScheduledStartTime] = Field(\n        default=None,\n        description=\"Filter criteria for `FlowRun.next_scheduled_start_time`\",\n    )\n    parent_flow_run_id: Optional[FlowRunFilterParentFlowRunId] = Field(\n        default=None, description=\"Filter criteria for subflows of the given flow runs\"\n    )\n    parent_task_run_id: Optional[FlowRunFilterParentTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.parent_task_run_id`\"\n    )\n    idempotency_key: Optional[FlowRunFilterIdempotencyKey] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.idempotency_key`\"\n    )\n\n    def only_filters_on_id(self):\n        return (\n            self.id is not None\n            and (self.id.any_ and not self.id.not_any_)\n            and self.name is None\n            and self.tags is None\n            and self.deployment_id is None\n            and self.work_queue_name is None\n            and self.state is None\n            and self.flow_version is None\n            and self.start_time is None\n            and self.expected_start_time is None\n            and self.next_scheduled_start_time is None\n            and self.parent_flow_run_id is None\n            and self.parent_task_run_id is None\n            and self.idempotency_key is None\n        )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.tags is not None:\n            filters.append(self.tags.as_sql_filter(db))\n        if self.deployment_id is not None:\n            filters.append(self.deployment_id.as_sql_filter(db))\n        if self.work_queue_name is not None:\n            filters.append(self.work_queue_name.as_sql_filter(db))\n        if self.flow_version is not None:\n            filters.append(self.flow_version.as_sql_filter(db))\n        if self.state 
is not None:\n            filters.append(self.state.as_sql_filter(db))\n        if self.start_time is not None:\n            filters.append(self.start_time.as_sql_filter(db))\n        if self.expected_start_time is not None:\n            filters.append(self.expected_start_time.as_sql_filter(db))\n        if self.next_scheduled_start_time is not None:\n            filters.append(self.next_scheduled_start_time.as_sql_filter(db))\n        if self.parent_flow_run_id is not None:\n            filters.append(self.parent_flow_run_id.as_sql_filter(db))\n        if self.parent_task_run_id is not None:\n            filters.append(self.parent_task_run_id.as_sql_filter(db))\n        if self.idempotency_key is not None:\n            filters.append(self.idempotency_key.as_sql_filter(db))\n\n        return filters\n
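A minimal sketch using two of the criteria defined above; the UUID and the pendulum timestamp are hypothetical placeholder values:

import pendulum
from uuid import UUID

from prefect.server.schemas.filters import (
    FlowRunFilter,
    FlowRunFilterDeploymentId,
    FlowRunFilterExpectedStartTime,
)

# Hypothetical example: runs of one deployment that were expected to start
# within the last day.
flow_run_filter = FlowRunFilter(
    deployment_id=FlowRunFilterDeploymentId(
        any_=[UUID("11111111-1111-1111-1111-111111111111")]
    ),
    expected_start_time=FlowRunFilterExpectedStartTime(
        after_=pendulum.now("UTC").subtract(days=1)
    ),
)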
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilter.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterDeploymentId","title":"FlowRunFilterDeploymentId","text":"

      Bases: PrefectOperatorFilterBaseModel

      Filter by FlowRun.deployment_id.

      Source code in prefect/server/schemas/filters.py
      class FlowRunFilterDeploymentId(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `FlowRun.deployment_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run deployment ids to include\"\n    )\n    is_null_: Optional[bool] = Field(\n        default=None,\n        description=\"If true, only include flow runs without deployment ids\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.deployment_id.in_(self.any_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.FlowRun.deployment_id.is_(None)\n                if self.is_null_\n                else db.FlowRun.deployment_id.is_not(None)\n            )\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterDeploymentId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterExpectedStartTime","title":"FlowRunFilterExpectedStartTime","text":"

      Bases: PrefectFilterBaseModel

      Filter by FlowRun.expected_start_time.

      Source code in prefect/server/schemas/filters.py
      class FlowRunFilterExpectedStartTime(PrefectFilterBaseModel):\n    \"\"\"Filter by `FlowRun.expected_start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs scheduled to start at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs scheduled to start at or after this time\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.FlowRun.expected_start_time <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.FlowRun.expected_start_time >= self.after_)\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterExpectedStartTime.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterFlowVersion","title":"FlowRunFilterFlowVersion","text":"

      Bases: PrefectFilterBaseModel

      Filter by FlowRun.flow_version.

      Source code in prefect/server/schemas/filters.py
      class FlowRunFilterFlowVersion(PrefectFilterBaseModel):\n    \"\"\"Filter by `FlowRun.flow_version`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of flow run flow_versions to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.flow_version.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterFlowVersion.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterId","title":"FlowRunFilterId","text":"

      Bases: PrefectFilterBaseModel

      Filter by FlowRun.id.

      Source code in prefect/server/schemas/filters.py
      class FlowRunFilterId(PrefectFilterBaseModel):\n    \"\"\"Filter by `FlowRun.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run ids to include\"\n    )\n    not_any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run ids to exclude\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.id.in_(self.any_))\n        if self.not_any_ is not None:\n            filters.append(db.FlowRun.id.not_in(self.not_any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
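A minimal construction sketch for the include/exclude id filter above (both ids are generated placeholders):

from uuid import uuid4
from prefect.server.schemas.filters import FlowRunFilterId

keep_id, drop_id = uuid4(), uuid4()
# Include runs whose id appears in `any_` and exclude any listed in `not_any_`.
id_filter = FlowRunFilterId(any_=[keep_id], not_any_=[drop_id])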
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterIdempotencyKey","title":"FlowRunFilterIdempotencyKey","text":"

      Bases: PrefectFilterBaseModel

      Filter by FlowRun.idempotency_key.

      Source code in prefect/server/schemas/filters.py
      class FlowRunFilterIdempotencyKey(PrefectFilterBaseModel):\n    \"\"\"Filter by FlowRun.idempotency_key.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of flow run idempotency keys to include\"\n    )\n    not_any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of flow run idempotency keys to exclude\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.idempotency_key.in_(self.any_))\n        if self.not_any_ is not None:\n            filters.append(db.FlowRun.idempotency_key.not_in(self.not_any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterIdempotencyKey.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterName","title":"FlowRunFilterName","text":"

      Bases: PrefectFilterBaseModel

      Filter by FlowRun.name.

      Source code in prefect/server/schemas/filters.py
      class FlowRunFilterName(PrefectFilterBaseModel):\n    \"\"\"Filter by `FlowRun.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of flow run names to include\",\n        examples=[[\"my-flow-run-1\", \"my-flow-run-2\"]],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        examples=[\"marvin\"],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.name.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.FlowRun.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterName.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
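A small usage sketch of the name filter's case-insensitive partial match (the search term is an arbitrary example):

from prefect.server.schemas.filters import FlowRunFilterName

# `like_` matches substrings case-insensitively: 'marvin' matches
# 'marvin', 'sad-Marvin', and 'marvin-robot'.
name_filter = FlowRunFilterName(like_="marvin")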
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterNextScheduledStartTime","title":"FlowRunFilterNextScheduledStartTime","text":"

      Bases: PrefectFilterBaseModel

      Filter by FlowRun.next_scheduled_start_time.

      Source code in prefect/server/schemas/filters.py
      class FlowRunFilterNextScheduledStartTime(PrefectFilterBaseModel):\n    \"\"\"Filter by `FlowRun.next_scheduled_start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include flow runs with a next_scheduled_start_time or before this\"\n            \" time\"\n        ),\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include flow runs with a next_scheduled_start_time at or after this\"\n            \" time\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.FlowRun.next_scheduled_start_time <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.FlowRun.next_scheduled_start_time >= self.after_)\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterNextScheduledStartTime.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterParentFlowRunId","title":"FlowRunFilterParentFlowRunId","text":"

      Bases: PrefectOperatorFilterBaseModel

Filter for subflows of a given flow run.
Filter for subflows of a given flow run.

      Source code in prefect/server/schemas/filters.py
      class FlowRunFilterParentFlowRunId(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter for subflows of a given flow run\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of parent flow run ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(\n                db.FlowRun.id.in_(\n                    sa.select(db.FlowRun.id)\n                    .join(\n                        db.TaskRun,\n                        sa.and_(\n                            db.TaskRun.id == db.FlowRun.parent_task_run_id,\n                        ),\n                    )\n                    .where(db.TaskRun.flow_run_id.in_(self.any_))\n                )\n            )\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterParentFlowRunId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterParentTaskRunId","title":"FlowRunFilterParentTaskRunId","text":"

      Bases: PrefectOperatorFilterBaseModel

      Filter by FlowRun.parent_task_run_id.

      Source code in prefect/server/schemas/filters.py
      class FlowRunFilterParentTaskRunId(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `FlowRun.parent_task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run parent_task_run_ids to include\"\n    )\n    is_null_: Optional[bool] = Field(\n        default=None,\n        description=\"If true, only include flow runs without parent_task_run_id\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.parent_task_run_id.in_(self.any_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.FlowRun.parent_task_run_id.is_(None)\n                if self.is_null_\n                else db.FlowRun.parent_task_run_id.is_not(None)\n            )\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterParentTaskRunId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStartTime","title":"FlowRunFilterStartTime","text":"

      Bases: PrefectFilterBaseModel

      Filter by FlowRun.start_time.

      Source code in prefect/server/schemas/filters.py
      class FlowRunFilterStartTime(PrefectFilterBaseModel):\n    \"\"\"Filter by `FlowRun.start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs starting at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs starting at or after this time\",\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only return flow runs without a start time\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.FlowRun.start_time <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.FlowRun.start_time >= self.after_)\n        if self.is_null_ is not None:\n            filters.append(\n                db.FlowRun.start_time.is_(None)\n                if self.is_null_\n                else db.FlowRun.start_time.is_not(None)\n            )\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStartTime.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
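A hedged sketch of a start-time window built from the fields above; the 24-hour window is an arbitrary choice, and timezone-aware stdlib datetimes are assumed to satisfy the DateTimeTZ fields:

from datetime import datetime, timedelta, timezone
from prefect.server.schemas.filters import FlowRunFilterStartTime

now = datetime.now(timezone.utc)
# Runs that started within the last 24 hours; setting is_null_=False would
# additionally exclude runs that never started.
window = FlowRunFilterStartTime(after_=now - timedelta(hours=24), before_=now)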
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterState","title":"FlowRunFilterState","text":"

      Bases: PrefectOperatorFilterBaseModel

      Filter by FlowRun.state_type and FlowRun.state_name.

      Source code in prefect/server/schemas/filters.py
      class FlowRunFilterState(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `FlowRun.state_type` and `FlowRun.state_name`.\"\"\"\n\n    type: Optional[FlowRunFilterStateType] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.state_type`\"\n    )\n    name: Optional[FlowRunFilterStateName] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.state_name`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.type is not None:\n            filters.extend(self.type._get_filter_list(db))\n        if self.name is not None:\n            filters.extend(self.name._get_filter_list(db))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterState.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
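A sketch of the nested state filter, assuming the StateType enum from prefect.server.schemas.states (the same enum referenced by FlowRunFilterStateType in this module):

from prefect.server.schemas.filters import (
    FlowRunFilterState,
    FlowRunFilterStateName,
    FlowRunFilterStateType,
)
from prefect.server.schemas.states import StateType

# With the default `and_` operator both criteria must hold: the state type is
# FAILED or CRASHED, and the state name is "Failed".
state_filter = FlowRunFilterState(
    type=FlowRunFilterStateType(any_=[StateType.FAILED, StateType.CRASHED]),
    name=FlowRunFilterStateName(any_=["Failed"]),
)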
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateName","title":"FlowRunFilterStateName","text":"

      Bases: PrefectFilterBaseModel

      Filter by FlowRun.state_name.

      Source code in prefect/server/schemas/filters.py
      class FlowRunFilterStateName(PrefectFilterBaseModel):\n    \"\"\"Filter by `FlowRun.state_name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of flow run state names to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.state_name.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateName.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateType","title":"FlowRunFilterStateType","text":"

      Bases: PrefectFilterBaseModel

      Filter by FlowRun.state_type.

      Source code in prefect/server/schemas/filters.py
      class FlowRunFilterStateType(PrefectFilterBaseModel):\n    \"\"\"Filter by `FlowRun.state_type`.\"\"\"\n\n    any_: Optional[List[schemas.states.StateType]] = Field(\n        default=None, description=\"A list of flow run state types to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.state_type.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateType.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterTags","title":"FlowRunFilterTags","text":"

      Bases: PrefectOperatorFilterBaseModel

      Filter by FlowRun.tags.

      Source code in prefect/server/schemas/filters.py
      class FlowRunFilterTags(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `FlowRun.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"tag-1\", \"tag-2\"]],\n        description=(\n            \"A list of tags. Flow runs will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include flow runs without tags\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.FlowRun.tags, self.all_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.FlowRun.tags == [] if self.is_null_ else db.FlowRun.tags != []\n            )\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterTags.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
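A brief sketch of the tag filter's superset semantics (the tag names are placeholders):

from prefect.server.schemas.filters import FlowRunFilterTags

# `all_` is a superset match: a run is included only if it carries every listed tag.
tagged = FlowRunFilterTags(all_=["prod", "etl"])

# `is_null_=True` instead selects runs that have no tags at all.
untagged = FlowRunFilterTags(is_null_=True)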
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterWorkQueueName","title":"FlowRunFilterWorkQueueName","text":"

      Bases: PrefectOperatorFilterBaseModel

      Filter by FlowRun.work_queue_name.

      Source code in prefect/server/schemas/filters.py
      class FlowRunFilterWorkQueueName(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `FlowRun.work_queue_name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of work queue names to include\",\n        examples=[[\"work_queue_1\", \"work_queue_2\"]],\n    )\n    is_null_: Optional[bool] = Field(\n        default=None,\n        description=\"If true, only include flow runs without work queue names\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.work_queue_name.in_(self.any_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.FlowRun.work_queue_name.is_(None)\n                if self.is_null_\n                else db.FlowRun.work_queue_name.is_not(None)\n            )\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterWorkQueueName.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilter","title":"FlowRunNotificationPolicyFilter","text":"

      Bases: PrefectFilterBaseModel

      Filter FlowRunNotificationPolicies.

      Source code in prefect/server/schemas/filters.py
      class FlowRunNotificationPolicyFilter(PrefectFilterBaseModel):\n    \"\"\"Filter FlowRunNotificationPolicies.\"\"\"\n\n    is_active: Optional[FlowRunNotificationPolicyFilterIsActive] = Field(\n        default=FlowRunNotificationPolicyFilterIsActive(eq_=False),\n        description=\"Filter criteria for `FlowRunNotificationPolicy.is_active`. \",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.is_active is not None:\n            filters.append(self.is_active.as_sql_filter(db))\n\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilter.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilterIsActive","title":"FlowRunNotificationPolicyFilterIsActive","text":"

      Bases: PrefectFilterBaseModel

      Filter by FlowRunNotificationPolicy.is_active.

      Source code in prefect/server/schemas/filters.py
      class FlowRunNotificationPolicyFilterIsActive(PrefectFilterBaseModel):\n    \"\"\"Filter by `FlowRunNotificationPolicy.is_active`.\"\"\"\n\n    eq_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Filter notification policies for only those that are or are not active.\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.eq_ is not None:\n            filters.append(db.FlowRunNotificationPolicy.is_active.is_(self.eq_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilterIsActive.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilter","title":"LogFilter","text":"

      Bases: PrefectOperatorFilterBaseModel

Filter logs. Only logs matching all criteria will be returned.

      Source code in prefect/server/schemas/filters.py
      class LogFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter logs. Only logs matching all criteria will be returned\"\"\"\n\n    level: Optional[LogFilterLevel] = Field(\n        default=None, description=\"Filter criteria for `Log.level`\"\n    )\n    timestamp: Optional[LogFilterTimestamp] = Field(\n        default=None, description=\"Filter criteria for `Log.timestamp`\"\n    )\n    flow_run_id: Optional[LogFilterFlowRunId] = Field(\n        default=None, description=\"Filter criteria for `Log.flow_run_id`\"\n    )\n    task_run_id: Optional[LogFilterTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `Log.task_run_id`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.level is not None:\n            filters.append(self.level.as_sql_filter(db))\n        if self.timestamp is not None:\n            filters.append(self.timestamp.as_sql_filter(db))\n        if self.flow_run_id is not None:\n            filters.append(self.flow_run_id.as_sql_filter(db))\n        if self.task_run_id is not None:\n            filters.append(self.task_run_id.as_sql_filter(db))\n\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilter.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
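A composite sketch of LogFilter that combines the level and flow-run criteria documented nearby (the flow run id is a generated placeholder; stdlib logging constants are used only for readability):

import logging
from uuid import uuid4

from prefect.server.schemas.filters import LogFilter, LogFilterFlowRunId, LogFilterLevel

# Logs at WARNING (30) and above, restricted to a single flow run.
log_filter = LogFilter(
    level=LogFilterLevel(ge_=logging.WARNING),
    flow_run_id=LogFilterFlowRunId(any_=[uuid4()]),
)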
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterFlowRunId","title":"LogFilterFlowRunId","text":"

      Bases: PrefectFilterBaseModel

      Filter by Log.flow_run_id.

      Source code in prefect/server/schemas/filters.py
      class LogFilterFlowRunId(PrefectFilterBaseModel):\n    \"\"\"Filter by `Log.flow_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Log.flow_run_id.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterFlowRunId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterLevel","title":"LogFilterLevel","text":"

      Bases: PrefectFilterBaseModel

      Filter by Log.level.

      Source code in prefect/server/schemas/filters.py
      class LogFilterLevel(PrefectFilterBaseModel):\n    \"\"\"Filter by `Log.level`.\"\"\"\n\n    ge_: Optional[int] = Field(\n        default=None,\n        description=\"Include logs with a level greater than or equal to this level\",\n        examples=[20],\n    )\n\n    le_: Optional[int] = Field(\n        default=None,\n        description=\"Include logs with a level less than or equal to this level\",\n        examples=[50],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.ge_ is not None:\n            filters.append(db.Log.level >= self.ge_)\n        if self.le_ is not None:\n            filters.append(db.Log.level <= self.le_)\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterLevel.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterName","title":"LogFilterName","text":"

      Bases: PrefectFilterBaseModel

      Filter by Log.name.

      Source code in prefect/server/schemas/filters.py
      class LogFilterName(PrefectFilterBaseModel):\n    \"\"\"Filter by `Log.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of log names to include\",\n        examples=[[\"prefect.logger.flow_runs\", \"prefect.logger.task_runs\"]],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Log.name.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterName.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTaskRunId","title":"LogFilterTaskRunId","text":"

      Bases: PrefectFilterBaseModel

      Filter by Log.task_run_id.

      Source code in prefect/server/schemas/filters.py
      class LogFilterTaskRunId(PrefectFilterBaseModel):\n    \"\"\"Filter by `Log.task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Log.task_run_id.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTaskRunId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTimestamp","title":"LogFilterTimestamp","text":"

      Bases: PrefectFilterBaseModel

      Filter by Log.timestamp.

      Source code in prefect/server/schemas/filters.py
      class LogFilterTimestamp(PrefectFilterBaseModel):\n    \"\"\"Filter by `Log.timestamp`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include logs with a timestamp at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include logs with a timestamp at or after this time\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.Log.timestamp <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.Log.timestamp >= self.after_)\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTimestamp.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.Operator","title":"Operator","text":"

      Bases: AutoEnum

      Operators for combining filter criteria.

      Source code in prefect/server/schemas/filters.py
      class Operator(AutoEnum):\n    \"\"\"Operators for combining filter criteria.\"\"\"\n\n    and_ = AutoEnum.auto()\n    or_ = AutoEnum.auto()\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.Operator.auto","title":"auto staticmethod","text":"

Exposes enum.auto() to avoid requiring a second import to use AutoEnum.

      Source code in prefect/utilities/collections.py
      @staticmethod\ndef auto():\n    \"\"\"\n    Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n    \"\"\"\n    return auto()\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectFilterBaseModel","title":"PrefectFilterBaseModel","text":"

      Bases: PrefectBaseModel

      Base model for Prefect filters

      Source code in prefect/server/schemas/filters.py
      class PrefectFilterBaseModel(PrefectBaseModel):\n    \"\"\"Base model for Prefect filters\"\"\"\n\n    class Config:\n        extra = \"forbid\"\n\n    def as_sql_filter(self, db: \"PrefectDBInterface\") -> \"BooleanClauseList\":\n        \"\"\"Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter.\"\"\"\n        filters = self._get_filter_list(db)\n        if not filters:\n            return True\n        return sa.and_(*filters)\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        \"\"\"Return a list of boolean filter statements based on filter parameters\"\"\"\n        raise NotImplementedError(\"_get_filter_list must be implemented\")\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectFilterBaseModel.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
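A small illustration of two behaviors implied by the base model's source above: an empty filter is valid (its as_sql_filter returns a TRUE clause, i.e. it matches everything), and unknown fields are rejected because the model's Config sets extra = "forbid":

from prefect.server.schemas.filters import FlowRunFilterStateType

# No criteria set: _get_filter_list() is empty, so as_sql_filter(db) returns True
# and the filter matches every row.
match_all = FlowRunFilterStateType()

# extra = "forbid" means a typo such as `any=` (instead of `any_=`) raises a
# validation error rather than being silently ignored:
# FlowRunFilterStateType(any=["COMPLETED"])  # would raise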
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectOperatorFilterBaseModel","title":"PrefectOperatorFilterBaseModel","text":"

      Bases: PrefectFilterBaseModel

      Base model for Prefect filters that combines criteria with a user-provided operator

      Source code in prefect/server/schemas/filters.py
      class PrefectOperatorFilterBaseModel(PrefectFilterBaseModel):\n    \"\"\"Base model for Prefect filters that combines criteria with a user-provided operator\"\"\"\n\n    operator: Operator = Field(\n        default=Operator.and_,\n        description=\"Operator for combining filter criteria. Defaults to 'and_'.\",\n    )\n\n    def as_sql_filter(self, db: \"PrefectDBInterface\") -> \"BooleanClauseList\":\n        filters = self._get_filter_list(db)\n        if not filters:\n            return True\n        return sa.and_(*filters) if self.operator == Operator.and_ else sa.or_(*filters)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectOperatorFilterBaseModel.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
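A sketch of the user-provided operator in action on one of the operator-based filters above (the deployment id is a generated placeholder):

from uuid import uuid4

from prefect.server.schemas.filters import FlowRunFilterDeploymentId, Operator

# With Operator.or_, a run matches if it belongs to the listed deployment OR has
# no deployment at all; the default Operator.and_ would require both clauses.
either = FlowRunFilterDeploymentId(
    any_=[uuid4()],
    is_null_=True,
    operator=Operator.or_,
)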
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilter","title":"TaskRunFilter","text":"

      Bases: PrefectOperatorFilterBaseModel

Filter task runs. Only task runs matching all criteria will be returned.

      Source code in prefect/server/schemas/filters.py
      class TaskRunFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter task runs. Only task runs matching all criteria will be returned\"\"\"\n\n    id: Optional[TaskRunFilterId] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.id`\"\n    )\n    name: Optional[TaskRunFilterName] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.name`\"\n    )\n    tags: Optional[TaskRunFilterTags] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.tags`\"\n    )\n    state: Optional[TaskRunFilterState] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.state`\"\n    )\n    start_time: Optional[TaskRunFilterStartTime] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.start_time`\"\n    )\n    subflow_runs: Optional[TaskRunFilterSubFlowRuns] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.subflow_run`\"\n    )\n    flow_run_id: Optional[TaskRunFilterFlowRunId] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.flow_run_id`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.tags is not None:\n            filters.append(self.tags.as_sql_filter(db))\n        if self.state is not None:\n            filters.append(self.state.as_sql_filter(db))\n        if self.start_time is not None:\n            filters.append(self.start_time.as_sql_filter(db))\n        if self.subflow_runs is not None:\n            filters.append(self.subflow_runs.as_sql_filter(db))\n        if self.flow_run_id is not None:\n            filters.append(self.flow_run_id.as_sql_filter(db))\n\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilter.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
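Finally, a composite sketch of TaskRunFilter using two of the sub-filters documented later in this module (the name fragment and the seven-day cutoff are arbitrary examples):

from datetime import datetime, timedelta, timezone

from prefect.server.schemas.filters import (
    TaskRunFilter,
    TaskRunFilterName,
    TaskRunFilterStartTime,
)

cutoff = datetime.now(timezone.utc) - timedelta(days=7)
# Task runs whose name contains "extract" and that started in the last week;
# criteria combine with the default and_ operator.
recent_extracts = TaskRunFilter(
    name=TaskRunFilterName(like_="extract"),
    start_time=TaskRunFilterStartTime(after_=cutoff),
)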
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterFlowRunId","title":"TaskRunFilterFlowRunId","text":"

      Bases: PrefectOperatorFilterBaseModel

      Filter by TaskRun.flow_run_id.

      Source code in prefect/server/schemas/filters.py
      class TaskRunFilterFlowRunId(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `TaskRun.flow_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run flow run ids to include\"\n    )\n\n    is_null_: bool = Field(\n        default=False, description=\"Filter for task runs with None as their flow run id\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.is_null_ is True:\n            filters.append(db.TaskRun.flow_run_id.is_(None))\n        else:\n            if self.any_ is not None:\n                filters.append(db.TaskRun.flow_run_id.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterFlowRunId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterId","title":"TaskRunFilterId","text":"

      Bases: PrefectFilterBaseModel

      Filter by TaskRun.id.

      Source code in prefect/server/schemas/filters.py
      class TaskRunFilterId(PrefectFilterBaseModel):\n    \"\"\"Filter by `TaskRun.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.TaskRun.id.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterName","title":"TaskRunFilterName","text":"

      Bases: PrefectFilterBaseModel

      Filter by TaskRun.name.

      Source code in prefect/server/schemas/filters.py
      class TaskRunFilterName(PrefectFilterBaseModel):\n    \"\"\"Filter by `TaskRun.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of task run names to include\",\n        examples=[[\"my-task-run-1\", \"my-task-run-2\"]],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        examples=[\"marvin\"],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.TaskRun.name.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.TaskRun.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterName.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStartTime","title":"TaskRunFilterStartTime","text":"

      Bases: PrefectFilterBaseModel

      Filter by TaskRun.start_time.

      Source code in prefect/server/schemas/filters.py
      class TaskRunFilterStartTime(PrefectFilterBaseModel):\n    \"\"\"Filter by `TaskRun.start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include task runs starting at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include task runs starting at or after this time\",\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only return task runs without a start time\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.TaskRun.start_time <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.TaskRun.start_time >= self.after_)\n        if self.is_null_ is not None:\n            filters.append(\n                db.TaskRun.start_time.is_(None)\n                if self.is_null_\n                else db.TaskRun.start_time.is_not(None)\n            )\n        return filters\n
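A sketch of a time-window filter, assuming pendulum datetimes (Prefect's usual DateTimeTZ values) are acceptable inputs for the `before_` and `after_` fields.

```python
import pendulum

from prefect.server.schemas.filters import TaskRunFilterStartTime

# Task runs that started within the last 24 hours.
last_day = TaskRunFilterStartTime(
    after_=pendulum.now("UTC").subtract(hours=24),
    before_=pendulum.now("UTC"),
)

# Task runs with no start time recorded at all.
never_started = TaskRunFilterStartTime(is_null_=True)
```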
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStartTime.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterState","title":"TaskRunFilterState","text":"

      Bases: PrefectOperatorFilterBaseModel

Filter by TaskRun.state_type and TaskRun.state_name (via the type and name sub-filters).

      Source code in prefect/server/schemas/filters.py
      class TaskRunFilterState(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `TaskRun.type` and `TaskRun.name`.\"\"\"\n\n    type: Optional[TaskRunFilterStateType]\n    name: Optional[TaskRunFilterStateName]\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.type is not None:\n            filters.extend(self.type._get_filter_list(db))\n        if self.name is not None:\n            filters.extend(self.name._get_filter_list(db))\n        return filters\n
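A sketch combining the state-type and state-name sub-filters documented further down this page; it assumes the `StateType` enum members shown here (`FAILED`, `CRASHED`) and the conventional "Failed" state name.

```python
from prefect.server.schemas.filters import (
    TaskRunFilterState,
    TaskRunFilterStateName,
    TaskRunFilterStateType,
)
from prefect.server.schemas.states import StateType

# Task runs whose current state is a failure-like type named "Failed".
failed_runs = TaskRunFilterState(
    type=TaskRunFilterStateType(any_=[StateType.FAILED, StateType.CRASHED]),
    name=TaskRunFilterStateName(any_=["Failed"]),
)
```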
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterState.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateName","title":"TaskRunFilterStateName","text":"

      Bases: PrefectFilterBaseModel

      Filter by TaskRun.state_name.

      Source code in prefect/server/schemas/filters.py
      class TaskRunFilterStateName(PrefectFilterBaseModel):\n    \"\"\"Filter by `TaskRun.state_name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of task run state names to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.TaskRun.state_name.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateName.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateType","title":"TaskRunFilterStateType","text":"

      Bases: PrefectFilterBaseModel

      Filter by TaskRun.state_type.

      Source code in prefect/server/schemas/filters.py
      class TaskRunFilterStateType(PrefectFilterBaseModel):\n    \"\"\"Filter by `TaskRun.state_type`.\"\"\"\n\n    any_: Optional[List[schemas.states.StateType]] = Field(\n        default=None, description=\"A list of task run state types to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.TaskRun.state_type.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateType.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterSubFlowRuns","title":"TaskRunFilterSubFlowRuns","text":"

      Bases: PrefectFilterBaseModel

      Filter by TaskRun.subflow_run.

      Source code in prefect/server/schemas/filters.py
      class TaskRunFilterSubFlowRuns(PrefectFilterBaseModel):\n    \"\"\"Filter by `TaskRun.subflow_run`.\"\"\"\n\n    exists_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"If true, only include task runs that are subflow run parents; if false,\"\n            \" exclude parent task runs\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.exists_ is True:\n            filters.append(db.TaskRun.subflow_run.has())\n        elif self.exists_ is False:\n            filters.append(sa.not_(db.TaskRun.subflow_run.has()))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterSubFlowRuns.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterTags","title":"TaskRunFilterTags","text":"

      Bases: PrefectOperatorFilterBaseModel

      Filter by TaskRun.tags.

      Source code in prefect/server/schemas/filters.py
      class TaskRunFilterTags(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `TaskRun.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"tag-1\", \"tag-2\"]],\n        description=(\n            \"A list of tags. Task runs will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include task runs without tags\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.TaskRun.tags, self.all_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.TaskRun.tags == [] if self.is_null_ else db.TaskRun.tags != []\n            )\n        return filters\n
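A sketch of the superset semantics described by the `all_` field; the tag names are hypothetical.

```python
from prefect.server.schemas.filters import TaskRunFilterTags

# Only task runs whose tags include both "tag-1" and "tag-2".
tagged = TaskRunFilterTags(all_=["tag-1", "tag-2"])

# Only task runs with no tags at all.
untagged = TaskRunFilterTags(is_null_=True)
```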
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterTags.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilter","title":"VariableFilter","text":"

      Bases: PrefectOperatorFilterBaseModel

      Filter variables. Only variables matching all criteria will be returned

      Source code in prefect/server/schemas/filters.py
      class VariableFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter variables. Only variables matching all criteria will be returned\"\"\"\n\n    id: Optional[VariableFilterId] = Field(\n        default=None, description=\"Filter criteria for `Variable.id`\"\n    )\n    name: Optional[VariableFilterName] = Field(\n        default=None, description=\"Filter criteria for `Variable.name`\"\n    )\n    value: Optional[VariableFilterValue] = Field(\n        default=None, description=\"Filter criteria for `Variable.value`\"\n    )\n    tags: Optional[VariableFilterTags] = Field(\n        default=None, description=\"Filter criteria for `Variable.tags`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.value is not None:\n            filters.append(self.value.as_sql_filter(db))\n        if self.tags is not None:\n            filters.append(self.tags.as_sql_filter(db))\n        return filters\n
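A sketch of a composite variable filter built from the sub-filters documented below; all criteria must match. The variable name pattern and tags are hypothetical.

```python
from prefect.server.schemas.filters import (
    VariableFilter,
    VariableFilterName,
    VariableFilterTags,
)

# Variables whose names contain "my_variable_" and that carry both tags.
variable_filter = VariableFilter(
    name=VariableFilterName(like_="my_variable_%"),
    tags=VariableFilterTags(all_=["tag-1", "tag-2"]),
)
```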
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilter.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterId","title":"VariableFilterId","text":"

      Bases: PrefectFilterBaseModel

      Filter by Variable.id.

      Source code in prefect/server/schemas/filters.py
      class VariableFilterId(PrefectFilterBaseModel):\n    \"\"\"Filter by `Variable.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of variable ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Variable.id.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterName","title":"VariableFilterName","text":"

      Bases: PrefectFilterBaseModel

      Filter by Variable.name.

      Source code in prefect/server/schemas/filters.py
      class VariableFilterName(PrefectFilterBaseModel):\n    \"\"\"Filter by `Variable.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of variables names to include\"\n    )\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match variable names against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        examples=[\"my_variable_%\"],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Variable.name.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.Variable.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterName.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterTags","title":"VariableFilterTags","text":"

      Bases: PrefectOperatorFilterBaseModel

      Filter by Variable.tags.

      Source code in prefect/server/schemas/filters.py
      class VariableFilterTags(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `Variable.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        examples=[[\"tag-1\", \"tag-2\"]],\n        description=(\n            \"A list of tags. Variables will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include Variables without tags\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.Variable.tags, self.all_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.Variable.tags == [] if self.is_null_ else db.Variable.tags != []\n            )\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterTags.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterValue","title":"VariableFilterValue","text":"

      Bases: PrefectFilterBaseModel

      Filter by Variable.value.

      Source code in prefect/server/schemas/filters.py
      class VariableFilterValue(PrefectFilterBaseModel):\n    \"\"\"Filter by `Variable.value`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of variables value to include\"\n    )\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match variable value against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        examples=[\"my-value-%\"],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Variable.value.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.Variable.value.ilike(f\"%{self.like_}%\"))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterValue.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilter","title":"WorkPoolFilter","text":"

      Bases: PrefectOperatorFilterBaseModel

      Filter work pools. Only work pools matching all criteria will be returned

      Source code in prefect/server/schemas/filters.py
      class WorkPoolFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter work pools. Only work pools matching all criteria will be returned\"\"\"\n\n    id: Optional[WorkPoolFilterId] = Field(\n        default=None, description=\"Filter criteria for `WorkPool.id`\"\n    )\n    name: Optional[WorkPoolFilterName] = Field(\n        default=None, description=\"Filter criteria for `WorkPool.name`\"\n    )\n    type: Optional[WorkPoolFilterType] = Field(\n        default=None, description=\"Filter criteria for `WorkPool.type`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.type is not None:\n            filters.append(self.type.as_sql_filter(db))\n\n        return filters\n
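A sketch of a composite work pool filter; the pool names are hypothetical, and "process" is used as an example work pool type.

```python
from prefect.server.schemas.filters import (
    WorkPoolFilter,
    WorkPoolFilterName,
    WorkPoolFilterType,
)

# Work pools with one of the given names and the "process" type;
# all criteria must match for a pool to be returned.
work_pool_filter = WorkPoolFilter(
    name=WorkPoolFilterName(any_=["default-pool", "staging-pool"]),
    type=WorkPoolFilterType(any_=["process"]),
)
```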
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilter.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterId","title":"WorkPoolFilterId","text":"

      Bases: PrefectFilterBaseModel

      Filter by WorkPool.id.

      Source code in prefect/server/schemas/filters.py
      class WorkPoolFilterId(PrefectFilterBaseModel):\n    \"\"\"Filter by `WorkPool.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of work pool ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.WorkPool.id.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterName","title":"WorkPoolFilterName","text":"

      Bases: PrefectFilterBaseModel

      Filter by WorkPool.name.

      Source code in prefect/server/schemas/filters.py
      class WorkPoolFilterName(PrefectFilterBaseModel):\n    \"\"\"Filter by `WorkPool.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of work pool names to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.WorkPool.name.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterName.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterType","title":"WorkPoolFilterType","text":"

      Bases: PrefectFilterBaseModel

      Filter by WorkPool.type.

      Source code in prefect/server/schemas/filters.py
      class WorkPoolFilterType(PrefectFilterBaseModel):\n    \"\"\"Filter by `WorkPool.type`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of work pool types to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.WorkPool.type.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterType.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilter","title":"WorkQueueFilter","text":"

      Bases: PrefectOperatorFilterBaseModel

      Filter work queues. Only work queues matching all criteria will be returned

      Source code in prefect/server/schemas/filters.py
      class WorkQueueFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter work queues. Only work queues matching all criteria will be\n    returned\"\"\"\n\n    id: Optional[WorkQueueFilterId] = Field(\n        default=None, description=\"Filter criteria for `WorkQueue.id`\"\n    )\n\n    name: Optional[WorkQueueFilterName] = Field(\n        default=None, description=\"Filter criteria for `WorkQueue.name`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilter.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterId","title":"WorkQueueFilterId","text":"

      Bases: PrefectFilterBaseModel

      Filter by WorkQueue.id.

      Source code in prefect/server/schemas/filters.py
      class WorkQueueFilterId(PrefectFilterBaseModel):\n    \"\"\"Filter by `WorkQueue.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None,\n        description=\"A list of work queue ids to include\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.WorkQueue.id.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterName","title":"WorkQueueFilterName","text":"

      Bases: PrefectFilterBaseModel

      Filter by WorkQueue.name.

      Source code in prefect/server/schemas/filters.py
      class WorkQueueFilterName(PrefectFilterBaseModel):\n    \"\"\"Filter by `WorkQueue.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of work queue names to include\",\n        examples=[[\"wq-1\", \"wq-2\"]],\n    )\n\n    startswith_: Optional[List[str]] = Field(\n        default=None,\n        description=(\n            \"A list of case-insensitive starts-with matches. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', and 'Marvin-robot', but not 'sad-marvin'.\"\n        ),\n        examples=[[\"marvin\", \"Marvin-robot\"]],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.WorkQueue.name.in_(self.any_))\n        if self.startswith_ is not None:\n            filters.append(\n                sa.or_(\n                    *[db.WorkQueue.name.ilike(f\"{item}%\") for item in self.startswith_]\n                )\n            )\n        return filters\n
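A sketch of the `startswith_` semantics described above, wrapped in the composite `WorkQueueFilter` documented earlier on this page.

```python
from prefect.server.schemas.filters import WorkQueueFilter, WorkQueueFilterName

# startswith_ is case-insensitive and anchored at the start of the name:
# 'marvin' matches 'marvin' and 'Marvin-robot', but not 'sad-marvin'.
work_queue_filter = WorkQueueFilter(
    name=WorkQueueFilterName(startswith_=["marvin"]),
)
```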
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterName.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilter","title":"WorkerFilter","text":"

      Bases: PrefectOperatorFilterBaseModel

Filter workers by Worker.last_heartbeat_time and Worker.status.

      Source code in prefect/server/schemas/filters.py
      class WorkerFilter(PrefectOperatorFilterBaseModel):\n    \"\"\"Filter by `Worker.last_heartbeat_time`.\"\"\"\n\n    # worker_config_id: Optional[WorkerFilterWorkPoolId] = Field(\n    #     default=None, description=\"Filter criteria for `Worker.worker_config_id`\"\n    # )\n\n    last_heartbeat_time: Optional[WorkerFilterLastHeartbeatTime] = Field(\n        default=None,\n        description=\"Filter criteria for `Worker.last_heartbeat_time`\",\n    )\n\n    status: Optional[WorkerFilterStatus] = Field(\n        default=None, description=\"Filter criteria for `Worker.status`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.last_heartbeat_time is not None:\n            filters.append(self.last_heartbeat_time.as_sql_filter(db))\n\n        return filters\n
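A sketch of a recent-heartbeat filter, assuming pendulum datetimes for the sub-filter's time bounds. Note that, as shown in the source above, `_get_filter_list` only applies the `last_heartbeat_time` criterion.

```python
import pendulum

from prefect.server.schemas.filters import (
    WorkerFilter,
    WorkerFilterLastHeartbeatTime,
)

# Workers that have sent a heartbeat within the last five minutes.
recently_seen = WorkerFilter(
    last_heartbeat_time=WorkerFilterLastHeartbeatTime(
        after_=pendulum.now("UTC").subtract(minutes=5)
    ),
)
```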
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilter.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterLastHeartbeatTime","title":"WorkerFilterLastHeartbeatTime","text":"

      Bases: PrefectFilterBaseModel

      Filter by Worker.last_heartbeat_time.

      Source code in prefect/server/schemas/filters.py
      class WorkerFilterLastHeartbeatTime(PrefectFilterBaseModel):\n    \"\"\"Filter by `Worker.last_heartbeat_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include processes whose last heartbeat was at or before this time\"\n        ),\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include processes whose last heartbeat was at or after this time\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.Worker.last_heartbeat_time <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.Worker.last_heartbeat_time >= self.after_)\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterLastHeartbeatTime.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterStatus","title":"WorkerFilterStatus","text":"

      Bases: PrefectFilterBaseModel

      Filter by Worker.status.

      Source code in prefect/server/schemas/filters.py
      class WorkerFilterStatus(PrefectFilterBaseModel):\n    \"\"\"Filter by `Worker.status`.\"\"\"\n\n    any_: Optional[List[schemas.statuses.WorkerStatus]] = Field(\n        default=None, description=\"A list of worker statuses to include\"\n    )\n    not_any_: Optional[List[schemas.statuses.WorkerStatus]] = Field(\n        default=None, description=\"A list of worker statuses to exclude\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Worker.status.in_(self.any_))\n        if self.not_any_ is not None:\n            filters.append(db.Worker.status.notin_(self.not_any_))\n        return filters\n
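A sketch of the include/exclude status fields, assuming the `WorkerStatus` enum exposes `ONLINE` and `OFFLINE` members.

```python
from prefect.server.schemas.filters import WorkerFilterStatus
from prefect.server.schemas.statuses import WorkerStatus

# Include only online workers...
online_only = WorkerFilterStatus(any_=[WorkerStatus.ONLINE])

# ...or, alternatively, exclude offline workers.
not_offline = WorkerFilterStatus(not_any_=[WorkerStatus.OFFLINE])
```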
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterStatus.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterWorkPoolId","title":"WorkerFilterWorkPoolId","text":"

      Bases: PrefectFilterBaseModel

      Filter by Worker.worker_config_id.

      Source code in prefect/server/schemas/filters.py
      class WorkerFilterWorkPoolId(PrefectFilterBaseModel):\n    \"\"\"Filter by `Worker.worker_config_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of work pool ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Worker.worker_config_id.in_(self.any_))\n        return filters\n
      "},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterWorkPoolId.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/responses/","title":"server.schemas.responses","text":""},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses","title":"prefect.server.schemas.responses","text":"

      Schemas for special responses from the Prefect REST API.

      "},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.FlowRunResponse","title":"FlowRunResponse","text":"

      Bases: ORMBaseModel

      Source code in prefect/server/schemas/responses.py
      class FlowRunResponse(ORMBaseModel):\n    name: str = Field(\n        default_factory=lambda: generate_slug(2),\n        description=(\n            \"The name of the flow run. Defaults to a random slug if not specified.\"\n        ),\n        examples=[\"my-flow-run\"],\n    )\n    flow_id: UUID = Field(default=..., description=\"The id of the flow being run.\")\n    state_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the flow run's current state.\"\n    )\n    deployment_id: Optional[UUID] = Field(\n        default=None,\n        description=(\n            \"The id of the deployment associated with this flow run, if available.\"\n        ),\n    )\n    deployment_version: Optional[str] = Field(\n        default=None,\n        description=\"The version of the deployment associated with this flow run.\",\n        examples=[\"1.0\"],\n    )\n    work_queue_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the run's work pool queue.\"\n    )\n    work_queue_name: Optional[str] = Field(\n        default=None, description=\"The work queue that handled this flow run.\"\n    )\n    flow_version: Optional[str] = Field(\n        default=None,\n        description=\"The version of the flow executed in this flow run.\",\n        examples=[\"1.0\"],\n    )\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict, description=\"Parameters for the flow run.\"\n    )\n    idempotency_key: Optional[str] = Field(\n        default=None,\n        description=(\n            \"An optional idempotency key for the flow run. Used to ensure the same flow\"\n            \" run is not created multiple times.\"\n        ),\n    )\n    context: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Additional context for the flow run.\",\n        examples=[{\"my_var\": \"my_val\"}],\n    )\n    empirical_policy: FlowRunPolicy = Field(\n        default_factory=FlowRunPolicy,\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags on the flow run\",\n        examples=[[\"tag-1\", \"tag-2\"]],\n    )\n    parent_task_run_id: Optional[UUID] = Field(\n        default=None,\n        description=(\n            \"If the flow run is a subflow, the id of the 'dummy' task in the parent\"\n            \" flow used to track subflow state.\"\n        ),\n    )\n    state_type: Optional[schemas.states.StateType] = Field(\n        default=None, description=\"The type of the current flow run state.\"\n    )\n    state_name: Optional[str] = Field(\n        default=None, description=\"The name of the current flow run state.\"\n    )\n    run_count: int = Field(\n        default=0, description=\"The number of times the flow run was executed.\"\n    )\n    expected_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The flow run's expected start time.\",\n    )\n    next_scheduled_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The next time the flow run is scheduled to start.\",\n    )\n    start_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual start time.\"\n    )\n    end_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual end time.\"\n    )\n    total_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=(\n            \"Total run time. 
If the flow run was executed multiple times, the time of\"\n            \" each run will be summed.\"\n        ),\n    )\n    estimated_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"A real-time estimate of the total run time.\",\n    )\n    estimated_start_time_delta: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"The difference between actual and expected start time.\",\n    )\n    auto_scheduled: bool = Field(\n        default=False,\n        description=\"Whether or not the flow run was automatically scheduled.\",\n    )\n    infrastructure_document_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The block document defining infrastructure to use this flow run.\",\n    )\n    infrastructure_pid: Optional[str] = Field(\n        default=None,\n        description=\"The id of the flow run as returned by an infrastructure block.\",\n    )\n    created_by: Optional[CreatedBy] = Field(\n        default=None,\n        description=\"Optional information about the creator of this flow run.\",\n    )\n    work_pool_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The id of the flow run's work pool.\",\n    )\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the flow run's work pool.\",\n        examples=[\"my-work-pool\"],\n    )\n    state: Optional[schemas.states.State] = Field(\n        default=None, description=\"The current state of the flow run.\"\n    )\n    job_variables: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=\"Variables used as overrides in the base job template\",\n    )\n\n    @classmethod\n    def from_orm(cls, orm_flow_run: \"ORMFlowRun\"):\n        response = super().from_orm(orm_flow_run)\n        if orm_flow_run.work_queue:\n            response.work_queue_id = orm_flow_run.work_queue.id\n            response.work_queue_name = orm_flow_run.work_queue.name\n            if orm_flow_run.work_queue.work_pool:\n                response.work_pool_id = orm_flow_run.work_queue.work_pool.id\n                response.work_pool_name = orm_flow_run.work_queue.work_pool.name\n\n        return response\n\n    def __eq__(self, other: Any) -> bool:\n        \"\"\"\n        Check for \"equality\" to another flow run schema\n\n        Estimates times are rolling and will always change with repeated queries for\n        a flow run so we ignore them during equality checks.\n        \"\"\"\n        if isinstance(other, FlowRunResponse):\n            exclude_fields = {\"estimated_run_time\", \"estimated_start_time_delta\"}\n            return self.dict(exclude=exclude_fields) == other.dict(\n                exclude=exclude_fields\n            )\n        return super().__eq__(other)\n
      "},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.GlobalConcurrencyLimitResponse","title":"GlobalConcurrencyLimitResponse","text":"

      Bases: ORMBaseModel

      A response object for global concurrency limits.

      Source code in prefect/server/schemas/responses.py
      class GlobalConcurrencyLimitResponse(ORMBaseModel):\n    \"\"\"\n    A response object for global concurrency limits.\n    \"\"\"\n\n    active: bool = Field(\n        default=True, description=\"Whether the global concurrency limit is active.\"\n    )\n    name: str = Field(\n        default=..., description=\"The name of the global concurrency limit.\"\n    )\n    limit: int = Field(default=..., description=\"The concurrency limit.\")\n    active_slots: int = Field(default=..., description=\"The number of active slots.\")\n    slot_decay_per_second: float = Field(\n        default=2.0,\n        description=\"The decay rate for active slots when used as a rate limit.\",\n    )\n
      "},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.HistoryResponse","title":"HistoryResponse","text":"

      Bases: PrefectBaseModel

      Represents a history of aggregation states over an interval

      Source code in prefect/server/schemas/responses.py
      class HistoryResponse(PrefectBaseModel):\n    \"\"\"Represents a history of aggregation states over an interval\"\"\"\n\n    interval_start: DateTimeTZ = Field(\n        default=..., description=\"The start date of the interval.\"\n    )\n    interval_end: DateTimeTZ = Field(\n        default=..., description=\"The end date of the interval.\"\n    )\n    states: List[HistoryResponseState] = Field(\n        default=..., description=\"A list of state histories during the interval.\"\n    )\n
      "},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.HistoryResponseState","title":"HistoryResponseState","text":"

      Bases: PrefectBaseModel

      Represents a single state's history over an interval.

      Source code in prefect/server/schemas/responses.py
      class HistoryResponseState(PrefectBaseModel):\n    \"\"\"Represents a single state's history over an interval.\"\"\"\n\n    state_type: schemas.states.StateType = Field(\n        default=..., description=\"The state type.\"\n    )\n    state_name: str = Field(default=..., description=\"The state name.\")\n    count_runs: int = Field(\n        default=...,\n        description=\"The number of runs in the specified state during the interval.\",\n    )\n    sum_estimated_run_time: datetime.timedelta = Field(\n        default=...,\n        description=\"The total estimated run time of all runs during the interval.\",\n    )\n    sum_estimated_lateness: datetime.timedelta = Field(\n        default=...,\n        description=(\n            \"The sum of differences between actual and expected start time during the\"\n            \" interval.\"\n        ),\n    )\n
      "},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.OrchestrationResult","title":"OrchestrationResult","text":"

      Bases: PrefectBaseModel

      A container for the output of state orchestration.

      Source code in prefect/server/schemas/responses.py
      class OrchestrationResult(PrefectBaseModel):\n    \"\"\"\n    A container for the output of state orchestration.\n    \"\"\"\n\n    state: Optional[schemas.states.State]\n    status: SetStateStatus\n    details: StateResponseDetails\n
      "},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.SetStateStatus","title":"SetStateStatus","text":"

      Bases: AutoEnum

      Enumerates return statuses for setting run states.

      Source code in prefect/server/schemas/responses.py
      class SetStateStatus(AutoEnum):\n    \"\"\"Enumerates return statuses for setting run states.\"\"\"\n\n    ACCEPT = AutoEnum.auto()\n    REJECT = AutoEnum.auto()\n    ABORT = AutoEnum.auto()\n    WAIT = AutoEnum.auto()\n
      "},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateAbortDetails","title":"StateAbortDetails","text":"

      Bases: PrefectBaseModel

      Details associated with an ABORT state transition.

      Source code in prefect/server/schemas/responses.py
      class StateAbortDetails(PrefectBaseModel):\n    \"\"\"Details associated with an ABORT state transition.\"\"\"\n\n    type: Literal[\"abort_details\"] = Field(\n        default=\"abort_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n    reason: Optional[str] = Field(\n        default=None, description=\"The reason why the state transition was aborted.\"\n    )\n
      "},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateAcceptDetails","title":"StateAcceptDetails","text":"

      Bases: PrefectBaseModel

      Details associated with an ACCEPT state transition.

      Source code in prefect/server/schemas/responses.py
      class StateAcceptDetails(PrefectBaseModel):\n    \"\"\"Details associated with an ACCEPT state transition.\"\"\"\n\n    type: Literal[\"accept_details\"] = Field(\n        default=\"accept_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n
      "},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateRejectDetails","title":"StateRejectDetails","text":"

      Bases: PrefectBaseModel

      Details associated with a REJECT state transition.

      Source code in prefect/server/schemas/responses.py
      class StateRejectDetails(PrefectBaseModel):\n    \"\"\"Details associated with a REJECT state transition.\"\"\"\n\n    type: Literal[\"reject_details\"] = Field(\n        default=\"reject_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n    reason: Optional[str] = Field(\n        default=None, description=\"The reason why the state transition was rejected.\"\n    )\n
      "},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateWaitDetails","title":"StateWaitDetails","text":"

      Bases: PrefectBaseModel

      Details associated with a WAIT state transition.

      Source code in prefect/server/schemas/responses.py
      class StateWaitDetails(PrefectBaseModel):\n    \"\"\"Details associated with a WAIT state transition.\"\"\"\n\n    type: Literal[\"wait_details\"] = Field(\n        default=\"wait_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n    delay_seconds: int = Field(\n        default=...,\n        description=(\n            \"The length of time in seconds the client should wait before transitioning\"\n            \" states.\"\n        ),\n    )\n    reason: Optional[str] = Field(\n        default=None, description=\"The reason why the state transition should wait.\"\n    )\n
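Taken together, OrchestrationResult, SetStateStatus, and the detail classes above describe the outcome of a proposed state transition. The sketch below is a hedged illustration of how a caller might branch on such a result; the `handle` helper is hypothetical, and only the schema fields documented above (state, status, details) are assumed.

```python
# A hedged sketch: branching on an OrchestrationResult by its SetStateStatus.
# `handle` is a hypothetical helper; only the schema fields shown above are assumed.
import asyncio

from prefect.server.schemas.responses import OrchestrationResult, SetStateStatus


async def handle(result: OrchestrationResult) -> None:
    if result.status == SetStateStatus.ACCEPT:
        print(f"Transition accepted, now in state {result.state.type}")
    elif result.status == SetStateStatus.WAIT:
        # WAIT responses carry StateWaitDetails with a delay_seconds field.
        await asyncio.sleep(result.details.delay_seconds)
    elif result.status == SetStateStatus.REJECT:
        # REJECT (and ABORT) details include an optional reason.
        print(f"Transition rejected: {result.details.reason}")
    else:  # SetStateStatus.ABORT
        print(f"Transition aborted: {result.details.reason}")
```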
      "},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.WorkQueueWithStatus","title":"WorkQueueWithStatus","text":"

      Bases: WorkQueueResponse, WorkQueueStatusDetail

      Combines a work queue and its status details into a single object

      Source code in prefect/server/schemas/responses.py
      class WorkQueueWithStatus(WorkQueueResponse, WorkQueueStatusDetail):\n    \"\"\"Combines a work queue and its status details into a single object\"\"\"\n
      "},{"location":"api-ref/server/schemas/schedules/","title":"server.schemas.schedules","text":""},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules","title":"prefect.server.schemas.schedules","text":"

      Schedule schemas

      "},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.CronSchedule","title":"CronSchedule","text":"

      Bases: PrefectBaseModel

      Cron schedule

      NOTE: If the timezone is a DST-observing one, then the schedule will adjust itself appropriately. Cron's rules for DST are based on schedule times, not intervals. This means that an hourly cron schedule will fire on every new schedule hour, not every elapsed hour; for example, when clocks are set back this will result in a two-hour pause as the schedule will fire the first time 1am is reached and the first time 2am is reached, 120 minutes later. Longer schedules, such as one that fires at 9am every morning, will automatically adjust for DST.

Parameters:

- cron (str): a valid cron string
- timezone (str): a valid timezone string in IANA tzdata format (for example, America/New_York).
- day_or (bool): Controls how croniter handles day and day_of_week entries. Defaults to True, matching standard cron, which connects those values using OR. If set to False, the values are connected using AND. This behaves like fcron and lets you, for example, define a job that runs on the second Friday of each month by setting both the day of month and the weekday.

Source code in prefect/server/schemas/schedules.py
      class CronSchedule(PrefectBaseModel):\n    \"\"\"\n    Cron schedule\n\n    NOTE: If the timezone is a DST-observing one, then the schedule will adjust\n    itself appropriately. Cron's rules for DST are based on schedule times, not\n    intervals. This means that an hourly cron schedule will fire on every new\n    schedule hour, not every elapsed hour; for example, when clocks are set back\n    this will result in a two-hour pause as the schedule will fire *the first\n    time* 1am is reached and *the first time* 2am is reached, 120 minutes later.\n    Longer schedules, such as one that fires at 9am every morning, will\n    automatically adjust for DST.\n\n    Args:\n        cron (str): a valid cron string\n        timezone (str): a valid timezone string in IANA tzdata format (for example,\n            America/New_York).\n        day_or (bool, optional): Control how croniter handles `day` and `day_of_week`\n            entries. Defaults to True, matching cron which connects those values using\n            OR. If the switch is set to False, the values are connected using AND. This\n            behaves like fcron and enables you to e.g. define a job that executes each\n            2nd friday of a month by setting the days of month and the weekday.\n\n    \"\"\"\n\n    class Config:\n        extra = \"forbid\"\n\n    cron: str = Field(default=..., examples=[\"0 0 * * *\"])\n    timezone: Optional[str] = Field(default=None, examples=[\"America/New_York\"])\n    day_or: bool = Field(\n        default=True,\n        description=(\n            \"Control croniter behavior for handling day and day_of_week entries.\"\n        ),\n    )\n\n    @validator(\"timezone\")\n    def validate_timezone(cls, v, *, values, **kwargs):\n        return default_timezone(v, values)\n\n    @validator(\"cron\")\n    def valid_cron_string(cls, v):\n        return validate_cron_string(v)\n\n    async def get_dates(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> List[pendulum.DateTime]:\n        \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to None.  If a timezone-naive datetime is\n                provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): The maximum scheduled date to return. If\n                a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: A list of dates\n        \"\"\"\n        return sorted(self._get_dates_generator(n=n, start=start, end=end))\n\n    def _get_dates_generator(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> Generator[pendulum.DateTime, None, None]:\n        \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to the current date. 
If a timezone-naive\n                datetime is provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): No returned date will exceed this date.\n                If a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: a list of dates\n        \"\"\"\n        if start is None:\n            start = pendulum.now(\"UTC\")\n\n        start, end = _prepare_scheduling_start_and_end(start, end, self.timezone)\n\n        if n is None:\n            # if an end was supplied, we do our best to supply all matching dates (up to\n            # MAX_ITERATIONS)\n            if end is not None:\n                n = MAX_ITERATIONS\n            else:\n                n = 1\n\n        elif self.timezone:\n            start = start.in_tz(self.timezone)\n\n        # subtract one second from the start date, so that croniter returns it\n        # as an event (if it meets the cron criteria)\n        start = start.subtract(seconds=1)\n\n        # Respect microseconds by rounding up\n        if start.microsecond > 0:\n            start += datetime.timedelta(seconds=1)\n\n        # croniter's DST logic interferes with all other datetime libraries except pytz\n        start_localized = pytz.timezone(start.tz.name).localize(\n            datetime.datetime(\n                year=start.year,\n                month=start.month,\n                day=start.day,\n                hour=start.hour,\n                minute=start.minute,\n                second=start.second,\n                microsecond=start.microsecond,\n            )\n        )\n        start_naive_tz = start.naive()\n\n        cron = croniter(self.cron, start_naive_tz, day_or=self.day_or)  # type: ignore\n        dates = set()\n        counter = 0\n\n        while True:\n            # croniter does not handle DST properly when the start time is\n            # in and around when the actual shift occurs. To work around this,\n            # we use the naive start time to get the next cron date delta, then\n            # add that time to the original scheduling anchor.\n            next_time = cron.get_next(datetime.datetime)\n            delta = next_time - start_naive_tz\n            next_date = pendulum.instance(start_localized + delta)\n\n            # if the end date was exceeded, exit\n            if end and next_date > end:\n                break\n            # ensure no duplicates; weird things can happen with DST\n            if next_date not in dates:\n                dates.add(next_date)\n                yield next_date\n\n            # if enough dates have been collected or enough attempts were made, exit\n            if len(dates) >= n or counter > MAX_ITERATIONS:\n                break\n\n            counter += 1\n
      "},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.CronSchedule.get_dates","title":"get_dates async","text":"

      Retrieves dates from the schedule. Up to 1,000 candidate dates are checked following the start date.

Parameters:

- n (int): The number of dates to generate. Defaults to None.
- start (datetime): The first returned date will be on or after this date. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.
- end (datetime): The maximum scheduled date to return. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.

Returns:

- List[pendulum.DateTime]: A list of dates

Source code in prefect/server/schemas/schedules.py
      async def get_dates(\n    self,\n    n: int = None,\n    start: datetime.datetime = None,\n    end: datetime.datetime = None,\n) -> List[pendulum.DateTime]:\n    \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n    following the start date.\n\n    Args:\n        n (int): The number of dates to generate\n        start (datetime.datetime, optional): The first returned date will be on or\n            after this date. Defaults to None.  If a timezone-naive datetime is\n            provided, it is assumed to be in the schedule's timezone.\n        end (datetime.datetime, optional): The maximum scheduled date to return. If\n            a timezone-naive datetime is provided, it is assumed to be in the\n            schedule's timezone.\n\n    Returns:\n        List[pendulum.DateTime]: A list of dates\n    \"\"\"\n    return sorted(self._get_dates_generator(n=n, start=start, end=end))\n
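A short usage sketch based on the class and get_dates signatures above; the cron strings and timezone are example values.

```python
# Example values only; CronSchedule and get_dates are used as documented above.
import asyncio

from prefect.server.schemas.schedules import CronSchedule

# Every day at 9am in New York, adjusting for DST as described in the NOTE.
daily = CronSchedule(cron="0 9 * * *", timezone="America/New_York")

# With day_or=False, day-of-month and weekday are combined with AND, e.g.
# "the second Friday of each month" (days 8-14 that fall on a Friday).
second_friday = CronSchedule(cron="0 9 8-14 * 5", day_or=False)

# get_dates is async and checks up to 1,000 candidate dates after `start`.
next_three = asyncio.run(daily.get_dates(n=3))
for d in next_three:
    print(d.isoformat())
```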
      "},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.IntervalSchedule","title":"IntervalSchedule","text":"

      Bases: PrefectBaseModel

      A schedule formed by adding interval increments to an anchor_date. If no anchor_date is supplied, the current UTC time is used. If a timezone-naive datetime is provided for anchor_date, it is assumed to be in the schedule's timezone (or UTC). Even if supplied with an IANA timezone, anchor dates are always stored as UTC offsets, so a timezone can be provided to determine localization behaviors like DST boundary handling. If none is provided it will be inferred from the anchor date.

      NOTE: If the IntervalSchedule anchor_date or timezone is provided in a DST-observing timezone, then the schedule will adjust itself appropriately. Intervals greater than 24 hours will follow DST conventions, while intervals of less than 24 hours will follow UTC intervals. For example, an hourly schedule will fire every UTC hour, even across DST boundaries. When clocks are set back, this will result in two runs that appear to both be scheduled for 1am local time, even though they are an hour apart in UTC time. For longer intervals, like a daily schedule, the interval schedule will adjust for DST boundaries so that the clock-hour remains constant. This means that a daily schedule that always fires at 9am will observe DST and continue to fire at 9am in the local time zone.

Parameters:

- interval (timedelta): an interval to schedule on.
- anchor_date (DateTimeTZ): an anchor date to schedule increments against; if not provided, the current timestamp will be used.
- timezone (str): a valid timezone string.

Source code in prefect/server/schemas/schedules.py
      class IntervalSchedule(PrefectBaseModel):\n    \"\"\"\n    A schedule formed by adding `interval` increments to an `anchor_date`. If no\n    `anchor_date` is supplied, the current UTC time is used.  If a\n    timezone-naive datetime is provided for `anchor_date`, it is assumed to be\n    in the schedule's timezone (or UTC). Even if supplied with an IANA timezone,\n    anchor dates are always stored as UTC offsets, so a `timezone` can be\n    provided to determine localization behaviors like DST boundary handling. If\n    none is provided it will be inferred from the anchor date.\n\n    NOTE: If the `IntervalSchedule` `anchor_date` or `timezone` is provided in a\n    DST-observing timezone, then the schedule will adjust itself appropriately.\n    Intervals greater than 24 hours will follow DST conventions, while intervals\n    of less than 24 hours will follow UTC intervals. For example, an hourly\n    schedule will fire every UTC hour, even across DST boundaries. When clocks\n    are set back, this will result in two runs that *appear* to both be\n    scheduled for 1am local time, even though they are an hour apart in UTC\n    time. For longer intervals, like a daily schedule, the interval schedule\n    will adjust for DST boundaries so that the clock-hour remains constant. This\n    means that a daily schedule that always fires at 9am will observe DST and\n    continue to fire at 9am in the local time zone.\n\n    Args:\n        interval (datetime.timedelta): an interval to schedule on.\n        anchor_date (DateTimeTZ, optional): an anchor date to schedule increments against;\n            if not provided, the current timestamp will be used.\n        timezone (str, optional): a valid timezone string.\n    \"\"\"\n\n    class Config:\n        extra = \"forbid\"\n        exclude_none = True\n\n    interval: PositiveDuration\n    anchor_date: DateTimeTZ = None\n    timezone: Optional[str] = Field(default=None, examples=[\"America/New_York\"])\n\n    @validator(\"anchor_date\", always=True)\n    def validate_anchor_date(cls, v):\n        return default_anchor_date(v)\n\n    @validator(\"timezone\", always=True)\n    def validate_timezone(cls, v, *, values, **kwargs):\n        return default_timezone(v, values)\n\n    async def get_dates(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> List[pendulum.DateTime]:\n        \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to None.  If a timezone-naive datetime is\n                provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): The maximum scheduled date to return. If\n                a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: A list of dates\n        \"\"\"\n        return sorted(self._get_dates_generator(n=n, start=start, end=end))\n\n    def _get_dates_generator(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> Generator[pendulum.DateTime, None, None]:\n        \"\"\"Retrieves dates from the schedule. 
Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to None.  If a timezone-naive datetime is\n                provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): The maximum scheduled date to return. If\n                a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: a list of dates\n        \"\"\"\n        if n is None:\n            # if an end was supplied, we do our best to supply all matching dates (up to\n            # MAX_ITERATIONS)\n            if end is not None:\n                n = MAX_ITERATIONS\n            else:\n                n = 1\n\n        if start is None:\n            start = pendulum.now(\"UTC\")\n\n        anchor_tz = self.anchor_date.in_tz(self.timezone)\n        start, end = _prepare_scheduling_start_and_end(start, end, self.timezone)\n\n        # compute the offset between the anchor date and the start date to jump to the\n        # next date\n        offset = (start - anchor_tz).total_seconds() / self.interval.total_seconds()\n        next_date = anchor_tz.add(seconds=self.interval.total_seconds() * int(offset))\n\n        # break the interval into `days` and `seconds` because pendulum\n        # will handle DST boundaries properly if days are provided, but not\n        # if we add `total seconds`. Therefore, `next_date + self.interval`\n        # fails while `next_date.add(days=days, seconds=seconds)` works.\n        interval_days = self.interval.days\n        interval_seconds = self.interval.total_seconds() - (\n            interval_days * 24 * 60 * 60\n        )\n\n        # daylight saving time boundaries can create a situation where the next date is\n        # before the start date, so we advance it if necessary\n        while next_date < start:\n            next_date = next_date.add(days=interval_days, seconds=interval_seconds)\n\n        counter = 0\n        dates = set()\n\n        while True:\n            # if the end date was exceeded, exit\n            if end and next_date > end:\n                break\n\n            # ensure no duplicates; weird things can happen with DST\n            if next_date not in dates:\n                dates.add(next_date)\n                yield next_date\n\n            # if enough dates have been collected or enough attempts were made, exit\n            if len(dates) >= n or counter > MAX_ITERATIONS:\n                break\n\n            counter += 1\n\n            next_date = next_date.add(days=interval_days, seconds=interval_seconds)\n
      "},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.IntervalSchedule.get_dates","title":"get_dates async","text":"

      Retrieves dates from the schedule. Up to 1,000 candidate dates are checked following the start date.

Parameters:

- n (int): The number of dates to generate. Defaults to None.
- start (datetime): The first returned date will be on or after this date. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.
- end (datetime): The maximum scheduled date to return. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.

Returns:

- List[pendulum.DateTime]: A list of dates

Source code in prefect/server/schemas/schedules.py
      async def get_dates(\n    self,\n    n: int = None,\n    start: datetime.datetime = None,\n    end: datetime.datetime = None,\n) -> List[pendulum.DateTime]:\n    \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n    following the start date.\n\n    Args:\n        n (int): The number of dates to generate\n        start (datetime.datetime, optional): The first returned date will be on or\n            after this date. Defaults to None.  If a timezone-naive datetime is\n            provided, it is assumed to be in the schedule's timezone.\n        end (datetime.datetime, optional): The maximum scheduled date to return. If\n            a timezone-naive datetime is provided, it is assumed to be in the\n            schedule's timezone.\n\n    Returns:\n        List[pendulum.DateTime]: A list of dates\n    \"\"\"\n    return sorted(self._get_dates_generator(n=n, start=start, end=end))\n
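A minimal sketch of an hourly IntervalSchedule using the fields described above; the interval and anchor date are example values.

```python
# Example values only; the fields follow the IntervalSchedule definition above.
import asyncio
import datetime

import pendulum

from prefect.server.schemas.schedules import IntervalSchedule

schedule = IntervalSchedule(
    interval=datetime.timedelta(hours=1),
    # Anchored in a DST-observing timezone; sub-24-hour intervals follow UTC
    # hours across DST boundaries, as noted above.
    anchor_date=pendulum.datetime(2024, 1, 1, 9, 0, tz="America/New_York"),
)

upcoming = asyncio.run(schedule.get_dates(n=3))
```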
      "},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.RRuleSchedule","title":"RRuleSchedule","text":"

      Bases: PrefectBaseModel

RRule schedule, based on the iCalendar standard (RFC 5545) as implemented in dateutil.rrule.

      RRules are appropriate for any kind of calendar-date manipulation, including irregular intervals, repetition, exclusions, week day or day-of-month adjustments, and more.

Note that as a calendar-oriented standard, RRuleSchedules are sensitive to the initial timezone provided. A 9am daily schedule with a daylight saving time-aware start date will maintain a local 9am time through DST boundaries; a 9am daily schedule with a UTC start date will maintain a 9am UTC time.

Parameters:

- rrule (str): a valid RRule string
- timezone (str): a valid timezone string

Source code in prefect/server/schemas/schedules.py
      class RRuleSchedule(PrefectBaseModel):\n    \"\"\"\n    RRule schedule, based on the iCalendar standard\n    ([RFC 5545](https://datatracker.ietf.org/doc/html/rfc5545)) as\n    implemented in `dateutils.rrule`.\n\n    RRules are appropriate for any kind of calendar-date manipulation, including\n    irregular intervals, repetition, exclusions, week day or day-of-month\n    adjustments, and more.\n\n    Note that as a calendar-oriented standard, `RRuleSchedules` are sensitive to\n    to the initial timezone provided. A 9am daily schedule with a daylight saving\n    time-aware start date will maintain a local 9am time through DST boundaries;\n    a 9am daily schedule with a UTC start date will maintain a 9am UTC time.\n\n    Args:\n        rrule (str): a valid RRule string\n        timezone (str, optional): a valid timezone string\n    \"\"\"\n\n    class Config:\n        extra = \"forbid\"\n\n    rrule: str\n    timezone: Optional[str] = Field(default=None, examples=[\"America/New_York\"])\n\n    @validator(\"rrule\")\n    def validate_rrule_str(cls, v):\n        return validate_rrule_string(v)\n\n    @classmethod\n    def from_rrule(cls, rrule: dateutil.rrule.rrule):\n        if isinstance(rrule, dateutil.rrule.rrule):\n            if rrule._dtstart.tzinfo is not None:\n                timezone = rrule._dtstart.tzinfo.name\n            else:\n                timezone = \"UTC\"\n            return RRuleSchedule(rrule=str(rrule), timezone=timezone)\n        elif isinstance(rrule, dateutil.rrule.rruleset):\n            dtstarts = [rr._dtstart for rr in rrule._rrule if rr._dtstart is not None]\n            unique_dstarts = set(pendulum.instance(d).in_tz(\"UTC\") for d in dtstarts)\n            unique_timezones = set(d.tzinfo for d in dtstarts if d.tzinfo is not None)\n\n            if len(unique_timezones) > 1:\n                raise ValueError(\n                    f\"rruleset has too many dtstart timezones: {unique_timezones}\"\n                )\n\n            if len(unique_dstarts) > 1:\n                raise ValueError(f\"rruleset has too many dtstarts: {unique_dstarts}\")\n\n            if unique_dstarts and unique_timezones:\n                timezone = dtstarts[0].tzinfo.name\n            else:\n                timezone = \"UTC\"\n\n            rruleset_string = \"\"\n            if rrule._rrule:\n                rruleset_string += \"\\n\".join(str(r) for r in rrule._rrule)\n            if rrule._exrule:\n                rruleset_string += \"\\n\" if rruleset_string else \"\"\n                rruleset_string += \"\\n\".join(str(r) for r in rrule._exrule).replace(\n                    \"RRULE\", \"EXRULE\"\n                )\n            if rrule._rdate:\n                rruleset_string += \"\\n\" if rruleset_string else \"\"\n                rruleset_string += \"RDATE:\" + \",\".join(\n                    rd.strftime(\"%Y%m%dT%H%M%SZ\") for rd in rrule._rdate\n                )\n            if rrule._exdate:\n                rruleset_string += \"\\n\" if rruleset_string else \"\"\n                rruleset_string += \"EXDATE:\" + \",\".join(\n                    exd.strftime(\"%Y%m%dT%H%M%SZ\") for exd in rrule._exdate\n                )\n            return RRuleSchedule(rrule=rruleset_string, timezone=timezone)\n        else:\n            raise ValueError(f\"Invalid RRule object: {rrule}\")\n\n    def to_rrule(self) -> dateutil.rrule.rrule:\n        \"\"\"\n        Since rrule doesn't properly serialize/deserialize timezones, we localize dates\n        here\n        \"\"\"\n        
rrule = dateutil.rrule.rrulestr(\n            self.rrule,\n            dtstart=DEFAULT_ANCHOR_DATE,\n            cache=True,\n        )\n        timezone = dateutil.tz.gettz(self.timezone)\n        if isinstance(rrule, dateutil.rrule.rrule):\n            kwargs = dict(dtstart=rrule._dtstart.replace(tzinfo=timezone))\n            if rrule._until:\n                kwargs.update(\n                    until=rrule._until.replace(tzinfo=timezone),\n                )\n            return rrule.replace(**kwargs)\n        elif isinstance(rrule, dateutil.rrule.rruleset):\n            # update rrules\n            localized_rrules = []\n            for rr in rrule._rrule:\n                kwargs = dict(dtstart=rr._dtstart.replace(tzinfo=timezone))\n                if rr._until:\n                    kwargs.update(\n                        until=rr._until.replace(tzinfo=timezone),\n                    )\n                localized_rrules.append(rr.replace(**kwargs))\n            rrule._rrule = localized_rrules\n\n            # update exrules\n            localized_exrules = []\n            for exr in rrule._exrule:\n                kwargs = dict(dtstart=exr._dtstart.replace(tzinfo=timezone))\n                if exr._until:\n                    kwargs.update(\n                        until=exr._until.replace(tzinfo=timezone),\n                    )\n                localized_exrules.append(exr.replace(**kwargs))\n            rrule._exrule = localized_exrules\n\n            # update rdates\n            localized_rdates = []\n            for rd in rrule._rdate:\n                localized_rdates.append(rd.replace(tzinfo=timezone))\n            rrule._rdate = localized_rdates\n\n            # update exdates\n            localized_exdates = []\n            for exd in rrule._exdate:\n                localized_exdates.append(exd.replace(tzinfo=timezone))\n            rrule._exdate = localized_exdates\n\n            return rrule\n\n    @validator(\"timezone\", always=True)\n    def valid_timezone(cls, v):\n        return validate_rrule_timezone(v)\n\n    async def get_dates(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> List[pendulum.DateTime]:\n        \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to None.  If a timezone-naive datetime is\n                provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): The maximum scheduled date to return. If\n                a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: A list of dates\n        \"\"\"\n        return sorted(self._get_dates_generator(n=n, start=start, end=end))\n\n    def _get_dates_generator(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> Generator[pendulum.DateTime, None, None]:\n        \"\"\"Retrieves dates from the schedule. 
Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to the current date. If a timezone-naive\n                datetime is provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): No returned date will exceed this date.\n                If a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: a list of dates\n        \"\"\"\n        if start is None:\n            start = pendulum.now(\"UTC\")\n\n        start, end = _prepare_scheduling_start_and_end(start, end, self.timezone)\n\n        if n is None:\n            # if an end was supplied, we do our best to supply all matching dates (up\n            # to MAX_ITERATIONS)\n            if end is not None:\n                n = MAX_ITERATIONS\n            else:\n                n = 1\n\n        dates = set()\n        counter = 0\n\n        # pass count = None to account for discrepancies with duplicates around DST\n        # boundaries\n        for next_date in self.to_rrule().xafter(start, count=None, inc=True):\n            next_date = pendulum.instance(next_date).in_tz(self.timezone)\n\n            # if the end date was exceeded, exit\n            if end and next_date > end:\n                break\n\n            # ensure no duplicates; weird things can happen with DST\n            if next_date not in dates:\n                dates.add(next_date)\n                yield next_date\n\n            # if enough dates have been collected or enough attempts were made, exit\n            if len(dates) >= n or counter > MAX_ITERATIONS:\n                break\n\n            counter += 1\n
      "},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.RRuleSchedule.get_dates","title":"get_dates async","text":"

      Retrieves dates from the schedule. Up to 1,000 candidate dates are checked following the start date.

Parameters:

- n (int): The number of dates to generate. Defaults to None.
- start (datetime): The first returned date will be on or after this date. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.
- end (datetime): The maximum scheduled date to return. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.

Returns:

- List[pendulum.DateTime]: A list of dates

Source code in prefect/server/schemas/schedules.py
      async def get_dates(\n    self,\n    n: int = None,\n    start: datetime.datetime = None,\n    end: datetime.datetime = None,\n) -> List[pendulum.DateTime]:\n    \"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n    following the start date.\n\n    Args:\n        n (int): The number of dates to generate\n        start (datetime.datetime, optional): The first returned date will be on or\n            after this date. Defaults to None.  If a timezone-naive datetime is\n            provided, it is assumed to be in the schedule's timezone.\n        end (datetime.datetime, optional): The maximum scheduled date to return. If\n            a timezone-naive datetime is provided, it is assumed to be in the\n            schedule's timezone.\n\n    Returns:\n        List[pendulum.DateTime]: A list of dates\n    \"\"\"\n    return sorted(self._get_dates_generator(n=n, start=start, end=end))\n
      "},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.RRuleSchedule.to_rrule","title":"to_rrule","text":"

      Since rrule doesn't properly serialize/deserialize timezones, we localize dates here

      Source code in prefect/server/schemas/schedules.py
      def to_rrule(self) -> dateutil.rrule.rrule:\n    \"\"\"\n    Since rrule doesn't properly serialize/deserialize timezones, we localize dates\n    here\n    \"\"\"\n    rrule = dateutil.rrule.rrulestr(\n        self.rrule,\n        dtstart=DEFAULT_ANCHOR_DATE,\n        cache=True,\n    )\n    timezone = dateutil.tz.gettz(self.timezone)\n    if isinstance(rrule, dateutil.rrule.rrule):\n        kwargs = dict(dtstart=rrule._dtstart.replace(tzinfo=timezone))\n        if rrule._until:\n            kwargs.update(\n                until=rrule._until.replace(tzinfo=timezone),\n            )\n        return rrule.replace(**kwargs)\n    elif isinstance(rrule, dateutil.rrule.rruleset):\n        # update rrules\n        localized_rrules = []\n        for rr in rrule._rrule:\n            kwargs = dict(dtstart=rr._dtstart.replace(tzinfo=timezone))\n            if rr._until:\n                kwargs.update(\n                    until=rr._until.replace(tzinfo=timezone),\n                )\n            localized_rrules.append(rr.replace(**kwargs))\n        rrule._rrule = localized_rrules\n\n        # update exrules\n        localized_exrules = []\n        for exr in rrule._exrule:\n            kwargs = dict(dtstart=exr._dtstart.replace(tzinfo=timezone))\n            if exr._until:\n                kwargs.update(\n                    until=exr._until.replace(tzinfo=timezone),\n                )\n            localized_exrules.append(exr.replace(**kwargs))\n        rrule._exrule = localized_exrules\n\n        # update rdates\n        localized_rdates = []\n        for rd in rrule._rdate:\n            localized_rdates.append(rd.replace(tzinfo=timezone))\n        rrule._rdate = localized_rdates\n\n        # update exdates\n        localized_exdates = []\n        for exd in rrule._exdate:\n            localized_exdates.append(exd.replace(tzinfo=timezone))\n        rrule._exdate = localized_exdates\n\n        return rrule\n
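An illustrative RRuleSchedule; the RRULE string (weekdays at 9am) and timezone are example values, and to_rrule() converts the schedule back into a localized dateutil rrule as described above.

```python
# Example values only; the schedule follows the RRuleSchedule definition above.
import asyncio

from prefect.server.schemas.schedules import RRuleSchedule

schedule = RRuleSchedule(
    rrule="FREQ=WEEKLY;BYDAY=MO,TU,WE,TH,FR;BYHOUR=9;BYMINUTE=0",
    timezone="America/New_York",
)

# Round-trip into a dateutil rrule with the timezone applied.
rule = schedule.to_rrule()

next_runs = asyncio.run(schedule.get_dates(n=5))
```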
      "},{"location":"api-ref/server/schemas/sorting/","title":"server.schemas.sorting","text":""},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting","title":"prefect.server.schemas.sorting","text":"

      Schemas for sorting Prefect REST API objects.

      "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactCollectionSort","title":"ArtifactCollectionSort","text":"

      Bases: AutoEnum

      Defines artifact collection sorting options.

      Source code in prefect/server/schemas/sorting.py
      class ArtifactCollectionSort(AutoEnum):\n    \"\"\"Defines artifact collection sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    ID_DESC = AutoEnum.auto()\n    KEY_DESC = AutoEnum.auto()\n    KEY_ASC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n        \"\"\"Return an expression used to sort artifact collections\"\"\"\n        sort_mapping = {\n            \"CREATED_DESC\": db.ArtifactCollection.created.desc(),\n            \"UPDATED_DESC\": db.ArtifactCollection.updated.desc(),\n            \"ID_DESC\": db.ArtifactCollection.id.desc(),\n            \"KEY_DESC\": db.ArtifactCollection.key.desc(),\n            \"KEY_ASC\": db.ArtifactCollection.key.asc(),\n        }\n        return sort_mapping[self.value]\n
      "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactCollectionSort.as_sql_sort","title":"as_sql_sort","text":"

      Return an expression used to sort artifact collections

      Source code in prefect/server/schemas/sorting.py
      def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n    \"\"\"Return an expression used to sort artifact collections\"\"\"\n    sort_mapping = {\n        \"CREATED_DESC\": db.ArtifactCollection.created.desc(),\n        \"UPDATED_DESC\": db.ArtifactCollection.updated.desc(),\n        \"ID_DESC\": db.ArtifactCollection.id.desc(),\n        \"KEY_DESC\": db.ArtifactCollection.key.desc(),\n        \"KEY_ASC\": db.ArtifactCollection.key.asc(),\n    }\n    return sort_mapping[self.value]\n
      "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactSort","title":"ArtifactSort","text":"

      Bases: AutoEnum

      Defines artifact sorting options.

      Source code in prefect/server/schemas/sorting.py
      class ArtifactSort(AutoEnum):\n    \"\"\"Defines artifact sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    ID_DESC = AutoEnum.auto()\n    KEY_DESC = AutoEnum.auto()\n    KEY_ASC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n        \"\"\"Return an expression used to sort artifacts\"\"\"\n        sort_mapping = {\n            \"CREATED_DESC\": db.Artifact.created.desc(),\n            \"UPDATED_DESC\": db.Artifact.updated.desc(),\n            \"ID_DESC\": db.Artifact.id.desc(),\n            \"KEY_DESC\": db.Artifact.key.desc(),\n            \"KEY_ASC\": db.Artifact.key.asc(),\n        }\n        return sort_mapping[self.value]\n
      "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactSort.as_sql_sort","title":"as_sql_sort","text":"

      Return an expression used to sort artifacts

      Source code in prefect/server/schemas/sorting.py
      def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n    \"\"\"Return an expression used to sort artifacts\"\"\"\n    sort_mapping = {\n        \"CREATED_DESC\": db.Artifact.created.desc(),\n        \"UPDATED_DESC\": db.Artifact.updated.desc(),\n        \"ID_DESC\": db.Artifact.id.desc(),\n        \"KEY_DESC\": db.Artifact.key.desc(),\n        \"KEY_ASC\": db.Artifact.key.asc(),\n    }\n    return sort_mapping[self.value]\n
      "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.BlockDocumentSort","title":"BlockDocumentSort","text":"

      Bases: AutoEnum

      Defines block document sorting options.

      Source code in prefect/server/schemas/sorting.py
      class BlockDocumentSort(AutoEnum):\n    \"\"\"Defines block document sorting options.\"\"\"\n\n    NAME_DESC = \"NAME_DESC\"\n    NAME_ASC = \"NAME_ASC\"\n    BLOCK_TYPE_AND_NAME_ASC = \"BLOCK_TYPE_AND_NAME_ASC\"\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n        \"\"\"Return an expression used to sort block documents\"\"\"\n        sort_mapping = {\n            \"NAME_DESC\": db.BlockDocument.name.desc(),\n            \"NAME_ASC\": db.BlockDocument.name.asc(),\n            \"BLOCK_TYPE_AND_NAME_ASC\": sa.text(\"block_type_name asc, name asc\"),\n        }\n        return sort_mapping[self.value]\n
      "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.BlockDocumentSort.as_sql_sort","title":"as_sql_sort","text":"

      Return an expression used to sort block documents

      Source code in prefect/server/schemas/sorting.py
      def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n    \"\"\"Return an expression used to sort block documents\"\"\"\n    sort_mapping = {\n        \"NAME_DESC\": db.BlockDocument.name.desc(),\n        \"NAME_ASC\": db.BlockDocument.name.asc(),\n        \"BLOCK_TYPE_AND_NAME_ASC\": sa.text(\"block_type_name asc, name asc\"),\n    }\n    return sort_mapping[self.value]\n
      "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.DeploymentSort","title":"DeploymentSort","text":"

      Bases: AutoEnum

      Defines deployment sorting options.

      Source code in prefect/server/schemas/sorting.py
      class DeploymentSort(AutoEnum):\n    \"\"\"Defines deployment sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n        \"\"\"Return an expression used to sort deployments\"\"\"\n        sort_mapping = {\n            \"CREATED_DESC\": db.Deployment.created.desc(),\n            \"UPDATED_DESC\": db.Deployment.updated.desc(),\n            \"NAME_ASC\": db.Deployment.name.asc(),\n            \"NAME_DESC\": db.Deployment.name.desc(),\n        }\n        return sort_mapping[self.value]\n
      "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.DeploymentSort.as_sql_sort","title":"as_sql_sort","text":"

      Return an expression used to sort deployments

      Source code in prefect/server/schemas/sorting.py
      def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n    \"\"\"Return an expression used to sort deployments\"\"\"\n    sort_mapping = {\n        \"CREATED_DESC\": db.Deployment.created.desc(),\n        \"UPDATED_DESC\": db.Deployment.updated.desc(),\n        \"NAME_ASC\": db.Deployment.name.asc(),\n        \"NAME_DESC\": db.Deployment.name.desc(),\n    }\n    return sort_mapping[self.value]\n
      "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.FlowRunSort","title":"FlowRunSort","text":"

      Bases: AutoEnum

      Defines flow run sorting options.

      Source code in prefect/server/schemas/sorting.py
      class FlowRunSort(AutoEnum):\n    \"\"\"Defines flow run sorting options.\"\"\"\n\n    ID_DESC = AutoEnum.auto()\n    START_TIME_ASC = AutoEnum.auto()\n    START_TIME_DESC = AutoEnum.auto()\n    EXPECTED_START_TIME_ASC = AutoEnum.auto()\n    EXPECTED_START_TIME_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n    NEXT_SCHEDULED_START_TIME_ASC = AutoEnum.auto()\n    END_TIME_DESC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n        from sqlalchemy.sql.functions import coalesce\n\n        \"\"\"Return an expression used to sort flow runs\"\"\"\n        sort_mapping = {\n            \"ID_DESC\": db.FlowRun.id.desc(),\n            \"START_TIME_ASC\": coalesce(\n                db.FlowRun.start_time, db.FlowRun.expected_start_time\n            ).asc(),\n            \"START_TIME_DESC\": coalesce(\n                db.FlowRun.start_time, db.FlowRun.expected_start_time\n            ).desc(),\n            \"EXPECTED_START_TIME_ASC\": db.FlowRun.expected_start_time.asc(),\n            \"EXPECTED_START_TIME_DESC\": db.FlowRun.expected_start_time.desc(),\n            \"NAME_ASC\": db.FlowRun.name.asc(),\n            \"NAME_DESC\": db.FlowRun.name.desc(),\n            \"NEXT_SCHEDULED_START_TIME_ASC\": db.FlowRun.next_scheduled_start_time.asc(),\n            \"END_TIME_DESC\": db.FlowRun.end_time.desc(),\n        }\n        return sort_mapping[self.value]\n
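All of the sort enums in this module follow the same pattern: the enum value keys a mapping of SQLAlchemy ordering expressions that can be handed to order_by. The sketch below is a self-contained toy version of that pattern; the Flow model and enum here are stand-ins, not Prefect's ORM or schemas.

```python
# A toy, self-contained version of the as_sql_sort pattern above; the model
# and enum are illustrative stand-ins.
import enum

import sqlalchemy as sa
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class Flow(Base):
    __tablename__ = "flow"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]


class FlowSort(enum.Enum):
    NAME_ASC = "NAME_ASC"
    NAME_DESC = "NAME_DESC"

    def as_sql_sort(self):
        # Look up the ordering expression for this enum value.
        return {
            "NAME_ASC": Flow.name.asc(),
            "NAME_DESC": Flow.name.desc(),
        }[self.value]


query = sa.select(Flow).order_by(FlowSort.NAME_DESC.as_sql_sort())
print(query)  # SELECT ... FROM flow ORDER BY flow.name DESC
```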
      "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.FlowSort","title":"FlowSort","text":"

      Bases: AutoEnum

      Defines flow sorting options.

      Source code in prefect/server/schemas/sorting.py
      class FlowSort(AutoEnum):\n    \"\"\"Defines flow sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n        \"\"\"Return an expression used to sort flows\"\"\"\n        sort_mapping = {\n            \"CREATED_DESC\": db.Flow.created.desc(),\n            \"UPDATED_DESC\": db.Flow.updated.desc(),\n            \"NAME_ASC\": db.Flow.name.asc(),\n            \"NAME_DESC\": db.Flow.name.desc(),\n        }\n        return sort_mapping[self.value]\n
      "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.FlowSort.as_sql_sort","title":"as_sql_sort","text":"

      Return an expression used to sort flows

      Source code in prefect/server/schemas/sorting.py
      def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n    \"\"\"Return an expression used to sort flows\"\"\"\n    sort_mapping = {\n        \"CREATED_DESC\": db.Flow.created.desc(),\n        \"UPDATED_DESC\": db.Flow.updated.desc(),\n        \"NAME_ASC\": db.Flow.name.asc(),\n        \"NAME_DESC\": db.Flow.name.desc(),\n    }\n    return sort_mapping[self.value]\n
      "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.LogSort","title":"LogSort","text":"

      Bases: AutoEnum

      Defines log sorting options.

      Source code in prefect/server/schemas/sorting.py
      class LogSort(AutoEnum):\n    \"\"\"Defines log sorting options.\"\"\"\n\n    TIMESTAMP_ASC = AutoEnum.auto()\n    TIMESTAMP_DESC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n        \"\"\"Return an expression used to sort task runs\"\"\"\n        sort_mapping = {\n            \"TIMESTAMP_ASC\": db.Log.timestamp.asc(),\n            \"TIMESTAMP_DESC\": db.Log.timestamp.desc(),\n        }\n        return sort_mapping[self.value]\n
      "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.LogSort.as_sql_sort","title":"as_sql_sort","text":"

Return an expression used to sort logs

      Source code in prefect/server/schemas/sorting.py
      def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n    \"\"\"Return an expression used to sort task runs\"\"\"\n    sort_mapping = {\n        \"TIMESTAMP_ASC\": db.Log.timestamp.asc(),\n        \"TIMESTAMP_DESC\": db.Log.timestamp.desc(),\n    }\n    return sort_mapping[self.value]\n
      "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.TaskRunSort","title":"TaskRunSort","text":"

      Bases: AutoEnum

      Defines task run sorting options.

      Source code in prefect/server/schemas/sorting.py
      class TaskRunSort(AutoEnum):\n    \"\"\"Defines task run sorting options.\"\"\"\n\n    ID_DESC = AutoEnum.auto()\n    EXPECTED_START_TIME_ASC = AutoEnum.auto()\n    EXPECTED_START_TIME_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n    NEXT_SCHEDULED_START_TIME_ASC = AutoEnum.auto()\n    END_TIME_DESC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n        \"\"\"Return an expression used to sort task runs\"\"\"\n        sort_mapping = {\n            \"ID_DESC\": db.TaskRun.id.desc(),\n            \"EXPECTED_START_TIME_ASC\": db.TaskRun.expected_start_time.asc(),\n            \"EXPECTED_START_TIME_DESC\": db.TaskRun.expected_start_time.desc(),\n            \"NAME_ASC\": db.TaskRun.name.asc(),\n            \"NAME_DESC\": db.TaskRun.name.desc(),\n            \"NEXT_SCHEDULED_START_TIME_ASC\": db.TaskRun.next_scheduled_start_time.asc(),\n            \"END_TIME_DESC\": db.TaskRun.end_time.desc(),\n        }\n        return sort_mapping[self.value]\n
      "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.TaskRunSort.as_sql_sort","title":"as_sql_sort","text":"

      Return an expression used to sort task runs

      Source code in prefect/server/schemas/sorting.py
      def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n    \"\"\"Return an expression used to sort task runs\"\"\"\n    sort_mapping = {\n        \"ID_DESC\": db.TaskRun.id.desc(),\n        \"EXPECTED_START_TIME_ASC\": db.TaskRun.expected_start_time.asc(),\n        \"EXPECTED_START_TIME_DESC\": db.TaskRun.expected_start_time.desc(),\n        \"NAME_ASC\": db.TaskRun.name.asc(),\n        \"NAME_DESC\": db.TaskRun.name.desc(),\n        \"NEXT_SCHEDULED_START_TIME_ASC\": db.TaskRun.next_scheduled_start_time.asc(),\n        \"END_TIME_DESC\": db.TaskRun.end_time.desc(),\n    }\n    return sort_mapping[self.value]\n
      "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.VariableSort","title":"VariableSort","text":"

      Bases: AutoEnum

      Defines variables sorting options.

      Source code in prefect/server/schemas/sorting.py
      class VariableSort(AutoEnum):\n    \"\"\"Defines variables sorting options.\"\"\"\n\n    CREATED_DESC = \"CREATED_DESC\"\n    UPDATED_DESC = \"UPDATED_DESC\"\n    NAME_DESC = \"NAME_DESC\"\n    NAME_ASC = \"NAME_ASC\"\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n        \"\"\"Return an expression used to sort variables\"\"\"\n        sort_mapping = {\n            \"CREATED_DESC\": db.Variable.created.desc(),\n            \"UPDATED_DESC\": db.Variable.updated.desc(),\n            \"NAME_DESC\": db.Variable.name.desc(),\n            \"NAME_ASC\": db.Variable.name.asc(),\n        }\n        return sort_mapping[self.value]\n
      "},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.VariableSort.as_sql_sort","title":"as_sql_sort","text":"

      Return an expression used to sort variables

      Source code in prefect/server/schemas/sorting.py
      def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n    \"\"\"Return an expression used to sort variables\"\"\"\n    sort_mapping = {\n        \"CREATED_DESC\": db.Variable.created.desc(),\n        \"UPDATED_DESC\": db.Variable.updated.desc(),\n        \"NAME_DESC\": db.Variable.name.desc(),\n        \"NAME_ASC\": db.Variable.name.asc(),\n    }\n    return sort_mapping[self.value]\n
      "},{"location":"api-ref/server/schemas/states/","title":"server.schemas.states","text":""},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states","title":"prefect.server.schemas.states","text":"

      State schemas.

      "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.State","title":"State","text":"

      Bases: StateBaseModel

      Represents the state of a run.

      Source code in prefect/server/schemas/states.py
      class State(StateBaseModel):\n    \"\"\"Represents the state of a run.\"\"\"\n\n    class Config:\n        orm_mode = True\n\n    type: StateType\n    name: Optional[str] = Field(default=None)\n    timestamp: DateTimeTZ = Field(default_factory=lambda: pendulum.now(\"UTC\"))\n    message: Optional[str] = Field(default=None, examples=[\"Run started\"])\n    data: Optional[Any] = Field(\n        default=None,\n        description=(\n            \"Data associated with the state, e.g. a result. \"\n            \"Content must be storable as JSON.\"\n        ),\n    )\n    state_details: StateDetails = Field(default_factory=StateDetails)\n\n    @classmethod\n    def from_orm_without_result(\n        cls,\n        orm_state: Union[\n            \"prefect.server.database.orm_models.ORMFlowRunState\",\n            \"prefect.server.database.orm_models.ORMTaskRunState\",\n        ],\n        with_data: Optional[Any] = None,\n    ):\n        \"\"\"\n        During orchestration, ORM states can be instantiated prior to inserting results\n        into the artifact table and the `data` field will not be eagerly loaded. In\n        these cases, sqlalchemy will attempt to lazily load the the relationship, which\n        will fail when called within a synchronous pydantic method.\n\n        This method will construct a `State` object from an ORM model without a loaded\n        artifact and attach data passed using the `with_data` argument to the `data`\n        field.\n        \"\"\"\n\n        field_keys = cls.schema()[\"properties\"].keys()\n        state_data = {\n            field: getattr(orm_state, field, None)\n            for field in field_keys\n            if field != \"data\"\n        }\n        state_data[\"data\"] = with_data\n        return cls(**state_data)\n\n    @validator(\"name\", always=True)\n    def default_name_from_type(cls, v, *, values, **kwargs):\n        return get_or_create_state_name(v, values)\n\n    @root_validator\n    def default_scheduled_start_time(cls, values):\n        return set_default_scheduled_time(cls, values)\n\n    def is_scheduled(self) -> bool:\n        return self.type == StateType.SCHEDULED\n\n    def is_pending(self) -> bool:\n        return self.type == StateType.PENDING\n\n    def is_running(self) -> bool:\n        return self.type == StateType.RUNNING\n\n    def is_completed(self) -> bool:\n        return self.type == StateType.COMPLETED\n\n    def is_failed(self) -> bool:\n        return self.type == StateType.FAILED\n\n    def is_crashed(self) -> bool:\n        return self.type == StateType.CRASHED\n\n    def is_cancelled(self) -> bool:\n        return self.type == StateType.CANCELLED\n\n    def is_cancelling(self) -> bool:\n        return self.type == StateType.CANCELLING\n\n    def is_final(self) -> bool:\n        return self.type in TERMINAL_STATES\n\n    def is_paused(self) -> bool:\n        return self.type == StateType.PAUSED\n\n    def copy(\n        self,\n        *,\n        update: Optional[Dict[str, Any]] = None,\n        reset_fields: bool = False,\n        **kwargs,\n    ):\n        \"\"\"\n        Copying API models should return an object that could be inserted into the\n        database again. 
The 'timestamp' is reset using the default factory.\n        \"\"\"\n        update = update or {}\n        update.setdefault(\"timestamp\", self.__fields__[\"timestamp\"].get_default())\n        return super().copy(reset_fields=reset_fields, update=update, **kwargs)\n\n    def result(self, raise_on_failure: bool = True, fetch: Optional[bool] = None):\n        # Backwards compatible `result` handling on the server-side schema\n        from prefect.states import State\n\n        warnings.warn(\n            (\n                \"`result` is no longer supported by\"\n                \" `prefect.server.schemas.states.State` and will be removed in a future\"\n                \" release. When result retrieval is needed, use `prefect.states.State`.\"\n            ),\n            DeprecationWarning,\n            stacklevel=2,\n        )\n\n        state = State.parse_obj(self)\n        return state.result(raise_on_failure=raise_on_failure, fetch=fetch)\n\n    def to_state_create(self):\n        # Backwards compatibility for `to_state_create`\n        from prefect.client.schemas import State\n\n        warnings.warn(\n            (\n                \"Use of `prefect.server.schemas.states.State` from the client is\"\n                \" deprecated and support will be removed in a future release. Use\"\n                \" `prefect.states.State` instead.\"\n            ),\n            DeprecationWarning,\n            stacklevel=2,\n        )\n\n        state = State.parse_obj(self)\n        return state.to_state_create()\n\n    def __repr__(self) -> str:\n        \"\"\"\n        Generates a complete state representation appropriate for introspection\n        and debugging, including the result:\n\n        `MyCompletedState(message=\"my message\", type=COMPLETED, result=...)`\n        \"\"\"\n        from prefect.deprecated.data_documents import DataDocument\n\n        if isinstance(self.data, DataDocument):\n            result = self.data.decode()\n        else:\n            result = self.data\n\n        display = dict(\n            message=repr(self.message),\n            type=str(self.type.value),\n            result=repr(result),\n        )\n\n        return f\"{self.name}({', '.join(f'{k}={v}' for k, v in display.items())})\"\n\n    def __str__(self) -> str:\n        \"\"\"\n        Generates a simple state representation appropriate for logging:\n\n        `MyCompletedState(\"my message\", type=COMPLETED)`\n        \"\"\"\n\n        display = []\n\n        if self.message:\n            display.append(repr(self.message))\n\n        if self.type.value.lower() != self.name.lower():\n            display.append(f\"type={self.type.value}\")\n\n        return f\"{self.name}({', '.join(display)})\"\n\n    def __hash__(self) -> int:\n        return hash(\n            (\n                getattr(self.state_details, \"flow_run_id\", None),\n                getattr(self.state_details, \"task_run_id\", None),\n                self.timestamp,\n                self.type,\n            )\n        )\n
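      For illustration, a minimal sketch (not part of the generated reference) of constructing a server-side State and using the inspection helpers above; it assumes a Prefect 2.x environment where prefect.server is importable:

```python
# Minimal sketch: construct a server-side State and use the helpers defined above.
from prefect.server.schemas.states import State, StateType

state = State(type=StateType.COMPLETED, message="Run finished")
assert state.name == "Completed"          # name is defaulted from the state type
assert state.is_completed() and state.is_final()
print(state)                              # Completed('Run finished')
```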
      "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.State.from_orm_without_result","title":"from_orm_without_result classmethod","text":"

      During orchestration, ORM states can be instantiated prior to inserting results into the artifact table and the data field will not be eagerly loaded. In these cases, sqlalchemy will attempt to lazily load the relationship, which will fail when called within a synchronous pydantic method.

      This method will construct a State object from an ORM model without a loaded artifact and attach data passed using the with_data argument to the data field.

      Source code in prefect/server/schemas/states.py
      @classmethod\ndef from_orm_without_result(\n    cls,\n    orm_state: Union[\n        \"prefect.server.database.orm_models.ORMFlowRunState\",\n        \"prefect.server.database.orm_models.ORMTaskRunState\",\n    ],\n    with_data: Optional[Any] = None,\n):\n    \"\"\"\n    During orchestration, ORM states can be instantiated prior to inserting results\n    into the artifact table and the `data` field will not be eagerly loaded. In\n    these cases, sqlalchemy will attempt to lazily load the the relationship, which\n    will fail when called within a synchronous pydantic method.\n\n    This method will construct a `State` object from an ORM model without a loaded\n    artifact and attach data passed using the `with_data` argument to the `data`\n    field.\n    \"\"\"\n\n    field_keys = cls.schema()[\"properties\"].keys()\n    state_data = {\n        field: getattr(orm_state, field, None)\n        for field in field_keys\n        if field != \"data\"\n    }\n    state_data[\"data\"] = with_data\n    return cls(**state_data)\n
      "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.State.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
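      A brief, illustrative sketch of the helper above; any schema inheriting this base exposes it, and a state is used here only as a convenient example:

```python
from prefect.server.schemas.states import Completed

state = Completed(message="done")
doc = state.json()                           # SecretStr/SecretBytes fields (if any) obfuscated
revealed = state.json(include_secrets=True)  # secret field values fully revealed
```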
      "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateBaseModel","title":"StateBaseModel","text":"

      Bases: IDBaseModel

      Source code in prefect/server/schemas/states.py
      class StateBaseModel(IDBaseModel):\n    def orm_dict(\n        self, *args, shallow: bool = False, json_compatible: bool = False, **kwargs\n    ) -> dict:\n        \"\"\"\n        This method is used as a convenience method for constructing fixtues by first\n        building a `State` schema object and converting it into an ORM-compatible\n        format. Because the `data` field is not writable on ORM states, this method\n        omits the `data` field entirely for the purposes of constructing an ORM model.\n        If state data is required, an artifact must be created separately.\n        \"\"\"\n\n        schema_dict = self.dict(\n            *args, shallow=shallow, json_compatible=json_compatible, **kwargs\n        )\n        # remove the data field in order to construct a state ORM model\n        schema_dict.pop(\"data\", None)\n        return schema_dict\n
      "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateBaseModel.json","title":"json","text":"

      Returns a representation of the model as JSON.

      If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

      Source code in prefect/server/utilities/schemas/bases.py
      def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n    \"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
      "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateType","title":"StateType","text":"

      Bases: AutoEnum

      Enumeration of state types.

      Source code in prefect/server/schemas/states.py
      class StateType(AutoEnum):\n    \"\"\"Enumeration of state types.\"\"\"\n\n    SCHEDULED = AutoEnum.auto()\n    PENDING = AutoEnum.auto()\n    RUNNING = AutoEnum.auto()\n    COMPLETED = AutoEnum.auto()\n    FAILED = AutoEnum.auto()\n    CANCELLED = AutoEnum.auto()\n    CRASHED = AutoEnum.auto()\n    PAUSED = AutoEnum.auto()\n    CANCELLING = AutoEnum.auto()\n
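      A short illustrative sketch; it relies on AutoEnum generating members whose values equal their names, so string comparisons work:

```python
from prefect.server.schemas.states import StateType

assert StateType.COMPLETED.value == "COMPLETED"
print([member.value for member in StateType])  # SCHEDULED, PENDING, RUNNING, ...
```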
      "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateType.auto","title":"auto staticmethod","text":"

      Exposes enum.auto() to avoid requiring a second import to use AutoEnum

      Source code in prefect/utilities/collections.py
      @staticmethod\ndef auto():\n    \"\"\"\n    Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n    \"\"\"\n    return auto()\n
      "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.AwaitingRetry","title":"AwaitingRetry","text":"

      Convenience function for creating AwaitingRetry states.

      Returns:

      Name Type Description State State

      an AwaitingRetry state

      Source code in prefect/server/schemas/states.py
      def AwaitingRetry(\n    scheduled_time: datetime.datetime = None, cls: Type[State] = State, **kwargs\n) -> State:\n    \"\"\"Convenience function for creating `AwaitingRetry` states.\n\n    Returns:\n        State: a AwaitingRetry state\n    \"\"\"\n    return Scheduled(\n        cls=cls, scheduled_time=scheduled_time, name=\"AwaitingRetry\", **kwargs\n    )\n
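      Illustrative sketch: AwaitingRetry (like Late) is a Scheduled state with a custom name, so the scheduled time lands in state_details:

```python
import pendulum
from prefect.server.schemas.states import AwaitingRetry, StateType

state = AwaitingRetry(scheduled_time=pendulum.now("UTC").add(minutes=5))
assert state.type == StateType.SCHEDULED
assert state.name == "AwaitingRetry"
assert state.state_details.scheduled_time is not None
```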
      "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Cancelled","title":"Cancelled","text":"

      Convenience function for creating Cancelled states.

      Returns:

      Name Type Description State State

      a Cancelled state

      Source code in prefect/server/schemas/states.py
      def Cancelled(cls: Type[State] = State, **kwargs) -> State:\n    \"\"\"Convenience function for creating `Cancelled` states.\n\n    Returns:\n        State: a Cancelled state\n    \"\"\"\n    return cls(type=StateType.CANCELLED, **kwargs)\n
      "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Cancelling","title":"Cancelling","text":"

      Convenience function for creating Cancelling states.

      Returns:

      Name Type Description State State

      a Cancelling state

      Source code in prefect/server/schemas/states.py
      def Cancelling(cls: Type[State] = State, **kwargs) -> State:\n    \"\"\"Convenience function for creating `Cancelling` states.\n\n    Returns:\n        State: a Cancelling state\n    \"\"\"\n    return cls(type=StateType.CANCELLING, **kwargs)\n
      "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Completed","title":"Completed","text":"

      Convenience function for creating Completed states.

      Returns:

      Name Type Description State State

      a Completed state

      Source code in prefect/server/schemas/states.py
      def Completed(cls: Type[State] = State, **kwargs) -> State:\n    \"\"\"Convenience function for creating `Completed` states.\n\n    Returns:\n        State: a Completed state\n    \"\"\"\n    return cls(type=StateType.COMPLETED, **kwargs)\n
      "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Crashed","title":"Crashed","text":"

      Convenience function for creating Crashed states.

      Returns:

      Name Type Description State State

      a Crashed state

      Source code in prefect/server/schemas/states.py
      def Crashed(cls: Type[State] = State, **kwargs) -> State:\n    \"\"\"Convenience function for creating `Crashed` states.\n\n    Returns:\n        State: a Crashed state\n    \"\"\"\n    return cls(type=StateType.CRASHED, **kwargs)\n
      "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Failed","title":"Failed","text":"

      Convenience function for creating Failed states.

      Returns:

      Name Type Description State State

      a Failed state

      Source code in prefect/server/schemas/states.py
      def Failed(cls: Type[State] = State, **kwargs) -> State:\n    \"\"\"Convenience function for creating `Failed` states.\n\n    Returns:\n        State: a Failed state\n    \"\"\"\n    return cls(type=StateType.FAILED, **kwargs)\n
      "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Late","title":"Late","text":"

      Convenience function for creating Late states.

      Returns:

      Name Type Description State State

      a Late state

      Source code in prefect/server/schemas/states.py
      def Late(\n    scheduled_time: datetime.datetime = None, cls: Type[State] = State, **kwargs\n) -> State:\n    \"\"\"Convenience function for creating `Late` states.\n\n    Returns:\n        State: a Late state\n    \"\"\"\n    return Scheduled(cls=cls, scheduled_time=scheduled_time, name=\"Late\", **kwargs)\n
      "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Paused","title":"Paused","text":"

      Convenience function for creating Paused states.

      Returns:

      Name Type Description State State

      a Paused state

      Source code in prefect/server/schemas/states.py
      def Paused(\n    cls: Type[State] = State,\n    timeout_seconds: int = None,\n    pause_expiration_time: datetime.datetime = None,\n    reschedule: bool = False,\n    pause_key: str = None,\n    **kwargs,\n) -> State:\n    \"\"\"Convenience function for creating `Paused` states.\n\n    Returns:\n        State: a Paused state\n    \"\"\"\n    state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n\n    if state_details.pause_timeout:\n        raise ValueError(\"An extra pause timeout was provided in state_details\")\n\n    if pause_expiration_time is not None and timeout_seconds is not None:\n        raise ValueError(\n            \"Cannot supply both a pause_expiration_time and timeout_seconds\"\n        )\n\n    if pause_expiration_time is None and timeout_seconds is None:\n        pass\n    else:\n        state_details.pause_timeout = pause_expiration_time or (\n            pendulum.now(\"UTC\") + pendulum.Duration(seconds=timeout_seconds)\n        )\n\n    state_details.pause_reschedule = reschedule\n    state_details.pause_key = pause_key\n\n    return cls(type=StateType.PAUSED, state_details=state_details, **kwargs)\n
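      Illustrative sketch of the pause metadata carried in state_details; note that supplying both timeout_seconds and pause_expiration_time raises a ValueError:

```python
import pendulum
from prefect.server.schemas.states import Paused

state = Paused(timeout_seconds=300)
assert state.state_details.pause_timeout > pendulum.now("UTC")
assert state.state_details.pause_reschedule is False
```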
      "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Pending","title":"Pending","text":"

      Convenience function for creating Pending states.

      Returns:

      Name Type Description State State

      a Pending state

      Source code in prefect/server/schemas/states.py
      def Pending(cls: Type[State] = State, **kwargs) -> State:\n    \"\"\"Convenience function for creating `Pending` states.\n\n    Returns:\n        State: a Pending state\n    \"\"\"\n    return cls(type=StateType.PENDING, **kwargs)\n
      "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Retrying","title":"Retrying","text":"

      Convenience function for creating Retrying states.

      Returns:

      Name Type Description State State

      a Retrying state

      Source code in prefect/server/schemas/states.py
      def Retrying(cls: Type[State] = State, **kwargs) -> State:\n    \"\"\"Convenience function for creating `Retrying` states.\n\n    Returns:\n        State: a Retrying state\n    \"\"\"\n    return cls(type=StateType.RUNNING, name=\"Retrying\", **kwargs)\n
      "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Running","title":"Running","text":"

      Convenience function for creating Running states.

      Returns:

      Name Type Description State State

      a Running state

      Source code in prefect/server/schemas/states.py
      def Running(cls: Type[State] = State, **kwargs) -> State:\n    \"\"\"Convenience function for creating `Running` states.\n\n    Returns:\n        State: a Running state\n    \"\"\"\n    return cls(type=StateType.RUNNING, **kwargs)\n
      "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Scheduled","title":"Scheduled","text":"

      Convenience function for creating Scheduled states.

      Returns:

      Name Type Description State State

      a Scheduled state

      Source code in prefect/server/schemas/states.py
      def Scheduled(\n    scheduled_time: datetime.datetime = None, cls: Type[State] = State, **kwargs\n) -> State:\n    \"\"\"Convenience function for creating `Scheduled` states.\n\n    Returns:\n        State: a Scheduled state\n    \"\"\"\n    # NOTE: `scheduled_time` must come first for backwards compatibility\n\n    state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n    if scheduled_time is None:\n        scheduled_time = pendulum.now(\"UTC\")\n    elif state_details.scheduled_time:\n        raise ValueError(\"An extra scheduled_time was provided in state_details\")\n    state_details.scheduled_time = scheduled_time\n\n    return cls(type=StateType.SCHEDULED, state_details=state_details, **kwargs)\n
      "},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Suspended","title":"Suspended","text":"

      Convenience function for creating Suspended states.

      Returns:

      Name Type Description State State

      a Suspended state

      Source code in prefect/server/schemas/states.py
      def Suspended(\n    cls: Type[State] = State,\n    timeout_seconds: Optional[int] = None,\n    pause_expiration_time: Optional[datetime.datetime] = None,\n    pause_key: Optional[str] = None,\n    **kwargs,\n):\n    \"\"\"Convenience function for creating `Suspended` states.\n\n    Returns:\n        State: a Suspended state\n    \"\"\"\n    return Paused(\n        cls=cls,\n        name=\"Suspended\",\n        reschedule=True,\n        timeout_seconds=timeout_seconds,\n        pause_expiration_time=pause_expiration_time,\n        pause_key=pause_key,\n        **kwargs,\n    )\n
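      Illustrative sketch: Suspended is a named Paused state that always reschedules:

```python
from prefect.server.schemas.states import Suspended, StateType

state = Suspended(timeout_seconds=3600)
assert state.type == StateType.PAUSED
assert state.name == "Suspended"
assert state.state_details.pause_reschedule is True
```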
      "},{"location":"api-ref/server/services/late_runs/","title":"server.services.late_runs","text":""},{"location":"api-ref/server/services/late_runs/#prefect.server.services.late_runs","title":"prefect.server.services.late_runs","text":"

      The MarkLateRuns service. Responsible for putting flow runs in a Late state if they are not started on time. The threshold for a late run can be configured by changing PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS.

      "},{"location":"api-ref/server/services/late_runs/#prefect.server.services.late_runs.MarkLateRuns","title":"MarkLateRuns","text":"

      Bases: LoopService

      A simple loop service responsible for identifying flow runs that are \"late\".

      A flow run is defined as \"late\" if it has not started within a certain amount of time after its scheduled start time. The exact amount is configurable in Prefect REST API Settings.

      Source code in prefect/server/services/late_runs.py
      class MarkLateRuns(LoopService):\n    \"\"\"\n    A simple loop service responsible for identifying flow runs that are \"late\".\n\n    A flow run is defined as \"late\" if has not scheduled within a certain amount\n    of time after its scheduled start time. The exact amount is configurable in\n    Prefect REST API Settings.\n    \"\"\"\n\n    def __init__(self, loop_seconds: float = None, **kwargs):\n        super().__init__(\n            loop_seconds=loop_seconds\n            or PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS.value(),\n            **kwargs,\n        )\n\n        # mark runs late if they are this far past their expected start time\n        self.mark_late_after: datetime.timedelta = (\n            PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS.value()\n        )\n\n        # query for this many runs to mark as late at once\n        self.batch_size = 400\n\n    @inject_db\n    async def run_once(self, db: PrefectDBInterface):\n        \"\"\"\n        Mark flow runs as late by:\n\n        - Querying for flow runs in a scheduled state that are Scheduled to start in the past\n        - For any runs past the \"late\" threshold, setting the flow run state to a new `Late` state\n        \"\"\"\n        scheduled_to_start_before = pendulum.now(\"UTC\").subtract(\n            seconds=self.mark_late_after.total_seconds()\n        )\n\n        while True:\n            async with db.session_context(begin_transaction=True) as session:\n                query = self._get_select_late_flow_runs_query(\n                    scheduled_to_start_before=scheduled_to_start_before, db=db\n                )\n\n                result = await session.execute(query)\n                runs = result.all()\n\n                # mark each run as late\n                for run in runs:\n                    await self._mark_flow_run_as_late(session=session, flow_run=run)\n\n                # if no runs were found, exit the loop\n                if len(runs) < self.batch_size:\n                    break\n\n        self.logger.info(\"Finished monitoring for late runs.\")\n\n    @inject_db\n    def _get_select_late_flow_runs_query(\n        self, scheduled_to_start_before: datetime.datetime, db: PrefectDBInterface\n    ):\n        \"\"\"\n        Returns a sqlalchemy query for late flow runs.\n\n        Args:\n            scheduled_to_start_before: the maximum next scheduled start time of\n                scheduled flow runs to consider in the returned query\n        \"\"\"\n        query = (\n            sa.select(\n                db.FlowRun.id,\n                db.FlowRun.next_scheduled_start_time,\n            )\n            .where(\n                # The next scheduled start time is in the past, including the mark late\n                # after buffer\n                (db.FlowRun.next_scheduled_start_time <= scheduled_to_start_before),\n                db.FlowRun.state_type == states.StateType.SCHEDULED,\n                db.FlowRun.state_name == \"Scheduled\",\n            )\n            .limit(self.batch_size)\n        )\n        return query\n\n    async def _mark_flow_run_as_late(\n        self, session: AsyncSession, flow_run: PrefectDBInterface.FlowRun\n    ) -> None:\n        \"\"\"\n        Mark a flow run as late.\n\n        Pass-through method for overrides.\n        \"\"\"\n        try:\n            await models.flow_runs.set_flow_run_state(\n                session=session,\n                flow_run_id=flow_run.id,\n                
state=states.Late(scheduled_time=flow_run.next_scheduled_start_time),\n                flow_policy=MarkLateRunsPolicy,  # type: ignore\n            )\n        except ObjectNotFoundError:\n            return  # flow run was deleted, ignore it\n
      "},{"location":"api-ref/server/services/late_runs/#prefect.server.services.late_runs.MarkLateRuns.run_once","title":"run_once async","text":"

      Mark flow runs as late by:

      • Querying for flow runs in a scheduled state that are Scheduled to start in the past
      • For any runs past the \"late\" threshold, setting the flow run state to a new Late state
      Source code in prefect/server/services/late_runs.py
      @inject_db\nasync def run_once(self, db: PrefectDBInterface):\n    \"\"\"\n    Mark flow runs as late by:\n\n    - Querying for flow runs in a scheduled state that are Scheduled to start in the past\n    - For any runs past the \"late\" threshold, setting the flow run state to a new `Late` state\n    \"\"\"\n    scheduled_to_start_before = pendulum.now(\"UTC\").subtract(\n        seconds=self.mark_late_after.total_seconds()\n    )\n\n    while True:\n        async with db.session_context(begin_transaction=True) as session:\n            query = self._get_select_late_flow_runs_query(\n                scheduled_to_start_before=scheduled_to_start_before, db=db\n            )\n\n            result = await session.execute(query)\n            runs = result.all()\n\n            # mark each run as late\n            for run in runs:\n                await self._mark_flow_run_as_late(session=session, flow_run=run)\n\n            # if no runs were found, exit the loop\n            if len(runs) < self.batch_size:\n                break\n\n    self.logger.info(\"Finished monitoring for late runs.\")\n
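      A hedged sketch of invoking a single pass of the service; it assumes a configured Prefect server database, and the late threshold comes from PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS:

```python
import asyncio
from prefect.server.services.late_runs import MarkLateRuns

# start() wraps run_once() with proper setup and teardown; loops=1 runs one pass.
asyncio.run(MarkLateRuns(handle_signals=False).start(loops=1))
```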
      "},{"location":"api-ref/server/services/loop_service/","title":"server.services.loop_service","text":""},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service","title":"prefect.server.services.loop_service","text":"

      The base class for all Prefect REST API loop services.

      "},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService","title":"LoopService","text":"

      Loop services are relatively lightweight maintenance routines that need to run periodically.

      This class makes it straightforward to design and integrate them. Users only need to define the run_once coroutine to describe the behavior of the service on each loop.

      Source code in prefect/server/services/loop_service.py
      class LoopService:\n    \"\"\"\n    Loop services are relatively lightweight maintenance routines that need to run periodically.\n\n    This class makes it straightforward to design and integrate them. Users only need to\n    define the `run_once` coroutine to describe the behavior of the service on each loop.\n    \"\"\"\n\n    loop_seconds = 60\n\n    def __init__(self, loop_seconds: float = None, handle_signals: bool = True):\n        \"\"\"\n        Args:\n            loop_seconds (float): if provided, overrides the loop interval\n                otherwise specified as a class variable\n            handle_signals (bool): if True (default), SIGINT and SIGTERM are\n                gracefully intercepted and shut down the running service.\n        \"\"\"\n        if loop_seconds:\n            self.loop_seconds = loop_seconds  # seconds between runs\n        self._should_stop = False  # flag for whether the service should stop running\n        self._is_running = False  # flag for whether the service is running\n        self.name = type(self).__name__\n        self.logger = get_logger(f\"server.services.{self.name.lower()}\")\n\n        if handle_signals:\n            _register_signal(signal.SIGINT, self._stop)\n            _register_signal(signal.SIGTERM, self._stop)\n\n    @inject_db\n    async def _on_start(self, db: PrefectDBInterface) -> None:\n        \"\"\"\n        Called prior to running the service\n        \"\"\"\n        # reset the _should_stop flag\n        self._should_stop = False\n        # set the _is_running flag\n        self._is_running = True\n\n    async def _on_stop(self) -> None:\n        \"\"\"\n        Called after running the service\n        \"\"\"\n        # reset the _is_running flag\n        self._is_running = False\n\n    async def start(self, loops=None) -> None:\n        \"\"\"\n        Run the service `loops` time. 
Pass loops=None to run forever.\n\n        Args:\n            loops (int, optional): the number of loops to run before exiting.\n        \"\"\"\n\n        await self._on_start()\n\n        i = 0\n        while not self._should_stop:\n            start_time = pendulum.now(\"UTC\")\n\n            try:\n                self.logger.debug(f\"About to run {self.name}...\")\n                await self.run_once()\n\n            except NotImplementedError as exc:\n                raise exc from None\n\n            # if an error is raised, log and continue\n            except Exception as exc:\n                self.logger.error(f\"Unexpected error in: {repr(exc)}\", exc_info=True)\n\n            end_time = pendulum.now(\"UTC\")\n\n            # if service took longer than its loop interval, log a warning\n            # that the interval might be too short\n            if (end_time - start_time).total_seconds() > self.loop_seconds:\n                self.logger.warning(\n                    f\"{self.name} took {(end_time-start_time).total_seconds()} seconds\"\n                    \" to run, which is longer than its loop interval of\"\n                    f\" {self.loop_seconds} seconds.\"\n                )\n\n            # check if early stopping was requested\n            i += 1\n            if loops is not None and i == loops:\n                self.logger.debug(f\"{self.name} exiting after {loops} loop(s).\")\n                await self.stop(block=False)\n\n            # next run is every \"loop seconds\" after each previous run *started*.\n            # note that if the loop took unexpectedly long, the \"next_run\" time\n            # might be in the past, which will result in an instant start\n            next_run = max(\n                start_time.add(seconds=self.loop_seconds), pendulum.now(\"UTC\")\n            )\n            self.logger.debug(f\"Finished running {self.name}. Next run at {next_run}\")\n\n            # check the `_should_stop` flag every 1 seconds until the next run time is reached\n            while pendulum.now(\"UTC\") < next_run and not self._should_stop:\n                await asyncio.sleep(\n                    min(1, (next_run - pendulum.now(\"UTC\")).total_seconds())\n                )\n\n        await self._on_stop()\n\n    async def stop(self, block=True) -> None:\n        \"\"\"\n        Gracefully stops a running LoopService and optionally blocks until the\n        service stops.\n\n        Args:\n            block (bool): if True, blocks until the service is\n                finished running. Otherwise it requests a stop and returns but\n                the service may still be running a final loop.\n\n        \"\"\"\n        self._stop()\n\n        if block:\n            # if block=True, sleep until the service stops running,\n            # but no more than `loop_seconds` to avoid a deadlock\n            with anyio.move_on_after(self.loop_seconds):\n                while self._is_running:\n                    await asyncio.sleep(0.1)\n\n            # if the service is still running after `loop_seconds`, something's wrong\n            if self._is_running:\n                self.logger.warning(\n                    f\"`stop(block=True)` was called on {self.name} but more than one\"\n                    f\" loop interval ({self.loop_seconds} seconds) has passed. This\"\n                    \" usually means something is wrong. 
If `stop()` was called from\"\n                    \" inside the loop service, use `stop(block=False)` instead.\"\n                )\n\n    def _stop(self, *_) -> None:\n        \"\"\"\n        Private, synchronous method for setting the `_should_stop` flag. Takes arbitrary\n        arguments so it can be used as a signal handler.\n        \"\"\"\n        self._should_stop = True\n\n    async def run_once(self) -> None:\n        \"\"\"\n        Represents one loop of the service.\n\n        Users should override this method.\n\n        To actually run the service once, call `LoopService().start(loops=1)`\n        instead of `LoopService().run_once()`, because this method will not invoke setup\n        and teardown methods properly.\n        \"\"\"\n        raise NotImplementedError(\"LoopService subclasses must implement this method.\")\n
      "},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService.run_once","title":"run_once async","text":"

      Represents one loop of the service.

      Users should override this method.

      To actually run the service once, call LoopService().start(loops=1) instead of LoopService().run_once(), because this method will not invoke setup and teardown methods properly.

      Source code in prefect/server/services/loop_service.py
      async def run_once(self) -> None:\n    \"\"\"\n    Represents one loop of the service.\n\n    Users should override this method.\n\n    To actually run the service once, call `LoopService().start(loops=1)`\n    instead of `LoopService().run_once()`, because this method will not invoke setup\n    and teardown methods properly.\n    \"\"\"\n    raise NotImplementedError(\"LoopService subclasses must implement this method.\")\n
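      A minimal sketch of the pattern described above; the service name and body are hypothetical:

```python
import asyncio
from prefect.server.services.loop_service import LoopService

class HeartbeatService(LoopService):      # hypothetical example service
    loop_seconds = 10                     # interval between iterations

    async def run_once(self) -> None:
        self.logger.info("heartbeat")

# Use start(loops=1) rather than calling run_once() directly so setup/teardown run.
asyncio.run(HeartbeatService(handle_signals=False).start(loops=1))
```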
      "},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService.start","title":"start async","text":"

      Run the service loops times. Pass loops=None to run forever.

      Parameters:

      Name Type Description Default loops int

      the number of loops to run before exiting.

      None Source code in prefect/server/services/loop_service.py
      async def start(self, loops=None) -> None:\n    \"\"\"\n    Run the service `loops` time. Pass loops=None to run forever.\n\n    Args:\n        loops (int, optional): the number of loops to run before exiting.\n    \"\"\"\n\n    await self._on_start()\n\n    i = 0\n    while not self._should_stop:\n        start_time = pendulum.now(\"UTC\")\n\n        try:\n            self.logger.debug(f\"About to run {self.name}...\")\n            await self.run_once()\n\n        except NotImplementedError as exc:\n            raise exc from None\n\n        # if an error is raised, log and continue\n        except Exception as exc:\n            self.logger.error(f\"Unexpected error in: {repr(exc)}\", exc_info=True)\n\n        end_time = pendulum.now(\"UTC\")\n\n        # if service took longer than its loop interval, log a warning\n        # that the interval might be too short\n        if (end_time - start_time).total_seconds() > self.loop_seconds:\n            self.logger.warning(\n                f\"{self.name} took {(end_time-start_time).total_seconds()} seconds\"\n                \" to run, which is longer than its loop interval of\"\n                f\" {self.loop_seconds} seconds.\"\n            )\n\n        # check if early stopping was requested\n        i += 1\n        if loops is not None and i == loops:\n            self.logger.debug(f\"{self.name} exiting after {loops} loop(s).\")\n            await self.stop(block=False)\n\n        # next run is every \"loop seconds\" after each previous run *started*.\n        # note that if the loop took unexpectedly long, the \"next_run\" time\n        # might be in the past, which will result in an instant start\n        next_run = max(\n            start_time.add(seconds=self.loop_seconds), pendulum.now(\"UTC\")\n        )\n        self.logger.debug(f\"Finished running {self.name}. Next run at {next_run}\")\n\n        # check the `_should_stop` flag every 1 seconds until the next run time is reached\n        while pendulum.now(\"UTC\") < next_run and not self._should_stop:\n            await asyncio.sleep(\n                min(1, (next_run - pendulum.now(\"UTC\")).total_seconds())\n            )\n\n    await self._on_stop()\n
      "},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService.stop","title":"stop async","text":"

      Gracefully stops a running LoopService and optionally blocks until the service stops.

      Parameters:

      Name Type Description Default block bool

      if True, blocks until the service is finished running. Otherwise it requests a stop and returns but the service may still be running a final loop.

      True Source code in prefect/server/services/loop_service.py
      async def stop(self, block=True) -> None:\n    \"\"\"\n    Gracefully stops a running LoopService and optionally blocks until the\n    service stops.\n\n    Args:\n        block (bool): if True, blocks until the service is\n            finished running. Otherwise it requests a stop and returns but\n            the service may still be running a final loop.\n\n    \"\"\"\n    self._stop()\n\n    if block:\n        # if block=True, sleep until the service stops running,\n        # but no more than `loop_seconds` to avoid a deadlock\n        with anyio.move_on_after(self.loop_seconds):\n            while self._is_running:\n                await asyncio.sleep(0.1)\n\n        # if the service is still running after `loop_seconds`, something's wrong\n        if self._is_running:\n            self.logger.warning(\n                f\"`stop(block=True)` was called on {self.name} but more than one\"\n                f\" loop interval ({self.loop_seconds} seconds) has passed. This\"\n                \" usually means something is wrong. If `stop()` was called from\"\n                \" inside the loop service, use `stop(block=False)` instead.\"\n            )\n
      "},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.run_multiple_services","title":"run_multiple_services async","text":"

      Only one signal handler can be active at a time, so this function takes a list of loop services and runs all of them with a global signal handler.

      Source code in prefect/server/services/loop_service.py
      async def run_multiple_services(loop_services: List[LoopService]):\n    \"\"\"\n    Only one signal handler can be active at a time, so this function takes a list\n    of loop services and runs all of them with a global signal handler.\n    \"\"\"\n\n    def stop_all_services(self, *_):\n        for service in loop_services:\n            service._stop()\n\n    signal.signal(signal.SIGINT, stop_all_services)\n    signal.signal(signal.SIGTERM, stop_all_services)\n    await asyncio.gather(*[service.start() for service in loop_services])\n
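      A hedged usage sketch; it assumes a configured Prefect server database and runs until SIGINT/SIGTERM is received. Passing handle_signals=False keeps each service from registering its own handlers:

```python
import asyncio
from prefect.server.services.late_runs import MarkLateRuns
from prefect.server.services.loop_service import run_multiple_services
from prefect.server.services.scheduler import Scheduler

async def main():
    await run_multiple_services(
        [MarkLateRuns(handle_signals=False), Scheduler(handle_signals=False)]
    )

asyncio.run(main())
```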
      "},{"location":"api-ref/server/services/scheduler/","title":"server.services.scheduler","text":""},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler","title":"prefect.server.services.scheduler","text":"

      The Scheduler service.

      "},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.RecentDeploymentsScheduler","title":"RecentDeploymentsScheduler","text":"

      Bases: Scheduler

      A scheduler that only schedules deployments that were updated very recently. This scheduler can run on a tight loop and ensure that runs from newly-created or updated deployments are rapidly scheduled without having to wait for the \"main\" scheduler to complete its loop.

      Note that scheduling is idempotent, so it's OK for this scheduler to attempt to schedule the same deployments as the main scheduler. Its purpose is to accelerate scheduling for any deployments that users are interacting with.

      Source code in prefect/server/services/scheduler.py
      class RecentDeploymentsScheduler(Scheduler):\n    \"\"\"\n    A scheduler that only schedules deployments that were updated very recently.\n    This scheduler can run on a tight loop and ensure that runs from\n    newly-created or updated deployments are rapidly scheduled without having to\n    wait for the \"main\" scheduler to complete its loop.\n\n    Note that scheduling is idempotent, so its ok for this scheduler to attempt\n    to schedule the same deployments as the main scheduler. It's purpose is to\n    accelerate scheduling for any deployments that users are interacting with.\n    \"\"\"\n\n    # this scheduler runs on a tight loop\n    loop_seconds = 5\n\n    @inject_db\n    def _get_select_deployments_to_schedule_query(self, db: PrefectDBInterface):\n        \"\"\"\n        Returns a sqlalchemy query for selecting deployments to schedule\n        \"\"\"\n        query = (\n            sa.select(db.Deployment.id)\n            .where(\n                sa.and_(\n                    db.Deployment.paused.is_not(True),\n                    # use a slightly larger window than the loop interval to pick up\n                    # any deployments that were created *while* the scheduler was\n                    # last running (assuming the scheduler takes less than one\n                    # second to run). Scheduling is idempotent so picking up schedules\n                    # multiple times is not a concern.\n                    db.Deployment.updated\n                    >= pendulum.now(\"UTC\").subtract(seconds=self.loop_seconds + 1),\n                    (\n                        # Only include deployments that have at least one\n                        # active schedule.\n                        sa.select(db.DeploymentSchedule.deployment_id)\n                        .where(\n                            sa.and_(\n                                db.DeploymentSchedule.deployment_id == db.Deployment.id,\n                                db.DeploymentSchedule.active.is_(True),\n                            )\n                        )\n                        .exists()\n                    ),\n                )\n            )\n            .order_by(db.Deployment.id)\n            .limit(self.deployment_batch_size)\n        )\n        return query\n
      "},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.Scheduler","title":"Scheduler","text":"

      Bases: LoopService

      A loop service that schedules flow runs from deployments.

      Source code in prefect/server/services/scheduler.py
      class Scheduler(LoopService):\n    \"\"\"\n    A loop service that schedules flow runs from deployments.\n    \"\"\"\n\n    # the main scheduler takes its loop interval from\n    # PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS\n    loop_seconds = None\n\n    def __init__(self, loop_seconds: float = None, **kwargs):\n        super().__init__(\n            loop_seconds=(\n                loop_seconds\n                or self.loop_seconds\n                or PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS.value()\n            ),\n            **kwargs,\n        )\n        self.deployment_batch_size: int = (\n            PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE.value()\n        )\n        self.max_runs: int = PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS.value()\n        self.min_runs: int = PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS.value()\n        self.max_scheduled_time: datetime.timedelta = (\n            PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME.value()\n        )\n        self.min_scheduled_time: datetime.timedelta = (\n            PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME.value()\n        )\n        self.insert_batch_size = (\n            PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE.value()\n        )\n\n    @inject_db\n    async def run_once(self, db: PrefectDBInterface):\n        \"\"\"\n        Schedule flow runs by:\n\n        - Querying for deployments with active schedules\n        - Generating the next set of flow runs based on each deployments schedule\n        - Inserting all scheduled flow runs into the database\n\n        All inserted flow runs are committed to the database at the termination of the\n        loop.\n        \"\"\"\n        total_inserted_runs = 0\n\n        last_id = None\n        while True:\n            async with db.session_context(begin_transaction=False) as session:\n                query = self._get_select_deployments_to_schedule_query()\n\n                # use cursor based pagination\n                if last_id:\n                    query = query.where(db.Deployment.id > last_id)\n\n                result = await session.execute(query)\n                deployment_ids = result.scalars().unique().all()\n\n                # collect runs across all deployments\n                try:\n                    runs_to_insert = await self._collect_flow_runs(\n                        session=session, deployment_ids=deployment_ids\n                    )\n                except TryAgain:\n                    continue\n\n            # bulk insert the runs based on batch size setting\n            for batch in batched_iterable(runs_to_insert, self.insert_batch_size):\n                async with db.session_context(begin_transaction=True) as session:\n                    inserted_runs = await self._insert_scheduled_flow_runs(\n                        session=session, runs=batch\n                    )\n                    total_inserted_runs += len(inserted_runs)\n\n            # if this is the last page of deployments, exit the loop\n            if len(deployment_ids) < self.deployment_batch_size:\n                break\n            else:\n                # record the last deployment ID\n                last_id = deployment_ids[-1]\n\n        self.logger.info(f\"Scheduled {total_inserted_runs} runs.\")\n\n    @inject_db\n    def _get_select_deployments_to_schedule_query(self, db: PrefectDBInterface):\n        \"\"\"\n        Returns a sqlalchemy query for selecting deployments to schedule.\n\n        The query gets the IDs of any deployments 
with:\n\n            - an active schedule\n            - EITHER:\n                - fewer than `min_runs` auto-scheduled runs\n                - OR the max scheduled time is less than `max_scheduled_time` in the future\n        \"\"\"\n        now = pendulum.now(\"UTC\")\n        query = (\n            sa.select(db.Deployment.id)\n            .select_from(db.Deployment)\n            # TODO: on Postgres, this could be replaced with a lateral join that\n            # sorts by `next_scheduled_start_time desc` and limits by\n            # `self.min_runs` for a ~ 50% speedup. At the time of writing,\n            # performance of this universal query appears to be fast enough that\n            # this optimization is not worth maintaining db-specific queries\n            .join(\n                db.FlowRun,\n                # join on matching deployments, only picking up future scheduled runs\n                sa.and_(\n                    db.Deployment.id == db.FlowRun.deployment_id,\n                    db.FlowRun.state_type == StateType.SCHEDULED,\n                    db.FlowRun.next_scheduled_start_time >= now,\n                    db.FlowRun.auto_scheduled.is_(True),\n                ),\n                isouter=True,\n            )\n            .where(\n                sa.and_(\n                    db.Deployment.paused.is_not(True),\n                    (\n                        # Only include deployments that have at least one\n                        # active schedule.\n                        sa.select(db.DeploymentSchedule.deployment_id)\n                        .where(\n                            sa.and_(\n                                db.DeploymentSchedule.deployment_id == db.Deployment.id,\n                                db.DeploymentSchedule.active.is_(True),\n                            )\n                        )\n                        .exists()\n                    ),\n                )\n            )\n            .group_by(db.Deployment.id)\n            # having EITHER fewer than three runs OR runs not scheduled far enough out\n            .having(\n                sa.or_(\n                    sa.func.count(db.FlowRun.next_scheduled_start_time) < self.min_runs,\n                    sa.func.max(db.FlowRun.next_scheduled_start_time)\n                    < now + self.min_scheduled_time,\n                )\n            )\n            .order_by(db.Deployment.id)\n            .limit(self.deployment_batch_size)\n        )\n        return query\n\n    async def _collect_flow_runs(\n        self,\n        session: sa.orm.Session,\n        deployment_ids: List[UUID],\n    ) -> List[Dict]:\n        runs_to_insert = []\n        for deployment_id in deployment_ids:\n            now = pendulum.now(\"UTC\")\n            # guard against erroneously configured schedules\n            try:\n                runs_to_insert.extend(\n                    await self._generate_scheduled_flow_runs(\n                        session=session,\n                        deployment_id=deployment_id,\n                        start_time=now,\n                        end_time=now + self.max_scheduled_time,\n                        min_time=self.min_scheduled_time,\n                        min_runs=self.min_runs,\n                        max_runs=self.max_runs,\n                    )\n                )\n            except Exception:\n                self.logger.exception(\n                    f\"Error scheduling deployment {deployment_id!r}.\",\n                )\n            finally:\n                
connection = await session.connection()\n                if connection.invalidated:\n                    # If the error we handled above was the kind of database error that\n                    # causes underlying transaction to rollback and the connection to\n                    # become invalidated, rollback this session.  Errors that may cause\n                    # this are connection drops, database restarts, and things of the\n                    # sort.\n                    #\n                    # This rollback _does not rollback a transaction_, since that has\n                    # actually already happened due to the error above.  It brings the\n                    # Python session in sync with underlying connection so that when we\n                    # exec the outer with block, the context manager will not attempt to\n                    # commit the session.\n                    #\n                    # Then, raise TryAgain to break out of these nested loops, back to\n                    # the outer loop, where we'll begin a new transaction with\n                    # session.begin() in the next loop iteration.\n                    await session.rollback()\n                    raise TryAgain()\n        return runs_to_insert\n\n    @inject_db\n    async def _generate_scheduled_flow_runs(\n        self,\n        session: sa.orm.Session,\n        deployment_id: UUID,\n        start_time: datetime.datetime,\n        end_time: datetime.datetime,\n        min_time: datetime.timedelta,\n        min_runs: int,\n        max_runs: int,\n        db: PrefectDBInterface,\n    ) -> List[Dict]:\n        \"\"\"\n        Given a `deployment_id` and schedule params, generates a list of flow run\n        objects and associated scheduled states that represent scheduled flow runs.\n\n        Pass-through method for overrides.\n\n\n        Args:\n            session: a database session\n            deployment_id: the id of the deployment to schedule\n            start_time: the time from which to start scheduling runs\n            end_time: runs will be scheduled until at most this time\n            min_time: runs will be scheduled until at least this far in the future\n            min_runs: a minimum amount of runs to schedule\n            max_runs: a maximum amount of runs to schedule\n\n        This function will generate the minimum number of runs that satisfy the min\n        and max times, and the min and max counts. 
Specifically, the following order\n        will be respected:\n\n            - Runs will be generated starting on or after the `start_time`\n            - No more than `max_runs` runs will be generated\n            - No runs will be generated after `end_time` is reached\n            - At least `min_runs` runs will be generated\n            - Runs will be generated until at least `start_time + min_time` is reached\n\n        \"\"\"\n        return await models.deployments._generate_scheduled_flow_runs(\n            session=session,\n            deployment_id=deployment_id,\n            start_time=start_time,\n            end_time=end_time,\n            min_time=min_time,\n            min_runs=min_runs,\n            max_runs=max_runs,\n        )\n\n    @inject_db\n    async def _insert_scheduled_flow_runs(\n        self,\n        session: sa.orm.Session,\n        runs: List[Dict],\n        db: PrefectDBInterface,\n    ) -> List[UUID]:\n        \"\"\"\n        Given a list of flow runs to schedule, as generated by\n        `_generate_scheduled_flow_runs`, inserts them into the database. Note this is a\n        separate method to facilitate batch operations on many scheduled runs.\n\n        Pass-through method for overrides.\n        \"\"\"\n        return await models.deployments._insert_scheduled_flow_runs(\n            session=session, runs=runs\n        )\n
      "},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.Scheduler.run_once","title":"run_once async","text":"

      Schedule flow runs by:

      • Querying for deployments with active schedules
      • Generating the next set of flow runs based on each deployment's schedule
      • Inserting all scheduled flow runs into the database

      All inserted flow runs are committed to the database at the termination of the loop.

      Source code in prefect/server/services/scheduler.py
      @inject_db\nasync def run_once(self, db: PrefectDBInterface):\n    \"\"\"\n    Schedule flow runs by:\n\n    - Querying for deployments with active schedules\n    - Generating the next set of flow runs based on each deployments schedule\n    - Inserting all scheduled flow runs into the database\n\n    All inserted flow runs are committed to the database at the termination of the\n    loop.\n    \"\"\"\n    total_inserted_runs = 0\n\n    last_id = None\n    while True:\n        async with db.session_context(begin_transaction=False) as session:\n            query = self._get_select_deployments_to_schedule_query()\n\n            # use cursor based pagination\n            if last_id:\n                query = query.where(db.Deployment.id > last_id)\n\n            result = await session.execute(query)\n            deployment_ids = result.scalars().unique().all()\n\n            # collect runs across all deployments\n            try:\n                runs_to_insert = await self._collect_flow_runs(\n                    session=session, deployment_ids=deployment_ids\n                )\n            except TryAgain:\n                continue\n\n        # bulk insert the runs based on batch size setting\n        for batch in batched_iterable(runs_to_insert, self.insert_batch_size):\n            async with db.session_context(begin_transaction=True) as session:\n                inserted_runs = await self._insert_scheduled_flow_runs(\n                    session=session, runs=batch\n                )\n                total_inserted_runs += len(inserted_runs)\n\n        # if this is the last page of deployments, exit the loop\n        if len(deployment_ids) < self.deployment_batch_size:\n            break\n        else:\n            # record the last deployment ID\n            last_id = deployment_ids[-1]\n\n    self.logger.info(f\"Scheduled {total_inserted_runs} runs.\")\n
      "},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.TryAgain","title":"TryAgain","text":"

      Bases: Exception

      Internal control-flow exception used to retry the Scheduler's main loop

      Source code in prefect/server/services/scheduler.py
      class TryAgain(Exception):\n    \"\"\"Internal control-flow exception used to retry the Scheduler's main loop\"\"\"\n
      "},{"location":"api-ref/server/utilities/database/","title":"server.utilities.database","text":""},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database","title":"prefect.server.utilities.database","text":"

      Utilities for interacting with Prefect REST API database and ORM layer.

      Prefect supports both SQLite and Postgres. Many of these utilities allow the Prefect REST API to seamlessly switch between the two.

      "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.GenerateUUID","title":"GenerateUUID","text":"

      Bases: FunctionElement

      Platform-independent UUID default generator. Note that the actual functionality for this class is specified in the compiles-decorated functions below.

      Source code in prefect/server/utilities/database.py
      class GenerateUUID(FunctionElement):\n    \"\"\"\n    Platform-independent UUID default generator.\n    Note the actual functionality for this class is specified in the\n    `compiles`-decorated functions below\n    \"\"\"\n\n    name = \"uuid_default\"\n
      "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.JSON","title":"JSON","text":"

      Bases: TypeDecorator

      JSON type that returns SQLAlchemy's dialect-specific JSON types, where possible. Uses generic JSON otherwise.

      The \"base\" type is postgresql.JSONB to expose useful methods prior to SQL compilation

      Source code in prefect/server/utilities/database.py
      class JSON(TypeDecorator):\n    \"\"\"\n    JSON type that returns SQLAlchemy's dialect-specific JSON types, where\n    possible. Uses generic JSON otherwise.\n\n    The \"base\" type is postgresql.JSONB to expose useful methods prior\n    to SQL compilation\n    \"\"\"\n\n    impl = postgresql.JSONB\n    cache_ok = True\n\n    def load_dialect_impl(self, dialect):\n        if dialect.name == \"postgresql\":\n            return dialect.type_descriptor(postgresql.JSONB(none_as_null=True))\n        elif dialect.name == \"sqlite\":\n            return dialect.type_descriptor(sqlite.JSON(none_as_null=True))\n        else:\n            return dialect.type_descriptor(sa.JSON(none_as_null=True))\n\n    def process_bind_param(self, value, dialect):\n        \"\"\"Prepares the given value to be used as a JSON field in a parameter binding\"\"\"\n        if not value:\n            return value\n\n        # PostgreSQL does not support the floating point extrema values `NaN`,\n        # `-Infinity`, or `Infinity`\n        # https://www.postgresql.org/docs/current/datatype-json.html#JSON-TYPE-MAPPING-TABLE\n        #\n        # SQLite supports storing and retrieving full JSON values that include\n        # `NaN`, `-Infinity`, or `Infinity`, but any query that requires SQLite to parse\n        # the value (like `json_extract`) will fail.\n        #\n        # Replace any `NaN`, `-Infinity`, or `Infinity` values with `None` in the\n        # returned value.  See more about `parse_constant` at\n        # https://docs.python.org/3/library/json.html#json.load.\n        return json.loads(json.dumps(value), parse_constant=lambda c: None)\n
      "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.Pydantic","title":"Pydantic","text":"

      Bases: TypeDecorator

      A pydantic type that converts inserted parameters to json and converts read values to the pydantic type.

      Source code in prefect/server/utilities/database.py
      class Pydantic(TypeDecorator):\n    \"\"\"\n    A pydantic type that converts inserted parameters to\n    json and converts read values to the pydantic type.\n    \"\"\"\n\n    impl = JSON\n    cache_ok = True\n\n    def __init__(self, pydantic_type, sa_column_type=None):\n        super().__init__()\n        self._pydantic_type = pydantic_type\n        if sa_column_type is not None:\n            self.impl = sa_column_type\n\n    def process_bind_param(self, value, dialect):\n        if value is None:\n            return None\n        # parse the value to ensure it complies with the schema\n        # (this will raise validation errors if not)\n        value = pydantic.parse_obj_as(self._pydantic_type, value)\n        # sqlalchemy requires the bind parameter's value to be a python-native\n        # collection of JSON-compatible objects. we achieve that by dumping the\n        # value to a json string using the pydantic JSON encoder and re-parsing\n        # it into a python-native form.\n        return json.loads(json.dumps(value, default=pydantic.json.pydantic_encoder))\n\n    def process_result_value(self, value, dialect):\n        if value is not None:\n            # load the json object into a fully hydrated typed object\n            return pydantic.parse_obj_as(self._pydantic_type, value)\n
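
      Usage sketch (the Settings schema and column below are hypothetical): values bound to the column are validated against the pydantic model and stored as JSON, and values read back are rehydrated into model instances.

      from typing import List\n\nimport pydantic\nimport sqlalchemy as sa\n\nfrom prefect.server.utilities.database import Pydantic\n\nclass Settings(pydantic.BaseModel):\n    # Hypothetical schema used only for illustration\n    retries: int = 0\n    tags: List[str] = []\n\n# Validated on write, parsed back into Settings instances on read\nsettings = sa.Column(Pydantic(Settings), nullable=True)\n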
      "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.Timestamp","title":"Timestamp","text":"

      Bases: TypeDecorator

      TypeDecorator that ensures that timestamps have a timezone.

      For SQLite, all timestamps are converted to UTC (since they are stored as naive timestamps without timezones) and recovered as UTC.

      Source code in prefect/server/utilities/database.py
      class Timestamp(TypeDecorator):\n    \"\"\"TypeDecorator that ensures that timestamps have a timezone.\n\n    For SQLite, all timestamps are converted to UTC (since they are stored\n    as naive timestamps without timezones) and recovered as UTC.\n    \"\"\"\n\n    impl = sa.TIMESTAMP(timezone=True)\n    cache_ok = True\n\n    def load_dialect_impl(self, dialect):\n        if dialect.name == \"postgresql\":\n            return dialect.type_descriptor(postgresql.TIMESTAMP(timezone=True))\n        elif dialect.name == \"sqlite\":\n            return dialect.type_descriptor(\n                sqlite.DATETIME(\n                    # SQLite is very particular about datetimes, and performs all comparisons\n                    # as alphanumeric comparisons without regard for actual timestamp\n                    # semantics or timezones. Therefore, it's important to have uniform\n                    # and sortable datetime representations. The default is an ISO8601-compatible\n                    # string with NO time zone and a space (\" \") delimiter between the date\n                    # and the time. The below settings can be used to add a \"T\" delimiter but\n                    # will require all other sqlite datetimes to be set similarly, including\n                    # the custom default value for datetime columns and any handwritten SQL\n                    # formed with `strftime()`.\n                    #\n                    # store with \"T\" separator for time\n                    # storage_format=(\n                    #     \"%(year)04d-%(month)02d-%(day)02d\"\n                    #     \"T%(hour)02d:%(minute)02d:%(second)02d.%(microsecond)06d\"\n                    # ),\n                    # handle ISO 8601 with \"T\" or \" \" as the time separator\n                    # regexp=r\"(\\d+)-(\\d+)-(\\d+)[T ](\\d+):(\\d+):(\\d+).(\\d+)\",\n                )\n            )\n        else:\n            return dialect.type_descriptor(sa.TIMESTAMP(timezone=True))\n\n    def process_bind_param(self, value, dialect):\n        if value is None:\n            return None\n        else:\n            if value.tzinfo is None:\n                raise ValueError(\"Timestamps must have a timezone.\")\n            elif dialect.name == \"sqlite\":\n                return pendulum.instance(value).in_timezone(\"UTC\")\n            else:\n                return value\n\n    def process_result_value(self, value, dialect):\n        # retrieve timestamps in their native timezone (or UTC)\n        if value is not None:\n            return pendulum.instance(value).in_timezone(\"utc\")\n
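
      Usage sketch (hypothetical column; the server default shown assumes the now() construct defined below): only timezone-aware values may be bound, and naive datetimes raise a ValueError.

      import pendulum\nimport sqlalchemy as sa\n\nfrom prefect.server.utilities.database import Timestamp, now\n\n# Hypothetical column definition with a platform-independent \"now\" default\ncreated = sa.Column(Timestamp(), nullable=False, server_default=now())\n\naware = pendulum.now(\"UTC\")          # accepted\nnaive = pendulum.naive(2024, 5, 3)   # raises ValueError when bound to the column\n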
      "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.UUID","title":"UUID","text":"

      Bases: TypeDecorator

      Platform-independent UUID type.

      Uses PostgreSQL's UUID type, otherwise uses CHAR(36), storing as stringified hex values with hyphens.

      Source code in prefect/server/utilities/database.py
      class UUID(TypeDecorator):\n    \"\"\"\n    Platform-independent UUID type.\n\n    Uses PostgreSQL's UUID type, otherwise uses\n    CHAR(36), storing as stringified hex values with\n    hyphens.\n    \"\"\"\n\n    impl = TypeEngine\n    cache_ok = True\n\n    def load_dialect_impl(self, dialect):\n        if dialect.name == \"postgresql\":\n            return dialect.type_descriptor(postgresql.UUID())\n        else:\n            return dialect.type_descriptor(CHAR(36))\n\n    def process_bind_param(self, value, dialect):\n        if value is None:\n            return None\n        elif dialect.name == \"postgresql\":\n            return str(value)\n        elif isinstance(value, uuid.UUID):\n            return str(value)\n        else:\n            return str(uuid.UUID(value))\n\n    def process_result_value(self, value, dialect):\n        if value is None:\n            return value\n        else:\n            if not isinstance(value, uuid.UUID):\n                value = uuid.UUID(value)\n            return value\n
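
      Usage sketch (hypothetical model): combined with GenerateUUID above, this gives a portable UUID primary key whose default can be generated either client-side or by the database.

      import uuid\n\nimport sqlalchemy as sa\nfrom sqlalchemy.orm import declarative_base\n\nfrom prefect.server.utilities.database import UUID, GenerateUUID\n\nBase = declarative_base()\n\nclass Item(Base):\n    # Hypothetical table used only for illustration\n    __tablename__ = \"items\"\n    # postgresql.UUID on Postgres, CHAR(36) elsewhere\n    id = sa.Column(UUID(), primary_key=True, server_default=GenerateUUID(), default=uuid.uuid4)\n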
      "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.date_add","title":"date_add","text":"

      Bases: FunctionElement

      Platform-independent way to add a date and an interval.

      Source code in prefect/server/utilities/database.py
      class date_add(FunctionElement):\n    \"\"\"\n    Platform-independent way to add a date and an interval.\n    \"\"\"\n\n    type = Timestamp()\n    name = \"date_add\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, dt, interval):\n        self.dt = dt\n        self.interval = interval\n        super().__init__()\n
      "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.date_diff","title":"date_diff","text":"

      Bases: FunctionElement

      Platform-independent difference of dates. Computes d1 - d2.

      Source code in prefect/server/utilities/database.py
      class date_diff(FunctionElement):\n    \"\"\"\n    Platform-independent difference of dates. Computes d1 - d2.\n    \"\"\"\n\n    type = sa.Interval()\n    name = \"date_diff\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, d1, d2):\n        self.d1 = d1\n        self.d2 = d2\n        super().__init__()\n
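
      Usage sketch (the runs table and its columns are hypothetical): both date_add and date_diff can be embedded directly in query expressions and compile to the appropriate SQL for each dialect.

      import datetime\n\nimport sqlalchemy as sa\n\nfrom prefect.server.utilities.database import date_add, date_diff, now\n\n# Hypothetical table used only for illustration\nruns = sa.table(\"runs\", sa.column(\"scheduled_time\"), sa.column(\"created\"))\n\n# Add an interval to a timestamp in a dialect-agnostic way\ndeadline = date_add(runs.c.scheduled_time, datetime.timedelta(minutes=30))\n\n# Difference of two timestamps, computed as d1 - d2\nage = date_diff(now(), runs.c.created)\n\nquery = sa.select(runs.c.scheduled_time, deadline, age)\n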
      "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.interval_add","title":"interval_add","text":"

      Bases: FunctionElement

      Platform-independent way to add two intervals.

      Source code in prefect/server/utilities/database.py
      class interval_add(FunctionElement):\n    \"\"\"\n    Platform-independent way to add two intervals.\n    \"\"\"\n\n    type = sa.Interval()\n    name = \"interval_add\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, i1, i2):\n        self.i1 = i1\n        self.i2 = i2\n        super().__init__()\n
      "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.json_contains","title":"json_contains","text":"

      Bases: FunctionElement

      Platform independent json_contains operator, tests if the left expression contains the right expression.

      On postgres this is equivalent to the @> containment operator. https://www.postgresql.org/docs/current/functions-json.html

      Source code in prefect/server/utilities/database.py
      class json_contains(FunctionElement):\n    \"\"\"\n    Platform independent json_contains operator, tests if the\n    `left` expression contains the `right` expression.\n\n    On postgres this is equivalent to the @> containment operator.\n    https://www.postgresql.org/docs/current/functions-json.html\n    \"\"\"\n\n    type = BOOLEAN\n    name = \"json_contains\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, left, right):\n        self.left = left\n        self.right = right\n        super().__init__()\n
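
      Usage sketch (hypothetical table and labels column): the construct yields a boolean expression suitable for a WHERE clause.

      import sqlalchemy as sa\n\nfrom prefect.server.utilities.database import json_contains\n\n# Hypothetical table used only for illustration\nresources = sa.table(\"resources\", sa.column(\"labels\"))\n\n# True where the labels document contains the given key/value pair\nquery = sa.select(resources).where(json_contains(resources.c.labels, {\"env\": \"prod\"}))\n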
      "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.json_extract","title":"json_extract","text":"

      Bases: FunctionElement

      Platform independent json_extract operator, extracts a value from a JSON field via key.

      On postgres this is equivalent to the ->> operator. https://www.postgresql.org/docs/current/functions-json.html

      Source code in prefect/server/utilities/database.py
      class json_extract(FunctionElement):\n    \"\"\"\n    Platform independent json_extract operator, extracts a value from a JSON\n    field via key.\n\n    On postgres this is equivalent to the ->> operator.\n    https://www.postgresql.org/docs/current/functions-json.html\n    \"\"\"\n\n    type = sa.Text()\n    name = \"json_extract\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, column: sa.Column, path: str, wrap_quotes: bool = False):\n        self.column = column\n        self.path = path\n        self.wrap_quotes = wrap_quotes\n        super().__init__()\n
      "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.json_has_all_keys","title":"json_has_all_keys","text":"

      Bases: FunctionElement

      Platform independent json_has_all_keys operator.

      On postgres this is equivalent to the ?& existence operator. https://www.postgresql.org/docs/current/functions-json.html

      Source code in prefect/server/utilities/database.py
      class json_has_all_keys(FunctionElement):\n    \"\"\"Platform independent json_has_all_keys operator.\n\n    On postgres this is equivalent to the ?& existence operator.\n    https://www.postgresql.org/docs/current/functions-json.html\n    \"\"\"\n\n    type = BOOLEAN\n    name = \"json_has_all_keys\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, json_expr, values: List):\n        self.json_expr = json_expr\n        if isinstance(values, list) and not all(isinstance(v, str) for v in values):\n            raise ValueError(\n                \"json_has_all_key values must be strings if provided as a literal list\"\n            )\n        self.values = values\n        super().__init__()\n
      "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.json_has_any_key","title":"json_has_any_key","text":"

      Bases: FunctionElement

      Platform independent json_has_any_key operator.

      On postgres this is equivalent to the ?| existence operator. https://www.postgresql.org/docs/current/functions-json.html

      Source code in prefect/server/utilities/database.py
      class json_has_any_key(FunctionElement):\n    \"\"\"\n    Platform independent json_has_any_key operator.\n\n    On postgres this is equivalent to the ?| existence operator.\n    https://www.postgresql.org/docs/current/functions-json.html\n    \"\"\"\n\n    type = BOOLEAN\n    name = \"json_has_any_key\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, json_expr, values: List):\n        self.json_expr = json_expr\n        if not all(isinstance(v, str) for v in values):\n            raise ValueError(\"json_has_any_key values must be strings\")\n        self.values = values\n        super().__init__()\n
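
      Usage sketch (hypothetical table and tags column): filter rows whose JSON tags contain at least one of the given strings; json_has_all_keys is used the same way but requires all of them.

      import sqlalchemy as sa\n\nfrom prefect.server.utilities.database import json_has_any_key\n\n# Hypothetical table used only for illustration\nflows = sa.table(\"flows\", sa.column(\"tags\"))\n\n# Matches rows whose tags include \"etl\" or \"daily\"; values must be strings\nquery = sa.select(flows).where(json_has_any_key(flows.c.tags, [\"etl\", \"daily\"]))\n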
      "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.now","title":"now","text":"

      Bases: FunctionElement

      Platform-independent \"now\" generator.

      Source code in prefect/server/utilities/database.py
      class now(FunctionElement):\n    \"\"\"\n    Platform-independent \"now\" generator.\n    \"\"\"\n\n    type = Timestamp()\n    name = \"now\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = True\n
      "},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.get_dialect","title":"get_dialect","text":"

      Get the dialect of a session, engine, or connection url.

      Primary use case is figuring out whether the Prefect REST API is communicating with SQLite or Postgres.

      Example
      import prefect.settings\nfrom prefect.server.utilities.database import get_dialect\n\ndialect = get_dialect(PREFECT_API_DATABASE_CONNECTION_URL.value())\nif dialect.name == \"sqlite\":\n    print(\"Using SQLite!\")\nelse:\n    print(\"Using Postgres!\")\n
      Source code in prefect/server/utilities/database.py
      def get_dialect(\n    obj: Union[str, sa.orm.Session, sa.engine.Engine],\n) -> sa.engine.Dialect:\n    \"\"\"\n    Get the dialect of a session, engine, or connection url.\n\n    Primary use case is figuring out whether the Prefect REST API is communicating with\n    SQLite or Postgres.\n\n    Example:\n        ```python\n        import prefect.settings\n        from prefect.server.utilities.database import get_dialect\n\n        dialect = get_dialect(PREFECT_API_DATABASE_CONNECTION_URL.value())\n        if dialect.name == \"sqlite\":\n            print(\"Using SQLite!\")\n        else:\n            print(\"Using Postgres!\")\n        ```\n    \"\"\"\n    if isinstance(obj, sa.orm.Session):\n        url = obj.bind.url\n    elif isinstance(obj, sa.engine.Engine):\n        url = obj.url\n    else:\n        url = sa.engine.url.make_url(obj)\n\n    return url.get_dialect()\n
      "},{"location":"api-ref/server/utilities/schemas/","title":"server.utilities.schemas","text":""},{"location":"api-ref/server/utilities/schemas/#prefect.server.utilities.schemas","title":"prefect.server.utilities.schemas","text":""},{"location":"api-ref/server/utilities/server/","title":"server.utilities.server","text":""},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server","title":"prefect.server.utilities.server","text":"

      Utilities for the Prefect REST API server.

      "},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.PrefectAPIRoute","title":"PrefectAPIRoute","text":"

      Bases: APIRoute

      A FastAPIRoute class which attaches an async stack to requests that exits before a response is returned.

      Requests already have request.scope['fastapi_astack'] which is an async stack for the full scope of the request. This stack is used for managing contexts of FastAPI dependencies. If we want to close a dependency before the request is complete (i.e. before returning a response to the user), we need a stack with a different scope. This extension adds this stack at request.state.response_scoped_stack.

      Source code in prefect/server/utilities/server.py
      class PrefectAPIRoute(APIRoute):\n    \"\"\"\n    A FastAPIRoute class which attaches an async stack to requests that exits before\n    a response is returned.\n\n    Requests already have `request.scope['fastapi_astack']` which is an async stack for\n    the full scope of the request. This stack is used for managing contexts of FastAPI\n    dependencies. If we want to close a dependency before the request is complete\n    (i.e. before returning a response to the user), we need a stack with a different\n    scope. This extension adds this stack at `request.state.response_scoped_stack`.\n    \"\"\"\n\n    def get_route_handler(self) -> Callable[[Request], Coroutine[Any, Any, Response]]:\n        default_handler = super().get_route_handler()\n\n        async def handle_response_scoped_depends(request: Request) -> Response:\n            # Create a new stack scoped to exit before the response is returned\n            async with AsyncExitStack() as stack:\n                request.state.response_scoped_stack = stack\n                response = await default_handler(request)\n\n            return response\n\n        return handle_response_scoped_depends\n
      "},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.PrefectRouter","title":"PrefectRouter","text":"

      Bases: APIRouter

      A base class for Prefect REST API routers.

      Source code in prefect/server/utilities/server.py
      class PrefectRouter(APIRouter):\n    \"\"\"\n    A base class for Prefect REST API routers.\n    \"\"\"\n\n    def __init__(self, **kwargs: Any) -> None:\n        kwargs.setdefault(\"route_class\", PrefectAPIRoute)\n        super().__init__(**kwargs)\n\n    def add_api_route(\n        self, path: str, endpoint: Callable[..., Any], **kwargs: Any\n    ) -> None:\n        \"\"\"\n        Add an API route.\n\n        For routes that return content and have not specified a `response_model`,\n        use return type annotation to infer the response model.\n\n        For routes that return No-Content status codes, explicitly set\n        a `response_class` to ensure nothing is returned in the response body.\n        \"\"\"\n        if kwargs.get(\"status_code\") == status.HTTP_204_NO_CONTENT:\n            # any routes that return No-Content status codes must\n            # explicitly set a response_class that will handle status codes\n            # and not return anything in the body\n            kwargs[\"response_class\"] = Response\n        if kwargs.get(\"response_model\") is None:\n            kwargs[\"response_model\"] = get_type_hints(endpoint).get(\"return\")\n        return super().add_api_route(path, endpoint, **kwargs)\n
      "},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.PrefectRouter.add_api_route","title":"add_api_route","text":"

      Add an API route.

      For routes that return content and have not specified a response_model, use return type annotation to infer the response model.

      For routes that return No-Content status codes, explicitly set a response_class to ensure nothing is returned in the response body.

      Source code in prefect/server/utilities/server.py
      def add_api_route(\n    self, path: str, endpoint: Callable[..., Any], **kwargs: Any\n) -> None:\n    \"\"\"\n    Add an API route.\n\n    For routes that return content and have not specified a `response_model`,\n    use return type annotation to infer the response model.\n\n    For routes that return No-Content status codes, explicitly set\n    a `response_class` to ensure nothing is returned in the response body.\n    \"\"\"\n    if kwargs.get(\"status_code\") == status.HTTP_204_NO_CONTENT:\n        # any routes that return No-Content status codes must\n        # explicitly set a response_class that will handle status codes\n        # and not return anything in the body\n        kwargs[\"response_class\"] = Response\n    if kwargs.get(\"response_model\") is None:\n        kwargs[\"response_model\"] = get_type_hints(endpoint).get(\"return\")\n    return super().add_api_route(path, endpoint, **kwargs)\n
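
      As an illustration (a minimal sketch; the endpoint and Widget model are hypothetical), a route registered on a PrefectRouter can rely on its return annotation instead of an explicit response_model.

      from pydantic import BaseModel\n\nfrom prefect.server.utilities.server import PrefectRouter\n\nrouter = PrefectRouter(prefix=\"/widgets\")\n\nclass Widget(BaseModel):\n    # Hypothetical response schema\n    name: str\n\n@router.get(\"/default\")\nasync def read_default_widget() -> Widget:\n    # response_model is inferred from the return annotation\n    return Widget(name=\"default\")\n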
      "},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.method_paths_from_routes","title":"method_paths_from_routes","text":"

      Generate a set of strings describing the given routes in the format <method> <path>.

      For example, \"GET /logs/\"

      Source code in prefect/server/utilities/server.py
      def method_paths_from_routes(routes: Sequence[BaseRoute]) -> Set[str]:\n    \"\"\"\n    Generate a set of strings describing the given routes in the format: <method> <path>\n\n    For example, \"GET /logs/\"\n    \"\"\"\n    method_paths = set()\n    for route in routes:\n        if isinstance(route, (APIRoute, StarletteRoute)):\n            for method in route.methods:\n                method_paths.add(f\"{method} {route.path}\")\n\n    return method_paths\n
      "},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.response_scoped_dependency","title":"response_scoped_dependency","text":"

      Ensure that this dependency closes before the response is returned to the client. By default, FastAPI closes dependencies after sending the response.

      Uses an async stack that is exited before the response is returned. This is particularly useful for database sessions which must be committed before the client can do more work.

      Do not use a response-scoped dependency within a FastAPI background task.

      Background tasks run after FastAPI sends the response, so a response-scoped dependency will already be closed. Use a normal FastAPI dependency instead.

      Parameters:

      • dependency (Callable, required): An async callable. FastAPI dependencies may still be used.

      Returns:

      A wrapped dependency which will push the dependency context manager onto a stack when called.

      Source code in prefect/server/utilities/server.py
      def response_scoped_dependency(dependency: Callable):\n    \"\"\"\n    Ensure that this dependency closes before the response is returned to the client. By\n    default, FastAPI closes dependencies after sending the response.\n\n    Uses an async stack that is exited before the response is returned. This is\n    particularly useful for database sessions which must be committed before the client\n    can do more work.\n\n    NOTE: Do not use a response-scoped dependency within a FastAPI background task.\n          Background tasks run after FastAPI sends the response, so a response-scoped\n          dependency will already be closed. Use a normal FastAPI dependency instead.\n\n    Args:\n        dependency: An async callable. FastAPI dependencies may still be used.\n\n    Returns:\n        A wrapped `dependency` which will push the `dependency` context manager onto\n        a stack when called.\n    \"\"\"\n    signature = inspect.signature(dependency)\n\n    async def wrapper(*args, request: Request, **kwargs):\n        # Replicate FastAPI behavior of auto-creating a context manager\n        if inspect.isasyncgenfunction(dependency):\n            context_manager = asynccontextmanager(dependency)\n        else:\n            context_manager = dependency\n\n        # Ensure request is provided if requested\n        if \"request\" in signature.parameters:\n            kwargs[\"request\"] = request\n\n        # Enter the route handler provided stack that is closed before responding,\n        # return the value yielded by the wrapped dependency\n        return await request.state.response_scoped_stack.enter_async_context(\n            context_manager(*args, **kwargs)\n        )\n\n    # Ensure that the signature includes `request: Request` to ensure that FastAPI will\n    # inject the request as a dependency; maintain the old signature so those depends\n    # work\n    request_parameter = inspect.signature(wrapper).parameters[\"request\"]\n    functools.update_wrapper(wrapper, dependency)\n\n    if \"request\" not in signature.parameters:\n        new_parameters = signature.parameters.copy()\n        new_parameters[\"request\"] = request_parameter\n        wrapper.__signature__ = signature.replace(\n            parameters=tuple(new_parameters.values())\n        )\n\n    return wrapper\n
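
      As an illustration (a minimal sketch; make_session is an assumed session factory, not a Prefect API), an async generator dependency can be wrapped so its cleanup runs before the response is sent.

      from fastapi import Depends\n\nfrom prefect.server.utilities.server import PrefectRouter, response_scoped_dependency\n\n@response_scoped_dependency\nasync def get_session():\n    # make_session is an assumed factory; the session is committed and closed\n    # before the response is returned to the client\n    async with make_session() as session:\n        yield session\n\nrouter = PrefectRouter()\n\n@router.post(\"/things\")\nasync def create_thing(session=Depends(get_session)) -> dict:\n    # ... perform database work with the session ...\n    return {\"status\": \"created\"}\n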
      "},{"location":"cloud/","title":"Welcome to Prefect Cloud","text":"

      Prefect Cloud is a hosted workflow application framework that provides all the capabilities of Prefect server plus additional features, such as:

      • automations, events, and webhooks so you can create event-driven workflows
      • workspaces, RBAC, SSO, audit logs and related user management tools for collaboration
      • push work pools for running flows on serverless infrastructure without a worker
      • error summaries powered by Marvin AI to help you resolve errors faster

      Getting Started with Prefect Cloud

      Ready to jump right in and start running with Prefect Cloud? See the Quickstart and follow the instructions on the Cloud tabs to write and deploy your first Prefect Cloud-monitored flow run.

      Prefect Cloud includes all the features in the open-source Prefect server plus the following:

      Prefect Cloud features

      • User accounts \u2014 personal accounts for working in Prefect Cloud.
      • Workspaces \u2014 isolated environments to organize your flows, deployments, and flow runs.
      • Automations \u2014 configure triggers, actions, and notifications in response to real-time monitoring events.
      • Email notifications \u2014 send email alerts from Prefect's server based on automation triggers.
      • Service accounts \u2014 configure API access for running workers or executing flow runs on remote infrastructure.
      • Custom role-based access controls (RBAC) \u2014 assign users granular permissions to perform certain activities within an account or a workspace.
      • Single Sign-on (SSO) \u2014 authentication using your identity provider.
      • Audit Log \u2014 a record of user activities to monitor security and compliance.
      • Collaboration \u2014 invite other people to your account.
      • Error summaries \u2014 (enabled by Marvin AI) distill the error logs of Failed and Crashed flow runs into actionable information.
      • Push work pools \u2014 run flows on your serverless infrastructure without running a worker.
      ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#user-accounts","title":"User accounts","text":"

      When you sign up for Prefect Cloud, an account and a user profile are automatically provisioned for you.

      Your profile is the place where you'll manage settings related to yourself as a user, including:

      • Profile, including profile handle and image
      • API keys
      • Preferences, including timezone and color mode

      As an account Admin, you will also have access to account settings from the Account Settings page, such as:

      • Members
      • Workspaces
      • Roles

      As an account Admin, you can create a workspace and invite other individuals to your workspace.

      Upgrading from a Prefect Cloud Free tier plan to a Pro or Custom tier plan enables additional functionality for adding workspaces, managing teams, and running higher volume workloads.

      Workspace Admins for Pro tier plans have the ability to set role-based access controls (RBAC), view Audit Logs, and configure service accounts.

      Custom plans have object-level access control lists, custom roles, teams, and single sign-on (SSO) with Directory Sync/SCIM provisioning.

      Prefect Cloud plans for teams of every size

      See the Prefect Cloud plans for details on Pro and Custom account tiers.

      ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#workspaces","title":"Workspaces","text":"

      A workspace is an isolated environment within Prefect Cloud for your flows, deployments, and block configuration. See the Workspaces documentation for more information about configuring and using workspaces.

      Each workspace keeps track of its own:

      • Flow runs and task runs executed in an environment that is syncing with the workspace
      • Flows associated with flow runs and deployments observed by the Prefect Cloud API
      • Deployments
      • Work pools
      • Blocks and storage
      • Events
      • Automations
      • Incidents

      ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#events","title":"Events","text":"

      Prefect Cloud allows you to see your events. Events provide information about the state of your workflows, and can be used as automation triggers.

      ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#automations","title":"Automations","text":"

      Prefect Cloud automations provide additional notification capabilities beyond those in a self-hosted open-source Prefect server. Automations also enable you to create event-driven workflows, toggle resources such as schedules and work pools, and declare incidents.

      ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#incidents","title":"Incidents","text":"

      Prefect Cloud's incidents help teams identify, rectify, and document issues in mission-critical workflows. Incidents are formal declarations of disruptions to a workspace. With automations, activity in that workspace can be paused when an incident is created and resumed when it is resolved.

      ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#error-summaries","title":"Error summaries","text":"

      Prefect Cloud error summaries, enabled by Marvin AI, distill the error logs of Failed and Crashed flow runs into actionable information. To enable this feature and others powered by Marvin AI, visit the Settings page for your account.

      ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#service-accounts","title":"Service accounts","text":"

      Service accounts enable you to create Prefect Cloud API keys that are not associated with a user account. Service accounts are typically used to configure API access for running workers or executing flow runs on remote infrastructure. See the service accounts documentation for more information about creating and managing service accounts.

      ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#roles-and-custom-permissions","title":"Roles and custom permissions","text":"

      Role-based access controls (RBAC) enable you to assign users a role with permissions to perform certain activities within an account or a workspace. See the role-based access controls (RBAC) documentation for more information about managing user roles in a Prefect Cloud account.

      ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#single-sign-on-sso","title":"Single Sign-on (SSO)","text":"

      Prefect Cloud's Pro and Custom plans offer single sign-on (SSO) authentication integration with your team\u2019s identity provider. SSO integration can be set up with identity providers that support OIDC and SAML. Directory Sync and SCIM provisioning is also available with Custom plans.

      ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#audit-log","title":"Audit log","text":"

      Prefect Cloud's Pro and Custom plans offer Audit Logs for compliance and security. Audit logs provide a chronological record of activities performed by users in an account.

      ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#prefect-cloud-rest-api","title":"Prefect Cloud REST API","text":"

      The Prefect REST API is used for communicating data from Prefect clients to Prefect Cloud or a local Prefect server for orchestration and monitoring. This API is mainly consumed by Prefect clients like the Prefect Python Client or the Prefect UI.

      Prefect Cloud REST API interactive documentation

      Prefect Cloud REST API documentation is available at https://app.prefect.cloud/api/docs.

      ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#start-using-prefect-cloud","title":"Start using Prefect Cloud","text":"

      To create an account or sign in with an existing Prefect Cloud account, go to https://app.prefect.cloud/.

      Then follow the steps in the UI to deploy your first Prefect Cloud-monitored flow run. For more details, see the Prefect Quickstart and follow the instructions on the Cloud tabs.

      Need help?

      Get your questions answered by a Prefect Product Advocate! Book a Meeting

      ","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/connecting/","title":"Connecting & Troubleshooting Prefect Cloud","text":"

      To create flow runs in a local or remote execution environment and use either Prefect Cloud or a Prefect server as the backend API server, you need to

      • Configure the execution environment with the location of the API.
      • Authenticate with the API, either by logging in or providing a valid API key (Prefect Cloud only).
      ","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#log-into-prefect-cloud-from-a-terminal","title":"Log into Prefect Cloud from a terminal","text":"

      Configure a local execution environment to use Prefect Cloud as the API server for flow runs. In other words, \"log in\" to Prefect Cloud from a local environment where you want to run a flow.

      1. Open a new terminal session.
      2. Install Prefect in the environment in which you want to execute flow runs.
      $ pip install -U prefect\n
      3. Use the prefect cloud login Prefect CLI command to log into Prefect Cloud from your environment.
      $ prefect cloud login\n

      The prefect cloud login command, used on its own, provides an interactive login experience. Using this command, you can log in with either an API key or through a browser.

      $ prefect cloud login\n? How would you like to authenticate? [Use arrows to move; enter to select]\n> Log in with a web browser\n    Paste an API key\nPaste your authentication key:\n? Which workspace would you like to use? [Use arrows to move; enter to select]\n> prefect/terry-prefect-workspace\n    g-gadflow/g-workspace\nAuthenticated with Prefect Cloud! Using workspace 'prefect/terry-prefect-workspace'.\n

      You can also log in by providing a Prefect Cloud API key that you create.

      ","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#change-workspaces","title":"Change workspaces","text":"

      If you need to change which workspace you're syncing with, use the prefect cloud workspace set Prefect CLI command while logged in, passing the account handle and workspace name.

      $ prefect cloud workspace set --workspace \"prefect/my-workspace\"\n

      If no workspace is provided, you will be prompted to select one.

      Workspace Settings also shows you the prefect cloud workspace set Prefect CLI command you can use to sync a local execution environment with a given workspace.

      You may also use the prefect cloud login command with the --workspace or -w option to set the current workspace.

      $ prefect cloud login --workspace \"prefect/my-workspace\"\n
      ","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#manually-configure-prefect-api-settings","title":"Manually configure Prefect API settings","text":"

      You can also manually configure the PREFECT_API_URL setting to specify the Prefect Cloud API.

      For Prefect Cloud, you can configure the PREFECT_API_URL and PREFECT_API_KEY settings to authenticate with Prefect Cloud by using an account ID, workspace ID, and API key.

      $ prefect config set PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"\n$ prefect config set PREFECT_API_KEY=\"[API-KEY]\"\n

      When you're in a Prefect Cloud workspace, you can copy the PREFECT_API_URL value directly from the page URL.

      In this example, we configured PREFECT_API_URL and PREFECT_API_KEY in the default profile. You can use prefect profile CLI commands to create settings profiles for different configurations. For example, you could have a \"cloud\" profile configured to use the Prefect Cloud API URL and API key, and another \"local\" profile for local development using a local Prefect API server started with prefect server start. See Settings for details.

      Environment variables

      You can also set PREFECT_API_URL and PREFECT_API_KEY as you would any other environment variable. See Overriding defaults with environment variables for more information.

      See the Flow orchestration with Prefect tutorial for examples.

      ","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#install-requirements-in-execution-environments","title":"Install requirements in execution environments","text":"

      In local and remote execution environments \u2014 such as VMs and containers \u2014 you must make sure any flow requirements or dependencies have been installed before creating a flow run.

      ","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#troubleshooting-prefect-cloud","title":"Troubleshooting Prefect Cloud","text":"

      This section provides tips that may be helpful if you run into problems using Prefect Cloud.

      ","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#prefect-cloud-and-proxies","title":"Prefect Cloud and proxies","text":"

      Proxies intermediate network requests between a server and a client.

      To communicate with Prefect Cloud, the Prefect client library makes HTTPS requests. These requests are made using the httpx Python library. httpx respects accepted proxy environment variables, so the Prefect client is able to communicate through proxies.

      To enable communication through proxies, set the HTTPS_PROXY and SSL_CERT_FILE environment variables as appropriate in your execution environment.

      See the Using Prefect Cloud with proxies topic in Prefect Discourse for examples of proxy configuration.

      URLs that should be whitelisted for outbound communication in a secure environment include the UI, the API, Authentication, and the current OCSP server:

      • app.prefect.cloud
      • api.prefect.cloud
      • auth.workos.com
      • api.github.com
      • github.com
      • ocsp.pki.goog/s/gts1d4/OxYEb8XcYmo
      ","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#prefect-cloud-access-via-api","title":"Prefect Cloud access via API","text":"

      If the Prefect Cloud API key, environment variable settings, or account login for your execution environment are not configured correctly, you may experience errors or unexpected flow run results when using Prefect CLI commands, running flows, or observing flow run results in Prefect Cloud.

      Use the prefect config view CLI command to make sure your execution environment is correctly configured to access Prefect Cloud.

      $ prefect config view\nPREFECT_PROFILE='cloud'\nPREFECT_API_KEY='pnu_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' (from profile)\nPREFECT_API_URL='https://api.prefect.cloud/api/accounts/...' (from profile)\n

      Make sure PREFECT_API_URL is configured to use https://api.prefect.cloud/api/....

      Make sure PREFECT_API_KEY is configured to use a valid API key.

      You can use the prefect cloud workspace ls CLI command to view or set the active workspace.

      $ prefect cloud workspace ls\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503   Available Workspaces: \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502   g-gadflow/g-workspace \u2502\n\u2502    * prefect/workinonit \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n    * active workspace\n

      You can also check that the account and workspace IDs specified in the URL for PREFECT_API_URL match those shown in the URL bar for your Prefect Cloud workspace.

      ","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#prefect-cloud-login-errors","title":"Prefect Cloud login errors","text":"

      If you're having difficulty logging in to Prefect Cloud, the following troubleshooting steps may resolve the issue, or will provide more information to share with the support channel.

      • Are you logging into Prefect Cloud 2? Prefect Cloud 1 and Prefect Cloud 2 use separate accounts. Make sure to use the right Prefect Cloud 2 URL: https://app.prefect.cloud/
      • Do you already have a Prefect Cloud account? If you\u2019re having difficulty accepting an invitation, try creating an account first using the email associated with the invitation, then accept the invitation.
      • Are you using a single sign-on (SSO) provider, social authentication (Google, Microsoft, or GitHub), or an emailed link?

      Other tips to help with login difficulties:

      • Hard refresh your browser with Cmd+Shift+R.
      • Try in a different browser. We actively test against the following browsers:
      • Chrome
      • Edge
      • Firefox
      • Safari
      • Clear recent browser history/cookies

      None of this worked?

      Email us at help@prefect.io and provide answers to the questions above in your email to make it faster to troubleshoot and unblock you. Make sure you add the email address with which you were trying to log in, your Prefect Cloud account name, and, if applicable, the organization to which it belongs.

      ","tags":["Prefect Cloud","API keys","configuration","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/events/","title":"Events","text":"

      An event is a notification of a change. Together, events form a feed of activity recording what's happening across your stack.

      Events power several features in Prefect Cloud, including flow run logs, audit logs, and automations.

      Events can represent API calls, state transitions, or changes in your execution environment or infrastructure.

      Events enable observability into your data stack through the event feed, and they provide the basis for configuring Prefect's reactivity through automations.

      ","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#event-specification","title":"Event specification","text":"

      Events adhere to a structured specification.

      • occurred (String, required): When the event happened
      • event (String, required): The name of the event that happened
      • resource (Object, required): The primary Resource this event concerns
      • related (Array, optional): A list of additional Resources involved in this event
      • payload (Object, optional): An open-ended set of data describing what happened
      • id (String, required): The client-provided identifier of this event
      • follows (String, optional): The ID of an event that is known to have occurred prior to this one.
      ","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#event-grammar","title":"Event grammar","text":"

      Generally, events have a consistent and informative grammar - an event describes a resource and an action that the resource took or that was taken on that resource. For example, events emitted by Prefect objects take the form of:

      prefect.block.write-method.called\nprefect-cloud.automation.action.executed\nprefect-cloud.user.logged-in\n
      ","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#event-sources","title":"Event sources","text":"

      Events are automatically emitted by all Prefect objects, including flows, tasks, deployments, work queues, and logs. Prefect-emitted events will contain the prefect or prefect-cloud resource prefix. Events can also be sent to the Prefect events API through an authenticated HTTP request.

      ","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#emit-custom-events-from-python-code","title":"Emit custom events from Python code","text":"

      The Prefect Python SDK provides an emit_event function that emits a Prefect event when called. The function can be called inside or outside of a task or flow. Running the following code will emit an event to Prefect Cloud, which will validate and ingest the event data.

      from prefect.events import emit_event\n\ndef some_function(name: str=\"kiki\") -> None:\n    print(f\"hi {name}!\")\n    emit_event(event=f\"{name}.sent.event!\", resource={\"prefect.resource.id\": f\"coder.{name}\"})\n\nsome_function()\n

      Note that the emit_event arguments shown above are required: event represents the name of the event and resource={\"prefect.resource.id\": \"my_string\"} is the resource id. To get data into an event for use in an automation action, you can specify a dictionary of values for the payload parameter.
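
      For example (the event name, resource id, and payload values below are hypothetical), a payload dictionary attaches extra data that automations can later reference:

      from prefect.events import emit_event\n\nemit_event(\n    event=\"order.processed\",\n    resource={\"prefect.resource.id\": \"acme.orders.12345\"},\n    payload={\"total\": 19.99, \"currency\": \"USD\"},\n)\n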

      ","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#emit-events-via-webhooks","title":"Emit events via webhooks","text":"

      Prefect Cloud offers programmable webhooks to receive HTTP requests from other systems and translate them into events within your workspace. Webhooks can emit pre-defined static events, dynamic events that use portions of the incoming HTTP request, or events derived from CloudEvents.

      Events emitted from any source will appear in the event feed, where you can visualize activity in context and configure automations to react to the presence or absence of it in the future.

      ","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#resources","title":"Resources","text":"

      Every event has a primary resource, which describes the object that emitted an event. Resources are used as quasi-stable identifiers for sources of events, and are constructed as dot-delimited strings, for example:

      prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3.action.0\nacme.user.kiki.elt_script_1\nprefect.flow-run.e3755d32-cec5-42ca-9bcd-af236e308ba6\n

      Resources can optionally have additional arbitrary labels which can be used in event aggregation queries, such as:

      \"resource\": {\n    \"prefect.resource.id\": \"prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3\",\n    \"prefect-cloud.action.type\": \"call-webhook\"\n    }\n

      Events can optionally contain related resources, used to associate the event with other resources, such as in the case that the primary resource acted on or with another resource:

      \"resource\": {\n    \"prefect.resource.id\": \"prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3.action.0\",\n    \"prefect-cloud.action.type\": \"call-webhook\"\n  },\n\"related\": [\n  {\n      \"prefect.resource.id\": \"prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3\",\n      \"prefect.resource.role\": \"automation\",\n      \"prefect-cloud.name\": \"webhook_body_demo\",\n      \"prefect-cloud.posture\": \"Reactive\"\n  }\n]\n
      ","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#events-in-the-cloud-ui","title":"Events in the Cloud UI","text":"

      Prefect Cloud provides an interactive dashboard, the event feed page, where you can analyze and take action on events that occurred in your workspace.

      The event feed is the primary place to view, search, and filter events to understand activity across your stack. Each entry displays data on the resource, related resource, and event that took place.

      You can view more information about an event by clicking into it, where you can view the full details of an event's resource, related resources, and its payload.

      ","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#reacting-to-events","title":"Reacting to events","text":"

      From an event page, you can configure an automation to trigger on the observation of matching events or a lack of matching events by clicking the automate button in the overflow menu:

      The default trigger configuration will fire every time it sees an event with a matching resource identifier. Advanced configuration is possible via custom triggers.

      ","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/incidents/","title":"Incidents","text":"","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#overview","title":"Overview","text":"

      Incidents are a Prefect Cloud feature to help your team manage workflow disruptions. Incidents help you identify, resolve, and document issues with mission-critical workflows. This system enhances operational efficiency by automating the incident management process and providing a centralized platform for collaboration and compliance.

      ","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#what-are-incidents","title":"What are incidents?","text":"

      Incidents are formal declarations of disruptions to a workspace. With automations, activity in a workspace can be paused when an incident is created and resumed when it is resolved.

      Incidents vary in nature and severity, ranging from minor glitches to critical system failures. Prefect Cloud enables users to effectively and automatically track and manage these incidents, ensuring minimal impact on operational continuity.

      ","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#why-use-incident-management","title":"Why use incident management?","text":"
      1. Automated detection and reporting: Incidents can be automatically identified based on specific triggers or manually reported by team members, facilitating prompt response.

      2. Collaborative problem-solving: The platform fosters collaboration, allowing team members to share insights, discuss resolutions, and track contributions.

      3. Comprehensive impact assessment: Users gain insights into the incident's influence on workflows, helping in prioritizing response efforts.

      4. Compliance with incident management processes: Detailed documentation and reporting features support compliance with incident management systems.

      5. Enhanced operational transparency: The system provides a transparent view of both ongoing and resolved incidents, promoting accountability and continuous improvement.

      ","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#how-to-use-incident-management-in-prefect-cloud","title":"How to use incident management in Prefect Cloud","text":"","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#creating-an-incident","title":"Creating an incident","text":"

      There are several ways to create an incident:

      1. From the Incidents page:

        • Click on the + button.
        • Fill in required fields and attach any Prefect resources related to your incident.
      2. From a flow run, work pool, or block:

        • Initiate an incident directly from a failed flow run, automatically linking it as a resource, by clicking on the menu button and selecting \"Declare an incident\".
      3. Via an automation:

        • Set up incident creation as an automated response to selected triggers.
      ","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#incident-automations","title":"Incident automations","text":"

      Automations can be used for triggering an incident and for selecting actions to take when an incident is triggered. For example, a work pool status change could trigger the declaration of an incident, or a critical level incident could trigger a notification action.

      To automatically take action when an incident is declared, set up a custom trigger that listens for declaration events.

      {\n  \"match\": {\n    \"prefect.resource.id\": \"prefect-cloud.incident.*\"\n  },\n  \"expect\": [\n    \"prefect-cloud.incident.declared\"\n  ],\n  \"posture\": \"Reactive\",\n  \"threshold\": 1,\n  \"within\": 0\n}\n

      Building custom triggers

      To get started with incident automations, you only need to specify two fields in your trigger:

      • match: The resource emitting your event of interest. You can match on specific resource IDs, use wildcards to match on all resources of a given type, and even match on other resource attributes, like prefect.resource.name.

      • expect: The event type to listen for. For example, you could listen for any (or all) of the following event types:

        • prefect-cloud.incident.declared
        • prefect-cloud.incident.resolved
        • prefect-cloud.incident.updated.severity

      See Event Triggers for more information on custom triggers, and check out your Event Feed to see the event types emitted by your incidents and other resources (i.e. events that you can react to).

      When an incident is declared, any actions you configure, such as pausing work pools or sending notifications, will execute immediately.

      ","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#managing-an-incident","title":"Managing an incident","text":"
      • Monitor active incidents: View real-time status, severity, and impact.
      • Adjust incident details: Update status, severity, and other relevant information.
      • Collaborate: Add comments and insights; these will display with user identifiers and timestamps.
      • Impact assessment: Evaluate how the incident affects ongoing and future workflows.
      ","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#resolving-and-documenting-incidents","title":"Resolving and documenting incidents","text":"
      • Resolution: Update the incident status to reflect resolution steps taken.
      • Documentation: Ensure all actions, comments, and changes are logged for future reference.
      ","tags":["incidents"],"boost":2},{"location":"cloud/incidents/#incident-reporting","title":"Incident reporting","text":"
      • Generate a detailed timeline of the incident, including actions taken and updates to severity and resolution, suitable for compliance and retrospective analysis.
      ","tags":["incidents"],"boost":2},{"location":"cloud/rate-limits/","title":"API Rate Limits & Retention Periods","text":"

      API rate limits restrict the number of requests that a single client can make in a given time period. They ensure Prefect Cloud's stability, so that when you make an API call, you always get a response.

      Prefect Cloud rate limits are subject to change

      The following rate limits are in effect currently, but are subject to change. Contact Prefect support at help@prefect.io if you have questions about current rate limits.

      Prefect Cloud enforces the following rate limits:

      • Flow and task creation rate limits
      • Log service rate limits
      ","tags":["API","Prefect Cloud","rate limits"],"boost":2},{"location":"cloud/rate-limits/#flow-flow-run-and-task-run-rate-limits","title":"Flow, flow run, and task run rate limits","text":"

      Prefect Cloud limits the flow_runs, task_runs, and flows endpoints and their subroutes at the following levels:

      • 400 per minute for personal accounts
      • 2,000 per minute for Pro accounts

      The Prefect Cloud API will return a 429 response with an appropriate Retry-After header if these limits are triggered.
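
      A client can respect these limits by honoring the Retry-After header before retrying. The following is a minimal sketch using the requests library; the account, workspace, and API key values are placeholders, and this is not an official Prefect client:

      import time\n\nimport requests\n\n# Placeholder values; substitute your own account, workspace, and API key.\nPREFECT_API_URL = \"https://api.prefect.cloud/api/accounts/abc/workspaces/xyz\"\nPREFECT_API_KEY = \"pnu_ghijk\"\nheaders = {\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"}\n\nresponse = requests.post(f\"{PREFECT_API_URL}/flow_runs/filter\", headers=headers, json={\"limit\": 5})\nif response.status_code == 429:\n    # Wait for the period the server suggests before retrying once.\n    time.sleep(int(response.headers.get(\"Retry-After\", \"60\")))\n    response = requests.post(f\"{PREFECT_API_URL}/flow_runs/filter\", headers=headers, json={\"limit\": 5})\nresponse.raise_for_status()\n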

      ","tags":["API","Prefect Cloud","rate limits"],"boost":2},{"location":"cloud/rate-limits/#log-service-rate-limits","title":"Log service rate limits","text":"

      Prefect Cloud limits the number of logs accepted:

      • 700 logs per minute for personal accounts
      • 10,000 logs per minute for Pro accounts

      The Prefect Cloud API will return a 429 response if these limits are triggered.

      ","tags":["API","Prefect Cloud","rate limits"],"boost":2},{"location":"cloud/rate-limits/#flow-run-retention","title":"Flow run retention","text":"

      Prefect Cloud feature

      The Flow Run Retention Policy setting is only applicable in Prefect Cloud.

      Flow runs in Prefect Cloud are retained according to the Flow Run Retention Policy set by your account tier. The policy setting applies to all workspaces owned by the account.

      The flow run retention policy represents the number of days each flow run is available in the Prefect Cloud UI, and via the Prefect CLI and API after it ends. Once a flow run reaches a terminal state (detailed in the chart here), it will be retained until the end of the flow run retention period.

      Flow Run Retention Policy keys on terminal state

      Note that because the Flow Run Retention Policy keys on terminal state, two flow runs that start at the same time but reach terminal states at different times will be removed at different times, each according to when it reached its terminal state.

      This retention policy applies to all details about a flow run, including its task runs. Subflow runs follow the retention policy independently from their parent flow runs, and are removed based on the time each subflow run reaches a terminal state.

      If you or your organization have needs that require a tailored retention period, contact the Prefect Sales team.

      ","tags":["API","Prefect Cloud","rate limits"],"boost":2},{"location":"cloud/workspaces/","title":"Workspaces","text":"

      A workspace is a discrete environment within Prefect Cloud for your workflows and blocks. Workspaces are available to Prefect Cloud accounts only.

      Workspaces can be used to organize and compartmentalize your workflows. For example, you can use separate workspaces to isolate dev, staging, and prod environments, or to provide separation between different teams.

      When you first log into Prefect Cloud, you will be prompted to create your own initial workspace. After creating your workspace, you'll be able to view flow runs, flows, deployments, and other workspace-specific features in the Prefect Cloud UI.

      Select a workspace name in the navigation menu to see all workspaces you can access.

      Your list of available workspaces may include:

      • Your own account's workspace.
      • Workspaces in an account to which you've been invited and have been given access as an Admin or Member.

      Workspace-specific features

      Each workspace keeps track of its own:

      • Flow runs and task runs executed in an environment that is syncing with the workspace
      • Flows associated with flow runs or deployments observed by the Prefect Cloud API
      • Deployments
      • Work pools
      • Blocks and Storage
      • Automations

      Your user permissions within workspaces may vary. Account admins can assign roles and permissions at the workspace level.

      ","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#create-a-workspace","title":"Create a workspace","text":"

      On the Account Workspaces dropdown or the Workspaces page, select the + icon to create a new workspace.

      You'll be prompted to configure:

      • The Workspace Owner, selected from the dropdown account menu options.
      • The Workspace Name, which must be unique within the account.
      • An optional description for the workspace.

      Select Create to create the new workspace. The number of available workspaces varies by Prefect Cloud plan. See Pricing if you need additional workspaces or users.

      ","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#workspace-settings","title":"Workspace settings","text":"

      Within a workspace, select Settings -> General to view or edit workspace details.

      On this page you can edit workspace details or delete the workspace.

      Deleting a workspace

      Deleting a workspace deletes all deployments, flow run history, work pools, and notifications configured in the workspace.

      ","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#workspace-access","title":"Workspace access","text":"

      Within a Prefect Cloud Pro or Custom tier account, Workspace Owners can invite other people to be members and provision service accounts to a workspace. In addition to giving the user access to the workspace, a Workspace Owner assigns a workspace role to the user. The role specifies the scope of permissions for the user within the workspace.

      As a Workspace Owner, select Workspaces -> Sharing to manage members and service accounts for the workspace.

      If you've previously invited individuals to your account or provisioned service accounts, you'll see them listed here.

      To invite someone to an account, select the Members + icon. You can select from a list of existing account members.

      Select a Role for the user. This will be the initial role for the user within the workspace. A workspace Owner can change this role at any time.

      Select Send to initiate the invitation.

      To add a service account to a workspace, select the Service Accounts + icon. You can select from a list of configured service accounts. Select a Workspace Role for the service account. This will be the initial role for the service account within the workspace. A workspace Owner can change this role at any time. Select Share to finalize adding the service account.

      To remove a workspace member or service account, select Remove from the menu on the right side of the user or service account information on this page.

      ","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#workspace-transfer","title":"Workspace transfer","text":"

      Workspace transfer enables you to move an existing workspace from one account to another.

      Workspace transfer retains existing workspace configuration and flow run history, including blocks, deployments, notifications, work pools, and logs.

      Workspace transfer permissions

      Workspace transfer must be initiated or approved by a user with admin privileges for the workspace to be transferred.

      To initiate a workspace transfer between personal accounts, contact support@prefect.io.

      ","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#transfer-a-workspace","title":"Transfer a workspace","text":"

      To transfer a workspace, select Settings -> General within the workspace. Then, from the three dot menu in the upper right of the page, select Transfer.

      The Transfer Workspace page shows the workspace to be transferred on the left. Select the target account for the workspace on the right.

      Workspace transfer impact on accounts

      Workspace transfer may impact resource usage and costs for source and target accounts.

      When you transfer a workspace, users, API keys, and service accounts may lose access to the workspace. The audit log will no longer track activity in the workspace. Flow runs ending outside of the destination account\u2019s flow run retention period will be removed. You may also need to update Prefect CLI profiles and execution environment settings to access the workspace's new location.

      You may also incur new charges in the target account to accommodate the transferred workspace.

      The Transfer Workspace page outlines the impacts of transferring the selected workspace to the selected target. Please review these notes carefully before selecting Transfer to transfer the workspace.

      ","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/users/","title":"User accounts","text":"

      Sign up for a Prefect Cloud account at app.prefect.cloud.

      An individual user can be invited to become a member of other accounts.

      ","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#user-settings","title":"User settings","text":"

      Users can access their personal settings in the profile menu, including:

      • Profile: View and edit basic information, such as your name.
      • API keys: Create and view API keys for connecting to Prefect Cloud from the CLI or other environments.
      • Preferences: Manage settings, such as color mode and default time zone.
      • Feature previews: Enable or disable feature previews.
      ","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#account-roles","title":"Account roles","text":"

      Users who are part of an account can hold the role of Admin or Member. Admins can invite other users to join the account and manage the account's workspaces and teams.

      Admins on Pro and Custom tier Prefect Cloud accounts can grant members of the account roles in a workspace, such as Runner or Viewer. Custom roles are available on Custom tier accounts.

      ","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#api-keys","title":"API keys","text":"

      API keys enable you to authenticate an environment to work with Prefect Cloud.

      ","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#service-accounts","title":"Service accounts","text":"

      Service accounts enable you to create a Prefect Cloud API key that is not associated with a user account.

      ","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#single-sign-on-sso","title":"Single sign-on (SSO)","text":"

      Custom tier plans offer single sign-on (SSO) integration with your team\u2019s identity provider, including options for directory sync and SCIM provisioning.

      ","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#audit-log","title":"Audit log","text":"

      Audit logs provide a chronological record of activities performed by Prefect Cloud users who are members of an account.

      ","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#object-level-access-control-lists-acls","title":"Object-level access control lists (ACLs)","text":"

      Prefect Cloud's Custom plan offers object-level access control lists to restrict access to specific users and service accounts within a workspace.

      ","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#teams","title":"Teams","text":"

      Users of Custom tier Prefect Cloud accounts can be added to Teams to simplify access control governance.

      ","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/api-keys/","title":"Manage Prefect Cloud API Keys","text":"

      API keys enable you to authenticate a local environment to work with Prefect Cloud.

      If you run prefect cloud login from your CLI, you'll have the choice to authenticate through your browser or by pasting an API key.

      If you choose to authenticate through your browser, you'll be directed to an authorization page. After you grant approval to connect, you'll be redirected to the CLI and the API key will be saved to your local Prefect profile.

      If you choose to authenticate by pasting an API key, you'll need to create an API key in the Prefect Cloud UI first.

      ","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/api-keys/#create-an-api-key","title":"Create an API key","text":"

      To create an API key, select the account icon at the bottom-left corner of the UI.

      Select API Keys. The page displays a list of previously generated keys and lets you create new API keys or delete keys.

      Select the + button to create a new API key. Provide a name for the key and an expiration date.

      Warning

      Note that an API key cannot be revealed again in the UI after it is generated, so copy the key to a secure location.

      ","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/api-keys/#log-into-prefect-cloud-with-an-api-key","title":"Log into Prefect Cloud with an API Key","text":"
      prefect cloud login -k '<my-api-key>'\n
      ","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/api-keys/#service-account-api-keys","title":"Service account API keys","text":"

      Service accounts are a feature of Prefect Cloud Pro and Custom tier plans that enable you to create a Prefect Cloud API key that is not associated with a user account.

      Service accounts are typically used to configure API access for running workers or executing flow runs on remote infrastructure. Events and logs for flow runs in those environments are then associated with the service account rather than a user, and API access may be managed or revoked by configuring or removing the service account without disrupting user access.

      See the service accounts documentation for more information about creating and managing service accounts in Prefect Cloud.

      ","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/audit-log/","title":"Audit Log","text":"

      Prefect Cloud's Pro and Custom plans offer enhanced compliance and transparency tools with Audit Log. Audit logs provide a chronological record of activities performed by members in your account, allowing you to monitor detailed Prefect Cloud actions for security and compliance purposes.

      Audit logs enable you to identify who took what action, when, and using what resources within your Prefect Cloud account. In conjunction with appropriate tools and procedures, audit logs can assist in detecting potential security violations and investigating application errors.

      Audit logs can be used to identify changes in:

      • Access to workspaces
      • User login activity
      • User API key creation and removal
      • Workspace creation and removal
      • Account member invitations and removal
      • Service account creation, API key rotation, and removal
      • Billing payment method for self-serve pricing tiers

      See the Prefect Cloud plan information to learn more about options for supporting audit logs.

      ","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","audit logs","compliance"],"boost":2},{"location":"cloud/users/audit-log/#viewing-audit-logs","title":"Viewing audit logs","text":"

      From your Pro or Custom account settings page, select the Audit Log page to view audit logs.

      Pro and Custom account tier admins can view audit logs for:

      • Account-level events in Prefect Cloud, such as:
      • Member invites
      • Changing a member\u2019s role
      • Member login and logout of Prefect Cloud
      • Creating or deleting a service account
      • Workspace-level events in Prefect Cloud, such as:
      • Adding a member to a workspace
      • Changing a member\u2019s workspace role
      • Creating or deleting a workspace

      Admins can filter audit logs on multiple dimensions to restrict the results they see by workspace, user, or event type. Available audit log events are displayed in the Events drop-down menu.

      Audit logs may also be filtered by date range. Audit log retention period varies by Prefect Cloud plan.

      ","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","audit logs","compliance"],"boost":2},{"location":"cloud/users/object-access-control-lists/","title":"Object Access Control Lists","text":"

      Prefect Cloud's Custom plan offers object-level access control lists to restrict access to specific users and service accounts within a workspace. ACLs are supported for blocks and deployments.

      Organization Admins and Workspace Owners can configure access control lists by navigating to an object and clicking manage access. When an ACL is added, all users and service accounts with access to an object via their workspace role will lose access if not explicitly added to the ACL.

      ACLs and visibility

      Objects not governed by access control lists, such as flow runs, flows, and artifacts, will be visible to a user within a workspace even if an associated block or deployment has been restricted for that user.

      See the Prefect Cloud plans to learn more about options for supporting object-level access control.

      ","tags":["UI","Permissions","Access","Prefect Cloud","enterprise","teams","workspaces","organizations","audit logs","compliance"],"boost":2},{"location":"cloud/users/roles/","title":"User and Service Account Roles","text":"

      Prefect Cloud's Pro and Custom tiers allow you to set team member access to the appropriate level within specific workspaces.

      Role-based access controls (RBAC) enable you to assign users granular permissions to perform certain activities.

      To give users access to functionality beyond the scope of Prefect\u2019s built-in workspace roles, Custom account Admins can create custom roles for users.

      ","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#built-in-roles","title":"Built-in roles","text":"

      Roles give users abilities at either the account level or at the individual workspace level.

      • An account-level role defines a user's default permissions within an account.
      • A workspace-level role defines a user's permissions within a specific workspace.

      The following sections outline the abilities of the built-in, Prefect-defined account and workspace roles.

      ","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#account-level-roles","title":"Account-level roles","text":"

      The following built-in roles have permissions across an account in Prefect Cloud.

      Role Abilities Owner \u2022 Set/change all account profile settings allowed to be set/changed by a Prefect user. \u2022 Add and remove account members, and their account roles. \u2022 Create and delete service accounts in the account. \u2022 Create workspaces in the account. \u2022 Implicit workspace owner access on all workspaces in the account. \u2022 Bypass SSO. Admin \u2022 Set/change all account profile settings allowed to be set/changed by a Prefect user. \u2022 Add and remove account members, and their account roles. \u2022 Create and delete service accounts in the account. \u2022 Create workspaces in the account. \u2022 Implicit workspace owner access on all workspaces in the account. \u2022 Cannot bypass SSO. Member \u2022 View account profile settings. \u2022 View workspaces I have access to in the account. \u2022 View account members and their roles. \u2022 View service accounts in the account.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#workspace-level-roles","title":"Workspace-level roles","text":"

      The following built-in roles have permissions within a given workspace in Prefect Cloud.

      Role Abilities Viewer \u2022 View flow runs within a workspace. \u2022 View deployments within a workspace. \u2022 View all work pools within a workspace. \u2022 View all blocks within a workspace. \u2022 View all automations within a workspace. \u2022 View workspace handle and description. Runner All Viewer abilities, plus: \u2022 Run deployments within a workspace. Developer All Runner abilities, plus: \u2022 Run flows within a workspace. \u2022 Delete flow runs within a workspace. \u2022 Create, edit, and delete deployments within a workspace. \u2022 Create, edit, and delete work pools within a workspace. \u2022 Create, edit, and delete all blocks and their secrets within a workspace. \u2022 Create, edit, and delete automations within a workspace. \u2022 View all workspace settings. Owner All Developer abilities, plus: \u2022 Add and remove account members, and set their role within a workspace. \u2022 Set the workspace\u2019s default workspace role for all users in the account. \u2022 Set, view, edit workspace settings. Worker The minimum scopes required for a worker to poll for and submit work.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#custom-workspace-roles","title":"Custom workspace roles","text":"

      The built-in roles will serve the needs of most users, but your team may need to configure custom roles, giving users access to specific permissions within a workspace.

      Custom roles can inherit permissions from a built-in role. This enables tweaks to the role to meet your team\u2019s needs, while ensuring users can still benefit from Prefect\u2019s default workspace role permission curation as new functionality becomes available.

      Custom workspace roles can also be created independent of Prefect\u2019s built-in roles. This option gives workspace admins full control of user access to workspace functionality. However, for non-inherited custom roles, the workspace admin takes on the responsibility for monitoring and setting permissions for new functionality as it is released.

      See Role permissions for details of permissions you may set for custom roles.

      After you create a new role, it becomes available on the account Members page and the Workspace Sharing page for you to apply to users.

      ","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#inherited-roles","title":"Inherited roles","text":"

      A custom role may be configured as an Inherited Role. Using an inherited role allows you to create a custom role using a set of initial permissions associated with a built-in Prefect role. Additional permissions can be added to the custom role. Permissions included in the inherited role cannot be removed.

      Custom roles created using an inherited role will follow Prefect's default workspace role permission curation as new functionality becomes available.

      To configure an inherited role when configuring a custom role, select the Inherit permission from a default role check box, then select the role from which the new role should inherit permissions.

      ","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#workspace-role-permissions","title":"Workspace role permissions","text":"

      The following permissions are available for custom roles.

      ","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#automations","title":"Automations","text":"Permission Description View automations User can see configured automations within a workspace. Create, edit, and delete automations User can create, edit, and delete automations within a workspace. Includes permissions of View automations.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#blocks","title":"Blocks","text":"Permission Description View blocks User can see configured blocks within a workspace. View secret block data User can see configured blocks and their secrets within a workspace. Includes permissions of\u00a0View blocks. Create, edit, and delete blocks User can create, edit, and delete blocks within a workspace. Includes permissions of View blocks and View secret block data.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#deployments","title":"Deployments","text":"Permission Description View deployments User can see configured deployments within a workspace. Run deployments User can run deployments within a workspace. This does not give a user permission to execute the flow associated with the deployment. This only gives a user (via their key) the ability to run a deployment \u2014 another user/key must actually execute that flow, such as a service account with an appropriate role. Includes permissions of View deployments. Create and edit deployments User can create and edit deployments within a workspace. Includes permissions of View deployments and Run deployments. Delete deployments User can delete deployments within a workspace. Includes permissions of View deployments, Run deployments, and Create and edit deployments.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#flows","title":"Flows","text":"Permission Description View flows and flow runs User can see flows and flow runs within a workspace. Create, update, and delete saved search filters User can create, update, and delete saved flow run search filters configured within a workspace. Includes permissions of View flows and flow runs. Create, update, and run flows User can create, update, and run flows within a workspace. Includes permissions of View flows and flow runs. Delete flows User can delete flows within a workspace. Includes permissions of View flows and flow runs and Create, update, and run flows.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#notifications","title":"Notifications","text":"Permission Description View notification policies User can see notification policies configured within a workspace. Create and edit notification policies User can create and edit notification policies configured within a workspace. Includes permissions of View notification policies. Delete notification policies User can delete notification policies configured within a workspace. 
Includes permissions of View notification policies and Create and edit notification policies.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#task-run-concurrency","title":"Task run concurrency","text":"Permission Description View concurrency limits User can see configured task run concurrency limits within a workspace. Create, edit, and delete concurrency limits User can create, edit, and delete task run concurrency limits within a workspace. Includes permissions of View concurrency limits.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#work-pools","title":"Work pools","text":"Permission Description View work pools User can see work pools configured within a workspace. Create, edit, and pause work pools User can create, edit, and pause work pools configured within a workspace. Includes permissions of View work pools. Delete work pools User can delete work pools configured within a workspace. Includes permissions of View work pools and Create, edit, and pause work pools.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#workspace-management","title":"Workspace management","text":"Permission Description View information about workspace service accounts User can see service accounts configured within a workspace. View information about workspace users User can see user accounts for users invited to the workspace. View workspace settings User can see settings configured within a workspace. Edit workspace settings User can edit settings for a workspace. Includes permissions of View workspace settings. Delete the workspace User can delete a workspace. Includes permissions of View workspace settings and Edit workspace settings.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/service-accounts/","title":"Service Accounts","text":"

      Service accounts enable you to create a Prefect Cloud API key that is not associated with a user account. Service accounts are typically used to configure API access for running workers or executing deployment flow runs on remote infrastructure.

      Service accounts are non-user accounts that have the following features:

      • Prefect Cloud API keys
      • Roles and permissions

      Using service account credentials, you can configure an execution environment to interact with your Prefect Cloud workspaces without a user having to manually log in from that environment. Service accounts may be created, added to workspaces, have their roles changed, or deleted without affecting other user accounts.

      Select Service Accounts to view, create, or edit service accounts.

      Service accounts are created at the account level, but individual workspaces may be shared with the service account. See workspace sharing for more information.

      Service account credentials

      When you create a service account, Prefect Cloud creates a new API key for the account and provides the API configuration command for the execution environment. Save these to a safe location for future use. If the access credentials are lost or compromised, you should regenerate the credentials from the service account page.

      Service account roles

      Service accounts are created at the account level, and can then be added to workspaces within the account. You may apply any valid workspace-level role to a service account.

      ","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/users/service-accounts/#create-a-service-account","title":"Create a service account","text":"

      Within your account, on the Service Accounts page, select the + icon to create a new service account. You'll be prompted to configure:

      • The service account name. This name must be unique within your account.
      • An expiration date, or the Never Expire option.

      Service account roles

      A service account may only be a Member of an account. You may apply any valid workspace-level role to a service account when it is added to a workspace.

      Select Create to create the new service account.

      Warning

      Note that an API key cannot be revealed again in the UI after it is generated, so copy the key to a secure location.

      You can change the API key and expiration for a service account by rotating the API key. Select Rotate API Key from the menu on the left side of the service account's information on this page. Optionally, you can set a period of time for your old service account key to remain active.

      To delete a service account, select Remove from the menu on the left side of the service account's information.

      ","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/users/sso/","title":"Single Sign-on (SSO)","text":"

      Prefect Cloud's Custom plans offer single sign-on (SSO) integration with your team\u2019s identity provider. SSO integration can be set up with any identity provider that supports:

      • OIDC
      • SAML 2.0

      When using SSO, Prefect Cloud won't store passwords for any accounts managed by your identity provider. Members of your Prefect Cloud account will instead log in and authenticate using your identity provider.

      Once your SSO integration has been set up, non-admins will be required to authenticate through the SSO provider when accessing account resources.

      See the Prefect Cloud plans to learn more about options for supporting more users and workspaces, service accounts, and SSO.

      ","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","single sign-on","SSO","authentication"],"boost":2},{"location":"cloud/users/sso/#configuring-sso","title":"Configuring SSO","text":"

      Within your account, select the SSO page to enable SSO for users.

      If you haven't enabled SSO for a domain yet, enter the email domains for which you want to configure SSO in Prefect Cloud and save it.

      Under Enabled Domains, select the domains from the Domains list, then select Generate Link. This step creates a link you can use to configure SSO with your identity provider.

      Using the provided link, navigate to the Identity Provider Configuration dashboard and select your identity provider to continue configuration. If your provider isn't listed, you can continue with the SAML or Open ID Connect choices instead.

      Once you complete SSO configuration, your users will be required to authenticate through your identity provider when accessing account resources, giving you full control over application access.

      ","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","single sign-on","SSO","authentication"],"boost":2},{"location":"cloud/users/sso/#directory-sync","title":"Directory sync","text":"

      Directory sync automatically provisions and de-provisions users for your account.

      Provisioned users are given basic \u201cMember\u201d roles and will have access to any resources that role entails.

      When a user is unassigned from the Prefect Cloud application in your identity provider, they will automatically lose access to Prefect Cloud resources, allowing your IT team to control access to Prefect Cloud without ever signing into the Prefect UI.

      ","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","single sign-on","SSO","authentication"],"boost":2},{"location":"cloud/users/sso/#scim-provisioning","title":"SCIM Provisioning","text":"

      Custom accounts have access to SCIM for user provisioning. The SSO tab provides access to enable SCIM provisioning.

      ","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","single sign-on","SSO","authentication"],"boost":2},{"location":"cloud/users/teams/","title":"Teams","text":"

      Prefect Cloud's Custom plan offers team management to simplify access control governance.

      Account Admins can configure teams and team membership from the account settings menu by clicking Teams. Teams are composed of users and service accounts. Teams can be added to workspaces or object access control lists just like users and service accounts.

      If SCIM is enabled on your account, the set of teams and the users within them is governed by your IDP. Prefect Cloud service accounts, which are not governed by your IDP, can still be added to your existing set of teams.

      See the Prefect Cloud plans to learn more about options for supporting teams.

      ","tags":["UI","Permissions","Access","Prefect Cloud","enterprise","teams","workspaces","organizations","audit logs","compliance"],"boost":2},{"location":"community/","title":"Community","text":"

      There are many ways to get involved with the Prefect community:

      • Join over 26,000 engineers in the Prefect Slack community
      • Get help in Prefect Discourse - the community-driven knowledge base
      • Give Prefect a \u2b50\ufe0f on GitHub
      • Contribute to Prefect's open source libraries
      • Become a Prefect Ambassador by joining Club 42
      ","tags":["community","Slack","Discourse"],"boost":2},{"location":"concepts/","title":"Explore Prefect concepts","text":"Concept Description Flows A Prefect workflow, defined as a Python function. Tasks Discrete units of work in a Prefect workflow. Deployments A server-side concept that encapsulates flow metadata, allowing it to be scheduled and triggered via API. Work Pools & Workers Use Prefect to dynamically provision and configure infrastructure in your execution environment. Schedules Tell the Prefect API how to create new flow runs for you automatically on a specified cadence. Results The data returned by a flow or a task. Artifacts Formatted outputs rendered in the Prefect UI, such as markdown, tables, or links. States Rich objects that capture the status of a particular task run or flow run. Blocks Prefect primitives that enable the storage of configuration and provide a UI interface. Task Runners Configure how tasks are run - concurrently, in parallel, or in a distributed environment. Automations Configure actions that Prefect executes automatically based on trigger conditions. Block and Agent-Based Deployments Description Block-based Deployments Create deployments that rely on blocks. Infrastructure Blocks that specify infrastructure for flow runs created by a deployment. Storage Lets you configure how flow code for deployments is persisted and retrieved. Agents Like untyped workers.

      Many features specific to Prefect Cloud are in their own navigation subheading.

      ","tags":["concepts","features","overview"],"boost":2},{"location":"concepts/agents/","title":"Agents","text":"

      Workers are recommended

      Agents are part of the block-based deployment model. Work Pools and Workers simplify the specification of a flow's infrastructure and runtime environment. If you have existing agents, you can upgrade from agents to workers to significantly enhance the experience of deploying flows.

      ","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/agents/#agent-overview","title":"Agent overview","text":"

      Agent processes are lightweight polling services that get scheduled work from a work pool and deploy the corresponding flow runs.

      Agents poll for work every 15 seconds by default. This interval is configurable in your profile settings with the PREFECT_AGENT_QUERY_INTERVAL setting.
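
      For example, to poll every 30 seconds instead (30 is an illustrative value):

      prefect config set PREFECT_AGENT_QUERY_INTERVAL=30\n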

      Multiple agent processes can be started for a single work pool. Each agent process sends a unique ID to the server to disambiguate itself and let users know how many agents are active.

      ","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/agents/#agent-options","title":"Agent options","text":"

      Agents are configured to pull work from one or more work pool queues. If the agent references a work queue that doesn't exist, it will be created automatically.

      Configuration parameters you can specify when starting an agent include:

      Option Description --api The API URL for the Prefect server. Default is the value of PREFECT_API_URL. --hide-welcome Do not display the startup ASCII art for the agent process. --limit Maximum number of flow runs to start simultaneously. [default: None] --match, -m Dynamically matches work queue names with the specified prefix for the agent to pull from,for example dev- will match all work queues with a name that starts with dev-. [default: None] --pool, -p A work pool name for the agent to pull from. [default: None] --prefetch-seconds The amount of time before a flow run's scheduled start time to begin submission. Default is the value of PREFECT_AGENT_PREFETCH_SECONDS. --run-once Only run agent polling once. By default, the agent runs forever. [default: no-run-once] --work-queue, -q One or more work queue names for the agent to pull from. [default: None]

      You must start an agent within an environment that can access or create the infrastructure needed to execute flow runs. Your agent will deploy flow runs to the infrastructure specified by the deployment.

      Prefect must be installed in execution environments

      Prefect must be installed in any environment in which you intend to run the agent or execute a flow run.

      PREFECT_API_URL and PREFECT_API_KEY settings for agents

      PREFECT_API_URL must be set for the environment in which your agent is running or specified when starting the agent with the --api flag. You must also have a user or service account with the Worker role, which can be configured by setting the PREFECT_API_KEY.

      If you want an agent to communicate with Prefect Cloud or a Prefect server from a remote execution environment such as a VM or Docker container, you must configure PREFECT_API_URL in that environment.
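
      For example, a remote environment can be configured from the CLI before starting the agent; the account ID, workspace ID, and key below are placeholders:

      prefect config set PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"\nprefect config set PREFECT_API_KEY=\"[API-KEY]\"\n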

      ","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/agents/#starting-an-agent","title":"Starting an agent","text":"

      Use the prefect agent start CLI command to start an agent. You must pass at least one work pool name or a match string for the agent to poll for work. If the work pool does not exist, it will be created.

      prefect agent start -p [work pool name]\n

      For example:

      Starting agent with ephemeral API...\n\u00a0 ___ ___ ___ ___ ___ ___ _____ \u00a0 \u00a0 _ \u00a0 ___ ___ _\u00a0 _ _____\n\u00a0| _ \\ _ \\ __| __| __/ __|_ \u00a0 _| \u00a0 /_\\ / __| __| \\| |_ \u00a0 _|\n\u00a0|\u00a0 _/ \u00a0 / _|| _|| _| (__\u00a0 | |\u00a0 \u00a0 / _ \\ (_ | _|| .` | | |\n\u00a0|_| |_|_\\___|_| |___\\___| |_| \u00a0 /_/ \\_\\___|___|_|\\_| |_|\n\nAgent started! Looking for work from work pool 'my-pool'...\n

      By default, the agent polls the API specified by the PREFECT_API_URL environment variable. To configure the agent to poll from a different server location, use the --api flag, specifying the URL of the server.
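
      For example, to point the agent at a self-hosted Prefect server (the work pool name and URL below are placeholders):

      prefect agent start -p my-pool --api \"http://127.0.0.1:4200/api\"\n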

      In addition, agents can match multiple queues in a work pool by providing a --match string instead of specifying all of the queues. The agent will poll every queue with a name that starts with the given string. New queues matching this prefix will be found by the agent without needing to restart it.

      For example:

      prefect agent start --match \"foo-\"\n

      This example will poll every work queue that starts with \"foo-\".

      ","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/agents/#configuring-prefetch","title":"Configuring prefetch","text":"

      By default, the agent begins submission of flow runs a short time (10 seconds) before they are scheduled to run. This allows time for the infrastructure to be created, so the flow run can start on time. In some cases, infrastructure will take longer than this to actually start the flow run. In these cases, the prefetch can be increased using the --prefetch-seconds option or the PREFECT_AGENT_PREFETCH_SECONDS setting.

      Submission can begin an arbitrary amount of time before the flow run is scheduled to start. If this value is larger than the amount of time it takes for the infrastructure to start, the flow run will wait until its scheduled start time. This allows flow runs to start exactly on time.
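
      For example, to begin submission 60 seconds before the scheduled start time (an illustrative value, with a placeholder work pool name), pass the option when starting the agent:

      prefect agent start -p my-pool --prefetch-seconds 60\n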

      ","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/agents/#troubleshooting","title":"Troubleshooting","text":"","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/agents/#agent-crash-or-keyboard-interrupt","title":"Agent crash or keyboard interrupt","text":"

      If the agent process ends abruptly, it can leave behind flow runs that were destined for that agent. In the UI, these show up as pending. You will need to delete these runs for the restarted agent to begin processing the work queue again. Take note of the runs you delete; you might need to set them to run manually.

      ","tags":["agents","deployments"],"boost":0.5},{"location":"concepts/artifacts/","title":"Artifacts","text":"

      Artifacts are persisted outputs such as tables, Markdown, or links. They are stored on Prefect Cloud or a Prefect server instance and rendered in the Prefect UI. Artifacts make it easy to track and monitor the objects that your flows produce and update over time.

      Published artifacts may be associated with a particular task run or flow run. Artifacts can also be created outside of any flow run context.

      Whether you're publishing links, Markdown, or tables, artifacts provide a powerful and flexible way to showcase data within your workflows.

      With artifacts, you can easily manage and share information with your team, providing valuable insights and context.

      Common use cases for artifacts include:

      • Debugging: By publishing data that you care about in the UI, you can easily see when and where your results were written. If an artifact doesn't look the way you expect, you can find out which flow run last updated it, and you can click through a link in the artifact to a storage location (such as an S3 bucket).
      • Data quality checks: Artifacts can be used to publish data quality checks from in-progress tasks. This can help ensure that data quality is maintained throughout the pipeline. During long-running tasks such as ML model training, you might use artifacts to publish performance graphs. This can help you visualize how well your models are performing and make adjustments as needed. You can also track the versions of these artifacts over time, making it easier to identify changes in your data.
      • Documentation: Artifacts can be used to publish documentation and sample data to help you keep track of your work and share information with your colleagues. For instance, artifacts allow you to add a description to let your colleagues know why this piece of data is important.
      ","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#creating-artifacts","title":"Creating artifacts","text":"

      Creating artifacts allows you to publish data from task and flow runs or outside of a flow run context. Currently, you can render three artifact types: links, Markdown, and tables.

      Artifacts render individually

      Please note that every artifact created within a task will be displayed as an individual artifact in the Prefect UI. This means that each call to create_link_artifact() or create_markdown_artifact() generates a distinct artifact.

      Unlike the print() command, where multiple calls can be combined into a single report, each call to these functions creates a separate artifact; within a task, use multiple calls only if you want multiple artifacts.

      To create artifacts like reports or summaries using create_markdown_artifact(), compile your message string separately and then pass it to create_markdown_artifact() to create the complete artifact.

      ","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#creating-link-artifacts","title":"Creating link artifacts","text":"

      To create a link artifact, use the create_link_artifact() function. To create multiple versions of the same artifact and/or view them on the Artifacts page of the Prefect UI, provide a key argument to the create_link_artifact() function to track an artifact's history over time. Without a key, the artifact will only be visible in the Artifacts tab of the associated flow run or task run.

      from prefect import flow, task\nfrom prefect.artifacts import create_link_artifact\n\n@task\ndef my_first_task():\n        create_link_artifact(\n            key=\"irregular-data\",\n            link=\"https://nyc3.digitaloceanspaces.com/my-bucket-name/highly_variable_data.csv\",\n            description=\"## Highly variable data\",\n        )\n\n@task\ndef my_second_task():\n        create_link_artifact(\n            key=\"irregular-data\",\n            link=\"https://nyc3.digitaloceanspaces.com/my-bucket-name/low_pred_data.csv\",\n            description=\"# Low prediction accuracy\",\n        )\n\n@flow\ndef my_flow():\n    my_first_task()\n    my_second_task()\n\nif __name__ == \"__main__\":\n    my_flow()\n

      Tip

      You can specify multiple artifacts with the same key to more easily track something very specific that you care about, such as irregularities in your data pipeline.

      After running the above flow, you can find your new artifacts in the Artifacts page of the UI. Click into the \"irregular-data\" artifact to see all versions of it, along with custom descriptions and links to the relevant data.

      Here, you'll also be able to view information about your artifact such as its associated flow run or task run id, previous and future versions of the artifact (multiple artifacts can have the same key in order to show lineage), the data you've stored (in this case a Markdown-rendered link), an optional Markdown description, and when the artifact was created or updated.

      To make the links more readable for you and your collaborators, you can pass in a link_text argument for your link artifacts:

      from prefect import flow\nfrom prefect.artifacts import create_link_artifact\n\n@flow\ndef my_flow():\n    create_link_artifact(\n        key=\"my-important-link\",\n        link=\"https://www.prefect.io/\",\n        link_text=\"Prefect\",\n    )\n\nif __name__ == \"__main__\":\n    my_flow()\n

      In the above example, the create_link_artifact method is used within a flow to create a link artifact with a key of my-important-link. The link parameter is used to specify the external resource to be linked to, and link_text is used to specify the text to be displayed for the link. An optional description could also be added for context.

      ","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#creating-markdown-artifacts","title":"Creating Markdown artifacts","text":"

      To create a Markdown artifact, you can use the create_markdown_artifact() function. To create multiple versions of the same artifact and/or view them on the Artifacts page of the Prefect UI, provide a key argument to the create_markdown_artifact() function to track an artifact's history over time. Without a key, the artifact will only be visible in the Artifacts tab of the associated flow run or task run.

      Don't indent Markdown

      Markdown in multi-line strings must be unindented to be interpreted correctly.

      from prefect import flow, task\nfrom prefect.artifacts import create_markdown_artifact\n\n@task\ndef markdown_task():\n    na_revenue = 500000\n    markdown_report = f\"\"\"# Sales Report\n\n## Summary\n\nIn the past quarter, our company saw a significant increase in sales, with a total revenue of $1,000,000. \nThis represents a 20% increase over the same period last year.\n\n## Sales by Region\n\n| Region        | Revenue |\n|:--------------|-------:|\n| North America | ${na_revenue:,} |\n| Europe        | $250,000 |\n| Asia          | $150,000 |\n| South America | $75,000 |\n| Africa        | $25,000 |\n\n## Top Products\n\n1. Product A - $300,000 in revenue\n2. Product B - $200,000 in revenue\n3. Product C - $150,000 in revenue\n\n## Conclusion\n\nOverall, these results are very encouraging and demonstrate the success of our sales team in increasing revenue \nacross all regions. However, we still have room for improvement and should focus on further increasing sales in \nthe coming quarter.\n\"\"\"\n    create_markdown_artifact(\n        key=\"gtm-report\",\n        markdown=markdown_report,\n        description=\"Quarterly Sales Report\",\n    )\n\n@flow()\ndef my_flow():\n    markdown_task()\n\n\nif __name__ == \"__main__\":\n    my_flow()\n

      After running the above flow, you should see your \"gtm-report\" artifact in the Artifacts page of the UI.

      As with all artifacts, you'll be able to view the associated flow run or task run id, previous and future versions of the artifact, your rendered Markdown data, and your optional Markdown description.

      ","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#create-table-artifacts","title":"Create table artifacts","text":"

      You can create a table artifact by calling create_table_artifact(). To create multiple versions of the same artifact and/or view them on the Artifacts page of the Prefect UI, provide a key argument to the create_table_artifact() function to track an artifact's history over time. Without a key, the artifact will only be visible in the artifacts tab of the associated flow run or task run.

      Note

      The create_table_artifact() function accepts a table argument, which can be provided as either a list of lists, a list of dictionaries, or a dictionary of lists.

      from prefect.artifacts import create_table_artifact\n\ndef my_fn():\n    highest_churn_possibility = [\n       {'customer_id':'12345', 'name': 'John Smith', 'churn_probability': 0.85 }, \n       {'customer_id':'56789', 'name': 'Jane Jones', 'churn_probability': 0.65 } \n    ]\n\n    create_table_artifact(\n        key=\"personalized-reachout\",\n        table=highest_churn_possibility,\n        description= \"# Marvin, please reach out to these customers today!\"\n    )\n\nif __name__ == \"__main__\":\n    my_fn()\n

      As you can see, you don't need to create an artifact in a flow run context. You can create one anywhere in a Python script and see it in the Prefect UI.

      ","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#managing-artifacts","title":"Managing artifacts","text":"","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#reading-artifacts","title":"Reading artifacts","text":"

      In the Prefect UI, you can view all of the latest versions of your artifacts and click into a specific artifact to see its lineage over time. Additionally, you can inspect all versions of an artifact with a given key from the CLI by running:

      prefect artifact inspect <my_key>\n

      or view all artifacts by running:

      prefect artifact ls\n

      You can also use the Prefect REST API to programmatically filter your results.

      ","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#fetching-artifacts","title":"Fetching artifacts","text":"

      In Python code, you can retrieve an existing artifact with the Artifact.get class method:

      from prefect.artifacts import Artifact\n\nmy_retrieved_artifact = Artifact.get(\"my_artifact_key\")\n
      ","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#deleting-artifacts","title":"Deleting artifacts","text":"

      You can delete an artifact directly using the CLI to delete specific artifacts with a given key or id:

      prefect artifact delete <my_key>\n
      prefect artifact delete --id <my_id>\n

      Alternatively, you can delete artifacts using the Prefect REST API.

      ","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#artifacts-api","title":"Artifacts API","text":"

      Prefect provides the Prefect REST API to allow you to create, read, and delete artifacts programmatically. With the Artifacts API, you can automate the creation and management of artifacts as part of your workflow.

      For example, to read the five most recently created Markdown, table, and link artifacts, you can run the following:

      import requests\n\nPREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/abc/workspaces/xyz\"\nPREFECT_API_KEY=\"pnu_ghijk\"\ndata = {\n    \"sort\": \"CREATED_DESC\",\n    \"limit\": 5,\n    \"artifacts\": {\n        \"key\": {\n            \"exists_\": True\n        }\n    }\n}\n\nheaders = {\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"}\nendpoint = f\"{PREFECT_API_URL}/artifacts/filter\"\n\nresponse = requests.post(endpoint, headers=headers, json=data)\nassert response.status_code == 200\nfor artifact in response.json():\n    print(artifact)\n

      If you don't specify a key or require that a key exists, the query will also return results, which are a type of key-less artifact.

      See the rest of the Prefect REST API documentation on artifacts for more information!

      ","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/automations/","title":"Automations","text":"

      Automations in Prefect Cloud enable you to configure actions that Prefect executes automatically based on trigger conditions.

      Potential triggers include the occurrence of events from changes in a flow run's state - or the absence of such events. You can even define your own custom trigger to fire based on an event created from a webhook or a custom event defined in Python code.
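
      As a minimal sketch of the Python route, assuming Prefect's events module is available in your execution environment (the event name and resource ID below are illustrative):

      from prefect.events import emit_event\n\n# Emit a custom event; an automation with a matching custom trigger can react to it.\nemit_event(\n    event=\"my-org.data.quality-check-failed\",\n    resource={\"prefect.resource.id\": \"my-org.dataset.orders\"},\n)\n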

      Potential actions include kicking off flow runs, pausing schedules, and sending custom notifications.

      Automations are only available in Prefect Cloud

      Notifications in an open-source Prefect server provide a subset of the notification message-sending features available in Automations.

      Automations provide a flexible and powerful framework for automatically taking action in response to events.

      ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#automations-overview","title":"Automations overview","text":"

      The Automations page provides an overview of all configured automations for your workspace.

      Selecting the toggle next to an automation pauses execution of the automation.

      The button next to the toggle provides commands to copy the automation ID, edit the automation, or delete the automation.

      Select the name of an automation to view Details about it and relevant Events.

      ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#create-an-automation","title":"Create an automation","text":"

      On the Automations page, select the + icon to create a new automation. You'll be prompted to configure:

      • A trigger condition that causes the automation to execute.
      • One or more actions carried out by the automation.
      • Details about the automation, such as a name and description.
      ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#triggers","title":"Triggers","text":"

      Triggers specify the conditions under which your action should be performed. The Prefect UI includes templates for many common conditions, such as:

      • Flow run state change
      • Note - Flow Run Tags currently are only evaluated with OR criteria
      • Work pool status
      • Work queue status
      • Deployment status
      • Metric thresholds, such as average duration, lateness, or completion percentage
      • Incident declarations (available on Pro and Custom plans)
      • Custom event triggers

      Automations API

      The automations API enables further programmatic customization of trigger and action policies based on arbitrary events.

      Importantly, triggers can be configured not only in reaction to events, but also proactively: to fire in the absence of an expected event.

      For example, in the case of flow run state change triggers, you might expect production flows to finish in no longer than thirty minutes. But transient infrastructure or network issues could cause your flow to get \u201cstuck\u201d in a running state. A trigger could kick off an action if the flow stays in a running state for more than 30 minutes. This action could be taken on the flow itself, such as canceling or restarting it. Or the action could take the form of a notification so someone can take manual remediation steps. Or you could set both actions to take place when the trigger occurs.
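
      As a rough sketch of that first option, using the Python SDK objects shown later on this page, a proactive trigger paired with a cancellation action might look like the following. The automation name, the 30-minute window, and the choice of terminal states are assumptions for illustration:

      from datetime import timedelta\n\nfrom prefect.automations import Automation\nfrom prefect.events.schemas.automations import EventTrigger\nfrom prefect.server.events.actions import CancelFlowRun\n\n# Sketch: fire if a flow run has not reached a terminal state within 30 minutes\n# of entering Running, evaluated separately for each flow run, then cancel it.\nAutomation(\n    name=\"cancel-stuck-runs\",  # assumed name for illustration\n    trigger=EventTrigger(\n        match={\"prefect.resource.id\": \"prefect.flow-run.*\"},\n        after={\"prefect.flow-run.Running\"},\n        expect={\n            \"prefect.flow-run.Completed\",\n            \"prefect.flow-run.Failed\",\n            \"prefect.flow-run.Crashed\",\n        },\n        for_each={\"prefect.resource.id\"},\n        posture=\"Proactive\",\n        threshold=1,\n        within=timedelta(minutes=30),\n    ),\n    actions=[CancelFlowRun()],\n).create()\n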

      ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#actions","title":"Actions","text":"

      Actions specify what your automation does when its trigger criteria are met. Current action types include:

      • Cancel a flow run
      • Pause or resume a schedule
      • Run a deployment
      • Pause or resume a deployment schedule
      • Pause or resume a work pool
      • Pause or resume a work queue
      • Pause or resume an automation
      • Send a notification
      • Call a webhook
      • Suspend a flow run
      • Declare an incident (available on Pro and Custom plans)
      • Change the state of a flow run

      ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#creating-automations-in-python-code","title":"Creating automations In Python code","text":"

      You can create and access any automation with the Python SDK's Automation class and its methods.

      from prefect.automations import Automation\nfrom prefect.events.schemas.automations import EventTrigger\nfrom prefect.server.events.actions import CancelFlowRun\n\n# creating an automation\nautomation = Automation(\n    name=\"woodchonk\",\n    trigger=EventTrigger(\n        expect={\"animal.walked\"},\n        match={\n            \"genus\": \"Marmota\",\n            \"species\": \"monax\",\n        },\n        posture=\"Reactive\",\n        threshold=3,\n    ),\n    actions=[CancelFlowRun()],\n).create()\nprint(automation)\n# name='woodchonk' description='' enabled=True trigger=EventTrigger(type='event', match=ResourceSpecification(__root__={'genus': 'Marmota', 'species': 'monax'}), match_related=ResourceSpecification(__root__={}), after=set(), expect={'animal.walked'}, for_each=set(), posture=Posture.Reactive, threshold=3, within=datetime.timedelta(seconds=10)) actions=[CancelFlowRun(type='cancel-flow-run')] actions_on_trigger=[] actions_on_resolve=[] owner_resource=None id=UUID('d641c552-775c-4dc6-a31e-541cb11137a6')\n\n# reading the automation by id\nautomation = Automation.read(id=\"d641c552-775c-4dc6-a31e-541cb11137a6\")\n# or by name\nautomation = Automation.read(\"woodchonk\")\n\nprint(automation)\n# name='woodchonk' description='' enabled=True trigger=EventTrigger(type='event', match=ResourceSpecification(__root__={'genus': 'Marmota', 'species': 'monax'}), match_related=ResourceSpecification(__root__={}), after=set(), expect={'animal.walked'}, for_each=set(), posture=Posture.Reactive, threshold=3, within=datetime.timedelta(seconds=10)) actions=[CancelFlowRun(type='cancel-flow-run')] actions_on_trigger=[] actions_on_resolve=[] owner_resource=None id=UUID('d641c552-775c-4dc6-a31e-541cb11137a6')\n
      ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#selected-and-inferred-action-targets","title":"Selected and inferred action targets","text":"

      Some actions require you to either select the target of the action, or specify that the target of the action should be inferred.

      Selected targets are simple, and useful for when you know exactly what object your action should act on \u2014 for example, the case of a cleanup flow you want to run or a specific notification you\u2019d like to send.

      Inferred targets are deduced from the trigger itself.

      For example, if a trigger fires on a flow run that is stuck in a running state, and the action is to cancel an inferred flow run, the flow run to cancel is inferred as the stuck run that caused the trigger to fire.

      Similarly, if a trigger fires on a work queue event and the corresponding action is to pause an inferred work queue, the inferred work queue is the one that emitted the event.

      Prefect tries to infer the relevant event whenever possible, but sometimes one does not exist.

      Specify a name and, optionally, a description for the automation.

      ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#custom-triggers","title":"Custom triggers","text":"

      When you need a trigger that doesn't quite fit the templates in the UI trigger builder, you can define a custom trigger in JSON. With custom triggers, you have access to the full capabilities of Prefect's automation system, allowing you to react to many kinds of events and metrics in your workspace.

      Each automation has a single trigger that, when fired, will cause all of its associated actions to run. That single trigger may be a reactive or proactive event trigger, a trigger monitoring the value of a metric, or a composite trigger that combines several underlying triggers.

      ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#event-triggers","title":"Event triggers","text":"

      Event triggers are the most common type of trigger, and they are intended to react to the presence or absence of an event happening in your workspace. Event triggers are indicated with {\"type\": \"event\"}.

      The schema that defines an event trigger is as follows:

      Name Type Supports trailing wildcards Description match object Labels for resources which this Automation will match. match_related object Labels for related resources which this Automation will match. posture string enum N/A The posture of this Automation, either Reactive or Proactive. Reactive automations respond to the presence of the expected events, while Proactive automations respond to the absence of those expected events. after array of strings Event(s), one of which must have first been seen to start this automation. expect array of strings The event(s) this automation is expecting to see. If empty, this automation will evaluate any matched event. for_each array of strings Evaluate the Automation separately for each distinct value of these labels on the resource. By default, labels refer to the primary resource of the triggering event. You may also refer to labels from related resources by specifying related:<role>:<label>. This will use the value of that label for the first related resource in that role. threshold integer N/A The number of events required for this Automation to trigger (for Reactive automations), or the number of events expected (for Proactive automations) within number N/A The time period over which the events must occur. For Reactive triggers, this may be as low as 0 seconds, but must be at least 10 seconds for Proactive triggers","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#resource-matching","title":"Resource matching","text":"

      Both event and metric triggers support matching events for specific resources in your workspace, including most Prefect objects (like flows, deployments, blocks, work pools, tags, etc.) as well as resources you have defined in any events you emit yourself. The match and match_related fields control which events a trigger considers for evaluation by filtering on the contents of their resource and related fields, respectively. Each label added to a match filter is ANDed with the other labels, and can accept a single value or a list of multiple values that are ORed together.

      Consider the resource and related fields on the following prefect.flow-run.Completed event, truncated for the sake of example. Its primary resource is a flow run, and since that flow run was started via a deployment, it is related to both its flow and its deployment:

      \"resource\": {\n  \"prefect.resource.id\": \"prefect.flow-run.925eacce-7fe5-4753-8f02-77f1511543db\",\n  \"prefect.resource.name\": \"cute-kittiwake\"\n}\n\"related\": [\n  {\n    \"prefect.resource.id\": \"prefect.flow.cb6126db-d528-402f-b439-96637187a8ca\",\n    \"prefect.resource.role\": \"flow\",\n    \"prefect.resource.name\": \"hello\"\n  },\n  {\n    \"prefect.resource.id\": \"prefect.deployment.37ca4a08-e2d9-4628-a310-cc15a323378e\",\n    \"prefect.resource.role\": \"deployment\",\n    \"prefect.resource.name\": \"example\"\n  }\n]\n

      There are a number of valid ways to select the above event for evaluation, and the approach depends on the purpose of the automation.

      The following configuration will filter for any events whose primary resource is a flow run, and that flow run has a name starting with cute- or radical-.

      \"match\": {\n  \"prefect.resource.id\": \"prefect.flow-run.*\",\n  \"prefect.resource.name\": [\"cute-*\", \"radical-*\"]\n},\n\"match_related\": {},\n...\n

      This configuration, on the other hand, will filter for any events for which this specific deployment is a related resource.

      \"match\": {},\n\"match_related\": {\n  \"prefect.resource.id\": \"prefect.deployment.37ca4a08-e2d9-4628-a310-cc15a323378e\"\n},\n...\n

      Both of the above approaches will select the example prefect.flow-run.Completed event, but will permit additional, possibly undesired events through the filter as well. match and match_related can be combined for more restrictive filtering:

      \"match\": {\n  \"prefect.resource.id\": \"prefect.flow-run.*\",\n  \"prefect.resource.name\": [\"cute-*\", \"radical-*\"]\n},\n\"match_related\": {\n  \"prefect.resource.id\": \"prefect.deployment.37ca4a08-e2d9-4628-a310-cc15a323378e\"\n},\n...\n

      Now this trigger will filter only for events whose primary resource is a flow run started by a specific deployment, and that flow run has a name starting with cute- or radical-.

      ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#expected-events","title":"Expected events","text":"

      Once an event has passed through the match filters, it must be decided if this event should be counted toward the trigger's threshold. Whether that is the case is determined by the event names present in expect.

      This configuration informs the trigger to evaluate only prefect.flow-run.Completed events that have passed the match filters.

      \"expect\": [\n  \"prefect.flow-run.Completed\"\n],\n...\n

      threshold decides the quantity of expected events needed to satisfy the trigger. Increasing the threshold above 1 will also require use of within to define a range of time in which multiple events are seen. The following configuration will expect two occurrences of prefect.flow-run.Completed within 60 seconds.

      \"expect\": [\n  \"prefect.flow-run.Completed\"\n],\n\"threshold\": 2,\n\"within\": 60,\n...\n

      after can be used to handle scenarios that require more complex event reactivity.

      Take, for example, this flow which emits an event indicating the table it operates on is missing or empty:

      from prefect import flow\nfrom prefect.events import emit_event\nfrom db import Table\n\n\n@flow\ndef transform(table_name: str):\n    table = Table(table_name)\n\n    if not table.exists():\n        emit_event(\n            event=\"table-missing\",\n            resource={\"prefect.resource.id\": \"etl-events.transform\"}\n        )\n    elif table.is_empty():\n        emit_event(\n            event=\"table-empty\",\n            resource={\"prefect.resource.id\": \"etl-events.transform\"}\n        )\n    else:\n        ...  # transform the data\n

      The following configuration uses after to prevent this automation from firing unless either a table-missing or a table-empty event has occurred before a flow run of this deployment completes.

      Tip

      Note how match and match_related are used to ensure the trigger only evaluates events that are relevant to its purpose.

      \"match\": {\n  \"prefect.resource.id\": [\n    \"prefect.flow-run.*\",\n    \"etl-events.transform\"\n  ]\n},\n\"match_related\": {\n  \"prefect.resource.id\": \"prefect.deployment.37ca4a08-e2d9-4628-a310-cc15a323378e\"\n},\n\"after\": [\n  \"table-missing\",\n  \"table-empty\"\n],\n\"expect\": [\n  \"prefect.flow-run.Completed\"\n],\n...\n
      ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#evaluation-strategy","title":"Evaluation strategy","text":"

      All of the previous examples were designed around a reactive posture - that is, count up events toward the threshold until it is met, then execute actions. To respond to the absence of events, use a proactive posture. A proactive trigger will fire when its threshold has not been met by the end of the window of time defined by within. Proactive triggers must have a within value of at least 10 seconds.

      The following trigger will fire if a prefect.flow-run.Completed event is not seen within 60 seconds after a prefect.flow-run.Running event is seen.

      {\n  \"match\": {\n    \"prefect.resource.id\": \"prefect.flow-run.*\"\n  },\n  \"match_related\": {},\n  \"after\": [\n    \"prefect.flow-run.Running\"\n  ],\n  \"expect\": [\n    \"prefect.flow-run.Completed\"\n  ],\n  \"for_each\": [],\n  \"posture\": \"Proactive\",\n  \"threshold\": 1,\n  \"within\": 60\n}\n

      However, without for_each, a prefect.flow-run.Completed event from a different flow run than the one that started this trigger with its prefect.flow-run.Running event could satisfy the condition. Adding a for_each of prefect.resource.id will cause this trigger to be evaluated separately for each flow run id associated with these events.

      {\n  \"match\": {\n    \"prefect.resource.id\": \"prefect.flow-run.*\"\n  },\n  \"match_related\": {},\n  \"after\": [\n    \"prefect.flow-run.Running\"\n  ],\n  \"expect\": [\n    \"prefect.flow-run.Completed\"\n  ],\n  \"for_each\": [\n    \"prefect.resource.id\"\n  ],\n  \"posture\": \"Proactive\",\n  \"threshold\": 1,\n  \"within\": 60\n}\n
      ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#metric-triggers","title":"Metric triggers","text":"

      Metric triggers ({\"type\": \"metric\"}) fire when the value of a metric in your workspace crosses a threshold you've defined. For example, you can trigger an automation when the success rate of flows in your workspace drops below 95% over the course of an hour.

      Prefect's metrics are all derived by examining your workspace's events, and if applicable, use the occurred times of those events as the basis for their calculations.

      Prefect defines three metrics today:

      • Successes ({\"name\": \"successes\"}), defined as the number of flow runs that entered a Pending state and whose latest observed state was not a failure (Failed or Crashed). This metric accounts for retries if the ultimate state was successful.
      • Duration ({\"name\": \"duration\"}), defined as the length of time that a flow remains in a Running state before transitioning to a terminal state such as Completed, Failed, or Crashed. Because this time is derived in terms of flow run state change events, it may be greater than the runtime of your function.
      • Lateness ({\"name\": \"lateness\"}), defined as the length of time that a Scheduled flow remains in a Late state before transitioning to a Running and/or Crashed state. Only flow runs that the system marks Late are included.

      The schema of a metric trigger is as follows:

      Name Type Supports trailing wildcards Description match object Labels for resources which this Automation will match. match_related object Labels for related resources which this Automation will match. metric MetricTriggerQuery N/A The definition of the metric query to run

      And the MetricTriggerQuery query is defined as:

      • name (string): The name of the Prefect metric to evaluate (see above).
      • threshold (number): The threshold to which the current metric value is compared.
      • operator (string, one of \"<\", \"<=\", \">\", \">=\"): The comparison operator used to decide whether the threshold has been met.
      • range (duration in seconds): How far back to evaluate the metric.
      • firing_for (duration in seconds): How long the value must exceed the threshold before this trigger fires.

      For example, to fire when flow runs tagged production in your workspace are failing at a rate of 10% or worse for the last hour (in other words, your success rate is below 90%), create this trigger:

      {\n  \"type\": \"metric\",\n  \"match\": {\n    \"prefect.resource.id\": \"prefect.flow-run.*\"\n  },\n  \"match_related\": {\n    \"prefect.resource.id\": \"prefect.tag.production\",\n    \"prefect.resource.role\": \"tag\"\n  },\n  \"metric\": {\n    \"name\": \"successes\",\n    \"threshold\": 0.9,\n    \"operator\": \"<\",\n    \"range\": 3600,\n    \"firing_for\": 0\n  }\n}\n

      To detect when the average lateness of your Kubernetes workloads (running on a work pool named kubernetes) over the last day exceeds 5 minutes, and that number hasn't improved for the last 10 minutes, use a trigger like this:

      {\n  \"type\": \"metric\",\n  \"match\": {\n    \"prefect.resource.id\": \"prefect.flow-run.*\"\n  },\n  \"match_related\": {\n    \"prefect.resource.id\": \"prefect.work-pool.kubernetes\",\n    \"prefect.resource.role\": \"work-pool\"\n  },\n  \"metric\": {\n    \"name\": \"lateness\",\n    \"threshold\": 300,\n    \"operator\": \">\",\n    \"range\": 86400,\n    \"firing_for\": 600\n  }\n}\n
      ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#composite-triggers","title":"Composite triggers","text":"

      To create a trigger from multiple kinds of events and metrics, use a compound or sequence trigger. These higher-order triggers are composed from a set of underlying event and metric triggers.

      For example, if you want to run a deployment only after three different flows in your workspace have written their results to a remote filesystem, combine them with a 'compound' trigger:

      {\n  \"type\": \"compound\",\n  \"require\": \"all\",\n  \"within\": 3600,\n  \"triggers\": [\n    {\n      \"type\": \"event\",\n      \"posture\": \"Reactive\",\n      \"expect\": [\"prefect.block.remote-file-system.write_path.called\"],\n      \"match_related\": {\n        \"prefect.resource.name\": \"daily-customer-export\",\n        \"prefect.resource.role\": \"flow\"\n      }\n    },\n    {\n      \"type\": \"event\",\n      \"posture\": \"Reactive\",\n      \"expect\": [\"prefect.block.remote-file-system.write_path.called\"],\n      \"match_related\": {\n        \"prefect.resource.name\": \"daily-revenue-export\",\n        \"prefect.resource.role\": \"flow\"\n      }\n    },\n    {\n      \"type\": \"event\",\n      \"posture\": \"Reactive\",\n      \"expect\": [\"prefect.block.remote-file-system.write_path.called\"],\n      \"match_related\": {\n        \"prefect.resource.name\": \"daily-expenses-export\",\n        \"prefect.resource.role\": \"flow\"\n      }\n    }\n  ]\n}\n

      This trigger will fire once it sees at least one of each of the underlying event triggers fire within the time frame specified. Then the trigger will reset its state and fire the next time these three events all happen. The order the events occur doesn't matter, just that all of the events occur within one hour.

      If you want a flow run to complete prior to starting to watch for those three events, you can combine the entire previous trigger as the second part of a sequence of two triggers:

      {\n  // the outer trigger is now a \"sequence\" trigger\n  \"type\": \"sequence\",\n  \"within\": 7200,\n  \"triggers\": [\n    // with the first child trigger expecting a Completed event\n    {\n      \"type\": \"event\",\n      \"posture\": \"Reactive\",\n      \"expect\": [\"prefect.flow-run.Completed\"],\n      \"match_related\": {\n        \"prefect.resource.name\": \"daily-export-initiator\",\n        \"prefect.resource.role\": \"flow\"\n      }\n    },\n    // and the second child trigger being the compound trigger from the prior example\n    {\n      \"type\": \"compound\",\n      \"require\": \"all\",\n      \"within\": 3600,\n      \"triggers\": [\n        {\n          \"type\": \"event\",\n          \"posture\": \"Reactive\",\n          \"expect\": [\"prefect.block.remote-file-system.write_path.called\"],\n          \"match_related\": {\n            \"prefect.resource.name\": \"daily-customer-export\",\n            \"prefect.resource.role\": \"flow\"\n          }\n        },\n        {\n          \"type\": \"event\",\n          \"posture\": \"Reactive\",\n          \"expect\": [\"prefect.block.remote-file-system.write_path.called\"],\n          \"match_related\": {\n            \"prefect.resource.name\": \"daily-revenue-export\",\n            \"prefect.resource.role\": \"flow\"\n          }\n        },\n        {\n          \"type\": \"event\",\n          \"posture\": \"Reactive\",\n          \"expect\": [\"prefect.block.remote-file-system.write_path.called\"],\n          \"match_related\": {\n            \"prefect.resource.name\": \"daily-expenses-export\",\n            \"prefect.resource.role\": \"flow\"\n          }\n        }\n      ]\n    }\n  ]\n}\n

      In this case, the trigger will only fire if it sees the daily-export-initiator flow complete, and then the three files written by the other flows.

      The within parameter for compound and sequence triggers constrains how close in time (in seconds) the child triggers must fire to satisfy the composite trigger. For example, if the daily-export-initiator flow runs, but the other three flows don't write their result files until three hours later, this trigger won't fire. Placing these time constraints on the triggers can prevent a misfire if you know that the events will generally happen within a specific timeframe, and you don't want a stray older event to be included in the evaluation of the trigger. If this isn't a concern for you, you may omit the within period, in which case there is no limit to how far apart in time the child triggers occur.

      Any type of trigger may be composed into higher-order composite triggers, including proactive event triggers and metric triggers. In the following example, the compound trigger will fire if any of the following events occur: a flow run stuck in Pending, a work pool becoming unready, or the average amount of Late work in your workspace going over 10 minutes:

      {\n  \"type\": \"compound\",\n  \"require\": \"any\",\n  \"triggers\": [\n    {\n      \"type\": \"event\",\n      \"posture\": \"Proactive\",\n      \"after\": [\"prefect.flow-run.Pending\"],\n      \"expect\": [\"prefect.flow-run.Running\", \"prefect.flow-run.Crashed\"],\n      \"for_each\": [\"prefect.resource.id\"],\n      \"match_related\": {\n        \"prefect.resource.name\": \"daily-customer-export\",\n        \"prefect.resource.role\": \"flow\"\n      }\n    },\n    {\n      \"type\": \"event\",\n      \"posture\": \"Reactive\",\n      \"expect\": [\"prefect.work-pool.not-ready\"],\n      \"match\": {\n        \"prefect.resource.name\": \"kubernetes-workers\"\n      }\n    },\n    {\n      \"type\": \"metric\",\n      \"metric\": {\n        \"name\": \"lateness\",\n        \"operator\": \">\",\n        \"threshold\": 600,\n        \"range\": 3600,\n        \"firing_for\": 300\n      }\n    }\n  ]\n}\n

      For compound triggers, the require parameter may be \"any\", \"all\", or a number between 1 and the number of child triggers. In the example above, if you feel that you are receiving too many spurious notifications for issues that resolve on their own, you can specify {\"require\": 2} to express that any two of the triggers must fire in order for the compound trigger to fire. Sequence triggers, on the other hand, always require all of their child triggers to fire before they fire.

      Compound triggers are defined as:

      • require (number, \"any\", or \"all\"): How many of the child triggers must fire for this trigger to fire.
      • within (time, in seconds): How close in time the child triggers must fire for this trigger to fire.
      • triggers: array of other triggers

      Sequence triggers are defined as:

      • within (time, in seconds): How close in time the child triggers must fire for this trigger to fire.
      • triggers: array of other triggers","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#create-an-automation-via-deployment-triggers","title":"Create an automation via deployment triggers","text":"

      To enable the simple configuration of event-driven deployments, Prefect provides deployment triggers - a shorthand for creating automations that are linked to specific deployments to run them based on the presence or absence of events.

      Trigger definitions for deployments are supported in prefect.yaml, .serve, and .deploy. At deployment time, specified trigger definitions will create linked automations that are triggered by events matching your chosen grammar. Each trigger definition may include a Jinja template to render the triggering event as the parameters of your deployment's flow run.

      ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#defining-triggers-in-prefectyaml","title":"Defining triggers in prefect.yaml","text":"

      A list of triggers can be included directly on any deployment in a prefect.yaml file:

      deployments:\n  - name: my-deployment\n    entrypoint: path/to/flow.py:decorated_fn\n    work_pool:\n      name: my-work-pool\n    triggers:\n      - type: event\n        enabled: true\n        match:\n          prefect.resource.id: my.external.resource\n        expect:\n          - external.resource.pinged\n        parameters:\n          param_1: \"{{ event }}\"\n

      This deployment will create a flow run when an external.resource.pinged event and an external.resource.replied event have been seen from my.external.resource:

      deployments:\n  - name: my-deployment\n    entrypoint: path/to/flow.py:decorated_fn\n    work_pool:\n      name: my-work-pool\n    triggers:\n      - type: compound\n        require: all\n        parameters:\n          param_1: \"{{ event }}\"\n        triggers:\n          - type: event\n            match:\n              prefect.resource.id: my.external.resource\n            expect:\n              - external.resource.pinged\n          - type: event\n            match:\n              prefect.resource.id: my.external.resource\n            expect:\n              - external.resource.replied\n
      ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#defining-triggers-in-serve-and-deploy","title":"Defining triggers in .serve and .deploy","text":"

      For creating deployments with triggers in Python, the trigger types DeploymentEventTrigger, DeploymentMetricTrigger, DeploymentCompoundTrigger, and DeploymentSequenceTrigger can be imported from prefect.events:

      from prefect import flow\nfrom prefect.events import DeploymentEventTrigger\n\n\n@flow(log_prints=True)\ndef decorated_fn(param_1: str):\n    print(param_1)\n\n\nif __name__==\"__main__\":\n    decorated_fn.serve(\n        name=\"my-deployment\",\n        triggers=[\n            DeploymentEventTrigger(\n                enabled=True,\n                match={\"prefect.resource.id\": \"my.external.resource\"},\n                expect=[\"external.resource.pinged\"],\n                parameters={\n                    \"param_1\": \"{{ event }}\",\n                },\n            )\n        ],\n    )\n

      As with prior examples, composite triggers must be supplied with a list of underlying triggers:

      from prefect import flow\nfrom prefect.events import DeploymentCompoundTrigger\n\n\n@flow(log_prints=True)\ndef decorated_fn(param_1: str):\n    print(param_1)\n\n\nif __name__==\"__main__\":\n    decorated_fn.deploy(\n        name=\"my-deployment\",\n        image=\"my-image-registry/my-image:my-tag\",\n        triggers=[\n            DeploymentCompoundTrigger(\n                enabled=True,\n                name=\"my-compound-trigger\",\n                require=\"all\",\n                triggers=[\n                    {\n                      \"type\": \"event\",\n                      \"match\": {\"prefect.resource.id\": \"my.external.resource\"},\n                      \"expect\": [\"external.resource.pinged\"],\n                    },\n                    {\n                      \"type\": \"event\",\n                      \"match\": {\"prefect.resource.id\": \"my.external.resource\"},\n                      \"expect\": [\"external.resource.replied\"],\n                    },\n                ],\n                parameters={\n                    \"param_1\": \"{{ event }}\",\n                },\n            )\n        ],\n        work_pool_name=\"my-work-pool\",\n    )\n
      ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#pass-triggers-to-prefect-deploy","title":"Pass triggers to prefect deploy","text":"

      You can pass one or more --trigger arguments to prefect deploy, which can be either a JSON string or a path to a .yaml or .json file.

      # Pass a trigger as a JSON string\nprefect deploy -n test-deployment \\\n  --trigger '{\n    \"enabled\": true,\n    \"match\": {\n      \"prefect.resource.id\": \"prefect.flow-run.*\"\n    },\n    \"expect\": [\"prefect.flow-run.Completed\"]\n  }'\n\n# Pass a trigger using a JSON/YAML file\nprefect deploy -n test-deployment --trigger triggers.yaml\nprefect deploy -n test-deployment --trigger my_stuff/triggers.json\n

      For example, a triggers.yaml file could have many triggers defined:

      triggers:\n  - enabled: true\n    match:\n      prefect.resource.id: my.external.resource\n    expect:\n      - external.resource.pinged\n    parameters:\n      param_1: \"{{ event }}\"\n  - enabled: true\n    match:\n      prefect.resource.id: my.other.external.resource\n    expect:\n      - some.other.event\n    parameters:\n      param_1: \"{{ event }}\"\n

      Both of the above triggers would be attached to test-deployment after running prefect deploy.

      Triggers passed to prefect deploy will override any triggers defined in prefect.yaml

      While you can define triggers in prefect.yaml for a given deployment, triggers passed to prefect deploy will take precedence over those defined in prefect.yaml.

      Note that deployment triggers contribute to the total number of automations in your workspace.

      ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#automation-notifications","title":"Automation notifications","text":"

      Notifications enable you to set up automation actions that send a message.

      Automation notifications support sending notifications via any predefined block that is capable of and configured to send a message. That includes, for example:

      • Slack message to a channel
      • Microsoft Teams message to a channel
      • Email to a configured email address
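
      For example, a Slack webhook block could be saved ahead of time and then selected in an automation's notification action. This is a minimal sketch; the webhook URL and block document name are placeholders:

      from prefect.blocks.notifications import SlackWebhook\n\n# Save a Slack webhook block that an automation's notification action can use.\n# The URL and block document name below are placeholders.\nSlackWebhook(\n    url=\"https://hooks.slack.com/services/XXX/YYY/ZZZ\"\n).save(\"my-slack-alerts\")\n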

      ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/automations/#templating-with-jinja","title":"Templating with Jinja","text":"

      Automation actions can access templated variables through Jinja syntax. Templated variables enable you to dynamically include details from an automation trigger, such as a flow or pool name.

      Jinja templated variable syntax wraps the variable name in double curly brackets, like this: {{ variable }}.

      You can access properties of the underlying flow run objects including:

      • flow_run
      • flow
      • deployment
      • work_queue
      • work_pool

      In addition to its native properties, each object includes an id along with created and updated timestamps.

      The flow_run|ui_url token returns the URL for viewing the flow run in Prefect Cloud.

      Here\u2019s an example relevant to a flow run state-based notification:

      Flow run {{ flow_run.name }} entered state {{ flow_run.state.name }}.\n\n    Timestamp: {{ flow_run.state.timestamp }}\n    Flow ID: {{ flow_run.flow_id }}\n    Flow Run ID: {{ flow_run.id }}\n    State message: {{ flow_run.state.message }}\n

      The resulting Slack webhook notification would look something like this:

      You could include flow and deployment properties.

      Flow run {{ flow_run.name }} for flow {{ flow.name }}\nentered state {{ flow_run.state.name }}\nwith message {{ flow_run.state.message }}\n\nFlow tags: {{ flow_run.tags }}\nDeployment name: {{ deployment.name }}\nDeployment version: {{ deployment.version }}\nDeployment parameters: {{ deployment.parameters }}\n

      An automation that reports on work pool status might include notifications using work_pool properties.

      Work pool status alert!\n\nName: {{ work_pool.name }}\nLast polled: {{ work_pool.last_polled }}\n

      In addition to those shortcuts for flows, deployments, and work pools, you have access to the automation and the event that triggered the automation. See the Automations API for additional details.

      Automation: {{ automation.name }}\nDescription: {{ automation.description }}\n\nEvent: {{ event.id }}\nResource:\n{% for label, value in event.resource %}\n{{ label }}: {{ value }}\n{% endfor %}\nRelated Resources:\n{% for related in event.related %}\n    Role: {{ related.role }}\n    {% for label, value in related %}\n    {{ label }}: {{ value }}\n    {% endfor %}\n{% endfor %}\n

      Note that this example also illustrates the ability to use Jinja features such as iterator and for loop control structures when templating notifications.

      ","tags":["UI","states","flow runs","events","triggers","Prefect Cloud","automations"],"boost":2},{"location":"concepts/blocks/","title":"Blocks","text":"

      Blocks are a primitive within Prefect that enable the storage of configuration and provide an interface for interacting with external systems.

      With blocks, you can securely store credentials for authenticating with services like AWS, GitHub, Slack, and any other system you'd like to orchestrate with Prefect.

      Blocks expose methods that provide pre-built functionality for performing actions against an external system. They can be used to download data from or upload data to an S3 bucket, query data from or write data to a database, or send a message to a Slack channel.
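
      For example, assuming the prefect-aws integration is installed and an S3Bucket block document named my-bucket has already been saved, reading and writing bucket contents might look like this sketch:

      from prefect_aws.s3 import S3Bucket\n\n# Load a previously saved S3Bucket block and use its pre-built methods.\n# The block document name \"my-bucket\" is a placeholder.\ns3_bucket = S3Bucket.load(\"my-bucket\")\ns3_bucket.write_path(\"folder/greeting.txt\", content=b\"hello, world\")\ndata = s3_bucket.read_path(\"folder/greeting.txt\")\n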

      You may configure blocks through code or via the Prefect Cloud and the Prefect server UI.

      You can access blocks for both configuring flow deployments and directly from within your flow code.

      Prefect provides some built-in block types that you can use right out of the box. Additional blocks are available through Prefect Integrations. To use these blocks you can pip install the package, then register the blocks you want to use with Prefect Cloud or a Prefect server.

      Prefect Cloud and the Prefect server UI display a library of block types available for you to configure blocks that may be used by your flows.

      Blocks and parameters

      Blocks are useful for configuration that needs to be shared across flow runs and between flows.

      For configuration that will change between flow runs, we recommend using parameters.

      ","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#prefect-built-in-blocks","title":"Prefect built-in blocks","text":"

      Prefect provides a broad range of commonly used, built-in block types. These block types are available in Prefect Cloud and the Prefect server UI.

      Block Slug Description Azure azure Store data as a file on Azure Datalake and Azure Blob Storage. Date Time date-time A block that represents a datetime. Docker Container docker-container Runs a command in a container. Docker Registry docker-registry Connects to a Docker registry. Requires a Docker Engine to be connectable. GCS gcs Store data as a file on Google Cloud Storage. GitHub github Interact with files stored on public GitHub repositories. JSON json A block that represents JSON. Kubernetes Cluster Config kubernetes-cluster-config Stores configuration for interaction with Kubernetes clusters. Kubernetes Job kubernetes-job Runs a command as a Kubernetes Job. Local File System local-file-system Store data as a file on a local file system. Microsoft Teams Webhook ms-teams-webhook Enables sending notifications via a provided Microsoft Teams webhook. Opsgenie Webhook opsgenie-webhook Enables sending notifications via a provided Opsgenie webhook. Pager Duty Webhook pager-duty-webhook Enables sending notifications via a provided PagerDuty webhook. Process process Run a command in a new process. Remote File System remote-file-system Store data as a file on a remote file system. Supports any remote file system supported by fsspec. S3 s3 Store data as a file on AWS S3. Secret secret A block that represents a secret value. The value stored in this block will be obfuscated when this block is logged or shown in the UI. Slack Webhook slack-webhook Enables sending notifications via a provided Slack webhook. SMB smb Store data as a file on a SMB share. String string A block that represents a string. Twilio SMS twilio-sms Enables sending notifications via Twilio SMS. Webhook webhook Block that enables calling webhooks.","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#blocks-in-prefect-integrations","title":"Blocks in Prefect Integrations","text":"

      Blocks can also be created by anyone and shared with the community. You'll find blocks that are available for consumption in many of the published Prefect Integrations. The following table provides an overview of the blocks available from our most popular Prefect Integrations.

      Integration Block Slug prefect-airbyte Airbyte Connection airbyte-connection prefect-airbyte Airbyte Server airbyte-server prefect-aws AWS Credentials aws-credentials prefect-aws ECS Task ecs-task prefect-aws MinIO Credentials minio-credentials prefect-aws S3 Bucket s3-bucket prefect-azure Azure Blob Storage Credentials azure-blob-storage-credentials prefect-azure Azure Container Instance Credentials azure-container-instance-credentials prefect-azure Azure Container Instance Job azure-container-instance-job prefect-azure Azure Cosmos DB Credentials azure-cosmos-db-credentials prefect-azure AzureML Credentials azureml-credentials prefect-bitbucket BitBucket Credentials bitbucket-credentials prefect-bitbucket BitBucket Repository bitbucket-repository prefect-census Census Credentials census-credentials prefect-census Census Sync census-sync prefect-databricks Databricks Credentials databricks-credentials prefect-dbt dbt CLI BigQuery Target Configs dbt-cli-bigquery-target-configs prefect-dbt dbt CLI Profile dbt-cli-profile prefect-dbt dbt Cloud Credentials dbt-cloud-credentials prefect-dbt dbt CLI Global Configs dbt-cli-global-configs prefect-dbt dbt CLI Postgres Target Configs dbt-cli-postgres-target-configs prefect-dbt dbt CLI Snowflake Target Configs dbt-cli-snowflake-target-configs prefect-dbt dbt CLI Target Configs dbt-cli-target-configs prefect-docker Docker Host docker-host prefect-docker Docker Registry Credentials docker-registry-credentials prefect-email Email Server Credentials email-server-credentials prefect-firebolt Firebolt Credentials firebolt-credentials prefect-firebolt Firebolt Database firebolt-database prefect-gcp BigQuery Warehouse bigquery-warehouse prefect-gcp GCP Cloud Run Job cloud-run-job prefect-gcp GCP Credentials gcp-credentials prefect-gcp GcpSecret gcpsecret prefect-gcp GCS Bucket gcs-bucket prefect-gcp Vertex AI Custom Training Job vertex-ai-custom-training-job prefect-github GitHub Credentials github-credentials prefect-github GitHub Repository github-repository prefect-gitlab GitLab Credentials gitlab-credentials prefect-gitlab GitLab Repository gitlab-repository prefect-hex Hex Credentials hex-credentials prefect-hightouch Hightouch Credentials hightouch-credentials prefect-kubernetes Kubernetes Credentials kubernetes-credentials prefect-monday Monday Credentials monday-credentials prefect-monte-carlo Monte Carlo Credentials monte-carlo-credentials prefect-openai OpenAI Completion Model openai-completion-model prefect-openai OpenAI Image Model openai-image-model prefect-openai OpenAI Credentials openai-credentials prefect-slack Slack Credentials slack-credentials prefect-slack Slack Incoming Webhook slack-incoming-webhook prefect-snowflake Snowflake Connector snowflake-connector prefect-snowflake Snowflake Credentials snowflake-credentials prefect-sqlalchemy Database Credentials database-credentials prefect-sqlalchemy SQLAlchemy Connector sqlalchemy-connector prefect-twitter Twitter Credentials twitter-credentials","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#using-existing-block-types","title":"Using existing block types","text":"

      Blocks are classes that subclass the Block base class. They can be instantiated and used like normal classes.

      ","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#instantiating-blocks","title":"Instantiating blocks","text":"

      For example, to instantiate a block that stores a JSON value, use the JSON block:

      from prefect.blocks.system import JSON\n\njson_block = JSON(value={\"the_answer\": 42})\n
      ","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#saving-blocks","title":"Saving blocks","text":"

      If this JSON value needs to be retrieved later to be used within a flow or task, we can use the .save() method on the block to store the value in a block document on the Prefect database for retrieval later:

      json_block.save(name=\"life-the-universe-everything\")\n

      If you'd like to update the block value stored for a given name, you can overwrite the existing block document by setting overwrite=True:

      json_block.save(overwrite=True)\n

      Tip

      In the above example, the name \"life-the-universe-everything\" is inferred from the existing block document.

      ... or save the same block value as a new block document by setting the name parameter to a new value:

      json_block.save(name=\"actually-life-the-universe-everything\")\n

      Utilizing the UI

      Block documents can also be created and updated via the Prefect UI.

      ","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#loading-blocks","title":"Loading blocks","text":"

      The name given when saving the value stored in the JSON block can be used when retrieving the value during a flow or task run:

      from prefect import flow\nfrom prefect.blocks.system import JSON\n\n@flow\ndef what_is_the_answer():\n    json_block = JSON.load(\"life-the-universe-everything\")\n    print(json_block.value[\"the_answer\"])\n\nwhat_is_the_answer() # 42\n

      Blocks can also be loaded with a unique slug that is a combination of a block type slug and a block document name.

      To load our JSON block document from before, we can run the following:

      from prefect.blocks.core import Block\n\njson_block = Block.load(\"json/life-the-universe-everything\")\nprint(json_block.value[\"the_answer\"])  # 42\n

      Sharing Blocks

      Blocks can also be loaded by fellow Workspace Collaborators, available on Prefect Cloud.

      ","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#deleting-blocks","title":"Deleting blocks","text":"

      You can delete a block by using the .delete() method on the block:

      from prefect.blocks.core import Block\nBlock.delete(\"json/life-the-universe-everything\")\n

      You can also use the CLI to delete specific blocks with a given slug or id:

      prefect block delete json/life-the-universe-everything\n
      prefect block delete --id <my-id>\n
      ","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#creating-new-block-types","title":"Creating new block types","text":"

      To create a custom block type, define a class that subclasses Block. The Block base class builds on Pydantic's BaseModel, so custom blocks can be declared in the same manner as a Pydantic model.

      Here's a block that represents a cube and holds information about the length of each edge in inches:

      from prefect.blocks.core import Block\n\nclass Cube(Block):\n    edge_length_inches: float\n

      You can also include methods on a block to provide useful functionality. Here's the same cube block with methods to calculate the volume and surface area of the cube:

      from prefect.blocks.core import Block\n\nclass Cube(Block):\n    edge_length_inches: float\n\n    def get_volume(self):\n        return self.edge_length_inches**3\n\n    def get_surface_area(self):\n        return 6 * self.edge_length_inches**2\n

      Now the Cube block can be used to store different cube configurations that can later be used in a flow:

      from prefect import flow\n\nrubiks_cube = Cube(edge_length_inches=2.25)\nrubiks_cube.save(\"rubiks-cube\")\n\n@flow\ndef calculate_cube_surface_area(cube_name):\n    cube = Cube.load(cube_name)\n    print(cube.get_surface_area())\n\ncalculate_cube_surface_area(\"rubiks-cube\") # 30.375\n
      ","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#secret-fields","title":"Secret fields","text":"

      All block values are encrypted before being stored, but if you have values that you would not like visible in the UI or in logs, then you can use the SecretStr field type provided by Pydantic to automatically obfuscate those values. This can be useful for fields that are used to store credentials like passwords and API tokens.

      Here's an example of an AWSCredentials block that uses SecretStr:

      from typing import Optional\n\nfrom prefect.blocks.core import Block\nfrom pydantic import SecretStr  # if pydantic version >= 2.0, use: from pydantic.v1 import SecretStr\n\nclass AWSCredentials(Block):\n    aws_access_key_id: Optional[str] = None\n    aws_secret_access_key: Optional[SecretStr] = None\n    aws_session_token: Optional[str] = None\n    profile_name: Optional[str] = None\n    region_name: Optional[str] = None\n

      Because aws_secret_access_key has the SecretStr type hint assigned to it, the value of that field will not be exposed if the object is logged:

      aws_credentials_block = AWSCredentials(\n    aws_access_key_id=\"AKIAJKLJKLJKLJKLJKLJK\",\n    aws_secret_access_key=\"secret_access_key\"\n)\n\nprint(aws_credentials_block)\n# aws_access_key_id='AKIAJKLJKLJKLJKLJKLJK' aws_secret_access_key=SecretStr('**********') aws_session_token=None profile_name=None region_name=None\n

      You can also use the SecretDict field type provided by Prefect. This type allows you to add a dictionary field to your block whose values are automatically obfuscated at all levels in the UI and in logs. This is useful for blocks where the typing or structure of the secret fields is not known until configuration time.

      Here's an example of a block that uses SecretDict:

      from typing import Dict\n\nfrom prefect.blocks.core import Block\nfrom prefect.blocks.fields import SecretDict\n\n\nclass SystemConfiguration(Block):\n    system_secrets: SecretDict\n    system_variables: Dict\n\n\nsystem_configuration_block = SystemConfiguration(\n    system_secrets={\n        \"password\": \"p@ssw0rd\",\n        \"api_token\": \"token_123456789\",\n        \"private_key\": \"<private key here>\",\n    },\n    system_variables={\n        \"self_destruct_countdown_seconds\": 60,\n        \"self_destruct_countdown_stop_time\": 7,\n    },\n)\n
      system_secrets will be obfuscated when system_configuration_block is displayed, but system_variables will be shown in plain-text:

      print(system_configuration_block)\n# SystemConfiguration(\n#   system_secrets=SecretDict('{'password': '**********', 'api_token': '**********', 'private_key': '**********'}'), \n#   system_variables={'self_destruct_countdown_seconds': 60, 'self_destruct_countdown_stop_time': 7}\n# )\n
      ","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#blocks-metadata","title":"Blocks metadata","text":"

      The way that a block is displayed can be controlled by metadata fields that can be set on a block subclass.
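
      As a brief sketch, a custom block might set a few of the metadata fields listed below as class attributes; the logo URL here is a placeholder:

      from prefect.blocks.core import Block\n\nclass Cube(Block):\n    # Metadata fields that control how this block type is displayed in the UI\n    _block_type_name = \"Cube\"\n    _logo_url = \"https://example.com/cube-logo.png\"  # placeholder URL\n    _description = \"Stores the edge length of a cube, in inches.\"\n\n    edge_length_inches: float\n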

      Available metadata fields include:

      Property Description _block_type_name Display name of the block in the UI. Defaults to the class name. _block_type_slug Unique slug used to reference the block type in the API. Defaults to a lowercase, dash-delimited version of the block type name. _logo_url URL pointing to an image that should be displayed for the block type in the UI. Default to None. _description Short description of block type. Defaults to docstring, if provided. _code_example Short code snippet shown in UI for how to load/use block type. Default to first example provided in the docstring of the class, if provided.","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#nested-blocks","title":"Nested blocks","text":"

      Blocks are composable. This means you can create a block that uses functionality from another block by declaring it as an attribute on the block you're creating. It also means that each block's configuration can be changed independently, so configuration that changes on different time frames is easy to manage, and configuration can be shared across multiple use cases.

      To illustrate, here's an expanded AWSCredentials block that includes the ability to get an authenticated session via the boto3 library:

      from typing import Optional\n\nimport boto3\nfrom prefect.blocks.core import Block\nfrom pydantic import SecretStr\n\nclass AWSCredentials(Block):\n    aws_access_key_id: Optional[str] = None\n    aws_secret_access_key: Optional[SecretStr] = None\n    aws_session_token: Optional[str] = None\n    profile_name: Optional[str] = None\n    region_name: Optional[str] = None\n\n    def get_boto3_session(self):\n        return boto3.Session(\n            aws_access_key_id=self.aws_access_key_id,\n            # unwrap the SecretStr before handing the value to boto3\n            aws_secret_access_key=(\n                self.aws_secret_access_key.get_secret_value()\n                if self.aws_secret_access_key\n                else None\n            ),\n            aws_session_token=self.aws_session_token,\n            profile_name=self.profile_name,\n            region_name=self.region_name,\n        )\n

      The AWSCredentials block can be used within an S3Bucket block to provide authentication when interacting with an S3 bucket:

      import io\n\nclass S3Bucket(Block):\n    bucket_name: str\n    credentials: AWSCredentials\n\n    def read(self, key: str) -> bytes:\n        s3_client = self.credentials.get_boto3_session().client(\"s3\")\n\n        stream = io.BytesIO()\n        s3_client.download_fileobj(Bucket=self.bucket_name, Key=key, Fileobj=stream)\n\n        stream.seek(0)\n        output = stream.read()\n\n        return output\n\n    def write(self, key: str, data: bytes) -> None:\n        s3_client = self.credentials.get_boto3_session().client(\"s3\")\n        stream = io.BytesIO(data)\n        s3_client.upload_fileobj(stream, Bucket=self.bucket_name, Key=key)\n

      You can use this S3Bucket block with previously saved AWSCredentials block values in order to interact with the configured S3 bucket:

      my_s3_bucket = S3Bucket(\n    bucket_name=\"my_s3_bucket\",\n    credentials=AWSCredentials.load(\"my_aws_credentials\")\n)\n\nmy_s3_bucket.save(\"my_s3_bucket\")\n

      Saving block values like this links the two blocks, so that any changes to the values stored for the AWSCredentials block with the name my_aws_credentials will be seen the next time the block values for the S3Bucket block named my_s3_bucket are loaded.
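
      As a quick sketch of what that linking means in practice, reusing the AWSCredentials and S3Bucket classes defined above (the region value is illustrative), an update to the saved credentials block is picked up the next time the bucket block is loaded:

      # Update the shared credentials block...\ncreds = AWSCredentials.load(\"my_aws_credentials\")\ncreds.region_name = \"us-west-2\"  # illustrative value\ncreds.save(\"my_aws_credentials\", overwrite=True)\n\n# ...and the change is visible when the linked S3Bucket block is next loaded.\nrefreshed_bucket = S3Bucket.load(\"my_s3_bucket\")\nprint(refreshed_bucket.credentials.region_name)  # us-west-2\n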

      Values for nested blocks can also be hard coded by not first saving child blocks:

      my_s3_bucket = S3Bucket(\n    bucket_name=\"my_s3_bucket\",\n    credentials=AWSCredentials(\n        aws_access_key_id=\"AKIAJKLJKLJKLJKLJKLJK\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n)\n\nmy_s3_bucket.save(\"my_s3_bucket\")\n

      In the above example, the values for AWSCredentials are saved with my_s3_bucket and will not be usable with any other blocks.

      ","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#handling-updates-to-custom-block-types","title":"Handling updates to custom Block types","text":"

      Let's say that you now want to add a bucket_folder field to your custom S3Bucket block that represents the default path to read and write objects from (this field exists on our implementation).

      We can add the new field to the class definition:

      class S3Bucket(Block):\n    bucket_name: str\n    credentials: AWSCredentials\n    bucket_folder: str = None\n    ...\n

      Then register the updated block type with either Prefect Cloud or your self-hosted Prefect server.

      If you have any existing blocks of this type that were created before the update and you'd prefer to not re-create them, you can migrate them to the new version of your block type by adding the missing values:

      # Bypass Pydantic validation to allow your local Block class to load the old block version\nmy_s3_bucket_block = S3Bucket.load(\"my-s3-bucket\", validate=False)\n\n# Set the new field to an appropriate value\nmy_s3_bucket_block.bucket_folder = \"my-default-bucket-path\"\n\n# Overwrite the old block values and update the expected fields on the block\nmy_s3_bucket_block.save(\"my-s3-bucket\", overwrite=True)\n
      ","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#registering-blocks-for-use-in-the-prefect-ui","title":"Registering blocks for use in the Prefect UI","text":"

      Blocks can be registered from a Python module available in the current virtual environment with a CLI command like this:

      prefect block register --module prefect_aws.credentials\n

      This command is useful for registering all blocks found in the credentials module within Prefect Integrations.

      Or, if a block has been created in a .py file, the block can also be registered with the CLI command:

      prefect block register --file my_block.py\n

      The registered block will then be available in the Prefect UI for configuration.

      ","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/deployments-block-based/","title":"Block Based Deployments","text":"

      Workers are recommended

      This page is about the block-based deployment model. The Work Pools and Workers based deployment model simplifies the specification of a flow's infrastructure and runtime environment. If you have existing agents, you can upgrade from agents to workers to significantly enhance the experience of deploying flows.

      We encourage you to check out the new deployment experience with guided command line prompts and convenient CI/CD with prefect.yaml files.

      With remote storage blocks, you can package not only your flow code script but also any supporting files, including your custom modules, SQL scripts and any configuration files needed in your project.

      To define how your flow execution environment should be configured, you may either reference pre-configured infrastructure blocks or let Prefect create those automatically for you as anonymous blocks (this happens when you specify the infrastructure type using the --infra flag during the build process).

      Work queue affinity improved starting from Prefect 2.0.5

      Until Prefect 2.0.4, tags were used to associate flow runs with work queues. Starting in Prefect 2.0.5, tag-based work queues are deprecated. Instead, work queue names are used to explicitly direct flow runs from deployments into queues.

      Note that backward compatibility is maintained and work queues that use tag-based matching can still be created and will continue to work. However, those work queues are now considered legacy and we encourage you to use the new behavior by specifying work queues explicitly on agents and deployments.

      See Agents & Work Pools for details.

      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#deployments-and-flows","title":"Deployments and flows","text":"

      Each deployment is associated with a single flow, but any given flow can be referenced by multiple deployments.

      Deployments are uniquely identified by the combination of: flow_name/deployment_name.

      graph LR\n    F(\"my_flow\"):::yellow -.-> A(\"Deployment 'daily'\"):::tan --> W(\"my_flow/daily\"):::fgreen\n    F -.-> B(\"Deployment 'weekly'\"):::gold  --> X(\"my_flow/weekly\"):::green\n    F -.-> C(\"Deployment 'ad-hoc'\"):::dgold --> Y(\"my_flow/ad-hoc\"):::dgreen\n    F -.-> D(\"Deployment 'trigger-based'\"):::dgold --> Z(\"my_flow/trigger-based\"):::dgreen\n\n    classDef gold fill:goldenrod,stroke:goldenrod,stroke-width:4px,color:white\n    classDef yellow fill:gold,stroke:gold,stroke-width:4px\n    classDef dgold fill:darkgoldenrod,stroke:darkgoldenrod,stroke-width:4px,color:white\n    classDef tan fill:tan,stroke:tan,stroke-width:4px,color:white\n    classDef fgreen fill:forestgreen,stroke:forestgreen,stroke-width:4px,color:white\n    classDef green fill:green,stroke:green,stroke-width:4px,color:white\n    classDef dgreen fill:darkgreen,stroke:darkgreen,stroke-width:4px,color:white

      This enables you to run a single flow with different parameters, based on multiple schedules and triggers, and in different environments. This also enables you to run different versions of the same flow for testing and production purposes.

      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#deployment-definition","title":"Deployment definition","text":"

      A deployment definition captures the settings for creating a deployment object on the Prefect API. You can create the deployment definition by:

      • Running the prefect deployment build CLI command with deployment options to create a deployment.yaml deployment definition file, then running prefect deployment apply to create a deployment on the API using the settings in deployment.yaml.
      • Defining a Deployment Python object, specifying the deployment options as properties of the object, then building and applying the object using methods of Deployment.

      The minimum required information to create a deployment includes:

      • The path and filename of the file containing the flow script.
      • The name of the entrypoint flow function \u2014 this is the flow function that starts the flow and calls any additional tasks or subflows.
      • The name of the deployment.

      You may provide additional settings for the deployment. Any settings you do not explicitly specify are inferred from defaults.

      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-deployment-on-the-cli","title":"Create a deployment on the CLI","text":"

      To create a deployment on the CLI, there are two steps:

      1. Build the deployment definition file deployment.yaml. This step includes uploading your flow to its configured remote storage location, if one is specified.
      2. Create the deployment on the API.
      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#build-the-deployment","title":"Build the deployment","text":"

      To build the deployment definition file deployment.yaml, run the prefect deployment build Prefect CLI command from the folder containing your flow script and any dependencies of the script.

      $ prefect deployment build [OPTIONS] PATH\n

      The path to the flow is specified in the format path-to-script:flow-function-name \u2014 the path and filename of the flow script file, a colon, then the name of the entrypoint flow function.

      For example:

      $ prefect deployment build -n marvin -p default-agent-pool -q test flows/marvin.py:say_hi\n

      When you run this command, Prefect:

      • Creates a marvin_flow-deployment.yaml file for your deployment based on your flow code and options.
      • Uploads your flow files to the configured storage location (local by default).
      • Submits your deployment to the work queue test. The work queue test will be created if it doesn't exist.

      Uploading files may require storage filesystem libraries

      Note that the appropriate filesystem library supporting the storage location must be installed prior to building a deployment with a storage block. For example, the AWS S3 Storage block requires the s3fs library.

      Ignore files or directories from a deployment

      By default, Prefect uploads all files in the current folder to the configured storage location (local by default) when you build a deployment.

      To omit certain files or directories from your deployments, add a .prefectignore file to the root directory.

      Similar to other .ignore files, the syntax supports pattern matching, so an entry of *.pyc will ensure all .pyc files are ignored by the deployment call when uploading to remote storage.
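
      For example, a minimal sketch of a .prefectignore file (the entries are hypothetical, not defaults):

      __pycache__/\n*.pyc\n.env\n.git/\ndata/raw/\n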

      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#deployment-build-options","title":"Deployment build options","text":"

      You may specify additional options to further customize your deployment.

      Options Description PATH Path, filename, and flow name of the flow definition. (Required) --apply, -a When provided, automatically registers the resulting deployment with the API. --cron TEXT A cron string that will be used to set a CronSchedule on the deployment. For example, --cron \"*/1 * * * *\" to create flow runs from that deployment every minute. --help Display help for available commands and options. --infra-block TEXT, -ib The infrastructure block to use, in block-type/block-name format. --infra, -i The infrastructure type to use. (Default is Process) --interval INTEGER An integer specifying an interval (in seconds) that will be used to set an IntervalSchedule on the deployment. For example, --interval 60 to create flow runs from that deployment every minute. --name TEXT, -n The name of the deployment. --output TEXT, -o Optional location for the YAML manifest generated as a result of the build step. You can version-control that file, but it's not required since the CLI can generate everything you need to define a deployment. --override TEXT One or more optional infrastructure overrides provided as a dot delimited path. For example, specify an environment variable: env.env_key=env_value. For Kubernetes, specify customizations: customizations='[{\"op\": \"add\",\"path\": \"/spec/template/spec/containers/0/resources/limits\", \"value\": {\"memory\": \"8Gi\",\"cpu\": \"4000m\"}}]' (note the string format). --param An optional parameter override, values are parsed as JSON strings. For example, --param question=ultimate --param answer=42. --params An optional parameter override in a JSON string format. For example, --params=\\'{\"question\": \"ultimate\", \"answer\": 42}\\'. --path An optional path to specify a subdirectory of remote storage to upload to, or to point to a subdirectory of a locally stored flow. --pool TEXT, -p The work pool that will handle this deployment's runs. \u2502 --rrule TEXT An RRule that will be used to set an RRuleSchedule on the deployment. For example, --rrule 'FREQ=HOURLY;BYDAY=MO,TU,WE,TH,FR;BYHOUR=9,10,11,12,13,14,15,16,17' to create flow runs from that deployment every hour but only during business hours. --skip-upload When provided, skips uploading this deployment's files to remote storage. --storage-block TEXT, -sb The storage block to use, in block-type/block-name or block-type/block-name/path format. Note that the appropriate library supporting the storage filesystem must be installed. --tag TEXT, -t One or more optional tags to apply to the deployment. --version TEXT, -v An optional version for the deployment. This could be a git commit hash if you use this command from a CI/CD pipeline. --work-queue TEXT, -q The work queue that will handle this deployment's runs. It will be created if it doesn't already exist. Defaults to None. Note that if a work queue is not set, work will not be scheduled.","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#block-identifiers","title":"Block identifiers","text":"

      When specifying a storage block with the -sb or --storage-block flag, you may specify the block by passing its slug. The storage block slug is formatted as block-type/block-name.

      For example, s3/example-block is the slug for an S3 block named example-block.

      In addition, when passing the storage block slug, you may pass just the block slug or the block slug and a path.

      • block-type/block-name indicates just the block, including any path included in the block configuration.
      • block-type/block-name/path indicates a storage path in addition to any path included in the block configuration.

      When specifying an infrastructure block with the -ib or --infra-block flag, you specify the block by passing its slug. The infrastructure block slug is formatted as block-type/block-name.

      Block name Block class name Block type for a slug Azure Azure azure Docker Container DockerContainer docker-container GitHub GitHub github GCS GCS gcs Kubernetes Job KubernetesJob kubernetes-job Process Process process Remote File System RemoteFileSystem remote-file-system S3 S3 s3 SMB SMB smb GitLab Repository GitLabRepository gitlab-repository

      Note that the appropriate library supporting the storage filesystem must be installed prior to building a deployment with a storage block. For example, the AWS S3 Storage block requires the s3fs library. See Storage for more information.

      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#deploymentyaml","title":"deployment.yaml","text":"

      A deployment's YAML file configures additional settings needed to create a deployment on the server.

      A single flow may have multiple deployments created for it, with different schedules, tags, and so on. A single flow definition may have multiple deployment YAML files referencing it, each specifying different settings. The only requirement is that each deployment must have a unique name.

      The default {flow-name}-deployment.yaml filename may be edited as needed with the --output flag to prefect deployment build.

      ###\n### A complete description of a Prefect Deployment for flow 'Cat Facts'\n###\nname: catfact\ndescription: null\nversion: c0fc95308d8137c50d2da51af138aa23\n# The work queue that will handle this deployment's runs\nwork_queue_name: test\nwork_pool_name: null\ntags: []\nparameters: {}\nschedule: null\ninfra_overrides: {}\ninfrastructure:\n  type: process\n  env: {}\n  labels: {}\n  name: null\n  command:\n  - python\n  - -m\n  - prefect.engine\n  stream_output: true\n###\n### DO NOT EDIT BELOW THIS LINE\n###\nflow_name: Cat Facts\nmanifest_path: null\nstorage: null\npath: /Users/terry/test/testflows/catfact\nentrypoint: catfact.py:catfacts_flow\nparameter_openapi_schema:\n  title: Parameters\n  type: object\n  properties:\n    url:\n      title: url\n  required:\n  - url\n  definitions: null\n

      Editing deployment.yaml

      Note the big DO NOT EDIT comment in your deployment's YAML: In practice, anything above this block can be freely edited before running prefect deployment apply to create the deployment on the API.

      We recommend editing most of these fields from the CLI or Prefect UI for convenience.

      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#parameters-in-deployments","title":"Parameters in deployments","text":"

      You may provide default parameter values in the deployment.yaml configuration, and these parameter values will be used for flow runs based on the deployment.

      To configure default parameter values, add them to the parameters: {} line of deployment.yaml as JSON key-value pairs. The parameter list configured in deployment.yaml must match the parameters expected by the entrypoint flow function.

      parameters: {\"name\": \"Marvin\", \"num\": 42, \"url\": \"https://catfact.ninja/fact\"}\n

      Passing **kwargs as flow parameters

      You may pass **kwargs as a deployment parameter by providing a \"kwargs\": {} JSON object containing the key-value pairs of any passed keyword arguments.

      parameters: {\"name\": \"Marvin\", \"kwargs\":{\"cattype\":\"tabby\",\"num\": 42}\n

      You can edit default parameters for deployments in the Prefect UI, and you can override default parameter values when creating ad-hoc flow runs via the Prefect UI.

      To edit parameters in the Prefect UI, go to the details page for a deployment, then select Edit from the commands menu. If you change parameter values, the new values are used for all future flow runs based on the deployment.

      To create an ad-hoc flow run with different parameter values, go to the details page for a deployment, select Run, then select Custom. You will be able to provide custom values for any editable deployment fields. Under Parameters, select Custom. Provide the new values, then select Save. Select Run to begin the flow run with custom values.

      If you want the Prefect API to verify the parameter values passed to a flow run against the schema defined by parameter_openapi_schema, set enforce_parameter_schema to true.

      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-deployment","title":"Create a deployment","text":"

      When you've configured deployment.yaml for a deployment, you can create the deployment on the API by running the prefect deployment apply Prefect CLI command.

      $ prefect deployment apply catfacts_flow-deployment.yaml\n

      For example:

      $ prefect deployment apply ./catfacts_flow-deployment.yaml\nSuccessfully loaded 'catfact'\nDeployment '76a9f1ac-4d8c-4a92-8869-615bec502685' successfully created.\n

      prefect deployment apply accepts an optional --upload flag that, when provided, uploads this deployment's files to remote storage.

      Once the deployment has been created, you'll see it in the Prefect UI and can inspect it using the CLI.

      $ prefect deployment ls\n                               Deployments\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name                           \u2503 ID                                   \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 Cat Facts/catfact              \u2502 76a9f1ac-4d8c-4a92-8869-615bec502685 \u2502\n\u2502 leonardo_dicapriflow/hello_leo \u2502 fb4681d7-aa5a-4617-bf6f-f67e6f964984 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n

      When you run a deployed flow with Prefect, the following happens:

      • The user runs the deployment, which creates a flow run. (The API creates flow runs automatically for deployments with schedules.)
      • An agent picks up the flow run from a work queue and uses an infrastructure block to create infrastructure for the run.
      • The flow run executes within the infrastructure.

      Agents and work pools enable the Prefect orchestration engine and API to run deployments in your local execution environments. To execute deployed flow runs, you need to configure at least one agent.

      Scheduled flow runs

      Scheduled flow runs will not be created unless the scheduler is running with either Prefect Cloud or a local Prefect server started with prefect server start.

      Scheduled flow runs will not run unless an appropriate agent and work pool are configured.

      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-deployment-from-a-python-object","title":"Create a deployment from a Python object","text":"

      You can also create deployments from Python scripts by using the prefect.deployments.Deployment class.

      Create a new deployment using configuration defaults for an imported flow:

      from my_project.flows import my_flow\nfrom prefect.deployments import Deployment\n\ndeployment = Deployment.build_from_flow(\n    flow=my_flow,\n    name=\"example-deployment\", \n    version=1, \n    work_queue_name=\"demo\",\n    work_pool_name=\"default-agent-pool\",\n)\ndeployment.apply()\n

      Create a new deployment with a pre-defined storage block and an infrastructure override:

      from my_project.flows import my_flow\nfrom prefect.deployments import Deployment\nfrom prefect.filesystems import S3\n\nstorage = S3.load(\"dev-bucket\") # load a pre-defined block\n\ndeployment = Deployment.build_from_flow(\n    flow=my_flow,\n    name=\"s3-example\",\n    version=2,\n    work_queue_name=\"aws\",\n    work_pool_name=\"default-agent-pool\",\n    storage=storage,\n    infra_overrides={\n        \"env\": {\n            \"ENV_VAR\": \"value\"\n        }\n    },\n)\n\ndeployment.apply()\n

      If you have settings that you want to share from an existing deployment, you can load those settings:

      deployment = Deployment(\n    name=\"a-name-you-used\", \n    flow_name=\"name-of-flow\"\n)\ndeployment.load() # loads server-side settings\n

      Once the existing deployment settings are loaded, you may update them as needed by changing deployment properties.
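
      For example, a minimal sketch that continues from the snippet above, changes a couple of properties, and re-applies the deployment (the new values are illustrative and assume the Deployment object permits attribute assignment):

      # continuing from the snippet above\ndeployment.version = \"2\"\ndeployment.tags = [\"dev\"]\n\n# re-apply to persist the updated settings to the API\ndeployment.apply()\n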

      View all of the parameters for the Deployment object in the Python API documentation.

      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#deployment-api-representation","title":"Deployment API representation","text":"

      When you create a deployment, it is constructed from deployment definition data you provide and additional properties set by client-side utilities.

      Deployment properties include:

      Property Description id An auto-generated UUID ID value identifying the deployment. created A datetime timestamp indicating when the deployment was created. updated A datetime timestamp indicating when the deployment was last changed. name The name of the deployment. version The version of the deployment description A description of the deployment. flow_id The id of the flow associated with the deployment. schedule An optional schedule for the deployment. is_schedule_active Boolean indicating whether the deployment schedule is active. Default is True. infra_overrides One or more optional infrastructure overrides parameters An optional dictionary of parameters for flow runs scheduled by the deployment. tags An optional list of tags for the deployment. work_queue_name The optional work queue that will handle the deployment's run parameter_openapi_schema JSON schema for flow parameters. enforce_parameter_schema Whether the API should validate the parameters passed to a flow run against the schema defined by parameter_openapi_schema path The path to the deployment.yaml file entrypoint The path to a flow entry point storage_document_id Storage block configured for the deployment. infrastructure_document_id Infrastructure block configured for the deployment.

      You can inspect a deployment using the CLI with the prefect deployment inspect command, referencing the deployment with <flow_name>/<deployment_name>.

      $ prefect deployment inspect 'Cat Facts/catfact'\n{\n    'id': '76a9f1ac-4d8c-4a92-8869-615bec502685',\n    'created': '2022-07-26T03:48:14.723328+00:00',\n    'updated': '2022-07-26T03:50:02.043238+00:00',\n    'name': 'catfact',\n    'version': '899b136ebc356d58562f48d8ddce7c19',\n    'description': None,\n    'flow_id': '2c7b36d1-0bdb-462e-bb97-f6eb9fef6fd5',\n    'schedule': None,\n    'is_schedule_active': True,\n    'infra_overrides': {},\n    'parameters': {},\n    'tags': [],\n    'work_queue_name': 'test',\n    'parameter_openapi_schema': {\n        'title': 'Parameters',\n        'type': 'object',\n        'properties': {'url': {'title': 'url'}},\n        'required': ['url']\n    },\n    'path': '/Users/terry/test/testflows/catfact',\n    'entrypoint': 'catfact.py:catfacts_flow',\n    'manifest_path': None,\n    'storage_document_id': None,\n    'infrastructure_document_id': 'f958db1c-b143-4709-846c-321125247e07',\n    'infrastructure': {\n        'type': 'process',\n        'env': {},\n        'labels': {},\n        'name': None,\n        'command': ['python', '-m', 'prefect.engine'],\n        'stream_output': True\n    }\n}\n
      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-flow-run-from-a-deployment","title":"Create a flow run from a deployment","text":"","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-flow-run-with-a-schedule","title":"Create a flow run with a schedule","text":"

      If you specify a schedule for a deployment, the deployment will execute its flow automatically on that schedule as long as a Prefect server and agent are running. Prefect Cloud creates scheduled flow runs automatically, and they will run on schedule if an agent is configured to pick up flow runs for the deployment.

      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-flow-run-with-an-event-trigger","title":"Create a flow run with an event trigger","text":"

      Deployment triggers are only available in Prefect Cloud

      Deployments can take an optional trigger specification, which configures an automation to run the deployment based on the presence or absence of events and, optionally, passes event data into the deployment run as parameters via Jinja templating.

      triggers:\n  - enabled: true\n    match:\n      prefect.resource.id: prefect.flow-run.*\n    expect:\n      - prefect.flow-run.Completed\n    match_related:\n      prefect.resource.name: prefect.flow.etl-flow\n      prefect.resource.role: flow\n    parameters:\n      param_1: \"{{ event }}\"\n

      When applied, this deployment will start a flow run upon the completion of the upstream flow specified in the match_related key, with the flow run passed in as a parameter. Triggers can be configured to respond to the presence or absence of arbitrary internal or external events. The trigger system and API are detailed in Automations.

      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-flow-run-with-prefect-ui","title":"Create a flow run with Prefect UI","text":"

      In the Prefect UI, you can click the Run button next to any deployment to execute an ad hoc flow run for that deployment.

      The prefect deployment CLI command provides commands for managing and running deployments locally.

      Command Description apply Create or update a deployment from a YAML file. build Generate a deployment YAML from /path/to/file.py:flow_function. delete Delete a deployment. inspect View details about a deployment. ls View all deployments or deployments for specific flows. pause-schedule Pause schedule of a given deployment. resume-schedule Resume schedule of a given deployment. run Create a flow run for the given flow and deployment. schedule Commands for interacting with your deployment's schedules. set-schedule Set schedule for a given deployment.

      Deprecated Schedule Commands

      The pause-schedule, resume-schedule, and set-schedule commands are deprecated due to the introduction of multi-schedule support for deployments. Use the new prefect deployment schedule command for enhanced flexibility and control over your deployment schedules.

      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#create-a-flow-run-in-a-python-script","title":"Create a flow run in a Python script","text":"

      You can create a flow run from a deployment in a Python script with the run_deployment function.

      from prefect.deployments import run_deployment\n\n\ndef main():\n    response = run_deployment(name=\"flow-name/deployment-name\")\n    print(response)\n\n\nif __name__ == \"__main__\":\n   main()\n

      PREFECT_API_URL setting for agents

      You'll need to configure agents and work pools that can create flow runs for deployments in remote environments. PREFECT_API_URL must be set for the environment in which your agent is running.

      If you want the agent to communicate with Prefect Cloud from a remote execution environment such as a VM or Docker container, you must configure PREFECT_API_URL in that environment.

      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments-block-based/#examples","title":"Examples","text":"
      • How to deploy Prefect flows to AWS
      • How to deploy Prefect flows to GCP
      • How to deploy Prefect flows to Azure
      • How to deploy Prefect flows using files stored locally
      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker"]},{"location":"concepts/deployments/","title":"Deployments","text":"

      Deployments are server-side representations of flows. They store the crucial metadata needed for remote orchestration including when, where, and how a workflow should run. Deployments elevate workflows from functions that you must call manually to API-managed entities that can be triggered remotely.

      Here we will focus largely on the metadata that defines a deployment and how it is used. Different ways of creating a deployment populate these fields differently.

      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#overview","title":"Overview","text":"

      Every Prefect deployment references one and only one \"entrypoint\" flow (though that flow may itself call any number of subflows). Different deployments may reference the same underlying flow, a useful pattern when developing or promoting workflow changes through staged environments.

      The complete schema that defines a deployment is as follows:

      class Deployment:\n    \"\"\"\n    Structure of the schema defining a deployment\n    \"\"\"\n\n    # required defining data\n    name: str \n    flow_id: UUID\n    entrypoint: str\n    path: str = None\n\n    # workflow scheduling and parametrization\n    parameters: Optional[Dict[str, Any]] = None\n    parameter_openapi_schema: Optional[Dict[str, Any]] = None\n    schedules: list[Schedule] = None\n    paused: bool = False\n    trigger: Trigger = None\n\n    # metadata for bookkeeping\n    version: str = None\n    description: str = None\n    tags: list = None\n\n    # worker-specific fields\n    work_pool_name: str = None\n    work_queue_name: str = None\n    infra_overrides: Optional[Dict[str, Any]] = None\n    pull_steps: Optional[Dict[str, Any]] = None\n

      All methods for creating Prefect deployments are interfaces for populating this schema. Let's look at each section in turn.

      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#required-data","title":"Required data","text":"

      Deployments universally require both a name and a reference to an underlying Flow. In almost all instances of deployment creation, users do not need to concern themselves with the flow_id as most interfaces will only need the flow's name. Note that the deployment name is not required to be unique across all deployments but is required to be unique for a given flow ID. As a consequence, you will often see references to the deployment's unique identifying name {FLOW_NAME}/{DEPLOYMENT_NAME}. For example, triggering a run of a deployment from the Prefect CLI can be done via:

      prefect deployment run my-first-flow/my-first-deployment\n

      The other two fields are less obvious:

      • path: the path can generally be interpreted as the runtime working directory for the flow. For example, if a deployment references a workflow defined within a Docker image, the path will be the absolute path to the parent directory where that workflow will run anytime the deployment is triggered. This interpretation is more subtle in the case of flows defined in remote filesystems.
      • entrypoint: the entrypoint of a deployment is a relative reference to a function decorated as a flow that exists on some filesystem. It is always specified relative to the path. Entrypoints use Python's standard path-to-object syntax (e.g., path/to/file.py:function_name or simply path:object).

      The entrypoint must reference the same flow as the flow ID.

      Note that Prefect requires that deployments reference flows defined within Python files. Flows defined within interactive REPLs or notebooks cannot currently be deployed as such. They are still valid flows that will be monitored by the API and observable in the UI whenever they are run, but Prefect cannot trigger them.

      Deployments do not contain code definitions

      Deployment metadata references code that exists in potentially diverse locations within your environment. This separation of concerns means that your flow code stays within your storage and execution infrastructure and never lives on the Prefect server or database.

      This is the heart of the Prefect hybrid model: there's a boundary between your proprietary assets, such as your flow code, and the Prefect backend (including Prefect Cloud).

      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#scheduling-and-parametrization","title":"Scheduling and parametrization","text":"

      One of the primary motivations for creating deployments of flows is to remotely schedule and trigger them. Just as flows can be called as functions with different input values, so can deployments be triggered or scheduled with different values through the use of parameters.

      The six fields here capture the necessary metadata to perform such actions:

      • schedules: a list of schedule objects. Most of the convenient interfaces for creating deployments allow users to avoid creating this object themselves. For example, when updating a deployment schedule in the UI basic information such as a cron string or interval is all that's required.
      • trigger (Cloud-only): triggers allow you to define event-based rules for running a deployment. For more information see Automations.
      • parameter_openapi_schema: an OpenAPI compatible schema that defines the types and defaults for the flow's parameters. This is used by both the UI and the backend to expose options for creating manual runs as well as type validation.
      • parameters: default values of flow parameters that this deployment will pass on each run. These can be overwritten through a trigger or when manually creating a custom run.
      • enforce_parameter_schema: a boolean flag that determines whether the API should validate the parameters passed to a flow run against the schema defined by parameter_openapi_schema.

      Scheduling is asynchronous and decoupled

      Because deployments are nothing more than metadata, runs can be created at any time. Note that pausing a schedule, updating your deployment, and other actions reset your auto-scheduled runs.

      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#running-a-deployed-flow-from-within-python-flow-code","title":"Running a deployed flow from within Python flow code","text":"

      Prefect provides a run_deployment function that can be used to schedule the run of an existing deployment when your Python code executes.

      from prefect.deployments import run_deployment\n\ndef main():\n    run_deployment(name=\"my_flow_name/my_deployment_name\")\n

      Run a deployment without blocking

      By default, run_deployment blocks until the scheduled flow run finishes executing. Pass timeout=0 to return immediately and not block.
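
      For example, a minimal sketch that schedules the run and returns immediately (the deployment name is illustrative):

      from prefect.deployments import run_deployment\n\n# returns as soon as the run is scheduled instead of waiting for it to finish\nflow_run = run_deployment(name=\"my_flow_name/my_deployment_name\", timeout=0)\n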

      If you call run_deployment from within a flow or task, the scheduled flow run will be linked to the calling flow run (or the calling task's flow run) as a subflow run by default.

      Subflow runs have different behavior than regular flow runs. For example, a subflow run can't be suspended independently of its parent flow. If you'd rather not link the scheduled flow run to the calling flow or task run, you can disable this behavior by passing as_subflow=False:

      from prefect import flow\nfrom prefect.deployments import run_deployment\n\n\n@flow\ndef my_flow():\n    # The scheduled flow run will not be linked to this flow as a subflow.\n    run_deployment(name=\"my_other_flow/my_deployment_name\", as_subflow=False)\n

      The return value of run_deployment is a FlowRun object containing metadata about the scheduled run. You can use this object to retrieve information about the run after calling run_deployment:

      import asyncio\n\nfrom prefect import get_client\nfrom prefect.deployments import run_deployment\n\nasync def main():\n    flow_run = await run_deployment(name=\"my_flow_name/my_deployment_name\")\n    flow_run_id = flow_run.id\n\n    # If you save the flow run's ID, you can use it later to retrieve\n    # flow run metadata again, e.g. to check if it's completed.\n    async with get_client() as client:\n        flow_run = await client.read_flow_run(flow_run_id)\n        print(f\"Current state of the flow run: {flow_run.state}\")\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n

      Using the Prefect client

      For more information on using the Prefect client to interact with Prefect's REST API, see our guide.

      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#versioning-and-bookkeeping","title":"Versioning and bookkeeping","text":"

      Versions, descriptions and tags are omnipresent fields throughout Prefect that can be easy to overlook. However, putting some extra thought into how you use these fields can pay dividends down the road.

      • version: versions are always set by the client and can be any arbitrary string. We recommend tightly coupling this field on your deployments to your software development lifecycle. For example, if you leverage git to manage code changes, use either a tag or commit hash in this field (see the sketch after this list). If you don't set a value for the version, Prefect will compute a hash.
      • description: the description field of a deployment is a place to provide rich reference material for downstream stakeholders such as intended use and parameter documentation. Markdown formatting will be rendered in the Prefect UI, allowing for section headers, links, tables, and other formatting. If not provided explicitly, Prefect will use the docstring of your flow function as a default value.
      • tags: tags are a mechanism for grouping related work together across a diverse set of objects. Tags set on a deployment will be inherited by that deployment's flow runs. These tags can then be used to filter what runs are displayed on the primary UI dashboard, allowing you to customize different views into your work. In addition, in Prefect Cloud you can easily find objects through searching by tag.
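
      For example, here is a minimal sketch of coupling the deployment version to a git commit hash using the Deployment.build_from_flow interface shown earlier (the flow import, deployment name, and git invocation are illustrative):

      import subprocess\n\nfrom my_project.flows import my_flow\nfrom prefect.deployments import Deployment\n\n# use the short commit hash of the current checkout as the deployment version\ngit_sha = subprocess.check_output(\n    [\"git\", \"rev-parse\", \"--short\", \"HEAD\"]\n).decode().strip()\n\ndeployment = Deployment.build_from_flow(\n    flow=my_flow,\n    name=\"example-deployment\",\n    version=git_sha,\n)\ndeployment.apply()\n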

      All of these bits of metadata can be leveraged to great effect by injecting them into the processes that Prefect is orchestrating. For example, you can use both run ID and versions to organize files that you produce from your workflows, or associate your flow run's tags with the metadata of a job it orchestrates. This metadata is available during execution through Prefect runtime.
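
      For example, a minimal sketch of reading that metadata from inside a flow (the output path is hypothetical):

      from prefect import flow\nfrom prefect.runtime import flow_run\n\n@flow\ndef my_flow():\n    # organize output files by the current flow run's ID\n    output_path = f\"results/{flow_run.id}.csv\"\n    print(f\"Writing to {output_path} with tags {flow_run.tags}\")\n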

      Everything has a version

      Deployments aren't the only entity in Prefect with a version attached; both flows and tasks also have versions that can be set through their respective decorators. These versions will be sent to the API anytime the flow or task is run and thereby allow you to audit your changes across all levels.

      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#workers-and-work-pools","title":"Workers and Work Pools","text":"

      Workers and work pools are an advanced deployment pattern that allow you to dynamically provision infrastructure for each flow run. In addition, the work pool job template interface allows users to create and govern opinionated interfaces to their workflow infrastructure. To do this, a deployment using workers needs to evaluate the following fields:

      • work_pool_name: the name of the work pool this deployment will be associated with. Work pool types mirror infrastructure types and therefore the decision here affects the options available for the other fields.
      • work_queue_name: if you are using work queues to either manage priority or concurrency, you can associate a deployment with a specific queue within a work pool using this field.
      • infra_overrides: often called job_variables within various interfaces, this field allows deployment authors to customize whatever infrastructure options have been exposed on this work pool. This field is often used for things such as Docker image names, Kubernetes annotations and limits, and environment variables.
      • pull_steps: a JSON description of steps that should be performed to retrieve flow code or configuration and prepare the runtime environment for workflow execution.

      Pull steps allow users to highly decouple their workflow architecture. For example, a common use of pull steps is to dynamically pull code from remote filesystems such as GitHub with each run of their deployment.

      For more information see the guide to deploying with a worker.

      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#two-approaches-to-deployments","title":"Two approaches to deployments","text":"

      There are two primary ways to deploy flows with Prefect, differentiated by how much control Prefect has over the infrastructure in which the flows run.

      In one setup, deploying Prefect flows is analogous to deploying a webserver - users author their workflows and then start a long-running process (often within a Docker container) that is responsible for managing all of the runs for the associated deployment(s).

      In the other setup, you do a little extra up-front work to set up a work pool and a base job template that defines how individual flow runs will be submitted to infrastructure.

      Prefect provides several types of work pools corresponding to different types of infrastructure. Prefect Cloud provides a Prefect Managed work pool option that is the simplest way to run workflows remotely. A cloud-provider account, such as AWS, is not required with a Prefect Managed work pool.

      Some work pool types require a client-side worker to submit job definitions to the appropriate infrastructure with each run.

      Each of these setups can support production workloads. The choice ultimately boils down to your use case and preferences. Read further to decide which setup is best for your situation.

      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#serving-flows-on-long-lived-infrastructure","title":"Serving flows on long-lived infrastructure","text":"

      When you have several flows running regularly, the serve method of the Flow object or the serve utility is a great option for managing multiple flows simultaneously.

      Once you have authored your flow and decided on its deployment settings as described above, all that's left is to run this long-running process in a location of your choosing. The process will stay in communication with the Prefect API, monitoring for work and submitting each run within an individual subprocess. Note that because runs are submitted to subprocesses, any external infrastructure configuration will need to be set up beforehand and kept associated with this process.
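
      For example, a minimal sketch of serving a flow with this approach (the flow and deployment names are illustrative):

      from prefect import flow\n\n@flow\ndef my_flow():\n    print(\"Hello from a served flow!\")\n\nif __name__ == \"__main__\":\n    # starts a long-running process that submits runs for this deployment\n    my_flow.serve(name=\"my-served-deployment\")\n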

      This approach has many benefits:

      • Users are in complete control of their infrastructure, and anywhere the \"serve\" Python process can run is a suitable deployment environment.
      • It is simple to reason about.
      • Creating deployments requires a minimal set of decisions.
      • Iteration speed is fast.

      However, there are a few reasons you might consider running flows on dynamically provisioned infrastructure with work pools instead:

      • Flows that have expensive infrastructure needs may be more costly in this setup due to the long-running process.
      • Flows with heterogeneous infrastructure needs across runs will be more difficult to configure and schedule.
      • Large volumes of deployments can be harder to track.
      • If your internal team structure requires that deployment authors be members of a different team than the team managing infrastructure, the work pool interface may be preferred.
      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/deployments/#dynamically-provisioning-infrastructure-with-work-pools","title":"Dynamically provisioning infrastructure with work pools","text":"

      Work pools allow Prefect to exercise greater control of the infrastructure on which flows run. Options for serverless work pools allow you to scale to zero when workflows aren't running. Prefect even provides you with the ability to provision cloud infrastructure via a single CLI command, if you use a Prefect Cloud push work pool option.

      With work pools:

      • You can configure and monitor infrastructure configuration within the Prefect UI.
      • Infrastructure is ephemeral and dynamically provisioned.
      • Prefect is more infrastructure-aware and therefore collects more event data from your infrastructure by default.
      • Highly decoupled setups are possible.

      You don't have to commit to one approach

      You are not required to use only one of these approaches for your deployments. You can mix and match approaches based on the needs of each flow. Further, you can change the deployment approach for a particular flow as its needs evolve. For example, you might use workers for your expensive machine learning pipelines, but use the serve mechanics for smaller, more frequent file-processing pipelines.

      ","tags":["orchestration","flow runs","deployments","schedules","triggers","infrastructure","storage","work pool","worker","run_deployment"],"boost":2},{"location":"concepts/filesystems/","title":"Filesystems","text":"

      A filesystem block is an object that allows you to read and write data from paths. Prefect provides multiple built-in file system types that cover a wide range of use cases.

      • LocalFileSystem
      • RemoteFileSystem
      • Azure
      • GitHub
      • GitLab
      • GCS
      • S3
      • SMB

      Additional file system types are available in Prefect Collections.

      ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#local-filesystem","title":"Local filesystem","text":"

      The LocalFileSystem block enables interaction with the files in your current development environment.

      LocalFileSystem properties include:

      Property Description basepath String path to the location of files on the local filesystem. Access to files outside of the base path will not be allowed.
      from prefect.filesystems import LocalFileSystem\n\nfs = LocalFileSystem(basepath=\"/foo/bar\")\n

      Limited access to local file system

      Be aware that LocalFileSystem access is limited to the exact path provided. This file system may not be ideal for some use cases. The execution environment for your workflows may not have the same file system as the environment you are writing and deploying your code on.

      Use of this file system can limit the availability of results after a flow run has completed or prevent the code for a flow from being retrieved successfully at the start of a run.

      ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#remote-file-system","title":"Remote file system","text":"

      The RemoteFileSystem block enables interaction with arbitrary remote file systems. Under the hood, RemoteFileSystem uses fsspec and supports any file system that fsspec supports.

      RemoteFileSystem properties include:

      Property Description basepath String path to the location of files on the remote filesystem. Access to files outside of the base path will not be allowed. settings Dictionary containing extra parameters required to access the remote file system.

      The file system is specified using a protocol:

      • s3://my-bucket/my-folder/ will use S3
      • gcs://my-bucket/my-folder/ will use GCS
      • az://my-bucket/my-folder/ will use Azure

      For example, to use it with Amazon S3:

      from prefect.filesystems import RemoteFileSystem\n\nblock = RemoteFileSystem(basepath=\"s3://my-bucket/folder/\")\nblock.save(\"dev\")\n

      You may need to install additional libraries to use some remote storage types.

      ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#remotefilesystem-examples","title":"RemoteFileSystem examples","text":"

      How can we use RemoteFileSystem to store our flow code? The following is a use case where we use MinIO as a storage backend:

      from prefect.filesystems import RemoteFileSystem\n\nminio_block = RemoteFileSystem(\n    basepath=\"s3://my-bucket\",\n    settings={\n        \"key\": \"MINIO_ROOT_USER\",\n        \"secret\": \"MINIO_ROOT_PASSWORD\",\n        \"client_kwargs\": {\"endpoint_url\": \"http://localhost:9000\"},\n    },\n)\nminio_block.save(\"minio\")\n
      ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#azure","title":"Azure","text":"

      The Azure file system block enables interaction with Azure Datalake and Azure Blob Storage. Under the hood, the Azure block uses adlfs.

      Azure properties include:

      Property Description bucket_path String path to the location of files on the remote filesystem. Access to files outside of the bucket path will not be allowed. azure_storage_connection_string Azure storage connection string. azure_storage_account_name Azure storage account name. azure_storage_account_key Azure storage account key. azure_storage_tenant_id Azure storage tenant ID. azure_storage_client_id Azure storage client ID. azure_storage_client_secret Azure storage client secret. azure_storage_anon Anonymous authentication, disable to use DefaultAzureCredential.

      To create a block:

      from prefect.filesystems import Azure\n\nblock = Azure(bucket_path=\"my-bucket/folder/\")\nblock.save(\"dev\")\n

      To use it in a deployment:

      prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb az/dev\n

      You need to install adlfs to use it.

      ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#github","title":"GitHub","text":"

      The GitHub filesystem block enables interaction with GitHub repositories. This block is read-only and works with both public and private repositories.

      GitHub properties include:

      Property Description reference An optional reference to pin to, such as a branch name or tag. repository The URL of a GitHub repository to read from, in either HTTPS or SSH format. access_token A GitHub Personal Access Token (PAT) with repo scope.

      To create a block:

      from prefect.filesystems import GitHub\n\nblock = GitHub(\n    repository=\"https://github.com/my-repo/\",\n    access_token=<my_access_token> # only required for private repos\n)\nblock.get_directory(\"folder-in-repo\") # specify a subfolder of repo\nblock.save(\"dev\")\n

      To use it in a deployment:

      prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb github/dev -a\n
      ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#gitlabrepository","title":"GitLabRepository","text":"

      The GitLabRepository block is read-only and works with private GitLab repositories.

      GitLabRepository properties include:

      Property Description reference An optional reference to pin to, such as a branch name or tag. repository The URL of a GitLab repository to read from, in either HTTPS or SSH format. credentials A GitLabCredentials block with Personal Access Token (PAT) with read_repository scope.

      To create a block:

      from prefect_gitlab.credentials import GitLabCredentials\nfrom prefect_gitlab.repositories import GitLabRepository\n\ngitlab_creds = GitLabCredentials(token=\"YOUR_GITLAB_ACCESS_TOKEN\")\ngitlab_repo = GitLabRepository(\n    repository=\"https://gitlab.com/yourorg/yourrepo.git\",\n    reference=\"main\",\n    credentials=gitlab_creds,\n)\ngitlab_repo.save(\"dev\")\n

      To use it in a deployment (and apply):

      prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb gitlab-repository/dev -a\n

      Note that to use this block, you need to install the prefect-gitlab collection.

      ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#gcs","title":"GCS","text":"

      The GCS file system block enables interaction with Google Cloud Storage. Under the hood, GCS uses gcsfs.

      GCS properties include:

      Property Description bucket_path A GCS bucket path service_account_info The contents of a service account keyfile as a JSON string. project The project the GCS bucket resides in. If not provided, the project will be inferred from the credentials or environment.

      To create a block:

      from prefect.filesystems import GCS\n\nblock = GCS(bucket_path=\"my-bucket/folder/\")\nblock.save(\"dev\")\n

      To use it in a deployment:

      prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb gcs/dev\n

You need to install gcsfs to use it.

      ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#s3","title":"S3","text":"

      The S3 file system block enables interaction with Amazon S3. Under the hood, S3 uses s3fs.

      S3 properties include:

• bucket_path: An S3 bucket path.
• aws_access_key_id: AWS Access Key ID.
• aws_secret_access_key: AWS Secret Access Key.

      To create a block:

      from prefect.filesystems import S3\n\nblock = S3(bucket_path=\"my-bucket/folder/\")\nblock.save(\"dev\")\n

      To use it in a deployment:

      prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb s3/dev\n

You need to install s3fs to use this block.

      ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#smb","title":"SMB","text":"

The SMB file system block enables interaction with SMB shared network storage. Under the hood, SMB uses smbprotocol. It is used to connect to Windows-based SMB shares from Linux-based Prefect flows. The SMB file system block can copy files, but it cannot create directories.

      SMB properties include:

• basepath: String path to the location of files on the remote filesystem. Access to files outside of the base path will not be allowed.
• smb_host: Hostname or IP address where SMB network share is located.
• smb_port: Port for SMB network share (defaults to 445).
• smb_username: SMB username with read/write permissions.
• smb_password: SMB password.

      To create a block:

      from prefect.filesystems import SMB\n\nblock = SMB(basepath=\"my-share/folder/\")\nblock.save(\"dev\")\n

      To use it in a deployment:

      prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb smb/dev\n

      You need to install smbprotocol to use it.

      ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#handling-credentials-for-cloud-object-storage-services","title":"Handling credentials for cloud object storage services","text":"

      If you leverage S3, GCS, or Azure storage blocks, and you don't explicitly configure credentials on the respective storage block, those credentials will be inferred from the environment. Make sure to set those either explicitly on the block or as environment variables, configuration files, or IAM roles within both the build and runtime environment for your deployments.
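
For example, here is a minimal sketch of setting credentials explicitly on an S3 block; the placeholder values are hypothetical, and omitting these fields falls back to credentials inferred from the environment:

from prefect.filesystems import S3\n\nblock = S3(\n    bucket_path=\"my-bucket/folder/\",\n    aws_access_key_id=\"AKIA...\",  # placeholder; omit to rely on environment credentials\n    aws_secret_access_key=\"example-secret-key\",  # placeholder\n)\nblock.save(\"dev-with-creds\")\n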

      ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#filesystem-package-dependencies","title":"Filesystem package dependencies","text":"

A Prefect installation doesn't include filesystem-specific package dependencies such as s3fs, gcsfs, or adlfs. This includes the Prefect base Docker images.

      You must ensure that filesystem-specific libraries are installed in an execution environment where they will be used by flow runs.

      In Dockerized deployments using the Prefect base image, you can leverage the EXTRA_PIP_PACKAGES environment variable. Those dependencies will be installed at runtime within your Docker container or Kubernetes Job before the flow starts running.

      In Dockerized deployments using a custom image, you must include the filesystem-specific package dependency in your image.

Here is an example from a deployment YAML file showing how to specify the installation of s3fs into your image:

      infrastructure:\n  type: docker-container\n  env:\n    EXTRA_PIP_PACKAGES: s3fs  # could be gcsfs, adlfs, etc.\n

You may specify multiple dependencies by providing a comma-delimited list.

      ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#saving-and-loading-file-systems","title":"Saving and loading file systems","text":"

      Configuration for a file system can be saved to the Prefect API. For example:

from prefect.filesystems import RemoteFileSystem\n\nfs = RemoteFileSystem(basepath=\"s3://my-bucket/folder/\")\nfs.write_path(\"foo\", b\"hello\")\nfs.save(\"dev-s3\")\n

      This file system can be retrieved for later use with load.

      fs = RemoteFileSystem.load(\"dev-s3\")\nfs.read_path(\"foo\")  # b'hello'\n
      ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/filesystems/#readable-and-writable-file-systems","title":"Readable and writable file systems","text":"

      Prefect provides two abstract file system types, ReadableFileSystem and WriteableFileSystem.

      • All readable file systems must implement read_path, which takes a file path to read content from and returns bytes.
      • All writeable file systems must implement write_path which takes a file path and content and writes the content to the file as bytes.

      A file system may implement both of these types.
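
As a conceptual sketch only (not a registered Prefect block), a class satisfying both interfaces simply implements the two methods described above; the class name and its dict-backed storage are hypothetical:

class InMemoryFileSystem:\n    # Hypothetical illustration of the read_path / write_path contract described above\n    def __init__(self):\n        self._store = {}\n\n    def write_path(self, path: str, content: bytes) -> None:\n        # Writeable side: accept a path and bytes content\n        self._store[path] = content\n\n    def read_path(self, path: str) -> bytes:\n        # Readable side: return the stored bytes for a path\n        return self._store[path]\n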

      ","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":0.5},{"location":"concepts/flows/","title":"Flows","text":"

      Flows are the most central Prefect object. A flow is a container for workflow logic as-code and allows users to configure how their workflows behave. Flows are defined as Python functions, and any Python function is eligible to be a flow.

      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#flows-overview","title":"Flows overview","text":"

      Flows can be thought of as special types of functions. They can take inputs, perform work, and return an output. In fact, you can turn any function into a Prefect flow by adding the @flow decorator. When a function becomes a flow, its behavior changes, giving it the following advantages:

      • Every invocation of this function is tracked and all state transitions are reported to the API, allowing observation of flow execution.
      • Input arguments are automatically type checked and coerced to the appropriate types.
      • Retries can be performed on failure.
      • Timeouts can be enforced to prevent unintentional, long-running workflows.

      Flows also take advantage of automatic Prefect logging to capture details about flow runs such as run time and final state.

      Flows can include calls to tasks as well as to other flows, which Prefect calls \"subflows\" in this context. Flows may be defined within modules and imported for use as subflows in your flow definitions.

      Deployments elevate individual workflows from functions that you call manually to API-managed entities.

      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#flow-runs","title":"Flow runs","text":"

      A flow run represents a single execution of the flow.

You can create a flow run by calling the flow manually, for example, by running a Python script or importing the flow into an interactive session and calling it.

      You can also create a flow run by:

      • Using external schedulers such as cron to invoke a flow function
      • Creating a deployment on Prefect Cloud or a locally run Prefect server.
      • Creating a flow run for the deployment via a schedule, the Prefect UI, or the Prefect API.

      However you run the flow, the Prefect API monitors the flow run, capturing flow run state for observability.

      When you run a flow that contains tasks or additional flows, Prefect will track the relationship of each child run to the parent flow run.

      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#writing-flows","title":"Writing flows","text":"

      The @flow decorator is used to designate a flow:

      from prefect import flow\n\n@flow\ndef my_flow():\n    return\n

      There are no rigid rules for what code you include within a flow definition - all valid Python is acceptable.

      Flows are uniquely identified by name. You can provide a name parameter value for the flow. If you don't provide a name, Prefect uses the flow function name.

      @flow(name=\"My Flow\")\ndef my_flow():\n    return\n

      Flows can call tasks to allow Prefect to orchestrate and track more granular units of work:

      from prefect import flow, task\n\n@task\ndef print_hello(name):\n    print(f\"Hello {name}!\")\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n    print_hello(name)\n

      Flows and tasks

      There's nothing stopping you from putting all of your code in a single flow function \u2014 Prefect will happily run it!

      However, organizing your workflow code into smaller flow and task units lets you take advantage of Prefect features like retries, more granular visibility into runtime state, the ability to determine final state regardless of individual task state, and more.

      In addition, if you put all of your workflow logic in a single flow function and any line of code fails, the entire flow will fail and must be retried from the beginning. This can be avoided by breaking up the code into multiple tasks.

      You may call any number of other tasks, subflows, and even regular Python functions within your flow. You can pass parameters to your flow function that will be used elsewhere in the workflow, and Prefect will report on the progress and final state of any invocation.

      Prefect encourages \"small tasks\" \u2014 each one should represent a single logical step of your workflow. This allows Prefect to better contain task failures.

      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#flow-settings","title":"Flow settings","text":"

      Flows allow a great deal of configuration by passing arguments to the decorator. Flows accept the following optional settings.

• description: An optional string description for the flow. If not provided, the description will be pulled from the docstring for the decorated function.
• name: An optional name for the flow. If not provided, the name will be inferred from the function.
• retries: An optional number of times to retry on flow run failure.
• retry_delay_seconds: An optional number of seconds to wait before retrying the flow after failure. This is only applicable if retries is nonzero.
• flow_run_name: An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables; this name can also be provided as a function that returns a string.
• task_runner: An optional task runner to use for task execution within the flow when you .submit() tasks. If not provided and you .submit() tasks, the ConcurrentTaskRunner will be used.
• timeout_seconds: An optional number of seconds indicating a maximum runtime for the flow. If the flow exceeds this runtime, it will be marked as failed. Flow execution may continue until the next task is called.
• validate_parameters: Boolean indicating whether parameters passed to flows are validated by Pydantic. Default is True.
• version: An optional version string for the flow. If not provided, we will attempt to create a version string as a hash of the file containing the wrapped function. If the file cannot be located, the version will be null.

      For example, you can provide a name value for the flow. Here we've also used the optional description argument and specified a non-default task runner.

      from prefect import flow\nfrom prefect.task_runners import SequentialTaskRunner\n\n@flow(name=\"My Flow\",\n      description=\"My flow using SequentialTaskRunner\",\n      task_runner=SequentialTaskRunner())\ndef my_flow():\n    return\n

      You can also provide the description as the docstring on the flow function.

      @flow(name=\"My Flow\",\n      task_runner=SequentialTaskRunner())\ndef my_flow():\n    \"\"\"My flow using SequentialTaskRunner\"\"\"\n    return\n
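
The retry and timeout settings from the table above can be combined in the same way. Here is a minimal sketch with hypothetical values:

from prefect import flow\n\n@flow(name=\"My Flow\",\n      retries=2,  # retry the flow run up to twice on failure\n      retry_delay_seconds=10,  # wait 10 seconds between retries\n      timeout_seconds=300)  # mark the run as failed if it exceeds 5 minutes\ndef my_flow():\n    return\n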

      You can distinguish runs of this flow by providing a flow_run_name. This setting accepts a string that can optionally contain templated references to the parameters of your flow. The name will be formatted using Python's standard string formatting syntax as can be seen here:

      import datetime\nfrom prefect import flow\n\n@flow(flow_run_name=\"{name}-on-{date:%A}\")\ndef my_flow(name: str, date: datetime.datetime):\n    pass\n\n# creates a flow run called 'marvin-on-Thursday'\nmy_flow(name=\"marvin\", date=datetime.datetime.now(datetime.timezone.utc))\n

This setting also accepts a function that returns a string for the flow run name:

      import datetime\nfrom prefect import flow\n\ndef generate_flow_run_name():\n    date = datetime.datetime.now(datetime.timezone.utc)\n\n    return f\"{date:%A}-is-a-nice-day\"\n\n@flow(flow_run_name=generate_flow_run_name)\ndef my_flow(name: str):\n    pass\n\n# creates a flow run called 'Thursday-is-a-nice-day'\nif __name__ == \"__main__\":\n    my_flow(name=\"marvin\")\n

      If you need access to information about the flow, use the prefect.runtime module. For example:

      from prefect import flow\nfrom prefect.runtime import flow_run\n\ndef generate_flow_run_name():\n    flow_name = flow_run.flow_name\n\n    parameters = flow_run.parameters\n    name = parameters[\"name\"]\n    limit = parameters[\"limit\"]\n\n    return f\"{flow_name}-with-{name}-and-{limit}\"\n\n@flow(flow_run_name=generate_flow_run_name)\ndef my_flow(name: str, limit: int = 100):\n    pass\n\n# creates a flow run called 'my-flow-with-marvin-and-100'\nif __name__ == \"__main__\":\n    my_flow(name=\"marvin\")\n

      Note that validate_parameters will check that input values conform to the annotated types on the function. Where possible, values will be coerced into the correct type. For example, if a parameter is defined as x: int and \"5\" is passed, it will be resolved to 5. If set to False, no validation will be performed on flow parameters.
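
For instance, here is a minimal sketch of this coercion using a hypothetical flow; the string argument is coerced to an integer before the flow body runs:

from prefect import flow\n\n@flow\ndef add_one(x: int):\n    return x + 1\n\nif __name__ == \"__main__\":\n    add_one(\"5\")  # \"5\" is coerced to 5, so the flow returns 6\n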

      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#separating-logic-into-tasks","title":"Separating logic into tasks","text":"

      The simplest workflow is just a @flow function that does all the work of the workflow.

      from prefect import flow\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n    print(f\"Hello {name}!\")\n\nif __name__ == \"__main__\":\n    hello_world(\"Marvin\")\n

      When you run this flow, you'll see output like the following:

      $ python hello.py\n15:11:23.594 | INFO    | prefect.engine - Created flow run 'benevolent-donkey' for flow 'hello-world'\n15:11:23.594 | INFO    | Flow run 'benevolent-donkey' - Using task runner 'ConcurrentTaskRunner'\nHello Marvin!\n15:11:24.447 | INFO    | Flow run 'benevolent-donkey' - Finished in state Completed()\n

      A better practice is to create @task functions that do the specific work of your flow, and use your @flow function as the conductor that orchestrates the flow of your application:

      from prefect import flow, task\n\n@task(name=\"Print Hello\")\ndef print_hello(name):\n    msg = f\"Hello {name}!\"\n    print(msg)\n    return msg\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n    message = print_hello(name)\n\nif __name__ == \"__main__\":\n    hello_world(\"Marvin\")\n

      When you run this flow, you'll see the following output, which illustrates how the work is encapsulated in a task run.

      $ python hello.py\n15:15:58.673 | INFO    | prefect.engine - Created flow run 'loose-wolverine' for flow 'Hello Flow'\n15:15:58.674 | INFO    | Flow run 'loose-wolverine' - Using task runner 'ConcurrentTaskRunner'\n15:15:58.973 | INFO    | Flow run 'loose-wolverine' - Created task run 'Print Hello-84f0fe0e-0' for task 'Print Hello'\nHello Marvin!\n15:15:59.037 | INFO    | Task run 'Print Hello-84f0fe0e-0' - Finished in state Completed()\n15:15:59.568 | INFO    | Flow run 'loose-wolverine' - Finished in state Completed('All states completed.')\n
      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#visualizing-flow-structure","title":"Visualizing flow structure","text":"

      You can get a quick sense of the structure of your flow using the .visualize() method on your flow. Calling this method will attempt to produce a schematic diagram of your flow and tasks without actually running your flow code.

      Functions and code not inside of flows or tasks will still be run when calling .visualize(). This may have unintended consequences. Place your code into tasks to avoid unintended execution.

To use the visualize() method, Graphviz must be installed and on your PATH. Install Graphviz from http://www.graphviz.org/download/. Note that installing the graphviz Python package alone is not sufficient.

      from prefect import flow, task\n\n@task(name=\"Print Hello\")\ndef print_hello(name):\n    msg = f\"Hello {name}!\"\n    print(msg)\n    return msg\n\n@task(name=\"Print Hello Again\")\ndef print_hello_again(name):\n    msg = f\"Hello {name}!\"\n    print(msg)\n    return msg\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n    message = print_hello(name)\n    message2 = print_hello_again(message)\n\nif __name__ == \"__main__\":\n    hello_world.visualize()\n

      Prefect cannot automatically produce a schematic for dynamic workflows, such as those with loops or if/else control flow. In this case, you can provide tasks with mock return values for use in the visualize() call.

      from prefect import flow, task\n@task(viz_return_value=[4])\ndef get_list():\n    return [1, 2, 3]\n\n@task\ndef append_one(n):\n    return n.append(6)\n\n@flow\ndef viz_return_value_tracked():\n    l = get_list()\n    for num in range(3):\n        l.append(5)\n        append_one(l)\n\nif __name__ == \"__main__\":\n    viz_return_value_tracked.visualize()\n

      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#composing-flows","title":"Composing flows","text":"

      A subflow run is created when a flow function is called inside the execution of another flow. The primary flow is the \"parent\" flow. The flow created within the parent is the \"child\" flow or \"subflow.\"

      Subflow runs behave like normal flow runs. There is a full representation of the flow run in the backend as if it had been called separately. When a subflow starts, it will create a new task runner for tasks within the subflow. When the subflow completes, the task runner is shut down.

      Subflows will block execution of the parent flow until completion. However, asynchronous subflows can be run concurrently by using AnyIO task groups or asyncio.gather.
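
For example, here is a minimal sketch (with hypothetical flow names) of running two asynchronous subflows concurrently with asyncio.gather:

import asyncio\n\nfrom prefect import flow\n\n@flow\nasync def child_flow(n: int):\n    await asyncio.sleep(n)\n    return n\n\n@flow\nasync def parent_flow():\n    # Both subflow runs are awaited concurrently rather than one after the other\n    return await asyncio.gather(child_flow(1), child_flow(2))\n\nif __name__ == \"__main__\":\n    asyncio.run(parent_flow())\n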

      Subflows differ from normal flows in that they will resolve any passed task futures into data. This allows data to be passed from the parent flow to the child easily.

      The relationship between a child and parent flow is tracked by creating a special task run in the parent flow. This task run will mirror the state of the child flow run.

      A task that represents a subflow will be annotated as such in its state_details via the presence of a child_flow_run_id field. A subflow can be identified via the presence of a parent_task_run_id on state_details.

      You can define multiple flows within the same file. Whether running locally or via a deployment, you must indicate which flow is the entrypoint for a flow run.

      Cancelling subflow runs

      Inline subflow runs, specifically those created without run_deployment, cannot be cancelled without cancelling their parent flow run. If you may need to cancel a subflow run independent of its parent flow run, we recommend deploying it separately and starting it using the run_deployment function.

      from prefect import flow, task\n\n@task(name=\"Print Hello\")\ndef print_hello(name):\n    msg = f\"Hello {name}!\"\n    print(msg)\n    return msg\n\n@flow(name=\"Subflow\")\ndef my_subflow(msg):\n    print(f\"Subflow says: {msg}\")\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n    message = print_hello(name)\n    my_subflow(message)\n\nif __name__ == \"__main__\":\n    hello_world(\"Marvin\")\n

      You can also define flows or tasks in separate modules and import them for usage. For example, here's a simple subflow module:

      from prefect import flow, task\n\n@flow(name=\"Subflow\")\ndef my_subflow(msg):\n    print(f\"Subflow says: {msg}\")\n

      Here's a parent flow that imports and uses my_subflow() as a subflow:

      from prefect import flow, task\nfrom subflow import my_subflow\n\n@task(name=\"Print Hello\")\ndef print_hello(name):\n    msg = f\"Hello {name}!\"\n    print(msg)\n    return msg\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n    message = print_hello(name)\n    my_subflow(message)\n\nhello_world(\"Marvin\")\n

      Running the hello_world() flow (in this example from the file hello.py) creates a flow run like this:

      $ python hello.py\n15:19:21.651 | INFO    | prefect.engine - Created flow run 'daft-cougar' for flow 'Hello Flow'\n15:19:21.651 | INFO    | Flow run 'daft-cougar' - Using task runner 'ConcurrentTaskRunner'\n15:19:21.945 | INFO    | Flow run 'daft-cougar' - Created task run 'Print Hello-84f0fe0e-0' for task 'Print Hello'\nHello Marvin!\n15:19:22.055 | INFO    | Task run 'Print Hello-84f0fe0e-0' - Finished in state Completed()\n15:19:22.107 | INFO    | Flow run 'daft-cougar' - Created subflow run 'ninja-duck' for flow 'Subflow'\nSubflow says: Hello Marvin!\n15:19:22.794 | INFO    | Flow run 'ninja-duck' - Finished in state Completed()\n15:19:23.215 | INFO    | Flow run 'daft-cougar' - Finished in state Completed('All states completed.')\n

      Subflows or tasks?

      In Prefect you can call tasks or subflows to do work within your workflow, including passing results from other tasks to your subflow. So a common question is:

      \"When should I use a subflow instead of a task?\"

      We recommend writing tasks that do a discrete, specific piece of work in your workflow: calling an API, performing a database operation, analyzing or transforming a data point. Prefect tasks are well suited to parallel or distributed execution using distributed computation frameworks such as Dask or Ray. For troubleshooting, the more granular you create your tasks, the easier it is to find and fix issues should a task fail.

      Subflows enable you to group related tasks within your workflow. Here are some scenarios where you might choose to use a subflow rather than calling tasks individually:

      • Observability: Subflows, like any other flow run, have first-class observability within the Prefect UI and Prefect Cloud. You'll see subflow status in the Flow Runs dashboard rather than having to dig down into the tasks within a specific flow run. See Final state determination for some examples of leveraging task state within flows.
      • Conditional flows: If you have a group of tasks that run only under certain conditions, you can group them within a subflow and conditionally run the subflow rather than each task individually.
      • Parameters: Flows have first-class support for parameterization, making it easy to run the same group of tasks in different use cases by simply passing different parameters to the subflow in which they run.
• Task runners: Subflows enable you to specify the task runner used for tasks within the flow. For example, if you want to optimize parallel execution of certain tasks with Dask, you can group them in a subflow that uses the Dask task runner. You can use a different task runner for each subflow; a sketch follows this list.
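
For example, here is a minimal sketch of a subflow that uses its own task runner (the SequentialTaskRunner shown earlier; a Dask or Ray task runner from the corresponding collection could be substituted):

from prefect import flow, task\nfrom prefect.task_runners import SequentialTaskRunner\n\n@task\ndef say(msg):\n    print(msg)\n\n@flow(task_runner=SequentialTaskRunner())\ndef ordered_subflow(msgs):\n    # Tasks submitted here use the subflow's task runner rather than the parent's\n    for msg in msgs:\n        say.submit(msg)\n\n@flow\ndef parent_flow():\n    # The parent flow's own tasks would use the default ConcurrentTaskRunner\n    ordered_subflow([\"one\", \"two\", \"three\"])\n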
      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#parameters","title":"Parameters","text":"

      Flows can be called with both positional and keyword arguments. These arguments are resolved at runtime into a dictionary of parameters mapping name to value. These parameters are stored by the Prefect orchestration engine on the flow run object.

      Prefect API requires keyword arguments

      When creating flow runs from the Prefect API, parameter names must be specified when overriding defaults \u2014 they cannot be positional.

      Type hints provide an easy way to enforce typing on your flow parameters via pydantic. This means any pydantic model used as a type hint within a flow will be coerced automatically into the relevant object type:

      from prefect import flow\nfrom pydantic import BaseModel\n\nclass Model(BaseModel):\n    a: int\n    b: float\n    c: str\n\n@flow\ndef model_validator(model: Model):\n    print(model)\n

      Note that parameter values can be provided to a flow via API using a deployment. Flow run parameters sent to the API on flow calls are coerced to a serializable form. Type hints on your flow functions provide you a way of automatically coercing JSON provided values to their appropriate Python representation.

      For example, to automatically convert something to a datetime:

from prefect import flow\nfrom datetime import datetime, timezone\n\n@flow\ndef what_day_is_it(date: datetime = None):\n    if date is None:\n        date = datetime.now(timezone.utc)\n    print(f\"It was {date.strftime('%A')} on {date.isoformat()}\")\n\nif __name__ == \"__main__\":\n    what_day_is_it(\"2021-01-01T02:00:19.180906\")\n

      When you run this flow, you'll see the following output:

      It was Friday on 2021-01-01T02:00:19.180906\n

Parameters are validated before a flow is run. If a flow call receives invalid parameters, a flow run is created in a Failed state. If a flow run for a deployment receives invalid parameters, it will move from a Pending state to a Failed state without entering a Running state.
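
As a minimal sketch, assuming return_state=True is used on the flow call to capture the resulting state instead of raising, a call with a value that cannot be coerced finishes in a Failed state:

from prefect import flow\n\n@flow\ndef add_one(x: int):\n    return x + 1\n\nif __name__ == \"__main__\":\n    # This value cannot be coerced to int, so the run finishes in a Failed state\n    state = add_one(\"not a number\", return_state=True)\n    print(state)\n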

      Flow run parameters cannot exceed 512kb in size

      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#final-state-determination","title":"Final state determination","text":"

      Prerequisite

      Read the documentation about states before proceeding with this section.

      The final state of the flow is determined by its return value. The following rules apply:

      • If an exception is raised directly in the flow function, the flow run is marked as failed.
      • If the flow does not return a value (or returns None), its state is determined by the states of all of the tasks and subflows within it.
      • If any task run or subflow run failed, then the final flow run state is marked as FAILED.
      • If any task run was cancelled, then the final flow run state is marked as CANCELLED.
      • If a flow returns a manually created state, it is used as the state of the final flow run. This allows for manual determination of final state.
      • If the flow run returns any other object, then it is marked as completed.

      The following examples illustrate each of these cases:

      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#raise-an-exception","title":"Raise an exception","text":"

      If an exception is raised within the flow function, the flow is immediately marked as failed.

      from prefect import flow\n\n@flow\ndef always_fails_flow():\n    raise ValueError(\"This flow immediately fails\")\n\nif __name__ == \"__main__\":\n    always_fails_flow()\n

      Running this flow produces the following result:

      22:22:36.864 | INFO    | prefect.engine - Created flow run 'acrid-tuatara' for flow 'always-fails-flow'\n22:22:36.864 | INFO    | Flow run 'acrid-tuatara' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n22:22:37.060 | ERROR   | Flow run 'acrid-tuatara' - Encountered exception during execution:\nTraceback (most recent call last):...\nValueError: This flow immediately fails\n
      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-none","title":"Return none","text":"

The final state of a flow with no return statement is determined by the states of all of its task runs.

      from prefect import flow, task\n\n@task\ndef always_fails_task():\n    raise ValueError(\"I fail successfully\")\n\n@task\ndef always_succeeds_task():\n    print(\"I'm fail safe!\")\n    return \"success\"\n\n@flow\ndef always_fails_flow():\n    always_fails_task.submit().result(raise_on_failure=False)\n    always_succeeds_task()\n\nif __name__ == \"__main__\":\n    always_fails_flow()\n

      Running this flow produces the following result:

      18:32:05.345 | INFO    | prefect.engine - Created flow run 'auburn-lionfish' for flow 'always-fails-flow'\n18:32:05.346 | INFO    | Flow run 'auburn-lionfish' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n18:32:05.582 | INFO    | Flow run 'auburn-lionfish' - Created task run 'always_fails_task-96e4be14-0' for task 'always_fails_task'\n18:32:05.582 | INFO    | Flow run 'auburn-lionfish' - Submitted task run 'always_fails_task-96e4be14-0' for execution.\n18:32:05.610 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Encountered exception during execution:\nTraceback (most recent call last):\n  ...\nValueError: I fail successfully\n18:32:05.638 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Finished in state Failed('Task run encountered an exception.')\n18:32:05.658 | INFO    | Flow run 'auburn-lionfish' - Created task run 'always_succeeds_task-9c27db32-0' for task 'always_succeeds_task'\n18:32:05.659 | INFO    | Flow run 'auburn-lionfish' - Executing 'always_succeeds_task-9c27db32-0' immediately...\nI'm fail safe!\n18:32:05.703 | INFO    | Task run 'always_succeeds_task-9c27db32-0' - Finished in state Completed()\n18:32:05.730 | ERROR   | Flow run 'auburn-lionfish' - Finished in state Failed('1/2 states failed.')\nTraceback (most recent call last):\n  ...\nValueError: I fail successfully\n
      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-a-future","title":"Return a future","text":"

      If a flow returns one or more futures, the final state is determined based on the underlying states.

      from prefect import flow, task\n\n@task\ndef always_fails_task():\n    raise ValueError(\"I fail successfully\")\n\n@task\ndef always_succeeds_task():\n    print(\"I'm fail safe!\")\n    return \"success\"\n\n@flow\ndef always_succeeds_flow():\n    x = always_fails_task.submit().result(raise_on_failure=False)\n    y = always_succeeds_task.submit(wait_for=[x])\n    return y\n\nif __name__ == \"__main__\":\n    always_succeeds_flow()\n

      Running this flow produces the following result \u2014 it succeeds because it returns the future of the task that succeeds:

      18:35:24.965 | INFO    | prefect.engine - Created flow run 'whispering-guan' for flow 'always-succeeds-flow'\n18:35:24.965 | INFO    | Flow run 'whispering-guan' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n18:35:25.204 | INFO    | Flow run 'whispering-guan' - Created task run 'always_fails_task-96e4be14-0' for task 'always_fails_task'\n18:35:25.205 | INFO    | Flow run 'whispering-guan' - Submitted task run 'always_fails_task-96e4be14-0' for execution.\n18:35:25.232 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Encountered exception during execution:\nTraceback (most recent call last):\n  ...\nValueError: I fail successfully\n18:35:25.265 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Finished in state Failed('Task run encountered an exception.')\n18:35:25.289 | INFO    | Flow run 'whispering-guan' - Created task run 'always_succeeds_task-9c27db32-0' for task 'always_succeeds_task'\n18:35:25.289 | INFO    | Flow run 'whispering-guan' - Submitted task run 'always_succeeds_task-9c27db32-0' for execution.\nI'm fail safe!\n18:35:25.335 | INFO    | Task run 'always_succeeds_task-9c27db32-0' - Finished in state Completed()\n18:35:25.362 | INFO    | Flow run 'whispering-guan' - Finished in state Completed('All states completed.')\n
      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-multiple-states-or-futures","title":"Return multiple states or futures","text":"

      If a flow returns a mix of futures and states, the final state is determined by resolving all futures to states, then determining if any of the states are not COMPLETED.

      from prefect import task, flow\n\n@task\ndef always_fails_task():\n    raise ValueError(\"I am bad task\")\n\n@task\ndef always_succeeds_task():\n    return \"foo\"\n\n@flow\ndef always_succeeds_flow():\n    return \"bar\"\n\n@flow\ndef always_fails_flow():\n    x = always_fails_task()\n    y = always_succeeds_task()\n    z = always_succeeds_flow()\n    return x, y, z\n

      Running this flow produces the following result. It fails because one of the three returned futures failed. Note that the final state is Failed, but the states of each of the returned futures is included in the flow state:

      20:57:51.547 | INFO    | prefect.engine - Created flow run 'impartial-gorilla' for flow 'always-fails-flow'\n20:57:51.548 | INFO    | Flow run 'impartial-gorilla' - Using task runner 'ConcurrentTaskRunner'\n20:57:51.645 | INFO    | Flow run 'impartial-gorilla' - Created task run 'always_fails_task-58ea43a6-0' for task 'always_fails_task'\n20:57:51.686 | INFO    | Flow run 'impartial-gorilla' - Created task run 'always_succeeds_task-c9014725-0' for task 'always_succeeds_task'\n20:57:51.727 | ERROR   | Task run 'always_fails_task-58ea43a6-0' - Encountered exception during execution:\nTraceback (most recent call last):...\nValueError: I am bad task\n20:57:51.787 | INFO    | Task run 'always_succeeds_task-c9014725-0' - Finished in state Completed()\n20:57:51.808 | INFO    | Flow run 'impartial-gorilla' - Created subflow run 'unbiased-firefly' for flow 'always-succeeds-flow'\n20:57:51.884 | ERROR   | Task run 'always_fails_task-58ea43a6-0' - Finished in state Failed('Task run encountered an exception.')\n20:57:52.438 | INFO    | Flow run 'unbiased-firefly' - Finished in state Completed()\n20:57:52.811 | ERROR   | Flow run 'impartial-gorilla' - Finished in state Failed('1/3 states failed.')\nFailed(message='1/3 states failed.', type=FAILED, result=(Failed(message='Task run encountered an exception.', type=FAILED, result=ValueError('I am bad task'), task_run_id=5fd4c697-7c4c-440d-8ebc-dd9c5bbf2245), Completed(message=None, type=COMPLETED, result='foo', task_run_id=df9b6256-f8ac-457c-ba69-0638ac9b9367), Completed(message=None, type=COMPLETED, result='bar', task_run_id=cfdbf4f1-dccd-4816-8d0f-128750017d0c)), flow_run_id=6d2ec094-001a-4cb0-a24e-d2051db6318d)\n

      Returning multiple states

      When returning multiple states, they must be contained in a set, list, or tuple. If other collection types are used, the result of the contained states will not be checked.
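
For instance, here is a minimal sketch of returning task states in a list so that each one is considered when determining the final flow run state:

from prefect import flow, task\n\n@task\ndef double(n):\n    return n * 2\n\n@flow\ndef return_states_flow():\n    # return_state=True returns State objects; returning them in a list lets Prefect inspect each one\n    return [double(n, return_state=True) for n in range(3)]\n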

      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-a-manual-state","title":"Return a manual state","text":"

      If a flow returns a manually created state, the final state is determined based on the return value.

      from prefect import task, flow\nfrom prefect.states import Completed, Failed\n\n@task\ndef always_fails_task():\n    raise ValueError(\"I fail successfully\")\n\n@task\ndef always_succeeds_task():\n    print(\"I'm fail safe!\")\n    return \"success\"\n\n@flow\ndef always_succeeds_flow():\n    x = always_fails_task.submit()\n    y = always_succeeds_task.submit()\n    if y.result() == \"success\":\n        return Completed(message=\"I am happy with this result\")\n    else:\n        return Failed(message=\"How did this happen!?\")\n\nif __name__ == \"__main__\":\n    always_succeeds_flow()\n

      Running this flow produces the following result.

      18:37:42.844 | INFO    | prefect.engine - Created flow run 'lavender-elk' for flow 'always-succeeds-flow'\n18:37:42.845 | INFO    | Flow run 'lavender-elk' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n18:37:43.125 | INFO    | Flow run 'lavender-elk' - Created task run 'always_fails_task-96e4be14-0' for task 'always_fails_task'\n18:37:43.126 | INFO    | Flow run 'lavender-elk' - Submitted task run 'always_fails_task-96e4be14-0' for execution.\n18:37:43.162 | INFO    | Flow run 'lavender-elk' - Created task run 'always_succeeds_task-9c27db32-0' for task 'always_succeeds_task'\n18:37:43.163 | INFO    | Flow run 'lavender-elk' - Submitted task run 'always_succeeds_task-9c27db32-0' for execution.\n18:37:43.175 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Encountered exception during execution:\nTraceback (most recent call last):\n  ...\nValueError: I fail successfully\nI'm fail safe!\n18:37:43.217 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Finished in state Failed('Task run encountered an exception.')\n18:37:43.236 | INFO    | Task run 'always_succeeds_task-9c27db32-0' - Finished in state Completed()\n18:37:43.264 | INFO    | Flow run 'lavender-elk' - Finished in state Completed('I am happy with this result')\n
      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-an-object","title":"Return an object","text":"

      If the flow run returns any other object, then it is marked as completed.

from prefect import task, flow\n\n@task\ndef always_fails_task():\n    raise ValueError(\"I fail successfully\")\n\n@flow\ndef always_succeeds_flow():\n    always_fails_task.submit()\n    return \"foo\"\n\nif __name__ == \"__main__\":\n    always_succeeds_flow()\n

      Running this flow produces the following result.

21:02:45.715 | INFO    | prefect.engine - Created flow run 'sparkling-pony' for flow 'always-succeeds-flow'\n21:02:45.715 | INFO    | Flow run 'sparkling-pony' - Using task runner 'ConcurrentTaskRunner'\n21:02:45.816 | INFO    | Flow run 'sparkling-pony' - Created task run 'always_fails_task-58ea43a6-0' for task 'always_fails_task'\n21:02:45.853 | ERROR   | Task run 'always_fails_task-58ea43a6-0' - Encountered exception during execution:\nTraceback (most recent call last):...\nValueError: I fail successfully\n21:02:45.879 | ERROR   | Task run 'always_fails_task-58ea43a6-0' - Finished in state Failed('Task run encountered an exception.')\n21:02:46.593 | INFO    | Flow run 'sparkling-pony' - Finished in state Completed()\nCompleted(message=None, type=COMPLETED, result='foo', flow_run_id=7240e6f5-f0a8-4e00-9440-a7b33fb51153)\n
      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#serving-a-flow","title":"Serving a flow","text":"

      The simplest way to create a deployment for your flow is by calling its serve method. This method creates a deployment for the flow and starts a long-running process that monitors for work from the Prefect server. When work is found, it is executed within its own isolated subprocess.

      hello_world.py
      from prefect import flow\n\n\n@flow(log_prints=True)\ndef hello_world(name: str = \"world\", goodbye: bool = False):\n    print(f\"Hello {name} from Prefect! \ud83e\udd17\")\n\n    if goodbye:\n        print(f\"Goodbye {name}!\")\n\n\nif __name__ == \"__main__\":\n    # creates a deployment and stays running to monitor for work instructions generated on the server\n\n    hello_world.serve(name=\"my-first-deployment\",\n                      tags=[\"onboarding\"],\n                      parameters={\"goodbye\": True},\n                      interval=60)\n

      This interface provides all of the configuration needed for a deployment with no strong infrastructure requirements:

      • schedules
      • event triggers
      • metadata such as tags and description
      • default parameter values

      Schedules are auto-paused on shutdown

      By default, stopping the process running flow.serve will pause the schedule for the deployment (if it has one). When running this in environments where restarts are expected use the pause_on_shutdown=False flag to prevent this behavior:

      if __name__ == \"__main__\":\n    hello_world.serve(name=\"my-first-deployment\",\n                      tags=[\"onboarding\"],\n                      parameters={\"goodbye\": True},\n                      pause_on_shutdown=False,\n                      interval=60)\n
      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#serving-multiple-flows-at-once","title":"Serving multiple flows at once","text":"

      You can take this further and serve multiple flows with the same process using the serve utility along with the to_deployment method of flows:

      import time\nfrom prefect import flow, serve\n\n\n@flow\ndef slow_flow(sleep: int = 60):\n    \"Sleepy flow - sleeps the provided amount of time (in seconds).\"\n    time.sleep(sleep)\n\n\n@flow\ndef fast_flow():\n    \"Fastest flow this side of the Mississippi.\"\n    return\n\n\nif __name__ == \"__main__\":\n    slow_deploy = slow_flow.to_deployment(name=\"sleeper\", interval=45)\n    fast_deploy = fast_flow.to_deployment(name=\"fast\")\n    serve(slow_deploy, fast_deploy)\n

      The behavior and interfaces are identical to the single flow case.

      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#retrieve-a-flow-from-remote-storage","title":"Retrieve a flow from remote storage","text":"

      Flows can be retrieved from remote storage using the flow.from_source method.

      flow.from_source accepts a git repository URL and an entrypoint pointing to the flow to load from the repository:

      load_from_url.py
      from prefect import flow\n\nmy_flow = flow.from_source(\n    source=\"https://github.com/PrefectHQ/prefect.git\",\n    entrypoint=\"flows/hello_world.py:hello\"\n)\n\nif __name__ == \"__main__\":\n    my_flow()\n
      16:40:33.818 | INFO    | prefect.engine - Created flow run 'muscular-perch' for flow 'hello'\n16:40:34.048 | INFO    | Flow run 'muscular-perch' - Hello world!\n16:40:34.706 | INFO    | Flow run 'muscular-perch' - Finished in state Completed()\n

      A flow entrypoint is the path to the file the flow is located in and the name of the flow function separated by a colon.

      If you need additional configuration, such as specifying a private repository, you can provide a GitRepository instead of URL:

      load_from_storage.py
      from prefect import flow\nfrom prefect.runner.storage import GitRepository\nfrom prefect.blocks.system import Secret\n\nmy_flow = flow.from_source(\n    source=GitRepository(\n        url=\"https://github.com/org/private-repo.git\",\n        branch=\"dev\",\n        credentials={\n            \"access_token\": Secret.load(\"github-access-token\").get()\n        }\n    ),\n    entrypoint=\"flows.py:my_flow\"\n)\n\nif __name__ == \"__main__\":\n    my_flow()\n

      You can serve loaded flows

      Flows loaded from remote storage can be served using the same serve method as local flows:

      serve_loaded_flow.py
      from prefect import flow\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        source=\"https://github.com/org/repo.git\",\n        entrypoint=\"flows.py:my_flow\"\n    ).serve(name=\"my-deployment\")\n

      When you serve a flow loaded from remote storage, the serving process will periodically poll your remote storage for updates to the flow's code. This pattern allows you to update your flow code without restarting the serving process.

      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#pausing-or-suspending-a-flow-run","title":"Pausing or suspending a flow run","text":"

      Prefect provides you with the ability to halt a flow run with two functions that are similar, but slightly different. When a flow run is paused, code execution is stopped and the process continues to run. When a flow run is suspended, code execution is stopped and so is the process.

      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#pausing-a-flow-run","title":"Pausing a flow run","text":"

      Prefect enables pausing an in-progress flow run for manual approval. Prefect exposes this functionality via the pause_flow_run and resume_flow_run functions.

      Timeouts

      Paused flow runs time out after one hour by default. After the timeout, the flow run will fail with a message saying it paused and never resumed. You can specify a different timeout period in seconds using the timeout parameter.

      Most simply, pause_flow_run can be called inside a flow:

      from prefect import task, flow, pause_flow_run, resume_flow_run\n\n@task\nasync def marvin_setup():\n    return \"a raft of ducks walk into a bar...\"\n\n\n@task\nasync def marvin_punchline():\n    return \"it's a wonder none of them ducked!\"\n\n\n@flow\nasync def inspiring_joke():\n    await marvin_setup()\n    await pause_flow_run(timeout=600)  # pauses for 10 minutes\n    await marvin_punchline()\n

      You can also implement conditional pauses:

from time import sleep\n\nfrom prefect import task, flow, pause_flow_run\nfrom prefect.client.schemas.objects import StateType\n\n@task\ndef task_one():\n    for i in range(3):\n        sleep(1)\n        print(i)\n\n@flow(log_prints=True)\ndef my_flow():\n    terminal_state = task_one.submit(return_state=True)\n    if terminal_state.type == StateType.COMPLETED:\n        print(\"Task one succeeded! Pausing flow run..\")\n        pause_flow_run(timeout=2)\n    else:\n        print(\"Task one failed. Skipping pause flow run..\")\n

Calling the inspiring_joke flow above will block code execution after the first task and wait for resumption to deliver the punchline.

      await inspiring_joke()\n> \"a raft of ducks walk into a bar...\"\n

      Paused flow runs can be resumed by clicking the Resume button in the Prefect UI or calling the resume_flow_run utility via client code.

      resume_flow_run(FLOW_RUN_ID)\n

      The paused flow run will then finish!

      > \"it's a wonder none of them ducked!\"\n
      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#suspending-a-flow-run","title":"Suspending a flow run","text":"

      Similar to pausing a flow run, Prefect enables suspending an in-progress flow run.

      The difference between pausing and suspending a flow run

      There is an important difference between pausing and suspending a flow run. When you pause a flow run, the flow code is still running but is blocked until someone resumes the flow. This is not the case with suspending a flow run! When you suspend a flow run, the flow exits completely and the infrastructure running it (e.g., a Kubernetes Job) tears down.

      This means that you can suspend flow runs to save costs instead of paying for long-running infrastructure. However, when the flow run resumes, the flow code will execute again from the beginning of the flow, so you should use tasks and task caching to avoid recomputing expensive operations.
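
For example, here is a minimal sketch of a task configured to cache and persist its result so it is not recomputed when the flow run resumes; the task and flow names are hypothetical, and result persistence must be available in your environment:

from datetime import timedelta\n\nfrom prefect import flow, task\nfrom prefect.tasks import task_input_hash\n\n@task(cache_key_fn=task_input_hash,\n      cache_expiration=timedelta(hours=1),\n      persist_result=True)\ndef expensive_step(n: int) -> int:\n    # On resume, a matching cache key returns the persisted result instead of re-running\n    return n ** 2\n\n@flow\ndef resumable_flow():\n    return expensive_step(10)\n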

      Prefect exposes this functionality via the suspend_flow_run and resume_flow_run functions, as well as the Prefect UI.

      When called inside of a flow suspend_flow_run will immediately suspend execution of the flow run. The flow run will be marked as Suspended and will not be resumed until resume_flow_run is called.
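
For instance, here is a minimal sketch of suspending from within a flow, assuming suspend_flow_run is importable from the top-level prefect package like pause_flow_run; the task and flow names are hypothetical:

from prefect import flow, task, suspend_flow_run\n\n@task(persist_result=True)\ndef stage_data():\n    return 42\n\n@flow(persist_result=True)\ndef suspending_flow():\n    value = stage_data()\n    # The process exits here; the run stays Suspended until resume_flow_run is called\n    suspend_flow_run(timeout=None)\n    print(value)\n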

      Timeouts

      Suspended flow runs time out after one hour by default. After the timeout, the flow run will fail with a message saying it suspended and never resumed. You can specify a different timeout period in seconds using the timeout parameter or pass timeout=None for no timeout.

      Here is an example of a flow that does not block flow execution while paused. This flow will exit after one task, and will be rescheduled upon resuming. The stored result of the first task is retrieved instead of being rerun.

      from prefect import flow, pause_flow_run, task\n\n@task(persist_result=True)\ndef foo():\n    return 42\n\n@flow(persist_result=True)\ndef noblock_pausing():\n    x = foo.submit()\n    pause_flow_run(timeout=30, reschedule=True)\n    y = foo.submit()\n    z = foo(wait_for=[x])\n    alpha = foo(wait_for=[y])\n    omega = foo(wait_for=[x, y])\n

      Flow runs can be suspended out-of-process by calling suspend_flow_run(flow_run_id=<ID>) or selecting the Suspend button in the Prefect UI or Prefect Cloud.

      Suspended flow runs can be resumed by clicking the Resume button in the Prefect UI or calling the resume_flow_run utility via client code.

      resume_flow_run(FLOW_RUN_ID)\n

      Subflows can't be suspended independently of their parent run

      You can't suspend a subflow run independently of its parent flow run.

      If you use a flow to schedule a flow run with run_deployment, the scheduled flow run will be linked to the calling flow as a subflow run by default. This means you won't be able to suspend the scheduled flow run independently of the calling flow. Call run_deployment with as_subflow=False to disable this linking if you need to be able to suspend the scheduled flow run independently of the calling flow.

      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#waiting-for-input-when-pausing-or-suspending-a-flow-run","title":"Waiting for input when pausing or suspending a flow run","text":"

      Experimental

      The wait_for_input parameter used in the pause_flow_run or suspend_flow_run functions is an experimental feature. The interface or behavior of this feature may change without warning in future releases.

      If you encounter any issues, please let us know in Slack or with a Github issue.

      When pausing or suspending a flow run you may want to wait for input from a user. Prefect provides a way to do this by leveraging the pause_flow_run and suspend_flow_run functions. These functions accept a wait_for_input argument, the value of which should be a subclass of prefect.input.RunInput, a pydantic model. When resuming the flow run, users are required to provide data for this model. Upon successful validation, the flow run resumes, and the return value of the pause_flow_run or suspend_flow_run is an instance of the model containing the provided data.

      Here is an example of a flow that pauses and waits for input from a user:

      from prefect import flow, pause_flow_run\nfrom prefect.input import RunInput\n\n\nclass UserNameInput(RunInput):\n    name: str\n\n\n@flow(log_prints=True)\nasync def greet_user():\n    user_input = await pause_flow_run(\n        wait_for_input=UserNameInput\n    )\n\n    print(f\"Hello, {user_input.name}!\")\n

      Running this flow will create a flow run. The flow run will advance until code execution reaches pause_flow_run, at which point it will move into a Paused state. Execution will block and wait for resumption.

      When resuming the flow run, users will be prompted to provide a value for the name field of the UserNameInput model. Upon successful validation, the flow run will resume, and the return value of the pause_flow_run will be an instance of the UserNameInput model containing the provided data.

      For more in-depth information on receiving input from users when pausing and suspending flow runs, see the Creating interactive workflows guide.

      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#canceling-a-flow-run","title":"Canceling a flow run","text":"

      You may cancel a scheduled or in-progress flow run from the CLI, UI, REST API, or Python client.

When cancellation is requested, the flow run is moved to a \"Cancelling\" state. If the deployment is a work pool-based deployment with a worker, then the worker monitors the state of flow runs and detects that cancellation has been requested. The worker then sends a signal to the flow run infrastructure, requesting termination of the run. If the run does not terminate after a grace period (default of 30 seconds), the infrastructure will be killed, ensuring the flow run exits.

      A deployment is required

      Flow run cancellation requires the flow run to be associated with a deployment. A monitoring process must be running to enforce the cancellation. Inline subflow runs, i.e. those created without run_deployment, cannot be cancelled without cancelling the parent flow run. If you may need to cancel a subflow run independent of its parent flow run, we recommend deploying it separately and starting it using the run_deployment function.

      Cancellation is robust to restarts of Prefect workers. To enable this, we attach metadata about the created infrastructure to the flow run. Internally, this is referred to as the infrastructure_pid or infrastructure identifier. Generally, this is composed of two parts:

      1. Scope: identifying where the infrastructure is running.
      2. ID: a unique identifier for the infrastructure within the scope.

      The scope is used to ensure that Prefect does not kill the wrong infrastructure. For example, workers running on multiple machines may have overlapping process IDs but should not have a matching scope.

      The identifiers for infrastructure types:

      • Processes: The machine hostname and the PID.
      • Docker Containers: The Docker API URL and container ID.
      • Kubernetes Jobs: The Kubernetes cluster name and the job name.

While the cancellation process is robust, there are a few issues that can occur:

      • If the infrastructure block for the flow run has been removed or altered, cancellation may not work.
      • If the infrastructure block for the flow run does not have support for cancellation, cancellation will not work.
• If the identifier scope does not match when attempting to cancel a flow run, the worker will be unable to cancel the flow run. Another worker may attempt cancellation.
      • If the infrastructure associated with the run cannot be found or has already been killed, the worker will mark the flow run as cancelled.
• If the infrastructure_pid is missing from the flow run, it will be marked as cancelled but cancellation cannot be enforced.
• If the worker runs into an unexpected error during cancellation, the flow run may or may not be cancelled depending on where the error occurred. The worker will try again to cancel the flow run. Another worker may attempt cancellation.

      Enhanced cancellation

      We are working on improving cases where cancellation can fail. You can try the improved cancellation experience by enabling the PREFECT_EXPERIMENTAL_ENABLE_ENHANCED_CANCELLATION setting on your worker or agents:

      prefect config set PREFECT_EXPERIMENTAL_ENABLE_ENHANCED_CANCELLATION=True\n

      If you encounter any issues, please let us know in Slack or with a Github issue.

      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#cancel-via-the-cli","title":"Cancel via the CLI","text":"

      From the command line in your execution environment, you can cancel a flow run by using the prefect flow-run cancel CLI command, passing the ID of the flow run.

      prefect flow-run cancel 'a55a4804-9e3c-4042-8b59-b3b6b7618736'\n
      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#cancel-via-the-ui","title":"Cancel via the UI","text":"

      From the UI you can cancel a flow run by navigating to the flow run's detail page and clicking the Cancel button in the upper right corner.

      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#timeouts","title":"Timeouts","text":"

Flow timeouts are used to prevent unintentionally long-running flows. When the duration of execution for a flow exceeds the duration specified in the timeout, a timeout exception is raised and the flow is marked as failed. In the UI, the flow will be visibly designated as TimedOut.

      Timeout durations are specified using the timeout_seconds keyword argument.

      from prefect import flow\nimport time\n\n@flow(timeout_seconds=1, log_prints=True)\ndef show_timeouts():\n    print(\"I will execute\")\n    time.sleep(5)\n    print(\"I will not execute\")\n
      ","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/infrastructure/","title":"Infrastructure","text":"

      Workers are recommended

      Infrastructure blocks are part of the agent based deployment model. Work Pools and Workers simplify the specification of each flow's infrastructure and runtime environment. If you have existing agents, you can upgrade from agents to workers to significantly enhance the experience of deploying flows.

      Users may specify an infrastructure block when creating a deployment. This block will be used to specify infrastructure for flow runs created by the deployment at runtime.

      Infrastructure can only be used with a deployment. When you run a flow directly by calling the flow yourself, you are responsible for the environment in which the flow executes.

      ","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#infrastructure-overview","title":"Infrastructure overview","text":"

      Prefect uses infrastructure to create the environment for a user's flow to execute.

      Infrastructure is attached to a deployment and is propagated to flow runs created for that deployment. Infrastructure is deserialized by the agent and it has two jobs:

      • Create execution environment infrastructure for the flow run.
      • Run a Python command to start the prefect.engine in the infrastructure, which retrieves the flow from storage and executes the flow.

The engine acquires and calls the flow. Infrastructure doesn't know anything about how the flow is stored; it just passes a flow run ID to the engine.

      Infrastructure is specific to the environments in which flows will run. Prefect currently provides the following infrastructure types:

      • Process runs flows in a local subprocess.
      • DockerContainer runs flows in a Docker container.
      • KubernetesJob runs flows in a Kubernetes Job.
      • ECSTask runs flows in an Amazon ECS Task.
      • Cloud Run runs flows in a Google Cloud Run Job.
      • Container Instance runs flows in an Azure Container Instance.

      What about tasks?

      Flows and tasks can both use configuration objects to manage the environment in which code runs.

      Flows use infrastructure.

      Tasks use task runners. For more on how task runners work, see Task Runners.

      ","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#using-infrastructure","title":"Using infrastructure","text":"

You may create customized infrastructure blocks through the Prefect UI or Prefect Cloud Blocks page, or create them in code and save them to the API using the block's .save() method.
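
For example, here is a minimal sketch of creating an infrastructure block in code and saving it to the API (the block name my-docker-infra is illustrative):

from prefect.infrastructure import DockerContainer\n\n# Configure a DockerContainer infrastructure block in code\ndocker_block = DockerContainer(\n    image=\"prefecthq/prefect:2-latest\",\n    auto_remove=True,\n)\n\n# Save it to the API so it can be referenced by the slug docker-container/my-docker-infra\ndocker_block.save(\"my-docker-infra\")\n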

      Once created, there are two distinct ways to use infrastructure in a deployment:

      • Starting with Prefect defaults \u2014 this is what happens when you pass the -i or --infra flag and provide a type when building deployment files.
      • Pre-configure infrastructure settings as blocks and base your deployment infrastructure on those settings \u2014 by passing -ib or --infra-block and a block slug when building deployment files.

For example, when creating your deployment files, the supported Prefect infrastructure types are:

      • process
      • docker-container
      • kubernetes-job
      • ecs-task
      • cloud-run-job
      • container-instance-job
      $ prefect deployment build ./my_flow.py:my_flow -n my-flow-deployment -t test -i docker-container -sb s3/my-bucket --override env.EXTRA_PIP_PACKAGES=s3fs\nFound flow 'my-flow'\nSuccessfully uploaded 2 files to s3://bucket-full-of-sunshine\nDeployment YAML created at '/Users/terry/test/flows/infra/deployment.yaml'.\n

      In this example we specify the DockerContainer infrastructure in addition to a preconfigured AWS S3 bucket storage block.

The default deployment YAML file may be edited as needed to add an infrastructure type or infrastructure settings.

      ###\n### A complete description of a Prefect Deployment for flow 'my-flow'\n###\nname: my-flow-deployment\ndescription: null\nversion: e29de5d01b06d61b4e321d40f34a480c\n# The work queue that will handle this deployment's runs\nwork_queue_name: default\nwork_pool_name: default-agent-pool\ntags:\n- test\nparameters: {}\nschedules: []\npaused: true\ninfra_overrides:\n  env.EXTRA_PIP_PACKAGES: s3fs\ninfrastructure:\n  type: docker-container\n  env: {}\n  labels: {}\n  name: null\n  command:\n  - python\n  - -m\n  - prefect.engine\n  image: prefecthq/prefect:2-latest\n  image_pull_policy: null\n  networks: []\n  network_mode: null\n  auto_remove: false\n  volumes: []\n  stream_output: true\n  memswap_limit: null\n  mem_limit: null\n  privileged: false\n  block_type_slug: docker-container\n  _block_type_slug: docker-container\n\n###\n### DO NOT EDIT BELOW THIS LINE\n###\nflow_name: my-flow\nmanifest_path: my_flow-manifest.json\nstorage:\n  bucket_path: bucket-full-of-sunshine\n  aws_access_key_id: '**********'\n  aws_secret_access_key: '**********'\n  _is_anonymous: true\n  _block_document_name: anonymous-xxxxxxxx-f1ff-4265-b55c-6353a6d65333\n  _block_document_id: xxxxxxxx-06c2-4c3c-a505-4a8db0147011\n  block_type_slug: s3\n  _block_type_slug: s3\npath: ''\nentrypoint: my_flow.py:my-flow\nparameter_openapi_schema:\n  title: Parameters\n  type: object\n  properties: {}\n  required: null\n  definitions: null\ntimestamp: '2023-02-08T23:00:14.974642+00:00'\n

      Editing deployment YAML

      Note the big DO NOT EDIT comment in the deployment YAML: In practice, anything above this block can be freely edited before running prefect deployment apply to create the deployment on the API.
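
For example, assuming you run the command from the directory containing the generated deployment.yaml:

prefect deployment apply deployment.yaml\n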

      Once the deployment exists, any flow runs that this deployment starts will use DockerContainer infrastructure.

You can also create custom infrastructure blocks \u2014 either in the Prefect UI or in code via the API \u2014 and use the settings in the block to configure your infrastructure. For example, here we specify settings for Kubernetes infrastructure in a block named k8sdev.

      from prefect.infrastructure import KubernetesJob, KubernetesImagePullPolicy\n\nk8s_job = KubernetesJob(\n    namespace=\"dev\",\n    image=\"prefecthq/prefect:2.0.0-python3.9\",\n    image_pull_policy=KubernetesImagePullPolicy.IF_NOT_PRESENT,\n)\nk8s_job.save(\"k8sdev\")\n

Now we can apply the infrastructure type and settings in the block by specifying the block slug kubernetes-job/k8sdev as the infrastructure type when building a deployment:

      prefect deployment build flows/k8s_example.py:k8s_flow --name k8sdev --tag k8s -sb s3/dev -ib kubernetes-job/k8sdev\n

      See Deployments for more information about deployment build options.

      ","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#configuring-infrastructure","title":"Configuring infrastructure","text":"

Every infrastructure type has type-specific options.

      ","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#process","title":"Process","text":"

      Process infrastructure runs a command in a new process.

      Current environment variables and Prefect settings will be included in the created process. Configured environment variables will override any current environment variables.
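
As a brief sketch, a Process block that sets an environment variable for the flow run process (the variable value and block name are illustrative):

from prefect.infrastructure import Process\n\n# env values set here override any matching variables in the current environment\nprocess_block = Process(env={\"MY_API_URL\": \"https://example.com\"})\nprocess_block.save(\"my-process\")\n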

      Process supports the following settings:

• command: A list of strings specifying the command to start the flow run. In most cases you should not override this.
• env: Environment variables to set for the new process.
• labels: Labels for the process. Labels are for metadata purposes only and cannot be attached to the process itself.
• name: A name for the process. For display purposes only.
","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#dockercontainer","title":"DockerContainer","text":"

      DockerContainer infrastructure executes flow runs in a container.

      Requirements for DockerContainer:

      • Docker Engine must be available.
      • You must configure remote Storage. Local storage is not supported for Docker.
      • The API must be available from within the flow run container. To facilitate connections to locally hosted APIs, localhost and 127.0.0.1 will be replaced with host.docker.internal.
      • The ephemeral Prefect API won't work with Docker and Kubernetes. You must have a Prefect server or Prefect Cloud API endpoint set in your agent's configuration.

      DockerContainer supports the following settings:

• auto_remove: Bool indicating whether the container will be removed on completion. If False, the container will remain after exit for inspection.
• command: A list of strings specifying the command to run in the container to start the flow run. In most cases you should not override this.
• env: Environment variables to set for the container.
• image: An optional string specifying the name of a Docker image to use. Defaults to the Prefect image. If the image is stored anywhere other than a public Docker Hub registry, use a corresponding registry block, e.g. DockerRegistry, or ensure otherwise that your execution layer is authenticated to pull the image from the image registry.
• image_pull_policy: Specifies if the image should be pulled. One of 'ALWAYS', 'NEVER', 'IF_NOT_PRESENT'.
• image_registry: A DockerRegistry block containing credentials to use if image is stored in a private image registry.
• labels: An optional dictionary of labels, mapping name to value.
• name: An optional name for the container.
• networks: An optional list of strings specifying Docker networks to connect the container to.
• network_mode: Set the network mode for the created container. Defaults to 'host' if a local API url is detected, otherwise the Docker default of 'bridge' is used. If 'networks' is set, this cannot be set.
• stream_output: Bool indicating whether to stream output from the subprocess to local standard output.
• volumes: An optional list of volume mount strings in the format of \"local_path:container_path\".

      Prefect automatically sets a Docker image matching the Python and Prefect version you're using at deployment time. You can see all available images at Docker Hub.

      ","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#kubernetesjob","title":"KubernetesJob","text":"

      KubernetesJob infrastructure executes flow runs in a Kubernetes Job.

      Requirements for KubernetesJob:

      • kubectl must be available.
      • You must configure remote Storage. Local storage is not supported for Kubernetes.
      • The ephemeral Prefect API won't work with Docker and Kubernetes. You must have a Prefect server or Prefect Cloud API endpoint set in your agent's configuration.

      The Prefect CLI command prefect kubernetes manifest server automatically generates a Kubernetes manifest with default settings for Prefect deployments. By default, it simply prints out the YAML configuration for a manifest. You can pipe this output to a file of your choice and edit as necessary.
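
For example, a sketch of redirecting the manifest to a file, editing it as needed, and then applying it with kubectl (the filename is illustrative):

prefect kubernetes manifest server > prefect-server-manifest.yaml\nkubectl apply -f prefect-server-manifest.yaml\n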

      KubernetesJob supports the following settings:

• cluster_config: An optional Kubernetes cluster config to use for this job.
• command: A list of strings specifying the command to run in the container to start the flow run. In most cases you should not override this.
• customizations: A list of JSON 6902 patches to apply to the base Job manifest. Alternatively, a valid JSON string is allowed (handy for deployments CLI).
• env: Environment variables to set for the container.
• finished_job_ttl: The number of seconds to retain jobs after completion. If set, finished jobs will be cleaned up by Kubernetes after the given delay. If None (default), jobs will need to be manually removed.
• image: String specifying the tag of a Docker image to use for the Job.
• image_pull_policy: The Kubernetes image pull policy to use for job containers.
• job: The base manifest for the Kubernetes Job.
• job_watch_timeout_seconds: Number of seconds to watch for job creation before timing out (defaults to None).
• labels: Dictionary of labels to add to the Job.
• name: An optional name for the job.
• namespace: String signifying the Kubernetes namespace to use.
• pod_watch_timeout_seconds: Number of seconds to watch for pod creation before timing out (default 60).
• service_account_name: An optional string specifying which Kubernetes service account to use.
• stream_output: Bool indicating whether to stream output from the subprocess to local standard output.
","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#kubernetesjob-overrides-and-customizations","title":"KubernetesJob overrides and customizations","text":"

      When creating deployments using KubernetesJob infrastructure, the infra_overrides parameter expects a dictionary. For a KubernetesJob, the customizations parameter expects a list.

Containers expect a list of objects, even if there is only one. For any patches applying to the container, the path value should index into that list, for example: /spec/template/spec/containers/0/resources

A KubernetesJob infrastructure block defined in Python:

from prefect.infrastructure import KubernetesJob, KubernetesImagePullPolicy\n\ncustomizations = [\n {\n     \"op\": \"add\",\n     \"path\": \"/spec/template/spec/containers/0/resources\",\n     \"value\": {\n         \"requests\": {\n             \"cpu\": \"2000m\",\n             \"memory\": \"4Gi\"\n         },\n         \"limits\": {\n             \"cpu\": \"4000m\",\n             \"memory\": \"8Gi\",\n             \"nvidia.com/gpu\": \"1\"\n      }\n  },\n }\n]\n\nk8s_job = KubernetesJob(\n        namespace=namespace,\n        image=image_name,\n        image_pull_policy=KubernetesImagePullPolicy.ALWAYS,\n        finished_job_ttl=300,\n        job_watch_timeout_seconds=600,\n        pod_watch_timeout_seconds=600,\n        service_account_name=\"prefect-server\",\n        customizations=customizations,\n    )\nk8s_job.save(\"devk8s\")\n

      A Deployment with infra-overrides defined in Python:

from prefect.deployments import Deployment\nfrom prefect.infrastructure import KubernetesJob\n\ninfra_overrides={ \n    \"customizations\": [\n            {\n                \"op\": \"add\",\n                \"path\": \"/spec/template/spec/containers/0/resources\",\n                \"value\": {\n                    \"requests\": {\n                        \"cpu\": \"2000m\",\n                        \"memory\": \"4Gi\"\n                    },\n                    \"limits\": {\n                        \"cpu\": \"4000m\",\n                        \"memory\": \"8Gi\",\n                        \"nvidia.com/gpu\": \"1\"\n                }\n            },\n        }\n    ]\n}\n\n# Load the previously saved KubernetesJob block\nk8sjob = KubernetesJob.load(\"devk8s\")\n\ndeployment = Deployment.build_from_flow(\n    flow=my_flow,\n    name=\"s3-example\",\n    version=2,\n    work_queue_name=\"aws\",\n    infrastructure=k8sjob,\n    storage=storage,\n    infra_overrides=infra_overrides,\n)\n\ndeployment.apply()\n
      ","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#ecstask","title":"ECSTask","text":"

      ECSTask infrastructure runs your flow in an ECS Task.

      Requirements for ECSTask:

      • The ephemeral Prefect API won't work with ECS directly. You must have a Prefect server or Prefect Cloud API endpoint set in your agent's configuration.
      • The prefect-aws collection must be installed within the agent environment: pip install prefect-aws
      • The ECSTask and AwsCredentials blocks must be registered within the agent environment: prefect block register -m prefect_aws.ecs
      • You must configure remote Storage. Local storage is not supported for ECS tasks. The most commonly used type of storage with ECSTask is S3. If you leverage that type of block, make sure that s3fs is installed within your agent and flow run environment. The easiest way to satisfy all the installation-related points mentioned above is to include the following commands in your Dockerfile:
      FROM prefecthq/prefect:2-python3.9  # example base image \nRUN pip install s3fs prefect-aws\n

      Make sure to allocate enough CPU and memory to your agent, and consider adding retries

      When you start a Prefect agent on AWS ECS Fargate, allocate as much CPU and memory as needed for your workloads. Your agent needs enough resources to appropriately provision infrastructure for your flow runs and to monitor their execution. Otherwise, your flow runs may get stuck in a Pending state. Alternatively, set a work-queue concurrency limit to ensure that the agent will not try to process all runs at the same time.

      Some API calls to provision infrastructure may fail due to unexpected issues on the client side (for example, transient errors such as ConnectionError, HTTPClientError, or RequestTimeout), or due to server-side rate limiting from the AWS service. To mitigate those issues, we recommend adding environment variables such as AWS_MAX_ATTEMPTS (can be set to an integer value such as 10) and AWS_RETRY_MODE (can be set to a string value including standard or adaptive modes). Those environment variables must be added within the agent environment, e.g. on your ECS service running the agent, rather than on the ECSTask infrastructure block.

      ","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/infrastructure/#docker-images","title":"Docker images","text":"

      Learn about options for Prefect-maintained Docker images in the Docker guide.

      ","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":0.5},{"location":"concepts/results/","title":"Results","text":"

      Results represent the data returned by a flow or a task.

      ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#retrieving-results","title":"Retrieving results","text":"

      When calling flows or tasks, the result is returned directly:

      from prefect import flow, task\n\n@task\ndef my_task():\n    return 1\n\n@flow\ndef my_flow():\n    task_result = my_task()\n    return task_result + 1\n\nresult = my_flow()\nassert result == 2\n

      When working with flow and task states, the result can be retrieved with the State.result() method:

      from prefect import flow, task\n\n@task\ndef my_task():\n    return 1\n\n@flow\ndef my_flow():\n    state = my_task(return_state=True)\n    return state.result() + 1\n\nstate = my_flow(return_state=True)\nassert state.result() == 2\n

      When submitting tasks to a runner, the result can be retrieved with the Future.result() method:

      from prefect import flow, task\n\n@task\ndef my_task():\n    return 1\n\n@flow\ndef my_flow():\n    future = my_task.submit()\n    return future.result() + 1\n\nresult = my_flow()\nassert result == 2\n
      ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#handling-failures","title":"Handling failures","text":"

      Sometimes your flows or tasks will encounter an exception. Prefect captures all exceptions in order to report states to the orchestrator, but we do not hide them from you (unless you ask us to) as your program needs to know if an unexpected error has occurred.

      When calling flows or tasks, the exceptions are raised as in normal Python:

      from prefect import flow, task\n\n@task\ndef my_task():\n    raise ValueError()\n\n@flow\ndef my_flow():\n    try:\n        my_task()\n    except ValueError:\n        print(\"Oh no! The task failed.\")\n\n    return True\n\nmy_flow()\n

      If you would prefer to check for a failed task without using try/except, you may ask Prefect to return the state:

      from prefect import flow, task\n\n@task\ndef my_task():\n    raise ValueError()\n\n@flow\ndef my_flow():\n    state = my_task(return_state=True)\n\n    if state.is_failed():\n        print(\"Oh no! The task failed. Falling back to '1'.\")\n        result = 1\n    else:\n        result = state.result()\n\n    return result + 1\n\nresult = my_flow()\nassert result == 2\n

      If you retrieve the result from a failed state, the exception will be raised. For this reason, it's often best to check if the state is failed first.

      from prefect import flow, task\n\n@task\ndef my_task():\n    raise ValueError()\n\n@flow\ndef my_flow():\n    state = my_task(return_state=True)\n\n    try:\n        result = state.result()\n    except ValueError:\n        print(\"Oh no! The state raised the error!\")\n\n    return True\n\nmy_flow()\n

      When retrieving the result from a state, you can ask Prefect not to raise exceptions:

      from prefect import flow, task\n\n@task\ndef my_task():\n    raise ValueError()\n\n@flow\ndef my_flow():\n    state = my_task(return_state=True)\n\n    maybe_result = state.result(raise_on_failure=False)\n    if isinstance(maybe_result, ValueError):\n        print(\"Oh no! The task failed. Falling back to '1'.\")\n        result = 1\n    else:\n        result = maybe_result\n\n    return result + 1\n\nresult = my_flow()\nassert result == 2\n

      When submitting tasks to a runner, Future.result() works the same as State.result():

      from prefect import flow, task\n\n@task\ndef my_task():\n    raise ValueError()\n\n@flow\ndef my_flow():\n    future = my_task.submit()\n\n    try:\n        future.result()\n    except ValueError:\n        print(\"Ah! Futures will raise the failure as well.\")\n\n    # You can ask it not to raise the exception too\n    maybe_result = future.result(raise_on_failure=False)\n    print(f\"Got {type(maybe_result)}\")\n\n    return True\n\nmy_flow()\n
      ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#working-with-async-results","title":"Working with async results","text":"

      When calling flows or tasks, the result is returned directly:

      import asyncio\nfrom prefect import flow, task\n\n@task\nasync def my_task():\n    return 1\n\n@flow\nasync def my_flow():\n    task_result = await my_task()\n    return task_result + 1\n\nresult = asyncio.run(my_flow())\nassert result == 2\n

      When working with flow and task states, the result can be retrieved with the State.result() method:

      import asyncio\nfrom prefect import flow, task\n\n@task\nasync def my_task():\n    return 1\n\n@flow\nasync def my_flow():\n    state = await my_task(return_state=True)\n    result = await state.result(fetch=True)\n    return result + 1\n\nasync def main():\n    state = await my_flow(return_state=True)\n    assert await state.result(fetch=True) == 2\n\nasyncio.run(main())\n

      Resolving results

      Prefect 2.6.0 added automatic retrieval of persisted results. Prior to this version, State.result() did not require an await. For backwards compatibility, when used from an asynchronous context, State.result() returns a raw result type.

      You may opt-in to the new behavior by passing fetch=True as shown in the example above. If you would like this behavior to be used automatically, you may enable the PREFECT_ASYNC_FETCH_STATE_RESULT setting. If you do not opt-in to this behavior, you will see a warning.
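
For example, the setting can be enabled from the CLI:

prefect config set PREFECT_ASYNC_FETCH_STATE_RESULT=true\n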

      You may also opt-out by setting fetch=False. This will silence the warning, but you will need to retrieve your result manually from the result type.

      When submitting tasks to a runner, the result can be retrieved with the Future.result() method:

      import asyncio\nfrom prefect import flow, task\n\n@task\nasync def my_task():\n    return 1\n\n@flow\nasync def my_flow():\n    future = await my_task.submit()\n    result = await future.result()\n    return result + 1\n\nresult = asyncio.run(my_flow())\nassert result == 2\n
      ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#persisting-results","title":"Persisting results","text":"

      The Prefect API does not store your results except in special cases. Instead, the result is persisted to a storage location in your infrastructure and Prefect stores a reference to the result.

      The following Prefect features require results to be persisted:

      • Task cache keys
      • Flow run retries

      If results are not persisted, these features may not be usable.

      ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#configuring-persistence-of-results","title":"Configuring persistence of results","text":"

      Persistence of results requires a serializer and a storage location. Prefect sets defaults for these, and you should not need to adjust them until you want to customize behavior. You can configure results on the flow and task decorators with the following options:

      • persist_result: Whether the result should be persisted to storage.
      • result_storage: Where to store the result when persisted.
      • result_serializer: How to convert the result to a storable form.
      ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#toggling-persistence","title":"Toggling persistence","text":"

      Persistence of the result of a task or flow can be configured with the persist_result option. The persist_result option defaults to a null value, which will automatically enable persistence if it is needed for a Prefect feature used by the flow or task. Otherwise, persistence is disabled by default.

      For example, the following flow has retries enabled. Flow retries require that all task results are persisted, so the task's result will be persisted:

      from prefect import flow, task\n\n@task\ndef my_task():\n    return \"hello world!\"\n\n@flow(retries=2)\ndef my_flow():\n    # This task does not have persistence toggled off and it is needed for the flow feature,\n    # so Prefect will persist its result at runtime\n    my_task()\n

      Flow retries do not require the flow's result to be persisted, so it will not be.

      In this next example, one task has caching enabled. Task caching requires that the given task's result is persisted:

      from prefect import flow, task\nfrom datetime import timedelta\n\n@task(cache_key_fn=lambda: \"always\", cache_expiration=timedelta(seconds=20))\ndef my_task():\n    # This task uses caching so its result will be persisted by default\n    return \"hello world!\"\n\n\n@task\ndef my_other_task():\n    ...\n\n@flow\ndef my_flow():\n    # This task uses a feature that requires result persistence\n    my_task()\n\n    # This task does not use a feature that requires result persistence and the\n    # flow does not use any features that require task result persistence so its\n    # result will not be persisted by default\n    my_other_task()\n

      Persistence of results can be manually toggled on or off:

      from prefect import flow, task\n\n@flow(persist_result=True)\ndef my_flow():\n    # This flow will persist its result even if not necessary for a feature.\n    ...\n\n@task(persist_result=False)\ndef my_task():\n    # This task will never persist its result.\n    # If persistence needed for a feature, an error will be raised.\n    ...\n

      Toggling persistence manually will always override any behavior that Prefect would infer.

You may also change Prefect's default persistence behavior with the PREFECT_RESULTS_PERSIST_BY_DEFAULT setting. To persist results by default, even if they are not needed for a feature, change the value to a truthy value:

      prefect config set PREFECT_RESULTS_PERSIST_BY_DEFAULT=true\n

Tasks and flows with persist_result=False will not persist their results even if PREFECT_RESULTS_PERSIST_BY_DEFAULT is true.

      ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-storage-location","title":"Result storage location","text":"

      The result storage location can be configured with the result_storage option. The result_storage option defaults to a null value, which infers storage from the context. Generally, this means that tasks will use the result storage configured on the flow unless otherwise specified. If there is no context to load the storage from and results must be persisted, results will be stored in the path specified by the PREFECT_LOCAL_STORAGE_PATH setting (defaults to ~/.prefect/storage).

      from prefect import flow, task\nfrom prefect.filesystems import LocalFileSystem, S3\n\n@flow(persist_result=True)\ndef my_flow():\n    my_task()  # This task will use the flow's result storage\n\n@task(persist_result=True)\ndef my_task():\n    ...\n\nmy_flow()  # The flow has no result storage configured and no parent, the local file system will be used.\n\n\n# Reconfigure the flow to use a different storage type\nnew_flow = my_flow.with_options(result_storage=S3(bucket_path=\"my-bucket\"))\n\nnew_flow()  # The flow and task within it will use S3 for result storage.\n

      You can configure this to use a specific storage using one of the following:

      • A storage instance, e.g. LocalFileSystem(basepath=\".my-results\")
      • A storage slug, e.g. 's3/dev-s3-block'
      ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-storage-key","title":"Result storage key","text":"

      The path of the result file in the result storage can be configured with the result_storage_key. The result_storage_key option defaults to a null value, which generates a unique identifier for each result.

      from prefect import flow, task\nfrom prefect.filesystems import LocalFileSystem, S3\n\n@flow(result_storage=S3(bucket_path=\"my-bucket\"))\ndef my_flow():\n    my_task()\n\n@task(persist_result=True, result_storage_key=\"my_task.json\")\ndef my_task():\n    ...\n\nmy_flow()  # The task's result will be persisted to 's3://my-bucket/my_task.json'\n

      Result storage keys are formatted with access to all of the modules in prefect.runtime and the run's parameters. In the following example, we will run a flow with three runs of the same task. Each task run will write its result to a unique file based on the name parameter.

      from prefect import flow, task\n\n@flow()\ndef my_flow():\n    hello_world()\n    hello_world(name=\"foo\")\n    hello_world(name=\"bar\")\n\n@task(persist_result=True, result_storage_key=\"hello-{parameters[name]}.json\")\ndef hello_world(name: str = \"world\"):\n    return f\"hello {name}\"\n\nmy_flow()\n

      After running the flow, we can see three persisted result files in our storage directory:

      $ ls ~/.prefect/storage | grep \"hello-\"\nhello-bar.json\nhello-foo.json\nhello-world.json\n

      In the next example, we include metadata about the flow run from the prefect.runtime.flow_run module:

      from prefect import flow, task\n\n@flow\ndef my_flow():\n    hello_world()\n\n@task(persist_result=True, result_storage_key=\"{flow_run.flow_name}_{flow_run.name}_hello.json\")\ndef hello_world(name: str = \"world\"):\n    return f\"hello {name}\"\n\nmy_flow()\n

      After running this flow, we can see a result file templated with the name of the flow and the flow run:

      \u276f ls ~/.prefect/storage | grep \"my-flow\"    \nmy-flow_industrious-trout_hello.json\n

      If a result exists at a given storage key in the storage location, it will be overwritten.

      Result storage keys can only be configured on tasks at this time.

      ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-serializer","title":"Result serializer","text":"

      The result serializer can be configured with the result_serializer option. The result_serializer option defaults to a null value, which infers the serializer from the context. Generally, this means that tasks will use the result serializer configured on the flow unless otherwise specified. If there is no context to load the serializer from, the serializer defined by PREFECT_RESULTS_DEFAULT_SERIALIZER will be used. This setting defaults to Prefect's pickle serializer.

You may configure the result serializer using either of the following (a short sketch follows the list):

      • A type name, e.g. \"json\" or \"pickle\" \u2014 this corresponds to an instance with default values
      • An instance, e.g. JSONSerializer(jsonlib=\"orjson\")
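
A minimal sketch of setting the serializer on a task decorator, using the json type name:

from prefect import task\n\n@task(persist_result=True, result_serializer=\"json\")\ndef my_task():\n    return {\"greeting\": \"hello world!\"}\n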
      ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#compressing-results","title":"Compressing results","text":"

      Prefect provides a CompressedSerializer which can be used to wrap other serializers to provide compression over the bytes they generate. The compressed serializer uses lzma compression by default. We test other compression schemes provided in the Python standard library such as bz2 and zlib, but you should be able to use any compression library that provides compress and decompress methods.

      You may configure compression of results using:

      • A type name, prefixed with compressed/ e.g. \"compressed/json\" or \"compressed/pickle\"
      • An instance e.g. CompressedSerializer(serializer=\"pickle\", compressionlib=\"lzma\")

      Note that the \"compressed/<serializer-type>\" shortcut will only work for serializers provided by Prefect. If you are using custom serializers, you must pass a full instance.

      ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#storage-of-results-in-prefect","title":"Storage of results in Prefect","text":"

      The Prefect API does not store your results in most cases for the following reasons:

      • Results can be large and slow to send to and from the API.
      • Results often contain private information or data.
      • Results would need to be stored in the database or complex logic implemented to hydrate from another source.

      There are a few cases where Prefect will store your results directly in the database. This is an optimization to reduce the overhead of reading and writing to result storage.

      The following data types will be stored by the API without persistence to storage:

      • booleans (True, False)
      • nulls (None)

      If persist_result is set to False, these values will never be stored.

      ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#tracking-results","title":"Tracking results","text":"

      The Prefect API tracks metadata about your results. The value of your result is only stored in specific cases. Result metadata can be seen in the UI on the \"Results\" page for flows.

      Prefect tracks the following result metadata:

      • Data type
      • Storage location (if persisted)
      ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#caching-of-results-in-memory","title":"Caching of results in memory","text":"

When running your workflows, Prefect will keep the results of all tasks and flows in memory so they can be passed downstream. In some cases, it is desirable to override this behavior. For example, if you are returning a large amount of data from a task, it can be costly to keep it in memory for the entire duration of the flow run.

      Flows and tasks both include an option to drop the result from memory with cache_result_in_memory:

      @flow(cache_result_in_memory=False)\ndef foo():\n    return \"pretend this is large data\"\n\n@task(cache_result_in_memory=False)\ndef bar():\n    return \"pretend this is biiiig data\"\n

      When cache_result_in_memory is disabled, the result of your flow or task will be persisted by default. The result will then be pulled from storage when needed.

@flow\ndef foo():\n    result = bar()\n    state = bar(return_state=True)\n\n    # The result will be retrieved from storage here\n    state.result()\n\n    future = bar.submit()\n    # The result will be retrieved from storage here\n    future.result()\n\n@task(cache_result_in_memory=False)\ndef bar():\n    # This result will be persisted\n    return \"pretend this is biiiig data\"\n

      If both cache_result_in_memory and persistence are disabled, your results will not be available downstream.

@task(persist_result=False, cache_result_in_memory=False)\ndef bar():\n    return \"pretend this is biiiig data\"\n\n@flow\ndef foo():\n    # Raises an error\n    result = bar()\n\n    # This is okay\n    state = bar(return_state=True)\n\n    # Raises an error\n    state.result()\n\n    # This is okay\n    future = bar.submit()\n\n    # Raises an error\n    future.result()\n
      ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-storage-types","title":"Result storage types","text":"

      Result storage is responsible for reading and writing serialized data to an external location. At this time, any file system block can be used for result storage.

      ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-serializer-types","title":"Result serializer types","text":"

      A result serializer is responsible for converting your Python object to and from bytes. This is necessary to store the object outside of Python and retrieve it later.

      ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#pickle-serializer","title":"Pickle serializer","text":"

      Pickle is a standard Python protocol for encoding arbitrary Python objects. We supply a custom pickle serializer at prefect.serializers.PickleSerializer. Prefect's pickle serializer uses the cloudpickle project by default to support more object types. Alternative pickle libraries can be specified:

      from prefect.serializers import PickleSerializer\n\nPickleSerializer(picklelib=\"custompickle\")\n

      Benefits of the pickle serializer:

      • Many object types are supported.
      • Objects can define custom pickle support.

      Drawbacks of the pickle serializer:

      • When nested attributes of an object cannot be pickled, it is hard to determine the cause.
      • When deserializing objects, your Python and pickle library versions must match the one used at serialization time.
      • Serialized objects cannot be easily shared across different programming languages.
      • Serialized objects are not human readable.
      ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#json-serializer","title":"JSON serializer","text":"

      We supply a custom JSON serializer at prefect.serializers.JSONSerializer. Prefect's JSON serializer uses custom hooks by default to support more object types. Specifically, we add support for all types supported by Pydantic.

      By default, we use the standard Python json library. Alternative JSON libraries can be specified:

      from prefect.serializers import JSONSerializer\n\nJSONSerializer(jsonlib=\"orjson\")\n

      Benefits of the JSON serializer:

      • Serialized objects are human readable.
      • Serialized objects can often be shared across different programming languages.
      • Deserialization of serialized objects is generally version agnostic.

      Drawbacks of the JSON serializer:

      • Supported types are limited.
      • Implementing support for additional types must be done at the serializer level.
      ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-types","title":"Result types","text":"

      Prefect uses internal result types to capture information about the result attached to a state. The following types are used:

      • UnpersistedResult: Stores result metadata but the value is only available when created.
      • LiteralResult: Stores simple values inline.
      • PersistedResult: Stores a reference to a result persisted to storage.

      All result types include a get() method that can be called to return the value of the result. This is done behind the scenes when the result() method is used on states or futures.

      ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#unpersisted-results","title":"Unpersisted results","text":"

      Unpersisted results are used to represent results that have not been and will not be persisted beyond the current flow run. The value associated with the result is stored in memory, but will not be available later. Result metadata is attached to this object for storage in the API and representation in the UI.

      ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#literal-results","title":"Literal results","text":"

      Literal results are used to represent results stored in the Prefect database. The values contained by these results must always be JSON serializable.

      Example:

      result = LiteralResult(value=None)\nresult.json()\n# {\"type\": \"result\", \"value\": \"null\"}\n

      Literal results reduce the overhead required to persist simple results.

      ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#persisted-results","title":"Persisted results","text":"

      The persisted result type contains all of the information needed to retrieve the result from storage. This includes:

      • Storage: A reference to the result storage that can be used to read the serialized result.
      • Key: Indicates where this specific result is in storage.

      Persisted result types also contain metadata for inspection without retrieving the result:

      • Serializer type: The name of the result serializer type.

      The get() method on result references retrieves the data from storage, deserializes it, and returns the original object. The get() operation will cache the resolved object to reduce the overhead of subsequent calls.

      ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#persisted-result-blob","title":"Persisted result blob","text":"

      When results are persisted to storage, they are always written as a JSON document. The schema for this is described by the PersistedResultBlob type. The document contains:

      • The serialized data of the result.
• A full description of the result serializer that can be used to deserialize the result data.
      • The Prefect version used to create the result.
      ","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/schedules/","title":"Schedules","text":"

      Scheduling is one of the primary reasons for using an orchestrator such as Prefect. Prefect allows you to use schedules to automatically create new flow runs for deployments.

      Prefect Cloud can also schedule flow runs through event-driven automations.

      Schedules tell the Prefect API how to create new flow runs for you automatically on a specified cadence.

      You can add a schedule to any deployment. The Prefect Scheduler service periodically reviews every deployment and creates new flow runs according to the schedule configured for the deployment.

      Support for multiple schedules

      We are currently rolling out support for multiple schedules per deployment. You can now assign multiple schedules to deployments in the Prefect UI, the CLI via prefect deployment schedule commands, the Deployment class, and in block-based deployment YAML files.

      Support for multiple schedules in flow.serve, flow.deploy, serve, and worker-based deployments with prefect deploy will arrive soon.

      ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#schedule-types","title":"Schedule types","text":"

      Prefect supports several types of schedules that cover a wide range of use cases and offer a large degree of customization:

      • Cron is most appropriate for users who are already familiar with cron from previous use.
      • Interval is best suited for deployments that need to run at some consistent cadence that isn't related to absolute time.
      • RRule is best suited for deployments that rely on calendar logic for simple recurring schedules, irregular intervals, exclusions, or day-of-month adjustments.

      Schedules can be inactive

      When you create or edit a schedule, you can set the active property to False in Python (or false in a YAML file) to deactivate the schedule. This is useful if you want to keep the schedule configuration but temporarily stop the schedule from creating new flow runs.

      ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#cron","title":"Cron","text":"

      A schedule may be specified with a cron pattern. Users may also provide a timezone to enforce DST behaviors.

      Cron uses croniter to specify datetime iteration with a cron-like format.
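
As a brief sketch, the same cron syntax can be used to build a schedule object in Python, using the CronSchedule class shown later in this document:

from prefect.client.schemas.schedules import CronSchedule\n\n# Run at 9am Chicago time on weekdays\nschedule = CronSchedule(cron=\"0 9 * * 1-5\", timezone=\"America/Chicago\")\n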

      Cron properties include:

• cron: A valid cron string. (Required)
• day_or: Boolean indicating how croniter handles day and day_of_week entries. Default is True.
• timezone: String name of a time zone. (See the IANA Time Zone Database for valid time zones.)
","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#how-the-day_or-property-works","title":"How the day_or property works","text":"

      The day_or property defaults to True, matching the behavior of cron. In this mode, if you specify a day (of the month) entry and a day_of_week entry, the schedule will run a flow on both the specified day of the month and on the specified day of the week. The \"or\" in day_or refers to the fact that the two entries are treated like an OR statement, so the schedule should include both, as in the SQL statement SELECT * FROM employees WHERE first_name = 'Xi\u0101ng' OR last_name = 'Brookins';.

      For example, with day_or set to True, the cron schedule * * 3 1 2 runs a flow every minute on the 3rd day of the month (whatever that is) and on Tuesday (the second day of the week) in January (the first month of the year).

      With day_or set to False, the day (of the month) and day_of_week entries are joined with the more restrictive AND operation, as in the SQL statement SELECT * from employees WHERE first_name = 'Andrew' AND last_name = 'Brookins';. For example, the same schedule, when day_or is False, runs a flow on every minute on the 3rd Tuesday in January. This behavior matches fcron instead of cron.

      Supported croniter features

      While Prefect supports most features of croniter for creating cron-like schedules, we do not currently support \"R\" random or \"H\" hashed keyword expressions or the schedule jittering possible with those expressions.

      Daylight saving time considerations

      If the timezone is a DST-observing one, then the schedule will adjust itself appropriately.

      The cron rules for DST are based on schedule times, not intervals. This means that an hourly cron schedule fires on every new schedule hour, not every elapsed hour. For example, when clocks are set back, this results in a two-hour pause as the schedule will fire the first time 1am is reached and the first time 2am is reached, 120 minutes later.

      Longer schedules, such as one that fires at 9am every morning, will adjust for DST automatically.

      ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#interval","title":"Interval","text":"

      An Interval schedule creates new flow runs on a regular interval measured in seconds. Intervals are computed using an optional anchor_date. For example, here's how you can create a schedule for every 10 minutes in a block-based deployment YAML file:

      schedule:\n  interval: 600\n  timezone: America/Chicago \n

      Interval properties include:

• interval: datetime.timedelta indicating the time between flow runs. (Required)
• anchor_date: datetime.datetime indicating the starting or \"anchor\" date to begin the schedule. If no anchor_date is supplied, the current UTC time is used.
• timezone: String name of a time zone, used to enforce localization behaviors like DST boundaries. (See the IANA Time Zone Database for valid time zones.)

Note that the anchor_date does not indicate a \"start time\" for the schedule, but rather a fixed point in time from which to compute intervals. If the anchor date is in the future, then schedule dates are computed by subtracting the interval from it. If you build a schedule with an anchor_date in Python, the Pendulum package can make datetime manipulation easier; Pendulum isn't required, but it's a useful tool for specifying dates.

      Daylight saving time considerations

      If the schedule's anchor_date or timezone are provided with a DST-observing timezone, then the schedule will adjust itself appropriately. Intervals greater than 24 hours will follow DST conventions, while intervals of less than 24 hours will follow UTC intervals.

      For example, an hourly schedule will fire every UTC hour, even across DST boundaries. When clocks are set back, this will result in two runs that appear to both be scheduled for 1am local time, even though they are an hour apart in UTC time.

      For longer intervals, like a daily schedule, the interval schedule will adjust for DST boundaries so that the clock-hour remains constant. This means that a daily schedule that always fires at 9am will observe DST and continue to fire at 9am in the local time zone.

      ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#rrule","title":"RRule","text":"

An RRule schedule supports iCal recurrence rules (RRules), which provide convenient syntax for creating repetitive schedules. Schedules can repeat on a frequency from yearly down to every minute.

      RRule uses the dateutil rrule module to specify iCal recurrence rules.

      RRules are appropriate for any kind of calendar-date manipulation, including simple repetition, irregular intervals, exclusions, week day or day-of-month adjustments, and more. RRules can represent complex logic like:

      • The last weekday of each month
      • The fourth Thursday of November
      • Every other day of the week

      RRule properties include:

• rrule: String representation of an RRule schedule. See the rrulestr examples for syntax.
• timezone: String name of a time zone. See the IANA Time Zone Database for valid time zones.

      You may find it useful to use an RRule string generator such as the iCalendar.org RRule Tool to help create valid RRules.

      For example, the following RRule schedule in a block-based deployment YAML file creates flow runs on Monday, Wednesday, and Friday until July 30, 2024.

      schedule:\n  rrule: 'FREQ=WEEKLY;BYDAY=MO,WE,FR;UNTIL=20240730T040000Z'\n
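
A sketch of the equivalent schedule defined in Python, assuming the RRuleSchedule class from prefect.client.schemas.schedules:

from prefect.client.schemas.schedules import RRuleSchedule\n\n# Mondays, Wednesdays, and Fridays until July 30, 2024\nschedule = RRuleSchedule(rrule=\"FREQ=WEEKLY;BYDAY=MO,WE,FR;UNTIL=20240730T040000Z\")\n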

      RRule restrictions

Note that the maximum supported character length of an rrulestr is 6,500 characters.

      Note that COUNT is not supported. Please use UNTIL or the /deployments/{id}/runs endpoint to schedule a fixed number of flow runs.

      Daylight saving time considerations

      Note that as a calendar-oriented standard, RRules are sensitive to the initial timezone provided. A 9am daily schedule with a DST-aware start date will maintain a local 9am time through DST boundaries. A 9am daily schedule with a UTC start date will maintain a 9am UTC time.

      ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#creating-schedules","title":"Creating schedules","text":"

      There are several ways to create a schedule for a deployment:

      • Through the Prefect UI
      • Via the cron, interval, or rrule parameters if building your deployment via the serve method of the Flow object or the serve utility for managing multiple flows simultaneously
      • If using worker-based deployments
      • When you define a deployment with flow.serve or flow.deploy
      • Through the interactive prefect deploy command
      • With the deployments -> schedules section of the prefect.yaml file
      • If using block-based deployments - Deprecated
      • Through the schedules section of the deployment YAML file
      • By passing schedules into the Deployment class or Deployment.build_from_flow
      ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#creating-schedules-in-the-ui","title":"Creating schedules in the UI","text":"

      You can add schedules in the Schedules section on a Deployment page in the UI.

      ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#locating-the-schedules-section","title":"Locating the Schedules section","text":"

      The Schedules section appears in the sidebar on the right side of the page on wider displays. On narrower displays, it appears on the Details tab of the page.

      ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#adding-a-schedule","title":"Adding a schedule","text":"

      Under Schedules, select the + Schedule button. A modal dialog will open. Choose Interval or Cron to create a schedule.

      What about RRule?

      The UI does not support creating RRule schedules. However, the UI will display RRule schedules that you've created via the command line.

      The new schedule will appear on the Deployment page where you created it. In addition, the schedule will be viewable in human-friendly text in the list of deployments on the Deployments page.

      After you create a schedule, new scheduled flow runs will be visible in the Upcoming tab of the Deployment page where you created it.

      ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#editing-schedules","title":"Editing schedules","text":"

      You can edit a schedule by selecting Edit from the three-dot menu next to a schedule on a Deployment page.

      ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#creating-schedules-with-a-python-deployment-creation-file","title":"Creating schedules with a Python deployment creation file","text":"

When you create a deployment in a Python file with flow.serve(), serve, flow.deploy(), or deploy, you can specify the schedule. Just add the keyword argument cron, interval, or rrule.

      interval: An interval on which to execute the deployment. Accepts a number or a \n    timedelta object to create a single schedule. If a number is given, it will be \n    interpreted as seconds. Also accepts an iterable of numbers or timedelta to create \n    multiple schedules.\ncron: A cron schedule string of when to execute runs of this deployment. \n    Also accepts an iterable of cron schedule strings to create multiple schedules.\nrrule: An rrule schedule string of when to execute runs of this deployment.\n    Also accepts an iterable of rrule schedule strings to create multiple schedules.\nschedules: A list of schedule objects defining when to execute runs of this deployment.\n    Used to define multiple schedules or additional scheduling options such as `timezone`.\nschedule: A schedule object defining when to execute runs of this deployment. Used to\n    define additional scheduling options like `timezone`.\n

      Here's an example of creating a cron schedule with serve for a deployment flow that will run every minute of every day:

      my_flow.serve(name=\"flowing\", cron=\"* * * * *\")\n

      If using work pool-based deployments, the deploy method has the same schedule-based parameters.
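
      For example, here's a minimal sketch of attaching a cron schedule with deploy; the work pool name and image below are illustrative assumptions:

      my_flow.deploy(\n    name=\"flowing\",\n    work_pool_name=\"my-docker-pool\",  # assumes this work pool exists\n    image=\"my-registry/my-image:dev\",  # assumes this image can be built and pushed\n    cron=\"0 9 * * *\",\n)\n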

      Here's an example of creating an interval schedule with serve for a deployment flow that will run every 10 minutes with an anchor date and a timezone:

      from datetime import timedelta, datetime\nfrom prefect.client.schemas.schedules import IntervalSchedule\n\nmy_flow.serve(name=\"flowing\", schedule=IntervalSchedule(interval=timedelta(minutes=10), anchor_date=datetime(2023, 1, 1, 0, 0), timezone=\"America/Chicago\"))\n

      Block- and agent-based deployments with Python files are not a recommended way to create deployments. However, if you are using that deployment creation method, you can create a schedule by passing a schedule argument to the Deployment.build_from_flow method.

      Here's how you create the equivalent schedule in a Python deployment file.

      from prefect.client.schemas.schedules import CronSchedule\nfrom prefect.deployments import Deployment\n\n# pipeline is the flow function being deployed\ncron_demo = Deployment.build_from_flow(\n    pipeline,\n    \"etl\",\n    schedule=(CronSchedule(cron=\"0 0 * * *\", timezone=\"America/Chicago\"))\n)\n
      ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#creating-schedules-with-the-interactive-prefect-deploy-command","title":"Creating schedules with the interactive prefect deploy command","text":"

      If you are using worker-based deployments, you can create a schedule through the interactive prefect deploy command. You will be prompted to choose which type of schedule to create.
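
      For example, a minimal invocation from the root of your project (the entrypoint and deployment name below are illustrative); the command then prompts you for a schedule:

      prefect deploy flows/my_flow.py:my_flow -n my-scheduled-deployment\n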

      ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#creating-schedules-in-the-prefectyaml-files-deployments-schedule-section","title":"Creating schedules in the prefect.yaml file's deployments -> schedule section","text":"

      If you save the prefect.yaml file from the prefect deploy command, you will see it has a schedules section for your deployment. Alternatively, you can create a prefect.yaml file from a recipe or from scratch and add a schedules section to it.

      deployments:\n  ...\n  schedules:\n    - cron: \"0 0 * * *\"\n      timezone: \"America/Chicago\"\n      active: false\n    - cron: \"0 12 * * *\"\n      timezone: \"America/New_York\"\n      active: true\n    - cron: \"0 18 * * *\"\n      timezone: \"Europe/London\"\n      active: true\n
      ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#the-scheduler-service","title":"The Scheduler service","text":"

      The Scheduler service is started automatically when prefect server start is run and it is a built-in service of Prefect Cloud.

      By default, the Scheduler service visits deployments on a 60-second loop, though recently-modified deployments will be visited more frequently. The Scheduler evaluates each deployment's schedules and creates new runs appropriately. For typical deployments, it will create the next three runs, though more runs will be scheduled if the next three would all start within the next hour.

      More specifically, the Scheduler tries to create the smallest number of runs that satisfy the following constraints, in order:

      • No more than 100 runs will be scheduled.
      • Runs will not be scheduled more than 100 days in the future.
      • At least 3 runs will be scheduled.
      • Runs will be scheduled until at least one hour in the future.

      You can adjust all of these behaviors through the relevant settings, which you can view with the terminal command prefect config view --show-defaults:

      PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE='100'\nPREFECT_API_SERVICES_SCHEDULER_ENABLED='True'\nPREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE='500'\nPREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS='60.0'\nPREFECT_API_SERVICES_SCHEDULER_MIN_RUNS='3'\nPREFECT_API_SERVICES_SCHEDULER_MAX_RUNS='100'\nPREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME='1:00:00'\nPREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME='100 days, 0:00:00'\n

      See the Settings docs for more information on altering your settings.
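
      For example, a minimal sketch of raising the minimum number of runs the Scheduler maintains (the value is illustrative):

      prefect config set PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS='5'\n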

      These settings mean that if a deployment has an hourly schedule, the default settings will create runs for the next 4 days (or 100 hours). If it has a weekly schedule, the default settings will maintain the next 14 runs (up to 100 days in the future).

      The Scheduler does not affect execution

      The Prefect Scheduler service only creates new flow runs and places them in Scheduled states. It is not involved in flow or task execution.

      If you change a schedule, previously scheduled flow runs that have not started are removed, and new scheduled flow runs are created to reflect the new schedule.

      To remove all scheduled runs for a flow deployment, you can remove the schedule via the UI.

      ","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/states/","title":"States","text":"","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#overview","title":"Overview","text":"

      States are rich objects that contain information about the status of a particular task run or flow run. While you don't need to know the details of the states to use Prefect, you can give your workflows superpowers by taking advantage of them.

      At any moment, you can learn anything you need to know about a task or flow by examining its current state or the history of its states. For example, a state could tell you that a task:

      • is scheduled to make a third run attempt in an hour

      • succeeded and what data it produced

      • was scheduled to run, but later cancelled

      • used the cached result of a previous run instead of re-running

      • failed because it timed out

      By manipulating a relatively small number of task states, Prefect flows can harness the complexity that emerges in workflows.
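
      For example, here's a minimal sketch of inspecting a task run's state by requesting it with return_state=True:

      from prefect import flow, task\n\n@task\ndef add_one(x):\n    return x + 1\n\n@flow\ndef my_flow():\n    state = add_one(1, return_state=True)\n    print(state.name)            # e.g. \"Completed\"\n    print(state.type)            # e.g. StateType.COMPLETED\n    print(state.is_completed())  # True if the task run completed\n\nmy_flow()\n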

      Only runs have states

      Though we often refer to the \"state\" of a flow or a task, what we really mean is the state of a flow run or a task run. Flows and tasks are templates that describe what a system does; only when we run the system does it also take on a state. So while we might refer to a task as \"running\" or being \"successful\", we really mean that a specific instance of the task is in that state.

      ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#state-types","title":"State Types","text":"

      States have names and types. State types are canonical, with specific orchestration rules that apply to transitions into and out of each state type. A state's name is often, but not always, synonymous with its type. For example, a task run that is running for the first time has a state with the name Running and the type RUNNING. However, if the task retries, that same task run will have the name Retrying and the type RUNNING. Each time the task run transitions into the RUNNING state, the same orchestration rules are applied.

      There are terminal state types from which there are no orchestrated transitions to any other state type.

      • COMPLETED
      • CANCELLED
      • FAILED
      • CRASHED

      The full complement of states and state types includes:

      Name Type Terminal? Description Scheduled SCHEDULED No The run will begin at a particular time in the future. Late SCHEDULED No The run's scheduled start time has passed, but it has not transitioned to PENDING (15 seconds by default). AwaitingRetry SCHEDULED No The run did not complete successfully because of a code issue and had remaining retry attempts. Pending PENDING No The run has been submitted to run, but is waiting on necessary preconditions to be satisfied. Running RUNNING No The run code is currently executing. Retrying RUNNING No The run code is currently executing after previously failing to complete successfully. Paused PAUSED No The run code has stopped executing until it receives manual approval to proceed. Cancelling CANCELLING No The infrastructure on which the code was running is being cleaned up. Cancelled CANCELLED Yes The run did not complete because a user determined that it should not. Completed COMPLETED Yes The run completed successfully. Failed FAILED Yes The run did not complete because of a code issue and had no remaining retry attempts. Crashed CRASHED Yes The run did not complete because of an infrastructure issue.","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#returned-values","title":"Returned values","text":"

      When calling a task or a flow, there are three types of returned values:

      • Data: A Python object (such as int, str, dict, list, and so on).
      • State: A Prefect object indicating the state of a flow or task run.
      • PrefectFuture: A Prefect object that contains both data and State.

      Returning data is the default behavior any time you call your_task().

      Returning Prefect State occurs anytime you call your task or flow with the argument return_state=True.

      Returning PrefectFuture is achieved by calling your_task.submit().

      ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#return-data","title":"Return Data","text":"

      By default, running a task will return data:

      from prefect import flow, task \n\n@task \ndef add_one(x):\n    return x + 1\n\n@flow \ndef my_flow():\n    result = add_one(1) # return int\n

      The same rule applies for a subflow:

      @flow \ndef subflow():\n    return 42 \n\n@flow \ndef my_flow():\n    result = subflow() # return data\n
      ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#return-prefect-state","title":"Return Prefect State","text":"

      To return a State instead, add return_state=True as a parameter of your task call.

      @flow \ndef my_flow():\n    state = add_one(1, return_state=True) # return State\n

      To get data from a State, call .result().

      @flow \ndef my_flow():\n    state = add_one(1, return_state=True) # return State\n    result = state.result() # return int\n

      The same rule applies for a subflow:

      @flow \ndef subflow():\n    return 42 \n\n@flow \ndef my_flow():\n    state = subflow(return_state=True) # return State\n    result = state.result() # return int\n
      ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#return-a-prefectfuture","title":"Return a PrefectFuture","text":"

      To get a PrefectFuture, add .submit() to your task call.

      @flow \ndef my_flow():\n    future = add_one.submit(1) # return PrefectFuture\n

      To get data from a PrefectFuture, call .result().

      @flow \ndef my_flow():\n    future = add_one.submit(1) # return PrefectFuture\n    result = future.result() # return data\n

      To get a State from a PrefectFuture, call .wait().

      @flow \ndef my_flow():\n    future = add_one.submit(1) # return PrefectFuture\n    state = future.wait() # return State\n
      ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#final-state-determination","title":"Final state determination","text":"

      The final state of a flow is determined by its return value. The following rules apply:

      • If an exception is raised directly in the flow function, the flow run is marked as FAILED.
      • If the flow does not return a value (or returns None), its state is determined by the states of all of the tasks and subflows within it.
      • If any task run or subflow run failed and none were cancelled, then the final flow run state is marked as FAILED.
      • If any task run or subflow run was cancelled, then the final flow run state is marked as CANCELLED.
      • If a flow returns a manually created state, it is used as the state of the final flow run. This allows for manual determination of final state.
      • If the flow run returns any other object, then it is marked as COMPLETED.

      See the Final state determination section of the Flows documentation for further details and examples.
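
      For example, here's a minimal sketch of returning a manually created state to set the final flow run state (the threshold check is illustrative):

      from prefect import flow\nfrom prefect.states import Completed, Failed\n\n@flow\ndef my_flow(threshold: int = 10):\n    value = 7  # stand-in for real work\n    if value < threshold:\n        return Failed(message=\"value below threshold\")\n    return Completed(message=\"value met threshold\")\n\n# return_state=True lets us inspect the final state without raising on failure\nstate = my_flow(return_state=True)\nprint(state.name)  # \"Failed\", because 7 < 10\n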

      ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#state-change-hooks","title":"State Change Hooks","text":"

      State change hooks execute code in response to changes in flow or task run states, enabling you to define actions for specific state transitions in a workflow.

      ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#a-simple-example","title":"A simple example","text":"
      from prefect import flow\n\ndef my_success_hook(flow, flow_run, state):\n    print(\"Flow run succeeded!\")\n\n@flow(on_completion=[my_success_hook])\ndef my_flow():\n    return 42\n\nmy_flow()\n
      ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#create-and-use-hooks","title":"Create and use hooks","text":"","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#available-state-change-hooks","title":"Available state change hooks","text":"Type Flow Task Description on_completion \u2713 \u2713 Executes when a flow or task run enters a Completed state. on_failure \u2713 \u2713 Executes when a flow or task run enters a Failed state. on_cancellation \u2713 - Executes when a flow run enters a Cancelling state. on_crashed \u2713 - Executes when a flow run enters a Crashed state. on_running \u2713 - Executes when a flow run enters a Running state.","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#create-flow-run-state-change-hooks","title":"Create flow run state change hooks","text":"
      def my_flow_hook(flow: Flow, flow_run: FlowRun, state: State):\n    \"\"\"This is the required signature for a flow run state\n    change hook. This hook can only be passed into flows.\n    \"\"\"\n\n# pass hook as a list of callables\n@flow(on_completion=[my_flow_hook])\n
      ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#create-task-run-state-change-hooks","title":"Create task run state change hooks","text":"
      def my_task_hook(task: Task, task_run: TaskRun, state: State):\n    \"\"\"This is the required signature for a task run state change\n    hook. This hook can only be passed into tasks.\n    \"\"\"\n\n# pass hook as a list of callables\n@task(on_failure=[my_task_hook])\n
      ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#use-multiple-state-change-hooks","title":"Use multiple state change hooks","text":"

      State change hooks are versatile, allowing you to specify multiple state change hooks for the same state transition, or to use the same state change hook for different transitions:

      def my_success_hook(task, task_run, state):\n    print(\"Task run succeeded!\")\n\ndef my_failure_hook(task, task_run, state):\n    print(\"Task run failed!\")\n\ndef my_succeed_or_fail_hook(task, task_run, state):\n    print(\"If the task run succeeds or fails, this hook runs.\")\n\n@task(\n    on_completion=[my_success_hook, my_succeed_or_fail_hook],\n    on_failure=[my_failure_hook, my_succeed_or_fail_hook]\n)\n
      ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#pass-kwargs-to-your-hooks","title":"Pass kwargs to your hooks","text":"

      The Prefect engine will call your hooks for you upon the state change, passing in the flow, flow run, and state objects.

      However, you can define your hook to have additional default arguments:

      from prefect import flow\n\ndata = {}\n\ndef my_hook(flow, flow_run, state, my_arg=\"custom_value\"):\n    data.update(my_arg=my_arg, state=state)\n\n@flow(on_completion=[my_hook])\ndef lazy_flow():\n    pass\n\nstate = lazy_flow(return_state=True)\n\nassert data == {\"my_arg\": \"custom_value\", \"state\": state}\n

      ... or define your hook to accept arbitrary keyword arguments:

      from functools import partial\nfrom prefect import flow, task\n\ndata = {}\n\ndef my_hook(task, task_run, state, **kwargs):\n    data.update(state=state, **kwargs)\n\n@task\ndef bad_task():\n    raise ValueError(\"meh\")\n\n@flow\ndef ok_with_failure_flow(x: str = \"foo\", y: int = 42):\n    bad_task_with_a_hook = bad_task.with_options(\n        on_failure=[partial(my_hook, **dict(x=x, y=y))]\n    )\n    # return a tuple of \"bar\" and the task run state\n    # to avoid raising the task's exception\n    return \"bar\", bad_task_with_a_hook(return_state=True)\n\n_, task_run_state = ok_with_failure_flow()\n\nassert data == {\"x\": \"foo\", \"y\": 42, \"state\": task_run_state}\n
      ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#more-examples-of-state-change-hooks","title":"More examples of state change hooks","text":"
      • Send a notification when a flow run fails
      • Delete a Cloud Run job when a flow crashes
      ","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/storage/","title":"Storage","text":"

      Storage blocks are not recommended

      Storage blocks are part of the legacy block-based deployment model. The recommended options for creating a deployment are serve or runner-based Python creation methods, or workers and work pools with prefect deploy via the CLI. With serve or runner-based Python creation methods, you specify flow code storage in the Python file; with the work pools and workers style of flow deployment, you specify flow code storage during the interactive prefect deploy CLI experience and in its resulting prefect.yaml file.

      Storage lets you configure how flow code for deployments is persisted and retrieved by Prefect workers (or legacy agents). Anytime you build a block-based deployment, a storage block is used to upload the entire directory containing your workflow code (along with supporting files) to its configured location. This helps ensure portability of your relative imports, configuration files, and more. Note that your environment dependencies (for example, external Python packages) still need to be managed separately.

      If no storage is explicitly configured, Prefect will use LocalFileSystem storage by default. Local storage works fine for many local flow run scenarios, especially when testing and getting started. However, due to the inherent lack of portability, many use cases are better served by using remote storage such as S3 or Google Cloud Storage.

      Prefect supports creating multiple storage configurations and switching between storage as needed.

      Storage uses blocks

      Blocks are the Prefect technology underlying storage, and they enable you to do much more than configure storage.

      In addition to creating storage blocks via the Prefect CLI, you can now create storage blocks and other kinds of block configuration objects via the Prefect UI and Prefect Cloud.

      ","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":0.5},{"location":"concepts/storage/#configuring-storage-for-a-deployment","title":"Configuring storage for a deployment","text":"

      When building a deployment for a workflow, you have two options for configuring workflow storage:

      • Use the default local storage
      • Preconfigure a storage block to use
      ","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":0.5},{"location":"concepts/storage/#using-the-default","title":"Using the default","text":"

      Anytime you call prefect deployment build without providing the --storage-block flag, a default LocalFileSystem block will be used. Note that this block will always use your present working directory as its basepath (which is usually desirable). You can see the block's settings by inspecting the deployment.yaml file that Prefect creates after calling prefect deployment build.

      While you generally can't run a deployment stored on a local file system on other machines, any agent running on the same machine will be able to successfully run your deployment.

      ","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":0.5},{"location":"concepts/storage/#supported-storage-blocks","title":"Supported storage blocks","text":"

      Current options for deployment storage blocks include:

      Storage Description Required Library Local File System Store code in a run's local file system. Remote File System Store code in any filesystem supported by fsspec. AWS S3 Storage Store code in an AWS S3 bucket. s3fs Azure Storage Store code in Azure Datalake and Azure Blob Storage. adlfs GitHub Storage Store code in a GitHub repository. Google Cloud Storage Store code in a Google Cloud Platform (GCP) Cloud Storage bucket. gcsfs SMB Store code in SMB shared network storage. smbprotocol GitLab Repository Store code in a GitLab repository. prefect-gitlab Bitbucket Repository Store code in a Bitbucket repository. prefect-bitbucket

      Accessing files may require storage filesystem libraries

      Note that the appropriate filesystem library supporting the storage location must be installed prior to building a deployment with a storage block or accessing the storage location from flow scripts.

      For example, the AWS S3 Storage block requires the s3fs library.

      See Filesystem package dependencies for more information about configuring filesystem libraries in your execution environment.
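
      For example, install the library into the environment that builds the deployment and into the flow's execution environment:

      pip install s3fs\n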

      ","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":0.5},{"location":"concepts/storage/#configuring-a-block","title":"Configuring a block","text":"

      You can create these blocks either via the UI or via Python.

      You can create, edit, and manage storage blocks in the Prefect UI and Prefect Cloud. On a Prefect server, blocks are created in the server's database. On Prefect Cloud, blocks are created on a workspace.

      To create a new block, select the + button. Prefect displays a library of block types you can configure to create blocks to be used by your flows.

      Select Add + to configure a new storage block based on a specific block type. Prefect displays a Create page that enables specifying storage settings.

      You can also create blocks using the Prefect Python API:

      from prefect.filesystems import S3\n\nblock = S3(bucket_path=\"my-bucket/a-sub-directory\", \n           aws_access_key_id=\"foo\", \n           aws_secret_access_key=\"bar\"\n)\nblock.save(\"example-block\")\n

      This block configuration is now available to be used by anyone with appropriate access to your Prefect API. We can use this block to build a deployment by passing its slug to the prefect deployment build command. The storage block slug is formatted as block-type/block-name. In this case, s3/example-block for an AWS S3 Bucket block named example-block. See block identifiers for details.

      prefect deployment build ./flows/my_flow.py:my_flow --name \"Example Deployment\" --storage-block s3/example-block\n

      This command will upload the contents of your flow's directory to the designated storage location, then the full deployment specification will be persisted to a newly created deployment.yaml file. For more information, see Deployments.

      ","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":0.5},{"location":"concepts/task-runners/","title":"Task runners","text":"

      Task runners enable you to engage specific executors for Prefect tasks, such as for concurrent, parallel, or distributed execution of tasks.

      Task runners are not required for task execution. If you call a task function directly, the task executes as a regular Python function, without a task runner, and produces whatever result is returned by the function.

      ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#task-runner-overview","title":"Task runner overview","text":"

      Calling a task function from within a flow, using the default task settings, executes the function sequentially. Execution of the task function blocks execution of the flow until the task completes. This means, by default, calling multiple tasks in a flow causes them to run in order.

      However, that's not the only way to run tasks!

      You can use the .submit() method on a task function to submit the task to a task runner. Using a task runner enables you to control whether tasks run sequentially, concurrently, or if you want to take advantage of a parallel or distributed execution library such as Dask or Ray.

      Using the .submit() method to submit a task also causes the task run to return a PrefectFuture: a Prefect object that contains both any data returned by the task function and a State indicating the state of the task run.

      Prefect currently provides the following built-in task runners:

      • SequentialTaskRunner can run tasks sequentially.
      • ConcurrentTaskRunner can run tasks concurrently, allowing tasks to switch when blocking on IO. Tasks will be submitted to a thread pool maintained by anyio.

      In addition, the following Prefect-developed task runners for parallel or distributed task execution may be installed as Prefect Integrations.

      • DaskTaskRunner can run tasks requiring parallel execution using dask.distributed.
      • RayTaskRunner can run tasks requiring parallel execution using Ray.

      Concurrency versus parallelism

      The words \"concurrency\" and \"parallelism\" may sound the same, but they mean different things in computing.

      Concurrency refers to a system that can do more than one thing simultaneously, but not at the exact same time. It may be more accurate to think of concurrent execution as non-blocking: within the restrictions of resources available in the execution environment and data dependencies between tasks, execution of one task does not block execution of other tasks in a flow.

      Parallelism refers to a system that can do more than one thing at the exact same time. Again, within the restrictions of resources available, parallel execution can run tasks at the same time, such as for operations mapped across a dataset.

      ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-a-task-runner","title":"Using a task runner","text":"

      You do not need to specify a task runner for a flow unless your tasks require a specific type of execution.

      To configure your flow to use a specific task runner, import a task runner and assign it as an argument for the flow when the flow is defined.

      Remember to call .submit() when using a task runner

      Make sure you use .submit() to run your task with a task runner. Calling the task directly, without .submit(), from within a flow will run the task sequentially instead of using a specified task runner.

      For example, you can use ConcurrentTaskRunner to allow tasks to switch when they would block.

      from prefect import flow, task\nfrom prefect.task_runners import ConcurrentTaskRunner\nimport time\n\n@task\ndef stop_at_floor(floor):\n    print(f\"elevator moving to floor {floor}\")\n    time.sleep(floor)\n    print(f\"elevator stops on floor {floor}\")\n\n@flow(task_runner=ConcurrentTaskRunner())\ndef elevator():\n    for floor in range(10, 0, -1):\n        stop_at_floor.submit(floor)\n

      If you specify an uninitialized task runner class, a task runner instance of that type is created with the default settings. You can also pass additional configuration parameters for task runners that accept parameters, such as DaskTaskRunner and RayTaskRunner.

      Default task runner

      If you don't specify a task runner for a flow and you call a task with .submit() within the flow, Prefect uses the default ConcurrentTaskRunner.

      ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#running-tasks-sequentially","title":"Running tasks sequentially","text":"

      Sometimes, it's useful to force tasks to run sequentially to make it easier to reason about the behavior of your program. Switching to the SequentialTaskRunner will force submitted tasks to run sequentially rather than concurrently.

      Synchronous and asynchronous tasks

      The SequentialTaskRunner works with both synchronous and asynchronous task functions. Asynchronous tasks are Python functions defined using async def rather than def.

      The following example demonstrates using the SequentialTaskRunner to ensure that tasks run sequentially. In the example, the flow glass_tower runs the task stop_at_floor for floors one through 38, in that order.

      from prefect import flow, task\nfrom prefect.task_runners import SequentialTaskRunner\nimport random\n\n@task\ndef stop_at_floor(floor):\n    situation = random.choice([\"on fire\",\"clear\"])\n    print(f\"elevator stops on {floor} which is {situation}\")\n\n@flow(task_runner=SequentialTaskRunner(),\n      name=\"towering-infernflow\",\n      )\ndef glass_tower():\n    for floor in range(1, 39):\n        stop_at_floor.submit(floor)\n\nglass_tower()\n
      ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-multiple-task-runners","title":"Using multiple task runners","text":"

      Each flow can only have a single task runner, but sometimes you may want a subset of your tasks to run using a specific task runner. In this case, you can create subflows for tasks that need to use a different task runner.

      For example, you can have a flow (in the example below called sequential_flow) that runs its tasks locally using the SequentialTaskRunner. If you have some tasks that can run more efficiently in parallel on a Dask cluster, you could create a subflow (such as dask_subflow) to run those tasks using the DaskTaskRunner.

      from prefect import flow, task\nfrom prefect.task_runners import SequentialTaskRunner\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef hello_local():\n    print(\"Hello!\")\n\n@task\ndef hello_dask():\n    print(\"Hello from Dask!\")\n\n@flow(task_runner=SequentialTaskRunner())\ndef sequential_flow():\n    hello_local.submit()\n    dask_subflow()\n    hello_local.submit()\n\n@flow(task_runner=DaskTaskRunner())\ndef dask_subflow():\n    hello_dask.submit()\n\nif __name__ == \"__main__\":\n    sequential_flow()\n

      Guarding main

      Note that you should guard the main function by using if __name__ == \"__main__\" to avoid issues with parallel processing.

      This script outputs the following logs demonstrating the use of the Dask task runner:

      20:14:29.785 | INFO    | prefect.engine - Created flow run 'ivory-caiman' for flow 'sequential-flow'\n20:14:29.785 | INFO    | Flow run 'ivory-caiman' - Starting 'SequentialTaskRunner'; submitted tasks will be run sequentially...\n20:14:29.880 | INFO    | Flow run 'ivory-caiman' - Created task run 'hello_local-7633879f-0' for task 'hello_local'\n20:14:29.881 | INFO    | Flow run 'ivory-caiman' - Executing 'hello_local-7633879f-0' immediately...\nHello!\n20:14:29.904 | INFO    | Task run 'hello_local-7633879f-0' - Finished in state Completed()\n20:14:29.952 | INFO    | Flow run 'ivory-caiman' - Created subflow run 'nimble-sparrow' for flow 'dask-subflow'\n20:14:29.953 | INFO    | prefect.task_runner.dask - Creating a new Dask cluster with `distributed.deploy.local.LocalCluster`\n20:14:31.862 | INFO    | prefect.task_runner.dask - The Dask dashboard is available at http://127.0.0.1:8787/status\n20:14:31.901 | INFO    | Flow run 'nimble-sparrow' - Created task run 'hello_dask-2b96d711-0' for task 'hello_dask'\n20:14:32.370 | INFO    | Flow run 'nimble-sparrow' - Submitted task run 'hello_dask-2b96d711-0' for execution.\nHello from Dask!\n20:14:33.358 | INFO    | Flow run 'nimble-sparrow' - Finished in state Completed('All states completed.')\n20:14:33.368 | INFO    | Flow run 'ivory-caiman' - Created task run 'hello_local-7633879f-1' for task 'hello_local'\n20:14:33.368 | INFO    | Flow run 'ivory-caiman' - Executing 'hello_local-7633879f-1' immediately...\nHello!\n20:14:33.386 | INFO    | Task run 'hello_local-7633879f-1' - Finished in state Completed()\n20:14:33.399 | INFO    | Flow run 'ivory-caiman' - Finished in state Completed('All states completed.')\n
      ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-results-from-submitted-tasks","title":"Using results from submitted tasks","text":"

      When you use .submit() to submit a task to a task runner, the task runner creates a PrefectFuture for access to the state and result of the task.

      A PrefectFuture is an object that provides access to a computation happening in a task runner \u2014 even if that computation is happening on a remote system.

      In the following example, we save the return value of calling .submit() on the task say_hello to the variable future, and then we print the type of the variable:

      from prefect import flow, task\n\n@task\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\n@flow\ndef hello_world():\n    future = say_hello.submit(\"Marvin\")\n    print(f\"variable 'future' is type {type(future)}\")\n\nhello_world()\n

      When you run this code, you'll see that the variable future is a PrefectFuture:

      variable 'future' is type <class 'prefect.futures.PrefectFuture'>\n

      When you pass a future into a task, Prefect waits for the \"upstream\" task \u2014 the one that the future references \u2014 to reach a final state before starting the downstream task.

      This means that the downstream task won't receive the PrefectFuture you passed as an argument. Instead, the downstream task will receive the value that the upstream task returned.

      Take a look at how this works in the following example:

      from prefect import flow, task\n\n@task\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\n@task\ndef print_result(result):\n    print(type(result))\n    print(result)\n\n@flow(name=\"hello-flow\")\ndef hello_world():\n    future = say_hello.submit(\"Marvin\")\n    print_result.submit(future)\n\nhello_world()\n
      <class 'str'>\nHello Marvin!\n

      Futures have a few useful methods. For example, you can get the return value of the task run with .result():

      from prefect import flow, task\n\n@task\ndef my_task():\n    return 42\n\n@flow\ndef my_flow():\n    future = my_task.submit()\n    result = future.result()\n    print(result)\n\nmy_flow()\n

      The .result() method will wait for the task to complete before returning the result to the caller. If the task run fails, .result() will raise the task run's exception. You may disable this behavior with the raise_on_failure option:

      from prefect import flow, task\n\n@task\ndef my_task():\n    return \"I'm a task!\"\n\n\n@flow\ndef my_flow():\n    future = my_task.submit()\n    result = future.result(raise_on_failure=False)\n    if future.get_state().is_failed():\n        # `result` is an exception! handle accordingly\n        ...\n    else:\n        # `result` is the expected return value of our task\n        ...\n

      You can retrieve the current state of the task run associated with the PrefectFuture using .get_state():

      @flow\ndef my_flow():\n    future = my_task.submit()\n    state = future.get_state()\n

      You can also wait for a task to complete by using the .wait() method:

      @flow\ndef my_flow():\n    future = my_task.submit()\n    final_state = future.wait()\n

      You can include a timeout in the wait call to perform logic if the task has not finished in a given amount of time:

      @flow\ndef my_flow():\n    future = my_task.submit()\n    final_state = future.wait(1)  # Wait one second max\n    if final_state:\n        # Take action if the task is done\n        result = final_state.result()\n    else:\n        ... # Task action if the task is still running\n

      You may also use the wait_for=[] parameter when calling a task, specifying upstream task dependencies. This enables you to control task execution order for tasks that do not share data dependencies.

      @task\ndef task_a():\n    pass\n\n@task\ndef task_b():\n    pass\n\n@task\ndef task_c():\n    pass\n\n@task\ndef task_d():\n    pass\n\n@flow\ndef my_flow():\n    a = task_a.submit()\n    b = task_b.submit()\n    # Wait for task_a and task_b to complete\n    c = task_c.submit(wait_for=[a, b])\n    # task_d will wait for task_c to complete\n    # Note: If waiting for one task it must still be in a list.\n    d = task_d(wait_for=[c])\n
      ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#when-to-use-result-in-flows","title":"When to use .result() in flows","text":"

      The simplest pattern for writing a flow is either only using tasks or only using pure Python functions. When you need to mix the two, use .result().

      Using only tasks:

      from prefect import flow, task\n\n@task\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\n@task\ndef say_nice_to_meet_you(hello_greeting):\n    return f\"{hello_greeting} Nice to meet you :)\"\n\n@flow\ndef hello_world():\n    hello = say_hello.submit(\"Marvin\")\n    nice_to_meet_you = say_nice_to_meet_you.submit(hello)\n\nhello_world()\n

      Using only Python functions:

      from prefect import flow, task\n\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\ndef say_nice_to_meet_you(hello_greeting):\n    return f\"{hello_greeting} Nice to meet you :)\"\n\n@flow\ndef hello_world():\n    # because this is just a Python function, calls will not be tracked\n    hello = say_hello(\"Marvin\") \n    nice_to_meet_you = say_nice_to_meet_you(hello)\n\nhello_world()\n

      Mixing tasks and Python functions:

      from prefect import flow, task\n\ndef say_hello_extra_nicely_to_marvin(hello): # not a task or flow!\n    if hello == \"Hello Marvin!\":\n        return \"HI MARVIN!\"\n    return hello\n\n@task\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\n@task\ndef say_nice_to_meet_you(hello_greeting):\n    return f\"{hello_greeting} Nice to meet you :)\"\n\n@flow\ndef hello_world():\n    # run a task and get the result\n    hello = say_hello.submit(\"Marvin\").result()\n\n    # not calling a task or flow\n    special_greeting = say_hello_extra_nicely_to_marvin(hello)\n\n    # pass our modified greeting back into a task\n    nice_to_meet_you = say_nice_to_meet_you.submit(special_greeting)\n\n    print(nice_to_meet_you.result())\n\nhello_world()\n

      Note that .result() also limits Prefect's ability to track task dependencies. In the \"mixed\" example above, Prefect will not be aware that say_hello is upstream of nice_to_meet_you.

      Calling .result() is blocking

      When calling .result(), be mindful your flow function will have to wait until the task run is completed before continuing.

      from prefect import flow, task\n\n@task\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\n@task\ndef do_important_stuff():\n    print(\"Doing lots of important stuff!\")\n\n@flow\ndef hello_world():\n    # blocks until `say_hello` has finished\n    result = say_hello.submit(\"Marvin\").result() \n    do_important_stuff.submit()\n\nhello_world()\n
      ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#running-tasks-on-dask","title":"Running tasks on Dask","text":"

      The DaskTaskRunner is a parallel task runner that submits tasks to the dask.distributed scheduler. By default, a temporary Dask cluster is created for the duration of the flow run. If you already have a Dask cluster running, either local or cloud hosted, you can provide the connection URL via the address kwarg.

      1. Make sure the prefect-dask collection is installed: pip install prefect-dask.
      2. In your flow code, import DaskTaskRunner from prefect_dask.task_runners.
      3. Assign it as the task runner when the flow is defined using the task_runner=DaskTaskRunner argument.

      For example, this flow uses the DaskTaskRunner configured to access an existing Dask cluster at http://my-dask-cluster.

      from prefect import flow\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@flow(task_runner=DaskTaskRunner(address=\"http://my-dask-cluster\"))\ndef my_flow():\n    ...\n

      DaskTaskRunner accepts the following optional parameters:

      Parameter Description address Address of a currently running Dask scheduler. cluster_class The cluster class to use when creating a temporary Dask cluster. It can be either the full class name (for example, \"distributed.LocalCluster\"), or the class itself. cluster_kwargs Additional kwargs to pass to the cluster_class when creating a temporary Dask cluster. adapt_kwargs Additional kwargs to pass to cluster.adapt when creating a temporary Dask cluster. Note that adaptive scaling is only enabled if adapt_kwargs are provided. client_kwargs Additional kwargs to use when creating a dask.distributed.Client.

      Multiprocessing safety

      Note that, because the DaskTaskRunner uses multiprocessing, calls to flows in scripts must be guarded with if __name__ == \"__main__\": or you will encounter warnings and errors.

      If you don't provide the address of a Dask scheduler, Prefect creates a temporary local cluster automatically. The number of workers used is based on the number of cores on your machine. The default provides a mix of processes and threads that should work well for most workloads. If you want to specify this explicitly, you can pass values for n_workers or threads_per_worker to cluster_kwargs.

      # Use 4 worker processes, each with 2 threads\nDaskTaskRunner(\n    cluster_kwargs={\"n_workers\": 4, \"threads_per_worker\": 2}\n)\n
      ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-a-temporary-cluster","title":"Using a temporary cluster","text":"

      The DaskTaskRunner is capable of creating a temporary cluster using any of Dask's cluster-manager options. This can be useful when you want each flow run to have its own Dask cluster, allowing for per-flow adaptive scaling.

      To configure, you need to provide a cluster_class. This can be:

      • A string specifying the import path to the cluster class (for example, \"dask_cloudprovider.aws.FargateCluster\")
      • The cluster class itself
      • A function for creating a custom cluster.

      You can also configure cluster_kwargs, which takes a dictionary of keyword arguments to pass to cluster_class when starting the flow run.

      For example, to configure a flow to use a temporary dask_cloudprovider.aws.FargateCluster with 4 workers running with an image named my-prefect-image:

      DaskTaskRunner(\n    cluster_class=\"dask_cloudprovider.aws.FargateCluster\",\n    cluster_kwargs={\"n_workers\": 4, \"image\": \"my-prefect-image\"},\n)\n
      ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#connecting-to-an-existing-cluster","title":"Connecting to an existing cluster","text":"

      Multiple Prefect flow runs can all use the same existing Dask cluster. You might manage a single long-running Dask cluster (maybe using the Dask Helm Chart) and configure flows to connect to it during execution. This has a few downsides when compared to using a temporary cluster (as described above):

      • All workers in the cluster must have dependencies installed for all flows you intend to run.
      • Multiple flow runs may compete for resources. Dask tries to do a good job sharing resources between tasks, but you may still run into issues.

      That said, you may prefer managing a single long-running cluster.

      To configure a DaskTaskRunner to connect to an existing cluster, pass in the address of the scheduler to the address argument:

      # Connect to an existing cluster running at a specified address\nDaskTaskRunner(address=\"tcp://...\")\n
      ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#adaptive-scaling","title":"Adaptive scaling","text":"

      One nice feature of using a DaskTaskRunner is the ability to scale adaptively to the workload. Instead of specifying n_workers as a fixed number, this lets you specify a minimum and maximum number of workers to use, and the Dask cluster will scale up and down as needed.

      To do this, you can pass adapt_kwargs to DaskTaskRunner. This takes the following fields:

      • maximum (int or None, optional): the maximum number of workers to scale to. Set to None for no maximum.
      • minimum (int or None, optional): the minimum number of workers to scale to. Set to None for no minimum.

      For example, here we configure a flow to run on a FargateCluster scaling up to at most 10 workers.

      DaskTaskRunner(\n    cluster_class=\"dask_cloudprovider.aws.FargateCluster\",\n    adapt_kwargs={\"maximum\": 10}\n)\n
      ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#dask-annotations","title":"Dask annotations","text":"

      Dask annotations can be used to further control the behavior of tasks.

      For example, we can set the priority of tasks in the Dask scheduler:

      import dask\nfrom prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef show(x):\n    print(x)\n\n\n@flow(task_runner=DaskTaskRunner())\ndef my_flow():\n    with dask.annotate(priority=-10):\n        future = show.submit(1)  # low priority task\n\n    with dask.annotate(priority=10):\n        future = show.submit(2)  # high priority task\n

      Another common use case is resource annotations:

      import dask\nfrom prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef show(x):\n    print(x)\n\n# Create a `LocalCluster` with some resource annotations\n# Annotations are abstract in dask and not inferred from your system.\n# Here, we claim that our system has 1 GPU and 1 process available per worker\n@flow(\n    task_runner=DaskTaskRunner(\n        cluster_kwargs={\"n_workers\": 1, \"resources\": {\"GPU\": 1, \"process\": 1}}\n    )\n)\n\ndef my_flow():\n    with dask.annotate(resources={'GPU': 1}):\n        future = show(0)  # this task requires 1 GPU resource on a worker\n\n    with dask.annotate(resources={'process': 1}):\n        # These tasks each require 1 process on a worker; because we've \n        # specified that our cluster has 1 process per worker and 1 worker,\n        # these tasks will run sequentially\n        future = show(1)\n        future = show(2)\n        future = show(3)\n\n\nif __name__ == \"__main__\":\n    my_flow()\n
      ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#running-tasks-on-ray","title":"Running tasks on Ray","text":"

      The RayTaskRunner \u2014 installed separately as a Prefect Collection \u2014 is a parallel task runner that submits tasks to Ray. By default, a temporary Ray instance is created for the duration of the flow run. If you already have a Ray instance running, you can provide the connection URL via an address argument.

      Remote storage and Ray tasks

      We recommend configuring remote storage for task execution with the RayTaskRunner. This ensures tasks executing in Ray have access to task result storage, particularly when accessing a Ray instance outside of your execution environment.

      To configure your flow to use the RayTaskRunner:

      1. Make sure the prefect-ray collection is installed: pip install prefect-ray.
      2. In your flow code, import RayTaskRunner from prefect_ray.task_runners.
      3. Assign it as the task runner when the flow is defined using the task_runner=RayTaskRunner argument.

      For example, this flow uses the RayTaskRunner configured to access an existing Ray instance at ray://192.0.2.255:8786.

      from prefect import flow\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@flow(task_runner=RayTaskRunner(address=\"ray://192.0.2.255:8786\"))\ndef my_flow():\n    ... \n

      RayTaskRunner accepts the following optional parameters:

      Parameter Description address Address of a currently running Ray instance, starting with the ray:// URI. init_kwargs Additional kwargs to use when calling ray.init.

      Note that Ray Client uses the ray:// URI to indicate the address of a Ray instance. If you don't provide the address of a Ray instance, Prefect creates a temporary instance automatically.
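
      For example, here's a minimal sketch of configuring a temporary Ray instance via init_kwargs, which are forwarded to ray.init (the CPU count is an illustrative assumption):

      from prefect import flow\nfrom prefect_ray.task_runners import RayTaskRunner\n\n# No address is given, so a temporary Ray instance is created for the flow run\n@flow(task_runner=RayTaskRunner(init_kwargs={\"num_cpus\": 4}))\ndef my_flow():\n    ...\n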

      Ray environment limitations

      While we're excited about adding support for parallel task execution via Ray to Prefect, there are some inherent limitations with Ray you should be aware of:

      Ray's support for Python 3.11 is experimental.

      Ray does not support non-x86/64 architectures, such as ARM/M1 processors, with installation from pip alone, so it will be skipped during installation of Prefect on those platforms. It is possible to manually install the blocking component with conda. See the Ray documentation for instructions.

      See the Ray installation documentation for further compatibility information.

      ","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/tasks/","title":"Tasks","text":"

      A task is a function that represents a discrete unit of work in a Prefect workflow. Tasks are not required \u2014 you may define Prefect workflows that consist only of flows, using regular Python statements and functions. Tasks enable you to encapsulate elements of your workflow logic in observable units that can be reused across flows and subflows.

      ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#tasks-overview","title":"Tasks overview","text":"

      Tasks are functions: they can take inputs, perform work, and return an output. A Prefect task can do almost anything a Python function can do.

      Tasks are special because they receive metadata about upstream dependencies and the state of those dependencies before they run, even if they don't receive any explicit data inputs from them. This gives you the opportunity to, for example, have a task wait on the completion of another task before executing.

      Tasks also take advantage of automatic Prefect logging to capture details about task runs such as runtime, tags, and final state.

      You can define your tasks within the same file as your flow definition, or you can define tasks within modules and import them for use in your flow definitions. Tasks may be called from within a flow, from within a subflow, or (as of prefect 2.18.x) from within another task.

      Calling a task from a flow

      Use the @task decorator to designate a function as a task. Calling the task creates a new task run:

      from prefect import flow, task\n\n@task\ndef my_task():\n    print(\"Hello, I'm a task\")\n\n@flow\ndef my_flow():\n    my_task()\n

      Calling a task from another task

      As of prefect 2.18.x, you can call a task from within another task:

      from prefect import task\n\n@task\ndef my_task():\n    print(\"Hello, I'm a task\")\n\n@task(log_prints=True)\ndef my_parent_task():\n    my_task()\n

      Tasks are uniquely identified by a task key, which is a hash composed of the task name, the fully-qualified name of the function, and any tags. If the task does not have a name specified, the name is derived from the task function.

      How big should a task be?

      Prefect encourages \"small tasks\" \u2014 each one should represent a single logical step of your workflow. This allows Prefect to better contain task failures.

      To be clear, there's nothing stopping you from putting all of your code in a single task \u2014 Prefect will happily run it! However, if any line of code fails, the entire task will fail and must be retried from the beginning. This can be avoided by splitting the code into multiple dependent tasks.
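
      For example, here's a minimal sketch of splitting one monolithic step into dependent tasks so each step can fail and retry on its own (the function names and retry counts are illustrative):

      from prefect import flow, task\n\n@task(retries=2)\ndef extract():\n    return [1, 2, 3]\n\n@task(retries=2)\ndef transform(data):\n    return [x * 10 for x in data]\n\n@task(retries=2)\ndef load(data):\n    print(f\"loading {data}\")\n\n@flow\ndef etl():\n    load(transform(extract()))\n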

      ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#task-arguments","title":"Task arguments","text":"

      Tasks allow for customization through optional arguments:

      Argument Description name An optional name for the task. If not provided, the name will be inferred from the function name. description An optional string description for the task. If not provided, the description will be pulled from the docstring for the decorated function. tags An optional set of tags to be associated with runs of this task. These tags are combined with any tags defined by a prefect.tags context at task runtime. cache_key_fn An optional callable that, given the task run context and call parameters, generates a string key. If the key matches a previous completed state, that state result will be restored instead of running the task again. cache_expiration An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire. retries An optional number of times to retry on task run failure. retry_delay_seconds An optional number of seconds to wait before retrying the task after failure. This is only applicable if retries is nonzero. log_prints An optional boolean indicating whether to log print statements. persist_result An optional boolean indicating whether to persist the result of the task run to storage.

      See all possible parameters in the Python SDK API docs.

      For example, you can provide a name value for the task. Here we've used the optional description argument as well.

      @task(name=\"hello-task\", \n      description=\"This task says hello.\")\ndef my_task():\n    print(\"Hello, I'm a task\")\n

      You can distinguish runs of this task by providing a task_run_name; this setting accepts a string that can optionally contain templated references to the keyword arguments of your task. The name is formatted using Python's standard string formatting syntax, as shown here:

      import datetime\nfrom prefect import flow, task\n\n@task(name=\"My Example Task\", \n      description=\"An example task for a tutorial.\",\n      task_run_name=\"hello-{name}-on-{date:%A}\")\ndef my_task(name, date):\n    pass\n\n@flow\ndef my_flow():\n    # creates a run with a name like \"hello-marvin-on-Thursday\"\n    my_task(name=\"marvin\", date=datetime.datetime.now(datetime.timezone.utc))\n

      This setting also accepts a function that returns a string to use as the task run name:

      import datetime\nfrom prefect import flow, task\n\ndef generate_task_name():\n    date = datetime.datetime.now(datetime.timezone.utc)\n    return f\"{date:%A}-is-a-lovely-day\"\n\n@task(name=\"My Example Task\",\n      description=\"An example task for a tutorial.\",\n      task_run_name=generate_task_name)\ndef my_task(name):\n    pass\n\n@flow\ndef my_flow():\n    # creates a run with a name like \"Thursday-is-a-lovely-day\"\n    my_task(name=\"marvin\")\n

      If you need access to information about the task, use the prefect.runtime module. For example:

      from prefect import flow, task\nfrom prefect.runtime import flow_run, task_run\n\ndef generate_task_name():\n    flow_name = flow_run.flow_name\n    task_name = task_run.task_name\n\n    parameters = task_run.parameters\n    name = parameters[\"name\"]\n    limit = parameters[\"limit\"]\n\n    return f\"{flow_name}-{task_name}-with-{name}-and-{limit}\"\n\n@task(name=\"my-example-task\",\n      description=\"An example task for a tutorial.\",\n      task_run_name=generate_task_name)\ndef my_task(name: str, limit: int = 100):\n    pass\n\n@flow\ndef my_flow(name: str):\n    # creates a run with a name like \"my-flow-my-example-task-with-marvin-and-100\"\n    my_task(name=\"marvin\")\n
      ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#tags","title":"Tags","text":"

      Tags are optional string labels that enable you to identify and group tasks other than by name or flow. Tags are useful for:

      • Filtering task runs by tag in the UI and via the Prefect REST API.
      • Setting concurrency limits on task runs by tag.

      Tags may be specified as a keyword argument on the task decorator.

      @task(name=\"hello-task\", tags=[\"test\"])\ndef my_task():\n    print(\"Hello, I'm a task\")\n

      You can also provide tags as an argument with a tags context manager, specifying tags when the task is called rather than in its definition.

      from prefect import flow, task\nfrom prefect import tags\n\n@task\ndef my_task():\n    print(\"Hello, I'm a task\")\n\n@flow\ndef my_flow():\n    with tags(\"test\"):\n        my_task()\n
      ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#retries","title":"Retries","text":"

      Prefect can automatically retry tasks on failure. In Prefect, a task fails if its Python function raises an exception.

      To enable retries, pass retries and retry_delay_seconds parameters to your task. If the task fails, Prefect will retry it up to retries times, waiting retry_delay_seconds seconds between each attempt. If the task still fails after the final retry, Prefect marks the task run as failed.

      Retries don't create new task runs

      A new task run is not created when a task is retried. A new state is added to the state history of the original task run.

      ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#a-real-world-example-making-an-api-request","title":"A real-world example: making an API request","text":"

      Consider the real-world problem of making an API request. In this example, we'll use the httpx library to make an HTTP request.

      import httpx\n\nfrom prefect import flow, task\n\n\n@task(retries=2, retry_delay_seconds=5)\ndef get_data_task(\n    url: str = \"https://api.brittle-service.com/endpoint\"\n) -> dict:\n    response = httpx.get(url)\n\n    # If the response status code is anything but a 2xx, httpx will raise\n    # an exception. This task doesn't handle the exception, so Prefect will\n    # catch the exception and will consider the task run failed.\n    response.raise_for_status()\n\n    return response.json()\n\n\n@flow\ndef get_data_flow():\n    get_data_task()\n

      In this task, if the HTTP request to the brittle API receives any status code other than a 2xx (200, 201, etc.), Prefect will retry the task a maximum of two times, waiting five seconds in between retries.

      ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#custom-retry-behavior","title":"Custom retry behavior","text":"

      The retry_delay_seconds option accepts a list of delays for more custom retry behavior. The following task will wait for successively increasing intervals of 1, 10, and 100 seconds, respectively, before the next attempt starts:

      from prefect import task\n\n@task(retries=3, retry_delay_seconds=[1, 10, 100])\ndef some_task_with_manual_backoff_retries():\n   ...\n

      The retry_condition_fn option accepts a callable that returns a boolean. If the callable returns True, the task will be retried. If the callable returns False, the task will not be retried. The callable accepts three arguments \u2014 the task, the task run, and the state of the task run. The following task will retry on HTTP status codes other than 401 or 404:

      import httpx\nfrom prefect import flow, task\n\ndef retry_handler(task, task_run, state) -> bool:\n    \"\"\"This is a custom retry handler to handle when we want to retry a task\"\"\"\n    try:\n        # Attempt to get the result of the task\n        state.result()\n    except httpx.HTTPStatusError as exc:\n        # Retry on any HTTP status code that is not 401 or 404\n        do_not_retry_on_these_codes = [401, 404]\n        return exc.response.status_code not in do_not_retry_on_these_codes\n    except httpx.ConnectError:\n        # Do not retry\n        return False\n    except:\n        # For any other exception, retry\n        return True\n\n@task(retries=1, retry_condition_fn=retry_handler)\ndef my_api_call_task(url):\n    response = httpx.get(url)\n    response.raise_for_status()\n    return response.json()\n\n@flow\ndef get_data_flow(url):\n    my_api_call_task(url=url)\n\nif __name__ == \"__main__\":\n    get_data_flow(url=\"https://httpbin.org/status/503\")\n

      Additionally, you can pass a callable that accepts the number of retries as an argument and returns a list. Prefect includes an exponential_backoff utility that will automatically generate a list of retry delays that correspond to an exponential backoff retry strategy. The following task will wait for 10, 20, then 40 seconds before each retry.

      from prefect import task\nfrom prefect.tasks import exponential_backoff\n\n@task(retries=3, retry_delay_seconds=exponential_backoff(backoff_factor=10))\ndef some_task_with_exponential_backoff_retries():\n   ...\n
      ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#advanced-topic-adding-jitter","title":"Advanced topic: adding \"jitter\"","text":"

      While using exponential backoff, you may also want to add jitter to the delay times. Jitter is a random amount of time added to retry periods that helps prevent \"thundering herd\" scenarios, which is when many tasks all retry at the exact same time, potentially overwhelming systems.

      The retry_jitter_factor option can be used to add variance to the base delay. For example, a retry delay of 10 seconds with a retry_jitter_factor of 0.5 will be allowed to delay up to 15 seconds. Large values of retry_jitter_factor provide more protection against \"thundering herds,\" while keeping the average retry delay time constant. For example, the following task adds jitter to its exponential backoff so the retry delays will vary up to a maximum delay time of 20, 40, and 80 seconds respectively.

      from prefect import task\nfrom prefect.tasks import exponential_backoff\n\n@task(\n    retries=3,\n    retry_delay_seconds=exponential_backoff(backoff_factor=10),\n    retry_jitter_factor=1,\n)\ndef some_task_with_exponential_backoff_retries():\n   ...\n
      ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#configuring-retry-behavior-globally-with-settings","title":"Configuring retry behavior globally with settings","text":"

      You can also set retries and retry delays by using the following global settings. These settings will not override the retries or retry_delay_seconds that are set in the flow or task decorator.

      prefect config set PREFECT_FLOW_DEFAULT_RETRIES=2\nprefect config set PREFECT_TASK_DEFAULT_RETRIES=2\nprefect config set PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS=\"[1, 10, 100]\"\nprefect config set PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS=\"[1, 10, 100]\"\n
      ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#caching","title":"Caching","text":"

      Caching refers to the ability of a task run to reflect a finished state without actually running the code that defines the task. This allows you to efficiently reuse results of tasks that may be expensive to run with every flow run, or reuse cached results if the inputs to a task have not changed.

      To determine whether a task run should retrieve a cached state, we use \"cache keys\". A cache key is a string value that indicates if one run should be considered identical to another. When a task run with a cache key finishes, we attach that cache key to the state. When each task run starts, Prefect checks for states with a matching cache key. If a state with an identical key is found, Prefect will use the cached state instead of running the task again.

      To enable caching, specify a cache_key_fn \u2014 a function that returns a cache key \u2014 on your task. You may optionally provide a cache_expiration timedelta indicating when the cache expires. If you do not specify a cache_expiration, the cache key does not expire.

      You can define a task that is cached based on its inputs by using the Prefect task_input_hash. This is a task cache key implementation that hashes all inputs to the task using a JSON or cloudpickle serializer. If the task inputs do not change, the cached results are used rather than running the task until the cache expires.

      Note that, if any arguments are not JSON serializable, the pickle serializer is used as a fallback. If cloudpickle fails, task_input_hash returns a null key indicating that a cache key could not be generated for the given inputs.

      In this example, until the cache_expiration time ends, as long as the input to hello_task() remains the same when it is called, the cached return value is returned. In this situation the task is not rerun. However, if the input argument value changes, hello_task() runs using the new input.

      from datetime import timedelta\nfrom prefect import flow, task\nfrom prefect.tasks import task_input_hash\n\n@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1))\ndef hello_task(name_input):\n    # Doing some work\n    print(\"Saying hello\")\n    return \"hello \" + name_input\n\n@flow(log_prints=True)\ndef hello_flow(name_input):\n    hello_task(name_input)\n

      Alternatively, you can provide your own function or other callable that returns a string cache key. A generic cache_key_fn is a function that accepts two positional arguments:

      • The first argument corresponds to the TaskRunContext, which stores task run metadata in the attributes task_run_id, flow_run_id, and task.
      • The second argument corresponds to a dictionary of input values to the task. For example, if your task is defined with signature fn(x, y, z) then the dictionary will have keys \"x\", \"y\", and \"z\" with corresponding values that can be used to compute your cache key.

      Note that the cache_key_fn is not defined as a @task.
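
      As a minimal sketch of this signature (the customer_id parameter name is illustrative), the following cache key function keys the cache on a single input plus the task name taken from the run context:

      from prefect import task\n\ndef cache_on_customer_id(context, parameters):\n    # context is the TaskRunContext; parameters maps input names to values\n    return f\"{context.task.name}-{parameters['customer_id']}\"\n\n@task(cache_key_fn=cache_on_customer_id)\ndef load_customer(customer_id, as_of_date):\n    ...\n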

      Task cache keys

      By default, a task cache key is limited to 2000 characters, specified by the PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH setting.

      from prefect import task, flow\n\ndef static_cache_key(context, parameters):\n    # return a constant\n    return \"static cache key\"\n\n@task(cache_key_fn=static_cache_key)\ndef cached_task():\n    print('running an expensive operation')\n    return 42\n\n@flow\ndef test_caching():\n    cached_task()\n    cached_task()\n    cached_task()\n

      In this case, there's no expiration for the cache key, and no logic to change the cache key, so cached_task() only runs once.

      >>> test_caching()\nrunning an expensive operation\n>>> test_caching()\n>>> test_caching()\n

      When each task run requested to enter a Running state, it provided its cache key computed from the cache_key_fn. The Prefect backend identified that there was a COMPLETED state associated with this key and instructed the run to immediately enter the same COMPLETED state, including the same return values.

      A real-world example might include the flow run ID from the context in the cache key so only repeated calls in the same flow run are cached.

      from prefect import task\nfrom prefect.tasks import task_input_hash\n\ndef cache_within_flow_run(context, parameters):\n    return f\"{context.task_run.flow_run_id}-{task_input_hash(context, parameters)}\"\n\n@task(cache_key_fn=cache_within_flow_run)\ndef cached_task():\n    print('running an expensive operation')\n    return 42\n

      Task results, retries, and caching

      Task results are cached in memory during a flow run and persisted to the location specified by the PREFECT_LOCAL_STORAGE_PATH setting. As a result, task caching between flow runs is currently limited to flow runs with access to that local storage path.

      ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#refreshing-the-cache","title":"Refreshing the cache","text":"

      Sometimes, you want a task to update the data associated with its cache key instead of using the cache. This is a cache \"refresh\".

      The refresh_cache option can be used to enable this behavior for a specific task:

      import random\n\nfrom prefect import task\n\n\ndef static_cache_key(context, parameters):\n    # return a constant\n    return \"static cache key\"\n\n\n@task(cache_key_fn=static_cache_key, refresh_cache=True)\ndef caching_task():\n    return random.random()\n

      When this task runs, it will always update the cache key instead of using the cached value. This is particularly useful when you have a flow that is responsible for updating the cache.

      If you want to refresh the cache for all tasks, you can use the PREFECT_TASKS_REFRESH_CACHE setting. Setting PREFECT_TASKS_REFRESH_CACHE=true will change the default behavior of all tasks to refresh. This is particularly useful if you want to rerun a flow without cached results.

      If you have tasks that should not refresh when this setting is enabled, you may explicitly set refresh_cache to False. These tasks will never refresh the cache \u2014 if a cache key exists it will be read, not updated. Note that, if a cache key does not exist yet, these tasks can still write to the cache.

      @task(cache_key_fn=static_cache_key, refresh_cache=False)\ndef caching_task():\n    return random.random()\n
      ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#timeouts","title":"Timeouts","text":"

      Task timeouts are used to prevent unintentional long-running tasks. When the duration of execution for a task exceeds the duration specified in the timeout, a timeout exception will be raised and the task will be marked as failed. In the UI, the task will be visibly designated as TimedOut. From the perspective of the flow, the timed-out task will be treated like any other failed task.

      Timeout durations are specified using the timeout_seconds keyword argument.

      from prefect import task\nimport time\n\n@task(timeout_seconds=1, log_prints=True)\ndef show_timeouts():\n    print(\"I will execute\")\n    time.sleep(5)\n    print(\"I will not execute\")\n
      ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#task-results","title":"Task results","text":"

      Depending on how you call tasks, they can return different types of results and optionally engage a task runner.

      Any task can return:

      • Data\u200a, such as int, str, dict, list, and so on \u2014 \u200athis is the default behavior any time you call your_task().
      • PrefectFuture \u2014 \u200athis is achieved by calling your_task.submit(). A PrefectFuture contains both data and State.
      • Prefect State \u200a\u2014 anytime you call your task or flow with the argument return_state=True, it will directly return a state you can use to build custom behavior based on a state change you care about, such as task or flow failing or retrying.

      To run your task with a task runner, you must call the task with .submit().

      See state returned values for examples.
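
      As a minimal sketch of the three call styles (the task and flow names are illustrative):

      from prefect import flow, task\n\n@task\ndef add_one(x):\n    return x + 1\n\n@flow\ndef calling_styles_flow():\n    data = add_one(1)                      # a plain call returns the data\n    future = add_one.submit(2)             # .submit() returns a PrefectFuture\n    state = add_one(3, return_state=True)  # return_state=True returns a State\n    print(data, future.result(), state.result())\n\ncalling_styles_flow()\n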

      Task runners are optional

      If you just need the result from a task, you can simply call the task from your flow. For most workflows, the default behavior of calling a task directly and receiving a result is all you'll need.

      ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#wait-for","title":"Wait for","text":"

      To create a dependency between two tasks that do not exchange data, but one needs to wait for the other to finish, use the special wait_for keyword argument:

      from prefect import flow, task\n\n@task\ndef task_1():\n    pass\n\n@task\ndef task_2():\n    pass\n\n@flow\ndef my_flow():\n    x = task_1()\n\n    # task 2 will wait for task_1 to complete\n    y = task_2(wait_for=[x])\n
      ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#map","title":"Map","text":"

      Prefect provides a .map() implementation that automatically creates a task run for each element of its input data. Mapped tasks represent the computations of many individual child tasks.

      The simplest Prefect map takes a task and applies it to each element of its inputs.

      from prefect import flow, task\n\n@task\ndef print_nums(nums):\n    for n in nums:\n        print(n)\n\n@task\ndef square_num(num):\n    return num**2\n\n@flow\ndef map_flow(nums):\n    print_nums(nums)\n    squared_nums = square_num.map(nums) \n    print_nums(squared_nums)\n\nmap_flow([1,2,3,5,8,13])\n

      Prefect also supports unmapped arguments, allowing you to pass static values that don't get mapped over.

      from prefect import flow, task\n\n@task\ndef add_together(x, y):\n    return x + y\n\n@flow\ndef sum_it(numbers, static_value):\n    futures = add_together.map(numbers, static_value)\n    return futures\n\nsum_it([1, 2, 3], 5)\n

      If your static argument is an iterable, you'll need to wrap it with unmapped to tell Prefect that it should be treated as a static value.

      from prefect import flow, task, unmapped\n\n@task\ndef sum_plus(x, static_iterable):\n    return x + sum(static_iterable)\n\n@flow\ndef sum_it(numbers, static_iterable):\n    futures = sum_plus.map(numbers, static_iterable)\n    return futures\n\nsum_it([4, 5, 6], unmapped([1, 2, 3]))\n
      ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#async-tasks","title":"Async tasks","text":"

      Prefect also supports asynchronous task and flow definitions by default. All of the standard rules of async apply:

      import asyncio\n\nfrom prefect import task, flow\n\n@task\nasync def print_values(values):\n    for value in values:\n        await asyncio.sleep(1) # yield\n        print(value, end=\" \")\n\n@flow\nasync def async_flow():\n    await print_values([1, 2])  # runs immediately\n    coros = [print_values(\"abcd\"), print_values(\"6789\")]\n\n    # asynchronously gather the tasks\n    await asyncio.gather(*coros)\n\nasyncio.run(async_flow())\n

      Note, if you are not using asyncio.gather, calling .submit() is required for asynchronous execution on the ConcurrentTaskRunner.

      ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#task-run-concurrency-limits","title":"Task run concurrency limits","text":"

      There are situations in which you want to actively prevent too many tasks from running simultaneously. For example, if many tasks across multiple flows are designed to interact with a database that only allows 10 connections, you want to make sure that no more than 10 tasks that connect to this database are running at any given time.

      Prefect has built-in functionality for achieving this: task concurrency limits.

      Task concurrency limits use task tags. You can specify an optional concurrency limit as the maximum number of concurrent task runs in a Running state for tasks with a given tag. The specified concurrency limit applies to any task to which the tag is applied.

      If a task has multiple tags, it will run only if all tags have available concurrency.

      Tags without explicit limits are considered to have unlimited concurrency.
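
      For example, here is a rough sketch of two tasks sharing a \"database\" tag (the tag and task names are illustrative); a concurrency limit created for that tag, as shown in the CLI and Python client sections below, caps how many of these task runs can be in a Running state at once:

      from prefect import flow, task\n\n@task(tags=[\"database\"])\ndef query_orders():\n    ...\n\n@task(tags=[\"database\"])\ndef query_customers():\n    ...\n\n@flow\ndef reporting_flow():\n    # with `prefect concurrency-limit create database 10`, at most 10 task runs\n    # carrying the \"database\" tag may be Running at any given time\n    query_orders.submit()\n    query_customers.submit()\n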

      0 concurrency limit aborts task runs

      Currently, if the concurrency limit is set to 0 for a tag, any attempt to run a task with that tag will be aborted instead of delayed.

      ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#execution-behavior","title":"Execution behavior","text":"

      Task tag limits are checked whenever a task run attempts to enter a Running state.

      If there are no concurrency slots available for any one of your task's tags, the transition to a Running state will be delayed and the client is instructed to try entering a Running state again in 30 seconds (or the value specified by the PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS setting).

      ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#configuring-concurrency-limits","title":"Configuring concurrency limits","text":"

      Flow run concurrency limits are set at a work pool and/or work queue level

      While task run concurrency limits are configured via tags (as shown below), flow run concurrency limits are configured via work pools and/or work queues.

      You can set concurrency limits on as few or as many tags as you wish. You can set limits through:

      • Prefect CLI
      • Prefect API by using the PrefectClient Python client
      • Prefect server UI or Prefect Cloud
      ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#cli","title":"CLI","text":"

      You can create, list, and remove concurrency limits by using Prefect CLI concurrency-limit commands.

      prefect concurrency-limit [command] [arguments]\n
      Command Description create Create a concurrency limit by specifying a tag and limit. delete Delete the concurrency limit set on the specified tag. inspect View details about a concurrency limit set on the specified tag. ls View all defined concurrency limits.

      For example, to set a concurrency limit of 10 on the 'small_instance' tag:

      prefect concurrency-limit create small_instance 10\n

      To delete the concurrency limit on the 'small_instance' tag:

      prefect concurrency-limit delete small_instance\n

      To view details about the concurrency limit on the 'small_instance' tag:

      prefect concurrency-limit inspect small_instance\n
      ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#python-client","title":"Python client","text":"

      To update your tag concurrency limits programmatically, use PrefectClient.orchestration.create_concurrency_limit.

      create_concurrency_limit takes two arguments:

      • tag specifies the task tag on which you're setting a limit.
      • concurrency_limit specifies the maximum number of concurrent task runs for that tag.

      For example, to set a concurrency limit of 10 on the 'small_instance' tag:

      from prefect import get_client\n\nasync with get_client() as client:\n    # set a concurrency limit of 10 on the 'small_instance' tag\n    limit_id = await client.create_concurrency_limit(\n        tag=\"small_instance\", \n        concurrency_limit=10\n        )\n

      To remove all concurrency limits on a tag, use PrefectClient.delete_concurrency_limit_by_tag, passing the tag:

      async with get_client() as client:\n    # remove a concurrency limit on the 'small_instance' tag\n    await client.delete_concurrency_limit_by_tag(tag=\"small_instance\")\n

      If you wish to query for the currently set limit on a tag, use PrefectClient.read_concurrency_limit_by_tag, passing the tag:

      To see all of your limits across all of your tags, use PrefectClient.read_concurrency_limits.

      async with get_client() as client:\n    # query the concurrency limit on the 'small_instance' tag\n    limit = await client.read_concurrency_limit_by_tag(tag=\"small_instance\")\n
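
      And a minimal sketch of listing every limit (the limit/offset pagination arguments and the tag and concurrency_limit attributes shown here are assumptions about the client's interface):

      async with get_client() as client:\n    # list all configured tag concurrency limits\n    limits = await client.read_concurrency_limits(limit=100, offset=0)\n    for cl in limits:\n        print(cl.tag, cl.concurrency_limit)\n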
      ","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/work-pools/","title":"Work Pools & Workers","text":"

      Work pools and workers bridge the Prefect orchestration environment with your execution environment. When a deployment creates a flow run, it is submitted to a specific work pool for scheduling. A worker running in the execution environment can poll its respective work pool for new runs to execute, or the work pool can submit flow runs to serverless infrastructure directly, depending on your configuration.

      ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#work-pool-overview","title":"Work pool overview","text":"

      Work pools organize work for execution. Work pools have types corresponding to the infrastructure that will execute the flow code, as well as the delivery method of work to that environment. Pull work pools require workers (or less ideally, agents) to poll the work pool for flow runs to execute. Push work pools can submit runs directly to your serverless infrastructure providers such as Google Cloud Run, Azure Container Instances, and AWS ECS without the need for an agent or worker. Managed work pools are administered by Prefect and handle the submission and execution of code on your behalf.

      Work pools are like pub/sub topics

      It's helpful to think of work pools as a way to coordinate (potentially many) deployments with (potentially many) workers through a known channel: the pool itself. This is similar to how \"topics\" are used to connect producers and consumers in a pub/sub or message-based system. By switching a deployment's work pool, users can quickly change the worker that will execute their runs, making it easy to promote runs through environments or even debug locally.

      In addition, users can control aspects of work pool behavior, such as how many runs the pool allows to be run concurrently or pausing delivery entirely. These options can be modified at any time, and any workers requesting work for a specific pool will only see matching flow runs.

      ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#work-pool-configuration","title":"Work pool configuration","text":"

      You can configure work pools by using any of the following:

      • Prefect UI
      • Prefect CLI commands
      • Prefect REST API
      • Terraform provider for Prefect Cloud

      To manage work pools in the UI, click the Work Pools icon. This displays a list of currently configured work pools.

      You can pause a work pool from this page by using the toggle.

      Select the + button to create a new work pool. You'll be able to specify the details for work served by this work pool.

      To create a work pool via the Prefect CLI, use the prefect work-pool create command:

      prefect work-pool create [OPTIONS] NAME\n

      NAME is a required, unique name for the work pool.

      Optional configuration parameters you can specify to filter work on the pool include:

      Option Description --paused If provided, the work pool will be created in a paused state. --type The type of infrastructure that can execute runs from this work pool. --set-as-default Whether to use the created work pool as the local default for deployment. --base-job-template The path to a JSON file containing the base job template to use. If unspecified, Prefect will use the default base job template for the given worker type.

      For example, to create a work pool called test-pool, you would run this command:

      prefect work-pool create test-pool\n
      ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#work-pool-types","title":"Work pool types","text":"

      If you don't use the --type flag to specify an infrastructure type, you are prompted to select from the following options:

      Prefect CloudPrefect server instance Infrastructure Type Description Prefect Agent Execute flow runs on heterogeneous infrastructure using infrastructure blocks. Process Execute flow runs as subprocesses on a worker. Works well for local execution when first getting started. AWS Elastic Container Service Execute flow runs within containers on AWS ECS. Works with EC2 and Fargate clusters. Requires an AWS account. Azure Container Instances Execute flow runs within containers on Azure's Container Instances service. Requires an Azure account. Docker Execute flow runs within Docker containers. Works well for managing flow execution environments via Docker images. Requires access to a running Docker daemon. Google Cloud Run Execute flow runs within containers on Google Cloud Run. Requires a Google Cloud Platform account. Google Cloud Run V2 Execute flow runs within containers on Google Cloud Run (V2 API). Requires a Google Cloud Platform account. Google Vertex AI Execute flow runs within containers on Google Vertex AI. Requires a Google Cloud Platform account. Kubernetes Execute flow runs within jobs scheduled on a Kubernetes cluster. Requires a Kubernetes cluster. Google Cloud Run - Push Execute flow runs within containers on Google Cloud Run. Requires a Google Cloud Platform account. Flow runs are pushed directly to your environment, without the need for a Prefect worker. AWS Elastic Container Service - Push Execute flow runs within containers on AWS ECS. Works with existing ECS clusters and serverless execution via AWS Fargate. Requires an AWS account. Flow runs are pushed directly to your environment, without the need for a Prefect worker. Azure Container Instances - Push Execute flow runs within containers on Azure's Container Instances service. Requires an Azure account. Flow runs are pushed directly to your environment, without the need for a Prefect worker. Modal - Push Execute flow runs on Modal. Requires a Modal account. Flow runs are pushed directly to your Modal workspace, without the need for a Prefect worker. Prefect Managed Execute flow runs within containers on Prefect managed infrastructure. Infrastructure Type Description Prefect Agent Execute flow runs on heterogeneous infrastructure using infrastructure blocks. Process Execute flow runs as subprocesses on a worker. Works well for local execution when first getting started. AWS Elastic Container Service Execute flow runs within containers on AWS ECS. Works with EC2 and Fargate clusters. Requires an AWS account. Azure Container Instances Execute flow runs within containers on Azure's Container Instances service. Requires an Azure account. Docker Execute flow runs within Docker containers. Works well for managing flow execution environments via Docker images. Requires access to a running Docker daemon. Google Cloud Run Execute flow runs within containers on Google Cloud Run. Requires a Google Cloud Platform account. Google Cloud Run V2 Execute flow runs within containers on Google Cloud Run (V2 API). Requires a Google Cloud Platform account. Google Vertex AI Execute flow runs within containers on Google Vertex AI. Requires a Google Cloud Platform account. Kubernetes Execute flow runs within jobs scheduled on a Kubernetes cluster. Requires a Kubernetes cluster.

      On success, the command returns the details of the newly created work pool.

      Created work pool with properties:\n    name - 'test-pool'\n    id - a51adf8c-58bb-4949-abe6-1b87af46eabd\n    concurrency limit - None\n\nStart a worker to pick up flows from the work pool:\n    prefect worker start -p 'test-pool'\n\nInspect the work pool:\n    prefect work-pool inspect 'test-pool'\n

      Set a work pool as the default for new deployments by adding the --set-as-default flag.

      This results in output similar to the following:

      Set 'test-pool' as default work pool for profile 'default'\n\nTo change your default work pool, run:\n\n        prefect config set PREFECT_DEFAULT_WORK_POOL_NAME=<work-pool-name>\n

      To update a work pool via the Prefect CLI, use the prefect work-pool update command:

      prefect work-pool update [OPTIONS] NAME\n

      NAME is the name of the work pool to update.

      Optional configuration parameters you can specify to update the work pool include:

      Option Description --base-job-template The path to a JSON file containing the base job template to use. If unspecified, Prefect will use the default base job template for the given worker type. --description A description of the work pool. --concurrency-limit The maximum number of flow runs to run simultaneously in the work pool.

      Managing work pools in CI/CD

      You can version control your base job template by committing it as a JSON file to your repository and control updates to your work pools' base job templates by using the prefect work-pool update command in your CI/CD pipeline. For example, you could use the following command to update a work pool's base job template to the contents of a file named base-job-template.json:

      prefect work-pool update --base-job-template base-job-template.json my-work-pool\n
      ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#base-job-template","title":"Base job template","text":"

      Each work pool has a base job template that lets you customize the behavior of the worker executing flow runs from the work pool.

      The base job template acts as a contract defining the configuration passed to the worker for each flow run and the options available to deployment creators to customize worker behavior per deployment.

      A base job template comprises a job_configuration section and a variables section.

      The variables section defines the fields available to be customized per deployment. The variables section follows the OpenAPI specification, which allows work pool creators to place limits on provided values (type, minimum, maximum, etc.).

      The job configuration section defines how values provided for fields in the variables section should be translated into the configuration given to a worker when executing a flow run.

      The values in the job_configuration can use placeholders to reference values provided in the variables section. Placeholders are declared using double curly braces, e.g., {{ variable_name }}. job_configuration values can also be hard-coded if the value should not be customizable.

      Each worker type is configured with a default base job template, making it easy to start with a work pool. The default base template defines values that will be passed to every flow run, but can be overridden on a per-deployment or per-flow run basis.

      For example, if we create a process work pool named 'above-ground' via the CLI:

      prefect work-pool create --type process above-ground\n

      We see these configuration options available in the Prefect UI:

      For a process work pool with the default base job template, we can set environment variables for spawned processes, set the working directory to execute flows, and control whether the flow run output is streamed to workers' standard output. You can also see an example of a JSON-formatted base job template on the 'Advanced' tab.

      You can examine the default base job template for a given worker type by running:

      prefect work-pool get-default-base-job-template --type process\n
      {\n  \"job_configuration\": {\n    \"command\": \"{{ command }}\",\n    \"env\": \"{{ env }}\",\n    \"labels\": \"{{ labels }}\",\n    \"name\": \"{{ name }}\",\n    \"stream_output\": \"{{ stream_output }}\",\n    \"working_dir\": \"{{ working_dir }}\"\n  },\n  \"variables\": {\n    \"type\": \"object\",\n    \"properties\": {\n      \"name\": {\n        \"title\": \"Name\",\n        \"description\": \"Name given to infrastructure created by a worker.\",\n        \"type\": \"string\"\n      },\n      \"env\": {\n        \"title\": \"Environment Variables\",\n        \"description\": \"Environment variables to set when starting a flow run.\",\n        \"type\": \"object\",\n        \"additionalProperties\": {\n          \"type\": \"string\"\n        }\n      },\n      \"labels\": {\n        \"title\": \"Labels\",\n        \"description\": \"Labels applied to infrastructure created by a worker.\",\n        \"type\": \"object\",\n        \"additionalProperties\": {\n          \"type\": \"string\"\n        }\n      },\n      \"command\": {\n        \"title\": \"Command\",\n        \"description\": \"The command to use when starting a flow run. In most cases, this should be left blank and the command will be automatically generated by the worker.\",\n        \"type\": \"string\"\n      },\n      \"stream_output\": {\n        \"title\": \"Stream Output\",\n        \"description\": \"If enabled, workers will stream output from flow run processes to local standard output.\",\n        \"default\": true,\n        \"type\": \"boolean\"\n      },\n      \"working_dir\": {\n        \"title\": \"Working Directory\",\n        \"description\": \"If provided, workers will open flow run processes within the specified path as the working directory. Otherwise, a temporary directory will be created.\",\n        \"type\": \"string\",\n        \"format\": \"path\"\n      }\n    }\n  }\n}\n

      You can override each of these attributes on a per-deployment or per-flow run basis. When creating a deployment, you can specify these overrides in the deployments.work_pool.job_variables section of a prefect.yaml file or in the job_variables argument of a Python flow.deploy method.

      For example, to turn off streaming output for a specific deployment, we could add the following to our prefect.yaml:

      deployments:\n- name: demo-deployment\n  entrypoint: demo_project/demo_flow.py:some_work\n  work_pool:\n    name: above-ground  \n    job_variables:\n        stream_output: false\n
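
      Or, as a rough sketch of the equivalent override with the Python flow.deploy method (the repository URL and deployment details are placeholders):

      from prefect import flow\n\nflow.from_source(\n    source=\"https://github.com/org/demo-project\",\n    entrypoint=\"demo_project/demo_flow.py:some_work\",\n).deploy(\n    name=\"demo-deployment\",\n    work_pool_name=\"above-ground\",\n    job_variables={\"stream_output\": False},\n)\n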

      See more about overriding job variables in the Overriding Job Variables Guide.

      Advanced Customization of the Base Job Template

      For advanced use cases, you can create work pools with fully customizable job templates. This customization is available when creating or editing a work pool on the 'Advanced' tab within the UI or when updating a work pool via the Prefect CLI.

      Advanced customization is useful anytime the underlying infrastructure supports a high degree of customization. In these scenarios a work pool job template allows you to expose a minimal and easy-to-digest set of options to deployment authors. Additionally, these options are the only customizable aspects for deployment infrastructure, which can be useful for restricting functionality in secure environments. For example, the kubernetes worker type allows users to specify a custom job template that can be used to configure the manifest that workers use to create jobs for flow execution.

      For more information and advanced configuration examples, see the Kubernetes Worker documentation.

      For more information on overriding a work pool's job variables see this guide.

      ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#viewing-work-pools","title":"Viewing work pools","text":"

      At any time, users can see and edit configured work pools in the Prefect UI.

      To view work pools with the Prefect CLI, you can:

      • List (ls) all available pools
      • Inspect (inspect) the details of a single pool
      • Preview (preview) scheduled work for a single pool

      prefect work-pool ls lists all configured work pools for the server.

      prefect work-pool ls\n

      For example:

                                     Work pools\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name       \u2503    Type        \u2503                                   ID \u2503 Concurrency Limit \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 barbeque   \u2502 docker         \u2502 72c0a101-b3e2-4448-b5f8-a8c5184abd17 \u2502 None              \u2502\n\u2502 k8s-pool   \u2502 kubernetes     \u2502 7b6e3523-d35b-4882-84a7-7a107325bb3f \u2502 None              \u2502\n\u2502 test-pool  \u2502 prefect-agent  \u2502 a51adf8c-58bb-4949-abe6-1b87af46eabd \u2502 None              |\n| my-pool    \u2502 process        \u2502 cd6ff9e8-bfd8-43be-9be3-69375f7a11cd \u2502 None              \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n                       (**) denotes a paused pool\n

      prefect work-pool inspect provides all configuration metadata for a specific work pool by ID.

      prefect work-pool inspect 'test-pool'\n

      Outputs information similar to the following:

      Workpool(\n    id='a51adf8c-58bb-4949-abe6-1b87af46eabd',\n    created='2 minutes ago',\n    updated='2 minutes ago',\n    name='test-pool',\n    filter=None,\n)\n

      prefect work-pool preview displays scheduled flow runs for a specific work pool by ID for the upcoming hour. The optional --hours flag lets you specify the number of hours to look ahead.

      prefect work-pool preview 'test-pool' --hours 12\n
      \u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Scheduled Star\u2026 \u2503 Run ID                     \u2503 Name         \u2503 Deployment ID               \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 2022-02-26 06:\u2026 \u2502 741483d4-dc90-4913-b88d-0\u2026 \u2502 messy-petrel \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 05:\u2026 \u2502 14e23a19-a51b-4833-9322-5\u2026 \u2502 unselfish-g\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 04:\u2026 \u2502 deb44d4d-5fa2-4f70-a370-e\u2026 \u2502 solid-ostri\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 03:\u2026 \u2502 07374b5c-121f-4c8d-9105-b\u2026 \u2502 sophisticat\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 02:\u2026 \u2502 545bc975-b694-4ece-9def-8\u2026 \u2502 gorgeous-mo\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 01:\u2026 \u2502 704f2d67-9dfa-4fb8-9784-4\u2026 \u2502 sassy-hedge\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 00:\u2026 \u2502 691312f0-d142-4218-b617-a\u2026 \u2502 sincere-moo\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-25 23:\u2026 \u2502 7cb3ff96-606b-4d8c-8a33-4\u2026 \u2502 curious-cat\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-25 22:\u2026 \u2502 3ea559fe-cb34-43b0-8090-1\u2026 \u2502 primitive-f\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-25 21:\u2026 \u2502 96212e80-426d-4bf4-9c49-e\u2026 \u2502 phenomenal-\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n                                   (**) denotes a late run\n
      ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#work-pool-status","title":"Work pool status","text":"

      Work pools have three statuses: READY, NOT_READY, and PAUSED. A work pool is considered ready if it has at least one online worker sending heartbeats to the work pool. If a work pool has no online workers, it is considered not ready to execute work. A work pool can be placed in a paused status manually by a user or via an automation. When a paused work pool is unpaused, it will be reassigned the appropriate status based on whether any workers are sending heartbeats.

      ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#pausing-and-deleting-work-pools","title":"Pausing and deleting work pools","text":"

      A work pool can be paused at any time to stop the delivery of work to workers. Workers will not receive any work when polling a paused pool.

      To pause a work pool through the Prefect CLI, use the prefect work-pool pause command:

      prefect work-pool pause 'test-pool'\n

      To resume a work pool through the Prefect CLI, use the prefect work-pool resume command with the work pool name.

      To delete a work pool through the Prefect CLI, use the prefect work-pool delete command with the work pool name.

      ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#managing-concurrency","title":"Managing concurrency","text":"

      Each work pool can optionally restrict concurrent runs of matching flows.

      For example, a work pool with a concurrency limit of 5 will only release new work if fewer than 5 matching runs are currently in a Running or Pending state. If 3 runs are Running or Pending, polling the pool for work will only result in 2 new runs, even if there are many more available, to ensure that the concurrency limit is not exceeded.

      When using the prefect work-pool Prefect CLI command to configure a work pool, the following subcommands set concurrency limits:

      • set-concurrency-limit sets a concurrency limit on a work pool.
      • clear-concurrency-limit clears any concurrency limits from a work pool.
      ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#work-queues","title":"Work queues","text":"

      Advanced topic

      Prefect will automatically create a default work queue if needed.

      Work queues offer advanced control over how runs are executed. Each work pool has a \"default\" queue that all work will be sent to by default. Additional queues can be added to a work pool to enable greater control over work delivery through fine grained priority and concurrency. Each work queue has a priority indicated by a unique positive integer. Lower numbers take greater priority in the allocation of work. Accordingly, new queues can be added without changing the rank of the higher-priority queues (e.g. no matter how many queues you add, the queue with priority 1 will always be the highest priority).

      Work queues can also have their own concurrency limits. Note that each queue is also subject to the global work pool concurrency limit, which cannot be exceeded.

      Together work queue priority and concurrency enable precise control over work. For example, a pool may have three queues: A \"low\" queue with priority 10 and no concurrency limit, a \"high\" queue with priority 5 and a concurrency limit of 3, and a \"critical\" queue with priority 1 and a concurrency limit of 1. This arrangement would enable a pattern in which there are two levels of priority, \"high\" and \"low\" for regularly scheduled flow runs, with the remaining \"critical\" queue for unplanned, urgent work, such as a backfill.

      Priority is evaluated to determine the order in which flow runs are submitted for execution. If all flow runs are capable of being executed with no limitation due to concurrency or otherwise, priority is still used to determine order of submission, but there is no impact to execution. If not all flow runs can be executed, usually as a result of concurrency limits, priority is used to determine which queues receive precedence to submit runs for execution.

      Priority for flow run submission proceeds from the highest priority to the lowest priority. In the preceding example, all work from the \"critical\" queue (priority 1) will be submitted, before any work is submitted from \"high\" (priority 5). Once all work has been submitted from priority queue \"critical\", work from the \"high\" queue will begin submission.

      If new flow runs are received on the \"critical\" queue while flow runs are still scheduled on the \"high\" and \"low\" queues, flow run submission goes back to ensuring all scheduled work is first satisfied from the highest priority queue, until it is empty, in waterfall fashion.

      Work queue status

      A work queue has a READY status when it has been polled by a worker in the last 60 seconds. Pausing a work queue will give it a PAUSED status and mean that it will accept no new work until it is unpaused. A user can control the work queue's paused status in the UI. Unpausing a work queue will give the work queue a NOT_READY status unless a worker has polled it in the last 60 seconds.

      ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#local-debugging","title":"Local debugging","text":"

If your deployment's infrastructure block supports it, you can use work pools to temporarily send runs to a worker running on your local machine for debugging. Run prefect worker start -p my-local-machine, then update the deployment's work pool to my-local-machine.

      ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#worker-overview","title":"Worker overview","text":"

      Workers are lightweight polling services that retrieve scheduled runs from a work pool and execute them.

      Workers are similar to agents, but offer greater control over infrastructure configuration and the ability to route work to specific types of execution environments.

      Workers each have a type corresponding to the execution environment to which they will submit flow runs. Workers are only able to poll work pools that match their type. As a result, when deployments are assigned to a work pool, you know in which execution environment scheduled flow runs for that deployment will run.

      ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#worker-types","title":"Worker types","text":"

      Below is a list of available worker types. Note that most worker types will require installation of an additional package.

• process: Executes flow runs in subprocesses. No additional package required.
• kubernetes: Executes flow runs as Kubernetes jobs. Requires prefect-kubernetes.
• docker: Executes flow runs within Docker containers. Requires prefect-docker.
• ecs: Executes flow runs as ECS tasks. Requires prefect-aws.
• cloud-run: Executes flow runs as Google Cloud Run jobs. Requires prefect-gcp.
• vertex-ai: Executes flow runs as Google Cloud Vertex AI jobs. Requires prefect-gcp.
• azure-container-instance: Executes flow runs in ACI containers. Requires prefect-azure.

      If you don\u2019t see a worker type that meets your needs, consider developing a new worker type!

      ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#worker-options","title":"Worker options","text":"

Workers poll for work from one or more queues within a work pool. If the worker references a work queue that doesn't exist, it will be created automatically. The worker CLI can infer the worker type from the work pool. Alternatively, you can specify the worker type explicitly. If you supply the worker type to the worker CLI, a work pool will be created automatically if it doesn't exist (using default job settings).

      Configuration parameters you can specify when starting a worker include:

• --name, -n: The name to give to the started worker. If not provided, a unique name will be generated.
• --pool, -p: The work pool the started worker should poll.
• --work-queue, -q: One or more work queue names for the worker to pull from. If not provided, the worker will pull from all work queues in the work pool.
• --type, -t: The type of worker to start. If not provided, the worker type will be inferred from the work pool.
• --prefetch-seconds: The amount of time before a flow run's scheduled start time to begin submission. Default is the value of PREFECT_WORKER_PREFETCH_SECONDS.
• --run-once: Only run worker polling once. By default, the worker runs forever.
• --limit, -l: The maximum number of flow runs to start simultaneously.
• --with-healthcheck: Start a healthcheck server for the worker.
• --install-policy: Install policy to use workers from Prefect integration packages.

      You must start a worker within an environment that can access or create the infrastructure needed to execute flow runs. The worker will deploy flow runs to the infrastructure corresponding to the worker type. For example, if you start a worker with type kubernetes, the worker will deploy flow runs to a Kubernetes cluster.

      Prefect must be installed in execution environments

      Prefect must be installed in any environment (virtual environment, Docker container, etc.) where you intend to run the worker or execute a flow run.

PREFECT_API_URL and PREFECT_API_KEY settings for workers

      PREFECT_API_URL must be set for the environment in which your worker is running. You must also have a user or service account with the Worker role, which can be configured by setting the PREFECT_API_KEY.
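
For example, to point a worker environment at Prefect Cloud (the account ID, workspace ID, and key are placeholders):

prefect config set PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/<ACCOUNT-ID>/workspaces/<WORKSPACE-ID>\"\nprefect config set PREFECT_API_KEY=\"<YOUR-API-KEY>\"\n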

      ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#worker-status","title":"Worker status","text":"

Workers have two statuses: ONLINE and OFFLINE. A worker is online if it sends regular heartbeat messages to the Prefect API. If a worker misses three heartbeats, it is considered offline. By default, a worker is considered offline at most 90 seconds after it stops sending heartbeats, but the threshold can be configured via the PREFECT_WORKER_HEARTBEAT_SECONDS setting.
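
For example, to adjust this setting:

prefect config set PREFECT_WORKER_HEARTBEAT_SECONDS=30\n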

      ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#starting-a-worker","title":"Starting a worker","text":"

      Use the prefect worker start CLI command to start a worker. You must pass at least the work pool name. If the work pool does not exist, it will be created if the --type flag is used.

      prefect worker start -p [work pool name]\n

      For example:

      prefect worker start -p \"my-pool\"\n

      Results in output like this:

      Discovered worker type 'process' for work pool 'my-pool'.\nWorker 'ProcessWorker 65716280-96f8-420b-9300-7e94417f2673' started!\n

      In this case, Prefect automatically discovered the worker type from the work pool. To create a work pool and start a worker in one command, use the --type flag:

      prefect worker start -p \"my-pool\" --type \"process\"\n
      Worker 'ProcessWorker d24f3768-62a9-4141-9480-a056b9539a25' started!\n06:57:53.289 | INFO    | prefect.worker.process.processworker d24f3768-62a9-4141-9480-a056b9539a25 - Worker pool 'my-pool' created.\n

      In addition, workers can limit the number of flow runs they will start simultaneously with the --limit flag. For example, to limit a worker to five concurrent flow runs:

      prefect worker start --pool \"my-pool\" --limit 5\n
      ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#configuring-prefetch","title":"Configuring prefetch","text":"

      By default, the worker begins submitting flow runs a short time (10 seconds) before they are scheduled to run. This behavior allows time for the infrastructure to be created so that the flow run can start on time.

      In some cases, infrastructure will take longer than 10 seconds to start the flow run. The prefetch can be increased using the --prefetch-seconds option or the PREFECT_WORKER_PREFETCH_SECONDS setting.

      If this value is more than the amount of time it takes for the infrastructure to start, the flow run will wait until its scheduled start time.
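
For example, to give slower infrastructure a full minute of lead time:

prefect worker start -p \"my-pool\" --prefetch-seconds 60\n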

      ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#polling-for-work","title":"Polling for work","text":"

      Workers poll for work every 15 seconds by default. This interval is configurable in your profile settings with the PREFECT_WORKER_QUERY_SECONDS setting.
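
For example, to poll every 30 seconds instead:

prefect config set PREFECT_WORKER_QUERY_SECONDS=30\n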

      ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#install-policy","title":"Install policy","text":"

The Prefect CLI can install the required package for Prefect-maintained worker types automatically. You can configure this behavior with the --install-policy option. The following install policies are valid:

• always: Always install the required package. Will update the required package to the most recent version if already installed.
• if-not-present: Install the required package if it is not already installed.
• never: Never install the required package.
• prompt: Prompt the user to choose whether to install the required package. This is the default install policy. If prefect worker start is run non-interactively, the prompt install policy will behave the same as never.
","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"concepts/work-pools/#additional-resources","title":"Additional resources","text":"

      See how to daemonize a Prefect worker in this guide.

For more information on overriding a work pool's job variables, see this guide.

      ","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","concurrency limits","priority","work queues"],"boost":2},{"location":"contributing/overview/","title":"Contributing","text":"

      Thanks for considering contributing to Prefect!

      ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#setting-up-a-development-environment","title":"Setting up a development environment","text":"

      First, download the source code and install an editable version of the Python package:

      # Clone the repository\ngit clone https://github.com/PrefectHQ/prefect.git\ncd prefect\n\n# We recommend using a virtual environment\n\npython -m venv .venv\nsource .venv/bin/activate\n\n# Install the package with development dependencies\n\npip install -e \".[dev]\"\n\n# Setup pre-commit hooks for required formatting\n\npre-commit install\n

      If you don't want to install the pre-commit hooks, you can manually install the formatting dependencies with:

      pip install $(./scripts/precommit-versions.py)\n

      You'll need to run black and ruff before a contribution can be accepted.
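
One way to run them locally is shown below; the pre-commit hooks configured earlier run the project's canonical configuration, so prefer those when in doubt:

black .\nruff check .\n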

      After installation, you can run the test suite with pytest:

      # Run all the tests\npytest tests\n\n# Run a subset of tests\n\npytest tests/test_flows.py\n

      Building the Prefect UI

      If you intend to run a local Prefect server during development, you must first build the UI. See UI development for instructions.

      ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#prefect-code-of-conduct","title":"Prefect Code of Conduct","text":"","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#our-pledge","title":"Our Pledge","text":"

      In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

      ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#our-standards","title":"Our Standards","text":"

      Examples of behavior that contributes to creating a positive environment include:

      • Using welcoming and inclusive language
      • Being respectful of differing viewpoints and experiences
      • Gracefully accepting constructive criticism
      • Focusing on what is best for the community
      • Showing empathy towards other community members

      Examples of unacceptable behavior by participants include:

      • The use of sexualized language or imagery and unwelcome sexual attention or advances
      • Trolling, insulting/derogatory comments, and personal or political attacks
      • Public or private harassment
      • Publishing others' private information, such as a physical or electronic address, without explicit permission
      • Other conduct which could reasonably be considered inappropriate in a professional setting
      ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#our-responsibilities","title":"Our Responsibilities","text":"

      Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

      Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

      ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#scope","title":"Scope","text":"

      This Code of Conduct applies within all project spaces, and it also applies when an individual is representing the project or its community in public spaces. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

      ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#enforcement","title":"Enforcement","text":"

      Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting Chris White at chris@prefect.io. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

      Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

      ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#attribution","title":"Attribution","text":"

      This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

      For answers to common questions about this code of conduct, see https://www.contributor-covenant.org/faq

      ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#developer-tooling","title":"Developer tooling","text":"

      The Prefect CLI provides several helpful commands to aid development.

      Start all services with hot-reloading on code changes (requires UI dependencies to be installed):

      prefect dev start\n

      Start a Prefect API that reloads on code changes:

      prefect dev api\n

      Start a Prefect worker that reloads on code changes:

      prefect dev agent\n
      ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#ui-development","title":"UI development","text":"

      Developing the Prefect UI requires that npm is installed.

      Start a development UI that reloads on code changes:

      prefect dev ui\n

      Build the static UI (the UI served by prefect server start):

      prefect dev build-ui\n
      ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#docs-development","title":"Docs Development","text":"

      Prefect uses mkdocs for the docs website and the mkdocs-material theme. While we use mkdocs-material-insiders for production, builds can still happen without the extra plugins. Deploy previews are available on pull requests, so you'll be able to browse the final look of your changes before merging.

      To build the docs:

      mkdocs build\n

      To serve the docs locally at http://127.0.0.1:8000/:

      mkdocs serve\n

      For additional mkdocs help and options:

      mkdocs --help\n

      We use the mkdocs-material theme. To add additional JavaScript or CSS to the docs, please see the theme documentation here.

      Internal developers can install the production theme by running:

      pip install -e git+https://github.com/PrefectHQ/mkdocs-material-insiders.git#egg=mkdocs-material\nmkdocs build # or mkdocs build --config-file mkdocs.insiders.yml if needed\n
      ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#kubernetes-development","title":"Kubernetes development","text":"

      Generate a manifest to deploy a development API to a local kubernetes cluster:

      prefect dev kubernetes-manifest\n

      To access the Prefect UI running in a Kubernetes cluster, use the kubectl port-forward command to forward a port on your local machine to an open port within the cluster. For example:

      kubectl port-forward deployment/prefect-dev 4200:4200\n

This forwards port 4200 on the local loopback interface (localhost) to the Prefect server deployment.

      To tell the local prefect command how to communicate with the Prefect API running in Kubernetes, set the PREFECT_API_URL environment variable:

      export PREFECT_API_URL=http://localhost:4200/api\n

      Since you previously configured port forwarding for the localhost port to the Kubernetes environment, you\u2019ll be able to interact with the Prefect API running in Kubernetes when using local Prefect CLI commands.

      ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#adding-database-migrations","title":"Adding database migrations","text":"

      To make changes to a table, first update the SQLAlchemy model in src/prefect/server/database/orm_models.py. For example, if you wanted to add a new column to the flow_run table, you would add a new column to the FlowRun model:

      # src/prefect/server/database/orm_models.py\n\n@declarative_mixin\nclass ORMFlowRun(ORMRun):\n    \"\"\"SQLAlchemy model of a flow run.\"\"\"\n    ...\n    new_column = Column(String, nullable=True) # <-- add this line\n

      Next, you will need to generate new migration files. You must generate a new migration file for each database type. Migrations will be generated for whatever database type PREFECT_API_DATABASE_CONNECTION_URL is set to. See here for how to set the database connection URL for each database type.

      To generate a new migration file, run the following command:

      prefect server database revision --autogenerate -m \"<migration name>\"\n

      Try to make your migration name brief but descriptive. For example:

      • add_flow_run_new_column
      • add_flow_run_new_column_idx
      • rename_flow_run_old_column_to_new_column

      The --autogenerate flag will automatically generate a migration file based on the changes to the models.

      Always inspect the output of --autogenerate

      --autogenerate will generate a migration file based on the changes to the models. However, it is not perfect. Be sure to check the file to make sure it only includes the changes you want to make. Additionally, you may need to remove extra statements that were included and not related to your change.

      The new migration can be found in the src/prefect/server/database/migrations/versions/ directory. Each database type has its own subdirectory. For example, the SQLite migrations are stored in src/prefect/server/database/migrations/versions/sqlite/.

      After you have inspected the migration file, you can apply the migration to your database by running the following command:

      prefect server database upgrade -y\n

      Once you have successfully created and applied migrations for all database types, make sure to update MIGRATION-NOTES.md to document your additions.

      ","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/style/","title":"Code style and practices","text":"

      Generally, we follow the Google Python Style Guide. This document covers sections where we differ or where additional clarification is necessary.

      ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#imports","title":"Imports","text":"

      A brief collection of rules and guidelines for how imports should be handled in this repository.

      ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#imports-in-__init__-files","title":"Imports in __init__ files","text":"

      Leave __init__ files empty unless exposing an interface. If you must expose objects to present a simpler API, please follow these rules.

      ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#exposing-objects-from-submodules","title":"Exposing objects from submodules","text":"

      If importing objects from submodules, the __init__ file should use a relative import. This is required for type checkers to understand the exposed interface.

      # Correct\nfrom .flows import flow\n
      # Wrong\nfrom prefect.flows import flow\n
      ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#exposing-submodules","title":"Exposing submodules","text":"

      Generally, submodules should not be imported in the __init__ file. Submodules should only be exposed when the module is designed to be imported and used as a namespaced object.

      For example, we do this for our schema and model modules because it is important to know if you are working with an API schema or database model, both of which may have similar names.

      import prefect.server.schemas as schemas\n\n# The full module is accessible now\nschemas.core.FlowRun\n

      If exposing a submodule, use a relative import as you would when exposing an object.

      # Correct\nfrom . import flows\n
      # Wrong\nimport prefect.flows\n
      ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#importing-to-run-side-effects","title":"Importing to run side-effects","text":"

Another use case for importing submodules is to perform global side effects that occur when they are imported.

      Often, global side-effects on import are a dangerous pattern. Avoid them if feasible.

We have a couple of acceptable use cases for this currently:

      • To register dispatchable types, e.g. prefect.serializers.
      • To extend a CLI application e.g. prefect.cli.
      ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#imports-in-modules","title":"Imports in modules","text":"","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#importing-other-modules","title":"Importing other modules","text":"

      The from syntax should be reserved for importing objects from modules. Modules should not be imported using the from syntax.

      # Correct\nimport prefect.server.schemas  # use with the full name\nimport prefect.server.schemas as schemas  # use the shorter name\n
      # Wrong\nfrom prefect.server import schemas\n

      Unless in an __init__.py file, relative imports should not be used.

      # Correct\nfrom prefect.utilities.foo import bar\n
      # Wrong\nfrom .utilities.foo import bar\n

Imports dependent on file location should never be used without explicit indication that they are relative. This avoids confusion about the source of a module.

      # Correct\nfrom . import test\n
      # Wrong\nimport test\n
      ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#resolving-circular-dependencies","title":"Resolving circular dependencies","text":"

      Sometimes, we must defer an import and perform it within a function to avoid a circular dependency.

      ## This function in `settings.py` requires a method from the global `context` but the context\n## uses settings\ndef from_context():\n    from prefect.context import get_profile_context\n\n    ...\n

      Attempt to avoid circular dependencies. This often reveals overentanglement in the design.

      When performing deferred imports, they should all be placed at the top of the function.

      ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#with-type-annotations","title":"With type annotations","text":"

      If you are just using the imported object for a type signature, you should use the TYPE_CHECKING flag.

      # Correct\nfrom typing import TYPE_CHECKING\n\nif TYPE_CHECKING:\n    from prefect.server.schemas.states import State\n\ndef foo(state: \"State\"):\n    pass\n

      Note that usage of the type within the module will need quotes e.g. \"State\" since it is not available at runtime.

      ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#importing-optional-requirements","title":"Importing optional requirements","text":"

      We do not have a best practice for this yet. See the kubernetes, docker, and distributed implementations for now.

      ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#delaying-expensive-imports","title":"Delaying expensive imports","text":"

      Sometimes, imports are slow. We'd like to keep the prefect module import times fast. In these cases, we can lazily import the slow module by deferring import to the relevant function body. For modules that are consumed by many functions, the pattern used for optional requirements may be used instead.
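
For example, a function might defer a slow import to its body so that the containing module stays fast to import. A minimal sketch; the function and the pandas dependency are illustrative only:

def build_report(data: dict):\n    # Hypothetical example: defer a slow third-party import to the function body\n    # so that importing the module itself stays fast.\n    import pandas as pd\n\n    return pd.DataFrame(data).describe()\n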

      ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#command-line-interface-cli-output-messages","title":"Command line interface (CLI) output messages","text":"

Upon executing a command that creates an object, the output message should offer:
      • A short description of what the command just did.
      • A bullet point list, rehashing user inputs, if possible.
      • Next steps, like the next command to run, if applicable.
      • Other relevant, pre-formatted commands that can be copied and pasted, if applicable.
      • A new line before the first line and after the last line.

      Output Example:

      $ prefect work-queue create testing\n\nCreated work queue with properties:\n    name - 'abcde'\n    uuid - 940f9828-c820-4148-9526-ea8107082bda\n    tags - None\n    deployment_ids - None\n\nStart an agent to pick up flows from the created work queue:\n    prefect agent start -q 'abcde'\n\nInspect the created work queue:\n    prefect work-queue inspect 'abcde'\n

      Additionally:

• Wrap generated arguments in apostrophes (') to ensure validity by suffixing format strings with !r.
      • Indent example commands, instead of wrapping in backticks (`).
      • Use placeholders if the example cannot be pre-formatted completely.
      • Capitalize placeholder labels and wrap them in less than (<) and greater than (>) signs.
      • Utilize textwrap.dedent to remove extraneous spacing for strings that are written with triple quotes (\"\"\").

      Placeholder Example:

      Create a work queue with tags:\n    prefect work-queue create '<WORK QUEUE NAME>' -t '<OPTIONAL TAG 1>' -t '<OPTIONAL TAG 2>'\n

      Dedent Example:

      from textwrap import dedent\n...\noutput_msg = dedent(\n    f\"\"\"\n    Created work queue with properties:\n        name - {name!r}\n        uuid - {result}\n        tags - {tags or None}\n        deployment_ids - {deployment_ids or None}\n\n    Start an agent to pick up flows from the created work queue:\n        prefect agent start -q {name!r}\n\n    Inspect the created work queue:\n        prefect work-queue inspect {name!r}\n    \"\"\"\n)\n

      ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#api-versioning","title":"API Versioning","text":"

The Prefect client can be run separately from the Prefect orchestration server and communicate entirely via an API. Among other things, the Prefect client includes anything that runs task or flow code (e.g. agents and the Python client) or any consumer of Prefect metadata (e.g. the Prefect UI and CLI). The Prefect server stores this metadata and serves it via the REST API.

      Sometimes, we make breaking changes to the API (for good reasons). In order to check that a Prefect client is compatible with the API it's making requests to, every API call the client makes includes a three-component API_VERSION header with major, minor, and patch versions.

      For example, a request with the X-PREFECT-API-VERSION=3.2.1 header has a major version of 3, minor version 2, and patch version 1.

      This version header can be changed by modifying the API_VERSION constant in prefect.server.api.server.

When making a breaking change to the API, we should consider whether the change might be backwards compatible for clients, meaning that the previous version of the client would still be able to make calls against the updated version of the server code. This might happen if the changes are purely additive, such as adding a non-critical API route. In these cases, we should make sure to bump the patch version.

In almost all other cases, we should bump the minor version, which denotes a non-backwards-compatible API change. We have reserved major version changes to denote changes that, while also backwards compatible, are significant in some way, such as a major release milestone.

      ","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/versioning/","title":"Versioning","text":"","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#understanding-version-numbers","title":"Understanding version numbers","text":"

      Versions are composed of three parts: MAJOR.MINOR.PATCH. For example, the version 2.5.0 has a major version of 2, a minor version of 5, and patch version of 0.

      Occasionally, we will add a suffix to the version such as rc, a, or b. These indicate pre-release versions that users can opt-into installing to test functionality before it is ready for release.

      Each release will increase one of the version numbers. If we increase a number other than the patch version, the versions to the right of it will be reset to zero.

      ","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#prefects-versioning-scheme","title":"Prefect's versioning scheme","text":"

      Prefect will increase the major version when significant and widespread changes are made to the core product. It is very unlikely that the major version will change without extensive warning.

      Prefect will increase the minor version when:

      • Introducing a new concept that changes how Prefect can be used
      • Changing an existing concept in a way that fundamentally alters how it is used
      • Removing a deprecated feature

      Prefect will increase the patch version when:

      • Making enhancements to existing features
      • Fixing behavior in existing features
      • Adding new functionality to existing concepts
      • Updating dependencies
      ","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#breaking-changes-and-deprecation","title":"Breaking changes and deprecation","text":"

      A breaking change means that your code will need to change to use a new version of Prefect. We strive to avoid breaking changes in all releases.

      At times, Prefect will deprecate a feature. This means that a feature has been marked for removal in the future. When you use it, you may see warnings that it will be removed. A feature is deprecated when it will no longer be maintained. Frequently, a deprecated feature will have a new and improved alternative. Deprecated features will be retained for at least 3 minor version increases or 6 months, whichever is longer. We may retain deprecated features longer than this time period.

      Prefect will sometimes include changes to behavior to fix a bug. These changes are not categorized as breaking changes.

      ","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#client-compatibility-with-prefect","title":"Client compatibility with Prefect","text":"

      When running a Prefect server, you are in charge of ensuring the version is compatible with those of the clients that are using the server. Prefect aims to maintain backwards compatibility with old clients for each server release. In contrast, sometimes new clients cannot be used with an old server. The new client may expect the server to support functionality that it does not yet include. For this reason, we recommend that all clients are the same version as the server or older.

      For example, a client on 2.1.0 can be used with a server on 2.5.0. A client on 2.5.0 cannot be used with a server on 2.1.0.

      ","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#client-compatibility-with-cloud","title":"Client compatibility with Cloud","text":"

      Prefect Cloud targets compatibility with all versions of Prefect clients. If you encounter a compatibility issue, please file a bug report.

      ","tags":["versioning","semver"],"boost":2},{"location":"getting-started/installation/","title":"Installation","text":"

      Prefect requires Python 3.8 or newer.

      We recommend installing Prefect using a Python virtual environment manager such as pipenv, conda, or virtualenv/venv.

You can use Prefect Cloud as your API server or host your own Prefect server instance backed by PostgreSQL. For development, you can use SQLite 3.24 or newer as your database.
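
You can check which SQLite version your Python installation is linked against with:

python -c \"import sqlite3; print(sqlite3.sqlite_version)\"\n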

      Prefect Cloud is a managed solution that provides strong scaling, performance, and security. Learn more about Prefect Cloud solutions for enterprises here.

      Windows and Linux requirements

      See Windows installation notes and Linux installation notes for details on additional installation requirements and considerations.

      ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#install-prefect","title":"Install Prefect","text":"

      The following sections describe how to install Prefect in your environment.

      ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#installing-the-latest-version","title":"Installing the latest version","text":"

      Prefect is published as a Python package. To install the latest release or upgrade an existing Prefect install, and upgrade existing Python dependencies, run the following command in your terminal:

      pip install -U prefect\n

      To install a specific Prefect version, specify the version number like this:

      pip install -U \"prefect==2.17.1\"\n

      See available release versions in the Prefect Release Notes.

      See our Contributing guide for instructions on installing Prefect for development and see the section below to install directly from the main branch.

      ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#checking-your-installation","title":"Checking your installation","text":"

      To confirm that Prefect was installed correctly, run the command prefect version in your terminal.

      prefect version\n

      You should see output similar to the following:

      Version:             2.17.1\nAPI version:         0.8.4\nPython version:      3.12.2\nGit commit:          d6bdb075\nBuilt:               Thu, Apr 11, 2024 6:58 PM\nOS/Arch:             darwin/arm64\nProfile:              local\nServer type:         ephemeral\nServer:\n  Database:          sqlite\n  SQLite version:    3.45.2\n
      ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#windows-installation-notes","title":"Windows installation notes","text":"

      You can install and run Prefect via Windows PowerShell, the Windows Command Prompt, or conda. After installation, you may need to manually add the Python local packages Scripts folder to your Path environment variable.

      The Scripts folder path looks something like this (the username and Python version may be different on your system):

      C:\\Users\\MyUserNameHere\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\Scripts\n

      Watch the pip install output messages for the Scripts folder path on your system.

      If you're using Windows Subsystem for Linux (WSL), see Linux installation notes.

      ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#linux-installation-notes","title":"Linux installation notes","text":"

Linux is a popular operating system for running Prefect. If you are hosting your own Prefect server instance with a SQLite database, note that the SQLite version shipped with certain Linux releases can be problematic. Compatible releases include Ubuntu 22.04 LTS and Ubuntu 20.04 LTS.

Alternatively, you can install SQLite on Red Hat Enterprise Linux (RHEL) or use the conda virtual environment manager and configure a compatible SQLite version.

      ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#using-a-self-signed-ssl-certificate","title":"Using a self-signed SSL certificate","text":"

If you're using a self-signed SSL certificate, you need to configure your environment to trust the certificate. Add the certificate to your system bundle and point your tools to that bundle by setting the SSL_CERT_FILE environment variable.

If the certificate is not part of your system bundle, you can set the PREFECT_API_TLS_INSECURE_SKIP_VERIFY setting to True to disable certificate verification altogether.

      Note: Disabling certificate validation is insecure and only suggested as an option for testing!
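
For example (the certificate path is a placeholder, and skipping verification is for testing only):

# Trust a custom certificate bundle\nexport SSL_CERT_FILE=/path/to/ca-bundle.pem\n\n# Or, for testing only, skip verification entirely\nprefect config set PREFECT_API_TLS_INSECURE_SKIP_VERIFY=True\n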

      ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#proxies","title":"Proxies","text":"

      Prefect supports communicating via proxies through environment variables. Whether you are using Prefect Cloud or hosting your own Prefect server instance, set HTTPS_PROXY and SSL_CERT_FILE in your environment, and the underlying network libraries will route Prefect\u2019s requests appropriately.

      Alternatively, the Prefect library will connect to the API via any proxies you have listed in the HTTP_PROXY or ALL_PROXY environment variables. You may also use the NO_PROXY environment variable to specify which hosts should not be sent through the proxy.
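
For example, with placeholder values:

export HTTPS_PROXY=http://proxy.example.com:3128\nexport NO_PROXY=localhost,127.0.0.1\n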

      For more information about these environment variables, see the cURL documentation.

      ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#prefect-client-library","title":"prefect-client library","text":"

      The prefect-client library is a minimal installation of Prefect designed for interacting with Prefect Cloud or a remote self-hosted server instance.

      prefect-client enables a subset of Prefect's functionality with a smaller installation size, making it ideal for use in lightweight, resource-constrained, or ephemeral environments. It omits all CLI and server components found in the prefect library.

      Install the latest version with:

      pip install -U prefect-client\n
      ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#sqlite","title":"SQLite","text":"

      By default, a local Prefect server instance uses SQLite as the backing database. SQLite is not packaged with the Prefect installation. Most systems will already have SQLite installed, because it is typically bundled with Python.

      Note

      Note that in production we recommend using Prefect Cloud as your API server or hosting your own Prefect server instance backed by PostgreSQL.

      Info

      If you install the prefect-client library that provides a limited set of the full Prefect library's functionality, you do not need SQLite installed.

      ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#install-sqlite-on-rhel","title":"Install SQLite on RHEL","text":"

To install an appropriate version of SQLite on Red Hat Enterprise Linux (RHEL), follow the instructions below:

Note that some RHEL instances have no C compiler, so you may need to check for and install `gcc` first:
      yum install gcc\n
      Download and extract the tarball for SQLite.
      wget https://www.sqlite.org/2022/sqlite-autoconf-3390200.tar.gz\ntar -xzf sqlite-autoconf-3390200.tar.gz\n
      Move to the extracted SQLite directory, then build and install SQLite.
      cd sqlite-autoconf-3390200/\n./configure\nmake\nmake install\n
      Add `LD_LIBRARY_PATH` to your profile.
      echo 'export LD_LIBRARY_PATH=\"/usr/local/lib\"' >> /etc/profile\n
      Restart your shell to register these changes. Now you can install Prefect using `pip`.
      pip3 install prefect\n
      ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#installing-unreleased-code","title":"Installing unreleased code","text":"

      To use the most up-to-date, unreleased Prefect code, you can install directly off the main GitHub branch:

      pip install -U git+https://github.com/PrefectHQ/prefect\n

      The main branch may not be stable

      Please be aware that this method installs unreleased code and may not be stable.

      ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#next-steps","title":"Next steps","text":"

      Now that you have Prefect installed and your environment configured, check out the Tutorial to get more familiar with Prefect.

      ","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/quickstart/","title":"Quickstart","text":"

      Prefect is an orchestration and observability platform that empowers developers to build and scale code quickly, turning their Python scripts into resilient, recurring workflows.

      In this quickstart, you'll see how you can schedule your code on remote infrastructure and observe the state of your workflows. With Prefect, you can go from a Python script to a production-ready workflow that runs remotely in a few minutes.

      Let's get started!

      ","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#setup","title":"Setup","text":"

      Here's a basic script that fetches statistics about the main Prefect GitHub repository.

      import httpx\n\ndef get_repo_info():\n    url = \"https://api.github.com/repos/PrefectHQ/prefect\"\n    response = httpx.get(url)\n    repo = response.json()\n    print(\"PrefectHQ/prefect repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n\nif __name__ == \"__main__\":\n    get_repo_info()\n

      Let's make this script schedulable, observable, resilient, and capable of running anywhere.

      ","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#step-1-install-prefect","title":"Step 1: Install Prefect","text":"
      pip install -U prefect\n

      See the install guide for more detailed installation instructions, if needed.

      ","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#step-2-connect-to-prefects-api","title":"Step 2: Connect to Prefect's API","text":"

      Much of Prefect's functionality is backed by an API. The easiest way to get started is to use the API hosted by Prefect:

      1. Create a forever-free Prefect Cloud account or sign in at https://app.prefect.cloud/
      2. Use the prefect cloud login CLI command to log in to Prefect Cloud from your development environment
      prefect cloud login\n

      Choose Log in with a web browser and click the Authorize button in the browser window that opens. Your CLI is now authenticated with your Prefect Cloud account through a locally-stored API key that expires in 30 days.

      If you have any issues with browser-based authentication, see the Prefect Cloud docs to learn how to authenticate with a manually created API key.
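
If browser-based authentication isn't an option, the CLI can also accept an API key directly; a minimal sketch with a placeholder key:

prefect cloud login --key '<YOUR-API-KEY>'\n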

      Self-hosted Prefect server instance

      If you would like to host a Prefect server instance on your own infrastructure, see the tutorial and select the \"Self-hosted\" tab. Note that you will need to both host your own server and run your flows on your own infrastructure.

      ","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#step-3-turn-your-function-into-a-prefect-flow","title":"Step 3: Turn your function into a Prefect flow","text":"

      The fastest way to get started with Prefect is to add a @flow decorator to your Python function. Flows are the core observable, deployable units in Prefect and are the primary entrypoint to orchestrated work.

      my_gh_workflow.py
      import httpx   # an HTTP client library and dependency of Prefect\nfrom prefect import flow, task\n\n\n@task(retries=2)\ndef get_repo_info(repo_owner: str, repo_name: str):\n    \"\"\"Get info about a repo - will retry twice after failing\"\"\"\n    url = f\"https://api.github.com/repos/{repo_owner}/{repo_name}\"\n    api_response = httpx.get(url)\n    api_response.raise_for_status()\n    repo_info = api_response.json()\n    return repo_info\n\n\n@task\ndef get_contributors(repo_info: dict):\n    \"\"\"Get contributors for a repo\"\"\"\n    contributors_url = repo_info[\"contributors_url\"]\n    response = httpx.get(contributors_url)\n    response.raise_for_status()\n    contributors = response.json()\n    return contributors\n\n\n@flow(log_prints=True)\ndef repo_info(repo_owner: str = \"PrefectHQ\", repo_name: str = \"prefect\"):\n    \"\"\"\n    Given a GitHub repository, logs the number of stargazers\n    and contributors for that repo.\n    \"\"\"\n    repo_info = get_repo_info(repo_owner, repo_name)\n    print(f\"Stars \ud83c\udf20 : {repo_info['stargazers_count']}\")\n\n    contributors = get_contributors(repo_info)\n    print(f\"Number of contributors \ud83d\udc77: {len(contributors)}\")\n\n\nif __name__ == \"__main__\":\n    repo_info()\n

      Note that we added a log_prints=True argument to the @flow decorator so that print statements within the flow-decorated function will be logged. Also note that our flow calls two tasks, which are defined by the @task decorator. Tasks are the smallest unit of observed and orchestrated work in Prefect.

      python my_gh_workflow.py\n

      Now when we run this script, Prefect will automatically track the state of the flow run and log the output where we can see it in the UI and CLI.

      14:28:31.099 | INFO    | prefect.engine - Created flow run 'energetic-panther' for flow 'repo-info'\n14:28:31.100 | INFO    | Flow run 'energetic-panther' - View at https://app.prefect.cloud/account/123/workspace/abc/flow-runs/flow-run/xyz\n14:28:32.178 | INFO    | Flow run 'energetic-panther' - Created task run 'get_repo_info-0' for task 'get_repo_info'\n14:28:32.179 | INFO    | Flow run 'energetic-panther' - Executing 'get_repo_info-0' immediately...\n14:28:32.584 | INFO    | Task run 'get_repo_info-0' - Finished in state Completed()\n14:28:32.599 | INFO    | Flow run 'energetic-panther' - Stars \ud83c\udf20 : 13609\n14:28:32.682 | INFO    | Flow run 'energetic-panther' - Created task run 'get_contributors-0' for task 'get_contributors'\n14:28:32.682 | INFO    | Flow run 'energetic-panther' - Executing 'get_contributors-0' immediately...\n14:28:33.118 | INFO    | Task run 'get_contributors-0' - Finished in state Completed()\n14:28:33.134 | INFO    | Flow run 'energetic-panther' - Number of contributors \ud83d\udc77: 30\n14:28:33.255 | INFO    | Flow run 'energetic-panther' - Finished in state Completed('All states completed.')\n

      You should see similar output in your terminal, with your own randomly generated flow run name and your own Prefect Cloud account URL.

      ","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#step-4-choose-a-remote-infrastructure-location","title":"Step 4: Choose a remote infrastructure location","text":"

      Let's get this workflow running on infrastructure other than your local machine! We can tell Prefect where we want to run our workflow by creating a work pool.

      We can have Prefect Cloud run our flow code for us with a Prefect Managed work pool.

      Let's create a Prefect Managed work pool so that Prefect can run our flows for us. We can create a work pool in the UI or from the CLI. Let's use the CLI:

      prefect work-pool create my-managed-pool --type prefect:managed\n

      You should see a message in the CLI that your work pool was created. Feel free to check out your new work pool on the Work Pools page in the UI.

      ","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#step-5-make-your-code-schedulable","title":"Step 5: Make your code schedulable","text":"

      We have a flow function and we have a work pool where we can run our flow remotely. Let's package both of these things, along with the location for where to find our flow code, into a deployment so that we can schedule our workflow to run remotely.

      Deployments elevate flows to remotely configurable entities that have their own API.

      Let's make a script to build a deployment with the name my-first-deployment and set it to run on a schedule.

      create_deployment.py
      from prefect import flow\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        source=\"https://github.com/discdiver/demos.git\",\n        entrypoint=\"my_gh_workflow.py:repo_info\",\n    ).deploy(\n        name=\"my-first-deployment\",\n        work_pool_name=\"my-managed-pool\",\n        cron=\"0 1 * * *\",\n    )\n

      Run the script to create the deployment on the Prefect Cloud server. Note that the cron argument will schedule the deployment to run at 1am every day.

      python create_deployment.py\n

      You should see a message that your deployment was created, similar to the one below.

      Successfully created/updated all deployments!\n______________________________________________________\n|                    Deployments                     |  \n______________________________________________________\n|    Name                       |  Status  | Details |\n______________________________________________________\n| repo-info/my-first-deployment | applied  |         |\n______________________________________________________\n\nTo schedule a run for this deployment, use the following command:\n\n       $ prefect deployment run 'repo-info/my-first-deployment'\n\nYou can also run your flow via the Prefect UI: <https://app.prefect.cloud/account/abc/workspace/123/deployments/deployment/xyz>\n

      Head to the Deployments page of the UI to check it out.

      Code storage options

      You can store your flow code in nearly any location. You just need to tell Prefect where to find it. In this example, we use a GitHub repository, but you could bake your code into a Docker image or store it in cloud provider storage. Read more in this guide.

      Push your code to GitHub

      In the example above, we use an existing GitHub repository. If you make changes to the flow code, you will need to push those changes to your own GitHub account and update the source argument to point to your repository.

      You can trigger a manual run of this deployment by either clicking the Run button in the top right of the deployment page in the UI, or by running the following CLI command in your terminal:

      prefect deployment run 'repo-info/my-first-deployment'\n

      The deployment is configured to run on a Prefect Managed work pool, so Prefect will automatically spin up the infrastructure to run this flow. It may take a minute to set up the Docker image in which the flow will run.

      After a minute or so, you should see the flow run graph and logs on the Flow Run page in the UI.

      Remove the schedule

      Click the Remove button in the top right of the Deployment page so that the workflow is no longer scheduled to run once per day.

      ","tags":["getting started","quickstart","overview"],"boost":2},{"location":"getting-started/quickstart/#next-steps","title":"Next steps","text":"

      You've seen how to move from a Python script to a scheduled, observable, remotely orchestrated workflow with Prefect.

      To learn how to run flows on your own infrastructure, customize the Docker image where your flow runs, and gain more orchestration and observation benefits check out the tutorial.

      Need help?

      Get your questions answered by a Prefect Product Advocate! Book a meeting

      Happy building!

      ","tags":["getting started","quickstart","overview"],"boost":2},{"location":"guides/","title":"How-to Guides","text":"

      This section of the documentation contains how-to guides for common workflows and use cases.

      ","tags":["guides","how to"],"boost":2},{"location":"guides/#development","title":"Development","text":"Title Description Hosting Host your own Prefect server instance. Profiles & Settings Configure Prefect and save your settings. Testing Easily test your workflows. Global Concurrency Limits Limit flow runs. Runtime Context Enable a flow to access metadata about itself and its context when it runs. Variables Store and retrieve configuration data. Prefect Client Use PrefectClient to interact with the API server. Interactive Workflows Create human-in-the-loop workflows by pausing flow runs for input. Automations Configure actions that Prefect executes automatically based on trigger conditions. Webhooks Receive, observe, and react to events from other systems. Terraform Provider Use the Terraform Provider for Prefect Cloud for infrastructure as code. CI/CD Use CI/CD with Prefect. Specifying Upstream Dependencies Run tasks in a desired order. Third-party Secrets Use credentials stored in a secrets manager in your workflows. Prefect Recipes Common, extensible examples for setting up Prefect.","tags":["guides","how to"],"boost":2},{"location":"guides/#execution","title":"Execution","text":"Title Description Docker Deploy flows with Docker containers. State Change Hooks Execute code in response to state changes. Dask and Ray Scale your flows with parallel computing frameworks. Read and Write Data Read and write data to and from cloud provider storage. Big Data Handle large data with Prefect. Logging Configure Prefect's logger and aggregate logs from other tools. Troubleshooting Identify and resolve common issues with Prefect. Managed Execution Let prefect run your code. Shell Commands Shell commands as Prefect flows","tags":["guides","how to"],"boost":2},{"location":"guides/#work-pools","title":"Work Pools","text":"Title Description Deploying Flows to Work Pools and Workers Learn how to run you code with dynamic infrastructure. Upgrade from Agents to Workers Why and how to upgrade from agents to workers. Flow Code Storage Where to store your code for deployments. Kubernetes Deploy flows on Kubernetes. Serverless Push Work Pools Run flows on serverless infrastructure without a worker. Serverless Work Pools with Workers Run flows on serverless infrastructure with a worker. Daemonize Processes Set up a systemd service to run a Prefect worker or .serve process. Custom Workers Develop your own worker type. Overriding Work Pool Job Variables Override job variables for a work pool for a given deployment.

      Need help?

      Get your questions answered by a Prefect Product Advocate! Book a Meeting

      ","tags":["guides","how to"],"boost":2},{"location":"guides/automations/","title":"Using Automations for Dynamic Responses","text":"

      From the Automations concept page, we saw what an automation can do and how to configure one within the UI.

      In this guide, we will showcase the following common use cases:

      • Create a simple notification automation in just a few UI clicks
• Build on an event-based automation
• Combine automations into a multi-layered, responsive deployment pattern

      Available only on Prefect Cloud

      Automations are a Prefect Cloud feature.\n
      ","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#prerequisites","title":"Prerequisites","text":"

      Please have the following before exploring the guide:

      • Python installed
      • Prefect installed (follow the installation guide)
      • Authenticated to a Prefect Cloud workspace
      • A work pool set up to handle the deployments
      ","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#creating-the-example-script","title":"Creating the example script","text":"

      Automations allow you to take actions in response to triggering events recorded by Prefect.

      For example, let's try to grab data from an API and send a notification based on the end state.

      We can start by pulling hypothetical user data from an endpoint and then performing data cleaning and transformations.

Let's create a simple extract task that pulls data from a random user data generator endpoint.

      from prefect import flow, task, get_run_logger\nimport requests\nimport json\n\n@task\ndef fetch(url: str):\n    logger = get_run_logger()\n    response = requests.get(url)\n    raw_data = response.json()\n    logger.info(f\"Raw response: {raw_data}\")\n    return raw_data\n\n@task\ndef clean(raw_data: dict):\n    print(raw_data.get('results')[0])\n    results = raw_data.get('results')[0]\n    logger = get_run_logger()\n    logger.info(f\"Cleaned results: {results}\")\n    return results['name']\n\n@flow\ndef build_names(num: int = 10):\n    df = []\n    url = \"https://randomuser.me/api/\"\n    logger = get_run_logger()\n    copy = num\n    while num != 0:\n        raw_data = fetch(url)\n        df.append(clean(raw_data))\n        num -= 1\n    logger.info(f\"Built {copy} names: {df}\")\n    return df\n\nif __name__ == \"__main__\":\n    list_of_names = build_names()\n

The data cleaning workflow has visibility into each step, and we send a list of names to the next step of our pipeline.

      ","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#create-notification-block-within-the-ui","title":"Create notification block within the UI","text":"

Now let's send a notification based on a completed state outcome. We can configure a notification so that we know when to look into our workflow logic.

1. Before creating the automation, confirm the notification location. We need to create a notification block that defines where the notification will be sent.

2. Navigate to the blocks page in the UI and create an email notification block.

3. Now that we have created a notification block, we can go to the automations page to create our first automation.

4. Next, select the trigger type; in this case, use a flow completion.

5. Finally, define the actions that run once the trigger fires. In this case, send a notification to announce the completion.

6. The automation is now ready to be triggered by a flow run completion. Run the file locally and confirm that the notification arrives in your inbox after completion. It may take a few minutes for the notification to arrive.

      No deployment created

Keep in mind that we did not need to create a deployment to trigger this automation; the state outcome of a local flow run triggered the notification block.
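
If you prefer to register the notification block from code instead of the UI, here is a minimal sketch using a Slack webhook block; any notification block type works, and the webhook URL below is a placeholder.

from prefect.blocks.notifications import SlackWebhook\n\n# Placeholder webhook URL; use the incoming webhook for your own channel\nslack_block = SlackWebhook(url=\"https://hooks.slack.com/services/XXX/YYY/ZZZ\")\nslack_block.save(\"flow-completion-alerts\", overwrite=True)\n\n# Notification blocks can also send a message directly, which is handy for a quick test\nSlackWebhook.load(\"flow-completion-alerts\").notify(\"Automation test message\")\n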

      Now that you've seen how to create an email notification from a flow run completion, let's see how we can kick off a deployment run in response to an event.

      ","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#event-based-deployment-automation","title":"Event-based deployment automation","text":"

An automation can kick off a deployment instead of sending a notification. Let's explore how to create this automation programmatically using Prefect's REST API.

      See the REST API documentation as a reference for interacting with the Prefect Cloud automation endpoints.

Let's create a deployment that kicks off some work based on how long a flow has been running. For example, if the build_names flow is taking too long to execute, we can kick off a deployment with the same build_names flow, but replace the count value with a lower number to speed up completion. You can create a deployment with a prefect.yaml file or a Python file that uses flow.deploy.

      prefect.yaml.deploy

      Create a prefect.yaml file like this one for our flow build_names:

        # Welcome to your prefect.yaml file! You can use this file for storing and managing\n  # configuration for deploying your flows. We recommend committing this file to source\n  # control along with your flow code.\n\n  # Generic metadata about this project\n  name: automations-guide\n  prefect-version: 2.13.1\n\n  # build section allows you to manage and build docker images\n  build: null\n\n  # push section allows you to manage if and how this project is uploaded to remote locations\n  push: null\n\n  # pull section allows you to provide instructions for cloning this project in remote locations\n  pull:\n  - prefect.deployments.steps.set_working_directory:\n      directory: /Users/src/prefect/Playground/automations-guide\n\n  # the deployments section allows you to provide configuration for deploying flows\n  deployments:\n  - name: deploy-build-names\n    version: null\n    tags: []\n    description: null\n    entrypoint: test-automations.py:build_names\n    parameters: {}\n    work_pool:\n      name: tutorial-process-pool\n      work_queue_name: null\n      job_variables: {}\n    schedule: null\n

      To follow a more Python based approach to create a deployment, you can use flow.deploy as in the example below.

# .deploy only needs a name, a valid work pool,\n# and a reference to where the flow code exists\n\nif __name__ == \"__main__\":\n    build_names.deploy(\n        name=\"deploy-build-names\",\n        work_pool_name=\"tutorial-process-pool\",\n        image=\"my_registry/my_image:my_image_tag\",\n    )\n

Now let's grab the deployment_id from this deployment and embed it in our automation. There are many ways to obtain the deployment_id, but the CLI is a quick way to see all of your deployment IDs.

      Find deployment_id from the CLI

The quickest way to see the IDs associated with your deployments is to run prefect deployment ls in an authenticated command prompt.

      prefect deployment ls\n                                          Deployments                                           \n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name                                                  \u2503 ID                                   \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 Extract islands/island-schedule                       \u2502 d9d7289c-7a41-436d-8313-80a044e61532 \u2502\n\u2502 build-names/deploy-build-names                        \u2502 8b10a65e-89ef-4c19-9065-eec5c50242f4 \u2502\n\u2502 ride-duration-prediction-backfill/backfill-deployment \u2502 76dc6581-1773-45c5-a291-7f864d064c57 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n
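
If you prefer to fetch the ID programmatically, here is a minimal sketch using the Prefect client, assuming the flow and deployment names from above.

import asyncio\nfrom prefect import get_client\n\nasync def get_deployment_id(name: str = \"build-names/deploy-build-names\"):\n    # Look up the deployment by \"flow-name/deployment-name\" and return its ID\n    async with get_client() as client:\n        deployment = await client.read_deployment_by_name(name)\n        print(deployment.id)\n        return deployment.id\n\nif __name__ == \"__main__\":\n    asyncio.run(get_deployment_id())\n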
We can create the automation programmatically via a POST call. Ensure you have your api_key, account_id, and workspace_id.

import requests\n\n# Fill in your account_id, workspace_id, and PREFECT_API_KEY before running\ndef create_event_driven_automation():\n    api_url = f\"https://api.prefect.cloud/api/accounts/{account_id}/workspaces/{workspace_id}/automations/\"\n    data = {\n    \"name\": \"Event Driven Redeploy\",\n    \"description\": \"Programmatically created an automation to redeploy a flow based on an event\",\n    \"enabled\": \"true\",\n    \"trigger\": {\n    \"after\": [\n        \"string\"\n    ],\n    \"expect\": [\n        \"prefect.flow-run.Running\"\n    ],\n    \"for_each\": [\n        \"prefect.resource.id\"\n    ],\n    \"posture\": \"Proactive\",\n    \"threshold\": 30,\n    \"within\": 0\n    },\n    \"actions\": [\n    {\n        \"type\": \"run-deployment\",\n        \"source\": \"selected\",\n        \"deployment_id\": \"YOUR-DEPLOYMENT-ID\", \n        \"parameters\": \"10\"\n    }\n    ],\n    \"owner_resource\": \"string\"\n        }\n\n    headers = {\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"}\n    response = requests.post(api_url, headers=headers, json=data)\n\n    print(response.json())\n    return response.json()\n

After running this function, you will see the changes that came from the POST request in the UI. Keep in mind that the context will be \"custom\" in the UI.

Let's run the underlying flow and see the deployment get kicked off after 30 seconds have elapsed. This results in a new flow run of build_names, and we can see this new deployment get initiated with the custom parameters we outlined above.

With a few quick changes, we are able to programmatically create an automation that deploys workflows with custom parameters.

      ","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#using-an-underlying-yaml-file","title":"Using an underlying .yaml file","text":"

We can extend this idea one step further by keeping our own .yaml version of the automation and registering that file with the API. Declaring the automation in its own .yaml file simplifies its requirements.

Let's start by creating the .yaml file that will house the automation requirements. Here is what it would look like:

      name: Cancel long running flows\ndescription: Cancel any flow run after an hour of execution\ntrigger:\n  match:\n    \"prefect.resource.id\": \"prefect.flow-run.*\"\n  match_related: {}\n  after:\n    - \"prefect.flow-run.Failed\"\n  expect:\n    - \"prefect.flow-run.*\"\n  for_each:\n    - \"prefect.resource.id\"\n  posture: \"Proactive\"\n  threshold: 1\n  within: 30\nactions:\n  - type: \"cancel-flow-run\"\n

We can then use a helper function that registers this YAML file with the REST API.

      import yaml\n\nfrom utils import post, put\n\ndef create_or_update_automation(path: str = \"automation.yaml\"):\n    \"\"\"Create or update an automation from a local YAML file\"\"\"\n    # Load the definition\n    with open(path, \"r\") as fh:\n        payload = yaml.safe_load(fh)\n\n    # Find existing automations by name\n    automations = post(\"/automations/filter\")\n    existing_automation = [a[\"id\"] for a in automations if a[\"name\"] == payload[\"name\"]]\n    automation_exists = len(existing_automation) > 0\n\n    # Create or update the automation\n    if automation_exists:\n        print(f\"Automation '{payload['name']}' already exists and will be updated\")\n        put(f\"/automations/{existing_automation[0]}\", payload=payload)\n    else:\n        print(f\"Creating automation '{payload['name']}'\")\n        post(\"/automations/\", payload=payload)\n\nif __name__ == \"__main__\":\n    create_or_update_automation()\n
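
The post and put helpers imported from utils live in the companion repository linked below; here is a minimal sketch of what they might look like, assuming your Prefect Cloud API URL and API key are available as environment variables.

import os\n\nimport requests\n\n# e.g. https://api.prefect.cloud/api/accounts/<account_id>/workspaces/<workspace_id>\nAPI_URL = os.environ[\"PREFECT_API_URL\"]\nHEADERS = {\"Authorization\": f\"Bearer {os.environ['PREFECT_API_KEY']}\"}\n\ndef post(route: str, payload: dict = None):\n    \"\"\"POST to the Prefect Cloud REST API and return the parsed JSON body.\"\"\"\n    response = requests.post(f\"{API_URL}{route}\", headers=HEADERS, json=payload or {})\n    response.raise_for_status()\n    return response.json()\n\ndef put(route: str, payload: dict = None):\n    \"\"\"PUT to the Prefect Cloud REST API; some endpoints return an empty body.\"\"\"\n    response = requests.put(f\"{API_URL}{route}\", headers=HEADERS, json=payload or {})\n    response.raise_for_status()\n    return response.json() if response.content else None\n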

You can find a complete repo with these API examples in this GitHub repository.

In this example, we created the automation by registering the .yaml file with a helper function. This offers another way to create an automation.

      ","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#custom-webhook-kicking-off-an-automation","title":"Custom webhook kicking off an automation","text":"

We can use webhooks to expose the events API, which allows us to extend the functionality of deployments and respond to changes in our workflow in a few easy steps.

      By exposing a webhook endpoint, we can kick off workflows that can trigger deployments - all from a simple event created from an HTTP request.

Let's create a webhook within the UI. Here is the webhook template we can use to create these dynamic events.

      {\n    \"event\": \"model-update\",\n    \"resource\": {\n        \"prefect.resource.id\": \"product.models.{{ body.model_id}}\",\n        \"prefect.resource.name\": \"{{ body.friendly_name }}\",\n        \"run_count\": \"{{body.run_count}}\"\n    }\n}\n
      From a simple input, we can easily create an exposed webhook endpoint.

      Each webhook will correspond to a custom event created, where you can react to it downstream with a separate deployment or automation.

For example, we can create a curl request that sends information to the endpoint, such as a run count for our deployment.

      curl -X POST https://api.prefect.cloud/hooks/34iV2SFke3mVa6y5Y-YUoA -d \"model_id=adhoc\" -d \"run_count=10\" -d \"friendly_name=test-user-input\"\n
From here, the webhook pulls in the parameters sent with the curl command and kicks off a deployment that uses those parameters.
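
The same event can also be emitted from Python; here is a minimal sketch of the equivalent request, assuming the webhook URL shown above.

import requests\n\n# Webhook URL copied from the Prefect Cloud UI\nwebhook_url = \"https://api.prefect.cloud/hooks/34iV2SFke3mVa6y5Y-YUoA\"\n\n# Form-encoded body, matching the curl -d flags above\npayload = {\"model_id\": \"adhoc\", \"run_count\": \"10\", \"friendly_name\": \"test-user-input\"}\n\nresponse = requests.post(webhook_url, data=payload)\nresponse.raise_for_status()\nprint(response.status_code)\n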

Go to the event feed, where we can create an automation straight from this event.

This allows us to create automations that respond to these webhook events. With a few clicks in the UI, we can associate an external process with the Prefect events API and trigger downstream deployments.

      In the next section, we will explore event triggers that automate the kickoff of a deployment run.

      ","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#using-triggers","title":"Using triggers","text":"

Let's take this idea one step further by creating a deployment that will be triggered when a flow run takes longer than expected. We can take advantage of Prefect's Marvin library, which uses an LLM to classify our data. Marvin is great at embedding data science and data analysis applications within your pre-existing data engineering workflows. In this case, we can use Marvin's AI functions to make our dataset more information-rich.

Install Marvin with pip install marvin and set your OpenAI API key as shown here.

      We can add a trigger to run a deployment in response to a specific event.

      Let's create an example with Marvin's AI functions. We will take in a pandas DataFrame and use the AI function to analyze it.

Here is an example of pulling in that data and enriching it with Marvin's AI functions. We can generate dummy data based on the names we built earlier.

from marvin import ai_fn\nfrom prefect import flow\nfrom prefect.artifacts import create_table_artifact\n\n@ai_fn\ndef generate_synthetic_user_data(build_of_names: list[dict]) -> list:\n    \"\"\"\n    Generate additional data for userID (numerical values with 6 digits), location, and timestamp as separate columns and append the data onto 'build_of_names'. Make userID the first column\n    \"\"\"\n\n@flow\ndef create_fake_user_dataset(df):\n    artifact_df = generate_synthetic_user_data(df)\n    print(artifact_df)\n\n    create_table_artifact(\n        key=\"fake-user-data\",\n        table=artifact_df,\n        description=\"Dataset that is comprised of a mix of autogenerated data based on user data\"\n    )\n\nif __name__ == \"__main__\":\n    # Example input; in practice this list comes from the build_names flow\n    create_fake_user_dataset([{\"name\": \"Jane Doe\"}])\n

Let's kick off a deployment with a trigger defined in a prefect.yaml file. We'll specify that the deployment should run when a flow run stays in a Running state for longer than 30 seconds.

# Welcome to your prefect.yaml file! You can use this file for storing and managing\n# configuration for deploying your flows. We recommend committing this file to source\n# control along with your flow code.\n\n# Generic metadata about this project\nname: automations-guide\nprefect-version: 2.13.1\n\n# build section allows you to manage and build docker images\nbuild: null\n\n# push section allows you to manage if and how this project is uploaded to remote locations\npush: null\n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect.deployments.steps.set_working_directory:\n    directory: /Users/src/prefect/Playground/marvin-extension\n\n# the deployments section allows you to provide configuration for deploying flows\ndeployments:\n- name: create-fake-user-dataset\n  triggers:\n    - enabled: true\n      match:\n        prefect.resource.id: \"prefect.flow-run.*\"\n      after: [\"prefect.flow-run.Running\"]\n      expect: []\n      for_each: [\"prefect.resource.id\"]\n      parameters:\n        param_1: 10\n      posture: \"Proactive\"\n  version: null\n  tags: []\n  description: null\n  entrypoint: marvin-extension.py:create_fake_user_dataset\n  parameters: {}\n  work_pool:\n    name: tutorial-process-pool\n    work_queue_name: null\n    job_variables: {}\n  schedule: null\n

      ","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/automations/#next-steps","title":"Next steps","text":"

You've seen how to create automations via the UI, the REST API, and triggers defined in a prefect.yaml deployment definition.

      To learn more about events that can act as automation triggers, see the events docs. To learn more about event webhooks in particular, see the webhooks guide.

      ","tags":["automations","event-driven","trigger"],"boost":2},{"location":"guides/big-data/","title":"Big Data with Prefect","text":"

      In this guide you'll learn tips for working with large amounts of data in Prefect.

      Big data doesn't have a widely accepted, precise definition. In this guide, we'll discuss methods to reduce the processing time or memory utilization of Prefect workflows, without editing your Python code.

      ","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#optimizing-your-python-code-with-prefect-for-big-data","title":"Optimizing your Python code with Prefect for big data","text":"

      Depending upon your needs, you may want to optimize your Python code for speed, memory, compute, or disk space.

      Prefect provides several options that we'll explore in this guide:

      1. Remove task introspection with quote to save time running your code.
      2. Write task results to cloud storage such as S3 using a block to save memory.
      3. Save data to disk within a flow rather than using results.
      4. Cache task results to save time and compute.
      5. Compress results written to disk to save space.
      6. Use a task runner for parallelizable operations to save time.
      ","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#remove-task-introspection","title":"Remove task introspection","text":"

When a task is called from a flow, Prefect introspects each argument by default. To speed up your flow runs, you can disable this behavior for a task by wrapping the argument with quote.

      To demonstrate, let's use a basic example that extracts and transforms some New York taxi data.

      et_quote.py
      from prefect import task, flow\nfrom prefect.utilities.annotations import quote\nimport pandas as pd\n\n\n@task\ndef extract(url: str):\n    \"\"\"Extract data\"\"\"\n    df_raw = pd.read_parquet(url)\n    print(df_raw.info())\n    return df_raw\n\n\n@task\ndef transform(df: pd.DataFrame):\n    \"\"\"Basic transformation\"\"\"\n    df[\"tip_fraction\"] = df[\"tip_amount\"] / df[\"total_amount\"]\n    print(df.info())\n    return df\n\n\n@flow(log_prints=True)\ndef et(url: str):\n    \"\"\"ET pipeline\"\"\"\n    df_raw = extract(url)\n    df = transform(quote(df_raw))\n\n\nif __name__ == \"__main__\":\n    url = \"https://d37ci6vzurychx.cloudfront.net/trip-data/yellow_tripdata_2023-09.parquet\"\n    et(url)\n

Introspection can take significant time when the object being passed is a large collection, such as a dictionary or DataFrame, where each element needs to be visited. Note that using quote reduces execution time at the expense of disabling task dependency tracking for the wrapped object.

      ","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#write-task-results-to-cloud-storage","title":"Write task results to cloud storage","text":"

By default, the results of task runs are stored in memory in your execution environment. This behavior makes flow runs fast for small data, but can be problematic for large data. You can save memory by writing results to disk. In production, you'll generally want to write results to cloud provider storage such as AWS S3. Prefect lets you use a storage block from a Prefect integration library, such as prefect-aws, to save your configuration information. Learn more about blocks here.

      Install the relevant library, register the block with the server, and create your storage block. Then you can reference the block in your flow like this:

      ...\nfrom prefect_aws.s3 import S3Bucket\n\nmy_s3_block = S3Bucket.load(\"MY_BLOCK_NAME\")\n\n...\n@task(result_storage=my_s3_block)\n

      Now the result of the task will be written to S3, rather than stored in memory.
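
Putting the pieces together, here is a minimal sketch of a flow that persists a task result to S3, assuming prefect-aws is installed and an S3Bucket block named my-result-storage already exists.

from prefect import flow, task\nfrom prefect_aws.s3 import S3Bucket\n\n# Hypothetical block name; create it in the UI or with S3Bucket(...).save(\"my-result-storage\")\ns3_block = S3Bucket.load(\"my-result-storage\")\n\n@task(result_storage=s3_block, persist_result=True)\ndef make_big_list():\n    # The returned value is serialized and persisted to the S3 bucket\n    return list(range(1_000_000))\n\n@flow\ndef pipeline():\n    make_big_list()\n\nif __name__ == \"__main__\":\n    pipeline()\n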

      ","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#save-data-to-disk-within-a-flow","title":"Save data to disk within a flow","text":"

      To save memory and time with big data, you don't need to pass results between tasks at all. Instead, you can write and read data to disk directly in your flow code. Prefect has integration libraries for each of the major cloud providers. Each library contains blocks with methods that make it convenient to read and write data to and from cloud object storage. The moving data guide has step-by-step examples for each cloud provider.
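
As a minimal sketch, assuming prefect-aws is installed and an S3Bucket block named my-data-bucket exists, you can write and read objects directly inside your flow rather than returning large results from tasks.

from prefect import flow\nfrom prefect_aws.s3 import S3Bucket\n\n@flow\ndef etl():\n    # Hypothetical block name; create the S3Bucket block ahead of time\n    s3_bucket = S3Bucket.load(\"my-data-bucket\")\n\n    # Write bytes to object storage instead of passing them between tasks\n    s3_bucket.write_path(\"raw/data.txt\", content=b\"hello, big data\")\n\n    # Read them back later in the flow\n    raw = s3_bucket.read_path(\"raw/data.txt\")\n    print(len(raw))\n\nif __name__ == \"__main__\":\n    etl()\n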

      ","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#cache-task-results","title":"Cache task results","text":"

      Caching allows you to avoid re-running tasks when doing so is unnecessary. Caching can save you time and compute. Note that caching requires task result persistence. Caching is discussed in detail in the tasks concept page.
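
Here is a minimal sketch of enabling caching on a task with task_input_hash; the second call with the same input reuses the cached result instead of recomputing it.

from datetime import timedelta\n\nfrom prefect import flow, task\nfrom prefect.tasks import task_input_hash\n\n@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1), persist_result=True)\ndef expensive_transform(n: int) -> int:\n    return sum(range(n))\n\n@flow\ndef cached_flow():\n    # The second call hits the cache because the input (and therefore the cache key) is the same\n    expensive_transform(10_000_000)\n    expensive_transform(10_000_000)\n\nif __name__ == \"__main__\":\n    cached_flow()\n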

      ","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#compress-results-written-to-disk","title":"Compress results written to disk","text":"

If you're using Prefect's task result persistence, you can save disk space by compressing the results. You just need to specify a result serializer prefixed with compressed/ like this:

      @task(result_serializer=\"compressed/json\")\n

      Read about compressing results with Prefect for more details. The tradeoff of using compression is that it takes time to compress and decompress the data.
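
For a runnable version of the snippet above, here is a minimal sketch that persists a compressed JSON result; the serializer name is the only Prefect-specific piece.

from prefect import flow, task\n\n@task(result_serializer=\"compressed/json\", persist_result=True)\ndef big_payload() -> dict:\n    # The JSON result is compressed before it is written to result storage\n    return {\"rows\": list(range(100_000))}\n\n@flow\ndef compressed_results_flow():\n    big_payload()\n\nif __name__ == \"__main__\":\n    compressed_results_flow()\n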

      ","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/big-data/#use-a-task-runner-for-parallelizable-operations","title":"Use a task runner for parallelizable operations","text":"

      Prefect's task runners allow you to use the Dask and Ray Python libraries to run tasks in parallel and distributed across multiple machines. This can save you time and compute when operating on large data structures. See the guide to working with Dask and Ray Task Runners for details.
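
Here is a minimal sketch using the Dask task runner, assuming prefect-dask is installed; each submitted task can run in parallel on the temporary Dask cluster the runner creates.

from prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef double(x: int) -> int:\n    return x * 2\n\n@flow(task_runner=DaskTaskRunner())\ndef parallel_flow() -> list:\n    # .submit() hands each task to the Dask task runner so calls can run in parallel\n    futures = [double.submit(i) for i in range(10)]\n    return [f.result() for f in futures]\n\nif __name__ == \"__main__\":\n    parallel_flow()\n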

      ","tags":["big data","flow configuration","parallel execution","distributed execution","caching"],"boost":2},{"location":"guides/ci-cd/","title":"CI/CD With Prefect","text":"

Many organizations deploy Prefect workflows via their CI/CD process. Each organization has its own unique CI/CD setup, but a common pattern is to use CI/CD to manage Prefect deployments. Combining Prefect's deployment features with CI/CD tools enables efficient management of flow code updates, scheduling changes, and container builds. This guide uses GitHub Actions to implement a CI/CD process, but these concepts are generally applicable across many CI/CD tools.

      Note that Prefect's primary ways for creating deployments, a .deploy flow method or a prefect.yaml configuration file, are both designed with building and pushing images to a Docker registry in mind.

      ","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#getting-started-with-github-actions-and-prefect","title":"Getting started with GitHub Actions and Prefect","text":"

      In this example, you'll write a GitHub Actions workflow that will run each time you push to your repository's main branch. This workflow will build and push a Docker image containing your flow code to Docker Hub, then deploy the flow to Prefect Cloud.

      ","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#repository-secrets","title":"Repository secrets","text":"

      Your CI/CD process must be able to authenticate with Prefect in order to deploy flows.

      Deploying flows securely and non-interactively in your CI/CD process can be accomplished by saving your PREFECT_API_URL and PREFECT_API_KEY as secrets in your repository's settings so they can be accessed in your CI/CD runner's environment without exposing them in any scripts or configuration files.

      In this scenario, deploying flows involves building and pushing Docker images, so add DOCKER_USERNAME and DOCKER_PASSWORD as secrets to your repository as well.

      You can create secrets for GitHub Actions in your repository under Settings -> Secrets and variables -> Actions -> New repository secret:

      ","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#writing-a-github-workflow","title":"Writing a GitHub workflow","text":"

      To deploy your flow via GitHub Actions, you'll need a workflow YAML file. GitHub will look for workflow YAML files in the .github/workflows/ directory in the root of your repository. In their simplest form, GitHub workflow files are made up of triggers and jobs.

      The on: trigger is set to run the workflow each time a push occurs on the main branch of the repository.

The deploy job is composed of four steps:

      • Checkout clones your repository into the GitHub Actions runner so you can reference files or run scripts from your repository in later steps.
      • Log in to Docker Hub authenticates to DockerHub so your image can be pushed to the Docker registry in your DockerHub account. docker/login-action is an existing GitHub action maintained by Docker. with: passes values into the Action, similar to passing parameters to a function.
      • Setup Python installs your selected version of Python.
      • Prefect Deploy installs the dependencies used in your flow, then deploys your flow. env: makes the PREFECT_API_KEY and PREFECT_API_URL secrets from your repository available as environment variables during this step's execution.

      For reference, the examples below can be found on their respective branches of this repository.

      .deployprefect.yaml
      .\n\u251c\u2500\u2500 .github/\n\u2502   \u2514\u2500\u2500 workflows/\n\u2502       \u2514\u2500\u2500 deploy-prefect-flow.yaml\n\u251c\u2500\u2500 flow.py\n\u2514\u2500\u2500 requirements.txt\n

      flow.py

      from prefect import flow\n\n@flow(log_prints=True)\ndef hello():\n  print(\"Hello!\")\n\nif __name__ == \"__main__\":\n    hello.deploy(\n        name=\"my-deployment\",\n        work_pool_name=\"my-work-pool\",\n        image=\"my_registry/my_image:my_image_tag\",\n    )\n

      .github/workflows/deploy-prefect-flow.yaml

      name: Deploy Prefect flow\n\non:\n  push:\n    branches:\n      - main\n\njobs:\n  deploy:\n    name: Deploy\n    runs-on: ubuntu-latest\n\n    steps:\n      - name: Checkout\n        uses: actions/checkout@v4\n\n      - name: Log in to Docker Hub\n        uses: docker/login-action@v3\n        with:\n          username: ${{ secrets.DOCKER_USERNAME }}\n          password: ${{ secrets.DOCKER_PASSWORD }}\n\n      - name: Setup Python\n        uses: actions/setup-python@v5\n        with:\n          python-version: '3.11'\n\n      - name: Prefect Deploy\n        env:\n          PREFECT_API_KEY: ${{ secrets.PREFECT_API_KEY }}\n          PREFECT_API_URL: ${{ secrets.PREFECT_API_URL }}\n        run: |\n          pip install -r requirements.txt\n          python flow.py\n
      .\n\u251c\u2500\u2500 .github/\n\u2502   \u2514\u2500\u2500 workflows/\n\u2502       \u2514\u2500\u2500 deploy-prefect-flow.yaml\n\u251c\u2500\u2500 flow.py\n\u251c\u2500\u2500 prefect.yaml\n\u2514\u2500\u2500 requirements.txt\n

      flow.py

      from prefect import flow\n\n@flow(log_prints=True)\ndef hello():\n  print(\"Hello!\")\n

      prefect.yaml

name: cicd-example\nprefect-version: 2.14.11\n\nbuild:\n  - prefect_docker.deployments.steps.build_docker_image:\n      id: build_image\n      requires: prefect-docker>=0.3.1\n      image_name: my_registry/my_image\n      tag: my_image_tag\n      dockerfile: auto\n\npush:\n  - prefect_docker.deployments.steps.push_docker_image:\n      requires: prefect-docker>=0.3.1\n      image_name: \"{{ build_image.image_name }}\"\n      tag: \"{{ build_image.tag }}\"\n\npull: null\n\ndeployments:\n  - name: my-deployment\n    entrypoint: flow.py:hello\n    work_pool:\n      name: my-work-pool\n      work_queue_name: default\n      job_variables:\n        image: \"{{ build_image.image }}\"\n

      .github/workflows/deploy-prefect-flow.yaml

      name: Deploy Prefect flow\n\non:\n  push:\n    branches:\n      - main\n\njobs:\n  deploy:\n    name: Deploy\n    runs-on: ubuntu-latest\n\n    steps:\n      - name: Checkout\n        uses: actions/checkout@v4\n\n      - name: Log in to Docker Hub\n        uses: docker/login-action@v3\n        with:\n          username: ${{ secrets.DOCKER_USERNAME }}\n          password: ${{ secrets.DOCKER_PASSWORD }}\n\n      - name: Setup Python\n        uses: actions/setup-python@v5\n        with:\n          python-version: '3.11'\n\n      - name: Prefect Deploy\n        env:\n          PREFECT_API_KEY: ${{ secrets.PREFECT_API_KEY }}\n          PREFECT_API_URL: ${{ secrets.PREFECT_API_URL }}\n        run: |\n          pip install -r requirements.txt\n          prefect deploy -n my-deployment\n
      ","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#running-a-github-workflow","title":"Running a GitHub workflow","text":"

      After pushing commits to your repository, GitHub will automatically trigger a run of your workflow. The status of running and completed workflows can be monitored from the Actions tab of your repository.

      You can view the logs from each workflow step as they run. The Prefect Deploy step will include output about your image build and push, and the creation/update of your deployment.

      Successfully built image '***/cicd-example:latest'\n\nSuccessfully pushed image '***/cicd-example:latest'\n\nSuccessfully created/updated all deployments!\n\n                Deployments\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name                \u2503 Status  \u2503 Details \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 hello/my-deployment \u2502 applied \u2502         \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n
      ","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#advanced-example","title":"Advanced example","text":"

      In more complex scenarios, CI/CD processes often need to accommodate several additional considerations to enable a smooth development workflow:

      • Making code available in different environments as it advances through stages of development
      • Handling independent deployment of distinct groupings of work, as in a monorepo
      • Efficiently using build time to avoid repeated work

      This example repository demonstrates how each of these considerations can be addressed using a combination of Prefect's and GitHub's capabilities.

      ","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#deploying-to-multiple-workspaces","title":"Deploying to multiple workspaces","text":"

When changes are pushed, the deployment processes that run are selected automatically based on two conditions:

      on:\n  push:\n    branches:\n      - stg\n      - main\n    paths:\n      - \"project_1/**\"\n
      1. branches: - which branch has changed. This will ultimately select which Prefect workspace a deployment is created or updated in. In this example, changes on the stg branch will deploy flows to a staging workspace, and changes on the main branch will deploy flows to a production workspace.
      2. paths: - which project folders' files have changed. Since each project folder contains its own flows, dependencies, and prefect.yaml, it represents a complete set of logic and configuration that can be deployed independently. Each project in this repository gets its own GitHub Actions workflow YAML file.

      The prefect.yaml file in each project folder depends on environment variables that are dictated by the selected job in each CI/CD workflow, enabling external code storage for Prefect deployments that is clearly separated across projects and environments.

        .\n  \u251c\u2500\u2500 cicd-example-workspaces-prod  # production bucket\n  \u2502   \u251c\u2500\u2500 project_1\n  \u2502   \u2514\u2500\u2500 project_2\n  \u2514\u2500\u2500 cicd-example-workspaces-stg  # staging bucket\n      \u251c\u2500\u2500 project_1\n      \u2514\u2500\u2500 project_2  \n

      Since the deployments in this example use S3 for code storage, it's important that push steps place flow files in separate locations depending upon their respective environment and project so no deployment overwrites another deployment's files.

      ","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#caching-build-dependencies","title":"Caching build dependencies","text":"

      Since building Docker images and installing Python dependencies are essential parts of the deployment process, it's useful to rely on caching to skip repeated build steps.

      The setup-python action offers caching options so Python packages do not have to be downloaded on repeat workflow runs.

      - name: Setup Python\n  uses: actions/setup-python@v5\n  with:\n    python-version: \"3.11\"\n    cache: \"pip\"\n
      Using cached prefect-2.16.1-py3-none-any.whl (2.9 MB)\nUsing cached prefect_aws-0.4.10-py3-none-any.whl (61 kB)\n

      The build-push-action for building Docker images also offers caching options for GitHub Actions. If you are not using GitHub, other remote cache backends are available as well.

      - name: Build and push\n  id: build-docker-image\n  env:\n      GITHUB_SHA: ${{ steps.get-commit-hash.outputs.COMMIT_HASH }}\n  uses: docker/build-push-action@v5\n  with:\n    context: ${{ env.PROJECT_NAME }}/\n    push: true\n    tags: ${{ secrets.DOCKER_USERNAME }}/${{ env.PROJECT_NAME }}:${{ env.GITHUB_SHA }}-stg\n    cache-from: type=gha\n    cache-to: type=gha,mode=max\n
      importing cache manifest from gha:***\nDONE 0.1s\n\n[internal] load build context\ntransferring context: 70B done\nDONE 0.0s\n\n[2/3] COPY requirements.txt requirements.txt\nCACHED\n\n[3/3] RUN pip install -r requirements.txt\nCACHED\n
      ","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#prefect-github-actions","title":"Prefect GitHub Actions","text":"

      Prefect provides its own GitHub Actions for authentication and deployment creation. These actions can simplify deploying with CI/CD when using prefect.yaml, especially in cases where a repository contains flows that are used in multiple deployments across multiple Prefect Cloud workspaces.

      Here's an example of integrating these actions into the workflow we created above:

      name: Deploy Prefect flow\n\non:\n  push:\n    branches:\n      - main\n\njobs:\n  deploy:\n    name: Deploy\n    runs-on: ubuntu-latest\n\n    steps:\n      - name: Checkout\n        uses: actions/checkout@v4\n\n      - name: Log in to Docker Hub\n        uses: docker/login-action@v3\n        with:\n          username: ${{ secrets.DOCKER_USERNAME }}\n          password: ${{ secrets.DOCKER_PASSWORD }}\n\n      - name: Setup Python\n        uses: actions/setup-python@v5\n        with:\n          python-version: \"3.11\"\n\n      - name: Prefect Auth\n        uses: PrefectHQ/actions-prefect-auth@v1\n        with:\n          prefect-api-key: ${{ secrets.PREFECT_API_KEY }}\n          prefect-workspace: ${{ secrets.PREFECT_WORKSPACE }}\n\n      - name: Run Prefect Deploy\n        uses: PrefectHQ/actions-prefect-deploy@v3\n        with:\n          deployment-names: my-deployment\n          requirements-file-paths: requirements.txt\n
      ","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#authenticating-to-other-docker-image-registries","title":"Authenticating to other Docker image registries","text":"

      The docker/login-action GitHub Action supports pushing images to a wide variety of image registries.

      For example, if you are storing Docker images in AWS Elastic Container Registry, you can add your ECR registry URL to the registry key in the with: part of the action and use an AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as your username and password.

      - name: Login to ECR\n  uses: docker/login-action@v3\n  with:\n    registry: <aws-account-number>.dkr.ecr.<region>.amazonaws.com\n    username: ${{ secrets.AWS_ACCESS_KEY_ID }}\n    password: ${{ secrets.AWS_SECRET_ACCESS_KEY }}\n
      ","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/ci-cd/#other-resources","title":"Other resources","text":"

      Check out the Prefect Cloud Terraform provider if you're using Terraform to manage your infrastructure.

      ","tags":["CI/CD","continuous integration","continuous delivery"],"boost":2},{"location":"guides/cli-shell/","title":"Orchestrating Shell Commands with Prefect","text":"

      Harness the power of the Prefect CLI to execute and schedule shell commands as Prefect flows. This guide shows how to use the watch and serve commands to showcase the CLI's versatility for automation tasks.

      Here's what you'll learn:

      • Running a shell command as a Prefect flow on-demand with watch.
      • Scheduling a shell command as a recurring Prefect flow using serve.
      • The benefits of embedding these commands into your automation workflows.
      ","tags":["CLI","Shell Commands","Prefect Flows","Automation","Scheduling"],"boost":2},{"location":"guides/cli-shell/#prerequisites","title":"Prerequisites","text":"

      Before you begin, ensure you have:

      • A basic understanding of Prefect flows. Start with the Getting Started guide if you're new.
      • A recent version of Prefect installed in your command line environment. Follow the instructions in the docs if you have any issues.
      ","tags":["CLI","Shell Commands","Prefect Flows","Automation","Scheduling"],"boost":2},{"location":"guides/cli-shell/#the-watch-command","title":"The watch command","text":"

      The watch command wraps any shell command in a Prefect flow for instant execution, ideal for quick tasks or integrating shell scripts into your workflows.

      ","tags":["CLI","Shell Commands","Prefect Flows","Automation","Scheduling"],"boost":2},{"location":"guides/cli-shell/#example-usage","title":"Example usage","text":"

      Imagine you want to fetch the current weather in Chicago using the curl command. The following Prefect CLI command does just that:

      prefect shell watch \"curl http://wttr.in/Chicago?format=3\"\n

      This command makes a request to wttr.in, a console-oriented weather service, and prints the weather conditions for Chicago.

      ","tags":["CLI","Shell Commands","Prefect Flows","Automation","Scheduling"],"boost":2},{"location":"guides/cli-shell/#benefits-of-watch","title":"Benefits of watch","text":"
      • Immediate feedback: Execute shell commands within the Prefect framework for immediate results.
      • Easy integration: Seamlessly blend external scripts or data fetching into your data workflows.
      • Visibility and logging: Leverage Prefect's logging to track the execution and output of your shell tasks.
      ","tags":["CLI","Shell Commands","Prefect Flows","Automation","Scheduling"],"boost":2},{"location":"guides/cli-shell/#deploying-with-serve","title":"Deploying with serve","text":"

When you need to run shell commands on a schedule, the serve command creates a Prefect deployment for regular execution. This is an extremely quick way to create a deployment that is served by Prefect.

      ","tags":["CLI","Shell Commands","Prefect Flows","Automation","Scheduling"],"boost":2},{"location":"guides/cli-shell/#example-usage_1","title":"Example usage","text":"

      To set up a daily weather report for Chicago at 9 AM, you can use the serve command as follows:

      prefect shell serve \"curl http://wttr.in/Chicago?format=3\" --flow-name \"Daily Chicago Weather Report\" --cron-schedule \"0 9 * * *\" --deployment-name \"Chicago Weather\"\n

      This command schedules a Prefect flow to fetch Chicago's weather conditions daily, providing consistent updates without manual intervention. Additionally, if you want to fetch the Chicago weather, you can manually create a run of your new deployment from the UI or the CLI.

      To shut down your server and pause your scheduled runs, hit ctrl + c in the CLI.

      ","tags":["CLI","Shell Commands","Prefect Flows","Automation","Scheduling"],"boost":2},{"location":"guides/cli-shell/#benefits-of-serve","title":"Benefits of serve","text":"
      • Automated scheduling: Schedule shell commands to run automatically, ensuring critical updates are generated and available on time.
      • Centralized workflow management: Manage and monitor your scheduled shell commands inside Prefect for a unified workflow overview.
      • Configurable execution: Tailor execution frequency, concurrency limits, and other parameters to suit your project's needs and resources.
      ","tags":["CLI","Shell Commands","Prefect Flows","Automation","Scheduling"],"boost":2},{"location":"guides/cli-shell/#next-steps","title":"Next steps","text":"

      With the watch and serve commands at your disposal, you're ready to incorporate shell command automation into your Prefect workflows. You can start with straightforward tasks like observing cron jobs and expand to more complex automation scenarios to enhance your workflows' efficiency and capabilities.

      Check out the tutorial and explore other Prefect docs to learn how to gain more observability and orchestration capabilities in your workflows.

      ","tags":["CLI","Shell Commands","Prefect Flows","Automation","Scheduling"],"boost":2},{"location":"guides/creating-interactive-workflows/","title":"Creating Interactive Workflows","text":"

      Flows can pause or suspend execution and automatically resume when they receive type-checked input in Prefect's UI. Flows can also send and receive type-checked input at any time while running, without pausing or suspending. This guide will show you how to use these features to build interactive workflows.

      A note on async Python syntax

      Most of the example code in this section uses async Python functions and await. However, as with other Prefect features, you can call these functions with or without await.

      ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#pausing-or-suspending-a-flow-until-it-receives-input","title":"Pausing or suspending a flow until it receives input","text":"

      You can pause or suspend a flow until it receives input from a user in Prefect's UI. This is useful when you need to ask for additional information or feedback before resuming a flow. Such workflows are often called human-in-the-loop (HITL) systems.

      What is human-in-the-loop interactivity used for?

      Approval workflows that pause to ask a human to confirm whether a workflow should continue are very common in the business world. Certain types of machine learning training and artificial intelligence workflows benefit from incorporating HITL design.

      ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#waiting-for-input","title":"Waiting for input","text":"

      To receive input while paused or suspended use the wait_for_input parameter in the pause_flow_run or suspend_flow_run functions. This parameter accepts one of the following:

      • A built-in type like int or str, or a built-in collection like List[int]
      • A pydantic.BaseModel subclass
      • A subclass of prefect.input.RunInput

When to use a RunInput or BaseModel instead of a built-in type

There are a few reasons to use a RunInput or BaseModel. The first is that when you let Prefect automatically create one of these classes for your input type, the field that users will see in Prefect's UI when they click \"Resume\" on a flow run is named value and has no help text to suggest what the field is. If you create a RunInput or BaseModel, you can change details like the field name, help text, and default value, and users will see those reflected in the \"Resume\" form.

      The simplest way to pause or suspend and wait for input is to pass a built-in type:

      from prefect import flow, pause_flow_run, get_run_logger\n\n@flow\ndef greet_user():\n    logger = get_run_logger()\n\n    user = pause_flow_run(wait_for_input=str)\n\n    logger.info(f\"Hello, {user}!\")\n

      In this example, the flow run will pause until a user clicks the Resume button in the Prefect UI, enters a name, and submits the form.

      What types can you pass for wait_for_input?

      When you pass a built-in type such as int as an argument for the wait_for_input parameter to pause_flow_run or suspend_flow_run, Prefect automatically creates a Pydantic model containing one field annotated with the type you specified. This means you can use any type annotation that Pydantic accepts for model fields with these functions.

      Instead of a built-in type, you can pass in a pydantic.BaseModel class. This is useful if you already have a BaseModel you want to use:

      from prefect import flow, pause_flow_run, get_run_logger\nfrom pydantic import BaseModel\n\n\nclass User(BaseModel):\n    name: str\n    age: int\n\n\n@flow\nasync def greet_user():\n    logger = get_run_logger()\n\n    user = await pause_flow_run(wait_for_input=User)\n\n    logger.info(f\"Hello, {user.name}!\")\n

      BaseModel classes are upgraded to RunInput classes automatically

      When you pass a pydantic.BaseModel class as the wait_for_input argument to pause_flow_run or suspend_flow_run, Prefect automatically creates a RunInput class with the same behavior as your BaseModel and uses that instead.

      RunInput classes contain extra logic that allows flows to send and receive them at runtime. You shouldn't notice any difference!

      Finally, for advanced use cases like overriding how Prefect stores flow run inputs, you can create a RunInput class:

from prefect import flow, get_run_logger, pause_flow_run\nfrom prefect.input import RunInput\n\nclass UserInput(RunInput):\n    name: str\n    age: int\n\n    # Imagine overridden methods here!\n    def override_something(self, *args, **kwargs):\n        super().override_something(*args, **kwargs)\n\n@flow\nasync def greet_user():\n    logger = get_run_logger()\n\n    user = await pause_flow_run(wait_for_input=UserInput)\n\n    logger.info(f\"Hello, {user.name}!\")\n
      ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#providing-initial-data","title":"Providing initial data","text":"

      You can set default values for fields in your model by using the with_initial_data method. This is useful when you want to provide default values for the fields in your own RunInput class.

      Expanding on the example above, you could make the name field default to \"anonymous\":

from prefect import flow, get_run_logger, pause_flow_run\nfrom prefect.input import RunInput\n\nclass UserInput(RunInput):\n    name: str\n    age: int\n\n@flow\nasync def greet_user():\n    logger = get_run_logger()\n\n    user_input = await pause_flow_run(\n        wait_for_input=UserInput.with_initial_data(name=\"anonymous\")\n    )\n\n    if user_input.name == \"anonymous\":\n        logger.info(\"Hello, stranger!\")\n    else:\n        logger.info(f\"Hello, {user_input.name}!\")\n

      When a user sees the form for this input, the name field will contain \"anonymous\" as the default.

      ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#providing-a-description-with-runtime-data","title":"Providing a description with runtime data","text":"

      You can provide a dynamic, markdown description that will appear in the Prefect UI when the flow run pauses. This feature enables context-specific prompts, enhancing clarity and user interaction. Building on the example above:

      from datetime import datetime\nfrom prefect import flow, pause_flow_run, get_run_logger\nfrom prefect.input import RunInput\n\n\nclass UserInput(RunInput):\n    name: str\n    age: int\n\n\n@flow\nasync def greet_user():\n    logger = get_run_logger()\n    current_date = datetime.now().strftime(\"%B %d, %Y\")\n\n    description_md = f\"\"\"\n**Welcome to the User Greeting Flow!**\nToday's Date: {current_date}\n\nPlease enter your details below:\n- **Name**: What should we call you?\n- **Age**: Just a number, nothing more.\n\"\"\"\n\n    user_input = await pause_flow_run(\n        wait_for_input=UserInput.with_initial_data(\n            description=description_md, name=\"anonymous\"\n        )\n    )\n\n    if user_input.name == \"anonymous\":\n        logger.info(\"Hello, stranger!\")\n    else:\n        logger.info(f\"Hello, {user_input.name}!\")\n

      When a user sees the form for this input, the given markdown will appear above the input fields.

      ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#handling-custom-validation","title":"Handling custom validation","text":"

      Prefect uses the fields and type hints on your RunInput or BaseModel class to validate the general structure of input your flow receives, but you might require more complex validation. If you do, you can use Pydantic validators.

      Custom validation runs after the flow resumes

      Prefect transforms the type annotations in your RunInput or BaseModel class to a JSON schema and uses that schema in the UI for client-side validation. However, custom validation requires running Python logic defined in your RunInput class. Because of this, validation happens after the flow resumes, so you'll want to handle it explicitly in your flow. Continue reading for an example best practice.

      The following is an example RunInput class that uses a custom field validator:

from typing import Literal\n\nimport pydantic\nfrom prefect.input import RunInput\n\n\nclass ShirtOrder(RunInput):\n    size: Literal[\"small\", \"medium\", \"large\", \"xlarge\"]\n    color: Literal[\"red\", \"green\", \"black\"]\n\n    @pydantic.validator(\"color\")\n    def validate_age(cls, value, values, **kwargs):\n        if value == \"green\" and values[\"size\"] == \"small\":\n            raise ValueError(\n                \"Green is only in-stock for medium, large, and XL sizes.\"\n            )\n\n        return value\n

      In the example, we use Pydantic's validator decorator to define a custom validation method for the color field. We can use it in a flow like this:

from typing import Literal\n\nimport pydantic\nfrom prefect import flow, pause_flow_run\nfrom prefect.input import RunInput\n\n\nclass ShirtOrder(RunInput):\n    size: Literal[\"small\", \"medium\", \"large\", \"xlarge\"]\n    color: Literal[\"red\", \"green\", \"black\"]\n\n    @pydantic.validator(\"color\")\n    def validate_age(cls, value, values, **kwargs):\n        if value == \"green\" and values[\"size\"] == \"small\":\n            raise ValueError(\n                \"Green is only in-stock for medium, large, and XL sizes.\"\n            )\n\n        return value\n\n\n@flow\ndef get_shirt_order():\n    shirt_order = pause_flow_run(wait_for_input=ShirtOrder)\n

      If a user chooses any size and color combination other than small and green, the flow run will resume successfully. However, if the user chooses size small and color green, the flow run will resume, and pause_flow_run will raise a ValidationError exception. This will cause the flow run to fail and log the error.

      However, what if you don't want the flow run to fail? One way to handle this case is to use a while loop and pause again if the ValidationError exception is raised:

      from typing import Literal\n\nimport pydantic\nfrom prefect import flow, get_run_logger, pause_flow_run\nfrom prefect.input import RunInput\n\n\nclass ShirtOrder(RunInput):\n    size: Literal[\"small\", \"medium\", \"large\", \"xlarge\"]\n    color: Literal[\"red\", \"green\", \"black\"]\n\n    @pydantic.validator(\"color\")\n    def validate_age(cls, value, values, **kwargs):\n        if value == \"green\" and values[\"size\"] == \"small\":\n            raise ValueError(\n                \"Green is only in-stock for medium, large, and XL sizes.\"\n            )\n\n        return value\n\n\n@flow\ndef get_shirt_order():\n    logger = get_run_logger()\n    shirt_order = None\n\n    while shirt_order is None:\n        try:\n            shirt_order = pause_flow_run(wait_for_input=ShirtOrder)\n        except pydantic.ValidationError as exc:\n            logger.error(f\"Invalid size and color combination: {exc}\")\n\n    logger.info(\n        f\"Shirt order: {shirt_order.size}, {shirt_order.color}\"\n    )\n

      This code will cause the flow run to continually pause until the user enters a valid size and color combination.

      As an additional step, you may want to use an automation or notification to alert the user to the error.

      ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#sending-and-receiving-input-at-runtime","title":"Sending and receiving input at runtime","text":"

      Use the send_input and receive_input functions to send input to a flow or receive input from a flow at runtime. You don't need to pause or suspend the flow to send or receive input.

      Why would you send or receive input without pausing or suspending?

      You might want to send or receive input without pausing or suspending in scenarios where the flow run is designed to handle real-time data. For instance, in a live monitoring system, you might need to update certain parameters based on the incoming data without interrupting the flow. Another use is having a long-running flow that continually responds to runtime input with low latency. For example, if you're building a chatbot, you could have a flow that starts a GPT Assistant and manages a conversation thread.

      The most important parameter to the send_input and receive_input functions is run_type, which should be one of the following:

      • A built-in type such as int or str
      • A pydantic.BaseModel class
      • A prefect.input.RunInput class

      When to use a BaseModel or RunInput instead of a built-in type

      Most built-in types and collections of built-in types should work with send_input and receive_input, but there is a caveat with nested collection types, such as lists of tuples, e.g. List[Tuple[str, float]]. In this case, validation may happen after your flow receives the data, so calling receive_input may raise a ValidationError. You can catch this exception, but also consider placing the field in an explicit BaseModel or RunInput class so that your flow only receives exact type matches.
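
      For example, here's a minimal sketch of that approach. The PriceTable class and its prices field are hypothetical names used only for illustration; the point is that the nested collection lives inside a RunInput subclass, so the receiving flow only gets values that already validate as that exact type.

      from typing import List, Tuple\n\nfrom prefect import flow\nfrom prefect.input import RunInput\nfrom prefect.input.run_input import receive_input\n\n\n# Hypothetical input model wrapping the nested collection type\nclass PriceTable(RunInput):\n    prices: List[Tuple[str, float]]\n\n\n@flow\nasync def pricing_flow():\n    async for table in receive_input(PriceTable, timeout=None):\n        # table.prices has already been validated as List[Tuple[str, float]]\n        print(table.prices)\n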

      Let's look at some examples! We'll check out receive_input first, followed by send_input, and then we'll see the two functions working together.

      ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#receiving-input","title":"Receiving input","text":"

      The following flow uses receive_input to continually receive names and print a personalized greeting for each name it receives:

      from prefect import flow\nfrom prefect.input.run_input import receive_input\n\n\n@flow\nasync def greeter_flow():\n    async for name_input in receive_input(str, timeout=None):\n        # Prints \"Hello, andrew!\" if another flow sent \"andrew\"\n        print(f\"Hello, {name_input}!\")\n

      When you pass a type such as str into receive_input, Prefect creates a RunInput class to manage your input automatically. When a flow sends input of this type, Prefect uses the RunInput class to validate the input. If the validation succeeds, your flow receives the input in the type you specified. In this example, if the flow received a valid string as input, the variable name_input would contain the string value.

      If, instead, you pass a BaseModel, Prefect upgrades your BaseModel to a RunInput class, and the variable your flow sees \u2014 in this case, name_input \u2014 is a RunInput instance that behaves like a BaseModel. Of course, if you pass in a RunInput class, no upgrade is needed, and you'll get a RunInput instance.
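
      As a brief sketch, here's the same greeter pattern with a plain pydantic BaseModel. The NameInput model and its name field are made up for illustration; Prefect upgrades the model to a RunInput class behind the scenes.

      from prefect import flow\nfrom prefect.input.run_input import receive_input\nfrom pydantic import BaseModel\n\n\n# Hypothetical model: Prefect upgrades this BaseModel to a RunInput class\nclass NameInput(BaseModel):\n    name: str\n\n\n@flow\nasync def greeter_flow():\n    async for name_input in receive_input(NameInput, timeout=None):\n        # The received object keeps the model's fields\n        print(f\"Hello, {name_input.name}!\")\n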

      If you prefer to keep things simple and pass types such as str into receive_input, you can do so. If you need access to the generated RunInput that contains the received value, pass with_metadata=True to receive_input:

      from prefect import flow\nfrom prefect.input.run_input import receive_input\n\n\n@flow\nasync def greeter_flow():\n    async for name_input in receive_input(\n        str,\n        timeout=None,\n        with_metadata=True\n    ):\n        # Input will always be in the field \"value\" on this object.\n        print(f\"Hello, {name_input.value}!\")\n

      Why would you need to use with_metadata=True?

      The primary reasons to access the RunInput object for a received input are to respond to the sender with the RunInput.respond() function or to access the input's unique key. Later in this guide, we'll discuss how and why you might use these features.

      Notice that we are now printing name_input.value. When Prefect generates a RunInput for you from a built-in type, the RunInput class has a single field, value, that uses a type annotation matching the type you specified. So if you call receive_input like this: receive_input(str, with_metadata=True), that's equivalent to manually creating the following RunInput class and receive_input call:

      from prefect import flow\nfrom prefect.input.run_input import RunInput\n\nclass GreeterInput(RunInput):\n    value: str\n\n@flow\nasync def greeter_flow():\n    async for name_input in receive_input(GreeterInput, timeout=None):\n        print(f\"Hello, {name_input.value}!\")\n

      The type used in receive_input and send_input must match

      For a flow to receive input, the sender must use the same type that the receiver is receiving. This means that if the receiver is receiving GreeterInput, the sender must send GreeterInput. If the receiver is receiving GreeterInput and the sender sends str input that Prefect automatically upgrades to a RunInput class, the types won't match, so the receiving flow run won't receive the input. However, the input will be waiting if the flow ever calls receive_input(str)!
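
      Here's a hedged sketch of that rule from the sender's side. The GreeterInput class and target_flow_run_id variable are assumptions used only for illustration.

      from prefect.input import RunInput\nfrom prefect.input.run_input import send_input\n\n\nclass GreeterInput(RunInput):\n    value: str\n\n\nasync def notify(target_flow_run_id):\n    # Seen by a flow calling receive_input(GreeterInput, ...)\n    await send_input(\n        GreeterInput(value=\"andrew\"), flow_run_id=target_flow_run_id\n    )\n\n    # Seen only by a flow calling receive_input(str, ...), even though the\n    # payload looks similar once Prefect wraps the str in a generated RunInput\n    await send_input(\"andrew\", flow_run_id=target_flow_run_id)\n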

      ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#keeping-track-of-inputs-youve-already-seen","title":"Keeping track of inputs you've already seen","text":"

      By default, each time you call receive_input, you get an iterator that iterates over all known inputs to a specific flow run, starting with the first received. The iterator will keep track of your current position as you iterate over it, or you can call next() to explicitly get the next input. If you're using the iterator in a loop, you should probably assign it to a variable:

      from prefect import flow, get_client\nfrom prefect.deployments.deployments import run_deployment\nfrom prefect.input.run_input import receive_input, send_input\n\nEXIT_SIGNAL = \"__EXIT__\"\n\n\n@flow\nasync def sender():\n    greeter_flow_run = await run_deployment(\n        \"greeter/send-receive\", timeout=0, as_subflow=False\n    )\n    client = get_client()\n\n    # Assigning the `receive_input` iterator to a variable\n    # outside of the `while True` loop allows us to continue\n    # iterating over inputs in subsequent passes through the\n    # while loop without losing our position.\n    receiver = receive_input(\n        str,\n        with_metadata=True,\n        timeout=None,\n        poll_interval=0.1\n    )\n\n    while True:\n        name = input(\"What is your name? \")\n        if not name:\n            continue\n\n        if name == \"q\" or name == \"quit\":\n            await send_input(\n                EXIT_SIGNAL,\n                flow_run_id=greeter_flow_run.id\n            )\n            print(\"Goodbye!\")\n            break\n\n        await send_input(name, flow_run_id=greeter_flow_run.id)\n\n        # Saving the iterator outside of the while loop and\n        # calling next() on each iteration of the loop ensures\n        # that we're always getting the newest greeting. If we\n        # had instead called `receive_input` here, we would\n        # always get the _first_ greeting this flow received,\n        # print it, and then ask for a new name.\n        greeting = await receiver.next()\n        print(greeting)\n

      So, an iterator helps to keep track of the inputs your flow has already received. But what if you want your flow to suspend and then resume later, picking up where it left off? In that case, you will need to save the keys of the inputs you've seen so that the flow can read them back out when it resumes. You might use a Block, such as a JSONBlock.

      The following flow receives input for 30 seconds then suspends itself, which exits the flow and tears down infrastructure:

      from prefect import flow, get_run_logger, suspend_flow_run\nfrom prefect.blocks.system import JSON\nfrom prefect.context import get_run_context\nfrom prefect.input.run_input import receive_input\n\n\nEXIT_SIGNAL = \"__EXIT__\"\n\n\n@flow\nasync def greeter():\n    logger = get_run_logger()\n    run_context = get_run_context()\n    assert run_context.flow_run, \"Could not see my flow run ID\"\n\n    block_name = f\"{run_context.flow_run.id}-seen-ids\"\n\n    try:\n        seen_keys_block = await JSON.load(block_name)\n    except ValueError:\n        seen_keys_block = JSON(\n            value=[],\n        )\n\n    try:\n        async for name_input in receive_input(\n            str,\n            with_metadata=True,\n            poll_interval=0.1,\n            timeout=30,\n            exclude_keys=seen_keys_block.value\n        ):\n            if name_input.value == EXIT_SIGNAL:\n                print(\"Goodbye!\")\n                return\n            await name_input.respond(f\"Hello, {name_input.value}!\")\n\n            seen_keys_block.value.append(name_input.metadata.key)\n            await seen_keys_block.save(\n                name=block_name,\n                overwrite=True\n            )\n    except TimeoutError:\n        logger.info(\"Suspending greeter after 30 seconds of idle time\")\n        await suspend_flow_run(timeout=10000)\n

      As this flow processes name input, it adds the key of the flow run input to the seen_keys_block. When the flow later suspends and then resumes, it reads the keys it has already seen out of the JSON Block and passes them as the exclude_keys parameter to receive_input.

      ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#responding-to-the-inputs-sender","title":"Responding to the input's sender","text":"

      When your flow receives input from another flow, Prefect knows the sending flow run ID, so the receiving flow can respond by calling the respond method on the RunInput instance the flow received. There are a couple of requirements:

      1. You will need to pass in a BaseModel or RunInput, or use with_metadata=True
      2. The flow you are responding to must receive the same type of input you send in order to see it.

      The respond method is equivalent to calling send_input(..., flow_run_id=sending_flow_run.id), but with respond, your flow doesn't need to know the sending flow run's ID.

      Now that we know about respond, let's make our greeter_flow respond to name inputs instead of printing them:

      from prefect import flow\nfrom prefect.input.run_input import receive_input\n\n\n@flow\nasync def greeter():\n    async for name_input in receive_input(\n        str,\n        with_metadata=True,\n        timeout=None\n    ):\n        await name_input.respond(f\"Hello, {name_input.value}!\")\n

      Cool! There's one problem left: this flow runs forever! We need a way to signal that it should exit. Let's keep things simple and teach it to look for a special string:

      from prefect import flow\nfrom prefect.input.run_input import receive_input\n\n\n\nEXIT_SIGNAL = \"__EXIT__\"\n\n\n@flow\nasync def greeter():\n    async for name_input in receive_input(\n        str,\n        with_metadata=True,\n        poll_interval=0.1,\n        timeout=None\n    ):\n        if name_input.value == EXIT_SIGNAL:\n            print(\"Goodbye!\")\n            return\n        await name_input.respond(f\"Hello, {name_input.value}!\")\n

      With a greeter flow in place, we're ready to create the flow that sends greeter names!

      ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#sending-input","title":"Sending input","text":"

      You can send input to a flow with the send_input function. This works similarly to receive_input and, like that function, accepts the same run_input argument, which can be a built-in type such as str, or else a BaseModel or RunInput subclass.

      When can you send input to a flow run?

      You can send input to a flow run as soon as you have the flow run's ID. The flow does not have to be receiving input for you to send it. If you send input before the flow is receiving, the flow will see your input when it calls receive_input (as long as the types in the send_input and receive_input calls match!).

      Next, we'll create a sender flow that starts a greeter flow run and then enters a loop, continuously getting input from the terminal and sending it to the greeter flow:

      from prefect import flow, get_client\nfrom prefect.deployments.deployments import run_deployment\nfrom prefect.input.run_input import receive_input, send_input\n\nEXIT_SIGNAL = \"__EXIT__\"\n\n\n@flow\nasync def sender():\n    greeter_flow_run = await run_deployment(\n        \"greeter/send-receive\", timeout=0, as_subflow=False\n    )\n    receiver = receive_input(str, timeout=None, poll_interval=0.1)\n    client = get_client()\n\n    while True:\n        flow_run = await client.read_flow_run(greeter_flow_run.id)\n\n        if not flow_run.state or not flow_run.state.is_running():\n            continue\n\n        name = input(\"What is your name? \")\n        if not name:\n            continue\n\n        if name == \"q\" or name == \"quit\":\n            await send_input(\n                EXIT_SIGNAL,\n                flow_run_id=greeter_flow_run.id\n            )\n            print(\"Goodbye!\")\n            break\n\n        await send_input(name, flow_run_id=greeter_flow_run.id)\n        greeting = await receiver.next()\n        print(greeting)\n

      There's more going on here than in greeter, so let's take a closer look at the pieces.

      First, we use run_deployment to start a greeter flow run. This means we must have a worker or flow.serve() running in a separate process. That process will begin running greeter while sender continues to execute. Calling run_deployment(..., timeout=0) ensures that sender won't wait for the greeter flow run to complete, because greeter runs a loop and will only exit when we send EXIT_SIGNAL.

      Next, we capture the iterator returned by receive_input as receiver. This flow works by entering a loop, and on each iteration of the loop, the flow asks for terminal input, sends that to the greeter flow, and then runs receiver.next() to wait until it receives the response from greeter.

      Next, we let the terminal user who ran this flow exit by entering the string q or quit. When that happens, we send the greeter flow an exit signal so it will shut down too.

      Finally, we send the new name to greeter. We know that greeter is going to send back a greeting as a string, so we immediately wait for new string input. When we receive the greeting, we print it and continue the loop that gets terminal input.

      ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/creating-interactive-workflows/#seeing-a-complete-example","title":"Seeing a complete example","text":"

      Finally, let's see a complete example of using send_input and receive_input. Here is what the greeter and sender flows look like together:

      import asyncio\nimport sys\nfrom prefect import flow, get_client\nfrom prefect.blocks.system import JSON\nfrom prefect.context import get_run_context\nfrom prefect.deployments.deployments import run_deployment\nfrom prefect.input.run_input import receive_input, send_input\n\n\nEXIT_SIGNAL = \"__EXIT__\"\n\n\n@flow\nasync def greeter():\n    run_context = get_run_context()\n    assert run_context.flow_run, \"Could not see my flow run ID\"\n\n    block_name = f\"{run_context.flow_run.id}-seen-ids\"\n\n    try:\n        seen_keys_block = await JSON.load(block_name)\n    except ValueError:\n        seen_keys_block = JSON(\n            value=[],\n        )\n\n    async for name_input in receive_input(\n        str,\n        with_metadata=True,\n        poll_interval=0.1,\n        timeout=None\n    ):\n        if name_input.value == EXIT_SIGNAL:\n            print(\"Goodbye!\")\n            return\n        await name_input.respond(f\"Hello, {name_input.value}!\")\n\n        seen_keys_block.value.append(name_input.metadata.key)\n        await seen_keys_block.save(\n            name=block_name,\n            overwrite=True\n        )\n\n\n@flow\nasync def sender():\n    greeter_flow_run = await run_deployment(\n        \"greeter/send-receive\", timeout=0, as_subflow=False\n    )\n    receiver = receive_input(str, timeout=None, poll_interval=0.1)\n    client = get_client()\n\n    while True:\n        flow_run = await client.read_flow_run(greeter_flow_run.id)\n\n        if not flow_run.state or not flow_run.state.is_running():\n            continue\n\n        name = input(\"What is your name? \")\n        if not name:\n            continue\n\n        if name == \"q\" or name == \"quit\":\n            await send_input(\n                EXIT_SIGNAL,\n                flow_run_id=greeter_flow_run.id\n            )\n            print(\"Goodbye!\")\n            break\n\n        await send_input(name, flow_run_id=greeter_flow_run.id)\n        greeting = await receiver.next()\n        print(greeting)\n\n\nif __name__ == \"__main__\":\n    if sys.argv[1] == \"greeter\":\n        asyncio.run(greeter.serve(name=\"send-receive\"))\n    elif sys.argv[1] == \"sender\":\n        asyncio.run(sender())\n

      To run the example, you'll need a Python environment with Prefect installed, pointed at either an open-source Prefect server instance or Prefect Cloud.

      With your environment set up, start a flow runner in one terminal with the following command:

      python my_file_name greeter\n

      For example, with Prefect Cloud, you should see output like this:

      \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Your flow 'greeter' is being served and polling for scheduled runs!                              \u2502\n\u2502                                                                                                  \u2502\n\u2502 To trigger a run for this flow, use the following command:                                       \u2502\n\u2502                                                                                                  \u2502\n\u2502         $ prefect deployment run 'greeter/send-receive'                                          \u2502\n\u2502                                                                                                  \u2502\n\u2502 You can also run your flow via the Prefect UI:                                                   \u2502\n\u2502 https://app.prefect.cloud/account/...(a URL for your account)                                    \u2502\n\u2502                                                                                                  \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n

      Then start the sender process in another terminal:

      python my_file_name sender\n

      You should see output like this:

      11:38:41.800 | INFO    | prefect.engine - Created flow run 'gregarious-owl' for flow 'sender'\n11:38:41.802 | INFO    | Flow run 'gregarious-owl' - View at https://app.prefect.cloud/account/...\nWhat is your name?\n

      Type a name and press the enter key to see a greeting, and you'll see sending and receiving in action:

      What is your name? andrew\nHello, andrew!\n
      ","tags":["flow run","pause","suspend","input","human-in-the-loop workflows","interactive workflows"],"boost":2},{"location":"guides/dask-ray-task-runners/","title":"Dask and Ray Task Runners","text":"

      Task runners provide an execution environment for tasks. In a flow decorator, you can specify a task runner to run the tasks called in that flow.

      The default task runner is the ConcurrentTaskRunner.

      Use .submit to run your tasks asynchronously

      To run tasks asynchronously, use the .submit method when you call them. If you call a task as you would normally in Python code, it will run synchronously, even if you are calling the task within a flow that uses the ConcurrentTaskRunner, DaskTaskRunner, or RayTaskRunner.
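
      Here's a minimal sketch of the difference; the double task is made up for illustration.

      from prefect import flow, task\n\n\n@task\ndef double(x):\n    return x * 2\n\n\n@flow\ndef demo():\n    result = double(2)         # called directly: runs synchronously, returns 4\n    future = double.submit(3)  # submitted: handled by the flow's task runner\n    return result, future.result()\n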

      Many real-world data workflows benefit from true parallel, distributed task execution. For these use cases, the following Prefect-developed task runners for parallel task execution may be installed as Prefect Integrations.

      • DaskTaskRunner runs tasks requiring parallel execution using dask.distributed.
      • RayTaskRunner runs tasks requiring parallel execution using Ray.

      These task runners can spin up a local Dask cluster or Ray instance on the fly, or let you connect with a Dask or Ray environment you've set up separately. Then you can take advantage of massively parallel computing environments.

      Use Dask or Ray in your flows to choose the execution environment that fits your particular needs.

      To show you how they work, let's start small.

      Remote storage

      We recommend configuring remote file storage for task execution with DaskTaskRunner or RayTaskRunner. This ensures tasks executing in Dask or Ray have access to task result storage, particularly when accessing a Dask or Ray instance outside of your execution environment.
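
      As a rough sketch, you might point results at remote storage like this. It assumes you've already created an S3 filesystem block named my-results; the block slug and the double task are illustrative only.

      from prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n\n@task(persist_result=True)\ndef double(x):\n    return x * 2\n\n\n@flow(\n    task_runner=DaskTaskRunner(),\n    # Assumes an existing S3 filesystem block named \"my-results\"\n    result_storage=\"s3/my-results\",\n)\ndef my_flow():\n    return double.submit(21).result()\n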

      ","tags":["tasks","task runners","flow configuration","parallel execution","distributed execution","Dask","Ray"],"boost":2},{"location":"guides/dask-ray-task-runners/#configuring-a-task-runner","title":"Configuring a task runner","text":"

      You may have seen this briefly in a previous tutorial, but let's look a bit more closely at how you can configure a specific task runner for a flow.

      Let's start with the SequentialTaskRunner. This task runner runs all tasks synchronously and may be useful when used as a debugging tool in conjunction with async code.

      Let's start with this simple flow. We import the SequentialTaskRunner, specify a task_runner on the flow, and call the tasks with .submit().

      from prefect import flow, task\nfrom prefect.task_runners import SequentialTaskRunner\n\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n    print(f\"goodbye {name}\")\n\n@flow(task_runner=SequentialTaskRunner())\ndef greetings(names):\n    for name in names:\n        say_hello.submit(name)\n        say_goodbye.submit(name)\n\ngreetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n

      Save this as sequential_flow.py and run it in a terminal. You'll see output similar to the following:

      $ python sequential_flow.py\n16:51:17.967 | INFO    | prefect.engine - Created flow run 'humongous-mink' for flow 'greetings'\n16:51:17.967 | INFO    | Flow run 'humongous-mink' - Starting 'SequentialTaskRunner'; submitted tasks will be run sequentially...\n16:51:18.038 | INFO    | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-0' for task 'say_hello'\n16:51:18.038 | INFO    | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-0' immediately...\nhello arthur\n16:51:18.060 | INFO    | Task run 'say_hello-811087cd-0' - Finished in state Completed()\n16:51:18.107 | INFO    | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-0' for task 'say_goodbye'\n16:51:18.107 | INFO    | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-0' immediately...\ngoodbye arthur\n16:51:18.123 | INFO    | Task run 'say_goodbye-261e56a8-0' - Finished in state Completed()\n16:51:18.134 | INFO    | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-1' for task 'say_hello'\n16:51:18.134 | INFO    | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-1' immediately...\nhello trillian\n16:51:18.150 | INFO    | Task run 'say_hello-811087cd-1' - Finished in state Completed()\n16:51:18.159 | INFO    | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-1' for task 'say_goodbye'\n16:51:18.159 | INFO    | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-1' immediately...\ngoodbye trillian\n16:51:18.181 | INFO    | Task run 'say_goodbye-261e56a8-1' - Finished in state Completed()\n16:51:18.190 | INFO    | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-2' for task 'say_hello'\n16:51:18.190 | INFO    | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-2' immediately...\nhello ford\n16:51:18.210 | INFO    | Task run 'say_hello-811087cd-2' - Finished in state Completed()\n16:51:18.219 | INFO    | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-2' for task 'say_goodbye'\n16:51:18.219 | INFO    | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-2' immediately...\ngoodbye ford\n16:51:18.237 | INFO    | Task run 'say_goodbye-261e56a8-2' - Finished in state Completed()\n16:51:18.246 | INFO    | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-3' for task 'say_hello'\n16:51:18.246 | INFO    | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-3' immediately...\nhello marvin\n16:51:18.264 | INFO    | Task run 'say_hello-811087cd-3' - Finished in state Completed()\n16:51:18.273 | INFO    | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-3' for task 'say_goodbye'\n16:51:18.273 | INFO    | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-3' immediately...\ngoodbye marvin\n16:51:18.290 | INFO    | Task run 'say_goodbye-261e56a8-3' - Finished in state Completed()\n16:51:18.321 | INFO    | Flow run 'humongous-mink' - Finished in state Completed('All states completed.')\n

      If we remove the log messages and look only at the printed output of the tasks, you can see that they're executed in sequential order:

      $ python sequential_flow.py\nhello arthur\ngoodbye arthur\nhello trillian\ngoodbye trillian\nhello ford\ngoodbye ford\nhello marvin\ngoodbye marvin\n
      ","tags":["tasks","task runners","flow configuration","parallel execution","distributed execution","Dask","Ray"],"boost":2},{"location":"guides/dask-ray-task-runners/#running-parallel-tasks-with-dask","title":"Running parallel tasks with Dask","text":"

      You could argue that this simple flow gains nothing from parallel execution, but let's roll with it so you can see just how simple it is to take advantage of the DaskTaskRunner.

      To configure your flow to use the DaskTaskRunner:

      1. Make sure the prefect-dask collection is installed by running pip install prefect-dask.
      2. In your flow code, import DaskTaskRunner from prefect_dask.task_runners.
      3. Assign it as the task runner when the flow is defined using the task_runner=DaskTaskRunner argument.
      4. Use the .submit method when calling functions.

      This is the same flow as above, with a few minor changes to use DaskTaskRunner where we previously configured SequentialTaskRunner. Install prefect-dask, make these changes, and save the updated code as dask_flow.py.

      from prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n    print(f\"goodbye {name}\")\n\n@flow(task_runner=DaskTaskRunner())\ndef greetings(names):\n    for name in names:\n        say_hello.submit(name)\n        say_goodbye.submit(name)\n\nif __name__ == \"__main__\":\n    greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n

      Note that, because you're using DaskTaskRunner in a script, you must use if __name__ == \"__main__\": or you'll see warnings and errors.

      Now run dask_flow.py. If you get a warning about accepting incoming network connections, that's okay - everything is local in this example.

      $ python dask_flow.py\n19:29:03.798 | INFO    | prefect.engine - Created flow run 'fine-bison' for flow 'greetings'\n\n19:29:03.798 | INFO    | Flow run 'fine-bison' - Using task runner 'DaskTaskRunner'\n\n19:29:04.080 | INFO    | prefect.task_runner.dask - Creating a new Dask cluster with `distributed.deploy.local.LocalCluster`\n16:54:18.465 | INFO    | prefect.engine - Created flow run 'radical-finch' for flow 'greetings'\n16:54:18.465 | INFO    | Flow run 'radical-finch' - Starting 'DaskTaskRunner'; submitted tasks will be run concurrently...\n16:54:18.465 | INFO    | prefect.task_runner.dask - Creating a new Dask cluster with `distributed.deploy.local.LocalCluster`\n16:54:19.811 | INFO    | prefect.task_runner.dask - The Dask dashboard is available at <http://127.0.0.1:8787/status>\n16:54:19.881 | INFO    | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-0' for task 'say_hello'\n16:54:20.364 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-0' for execution.\n16:54:20.379 | INFO    | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-0' for task 'say_goodbye'\n16:54:20.386 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-0' for execution.\n16:54:20.397 | INFO    | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-1' for task 'say_hello'\n16:54:20.401 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-1' for execution.\n16:54:20.417 | INFO    | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-1' for task 'say_goodbye'\n16:54:20.423 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-1' for execution.\n16:54:20.443 | INFO    | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-2' for task 'say_hello'\n16:54:20.449 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-2' for execution.\n16:54:20.462 | INFO    | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-2' for task 'say_goodbye'\n16:54:20.474 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-2' for execution.\n16:54:20.500 | INFO    | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-3' for task 'say_hello'\n16:54:20.511 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-3' for execution.\n16:54:20.544 | INFO    | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-3' for task 'say_goodbye'\n16:54:20.555 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-3' for execution.\nhello arthur\ngoodbye ford\ngoodbye arthur\nhello ford\ngoodbye marvin\ngoodbye trillian\nhello trillian\nhello marvin\n

      DaskTaskRunner automatically creates a local Dask cluster, then starts executing all of the tasks in parallel. The results do not return in the same order as the sequential code above.
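
      If you already have a Dask cluster running, or want to tune the temporary one, you can configure the task runner directly. In this sketch, the scheduler address and worker counts are placeholders.

      from prefect_dask.task_runners import DaskTaskRunner\n\n# Connect to an existing Dask scheduler (placeholder address)\nexisting_cluster = DaskTaskRunner(address=\"tcp://192.0.2.10:8786\")\n\n# Or tune the temporary local cluster that DaskTaskRunner creates\nlocal_cluster = DaskTaskRunner(\n    cluster_kwargs={\"n_workers\": 4, \"threads_per_worker\": 2}\n)\n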

      Notice what happens if you do not use the submit method when calling tasks:

      from prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n\n@task\ndef say_goodbye(name):\n    print(f\"goodbye {name}\")\n\n\n@flow(task_runner=DaskTaskRunner())\ndef greetings(names):\n    for name in names:\n        say_hello(name)\n        say_goodbye(name)\n\n\nif __name__ == \"__main__\":\n    greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n
      $ python dask_flow.py\n\n16:57:34.534 | INFO    | prefect.engine - Created flow run 'papaya-honeybee' for flow 'greetings'\n16:57:34.534 | INFO    | Flow run 'papaya-honeybee' - Starting 'DaskTaskRunner'; submitted tasks will be run concurrently...\n16:57:34.535 | INFO    | prefect.task_runner.dask - Creating a new Dask cluster with `distributed.deploy.local.LocalCluster`\n16:57:35.715 | INFO    | prefect.task_runner.dask - The Dask dashboard is available at <http://127.0.0.1:8787/status>\n16:57:35.787 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-0' for task 'say_hello'\n16:57:35.788 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-0' immediately...\nhello arthur\n16:57:35.810 | INFO    | Task run 'say_hello-811087cd-0' - Finished in state Completed()\n16:57:35.820 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-0' for task 'say_goodbye'\n16:57:35.820 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-0' immediately...\ngoodbye arthur\n16:57:35.840 | INFO    | Task run 'say_goodbye-261e56a8-0' - Finished in state Completed()\n16:57:35.849 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-1' for task 'say_hello'\n16:57:35.849 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-1' immediately...\nhello trillian\n16:57:35.869 | INFO    | Task run 'say_hello-811087cd-1' - Finished in state Completed()\n16:57:35.878 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-1' for task 'say_goodbye'\n16:57:35.878 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-1' immediately...\ngoodbye trillian\n16:57:35.894 | INFO    | Task run 'say_goodbye-261e56a8-1' - Finished in state Completed()\n16:57:35.907 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-2' for task 'say_hello'\n16:57:35.907 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-2' immediately...\nhello ford\n16:57:35.924 | INFO    | Task run 'say_hello-811087cd-2' - Finished in state Completed()\n16:57:35.933 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-2' for task 'say_goodbye'\n16:57:35.933 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-2' immediately...\ngoodbye ford\n16:57:35.951 | INFO    | Task run 'say_goodbye-261e56a8-2' - Finished in state Completed()\n16:57:35.959 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-3' for task 'say_hello'\n16:57:35.959 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-3' immediately...\nhello marvin\n16:57:35.976 | INFO    | Task run 'say_hello-811087cd-3' - Finished in state Completed()\n16:57:35.985 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-3' for task 'say_goodbye'\n16:57:35.985 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-3' immediately...\ngoodbye marvin\n16:57:36.004 | INFO    | Task run 'say_goodbye-261e56a8-3' - Finished in state Completed()\n16:57:36.289 | INFO    | Flow run 'papaya-honeybee' - Finished in state Completed('All states completed.')\n

      The tasks are not submitted to the DaskTaskRunner and are run sequentially.

      ","tags":["tasks","task runners","flow configuration","parallel execution","distributed execution","Dask","Ray"],"boost":2},{"location":"guides/dask-ray-task-runners/#running-parallel-tasks-with-ray","title":"Running parallel tasks with Ray","text":"

      To demonstrate the ability to flexibly apply the task runner appropriate for your workflow, use the same flow as above, with a few minor changes to use the RayTaskRunner where we previously configured DaskTaskRunner.

      To configure your flow to use the RayTaskRunner:

      1. Make sure the prefect-ray collection is installed by running pip install prefect-ray.
      2. In your flow code, import RayTaskRunner from prefect_ray.task_runners.
      3. Assign it as the task runner when the flow is defined using the task_runner=RayTaskRunner argument.

      Ray environment limitations

      While we're excited about parallel task execution via Ray in Prefect, there are some inherent limitations with Ray you should be aware of:

      • Support for Python 3.11 is experimental.
      • Ray does not support non-x86/64 architectures such as ARM/M1 processors with installation from pip alone, so Ray will be skipped during installation of Prefect on those platforms. It is possible to manually install the blocking component with conda. See the Ray documentation for instructions.
      • Ray's Windows support is currently in beta.

      See the Ray installation documentation for further compatibility information.

      Save this code in ray_flow.py.

      from prefect import flow, task\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n    print(f\"goodbye {name}\")\n\n@flow(task_runner=RayTaskRunner())\ndef greetings(names):\n    for name in names:\n        say_hello.submit(name)\n        say_goodbye.submit(name)\n\nif __name__ == \"__main__\":\n    greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n

      Now run ray_flow.py. RayTaskRunner automatically creates a local Ray instance, then immediately starts executing all of the tasks in parallel. If you have an existing Ray instance, you can provide the address as a parameter to run tasks in that instance. See Running tasks on Ray for details.
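
      For example, here's a sketch of connecting to an existing Ray cluster; the address is a placeholder for your Ray head node.

      from prefect import flow\nfrom prefect_ray.task_runners import RayTaskRunner\n\n\n@flow(task_runner=RayTaskRunner(address=\"ray://192.0.2.10:10001\"))\ndef process(names):\n    ...\n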

      ","tags":["tasks","task runners","flow configuration","parallel execution","distributed execution","Dask","Ray"],"boost":2},{"location":"guides/dask-ray-task-runners/#using-multiple-task-runners","title":"Using multiple task runners","text":"

      Many workflows include a variety of tasks, and not all of them benefit from parallel execution. You'll most likely want to use the Dask or Ray task runners and spin up their respective resources only for those tasks that need them.

      Because task runners are specified on flows, you can assign different task runners to tasks by using subflows to organize those tasks.

      This example uses the same tasks as the previous examples, but on the parent flow greetings() we use the default ConcurrentTaskRunner. Then we call a ray_greetings() subflow that uses the RayTaskRunner to execute the same tasks in a Ray instance.

      from prefect import flow, task\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n    print(f\"goodbye {name}\")\n\n@flow(task_runner=RayTaskRunner())\ndef ray_greetings(names):\n    for name in names:\n        say_hello.submit(name)\n        say_goodbye.submit(name)\n\n@flow()\ndef greetings(names):\n    for name in names:\n        say_hello.submit(name)\n        say_goodbye.submit(name)\n    ray_greetings(names)\n\nif __name__ == \"__main__\":\n    greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n

      If you save this as ray_subflow.py and run it, you'll see that the flow greetings runs as you'd expect for a concurrent flow, then flow ray-greetings spins up a Ray instance to run the tasks again.

      ","tags":["tasks","task runners","flow configuration","parallel execution","distributed execution","Dask","Ray"],"boost":2},{"location":"guides/docker/","title":"Running Flows with Docker","text":"

      In the Deployments tutorial, we looked at serving a flow that enables scheduling or creating flow runs via the Prefect API.

      With our Python script in hand, we can build a Docker image for our script, allowing us to serve our flow in various remote environments. We'll use Kubernetes in this guide, but you can use any Docker-compatible infrastructure.

      In this guide we'll:

      • Write a Dockerfile to build an image that stores our Prefect flow code.
      • Build a Docker image for our flow.
      • Deploy and run our Docker image on a Kubernetes cluster.
      • Look at the Prefect-maintained Docker images and discuss options for their use.

      Note that in this guide we'll create a Dockerfile from scratch. Alternatively, Prefect makes it convenient to build a Docker image as part of deployment creation. You can even include environment variables and specify additional Python packages to install at runtime.

      If creating a deployment with a prefect.yaml file, the build step makes it easy to customize your Docker image and push it to the registry of your choice. See an example here.

      Deployment creation with a Python script that includes flow.deploy similarly allows you to customize your Docker image with keyword arguments as shown below.

      ...\n\nif __name__ == \"__main__\":\n    hello_world.deploy(\n        name=\"my-first-deployment\",\n        work_pool_name=\"above-ground\",\n        image='my_registry/hello_world:demo',\n        job_variables={\"env\": { \"EXTRA_PIP_PACKAGES\": \"boto3\" } }\n    )\n
      ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#prerequisites","title":"Prerequisites","text":"

      To complete this guide, you'll need the following:

      • A Python script that defines and serves a flow.
      • We'll use the flow script and deployment from the Deployments tutorial.
      • Access to a running Prefect API server.
      • You can sign up for a forever free Prefect Cloud account or run a Prefect API server locally with prefect server start.
      • Docker Desktop installed on your machine.
      ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#writing-a-dockerfile","title":"Writing a Dockerfile","text":"

      First let's make a clean directory to work from, prefect-docker-guide.

      mkdir prefect-docker-guide\ncd prefect-docker-guide\n

      In this directory, we'll create a sub-directory named flows and put our flow script from the Deployments tutorial in it.

      mkdir flows\ncd flows\ntouch prefect-docker-guide-flow.py\n

      Here's the flow code for reference:

      prefect-docker-guide-flow.py
      import httpx\nfrom prefect import flow\n\n\n@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info.serve(name=\"prefect-docker-guide\")\n

      The next file we'll add to the prefect-docker-guide directory is a requirements.txt. We'll include all dependencies required for our prefect-docker-guide-flow.py script in the Docker image we'll build.

      # ensure you run this line from the top level of the `prefect-docker-guide` directory\ntouch requirements.txt\n

      Here's what we'll put in our requirements.txt file:

      requirements.txt
      prefect>=2.12.0\nhttpx\n

      Next, we'll create a Dockerfile that we'll use to create a Docker image that will also store the flow code.

      touch Dockerfile\n

      We'll add the following content to our Dockerfile:

      Dockerfile
      # We're using the latest version of Prefect with Python 3.10\nFROM prefecthq/prefect:2-python3.10\n\n# Add our requirements.txt file to the image and install dependencies\nCOPY requirements.txt .\nRUN pip install -r requirements.txt --trusted-host pypi.python.org --no-cache-dir\n\n# Add our flow code to the image\nCOPY flows /opt/prefect/flows\n\n# Run our flow script when the container starts\nCMD [\"python\", \"flows/prefect-docker-guide-flow.py\"]\n
      ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#building-a-docker-image","title":"Building a Docker image","text":"

      Now that we have a Dockerfile we can build our image by running:

      docker build -t prefect-docker-guide-image .\n

      We can check that our build worked by running a container from our new image.

      Cloud / Self-hosted

      Our container will need an API URL and an API key to communicate with Prefect Cloud.

      • You can get an API key from the API Keys section of the user settings in the Prefect UI.

      • You can get your API URL by running prefect config view and copying the PREFECT_API_URL value.

      We'll provide both these values to our container by passing them as environment variables with the -e flag.

      docker run -e PREFECT_API_URL=YOUR_PREFECT_API_URL -e PREFECT_API_KEY=YOUR_API_KEY prefect-docker-guide-image\n

      After running the above command, the container should start up and serve the flow within the container!

      Our container will need an API URL and network access to communicate with the Prefect API.

      For this guide, we'll assume the Prefect API is running on the same machine that we'll run our container on and the Prefect API was started with prefect server start. If you're running a different setup, check out the Hosting a Prefect server guide for information on how to connect to your Prefect API instance.

      To ensure that our flow container can communicate with the Prefect API, we'll set our PREFECT_API_URL to http://host.docker.internal:4200/api. If you're running Linux, you'll need to set your PREFECT_API_URL to http://localhost:4200/api and use the --network=\"host\" option instead.

      docker run --network=\"host\" -e PREFECT_API_URL=http://host.docker.internal:4200/api prefect-docker-guide-image\n

      After running the above command, the container should start up and serve the flow within the container!

      ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#deploying-to-a-remote-environment","title":"Deploying to a remote environment","text":"

      Now that we have a Docker image with our flow code embedded, we can deploy it to a remote environment!

      For this guide, we'll simulate a remote environment by using Kubernetes locally with Docker Desktop. You can use the instructions provided by Docker to set up Kubernetes locally.

      ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#creating-a-kubernetes-deployment-manifest","title":"Creating a Kubernetes deployment manifest","text":"

      To ensure the process serving our flow is always running, we'll create a Kubernetes deployment. If our flow's container ever crashes, Kubernetes will automatically restart it, ensuring that we won't miss any scheduled runs.

      First, we'll create a deployment-manifest.yaml file in our prefect-docker-guide directory:

      touch deployment-manifest.yaml\n

      And we'll add the following content to our deployment-manifest.yaml file:

      Cloud / Self-hosted deployment-manifest.yaml
      apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: prefect-docker-guide\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      flow: get-repo-info\n  template:\n    metadata:\n      labels:\n        flow: get-repo-info\n    spec:\n      containers:\n      - name: flow-container\n        image: prefect-docker-guide-image:latest\n        env:\n        - name: PREFECT_API_URL\n          value: YOUR_PREFECT_API_URL\n        - name: PREFECT_API_KEY\n          value: YOUR_API_KEY\n        # Never pull the image because we're using a local image\n        imagePullPolicy: Never\n

      Keep your API key secret

      In the above manifest we are passing in the Prefect API URL and API key as environment variables. This approach is simple, but it is not secure. If you are deploying your flow to a remote cluster, you should use a Kubernetes secret to store your API key.

      deployment-manifest.yaml
      apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: prefect-docker-guide\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      flow: get-repo-info\n  template:\n    metadata:\n      labels:\n        flow: get-repo-info\n    spec:\n      containers:\n      - name: flow-container\n        image: prefect-docker-guide-image:latest\n        env:\n        - name: PREFECT_API_URL\n          value: <http://host.docker.internal:4200/api>\n        # Never pull the image because we're using a local image\n        imagePullPolicy: Never\n

      Linux users

      If you're running Linux, you'll need to set your PREFECT_API_URL to use the IP address of your machine instead of host.docker.internal.

      This manifest defines how our image will run when deployed in our Kubernetes cluster. Note that we will be running a single replica of our flow container. If you want to run multiple replicas of your flow container to keep up with an active schedule, or because your flow is resource-intensive, you can increase the replicas value.

      ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#deploying-our-flow-to-the-cluster","title":"Deploying our flow to the cluster","text":"

      Now that we have a deployment manifest, we can deploy our flow to the cluster by running:

      kubectl apply -f deployment-manifest.yaml\n

      We can monitor the status of our Kubernetes deployment by running:

      kubectl get deployments\n

      Once the deployment has successfully started, we can check the logs of our flow container by running the following:

      kubectl logs -l flow=get-repo-info\n

      Now that we're serving our flow in our cluster, we can trigger a flow run by running:

      prefect deployment run get-repo-info/prefect-docker-guide\n

      If we navigate to the URL provided by the prefect deployment run command, we can follow the flow run via the logs in the Prefect UI!

      ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#prefect-maintained-docker-images","title":"Prefect-maintained Docker images","text":"

      Every release of Prefect results in several new Docker images. These images are all named prefecthq/prefect and their tags identify their differences.

      ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#image-tags","title":"Image tags","text":"

      When a release is published, images are built for all of Prefect's supported Python versions. These images are tagged to identify the combination of Prefect and Python versions contained. Additionally, we have \"convenience\" tags which are updated with each release to facilitate automatic updates.

      For example, when release 2.11.5 is published:

      1. Images with the release packaged are built for each supported Python version (3.8, 3.9, 3.10, 3.11) with both standard Python and Conda.
      2. These images are tagged with the full description, e.g. prefect:2.11.5-python3.10 and prefect:2.11.5-python3.10-conda.
      3. For users that want more specific pins, these images are also tagged with the SHA of the git commit of the release, e.g. sha-88a7ff17a3435ec33c95c0323b8f05d7b9f3f6d2-python3.10.
      4. For users that want to be on the latest 2.11.x release, receiving patch updates, we update a tag without the patch version to this release, e.g. prefect:2.11-python3.10.
      5. For users that want to be on the latest 2.x.y release, receiving minor version updates, we update a tag without the minor or patch version to this release, e.g. prefect:2-python3.10.
      6. Finally, for users who want the latest 2.x.y release without specifying a Python version, we update 2-latest to point to the image for our highest supported Python version, which in this case would be equivalent to prefect:2.11.5-python3.10.

      Choose image versions carefully

      It's a good practice to use Docker images with specific Prefect versions in production.

      Use care when employing images that automatically update to new versions (such as prefecthq/prefect:2-python3.11 or prefecthq/prefect:2-latest).

      ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#standard-python","title":"Standard Python","text":"

      Standard Python images are based on the official Python slim images, e.g. python:3.10-slim.

      Tag | Prefect Version | Python Version
      --- | --- | ---
      2-latest | most recent v2 PyPi version | 3.10
      2-python3.11 | most recent v2 PyPi version | 3.11
      2-python3.10 | most recent v2 PyPi version | 3.10
      2-python3.9 | most recent v2 PyPi version | 3.9
      2-python3.8 | most recent v2 PyPi version | 3.8
      2.X-python3.11 | 2.X | 3.11
      2.X-python3.10 | 2.X | 3.10
      2.X-python3.9 | 2.X | 3.9
      2.X-python3.8 | 2.X | 3.8
      sha-<hash>-python3.11 | <hash> | 3.11
      sha-<hash>-python3.10 | <hash> | 3.10
      sha-<hash>-python3.9 | <hash> | 3.9
      sha-<hash>-python3.8 | <hash> | 3.8
      ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#conda-flavored-python","title":"Conda-flavored Python","text":"

      Conda flavored images are based on continuumio/miniconda3. Prefect is installed into a conda environment named prefect.

      Tag | Prefect Version | Python Version
      --- | --- | ---
      2-latest-conda | most recent v2 PyPi version | 3.10
      2-python3.11-conda | most recent v2 PyPi version | 3.11
      2-python3.10-conda | most recent v2 PyPi version | 3.10
      2-python3.9-conda | most recent v2 PyPi version | 3.9
      2-python3.8-conda | most recent v2 PyPi version | 3.8
      2.X-python3.11-conda | 2.X | 3.11
      2.X-python3.10-conda | 2.X | 3.10
      2.X-python3.9-conda | 2.X | 3.9
      2.X-python3.8-conda | 2.X | 3.8
      sha-<hash>-python3.11-conda | <hash> | 3.11
      sha-<hash>-python3.10-conda | <hash> | 3.10
      sha-<hash>-python3.9-conda | <hash> | 3.9
      sha-<hash>-python3.8-conda | <hash> | 3.8
      ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#building-your-own-image","title":"Building your own image","text":"

      If your flow relies on dependencies not found in the default prefecthq/prefect images, you may want to build your own image. You can either base it off of one of the provided prefecthq/prefect images, or build your own image. See the Work pool deployment guide for discussion of how Prefect can help you build custom images with dependencies specified in a requirements.txt file.

      By default, Prefect work pools that use containers refer to the 2-latest image. You can specify another image at work pool creation. The work pool image choice can be overridden in individual deployments.

      ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#extending-the-prefecthqprefect-image-manually","title":"Extending the prefecthq/prefect image manually","text":"

      Here we provide an example Dockerfile for building an image based on prefecthq/prefect:2-latest, but with scikit-learn installed.

      FROM prefecthq/prefect:2-latest\n\nRUN pip install scikit-learn\n
      ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#choosing-an-image-strategy","title":"Choosing an image strategy","text":"

      The options described above have different complexity (and performance) characteristics. For choosing a strategy, we provide the following recommendations:

      • If your flow only makes use of tasks defined in the same file as the flow, or tasks that are part of prefect itself, then you can rely on the default provided prefecthq/prefect image.

      • If your flow requires a few extra dependencies found on PyPI, you can use the default prefecthq/prefect image and set prefect.deployments.steps.pip_install_requirements: in the pull step to install these dependencies at runtime.

      • If the installation process requires compiling code or other expensive operations, you may be better off building a custom image instead.

      • If your flow (or flows) require extra dependencies or shared libraries, we recommend building a shared custom image with all the extra dependencies and shared task definitions you need. Your flows can then all rely on the same image, but have their source stored externally. This option can ease development, as the shared image only needs to be rebuilt when dependencies change, not when the flow source changes.

      ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/docker/#next-steps","title":"Next steps","text":"

We only served a single flow in this guide, but you can extend this setup to serve multiple flows in a single Docker image by updating your Python script to use flow.to_deployment and serve to serve multiple flows, or the same flow with different configurations.
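
A minimal sketch of that pattern might look like the following (the flow and deployment names are illustrative):

from prefect import flow, serve\n\n\n@flow(log_prints=True)\ndef etl():\n    print(\"running etl\")\n\n\n@flow(log_prints=True)\ndef report():\n    print(\"building report\")\n\n\nif __name__ == \"__main__\":\n    # Each flow becomes a deployment; serve() runs them all from one process (and one image).\n    serve(\n        etl.to_deployment(name=\"etl\"),\n        report.to_deployment(name=\"daily-report\"),\n    )\n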

      To learn more about deploying flows, check out the Deployments concept doc!

      For advanced infrastructure requirements, such as executing each flow run within its own dedicated Docker container, learn more in the Work pool deployment guide.

      ","tags":["Docker","containers","orchestration","infrastructure","deployments","images","Kubernetes"],"boost":2},{"location":"guides/global-concurrency-limits/","title":"Global Concurrency Limits and Rate Limits","text":"

      Global concurrency limits allow you to manage execution efficiently, controlling how many tasks, flows, or other operations can run simultaneously. They are ideal when optimizing resource usage, preventing bottlenecks, and customizing task execution are priorities.

      Clarification on use of the term 'tasks'

      In the context of global concurrency and rate limits, \"tasks\" refers not specifically to Prefect tasks, but to concurrent units of work in general, such as those managed by an event loop or TaskGroup in asynchronous programming. These general \"tasks\" could include Prefect tasks when they are part of an asynchronous execution environment.

      Rate Limits ensure system stability by governing the frequency of requests or operations. They are suitable for preventing overuse, ensuring fairness, and handling errors gracefully.

      When selecting between Concurrency and Rate Limits, consider your primary goal. Choose Concurrency Limits for resource optimization and task management. Choose Rate Limits to maintain system stability and fair access to services.

The core difference between a rate limit and a concurrency limit is the way in which slots are released. With a rate limit, slots are released at a rate controlled by slot_decay_per_second, whereas with a concurrency limit, slots are released when the concurrency manager is exited.

      ","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#managing-global-concurrency-limits-and-rate-limits","title":"Managing Global concurrency limits and rate limits","text":"

      You can create, read, edit, and delete concurrency limits via the Prefect UI.

      When creating a concurrency limit, you can specify the following parameters:

      • Name: The name of the concurrency limit. This name is also how you'll reference the concurrency limit in your code. Special characters, such as /, %, &, >, <, are not allowed.
      • Concurrency Limit: The maximum number of slots that can be occupied on this concurrency limit.
      • Slot Decay Per Second: Controls the rate at which slots are released when the concurrency limit is used as a rate limit. This value must be configured when using the rate_limit function.
      • Active: Whether or not the concurrency limit is in an active state.
      ","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#active-vs-inactive-limits","title":"Active vs inactive limits","text":"

      Global concurrency limits can be in either an active or inactive state.

      • Active: In this state, slots can be occupied, and code execution will be blocked when slots are unable to be acquired.
      • Inactive: In this state, slots will not be occupied, and code execution will not be blocked. Concurrency enforcement occurs only when you activate the limit.
      ","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#slot-decay","title":"Slot decay","text":"

Global concurrency limits can be configured with slot decay. Slot decay applies when the concurrency limit is used as a rate limit, and it governs the pace at which slots are released, or become available for reuse, after being occupied. These slots effectively represent the concurrency capacity within a specific concurrency limit. The concept is best understood as the rate at which these slots \"decay\" or refresh.

      To configure slot decay, you can set the slot_decay_per_second parameter when defining or adjusting a concurrency limit.

      For practical use, consider the following:

      • Higher values: Setting slot_decay_per_second to a higher value, such as 5.0, results in slots becoming available relatively quickly. In this scenario, a slot that was occupied by a task will free up after just 0.2 (1.0 / 5.0) seconds.

• Lower values: Conversely, setting slot_decay_per_second to a lower value, like 0.1, causes slots to become available more slowly. In this scenario, it would take 10 (1.0 / 0.1) seconds for a slot to become available again after occupancy.

      Slot decay provides fine-grained control over the availability of slots, enabling you to optimize the rate of your workflow based on your specific requirements.
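
As an illustrative sketch, assuming a concurrency limit named rate-limited-api already exists with a limit of 1 and slot_decay_per_second set to 0.5, calls gated by the rate_limit helper (covered later in this guide) would proceed roughly once every two seconds:

import time\n\nfrom prefect.concurrency.sync import rate_limit\n\n\ndef call_api():\n    for i in range(3):\n        # Blocks until a slot is available; with slot_decay_per_second=0.5 a slot\n        # refreshes roughly every 1.0 / 0.5 = 2 seconds.\n        rate_limit(\"rate-limited-api\", occupy=1)\n        print(f\"request {i} at {time.strftime('%X')}\")\n\n\nif __name__ == \"__main__\":\n    call_api()\n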

      ","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#using-the-concurrency-context-manager","title":"Using the concurrency context manager","text":"

The concurrency context manager allows control over the maximum number of concurrent operations. You can select either the synchronous (sync) or asynchronous (async) version, depending on your use case. Here's how to use it:

      Concurrency limits are implicitly created

      When using the concurrency context manager, the concurrency limit you use will be created, in an inactive state, if it does not already exist.

      Sync

      from prefect import flow, task\nfrom prefect.concurrency.sync import concurrency\n\n\n@task\ndef process_data(x, y):\n    with concurrency(\"database\", occupy=1):\n        return x + y\n\n\n@flow\ndef my_flow():\n    for x, y in [(1, 2), (2, 3), (3, 4), (4, 5)]:\n        process_data.submit(x, y)\n\n\nif __name__ == \"__main__\":\n    my_flow()\n

      Async

      import asyncio\nfrom prefect import flow, task\nfrom prefect.concurrency.asyncio import concurrency\n\n\n@task\nasync def process_data(x, y):\n    async with concurrency(\"database\", occupy=1):\n        return x + y\n\n\n@flow\nasync def my_flow():\n    for x, y in [(1, 2), (2, 3), (3, 4), (4, 5)]:\n        await process_data.submit(x, y)\n\n\nif __name__ == \"__main__\":\n    asyncio.run(my_flow())\n
      1. The code imports the necessary modules and the concurrency context manager. Use the prefect.concurrency.sync module for sync usage and the prefect.concurrency.asyncio module for async usage.
      2. It defines a process_data task, taking x and y as input arguments. Inside this task, the concurrency context manager controls concurrency, using the database concurrency limit and occupying one slot. If another task attempts to run with the same limit and no slots are available, that task will be blocked until a slot becomes available.
      3. A flow named my_flow is defined. Within this flow, it iterates through a list of tuples, each containing pairs of x and y values. For each pair, the process_data task is submitted with the corresponding x and y values for processing.
      ","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#using-rate_limit","title":"Using rate_limit","text":"

      The Rate Limit feature provides control over the frequency of requests or operations, ensuring responsible usage and system stability. Depending on your requirements, you can utilize rate_limit to govern both synchronous (sync) and asynchronous (async) operations. Here's how to make the most of it:

      Slot decay

When using the rate_limit function, the concurrency limit you use must have a slot decay configured.

      Sync

      from prefect import flow, task\nfrom prefect.concurrency.sync import rate_limit\n\n\n@task\ndef make_http_request():\n    rate_limit(\"rate-limited-api\")\n    print(\"Making an HTTP request...\")\n\n\n@flow\ndef my_flow():\n    for _ in range(10):\n        make_http_request.submit()\n\n\nif __name__ == \"__main__\":\n    my_flow()\n

      Async

      import asyncio\n\nfrom prefect import flow, task\nfrom prefect.concurrency.asyncio import rate_limit\n\n\n@task\nasync def make_http_request():\n    await rate_limit(\"rate-limited-api\")\n    print(\"Making an HTTP request...\")\n\n\n@flow\nasync def my_flow():\n    for _ in range(10):\n        await make_http_request.submit()\n\n\nif __name__ == \"__main__\":\n    asyncio.run(my_flow())\n
      1. The code imports the necessary modules and the rate_limit function. Use the prefect.concurrency.sync module for sync usage and the prefect.concurrency.asyncio module for async usage.
      2. It defines a make_http_request task. Inside this task, the rate_limit function is used to ensure that the requests are made at a controlled pace.
      3. A flow named my_flow is defined. Within this flow the make_http_request task is submitted 10 times.
      ","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#using-concurrency-and-rate_limit-outside-of-a-flow","title":"Using concurrency and rate_limit outside of a flow","text":"

concurrency and rate_limit can be used outside of a flow to control concurrency and rate limits for any operation.

      import asyncio\n\nfrom prefect.concurrency.asyncio import rate_limit\n\n\nasync def main():\n    for _ in range(10):\n        await rate_limit(\"rate-limited-api\")\n        print(\"Making an HTTP request...\")\n\n\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n
      ","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#use-cases","title":"Use cases","text":"","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#throttling-task-submission","title":"Throttling task submission","text":"

Throttling task submission to avoid overloading resources, to comply with external rate limits, or to ensure a steady, controlled flow of work.

      In this scenario the rate_limit function is used to throttle the submission of tasks. The rate limit acts as a bottleneck, ensuring that tasks are submitted at a controlled rate, governed by the slot_decay_per_second setting on the associated concurrency limit.

      from prefect import flow, task\nfrom prefect.concurrency.sync import rate_limit\n\n\n@task\ndef my_task(i):\n    return i\n\n\n@flow\ndef my_flow():\n    for _ in range(100):\n        rate_limit(\"slow-my-flow\", occupy=1)\n        my_task.submit(1)\n\n\nif __name__ == \"__main__\":\n    my_flow()\n
      ","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#managing-database-connections","title":"Managing database connections","text":"

      Managing the maximum number of concurrent database connections to avoid exhausting database resources.

In this scenario, we've set up a concurrency limit named database and given it a maximum concurrency limit that matches the maximum number of database connections we want to allow. We then use the concurrency context manager to control the number of database connections allowed at any one time.

from prefect import flow, task\nfrom prefect.concurrency.sync import concurrency\nimport psycopg2\n\n@task\ndef database_query(query):\n    # Here we request a single slot on the 'database' concurrency limit. This\n    # will block in the case that all of the database connections are in use,\n    # ensuring that we never exceed the maximum number of database connections.\n    with concurrency(\"database\", occupy=1):\n        connection = psycopg2.connect(\"<connection_string>\")\n        cursor = connection.cursor()\n        cursor.execute(query)\n        result = cursor.fetchall()\n        connection.close()\n        return result\n\n@flow\ndef my_flow():\n    queries = [\"SELECT * FROM table1\", \"SELECT * FROM table2\", \"SELECT * FROM table3\"]\n\n    for query in queries:\n        database_query.submit(query)\n\nif __name__ == \"__main__\":\n    my_flow()\n
      ","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/global-concurrency-limits/#parallel-data-processing","title":"Parallel data processing","text":"

      Limiting the maximum number of parallel processing tasks.

      In this scenario we want to limit the number of process_data tasks to five at any one time. We do this by using the concurrency context manager to request five slots on the data-processing concurrency limit. This will block until five slots are free and then submit five more tasks, ensuring that we never exceed the maximum number of parallel processing tasks.

      import asyncio\nfrom prefect.concurrency.sync import concurrency\n\n\nasync def process_data(data):\n    print(f\"Processing: {data}\")\n    await asyncio.sleep(1)\n    return f\"Processed: {data}\"\n\n\nasync def main():\n    data_items = list(range(100))\n    processed_data = []\n\n    while data_items:\n        with concurrency(\"data-processing\", occupy=5):\n            chunk = [data_items.pop() for _ in range(5)]\n            processed_data += await asyncio.gather(\n                *[process_data(item) for item in chunk]\n            )\n\n    print(processed_data)\n\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n
      ","tags":["concurrency","rate limits"],"boost":2},{"location":"guides/host/","title":"Host a Prefect server instance","text":"

      Learn how to host your own Prefect server instance.

      Note

      If you would like to host a Prefect server instance on Kubernetes, check out the prefect-server Helm chart.

After installing Prefect, you have:

• a Python SDK client that can communicate with Prefect Cloud
• an API server instance backed by a database and a UI
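
For example, you can confirm that the client can reach an API, whether Prefect Cloud or a self-hosted server, with a quick health check; in this small sketch, api_healthcheck returns None on success and the encountered exception otherwise:

import asyncio\n\nfrom prefect import get_client\n\n\nasync def main():\n    async with get_client() as client:\n        # api_healthcheck() returns None when the API is reachable,\n        # otherwise it returns the exception it encountered.\n        result = await client.api_healthcheck()\n        print(\"API reachable\" if result is None else f\"API unreachable: {result}\")\n\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n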

      ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#steps","title":"Steps","text":"
      1. Spin up a local Prefect server UI with the prefect server start CLI command in the terminal:
      prefect server start\n
2. Open the URL for the Prefect server UI (http://127.0.0.1:4200 by default) in a browser.
3. Shut down the Prefect server with ctrl + c in the terminal.","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#comparing-a-self-hosted-prefect-server-instance-and-prefect-cloud","title":"Comparing a self-hosted Prefect server instance and Prefect Cloud","text":"

        A self-hosted Prefect server instance and Prefect Cloud share a common set of features. Prefect Cloud includes the following additional features:

        • Workspaces \u2014 isolated environments to organize your flows, deployments, and flow runs.
        • Automations \u2014 configure triggers, actions, and notifications in response to real-time monitoring events.
        • Email notifications \u2014 send email alerts from Prefect's servers based on automation triggers.
        • Service accounts \u2014 configure API access for running workers or executing flow runs on remote infrastructure.
        • Custom role-based access controls (RBAC) \u2014 assign users granular permissions to perform activities within an account or workspace.
        • Single Sign-on (SSO) \u2014 authentication using your identity provider.
        • Audit Log \u2014 a record of user activities to monitor security and compliance.

        Read more about Prefect Cloud in the Cloud section.

        ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#configure-a-prefect-server-instance","title":"Configure a Prefect server instance","text":"

        Go to your terminal session and run this command to set the API URL to point to a Prefect server instance:

        prefect config set PREFECT_API_URL=\"http://127.0.0.1:4200/api\"\n

        PREFECT_API_URL required when running Prefect inside a container

        You must set the API server address to use Prefect within a container, such as a Docker container.

        You can save the API server address in a Prefect profile. Whenever that profile is active, the API endpoint is at that address.

        See Profiles & Configuration for more information on profiles and configurable Prefect settings.

        ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#the-prefect-database","title":"The Prefect database","text":"

        The Prefect database persists data to track the state of your flow runs and related Prefect concepts, including:

        • Flow run and task run state
        • Run history
        • Logs
        • Deployments
        • Flow and task run concurrency limits
        • Storage blocks for flow and task results
        • Variables
        • Artifacts
        • Work pool status

        Currently Prefect supports the following databases:

        • SQLite (default in Prefect): Recommended for lightweight, single-server deployments. SQLite requires essentially no setup.
        • PostgreSQL: Best for connecting to external databases, but requires additional setup (such as Docker). Prefect uses the pg_trgm extension, so it must be installed and enabled.
        ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#using-the-database","title":"Using the database","text":"

        A local SQLite database is the default database and is configured upon Prefect installation. The database is located at ~/.prefect/prefect.db by default.

        To reset your database, run the CLI command:

        prefect server database reset -y\n

        This command clears all data and reapplies the schema.

        ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#database-settings","title":"Database settings","text":"

        Prefect provides several settings for configuring the database. The default settings are:

        PREFECT_API_DATABASE_CONNECTION_URL='sqlite+aiosqlite:///${PREFECT_HOME}/prefect.db'\nPREFECT_API_DATABASE_ECHO='False'\nPREFECT_API_DATABASE_MIGRATE_ON_START='True'\nPREFECT_API_DATABASE_PASSWORD='None'\n

        Save a setting to your active Prefect profile with prefect config set.
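
If you prefer to inspect the effective values from Python rather than the CLI, the setting objects in prefect.settings resolve against the active profile and environment; a small sketch:

from prefect.settings import (\n    PREFECT_API_DATABASE_CONNECTION_URL,\n    PREFECT_API_DATABASE_MIGRATE_ON_START,\n)\n\n# .value() returns each setting as resolved from the active profile and environment.\nprint(PREFECT_API_DATABASE_CONNECTION_URL.value())\nprint(PREFECT_API_DATABASE_MIGRATE_ON_START.value())\n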

        ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#configure-a-postgresql-database","title":"Configure a PostgreSQL database","text":"

        Connect Prefect to a PostgreSQL database by setting the following environment variable:

        prefect config set PREFECT_API_DATABASE_CONNECTION_URL=\"postgresql+asyncpg://postgres:yourTopSecretPassword@localhost:5432/prefect\"\n

        The above environment variable assumes:

        • You have a username called postgres
        • Your password is set to yourTopSecretPassword
        • Your database runs on the same host as the Prefect server instance, localhost
        • You use the default PostgreSQL port 5432
        • Your PostgreSQL instance has a database called prefect
        ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#quickstart-configure-a-postgresql-database-with-docker","title":"Quickstart: configure a PostgreSQL database with Docker","text":"

        Quickly start a PostgreSQL instance to use as your Prefect database with the following command (which will start a Docker container running PostgreSQL):

        docker run -d --name prefect-postgres -v prefectdb:/var/lib/postgresql/data -p 5432:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=yourTopSecretPassword -e POSTGRES_DB=prefect postgres:latest\n

        The above command:

        • Pulls the latest version of the official postgres Docker image, which is compatible with Prefect.
        • Starts a container with the name prefect-postgres.
        • Creates a database prefect with a user postgres and yourTopSecretPassword password.
        • Mounts the PostgreSQL data to a Docker volume called prefectdb to provide persistence if you ever have to restart or rebuild that container.

Run the command below to point your current Prefect profile at the PostgreSQL database instance running in your Docker container.

        prefect config set PREFECT_API_DATABASE_CONNECTION_URL=\"postgresql+asyncpg://postgres:yourTopSecretPassword@localhost:5432/prefect\"\n
        ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#confirm-your-postgresql-database-configuration","title":"Confirm your PostgreSQL database configuration","text":"

        Inspect your Prefect profile to confirm that the environment variable has been properly set:

        prefect config view --show-sources\n
You should see output similar to the following:

PREFECT_PROFILE='my_profile'\nPREFECT_API_DATABASE_CONNECTION_URL='********' (from profile)\nPREFECT_API_URL='http://127.0.0.1:4200/api' (from profile)\n

        Start the Prefect server to use your PostgreSQL database instance:

        prefect server start\n
        ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#in-memory-database","title":"In-memory database","text":"

        To use an in-memory SQLite database, set the following environment variable:

        prefect config set PREFECT_API_DATABASE_CONNECTION_URL=\"sqlite+aiosqlite:///file::memory:?cache=shared&uri=true&check_same_thread=false\"\n

        Use SQLite database for testing only

        SQLite is only supported by Prefect for testing purposes and is not compatible with multiprocessing.

        ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#migrations","title":"Migrations","text":"

        Prefect uses Alembic to manage database migrations. Alembic is a database migration tool to use with the SQLAlchemy Database Toolkit for Python. Alembic provides a framework for generating and applying schema changes to a database.

        Apply migrations to your database with the following commands:

        To upgrade:

        prefect server database upgrade -y\n

        To downgrade:

        prefect server database downgrade -y\n

Use the -r flag to specify the migration version to upgrade or downgrade to. For example, to downgrade to the previous migration version, run:

        prefect server database downgrade -y -r -1\n

        or to downgrade to a specific revision:

        prefect server database downgrade -y -r d20618ce678e\n

        To downgrade all migrations, use the base revision.

        See the contributing docs to create new database migrations.

        ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#notifications","title":"Notifications","text":"

        Prefect Cloud gives you access to a hosted platform with Workspace & User controls, Events, and Automations. Prefect Cloud has an option for automation notifications. The more limited Notifications option is provided for the self-hosted Prefect server.

Notifications enable you to set up alerts that are sent when a flow enters any state you specify. When your flow and task runs change state, Prefect notes the state change and checks whether the new state matches any notification policies. If it does, a new notification is queued.

        Prefect supports sending notifications through:

        • Custom webhook
        • Discord webhook
        • Mattermost webhook
        • Microsoft Teams webhook
        • Opsgenie webhook
        • PagerDuty webhook
        • Sendgrid email
        • Slack webhook
        • Twilio SMS

        Notifications in Prefect Cloud

        Prefect Cloud uses the robust Automations interface to enable notifications related to flow run state changes and work pool status.

        ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/host/#configure-notifications","title":"Configure notifications","text":"

        To configure a notification in a Prefect server, go to the Notifications page and select Create Notification or the + button.

        You can choose:

        • Which run states should trigger a notification
        • Tags to filter which flow runs are covered by the notification
• Whether to send an email, a Slack message, a Microsoft Teams message, or use another service

        For email notifications (supported on Prefect Cloud only), the configuration requires email addresses to which the message is sent.

        For Slack notifications, the configuration requires webhook credentials for your Slack and the channel to which the message is sent.
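
If you want to verify your webhook before wiring it into a notification, one option is to store it as a reusable block and send a test message from Python; a sketch, where the webhook URL is a placeholder and the block name my-slack-webhook simply mirrors the example below:

from prefect.blocks.notifications import SlackWebhook\n\n# Placeholder URL; use the incoming webhook for your channel.\nslack = SlackWebhook(url=\"https://hooks.slack.com/services/XXX/YYY/ZZZ\")\nslack.save(\"my-slack-webhook\", overwrite=True)\n\n# Send a test message to confirm the webhook works.\nslack.notify(\"Hello from Prefect!\")\n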

        For example, to get a Slack message if a flow with a daily-etl tag fails, the notification will read:

        If a run of any flow with daily-etl tag enters a failed state, send a notification to my-slack-webhook

        When the conditions of the notification are triggered, you\u2019ll receive a message:

        The fuzzy-leopard run of the daily-etl flow entered a failed state at 22-06-27 16:21:37 EST.

        On the Notifications page you can pause, edit, or delete any configured notification.

        ","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"guides/logs/","title":"Logging","text":"

        Prefect enables you to log a variety of useful information about your flow and task runs, capturing information about your workflows for purposes such as monitoring, troubleshooting, and auditing.

        Prefect captures logs for your flow and task runs by default, even if you have not started a Prefect server with prefect server start.

        You can view and filter logs in the Prefect UI or Prefect Cloud, or access log records via the API.

        Prefect enables fine-grained customization of log levels for flows and tasks, including configuration for default levels and log message formatting.

        ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#logging-overview","title":"Logging overview","text":"

        Whenever you run a flow, Prefect automatically logs events for flow runs and task runs, along with any custom log handlers you have configured. No configuration is needed to enable Prefect logging.

        For example, say you created a simple flow in a file flow.py. If you create a local flow run with python flow.py, you'll see an example of the log messages created automatically by Prefect:

        16:45:44.534 | INFO    | prefect.engine - Created flow run 'gray-dingo' for flow\n'hello-flow'\n16:45:44.534 | INFO    | Flow run 'gray-dingo' - Using task runner 'SequentialTaskRunner'\n16:45:44.598 | INFO    | Flow run 'gray-dingo' - Created task run 'hello-task-54135dc1-0'\nfor task 'hello-task'\nHello world!\n16:45:44.650 | INFO    | Task run 'hello-task-54135dc1-0' - Finished in state\nCompleted(None)\n16:45:44.672 | INFO    | Flow run 'gray-dingo' - Finished in state\nCompleted('All states completed.')\n
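
For reference, a flow.py that produces output along these lines might look roughly like the following (reconstructed from the log messages above):

from prefect import flow, task\n\n\n@task(name=\"hello-task\")\ndef hello_task():\n    print(\"Hello world!\")\n\n\n@flow(name=\"hello-flow\")\ndef hello_flow():\n    hello_task()\n\n\nif __name__ == \"__main__\":\n    hello_flow()\n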

        You can see logs for a flow run in the Prefect UI by navigating to the Flow Runs page and selecting a specific flow run to inspect.

        These log messages reflect the logging configuration for log levels and message formatters. You may customize the log levels captured and the default message format through configuration, and you can capture custom logging events by explicitly emitting log messages during flow and task runs.

        Prefect supports the standard Python logging levels CRITICAL, ERROR, WARNING, INFO, and DEBUG. By default, Prefect displays INFO-level and above events. You can configure the root logging level as well as specific logging levels for flow and task runs.

        ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#logging-configuration","title":"Logging configuration","text":"","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#logging-settings","title":"Logging settings","text":"

        Prefect provides several settings for configuring logging level and loggers.

By default, Prefect displays INFO-level and above logging records. If you change this level to DEBUG, DEBUG-level logs created by Prefect are shown as well. You may need to change the log level used by loggers from other libraries to see their log records.

        You can override any logging configuration by setting an environment variable or Prefect Profile setting using the syntax PREFECT_LOGGING_[PATH]_[TO]_[KEY], with [PATH]_[TO]_[KEY] corresponding to the nested address of any setting.

        For example, to change the default logging levels for Prefect to DEBUG, you can set the environment variable PREFECT_LOGGING_LEVEL=\"DEBUG\".

        You may also configure the \"root\" Python logger. The root logger receives logs from all loggers unless they explicitly opt out by disabling propagation. By default, the root logger is configured to output WARNING level logs to the console. As with other logging settings, you can override this from the environment or in the logging configuration file. For example, you can change the level with the variable PREFECT_LOGGING_ROOT_LEVEL.

        You may adjust the log level used by specific handlers. For example, you could set PREFECT_LOGGING_HANDLERS_API_LEVEL=ERROR to have only ERROR logs reported to the Prefect API. The console handlers will still default to level INFO.

        There is a logging.yml file packaged with Prefect that defines the default logging configuration.

You can customize the logging configuration by creating your own version of logging.yml with custom settings, either by creating the file at the default location (~/.prefect/logging.yml) or by specifying the path to the file with PREFECT_LOGGING_SETTINGS_PATH. (If the file does not exist at the specified location, Prefect ignores the setting and uses the default configuration.)

        See the Python Logging configuration documentation for more information about the configuration options and syntax used by logging.yml.

        ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#prefect-loggers","title":"Prefect loggers","text":"

To access the Prefect logger, use from prefect import get_run_logger. You can send messages to the logger in both flows and tasks.

        ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#logging-in-flows","title":"Logging in flows","text":"

        To log from a flow, retrieve a logger instance with get_run_logger(), then call the standard Python logging methods.

        from prefect import flow, get_run_logger\n\n@flow(name=\"log-example-flow\")\ndef logger_flow():\n    logger = get_run_logger()\n    logger.info(\"INFO level log message.\")\n

        Prefect automatically uses the flow run logger based on the flow context. If you run the above code, Prefect captures the following as a log event.

        15:35:17.304 | INFO    | Flow run 'mottled-marten' - INFO level log message.\n

        The default flow run log formatter uses the flow run name for log messages.

        Note

        Starting in 2.7.11, if you use a logger that sends logs to the API outside of a flow or task run, a warning will be displayed instead of an error. You can silence this warning by setting `PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW=ignore` or have the logger raise an error by setting the value to `error`.\n
        ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#logging-in-tasks","title":"Logging in tasks","text":"

        Logging in tasks works much as logging in flows: retrieve a logger instance with get_run_logger(), then call the standard Python logging methods.

        from prefect import flow, task, get_run_logger\n\n@task(name=\"log-example-task\")\ndef logger_task():\n    logger = get_run_logger()\n    logger.info(\"INFO level log message from a task.\")\n\n@flow(name=\"log-example-flow\")\ndef logger_flow():\n    logger_task()\n

        Prefect automatically uses the task run logger based on the task context. The default task run log formatter uses the task run name for log messages.

        15:33:47.179 | INFO   | Task run 'logger_task-80a1ffd1-0' - INFO level log message from a task.\n

        The underlying log model for task runs captures the task name, task run ID, and parent flow run ID, which are persisted to the database for reporting and may also be used in custom message formatting.

        ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#logging-print-statements","title":"Logging print statements","text":"

        Prefect provides the log_prints option to enable the logging of print statements at the task or flow level. When log_prints=True for a given task or flow, the Python builtin print will be patched to redirect to the Prefect logger for the scope of that task or flow.

        By default, tasks and subflows will inherit the log_prints setting from their parent flow, unless opted out with their own explicit log_prints setting.

        from prefect import task, flow\n\n@task\ndef my_task():\n    print(\"we're logging print statements from a task\")\n\n@flow(log_prints=True)\ndef my_flow():\n    print(\"we're logging print statements from a flow\")\n    my_task()\n

        Will output:

        15:52:11.244 | INFO    | prefect.engine - Created flow run 'emerald-gharial' for flow 'my-flow'\n15:52:11.812 | INFO    | Flow run 'emerald-gharial' - we're logging print statements from a flow\n15:52:11.926 | INFO    | Flow run 'emerald-gharial' - Created task run 'my_task-20c6ece6-0' for task 'my_task'\n15:52:11.927 | INFO    | Flow run 'emerald-gharial' - Executing 'my_task-20c6ece6-0' immediately...\n15:52:12.217 | INFO    | Task run 'my_task-20c6ece6-0' - we're logging print statements from a task\n
from prefect import task, flow\n\n@task(log_prints=False)\ndef my_task():\n    print(\"not logging print statements in this task\")\n\n@flow(log_prints=True)\ndef my_flow():\n    print(\"we're logging print statements from a flow\")\n    my_task()\n

        Using log_prints=False at the task level will output:

        15:52:11.244 | INFO    | prefect.engine - Created flow run 'emerald-gharial' for flow 'my-flow'\n15:52:11.812 | INFO    | Flow run 'emerald-gharial' - we're logging print statements from a flow\n15:52:11.926 | INFO    | Flow run 'emerald-gharial' - Created task run 'my_task-20c6ece6-0' for task 'my_task'\n15:52:11.927 | INFO    | Flow run 'emerald-gharial' - Executing 'my_task-20c6ece6-0' immediately...\nnot logging print statements in this task\n

        You can also configure this behavior globally for all Prefect flows, tasks, and subflows.

        prefect config set PREFECT_LOGGING_LOG_PRINTS=True\n
        ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#formatters","title":"Formatters","text":"

        Prefect log formatters specify the format of log messages. You can see details of message formatting for different loggers in logging.yml. For example, the default formatting for task run log records is:

        \"%(asctime)s.%(msecs)03d | %(levelname)-7s | Task run %(task_run_name)r - %(message)s\"\n

The variables available to interpolate in log messages vary by logger. In addition to the run context, message string, and any keyword arguments, flow and task run loggers have access to additional variables.

        The flow run logger has the following:

        • flow_run_name
        • flow_run_id
        • flow_name

        The task run logger has the following:

        • task_run_id
        • flow_run_id
        • task_run_name
        • task_name
        • flow_run_name
        • flow_name

        You can specify custom formatting by setting an environment variable or by modifying the formatter in a logging.yml file as described earlier. For example, to change the formatting for the flow runs formatter:

        PREFECT_LOGGING_FORMATTERS_STANDARD_FLOW_RUN_FMT=\"%(asctime)s.%(msecs)03d | %(levelname)-7s | %(flow_run_id)s - %(message)s\"\n

        The resulting messages, using the flow run ID instead of name, would look like this:

        10:40:01.211 | INFO    | e43a5a80-417a-41c4-a39e-2ef7421ee1fc - Created task run\n'othertask-1c085beb-3' for task 'othertask'\n
        ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#styles","title":"Styles","text":"

        By default, Prefect highlights specific keywords in the console logs with a variety of colors.

        Highlighting can be toggled on/off with the PREFECT_LOGGING_COLORS setting, e.g.

        PREFECT_LOGGING_COLORS=False\n

You can change what gets highlighted and also adjust the colors by updating the styles in a logging.yml file. The following lists the specific keys built into the PrefectConsoleHighlighter.

        URLs:

        • log.web_url
        • log.local_url

        Log levels:

        • log.info_level
        • log.warning_level
        • log.error_level
        • log.critical_level

        State types:

        • log.pending_state
        • log.running_state
        • log.scheduled_state
        • log.completed_state
        • log.cancelled_state
        • log.failed_state
        • log.crashed_state

        Flow (run) names:

        • log.flow_run_name
        • log.flow_name

        Task (run) names:

        • log.task_run_name
        • log.task_name

        You can also build your own handler with a custom highlighter. For example, to additionally highlight emails:

1. Copy and paste the following into my_package_or_module.py (rename as needed) in the same directory as the flow run script, or, ideally, as part of a Python package so it's available in site-packages and can be accessed anywhere within your environment.
        import logging\nfrom typing import Dict, Union\n\nfrom rich.highlighter import Highlighter\n\nfrom prefect.logging.handlers import PrefectConsoleHandler\nfrom prefect.logging.highlighters import PrefectConsoleHighlighter\n\nclass CustomConsoleHighlighter(PrefectConsoleHighlighter):\n    base_style = \"log.\"\n    highlights = PrefectConsoleHighlighter.highlights + [\n        # ?P<email> is naming this expression as `email`\n        r\"(?P<email>[\\w-]+@([\\w-]+\\.)+[\\w-]+)\",\n    ]\n\nclass CustomConsoleHandler(PrefectConsoleHandler):\n    def __init__(\n        self,\n        highlighter: Highlighter = CustomConsoleHighlighter,\n        styles: Dict[str, str] = None,\n        level: Union[int, str] = logging.NOTSET,\n   ):\n        super().__init__(highlighter=highlighter, styles=styles, level=level)\n
2. Update ~/.prefect/logging.yml to use my_package_or_module.CustomConsoleHandler and additionally reference the base_style and named expression: log.email.
            console_flow_runs:\n        level: 0\n        class: my_package_or_module.CustomConsoleHandler\n        formatter: flow_runs\n        styles:\n            log.email: magenta\n            # other styles can be appended here, e.g.\n            # log.completed_state: green\n
3. Then, on your next flow run, text that looks like an email will be highlighted, e.g. my@email.com is colored in magenta here.
        from prefect import flow, get_run_logger\n\n@flow\ndef log_email_flow():\n    logger = get_run_logger()\n    logger.info(\"my@email.com\")\n\nlog_email_flow()\n
        ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#applying-markup-in-logs","title":"Applying markup in logs","text":"

        To use Rich's markup in Prefect logs, first configure PREFECT_LOGGING_MARKUP.

        PREFECT_LOGGING_MARKUP=True\n

        Then, the following will highlight \"fancy\" in red.

        from prefect import flow, get_run_logger\n\n@flow\ndef my_flow():\n    logger = get_run_logger()\n    logger.info(\"This is [bold red]fancy[/]\")\n\nmy_flow()\n

        Inaccurate logs could result

Although this can be convenient, the downside is that, if enabled, strings that contain square brackets may be inaccurately interpreted and lead to incomplete output, e.g. DROP TABLE [dbo].[SomeTable]; outputs DROP TABLE .[SomeTable];.

        ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#log-database-schema","title":"Log database schema","text":"

        Logged events are also persisted to the Prefect database. A log record includes the following data:

        Column Description id Primary key ID of the log record. created Timestamp specifying when the record was created. updated Timestamp specifying when the record was updated. name String specifying the name of the logger. level Integer representation of the logging level. flow_run_id ID of the flow run associated with the log record. If the log record is for a task run, this is the parent flow of the task. task_run_id ID of the task run associated with the log record. Null if logging a flow run event. message Log message. timestamp The client-side timestamp of this logged statement.

        For more information, see Log schema in the API documentation.

        ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/logs/#including-logs-from-other-libraries","title":"Including logs from other libraries","text":"

        By default, Prefect won't capture log statements from libraries that your flows and tasks use. You can tell Prefect to include logs from these libraries with the PREFECT_LOGGING_EXTRA_LOGGERS setting.

        To use this setting, specify one or more Python library names to include, separated by commas. For example, if you want to make sure Prefect captures Dask and SciPy logging statements with your flow and task run logs:

        PREFECT_LOGGING_EXTRA_LOGGERS=dask,scipy\n

        You can set this setting as an environment variable or in a profile. See Settings for more details about how to use settings.
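
As a sketch of how this plays out in code, with PREFECT_LOGGING_EXTRA_LOGGERS=dask,scipy set, records emitted through those libraries' standard Python loggers are attached to your flow run logs; the scipy logger below simply stands in for whatever logger a library uses internally:

import logging\n\nfrom prefect import flow\n\n\n@flow\ndef my_flow():\n    # Libraries generally log through logging.getLogger(\"<library name>\").\n    # Prefect attaches these records to the flow run logs when the library\n    # is listed in PREFECT_LOGGING_EXTRA_LOGGERS.\n    logging.getLogger(\"scipy\").warning(\"a warning surfaced from a library logger\")\n\n\nif __name__ == \"__main__\":\n    my_flow()\n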

        ","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"guides/managed-execution/","title":"Managed Execution","text":"

        Prefect Cloud can run your flows on your behalf with Prefect Managed work pools. Flows run with this work pool do not require a worker or cloud provider account. Prefect handles the infrastructure and code execution for you.

        Managed execution is a great option for users who want to get started quickly, with no infrastructure setup.

        Managed Execution is in beta

        Managed Execution is currently in beta. Features are likely to change without warning.

        ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#usage-guide","title":"Usage guide","text":"

        Run a flow with managed infrastructure in three steps.

        ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#step-1","title":"Step 1","text":"

        Create a new work pool of type Prefect Managed in the UI or the CLI. Here's the command to create a new work pool using the CLI:

        prefect work-pool create my-managed-pool --type prefect:managed\n
        ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#step-2","title":"Step 2","text":"

        Create a deployment using the flow deploy method or prefect.yaml.

        Specify the name of your managed work pool, as shown in this example that uses the deploy method:

        managed-execution.py
        from prefect import flow\n\nif __name__ == \"__main__\":\n    flow.from_source(\n    source=\"https://github.com/desertaxle/demo.git\",\n    entrypoint=\"flow.py:my_flow\",\n    ).deploy(\n        name=\"test-managed-flow\",\n        work_pool_name=\"my-managed-pool\",\n    )\n

        With your CLI authenticated to your Prefect Cloud workspace, run the script to create your deployment:

        python managed-execution.py\n

        Note that this deployment uses flow code stored in a GitHub repository.

        ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#step-3","title":"Step 3","text":"

        Run the deployment from the UI or from the CLI.

        That's it! You ran a flow on remote infrastructure without any infrastructure setup, starting a worker, or needing a cloud provider account.

        ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#adding-dependencies","title":"Adding dependencies","text":"

        Prefect can install Python packages in the container that runs your flow at runtime. You can specify these dependencies in the Pip Packages field in the UI, or by configuring job_variables={\"pip_packages\": [\"pandas\", \"prefect-aws\"]} in your deployment creation like this:

        from prefect import flow\n\nif __name__ == \"__main__\":\n    flow.from_source(\n    source=\"https://github.com/desertaxle/demo.git\",\n    entrypoint=\"flow.py:my_flow\",\n    ).deploy(\n        name=\"test-managed-flow\",\n        work_pool_name=\"my-managed-pool\",\n        job_variables={\"pip_packages\": [\"pandas\", \"prefect-aws\"]}\n    )\n

        Alternatively, you can create a requirements.txt file and reference it in your prefect.yaml pull_step.

        ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#limitations","title":"Limitations","text":"

        Managed execution requires Prefect 2.14.4 or newer.

        All limitations listed below may change without warning during the beta period. We will update this page as we make changes.

        ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#concurrency-work-pools","title":"Concurrency & work pools","text":"

        Free tier accounts are limited to:

        • Maximum of 1 concurrent flow run per workspace across all prefect:managed pools.
        • Maximum of 1 managed execution work pool per workspace.

        Pro tier and above accounts are limited to:

        • Maximum of 10 concurrent flow runs per workspace across all prefect:managed pools.
        • Maximum of 5 managed execution work pools per workspace.
        ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#images","title":"Images","text":"

        At this time, managed execution requires that you run the official Prefect Docker image: prefecthq/prefect:2-latest. However, as noted above, you can install Python package dependencies at runtime. If you need to use your own image, we recommend using another type of work pool.

        ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#code-storage","title":"Code storage","text":"

        Flow code must be stored in an accessible remote location. This means git-based cloud providers such as GitHub, Bitbucket, or GitLab are supported. Remote block-based storage is also supported, so S3, GCS, and Azure Blob are additional code storage options.

        ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#resources","title":"Resources","text":"

        Memory is limited to 2GB of RAM, which includes all operations such as dependency installation. Maximum job run time is 24 hours.

        ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#usage-limits","title":"Usage limits","text":"

        Free tier accounts are limited to ten compute hours per workspace per month. Pro tier and above accounts are limited to 250 hours per workspace per month. You can view your compute hours quota usage on the Work Pools page in the UI.

        ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/managed-execution/#next-steps","title":"Next steps","text":"

        Read more about creating deployments in the deployment guide.

        If you find that you need more control over your infrastructure, such as the ability to run custom Docker images, serverless push work pools might be a good option. Read more here.

        ","tags":["managed infrastructure","infrastructure"],"boost":2},{"location":"guides/migration-guide/","title":"Migrating from Prefect 1 to Prefect 2","text":"

        This guide is designed to help you migrate your workflows from Prefect 1 to Prefect 2.

        ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#what-stayed-the-same","title":"What stayed the same","text":"

        Prefect 2 still:

        • Has tasks and flows.
        • Orchestrates your flow runs and provides observability into their execution states.
        • Runs and inspects flow runs locally.
        • Provides a coordination plane for your dataflows based on the same principles.
        • Employs the same hybrid execution model, where Prefect doesn't store your flow code or data.
        ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#what-changed","title":"What changed","text":"

        Prefect 2 requires modifications to your existing tasks, flows, and deployment patterns. We've organized this section into the following categories:

        • Simplified patterns \u2014 abstractions from Prefect 1 that are no longer necessary in the dynamic, DAG-free Prefect workflows that support running native Python code in your flows.
        • Conceptual and syntax changes that often clarify names and simplify familiar abstractions such as retries and caching.
        • New features enabled by the dynamic and flexible Prefect API.
        ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#simplified-patterns","title":"Simplified patterns","text":"

        Since Prefect 2 allows running native Python code within the flow function, some abstractions are no longer necessary:

        • Parameter tasks: in Prefect 2, inputs to your flow function are automatically treated as parameters of your flow. You can define the parameter values in your flow code when you create your Deployment, or when you schedule an ad-hoc flow run. One benefit of Prefect parametrization is built-in type validation with pydantic.
        • Task-level state_handlers: in Prefect 2, you can build custom logic that reacts to task-run states within your flow function without the need for state_handlers. The page \" How to take action on a state change of a task run\" provides a further explanation and code examples.
        • Instead of using signals, Prefect 2 allows you to raise an arbitrary exception in your task or flow and return a custom state. For more details and examples, see How can I stop the task run based on a custom logic.
• Conditional tasks such as case are no longer required. Use native Python if...else statements to build conditional logic, as shown in the sketch after this list. The Discourse tag \"conditional-logic\" provides more resources.
        • Since you can use any context manager directly in your flow, a resource_manager is no longer necessary. As long as you point to your flow script in your Deployment, you can share database connections and any other resources between tasks in your flow. The Discourse page How to clean up resources used in a flow provides a full example.
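
A minimal Prefect 2 sketch combining the first and fourth points above: flow function arguments act as the flow's parameters, and plain Python if...else replaces the case conditional task:

from prefect import flow, task\n\n\n@task\ndef process(dataset: str) -> str:\n    return f\"processed {dataset}\"\n\n\n@flow\ndef my_flow(dataset: str = \"default\", full_refresh: bool = False):\n    # Flow arguments are the flow's parameters; no Parameter task is needed.\n    if full_refresh:\n        # Plain Python replaces the Prefect 1 `case` conditional task.\n        return process(dataset)\n    return f\"skipped {dataset}\"\n\n\nif __name__ == \"__main__\":\n    my_flow(dataset=\"orders\", full_refresh=True)\n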
        ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#conceptual-and-syntax-changes","title":"Conceptual and syntax changes","text":"

        The changes listed below require you to modify your workflow code. The following table shows how Prefect 1 concepts have been implemented in Prefect 2. The last column contains references to additional resources that provide more details and examples.
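
As one concrete example of the syntax changes summarized in the table below, task-level retries move from max_retries and retry_delay to retries and retry_delay_seconds; a minimal sketch:

from prefect import task\n\n# Prefect 1: @task(max_retries=2, retry_delay=timedelta(seconds=5))\n# Prefect 2:\n@task(retries=2, retry_delay_seconds=5)\ndef fetch_data():\n    ...\n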

        Concept Prefect 1 Prefect 2 Reference links Flow definition. with Flow(\"flow_name\") as flow: @flow(name=\"flow_name\") How can I define a flow? Flow executor that determines how to execute your task runs. Executor such as LocalExecutor. Task runner such as ConcurrentTaskRunner. What is the default TaskRunner (executor)? Configuration that determines how and where to execute your flow runs. Run configuration such as flow.run_config = DockerRun(). Create an infrastructure block such as a Docker Container and specify it as the infrastructure when creating a deployment. How can I run my flow in a Docker container? Assignment of schedules and default parameter values. Schedules are attached to the flow object and default parameter values are defined within the Parameter tasks. Schedules and default parameters are assigned to a flow\u2019s Deployment, rather than to a Flow object. How can I attach a schedule to a flow? Retries @task(max_retries=2, retry_delay=timedelta(seconds=5)) @task(retries=2, retry_delay_seconds=5) How can I specify the retry behavior for a specific task? Logger syntax. Logger is retrieved from prefect.context and can only be used within tasks. In Prefect 2, you can log not only from tasks, but also within flows. To get the logger object, use: prefect.get_run_logger(). How can I add logs to my flow? The syntax and contents of Prefect context. Context is a thread-safe way of accessing variables related to the flow run and task run. The syntax to retrieve it: prefect.context. Context is still available, but its content is much richer, allowing you to retrieve even more information about your flow runs and task runs. The syntax to retrieve it: prefect.context.get_run_context(). How to access Prefect context values? Task library. Included in the main Prefect Core repository. Separated into individual repositories per system, cloud provider, or technology. How to migrate Prefect 1 tasks to Prefect 2 integrations.","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#what-changed-in-dataflow-orchestration","title":"What changed in dataflow orchestration?","text":"

        Let\u2019s look at the differences in how Prefect 2 transitions your flow and task runs between various execution states.

        • In Prefect 2, the final state of a flow run that finished without errors is Completed, while in Prefect 1, this flow run has a Success state. You can find more about that topic here.
        • The decision about whether a flow run should be considered successful or not is no longer based on special reference tasks. Instead, your flow\u2019s return value determines the final state of a flow run. This link provides a more detailed explanation with code examples.
        • In Prefect 1, concurrency limits were only available to Prefect Cloud users. Prefect 2 provides customizable concurrency limits with the open-source Prefect server and Prefect Cloud. In Prefect 2, flow run concurrency limits are set on work pools.
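        As a minimal illustration of the return-value behavior described above (flow names are illustrative): any ordinary return value produces a Completed flow run, while an uncaught exception produces a Failed one.

        from prefect import flow\n\n@flow\ndef happy_flow():\n    return 42  # any regular return value -> flow run finishes Completed\n\n@flow\ndef sad_flow():\n    raise ValueError(\"boom\")  # an uncaught exception -> flow run finishes Failed\n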
        ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#what-changed-in-flow-deployment-patterns","title":"What changed in flow deployment patterns?","text":"

        To deploy your Prefect 1 flows, you have to send flow metadata to the backend in a step called registration. Prefect 2 no longer requires flow pre-registration. Instead, you create a Deployment that specifies the entry point to your flow code and optionally specifies:

        • Where to run your flow (your Infrastructure, such as a DockerContainer, KubernetesJob, or ECSTask).
        • When to run your flow (an Interval, Cron, or RRule schedule).
        • How to run your flow (execution details such as parameters, flow deployment name, and more).
        • The work pool for your deployment. If no work pool is specified, a default work pool named default is used.

        The API is now implemented as a REST API rather than GraphQL. This page illustrates how you can interact with the API.
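        If you prefer not to call the REST endpoints directly, the Python client wraps them. A hedged sketch, assuming the flow run fields printed here are available on the returned models:

        import asyncio\nfrom prefect import get_client\n\nasync def list_recent_runs():\n    async with get_client() as client:\n        # the client calls the REST API under the hood\n        flow_runs = await client.read_flow_runs(limit=5)\n        for run in flow_runs:\n            print(run.name, run.state_type)\n\nif __name__ == \"__main__\":\n    asyncio.run(list_recent_runs())\n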

        In Prefect 1, the logical grouping of flows was based on projects. Prefect 2 provides a much more flexible way of organizing your flows, tasks, and deployments through customizable filters and\u00a0tags. This page provides more details on how to assign tags to various Prefect 2 objects.

        The role of agents has changed:

        • In Prefect 2, there is only one generic agent type. The agent polls a work pool looking for flow runs.
        • See this Discourse page for a more detailed discussion.
        ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#new-features-introduced-in-prefect-2","title":"New features introduced in Prefect 2","text":"

        The following new components and capabilities are enabled by Prefect 2.

        • More flexibility thanks to the elimination of flow pre-registration.
        • More flexibility for flow deployments, including easier promotion of a flow through development, staging, and production environments.
        • Native async support.
        • Out-of-the-box pydantic validation.
        • Blocks allowing you to securely store UI-editable, type-checked configuration to external systems and an easy-to-use Key-Value Store. All those components are configurable in one place and provided as part of the open-source Prefect 2 product. In contrast, the concept of Secrets in Prefect 1 was much more narrow and only available in Prefect Cloud.
        • Notifications available in the open-source Prefect 2 version, as opposed to Cloud-only Automations in Prefect 1.
        • A first-class subflows concept: Prefect 1 only allowed the flow-of-flows orchestrator pattern. With Prefect 2 subflows, you gain a natural and intuitive way of organizing your flows into modular sub-components. For more details, see the following list of resources about subflows.
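        For example, the blocks mentioned above can be created and read with a few lines of Python. Here is a minimal sketch using the built-in Secret block (the block name and value are illustrative):

        from prefect.blocks.system import Secret\n\n# store a value once, from a setup script or the UI\nSecret(value=\"my-api-token-value\").save(name=\"my-api-token\", overwrite=True)\n\n# load it securely wherever your flows run\ntoken = Secret.load(\"my-api-token\").get()\n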
        ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#orchestration-behind-the-api","title":"Orchestration behind the API","text":"

        Apart from new features, Prefect 2 simplifies many usage patterns and provides a much more seamless onboarding experience.

        Every flow run, whether it is triggered ad hoc from a Python script or tracked by the API server, appears on the same UI page for easier debugging and observability.

        ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#code-as-workflows","title":"Code as workflows","text":"

        With Prefect 2, your functions\u00a0are\u00a0your flows and tasks. Prefect 2 automatically detects your flows and tasks without the need to define a rigid DAG structure. While use of tasks is encouraged to provide you the maximum visibility into your workflows, they are no longer required. You can add a single @flow decorator to your main function to transform any Python script into a Prefect workflow.
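        For example, a minimal sketch of turning an ordinary script into a Prefect workflow (the function names are illustrative):

        from prefect import flow, task\n\n@task\ndef say(word: str):\n    print(word)\n\n@flow(log_prints=True)\ndef greet(name: str = \"world\"):\n    say(f\"hello, {name}\")  # plain Python calls; no DAG definition required\n\nif __name__ == \"__main__\":\n    greet()\n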

        ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#incremental-adoption","title":"Incremental adoption","text":"

        The built-in SQLite database automatically tracks all your locally executed flow runs. As soon as you start a Prefect server and open the Prefect UI in your browser (or authenticate your CLI with your Prefect Cloud workspace), you can see all your locally executed flow runs in the UI. You don't even need to start an agent.

        Then, when you want to move toward scheduled, repeatable workflows, you can build a deployment and send it to the server by running a CLI command or a Python script.

        • You can create a deployment that runs on remote infrastructure, where the run environment is defined by a reusable infrastructure block.
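        As a hedged sketch of the simplest path, you can create and schedule a deployment entirely from Python with .serve (the flow name and cron schedule are illustrative):

        from prefect import flow\n\n@flow(log_prints=True)\ndef my_etl():\n    print(\"Running my ETL\")\n\nif __name__ == \"__main__\":\n    # creates a deployment and polls for scheduled runs in this process\n    my_etl.serve(name=\"my-etl-deployment\", cron=\"0 9 * * *\")\n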
        ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#fewer-ambiguities","title":"Fewer ambiguities","text":"

        Prefect 2 eliminates ambiguities in many ways. For example, there is no more confusion between Prefect Core and Prefect Server \u2014 Prefect 2 unifies those into a single open-source product. This product is also much easier to deploy, with no requirement for Docker or docker-compose.

        If you want to switch your backend to use Prefect Cloud for an easier production-level managed experience, Prefect profiles let you quickly connect to your workspace.

        In Prefect 1, there are several confusing ways you could implement caching. Prefect 2 resolves those ambiguities by providing a single cache_key_fn function paired with cache_expiration, allowing you to define arbitrary caching mechanisms \u2014 no more confusion about whether you need to use cache_for, cache_validator, or file-based caching using targets.
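        For example, a minimal sketch of input-based caching using the built-in task_input_hash helper (the task body and URL are illustrative):

        from datetime import timedelta\nfrom prefect import flow, task\nfrom prefect.tasks import task_input_hash\n\n@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))\ndef fetch_data(url: str) -> str:\n    # runs only when the cache key (a hash of the inputs) is new or expired\n    return f\"data from {url}\"\n\n@flow\ndef pipeline():\n    fetch_data(\"https://example.com/data\")  # first call executes the task\n    fetch_data(\"https://example.com/data\")  # identical inputs within 1 hour hit the cache\n\nif __name__ == \"__main__\":\n    pipeline()\n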

        For more details on how to configure caching, check out the following resources:

        • Caching docs
        • Time-based caching
        • Input-based caching

        A similarly confusing concept in Prefect 1 was distinguishing between the functional and imperative APIs. This distinction caused ambiguities with respect to how to define state dependencies between tasks. Prefect 1 users were often unsure whether they should use the functional upstream_tasks keyword argument or the imperative methods such as task.set_upstream(), task.set_downstream(), or flow.set_dependencies(). In Prefect 2, there is only the functional API.
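        For example, here is a hedged sketch of declaring a state dependency between tasks that share no data, using the functional wait_for argument (the task names are illustrative):

        from prefect import flow, task\n\n@task\ndef create_table():\n    print(\"creating table\")\n\n@task\ndef insert_rows():\n    print(\"inserting rows\")\n\n@flow\ndef etl():\n    table = create_table.submit()\n    # no data passes between the tasks, so declare the ordering explicitly\n    insert_rows.submit(wait_for=[table])\n\nif __name__ == \"__main__\":\n    etl()\n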

        ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#next-steps","title":"Next steps","text":"

        We know migrations can be tough. We encourage you to take it step-by-step and experiment with the new features.

        To make the migration process easier for you:

        • We provide a detailed FAQ section to help you find the information you need to move your workflows to Prefect 2. If you still have open questions, feel free to create a new topic describing your migration issue.
        • We have dedicated resources in the Customer Success team to help you along your migration journey. Reach out to cs@prefect.io to discuss how we can help.
        • You can ask questions in our 20,000+ member Community Slack.

        Happy Engineering!

        ","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/moving-data/","title":"Read and Write Data to and from Cloud Provider Storage","text":"

        Writing data to cloud-based storage and reading data from that storage is a common task in data engineering. In this guide we'll learn how to use Prefect to move data to and from AWS, Azure, and GCP blob storage.

        ","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#prerequisites","title":"Prerequisites","text":"
        • Prefect installed
        • Authenticated with Prefect Cloud (or self-hosted Prefect server instance)
        • A cloud provider account (e.g. AWS)
        ","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#install-relevant-prefect-integration-library","title":"Install relevant Prefect integration library","text":"

        In the CLI, install the Prefect integration library for your cloud provider:

        AWS | Azure | GCP

        prefect-aws provides blocks for interacting with AWS services.

        pip install -U prefect-aws\n

        prefect-azure provides blocks for interacting with Azure services.

         pip install -U prefect-azure\n

        prefect-gcp provides blocks for interacting with GCP services.

         pip install -U prefect-gcp\n

        ","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#register-the-block-types","title":"Register the block types","text":"

        Register the new block types with Prefect Cloud (or with your self-hosted Prefect server instance):

        AWS | Azure | GCP

        prefect block register -m prefect_aws  \n

        prefect block register -m prefect_azure \n

        prefect block register -m prefect_gcp\n

        We should see a message in the CLI that several block types were registered. If we check the UI, we should see the new block types listed.

        ","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#create-a-storage-bucket","title":"Create a storage bucket","text":"

        Create a storage bucket in the cloud provider account. Ensure the bucket is publicly accessible or create a user or service account with the appropriate permissions to fetch and write data to the bucket.

        ","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#create-a-credentials-block","title":"Create a credentials block","text":"

        If the bucket is private, there are several options to authenticate:

        1. Ensure the runtime environment is authenticated when the deployment runs.
        2. Create a block with configuration details and reference it when creating the storage block.

        If saving credential details in a block, we can use a credentials block specific to the cloud provider or a more generic Secret block. We can create blocks via the UI or Python code. Below we'll use Python code to create a credentials block for our cloud provider.

        Credentials safety

        Reminder: don't store credential values in public locations such as public Git repositories. In the examples below we use environment variables to store credential values.

        AWS | Azure | GCP
        import os\nfrom prefect_aws import AwsCredentials\n\nmy_aws_creds = AwsCredentials(\n    aws_access_key_id=\"123abc\",\n    aws_secret_access_key=os.environ.get(\"MY_AWS_SECRET_ACCESS_KEY\"),\n)\nmy_aws_creds.save(name=\"my-aws-creds-block\", overwrite=True)\n
        import os\nfrom prefect_azure import AzureBlobStorageCredentials\n\nmy_azure_creds = AzureBlobStorageCredentials(\n    connection_string=os.environ.get(\"MY_AZURE_CONNECTION_STRING\"),\n)\nmy_azure_creds.save(name=\"my-azure-creds-block\", overwrite=True)\n

        We recommend specifying the service account key file contents as a string, rather than the path to the file, because that file might not be available in your production environments.

        import os\nfrom prefect_gcp import GcpCredentials\n\nmy_gcp_creds = GcpCredentials(\n    service_account_info=os.environ.get(\"GCP_SERVICE_ACCOUNT_KEY_FILE_CONTENTS\"),\n)\nmy_gcp_creds.save(name=\"my-gcp-creds-block\", overwrite=True)\n

        Run the code to create the block. We should see a message that the block was created.

        ","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#create-a-storage-block","title":"Create a storage block","text":"

        Let's create a block for the chosen cloud provider using Python code or the UI. In this example we'll use Python code.

        AWS | Azure | GCP

        Note that the S3Bucket block is not the same as the S3 block that ships with Prefect. The S3Bucket block we use in this example is part of the prefect-aws library and provides additional functionality.

        We'll reference the credentials block created above.

        from prefect_aws import AwsCredentials, S3Bucket\n\ns3bucket = S3Bucket(\n    bucket_name=\"my-bucket-name\",\n    credentials=AwsCredentials.load(\"my-aws-creds-block\"),\n)\ns3bucket.save(name=\"my-s3-bucket-block\", overwrite=True)\n

        Note that the AzureBlobStorageCredentials block is not the same as the Azure block that ships with Prefect. The AzureBlobStorageCredentials block we use in this example is part of the prefect-azure library and provides additional functionality.

        Azure Blob Storage doesn't require a separate storage block; the connection string used in the AzureBlobStorageCredentials block can encode the information needed.

        Note that the GcsBucket block is not the same as the GCS block that ships with Prefect. The GcsBucket block is part of the prefect-gcp library and provides additional functionality. We'll use it here.

        We'll reference the credentials block created above.

        from prefect_gcp import GcpCredentials\nfrom prefect_gcp.cloud_storage import GcsBucket\n\ngcsbucket = GcsBucket(\n    bucket=\"my-bucket-name\",\n    gcp_credentials=GcpCredentials.load(\"my-gcp-creds-block\"),\n)\ngcsbucket.save(name=\"my-gcs-bucket-block\", overwrite=True)\n

        Run the code to create the block. We should see a message that the block was created.

        ","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#write-data","title":"Write data","text":"

        Use your new block inside a flow to write data to your cloud provider.

        AWS | Azure | GCP
        from pathlib import Path\nfrom prefect import flow\nfrom prefect_aws.s3 import S3Bucket\n\n@flow()\ndef upload_to_s3():\n    \"\"\"Flow function to upload data\"\"\"\n    path = Path(\"my_path_to/my_file.parquet\")\n    aws_block = S3Bucket.load(\"my-s3-bucket-block\")\n    aws_block.upload_from_path(from_path=path, to_path=path)\n\nif __name__ == \"__main__\":\n    upload_to_s3()\n
        from prefect import flow\nfrom prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import blob_storage_upload\n\n@flow\ndef upload_to_azure():\n    \"\"\"Flow function to upload data\"\"\"\n    blob_storage_credentials = AzureBlobStorageCredentials.load(\n        name=\"my-azure-creds-block\"\n    )\n\n    with open(\"my_path_to/my_file.parquet\", \"rb\") as f:\n        blob_storage_upload(\n            data=f.read(),\n            container=\"my_container\",\n            blob=\"my_path_to/my_file.parquet\",\n            blob_storage_credentials=blob_storage_credentials,\n        )\n\nif __name__ == \"__main__\":\n    upload_to_azure()\n
        from pathlib import Path\nfrom prefect import flow\nfrom prefect_gcp.cloud_storage import GcsBucket\n\n@flow()\ndef upload_to_gcs():\n    \"\"\"Flow function to upload data\"\"\"\n    path = Path(\"my_path_to/my_file.parquet\")\n    gcs_block = GcsBucket.load(\"my-gcs-bucket-block\")\n    gcs_block.upload_from_path(from_path=path, to_path=path)\n\nif __name__ == \"__main__\":\n    upload_to_gcs()\n
        ","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#read-data","title":"Read data","text":"

        Use your block to read data from your cloud provider inside a flow.

        AWS | Azure | GCP
        from prefect import flow\nfrom prefect_aws import S3Bucket\n\n@flow\ndef download_from_s3():\n    \"\"\"Flow function to download data\"\"\"\n    s3_block = S3Bucket.load(\"my-s3-bucket-block\")\n    s3_block.get_directory(\n        from_path=\"my_path_to/my_file.parquet\", \n        local_path=\"my_path_to/my_file.parquet\"\n    )\n\nif __name__ == \"__main__\":\n    download_from_s3()\n
        from prefect import flow\nfrom prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import blob_storage_download\n\n@flow\ndef download_from_azure():\n    \"\"\"Flow function to download data\"\"\"\n    blob_storage_credentials = AzureBlobStorageCredentials.load(\n        name=\"my-azure-creds-block\"\n    )\n    blob_storage_download(\n        blob=\"my_path_to/my_file.parquet\",\n        container=\"my_container\",\n        blob_storage_credentials=blob_storage_credentials,\n    )\n\nif __name__ == \"__main__\":\n    download_from_azure()\n
        from prefect import flow\nfrom prefect_gcp.cloud_storage import GcsBucket\n\n@flow\ndef download_from_gcs():\n    gcs_block = GcsBucket.load(\"my-gcs-bucket-block\")\n    gcs_block.get_directory(\n        from_path=\"my_path_to/my_file.parquet\", \n        local_path=\"my_path_to/my_file.parquet\"\n    )\n\nif __name__ == \"__main__\":\n    download_from_gcs()\n

        In this guide we've seen how to use Prefect to read data from and write data to cloud providers!

        ","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/moving-data/#next-steps","title":"Next steps","text":"

        Check out the prefect-aws, prefect-azure, and prefect-gcp docs to see additional methods for interacting with cloud storage providers. Each library also contains blocks for interacting with other cloud-provider services.

        ","tags":["data","storage","read data","write data","cloud providers","AWS","S3","Azure Storage","Azure Blob Storage","Azure","GCP","Google Cloud Storage","GCS","moving data"],"boost":2},{"location":"guides/prefect-deploy/","title":"Deploying Flows to Work Pools and Workers","text":"

        In this guide, we will configure a deployment that uses a work pool for dynamically provisioned infrastructure.

        All Prefect flow runs are tracked by the API. The API does not require prior registration of flows. With Prefect, you can call a flow locally or in a remote environment and it will be tracked.

        A deployment turns your workflow into an application that can be interacted with and managed via the Prefect API. A deployment enables you to:

        • Schedule flow runs.
        • Specify event triggers for flow runs.
        • Assign one or more tags to organize your deployments and flow runs. You can use those tags as filters in the Prefect UI.
        • Assign custom parameter values for flow runs based on the deployment.
        • Create ad-hoc flow runs from the API or Prefect UI.
        • Upload flow files to a defined storage location for retrieval at run time.
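        For example, ad-hoc flow runs can also be triggered from Python with run_deployment. A minimal sketch, assuming the deployment name and parameters shown are illustrative:

        from prefect.deployments import run_deployment\n\nif __name__ == \"__main__\":\n    # triggers a run of an existing deployment through the API\n    flow_run = run_deployment(\n        name=\"my-flow/my-deployment\",\n        parameters={\"name\": \"Marvin\"},\n    )\n    print(flow_run.id)\n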

        Deployments created with .serve

        A deployment created with the Python flow.serve method or the serve function runs flows in a subprocess on the same machine where the deployment is created. It does not use a work pool or worker.

        ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#work-pool-based-deployments","title":"Work pool-based deployments","text":"

        A work pool-based deployment is useful when you want to dynamically scale the infrastructure where your flow code runs. Work pool-based deployments contain information about the infrastructure type and configuration for your workflow execution.

        Work pool-based deployment infrastructure options include the following:

        • Process - runs flows in a subprocess. In most cases, you're better off using .serve.
        • Docker - runs flows in an ephemeral Docker container.
        • Kubernetes - runs flows as a Kubernetes Job.
        • Serverless Cloud Provider options - runs flows in a Docker container in a serverless cloud provider environment, such as AWS ECS, Azure Container Instance, Google Cloud Run, or Vertex AI.

        The following diagram provides a high-level overview of the conceptual elements involved in defining a work-pool based deployment that is polled by a worker and executes a flow run based on that deployment.

        %%{\n  init: {\n    'theme': 'base',\n    'themeVariables': {\n      'fontSize': '19px'\n    }\n  }\n}%%\n\nflowchart LR\n    F(\"<div style='margin: 5px 10px 5px 5px;'>Flow Code</div>\"):::yellow -.-> A(\"<div style='margin: 5px 10px 5px 5px;'>Deployment Definition</div>\"):::gold\n    subgraph Server [\"<div style='width: 150px; text-align: center; margin-top: 5px;'>Prefect API</div>\"]\n        D(\"<div style='margin: 5px 10px 5px 5px;'>Deployment</div>\"):::green\n    end\n    subgraph Remote Storage [\"<div style='width: 160px; text-align: center; margin-top: 5px;'>Remote Storage</div>\"]\n        B(\"<div style='margin: 5px 6px 5px 5px;'>Flow</div>\"):::yellow\n    end\n    subgraph Infrastructure [\"<div style='width: 150px; text-align: center; margin-top: 5px;'>Infrastructure</div>\"]\n        G(\"<div style='margin: 5px 10px 5px 5px;'>Flow Run</div>\"):::blue\n    end\n\n    A --> D\n    D --> E(\"<div style='margin: 5px 10px 5px 5px;'>Worker</div>\"):::red\n    B -.-> E\n    A -.-> B\n    E -.-> G\n\n    classDef gold fill:goldenrod,stroke:goldenrod,stroke-width:4px,color:black\n    classDef yellow fill:gold,stroke:gold,stroke-width:4px,color:black\n    classDef gray fill:lightgray,stroke:lightgray,stroke-width:4px\n    classDef blue fill:blue,stroke:blue,stroke-width:4px,color:white\n    classDef green fill:green,stroke:green,stroke-width:4px,color:white\n    classDef red fill:red,stroke:red,stroke-width:4px,color:white\n    classDef dkgray fill:darkgray,stroke:darkgray,stroke-width:4px,color:white

        The work pool types above require a worker to be running on your infrastructure to poll a work pool for scheduled flow runs.

        Additional work pool options available with Prefect Cloud

        Prefect Cloud offers other flavors of work pools that don't require a worker:

        • Push Work Pools - serverless cloud options that don't require a worker because Prefect Cloud submits them to your serverless cloud infrastructure on your behalf. Prefect can auto-provision your cloud infrastructure for you and set it up to use your work pool.

        • Managed Execution - Prefect Cloud submits and runs your deployment on serverless infrastructure. No cloud provider account is required.

        In this guide, we focus on deployments that require a worker.

        Work pool-based deployments that use a worker also allow you to assign a work queue name to prioritize work and allow you to limit concurrent runs at the work pool level.

        When creating a deployment that uses a work pool and worker, we must answer two basic questions:

        • What instructions does a worker need to set up an execution environment for our flow? For example, a flow may have Python package requirements, unique Kubernetes settings, or Docker networking configuration.
        • How should the flow code be accessed?

        The tutorial shows how you can create a deployment with a long-running process using .serve and how to move to a work-pool-based deployment setup with .deploy. See the discussion of when you might want to move to work-pool-based deployments there.

        Next, we'll explore how to use .deploy to create deployments with Python code. If you'd prefer to learn about using a YAML-based alternative for managing deployment configuration, skip to the later section on prefect.yaml.

        ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#creating-work-pool-based-deployments-with-deploy","title":"Creating work pool-based deployments with .deploy","text":"","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#automatically-bake-your-code-into-a-docker-image","title":"Automatically bake your code into a Docker image","text":"

        You can create a deployment from Python code by calling the .deploy method on a flow.

        buy.py
        from prefect import flow\n\n\n@flow(log_prints=True)\ndef buy():\n    print(\"Buying securities\")\n\n\nif __name__ == \"__main__\":\n    buy.deploy(\n        name=\"my-code-baked-into-an-image-deployment\", \n        work_pool_name=\"my-docker-pool\", \n        image=\"my_registry/my_image:my_image_tag\"\n    )\n

        Make sure you have the work pool created in the Prefect Cloud workspace you are authenticated to or on your running self-hosted server instance. Then run the script to create a deployment (in future examples this step will be omitted for brevity):

        python buy.py\n

        You should see messages in your terminal that Docker is building your image. When the deployment build succeeds you will see helpful information in your terminal showing you how to start a worker for your deployment and how to run your deployment. Your deployment will be visible on the Deployments page in the UI.

        By default, .deploy will build a Docker image with your flow code baked into it and push the image to the Docker Hub registry specified in the image argument.

        Authentication to Docker Hub

        You need your environment to be authenticated to your Docker registry to push an image to it.

        You can specify a registry other than Docker Hub by providing the full registry path in the image argument.

        Warning

        If building a Docker image, the environment in which you are creating the deployment needs to have Docker installed and running.

        To avoid pushing to a registry, set push=False in the .deploy method.

        if __name__ == \"__main__\":\n    buy.deploy(\n        name=\"my-code-baked-into-an-image-deployment\", \n        work_pool_name=\"my-docker-pool\", \n        image=\"my_registry/my_image:my_image_tag\",\n        push=False\n    )\n

        To avoid building an image, set build=False in the .deploy method.

        if __name__ == \"__main__\":\n    buy.deploy(\n        name=\"my-code-baked-into-an-image-deployment\", \n        work_pool_name=\"my-docker-pool\", \n        image=\"discdiver/no-build-image:1.0\",\n        build=False\n    )\n

        The specified image will need to be available in your deployment's execution environment for your flow code to be accessible.

        Prefect generates a Dockerfile for you that will build an image based off of one of Prefect's published images. The generated Dockerfile will copy the current directory into the Docker image and install any dependencies listed in a requirements.txt file.

        ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#automatically-build-a-custom-docker-image-with-a-local-dockerfile","title":"Automatically build a custom Docker image with a local Dockerfile","text":"

        If you want to use a custom Dockerfile, you can specify the path to the Dockerfile with the DeploymentImage class:

        custom_dockerfile.py
        from prefect import flow\nfrom prefect.deployments import DeploymentImage\n\n\n@flow(log_prints=True)\ndef buy():\n    print(\"Buying securities\")\n\n\nif __name__ == \"__main__\":\n    buy.deploy(\n        name=\"my-custom-dockerfile-deployment\", \n        work_pool_name=\"my-docker-pool\", \n        image=DeploymentImage(\n            name=\"my_image\",\n            tag=\"deploy-guide\",\n            dockerfile=\"Dockerfile\"\n    ),\n    push=False\n)\n

        The DeploymentImage object allows for a great deal of image customization.

        For example, you can install a private Python package from GCP's artifact registry like this:

        Create a custom base Dockerfile.

        FROM python:3.10\n\nARG AUTHED_ARTIFACT_REG_URL\nCOPY ./requirements.txt /requirements.txt\n\nRUN pip install --extra-index-url ${AUTHED_ARTIFACT_REG_URL} -r /requirements.txt\n

        Create our deployment by leveraging the DeploymentImage class.

        private-package.py
        from prefect import flow\nfrom prefect.deployments.runner import DeploymentImage\nfrom prefect.blocks.system import Secret\nfrom my_private_package import do_something_cool\n\n\n@flow(log_prints=True)\ndef my_flow():\n    do_something_cool()\n\n\nif __name__ == \"__main__\":\n    artifact_reg_url: Secret = Secret.load(\"artifact-reg-url\")\n\n    my_flow.deploy(\n        name=\"my-deployment\",\n        work_pool_name=\"k8s-demo\",\n        image=DeploymentImage(\n            name=\"my-image\",\n            tag=\"test\",\n            dockerfile=\"Dockerfile\",\n            buildargs={\"AUTHED_ARTIFACT_REG_URL\": artifact_reg_url.get()},\n        ),\n    )\n

        Note that we used a Prefect Secret block to load the URL configuration for the artifact registry above.

        See all the optional keyword arguments for the DeploymentImage class here.

        Default Docker namespace

        You can set the PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE setting to append a default Docker namespace to all images you build with .deploy. This is great if you use a private registry to store your images.

        To set a default Docker namespace for your current profile run:

        prefect config set PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE=<docker-registry-url>/<organization-or-username>\n

        Once set, you can omit the namespace from your image name when creating a deployment:

        with_default_docker_namespace.py
        if __name__ == \"__main__\":\n    buy.deploy(\n        name=\"my-code-baked-into-an-image-deployment\", \n        work_pool_name=\"my-docker-pool\", \n        image=\"my_image:my_image_tag\"\n    )\n

        The above code will build an image with the format <docker-registry-url>/<organization-or-username>/my_image:my_image_tag when PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE is set.

        While baking code into Docker images is a popular deployment option, many teams decide to store their workflow code in git-based storage, such as GitHub, Bitbucket, or GitLab. Let's see how to do that next.

        ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#store-your-code-in-git-based-cloud-storage","title":"Store your code in git-based cloud storage","text":"

        If you don't specify an image argument for .deploy, then you need to specify where to pull the flow code from at runtime with the from_source method.

        Here's how we can pull our flow code from a GitHub repository.

        git_storage.py
        from prefect import flow\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        \"https://github.com/my_github_account/my_repo.git\",\n        entrypoint=\"flows/no-image.py:hello_world\",\n    ).deploy(\n        name=\"no-image-deployment\",\n        work_pool_name=\"my_pool\",\n        build=False\n    )\n

        The entrypoint is the path to the file the flow is located in and the function name, separated by a colon.

        Alternatively, you could specify a git-based cloud storage URL for a Bitbucket or GitLab repository.

        Note

        If you don't specify an image as part of your deployment creation, the image specified in the work pool will be used to run your flow.

        After creating a deployment you might change your flow code. Generally, you can just push your code to GitHub, without rebuilding your deployment. The exception is if something that the server needs to know about changes, such as the flow entrypoint parameters. Rerunning the Python script with .deploy will update your deployment on the server with the new flow code.

        If you need to provide additional configuration, such as specifying a private repository, you can provide a GitRepository object instead of a URL:

        private_git_storage.py
        from prefect import flow\nfrom prefect.runner.storage import GitRepository\nfrom prefect.blocks.system import Secret\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        source=GitRepository(\n        url=\"https://github.com/org/private-repo.git\",\n        branch=\"dev\",\n        credentials={\n            \"access_token\": Secret.load(\"github-access-token\")\n        }\n    ),\n    entrypoint=\"flows/no-image.py:hello_world\",\n    ).deploy(\n        name=\"private-git-storage-deployment\",\n        work_pool_name=\"my_pool\",\n        build=False\n    )\n

        Note the use of the Secret block to load the GitHub access token. Alternatively, you could provide a username and password to the username and password fields of the credentials argument.
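        A hedged sketch of that alternative, assuming the credentials dictionary accepts username and password keys as described and that a Secret block named github-password exists (both names are illustrative):

        from prefect import flow\nfrom prefect.runner.storage import GitRepository\nfrom prefect.blocks.system import Secret\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        source=GitRepository(\n            url=\"https://github.com/org/private-repo.git\",\n            branch=\"dev\",\n            credentials={\n                \"username\": \"my-username\",  # illustrative\n                \"password\": Secret.load(\"github-password\"),  # hypothetical block name\n            },\n        ),\n        entrypoint=\"flows/no-image.py:hello_world\",\n    ).deploy(\n        name=\"private-git-user-pass-deployment\",\n        work_pool_name=\"my_pool\",\n        build=False,\n    )\n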

        ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#store-your-code-in-cloud-provider-storage","title":"Store your code in cloud provider storage","text":"

        Another option for flow code storage is any fsspec-supported storage location, such as AWS S3, GCP GCS, or Azure Blob Storage.

        For example, you can pass the S3 bucket path to source.

        s3_storage.py
        from prefect import flow\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        source=\"s3://my-bucket/my-folder\",\n        entrypoint=\"flows.py:my_flow\",\n    ).deploy(\n        name=\"deployment-from-aws-flow\",\n        work_pool_name=\"my_pool\",\n    )\n

        In the example above, your credentials are auto-discovered from your deployment creation environment; they must also be available in your runtime environment.

        If you need additional configuration for your cloud-based storage - for example, with a private S3 Bucket - we recommend using a storage block. A storage block also ensures your credentials will be available in both your deployment creation environment and your execution environment.

        Here's an example that uses an S3Bucket block from the prefect-aws library.

        s3_storage_auth.py
        from prefect import flow\nfrom prefect_aws.s3 import S3Bucket\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        source=S3Bucket.load(\"my-code-storage\"), entrypoint=\"my_file.py:my_flow\"\n    ).deploy(name=\"test-s3\", work_pool_name=\"my_pool\")\n

        If you are familiar with the deployment creation mechanics with .serve, you will notice that .deploy is very similar. .deploy just requires a work pool name and has a number of parameters dealing with flow-code storage for Docker images.

        With .deploy, if you don't specify an image to use for your flow, you must specify where to pull the flow code from at runtime with the from_source method; from_source is optional with .serve.

        ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#additional-configuration-with-deploy","title":"Additional configuration with .deploy","text":"

        Our examples thus far have explored options for where to store flow code. Let's turn our attention to other deployment configuration options.

        To pass parameters to your flow, you can use the parameters argument in the .deploy method. Just pass in a dictionary of key-value pairs.

        pass_params.py
        from prefect import flow\n\n@flow\ndef hello_world(name: str):\n    print(f\"Hello, {name}!\")\n\nif __name__ == \"__main__\":\n    hello_world.deploy(\n        name=\"pass-params-deployment\",\n        work_pool_name=\"my_pool\",\n        parameters=dict(name=\"Prefect\"),\n        image=\"my_registry/my_image:my_image_tag\",\n    )\n

        The job_variables parameter allows you to fine-tune the infrastructure settings for a deployment. The values passed in override default values in the specified work pool's base job template.

        You can override job variables, such as image_pull_policy and image, for a specific deployment with the job_variables argument.

        job_var_image_pull.py
        if __name__ == \"__main__\":\n    get_repo_info.deploy(\n        name=\"my-deployment-never-pull\", \n        work_pool_name=\"my-docker-pool\", \n        job_variables={\"image_pull_policy\": \"Never\"},\n        image=\"my-image:my-tag\"\",\n        push=False\n    )\n

        Similarly, you can override the environment variables specified in a work pool through the job_variables parameter:

        job_var_env_vars.py
        if __name__ == \"__main__\":\n    get_repo_info.deploy(\n        name=\"my-deployment-never-pull\", \n        work_pool_name=\"my-docker-pool\", \n        job_variables={\"env\": {\"EXTRA_PIP_PACKAGES\": \"boto3\"} },\n        image=\"my-image:my-tag\"\",\n        push=False\n    )\n

        The dictionary key \"EXTRA_PIP_PACKAGES\" denotes a special environment variable that Prefect will use to install additional Python packages at runtime. This approach is an alternative to building an image with a custom requirements.txt copied into it.

        For more information on overriding job variables see this guide.

        ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#working-with-multiple-deployments-with-deploy","title":"Working with multiple deployments with deploy","text":"

        You can create multiple deployments from one or more Python files that use .deploy. These deployments can be managed independently of one another, allowing you to deploy the same flow with different configurations in the same codebase.

        To create multiple work pool-based deployments at once you can use the deploy function, which is analogous to the serve function.

        from prefect import deploy, flow\n\n@flow(log_prints=True)\ndef buy():\n    print(\"Buying securities\")\n\n\nif __name__ == \"__main__\":\n    deploy(\n        buy.to_deployment(name=\"dev-deploy\", work_pool_name=\"my-dev-work-pool\"),\n        buy.to_deployment(name=\"prod-deploy\", work_pool_name=\"my-prod-work-pool\"),\n        image=\"my-registry/my-image:dev\",\n        push=False,\n    )\n

        Note that in the example above we created two deployments from the same flow, but with different work pools. Alternatively, we could have created two deployments from different flows.

        from prefect import deploy, flow\n\n@flow(log_prints=True)\ndef buy():\n    print(\"Buying securities.\")\n\n@flow(log_prints=True)\ndef sell():\n    print(\"Selling securities.\")\n\n\nif __name__ == \"__main__\":\n    deploy(\n        buy.to_deployment(name=\"buy-deploy\"),\n        sell.to_deployment(name=\"sell-deploy\"),\n        work_pool_name=\"my-dev-work-pool\",\n        image=\"my-registry/my-image:dev\",\n        push=False,\n    )\n

        In the example above the code for both flows gets baked into the same image.

        We can specify that one or more flows should be pulled from a remote location at runtime by using the from_source method. Here's an example of deploying two flows, one defined locally and one defined in a remote repository:

        from prefect import deploy, flow\n\n\n@flow(log_prints=True)\ndef local_flow():\n    print(\"I'm a flow!\")\n\nif __name__ == \"__main__\":\n    deploy(\n        local_flow.to_deployment(name=\"example-deploy-local-flow\"),\n        flow.from_source(\n            source=\"https://github.com/org/repo.git\",\n            entrypoint=\"flows.py:my_flow\",\n        ).to_deployment(\n            name=\"example-deploy-remote-flow\",\n        ),\n        work_pool_name=\"my-work-pool\",\n        image=\"my-registry/my-image:dev\",\n    )\n

        You could pass any number of flows to the deploy function. This behavior is useful if using a monorepo approach to your workflows.

        ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#creating-work-pool-based-deployments-with-prefectyaml","title":"Creating work pool-based deployments with prefect.yaml","text":"

        The prefect.yaml file is a YAML file describing base settings for your deployments, procedural steps for preparing deployments, and instructions for preparing the execution environment for a deployment run.

        You can initialize your deployment configuration, which creates the prefect.yaml file, by running the CLI command prefect init in any directory or repository that stores your flow code.

        Deployment configuration recipes

        Prefect ships with many off-the-shelf \"recipes\" that allow you to get started with more structure within your prefect.yaml file; run prefect init to be prompted with available recipes in your installation. You can provide a recipe name in your initialization command with the --recipe flag, otherwise Prefect will attempt to guess an appropriate recipe based on the structure of your working directory (for example if you initialize within a git repository, Prefect will use the git recipe).

        The prefect.yaml file contains deployment configuration for deployments created from this file, default instructions for how to build and push any necessary code artifacts (such as Docker images), and default instructions for pulling a deployment in remote execution environments (e.g., cloning a GitHub repository).

        Any deployment configuration can be overridden via options available on the prefect deploy CLI command when creating a deployment.

        prefect.yaml file flexibility

        In older versions of Prefect, this file had to be in the root of your repository or project directory and named prefect.yaml. Now this file can be located in a directory outside the project or a subdirectory inside the project. It can be named differently, provided the filename ends in .yaml. You can even have multiple prefect.yaml files with the same name in different directories. By default, prefect deploy will use a prefect.yaml file in the project's root directory. To use a custom deployment configuration file, supply the new --prefect-file CLI argument when running the deploy command from the root of your project directory:

        prefect deploy --prefect-file path/to/my_file.yaml

        The base structure for prefect.yaml is as follows:

        # generic metadata\nprefect-version: null\nname: null\n\n# preparation steps\nbuild: null\npush: null\n\n# runtime steps\npull: null\n\n# deployment configurations\ndeployments:\n- # base metadata\n    name: null\n    version: null\n    tags: []\n    description: null\n    schedule: null\n\n    # flow-specific fields\n    entrypoint: null\n    parameters: {}\n\n    # infra-specific fields\n    work_pool:\n    name: null\n    work_queue_name: null\n    job_variables: {}\n

        The metadata fields are always pre-populated for you. These fields are for bookkeeping purposes only. The other sections are pre-populated based on recipe; if no recipe is provided, Prefect will attempt to guess an appropriate one based on local configuration.

        You can create deployments via the CLI command prefect deploy without ever needing to alter the deployments section of your prefect.yaml file \u2014 the prefect deploy command will help in deployment creation via interactive prompts. The prefect.yaml file facilitates version-controlling your deployment configuration and managing multiple deployments.

        ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#deployment-actions","title":"Deployment actions","text":"

        Deployment actions defined in your prefect.yaml file control the lifecycle of the creation and execution of your deployments. The three actions available are build, push, and pull. pull is the only required deployment action \u2014 it is used to define how Prefect will pull your deployment in remote execution environments.

        Each action is defined as a list of steps that execute in sequence.

        Each step has the following format:

        section:\n- prefect_package.path.to.importable.step:\n    id: \"step-id\" # optional\n    requires: \"pip-installable-package-spec\" # optional\n    kwarg1: value\n    kwarg2: more-values\n

        Every step can optionally provide a requires field that Prefect will use to auto-install in the event that the step cannot be found in the current environment. Each step can also specify an id for the step which is used when referencing step outputs in later steps. The additional fields map directly onto Python keyword arguments to the step function. Within a given section, steps always run in the order that they are provided within the prefect.yaml file.

        Deployment Instruction Overrides

        build, push, and pull sections can all be overridden on a per-deployment basis by defining build, push, and pull fields within a deployment definition in the prefect.yaml file.

        The prefect deploy command will use any build, push, or pull instructions provided in a deployment's definition in the prefect.yaml file.

        This capability is useful with multiple deployments that require different deployment instructions.

        ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#the-build-action","title":"The build action","text":"

        The build section of prefect.yaml is where any necessary side effects for running your deployments are built - the most common type of side effect produced here is a Docker image. If you initialize with the docker recipe, you will be prompted to provide required information, such as image name and tag:

        prefect init --recipe docker\n>> image_name: < insert image name here >\n>> tag: < insert image tag here >\n

        Use --field to avoid the interactive experience

        We recommend that you only initialize a recipe when you are first creating your deployment structure, and afterwards store your configuration files within version control. However, sometimes you may need to initialize programmatically and avoid the interactive prompts. To do so, provide all required fields for your recipe using the --field flag:

        prefect init --recipe docker \\\n    --field image_name=my-repo/my-image \\\n    --field tag=my-tag\n

        build:\n- prefect_docker.deployments.steps.build_docker_image:\n    requires: prefect-docker>=0.3.0\n    image_name: my-repo/my-image\n    tag: my-tag\n    dockerfile: auto\n    push: true\n

        Once you've confirmed that these fields are set to their desired values, this step will automatically build a Docker image with the provided name and tag and push it to the repository referenced by the image name. As the prefect-docker package documentation notes, this step produces a few fields that can optionally be used in future steps or within prefect.yaml as template values. It is best practice to use {{ image }} within prefect.yaml (specifically the work pool's job variables section) so that you don't risk having your build step and deployment specification get out of sync with hardcoded values.

        Note

        Note that in the build step example above, we relied on the prefect-docker package; in cases that deal with external services, additional packages are often required and will be auto-installed for you.

        Pass output to downstream steps

        Each deployment action can be composed of multiple steps. For example, if you wanted to build a Docker image tagged with the current commit hash, you could use the run_shell_script step and feed the output into the build_docker_image step:

        build:\n    - prefect.deployments.steps.run_shell_script:\n        id: get-commit-hash\n        script: git rev-parse --short HEAD\n        stream_output: false\n    - prefect_docker.deployments.steps.build_docker_image:\n        requires: prefect-docker\n        image_name: my-image\n        tag: \"{{ get-commit-hash.stdout }}\"\n        dockerfile: auto\n

        Note that the id field is used in the run_shell_script step so that its output can be referenced in the next step.

        ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#the-push-action","title":"The push action","text":"

        The push section is most critical for situations in which code is not stored on persistent filesystems or in version control. In this scenario, code is often pushed and pulled from a Cloud storage bucket of some kind (e.g., S3, GCS, Azure Blobs, etc.). The push section allows users to specify and customize the logic for pushing this code repository to arbitrary remote locations.

        For example, a user wishing to store their code in an S3 bucket and rely on default worker settings for its runtime environment could use the s3 recipe:

        prefect init --recipe s3\n>> bucket: < insert bucket name here >\n

        Inspecting our newly created prefect.yaml file we find that the push and pull sections have been templated out for us as follows:

        push:\n- prefect_aws.deployments.steps.push_to_s3:\n    id: push-code\n    requires: prefect-aws>=0.3.0\n    bucket: my-bucket\n    folder: project-name\n    credentials: null\n\npull:\n- prefect_aws.deployments.steps.pull_from_s3:\n    requires: prefect-aws>=0.3.0\n    bucket: my-bucket\n    folder: \"{{ push-code.folder }}\"\n    credentials: null\n

        The bucket has been populated with our provided value (which also could have been provided with the --field flag); note that the folder property of the pull step is a template - the push_to_s3 step outputs both a bucket value and a folder value that can be used to template downstream steps. Doing this helps you keep your steps consistent across edits.

        As discussed above, if you are using blocks, the credentials section can be templated with a block reference for secure and dynamic credentials access:

        push:\n- prefect_aws.deployments.steps.push_to_s3:\n    requires: prefect-aws>=0.3.0\n    bucket: my-bucket\n    folder: project-name\n    credentials: \"{{ prefect.blocks.aws-credentials.dev-credentials }}\"\n

        Anytime you run prefect deploy, this push section will be executed upon successful completion of your build section. For more information on the mechanics of steps, see below.

        ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#the-pull-action","title":"The pull action","text":"

        The pull section is the most important section within the prefect.yaml file. It contains instructions for preparing your flows for a deployment run. These instructions will be executed each time a deployment created within this folder is run via a worker.

        There are three main types of steps that typically show up in a pull section:

        • set_working_directory: this step simply sets the working directory for the process prior to importing your flow
        • git_clone: this step clones the provided repository on the provided branch
        • pull_from_{cloud}: this step pulls the working directory from a Cloud storage location (e.g., S3)

        Use block and variable references

        All block and variable references within your pull step will remain unresolved until runtime and will be pulled each time your deployment is run. This allows you to avoid storing sensitive information insecurely; it also allows you to manage certain types of configuration from the API and UI without having to rebuild your deployment every time.

        Below is an example of how to use an existing GitHubCredentials block to clone a private GitHub repository:

        pull:\n    - prefect.deployments.steps.git_clone:\n        repository: https://github.com/org/repo.git\n        credentials: \"{{ prefect.blocks.github-credentials.my-credentials }}\"\n

        Alternatively, you can specify a BitBucketCredentials or GitLabCredentials block to clone from Bitbucket or GitLab. In lieu of a credentials block, you can also provide a GitHub, GitLab, or Bitbucket token directly to the access_token field. You can use a Secret block to do this securely:

        pull:\n    - prefect.deployments.steps.git_clone:\n        repository: https://bitbucket.org/org/repo.git\n        access_token: \"{{ prefect.blocks.secret.bitbucket-token }}\"\n
        ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#utility-steps","title":"Utility steps","text":"

        Utility steps can be used within a build, push, or pull action to assist in managing the deployment lifecycle:

        • run_shell_script allows for the execution of one or more shell commands in a subprocess, and returns the standard output and standard error of the script. This step is useful for scripts that require execution in a specific environment, or those which have specific input and output requirements.

        Here is an example of retrieving the short Git commit hash of the current repository to use as a Docker image tag:

        build:\n    - prefect.deployments.steps.run_shell_script:\n        id: get-commit-hash\n        script: git rev-parse --short HEAD\n        stream_output: false\n    - prefect_docker.deployments.steps.build_docker_image:\n        requires: prefect-docker>=0.3.0\n        image_name: my-image\n        tag: \"{{ get-commit-hash.stdout }}\"\n        dockerfile: auto\n

        Provided environment variables are not expanded by default

        To expand environment variables in your shell script, set expand_env_vars: true in your run_shell_script step. For example:

        - prefect.deployments.steps.run_shell_script:\n    id: get-user\n    script: echo $USER\n    stream_output: true\n    expand_env_vars: true\n

        Without expand_env_vars: true, the above step would return a literal string $USER instead of the current user.

        • pip_install_requirements installs dependencies from a requirements.txt file within a specified directory.

        Below is an example of installing dependencies from a requirements.txt file after cloning:

        pull:\n    - prefect.deployments.steps.git_clone:\n        id: clone-step  # needed in order to be referenced in subsequent steps\n        repository: https://github.com/org/repo.git\n    - prefect.deployments.steps.pip_install_requirements:\n        directory: \"{{ clone-step.directory }}\"  # `clone-step` is a user-provided `id` field\n        requirements_file: requirements.txt\n

        Below is an example that retrieves an access token from a 3rd party Key Vault and uses it in a private clone step:

        pull:\n- prefect.deployments.steps.run_shell_script:\n    id: get-access-token\n    script: az keyvault secret show --name <secret name> --vault-name <secret vault> --query \"value\" --output tsv\n    stream_output: false\n- prefect.deployments.steps.git_clone:\n    repository: https://bitbucket.org/samples/deployments.git\n    branch: master\n    access_token: \"{{ get-access-token.stdout }}\"\n

        You can also run custom steps by packaging them. In the example below, retrieve_secrets is a custom Python module that has been packaged into the default working directory of a Docker image (which is /opt/prefect by default). main is the function entry point, which returns an access token (e.g. return {\"access_token\": access_token}) like the preceding example, but utilizing the Azure Python SDK for retrieval.

        - retrieve_secrets.main:\n    id: get-access-token\n- prefect.deployments.steps.git_clone:\n    repository: https://bitbucket.org/samples/deployments.git\n    branch: master\n    access_token: '{{ get-access-token.access_token }}'\n
        ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#templating-options","title":"Templating options","text":"

        Values that you place within your prefect.yaml file can reference dynamic values in several different ways:

        • step outputs: every step of both build and push produces named fields such as image_name; you can reference these fields within prefect.yaml and prefect deploy will populate them with each call. References must be enclosed in double brackets and be of the form \"{{ field_name }}\".
        • blocks: Prefect blocks can also be referenced with the special syntax {{ prefect.blocks.block_type.block_slug }}. It is highly recommended that you use block references for any sensitive information (such as a GitHub access token or any credentials) to avoid hardcoding these values in plaintext.
        • variables: Prefect variables can also be referenced with the special syntax {{ prefect.variables.variable_name }}. Variables can be used to reference non-sensitive, reusable pieces of information such as a default image name or a default work pool name.
        • environment variables: you can also reference environment variables with the special syntax {{ $MY_ENV_VAR }}. This is especially useful for referencing environment variables that are set at runtime.

        As an example, consider the following prefect.yaml file:

        build:\n- prefect_docker.deployments.steps.build_docker_image:\n    id: build-image\n    requires: prefect-docker>=0.3.0\n    image_name: my-repo/my-image\n    tag: my-tag\n    dockerfile: auto\n    push: true\n\ndeployments:\n- # base metadata\n    name: null\n    version: \"{{ build-image.tag }}\"\n    tags:\n        - \"{{ $my_deployment_tag }}\"\n        - \"{{ prefect.variables.some_common_tag }}\"\n    description: null\n    schedule: null\n\n    # flow-specific fields\n    entrypoint: null\n    parameters: {}\n\n    # infra-specific fields\n    work_pool:\n        name: \"my-k8s-work-pool\"\n        work_queue_name: null\n        job_variables:\n            image: \"{{ build-image.image }}\"\n            cluster_config: \"{{ prefect.blocks.kubernetes-cluster-config.my-favorite-config }}\"\n

        So long as our build steps produce fields called image_name and tag, every time we deploy a new version of our deployment, the {{ build-image.image }} variable will be dynamically populated with the relevant values.

        Docker step

        The most commonly used build step is prefect_docker.deployments.steps.build_docker_image which produces both the image_name and tag fields.

        For an example, check out the deployments tutorial.

        A prefect.yaml file can have multiple deployment configurations that control the behavior of several deployments. These deployments can be managed independently of one another, allowing you to deploy the same flow with different configurations in the same codebase.

        ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#working-with-multiple-deployments-with-prefectyaml","title":"Working with multiple deployments with prefect.yaml","text":"

        Prefect supports multiple deployment declarations within the prefect.yaml file. This method of declaring multiple deployments allows the configuration for all deployments to be version controlled and deployed with a single command.

        New deployment declarations can be added to the prefect.yaml file by adding a new entry to the deployments list. Each deployment declaration must have a unique name field which is used to select deployment declarations when using the prefect deploy command.

        Warning

        When using a prefect.yaml file that is in another directory or differently named, remember that the value for the deployment entrypoint must be relative to the root directory of the project.

        For example, consider the following prefect.yaml file:

        build: ...\npush: ...\npull: ...\n\ndeployments:\n- name: deployment-1\n    entrypoint: flows/hello.py:my_flow\n    parameters:\n        number: 42,\n        message: Don't panic!\n    work_pool:\n        name: my-process-work-pool\n        work_queue_name: primary-queue\n\n- name: deployment-2\n    entrypoint: flows/goodbye.py:my_other_flow\n    work_pool:\n        name: my-process-work-pool\n        work_queue_name: secondary-queue\n\n- name: deployment-3\n    entrypoint: flows/hello.py:yet_another_flow\n    work_pool:\n        name: my-docker-work-pool\n        work_queue_name: tertiary-queue\n

        This file has three deployment declarations, each referencing a different flow. Each deployment declaration has a unique name field and can be deployed individually by using the --name flag when deploying.

        For example, to deploy deployment-1 you would run:

        prefect deploy --name deployment-1\n

        To deploy multiple deployments you can provide multiple --name flags:

        prefect deploy --name deployment-1 --name deployment-2\n

        To deploy multiple deployments with the same name, you can prefix the deployment name with its flow name:

        prefect deploy --name my_flow/deployment-1 --name my_other_flow/deployment-1\n

        To deploy all deployments you can use the --all flag:

        prefect deploy --all\n

        To deploy deployments that match a pattern you can run:

        prefect deploy -n my-flow/* -n *dev/my-deployment -n dep*prod\n

        The above command will deploy all deployments from the flow my-flow, all flows ending in dev with a deployment named my-deployment, and all deployments starting with dep and ending in prod.

        CLI Options When Deploying Multiple Deployments

        When deploying more than one deployment with a single prefect deploy command, any additional attributes provided via the CLI will be ignored.

        To provide overrides to a deployment via the CLI, you must deploy that deployment individually.

        ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#reusing-configuration-across-deployments","title":"Reusing configuration across deployments","text":"

        Because a prefect.yaml file is a standard YAML file, you can use YAML aliases to reuse configuration across deployments.

        This functionality is useful when multiple deployments need to share the work pool configuration, deployment actions, or other configurations.

        You can declare a YAML alias by using the &{alias_name} syntax and insert that alias elsewhere in the file with the *{alias_name} syntax. When aliasing YAML maps, you can also override specific fields of the aliased map by using the <<: *{alias_name} syntax and adding additional fields below.

        We recommend adding a definitions section to your prefect.yaml file at the same level as the deployments section to store your aliases.

        For example, consider the following prefect.yaml file:

        build: ...\npush: ...\npull: ...\n\ndefinitions:\n    work_pools:\n        my_docker_work_pool: &my_docker_work_pool\n            name: my-docker-work-pool\n            work_queue_name: default\n            job_variables:\n                image: \"{{ build-image.image }}\"\n    schedules:\n        every_ten_minutes: &every_10_minutes\n            interval: 600\n    actions:\n        docker_build: &docker_build\n            - prefect_docker.deployments.steps.build_docker_image: &docker_build_config\n                id: build-image\n                requires: prefect-docker>=0.3.0\n                image_name: my-example-image\n                tag: dev\n                dockerfile: auto\n                push: true\n\ndeployments:\n- name: deployment-1\n    entrypoint: flows/hello.py:my_flow\n    schedule: *every_10_minutes\n    parameters:\n        number: 42,\n        message: Don't panic!\n    work_pool: *my_docker_work_pool\n    build: *docker_build # Uses the full docker_build action with no overrides\n\n- name: deployment-2\n    entrypoint: flows/goodbye.py:my_other_flow\n    work_pool: *my_docker_work_pool\n    build:\n        - prefect_docker.deployments.steps.build_docker_image:\n            <<: *docker_build_config # Uses the docker_build_config alias and overrides the dockerfile field\n            dockerfile: Dockerfile.custom\n\n- name: deployment-3\n    entrypoint: flows/hello.py:yet_another_flow\n    schedule: *every_10_minutes\n    work_pool:\n        name: my-process-work-pool\n        work_queue_name: primary-queue\n

        In the above example, we are using YAML aliases to reuse work pool, schedule, and build configuration across multiple deployments:

        • deployment-1 and deployment-2 are using the same work pool configuration
        • deployment-1 and deployment-3 are using the same schedule
        • deployment-1 and deployment-2 are using the same build deployment action, but deployment-2 is overriding the dockerfile field to use a custom Dockerfile
        ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#deployment-declaration-reference","title":"Deployment declaration reference","text":"","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#deployment-fields","title":"Deployment fields","text":"

        Below are fields that can be added to each deployment declaration.

        • name: The name to give to the created deployment. Used with the prefect deploy command to create or update specific deployments.
        • version: An optional version for the deployment.
        • tags: A list of strings to assign to the deployment as tags.
        • description: An optional description for the deployment.
        • schedule: An optional schedule to assign to the deployment. Fields for this section are documented in the Schedule Fields section.
        • triggers: An optional array of triggers to assign to the deployment.
        • entrypoint: Required path to the .py file containing the flow you want to deploy (relative to the root directory of your development folder) combined with the name of the flow function. Should be in the format path/to/file.py:flow_function_name.
        • parameters: Optional default values to provide for the parameters of the deployed flow. Should be an object with key/value pairs.
        • enforce_parameter_schema: Boolean flag that determines whether the API should validate the parameters passed to a flow run against the parameter schema generated for the deployed flow.
        • work_pool: Information on where to schedule flow runs for the deployment. Fields for this section are documented in the Work Pool Fields section.","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#schedule-fields","title":"Schedule fields","text":"

        Below are fields that can be added to a deployment declaration's schedule section.

        • interval: Number of seconds indicating the time between flow runs. Cannot be used in conjunction with cron or rrule.
        • anchor_date: Datetime string indicating the starting or \"anchor\" date to begin the schedule. If no anchor_date is supplied, the current UTC time is used. Can only be used with interval.
        • timezone: String name of a time zone, used to enforce localization behaviors like DST boundaries. See the IANA Time Zone Database for valid time zones.
        • cron: A valid cron string. Cannot be used in conjunction with interval or rrule.
        • day_or: Boolean indicating how croniter handles day and day_of_week entries. Must be used with cron. Defaults to True.
        • rrule: String representation of an RRule schedule. See the rrulestr examples for syntax. Cannot be used in conjunction with interval or cron.

        For more information about schedules, see the Schedules concept doc.
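        For illustration, here is a minimal sketch of a deployment declaration that uses the cron and timezone fields to run every weekday at 9 AM Eastern time (the deployment name, entrypoint, and work pool below are placeholders):

        deployments:\n- name: my-scheduled-deployment\n    entrypoint: flows/hello.py:my_flow\n    schedule:\n        cron: \"0 9 * * 1-5\"\n        timezone: \"America/New_York\"\n    work_pool:\n        name: my-process-work-pool\n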

        ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#work-pool-fields","title":"Work pool fields","text":"

        Below are fields that can be added to a deployment declaration's work_pool section.

        • name: The name of the work pool to schedule flow runs in for the deployment.
        • work_queue_name: The name of the work queue within the specified work pool to schedule flow runs in for the deployment. If not provided, the default queue for the specified work pool will be used.
        • job_variables: Values used to override the default values in the specified work pool's base job template. Maps directly to a created deployment's infra_overrides attribute.","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#deployment-mechanics","title":"Deployment mechanics","text":"

        Anytime you run prefect deploy in a directory that contains a prefect.yaml file, the following actions are taken in order:

        • The prefect.yaml file is loaded. First, the build section is loaded and all variable and block references are resolved. The steps are then run in the order provided.
        • Next, the push section is loaded and all variable and block references are resolved; the steps within this section are then run in the order provided.
        • Next, the pull section is templated with any step outputs but is not run. Note that block references are not hydrated for security purposes; block references are always resolved at runtime.
        • Next, all variable and block references are resolved within the deployment declaration. All flags provided via the prefect deploy CLI are then overlaid on the values loaded from the file.
        • The final step occurs when the fully realized deployment specification is registered with the Prefect API.

        Deployment Instruction Overrides

        The build, push, and pull sections in deployment definitions take precedence over the corresponding sections above them in prefect.yaml.
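        As a minimal sketch (the repository URL and deployment names are placeholders), a single deployment can supply its own pull section that replaces the top-level one:

        pull:\n    - prefect.deployments.steps.set_working_directory:\n        directory: /opt/prefect\n\ndeployments:\n- name: uses-default-pull\n    entrypoint: flows/hello.py:my_flow\n\n- name: uses-custom-pull\n    entrypoint: flows/hello.py:my_flow\n    pull:\n        - prefect.deployments.steps.git_clone:\n            repository: https://github.com/org/repo.git\n

        Here, uses-custom-pull clones the repository at runtime, while uses-default-pull falls back to the top-level pull section.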

        Each time a step is run, the following actions are taken in order:

        • The step's inputs and block / variable references are resolved (see the templating documentation above for more details).
        • The step's function is imported; if it cannot be found, the special requires keyword is used to install the necessary packages.
        • The step's function is called with the resolved inputs.
        • The step's output is returned and used to resolve inputs for subsequent steps, as sketched below.
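        For instance, a custom step is simply an importable Python function that returns a dictionary of outputs; a minimal sketch (the module and field names here are hypothetical) might look like this:

        # my_project/steps.py, packaged alongside your flow code\ndef get_build_info():\n    \"\"\"A custom step: keys of the returned dict become referenceable outputs.\"\"\"\n    return {\"build_label\": \"manual-build\"}\n

        Declared as my_project.steps.get_build_info with an id of build-info, this step would expose {{ build-info.build_label }} to subsequent steps and deployment fields.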
        ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/prefect-deploy/#next-steps","title":"Next steps","text":"

        Now that you are familiar with creating deployments, you may want to explore infrastructure options for running your deployments:

        • Managed work pools
        • Push work pools
        • Kubernetes work pools
        • Serverless hybrid work pools
        ","tags":["orchestration","deploy","CLI","flow runs","deployments","schedules","triggers","prefect.yaml","infrastructure","storage","work pool","worker"],"boost":2},{"location":"guides/runtime-context/","title":"Get Information about the Runtime Context","text":"

        Prefect tracks information about the current flow or task run with a run context. The run context can be thought of as a global variable that allows the Prefect engine to determine relationships between your runs, such as which flow your task was called from.

        The run context itself contains many internal objects used by Prefect to manage execution of your run and is only available in specific situations. For this reason, we expose a simple interface that only includes the items you care about and dynamically retrieves additional information when necessary. We call this the \"runtime context\" as it contains information that can be accessed only when a run is happening.

        Mock values via environment variable

        Oftentimes, you may want to mock certain values for testing purposes, for example, manually setting an ID or a scheduled start time to ensure your code functions properly. Starting in version 2.10.3, you can mock runtime values via environment variables using the schema PREFECT__RUNTIME__{SUBMODULE}__{KEY_NAME}=value:

        $ export PREFECT__RUNTIME__TASK_RUN__FAKE_KEY='foo'\n$ python -c 'from prefect.runtime import task_run; print(task_run.fake_key)' # \"foo\"\n

        If the environment variable mocks an existing runtime attribute, the value is cast to the same type. This works for runtime attributes of basic types (bool, int, float and str) and pendulum.DateTime. For complex types like list or dict, we suggest mocking them using monkeypatch or a similar tool.
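        For example, with pytest's monkeypatch fixture you might patch a dict-valued runtime attribute directly (a sketch; the parameter values are illustrative):

        from prefect.runtime import flow_run\n\ndef test_with_mocked_parameters(monkeypatch):\n    # patch a complex attribute that can't be mocked via environment variable\n    monkeypatch.setattr(flow_run, \"parameters\", {\"user_id\": 42})\n    assert flow_run.parameters[\"user_id\"] == 42\n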

        ","tags":["flows","subflows","tasks","deployments"],"boost":2},{"location":"guides/runtime-context/#accessing-runtime-information","title":"Accessing runtime information","text":"

        The prefect.runtime module is the home for all runtime context access. Each major runtime concept has its own submodule:

        • deployment: Access information about the deployment for the current run
        • flow_run: Access information about the current flow run
        • task_run: Access information about the current task run

        For example:

        my_runtime_info.py
        from prefect import flow, task\nfrom prefect import runtime\n\n@flow(log_prints=True)\ndef my_flow(x):\n    print(\"My name is\", runtime.flow_run.name)\n    print(\"I belong to deployment\", runtime.deployment.name)\n    my_task(2)\n\n@task\ndef my_task(y):\n    print(\"My name is\", runtime.task_run.name)\n    print(\"Flow run parameters:\", runtime.flow_run.parameters)\n\nmy_flow(1)\n

        Running this file will produce output similar to the following:

        10:08:02.948 | INFO    | prefect.engine - Created flow run 'solid-gibbon' for flow 'my-flow'\n10:08:03.555 | INFO    | Flow run 'solid-gibbon' - My name is solid-gibbon\n10:08:03.558 | INFO    | Flow run 'solid-gibbon' - I belong to deployment None\n10:08:03.703 | INFO    | Flow run 'solid-gibbon' - Created task run 'my_task-0' for task 'my_task'\n10:08:03.704 | INFO    | Flow run 'solid-gibbon' - Executing 'my_task-0' immediately...\n10:08:04.006 | INFO    | Task run 'my_task-0' - My name is my_task-0\n10:08:04.007 | INFO    | Task run 'my_task-0' - Flow run parameters: {'x': 1}\n10:08:04.105 | INFO    | Task run 'my_task-0' - Finished in state Completed()\n10:08:04.968 | INFO    | Flow run 'solid-gibbon' - Finished in state Completed('All states completed.')\n

        Above, we demonstrated access to information about the current flow run, task run, and deployment. When run without a deployment (via python my_runtime_info.py), you should see \"I belong to deployment None\" logged. When information is not available, the runtime will always return an empty value. Because this flow was run outside of a deployment, there is no deployment data. If this flow was run as part of a deployment, we'd see the name of the deployment instead.

        See the runtime API reference for a full list of available attributes.

        ","tags":["flows","subflows","tasks","deployments"],"boost":2},{"location":"guides/runtime-context/#accessing-the-run-context-directly","title":"Accessing the run context directly","text":"

        The current run context can be accessed with prefect.context.get_run_context(). This function will raise an exception if no run context is available, meaning you are not in a flow or task run. If a task run context is available, it will be returned even if a flow run context is available.
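        For example, calling get_run_context inside a task returns a TaskRunContext describing the current task run (a minimal sketch):

        from prefect import flow, task\nfrom prefect.context import get_run_context\n\n@task\ndef show_task_run_name():\n    context = get_run_context()  # TaskRunContext inside a task run\n    print(context.task_run.name)\n\n@flow\ndef context_flow():\n    show_task_run_name()\n\ncontext_flow()\n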

        Alternatively, you can access the flow run or task run context explicitly. This will, for example, allow you to access the flow run context from a task run.

        Note that we do not send the flow run context to distributed task workers because the context is costly to serialize and deserialize.

        from prefect.context import FlowRunContext, TaskRunContext\n\nflow_run_ctx = FlowRunContext.get()\ntask_run_ctx = TaskRunContext.get()\n

        Unlike get_run_context, these method calls will not raise an error if the context is not available. Instead, they will return None.

        ","tags":["flows","subflows","tasks","deployments"],"boost":2},{"location":"guides/secrets/","title":"Third-party Secrets: Connect to services without storing credentials in blocks","text":"

        Credentials blocks and secret blocks are popular ways to store and retrieve sensitive information for connecting to third-party services.

        In Prefect Cloud, these block values are stored in encrypted format. Organizations whose security policies make such storage infeasible can still use Prefect to connect to third-party services securely.

        In this example, we interact with a Snowflake database and store the credentials we need to connect in AWS Secrets Manager. This example can be generalized to other third-party services that require credentials. We use Prefect Cloud in this example.

        "},{"location":"guides/secrets/#prerequisites","title":"Prerequisites","text":"
        1. Prefect installed.
        2. CLI authenticated to your Prefect Cloud account.
        3. Snowflake account.
        4. AWS account.
        "},{"location":"guides/secrets/#steps","title":"Steps","text":"
        1. Install prefect-aws and prefect-snowflake integration libraries.
        2. Store Snowflake password in AWS Secrets Manager.
        3. Create AwsSecret block to access the Snowflake password.
        4. Create AwsCredentials block for authentication.
        5. Ensure the compute environment has access to AWS credentials that are authorized to access the secret in AWS.
        6. Create and use SnowflakeCredentials and SnowflakeConnector blocks in Python code to interact with Snowflake.
        "},{"location":"guides/secrets/#install-prefect-aws-and-prefect-snowflake-libraries","title":"Install prefect-aws and prefect-snowflake libraries","text":"

        The following command installs and upgrades the necessary libraries and their dependencies.

        pip install -U prefect-aws prefect-snowflake\n
        "},{"location":"guides/secrets/#store-snowflake-password-in-aws-secrets-manager","title":"Store Snowflake password in AWS Secrets Manager","text":"

        Go to the AWS Secrets Manager console and create a new secret. Alternatively, create a secret using the AWS CLI or a script, as sketched after the steps below.

        1. In the UI, choose Store a new secret.
        2. Select Other type of secret.
        3. Input the key-value pair for your Snowflake password where the key is any string and the value is your Snowflake password.
        4. Copy the key for future reference and click Next.
        5. Enter a name for your secret, copy the name, and click Next.
        6. For this demo, we won't rotate the key, so click Next.
        7. Click Store.
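        If you prefer to script the secret creation, a minimal boto3 sketch (the secret name, key, and region are placeholders) could look like this:

        import json\nimport boto3\n\nclient = boto3.client(\"secretsmanager\", region_name=\"us-east-2\")\nclient.create_secret(\n    Name=\"my-snowflake-pw\",\n    SecretString=json.dumps({\"my-snowflake-pw\": \"<your-snowflake-password>\"}),\n)\n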
        "},{"location":"guides/secrets/#create-awssecret-block-to-access-your-snowflake-password","title":"Create AwsSecret block to access your Snowflake password","text":"

        You can create blocks with Python code or via the Prefect UI. Block creation through the UI can help you visualize how the pieces fit together, so let's use it here.

        On the Blocks page, click on + to add a new block and select AWS Secret from the list of block types. Enter a name for your block and enter the secret name from AWS Secrets Manager.

        Note that if you're using a self-hosted Prefect server instance, you'll need to register the block types in the newly installed modules before creating blocks.

        prefect block register -m prefect_aws && prefect block register -m prefect_snowflake\n
        "},{"location":"guides/secrets/#create-awscredentials-block","title":"Create AwsCredentials block","text":"

        In the AwsCredentials section, click Add + and a form will appear to create an AWS Credentials block.

        Values for Access Key ID and Secret Access Key will be read from the compute environment. My AWS Access Key ID and Secret Access Key values with permissions to read the AWS Secret are stored locally in my ~/.aws/credentials file, so I'll leave those fields blank. You could enter those values at block creation, but then they would be saved to the database, and that's what we're trying to avoid. By leaving those attributes blank, Prefect knows to look to the compute environment.

        We need to specify a region in our local AWS config file or in our AwsCredentials block. The AwsCredentials block takes precedence, so let's specify it here for portability.

        Under the hood, Prefect is using the AWS boto3 client to create a session.

        Click Create to save the blocks.
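        If you'd rather create both blocks in Python than through the UI, a minimal sketch (assuming the block names my-aws-creds and my-snowflake-pw, with access keys still supplied by the compute environment) might look like this:

        from prefect_aws import AwsCredentials, AwsSecret\n\naws_credentials = AwsCredentials(region_name=\"us-east-2\")\naws_credentials.save(\"my-aws-creds\", overwrite=True)\n\nAwsSecret(\n    secret_name=\"my-snowflake-pw\",\n    aws_credentials=aws_credentials,\n).save(\"my-snowflake-pw\", overwrite=True)\n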

        "},{"location":"guides/secrets/#ensure-the-compute-environment-has-access-to-aws-credentials","title":"Ensure the compute environment has access to AWS credentials","text":"

        Ensure the compute environment contains AWS credentials with authorization to access AWS Secrets Manager. When we connect to Snowflake, Prefect will automatically use these credentials to authenticate and access the secret.

        "},{"location":"guides/secrets/#create-and-use-snowflakecredentials-and-snowflakeconnector-blocks-in-python-code","title":"Create and use SnowflakeCredentials and SnowflakeConnector blocks in Python code","text":"

        Let's use Prefect's blocks for convenient access to Snowflake. We won't save the blocks, to ensure the credentials are not stored in Prefect Cloud.

        We'll create a flow that connects to Snowflake and calls two tasks. The first task creates a table and inserts some data. The second task reads the data out.

        import json\nfrom prefect import flow, task\nfrom prefect_aws import AwsSecret\nfrom prefect_snowflake import SnowflakeConnector, SnowflakeCredentials\n\n\n@task\ndef setup_table(snow_connector: SnowflakeConnector) -> None:\n    with snow_connector as connector:\n        connector.execute(\n            \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n        )\n        connector.execute_many(\n            \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n            seq_of_parameters=[\n                {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                {\"name\": \"Unknown\", \"address\": \"Space\"},\n                {\"name\": \"Me\", \"address\": \"Myway 88\"},\n            ],\n        )\n\n\n@task\ndef fetch_data(snow_connector: SnowflakeConnector) -> list:\n    all_rows = []\n    with snow_connector as connector:\n        while True:\n            new_rows = connector.fetch_many(\"SELECT * FROM customers\", size=2)\n            if len(new_rows) == 0:\n                break\n            all_rows.append(new_rows)\n    return all_rows\n\n\n@flow(log_prints=True)\ndef snowflake_flow():\n    aws_secret_block = AwsSecret.load(\"my-snowflake-pw\")\n\n    snow_connector = SnowflakeConnector(\n        schema=\"MY_SCHEMA\",\n        database=\"MY_DATABASE\",\n        warehouse=\"COMPUTE_WH\",\n        fetch_size=1,\n        credentials=SnowflakeCredentials(\n            role=\"MYROLE\",\n            user=\"MYUSERNAME\",\n            account=\"ab12345.us-east-2.aws\",\n            password=json.loads(aws_secret_block.read_secret()).get(\"my-snowflake-pw\"),\n        ),\n        poll_frequency_s=1,\n    )\n\n    setup_table(snow_connector)\n    all_rows = fetch_data(snow_connector)\n    print(all_rows)\n\n\nif __name__ == \"__main__\":\n    snowflake_flow()\n

        Fill in the relevant details for your Snowflake account and run the script.

        Note that the flow reads the Snowflake password from AWS Secrets Manager and uses it in the SnowflakeCredentials block. The SnowflakeConnector block uses the nested SnowflakeCredentials block to connect to Snowflake. Again, neither of the Snowflake blocks is saved, so the credentials are not stored in Prefect Cloud.

        Check out the prefect-snowflake docs for more examples of working with Snowflake.

        "},{"location":"guides/secrets/#next-steps","title":"Next steps","text":"

        Now you can turn your flow into a deployment so that you and your team can run it remotely on a schedule, in response to an event, or manually.

        Make sure to specify the prefect-aws and prefect-snowflake dependencies in your work pool or deployment so that they are available at runtime.

        Also ensure your compute has the AWS credentials for accessing the secret in AWS Secrets Manager.

        You've seen how to use Prefect blocks to store non-sensitive configuration and fetch sensitive configuration values from the environment. You can use this pattern to connect to other third-party services that require credentials, such as databases and APIs. You can use a similar pattern with any secret manager, or extend it to work with environment variables.

        "},{"location":"guides/settings/","title":"Profiles & Configuration","text":"

        Prefect's local settings are documented and type-validated.

        By modifying the default settings, you can customize various aspects of the system. You can override a setting with an environment variable or by updating the setting in a Prefect profile.

        Prefect profiles are persisted groups of settings on your local machine. A single profile is always active.

        Initially, a default profile named default is active and contains no settings overrides.

        All currently active settings can be viewed from the command line by running the following command:

        prefect config view --show-defaults\n

        When you switch to a different profile, all of the settings configured in the newly activated profile are applied.

        ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#commonly-configured-settings","title":"Commonly configured settings","text":"

        This section describes some commonly configured settings. See Configuring settings for details on setting and unsetting configuration values.

        ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#prefect_api_key","title":"PREFECT_API_KEY","text":"

        The PREFECT_API_KEY value specifies the API key used to authenticate with Prefect Cloud.

        PREFECT_API_KEY=\"[API-KEY]\"\n

        Generally, you will set the PREFECT_API_URL and PREFECT_API_KEY for your active profile by running prefect cloud login. If you're curious, read more about managing API keys.

        ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#prefect_api_url","title":"PREFECT_API_URL","text":"

        The PREFECT_API_URL value specifies the API endpoint of your Prefect Cloud workspace or a self-hosted Prefect server instance.

        For example, if using Prefect Cloud:

        PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"\n

        You can view your Account ID and Workspace ID in your browser URL when at a Prefect Cloud workspace page. For example: https://app.prefect.cloud/account/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here.

        If using a local Prefect server instance, set your API URL like this:

        PREFECT_API_URL=\"http://127.0.0.1:4200/api\"\n

        PREFECT_API_URL setting for workers

        If using a worker (agent and block-based deployments are legacy) that can create flow runs for deployments in remote environments, PREFECT_API_URL must be set for the environment in which your worker is running.

        If you want the worker to communicate with Prefect Cloud or a Prefect server instance from a remote execution environment such as a VM or Docker container, you must configure PREFECT_API_URL in that environment.

        Running the Prefect UI behind a reverse proxy

        When using a reverse proxy (such as Nginx or Traefik) to proxy traffic to a locally-hosted Prefect UI instance, the Prefect server instance also needs to be configured to know how to connect to the API. The PREFECT_UI_API_URL should be set to the external proxy URL (e.g. if your external URL is https://prefect-server.example.com/ then set PREFECT_UI_API_URL=https://prefect-server.example.com/api for the Prefect server process). You can also accomplish this by setting PREFECT_API_URL to the API URL, as this setting is used as a fallback if PREFECT_UI_API_URL is not set.

        ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#prefect_home","title":"PREFECT_HOME","text":"

        The PREFECT_HOME value specifies the local Prefect directory for configuration files, profiles, and the location of the default Prefect SQLite database.

        PREFECT_HOME='~/.prefect'\n
        ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#prefect_local_storage_path","title":"PREFECT_LOCAL_STORAGE_PATH","text":"

        The PREFECT_LOCAL_STORAGE_PATH value specifies the default location of local storage for flow runs.

        PREFECT_LOCAL_STORAGE_PATH='${PREFECT_HOME}/storage'\n
        ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#csrf-protection-settings","title":"CSRF Protection Settings","text":"

        If using a local Prefect server instance, you can configure CSRF protection settings.

        PREFECT_SERVER_CSRF_PROTECTION_ENABLED - Activates CSRF protection on the server, requiring valid CSRF tokens for applicable requests. Recommended for production to prevent CSRF attacks. Defaults to False.

        PREFECT_SERVER_CSRF_PROTECTION_ENABLED=True\n

        PREFECT_SERVER_CSRF_TOKEN_EXPIRATION - Sets the expiration duration for server-issued CSRF tokens, influencing how often tokens need to be refreshed. The default is 1 hour.

        PREFECT_SERVER_CSRF_TOKEN_EXPIRATION='3600'  # 1 hour in seconds\n

        By default clients expect that CSRF protection is enabled on the server. If you are running a server without CSRF protection, you can disable CSRF support in the client.

        PREFECT_CLIENT_CSRF_SUPPORT_ENABLED - Enables or disables CSRF token handling in the Prefect client. When enabled, the client manages CSRF tokens for state-changing API requests. Defaults to True.

        PREFECT_CLIENT_CSRF_SUPPORT_ENABLED=True\n
        ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#database-settings","title":"Database settings","text":"

        If running a self-hosted Prefect server instance, there are several database configuration settings you can read about here.

        ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#logging-settings","title":"Logging settings","text":"

        Prefect provides several logging configuration settings that you can read about in the logging docs.

        ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#configuring-settings","title":"Configuring settings","text":"

        The prefect config CLI commands enable you to view, set, and unset settings.

        • set: Change the value for a setting.
        • unset: Restore the default value for a setting.
        • view: Display the current settings.","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#viewing-settings-from-the-cli","title":"Viewing settings from the CLI","text":"

        The prefect config view command will display settings that override default values.

        $ prefect config view\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='DEBUG'\n

        You can show the sources of values with --show-sources:

        $ prefect config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='DEBUG' (from env)\n

        You can also include default values with --show-defaults:

        $ prefect config view --show-defaults\nPREFECT_PROFILE='default'\nPREFECT_AGENT_PREFETCH_SECONDS='10' (from defaults)\nPREFECT_AGENT_QUERY_INTERVAL='5.0' (from defaults)\nPREFECT_API_KEY='None' (from defaults)\nPREFECT_API_REQUEST_TIMEOUT='60.0' (from defaults)\nPREFECT_API_URL='None' (from defaults)\n...\n
        ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#setting-and-clearing-values","title":"Setting and clearing values","text":"

        The prefect config set command lets you change the value of a default setting.

        A commonly used example is setting the PREFECT_API_URL, which you may need to change when interacting with different Prefect server instances or Prefect Cloud.

        # use a local Prefect server\nprefect config set PREFECT_API_URL=\"http://127.0.0.1:4200/api\"\n\n# use Prefect Cloud\nprefect config set PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"\n

        If you want to configure a setting to use its default value, use the prefect config unset command.

        prefect config unset PREFECT_API_URL\n
        ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#overriding-defaults-with-environment-variables","title":"Overriding defaults with environment variables","text":"

        All settings have keys that match the environment variable that can be used to override them.

        For example, configuring the home directory:

        # environment variable\nexport PREFECT_HOME=\"/path/to/home\"\n
        # python\nimport prefect.settings\nprefect.settings.PREFECT_HOME.value()  # PosixPath('/path/to/home')\n

        Configuring a server instance's port:

        # environment variable\nexport PREFECT_SERVER_API_PORT=4242\n
        # python\nprefect.settings.PREFECT_SERVER_API_PORT.value()  # 4242\n
        ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#configuration-profiles","title":"Configuration profiles","text":"

        Prefect allows you to persist settings instead of setting an environment variable each time you open a new shell. Settings are persisted to profiles, which allow you to move between groups of settings quickly.

        The prefect profile CLI commands enable you to create, review, and manage profiles.

        • create: Create a new profile.
        • delete: Delete the given profile.
        • inspect: Display settings from a given profile; defaults to active.
        • ls: List profile names.
        • rename: Change the name of a profile.
        • use: Switch the active profile.

        If you configured settings for a profile, prefect profile inspect displays those settings:

        $ prefect profile inspect\nPREFECT_PROFILE = \"default\"\nPREFECT_API_KEY = \"pnu_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\"\nPREFECT_API_URL = \"http://127.0.0.1:4200/api\"\n

        You can pass the name of a profile to view its settings:

        $ prefect profile create test\n$ prefect profile inspect test\nPREFECT_PROFILE=\"test\"\n
        ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#creating-and-removing-profiles","title":"Creating and removing profiles","text":"

        Create a new profile with no settings:

        $ prefect profile create test\nCreated profile 'test' at /Users/terry/.prefect/profiles.toml.\n

        Create a new profile foo with settings cloned from an existing default profile:

        $ prefect profile create foo --from default\nCreated profile 'foo' matching 'default' at /Users/terry/.prefect/profiles.toml.\n

        Rename a profile:

        $ prefect profile rename temp test\nRenamed profile 'temp' to 'test'.\n

        Remove a profile:

        $ prefect profile delete test\nRemoved profile 'test'.\n

        Removing the default profile resets it:

        $ prefect profile delete default\nReset profile 'default'.\n
        ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#change-values-in-profiles","title":"Change values in profiles","text":"

        Set a value in the current profile:

        $ prefect config set VAR=X\nSet variable 'VAR' to 'X'\nUpdated profile 'default'\n

        Set multiple values in the current profile:

        $ prefect config set VAR2=Y VAR3=Z\nSet variable 'VAR2' to 'Y'\nSet variable 'VAR3' to 'Z'\nUpdated profile 'default'\n

        You can set a value in another profile by passing the --profile NAME option to a CLI command:

        $ prefect --profile \"foo\" config set VAR=Y\nSet variable 'VAR' to 'Y'\nUpdated profile 'foo'\n

        Unset values in the current profile to restore the defaults:

        $ prefect config unset VAR2 VAR3\nUnset variable 'VAR2'\nUnset variable 'VAR3'\nUpdated profile 'default'\n
        ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#inspecting-profiles","title":"Inspecting profiles","text":"

        See a list of available profiles:

        $ prefect profile ls\n* default\ncloud\ntest\nlocal\n

        View all settings for a profile:

        $ prefect profile inspect cloud\nPREFECT_API_URL='https://api.prefect.cloud/api/accounts/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx\nx/workspaces/xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'\nPREFECT_API_KEY='xxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'          \n
        ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#using-profiles","title":"Using profiles","text":"

        The profile named default is used by default. There are several methods to switch to another profile.

        The recommended method is to use the prefect profile use command with the name of the profile:

        $ prefect profile use foo\nProfile 'foo' now active.\n

        Alternatively, you can set the environment variable PREFECT_PROFILE to the name of the profile:

        export PREFECT_PROFILE=foo\n

        Or, specify the profile in the CLI command for one-time usage:

        prefect --profile \"foo\" ...\n

        Note that this option must come before the subcommand. For example, to list flow runs using the profile foo:

        prefect --profile \"foo\" flow-run ls\n

        You may use the -p flag as well:

        prefect -p \"foo\" flow-run ls\n

        You may also create an 'alias' to automatically use your profile:

        $ alias prefect-foo=\"prefect --profile 'foo' \"\n# uses our profile!\n$ prefect-foo config view  \n
        ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#conflicts-with-environment-variables","title":"Conflicts with environment variables","text":"

        If setting the profile from the CLI with --profile, environment variables that conflict with settings in the profile will be ignored.

        In all other cases, environment variables will take precedence over the value in the profile.

        For example, a value set in a profile will be used by default:

        $ prefect config set PREFECT_LOGGING_LEVEL=\"ERROR\"\n$ prefect config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='ERROR' (from profile)\n

        But, setting an environment variable will override the profile setting:

        $ export PREFECT_LOGGING_LEVEL=\"DEBUG\"\n$ prefect config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='DEBUG' (from env)\n

        Unless the profile is explicitly requested when using the CLI:

        $ prefect --profile default config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='ERROR' (from profile)\n
        ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/settings/#profile-files","title":"Profile files","text":"

        Profiles are persisted to the file location specified by PREFECT_PROFILES_PATH. The default location is a profiles.toml file in the PREFECT_HOME directory:

        $ prefect config view --show-defaults\n...\nPREFECT_PROFILES_PATH='${PREFECT_HOME}/profiles.toml'\n...\n

        The TOML format is used to store profile data.

        ","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"guides/specifying-upstream-dependencies/","title":"Specifying Upstream Dependencies","text":"

        Results from a task can be provided to other tasks (or subflows) as upstream dependencies. Prefect uses upstream dependencies in two ways:

        1. To populate dependency arrows in the flow run graph
        2. To determine execution order for concurrently submitted units of work that depend on each other

        Tasks vs. other functions

        Only results from tasks inform Prefect's ability to determine dependencies. Return values from functions without task decorators, including subflows, do not carry the same information about their origin as task results.
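        A short sketch of the difference (hypothetical task and function names):

        from prefect import flow, task\n\ndef plain_function():\n    return \"no lineage\"\n\n@task\ndef upstream_task():\n    return \"has lineage\"\n\n@task\ndef downstream(value):\n    return value\n\n@flow\ndef example_flow():\n    downstream(plain_function())  # no dependency edge is recorded\n    downstream(upstream_task())  # upstream_task is recorded as an upstream dependency\n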

        When using non-sequential task runners such as the ConcurrentTaskRunner or DaskTaskRunner, the order of execution for submitted tasks is not guaranteed unless their dependencies are specified.

        For example, compare how tasks submitted to the ConcurrentTaskRunner behave with and without upstream dependencies by clicking on the tabs below.

        Without dependenciesWith dependencies
        @flow(log_prints=True) # Default task runner is ConcurrentTaskRunner\ndef flow_of_tasks():\n    # no dependencies, execution order is not guaranteed\n    first.submit()\n    second.submit()\n    third.submit()\n\n@task\ndef first():\n    print(\"I'm first!\")\n\n@task\ndef second():\n    print(\"I'm second!\")\n\n@task\ndef third():\n    print(\"I'm third!\")\n
        Flow run 'pumpkin-puffin' - Created task run 'first-0' for task 'first'\nFlow run 'pumpkin-puffin' - Submitted task run 'first-0' for execution.\nFlow run 'pumpkin-puffin' - Created task run 'second-0' for task 'second'\nFlow run 'pumpkin-puffin' - Submitted task run 'second-0' for execution.\nFlow run 'pumpkin-puffin' - Created task run 'third-0' for task 'third'\nFlow run 'pumpkin-puffin' - Submitted task run 'third-0' for execution.\nTask run 'third-0' - I'm third!\nTask run 'first-0' - I'm first!\nTask run 'second-0' - I'm second!\nTask run 'second-0' - Finished in state Completed()\nTask run 'third-0' - Finished in state Completed()\nTask run 'first-0' - Finished in state Completed()\nFlow run 'pumpkin-puffin' - Finished in state Completed('All states completed.')\n
        @flow(log_prints=True) # Default task runner is ConcurrentTaskRunner\ndef flow_of_tasks():\n    # with dependencies, tasks execute in order\n    first_result = first.submit()\n    second_result = second.submit(first_result)\n    third.submit(second_result)\n\n@task\ndef first():\n    print(\"I'm first!\")\n\n@task\ndef second(input):\n    print(\"I'm second!\")\n\n@task\ndef third(input):\n    print(\"I'm third!\")\n
        Flow run 'statuesque-waxbill' - Created task run 'first-0' for task 'first'\nFlow run 'statuesque-waxbill' - Submitted task run 'first-0' for execution.\nFlow run 'statuesque-waxbill' - Created task run 'second-0' for task 'second'\nFlow run 'statuesque-waxbill' - Submitted task run 'second-0' for execution.\nFlow run 'statuesque-waxbill' - Created task run 'third-0' for task 'third'\nFlow run 'statuesque-waxbill' - Submitted task run 'third-0' for execution.\nTask run 'first-0' - I'm first!\nTask run 'first-0' - Finished in state Completed()\nTask run 'second-0' - I'm second!\nTask run 'second-0' - Finished in state Completed()\nTask run 'third-0' - I'm third!\nTask run 'third-0' - Finished in state Completed()\nFlow run 'statuesque-waxbill' - Finished in state Completed('All states completed.')\n
        ","tags":["tasks","dependencies","wait_for"],"boost":2},{"location":"guides/specifying-upstream-dependencies/#determination-methods","title":"Determination methods","text":"

        A task or subflow's upstream dependencies can be inferred automatically via its inputs, or stated explicitly via the wait_for parameter.

        ","tags":["tasks","dependencies","wait_for"],"boost":2},{"location":"guides/specifying-upstream-dependencies/#automatic","title":"Automatic","text":"

        When a result from a task is used as input for another task, Prefect automatically recognizes the task that result originated from as an upstream dependency.

        This applies to every way you can run tasks with Prefect, whether you're calling the task function directly, calling .submit(), or calling .map(). Subflows similarly recognize task results as upstream dependencies.

        from prefect import flow, task\n\n\n@flow(log_prints=True)\ndef flow_of_tasks():\n    upstream_result = upstream.submit()\n    downstream_1_result = downstream_1.submit(upstream_result)\n    downstream_2_result = downstream_2.submit(upstream_result)\n    mapped_task_results = mapped_task.map([downstream_1_result, downstream_2_result])\n    final_task(mapped_task_results)\n\n@task\ndef upstream():\n    return \"Hello from upstream!\"\n\n@task\ndef downstream_1(input):\n    return input\n\n@task\ndef downstream_2(input):\n    return input\n\n@task\ndef mapped_task(input):\n    return input\n\n@task\ndef final_task(input):\n    print(input)\n
        Flow run graph displaying inferred dependencies with the \"Dependency grid\" layout selected","tags":["tasks","dependencies","wait_for"],"boost":2},{"location":"guides/specifying-upstream-dependencies/#manual","title":"Manual","text":"

        Tasks that do not share data can be informed of their upstream dependencies through the wait_for parameter. Just as with automatic dependencies, this applies to direct task function calls, .submit(), .map(), and subflows.

        Differences with .map()

        Manually defined upstream dependencies apply to all tasks submitted by .map(), so each mapped task must wait for all upstream dependencies passed into wait_for to finish. This is distinct from automatic dependencies for mapped tasks, where each mapped task must only wait for the upstream tasks whose results it depends on.

        from prefect import flow, task\n\n\n@flow(log_prints=True)\ndef flow_of_tasks():\n    upstream_result = upstream.submit()\n    downstream_1_result = downstream_1.submit(wait_for=[upstream_result])\n    downstream_2_result = downstream_2.submit(wait_for=[upstream_result])\n    mapped_task_results = mapped_task.map([1, 2], wait_for=[downstream_1_result, downstream_2_result])\n    final_task(wait_for=mapped_task_results)\n\n@task\ndef upstream():\n    pass\n\n@task\ndef downstream_1():\n    pass\n\n@task\ndef downstream_2():\n    pass\n\n@task\ndef mapped_task(input):\n    pass\n\n@task\ndef final_task():\n    pass\n
        Flow run graph displaying manual dependencies with the \"Dependency grid\" layout selected","tags":["tasks","dependencies","wait_for"],"boost":2},{"location":"guides/specifying-upstream-dependencies/#deployments-as-dependencies","title":"Deployments as dependencies","text":"

        For more complex workflows, parts of your logic may require additional resources, different infrastructure, or independent parallel execution. A typical approach for addressing these needs is to execute that logic as separate deployment runs from within a flow.

        Composing deployment runs into a flow so that they can be treated as upstream dependencies is as simple as calling run_deployment from within a task.

        Given a deployment process-user of flow parallel-work, a flow of deployments might look like this:

        from prefect import flow, task\nfrom prefect.deployments import run_deployment\n\n\n@flow\ndef flow_of_deployments():\n    deployment_run_1 = run_deployment_task.submit(\n        flow_name=\"parallel-work\",\n        deployment_name=\"process-user\",\n        parameters={\"user_id\": 1},\n    )\n    deployment_run_2 = run_deployment_task.submit(\n        flow_name=\"parallel-work\",\n        deployment_name=\"process-user\",\n        parameters={\"user_id\": 2},\n    )\n    downstream_task(wait_for=[deployment_run_1, deployment_run_2])\n\n\n@task(task_run_name=\"Run deployment {flow_name}/{deployment_name}\")\ndef run_deployment_task(\n    flow_name: str,\n    deployment_name: str,\n    parameters: dict\n):\n    run_deployment(\n        name=f\"{flow_name}/{deployment_name}\",\n        parameters=parameters\n    )\n\n\n@task\ndef downstream_task():\n    print(\"I'm downstream!\")\n

        By default, deployments started from run_deployment will also appear as subflows for tracking purposes. This behavior can be disabled by setting the as_subflow parameter for run_deployment to False.
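        For example, to start the same deployment run without subflow tracking:

        from prefect.deployments import run_deployment\n\nrun_deployment(\n    name=\"parallel-work/process-user\",\n    parameters={\"user_id\": 1},\n    as_subflow=False,  # track this run independently rather than as a subflow\n)\n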

        Flow run graph displaying deployments as dependencies with the \"Dependency grid\" layout selected","tags":["tasks","dependencies","wait_for"],"boost":2},{"location":"guides/state-change-hooks/","title":"State Change Hooks","text":"

        State change hooks execute code in response to changes in flow or task run states, enabling you to define actions for specific state transitions in a workflow. This guide provides examples of real-world use cases.

        ","tags":["state change","hooks","triggers"],"boost":2},{"location":"guides/state-change-hooks/#example-use-cases","title":"Example use cases","text":"","tags":["state change","hooks","triggers"],"boost":2},{"location":"guides/state-change-hooks/#send-a-notification-when-a-flow-run-fails","title":"Send a notification when a flow run fails","text":"

        State change hooks enable you to customize messages sent when tasks transition between states, such as sending notifications containing sensitive information when tasks enter a Failed state. Let's run a client-side hook upon a flow run entering a Failed state.

        from prefect import flow\nfrom prefect.blocks.core import Block\nfrom prefect.settings import PREFECT_API_URL\n\ndef notify_slack(flow, flow_run, state):\n    slack_webhook_block = Block.load(\n        \"slack-webhook/my-slack-webhook\"\n    )\n\n    slack_webhook_block.notify(\n        (\n            f\"Your job {flow_run.name} entered {state.name} \"\n            f\"with message:\\n\\n\"\n            f\"See <https://{PREFECT_API_URL.value()}/flow-runs/\"\n            f\"flow-run/{flow_run.id}|the flow run in the UI>\\n\\n\"\n            f\"Tags: {flow_run.tags}\\n\\n\"\n            f\"Scheduled start: {flow_run.expected_start_time}\"\n        )\n    )\n\n@flow(on_failure=[notify_slack], retries=1)\ndef failing_flow():\n    raise ValueError(\"oops!\")\n\nif __name__ == \"__main__\":\n    failing_flow()\n

        Note that because we've configured retries in this example, the on_failure hook will not run until all retries have completed, when the flow run enters a Failed state.

        ","tags":["state change","hooks","triggers"],"boost":2},{"location":"guides/state-change-hooks/#delete-a-cloud-run-job-when-a-flow-run-crashes","title":"Delete a Cloud Run job when a flow run crashes","text":"

        State change hooks can aid in managing infrastructure cleanup in scenarios where tasks spin up individual infrastructure resources independently of Prefect. When a flow run crashes, tasks may exit abruptly, resulting in the potential omission of cleanup logic within the tasks. State change hooks can be used to ensure infrastructure is properly cleaned up even when a flow run enters a Crashed state!

        Let's create a hook that deletes a Cloud Run job if the flow run crashes.

        import os\nfrom prefect import flow, task\nfrom prefect.blocks.system import String\nfrom prefect.client import get_client\nimport prefect.runtime\n\nasync def delete_cloud_run_job(flow, flow_run, state):\n    \"\"\"Flow run state change hook that deletes a Cloud Run Job if\n    the flow run crashes.\"\"\"\n\n    # retrieve Cloud Run job name\n    cloud_run_job_name = await String.load(\n        name=\"crashing-flow-cloud-run-job\"\n    )\n\n    # delete Cloud Run job\n    delete_job_command = f\"yes | gcloud beta run jobs delete {cloud_run_job_name.value} --region us-central1\"\n    os.system(delete_job_command)\n\n    # clean up the Cloud Run job string block as well\n    async with get_client() as client:\n        block_document = await client.read_block_document_by_name(\n            \"crashing-flow-cloud-run-job\", block_type_slug=\"string\"\n        )\n        await client.delete_block_document(block_document.id)\n\n@task\ndef my_task_that_crashes():\n    raise SystemExit(\"Crashing on purpose!\")\n\n@flow(on_crashed=[delete_cloud_run_job])\ndef crashing_flow():\n    \"\"\"Save the flow run name (i.e. Cloud Run job name) as a \n    String block. It then executes a task that ends up crashing.\"\"\"\n    flow_run_name = prefect.runtime.flow_run.name\n    cloud_run_job_name = String(value=flow_run_name)\n    cloud_run_job_name.save(\n        name=\"crashing-flow-cloud-run-job\", overwrite=True\n    )\n\n    my_task_that_crashes()\n\nif __name__ == \"__main__\":\n    crashing_flow()\n
        ","tags":["state change","hooks","triggers"],"boost":2},{"location":"guides/testing/","title":"Testing","text":"

        Once you have some awesome flows, you probably want to test them!

        ","tags":["testing","unit testing","development"],"boost":2},{"location":"guides/testing/#unit-testing-flows","title":"Unit testing flows","text":"

        Prefect provides a simple context manager for unit tests that allows you to run flows and tasks against a temporary local SQLite database.

        from prefect import flow\nfrom prefect.testing.utilities import prefect_test_harness\n\n@flow\ndef my_favorite_flow():\n    return 42\n\ndef test_my_favorite_flow():\n  with prefect_test_harness():\n      # run the flow against a temporary testing database\n      assert my_favorite_flow() == 42\n

        For more extensive testing, you can leverage prefect_test_harness as a fixture in your unit testing framework. For example, when using pytest:

        from prefect import flow\nimport pytest\nfrom prefect.testing.utilities import prefect_test_harness\n\n@pytest.fixture(autouse=True, scope=\"session\")\ndef prefect_test_fixture():\n    with prefect_test_harness():\n        yield\n\n@flow\ndef my_favorite_flow():\n    return 42\n\ndef test_my_favorite_flow():\n    assert my_favorite_flow() == 42\n

        Note

        In this example, the fixture is scoped to run once for the entire test session. In most cases, you will not need a clean database for each test and just want to isolate your test runs to a test database. Creating a new test database per test creates significant overhead, so we recommend scoping the fixture to the session. If you need to isolate some tests fully, you can use the test harness again to create a fresh database.
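
        As a small sketch of that last point, assuming the session-scoped fixture above is in place, a single test can opt into its own fresh database like this:

        from prefect import flow\nfrom prefect.testing.utilities import prefect_test_harness\n\n@flow\ndef my_isolated_flow():\n    return \"isolated\"\n\ndef test_with_fresh_database():\n    # this harness creates a separate temporary database just for this test,\n    # independent of the session-scoped fixture\n    with prefect_test_harness():\n        assert my_isolated_flow() == \"isolated\"\n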

        ","tags":["testing","unit testing","development"],"boost":2},{"location":"guides/testing/#unit-testing-tasks","title":"Unit testing tasks","text":"

        To test an individual task, you can access the original function using .fn:

        from prefect import flow, task\n\n@task\ndef my_favorite_task():\n    return 42\n\n@flow\ndef my_favorite_flow():\n    val = my_favorite_task()\n    return val\n\ndef test_my_favorite_task():\n    assert my_favorite_task.fn() == 42\n

        Disable logger

        If your task uses a logger, you can disable the logger to avoid the RuntimeError raised from a missing flow context.

        from prefect.logging import disable_run_logger\n\ndef test_my_favorite_task():\n    with disable_run_logger():\n        assert my_favorite_task.fn() == 42\n

        ","tags":["testing","unit testing","development"],"boost":2},{"location":"guides/troubleshooting/","title":"Troubleshooting","text":"

        Don't Panic! If you experience an error with Prefect, there are many paths to understanding and resolving it. The first troubleshooting step is confirming that you are running the latest version of Prefect. If you are not, be sure to upgrade to the latest version, since the issue may have already been fixed. Beyond that, there are several categories of errors:

        • The issue may be in your flow code, in which case you should carefully read the logs.
        • The issue could be with how you are authenticated, and whether or not you are connected to Cloud.
        • The issue might have to do with how your code is executed.
        ","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#upgrade","title":"Upgrade","text":"

        Prefect is constantly evolving, adding new features and fixing bugs. Chances are that a patch has already been identified and released. Search existing issues for similar reports and check out the Release Notes. Upgrade to the newest version with the following command:

        pip install --upgrade prefect\n

        Different components may use different versions of Prefect:

        • Prefect Cloud will generally be the newest version because it is continuously deployed by the Prefect team. When using a self-hosted server, you can control this version.
        • Workers and agents typically don't change versions frequently, and are usually whatever the latest version was at the time of creation. Workers and agents provision infrastructure for flow runs, so upgrading them may help with infrastructure problems.
        • Flows could use a different version than the worker or agent that created them, especially when running in different environments. Suppose your worker and flow both use the latest official Docker image, but your worker was created a month ago. Your worker will often be on an older version than your flow.

        Integration Versions

        Keep in mind that integrations are versioned and released independently of the core Prefect library. They should be upgraded simultaneously with the core library, using the same method.
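
        As a quick sanity check, you can print the versions in the environment where your flows run; prefect-aws below is only a hypothetical example integration and may not be installed for you:

        import prefect\n\nprint(f\"prefect: {prefect.__version__}\")\n\ntry:\n    # prefect-aws is only an example; substitute the integration you use\n    import prefect_aws\n    print(f\"prefect-aws: {prefect_aws.__version__}\")\nexcept ImportError:\n    print(\"prefect-aws is not installed in this environment\")\n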

        ","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#logs","title":"Logs","text":"

        In many cases, there will be an informative stack trace in Prefect's logs. Read it carefully, locate the source of the error, and try to identify the cause.

        There are two types of logs:

        • Flow and task logs are always scoped to a flow. They are sent to Prefect and are viewable in the UI.
        • Worker and agent logs are not scoped to a flow and may have more information on what happened before the flow started. These logs are generally only available where the worker or agent is running.

        If your flow and task logs are empty, there may have been an infrastructure issue that prevented your flow from starting. Check your worker logs for more details.

        If there is no clear indication of what went wrong, try updating the logging level from the default INFO level to the DEBUG level. Settings such as the logging level are propagated from the worker environment to the flow run environment and can be set via environment variables or the prefect config set CLI:

        # Using the CLI\nprefect config set PREFECT_LOGGING_LEVEL=DEBUG\n\n# Using environment variables\nexport PREFECT_LOGGING_LEVEL=DEBUG\n

        The DEBUG logging level produces a high volume of logs so consider setting it back to INFO once any issues are resolved.
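
        If you'd rather change the level from Python, for example while debugging locally, a rough sketch using Prefect's temporary_settings context manager (assuming it is available in your version) might look like this:

        from prefect import flow, get_run_logger\nfrom prefect.settings import PREFECT_LOGGING_LEVEL, temporary_settings\n\n@flow\ndef noisy_flow():\n    get_run_logger().debug(\"This message only appears at the DEBUG level\")\n\nif __name__ == \"__main__\":\n    # temporarily raise log verbosity for this run only\n    with temporary_settings({PREFECT_LOGGING_LEVEL: \"DEBUG\"}):\n        noisy_flow()\n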

        ","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#cloud","title":"Cloud","text":"

        When using Prefect Cloud, there are the additional concerns of authentication and authorization. The Prefect API authenticates users and service accounts - collectively known as actors - with API keys. Missing, incorrect, or expired API keys will result in a 401 response with detail Invalid authentication credentials. Use the following command to check your authentication, replacing $PREFECT_API_KEY with your API key:

        curl -s -H \"Authorization: Bearer $PREFECT_API_KEY\" \"https://api.prefect.cloud/api/me/\"\n

        Users vs Service Accounts

        Service accounts - sometimes referred to as bots - represent non-human actors that interact with Prefect such as workers and CI/CD systems. Each human that interacts with Prefect should be represented as a user. User API keys start with pnu_ and service account API keys start with pnb_.

        If the response succeeds, check your authorization. Actors can be members of workspaces. An actor attempting an action in a workspace they are not a member of will receive a 404 response. Use the following command to check your actor's workspace memberships:

        curl -s -H \"Authorization: Bearer $PREFECT_API_KEY\" \"https://api.prefect.cloud/api/me/workspaces\"\n

        Formatting JSON

        Python comes with a helpful tool for formatting JSON. Append the following to the end of the command above to make the output more readable: | python -m json.tool

        Make sure your actor is a member of the workspace you are working in. Within a workspace, an actor has a role that grants them certain permissions. Insufficient permissions will result in an error. For example, starting an agent or worker with the Viewer role will result in errors.

        ","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#execution","title":"Execution","text":"

        Prefect flows can be executed locally by the user, or remotely by a worker or agent. Local execution generally means that you - the user - run your flow directly with a command like python flow.py. Remote execution generally means that a worker runs your flow via a deployment, optionally on different infrastructure.

        With remote execution, the creation of your flow run happens separately from its execution. Flow runs are assigned to a work pool and a work queue. For flow runs to execute, a worker must be subscribed to the work pool and work queue, otherwise the flow runs will go from Scheduled to Late. Ensure that your work pool and work queue have a subscribed worker.

        Local and remote execution can also differ in their treatment of relative imports. If switching from local to remote execution results in local import errors, try replicating the behavior by executing the flow locally with the -m flag (i.e. python -m flow instead of python flow.py). Read more about -m here.

        ","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#api-tests-return-an-unexpected-307-redirected","title":"API tests return an unexpected 307 Redirected","text":"

        Summary: Requests require a trailing / in the request URL.

        If you write a test that does not include a trailing / when making a request to a specific endpoint:

        async def test_example(client):\n    response = await client.post(\"/my_route\")\n    assert response.status_code == 201\n

        You'll see a failure like:

        E       assert 307 == 201\nE        +  where 307 = <Response [307 Temporary Redirect]>.status_code\n

        To resolve this, include the trailing /:

        async def test_example(client):\n    response = await client.post(\"/my_route/\")\n    assert response.status_code == 201\n

        Note: requests to nested URLs may exhibit the opposite behavior and require no trailing slash:

        async def test_nested_example(client):\n    response = await client.post(\"/my_route/filter/\")\n    assert response.status_code == 307\n\n    response = await client.post(\"/my_route/filter\")\n    assert response.status_code == 200\n

        Reference: \"HTTPX disabled redirect following by default\" in 0.22.0.

        ","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#pytestpytestunraisableexceptionwarning-or-resourcewarning","title":"pytest.PytestUnraisableExceptionWarning or ResourceWarning","text":"

        As you're working with one of the FlowRunner implementations, you may get an error like this one:

        E               pytest.PytestUnraisableExceptionWarning: Exception ignored in: <ssl.SSLSocket fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0>\nE\nE               Traceback (most recent call last):\nE                 File \".../pytest_asyncio/plugin.py\", line 306, in setup\nE                   res = await func(**_add_kwargs(func, kwargs, event_loop, request))\nE               ResourceWarning: unclosed <ssl.SSLSocket fd=10, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 60605), raddr=('127.0.0.1', 6443)>\n\n.../_pytest/unraisableexception.py:78: PytestUnraisableExceptionWarning\n

        This error is saying that your test suite (or the prefect library code) opened a connection to something (like a Docker daemon or a Kubernetes cluster) and didn't close it.

        It may help to re-run the specific test with PYTHONTRACEMALLOC=25 pytest ... so that Python can display more of the stack trace where the connection was opened.

        ","tags":["troubleshooting","guides","how to"]},{"location":"guides/upgrade-guide-agents-to-workers/","title":"Upgrade from Agents to Workers","text":"

        Upgrading from agents to workers significantly enhances the experience of deploying flows. It simplifies the specification of each flow's infrastructure and runtime environment.

        A worker is the fusion of an agent with an infrastructure block. Like agents, workers poll a work pool for flow runs that are scheduled to start. Like infrastructure blocks, workers are typed - they work with only one kind of infrastructure, and they specify the default configuration for jobs submitted to that infrastructure.

        Accordingly, workers are not a drop-in replacement for agents. Using workers requires deploying flows differently. In particular, deploying a flow with a worker does not involve specifying an infrastructure block. Instead, infrastructure configuration is specified on the work pool and passed to each worker that polls work from that pool.

        This guide provides an overview of the differences between agents and workers. It also describes how to upgrade from agents to workers in just a few quick steps.

        ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#enhancements","title":"Enhancements","text":"","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#workers","title":"Workers","text":"
        • Improved visibility into the status of each worker, including when a worker was started and when it last polled.
        • Better handling of race conditions for high availability use cases.
        ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#work-pools","title":"Work pools","text":"
        • Work pools allow greater customization and governance of infrastructure parameters for deployments via their base job template.
        • Prefect Cloud push work pools enable flow execution in your cloud provider environment without needing to host a worker.
        • Prefect Cloud managed work pools allow you to run flows on Prefect's infrastructure, without needing to host a worker or configure cloud provider infrastructure.
        ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#improved-deployment-interfaces","title":"Improved deployment interfaces","text":"
        • The Python deployment experience with .deploy() and the alternative deployment experience with prefect.yaml are more flexible and easier to use than block- and agent-based deployments.
        • Both options allow you to deploy multiple flows with a single command.
        • Both options allow you to build Docker images for your flows to create portable execution environments.
        • The YAML-based API supports templating to enable DRYer (less repetitive) deployment definitions.
        ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#whats-different","title":"What's different","text":"
        1. Deployment CLI and Python SDK:

          prefect deployment build <entrypoint>/prefect deployment apply --> prefect deploy

          Prefect will now automatically detect flows in your repo and provide a wizard \ud83e\uddd9 to guide you through setting required attributes for your deployments.

          Deployment.build_from_flow --> flow.deploy

        2. Configuring remote flow code storage:

          storage blocks --> pull action

          When using the YAML-based deployment API, you can configure a pull action in your prefect.yaml file to specify how to retrieve flow code for your deployments. You can use configuration from your existing storage blocks to define your pull action via templating.

          When using the Python deployment API, you can pass any storage block to the flow.deploy method to specify how to retrieve flow code for your deployment.

        3. Configuring flow run infrastructure:

          infrastructure blocks --> typed work pool

          Default infrastructure config is now set on the typed work pool, and can be overwritten by individual deployments.

        4. Managing multiple deployments:

          Create and/or update many deployments at once through a prefect.yaml file or use the deploy function.

        ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#whats-similar","title":"What's similar","text":"
        • Storage blocks can be set as the pull action in a prefect.yaml file.
        • Infrastructure blocks have configuration fields similar to typed work pools.
        • Deployment-level infrastructure overrides operate in much the same way.

          infra_override -> job_variable

        • The processes for starting an agent and starting a worker in your environment are virtually identical.

          prefect agent start --pool <work pool name> --> prefect worker start --pool <work pool name>

          Worker Helm chart

          If you host your agents in a Kubernetes cluster, you can use the Prefect worker Helm chart to host workers in your cluster.

        ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#upgrade-guide","title":"Upgrade guide","text":"

        If you have existing deployments that use infrastructure blocks, you can quickly upgrade them to be compatible with workers by following these steps:

        1. Create a work pool

        This new work pool will replace your infrastructure block.

        You can use the .publish_as_work_pool method on any infrastructure block to create a work pool with the same configuration.

        For example, if you have a KubernetesJob infrastructure block named 'my-k8s-job', you can create a work pool with the same configuration with this script:

        from prefect.infrastructure import KubernetesJob\n\nKubernetesJob.load(\"my-k8s-job\").publish_as_work_pool()\n
        Running this script will create a work pool named 'my-k8s-job' with the same configuration as your infrastructure block.

        Serving flows

        If you are using a Process infrastructure block and a LocalFilesystem storage block (or aren't using an infrastructure and storage block at all), you can use flow.serve to create a deployment without needing to specify a work pool name or start a worker.

        This is a quick way to create a deployment for a flow and is a great way to manage your deployments if you don't need the dynamic infrastructure creation or configuration offered by workers.

        Check out our Docker guide for how to build a served flow into a Docker image and host it in your environment.

        2. Start a worker

        This worker will replace your agent and poll your new work pool for flow runs to execute.

        prefect worker start -p <work pool name>\n
        3. Deploy your flows to the new work pool

        To deploy your flows to the new work pool, you can use flow.deploy for a Pythonic deployment experience or prefect deploy for a YAML-based deployment experience.

        If you currently use Deployment.build_from_flow, we recommend using flow.deploy.

        If you currently use prefect deployment build and prefect deployment apply, we recommend using prefect deploy.

        ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#flowdeploy","title":"flow.deploy","text":"

        If you have a Python script that uses Deployment.build_from_flow, you can replace it with flow.deploy.

        Most arguments to Deployment.build_from_flow can be translated directly to flow.deploy, but here are some changes that you may need to make:

        • Replace infrastructure with work_pool_name.
        • If you've used the .publish_as_work_pool method on your infrastructure block, use the name of the created work pool.
        • Replace infra_overrides with job_variables.
        • Replace storage with a call to flow.from_source.
        • flow.from_source will load your flow from a remote storage location and make it deployable. Your existing storage block can be passed to the source argument of flow.from_source.

        Below are some examples of how to translate Deployment.build_from_flow to flow.deploy.

        ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#deploying-without-any-blocks","title":"Deploying without any blocks","text":"

        If you aren't using any blocks:

        from prefect import flow\nfrom prefect.deployments import Deployment\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n    print(f\"Hello {name}! I'm a flow from a Python script!\")\n\nif __name__ == \"__main__\":\n    Deployment.build_from_flow(\n        my_flow,\n        name=\"my-deployment\",\n        parameters=dict(name=\"Marvin\"),\n    )\n

        You can replace Deployment.build_from_flow with flow.serve:

        from prefect import flow\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n    print(f\"Hello {name}! I'm a flow from a Python script!\")\n\nif __name__ == \"__main__\":\n    my_flow.serve(\n        name=\"my-deployment\",\n        parameters=dict(name=\"Marvin\"),\n    )\n

        This will start a process that will serve your flow and execute any flow runs that are scheduled to start.

        ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#deploying-using-a-storage-block","title":"Deploying using a storage block","text":"

        If you currently use a storage block to load your flow code but no infrastructure block:

        from prefect import flow\nfrom prefect.deployments import Deployment\nfrom prefect.filesystems import GitHub\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n    print(f\"Hello {name}! I'm a flow from a GitHub repo!\")\n\nif __name__ == \"__main__\":\n    Deployment.build_from_flow(\n        my_flow,\n        name=\"my-deployment\",\n        storage=GitHub.load(\"demo-repo\"),\n        parameters=dict(name=\"Marvin\"),\n    )\n

        you can use flow.from_source to load your flow from the same location and flow.serve to create a deployment:

        from prefect import flow\nfrom prefect.filesystems import GitHub\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        source=GitHub.load(\"demo-repo\"),\n        entrypoint=\"example.py:my_flow\"\n    ).serve(\n        name=\"my-deployment\",\n        parameters=dict(name=\"Marvin\"),\n    )\n

        This will allow you to execute scheduled flow runs without starting a worker. Additionally, the process serving your flow will regularly check for updates to your flow code and automatically update the flow if it detects any changes to the code.

        ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#deploying-using-an-infrastructure-and-storage-block","title":"Deploying using an infrastructure and storage block","text":"

        For the code below, we'll need to create a work pool from our infrastructure block and pass it to flow.deploy as the work_pool_name argument. We'll also need to pass our storage block to flow.from_source as the source argument.

        from prefect import flow\nfrom prefect.deployments import Deployment\nfrom prefect.filesystems import GitHub\nfrom prefect.infrastructure.kubernetes import KubernetesJob\n\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n    print(f\"Hello {name}! I'm a flow from a GitHub repo!\")\n\n\nif __name__ == \"__main__\":\n    Deployment.build_from_flow(\n        my_flow,\n        name=\"my-deployment\",\n        storage=GitHub.load(\"demo-repo\"),\n        entrypoint=\"example.py:my_flow\",\n        infrastructure=KubernetesJob.load(\"my-k8s-job\"),\n        infra_overrides=dict(pull_policy=\"Never\"),\n        parameters=dict(name=\"Marvin\"),\n    )\n

        The equivalent deployment code using flow.deploy would look like this:

        from prefect import flow\nfrom prefect.filesystems import GitHub\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        source=GitHub.load(\"demo-repo\"),\n        entrypoint=\"example.py:my_flow\"\n    ).deploy(\n        name=\"my-deployment\",\n        work_pool_name=\"my-k8s-job\",\n        job_variables=dict(pull_policy=\"Never\"),\n        parameters=dict(name=\"Marvin\"),\n    )\n

        Note that when using flow.from_source(...).deploy(...), the flow you're deploying does not need to be available locally before running your script.

        ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#deploying-via-a-docker-image","title":"Deploying via a Docker image","text":"

        If you currently bake your flow code into a Docker image before deploying, you can use the image argument of flow.deploy to build a Docker image as part of your deployment process:

        from prefect import flow\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n    print(f\"Hello {name}! I'm a flow from a Docker image!\")\n\n\nif __name__ == \"__main__\":\n    my_flow.deploy(\n        name=\"my-deployment\",\n        image=\"my-repo/my-image:latest\",\n        work_pool_name=\"my-k8s-job\",\n        job_variables=dict(pull_policy=\"Never\"),\n        parameters=dict(name=\"Marvin\"),\n    )\n

        You can skip a flow.from_source call when building an image with flow.deploy. Prefect will keep track of the flow's source code location in the image and load it from that location when the flow is executed.

        ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/upgrade-guide-agents-to-workers/#using-prefect-deploy","title":"Using prefect deploy","text":"

        Always run prefect deploy commands from the root level of your repo!

        With agents, you might have had multiple deployment.yaml files, but under worker deployment patterns, each repo will have a single prefect.yaml file located at the root of the repo that contains deployment configuration for all flows in that repo.

        To set up a new prefect.yaml file for your deployments, run the following command from the root level of your repo:

        prefect deploy\n

        This will start a wizard that will guide you through setting up your deployment.

        For step 4, select y on the last prompt to save the configuration for the deployment.

        Saving the configuration for your deployment will result in a prefect.yaml file populated with your first deployment. You can use this YAML file to edit and define multiple deployments for this repo.

        You can add more deployments to the deployments list in your prefect.yaml file and/or by continuing to use the deployment creation wizard.

        For more information on deployments, check out our in-depth guide for deploying flows to work pools.

        ","tags":["worker","agent","deployments","infrastructure","work pool"],"boost":2},{"location":"guides/using-the-client/","title":"Using the Prefect Orchestration Client","text":"","tags":["client","API","filters","orchestration"]},{"location":"guides/using-the-client/#overview","title":"Overview","text":"

        In the API reference for the PrefectClient, you can find many useful client methods that make it simpler to do things such as:

        • reschedule late flow runs
        • get the last N completed flow runs from my workspace

        The PrefectClient is an async context manager, so you can use it like this:

        import asyncio\n\nfrom prefect import get_client\n\nasync def main():\n    async with get_client() as client:\n        response = await client.hello()\n        print(response.json()) # \ud83d\udc4b\n\nasyncio.run(main())\n
        ","tags":["client","API","filters","orchestration"]},{"location":"guides/using-the-client/#examples","title":"Examples","text":"","tags":["client","API","filters","orchestration"]},{"location":"guides/using-the-client/#rescheduling-late-flow-runs","title":"Rescheduling late flow runs","text":"

        Sometimes, you may need to bulk reschedule flow runs that are late - for example, if you've accidentally scheduled many flow runs of a deployment to an inactive work pool.

        To do this, we can delete late flow runs and create new ones in a Scheduled state with a delay.

        This example reschedules the last 3 late flow runs of a deployment named healthcheck-storage-test to run 6 hours later than their original expected start time. It also deletes any remaining late flow runs of that deployment.

        import asyncio\nfrom datetime import datetime, timedelta, timezone\nfrom typing import Optional\n\nfrom prefect import get_client\nfrom prefect.client.schemas.filters import (\n    DeploymentFilter, FlowRunFilter\n)\nfrom prefect.client.schemas.objects import FlowRun\nfrom prefect.client.schemas.sorting import FlowRunSort\nfrom prefect.states import Scheduled\n\nasync def reschedule_late_flow_runs(\n    deployment_name: str,\n    delay: timedelta,\n    most_recent_n: int,\n    delete_remaining: bool = True,\n    states: Optional[list[str]] = None\n) -> list[FlowRun]:\n    if not states:\n        states = [\"Late\"]\n\n    async with get_client() as client:\n        flow_runs = await client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=dict(name=dict(any_=states)),\n                expected_start_time=dict(\n                    before_=datetime.now(timezone.utc)\n                ),\n            ),\n            deployment_filter=DeploymentFilter(\n                name={'like_': deployment_name}\n            ),\n            sort=FlowRunSort.START_TIME_DESC,\n            limit=most_recent_n if not delete_remaining else None\n        )\n\n        if not flow_runs:\n            print(f\"No flow runs found in states: {states!r}\")\n            return []\n\n        rescheduled_flow_runs = []\n        for i, run in enumerate(flow_runs):\n            await client.delete_flow_run(flow_run_id=run.id)\n            if i < most_recent_n:\n                new_run = await client.create_flow_run_from_deployment(\n                    deployment_id=run.deployment_id,\n                    state=Scheduled(\n                        scheduled_time=run.expected_start_time + delay\n                    ),\n                )\n                rescheduled_flow_runs.append(new_run)\n\n        return rescheduled_flow_runs\n\nif __name__ == \"__main__\":\n    rescheduled_flow_runs = asyncio.run(\n        reschedule_late_flow_runs(\n            deployment_name=\"healthcheck-storage-test\",\n            delay=timedelta(hours=6),\n            most_recent_n=3,\n        )\n    )\n\n    print(f\"Rescheduled {len(rescheduled_flow_runs)} flow runs\")\n\n    assert all(\n        run.state.is_scheduled() for run in rescheduled_flow_runs\n    )\n    assert all(\n        run.expected_start_time > datetime.now(timezone.utc)\n        for run in rescheduled_flow_runs\n    )\n
        ","tags":["client","API","filters","orchestration"]},{"location":"guides/using-the-client/#get-the-last-n-completed-flow-runs-from-my-workspace","title":"Get the last N completed flow runs from my workspace","text":"

        To get the last N completed flow runs from our workspace, we can make use of read_flow_runs and prefect.client.schemas.

        This example gets the last three completed flow runs from our workspace:

        import asyncio\nfrom typing import Optional\n\nfrom prefect import get_client\nfrom prefect.client.schemas.filters import FlowRunFilter\nfrom prefect.client.schemas.objects import FlowRun\nfrom prefect.client.schemas.sorting import FlowRunSort\n\nasync def get_most_recent_flow_runs(\n    n: int = 3,\n    states: Optional[list[str]] = None\n) -> list[FlowRun]:\n    if not states:\n        states = [\"COMPLETED\"]\n\n    async with get_client() as client:\n        return await client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state={'type': {'any_': states}}\n            ),\n            sort=FlowRunSort.END_TIME_DESC,\n            limit=n,\n        )\n\nif __name__ == \"__main__\":\n    last_3_flow_runs: list[FlowRun] = asyncio.run(\n        get_most_recent_flow_runs()\n    )\n    print(last_3_flow_runs)\n\n    assert all(\n        run.state.is_completed() for run in last_3_flow_runs\n    )\n    assert (\n        end_times := [run.end_time for run in last_3_flow_runs]\n    ) == sorted(end_times, reverse=True)\n

        Instead of the last three from the whole workspace, you could also use the DeploymentFilter like the previous example to get the last three completed flow runs of a specific deployment.
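
        For instance, a hedged sketch of narrowing that query to a single deployment (the deployment name below is only illustrative) could pass a deployment_filter alongside the flow_run_filter:

        import asyncio\n\nfrom prefect import get_client\nfrom prefect.client.schemas.filters import DeploymentFilter, FlowRunFilter\nfrom prefect.client.schemas.sorting import FlowRunSort\n\nasync def get_recent_runs_for_deployment(deployment_name: str, n: int = 3):\n    async with get_client() as client:\n        return await client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state={'type': {'any_': [\"COMPLETED\"]}}\n            ),\n            # restrict the query to a single deployment by name\n            deployment_filter=DeploymentFilter(\n                name={'like_': deployment_name}\n            ),\n            sort=FlowRunSort.END_TIME_DESC,\n            limit=n,\n        )\n\nif __name__ == \"__main__\":\n    print(asyncio.run(get_recent_runs_for_deployment(\"healthcheck-storage-test\")))\n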

        ","tags":["client","API","filters","orchestration"]},{"location":"guides/using-the-client/#transition-all-running-flows-to-cancelled-via-the-client","title":"Transition all running flows to cancelled via the Client","text":"

        It can be cumbersome to cancel many flow runs through the UI. You can use get_client to set multiple runs to a Cancelled state. The code below will cancel all flow runs that are in Pending, Running, Scheduled, or Late states when the script is run.

        import anyio\n\nfrom prefect import get_client\nfrom prefect.client.schemas.filters import FlowRunFilter, FlowRunFilterState, FlowRunFilterStateName\nfrom prefect.client.schemas.objects import StateType\n\nasync def list_flow_runs_with_states(states: list[str]):\n    async with get_client() as client:\n        flow_runs = await client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=FlowRunFilterState(\n                    name=FlowRunFilterStateName(any_=states)\n                )\n            )\n        )\n    return flow_runs\n\n\nasync def cancel_flow_runs(flow_runs):\n    async with get_client() as client:\n        for idx, flow_run in enumerate(flow_runs):\n            print(f\"[{idx + 1}] Cancelling flow run '{flow_run.name}' with ID '{flow_run.id}'\")\n            state_updates = {}\n            state_updates.setdefault(\"name\", \"Cancelled\")\n            state_updates.setdefault(\"type\", StateType.CANCELLED)\n            state = flow_run.state.copy(update=state_updates)\n            await client.set_flow_run_state(flow_run.id, state, force=True)\n\n\nasync def bulk_cancel_flow_runs():\n    states = [\"Pending\", \"Running\", \"Scheduled\", \"Late\"]\n    flow_runs = await list_flow_runs_with_states(states)\n\n    while len(flow_runs) > 0:\n        print(f\"Cancelling {len(flow_runs)} flow runs\\n\")\n        await cancel_flow_runs(flow_runs)\n        flow_runs = await list_flow_runs_with_states(states)\n    print(\"Done!\")\n\n\nif __name__ == \"__main__\":\n    anyio.run(bulk_cancel_flow_runs)\n

        There are other ways to filter objects like flow runs

        See the filters API reference for more ways to filter flow runs and other objects in your Prefect ecosystem.

        ","tags":["client","API","filters","orchestration"]},{"location":"guides/variables/","title":"Variables","text":"

        Variables enable you to store and reuse non-sensitive bits of data, such as configuration information. Variables are named, mutable string values, much like environment variables. Variables are scoped to a Prefect server instance or a single workspace in Prefect Cloud.

        Variables can be created or modified at any time, but are intended for values with infrequent writes and frequent reads. Variable values may be cached for quicker retrieval.

        While variable values are most commonly loaded during flow runtime, they can be loaded in other contexts at any time, for example to pass configuration information to Prefect configuration files such as deployment steps.

        Variables are not Encrypted

        Using variables to store sensitive information, such as credentials, is not recommended. Instead, use Secret blocks to store and access sensitive information.

        ","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#managing-variables","title":"Managing variables","text":"

        You can create, read, edit and delete variables via the Prefect UI, API, and CLI. Names must adhere to traditional variable naming conventions:

        • Have no more than 255 characters.
        • Only contain lowercase alphanumeric characters ([a-z], [0-9]) or underscores (_). Spaces are not allowed.
        • Be unique.

        Values must:

        • Have 5000 characters or fewer.

        Optionally, you can add tags to the variable.
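
        As a minimal sketch, and assuming your installed Prefect version's Variable.set accepts a tags argument, adding tags from Python might look like this:

        from prefect.variables import Variable\n\n# assumes the installed Prefect version's Variable.set accepts a tags argument\nVariable.set(name=\"my_variable\", value=\"42\", tags=[\"config\"], overwrite=True)\n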

        ","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#via-the-prefect-ui","title":"Via the Prefect UI","text":"

        You can see all the variables in your Prefect server instance or Prefect Cloud workspace on the Variables page of the Prefect UI. Both the name and value of all variables are visible to anyone with access to the server or workspace.

        To create a new variable, select the + button next to the header of the Variables page. Enter the name and value of the variable.

        ","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#via-the-rest-api","title":"Via the REST API","text":"

        Variables can be created and deleted via the REST API. You can also set and get variables via the API with either the variable name or ID. See the REST reference for more information.

        ","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#via-the-cli","title":"Via the CLI","text":"

        You can list, inspect, and delete variables via the command line interface with the prefect variable ls, prefect variable inspect <name>, and prefect variable delete <name> commands, respectively.

        ","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#accessing-variables","title":"Accessing variables","text":"

        In addition to the UI and API, variables can be referenced in code and in certain Prefect configuration files.

        ","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#in-python-code","title":"In Python code","text":"

        You can access any variable from the Python SDK with the Variable.get() method. If you attempt to reference a variable that does not exist, the method will return None. You can create variables from the Python SDK with the Variable.set() method. Note that if a variable of the same name already exists, you'll need to pass overwrite=True.

        from prefect.variables import Variable\n\n# setting the variable\nvariable = Variable.set(name=\"the_answer\", value=\"42\")\n\n# getting from a synchronous context\nanswer = Variable.get('the_answer')\nprint(answer.value)\n# 42\n\n# getting from an asynchronous context\nanswer = await Variable.get('the_answer')\nprint(answer.value)\n# 42\n\n# getting without a default value\nanswer = Variable.get('not_the_answer')\nprint(answer.value)\n# None\n\n# getting with a default value\nanswer = Variable.get('not_the_answer', default='42')\nprint(answer.value)\n# 42\n\n# using `overwrite=True`\nanswer = Variable.get('the_answer')\nprint(answer.value)\n#42\nanswer = Variable.set(name=\"the_answer\", value=\"43\", overwrite=True)\nprint(answer.value)\n#43\n
        ","tags":["variables","blocks"],"boost":2},{"location":"guides/variables/#in-prefectyaml-deployment-steps","title":"In prefect.yaml deployment steps","text":"

        In .yaml files, variables are denoted by quotes and double curly brackets, like so: \"{{ prefect.variables.my_variable }}\". You can use variables to templatize deployment steps by referencing them in the prefect.yaml file used to create deployments. For example, you could pass a variable in to specify a branch for a git repo in a deployment pull step:

        pull:\n- prefect.deployments.steps.git_clone:\n    repository: https://github.com/PrefectHQ/hello-projects.git\n    branch: \"{{ prefect.variables.deployment_branch }}\"\n

        The deployment_branch variable will be evaluated at runtime for the deployed flow, allowing changes to be made to variables used in a pull action without updating a deployment directly.
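
        To make that concrete, here is a small sketch that creates or updates the deployment_branch variable from Python using the SDK methods described later in this guide:

        from prefect.variables import Variable\n\n# point deployments that template {{ prefect.variables.deployment_branch }}\n# at a different branch without editing the deployment itself\nVariable.set(name=\"deployment_branch\", value=\"dev\", overwrite=True)\n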

        ","tags":["variables","blocks"],"boost":2},{"location":"guides/webhooks/","title":"Webhooks","text":"

        Use webhooks in your Prefect Cloud workspace to receive, observe, and react to events from other systems in your ecosystem. Each webhook exposes a unique URL endpoint to receive events from other systems and transforms them into Prefect events for use in automations.

        Webhooks are defined by two essential components: a unique URL and a template that translates incoming web requests to a Prefect event.

        ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#configuring-webhooks","title":"Configuring webhooks","text":"","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#via-the-prefect-cloud-api","title":"Via the Prefect Cloud API","text":"

        Webhooks are managed via the Webhooks API endpoints. This is a Prefect Cloud-only feature. You authenticate API calls using the standard authentication methods you use with Prefect Cloud.

        ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#via-prefect-cloud","title":"Via Prefect Cloud","text":"

        Webhooks can be created and managed from the Prefect Cloud UI.

        ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#via-the-prefect-cli","title":"Via the Prefect CLI","text":"

        Webhooks can be managed and interacted with via the prefect cloud webhook command group.

        prefect cloud webhook --help\n

        You can create your first webhook by invoking create:

        prefect cloud webhook create your-webhook-name \\\n    --description \"Receives webhooks from your system\" \\\n    --template '{ \"event\": \"your.event.name\", \"resource\": { \"prefect.resource.id\": \"your.resource.id\" } }'\n

        Note the template string, which is discussed in greater detail below.

        You can retrieve details for a specific webhook by ID using get, or optionally query all webhooks in your workspace via ls:

        # get webhook by ID\nprefect cloud webhook get <webhook-id>\n\n# list all configured webhooks in your workspace\n\nprefect cloud webhook ls\n

        If you need to disable an existing webhook without deleting it, use toggle:

        prefect cloud webhook toggle <webhook-id>\nWebhook is now disabled\n\nprefect cloud webhook toggle <webhook-id>\nWebhook is now enabled\n

        If you are concerned that your webhook endpoint may have been compromised, use rotate to generate a new, random endpoint:

        prefect cloud webhook rotate <webhook-url-slug>\n
        ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#webhook-endpoints","title":"Webhook endpoints","text":"

        The webhook endpoints have randomly generated opaque URLs that do not divulge any information about your Prefect Cloud workspace. They are rooted at https://api.prefect.cloud/hooks/. For example: https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ. Prefect Cloud assigns this URL when you create a webhook; it cannot be set via the API. You may rotate your webhook URL at any time without losing the associated configuration.

        All webhooks may accept requests via the most common HTTP methods:

        • GET, HEAD, and DELETE may be used for webhooks that define a static event template, or a template that does not depend on the body of the HTTP request. The headers of the request will be available for templates.
        • POST, PUT, and PATCH may be used when the webhook request will include a body. See How HTTP request components are handled for more details on how the body is parsed.

        Prefect Cloud webhooks are deliberately quiet to the outside world, and will only return a 204 No Content response when they are successful, and a 400 Bad Request error when there is any error interpreting the request. For more visibility when your webhooks fail, see the Troubleshooting section below.

        ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#webhook-templates","title":"Webhook templates","text":"

        The purpose of a webhook is to accept an HTTP request from another system and produce a Prefect event from it. You may find that you often have little influence or control over the format of those requests, so Prefect's webhook system gives you full control over how you turn those notifications from other systems into meaningful events in your Prefect Cloud workspace. The template you define for each webhook will determine how individual components of the incoming HTTP request become the event name and resource labels of the resulting Prefect event.

        As with the templates available in Prefect Cloud Automation for defining notifications and other parameters, you will write templates in Jinja2. All of the built-in Jinja2 blocks and filters are available, as well as the filters from the jinja2-humanize-extensions package.

        Your goal when defining your event template is to produce a valid JSON object that defines (at minimum) the event name and the resource[\"prefect.resource.id\"], which are required of all events. The simplest template is one in which these are statically defined.

        ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#static-webhook-events","title":"Static webhook events","text":"

        Let's see a static webhook template example. Say you want to configure a webhook that will notify Prefect when your recommendations machine learning model has been updated, so you can then send a Slack notification to your team and run a few subsequent deployments. Those models are produced on a daily schedule by another team that is using cron for scheduling. They aren't able to use Prefect for their flows (yet!), but they are happy to add a curl to the end of their daily script to notify you. Because this webhook will only be used for a single event from a single resource, your template can be entirely static:

        {\n    \"event\": \"model.refreshed\",\n    \"resource\": {\n        \"prefect.resource.id\": \"product.models.recommendations\",\n        \"prefect.resource.name\": \"Recommendations [Products]\",\n        \"producing-team\": \"Data Science\"\n    }\n}\n

        Make sure to produce valid JSON

        The output of your template, when rendered, should be a valid string that can be parsed, for example, with json.loads.
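
        For example, you can sanity-check a rendered template locally with a few lines of Python; this sketch simply confirms that the static template above parses as JSON:

        import json\n\nrendered = \"\"\"\n{\n    \"event\": \"model.refreshed\",\n    \"resource\": {\n        \"prefect.resource.id\": \"product.models.recommendations\",\n        \"prefect.resource.name\": \"Recommendations [Products]\",\n        \"producing-team\": \"Data Science\"\n    }\n}\n\"\"\"\n\n# json.loads raises a ValueError if the rendered output is not valid JSON\nevent = json.loads(rendered)\nprint(event[\"event\"], event[\"resource\"][\"prefect.resource.id\"])\n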

        A webhook with this template may be invoked via any of the HTTP methods, including a GET request with no body, so the team you are integrating with can include this line at the end of their daily script:

        curl https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ\n

        Each time the script hits the webhook, the webhook will produce a single Prefect event with that name and resource in your workspace.

        ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#event-fields-that-prefect-cloud-populates-for-you","title":"Event fields that Prefect Cloud populates for you","text":"

        You may notice that you only had to provide the event and resource definition, which is not a completely fleshed out event. Prefect Cloud will set default values for any missing fields, such as occurred and id, so you don't need to set them in your template. Additionally, Prefect Cloud will add the webhook itself as a related resource on all of the events it produces.

        If your template does not produce a payload field, the payload will default to a standard set of debugging information, including the HTTP method, headers, and body.

        ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#dynamic-webhook-events","title":"Dynamic webhook events","text":"

        Now let's say that after a few days you and the Data Science team are getting a lot of value from the automations you have set up with the static webhook. You've agreed to upgrade this webhook to handle all of the various models that the team produces. It's time to add some dynamic information to your webhook template.

        Your colleagues on the team have adjusted their daily cron scripts to POST a small body that includes the ID and name of the model that was updated:

        curl \\\n    -d \"model=recommendations\" \\\n    -d \"friendly_name=Recommendations%20[Products]\" \\\n    -X POST https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ\n

        This script will send a POST request and the body will include a traditional URL-encoded form with two fields describing the model that was updated: model and friendly_name. Here's a webhook template that uses Jinja to receive these values and produce different events for the different models:

        {\n    \"event\": \"model.refreshed\",\n    \"resource\": {\n        \"prefect.resource.id\": \"product.models.{{ body.model }}\",\n        \"prefect.resource.name\": \"{{ body.friendly_name }}\",\n        \"producing-team\": \"Data Science\"\n    }\n}\n

        All subsequent POST requests will produce events with those variable resource IDs and names. The other statically defined parts, such as the event name or the producing-team label you included earlier, will still be used.

        Use Jinja2's default filter to handle missing values

        Jinja2 has a helpful default filter that can compensate for missing values in the request. In this example, you may want to use the model's ID in place of the friendly name when the friendly name is not provided: {{ body.friendly_name|default(body.model) }}.
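
        If you'd like to see how the filter behaves before wiring it into a webhook, you can experiment locally with the jinja2 package; this sketch only approximates how Prefect Cloud renders templates:

        from jinja2 import Template\n\ntemplate = Template(\"{{ body.friendly_name|default(body.model) }}\")\n\n# prints \"Recommendations [Products]\" because friendly_name is present\nprint(template.render(body={\"model\": \"recommendations\", \"friendly_name\": \"Recommendations [Products]\"}))\n\n# prints \"recommendations\" because the filter falls back to body.model\nprint(template.render(body={\"model\": \"recommendations\"}))\n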

        ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#how-http-request-components-are-handled","title":"How HTTP request components are handled","text":"

        The Jinja2 template context includes the three parts of the incoming HTTP request:

        • method is the uppercased string of the HTTP method, like GET or POST.
        • headers is a case-insensitive dictionary of the HTTP headers included with the request. To prevent accidental disclosures, the Authorization header is removed.
        • body represents the body that was posted to the webhook, with a best-effort approach to parse it into an object you can access.

        HTTP headers are available without any alteration as a dict-like object, but you may access them with header names in any case. For example, these template expressions all return the value of the Content-Length header:

        {{ headers['Content-Length'] }}\n\n{{ headers['content-length'] }}\n\n{{ headers['CoNtEnt-LeNgTh'] }}\n

        The HTTP request body goes through some light preprocessing to make it more useful in templates. If the Content-Type of the request is application/json, the body will be parsed as a JSON object and made available to the webhook templates. If the Content-Type is application/x-www-form-urlencoded (as in our example above), the body is parsed into a flat dict-like object of key-value pairs. Jinja2 supports both index and attribute access to the fields of these objects, so the following two expressions are equivalent:

        {{ body['friendly_name'] }}\n\n{{ body.friendly_name }}\n

        Only for Python identifiers

        Jinja2's syntax only allows attribute-like access if the key is a valid Python identifier, so body.friendly-name will not work. Use body['friendly-name'] in those cases.

        You may not have much control over the client invoking your webhook, but you may still want bodies that look like JSON to be parsed as such. Prefect Cloud will attempt to parse any other content type (like text/plain) as if it were JSON first. In any case where the body cannot be transformed into JSON, it will be made available to your templates as a Python str.

        ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#accepting-prefect-events-directly","title":"Accepting Prefect events directly","text":"

        In cases where you have more control over the client, your webhook can accept Prefect events directly with a simple pass-through template:

        {{ body|tojson }}\n

        This template accepts the incoming body (assuming it was in JSON format) and just passes it through unmodified. This allows a POST of a partial Prefect event as in this example:

        POST /hooks/AERylZ_uewzpDx-8fcweHQ HTTP/1.1\nHost: api.prefect.cloud\nContent-Type: application/json\nContent-Length: 228\n\n{\n    \"event\": \"model.refreshed\",\n    \"resource\": {\n        \"prefect.resource.id\": \"product.models.recommendations\",\n        \"prefect.resource.name\": \"Recommendations [Products]\",\n        \"producing-team\": \"Data Science\"\n    }\n}\n

        The resulting event will be filled out with the default values for occurred, id, and other fields as described above.

        ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#accepting-cloudevents","title":"Accepting CloudEvents","text":"

        The Cloud Native Computing Foundation has standardized CloudEvents for use by systems to exchange event information in a common format. These events are supported by major cloud providers and a growing number of cloud-native systems. Prefect Cloud can interpret a webhook containing a CloudEvent natively with the following template:

        {{ body|from_cloud_event(headers) }}\n

        The resulting event will use the CloudEvent's subject as the resource (or the source if no subject is available). The CloudEvent's data attribute will become the Prefect event's payload['data'], and the other CloudEvent metadata will be at payload['cloudevents']. If you would like to handle CloudEvents in a more specific way tailored to your use case, use a dynamic template to interpret the incoming body.

        ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/webhooks/#troubleshooting","title":"Troubleshooting","text":"

        The initial configuration of your webhook may require some trial and error as you get the sender and your receiving webhook speaking a compatible language. While you are in this phase, you may find the Event Feed in the UI to be indispensable for seeing the events as they are happening.
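
        While iterating, it can also help to render individual template expressions locally with the jinja2 package against a sample body and headers before saving them to your webhook. This is only a rough approximation of how Prefect Cloud renders templates, and the names below are purely illustrative, but it can catch typos in an expression early:

        from jinja2 import Template\n\n# Sample body mimicking an application/x-www-form-urlencoded webhook payload\nsample_body = {\"friendly_name\": \"recommendations\"}\n\n# Render the same expression you plan to use in your webhook template\ntemplate = Template(\"product.models.{{ body['friendly_name'] }}\")\nprint(template.render(body=sample_body, headers={}))  # product.models.recommendations\n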

        When Prefect Cloud encounters an error during receipt of a webhook, it will produce a prefect-cloud.webhook.failed event in your workspace. This event will include critical information about the HTTP method, headers, and body it received, as well as what the template rendered. Keep an eye out for these events when something goes wrong.

        ","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"guides/deployment/aci/","title":"Run an Agent with Azure Container Instances","text":"

        Microsoft Azure Container Instances (ACI) provides a convenient and simple service for quickly spinning up a Docker container that can host a Prefect Agent and execute flow runs.

        ","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#prerequisites","title":"Prerequisites","text":"

        To follow this quickstart, you'll need the following:

        • A Prefect Cloud account
        • A Prefect Cloud API key (Prefect Cloud Pro and Custom tier accounts can use a service account API key)
        • A Microsoft Azure account
        • Azure CLI installed and authenticated
        ","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#create-a-resource-group","title":"Create a resource group","text":"

        Like most Azure resources, ACI applications must live in a resource group. If you don\u2019t already have a resource group you\u2019d like to use, create one with the az group create command. For example, this command creates a resource group called prefect-agents in the eastus region:

        az group create --name prefect-agents --location eastus\n

        Feel free to change the group name or location to match your use case. You can also run az account list-locations -o table to see all available resource group locations for your account.

        ","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#create-the-container-instance","title":"Create the container instance","text":"

        Prefect provides pre-configured Docker images you can use to quickly stand up a container instance. These Docker images include Python and Prefect. For example, the image prefecthq/prefect:2-python3.10 includes the latest release version of Prefect and Python 3.10.

        To create the container instance, use the az container create command. This example shows the syntax, but you'll need to provide the correct values for [ACCOUNT-ID], [WORKSPACE-ID], [API-KEY], and any dependencies you need to pip install on the instance. These options are discussed below.

        az container create \\\n--resource-group prefect-agents \\\n--name prefect-agent-example \\\n--image prefecthq/prefect:2-python3.10 \\\n--secure-environment-variables PREFECT_API_URL='https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]' PREFECT_API_KEY='[API-KEY]' \\\n--command-line \"/bin/bash -c 'pip install adlfs s3fs requests pandas; prefect agent start -p default-agent-pool -q test'\"\n

        When the container instance is running, go to Prefect Cloud and select the Work Pools page. Select default-agent-pool, then select the Queues tab to see the work queues configured on this work pool. Once the agent has started, the test work queue displays a \"Healthy\" status. This work queue and agent are ready to execute deployments configured to run on the test queue.

        Agents and queues

        The agent running in this container instance can now pick up and execute flow runs for any deployment configured to use the test queue on the default-agent-pool work pool.

        ","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#container-create-options","title":"Container create options","text":"

        Let's break down the details of the az container create command used here.

        The az container create command creates a new ACI container.

        --resource-group prefect-agents tells Azure which resource group the new container is created in. Here, the example uses the prefect-agents resource group created earlier.

        --name prefect-agent-example determines the container name you will see in the Azure Portal. You can set any name you\u2019d like here to suit your use case, but container instance names must be unique in your resource group.

        --image prefecthq/prefect:2-python3.10 tells ACI which Docker image to run. The script above pulls a public Prefect image from Docker Hub. You can also build custom images and push them to a public container registry so ACI can access them. Or you can push your image to a private Azure Container Registry and use it to create a container instance.

        --secure-environment-variables sets environment variables that are only visible from inside the container. They do not show up when viewing the container\u2019s metadata. You'll populate these environment variables with a few pieces of information to configure the execution environment of the container instance so it can communicate with your Prefect Cloud workspace:

        • The PREFECT_API_KEY value specifying the API key used to authenticate with your Prefect Cloud workspace. (Pro and Custom tier accounts can use a service account API key.)
        • The PREFECT_API_URL value specifying the API endpoint of your Prefect Cloud workspace.

        --command-line lets you override the container\u2019s normal entry point and run a command instead. The command above uses this option to install the adlfs pip package (so the agent can read flow code from Azure Blob Storage) along with s3fs, pandas, and requests. It then starts the Prefect agent, in this case on the default-agent-pool work pool and the test work queue. If you want to use a different work pool or queue, make sure to change these values appropriately.

        ","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#create-a-deployment","title":"Create a deployment","text":"

        Following the example of the Flow deployments tutorial, let's create a deployment that can be executed by the agent on this container instance.

        In an environment where you have installed Prefect, create a new folder called health_test, and within it create a new file called health_flow.py containing the following code.

        import prefect\nfrom prefect import task, flow\nfrom prefect import get_run_logger\n\n\n@task\ndef say_hi():\n    logger = get_run_logger()\n    logger.info(\"Hello from the Health Check Flow! \ud83d\udc4b\")\n\n\n@task\ndef log_platform_info():\n    import platform\n    import sys\n    from prefect.server.api.server import SERVER_API_VERSION\n\n    logger = get_run_logger()\n    logger.info(\"Host's network name = %s\", platform.node())\n    logger.info(\"Python version = %s\", platform.python_version())\n    logger.info(\"Platform information (instance type) = %s \", platform.platform())\n    logger.info(\"OS/Arch = %s/%s\", sys.platform, platform.machine())\n    logger.info(\"Prefect Version = %s \ud83d\ude80\", prefect.__version__)\n    logger.info(\"Prefect API Version = %s\", SERVER_API_VERSION)\n\n\n@flow(name=\"Health Check Flow\")\ndef health_check_flow():\n    hi = say_hi()\n    log_platform_info(wait_for=[hi])\n

        Now create a deployment for this flow script, making sure that it's configured to use the test queue on the default-agent-pool work pool.

        prefect deployment build --infra process --storage-block azure/flowsville/health_test --name health-test --pool default-agent-pool --work-queue test --apply health_flow.py:health_check_flow\n

        Once created, any flow runs for this deployment will be picked up by the agent running on this container instance.

        Infrastructure and storage

        This Prefect deployment example was built using the Process infrastructure type and Azure Blob Storage.

        You might wonder why your deployment needs process infrastructure rather than DockerContainer infrastructure when you are deploying a Docker image to ACI.

        A Prefect deployment\u2019s infrastructure type describes how you want Prefect agents to run flows for the deployment. With DockerContainer infrastructure, the agent will try to use Docker to spin up a new container for each flow run. Since you\u2019ll be starting your own container on ACI, you don\u2019t need Prefect to do it for you. Specifying process infrastructure on the deployment tells Prefect you want the agent to run flows by starting a process in your ACI container.

        You can use any storage type as long as you've configured a block for it before creating the deployment.
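
        For example, the azure/flowsville storage block referenced in the command above could be registered ahead of time with a short script along these lines. This is a sketch that assumes the Azure filesystem block from prefect.filesystems; the container path and connection string are placeholders:

        from prefect.filesystems import Azure\n\n# Placeholders: substitute your own container path and connection string\nblock = Azure(\n    bucket_path=\"my-container/prefect-flows\",\n    azure_storage_connection_string=\"REPLACE-WITH-YOUR-CONNECTION-STRING\",\n)\nblock.save(\"flowsville\")  # referenced above as azure/flowsville\n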

        ","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#cleaning-up","title":"Cleaning up","text":"

        Note that ACI instances may incur usage charges while running, but must be running for the agent to pick up and execute flow runs.

        To stop a container, use the az container stop command:

        az container stop --resource-group prefect-agents --name prefect-agent-example\n

        To delete a container, use the az container delete command:

        az container delete --resource-group prefect-agents --name prefect-agent-example\n
        ","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/daemonize/","title":"Daemonize Processes for Prefect Deployments","text":"

        When running workflow applications, it can be helpful to create long-running processes that run at startup and are robust to failure. In this guide you'll learn how to set up a systemd service to create long-running Prefect processes that poll for scheduled flow runs.

        A systemd service is ideal for running a long-lived process on a Linux VM or physical Linux server. We will use systemd to automatically start a Prefect worker or long-lived serve process when Linux starts. This approach provides resilience by automatically restarting the process if it crashes.

        In this guide we will:

        • Create a Linux user
        • Install and configure Prefect
        • Set up a systemd service for the Prefect worker or .serve process
        ","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#prerequisites","title":"Prerequisites","text":"
        • An environment with a Linux operating system that includes systemd and Python 3.8 or later.
        • A superuser account (you can run sudo commands).
        • A Prefect Cloud account, or a local instance of a Prefect server running on your network.
        • If daemonizing a worker, you'll need a Prefect deployment with a work pool your worker can connect to.

        If using an AWS t2.micro EC2 instance with an AWS Linux image, you can install Python and pip with sudo yum install -y python3 python3-pip.

        ","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#step-1-add-a-user","title":"Step 1: Add a user","text":"

        Create a user account on your Linux system for the Prefect process. While you can run a worker or serve process as root, it's good security practice to avoid doing so unless you are sure you need to.

        In a terminal, run:

        sudo useradd -m prefect\nsudo passwd prefect\n

        When prompted, enter a password for the prefect account.

        Next, log in to the prefect account by running:

        sudo su prefect\n
        ","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#step-2-install-prefect","title":"Step 2: Install Prefect","text":"

        Run:

        pip3 install prefect\n

        This guide assumes you are installing Prefect globally, not in a virtual environment. If you run the systemd service from a virtual environment, change the path in the ExecStart directive accordingly. For example, if using venv, point ExecStart at the prefect executable in the bin subdirectory of your virtual environment.

        Next, set up your environment so that the Prefect client will know which server to connect to.

        If connecting to Prefect Cloud, follow the instructions to obtain an API key and then run the following:

        prefect cloud login -k YOUR_API_KEY\n

        When prompted, choose the Prefect workspace you'd like to log in to.

        If connecting to a self-hosted Prefect server instance instead of Prefect Cloud, run the following and substitute the IP address of your server:

        prefect config set PREFECT_API_URL=http://your-prefect-server-IP:4200/api\n

        Finally, run the exit command to sign out of the prefect Linux account. This command switches you back to your sudo-enabled account so you can run the commands in the next section.

        ","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#step-3-set-up-a-systemd-service","title":"Step 3: Set up a systemd service","text":"

        See the section below if you are setting up a Prefect worker. Skip to the next section if you are setting up a Prefect .serve process.

        ","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#setting-up-a-systemd-service-for-a-prefect-worker","title":"Setting up a systemd service for a Prefect worker","text":"

        Move into the /etc/systemd/system folder and open a file for editing. We use the Vim text editor below.

        cd /etc/systemd/system\nsudo vim my-prefect-service.service\n
        my-prefect-service.service
        [Unit]\nDescription=Prefect worker\n\n[Service]\nUser=prefect\nWorkingDirectory=/home\nExecStart=prefect worker start --pool YOUR_WORK_POOL_NAME\nRestart=always\n\n[Install]\nWantedBy=multi-user.target\n

        Make sure you substitute your own work pool name.

        ","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#setting-up-a-systemd-service-for-serve","title":"Setting up a systemd service for .serve","text":"

        Copy your flow entrypoint Python file and any other files needed for your flow to run into the /home directory (or the directory of your choice).

        Here's a basic example flow:

        my_file.py
        from prefect import flow\n\n\n@flow(log_prints=True)\ndef say_hi():\n    print(\"Hello!\")\n\nif __name__==\"__main__\":\n    say_hi.serve(name=\"Greeting from daemonized .serve\")\n

        If you want to make changes to your flow code without restarting your process, you can push your code to git-based cloud storage (GitHub, BitBucket, GitLab) and use flow.from_source().serve(), as in the example below.

        my_remote_flow_code_file.py
        from prefect import flow\n\nif __name__ == \"__main__\":\n    flow.from_source(\n        source=\"https://github.com/org/repo.git\",\n        entrypoint=\"path/to/my_remote_flow_code_file.py:say_hi\",\n    ).serve(name=\"deployment-with-github-storage\")\n

        Make sure you substitute your own flow code entrypoint path.

        Note that if you change the flow entrypoint parameters, you will need to restart the process.

        Move into the /etc/systemd/system folder and open a file for editing. We use the Vim text editor below.

        cd /etc/systemd/system\nsudo vim my-prefect-service.service\n
        my-prefect-service.service
        [Unit]\nDescription=Prefect serve \n\n[Service]\nUser=prefect\nWorkingDirectory=/home\nExecStart=python3 my_file.py\nRestart=always\n\n[Install]\nWantedBy=multi-user.target\n
        ","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#save-enable-and-start-the-service","title":"Save, enable, and start the service","text":"

        To save the file and exit Vim hit the escape key, type :wq!, then press the return key.

        Next, run sudo systemctl daemon-reload to make systemd aware of your new service.

        Then, run sudo systemctl enable my-prefect-service to enable the service. This command will ensure it runs when your system boots.

        Next, run sudo systemctl start my-prefect-service to start the service.

        Run your deployment from the UI and check out the logs on the Flow Runs page.

        You can check whether your daemonized Prefect worker or serve process is running, and view its Prefect logs, with systemctl status my-prefect-service.

        That's it! You now have a systemd service that starts when your system boots, and will restart if it ever crashes.

        ","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/daemonize/#next-steps","title":"Next steps","text":"

        If you want to set up a long-lived process on a Windows machine, the pattern is similar. Instead of systemd, you can use NSSM.

        Check out other Prefect guides to see what else you can do with Prefect!

        ","tags":["systemd","daemonize","worker"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/","title":"Developing a New Worker Type","text":"

        Advanced Topic

        This tutorial is for users who want to extend the Prefect framework. Completing it successfully requires deep knowledge of Prefect concepts. For standard use cases, we recommend using one of the available workers instead.

        Prefect workers are responsible for setting up execution infrastructure and starting flow runs on that infrastructure.

        A list of available workers can be found here. What if you want to execute your flow runs on infrastructure that doesn't have an available worker type? This tutorial will walk you through creating a custom worker that can run your flows on your chosen infrastructure.

        ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#worker-configuration","title":"Worker configuration","text":"

        When setting up an execution environment for a flow run, a worker receives configuration for the infrastructure it is designed to work with. Examples of configuration values include memory allocation, CPU allocation, credentials, image name, etc. The worker then uses this configuration to create the execution environment and start the flow run.

        How are the configuration values populated?

        The work pool that a worker polls for flow runs has a base job template associated with it. The template is the contract for how configuration values are populated for each flow run.

        The keys in the job_configuration section of this base job template match the worker's configuration class attributes. The values in the job_configuration section of the base job template are used to populate the attributes of the worker's configuration class.

        The work pool creator gets to decide how they want to populate the values in the job_configuration section of the base job template. The values can be hard-coded, templated using placeholders, or a mix of these two approaches. Because you, as the worker developer, don't know how the work pool creator will populate the values, you should set sensible defaults for your configuration class attributes as a matter of best practice.

        ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#implementing-a-basejobconfiguration-subclass","title":"Implementing a BaseJobConfiguration subclass","text":"

        A worker developer defines their worker's configuration with a class that extends BaseJobConfiguration.

        BaseJobConfiguration has attributes that are common to all workers:

        Attribute Description name The name to assign to the created execution environment. env Environment variables to set in the created execution environment. labels The labels assigned to the created execution environment for metadata purposes. command The command to use when starting a flow run.

        Prefect sets values for each attribute before giving the configuration to the worker. If you want to customize the values of these attributes, use the prepare_for_flow_run method.

        Here's an example prepare_for_flow_run method that adds a label to the execution environment:

        def prepare_for_flow_run(\n    self, flow_run, deployment = None, flow = None,\n):  \n    super().prepare_for_flow_run(flow_run, deployment, flow)  \n    self.labels.append(\"my-custom-label\")\n

        A worker configuration class is a Pydantic model, so you can add additional attributes to your configuration class as Pydantic fields. For example, if you want to allow memory and CPU requests for your worker, you can do so like this:

        from pydantic import Field\nfrom prefect.workers.base import BaseJobConfiguration\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n    memory: int = Field(\n            default=1024,\n            description=\"Memory allocation for the execution environment.\"\n        )\n    cpu: int = Field(\n            default=500, \n            description=\"CPU allocation for the execution environment.\"\n        )\n

        This configuration class will populate the job_configuration section of the resulting base job template.

        For this example, the base job template would look like this:

        job_configuration:\n    name: \"{{ name }}\"\n    env: \"{{ env }}\"\n    labels: \"{{ labels }}\"\n    command: \"{{ command }}\"\n    memory: \"{{ memory }}\"\n    cpu: \"{{ cpu }}\"\nvariables:\n    type: object\n    properties:\n        name:\n          title: Name\n          description: Name given to infrastructure created by a worker.\n          type: string\n        env:\n          title: Environment Variables\n          description: Environment variables to set when starting a flow run.\n          type: object\n          additionalProperties:\n            type: string\n        labels:\n          title: Labels\n          description: Labels applied to infrastructure created by a worker.\n          type: object\n          additionalProperties:\n            type: string\n        command:\n          title: Command\n          description: The command to use when starting a flow run. In most cases,\n            this should be left blank and the command will be automatically generated\n            by the worker.\n          type: string\n        memory:\n            title: Memory\n            description: Memory allocation for the execution environment.\n            type: integer\n            default: 1024\n        cpu:\n            title: CPU\n            description: CPU allocation for the execution environment.\n            type: integer\n            default: 500\n

        This base job template defines what values can be provided by deployment creators on a per-deployment basis and how those provided values will be translated into the configuration values that the worker will use to create the execution environment.

        Notice that each attribute for the class was added in the job_configuration section with placeholders whose name matches the attribute name. The variables section was also populated with the OpenAPI schema for each attribute. If a configuration class is used without explicitly declaring any template variables, the template variables will be inferred from the configuration class attributes.

        ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#customizing-configuration-attribute-templates","title":"Customizing Configuration Attribute Templates","text":"

        You can customize the template for each attribute for situations where the configuration values should use more sophisticated templating. For example, if you want to add units for the memory attribute, you can do so like this:

        from pydantic import Field\nfrom prefect.workers.base import BaseJobConfiguration\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n    memory: str = Field(\n            default=\"1024Mi\",\n            description=\"Memory allocation for the execution environment.\",\n            template=\"{{ memory_request }}Mi\"\n        )\n    cpu: str = Field(\n            default=\"500m\", \n            description=\"CPU allocation for the execution environment.\",\n            template=\"{{ cpu_request }}m\"\n        )\n

        Notice that we changed the type of each attribute to str to accommodate the units, and we added a new template attribute to each attribute. The template attribute is used to populate the job_configuration section of the resulting base job template.

        For this example, the job_configuration section of the resulting base job template would look like this:

        job_configuration:\n    name: \"{{ name }}\"\n    env: \"{{ env }}\"\n    labels: \"{{ labels }}\"\n    command: \"{{ command }}\"\n    memory: \"{{ memory_request }}Mi\"\n    cpu: \"{{ cpu_request }}m\"\n

        Note that to use custom templates, you will need to declare the template variables used in the template because the names of those variables can no longer be inferred from the configuration class attributes. We will cover how to declare the default variable schema in the Worker Template Variables section.

        ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#rules-for-template-variable-interpolation","title":"Rules for template variable interpolation","text":"

        When defining a job configuration model, it's useful to understand how template variables are interpolated into the job configuration. The templating engine follows a few simple rules:

        1. If a template variable is the only value for a key in the job_configuration section, the key will be replaced with the value template variable.
        2. If a template variable is part of a string (i.e., there is text before or after the template variable), the value of the template variable will be interpolated into the string.
        3. If a template variable is the only value for a key in the job_configuration section and no value is provided for the template variable, the key will be removed from the job_configuration section.

        These rules allow worker developers and work pool maintainers to define template variables that can be complex types like dictionaries and lists. These rules also mean that worker developers should give reasonable default values to job configuration fields whenever possible because values are not guaranteed to be provided if template variables are unset.
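
        To make these rules concrete, here is a small, purely illustrative Python sketch (not Prefect's actual templating engine) that applies them to a toy job_configuration:

        # Illustrative only -- not Prefect's actual templating engine\ndef interpolate(job_configuration: dict, values: dict) -> dict:\n    result = {}\n    for key, raw in job_configuration.items():\n        if isinstance(raw, str) and raw.startswith(\"{{ \") and raw.endswith(\" }}\") and raw.count(\"{{\") == 1:\n            # The template variable is the only value for this key\n            name = raw[3:-3]\n            if name in values:\n                result[key] = values[name]  # rule 1: keep the variable's value (and type) as-is\n            # rule 3: if no value was provided, the key is dropped entirely\n        elif isinstance(raw, str) and \"{{ \" in raw:\n            # rule 2: placeholders embedded in a larger string are interpolated as text\n            interpolated = raw\n            for name, value in values.items():\n                interpolated = interpolated.replace(\"{{ \" + name + \" }}\", str(value))\n            result[key] = interpolated\n        else:\n            result[key] = raw\n    return result\n\nprint(interpolate(\n    {\"memory\": \"{{ memory_request }}Mi\", \"env\": \"{{ env }}\", \"labels\": \"{{ labels }}\"},\n    {\"memory_request\": 1024, \"env\": {\"FOO\": \"bar\"}},\n))\n# {'memory': '1024Mi', 'env': {'FOO': 'bar'}} -- 'labels' is dropped because no value was provided\n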

        ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#template-variable-usage-strategies","title":"Template variable usage strategies","text":"

        Template variables define the interface that deployment creators interact with to configure the execution environments of their deployments. The complexity of this interface can be controlled via the template variables that are defined for a base job template. This control allows work pool maintainers to find a point along the spectrum of flexibility and simplicity appropriate for their organization.

        There are two patterns that are represented in current worker implementations:

        ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#pass-through","title":"Pass-through","text":"

        In the pass-through pattern, template variables are passed through to the job configuration with little change. This pattern exposes complete control to deployment creators but also requires them to understand the details of the execution environment.

        This pattern is useful when the execution environment is simple, and the deployment creators are expected to have high technical knowledge.

        The Docker worker is an example of a worker that uses this pattern.

        ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#infrastructure-as-code-templating","title":"Infrastructure as code templating","text":"

        Depending on the infrastructure they interact with, workers can sometimes employ a declarative infrastructure syntax (i.e., infrastructure as code) to create execution environments (e.g., a Kubernetes manifest or an ECS task definition).

        In the IaC pattern, it's often useful to use template variables to template portions of the declarative syntax which then can be used to generate the declarative syntax into a final form.

        This approach allows work pool creators to provide a simpler interface to deployment creators while also controlling which portions of infrastructure are configurable by deployment creators.

        The Kubernetes worker is an example of a worker that uses this pattern.

        ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#configuring-credentials","title":"Configuring credentials","text":"

        When executing flow runs within cloud services, workers will often need credentials to authenticate with those services. For example, a worker that executes flow runs in AWS Fargate will need AWS credentials. As a worker developer, you can use blocks to accept credentials configuration from the user.

        For example, if you want to allow the user to configure AWS credentials, you can do so like this:

        from pydantic import Field\nfrom prefect_aws import AwsCredentials\n\nfrom prefect.workers.base import BaseJobConfiguration\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n    aws_credentials: AwsCredentials = Field(\n        default=None,\n        description=\"AWS credentials to use when creating AWS resources.\"\n    )\n

        Users can create and assign a block to the aws_credentials attribute in the UI and the worker will use these credentials when interacting with AWS resources.
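
        For example, a user might register such a block ahead of time with a short script like the following sketch; the credential values and the block name my-aws-creds are placeholders:

        from prefect_aws import AwsCredentials\n\n# Placeholder values -- store real secrets securely rather than in source code\nAwsCredentials(\n    aws_access_key_id=\"REPLACE-WITH-KEY-ID\",\n    aws_secret_access_key=\"REPLACE-WITH-SECRET-KEY\",\n    region_name=\"us-east-1\",\n).save(\"my-aws-creds\")\n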

        ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#worker-template-variables","title":"Worker template variables","text":"

        Providing template variables for a base job template defines the fields that deployment creators can override per deployment. The work pool creator ultimately defines the template variables for a base job template, but the worker developer is able to define default template variables for the worker to make it easier to use.

        Default template variables for a worker are defined by implementing the BaseVariables class. Like the BaseJobConfiguration class, the BaseVariables class has attributes that are common to all workers:

        Attribute Description name The name to assign to the created execution environment. env Environment variables to set in the created execution environment. labels The labels assigned to the created execution environment for metadata purposes. command The command to use when starting a flow run.

        Additional attributes can be added to the BaseVariables class to define additional template variables. For example, if you want to allow memory and CPU requests for your worker, you can do so like this:

        from pydantic import Field\nfrom prefect.workers.base import BaseVariables\n\nclass MyWorkerTemplateVariables(BaseVariables):\n    memory_request: int = Field(\n            default=1024,\n            description=\"Memory allocation for the execution environment.\"\n        )\n    cpu_request: int = Field(\n            default=500, \n            description=\"CPU allocation for the execution environment.\"\n        )\n

        When MyWorkerTemplateVariables is used in conjunction with MyWorkerConfiguration from the Customizing Configuration Attribute Templates section, the resulting base job template will look like this:

        job_configuration:\n    name: \"{{ name }}\"\n    env: \"{{ env }}\"\n    labels: \"{{ labels }}\"\n    command: \"{{ command }}\"\n    memory: \"{{ memory_request }}Mi\"\n    cpu: \"{{ cpu_request }}m\"\nvariables:\n    type: object\n    properties:\n        name:\n          title: Name\n          description: Name given to infrastructure created by a worker.\n          type: string\n        env:\n          title: Environment Variables\n          description: Environment variables to set when starting a flow run.\n          type: object\n          additionalProperties:\n            type: string\n        labels:\n          title: Labels\n          description: Labels applied to infrastructure created by a worker.\n          type: object\n          additionalProperties:\n            type: string\n        command:\n          title: Command\n          description: The command to use when starting a flow run. In most cases,\n            this should be left blank and the command will be automatically generated\n            by the worker.\n          type: string\n        memory_request:\n            title: Memory Request\n            description: Memory allocation for the execution environment.\n            type: integer\n            default: 1024\n        cpu_request:\n            title: CPU Request\n            description: CPU allocation for the execution environment.\n            type: integer\n            default: 500\n

        Note that template variable classes are never used directly. Instead, they are used to generate a schema that is used to populate the variables section of a base job template and validate the template variables provided by the user.

        We don't recommend using template variable classes within your worker implementation for validation purposes because the work pool creator ultimately defines the template variables. The configuration class should handle any necessary run-time validation.
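
        As a sketch of what run-time validation on the configuration class might look like, you could attach a standard Pydantic validator to a field; the field and rule below are purely illustrative:

        from pydantic import Field, validator\nfrom prefect.workers.base import BaseJobConfiguration\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n    memory: str = Field(\n        default=\"1024Mi\",\n        description=\"Memory allocation for the execution environment.\",\n    )\n\n    @validator(\"memory\")\n    def memory_must_include_units(cls, value):\n        # Illustrative rule: require explicit units so the value is unambiguous\n        if not value.endswith((\"Mi\", \"Gi\")):\n            raise ValueError(\"memory must include units, for example '1024Mi'\")\n        return value\n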

        ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#worker-implementation","title":"Worker implementation","text":"

        Workers set up execution environments using provided configuration. Workers also observe the execution environment as the flow run executes and report any crashes to the Prefect API.

        ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#attributes","title":"Attributes","text":"

        To implement a worker, subclass the BaseWorker class and provide it with the following attributes:

        Attribute Description Required type The type of the worker. Yes job_configuration The configuration class for the worker. Yes job_configuration_variables The template variables class for the worker. No _documentation_url Link to documentation for the worker. No _logo_url Link to a logo for the worker. No _description A description of the worker. No","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#methods","title":"Methods","text":"","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#run","title":"run","text":"

        In addition to the attributes above, you must also implement a run method. The run method is called for each flow run the worker receives for execution from the work pool.

        The run method has the following signature:

         async def run(\n        self, flow_run: FlowRun, configuration: BaseJobConfiguration, task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> BaseWorkerResult:\n        ...\n

        The run method is passed: the flow run to execute, the execution environment configuration for the flow run, and a task status object that allows the worker to track whether the flow run was submitted successfully.

        The run method must also return a BaseWorkerResult object. The BaseWorkerResult object returned contains information about the flow run execution. For the most part, you can implement the BaseWorkerResult with no modifications like so:

        from prefect.workers.base import BaseWorkerResult\n\nclass MyWorkerResult(BaseWorkerResult):\n    \"\"\"Result returned by the MyWorker.\"\"\"\n

        If you would like to return more information about a flow run, then additional attributes can be added to the BaseWorkerResult class.
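
        For example, a worker that wants to report the name of the job it created could extend the result class like this sketch (the extra field is hypothetical):

        from prefect.workers.base import BaseWorkerResult\n\nclass MyWorkerResult(BaseWorkerResult):\n    \"\"\"Result returned by MyWorker, extended with an illustrative extra field.\"\"\"\n\n    job_name: str = \"\"  # hypothetical: the name of the job created for the flow run\n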

        ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#kill_infrastructure","title":"kill_infrastructure","text":"

        Workers must implement a kill_infrastructure method to support flow run cancellation. The kill_infrastructure method is called when a flow run is canceled and is passed an identifier for the infrastructure to tear down and the execution environment configuration for the flow run.

        The infrastructure_pid passed to the kill_infrastructure method is the same identifier used to mark a flow run execution as started in the run method. The infrastructure_pid must be a string, but it can take on any format you choose.

        The infrastructure_pid should contain enough information to uniquely identify the infrastructure created for a flow run when used with the job_configuration passed to the kill_infrastructure method. Examples of useful information include: the cluster name, the hostname, the process ID, the container ID, etc.
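
        For example, a worker might compose and parse an identifier of the form cluster-name:job-id with a pair of small helpers along these lines (a purely illustrative sketch; the format is up to you):

        from typing import Tuple\n\ndef build_infrastructure_pid(cluster_name: str, job_id: str) -> str:\n    # Combine everything needed to locate the infrastructure again later\n    return f\"{cluster_name}:{job_id}\"\n\ndef parse_infrastructure_pid(infrastructure_pid: str) -> Tuple[str, str]:\n    cluster_name, job_id = infrastructure_pid.split(\":\", 1)\n    return cluster_name, job_id\n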

        If a worker cannot tear down infrastructure created for a flow run, the kill_infrastructure method should raise an InfrastructureNotFound or InfrastructureNotAvailable exception.

        ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#worker-implementation-example","title":"Worker implementation example","text":"

        Below is an example of a worker implementation. This example is not intended to be a complete implementation but to illustrate the aforementioned concepts.

        from typing import Optional\n\nimport anyio.abc\nfrom pydantic import Field\n\nfrom prefect.client.schemas import FlowRun\nfrom prefect.workers.base import BaseWorker, BaseWorkerResult, BaseJobConfiguration, BaseVariables\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n    memory: str = Field(\n            default=\"1024Mi\",\n            description=\"Memory allocation for the execution environment.\",\n            template=\"{{ memory_request }}Mi\"\n        )\n    cpu: str = Field(\n            default=\"500m\", \n            description=\"CPU allocation for the execution environment.\",\n            template=\"{{ cpu_request }}m\"\n        )\n\nclass MyWorkerTemplateVariables(BaseVariables):\n    memory_request: int = Field(\n            default=1024,\n            description=\"Memory allocation for the execution environment.\"\n        )\n    cpu_request: int = Field(\n            default=500, \n            description=\"CPU allocation for the execution environment.\"\n        )\n\nclass MyWorkerResult(BaseWorkerResult):\n    \"\"\"Result returned by the MyWorker.\"\"\"\n\nclass MyWorker(BaseWorker):\n    type = \"my-worker\"\n    job_configuration = MyWorkerConfiguration\n    job_configuration_variables = MyWorkerTemplateVariables\n    _documentation_url = \"https://example.com/docs\"\n    _logo_url = \"https://example.com/logo\"\n    _description = \"My worker description.\"\n\n    async def run(\n        self, flow_run: FlowRun, configuration: BaseJobConfiguration, task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> BaseWorkerResult:\n        # Create the execution environment and start execution\n        job = await self._create_and_start_job(configuration)\n\n        if task_status:\n            # Use a unique ID to mark the run as started. This ID is later used to tear down infrastructure\n            # if the flow run is cancelled.\n            task_status.started(job.id) \n\n        # Monitor the execution\n        job_status = await self._watch_job(job, configuration)\n\n        exit_code = job_status.exit_code if job_status else -1 # Get result of execution for reporting\n        return MyWorkerResult(\n            status_code=exit_code,\n            identifier=job.id,\n        )\n\n    async def kill_infrastructure(self, infrastructure_pid: str, configuration: BaseJobConfiguration) -> None:\n        # Tear down the execution environment\n        await self._kill_job(infrastructure_pid, configuration)\n

        Most of the execution logic is omitted from the example above, but it shows that the typical order of operations in the run method is:

        1. Create the execution environment and start the flow run execution
        2. Mark the flow run as started via the passed task_status object
        3. Monitor the execution
        4. Get the execution's final status from the infrastructure and return a BaseWorkerResult object

        To see other examples of worker implementations, see the ProcessWorker and KubernetesWorker implementations.

        ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#integrating-with-the-prefect-cli","title":"Integrating with the Prefect CLI","text":"

        Workers can be started via the Prefect CLI by providing the --type option to the prefect worker start CLI command. To make your worker type available via the CLI, it must be available at import time.

        If your worker is in a package, you can add an entry point to your setup file in the following format:

        entry_points={\n    \"prefect.collections\": [\n        \"my_package_name = my_worker_module\",\n    ]\n},\n

        Prefect will discover this entry point and load the specified worker module. The entry point makes your worker type available via the CLI.

        ","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/kubernetes/","title":"Running Flows with Kubernetes","text":"

        This guide will walk you through running your flows on Kubernetes. Though much of the guide applies to any Kubernetes cluster, there are differences among the managed Kubernetes offerings from cloud providers, especially when it comes to container registries and access management. We'll focus on Amazon Elastic Kubernetes Service (EKS).

        ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#prerequisites","title":"Prerequisites","text":"

        Before we begin, there are a few prerequisites:

        1. A Prefect Cloud account
        2. A cloud provider (AWS, GCP, or Azure) account
        3. Install Python and Prefect
        4. Install Helm
        5. Install the Kubernetes CLI (kubectl)

        Prefect is tested against Kubernetes 1.26.0 and newer minor versions.

        Administrator Access

        Though not strictly necessary, you may want to ensure you have admin access, both in Prefect Cloud and in your cloud provider. Admin access is only necessary during the initial setup and can be downgraded after.

        ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-cluster","title":"Create a cluster","text":"

        Let's start by creating a new cluster. If you already have one, skip ahead to the next section.

        AWSGCPAzure

        One easy way to get set up with a cluster in EKS is with eksctl. Node pools can be backed by either EC2 instances or Fargate. Let's choose Fargate so there's less to manage. The following command takes around 15 minutes and must not be interrupted:

        # Replace the cluster name with your own value\neksctl create cluster --fargate --name <CLUSTER-NAME>\n\n# Authenticate to the cluster.\naws eks update-kubeconfig --name <CLUSTER-NAME>\n

        You can get a GKE cluster up and running with a few commands using the gcloud CLI. We'll build a bare-bones cluster that is accessible over the open internet - this should not be used in a production environment. To deploy the cluster, your project must have a VPC network configured.

        First, authenticate to GCP by setting the following configuration options.

        # Authenticate to gcloud\ngcloud auth login\n\n# Specify the project & zone to deploy the cluster to\n# Replace the project name with your GCP project name\ngcloud config set project <GCP-PROJECT-NAME>\ngcloud config set compute/zone <AVAILABILITY-ZONE>\n

        Next, deploy the cluster - this command will take ~15 minutes to complete. Once the cluster has been created, authenticate to the cluster.

        # Create cluster\n# Replace the cluster name with your own value\ngcloud container clusters create <CLUSTER-NAME> --num-nodes=1 \\\n--machine-type=n1-standard-2\n\n# Authenticate to the cluster\ngcloud container clusters get-credentials <CLUSTER-NAME> --zone <AVAILABILITY-ZONE>\n

        GCP Gotchas

        • You'll need to enable the default service account in the IAM console, or specify a different service account with the appropriate permissions to be used.
        ERROR: (gcloud.container.clusters.create) ResponseError: code=400, message=Service account \"000000000000-compute@developer.gserviceaccount.com\" is disabled.\n
        • Organization policy blocks creation of external (public) IPs. You can override this policy (if you have the appropriate permissions) under the Organizational Policy page within IAM.
        creation failed: Constraint constraints/compute.vmExternalIpAccess violated for project 000000000000. Add instance projects/<GCP-PROJECT-NAME>/zones/us-east1-b/instances/gke-gke-guide-1-default-pool-c369c84d-wcfl to the constraint to use external IP with it.\"\n

        You can quickly create an AKS cluster using the Azure CLI, or use the Cloud Shell directly from the Azure portal shell.azure.com.

        First, authenticate to Azure if not already done.

          az login\n

        Next, deploy the cluster - this command will take ~4 minutes to complete. Once the cluster has been created, authenticate to the cluster.

          # Create a Resource Group at the desired location, e.g. westus\n  az group create --name <RESOURCE-GROUP-NAME> --location <LOCATION>\n\n  # Create a kubernetes cluster with default kubernetes version, default SKU load balancer (Standard) and default vm set type (VirtualMachineScaleSets)\n  az aks create --resource-group <RESOURCE-GROUP-NAME> --name <CLUSTER-NAME>\n\n  # Configure kubectl to connect to your Kubernetes cluster\n  az aks get-credentials --resource-group <RESOURCE-GROUP-NAME> --name <CLUSTER-NAME>\n\n  # Verify the connection by listing the cluster nodes\n  kubectl get nodes\n
        ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-container-registry","title":"Create a container registry","text":"

        Besides a cluster, the other critical resource we'll need is a container registry. A registry is not strictly required, but in most cases you'll want to use custom images and/or have more control over where images are stored. If you already have a registry, skip ahead to the next section.

        AWSGCPAzure

        Let's create a registry using the AWS CLI and authenticate the docker daemon to said registry:

        # Replace the image name with your own value\naws ecr create-repository --repository-name <IMAGE-NAME>\n\n# Login to ECR\n# Replace the region and account ID with your own values\naws ecr get-login-password --region <REGION> | docker login \\\n  --username AWS --password-stdin <AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com\n

        Let's create a registry using the gcloud CLI and authenticate the docker daemon to said registry:

        # Create artifact registry repository to host your custom image\n# Replace the repository name with your own value; it can be the \n# same name as your image\ngcloud artifacts repositories create <REPOSITORY-NAME> \\\n--repository-format=docker --location=us\n\n# Authenticate to artifact registry\ngcloud auth configure-docker us-docker.pkg.dev\n

        Let's create a registry using the Azure CLI and authenticate the docker daemon to said registry:

        # Name must be a lower-case alphanumeric\n# Tier SKU can easily be updated later, e.g. az acr update --name <REPOSITORY-NAME> --sku Standard\naz acr create --resource-group <RESOURCE-GROUP-NAME> \\\n  --name <REPOSITORY-NAME> \\\n  --sku Basic\n\n# Attach ACR to AKS cluster\n# You need Owner, Account Administrator, or Co-Administrator role on your Azure subscription as per Azure docs\naz aks update --resource-group <RESOURCE-GROUP-NAME> --name <CLUSTER-NAME> --attach-acr <REPOSITORY-NAME>\n\n# You can verify AKS can now reach ACR\naz aks check-acr --resource-group <RESOURCE-GROUP-NAME> --name <CLUSTER-NAME> --acr <REPOSITORY-NAME>.azurecr.io\n
        ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-kubernetes-work-pool","title":"Create a Kubernetes work pool","text":"

        Work pools allow you to manage deployment infrastructure. We'll configure the default values for our Kubernetes base job template. Note that these values can be overridden by individual deployments.

        Let's switch to the Prefect Cloud UI, where we'll create a new Kubernetes work pool (alternatively, you could use the Prefect CLI to create a work pool).

        1. Click on the Work Pools tab on the left sidebar
        2. Click the + button at the top of the page
        3. Select Kubernetes as the work pool type
        4. Click Next to configure the work pool settings

        Let's look at a few popular configuration options.

        Environment Variables

        Add environment variables to set when starting a flow run. So long as you are using a Prefect-maintained image and haven't overwritten the image's entrypoint, you can specify Python packages to install at runtime with {\"EXTRA_PIP_PACKAGES\":\"my_package\"}. For example {\"EXTRA_PIP_PACKAGES\":\"pandas==1.2.3\"} will install pandas version 1.2.3. Alternatively, you can specify package installation in a custom Dockerfile, which can allow you to take advantage of image caching. As we'll see below, Prefect can help us create a Dockerfile with our flow code and the packages specified in a requirements.txt file baked in.

        Namespace

        Set the Kubernetes namespace to create jobs within, such as prefect. By default, this is set to default.

        Image

        Specify the Docker container image for created jobs. If not set, the latest Prefect 2 image will be used (i.e. prefecthq/prefect:2-latest). Note that you can override this on each deployment through job_variables.

        Image Pull Policy

        Select from the dropdown options to specify when to pull the image. When using the IfNotPresent policy, make sure to use unique image tags, as otherwise old images could get cached on your nodes.

        Finished Job TTL

        Number of seconds before finished jobs are automatically cleaned up by Kubernetes' controller. You may want to set this to 60 so that completed flow runs are cleaned up after a minute.

        Pod Watch Timeout Seconds

        Number of seconds for pod creation to complete before timing out. Consider setting this to 300, especially if using a serverless node pool, as these tend to have longer startup times.

        Kubernetes Cluster Config

        You can configure the Kubernetes cluster to use for job creation by specifying a KubernetesClusterConfig block. Generally you should leave the cluster config blank as the worker should be provisioned with appropriate access and permissions. Typically this setting is used when a worker is deployed to a cluster that is different from the cluster where flow runs are executed.

        Advanced Settings

        Want to modify the default base job template to add other fields or delete existing fields?

        Select the Advanced tab and edit the JSON representation of the base job template.

        For example, to set a CPU request, add the following section under variables:

        \"cpu_request\": {\n  \"title\": \"CPU Request\",\n  \"description\": \"The CPU allocation to request for this pod.\",\n  \"default\": \"default\",\n  \"type\": \"string\"\n},\n

        Next add the following to the first containers item under job_configuration:

        ...\n\"containers\": [\n  {\n    ...,\n    \"resources\": {\n      \"requests\": {\n        \"cpu\": \"{{ cpu_request }}\"\n      }\n    }\n  }\n],\n...\n

        Running deployments with this work pool will now request the specified CPU.

        After configuring the work pool settings, move to the next screen.

        Give the work pool a name and save.

        Our new Kubernetes work pool should now appear in the list of work pools.

        ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-prefect-cloud-api-key","title":"Create a Prefect Cloud API key","text":"

        While in the Prefect Cloud UI, create a Prefect Cloud API key if you don't already have one. Click your profile avatar picture, then click your name to go to your profile settings. Click API Keys and hit the plus button to create a new API key. Make sure to store it safely along with your other passwords, ideally in a password manager.

        ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#deploy-a-worker-using-helm","title":"Deploy a worker using Helm","text":"

        With our cluster and work pool created, it's time to deploy a worker, which will set up Kubernetes infrastructure to run our flows. The best way to deploy a worker is using the Prefect Helm Chart.

        ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#add-the-prefect-helm-repository","title":"Add the Prefect Helm repository","text":"

        Add the Prefect Helm repository to your Helm client:

        helm repo add prefect https://prefecthq.github.io/prefect-helm\nhelm repo update\n
        ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-namespace","title":"Create a namespace","text":"

        Create a new namespace in your Kubernetes cluster to deploy the Prefect worker:

        kubectl create namespace prefect\n
        ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-kubernetes-secret-for-the-prefect-api-key","title":"Create a Kubernetes secret for the Prefect API key","text":"
        kubectl create secret generic prefect-api-key \\\n--namespace=prefect --from-literal=key=your-prefect-cloud-api-key\n
        ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#configure-helm-chart-values","title":"Configure Helm chart values","text":"

        Create a values.yaml file to customize the Prefect worker configuration. Add the following contents to the file:

        worker:\n  cloudApiConfig:\n    accountId: <target account ID>\n    workspaceId: <target workspace ID>\n  config:\n    workPool: <target work pool name>\n

        These settings will ensure that the worker connects to the proper account, workspace, and work pool.

        View your Account ID and Workspace ID in your browser URL when logged into Prefect Cloud. For example: https://app.prefect.cloud/account/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here.
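        If you'd rather not copy the IDs by hand, a small sketch like the following can pull them out of the dashboard URL; this is purely a convenience helper, not a Prefect API:

        url = \"https://app.prefect.cloud/account/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here\"\n\n# Split the dashboard URL and read the segments after \"account\" and \"workspaces\"\nparts = url.rstrip(\"/\").split(\"/\")\naccount_id = parts[parts.index(\"account\") + 1]\nworkspace_id = parts[parts.index(\"workspaces\") + 1]\nprint(account_id, workspace_id)\n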

        ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#create-a-helm-release","title":"Create a Helm release","text":"

        Let's install the Prefect worker using the Helm chart with your custom values.yaml file:

        helm install prefect-worker prefect/prefect-worker \\\n  --namespace=prefect \\\n  -f values.yaml\n
        ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#verify-deployment","title":"Verify deployment","text":"

        Check the status of your Prefect worker deployment:

        kubectl get pods -n prefect\n
        ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#define-a-flow","title":"Define a flow","text":"

        Let's start simple with a flow that just logs a message. In a directory named flows, create a file named hello.py with the following contents:

        from prefect import flow, get_run_logger, tags\n\n@flow\ndef hello(name: str = \"Marvin\"):\n    logger = get_run_logger()\n    logger.info(f\"Hello, {name}!\")\n\nif __name__ == \"__main__\":\n    with tags(\"local\"):\n        hello()\n

        Run the flow locally with python hello.py to verify that it works. Note that we use the tags context manager to tag the flow run as local. This step is not required, but does add some helpful metadata.

        ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#define-a-prefect-deployment","title":"Define a Prefect deployment","text":"

        Prefect has two recommended options for creating a deployment with dynamic infrastructure. You can define a deployment in a Python script using the flow.deploy mechanics or in a prefect.yaml definition file. The prefect.yaml file currently allows for more customization in terms of push and pull steps. Kubernetes objects are defined in YAML, so we expect many teams using Kubernetes work pools to create their deployments with YAML as well. To learn about the Python deployment creation method with flow.deploy refer to the Workers & Work Pools tutorial page.
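        For reference, a minimal sketch of the Python alternative might look like the following (the image name is a placeholder); the rest of this guide uses the prefect.yaml approach:

        from prefect import flow\n\n@flow\ndef hello(name: str = \"Marvin\"):\n    print(f\"Hello, {name}!\")\n\nif __name__ == \"__main__\":\n    hello.deploy(\n        name=\"default\",\n        work_pool_name=\"kubernetes\",\n        image=\"my-registry/my-image:latest\",  # placeholder image name\n    )\n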

        The prefect.yaml file is used by the prefect deploy command to deploy our flows. As a part of that process it will also build and push our image. Create a new file named prefect.yaml with the following contents:

        # Generic metadata about this project\nname: flows\nprefect-version: 2.13.8\n\n# build section allows you to manage and build docker images\nbuild:\n- prefect_docker.deployments.steps.build_docker_image:\n    id: build-image\n    requires: prefect-docker>=0.4.0\n    image_name: \"{{ $PREFECT_IMAGE_NAME }}\"\n    tag: latest\n    dockerfile: auto\n    platform: \"linux/amd64\"\n\n# push section allows you to manage if and how this project is uploaded to remote locations\npush:\n- prefect_docker.deployments.steps.push_docker_image:\n    requires: prefect-docker>=0.4.0\n    image_name: \"{{ build-image.image_name }}\"\n    tag: \"{{ build-image.tag }}\"\n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect.deployments.steps.set_working_directory:\n    directory: /opt/prefect/flows\n\n# the definitions section allows you to define reusable components for your deployments\ndefinitions:\n  tags: &common_tags\n    - \"eks\"\n  work_pool: &common_work_pool\n    name: \"kubernetes\"\n    job_variables:\n      image: \"{{ build-image.image }}\"\n\n# the deployments section allows you to provide configuration for deploying flows\ndeployments:\n- name: \"default\"\n  tags: *common_tags\n  schedule: null\n  entrypoint: \"flows/hello.py:hello\"\n  work_pool: *common_work_pool\n\n- name: \"arthur\"\n  tags: *common_tags\n  schedule: null\n  entrypoint: \"flows/hello.py:hello\"\n  parameters:\n    name: \"Arthur\"\n  work_pool: *common_work_pool\n

        We define two deployments of the hello flow: default and arthur. Note that by specifying dockerfile: auto, Prefect will automatically create a dockerfile that installs any requirements.txt and copies over the current directory. You can pass a custom Dockerfile instead with dockerfile: Dockerfile or dockerfile: path/to/Dockerfile. Also note that we are specifically building for the linux/amd64 platform. This specification is often necessary when images are built on Macs with M series chips but run on cloud provider instances.

        Deployment specific build, push, and pull

        The build, push, and pull steps can be overridden for each deployment. This allows for more custom behavior, such as specifying a different image for each deployment.

        Let's make sure we define our requirements in a requirements.txt file:

        prefect>=2.13.8\nprefect-docker>=0.4.0\nprefect-kubernetes>=0.3.1\n

        The directory should now look something like this:

        .\n\u251c\u2500\u2500 prefect.yaml\n\u2514\u2500\u2500 flows\n    \u251c\u2500\u2500 requirements.txt\n    \u2514\u2500\u2500 hello.py\n
        ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#tag-images-with-a-git-sha","title":"Tag images with a Git SHA","text":"

        If your code is stored in a GitHub repository, it's good practice to tag your images with the Git SHA of the code used to build them. This can be done in the prefect.yaml file with a few minor modifications, and isn't yet an option with the Python deployment creation method. Let's use the run_shell_script command to grab the SHA and pass it to the tag parameter of build_docker_image:

        build:\n- prefect.deployments.steps.run_shell_script:\n    id: get-commit-hash\n    script: git rev-parse --short HEAD\n    stream_output: false\n- prefect_docker.deployments.steps.build_docker_image:\n    id: build-image\n    requires: prefect-docker>=0.4.0\n    image_name: \"{{ $PREFECT_IMAGE_NAME }}\"\n    tag: \"{{ get-commit-hash.stdout }}\"\n    dockerfile: auto\n    platform: \"linux/amd64\"\n

        Let's also set the SHA as a tag for easy identification in the UI:

        definitions:\n  tags: &common_tags\n    - \"eks\"\n    - \"{{ get-commit-hash.stdout }}\"\n  work_pool: &common_work_pool\n    name: \"kubernetes\"\n    job_variables:\n      image: \"{{ build-image.image }}\"\n
        ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#authenticate-to-prefect","title":"Authenticate to Prefect","text":"

        Before we deploy the flows to Prefect, we will need to authenticate via the Prefect CLI. We will also need to ensure that all of our flow's dependencies are present at deploy time.

        This example uses a virtual environment to ensure consistency across environments.

        # Create a virtualenv & activate it\nvirtualenv prefect-demo\nsource prefect-demo/bin/activate\n\n# Install dependencies of your flow\nprefect-demo/bin/pip install -r requirements.txt\n\n# Authenticate to Prefect & select the appropriate \n# workspace to deploy your flows to\nprefect-demo/bin/prefect cloud login\n
        ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#deploy-the-flows","title":"Deploy the flows","text":"

        Now we're ready to deploy our flows, which will build our images. The image name determines which registry the image will be pushed to. We have configured our prefect.yaml file to get the image name from the PREFECT_IMAGE_NAME environment variable, so let's set that first:

        AWS | GCP | Azure
        export PREFECT_IMAGE_NAME=<AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/<IMAGE-NAME>\n
        export PREFECT_IMAGE_NAME=us-docker.pkg.dev/<GCP-PROJECT-NAME>/<REPOSITORY-NAME>/<IMAGE-NAME>\n
        export PREFECT_IMAGE_NAME=<REPOSITORY-NAME>.azurecr.io/<IMAGE-NAME>\n

        To deploy your flows, ensure your Docker daemon is running first. Deploy all the flows with prefect deploy --all or deploy them individually by name: prefect deploy -n hello/default or prefect deploy -n hello/arthur.

        ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/kubernetes/#run-the-flows","title":"Run the flows","text":"

        Once the deployments are successfully created, we can run them from the UI or the CLI:

        prefect deployment run hello/default\nprefect deployment run hello/arthur\n
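        The same runs can also be triggered from Python; a small sketch using run_deployment, which waits for each flow run to finish by default:

        from prefect.deployments import run_deployment\n\n# Trigger both deployments; each call blocks until the flow run completes by default\nrun_deployment(name=\"hello/default\")\nrun_deployment(name=\"hello/arthur\")\n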

        Congratulations! You just ran two deployments in Kubernetes. Head over to the UI to check their status!

        ","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/overriding-job-variables/","title":"Deeper Dive: Overriding Work Pool Job Variables","text":"

        As described in the Deploying Flows to Work Pools and Workers guide, there are two ways to deploy flows to work pools: using a prefect.yaml file or using the .deploy() method.

        In both cases, you can override job variables on a work pool for a given deployment.

        While exactly which job variables are available to be overridden depends on the type of work pool you're using at a given time, this guide will explore some common patterns for overriding job variables in both deployment methods.

        ","tags":["deployments","flow runs","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#background","title":"Background","text":"

        First of all, what are \"job variables\"?

        Job variables are infrastructure-related values that are configurable on a work pool and may be relevant to how your flow run executes on your infrastructure. Job variables can be overridden on a per-deployment or per-flow-run basis, allowing you to dynamically change infrastructure from the work pool's defaults depending on your needs.

        Let's use env - the only job variable that is configurable for all work pool types - as an example.

        When you create or edit a work pool, you can specify a set of environment variables that will be set in the runtime environment of the flow run.

        For example, you might want a certain deployment to have the following environment variables available:

        {\n  \"EXECUTION_ENV\": \"staging\",\n  \"MY_NOT_SO_SECRET_CONFIG\": \"plumbus\",\n}\n

        Rather than hardcoding these values into your work pool in the UI and making them available to all deployments associated with that work pool, you can override these values on a per-deployment basis.

        Let's look at how to do that.

        ","tags":["deployments","flow runs","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#how-to-override-job-variables-on-a-deployment","title":"How to override job variables on a deployment","text":"

        Say we have the following repo structure:

        \u00bb tree\n.\n\u251c\u2500\u2500 README.md\n\u251c\u2500\u2500 requirements.txt\n\u251c\u2500\u2500 demo_project\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 daily_flow.py\n

        ... and we have some demo_flow.py file like this:

        import os\nfrom prefect import flow, task\n\n@task\ndef do_something_important(not_so_secret_value: str) -> None:\n    print(f\"Doing something important with {not_so_secret_value}!\")\n\n@flow(log_prints=True)\ndef some_work():\n    environment = os.environ.get(\"EXECUTION_ENVIRONMENT\", \"local\")\n\n    print(f\"Coming to you live from {environment}!\")\n\n    not_so_secret_value = os.environ.get(\"MY_NOT_SO_SECRET_CONFIG\")\n\n    if not_so_secret_value is None:\n        raise ValueError(\"You forgot to set MY_NOT_SO_SECRET_CONFIG!\")\n\n    do_something_important(not_so_secret_value)\n
        ","tags":["deployments","flow runs","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#using-a-prefectyaml-file","title":"Using a prefect.yaml file","text":"

        In this case, let's also say we have the following deployment definition in a prefect.yaml file at the root of our repository:

        deployments:\n- name: demo-deployment\n  entrypoint: demo_project/demo_flow.py:some_work\n  work_pool:\n    name: local\n  schedule: null\n

        Note

        While not the focus of this guide, note that this deployment definition uses a default \"global\" pull step, because one is not explicitly defined on the deployment. For reference, here's what that would look like at the top of the prefect.yaml file:

        pull:\n- prefect.deployments.steps.git_clone: &clone_repo\n    repository: https://github.com/some-user/prefect-monorepo\n    branch: main\n

        ","tags":["deployments","flow runs","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#hard-coded-job-variables","title":"Hard-coded job variables","text":"

        To provide the EXECUTION_ENVIRONMENT and MY_NOT_SO_SECRET_CONFIG environment variables to this deployment, we can add a job_variables section to our deployment definition in the prefect.yaml file:

        deployments:\n- name: demo-deployment\n  entrypoint: demo_project/demo_flow.py:some_work\n  work_pool:\n    name: local\n    job_variables:\n        env:\n            EXECUTION_ENVIRONMENT: staging\n            MY_NOT_SO_SECRET_CONFIG: plumbus\n  schedule: null\n

        ... and then run prefect deploy -n demo-deployment to deploy the flow with these job variables.

        We should then be able to see the job variables in the Configuration tab of the deployment in the UI.
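        To double-check from Python, you can read the deployment back with the Prefect client; depending on your Prefect version the field may be named job_variables or infra_overrides, so this sketch tries both:

        import asyncio\nfrom prefect.client.orchestration import get_client\n\nasync def show_job_variables():\n    async with get_client() as client:\n        deployment = await client.read_deployment_by_name(\"some_work/demo-deployment\")\n        # The field name varies across Prefect versions\n        print(getattr(deployment, \"job_variables\", getattr(deployment, \"infra_overrides\", None)))\n\nasyncio.run(show_job_variables())\n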

        ","tags":["deployments","flow runs","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#using-existing-environment-variables","title":"Using existing environment variables","text":"

        If you want to use environment variables that are already set in your local environment, you can template these in the prefect.yaml file using the {{ $ENV_VAR_NAME }} syntax:

        deployments:\n- name: demo-deployment\n  entrypoint: demo_project/demo_flow.py:some_work\n  work_pool:\n    name: local\n    job_variables:\n        env:\n            EXECUTION_ENVIRONMENT: \"{{ $EXECUTION_ENVIRONMENT }}\"\n            MY_NOT_SO_SECRET_CONFIG: \"{{ $MY_NOT_SO_SECRET_CONFIG }}\"\n  schedule: null\n

        Note

        This assumes that the machine where prefect deploy is run would have these environment variables set.

        export EXECUTION_ENVIRONMENT=staging\nexport MY_NOT_SO_SECRET_CONFIG=plumbus\n

        As before, run prefect deploy -n demo-deployment to deploy the flow with these job variables, and you should see them in the UI under the Configuration tab.

        ","tags":["deployments","flow runs","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#using-the-deploy-method","title":"Using the .deploy() method","text":"

        If you're using the .deploy() method to deploy your flow, the process is similar, but instead of having your prefect.yaml file define the job variables, you can pass them as a dictionary to the job_variables argument of the .deploy() method.

        We could add the following block to our demo_project/daily_flow.py file from the setup section:

        if __name__ == \"__main__\":\n    flow.from_source(\n        source=\"https://github.com/zzstoatzz/prefect-monorepo.git\",\n        entrypoint=\"src/demo_project/demo_flow.py:some_work\"\n    ).deploy(\n        name=\"demo-deployment\",\n        work_pool_name=\"local\", # can only .deploy() to a local work pool in prefect>=2.15.1\n        job_variables={\n            \"env\": {\n                \"EXECUTION_ENVIRONMENT\": os.environ.get(\"EXECUTION_ENVIRONMENT\", \"local\"),\n                \"MY_NOT_SO_SECRET_CONFIG\": os.environ.get(\"MY_NOT_SO_SECRET_CONFIG\")\n            }\n        }\n    )\n

        Note

        The above example works assuming a couple of things:

        • The machine where this script is run has these environment variables set:

        export EXECUTION_ENVIRONMENT=staging\nexport MY_NOT_SO_SECRET_CONFIG=plumbus\n

        • demo_project/daily_flow.py already exists in the repository at the specified path

        Running this script with something like:

        python demo_project/daily_flow.py\n

        ... will deploy the flow with the specified job variables, which should then be visible in the UI under the Configuration tab.

        ","tags":["deployments","flow runs","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#how-to-override-job-variables-on-a-flow-run","title":"How to override job variables on a flow run","text":"

        When running flows, you can pass in job variables that override any values set on the work pool or deployment. Any interface that runs deployments can accept job variables.
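        For example, recent Prefect releases accept a job_variables argument on run_deployment; assuming a release that supports it, a sketch might look like this:

        from prefect.deployments import run_deployment\n\n# job_variables here overrides values from the work pool and deployment\n# (assumes a Prefect release where run_deployment accepts job_variables)\nrun_deployment(\n    name=\"some_work/demo-deployment\",\n    job_variables={\"env\": {\"MY_NEW_ENV_VAR\": \"42\", \"HELLO\": \"THERE\"}},\n)\n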

        ","tags":["deployments","flow runs","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#using-the-custom-run-form-in-the-ui","title":"Using the custom run form in the UI","text":"

        Custom runs allow you to pass in a dictionary of variables into your flow run infrastructure. Using the same env example from above, you can provide those values in the custom run form when kicking off a flow run.

        ","tags":["deployments","flow runs","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#using-the-cli","title":"Using the CLI","text":"

        Similarly, runs kicked off via CLI accept job variables with the -jv or --job-variable flag.

        prefect deployment run \\\n  --id \"fb8e3073-c449-474b-b993-851fe5e80e53\" \\\n  --job-variable MY_NEW_ENV_VAR=42 \\\n  --job-variable HELLO=THERE\n
        ","tags":["deployments","flow runs","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/overriding-job-variables/#using-job-variables-in-automations","title":"Using job variables in automations","text":"

        Additionally, runs kicked off via automation actions can use job variables, including ones rendered from Jinja templates.

        ","tags":["deployments","flow runs","work pools","job variables","environment variables"],"boost":2},{"location":"guides/deployment/push-work-pools/","title":"Push Work to Serverless Computing Infrastructure","text":"

        Push work pools are a special type of work pool that allows Prefect Cloud to submit flow runs for execution to serverless computing infrastructure without running a worker. Push work pools currently support execution in AWS ECS tasks, Azure Container Instances, Google Cloud Run jobs, and Modal.

        In this guide you will:

        • Create a push work pool that sends work to Amazon Elastic Container Service (AWS ECS), Azure Container Instances (ACI), Google Cloud Run, or Modal
        • Deploy a flow to that work pool
        • Execute a flow without having to run a worker or agent process to poll for flow runs

        You can automatically provision infrastructure and create your push work pool using the prefect work-pool create CLI command with the --provision-infra flag. This approach greatly simplifies the setup process.

        Let's explore automatic infrastructure provisioning for push work pools first, and then we'll cover how to manually set up your push work pool.

        ","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#automatic-infrastructure-provisioning","title":"Automatic infrastructure provisioning","text":"

        With Prefect Cloud you can provision infrastructure for use with an AWS ECS, Google Cloud Run, or ACI push work pool. Push work pools in Prefect Cloud simplify the setup and management of the infrastructure necessary to run your flows. However, setting up infrastructure on your cloud provider can still be a time-consuming process. Prefect can dramatically simplify this process by automatically provisioning the necessary infrastructure for you.

        We'll use the prefect work-pool create CLI command with the --provision-infra flag to automatically provision your serverless cloud resources and set up your Prefect workspace to use a new push pool.

        ","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#prerequisites","title":"Prerequisites","text":"

        To use automatic infrastructure provisioning, you'll need to have the relevant cloud CLI library installed and to have authenticated with your cloud provider.

        AWS ECS | Azure Container Instances | Google Cloud Run | Modal

        Install the AWS CLI, authenticate with your AWS account, and set a default region.

        If you already have the AWS CLI installed, be sure to update to the latest version.

        You will need the following permissions in your authenticated AWS account:

        IAM Permissions:

        • iam:CreatePolicy
        • iam:GetPolicy
        • iam:ListPolicies
        • iam:CreateUser
        • iam:GetUser
        • iam:AttachUserPolicy
        • iam:CreateRole
        • iam:GetRole
        • iam:AttachRolePolicy
        • iam:ListRoles
        • iam:PassRole

        Amazon ECS Permissions:

        • ecs:CreateCluster
        • ecs:DescribeClusters

        Amazon EC2 Permissions:

        • ec2:CreateVpc
        • ec2:DescribeVpcs
        • ec2:CreateInternetGateway
        • ec2:AttachInternetGateway
        • ec2:CreateRouteTable
        • ec2:CreateRoute
        • ec2:CreateSecurityGroup
        • ec2:DescribeSubnets
        • ec2:CreateSubnet
        • ec2:DescribeAvailabilityZones
        • ec2:AuthorizeSecurityGroupIngress
        • ec2:AuthorizeSecurityGroupEgress

        Amazon ECR Permissions:

        • ecr:CreateRepository
        • ecr:DescribeRepositories
        • ecr:GetAuthorizationToken

        If you want to use AWS managed policies, you can use the following:

        • AmazonECS_FullAccess
        • AmazonEC2FullAccess
        • IAMFullAccess
        • AmazonEC2ContainerRegistryFullAccess

        Note that the above policies will give you all the permissions needed, but are more permissive than necessary.

        Docker is also required to build and push images to your registry. You can install Docker here.

        Install the Azure CLI and authenticate with your Azure account.

        If you already have the Azure CLI installed, be sure to update to the latest version with az upgrade.

        You will also need the following roles in your Azure subscription:

        • Contributor
        • User Access Administrator
        • Application Administrator
        • Managed Identity Operator
        • Azure Container Registry Contributor

        Docker is also required to build and push images to your registry. You can install Docker here.

        Install the gcloud CLI and authenticate with your GCP project.

        If you already have the gcloud CLI installed, be sure to update to the latest version with gcloud components update.

        You will also need the following permissions in your GCP project:

        • resourcemanager.projects.list
        • serviceusage.services.enable
        • iam.serviceAccounts.create
        • iam.serviceAccountKeys.create
        • resourcemanager.projects.setIamPolicy
        • artifactregistry.repositories.create

        Docker is also required to build and push images to your registry. You can install Docker here.

        Install Modal by running:

        pip install modal\n

        Create a Modal API token by running:

        modal token new\n

        ","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#automatically-creating-a-new-push-work-pool-and-provisioning-infrastructure","title":"Automatically creating a new push work pool and provisioning infrastructure","text":"

        Here's the command to create a new push work pool and configure the necessary infrastructure.

        AWS ECS | Azure Container Instances | Google Cloud Run | Modal
        prefect work-pool create --type ecs:push --provision-infra my-ecs-pool\n

        Using the --provision-infra flag will automatically set up your default AWS account to be ready to execute flows via ECS tasks. In your AWS account, this command will create a new IAM user, IAM policy, ECS cluster that uses AWS Fargate, VPC, and ECR repository if they don't already exist. In your Prefect workspace, this command will create an AWSCredentials block for storing the generated credentials.

        Here's an abbreviated example output from running the command:

        \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Provisioning infrastructure for your work pool my-ecs-pool will require:                                          \u2502\n\u2502                                                                                                                   \u2502\n\u2502          - Creating an IAM user for managing ECS tasks: prefect-ecs-user                                          \u2502\n\u2502          - Creating and attaching an IAM policy for managing ECS tasks: prefect-ecs-policy                        \u2502\n\u2502          - Storing generated AWS credentials in a block                                                           \u2502\n\u2502          - Creating an ECS cluster for running Prefect flows: prefect-ecs-cluster                                 \u2502\n\u2502          - Creating a VPC with CIDR 172.31.0.0/16 for running ECS tasks: prefect-ecs-vpc                          \u2502\n\u2502          - Creating an ECR repository for storing Prefect images: prefect-flows                                   \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\nProceed with infrastructure provisioning? [y/n]: y\nProvisioning IAM user\nCreating IAM policy\nGenerating AWS credentials\nCreating AWS credentials block\nProvisioning ECS cluster\nProvisioning VPC\nCreating internet gateway\nSetting up subnets\nSetting up security group\nProvisioning ECR repository\nAuthenticating with ECR\nSetting default Docker build namespace\nProvisioning Infrastructure \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 100% 0:00:00\nInfrastructure successfully provisioned!\nCreated work pool 'my-ecs-pool'!\n

        Default Docker build namespace

        After infrastructure provisioning completes, you will be logged into your new ECR repository and the default Docker build namespace will be set to the URL of the registry.

        While the default namespace is set, you will not need to provide the registry URL when building images as part of your deployment process.

        To take advantage of this, you can write your deploy scripts like this:

        example_deploy_script.py
        from prefect import flow\nfrom prefect.deployments import DeploymentImage\n\n@flow(log_prints=True)            \ndef my_flow(name: str = \"world\"):                          \n    print(f\"Hello {name}! I'm a flow running in a ECS task!\") \n\n\nif __name__ == \"__main__\":\n    my_flow.deploy(\n        name=\"my-deployment\", \n        work_pool_name=\"my-work-pool\",\n        image=DeploymentImage(                                                 \n            name=\"my-repository:latest\",\n            platform=\"linux/amd64\",\n        )                                                                      \n    )       \n

        This will build an image with the tag <ecr-registry-url>/my-repository:latest and push it to the registry.

        Your image name will need to match the name of the repository created with your work pool. You can create new repositories in the ECR console.

        prefect work-pool create --type azure-container-instance:push --provision-infra my-aci-pool\n

        Using the --provision-infra flag will automatically set up your default Azure account to be ready to execute flows via Azure Container Instances. In your Azure account, this command will create a resource group, an app registration, and a service account with the necessary permissions, generate a secret for the app registration, and create an Azure Container Registry, if they don't already exist. In your Prefect workspace, this command will create an AzureContainerInstanceCredentials block for storing the client secret value from the generated secret.

        Here's an abbreviated example output from running the command:

        \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Provisioning infrastructure for your work pool my-aci-work-pool will require:                     \u2502\n\u2502                                                                                           \u2502\n\u2502     Updates in subscription Azure subscription 1                                          \u2502\n\u2502                                                                                           \u2502\n\u2502         - Create a resource group in location eastus                                      \u2502\n\u2502         - Create an app registration in Azure AD prefect-aci-push-pool-app                \u2502\n\u2502         - Create/use a service principal for app registration                             \u2502\n\u2502         - Generate a secret for app registration                                          \u2502\n\u2502         - Create an Azure Container Registry with prefix prefect                          \u2502\n\u2502         - Create an identity prefect-acr-identity to allow access to the created registry \u2502\n\u2502         - Assign Contributor role to service account                                      \u2502\n\u2502         - Create an ACR registry for image hosting                                        \u2502\n\u2502         - Create an identity for Azure Container Instance to allow access to the registry \u2502\n\u2502                                                                                           \u2502\n\u2502     Updates in Prefect workspace                                                          \u2502\n\u2502                                                                                           \u2502\n\u2502         - Create Azure Container Instance credentials block aci-push-pool-credentials     \u2502\n\u2502                                                                                           \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\nProceed with infrastructure provisioning? [y/n]:     \nCreating resource group\nCreating app registration\nGenerating secret for app registration\nCreating ACI credentials block\nACI credentials block 'aci-push-pool-credentials' created in Prefect Cloud\nAssigning Contributor role to service account\nCreating Azure Container Registry\nCreating identity\nProvisioning infrastructure... 
\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 100% 0:00:00\nInfrastructure successfully provisioned for 'my-aci-work-pool' work pool!\nCreated work pool 'my-aci-work-pool'!\n

        Default Docker build namespace

        After infrastructure provisioning completes, you will be logged into your new Azure Container Registry and the default Docker build namespace will be set to the URL of the registry.

        While the default namespace is set, any images you build without specifying a registry or username/organization will be pushed to the registry.

        To take advantage of this functionality, you can write your deploy scripts like this:

        example_deploy_script.py
        from prefect import flow                                                       \nfrom prefect.deployments import DeploymentImage                                \n\n\n@flow(log_prints=True)                                                         \ndef my_flow(name: str = \"world\"):                                              \n    print(f\"Hello {name}! I'm a flow running on an Azure Container Instance!\") \n\n\nif __name__ == \"__main__\":                                                     \n    my_flow.deploy(                                                            \n        name=\"my-deployment\",\n        work_pool_name=\"my-work-pool\",                                                \n        image=DeploymentImage(                                                 \n            name=\"my-image:latest\",                                            \n            platform=\"linux/amd64\",                                            \n        )                                                                      \n    )       \n

        This will build an image with the tag <acr-registry-url>/my-image:latest and push it to the registry.

        prefect work-pool create --type cloud-run:push --provision-infra my-cloud-run-pool \n

        Using the --provision-infra flag will allow you to select a GCP project to use for your work pool and automatically configure it to be ready to execute flows via Cloud Run. In your GCP project, this command will activate the Cloud Run API, create a service account, and create a key for the service account, if they don't already exist. In your Prefect workspace, this command will create a GCPCredentials block for storing the service account key.

        Here's an abbreviated example output from running the command:

        \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Provisioning infrastructure for your work pool my-cloud-run-pool will require:                           \u2502\n\u2502                                                                                                          \u2502\n\u2502     Updates in GCP project central-kit-405415 in region us-central1                                      \u2502\n\u2502                                                                                                          \u2502\n\u2502         - Activate the Cloud Run API for your project                                                    \u2502\n\u2502         - Activate the Artifact Registry API for your project                                            \u2502\n\u2502         - Create an Artifact Registry repository named prefect-images                                    \u2502\n\u2502         - Create a service account for managing Cloud Run jobs: prefect-cloud-run                        \u2502\n\u2502             - Service account will be granted the following roles:                                       \u2502\n\u2502                 - Service Account User                                                                   \u2502\n\u2502                 - Cloud Run Developer                                                                    \u2502\n\u2502         - Create a key for service account prefect-cloud-run                                             \u2502\n\u2502                                                                                                          \u2502\n\u2502     Updates in Prefect workspace                                                                         \u2502\n\u2502                                                                                                          \u2502\n\u2502         - Create GCP credentials block my--pool-push-pool-credentials to store the service account key   \u2502\n\u2502                                                                                                          \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\nProceed with infrastructure provisioning? 
[y/n]: y\nActivating Cloud Run API\nActivating Artifact Registry API\nCreating Artifact Registry repository\nConfiguring authentication to Artifact Registry\nSetting default Docker build namespace\nCreating service account\nAssigning roles to service account\nCreating service account key\nCreating GCP credentials block\nProvisioning Infrastructure \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 100% 0:00:00\nInfrastructure successfully provisioned!\nCreated work pool 'my-cloud-run-pool'!\n

        Default Docker build namespace

        After infrastructure provisioning completes, you will be logged into your new Artifact Registry repository and the default Docker build namespace will be set to the URL of the repository.

        While the default namespace is set, any images you build without specifying a registry or username/organization will be pushed to the repository.

        To take advantage of this functionality, you can write your deploy scripts like this:

        example_deploy_script.py
        from prefect import flow                                                       \nfrom prefect.deployments import DeploymentImage                                \n\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n    print(f\"Hello {name}! I'm a flow running on Cloud Run!\")\n\n\nif __name__ == \"__main__\":                                                     \n    my_flow.deploy(                                                            \n        name=\"my-deployment\",\n        work_pool_name=\"above-ground\",\n        image=DeploymentImage(\n            name=\"my-image:latest\",\n            platform=\"linux/amd64\",\n        )\n    )\n

        This will build an image with the tag <region>-docker.pkg.dev/<project>/<repository-name>/my-image:latest and push it to the repository.

        prefect work-pool create --type modal:push --provision-infra my-modal-pool \n

        Using the --provision-infra flag will trigger the creation of a ModalCredentials block in your Prefect Cloud workspace. This block will store your Modal API token, which is used to authenticate with Modal's API. By default, the token for your current Modal profile will be used for the new ModalCredentials block. If Prefect is unable to discover a Modal API token for your current profile, you will be prompted to create a new one.

        That's it! You're ready to create and schedule deployments that use your new push work pool. As a reminder, no worker is needed to run flows with a push work pool.

        ","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#using-existing-resources-with-automatic-infrastructure-provisioning","title":"Using existing resources with automatic infrastructure provisioning","text":"

        If you already have the necessary infrastructure set up, Prefect will detect that upon work pool creation and the infrastructure provisioning for that resource will be skipped.

        For example, here's how prefect work-pool create my-work-pool --provision-infra looks when existing Azure resources are detected:

        Proceed with infrastructure provisioning? [y/n]: y\nCreating resource group\nResource group 'prefect-aci-push-pool-rg' already exists in location 'eastus'.\nCreating app registration\nApp registration 'prefect-aci-push-pool-app' already exists.\nGenerating secret for app registration\nProvisioning infrastructure\nACI credentials block 'bb-push-pool-credentials' created\nAssigning Contributor role to service account...\nService principal with object ID '4be6fed7-...' already has the 'Contributor' role assigned in \n'/subscriptions/.../'\nCreating Azure Container Instance\nContainer instance 'prefect-aci-push-pool-container' already exists.\nCreating Azure Container Instance credentials block\nProvisioning infrastructure... \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 100% 0:00:00\nInfrastructure successfully provisioned!\nCreated work pool 'my-work-pool'!\n
        ","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#provisioning-infrastructure-for-an-existing-push-work-pool","title":"Provisioning infrastructure for an existing push work pool","text":"

        If you already have a push work pool set up, but haven't configured the necessary infrastructure, you can use the provision-infra sub-command to provision the infrastructure for that work pool. For example, you can run the following command if you have a work pool named \"my-work-pool\".

        prefect work-pool provision-infra my-work-pool\n

        Prefect will create the necessary infrastructure for the my-work-pool work pool and provide you with a summary of the changes to be made:

        \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Provisioning infrastructure for your work pool my-work-pool will require:                                      \u2502\n\u2502                                                                                                                \u2502\n\u2502     Updates in subscription Azure subscription 1                                                               \u2502\n\u2502                                                                                                                \u2502\n\u2502         - Create a resource group in location eastus                                                           \u2502\n\u2502         - Create an app registration in Azure AD prefect-aci-push-pool-app                                     \u2502\n\u2502         - Create/use a service principal for app registration                                                  \u2502\n\u2502         - Generate a secret for app registration                                                               \u2502\n\u2502         - Assign Contributor role to service account                                                           \u2502\n\u2502         - Create Azure Container Instance 'aci-push-pool-container' in resource group prefect-aci-push-pool-rg \u2502\n\u2502                                                                                                                \u2502\n\u2502     Updates in Prefect workspace                                                                               \u2502\n\u2502                                                                                                                \u2502\n\u2502         - Create Azure Container Instance credentials block aci-push-pool-credentials                          \u2502\n\u2502                                                                                                                \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\nProceed with infrastructure provisioning? [y/n]: y\n

        This command can speed up your infrastructure setup process.

        As with the examples above, you will need to have the related cloud CLI library installed and be authenticated with your cloud provider.

        ","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#manual-infrastructure-provisioning","title":"Manual infrastructure provisioning","text":"

        If you prefer to set up your infrastructure manually, don't include the --provision-infra flag in the CLI command. In the examples below, we'll create a push work pool via the Prefect Cloud UI.

        AWS ECS | Azure Container Instances | Google Cloud Run | Modal

        To push work to ECS, AWS credentials are required.

        Create a user and attach the AmazonECS_FullAccess permissions.

        From that user's page create credentials and store them somewhere safe for use in the next section.

        To push work to Azure, an Azure subscription, resource group and tenant secret are required.

        Create Subscription and Resource Group

        1. In the Azure portal, create a subscription.
        2. Create a resource group within your subscription.

        Create App Registration

        1. In the Azure portal, create an app registration.
        2. In the app registration, create a client secret. Copy the value and store it somewhere safe.

        Add App Registration to Resource Group

        1. Navigate to the resource group you created earlier.
        2. Choose the \"Access control (IAM)\" blade in the left-hand side menu. Click \"+ Add\" button at the top, then \"Add role assignment\".
        3. Go to the \"Privileged administrator roles\" tab, click on \"Contributor\", then click \"Next\" at the bottom of the page.
        4. Click on \"+ Select members\". Type the name of the app registration (otherwise it may not autopopulate) and click to add it. Then hit \"Select\" and click \"Next\". The default permissions associated with a role like \"Contributor\" might not always be sufficient for all operations related to Azure Container Instances (ACI). The specific permissions required can depend on the operations you need to perform (like creating, running, and deleting ACI container groups) and your organization's security policies. In some cases, additional permissions or custom roles might be necessary.
        5. Click \"Review + assign\" to finish.

        A GCP service account and an API key are required to push work to Cloud Run.

        Create a service account by navigating to the service accounts page and clicking Create. Name and describe your service account, and click continue to configure permissions.

        The service account must have two roles at a minimum: Cloud Run Developer and Service Account User.

        Once the service account is created, navigate to its Keys page to add an API key. Create a JSON type key, download it, and store it somewhere safe for use in the next section.

        A Modal API token is required to push work to Modal.

        Create a Modal API token by navigating to Settings in the Modal UI. In the API Tokens section of the Settings page, click New Token.

        Copy the token ID and token secret and store them somewhere safe for use in the next section.

        ","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#work-pool-configuration","title":"Work pool configuration","text":"

        Our push work pool will store information about what type of infrastructure our flow will run on, what default values to provide to compute jobs, and other important execution environment parameters. Because our push work pool needs to integrate securely with your serverless infrastructure, we need to start by storing our credentials in Prefect Cloud, which we'll do by making a block.
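        The tabs below walk through the UI, but credential blocks can also be registered from Python. As an example, here is a hedged sketch for AWS using the prefect-aws collection; all values are placeholders:

        from prefect_aws import AwsCredentials\n\n# Placeholder values; keep real credentials out of source control\nAwsCredentials(\n    aws_access_key_id=\"AKIA...\",\n    aws_secret_access_key=\"...\",\n    region_name=\"us-east-1\",\n).save(\"my-push-pool-credentials\", overwrite=True)\n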

        ","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#creating-a-credentials-block","title":"Creating a Credentials block","text":"AWS ECSAzure Container InstancesGoogle Cloud RunModal

        Navigate to the blocks page, click create new block, and select AWS Credentials for the type.

        For use in a push work pool, region, access key, and access key secret must be set.

        Provide any other optional information and create your block.

        Navigate to the blocks page and click the \"+\" at the top to create a new block. Find the Azure Container Instance Credentials block and click \"Add +\".

        Locate the client ID and tenant ID on your app registration and use the client secret you saved earlier. Be sure to use the value of the secret, not the secret ID!

        Provide any other optional information and click \"Create\".

        Navigate to the blocks page, click create new block, and select GCP Credentials for the type.

        For use in a push work pool, this block must have the contents of the JSON key stored in the Service Account Info field.

        Provide any other optional information and create your block.
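        Alternatively, the block can be registered from Python with the prefect-gcp collection; a hedged sketch, where the key file name is a placeholder:

        import json\nfrom prefect_gcp import GcpCredentials\n\n# Placeholder file name for the JSON key downloaded earlier\nwith open(\"prefect-cloud-run-key.json\") as f:\n    service_account_info = json.load(f)\n\nGcpCredentials(service_account_info=service_account_info).save(\n    \"my-cloud-run-push-pool-credentials\", overwrite=True\n)\n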

        Navigate to the blocks page, click create new block, and select Modal Credentials for the type.

        For use in a push work pool, this block must have the token ID and token secret stored in the Token ID and Token Secret fields, respectively.

        ","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#creating-a-push-work-pool","title":"Creating a push work pool","text":"

        Now navigate to the work pools page. Click Create to start configuring your push work pool by selecting a push option in the infrastructure type step.

        AWS ECS | Azure Container Instances | Google Cloud Run | Modal

        Each step has several optional fields that are detailed in the work pools documentation. Select the block you created under the AWS Credentials field. This will allow Prefect Cloud to securely interact with your ECS cluster.

        Fill in the subscription ID and resource group name from the resource group you created. Add the Azure Container Instance Credentials block you created in the step above.

        Each step has several optional fields that are detailed in the work pools documentation. Select the block you created under the GCP Credentials field. This will allow Prefect Cloud to securely interact with your GCP project.

        Each step has several optional fields that are detailed in the work pools documentation. Select the block you created under the Modal Credentials field. This will allow Prefect Cloud to securely interact with your Modal account.

        Create your pool, and you are ready to deploy flows to your push work pool.

        ","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#deployment","title":"Deployment","text":"

        Deployment details are described in the deployments concept section. Your deployment needs to be configured to send flow runs to our push work pool. For example, if you create a deployment through the interactive command line experience, choose the work pool you just created. If you are deploying an existing prefect.yaml file, the deployment would contain:

          work_pool:\n    name: my-push-pool\n

        Deploying your flow to the my-push-pool work pool ensures that runs ready for execution are submitted immediately, without the need for a worker to poll for them.

        Serverless infrastructure may require a certain image architecture

        Note that serverless infrastructure may assume a certain Docker image architecture; for example, Google Cloud Run will fail to run images built with linux/arm64 architecture. If using Prefect to build your image, you can change the image architecture through the platform keyword (e.g., platform=\"linux/amd64\").
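
        As a hedged sketch (assuming a Prefect 2.x release that provides DeploymentImage; the image, deployment, and work pool names are placeholders), the platform can be set when deploying from Python:

        from prefect import flow\nfrom prefect.deployments import DeploymentImage\n\n@flow(log_prints=True)\ndef my_flow():\n    print(\"Hello from a serverless run!\")\n\nif __name__ == \"__main__\":\n    my_flow.deploy(\n        name=\"my-deployment\",\n        work_pool_name=\"my-push-pool\",\n        image=DeploymentImage(\n            name=\"my-registry/my-image:latest\",\n            platform=\"linux/amd64\",\n        ),\n    )\n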

        ","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#putting-it-all-together","title":"Putting it all together","text":"

        With your deployment created, navigate to its detail page and create a new flow run. You'll see the flow start running without ever having to poll the work pool, because Prefect Cloud securely connected to your serverless infrastructure, created a job, ran the job, and began reporting on its execution.
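
        If you prefer the CLI, a run can also be triggered with prefect deployment run (the flow and deployment names below are hypothetical):

        prefect deployment run 'my-flow/my-deployment'\n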

        ","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/push-work-pools/#next-steps","title":"Next steps","text":"

        Learn more about workers and work pools in the Prefect concept documentation.

        Learn about installing dependencies at runtime or baking them into your Docker image in the Deploying Flows to Work Pools and Workers guide.

        ","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instances","ACI","elastic container service","GCP","Google Cloud Run","serverless","Amazon Web Services","Modal","push work pools"],"boost":2},{"location":"guides/deployment/serverless-workers/","title":"Run Deployments on Serverless Infrastructure with Prefect Workers","text":"

        Prefect provides hybrid work pools for workers to run flows on the serverless platforms of major cloud providers. The following options are available:

        • AWS Elastic Container Service (ECS)
        • Azure Container Instances (ACI)
        • Google Cloud Run
        • Google Cloud Run V2
        • Google Vertex AI

        In this guide you will:

        • Create a work pool that sends work to your chosen serverless infrastructure
        • Deploy a flow to that work pool
        • Start a worker in your serverless cloud provider that will poll its matched work pool for scheduled runs
        • Schedule a deployment run that a worker will pick up from the work pool and run on your serverless infrastructure

        Push work pools don't require a worker

        Options for push work pool versions of AWS ECS, Azure Container Instances, and Google Cloud Run that do not require a worker are available with Prefect Cloud. These push work pool options require connection configuration information to be stored on Prefect Cloud. Read more in the Serverless Push Work Pool Guide.

        This is a brief overview of the options to run workflows on serverless infrastructure. For in-depth guides, see the Prefect integration libraries:

        • AWS ECS guide in the prefect-aws docs
        • Azure Container Instances guide
        • Google Cloud Run guide in the prefect-gcp docs.
        • For Google Vertex AI, follow the Cloud Run guide, substituting Google Vertex AI where Google Cloud Run is mentioned.

        Choosing between Google Cloud Run and Google Vertex AI

        Google Vertex AI is well-suited for machine learning model training applications in which GPUs or TPUs and high resource levels are desired.

        ","tags":["work pools","deployments","Cloud Run","GCP","Vertex AI","AWS ECS","Azure Container Instances","ACI"],"boost":2},{"location":"guides/deployment/serverless-workers/#steps","title":"Steps","text":"
        1. Make sure you have a user or service account on your chosen cloud provider with the necessary permissions to run serverless jobs
        2. Create the appropriate serverless work pool that uses a worker in the Prefect UI
        3. Create a deployment that references the work pool
        4. Start a worker in your chosen serverless cloud provider infrastructure (see the command sketch after this list)
        5. Run the deployment
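
        For step 4, the worker is started with the Prefect CLI from inside your serverless infrastructure. A minimal sketch, assuming an ECS work pool named my-ecs-pool (hypothetical) and the prefect-aws library installed:

        pip install -U prefect-aws\nprefect worker start --pool my-ecs-pool\n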
        ","tags":["work pools","deployments","Cloud Run","GCP","Vertex AI","AWS ECS","Azure Container Instances","ACI"],"boost":2},{"location":"guides/deployment/serverless-workers/#next-steps","title":"Next steps","text":"

        Options for push versions of AWS ECS, Azure Container Instances, and Google Cloud Run work pools that do not require a worker are available with Prefect Cloud. Read more in the Serverless Push Work Pool Guide.

        Learn more about workers and work pools in the Prefect concept documentation.

        Learn about installing dependencies at runtime or baking them into your Docker image in the Deploying Flows to Work Pools and Workers guide.

        ","tags":["work pools","deployments","Cloud Run","GCP","Vertex AI","AWS ECS","Azure Container Instances","ACI"],"boost":2},{"location":"guides/deployment/storage-guide/","title":"Where to Store Your Flow Code","text":"

        When a flow runs, the execution environment needs access to its code. Flow code is not stored in a Prefect server database instance or Prefect Cloud. When deploying a flow, you have several flow code storage options.

        This guide discusses storage options with a focus on deployments created with the interactive CLI experience or a prefect.yaml file. If you'd like to create your deployments using Python code, see the discussion of flow code storage on the .deploy tab of the Deploying Flows to Work Pools and Workers guide.

        ","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#option-1-local-storage","title":"Option 1: Local storage","text":"

        Local flow code storage is often used with a Local Subprocess work pool for initial experimentation.

        To create a deployment with local storage and a Local Subprocess work pool, do the following:

        1. Run prefect deploy from the root of the directory containing your flow code.
        2. Select that you want to create a new deployment, select the flow code entrypoint, and name your deployment.
        3. Select a process work pool.

        You are then shown the location that your flow code will be fetched from when a flow is run. For example:

        Your Prefect workers will attempt to load your flow from: \n/my-path/my-flow-file.py. To see more options for managing your flow's code, run:\n\n    $ prefect init\n

        When deploying a flow to production, you most likely want code to run with infrastructure-specific configuration. The flow code storage options shown below are recommended for production deployments.

        ","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#option-2-git-based-storage","title":"Option 2: Git-based storage","text":"

        Git-based version control platforms are popular locations for code storage. They provide redundancy, version control, and easier collaboration.

        GitHub is the most popular cloud-based repository hosting provider. GitLab and Bitbucket are other popular options. Prefect supports each of these platforms.

        ","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#creating-a-deployment-with-git-based-storage","title":"Creating a deployment with git-based storage","text":"

        Run prefect deploy from the root directory of the git repository and create a new deployment. You will see a series of prompts. Select that you want to create a new deployment, select the flow code entrypoint, and name your deployment.

        Prefect detects that you are in a git repository and asks if you want to store your flow code in a git repository. Select \"y\" and you will be prompted to confirm the URL of your git repository and the branch name, as in the example below:

        ? Your Prefect workers will need access to this flow's code in order to run it. \nWould you like your workers to pull your flow code from its remote repository when running this flow? [y/n] (y): \n? Is https://github.com/my_username/my_repo.git the correct URL to pull your flow code from? [y/n] (y): \n? Is main the correct branch to pull your flow code from? [y/n] (y): \n? Is this a private repository? [y/n]: y\n

        In this example, the git repository is hosted on GitHub. If you are using Bitbucket or GitLab, the URL will match your provider. If the repository is public, enter \"n\" and you are on your way.

        If the repository is private, you can enter a token to access your private repository. This token will be saved in an encrypted Prefect Secret block.

        ? Please enter a token that can be used to access your private repository. This token will be saved as a Secret block via the Prefect API: \"123_abc_this_is_my_token\"\n

        Verify that you have a new Secret block in your active workspace named in the format \"deployment-my-deployment-my-flow-name-repo-token\".

        Creating access tokens differs for each provider.

        GitHubBitbucketGitLab

        We recommend using HTTPS with fine-grained Personal Access Tokens so that you can limit access by repository. See the GitHub docs for Personal Access Tokens (PATs).

        Under Your Profile->Developer Settings->Personal access tokens->Fine-grained token choose Generate New Token and fill in the required fields. Under Repository access choose Only select repositories and grant the token permissions for Contents.

        We recommend using HTTPS with Repository, Project, or Workspace Access Tokens.

        You can create a Repository Access Token with Scopes->Repositories->Read.

        Bitbucket requires that you prepend the token string with x-token-auth:, so the full string looks like x-token-auth:abc_123_this_is_my_token.

        We recommend using HTTPS with Project Access Tokens.

        In your repository in the GitLab UI, select Settings->Repository->Project Access Tokens and check read_repository under Select scopes.

        If you want to configure a Secret block ahead of time, create the block via code or the Prefect UI and reference it in your prefect.yaml file.

        pull:\n    - prefect.deployments.steps.git_clone:\n        repository: https://bitbucket.org/org/my-private-repo.git\n        access_token: \"{{ prefect.blocks.secret.my-block-name }}\"\n
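
        If you'd rather create that Secret block in code ahead of time, a minimal sketch (the block name is hypothetical and should match the reference above):

        from prefect.blocks.system import Secret\n\nSecret(value=\"123_abc_this_is_my_token\").save(\"my-block-name\")\n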

        Alternatively, you can create a Credentials block ahead of time and reference it in the prefect.yaml pull step.

        GitHubBitbucketGitLab
        1. Install the prefect-github library with pip install -U prefect-github
        2. Register the blocks in that library to make them available on the server with prefect block register -m prefect_github.
        3. Create a GitHub Credentials block via code or the Prefect UI and reference it as shown above.
        pull:\n    - prefect.deployments.steps.git_clone:\n        repository: https://github.com/discdiver/my-private-repo.git\n        credentials: \"{{ prefect.blocks.github-credentials.my-block-name }}\"\n
        1. Install the relevant library with pip install -U prefect-bitbucket
        2. Register the blocks in that library with prefect block register -m prefect_bitbucket
        3. Create a Bitbucket credentials block via code or the Prefect UI and reference it as shown above.
        pull:\n    - prefect.deployments.steps.git_clone:\n        repository: https://bitbucket.org/org/my-private-repo.git\n        credentials: \"{{ prefect.blocks.bitbucket-credentials.my-block-name }}\"\n
        1. Install the relevant library with pip install -U prefect-gitlab
        2. Register the blocks in that library with prefect block register -m prefect_gitlab
        3. Create a GitLab credentials block via code or the Prefect UI and reference it as shown above.
        pull:\n    - prefect.deployments.steps.git_clone:\n        repository: https://gitlab.com/org/my-private-repo.git\n        credentials: \"{{ prefect.blocks.gitlab-credentials.my-block-name }}\"\n

        Push your code

        When you make a change to your code, Prefect does not push your code to your git-based version control platform. You need to push your code manually or as part of your CI/CD pipeline. This design decision is an intentional one to avoid confusion about the git history and push process.

        ","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#option-3-docker-based-storage","title":"Option 3: Docker-based storage","text":"

        Another popular way to store your flow code is to include it in a Docker image. The following work pools use Docker containers, so the flow code can be directly baked into the image:

        • Docker
        • Kubernetes
        • Serverless cloud-based options
          • AWS Elastic Container Service
          • Azure Container Instances
          • Google Cloud Run
        • Push-based serverless cloud-based options (no worker required)

          • AWS Elastic Container Service - Push
          • Azure Container Instances - Push
          • Google Cloud Run - Push
        • Run prefect init in the root of your repository, choose the docker recipe, and answer the prompts to create a prefect.yaml file with a build step that builds a Docker image with the flow code baked in (a sketch of such a build step follows this list). See the Workers and Work Pools page of the tutorial for more info.

        • Run prefect deploy from the root of your repository to create a deployment.
        • When a deployment run is triggered, the worker will pull the Docker image and spin up a container.
        • The flow code baked into the image will run inside the container.
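
        As a sketch, the build section that the docker recipe generates looks roughly like this (the image name and tag are placeholders, and the version pin may differ):

        build:\n- prefect_docker.deployments.steps.build_docker_image:\n    id: build_image\n    requires: prefect-docker>=0.3.1\n    image_name: my-registry/my-image\n    tag: latest\n    dockerfile: auto\n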

        CI/CD may not require push or pull steps

        You don't need push or pull steps in the prefect.yaml file if using CI/CD to build a Docker image outside of Prefect. Instead, the work pool can reference the image directly.

        ","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#option-4-cloud-provider-storage","title":"Option 4: Cloud-provider storage","text":"

        You can store your code in an AWS S3 bucket, Azure Blob Storage container, or GCP GCS bucket and specify the destination directly in the push and pull steps of your prefect.yaml file.

        To create a templated prefect.yaml file, run prefect init and select the recipe for the applicable cloud-provider storage. Below are the recipe options and the relevant portions of the prefect.yaml file.

        AWS S3 bucketAzure Blob Storage containerGCP GCS bucket

        Choose s3 as the recipe and enter the bucket name when prompted.

        # push section allows you to manage if and how this project is uploaded to remote locations\npush:\n- prefect_aws.deployments.steps.push_to_s3:\n    id: push_code\n    requires: prefect-aws>=0.3.4\n    bucket: my-bucket\n    folder: my-folder\n    credentials: \"{{ prefect.blocks.aws-credentials.my-credentials-block }}\" # if private\n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect_aws.deployments.steps.pull_from_s3:\n    id: pull_code\n    requires: prefect-aws>=0.3.4\n    bucket: '{{ push_code.bucket }}'\n    folder: '{{ push_code.folder }}'\n    credentials: \"{{ prefect.blocks.aws-credentials.my-credentials-block }}\" # if private \n

        If the bucket requires authentication to access it, you can do the following:

        1. Install the Prefect-AWS library with pip install -U prefect-aws
        2. Register the blocks in Prefect-AWS with prefect block register -m prefect_aws
        3. Create a user with a role with read and write permissions to access the bucket. If using the UI, create an access key pair with IAM->Users->Security credentials->Access keys->Create access key. Choose Use case->Other and then copy the Access key and Secret access key values.
        4. Create an AWS Credentials block via code or the Prefect UI. In addition to the block name, most users will fill in the AWS Access Key ID and AWS Access Key Secret fields.
        5. Reference the block as shown in the push and pull steps above.

        Choose azure as the recipe and enter the container name when prompted.

        # push section allows you to manage if and how this project is uploaded to remote locations\npush:\n- prefect_azure.deployments.steps.push_to_azure_blob_storage:\n    id: push_code\n    requires: prefect-azure>=0.2.8\n    container: my-prefect-azure-container\n    folder: my-folder\n    credentials: \"{{ prefect.blocks.azure-blob-storage-credentials.my-credentials-block }}\" # if private\n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect_azure.deployments.steps.pull_from_azure_blob_storage:\n    id: pull_code\n    requires: prefect-azure>=0.2.8\n    container: '{{ push_code.container }}'\n    folder: '{{ push_code.folder }}'\n    credentials: \"{{ prefect.blocks.azure-blob-storage-credentials.my-credentials-block }}\" # if private\n

        If the blob requires authentication to access it, you can do the following:

        1. Install the Prefect-Azure library with pip install -U prefect-azure
        2. Register the blocks in Prefect-Azure with prefect block register -m prefect_azure
        3. Create an access key for a role with sufficient (read and write) permissions to access the blob. A connection string that will contain all needed information can be created in the UI under Storage Account->Access keys.
        4. Create an Azure Blob Storage Credentials block via code or the Prefect UI. Enter a name for the block and paste the connection string into the Connection String field.
        5. Reference the block as shown in the push and pull steps above.

        Choose gcs as the recipe and enter the bucket name when prompted.

        # push section allows you to manage if and how this project is uploaded to remote locations\npush:\n- prefect_gcp.deployment.steps.push_to_gcs:\n    id: push_code\n    requires: prefect-gcp>=0.4.3\n    bucket: my-bucket\n    folder: my-folder\n    credentials: \"{{ prefect.blocks.gcp-credentials.my-credentials-block }}\" # if private \n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect_gcp.deployment.steps.pull_from_gcs:\n    id: pull_code\n    requires: prefect-gcp>=0.4.3\n    bucket: '{{ push_code.bucket }}'\n    folder: '{{ push_code.folder }}'\n    credentials: \"{{ prefect.blocks.gcp-credentials.my-credentials-block }}\" # if private \n

        If the bucket requires authentication to access it, you can do the following:

        1. Install the Prefect-GCP library with pip install -U prefect-gcp
        2. Register the blocks in Prefect-GCP with prefect block register -m prefect_gcp
        3. Create a service account in GCP for a role with read and write permissions to access the bucket contents. If using the GCP console, go to IAM & Admin->Service accounts->Create service account. After choosing a role with the required permissions, see your service account and click on the three dot menu in the Actions column. Select Manage Keys->ADD KEY->Create new key->JSON. Download the JSON file.
        4. Create a GCP Credentials block via code or the Prefect UI. Enter a name for the block and paste the entire contents of the JSON key file into the Service Account Info field.
        5. Reference the block as shown in the push and pull steps above.

        Another option for authentication is for the worker to have access to the storage location at runtime via SSH keys.

        Alternatively, you can inject environment variables into your deployment like this example that uses an environment variable named CUSTOM_FOLDER:

         push:\n    - prefect_gcp.deployment.steps.push_to_gcs:\n        id: push_code\n        requires: prefect-gcp>=0.4.3\n        bucket: my-bucket\n        folder: '{{ $CUSTOM_FOLDER }}'\n
        ","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#including-and-excluding-files-from-storage","title":"Including and excluding files from storage","text":"

        By default, Prefect uploads all files in the current folder to the configured storage location when you create a deployment.

        When using a git repository, Docker image, or cloud-provider storage location, you may want to exclude certain files or directories.

        • If you are familiar with git you are likely familiar with the .gitignore file.
        • If you are familiar with Docker you are likely familiar with the .dockerignore file.
        • For cloud-provider storage, the .prefectignore file serves the same purpose and follows a syntax similar to those files, so an entry of *.pyc will exclude all .pyc files from upload (see the sketch after this list).
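
        A minimal .prefectignore sketch (entries are illustrative):

        *.pyc\n__pycache__/\n.env\ndata/\n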
        ","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#other-code-storage-creation-methods","title":"Other code storage creation methods","text":"

        In earlier versions of Prefect, storage blocks were the recommended way to store flow code. Storage blocks are still supported, but they are no longer recommended.

        As shown above, repositories can be referenced directly through interactive prompts with prefect deploy or in a prefect.yaml. When authentication is needed, Secret or Credential blocks can be referenced, and in some cases created automatically through interactive deployment creation prompts.

        ","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#next-steps","title":"Next steps","text":"

        You've seen options for where to store your flow code.

        We recommend using Docker-based storage or git-based storage for your production deployments.

        Check out more guides to reach your goals with Prefect.

        ","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"integrations/","title":"Integrations","text":"

        You can install the following integrations of pre-built tasks, flows, blocks and more as PyPI packages:

        Alert

        Maintained by Khuyen Tran

        AWS

        Maintained by Prefect

        Azure

        Maintained by Prefect

        Bitbucket

        Maintained by Prefect

        Coiled

        Maintained by Coiled

        CubeJS

        Maintained by Alessandro Lollo

        Dask

        Maintained by Prefect

        Databricks

        Maintained by Prefect

        dbt

        Maintained by Prefect

        Docker

        Maintained by Prefect

        Earthdata

        Maintained by Giorgio Basile

        Email

        Maintained by Prefect

        Fivetran

        Maintained by Fivetran

        Fugue

        Maintained by The Fugue Development Team

        GCP

        Maintained by Prefect

        GitHub

        Maintained by Prefect

        GitLab

        Maintained by Prefect

        Google Sheets

        Maintained by Stefano Cascavilla

        HashiCorp Vault

        Maintained by Pavel Chekin

        Kubernetes

        Maintained by Prefect

        KV

        Maintained by Zanie Blue

        MetricFlow

        Maintained by Alessandro Lollo

        Planetary Computer

        Maintained by Giorgio Basile

        Ray

        Maintained by Prefect

        Shell

        Maintained by Prefect

        Sifflet

        Maintained by Sifflet and Alessandro Lollo

        Slack

        Maintained by Prefect

        Snowflake

        Maintained by Prefect

        Soda Cloud

        Maintained by Alessandro Lollo

        Soda Core

        Maintained by Soda and Alessandro Lollo

        Spark on Kubernetes

        Maintained by Manoj Babu Katragadda

        SQLAlchemy

        Maintained by Prefect

        Stitch

        Maintained by Alessandro Lollo

        Transform

        Maintained by Alessandro Lollo

        ","tags":["tasks","flows","blocks","collections","task library","integrations","Alert","AWS","Azure","Bitbucket","Coiled","CubeJS","Dask","Databricks","dbt","Docker","Earthdata","Email","Fivetran","Fugue","GCP","GitHub","GitLab","Google Sheets","HashiCorp Vault","Kubernetes","KV","MetricFlow","Planetary Computer","Ray","Shell","Sifflet","Slack","Snowflake","Soda Cloud","Soda Core","Spark on Kubernetes","SQLAlchemy","Stitch","Transform"],"boost":2},{"location":"integrations/contribute/","title":"Contribute","text":"

        We welcome contributors! You can help contribute blocks and integrations by following these steps.

        ","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/contribute/#contributing-blocks","title":"Contributing Blocks","text":"

        Building your own custom block is simple!

        1. Subclass from Block.
        2. Add a description alongside an Attributes and Example section in the docstring.
        3. Set a _logo_url to point to a relevant image.
        4. Create the pydantic.Fields of the block with a type annotation, default or default_factory, and a short description about the field.
        5. Define the methods of the block.

        For example, this is how the Secret block is implemented:

        from pydantic import Field, SecretStr\nfrom prefect.blocks.core import Block\n\nclass Secret(Block):\n    \"\"\"\n    A block that represents a secret value. The value stored in this block will be obfuscated when\n    this block is logged or shown in the UI.\n\n    Attributes:\n        value: A string value that should be kept secret.\n\n    Example:\n        ```python\n        from prefect.blocks.system import Secret\n        secret_block = Secret.load(\"BLOCK_NAME\")\n\n        # Access the stored secret\n        secret_block.get()\n        ```\n    \"\"\"\n\n    _logo_url = \"https://example.com/logo.png\"\n\n    value: SecretStr = Field(\n        default=..., description=\"A string value that should be kept secret.\"\n    )  # ... indicates it's a required field\n\n    def get(self):\n        return self.value.get_secret_value()\n

        To view in Prefect Cloud or the Prefect server UI, register the block.
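
        For example, if the block class lives in a local file, you might register it like this (the file name is hypothetical); registering from an installed module works the same way with -m:

        prefect block register --file my_block.py\n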

        ","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/contribute/#contributing-integrations","title":"Contributing Integrations","text":"

        Anyone can create and share a Prefect Integration and we encourage anyone interested in creating an integration to do so!

        ","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/contribute/#generate-a-project","title":"Generate a project","text":"

        To help you get started with your integration, we've created a template that gives you the tools you need to create and publish it.

        Use the Prefect Integration template to get started creating an integration with a bootstrapped project!

        ","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/contribute/#list-a-project-in-the-integrations-catalog","title":"List a project in the Integrations Catalog","text":"

        To list your integration in the Prefect Integrations Catalog, submit a PR to the Prefect repository adding a file to the docs/integrations/catalog directory with details about your integration. Please use TEMPLATE.yaml in that folder as a guide.

        ","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/contribute/#contribute-fixes-or-enhancements-to-integrations","title":"Contribute fixes or enhancements to Integrations","text":"

        If you'd like to contribute a fix or a new feature to any of our Integrations, please propose changes through a pull request from a fork of the repository.

        1. Fork the repository
        2. Clone the forked repository
        3. Install the repository and its dependencies:
          pip install -e \".[dev]\"\n
        4. Make desired changes
        5. Add tests
        6. Insert an entry to the Integration's CHANGELOG.md
        7. Install pre-commit to perform quality checks prior to commit:
          pre-commit install\n
        8. git commit, git push, and create a pull request
        ","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/usage/","title":"Using Integrations","text":"","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/usage/#installing-an-integration","title":"Installing an Integration","text":"

        Install the Integration via pip.

        For example, to use prefect-aws:

        pip install prefect-aws\n
        ","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/usage/#registering-blocks-from-an-integration","title":"Registering Blocks from an Integration","text":"

        Once the Prefect Integration is installed, register the blocks within the integration to view them in the Prefect Cloud UI:

        For example, to register the blocks available in prefect-aws:

        prefect block register -m prefect_aws\n

        Updating blocks from an integration

        If you install an updated Prefect integration that adds fields to a block type, you will need to re-register that block type.

        Loading a block in code

        To use the load method on a Block, you must already have a block document saved either through code or through the Prefect UI.
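
        A minimal sketch of saving and then loading a block document (the block name is hypothetical):

        from prefect_aws import AwsCredentials\n\n# Save a block document so it can be loaded later by name\nAwsCredentials(region_name=\"us-east-1\").save(\"my-aws-creds\", overwrite=True)\n\n# Load it elsewhere in your code\naws_credentials = AwsCredentials.load(\"my-aws-creds\")\n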

        Learn more about Blocks here!

        ","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/usage/#using-tasks-and-flows-from-an-integration","title":"Using Tasks and Flows from an Integration","text":"

        Integrations also contain pre-built tasks and flows that can be imported and called within your code.

        As an example, to read a secret from AWS Secrets Manager with the read_secret task:

        from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.secrets_manager import read_secret\n\n@flow\ndef connect_to_database():\n    aws_credentials = AwsCredentials.load(\"MY_BLOCK_NAME\")\n    secret_value = read_secret(\n        secret_name=\"db_password\",\n        aws_credentials=aws_credentials\n    )\n\n    # Use secret_value to connect to a database\n
        ","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/usage/#customizing-tasks-and-flows-from-an-integration","title":"Customizing Tasks and Flows from an Integration","text":"

        To customize the settings of a task or flow pre-configured in an integration, use with_options:

        from prefect import flow\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run_and_wait_for_completion\n\ncustom_run_dbt_cloud_job = trigger_dbt_cloud_job_run_and_wait_for_completion.with_options(\n    name=\"Run My DBT Cloud Job\",\n    retries=2,\n    retry_delay_seconds=10\n)\n\n@flow\ndef run_dbt_job_flow():\n    run_result = custom_run_dbt_cloud_job(\n        dbt_cloud_credentials=DbtCloudCredentials.load(\"my-dbt-cloud-credentials\"),\n        job_id=1\n    )\n\nrun_dbt_job_flow()\n
        ","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/usage/#recipes-and-tutorials","title":"Recipes and Tutorials","text":"

        To learn more about how to use Integrations, check out Prefect recipes on GitHub. These recipes provide examples of how Integrations can be used in various scenarios.

        ","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/prefect-aws/","title":"prefect-aws","text":""},{"location":"integrations/prefect-aws/#welcome","title":"Welcome","text":"

        prefect-aws makes it easy to leverage the capabilities of AWS in your workflows.

        "},{"location":"integrations/prefect-aws/#getting-started","title":"Getting started","text":""},{"location":"integrations/prefect-aws/#installation","title":"Installation","text":"

        Prefect requires Python 3.8 or newer.

        We recommend using a Python virtual environment manager such as pipenv, conda, or virtualenv.

        Install prefect-aws

        pip install prefect-aws\n
        "},{"location":"integrations/prefect-aws/#registering-blocks","title":"Registering blocks","text":"

        Register blocks in this module to make them available for use.

        prefect block register -m prefect_aws\n

        A list of available blocks in prefect-aws and their setup instructions can be found here.

        "},{"location":"integrations/prefect-aws/#saving-credentials-to-a-block","title":"Saving credentials to a block","text":"

        You will need an AWS account and credentials to use prefect-aws.

        1. Refer to the AWS Configuration documentation on how to retrieve your access key ID and secret access key
        2. Copy the access key ID and secret access key
        3. Create an AwsCredentials block in the Prefect UI or use a Python script like the one below and replace the placeholders with your credential information and desired block name:
        from prefect_aws import AwsCredentials\nAwsCredentials(\n    aws_access_key_id=\"PLACEHOLDER\",\n    aws_secret_access_key=\"PLACEHOLDER\",\n    aws_session_token=None,  # replace this with token if necessary\n    region_name=\"us-east-2\"\n).save(\"BLOCK-NAME-PLACEHOLDER\")\n

        Congrats! You can now load the saved block to use your credentials in your Python code:

        from prefect_aws import AwsCredentials\nAwsCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n
        "},{"location":"integrations/prefect-aws/#using-prefect-with-aws-s3","title":"Using Prefect with AWS S3","text":"

        prefect_aws allows you to read and write objects with AWS S3 within your Prefect flows.

        The provided code snippet shows how you can use prefect_aws to upload a file to an AWS S3 bucket and download the same file under a different file name.

        Note that the following code assumes the bucket already exists.

        from pathlib import Path\nfrom prefect import flow\nfrom prefect_aws import AwsCredentials, S3Bucket\n\n@flow\ndef s3_flow():\n    # create a dummy file to upload\n    file_path = Path(\"test-example.txt\")\n    file_path.write_text(\"Hello, Prefect!\")\n\n    aws_credentials = AwsCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n    s3_bucket = S3Bucket(\n        bucket_name=\"BUCKET-NAME-PLACEHOLDER\",\n        credentials=aws_credentials\n    )\n\n    s3_bucket_path = s3_bucket.upload_from_path(file_path)\n    downloaded_file_path = s3_bucket.download_object_to_path(\n        s3_bucket_path, \"downloaded-test-example.txt\"\n    )\n    return downloaded_file_path.read_text()\n\ns3_flow()\n
        "},{"location":"integrations/prefect-aws/#using-prefect-with-aws-secrets-manager","title":"Using Prefect with AWS Secrets Manager","text":"

        prefect_aws allows you to read and write secrets with AWS Secrets Manager within your Prefect flows.

        The provided code snippet shows how you can use prefect_aws to write a secret to Secrets Manager, read the secret data, delete the secret, and finally return the secret data.

        from prefect import flow\nfrom prefect_aws import AwsCredentials, AwsSecret\n\n@flow\ndef secrets_manager_flow():\n    aws_credentials = AwsCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n    aws_secret = AwsSecret(secret_name=\"test-example\", aws_credentials=aws_credentials)\n    aws_secret.write_secret(secret_data=b\"Hello, Prefect!\")\n    secret_data = aws_secret.read_secret()\n    aws_secret.delete_secret()\n    return secret_data\n\nsecrets_manager_flow()\n
        "},{"location":"integrations/prefect-aws/#using-prefect-with-aws-ecs","title":"Using Prefect with AWS ECS","text":"

        prefect_aws allows you to use AWS ECS as infrastructure for your deployments. Using ECS for scheduled flow runs enables the dynamic provisioning of infrastructure for containers and unlocks greater scalability. This setup gives you all of the observability and orchestration benefits of Prefect, while also providing you the scalability of ECS.

        See the ECS guide for a full walkthrough.

        "},{"location":"integrations/prefect-aws/#resources","title":"Resources","text":"

        Refer to the API documentation on the sidebar to explore all the capabilities of Prefect AWS!

        For more tips on how to use blocks and tasks in Prefect integration libraries, check out the docs!

        For more information about how to use Prefect, please refer to the Prefect documentation.

        If you encounter any bugs while using prefect-aws, feel free to open an issue in the prefect repository.

        If you have any questions or issues while using prefect-aws, you can find help in the Prefect Slack community.

        "},{"location":"integrations/prefect-aws/batch/","title":"Batch","text":""},{"location":"integrations/prefect-aws/batch/#prefect_aws.batch","title":"prefect_aws.batch","text":"

        Tasks for interacting with AWS Batch

        "},{"location":"integrations/prefect-aws/batch/#prefect_aws.batch.batch_submit","title":"batch_submit async","text":"

        Submit a job to the AWS Batch job service.

        Parameters:

        • job_name (str): The AWS batch job name. Required.
        • job_queue (str): Name of the AWS batch job queue. Required.
        • job_definition (str): The AWS batch job definition. Required.
        • aws_credentials (AwsCredentials): Credentials to use for authentication with AWS. Required.
        • **batch_kwargs (Optional[Dict[str, Any]]): Additional keyword arguments to pass to the boto3 submit_job function. See the documentation for submit_job for more details. Default: {}.

        Returns:

        • str: The id corresponding to the job.

        Example

        Submits a job to batch.

        from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.batch import batch_submit\n\n\n@flow\ndef example_batch_submit_flow():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"acccess_key_id\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n    job_id = batch_submit(\n        \"job_name\",\n        \"job_queue\",\n        \"job_definition\",\n        aws_credentials\n    )\n    return job_id\n\nexample_batch_submit_flow()\n
        Source code in prefect_aws/batch.py
        @task\nasync def batch_submit(\n    job_name: str,\n    job_queue: str,\n    job_definition: str,\n    aws_credentials: AwsCredentials,\n    **batch_kwargs: Optional[Dict[str, Any]],\n) -> str:\n    \"\"\"\n    Submit a job to the AWS Batch job service.\n\n    Args:\n        job_name: The AWS batch job name.\n        job_queue: Name of the AWS batch job queue.\n        job_definition: The AWS batch job definition.\n        aws_credentials: Credentials to use for authentication with AWS.\n        **batch_kwargs: Additional keyword arguments to pass to the boto3\n            `submit_job` function. See the documentation for\n            [submit_job](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/batch.html#Batch.Client.submit_job)\n            for more details.\n\n    Returns:\n        The id corresponding to the job.\n\n    Example:\n        Submits a job to batch.\n\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.batch import batch_submit\n\n\n        @flow\n        def example_batch_submit_flow():\n            aws_credentials = AwsCredentials(\n                aws_access_key_id=\"acccess_key_id\",\n                aws_secret_access_key=\"secret_access_key\"\n            )\n            job_id = batch_submit(\n                \"job_name\",\n                \"job_queue\",\n                \"job_definition\",\n                aws_credentials\n            )\n            return job_id\n\n        example_batch_submit_flow()\n        ```\n\n    \"\"\"  # noqa\n    logger = get_run_logger()\n    logger.info(\"Preparing to submit %s job to %s job queue\", job_name, job_queue)\n\n    batch_client = aws_credentials.get_boto3_session().client(\"batch\")\n\n    response = await run_sync_in_worker_thread(\n        batch_client.submit_job,\n        jobName=job_name,\n        jobQueue=job_queue,\n        jobDefinition=job_definition,\n        **batch_kwargs,\n    )\n    return response[\"jobId\"]\n
        "},{"location":"integrations/prefect-aws/client_waiter/","title":"Client Waiter","text":""},{"location":"integrations/prefect-aws/client_waiter/#prefect_aws.client_waiter","title":"prefect_aws.client_waiter","text":"

        Task for waiting on a long-running AWS job

        "},{"location":"integrations/prefect-aws/client_waiter/#prefect_aws.client_waiter.client_waiter","title":"client_waiter async","text":"

        Uses the underlying boto3 waiter functionality.

        Parameters:

        • client (str): The AWS client on which to wait (e.g., 'client_wait', 'ec2', etc). Required.
        • waiter_name (str): The name of the waiter to instantiate. You may also use a custom waiter name, if you supply an accompanying waiter definition dict. Required.
        • aws_credentials (AwsCredentials): Credentials to use for authentication with AWS. Required.
        • waiter_definition (Optional[Dict[str, Any]]): A valid custom waiter model, as a dict. Note that if you supply a custom definition, it is assumed that the provided 'waiter_name' is contained within the waiter definition dict. Default: None.
        • **waiter_kwargs (Optional[Dict[str, Any]]): Arguments to pass to the waiter.wait(...) method. Will depend upon the specific waiter being called. Default: {}.

        Example

        Run an ec2 waiter until instance_exists.

        from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.client_wait import client_waiter\n\n@flow\ndef example_client_wait_flow():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"acccess_key_id\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n\n    waiter = client_waiter(\n        \"ec2\",\n        \"instance_exists\",\n        aws_credentials\n    )\n\n    return waiter\nexample_client_wait_flow()\n

        Source code in prefect_aws/client_waiter.py
        @task\nasync def client_waiter(\n    client: str,\n    waiter_name: str,\n    aws_credentials: AwsCredentials,\n    waiter_definition: Optional[Dict[str, Any]] = None,\n    **waiter_kwargs: Optional[Dict[str, Any]],\n):\n    \"\"\"\n    Uses the underlying boto3 waiter functionality.\n\n    Args:\n        client: The AWS client on which to wait (e.g., 'client_wait', 'ec2', etc).\n        waiter_name: The name of the waiter to instantiate.\n            You may also use a custom waiter name, if you supply\n            an accompanying waiter definition dict.\n        aws_credentials: Credentials to use for authentication with AWS.\n        waiter_definition: A valid custom waiter model, as a dict. Note that if\n            you supply a custom definition, it is assumed that the provided\n            'waiter_name' is contained within the waiter definition dict.\n        **waiter_kwargs: Arguments to pass to the `waiter.wait(...)` method. Will\n            depend upon the specific waiter being called.\n\n    Example:\n        Run an ec2 waiter until instance_exists.\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.client_wait import client_waiter\n\n        @flow\n        def example_client_wait_flow():\n            aws_credentials = AwsCredentials(\n                aws_access_key_id=\"acccess_key_id\",\n                aws_secret_access_key=\"secret_access_key\"\n            )\n\n            waiter = client_waiter(\n                \"ec2\",\n                \"instance_exists\",\n                aws_credentials\n            )\n\n            return waiter\n        example_client_wait_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Waiting on %s job\", client)\n\n    boto_client = aws_credentials.get_boto3_session().client(client)\n\n    if waiter_definition is not None:\n        # Use user-provided waiter definition\n        waiter_model = WaiterModel(waiter_definition)\n        waiter = create_waiter_with_client(waiter_name, waiter_model, boto_client)\n    elif waiter_name in boto_client.waiter_names:\n        waiter = boto_client.get_waiter(waiter_name)\n    else:\n        raise ValueError(\n            f\"The waiter name, {waiter_name}, is not a valid boto waiter; \"\n            \"if using a custom waiter, you must provide a waiter definition\"\n        )\n\n    await run_sync_in_worker_thread(waiter.wait, **waiter_kwargs)\n
        "},{"location":"integrations/prefect-aws/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials","title":"prefect_aws.credentials","text":"

        Module handling AWS credentials

        "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.AwsCredentials","title":"AwsCredentials","text":"

        Bases: CredentialsBlock

        Block used to manage authentication with AWS. AWS authentication is handled via the boto3 module. Refer to the boto3 docs for more info about the possible credential configurations.

        Example

        Load stored AWS credentials:

        from prefect_aws import AwsCredentials\n\naws_credentials_block = AwsCredentials.load(\"BLOCK_NAME\")\n

        Source code in prefect_aws/credentials.py
        class AwsCredentials(CredentialsBlock):\n    \"\"\"\n    Block used to manage authentication with AWS. AWS authentication is\n    handled via the `boto3` module. Refer to the\n    [boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html)\n    for more info about the possible credential configurations.\n\n    Example:\n        Load stored AWS credentials:\n        ```python\n        from prefect_aws import AwsCredentials\n\n        aws_credentials_block = AwsCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"  # noqa E501\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/d74b16fe84ce626345adf235a47008fea2869a60-225x225.png\"  # noqa\n    _block_type_name = \"AWS Credentials\"\n    _documentation_url = \"https://prefecthq.github.io/prefect-aws/credentials/#prefect_aws.credentials.AwsCredentials\"  # noqa\n\n    aws_access_key_id: Optional[str] = Field(\n        default=None,\n        description=\"A specific AWS access key ID.\",\n        title=\"AWS Access Key ID\",\n    )\n    aws_secret_access_key: Optional[SecretStr] = Field(\n        default=None,\n        description=\"A specific AWS secret access key.\",\n        title=\"AWS Access Key Secret\",\n    )\n    aws_session_token: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The session key for your AWS account. \"\n            \"This is only needed when you are using temporary credentials.\"\n        ),\n        title=\"AWS Session Token\",\n    )\n    profile_name: Optional[str] = Field(\n        default=None, description=\"The profile to use when creating your session.\"\n    )\n    region_name: Optional[str] = Field(\n        default=None,\n        description=\"The AWS Region where you want to create new connections.\",\n    )\n    aws_client_parameters: AwsClientParameters = Field(\n        default_factory=AwsClientParameters,\n        description=\"Extra parameters to initialize the Client.\",\n        title=\"AWS Client Parameters\",\n    )\n\n    class Config:\n        \"\"\"Config class for pydantic model.\"\"\"\n\n        arbitrary_types_allowed = True\n\n    def __hash__(self):\n        field_hashes = (\n            hash(self.aws_access_key_id),\n            hash(self.aws_secret_access_key),\n            hash(self.aws_session_token),\n            hash(self.profile_name),\n            hash(self.region_name),\n            hash(self.aws_client_parameters),\n        )\n        return hash(field_hashes)\n\n    def get_boto3_session(self) -> boto3.Session:\n        \"\"\"\n        Returns an authenticated boto3 session that can be used to create clients\n        for AWS services\n\n        Example:\n            Create an S3 client from an authorized boto3 session:\n            ```python\n            aws_credentials = AwsCredentials(\n                aws_access_key_id = \"access_key_id\",\n                aws_secret_access_key = \"secret_access_key\"\n                )\n            s3_client = aws_credentials.get_boto3_session().client(\"s3\")\n            ```\n        \"\"\"\n\n        if self.aws_secret_access_key:\n            aws_secret_access_key = self.aws_secret_access_key.get_secret_value()\n        else:\n            aws_secret_access_key = None\n\n        return boto3.Session(\n            aws_access_key_id=self.aws_access_key_id,\n            aws_secret_access_key=aws_secret_access_key,\n            aws_session_token=self.aws_session_token,\n            profile_name=self.profile_name,\n            
region_name=self.region_name,\n        )\n\n    def get_client(self, client_type: Union[str, ClientType]):\n        \"\"\"\n        Helper method to dynamically get a client type.\n\n        Args:\n            client_type: The client's service name.\n\n        Returns:\n            An authenticated client.\n\n        Raises:\n            ValueError: if the client is not supported.\n        \"\"\"\n        if isinstance(client_type, ClientType):\n            client_type = client_type.value\n\n        return _get_client_cached(ctx=self, client_type=client_type)\n\n    def get_s3_client(self) -> S3Client:\n        \"\"\"\n        Gets an authenticated S3 client.\n\n        Returns:\n            An authenticated S3 client.\n        \"\"\"\n        return self.get_client(client_type=ClientType.S3)\n\n    def get_secrets_manager_client(self) -> SecretsManagerClient:\n        \"\"\"\n        Gets an authenticated Secrets Manager client.\n\n        Returns:\n            An authenticated Secrets Manager client.\n        \"\"\"\n        return self.get_client(client_type=ClientType.SECRETS_MANAGER)\n
        "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.AwsCredentials.Config","title":"Config","text":"

        Config class for pydantic model.

        Source code in prefect_aws/credentials.py
        class Config:\n    \"\"\"Config class for pydantic model.\"\"\"\n\n    arbitrary_types_allowed = True\n
        "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.AwsCredentials.get_boto3_session","title":"get_boto3_session","text":"

        Returns an authenticated boto3 session that can be used to create clients for AWS services

        Example

        Create an S3 client from an authorized boto3 session:

        aws_credentials = AwsCredentials(\n    aws_access_key_id = \"access_key_id\",\n    aws_secret_access_key = \"secret_access_key\"\n    )\ns3_client = aws_credentials.get_boto3_session().client(\"s3\")\n

        Source code in prefect_aws/credentials.py
        def get_boto3_session(self) -> boto3.Session:\n    \"\"\"\n    Returns an authenticated boto3 session that can be used to create clients\n    for AWS services\n\n    Example:\n        Create an S3 client from an authorized boto3 session:\n        ```python\n        aws_credentials = AwsCredentials(\n            aws_access_key_id = \"access_key_id\",\n            aws_secret_access_key = \"secret_access_key\"\n            )\n        s3_client = aws_credentials.get_boto3_session().client(\"s3\")\n        ```\n    \"\"\"\n\n    if self.aws_secret_access_key:\n        aws_secret_access_key = self.aws_secret_access_key.get_secret_value()\n    else:\n        aws_secret_access_key = None\n\n    return boto3.Session(\n        aws_access_key_id=self.aws_access_key_id,\n        aws_secret_access_key=aws_secret_access_key,\n        aws_session_token=self.aws_session_token,\n        profile_name=self.profile_name,\n        region_name=self.region_name,\n    )\n
        "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.AwsCredentials.get_client","title":"get_client","text":"

        Helper method to dynamically get a client type.

        Parameters:

        • client_type (Union[str, ClientType]): The client's service name. Required.

        Returns:

        • An authenticated client.

        Raises:

        • ValueError: if the client is not supported.

        Source code in prefect_aws/credentials.py
        def get_client(self, client_type: Union[str, ClientType]):\n    \"\"\"\n    Helper method to dynamically get a client type.\n\n    Args:\n        client_type: The client's service name.\n\n    Returns:\n        An authenticated client.\n\n    Raises:\n        ValueError: if the client is not supported.\n    \"\"\"\n    if isinstance(client_type, ClientType):\n        client_type = client_type.value\n\n    return _get_client_cached(ctx=self, client_type=client_type)\n
        "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.AwsCredentials.get_s3_client","title":"get_s3_client","text":"

        Gets an authenticated S3 client.

        Returns:

        S3Client: An authenticated S3 client.

        Source code in prefect_aws/credentials.py
        def get_s3_client(self) -> S3Client:\n    \"\"\"\n    Gets an authenticated S3 client.\n\n    Returns:\n        An authenticated S3 client.\n    \"\"\"\n    return self.get_client(client_type=ClientType.S3)\n
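
        Example

        A usage sketch: load a previously saved AwsCredentials block (the block name \"BLOCK_NAME\" is a placeholder) and list buckets with the standard boto3 call:

        from prefect_aws import AwsCredentials\n\n# Assumes an AwsCredentials block was saved earlier under this name.\naws_credentials = AwsCredentials.load(\"BLOCK_NAME\")\ns3_client = aws_credentials.get_s3_client()\nprint(s3_client.list_buckets()[\"Buckets\"])\n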
        "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.AwsCredentials.get_secrets_manager_client","title":"get_secrets_manager_client","text":"

        Gets an authenticated Secrets Manager client.

        Returns:

        SecretsManagerClient: An authenticated Secrets Manager client.

        Source code in prefect_aws/credentials.py
        def get_secrets_manager_client(self) -> SecretsManagerClient:\n    \"\"\"\n    Gets an authenticated Secrets Manager client.\n\n    Returns:\n        An authenticated Secrets Manager client.\n    \"\"\"\n    return self.get_client(client_type=ClientType.SECRETS_MANAGER)\n
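
        Example

        A usage sketch: the secret name \"my-secret\" and block name \"BLOCK_NAME\" are placeholders; get_secret_value is the standard boto3 Secrets Manager call:

        from prefect_aws import AwsCredentials\n\n# Assumes an AwsCredentials block was saved earlier under this name.\naws_credentials = AwsCredentials.load(\"BLOCK_NAME\")\nsecrets_client = aws_credentials.get_secrets_manager_client()\n\n# Retrieve a secret by its name or ARN.\nresponse = secrets_client.get_secret_value(SecretId=\"my-secret\")\nprint(response[\"SecretString\"])\n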
        "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.ClientType","title":"ClientType","text":"

        Bases: Enum

        The supported boto3 clients.

        Source code in prefect_aws/credentials.py
        class ClientType(Enum):\n    \"\"\"The supported boto3 clients.\"\"\"\n\n    S3 = \"s3\"\n    ECS = \"ecs\"\n    BATCH = \"batch\"\n    SECRETS_MANAGER = \"secretsmanager\"\n
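
        Example

        The enum members map directly to boto3 service names, which is why get_client accepts either form; a minimal sketch listing the supported values:

        from prefect_aws.credentials import ClientType\n\n# Enumerate the boto3 service names supported by get_client.\nprint([member.value for member in ClientType])\n# ['s3', 'ecs', 'batch', 'secretsmanager']\n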
        "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.MinIOCredentials","title":"MinIOCredentials","text":"

        Bases: CredentialsBlock

        Block used to manage authentication with MinIO. Refer to the MinIO docs for more info about the possible credential configurations.

        Attributes:

        minio_root_user (str): Admin or root user.

        minio_root_password (SecretStr): Admin or root password.

        region_name (Optional[str]): Location of server, e.g. \"us-east-1\".

        Example

        Load stored MinIO credentials:

        from prefect_aws import MinIOCredentials\n\nminio_credentials_block = MinIOCredentials.load(\"BLOCK_NAME\")\n

        Source code in prefect_aws/credentials.py
        class MinIOCredentials(CredentialsBlock):\n    \"\"\"\n    Block used to manage authentication with MinIO. Refer to the\n    [MinIO docs](https://docs.min.io/docs/minio-server-configuration-guide.html)\n    for more info about the possible credential configurations.\n\n    Attributes:\n        minio_root_user: Admin or root user.\n        minio_root_password: Admin or root password.\n        region_name: Location of server, e.g. \"us-east-1\".\n\n    Example:\n        Load stored MinIO credentials:\n        ```python\n        from prefect_aws import MinIOCredentials\n\n        minio_credentials_block = MinIOCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"  # noqa E501\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/676cb17bcbdff601f97e0a02ff8bcb480e91ff40-250x250.png\"  # noqa\n    _block_type_name = \"MinIO Credentials\"\n    _description = (\n        \"Block used to manage authentication with MinIO. Refer to the MinIO \"\n        \"docs: https://docs.min.io/docs/minio-server-configuration-guide.html \"\n        \"for more info about the possible credential configurations.\"\n    )\n    _documentation_url = \"https://prefecthq.github.io/prefect-aws/credentials/#prefect_aws.credentials.MinIOCredentials\"  # noqa\n\n    minio_root_user: str = Field(default=..., description=\"Admin or root user.\")\n    minio_root_password: SecretStr = Field(\n        default=..., description=\"Admin or root password.\"\n    )\n    region_name: Optional[str] = Field(\n        default=None,\n        description=\"The AWS Region where you want to create new connections.\",\n    )\n    aws_client_parameters: AwsClientParameters = Field(\n        default_factory=AwsClientParameters,\n        description=\"Extra parameters to initialize the Client.\",\n    )\n\n    class Config:\n        \"\"\"Config class for pydantic model.\"\"\"\n\n        arbitrary_types_allowed = True\n\n    def __hash__(self):\n        return hash(\n            (\n                hash(self.minio_root_user),\n                hash(self.minio_root_password),\n                hash(self.region_name),\n                hash(frozenset(self.aws_client_parameters.dict().items())),\n            )\n        )\n\n    def get_boto3_session(self) -> boto3.Session:\n        \"\"\"\n        Returns an authenticated boto3 session that can be used to create clients\n        and perform object operations on MinIO server.\n\n        Example:\n            Create an S3 client from an authorized boto3 session\n\n            ```python\n            minio_credentials = MinIOCredentials(\n                minio_root_user = \"minio_root_user\",\n                minio_root_password = \"minio_root_password\"\n            )\n            s3_client = minio_credentials.get_boto3_session().client(\n                service=\"s3\",\n                endpoint_url=\"http://localhost:9000\"\n            )\n            ```\n        \"\"\"\n\n        minio_root_password = (\n            self.minio_root_password.get_secret_value()\n            if self.minio_root_password\n            else None\n        )\n\n        return boto3.Session(\n            aws_access_key_id=self.minio_root_user,\n            aws_secret_access_key=minio_root_password,\n            region_name=self.region_name,\n        )\n\n    def get_client(self, client_type: Union[str, ClientType]):\n        \"\"\"\n        Helper method to dynamically get a client type.\n\n        Args:\n            client_type: The client's service name.\n\n        Returns:\n            An 
authenticated client.\n\n        Raises:\n            ValueError: if the client is not supported.\n        \"\"\"\n        if isinstance(client_type, ClientType):\n            client_type = client_type.value\n\n        return _get_client_cached(ctx=self, client_type=client_type)\n\n    def get_s3_client(self) -> S3Client:\n        \"\"\"\n        Gets an authenticated S3 client.\n\n        Returns:\n            An authenticated S3 client.\n        \"\"\"\n        return self.get_client(client_type=ClientType.S3)\n
        "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.MinIOCredentials.Config","title":"Config","text":"

        Config class for pydantic model.

        Source code in prefect_aws/credentials.py
        class Config:\n    \"\"\"Config class for pydantic model.\"\"\"\n\n    arbitrary_types_allowed = True\n
        "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.MinIOCredentials.get_boto3_session","title":"get_boto3_session","text":"

        Returns an authenticated boto3 session that can be used to create clients and perform object operations on a MinIO server.

        Example

        Create an S3 client from an authorized boto3 session:

        minio_credentials = MinIOCredentials(\n    minio_root_user = \"minio_root_user\",\n    minio_root_password = \"minio_root_password\"\n)\ns3_client = minio_credentials.get_boto3_session().client(\n    service=\"s3\",\n    endpoint_url=\"http://localhost:9000\"\n)\n
        Source code in prefect_aws/credentials.py
        def get_boto3_session(self) -> boto3.Session:\n    \"\"\"\n    Returns an authenticated boto3 session that can be used to create clients\n    and perform object operations on MinIO server.\n\n    Example:\n        Create an S3 client from an authorized boto3 session\n\n        ```python\n        minio_credentials = MinIOCredentials(\n            minio_root_user = \"minio_root_user\",\n            minio_root_password = \"minio_root_password\"\n        )\n        s3_client = minio_credentials.get_boto3_session().client(\n            service=\"s3\",\n            endpoint_url=\"http://localhost:9000\"\n        )\n        ```\n    \"\"\"\n\n    minio_root_password = (\n        self.minio_root_password.get_secret_value()\n        if self.minio_root_password\n        else None\n    )\n\n    return boto3.Session(\n        aws_access_key_id=self.minio_root_user,\n        aws_secret_access_key=minio_root_password,\n        region_name=self.region_name,\n    )\n
        "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.MinIOCredentials.get_client","title":"get_client","text":"

        Helper method to dynamically get a client type.

        Parameters:

        client_type (Union[str, ClientType]): The client's service name. Required.

        Returns:

        An authenticated client.

        Raises:

        ValueError: if the client is not supported.

        Source code in prefect_aws/credentials.py
        def get_client(self, client_type: Union[str, ClientType]):\n    \"\"\"\n    Helper method to dynamically get a client type.\n\n    Args:\n        client_type: The client's service name.\n\n    Returns:\n        An authenticated client.\n\n    Raises:\n        ValueError: if the client is not supported.\n    \"\"\"\n    if isinstance(client_type, ClientType):\n        client_type = client_type.value\n\n    return _get_client_cached(ctx=self, client_type=client_type)\n
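
        Example

        A sketch that points the client at a MinIO server; it assumes AwsClientParameters is importable from prefect_aws and accepts an endpoint_url field, and the address and root credentials below are placeholders:

        from prefect_aws import AwsClientParameters, MinIOCredentials\n\n# Placeholder root credentials and endpoint; assumes AwsClientParameters\n# exposes an endpoint_url field for the MinIO server address.\nminio_credentials = MinIOCredentials(\n    minio_root_user=\"minio_root_user\",\n    minio_root_password=\"minio_root_password\",\n    aws_client_parameters=AwsClientParameters(endpoint_url=\"http://localhost:9000\")\n)\ns3_client = minio_credentials.get_client(\"s3\")\n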
        "},{"location":"integrations/prefect-aws/credentials/#prefect_aws.credentials.MinIOCredentials.get_s3_client","title":"get_s3_client","text":"

        Gets an authenticated S3 client.

        Returns:

        S3Client: An authenticated S3 client.

        Source code in prefect_aws/credentials.py
        def get_s3_client(self) -> S3Client:\n    \"\"\"\n    Gets an authenticated S3 client.\n\n    Returns:\n        An authenticated S3 client.\n    \"\"\"\n    return self.get_client(client_type=ClientType.S3)\n
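
        Example

        A usage sketch: load a stored MinIOCredentials block (the block name is a placeholder, and its endpoint is assumed to be configured through aws_client_parameters) and list buckets:

        from prefect_aws import MinIOCredentials\n\n# Assumes a MinIOCredentials block was saved earlier under this name.\nminio_credentials = MinIOCredentials.load(\"BLOCK_NAME\")\ns3_client = minio_credentials.get_s3_client()\nprint(s3_client.list_buckets()[\"Buckets\"])\n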
        "},{"location":"integrations/prefect-aws/ecs/","title":"ECS (deprecated)","text":""},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs","title":"prefect_aws.ecs","text":"

        DEPRECATION WARNING:

        This module is deprecated as of March 2024 and will not be available after September 2024. It has been replaced by the ECS worker, which offers enhanced functionality and better performance.

        For upgrade instructions, see https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.

        Integrations with the Amazon Elastic Container Service.

        Examples:

        Run a task using ECS Fargate\n```python\nECSTask(command=[\"echo\", \"hello world\"]).run()\n```\n\nRun a task using ECS Fargate with a spot container instance\n```python\nECSTask(command=[\"echo\", \"hello world\"], launch_type=\"FARGATE_SPOT\").run()\n```\n\nRun a task using ECS with an EC2 container instance\n```python\nECSTask(command=[\"echo\", \"hello world\"], launch_type=\"EC2\").run()\n```\n\nRun a task on a specific VPC using ECS Fargate\n```python\nECSTask(command=[\"echo\", \"hello world\"], vpc_id=\"vpc-01abcdf123456789a\").run()\n```\n\nRun a task and stream the container's output to the local terminal. Note an\nexecution role must be provided with permissions: logs:CreateLogStream,\nlogs:CreateLogGroup, and logs:PutLogEvents.\n```python\nECSTask(\n    command=[\"echo\", \"hello world\"],\n    stream_output=True,\n    execution_role_arn=\"...\"\n)\n```\n\nRun a task using an existing task definition as a base\n```python\nECSTask(command=[\"echo\", \"hello world\"], task_definition_arn=\"arn:aws:ecs:...\")\n```\n\nRun a task with a specific image\n```python\nECSTask(command=[\"echo\", \"hello world\"], image=\"alpine:latest\")\n```\n\nRun a task with custom memory and CPU requirements\n```python\nECSTask(command=[\"echo\", \"hello world\"], memory=4096, cpu=2048)\n```\n\nRun a task with custom environment variables\n```python\nECSTask(command=[\"echo\", \"hello $PLANET\"], env={\"PLANET\": \"earth\"})\n```\n\nRun a task in a specific ECS cluster\n```python\nECSTask(command=[\"echo\", \"hello world\"], cluster=\"my-cluster-name\")\n```\n\nRun a task with custom VPC subnets\n```python\nECSTask(\n    command=[\"echo\", \"hello world\"],\n    task_customizations=[\n        {\n            \"op\": \"add\",\n            \"path\": \"/networkConfiguration/awsvpcConfiguration/subnets\",\n            \"value\": [\"subnet-80b6fbcd\", \"subnet-42a6fdgd\"],\n        },\n    ]\n)\n```\n\nRun a task without a public IP assigned\n```python\nECSTask(\n    command=[\"echo\", \"hello world\"],\n    vpc_id=\"vpc-01abcdf123456789a\",\n    task_customizations=[\n        {\n            \"op\": \"replace\",\n            \"path\": \"/networkConfiguration/awsvpcConfiguration/assignPublicIp\",\n            \"value\": \"DISABLED\",\n        },\n    ]\n)\n```\n\nRun a task with custom VPC security groups\n```python\nECSTask(\n    command=[\"echo\", \"hello world\"],\n    vpc_id=\"vpc-01abcdf123456789a\",\n    task_customizations=[\n        {\n            \"op\": \"add\",\n            \"path\": \"/networkConfiguration/awsvpcConfiguration/securityGroups\",\n            \"value\": [\"sg-d72e9599956a084f5\"],\n        },\n    ],\n)\n```\n
        "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask","title":"ECSTask","text":"

        Bases: Infrastructure

        Run a command as an ECS task.

        Attributes:

        type (Literal['ecs-task']): The slug for this task type with a default value of \"ecs-task\".

        aws_credentials (AwsCredentials): The AWS credentials to use to connect to ECS with a default factory of AwsCredentials.

        task_definition_arn (Optional[str]): An optional identifier for an existing task definition to use. If fields are set on the ECSTask that conflict with the task definition, a new copy will be registered with the required values. Cannot be used with task_definition. If not provided, Prefect will generate and register a minimal task definition.

        task_definition (Optional[dict]): An optional ECS task definition to use. Prefect may set defaults or override fields on this task definition to match other ECSTask fields. Cannot be used with task_definition_arn. If not provided, Prefect will generate and register a minimal task definition.

        family (Optional[str]): An optional family for the task definition. If not provided, it will be inferred from the task definition. If the task definition does not have a family, the name will be generated. When flow and deployment metadata is available, the generated name will include their names. Values for this field will be slugified to match AWS character requirements.

        image (Optional[str]): An optional image to use for the Prefect container in the task. If this value is not null, it will override the value in the task definition. This value defaults to a Prefect base image matching your local versions.

        auto_deregister_task_definition (bool): A boolean that controls whether any task definitions created by this block will be deregistered. Existing task definitions linked by ARN will never be deregistered. Deregistering a task definition does not remove it from your AWS account; instead, it is marked as INACTIVE.

        cpu (int): The amount of CPU to provide to the ECS task. Valid amounts are specified in the AWS documentation. If not provided, a default value of ECS_DEFAULT_CPU will be used unless present on the task definition.

        memory (int): The amount of memory to provide to the ECS task. Valid amounts are specified in the AWS documentation. If not provided, a default value of ECS_DEFAULT_MEMORY will be used unless present on the task definition.

        execution_role_arn (str): An execution role to use for the task. This controls the permissions of the task when it is launching. If this value is not null, it will override the value in the task definition. An execution role must be provided to capture logs from the container.

        configure_cloudwatch_logs (bool): A boolean that controls whether the Prefect container will be configured to send its output to the AWS CloudWatch logs service. This functionality requires an execution role with permissions to create log streams and groups.

        cloudwatch_logs_options (Dict[str, str]): A dictionary of options to pass to the CloudWatch logs configuration.

        stream_output (bool): A boolean indicating whether logs will be streamed from the Prefect container to the local console.

        launch_type (Optional[Literal['FARGATE', 'EC2', 'EXTERNAL', 'FARGATE_SPOT']]): An optional launch type for the ECS task run infrastructure.

        vpc_id (Optional[str]): An optional VPC ID to link the task run to. This is only applicable when using the 'awsvpc' network mode for your task.

        cluster (Optional[str]): An optional ECS cluster to run the task in. The ARN or name may be provided. If not provided, the default cluster will be used.

        env (Dict[str, Optional[str]]): A dictionary of environment variables to provide to the task run. These variables are set on the Prefect container at task runtime.

        task_role_arn (str): An optional role to attach to the task run. This controls the permissions of the task while it is running.

        task_customizations (JsonPatch): A list of JSON 6902 patches to apply to the task run request. If a string is given, it will be parsed as a JSON expression.

        task_start_timeout_seconds (int): The amount of time to watch for the start of the ECS task before marking it as failed. The task must enter a RUNNING state to be considered started.

        task_watch_poll_interval (float): The amount of time to wait between AWS API calls while monitoring the state of an ECS task.

        Source code in prefect_aws/ecs.py
        @deprecated_class(\n    start_date=\"Mar 2024\",\n    help=(\n        \"Use the ECS worker instead.\"\n        \" Refer to the upgrade guide for more information:\"\n        \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\"\n    ),\n)\nclass ECSTask(Infrastructure):\n    \"\"\"\n    Run a command as an ECS task.\n\n    Attributes:\n        type: The slug for this task type with a default value of \"ecs-task\".\n        aws_credentials: The AWS credentials to use to connect to ECS with a\n            default factory of AwsCredentials.\n        task_definition_arn: An optional identifier for an existing task definition\n            to use. If fields are set on the ECSTask that conflict with the task\n            definition, a new copy will be registered with the required values.\n            Cannot be used with task_definition. If not provided, Prefect will\n            generate and register a minimal task definition.\n        task_definition: An optional ECS task definition to use. Prefect may set\n            defaults or override fields on this task definition to match other\n            ECSTask fields. Cannot be used with task_definition_arn.\n            If not provided, Prefect will generate and register\n            a minimal task definition.\n        family: An optional family for the task definition. If not provided,\n            it will be inferred from the task definition. If the task definition\n            does not have a family, the name will be generated. When flow and\n            deployment metadata is available, the generated name will include\n            their names. Values for this field will be slugified to match\n            AWS character requirements.\n        image: An optional image to use for the Prefect container in the task.\n            If this value is not null, it will override the value in the task\n            definition. This value defaults to a Prefect base image matching\n            your local versions.\n        auto_deregister_task_definition: A boolean that controls if any task\n            definitions that are created by this block will be deregistered\n            or not. Existing task definitions linked by ARN will never be\n            deregistered. Deregistering a task definition does not remove\n            it from your AWS account, instead it will be marked as INACTIVE.\n        cpu: The amount of CPU to provide to the ECS task. Valid amounts are\n            specified in the AWS documentation. If not provided, a default\n            value of ECS_DEFAULT_CPU will be used unless present on\n            the task definition.\n        memory: The amount of memory to provide to the ECS task.\n            Valid amounts are specified in the AWS documentation.\n            If not provided, a default value of ECS_DEFAULT_MEMORY\n            will be used unless present on the task definition.\n        execution_role_arn: An execution role to use for the task.\n            This controls the permissions of the task when it is launching.\n            If this value is not null, it will override the value in the task\n            definition. An execution role must be provided to capture logs\n            from the container.\n        configure_cloudwatch_logs: A boolean that controls if the Prefect\n            container will be configured to send its output to the\n            AWS CloudWatch logs service or not. 
This functionality requires\n            an execution role with permissions to create log streams and groups.\n        cloudwatch_logs_options: A dictionary of options to pass to\n            the CloudWatch logs configuration.\n        stream_output: A boolean indicating whether logs will be\n            streamed from the Prefect container to the local console.\n        launch_type: An optional launch type for the ECS task run infrastructure.\n        vpc_id: An optional VPC ID to link the task run to.\n            This is only applicable when using the 'awsvpc' network mode for your task.\n        cluster: An optional ECS cluster to run the task in.\n            The ARN or name may be provided. If not provided,\n            the default cluster will be used.\n        env: A dictionary of environment variables to provide to\n            the task run. These variables are set on the\n            Prefect container at task runtime.\n        task_role_arn: An optional role to attach to the task run.\n            This controls the permissions of the task while it is running.\n        task_customizations: A list of JSON 6902 patches to apply to the task\n            run request. If a string is given, it will parsed as a JSON expression.\n        task_start_timeout_seconds: The amount of time to watch for the\n            start of the ECS task before marking it as failed. The task must\n            enter a RUNNING state to be considered started.\n        task_watch_poll_interval: The amount of time to wait between AWS API\n            calls while monitoring the state of an ECS task.\n    \"\"\"\n\n    _block_type_slug = \"ecs-task\"\n    _block_type_name = \"ECS Task\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/d74b16fe84ce626345adf235a47008fea2869a60-225x225.png\"  # noqa\n    _description = \"Run a command as an ECS task.\"  # noqa\n    _documentation_url = (\n        \"https://prefecthq.github.io/prefect-aws/ecs/#prefect_aws.ecs.ECSTask\"  # noqa\n    )\n\n    type: Literal[\"ecs-task\"] = Field(\n        \"ecs-task\", description=\"The slug for this task type.\"\n    )\n\n    aws_credentials: AwsCredentials = Field(\n        title=\"AWS Credentials\",\n        default_factory=AwsCredentials,\n        description=\"The AWS credentials to use to connect to ECS.\",\n    )\n\n    # Task definition settings\n    task_definition_arn: Optional[str] = Field(\n        default=None,\n        description=(\n            \"An identifier for an existing task definition to use. If fields are set \"\n            \"on the `ECSTask` that conflict with the task definition, a new copy \"\n            \"will be registered with the required values. \"\n            \"Cannot be used with `task_definition`. If not provided, Prefect will \"\n            \"generate and register a minimal task definition.\"\n        ),\n    )\n    task_definition: Optional[dict] = Field(\n        default=None,\n        description=(\n            \"An ECS task definition to use. Prefect may set defaults or override \"\n            \"fields on this task definition to match other `ECSTask` fields. \"\n            \"Cannot be used with `task_definition_arn`. If not provided, Prefect will \"\n            \"generate and register a minimal task definition.\"\n        ),\n    )\n    family: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A family for the task definition. If not provided, it will be inferred \"\n            \"from the task definition. 
If the task definition does not have a family, \"\n            \"the name will be generated. When flow and deployment metadata is \"\n            \"available, the generated name will include their names. Values for this \"\n            \"field will be slugified to match AWS character requirements.\"\n        ),\n    )\n    image: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The image to use for the Prefect container in the task. If this value is \"\n            \"not null, it will override the value in the task definition. This value \"\n            \"defaults to a Prefect base image matching your local versions.\"\n        ),\n    )\n    auto_deregister_task_definition: bool = Field(\n        default=True,\n        description=(\n            \"If set, any task definitions that are created by this block will be \"\n            \"deregistered. Existing task definitions linked by ARN will never be \"\n            \"deregistered. Deregistering a task definition does not remove it from \"\n            \"your AWS account, instead it will be marked as INACTIVE.\"\n        ),\n    )\n\n    # Mixed task definition / run settings\n    cpu: int = Field(\n        title=\"CPU\",\n        default=None,\n        description=(\n            \"The amount of CPU to provide to the ECS task. Valid amounts are \"\n            \"specified in the AWS documentation. If not provided, a default value of \"\n            f\"{ECS_DEFAULT_CPU} will be used unless present on the task definition.\"\n        ),\n    )\n    memory: int = Field(\n        default=None,\n        description=(\n            \"The amount of memory to provide to the ECS task. Valid amounts are \"\n            \"specified in the AWS documentation. If not provided, a default value of \"\n            f\"{ECS_DEFAULT_MEMORY} will be used unless present on the task definition.\"\n        ),\n    )\n    execution_role_arn: str = Field(\n        title=\"Execution Role ARN\",\n        default=None,\n        description=(\n            \"An execution role to use for the task. This controls the permissions of \"\n            \"the task when it is launching. If this value is not null, it will \"\n            \"override the value in the task definition. An execution role must be \"\n            \"provided to capture logs from the container.\"\n        ),\n    )\n    configure_cloudwatch_logs: bool = Field(\n        default=None,\n        description=(\n            \"If `True`, the Prefect container will be configured to send its output \"\n            \"to the AWS CloudWatch logs service. This functionality requires an \"\n            \"execution role with logs:CreateLogStream, logs:CreateLogGroup, and \"\n            \"logs:PutLogEvents permissions. The default for this field is `False` \"\n            \"unless `stream_output` is set.\"\n        ),\n    )\n    cloudwatch_logs_options: Dict[str, str] = Field(\n        default_factory=dict,\n        description=(\n            \"When `configure_cloudwatch_logs` is enabled, this setting may be used to \"\n            \"pass additional options to the CloudWatch logs configuration or override \"\n            \"the default options. See the AWS documentation for available options. 
\"\n            \"https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html#create_awslogs_logdriver_options.\"  # noqa\n        ),\n    )\n    stream_output: bool = Field(\n        default=None,\n        description=(\n            \"If `True`, logs will be streamed from the Prefect container to the local \"\n            \"console. Unless you have configured AWS CloudWatch logs manually on your \"\n            \"task definition, this requires the same prerequisites outlined in \"\n            \"`configure_cloudwatch_logs`.\"\n        ),\n    )\n\n    # Task run settings\n    launch_type: Optional[\n        Literal[\"FARGATE\", \"EC2\", \"EXTERNAL\", \"FARGATE_SPOT\"]\n    ] = Field(\n        default=\"FARGATE\",\n        description=(\n            \"The type of ECS task run infrastructure that should be used. Note that\"\n            \" 'FARGATE_SPOT' is not a formal ECS launch type, but we will configure\"\n            \" the proper capacity provider strategy if set here.\"\n        ),\n    )\n    vpc_id: Optional[str] = Field(\n        title=\"VPC ID\",\n        default=None,\n        description=(\n            \"The AWS VPC to link the task run to. This is only applicable when using \"\n            \"the 'awsvpc' network mode for your task. FARGATE tasks require this \"\n            \"network  mode, but for EC2 tasks the default network mode is 'bridge'. \"\n            \"If using the 'awsvpc' network mode and this field is null, your default \"\n            \"VPC will be used. If no default VPC can be found, the task run will fail.\"\n        ),\n    )\n    cluster: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The ECS cluster to run the task in. The ARN or name may be provided. If \"\n            \"not provided, the default cluster will be used.\"\n        ),\n    )\n    env: Dict[str, Optional[str]] = Field(\n        title=\"Environment Variables\",\n        default_factory=dict,\n        description=(\n            \"Environment variables to provide to the task run. These variables are set \"\n            \"on the Prefect container at task runtime. These will not be set on the \"\n            \"task definition.\"\n        ),\n    )\n    task_role_arn: str = Field(\n        title=\"Task Role ARN\",\n        default=None,\n        description=(\n            \"A role to attach to the task run. This controls the permissions of the \"\n            \"task while it is running.\"\n        ),\n    )\n    task_customizations: JsonPatch = Field(\n        default_factory=lambda: JsonPatch([]),\n        description=(\n            \"A list of JSON 6902 patches to apply to the task run request. \"\n            \"If a string is given, it will parsed as a JSON expression.\"\n        ),\n    )\n\n    # Execution settings\n    task_start_timeout_seconds: int = Field(\n        default=120,\n        description=(\n            \"The amount of time to watch for the start of the ECS task \"\n            \"before marking it as failed. 
The task must enter a RUNNING state to be \"\n            \"considered started.\"\n        ),\n    )\n    task_watch_poll_interval: float = Field(\n        default=5.0,\n        description=(\n            \"The amount of time to wait between AWS API calls while monitoring the \"\n            \"state of an ECS task.\"\n        ),\n    )\n\n    @root_validator(pre=True)\n    def set_default_configure_cloudwatch_logs(cls, values: dict) -> dict:\n        \"\"\"\n        Streaming output generally requires CloudWatch logs to be configured.\n\n        To avoid entangled arguments in the simple case, `configure_cloudwatch_logs`\n        defaults to matching the value of `stream_output`.\n        \"\"\"\n        configure_cloudwatch_logs = values.get(\"configure_cloudwatch_logs\")\n        if configure_cloudwatch_logs is None:\n            values[\"configure_cloudwatch_logs\"] = values.get(\"stream_output\")\n        return values\n\n    @root_validator\n    def configure_cloudwatch_logs_requires_execution_role_arn(\n        cls, values: dict\n    ) -> dict:\n        \"\"\"\n        Enforces that an execution role arn is provided (or could be provided by a\n        runtime task definition) when configuring logging.\n        \"\"\"\n        if (\n            values.get(\"configure_cloudwatch_logs\")\n            and not values.get(\"execution_role_arn\")\n            # Do not raise if they've linked to another task definition or provided\n            # it without using our shortcuts\n            and not values.get(\"task_definition_arn\")\n            and not (values.get(\"task_definition\") or {}).get(\"executionRoleArn\")\n        ):\n            raise ValueError(\n                \"An `execution_role_arn` must be provided to use \"\n                \"`configure_cloudwatch_logs` or `stream_logs`.\"\n            )\n        return values\n\n    @root_validator\n    def cloudwatch_logs_options_requires_configure_cloudwatch_logs(\n        cls, values: dict\n    ) -> dict:\n        \"\"\"\n        Enforces that an execution role arn is provided (or could be provided by a\n        runtime task definition) when configuring logging.\n        \"\"\"\n        if values.get(\"cloudwatch_logs_options\") and not values.get(\n            \"configure_cloudwatch_logs\"\n        ):\n            raise ValueError(\n                \"`configure_cloudwatch_log` must be enabled to use \"\n                \"`cloudwatch_logs_options`.\"\n            )\n        return values\n\n    @root_validator(pre=True)\n    def image_is_required(cls, values: dict) -> dict:\n        \"\"\"\n        Enforces that an image is available if image is `None`.\n        \"\"\"\n        has_image = bool(values.get(\"image\"))\n        has_task_definition_arn = bool(values.get(\"task_definition_arn\"))\n\n        # The image can only be null when the task_definition_arn is set\n        if has_image or has_task_definition_arn:\n            return values\n\n        prefect_container = (\n            get_prefect_container(\n                (values.get(\"task_definition\") or {}).get(\"containerDefinitions\", [])\n            )\n            or {}\n        )\n        image_in_task_definition = prefect_container.get(\"image\")\n\n        # If a task_definition is given with a prefect container image, use that value\n        if image_in_task_definition:\n            values[\"image\"] = image_in_task_definition\n        # Otherwise, it should default to the Prefect base image\n        else:\n            values[\"image\"] = get_prefect_image_name()\n   
     return values\n\n    @validator(\"task_customizations\", pre=True)\n    def cast_customizations_to_a_json_patch(\n        cls, value: Union[List[Dict], JsonPatch, str]\n    ) -> JsonPatch:\n        \"\"\"\n        Casts lists to JsonPatch instances.\n        \"\"\"\n        if isinstance(value, str):\n            value = json.loads(value)\n        if isinstance(value, list):\n            return JsonPatch(value)\n        return value  # type: ignore\n\n    class Config:\n        \"\"\"Configuration of pydantic.\"\"\"\n\n        # Support serialization of the 'JsonPatch' type\n        arbitrary_types_allowed = True\n        json_encoders = {JsonPatch: lambda p: p.patch}\n\n    def dict(self, *args, **kwargs) -> Dict:\n        \"\"\"\n        Convert to a dictionary.\n        \"\"\"\n        # Support serialization of the 'JsonPatch' type\n        d = super().dict(*args, **kwargs)\n        d[\"task_customizations\"] = self.task_customizations.patch\n        return d\n\n    def prepare_for_flow_run(\n        self: Self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"Deployment\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ) -> Self:\n        \"\"\"\n        Return an copy of the block that is prepared to execute a flow run.\n        \"\"\"\n        new_family = None\n\n        # Update the family if not specified elsewhere\n        if (\n            not self.family\n            and not self.task_definition_arn\n            and not (self.task_definition and self.task_definition.get(\"family\"))\n        ):\n            if flow and deployment:\n                new_family = f\"{ECS_DEFAULT_FAMILY}__{flow.name}__{deployment.name}\"\n            elif flow and not deployment:\n                new_family = f\"{ECS_DEFAULT_FAMILY}__{flow.name}\"\n            elif deployment and not flow:\n                # This is a weird case and should not be see in the wild\n                new_family = f\"{ECS_DEFAULT_FAMILY}__unknown-flow__{deployment.name}\"\n\n        new = super().prepare_for_flow_run(flow_run, deployment=deployment, flow=flow)\n\n        if new_family:\n            return new.copy(update={\"family\": new_family})\n        else:\n            # Avoid an extra copy if not needed\n            return new\n\n    @sync_compatible\n    async def run(self, task_status: Optional[TaskStatus] = None) -> ECSTaskResult:\n        \"\"\"\n        Run the configured task on ECS.\n        \"\"\"\n        boto_session, ecs_client = await run_sync_in_worker_thread(\n            self._get_session_and_client\n        )\n\n        (\n            task_arn,\n            cluster_arn,\n            task_definition,\n            is_new_task_definition,\n        ) = await run_sync_in_worker_thread(\n            self._create_task_and_wait_for_start, boto_session, ecs_client\n        )\n\n        # Display a nice message indicating the command and image\n        command = self.command or get_prefect_container(\n            task_definition[\"containerDefinitions\"]\n        ).get(\"command\", [])\n        self.logger.info(\n            f\"{self._log_prefix}: Running command {' '.join(command)!r} \"\n            f\"in container {PREFECT_ECS_CONTAINER_NAME!r} ({self.image})...\"\n        )\n\n        # The task identifier is \"{cluster}::{task}\" where we use the configured cluster\n        # if set to preserve matching by name rather than arn\n        # Note \"::\" is used despite the Prefect standard being \":\" because ARNs contain\n        # single colons.\n        identifier = 
(self.cluster if self.cluster else cluster_arn) + \"::\" + task_arn\n\n        if task_status:\n            task_status.started(identifier)\n\n        status_code = await run_sync_in_worker_thread(\n            self._watch_task_and_get_exit_code,\n            task_arn,\n            cluster_arn,\n            task_definition,\n            is_new_task_definition and self.auto_deregister_task_definition,\n            boto_session,\n            ecs_client,\n        )\n\n        return ECSTaskResult(\n            identifier=identifier,\n            # If the container does not start the exit code can be null but we must\n            # still report a status code. We use a -1 to indicate a special code.\n            status_code=status_code if status_code is not None else -1,\n        )\n\n    @sync_compatible\n    async def kill(self, identifier: str, grace_seconds: int = 30) -> None:\n        \"\"\"\n        Kill a task running on ECS.\n\n        Args:\n            identifier: A cluster and task arn combination. This should match a value\n                yielded by `ECSTask.run`.\n        \"\"\"\n        if grace_seconds != 30:\n            self.logger.warning(\n                f\"Kill grace period of {grace_seconds}s requested, but AWS does not \"\n                \"support dynamic grace period configuration so 30s will be used. \"\n                \"See https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html for configuration of grace periods.\"  # noqa\n            )\n        cluster, task = parse_task_identifier(identifier)\n        await run_sync_in_worker_thread(self._stop_task, cluster, task)\n\n    @staticmethod\n    def get_corresponding_worker_type() -> str:\n        \"\"\"Return the corresponding worker type for this infrastructure block.\"\"\"\n        return ECSWorker.type\n\n    async def generate_work_pool_base_job_template(self) -> dict:\n        \"\"\"\n        Generate a base job template for a cloud-run work pool with the same\n        configuration as this block.\n\n        Returns:\n            - dict: a base job template for a cloud-run work pool\n        \"\"\"\n        base_job_template = copy.deepcopy(ECSWorker.get_default_base_job_template())\n        for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n            if key == \"command\":\n                base_job_template[\"variables\"][\"properties\"][\"command\"][\n                    \"default\"\n                ] = shlex.join(value)\n            elif key in [\n                \"type\",\n                \"block_type_slug\",\n                \"_block_document_id\",\n                \"_block_document_name\",\n                \"_is_anonymous\",\n                \"task_customizations\",\n            ]:\n                continue\n            elif key == \"aws_credentials\":\n                if not self.aws_credentials._block_document_id:\n                    raise BlockNotSavedError(\n                        \"It looks like you are trying to use a block that\"\n                        \" has not been saved. 
Please call `.save` on your block\"\n                        \" before publishing it as a work pool.\"\n                    )\n                base_job_template[\"variables\"][\"properties\"][\"aws_credentials\"][\n                    \"default\"\n                ] = {\n                    \"$ref\": {\n                        \"block_document_id\": str(\n                            self.aws_credentials._block_document_id\n                        )\n                    }\n                }\n            elif key == \"task_definition\":\n                base_job_template[\"job_configuration\"][\"task_definition\"] = value\n            elif key in base_job_template[\"variables\"][\"properties\"]:\n                base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n            else:\n                self.logger.warning(\n                    f\"Variable {key!r} is not supported by Cloud Run work pools.\"\n                    \" Skipping.\"\n                )\n\n        if self.task_customizations:\n            network_config_patches = JsonPatch(\n                [\n                    patch\n                    for patch in self.task_customizations\n                    if \"networkConfiguration\" in patch[\"path\"]\n                ]\n            )\n            minimal_network_config = assemble_document_for_patches(\n                network_config_patches\n            )\n            if minimal_network_config:\n                minimal_network_config_with_patches = network_config_patches.apply(\n                    minimal_network_config\n                )\n                base_job_template[\"variables\"][\"properties\"][\"network_configuration\"][\n                    \"default\"\n                ] = minimal_network_config_with_patches[\"networkConfiguration\"]\n            try:\n                base_job_template[\"job_configuration\"][\n                    \"task_run_request\"\n                ] = self.task_customizations.apply(\n                    base_job_template[\"job_configuration\"][\"task_run_request\"]\n                )\n            except JsonPointerException:\n                self.logger.warning(\n                    \"Unable to apply task customizations to the base job template.\"\n                    \"You may need to update the template manually.\"\n                )\n\n        return base_job_template\n\n    def _stop_task(self, cluster: str, task: str) -> None:\n        \"\"\"\n        Stop a running ECS task.\n        \"\"\"\n        if self.cluster is not None and cluster != self.cluster:\n            raise InfrastructureNotAvailable(\n                \"Cannot stop ECS task: this infrastructure block has access to \"\n                f\"cluster {self.cluster!r} but the task is running in cluster \"\n                f\"{cluster!r}.\"\n            )\n\n        _, ecs_client = self._get_session_and_client()\n        try:\n            ecs_client.stop_task(cluster=cluster, task=task)\n        except Exception as exc:\n            # Raise a special exception if the task does not exist\n            if \"ClusterNotFound\" in str(exc):\n                raise InfrastructureNotFound(\n                    f\"Cannot stop ECS task: the cluster {cluster!r} could not be found.\"\n                ) from exc\n            if \"not find task\" in str(exc) or \"referenced task was not found\" in str(\n                exc\n            ):\n                raise InfrastructureNotFound(\n                    f\"Cannot stop ECS task: the task {task!r} could not be found in \"\n   
                 f\"cluster {cluster!r}.\"\n                ) from exc\n            if \"no registered tasks\" in str(exc):\n                raise InfrastructureNotFound(\n                    f\"Cannot stop ECS task: the cluster {cluster!r} has no tasks.\"\n                ) from exc\n\n            # Reraise unknown exceptions\n            raise\n\n    @property\n    def _log_prefix(self) -> str:\n        \"\"\"\n        Internal property for generating a prefix for logs where `name` may be null\n        \"\"\"\n        if self.name is not None:\n            return f\"ECSTask {self.name!r}\"\n        else:\n            return \"ECSTask\"\n\n    def _get_session_and_client(self) -> Tuple[boto3.Session, _ECSClient]:\n        \"\"\"\n        Retrieve a boto3 session and ECS client\n        \"\"\"\n        boto_session = self.aws_credentials.get_boto3_session()\n        ecs_client = boto_session.client(\"ecs\")\n        return boto_session, ecs_client\n\n    def _create_task_and_wait_for_start(\n        self, boto_session: boto3.Session, ecs_client: _ECSClient\n    ) -> Tuple[str, str, dict, bool]:\n        \"\"\"\n        Register the task definition, create the task run, and wait for it to start.\n\n        Returns a tuple of\n        - The task ARN\n        - The task's cluster ARN\n        - The task definition\n        - A bool indicating if the task definition is newly registered\n        \"\"\"\n        new_task_definition_registered = False\n        requested_task_definition = (\n            self._retrieve_task_definition(ecs_client, self.task_definition_arn)\n            if self.task_definition_arn\n            else self.task_definition\n        ) or {}\n        task_definition_arn = requested_task_definition.get(\"taskDefinitionArn\", None)\n\n        task_definition = self._prepare_task_definition(\n            requested_task_definition, region=ecs_client.meta.region_name\n        )\n\n        # We must register the task definition if the arn is null or changes were made\n        if task_definition != requested_task_definition or not task_definition_arn:\n            # Before registering, check if the latest task definition in the family\n            # can be used\n            latest_task_definition = self._retrieve_latest_task_definition(\n                ecs_client, task_definition[\"family\"]\n            )\n            if self._task_definitions_equal(latest_task_definition, task_definition):\n                self.logger.debug(\n                    f\"{self._log_prefix}: The latest task definition matches the \"\n                    \"required task definition; using that instead of registering a new \"\n                    \" one.\"\n                )\n                task_definition_arn = latest_task_definition[\"taskDefinitionArn\"]\n            else:\n                if task_definition_arn:\n                    self.logger.warning(\n                        f\"{self._log_prefix}: Settings require changes to the linked \"\n                        \"task definition. A new task definition will be registered. 
\"\n                        + (\n                            \"Enable DEBUG level logs to see the difference.\"\n                            if self.logger.level > logging.DEBUG\n                            else \"\"\n                        )\n                    )\n                    self.logger.debug(\n                        f\"{self._log_prefix}: Diff for requested task definition\"\n                        + _pretty_diff(requested_task_definition, task_definition)\n                    )\n                else:\n                    self.logger.info(\n                        f\"{self._log_prefix}: Registering task definition...\"\n                    )\n                    self.logger.debug(\n                        \"Task definition payload\\n\" + yaml.dump(task_definition)\n                    )\n\n                task_definition_arn = self._register_task_definition(\n                    ecs_client, task_definition\n                )\n                new_task_definition_registered = True\n\n        if task_definition.get(\"networkMode\") == \"awsvpc\":\n            network_config = self._load_vpc_network_config(self.vpc_id, boto_session)\n        else:\n            network_config = None\n\n        task_run = self._prepare_task_run(\n            network_config=network_config,\n            task_definition_arn=task_definition_arn,\n        )\n        self.logger.info(f\"{self._log_prefix}: Creating task run...\")\n        self.logger.debug(\"Task run payload\\n\" + yaml.dump(task_run))\n\n        try:\n            task = self._run_task(ecs_client, task_run)\n            task_arn = task[\"taskArn\"]\n            cluster_arn = task[\"clusterArn\"]\n        except Exception as exc:\n            self._report_task_run_creation_failure(task_run, exc)\n\n        # Raises an exception if the task does not start\n        self.logger.info(f\"{self._log_prefix}: Waiting for task run to start...\")\n        self._wait_for_task_start(\n            task_arn, cluster_arn, ecs_client, timeout=self.task_start_timeout_seconds\n        )\n\n        return task_arn, cluster_arn, task_definition, new_task_definition_registered\n\n    def _watch_task_and_get_exit_code(\n        self,\n        task_arn: str,\n        cluster_arn: str,\n        task_definition: dict,\n        deregister_task_definition: bool,\n        boto_session: boto3.Session,\n        ecs_client: _ECSClient,\n    ) -> Optional[int]:\n        \"\"\"\n        Wait for the task run to complete and retrieve the exit code of the Prefect\n        container.\n        \"\"\"\n\n        # Wait for completion and stream logs\n        task = self._wait_for_task_finish(\n            task_arn, cluster_arn, task_definition, ecs_client, boto_session\n        )\n\n        if deregister_task_definition:\n            ecs_client.deregister_task_definition(\n                taskDefinition=task[\"taskDefinitionArn\"]\n            )\n\n        # Check the status code of the Prefect container\n        prefect_container = get_prefect_container(task[\"containers\"])\n        assert (\n            prefect_container is not None\n        ), f\"'prefect' container missing from task: {task}\"\n        status_code = prefect_container.get(\"exitCode\")\n        self._report_container_status_code(PREFECT_ECS_CONTAINER_NAME, status_code)\n\n        return status_code\n\n    def _task_definitions_equal(self, taskdef_1, taskdef_2) -> bool:\n        \"\"\"\n        Compare two task definitions.\n\n        Since one may come from the AWS API and have populated defaults, we do 
our best\n        to homogenize the definitions without changing their meaning.\n        \"\"\"\n        if taskdef_1 == taskdef_2:\n            return True\n\n        if taskdef_1 is None or taskdef_2 is None:\n            return False\n\n        taskdef_1 = copy.deepcopy(taskdef_1)\n        taskdef_2 = copy.deepcopy(taskdef_2)\n\n        def _set_aws_defaults(taskdef):\n            \"\"\"Set defaults that AWS would set after registration\"\"\"\n            container_definitions = taskdef.get(\"containerDefinitions\", [])\n            essential = any(\n                container.get(\"essential\") for container in container_definitions\n            )\n            if not essential:\n                container_definitions[0].setdefault(\"essential\", True)\n\n            taskdef.setdefault(\"networkMode\", \"bridge\")\n\n        _set_aws_defaults(taskdef_1)\n        _set_aws_defaults(taskdef_2)\n\n        def _drop_empty_keys(dict_):\n            \"\"\"Recursively drop keys with 'empty' values\"\"\"\n            for key, value in tuple(dict_.items()):\n                if not value:\n                    dict_.pop(key)\n                if isinstance(value, dict):\n                    _drop_empty_keys(value)\n                if isinstance(value, list):\n                    for v in value:\n                        if isinstance(v, dict):\n                            _drop_empty_keys(v)\n\n        _drop_empty_keys(taskdef_1)\n        _drop_empty_keys(taskdef_2)\n\n        # Clear fields that change on registration for comparison\n        for field in POST_REGISTRATION_FIELDS:\n            taskdef_1.pop(field, None)\n            taskdef_2.pop(field, None)\n\n        return taskdef_1 == taskdef_2\n\n    def preview(self) -> str:\n        \"\"\"\n        Generate a preview of the task definition and task run that will be sent to AWS.\n        \"\"\"\n        preview = \"\"\n\n        task_definition_arn = self.task_definition_arn or \"<registered at runtime>\"\n\n        if self.task_definition or not self.task_definition_arn:\n            task_definition = self._prepare_task_definition(\n                self.task_definition or {},\n                region=self.aws_credentials.region_name\n                or \"<loaded from client at runtime>\",\n            )\n            preview += \"---\\n# Task definition\\n\"\n            preview += yaml.dump(task_definition)\n            preview += \"\\n\"\n        else:\n            task_definition = None\n\n        if task_definition and task_definition.get(\"networkMode\") == \"awsvpc\":\n            vpc = \"the default VPC\" if not self.vpc_id else self.vpc_id\n            network_config = {\n                \"awsvpcConfiguration\": {\n                    \"subnets\": f\"<loaded from {vpc} at runtime>\",\n                    \"assignPublicIp\": \"ENABLED\",\n                }\n            }\n        else:\n            network_config = None\n\n        task_run = self._prepare_task_run(network_config, task_definition_arn)\n        preview += \"---\\n# Task run request\\n\"\n        preview += yaml.dump(task_run)\n\n        return preview\n\n    def _report_container_status_code(\n        self, name: str, status_code: Optional[int]\n    ) -> None:\n        \"\"\"\n        Display a log for the given container status code.\n        \"\"\"\n        if status_code is None:\n            self.logger.error(\n                f\"{self._log_prefix}: Task exited without reporting an exit status \"\n                f\"for container {name!r}.\"\n            )\n        elif 
status_code == 0:\n            self.logger.info(\n                f\"{self._log_prefix}: Container {name!r} exited successfully.\"\n            )\n        else:\n            self.logger.warning(\n                f\"{self._log_prefix}: Container {name!r} exited with non-zero exit \"\n                f\"code {status_code}.\"\n            )\n\n    def _report_task_run_creation_failure(self, task_run: dict, exc: Exception) -> None:\n        \"\"\"\n        Wrap common AWS task run creation failures with nicer user-facing messages.\n        \"\"\"\n        # AWS generates exception types at runtime so they must be captured a bit\n        # differently than normal.\n        if \"ClusterNotFoundException\" in str(exc):\n            cluster = task_run.get(\"cluster\", \"default\")\n            raise RuntimeError(\n                f\"Failed to run ECS task, cluster {cluster!r} not found. \"\n                \"Confirm that the cluster is configured in your region.\"\n            ) from exc\n        elif \"No Container Instances\" in str(exc) and self.launch_type == \"EC2\":\n            cluster = task_run.get(\"cluster\", \"default\")\n            raise RuntimeError(\n                f\"Failed to run ECS task, cluster {cluster!r} does not appear to \"\n                \"have any container instances associated with it. Confirm that you \"\n                \"have EC2 container instances available.\"\n            ) from exc\n        elif (\n            \"failed to validate logger args\" in str(exc)\n            and \"AccessDeniedException\" in str(exc)\n            and self.configure_cloudwatch_logs\n        ):\n            raise RuntimeError(\n                \"Failed to run ECS task, the attached execution role does not appear \"\n                \"to have sufficient permissions. Ensure that the execution role \"\n                f\"{self.execution_role!r} has permissions logs:CreateLogStream, \"\n                \"logs:CreateLogGroup, and logs:PutLogEvents.\"\n            )\n        else:\n            raise\n\n    def _watch_task_run(\n        self,\n        task_arn: str,\n        cluster_arn: str,\n        ecs_client: _ECSClient,\n        current_status: str = \"UNKNOWN\",\n        until_status: str = None,\n        timeout: int = None,\n    ) -> Generator[None, None, dict]:\n        \"\"\"\n        Watches an ECS task run by querying every `poll_interval` seconds. After each\n        query, the retrieved task is yielded. This function returns when the task run\n        reaches a STOPPED status or the provided `until_status`.\n\n        Emits a log each time the status changes.\n        \"\"\"\n        last_status = status = current_status\n        t0 = time.time()\n        while status != until_status:\n            tasks = ecs_client.describe_tasks(\n                tasks=[task_arn], cluster=cluster_arn, include=[\"TAGS\"]\n            )[\"tasks\"]\n\n            if tasks:\n                task = tasks[0]\n\n                status = task[\"lastStatus\"]\n                if status != last_status:\n                    self.logger.info(f\"{self._log_prefix}: Status is {status}.\")\n\n                yield task\n\n                # No point in continuing if the status is final\n                if status == \"STOPPED\":\n                    break\n\n                last_status = status\n\n            else:\n                # Intermittently, the task will not be described. 
We wat to respect the\n                # watch timeout though.\n                self.logger.debug(f\"{self._log_prefix}: Task not found.\")\n\n            elapsed_time = time.time() - t0\n            if timeout is not None and elapsed_time > timeout:\n                raise RuntimeError(\n                    f\"Timed out after {elapsed_time}s while watching task for status \"\n                    f\"{until_status or 'STOPPED'}\"\n                )\n            time.sleep(self.task_watch_poll_interval)\n\n    def _wait_for_task_start(\n        self, task_arn: str, cluster_arn: str, ecs_client: _ECSClient, timeout: int\n    ) -> dict:\n        \"\"\"\n        Waits for an ECS task run to reach a RUNNING status.\n\n        If a STOPPED status is reached instead, an exception is raised indicating the\n        reason that the task run did not start.\n        \"\"\"\n        for task in self._watch_task_run(\n            task_arn, cluster_arn, ecs_client, until_status=\"RUNNING\", timeout=timeout\n        ):\n            # TODO: It is possible that the task has passed _through_ a RUNNING\n            #       status during the polling interval. In this case, there is not an\n            #       exception to raise.\n            if task[\"lastStatus\"] == \"STOPPED\":\n                code = task.get(\"stopCode\")\n                reason = task.get(\"stoppedReason\")\n                # Generate a dynamic exception type from the AWS name\n                raise type(code, (RuntimeError,), {})(reason)\n\n        return task\n\n    def _wait_for_task_finish(\n        self,\n        task_arn: str,\n        cluster_arn: str,\n        task_definition: dict,\n        ecs_client: _ECSClient,\n        boto_session: boto3.Session,\n    ):\n        \"\"\"\n        Watch an ECS task until it reaches a STOPPED status.\n\n        If configured, logs from the Prefect container are streamed to stderr.\n\n        Returns a description of the task on completion.\n        \"\"\"\n        can_stream_output = False\n\n        if self.stream_output:\n            container_def = get_prefect_container(\n                task_definition[\"containerDefinitions\"]\n            )\n            if not container_def:\n                self.logger.warning(\n                    f\"{self._log_prefix}: Prefect container definition not found in \"\n                    \"task definition. Output cannot be streamed.\"\n                )\n            elif not container_def.get(\"logConfiguration\"):\n                self.logger.warning(\n                    f\"{self._log_prefix}: Logging configuration not found on task. \"\n                    \"Output cannot be streamed.\"\n                )\n            elif not container_def[\"logConfiguration\"].get(\"logDriver\") == \"awslogs\":\n                self.logger.warning(\n                    f\"{self._log_prefix}: Logging configuration uses unsupported \"\n                    \" driver {container_def['logConfiguration'].get('logDriver')!r}. 
\"\n                    \"Output cannot be streamed.\"\n                )\n            else:\n                # Prepare to stream the output\n                log_config = container_def[\"logConfiguration\"][\"options\"]\n                logs_client = boto_session.client(\"logs\")\n                can_stream_output = True\n                # Track the last log timestamp to prevent double display\n                last_log_timestamp: Optional[int] = None\n                # Determine the name of the stream as \"prefix/container/run-id\"\n                stream_name = \"/\".join(\n                    [\n                        log_config[\"awslogs-stream-prefix\"],\n                        PREFECT_ECS_CONTAINER_NAME,\n                        task_arn.rsplit(\"/\")[-1],\n                    ]\n                )\n                self.logger.info(\n                    f\"{self._log_prefix}: Streaming output from container \"\n                    f\"{PREFECT_ECS_CONTAINER_NAME!r}...\"\n                )\n\n        for task in self._watch_task_run(\n            task_arn, cluster_arn, ecs_client, current_status=\"RUNNING\"\n        ):\n            if self.stream_output and can_stream_output:\n                # On each poll for task run status, also retrieve available logs\n                last_log_timestamp = self._stream_available_logs(\n                    logs_client,\n                    log_group=log_config[\"awslogs-group\"],\n                    log_stream=stream_name,\n                    last_log_timestamp=last_log_timestamp,\n                )\n\n        return task\n\n    def _stream_available_logs(\n        self,\n        logs_client: Any,\n        log_group: str,\n        log_stream: str,\n        last_log_timestamp: Optional[int] = None,\n    ) -> Optional[int]:\n        \"\"\"\n        Stream logs from the given log group and stream since the last log timestamp.\n\n        Will continue on paginated responses until all logs are returned.\n\n        Returns the last log timestamp which can be used to call this method in the\n        future.\n        \"\"\"\n        last_log_stream_token = \"NO-TOKEN\"\n        next_log_stream_token = None\n\n        # AWS will return the same token that we send once the end of the paginated\n        # response is reached\n        while last_log_stream_token != next_log_stream_token:\n            last_log_stream_token = next_log_stream_token\n\n            request = {\n                \"logGroupName\": log_group,\n                \"logStreamName\": log_stream,\n            }\n\n            if last_log_stream_token is not None:\n                request[\"nextToken\"] = last_log_stream_token\n\n            if last_log_timestamp is not None:\n                # Bump the timestamp by one ms to avoid retrieving the last log again\n                request[\"startTime\"] = last_log_timestamp + 1\n\n            try:\n                response = logs_client.get_log_events(**request)\n            except Exception:\n                self.logger.error(\n                    (\n                        f\"{self._log_prefix}: Failed to read log events with request \"\n                        f\"{request}\"\n                    ),\n                    exc_info=True,\n                )\n                return last_log_timestamp\n\n            log_events = response[\"events\"]\n            for log_event in log_events:\n                # TODO: This doesn't forward to the local logger, which can be\n                #       bad for customizing handling and understanding where 
the\n                #       log is coming from, but it avoid nesting logger information\n                #       when the content is output from a Prefect logger on the\n                #       running infrastructure\n                print(log_event[\"message\"], file=sys.stderr)\n\n                if (\n                    last_log_timestamp is None\n                    or log_event[\"timestamp\"] > last_log_timestamp\n                ):\n                    last_log_timestamp = log_event[\"timestamp\"]\n\n            next_log_stream_token = response.get(\"nextForwardToken\")\n            if not log_events:\n                # Stop reading pages if there was no data\n                break\n\n        return last_log_timestamp\n\n    def _retrieve_latest_task_definition(\n        self, ecs_client: _ECSClient, task_definition_family: str\n    ) -> Optional[dict]:\n        try:\n            latest_task_definition = self._retrieve_task_definition(\n                ecs_client, task_definition_family\n            )\n        except Exception:\n            # The family does not exist...\n            return None\n\n        return latest_task_definition\n\n    def _retrieve_task_definition(\n        self, ecs_client: _ECSClient, task_definition_arn: str\n    ):\n        \"\"\"\n        Retrieve an existing task definition from AWS.\n        \"\"\"\n        self.logger.info(\n            f\"{self._log_prefix}: Retrieving task definition {task_definition_arn!r}...\"\n        )\n        response = ecs_client.describe_task_definition(\n            taskDefinition=task_definition_arn\n        )\n        return response[\"taskDefinition\"]\n\n    def _register_task_definition(\n        self, ecs_client: _ECSClient, task_definition: dict\n    ) -> str:\n        \"\"\"\n        Register a new task definition with AWS.\n        \"\"\"\n        # TODO: Consider including a global cache for this task definition since\n        #       registration of task definitions is frequently rate limited\n        task_definition_request = copy.deepcopy(task_definition)\n\n        # We need to remove some fields here if copying an existing task definition\n        for field in POST_REGISTRATION_FIELDS:\n            task_definition_request.pop(field, None)\n\n        response = ecs_client.register_task_definition(**task_definition_request)\n        return response[\"taskDefinition\"][\"taskDefinitionArn\"]\n\n    def _prepare_task_definition(self, task_definition: dict, region: str) -> dict:\n        \"\"\"\n        Prepare a task definition by inferring any defaults and merging overrides.\n        \"\"\"\n        task_definition = copy.deepcopy(task_definition)\n\n        # Configure the Prefect runtime container\n        task_definition.setdefault(\"containerDefinitions\", [])\n        container = get_prefect_container(task_definition[\"containerDefinitions\"])\n        if container is None:\n            container = {\"name\": PREFECT_ECS_CONTAINER_NAME}\n            task_definition[\"containerDefinitions\"].append(container)\n\n        if self.image:\n            container[\"image\"] = self.image\n\n        # Remove any keys that have been explicitly \"unset\"\n        unset_keys = {key for key, value in self.env.items() if value is None}\n        for item in tuple(container.get(\"environment\", [])):\n            if item[\"name\"] in unset_keys:\n                container[\"environment\"].remove(item)\n\n        if self.configure_cloudwatch_logs:\n            container[\"logConfiguration\"] = {\n                
\"logDriver\": \"awslogs\",\n                \"options\": {\n                    \"awslogs-create-group\": \"true\",\n                    \"awslogs-group\": \"prefect\",\n                    \"awslogs-region\": region,\n                    \"awslogs-stream-prefix\": self.name or \"prefect\",\n                    **self.cloudwatch_logs_options,\n                },\n            }\n\n        family = self.family or task_definition.get(\"family\") or ECS_DEFAULT_FAMILY\n        task_definition[\"family\"] = slugify(\n            family,\n            max_length=255,\n            regex_pattern=r\"[^a-zA-Z0-9-_]+\",\n        )\n\n        # CPU and memory are required in some cases, retrieve the value to use\n        cpu = self.cpu or task_definition.get(\"cpu\") or ECS_DEFAULT_CPU\n        memory = self.memory or task_definition.get(\"memory\") or ECS_DEFAULT_MEMORY\n\n        if self.launch_type == \"FARGATE\" or self.launch_type == \"FARGATE_SPOT\":\n            # Task level memory and cpu are required when using fargate\n            task_definition[\"cpu\"] = str(cpu)\n            task_definition[\"memory\"] = str(memory)\n\n            # The FARGATE compatibility is required if it will be used as as launch type\n            requires_compatibilities = task_definition.setdefault(\n                \"requiresCompatibilities\", []\n            )\n            if \"FARGATE\" not in requires_compatibilities:\n                task_definition[\"requiresCompatibilities\"].append(\"FARGATE\")\n\n            # Only the 'awsvpc' network mode is supported when using FARGATE\n            # However, we will not enforce that here if the user has set it\n            network_mode = task_definition.setdefault(\"networkMode\", \"awsvpc\")\n\n            if network_mode != \"awsvpc\":\n                warnings.warn(\n                    f\"Found network mode {network_mode!r} which is not compatible with \"\n                    f\"launch type {self.launch_type!r}. 
Use either the 'EC2' launch \"\n                    \"type or the 'awsvpc' network mode.\"\n                )\n\n        elif self.launch_type == \"EC2\":\n            # Container level memory and cpu are required when using ec2\n            container.setdefault(\"cpu\", int(cpu))\n            container.setdefault(\"memory\", int(memory))\n\n        if self.execution_role_arn and not self.task_definition_arn:\n            task_definition[\"executionRoleArn\"] = self.execution_role_arn\n\n        if self.configure_cloudwatch_logs and not task_definition.get(\n            \"executionRoleArn\"\n        ):\n            raise ValueError(\n                \"An execution role arn must be set on the task definition to use \"\n                \"`configure_cloudwatch_logs` or `stream_logs` but no execution role \"\n                \"was found on the task definition.\"\n            )\n\n        return task_definition\n\n    def _prepare_task_run_overrides(self) -> dict:\n        \"\"\"\n        Prepare the 'overrides' payload for a task run request.\n        \"\"\"\n        overrides = {\n            \"containerOverrides\": [\n                {\n                    \"name\": PREFECT_ECS_CONTAINER_NAME,\n                    \"environment\": [\n                        {\"name\": key, \"value\": value}\n                        for key, value in {\n                            **self._base_environment(),\n                            **self.env,\n                        }.items()\n                        if value is not None\n                    ],\n                }\n            ],\n        }\n\n        prefect_container_overrides = overrides[\"containerOverrides\"][0]\n\n        if self.command:\n            prefect_container_overrides[\"command\"] = self.command\n\n        if self.execution_role_arn:\n            overrides[\"executionRoleArn\"] = self.execution_role_arn\n\n        if self.task_role_arn:\n            overrides[\"taskRoleArn\"] = self.task_role_arn\n\n        if self.memory:\n            overrides[\"memory\"] = str(self.memory)\n            prefect_container_overrides.setdefault(\"memory\", self.memory)\n\n        if self.cpu:\n            overrides[\"cpu\"] = str(self.cpu)\n            prefect_container_overrides.setdefault(\"cpu\", self.cpu)\n\n        return overrides\n\n    def _load_vpc_network_config(\n        self, vpc_id: Optional[str], boto_session: boto3.Session\n    ) -> dict:\n        \"\"\"\n        Load settings from a specific VPC or the default VPC and generate a task\n        run request's network configuration.\n        \"\"\"\n        ec2_client = boto_session.client(\"ec2\")\n        vpc_message = \"the default VPC\" if not vpc_id else f\"VPC with ID {vpc_id}\"\n\n        if not vpc_id:\n            # Retrieve the default VPC\n            describe = {\"Filters\": [{\"Name\": \"isDefault\", \"Values\": [\"true\"]}]}\n        else:\n            describe = {\"VpcIds\": [vpc_id]}\n\n        vpcs = ec2_client.describe_vpcs(**describe)[\"Vpcs\"]\n        if not vpcs:\n            help_message = (\n                \"Pass an explicit `vpc_id` or configure a default VPC.\"\n                if not vpc_id\n                else \"Check that the VPC exists in the current region.\"\n            )\n            raise ValueError(\n                f\"Failed to find {vpc_message}. \"\n                \"Network configuration cannot be inferred. 
\" + help_message\n            )\n\n        vpc_id = vpcs[0][\"VpcId\"]\n        subnets = ec2_client.describe_subnets(\n            Filters=[{\"Name\": \"vpc-id\", \"Values\": [vpc_id]}]\n        )[\"Subnets\"]\n        if not subnets:\n            raise ValueError(\n                f\"Failed to find subnets for {vpc_message}. \"\n                \"Network configuration cannot be inferred.\"\n            )\n\n        return {\n            \"awsvpcConfiguration\": {\n                \"subnets\": [s[\"SubnetId\"] for s in subnets],\n                \"assignPublicIp\": \"ENABLED\",\n                \"securityGroups\": [],\n            }\n        }\n\n    def _prepare_task_run(\n        self,\n        network_config: Optional[dict],\n        task_definition_arn: str,\n    ) -> dict:\n        \"\"\"\n        Prepare a task run request payload.\n        \"\"\"\n        task_run = {\n            \"overrides\": self._prepare_task_run_overrides(),\n            \"tags\": [\n                {\n                    \"key\": slugify(\n                        key,\n                        regex_pattern=_TAG_REGEX,\n                        allow_unicode=True,\n                        lowercase=False,\n                    ),\n                    \"value\": slugify(\n                        value,\n                        regex_pattern=_TAG_REGEX,\n                        allow_unicode=True,\n                        lowercase=False,\n                    ),\n                }\n                for key, value in self.labels.items()\n            ],\n            \"taskDefinition\": task_definition_arn,\n        }\n\n        if self.cluster:\n            task_run[\"cluster\"] = self.cluster\n\n        if self.launch_type:\n            if self.launch_type == \"FARGATE_SPOT\":\n                task_run[\"capacityProviderStrategy\"] = [\n                    {\"capacityProvider\": \"FARGATE_SPOT\", \"weight\": 1}\n                ]\n            else:\n                task_run[\"launchType\"] = self.launch_type\n\n        if network_config:\n            task_run[\"networkConfiguration\"] = network_config\n\n        task_run = self.task_customizations.apply(task_run)\n        return task_run\n\n    def _run_task(self, ecs_client: _ECSClient, task_run: dict):\n        \"\"\"\n        Run the task using the ECS client.\n\n        This is isolated as a separate method for testing purposes.\n        \"\"\"\n        return ecs_client.run_task(**task_run)[\"tasks\"][0]\n
        "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.Config","title":"Config","text":"

        Configuration of pydantic.

        Source code in prefect_aws/ecs.py
        class Config:\n    \"\"\"Configuration of pydantic.\"\"\"\n\n    # Support serialization of the 'JsonPatch' type\n    arbitrary_types_allowed = True\n    json_encoders = {JsonPatch: lambda p: p.patch}\n
        "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.cast_customizations_to_a_json_patch","title":"cast_customizations_to_a_json_patch","text":"

        Casts lists to JsonPatch instances.

        Source code in prefect_aws/ecs.py
        @validator(\"task_customizations\", pre=True)\ndef cast_customizations_to_a_json_patch(\n    cls, value: Union[List[Dict], JsonPatch, str]\n) -> JsonPatch:\n    \"\"\"\n    Casts lists to JsonPatch instances.\n    \"\"\"\n    if isinstance(value, str):\n        value = json.loads(value)\n    if isinstance(value, list):\n        return JsonPatch(value)\n    return value  # type: ignore\n
        "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.cloudwatch_logs_options_requires_configure_cloudwatch_logs","title":"cloudwatch_logs_options_requires_configure_cloudwatch_logs","text":"

        Enforces that an execution role arn is provided (or could be provided by a runtime task definition) when configuring logging.

        Source code in prefect_aws/ecs.py
        @root_validator\ndef cloudwatch_logs_options_requires_configure_cloudwatch_logs(\n    cls, values: dict\n) -> dict:\n    \"\"\"\n    Enforces that an execution role arn is provided (or could be provided by a\n    runtime task definition) when configuring logging.\n    \"\"\"\n    if values.get(\"cloudwatch_logs_options\") and not values.get(\n        \"configure_cloudwatch_logs\"\n    ):\n        raise ValueError(\n            \"`configure_cloudwatch_log` must be enabled to use \"\n            \"`cloudwatch_logs_options`.\"\n        )\n    return values\n
        "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.configure_cloudwatch_logs_requires_execution_role_arn","title":"configure_cloudwatch_logs_requires_execution_role_arn","text":"

        Enforces that an execution role arn is provided (or could be provided by a runtime task definition) when configuring logging.

        Source code in prefect_aws/ecs.py
        @root_validator\ndef configure_cloudwatch_logs_requires_execution_role_arn(\n    cls, values: dict\n) -> dict:\n    \"\"\"\n    Enforces that an execution role arn is provided (or could be provided by a\n    runtime task definition) when configuring logging.\n    \"\"\"\n    if (\n        values.get(\"configure_cloudwatch_logs\")\n        and not values.get(\"execution_role_arn\")\n        # Do not raise if they've linked to another task definition or provided\n        # it without using our shortcuts\n        and not values.get(\"task_definition_arn\")\n        and not (values.get(\"task_definition\") or {}).get(\"executionRoleArn\")\n    ):\n        raise ValueError(\n            \"An `execution_role_arn` must be provided to use \"\n            \"`configure_cloudwatch_logs` or `stream_logs`.\"\n        )\n    return values\n
        "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.dict","title":"dict","text":"

        Convert to a dictionary.

        Source code in prefect_aws/ecs.py
        def dict(self, *args, **kwargs) -> Dict:\n    \"\"\"\n    Convert to a dictionary.\n    \"\"\"\n    # Support serialization of the 'JsonPatch' type\n    d = super().dict(*args, **kwargs)\n    d[\"task_customizations\"] = self.task_customizations.patch\n    return d\n
        "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.generate_work_pool_base_job_template","title":"generate_work_pool_base_job_template async","text":"

        Generate a base job template for a cloud-run work pool with the same configuration as this block.

        Returns:

        Type Description dict
        • dict: a base job template for a cloud-run work pool
        Source code in prefect_aws/ecs.py
        async def generate_work_pool_base_job_template(self) -> dict:\n    \"\"\"\n    Generate a base job template for a cloud-run work pool with the same\n    configuration as this block.\n\n    Returns:\n        - dict: a base job template for a cloud-run work pool\n    \"\"\"\n    base_job_template = copy.deepcopy(ECSWorker.get_default_base_job_template())\n    for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n        if key == \"command\":\n            base_job_template[\"variables\"][\"properties\"][\"command\"][\n                \"default\"\n            ] = shlex.join(value)\n        elif key in [\n            \"type\",\n            \"block_type_slug\",\n            \"_block_document_id\",\n            \"_block_document_name\",\n            \"_is_anonymous\",\n            \"task_customizations\",\n        ]:\n            continue\n        elif key == \"aws_credentials\":\n            if not self.aws_credentials._block_document_id:\n                raise BlockNotSavedError(\n                    \"It looks like you are trying to use a block that\"\n                    \" has not been saved. Please call `.save` on your block\"\n                    \" before publishing it as a work pool.\"\n                )\n            base_job_template[\"variables\"][\"properties\"][\"aws_credentials\"][\n                \"default\"\n            ] = {\n                \"$ref\": {\n                    \"block_document_id\": str(\n                        self.aws_credentials._block_document_id\n                    )\n                }\n            }\n        elif key == \"task_definition\":\n            base_job_template[\"job_configuration\"][\"task_definition\"] = value\n        elif key in base_job_template[\"variables\"][\"properties\"]:\n            base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n        else:\n            self.logger.warning(\n                f\"Variable {key!r} is not supported by Cloud Run work pools.\"\n                \" Skipping.\"\n            )\n\n    if self.task_customizations:\n        network_config_patches = JsonPatch(\n            [\n                patch\n                for patch in self.task_customizations\n                if \"networkConfiguration\" in patch[\"path\"]\n            ]\n        )\n        minimal_network_config = assemble_document_for_patches(\n            network_config_patches\n        )\n        if minimal_network_config:\n            minimal_network_config_with_patches = network_config_patches.apply(\n                minimal_network_config\n            )\n            base_job_template[\"variables\"][\"properties\"][\"network_configuration\"][\n                \"default\"\n            ] = minimal_network_config_with_patches[\"networkConfiguration\"]\n        try:\n            base_job_template[\"job_configuration\"][\n                \"task_run_request\"\n            ] = self.task_customizations.apply(\n                base_job_template[\"job_configuration\"][\"task_run_request\"]\n            )\n        except JsonPointerException:\n            self.logger.warning(\n                \"Unable to apply task customizations to the base job template.\"\n                \"You may need to update the template manually.\"\n            )\n\n    return base_job_template\n
        "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.get_corresponding_worker_type","title":"get_corresponding_worker_type staticmethod","text":"

        Return the corresponding worker type for this infrastructure block.

        Source code in prefect_aws/ecs.py
        @staticmethod\ndef get_corresponding_worker_type() -> str:\n    \"\"\"Return the corresponding worker type for this infrastructure block.\"\"\"\n    return ECSWorker.type\n
        "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.image_is_required","title":"image_is_required","text":"

        Enforces that an image is available if image is None.

        Source code in prefect_aws/ecs.py
        @root_validator(pre=True)\ndef image_is_required(cls, values: dict) -> dict:\n    \"\"\"\n    Enforces that an image is available if image is `None`.\n    \"\"\"\n    has_image = bool(values.get(\"image\"))\n    has_task_definition_arn = bool(values.get(\"task_definition_arn\"))\n\n    # The image can only be null when the task_definition_arn is set\n    if has_image or has_task_definition_arn:\n        return values\n\n    prefect_container = (\n        get_prefect_container(\n            (values.get(\"task_definition\") or {}).get(\"containerDefinitions\", [])\n        )\n        or {}\n    )\n    image_in_task_definition = prefect_container.get(\"image\")\n\n    # If a task_definition is given with a prefect container image, use that value\n    if image_in_task_definition:\n        values[\"image\"] = image_in_task_definition\n    # Otherwise, it should default to the Prefect base image\n    else:\n        values[\"image\"] = get_prefect_image_name()\n    return values\n
        "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.kill","title":"kill async","text":"

        Kill a task running on ECS.

        Parameters:

        Name Type Description Default identifier str

        A cluster and task arn combination. This should match a value yielded by ECSTask.run.

        required Source code in prefect_aws/ecs.py
        @sync_compatible\nasync def kill(self, identifier: str, grace_seconds: int = 30) -> None:\n    \"\"\"\n    Kill a task running on ECS.\n\n    Args:\n        identifier: A cluster and task arn combination. This should match a value\n            yielded by `ECSTask.run`.\n    \"\"\"\n    if grace_seconds != 30:\n        self.logger.warning(\n            f\"Kill grace period of {grace_seconds}s requested, but AWS does not \"\n            \"support dynamic grace period configuration so 30s will be used. \"\n            \"See https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html for configuration of grace periods.\"  # noqa\n        )\n    cluster, task = parse_task_identifier(identifier)\n    await run_sync_in_worker_thread(self._stop_task, cluster, task)\n
        "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.prepare_for_flow_run","title":"prepare_for_flow_run","text":"

        Return a copy of the block that is prepared to execute a flow run.

        Source code in prefect_aws/ecs.py
        def prepare_for_flow_run(\n    self: Self,\n    flow_run: \"FlowRun\",\n    deployment: Optional[\"Deployment\"] = None,\n    flow: Optional[\"Flow\"] = None,\n) -> Self:\n    \"\"\"\n    Return an copy of the block that is prepared to execute a flow run.\n    \"\"\"\n    new_family = None\n\n    # Update the family if not specified elsewhere\n    if (\n        not self.family\n        and not self.task_definition_arn\n        and not (self.task_definition and self.task_definition.get(\"family\"))\n    ):\n        if flow and deployment:\n            new_family = f\"{ECS_DEFAULT_FAMILY}__{flow.name}__{deployment.name}\"\n        elif flow and not deployment:\n            new_family = f\"{ECS_DEFAULT_FAMILY}__{flow.name}\"\n        elif deployment and not flow:\n            # This is a weird case and should not be see in the wild\n            new_family = f\"{ECS_DEFAULT_FAMILY}__unknown-flow__{deployment.name}\"\n\n    new = super().prepare_for_flow_run(flow_run, deployment=deployment, flow=flow)\n\n    if new_family:\n        return new.copy(update={\"family\": new_family})\n    else:\n        # Avoid an extra copy if not needed\n        return new\n
        "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.preview","title":"preview","text":"

        Generate a preview of the task definition and task run that will be sent to AWS.

        Source code in prefect_aws/ecs.py
        def preview(self) -> str:\n    \"\"\"\n    Generate a preview of the task definition and task run that will be sent to AWS.\n    \"\"\"\n    preview = \"\"\n\n    task_definition_arn = self.task_definition_arn or \"<registered at runtime>\"\n\n    if self.task_definition or not self.task_definition_arn:\n        task_definition = self._prepare_task_definition(\n            self.task_definition or {},\n            region=self.aws_credentials.region_name\n            or \"<loaded from client at runtime>\",\n        )\n        preview += \"---\\n# Task definition\\n\"\n        preview += yaml.dump(task_definition)\n        preview += \"\\n\"\n    else:\n        task_definition = None\n\n    if task_definition and task_definition.get(\"networkMode\") == \"awsvpc\":\n        vpc = \"the default VPC\" if not self.vpc_id else self.vpc_id\n        network_config = {\n            \"awsvpcConfiguration\": {\n                \"subnets\": f\"<loaded from {vpc} at runtime>\",\n                \"assignPublicIp\": \"ENABLED\",\n            }\n        }\n    else:\n        network_config = None\n\n    task_run = self._prepare_task_run(network_config, task_definition_arn)\n    preview += \"---\\n# Task run request\\n\"\n    preview += yaml.dump(task_run)\n\n    return preview\n
        "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.run","title":"run async","text":"

        Run the configured task on ECS.

        Source code in prefect_aws/ecs.py
        @sync_compatible\nasync def run(self, task_status: Optional[TaskStatus] = None) -> ECSTaskResult:\n    \"\"\"\n    Run the configured task on ECS.\n    \"\"\"\n    boto_session, ecs_client = await run_sync_in_worker_thread(\n        self._get_session_and_client\n    )\n\n    (\n        task_arn,\n        cluster_arn,\n        task_definition,\n        is_new_task_definition,\n    ) = await run_sync_in_worker_thread(\n        self._create_task_and_wait_for_start, boto_session, ecs_client\n    )\n\n    # Display a nice message indicating the command and image\n    command = self.command or get_prefect_container(\n        task_definition[\"containerDefinitions\"]\n    ).get(\"command\", [])\n    self.logger.info(\n        f\"{self._log_prefix}: Running command {' '.join(command)!r} \"\n        f\"in container {PREFECT_ECS_CONTAINER_NAME!r} ({self.image})...\"\n    )\n\n    # The task identifier is \"{cluster}::{task}\" where we use the configured cluster\n    # if set to preserve matching by name rather than arn\n    # Note \"::\" is used despite the Prefect standard being \":\" because ARNs contain\n    # single colons.\n    identifier = (self.cluster if self.cluster else cluster_arn) + \"::\" + task_arn\n\n    if task_status:\n        task_status.started(identifier)\n\n    status_code = await run_sync_in_worker_thread(\n        self._watch_task_and_get_exit_code,\n        task_arn,\n        cluster_arn,\n        task_definition,\n        is_new_task_definition and self.auto_deregister_task_definition,\n        boto_session,\n        ecs_client,\n    )\n\n    return ECSTaskResult(\n        identifier=identifier,\n        # If the container does not start the exit code can be null but we must\n        # still report a status code. We use a -1 to indicate a special code.\n        status_code=status_code if status_code is not None else -1,\n    )\n
        "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTask.set_default_configure_cloudwatch_logs","title":"set_default_configure_cloudwatch_logs","text":"

        Streaming output generally requires CloudWatch logs to be configured.

        To avoid entangled arguments in the simple case, configure_cloudwatch_logs defaults to matching the value of stream_output.

        Source code in prefect_aws/ecs.py
        @root_validator(pre=True)\ndef set_default_configure_cloudwatch_logs(cls, values: dict) -> dict:\n    \"\"\"\n    Streaming output generally requires CloudWatch logs to be configured.\n\n    To avoid entangled arguments in the simple case, `configure_cloudwatch_logs`\n    defaults to matching the value of `stream_output`.\n    \"\"\"\n    configure_cloudwatch_logs = values.get(\"configure_cloudwatch_logs\")\n    if configure_cloudwatch_logs is None:\n        values[\"configure_cloudwatch_logs\"] = values.get(\"stream_output\")\n    return values\n
        "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.ECSTaskResult","title":"ECSTaskResult","text":"

        Bases: InfrastructureResult

        The result of a run of an ECS task

        Source code in prefect_aws/ecs.py
        class ECSTaskResult(InfrastructureResult):\n    \"\"\"The result of a run of an ECS task\"\"\"\n
        "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.get_container","title":"get_container","text":"

        Extract a container from a list of containers or container definitions. If not found, None is returned.

        Source code in prefect_aws/ecs.py
        def get_container(containers: List[dict], name: str) -> Optional[dict]:\n    \"\"\"\n    Extract a container from a list of containers or container definitions.\n    If not found, `None` is returned.\n    \"\"\"\n    for container in containers:\n        if container.get(\"name\") == name:\n            return container\n    return None\n
        "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.get_prefect_container","title":"get_prefect_container","text":"

        Extract the Prefect container from a list of containers or container definitions. If not found, None is returned.

        Source code in prefect_aws/ecs.py
        def get_prefect_container(containers: List[dict]) -> Optional[dict]:\n    \"\"\"\n    Extract the Prefect container from a list of containers or container definitions.\n    If not found, `None` is returned.\n    \"\"\"\n    return get_container(containers, PREFECT_ECS_CONTAINER_NAME)\n
        "},{"location":"integrations/prefect-aws/ecs/#prefect_aws.ecs.parse_task_identifier","title":"parse_task_identifier","text":"

        Splits identifier into its cluster and task components, e.g. input \"cluster_name::task_arn\" outputs (\"cluster_name\", \"task_arn\").

        Source code in prefect_aws/ecs.py
        def parse_task_identifier(identifier: str) -> Tuple[str, str]:\n    \"\"\"\n    Splits identifier into its cluster and task components, e.g.\n    input \"cluster_name::task_arn\" outputs (\"cluster_name\", \"task_arn\").\n    \"\"\"\n    cluster, task = identifier.split(\"::\", maxsplit=1)\n    return cluster, task\n
        "},{"location":"integrations/prefect-aws/ecs_guide/","title":"ECS Worker Guide","text":""},{"location":"integrations/prefect-aws/ecs_guide/#why-use-ecs-for-flow-run-execution","title":"Why use ECS for flow run execution?","text":"

        ECS (Elastic Container Service) tasks are a good option for executing Prefect flow runs for several reasons:

        1. Scalability: ECS scales your infrastructure in response to demand, effectively managing Prefect flow runs. ECS automatically distributes containers across multiple instances based on demand.
        2. Flexibility: ECS lets you choose between AWS Fargate and Amazon EC2 for container operation. Fargate abstracts the underlying infrastructure, while EC2 has faster job start times and offers additional control over instance management and configuration.
        3. AWS Integration: Easily connect with other AWS services, such as AWS IAM and CloudWatch.
        4. Containerization: ECS supports Docker containers and offers managed execution. Containerization encourages reproducible deployments.
        "},{"location":"integrations/prefect-aws/ecs_guide/#ecs-flow-run-execution","title":"ECS flow run execution","text":"

        Prefect enables remote flow execution via workers and work pools. To learn more about these concepts please see our deployment tutorial.

        For details on how workers and work pools are implemented for ECS, see the diagram below.

        %%{\n  init: {\n    'theme': 'base',\n    'themeVariables': {\n      'primaryColor': '#2D6DF6',\n      'primaryTextColor': '#fff',\n      'lineColor': '#FE5A14',\n      'secondaryColor': '#E04BF0',\n      'tertiaryColor': '#fff'\n    }\n  }\n}%%\ngraph TB\n\n  subgraph ecs_cluster[ECS cluster]\n    subgraph ecs_service[ECS service]\n      td_worker[Worker task definition] --> |defines| prefect_worker((Prefect worker))\n    end\n    prefect_worker -->|kicks off| ecs_task\n    fr_task_definition[Flow run task definition]\n\n\n    subgraph ecs_task[\"ECS task execution\"]\n    style ecs_task text-align:center,display:flex\n\n\n    flow_run((Flow run))\n\n    end\n    fr_task_definition -->|defines| ecs_task\n  end\n\n  subgraph prefect_cloud[Prefect Cloud]\n    subgraph prefect_workpool[ECS work pool]\n      workqueue[Work queue]\n    end\n  end\n\n  subgraph github[\"ECR\"]\n    flow_code{{\"Flow code\"}}\n  end\n  flow_code --> |pulls| ecs_task\n  prefect_worker -->|polls| workqueue\n  prefect_workpool -->|configures| fr_task_definition
        "},{"location":"integrations/prefect-aws/ecs_guide/#ecs-and-prefect","title":"ECS and Prefect","text":"

        ECS tasks != Prefect tasks

        An ECS task is not the same thing as a Prefect task.

        ECS tasks are groupings of containers that run within an ECS Cluster. An ECS task's behavior is determined by its task definition.

        An ECS task definition is the blueprint for the ECS task. It describes which Docker containers to run and what you want to have happen inside these containers.

        ECS tasks are instances of a task definition. A Task Execution launches container(s) as defined in the task definition until they are stopped or exit on their own. This setup is ideal for ephemeral processes such as a Prefect flow run.

        The ECS task running the Prefect worker should be an ECS Service, given its long-running nature and need for auto-recovery in case of failure. An ECS service automatically replaces any task that fails, which is ideal for managing a long-running process such as a Prefect worker.

        When a Prefect flow is scheduled to run, it goes into the work pool specified in the flow's deployment. Work pools are typed according to the infrastructure the flow will run on. Flow runs scheduled in an ecs-typed work pool are executed as ECS tasks. Only Prefect ECS workers can poll an ecs-typed work pool.

        When the ECS worker receives a scheduled flow run from the ECS work pool it is polling, it spins up the specified infrastructure on AWS ECS. The worker knows to build an ECS task definition for each flow run based on the configuration specified in the work pool.

        Once the flow run completes, the ECS containers of the cluster are spun down to a single container that continues to run the Prefect worker. This worker continues polling for work from the Prefect work pool.

        If you specify a task definition ARN (Amazon Resource Name) in the work pool, the worker will use that ARN when spinning up the ECS Task, rather than creating a task definition from the fields supplied in the work pool configuration.

        You can use either EC2 or Fargate as the capacity provider. Fargate simplifies initiation, but lengthens infrastructure setup time for each flow run. Using EC2 for the ECS cluster can reduce setup time. In this example, we will show how to use Fargate.

        Tip

        If you prefer infrastructure as code check out this Terraform module to provision an ECS cluster with a worker.

        "},{"location":"integrations/prefect-aws/ecs_guide/#prerequisites","title":"Prerequisites","text":"
        • An AWS account with permissions to create ECS services and IAM roles.
        • The AWS CLI installed on your local machine. You can download it from the AWS website.
        • An ECS Cluster to host both the worker and the flow runs it submits. This guide uses the default cluster. To create your own follow this guide.
        • A VPC configured for your ECS tasks. This guide uses the default VPC.
        • Prefect Cloud account or Prefect self-managed instance.
        "},{"location":"integrations/prefect-aws/ecs_guide/#step-1-set-up-an-ecs-work-pool","title":"Step 1: Set up an ECS work pool","text":"

        Before setting up the worker, create a work pool of type ECS for the worker to pull work from. If doing so from the CLI, be sure to authenticate with Prefect Cloud.

        Create a work pool from the CLI:

        prefect work-pool create --type ecs my-ecs-pool\n

        Or from the Prefect UI:

        Because this guide uses Fargate as the capacity provider and the default VPC and ECS cluster, no further configuration is needed.

        Next, set up a Prefect ECS worker that will discover and pull work from this work pool.

        "},{"location":"integrations/prefect-aws/ecs_guide/#step-2-start-a-prefect-worker-in-your-ecs-cluster","title":"Step 2: Start a Prefect worker in your ECS cluster","text":"

        Start by creating the IAM role required for your worker and flows to run. The sample flow in this guide doesn't interact with many other AWS services, so you will create only one role, ecsTaskExecutionRole. To create an IAM role for the ECS task using the AWS CLI, follow these steps:

        "},{"location":"integrations/prefect-aws/ecs_guide/#1-create-a-trust-policy","title":"1. Create a trust policy","text":"

        The trust policy will specify that the ECS service containing the Prefect worker will be able to assume the role required for calling other AWS services.

        Save this policy to a file, such as ecs-trust-policy.json:

        {\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n        {\n            \"Effect\": \"Allow\",\n            \"Principal\": {\n                \"Service\": \"ecs-tasks.amazonaws.com\"\n            },\n            \"Action\": \"sts:AssumeRole\"\n        }\n    ]\n}\n
        "},{"location":"integrations/prefect-aws/ecs_guide/#2-create-the-iam-roles","title":"2. Create the IAM roles","text":"

        Use the aws iam create-role command to create the roles that you will be using. For this guide, the ecsTaskExecutionRole will be used by the worker to start ECS tasks, and will also be the role assigned to the ECS tasks running your Prefect flows.

            aws iam create-role \\\n    --role-name ecsTaskExecutionRole \\\n    --assume-role-policy-document file://ecs-trust-policy.json\n

        Tip

        Depending on the requirements of your flows, consider creating a second role for your ECS tasks. This role contains the permissions required by the ECS tasks in which your flows run. For example, if your workflow loads data into an S3 bucket, you would need a role with additional permissions to access S3.

        "},{"location":"integrations/prefect-aws/ecs_guide/#3-attach-the-policy-to-the-role","title":"3. Attach the policy to the role","text":"

        For this guide the ECS worker will require permissions to pull images from ECR and publish logs to CloudWatch. Amazon has a managed policy named AmazonECSTaskExecutionRolePolicy that grants the permissions necessary for starting ECS tasks. See here for other common execution role permissions. Attach this policy to your task execution role:

            aws iam attach-role-policy \\\n    --role-name ecsTaskExecutionRole \\\n    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy\n

        Remember to replace the --role-name and --policy-arn with the actual role name and policy Amazon Resource Name (ARN) you want to use.

        "},{"location":"integrations/prefect-aws/ecs_guide/#step-3-creating-an-ecs-worker-service","title":"Step 3: Creating an ECS worker service","text":""},{"location":"integrations/prefect-aws/ecs_guide/#1-launch-an-ecs-service-to-host-the-worker","title":"1. Launch an ECS Service to host the worker","text":"

        Next, create an ECS task definition that specifies the Docker image for the Prefect worker, the resources it requires, and the command it should run. In this example, the command to start the worker is prefect worker start --pool my-ecs-pool.

        Create a JSON file with the following contents:

        {\n    \"family\": \"prefect-worker-task\",\n    \"networkMode\": \"awsvpc\",\n    \"requiresCompatibilities\": [\n        \"FARGATE\"\n    ],\n    \"cpu\": \"512\",\n    \"memory\": \"1024\",\n    \"executionRoleArn\": \"<ecs-task-role-arn>\",\n    \"taskRoleArn\": \"<ecs-task-role-arn>\",\n    \"containerDefinitions\": [\n        {\n            \"name\": \"prefect-worker\",\n            \"image\": \"prefecthq/prefect:2-latest\",\n            \"cpu\": 512,\n            \"memory\": 1024,\n            \"essential\": true,\n            \"command\": [\n                \"/bin/sh\",\n                \"-c\",\n                \"pip install prefect-aws && prefect worker start --pool my-ecs-pool --type ecs\"\n            ],\n            \"environment\": [\n                {\n                    \"name\": \"PREFECT_API_URL\",\n                    \"value\": \"prefect-api-url>\"\n                },\n                {\n                    \"name\": \"PREFECT_API_KEY\",\n                    \"value\": \"<prefect-api-key>\"\n                }\n            ]\n        }\n    ]\n}\n
        • Use prefect config view to view the PREFECT_API_URL for your current Prefect profile. Use this to replace <prefect-api-url>.

        • For the PREFECT_API_KEY, if you are on a paid plan you can create a service account for the worker. If you are on a free plan, you can pass a user\u2019s API key.

        • Replace both instances of <ecs-task-role-arn> with the ARN of the IAM role you created in Step 2. You can grab this by running:

          aws iam get-role --role-name ecsTaskExecutionRole --query 'Role.[RoleName, Arn]' --output text\n

        • Notice that the CPU and Memory allocations are relatively small. The worker's main responsibility is to submit work through API calls to AWS, not to execute your Prefect flow code.

        Tip

        To avoid hardcoding your API key into the task definition JSON see how to add sensitive data using AWS secrets manager to the container definition.

        "},{"location":"integrations/prefect-aws/ecs_guide/#2-register-the-task-definition","title":"2. Register the task definition","text":"

        Before creating a service, you first need to register a task definition. You can do that using the register-task-definition command in the AWS CLI. Here is an example:

        aws ecs register-task-definition --cli-input-json file://task-definition.json\n

        Replace task-definition.json with the name of your JSON file.

        "},{"location":"integrations/prefect-aws/ecs_guide/#3-create-an-ecs-service-to-host-your-worker","title":"3. Create an ECS service to host your worker","text":"

        Finally, create a service that will manage your Prefect worker:

        Open a terminal window and run the following command to create an ECS Fargate service:

        aws ecs create-service \\\n    --service-name prefect-worker-service \\\n    --cluster <ecs-cluster> \\\n    --task-definition <task-definition-arn> \\\n    --launch-type FARGATE \\\n    --desired-count 1 \\\n    --network-configuration \"awsvpcConfiguration={subnets=[<subnet-ids>],securityGroups=[<security-group-ids>],assignPublicIp='ENABLED'}\"\n
        • Replace <ecs-cluster> with the name of your ECS cluster.
        • Replace <task-definition-arn> with the ARN of the task definition you just registered.
        • Replace <subnet-ids> with a comma-separated list of your VPC subnet IDs. Ensure that these subnets belong to the same VPC used by the work pool configured in step 1. You can view subnet IDs with the following command: aws ec2 describe-subnets --filters Name=vpc-id,Values=<vpc-id>
        • Replace <security-group-ids> with a comma-separated list of your VPC security group IDs.

        Sanity check

        The work pool page in the Prefect UI allows you to check the health of your workers - make sure your new worker is live! Note that it can take a few minutes for an ECS service to come online. If your worker does not come online and you are using the command from this guide, you may not be using the default VPC. For connectivity issues, check your VPC's configuration and refer to the ECS outbound networking guide.

        "},{"location":"integrations/prefect-aws/ecs_guide/#step-4-pick-up-a-flow-run-with-your-new-worker","title":"Step 4: Pick up a flow run with your new worker","text":"

        This guide uses ECR to store a Docker image containing your flow code. To do this, we will write a flow, then deploy it using build and push steps that copy flow code into a Docker image and push that image to an ECR repository.

        "},{"location":"integrations/prefect-aws/ecs_guide/#1-write-a-simple-test-flow","title":"1. Write a simple test flow","text":"

        my_flow.py

        from prefect import flow, get_run_logger\n\n@flow\ndef my_flow():\n    logger = get_run_logger()\n    logger.info(\"Hello from ECS!!\")\n\nif __name__ == \"__main__\":\n    my_flow()\n
        "},{"location":"integrations/prefect-aws/ecs_guide/#2-create-an-ecr-repository","title":"2. Create an ECR repository","text":"

        Use the following AWS CLI command to create an ECR repository. The name you choose for your repository will be reused in the next step when defining your Prefect deployment.

        aws ecr create-repository \\\n--repository-name <my-ecr-repo> \\\n--region <region>\n
        "},{"location":"integrations/prefect-aws/ecs_guide/#3-create-a-prefectyaml-file","title":"3. Create a prefect.yaml file","text":"

        To have Prefect build your image when deploying your flow, create a prefect.yaml file with the following specification:

        name: ecs-worker-guide\n# this is pre-populated by running prefect init\nprefect-version: 2.14.20\n\n# build section allows you to manage and build docker images\nbuild:\n- prefect_docker.deployments.steps.build_docker_image:\n    id: build_image\n    requires: prefect-docker>=0.3.1\n    image_name: <my-ecr-repo>\n    tag: latest\n    dockerfile: auto\n\n# push section allows you to manage if and how this project is uploaded to remote locations\npush:\n- prefect_docker.deployments.steps.push_docker_image:\n    requires: prefect-docker>=0.3.1\n    image_name: '{{ build_image.image_name }}'\n    tag: '{{ build_image.tag }}'\n\n# the deployments section allows you to provide configuration for deploying flows\ndeployments:\n- name: my_ecs_deployment\n  version:\n  tags: []\n  description:\n  entrypoint: my_flow.py:my_flow\n  parameters: {}\n  work_pool:\n    name: my-ecs-pool\n    work_queue_name:\n    job_variables:\n      image: '{{ build_image.image }}'\n  schedules: []\npull:\n- prefect.deployments.steps.set_working_directory:\n    directory: /opt/prefect/ecs-worker-guide\n
        "},{"location":"integrations/prefect-aws/ecs_guide/#4-deploy-the-flow-to-the-prefect-cloud-or-your-self-managed-server-instance-specifying-the-ecs-work-pool-when-prompted","title":"4. Deploy the flow to the Prefect Cloud or your self-managed server instance, specifying the ECS work pool when prompted","text":"
        prefect deploy my_flow.py:my_ecs_deployment\n
        "},{"location":"integrations/prefect-aws/ecs_guide/#5-find-the-deployment-in-the-ui-and-click-the-quick-run-button","title":"5. Find the deployment in the UI and click the Quick Run button!","text":""},{"location":"integrations/prefect-aws/ecs_guide/#optional-next-steps","title":"Optional next steps","text":"
        1. Now that you are confident your ECS worker is healthy, you can experiment with different work pool configurations.

          • Do your flow runs require higher CPU?
          • Would an EC2 Launch Type speed up your flow run execution?

          These infrastructure configuration values can be set on your ECS work pool or they can be overridden on the deployment level through job_variables if desired.
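          For example, a deployment-level override can be expressed in the work_pool section of prefect.yaml, as in this rough sketch; the cpu and memory values are illustrative, and the exact variable names available depend on your ECS work pool's base job template.
          deployments:\n- name: my_ecs_deployment\n  entrypoint: my_flow.py:my_flow\n  work_pool:\n    name: my-ecs-pool\n    job_variables:\n      image: '{{ build_image.image }}'\n      cpu: 1024    # illustrative values; use a valid Fargate CPU/memory combination\n      memory: 2048\n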

        "},{"location":"integrations/prefect-aws/ecs_worker/","title":"ECS Worker","text":""},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker","title":"prefect_aws.workers.ecs_worker","text":"

        Prefect worker for executing flow runs as ECS tasks.

        Get started by creating a work pool:

        $ prefect work-pool create --type ecs my-ecs-pool\n

        Then, you can start a worker for the pool:

        $ prefect worker start --pool my-ecs-pool\n

        It's common to deploy the worker as an ECS task as well. However, you can run the worker locally to get started.

        The worker may run without any additional configuration, but this depends on your specific AWS setup; we recommend opening the work pool editor in the UI to see the available options.

        By default, the worker will register a task definition for each flow run and run a task in your default ECS cluster using AWS Fargate. Fargate requires tasks to configure subnets, which we will infer from your default VPC. If you do not have a default VPC, you must provide a VPC ID or manually set up the network configuration for your tasks.
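        For example, a deployment's job_variables mapping (in the same style as the prefect.yaml fragments shown in the guide above) could pin the VPC, assuming the work pool exposes a vpc_id variable; the ID shown is a placeholder.
        job_variables:\n  vpc_id: vpc-0a1b2c3d4e5f67890  # placeholder; use the ID of a VPC with usable subnets\n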

        Note that the worker caches task definitions for each deployment to avoid excessive registration. The worker checks that the cached task definition is compatible with your configuration before using it.

        The launch type option can be used to run your tasks in different modes. For example, FARGATE_SPOT runs your Fargate tasks on spot instances, while EC2 runs your tasks on a cluster backed by EC2 instances.
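        A minimal sketch of that override, assuming the work pool exposes a launch_type job variable:
        job_variables:\n  launch_type: FARGATE_SPOT  # or EC2 to target an EC2-backed cluster\n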

        Generally, it is very useful to enable CloudWatch logging for your ECS tasks; this can help you debug task failures. To enable CloudWatch logging, you must provide an execution role ARN with permissions to create and write to log streams. See the configure_cloudwatch_logs field documentation for details.
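        A minimal sketch, assuming the work pool exposes configure_cloudwatch_logs and execution_role_arn job variables; the role ARN is a placeholder and must allow logs:CreateLogStream, logs:CreateLogGroup, and logs:PutLogEvents:
        job_variables:\n  configure_cloudwatch_logs: true\n  execution_role_arn: arn:aws:iam::123456789012:role/ecsTaskExecutionRole  # placeholder ARN\n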

        The worker can be configured to use an existing task definition by setting the task definition arn variable or by providing a \"taskDefinition\" in the task run request. When a task definition is provided, the worker never creates a new task definition, which means variables that would otherwise be templated into the task definition payload may be ignored.
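        A minimal sketch of pointing the worker at an existing task definition, assuming the work pool exposes a task_definition_arn job variable (the exact variable name may differ in your work pool's template); the ARN is a placeholder:
        job_variables:\n  task_definition_arn: arn:aws:ecs:us-east-1:123456789012:task-definition/my-family:1  # placeholder ARN\n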

        "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.CapacityProvider","title":"CapacityProvider","text":"

        Bases: BaseModel

        The capacity provider strategy to use when running the task.

        Source code in prefect_aws/workers/ecs_worker.py
        class CapacityProvider(BaseModel):\n    \"\"\"\n    The capacity provider strategy to use when running the task.\n    \"\"\"\n\n    capacityProvider: str\n    weight: int\n    base: int\n
        "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSIdentifier","title":"ECSIdentifier","text":"

        Bases: NamedTuple

        The identifier for a running ECS task.

        Source code in prefect_aws/workers/ecs_worker.py
        class ECSIdentifier(NamedTuple):\n    \"\"\"\n    The identifier for a running ECS task.\n    \"\"\"\n\n    cluster: str\n    task_arn: str\n
        "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSJobConfiguration","title":"ECSJobConfiguration","text":"

        Bases: BaseJobConfiguration

        Job configuration for an ECS worker.

        Source code in prefect_aws/workers/ecs_worker.py
        class ECSJobConfiguration(BaseJobConfiguration):\n    \"\"\"\n    Job configuration for an ECS worker.\n    \"\"\"\n\n    aws_credentials: Optional[AwsCredentials] = Field(default_factory=AwsCredentials)\n    task_definition: Optional[Dict[str, Any]] = Field(\n        template=_default_task_definition_template()\n    )\n    task_run_request: Dict[str, Any] = Field(\n        template=_default_task_run_request_template()\n    )\n    configure_cloudwatch_logs: Optional[bool] = Field(default=None)\n    cloudwatch_logs_options: Dict[str, str] = Field(default_factory=dict)\n    cloudwatch_logs_prefix: Optional[str] = Field(default=None)\n    network_configuration: Dict[str, Any] = Field(default_factory=dict)\n    stream_output: Optional[bool] = Field(default=None)\n    task_start_timeout_seconds: int = Field(default=300)\n    task_watch_poll_interval: float = Field(default=5.0)\n    auto_deregister_task_definition: bool = Field(default=False)\n    vpc_id: Optional[str] = Field(default=None)\n    container_name: Optional[str] = Field(default=None)\n    cluster: Optional[str] = Field(default=None)\n    match_latest_revision_in_family: bool = Field(default=False)\n\n    @root_validator\n    def task_run_request_requires_arn_if_no_task_definition_given(cls, values) -> dict:\n        \"\"\"\n        If no task definition is provided, a task definition ARN must be present on the\n        task run request.\n        \"\"\"\n        if not values.get(\"task_run_request\", {}).get(\n            \"taskDefinition\"\n        ) and not values.get(\"task_definition\"):\n            raise ValueError(\n                \"A task definition must be provided if a task definition ARN is not \"\n                \"present on the task run request.\"\n            )\n        return values\n\n    @root_validator\n    def container_name_default_from_task_definition(cls, values) -> dict:\n        \"\"\"\n        Infers the container name from the task definition if not provided.\n        \"\"\"\n        if values.get(\"container_name\") is None:\n            values[\"container_name\"] = _container_name_from_task_definition(\n                values.get(\"task_definition\")\n            )\n\n            # We may not have a name here still; for example if someone is using a task\n            # definition arn. 
In that case, we'll perform similar logic later to find\n            # the name to treat as the \"orchestration\" container.\n\n        return values\n\n    @root_validator(pre=True)\n    def set_default_configure_cloudwatch_logs(cls, values: dict) -> dict:\n        \"\"\"\n        Streaming output generally requires CloudWatch logs to be configured.\n\n        To avoid entangled arguments in the simple case, `configure_cloudwatch_logs`\n        defaults to matching the value of `stream_output`.\n        \"\"\"\n        configure_cloudwatch_logs = values.get(\"configure_cloudwatch_logs\")\n        if configure_cloudwatch_logs is None:\n            values[\"configure_cloudwatch_logs\"] = values.get(\"stream_output\")\n        return values\n\n    @root_validator\n    def configure_cloudwatch_logs_requires_execution_role_arn(\n        cls, values: dict\n    ) -> dict:\n        \"\"\"\n        Enforces that an execution role arn is provided (or could be provided by a\n        runtime task definition) when configuring logging.\n        \"\"\"\n        if (\n            values.get(\"configure_cloudwatch_logs\")\n            and not values.get(\"execution_role_arn\")\n            # TODO: Does not match\n            # Do not raise if they've linked to another task definition or provided\n            # it without using our shortcuts\n            and not values.get(\"task_run_request\", {}).get(\"taskDefinition\")\n            and not (values.get(\"task_definition\") or {}).get(\"executionRoleArn\")\n        ):\n            raise ValueError(\n                \"An `execution_role_arn` must be provided to use \"\n                \"`configure_cloudwatch_logs` or `stream_logs`.\"\n            )\n        return values\n\n    @root_validator\n    def cloudwatch_logs_options_requires_configure_cloudwatch_logs(\n        cls, values: dict\n    ) -> dict:\n        \"\"\"\n        Enforces that an execution role arn is provided (or could be provided by a\n        runtime task definition) when configuring logging.\n        \"\"\"\n        if values.get(\"cloudwatch_logs_options\") and not values.get(\n            \"configure_cloudwatch_logs\"\n        ):\n            raise ValueError(\n                \"`configure_cloudwatch_log` must be enabled to use \"\n                \"`cloudwatch_logs_options`.\"\n            )\n        return values\n\n    @root_validator\n    def network_configuration_requires_vpc_id(cls, values: dict) -> dict:\n        \"\"\"\n        Enforces a `vpc_id` is provided when custom network configuration mode is\n        enabled for network settings.\n        \"\"\"\n        if values.get(\"network_configuration\") and not values.get(\"vpc_id\"):\n            raise ValueError(\n                \"You must provide a `vpc_id` to enable custom `network_configuration`.\"\n            )\n        return values\n
        "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSJobConfiguration.cloudwatch_logs_options_requires_configure_cloudwatch_logs","title":"cloudwatch_logs_options_requires_configure_cloudwatch_logs","text":"

        Enforces that configure_cloudwatch_logs is enabled when cloudwatch_logs_options are provided.

        Source code in prefect_aws/workers/ecs_worker.py
        @root_validator\ndef cloudwatch_logs_options_requires_configure_cloudwatch_logs(\n    cls, values: dict\n) -> dict:\n    \"\"\"\n    Enforces that an execution role arn is provided (or could be provided by a\n    runtime task definition) when configuring logging.\n    \"\"\"\n    if values.get(\"cloudwatch_logs_options\") and not values.get(\n        \"configure_cloudwatch_logs\"\n    ):\n        raise ValueError(\n            \"`configure_cloudwatch_log` must be enabled to use \"\n            \"`cloudwatch_logs_options`.\"\n        )\n    return values\n
        "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSJobConfiguration.configure_cloudwatch_logs_requires_execution_role_arn","title":"configure_cloudwatch_logs_requires_execution_role_arn","text":"

        Enforces that an execution role arn is provided (or could be provided by a runtime task definition) when configuring logging.

        Source code in prefect_aws/workers/ecs_worker.py
        @root_validator\ndef configure_cloudwatch_logs_requires_execution_role_arn(\n    cls, values: dict\n) -> dict:\n    \"\"\"\n    Enforces that an execution role arn is provided (or could be provided by a\n    runtime task definition) when configuring logging.\n    \"\"\"\n    if (\n        values.get(\"configure_cloudwatch_logs\")\n        and not values.get(\"execution_role_arn\")\n        # TODO: Does not match\n        # Do not raise if they've linked to another task definition or provided\n        # it without using our shortcuts\n        and not values.get(\"task_run_request\", {}).get(\"taskDefinition\")\n        and not (values.get(\"task_definition\") or {}).get(\"executionRoleArn\")\n    ):\n        raise ValueError(\n            \"An `execution_role_arn` must be provided to use \"\n            \"`configure_cloudwatch_logs` or `stream_logs`.\"\n        )\n    return values\n
        "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSJobConfiguration.container_name_default_from_task_definition","title":"container_name_default_from_task_definition","text":"

        Infers the container name from the task definition if not provided.

        Source code in prefect_aws/workers/ecs_worker.py
        @root_validator\ndef container_name_default_from_task_definition(cls, values) -> dict:\n    \"\"\"\n    Infers the container name from the task definition if not provided.\n    \"\"\"\n    if values.get(\"container_name\") is None:\n        values[\"container_name\"] = _container_name_from_task_definition(\n            values.get(\"task_definition\")\n        )\n\n        # We may not have a name here still; for example if someone is using a task\n        # definition arn. In that case, we'll perform similar logic later to find\n        # the name to treat as the \"orchestration\" container.\n\n    return values\n
        "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSJobConfiguration.network_configuration_requires_vpc_id","title":"network_configuration_requires_vpc_id","text":"

        Enforces a vpc_id is provided when custom network configuration mode is enabled for network settings.

        Source code in prefect_aws/workers/ecs_worker.py
        @root_validator\ndef network_configuration_requires_vpc_id(cls, values: dict) -> dict:\n    \"\"\"\n    Enforces a `vpc_id` is provided when custom network configuration mode is\n    enabled for network settings.\n    \"\"\"\n    if values.get(\"network_configuration\") and not values.get(\"vpc_id\"):\n        raise ValueError(\n            \"You must provide a `vpc_id` to enable custom `network_configuration`.\"\n        )\n    return values\n
        "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSJobConfiguration.set_default_configure_cloudwatch_logs","title":"set_default_configure_cloudwatch_logs","text":"

        Streaming output generally requires CloudWatch logs to be configured.

        To avoid entangled arguments in the simple case, configure_cloudwatch_logs defaults to matching the value of stream_output.

        Source code in prefect_aws/workers/ecs_worker.py
        @root_validator(pre=True)\ndef set_default_configure_cloudwatch_logs(cls, values: dict) -> dict:\n    \"\"\"\n    Streaming output generally requires CloudWatch logs to be configured.\n\n    To avoid entangled arguments in the simple case, `configure_cloudwatch_logs`\n    defaults to matching the value of `stream_output`.\n    \"\"\"\n    configure_cloudwatch_logs = values.get(\"configure_cloudwatch_logs\")\n    if configure_cloudwatch_logs is None:\n        values[\"configure_cloudwatch_logs\"] = values.get(\"stream_output\")\n    return values\n
        "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSJobConfiguration.task_run_request_requires_arn_if_no_task_definition_given","title":"task_run_request_requires_arn_if_no_task_definition_given","text":"

        If no task definition is provided, a task definition ARN must be present on the task run request.

        Source code in prefect_aws/workers/ecs_worker.py
        @root_validator\ndef task_run_request_requires_arn_if_no_task_definition_given(cls, values) -> dict:\n    \"\"\"\n    If no task definition is provided, a task definition ARN must be present on the\n    task run request.\n    \"\"\"\n    if not values.get(\"task_run_request\", {}).get(\n        \"taskDefinition\"\n    ) and not values.get(\"task_definition\"):\n        raise ValueError(\n            \"A task definition must be provided if a task definition ARN is not \"\n            \"present on the task run request.\"\n        )\n    return values\n
        "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSVariables","title":"ECSVariables","text":"

        Bases: BaseVariables

        Variables for templating an ECS job.

        Source code in prefect_aws/workers/ecs_worker.py
        class ECSVariables(BaseVariables):\n    \"\"\"\n    Variables for templating an ECS job.\n    \"\"\"\n\n    task_definition_arn: Optional[str] = Field(\n        default=None,\n        description=(\n            \"An identifier for an existing task definition to use. If set, options that\"\n            \" require changes to the task definition will be ignored. All contents of \"\n            \"the task definition in the job configuration will be ignored.\"\n        ),\n    )\n    env: Dict[str, Optional[str]] = Field(\n        title=\"Environment Variables\",\n        default_factory=dict,\n        description=(\n            \"Environment variables to provide to the task run. These variables are set \"\n            \"on the Prefect container at task runtime. These will not be set on the \"\n            \"task definition.\"\n        ),\n    )\n    aws_credentials: AwsCredentials = Field(\n        title=\"AWS Credentials\",\n        default_factory=AwsCredentials,\n        description=(\n            \"The AWS credentials to use to connect to ECS. If not provided, credentials\"\n            \" will be inferred from the local environment following AWS's boto client's\"\n            \" rules.\"\n        ),\n    )\n    cluster: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The ECS cluster to run the task in. An ARN or name may be provided. If \"\n            \"not provided, the default cluster will be used.\"\n        ),\n    )\n    family: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A family for the task definition. If not provided, it will be inferred \"\n            \"from the task definition. If the task definition does not have a family, \"\n            \"the name will be generated. When flow and deployment metadata is \"\n            \"available, the generated name will include their names. Values for this \"\n            \"field will be slugified to match AWS character requirements.\"\n        ),\n    )\n    launch_type: Optional[\n        Literal[\"FARGATE\", \"EC2\", \"EXTERNAL\", \"FARGATE_SPOT\"]\n    ] = Field(\n        default=ECS_DEFAULT_LAUNCH_TYPE,\n        description=(\n            \"The type of ECS task run infrastructure that should be used. Note that\"\n            \" 'FARGATE_SPOT' is not a formal ECS launch type, but we will configure\"\n            \" the proper capacity provider strategy if set here.\"\n        ),\n    )\n    capacity_provider_strategy: Optional[List[CapacityProvider]] = Field(\n        default_factory=list,\n        description=(\n            \"The capacity provider strategy to use when running the task. \"\n            \"If a capacity provider strategy is specified, the selected launch\"\n            \" type will be ignored.\"\n        ),\n    )\n    image: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The image to use for the Prefect container in the task. If this value is \"\n            \"not null, it will override the value in the task definition. This value \"\n            \"defaults to a Prefect base image matching your local versions.\"\n        ),\n    )\n    cpu: int = Field(\n        title=\"CPU\",\n        default=None,\n        description=(\n            \"The amount of CPU to provide to the ECS task. Valid amounts are \"\n            \"specified in the AWS documentation. 
If not provided, a default value of \"\n            f\"{ECS_DEFAULT_CPU} will be used unless present on the task definition.\"\n        ),\n    )\n    memory: int = Field(\n        default=None,\n        description=(\n            \"The amount of memory to provide to the ECS task. Valid amounts are \"\n            \"specified in the AWS documentation. If not provided, a default value of \"\n            f\"{ECS_DEFAULT_MEMORY} will be used unless present on the task definition.\"\n        ),\n    )\n    container_name: str = Field(\n        default=None,\n        description=(\n            \"The name of the container flow run orchestration will occur in. If not \"\n            f\"specified, a default value of {ECS_DEFAULT_CONTAINER_NAME} will be used \"\n            \"and if that is not found in the task definition the first container will \"\n            \"be used.\"\n        ),\n    )\n    task_role_arn: str = Field(\n        title=\"Task Role ARN\",\n        default=None,\n        description=(\n            \"A role to attach to the task run. This controls the permissions of the \"\n            \"task while it is running.\"\n        ),\n    )\n    execution_role_arn: str = Field(\n        title=\"Execution Role ARN\",\n        default=None,\n        description=(\n            \"An execution role to use for the task. This controls the permissions of \"\n            \"the task when it is launching. If this value is not null, it will \"\n            \"override the value in the task definition. An execution role must be \"\n            \"provided to capture logs from the container.\"\n        ),\n    )\n    vpc_id: Optional[str] = Field(\n        title=\"VPC ID\",\n        default=None,\n        description=(\n            \"The AWS VPC to link the task run to. This is only applicable when using \"\n            \"the 'awsvpc' network mode for your task. FARGATE tasks require this \"\n            \"network  mode, but for EC2 tasks the default network mode is 'bridge'. \"\n            \"If using the 'awsvpc' network mode and this field is null, your default \"\n            \"VPC will be used. If no default VPC can be found, the task run will fail.\"\n        ),\n    )\n    configure_cloudwatch_logs: bool = Field(\n        default=None,\n        description=(\n            \"If enabled, the Prefect container will be configured to send its output \"\n            \"to the AWS CloudWatch logs service. This functionality requires an \"\n            \"execution role with logs:CreateLogStream, logs:CreateLogGroup, and \"\n            \"logs:PutLogEvents permissions. The default for this field is `False` \"\n            \"unless `stream_output` is set.\"\n        ),\n    )\n    cloudwatch_logs_options: Dict[str, str] = Field(\n        default_factory=dict,\n        description=(\n            \"When `configure_cloudwatch_logs` is enabled, this setting may be used to\"\n            \" pass additional options to the CloudWatch logs configuration or override\"\n            \" the default options. See the [AWS\"\n            \" documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html#create_awslogs_logdriver_options)\"  # noqa\n            \" for available options. \"\n        ),\n    )\n    cloudwatch_logs_prefix: Optional[str] = Field(\n        default=None,\n        description=(\n            \"When `configure_cloudwatch_logs` is enabled, this setting may be used to\"\n            \" set a prefix for the log group. 
If not provided, the default prefix will\"\n            \" be `prefect-logs_<work_pool_name>_<deployment_id>`. If\"\n            \" `awslogs-stream-prefix` is present in `Cloudwatch logs options` this\"\n            \" setting will be ignored.\"\n        ),\n    )\n\n    network_configuration: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=(\n            \"When `network_configuration` is supplied it will override ECS Worker's\"\n            \"awsvpcConfiguration that defined in the ECS task executing your workload. \"\n            \"See the [AWS documentation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecs-service-awsvpcconfiguration.html)\"  # noqa\n            \" for available options.\"\n        ),\n    )\n\n    stream_output: bool = Field(\n        default=None,\n        description=(\n            \"If enabled, logs will be streamed from the Prefect container to the local \"\n            \"console. Unless you have configured AWS CloudWatch logs manually on your \"\n            \"task definition, this requires the same prerequisites outlined in \"\n            \"`configure_cloudwatch_logs`.\"\n        ),\n    )\n    task_start_timeout_seconds: int = Field(\n        default=300,\n        description=(\n            \"The amount of time to watch for the start of the ECS task \"\n            \"before marking it as failed. The task must enter a RUNNING state to be \"\n            \"considered started.\"\n        ),\n    )\n    task_watch_poll_interval: float = Field(\n        default=5.0,\n        description=(\n            \"The amount of time to wait between AWS API calls while monitoring the \"\n            \"state of an ECS task.\"\n        ),\n    )\n    auto_deregister_task_definition: bool = Field(\n        default=False,\n        description=(\n            \"If enabled, any task definitions that are created by this block will be \"\n            \"deregistered. Existing task definitions linked by ARN will never be \"\n            \"deregistered. Deregistering a task definition does not remove it from \"\n            \"your AWS account, instead it will be marked as INACTIVE.\"\n        ),\n    )\n    match_latest_revision_in_family: bool = Field(\n        default=False,\n        description=(\n            \"If enabled, the most recent active revision in the task definition \"\n            \"family will be compared against the desired ECS task configuration. \"\n            \"If they are equal, the existing task definition will be used instead \"\n            \"of registering a new one. If no family is specified the default family \"\n            f'\"{ECS_DEFAULT_FAMILY}\" will be used.'\n        ),\n    )\n
        "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSWorker","title":"ECSWorker","text":"

        Bases: BaseWorker

        A Prefect worker to run flow runs as ECS tasks.

        Source code in prefect_aws/workers/ecs_worker.py
        class ECSWorker(BaseWorker):\n    \"\"\"\n    A Prefect worker to run flow runs as ECS tasks.\n    \"\"\"\n\n    type = \"ecs\"\n    job_configuration = ECSJobConfiguration\n    job_configuration_variables = ECSVariables\n    _description = (\n        \"Execute flow runs within containers on AWS ECS. Works with EC2 \"\n        \"and Fargate clusters. Requires an AWS account.\"\n    )\n    _display_name = \"AWS Elastic Container Service\"\n    _documentation_url = \"https://prefecthq.github.io/prefect-aws/ecs_worker/\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/d74b16fe84ce626345adf235a47008fea2869a60-225x225.png\"  # noqa\n\n    async def run(\n        self,\n        flow_run: \"FlowRun\",\n        configuration: ECSJobConfiguration,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> ECSWorkerResult:\n        \"\"\"\n        Runs a given flow run on the current worker.\n        \"\"\"\n        ecs_client = await run_sync_in_worker_thread(\n            self._get_client, configuration, \"ecs\"\n        )\n\n        logger = self.get_flow_run_logger(flow_run)\n\n        (\n            task_arn,\n            cluster_arn,\n            task_definition,\n            is_new_task_definition,\n        ) = await run_sync_in_worker_thread(\n            self._create_task_and_wait_for_start,\n            logger,\n            ecs_client,\n            configuration,\n            flow_run,\n        )\n\n        # The task identifier is \"{cluster}::{task}\" where we use the configured cluster\n        # if set to preserve matching by name rather than arn\n        # Note \"::\" is used despite the Prefect standard being \":\" because ARNs contain\n        # single colons.\n        identifier = (\n            (configuration.cluster if configuration.cluster else cluster_arn)\n            + \"::\"\n            + task_arn\n        )\n\n        if task_status:\n            task_status.started(identifier)\n\n        status_code = await run_sync_in_worker_thread(\n            self._watch_task_and_get_exit_code,\n            logger,\n            configuration,\n            task_arn,\n            cluster_arn,\n            task_definition,\n            is_new_task_definition and configuration.auto_deregister_task_definition,\n            ecs_client,\n        )\n\n        return ECSWorkerResult(\n            identifier=identifier,\n            # If the container does not start the exit code can be null but we must\n            # still report a status code. We use a -1 to indicate a special code.\n            status_code=status_code if status_code is not None else -1,\n        )\n\n    def _get_client(\n        self, configuration: ECSJobConfiguration, client_type: Union[str, ClientType]\n    ) -> _ECSClient:\n        \"\"\"\n        Get a boto3 client of client_type. 
Will use a cached client if one exists.\n        \"\"\"\n        return configuration.aws_credentials.get_client(client_type)\n\n    def _create_task_and_wait_for_start(\n        self,\n        logger: logging.Logger,\n        ecs_client: _ECSClient,\n        configuration: ECSJobConfiguration,\n        flow_run: FlowRun,\n    ) -> Tuple[str, str, dict, bool]:\n        \"\"\"\n        Register the task definition, create the task run, and wait for it to start.\n\n        Returns a tuple of\n        - The task ARN\n        - The task's cluster ARN\n        - The task definition\n        - A bool indicating if the task definition is newly registered\n        \"\"\"\n        task_definition_arn = configuration.task_run_request.get(\"taskDefinition\")\n        new_task_definition_registered = False\n\n        if not task_definition_arn:\n            task_definition = self._prepare_task_definition(\n                configuration, region=ecs_client.meta.region_name, flow_run=flow_run\n            )\n            (\n                task_definition_arn,\n                new_task_definition_registered,\n            ) = self._get_or_register_task_definition(\n                logger, ecs_client, configuration, flow_run, task_definition\n            )\n        else:\n            task_definition = self._retrieve_task_definition(\n                logger, ecs_client, task_definition_arn\n            )\n            if configuration.task_definition:\n                logger.warning(\n                    \"Ignoring task definition in configuration since task definition\"\n                    \" ARN is provided on the task run request.\"\n                )\n\n        self._validate_task_definition(task_definition, configuration)\n\n        _TASK_DEFINITION_CACHE[flow_run.deployment_id] = task_definition_arn\n\n        logger.info(f\"Using ECS task definition {task_definition_arn!r}...\")\n        logger.debug(\n            f\"Task definition {json.dumps(task_definition, indent=2, default=str)}\"\n        )\n\n        task_run_request = self._prepare_task_run_request(\n            configuration,\n            task_definition,\n            task_definition_arn,\n        )\n\n        logger.info(\"Creating ECS task run...\")\n        logger.debug(\n            \"Task run request\"\n            f\"{json.dumps(mask_api_key(task_run_request), indent=2, default=str)}\"\n        )\n\n        try:\n            task = self._create_task_run(ecs_client, task_run_request)\n            task_arn = task[\"taskArn\"]\n            cluster_arn = task[\"clusterArn\"]\n        except Exception as exc:\n            self._report_task_run_creation_failure(configuration, task_run_request, exc)\n            raise\n\n        logger.info(\"Waiting for ECS task run to start...\")\n        self._wait_for_task_start(\n            logger,\n            configuration,\n            task_arn,\n            cluster_arn,\n            ecs_client,\n            timeout=configuration.task_start_timeout_seconds,\n        )\n\n        return task_arn, cluster_arn, task_definition, new_task_definition_registered\n\n    def _get_or_register_task_definition(\n        self,\n        logger: logging.Logger,\n        ecs_client: _ECSClient,\n        configuration: ECSJobConfiguration,\n        flow_run: FlowRun,\n        task_definition: dict,\n    ) -> Tuple[str, bool]:\n        \"\"\"Get or register a task definition for the given flow run.\n\n        Returns a tuple of the task definition ARN and a bool indicating if the task\n        definition is newly 
registered.\n        \"\"\"\n\n        cached_task_definition_arn = _TASK_DEFINITION_CACHE.get(flow_run.deployment_id)\n        new_task_definition_registered = False\n\n        if cached_task_definition_arn:\n            try:\n                cached_task_definition = self._retrieve_task_definition(\n                    logger, ecs_client, cached_task_definition_arn\n                )\n                if not cached_task_definition[\n                    \"status\"\n                ] == \"ACTIVE\" or not self._task_definitions_equal(\n                    task_definition, cached_task_definition\n                ):\n                    cached_task_definition_arn = None\n            except Exception:\n                cached_task_definition_arn = None\n\n        if (\n            not cached_task_definition_arn\n            and configuration.match_latest_revision_in_family\n        ):\n            family_name = task_definition.get(\"family\", ECS_DEFAULT_FAMILY)\n            try:\n                task_definition_from_family = self._retrieve_task_definition(\n                    logger, ecs_client, family_name\n                )\n                if task_definition_from_family and self._task_definitions_equal(\n                    task_definition, task_definition_from_family\n                ):\n                    cached_task_definition_arn = task_definition_from_family[\n                        \"taskDefinitionArn\"\n                    ]\n            except Exception:\n                cached_task_definition_arn = None\n\n        if not cached_task_definition_arn:\n            task_definition_arn = self._register_task_definition(\n                logger, ecs_client, task_definition\n            )\n            new_task_definition_registered = True\n        else:\n            task_definition_arn = cached_task_definition_arn\n\n        return task_definition_arn, new_task_definition_registered\n\n    def _watch_task_and_get_exit_code(\n        self,\n        logger: logging.Logger,\n        configuration: ECSJobConfiguration,\n        task_arn: str,\n        cluster_arn: str,\n        task_definition: dict,\n        deregister_task_definition: bool,\n        ecs_client: _ECSClient,\n    ) -> Optional[int]:\n        \"\"\"\n        Wait for the task run to complete and retrieve the exit code of the Prefect\n        container.\n        \"\"\"\n\n        # Wait for completion and stream logs\n        task = self._wait_for_task_finish(\n            logger,\n            configuration,\n            task_arn,\n            cluster_arn,\n            task_definition,\n            ecs_client,\n        )\n\n        if deregister_task_definition:\n            ecs_client.deregister_task_definition(\n                taskDefinition=task[\"taskDefinitionArn\"]\n            )\n\n        container_name = (\n            configuration.container_name\n            or _container_name_from_task_definition(task_definition)\n            or ECS_DEFAULT_CONTAINER_NAME\n        )\n\n        # Check the status code of the Prefect container\n        container = _get_container(task[\"containers\"], container_name)\n        assert (\n            container is not None\n        ), f\"'{container_name}' container missing from task: {task}\"\n        status_code = container.get(\"exitCode\")\n        self._report_container_status_code(logger, container_name, status_code)\n\n        return status_code\n\n    def _report_container_status_code(\n        self, logger: logging.Logger, name: str, status_code: Optional[int]\n    ) -> None:\n   
     \"\"\"\n        Display a log for the given container status code.\n        \"\"\"\n        if status_code is None:\n            logger.error(\n                f\"Task exited without reporting an exit status for container {name!r}.\"\n            )\n        elif status_code == 0:\n            logger.info(f\"Container {name!r} exited successfully.\")\n        else:\n            logger.warning(\n                f\"Container {name!r} exited with non-zero exit code {status_code}.\"\n            )\n\n    def _report_task_run_creation_failure(\n        self, configuration: ECSJobConfiguration, task_run: dict, exc: Exception\n    ) -> None:\n        \"\"\"\n        Wrap common AWS task run creation failures with nicer user-facing messages.\n        \"\"\"\n        # AWS generates exception types at runtime so they must be captured a bit\n        # differently than normal.\n        if \"ClusterNotFoundException\" in str(exc):\n            cluster = task_run.get(\"cluster\", \"default\")\n            raise RuntimeError(\n                f\"Failed to run ECS task, cluster {cluster!r} not found. \"\n                \"Confirm that the cluster is configured in your region.\"\n            ) from exc\n        elif (\n            \"No Container Instances\" in str(exc) and task_run.get(\"launchType\") == \"EC2\"\n        ):\n            cluster = task_run.get(\"cluster\", \"default\")\n            raise RuntimeError(\n                f\"Failed to run ECS task, cluster {cluster!r} does not appear to \"\n                \"have any container instances associated with it. Confirm that you \"\n                \"have EC2 container instances available.\"\n            ) from exc\n        elif (\n            \"failed to validate logger args\" in str(exc)\n            and \"AccessDeniedException\" in str(exc)\n            and configuration.configure_cloudwatch_logs\n        ):\n            raise RuntimeError(\n                \"Failed to run ECS task, the attached execution role does not appear\"\n                \" to have sufficient permissions. Ensure that the execution role\"\n                f\" {configuration.execution_role!r} has permissions\"\n                \" logs:CreateLogStream, logs:CreateLogGroup, and logs:PutLogEvents.\"\n            )\n        else:\n            raise\n\n    def _validate_task_definition(\n        self, task_definition: dict, configuration: ECSJobConfiguration\n    ) -> None:\n        \"\"\"\n        Ensure that the task definition is compatible with the configuration.\n\n        Raises `ValueError` on incompatibility. Returns `None` on success.\n        \"\"\"\n        launch_type = configuration.task_run_request.get(\n            \"launchType\", ECS_DEFAULT_LAUNCH_TYPE\n        )\n        if (\n            launch_type != \"EC2\"\n            and \"FARGATE\" not in task_definition[\"requiresCompatibilities\"]\n        ):\n            raise ValueError(\n                \"Task definition does not have 'FARGATE' in 'requiresCompatibilities'\"\n                f\" and cannot be used with launch type {launch_type!r}\"\n            )\n\n        if launch_type == \"FARGATE\" or launch_type == \"FARGATE_SPOT\":\n            # Only the 'awsvpc' network mode is supported when using FARGATE\n            network_mode = task_definition.get(\"networkMode\")\n            if network_mode != \"awsvpc\":\n                raise ValueError(\n                    f\"Found network mode {network_mode!r} which is not compatible with \"\n                    f\"launch type {launch_type!r}. 
Use either the 'EC2' launch \"\n                    \"type or the 'awsvpc' network mode.\"\n                )\n\n        if configuration.configure_cloudwatch_logs and not task_definition.get(\n            \"executionRoleArn\"\n        ):\n            raise ValueError(\n                \"An execution role arn must be set on the task definition to use \"\n                \"`configure_cloudwatch_logs` or `stream_logs` but no execution role \"\n                \"was found on the task definition.\"\n            )\n\n    def _register_task_definition(\n        self,\n        logger: logging.Logger,\n        ecs_client: _ECSClient,\n        task_definition: dict,\n    ) -> str:\n        \"\"\"\n        Register a new task definition with AWS.\n\n        Returns the ARN.\n        \"\"\"\n        logger.info(\"Registering ECS task definition...\")\n        logger.debug(\n            \"Task definition request\"\n            f\"{json.dumps(task_definition, indent=2, default=str)}\"\n        )\n        response = ecs_client.register_task_definition(**task_definition)\n        return response[\"taskDefinition\"][\"taskDefinitionArn\"]\n\n    def _retrieve_task_definition(\n        self,\n        logger: logging.Logger,\n        ecs_client: _ECSClient,\n        task_definition: str,\n    ):\n        \"\"\"\n        Retrieve an existing task definition from AWS.\n        \"\"\"\n        if task_definition.startswith(\"arn:aws:ecs:\"):\n            logger.info(f\"Retrieving ECS task definition {task_definition!r}...\")\n        else:\n            logger.info(\n                \"Retrieving most recent active revision from \"\n                f\"ECS task family {task_definition!r}...\"\n            )\n        response = ecs_client.describe_task_definition(taskDefinition=task_definition)\n        return response[\"taskDefinition\"]\n\n    def _wait_for_task_start(\n        self,\n        logger: logging.Logger,\n        configuration: ECSJobConfiguration,\n        task_arn: str,\n        cluster_arn: str,\n        ecs_client: _ECSClient,\n        timeout: int,\n    ) -> dict:\n        \"\"\"\n        Waits for an ECS task run to reach a RUNNING status.\n\n        If a STOPPED status is reached instead, an exception is raised indicating the\n        reason that the task run did not start.\n        \"\"\"\n        for task in self._watch_task_run(\n            logger,\n            configuration,\n            task_arn,\n            cluster_arn,\n            ecs_client,\n            until_status=\"RUNNING\",\n            timeout=timeout,\n        ):\n            # TODO: It is possible that the task has passed _through_ a RUNNING\n            #       status during the polling interval. 
In this case, there is not an\n            #       exception to raise.\n            if task[\"lastStatus\"] == \"STOPPED\":\n                code = task.get(\"stopCode\")\n                reason = task.get(\"stoppedReason\")\n                # Generate a dynamic exception type from the AWS name\n                raise type(code, (RuntimeError,), {})(reason)\n\n        return task\n\n    def _wait_for_task_finish(\n        self,\n        logger: logging.Logger,\n        configuration: ECSJobConfiguration,\n        task_arn: str,\n        cluster_arn: str,\n        task_definition: dict,\n        ecs_client: _ECSClient,\n    ):\n        \"\"\"\n        Watch an ECS task until it reaches a STOPPED status.\n\n        If configured, logs from the Prefect container are streamed to stderr.\n\n        Returns a description of the task on completion.\n        \"\"\"\n        can_stream_output = False\n        container_name = (\n            configuration.container_name\n            or _container_name_from_task_definition(task_definition)\n            or ECS_DEFAULT_CONTAINER_NAME\n        )\n\n        if configuration.stream_output:\n            container_def = _get_container(\n                task_definition[\"containerDefinitions\"], container_name\n            )\n            if not container_def:\n                logger.warning(\n                    \"Prefect container definition not found in \"\n                    \"task definition. Output cannot be streamed.\"\n                )\n            elif not container_def.get(\"logConfiguration\"):\n                logger.warning(\n                    \"Logging configuration not found on task. \"\n                    \"Output cannot be streamed.\"\n                )\n            elif not container_def[\"logConfiguration\"].get(\"logDriver\") == \"awslogs\":\n                logger.warning(\n                    \"Logging configuration uses unsupported \"\n                    \" driver {container_def['logConfiguration'].get('logDriver')!r}. 
\"\n                    \"Output cannot be streamed.\"\n                )\n            else:\n                # Prepare to stream the output\n                log_config = container_def[\"logConfiguration\"][\"options\"]\n                logs_client = self._get_client(configuration, \"logs\")\n                can_stream_output = True\n                # Track the last log timestamp to prevent double display\n                last_log_timestamp: Optional[int] = None\n                # Determine the name of the stream as \"prefix/container/run-id\"\n                stream_name = \"/\".join(\n                    [\n                        log_config[\"awslogs-stream-prefix\"],\n                        container_name,\n                        task_arn.rsplit(\"/\")[-1],\n                    ]\n                )\n                self._logger.info(\n                    f\"Streaming output from container {container_name!r}...\"\n                )\n\n        for task in self._watch_task_run(\n            logger,\n            configuration,\n            task_arn,\n            cluster_arn,\n            ecs_client,\n            current_status=\"RUNNING\",\n        ):\n            if configuration.stream_output and can_stream_output:\n                # On each poll for task run status, also retrieve available logs\n                last_log_timestamp = self._stream_available_logs(\n                    logger,\n                    logs_client,\n                    log_group=log_config[\"awslogs-group\"],\n                    log_stream=stream_name,\n                    last_log_timestamp=last_log_timestamp,\n                )\n\n        return task\n\n    def _stream_available_logs(\n        self,\n        logger: logging.Logger,\n        logs_client: Any,\n        log_group: str,\n        log_stream: str,\n        last_log_timestamp: Optional[int] = None,\n    ) -> Optional[int]:\n        \"\"\"\n        Stream logs from the given log group and stream since the last log timestamp.\n\n        Will continue on paginated responses until all logs are returned.\n\n        Returns the last log timestamp which can be used to call this method in the\n        future.\n        \"\"\"\n        last_log_stream_token = \"NO-TOKEN\"\n        next_log_stream_token = None\n\n        # AWS will return the same token that we send once the end of the paginated\n        # response is reached\n        while last_log_stream_token != next_log_stream_token:\n            last_log_stream_token = next_log_stream_token\n\n            request = {\n                \"logGroupName\": log_group,\n                \"logStreamName\": log_stream,\n            }\n\n            if last_log_stream_token is not None:\n                request[\"nextToken\"] = last_log_stream_token\n\n            if last_log_timestamp is not None:\n                # Bump the timestamp by one ms to avoid retrieving the last log again\n                request[\"startTime\"] = last_log_timestamp + 1\n\n            try:\n                response = logs_client.get_log_events(**request)\n            except Exception:\n                logger.error(\n                    f\"Failed to read log events with request {request}\",\n                    exc_info=True,\n                )\n                return last_log_timestamp\n\n            log_events = response[\"events\"]\n            for log_event in log_events:\n                # TODO: This doesn't forward to the local logger, which can be\n                #       bad for customizing handling and understanding where the\n   
             #       log is coming from, but it avoid nesting logger information\n                #       when the content is output from a Prefect logger on the\n                #       running infrastructure\n                print(log_event[\"message\"], file=sys.stderr)\n\n                if (\n                    last_log_timestamp is None\n                    or log_event[\"timestamp\"] > last_log_timestamp\n                ):\n                    last_log_timestamp = log_event[\"timestamp\"]\n\n            next_log_stream_token = response.get(\"nextForwardToken\")\n            if not log_events:\n                # Stop reading pages if there was no data\n                break\n\n        return last_log_timestamp\n\n    def _watch_task_run(\n        self,\n        logger: logging.Logger,\n        configuration: ECSJobConfiguration,\n        task_arn: str,\n        cluster_arn: str,\n        ecs_client: _ECSClient,\n        current_status: str = \"UNKNOWN\",\n        until_status: str = None,\n        timeout: int = None,\n    ) -> Generator[None, None, dict]:\n        \"\"\"\n        Watches an ECS task run by querying every `poll_interval` seconds. After each\n        query, the retrieved task is yielded. This function returns when the task run\n        reaches a STOPPED status or the provided `until_status`.\n\n        Emits a log each time the status changes.\n        \"\"\"\n        last_status = status = current_status\n        t0 = time.time()\n        while status != until_status:\n            tasks = ecs_client.describe_tasks(\n                tasks=[task_arn], cluster=cluster_arn, include=[\"TAGS\"]\n            )[\"tasks\"]\n\n            if tasks:\n                task = tasks[0]\n\n                status = task[\"lastStatus\"]\n                if status != last_status:\n                    logger.info(f\"ECS task status is {status}.\")\n\n                yield task\n\n                # No point in continuing if the status is final\n                if status == \"STOPPED\":\n                    break\n\n                last_status = status\n\n            else:\n                # Intermittently, the task will not be described. 
We wat to respect the\n                # watch timeout though.\n                logger.debug(\"Task not found.\")\n\n            elapsed_time = time.time() - t0\n            if timeout is not None and elapsed_time > timeout:\n                raise RuntimeError(\n                    f\"Timed out after {elapsed_time}s while watching task for status \"\n                    f\"{until_status or 'STOPPED'}.\"\n                )\n            time.sleep(configuration.task_watch_poll_interval)\n\n    def _get_or_generate_family(self, task_definition: dict, flow_run: FlowRun) -> str:\n        \"\"\"\n        Gets or generate a family for the task definition.\n        \"\"\"\n        family = task_definition.get(\"family\")\n        if not family:\n            assert self._work_pool_name and flow_run.deployment_id\n            family = (\n                f\"{ECS_DEFAULT_FAMILY}_{self._work_pool_name}_{flow_run.deployment_id}\"\n            )\n        slugify(\n            family,\n            max_length=255,\n            regex_pattern=r\"[^a-zA-Z0-9-_]+\",\n        )\n        return family\n\n    def _prepare_task_definition(\n        self,\n        configuration: ECSJobConfiguration,\n        region: str,\n        flow_run: FlowRun,\n    ) -> dict:\n        \"\"\"\n        Prepare a task definition by inferring any defaults and merging overrides.\n        \"\"\"\n        task_definition = copy.deepcopy(configuration.task_definition)\n\n        # Configure the Prefect runtime container\n        task_definition.setdefault(\"containerDefinitions\", [])\n\n        # Remove empty container definitions\n        task_definition[\"containerDefinitions\"] = [\n            d for d in task_definition[\"containerDefinitions\"] if d\n        ]\n\n        container_name = configuration.container_name\n        if not container_name:\n            container_name = (\n                _container_name_from_task_definition(task_definition)\n                or ECS_DEFAULT_CONTAINER_NAME\n            )\n\n        container = _get_container(\n            task_definition[\"containerDefinitions\"], container_name\n        )\n        if container is None:\n            if container_name != ECS_DEFAULT_CONTAINER_NAME:\n                raise ValueError(\n                    f\"Container {container_name!r} not found in task definition.\"\n                )\n\n            # Look for a container without a name\n            for container in task_definition[\"containerDefinitions\"]:\n                if \"name\" not in container:\n                    container[\"name\"] = container_name\n                    break\n            else:\n                container = {\"name\": container_name}\n                task_definition[\"containerDefinitions\"].append(container)\n\n        # Image is required so make sure it's present\n        container.setdefault(\"image\", get_prefect_image_name())\n\n        # Remove any keys that have been explicitly \"unset\"\n        unset_keys = {key for key, value in configuration.env.items() if value is None}\n        for item in tuple(container.get(\"environment\", [])):\n            if item[\"name\"] in unset_keys or item[\"value\"] is None:\n                container[\"environment\"].remove(item)\n\n        if configuration.configure_cloudwatch_logs:\n            prefix = f\"prefect-logs_{self._work_pool_name}_{flow_run.deployment_id}\"\n            container[\"logConfiguration\"] = {\n                \"logDriver\": \"awslogs\",\n                \"options\": {\n                    
\"awslogs-create-group\": \"true\",\n                    \"awslogs-group\": \"prefect\",\n                    \"awslogs-region\": region,\n                    \"awslogs-stream-prefix\": (\n                        configuration.cloudwatch_logs_prefix or prefix\n                    ),\n                    **configuration.cloudwatch_logs_options,\n                },\n            }\n\n        task_definition[\"family\"] = self._get_or_generate_family(\n            task_definition, flow_run\n        )\n        # CPU and memory are required in some cases, retrieve the value to use\n        cpu = task_definition.get(\"cpu\") or ECS_DEFAULT_CPU\n        memory = task_definition.get(\"memory\") or ECS_DEFAULT_MEMORY\n\n        launch_type = configuration.task_run_request.get(\n            \"launchType\", ECS_DEFAULT_LAUNCH_TYPE\n        )\n\n        if launch_type == \"FARGATE\" or launch_type == \"FARGATE_SPOT\":\n            # Task level memory and cpu are required when using fargate\n            task_definition[\"cpu\"] = str(cpu)\n            task_definition[\"memory\"] = str(memory)\n\n            # The FARGATE compatibility is required if it will be used as as launch type\n            requires_compatibilities = task_definition.setdefault(\n                \"requiresCompatibilities\", []\n            )\n            if \"FARGATE\" not in requires_compatibilities:\n                task_definition[\"requiresCompatibilities\"].append(\"FARGATE\")\n\n            # Only the 'awsvpc' network mode is supported when using FARGATE\n            # However, we will not enforce that here if the user has set it\n            task_definition.setdefault(\"networkMode\", \"awsvpc\")\n\n        elif launch_type == \"EC2\":\n            # Container level memory and cpu are required when using ec2\n            container.setdefault(\"cpu\", cpu)\n            container.setdefault(\"memory\", memory)\n\n            # Ensure set values are cast to integers\n            container[\"cpu\"] = int(container[\"cpu\"])\n            container[\"memory\"] = int(container[\"memory\"])\n\n        # Ensure set values are cast to strings\n        if task_definition.get(\"cpu\"):\n            task_definition[\"cpu\"] = str(task_definition[\"cpu\"])\n        if task_definition.get(\"memory\"):\n            task_definition[\"memory\"] = str(task_definition[\"memory\"])\n\n        return task_definition\n\n    def _load_network_configuration(\n        self, vpc_id: Optional[str], configuration: ECSJobConfiguration\n    ) -> dict:\n        \"\"\"\n        Load settings from a specific VPC or the default VPC and generate a task\n        run request's network configuration.\n        \"\"\"\n        ec2_client = self._get_client(configuration, \"ec2\")\n        vpc_message = \"the default VPC\" if not vpc_id else f\"VPC with ID {vpc_id}\"\n\n        if not vpc_id:\n            # Retrieve the default VPC\n            describe = {\"Filters\": [{\"Name\": \"isDefault\", \"Values\": [\"true\"]}]}\n        else:\n            describe = {\"VpcIds\": [vpc_id]}\n\n        vpcs = ec2_client.describe_vpcs(**describe)[\"Vpcs\"]\n        if not vpcs:\n            help_message = (\n                \"Pass an explicit `vpc_id` or configure a default VPC.\"\n                if not vpc_id\n                else \"Check that the VPC exists in the current region.\"\n            )\n            raise ValueError(\n                f\"Failed to find {vpc_message}. \"\n                \"Network configuration cannot be inferred. 
\" + help_message\n            )\n\n        vpc_id = vpcs[0][\"VpcId\"]\n        subnets = ec2_client.describe_subnets(\n            Filters=[{\"Name\": \"vpc-id\", \"Values\": [vpc_id]}]\n        )[\"Subnets\"]\n        if not subnets:\n            raise ValueError(\n                f\"Failed to find subnets for {vpc_message}. \"\n                \"Network configuration cannot be inferred.\"\n            )\n\n        return {\n            \"awsvpcConfiguration\": {\n                \"subnets\": [s[\"SubnetId\"] for s in subnets],\n                \"assignPublicIp\": \"ENABLED\",\n                \"securityGroups\": [],\n            }\n        }\n\n    def _custom_network_configuration(\n        self,\n        vpc_id: str,\n        network_configuration: dict,\n        configuration: ECSJobConfiguration,\n    ) -> dict:\n        \"\"\"\n        Load settings from a specific VPC or the default VPC and generate a task\n        run request's network configuration.\n        \"\"\"\n        ec2_client = self._get_client(configuration, \"ec2\")\n        vpc_message = f\"VPC with ID {vpc_id}\"\n\n        vpcs = ec2_client.describe_vpcs(VpcIds=[vpc_id]).get(\"Vpcs\")\n\n        if not vpcs:\n            raise ValueError(\n                f\"Failed to find {vpc_message}. \"\n                + \"Network configuration cannot be inferred. \"\n                + \"Pass an explicit `vpc_id`.\"\n            )\n\n        vpc_id = vpcs[0][\"VpcId\"]\n        subnets = ec2_client.describe_subnets(\n            Filters=[{\"Name\": \"vpc-id\", \"Values\": [vpc_id]}]\n        )[\"Subnets\"]\n\n        if not subnets:\n            raise ValueError(\n                f\"Failed to find subnets for {vpc_message}. \"\n                + \"Network configuration cannot be inferred.\"\n            )\n\n        subnet_ids = [subnet[\"SubnetId\"] for subnet in subnets]\n\n        config_subnets = network_configuration.get(\"subnets\", [])\n        if not all(conf_sn in subnet_ids for conf_sn in config_subnets):\n            raise ValueError(\n                f\"Subnets {config_subnets} not found within {vpc_message}.\"\n                + \"Please check that VPC is associated with supplied subnets.\"\n            )\n\n        return {\"awsvpcConfiguration\": network_configuration}\n\n    def _prepare_task_run_request(\n        self,\n        configuration: ECSJobConfiguration,\n        task_definition: dict,\n        task_definition_arn: str,\n    ) -> dict:\n        \"\"\"\n        Prepare a task run request payload.\n        \"\"\"\n        task_run_request = deepcopy(configuration.task_run_request)\n\n        task_run_request.setdefault(\"taskDefinition\", task_definition_arn)\n        assert task_run_request[\"taskDefinition\"] == task_definition_arn\n        capacityProviderStrategy = task_run_request.get(\"capacityProviderStrategy\")\n\n        if capacityProviderStrategy:\n            # Should not be provided at all if capacityProviderStrategy is set, see https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RunTask.html#ECS-RunTask-request-capacityProviderStrategy  # noqa\n            self._logger.warning(\n                \"Found capacityProviderStrategy. 
\"\n                \"Removing launchType from task run request.\"\n            )\n            task_run_request.pop(\"launchType\", None)\n\n        elif task_run_request.get(\"launchType\") == \"FARGATE_SPOT\":\n            # Should not be provided at all for FARGATE SPOT\n            task_run_request.pop(\"launchType\", None)\n\n            # A capacity provider strategy is required for FARGATE SPOT\n            task_run_request[\"capacityProviderStrategy\"] = [\n                {\"capacityProvider\": \"FARGATE_SPOT\", \"weight\": 1}\n            ]\n        overrides = task_run_request.get(\"overrides\", {})\n        container_overrides = overrides.get(\"containerOverrides\", [])\n\n        # Ensure the network configuration is present if using awsvpc for network mode\n        if (\n            task_definition.get(\"networkMode\") == \"awsvpc\"\n            and not task_run_request.get(\"networkConfiguration\")\n            and not configuration.network_configuration\n        ):\n            task_run_request[\"networkConfiguration\"] = self._load_network_configuration(\n                configuration.vpc_id, configuration\n            )\n\n        # Use networkConfiguration if supplied by user\n        if (\n            task_definition.get(\"networkMode\") == \"awsvpc\"\n            and configuration.network_configuration\n            and configuration.vpc_id\n        ):\n            task_run_request[\n                \"networkConfiguration\"\n            ] = self._custom_network_configuration(\n                configuration.vpc_id,\n                configuration.network_configuration,\n                configuration,\n            )\n\n        # Ensure the container name is set if not provided at template time\n\n        container_name = (\n            configuration.container_name\n            or _container_name_from_task_definition(task_definition)\n            or ECS_DEFAULT_CONTAINER_NAME\n        )\n\n        if container_overrides and not container_overrides[0].get(\"name\"):\n            container_overrides[0][\"name\"] = container_name\n\n        # Ensure configuration command is respected post-templating\n\n        orchestration_container = _get_container(container_overrides, container_name)\n\n        if orchestration_container:\n            # Override the command if given on the configuration\n            if configuration.command:\n                orchestration_container[\"command\"] = configuration.command\n\n        # Clean up templated variable formatting\n\n        for container in container_overrides:\n            if isinstance(container.get(\"command\"), str):\n                container[\"command\"] = shlex.split(container[\"command\"])\n            if isinstance(container.get(\"environment\"), dict):\n                container[\"environment\"] = [\n                    {\"name\": k, \"value\": v} for k, v in container[\"environment\"].items()\n                ]\n\n            # Remove null values \u2014 they're not allowed by AWS\n            container[\"environment\"] = [\n                item\n                for item in container.get(\"environment\", [])\n                if item[\"value\"] is not None\n            ]\n\n        if isinstance(task_run_request.get(\"tags\"), dict):\n            task_run_request[\"tags\"] = [\n                {\"key\": k, \"value\": v} for k, v in task_run_request[\"tags\"].items()\n            ]\n\n        if overrides.get(\"cpu\"):\n            overrides[\"cpu\"] = str(overrides[\"cpu\"])\n\n        if overrides.get(\"memory\"):\n          
  overrides[\"memory\"] = str(overrides[\"memory\"])\n\n        # Ensure configuration tags and env are respected post-templating\n\n        tags = [\n            item\n            for item in task_run_request.get(\"tags\", [])\n            if item[\"key\"] not in configuration.labels.keys()\n        ] + [\n            {\"key\": k, \"value\": v}\n            for k, v in configuration.labels.items()\n            if v is not None\n        ]\n\n        # Slugify tags keys and values\n        tags = [\n            {\n                \"key\": slugify(\n                    item[\"key\"],\n                    regex_pattern=_TAG_REGEX,\n                    allow_unicode=True,\n                    lowercase=False,\n                ),\n                \"value\": slugify(\n                    item[\"value\"],\n                    regex_pattern=_TAG_REGEX,\n                    allow_unicode=True,\n                    lowercase=False,\n                ),\n            }\n            for item in tags\n        ]\n\n        if tags:\n            task_run_request[\"tags\"] = tags\n\n        if orchestration_container:\n            environment = [\n                item\n                for item in orchestration_container.get(\"environment\", [])\n                if item[\"name\"] not in configuration.env.keys()\n            ] + [\n                {\"name\": k, \"value\": v}\n                for k, v in configuration.env.items()\n                if v is not None\n            ]\n            if environment:\n                orchestration_container[\"environment\"] = environment\n\n        # Remove empty container overrides\n\n        overrides[\"containerOverrides\"] = [v for v in container_overrides if v]\n\n        return task_run_request\n\n    @retry(\n        stop=stop_after_attempt(MAX_CREATE_TASK_RUN_ATTEMPTS),\n        wait=wait_fixed(CREATE_TASK_RUN_MIN_DELAY_SECONDS)\n        + wait_random(\n            CREATE_TASK_RUN_MIN_DELAY_JITTER_SECONDS,\n            CREATE_TASK_RUN_MAX_DELAY_JITTER_SECONDS,\n        ),\n        reraise=True,\n    )\n    def _create_task_run(self, ecs_client: _ECSClient, task_run_request: dict) -> str:\n        \"\"\"\n        Create a run of a task definition.\n\n        Returns the task run ARN.\n        \"\"\"\n        task = ecs_client.run_task(**task_run_request)\n        if task[\"failures\"]:\n            raise RuntimeError(\n                f\"Failed to run ECS task: {task['failures'][0]['reason']}\"\n            )\n        elif not task[\"tasks\"]:\n            raise RuntimeError(\n                \"Failed to run ECS task: no tasks or failures were returned.\"\n            )\n        return task[\"tasks\"][0]\n\n    def _task_definitions_equal(self, taskdef_1, taskdef_2) -> bool:\n        \"\"\"\n        Compare two task definitions.\n\n        Since one may come from the AWS API and have populated defaults, we do our best\n        to homogenize the definitions without changing their meaning.\n        \"\"\"\n        if taskdef_1 == taskdef_2:\n            return True\n\n        if taskdef_1 is None or taskdef_2 is None:\n            return False\n\n        taskdef_1 = copy.deepcopy(taskdef_1)\n        taskdef_2 = copy.deepcopy(taskdef_2)\n\n        for taskdef in (taskdef_1, taskdef_2):\n            # Set defaults that AWS would set after registration\n            container_definitions = taskdef.get(\"containerDefinitions\", [])\n            essential = any(\n                container.get(\"essential\") for container in container_definitions\n            )\n           
 if not essential:\n                container_definitions[0].setdefault(\"essential\", True)\n\n            taskdef.setdefault(\"networkMode\", \"bridge\")\n\n        _drop_empty_keys_from_task_definition(taskdef_1)\n        _drop_empty_keys_from_task_definition(taskdef_2)\n\n        # Clear fields that change on registration for comparison\n        for field in ECS_POST_REGISTRATION_FIELDS:\n            taskdef_1.pop(field, None)\n            taskdef_2.pop(field, None)\n\n        return taskdef_1 == taskdef_2\n\n    async def kill_infrastructure(\n        self,\n        configuration: ECSJobConfiguration,\n        infrastructure_pid: str,\n        grace_seconds: int = 30,\n    ) -> None:\n        \"\"\"\n        Kill a task running on ECS.\n\n        Args:\n            infrastructure_pid: A cluster and task arn combination. This should match a\n                value yielded by `ECSWorker.run`.\n        \"\"\"\n        if grace_seconds != 30:\n            self._logger.warning(\n                f\"Kill grace period of {grace_seconds}s requested, but AWS does not \"\n                \"support dynamic grace period configuration so 30s will be used. \"\n                \"See https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html for configuration of grace periods.\"  # noqa\n            )\n        cluster, task = parse_identifier(infrastructure_pid)\n        await run_sync_in_worker_thread(self._stop_task, configuration, cluster, task)\n\n    def _stop_task(\n        self, configuration: ECSJobConfiguration, cluster: str, task: str\n    ) -> None:\n        \"\"\"\n        Stop a running ECS task.\n        \"\"\"\n        if configuration.cluster is not None and cluster != configuration.cluster:\n            raise InfrastructureNotAvailable(\n                \"Cannot stop ECS task: this infrastructure block has access to \"\n                f\"cluster {configuration.cluster!r} but the task is running in cluster \"\n                f\"{cluster!r}.\"\n            )\n\n        ecs_client = self._get_client(configuration, \"ecs\")\n        try:\n            ecs_client.stop_task(cluster=cluster, task=task)\n        except Exception as exc:\n            # Raise a special exception if the task does not exist\n            if \"ClusterNotFound\" in str(exc):\n                raise InfrastructureNotFound(\n                    f\"Cannot stop ECS task: the cluster {cluster!r} could not be found.\"\n                ) from exc\n            if \"not find task\" in str(exc) or \"referenced task was not found\" in str(\n                exc\n            ):\n                raise InfrastructureNotFound(\n                    f\"Cannot stop ECS task: the task {task!r} could not be found in \"\n                    f\"cluster {cluster!r}.\"\n                ) from exc\n            if \"no registered tasks\" in str(exc):\n                raise InfrastructureNotFound(\n                    f\"Cannot stop ECS task: the cluster {cluster!r} has no tasks.\"\n                ) from exc\n\n            # Reraise unknown exceptions\n            raise\n
        "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSWorker.kill_infrastructure","title":"kill_infrastructure async","text":"

        Kill a task running on ECS.

        Parameters:

        infrastructure_pid (str, required): A cluster and task arn combination. This should match a value yielded by ECSWorker.run.

        Source code in prefect_aws/workers/ecs_worker.py
        async def kill_infrastructure(\n    self,\n    configuration: ECSJobConfiguration,\n    infrastructure_pid: str,\n    grace_seconds: int = 30,\n) -> None:\n    \"\"\"\n    Kill a task running on ECS.\n\n    Args:\n        infrastructure_pid: A cluster and task arn combination. This should match a\n            value yielded by `ECSWorker.run`.\n    \"\"\"\n    if grace_seconds != 30:\n        self._logger.warning(\n            f\"Kill grace period of {grace_seconds}s requested, but AWS does not \"\n            \"support dynamic grace period configuration so 30s will be used. \"\n            \"See https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html for configuration of grace periods.\"  # noqa\n        )\n    cluster, task = parse_identifier(infrastructure_pid)\n    await run_sync_in_worker_thread(self._stop_task, configuration, cluster, task)\n
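        For illustration only, a minimal sketch of stopping a task by its identifier. It assumes an ECSWorker instance (`worker`) and an ECSJobConfiguration (`configuration`) already exist; the cluster name and task ARN are placeholders.

        ```python
        # Sketch: stop an ECS task using the identifier yielded by ECSWorker.run.
        # `worker` and `configuration` are assumed to exist; the ARN below is a placeholder.
        async def stop_task(worker, configuration):
            identifier = (
                "my-cluster::arn:aws:ecs:us-east-1:123456789012:task/my-cluster/abc123"
            )
            # Values other than 30 only emit a warning; AWS uses a fixed 30s grace period.
            await worker.kill_infrastructure(
                configuration=configuration,
                infrastructure_pid=identifier,
                grace_seconds=30,
            )
        ```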
        "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSWorker.run","title":"run async","text":"

        Runs a given flow run on the current worker.

        Source code in prefect_aws/workers/ecs_worker.py
        async def run(\n    self,\n    flow_run: \"FlowRun\",\n    configuration: ECSJobConfiguration,\n    task_status: Optional[anyio.abc.TaskStatus] = None,\n) -> ECSWorkerResult:\n    \"\"\"\n    Runs a given flow run on the current worker.\n    \"\"\"\n    ecs_client = await run_sync_in_worker_thread(\n        self._get_client, configuration, \"ecs\"\n    )\n\n    logger = self.get_flow_run_logger(flow_run)\n\n    (\n        task_arn,\n        cluster_arn,\n        task_definition,\n        is_new_task_definition,\n    ) = await run_sync_in_worker_thread(\n        self._create_task_and_wait_for_start,\n        logger,\n        ecs_client,\n        configuration,\n        flow_run,\n    )\n\n    # The task identifier is \"{cluster}::{task}\" where we use the configured cluster\n    # if set to preserve matching by name rather than arn\n    # Note \"::\" is used despite the Prefect standard being \":\" because ARNs contain\n    # single colons.\n    identifier = (\n        (configuration.cluster if configuration.cluster else cluster_arn)\n        + \"::\"\n        + task_arn\n    )\n\n    if task_status:\n        task_status.started(identifier)\n\n    status_code = await run_sync_in_worker_thread(\n        self._watch_task_and_get_exit_code,\n        logger,\n        configuration,\n        task_arn,\n        cluster_arn,\n        task_definition,\n        is_new_task_definition and configuration.auto_deregister_task_definition,\n        ecs_client,\n    )\n\n    return ECSWorkerResult(\n        identifier=identifier,\n        # If the container does not start the exit code can be null but we must\n        # still report a status code. We use a -1 to indicate a special code.\n        status_code=status_code if status_code is not None else -1,\n    )\n
        "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.ECSWorkerResult","title":"ECSWorkerResult","text":"

        Bases: BaseWorkerResult

        The result of an ECS job.

        Source code in prefect_aws/workers/ecs_worker.py
        class ECSWorkerResult(BaseWorkerResult):\n    \"\"\"\n    The result of an ECS job.\n    \"\"\"\n
        "},{"location":"integrations/prefect-aws/ecs_worker/#prefect_aws.workers.ecs_worker.parse_identifier","title":"parse_identifier","text":"

        Splits identifier into its cluster and task components, e.g. input \"cluster_name::task_arn\" outputs (\"cluster_name\", \"task_arn\").

        Source code in prefect_aws/workers/ecs_worker.py
        def parse_identifier(identifier: str) -> ECSIdentifier:\n    \"\"\"\n    Splits identifier into its cluster and task components, e.g.\n    input \"cluster_name::task_arn\" outputs (\"cluster_name\", \"task_arn\").\n    \"\"\"\n    cluster, task = identifier.split(\"::\", maxsplit=1)\n    return ECSIdentifier(cluster, task)\n
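        A short usage example of parse_identifier; the cluster name and task ARN are placeholders.

        ```python
        from prefect_aws.workers.ecs_worker import parse_identifier

        # "::" separates the cluster from the task ARN because ARNs contain single colons.
        cluster, task = parse_identifier(
            "my-cluster::arn:aws:ecs:us-east-1:123456789012:task/my-cluster/abc123"
        )
        print(cluster)  # my-cluster
        print(task)     # arn:aws:ecs:us-east-1:123456789012:task/my-cluster/abc123
        ```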
        "},{"location":"integrations/prefect-aws/glue_job/","title":"Glue Job","text":""},{"location":"integrations/prefect-aws/glue_job/#prefect_aws.glue_job","title":"prefect_aws.glue_job","text":"

        Integrations with AWS Glue jobs.

        "},{"location":"integrations/prefect-aws/glue_job/#prefect_aws.glue_job.GlueJobBlock","title":"GlueJobBlock","text":"

        Bases: JobBlock

        Execute a job on the AWS Glue Job service.

        Attributes:

        job_name (str): The name of the job definition to use.

        arguments (Optional[dict]): The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself. You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes. Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager, or another secret management mechanism if you intend to keep them within the Job. doc

        job_watch_poll_interval (float): The amount of time to wait between AWS API calls while monitoring the state of a Glue Job. The default is 60s because jobs that use AWS Glue version 2.0 and later have a 1-minute minimum. AWS Glue Pricing

        Example

        Start a job on AWS Glue.

        from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.glue_job import GlueJobBlock\n\n\n@flow\ndef example_run_glue_job():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"your_access_key_id\",\n        aws_secret_access_key=\"your_secret_access_key\"\n    )\n    glue_job_run = GlueJobBlock(\n        job_name=\"your_glue_job_name\",\n        arguments={\"--YOUR_EXTRA_ARGUMENT\": \"YOUR_EXTRA_ARGUMENT_VALUE\"},\n    ).trigger()\n\n    return glue_job_run.wait_for_completion()\n\n\nexample_run_glue_job()\n

        Source code in prefect_aws/glue_job.py
        class GlueJobBlock(JobBlock):\n    \"\"\"Execute a job to the AWS Glue Job service.\n\n    Attributes:\n        job_name: The name of the job definition to use.\n        arguments: The job arguments associated with this run.\n            For this job run, they replace the default arguments set in the job\n            definition itself.\n            You can specify arguments here that your own job-execution script consumes,\n            as well as arguments that Glue itself consumes.\n            Job arguments may be logged. Do not pass plaintext secrets as arguments.\n            Retrieve secrets from a Glue Connection, Secrets Manager or other secret\n            management mechanism if you intend to keep them within the Job.\n            [doc](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-glue-arguments.html)\n        job_watch_poll_interval: The amount of time to wait between AWS API\n            calls while monitoring the state of a Glue Job.\n            default is 60s because of jobs that use AWS Glue versions 2.0 and later\n            have a 1-minute minimum.\n            [AWS Glue Pricing](https://aws.amazon.com/glue/pricing/?nc1=h_ls)\n\n    Example:\n        Start a job to AWS Glue Job.\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.glue_job import GlueJobBlock\n\n\n        @flow\n        def example_run_glue_job():\n            aws_credentials = AwsCredentials(\n                aws_access_key_id=\"your_access_key_id\",\n                aws_secret_access_key=\"your_secret_access_key\"\n            )\n            glue_job_run = GlueJobBlock(\n                job_name=\"your_glue_job_name\",\n                arguments={\"--YOUR_EXTRA_ARGUMENT\": \"YOUR_EXTRA_ARGUMENT_VALUE\"},\n            ).trigger()\n\n            return glue_job_run.wait_for_completion()\n\n\n        example_run_glue_job()\n        ```\n    \"\"\"\n\n    job_name: str = Field(\n        ...,\n        title=\"AWS Glue Job Name\",\n        description=\"The name of the job definition to use.\",\n    )\n\n    arguments: Optional[dict] = Field(\n        default=None,\n        title=\"AWS Glue Job Arguments\",\n        description=\"The job arguments associated with this run.\",\n    )\n    job_watch_poll_interval: float = Field(\n        default=60.0,\n        description=(\n            \"The amount of time to wait between AWS API calls while monitoring the \"\n            \"state of an Glue Job.\"\n        ),\n    )\n\n    aws_credentials: AwsCredentials = Field(\n        title=\"AWS Credentials\",\n        default_factory=AwsCredentials,\n        description=\"The AWS credentials to use to connect to Glue.\",\n    )\n\n    async def trigger(self) -> GlueJobRun:\n        \"\"\"trigger for GlueJobRun\"\"\"\n        client = self._get_client()\n        job_run_id = self._start_job(client)\n        return GlueJobRun(\n            job_name=self.job_name,\n            job_id=job_run_id,\n            job_watch_poll_interval=self.job_watch_poll_interval,\n        )\n\n    def _start_job(self, client: _GlueJobClient) -> str:\n        \"\"\"\n        Start the AWS Glue Job\n        [doc](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/glue/client/start_job_run.html)\n        \"\"\"\n        self.logger.info(\n            f\"starting job {self.job_name} with arguments {self.arguments}\"\n        )\n        try:\n            response = client.start_job_run(\n                
JobName=self.job_name,\n                Arguments=self.arguments,\n            )\n            job_run_id = str(response[\"JobRunId\"])\n            self.logger.info(f\"job started with job run id: {job_run_id}\")\n            return job_run_id\n        except Exception as e:\n            self.logger.error(f\"failed to start job: {e}\")\n            raise RuntimeError\n\n    def _get_client(self) -> _GlueJobClient:\n        \"\"\"\n        Retrieve a Glue Job Client\n        \"\"\"\n        boto_session = self.aws_credentials.get_boto3_session()\n        return boto_session.client(\"glue\")\n
        "},{"location":"integrations/prefect-aws/glue_job/#prefect_aws.glue_job.GlueJobBlock.trigger","title":"trigger async","text":"

        Trigger the Glue job and return a GlueJobRun.

        Source code in prefect_aws/glue_job.py
        async def trigger(self) -> GlueJobRun:\n    \"\"\"trigger for GlueJobRun\"\"\"\n    client = self._get_client()\n    job_run_id = self._start_job(client)\n    return GlueJobRun(\n        job_name=self.job_name,\n        job_id=job_run_id,\n        job_watch_poll_interval=self.job_watch_poll_interval,\n    )\n
        "},{"location":"integrations/prefect-aws/glue_job/#prefect_aws.glue_job.GlueJobRun","title":"GlueJobRun","text":"

        Bases: JobRun, BaseModel

        Execute a Glue Job

        Source code in prefect_aws/glue_job.py
        class GlueJobRun(JobRun, BaseModel):\n    \"\"\"Execute a Glue Job\"\"\"\n\n    job_name: str = Field(\n        ...,\n        title=\"AWS Glue Job Name\",\n        description=\"The name of the job definition to use.\",\n    )\n\n    job_id: str = Field(\n        ...,\n        title=\"AWS Glue Job ID\",\n        description=\"The ID of the job run.\",\n    )\n\n    job_watch_poll_interval: float = Field(\n        default=60.0,\n        description=(\n            \"The amount of time to wait between AWS API calls while monitoring the \"\n            \"state of an Glue Job.\"\n        ),\n    )\n\n    _error_states = [\"FAILED\", \"STOPPED\", \"ERROR\", \"TIMEOUT\"]\n\n    aws_credentials: AwsCredentials = Field(\n        title=\"AWS Credentials\",\n        default_factory=AwsCredentials,\n        description=\"The AWS credentials to use to connect to Glue.\",\n    )\n\n    client: _GlueJobClient = Field(default=None, description=\"\")\n\n    async def fetch_result(self) -> str:\n        \"\"\"fetch glue job state\"\"\"\n        job = self._get_job_run()\n        return job[\"JobRun\"][\"JobRunState\"]\n\n    def wait_for_completion(self) -> None:\n        \"\"\"\n        Wait for the job run to complete and get exit code\n        \"\"\"\n        self.logger.info(f\"watching job {self.job_name} with run id {self.job_id}\")\n        while True:\n            job = self._get_job_run()\n            job_state = job[\"JobRun\"][\"JobRunState\"]\n            if job_state in self._error_states:\n                # Generate a dynamic exception type from the AWS name\n                self.logger.error(f\"job failed: {job['JobRun']['ErrorMessage']}\")\n                raise RuntimeError(job[\"JobRun\"][\"ErrorMessage\"])\n            elif job_state == \"SUCCEEDED\":\n                self.logger.info(f\"job succeeded: {self.job_id}\")\n                break\n\n            time.sleep(self.job_watch_poll_interval)\n\n    def _get_job_run(self):\n        \"\"\"get glue job\"\"\"\n        return self.client.get_job_run(JobName=self.job_name, RunId=self.job_id)\n
        "},{"location":"integrations/prefect-aws/glue_job/#prefect_aws.glue_job.GlueJobRun.fetch_result","title":"fetch_result async","text":"

        Fetch the Glue job run state.

        Source code in prefect_aws/glue_job.py
        async def fetch_result(self) -> str:\n    \"\"\"fetch glue job state\"\"\"\n    job = self._get_job_run()\n    return job[\"JobRun\"][\"JobRunState\"]\n
        "},{"location":"integrations/prefect-aws/glue_job/#prefect_aws.glue_job.GlueJobRun.wait_for_completion","title":"wait_for_completion","text":"

        Wait for the job run to complete and get the exit code.

        Source code in prefect_aws/glue_job.py
        def wait_for_completion(self) -> None:\n    \"\"\"\n    Wait for the job run to complete and get exit code\n    \"\"\"\n    self.logger.info(f\"watching job {self.job_name} with run id {self.job_id}\")\n    while True:\n        job = self._get_job_run()\n        job_state = job[\"JobRun\"][\"JobRunState\"]\n        if job_state in self._error_states:\n            # Generate a dynamic exception type from the AWS name\n            self.logger.error(f\"job failed: {job['JobRun']['ErrorMessage']}\")\n            raise RuntimeError(job[\"JobRun\"][\"ErrorMessage\"])\n        elif job_state == \"SUCCEEDED\":\n            self.logger.info(f\"job succeeded: {self.job_id}\")\n            break\n\n        time.sleep(self.job_watch_poll_interval)\n
        "},{"location":"integrations/prefect-aws/lambda_function/","title":"Lambda","text":""},{"location":"integrations/prefect-aws/lambda_function/#prefect_aws.lambda_function","title":"prefect_aws.lambda_function","text":"

        Integrations with AWS Lambda.

        Examples:

        Run a lambda function with a payload\n\n```python\nLambdaFunction(\n    function_name=\"test-function\",\n    aws_credentials=aws_credentials,\n).invoke(payload={\"foo\": \"bar\"})\n```\n\nSpecify a version of a lambda function\n\n```python\nLambdaFunction(\n    function_name=\"test-function\",\n    qualifier=\"1\",\n    aws_credentials=aws_credentials,\n).invoke()\n```\n\nInvoke a lambda function asynchronously\n\n```python\nLambdaFunction(\n    function_name=\"test-function\",\n    aws_credentials=aws_credentials,\n).invoke(invocation_type=\"Event\")\n```\n\nInvoke a lambda function and return the last 4 KB of logs\n\n```python\nLambdaFunction(\n    function_name=\"test-function\",\n    aws_credentials=aws_credentials,\n).invoke(tail=True)\n```\n\nInvoke a lambda function with a client context\n\n```python\nLambdaFunction(\n    function_name=\"test-function\",\n    aws_credentials=aws_credentials,\n).invoke(client_context={\"bar\": \"foo\"})\n```\n
        "},{"location":"integrations/prefect-aws/lambda_function/#prefect_aws.lambda_function.LambdaFunction","title":"LambdaFunction","text":"

        Bases: Block

        Invoke a Lambda function. This block is part of the prefect-aws collection. Install prefect-aws with pip install prefect-aws to use this block.

        Attributes:

        function_name (str): The name, ARN, or partial ARN of the Lambda function to run. This must be the name of a function that is already deployed to AWS Lambda.

        qualifier (Optional[str]): The version or alias of the Lambda function to use when invoked. If not specified, the latest (unqualified) version of the Lambda function will be used.

        aws_credentials (AwsCredentials): The AWS credentials to use to connect to AWS Lambda with a default factory of AwsCredentials.

        Source code in prefect_aws/lambda_function.py
        class LambdaFunction(Block):\n    \"\"\"Invoke a Lambda function. This block is part of the prefect-aws\n    collection. Install prefect-aws with `pip install prefect-aws` to use this\n    block.\n\n    Attributes:\n        function_name: The name, ARN, or partial ARN of the Lambda function to\n            run. This must be the name of a function that is already deployed\n            to AWS Lambda.\n        qualifier: The version or alias of the Lambda function to use when\n            invoked. If not specified, the latest (unqualified) version of the\n            Lambda function will be used.\n        aws_credentials: The AWS credentials to use to connect to AWS Lambda\n            with a default factory of AwsCredentials.\n\n    \"\"\"\n\n    _block_type_name = \"Lambda Function\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/d74b16fe84ce626345adf235a47008fea2869a60-225x225.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-aws/s3/#prefect_aws.lambda_function.LambdaFunction\"  # noqa\n\n    function_name: str = Field(\n        title=\"Function Name\",\n        description=(\n            \"The name, ARN, or partial ARN of the Lambda function to run. This\"\n            \" must be the name of a function that is already deployed to AWS\"\n            \" Lambda.\"\n        ),\n    )\n    qualifier: Optional[str] = Field(\n        default=None,\n        title=\"Qualifier\",\n        description=(\n            \"The version or alias of the Lambda function to use when invoked. \"\n            \"If not specified, the latest (unqualified) version of the Lambda \"\n            \"function will be used.\"\n        ),\n    )\n    aws_credentials: AwsCredentials = Field(\n        title=\"AWS Credentials\",\n        default_factory=AwsCredentials,\n        description=\"The AWS credentials to invoke the Lambda with.\",\n    )\n\n    class Config:\n        \"\"\"Lambda's pydantic configuration.\"\"\"\n\n        smart_union = True\n\n    def _get_lambda_client(self):\n        \"\"\"\n        Retrieve a boto3 session and Lambda client\n        \"\"\"\n        boto_session = self.aws_credentials.get_boto3_session()\n        lambda_client = boto_session.client(\"lambda\")\n        return lambda_client\n\n    @sync_compatible\n    async def invoke(\n        self,\n        payload: dict = None,\n        invocation_type: Literal[\n            \"RequestResponse\", \"Event\", \"DryRun\"\n        ] = \"RequestResponse\",\n        tail: bool = False,\n        client_context: Optional[dict] = None,\n    ) -> dict:\n        \"\"\"\n        [Invoke](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda/client/invoke.html)\n        the Lambda function with the given payload.\n\n        Args:\n            payload: The payload to send to the Lambda function.\n            invocation_type: The invocation type of the Lambda function. This\n                can be one of \"RequestResponse\", \"Event\", or \"DryRun\". 
Uses\n                \"RequestResponse\" by default.\n            tail: If True, the response will include the base64-encoded last 4\n                KB of log data produced by the Lambda function.\n            client_context: The client context to send to the Lambda function.\n                Limited to 3583 bytes.\n\n        Returns:\n            The response from the Lambda function.\n\n        Examples:\n\n            ```python\n            from prefect_aws.lambda_function import LambdaFunction\n            from prefect_aws.credentials import AwsCredentials\n\n            credentials = AwsCredentials()\n            lambda_function = LambdaFunction(\n                function_name=\"test_lambda_function\",\n                aws_credentials=credentials,\n            )\n            response = lambda_function.invoke(\n                payload={\"foo\": \"bar\"},\n                invocation_type=\"RequestResponse\",\n            )\n            response[\"Payload\"].read()\n            ```\n            ```txt\n            b'{\"foo\": \"bar\"}'\n            ```\n\n        \"\"\"\n        # Add invocation arguments\n        kwargs = dict(FunctionName=self.function_name)\n\n        if payload:\n            kwargs[\"Payload\"] = json.dumps(payload).encode()\n\n        # Let boto handle invalid invocation types\n        kwargs[\"InvocationType\"] = invocation_type\n\n        if self.qualifier is not None:\n            kwargs[\"Qualifier\"] = self.qualifier\n\n        if tail:\n            kwargs[\"LogType\"] = \"Tail\"\n\n        if client_context is not None:\n            # For some reason this is string, but payload is bytes\n            kwargs[\"ClientContext\"] = json.dumps(client_context)\n\n        # Get client and invoke\n        lambda_client = await run_sync_in_worker_thread(self._get_lambda_client)\n        return await run_sync_in_worker_thread(lambda_client.invoke, **kwargs)\n
        "},{"location":"integrations/prefect-aws/lambda_function/#prefect_aws.lambda_function.LambdaFunction.Config","title":"Config","text":"

        Lambda's pydantic configuration.

        Source code in prefect_aws/lambda_function.py
        class Config:\n    \"\"\"Lambda's pydantic configuration.\"\"\"\n\n    smart_union = True\n
        "},{"location":"integrations/prefect-aws/lambda_function/#prefect_aws.lambda_function.LambdaFunction.invoke","title":"invoke async","text":"

        Invoke the Lambda function with the given payload.

        Parameters:

        payload (dict, default None): The payload to send to the Lambda function.

        invocation_type (Literal['RequestResponse', 'Event', 'DryRun'], default 'RequestResponse'): The invocation type of the Lambda function. This can be one of "RequestResponse", "Event", or "DryRun". Uses "RequestResponse" by default.

        tail (bool, default False): If True, the response will include the base64-encoded last 4 KB of log data produced by the Lambda function.

        client_context (Optional[dict], default None): The client context to send to the Lambda function. Limited to 3583 bytes.

        Returns:

        dict: The response from the Lambda function.

        ```python\nfrom prefect_aws.lambda_function import LambdaFunction\nfrom prefect_aws.credentials import AwsCredentials\n\ncredentials = AwsCredentials()\nlambda_function = LambdaFunction(\n    function_name=\"test_lambda_function\",\n    aws_credentials=credentials,\n)\nresponse = lambda_function.invoke(\n    payload={\"foo\": \"bar\"},\n    invocation_type=\"RequestResponse\",\n)\nresponse[\"Payload\"].read()\n```\n```txt\nb'{\"foo\": \"bar\"}'\n```\n
        Source code in prefect_aws/lambda_function.py
        @sync_compatible\nasync def invoke(\n    self,\n    payload: dict = None,\n    invocation_type: Literal[\n        \"RequestResponse\", \"Event\", \"DryRun\"\n    ] = \"RequestResponse\",\n    tail: bool = False,\n    client_context: Optional[dict] = None,\n) -> dict:\n    \"\"\"\n    [Invoke](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda/client/invoke.html)\n    the Lambda function with the given payload.\n\n    Args:\n        payload: The payload to send to the Lambda function.\n        invocation_type: The invocation type of the Lambda function. This\n            can be one of \"RequestResponse\", \"Event\", or \"DryRun\". Uses\n            \"RequestResponse\" by default.\n        tail: If True, the response will include the base64-encoded last 4\n            KB of log data produced by the Lambda function.\n        client_context: The client context to send to the Lambda function.\n            Limited to 3583 bytes.\n\n    Returns:\n        The response from the Lambda function.\n\n    Examples:\n\n        ```python\n        from prefect_aws.lambda_function import LambdaFunction\n        from prefect_aws.credentials import AwsCredentials\n\n        credentials = AwsCredentials()\n        lambda_function = LambdaFunction(\n            function_name=\"test_lambda_function\",\n            aws_credentials=credentials,\n        )\n        response = lambda_function.invoke(\n            payload={\"foo\": \"bar\"},\n            invocation_type=\"RequestResponse\",\n        )\n        response[\"Payload\"].read()\n        ```\n        ```txt\n        b'{\"foo\": \"bar\"}'\n        ```\n\n    \"\"\"\n    # Add invocation arguments\n    kwargs = dict(FunctionName=self.function_name)\n\n    if payload:\n        kwargs[\"Payload\"] = json.dumps(payload).encode()\n\n    # Let boto handle invalid invocation types\n    kwargs[\"InvocationType\"] = invocation_type\n\n    if self.qualifier is not None:\n        kwargs[\"Qualifier\"] = self.qualifier\n\n    if tail:\n        kwargs[\"LogType\"] = \"Tail\"\n\n    if client_context is not None:\n        # For some reason this is string, but payload is bytes\n        kwargs[\"ClientContext\"] = json.dumps(client_context)\n\n    # Get client and invoke\n    lambda_client = await run_sync_in_worker_thread(self._get_lambda_client)\n    return await run_sync_in_worker_thread(lambda_client.invoke, **kwargs)\n
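        As a complement to the example above, a small sketch of reading the log tail when tail=True; the function name is a placeholder and the block assumes ambient AWS credentials.

        ```python
        import base64

        from prefect_aws.credentials import AwsCredentials
        from prefect_aws.lambda_function import LambdaFunction

        lambda_function = LambdaFunction(
            function_name="test_lambda_function",  # placeholder function name
            aws_credentials=AwsCredentials(),
        )

        # With tail=True, the last 4 KB of logs come back base64-encoded in "LogResult".
        response = lambda_function.invoke(payload={"foo": "bar"}, tail=True)
        print(base64.b64decode(response["LogResult"]).decode())
        ```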
        "},{"location":"integrations/prefect-aws/s3/","title":"S3","text":""},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3","title":"prefect_aws.s3","text":"

        Tasks for interacting with AWS S3

        "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket","title":"S3Bucket","text":"

        Bases: WritableFileSystem, WritableDeploymentStorage, ObjectStorageBlock

        Block used to store data using AWS S3 or S3-compatible object storage like MinIO.

        Attributes:

        bucket_name (str): Name of your bucket.

        credentials (Union[MinIOCredentials, AwsCredentials]): A block containing your credentials to AWS or MinIO.

        bucket_folder (str): A default path to a folder within the S3 bucket to use for reading and writing objects.
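        A brief sketch of constructing the block from these attributes and round-tripping a small object; the credentials, bucket name, and folder are placeholders.

        ```python
        from prefect_aws import AwsCredentials
        from prefect_aws.s3 import S3Bucket

        aws_creds = AwsCredentials(
            aws_access_key_id="your_access_key_id",
            aws_secret_access_key="your_secret_access_key",
        )

        s3_bucket = S3Bucket(
            bucket_name="my-bucket",      # placeholder bucket name
            credentials=aws_creds,
            bucket_folder="subfolder",    # optional default prefix for reads and writes
        )

        # Paths are resolved relative to bucket_folder, so this writes "subfolder/notes.txt".
        s3_bucket.write_path("notes.txt", content=b"hello")
        print(s3_bucket.read_path("notes.txt"))  # b"hello"
        ```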

        Source code in prefect_aws/s3.py
        class S3Bucket(WritableFileSystem, WritableDeploymentStorage, ObjectStorageBlock):\n\n    \"\"\"\n    Block used to store data using AWS S3 or S3-compatible object storage like MinIO.\n\n    Attributes:\n        bucket_name: Name of your bucket.\n        credentials: A block containing your credentials to AWS or MinIO.\n        bucket_folder: A default path to a folder within the S3 bucket to use\n            for reading and writing objects.\n    \"\"\"\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/d74b16fe84ce626345adf235a47008fea2869a60-225x225.png\"  # noqa\n    _block_type_name = \"S3 Bucket\"\n    _documentation_url = (\n        \"https://prefecthq.github.io/prefect-aws/s3/#prefect_aws.s3.S3Bucket\"  # noqa\n    )\n\n    bucket_name: str = Field(default=..., description=\"Name of your bucket.\")\n\n    credentials: Union[MinIOCredentials, AwsCredentials] = Field(\n        default_factory=AwsCredentials,\n        description=\"A block containing your credentials to AWS or MinIO.\",\n    )\n\n    bucket_folder: str = Field(\n        default=\"\",\n        description=(\n            \"A default path to a folder within the S3 bucket to use \"\n            \"for reading and writing objects.\"\n        ),\n    )\n\n    # Property to maintain compatibility with storage block based deployments\n    @property\n    def basepath(self) -> str:\n        \"\"\"\n        The base path of the S3 bucket.\n\n        Returns:\n            str: The base path of the S3 bucket.\n        \"\"\"\n        return self.bucket_folder\n\n    @basepath.setter\n    def basepath(self, value: str) -> None:\n        self.bucket_folder = value\n\n    def _resolve_path(self, path: str) -> str:\n        \"\"\"\n        A helper function used in write_path to join `self.basepath` and `path`.\n\n        Args:\n\n            path: Name of the key, e.g. \"file1\". Each object in your\n                bucket has a unique key (or key name).\n\n        \"\"\"\n        # If bucket_folder provided, it means we won't write to the root dir of\n        # the bucket. 
So we need to add it on the front of the path.\n        #\n        # AWS object key naming guidelines require '/' for bucket folders.\n        # Get POSIX path to prevent `pathlib` from inferring '\\' on Windows OS\n        path = (\n            (Path(self.bucket_folder) / path).as_posix() if self.bucket_folder else path\n        )\n\n        return path\n\n    def _get_s3_client(self) -> boto3.client:\n        \"\"\"\n        Authenticate MinIO credentials or AWS credentials and return an S3 client.\n        This is a helper function called by read_path() or write_path().\n        \"\"\"\n        return self.credentials.get_client(\"s3\")\n\n    def _get_bucket_resource(self) -> boto3.resource:\n        \"\"\"\n        Retrieves boto3 resource object for the configured bucket\n        \"\"\"\n        params_override = self.credentials.aws_client_parameters.get_params_override()\n        bucket = (\n            self.credentials.get_boto3_session()\n            .resource(\"s3\", **params_override)\n            .Bucket(self.bucket_name)\n        )\n        return bucket\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> None:\n        \"\"\"\n        Copies a folder from the configured S3 bucket to a local directory.\n\n        Defaults to copying the entire contents of the block's basepath to the current\n        working directory.\n\n        Args:\n            from_path: Path in S3 bucket to download from. Defaults to the block's\n                configured basepath.\n            local_path: Local path to download S3 contents to. Defaults to the current\n                working directory.\n        \"\"\"\n        bucket_folder = self.bucket_folder\n        if from_path is None:\n            from_path = str(bucket_folder) if bucket_folder else \"\"\n\n        if local_path is None:\n            local_path = str(Path(\".\").absolute())\n        else:\n            local_path = str(Path(local_path).expanduser())\n\n        bucket = self._get_bucket_resource()\n        for obj in bucket.objects.filter(Prefix=from_path):\n            if obj.key[-1] == \"/\":\n                # object is a folder and will be created if it contains any objects\n                continue\n            target = os.path.join(\n                local_path,\n                os.path.relpath(obj.key, from_path),\n            )\n            os.makedirs(os.path.dirname(target), exist_ok=True)\n            bucket.download_file(obj.key, target)\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n    ) -> int:\n        \"\"\"\n        Uploads a directory from a given local path to the configured S3 bucket in a\n        given folder.\n\n        Defaults to uploading the entire contents the current working directory to the\n        block's basepath.\n\n        Args:\n            local_path: Path to local directory to upload from.\n            to_path: Path in S3 bucket to upload to. 
Defaults to block's configured\n                basepath.\n            ignore_file: Path to file containing gitignore style expressions for\n                filepaths to ignore.\n\n        \"\"\"\n        to_path = \"\" if to_path is None else to_path\n\n        if local_path is None:\n            local_path = \".\"\n\n        included_files = None\n        if ignore_file:\n            with open(ignore_file, \"r\") as f:\n                ignore_patterns = f.readlines()\n\n            included_files = filter_files(local_path, ignore_patterns)\n\n        uploaded_file_count = 0\n        for local_file_path in Path(local_path).expanduser().rglob(\"*\"):\n            if (\n                included_files is not None\n                and str(local_file_path.relative_to(local_path)) not in included_files\n            ):\n                continue\n            elif not local_file_path.is_dir():\n                remote_file_path = Path(to_path) / local_file_path.relative_to(\n                    local_path\n                )\n                with open(local_file_path, \"rb\") as local_file:\n                    local_file_content = local_file.read()\n\n                await self.write_path(\n                    remote_file_path.as_posix(), content=local_file_content\n                )\n                uploaded_file_count += 1\n\n        return uploaded_file_count\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        \"\"\"\n        Read specified path from S3 and return contents. Provide the entire\n        path to the key in S3.\n\n        Args:\n            path: Entire path to (and including) the key.\n\n        Example:\n            Read \"subfolder/file1\" contents from an S3 bucket named \"bucket\":\n            ```python\n            from prefect_aws import AwsCredentials\n            from prefect_aws.s3 import S3Bucket\n\n            aws_creds = AwsCredentials(\n                aws_access_key_id=AWS_ACCESS_KEY_ID,\n                aws_secret_access_key=AWS_SECRET_ACCESS_KEY\n            )\n\n            s3_bucket_block = S3Bucket(\n                bucket_name=\"bucket\",\n                credentials=aws_creds,\n                bucket_folder=\"subfolder\"\n            )\n\n            key_contents = s3_bucket_block.read_path(path=\"subfolder/file1\")\n            ```\n        \"\"\"\n        path = self._resolve_path(path)\n\n        return await run_sync_in_worker_thread(self._read_sync, path)\n\n    def _read_sync(self, key: str) -> bytes:\n        \"\"\"\n        Called by read_path(). Creates an S3 client and retrieves the\n        contents from  a specified path.\n        \"\"\"\n\n        s3_client = self._get_s3_client()\n\n        with io.BytesIO() as stream:\n            s3_client.download_fileobj(Bucket=self.bucket_name, Key=key, Fileobj=stream)\n            stream.seek(0)\n            output = stream.read()\n            return output\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        \"\"\"\n        Writes to an S3 bucket.\n\n        Args:\n\n            path: The key name. 
Each object in your bucket has a unique\n                key (or key name).\n            content: What you are uploading to S3.\n\n        Example:\n\n            Write data to the path `dogs/small_dogs/havanese` in an S3 Bucket:\n            ```python\n            from prefect_aws import MinioCredentials\n            from prefect_aws.s3 import S3Bucket\n\n            minio_creds = MinIOCredentials(\n                minio_root_user = \"minioadmin\",\n                minio_root_password = \"minioadmin\",\n            )\n\n            s3_bucket_block = S3Bucket(\n                bucket_name=\"bucket\",\n                minio_credentials=minio_creds,\n                bucket_folder=\"dogs/smalldogs\",\n                endpoint_url=\"http://localhost:9000\",\n            )\n            s3_havanese_path = s3_bucket_block.write_path(path=\"havanese\", content=data)\n            ```\n        \"\"\"\n\n        path = self._resolve_path(path)\n\n        await run_sync_in_worker_thread(self._write_sync, path, content)\n\n        return path\n\n    def _write_sync(self, key: str, data: bytes) -> None:\n        \"\"\"\n        Called by write_path(). Creates an S3 client and uploads a file\n        object.\n        \"\"\"\n\n        s3_client = self._get_s3_client()\n\n        with io.BytesIO(data) as stream:\n            s3_client.upload_fileobj(Fileobj=stream, Bucket=self.bucket_name, Key=key)\n\n    # NEW BLOCK INTERFACE METHODS BELOW\n    @staticmethod\n    def _list_objects_sync(page_iterator: PageIterator) -> List[Dict[str, Any]]:\n        \"\"\"\n        Synchronous method to collect S3 objects into a list\n\n        Args:\n            page_iterator: AWS Paginator for S3 objects\n\n        Returns:\n            List[Dict]: List of object information\n        \"\"\"\n        return [\n            content for page in page_iterator for content in page.get(\"Contents\", [])\n        ]\n\n    def _join_bucket_folder(self, bucket_path: str = \"\") -> str:\n        \"\"\"\n        Joins the base bucket folder to the bucket path.\n        NOTE: If a method reuses another method in this class, be careful to not\n        call this  twice because it'll join the bucket folder twice.\n        See https://github.com/PrefectHQ/prefect-aws/issues/141 for a past issue.\n        \"\"\"\n        if not self.bucket_folder and not bucket_path:\n            # there's a difference between \".\" and \"\", at least in the tests\n            return \"\"\n\n        bucket_path = str(bucket_path)\n        if self.bucket_folder != \"\" and bucket_path.startswith(self.bucket_folder):\n            self.logger.info(\n                f\"Bucket path {bucket_path!r} is already prefixed with \"\n                f\"bucket folder {self.bucket_folder!r}; is this intentional?\"\n            )\n\n        return (Path(self.bucket_folder) / bucket_path).as_posix() + (\n            \"\" if not bucket_path.endswith(\"/\") else \"/\"\n        )\n\n    @sync_compatible\n    async def list_objects(\n        self,\n        folder: str = \"\",\n        delimiter: str = \"\",\n        page_size: Optional[int] = None,\n        max_items: Optional[int] = None,\n        jmespath_query: Optional[str] = None,\n    ) -> List[Dict[str, Any]]:\n        \"\"\"\n        Args:\n            folder: Folder to list objects from.\n            delimiter: Character used to group keys of listed objects.\n            page_size: Number of objects to return in each request to the AWS API.\n            max_items: Maximum number of objects that to be returned by 
task.\n            jmespath_query: Query used to filter objects based on object attributes refer to\n                the [boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/paginators.html#filtering-results-with-jmespath)\n                for more information on how to construct queries.\n\n        Returns:\n            List of objects and their metadata in the bucket.\n\n        Examples:\n            List objects under the `base_folder`.\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            s3_bucket.list_objects(\"base_folder\")\n            ```\n        \"\"\"  # noqa: E501\n        bucket_path = self._join_bucket_folder(folder)\n        client = self.credentials.get_s3_client()\n        paginator = client.get_paginator(\"list_objects_v2\")\n        page_iterator = paginator.paginate(\n            Bucket=self.bucket_name,\n            Prefix=bucket_path,\n            Delimiter=delimiter,\n            PaginationConfig={\"PageSize\": page_size, \"MaxItems\": max_items},\n        )\n        if jmespath_query:\n            page_iterator = page_iterator.search(f\"{jmespath_query} | {{Contents: @}}\")\n\n        self.logger.info(f\"Listing objects in bucket {bucket_path}.\")\n        objects = await run_sync_in_worker_thread(\n            self._list_objects_sync, page_iterator\n        )\n        return objects\n\n    @sync_compatible\n    async def download_object_to_path(\n        self,\n        from_path: str,\n        to_path: Optional[Union[str, Path]],\n        **download_kwargs: Dict[str, Any],\n    ) -> Path:\n        \"\"\"\n        Downloads an object from the S3 bucket to a path.\n\n        Args:\n            from_path: The path to the object to download; this gets prefixed\n                with the bucket_folder.\n            to_path: The path to download the object to. 
If not provided, the\n                object's name will be used.\n            **download_kwargs: Additional keyword arguments to pass to\n                `Client.download_file`.\n\n        Returns:\n            The absolute path that the object was downloaded to.\n\n        Examples:\n            Download my_folder/notes.txt object to notes.txt.\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            s3_bucket.download_object_to_path(\"my_folder/notes.txt\", \"notes.txt\")\n            ```\n        \"\"\"\n        if to_path is None:\n            to_path = Path(from_path).name\n\n        # making path absolute, but converting back to str here\n        # since !r looks nicer that way and filename arg expects str\n        to_path = str(Path(to_path).absolute())\n        bucket_path = self._join_bucket_folder(from_path)\n        client = self.credentials.get_s3_client()\n\n        self.logger.debug(\n            f\"Preparing to download object from bucket {self.bucket_name!r} \"\n            f\"path {bucket_path!r} to {to_path!r}.\"\n        )\n        await run_sync_in_worker_thread(\n            client.download_file,\n            Bucket=self.bucket_name,\n            Key=from_path,\n            Filename=to_path,\n            **download_kwargs,\n        )\n        self.logger.info(\n            f\"Downloaded object from bucket {self.bucket_name!r} path {bucket_path!r} \"\n            f\"to {to_path!r}.\"\n        )\n        return Path(to_path)\n\n    @sync_compatible\n    async def download_object_to_file_object(\n        self,\n        from_path: str,\n        to_file_object: BinaryIO,\n        **download_kwargs: Dict[str, Any],\n    ) -> BinaryIO:\n        \"\"\"\n        Downloads an object from the object storage service to a file-like object,\n        which can be a BytesIO object or a BufferedWriter.\n\n        Args:\n            from_path: The path to the object to download from; this gets prefixed\n                with the bucket_folder.\n            to_file_object: The file-like object to download the object to.\n            **download_kwargs: Additional keyword arguments to pass to\n                `Client.download_fileobj`.\n\n        Returns:\n            The file-like object that the object was downloaded to.\n\n        Examples:\n            Download my_folder/notes.txt object to a BytesIO object.\n            ```python\n            from io import BytesIO\n\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            with BytesIO() as buf:\n                s3_bucket.download_object_to_file_object(\"my_folder/notes.txt\", buf)\n            ```\n\n            Download my_folder/notes.txt object to a BufferedWriter.\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            with open(\"notes.txt\", \"wb\") as f:\n                s3_bucket.download_object_to_file_object(\"my_folder/notes.txt\", f)\n            ```\n        \"\"\"\n        client = self.credentials.get_s3_client()\n        bucket_path = self._join_bucket_folder(from_path)\n\n        self.logger.debug(\n            f\"Preparing to download object from bucket {self.bucket_name!r} \"\n            f\"path {bucket_path!r} to file object.\"\n        )\n        await run_sync_in_worker_thread(\n            client.download_fileobj,\n            Bucket=self.bucket_name,\n            Key=bucket_path,\n   
         Fileobj=to_file_object,\n            **download_kwargs,\n        )\n        self.logger.info(\n            f\"Downloaded object from bucket {self.bucket_name!r} path {bucket_path!r} \"\n            \"to file object.\"\n        )\n        return to_file_object\n\n    @sync_compatible\n    async def download_folder_to_path(\n        self,\n        from_folder: str,\n        to_folder: Optional[Union[str, Path]] = None,\n        **download_kwargs: Dict[str, Any],\n    ) -> Path:\n        \"\"\"\n        Downloads objects *within* a folder (excluding the folder itself)\n        from the S3 bucket to a folder.\n\n        Args:\n            from_folder: The path to the folder to download from.\n            to_folder: The path to download the folder to.\n            **download_kwargs: Additional keyword arguments to pass to\n                `Client.download_file`.\n\n        Returns:\n            The absolute path that the folder was downloaded to.\n\n        Examples:\n            Download my_folder to a local folder named my_folder.\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            s3_bucket.download_folder_to_path(\"my_folder\", \"my_folder\")\n            ```\n        \"\"\"\n        if to_folder is None:\n            to_folder = \"\"\n        to_folder = Path(to_folder).absolute()\n\n        client = self.credentials.get_s3_client()\n        objects = await self.list_objects(folder=from_folder)\n\n        # do not call self._join_bucket_folder for filter\n        # because it's built-in to that method already!\n        # however, we still need to do it because we're using relative_to\n        bucket_folder = self._join_bucket_folder(from_folder)\n\n        async_coros = []\n        for object in objects:\n            bucket_path = Path(object[\"Key\"]).relative_to(bucket_folder)\n            # this skips the actual directory itself, e.g.\n            # `my_folder/` will be skipped\n            # `my_folder/notes.txt` will be downloaded\n            if bucket_path.is_dir():\n                continue\n            to_path = to_folder / bucket_path\n            to_path.parent.mkdir(parents=True, exist_ok=True)\n            to_path = str(to_path)  # must be string\n            self.logger.info(\n                f\"Downloading object from bucket {self.bucket_name!r} path \"\n                f\"{bucket_path.as_posix()!r} to {to_path!r}.\"\n            )\n            async_coros.append(\n                run_sync_in_worker_thread(\n                    client.download_file,\n                    Bucket=self.bucket_name,\n                    Key=object[\"Key\"],\n                    Filename=to_path,\n                    **download_kwargs,\n                )\n            )\n        await asyncio.gather(*async_coros)\n\n        return Path(to_folder)\n\n    @sync_compatible\n    async def stream_from(\n        self,\n        bucket: \"S3Bucket\",\n        from_path: str,\n        to_path: Optional[str] = None,\n        **upload_kwargs: Dict[str, Any],\n    ) -> str:\n        \"\"\"Streams an object from another bucket to this bucket. Requires the\n        object to be downloaded and uploaded in chunks. If `self`'s credentials\n        allow for writes to the other bucket, try using `S3Bucket.copy_object`.\n\n        Args:\n            bucket: The bucket to stream from.\n            from_path: The path of the object to stream.\n            to_path: The path to stream the object to. 
Defaults to the object's name.\n            **upload_kwargs: Additional keyword arguments to pass to\n                `Client.upload_fileobj`.\n\n        Returns:\n            The path that the object was uploaded to.\n\n        Examples:\n            Stream notes.txt from your-bucket/notes.txt to my-bucket/landed/notes.txt.\n\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            your_s3_bucket = S3Bucket.load(\"your-bucket\")\n            my_s3_bucket = S3Bucket.load(\"my-bucket\")\n\n            my_s3_bucket.stream_from(\n                your_s3_bucket,\n                \"notes.txt\",\n                to_path=\"landed/notes.txt\"\n            )\n            ```\n\n        \"\"\"\n        if to_path is None:\n            to_path = Path(from_path).name\n\n        # Get the source object's StreamingBody\n        from_path: str = bucket._join_bucket_folder(from_path)\n        from_client = bucket.credentials.get_s3_client()\n        obj = await run_sync_in_worker_thread(\n            from_client.get_object, Bucket=bucket.bucket_name, Key=from_path\n        )\n        body: StreamingBody = obj[\"Body\"]\n\n        # Upload the StreamingBody to this bucket\n        bucket_path = str(self._join_bucket_folder(to_path))\n        to_client = self.credentials.get_s3_client()\n        await run_sync_in_worker_thread(\n            to_client.upload_fileobj,\n            Fileobj=body,\n            Bucket=self.bucket_name,\n            Key=bucket_path,\n            **upload_kwargs,\n        )\n        self.logger.info(\n            f\"Streamed s3://{bucket.bucket_name}/{from_path} to the bucket \"\n            f\"{self.bucket_name!r} path {bucket_path!r}.\"\n        )\n        return bucket_path\n\n    @sync_compatible\n    async def upload_from_path(\n        self,\n        from_path: Union[str, Path],\n        to_path: Optional[str] = None,\n        **upload_kwargs: Dict[str, Any],\n    ) -> str:\n        \"\"\"\n        Uploads an object from a path to the S3 bucket.\n\n        Args:\n            from_path: The path to the file to upload from.\n            to_path: The path to upload the file to.\n            **upload_kwargs: Additional keyword arguments to pass to `Client.upload`.\n\n        Returns:\n            The path that the object was uploaded to.\n\n        Examples:\n            Upload notes.txt to my_folder/notes.txt.\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            s3_bucket.upload_from_path(\"notes.txt\", \"my_folder/notes.txt\")\n            ```\n        \"\"\"\n        from_path = str(Path(from_path).absolute())\n        if to_path is None:\n            to_path = Path(from_path).name\n\n        bucket_path = str(self._join_bucket_folder(to_path))\n        client = self.credentials.get_s3_client()\n\n        await run_sync_in_worker_thread(\n            client.upload_file,\n            Filename=from_path,\n            Bucket=self.bucket_name,\n            Key=bucket_path,\n            **upload_kwargs,\n        )\n        self.logger.info(\n            f\"Uploaded from {from_path!r} to the bucket \"\n            f\"{self.bucket_name!r} path {bucket_path!r}.\"\n        )\n        return bucket_path\n\n    @sync_compatible\n    async def upload_from_file_object(\n        self, from_file_object: BinaryIO, to_path: str, **upload_kwargs: Dict[str, Any]\n    ) -> str:\n        \"\"\"\n        Uploads an object to the S3 bucket from a file-like object,\n        which can 
be a BytesIO object or a BufferedReader.\n\n        Args:\n            from_file_object: The file-like object to upload from.\n            to_path: The path to upload the object to.\n            **upload_kwargs: Additional keyword arguments to pass to\n                `Client.upload_fileobj`.\n\n        Returns:\n            The path that the object was uploaded to.\n\n        Examples:\n            Upload BytesIO object to my_folder/notes.txt.\n            ```python\n            from io import BytesIO\n\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            with open(\"notes.txt\", \"rb\") as f:\n                s3_bucket.upload_from_file_object(f, \"my_folder/notes.txt\")\n            ```\n\n            Upload BufferedReader object to my_folder/notes.txt.\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            with open(\"notes.txt\", \"rb\") as f:\n                s3_bucket.upload_from_file_object(\n                    f, \"my_folder/notes.txt\"\n                )\n            ```\n        \"\"\"\n        bucket_path = str(self._join_bucket_folder(to_path))\n        client = self.credentials.get_s3_client()\n        await run_sync_in_worker_thread(\n            client.upload_fileobj,\n            Fileobj=from_file_object,\n            Bucket=self.bucket_name,\n            Key=bucket_path,\n            **upload_kwargs,\n        )\n        self.logger.info(\n            \"Uploaded from file object to the bucket \"\n            f\"{self.bucket_name!r} path {bucket_path!r}.\"\n        )\n        return bucket_path\n\n    @sync_compatible\n    async def upload_from_folder(\n        self,\n        from_folder: Union[str, Path],\n        to_folder: Optional[str] = None,\n        **upload_kwargs: Dict[str, Any],\n    ) -> str:\n        \"\"\"\n        Uploads files *within* a folder (excluding the folder itself)\n        to the object storage service folder.\n\n        Args:\n            from_folder: The path to the folder to upload from.\n            to_folder: The path to upload the folder to.\n            **upload_kwargs: Additional keyword arguments to pass to\n                `Client.upload_fileobj`.\n\n        Returns:\n            The path that the folder was uploaded to.\n\n        Examples:\n            Upload contents from my_folder to new_folder.\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            s3_bucket.upload_from_folder(\"my_folder\", \"new_folder\")\n            ```\n        \"\"\"\n        from_folder = Path(from_folder)\n        bucket_folder = self._join_bucket_folder(to_folder or \"\")\n\n        num_uploaded = 0\n        client = self.credentials.get_s3_client()\n\n        async_coros = []\n        for from_path in from_folder.rglob(\"**/*\"):\n            # this skips the actual directory itself, e.g.\n            # `my_folder/` will be skipped\n            # `my_folder/notes.txt` will be uploaded\n            if from_path.is_dir():\n                continue\n            bucket_path = (\n                Path(bucket_folder) / from_path.relative_to(from_folder)\n            ).as_posix()\n            self.logger.info(\n                f\"Uploading from {str(from_path)!r} to the bucket \"\n                f\"{self.bucket_name!r} path {bucket_path!r}.\"\n            )\n            async_coros.append(\n                
run_sync_in_worker_thread(\n                    client.upload_file,\n                    Filename=str(from_path),\n                    Bucket=self.bucket_name,\n                    Key=bucket_path,\n                    **upload_kwargs,\n                )\n            )\n            num_uploaded += 1\n        await asyncio.gather(*async_coros)\n\n        if num_uploaded == 0:\n            self.logger.warning(f\"No files were uploaded from {str(from_folder)!r}.\")\n        else:\n            self.logger.info(\n                f\"Uploaded {num_uploaded} files from {str(from_folder)!r} to \"\n                f\"the bucket {self.bucket_name!r} path {bucket_path!r}\"\n            )\n\n        return to_folder\n\n    @sync_compatible\n    async def copy_object(\n        self,\n        from_path: Union[str, Path],\n        to_path: Union[str, Path],\n        to_bucket: Optional[Union[\"S3Bucket\", str]] = None,\n        **copy_kwargs,\n    ) -> str:\n        \"\"\"Uses S3's internal\n        [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html)\n        to copy objects within or between buckets. To copy objects between buckets,\n        `self`'s credentials must have permission to read the source object and write\n        to the target object. If the credentials do not have those permissions, try\n        using `S3Bucket.stream_from`.\n\n        Args:\n            from_path: The path of the object to copy.\n            to_path: The path to copy the object to.\n            to_bucket: The bucket to copy to. Defaults to the current bucket.\n            **copy_kwargs: Additional keyword arguments to pass to\n                `S3Client.copy_object`.\n\n        Returns:\n            The path that the object was copied to. Excludes the bucket name.\n\n        Examples:\n\n            Copy notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt.\n\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            s3_bucket.copy_object(\"my_folder/notes.txt\", \"my_folder/notes_copy.txt\")\n            ```\n\n            Copy notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt in\n            another bucket.\n\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            s3_bucket.copy_object(\n                \"my_folder/notes.txt\",\n                \"my_folder/notes_copy.txt\",\n                to_bucket=\"other-bucket\"\n            )\n            ```\n        \"\"\"\n        s3_client = self.credentials.get_s3_client()\n\n        source_bucket_name = self.bucket_name\n        source_path = self._resolve_path(Path(from_path).as_posix())\n\n        # Default to copying within the same bucket\n        to_bucket = to_bucket or self\n\n        target_bucket_name: str\n        target_path: str\n        if isinstance(to_bucket, S3Bucket):\n            target_bucket_name = to_bucket.bucket_name\n            target_path = to_bucket._resolve_path(Path(to_path).as_posix())\n        elif isinstance(to_bucket, str):\n            target_bucket_name = to_bucket\n            target_path = Path(to_path).as_posix()\n        else:\n            raise TypeError(\n                f\"to_bucket must be a string or S3Bucket, not {type(to_bucket)}\"\n            )\n\n        self.logger.info(\n            \"Copying object from bucket %s with key %s to bucket %s with key %s\",\n            source_bucket_name,\n            
source_path,\n            target_bucket_name,\n            target_path,\n        )\n\n        s3_client.copy_object(\n            CopySource={\"Bucket\": source_bucket_name, \"Key\": source_path},\n            Bucket=target_bucket_name,\n            Key=target_path,\n            **copy_kwargs,\n        )\n\n        return target_path\n\n    @sync_compatible\n    async def move_object(\n        self,\n        from_path: Union[str, Path],\n        to_path: Union[str, Path],\n        to_bucket: Optional[Union[\"S3Bucket\", str]] = None,\n    ) -> str:\n        \"\"\"Uses S3's internal CopyObject and DeleteObject to move objects within or\n        between buckets. To move objects between buckets, `self`'s credentials must\n        have permission to read and delete the source object and write to the target\n        object. If the credentials do not have those permissions, this method will\n        raise an error. If the credentials have permission to read the source object\n        but not delete it, the object will be copied but not deleted.\n\n        Args:\n            from_path: The path of the object to move.\n            to_path: The path to move the object to.\n            to_bucket: The bucket to move to. Defaults to the current bucket.\n\n        Returns:\n            The path that the object was moved to. Excludes the bucket name.\n\n        Examples:\n\n            Move notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt.\n\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            s3_bucket.move_object(\"my_folder/notes.txt\", \"my_folder/notes_copy.txt\")\n            ```\n\n            Move notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt in\n            another bucket.\n\n            ```python\n            from prefect_aws.s3 import S3Bucket\n\n            s3_bucket = S3Bucket.load(\"my-bucket\")\n            s3_bucket.move_object(\n                \"my_folder/notes.txt\",\n                \"my_folder/notes_copy.txt\",\n                to_bucket=\"other-bucket\"\n            )\n            ```\n        \"\"\"\n        s3_client = self.credentials.get_s3_client()\n\n        source_bucket_name = self.bucket_name\n        source_path = self._resolve_path(Path(from_path).as_posix())\n\n        # Default to moving within the same bucket\n        to_bucket = to_bucket or self\n\n        target_bucket_name: str\n        target_path: str\n        if isinstance(to_bucket, S3Bucket):\n            target_bucket_name = to_bucket.bucket_name\n            target_path = to_bucket._resolve_path(Path(to_path).as_posix())\n        elif isinstance(to_bucket, str):\n            target_bucket_name = to_bucket\n            target_path = Path(to_path).as_posix()\n        else:\n            raise TypeError(\n                f\"to_bucket must be a string or S3Bucket, not {type(to_bucket)}\"\n            )\n\n        self.logger.info(\n            \"Moving object from s3://%s/%s to s3://%s/%s\",\n            source_bucket_name,\n            source_path,\n            target_bucket_name,\n            target_path,\n        )\n\n        # If invalid, should error and prevent next operation\n        s3_client.copy(\n            CopySource={\"Bucket\": source_bucket_name, \"Key\": source_path},\n            Bucket=target_bucket_name,\n            Key=target_path,\n        )\n        s3_client.delete_object(Bucket=source_bucket_name, Key=source_path)\n        return target_path\n
        "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.basepath","title":"basepath: str property writable","text":"

        The base path of the S3 bucket.

        Returns:

        str: The base path of the S3 bucket.
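        A minimal sketch of reading the property (assuming a saved S3Bucket block named "my-bucket"):

        ```python
        from prefect_aws.s3 import S3Bucket

        s3_bucket = S3Bucket.load("my-bucket")
        # Inspect the base path; the property is writable, so it can also be
        # reassigned, e.g. s3_bucket.basepath = "my_folder"
        print(s3_bucket.basepath)
        ```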

        "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.copy_object","title":"copy_object async","text":"

        Uses S3's internal CopyObject to copy objects within or between buckets. To copy objects between buckets, self's credentials must have permission to read the source object and write to the target object. If the credentials do not have those permissions, try using S3Bucket.stream_from.

        Parameters:

        Name Type Description Default from_path Union[str, Path]

        The path of the object to copy.

        required to_path Union[str, Path]

        The path to copy the object to.

        required to_bucket Optional[Union[S3Bucket, str]]

        The bucket to copy to. Defaults to the current bucket.

        None **copy_kwargs

        Additional keyword arguments to pass to S3Client.copy_object.

        {}

        Returns:

        Type Description str

        The path that the object was copied to. Excludes the bucket name.

        Copy notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt.\n\n```python\nfrom prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\ns3_bucket.copy_object(\"my_folder/notes.txt\", \"my_folder/notes_copy.txt\")\n```\n\nCopy notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt in\nanother bucket.\n\n```python\nfrom prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\ns3_bucket.copy_object(\n    \"my_folder/notes.txt\",\n    \"my_folder/notes_copy.txt\",\n    to_bucket=\"other-bucket\"\n)\n```\n
        Source code in prefect_aws/s3.py
        @sync_compatible\nasync def copy_object(\n    self,\n    from_path: Union[str, Path],\n    to_path: Union[str, Path],\n    to_bucket: Optional[Union[\"S3Bucket\", str]] = None,\n    **copy_kwargs,\n) -> str:\n    \"\"\"Uses S3's internal\n    [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html)\n    to copy objects within or between buckets. To copy objects between buckets,\n    `self`'s credentials must have permission to read the source object and write\n    to the target object. If the credentials do not have those permissions, try\n    using `S3Bucket.stream_from`.\n\n    Args:\n        from_path: The path of the object to copy.\n        to_path: The path to copy the object to.\n        to_bucket: The bucket to copy to. Defaults to the current bucket.\n        **copy_kwargs: Additional keyword arguments to pass to\n            `S3Client.copy_object`.\n\n    Returns:\n        The path that the object was copied to. Excludes the bucket name.\n\n    Examples:\n\n        Copy notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt.\n\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        s3_bucket.copy_object(\"my_folder/notes.txt\", \"my_folder/notes_copy.txt\")\n        ```\n\n        Copy notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt in\n        another bucket.\n\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        s3_bucket.copy_object(\n            \"my_folder/notes.txt\",\n            \"my_folder/notes_copy.txt\",\n            to_bucket=\"other-bucket\"\n        )\n        ```\n    \"\"\"\n    s3_client = self.credentials.get_s3_client()\n\n    source_bucket_name = self.bucket_name\n    source_path = self._resolve_path(Path(from_path).as_posix())\n\n    # Default to copying within the same bucket\n    to_bucket = to_bucket or self\n\n    target_bucket_name: str\n    target_path: str\n    if isinstance(to_bucket, S3Bucket):\n        target_bucket_name = to_bucket.bucket_name\n        target_path = to_bucket._resolve_path(Path(to_path).as_posix())\n    elif isinstance(to_bucket, str):\n        target_bucket_name = to_bucket\n        target_path = Path(to_path).as_posix()\n    else:\n        raise TypeError(\n            f\"to_bucket must be a string or S3Bucket, not {type(to_bucket)}\"\n        )\n\n    self.logger.info(\n        \"Copying object from bucket %s with key %s to bucket %s with key %s\",\n        source_bucket_name,\n        source_path,\n        target_bucket_name,\n        target_path,\n    )\n\n    s3_client.copy_object(\n        CopySource={\"Bucket\": source_bucket_name, \"Key\": source_path},\n        Bucket=target_bucket_name,\n        Key=target_path,\n        **copy_kwargs,\n    )\n\n    return target_path\n
        "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.download_folder_to_path","title":"download_folder_to_path async","text":"

        Downloads objects within a folder (excluding the folder itself) from the S3 bucket to a folder.

        Parameters:

        Name Type Description Default from_folder str

        The path to the folder to download from.

        required to_folder Optional[Union[str, Path]]

        The path to download the folder to.

        None **download_kwargs Dict[str, Any]

        Additional keyword arguments to pass to Client.download_file.

        {}

        Returns:

        Type Description Path

        The absolute path that the folder was downloaded to.

        Examples:

        Download my_folder to a local folder named my_folder.

        from prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\ns3_bucket.download_folder_to_path(\"my_folder\", \"my_folder\")\n

        Source code in prefect_aws/s3.py
        @sync_compatible\nasync def download_folder_to_path(\n    self,\n    from_folder: str,\n    to_folder: Optional[Union[str, Path]] = None,\n    **download_kwargs: Dict[str, Any],\n) -> Path:\n    \"\"\"\n    Downloads objects *within* a folder (excluding the folder itself)\n    from the S3 bucket to a folder.\n\n    Args:\n        from_folder: The path to the folder to download from.\n        to_folder: The path to download the folder to.\n        **download_kwargs: Additional keyword arguments to pass to\n            `Client.download_file`.\n\n    Returns:\n        The absolute path that the folder was downloaded to.\n\n    Examples:\n        Download my_folder to a local folder named my_folder.\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        s3_bucket.download_folder_to_path(\"my_folder\", \"my_folder\")\n        ```\n    \"\"\"\n    if to_folder is None:\n        to_folder = \"\"\n    to_folder = Path(to_folder).absolute()\n\n    client = self.credentials.get_s3_client()\n    objects = await self.list_objects(folder=from_folder)\n\n    # do not call self._join_bucket_folder for filter\n    # because it's built-in to that method already!\n    # however, we still need to do it because we're using relative_to\n    bucket_folder = self._join_bucket_folder(from_folder)\n\n    async_coros = []\n    for object in objects:\n        bucket_path = Path(object[\"Key\"]).relative_to(bucket_folder)\n        # this skips the actual directory itself, e.g.\n        # `my_folder/` will be skipped\n        # `my_folder/notes.txt` will be downloaded\n        if bucket_path.is_dir():\n            continue\n        to_path = to_folder / bucket_path\n        to_path.parent.mkdir(parents=True, exist_ok=True)\n        to_path = str(to_path)  # must be string\n        self.logger.info(\n            f\"Downloading object from bucket {self.bucket_name!r} path \"\n            f\"{bucket_path.as_posix()!r} to {to_path!r}.\"\n        )\n        async_coros.append(\n            run_sync_in_worker_thread(\n                client.download_file,\n                Bucket=self.bucket_name,\n                Key=object[\"Key\"],\n                Filename=to_path,\n                **download_kwargs,\n            )\n        )\n    await asyncio.gather(*async_coros)\n\n    return Path(to_folder)\n
        "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.download_object_to_file_object","title":"download_object_to_file_object async","text":"

        Downloads an object from the object storage service to a file-like object, which can be a BytesIO object or a BufferedWriter.

        Parameters:

        Name Type Description Default from_path str

        The path to the object to download from; this gets prefixed with the bucket_folder.

        required to_file_object BinaryIO

        The file-like object to download the object to.

        required **download_kwargs Dict[str, Any]

        Additional keyword arguments to pass to Client.download_fileobj.

        {}

        Returns:

        Type Description BinaryIO

        The file-like object that the object was downloaded to.

        Examples:

        Download my_folder/notes.txt object to a BytesIO object.

        from io import BytesIO\n\nfrom prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\nwith BytesIO() as buf:\n    s3_bucket.download_object_to_file_object(\"my_folder/notes.txt\", buf)\n

        Download my_folder/notes.txt object to a BufferedWriter.

        from prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\nwith open(\"notes.txt\", \"wb\") as f:\n    s3_bucket.download_object_to_file_object(\"my_folder/notes.txt\", f)\n

        Source code in prefect_aws/s3.py
        @sync_compatible\nasync def download_object_to_file_object(\n    self,\n    from_path: str,\n    to_file_object: BinaryIO,\n    **download_kwargs: Dict[str, Any],\n) -> BinaryIO:\n    \"\"\"\n    Downloads an object from the object storage service to a file-like object,\n    which can be a BytesIO object or a BufferedWriter.\n\n    Args:\n        from_path: The path to the object to download from; this gets prefixed\n            with the bucket_folder.\n        to_file_object: The file-like object to download the object to.\n        **download_kwargs: Additional keyword arguments to pass to\n            `Client.download_fileobj`.\n\n    Returns:\n        The file-like object that the object was downloaded to.\n\n    Examples:\n        Download my_folder/notes.txt object to a BytesIO object.\n        ```python\n        from io import BytesIO\n\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        with BytesIO() as buf:\n            s3_bucket.download_object_to_file_object(\"my_folder/notes.txt\", buf)\n        ```\n\n        Download my_folder/notes.txt object to a BufferedWriter.\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        with open(\"notes.txt\", \"wb\") as f:\n            s3_bucket.download_object_to_file_object(\"my_folder/notes.txt\", f)\n        ```\n    \"\"\"\n    client = self.credentials.get_s3_client()\n    bucket_path = self._join_bucket_folder(from_path)\n\n    self.logger.debug(\n        f\"Preparing to download object from bucket {self.bucket_name!r} \"\n        f\"path {bucket_path!r} to file object.\"\n    )\n    await run_sync_in_worker_thread(\n        client.download_fileobj,\n        Bucket=self.bucket_name,\n        Key=bucket_path,\n        Fileobj=to_file_object,\n        **download_kwargs,\n    )\n    self.logger.info(\n        f\"Downloaded object from bucket {self.bucket_name!r} path {bucket_path!r} \"\n        \"to file object.\"\n    )\n    return to_file_object\n
        "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.download_object_to_path","title":"download_object_to_path async","text":"

        Downloads an object from the S3 bucket to a path.

        Parameters:

        Name Type Description Default from_path str

        The path to the object to download; this gets prefixed with the bucket_folder.

        required to_path Optional[Union[str, Path]]

        The path to download the object to. If not provided, the object's name will be used.

        required **download_kwargs Dict[str, Any]

        Additional keyword arguments to pass to Client.download_file.

        {}

        Returns:

        Type Description Path

        The absolute path that the object was downloaded to.

        Examples:

        Download my_folder/notes.txt object to notes.txt.

        from prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\ns3_bucket.download_object_to_path(\"my_folder/notes.txt\", \"notes.txt\")\n

        Source code in prefect_aws/s3.py
        @sync_compatible\nasync def download_object_to_path(\n    self,\n    from_path: str,\n    to_path: Optional[Union[str, Path]],\n    **download_kwargs: Dict[str, Any],\n) -> Path:\n    \"\"\"\n    Downloads an object from the S3 bucket to a path.\n\n    Args:\n        from_path: The path to the object to download; this gets prefixed\n            with the bucket_folder.\n        to_path: The path to download the object to. If not provided, the\n            object's name will be used.\n        **download_kwargs: Additional keyword arguments to pass to\n            `Client.download_file`.\n\n    Returns:\n        The absolute path that the object was downloaded to.\n\n    Examples:\n        Download my_folder/notes.txt object to notes.txt.\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        s3_bucket.download_object_to_path(\"my_folder/notes.txt\", \"notes.txt\")\n        ```\n    \"\"\"\n    if to_path is None:\n        to_path = Path(from_path).name\n\n    # making path absolute, but converting back to str here\n    # since !r looks nicer that way and filename arg expects str\n    to_path = str(Path(to_path).absolute())\n    bucket_path = self._join_bucket_folder(from_path)\n    client = self.credentials.get_s3_client()\n\n    self.logger.debug(\n        f\"Preparing to download object from bucket {self.bucket_name!r} \"\n        f\"path {bucket_path!r} to {to_path!r}.\"\n    )\n    await run_sync_in_worker_thread(\n        client.download_file,\n        Bucket=self.bucket_name,\n        Key=from_path,\n        Filename=to_path,\n        **download_kwargs,\n    )\n    self.logger.info(\n        f\"Downloaded object from bucket {self.bucket_name!r} path {bucket_path!r} \"\n        f\"to {to_path!r}.\"\n    )\n    return Path(to_path)\n
        "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.get_directory","title":"get_directory async","text":"

        Copies a folder from the configured S3 bucket to a local directory.

        Defaults to copying the entire contents of the block's basepath to the current working directory.

        Parameters:

        Name Type Description Default from_path Optional[str]

        Path in S3 bucket to download from. Defaults to the block's configured basepath.

        None local_path Optional[str]

        Local path to download S3 contents to. Defaults to the current working directory.

        None Source code in prefect_aws/s3.py
        @sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> None:\n    \"\"\"\n    Copies a folder from the configured S3 bucket to a local directory.\n\n    Defaults to copying the entire contents of the block's basepath to the current\n    working directory.\n\n    Args:\n        from_path: Path in S3 bucket to download from. Defaults to the block's\n            configured basepath.\n        local_path: Local path to download S3 contents to. Defaults to the current\n            working directory.\n    \"\"\"\n    bucket_folder = self.bucket_folder\n    if from_path is None:\n        from_path = str(bucket_folder) if bucket_folder else \"\"\n\n    if local_path is None:\n        local_path = str(Path(\".\").absolute())\n    else:\n        local_path = str(Path(local_path).expanduser())\n\n    bucket = self._get_bucket_resource()\n    for obj in bucket.objects.filter(Prefix=from_path):\n        if obj.key[-1] == \"/\":\n            # object is a folder and will be created if it contains any objects\n            continue\n        target = os.path.join(\n            local_path,\n            os.path.relpath(obj.key, from_path),\n        )\n        os.makedirs(os.path.dirname(target), exist_ok=True)\n        bucket.download_file(obj.key, target)\n
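        A minimal usage sketch (assuming a saved S3Bucket block named "my-bucket"; the local target directory local_copy is illustrative):

        ```python
        from prefect_aws.s3 import S3Bucket

        s3_bucket = S3Bucket.load("my-bucket")

        # Copy everything under the block's configured basepath into ./local_copy;
        # intermediate directories are created as needed.
        s3_bucket.get_directory(local_path="local_copy")
        ```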
        "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.list_objects","title":"list_objects async","text":"

        Parameters:

        Name Type Description Default folder str

        Folder to list objects from.

        '' delimiter str

        Character used to group keys of listed objects.

        '' page_size Optional[int]

        Number of objects to return in each request to the AWS API.

        None max_items Optional[int]

        Maximum number of objects to be returned by the task.

        None jmespath_query Optional[str]

        Query used to filter objects based on object attributes; refer to the boto3 docs for more information on how to construct queries.

        None

        Returns:

        Type Description List[Dict[str, Any]]

        List of objects and their metadata in the bucket.

        Examples:

        List objects under the base_folder.

        from prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\ns3_bucket.list_objects(\"base_folder\")\n

        Source code in prefect_aws/s3.py
        @sync_compatible\nasync def list_objects(\n    self,\n    folder: str = \"\",\n    delimiter: str = \"\",\n    page_size: Optional[int] = None,\n    max_items: Optional[int] = None,\n    jmespath_query: Optional[str] = None,\n) -> List[Dict[str, Any]]:\n    \"\"\"\n    Args:\n        folder: Folder to list objects from.\n        delimiter: Character used to group keys of listed objects.\n        page_size: Number of objects to return in each request to the AWS API.\n        max_items: Maximum number of objects that to be returned by task.\n        jmespath_query: Query used to filter objects based on object attributes refer to\n            the [boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/paginators.html#filtering-results-with-jmespath)\n            for more information on how to construct queries.\n\n    Returns:\n        List of objects and their metadata in the bucket.\n\n    Examples:\n        List objects under the `base_folder`.\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        s3_bucket.list_objects(\"base_folder\")\n        ```\n    \"\"\"  # noqa: E501\n    bucket_path = self._join_bucket_folder(folder)\n    client = self.credentials.get_s3_client()\n    paginator = client.get_paginator(\"list_objects_v2\")\n    page_iterator = paginator.paginate(\n        Bucket=self.bucket_name,\n        Prefix=bucket_path,\n        Delimiter=delimiter,\n        PaginationConfig={\"PageSize\": page_size, \"MaxItems\": max_items},\n    )\n    if jmespath_query:\n        page_iterator = page_iterator.search(f\"{jmespath_query} | {{Contents: @}}\")\n\n    self.logger.info(f\"Listing objects in bucket {bucket_path}.\")\n    objects = await run_sync_in_worker_thread(\n        self._list_objects_sync, page_iterator\n    )\n    return objects\n
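        A sketch of filtering results with a JMESPath query (the folder name and size threshold are illustrative; the query syntax follows the boto3 paginator docs):

        ```python
        from prefect_aws.s3 import S3Bucket

        s3_bucket = S3Bucket.load("my-bucket")

        # List only objects under base_folder that are larger than 1 KiB.
        large_objects = s3_bucket.list_objects(
            folder="base_folder",
            jmespath_query="Contents[?Size > `1024`][]",
        )
        ```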
        "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.move_object","title":"move_object async","text":"

        Uses S3's internal CopyObject and DeleteObject to move objects within or between buckets. To move objects between buckets, self's credentials must have permission to read and delete the source object and write to the target object. If the credentials do not have those permissions, this method will raise an error. If the credentials have permission to read the source object but not delete it, the object will be copied but not deleted.

        Parameters:

        Name Type Description Default from_path Union[str, Path]

        The path of the object to move.

        required to_path Union[str, Path]

        The path to move the object to.

        required to_bucket Optional[Union[S3Bucket, str]]

        The bucket to move to. Defaults to the current bucket.

        None

        Returns:

        Type Description str

        The path that the object was moved to. Excludes the bucket name.

        Move notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt.\n\n```python\nfrom prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\ns3_bucket.move_object(\"my_folder/notes.txt\", \"my_folder/notes_copy.txt\")\n```\n\nMove notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt in\nanother bucket.\n\n```python\nfrom prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\ns3_bucket.move_object(\n    \"my_folder/notes.txt\",\n    \"my_folder/notes_copy.txt\",\n    to_bucket=\"other-bucket\"\n)\n```\n
        Source code in prefect_aws/s3.py
        @sync_compatible\nasync def move_object(\n    self,\n    from_path: Union[str, Path],\n    to_path: Union[str, Path],\n    to_bucket: Optional[Union[\"S3Bucket\", str]] = None,\n) -> str:\n    \"\"\"Uses S3's internal CopyObject and DeleteObject to move objects within or\n    between buckets. To move objects between buckets, `self`'s credentials must\n    have permission to read and delete the source object and write to the target\n    object. If the credentials do not have those permissions, this method will\n    raise an error. If the credentials have permission to read the source object\n    but not delete it, the object will be copied but not deleted.\n\n    Args:\n        from_path: The path of the object to move.\n        to_path: The path to move the object to.\n        to_bucket: The bucket to move to. Defaults to the current bucket.\n\n    Returns:\n        The path that the object was moved to. Excludes the bucket name.\n\n    Examples:\n\n        Move notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt.\n\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        s3_bucket.move_object(\"my_folder/notes.txt\", \"my_folder/notes_copy.txt\")\n        ```\n\n        Move notes.txt from my_folder/notes.txt to my_folder/notes_copy.txt in\n        another bucket.\n\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        s3_bucket.move_object(\n            \"my_folder/notes.txt\",\n            \"my_folder/notes_copy.txt\",\n            to_bucket=\"other-bucket\"\n        )\n        ```\n    \"\"\"\n    s3_client = self.credentials.get_s3_client()\n\n    source_bucket_name = self.bucket_name\n    source_path = self._resolve_path(Path(from_path).as_posix())\n\n    # Default to moving within the same bucket\n    to_bucket = to_bucket or self\n\n    target_bucket_name: str\n    target_path: str\n    if isinstance(to_bucket, S3Bucket):\n        target_bucket_name = to_bucket.bucket_name\n        target_path = to_bucket._resolve_path(Path(to_path).as_posix())\n    elif isinstance(to_bucket, str):\n        target_bucket_name = to_bucket\n        target_path = Path(to_path).as_posix()\n    else:\n        raise TypeError(\n            f\"to_bucket must be a string or S3Bucket, not {type(to_bucket)}\"\n        )\n\n    self.logger.info(\n        \"Moving object from s3://%s/%s to s3://%s/%s\",\n        source_bucket_name,\n        source_path,\n        target_bucket_name,\n        target_path,\n    )\n\n    # If invalid, should error and prevent next operation\n    s3_client.copy(\n        CopySource={\"Bucket\": source_bucket_name, \"Key\": source_path},\n        Bucket=target_bucket_name,\n        Key=target_path,\n    )\n    s3_client.delete_object(Bucket=source_bucket_name, Key=source_path)\n    return target_path\n
        "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.put_directory","title":"put_directory async","text":"

        Uploads a directory from a given local path to the configured S3 bucket in a given folder.

        Defaults to uploading the entire contents of the current working directory to the block's basepath.

        Parameters:

        Name Type Description Default local_path Optional[str]

        Path to local directory to upload from.

        None to_path Optional[str]

        Path in S3 bucket to upload to. Defaults to block's configured basepath.

        None ignore_file Optional[str]

        Path to file containing gitignore style expressions for filepaths to ignore.

        None Source code in prefect_aws/s3.py
        @sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n) -> int:\n    \"\"\"\n    Uploads a directory from a given local path to the configured S3 bucket in a\n    given folder.\n\n    Defaults to uploading the entire contents the current working directory to the\n    block's basepath.\n\n    Args:\n        local_path: Path to local directory to upload from.\n        to_path: Path in S3 bucket to upload to. Defaults to block's configured\n            basepath.\n        ignore_file: Path to file containing gitignore style expressions for\n            filepaths to ignore.\n\n    \"\"\"\n    to_path = \"\" if to_path is None else to_path\n\n    if local_path is None:\n        local_path = \".\"\n\n    included_files = None\n    if ignore_file:\n        with open(ignore_file, \"r\") as f:\n            ignore_patterns = f.readlines()\n\n        included_files = filter_files(local_path, ignore_patterns)\n\n    uploaded_file_count = 0\n    for local_file_path in Path(local_path).expanduser().rglob(\"*\"):\n        if (\n            included_files is not None\n            and str(local_file_path.relative_to(local_path)) not in included_files\n        ):\n            continue\n        elif not local_file_path.is_dir():\n            remote_file_path = Path(to_path) / local_file_path.relative_to(\n                local_path\n            )\n            with open(local_file_path, \"rb\") as local_file:\n                local_file_content = local_file.read()\n\n            await self.write_path(\n                remote_file_path.as_posix(), content=local_file_content\n            )\n            uploaded_file_count += 1\n\n    return uploaded_file_count\n
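        A minimal usage sketch (assuming a saved S3Bucket block named "my-bucket"; the ignore file .prefectignore is a hypothetical gitignore-style file):

        ```python
        from prefect_aws.s3 import S3Bucket

        s3_bucket = S3Bucket.load("my-bucket")

        # Upload the current working directory to the block's basepath,
        # skipping any paths matched by patterns in .prefectignore.
        uploaded_count = s3_bucket.put_directory(local_path=".", ignore_file=".prefectignore")
        print(f"Uploaded {uploaded_count} files")
        ```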
        "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.read_path","title":"read_path async","text":"

        Read specified path from S3 and return contents. Provide the entire path to the key in S3.

        Parameters:

        Name Type Description Default path str

        Entire path to (and including) the key.

        required Example

        Read \"subfolder/file1\" contents from an S3 bucket named \"bucket\":

        from prefect_aws import AwsCredentials\nfrom prefect_aws.s3 import S3Bucket\n\naws_creds = AwsCredentials(\n    aws_access_key_id=AWS_ACCESS_KEY_ID,\n    aws_secret_access_key=AWS_SECRET_ACCESS_KEY\n)\n\ns3_bucket_block = S3Bucket(\n    bucket_name=\"bucket\",\n    credentials=aws_creds,\n    bucket_folder=\"subfolder\"\n)\n\nkey_contents = s3_bucket_block.read_path(path=\"subfolder/file1\")\n

        Source code in prefect_aws/s3.py
        @sync_compatible\nasync def read_path(self, path: str) -> bytes:\n    \"\"\"\n    Read specified path from S3 and return contents. Provide the entire\n    path to the key in S3.\n\n    Args:\n        path: Entire path to (and including) the key.\n\n    Example:\n        Read \"subfolder/file1\" contents from an S3 bucket named \"bucket\":\n        ```python\n        from prefect_aws import AwsCredentials\n        from prefect_aws.s3 import S3Bucket\n\n        aws_creds = AwsCredentials(\n            aws_access_key_id=AWS_ACCESS_KEY_ID,\n            aws_secret_access_key=AWS_SECRET_ACCESS_KEY\n        )\n\n        s3_bucket_block = S3Bucket(\n            bucket_name=\"bucket\",\n            credentials=aws_creds,\n            bucket_folder=\"subfolder\"\n        )\n\n        key_contents = s3_bucket_block.read_path(path=\"subfolder/file1\")\n        ```\n    \"\"\"\n    path = self._resolve_path(path)\n\n    return await run_sync_in_worker_thread(self._read_sync, path)\n
        "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.stream_from","title":"stream_from async","text":"

        Streams an object from another bucket to this bucket. Requires the object to be downloaded and uploaded in chunks. If self's credentials allow for writes to the other bucket, try using S3Bucket.copy_object.

        Parameters:

        Name Type Description Default bucket S3Bucket

        The bucket to stream from.

        required from_path str

        The path of the object to stream.

        required to_path Optional[str]

        The path to stream the object to. Defaults to the object's name.

        None **upload_kwargs Dict[str, Any]

        Additional keyword arguments to pass to Client.upload_fileobj.

        {}

        Returns:

        Type Description str

        The path that the object was uploaded to.

        Examples:

        Stream notes.txt from your-bucket/notes.txt to my-bucket/landed/notes.txt.

        from prefect_aws.s3 import S3Bucket\n\nyour_s3_bucket = S3Bucket.load(\"your-bucket\")\nmy_s3_bucket = S3Bucket.load(\"my-bucket\")\n\nmy_s3_bucket.stream_from(\n    your_s3_bucket,\n    \"notes.txt\",\n    to_path=\"landed/notes.txt\"\n)\n
        Source code in prefect_aws/s3.py
        @sync_compatible\nasync def stream_from(\n    self,\n    bucket: \"S3Bucket\",\n    from_path: str,\n    to_path: Optional[str] = None,\n    **upload_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"Streams an object from another bucket to this bucket. Requires the\n    object to be downloaded and uploaded in chunks. If `self`'s credentials\n    allow for writes to the other bucket, try using `S3Bucket.copy_object`.\n\n    Args:\n        bucket: The bucket to stream from.\n        from_path: The path of the object to stream.\n        to_path: The path to stream the object to. Defaults to the object's name.\n        **upload_kwargs: Additional keyword arguments to pass to\n            `Client.upload_fileobj`.\n\n    Returns:\n        The path that the object was uploaded to.\n\n    Examples:\n        Stream notes.txt from your-bucket/notes.txt to my-bucket/landed/notes.txt.\n\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        your_s3_bucket = S3Bucket.load(\"your-bucket\")\n        my_s3_bucket = S3Bucket.load(\"my-bucket\")\n\n        my_s3_bucket.stream_from(\n            your_s3_bucket,\n            \"notes.txt\",\n            to_path=\"landed/notes.txt\"\n        )\n        ```\n\n    \"\"\"\n    if to_path is None:\n        to_path = Path(from_path).name\n\n    # Get the source object's StreamingBody\n    from_path: str = bucket._join_bucket_folder(from_path)\n    from_client = bucket.credentials.get_s3_client()\n    obj = await run_sync_in_worker_thread(\n        from_client.get_object, Bucket=bucket.bucket_name, Key=from_path\n    )\n    body: StreamingBody = obj[\"Body\"]\n\n    # Upload the StreamingBody to this bucket\n    bucket_path = str(self._join_bucket_folder(to_path))\n    to_client = self.credentials.get_s3_client()\n    await run_sync_in_worker_thread(\n        to_client.upload_fileobj,\n        Fileobj=body,\n        Bucket=self.bucket_name,\n        Key=bucket_path,\n        **upload_kwargs,\n    )\n    self.logger.info(\n        f\"Streamed s3://{bucket.bucket_name}/{from_path} to the bucket \"\n        f\"{self.bucket_name!r} path {bucket_path!r}.\"\n    )\n    return bucket_path\n
        "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.upload_from_file_object","title":"upload_from_file_object async","text":"

        Uploads an object to the S3 bucket from a file-like object, which can be a BytesIO object or a BufferedReader.

        Parameters:

        Name Type Description Default from_file_object BinaryIO

        The file-like object to upload from.

        required to_path str

        The path to upload the object to.

        required **upload_kwargs Dict[str, Any]

        Additional keyword arguments to pass to Client.upload_fileobj.

        {}

        Returns:

        Type Description str

        The path that the object was uploaded to.

        Examples:

        Upload BytesIO object to my_folder/notes.txt.

        from io import BytesIO

        from prefect_aws.s3 import S3Bucket

        s3_bucket = S3Bucket.load("my-bucket")
        with BytesIO(b"example notes") as buf:
            s3_bucket.upload_from_file_object(buf, "my_folder/notes.txt")

        Upload BufferedReader object to my_folder/notes.txt.

        from prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\nwith open(\"notes.txt\", \"rb\") as f:\n    s3_bucket.upload_from_file_object(\n        f, \"my_folder/notes.txt\"\n    )\n

        Source code in prefect_aws/s3.py
        @sync_compatible\nasync def upload_from_file_object(\n    self, from_file_object: BinaryIO, to_path: str, **upload_kwargs: Dict[str, Any]\n) -> str:\n    \"\"\"\n    Uploads an object to the S3 bucket from a file-like object,\n    which can be a BytesIO object or a BufferedReader.\n\n    Args:\n        from_file_object: The file-like object to upload from.\n        to_path: The path to upload the object to.\n        **upload_kwargs: Additional keyword arguments to pass to\n            `Client.upload_fileobj`.\n\n    Returns:\n        The path that the object was uploaded to.\n\n    Examples:\n        Upload BytesIO object to my_folder/notes.txt.\n        ```python\n        from io import BytesIO\n\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        with open(\"notes.txt\", \"rb\") as f:\n            s3_bucket.upload_from_file_object(f, \"my_folder/notes.txt\")\n        ```\n\n        Upload BufferedReader object to my_folder/notes.txt.\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        with open(\"notes.txt\", \"rb\") as f:\n            s3_bucket.upload_from_file_object(\n                f, \"my_folder/notes.txt\"\n            )\n        ```\n    \"\"\"\n    bucket_path = str(self._join_bucket_folder(to_path))\n    client = self.credentials.get_s3_client()\n    await run_sync_in_worker_thread(\n        client.upload_fileobj,\n        Fileobj=from_file_object,\n        Bucket=self.bucket_name,\n        Key=bucket_path,\n        **upload_kwargs,\n    )\n    self.logger.info(\n        \"Uploaded from file object to the bucket \"\n        f\"{self.bucket_name!r} path {bucket_path!r}.\"\n    )\n    return bucket_path\n
        "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.upload_from_folder","title":"upload_from_folder async","text":"

        Uploads files within a folder (excluding the folder itself) to the object storage service folder.

        Parameters:

        Name Type Description Default from_folder Union[str, Path]

        The path to the folder to upload from.

        required to_folder Optional[str]

        The path to upload the folder to.

        None **upload_kwargs Dict[str, Any]

        Additional keyword arguments to pass to Client.upload_fileobj.

        {}

        Returns:

        Type Description str

        The path that the folder was uploaded to.

        Examples:

        Upload contents from my_folder to new_folder.

        from prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\ns3_bucket.upload_from_folder(\"my_folder\", \"new_folder\")\n

        Source code in prefect_aws/s3.py
        @sync_compatible\nasync def upload_from_folder(\n    self,\n    from_folder: Union[str, Path],\n    to_folder: Optional[str] = None,\n    **upload_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"\n    Uploads files *within* a folder (excluding the folder itself)\n    to the object storage service folder.\n\n    Args:\n        from_folder: The path to the folder to upload from.\n        to_folder: The path to upload the folder to.\n        **upload_kwargs: Additional keyword arguments to pass to\n            `Client.upload_fileobj`.\n\n    Returns:\n        The path that the folder was uploaded to.\n\n    Examples:\n        Upload contents from my_folder to new_folder.\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        s3_bucket.upload_from_folder(\"my_folder\", \"new_folder\")\n        ```\n    \"\"\"\n    from_folder = Path(from_folder)\n    bucket_folder = self._join_bucket_folder(to_folder or \"\")\n\n    num_uploaded = 0\n    client = self.credentials.get_s3_client()\n\n    async_coros = []\n    for from_path in from_folder.rglob(\"**/*\"):\n        # this skips the actual directory itself, e.g.\n        # `my_folder/` will be skipped\n        # `my_folder/notes.txt` will be uploaded\n        if from_path.is_dir():\n            continue\n        bucket_path = (\n            Path(bucket_folder) / from_path.relative_to(from_folder)\n        ).as_posix()\n        self.logger.info(\n            f\"Uploading from {str(from_path)!r} to the bucket \"\n            f\"{self.bucket_name!r} path {bucket_path!r}.\"\n        )\n        async_coros.append(\n            run_sync_in_worker_thread(\n                client.upload_file,\n                Filename=str(from_path),\n                Bucket=self.bucket_name,\n                Key=bucket_path,\n                **upload_kwargs,\n            )\n        )\n        num_uploaded += 1\n    await asyncio.gather(*async_coros)\n\n    if num_uploaded == 0:\n        self.logger.warning(f\"No files were uploaded from {str(from_folder)!r}.\")\n    else:\n        self.logger.info(\n            f\"Uploaded {num_uploaded} files from {str(from_folder)!r} to \"\n            f\"the bucket {self.bucket_name!r} path {bucket_path!r}\"\n        )\n\n    return to_folder\n
        "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.upload_from_path","title":"upload_from_path async","text":"

        Uploads an object from a path to the S3 bucket.

        Parameters:

        Name Type Description Default from_path Union[str, Path]

        The path to the file to upload from.

        required to_path Optional[str]

        The path to upload the file to.

        None **upload_kwargs Dict[str, Any]

        Additional keyword arguments to pass to Client.upload_file.

        {}

        Returns:

        Type Description str

        The path that the object was uploaded to.

        Examples:

        Upload notes.txt to my_folder/notes.txt.

        from prefect_aws.s3 import S3Bucket\n\ns3_bucket = S3Bucket.load(\"my-bucket\")\ns3_bucket.upload_from_path(\"notes.txt\", \"my_folder/notes.txt\")\n

        Source code in prefect_aws/s3.py
        @sync_compatible\nasync def upload_from_path(\n    self,\n    from_path: Union[str, Path],\n    to_path: Optional[str] = None,\n    **upload_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"\n    Uploads an object from a path to the S3 bucket.\n\n    Args:\n        from_path: The path to the file to upload from.\n        to_path: The path to upload the file to.\n        **upload_kwargs: Additional keyword arguments to pass to `Client.upload`.\n\n    Returns:\n        The path that the object was uploaded to.\n\n    Examples:\n        Upload notes.txt to my_folder/notes.txt.\n        ```python\n        from prefect_aws.s3 import S3Bucket\n\n        s3_bucket = S3Bucket.load(\"my-bucket\")\n        s3_bucket.upload_from_path(\"notes.txt\", \"my_folder/notes.txt\")\n        ```\n    \"\"\"\n    from_path = str(Path(from_path).absolute())\n    if to_path is None:\n        to_path = Path(from_path).name\n\n    bucket_path = str(self._join_bucket_folder(to_path))\n    client = self.credentials.get_s3_client()\n\n    await run_sync_in_worker_thread(\n        client.upload_file,\n        Filename=from_path,\n        Bucket=self.bucket_name,\n        Key=bucket_path,\n        **upload_kwargs,\n    )\n    self.logger.info(\n        f\"Uploaded from {from_path!r} to the bucket \"\n        f\"{self.bucket_name!r} path {bucket_path!r}.\"\n    )\n    return bucket_path\n
        "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.S3Bucket.write_path","title":"write_path async","text":"

        Writes to an S3 bucket.

        Args:

        path: The key name. Each object in your bucket has a unique key (or key name).
        content: What you are uploading to S3.

        Example:

        Write data to the path `dogs/small_dogs/havanese` in an S3 Bucket:

        ```python
        from prefect_aws import MinIOCredentials
        from prefect_aws.s3 import S3Bucket

        minio_creds = MinIOCredentials(
            minio_root_user="minioadmin",
            minio_root_password="minioadmin",
        )

        s3_bucket_block = S3Bucket(
            bucket_name="bucket",
            minio_credentials=minio_creds,
            bucket_folder="dogs/small_dogs",
            endpoint_url="http://localhost:9000",
        )

        data = b"example content"
        s3_havanese_path = s3_bucket_block.write_path(path="havanese", content=data)
        ```
        Source code in prefect_aws/s3.py
        @sync_compatible\nasync def write_path(self, path: str, content: bytes) -> str:\n    \"\"\"\n    Writes to an S3 bucket.\n\n    Args:\n\n        path: The key name. Each object in your bucket has a unique\n            key (or key name).\n        content: What you are uploading to S3.\n\n    Example:\n\n        Write data to the path `dogs/small_dogs/havanese` in an S3 Bucket:\n        ```python\n        from prefect_aws import MinioCredentials\n        from prefect_aws.s3 import S3Bucket\n\n        minio_creds = MinIOCredentials(\n            minio_root_user = \"minioadmin\",\n            minio_root_password = \"minioadmin\",\n        )\n\n        s3_bucket_block = S3Bucket(\n            bucket_name=\"bucket\",\n            minio_credentials=minio_creds,\n            bucket_folder=\"dogs/smalldogs\",\n            endpoint_url=\"http://localhost:9000\",\n        )\n        s3_havanese_path = s3_bucket_block.write_path(path=\"havanese\", content=data)\n        ```\n    \"\"\"\n\n    path = self._resolve_path(path)\n\n    await run_sync_in_worker_thread(self._write_sync, path, content)\n\n    return path\n
        "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.s3_copy","title":"s3_copy async","text":"

        Uses S3's internal CopyObject to copy objects within or between buckets. To copy objects between buckets, the credentials must have permission to read the source object and write to the target object. If the credentials do not have those permissions, try using S3Bucket.stream_from.

        Parameters:

        Name Type Description Default source_path str

        The path to the object to copy. Can be a string or Path.

        required target_path str

        The path to copy the object to. Can be a string or Path.

        required source_bucket_name str

        The bucket to copy the object from.

        required aws_credentials AwsCredentials

        Credentials to use for authentication with AWS.

        required target_bucket_name Optional[str]

        The bucket to copy the object to. If not provided, defaults to source_bucket_name.

        None **copy_kwargs

        Additional keyword arguments to pass to S3Client.copy_object.

        {}

        Returns:

        Type Description str

        The path that the object was copied to. Excludes the bucket name.

        Copy notes.txt from s3://my-bucket/my_folder/notes.txt to\ns3://my-bucket/my_folder/notes_copy.txt.\n\n```python\nfrom prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.s3 import s3_copy\n\naws_credentials = AwsCredentials.load(\"my-creds\")\n\n@flow\nasync def example_copy_flow():\n    await s3_copy(\n        source_path=\"my_folder/notes.txt\",\n        target_path=\"my_folder/notes_copy.txt\",\n        source_bucket_name=\"my-bucket\",\n        aws_credentials=aws_credentials,\n    )\n\nexample_copy_flow()\n```\n\nCopy notes.txt from s3://my-bucket/my_folder/notes.txt to\ns3://other-bucket/notes_copy.txt.\n\n```python\nfrom prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.s3 import s3_copy\n\naws_credentials = AwsCredentials.load(\"shared-creds\")\n\n@flow\nasync def example_copy_flow():\n    await s3_copy(\n        source_path=\"my_folder/notes.txt\",\n        target_path=\"notes_copy.txt\",\n        source_bucket_name=\"my-bucket\",\n        aws_credentials=aws_credentials,\n        target_bucket_name=\"other-bucket\",\n    )\n\nexample_copy_flow()\n```\n
        Source code in prefect_aws/s3.py
        @task\nasync def s3_copy(\n    source_path: str,\n    target_path: str,\n    source_bucket_name: str,\n    aws_credentials: AwsCredentials,\n    target_bucket_name: Optional[str] = None,\n    **copy_kwargs,\n) -> str:\n    \"\"\"Uses S3's internal\n    [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html)\n    to copy objects within or between buckets. To copy objects between buckets, the\n    credentials must have permission to read the source object and write to the target\n    object. If the credentials do not have those permissions, try using\n    `S3Bucket.stream_from`.\n\n    Args:\n        source_path: The path to the object to copy. Can be a string or `Path`.\n        target_path: The path to copy the object to. Can be a string or `Path`.\n        source_bucket_name: The bucket to copy the object from.\n        aws_credentials: Credentials to use for authentication with AWS.\n        target_bucket_name: The bucket to copy the object to. If not provided, defaults\n            to `source_bucket`.\n        **copy_kwargs: Additional keyword arguments to pass to `S3Client.copy_object`.\n\n    Returns:\n        The path that the object was copied to. Excludes the bucket name.\n\n    Examples:\n\n        Copy notes.txt from s3://my-bucket/my_folder/notes.txt to\n        s3://my-bucket/my_folder/notes_copy.txt.\n\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.s3 import s3_copy\n\n        aws_credentials = AwsCredentials.load(\"my-creds\")\n\n        @flow\n        async def example_copy_flow():\n            await s3_copy(\n                source_path=\"my_folder/notes.txt\",\n                target_path=\"my_folder/notes_copy.txt\",\n                source_bucket_name=\"my-bucket\",\n                aws_credentials=aws_credentials,\n            )\n\n        example_copy_flow()\n        ```\n\n        Copy notes.txt from s3://my-bucket/my_folder/notes.txt to\n        s3://other-bucket/notes_copy.txt.\n\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.s3 import s3_copy\n\n        aws_credentials = AwsCredentials.load(\"shared-creds\")\n\n        @flow\n        async def example_copy_flow():\n            await s3_copy(\n                source_path=\"my_folder/notes.txt\",\n                target_path=\"notes_copy.txt\",\n                source_bucket_name=\"my-bucket\",\n                aws_credentials=aws_credentials,\n                target_bucket_name=\"other-bucket\",\n            )\n\n        example_copy_flow()\n        ```\n\n    \"\"\"\n    logger = get_run_logger()\n\n    s3_client = aws_credentials.get_s3_client()\n\n    target_bucket_name = target_bucket_name or source_bucket_name\n\n    logger.info(\n        \"Copying object from bucket %s with key %s to bucket %s with key %s\",\n        source_bucket_name,\n        source_path,\n        target_bucket_name,\n        target_path,\n    )\n\n    s3_client.copy_object(\n        CopySource={\"Bucket\": source_bucket_name, \"Key\": source_path},\n        Bucket=target_bucket_name,\n        Key=target_path,\n        **copy_kwargs,\n    )\n\n    return target_path\n
        "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.s3_download","title":"s3_download async","text":"

        Downloads an object with a given key from a given S3 bucket.

        Parameters:

        Name Type Description Default bucket str

        Name of bucket to download object from. Required if a default value was not supplied when creating the task.

        required key str

        Key of object to download. Required if a default value was not supplied when creating the task.

        required aws_credentials AwsCredentials

        Credentials to use for authentication with AWS.

        required aws_client_parameters AwsClientParameters

        Custom parameter for the boto3 client initialization.

        AwsClientParameters()

        Returns:

        Type Description bytes

        A bytes representation of the downloaded object.

        Example

        Download a file from an S3 bucket:

        from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.s3 import s3_download\n\n\n@flow\nasync def example_s3_download_flow():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"access_key_id\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n    data = await s3_download(\n        bucket=\"bucket\",\n        key=\"key\",\n        aws_credentials=aws_credentials,\n    )\n\nexample_s3_download_flow()\n
        Source code in prefect_aws/s3.py
        @task\nasync def s3_download(\n    bucket: str,\n    key: str,\n    aws_credentials: AwsCredentials,\n    aws_client_parameters: AwsClientParameters = AwsClientParameters(),\n) -> bytes:\n    \"\"\"\n    Downloads an object with a given key from a given S3 bucket.\n\n    Args:\n        bucket: Name of bucket to download object from. Required if a default value was\n            not supplied when creating the task.\n        key: Key of object to download. Required if a default value was not supplied\n            when creating the task.\n        aws_credentials: Credentials to use for authentication with AWS.\n        aws_client_parameters: Custom parameter for the boto3 client initialization.\n\n\n    Returns:\n        A `bytes` representation of the downloaded object.\n\n    Example:\n        Download a file from an S3 bucket:\n\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.s3 import s3_download\n\n\n        @flow\n        async def example_s3_download_flow():\n            aws_credentials = AwsCredentials(\n                aws_access_key_id=\"acccess_key_id\",\n                aws_secret_access_key=\"secret_access_key\"\n            )\n            data = await s3_download(\n                bucket=\"bucket\",\n                key=\"key\",\n                aws_credentials=aws_credentials,\n            )\n\n        example_s3_download_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Downloading object from bucket %s with key %s\", bucket, key)\n\n    s3_client = aws_credentials.get_boto3_session().client(\n        \"s3\", **aws_client_parameters.get_params_override()\n    )\n    stream = io.BytesIO()\n    await run_sync_in_worker_thread(\n        s3_client.download_fileobj, Bucket=bucket, Key=key, Fileobj=stream\n    )\n    stream.seek(0)\n    output = stream.read()\n\n    return output\n
        "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.s3_list_objects","title":"s3_list_objects async","text":"

        Lists details of objects in a given S3 bucket.

        Parameters:

        Name Type Description Default bucket str

        Name of bucket to list items from. Required if a default value was not supplied when creating the task.

        required aws_credentials AwsCredentials

        Credentials to use for authentication with AWS.

        required aws_client_parameters AwsClientParameters

        Custom parameter for the boto3 client initialization.

        AwsClientParameters() prefix str

        Used to filter objects with keys starting with the specified prefix.

        '' delimiter str

        Character used to group keys of listed objects.

        '' page_size Optional[int]

        Number of objects to return in each request to the AWS API.

        None max_items Optional[int]

        Maximum number of objects to be returned by the task.

        None jmespath_query Optional[str]

        Query used to filter objects based on object attributes; refer to the boto3 docs for more information on how to construct queries.

        None

        Returns:

        Type Description List[Dict[str, Any]]

        A list of dictionaries containing information about the objects retrieved. Refer to the boto3 docs for an example response.

        Example

        List all objects in a bucket:

        from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.s3 import s3_list_objects\n\n\n@flow\nasync def example_s3_list_objects_flow():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"access_key_id\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n    objects = await s3_list_objects(\n        bucket=\"data_bucket\",\n        aws_credentials=aws_credentials\n    )\n\nexample_s3_list_objects_flow()\n
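        The prefix and max_items arguments documented above can narrow a listing without a JMESPath query. The sketch below is not from the library's documentation; the bucket name, prefix, and the "my-creds" credentials block name are placeholders.

        ```python
        from prefect import flow
        from prefect_aws import AwsCredentials
        from prefect_aws.s3 import s3_list_objects

        # "my-creds" is a placeholder name for a previously saved AwsCredentials block.
        aws_credentials = AwsCredentials.load("my-creds")

        @flow
        async def example_filtered_list_objects_flow():
            # List only keys under the "reports/2024/" prefix and stop after 100 objects.
            return await s3_list_objects(
                bucket="data_bucket",
                aws_credentials=aws_credentials,
                prefix="reports/2024/",
                max_items=100,
            )

        example_filtered_list_objects_flow()
        ```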
        Source code in prefect_aws/s3.py
        @task\nasync def s3_list_objects(\n    bucket: str,\n    aws_credentials: AwsCredentials,\n    aws_client_parameters: AwsClientParameters = AwsClientParameters(),\n    prefix: str = \"\",\n    delimiter: str = \"\",\n    page_size: Optional[int] = None,\n    max_items: Optional[int] = None,\n    jmespath_query: Optional[str] = None,\n) -> List[Dict[str, Any]]:\n    \"\"\"\n    Lists details of objects in a given S3 bucket.\n\n    Args:\n        bucket: Name of bucket to list items from. Required if a default value was not\n            supplied when creating the task.\n        aws_credentials: Credentials to use for authentication with AWS.\n        aws_client_parameters: Custom parameter for the boto3 client initialization..\n        prefix: Used to filter objects with keys starting with the specified prefix.\n        delimiter: Character used to group keys of listed objects.\n        page_size: Number of objects to return in each request to the AWS API.\n        max_items: Maximum number of objects that to be returned by task.\n        jmespath_query: Query used to filter objects based on object attributes refer to\n            the [boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/paginators.html#filtering-results-with-jmespath)\n            for more information on how to construct queries.\n\n    Returns:\n        A list of dictionaries containing information about the objects retrieved. Refer\n            to the boto3 docs for an example response.\n\n    Example:\n        List all objects in a bucket:\n\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.s3 import s3_list_objects\n\n\n        @flow\n        async def example_s3_list_objects_flow():\n            aws_credentials = AwsCredentials(\n                aws_access_key_id=\"acccess_key_id\",\n                aws_secret_access_key=\"secret_access_key\"\n            )\n            objects = await s3_list_objects(\n                bucket=\"data_bucket\",\n                aws_credentials=aws_credentials\n            )\n\n        example_s3_list_objects_flow()\n        ```\n    \"\"\"  # noqa E501\n    logger = get_run_logger()\n    logger.info(\"Listing objects in bucket %s with prefix %s\", bucket, prefix)\n\n    s3_client = aws_credentials.get_boto3_session().client(\n        \"s3\", **aws_client_parameters.get_params_override()\n    )\n    paginator = s3_client.get_paginator(\"list_objects_v2\")\n    page_iterator = paginator.paginate(\n        Bucket=bucket,\n        Prefix=prefix,\n        Delimiter=delimiter,\n        PaginationConfig={\"PageSize\": page_size, \"MaxItems\": max_items},\n    )\n    if jmespath_query:\n        page_iterator = page_iterator.search(f\"{jmespath_query} | {{Contents: @}}\")\n\n    return await run_sync_in_worker_thread(_list_objects_sync, page_iterator)\n
        "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.s3_move","title":"s3_move async","text":"

        Move an object from one S3 location to another. To move objects between buckets, the credentials must have permission to read and delete the source object and write to the target object. If the credentials do not have those permissions, this method will raise an error. If the credentials have permission to read the source object but not delete it, the object will be copied but not deleted.

        Parameters:

        Name Type Description Default source_path str

        The path of the object to move

        required target_path str

        The path to move the object to

        required source_bucket_name str

        The name of the bucket containing the source object

        required aws_credentials AwsCredentials

        Credentials to use for authentication with AWS.

        required target_bucket_name Optional[str]

        The bucket to move the object to. If not provided, defaults to source_bucket_name.

        None

        Returns:

        Type Description str

        The path that the object was moved to. Excludes the bucket name.
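        Example

        A minimal sketch, mirroring the s3_copy examples above rather than taken from the library's docs, that moves notes.txt from s3://my-bucket/my_folder/notes.txt to s3://archive-bucket/archive/notes.txt; the bucket names, keys, and the "my-creds" credentials block name are placeholders.

        ```python
        from prefect import flow
        from prefect_aws import AwsCredentials
        from prefect_aws.s3 import s3_move

        # "my-creds" is a placeholder name for a previously saved AwsCredentials block.
        aws_credentials = AwsCredentials.load("my-creds")

        @flow
        async def example_move_flow():
            # Copies the object to the target bucket, then deletes the original.
            await s3_move(
                source_path="my_folder/notes.txt",
                target_path="archive/notes.txt",
                source_bucket_name="my-bucket",
                target_bucket_name="archive-bucket",
                aws_credentials=aws_credentials,
            )

        example_move_flow()
        ```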

        Source code in prefect_aws/s3.py
        @task\nasync def s3_move(\n    source_path: str,\n    target_path: str,\n    source_bucket_name: str,\n    aws_credentials: AwsCredentials,\n    target_bucket_name: Optional[str] = None,\n) -> str:\n    \"\"\"\n    Move an object from one S3 location to another. To move objects between buckets,\n    the credentials must have permission to read and delete the source object and write\n    to the target object. If the credentials do not have those permissions, this method\n    will raise an error. If the credentials have permission to read the source object\n    but not delete it, the object will be copied but not deleted.\n\n    Args:\n        source_path: The path of the object to move\n        target_path: The path to move the object to\n        source_bucket_name: The name of the bucket containing the source object\n        aws_credentials: Credentials to use for authentication with AWS.\n        target_bucket_name: The bucket to copy the object to. If not provided, defaults\n            to `source_bucket`.\n\n    Returns:\n        The path that the object was moved to. Excludes the bucket name.\n    \"\"\"\n    logger = get_run_logger()\n\n    s3_client = aws_credentials.get_s3_client()\n\n    # If target bucket is not provided, assume it's the same as the source bucket\n    target_bucket_name = target_bucket_name or source_bucket_name\n\n    logger.info(\n        \"Moving object from s3://%s/%s s3://%s/%s\",\n        source_bucket_name,\n        source_path,\n        target_bucket_name,\n        target_path,\n    )\n\n    # Copy the object to the new location\n    s3_client.copy_object(\n        Bucket=target_bucket_name,\n        CopySource={\"Bucket\": source_bucket_name, \"Key\": source_path},\n        Key=target_path,\n    )\n\n    # Delete the original object\n    s3_client.delete_object(Bucket=source_bucket_name, Key=source_path)\n\n    return target_path\n
        "},{"location":"integrations/prefect-aws/s3/#prefect_aws.s3.s3_upload","title":"s3_upload async","text":"

        Uploads data to an S3 bucket.

        Parameters:

        Name Type Description Default data bytes

        Bytes representation of data to upload to S3.

        required bucket str

        Name of bucket to upload data to. Required if a default value was not supplied when creating the task.

        required aws_credentials AwsCredentials

        Credentials to use for authentication with AWS.

        required aws_client_parameters AwsClientParameters

        Custom parameter for the boto3 client initialization.

        AwsClientParameters() key Optional[str]

        Key of object to upload. Defaults to a UUID string.

        None

        Returns:

        Type Description str

        The key of the uploaded object

        Example

        Read and upload a file to an S3 bucket:

        from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.s3 import s3_upload\n\n\n@flow\nasync def example_s3_upload_flow():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"access_key_id\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n    with open(\"data.csv\", \"rb\") as file:\n        key = await s3_upload(\n            bucket=\"bucket\",\n            key=\"data.csv\",\n            data=file.read(),\n            aws_credentials=aws_credentials,\n        )\n\nexample_s3_upload_flow()\n
        Source code in prefect_aws/s3.py
        @task\nasync def s3_upload(\n    data: bytes,\n    bucket: str,\n    aws_credentials: AwsCredentials,\n    aws_client_parameters: AwsClientParameters = AwsClientParameters(),\n    key: Optional[str] = None,\n) -> str:\n    \"\"\"\n    Uploads data to an S3 bucket.\n\n    Args:\n        data: Bytes representation of data to upload to S3.\n        bucket: Name of bucket to upload data to. Required if a default value was not\n            supplied when creating the task.\n        aws_credentials: Credentials to use for authentication with AWS.\n        aws_client_parameters: Custom parameter for the boto3 client initialization..\n        key: Key of object to download. Defaults to a UUID string.\n\n    Returns:\n        The key of the uploaded object\n\n    Example:\n        Read and upload a file to an S3 bucket:\n\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.s3 import s3_upload\n\n\n        @flow\n        async def example_s3_upload_flow():\n            aws_credentials = AwsCredentials(\n                aws_access_key_id=\"acccess_key_id\",\n                aws_secret_access_key=\"secret_access_key\"\n            )\n            with open(\"data.csv\", \"rb\") as file:\n                key = await s3_upload(\n                    bucket=\"bucket\",\n                    key=\"data.csv\",\n                    data=file.read(),\n                    aws_credentials=aws_credentials,\n                )\n\n        example_s3_upload_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n\n    key = key or str(uuid.uuid4())\n\n    logger.info(\"Uploading object to bucket %s with key %s\", bucket, key)\n\n    s3_client = aws_credentials.get_boto3_session().client(\n        \"s3\", **aws_client_parameters.get_params_override()\n    )\n    stream = io.BytesIO(data)\n    await run_sync_in_worker_thread(\n        s3_client.upload_fileobj, stream, Bucket=bucket, Key=key\n    )\n\n    return key\n
        "},{"location":"integrations/prefect-aws/secrets_manager/","title":"Secrets Manager","text":""},{"location":"integrations/prefect-aws/secrets_manager/#prefect_aws.secrets_manager","title":"prefect_aws.secrets_manager","text":"

        Tasks for interacting with AWS Secrets Manager

        "},{"location":"integrations/prefect-aws/secrets_manager/#prefect_aws.secrets_manager.AwsSecret","title":"AwsSecret","text":"

        Bases: SecretBlock

        Manages a secret in AWS's Secrets Manager.

        Attributes:

        Name Type Description aws_credentials AwsCredentials

        The credentials to use for authentication with AWS.

        secret_name str

        The name of the secret.
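        An AwsSecret block can be created once, saved, and then loaded by name wherever the secret is needed. The sketch below is an illustration only; the "my-creds" credentials block, the secret name, and the "db-password" block name are placeholders.

        ```python
        from prefect_aws import AwsCredentials
        from prefect_aws.secrets_manager import AwsSecret

        # "my-creds" is a placeholder name for a previously saved AwsCredentials block.
        aws_secret = AwsSecret(
            aws_credentials=AwsCredentials.load("my-creds"),
            secret_name="my-app/db-password",
        )
        # Persist the block so it can later be retrieved with AwsSecret.load("db-password").
        aws_secret.save("db-password", overwrite=True)
        ```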

        Source code in prefect_aws/secrets_manager.py
        class AwsSecret(SecretBlock):\n    \"\"\"\n    Manages a secret in AWS's Secrets Manager.\n\n    Attributes:\n        aws_credentials: The credentials to use for authentication with AWS.\n        secret_name: The name of the secret.\n    \"\"\"\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/d74b16fe84ce626345adf235a47008fea2869a60-225x225.png\"  # noqa\n    _block_type_name = \"AWS Secret\"\n    _documentation_url = \"https://prefecthq.github.io/prefect-aws/secrets_manager/#prefect_aws.secrets_manager.AwsSecret\"  # noqa\n\n    aws_credentials: AwsCredentials\n    secret_name: str = Field(default=..., description=\"The name of the secret.\")\n\n    @sync_compatible\n    async def read_secret(\n        self,\n        version_id: str = None,\n        version_stage: str = None,\n        **read_kwargs: Dict[str, Any],\n    ) -> bytes:\n        \"\"\"\n        Reads the secret from the secret storage service.\n\n        Args:\n            version_id: The version of the secret to read. If not provided, the latest\n                version will be read.\n            version_stage: The version stage of the secret to read. If not provided,\n                the latest version will be read.\n            read_kwargs: Additional keyword arguments to pass to the\n                `get_secret_value` method of the boto3 client.\n\n        Returns:\n            The secret data.\n\n        Examples:\n            Reads a secret.\n            ```python\n            secrets_manager = SecretsManager.load(\"MY_BLOCK\")\n            secrets_manager.read_secret()\n            ```\n        \"\"\"\n        client = self.aws_credentials.get_secrets_manager_client()\n        if version_id is not None:\n            read_kwargs[\"VersionId\"] = version_id\n        if version_stage is not None:\n            read_kwargs[\"VersionStage\"] = version_stage\n        response = await run_sync_in_worker_thread(\n            client.get_secret_value, SecretId=self.secret_name, **read_kwargs\n        )\n        if \"SecretBinary\" in response:\n            secret = response[\"SecretBinary\"]\n        elif \"SecretString\" in response:\n            secret = response[\"SecretString\"]\n        arn = response[\"ARN\"]\n        self.logger.info(f\"The secret {arn!r} data was successfully read.\")\n        return secret\n\n    @sync_compatible\n    async def write_secret(\n        self, secret_data: bytes, **put_or_create_secret_kwargs: Dict[str, Any]\n    ) -> str:\n        \"\"\"\n        Writes the secret to the secret storage service as a SecretBinary;\n        if it doesn't exist, it will be created.\n\n        Args:\n            secret_data: The secret data to write.\n            **put_or_create_secret_kwargs: Additional keyword arguments to pass to\n                put_secret_value or create_secret method of the boto3 client.\n\n        Returns:\n            The path that the secret was written to.\n\n        Examples:\n            Write some secret data.\n            ```python\n            secrets_manager = SecretsManager.load(\"MY_BLOCK\")\n            secrets_manager.write_secret(b\"my_secret_data\")\n            ```\n        \"\"\"\n        client = self.aws_credentials.get_secrets_manager_client()\n        try:\n            response = await run_sync_in_worker_thread(\n                client.put_secret_value,\n                SecretId=self.secret_name,\n                SecretBinary=secret_data,\n                **put_or_create_secret_kwargs,\n            )\n        except 
client.exceptions.ResourceNotFoundException:\n            self.logger.info(\n                f\"The secret {self.secret_name!r} does not exist yet, creating it now.\"\n            )\n            response = await run_sync_in_worker_thread(\n                client.create_secret,\n                Name=self.secret_name,\n                SecretBinary=secret_data,\n                **put_or_create_secret_kwargs,\n            )\n        arn = response[\"ARN\"]\n        self.logger.info(f\"The secret data was written successfully to {arn!r}.\")\n        return arn\n\n    @sync_compatible\n    async def delete_secret(\n        self,\n        recovery_window_in_days: int = 30,\n        force_delete_without_recovery: bool = False,\n        **delete_kwargs: Dict[str, Any],\n    ) -> str:\n        \"\"\"\n        Deletes the secret from the secret storage service.\n\n        Args:\n            recovery_window_in_days: The number of days to wait before permanently\n                deleting the secret. Must be between 7 and 30 days.\n            force_delete_without_recovery: If True, the secret will be deleted\n                immediately without a recovery window.\n            **delete_kwargs: Additional keyword arguments to pass to the\n                delete_secret method of the boto3 client.\n\n        Returns:\n            The path that the secret was deleted from.\n\n        Examples:\n            Deletes the secret with a recovery window of 15 days.\n            ```python\n            secrets_manager = SecretsManager.load(\"MY_BLOCK\")\n            secrets_manager.delete_secret(recovery_window_in_days=15)\n            ```\n        \"\"\"\n        if force_delete_without_recovery and recovery_window_in_days:\n            raise ValueError(\n                \"Cannot specify recovery window and force delete without recovery.\"\n            )\n        elif not (7 <= recovery_window_in_days <= 30):\n            raise ValueError(\n                \"Recovery window must be between 7 and 30 days, got \"\n                f\"{recovery_window_in_days}.\"\n            )\n\n        client = self.aws_credentials.get_secrets_manager_client()\n        response = await run_sync_in_worker_thread(\n            client.delete_secret,\n            SecretId=self.secret_name,\n            RecoveryWindowInDays=recovery_window_in_days,\n            ForceDeleteWithoutRecovery=force_delete_without_recovery,\n            **delete_kwargs,\n        )\n        arn = response[\"ARN\"]\n        self.logger.info(f\"The secret {arn} was deleted successfully.\")\n        return arn\n
        "},{"location":"integrations/prefect-aws/secrets_manager/#prefect_aws.secrets_manager.AwsSecret.delete_secret","title":"delete_secret async","text":"

        Deletes the secret from the secret storage service.

        Parameters:

        Name Type Description Default recovery_window_in_days int

        The number of days to wait before permanently deleting the secret. Must be between 7 and 30 days.

        30 force_delete_without_recovery bool

        If True, the secret will be deleted immediately without a recovery window.

        False **delete_kwargs Dict[str, Any]

        Additional keyword arguments to pass to the delete_secret method of the boto3 client.

        {}

        Returns:

        Type Description str

        The ARN of the deleted secret.

        Examples:

        Deletes the secret with a recovery window of 15 days.

        aws_secret = AwsSecret.load(\"MY_BLOCK\")\naws_secret.delete_secret(recovery_window_in_days=15)\n

        Source code in prefect_aws/secrets_manager.py
        @sync_compatible\nasync def delete_secret(\n    self,\n    recovery_window_in_days: int = 30,\n    force_delete_without_recovery: bool = False,\n    **delete_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"\n    Deletes the secret from the secret storage service.\n\n    Args:\n        recovery_window_in_days: The number of days to wait before permanently\n            deleting the secret. Must be between 7 and 30 days.\n        force_delete_without_recovery: If True, the secret will be deleted\n            immediately without a recovery window.\n        **delete_kwargs: Additional keyword arguments to pass to the\n            delete_secret method of the boto3 client.\n\n    Returns:\n        The path that the secret was deleted from.\n\n    Examples:\n        Deletes the secret with a recovery window of 15 days.\n        ```python\n        secrets_manager = SecretsManager.load(\"MY_BLOCK\")\n        secrets_manager.delete_secret(recovery_window_in_days=15)\n        ```\n    \"\"\"\n    if force_delete_without_recovery and recovery_window_in_days:\n        raise ValueError(\n            \"Cannot specify recovery window and force delete without recovery.\"\n        )\n    elif not (7 <= recovery_window_in_days <= 30):\n        raise ValueError(\n            \"Recovery window must be between 7 and 30 days, got \"\n            f\"{recovery_window_in_days}.\"\n        )\n\n    client = self.aws_credentials.get_secrets_manager_client()\n    response = await run_sync_in_worker_thread(\n        client.delete_secret,\n        SecretId=self.secret_name,\n        RecoveryWindowInDays=recovery_window_in_days,\n        ForceDeleteWithoutRecovery=force_delete_without_recovery,\n        **delete_kwargs,\n    )\n    arn = response[\"ARN\"]\n    self.logger.info(f\"The secret {arn} was deleted successfully.\")\n    return arn\n
        "},{"location":"integrations/prefect-aws/secrets_manager/#prefect_aws.secrets_manager.AwsSecret.read_secret","title":"read_secret async","text":"

        Reads the secret from the secret storage service.

        Parameters:

        Name Type Description Default version_id str

        The version of the secret to read. If not provided, the latest version will be read.

        None version_stage str

        The version stage of the secret to read. If not provided, the latest version will be read.

        None read_kwargs Dict[str, Any]

        Additional keyword arguments to pass to the get_secret_value method of the boto3 client.

        {}

        Returns:

        Type Description bytes

        The secret data.

        Examples:

        Reads a secret.

        aws_secret = AwsSecret.load(\"MY_BLOCK\")\naws_secret.read_secret()\n

        Source code in prefect_aws/secrets_manager.py
        @sync_compatible\nasync def read_secret(\n    self,\n    version_id: str = None,\n    version_stage: str = None,\n    **read_kwargs: Dict[str, Any],\n) -> bytes:\n    \"\"\"\n    Reads the secret from the secret storage service.\n\n    Args:\n        version_id: The version of the secret to read. If not provided, the latest\n            version will be read.\n        version_stage: The version stage of the secret to read. If not provided,\n            the latest version will be read.\n        read_kwargs: Additional keyword arguments to pass to the\n            `get_secret_value` method of the boto3 client.\n\n    Returns:\n        The secret data.\n\n    Examples:\n        Reads a secret.\n        ```python\n        secrets_manager = SecretsManager.load(\"MY_BLOCK\")\n        secrets_manager.read_secret()\n        ```\n    \"\"\"\n    client = self.aws_credentials.get_secrets_manager_client()\n    if version_id is not None:\n        read_kwargs[\"VersionId\"] = version_id\n    if version_stage is not None:\n        read_kwargs[\"VersionStage\"] = version_stage\n    response = await run_sync_in_worker_thread(\n        client.get_secret_value, SecretId=self.secret_name, **read_kwargs\n    )\n    if \"SecretBinary\" in response:\n        secret = response[\"SecretBinary\"]\n    elif \"SecretString\" in response:\n        secret = response[\"SecretString\"]\n    arn = response[\"ARN\"]\n    self.logger.info(f\"The secret {arn!r} data was successfully read.\")\n    return secret\n
        "},{"location":"integrations/prefect-aws/secrets_manager/#prefect_aws.secrets_manager.AwsSecret.write_secret","title":"write_secret async","text":"

        Writes the secret to the secret storage service as a SecretBinary; if it doesn't exist, it will be created.

        Parameters:

        Name Type Description Default secret_data bytes

        The secret data to write.

        required **put_or_create_secret_kwargs Dict[str, Any]

        Additional keyword arguments to pass to put_secret_value or create_secret method of the boto3 client.

        {}

        Returns:

        Type Description str

        The ARN that the secret data was written to.

        Examples:

        Write some secret data.

        aws_secret = AwsSecret.load(\"MY_BLOCK\")\naws_secret.write_secret(b\"my_secret_data\")\n

        Source code in prefect_aws/secrets_manager.py
        @sync_compatible\nasync def write_secret(\n    self, secret_data: bytes, **put_or_create_secret_kwargs: Dict[str, Any]\n) -> str:\n    \"\"\"\n    Writes the secret to the secret storage service as a SecretBinary;\n    if it doesn't exist, it will be created.\n\n    Args:\n        secret_data: The secret data to write.\n        **put_or_create_secret_kwargs: Additional keyword arguments to pass to\n            put_secret_value or create_secret method of the boto3 client.\n\n    Returns:\n        The path that the secret was written to.\n\n    Examples:\n        Write some secret data.\n        ```python\n        secrets_manager = SecretsManager.load(\"MY_BLOCK\")\n        secrets_manager.write_secret(b\"my_secret_data\")\n        ```\n    \"\"\"\n    client = self.aws_credentials.get_secrets_manager_client()\n    try:\n        response = await run_sync_in_worker_thread(\n            client.put_secret_value,\n            SecretId=self.secret_name,\n            SecretBinary=secret_data,\n            **put_or_create_secret_kwargs,\n        )\n    except client.exceptions.ResourceNotFoundException:\n        self.logger.info(\n            f\"The secret {self.secret_name!r} does not exist yet, creating it now.\"\n        )\n        response = await run_sync_in_worker_thread(\n            client.create_secret,\n            Name=self.secret_name,\n            SecretBinary=secret_data,\n            **put_or_create_secret_kwargs,\n        )\n    arn = response[\"ARN\"]\n    self.logger.info(f\"The secret data was written successfully to {arn!r}.\")\n    return arn\n
        "},{"location":"integrations/prefect-aws/secrets_manager/#prefect_aws.secrets_manager.create_secret","title":"create_secret async","text":"

        Creates a secret in AWS Secrets Manager.

        Parameters:

        Name Type Description Default secret_name str

        The name of the secret to create.

        required secret_value Union[str, bytes]

        The value to store in the created secret.

        required aws_credentials AwsCredentials

        Credentials to use for authentication with AWS.

        required description Optional[str]

        A description for the created secret.

        None tags Optional[List[Dict[str, str]]]

        A list of tags to attach to the secret. Each tag should be specified as a dictionary in the following format:

        {\n    \"Key\": str,\n    \"Value\": str\n}\n

        None

        Returns:

        Type Description Dict[str, str]

        A dict containing the secret ARN (Amazon Resource Name), name, and current version ID.

        {\n    \"ARN\": str,\n    \"Name\": str,\n    \"VersionId\": str\n}\n

        ```python\nfrom prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.secrets_manager import create_secret\n\n@flow\ndef example_create_secret():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"access_key_id\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n    create_secret(\n        secret_name=\"life_the_universe_and_everything\",\n        secret_value=\"42\",\n        aws_credentials=aws_credentials\n    )\n\nexample_create_secret()\n```\n
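        Because description and tags are optional arguments, they can be passed in the same call. The sketch below is a hypothetical variation on the example above; the "my-creds" credentials block name, the description, and the tag values are placeholders.

        ```python
        from prefect import flow
        from prefect_aws import AwsCredentials
        from prefect_aws.secrets_manager import create_secret

        @flow
        def example_create_secret_with_tags():
            # "my-creds" is a placeholder name for a previously saved AwsCredentials block.
            aws_credentials = AwsCredentials.load("my-creds")
            create_secret(
                secret_name="life_the_universe_and_everything",
                secret_value="42",
                aws_credentials=aws_credentials,
                description="Answer to the ultimate question",
                # Each tag is a dict with "Key" and "Value" entries, matching the format above.
                tags=[{"Key": "team", "Value": "deep-thought"}],
            )

        example_create_secret_with_tags()
        ```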
        Source code in prefect_aws/secrets_manager.py
        @task\nasync def create_secret(\n    secret_name: str,\n    secret_value: Union[str, bytes],\n    aws_credentials: AwsCredentials,\n    description: Optional[str] = None,\n    tags: Optional[List[Dict[str, str]]] = None,\n) -> Dict[str, str]:\n    \"\"\"\n    Creates a secret in AWS Secrets Manager.\n\n    Args:\n        secret_name: The name of the secret to create.\n        secret_value: The value to store in the created secret.\n        aws_credentials: Credentials to use for authentication with AWS.\n        description: A description for the created secret.\n        tags: A list of tags to attach to the secret. Each tag should be specified as a\n            dictionary in the following format:\n            ```python\n            {\n                \"Key\": str,\n                \"Value\": str\n            }\n            ```\n\n    Returns:\n        A dict containing the secret ARN (Amazon Resource Name),\n            name, and current version ID.\n            ```python\n            {\n                \"ARN\": str,\n                \"Name\": str,\n                \"VersionId\": str\n            }\n            ```\n    Example:\n        Create a secret:\n\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.secrets_manager import create_secret\n\n        @flow\n        def example_create_secret():\n            aws_credentials = AwsCredentials(\n                aws_access_key_id=\"access_key_id\",\n                aws_secret_access_key=\"secret_access_key\"\n            )\n            create_secret(\n                secret_name=\"life_the_universe_and_everything\",\n                secret_value=\"42\",\n                aws_credentials=aws_credentials\n            )\n\n        example_create_secret()\n        ```\n\n\n    \"\"\"\n    create_secret_kwargs: Dict[str, Union[str, bytes, List[Dict[str, str]]]] = dict(\n        Name=secret_name\n    )\n    if description is not None:\n        create_secret_kwargs[\"Description\"] = description\n    if tags is not None:\n        create_secret_kwargs[\"Tags\"] = tags\n    if isinstance(secret_value, bytes):\n        create_secret_kwargs[\"SecretBinary\"] = secret_value\n    elif isinstance(secret_value, str):\n        create_secret_kwargs[\"SecretString\"] = secret_value\n    else:\n        raise ValueError(\"Please provide a bytes or str value for secret_value\")\n\n    logger = get_run_logger()\n    logger.info(\"Creating secret named %s\", secret_name)\n\n    client = aws_credentials.get_boto3_session().client(\"secretsmanager\")\n\n    try:\n        response = await run_sync_in_worker_thread(\n            client.create_secret, **create_secret_kwargs\n        )\n        print(response.pop(\"ResponseMetadata\", None))\n        return response\n    except ClientError:\n        logger.exception(\"Unable to create secret %s\", secret_name)\n        raise\n
        "},{"location":"integrations/prefect-aws/secrets_manager/#prefect_aws.secrets_manager.delete_secret","title":"delete_secret async","text":"

        Deletes a secret from AWS Secrets Manager.

        Secrets can be deleted immediately by setting force_delete_without_recovery to True. Otherwise, secrets are marked for deletion and remain recoverable for the number of days specified in recovery_window_in_days.

        Parameters:

        Name Type Description Default secret_name str

        Name of the secret to be deleted.

        required aws_credentials AwsCredentials

        Credentials to use for authentication with AWS.

        required recovery_window_in_days int

        Number of days a secret should be recoverable for before permanent deletion. Minimum window is 7 days and maximum window is 30 days. If force_delete_without_recovery is set to True, this value will be ignored.

        30 force_delete_without_recovery bool

        If True, the secret will be immediately deleted and will not be recoverable.

        False

        Returns:

        Type Description Dict[str, str]

        A dict containing the secret ARN (Amazon Resource Name), name, and deletion date of the secret. DeletionDate is the date and time of the delete request plus the number of days in recovery_window_in_days.

        {\n    \"ARN\": str,\n    \"Name\": str,\n    \"DeletionDate\": datetime.datetime\n}\n

        Examples:

        Delete a secret immediately:

        from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.secrets_manager import delete_secret\n\n@flow\ndef example_delete_secret_immediately():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"access_key_id\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n    delete_secret(\n        secret_name=\"life_the_universe_and_everything\",\n        aws_credentials=aws_credentials,\n        force_delete_without_recovery=True\n    )\n\nexample_delete_secret_immediately()\n

        Delete a secret with a 90 day recovery window:

        from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.secrets_manager import delete_secret\n\n@flow\ndef example_delete_secret_with_recovery_window():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"access_key_id\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n    delete_secret(\n        secret_name=\"life_the_universe_and_everything\",\n        aws_credentials=aws_credentials,\n        recovery_window_in_days=90\n    )\n\nexample_delete_secret_with_recovery_window()\n
        Source code in prefect_aws/secrets_manager.py
        @task\nasync def delete_secret(\n    secret_name: str,\n    aws_credentials: AwsCredentials,\n    recovery_window_in_days: int = 30,\n    force_delete_without_recovery: bool = False,\n) -> Dict[str, str]:\n    \"\"\"\n    Deletes a secret from AWS Secrets Manager.\n\n    Secrets can either be deleted immediately by setting `force_delete_without_recovery`\n    equal to `True`. Otherwise, secrets will be marked for deletion and available for\n    recovery for the number of days specified in `recovery_window_in_days`\n\n    Args:\n        secret_name: Name of the secret to be deleted.\n        aws_credentials: Credentials to use for authentication with AWS.\n        recovery_window_in_days: Number of days a secret should be recoverable for\n            before permanent deletion. Minimum window is 7 days and maximum window\n            is 30 days. If `force_delete_without_recovery` is set to `True`, this\n            value will be ignored.\n        force_delete_without_recovery: If `True`, the secret will be immediately\n            deleted and will not be recoverable.\n\n    Returns:\n        A dict containing the secret ARN (Amazon Resource Name),\n            name, and deletion date of the secret. DeletionDate is the date and\n            time of the delete request plus the number of days in\n            `recovery_window_in_days`.\n            ```python\n            {\n                \"ARN\": str,\n                \"Name\": str,\n                \"DeletionDate\": datetime.datetime\n            }\n            ```\n\n    Examples:\n        Delete a secret immediately:\n\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.secrets_manager import delete_secret\n\n        @flow\n        def example_delete_secret_immediately():\n            aws_credentials = AwsCredentials(\n                aws_access_key_id=\"access_key_id\",\n                aws_secret_access_key=\"secret_access_key\"\n            )\n            delete_secret(\n                secret_name=\"life_the_universe_and_everything\",\n                aws_credentials=aws_credentials,\n                force_delete_without_recovery: True\n            )\n\n        example_delete_secret_immediately()\n        ```\n\n        Delete a secret with a 90 day recovery window:\n\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.secrets_manager import delete_secret\n\n        @flow\n        def example_delete_secret_with_recovery_window():\n            aws_credentials = AwsCredentials(\n                aws_access_key_id=\"access_key_id\",\n                aws_secret_access_key=\"secret_access_key\"\n            )\n            delete_secret(\n                secret_name=\"life_the_universe_and_everything\",\n                aws_credentials=aws_credentials,\n                recovery_window_in_days=90\n            )\n\n        example_delete_secret_with_recovery_window()\n        ```\n\n\n    \"\"\"\n    if not force_delete_without_recovery and not (7 <= recovery_window_in_days <= 30):\n        raise ValueError(\"Recovery window must be between 7 and 30 days.\")\n\n    delete_secret_kwargs: Dict[str, Union[str, int, bool]] = dict(SecretId=secret_name)\n    if force_delete_without_recovery:\n        delete_secret_kwargs[\n            \"ForceDeleteWithoutRecovery\"\n        ] = force_delete_without_recovery\n    else:\n        delete_secret_kwargs[\"RecoveryWindowInDays\"] = 
recovery_window_in_days\n\n    logger = get_run_logger()\n    logger.info(\"Deleting secret %s\", secret_name)\n\n    client = aws_credentials.get_boto3_session().client(\"secretsmanager\")\n\n    try:\n        response = await run_sync_in_worker_thread(\n            client.delete_secret, **delete_secret_kwargs\n        )\n        response.pop(\"ResponseMetadata\", None)\n        return response\n    except ClientError:\n        logger.exception(\"Unable to delete secret %s\", secret_name)\n        raise\n
        "},{"location":"integrations/prefect-aws/secrets_manager/#prefect_aws.secrets_manager.read_secret","title":"read_secret async","text":"

        Reads the value of a given secret from AWS Secrets Manager.

        Parameters:

        Name Type Description Default secret_name str

        Name of stored secret.

        required aws_credentials AwsCredentials

        Credentials to use for authentication with AWS.

        required version_id Optional[str]

        Specifies version of secret to read. Defaults to the most recent version if not given.

        None version_stage Optional[str]

        Specifies the version stage of the secret to read. Defaults to AWS_CURRENT if not given.

        None

        Returns:

        Type Description Union[str, bytes]

        The secret value as a str or bytes depending on the format in which the secret was stored.

        Example

        Read a secret value:

        from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.secrets_manager import read_secret\n\n@flow\ndef example_read_secret():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"access_key_id\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n    secret_value = read_secret(\n        secret_name=\"db_password\",\n        aws_credentials=aws_credentials\n    )\n\nexample_read_secret()\n
        Source code in prefect_aws/secrets_manager.py
        @task\nasync def read_secret(\n    secret_name: str,\n    aws_credentials: AwsCredentials,\n    version_id: Optional[str] = None,\n    version_stage: Optional[str] = None,\n) -> Union[str, bytes]:\n    \"\"\"\n    Reads the value of a given secret from AWS Secrets Manager.\n\n    Args:\n        secret_name: Name of stored secret.\n        aws_credentials: Credentials to use for authentication with AWS.\n        version_id: Specifies version of secret to read. Defaults to the most recent\n            version if not given.\n        version_stage: Specifies the version stage of the secret to read. Defaults to\n            AWS_CURRENT if not given.\n\n    Returns:\n        The secret values as a `str` or `bytes` depending on the format in which the\n            secret was stored.\n\n    Example:\n        Read a secret value:\n\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.secrets_manager import read_secret\n\n        @flow\n        def example_read_secret():\n            aws_credentials = AwsCredentials(\n                aws_access_key_id=\"access_key_id\",\n                aws_secret_access_key=\"secret_access_key\"\n            )\n            secret_value = read_secret(\n                secret_name=\"db_password\",\n                aws_credentials=aws_credentials\n            )\n\n        example_read_secret()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Getting value for secret %s\", secret_name)\n\n    client = aws_credentials.get_boto3_session().client(\"secretsmanager\")\n\n    get_secret_value_kwargs = dict(SecretId=secret_name)\n    if version_id is not None:\n        get_secret_value_kwargs[\"VersionId\"] = version_id\n    if version_stage is not None:\n        get_secret_value_kwargs[\"VersionStage\"] = version_stage\n\n    try:\n        response = await run_sync_in_worker_thread(\n            client.get_secret_value, **get_secret_value_kwargs\n        )\n    except ClientError:\n        logger.exception(\"Unable to get value for secret %s\", secret_name)\n        raise\n    else:\n        return response.get(\"SecretString\") or response.get(\"SecretBinary\")\n
        "},{"location":"integrations/prefect-aws/secrets_manager/#prefect_aws.secrets_manager.update_secret","title":"update_secret async","text":"

        Updates the value of a given secret in AWS Secrets Manager.

        Parameters:

        Name Type Description Default secret_name str

        Name of secret to update.

        required secret_value Union[str, bytes]

        Desired value of the secret. Can be either str or bytes.

        required aws_credentials AwsCredentials

        Credentials to use for authentication with AWS.

        required description Optional[str]

        Desired description of the secret.

        None

        Returns:

        Type Description Dict[str, str]

        A dict containing the secret ARN (Amazon Resource Name), name, and current version ID.

        {\n    \"ARN\": str,\n    \"Name\": str,\n    \"VersionId\": str\n}\n

        Example

        Update a secret value:

        from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.secrets_manager import update_secret\n\n@flow\ndef example_update_secret():\n    aws_credentials = AwsCredentials(\n        aws_access_key_id=\"access_key_id\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n    update_secret(\n        secret_name=\"life_the_universe_and_everything\",\n        secret_value=\"42\",\n        aws_credentials=aws_credentials\n    )\n\nexample_update_secret()\n
        Source code in prefect_aws/secrets_manager.py
        @task\nasync def update_secret(\n    secret_name: str,\n    secret_value: Union[str, bytes],\n    aws_credentials: AwsCredentials,\n    description: Optional[str] = None,\n) -> Dict[str, str]:\n    \"\"\"\n    Updates the value of a given secret in AWS Secrets Manager.\n\n    Args:\n        secret_name: Name of secret to update.\n        secret_value: Desired value of the secret. Can be either `str` or `bytes`.\n        aws_credentials: Credentials to use for authentication with AWS.\n        description: Desired description of the secret.\n\n    Returns:\n        A dict containing the secret ARN (Amazon Resource Name),\n            name, and current version ID.\n            ```python\n            {\n                \"ARN\": str,\n                \"Name\": str,\n                \"VersionId\": str\n            }\n            ```\n\n    Example:\n        Update a secret value:\n\n        ```python\n        from prefect import flow\n        from prefect_aws import AwsCredentials\n        from prefect_aws.secrets_manager import update_secret\n\n        @flow\n        def example_update_secret():\n            aws_credentials = AwsCredentials(\n                aws_access_key_id=\"access_key_id\",\n                aws_secret_access_key=\"secret_access_key\"\n            )\n            update_secret(\n                secret_name=\"life_the_universe_and_everything\",\n                secret_value=\"42\",\n                aws_credentials=aws_credentials\n            )\n\n        example_update_secret()\n        ```\n\n    \"\"\"\n    update_secret_kwargs: Dict[str, Union[str, bytes]] = dict(SecretId=secret_name)\n    if description is not None:\n        update_secret_kwargs[\"Description\"] = description\n    if isinstance(secret_value, bytes):\n        update_secret_kwargs[\"SecretBinary\"] = secret_value\n    elif isinstance(secret_value, str):\n        update_secret_kwargs[\"SecretString\"] = secret_value\n    else:\n        raise ValueError(\"Please provide a bytes or str value for secret_value\")\n\n    logger = get_run_logger()\n    logger.info(\"Updating value for secret %s\", secret_name)\n\n    client = aws_credentials.get_boto3_session().client(\"secretsmanager\")\n\n    try:\n        response = await run_sync_in_worker_thread(\n            client.update_secret, **update_secret_kwargs\n        )\n        response.pop(\"ResponseMetadata\", None)\n        return response\n    except ClientError:\n        logger.exception(\"Unable to update secret %s\", secret_name)\n        raise\n
        "},{"location":"integrations/prefect-aws/deployments/steps/","title":"Steps","text":""},{"location":"integrations/prefect-aws/deployments/steps/#prefect_aws.deployments.steps","title":"prefect_aws.deployments.steps","text":"

        Prefect deployment steps for code storage and retrieval in S3 and S3-compatible services.

        "},{"location":"integrations/prefect-aws/deployments/steps/#prefect_aws.deployments.steps.PullFromS3Output","title":"PullFromS3Output","text":"

        Bases: TypedDict

        The output of the pull_from_s3 step.

        Source code in prefect_aws/deployments/steps.py
        class PullFromS3Output(TypedDict):\n    \"\"\"\n    The output of the `pull_from_s3` step.\n    \"\"\"\n\n    bucket: str\n    folder: str\n    directory: str\n
        "},{"location":"integrations/prefect-aws/deployments/steps/#prefect_aws.deployments.steps.PullProjectFromS3Output","title":"PullProjectFromS3Output","text":"

        Bases: PullFromS3Output

        Deprecated. Use PullFromS3Output instead.

        Source code in prefect_aws/deployments/steps.py
        @deprecated_callable(start_date=\"Jun 2023\", help=\"Use `PullFromS3Output` instead.\")\nclass PullProjectFromS3Output(PullFromS3Output):\n    \"\"\"Deprecated. Use `PullFromS3Output` instead..\"\"\"\n
        "},{"location":"integrations/prefect-aws/deployments/steps/#prefect_aws.deployments.steps.PushProjectToS3Output","title":"PushProjectToS3Output","text":"

        Bases: PushToS3Output

        Deprecated. Use PushToS3Output instead.

        Source code in prefect_aws/deployments/steps.py
        @deprecated_callable(start_date=\"Jun 2023\", help=\"Use `PushToS3Output` instead.\")\nclass PushProjectToS3Output(PushToS3Output):\n    \"\"\"Deprecated. Use `PushToS3Output` instead.\"\"\"\n
        "},{"location":"integrations/prefect-aws/deployments/steps/#prefect_aws.deployments.steps.PushToS3Output","title":"PushToS3Output","text":"

        Bases: TypedDict

        The output of the push_to_s3 step.

        Source code in prefect_aws/deployments/steps.py
        class PushToS3Output(TypedDict):\n    \"\"\"\n    The output of the `push_to_s3` step.\n    \"\"\"\n\n    bucket: str\n    folder: str\n
        "},{"location":"integrations/prefect-aws/deployments/steps/#prefect_aws.deployments.steps.pull_from_s3","title":"pull_from_s3","text":"

        Pulls the contents of an S3 bucket folder to the current working directory.

        Parameters:

        Name Type Description Default bucket str

        The name of the S3 bucket where files are stored.

        required folder str

        The folder in the S3 bucket where files are stored.

        required credentials Optional[Dict]

        A dictionary of AWS credentials (aws_access_key_id, aws_secret_access_key, aws_session_token) or MinIO credentials (minio_root_user, minio_root_password).

        None client_parameters Optional[Dict]

        A dictionary of additional parameters to pass to the boto3 client.

        None

        Returns:

        Type Description PullFromS3Output

        A dictionary containing the bucket, folder, and local directory where files were downloaded.

        Examples:

        Pull files from S3 using the default credentials and client parameters:

        pull:\n    - prefect_aws.deployments.steps.pull_from_s3:\n        requires: prefect-aws\n        bucket: my-bucket\n        folder: my-project\n

        Pull files from S3 using credentials stored in a block:

        pull:\n    - prefect_aws.deployments.steps.pull_from_s3:\n        requires: prefect-aws\n        bucket: my-bucket\n        folder: my-project\n        credentials: \"{{ prefect.blocks.aws-credentials.dev-credentials }}\"\n

        Source code in prefect_aws/deployments/steps.py
        def pull_from_s3(\n    bucket: str,\n    folder: str,\n    credentials: Optional[Dict] = None,\n    client_parameters: Optional[Dict] = None,\n) -> PullFromS3Output:\n    \"\"\"\n    Pulls the contents of an S3 bucket folder to the current working directory.\n\n    Args:\n        bucket: The name of the S3 bucket where files are stored.\n        folder: The folder in the S3 bucket where files are stored.\n        credentials: A dictionary of AWS credentials (aws_access_key_id,\n            aws_secret_access_key, aws_session_token) or MinIO credentials\n            (minio_root_user, minio_root_password).\n        client_parameters: A dictionary of additional parameters to pass to the\n            boto3 client.\n\n    Returns:\n        A dictionary containing the bucket, folder, and local directory where\n            files were downloaded.\n\n    Examples:\n        Pull files from S3 using the default credentials and client parameters:\n        ```yaml\n        pull:\n            - prefect_aws.deployments.steps.pull_from_s3:\n                requires: prefect-aws\n                bucket: my-bucket\n                folder: my-project\n        ```\n\n        Pull files from S3 using credentials stored in a block:\n        ```yaml\n        pull:\n            - prefect_aws.deployments.steps.pull_from_s3:\n                requires: prefect-aws\n                bucket: my-bucket\n                folder: my-project\n                credentials: \"{{ prefect.blocks.aws-credentials.dev-credentials }}\"\n        ```\n    \"\"\"\n    s3 = get_s3_client(credentials=credentials, client_parameters=client_parameters)\n\n    local_path = Path.cwd()\n\n    paginator = s3.get_paginator(\"list_objects_v2\")\n    for result in paginator.paginate(Bucket=bucket, Prefix=folder):\n        for obj in result.get(\"Contents\", []):\n            remote_key = obj[\"Key\"]\n\n            if remote_key[-1] == \"/\":\n                # object is a folder and will be created if it contains any objects\n                continue\n\n            target = PurePosixPath(\n                local_path\n                / relative_path_to_current_platform(remote_key).relative_to(folder)\n            )\n            Path.mkdir(Path(target.parent), parents=True, exist_ok=True)\n            s3.download_file(bucket, remote_key, str(target))\n\n    return {\n        \"bucket\": bucket,\n        \"folder\": folder,\n        \"directory\": str(local_path),\n    }\n
        "},{"location":"integrations/prefect-aws/deployments/steps/#prefect_aws.deployments.steps.pull_project_from_s3","title":"pull_project_from_s3","text":"

        Deprecated. Use pull_from_s3 instead.

        Source code in prefect_aws/deployments/steps.py
        @deprecated_callable(start_date=\"Jun 2023\", help=\"Use `pull_from_s3` instead.\")\ndef pull_project_from_s3(*args, **kwargs):\n    \"\"\"Deprecated. Use `pull_from_s3` instead.\"\"\"\n    pull_from_s3(*args, **kwargs)\n
        "},{"location":"integrations/prefect-aws/deployments/steps/#prefect_aws.deployments.steps.push_project_to_s3","title":"push_project_to_s3","text":"

        Deprecated. Use push_to_s3 instead.

        Source code in prefect_aws/deployments/steps.py
        @deprecated_callable(start_date=\"Jun 2023\", help=\"Use `push_to_s3` instead.\")\ndef push_project_to_s3(*args, **kwargs):\n    \"\"\"Deprecated. Use `push_to_s3` instead.\"\"\"\n    push_to_s3(*args, **kwargs)\n
        "},{"location":"integrations/prefect-aws/deployments/steps/#prefect_aws.deployments.steps.push_to_s3","title":"push_to_s3","text":"

        Pushes the contents of the current working directory to an S3 bucket, excluding files and folders specified in the ignore_file.

        Parameters:

        Name Type Description Default bucket str

        The name of the S3 bucket where files will be uploaded.

        required folder str

        The folder in the S3 bucket where files will be uploaded.

        required credentials Optional[Dict]

        A dictionary of AWS credentials (aws_access_key_id, aws_secret_access_key, aws_session_token) or MinIO credentials (minio_root_user, minio_root_password).

        None client_parameters Optional[Dict]

        A dictionary of additional parameters to pass to the boto3 client.

        None ignore_file Optional[str]

        The name of the file containing ignore patterns.

        '.prefectignore'

        Returns:

        Type Description PushToS3Output

        A dictionary containing the bucket and folder where files were uploaded.

        Examples:

        Push files to an S3 bucket:

        push:\n    - prefect_aws.deployments.steps.push_to_s3:\n        requires: prefect-aws\n        bucket: my-bucket\n        folder: my-project\n

        Push files to an S3 bucket using credentials stored in a block:

        push:\n    - prefect_aws.deployments.steps.push_to_s3:\n        requires: prefect-aws\n        bucket: my-bucket\n        folder: my-project\n        credentials: \"{{ prefect.blocks.aws-credentials.dev-credentials }}\"\n

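Since the step is an ordinary Python function, it can also be called directly, which can be handy for testing. A minimal sketch (bucket and folder names are illustrative) that pushes the current working directory using the default credential resolution:

from prefect_aws.deployments.steps import push_to_s3\n\n# Uploads the current working directory, honoring .prefectignore if present\noutput = push_to_s3(\n    bucket=\"my-bucket\",\n    folder=\"my-project\",\n)\nprint(output[\"bucket\"], output[\"folder\"])\n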
        Source code in prefect_aws/deployments/steps.py
        def push_to_s3(\n    bucket: str,\n    folder: str,\n    credentials: Optional[Dict] = None,\n    client_parameters: Optional[Dict] = None,\n    ignore_file: Optional[str] = \".prefectignore\",\n) -> PushToS3Output:\n    \"\"\"\n    Pushes the contents of the current working directory to an S3 bucket,\n    excluding files and folders specified in the ignore_file.\n\n    Args:\n        bucket: The name of the S3 bucket where files will be uploaded.\n        folder: The folder in the S3 bucket where files will be uploaded.\n        credentials: A dictionary of AWS credentials (aws_access_key_id,\n            aws_secret_access_key, aws_session_token) or MinIO credentials\n            (minio_root_user, minio_root_password).\n        client_parameters: A dictionary of additional parameters to pass to the boto3\n            client.\n        ignore_file: The name of the file containing ignore patterns.\n\n    Returns:\n        A dictionary containing the bucket and folder where files were uploaded.\n\n    Examples:\n        Push files to an S3 bucket:\n        ```yaml\n        push:\n            - prefect_aws.deployments.steps.push_to_s3:\n                requires: prefect-aws\n                bucket: my-bucket\n                folder: my-project\n        ```\n\n        Push files to an S3 bucket using credentials stored in a block:\n        ```yaml\n        push:\n            - prefect_aws.deployments.steps.push_to_s3:\n                requires: prefect-aws\n                bucket: my-bucket\n                folder: my-project\n                credentials: \"{{ prefect.blocks.aws-credentials.dev-credentials }}\"\n        ```\n\n    \"\"\"\n    s3 = get_s3_client(credentials=credentials, client_parameters=client_parameters)\n\n    local_path = Path.cwd()\n\n    included_files = None\n    if ignore_file and Path(ignore_file).exists():\n        with open(ignore_file, \"r\") as f:\n            ignore_patterns = f.readlines()\n\n        included_files = filter_files(str(local_path), ignore_patterns)\n\n    for local_file_path in local_path.expanduser().rglob(\"*\"):\n        if (\n            included_files is not None\n            and str(local_file_path.relative_to(local_path)) not in included_files\n        ):\n            continue\n        elif not local_file_path.is_dir():\n            remote_file_path = Path(folder) / local_file_path.relative_to(local_path)\n            s3.upload_file(\n                str(local_file_path), bucket, str(remote_file_path.as_posix())\n            )\n\n    return {\n        \"bucket\": bucket,\n        \"folder\": folder,\n    }\n
        "},{"location":"integrations/prefect-azure/","title":"prefect-azure","text":"

prefect-azure is a collection of Prefect integrations for orchestrating workflows with Azure.

        "},{"location":"integrations/prefect-azure/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-azure/#installation","title":"Installation","text":"

Install prefect-azure with pip:

        pip install prefect-azure\n

        To use Blob Storage:

        pip install \"prefect-azure[blob_storage]\"\n

        To use Cosmos DB:

        pip install \"prefect-azure[cosmos_db]\"\n

        To use ML Datastore:

        pip install \"prefect-azure[ml_datastore]\"\n

        "},{"location":"integrations/prefect-azure/#examples","title":"Examples","text":""},{"location":"integrations/prefect-azure/#download-a-blob","title":"Download a blob","text":"
        from prefect import flow\n\nfrom prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import blob_storage_download\n\n@flow\ndef example_blob_storage_download_flow():\n    connection_string = \"connection_string\"\n    blob_storage_credentials = AzureBlobStorageCredentials(\n        connection_string=connection_string,\n    )\n    data = blob_storage_download(\n        blob=\"prefect.txt\",\n        container=\"prefect\",\n        azure_credentials=blob_storage_credentials,\n    )\n    return data\n\nexample_blob_storage_download_flow()\n

        Use with_options to customize options on any existing task or flow:

        custom_blob_storage_download_flow = example_blob_storage_download_flow.with_options(\n    name=\"My custom task name\",\n    retries=2,\n    retry_delay_seconds=10,\n)\n

        "},{"location":"integrations/prefect-azure/#run-a-command-on-an-azure-container-instance","title":"Run a command on an Azure container instance","text":"
        from prefect import flow\nfrom prefect_azure import AzureContainerInstanceCredentials\nfrom prefect_azure.container_instance import AzureContainerInstanceJob\n\n\n@flow\ndef container_instance_job_flow():\n    aci_credentials = AzureContainerInstanceCredentials.load(\"MY_BLOCK_NAME\")\n    container_instance_job = AzureContainerInstanceJob(\n        aci_credentials=aci_credentials,\n        resource_group_name=\"azure_resource_group.example.name\",\n        subscription_id=\"<MY_AZURE_SUBSCRIPTION_ID>\",\n        command=[\"echo\", \"hello world\"],\n    )\n    return container_instance_job.run()\n
        "},{"location":"integrations/prefect-azure/#use-azure-container-instance-as-infrastructure","title":"Use Azure Container Instance as infrastructure","text":"

        If we have a_flow_module.py:

        from prefect import flow, get_run_logger\n\n@flow\ndef log_hello_flow(name=\"Marvin\"):\n    logger = get_run_logger()\n    logger.info(f\"{name} said hello!\")\n\nif __name__ == \"__main__\":\n    log_hello_flow()\n

        We can run that flow using an Azure Container Instance, but first create the infrastructure block:

        from prefect_azure import AzureContainerInstanceCredentials\nfrom prefect_azure.container_instance import AzureContainerInstanceJob\n\ncontainer_instance_job = AzureContainerInstanceJob(\n    aci_credentials=AzureContainerInstanceCredentials.load(\"MY_BLOCK_NAME\"),\n    resource_group_name=\"azure_resource_group.example.name\",\n    subscription_id=\"<MY_AZURE_SUBSCRIPTION_ID>\",\n)\ncontainer_instance_job.save(\"aci-dev\")\n

        Then, create the deployment either on the UI or through the CLI:

        prefect deployment build a_flow_module.py:log_hello_flow --name aci-dev -ib container-instance-job/aci-dev\n

        Visit Prefect Deployments for more information about deployments.

        "},{"location":"integrations/prefect-azure/#azure-container-instance-worker","title":"Azure Container Instance Worker","text":"

        The Azure Container Instance worker is an excellent way to run your workflows on Azure.

        To get started, create an Azure Container Instances typed work pool:

        prefect work-pool create -t azure-container-instance my-aci-work-pool\n

        Then, run a worker that pulls jobs from the work pool:

        prefect worker start -n my-aci-worker -p my-aci-work-pool\n

        The worker should automatically read the work pool's type and start an Azure Container Instance worker.
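As an optional check before deploying flows, you can confirm the work pool exists by listing your work pools from the CLI:

prefect work-pool ls\n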

        "},{"location":"integrations/prefect-azure/aci_worker/","title":"Azure Container Instances Worker Guide","text":""},{"location":"integrations/prefect-azure/aci_worker/#why-use-aci-for-flow-run-execution","title":"Why use ACI for flow run execution?","text":"

        ACI (Azure Container Instances) is a fully managed compute platform that streamlines running your Prefect flows on scalable, on-demand infrastructure on Azure.

        "},{"location":"integrations/prefect-azure/aci_worker/#prerequisites","title":"Prerequisites","text":"

        Before starting this guide, make sure you have:

        • An Azure account and user permissions for provisioning resource groups and container instances.
• The Azure CLI installed on your local machine. You can follow Microsoft's installation guide.
        • Docker installed on your local machine.
        "},{"location":"integrations/prefect-azure/aci_worker/#step-1-create-a-resource-group","title":"Step 1. Create a resource group","text":"

Azure resource groups serve as containers for managing groups of related Azure resources.

Replace <resource-group-name> with the name of your choosing, and <location> with a valid Azure location name, such as eastus.

        export RG_NAME=<resource-group-name> && \\\naz group create --name $RG_NAME --location <location>\n

        Throughout the rest of the guide, we'll need to refer to the scope of the created resource group, which is a string describing where the resource group lives in the hierarchy of your Azure account. To save the scope of your resource group as an environment variable, run the following command:

        RG_SCOPE=$(az group show --name $RG_NAME --query id --output tsv)\n

        You can check that the scope is correct before moving on by running echo $RG_SCOPE in your terminal. It should be formatted as follows:

        /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>\n
        "},{"location":"integrations/prefect-azure/aci_worker/#step-2-prepare-aci-permissions","title":"Step 2. Prepare ACI permissions","text":"

For the worker to create, monitor, and delete the other container instances in which flows will run, we'll need to create a custom role and an identity, and then associate that role with the identity through a role assignment. When we start our worker, we'll assign that identity to the container instance it's running in.

        "},{"location":"integrations/prefect-azure/aci_worker/#1-create-a-role","title":"1. Create a role","text":"

        The custom Container Instances Contributor role has all the permissions your worker will need to run flows in other container instances. Create it by running the following command:

        az role definition create --role-definition '{\n  \"Name\": \"Container Instances Contributor\",\n  \"IsCustom\": true,\n  \"Description\": \"Can create, delete, and monitor container instances.\",\n  \"Actions\": [\n    \"Microsoft.ManagedIdentity/userAssignedIdentities/assign/action\",\n    \"Microsoft.Resources/deployments/*\",\n    \"Microsoft.ContainerInstance/containerGroups/*\"\n  ],\n  \"NotActions\": [\n  ],\n  \"AssignableScopes\": [\n    '\"\\\"$RG_SCOPE\\\"\"'\n  ]\n}'\n
        "},{"location":"integrations/prefect-azure/aci_worker/#2-create-an-identity","title":"2. Create an identity","text":"

        Create a user-managed identity with the following command, replacing <identity-name> with the name you'd like to use for the identity:

export IDENTITY_NAME=<identity-name> && \\\naz identity create -g $RG_NAME -n $IDENTITY_NAME\n

        We'll also need to save the principal ID and full object ID of the identity for the role assignment and container creation steps, respectively:

        IDENTITY_PRINCIPAL_ID=$(az identity list --query \"[?name=='$IDENTITY_NAME'].principalId\" --output tsv) && \\\nIDENTITY_ID=$(az identity list --query \"[?name=='$IDENTITY_NAME'].id\" --output tsv)\n
        "},{"location":"integrations/prefect-azure/aci_worker/#3-assign-roles-to-the-identity","title":"3. Assign roles to the identity","text":"

        Now let's assign the Container Instances Contributor role we created earlier to the new identity:

        az role assignment create \\\n    --assignee $IDENTITY_PRINCIPAL_ID \\\n    --role \"Container Instances Contributor\" \\\n    --scope $RG_SCOPE\n

Since we'll be using ACR to host a custom Docker image containing a Prefect flow later in the guide, let's also assign the built-in AcrPull role to the identity:

        az role assignment create \\\n    --assignee $IDENTITY_PRINCIPAL_ID \\\n    --role \"AcrPull\" \\\n    --scope $RG_SCOPE\n
        "},{"location":"integrations/prefect-azure/aci_worker/#step-3-create-the-worker-container-instance","title":"Step 3. Create the worker container instance","text":"

        Before running this command, set your PREFECT_API_URL and PREFECT_API_KEY as environment variables:

        export PREFECT_API_URL=<PREFECT_API_URL_HERE> PREFECT_API_KEY=<PREFECT_API_KEY_HERE>\n

        Running the following command will create a container instance in your Azure resource group that will start a Prefect ACI worker. If there is not already a work pool in Prefect with the name you chose, a work pool will also be created.

        Replace <work-pool-name> with the name of the ACI work pool you want to create in Prefect. Here we're using the work pool name as the name of the container instance in Azure as well, but you may name it something else if you prefer.

        az container create \\\n    --name <work-pool-name> \\\n    --resource-group $RG_NAME \\\n    --assign-identity $IDENTITY_ID \\\n    --image \"prefecthq/prefect:2-python3.11\" \\\n    --secure-environment-variables PREFECT_API_URL=$PREFECT_API_URL PREFECT_API_KEY=$PREFECT_API_KEY \\\n    --command-line \"/bin/bash -c 'pip install prefect-azure && prefect worker start --pool <work-pool-name> --type azure-container-instance'\" \n

This container instance uses default networking and security settings. For advanced configuration, refer to the az container create CLI reference.

        "},{"location":"integrations/prefect-azure/aci_worker/#step-4-create-an-acr-registry","title":"Step 4. Create an ACR registry","text":"

        In order to build and push images containing flow code to Azure, we'll need a container registry. Create one with the following command, replacing <registry-name> with the registry name of your choosing:

export REGISTRY_NAME=<registry-name> && \\\naz acr create --resource-group $RG_NAME \\\n  --name $REGISTRY_NAME --sku Basic\n
        "},{"location":"integrations/prefect-azure/aci_worker/#step-5-update-your-aci-work-pool-configuration","title":"Step 5. Update your ACI work pool configuration","text":"

        Once your work pool is created, navigate to the Edit page of your ACI work pool. You will need to update the following fields:

        "},{"location":"integrations/prefect-azure/aci_worker/#identities","title":"Identities","text":"

        This will be your IDENTITY_ID. You can get it from your terminal by running echo $IDENTITY_ID. When adding it to your work pool, it should be formatted as a JSON array:

        [\"/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>\"]\n

        "},{"location":"integrations/prefect-azure/aci_worker/#acrmanagedidentity","title":"ACRManagedIdentity","text":"

        ACRManagedIdentity is required for your flow code containers to be pulled from ACR. It consists of the following:

        • Identity: the same IDENTITY_ID as above, as a string
          /subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>\n
        • Registry URL: your <registry-name>, followed by .azurecr.io
          <registry-name>.azurecr.io\n

        "},{"location":"integrations/prefect-azure/aci_worker/#subscription-id-and-resource-group-name","title":"Subscription ID and resource group name","text":"

Both the subscription ID and resource group name can be found in the RG_SCOPE environment variable created earlier in the guide. View their values by running echo $RG_SCOPE:

        /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>\n
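If you'd rather capture the two values directly, a small shell sketch (assuming the scope format shown above) pulls them out of $RG_SCOPE:

SUBSCRIPTION_ID=$(echo $RG_SCOPE | cut -d'/' -f3)\nRESOURCE_GROUP_NAME=$(echo $RG_SCOPE | cut -d'/' -f5)\necho $SUBSCRIPTION_ID $RESOURCE_GROUP_NAME\n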

        Then click Save.

        "},{"location":"integrations/prefect-azure/aci_worker/#step-6-pick-up-a-flow-run-with-your-new-worker","title":"Step 6. Pick up a flow run with your new worker","text":"

        This guide uses ACR to store a Docker image containing your flow code. Write a flow, then deploy it using flow.deploy(), which will copy flow code into a Docker image and push that image to an ACR registry.

        "},{"location":"integrations/prefect-azure/aci_worker/#1-log-in-to-acr","title":"1. Log in to ACR","text":"

        Use the following commands to log in to ACR:

        TOKEN=$(az acr login --name $REGISTRY_NAME --expose-token --output tsv --query accessToken)\n
        docker login $REGISTRY_NAME.azurecr.io --username 00000000-0000-0000-0000-000000000000 --password-stdin <<< $TOKEN\n
        "},{"location":"integrations/prefect-azure/aci_worker/#2-write-and-deploy-a-simple-test-flow","title":"2. Write and deploy a simple test flow","text":"

        Create and run the following script to deploy your flow. Be sure to replace <registry-name> and <work-pool-name> with the appropriate values.

        my_flow.py

        from prefect import flow, get_run_logger\nfrom prefect.deployments import DeploymentImage\n\n@flow\ndef my_flow():\n    logger = get_run_logger()\n    logger.info(\"Hello from ACI!\")\n\nif __name__ == \"__main__\":\n    my_flow.deploy(\n        name=\"aci-deployment\",\n        image=DeploymentImage(\n            name=\"<registry-name>.azurecr.io/example:latest\",\n            platform=\"linux/amd64\",\n        ),\n        work_pool_name=\"<work-pool-name>\",\n    )\n
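After the script completes, the deployment is ready to run. In addition to the Quick Run button described in the next step, you can trigger a run from the CLI (assuming the flow and deployment names used above):

prefect deployment run 'my-flow/aci-deployment'\n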
        "},{"location":"integrations/prefect-azure/aci_worker/#3-find-the-deployment-in-the-ui-and-click-the-quick-run-button","title":"3. Find the deployment in the UI and click the Quick Run button!","text":""},{"location":"integrations/prefect-azure/blob_storage/","title":"Blob Storage","text":""},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage","title":"prefect_azure.blob_storage","text":"

        Integrations for interacting with Azure Blob Storage

        "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.AzureBlobStorageContainer","title":"AzureBlobStorageContainer","text":"

        Bases: ObjectStorageBlock, WritableFileSystem, WritableDeploymentStorage

        Represents a container in Azure Blob Storage.

        This class provides methods for downloading and uploading files and folders to and from the Azure Blob Storage container.

        Attributes:

        Name Type Description container_name str

        The name of the Azure Blob Storage container.

        credentials AzureBlobStorageCredentials

        The credentials to use for authentication with Azure.

        base_folder Optional[str]

        A base path to a folder within the container to use for reading and writing objects.

        Source code in prefect_azure/blob_storage.py
        class AzureBlobStorageContainer(\n    ObjectStorageBlock, WritableFileSystem, WritableDeploymentStorage\n):\n    \"\"\"\n    Represents a container in Azure Blob Storage.\n\n    This class provides methods for downloading and uploading files and folders\n    to and from the Azure Blob Storage container.\n\n    Attributes:\n        container_name: The name of the Azure Blob Storage container.\n        credentials: The credentials to use for authentication with Azure.\n        base_folder: A base path to a folder within the container to use\n            for reading and writing objects.\n    \"\"\"\n\n    _block_type_name = \"Azure Blob Storage Container\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/54e3fa7e00197a4fbd1d82ed62494cb58d08c96a-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-azure/blob_storage/#prefect_azure.blob_storabe.AzureBlobStorageContainer\"  # noqa\n\n    container_name: str = Field(\n        default=..., description=\"The name of a Azure Blob Storage container.\"\n    )\n    credentials: AzureBlobStorageCredentials = Field(\n        default_factory=AzureBlobStorageCredentials,\n        description=\"The credentials to use for authentication with Azure.\",\n    )\n    base_folder: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A base path to a folder within the container to use \"\n            \"for reading and writing objects.\"\n        ),\n    )\n\n    def _get_path_relative_to_base_folder(self, path: Optional[str] = None) -> str:\n        if path is None and self.base_folder is None:\n            return \"\"\n        if path is None:\n            return self.base_folder\n        if self.base_folder is None:\n            return path\n        return (Path(self.base_folder) / Path(path)).as_posix()\n\n    @sync_compatible\n    async def download_folder_to_path(\n        self,\n        from_folder: str,\n        to_folder: Union[str, Path],\n        **download_kwargs: Dict[str, Any],\n    ) -> Coroutine[Any, Any, Path]:\n        \"\"\"Download a folder from the container to a local path.\n\n        Args:\n            from_folder: The folder path in the container to download.\n            to_folder: The local path to download the folder to.\n            **download_kwargs: Additional keyword arguments passed into\n                `BlobClient.download_blob`.\n\n        Returns:\n            The local path where the folder was downloaded.\n\n        Example:\n            Download the contents of container folder `folder` from the container\n                to the local folder `local_folder`:\n\n            ```python\n            from prefect_azure import AzureBlobStorageCredentials\n            from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n            credentials = AzureBlobStorageCredentials(\n                connection_string=\"connection_string\",\n            )\n            block = AzureBlobStorageContainer(\n                container_name=\"container\",\n                credentials=credentials,\n            )\n            block.download_folder_to_path(\n                from_folder=\"folder\",\n                to_folder=\"local_folder\"\n            )\n            ```\n        \"\"\"\n        self.logger.info(\n            \"Downloading folder from container %s to path %s\",\n            self.container_name,\n            to_folder,\n        )\n        full_container_path = self._get_path_relative_to_base_folder(from_folder)\n        async with 
self.credentials.get_container_client(\n            self.container_name\n        ) as container_client:\n            try:\n                async for blob in container_client.list_blobs(\n                    name_starts_with=full_container_path\n                ):\n                    blob_path = blob.name\n                    local_path = Path(to_folder) / Path(blob_path).relative_to(\n                        full_container_path\n                    )\n                    local_path.parent.mkdir(parents=True, exist_ok=True)\n                    async with container_client.get_blob_client(\n                        blob_path\n                    ) as blob_client:\n                        blob_obj = await blob_client.download_blob(**download_kwargs)\n\n                    with local_path.open(mode=\"wb\") as to_file:\n                        await blob_obj.readinto(to_file)\n            except ResourceNotFoundError as exc:\n                raise RuntimeError(\n                    \"An error occurred when attempting to download from container\"\n                    f\" {self.container_name}: {exc.reason}\"\n                ) from exc\n\n        return Path(to_folder)\n\n    @sync_compatible\n    async def download_object_to_file_object(\n        self,\n        from_path: str,\n        to_file_object: BinaryIO,\n        **download_kwargs: Dict[str, Any],\n    ) -> Coroutine[Any, Any, BinaryIO]:\n        \"\"\"\n        Downloads an object from the container to a file object.\n\n        Args:\n            from_path : The path of the object to download within the container.\n            to_file_object: The file object to download the object to.\n            **download_kwargs: Additional keyword arguments for the download\n                operation.\n\n        Returns:\n            The file object that the object was downloaded to.\n\n        Example:\n            Download the object `object` from the container to a file object:\n\n            ```python\n            from prefect_azure import AzureBlobStorageCredentials\n            from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n            credentials = AzureBlobStorageCredentials(\n                connection_string=\"connection_string\",\n            )\n            block = AzureBlobStorageContainer(\n                container_name=\"container\",\n                credentials=credentials,\n            )\n            with open(\"file.txt\", \"wb\") as f:\n                block.download_object_to_file_object(\n                    from_path=\"object\",\n                    to_file_object=f\n                )\n            ```\n        \"\"\"\n        self.logger.info(\n            \"Downloading object from container %s to file object\", self.container_name\n        )\n        full_container_path = self._get_path_relative_to_base_folder(from_path)\n        async with self.credentials.get_blob_client(\n            self.container_name, full_container_path\n        ) as blob_client:\n            try:\n                blob_obj = await blob_client.download_blob(**download_kwargs)\n                await blob_obj.download_to_stream(to_file_object)\n            except ResourceNotFoundError as exc:\n                raise RuntimeError(\n                    \"An error occurred when attempting to download from container\"\n                    f\" {self.container_name}: {exc.reason}\"\n                ) from exc\n\n        return to_file_object\n\n    @sync_compatible\n    async def download_object_to_path(\n        self,\n        from_path: 
str,\n        to_path: Union[str, Path],\n        **download_kwargs: Dict[str, Any],\n    ) -> Coroutine[Any, Any, Path]:\n        \"\"\"\n        Downloads an object from a container to a specified path.\n\n        Args:\n            from_path: The path of the object in the container.\n            to_path: The path where the object will be downloaded to.\n            **download_kwargs (Dict[str, Any]): Additional keyword arguments\n                for the download operation.\n\n        Returns:\n            The path where the object was downloaded to.\n\n        Example:\n            Download the object `object` from the container to the local path\n                `file.txt`:\n\n            ```python\n            from prefect_azure import AzureBlobStorageCredentials\n            from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n            credentials = AzureBlobStorageCredentials(\n                connection_string=\"connection_string\",\n            )\n            block = AzureBlobStorageContainer(\n                container_name=\"container\",\n                credentials=credentials,\n            )\n            block.download_object_to_path(\n                from_path=\"object\",\n                to_path=\"file.txt\"\n            )\n            ```\n        \"\"\"\n        self.logger.info(\n            \"Downloading object from container %s to path %s\",\n            self.container_name,\n            to_path,\n        )\n        full_container_path = self._get_path_relative_to_base_folder(from_path)\n        async with self.credentials.get_blob_client(\n            self.container_name, full_container_path\n        ) as blob_client:\n            try:\n                blob_obj = await blob_client.download_blob(**download_kwargs)\n\n                path = Path(to_path)\n\n                path.parent.mkdir(parents=True, exist_ok=True)\n\n                with path.open(mode=\"wb\") as to_file:\n                    await blob_obj.readinto(to_file)\n            except ResourceNotFoundError as exc:\n                raise RuntimeError(\n                    \"An error occurred when attempting to download from container\"\n                    f\" {self.container_name}: {exc.reason}\"\n                ) from exc\n        return Path(to_path)\n\n    @sync_compatible\n    async def upload_from_file_object(\n        self, from_file_object: BinaryIO, to_path: str, **upload_kwargs: Dict[str, Any]\n    ) -> Coroutine[Any, Any, str]:\n        \"\"\"\n        Uploads an object from a file object to the specified path in the blob\n            storage container.\n\n        Args:\n            from_file_object: The file object to upload.\n            to_path: The path in the blob storage container to upload the\n                object to.\n            **upload_kwargs: Additional keyword arguments to pass to the\n                upload_blob method.\n\n        Returns:\n            The path where the object was uploaded to.\n\n        Example:\n            Upload a file object to the container at the path `object`:\n\n            ```python\n            from prefect_azure import AzureBlobStorageCredentials\n            from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n            credentials = AzureBlobStorageCredentials(\n                connection_string=\"connection_string\",\n            )\n            block = AzureBlobStorageContainer(\n                container_name=\"container\",\n                credentials=credentials,\n            )\n            with 
open(\"file.txt\", \"rb\") as f:\n                block.upload_from_file_object(\n                    from_file_object=f,\n                    to_path=\"object\"\n                )\n            ```\n        \"\"\"\n        self.logger.info(\n            \"Uploading object to container %s with key %s\", self.container_name, to_path\n        )\n        full_container_path = self._get_path_relative_to_base_folder(to_path)\n        async with self.credentials.get_blob_client(\n            self.container_name, full_container_path\n        ) as blob_client:\n            try:\n                await blob_client.upload_blob(from_file_object, **upload_kwargs)\n            except ResourceNotFoundError as exc:\n                raise RuntimeError(\n                    \"An error occurred when attempting to upload from container\"\n                    f\" {self.container_name}: {exc.reason}\"\n                ) from exc\n\n        return to_path\n\n    @sync_compatible\n    async def upload_from_path(\n        self, from_path: Union[str, Path], to_path: str, **upload_kwargs: Dict[str, Any]\n    ) -> Coroutine[Any, Any, str]:\n        \"\"\"\n        Uploads an object from a local path to the specified destination path in the\n            blob storage container.\n\n        Args:\n            from_path: The local path of the object to upload.\n            to_path: The destination path in the blob storage container.\n            **upload_kwargs: Additional keyword arguments to pass to the\n                `upload_blob` method.\n\n        Returns:\n            The destination path in the blob storage container.\n\n        Example:\n            Upload a file from the local path `file.txt` to the container\n                at the path `object`:\n\n            ```python\n            from prefect_azure import AzureBlobStorageCredentials\n            from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n            credentials = AzureBlobStorageCredentials(\n                connection_string=\"connection_string\",\n            )\n            block = AzureBlobStorageContainer(\n                container_name=\"container\",\n                credentials=credentials,\n            )\n            block.upload_from_path(\n                from_path=\"file.txt\",\n                to_path=\"object\"\n            )\n            ```\n        \"\"\"\n        self.logger.info(\n            \"Uploading object to container %s with key %s\", self.container_name, to_path\n        )\n        full_container_path = self._get_path_relative_to_base_folder(to_path)\n        async with self.credentials.get_blob_client(\n            self.container_name, full_container_path\n        ) as blob_client:\n            try:\n                with open(from_path, \"rb\") as f:\n                    await blob_client.upload_blob(f, **upload_kwargs)\n            except ResourceNotFoundError as exc:\n                raise RuntimeError(\n                    \"An error occurred when attempting to upload to container\"\n                    f\" {self.container_name}: {exc.reason}\"\n                ) from exc\n\n        return to_path\n\n    @sync_compatible\n    async def upload_from_folder(\n        self,\n        from_folder: Union[str, Path],\n        to_folder: str,\n        **upload_kwargs: Dict[str, Any],\n    ) -> Coroutine[Any, Any, str]:\n        \"\"\"\n        Uploads files from a local folder to a specified folder in the Azure\n            Blob Storage container.\n\n        Args:\n            from_folder: The path to the local 
folder containing the files to upload.\n            to_folder: The destination folder in the Azure Blob Storage container.\n            **upload_kwargs: Additional keyword arguments to pass to the\n                `upload_blob` method.\n\n        Returns:\n            The full path of the destination folder in the container.\n\n        Example:\n            Upload the contents of the local folder `local_folder` to the container\n                folder `folder`:\n\n            ```python\n            from prefect_azure import AzureBlobStorageCredentials\n            from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n            credentials = AzureBlobStorageCredentials(\n                connection_string=\"connection_string\",\n            )\n            block = AzureBlobStorageContainer(\n                container_name=\"container\",\n                credentials=credentials,\n            )\n            block.upload_from_folder(\n                from_folder=\"local_folder\",\n                to_folder=\"folder\"\n            )\n            ```\n        \"\"\"\n        self.logger.info(\n            \"Uploading folder to container %s with key %s\",\n            self.container_name,\n            to_folder,\n        )\n        full_container_path = self._get_path_relative_to_base_folder(to_folder)\n        async with self.credentials.get_container_client(\n            self.container_name\n        ) as container_client:\n            if not Path(from_folder).is_dir():\n                raise ValueError(f\"{from_folder} is not a directory\")\n            for path in Path(from_folder).rglob(\"*\"):\n                if path.is_file():\n                    blob_path = Path(full_container_path) / path.relative_to(\n                        from_folder\n                    )\n                    async with container_client.get_blob_client(\n                        blob_path.as_posix()\n                    ) as blob_client:\n                        try:\n                            await blob_client.upload_blob(\n                                path.read_bytes(), **upload_kwargs\n                            )\n                        except ResourceNotFoundError as exc:\n                            raise RuntimeError(\n                                \"An error occurred when attempting to upload to \"\n                                f\"container {self.container_name}: {exc.reason}\"\n                            ) from exc\n        return full_container_path\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: str = None, local_path: str = None\n    ) -> None:\n        \"\"\"\n        Downloads the contents of a direry from the blob storage to a local path.\n\n        Used to enable flow code storage for deployments.\n\n        Args:\n            from_path: The path of the directory in the blob storage.\n            local_path: The local path where the directory will be downloaded.\n        \"\"\"\n        await self.download_folder_to_path(from_path, local_path)\n\n    @sync_compatible\n    async def put_directory(\n        self, local_path: str = None, to_path: str = None, ignore_file: str = None\n    ) -> None:\n        \"\"\"\n        Uploads a directory to the blob storage.\n\n        Used to enable flow code storage for deployments.\n\n        Args:\n            local_path: The local path of the directory to upload. Defaults to\n                current directory.\n            to_path: The destination path in the blob storage. 
Defaults to\n                root directory.\n            ignore_file: The path to a file containing patterns to ignore\n                during upload.\n        \"\"\"\n        to_path = \"\" if to_path is None else to_path\n\n        if local_path is None:\n            local_path = \".\"\n\n        included_files = None\n        if ignore_file:\n            with open(ignore_file, \"r\") as f:\n                ignore_patterns = f.readlines()\n\n            included_files = filter_files(local_path, ignore_patterns)\n\n        for local_file_path in Path(local_path).expanduser().rglob(\"*\"):\n            if (\n                included_files is not None\n                and str(local_file_path.relative_to(local_path)) not in included_files\n            ):\n                continue\n            elif not local_file_path.is_dir():\n                remote_file_path = Path(to_path) / local_file_path.relative_to(\n                    local_path\n                )\n                with open(local_file_path, \"rb\") as local_file:\n                    local_file_content = local_file.read()\n\n                await self.write_path(\n                    remote_file_path.as_posix(), content=local_file_content\n                )\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        \"\"\"\n        Reads the contents of a file at the specified path and returns it as bytes.\n\n        Used to enable results storage.\n\n        Args:\n            path: The path of the file to read.\n\n        Returns:\n            The contents of the file as bytes.\n        \"\"\"\n        file_obj = BytesIO()\n        await self.download_object_to_file_object(path, file_obj)\n        return file_obj.getvalue()\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> None:\n        \"\"\"\n        Writes the content to the specified path in the blob storage.\n\n        Used to enable results storage.\n\n        Args:\n            path: The path where the content will be written.\n            content: The content to be written.\n        \"\"\"\n        await self.upload_from_file_object(BytesIO(content), path)\n
        "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.AzureBlobStorageContainer.download_folder_to_path","title":"download_folder_to_path async","text":"

        Download a folder from the container to a local path.

        Parameters:

        Name Type Description Default from_folder str

        The folder path in the container to download.

        required to_folder Union[str, Path]

        The local path to download the folder to.

        required **download_kwargs Dict[str, Any]

        Additional keyword arguments passed into BlobClient.download_blob.

        {}

        Returns:

        Type Description Coroutine[Any, Any, Path]

        The local path where the folder was downloaded.

        Example

        Download the contents of container folder folder from the container to the local folder local_folder:

        from prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import AzureBlobStorageContainer\n\ncredentials = AzureBlobStorageCredentials(\n    connection_string=\"connection_string\",\n)\nblock = AzureBlobStorageContainer(\n    container_name=\"container\",\n    credentials=credentials,\n)\nblock.download_folder_to_path(\n    from_folder=\"folder\",\n    to_folder=\"local_folder\"\n)\n
        Source code in prefect_azure/blob_storage.py
        @sync_compatible\nasync def download_folder_to_path(\n    self,\n    from_folder: str,\n    to_folder: Union[str, Path],\n    **download_kwargs: Dict[str, Any],\n) -> Coroutine[Any, Any, Path]:\n    \"\"\"Download a folder from the container to a local path.\n\n    Args:\n        from_folder: The folder path in the container to download.\n        to_folder: The local path to download the folder to.\n        **download_kwargs: Additional keyword arguments passed into\n            `BlobClient.download_blob`.\n\n    Returns:\n        The local path where the folder was downloaded.\n\n    Example:\n        Download the contents of container folder `folder` from the container\n            to the local folder `local_folder`:\n\n        ```python\n        from prefect_azure import AzureBlobStorageCredentials\n        from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n        credentials = AzureBlobStorageCredentials(\n            connection_string=\"connection_string\",\n        )\n        block = AzureBlobStorageContainer(\n            container_name=\"container\",\n            credentials=credentials,\n        )\n        block.download_folder_to_path(\n            from_folder=\"folder\",\n            to_folder=\"local_folder\"\n        )\n        ```\n    \"\"\"\n    self.logger.info(\n        \"Downloading folder from container %s to path %s\",\n        self.container_name,\n        to_folder,\n    )\n    full_container_path = self._get_path_relative_to_base_folder(from_folder)\n    async with self.credentials.get_container_client(\n        self.container_name\n    ) as container_client:\n        try:\n            async for blob in container_client.list_blobs(\n                name_starts_with=full_container_path\n            ):\n                blob_path = blob.name\n                local_path = Path(to_folder) / Path(blob_path).relative_to(\n                    full_container_path\n                )\n                local_path.parent.mkdir(parents=True, exist_ok=True)\n                async with container_client.get_blob_client(\n                    blob_path\n                ) as blob_client:\n                    blob_obj = await blob_client.download_blob(**download_kwargs)\n\n                with local_path.open(mode=\"wb\") as to_file:\n                    await blob_obj.readinto(to_file)\n        except ResourceNotFoundError as exc:\n            raise RuntimeError(\n                \"An error occurred when attempting to download from container\"\n                f\" {self.container_name}: {exc.reason}\"\n            ) from exc\n\n    return Path(to_folder)\n
        "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.AzureBlobStorageContainer.download_object_to_file_object","title":"download_object_to_file_object async","text":"

        Downloads an object from the container to a file object.

        Parameters:

        Name Type Description Default from_path

        The path of the object to download within the container.

        required to_file_object BinaryIO

        The file object to download the object to.

        required **download_kwargs Dict[str, Any]

        Additional keyword arguments for the download operation.

        {}

        Returns:

        Type Description Coroutine[Any, Any, BinaryIO]

        The file object that the object was downloaded to.

        Example

        Download the object object from the container to a file object:

        from prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import AzureBlobStorageContainer\n\ncredentials = AzureBlobStorageCredentials(\n    connection_string=\"connection_string\",\n)\nblock = AzureBlobStorageContainer(\n    container_name=\"container\",\n    credentials=credentials,\n)\nwith open(\"file.txt\", \"wb\") as f:\n    block.download_object_to_file_object(\n        from_path=\"object\",\n        to_file_object=f\n    )\n
        Source code in prefect_azure/blob_storage.py
        @sync_compatible\nasync def download_object_to_file_object(\n    self,\n    from_path: str,\n    to_file_object: BinaryIO,\n    **download_kwargs: Dict[str, Any],\n) -> Coroutine[Any, Any, BinaryIO]:\n    \"\"\"\n    Downloads an object from the container to a file object.\n\n    Args:\n        from_path : The path of the object to download within the container.\n        to_file_object: The file object to download the object to.\n        **download_kwargs: Additional keyword arguments for the download\n            operation.\n\n    Returns:\n        The file object that the object was downloaded to.\n\n    Example:\n        Download the object `object` from the container to a file object:\n\n        ```python\n        from prefect_azure import AzureBlobStorageCredentials\n        from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n        credentials = AzureBlobStorageCredentials(\n            connection_string=\"connection_string\",\n        )\n        block = AzureBlobStorageContainer(\n            container_name=\"container\",\n            credentials=credentials,\n        )\n        with open(\"file.txt\", \"wb\") as f:\n            block.download_object_to_file_object(\n                from_path=\"object\",\n                to_file_object=f\n            )\n        ```\n    \"\"\"\n    self.logger.info(\n        \"Downloading object from container %s to file object\", self.container_name\n    )\n    full_container_path = self._get_path_relative_to_base_folder(from_path)\n    async with self.credentials.get_blob_client(\n        self.container_name, full_container_path\n    ) as blob_client:\n        try:\n            blob_obj = await blob_client.download_blob(**download_kwargs)\n            await blob_obj.download_to_stream(to_file_object)\n        except ResourceNotFoundError as exc:\n            raise RuntimeError(\n                \"An error occurred when attempting to download from container\"\n                f\" {self.container_name}: {exc.reason}\"\n            ) from exc\n\n    return to_file_object\n
        "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.AzureBlobStorageContainer.download_object_to_path","title":"download_object_to_path async","text":"

        Downloads an object from a container to a specified path.

        Parameters:

        Name Type Description Default from_path str

        The path of the object in the container.

        required to_path Union[str, Path]

        The path where the object will be downloaded to.

        required **download_kwargs Dict[str, Any]

        Additional keyword arguments for the download operation.

        {}

        Returns:

        Type Description Coroutine[Any, Any, Path]

        The path where the object was downloaded to.

        Example

        Download the object object from the container to the local path file.txt:

        from prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import AzureBlobStorageContainer\n\ncredentials = AzureBlobStorageCredentials(\n    connection_string=\"connection_string\",\n)\nblock = AzureBlobStorageContainer(\n    container_name=\"container\",\n    credentials=credentials,\n)\nblock.download_object_to_path(\n    from_path=\"object\",\n    to_path=\"file.txt\"\n)\n
        Source code in prefect_azure/blob_storage.py
        @sync_compatible\nasync def download_object_to_path(\n    self,\n    from_path: str,\n    to_path: Union[str, Path],\n    **download_kwargs: Dict[str, Any],\n) -> Coroutine[Any, Any, Path]:\n    \"\"\"\n    Downloads an object from a container to a specified path.\n\n    Args:\n        from_path: The path of the object in the container.\n        to_path: The path where the object will be downloaded to.\n        **download_kwargs (Dict[str, Any]): Additional keyword arguments\n            for the download operation.\n\n    Returns:\n        The path where the object was downloaded to.\n\n    Example:\n        Download the object `object` from the container to the local path\n            `file.txt`:\n\n        ```python\n        from prefect_azure import AzureBlobStorageCredentials\n        from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n        credentials = AzureBlobStorageCredentials(\n            connection_string=\"connection_string\",\n        )\n        block = AzureBlobStorageContainer(\n            container_name=\"container\",\n            credentials=credentials,\n        )\n        block.download_object_to_path(\n            from_path=\"object\",\n            to_path=\"file.txt\"\n        )\n        ```\n    \"\"\"\n    self.logger.info(\n        \"Downloading object from container %s to path %s\",\n        self.container_name,\n        to_path,\n    )\n    full_container_path = self._get_path_relative_to_base_folder(from_path)\n    async with self.credentials.get_blob_client(\n        self.container_name, full_container_path\n    ) as blob_client:\n        try:\n            blob_obj = await blob_client.download_blob(**download_kwargs)\n\n            path = Path(to_path)\n\n            path.parent.mkdir(parents=True, exist_ok=True)\n\n            with path.open(mode=\"wb\") as to_file:\n                await blob_obj.readinto(to_file)\n        except ResourceNotFoundError as exc:\n            raise RuntimeError(\n                \"An error occurred when attempting to download from container\"\n                f\" {self.container_name}: {exc.reason}\"\n            ) from exc\n    return Path(to_path)\n
        "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.AzureBlobStorageContainer.get_directory","title":"get_directory async","text":"

Downloads the contents of a directory from the blob storage to a local path.

        Used to enable flow code storage for deployments.

        Parameters:

        Name Type Description Default from_path str

        The path of the directory in the blob storage.

        None local_path str

        The local path where the directory will be downloaded.

        None Source code in prefect_azure/blob_storage.py
@sync_compatible\nasync def get_directory(\n    self, from_path: str = None, local_path: str = None\n) -> None:\n    \"\"\"\n    Downloads the contents of a directory from the blob storage to a local path.\n\n    Used to enable flow code storage for deployments.\n\n    Args:\n        from_path: The path of the directory in the blob storage.\n        local_path: The local path where the directory will be downloaded.\n    \"\"\"\n    await self.download_folder_to_path(from_path, local_path)\n
        "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.AzureBlobStorageContainer.put_directory","title":"put_directory async","text":"

        Uploads a directory to the blob storage.

        Used to enable flow code storage for deployments.

        Parameters:

        Name Type Description Default local_path str

        The local path of the directory to upload. Defaults to current directory.

        None to_path str

        The destination path in the blob storage. Defaults to root directory.

        None ignore_file str

        The path to a file containing patterns to ignore during upload.

        None Source code in prefect_azure/blob_storage.py
        @sync_compatible\nasync def put_directory(\n    self, local_path: str = None, to_path: str = None, ignore_file: str = None\n) -> None:\n    \"\"\"\n    Uploads a directory to the blob storage.\n\n    Used to enable flow code storage for deployments.\n\n    Args:\n        local_path: The local path of the directory to upload. Defaults to\n            current directory.\n        to_path: The destination path in the blob storage. Defaults to\n            root directory.\n        ignore_file: The path to a file containing patterns to ignore\n            during upload.\n    \"\"\"\n    to_path = \"\" if to_path is None else to_path\n\n    if local_path is None:\n        local_path = \".\"\n\n    included_files = None\n    if ignore_file:\n        with open(ignore_file, \"r\") as f:\n            ignore_patterns = f.readlines()\n\n        included_files = filter_files(local_path, ignore_patterns)\n\n    for local_file_path in Path(local_path).expanduser().rglob(\"*\"):\n        if (\n            included_files is not None\n            and str(local_file_path.relative_to(local_path)) not in included_files\n        ):\n            continue\n        elif not local_file_path.is_dir():\n            remote_file_path = Path(to_path) / local_file_path.relative_to(\n                local_path\n            )\n            with open(local_file_path, \"rb\") as local_file:\n                local_file_content = local_file.read()\n\n            await self.write_path(\n                remote_file_path.as_posix(), content=local_file_content\n            )\n
        "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.AzureBlobStorageContainer.read_path","title":"read_path async","text":"

        Reads the contents of a file at the specified path and returns it as bytes.

        Used to enable results storage.

        Parameters:

        Name Type Description Default path str

        The path of the file to read.

        required

        Returns:

        Type Description bytes

        The contents of the file as bytes.

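Example

A minimal sketch (following the credential pattern used in the other examples here) that reads the object object back as bytes:

from prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import AzureBlobStorageContainer\n\ncredentials = AzureBlobStorageCredentials(\n    connection_string=\"connection_string\",\n)\nblock = AzureBlobStorageContainer(\n    container_name=\"container\",\n    credentials=credentials,\n)\ndata = block.read_path(\"object\")\nprint(len(data))\n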
        Source code in prefect_azure/blob_storage.py
        @sync_compatible\nasync def read_path(self, path: str) -> bytes:\n    \"\"\"\n    Reads the contents of a file at the specified path and returns it as bytes.\n\n    Used to enable results storage.\n\n    Args:\n        path: The path of the file to read.\n\n    Returns:\n        The contents of the file as bytes.\n    \"\"\"\n    file_obj = BytesIO()\n    await self.download_object_to_file_object(path, file_obj)\n    return file_obj.getvalue()\n
        "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.AzureBlobStorageContainer.upload_from_file_object","title":"upload_from_file_object async","text":"

        Uploads an object from a file object to the specified path in the blob storage container.

        Parameters:

        Name Type Description Default from_file_object BinaryIO

        The file object to upload.

        required to_path str

        The path in the blob storage container to upload the object to.

        required **upload_kwargs Dict[str, Any]

        Additional keyword arguments to pass to the upload_blob method.

        {}

        Returns:

        Type Description Coroutine[Any, Any, str]

        The path where the object was uploaded to.

        Example

        Upload a file object to the container at the path object:

        from prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import AzureBlobStorageContainer\n\ncredentials = AzureBlobStorageCredentials(\n    connection_string=\"connection_string\",\n)\nblock = AzureBlobStorageContainer(\n    container_name=\"container\",\n    credentials=credentials,\n)\nwith open(\"file.txt\", \"rb\") as f:\n    block.upload_from_file_object(\n        from_file_object=f,\n        to_path=\"object\"\n    )\n
        Source code in prefect_azure/blob_storage.py
        @sync_compatible\nasync def upload_from_file_object(\n    self, from_file_object: BinaryIO, to_path: str, **upload_kwargs: Dict[str, Any]\n) -> Coroutine[Any, Any, str]:\n    \"\"\"\n    Uploads an object from a file object to the specified path in the blob\n        storage container.\n\n    Args:\n        from_file_object: The file object to upload.\n        to_path: The path in the blob storage container to upload the\n            object to.\n        **upload_kwargs: Additional keyword arguments to pass to the\n            upload_blob method.\n\n    Returns:\n        The path where the object was uploaded to.\n\n    Example:\n        Upload a file object to the container at the path `object`:\n\n        ```python\n        from prefect_azure import AzureBlobStorageCredentials\n        from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n        credentials = AzureBlobStorageCredentials(\n            connection_string=\"connection_string\",\n        )\n        block = AzureBlobStorageContainer(\n            container_name=\"container\",\n            credentials=credentials,\n        )\n        with open(\"file.txt\", \"rb\") as f:\n            block.upload_from_file_object(\n                from_file_object=f,\n                to_path=\"object\"\n            )\n        ```\n    \"\"\"\n    self.logger.info(\n        \"Uploading object to container %s with key %s\", self.container_name, to_path\n    )\n    full_container_path = self._get_path_relative_to_base_folder(to_path)\n    async with self.credentials.get_blob_client(\n        self.container_name, full_container_path\n    ) as blob_client:\n        try:\n            await blob_client.upload_blob(from_file_object, **upload_kwargs)\n        except ResourceNotFoundError as exc:\n            raise RuntimeError(\n                \"An error occurred when attempting to upload from container\"\n                f\" {self.container_name}: {exc.reason}\"\n            ) from exc\n\n    return to_path\n
        "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.AzureBlobStorageContainer.upload_from_folder","title":"upload_from_folder async","text":"

        Uploads files from a local folder to a specified folder in the Azure Blob Storage container.

        Parameters:

        from_folder (Union[str, Path], required): The path to the local folder containing the files to upload.
        to_folder (str, required): The destination folder in the Azure Blob Storage container.
        **upload_kwargs (Dict[str, Any], default {}): Additional keyword arguments to pass to the upload_blob method.

        Returns:

        Coroutine[Any, Any, str]: The full path of the destination folder in the container.

        Example

        Upload the contents of the local folder local_folder to the container folder folder:

        from prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import AzureBlobStorageContainer\n\ncredentials = AzureBlobStorageCredentials(\n    connection_string=\"connection_string\",\n)\nblock = AzureBlobStorageContainer(\n    container_name=\"container\",\n    credentials=credentials,\n)\nblock.upload_from_folder(\n    from_folder=\"local_folder\",\n    to_folder=\"folder\"\n)\n
        Source code in prefect_azure/blob_storage.py
        @sync_compatible\nasync def upload_from_folder(\n    self,\n    from_folder: Union[str, Path],\n    to_folder: str,\n    **upload_kwargs: Dict[str, Any],\n) -> Coroutine[Any, Any, str]:\n    \"\"\"\n    Uploads files from a local folder to a specified folder in the Azure\n        Blob Storage container.\n\n    Args:\n        from_folder: The path to the local folder containing the files to upload.\n        to_folder: The destination folder in the Azure Blob Storage container.\n        **upload_kwargs: Additional keyword arguments to pass to the\n            `upload_blob` method.\n\n    Returns:\n        The full path of the destination folder in the container.\n\n    Example:\n        Upload the contents of the local folder `local_folder` to the container\n            folder `folder`:\n\n        ```python\n        from prefect_azure import AzureBlobStorageCredentials\n        from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n        credentials = AzureBlobStorageCredentials(\n            connection_string=\"connection_string\",\n        )\n        block = AzureBlobStorageContainer(\n            container_name=\"container\",\n            credentials=credentials,\n        )\n        block.upload_from_folder(\n            from_folder=\"local_folder\",\n            to_folder=\"folder\"\n        )\n        ```\n    \"\"\"\n    self.logger.info(\n        \"Uploading folder to container %s with key %s\",\n        self.container_name,\n        to_folder,\n    )\n    full_container_path = self._get_path_relative_to_base_folder(to_folder)\n    async with self.credentials.get_container_client(\n        self.container_name\n    ) as container_client:\n        if not Path(from_folder).is_dir():\n            raise ValueError(f\"{from_folder} is not a directory\")\n        for path in Path(from_folder).rglob(\"*\"):\n            if path.is_file():\n                blob_path = Path(full_container_path) / path.relative_to(\n                    from_folder\n                )\n                async with container_client.get_blob_client(\n                    blob_path.as_posix()\n                ) as blob_client:\n                    try:\n                        await blob_client.upload_blob(\n                            path.read_bytes(), **upload_kwargs\n                        )\n                    except ResourceNotFoundError as exc:\n                        raise RuntimeError(\n                            \"An error occurred when attempting to upload to \"\n                            f\"container {self.container_name}: {exc.reason}\"\n                        ) from exc\n    return full_container_path\n
        "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.AzureBlobStorageContainer.upload_from_path","title":"upload_from_path async","text":"

        Uploads an object from a local path to the specified destination path in the blob storage container.

        Parameters:

        from_path (Union[str, Path], required): The local path of the object to upload.
        to_path (str, required): The destination path in the blob storage container.
        **upload_kwargs (Dict[str, Any], default {}): Additional keyword arguments to pass to the upload_blob method.

        Returns:

        Coroutine[Any, Any, str]: The destination path in the blob storage container.

        Example

        Upload a file from the local path file.txt to the container at the path object:

        from prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import AzureBlobStorageContainer\n\ncredentials = AzureBlobStorageCredentials(\n    connection_string=\"connection_string\",\n)\nblock = AzureBlobStorageContainer(\n    container_name=\"container\",\n    credentials=credentials,\n)\nblock.upload_from_path(\n    from_path=\"file.txt\",\n    to_path=\"object\"\n)\n
        Source code in prefect_azure/blob_storage.py
        @sync_compatible\nasync def upload_from_path(\n    self, from_path: Union[str, Path], to_path: str, **upload_kwargs: Dict[str, Any]\n) -> Coroutine[Any, Any, str]:\n    \"\"\"\n    Uploads an object from a local path to the specified destination path in the\n        blob storage container.\n\n    Args:\n        from_path: The local path of the object to upload.\n        to_path: The destination path in the blob storage container.\n        **upload_kwargs: Additional keyword arguments to pass to the\n            `upload_blob` method.\n\n    Returns:\n        The destination path in the blob storage container.\n\n    Example:\n        Upload a file from the local path `file.txt` to the container\n            at the path `object`:\n\n        ```python\n        from prefect_azure import AzureBlobStorageCredentials\n        from prefect_azure.blob_storage import AzureBlobStorageContainer\n\n        credentials = AzureBlobStorageCredentials(\n            connection_string=\"connection_string\",\n        )\n        block = AzureBlobStorageContainer(\n            container_name=\"container\",\n            credentials=credentials,\n        )\n        block.upload_from_path(\n            from_path=\"file.txt\",\n            to_path=\"object\"\n        )\n        ```\n    \"\"\"\n    self.logger.info(\n        \"Uploading object to container %s with key %s\", self.container_name, to_path\n    )\n    full_container_path = self._get_path_relative_to_base_folder(to_path)\n    async with self.credentials.get_blob_client(\n        self.container_name, full_container_path\n    ) as blob_client:\n        try:\n            with open(from_path, \"rb\") as f:\n                await blob_client.upload_blob(f, **upload_kwargs)\n        except ResourceNotFoundError as exc:\n            raise RuntimeError(\n                \"An error occurred when attempting to upload to container\"\n                f\" {self.container_name}: {exc.reason}\"\n            ) from exc\n\n    return to_path\n
        "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.AzureBlobStorageContainer.write_path","title":"write_path async","text":"

        Writes the content to the specified path in the blob storage.

        Used to enable results storage.

        Parameters:

        path (str, required): The path where the content will be written.
        content (bytes, required): The content to be written.

        Source code in prefect_azure/blob_storage.py
        @sync_compatible\nasync def write_path(self, path: str, content: bytes) -> None:\n    \"\"\"\n    Writes the content to the specified path in the blob storage.\n\n    Used to enable results storage.\n\n    Args:\n        path: The path where the content will be written.\n        content: The content to be written.\n    \"\"\"\n    await self.upload_from_file_object(BytesIO(content), path)\n
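
        A minimal usage sketch (an assumption, not from the upstream docstring): write bytes to a path in the container, reusing the placeholder credentials from the other examples.

        from prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import AzureBlobStorageContainer\n\ncredentials = AzureBlobStorageCredentials(\n    connection_string=\"connection_string\",\n)\nblock = AzureBlobStorageContainer(\n    container_name=\"container\",\n    credentials=credentials,\n)\n# Write the given bytes to the blob at the path \"object\"\nblock.write_path(\"object\", b\"some bytes\")\n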
        "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.blob_storage_download","title":"blob_storage_download async","text":"

        Downloads a blob with a given key from a given Blob Storage container.

        Parameters:

        blob: Name of the blob within this container to retrieve.
        container: Name of the Blob Storage container to retrieve from.
        blob_storage_credentials: Credentials to use for authentication with Azure.

        Returns:

        A bytes representation of the downloaded blob.

        Example:

        Download a file from a Blob Storage container:

        from prefect import flow\n\nfrom prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import blob_storage_download\n\n@flow\ndef example_blob_storage_download_flow():\n    connection_string = \"connection_string\"\n    blob_storage_credentials = AzureBlobStorageCredentials(\n        connection_string=connection_string,\n    )\n    data = blob_storage_download(\n        container=\"prefect\",\n        blob=\"prefect.txt\",\n        blob_storage_credentials=blob_storage_credentials,\n    )\n    return data\n\nexample_blob_storage_download_flow()\n

        Source code in prefect_azure/blob_storage.py
        @task\nasync def blob_storage_download(\n    container: str,\n    blob: str,\n    blob_storage_credentials: \"AzureBlobStorageCredentials\",\n) -> bytes:\n    \"\"\"\n    Downloads a blob with a given key from a given Blob Storage container.\n    Args:\n        blob: Name of the blob within this container to retrieve.\n        container: Name of the Blob Storage container to retrieve from.\n        blob_storage_credentials: Credentials to use for authentication with Azure.\n    Returns:\n        A `bytes` representation of the downloaded blob.\n    Example:\n        Download a file from a Blob Storage container\n        ```python\n        from prefect import flow\n\n        from prefect_azure import AzureBlobStorageCredentials\n        from prefect_azure.blob_storage import blob_storage_download\n\n        @flow\n        def example_blob_storage_download_flow():\n            connection_string = \"connection_string\"\n            blob_storage_credentials = AzureBlobStorageCredentials(\n                connection_string=connection_string,\n            )\n            data = blob_storage_download(\n                container=\"prefect\",\n                blob=\"prefect.txt\",\n                blob_storage_credentials=blob_storage_credentials,\n            )\n            return data\n\n        example_blob_storage_download_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Downloading blob from container %s with key %s\", container, blob)\n\n    async with blob_storage_credentials.get_blob_client(container, blob) as blob_client:\n        blob_obj = await blob_client.download_blob()\n        output = await blob_obj.content_as_bytes()\n\n    return output\n
        "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.blob_storage_list","title":"blob_storage_list async","text":"

        List objects from a given Blob Storage container.

        Parameters:

        container: Name of the Blob Storage container to retrieve from.
        blob_storage_credentials: Credentials to use for authentication with Azure.
        name_starts_with: Filters the results to return only blobs whose names begin with the specified prefix.
        include: Specifies one or more additional datasets to include in the response. Options include: 'snapshots', 'metadata', 'uncommittedblobs', 'copy', 'deleted', 'deletedwithversions', 'tags', 'versions', 'immutabilitypolicy', 'legalhold'.
        **kwargs: Additional kwargs passed to ContainerClient.list_blobs()

        Returns:

        A list of dicts containing metadata about the blob.

        Example:

        from prefect import flow\n\nfrom prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import blob_storage_list\n\n@flow\ndef example_blob_storage_list_flow():\n    connection_string = \"connection_string\"\n    blob_storage_credentials = AzureBlobStorageCredentials(\n        connection_string=\"connection_string\",\n    )\n    data = blob_storage_list(\n        container=\"container\",\n        blob_storage_credentials=blob_storage_credentials,\n    )\n    return data\n\nexample_blob_storage_list_flow()\n

        Source code in prefect_azure/blob_storage.py
        @task\nasync def blob_storage_list(\n    container: str,\n    blob_storage_credentials: \"AzureBlobStorageCredentials\",\n    name_starts_with: str = None,\n    include: Union[str, List[str]] = None,\n    **kwargs,\n) -> List[\"BlobProperties\"]:\n    \"\"\"\n    List objects from a given Blob Storage container.\n    Args:\n        container: Name of the Blob Storage container to retrieve from.\n        blob_storage_credentials: Credentials to use for authentication with Azure.\n        name_starts_with: Filters the results to return only blobs whose names\n            begin with the specified prefix.\n        include: Specifies one or more additional datasets to include in the response.\n            Options include: 'snapshots', 'metadata', 'uncommittedblobs', 'copy',\n            'deleted', 'deletedwithversions', 'tags', 'versions', 'immutabilitypolicy',\n            'legalhold'.\n        **kwargs: Additional kwargs passed to `ContainerClient.list_blobs()`\n    Returns:\n        A `list` of `dict`s containing metadata about the blob.\n    Example:\n        ```python\n        from prefect import flow\n\n        from prefect_azure import AzureBlobStorageCredentials\n        from prefect_azure.blob_storage import blob_storage_list\n\n        @flow\n        def example_blob_storage_list_flow():\n            connection_string = \"connection_string\"\n            blob_storage_credentials = AzureBlobStorageCredentials(\n                connection_string=\"connection_string\",\n            )\n            data = blob_storage_list(\n                container=\"container\",\n                blob_storage_credentials=blob_storage_credentials,\n            )\n            return data\n\n        example_blob_storage_list_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Listing blobs from container %s\", container)\n\n    async with blob_storage_credentials.get_container_client(\n        container\n    ) as container_client:\n        blobs = [\n            blob\n            async for blob in container_client.list_blobs(\n                name_starts_with=name_starts_with, include=include, **kwargs\n            )\n        ]\n\n    return blobs\n
        "},{"location":"integrations/prefect-azure/blob_storage/#prefect_azure.blob_storage.blob_storage_upload","title":"blob_storage_upload async","text":"

        Uploads data to a Blob Storage container.

        Parameters:

        data: Bytes representation of data to upload to Blob Storage.
        container: Name of the Blob Storage container to upload to.
        blob_storage_credentials: Credentials to use for authentication with Azure.
        blob: Name of the blob to write to within this container.
        overwrite: If True, an existing blob with the same name will be overwritten. Defaults to False, and an error will be thrown if the blob already exists.

        Returns:

        The blob name of the uploaded object.

        Example:

        Read and upload a file to a Blob Storage container:

        from prefect import flow\n\nfrom prefect_azure import AzureBlobStorageCredentials\nfrom prefect_azure.blob_storage import blob_storage_upload\n\n@flow\ndef example_blob_storage_upload_flow():\n    connection_string = \"connection_string\"\n    blob_storage_credentials = AzureBlobStorageCredentials(\n        connection_string=connection_string,\n    )\n    with open(\"data.csv\", \"rb\") as f:\n        blob = blob_storage_upload(\n            data=f.read(),\n            container=\"container\",\n            blob=\"data.csv\",\n            blob_storage_credentials=blob_storage_credentials,\n            overwrite=False,\n        )\n    return blob\n\nexample_blob_storage_upload_flow()\n

        Source code in prefect_azure/blob_storage.py
        @task\nasync def blob_storage_upload(\n    data: bytes,\n    container: str,\n    blob_storage_credentials: \"AzureBlobStorageCredentials\",\n    blob: str = None,\n    overwrite: bool = False,\n) -> str:\n    \"\"\"\n    Uploads data to an Blob Storage container.\n    Args:\n        data: Bytes representation of data to upload to Blob Storage.\n        container: Name of the Blob Storage container to upload to.\n        blob_storage_credentials: Credentials to use for authentication with Azure.\n        blob: Name of the blob within this container to retrieve.\n        overwrite: If `True`, an existing blob with the same name will be overwritten.\n            Defaults to `False` and an error will be thrown if the blob already exists.\n    Returns:\n        The blob name of the uploaded object\n    Example:\n        Read and upload a file to a Blob Storage container\n        ```python\n        from prefect import flow\n\n        from prefect_azure import AzureBlobStorageCredentials\n        from prefect_azure.blob_storage import blob_storage_upload\n\n        @flow\n        def example_blob_storage_upload_flow():\n            connection_string = \"connection_string\"\n            blob_storage_credentials = AzureBlobStorageCredentials(\n                connection_string=connection_string,\n            )\n            with open(\"data.csv\", \"rb\") as f:\n                blob = blob_storage_upload(\n                    data=f.read(),\n                    container=\"container\",\n                    blob=\"data.csv\",\n                    blob_storage_credentials=blob_storage_credentials,\n                    overwrite=False,\n                )\n            return blob\n\n        example_blob_storage_upload_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Uploading blob to container %s with key %s\", container, blob)\n\n    # create key if not provided\n    if blob is None:\n        blob = str(uuid.uuid4())\n\n    async with blob_storage_credentials.get_blob_client(container, blob) as blob_client:\n        await blob_client.upload_blob(data, overwrite=overwrite)\n\n    return blob\n
        "},{"location":"integrations/prefect-azure/container_instance/","title":"Container Instance Block","text":""},{"location":"integrations/prefect-azure/container_instance/#prefect_azure.container_instance","title":"prefect_azure.container_instance","text":"

        Integrations with the Azure Container Instances service. Note this module is experimental. The interfaces within may change without notice.

        The AzureContainerInstanceJob infrastructure block in this module is ideally configured through the Prefect UI and run by a Prefect agent, but it can also be called directly, as demonstrated in the following examples.

        Examples:

        Run a command using an Azure Container Instances container.

        AzureContainerInstanceJob(command=[\"echo\", \"hello world\"]).run()\n

        Run a command and stream the container's output to the local terminal.

        AzureContainerInstanceJob(\n    command=[\"echo\", \"hello world\"],\n    stream_output=True,\n)\n

        Run a command with a specific image

        AzureContainerInstanceJob(command=[\"echo\", \"hello world\"], image=\"alpine:latest\")\n

        Run a task with custom memory and CPU requirements

        AzureContainerInstanceJob(command=[\"echo\", \"hello world\"], memory=1.0, cpu=1.0)\n

        Run a task with custom memory, CPU, and GPU requirements

        AzureContainerInstanceJob(command=[\"echo\", \"hello world\"], memory=1.0, cpu=1.0,\ngpu_count=1, gpu_sku=\"V100\")\n

        Run a task with custom environment variables

        AzureContainerInstanceJob(\n    command=[\"echo\", \"hello $PLANET\"],\n    env={\"PLANET\": \"earth\"}\n)\n

        Run a task that uses a private ACR registry with a managed identity

        AzureContainerInstanceJob(\n    command=[\"echo\", \"hello $PLANET\"],\n    image=\"my-registry.azurecr.io/my-image\",\n    image_registry=ACRManagedIdentity(\n        registry_url=\"my-registry.azurecr.io\",\n        identity=\"/my/managed/identity/123abc\"\n    )\n)\n

        "},{"location":"integrations/prefect-azure/container_instance/#prefect_azure.container_instance.ACRManagedIdentity","title":"ACRManagedIdentity","text":"

        Bases: BaseModel

        Use a Managed Identity to access an Azure Container Registry. Requires that the user-assigned managed identity be available to the ACI container group.

        Source code in prefect_azure/container_instance.py
        class ACRManagedIdentity(BaseModel):\n    \"\"\"\n    Use a Managed Identity to access Azure Container registry. Requires the\n    user-assigned managed identity be available to the ACI container group.\n    \"\"\"\n\n    registry_url: str = Field(\n        default=...,\n        title=\"Registry URL\",\n        description=(\n            \"The URL to the registry, such as myregistry.azurecr.io. Generally, 'http' \"\n            \"or 'https' can be omitted.\"\n        ),\n    )\n    identity: str = Field(\n        default=...,\n        description=(\n            \"The user-assigned Azure managed identity for the private registry.\"\n        ),\n    )\n
        "},{"location":"integrations/prefect-azure/container_instance/#prefect_azure.container_instance.AzureContainerInstanceJob","title":"AzureContainerInstanceJob","text":"

        Bases: Infrastructure

        Run a command using a container on Azure Container Instances. Note this block is experimental. The interface may change without notice.

        Source code in prefect_azure/container_instance.py
        class AzureContainerInstanceJob(Infrastructure):\n    \"\"\"\n    Run a command using a container on Azure Container Instances.\n    Note this block is experimental. The interface may change without notice.\n    \"\"\"\n\n    _block_type_name = \"Azure Container Instance Job\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/54e3fa7e00197a4fbd1d82ed62494cb58d08c96a-250x250.png\"  # noqa\n    _description = \"Run tasks using Azure Container Instances. Note this block is experimental. The interface may change without notice.\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-azure/container_instance/#prefect_azure.container_instance.AzureContainerInstanceJob\"  # noqa\n\n    type: Literal[\"container-instance-job\"] = Field(\n        default=\"container-instance-job\", description=\"The slug for this task type.\"\n    )\n    aci_credentials: AzureContainerInstanceCredentials = Field(\n        default_factory=AzureContainerInstanceCredentials,\n        description=(\n            \"Credentials for Azure Container Instances; \"\n            \"if not provided will attempt to use DefaultAzureCredentials.\"\n        ),\n    )\n    resource_group_name: str = Field(\n        default=...,\n        title=\"Azure Resource Group Name\",\n        description=(\n            \"The name of the Azure Resource Group in which to run Prefect ACI tasks.\"\n        ),\n    )\n    subscription_id: SecretStr = Field(\n        default=...,\n        title=\"Azure Subscription ID\",\n        description=\"The ID of the Azure subscription to create containers under.\",\n    )\n    identities: Optional[List[str]] = Field(\n        title=\"Identities\",\n        default=None,\n        description=(\n            \"A list of user-assigned identities to associate with the container group. \"\n            \"The identities should be an ARM resource IDs in the form: \"\n            \"'/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identityName}'.\"  # noqa\n        ),\n    )\n    image: Optional[str] = Field(\n        default_factory=get_prefect_image_name,\n        description=(\n            \"The image to use for the Prefect container in the task. This value \"\n            \"defaults to a Prefect base image matching your local versions.\"\n        ),\n    )\n    entrypoint: Optional[str] = Field(\n        default=DEFAULT_CONTAINER_ENTRYPOINT,\n        description=(\n            \"The entrypoint of the container you wish you run. This value \"\n            \"defaults to the entrypoint used by Prefect images and should only be \"\n            \"changed when using a custom image that is not based on an official \"\n            \"Prefect image. Any commands set on deployments will be passed \"\n            \"to the entrypoint as parameters.\"\n        ),\n    )\n    image_registry: Optional[\n        Union[\n            prefect.infrastructure.container.DockerRegistry,\n            ACRManagedIdentity,\n        ]\n    ] = Field(\n        default=None,\n        title=\"Image Registry (Optional)\",\n        description=(\n            \"To use any private container registry with a username and password, \"\n            \"choose DockerRegistry. 
To use a private Azure Container Registry \"\n            \"with a managed identity, choose ACRManagedIdentity.\"\n        ),\n    )\n    cpu: float = Field(\n        title=\"CPU\",\n        default=ACI_DEFAULT_CPU,\n        description=(\n            \"The number of virtual CPUs to assign to the task container. \"\n            f\"If not provided, a default value of {ACI_DEFAULT_CPU} will be used.\"\n        ),\n    )\n    gpu_count: Optional[int] = Field(\n        title=\"GPU Count\",\n        default=None,\n        description=(\n            \"The number of GPUs to assign to the task container. \"\n            \"If not provided, no GPU will be used.\"\n        ),\n    )\n    gpu_sku: Optional[str] = Field(\n        title=\"GPU SKU\",\n        default=None,\n        description=(\n            \"The Azure GPU SKU to use. See the ACI documentation for a list of \"\n            \"GPU SKUs available in each Azure region.\"\n        ),\n    )\n    memory: float = Field(\n        default=ACI_DEFAULT_MEMORY,\n        description=(\n            \"The amount of memory in gigabytes to provide to the ACI task. Valid \"\n            \"amounts are specified in the Azure documentation. If not provided, a \"\n            f\"default value of  {ACI_DEFAULT_MEMORY} will be used unless present \"\n            \"on the task definition.\"\n        ),\n    )\n    subnet_ids: Optional[List[str]] = Field(\n        default=None,\n        title=\"Subnet IDs\",\n        description=\"A list of Azure subnet IDs the container should be connected to.\",\n    )\n    dns_servers: Optional[List[str]] = Field(\n        default=None,\n        title=\"DNS Servers\",\n        description=\"A list of custom DNS Servers the container should use.\",\n    )\n    stream_output: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"If `True`, logs will be streamed from the Prefect container to the local \"\n            \"console.\"\n        ),\n    )\n    env: Dict[str, Optional[str]] = Field(\n        title=\"Environment Variables\",\n        default_factory=dict,\n        description=(\n            \"Environment variables to provide to the task run. These variables are set \"\n            \"on the Prefect container at task runtime. These will not be set on the \"\n            \"task definition.\"\n        ),\n    )\n    # Execution settings\n    task_start_timeout_seconds: int = Field(\n        default=240,\n        description=(\n            \"The amount of time to watch for the start of the ACI container. 
\"\n            \"before marking it as failed.\"\n        ),\n    )\n    task_watch_poll_interval: float = Field(\n        default=5.0,\n        description=(\n            \"The number of seconds to wait between Azure API calls while monitoring \"\n            \"the state of an Azure Container Instances task.\"\n        ),\n    )\n\n    @sync_compatible\n    async def run(\n        self, task_status: Optional[TaskStatus] = None\n    ) -> AzureContainerInstanceJobResult:\n        \"\"\"\n        Runs the configured task using an ACI container.\n\n        Args:\n            task_status: An optional `TaskStatus` to update when the container starts.\n\n        Returns:\n            An `AzureContainerInstanceJobResult` with the container's exit code.\n        \"\"\"\n\n        run_start_time = datetime.datetime.now(datetime.timezone.utc)\n\n        container = self._configure_container()\n        container_group = self._configure_container_group(container)\n        created_container_group = None\n\n        aci_client = self.aci_credentials.get_container_client(\n            self.subscription_id.get_secret_value()\n        )\n\n        self.logger.info(\n            f\"{self._log_prefix}: Preparing to run command {' '.join(self.command)!r} \"\n            f\"in container {container.name!r} ({self.image})...\"\n        )\n        try:\n            self.logger.info(f\"{self._log_prefix}: Waiting for container creation...\")\n            # Create the container group and wait for it to start\n            creation_status_poller = await run_sync_in_worker_thread(\n                aci_client.container_groups.begin_create_or_update,\n                self.resource_group_name,\n                container.name,\n                container_group,\n            )\n            created_container_group = await run_sync_in_worker_thread(\n                self._wait_for_task_container_start, creation_status_poller\n            )\n\n            # If creation succeeded, group provisioning state should be 'Succeeded'\n            # and the group should have a single container\n            if self._provisioning_succeeded(created_container_group):\n                self.logger.info(f\"{self._log_prefix}: Running command...\")\n                if task_status:\n                    task_status.started(value=created_container_group.name)\n                status_code = await run_sync_in_worker_thread(\n                    self._watch_task_and_get_exit_code,\n                    aci_client,\n                    created_container_group,\n                    run_start_time,\n                )\n                self.logger.info(f\"{self._log_prefix}: Completed command run.\")\n            else:\n                raise RuntimeError(f\"{self._log_prefix}: Container creation failed.\")\n\n        finally:\n            if created_container_group:\n                await self._wait_for_container_group_deletion(\n                    aci_client, created_container_group\n                )\n\n        return AzureContainerInstanceJobResult(\n            identifier=created_container_group.name, status_code=status_code\n        )\n\n    async def kill(\n        self,\n        container_group_name: str,\n        grace_seconds: int = CONTAINER_GROUP_DELETION_TIMEOUT_SECONDS,\n    ):\n        \"\"\"\n        Kill a flow running in an ACI container group.\n\n        Args:\n            container_group_name: The container group name yielded by\n                `AzureContainerInstanceJob.run`.\n        \"\"\"\n        # ACI does not provide a way to 
specify grace period, but it gives\n        # applications ~30 seconds to gracefully terminate before killing\n        # a container group.\n        if grace_seconds != CONTAINER_GROUP_DELETION_TIMEOUT_SECONDS:\n            self.logger.warning(\n                f\"{self._log_prefix}: Kill grace period of {grace_seconds}s requested, \"\n                f\"but ACI does not support grace period configuration.\"\n            )\n\n        aci_client = self.aci_credentials.get_container_client(\n            self.subscription_id.get_secret_value()\n        )\n\n        # get the container group to check that it still exists\n        try:\n            container_group = aci_client.container_groups.get(\n                resource_group_name=self.resource_group_name,\n                container_group_name=container_group_name,\n            )\n        except ResourceNotFoundError as exc:\n            # the container group no longer exists, so there's nothing to cancel\n            raise InfrastructureNotFound(\n                f\"Cannot stop ACI job: container group {container_group_name} \"\n                \"no longer exists.\"\n            ) from exc\n\n        # get the container state to check if the container has terminated\n        container = self._get_container(container_group)\n        container_state = container.instance_view.current_state.state\n\n        # the container group needs to be deleted regardless of whether the container\n        # already terminated\n        await self._wait_for_container_group_deletion(aci_client, container_group)\n\n        # if the container had already terminated, raise an exception to let the agent\n        # know the flow was not cancelled\n        if container_state == ContainerRunState.TERMINATED:\n            raise InfrastructureNotAvailable(\n                f\"Cannot stop ACI job: container group {container_group.name} exists, \"\n                f\"but container {container.name} has already terminated.\"\n            )\n\n    def preview(self) -> str:\n        \"\"\"\n        Provides a summary of how the container will be created when `run` is called.\n\n        Returns:\n           A string containing the summary.\n        \"\"\"\n        preview = {\n            \"container_name\": \"<generated when run>\",\n            \"resource_group_name\": self.resource_group_name,\n            \"memory\": self.memory,\n            \"cpu\": self.cpu,\n            \"gpu_count\": self.gpu_count,\n            \"gpu_sku\": self.gpu_sku,\n            \"env\": self._get_environment(),\n        }\n\n        return json.dumps(preview)\n\n    def get_corresponding_worker_type(self) -> str:\n        \"\"\"Return the corresponding worker type for this infrastructure block.\"\"\"\n        from prefect_azure.workers.container_instance import AzureContainerWorker\n\n        return AzureContainerWorker.type\n\n    async def generate_work_pool_base_job_template(self) -> dict:\n        \"\"\"\n        Generate a base job template for an `Azure Container Instance` work pool\n        with the same configuration as this block.\n\n        Returns:\n            - dict: a base job template for an `Azure Container Instance` work pool\n        \"\"\"\n        from prefect_azure.workers.container_instance import AzureContainerWorker\n\n        base_job_template = deepcopy(\n            AzureContainerWorker.get_default_base_job_template()\n        )\n        for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n            if key == \"command\":\n              
  base_job_template[\"variables\"][\"properties\"][\"command\"][\n                    \"default\"\n                ] = shlex.join(value)\n            elif key in [\n                \"type\",\n                \"block_type_slug\",\n                \"_block_document_id\",\n                \"_block_document_name\",\n                \"_is_anonymous\",\n            ]:\n                continue\n            elif key == \"subscription_id\":\n                base_job_template[\"variables\"][\"properties\"][\"subscription_id\"][\n                    \"default\"\n                ] = value.get_secret_value()\n            elif key == \"aci_credentials\":\n                if not self.aci_credentials._block_document_id:\n                    raise BlockNotSavedError(\n                        \"It looks like you are trying to use a block that\"\n                        \" has not been saved. Please call `.save` on your block\"\n                        \" before publishing it as a work pool.\"\n                    )\n                base_job_template[\"variables\"][\"properties\"][\"aci_credentials\"][\n                    \"default\"\n                ] = {\n                    \"$ref\": {\n                        \"block_document_id\": str(\n                            self.aci_credentials._block_document_id\n                        )\n                    }\n                }\n            elif key == \"image_registry\":\n                if not self.image_registry._block_document_id:\n                    raise BlockNotSavedError(\n                        \"It looks like you are trying to use a block that\"\n                        \" has not been saved. Please call `.save` on your block\"\n                        \" before publishing it as a work pool.\"\n                    )\n                base_job_template[\"variables\"][\"properties\"][\"image_registry\"][\n                    \"default\"\n                ] = {\n                    \"$ref\": {\n                        \"block_document_id\": str(self.image_registry._block_document_id)\n                    }\n                }\n            elif key in base_job_template[\"variables\"][\"properties\"]:\n                base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n            else:\n                self.logger.warning(\n                    f\"Variable {key!r} is not supported by `Azure Container Instance`\"\n                    \" work pools. 
Skipping.\"\n                )\n\n        return base_job_template\n\n    def _configure_container(self) -> Container:\n        \"\"\"\n        Configures an Azure `Container` using data from the block's fields.\n\n        Returns:\n            An instance of `Container` ready to submit to Azure.\n        \"\"\"\n\n        # setup container environment variables\n        environment = [\n            EnvironmentVariable(name=k, secure_value=v)\n            if k in ENV_SECRETS\n            else EnvironmentVariable(name=k, value=v)\n            for (k, v) in self._get_environment().items()\n        ]\n\n        # all container names in a resource group must be unique\n        if self.name:\n            slugified_name = slugify(\n                self.name,\n                max_length=52,\n                regex_pattern=r\"[^a-zA-Z0-9-]+\",\n            )\n            random_suffix = \"\".join(\n                random.choices(string.ascii_lowercase + string.digits, k=10)\n            )\n            container_name = slugified_name + \"-\" + random_suffix\n        else:\n            container_name = str(uuid.uuid4())\n\n        container_resource_requirements = self._configure_container_resources()\n\n        # add the entrypoint if provided, because creating an ACI container with a\n        # command overrides the container's built-in entrypoint.\n        if self.entrypoint:\n            self.command.insert(0, self.entrypoint)\n\n        return Container(\n            name=container_name,\n            image=self.image,\n            command=self.command,\n            resources=container_resource_requirements,\n            environment_variables=environment,\n        )\n\n    def _configure_container_resources(self) -> ResourceRequirements:\n        \"\"\"\n        Configures the container's memory, CPU, and GPU resources.\n\n        Returns:\n            A `ResourceRequirements` instance initialized with data from this\n            `AzureContainerInstanceJob` block.\n        \"\"\"\n\n        gpu_resource = (\n            GpuResource(count=self.gpu_count, sku=self.gpu_sku)\n            if self.gpu_count and self.gpu_sku\n            else None\n        )\n        container_resource_requests = ResourceRequests(\n            memory_in_gb=self.memory, cpu=self.cpu, gpu=gpu_resource\n        )\n\n        return ResourceRequirements(requests=container_resource_requests)\n\n    def _configure_container_group(self, container: Container) -> ContainerGroup:\n        \"\"\"\n        Configures the container group needed to start a container on ACI.\n\n        Args:\n            container: An initialized instance of `Container`.\n\n        Returns:\n            An initialized `ContainerGroup` ready to submit to Azure.\n        \"\"\"\n\n        # Load the resource group, so we can set the container group location\n        # correctly.\n\n        resource_group_client = self.aci_credentials.get_resource_client(\n            self.subscription_id.get_secret_value()\n        )\n\n        resource_group = resource_group_client.resource_groups.get(\n            self.resource_group_name\n        )\n\n        image_registry_credentials = self._create_image_registry_credentials(\n            self.image_registry\n        )\n\n        identity = (\n            ContainerGroupIdentity(\n                type=\"UserAssigned\",\n                # The Azure API only uses the dict keys and ignores values when\n                # creating a container group. 
Using empty `UserAssignedIdentities`\n                # instances as dict values satisfies Python type checkers.\n                user_assigned_identities={\n                    identity: UserAssignedIdentities() for identity in self.identities\n                },\n            )\n            if self.identities\n            else None\n        )\n\n        subnet_ids = (\n            [ContainerGroupSubnetId(id=subnet_id) for subnet_id in self.subnet_ids]\n            if self.subnet_ids\n            else None\n        )\n\n        dns_config = (\n            DnsConfiguration(name_servers=self.dns_servers)\n            if self.dns_servers\n            else None\n        )\n\n        return ContainerGroup(\n            location=resource_group.location,\n            identity=identity,\n            containers=[container],\n            os_type=OperatingSystemTypes.linux,\n            restart_policy=ContainerGroupRestartPolicy.never,\n            image_registry_credentials=image_registry_credentials,\n            subnet_ids=subnet_ids,\n            dns_config=dns_config,\n        )\n\n    @staticmethod\n    def _create_image_registry_credentials(\n        image_registry: Union[\n            prefect.infrastructure.container.DockerRegistry,\n            ACRManagedIdentity,\n            None,\n        ],\n    ):\n        \"\"\"\n        Create image registry credentials based on the type of image_registry provided.\n\n        Args:\n            image_registry: An instance of a DockerRegistry or\n            ACRManagedIdentity object.\n\n        Returns:\n            A list containing an ImageRegistryCredential object if the input is a\n            `DockerRegistry` or `ACRManagedIdentity`, or None if the\n            input doesn't match any of the expected types.\n        \"\"\"\n        if image_registry and isinstance(\n            image_registry, prefect.infrastructure.container.DockerRegistry\n        ):\n            return [\n                ImageRegistryCredential(\n                    server=image_registry.registry_url,\n                    username=image_registry.username,\n                    password=image_registry.password.get_secret_value(),\n                )\n            ]\n        elif image_registry and isinstance(image_registry, ACRManagedIdentity):\n            return [\n                ImageRegistryCredential(\n                    server=image_registry.registry_url,\n                    identity=image_registry.identity,\n                )\n            ]\n        else:\n            return None\n\n    def _wait_for_task_container_start(\n        self, creation_status_poller: LROPoller[ContainerGroup]\n    ) -> ContainerGroup:\n        \"\"\"\n        Wait for the result of group and container creation.\n\n        Args:\n            creation_status_poller: Poller returned by the Azure SDK.\n\n        Raises:\n            RuntimeError: Raised if the timeout limit is exceeded before the\n            container starts.\n\n        Returns:\n            A `ContainerGroup` representing the current status of the group being\n            watched.\n        \"\"\"\n\n        t0 = time.time()\n        timeout = self.task_start_timeout_seconds\n\n        while not creation_status_poller.done():\n            elapsed_time = time.time() - t0\n\n            if timeout and elapsed_time > timeout:\n                raise RuntimeError(\n                    (\n                        f\"Timed out after {elapsed_time}s while watching waiting for \"\n                        \"container start.\"\n               
     )\n                )\n            time.sleep(self.task_watch_poll_interval)\n\n        return creation_status_poller.result()\n\n    def _watch_task_and_get_exit_code(\n        self,\n        client: ContainerInstanceManagementClient,\n        container_group: ContainerGroup,\n        run_start_time: datetime.datetime,\n    ) -> int:\n        \"\"\"\n        Waits until the container finishes running and obtains its exit code.\n\n        Args:\n            client: An initialized Azure `ContainerInstanceManagementClient`\n            container_group: The `ContainerGroup` in which the container resides.\n\n        Returns:\n            An `int` representing the container's exit code.\n        \"\"\"\n\n        status_code = -1\n        running_container = self._get_container(container_group)\n        current_state = running_container.instance_view.current_state.state\n\n        # get any logs the container has already generated\n        last_log_time = run_start_time\n        if self.stream_output:\n            last_log_time = self._get_and_stream_output(\n                client, container_group, last_log_time\n            )\n\n        # set exit code if flow run already finished:\n        if current_state == ContainerRunState.TERMINATED:\n            status_code = running_container.instance_view.current_state.exit_code\n\n        while current_state != ContainerRunState.TERMINATED:\n            try:\n                container_group = client.container_groups.get(\n                    resource_group_name=self.resource_group_name,\n                    container_group_name=container_group.name,\n                )\n            except ResourceNotFoundError:\n                self.logger.exception(\n                    f\"{self._log_prefix}: Container group was deleted before flow run \"\n                    \"completed, likely due to flow cancellation.\"\n                )\n\n                # since the flow was cancelled, exit early instead of raising an\n                # exception\n                return status_code\n\n            container = self._get_container(container_group)\n            current_state = container.instance_view.current_state.state\n\n            if current_state == ContainerRunState.TERMINATED:\n                status_code = container.instance_view.current_state.exit_code\n                # break instead of waiting for next loop iteration because\n                # trying to read logs from a terminated container raises an exception\n                break\n\n            if self.stream_output:\n                last_log_time = self._get_and_stream_output(\n                    client, container_group, last_log_time\n                )\n\n            time.sleep(self.task_watch_poll_interval)\n\n        return status_code\n\n    async def _wait_for_container_group_deletion(\n        self,\n        aci_client: ContainerInstanceManagementClient,\n        container_group: ContainerGroup,\n    ):\n        self.logger.info(f\"{self._log_prefix}: Deleting container...\")\n\n        deletion_status_poller = await run_sync_in_worker_thread(\n            aci_client.container_groups.begin_delete,\n            resource_group_name=self.resource_group_name,\n            container_group_name=container_group.name,\n        )\n\n        t0 = time.time()\n        timeout = CONTAINER_GROUP_DELETION_TIMEOUT_SECONDS\n\n        while not deletion_status_poller.done():\n            elapsed_time = time.time() - t0\n\n            if timeout and elapsed_time > timeout:\n                raise 
RuntimeError(\n                    (\n                        f\"Timed out after {elapsed_time}s while waiting for deletion of\"\n                        f\" container group {container_group.name}. To verify the group \"\n                        \"has been deleted, check the Azure Portal or run \"\n                        f\"az container show --name {container_group.name} --resource-group {self.resource_group_name}\"  # noqa\n                    )\n                )\n            await anyio.sleep(self.task_watch_poll_interval)\n\n        self.logger.info(f\"{self._log_prefix}: Container deleted.\")\n\n    def _get_container(self, container_group: ContainerGroup) -> Container:\n        \"\"\"\n        Extracts the job container from a container group.\n        \"\"\"\n        return container_group.containers[0]\n\n    def _get_and_stream_output(\n        self,\n        client: ContainerInstanceManagementClient,\n        container_group: ContainerGroup,\n        last_log_time: datetime.datetime,\n    ) -> datetime.datetime:\n        \"\"\"\n        Fetches logs output from the job container and writes all entries after\n        a given time to stderr.\n\n        Args:\n            client: An initialized `ContainerInstanceManagementClient`\n            container_group: The container group that holds the job container.\n            last_log_time: The timestamp of the last output line already streamed.\n\n        Returns:\n            The time of the most recent output line written by this call.\n        \"\"\"\n        logs = self._get_logs(client, container_group)\n        return self._stream_output(logs, last_log_time)\n\n    def _get_logs(\n        self,\n        client: ContainerInstanceManagementClient,\n        container_group: ContainerGroup,\n        max_lines: int = 100,\n    ) -> str:\n        \"\"\"\n        Gets the most container logs up to a given maximum.\n\n        Args:\n            client: An initialized `ContainerInstanceManagementClient`\n            container_group: The container group that holds the job container.\n            max_lines: The number of log lines to pull. Defaults to 100.\n\n        Returns:\n            A string containing the requested log entries, one per line.\n        \"\"\"\n        container = self._get_container(container_group)\n\n        logs: Union[Logs, None] = None\n        try:\n            logs = client.containers.list_logs(\n                resource_group_name=self.resource_group_name,\n                container_group_name=container_group.name,\n                container_name=container.name,\n                tail=max_lines,\n                timestamps=True,\n            )\n        except HttpResponseError:\n            # Trying to get logs when the container is under heavy CPU load sometimes\n            # results in an error, but we won't want to raise an exception and stop\n            # monitoring the flow. Instead, log the error and carry on so we can try to\n            # get all missed logs on the next check.\n            self.logger.warning(\n                f\"{self._log_prefix}: Unable to retrieve logs from container \"\n                f\"{container.name}. 
Trying again in {self.task_watch_poll_interval}s\"\n            )\n\n        return logs.content if logs else \"\"\n\n    def _stream_output(\n        self, log_content: Union[str, None], last_log_time: datetime.datetime\n    ) -> datetime.datetime:\n        \"\"\"\n        Writes each entry from a string of log lines to stderr.\n\n        Args:\n            log_content: A string containing Azure container logs.\n            last_log_time: The timestamp of the last output line already streamed.\n\n        Returns:\n            The time of the most recent output line written by this call.\n        \"\"\"\n        if not log_content:\n            # nothing to stream\n            return last_log_time\n\n        log_lines = log_content.split(\"\\n\")\n\n        last_written_time = last_log_time\n\n        for log_line in log_lines:\n            # skip if the line is blank or whitespace\n            if not log_line.strip():\n                continue\n\n            line_parts = log_line.split(\" \")\n            # timestamp should always be before first space in line\n            line_timestamp = line_parts[0]\n            line = \" \".join(line_parts[1:])\n\n            try:\n                line_time = dateutil.parser.parse(line_timestamp)\n                if line_time > last_written_time:\n                    self._write_output_line(line)\n                    last_written_time = line_time\n            except dateutil.parser.ParserError as e:\n                self.logger.debug(\n                    (\n                        f\"{self._log_prefix}: Unable to parse timestamp from Azure \"\n                        \"log line: %s\"\n                    ),\n                    log_line,\n                    exc_info=e,\n                )\n\n        return last_written_time\n\n    def _get_environment(self):\n        \"\"\"\n        Generates a dictionary of all environment variables to send to the\n        ACI container.\n        \"\"\"\n        return {**self._base_environment(), **self.env}\n\n    @property\n    def _log_prefix(self) -> str:\n        \"\"\"\n        Internal property for generating a prefix for logs where `name` may be null\n        \"\"\"\n        if self.name is not None:\n            return f\"AzureContainerInstanceJob {self.name!r}\"\n        else:\n            return \"AzureContainerInstanceJob\"\n\n    @staticmethod\n    def _provisioning_succeeded(container_group: ContainerGroup) -> bool:\n        \"\"\"\n        Determines whether ACI container group provisioning was successful.\n\n        Args:\n            container_group: a container group returned by the Azure SDK.\n\n        Returns:\n            True if provisioning was successful, False otherwise.\n        \"\"\"\n        if not container_group:\n            return False\n\n        return (\n            container_group.provisioning_state\n            == ContainerGroupProvisioningState.SUCCEEDED\n            and len(container_group.containers) == 1\n        )\n\n    @staticmethod\n    def _write_output_line(line: str):\n        \"\"\"\n        Writes a line of output to stderr.\n        \"\"\"\n        print(line, file=sys.stderr)\n
        "},{"location":"integrations/prefect-azure/container_instance/#prefect_azure.container_instance.AzureContainerInstanceJob.generate_work_pool_base_job_template","title":"generate_work_pool_base_job_template async","text":"

        Generate a base job template for an Azure Container Instance work pool with the same configuration as this block.

        Returns:

        dict: a base job template for an Azure Container Instance work pool
        Source code in prefect_azure/container_instance.py
        async def generate_work_pool_base_job_template(self) -> dict:\n    \"\"\"\n    Generate a base job template for an `Azure Container Instance` work pool\n    with the same configuration as this block.\n\n    Returns:\n        - dict: a base job template for an `Azure Container Instance` work pool\n    \"\"\"\n    from prefect_azure.workers.container_instance import AzureContainerWorker\n\n    base_job_template = deepcopy(\n        AzureContainerWorker.get_default_base_job_template()\n    )\n    for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n        if key == \"command\":\n            base_job_template[\"variables\"][\"properties\"][\"command\"][\n                \"default\"\n            ] = shlex.join(value)\n        elif key in [\n            \"type\",\n            \"block_type_slug\",\n            \"_block_document_id\",\n            \"_block_document_name\",\n            \"_is_anonymous\",\n        ]:\n            continue\n        elif key == \"subscription_id\":\n            base_job_template[\"variables\"][\"properties\"][\"subscription_id\"][\n                \"default\"\n            ] = value.get_secret_value()\n        elif key == \"aci_credentials\":\n            if not self.aci_credentials._block_document_id:\n                raise BlockNotSavedError(\n                    \"It looks like you are trying to use a block that\"\n                    \" has not been saved. Please call `.save` on your block\"\n                    \" before publishing it as a work pool.\"\n                )\n            base_job_template[\"variables\"][\"properties\"][\"aci_credentials\"][\n                \"default\"\n            ] = {\n                \"$ref\": {\n                    \"block_document_id\": str(\n                        self.aci_credentials._block_document_id\n                    )\n                }\n            }\n        elif key == \"image_registry\":\n            if not self.image_registry._block_document_id:\n                raise BlockNotSavedError(\n                    \"It looks like you are trying to use a block that\"\n                    \" has not been saved. Please call `.save` on your block\"\n                    \" before publishing it as a work pool.\"\n                )\n            base_job_template[\"variables\"][\"properties\"][\"image_registry\"][\n                \"default\"\n            ] = {\n                \"$ref\": {\n                    \"block_document_id\": str(self.image_registry._block_document_id)\n                }\n            }\n        elif key in base_job_template[\"variables\"][\"properties\"]:\n            base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n        else:\n            self.logger.warning(\n                f\"Variable {key!r} is not supported by `Azure Container Instance`\"\n                \" work pools. Skipping.\"\n            )\n\n    return base_job_template\n
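
        A hedged usage sketch (the block document name aci-job is a placeholder and assumes the block and its credentials were saved beforehand): load a saved block and generate the equivalent work pool base job template.

        import asyncio\n\nfrom prefect_azure.container_instance import AzureContainerInstanceJob\n\n# Load a previously saved block; \"aci-job\" is a placeholder name\nblock = AzureContainerInstanceJob.load(\"aci-job\")\n\n# The method is a coroutine, so run it explicitly in a synchronous script\nbase_job_template = asyncio.run(block.generate_work_pool_base_job_template())\nprint(base_job_template)\n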
        "},{"location":"integrations/prefect-azure/container_instance/#prefect_azure.container_instance.AzureContainerInstanceJob.get_corresponding_worker_type","title":"get_corresponding_worker_type","text":"

        Return the corresponding worker type for this infrastructure block.

        Source code in prefect_azure/container_instance.py
        def get_corresponding_worker_type(self) -> str:\n    \"\"\"Return the corresponding worker type for this infrastructure block.\"\"\"\n    from prefect_azure.workers.container_instance import AzureContainerWorker\n\n    return AzureContainerWorker.type\n
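
        A minimal sketch of calling this method; the resource group and subscription ID shown are placeholders.

        from prefect_azure import AzureContainerInstanceCredentials\nfrom prefect_azure.container_instance import AzureContainerInstanceJob\n\naci_job = AzureContainerInstanceJob(\n    aci_credentials=AzureContainerInstanceCredentials(),\n    resource_group_name=\"my-resource-group\",\n    subscription_id=\"my-subscription-id\",\n)\n# returns the worker type string used by Azure Container Instance work pools\nprint(aci_job.get_corresponding_worker_type())\n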
        "},{"location":"integrations/prefect-azure/container_instance/#prefect_azure.container_instance.AzureContainerInstanceJob.kill","title":"kill async","text":"

        Kill a flow running in an ACI container group.

        Parameters:

        container_group_name (str, required): The container group name yielded by AzureContainerInstanceJob.run.

        Source code in prefect_azure/container_instance.py
        async def kill(\n    self,\n    container_group_name: str,\n    grace_seconds: int = CONTAINER_GROUP_DELETION_TIMEOUT_SECONDS,\n):\n    \"\"\"\n    Kill a flow running in an ACI container group.\n\n    Args:\n        container_group_name: The container group name yielded by\n            `AzureContainerInstanceJob.run`.\n    \"\"\"\n    # ACI does not provide a way to specify grace period, but it gives\n    # applications ~30 seconds to gracefully terminate before killing\n    # a container group.\n    if grace_seconds != CONTAINER_GROUP_DELETION_TIMEOUT_SECONDS:\n        self.logger.warning(\n            f\"{self._log_prefix}: Kill grace period of {grace_seconds}s requested, \"\n            f\"but ACI does not support grace period configuration.\"\n        )\n\n    aci_client = self.aci_credentials.get_container_client(\n        self.subscription_id.get_secret_value()\n    )\n\n    # get the container group to check that it still exists\n    try:\n        container_group = aci_client.container_groups.get(\n            resource_group_name=self.resource_group_name,\n            container_group_name=container_group_name,\n        )\n    except ResourceNotFoundError as exc:\n        # the container group no longer exists, so there's nothing to cancel\n        raise InfrastructureNotFound(\n            f\"Cannot stop ACI job: container group {container_group_name} \"\n            \"no longer exists.\"\n        ) from exc\n\n    # get the container state to check if the container has terminated\n    container = self._get_container(container_group)\n    container_state = container.instance_view.current_state.state\n\n    # the container group needs to be deleted regardless of whether the container\n    # already terminated\n    await self._wait_for_container_group_deletion(aci_client, container_group)\n\n    # if the container had already terminated, raise an exception to let the agent\n    # know the flow was not cancelled\n    if container_state == ContainerRunState.TERMINATED:\n        raise InfrastructureNotAvailable(\n            f\"Cannot stop ACI job: container group {container_group.name} exists, \"\n            f\"but container {container.name} has already terminated.\"\n        )\n
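
        A hedged cancellation sketch; my-aci-job is a placeholder block name, and container_group_name is the identifier yielded by AzureContainerInstanceJob.run.

        from prefect_azure.container_instance import AzureContainerInstanceJob\n\n\nasync def cancel_aci_flow_run(container_group_name: str):\n    # assumes a previously saved block named \"my-aci-job\"\n    aci_job = await AzureContainerInstanceJob.load(\"my-aci-job\")\n    # raises InfrastructureNotFound if the container group no longer exists\n    await aci_job.kill(container_group_name)\n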
        "},{"location":"integrations/prefect-azure/container_instance/#prefect_azure.container_instance.AzureContainerInstanceJob.preview","title":"preview","text":"

        Provides a summary of how the container will be created when run is called.

        Returns:

        str: A string containing the summary.

        Source code in prefect_azure/container_instance.py
        def preview(self) -> str:\n    \"\"\"\n    Provides a summary of how the container will be created when `run` is called.\n\n    Returns:\n       A string containing the summary.\n    \"\"\"\n    preview = {\n        \"container_name\": \"<generated when run>\",\n        \"resource_group_name\": self.resource_group_name,\n        \"memory\": self.memory,\n        \"cpu\": self.cpu,\n        \"gpu_count\": self.gpu_count,\n        \"gpu_sku\": self.gpu_sku,\n        \"env\": self._get_environment(),\n    }\n\n    return json.dumps(preview)\n
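
        A minimal sketch of previewing the container configuration before calling run; the resource group and subscription ID are placeholders.

        import json\n\nfrom prefect_azure import AzureContainerInstanceCredentials\nfrom prefect_azure.container_instance import AzureContainerInstanceJob\n\naci_job = AzureContainerInstanceJob(\n    aci_credentials=AzureContainerInstanceCredentials(),\n    resource_group_name=\"my-resource-group\",\n    subscription_id=\"my-subscription-id\",\n)\n# preview() returns a JSON string summarizing the container that would be created\nprint(json.dumps(json.loads(aci_job.preview()), indent=2))\n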
        "},{"location":"integrations/prefect-azure/container_instance/#prefect_azure.container_instance.AzureContainerInstanceJob.run","title":"run async","text":"

        Runs the configured task using an ACI container.

        Parameters:

        task_status (Optional[TaskStatus], default None): An optional TaskStatus to update when the container starts.

        Returns:

        AzureContainerInstanceJobResult: An AzureContainerInstanceJobResult with the container's exit code.

        Source code in prefect_azure/container_instance.py
        @sync_compatible\nasync def run(\n    self, task_status: Optional[TaskStatus] = None\n) -> AzureContainerInstanceJobResult:\n    \"\"\"\n    Runs the configured task using an ACI container.\n\n    Args:\n        task_status: An optional `TaskStatus` to update when the container starts.\n\n    Returns:\n        An `AzureContainerInstanceJobResult` with the container's exit code.\n    \"\"\"\n\n    run_start_time = datetime.datetime.now(datetime.timezone.utc)\n\n    container = self._configure_container()\n    container_group = self._configure_container_group(container)\n    created_container_group = None\n\n    aci_client = self.aci_credentials.get_container_client(\n        self.subscription_id.get_secret_value()\n    )\n\n    self.logger.info(\n        f\"{self._log_prefix}: Preparing to run command {' '.join(self.command)!r} \"\n        f\"in container {container.name!r} ({self.image})...\"\n    )\n    try:\n        self.logger.info(f\"{self._log_prefix}: Waiting for container creation...\")\n        # Create the container group and wait for it to start\n        creation_status_poller = await run_sync_in_worker_thread(\n            aci_client.container_groups.begin_create_or_update,\n            self.resource_group_name,\n            container.name,\n            container_group,\n        )\n        created_container_group = await run_sync_in_worker_thread(\n            self._wait_for_task_container_start, creation_status_poller\n        )\n\n        # If creation succeeded, group provisioning state should be 'Succeeded'\n        # and the group should have a single container\n        if self._provisioning_succeeded(created_container_group):\n            self.logger.info(f\"{self._log_prefix}: Running command...\")\n            if task_status:\n                task_status.started(value=created_container_group.name)\n            status_code = await run_sync_in_worker_thread(\n                self._watch_task_and_get_exit_code,\n                aci_client,\n                created_container_group,\n                run_start_time,\n            )\n            self.logger.info(f\"{self._log_prefix}: Completed command run.\")\n        else:\n            raise RuntimeError(f\"{self._log_prefix}: Container creation failed.\")\n\n    finally:\n        if created_container_group:\n            await self._wait_for_container_group_deletion(\n                aci_client, created_container_group\n            )\n\n    return AzureContainerInstanceJobResult(\n        identifier=created_container_group.name, status_code=status_code\n    )\n
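
        A hedged end-to-end sketch; the image, command, resource group, and subscription ID are placeholders. Because run is sync_compatible, it can be called directly from synchronous code or awaited from asynchronous code.

        from prefect_azure import AzureContainerInstanceCredentials\nfrom prefect_azure.container_instance import AzureContainerInstanceJob\n\naci_job = AzureContainerInstanceJob(\n    aci_credentials=AzureContainerInstanceCredentials(),\n    resource_group_name=\"my-resource-group\",\n    subscription_id=\"my-subscription-id\",\n    image=\"prefecthq/prefect:2-latest\",\n    command=[\"python\", \"-c\", \"print('hello from ACI')\"],\n)\n\n# blocks until the container group finishes, then reports its exit code\nresult = aci_job.run()\nprint(result.identifier, result.status_code)\n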
        "},{"location":"integrations/prefect-azure/container_instance/#prefect_azure.container_instance.AzureContainerInstanceJobResult","title":"AzureContainerInstanceJobResult","text":"

        Bases: InfrastructureResult

        The result of an AzureContainerInstanceJob run.

        Source code in prefect_azure/container_instance.py
        class AzureContainerInstanceJobResult(InfrastructureResult):\n    \"\"\"\n    The result of an `AzureContainerInstanceJob` run.\n    \"\"\"\n
        "},{"location":"integrations/prefect-azure/container_instance/#prefect_azure.container_instance.ContainerGroupProvisioningState","title":"ContainerGroupProvisioningState","text":"

        Bases: str, Enum

        Terminal provisioning states for ACI container groups. Per the Azure docs, the states in this Enum are the only ones that can be relied on as dependencies.

        Source code in prefect_azure/container_instance.py
        class ContainerGroupProvisioningState(str, Enum):\n    \"\"\"\n    Terminal provisioning states for ACI container groups. Per the Azure docs,\n    the states in this Enum are the only ones that can be relied on as dependencies.\n    \"\"\"\n\n    SUCCEEDED = \"Succeeded\"\n    FAILED = \"Failed\"\n
        "},{"location":"integrations/prefect-azure/container_instance/#prefect_azure.container_instance.ContainerRunState","title":"ContainerRunState","text":"

        Bases: str, Enum

        Terminal run states for ACI containers.

        Source code in prefect_azure/container_instance.py
        class ContainerRunState(str, Enum):\n    \"\"\"\n    Terminal run states for ACI containers.\n    \"\"\"\n\n    RUNNING = \"Running\"\n    TERMINATED = \"Terminated\"\n
        "},{"location":"integrations/prefect-azure/container_instance_worker/","title":"Container Instance Worker","text":""},{"location":"integrations/prefect-azure/container_instance_worker/#prefect_azure.workers.container_instance","title":"prefect_azure.workers.container_instance","text":"

        Module containing the Azure Container Instances worker used for executing flow runs in ACI containers.

        To start an ACI worker, run the following command:

        prefect worker start --pool 'my-work-pool' --type azure-container-instance\n

        Replace my-work-pool with the name of the work pool you want the worker to poll for flow runs.

        Using a custom ARM template

        To facilitate easy customization, the Azure Container Instance worker provisions a container group using an ARM template. The default ARM template is represented in YAML as follows:

        ---\narm_template:\n  \"$schema\": https://schema.management.azure.com/schemas/2019-08-01/deploymentTemplate.json#\n  contentVersion: 1.0.0.0\n  parameters:\n    location:\n      type: string\n      defaultValue: \"[resourceGroup().location]\"\n      metadata:\n        description: Location for all resources.\n    container_group_name:\n      type: string\n      defaultValue: \"[uniqueString(resourceGroup().id)]\"\n      metadata:\n        description: The name of the container group to create.\n    container_name:\n      type: string\n      defaultValue: \"[uniqueString(resourceGroup().id)]\"\n      metadata:\n        description: The name of the container to create.\n  resources:\n  - type: Microsoft.ContainerInstance/containerGroups\n    apiVersion: '2022-09-01'\n    name: \"[parameters('container_group_name')]\"\n    location: \"[parameters('location')]\"\n    properties:\n      containers:\n      - name: \"[parameters('container_name')]\"\n        properties:\n          image: rpeden/my-aci-flow:latest\n          command: \"{{ command }}\"\n          resources:\n            requests:\n              cpu: \"{{ cpu }}\"\n              memoryInGB: \"{{ memory }}\"\n          environmentVariables: []\n      osType: Linux\n      restartPolicy: Never\n

        Each value enclosed in {{ }} is a placeholder that will be replaced with a value at runtime. The values that can be used as placeholders are defined by the variables schema in the base job template.

        The default job manifest and the available variables can be customized per work pool. These customizations can be made in the Prefect UI when creating or editing a work pool.

        Using an ARM template makes the worker flexible; you're not limited to using the features the worker provides out of the box. Instead, you can modify the ARM template to use any features available in Azure Container Instances.
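
        As a short, hedged sketch of where those placeholders come from, the worker's default base job template can be inspected in Python; its variables schema lists the names that may be used as {{ }} placeholders.

        from prefect_azure.workers.container_instance import AzureContainerWorker\n\n# the default base job template shipped with the worker; customize a copy of this\n# (for example through the Prefect UI) rather than editing it in place\ntemplate = AzureContainerWorker.get_default_base_job_template()\nprint(sorted(template[\"variables\"][\"properties\"]))\n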

        "},{"location":"integrations/prefect-azure/container_instance_worker/#prefect_azure.workers.container_instance.AzureContainerJobConfiguration","title":"AzureContainerJobConfiguration","text":"

        Bases: BaseJobConfiguration

        Configuration for an Azure Container Instance flow run.

        Source code in prefect_azure/workers/container_instance.py
        class AzureContainerJobConfiguration(BaseJobConfiguration):\n    \"\"\"\n    Configuration for an Azure Container Instance flow run.\n    \"\"\"\n\n    image: str = Field(default_factory=get_prefect_image_name)\n    resource_group_name: str = Field(default=...)\n    subscription_id: SecretStr = Field(default=...)\n    identities: Optional[List[str]] = Field(default=None)\n    entrypoint: Optional[str] = Field(default=DEFAULT_CONTAINER_ENTRYPOINT)\n    image_registry: Optional[\n        Union[\n            prefect.infrastructure.container.DockerRegistry,\n            ACRManagedIdentity,\n        ]\n    ] = Field(default=None)\n    cpu: float = Field(default=ACI_DEFAULT_CPU)\n    gpu_count: Optional[int] = Field(default=None)\n    gpu_sku: Optional[str] = Field(default=None)\n    memory: float = Field(default=ACI_DEFAULT_MEMORY)\n    subnet_ids: Optional[List[str]] = Field(default=None)\n    dns_servers: Optional[List[str]] = Field(default=None)\n    stream_output: bool = Field(default=False)\n    aci_credentials: AzureContainerInstanceCredentials = Field(\n        # default to an empty credentials object that will use\n        # `DefaultAzureCredential` to authenticate.\n        default_factory=AzureContainerInstanceCredentials\n    )\n    # Execution settings\n    task_start_timeout_seconds: int = Field(default=240)\n    task_watch_poll_interval: float = Field(default=5.0)\n    arm_template: Dict[str, Any] = Field(template=_get_default_arm_template())\n    keep_container_group: bool = Field(default=False)\n\n    def prepare_for_flow_run(\n        self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"DeploymentResponse\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ):\n        \"\"\"\n        Prepares the job configuration for a flow run.\n        \"\"\"\n        super().prepare_for_flow_run(flow_run, deployment, flow)\n\n        # expectations:\n        # - the first resource in the template is the container group\n        # - the container group has a single container\n        container_group = self.arm_template[\"resources\"][0]\n        container = container_group[\"properties\"][\"containers\"][0]\n\n        # set the container's environment variables\n        container[\"properties\"][\"environmentVariables\"] = self._get_arm_environment()\n\n        # convert the command from a string to a list, because that's what ACI expects\n        if self.command:\n            container[\"properties\"][\"command\"] = self.command.split(\" \")\n\n        self._add_image()\n\n        # Add the entrypoint if provided. Creating an ACI container with a\n        # command overrides the container's built-in entrypoint. 
Prefect base images\n        # use entrypoint.sh as the entrypoint, so we need to add to the beginning of\n        # the command list to avoid breaking EXTRA_PIP_PACKAGES installation on\n        # container startup.\n        if self.entrypoint:\n            container[\"properties\"][\"command\"].insert(0, self.entrypoint)\n\n        if self.image_registry:\n            self._add_image_registry_credentials(self.image_registry)\n\n        if self.identities:\n            self._add_identities(self.identities)\n\n        if self.subnet_ids:\n            self._add_subnets(self.subnet_ids)\n\n        if self.dns_servers:\n            self._add_dns_servers(self.dns_servers)\n\n    def _add_image(self):\n        \"\"\"\n        Add the image to the arm template.\n        \"\"\"\n        try:\n            self.arm_template[\"resources\"][0][\"properties\"][\"containers\"][0][\n                \"properties\"\n            ][\"image\"] = self.image\n        except KeyError:\n            raise ValueError(\"Unable to add image due to invalid job ARM template.\")\n\n    def _add_image_registry_credentials(\n        self,\n        image_registry: Union[\n            prefect.infrastructure.container.DockerRegistry,\n            ACRManagedIdentity,\n            None,\n        ],\n    ):\n        \"\"\"\n        Create image registry credentials based on the type of image_registry provided.\n\n        Args:\n            image_registry: An instance of a DockerRegistry or\n            ACRManagedIdentity object.\n        \"\"\"\n        if image_registry and isinstance(\n            image_registry, prefect.infrastructure.container.DockerRegistry\n        ):\n            self.arm_template[\"resources\"][0][\"properties\"][\n                \"imageRegistryCredentials\"\n            ] = [\n                {\n                    \"server\": image_registry.registry_url,\n                    \"username\": image_registry.username,\n                    \"password\": image_registry.password.get_secret_value(),\n                }\n            ]\n        elif image_registry and isinstance(image_registry, ACRManagedIdentity):\n            self.arm_template[\"resources\"][0][\"properties\"][\n                \"imageRegistryCredentials\"\n            ] = [\n                {\n                    \"server\": image_registry.registry_url,\n                    \"identity\": image_registry.identity,\n                }\n            ]\n\n    def _add_identities(self, identities: List[str]):\n        \"\"\"\n        Add identities to the container group.\n\n        Args:\n            identities: A list of user-assigned identities to add to\n            the container group.\n        \"\"\"\n        self.arm_template[\"resources\"][0][\"identity\"] = {\n            \"type\": \"UserAssigned\",\n            \"userAssignedIdentities\": {\n                # note: For user-assigned identities, the key is the resource ID\n                # of the identity and the value is an empty object. 
See:\n                # https://docs.microsoft.com/en-us/azure/templates/microsoft.containerinstance/containergroups?tabs=bicep#identity-object # noqa\n                identity: {}\n                for identity in identities\n            },\n        }\n\n    def _add_subnets(self, subnet_ids: List[str]):\n        \"\"\"\n        Add subnets to the container group.\n\n        Args:\n            subnet_ids: A list of subnet ids to add to the container group.\n        \"\"\"\n        self.arm_template[\"resources\"][0][\"properties\"][\"subnetIds\"] = [\n            {\"id\": subnet_id} for subnet_id in subnet_ids\n        ]\n\n    def _add_dns_servers(self, dns_servers: List[str]):\n        \"\"\"\n        Add dns servers to the container group.\n\n        Args:\n            dns_servers: A list of dns servers to add to the container group.\n        \"\"\"\n        self.arm_template[\"resources\"][0][\"properties\"][\"dnsConfig\"] = {\n            \"nameServers\": dns_servers\n        }\n\n    def _get_arm_environment(self):\n        \"\"\"\n        Returns the environment variables to pass to the ARM template.\n        \"\"\"\n        env = {**self._base_environment(), **self.env}\n\n        azure_env = [\n            {\"name\": key, \"secureValue\": value}\n            if key in ENV_SECRETS\n            else {\"name\": key, \"value\": value}\n            for key, value in env.items()\n        ]\n        return azure_env\n
        "},{"location":"integrations/prefect-azure/container_instance_worker/#prefect_azure.workers.container_instance.AzureContainerJobConfiguration.prepare_for_flow_run","title":"prepare_for_flow_run","text":"

        Prepares the job configuration for a flow run.

        Source code in prefect_azure/workers/container_instance.py
        def prepare_for_flow_run(\n    self,\n    flow_run: \"FlowRun\",\n    deployment: Optional[\"DeploymentResponse\"] = None,\n    flow: Optional[\"Flow\"] = None,\n):\n    \"\"\"\n    Prepares the job configuration for a flow run.\n    \"\"\"\n    super().prepare_for_flow_run(flow_run, deployment, flow)\n\n    # expectations:\n    # - the first resource in the template is the container group\n    # - the container group has a single container\n    container_group = self.arm_template[\"resources\"][0]\n    container = container_group[\"properties\"][\"containers\"][0]\n\n    # set the container's environment variables\n    container[\"properties\"][\"environmentVariables\"] = self._get_arm_environment()\n\n    # convert the command from a string to a list, because that's what ACI expects\n    if self.command:\n        container[\"properties\"][\"command\"] = self.command.split(\" \")\n\n    self._add_image()\n\n    # Add the entrypoint if provided. Creating an ACI container with a\n    # command overrides the container's built-in entrypoint. Prefect base images\n    # use entrypoint.sh as the entrypoint, so we need to add to the beginning of\n    # the command list to avoid breaking EXTRA_PIP_PACKAGES installation on\n    # container startup.\n    if self.entrypoint:\n        container[\"properties\"][\"command\"].insert(0, self.entrypoint)\n\n    if self.image_registry:\n        self._add_image_registry_credentials(self.image_registry)\n\n    if self.identities:\n        self._add_identities(self.identities)\n\n    if self.subnet_ids:\n        self._add_subnets(self.subnet_ids)\n\n    if self.dns_servers:\n        self._add_dns_servers(self.dns_servers)\n
        "},{"location":"integrations/prefect-azure/container_instance_worker/#prefect_azure.workers.container_instance.AzureContainerVariables","title":"AzureContainerVariables","text":"

        Bases: BaseVariables

        Variables for an Azure Container Instance flow run.

        Source code in prefect_azure/workers/container_instance.py
        class AzureContainerVariables(BaseVariables):\n    \"\"\"\n    Variables for an Azure Container Instance flow run.\n    \"\"\"\n\n    image: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The image to use for the Prefect container in the task. This value \"\n            \"defaults to a Prefect base image matching your local versions.\"\n        ),\n    )\n    resource_group_name: str = Field(\n        default=...,\n        title=\"Azure Resource Group Name\",\n        description=(\n            \"The name of the Azure Resource Group in which to run Prefect ACI tasks.\"\n        ),\n    )\n    subscription_id: SecretStr = Field(\n        default=...,\n        title=\"Azure Subscription ID\",\n        description=\"The ID of the Azure subscription to create containers under.\",\n    )\n    identities: Optional[List[str]] = Field(\n        title=\"Identities\",\n        default=None,\n        description=(\n            \"A list of user-assigned identities to associate with the container group. \"\n            \"The identities should be an ARM resource IDs in the form: \"\n            \"'/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identityName}'.\"  # noqa\n        ),\n    )\n    entrypoint: Optional[str] = Field(\n        default=DEFAULT_CONTAINER_ENTRYPOINT,\n        description=(\n            \"The entrypoint of the container you wish you run. This value \"\n            \"defaults to the entrypoint used by Prefect images and should only be \"\n            \"changed when using a custom image that is not based on an official \"\n            \"Prefect image. Any commands set on deployments will be passed \"\n            \"to the entrypoint as parameters.\"\n        ),\n    )\n    image_registry: Optional[\n        Union[\n            prefect.infrastructure.container.DockerRegistry,\n            ACRManagedIdentity,\n        ]\n    ] = Field(\n        default=None,\n        title=\"Image Registry (Optional)\",\n        description=(\n            \"To use any private container registry with a username and password, \"\n            \"choose DockerRegistry. To use a private Azure Container Registry \"\n            \"with a managed identity, choose ACRManagedIdentity.\"\n        ),\n    )\n    cpu: float = Field(\n        title=\"CPU\",\n        default=ACI_DEFAULT_CPU,\n        description=(\n            \"The number of virtual CPUs to assign to the task container. \"\n            f\"If not provided, a default value of {ACI_DEFAULT_CPU} will be used.\"\n        ),\n    )\n    gpu_count: Optional[int] = Field(\n        title=\"GPU Count\",\n        default=None,\n        description=(\n            \"The number of GPUs to assign to the task container. \"\n            \"If not provided, no GPU will be used.\"\n        ),\n    )\n    gpu_sku: Optional[str] = Field(\n        title=\"GPU SKU\",\n        default=None,\n        description=(\n            \"The Azure GPU SKU to use. See the ACI documentation for a list of \"\n            \"GPU SKUs available in each Azure region.\"\n        ),\n    )\n    memory: float = Field(\n        default=ACI_DEFAULT_MEMORY,\n        description=(\n            \"The amount of memory in gigabytes to provide to the ACI task. Valid \"\n            \"amounts are specified in the Azure documentation. 
If not provided, a \"\n            f\"default value of  {ACI_DEFAULT_MEMORY} will be used unless present \"\n            \"on the task definition.\"\n        ),\n    )\n    subnet_ids: Optional[List[str]] = Field(\n        title=\"Subnet IDs\",\n        default=None,\n        description=(\"A list of subnet IDs to associate with the container group. \"),\n    )\n    dns_servers: Optional[List[str]] = Field(\n        title=\"DNS Servers\",\n        default=None,\n        description=(\"A list of DNS servers to associate with the container group.\"),\n    )\n    aci_credentials: AzureContainerInstanceCredentials = Field(\n        default_factory=AzureContainerInstanceCredentials,\n        description=(\"The credentials to use to authenticate with Azure.\"),\n    )\n    stream_output: bool = Field(\n        default=False,\n        description=(\n            \"If `True`, logs will be streamed from the Prefect container to the local \"\n            \"console.\"\n        ),\n    )\n    # Execution settings\n    task_start_timeout_seconds: int = Field(\n        default=240,\n        description=(\n            \"The amount of time to watch for the start of the ACI container. \"\n            \"before marking it as failed.\"\n        ),\n    )\n    task_watch_poll_interval: float = Field(\n        default=5.0,\n        description=(\n            \"The number of seconds to wait between Azure API calls while monitoring \"\n            \"the state of an Azure Container Instances task.\"\n        ),\n    )\n    keep_container_group: bool = Field(\n        default=False,\n        title=\"Keep Container Group After Completion\",\n        description=\"Keep the completed container group on Azure.\",\n    )\n
        "},{"location":"integrations/prefect-azure/container_instance_worker/#prefect_azure.workers.container_instance.AzureContainerWorker","title":"AzureContainerWorker","text":"

        Bases: BaseWorker

        A Prefect worker that runs flows in an Azure Container Instance.

        Source code in prefect_azure/workers/container_instance.py
        class AzureContainerWorker(BaseWorker):\n    \"\"\"\n    A Prefect worker that runs flows in an Azure Container Instance.\n    \"\"\"\n\n    type = \"azure-container-instance\"\n    job_configuration = AzureContainerJobConfiguration\n    job_configuration_variables = AzureContainerVariables\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/54e3fa7e00197a4fbd1d82ed62494cb58d08c96a-250x250.png\"  # noqa\n    _display_name = \"Azure Container Instances\"\n    _description = (\n        \"Execute flow runs within containers on Azure's Container Instances \"\n        \"service. Requires an Azure account.\"\n    )\n    _documentation_url = (\n        \"https://prefecthq.github.io/prefect-azure/container_instance_worker/\"\n    )\n\n    async def run(\n        self,\n        flow_run: FlowRun,\n        configuration: AzureContainerJobConfiguration,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ):\n        \"\"\"\n        Run a flow in an Azure Container Instance.\n        Args:\n            flow_run: The flow run to run.\n            configuration: The configuration for the flow run.\n            task_status: The task status object for the current task. Used\n            to provide an identifier that can be used to cancel the task.\n\n        Returns:\n            The result of the flow run.\n        \"\"\"\n        run_start_time = datetime.datetime.now(datetime.timezone.utc)\n        prefect_client = get_client()\n\n        # Get the flow, so we can use its name in the container group name\n        # to make it easier to identify and debug.\n        flow = await prefect_client.read_flow(flow_run.flow_id)\n        container_group_name = f\"{flow.name}-{flow_run.id}\"\n\n        # Slugify flow.name if the generated name will be too long for the\n        # max deployment name length (64) including \"prefect-\"\n        if len(container_group_name) > 55:\n            slugified_flow_name = slugify(\n                flow.name,\n                max_length=55 - len(str(flow_run.id)),\n                regex_pattern=r\"[^a-zA-Z0-9-]+\",\n            )\n            container_group_name = f\"{slugified_flow_name}-{flow_run.id}\"\n\n        self._logger.info(\n            f\"{self._log_prefix}: Preparing to run command {configuration.command} \"\n            f\"in container  {configuration.image})...\"\n        )\n\n        aci_client = configuration.aci_credentials.get_container_client(\n            configuration.subscription_id.get_secret_value()\n        )\n        resource_client = configuration.aci_credentials.get_resource_client(\n            configuration.subscription_id.get_secret_value()\n        )\n\n        created_container_group: Union[ContainerGroup, None] = None\n        try:\n            self._logger.info(f\"{self._log_prefix}: Creating container group...\")\n\n            created_container_group = await self._provision_container_group(\n                aci_client,\n                resource_client,\n                configuration,\n                container_group_name,\n            )\n            # Both the flow ID and container group name will be needed to\n            # cancel the flow run if needed.\n            identifier = f\"{flow_run.id}:{container_group_name}\"\n\n            if self._provisioning_succeeded(created_container_group):\n                self._logger.info(f\"{self._log_prefix}: Running command...\")\n                if task_status is not None:\n                    task_status.started(value=identifier)\n\n                
status_code = await run_sync_in_worker_thread(\n                    self._watch_task_and_get_exit_code,\n                    aci_client,\n                    configuration,\n                    created_container_group,\n                    run_start_time,\n                )\n\n                self._logger.info(f\"{self._log_prefix}: Completed command run.\")\n\n            else:\n                raise RuntimeError(f\"{self._log_prefix}: Container creation failed.\")\n\n        finally:\n            if configuration.keep_container_group:\n                self._logger.info(f\"{self._log_prefix}: Stopping container group...\")\n                aci_client.container_groups.stop(\n                    resource_group_name=configuration.resource_group_name,\n                    container_group_name=container_group_name,\n                )\n            else:\n                await self._wait_for_container_group_deletion(\n                    aci_client, configuration, container_group_name\n                )\n\n        return AzureContainerWorkerResult(\n            identifier=created_container_group.name, status_code=status_code\n        )\n\n    async def kill_infrastructure(\n        self,\n        infrastructure_pid: str,\n        configuration: AzureContainerJobConfiguration,\n    ):\n        \"\"\"\n        Kill a flow running in an ACI container group.\n\n        Args:\n            infrastructure_pid: The container group identification data yielded by\n                `AzureContainerInstanceJob.run`.\n            configuration: The job configuration.\n        \"\"\"\n        (flow_run_id, container_group_name) = infrastructure_pid.split(\":\")\n\n        aci_client = configuration.aci_credentials.get_container_client(\n            configuration.subscription_id.get_secret_value()\n        )\n\n        # get the container group to check that it still exists\n        try:\n            container_group = aci_client.container_groups.get(\n                resource_group_name=configuration.resource_group_name,\n                container_group_name=container_group_name,\n            )\n        except ResourceNotFoundError as exc:\n            # the container group no longer exists, so there's nothing to cancel\n            raise InfrastructureNotFound(\n                f\"Cannot stop ACI job: container group \"\n                f\"{container_group_name} no longer exists.\"\n            ) from exc\n\n        # get the container state to check if the container has terminated\n        container = self._get_container(container_group)\n        container_state = container.instance_view.current_state.state\n\n        # the container group needs to be deleted regardless of whether the container\n        # already terminated\n        await self._wait_for_container_group_deletion(\n            aci_client, configuration, container_group_name\n        )\n\n        # if the container has already terminated, raise an exception to let the agent\n        # know the flow was not cancelled\n        if container_state == ContainerRunState.TERMINATED:\n            raise InfrastructureNotAvailable(\n                f\"Cannot stop ACI job: container group {container_group.name} exists, \"\n                f\"but container {container.name} has already terminated.\"\n            )\n\n    def _wait_for_task_container_start(\n        self,\n        client: ContainerInstanceManagementClient,\n        configuration: AzureContainerJobConfiguration,\n        container_group_name: str,\n        creation_status_poller: 
LROPoller[DeploymentExtended],\n    ) -> Optional[ContainerGroup]:\n        \"\"\"\n        Wait for the result of group and container creation.\n\n        Args:\n            creation_status_poller: Poller returned by the Azure SDK.\n\n        Raises:\n            RuntimeError: Raised if the timeout limit is exceeded before the\n            container starts.\n\n        Returns:\n            A `ContainerGroup` representing the current status of the group being\n            watched, or None if creation failed.\n        \"\"\"\n        t0 = time.time()\n        timeout = configuration.task_start_timeout_seconds\n\n        while not creation_status_poller.done():\n            elapsed_time = time.time() - t0\n\n            if timeout and elapsed_time > timeout:\n                raise RuntimeError(\n                    (\n                        f\"Timed out after {elapsed_time}s while watching waiting for \"\n                        \"container start.\"\n                    )\n                )\n            time.sleep(configuration.task_watch_poll_interval)\n\n        deployment = creation_status_poller.result()\n\n        provisioning_succeeded = (\n            deployment.properties.provisioning_state\n            == ContainerGroupProvisioningState.SUCCEEDED\n        )\n\n        if provisioning_succeeded:\n            return self._get_container_group(\n                client, configuration.resource_group_name, container_group_name\n            )\n        else:\n            return None\n\n    async def _provision_container_group(\n        self,\n        aci_client: ContainerInstanceManagementClient,\n        resource_client: ResourceManagementClient,\n        configuration: AzureContainerJobConfiguration,\n        container_group_name: str,\n    ):\n        \"\"\"\n        Create a container group and wait for it to start.\n        Args:\n            aci_client: An authenticated ACI client.\n            resource_client: An authenticated resource client.\n            configuration: The job configuration.\n            container_group_name: The name of the container group to create.\n\n        Returns:\n            A `ContainerGroup` representing the container group that was created.\n        \"\"\"\n        properties = DeploymentProperties(\n            mode=DeploymentMode.INCREMENTAL,\n            template=configuration.arm_template,\n            parameters={\"container_group_name\": {\"value\": container_group_name}},\n        )\n        deployment = Deployment(properties=properties)\n\n        creation_status_poller = await run_sync_in_worker_thread(\n            resource_client.deployments.begin_create_or_update,\n            resource_group_name=configuration.resource_group_name,\n            deployment_name=f\"prefect-{container_group_name}\",\n            parameters=deployment,\n        )\n\n        created_container_group = await run_sync_in_worker_thread(\n            self._wait_for_task_container_start,\n            aci_client,\n            configuration,\n            container_group_name,\n            creation_status_poller,\n        )\n\n        return created_container_group\n\n    def _watch_task_and_get_exit_code(\n        self,\n        client: ContainerInstanceManagementClient,\n        configuration: AzureContainerJobConfiguration,\n        container_group: ContainerGroup,\n        run_start_time: datetime.datetime,\n    ) -> int:\n        \"\"\"\n        Waits until the container finishes running and obtains its exit code.\n\n        Args:\n            client: An initialized 
Azure `ContainerInstanceManagementClient`\n            container_group: The `ContainerGroup` in which the container resides.\n\n        Returns:\n            An `int` representing the container's exit code.\n        \"\"\"\n        status_code = -1\n        running_container = self._get_container(container_group)\n        current_state = running_container.instance_view.current_state.state\n\n        # get any logs the container has already generated\n        last_log_time = run_start_time\n        if configuration.stream_output:\n            last_log_time = self._get_and_stream_output(\n                client=client,\n                configuration=configuration,\n                container_group=container_group,\n                last_log_time=last_log_time,\n            )\n\n        # set exit code if flow run already finished:\n        if current_state == ContainerRunState.TERMINATED:\n            status_code = running_container.instance_view.current_state.exit_code\n\n        while current_state != ContainerRunState.TERMINATED:\n            try:\n                container_group = self._get_container_group(\n                    client,\n                    configuration.resource_group_name,\n                    container_group.name,\n                )\n            except ResourceNotFoundError:\n                self._logger.exception(\n                    f\"{self._log_prefix}: Container group was deleted before flow run \"\n                    \"completed, likely due to flow cancellation.\"\n                )\n\n                # since the flow was cancelled, exit early instead of raising an\n                # exception\n                return status_code\n\n            container = self._get_container(container_group)\n            current_state = container.instance_view.current_state.state\n\n            if current_state == ContainerRunState.TERMINATED:\n                status_code = container.instance_view.current_state.exit_code\n                # break instead of waiting for next loop iteration because\n                # trying to read logs from a terminated container raises an exception\n                break\n\n            if configuration.stream_output:\n                last_log_time = self._get_and_stream_output(\n                    client=client,\n                    configuration=configuration,\n                    container_group=container_group,\n                    last_log_time=last_log_time,\n                )\n\n            time.sleep(configuration.task_watch_poll_interval)\n\n        return status_code\n\n    async def _wait_for_container_group_deletion(\n        self,\n        aci_client: ContainerInstanceManagementClient,\n        configuration: AzureContainerJobConfiguration,\n        container_group_name: str,\n    ):\n        \"\"\"\n        Wait for the container group to be deleted.\n        Args:\n            aci_client: An authenticated ACI client.\n            configuration: The job configuration.\n            container_group_name: The name of the container group to delete.\n        \"\"\"\n        self._logger.info(f\"{self._log_prefix}: Deleting container...\")\n\n        deletion_status_poller = await run_sync_in_worker_thread(\n            aci_client.container_groups.begin_delete,\n            resource_group_name=configuration.resource_group_name,\n            container_group_name=container_group_name,\n        )\n\n        t0 = time.time()\n        timeout = CONTAINER_GROUP_DELETION_TIMEOUT_SECONDS\n\n        while not deletion_status_poller.done():\n        
    elapsed_time = time.time() - t0\n\n            if timeout and elapsed_time > timeout:\n                raise RuntimeError(\n                    (\n                        f\"Timed out after {elapsed_time}s while waiting for deletion of\"\n                        f\" container group {container_group_name}. To verify the group \"\n                        \"has been deleted, check the Azure Portal or run \"\n                        f\"az container show --name {container_group_name} --resource-group {configuration.resource_group_name}\"  # noqa\n                    )\n                )\n            await anyio.sleep(configuration.task_watch_poll_interval)\n\n        self._logger.info(f\"{self._log_prefix}: Container deleted.\")\n\n    def _get_container(self, container_group: ContainerGroup) -> Container:\n        \"\"\"\n        Extracts the job container from a container group.\n        \"\"\"\n        return container_group.containers[0]\n\n    @staticmethod\n    def _get_container_group(\n        client: ContainerInstanceManagementClient,\n        resource_group_name: str,\n        container_group_name: str,\n    ) -> ContainerGroup:\n        \"\"\"\n        Gets the container group from Azure.\n        \"\"\"\n        return client.container_groups.get(\n            resource_group_name=resource_group_name,\n            container_group_name=container_group_name,\n        )\n\n    def _get_and_stream_output(\n        self,\n        client: ContainerInstanceManagementClient,\n        configuration: AzureContainerJobConfiguration,\n        container_group: ContainerGroup,\n        last_log_time: datetime.datetime,\n    ) -> datetime.datetime:\n        \"\"\"\n        Fetches logs output from the job container and writes all entries after\n        a given time to stderr.\n\n        Args:\n            client: An initialized `ContainerInstanceManagementClient`\n            container_group: The container group that holds the job container.\n            last_log_time: The timestamp of the last output line already streamed.\n\n        Returns:\n            The time of the most recent output line written by this call.\n        \"\"\"\n        logs = self._get_logs(\n            client=client, configuration=configuration, container_group=container_group\n        )\n        return self._stream_output(logs, last_log_time)\n\n    def _get_logs(\n        self,\n        client: ContainerInstanceManagementClient,\n        configuration: AzureContainerJobConfiguration,\n        container_group: ContainerGroup,\n        max_lines: int = 100,\n    ) -> str:\n        \"\"\"\n        Gets the most container logs up to a given maximum.\n\n        Args:\n            client: An initialized `ContainerInstanceManagementClient`\n            container_group: The container group that holds the job container.\n            max_lines: The number of log lines to pull. 
Defaults to 100.\n\n        Returns:\n            A string containing the requested log entries, one per line.\n        \"\"\"\n        container = self._get_container(container_group)\n\n        logs: Union[Logs, None] = None\n        try:\n            logs = client.containers.list_logs(\n                resource_group_name=configuration.resource_group_name,\n                container_group_name=container_group.name,\n                container_name=container.name,\n                tail=max_lines,\n                timestamps=True,\n            )\n        except HttpResponseError:\n            # Trying to get logs when the container is under heavy CPU load sometimes\n            # results in an error, but we won't want to raise an exception and stop\n            # monitoring the flow. Instead, log the error and carry on so we can try to\n            # get all missed logs on the next check.\n            self._logger.warning(\n                f\"{self._log_prefix}: Unable to retrieve logs from container \"\n                f\"{container.name}. Trying again in \"\n                f\"{configuration.task_watch_poll_interval}s\"\n            )\n\n        return logs.content if logs else \"\"\n\n    def _stream_output(\n        self, log_content: Union[str, None], last_log_time: datetime.datetime\n    ) -> datetime.datetime:\n        \"\"\"\n        Writes each entry from a string of log lines to stderr.\n\n        Args:\n            log_content: A string containing Azure container logs.\n            last_log_time: The timestamp of the last output line already streamed.\n\n        Returns:\n            The time of the most recent output line written by this call.\n        \"\"\"\n        if not log_content:\n            # nothing to stream\n            return last_log_time\n\n        log_lines = log_content.split(\"\\n\")\n\n        last_written_time = last_log_time\n\n        for log_line in log_lines:\n            # skip if the line is blank or whitespace\n            if not log_line.strip():\n                continue\n\n            line_parts = log_line.split(\" \")\n            # timestamp should always be before first space in line\n            line_timestamp = line_parts[0]\n            line = \" \".join(line_parts[1:])\n\n            try:\n                line_time = dateutil.parser.parse(line_timestamp)\n                if line_time > last_written_time:\n                    self._write_output_line(line)\n                    last_written_time = line_time\n            except dateutil.parser.ParserError as e:\n                self._logger.debug(\n                    (\n                        f\"{self._log_prefix}: Unable to parse timestamp from Azure \"\n                        \"log line: %s\"\n                    ),\n                    log_line,\n                    exc_info=e,\n                )\n\n        return last_written_time\n\n    @property\n    def _log_prefix(self) -> str:\n        \"\"\"\n        Internal property for generating a prefix for logs where `name` may be null\n        \"\"\"\n        if self.name is not None:\n            return f\"AzureContainerInstanceJob {self.name!r}\"\n        else:\n            return \"AzureContainerInstanceJob\"\n\n    @staticmethod\n    def _provisioning_succeeded(container_group: Union[ContainerGroup, None]) -> bool:\n        \"\"\"\n        Determines whether ACI container group provisioning was successful.\n\n        Args:\n            container_group: a container group returned by the Azure SDK.\n\n        Returns:\n            True if 
provisioning was successful, False otherwise.\n        \"\"\"\n        if not container_group:\n            return False\n\n        return (\n            container_group.provisioning_state\n            == ContainerGroupProvisioningState.SUCCEEDED\n            and len(container_group.containers) == 1\n        )\n\n    @staticmethod\n    def _write_output_line(line: str):\n        \"\"\"\n        Writes a line of output to stderr.\n        \"\"\"\n        print(line, file=sys.stderr)\n
        "},{"location":"integrations/prefect-azure/container_instance_worker/#prefect_azure.workers.container_instance.AzureContainerWorker.kill_infrastructure","title":"kill_infrastructure async","text":"

        Kill a flow running in an ACI container group.

        Parameters:

        infrastructure_pid (str, required): The container group identification data yielded by AzureContainerInstanceJob.run.

        configuration (AzureContainerJobConfiguration, required): The job configuration.

        Source code in prefect_azure/workers/container_instance.py
        async def kill_infrastructure(\n    self,\n    infrastructure_pid: str,\n    configuration: AzureContainerJobConfiguration,\n):\n    \"\"\"\n    Kill a flow running in an ACI container group.\n\n    Args:\n        infrastructure_pid: The container group identification data yielded by\n            `AzureContainerInstanceJob.run`.\n        configuration: The job configuration.\n    \"\"\"\n    (flow_run_id, container_group_name) = infrastructure_pid.split(\":\")\n\n    aci_client = configuration.aci_credentials.get_container_client(\n        configuration.subscription_id.get_secret_value()\n    )\n\n    # get the container group to check that it still exists\n    try:\n        container_group = aci_client.container_groups.get(\n            resource_group_name=configuration.resource_group_name,\n            container_group_name=container_group_name,\n        )\n    except ResourceNotFoundError as exc:\n        # the container group no longer exists, so there's nothing to cancel\n        raise InfrastructureNotFound(\n            f\"Cannot stop ACI job: container group \"\n            f\"{container_group_name} no longer exists.\"\n        ) from exc\n\n    # get the container state to check if the container has terminated\n    container = self._get_container(container_group)\n    container_state = container.instance_view.current_state.state\n\n    # the container group needs to be deleted regardless of whether the container\n    # already terminated\n    await self._wait_for_container_group_deletion(\n        aci_client, configuration, container_group_name\n    )\n\n    # if the container has already terminated, raise an exception to let the agent\n    # know the flow was not cancelled\n    if container_state == ContainerRunState.TERMINATED:\n        raise InfrastructureNotAvailable(\n            f\"Cannot stop ACI job: container group {container_group.name} exists, \"\n            f\"but container {container.name} has already terminated.\"\n        )\n
        "},{"location":"integrations/prefect-azure/container_instance_worker/#prefect_azure.workers.container_instance.AzureContainerWorker.run","title":"run async","text":"

        Run a flow in an Azure Container Instance.

        Parameters:

        flow_run (FlowRun, required): The flow run to run.

        configuration (AzureContainerJobConfiguration, required): The configuration for the flow run.

        task_status (Optional[TaskStatus], default None): The task status object for the current task, used to provide an identifier that can be used to cancel the task.

        Returns:

        AzureContainerWorkerResult: The result of the flow run.

        Source code in prefect_azure/workers/container_instance.py
        async def run(\n    self,\n    flow_run: FlowRun,\n    configuration: AzureContainerJobConfiguration,\n    task_status: Optional[anyio.abc.TaskStatus] = None,\n):\n    \"\"\"\n    Run a flow in an Azure Container Instance.\n    Args:\n        flow_run: The flow run to run.\n        configuration: The configuration for the flow run.\n        task_status: The task status object for the current task. Used\n        to provide an identifier that can be used to cancel the task.\n\n    Returns:\n        The result of the flow run.\n    \"\"\"\n    run_start_time = datetime.datetime.now(datetime.timezone.utc)\n    prefect_client = get_client()\n\n    # Get the flow, so we can use its name in the container group name\n    # to make it easier to identify and debug.\n    flow = await prefect_client.read_flow(flow_run.flow_id)\n    container_group_name = f\"{flow.name}-{flow_run.id}\"\n\n    # Slugify flow.name if the generated name will be too long for the\n    # max deployment name length (64) including \"prefect-\"\n    if len(container_group_name) > 55:\n        slugified_flow_name = slugify(\n            flow.name,\n            max_length=55 - len(str(flow_run.id)),\n            regex_pattern=r\"[^a-zA-Z0-9-]+\",\n        )\n        container_group_name = f\"{slugified_flow_name}-{flow_run.id}\"\n\n    self._logger.info(\n        f\"{self._log_prefix}: Preparing to run command {configuration.command} \"\n        f\"in container  {configuration.image})...\"\n    )\n\n    aci_client = configuration.aci_credentials.get_container_client(\n        configuration.subscription_id.get_secret_value()\n    )\n    resource_client = configuration.aci_credentials.get_resource_client(\n        configuration.subscription_id.get_secret_value()\n    )\n\n    created_container_group: Union[ContainerGroup, None] = None\n    try:\n        self._logger.info(f\"{self._log_prefix}: Creating container group...\")\n\n        created_container_group = await self._provision_container_group(\n            aci_client,\n            resource_client,\n            configuration,\n            container_group_name,\n        )\n        # Both the flow ID and container group name will be needed to\n        # cancel the flow run if needed.\n        identifier = f\"{flow_run.id}:{container_group_name}\"\n\n        if self._provisioning_succeeded(created_container_group):\n            self._logger.info(f\"{self._log_prefix}: Running command...\")\n            if task_status is not None:\n                task_status.started(value=identifier)\n\n            status_code = await run_sync_in_worker_thread(\n                self._watch_task_and_get_exit_code,\n                aci_client,\n                configuration,\n                created_container_group,\n                run_start_time,\n            )\n\n            self._logger.info(f\"{self._log_prefix}: Completed command run.\")\n\n        else:\n            raise RuntimeError(f\"{self._log_prefix}: Container creation failed.\")\n\n    finally:\n        if configuration.keep_container_group:\n            self._logger.info(f\"{self._log_prefix}: Stopping container group...\")\n            aci_client.container_groups.stop(\n                resource_group_name=configuration.resource_group_name,\n                container_group_name=container_group_name,\n            )\n        else:\n            await self._wait_for_container_group_deletion(\n                aci_client, configuration, container_group_name\n            )\n\n    return AzureContainerWorkerResult(\n        
identifier=created_container_group.name, status_code=status_code\n    )\n
        "},{"location":"integrations/prefect-azure/container_instance_worker/#prefect_azure.workers.container_instance.AzureContainerWorkerResult","title":"AzureContainerWorkerResult","text":"

        Bases: BaseWorkerResult

        Contains information about the final state of a completed process

        Source code in prefect_azure/workers/container_instance.py
        class AzureContainerWorkerResult(BaseWorkerResult):\n    \"\"\"Contains information about the final state of a completed process\"\"\"\n
        "},{"location":"integrations/prefect-azure/container_instance_worker/#prefect_azure.workers.container_instance.ContainerGroupProvisioningState","title":"ContainerGroupProvisioningState","text":"

        Bases: str, Enum

        Terminal provisioning states for ACI container groups. Per the Azure docs, the states in this Enum are the only ones that can be relied on as dependencies.

        Source code in prefect_azure/workers/container_instance.py
        class ContainerGroupProvisioningState(str, Enum):\n    \"\"\"\n    Terminal provisioning states for ACI container groups. Per the Azure docs,\n    the states in this Enum are the only ones that can be relied on as dependencies.\n    \"\"\"\n\n    SUCCEEDED = \"Succeeded\"\n    FAILED = \"Failed\"\n
        "},{"location":"integrations/prefect-azure/container_instance_worker/#prefect_azure.workers.container_instance.ContainerRunState","title":"ContainerRunState","text":"

        Bases: str, Enum

        Terminal run states for ACI containers.

        Source code in prefect_azure/workers/container_instance.py
        class ContainerRunState(str, Enum):\n    \"\"\"\n    Terminal run states for ACI containers.\n    \"\"\"\n\n    RUNNING = \"Running\"\n    TERMINATED = \"Terminated\"\n
        "},{"location":"integrations/prefect-azure/cosmos_db/","title":"Cosmos DB","text":""},{"location":"integrations/prefect-azure/cosmos_db/#prefect_azure.cosmos_db","title":"prefect_azure.cosmos_db","text":"

        Tasks for interacting with Azure Cosmos DB

        "},{"location":"integrations/prefect-azure/cosmos_db/#prefect_azure.cosmos_db.cosmos_db_create_item","title":"cosmos_db_create_item async","text":"

        Create an item in the container.

        To update or replace an existing item, use the upsert_item method.

        Parameters:

        Name Type Description Default body Dict[str, Any]

        A dict-like object representing the item to create.

        required container Union[str, ContainerProxy, Dict[str, Any]]

        The ID (name) of the container, a ContainerProxy instance, or a dict representing the properties of the container to be retrieved.

        required database Union[str, DatabaseProxy, Dict[str, Any]]

        The ID (name), a dict representing the properties, or a DatabaseProxy instance of the database to read.

        required cosmos_db_credentials AzureCosmosDbCredentials

        Credentials to use for authentication with Azure.

        required **kwargs Any

        Additional keyword arguments to pass.

        {}

        Returns:

        Type Description Dict[str, Any]

        A dict representing the new item.

        Example

        Create an item in the container.

        To update or replace an existing item, use the upsert_item method.

        import uuid\n\nfrom prefect import flow\n\nfrom prefect_azure import AzureCosmosDbCredentials\nfrom prefect_azure.cosmos_db import cosmos_db_create_item\n\n@flow\ndef example_cosmos_db_create_item_flow():\n    connection_string = \"connection_string\"\n    cosmos_db_credentials = AzureCosmosDbCredentials(connection_string)\n\n    body = {\n        \"firstname\": \"Olivia\",\n        \"age\": 3,\n        \"id\": str(uuid.uuid4())\n    }\n    container = \"Persons\"\n    database = \"SampleDB\"\n\n    result = cosmos_db_create_item(\n        body,\n        container,\n        database,\n        cosmos_db_credentials\n    )\n    return result\n\nexample_cosmos_db_create_item_flow()\n

        Source code in prefect_azure/cosmos_db.py
        @task\nasync def cosmos_db_create_item(\n    body: Dict[str, Any],\n    container: Union[str, \"ContainerProxy\", Dict[str, Any]],\n    database: Union[str, \"DatabaseProxy\", Dict[str, Any]],\n    cosmos_db_credentials: AzureCosmosDbCredentials,\n    **kwargs: Any,\n) -> Dict[str, Any]:\n    \"\"\"\n    Create an item in the container.\n\n    To update or replace an existing item, use the upsert_item method.\n\n    Args:\n        body: A dict-like object representing the item to create.\n        container: The ID (name) of the container, a ContainerProxy instance,\n            or a dict representing the properties of the container to be retrieved.\n        database: The ID (name), dict representing the properties\n            or DatabaseProxy instance of the database to read.\n        cosmos_db_credentials: Credentials to use for authentication with Azure.\n        **kwargs: Additional keyword arguments to pass.\n\n    Returns:\n        A dict representing the new item.\n\n    Example:\n        Create an item in the container.\n\n        To update or replace an existing item, use the upsert_item method.\n        ```python\n        import uuid\n\n        from prefect import flow\n\n        from prefect_azure import AzureCosmosDbCredentials\n        from prefect_azure.cosmos_db import cosmos_db_create_item\n\n        @flow\n        def example_cosmos_db_create_item_flow():\n            connection_string = \"connection_string\"\n            cosmos_db_credentials = AzureCosmosDbCredentials(connection_string)\n\n            body = {\n                \"firstname\": \"Olivia\",\n                \"age\": 3,\n                \"id\": str(uuid.uuid4())\n            }\n            container = \"Persons\"\n            database = \"SampleDB\"\n\n            result = cosmos_db_create_item(\n                body,\n                container,\n                database,\n                cosmos_db_credentials\n            )\n            return result\n\n        example_cosmos_db_create_item_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\n        \"Creating the item within container %s under %s database\",\n        container,\n        database,\n    )\n\n    container_client = cosmos_db_credentials.get_container_client(container, database)\n    create_item = partial(container_client.create_item, body, **kwargs)\n    result = await to_thread.run_sync(create_item)\n    return result\n
        "},{"location":"integrations/prefect-azure/cosmos_db/#prefect_azure.cosmos_db.cosmos_db_query_items","title":"cosmos_db_query_items async","text":"

        Return all results matching the given query.

        You can use any value for the container alias in the FROM clause, but the container name itself is often used. In the example below, the container name is \"Persons\" and is aliased as \"c\" for easier referencing in the WHERE clause.

        Parameters:

        Name Type Description Default query str

        The Azure Cosmos DB SQL query to execute.

        required container Union[str, ContainerProxy, Dict[str, Any]]

        The ID (name) of the container, a ContainerProxy instance, or a dict representing the properties of the container to be retrieved.

        required database Union[str, DatabaseProxy, Dict[str, Any]]

        The ID (name), a dict representing the properties, or a DatabaseProxy instance of the database to read.

        required cosmos_db_credentials AzureCosmosDbCredentials

        Credentials to use for authentication with Azure.

        required parameters Optional[List[Dict[str, object]]]

        Optional array of parameters to the query. Each parameter is a dict() with 'name' and 'value' keys.

        None partition_key Optional[Any]

        Partition key for the item to retrieve.

        None **kwargs Any

        Additional keyword arguments to pass.

        {}

        Returns:

        Type Description List[Union[str, dict]]

        A list of results.

        Example

        Query SampleDB Persons container where age >= 44

        from prefect import flow\n\nfrom prefect_azure import AzureCosmosDbCredentials\nfrom prefect_azure.cosmos_db import cosmos_db_query_items\n\n@flow\ndef example_cosmos_db_query_items_flow():\n    connection_string = \"connection_string\"\n    cosmos_db_credentials = AzureCosmosDbCredentials(connection_string)\n\n    query = \"SELECT * FROM c where c.age >= @age\"\n    container = \"Persons\"\n    database = \"SampleDB\"\n    parameters = [dict(name=\"@age\", value=44)]\n\n    results = cosmos_db_query_items(\n        query,\n        container,\n        database,\n        cosmos_db_credentials,\n        parameters=parameters,\n        enable_cross_partition_query=True,\n    )\n    return results\n\nexample_cosmos_db_query_items_flow()\n

        Source code in prefect_azure/cosmos_db.py
        @task\nasync def cosmos_db_query_items(\n    query: str,\n    container: Union[str, \"ContainerProxy\", Dict[str, Any]],\n    database: Union[str, \"DatabaseProxy\", Dict[str, Any]],\n    cosmos_db_credentials: AzureCosmosDbCredentials,\n    parameters: Optional[List[Dict[str, object]]] = None,\n    partition_key: Optional[Any] = None,\n    **kwargs: Any,\n) -> List[Union[str, dict]]:\n    \"\"\"\n    Return all results matching the given query.\n\n    You can use any value for the container name in the FROM clause,\n    but often the container name is used.\n    In the examples below, the container name is \"products,\"\n    and is aliased as \"p\" for easier referencing in the WHERE clause.\n\n    Args:\n        query: The Azure Cosmos DB SQL query to execute.\n        container: The ID (name) of the container, a ContainerProxy instance,\n            or a dict representing the properties of the container to be retrieved.\n        database: The ID (name), dict representing the properties\n            or DatabaseProxy instance of the database to read.\n        cosmos_db_credentials: Credentials to use for authentication with Azure.\n        parameters: Optional array of parameters to the query.\n            Each parameter is a dict() with 'name' and 'value' keys.\n        partition_key: Partition key for the item to retrieve.\n        **kwargs: Additional keyword arguments to pass.\n\n    Returns:\n        An `list` of results.\n\n    Example:\n        Query SampleDB Persons container where age >= 44\n        ```python\n        from prefect import flow\n\n        from prefect_azure import AzureCosmosDbCredentials\n        from prefect_azure.cosmos_db import cosmos_db_query_items\n\n        @flow\n        def example_cosmos_db_query_items_flow():\n            connection_string = \"connection_string\"\n            cosmos_db_credentials = AzureCosmosDbCredentials(connection_string)\n\n            query = \"SELECT * FROM c where c.age >= @age\"\n            container = \"Persons\"\n            database = \"SampleDB\"\n            parameters = [dict(name=\"@age\", value=44)]\n\n            results = cosmos_db_query_items(\n                query,\n                container,\n                database,\n                cosmos_db_credentials,\n                parameters=parameters,\n                enable_cross_partition_query=True,\n            )\n            return results\n\n        example_cosmos_db_query_items_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Running query from container %s in %s database\", container, database)\n\n    container_client = cosmos_db_credentials.get_container_client(container, database)\n    partial_query_items = partial(\n        container_client.query_items,\n        query,\n        parameters=parameters,\n        partition_key=partition_key,\n        **kwargs,\n    )\n    results = await to_thread.run_sync(partial_query_items)\n    return results\n
        "},{"location":"integrations/prefect-azure/cosmos_db/#prefect_azure.cosmos_db.cosmos_db_read_item","title":"cosmos_db_read_item async","text":"

        Get the item identified by item.

        Parameters:

        Name Type Description Default item Union[str, Dict[str, Any]]

        The ID (name) or a dict representing the item to retrieve.

        required partition_key Any

        Partition key for the item to retrieve.

        required container Union[str, ContainerProxy, Dict[str, Any]]

        The ID (name) of the container, a ContainerProxy instance, or a dict representing the properties of the container to be retrieved.

        required database Union[str, DatabaseProxy, Dict[str, Any]]

        The ID (name), a dict representing the properties, or a DatabaseProxy instance of the database to read.

        required cosmos_db_credentials AzureCosmosDbCredentials

        Credentials to use for authentication with Azure.

        required **kwargs Any

        Additional keyword arguments to pass.

        {}

        Returns:

        Type Description List[Union[str, dict]]

        Dict representing the item to be retrieved.

        Example

        Read an item using a partition key from Cosmos DB.

        from prefect import flow\n\nfrom prefect_azure import AzureCosmosDbCredentials\nfrom prefect_azure.cosmos_db import cosmos_db_read_item\n\n@flow\ndef example_cosmos_db_read_item_flow():\n    connection_string = \"connection_string\"\n    cosmos_db_credentials = AzureCosmosDbCredentials(connection_string)\n\n    item = \"item\"\n    partition_key = \"partition_key\"\n    container = \"container\"\n    database = \"database\"\n\n    result = cosmos_db_read_item(\n        item,\n        partition_key,\n        container,\n        database,\n        cosmos_db_credentials\n    )\n    return result\n\nexample_cosmos_db_read_item_flow()\n

        Source code in prefect_azure/cosmos_db.py
        @task\nasync def cosmos_db_read_item(\n    item: Union[str, Dict[str, Any]],\n    partition_key: Any,\n    container: Union[str, \"ContainerProxy\", Dict[str, Any]],\n    database: Union[str, \"DatabaseProxy\", Dict[str, Any]],\n    cosmos_db_credentials: AzureCosmosDbCredentials,\n    **kwargs: Any,\n) -> List[Union[str, dict]]:\n    \"\"\"\n    Get the item identified by item.\n\n    Args:\n        item: The ID (name) or dict representing item to retrieve.\n        partition_key: Partition key for the item to retrieve.\n        container: The ID (name) of the container, a ContainerProxy instance,\n            or a dict representing the properties of the container to be retrieved.\n        database: The ID (name), dict representing the properties\n            or DatabaseProxy instance of the database to read.\n        cosmos_db_credentials: Credentials to use for authentication with Azure.\n        **kwargs: Additional keyword arguments to pass.\n\n    Returns:\n        Dict representing the item to be retrieved.\n\n    Example:\n        Read an item using a partition key from Cosmos DB.\n        ```python\n        from prefect import flow\n\n        from prefect_azure import AzureCosmosDbCredentials\n        from prefect_azure.cosmos_db import cosmos_db_read_item\n\n        @flow\n        def example_cosmos_db_read_item_flow():\n            connection_string = \"connection_string\"\n            cosmos_db_credentials = AzureCosmosDbCredentials(connection_string)\n\n            item = \"item\"\n            partition_key = \"partition_key\"\n            container = \"container\"\n            database = \"database\"\n\n            result = cosmos_db_read_item(\n                item,\n                partition_key,\n                container,\n                database,\n                cosmos_db_credentials\n            )\n            return result\n\n        example_cosmos_db_read_item_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\n        \"Reading item %s with partition_key %s from container %s in %s database\",\n        item,\n        partition_key,\n        container,\n        database,\n    )\n\n    container_client = cosmos_db_credentials.get_container_client(container, database)\n    read_item = partial(container_client.read_item, item, partition_key, **kwargs)\n    result = await to_thread.run_sync(read_item)\n    return result\n
        "},{"location":"integrations/prefect-azure/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials","title":"prefect_azure.credentials","text":"

        Credential classes used to perform authenticated interactions with Azure

        "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureBlobStorageCredentials","title":"AzureBlobStorageCredentials","text":"

        Bases: Block

        Stores credentials for authenticating with Azure Blob Storage.

        Parameters:

        Name Type Description Default account_url

        The URL for your Azure storage account. If provided, the account URL will be used to authenticate with the discovered default Azure credentials.

        required connection_string

        The connection string to your Azure storage account. If provided, the connection string will take precedence over the account URL.

        required Example

        Load stored Azure Blob Storage credentials and retrieve a blob service client:

        from prefect_azure import AzureBlobStorageCredentials\n\nazure_credentials_block = AzureBlobStorageCredentials.load(\"BLOCK_NAME\")\n\nblob_service_client = azure_credentials_block.get_blob_client()\n

        Source code in prefect_azure/credentials.py
        class AzureBlobStorageCredentials(Block):\n    \"\"\"\n    Stores credentials for authenticating with Azure Blob Storage.\n\n    Args:\n        account_url: The URL for your Azure storage account. If provided, the account\n            URL will be used to authenticate with the discovered default Azure\n            credentials.\n        connection_string: The connection string to your Azure storage account. If\n            provided, the connection string will take precedence over the account URL.\n\n    Example:\n        Load stored Azure Blob Storage credentials and retrieve a blob service client:\n        ```python\n        from prefect_azure import AzureBlobStorageCredentials\n\n        azure_credentials_block = AzureBlobStorageCredentials.load(\"BLOCK_NAME\")\n\n        blob_service_client = azure_credentials_block.get_blob_client()\n        ```\n    \"\"\"\n\n    _block_type_name = \"Azure Blob Storage Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/54e3fa7e00197a4fbd1d82ed62494cb58d08c96a-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-azure/credentials/#prefect_azure.credentials.AzureBlobStorageCredentials\"  # noqa\n\n    connection_string: Optional[SecretStr] = Field(\n        default=None,\n        description=(\n            \"The connection string to your Azure storage account. If provided, the \"\n            \"connection string will take precedence over the account URL.\"\n        ),\n    )\n    account_url: Optional[str] = Field(\n        default=None,\n        title=\"Account URL\",\n        description=(\n            \"The URL for your Azure storage account. If provided, the account \"\n            \"URL will be used to authenticate with the discovered default \"\n            \"Azure credentials.\"\n        ),\n    )\n\n    @root_validator\n    def check_connection_string_or_account_url(\n        cls, values: Dict[str, Any]\n    ) -> Dict[str, Any]:\n        \"\"\"\n        Checks that either a connection string or account URL is provided, not both.\n        \"\"\"\n        has_account_url = values.get(\"account_url\") is not None\n        has_conn_str = values.get(\"connection_string\") is not None\n        if not has_account_url and not has_conn_str:\n            raise ValueError(\n                \"Must provide either a connection string or an account URL.\"\n            )\n        if has_account_url and has_conn_str:\n            raise ValueError(\n                \"Must provide either a connection string or account URL, but not both.\"\n            )\n        return values\n\n    @_raise_help_msg(\"blob_storage\")\n    def get_client(self) -> \"BlobServiceClient\":\n        \"\"\"\n        Returns an authenticated base Blob Service client that can be used to create\n        other clients for Azure services.\n\n        Example:\n            Create an authorized Blob Service session\n            ```python\n            import os\n            import asyncio\n            from prefect import flow\n            from prefect_azure import AzureBlobStorageCredentials\n\n            @flow\n            async def example_get_client_flow():\n                connection_string = os.getenv(\"AZURE_STORAGE_CONNECTION_STRING\")\n                azure_credentials = AzureBlobStorageCredentials(\n                    connection_string=connection_string,\n                )\n                async with azure_credentials.get_client() as blob_service_client:\n                    # run other code here\n                    
pass\n\n            asyncio.run(example_get_client_flow())\n            ```\n        \"\"\"\n        if self.connection_string is None:\n            return BlobServiceClient(\n                account_url=self.account_url,\n                credential=ADefaultAzureCredential(),\n            )\n\n        return BlobServiceClient.from_connection_string(\n            self.connection_string.get_secret_value()\n        )\n\n    @_raise_help_msg(\"blob_storage\")\n    def get_blob_client(self, container, blob) -> \"BlobClient\":\n        \"\"\"\n        Returns an authenticated Blob client that can be used to\n        download and upload blobs.\n\n        Args:\n            container: Name of the Blob Storage container to retrieve from.\n            blob: Name of the blob within this container to retrieve.\n\n        Example:\n            Create an authorized Blob session\n            ```python\n            import os\n            import asyncio\n            from prefect import flow\n            from prefect_azure import AzureBlobStorageCredentials\n\n            @flow\n            async def example_get_blob_client_flow():\n                connection_string = os.getenv(\"AZURE_STORAGE_CONNECTION_STRING\")\n                azure_credentials = AzureBlobStorageCredentials(\n                    connection_string=connection_string,\n                )\n                async with azure_credentials.get_blob_client(\n                    \"container\", \"blob\"\n                ) as blob_client:\n                    # run other code here\n                    pass\n\n            asyncio.run(example_get_blob_client_flow())\n            ```\n        \"\"\"\n        if self.connection_string is None:\n            return BlobClient(\n                account_url=self.account_url,\n                container_name=container,\n                credential=ADefaultAzureCredential(),\n                blob_name=blob,\n            )\n\n        blob_client = BlobClient.from_connection_string(\n            self.connection_string.get_secret_value(), container, blob\n        )\n        return blob_client\n\n    @_raise_help_msg(\"blob_storage\")\n    def get_container_client(self, container) -> \"ContainerClient\":\n        \"\"\"\n        Returns an authenticated Container client that can be used to create clients\n        for Azure services.\n\n        Args:\n            container: Name of the Blob Storage container to retrieve from.\n\n        Example:\n            Create an authorized Container session\n            ```python\n            import os\n            import asyncio\n            from prefect import flow\n            from prefect_azure import AzureBlobStorageCredentials\n\n            @flow\n            async def example_get_container_client_flow():\n                connection_string = os.getenv(\"AZURE_STORAGE_CONNECTION_STRING\")\n                azure_credentials = AzureBlobStorageCredentials(\n                    connection_string=connection_string,\n                )\n                async with azure_credentials.get_container_client(\n                    \"container\"\n                ) as container_client:\n                    # run other code here\n                    pass\n\n            asyncio.run(example_get_container_client_flow())\n            ```\n        \"\"\"\n        if self.connection_string is None:\n            return ContainerClient(\n                account_url=self.account_url,\n                container_name=container,\n                credential=ADefaultAzureCredential(),\n            )\n\n  
      container_client = ContainerClient.from_connection_string(\n            self.connection_string.get_secret_value(), container\n        )\n        return container_client\n
        "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureBlobStorageCredentials.check_connection_string_or_account_url","title":"check_connection_string_or_account_url","text":"

        Checks that either a connection string or account URL is provided, not both.
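
        Example

        A minimal sketch of the validation behavior (values are placeholders): exactly one of connection_string or account_url may be set, so passing both (or neither) raises a ValueError.

        from prefect_azure import AzureBlobStorageCredentials\n\n# Valid: only the account URL is provided\nAzureBlobStorageCredentials(account_url=\"https://myaccount.blob.core.windows.net\")\n\n# Invalid: both fields provided, so the validator raises a ValueError\ntry:\n    AzureBlobStorageCredentials(\n        connection_string=\"UseDevelopmentStorage=true\",\n        account_url=\"https://myaccount.blob.core.windows.net\",\n    )\nexcept ValueError as error:\n    print(error)\n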

        Source code in prefect_azure/credentials.py
        @root_validator\ndef check_connection_string_or_account_url(\n    cls, values: Dict[str, Any]\n) -> Dict[str, Any]:\n    \"\"\"\n    Checks that either a connection string or account URL is provided, not both.\n    \"\"\"\n    has_account_url = values.get(\"account_url\") is not None\n    has_conn_str = values.get(\"connection_string\") is not None\n    if not has_account_url and not has_conn_str:\n        raise ValueError(\n            \"Must provide either a connection string or an account URL.\"\n        )\n    if has_account_url and has_conn_str:\n        raise ValueError(\n            \"Must provide either a connection string or account URL, but not both.\"\n        )\n    return values\n
        "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureBlobStorageCredentials.get_blob_client","title":"get_blob_client","text":"

        Returns an authenticated Blob client that can be used to download and upload blobs.

        Parameters:

        Name Type Description Default container

        Name of the Blob Storage container to retrieve from.

        required blob

        Name of the blob within this container to retrieve.

        required Example

        Create an authorized Blob session

        import os\nimport asyncio\nfrom prefect import flow\nfrom prefect_azure import AzureBlobStorageCredentials\n\n@flow\nasync def example_get_blob_client_flow():\n    connection_string = os.getenv(\"AZURE_STORAGE_CONNECTION_STRING\")\n    azure_credentials = AzureBlobStorageCredentials(\n        connection_string=connection_string,\n    )\n    async with azure_credentials.get_blob_client(\n        \"container\", \"blob\"\n    ) as blob_client:\n        # run other code here\n        pass\n\nasyncio.run(example_get_blob_client_flow())\n

        Source code in prefect_azure/credentials.py
        @_raise_help_msg(\"blob_storage\")\ndef get_blob_client(self, container, blob) -> \"BlobClient\":\n    \"\"\"\n    Returns an authenticated Blob client that can be used to\n    download and upload blobs.\n\n    Args:\n        container: Name of the Blob Storage container to retrieve from.\n        blob: Name of the blob within this container to retrieve.\n\n    Example:\n        Create an authorized Blob session\n        ```python\n        import os\n        import asyncio\n        from prefect import flow\n        from prefect_azure import AzureBlobStorageCredentials\n\n        @flow\n        async def example_get_blob_client_flow():\n            connection_string = os.getenv(\"AZURE_STORAGE_CONNECTION_STRING\")\n            azure_credentials = AzureBlobStorageCredentials(\n                connection_string=connection_string,\n            )\n            async with azure_credentials.get_blob_client(\n                \"container\", \"blob\"\n            ) as blob_client:\n                # run other code here\n                pass\n\n        asyncio.run(example_get_blob_client_flow())\n        ```\n    \"\"\"\n    if self.connection_string is None:\n        return BlobClient(\n            account_url=self.account_url,\n            container_name=container,\n            credential=ADefaultAzureCredential(),\n            blob_name=blob,\n        )\n\n    blob_client = BlobClient.from_connection_string(\n        self.connection_string.get_secret_value(), container, blob\n    )\n    return blob_client\n
        "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureBlobStorageCredentials.get_client","title":"get_client","text":"

        Returns an authenticated base Blob Service client that can be used to create other clients for Azure services.

        Example

        Create an authorized Blob Service session

        import os\nimport asyncio\nfrom prefect import flow\nfrom prefect_azure import AzureBlobStorageCredentials\n\n@flow\nasync def example_get_client_flow():\n    connection_string = os.getenv(\"AZURE_STORAGE_CONNECTION_STRING\")\n    azure_credentials = AzureBlobStorageCredentials(\n        connection_string=connection_string,\n    )\n    async with azure_credentials.get_client() as blob_service_client:\n        # run other code here\n        pass\n\nasyncio.run(example_get_client_flow())\n

        Source code in prefect_azure/credentials.py
        @_raise_help_msg(\"blob_storage\")\ndef get_client(self) -> \"BlobServiceClient\":\n    \"\"\"\n    Returns an authenticated base Blob Service client that can be used to create\n    other clients for Azure services.\n\n    Example:\n        Create an authorized Blob Service session\n        ```python\n        import os\n        import asyncio\n        from prefect import flow\n        from prefect_azure import AzureBlobStorageCredentials\n\n        @flow\n        async def example_get_client_flow():\n            connection_string = os.getenv(\"AZURE_STORAGE_CONNECTION_STRING\")\n            azure_credentials = AzureBlobStorageCredentials(\n                connection_string=connection_string,\n            )\n            async with azure_credentials.get_client() as blob_service_client:\n                # run other code here\n                pass\n\n        asyncio.run(example_get_client_flow())\n        ```\n    \"\"\"\n    if self.connection_string is None:\n        return BlobServiceClient(\n            account_url=self.account_url,\n            credential=ADefaultAzureCredential(),\n        )\n\n    return BlobServiceClient.from_connection_string(\n        self.connection_string.get_secret_value()\n    )\n
        "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureBlobStorageCredentials.get_container_client","title":"get_container_client","text":"

        Returns an authenticated Container client that can be used to create clients for Azure services.

        Parameters:

        Name Type Description Default container

        Name of the Blob Storage container to retrieve from.

        required Example

        Create an authorized Container session

        import os\nimport asyncio\nfrom prefect import flow\nfrom prefect_azure import AzureBlobStorageCredentials\n\n@flow\nasync def example_get_container_client_flow():\n    connection_string = os.getenv(\"AZURE_STORAGE_CONNECTION_STRING\")\n    azure_credentials = AzureBlobStorageCredentials(\n        connection_string=connection_string,\n    )\n    async with azure_credentials.get_container_client(\n        \"container\"\n    ) as container_client:\n        # run other code here\n        pass\n\nasyncio.run(example_get_container_client_flow())\n

        Source code in prefect_azure/credentials.py
        @_raise_help_msg(\"blob_storage\")\ndef get_container_client(self, container) -> \"ContainerClient\":\n    \"\"\"\n    Returns an authenticated Container client that can be used to create clients\n    for Azure services.\n\n    Args:\n        container: Name of the Blob Storage container to retrieve from.\n\n    Example:\n        Create an authorized Container session\n        ```python\n        import os\n        import asyncio\n        from prefect import flow\n        from prefect_azure import AzureBlobStorageCredentials\n\n        @flow\n        async def example_get_container_client_flow():\n            connection_string = os.getenv(\"AZURE_STORAGE_CONNECTION_STRING\")\n            azure_credentials = AzureBlobStorageCredentials(\n                connection_string=connection_string,\n            )\n            async with azure_credentials.get_container_client(\n                \"container\"\n            ) as container_client:\n                # run other code here\n                pass\n\n        asyncio.run(example_get_container_client_flow())\n        ```\n    \"\"\"\n    if self.connection_string is None:\n        return ContainerClient(\n            account_url=self.account_url,\n            container_name=container,\n            credential=ADefaultAzureCredential(),\n        )\n\n    container_client = ContainerClient.from_connection_string(\n        self.connection_string.get_secret_value(), container\n    )\n    return container_client\n
        "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureContainerInstanceCredentials","title":"AzureContainerInstanceCredentials","text":"

        Bases: Block

        Block used to manage Azure Container Instances authentication. Stores Azure Service Principal authentication data.
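
        Example

        Load stored Azure Container Instance credentials (a minimal sketch; \"BLOCK_NAME\" is a placeholder for a previously saved block):

        from prefect_azure.credentials import AzureContainerInstanceCredentials\n\naci_credentials_block = AzureContainerInstanceCredentials.load(\"BLOCK_NAME\")\n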

        Source code in prefect_azure/credentials.py
        class AzureContainerInstanceCredentials(Block):\n    \"\"\"\n    Block used to manage Azure Container Instances authentication. Stores Azure Service\n    Principal authentication data.\n    \"\"\"\n\n    _block_type_name = \"Azure Container Instance Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/54e3fa7e00197a4fbd1d82ed62494cb58d08c96a-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-azure/credentials/#prefect_azure.credentials.AzureContainerInstanceCredentials\"  # noqa\n\n    client_id: Optional[str] = Field(\n        default=None,\n        title=\"Client ID\",\n        description=(\n            \"The service principal client ID. \"\n            \"If none of client_id, tenant_id, and client_secret are provided, \"\n            \"will use DefaultAzureCredential; else will need to provide all three to \"\n            \"use ClientSecretCredential.\"\n        ),\n    )\n    tenant_id: Optional[str] = Field(\n        default=None,\n        title=\"Tenant ID\",\n        description=(\n            \"The service principal tenant ID.\"\n            \"If none of client_id, tenant_id, and client_secret are provided, \"\n            \"will use DefaultAzureCredential; else will need to provide all three to \"\n            \"use ClientSecretCredential.\"\n        ),\n    )\n    client_secret: Optional[SecretStr] = Field(\n        default=None,\n        description=(\n            \"The service principal client secret.\"\n            \"If none of client_id, tenant_id, and client_secret are provided, \"\n            \"will use DefaultAzureCredential; else will need to provide all three to \"\n            \"use ClientSecretCredential.\"\n        ),\n    )\n    credential_kwargs: Dict[str, Any] = Field(\n        default_factory=dict,\n        title=\"Additional Credential Keyword Arguments\",\n        description=(\n            \"Additional keyword arguments to pass to \"\n            \"`ClientSecretCredential` or `DefaultAzureCredential`.\"\n        ),\n    )\n\n    @root_validator\n    def validate_credential_kwargs(cls, values):\n        \"\"\"\n        Validates that if any of `client_id`, `tenant_id`, or `client_secret` are\n        provided, all must be provided.\n        \"\"\"\n        auth_args = (\"client_id\", \"tenant_id\", \"client_secret\")\n        has_any = any(values.get(key) is not None for key in auth_args)\n        has_all = all(values.get(key) is not None for key in auth_args)\n        if has_any and not has_all:\n            raise ValueError(\n                \"If any of `client_id`, `tenant_id`, or `client_secret` are provided, \"\n                \"all must be provided.\"\n            )\n        return values\n\n    def get_container_client(self, subscription_id: str):\n        \"\"\"\n        Creates an Azure Container Instances client initialized with data from\n        this block's fields and a provided Azure subscription ID.\n\n        Args:\n            subscription_id: A valid Azure subscription ID.\n\n        Returns:\n            An initialized `ContainerInstanceManagementClient`\n        \"\"\"\n\n        return ContainerInstanceManagementClient(\n            credential=self._create_credential(),\n            subscription_id=subscription_id,\n        )\n\n    def get_resource_client(self, subscription_id: str):\n        \"\"\"\n        Creates an Azure resource management client initialized with data from\n        this block's fields and a provided Azure subscription ID.\n\n        Args:\n      
      subscription_id: A valid Azure subscription ID.\n\n        Returns:\n            An initialized `ResourceManagementClient`\n        \"\"\"\n\n        return ResourceManagementClient(\n            credential=self._create_credential(),\n            subscription_id=subscription_id,\n        )\n\n    def _create_credential(self):\n        \"\"\"\n        Creates an Azure credential initialized with data from this block's fields.\n\n        Returns:\n            An initialized Azure `TokenCredential` ready to use with Azure SDK client\n            classes.\n        \"\"\"\n        auth_args = (self.client_id, self.tenant_id, self.client_secret)\n        if auth_args == (None, None, None):\n            return DefaultAzureCredential(**self.credential_kwargs)\n\n        return ClientSecretCredential(\n            tenant_id=self.tenant_id,\n            client_id=self.client_id,\n            client_secret=self.client_secret.get_secret_value(),\n            **self.credential_kwargs,\n        )\n
        "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureContainerInstanceCredentials.get_container_client","title":"get_container_client","text":"

        Creates an Azure Container Instances client initialized with data from this block's fields and a provided Azure subscription ID.

        Parameters:

        Name Type Description Default subscription_id str

        A valid Azure subscription ID.

        required

        Returns:

        Type Description

        An initialized ContainerInstanceManagementClient
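
        Example

        A minimal usage sketch (the subscription ID is a placeholder; with no service principal fields set, the block falls back to DefaultAzureCredential):

        from prefect_azure.credentials import AzureContainerInstanceCredentials\n\naci_credentials = AzureContainerInstanceCredentials()\n# Returns an initialized ContainerInstanceManagementClient for the subscription\naci_client = aci_credentials.get_container_client(\"00000000-0000-0000-0000-000000000000\")\n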

        Source code in prefect_azure/credentials.py
        def get_container_client(self, subscription_id: str):\n    \"\"\"\n    Creates an Azure Container Instances client initialized with data from\n    this block's fields and a provided Azure subscription ID.\n\n    Args:\n        subscription_id: A valid Azure subscription ID.\n\n    Returns:\n        An initialized `ContainerInstanceManagementClient`\n    \"\"\"\n\n    return ContainerInstanceManagementClient(\n        credential=self._create_credential(),\n        subscription_id=subscription_id,\n    )\n
        "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureContainerInstanceCredentials.get_resource_client","title":"get_resource_client","text":"

        Creates an Azure resource management client initialized with data from this block's fields and a provided Azure subscription ID.

        Parameters:

        Name Type Description Default subscription_id str

        A valid Azure subscription ID.

        required

        Returns:

        Type Description

        An initialized ResourceManagementClient
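
        Example

        A minimal usage sketch (the subscription ID is a placeholder; with no service principal fields set, the block falls back to DefaultAzureCredential):

        from prefect_azure.credentials import AzureContainerInstanceCredentials\n\naci_credentials = AzureContainerInstanceCredentials()\n# Returns an initialized ResourceManagementClient for the subscription\nresource_client = aci_credentials.get_resource_client(\"00000000-0000-0000-0000-000000000000\")\n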

        Source code in prefect_azure/credentials.py
        def get_resource_client(self, subscription_id: str):\n    \"\"\"\n    Creates an Azure resource management client initialized with data from\n    this block's fields and a provided Azure subscription ID.\n\n    Args:\n        subscription_id: A valid Azure subscription ID.\n\n    Returns:\n        An initialized `ResourceManagementClient`\n    \"\"\"\n\n    return ResourceManagementClient(\n        credential=self._create_credential(),\n        subscription_id=subscription_id,\n    )\n
        "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureContainerInstanceCredentials.validate_credential_kwargs","title":"validate_credential_kwargs","text":"

        Validates that if any of client_id, tenant_id, or client_secret are provided, all must be provided.
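
        Example

        A minimal sketch of the all-or-nothing check (the client ID is a placeholder): supplying only part of the service principal trio raises a ValueError.

        from prefect_azure.credentials import AzureContainerInstanceCredentials\n\ntry:\n    # client_id without tenant_id and client_secret fails validation\n    AzureContainerInstanceCredentials(client_id=\"00000000-0000-0000-0000-000000000000\")\nexcept ValueError as error:\n    print(error)\n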

        Source code in prefect_azure/credentials.py
        @root_validator\ndef validate_credential_kwargs(cls, values):\n    \"\"\"\n    Validates that if any of `client_id`, `tenant_id`, or `client_secret` are\n    provided, all must be provided.\n    \"\"\"\n    auth_args = (\"client_id\", \"tenant_id\", \"client_secret\")\n    has_any = any(values.get(key) is not None for key in auth_args)\n    has_all = all(values.get(key) is not None for key in auth_args)\n    if has_any and not has_all:\n        raise ValueError(\n            \"If any of `client_id`, `tenant_id`, or `client_secret` are provided, \"\n            \"all must be provided.\"\n        )\n    return values\n
        "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureCosmosDbCredentials","title":"AzureCosmosDbCredentials","text":"

        Bases: Block

        Block used to manage Cosmos DB authentication with Azure. Azure authentication is handled via the azure module through a connection string.

        Parameters:

        Name Type Description Default connection_string

        Includes the authorization information required.

        required Example

        Load stored Azure Cosmos DB credentials:

        from prefect_azure import AzureCosmosDbCredentials\nazure_credentials_block = AzureCosmosDbCredentials.load(\"BLOCK_NAME\")\n

        Source code in prefect_azure/credentials.py
        class AzureCosmosDbCredentials(Block):\n    \"\"\"\n    Block used to manage Cosmos DB authentication with Azure.\n    Azure authentication is handled via the `azure` module through\n    a connection string.\n\n    Args:\n        connection_string: Includes the authorization information required.\n\n    Example:\n        Load stored Azure Cosmos DB credentials:\n        ```python\n        from prefect_azure import AzureCosmosDbCredentials\n        azure_credentials_block = AzureCosmosDbCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Azure Cosmos DB Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/54e3fa7e00197a4fbd1d82ed62494cb58d08c96a-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-azure/credentials/#prefect_azure.credentials.AzureCosmosDbCredentials\"  # noqa\n\n    connection_string: SecretStr = Field(\n        default=..., description=\"Includes the authorization information required.\"\n    )\n\n    @_raise_help_msg(\"cosmos_db\")\n    def get_client(self) -> \"CosmosClient\":\n        \"\"\"\n        Returns an authenticated Cosmos client that can be used to create\n        other clients for Azure services.\n\n        Example:\n            Create an authorized Cosmos session\n            ```python\n            import os\n            from prefect import flow\n            from prefect_azure import AzureCosmosDbCredentials\n\n            @flow\n            def example_get_client_flow():\n                connection_string = os.getenv(\"AZURE_COSMOS_CONNECTION_STRING\")\n                azure_credentials = AzureCosmosDbCredentials(\n                    connection_string=connection_string,\n                )\n                cosmos_client = azure_credentials.get_client()\n                return cosmos_client\n\n            example_get_client_flow()\n            ```\n        \"\"\"\n        return CosmosClient.from_connection_string(\n            self.connection_string.get_secret_value()\n        )\n\n    def get_database_client(self, database: str) -> \"DatabaseProxy\":\n        \"\"\"\n        Returns an authenticated Database client.\n\n        Args:\n            database: Name of the database.\n\n        Example:\n            Create an authorized Cosmos session\n            ```python\n            import os\n            from prefect import flow\n            from prefect_azure import AzureCosmosDbCredentials\n\n            @flow\n            def example_get_client_flow():\n                connection_string = os.getenv(\"AZURE_COSMOS_CONNECTION_STRING\")\n                azure_credentials = AzureCosmosDbCredentials(\n                    connection_string=connection_string,\n                )\n                cosmos_client = azure_credentials.get_database_client()\n                return cosmos_client\n\n            example_get_database_client_flow()\n            ```\n        \"\"\"\n        cosmos_client = self.get_client()\n        database_client = cosmos_client.get_database_client(database=database)\n        return database_client\n\n    def get_container_client(self, container: str, database: str) -> \"ContainerProxy\":\n        \"\"\"\n        Returns an authenticated Container client used for querying.\n\n        Args:\n            container: Name of the Cosmos DB container to retrieve from.\n            database: Name of the Cosmos DB database.\n\n        Example:\n            Create an authorized Container session\n            ```python\n            import os\n      
      from prefect import flow\n            from prefect_azure import AzureBlobStorageCredentials\n\n            @flow\n            def example_get_container_client_flow():\n                connection_string = os.getenv(\"AZURE_COSMOS_CONNECTION_STRING\")\n                azure_credentials = AzureCosmosDbCredentials(\n                    connection_string=connection_string,\n                )\n                container_client = azure_credentials.get_container_client(container)\n                return container_client\n\n            example_get_container_client_flow()\n            ```\n        \"\"\"\n        database_client = self.get_database_client(database)\n        container_client = database_client.get_container_client(container=container)\n        return container_client\n
        "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureCosmosDbCredentials.get_client","title":"get_client","text":"

        Returns an authenticated Cosmos client that can be used to create other clients for Azure services.

        Example

        Create an authorized Cosmos session

        import os\nfrom prefect import flow\nfrom prefect_azure import AzureCosmosDbCredentials\n\n@flow\ndef example_get_client_flow():\n    connection_string = os.getenv(\"AZURE_COSMOS_CONNECTION_STRING\")\n    azure_credentials = AzureCosmosDbCredentials(\n        connection_string=connection_string,\n    )\n    cosmos_client = azure_credentials.get_client()\n    return cosmos_client\n\nexample_get_client_flow()\n

        Source code in prefect_azure/credentials.py
        @_raise_help_msg(\"cosmos_db\")\ndef get_client(self) -> \"CosmosClient\":\n    \"\"\"\n    Returns an authenticated Cosmos client that can be used to create\n    other clients for Azure services.\n\n    Example:\n        Create an authorized Cosmos session\n        ```python\n        import os\n        from prefect import flow\n        from prefect_azure import AzureCosmosDbCredentials\n\n        @flow\n        def example_get_client_flow():\n            connection_string = os.getenv(\"AZURE_COSMOS_CONNECTION_STRING\")\n            azure_credentials = AzureCosmosDbCredentials(\n                connection_string=connection_string,\n            )\n            cosmos_client = azure_credentials.get_client()\n            return cosmos_client\n\n        example_get_client_flow()\n        ```\n    \"\"\"\n    return CosmosClient.from_connection_string(\n        self.connection_string.get_secret_value()\n    )\n
        "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureCosmosDbCredentials.get_container_client","title":"get_container_client","text":"

        Returns an authenticated Container client used for querying.

        Parameters:

        Name Type Description Default container str

        Name of the Cosmos DB container to retrieve from.

        required database str

        Name of the Cosmos DB database.

        required Example

        Create an authorized Container session

        import os\nfrom prefect import flow\nfrom prefect_azure import AzureCosmosDbCredentials\n\n@flow\ndef example_get_container_client_flow():\n    connection_string = os.getenv(\"AZURE_COSMOS_CONNECTION_STRING\")\n    azure_credentials = AzureCosmosDbCredentials(\n        connection_string=connection_string,\n    )\n    container_client = azure_credentials.get_container_client(\"container\", \"database\")\n    return container_client\n\nexample_get_container_client_flow()\n

        Source code in prefect_azure/credentials.py
        def get_container_client(self, container: str, database: str) -> \"ContainerProxy\":\n    \"\"\"\n    Returns an authenticated Container client used for querying.\n\n    Args:\n        container: Name of the Cosmos DB container to retrieve from.\n        database: Name of the Cosmos DB database.\n\n    Example:\n        Create an authorized Container session\n        ```python\n        import os\n        from prefect import flow\n        from prefect_azure import AzureBlobStorageCredentials\n\n        @flow\n        def example_get_container_client_flow():\n            connection_string = os.getenv(\"AZURE_COSMOS_CONNECTION_STRING\")\n            azure_credentials = AzureCosmosDbCredentials(\n                connection_string=connection_string,\n            )\n            container_client = azure_credentials.get_container_client(container)\n            return container_client\n\n        example_get_container_client_flow()\n        ```\n    \"\"\"\n    database_client = self.get_database_client(database)\n    container_client = database_client.get_container_client(container=container)\n    return container_client\n
        "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureCosmosDbCredentials.get_database_client","title":"get_database_client","text":"

        Returns an authenticated Database client.

        Parameters:

        Name Type Description Default database str

        Name of the database.

        required Example

        Create an authorized Cosmos session

        import os\nfrom prefect import flow\nfrom prefect_azure import AzureCosmosDbCredentials\n\n@flow\ndef example_get_database_client_flow():\n    connection_string = os.getenv(\"AZURE_COSMOS_CONNECTION_STRING\")\n    azure_credentials = AzureCosmosDbCredentials(\n        connection_string=connection_string,\n    )\n    database_client = azure_credentials.get_database_client(\"database\")\n    return database_client\n\nexample_get_database_client_flow()\n

        Source code in prefect_azure/credentials.py
        def get_database_client(self, database: str) -> \"DatabaseProxy\":\n    \"\"\"\n    Returns an authenticated Database client.\n\n    Args:\n        database: Name of the database.\n\n    Example:\n        Create an authorized Cosmos session\n        ```python\n        import os\n        from prefect import flow\n        from prefect_azure import AzureCosmosDbCredentials\n\n        @flow\n        def example_get_client_flow():\n            connection_string = os.getenv(\"AZURE_COSMOS_CONNECTION_STRING\")\n            azure_credentials = AzureCosmosDbCredentials(\n                connection_string=connection_string,\n            )\n            cosmos_client = azure_credentials.get_database_client()\n            return cosmos_client\n\n        example_get_database_client_flow()\n        ```\n    \"\"\"\n    cosmos_client = self.get_client()\n    database_client = cosmos_client.get_database_client(database=database)\n    return database_client\n
        "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureMlCredentials","title":"AzureMlCredentials","text":"

        Bases: Block

        Block used to manage authentication with AzureML. Azure authentication is handled via the azure module.

        Parameters:

        Name Type Description Default tenant_id

        The active directory tenant that the service identity belongs to.

        required service_principal_id

        The service principal ID.

        required service_principal_password

        The service principal password/key.

        required subscription_id

        The Azure subscription ID containing the workspace.

        required resource_group

        The resource group containing the workspace.

        required workspace_name

        The existing workspace name.

        required Example

        Load stored AzureML credentials:

        from prefect_azure import AzureMlCredentials\nazure_ml_credentials_block = AzureMlCredentials.load(\"BLOCK_NAME\")\n

        Source code in prefect_azure/credentials.py
        class AzureMlCredentials(Block):\n    \"\"\"\n    Block used to manage authentication with AzureML. Azure authentication is\n    handled via the `azure` module.\n\n    Args:\n        tenant_id: The active directory tenant that the service identity belongs to.\n        service_principal_id: The service principal ID.\n        service_principal_password: The service principal password/key.\n        subscription_id: The Azure subscription ID containing the workspace.\n        resource_group: The resource group containing the workspace.\n        workspace_name: The existing workspace name.\n\n    Example:\n        Load stored AzureML credentials:\n        ```python\n        from prefect_azure import AzureMlCredentials\n        azure_ml_credentials_block = AzureMlCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"AzureML Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/54e3fa7e00197a4fbd1d82ed62494cb58d08c96a-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-azure/credentials/#prefect_azure.credentials.AzureMlCredentials\"  # noqa\n\n    tenant_id: str = Field(\n        default=...,\n        description=\"The active directory tenant that the service identity belongs to.\",\n    )\n    service_principal_id: str = Field(\n        default=..., description=\"The service principal ID.\"\n    )\n    service_principal_password: SecretStr = Field(\n        default=..., description=\"The service principal password/key.\"\n    )\n    subscription_id: str = Field(\n        default=...,\n        description=\"The Azure subscription ID containing the workspace, in format: '00000000-0000-0000-0000-000000000000'.\",  # noqa\n    )\n    resource_group: str = Field(\n        default=..., description=\"The resource group containing the workspace.\"\n    )\n    workspace_name: str = Field(default=..., description=\"The existing workspace name.\")\n\n    @_raise_help_msg(\"ml_datastore\")\n    def get_workspace(self) -> \"Workspace\":\n        \"\"\"\n        Returns an authenticated base Workspace that can be used in\n        Azure's Datasets and Datastores.\n\n        Example:\n            Create an authorized workspace\n            ```python\n            import os\n            from prefect import flow\n            from prefect_azure import AzureMlCredentials\n            @flow\n            def example_get_workspace_flow():\n                azure_credentials = AzureMlCredentials(\n                    tenant_id=\"tenant_id\",\n                    service_principal_id=\"service_principal_id\",\n                    service_principal_password=\"service_principal_password\",\n                    subscription_id=\"subscription_id\",\n                    resource_group=\"resource_group\",\n                    workspace_name=\"workspace_name\"\n                )\n                workspace_client = azure_credentials.get_workspace()\n                return workspace_client\n            example_get_workspace_flow()\n            ```\n        \"\"\"\n        service_principal_password = self.service_principal_password.get_secret_value()\n        service_principal_authentication = ServicePrincipalAuthentication(\n            tenant_id=self.tenant_id,\n            service_principal_id=self.service_principal_id,\n            service_principal_password=service_principal_password,\n        )\n\n        workspace = Workspace(\n            subscription_id=self.subscription_id,\n            
resource_group=self.resource_group,\n            workspace_name=self.workspace_name,\n            auth=service_principal_authentication,\n        )\n\n        return workspace\n
        "},{"location":"integrations/prefect-azure/credentials/#prefect_azure.credentials.AzureMlCredentials.get_workspace","title":"get_workspace","text":"

        Returns an authenticated base Workspace that can be used in Azure's Datasets and Datastores.

        Example

        Create an authorized workspace

        import os\nfrom prefect import flow\nfrom prefect_azure import AzureMlCredentials\n@flow\ndef example_get_workspace_flow():\n    azure_credentials = AzureMlCredentials(\n        tenant_id=\"tenant_id\",\n        service_principal_id=\"service_principal_id\",\n        service_principal_password=\"service_principal_password\",\n        subscription_id=\"subscription_id\",\n        resource_group=\"resource_group\",\n        workspace_name=\"workspace_name\"\n    )\n    workspace_client = azure_credentials.get_workspace()\n    return workspace_client\nexample_get_workspace_flow()\n

        Source code in prefect_azure/credentials.py
        @_raise_help_msg(\"ml_datastore\")\ndef get_workspace(self) -> \"Workspace\":\n    \"\"\"\n    Returns an authenticated base Workspace that can be used in\n    Azure's Datasets and Datastores.\n\n    Example:\n        Create an authorized workspace\n        ```python\n        import os\n        from prefect import flow\n        from prefect_azure import AzureMlCredentials\n        @flow\n        def example_get_workspace_flow():\n            azure_credentials = AzureMlCredentials(\n                tenant_id=\"tenant_id\",\n                service_principal_id=\"service_principal_id\",\n                service_principal_password=\"service_principal_password\",\n                subscription_id=\"subscription_id\",\n                resource_group=\"resource_group\",\n                workspace_name=\"workspace_name\"\n            )\n            workspace_client = azure_credentials.get_workspace()\n            return workspace_client\n        example_get_workspace_flow()\n        ```\n    \"\"\"\n    service_principal_password = self.service_principal_password.get_secret_value()\n    service_principal_authentication = ServicePrincipalAuthentication(\n        tenant_id=self.tenant_id,\n        service_principal_id=self.service_principal_id,\n        service_principal_password=service_principal_password,\n    )\n\n    workspace = Workspace(\n        subscription_id=self.subscription_id,\n        resource_group=self.resource_group,\n        workspace_name=self.workspace_name,\n        auth=service_principal_authentication,\n    )\n\n    return workspace\n
        "},{"location":"integrations/prefect-azure/ml_datastore/","title":"ML Datastore","text":""},{"location":"integrations/prefect-azure/ml_datastore/#prefect_azure.ml_datastore","title":"prefect_azure.ml_datastore","text":"

        Tasks for interacting with Azure ML Datastore

        "},{"location":"integrations/prefect-azure/ml_datastore/#prefect_azure.ml_datastore.ml_get_datastore","title":"ml_get_datastore async","text":"

        Gets the Datastore within the Workspace.

        Parameters:

        Name Type Description Default ml_credentials AzureMlCredentials

        Credentials to use for authentication with Azure.

        required datastore_name str

        The name of the Datastore. If None, then the default Datastore of the Workspace is returned.

        None Example

        Get Datastore object

        from prefect import flow\nfrom prefect_azure import AzureMlCredentials\nfrom prefect_azure.ml_datastore import ml_get_datastore\n\n@flow\ndef example_ml_get_datastore_flow():\n    ml_credentials = AzureMlCredentials(\n        tenant_id=\"tenant_id\",\n        service_principal_id=\"service_principal_id\",\n        service_principal_password=\"service_principal_password\",\n        subscription_id=\"subscription_id\",\n        resource_group=\"resource_group\",\n        workspace_name=\"workspace_name\",\n    )\n    results = ml_get_datastore(ml_credentials, datastore_name=\"datastore_name\")\n    return results\n

        Source code in prefect_azure/ml_datastore.py
        @task\nasync def ml_get_datastore(\n    ml_credentials: \"AzureMlCredentials\", datastore_name: str = None\n) -> Datastore:\n    \"\"\"\n    Gets the Datastore within the Workspace.\n\n    Args:\n        ml_credentials: Credentials to use for authentication with Azure.\n        datastore_name: The name of the Datastore. If `None`, then the\n            default Datastore of the Workspace is returned.\n\n    Example:\n        Get Datastore object\n        ```python\n        from prefect import flow\n        from prefect_azure import AzureMlCredentials\n        from prefect_azure.ml_datastore import ml_get_datastore\n\n        @flow\n        def example_ml_get_datastore_flow():\n            ml_credentials = AzureMlCredentials(\n                tenant_id=\"tenant_id\",\n                service_principal_id=\"service_principal_id\",\n                service_principal_password=\"service_principal_password\",\n                subscription_id=\"subscription_id\",\n                resource_group=\"resource_group\",\n                workspace_name=\"workspace_name\",\n            )\n            results = ml_get_datastore(ml_credentials, datastore_name=\"datastore_name\")\n            return results\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Getting datastore %s\", datastore_name)\n\n    result = await _get_datastore(ml_credentials, datastore_name)\n    return result\n
        "},{"location":"integrations/prefect-azure/ml_datastore/#prefect_azure.ml_datastore.ml_list_datastores","title":"ml_list_datastores","text":"

        Lists the Datastores in the Workspace.

        Parameters:

        Name Type Description Default ml_credentials AzureMlCredentials

        Credentials to use for authentication with Azure.

        required Example

        List Datastore objects

        from prefect import flow\nfrom prefect_azure import AzureMlCredentials\nfrom prefect_azure.ml_datastore import ml_list_datastores\n\n@flow\ndef example_ml_list_datastores_flow():\n    ml_credentials = AzureMlCredentials(\n        tenant_id=\"tenant_id\",\n        service_principal_id=\"service_principal_id\",\n        service_principal_password=\"service_principal_password\",\n        subscription_id=\"subscription_id\",\n        resource_group=\"resource_group\",\n        workspace_name=\"workspace_name\",\n    )\n    results = ml_list_datastores(ml_credentials)\n    return results\n

        Source code in prefect_azure/ml_datastore.py
        @task\ndef ml_list_datastores(ml_credentials: \"AzureMlCredentials\") -> Dict:\n    \"\"\"\n    Lists the Datastores in the Workspace.\n\n    Args:\n        ml_credentials: Credentials to use for authentication with Azure.\n\n    Example:\n        List Datastore objects\n        ```python\n        from prefect import flow\n        from prefect_azure import AzureMlCredentials\n        from prefect_azure.ml_datastore import ml_list_datastores\n\n        @flow\n        def example_ml_list_datastores_flow():\n            ml_credentials = AzureMlCredentials(\n                tenant_id=\"tenant_id\",\n                service_principal_id=\"service_principal_id\",\n                service_principal_password=\"service_principal_password\",\n                subscription_id=\"subscription_id\",\n                resource_group=\"resource_group\",\n                workspace_name=\"workspace_name\",\n            )\n            results = ml_list_datastores(ml_credentials)\n            return results\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Listing datastores\")\n\n    workspace = ml_credentials.get_workspace()\n    results = workspace.datastores\n    return results\n
        "},{"location":"integrations/prefect-azure/ml_datastore/#prefect_azure.ml_datastore.ml_register_datastore_blob_container","title":"ml_register_datastore_blob_container async","text":"

        Registers an Azure Blob Storage container as a Datastore in an Azure ML service Workspace.

        Parameters:

        Name Type Description Default container_name str

        The name of the container.

        required ml_credentials AzureMlCredentials

        Credentials to use for authentication with Azure ML.

        required blob_storage_credentials AzureBlobStorageCredentials

        Credentials to use for authentication with Azure Blob Storage.

        required datastore_name str

        The name of the datastore. If not defined, the container name will be used.

        None create_container_if_not_exists bool

        Create a container if one does not exist with the given name.

        False overwrite bool

        Overwrite an existing datastore. If the datastore does not exist, it will be created.

        False set_as_default bool

        Set the created Datastore as the default datastore for the Workspace.

        False Example

        Upload Datastore object

        from prefect import flow\nfrom prefect_azure import AzureMlCredentials\nfrom prefect_azure.ml_datastore import ml_register_datastore_blob_container\n\n@flow\ndef example_ml_register_datastore_blob_container_flow():\n    ml_credentials = AzureMlCredentials(\n        tenant_id=\"tenant_id\",\n        service_principal_id=\"service_principal_id\",\n        service_principal_password=\"service_principal_password\",\n        subscription_id=\"subscription_id\",\n        resource_group=\"resource_group\",\n        workspace_name=\"workspace_name\",\n    )\n    blob_storage_credentials = AzureBlobStorageCredentials(\"connection_string\")\n    result = ml_register_datastore_blob_container(\n        \"container\",\n        ml_credentials,\n        blob_storage_credentials,\n        datastore_name=\"datastore_name\"\n    )\n    return result\n

        Source code in prefect_azure/ml_datastore.py
        @task\nasync def ml_register_datastore_blob_container(\n    container_name: str,\n    ml_credentials: \"AzureMlCredentials\",\n    blob_storage_credentials: \"AzureBlobStorageCredentials\",\n    datastore_name: str = None,\n    create_container_if_not_exists: bool = False,\n    overwrite: bool = False,\n    set_as_default: bool = False,\n) -> \"AzureBlobDatastore\":\n    \"\"\"\n    Registers a Azure Blob Storage container as a\n    Datastore in a Azure ML service Workspace.\n\n    Args:\n        container_name: The name of the container.\n        ml_credentials: Credentials to use for authentication with Azure ML.\n        blob_storage_credentials: Credentials to use for authentication\n            with Azure Blob Storage.\n        datastore_name: The name of the datastore. If not defined, the\n            container name will be used.\n        create_container_if_not_exists: Create a container, if one does not\n            exist with the given name.\n        overwrite: Overwrite an existing datastore. If\n            the datastore does not exist, it will be created.\n        set_as_default: Set the created Datastore as the default datastore\n            for the Workspace.\n\n    Example:\n        Upload Datastore object\n        ```python\n        from prefect import flow\n        from prefect_azure import AzureMlCredentials\n        from prefect_azure.ml_datastore import ml_register_datastore_blob_container\n\n        @flow\n        def example_ml_register_datastore_blob_container_flow():\n            ml_credentials = AzureMlCredentials(\n                tenant_id=\"tenant_id\",\n                service_principal_id=\"service_principal_id\",\n                service_principal_password=\"service_principal_password\",\n                subscription_id=\"subscription_id\",\n                resource_group=\"resource_group\",\n                workspace_name=\"workspace_name\",\n            )\n            blob_storage_credentials = AzureBlobStorageCredentials(\"connection_string\")\n            result = ml_register_datastore_blob_container(\n                \"container\",\n                ml_credentials,\n                blob_storage_credentials,\n                datastore_name=\"datastore_name\"\n            )\n            return result\n        ```\n    \"\"\"\n    logger = get_run_logger()\n\n    if datastore_name is None:\n        datastore_name = container_name\n\n    logger.info(\n        \"Registering %s container into %s datastore\", container_name, datastore_name\n    )\n\n    workspace = ml_credentials.get_workspace()\n    async with blob_storage_credentials.get_client() as blob_service_client:\n        credential = blob_service_client.credential\n        account_name = credential.account_name\n        account_key = credential.account_key\n\n    partial_register = partial(\n        Datastore.register_azure_blob_container,\n        workspace=workspace,\n        datastore_name=datastore_name,\n        container_name=container_name,\n        account_name=account_name,\n        account_key=account_key,\n        overwrite=overwrite,\n        create_if_not_exists=create_container_if_not_exists,\n    )\n    result = await to_thread.run_sync(partial_register)\n\n    if set_as_default:\n        result.set_as_default()\n\n    return result\n
        "},{"location":"integrations/prefect-azure/ml_datastore/#prefect_azure.ml_datastore.ml_upload_datastore","title":"ml_upload_datastore async","text":"

        Uploads local files to a Datastore.

        Parameters:

        Name Type Description Default path Union[str, Path, List[Union[str, Path]]]

        The path to a single file, a single directory, or a list of paths to files to be uploaded.

        required ml_credentials AzureMlCredentials

        Credentials to use for authentication with Azure.

        required target_path Union[str, Path]

        The location in the blob container to upload to. If None, then upload to root.

        None relative_root Union[str, Path]

        The root used to determine the path of the files in the blob. For example, if we upload /path/to/file.txt and define the base path to be /path, then when file.txt is uploaded to blob storage it will have the path /to/file.txt.

        None datastore_name str

        The name of the Datastore. If None, then the default Datastore of the Workspace is returned.

        None overwrite bool

        Overwrite existing file(s).

        False Example

        Upload Datastore object

        from prefect import flow\nfrom prefect_azure import AzureMlCredentials\nfrom prefect_azure.ml_datastore import ml_upload_datastore\n\n@flow\ndef example_ml_upload_datastore_flow():\n    ml_credentials = AzureMlCredentials(\n        tenant_id=\"tenant_id\",\n        service_principal_id=\"service_principal_id\",\n        service_principal_password=\"service_principal_password\",\n        subscription_id=\"subscription_id\",\n        resource_group=\"resource_group\",\n        workspace_name=\"workspace_name\",\n    )\n    result = ml_upload_datastore(\n        \"path/to/dir/or/file\",\n        ml_credentials,\n        datastore_name=\"datastore_name\"\n    )\n    return result\n
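
        To illustrate the relative_root behavior described above, here is a hedged variant of the example (the block name, paths, target folder, and datastore name are all placeholders, and the resulting blob path follows the description above):

        from prefect import flow\nfrom prefect_azure import AzureMlCredentials\nfrom prefect_azure.ml_datastore import ml_upload_datastore\n\n@flow\ndef example_relative_root_flow():\n    ml_credentials = AzureMlCredentials.load(\"BLOCK_NAME\")  # hypothetical saved block\n    result = ml_upload_datastore(\n        [\"/path/to/file.txt\"],\n        ml_credentials,\n        target_path=\"uploads\",\n        relative_root=\"/path\",  # per the description above, file.txt should land at uploads/to/file.txt\n        datastore_name=\"datastore_name\",\n    )\n    return result\n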

        Source code in prefect_azure/ml_datastore.py
        @task\nasync def ml_upload_datastore(\n    path: Union[str, Path, List[Union[str, Path]]],\n    ml_credentials: \"AzureMlCredentials\",\n    target_path: Union[str, Path] = None,\n    relative_root: Union[str, Path] = None,\n    datastore_name: str = None,\n    overwrite: bool = False,\n) -> \"DataReference\":\n    \"\"\"\n    Uploads local files to a Datastore.\n\n    Args:\n        path: The path to a single file, single directory,\n            or a list of path to files to be uploaded.\n        ml_credentials: Credentials to use for authentication with Azure.\n        target_path: The location in the blob container to upload to. If\n            None, then upload to root.\n        relative_root: The root from which is used to determine the path of\n            the files in the blob. For example, if we upload /path/to/file.txt,\n            and we define base path to be /path, when file.txt is uploaded\n            to the blob storage, it will have the path of /to/file.txt.\n        datastore_name: The name of the Datastore. If `None`, then the\n            default Datastore of the Workspace is returned.\n        overwrite: Overwrite existing file(s).\n\n    Example:\n        Upload Datastore object\n        ```python\n        from prefect import flow\n        from prefect_azure import AzureMlCredentials\n        from prefect_azure.ml_datastore import ml_upload_datastore\n\n        @flow\n        def example_ml_upload_datastore_flow():\n            ml_credentials = AzureMlCredentials(\n                tenant_id=\"tenant_id\",\n                service_principal_id=\"service_principal_id\",\n                service_principal_password=\"service_principal_password\",\n                subscription_id=\"subscription_id\",\n                resource_group=\"resource_group\",\n                workspace_name=\"workspace_name\",\n            )\n            result = ml_upload_datastore(\n                \"path/to/dir/or/file\",\n                ml_credentials,\n                datastore_name=\"datastore_name\"\n            )\n            return result\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Uploading %s into %s datastore\", path, datastore_name)\n\n    datastore = await _get_datastore(ml_credentials, datastore_name)\n\n    if isinstance(path, Path):\n        path = str(path)\n    elif isinstance(path, list) and isinstance(path[0], Path):\n        path = [str(p) for p in path]\n\n    if isinstance(target_path, Path):\n        target_path = str(target_path)\n\n    if isinstance(relative_root, Path):\n        relative_root = str(relative_root)\n\n    if isinstance(path, str) and os.path.isdir(path):\n        partial_upload = partial(\n            datastore.upload,\n            src_dir=path,\n            target_path=target_path,\n            overwrite=overwrite,\n            show_progress=False,\n        )\n    else:\n        partial_upload = partial(\n            datastore.upload_files,\n            files=path if isinstance(path, list) else [path],\n            relative_root=relative_root,\n            target_path=target_path,\n            overwrite=overwrite,\n            show_progress=False,\n        )\n\n    result = await to_thread.run_sync(partial_upload)\n    return result\n
        "},{"location":"integrations/prefect-azure/deployments/steps/","title":"Steps","text":""},{"location":"integrations/prefect-azure/deployments/steps/#prefect_azure.deployments.steps","title":"prefect_azure.deployments.steps","text":"

        Prefect deployment steps for code storage and retrieval in Azure Blob Storage.

        These steps can be used in a prefect.yaml file to define the default push and pull steps for a group of deployments, or they can be used to define the push and pull steps for a specific deployment.

        Example

        Sample prefect.yaml file that is configured to push and pull to and from an Azure Blob Storage container:

        prefect_version: ...\nname: ...\n\npush:\n    - prefect_azure.deployments.steps.push_to_azure_blob_storage:\n        requires: prefect-azure[blob_storage]\n        container: my-container\n        folder: my-folder\n        credentials: \"{{ prefect.blocks.azure-blob-storage-credentials.dev-credentials }}\"\n\npull:\n    - prefect_azure.deployments.steps.pull_from_azure_blob_storage:\n        requires: prefect-azure[blob_storage]\n        container: \"{{ container }}\"\n        folder: \"{{ folder }}\"\n        credentials: \"{{ prefect.blocks.azure-blob-storage-credentials.dev-credentials }}\"\n

        Note

        The Azure Storage account needs to have Hierarchical Namespace disabled.

        For more information about using deployment steps, check out the Prefect docs.

        "},{"location":"integrations/prefect-azure/deployments/steps/#prefect_azure.deployments.steps.pull_from_azure_blob_storage","title":"pull_from_azure_blob_storage","text":"

        Pulls from an Azure Blob Storage container.

        Parameters:

        Name Type Description Default container str

        The name of the container to pull files from

        required folder str

        The folder within the container to pull from

        required credentials Dict[str, str]

        A dictionary of credentials with keys connection_string or account_url and values of the corresponding connection string or account URL. If both are provided, connection_string will be used.

        required Note

        The Azure Storage account needs to have Hierarchical Namespace disabled.

        Example

        Pull from an Azure Blob Storage container using credentials stored in a block:

        pull:\n    - prefect_azure.deployments.steps.pull_from_azure_blob_storage:\n        requires: prefect-azure[blob_storage]\n        container: my-container\n        folder: my-folder\n        credentials: \"{{ prefect.blocks.azure-blob-storage-credentials.dev-credentials }}\"\n

        Pull from an Azure Blob Storage container using an account URL and default credentials:

        pull:\n    - prefect_azure.deployments.steps.pull_from_azure_blob_storage:\n        requires: prefect-azure[blob_storage]\n        container: my-container\n        folder: my-folder\n        credentials:\n            account_url: https://myaccount.blob.core.windows.net/\n

        Source code in prefect_azure/deployments/steps.py
        def pull_from_azure_blob_storage(\n    container: str,\n    folder: str,\n    credentials: Dict[str, str],\n):\n    \"\"\"\n    Pulls from an Azure Blob Storage container.\n\n    Args:\n        container: The name of the container to pull files from\n        folder: The folder within the container to pull from\n        credentials: A dictionary of credentials with keys `connection_string` or\n            `account_url` and values of the corresponding connection string or\n            account url. If both are provided, `connection_string` will be used.\n\n    Note:\n        Azure Storage account needs to have Hierarchical Namespace disabled.\n\n    Example:\n        Pull from an Azure Blob Storage container using credentials stored in\n        a block:\n        ```yaml\n        pull:\n            - prefect_azure.deployments.steps.pull_from_azure_blob_storage:\n                requires: prefect-azure[blob_storage]\n                container: my-container\n                folder: my-folder\n                credentials: \"{{ prefect.blocks.azure-blob-storage-credentials.dev-credentials }}\"\n        ```\n\n        Pull from an Azure Blob Storage container using an account URL and\n        default credentials:\n        ```yaml\n        pull:\n            - prefect_azure.deployments.steps.pull_from_azure_blob_storage:\n                requires: prefect-azure[blob_storage]\n                container: my-container\n                folder: my-folder\n                credentials:\n                    account_url: https://myaccount.blob.core.windows.net/\n        ```\n    \"\"\"  # noqa\n    local_path = Path.cwd()\n    if credentials.get(\"connection_string\") is not None:\n        container_client = ContainerClient.from_connection_string(\n            credentials[\"connection_string\"], container_name=container\n        )\n    elif credentials.get(\"account_url\") is not None:\n        container_client = ContainerClient(\n            account_url=credentials[\"account_url\"],\n            container_name=container,\n            credential=DefaultAzureCredential(),\n        )\n    else:\n        raise ValueError(\n            \"Credentials must contain either connection_string or account_url\"\n        )\n\n    with container_client as client:\n        for blob in client.list_blobs(name_starts_with=folder):\n            target = PurePosixPath(\n                local_path\n                / relative_path_to_current_platform(blob.name).relative_to(folder)\n            )\n            Path.mkdir(Path(target.parent), parents=True, exist_ok=True)\n            with open(target, \"wb\") as f:\n                client.download_blob(blob).readinto(f)\n\n    return {\n        \"container\": container,\n        \"folder\": folder,\n        \"directory\": local_path,\n    }\n
        "},{"location":"integrations/prefect-azure/deployments/steps/#prefect_azure.deployments.steps.push_to_azure_blob_storage","title":"push_to_azure_blob_storage","text":"

        Pushes to an Azure Blob Storage container.

        Parameters:

        Name Type Description Default container str

        The name of the container to push files to

        required folder str

        The folder within the container to push to

        required credentials Dict[str, str]

        A dictionary of credentials with keys connection_string or account_url and values of the corresponding connection string or account URL. If both are provided, connection_string will be used.

        required ignore_file Optional[str]

        The path to a file containing patterns of files to ignore when pushing to Azure Blob Storage. If not provided, the default .prefectignore file will be used.

        '.prefectignore' Example

        Push to an Azure Blob Storage container using credentials stored in a block:

        push:\n    - prefect_azure.deployments.steps.push_to_azure_blob_storage:\n        requires: prefect-azure[blob_storage]\n        container: my-container\n        folder: my-folder\n        credentials: \"{{ prefect.blocks.azure-blob-storage-credentials.dev-credentials }}\"\n

        Push to an Azure Blob Storage container using an account URL and default credentials:

        push:\n    - prefect_azure.deployments.steps.push_to_azure_blob_storage:\n        requires: prefect-azure[blob_storage]\n        container: my-container\n        folder: my-folder\n        credentials:\n            account_url: https://myaccount.blob.core.windows.net/\n

        Source code in prefect_azure/deployments/steps.py
        def push_to_azure_blob_storage(\n    container: str,\n    folder: str,\n    credentials: Dict[str, str],\n    ignore_file: Optional[str] = \".prefectignore\",\n):\n    \"\"\"\n    Pushes to an Azure Blob Storage container.\n\n    Args:\n        container: The name of the container to push files to\n        folder: The folder within the container to push to\n        credentials: A dictionary of credentials with keys `connection_string` or\n            `account_url` and values of the corresponding connection string or\n            account url. If both are provided, `connection_string` will be used.\n        ignore_file: The path to a file containing patterns of files to ignore when\n            pushing to Azure Blob Storage. If not provided, the default `.prefectignore`\n            file will be used.\n\n    Example:\n        Push to an Azure Blob Storage container using credentials stored in a\n        block:\n        ```yaml\n        push:\n            - prefect_azure.deployments.steps.push_to_azure_blob_storage:\n                requires: prefect-azure[blob_storage]\n                container: my-container\n                folder: my-folder\n                credentials: \"{{ prefect.blocks.azure-blob-storage-credentials.dev-credentials }}\"\n        ```\n\n        Push to an Azure Blob Storage container using an account URL and\n        default credentials:\n        ```yaml\n        push:\n            - prefect_azure.deployments.steps.push_to_azure_blob_storage:\n                requires: prefect-azure[blob_storage]\n                container: my-container\n                folder: my-folder\n                credentials:\n                    account_url: https://myaccount.blob.core.windows.net/\n        ```\n    \"\"\"  # noqa\n    local_path = Path.cwd()\n    if credentials.get(\"connection_string\") is not None:\n        container_client = ContainerClient.from_connection_string(\n            credentials[\"connection_string\"], container_name=container\n        )\n    elif credentials.get(\"account_url\") is not None:\n        container_client = ContainerClient(\n            account_url=credentials[\"account_url\"],\n            container_name=container,\n            credential=DefaultAzureCredential(),\n        )\n    else:\n        raise ValueError(\n            \"Credentials must contain either connection_string or account_url\"\n        )\n\n    included_files = None\n    if ignore_file and Path(ignore_file).exists():\n        with open(ignore_file, \"r\") as f:\n            ignore_patterns = f.readlines()\n\n        included_files = filter_files(str(local_path), ignore_patterns)\n\n    with container_client as client:\n        for local_file_path in local_path.expanduser().rglob(\"*\"):\n            if (\n                included_files is not None\n                and str(local_file_path.relative_to(local_path)) not in included_files\n            ):\n                continue\n            elif not local_file_path.is_dir():\n                remote_file_path = Path(folder) / local_file_path.relative_to(\n                    local_path\n                )\n                with open(local_file_path, \"rb\") as f:\n                    client.upload_blob(str(remote_file_path), f, overwrite=True)\n\n    return {\n        \"container\": container,\n        \"folder\": folder,\n    }\n
        "},{"location":"integrations/prefect-bitbucket/","title":"prefect-bitbucket","text":""},{"location":"integrations/prefect-bitbucket/#welcome","title":"Welcome!","text":"

        Prefect integrations for working with Bitbucket repositories.

        "},{"location":"integrations/prefect-bitbucket/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-bitbucket/#python-setup","title":"Python setup","text":"

        Requires an installation of Python 3.8+.

        We recommend using a Python virtual environment manager such as pipenv, conda, or virtualenv.

        These tasks are designed to work with Prefect 2.0. For more information about how to use Prefect, please refer to the Prefect documentation.

        "},{"location":"integrations/prefect-bitbucket/#installation","title":"Installation","text":"

        Install prefect-bitbucket with pip:

        pip install prefect-bitbucket\n

        Then, register the block types to view them on Prefect Cloud:

        prefect block register -m prefect_bitbucket\n

        Note: to use the load method on Blocks, you must already have a block document saved, either through code or through the UI.
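
        For example, a minimal sketch of saving such a block document through code (the block name and credential values are placeholders):

        from prefect_bitbucket.credentials import BitBucketCredentials\n\n# Save a block document so that BitBucketCredentials.load(\"BLOCK_NAME\") works later\nBitBucketCredentials(token=\"my-token\", username=\"my-username\").save(\"BLOCK_NAME\")\n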

        "},{"location":"integrations/prefect-bitbucket/#write-and-run-a-flow","title":"Write and run a flow","text":""},{"location":"integrations/prefect-bitbucket/#load-a-pre-existing-bitbucketcredentials-block","title":"Load a pre-existing BitBucketCredentials block","text":"
        from prefect import flow\nfrom prefect_bitbucket.credentials import BitBucketCredentials\n\n@flow\ndef use_stored_bitbucket_creds_flow():\n    bitbucket_credentials_block = BitBucketCredentials.load(\"BLOCK_NAME\")\n\n    return bitbucket_credentials_block\n\nuse_stored_bitbucket_creds_flow()\n
        "},{"location":"integrations/prefect-bitbucket/#create-a-new-bitbucketcredentials-block-in-a-flow","title":"Create a new BitBucketCredentials block in a flow","text":"
        from prefect import flow\nfrom prefect_bitbucket.credentials import BitBucketCredentials\n\n@flow\ndef create_new_bitbucket_creds_flow():\n    bitbucket_credentials_block = BitBucketCredentials(\n        token=\"my-token\",\n        username=\"my-username\"\n    )\n\ncreate_new_bitbucket_creds_flow()\n
        "},{"location":"integrations/prefect-bitbucket/#create-a-bitbucketrepository-block-for-a-public-repo","title":"Create a BitBucketRepository block for a public repo","text":"
        from prefect_bitbucket import BitBucketRepository\n\npublic_repo = \"https://bitbucket.org/my-workspace/my-repository.git\"\n\n# Creates a public BitBucket repository BitBucketRepository block\npublic_bitbucket_block = BitBucketRepository(\n    repository=public_repo\n)\n\n# Saves the BitBucketRepository block to your Prefect workspace (in the Blocks tab)\npublic_bitbucket_block.save(\"my-bitbucket-block\")\n
        "},{"location":"integrations/prefect-bitbucket/#create-a-bitbucketrepository-block-for-a-public-repo-at-a-specific-branch-or-tag","title":"Create a BitBucketRepository block for a public repo at a specific branch or tag","text":"
        from prefect_bitbucket import BitBucketRepository\n\npublic_repo = \"https://bitbucket.org/my-workspace/my-repository.git\"\n\n# Creates a public BitBucket repository BitBucketRepository block\nbranch_bitbucket_block = BitBucketRepository(\n    reference=\"my-branch-or-tag\",  # e.g \"master\"\n    repository=public_repo\n)\n\n# Saves the BitBucketRepository block to your Prefect workspace (in the Blocks tab)\nbranch_bitbucket_block.save(\"my-bitbucket-branch-block\")\n
        "},{"location":"integrations/prefect-bitbucket/#create-a-new-bitbucketcredentials-block-and-a-bitbucketrepository-block-for-a-private-repo","title":"Create a new BitBucketCredentials block and a BitBucketRepository block for a private repo","text":"
        from prefect_bitbucket import BitBucketCredentials, BitBucketRepository\n\n# For a private repo, we need credentials to access it\nbitbucket_credentials_block = BitBucketCredentials(\n    token=\"my-token\",\n    username=\"my-username\"  # optional\n)\n\n# Saves the BitBucketCredentials block to your Prefect workspace (in the Blocks tab)\nbitbucket_credentials_block.save(name=\"my-bitbucket-credentials-block\")\n\n\n# Creates a private BitBucket repository BitBucketRepository block\nprivate_repo = \"https://bitbucket.org/my-workspace/my-repository.git\"\nprivate_bitbucket_block = BitBucketRepository(\n    repository=private_repo,\n    bitbucket_credentials=bitbucket_credentials_block\n)\n\n# Saves the BitBucketRepository block to your Prefect workspace (in the Blocks tab)\nprivate_bitbucket_block.save(name=\"my-private-bitbucket-block\")\n
        "},{"location":"integrations/prefect-bitbucket/#use-a-preexisting-bitbucketcredentials-block-to-create-a-bitbucketrepository-block-for-a-private-repo","title":"Use a preexisting BitBucketCredentials block to create a BitBucketRepository block for a private repo","text":"
        from prefect_bitbucket import BitBucketCredentials, BitBucketRepository\n\n# Loads a preexisting BitBucketCredentials block\nbitbucket_credentials_block = BitBucketCredentials.load(\"my-bitbucket-credentials-block\")\n\n# Creates a private BitBucket repository BitBucketRepository block\nprivate_repo = \"https://bitbucket.org/my-workspace/my-repository.git\"\nprivate_bitbucket_block = BitBucketRepository(\n    repository=private_repo,\n    bitbucket_credentials=bitbucket_credentials_block\n)\n\n# Saves the BitBucketRepository block to your Prefect workspace (in the Blocks tab)\nprivate_bitbucket_block.save(name=\"my-private-bitbucket-block\")\n

        Differences between Bitbucket Server and Bitbucket Cloud

        For Bitbucket Cloud, only set the token to authenticate. For Bitbucket Server, set both the token and the username.
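
        As an illustrative sketch of the two setups (all values are placeholders; the url field defaults to https://api.bitbucket.org/):

        from prefect_bitbucket import BitBucketCredentials\n\n# Bitbucket Cloud: a token is sufficient\ncloud_credentials = BitBucketCredentials(token=\"x-token-auth:my-token\")\n\n# Bitbucket Server: set both the token and the username\nserver_credentials = BitBucketCredentials(\n    token=\"my-token\",\n    username=\"my-username\",\n    url=\"https://bitbucket.mycompany.com\",  # placeholder base URL of your instance\n)\n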

        "},{"location":"integrations/prefect-bitbucket/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-bitbucket/credentials/#prefect_bitbucket.credentials","title":"prefect_bitbucket.credentials","text":"

        Module to enable authenticated interactions with BitBucket.

        "},{"location":"integrations/prefect-bitbucket/credentials/#prefect_bitbucket.credentials.BitBucketCredentials","title":"BitBucketCredentials","text":"

        Bases: CredentialsBlock

        Store BitBucket credentials to interact with private BitBucket repositories.

        Attributes:

        Name Type Description token Optional[SecretStr]

        An access token to authenticate with BitBucket. This is required for accessing private repositories.

        username Optional[str]

        Identification name unique across the entire BitBucket site.

        password Optional[SecretStr]

        The password to authenticate to BitBucket.

        url str

        The base URL of your BitBucket instance.

        Examples:

        Load stored BitBucket credentials:

        from prefect_bitbucket import BitBucketCredentials\nbitbucket_credentials_block = BitBucketCredentials.load(\"BLOCK_NAME\")\n

        Source code in prefect_bitbucket/credentials.py
        class BitBucketCredentials(CredentialsBlock):\n    \"\"\"Store BitBucket credentials to interact with private BitBucket repositories.\n\n    Attributes:\n        token: An access token to authenticate with BitBucket. This is required\n            for accessing private repositories.\n        username: Identification name unique across entire BitBucket site.\n        password: The password to authenticate to BitBucket.\n        url: The base URL of your BitBucket instance.\n\n\n    Examples:\n        Load stored BitBucket credentials:\n        ```python\n        from prefect_bitbucket import BitBucketCredentials\n        bitbucket_credentials_block = BitBucketCredentials.load(\"BLOCK_NAME\")\n        ```\n\n\n    \"\"\"\n\n    _block_type_name = \"BitBucket Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/5d729f7355fb6828c4b605268ded9cfafab3ae4f-250x250.png\"  # noqa\n    token: Optional[SecretStr] = Field(\n        name=\"Personal Access Token\",\n        default=None,\n        description=(\n            \"A BitBucket Personal Access Token - required for private repositories.\"\n        ),\n        example=\"x-token-auth:my-token\",\n    )\n    username: Optional[str] = Field(\n        default=None,\n        description=\"Identification name unique across entire BitBucket site.\",\n    )\n    password: Optional[SecretStr] = Field(\n        default=None, description=\"The password to authenticate to BitBucket.\"\n    )\n    url: str = Field(\n        default=\"https://api.bitbucket.org/\",\n        description=\"The base URL of your BitBucket instance.\",\n        title=\"URL\",\n    )\n\n    @validator(\"username\")\n    def _validate_username(cls, value: str) -> str:\n        \"\"\"When username provided, will validate it.\"\"\"\n        pattern = \"^[A-Za-z0-9_-]*$\"\n\n        if not re.match(pattern, value):\n            raise ValueError(\n                \"Username must be alpha, num, dash and/or underscore only.\"\n            )\n        if not len(value) <= 30:\n            raise ValueError(\"Username cannot be longer than 30 chars.\")\n        return value\n\n    def get_client(\n        self, client_type: Union[str, ClientType], **client_kwargs\n    ) -> Union[Cloud, Bitbucket]:\n        \"\"\"Get an authenticated local or cloud Bitbucket client.\n\n        Args:\n            client_type: Whether to use a local or cloud client.\n\n        Returns:\n            An authenticated Bitbucket client.\n\n        \"\"\"\n        # ref: https://atlassian-python-api.readthedocs.io/\n        if isinstance(client_type, str):\n            client_type = ClientType(client_type.lower())\n\n        password = self.password.get_secret_value()\n        input_client_kwargs = dict(\n            url=self.url, username=self.username, password=password\n        )\n        input_client_kwargs.update(**client_kwargs)\n\n        if client_type == ClientType.CLOUD:\n            client = Cloud(**input_client_kwargs)\n        else:\n            client = Bitbucket(**input_client_kwargs)\n        return client\n
        "},{"location":"integrations/prefect-bitbucket/credentials/#prefect_bitbucket.credentials.BitBucketCredentials.get_client","title":"get_client","text":"

        Get an authenticated local or cloud Bitbucket client.

        Parameters:

        Name Type Description Default client_type Union[str, ClientType]

        Whether to use a local or cloud client.

        required

        Returns:

        Type Description Union[Cloud, Bitbucket]

        An authenticated Bitbucket client.

        Source code in prefect_bitbucket/credentials.py
        def get_client(\n    self, client_type: Union[str, ClientType], **client_kwargs\n) -> Union[Cloud, Bitbucket]:\n    \"\"\"Get an authenticated local or cloud Bitbucket client.\n\n    Args:\n        client_type: Whether to use a local or cloud client.\n\n    Returns:\n        An authenticated Bitbucket client.\n\n    \"\"\"\n    # ref: https://atlassian-python-api.readthedocs.io/\n    if isinstance(client_type, str):\n        client_type = ClientType(client_type.lower())\n\n    password = self.password.get_secret_value()\n    input_client_kwargs = dict(\n        url=self.url, username=self.username, password=password\n    )\n    input_client_kwargs.update(**client_kwargs)\n\n    if client_type == ClientType.CLOUD:\n        client = Cloud(**input_client_kwargs)\n    else:\n        client = Bitbucket(**input_client_kwargs)\n    return client\n
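
        A minimal usage sketch (assuming a saved credentials block named \"my-bitbucket-credentials-block\" with a password set):

        from prefect_bitbucket import BitBucketCredentials\n\ncredentials = BitBucketCredentials.load(\"my-bitbucket-credentials-block\")\n\n# \"cloud\" returns an atlassian-python-api Cloud client; \"local\" returns a Bitbucket client\ncloud_client = credentials.get_client(client_type=\"cloud\")\nlocal_client = credentials.get_client(client_type=\"local\")\n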
        "},{"location":"integrations/prefect-bitbucket/credentials/#prefect_bitbucket.credentials.ClientType","title":"ClientType","text":"

        Bases: Enum

        The client type to use.

        Source code in prefect_bitbucket/credentials.py
        class ClientType(Enum):\n    \"\"\"The client type to use.\"\"\"\n\n    LOCAL = \"local\"\n    CLOUD = \"cloud\"\n
        "},{"location":"integrations/prefect-bitbucket/repository/","title":"Repository","text":""},{"location":"integrations/prefect-bitbucket/repository/#prefect_bitbucket.repository","title":"prefect_bitbucket.repository","text":"

        Allows for interaction with a BitBucket repository.

        The BitBucket class in this collection is a storage block that lets Prefect agents pull Prefect flow code from BitBucket repositories.

        The BitBucket block is ideally configured via the Prefect UI, but can also be used in Python as the following examples demonstrate.

        Examples: the snippets below assume from prefect_bitbucket.repository import BitBucketRepository.

        "},{"location":"integrations/prefect-bitbucket/repository/#prefect_bitbucket.repository--public-bitbucket-repository","title":"public BitBucket repository","text":"

        public_bitbucket_block = BitBucketRepository( repository=\"https://bitbucket.com/my-project/my-repository.git\" )

        public_bitbucket_block.save(name=\"my-bitbucket-block\")

        "},{"location":"integrations/prefect-bitbucket/repository/#prefect_bitbucket.repository--specific-branch-or-tag","title":"specific branch or tag","text":"

        branch_bitbucket_block = BitBucketRepository( reference=\"branch-or-tag-name\", repository=\"https://bitbucket.com/my-project/my-repository.git\" )

        branch_bitbucket_block.save(name=\"my-bitbucket-block\")

        "},{"location":"integrations/prefect-bitbucket/repository/#prefect_bitbucket.repository--private-bitbucket-repository","title":"private BitBucket repository","text":"

        private_bitbucket_block = BitBucketRepository( repository=\"https://bitbucket.com/my-project/my-repository.git\", bitbucket_credentials=BitBucketCredentials.load(\"my-bitbucket-credentials-block\") )

        private_bitbucket_block.save(name=\"my-private-bitbucket-block\")

        "},{"location":"integrations/prefect-bitbucket/repository/#prefect_bitbucket.repository.BitBucketRepository","title":"BitBucketRepository","text":"

        Bases: ReadableDeploymentStorage

        Interact with files stored in BitBucket repositories.

        An accessible installation of git is required for this block to function properly.

        Source code in prefect_bitbucket/repository.py
        class BitBucketRepository(ReadableDeploymentStorage):\n    \"\"\"Interact with files stored in BitBucket repositories.\n\n    An accessible installation of git is required for this block to function\n    properly.\n    \"\"\"\n\n    _block_type_name = \"BitBucket Repository\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/5d729f7355fb6828c4b605268ded9cfafab3ae4f-250x250.png\"  # noqa\n    _description = \"Interact with files stored in BitBucket repositories.\"\n\n    repository: str = Field(\n        default=...,\n        description=\"The URL of a BitBucket repository to read from in HTTPS format\",\n    )\n    reference: Optional[str] = Field(\n        default=None,\n        description=\"An optional reference to pin to; can be a branch or tag.\",\n    )\n    bitbucket_credentials: Optional[BitBucketCredentials] = Field(\n        default=None,\n        description=(\n            \"An optional BitBucketCredentials block for authenticating with \"\n            \"private BitBucket repos.\"\n        ),\n    )\n\n    @validator(\"bitbucket_credentials\")\n    def _ensure_credentials_go_with_https(cls, v: str, values: dict) -> str:\n        \"\"\"Ensure that credentials are not provided with 'SSH' formatted BitBucket URLs.\n\n        Validators are by default only called on provided arguments.\n\n        Note: validates `credentials` specifically so that it only fires when private\n        repositories are used.\n        \"\"\"\n        if v is not None:\n            if urlparse(values[\"repository\"]).scheme != \"https\":\n                raise InvalidRepositoryURLError(\n                    (\n                        \"Credentials can only be used with BitBucket repositories \"\n                        \"using the 'HTTPS' format. 
You must either remove the \"\n                        \"credential if you wish to use the 'SSH' format and are not \"\n                        \"using a private repository, or you must change the repository \"\n                        \"URL to the 'HTTPS' format.\"\n                    )\n                )\n\n        return v\n\n    def _create_repo_url(self) -> str:\n        \"\"\"Format the URL provided to the `git clone` command.\n\n        For private repos in the cloud:\n        https://x-token-auth:<access-token>@bitbucket.org/<user>/<repo>.git\n        For private repos with a local bitbucket server:\n        https://<username>:<access-token>@<server>/scm/<project>/<repo>.git\n\n        All other repos should be the same as `self.repository`.\n        \"\"\"\n        url_components = urlparse(self.repository)\n        token_is_set = (\n            self.bitbucket_credentials is not None and self.bitbucket_credentials.token\n        )\n\n        # Need a token for private repos\n        if url_components.scheme == \"https\" and token_is_set:\n            token = self.bitbucket_credentials.token.get_secret_value()\n            username = self.bitbucket_credentials.username\n            if username is None:\n                username = \"x-token-auth\"\n            updated_components = url_components._replace(\n                netloc=f\"{username}:{token}@{url_components.netloc}\"\n            )\n            full_url = urlunparse(updated_components)\n        else:\n            full_url = self.repository\n\n        return full_url\n\n    @staticmethod\n    def _get_paths(\n        dst_dir: Union[str, None], src_dir: str, sub_directory: Optional[str]\n    ) -> Tuple[str, str]:\n        \"\"\"Return the fully formed paths for BitBucketRepository contents.\n\n        Return will take the form of (content_source, content_destination).\n\n        \"\"\"\n        if dst_dir is None:\n            content_destination = Path(\".\").absolute()\n        else:\n            content_destination = Path(dst_dir)\n\n        content_source = Path(src_dir)\n\n        if sub_directory:\n            content_destination = content_destination.joinpath(sub_directory)\n            content_source = content_source.joinpath(sub_directory)\n\n        return str(content_source), str(content_destination)\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> None:\n        \"\"\"Clones a BitBucket project within `from_path` to the provided `local_path`.\n\n        This defaults to cloning the repository reference configured on the\n        Block to the present working directory.\n\n        Args:\n            from_path: If provided, interpreted as a subdirectory of the underlying\n                repository that will be copied to the provided local path.\n            local_path: A local path to clone to; defaults to present working directory.\n\n        \"\"\"\n        # Construct command\n        cmd = [\"git\", \"clone\", self._create_repo_url()]\n        if self.reference:\n            cmd += [\"-b\", self.reference]\n\n        # Limit git history\n        cmd += [\"--depth\", \"1\"]\n\n        # Clone to a temporary directory and move the subdirectory over\n        with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n            cmd.append(tmp_dir)\n\n            err_stream = io.StringIO()\n            out_stream = io.StringIO()\n            process = await run_process(cmd, stream_output=(out_stream, err_stream))\n         
   if process.returncode != 0:\n                err_stream.seek(0)\n                raise OSError(f\"Failed to pull from remote:\\n {err_stream.read()}\")\n\n            content_source, content_destination = self._get_paths(\n                dst_dir=local_path, src_dir=tmp_dir, sub_directory=from_path\n            )\n\n            copy_tree(src=content_source, dst=content_destination)\n
        "},{"location":"integrations/prefect-bitbucket/repository/#prefect_bitbucket.repository.BitBucketRepository.get_directory","title":"get_directory async","text":"

        Clones a BitBucket project within from_path to the provided local_path.

        This defaults to cloning the repository reference configured on the Block to the present working directory.

        Parameters:

        Name Type Description Default from_path Optional[str]

        If provided, interpreted as a subdirectory of the underlying repository that will be copied to the provided local path.

        None local_path Optional[str]

        A local path to clone to; defaults to present working directory.

        None Source code in prefect_bitbucket/repository.py
        @sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> None:\n    \"\"\"Clones a BitBucket project within `from_path` to the provided `local_path`.\n\n    This defaults to cloning the repository reference configured on the\n    Block to the present working directory.\n\n    Args:\n        from_path: If provided, interpreted as a subdirectory of the underlying\n            repository that will be copied to the provided local path.\n        local_path: A local path to clone to; defaults to present working directory.\n\n    \"\"\"\n    # Construct command\n    cmd = [\"git\", \"clone\", self._create_repo_url()]\n    if self.reference:\n        cmd += [\"-b\", self.reference]\n\n    # Limit git history\n    cmd += [\"--depth\", \"1\"]\n\n    # Clone to a temporary directory and move the subdirectory over\n    with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n        cmd.append(tmp_dir)\n\n        err_stream = io.StringIO()\n        out_stream = io.StringIO()\n        process = await run_process(cmd, stream_output=(out_stream, err_stream))\n        if process.returncode != 0:\n            err_stream.seek(0)\n            raise OSError(f\"Failed to pull from remote:\\n {err_stream.read()}\")\n\n        content_source, content_destination = self._get_paths(\n            dst_dir=local_path, src_dir=tmp_dir, sub_directory=from_path\n        )\n\n        copy_tree(src=content_source, dst=content_destination)\n
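
        A minimal usage sketch (the block name and paths are placeholders):

        from prefect_bitbucket import BitBucketRepository\n\nrepo_block = BitBucketRepository.load(\"my-bitbucket-block\")\n\n# Clone only the \"flows\" subdirectory of the repository into ./local-flows\nrepo_block.get_directory(from_path=\"flows\", local_path=\"local-flows\")\n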
        "},{"location":"integrations/prefect-dask/","title":"prefect-dask","text":"

        The prefect-dask collection makes it easy to include distributed processing for your flows. Check out the examples below to get started!

        "},{"location":"integrations/prefect-dask/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-dask/#integrate-with-prefect-flows","title":"Integrate with Prefect flows","text":"

        Perhaps you're already working with Prefect flows. Say your flow downloads many images to train your machine learning model. Unfortunately, it takes a long time to download the images because your code is running sequentially.

        After installing prefect-dask you can parallelize your flow in three simple steps:

        1. Add the import: from prefect_dask import DaskTaskRunner
        2. Specify the task runner in the flow decorator: @flow(task_runner=DaskTaskRunner)
        3. Submit tasks to the flow's task runner: a_task.submit(*args, **kwargs)

        The parallelized code runs in about 1/3 of the time in our test! And that's without distributing the workload over multiple machines. Here's the before and after!

        Before / After
        # Completed in 15.2 seconds\n\nfrom typing import List\nfrom pathlib import Path\n\nimport httpx\nfrom prefect import flow, task\n\nURL_FORMAT = (\n    \"https://www.cpc.ncep.noaa.gov/products/NMME/archive/\"\n    \"{year:04d}{month:02d}0800/current/images/nino34.rescaling.ENSMEAN.png\"\n)\n\n@task\ndef download_image(year: int, month: int, directory: Path) -> Path:\n    # download image from URL\n    url = URL_FORMAT.format(year=year, month=month)\n    resp = httpx.get(url)\n\n    # save content to directory/YYYYMM.png\n    file_path = (directory / url.split(\"/\")[-1]).with_stem(f\"{year:04d}{month:02d}\")\n    file_path.write_bytes(resp.content)\n    return file_path\n\n@flow\ndef download_nino_34_plumes_from_year(year: int) -> List[Path]:\n    # create a directory to hold images\n    directory = Path(\"data\")\n    directory.mkdir(exist_ok=True)\n\n    # download all images\n    file_paths = []\n    for month in range(1, 12 + 1):\n        file_path = download_image(year, month, directory)\n        file_paths.append(file_path)\n    return file_paths\n\nif __name__ == \"__main__\":\n    download_nino_34_plumes_from_year(2022)\n
        # Completed in 5.7 seconds\n\nfrom typing import List\nfrom pathlib import Path\n\nimport httpx\nfrom prefect import flow, task\nfrom prefect_dask import DaskTaskRunner\n\nURL_FORMAT = (\n    \"https://www.cpc.ncep.noaa.gov/products/NMME/archive/\"\n    \"{year:04d}{month:02d}0800/current/images/nino34.rescaling.ENSMEAN.png\"\n)\n\n@task\ndef download_image(year: int, month: int, directory: Path) -> Path:\n    # download image from URL\n    url = URL_FORMAT.format(year=year, month=month)\n    resp = httpx.get(url)\n\n    # save content to directory/YYYYMM.png\n    file_path = (directory / url.split(\"/\")[-1]).with_stem(f\"{year:04d}{month:02d}\")\n    file_path.write_bytes(resp.content)\n    return file_path\n\n@flow(task_runner=DaskTaskRunner(cluster_kwargs={\"processes\": False}))\ndef download_nino_34_plumes_from_year(year: int) -> List[Path]:\n    # create a directory to hold images\n    directory = Path(\"data\")\n    directory.mkdir(exist_ok=True)\n\n    # download all images\n    file_paths = []\n    for month in range(1, 12 + 1):\n        file_path = download_image.submit(year, month, directory)\n        file_paths.append(file_path)\n    return file_paths\n\nif __name__ == \"__main__\":\n    download_nino_34_plumes_from_year(2022)\n

        The original flow completes in 15.2 seconds.

        However, with just a few minor tweaks, we were able to reduce the runtime nearly threefold, down to just 5.7 seconds!

        "},{"location":"integrations/prefect-dask/#integrate-with-dask-clientcluster-and-collections","title":"Integrate with Dask client/cluster and collections","text":"

        Suppose you have an existing Dask client/cluster and collection, like a dask.dataframe.DataFrame, and you want to add observability.

        With prefect-dask, there's no major overhaul necessary because Prefect was designed with incremental adoption in mind! It's as easy as:

        1. Adding the imports
        2. Sprinkling a few task and flow decorators
        3. Using get_dask_client context manager on collections to distribute work across workers
        4. Specifying the task runner and client's address in the flow decorator
        5. Submitting the tasks to the flow's task runner
        Before / After
        import dask.dataframe\nimport dask.distributed\n\n\n\nclient = dask.distributed.Client()\n\n\ndef read_data(start: str, end: str) -> dask.dataframe.DataFrame:\n    df = dask.datasets.timeseries(start, end, partition_freq=\"4w\")\n    return df\n\n\ndef process_data(df: dask.dataframe.DataFrame) -> dask.dataframe.DataFrame:\n\n    df_yearly_avg = df.groupby(df.index.year).mean()\n    return df_yearly_avg.compute()\n\n\ndef dask_pipeline():\n    df = read_data(\"1988\", \"2022\")\n    df_yearly_average = process_data(df)\n    return df_yearly_average\n\ndask_pipeline()\n
        import dask.dataframe\nimport dask.distributed\nfrom prefect import flow, task\nfrom prefect_dask import DaskTaskRunner, get_dask_client\n\nclient = dask.distributed.Client()\n\n@task\ndef read_data(start: str, end: str) -> dask.dataframe.DataFrame:\n    df = dask.datasets.timeseries(start, end, partition_freq=\"4w\")\n    return df\n\n@task\ndef process_data(df: dask.dataframe.DataFrame) -> dask.dataframe.DataFrame:\n    with get_dask_client():\n        df_yearly_avg = df.groupby(df.index.year).mean()\n        return df_yearly_avg.compute()\n\n@flow(task_runner=DaskTaskRunner(address=client.scheduler.address))\ndef dask_pipeline():\n    df = read_data.submit(\"1988\", \"2022\")\n    df_yearly_average = process_data.submit(df)\n    return df_yearly_average\n\ndask_pipeline()\n

        Now, you can conveniently see when each task completed, both in the terminal and the UI!

        14:10:09.845 | INFO    | prefect.engine - Created flow run 'chocolate-pony' for flow 'dask-flow'\n14:10:09.847 | INFO    | prefect.task_runner.dask - Connecting to an existing Dask cluster at tcp://127.0.0.1:59255\n14:10:09.857 | INFO    | distributed.scheduler - Receive client connection: Client-8c1e0f24-9133-11ed-800e-86f2469c4e7a\n14:10:09.859 | INFO    | distributed.core - Starting established connection to tcp://127.0.0.1:59516\n14:10:09.862 | INFO    | prefect.task_runner.dask - The Dask dashboard is available at http://127.0.0.1:8787/status\n14:10:11.344 | INFO    | Flow run 'chocolate-pony' - Created task run 'read_data-5bc97744-0' for task 'read_data'\n14:10:11.626 | INFO    | Flow run 'chocolate-pony' - Submitted task run 'read_data-5bc97744-0' for execution.\n14:10:11.795 | INFO    | Flow run 'chocolate-pony' - Created task run 'process_data-090555ba-0' for task 'process_data'\n14:10:11.798 | INFO    | Flow run 'chocolate-pony' - Submitted task run 'process_data-090555ba-0' for execution.\n14:10:13.279 | INFO    | Task run 'read_data-5bc97744-0' - Finished in state Completed()\n14:11:43.539 | INFO    | Task run 'process_data-090555ba-0' - Finished in state Completed()\n14:11:43.883 | INFO    | Flow run 'chocolate-pony' - Finished in state Completed('All states completed.')\n
        "},{"location":"integrations/prefect-dask/#resources","title":"Resources","text":"

        For additional examples, check out the Usage Guide!

        "},{"location":"integrations/prefect-dask/#installation","title":"Installation","text":"

        Get started by installing prefect-dask!

        pip / conda
        pip install -U prefect-dask\n
        conda install -c conda-forge prefect-dask\n

        Requires an installation of Python 3.7+.

        We recommend using a Python virtual environment manager such as pipenv, conda, or virtualenv.

        These tasks are designed to work with Prefect 2. For more information about how to use Prefect, please refer to the Prefect documentation.

        "},{"location":"integrations/prefect-dask/#feedback","title":"Feedback","text":"

        If you encounter any bugs while using prefect-dask, feel free to open an issue in the prefect repository.

        If you have any questions or issues while using prefect-dask, you can find help in either the Prefect Discourse forum or the Prefect Slack community.

        "},{"location":"integrations/prefect-dask/#contributing","title":"Contributing","text":"

        If you'd like to contribute a fix for an issue or add a feature to prefect-dask, please propose changes through a pull request from a fork of the repository.

        Here are the steps:

        1. Fork the repository
        2. Clone the forked repository
        3. Install the repository and its dependencies:
          pip install -e \".[dev]\"\n
        4. Make desired changes
        5. Add tests
        6. Install pre-commit to perform quality checks prior to commit:
          pre-commit install\n
        7. git commit, git push, and create a pull request
        "},{"location":"integrations/prefect-dask/task_runners/","title":"Task Runners","text":""},{"location":"integrations/prefect-dask/task_runners/#prefect_dask.task_runners","title":"prefect_dask.task_runners","text":"

        Interface and implementations of the Dask Task Runner. Task Runners in Prefect are responsible for managing the execution of Prefect task runs. Generally speaking, users are not expected to interact with task runners outside of configuring and initializing them for a flow.

        Example
        import time\n\nfrom prefect import flow, task\n\n@task\ndef shout(number):\n    time.sleep(0.5)\n    print(f\"#{number}\")\n\n@flow\ndef count_to(highest_number):\n    for number in range(highest_number):\n        shout.submit(number)\n\nif __name__ == \"__main__\":\n    count_to(10)\n\n# outputs\n#0\n#1\n#2\n#3\n#4\n#5\n#6\n#7\n#8\n#9\n

        Switching to a DaskTaskRunner:

        import time\n\nfrom prefect import flow, task\nfrom prefect_dask import DaskTaskRunner\n\n@task\ndef shout(number):\n    time.sleep(0.5)\n    print(f\"#{number}\")\n\n@flow(task_runner=DaskTaskRunner)\ndef count_to(highest_number):\n    for number in range(highest_number):\n        shout.submit(number)\n\nif __name__ == \"__main__\":\n    count_to(10)\n\n# outputs\n#3\n#7\n#2\n#6\n#4\n#0\n#1\n#5\n#8\n#9\n

        "},{"location":"integrations/prefect-dask/task_runners/#prefect_dask.task_runners.DaskTaskRunner","title":"DaskTaskRunner","text":"

        Bases: BaseTaskRunner

        A parallel task_runner that submits tasks to the dask.distributed scheduler. By default a temporary distributed.LocalCluster is created (and subsequently torn down) within the start() contextmanager. To use a different cluster class (e.g. dask_kubernetes.KubeCluster), you can specify cluster_class/cluster_kwargs.

        Alternatively, if you already have a dask cluster running, you can provide the cluster object via the cluster kwarg or the address of the scheduler via the address kwarg.

        Multiprocessing safety

        Note that, because the DaskTaskRunner uses multiprocessing, calls to flows in scripts must be guarded with if __name__ == \"__main__\": or warnings will be displayed.

        Parameters:

        Name Type Description Default cluster Cluster

        Currently running dask cluster; if one is not provided (or specified via the address kwarg), a temporary cluster will be created in DaskTaskRunner.start(). Defaults to None.

        None address string

        Address of a currently running dask scheduler. Defaults to None.

        None cluster_class string or callable

        The cluster class to use when creating a temporary dask cluster. Can be either the full class name (e.g. \"distributed.LocalCluster\"), or the class itself.

        None cluster_kwargs dict

        Additional kwargs to pass to the cluster_class when creating a temporary dask cluster.

        None adapt_kwargs dict

        Additional kwargs to pass to cluster.adapt when creating a temporary dask cluster. Note that adaptive scaling is only enabled if adapt_kwargs are provided.

        None client_kwargs dict

        Additional kwargs to use when creating a dask.distributed.Client.

        None

        Examples:

        Using a temporary local dask cluster:

        from prefect import flow\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@flow(task_runner=DaskTaskRunner)\ndef my_flow():\n    ...\n

        Using a temporary cluster running elsewhere. Any Dask cluster class should work; here we use dask-cloudprovider:

        DaskTaskRunner(\n    cluster_class=\"dask_cloudprovider.FargateCluster\",\n    cluster_kwargs={\n        \"image\": \"prefecthq/prefect:latest\",\n        \"n_workers\": 5,\n    },\n)\n

        Connecting to an existing dask cluster:

        DaskTaskRunner(address=\"192.0.2.255:8786\")\n

        Source code in prefect_dask/task_runners.py
        class DaskTaskRunner(BaseTaskRunner):\n    \"\"\"\n    A parallel task_runner that submits tasks to the `dask.distributed` scheduler.\n    By default a temporary `distributed.LocalCluster` is created (and\n    subsequently torn down) within the `start()` contextmanager. To use a\n    different cluster class (e.g.\n    [`dask_kubernetes.KubeCluster`](https://kubernetes.dask.org/)), you can\n    specify `cluster_class`/`cluster_kwargs`.\n\n    Alternatively, if you already have a dask cluster running, you can provide\n    the cluster object via the `cluster` kwarg or the address of the scheduler\n    via the `address` kwarg.\n    !!! warning \"Multiprocessing safety\"\n        Note that, because the `DaskTaskRunner` uses multiprocessing, calls to flows\n        in scripts must be guarded with `if __name__ == \"__main__\":` or warnings will\n        be displayed.\n\n    Args:\n        cluster (distributed.deploy.Cluster, optional): Currently running dask cluster;\n            if one is not provider (or specified via `address` kwarg), a temporary\n            cluster will be created in `DaskTaskRunner.start()`. Defaults to `None`.\n        address (string, optional): Address of a currently running dask\n            scheduler. Defaults to `None`.\n        cluster_class (string or callable, optional): The cluster class to use\n            when creating a temporary dask cluster. Can be either the full\n            class name (e.g. `\"distributed.LocalCluster\"`), or the class itself.\n        cluster_kwargs (dict, optional): Additional kwargs to pass to the\n            `cluster_class` when creating a temporary dask cluster.\n        adapt_kwargs (dict, optional): Additional kwargs to pass to `cluster.adapt`\n            when creating a temporary dask cluster. Note that adaptive scaling\n            is only enabled if `adapt_kwargs` are provided.\n        client_kwargs (dict, optional): Additional kwargs to use when creating a\n            [`dask.distributed.Client`](https://distributed.dask.org/en/latest/api.html#client).\n\n    Examples:\n        Using a temporary local dask cluster:\n        ```python\n        from prefect import flow\n        from prefect_dask.task_runners import DaskTaskRunner\n\n        @flow(task_runner=DaskTaskRunner)\n        def my_flow():\n            ...\n        ```\n\n        Using a temporary cluster running elsewhere. 
Any Dask cluster class should\n        work, here we use [dask-cloudprovider](https://cloudprovider.dask.org):\n        ```python\n        DaskTaskRunner(\n            cluster_class=\"dask_cloudprovider.FargateCluster\",\n            cluster_kwargs={\n                \"image\": \"prefecthq/prefect:latest\",\n                \"n_workers\": 5,\n            },\n        )\n        ```\n\n        Connecting to an existing dask cluster:\n        ```python\n        DaskTaskRunner(address=\"192.0.2.255:8786\")\n        ```\n    \"\"\"\n\n    def __init__(\n        self,\n        cluster: Optional[distributed.deploy.Cluster] = None,\n        address: str = None,\n        cluster_class: Union[str, Callable] = None,\n        cluster_kwargs: dict = None,\n        adapt_kwargs: dict = None,\n        client_kwargs: dict = None,\n    ):\n        # Validate settings and infer defaults\n        if address:\n            if cluster or cluster_class or cluster_kwargs or adapt_kwargs:\n                raise ValueError(\n                    \"Cannot specify `address` and \"\n                    \"`cluster`/`cluster_class`/`cluster_kwargs`/`adapt_kwargs`\"\n                )\n        elif cluster:\n            if cluster_class or cluster_kwargs:\n                raise ValueError(\n                    \"Cannot specify `cluster` and `cluster_class`/`cluster_kwargs`\"\n                )\n            if not cluster.asynchronous:\n                raise ValueError(\n                    \"The cluster must have `asynchronous=True` to be \"\n                    \"used with `DaskTaskRunner`.\"\n                )\n        else:\n            if isinstance(cluster_class, str):\n                cluster_class = from_qualified_name(cluster_class)\n            else:\n                cluster_class = cluster_class\n\n        # Create a copies of incoming kwargs since we may mutate them\n        cluster_kwargs = cluster_kwargs.copy() if cluster_kwargs else {}\n        adapt_kwargs = adapt_kwargs.copy() if adapt_kwargs else {}\n        client_kwargs = client_kwargs.copy() if client_kwargs else {}\n\n        # Update kwargs defaults\n        client_kwargs.setdefault(\"set_as_default\", False)\n\n        # The user cannot specify async/sync themselves\n        if \"asynchronous\" in client_kwargs:\n            raise ValueError(\n                \"`client_kwargs` cannot set `asynchronous`. \"\n                \"This option is managed by Prefect.\"\n            )\n        if \"asynchronous\" in cluster_kwargs:\n            raise ValueError(\n                \"`cluster_kwargs` cannot set `asynchronous`. 
\"\n                \"This option is managed by Prefect.\"\n            )\n\n        # Store settings\n        self.address = address\n        self.cluster_class = cluster_class\n        self.cluster_kwargs = cluster_kwargs\n        self.adapt_kwargs = adapt_kwargs\n        self.client_kwargs = client_kwargs\n\n        # Runtime attributes\n        self._client: \"distributed.Client\" = None\n        self._cluster: \"distributed.deploy.Cluster\" = cluster\n        self._dask_futures: Dict[str, \"distributed.Future\"] = {}\n\n        super().__init__()\n\n    @property\n    def concurrency_type(self) -> TaskConcurrencyType:\n        return (\n            TaskConcurrencyType.PARALLEL\n            if self.cluster_kwargs.get(\"processes\")\n            else TaskConcurrencyType.CONCURRENT\n        )\n\n    def duplicate(self):\n        \"\"\"\n        Create a new instance of the task runner with the same settings.\n        \"\"\"\n        return type(self)(\n            address=self.address,\n            cluster_class=self.cluster_class,\n            cluster_kwargs=self.cluster_kwargs,\n            adapt_kwargs=self.adapt_kwargs,\n            client_kwargs=self.client_kwargs,\n        )\n\n    def __eq__(self, other: object) -> bool:\n        \"\"\"\n        Check if an instance has the same settings as this task runner.\n        \"\"\"\n        if type(self) == type(other):\n            return (\n                self.address == other.address\n                and self.cluster_class == other.cluster_class\n                and self.cluster_kwargs == other.cluster_kwargs\n                and self.adapt_kwargs == other.adapt_kwargs\n                and self.client_kwargs == other.client_kwargs\n            )\n        else:\n            return NotImplemented\n\n    async def submit(\n        self,\n        key: UUID,\n        call: Callable[..., Awaitable[State[R]]],\n    ) -> None:\n        if not self._started:\n            raise RuntimeError(\n                \"The task runner must be started before submitting work.\"\n            )\n\n        # unpack the upstream call in order to cast Prefect futures to Dask futures\n        # where possible to optimize Dask task scheduling\n        call_kwargs = self._optimize_futures(call.keywords)\n\n        if \"task_run\" in call_kwargs:\n            task_run = call_kwargs[\"task_run\"]\n            flow_run = FlowRunContext.get().flow_run\n            # Dask displays the text up to the first '-' as the name; the task run key\n            # should include the task run name for readability in the Dask console.\n            # For cases where the task run fails and reruns for a retried flow run,\n            # the flow run count is included so that the new key will not match\n            # the failed run's key, therefore not retrieving from the Dask cache.\n            dask_key = f\"{task_run.name}-{task_run.id.hex}-{flow_run.run_count}\"\n        else:\n            dask_key = str(key)\n\n        self._dask_futures[key] = self._client.submit(\n            call.func,\n            key=dask_key,\n            # Dask defaults to treating functions are pure, but we set this here for\n            # explicit expectations. If this task run is submitted to Dask twice, the\n            # result of the first run should be returned. 
Subsequent runs would return\n            # `Abort` exceptions if they were submitted again.\n            pure=True,\n            **call_kwargs,\n        )\n\n    def _get_dask_future(self, key: UUID) -> \"distributed.Future\":\n        \"\"\"\n        Retrieve the dask future corresponding to a Prefect future.\n        The Dask future is for the `run_fn`, which should return a `State`.\n        \"\"\"\n        return self._dask_futures[key]\n\n    def _optimize_futures(self, expr):\n        def visit_fn(expr):\n            if isinstance(expr, PrefectFuture):\n                dask_future = self._dask_futures.get(expr.key)\n                if dask_future is not None:\n                    return dask_future\n            # Fallback to return the expression unaltered\n            return expr\n\n        return visit_collection(expr, visit_fn=visit_fn, return_data=True)\n\n    async def wait(self, key: UUID, timeout: float = None) -> Optional[State]:\n        future = self._get_dask_future(key)\n        try:\n            return await future.result(timeout=timeout)\n        except distributed.TimeoutError:\n            return None\n        except BaseException as exc:\n            return await exception_to_crashed_state(exc)\n\n    async def _start(self, exit_stack: AsyncExitStack):\n        \"\"\"\n        Start the task runner and prep for context exit.\n        - Creates a cluster if an external address is not set.\n        - Creates a client to connect to the cluster.\n        - Pushes a call to wait for all running futures to complete on exit.\n        \"\"\"\n\n        if self._cluster:\n            self.logger.info(f\"Connecting to existing Dask cluster {self._cluster}\")\n            self._connect_to = self._cluster\n            if self.adapt_kwargs:\n                self._cluster.adapt(**self.adapt_kwargs)\n        elif self.address:\n            self.logger.info(\n                f\"Connecting to an existing Dask cluster at {self.address}\"\n            )\n            self._connect_to = self.address\n        else:\n            self.cluster_class = self.cluster_class or distributed.LocalCluster\n\n            self.logger.info(\n                f\"Creating a new Dask cluster with \"\n                f\"`{to_qualified_name(self.cluster_class)}`\"\n            )\n            self._connect_to = self._cluster = await exit_stack.enter_async_context(\n                self.cluster_class(asynchronous=True, **self.cluster_kwargs)\n            )\n            if self.adapt_kwargs:\n                adapt_response = self._cluster.adapt(**self.adapt_kwargs)\n                if inspect.isawaitable(adapt_response):\n                    await adapt_response\n\n        self._client = await exit_stack.enter_async_context(\n            distributed.Client(\n                self._connect_to, asynchronous=True, **self.client_kwargs\n            )\n        )\n\n        if self._client.dashboard_link:\n            self.logger.info(\n                f\"The Dask dashboard is available at {self._client.dashboard_link}\",\n            )\n\n    def __getstate__(self):\n        \"\"\"\n        Allow the `DaskTaskRunner` to be serialized by dropping\n        the `distributed.Client`, which contains locks.\n        Must be deserialized on a dask worker.\n        \"\"\"\n        data = self.__dict__.copy()\n        data.update({k: None for k in {\"_client\", \"_cluster\", \"_connect_to\"}})\n        return data\n\n    def __setstate__(self, data: dict):\n        \"\"\"\n        Restore the `distributed.Client` by loading 
the client on a dask worker.\n        \"\"\"\n        self.__dict__.update(data)\n        self._client = distributed.get_client()\n
        "},{"location":"integrations/prefect-dask/task_runners/#prefect_dask.task_runners.DaskTaskRunner.duplicate","title":"duplicate","text":"

        Create a new instance of the task runner with the same settings.

        Source code in prefect_dask/task_runners.py
        def duplicate(self):\n    \"\"\"\n    Create a new instance of the task runner with the same settings.\n    \"\"\"\n    return type(self)(\n        address=self.address,\n        cluster_class=self.cluster_class,\n        cluster_kwargs=self.cluster_kwargs,\n        adapt_kwargs=self.adapt_kwargs,\n        client_kwargs=self.client_kwargs,\n    )\n
        "},{"location":"integrations/prefect-dask/usage_guide/","title":"Usage Guide","text":"

        Below is a guide on how to use prefect-dask effectively.

        "},{"location":"integrations/prefect-dask/usage_guide/#running-tasks-on-dask","title":"Running tasks on Dask","text":"

        The DaskTaskRunner is a parallel task runner that submits tasks to the dask.distributed scheduler.

        By default, a temporary Dask cluster is created for the duration of the flow run.

        For example, this flow counts up to 10 in parallel (note that the output is not sequential).

        import time\n\nfrom prefect import flow, task\nfrom prefect_dask import DaskTaskRunner\n\n@task\ndef shout(number):\n    time.sleep(0.5)\n    print(f\"#{number}\")\n\n@flow(task_runner=DaskTaskRunner)\ndef count_to(highest_number):\n    for number in range(highest_number):\n        shout.submit(number)\n\nif __name__ == \"__main__\":\n    count_to(10)\n\n# outputs\n#3\n#7\n#2\n#6\n#4\n#0\n#1\n#5\n#8\n#9\n

        If you already have a Dask cluster running, either local or cloud hosted, you can provide the connection URL via an address argument.

        To configure your flow to use the DaskTaskRunner:

        1. Make sure the prefect-dask collection is installed as described earlier: pip install prefect-dask.
        2. In your flow code, import DaskTaskRunner from prefect_dask.task_runners.
        3. Assign it as the task runner when the flow is defined using the task_runner=DaskTaskRunner argument.

        For example, this flow uses the DaskTaskRunner configured to access an existing Dask cluster at http://my-dask-cluster.

        from prefect import flow\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@flow(task_runner=DaskTaskRunner(address=\"http://my-dask-cluster\"))\ndef my_flow():\n    ...\n

        DaskTaskRunner accepts the following optional parameters:

        • address: Address of a currently running Dask scheduler.
        • cluster_class: The cluster class to use when creating a temporary Dask cluster. It can be either the full class name (for example, \"distributed.LocalCluster\"), or the class itself.
        • cluster_kwargs: Additional kwargs to pass to the cluster_class when creating a temporary Dask cluster.
        • adapt_kwargs: Additional kwargs to pass to cluster.adapt when creating a temporary Dask cluster. Note that adaptive scaling is only enabled if adapt_kwargs are provided.
        • client_kwargs: Additional kwargs to use when creating a dask.distributed.Client.

        Multiprocessing safety

        Note that, because the DaskTaskRunner uses multiprocessing, calls to flows in scripts must be guarded with if __name__ == \"__main__\": or you will encounter warnings and errors.
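
        As a minimal sketch of that guard (the flow and task names here are illustrative, not part of the collection):

        from prefect import flow, task
        from prefect_dask import DaskTaskRunner

        @task
        def say_hello(name):
            print(f"hello {name}")

        @flow(task_runner=DaskTaskRunner())
        def greetings(names):
            for name in names:
                say_hello.submit(name)

        # Guard the entry point so Dask worker processes can import this module safely
        if __name__ == "__main__":
            greetings(["arthur", "trillian"])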

        If you don't provide the address of a Dask scheduler, Prefect creates a temporary local cluster automatically. The number of workers used is based on the number of cores available to your execution environment. The default provides a mix of processes and threads that should work well for most workloads. If you want to specify this explicitly, you can pass values for n_workers or threads_per_worker to cluster_kwargs.

        # Use 4 worker processes, each with 2 threads\nDaskTaskRunner(\n    cluster_kwargs={\"n_workers\": 4, \"threads_per_worker\": 2}\n)\n
        "},{"location":"integrations/prefect-dask/usage_guide/#distributing-dask-collections-across-workers","title":"Distributing Dask collections across workers","text":"

        If you use a Dask collection, such as a dask.DataFrame or dask.Bag, use one of the context managers get_dask_client or get_async_dask_client to distribute the work across workers and achieve parallel computations:

        import dask\nfrom prefect import flow, task\nfrom prefect_dask import DaskTaskRunner, get_dask_client\n\n@task\ndef compute_task():\n    with get_dask_client() as client:\n        df = dask.datasets.timeseries(\"2000\", \"2001\", partition_freq=\"4w\")\n        summary_df = df.describe().compute()\n    return summary_df\n\n@flow(task_runner=DaskTaskRunner())\ndef dask_flow():\n    prefect_future = compute_task.submit()\n    return prefect_future.result()\n\ndask_flow()\n

        The context managers can be used the same way in both flow run contexts and task run contexts.
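
        For instance, a rough sketch of calling get_dask_client directly from a flow body (the flow name and dataset are illustrative):

        import dask
        from prefect import flow
        from prefect_dask import DaskTaskRunner, get_dask_client

        @flow(task_runner=DaskTaskRunner())
        def flow_level_compute():
            # In a flow run context the task runner's client is already running,
            # so the context manager simply connects to it
            with get_dask_client() as client:
                df = dask.datasets.timeseries("2000", "2001", partition_freq="4w")
                return client.compute(df.describe(), sync=True)

        if __name__ == "__main__":
            flow_level_compute()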

        Resolving futures in sync client

        Note, by default, dask_collection.compute() returns concrete values while client.compute(dask_collection) returns Dask Futures. Therefore, if you call client.compute, you must resolve all futures before exiting out of the context manager by either:

        1. setting sync=True

          with get_dask_client() as client:\n    df = dask.datasets.timeseries(\"2000\", \"2001\", partition_freq=\"4w\")\n    summary_df = client.compute(df.describe(), sync=True)\n

        2. calling result()

          with get_dask_client() as client:\n    df = dask.datasets.timeseries(\"2000\", \"2001\", partition_freq=\"4w\")\n    summary_df = client.compute(df.describe()).result()\n
          For more information, visit the docs on Waiting on Futures.

        There is also an equivalent context manager for asynchronous tasks and flows: get_async_dask_client.

        import asyncio\n\nimport dask\nfrom prefect import flow, task\nfrom prefect_dask import DaskTaskRunner, get_async_dask_client\n\n@task\nasync def compute_task():\n    async with get_async_dask_client() as client:\n        df = dask.datasets.timeseries(\"2000\", \"2001\", partition_freq=\"4w\")\n        summary_df = await client.compute(df.describe())\n    return summary_df\n\n@flow(task_runner=DaskTaskRunner())\nasync def dask_flow():\n    prefect_future = await compute_task.submit()\n    return await prefect_future.result()\n\nasyncio.run(dask_flow())\n

        Resolving futures in async client

        With the async client, you do not need to set sync=True or call result().

        However, you must await client.compute(dask_collection) before exiting out of the context manager.

        To invoke compute from the Dask collection, set sync=False and call result() before exiting out of the context manager: await dask_collection.compute(sync=False).

        "},{"location":"integrations/prefect-dask/usage_guide/#using-a-temporary-cluster","title":"Using a temporary cluster","text":"

        The DaskTaskRunner is capable of creating a temporary cluster using any of Dask's cluster-manager options. This can be useful when you want each flow run to have its own Dask cluster, allowing for per-flow adaptive scaling.

        To configure, you need to provide a cluster_class. This can be:

        • A string specifying the import path to the cluster class (for example, \"dask_cloudprovider.aws.FargateCluster\")
        • The cluster class itself
        • A function for creating a custom cluster

        You can also configure cluster_kwargs, which takes a dictionary of keyword arguments to pass to cluster_class when starting the flow run.

        For example, to configure a flow to use a temporary dask_cloudprovider.aws.FargateCluster with 4 workers running with an image named my-prefect-image:

        DaskTaskRunner(\n    cluster_class=\"dask_cloudprovider.aws.FargateCluster\",\n    cluster_kwargs={\"n_workers\": 4, \"image\": \"my-prefect-image\"},\n)\n
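
        As a sketch of the callable option listed above (make_cluster is a hypothetical factory, using a LocalCluster only for illustration):

        from dask.distributed import LocalCluster
        from prefect_dask import DaskTaskRunner

        def make_cluster(**kwargs):
            # DaskTaskRunner calls the factory with asynchronous=True plus any
            # cluster_kwargs, so forward everything to the cluster class
            return LocalCluster(n_workers=2, **kwargs)

        DaskTaskRunner(cluster_class=make_cluster)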
        "},{"location":"integrations/prefect-dask/usage_guide/#connecting-to-an-existing-cluster","title":"Connecting to an existing cluster","text":"

        Multiple Prefect flow runs can all use the same existing Dask cluster. You might manage a single long-running Dask cluster (maybe using the Dask Helm Chart) and configure flows to connect to it during execution. This has a few downsides when compared to using a temporary cluster (as described above):

        • All workers in the cluster must have dependencies installed for all flows you intend to run.
        • Multiple flow runs may compete for resources. Dask tries to do a good job sharing resources between tasks, but you may still run into issues.

        That said, you may prefer managing a single long-running cluster.

        To configure a DaskTaskRunner to connect to an existing cluster, pass in the address of the scheduler to the address argument:

        # Connect to an existing cluster running at a specified address\nDaskTaskRunner(address=\"tcp://...\")\n
        "},{"location":"integrations/prefect-dask/usage_guide/#adaptive-scaling","title":"Adaptive scaling","text":"

        One nice feature of using a DaskTaskRunner is the ability to scale adaptively to the workload. Instead of specifying n_workers as a fixed number, this lets you specify a minimum and maximum number of workers to use, and the dask cluster will scale up and down as needed.

        To do this, you can pass adapt_kwargs to DaskTaskRunner. This takes the following fields:

        • maximum (int or None, optional): the maximum number of workers to scale to. Set to None for no maximum.
        • minimum (int or None, optional): the minimum number of workers to scale to. Set to None for no minimum.

        For example, here we configure a flow to run on a FargateCluster scaling up to at most 10 workers.

        DaskTaskRunner(\n    cluster_class=\"dask_cloudprovider.aws.FargateCluster\",\n    adapt_kwargs={\"maximum\": 10}\n)\n
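
        Both bounds can be combined; for example, a sketch that keeps between 1 and 10 workers:

        DaskTaskRunner(
            cluster_class="dask_cloudprovider.aws.FargateCluster",
            adapt_kwargs={"minimum": 1, "maximum": 10},
        )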
        "},{"location":"integrations/prefect-dask/usage_guide/#dask-annotations","title":"Dask annotations","text":"

        Dask annotations can be used to further control the behavior of tasks.

        For example, we can set the priority of tasks in the Dask scheduler:

        import dask\nfrom prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef show(x):\n    print(x)\n\n\n@flow(task_runner=DaskTaskRunner())\ndef my_flow():\n    with dask.annotate(priority=-10):\n        future = show(1)  # low priority task\n\n    with dask.annotate(priority=10):\n        future = show(2)  # high priority task\n

        Another common use case is resource annotations:

        import dask\nfrom prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef show(x):\n    print(x)\n\n# Create a `LocalCluster` with some resource annotations\n# Annotations are abstract in dask and not inferred from your system.\n# Here, we claim that our system has 1 GPU and 1 process available per worker\n@flow(\n    task_runner=DaskTaskRunner(\n        cluster_kwargs={\"n_workers\": 1, \"resources\": {\"GPU\": 1, \"process\": 1}}\n    )\n)\ndef my_flow():\n    with dask.annotate(resources={'GPU': 1}):\n        future = show(0)  # this task requires 1 GPU resource on a worker\n\n    with dask.annotate(resources={'process': 1}):\n        # These tasks each require 1 process on a worker; because we've \n        # specified that our cluster has 1 process per worker and 1 worker,\n        # these tasks will run sequentially\n        future = show(1)\n        future = show(2)\n        future = show(3)\n

        For more tips on how to use tasks and flows in a Collection, check out Using Collections!

        "},{"location":"integrations/prefect-dask/utils/","title":"Utils","text":""},{"location":"integrations/prefect-dask/utils/#prefect_dask.utils","title":"prefect_dask.utils","text":"

        Utils to use alongside prefect-dask.

        "},{"location":"integrations/prefect-dask/utils/#prefect_dask.utils.get_async_dask_client","title":"get_async_dask_client async","text":"

        Yields a temporary asynchronous dask client; this is useful for parallelizing operations on dask collections, such as a dask.DataFrame or dask.Bag.

        Without invoking this, workers do not automatically get a client to connect to the full cluster. The work is therefore attempted serially within the worker itself, potentially overwhelming that single worker.

        Parameters:

        Name Type Description Default timeout Optional[Union[int, float, str, timedelta]]

        Timeout after which to error out; has no effect in flow run contexts because the client has already started; Defaults to the distributed.comm.timeouts.connect configuration value.

        None client_kwargs Dict[str, Any]

        Additional keyword arguments to pass to distributed.Client, and overwrites inherited keyword arguments from the task runner, if any.

        {}

        Yields:

        Type Description AsyncGenerator[Client, None]

        A temporary asynchronous dask client.

        Examples:

        Use get_async_dask_client to distribute work across workers.

        import dask\nfrom prefect import flow, task\nfrom prefect_dask import DaskTaskRunner, get_async_dask_client\n\n@task\nasync def compute_task():\n    async with get_async_dask_client(timeout=\"120s\") as client:\n        df = dask.datasets.timeseries(\"2000\", \"2001\", partition_freq=\"4w\")\n        summary_df = await client.compute(df.describe())\n    return summary_df\n\n@flow(task_runner=DaskTaskRunner())\nasync def dask_flow():\n    prefect_future = await compute_task.submit()\n    return await prefect_future.result()\n\nasyncio.run(dask_flow())\n

        Source code in prefect_dask/utils.py
        @asynccontextmanager\nasync def get_async_dask_client(\n    timeout: Optional[Union[int, float, str, timedelta]] = None,\n    **client_kwargs: Dict[str, Any],\n) -> AsyncGenerator[Client, None]:\n    \"\"\"\n    Yields a temporary asynchronous dask client; this is useful\n    for parallelizing operations on dask collections,\n    such as a `dask.DataFrame` or `dask.Bag`.\n\n    Without invoking this, workers do not automatically get a client to connect\n    to the full cluster. Therefore, it will attempt perform work within the\n    worker itself serially, and potentially overwhelming the single worker.\n\n    Args:\n        timeout: Timeout after which to error out; has no effect in\n            flow run contexts because the client has already started;\n            Defaults to the `distributed.comm.timeouts.connect`\n            configuration value.\n        client_kwargs: Additional keyword arguments to pass to\n            `distributed.Client`, and overwrites inherited keyword arguments\n            from the task runner, if any.\n\n    Yields:\n        A temporary asynchronous dask client.\n\n    Examples:\n        Use `get_async_dask_client` to distribute work across workers.\n        ```python\n        import dask\n        from prefect import flow, task\n        from prefect_dask import DaskTaskRunner, get_async_dask_client\n\n        @task\n        async def compute_task():\n            async with get_async_dask_client(timeout=\"120s\") as client:\n                df = dask.datasets.timeseries(\"2000\", \"2001\", partition_freq=\"4w\")\n                summary_df = await client.compute(df.describe())\n            return summary_df\n\n        @flow(task_runner=DaskTaskRunner())\n        async def dask_flow():\n            prefect_future = await compute_task.submit()\n            return await prefect_future.result()\n\n        asyncio.run(dask_flow())\n        ```\n    \"\"\"\n    client_kwargs = _generate_client_kwargs(\n        async_client=True, timeout=timeout, **client_kwargs\n    )\n    async with Client(**client_kwargs) as client:\n        yield client\n
        "},{"location":"integrations/prefect-dask/utils/#prefect_dask.utils.get_dask_client","title":"get_dask_client","text":"

        Yields a temporary synchronous dask client; this is useful for parallelizing operations on dask collections, such as a dask.DataFrame or dask.Bag.

        Without invoking this, workers do not automatically get a client to connect to the full cluster. The work is therefore attempted serially within the worker itself, potentially overwhelming that single worker.

        When in an async context, we recommend using get_async_dask_client instead.

        Parameters:

        Name Type Description Default timeout Optional[Union[int, float, str, timedelta]]

        Timeout after which to error out; has no effect in flow run contexts because the client has already started; Defaults to the distributed.comm.timeouts.connect configuration value.

        None client_kwargs Dict[str, Any]

        Additional keyword arguments to pass to distributed.Client, and overwrites inherited keyword arguments from the task runner, if any.

        {}

        Yields:

        Type Description Client

        A temporary synchronous dask client.

        Examples:

        Use get_dask_client to distribute work across workers.

        import dask\nfrom prefect import flow, task\nfrom prefect_dask import DaskTaskRunner, get_dask_client\n\n@task\ndef compute_task():\n    with get_dask_client(timeout=\"120s\") as client:\n        df = dask.datasets.timeseries(\"2000\", \"2001\", partition_freq=\"4w\")\n        summary_df = client.compute(df.describe()).result()\n    return summary_df\n\n@flow(task_runner=DaskTaskRunner())\ndef dask_flow():\n    prefect_future = compute_task.submit()\n    return prefect_future.result()\n\ndask_flow()\n

        Source code in prefect_dask/utils.py
        @contextmanager\ndef get_dask_client(\n    timeout: Optional[Union[int, float, str, timedelta]] = None,\n    **client_kwargs: Dict[str, Any],\n) -> Generator[Client, None, None]:\n    \"\"\"\n    Yields a temporary synchronous dask client; this is useful\n    for parallelizing operations on dask collections,\n    such as a `dask.DataFrame` or `dask.Bag`.\n\n    Without invoking this, workers do not automatically get a client to connect\n    to the full cluster. Therefore, it will attempt perform work within the\n    worker itself serially, and potentially overwhelming the single worker.\n\n    When in an async context, we recommend using `get_async_dask_client` instead.\n\n    Args:\n        timeout: Timeout after which to error out; has no effect in\n            flow run contexts because the client has already started;\n            Defaults to the `distributed.comm.timeouts.connect`\n            configuration value.\n        client_kwargs: Additional keyword arguments to pass to\n            `distributed.Client`, and overwrites inherited keyword arguments\n            from the task runner, if any.\n\n    Yields:\n        A temporary synchronous dask client.\n\n    Examples:\n        Use `get_dask_client` to distribute work across workers.\n        ```python\n        import dask\n        from prefect import flow, task\n        from prefect_dask import DaskTaskRunner, get_dask_client\n\n        @task\n        def compute_task():\n            with get_dask_client(timeout=\"120s\") as client:\n                df = dask.datasets.timeseries(\"2000\", \"2001\", partition_freq=\"4w\")\n                summary_df = client.compute(df.describe()).result()\n            return summary_df\n\n        @flow(task_runner=DaskTaskRunner())\n        def dask_flow():\n            prefect_future = compute_task.submit()\n            return prefect_future.result()\n\n        dask_flow()\n        ```\n    \"\"\"\n    client_kwargs = _generate_client_kwargs(\n        async_client=False, timeout=timeout, **client_kwargs\n    )\n    with Client(**client_kwargs) as client:\n        yield client\n
        "},{"location":"integrations/prefect-databricks/","title":"prefect-databricks","text":"

        "},{"location":"integrations/prefect-databricks/#welcome","title":"Welcome!","text":"

        Prefect integrations for interacting with Databricks

        The tasks within this collection were created by a code generator using the service's OpenAPI spec.

        The service's REST API documentation can be found here.

        "},{"location":"integrations/prefect-databricks/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-databricks/#python-setup","title":"Python setup","text":"

        Requires an installation of Python 3.8+.

        We recommend using a Python virtual environment manager such as pipenv, conda, or virtualenv.

        These tasks are designed to work with Prefect 2. For more information about how to use Prefect, please refer to the Prefect documentation.

        "},{"location":"integrations/prefect-databricks/#installation","title":"Installation","text":"

        Install prefect-databricks with pip:

        pip install prefect-databricks\n
        "},{"location":"integrations/prefect-databricks/#lists-jobs-on-the-databricks-instance","title":"Lists jobs on the Databricks instance","text":"
        from prefect import flow\nfrom prefect_databricks import DatabricksCredentials\nfrom prefect_databricks.jobs import jobs_list\n\n\n@flow\ndef example_execute_endpoint_flow():\n    databricks_credentials = DatabricksCredentials.load(\"my-block\")\n    jobs = jobs_list(\n        databricks_credentials,\n        limit=5\n    )\n    return jobs\n\nexample_execute_endpoint_flow()\n
        "},{"location":"integrations/prefect-databricks/#use-with_options-to-customize-options-on-any-existing-task-or-flow","title":"Use with_options to customize options on any existing task or flow","text":"
        custom_example_execute_endpoint_flow = example_execute_endpoint_flow.with_options(\n    name=\"My custom flow name\",\n    retries=2,\n    retry_delay_seconds=10,\n)\n
        "},{"location":"integrations/prefect-databricks/#launch-a-new-cluster-and-run-a-databricks-notebook","title":"Launch a new cluster and run a Databricks notebook","text":"

        A notebook named example.ipynb on Databricks that accepts a name parameter:

        name = dbutils.widgets.get(\"name\")\nmessage = f\"Don't worry {name}, I got your request! Welcome to prefect-databricks!\"\nprint(message)\n

        Prefect flow that launches a new cluster to run example.ipynb:

        from prefect import flow\nfrom prefect_databricks import DatabricksCredentials\nfrom prefect_databricks.jobs import jobs_runs_submit\nfrom prefect_databricks.models.jobs import (\n    AutoScale,\n    AwsAttributes,\n    JobTaskSettings,\n    NotebookTask,\n    NewCluster,\n)\n\n\n@flow\ndef jobs_runs_submit_flow(notebook_path, **base_parameters):\n    databricks_credentials = DatabricksCredentials.load(\"my-block\")\n\n    # specify new cluster settings\n    aws_attributes = AwsAttributes(\n        availability=\"SPOT\",\n        zone_id=\"us-west-2a\",\n        ebs_volume_type=\"GENERAL_PURPOSE_SSD\",\n        ebs_volume_count=3,\n        ebs_volume_size=100,\n    )\n    auto_scale = AutoScale(min_workers=1, max_workers=2)\n    new_cluster = NewCluster(\n        aws_attributes=aws_attributes,\n        autoscale=auto_scale,\n        node_type_id=\"m4.large\",\n        spark_version=\"10.4.x-scala2.12\",\n        spark_conf={\"spark.speculation\": True},\n    )\n\n    # specify notebook to use and parameters to pass\n    notebook_task = NotebookTask(\n        notebook_path=notebook_path,\n        base_parameters=base_parameters,\n    )\n\n    # compile job task settings\n    job_task_settings = JobTaskSettings(\n        new_cluster=new_cluster,\n        notebook_task=notebook_task,\n        task_key=\"prefect-task\"\n    )\n\n    run = jobs_runs_submit(\n        databricks_credentials=databricks_credentials,\n        run_name=\"prefect-job\",\n        tasks=[job_task_settings]\n    )\n\n    return run\n\n\njobs_runs_submit_flow(\"/Users/username@gmail.com/example.ipynb\", name=\"Marvin\")\n

        Note that instead of using the built-in models, you can also pass valid JSON. For example, AutoScale(min_workers=1, max_workers=2) is equivalent to {\"min_workers\": 1, \"max_workers\": 2}.
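
        For example, a sketch of the job task settings above written with plain dictionaries instead of the models (values mirror the earlier example):

        job_task_settings = {
            "new_cluster": {
                "autoscale": {"min_workers": 1, "max_workers": 2},
                "node_type_id": "m4.large",
                "spark_version": "10.4.x-scala2.12",
                "spark_conf": {"spark.speculation": True},
            },
            "notebook_task": {"notebook_path": "/Users/username@gmail.com/example.ipynb"},
            "task_key": "prefect-task",
        }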

        "},{"location":"integrations/prefect-databricks/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-databricks/credentials/#prefect_databricks.credentials","title":"prefect_databricks.credentials","text":"

        Credential classes used to perform authenticated interactions with Databricks

        "},{"location":"integrations/prefect-databricks/credentials/#prefect_databricks.credentials.DatabricksCredentials","title":"DatabricksCredentials","text":"

        Bases: Block

        Block used to manage Databricks authentication.

        Attributes:

        Name Type Description databricks_instance str

        Databricks instance used in formatting the endpoint URL.

        token SecretStr

        The token to authenticate with Databricks.

        client_kwargs Optional[Dict[str, Any]]

        Additional keyword arguments to pass to AsyncClient.

        Examples:

        Load stored Databricks credentials:

        from prefect_databricks import DatabricksCredentials\ndatabricks_credentials_block = DatabricksCredentials.load(\"BLOCK_NAME\")\n
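
        As a sketch, a block can be created and registered once with .save (the instance URL and token below are placeholders), after which it can be loaded by name as shown above:

        from prefect_databricks import DatabricksCredentials

        DatabricksCredentials(
            databricks_instance="dbc-example.cloud.databricks.com",  # placeholder instance
            token="dapi-xxxx",                                       # placeholder token
        ).save("BLOCK_NAME")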

        Source code in prefect_databricks/credentials.py
        class DatabricksCredentials(Block):\n    \"\"\"\n    Block used to manage Databricks authentication.\n\n    Attributes:\n        databricks_instance:\n            Databricks instance used in formatting the endpoint URL.\n        token: The token to authenticate with Databricks.\n        client_kwargs: Additional keyword arguments to pass to AsyncClient.\n\n    Examples:\n        Load stored Databricks credentials:\n        ```python\n        from prefect_databricks import DatabricksCredentials\n        databricks_credentials_block = DatabricksCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Databricks Credentials\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5GTHI1PH2dTiantfps6Fnc/1c750fab7f4c14ea1b93a62b9fea6a94/databricks_logo_icon_170295.png?h=250\"  # noqa\n\n    databricks_instance: str = Field(\n        default=...,\n        description=\"Databricks instance used in formatting the endpoint URL.\",\n    )\n    token: SecretStr = Field(\n        default=..., description=\"The token to authenticate with Databricks.\"\n    )\n    client_kwargs: Optional[Dict[str, Any]] = Field(\n        default=None, description=\"Additional keyword arguments to pass to AsyncClient.\"\n    )\n\n    def get_client(self) -> AsyncClient:\n        \"\"\"\n        Gets an Databricks REST AsyncClient.\n\n        Returns:\n            An Databricks REST AsyncClient.\n\n        Example:\n            Gets a Databricks REST AsyncClient.\n            ```python\n            from prefect import flow\n            from prefect_databricks import DatabricksCredentials\n\n            @flow\n            def example_get_client_flow():\n                token = \"consumer_key\"\n                databricks_credentials = DatabricksCredentials(token=token)\n                client = databricks_credentials.get_client()\n                return client\n\n            example_get_client_flow()\n            ```\n        \"\"\"\n        base_url = f\"https://{self.databricks_instance}/api/\"\n\n        client_kwargs = self.client_kwargs or {}\n        client_kwargs[\"headers\"] = {\n            \"Authorization\": f\"Bearer {self.token.get_secret_value()}\"\n        }\n        client = AsyncClient(base_url=base_url, **client_kwargs)\n        return client\n
        "},{"location":"integrations/prefect-databricks/credentials/#prefect_databricks.credentials.DatabricksCredentials.get_client","title":"get_client","text":"

        Gets a Databricks REST AsyncClient.

        Returns:

        Type Description AsyncClient

        A Databricks REST AsyncClient.

        Example

        Gets a Databricks REST AsyncClient.

        from prefect import flow\nfrom prefect_databricks import DatabricksCredentials\n\n@flow\ndef example_get_client_flow():\n    token = \"consumer_key\"\n    databricks_credentials = DatabricksCredentials(token=token)\n    client = databricks_credentials.get_client()\n    return client\n\nexample_get_client_flow()\n

        Source code in prefect_databricks/credentials.py
        def get_client(self) -> AsyncClient:\n    \"\"\"\n    Gets an Databricks REST AsyncClient.\n\n    Returns:\n        An Databricks REST AsyncClient.\n\n    Example:\n        Gets a Databricks REST AsyncClient.\n        ```python\n        from prefect import flow\n        from prefect_databricks import DatabricksCredentials\n\n        @flow\n        def example_get_client_flow():\n            token = \"consumer_key\"\n            databricks_credentials = DatabricksCredentials(token=token)\n            client = databricks_credentials.get_client()\n            return client\n\n        example_get_client_flow()\n        ```\n    \"\"\"\n    base_url = f\"https://{self.databricks_instance}/api/\"\n\n    client_kwargs = self.client_kwargs or {}\n    client_kwargs[\"headers\"] = {\n        \"Authorization\": f\"Bearer {self.token.get_secret_value()}\"\n    }\n    client = AsyncClient(base_url=base_url, **client_kwargs)\n    return client\n
        "},{"location":"integrations/prefect-databricks/flows/","title":"Flows","text":""},{"location":"integrations/prefect-databricks/flows/#prefect_databricks.flows","title":"prefect_databricks.flows","text":"

        Module containing flows for interacting with Databricks

        "},{"location":"integrations/prefect-databricks/flows/#prefect_databricks.flows.DatabricksJobInternalError","title":"DatabricksJobInternalError","text":"

        Bases: Exception

        Raised when Databricks jobs runs submit encounters internal error

        Source code in prefect_databricks/flows.py
        class DatabricksJobInternalError(Exception):\n    \"\"\"Raised when Databricks jobs runs submit encounters internal error\"\"\"\n
        "},{"location":"integrations/prefect-databricks/flows/#prefect_databricks.flows.DatabricksJobRunTimedOut","title":"DatabricksJobRunTimedOut","text":"

        Bases: Exception

        Raised when Databricks jobs runs does not complete in the configured max wait seconds

        Source code in prefect_databricks/flows.py
        class DatabricksJobRunTimedOut(Exception):\n    \"\"\"\n    Raised when Databricks jobs runs does not complete in the configured max\n    wait seconds\n    \"\"\"\n
        "},{"location":"integrations/prefect-databricks/flows/#prefect_databricks.flows.DatabricksJobSkipped","title":"DatabricksJobSkipped","text":"

        Bases: Exception

        Raised when Databricks jobs runs submit skips

        Source code in prefect_databricks/flows.py
        class DatabricksJobSkipped(Exception):\n    \"\"\"Raised when Databricks jobs runs submit skips\"\"\"\n
        "},{"location":"integrations/prefect-databricks/flows/#prefect_databricks.flows.DatabricksJobTerminated","title":"DatabricksJobTerminated","text":"

        Bases: Exception

        Raised when Databricks jobs runs submit terminates

        Source code in prefect_databricks/flows.py
        class DatabricksJobTerminated(Exception):\n    \"\"\"Raised when Databricks jobs runs submit terminates\"\"\"\n
        "},{"location":"integrations/prefect-databricks/flows/#prefect_databricks.flows.jobs_runs_submit_and_wait_for_completion","title":"jobs_runs_submit_and_wait_for_completion async","text":"

        Flow that triggers a job run and waits for the triggered run to complete.

        Parameters:

        Name Type Description Default databricks_credentials DatabricksCredentials

        Credentials to use for authentication with Databricks.

        required tasks List[RunSubmitTaskSettings]

        Tasks to run, e.g.

        [\n    {\n        \"task_key\": \"Sessionize\",\n        \"description\": \"Extracts session data from events\",\n        \"depends_on\": [],\n        \"existing_cluster_id\": \"0923-164208-meows279\",\n        \"spark_jar_task\": {\n            \"main_class_name\": \"com.databricks.Sessionize\",\n            \"parameters\": [\"--data\", \"dbfs:/path/to/data.json\"],\n        },\n        \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}],\n        \"timeout_seconds\": 86400,\n    },\n    {\n        \"task_key\": \"Orders_Ingest\",\n        \"description\": \"Ingests order data\",\n        \"depends_on\": [],\n        \"existing_cluster_id\": \"0923-164208-meows279\",\n        \"spark_jar_task\": {\n            \"main_class_name\": \"com.databricks.OrdersIngest\",\n            \"parameters\": [\"--data\", \"dbfs:/path/to/order-data.json\"],\n        },\n        \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}],\n        \"timeout_seconds\": 86400,\n    },\n    {\n        \"task_key\": \"Match\",\n        \"description\": \"Matches orders with user sessions\",\n        \"depends_on\": [\n            {\"task_key\": \"Orders_Ingest\"},\n            {\"task_key\": \"Sessionize\"},\n        ],\n        \"new_cluster\": {\n            \"spark_version\": \"7.3.x-scala2.12\",\n            \"node_type_id\": \"i3.xlarge\",\n            \"spark_conf\": {\"spark.speculation\": True},\n            \"aws_attributes\": {\n                \"availability\": \"SPOT\",\n                \"zone_id\": \"us-west-2a\",\n            },\n            \"autoscale\": {\"min_workers\": 2, \"max_workers\": 16},\n        },\n        \"notebook_task\": {\n            \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n            \"base_parameters\": {\"name\": \"John Doe\", \"age\": \"35\"},\n        },\n        \"timeout_seconds\": 86400,\n    },\n]\n

        None run_name Optional[str]

        An optional name for the run. The default value is Untitled, e.g. A multitask job run.

        None git_source Optional[GitSource]

        This functionality is in Public Preview. An optional specification for a remote repository containing the notebooks used by this job's notebook tasks. Key-values: - git_url: URL of the repository to be cloned by this job. The maximum length is 300 characters, e.g. https://github.com/databricks/databricks-cli. - git_provider: Unique identifier of the service used to host the Git repository. The value is case insensitive, e.g. github. - git_branch: Name of the branch to be checked out and used by this job. This field cannot be specified in conjunction with git_tag or git_commit. The maximum length is 255 characters, e.g. main. - git_tag: Name of the tag to be checked out and used by this job. This field cannot be specified in conjunction with git_branch or git_commit. The maximum length is 255 characters, e.g. release-1.0.0. - git_commit: Commit to be checked out and used by this job. This field cannot be specified in conjunction with git_branch or git_tag. The maximum length is 64 characters, e.g. e0056d01. - git_snapshot: Read-only state of the remote repository at the time the job was run. This field is only included on job runs.

        None timeout_seconds Optional[int]

        An optional timeout applied to each run of this job. The default behavior is to have no timeout, e.g. 86400.

        None idempotency_token Optional[str]

        An optional token that can be used to guarantee the idempotency of job run requests. If a run with the provided token already exists, the request does not create a new run but returns the ID of the existing run instead. If a run with the provided token is deleted, an error is returned. If you specify the idempotency token, upon failure you can retry until the request succeeds. Databricks guarantees that exactly one run is launched with that idempotency token. This token must have at most 64 characters. For more information, see How to ensure idempotency for jobs, e.g. 8f018174-4792-40d5-bcbc-3e6a527352c8.

        None access_control_list Optional[List[AccessControlRequest]]

        List of permissions to set on the job.

        None max_wait_seconds int

        Maximum number of seconds to wait for the entire flow to complete.

        900 poll_frequency_seconds int

        Number of seconds to wait in between checks for run completion.

        10 return_metadata bool

        When True, method will return a tuple of notebook output as well as job run metadata; by default though, the method only returns notebook output

        False **jobs_runs_submit_kwargs Dict[str, Any]

        Additional keyword arguments to pass to jobs_runs_submit.

        {}

        Returns:

        Type Description Union[NotebookOutput, Tuple[NotebookOutput, JobMetadata]]

        Either a dict or a tuple (depends on return_metadata) comprised of

        Union[NotebookOutput, Tuple[NotebookOutput, JobMetadata]]
        • task_notebook_outputs: dictionary of task keys to its corresponding notebook output; this is the only object returned by default from this method
        Union[NotebookOutput, Tuple[NotebookOutput, JobMetadata]]
        • jobs_runs_metadata: dictionary containing IDs of the jobs runs tasks; this is only returned if return_metadata=True.

        Examples:

        Submit jobs runs and wait.

        from prefect import flow\nfrom prefect_databricks import DatabricksCredentials\nfrom prefect_databricks.flows import jobs_runs_submit_and_wait_for_completion\nfrom prefect_databricks.models.jobs import (\n    AutoScale,\n    AwsAttributes,\n    JobTaskSettings,\n    NotebookTask,\n    NewCluster,\n)\n\n@flow\ndef jobs_runs_submit_and_wait_for_completion_flow(notebook_path, **base_parameters):\n    databricks_credentials = await DatabricksCredentials.load(\"BLOCK_NAME\")\n\n    # specify new cluster settings\n    aws_attributes = AwsAttributes(\n        availability=\"SPOT\",\n        zone_id=\"us-west-2a\",\n        ebs_volume_type=\"GENERAL_PURPOSE_SSD\",\n        ebs_volume_count=3,\n        ebs_volume_size=100,\n    )\n    auto_scale = AutoScale(min_workers=1, max_workers=2)\n    new_cluster = NewCluster(\n        aws_attributes=aws_attributes,\n        autoscale=auto_scale,\n        node_type_id=\"m4.large\",\n        spark_version=\"10.4.x-scala2.12\",\n        spark_conf={\"spark.speculation\": True},\n    )\n\n    # specify notebook to use and parameters to pass\n    notebook_task = NotebookTask(\n        notebook_path=notebook_path,\n        base_parameters=base_parameters,\n    )\n\n    # compile job task settings\n    job_task_settings = JobTaskSettings(\n        new_cluster=new_cluster,\n        notebook_task=notebook_task,\n        task_key=\"prefect-task\"\n    )\n\n    multi_task_runs = jobs_runs_submit_and_wait_for_completion(\n        databricks_credentials=databricks_credentials,\n        run_name=\"prefect-job\",\n        tasks=[job_task_settings]\n    )\n\n    return multi_task_runs\n

        Source code in prefect_databricks/flows.py
        @flow(\n    name=\"Submit jobs runs and wait for completion\",\n    description=(\n        \"Triggers a Databricks jobs runs and waits for the \"\n        \"triggered runs to complete.\"\n    ),\n)\nasync def jobs_runs_submit_and_wait_for_completion(\n    databricks_credentials: DatabricksCredentials,\n    tasks: List[RunSubmitTaskSettings] = None,\n    run_name: Optional[str] = None,\n    max_wait_seconds: int = 900,\n    poll_frequency_seconds: int = 10,\n    git_source: Optional[GitSource] = None,\n    timeout_seconds: Optional[int] = None,\n    idempotency_token: Optional[str] = None,\n    access_control_list: Optional[List[AccessControlRequest]] = None,\n    return_metadata: bool = False,\n    **jobs_runs_submit_kwargs: Dict[str, Any],\n) -> Union[NotebookOutput, Tuple[NotebookOutput, JobMetadata]]:\n    \"\"\"\n    Flow that triggers a job run and waits for the triggered run to complete.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        tasks: Tasks to run, e.g.\n            ```\n            [\n                {\n                    \"task_key\": \"Sessionize\",\n                    \"description\": \"Extracts session data from events\",\n                    \"depends_on\": [],\n                    \"existing_cluster_id\": \"0923-164208-meows279\",\n                    \"spark_jar_task\": {\n                        \"main_class_name\": \"com.databricks.Sessionize\",\n                        \"parameters\": [\"--data\", \"dbfs:/path/to/data.json\"],\n                    },\n                    \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}],\n                    \"timeout_seconds\": 86400,\n                },\n                {\n                    \"task_key\": \"Orders_Ingest\",\n                    \"description\": \"Ingests order data\",\n                    \"depends_on\": [],\n                    \"existing_cluster_id\": \"0923-164208-meows279\",\n                    \"spark_jar_task\": {\n                        \"main_class_name\": \"com.databricks.OrdersIngest\",\n                        \"parameters\": [\"--data\", \"dbfs:/path/to/order-data.json\"],\n                    },\n                    \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}],\n                    \"timeout_seconds\": 86400,\n                },\n                {\n                    \"task_key\": \"Match\",\n                    \"description\": \"Matches orders with user sessions\",\n                    \"depends_on\": [\n                        {\"task_key\": \"Orders_Ingest\"},\n                        {\"task_key\": \"Sessionize\"},\n                    ],\n                    \"new_cluster\": {\n                        \"spark_version\": \"7.3.x-scala2.12\",\n                        \"node_type_id\": \"i3.xlarge\",\n                        \"spark_conf\": {\"spark.speculation\": True},\n                        \"aws_attributes\": {\n                            \"availability\": \"SPOT\",\n                            \"zone_id\": \"us-west-2a\",\n                        },\n                        \"autoscale\": {\"min_workers\": 2, \"max_workers\": 16},\n                    },\n                    \"notebook_task\": {\n                        \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n                        \"base_parameters\": {\"name\": \"John Doe\", \"age\": \"35\"},\n                    },\n                    \"timeout_seconds\": 86400,\n                },\n  
          ]\n            ```\n        run_name:\n            An optional name for the run. The default value is `Untitled`, e.g. `A\n            multitask job run`.\n        git_source:\n            This functionality is in Public Preview.  An optional specification for\n            a remote repository containing the notebooks used by this\n            job's notebook tasks. Key-values:\n            - git_url:\n                URL of the repository to be cloned by this job. The maximum\n                length is 300 characters, e.g.\n                `https://github.com/databricks/databricks-cli`.\n            - git_provider:\n                Unique identifier of the service used to host the Git\n                repository. The value is case insensitive, e.g. `github`.\n            - git_branch:\n                Name of the branch to be checked out and used by this job.\n                This field cannot be specified in conjunction with git_tag\n                or git_commit. The maximum length is 255 characters, e.g.\n                `main`.\n            - git_tag:\n                Name of the tag to be checked out and used by this job. This\n                field cannot be specified in conjunction with git_branch or\n                git_commit. The maximum length is 255 characters, e.g.\n                `release-1.0.0`.\n            - git_commit:\n                Commit to be checked out and used by this job. This field\n                cannot be specified in conjunction with git_branch or\n                git_tag. The maximum length is 64 characters, e.g.\n                `e0056d01`.\n            - git_snapshot:\n                Read-only state of the remote repository at the time the job was run.\n                            This field is only included on job runs.\n        timeout_seconds:\n            An optional timeout applied to each run of this job. The default\n            behavior is to have no timeout, e.g. `86400`.\n        idempotency_token:\n            An optional token that can be used to guarantee the idempotency of job\n            run requests. If a run with the provided token already\n            exists, the request does not create a new run but returns\n            the ID of the existing run instead. If a run with the\n            provided token is deleted, an error is returned.  If you\n            specify the idempotency token, upon failure you can retry\n            until the request succeeds. Databricks guarantees that\n            exactly one run is launched with that idempotency token.\n            This token must have at most 64 characters.  For more\n            information, see [How to ensure idempotency for\n            jobs](https://kb.databricks.com/jobs/jobs-idempotency.html),\n            e.g. 
`8f018174-4792-40d5-bcbc-3e6a527352c8`.\n        access_control_list:\n            List of permissions to set on the job.\n        max_wait_seconds: Maximum number of seconds to wait for the entire flow to complete.\n        poll_frequency_seconds: Number of seconds to wait in between checks for\n            run completion.\n        return_metadata: When True, method will return a tuple of notebook output as well as\n            job run metadata; by default though, the method only returns notebook output\n        **jobs_runs_submit_kwargs: Additional keyword arguments to pass to `jobs_runs_submit`.\n\n    Returns:\n        Either a dict or a tuple (depends on `return_metadata`) comprised of\n        * task_notebook_outputs: dictionary of task keys to its corresponding notebook output;\n          this is the only object returned by default from this method\n        * jobs_runs_metadata: dictionary containing IDs of the jobs runs tasks; this is only\n          returned if `return_metadata=True`.\n\n    Examples:\n        Submit jobs runs and wait.\n        ```python\n        from prefect import flow\n        from prefect_databricks import DatabricksCredentials\n        from prefect_databricks.flows import jobs_runs_submit_and_wait_for_completion\n        from prefect_databricks.models.jobs import (\n            AutoScale,\n            AwsAttributes,\n            JobTaskSettings,\n            NotebookTask,\n            NewCluster,\n        )\n\n        @flow\n        def jobs_runs_submit_and_wait_for_completion_flow(notebook_path, **base_parameters):\n            databricks_credentials = await DatabricksCredentials.load(\"BLOCK_NAME\")\n\n            # specify new cluster settings\n            aws_attributes = AwsAttributes(\n                availability=\"SPOT\",\n                zone_id=\"us-west-2a\",\n                ebs_volume_type=\"GENERAL_PURPOSE_SSD\",\n                ebs_volume_count=3,\n                ebs_volume_size=100,\n            )\n            auto_scale = AutoScale(min_workers=1, max_workers=2)\n            new_cluster = NewCluster(\n                aws_attributes=aws_attributes,\n                autoscale=auto_scale,\n                node_type_id=\"m4.large\",\n                spark_version=\"10.4.x-scala2.12\",\n                spark_conf={\"spark.speculation\": True},\n            )\n\n            # specify notebook to use and parameters to pass\n            notebook_task = NotebookTask(\n                notebook_path=notebook_path,\n                base_parameters=base_parameters,\n            )\n\n            # compile job task settings\n            job_task_settings = JobTaskSettings(\n                new_cluster=new_cluster,\n                notebook_task=notebook_task,\n                task_key=\"prefect-task\"\n            )\n\n            multi_task_runs = jobs_runs_submit_and_wait_for_completion(\n                databricks_credentials=databricks_credentials,\n                run_name=\"prefect-job\",\n                tasks=[job_task_settings]\n            )\n\n            return multi_task_runs\n        ```\n    \"\"\"  # noqa\n    logger = get_run_logger()\n\n    # submit the jobs runs\n    multi_task_jobs_runs_future = await jobs_runs_submit.submit(\n        databricks_credentials=databricks_credentials,\n        tasks=tasks,\n        run_name=run_name,\n        git_source=git_source,\n        timeout_seconds=timeout_seconds,\n        idempotency_token=idempotency_token,\n        access_control_list=access_control_list,\n        **jobs_runs_submit_kwargs,\n   
 )\n    multi_task_jobs_runs = await multi_task_jobs_runs_future.result()\n    multi_task_jobs_runs_id = multi_task_jobs_runs[\"run_id\"]\n\n    # wait for all the jobs runs to complete in a separate flow\n    # for a cleaner radar interface\n    jobs_runs_state, jobs_runs_metadata = await jobs_runs_wait_for_completion(\n        multi_task_jobs_runs_id=multi_task_jobs_runs_id,\n        databricks_credentials=databricks_credentials,\n        run_name=run_name,\n        max_wait_seconds=max_wait_seconds,\n        poll_frequency_seconds=poll_frequency_seconds,\n    )\n\n    # fetch the state results\n    jobs_runs_life_cycle_state = jobs_runs_state[\"life_cycle_state\"]\n    jobs_runs_state_message = jobs_runs_state[\"state_message\"]\n\n    # return results or raise error\n    if jobs_runs_life_cycle_state == RunLifeCycleState.terminated.value:\n        jobs_runs_result_state = jobs_runs_state.get(\"result_state\", None)\n        if jobs_runs_result_state == RunResultState.success.value:\n            task_notebook_outputs = {}\n            for task in jobs_runs_metadata[\"tasks\"]:\n                task_key = task[\"task_key\"]\n                task_run_id = task[\"run_id\"]\n                task_run_output_future = await jobs_runs_get_output.submit(\n                    run_id=task_run_id,\n                    databricks_credentials=databricks_credentials,\n                )\n                task_run_output = await task_run_output_future.result()\n                task_run_notebook_output = task_run_output.get(\"notebook_output\", {})\n                task_notebook_outputs[task_key] = task_run_notebook_output\n            logger.info(\n                \"Databricks Jobs Runs Submit (%s ID %s) completed successfully!\",\n                run_name,\n                multi_task_jobs_runs_id,\n            )\n            if return_metadata:\n                return task_notebook_outputs, jobs_runs_metadata\n            return task_notebook_outputs\n        else:\n            raise DatabricksJobTerminated(\n                f\"Databricks Jobs Runs Submit \"\n                f\"({run_name} ID {multi_task_jobs_runs_id}) \"\n                f\"terminated with result state, {jobs_runs_result_state}: \"\n                f\"{jobs_runs_state_message}\"\n            )\n    elif jobs_runs_life_cycle_state == RunLifeCycleState.skipped.value:\n        raise DatabricksJobSkipped(\n            f\"Databricks Jobs Runs Submit ({run_name} ID \"\n            f\"{multi_task_jobs_runs_id}) was skipped: {jobs_runs_state_message}.\",\n        )\n    elif jobs_runs_life_cycle_state == RunLifeCycleState.internalerror.value:\n        raise DatabricksJobInternalError(\n            f\"Databricks Jobs Runs Submit ({run_name} ID \"\n            f\"{multi_task_jobs_runs_id}) \"\n            f\"encountered an internal error: {jobs_runs_state_message}.\",\n        )\n
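        As a minimal, hedged sketch of calling the flow above, the example below submits a single notebook task with an idempotency token so that a retried submission attaches to the existing Databricks run instead of launching a duplicate. The block name, cluster ID, notebook path, and token are placeholder values, and the plain-dict task spec follows the style shown in the docstring above.

        ```python
        from prefect import flow
        from prefect_databricks import DatabricksCredentials
        from prefect_databricks.flows import jobs_runs_submit_and_wait_for_completion


        @flow
        async def submit_with_idempotency(notebook_path: str):
            # "BLOCK_NAME" is a placeholder for your saved credentials block
            databricks_credentials = await DatabricksCredentials.load("BLOCK_NAME")

            # a plain dict task spec, as in the docstring examples above
            task = {
                "task_key": "prefect-task",
                "notebook_task": {"notebook_path": notebook_path},
                "existing_cluster_id": "0923-164208-meows279",  # placeholder cluster ID
            }

            # reusing the same token means a retried submission returns the
            # existing run ID instead of starting a second run
            return await jobs_runs_submit_and_wait_for_completion(
                databricks_credentials=databricks_credentials,
                run_name="prefect-job",
                tasks=[task],
                idempotency_token="my-unique-token-123",  # placeholder token
                max_wait_seconds=1800,
                poll_frequency_seconds=30,
            )
        ```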
        "},{"location":"integrations/prefect-databricks/flows/#prefect_databricks.flows.jobs_runs_wait_for_completion","title":"jobs_runs_wait_for_completion async","text":"

        Flow that waits for the triggered jobs runs to complete.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | run_name | Optional[str] | The name of the jobs runs task. | None |
        | multi_task_jobs_runs_id | int | The ID of the jobs runs task to watch. | required |
        | databricks_credentials | DatabricksCredentials | Credentials to use for authentication with Databricks. | required |
        | max_wait_seconds | int | Maximum number of seconds to wait for the entire flow to complete. | 900 |
        | poll_frequency_seconds | int | Number of seconds to wait in between checks for run completion. | 10 |

        Returns:

        | Name | Description |
        | --- | --- |
        | jobs_runs_state | A dict containing the jobs runs life cycle state and message. |
        | jobs_runs_metadata | A dict containing IDs of the jobs runs tasks. |

        Example

        Waits for completion on jobs runs.

        from prefect import flow\nfrom prefect_databricks import DatabricksCredentials\nfrom prefect_databricks.flows import jobs_runs_wait_for_completion\n\n@flow\ndef jobs_runs_wait_for_completion_flow():\n    databricks_credentials = DatabricksCredentials.load(\"BLOCK_NAME\")\n    return jobs_runs_wait_for_completion(\n        multi_task_jobs_run_id=45429,\n        databricks_credentials=databricks_credentials,\n        run_name=\"my_run_name\",\n        max_wait_seconds=1800,  # 30 minutes\n        poll_frequency_seconds=120,  # 2 minutes\n    )\n

        Source code in prefect_databricks/flows.py
        @flow(\n    name=\"Wait for completion of jobs runs\",\n    description=\"Waits for the jobs runs to finish running\",\n)\nasync def jobs_runs_wait_for_completion(\n    multi_task_jobs_runs_id: int,\n    databricks_credentials: DatabricksCredentials,\n    run_name: Optional[str] = None,\n    max_wait_seconds: int = 900,\n    poll_frequency_seconds: int = 10,\n):\n    \"\"\"\n    Flow that triggers a job run and waits for the triggered run to complete.\n\n    Args:\n        run_name: The name of the jobs runs task.\n        multi_task_jobs_run_id: The ID of the jobs runs task to watch.\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        max_wait_seconds:\n            Maximum number of seconds to wait for the entire flow to complete.\n        poll_frequency_seconds: Number of seconds to wait in between checks for\n            run completion.\n\n    Returns:\n        jobs_runs_state: A dict containing the jobs runs life cycle state and message.\n        jobs_runs_metadata: A dict containing IDs of the jobs runs tasks.\n\n    Example:\n        Waits for completion on jobs runs.\n        ```python\n        from prefect import flow\n        from prefect_databricks import DatabricksCredentials\n        from prefect_databricks.flows import jobs_runs_wait_for_completion\n\n        @flow\n        def jobs_runs_wait_for_completion_flow():\n            databricks_credentials = DatabricksCredentials.load(\"BLOCK_NAME\")\n            return jobs_runs_wait_for_completion(\n                multi_task_jobs_run_id=45429,\n                databricks_credentials=databricks_credentials,\n                run_name=\"my_run_name\",\n                max_wait_seconds=1800,  # 30 minutes\n                poll_frequency_seconds=120,  # 2 minutes\n            )\n        ```\n    \"\"\"\n    logger = get_run_logger()\n\n    seconds_waited_for_run_completion = 0\n    wait_for = []\n\n    jobs_status = {}\n    tasks_status = {}\n    while seconds_waited_for_run_completion <= max_wait_seconds:\n        jobs_runs_metadata_future = await jobs_runs_get.submit(\n            run_id=multi_task_jobs_runs_id,\n            databricks_credentials=databricks_credentials,\n            wait_for=wait_for,\n        )\n        wait_for = [jobs_runs_metadata_future]\n\n        jobs_runs_metadata = await jobs_runs_metadata_future.result()\n        jobs_status = _update_and_log_state_changes(\n            jobs_status, jobs_runs_metadata, logger, \"Job\"\n        )\n        jobs_runs_metadata_tasks = jobs_runs_metadata.get(\"tasks\", [])\n        for task_metadata in jobs_runs_metadata_tasks:\n            tasks_status = _update_and_log_state_changes(\n                tasks_status, task_metadata, logger, \"Task\"\n            )\n\n        jobs_runs_state = jobs_runs_metadata.get(\"state\", {})\n        jobs_runs_life_cycle_state = jobs_runs_state[\"life_cycle_state\"]\n        if jobs_runs_life_cycle_state in TERMINAL_STATUS_CODES:\n            return jobs_runs_state, jobs_runs_metadata\n\n        logger.info(\"Waiting for %s seconds.\", poll_frequency_seconds)\n        await asyncio.sleep(poll_frequency_seconds)\n        seconds_waited_for_run_completion += poll_frequency_seconds\n\n    raise DatabricksJobRunTimedOut(\n        f\"Max wait time of {max_wait_seconds} seconds exceeded while waiting \"\n        f\"for job run ({run_name} ID {multi_task_jobs_runs_id})\"\n    )\n
        "},{"location":"integrations/prefect-databricks/jobs/","title":"Jobs","text":""},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs","title":"prefect_databricks.jobs","text":"

        This is a module containing tasks for interacting with Databricks jobs.

        "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_create","title":"jobs_create async","text":"

        Create a new job.

        Parameters:

        databricks_credentials (DatabricksCredentials, required): Credentials to use for authentication with Databricks.

        name (str, default 'Untitled'): An optional name for the job, e.g. A multitask job.

        tags (Dict, default None): A map of tags associated with the job. These are forwarded to the cluster as cluster tags for jobs clusters, and are subject to the same limitations as cluster tags. A maximum of 25 tags can be added to the job, e.g.

        {\"cost-center\": \"engineering\", \"team\": \"jobs\"}\n

        tasks (Optional[List[JobTaskSettings]], default None): A list of task specifications to be executed by this job, e.g.

        [\n    {\n        \"task_key\": \"Sessionize\",\n        \"description\": \"Extracts session data from events\",\n        \"depends_on\": [],\n        \"existing_cluster_id\": \"0923-164208-meows279\",\n        \"spark_jar_task\": {\n            \"main_class_name\": \"com.databricks.Sessionize\",\n            \"parameters\": [\"--data\", \"dbfs:/path/to/data.json\"],\n        },\n        \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}],\n        \"timeout_seconds\": 86400,\n        \"max_retries\": 3,\n        \"min_retry_interval_millis\": 2000,\n        \"retry_on_timeout\": False,\n    },\n    {\n        \"task_key\": \"Orders_Ingest\",\n        \"description\": \"Ingests order data\",\n        \"depends_on\": [],\n        \"job_cluster_key\": \"auto_scaling_cluster\",\n        \"spark_jar_task\": {\n            \"main_class_name\": \"com.databricks.OrdersIngest\",\n            \"parameters\": [\"--data\", \"dbfs:/path/to/order-data.json\"],\n        },\n        \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}],\n        \"timeout_seconds\": 86400,\n        \"max_retries\": 3,\n        \"min_retry_interval_millis\": 2000,\n        \"retry_on_timeout\": False,\n    },\n    {\n        \"task_key\": \"Match\",\n        \"description\": \"Matches orders with user sessions\",\n        \"depends_on\": [\n            {\"task_key\": \"Orders_Ingest\"},\n            {\"task_key\": \"Sessionize\"},\n        ],\n        \"new_cluster\": {\n            \"spark_version\": \"7.3.x-scala2.12\",\n            \"node_type_id\": \"i3.xlarge\",\n            \"spark_conf\": {\"spark.speculation\": True},\n            \"aws_attributes\": {\n                \"availability\": \"SPOT\",\n                \"zone_id\": \"us-west-2a\",\n            },\n            \"autoscale\": {\"min_workers\": 2, \"max_workers\": 16},\n        },\n        \"notebook_task\": {\n            \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n            \"source\": \"WORKSPACE\",\n            \"base_parameters\": {\"name\": \"John Doe\", \"age\": \"35\"},\n        },\n        \"timeout_seconds\": 86400,\n        \"max_retries\": 3,\n        \"min_retry_interval_millis\": 2000,\n        \"retry_on_timeout\": False,\n    },\n]\n

        job_clusters (Optional[List[JobCluster]], default None): A list of job cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster. You must declare dependent libraries in task settings, e.g.

        [\n    {\n        \"job_cluster_key\": \"auto_scaling_cluster\",\n        \"new_cluster\": {\n            \"spark_version\": \"7.3.x-scala2.12\",\n            \"node_type_id\": \"i3.xlarge\",\n            \"spark_conf\": {\"spark.speculation\": True},\n            \"aws_attributes\": {\n                \"availability\": \"SPOT\",\n                \"zone_id\": \"us-west-2a\",\n            },\n            \"autoscale\": {\"min_workers\": 2, \"max_workers\": 16},\n        },\n    }\n]\n

        email_notifications (JobEmailNotifications, default None): An optional set of email addresses that is notified when runs of this job begin or complete as well as when this job is deleted. The default behavior is to not send any emails. Key-values:
        - on_start: A list of email addresses to be notified when a run begins. If not specified on job creation, reset, or update, the list is empty, and notifications are not sent, e.g.
        [\"user.name@databricks.com\"]\n
        - on_success: A list of email addresses to be notified when a run successfully completes. A run is considered to have completed successfully if it ends with a TERMINATED life_cycle_state and a SUCCESSFUL result_state. If not specified on job creation, reset, or update, the list is empty, and notifications are not sent, e.g.
        [\"user.name@databricks.com\"]\n
        - on_failure: A list of email addresses to notify when a run completes unsuccessfully. A run is considered unsuccessful if it ends with an INTERNAL_ERROR life_cycle_state or a SKIPPED, FAILED, or TIMED_OUT result_state. If not specified on job creation, reset, or update, or the list is empty, then notifications are not sent. Job-level failure notifications are sent only once after the entire job run (including all of its retries) has failed. Notifications are not sent when failed job runs are retried. To receive a failure notification after every failed task (including every failed retry), use task-level notifications instead, e.g.
        [\"user.name@databricks.com\"]\n
        - no_alert_for_skipped_runs: If true, do not send email to recipients specified in on_failure if the run is skipped.

        webhook_notifications (WebhookNotifications, default None): A collection of system notification IDs to notify when runs of this job begin or complete. The default behavior is to not send any system notifications. Key-values:
        - on_start: An optional list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified for the on_start property, e.g.
        [\n    {\"id\": \"03dd86e4-57ef-4818-a950-78e41a1d71ab\"},\n    {\"id\": \"0481e838-0a59-4eff-9541-a4ca6f149574\"},\n]\n
        - on_success: An optional list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified for the on_success property, e.g.
        [{\"id\": \"03dd86e4-57ef-4818-a950-78e41a1d71ab\"}]\n
        - on_failure: An optional list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified for the on_failure property, e.g.
        [{\"id\": \"0481e838-0a59-4eff-9541-a4ca6f149574\"}]\n

        timeout_seconds (Optional[int], default None): An optional timeout applied to each run of this job. The default behavior is to have no timeout, e.g. 86400.

        schedule (CronSchedule, default None): An optional periodic schedule for this job. The default behavior is that the job only runs when triggered by clicking "Run Now" in the Jobs UI or sending an API request to runNow. Key-values:
        - quartz_cron_expression: A Cron expression using Quartz syntax that describes the schedule for a job. See Cron Trigger for details. This field is required, e.g. 20 30 * * * ?.
        - timezone_id: A Java timezone ID. The schedule for a job is resolved with respect to this timezone. See Java TimeZone for details. This field is required, e.g. Europe/London.
        - pause_status: Indicate whether this schedule is paused or not, e.g. PAUSED.

        max_concurrent_runs (Optional[int], default None): An optional maximum allowed number of concurrent runs of the job. Set this value if you want to be able to execute multiple runs of the same job concurrently. This is useful, for example, if you trigger your job on a frequent schedule and want to allow consecutive runs to overlap with each other, or if you want to trigger multiple runs that differ by their input parameters. This setting affects only new runs. For example, suppose the job's concurrency is 4 and there are 4 concurrent active runs: setting the concurrency to 3 won't kill any of the active runs, but from then on, new runs are skipped unless there are fewer than 3 active runs. This value cannot exceed 1000. Setting this value to 0 causes all new runs to be skipped. The default behavior is to allow only 1 concurrent run, e.g. 10.

        git_source (GitSource, default None): This functionality is in Public Preview. An optional specification for a remote repository containing the notebooks used by this job's notebook tasks, e.g. {"git_url": "https://github.com/databricks/databricks-cli", "git_branch": "main", "git_provider": "gitHub"}. Key-values:
        - git_url: URL of the repository to be cloned by this job. The maximum length is 300 characters, e.g. https://github.com/databricks/databricks-cli.
        - git_provider: Unique identifier of the service used to host the Git repository. The value is case insensitive, e.g. github.
        - git_branch: Name of the branch to be checked out and used by this job. This field cannot be specified in conjunction with git_tag or git_commit. The maximum length is 255 characters, e.g. main.
        - git_tag: Name of the tag to be checked out and used by this job. This field cannot be specified in conjunction with git_branch or git_commit. The maximum length is 255 characters, e.g. release-1.0.0.
        - git_commit: Commit to be checked out and used by this job. This field cannot be specified in conjunction with git_branch or git_tag. The maximum length is 64 characters, e.g. e0056d01.
        - git_snapshot: Read-only state of the remote repository at the time the job was run. This field is only included on job runs.

        format (Optional[str], default None): Indicates the format of the job. This field is ignored in Create/Update/Reset calls. When using the Jobs API 2.1, this value is always set to 'MULTI_TASK', e.g. MULTI_TASK.

        access_control_list (Optional[List[AccessControlRequest]], default None): List of permissions to set on the job.

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | Upon success, a dict of the response with job_id: int. |

        API Endpoint:

        /2.1/jobs/create

        API Responses:

        | Response | Description |
        | --- | --- |
        | 200 | Job was created successfully. |
        | 400 | The request was malformed. See JSON response for error details. |
        | 401 | The request was unauthorized. |
        | 500 | The request was not handled correctly due to a server error. |

        Source code in prefect_databricks/jobs.py
        @task\nasync def jobs_create(\n    databricks_credentials: \"DatabricksCredentials\",\n    name: str = \"Untitled\",\n    tags: Dict = None,\n    tasks: Optional[List[\"models.JobTaskSettings\"]] = None,\n    job_clusters: Optional[List[\"models.JobCluster\"]] = None,\n    email_notifications: \"models.JobEmailNotifications\" = None,\n    webhook_notifications: \"models.WebhookNotifications\" = None,\n    timeout_seconds: Optional[int] = None,\n    schedule: \"models.CronSchedule\" = None,\n    max_concurrent_runs: Optional[int] = None,\n    git_source: \"models.GitSource\" = None,\n    format: Optional[str] = None,\n    access_control_list: Optional[List[\"models.AccessControlRequest\"]] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Create a new job.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        name:\n            An optional name for the job, e.g. `A multitask job`.\n        tags:\n            A map of tags associated with the job. These are forwarded to the\n            cluster as cluster tags for jobs clusters, and are subject\n            to the same limitations as cluster tags. A maximum of 25\n            tags can be added to the job, e.g.\n            ```\n            {\"cost-center\": \"engineering\", \"team\": \"jobs\"}\n            ```\n        tasks:\n            A list of task specifications to be executed by this job, e.g.\n            ```\n            [\n                {\n                    \"task_key\": \"Sessionize\",\n                    \"description\": \"Extracts session data from events\",\n                    \"depends_on\": [],\n                    \"existing_cluster_id\": \"0923-164208-meows279\",\n                    \"spark_jar_task\": {\n                        \"main_class_name\": \"com.databricks.Sessionize\",\n                        \"parameters\": [\"--data\", \"dbfs:/path/to/data.json\"],\n                    },\n                    \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}],\n                    \"timeout_seconds\": 86400,\n                    \"max_retries\": 3,\n                    \"min_retry_interval_millis\": 2000,\n                    \"retry_on_timeout\": False,\n                },\n                {\n                    \"task_key\": \"Orders_Ingest\",\n                    \"description\": \"Ingests order data\",\n                    \"depends_on\": [],\n                    \"job_cluster_key\": \"auto_scaling_cluster\",\n                    \"spark_jar_task\": {\n                        \"main_class_name\": \"com.databricks.OrdersIngest\",\n                        \"parameters\": [\"--data\", \"dbfs:/path/to/order-data.json\"],\n                    },\n                    \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}],\n                    \"timeout_seconds\": 86400,\n                    \"max_retries\": 3,\n                    \"min_retry_interval_millis\": 2000,\n                    \"retry_on_timeout\": False,\n                },\n                {\n                    \"task_key\": \"Match\",\n                    \"description\": \"Matches orders with user sessions\",\n                    \"depends_on\": [\n                        {\"task_key\": \"Orders_Ingest\"},\n                        {\"task_key\": \"Sessionize\"},\n                    ],\n                    \"new_cluster\": {\n                        \"spark_version\": \"7.3.x-scala2.12\",\n                        \"node_type_id\": 
\"i3.xlarge\",\n                        \"spark_conf\": {\"spark.speculation\": True},\n                        \"aws_attributes\": {\n                            \"availability\": \"SPOT\",\n                            \"zone_id\": \"us-west-2a\",\n                        },\n                        \"autoscale\": {\"min_workers\": 2, \"max_workers\": 16},\n                    },\n                    \"notebook_task\": {\n                        \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n                        \"source\": \"WORKSPACE\",\n                        \"base_parameters\": {\"name\": \"John Doe\", \"age\": \"35\"},\n                    },\n                    \"timeout_seconds\": 86400,\n                    \"max_retries\": 3,\n                    \"min_retry_interval_millis\": 2000,\n                    \"retry_on_timeout\": False,\n                },\n            ]\n            ```\n        job_clusters:\n            A list of job cluster specifications that can be shared and reused by\n            tasks of this job. Libraries cannot be declared in a shared\n            job cluster. You must declare dependent libraries in task\n            settings, e.g.\n            ```\n            [\n                {\n                    \"job_cluster_key\": \"auto_scaling_cluster\",\n                    \"new_cluster\": {\n                        \"spark_version\": \"7.3.x-scala2.12\",\n                        \"node_type_id\": \"i3.xlarge\",\n                        \"spark_conf\": {\"spark.speculation\": True},\n                        \"aws_attributes\": {\n                            \"availability\": \"SPOT\",\n                            \"zone_id\": \"us-west-2a\",\n                        },\n                        \"autoscale\": {\"min_workers\": 2, \"max_workers\": 16},\n                    },\n                }\n            ]\n            ```\n        email_notifications:\n            An optional set of email addresses that is notified when runs of this\n            job begin or complete as well as when this job is deleted.\n            The default behavior is to not send any emails. Key-values:\n            - on_start:\n                A list of email addresses to be notified when a run begins.\n                If not specified on job creation, reset, or update, the list\n                is empty, and notifications are not sent, e.g.\n                ```\n                [\"user.name@databricks.com\"]\n                ```\n            - on_success:\n                A list of email addresses to be notified when a run\n                successfully completes. A run is considered to have\n                completed successfully if it ends with a `TERMINATED`\n                `life_cycle_state` and a `SUCCESSFUL` result_state. If not\n                specified on job creation, reset, or update, the list is\n                empty, and notifications are not sent, e.g.\n                ```\n                [\"user.name@databricks.com\"]\n                ```\n            - on_failure:\n                A list of email addresses to notify when a run completes\n                unsuccessfully. A run is considered unsuccessful if it ends\n                with an `INTERNAL_ERROR` `life_cycle_state` or a `SKIPPED`,\n                `FAILED`, or `TIMED_OUT` `result_state`. If not specified on\n                job creation, reset, or update, or the list is empty, then\n                notifications are not sent. 
Job-level failure notifications\n                are sent only once after the entire job run (including all\n                of its retries) has failed. Notifications are not sent when\n                failed job runs are retried. To receive a failure\n                notification after every failed task (including every failed\n                retry), use task-level notifications instead, e.g.\n                ```\n                [\"user.name@databricks.com\"]\n                ```\n            - no_alert_for_skipped_runs:\n                If true, do not send email to recipients specified in\n                `on_failure` if the run is skipped.\n        webhook_notifications:\n            A collection of system notification IDs to notify when runs of this job\n            begin or complete. The default behavior is to not send any\n            system notifications. Key-values:\n            - on_start:\n                An optional list of notification IDs to call when the run\n                starts. A maximum of 3 destinations can be specified for the\n                `on_start` property, e.g.\n                ```\n                [\n                    {\"id\": \"03dd86e4-57ef-4818-a950-78e41a1d71ab\"},\n                    {\"id\": \"0481e838-0a59-4eff-9541-a4ca6f149574\"},\n                ]\n                ```\n            - on_success:\n                An optional list of notification IDs to call when the run\n                completes successfully. A maximum of 3 destinations can be\n                specified for the `on_success` property, e.g.\n                ```\n                [{\"id\": \"03dd86e4-57ef-4818-a950-78e41a1d71ab\"}]\n                ```\n            - on_failure:\n                An optional list of notification IDs to call when the run\n                fails. A maximum of 3 destinations can be specified for the\n                `on_failure` property, e.g.\n                ```\n                [{\"id\": \"0481e838-0a59-4eff-9541-a4ca6f149574\"}]\n                ```\n        timeout_seconds:\n            An optional timeout applied to each run of this job. The default\n            behavior is to have no timeout, e.g. `86400`.\n        schedule:\n            An optional periodic schedule for this job. The default behavior is that\n            the job only runs when triggered by clicking \u201cRun Now\u201d in\n            the Jobs UI or sending an API request to `runNow`. Key-values:\n            - quartz_cron_expression:\n                A Cron expression using Quartz syntax that describes the\n                schedule for a job. See [Cron Trigger](http://www.quartz-\n                scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html)\n                for details. This field is required, e.g. `20 30 * * * ?`.\n            - timezone_id:\n                A Java timezone ID. The schedule for a job is resolved with\n                respect to this timezone. See [Java\n                TimeZone](https://docs.oracle.com/javase/7/docs/api/java/util/TimeZone.html)\n                for details. This field is required, e.g. `Europe/London`.\n            - pause_status:\n                Indicate whether this schedule is paused or not, e.g.\n                `PAUSED`.\n        max_concurrent_runs:\n            An optional maximum allowed number of concurrent runs of the job.  Set\n            this value if you want to be able to execute multiple runs\n            of the same job concurrently. 
This is useful for example if\n            you trigger your job on a frequent schedule and want to\n            allow consecutive runs to overlap with each other, or if you\n            want to trigger multiple runs which differ by their input\n            parameters.  This setting affects only new runs. For\n            example, suppose the job\u2019s concurrency is 4 and there are 4\n            concurrent active runs. Then setting the concurrency to 3\n            won\u2019t kill any of the active runs. However, from then on,\n            new runs are skipped unless there are fewer than 3 active\n            runs.  This value cannot exceed 1000\\. Setting this value to\n            0 causes all new runs to be skipped. The default behavior is\n            to allow only 1 concurrent run, e.g. `10`.\n        git_source:\n            This functionality is in Public Preview.  An optional specification for\n            a remote repository containing the notebooks used by this\n            job's notebook tasks, e.g.\n            ```\n            {\n                \"git_url\": \"https://github.com/databricks/databricks-cli\",\n                \"git_branch\": \"main\",\n                \"git_provider\": \"gitHub\",\n            }\n            ``` Key-values:\n            - git_url:\n                URL of the repository to be cloned by this job. The maximum\n                length is 300 characters, e.g.\n                `https://github.com/databricks/databricks-cli`.\n            - git_provider:\n                Unique identifier of the service used to host the Git\n                repository. The value is case insensitive, e.g. `github`.\n            - git_branch:\n                Name of the branch to be checked out and used by this job.\n                This field cannot be specified in conjunction with git_tag\n                or git_commit. The maximum length is 255 characters, e.g.\n                `main`.\n            - git_tag:\n                Name of the tag to be checked out and used by this job. This\n                field cannot be specified in conjunction with git_branch or\n                git_commit. The maximum length is 255 characters, e.g.\n                `release-1.0.0`.\n            - git_commit:\n                Commit to be checked out and used by this job. This field\n                cannot be specified in conjunction with git_branch or\n                git_tag. The maximum length is 64 characters, e.g.\n                `e0056d01`.\n            - git_snapshot:\n                Read-only state of the remote repository at the time the job was run.\n                            This field is only included on job runs.\n        format:\n            Used to tell what is the format of the job. This field is ignored in\n            Create/Update/Reset calls. When using the Jobs API 2.1 this\n            value is always set to `'MULTI_TASK'`, e.g. `MULTI_TASK`.\n        access_control_list:\n            List of permissions to set on the job.\n\n    Returns:\n        Upon success, a dict of the response. </br>- `job_id: int`</br>\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/create`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Job was created successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. 
|\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/create\"  # noqa\n\n    responses = {\n        200: \"Job was created successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    json_payload = {\n        \"name\": name,\n        \"tags\": tags,\n        \"tasks\": tasks,\n        \"job_clusters\": job_clusters,\n        \"email_notifications\": email_notifications,\n        \"webhook_notifications\": webhook_notifications,\n        \"timeout_seconds\": timeout_seconds,\n        \"schedule\": schedule,\n        \"max_concurrent_runs\": max_concurrent_runs,\n        \"git_source\": git_source,\n        \"format\": format,\n        \"access_control_list\": access_control_list,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.POST,\n        json=json_payload,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
        "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_delete","title":"jobs_delete async","text":"

        Deletes a job.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | databricks_credentials | DatabricksCredentials | Credentials to use for authentication with Databricks. | required |
        | job_id | Optional[int] | The canonical identifier of the job to delete. This field is required, e.g. 11223344. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | Upon success, an empty dict. |

        API Endpoint:

        /2.1/jobs/delete

        API Responses:

        | Response | Description |
        | --- | --- |
        | 200 | Job was deleted successfully. |
        | 400 | The request was malformed. See JSON response for error details. |
        | 401 | The request was unauthorized. |
        | 500 | The request was not handled correctly due to a server error. |

        Source code in prefect_databricks/jobs.py
        @task\nasync def jobs_delete(\n    databricks_credentials: \"DatabricksCredentials\",\n    job_id: Optional[int] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Deletes a job.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        job_id:\n            The canonical identifier of the job to delete. This field is required,\n            e.g. `11223344`.\n\n    Returns:\n        Upon success, an empty dict.\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/delete`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Job was deleted successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/delete\"  # noqa\n\n    responses = {\n        200: \"Job was deleted successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    json_payload = {\n        \"job_id\": job_id,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.POST,\n        json=json_payload,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
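        A brief usage sketch of the task above: the flow below deletes a job by its canonical ID. The credentials block name and job ID are placeholders.

        ```python
        from prefect import flow
        from prefect_databricks import DatabricksCredentials
        from prefect_databricks.jobs import jobs_delete


        @flow
        async def delete_job_example():
            databricks_credentials = await DatabricksCredentials.load("BLOCK_NAME")
            # a 200 response returns an empty dict
            return await jobs_delete(
                databricks_credentials=databricks_credentials,
                job_id=11223344,  # placeholder job ID
            )
        ```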
        "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_get","title":"jobs_get async","text":"

        Retrieves the details for a single job.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | job_id | int | The canonical identifier of the job to retrieve information about. This field is required. | required |
        | databricks_credentials | DatabricksCredentials | Credentials to use for authentication with Databricks. | required |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | Upon success, a dict of the response with job_id: int, creator_user_name: str, run_as_user_name: str, settings: "models.JobSettings", created_time: int. |

        API Endpoint:

        /2.1/jobs/get

        API Responses:

        | Response | Description |
        | --- | --- |
        | 200 | Job was retrieved successfully. |
        | 400 | The request was malformed. See JSON response for error details. |
        | 401 | The request was unauthorized. |
        | 500 | The request was not handled correctly due to a server error. |

        Source code in prefect_databricks/jobs.py
        @task\nasync def jobs_get(\n    job_id: int,\n    databricks_credentials: \"DatabricksCredentials\",\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Retrieves the details for a single job.\n\n    Args:\n        job_id:\n            The canonical identifier of the job to retrieve information about. This\n            field is required.\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n\n    Returns:\n        Upon success, a dict of the response. </br>- `job_id: int`</br>- `creator_user_name: str`</br>- `run_as_user_name: str`</br>- `settings: \"models.JobSettings\"`</br>- `created_time: int`</br>\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/get`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Job was retrieved successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/get\"  # noqa\n\n    responses = {\n        200: \"Job was retrieved successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    params = {\n        \"job_id\": job_id,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.GET,\n        params=params,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
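        The sketch below shows one way to read fields from the jobs_get response described above; the job ID and credentials block name are placeholders.

        ```python
        from prefect import flow
        from prefect_databricks import DatabricksCredentials
        from prefect_databricks.jobs import jobs_get


        @flow
        async def inspect_job(job_id: int = 11223344):  # placeholder job ID
            databricks_credentials = await DatabricksCredentials.load("BLOCK_NAME")
            job = await jobs_get(
                job_id=job_id,
                databricks_credentials=databricks_credentials,
            )
            # "settings" holds the job's name, tasks, schedule, and so on
            return job["creator_user_name"], job["settings"]
        ```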
        "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_list","title":"jobs_list async","text":"

        Retrieves a list of jobs.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | databricks_credentials | DatabricksCredentials | Credentials to use for authentication with Databricks. | required |
        | limit | int | The number of jobs to return. This value must be greater than 0 and less than or equal to 25. The default value is 20. | 20 |
        | offset | int | The offset of the first job to return, relative to the most recently created job. | 0 |
        | name | Optional[str] | A filter on the list based on the exact (case insensitive) job name. | None |
        | expand_tasks | bool | Whether to include task and cluster details in the response. | False |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | Upon success, a dict of the response with jobs: List["models.Job"] and has_more: bool. |

        API Endpoint:

        /2.1/jobs/list

        API Responses:

        | Response | Description |
        | --- | --- |
        | 200 | List of jobs was retrieved successfully. |
        | 400 | The request was malformed. See JSON response for error details. |
        | 401 | The request was unauthorized. |
        | 500 | The request was not handled correctly due to a server error. |

        Source code in prefect_databricks/jobs.py
        @task\nasync def jobs_list(\n    databricks_credentials: \"DatabricksCredentials\",\n    limit: int = 20,\n    offset: int = 0,\n    name: Optional[str] = None,\n    expand_tasks: bool = False,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Retrieves a list of jobs.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        limit:\n            The number of jobs to return. This value must be greater than 0 and less\n            or equal to 25. The default value is 20.\n        offset:\n            The offset of the first job to return, relative to the most recently\n            created job.\n        name:\n            A filter on the list based on the exact (case insensitive) job name.\n        expand_tasks:\n            Whether to include task and cluster details in the response.\n\n    Returns:\n        Upon success, a dict of the response. </br>- `jobs: List[\"models.Job\"]`</br>- `has_more: bool`</br>\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/list`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | List of jobs was retrieved successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/list\"  # noqa\n\n    responses = {\n        200: \"List of jobs was retrieved successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    params = {\n        \"limit\": limit,\n        \"offset\": offset,\n        \"name\": name,\n        \"expand_tasks\": expand_tasks,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.GET,\n        params=params,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
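        Because the response includes a has_more flag, pagination is a natural usage pattern. The sketch below pages through all jobs 25 at a time; the credentials block name is a placeholder.

        ```python
        from prefect import flow
        from prefect_databricks import DatabricksCredentials
        from prefect_databricks.jobs import jobs_list


        @flow
        async def list_all_jobs():
            databricks_credentials = await DatabricksCredentials.load("BLOCK_NAME")

            jobs, offset = [], 0
            while True:
                page = await jobs_list(
                    databricks_credentials=databricks_credentials,
                    limit=25,
                    offset=offset,
                    expand_tasks=False,
                )
                jobs.extend(page.get("jobs", []))
                # stop once the API reports no further pages
                if not page.get("has_more"):
                    break
                offset += 25
            return jobs
        ```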
        "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_reset","title":"jobs_reset async","text":"

        Overwrites all the settings for a specific job. Use the Update endpoint to update job settings partially.

        Parameters:

        databricks_credentials (DatabricksCredentials, required): Credentials to use for authentication with Databricks.

        job_id (Optional[int], default None): The canonical identifier of the job to reset. This field is required, e.g. 11223344.

        new_settings (JobSettings, default None): The new settings of the job. These settings completely replace the old settings. Changes to the field JobSettings.timeout_seconds are applied to active runs. Changes to other fields are applied to future runs only. Key-values:
        - name: An optional name for the job, e.g. A multitask job.
        - tags: A map of tags associated with the job. These are forwarded to the cluster as cluster tags for jobs clusters, and are subject to the same limitations as cluster tags. A maximum of 25 tags can be added to the job, e.g.

        {\"cost-center\": \"engineering\", \"team\": \"jobs\"}\n
        - tasks: A list of task specifications to be executed by this job, e.g.
        [\n    {\n        \"task_key\": \"Sessionize\",\n        \"description\": \"Extracts session data from events\",\n        \"depends_on\": [],\n        \"existing_cluster_id\": \"0923-164208-meows279\",\n        \"spark_jar_task\": {\n            \"main_class_name\": \"com.databricks.Sessionize\",\n            \"parameters\": [\n                \"--data\",\n                \"dbfs:/path/to/data.json\",\n            ],\n        },\n        \"libraries\": [\n            {\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}\n        ],\n        \"timeout_seconds\": 86400,\n        \"max_retries\": 3,\n        \"min_retry_interval_millis\": 2000,\n        \"retry_on_timeout\": False,\n    },\n    {\n        \"task_key\": \"Orders_Ingest\",\n        \"description\": \"Ingests order data\",\n        \"depends_on\": [],\n        \"job_cluster_key\": \"auto_scaling_cluster\",\n        \"spark_jar_task\": {\n            \"main_class_name\": \"com.databricks.OrdersIngest\",\n            \"parameters\": [\n                \"--data\",\n                \"dbfs:/path/to/order-data.json\",\n            ],\n        },\n        \"libraries\": [\n            {\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}\n        ],\n        \"timeout_seconds\": 86400,\n        \"max_retries\": 3,\n        \"min_retry_interval_millis\": 2000,\n        \"retry_on_timeout\": False,\n    },\n    {\n        \"task_key\": \"Match\",\n        \"description\": \"Matches orders with user sessions\",\n        \"depends_on\": [\n            {\"task_key\": \"Orders_Ingest\"},\n            {\"task_key\": \"Sessionize\"},\n        ],\n        \"new_cluster\": {\n            \"spark_version\": \"7.3.x-scala2.12\",\n            \"node_type_id\": \"i3.xlarge\",\n            \"spark_conf\": {\"spark.speculation\": True},\n            \"aws_attributes\": {\n                \"availability\": \"SPOT\",\n                \"zone_id\": \"us-west-2a\",\n            },\n            \"autoscale\": {\n                \"min_workers\": 2,\n                \"max_workers\": 16,\n            },\n        },\n        \"notebook_task\": {\n            \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n            \"source\": \"WORKSPACE\",\n            \"base_parameters\": {\n                \"name\": \"John Doe\",\n                \"age\": \"35\",\n            },\n        },\n        \"timeout_seconds\": 86400,\n        \"max_retries\": 3,\n        \"min_retry_interval_millis\": 2000,\n        \"retry_on_timeout\": False,\n    },\n]\n
        - job_clusters: A list of job cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster. You must declare dependent libraries in task settings, e.g.
        [\n    {\n        \"job_cluster_key\": \"auto_scaling_cluster\",\n        \"new_cluster\": {\n            \"spark_version\": \"7.3.x-scala2.12\",\n            \"node_type_id\": \"i3.xlarge\",\n            \"spark_conf\": {\"spark.speculation\": True},\n            \"aws_attributes\": {\n                \"availability\": \"SPOT\",\n                \"zone_id\": \"us-west-2a\",\n            },\n            \"autoscale\": {\n                \"min_workers\": 2,\n                \"max_workers\": 16,\n            },\n        },\n    }\n]\n
        - email_notifications: An optional set of email addresses that is notified when runs of this job begin or complete as well as when this job is deleted. The default behavior is to not send any emails.
        - webhook_notifications: A collection of system notification IDs to notify when runs of this job begin or complete. The default behavior is to not send any system notifications.
        - timeout_seconds: An optional timeout applied to each run of this job. The default behavior is to have no timeout, e.g. 86400.
        - schedule: An optional periodic schedule for this job. The default behavior is that the job only runs when triggered by clicking "Run Now" in the Jobs UI or sending an API request to runNow.
        - max_concurrent_runs: An optional maximum allowed number of concurrent runs of the job. Set this value if you want to be able to execute multiple runs of the same job concurrently. This is useful, for example, if you trigger your job on a frequent schedule and want to allow consecutive runs to overlap with each other, or if you want to trigger multiple runs that differ by their input parameters. This setting affects only new runs. For example, suppose the job's concurrency is 4 and there are 4 concurrent active runs: setting the concurrency to 3 won't kill any of the active runs, but from then on, new runs are skipped unless there are fewer than 3 active runs. This value cannot exceed 1000. Setting this value to 0 causes all new runs to be skipped. The default behavior is to allow only 1 concurrent run, e.g. 10.
        - git_source: This functionality is in Public Preview. An optional specification for a remote repository containing the notebooks used by this job's notebook tasks, e.g.
        {\n    \"git_url\": \"https://github.com/databricks/databricks-cli\",\n    \"git_branch\": \"main\",\n    \"git_provider\": \"gitHub\",\n}\n
        - format: Indicates the format of the job. This field is ignored in Create/Update/Reset calls. When using the Jobs API 2.1, this value is always set to 'MULTI_TASK', e.g. MULTI_TASK.

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | Upon success, an empty dict. |

        API Endpoint:

        /2.1/jobs/reset

        API Responses:

        | Response | Description |
        | --- | --- |
        | 200 | Job was overwritten successfully. |
        | 400 | The request was malformed. See JSON response for error details. |
        | 401 | The request was unauthorized. |
        | 500 | The request was not handled correctly due to a server error. |

        Source code in prefect_databricks/jobs.py
        @task\nasync def jobs_reset(\n    databricks_credentials: \"DatabricksCredentials\",\n    job_id: Optional[int] = None,\n    new_settings: \"models.JobSettings\" = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Overwrites all the settings for a specific job. Use the Update endpoint to\n    update job settings partially.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        job_id:\n            The canonical identifier of the job to reset. This field is required,\n            e.g. `11223344`.\n        new_settings:\n            The new settings of the job. These settings completely replace the old\n            settings.  Changes to the field\n            `JobSettings.timeout_seconds` are applied to active runs.\n            Changes to other fields are applied to future runs only. Key-values:\n            - name:\n                An optional name for the job, e.g. `A multitask job`.\n            - tags:\n                A map of tags associated with the job. These are forwarded\n                to the cluster as cluster tags for jobs clusters, and are\n                subject to the same limitations as cluster tags. A maximum\n                of 25 tags can be added to the job, e.g.\n                ```\n                {\"cost-center\": \"engineering\", \"team\": \"jobs\"}\n                ```\n            - tasks:\n                A list of task specifications to be executed by this job, e.g.\n                ```\n                [\n                    {\n                        \"task_key\": \"Sessionize\",\n                        \"description\": \"Extracts session data from events\",\n                        \"depends_on\": [],\n                        \"existing_cluster_id\": \"0923-164208-meows279\",\n                        \"spark_jar_task\": {\n                            \"main_class_name\": \"com.databricks.Sessionize\",\n                            \"parameters\": [\n                                \"--data\",\n                                \"dbfs:/path/to/data.json\",\n                            ],\n                        },\n                        \"libraries\": [\n                            {\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}\n                        ],\n                        \"timeout_seconds\": 86400,\n                        \"max_retries\": 3,\n                        \"min_retry_interval_millis\": 2000,\n                        \"retry_on_timeout\": False,\n                    },\n                    {\n                        \"task_key\": \"Orders_Ingest\",\n                        \"description\": \"Ingests order data\",\n                        \"depends_on\": [],\n                        \"job_cluster_key\": \"auto_scaling_cluster\",\n                        \"spark_jar_task\": {\n                            \"main_class_name\": \"com.databricks.OrdersIngest\",\n                            \"parameters\": [\n                                \"--data\",\n                                \"dbfs:/path/to/order-data.json\",\n                            ],\n                        },\n                        \"libraries\": [\n                            {\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}\n                        ],\n                        \"timeout_seconds\": 86400,\n                        \"max_retries\": 3,\n                        \"min_retry_interval_millis\": 2000,\n                        \"retry_on_timeout\": False,\n              
      },\n                    {\n                        \"task_key\": \"Match\",\n                        \"description\": \"Matches orders with user sessions\",\n                        \"depends_on\": [\n                            {\"task_key\": \"Orders_Ingest\"},\n                            {\"task_key\": \"Sessionize\"},\n                        ],\n                        \"new_cluster\": {\n                            \"spark_version\": \"7.3.x-scala2.12\",\n                            \"node_type_id\": \"i3.xlarge\",\n                            \"spark_conf\": {\"spark.speculation\": True},\n                            \"aws_attributes\": {\n                                \"availability\": \"SPOT\",\n                                \"zone_id\": \"us-west-2a\",\n                            },\n                            \"autoscale\": {\n                                \"min_workers\": 2,\n                                \"max_workers\": 16,\n                            },\n                        },\n                        \"notebook_task\": {\n                            \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n                            \"source\": \"WORKSPACE\",\n                            \"base_parameters\": {\n                                \"name\": \"John Doe\",\n                                \"age\": \"35\",\n                            },\n                        },\n                        \"timeout_seconds\": 86400,\n                        \"max_retries\": 3,\n                        \"min_retry_interval_millis\": 2000,\n                        \"retry_on_timeout\": False,\n                    },\n                ]\n                ```\n            - job_clusters:\n                A list of job cluster specifications that can be shared and\n                reused by tasks of this job. Libraries cannot be declared in\n                a shared job cluster. You must declare dependent libraries\n                in task settings, e.g.\n                ```\n                [\n                    {\n                        \"job_cluster_key\": \"auto_scaling_cluster\",\n                        \"new_cluster\": {\n                            \"spark_version\": \"7.3.x-scala2.12\",\n                            \"node_type_id\": \"i3.xlarge\",\n                            \"spark_conf\": {\"spark.speculation\": True},\n                            \"aws_attributes\": {\n                                \"availability\": \"SPOT\",\n                                \"zone_id\": \"us-west-2a\",\n                            },\n                            \"autoscale\": {\n                                \"min_workers\": 2,\n                                \"max_workers\": 16,\n                            },\n                        },\n                    }\n                ]\n                ```\n            - email_notifications:\n                An optional set of email addresses that is notified when\n                runs of this job begin or complete as well as when this job\n                is deleted. The default behavior is to not send any emails.\n            - webhook_notifications:\n                A collection of system notification IDs to notify when runs\n                of this job begin or complete. The default behavior is to\n                not send any system notifications.\n            - timeout_seconds:\n                An optional timeout applied to each run of this job. 
The\n                default behavior is to have no timeout, e.g. `86400`.\n            - schedule:\n                An optional periodic schedule for this job. The default\n                behavior is that the job only runs when triggered by\n                clicking \u201cRun Now\u201d in the Jobs UI or sending an API request\n                to `runNow`.\n            - max_concurrent_runs:\n                An optional maximum allowed number of concurrent runs of the\n                job.  Set this value if you want to be able to execute\n                multiple runs of the same job concurrently. This is useful\n                for example if you trigger your job on a frequent schedule\n                and want to allow consecutive runs to overlap with each\n                other, or if you want to trigger multiple runs which differ\n                by their input parameters.  This setting affects only new\n                runs. For example, suppose the job\u2019s concurrency is 4 and\n                there are 4 concurrent active runs. Then setting the\n                concurrency to 3 won\u2019t kill any of the active runs. However,\n                from then on, new runs are skipped unless there are fewer\n                than 3 active runs.  This value cannot exceed 1000\\. Setting\n                this value to 0 causes all new runs to be skipped. The\n                default behavior is to allow only 1 concurrent run, e.g.\n                `10`.\n            - git_source:\n                This functionality is in Public Preview.  An optional\n                specification for a remote repository containing the\n                notebooks used by this job's notebook tasks, e.g.\n                ```\n                {\n                    \"git_url\": \"https://github.com/databricks/databricks-cli\",\n                    \"git_branch\": \"main\",\n                    \"git_provider\": \"gitHub\",\n                }\n                ```\n            - format:\n                Used to tell what is the format of the job. This field is\n                ignored in Create/Update/Reset calls. When using the Jobs\n                API 2.1 this value is always set to `'MULTI_TASK'`, e.g.\n                `MULTI_TASK`.\n\n    Returns:\n        Upon success, an empty dict.\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/reset`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Job was overwritten successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/reset\"  # noqa\n\n    responses = {\n        200: \"Job was overwritten successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    json_payload = {\n        \"job_id\": job_id,\n        \"new_settings\": new_settings,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.POST,\n        json=json_payload,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
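        Below is a minimal, hypothetical sketch of calling `jobs_reset` from an async flow. The credentials block name, the job ID, and the settings shown are placeholders, and passing a plain dict instead of a `models.JobSettings` instance is an assumption:

```python
from prefect import flow
from prefect_databricks import DatabricksCredentials
from prefect_databricks.jobs import jobs_reset


@flow
async def reset_job_flow():
    # Hypothetical block name; save a DatabricksCredentials block beforehand.
    creds = await DatabricksCredentials.load("databricks-creds")
    # Completely replace the settings of a hypothetical job.
    return await jobs_reset(
        databricks_credentials=creds,
        job_id=11223344,
        new_settings={"name": "A multitask job", "timeout_seconds": 86400},
    )
```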
        "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_run_now","title":"jobs_run_now async","text":"

        Run a job and return the run_id of the triggered run.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | databricks_credentials | DatabricksCredentials | Credentials to use for authentication with Databricks. | required |
        | job_id | Optional[int] | The ID of the job to be executed, e.g. `11223344`. | None |
        | idempotency_token | Optional[str] | An optional token to guarantee the idempotency of job run requests. If a run with the provided token already exists, the request does not create a new run but returns the ID of the existing run instead. If a run with the provided token is deleted, an error is returned. If you specify the idempotency token, upon failure you can retry until the request succeeds. Databricks guarantees that exactly one run is launched with that idempotency token. This token must have at most 64 characters. For more information, see How to ensure idempotency for jobs, e.g. `8f018174-4792-40d5-bcbc-3e6a527352c8`. | None |
        | jar_params | Optional[List[str]] | A list of parameters for jobs with Spark JAR tasks, for example `'jar_params': ['john doe', '35']`. The parameters are used to invoke the main function of the main class specified in the Spark JAR task. If not specified upon run-now, it defaults to an empty list. jar_params cannot be specified in conjunction with notebook_params. The JSON representation of this field (for example `{'jar_params':['john doe','35']}`) cannot exceed 10,000 bytes. Use Task parameter variables to set parameters containing information about job runs, e.g. `["john", "doe", "35"]`. | None |
        | notebook_params | Optional[Dict] | A map from keys to values for jobs with a notebook task, for example `'notebook_params': {'name': 'john doe', 'age': '35'}`. The map is passed to the notebook and is accessible through the dbutils.widgets.get function. If not specified upon run-now, the triggered run uses the job's base parameters. notebook_params cannot be specified in conjunction with jar_params. Use Task parameter variables to set parameters containing information about job runs. The JSON representation of this field (for example `{'notebook_params':{'name':'john doe','age':'35'}}`) cannot exceed 10,000 bytes, e.g. `{"name": "john doe", "age": "35"}`. | None |
        | python_params | Optional[List[str]] | A list of parameters for jobs with Python tasks, for example `'python_params': ['john doe', '35']`. The parameters are passed to the Python file as command-line parameters. If specified upon run-now, they overwrite the parameters specified in the job setting. The JSON representation of this field (for example `{'python_params':['john doe','35']}`) cannot exceed 10,000 bytes. Use Task parameter variables to set parameters containing information about job runs. Important: these parameters accept only Latin characters (ASCII character set). Using non-ASCII characters returns an error. Examples of invalid, non-ASCII characters are Chinese, Japanese kanjis, and emojis, e.g. `["john doe", "35"]`. | None |
        | spark_submit_params | Optional[List[str]] | A list of parameters for jobs with a spark submit task, for example `'spark_submit_params': ['--class', 'org.apache.spark.examples.SparkPi']`. The parameters are passed to the spark-submit script as command-line parameters. If specified upon run-now, they overwrite the parameters specified in the job setting. The JSON representation of this field (for example `{'python_params':['john doe','35']}`) cannot exceed 10,000 bytes. Use Task parameter variables to set parameters containing information about job runs. Important: these parameters accept only Latin characters (ASCII character set). Using non-ASCII characters returns an error. Examples of invalid, non-ASCII characters are Chinese, Japanese kanjis, and emojis, e.g. `["--class", "org.apache.spark.examples.SparkPi"]`. | None |
        | python_named_params | Optional[Dict] | A map from keys to values for jobs with a Python wheel task, for example `'python_named_params': {'name': 'task', 'data': 'dbfs:/path/to/data.json'}`, e.g. `{"name": "task", "data": "dbfs:/path/to/data.json"}`. | None |
        | pipeline_params | Optional[str] |  | None |
        | sql_params | Optional[Dict] | A map from keys to values for SQL tasks, for example `'sql_params': {'name': 'john doe', 'age': '35'}`. The SQL alert task does not support custom parameters, e.g. `{"name": "john doe", "age": "35"}`. | None |
        | dbt_commands | Optional[List] | An array of commands to execute for jobs with the dbt task, for example `'dbt_commands': ['dbt deps', 'dbt seed', 'dbt run']`, e.g. `["dbt deps", "dbt seed", "dbt run"]`. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | Upon success, a dict of the response: `run_id: int`, `number_in_job: int`. |

        API Endpoint:

        /2.1/jobs/run-now

        API Responses:

        | Response | Description |
        | --- | --- |
        | 200 | Run was started successfully. |
        | 400 | The request was malformed. See JSON response for error details. |
        | 401 | The request was unauthorized. |
        | 500 | The request was not handled correctly due to a server error. |

        Source code in prefect_databricks/jobs.py
        @task\nasync def jobs_run_now(\n    databricks_credentials: \"DatabricksCredentials\",\n    job_id: Optional[int] = None,\n    idempotency_token: Optional[str] = None,\n    jar_params: Optional[List[str]] = None,\n    notebook_params: Optional[Dict] = None,\n    python_params: Optional[List[str]] = None,\n    spark_submit_params: Optional[List[str]] = None,\n    python_named_params: Optional[Dict] = None,\n    pipeline_params: Optional[str] = None,\n    sql_params: Optional[Dict] = None,\n    dbt_commands: Optional[List] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Run a job and return the `run_id` of the triggered run.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        job_id:\n            The ID of the job to be executed, e.g. `11223344`.\n        idempotency_token:\n            An optional token to guarantee the idempotency of job run requests. If a\n            run with the provided token already exists, the request does\n            not create a new run but returns the ID of the existing run\n            instead. If a run with the provided token is deleted, an\n            error is returned.  If you specify the idempotency token,\n            upon failure you can retry until the request succeeds.\n            Databricks guarantees that exactly one run is launched with\n            that idempotency token.  This token must have at most 64\n            characters.  For more information, see [How to ensure\n            idempotency for jobs](https://kb.databricks.com/jobs/jobs-\n            idempotency.html), e.g.\n            `8f018174-4792-40d5-bcbc-3e6a527352c8`.\n        jar_params:\n            A list of parameters for jobs with Spark JAR tasks, for example\n            `'jar_params': ['john doe', '35']`. The parameters are used\n            to invoke the main function of the main class specified in\n            the Spark JAR task. If not specified upon `run-now`, it\n            defaults to an empty list. jar_params cannot be specified in\n            conjunction with notebook_params. The JSON representation of\n            this field (for example `{'jar_params':['john doe','35']}`)\n            cannot exceed 10,000 bytes.  Use [Task parameter\n            variables](https://docs.databricks.com/jobs.html\n            parameter-variables) to set parameters containing\n            information about job runs, e.g.\n            ```\n            [\"john\", \"doe\", \"35\"]\n            ```\n        notebook_params:\n            A map from keys to values for jobs with notebook task, for example\n            `'notebook_params': {'name': 'john doe', 'age': '35'}`. The\n            map is passed to the notebook and is accessible through the\n            [dbutils.widgets.get](https://docs.databricks.com/dev-\n            tools/databricks-utils.html\n            dbutils-widgets) function.  If not specified upon `run-now`,\n            the triggered run uses the job\u2019s base parameters.\n            notebook_params cannot be specified in conjunction with\n            jar_params.  Use [Task parameter\n            variables](https://docs.databricks.com/jobs.html\n            parameter-variables) to set parameters containing\n            information about job runs.  
The JSON representation of this\n            field (for example `{'notebook_params':{'name':'john\n            doe','age':'35'}}`) cannot exceed 10,000 bytes, e.g.\n            ```\n            {\"name\": \"john doe\", \"age\": \"35\"}\n            ```\n        python_params:\n            A list of parameters for jobs with Python tasks, for example\n            `'python_params': ['john doe', '35']`. The parameters are\n            passed to Python file as command-line parameters. If\n            specified upon `run-now`, it would overwrite the parameters\n            specified in job setting. The JSON representation of this\n            field (for example `{'python_params':['john doe','35']}`)\n            cannot exceed 10,000 bytes.  Use [Task parameter\n            variables](https://docs.databricks.com/jobs.html\n            parameter-variables) to set parameters containing\n            information about job runs.  Important  These parameters\n            accept only Latin characters (ASCII character set). Using\n            non-ASCII characters returns an error. Examples of invalid,\n            non-ASCII characters are Chinese, Japanese kanjis, and\n            emojis, e.g.\n            ```\n            [\"john doe\", \"35\"]\n            ```\n        spark_submit_params:\n            A list of parameters for jobs with spark submit task, for example\n            `'spark_submit_params': ['--class',\n            'org.apache.spark.examples.SparkPi']`. The parameters are\n            passed to spark-submit script as command-line parameters. If\n            specified upon `run-now`, it would overwrite the parameters\n            specified in job setting. The JSON representation of this\n            field (for example `{'python_params':['john doe','35']}`)\n            cannot exceed 10,000 bytes.  Use [Task parameter\n            variables](https://docs.databricks.com/jobs.html\n            parameter-variables) to set parameters containing\n            information about job runs.  Important  These parameters\n            accept only Latin characters (ASCII character set). Using\n            non-ASCII characters returns an error. Examples of invalid,\n            non-ASCII characters are Chinese, Japanese kanjis, and\n            emojis, e.g.\n            ```\n            [\"--class\", \"org.apache.spark.examples.SparkPi\"]\n            ```\n        python_named_params:\n            A map from keys to values for jobs with Python wheel task, for example\n            `'python_named_params': {'name': 'task', 'data':\n            'dbfs:/path/to/data.json'}`, e.g.\n            ```\n            {\"name\": \"task\", \"data\": \"dbfs:/path/to/data.json\"}\n            ```\n        pipeline_params:\n\n        sql_params:\n            A map from keys to values for SQL tasks, for example `'sql_params':\n            {'name': 'john doe', 'age': '35'}`. The SQL alert task does\n            not support custom parameters, e.g.\n            ```\n            {\"name\": \"john doe\", \"age\": \"35\"}\n            ```\n        dbt_commands:\n            An array of commands to execute for jobs with the dbt task, for example\n            `'dbt_commands': ['dbt deps', 'dbt seed', 'dbt run']`, e.g.\n            ```\n            [\"dbt deps\", \"dbt seed\", \"dbt run\"]\n            ```\n\n    Returns:\n        Upon success, a dict of the response. 
</br>- `run_id: int`</br>- `number_in_job: int`</br>\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/run-now`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Run was started successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/run-now\"  # noqa\n\n    responses = {\n        200: \"Run was started successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    json_payload = {\n        \"job_id\": job_id,\n        \"idempotency_token\": idempotency_token,\n        \"jar_params\": jar_params,\n        \"notebook_params\": notebook_params,\n        \"python_params\": python_params,\n        \"spark_submit_params\": spark_submit_params,\n        \"python_named_params\": python_named_params,\n        \"pipeline_params\": pipeline_params,\n        \"sql_params\": sql_params,\n        \"dbt_commands\": dbt_commands,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.POST,\n        json=json_payload,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
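        A minimal usage sketch for `jobs_run_now` in an async flow, assuming a previously saved `DatabricksCredentials` block and a hypothetical notebook job ID:

```python
from prefect import flow
from prefect_databricks import DatabricksCredentials
from prefect_databricks.jobs import jobs_run_now


@flow
async def trigger_job_flow():
    creds = await DatabricksCredentials.load("databricks-creds")  # placeholder block name
    # Trigger a hypothetical notebook job and pass widget parameters.
    response = await jobs_run_now(
        databricks_credentials=creds,
        job_id=11223344,
        notebook_params={"name": "john doe", "age": "35"},
    )
    return response["run_id"]
```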
        "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_runs_cancel","title":"jobs_runs_cancel async","text":"

        Cancels a job run. The run is canceled asynchronously, so it may still be running when this request completes.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | databricks_credentials | DatabricksCredentials | Credentials to use for authentication with Databricks. | required |
        | run_id | Optional[int] | This field is required, e.g. `455644833`. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | Upon success, an empty dict. |

        API Endpoint:

        /2.1/jobs/runs/cancel

        API Responses:

        | Response | Description |
        | --- | --- |
        | 200 | Run was cancelled successfully. |
        | 400 | The request was malformed. See JSON response for error details. |
        | 401 | The request was unauthorized. |
        | 500 | The request was not handled correctly due to a server error. |

        Source code in prefect_databricks/jobs.py
        @task\nasync def jobs_runs_cancel(\n    databricks_credentials: \"DatabricksCredentials\",\n    run_id: Optional[int] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Cancels a job run. The run is canceled asynchronously, so it may still be\n    running when this request completes.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        run_id:\n            This field is required, e.g. `455644833`.\n\n    Returns:\n        Upon success, an empty dict.\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/runs/cancel`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Run was cancelled successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/runs/cancel\"  # noqa\n\n    responses = {\n        200: \"Run was cancelled successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    json_payload = {\n        \"run_id\": run_id,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.POST,\n        json=json_payload,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
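        A short, hypothetical sketch of cancelling a run from an async flow; the credentials block name is a placeholder:

```python
from prefect import flow
from prefect_databricks import DatabricksCredentials
from prefect_databricks.jobs import jobs_runs_cancel


@flow
async def cancel_run_flow(run_id: int):
    creds = await DatabricksCredentials.load("databricks-creds")  # placeholder block name
    # Cancellation is asynchronous; the run may still be active when this returns.
    return await jobs_runs_cancel(databricks_credentials=creds, run_id=run_id)
```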
        "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_runs_cancel_all","title":"jobs_runs_cancel_all async","text":"

        Cancels all active runs of a job. The runs are canceled asynchronously, so it doesn't prevent new runs from being started.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | databricks_credentials | DatabricksCredentials | Credentials to use for authentication with Databricks. | required |
        | job_id | Optional[int] | The canonical identifier of the job to cancel all runs of. This field is required, e.g. `11223344`. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | Upon success, an empty dict. |

        API Endpoint:

        /2.1/jobs/runs/cancel-all

        API Responses:

        | Response | Description |
        | --- | --- |
        | 200 | All runs were cancelled successfully. |
        | 400 | The request was malformed. See JSON response for error details. |
        | 401 | The request was unauthorized. |
        | 500 | The request was not handled correctly due to a server error. |

        Source code in prefect_databricks/jobs.py
        @task\nasync def jobs_runs_cancel_all(\n    databricks_credentials: \"DatabricksCredentials\",\n    job_id: Optional[int] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Cancels all active runs of a job. The runs are canceled asynchronously, so it\n    doesn't prevent new runs from being started.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        job_id:\n            The canonical identifier of the job to cancel all runs of. This field is\n            required, e.g. `11223344`.\n\n    Returns:\n        Upon success, an empty dict.\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/runs/cancel-all`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | All runs were cancelled successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/runs/cancel-all\"  # noqa\n\n    responses = {\n        200: \"All runs were cancelled successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    json_payload = {\n        \"job_id\": job_id,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.POST,\n        json=json_payload,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
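        A hypothetical sketch of cancelling every active run of a job; the credentials block name is a placeholder:

```python
from prefect import flow
from prefect_databricks import DatabricksCredentials
from prefect_databricks.jobs import jobs_runs_cancel_all


@flow
async def cancel_all_runs_flow(job_id: int):
    creds = await DatabricksCredentials.load("databricks-creds")  # placeholder block name
    # Cancels every active run of the job; new runs can still be started afterwards.
    return await jobs_runs_cancel_all(databricks_credentials=creds, job_id=job_id)
```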
        "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_runs_delete","title":"jobs_runs_delete async","text":"

        Deletes a non-active run. Returns an error if the run is active.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | databricks_credentials | DatabricksCredentials | Credentials to use for authentication with Databricks. | required |
        | run_id | Optional[int] | The canonical identifier of the run for which to retrieve the metadata, e.g. `455644833`. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | Upon success, an empty dict. |

        API Endpoint:

        /2.1/jobs/runs/delete

        API Responses:

        | Response | Description |
        | --- | --- |
        | 200 | Run was deleted successfully. |
        | 400 | The request was malformed. See JSON response for error details. |
        | 401 | The request was unauthorized. |
        | 500 | The request was not handled correctly due to a server error. |

        Source code in prefect_databricks/jobs.py
        @task\nasync def jobs_runs_delete(\n    databricks_credentials: \"DatabricksCredentials\",\n    run_id: Optional[int] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Deletes a non-active run. Returns an error if the run is active.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        run_id:\n            The canonical identifier of the run for which to retrieve the metadata,\n            e.g. `455644833`.\n\n    Returns:\n        Upon success, an empty dict.\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/runs/delete`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Run was deleted successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/runs/delete\"  # noqa\n\n    responses = {\n        200: \"Run was deleted successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    json_payload = {\n        \"run_id\": run_id,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.POST,\n        json=json_payload,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
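        A hypothetical sketch of deleting a finished run; the credentials block name is a placeholder:

```python
from prefect import flow
from prefect_databricks import DatabricksCredentials
from prefect_databricks.jobs import jobs_runs_delete


@flow
async def delete_run_flow(run_id: int):
    creds = await DatabricksCredentials.load("databricks-creds")  # placeholder block name
    # The run must no longer be active, otherwise the API returns an error.
    return await jobs_runs_delete(databricks_credentials=creds, run_id=run_id)
```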
        "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_runs_export","title":"jobs_runs_export async","text":"

        Export and retrieve the job run task.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | run_id | int | The canonical identifier for the run. This field is required. | required |
        | databricks_credentials | DatabricksCredentials | Credentials to use for authentication with Databricks. | required |
        | views_to_export | Optional[ViewsToExport] | Which views to export (CODE, DASHBOARDS, or ALL). Defaults to CODE. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | Upon success, a dict of the response: `views: List["models.ViewItem"]`. |

        API Endpoint:

        /2.0/jobs/runs/export

        API Responses:

        | Response | Description |
        | --- | --- |
        | 200 | Run was exported successfully. |
        | 400 | The request was malformed. See JSON response for error details. |
        | 401 | The request was unauthorized. |
        | 500 | The request was not handled correctly due to a server error. |

        Source code in prefect_databricks/jobs.py
        @task\nasync def jobs_runs_export(\n    run_id: int,\n    databricks_credentials: \"DatabricksCredentials\",\n    views_to_export: Optional[\"models.ViewsToExport\"] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Export and retrieve the job run task.\n\n    Args:\n        run_id:\n            The canonical identifier for the run. This field is required.\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        views_to_export:\n            Which views to export (CODE, DASHBOARDS, or ALL). Defaults to CODE.\n\n    Returns:\n        Upon success, a dict of the response. </br>- `views: List[\"models.ViewItem\"]`</br>\n\n    <h4>API Endpoint:</h4>\n    `/2.0/jobs/runs/export`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Run was exported successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.0/jobs/runs/export\"  # noqa\n\n    responses = {\n        200: \"Run was exported successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    params = {\n        \"run_id\": run_id,\n        \"views_to_export\": views_to_export,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.GET,\n        params=params,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
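        A hypothetical sketch of exporting a run's views; the credentials block name is a placeholder, and passing the plain string "ALL" rather than a `models.ViewsToExport` value is an assumption:

```python
from prefect import flow
from prefect_databricks import DatabricksCredentials
from prefect_databricks.jobs import jobs_runs_export


@flow
async def export_run_flow(run_id: int):
    creds = await DatabricksCredentials.load("databricks-creds")  # placeholder block name
    # Export code and dashboard views of a completed run ("CODE" is the default).
    contents = await jobs_runs_export(
        run_id=run_id,
        databricks_credentials=creds,
        views_to_export="ALL",  # assumption: the string form is accepted as a query parameter
    )
    return contents["views"]
```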
        "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_runs_get","title":"jobs_runs_get async","text":"

        Retrieve the metadata of a run.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | run_id | int | The canonical identifier of the run for which to retrieve the metadata. This field is required. | required |
        | databricks_credentials | DatabricksCredentials | Credentials to use for authentication with Databricks. | required |
        | include_history | Optional[bool] | Whether to include the repair history in the response. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | Upon success, a dict of the response with keys: `job_id: int`, `run_id: int`, `number_in_job: int`, `creator_user_name: str`, `original_attempt_run_id: int`, `state: "models.RunState"`, `schedule: "models.CronSchedule"`, `tasks: List["models.RunTask"]`, `job_clusters: List["models.JobCluster"]`, `cluster_spec: "models.ClusterSpec"`, `cluster_instance: "models.ClusterInstance"`, `git_source: "models.GitSource"`, `overriding_parameters: "models.RunParameters"`, `start_time: int`, `setup_duration: int`, `execution_duration: int`, `cleanup_duration: int`, `end_time: int`, `trigger: "models.TriggerType"`, `run_name: str`, `run_page_url: str`, `run_type: "models.RunType"`, `attempt_number: int`, `repair_history: List["models.RepairHistoryItem"]`. |

        API Endpoint:

        /2.1/jobs/runs/get

        API Responses:

        | Response | Description |
        | --- | --- |
        | 200 | Run was retrieved successfully. |
        | 400 | The request was malformed. See JSON response for error details. |
        | 401 | The request was unauthorized. |
        | 500 | The request was not handled correctly due to a server error. |

        Source code in prefect_databricks/jobs.py
        @task\nasync def jobs_runs_get(\n    run_id: int,\n    databricks_credentials: \"DatabricksCredentials\",\n    include_history: Optional[bool] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Retrieve the metadata of a run.\n\n    Args:\n        run_id:\n            The canonical identifier of the run for which to retrieve the metadata.\n            This field is required.\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        include_history:\n            Whether to include the repair history in the response.\n\n    Returns:\n        Upon success, a dict of the response. </br>- `job_id: int`</br>- `run_id: int`</br>- `number_in_job: int`</br>- `creator_user_name: str`</br>- `original_attempt_run_id: int`</br>- `state: \"models.RunState\"`</br>- `schedule: \"models.CronSchedule\"`</br>- `tasks: List[\"models.RunTask\"]`</br>- `job_clusters: List[\"models.JobCluster\"]`</br>- `cluster_spec: \"models.ClusterSpec\"`</br>- `cluster_instance: \"models.ClusterInstance\"`</br>- `git_source: \"models.GitSource\"`</br>- `overriding_parameters: \"models.RunParameters\"`</br>- `start_time: int`</br>- `setup_duration: int`</br>- `execution_duration: int`</br>- `cleanup_duration: int`</br>- `end_time: int`</br>- `trigger: \"models.TriggerType\"`</br>- `run_name: str`</br>- `run_page_url: str`</br>- `run_type: \"models.RunType\"`</br>- `attempt_number: int`</br>- `repair_history: List[\"models.RepairHistoryItem\"]`</br>\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/runs/get`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Run was retrieved successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/runs/get\"  # noqa\n\n    responses = {\n        200: \"Run was retrieved successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    params = {\n        \"run_id\": run_id,\n        \"include_history\": include_history,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.GET,\n        params=params,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
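        A hypothetical sketch of polling a run's state; the credentials block name is a placeholder, and reading the nested `state` / `life_cycle_state` keys reflects the documented `models.RunState` shape:

```python
from prefect import flow
from prefect_databricks import DatabricksCredentials
from prefect_databricks.jobs import jobs_runs_get


@flow
async def get_run_state_flow(run_id: int):
    creds = await DatabricksCredentials.load("databricks-creds")  # placeholder block name
    run = await jobs_runs_get(
        run_id=run_id,
        databricks_credentials=creds,
        include_history=True,
    )
    # The life-cycle state lives under the nested "state" key of the response.
    return run["state"]["life_cycle_state"]
```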
        "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_runs_get_output","title":"jobs_runs_get_output async","text":"

        Retrieve the output and metadata of a single task run. When a notebook task returns a value through the dbutils.notebook.exit() call, you can use this endpoint to retrieve that value. Databricks restricts this API to return the first 5 MB of the output. To return a larger result, you can store job results in a cloud storage service. This endpoint validates that the run_id parameter is valid and returns an HTTP status code 400 if the run_id parameter is invalid. Runs are automatically removed after 60 days. If you want to reference them beyond 60 days, you must save old run results before they expire. To export using the UI, see Export job run results. To export using the Jobs API, see Runs export.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | run_id | int | The canonical identifier for the run. This field is required. | required |
        | databricks_credentials | DatabricksCredentials | Credentials to use for authentication with Databricks. | required |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | Upon success, a dict of the response with keys: `notebook_output: "models.NotebookOutput"`, `sql_output: "models.SqlOutput"`, `dbt_output: "models.DbtOutput"`, `logs: str`, `logs_truncated: bool`, `error: str`, `error_trace: str`, `metadata: "models.Run"`. |

        API Endpoint:

        /2.1/jobs/runs/get-output

        API Responses:

        | Response | Description |
        | --- | --- |
        | 200 | Run output was retrieved successfully. |
        | 400 | A job run with multiple tasks was provided. |
        | 401 | The request was unauthorized. |
        | 500 | The request was not handled correctly due to a server error. |

        Source code in prefect_databricks/jobs.py
        @task\nasync def jobs_runs_get_output(\n    run_id: int,\n    databricks_credentials: \"DatabricksCredentials\",\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Retrieve the output and metadata of a single task run. When a notebook task\n    returns a value through the dbutils.notebook.exit() call, you can use this\n    endpoint to retrieve that value. Databricks restricts this API to return the\n    first 5 MB of the output. To return a larger result, you can store job\n    results in a cloud storage service. This endpoint validates that the run_id\n    parameter is valid and returns an HTTP status code 400 if the run_id\n    parameter is invalid. Runs are automatically removed after 60 days. If you\n    to want to reference them beyond 60 days, you must save old run results\n    before they expire. To export using the UI, see Export job run results. To\n    export using the Jobs API, see Runs export.\n\n    Args:\n        run_id:\n            The canonical identifier for the run. This field is required.\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n\n    Returns:\n        Upon success, a dict of the response. </br>- `notebook_output: \"models.NotebookOutput\"`</br>- `sql_output: \"models.SqlOutput\"`</br>- `dbt_output: \"models.DbtOutput\"`</br>- `logs: str`</br>- `logs_truncated: bool`</br>- `error: str`</br>- `error_trace: str`</br>- `metadata: \"models.Run\"`</br>\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/runs/get-output`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Run output was retrieved successfully. |\n    | 400 | A job run with multiple tasks was provided. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/runs/get-output\"  # noqa\n\n    responses = {\n        200: \"Run output was retrieved successfully.\",  # noqa\n        400: \"A job run with multiple tasks was provided.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    params = {\n        \"run_id\": run_id,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.GET,\n        params=params,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
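        A hypothetical sketch of retrieving a single task run's output; the credentials block name is a placeholder:

```python
from prefect import flow
from prefect_databricks import DatabricksCredentials
from prefect_databricks.jobs import jobs_runs_get_output


@flow
async def get_notebook_result_flow(run_id: int):
    creds = await DatabricksCredentials.load("databricks-creds")  # placeholder block name
    output = await jobs_runs_get_output(run_id=run_id, databricks_credentials=creds)
    # For notebook tasks, the value passed to dbutils.notebook.exit() appears here.
    return output.get("notebook_output")
```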
        "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_runs_list","title":"jobs_runs_list async","text":"

        List runs in descending order by start time.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | databricks_credentials | DatabricksCredentials | Credentials to use for authentication with Databricks. | required |
        | active_only | bool | If active_only is true, only active runs are included in the results; otherwise, lists both active and completed runs. An active run is a run in the PENDING, RUNNING, or TERMINATING state. This field cannot be true when completed_only is true. | False |
        | completed_only | bool | If completed_only is true, only completed runs are included in the results; otherwise, lists both active and completed runs. This field cannot be true when active_only is true. | False |
        | job_id | Optional[int] | The job for which to list runs. If omitted, the Jobs service lists runs from all jobs. | None |
        | offset | int | The offset of the first run to return, relative to the most recent run. | 0 |
        | limit | int | The number of runs to return. This value must be greater than 0 and less than 25. The default value is 25. If a request specifies a limit of 0, the service instead uses the maximum limit. | 25 |
        | run_type | Optional[str] | The type of runs to return. For a description of run types, see Run. | None |
        | expand_tasks | bool | Whether to include task and cluster details in the response. | False |
        | start_time_from | Optional[int] | Show runs that started at or after this value. The value must be a UTC timestamp in milliseconds. Can be combined with start_time_to to filter by a time range. | None |
        | start_time_to | Optional[int] | Show runs that started at or before this value. The value must be a UTC timestamp in milliseconds. Can be combined with start_time_from to filter by a time range. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | Upon success, a dict of the response: `runs: List["models.Run"]`, `has_more: bool`. |

        API Endpoint:

        /2.1/jobs/runs/list

        API Responses:

        | Response | Description |
        | --- | --- |
        | 200 | List of runs was retrieved successfully. |
        | 400 | The request was malformed. See JSON response for error details. |
        | 401 | The request was unauthorized. |
        | 500 | The request was not handled correctly due to a server error. |

        Source code in prefect_databricks/jobs.py
        @task\nasync def jobs_runs_list(\n    databricks_credentials: \"DatabricksCredentials\",\n    active_only: bool = False,\n    completed_only: bool = False,\n    job_id: Optional[int] = None,\n    offset: int = 0,\n    limit: int = 25,\n    run_type: Optional[str] = None,\n    expand_tasks: bool = False,\n    start_time_from: Optional[int] = None,\n    start_time_to: Optional[int] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List runs in descending order by start time.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        active_only:\n            If active_only is `true`, only active runs are included in the results;\n            otherwise, lists both active and completed runs. An active\n            run is a run in the `PENDING`, `RUNNING`, or `TERMINATING`.\n            This field cannot be `true` when completed_only is `true`.\n        completed_only:\n            If completed_only is `true`, only completed runs are included in the\n            results; otherwise, lists both active and completed runs.\n            This field cannot be `true` when active_only is `true`.\n        job_id:\n            The job for which to list runs. If omitted, the Jobs service lists runs\n            from all jobs.\n        offset:\n            The offset of the first run to return, relative to the most recent run.\n        limit:\n            The number of runs to return. This value must be greater than 0 and less\n            than 25\\. The default value is 25\\. If a request specifies a\n            limit of 0, the service instead uses the maximum limit.\n        run_type:\n            The type of runs to return. For a description of run types, see\n            [Run](https://docs.databricks.com/dev-\n            tools/api/latest/jobs.html\n            operation/JobsRunsGet).\n        expand_tasks:\n            Whether to include task and cluster details in the response.\n        start_time_from:\n            Show runs that started _at or after_ this value. The value must be a UTC\n            timestamp in milliseconds. Can be combined with\n            _start_time_to_ to filter by a time range.\n        start_time_to:\n            Show runs that started _at or before_ this value. The value must be a\n            UTC timestamp in milliseconds. Can be combined with\n            _start_time_from_ to filter by a time range.\n\n    Returns:\n        Upon success, a dict of the response. </br>- `runs: List[\"models.Run\"]`</br>- `has_more: bool`</br>\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/runs/list`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | List of runs was retrieved successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/runs/list\"  # noqa\n\n    responses = {\n        200: \"List of runs was retrieved successfully.\",  # noqa\n        400: \"The request was malformed. 
See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    params = {\n        \"active_only\": active_only,\n        \"completed_only\": completed_only,\n        \"job_id\": job_id,\n        \"offset\": offset,\n        \"limit\": limit,\n        \"run_type\": run_type,\n        \"expand_tasks\": expand_tasks,\n        \"start_time_from\": start_time_from,\n        \"start_time_to\": start_time_to,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.GET,\n        params=params,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
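        A hypothetical sketch of paging through a job's active runs; the credentials block name is a placeholder:

```python
from prefect import flow
from prefect_databricks import DatabricksCredentials
from prefect_databricks.jobs import jobs_runs_list


@flow
async def list_active_runs_flow(job_id: int):
    creds = await DatabricksCredentials.load("databricks-creds")  # placeholder block name
    page = await jobs_runs_list(
        databricks_credentials=creds,
        job_id=job_id,
        active_only=True,
        limit=25,
    )
    # "has_more" signals that another page can be fetched by increasing "offset".
    return page.get("runs", [])
```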
        "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_runs_repair","title":"jobs_runs_repair async","text":"

        Re-run one or more tasks. Tasks are re-run as part of the original job run, use the current job and task settings, and can be viewed in the history for the original job run.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | databricks_credentials | DatabricksCredentials | Credentials to use for authentication with Databricks. | required |
        | run_id | Optional[int] | The job run ID of the run to repair. The run must not be in progress, e.g. `455644833`. | None |
        | rerun_tasks | Optional[List[str]] | The task keys of the task runs to repair, e.g. `["task0", "task1"]`. | None |
        | latest_repair_id | Optional[int] | The ID of the latest repair. This parameter is not required when repairing a run for the first time, but must be provided on subsequent requests to repair the same run, e.g. `734650698524280`. | None |
        | rerun_all_failed_tasks | bool | If true, repair all failed tasks. Only one of rerun_tasks or rerun_all_failed_tasks can be used. | False |
        | jar_params | Optional[List[str]] | A list of parameters for jobs with Spark JAR tasks, for example `'jar_params': ['john doe', '35']`. The parameters are used to invoke the main function of the main class specified in the Spark JAR task. If not specified upon run-now, it defaults to an empty list. jar_params cannot be specified in conjunction with notebook_params. The JSON representation of this field (for example `{'jar_params':['john doe','35']}`) cannot exceed 10,000 bytes. Use Task parameter variables to set parameters containing information about job runs, e.g. `["john", "doe", "35"]`. | None |
        | notebook_params | Optional[Dict] | A map from keys to values for jobs with a notebook task, for example `'notebook_params': {'name': 'john doe', 'age': '35'}`. The map is passed to the notebook and is accessible through the dbutils.widgets.get function. If not specified upon run-now, the triggered run uses the job's base parameters. notebook_params cannot be specified in conjunction with jar_params. Use Task parameter variables to set parameters containing information about job runs. The JSON representation of this field (for example `{'notebook_params':{'name':'john doe','age':'35'}}`) cannot exceed 10,000 bytes, e.g. `{"name": "john doe", "age": "35"}`. | None |
        | python_params | Optional[List[str]] | A list of parameters for jobs with Python tasks, for example `'python_params': ['john doe', '35']`. The parameters are passed to the Python file as command-line parameters. If specified upon run-now, they overwrite the parameters specified in the job setting. The JSON representation of this field (for example `{'python_params':['john doe','35']}`) cannot exceed 10,000 bytes. Use Task parameter variables to set parameters containing information about job runs. Important: these parameters accept only Latin characters (ASCII character set). Using non-ASCII characters returns an error. Examples of invalid, non-ASCII characters are Chinese, Japanese kanjis, and emojis, e.g. `["john doe", "35"]`. | None |
        | spark_submit_params | Optional[List[str]] | A list of parameters for jobs with a spark submit task, for example `'spark_submit_params': ['--class', 'org.apache.spark.examples.SparkPi']`. The parameters are passed to the spark-submit script as command-line parameters. If specified upon run-now, they overwrite the parameters specified in the job setting. The JSON representation of this field (for example `{'python_params':['john doe','35']}`) cannot exceed 10,000 bytes. Use Task parameter variables to set parameters containing information about job runs. Important: these parameters accept only Latin characters (ASCII character set). Using non-ASCII characters returns an error. Examples of invalid, non-ASCII characters are Chinese, Japanese kanjis, and emojis, e.g. `["--class", "org.apache.spark.examples.SparkPi"]`. | None |
        | python_named_params | Optional[Dict] | A map from keys to values for jobs with a Python wheel task, for example `'python_named_params': {'name': 'task', 'data': 'dbfs:/path/to/data.json'}`, e.g. `{"name": "task", "data": "dbfs:/path/to/data.json"}`. | None |
        | pipeline_params | Optional[str] |  | None |
        | sql_params | Optional[Dict] | A map from keys to values for SQL tasks, for example `'sql_params': {'name': 'john doe', 'age': '35'}`. The SQL alert task does not support custom parameters, e.g. `{"name": "john doe", "age": "35"}`. | None |
        | dbt_commands | Optional[List] | An array of commands to execute for jobs with the dbt task, for example `'dbt_commands': ['dbt deps', 'dbt seed', 'dbt run']`, e.g. `["dbt deps", "dbt seed", "dbt run"]`. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | Upon success, a dict of the response: `repair_id: int`. |

        API Endpoint:

        /2.1/jobs/runs/repair

        API Responses:

        | Response | Description |
        | --- | --- |
        | 200 | Run repair was initiated. |
        | 400 | The request was malformed. See JSON response for error details. |
        | 401 | The request was unauthorized. |
        | 500 | The request was not handled correctly due to a server error. |

        Source code in prefect_databricks/jobs.py
        @task\nasync def jobs_runs_repair(\n    databricks_credentials: \"DatabricksCredentials\",\n    run_id: Optional[int] = None,\n    rerun_tasks: Optional[List[str]] = None,\n    latest_repair_id: Optional[int] = None,\n    rerun_all_failed_tasks: bool = False,\n    jar_params: Optional[List[str]] = None,\n    notebook_params: Optional[Dict] = None,\n    python_params: Optional[List[str]] = None,\n    spark_submit_params: Optional[List[str]] = None,\n    python_named_params: Optional[Dict] = None,\n    pipeline_params: Optional[str] = None,\n    sql_params: Optional[Dict] = None,\n    dbt_commands: Optional[List] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Re-run one or more tasks. Tasks are re-run as part of the original job run, use\n    the current job and task settings, and can be viewed in the history for the\n    original job run.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        run_id:\n            The job run ID of the run to repair. The run must not be in progress,\n            e.g. `455644833`.\n        rerun_tasks:\n            The task keys of the task runs to repair, e.g.\n            ```\n            [\"task0\", \"task1\"]\n            ```\n        latest_repair_id:\n            The ID of the latest repair. This parameter is not required when\n            repairing a run for the first time, but must be provided on\n            subsequent requests to repair the same run, e.g.\n            `734650698524280`.\n        rerun_all_failed_tasks:\n            If true, repair all failed tasks. Only one of rerun_tasks or\n            rerun_all_failed_tasks can be used.\n        jar_params:\n            A list of parameters for jobs with Spark JAR tasks, for example\n            `'jar_params': ['john doe', '35']`. The parameters are used\n            to invoke the main function of the main class specified in\n            the Spark JAR task. If not specified upon `run-now`, it\n            defaults to an empty list. jar_params cannot be specified in\n            conjunction with notebook_params. The JSON representation of\n            this field (for example `{'jar_params':['john doe','35']}`)\n            cannot exceed 10,000 bytes.  Use [Task parameter\n            variables](https://docs.databricks.com/jobs.html\n            parameter-variables) to set parameters containing\n            information about job runs, e.g.\n            ```\n            [\"john\", \"doe\", \"35\"]\n            ```\n        notebook_params:\n            A map from keys to values for jobs with notebook task, for example\n            `'notebook_params': {'name': 'john doe', 'age': '35'}`. The\n            map is passed to the notebook and is accessible through the\n            [dbutils.widgets.get](https://docs.databricks.com/dev-\n            tools/databricks-utils.html\n            dbutils-widgets) function.  If not specified upon `run-now`,\n            the triggered run uses the job\u2019s base parameters.\n            notebook_params cannot be specified in conjunction with\n            jar_params.  Use [Task parameter\n            variables](https://docs.databricks.com/jobs.html\n            parameter-variables) to set parameters containing\n            information about job runs.  
The JSON representation of this\n            field (for example `{'notebook_params':{'name':'john\n            doe','age':'35'}}`) cannot exceed 10,000 bytes, e.g.\n            ```\n            {\"name\": \"john doe\", \"age\": \"35\"}\n            ```\n        python_params:\n            A list of parameters for jobs with Python tasks, for example\n            `'python_params': ['john doe', '35']`. The parameters are\n            passed to Python file as command-line parameters. If\n            specified upon `run-now`, it would overwrite the parameters\n            specified in job setting. The JSON representation of this\n            field (for example `{'python_params':['john doe','35']}`)\n            cannot exceed 10,000 bytes.  Use [Task parameter\n            variables](https://docs.databricks.com/jobs.html\n            parameter-variables) to set parameters containing\n            information about job runs.  Important  These parameters\n            accept only Latin characters (ASCII character set). Using\n            non-ASCII characters returns an error. Examples of invalid,\n            non-ASCII characters are Chinese, Japanese kanjis, and\n            emojis, e.g.\n            ```\n            [\"john doe\", \"35\"]\n            ```\n        spark_submit_params:\n            A list of parameters for jobs with spark submit task, for example\n            `'spark_submit_params': ['--class',\n            'org.apache.spark.examples.SparkPi']`. The parameters are\n            passed to spark-submit script as command-line parameters. If\n            specified upon `run-now`, it would overwrite the parameters\n            specified in job setting. The JSON representation of this\n            field (for example `{'python_params':['john doe','35']}`)\n            cannot exceed 10,000 bytes.  Use [Task parameter\n            variables](https://docs.databricks.com/jobs.html\n            parameter-variables) to set parameters containing\n            information about job runs.  Important  These parameters\n            accept only Latin characters (ASCII character set). Using\n            non-ASCII characters returns an error. Examples of invalid,\n            non-ASCII characters are Chinese, Japanese kanjis, and\n            emojis, e.g.\n            ```\n            [\"--class\", \"org.apache.spark.examples.SparkPi\"]\n            ```\n        python_named_params:\n            A map from keys to values for jobs with Python wheel task, for example\n            `'python_named_params': {'name': 'task', 'data':\n            'dbfs:/path/to/data.json'}`, e.g.\n            ```\n            {\"name\": \"task\", \"data\": \"dbfs:/path/to/data.json\"}\n            ```\n        pipeline_params:\n\n        sql_params:\n            A map from keys to values for SQL tasks, for example `'sql_params':\n            {'name': 'john doe', 'age': '35'}`. The SQL alert task does\n            not support custom parameters, e.g.\n            ```\n            {\"name\": \"john doe\", \"age\": \"35\"}\n            ```\n        dbt_commands:\n            An array of commands to execute for jobs with the dbt task, for example\n            `'dbt_commands': ['dbt deps', 'dbt seed', 'dbt run']`, e.g.\n            ```\n            [\"dbt deps\", \"dbt seed\", \"dbt run\"]\n            ```\n\n    Returns:\n        Upon success, a dict of the response. 
</br>- `repair_id: int`</br>\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/runs/repair`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Run repair was initiated. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/runs/repair\"  # noqa\n\n    responses = {\n        200: \"Run repair was initiated.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    json_payload = {\n        \"run_id\": run_id,\n        \"rerun_tasks\": rerun_tasks,\n        \"latest_repair_id\": latest_repair_id,\n        \"rerun_all_failed_tasks\": rerun_all_failed_tasks,\n        \"jar_params\": jar_params,\n        \"notebook_params\": notebook_params,\n        \"python_params\": python_params,\n        \"spark_submit_params\": spark_submit_params,\n        \"python_named_params\": python_named_params,\n        \"pipeline_params\": pipeline_params,\n        \"sql_params\": sql_params,\n        \"dbt_commands\": dbt_commands,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.POST,\n        json=json_payload,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
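        For example, a minimal usage sketch (assuming a saved DatabricksCredentials block named my-block; the run ID is the illustrative value from the docstring above) that repairs a completed run by re-running only its failed tasks:

        from prefect import flow\nfrom prefect_databricks import DatabricksCredentials\nfrom prefect_databricks.jobs import jobs_runs_repair\n@flow\ndef example_jobs_runs_repair_flow():\n    databricks_credentials = DatabricksCredentials.load(\"my-block\")\n    # re-run only the failed tasks of a completed run (run ID is illustrative)\n    response = jobs_runs_repair(\n        databricks_credentials,\n        run_id=455644833,\n        rerun_all_failed_tasks=True,\n    )\n    return response\n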
        "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_runs_submit","title":"jobs_runs_submit async","text":"

        Submit a one-time run. This endpoint allows you to submit a workload directly without creating a job. Use the jobs/runs/get API to check the run state after the job is submitted.

        Parameters:

        Name Type Description Default databricks_credentials DatabricksCredentials

        Credentials to use for authentication with Databricks.

        required tasks Optional[List[RunSubmitTaskSettings]]

        , e.g.

        [\n    {\n        \"task_key\": \"Sessionize\",\n        \"description\": \"Extracts session data from events\",\n        \"depends_on\": [],\n        \"existing_cluster_id\": \"0923-164208-meows279\",\n        \"spark_jar_task\": {\n            \"main_class_name\": \"com.databricks.Sessionize\",\n            \"parameters\": [\"--data\", \"dbfs:/path/to/data.json\"],\n        },\n        \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}],\n        \"timeout_seconds\": 86400,\n    },\n    {\n        \"task_key\": \"Orders_Ingest\",\n        \"description\": \"Ingests order data\",\n        \"depends_on\": [],\n        \"existing_cluster_id\": \"0923-164208-meows279\",\n        \"spark_jar_task\": {\n            \"main_class_name\": \"com.databricks.OrdersIngest\",\n            \"parameters\": [\"--data\", \"dbfs:/path/to/order-data.json\"],\n        },\n        \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}],\n        \"timeout_seconds\": 86400,\n    },\n    {\n        \"task_key\": \"Match\",\n        \"description\": \"Matches orders with user sessions\",\n        \"depends_on\": [\n            {\"task_key\": \"Orders_Ingest\"},\n            {\"task_key\": \"Sessionize\"},\n        ],\n        \"new_cluster\": {\n            \"spark_version\": \"7.3.x-scala2.12\",\n            \"node_type_id\": \"i3.xlarge\",\n            \"spark_conf\": {\"spark.speculation\": True},\n            \"aws_attributes\": {\n                \"availability\": \"SPOT\",\n                \"zone_id\": \"us-west-2a\",\n            },\n            \"autoscale\": {\"min_workers\": 2, \"max_workers\": 16},\n        },\n        \"notebook_task\": {\n            \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n            \"source\": \"WORKSPACE\",\n            \"base_parameters\": {\"name\": \"John Doe\", \"age\": \"35\"},\n        },\n        \"timeout_seconds\": 86400,\n    },\n]\n

        None run_name Optional[str]

        An optional name for the run. The default value is Untitled, e.g. A multitask job run.

        None webhook_notifications WebhookNotifications

        A collection of system notification IDs to notify when runs of this job begin or complete. The default behavior is to not send any system notifications. Key-values: - on_start: An optional list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified for the on_start property, e.g.

        [\n    {\"id\": \"03dd86e4-57ef-4818-a950-78e41a1d71ab\"},\n    {\"id\": \"0481e838-0a59-4eff-9541-a4ca6f149574\"},\n]\n
        - on_success: An optional list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified for the on_success property, e.g.
        [{\"id\": \"03dd86e4-57ef-4818-a950-78e41a1d71ab\"}]\n
        - on_failure: An optional list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified for the on_failure property, e.g.
        [{\"id\": \"0481e838-0a59-4eff-9541-a4ca6f149574\"}]\n

        None git_source GitSource

        This functionality is in Public Preview. An optional specification for a remote repository containing the notebooks used by this job's notebook tasks, e.g. { \"git_url\": \"https://github.com/databricks/databricks-cli\", \"git_branch\": \"main\", \"git_provider\": \"gitHub\", } Key-values: - git_url: URL of the repository to be cloned by this job. The maximum length is 300 characters, e.g. https://github.com/databricks/databricks-cli. - git_provider: Unique identifier of the service used to host the Git repository. The value is case insensitive, e.g. github. - git_branch: Name of the branch to be checked out and used by this job. This field cannot be specified in conjunction with git_tag or git_commit. The maximum length is 255 characters, e.g. main. - git_tag: Name of the tag to be checked out and used by this job. This field cannot be specified in conjunction with git_branch or git_commit. The maximum length is 255 characters, e.g. release-1.0.0. - git_commit: Commit to be checked out and used by this job. This field cannot be specified in conjunction with git_branch or git_tag. The maximum length is 64 characters, e.g. e0056d01. - git_snapshot: Read-only state of the remote repository at the time the job was run. This field is only included on job runs.

        None timeout_seconds Optional[int]

        An optional timeout applied to each run of this job. The default behavior is to have no timeout, e.g. 86400.

        None idempotency_token Optional[str]

        An optional token that can be used to guarantee the idempotency of job run requests. If a run with the provided token already exists, the request does not create a new run but returns the ID of the existing run instead. If a run with the provided token is deleted, an error is returned. If you specify the idempotency token, upon failure you can retry until the request succeeds. Databricks guarantees that exactly one run is launched with that idempotency token. This token must have at most 64 characters. For more information, see How to ensure idempotency for jobs, e.g. 8f018174-4792-40d5-bcbc-3e6a527352c8.

        None access_control_list Optional[List[AccessControlRequest]]

        List of permissions to set on the job.

        None

        Returns:

        Type Description Dict[str, Any]

        Upon success, a dict of the response. - run_id: int

        API Endpoint:

        /2.1/jobs/runs/submit

        API Responses: Response Description 200 Run was created and started successfully. 400 The request was malformed. See JSON response for error details. 401 The request was unauthorized. 500 The request was not handled correctly due to a server error. Source code in prefect_databricks/jobs.py
        @task\nasync def jobs_runs_submit(\n    databricks_credentials: \"DatabricksCredentials\",\n    tasks: Optional[List[\"models.RunSubmitTaskSettings\"]] = None,\n    run_name: Optional[str] = None,\n    webhook_notifications: \"models.WebhookNotifications\" = None,\n    git_source: \"models.GitSource\" = None,\n    timeout_seconds: Optional[int] = None,\n    idempotency_token: Optional[str] = None,\n    access_control_list: Optional[List[\"models.AccessControlRequest\"]] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Submit a one-time run. This endpoint allows you to submit a workload directly\n    without creating a job. Use the `jobs/runs/get` API to check the run state\n    after the job is submitted.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        tasks:\n            , e.g.\n            ```\n            [\n                {\n                    \"task_key\": \"Sessionize\",\n                    \"description\": \"Extracts session data from events\",\n                    \"depends_on\": [],\n                    \"existing_cluster_id\": \"0923-164208-meows279\",\n                    \"spark_jar_task\": {\n                        \"main_class_name\": \"com.databricks.Sessionize\",\n                        \"parameters\": [\"--data\", \"dbfs:/path/to/data.json\"],\n                    },\n                    \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}],\n                    \"timeout_seconds\": 86400,\n                },\n                {\n                    \"task_key\": \"Orders_Ingest\",\n                    \"description\": \"Ingests order data\",\n                    \"depends_on\": [],\n                    \"existing_cluster_id\": \"0923-164208-meows279\",\n                    \"spark_jar_task\": {\n                        \"main_class_name\": \"com.databricks.OrdersIngest\",\n                        \"parameters\": [\"--data\", \"dbfs:/path/to/order-data.json\"],\n                    },\n                    \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}],\n                    \"timeout_seconds\": 86400,\n                },\n                {\n                    \"task_key\": \"Match\",\n                    \"description\": \"Matches orders with user sessions\",\n                    \"depends_on\": [\n                        {\"task_key\": \"Orders_Ingest\"},\n                        {\"task_key\": \"Sessionize\"},\n                    ],\n                    \"new_cluster\": {\n                        \"spark_version\": \"7.3.x-scala2.12\",\n                        \"node_type_id\": \"i3.xlarge\",\n                        \"spark_conf\": {\"spark.speculation\": True},\n                        \"aws_attributes\": {\n                            \"availability\": \"SPOT\",\n                            \"zone_id\": \"us-west-2a\",\n                        },\n                        \"autoscale\": {\"min_workers\": 2, \"max_workers\": 16},\n                    },\n                    \"notebook_task\": {\n                        \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n                        \"source\": \"WORKSPACE\",\n                        \"base_parameters\": {\"name\": \"John Doe\", \"age\": \"35\"},\n                    },\n                    \"timeout_seconds\": 86400,\n                },\n            ]\n            ```\n        run_name:\n            An optional name for the run. 
The default value is `Untitled`, e.g. `A\n            multitask job run`.\n        webhook_notifications:\n            A collection of system notification IDs to notify when runs of this job\n            begin or complete. The default behavior is to not send any\n            system notifications. Key-values:\n            - on_start:\n                An optional list of notification IDs to call when the run\n                starts. A maximum of 3 destinations can be specified for the\n                `on_start` property, e.g.\n                ```\n                [\n                    {\"id\": \"03dd86e4-57ef-4818-a950-78e41a1d71ab\"},\n                    {\"id\": \"0481e838-0a59-4eff-9541-a4ca6f149574\"},\n                ]\n                ```\n            - on_success:\n                An optional list of notification IDs to call when the run\n                completes successfully. A maximum of 3 destinations can be\n                specified for the `on_success` property, e.g.\n                ```\n                [{\"id\": \"03dd86e4-57ef-4818-a950-78e41a1d71ab\"}]\n                ```\n            - on_failure:\n                An optional list of notification IDs to call when the run\n                fails. A maximum of 3 destinations can be specified for the\n                `on_failure` property, e.g.\n                ```\n                [{\"id\": \"0481e838-0a59-4eff-9541-a4ca6f149574\"}]\n                ```\n        git_source:\n            This functionality is in Public Preview.  An optional specification for\n            a remote repository containing the notebooks used by this\n            job's notebook tasks, e.g.\n            ```\n            {\n                \"git_url\": \"https://github.com/databricks/databricks-cli\",\n                \"git_branch\": \"main\",\n                \"git_provider\": \"gitHub\",\n            }\n            ``` Key-values:\n            - git_url:\n                URL of the repository to be cloned by this job. The maximum\n                length is 300 characters, e.g.\n                `https://github.com/databricks/databricks-cli`.\n            - git_provider:\n                Unique identifier of the service used to host the Git\n                repository. The value is case insensitive, e.g. `github`.\n            - git_branch:\n                Name of the branch to be checked out and used by this job.\n                This field cannot be specified in conjunction with git_tag\n                or git_commit. The maximum length is 255 characters, e.g.\n                `main`.\n            - git_tag:\n                Name of the tag to be checked out and used by this job. This\n                field cannot be specified in conjunction with git_branch or\n                git_commit. The maximum length is 255 characters, e.g.\n                `release-1.0.0`.\n            - git_commit:\n                Commit to be checked out and used by this job. This field\n                cannot be specified in conjunction with git_branch or\n                git_tag. The maximum length is 64 characters, e.g.\n                `e0056d01`.\n            - git_snapshot:\n                Read-only state of the remote repository at the time the job was run.\n                            This field is only included on job runs.\n        timeout_seconds:\n            An optional timeout applied to each run of this job. The default\n            behavior is to have no timeout, e.g. 
`86400`.\n        idempotency_token:\n            An optional token that can be used to guarantee the idempotency of job\n            run requests. If a run with the provided token already\n            exists, the request does not create a new run but returns\n            the ID of the existing run instead. If a run with the\n            provided token is deleted, an error is returned.  If you\n            specify the idempotency token, upon failure you can retry\n            until the request succeeds. Databricks guarantees that\n            exactly one run is launched with that idempotency token.\n            This token must have at most 64 characters.  For more\n            information, see [How to ensure idempotency for\n            jobs](https://kb.databricks.com/jobs/jobs-idempotency.html),\n            e.g. `8f018174-4792-40d5-bcbc-3e6a527352c8`.\n        access_control_list:\n            List of permissions to set on the job.\n\n    Returns:\n        Upon success, a dict of the response. </br>- `run_id: int`</br>\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/runs/submit`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Run was created and started successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/runs/submit\"  # noqa\n\n    responses = {\n        200: \"Run was created and started successfully.\",  # noqa\n        400: \"The request was malformed. See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    json_payload = {\n        \"tasks\": tasks,\n        \"run_name\": run_name,\n        \"webhook_notifications\": webhook_notifications,\n        \"git_source\": git_source,\n        \"timeout_seconds\": timeout_seconds,\n        \"idempotency_token\": idempotency_token,\n        \"access_control_list\": access_control_list,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.POST,\n        json=json_payload,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
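        A minimal usage sketch (assuming a saved DatabricksCredentials block named my-block; the cluster ID, notebook path, and run name are the illustrative values from the examples above) that submits a single notebook task as a one-time run:

        from prefect import flow\nfrom prefect_databricks import DatabricksCredentials\nfrom prefect_databricks.jobs import jobs_runs_submit\n@flow\ndef example_jobs_runs_submit_flow():\n    databricks_credentials = DatabricksCredentials.load(\"my-block\")\n    # a single notebook task on an existing cluster (IDs and paths are illustrative)\n    tasks = [\n        {\n            \"task_key\": \"Match\",\n            \"existing_cluster_id\": \"0923-164208-meows279\",\n            \"notebook_task\": {\n                \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n                \"source\": \"WORKSPACE\",\n            },\n        }\n    ]\n    response = jobs_runs_submit(\n        databricks_credentials,\n        tasks=tasks,\n        run_name=\"A multitask job run\",\n    )\n    return response\n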
        "},{"location":"integrations/prefect-databricks/jobs/#prefect_databricks.jobs.jobs_update","title":"jobs_update async","text":"

        Add, update, or remove specific settings of an existing job. Use the Reset endpoint to overwrite all job settings.

        Parameters:

        Name Type Description Default databricks_credentials DatabricksCredentials

        Credentials to use for authentication with Databricks.

        required job_id Optional[int]

        The canonical identifier of the job to update. This field is required, e.g. 11223344.

        None new_settings JobSettings

        The new settings for the job. Any top-level fields specified in new_settings are completely replaced. Partially updating nested fields is not supported. Changes to the field JobSettings.timeout_seconds are applied to active runs. Changes to other fields are applied to future runs only. Key-values: - name: An optional name for the job, e.g. A multitask job. - tags: A map of tags associated with the job. These are forwarded to the cluster as cluster tags for jobs clusters, and are subject to the same limitations as cluster tags. A maximum of 25 tags can be added to the job, e.g.

        {\"cost-center\": \"engineering\", \"team\": \"jobs\"}\n
        - tasks: A list of task specifications to be executed by this job, e.g.
        [\n    {\n        \"task_key\": \"Sessionize\",\n        \"description\": \"Extracts session data from events\",\n        \"depends_on\": [],\n        \"existing_cluster_id\": \"0923-164208-meows279\",\n        \"spark_jar_task\": {\n            \"main_class_name\": \"com.databricks.Sessionize\",\n            \"parameters\": [\n                \"--data\",\n                \"dbfs:/path/to/data.json\",\n            ],\n        },\n        \"libraries\": [\n            {\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}\n        ],\n        \"timeout_seconds\": 86400,\n        \"max_retries\": 3,\n        \"min_retry_interval_millis\": 2000,\n        \"retry_on_timeout\": False,\n    },\n    {\n        \"task_key\": \"Orders_Ingest\",\n        \"description\": \"Ingests order data\",\n        \"depends_on\": [],\n        \"job_cluster_key\": \"auto_scaling_cluster\",\n        \"spark_jar_task\": {\n            \"main_class_name\": \"com.databricks.OrdersIngest\",\n            \"parameters\": [\n                \"--data\",\n                \"dbfs:/path/to/order-data.json\",\n            ],\n        },\n        \"libraries\": [\n            {\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}\n        ],\n        \"timeout_seconds\": 86400,\n        \"max_retries\": 3,\n        \"min_retry_interval_millis\": 2000,\n        \"retry_on_timeout\": False,\n    },\n    {\n        \"task_key\": \"Match\",\n        \"description\": \"Matches orders with user sessions\",\n        \"depends_on\": [\n            {\"task_key\": \"Orders_Ingest\"},\n            {\"task_key\": \"Sessionize\"},\n        ],\n        \"new_cluster\": {\n            \"spark_version\": \"7.3.x-scala2.12\",\n            \"node_type_id\": \"i3.xlarge\",\n            \"spark_conf\": {\"spark.speculation\": True},\n            \"aws_attributes\": {\n                \"availability\": \"SPOT\",\n                \"zone_id\": \"us-west-2a\",\n            },\n            \"autoscale\": {\n                \"min_workers\": 2,\n                \"max_workers\": 16,\n            },\n        },\n        \"notebook_task\": {\n            \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n            \"source\": \"WORKSPACE\",\n            \"base_parameters\": {\n                \"name\": \"John Doe\",\n                \"age\": \"35\",\n            },\n        },\n        \"timeout_seconds\": 86400,\n        \"max_retries\": 3,\n        \"min_retry_interval_millis\": 2000,\n        \"retry_on_timeout\": False,\n    },\n]\n
        - job_clusters: A list of job cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster. You must declare dependent libraries in task settings, e.g.
        [\n    {\n        \"job_cluster_key\": \"auto_scaling_cluster\",\n        \"new_cluster\": {\n            \"spark_version\": \"7.3.x-scala2.12\",\n            \"node_type_id\": \"i3.xlarge\",\n            \"spark_conf\": {\"spark.speculation\": True},\n            \"aws_attributes\": {\n                \"availability\": \"SPOT\",\n                \"zone_id\": \"us-west-2a\",\n            },\n            \"autoscale\": {\n                \"min_workers\": 2,\n                \"max_workers\": 16,\n            },\n        },\n    }\n]\n
        - email_notifications: An optional set of email addresses that is notified when runs of this job begin or complete as well as when this job is deleted. The default behavior is to not send any emails. - webhook_notifications: A collection of system notification IDs to notify when runs of this job begin or complete. The default behavior is to not send any system notifications. - timeout_seconds: An optional timeout applied to each run of this job. The default behavior is to have no timeout, e.g. 86400. - schedule: An optional periodic schedule for this job. The default behavior is that the job only runs when triggered by clicking \u201cRun Now\u201d in the Jobs UI or sending an API request to runNow. - max_concurrent_runs: An optional maximum allowed number of concurrent runs of the job. Set this value if you want to be able to execute multiple runs of the same job concurrently. This is useful for example if you trigger your job on a frequent schedule and want to allow consecutive runs to overlap with each other, or if you want to trigger multiple runs which differ by their input parameters. This setting affects only new runs. For example, suppose the job\u2019s concurrency is 4 and there are 4 concurrent active runs. Then setting the concurrency to 3 won\u2019t kill any of the active runs. However, from then on, new runs are skipped unless there are fewer than 3 active runs. This value cannot exceed 1000. Setting this value to 0 causes all new runs to be skipped. The default behavior is to allow only 1 concurrent run, e.g. 10. - git_source: This functionality is in Public Preview. An optional specification for a remote repository containing the notebooks used by this job's notebook tasks, e.g.
        {\n    \"git_url\": \"https://github.com/databricks/databricks-cli\",\n    \"git_branch\": \"main\",\n    \"git_provider\": \"gitHub\",\n}\n
        - format: Indicates the format of the job. This field is ignored in Create/Update/Reset calls. When using the Jobs API 2.1, this value is always set to 'MULTI_TASK', e.g. MULTI_TASK.

        None fields_to_remove Optional[List[str]]

        Remove top-level fields in the job settings. Removing nested fields is not supported. This field is optional, e.g.

        [\"libraries\", \"schedule\"]\n

        None

        Returns:

        Type Description Dict[str, Any]

        Upon success, an empty dict.

        API Endpoint:

        /2.1/jobs/update

        API Responses: Response Description 200 Job was updated successfully. 400 The request was malformed. See JSON response for error details. 401 The request was unauthorized. 500 The request was not handled correctly due to a server error. Source code in prefect_databricks/jobs.py
        @task\nasync def jobs_update(\n    databricks_credentials: \"DatabricksCredentials\",\n    job_id: Optional[int] = None,\n    new_settings: \"models.JobSettings\" = None,\n    fields_to_remove: Optional[List[str]] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Add, update, or remove specific settings of an existing job. Use the Reset\n    endpoint to overwrite all job settings.\n\n    Args:\n        databricks_credentials:\n            Credentials to use for authentication with Databricks.\n        job_id:\n            The canonical identifier of the job to update. This field is required,\n            e.g. `11223344`.\n        new_settings:\n            The new settings for the job. Any top-level fields specified in\n            `new_settings` are completely replaced. Partially updating\n            nested fields is not supported.  Changes to the field\n            `JobSettings.timeout_seconds` are applied to active runs.\n            Changes to other fields are applied to future runs only. Key-values:\n            - name:\n                An optional name for the job, e.g. `A multitask job`.\n            - tags:\n                A map of tags associated with the job. These are forwarded\n                to the cluster as cluster tags for jobs clusters, and are\n                subject to the same limitations as cluster tags. A maximum\n                of 25 tags can be added to the job, e.g.\n                ```\n                {\"cost-center\": \"engineering\", \"team\": \"jobs\"}\n                ```\n            - tasks:\n                A list of task specifications to be executed by this job, e.g.\n                ```\n                [\n                    {\n                        \"task_key\": \"Sessionize\",\n                        \"description\": \"Extracts session data from events\",\n                        \"depends_on\": [],\n                        \"existing_cluster_id\": \"0923-164208-meows279\",\n                        \"spark_jar_task\": {\n                            \"main_class_name\": \"com.databricks.Sessionize\",\n                            \"parameters\": [\n                                \"--data\",\n                                \"dbfs:/path/to/data.json\",\n                            ],\n                        },\n                        \"libraries\": [\n                            {\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}\n                        ],\n                        \"timeout_seconds\": 86400,\n                        \"max_retries\": 3,\n                        \"min_retry_interval_millis\": 2000,\n                        \"retry_on_timeout\": False,\n                    },\n                    {\n                        \"task_key\": \"Orders_Ingest\",\n                        \"description\": \"Ingests order data\",\n                        \"depends_on\": [],\n                        \"job_cluster_key\": \"auto_scaling_cluster\",\n                        \"spark_jar_task\": {\n                            \"main_class_name\": \"com.databricks.OrdersIngest\",\n                            \"parameters\": [\n                                \"--data\",\n                                \"dbfs:/path/to/order-data.json\",\n                            ],\n                        },\n                        \"libraries\": [\n                            {\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}\n                        ],\n                        \"timeout_seconds\": 86400,\n                        
\"max_retries\": 3,\n                        \"min_retry_interval_millis\": 2000,\n                        \"retry_on_timeout\": False,\n                    },\n                    {\n                        \"task_key\": \"Match\",\n                        \"description\": \"Matches orders with user sessions\",\n                        \"depends_on\": [\n                            {\"task_key\": \"Orders_Ingest\"},\n                            {\"task_key\": \"Sessionize\"},\n                        ],\n                        \"new_cluster\": {\n                            \"spark_version\": \"7.3.x-scala2.12\",\n                            \"node_type_id\": \"i3.xlarge\",\n                            \"spark_conf\": {\"spark.speculation\": True},\n                            \"aws_attributes\": {\n                                \"availability\": \"SPOT\",\n                                \"zone_id\": \"us-west-2a\",\n                            },\n                            \"autoscale\": {\n                                \"min_workers\": 2,\n                                \"max_workers\": 16,\n                            },\n                        },\n                        \"notebook_task\": {\n                            \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n                            \"source\": \"WORKSPACE\",\n                            \"base_parameters\": {\n                                \"name\": \"John Doe\",\n                                \"age\": \"35\",\n                            },\n                        },\n                        \"timeout_seconds\": 86400,\n                        \"max_retries\": 3,\n                        \"min_retry_interval_millis\": 2000,\n                        \"retry_on_timeout\": False,\n                    },\n                ]\n                ```\n            - job_clusters:\n                A list of job cluster specifications that can be shared and\n                reused by tasks of this job. Libraries cannot be declared in\n                a shared job cluster. You must declare dependent libraries\n                in task settings, e.g.\n                ```\n                [\n                    {\n                        \"job_cluster_key\": \"auto_scaling_cluster\",\n                        \"new_cluster\": {\n                            \"spark_version\": \"7.3.x-scala2.12\",\n                            \"node_type_id\": \"i3.xlarge\",\n                            \"spark_conf\": {\"spark.speculation\": True},\n                            \"aws_attributes\": {\n                                \"availability\": \"SPOT\",\n                                \"zone_id\": \"us-west-2a\",\n                            },\n                            \"autoscale\": {\n                                \"min_workers\": 2,\n                                \"max_workers\": 16,\n                            },\n                        },\n                    }\n                ]\n                ```\n            - email_notifications:\n                An optional set of email addresses that is notified when\n                runs of this job begin or complete as well as when this job\n                is deleted. The default behavior is to not send any emails.\n            - webhook_notifications:\n                A collection of system notification IDs to notify when runs\n                of this job begin or complete. 
The default behavior is to\n                not send any system notifications.\n            - timeout_seconds:\n                An optional timeout applied to each run of this job. The\n                default behavior is to have no timeout, e.g. `86400`.\n            - schedule:\n                An optional periodic schedule for this job. The default\n                behavior is that the job only runs when triggered by\n                clicking \u201cRun Now\u201d in the Jobs UI or sending an API request\n                to `runNow`.\n            - max_concurrent_runs:\n                An optional maximum allowed number of concurrent runs of the\n                job.  Set this value if you want to be able to execute\n                multiple runs of the same job concurrently. This is useful\n                for example if you trigger your job on a frequent schedule\n                and want to allow consecutive runs to overlap with each\n                other, or if you want to trigger multiple runs which differ\n                by their input parameters.  This setting affects only new\n                runs. For example, suppose the job\u2019s concurrency is 4 and\n                there are 4 concurrent active runs. Then setting the\n                concurrency to 3 won\u2019t kill any of the active runs. However,\n                from then on, new runs are skipped unless there are fewer\n                than 3 active runs.  This value cannot exceed 1000\\. Setting\n                this value to 0 causes all new runs to be skipped. The\n                default behavior is to allow only 1 concurrent run, e.g.\n                `10`.\n            - git_source:\n                This functionality is in Public Preview.  An optional\n                specification for a remote repository containing the\n                notebooks used by this job's notebook tasks, e.g.\n                ```\n                {\n                    \"git_url\": \"https://github.com/databricks/databricks-cli\",\n                    \"git_branch\": \"main\",\n                    \"git_provider\": \"gitHub\",\n                }\n                ```\n            - format:\n                Used to tell what is the format of the job. This field is\n                ignored in Create/Update/Reset calls. When using the Jobs\n                API 2.1 this value is always set to `'MULTI_TASK'`, e.g.\n                `MULTI_TASK`.\n        fields_to_remove:\n            Remove top-level fields in the job settings. Removing nested fields is\n            not supported. This field is optional, e.g.\n            ```\n            [\"libraries\", \"schedule\"]\n            ```\n\n    Returns:\n        Upon success, an empty dict.\n\n    <h4>API Endpoint:</h4>\n    `/2.1/jobs/update`\n\n    <h4>API Responses:</h4>\n    | Response | Description |\n    | --- | --- |\n    | 200 | Job was updated successfully. |\n    | 400 | The request was malformed. See JSON response for error details. |\n    | 401 | The request was unauthorized. |\n    | 500 | The request was not handled correctly due to a server error. |\n    \"\"\"  # noqa\n    endpoint = \"/2.1/jobs/update\"  # noqa\n\n    responses = {\n        200: \"Job was updated successfully.\",  # noqa\n        400: \"The request was malformed. 
See JSON response for error details.\",  # noqa\n        401: \"The request was unauthorized.\",  # noqa\n        500: \"The request was not handled correctly due to a server error.\",  # noqa\n    }\n\n    json_payload = {\n        \"job_id\": job_id,\n        \"new_settings\": new_settings,\n        \"fields_to_remove\": fields_to_remove,\n    }\n\n    response = await execute_endpoint.fn(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.POST,\n        json=json_payload,\n    )\n\n    contents = _unpack_contents(response, responses)\n    return contents\n
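        A minimal usage sketch (assuming a saved DatabricksCredentials block named my-block; the job ID, name, and removed field are the illustrative values from the examples above) that renames the job and removes its schedule. Plain dicts are forwarded as-is in the request body:

        from prefect import flow\nfrom prefect_databricks import DatabricksCredentials\nfrom prefect_databricks.jobs import jobs_update\n@flow\ndef example_jobs_update_flow():\n    databricks_credentials = DatabricksCredentials.load(\"my-block\")\n    response = jobs_update(\n        databricks_credentials,\n        job_id=11223344,\n        new_settings={\"name\": \"A multitask job\"},\n        fields_to_remove=[\"schedule\"],\n    )\n    return response\n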
        "},{"location":"integrations/prefect-databricks/rest/","title":"Rest","text":""},{"location":"integrations/prefect-databricks/rest/#prefect_databricks.rest","title":"prefect_databricks.rest","text":"

        This module contains generic REST tasks.

        "},{"location":"integrations/prefect-databricks/rest/#prefect_databricks.rest.HTTPMethod","title":"HTTPMethod","text":"

        Bases: Enum

        Available HTTP request methods.

        Source code in prefect_databricks/rest.py
        class HTTPMethod(Enum):\n    \"\"\"\n    Available HTTP request methods.\n    \"\"\"\n\n    GET = \"get\"\n    POST = \"post\"\n    PUT = \"put\"\n    DELETE = \"delete\"\n    PATCH = \"patch\"\n
        "},{"location":"integrations/prefect-databricks/rest/#prefect_databricks.rest.execute_endpoint","title":"execute_endpoint async","text":"

        Generic function for executing REST endpoints.

        Parameters:

        Name Type Description Default endpoint str

        The endpoint route.

        required databricks_credentials DatabricksCredentials

        Credentials to use for authentication with Databricks.

        required http_method HTTPMethod

        Either GET, POST, PUT, DELETE, or PATCH.

        GET params Dict[str, Any]

        URL query parameters in the request.

        None json Dict[str, Any]

        JSON serializable object to include in the body of the request.

        None **kwargs Dict[str, Any]

        Additional keyword arguments to pass.

        {}

        Returns:

        Type Description Response

        The httpx.Response from interacting with the endpoint.

        Examples:

        Lists jobs on the Databricks instance.

        from prefect import flow\nfrom prefect_databricks import DatabricksCredentials\nfrom prefect_databricks.rest import execute_endpoint\n@flow\ndef example_execute_endpoint_flow():\n    endpoint = \"/2.1/jobs/list\"\n    databricks_credentials = DatabricksCredentials.load(\"my-block\")\n    params = {\n        \"limit\": 5,\n        \"offset\": None,\n        \"expand_tasks\": True,\n    }\n    response = execute_endpoint(\n        endpoint,\n        databricks_credentials,\n        params=params\n    )\n    return response.json()\n

        Source code in prefect_databricks/rest.py
        @task\nasync def execute_endpoint(\n    endpoint: str,\n    databricks_credentials: \"DatabricksCredentials\",\n    http_method: HTTPMethod = HTTPMethod.GET,\n    params: Dict[str, Any] = None,\n    json: Dict[str, Any] = None,\n    **kwargs: Dict[str, Any],\n) -> httpx.Response:\n    \"\"\"\n    Generic function for executing REST endpoints.\n\n    Args:\n        endpoint: The endpoint route.\n        databricks_credentials: Credentials to use for authentication with Databricks.\n        http_method: Either GET, POST, PUT, DELETE, or PATCH.\n        params: URL query parameters in the request.\n        json: JSON serializable object to include in the body of the request.\n        **kwargs: Additional keyword arguments to pass.\n\n    Returns:\n        The httpx.Response from interacting with the endpoint.\n\n    Examples:\n        Lists jobs on the Databricks instance.\n        ```python\n        from prefect import flow\n        from prefect_databricks import DatabricksCredentials\n        from prefect_databricks.rest import execute_endpoint\n        @flow\n        def example_execute_endpoint_flow():\n            endpoint = \"/2.1/jobs/list\"\n            databricks_credentials = DatabricksCredentials.load(\"my-block\")\n            params = {\n                \"limit\": 5,\n                \"offset\": None,\n                \"expand_tasks\": True,\n            }\n            response = execute_endpoint(\n                endpoint,\n                databricks_credentials,\n                params=params\n            )\n            return response.json()\n        ```\n    \"\"\"\n    if isinstance(http_method, HTTPMethod):\n        http_method = http_method.value\n\n    if params is not None:\n        stripped_params = strip_kwargs(**params)\n    else:\n        stripped_params = None\n\n    if json is not None:\n        kwargs[\"json\"] = strip_kwargs(**json)\n\n    async with databricks_credentials.get_client() as client:\n        response = await getattr(client, http_method)(\n            endpoint, params=stripped_params, **kwargs\n        )\n\n    return response\n
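        A second sketch (assuming the same my-block credentials block) showing a POST request with a JSON body, which is how the generated job tasks call this function internally; the endpoint and payload mirror the jobs_runs_repair example above:

        from prefect import flow\nfrom prefect_databricks import DatabricksCredentials\nfrom prefect_databricks.rest import HTTPMethod, execute_endpoint\n@flow\ndef example_post_endpoint_flow():\n    endpoint = \"/2.1/jobs/runs/repair\"\n    databricks_credentials = DatabricksCredentials.load(\"my-block\")\n    # POST with a JSON body; None-valued keys are stripped before sending\n    response = execute_endpoint(\n        endpoint,\n        databricks_credentials,\n        http_method=HTTPMethod.POST,\n        json={\"run_id\": 455644833, \"rerun_all_failed_tasks\": True},\n    )\n    return response.json()\n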
        "},{"location":"integrations/prefect-databricks/rest/#prefect_databricks.rest.serialize_model","title":"serialize_model","text":"

        Recursively serializes pydantic.BaseModel objects into JSON-compatible data; returns the original object if it is not a BaseModel.

        Parameters:

        Name Type Description Default obj Any

        Input object to serialize.

        required

        Returns:

        Type Description Any

        Serialized version of object.

        Source code in prefect_databricks/rest.py
        def serialize_model(obj: Any) -> Any:\n    \"\"\"\n    Recursively serializes `pydantic.BaseModel` into JSON;\n    returns original obj if not a `BaseModel`.\n\n    Args:\n        obj: Input object to serialize.\n\n    Returns:\n        Serialized version of object.\n    \"\"\"\n    if isinstance(obj, list):\n        return [serialize_model(o) for o in obj]\n    elif isinstance(obj, Dict):\n        return {k: serialize_model(v) for k, v in obj.items()}\n\n    if isinstance(obj, MODELS):\n        return model_dump(obj, mode=\"json\")\n    elif isinstance(obj, Enum):\n        return obj.value\n    return obj\n
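        A small sketch showing how a generated model (here AutoScale from prefect_databricks.models.jobs) is turned into plain JSON-compatible data; the values are illustrative:

        from prefect_databricks.models.jobs import AutoScale\nfrom prefect_databricks.rest import serialize_model\nautoscale = AutoScale(min_workers=2, max_workers=16)\n# models nested inside dicts or lists are serialized recursively\nserialize_model({\"autoscale\": autoscale})\n# e.g. {\"autoscale\": {\"max_workers\": 16, \"min_workers\": 2}}\n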
        "},{"location":"integrations/prefect-databricks/rest/#prefect_databricks.rest.strip_kwargs","title":"strip_kwargs","text":"

        Recursively drops keyword arguments whose value is None, and serializes any pydantic.BaseModel types.

        Parameters:

        Name Type Description Default **kwargs Dict

        Input keyword arguments.

        {}

        Returns:

        Type Description Dict

        Stripped version of kwargs.

        Source code in prefect_databricks/rest.py
        def strip_kwargs(**kwargs: Dict) -> Dict:\n    \"\"\"\n    Recursively drops keyword arguments if value is None,\n    and serializes any `pydantic.BaseModel` types.\n\n    Args:\n        **kwargs: Input keyword arguments.\n\n    Returns:\n        Stripped version of kwargs.\n    \"\"\"\n    stripped_dict = {}\n    for k, v in kwargs.items():\n        v = serialize_model(v)\n        if isinstance(v, dict):\n            v = strip_kwargs(**v)\n        if v is not None:\n            stripped_dict[k] = v\n    return stripped_dict or {}\n
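        A small sketch showing how None-valued keyword arguments (such as the offset in the execute_endpoint example above) are dropped before a request is sent:

        from prefect_databricks.rest import strip_kwargs\n# offset is None, so it is removed from the resulting dict\nstrip_kwargs(limit=5, offset=None, expand_tasks=True)\n# -> {\"limit\": 5, \"expand_tasks\": True}\n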
        "},{"location":"integrations/prefect-databricks/models/jobs/","title":"Jobs","text":""},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs","title":"prefect_databricks.models.jobs","text":""},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.AccessControlList","title":"AccessControlList","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class AccessControlList(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    access_control_list: Optional[List[AccessControlRequest]] = Field(\n        None, description=\"List of permissions to set on the job.\"\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.AccessControlRequest","title":"AccessControlRequest","text":"

        Bases: AccessControlRequestForUser, AccessControlRequestForGroup

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class AccessControlRequest(AccessControlRequestForUser, AccessControlRequestForGroup):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.AccessControlRequestForGroup","title":"AccessControlRequestForGroup","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class AccessControlRequestForGroup(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    group_name: Optional[GroupName] = None\n    permission_level: Optional[PermissionLevelForGroup] = None\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.AccessControlRequestForServicePrincipal","title":"AccessControlRequestForServicePrincipal","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class AccessControlRequestForServicePrincipal(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    permission_level: Optional[PermissionLevel] = None\n    service_principal_name: Optional[ServicePrincipalName] = None\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.AccessControlRequestForUser","title":"AccessControlRequestForUser","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class AccessControlRequestForUser(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    permission_level: Optional[PermissionLevel] = None\n    user_name: Optional[UserName] = None\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.AutoScale","title":"AutoScale","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class AutoScale(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    max_workers: Optional[int] = Field(\n        None,\n        description=(\n            \"The maximum number of workers to which the cluster can scale up when\"\n            \" overloaded. max_workers must be strictly greater than min_workers.\"\n        ),\n    )\n    min_workers: Optional[int] = Field(\n        None,\n        description=(\n            \"The minimum number of workers to which the cluster can scale down when\"\n            \" underutilized. It is also the initial number of workers the cluster has\"\n            \" after creation.\"\n        ),\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.AwsAttributes","title":"AwsAttributes","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class AwsAttributes(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    availability: Optional[Literal[\"SPOT\", \"ON_DEMAND\", \"SPOT_WITH_FALLBACK\"]] = Field(\n        None,\n        description=(\n            \"Availability type used for all subsequent nodes past the `first_on_demand`\"\n            \" ones. **Note:** If `first_on_demand` is zero, this availability type is\"\n            \" used for the entire cluster.\\n\\n`SPOT`: use spot instances.\\n`ON_DEMAND`:\"\n            \" use on-demand instances.\\n`SPOT_WITH_FALLBACK`: preferably use spot\"\n            \" instances, but fall back to on-demand instances if spot instances cannot\"\n            \" be acquired (for example, if AWS spot prices are too high).\"\n        ),\n    )\n    ebs_volume_count: Optional[int] = Field(\n        None,\n        description=(\n            \"The number of volumes launched for each instance. You can choose up to 10\"\n            \" volumes. This feature is only enabled for supported node types. Legacy\"\n            \" node types cannot specify custom EBS volumes. For node types with no\"\n            \" instance store, at least one EBS volume needs to be specified; otherwise,\"\n            \" cluster creation fails.\\n\\nThese EBS volumes are mounted at `/ebs0`,\"\n            \" `/ebs1`, and etc. Instance store volumes are mounted at `/local_disk0`,\"\n            \" `/local_disk1`, and etc.\\n\\nIf EBS volumes are attached, Databricks\"\n            \" configures Spark to use only the EBS volumes for scratch storage because\"\n            \" heterogeneously sized scratch devices can lead to inefficient disk\"\n            \" utilization. If no EBS volumes are attached, Databricks configures Spark\"\n            \" to use instance store volumes.\\n\\nIf EBS volumes are specified, then the\"\n            \" Spark configuration `spark.local.dir` is overridden.\"\n        ),\n    )\n    ebs_volume_iops: Optional[int] = Field(\n        None,\n        description=(\n            \"The number of IOPS per EBS gp3 volume.\\n\\nThis value must be between 3000\"\n            \" and 16000.\\n\\nThe value of IOPS and throughput is calculated based on AWS\"\n            \" documentation to match the maximum performance of a gp2 volume with the\"\n            \" same volume size.\\n\\nFor more information, see the [EBS volume limit\"\n            \" calculator](https://github.com/awslabs/aws-support-tools/tree/master/EBS/VolumeLimitCalculator).\"\n        ),\n    )\n    ebs_volume_size: Optional[int] = Field(\n        None,\n        description=(\n            \"The size of each EBS volume (in GiB) launched for each instance. For\"\n            \" general purpose SSD, this value must be within the range 100 - 4096\\\\.\"\n            \" For throughput optimized HDD, this value must be within the range 500 -\"\n            \" 4096\\\\. 
Custom EBS volumes cannot be specified for the legacy node types\"\n            \" (_memory-optimized_ and _compute-optimized_).\"\n        ),\n    )\n    ebs_volume_throughput: Optional[int] = Field(\n        None,\n        description=(\n            \"The throughput per EBS gp3 volume, in MiB per second.\\n\\nThis value must\"\n            \" be between 125 and 1000.\"\n        ),\n    )\n    ebs_volume_type: Optional[\n        Literal[\"GENERAL_PURPOSE_SSD\", \"THROUGHPUT_OPTIMIZED_HDD\"]\n    ] = Field(\n        None,\n        description=(\n            \"The type of EBS volume that is launched with this\"\n            \" cluster.\\n\\n`GENERAL_PURPOSE_SSD`: provision extra storage using AWS gp2\"\n            \" EBS volumes.\\n`THROUGHPUT_OPTIMIZED_HDD`: provision extra storage using\"\n            \" AWS st1 volumes.\"\n        ),\n    )\n    first_on_demand: Optional[int] = Field(\n        None,\n        description=(\n            \"The first first_on_demand nodes of the cluster are placed on on-demand\"\n            \" instances. If this value is greater than 0, the cluster driver node is\"\n            \" placed on an on-demand instance. If this value is greater than or equal\"\n            \" to the current cluster size, all nodes are placed on on-demand instances.\"\n            \" If this value is less than the current cluster size, first_on_demand\"\n            \" nodes are placed on on-demand instances and the remainder are placed on\"\n            \" `availability` instances. This value does not affect cluster size and\"\n            \" cannot be mutated over the lifetime of a cluster.\"\n        ),\n    )\n    instance_profile_arn: Optional[str] = Field(\n        None,\n        description=(\n            \"Nodes for this cluster are only be placed on AWS instances with this\"\n            \" instance profile. If omitted, nodes are placed on instances without an\"\n            \" instance profile. The instance profile must have previously been added to\"\n            \" the Databricks environment by an account administrator.\\n\\nThis feature\"\n            \" may only be available to certain customer plans.\"\n        ),\n    )\n    spot_bid_price_percent: Optional[int] = Field(\n        None,\n        description=(\n            \"The max price for AWS spot instances, as a percentage of the corresponding\"\n            \" instance type\u2019s on-demand price. For example, if this field is set to 50,\"\n            \" and the cluster needs a new `i3.xlarge` spot instance, then the max price\"\n            \" is half of the price of on-demand `i3.xlarge` instances. Similarly, if\"\n            \" this field is set to 200, the max price is twice the price of on-demand\"\n            \" `i3.xlarge` instances. If not specified, the default value is 100\\\\. When\"\n            \" spot instances are requested for this cluster, only spot instances whose\"\n            \" max price percentage matches this field is considered. For safety, we\"\n            \" enforce this field to be no more than 10000.\"\n        ),\n    )\n    zone_id: Optional[str] = Field(\n        None,\n        description=(\n            \"Identifier for the availability zone/datacenter in which the cluster\"\n            \" resides. You have three options:\\n\\n**Specify an availability zone as a\"\n            \" string**, for example: \u201cus-west-2a\u201d. The provided availability zone must\"\n            \" be in the same region as the Databricks deployment. 
For example,\"\n            \" \u201cus-west-2a\u201d is not a valid zone ID if the Databricks deployment resides\"\n            \" in the \u201cus-east-1\u201d region.\\n\\n**Enable automatic availability zone\"\n            \" selection (\u201cAuto-AZ\u201d)**, by setting the value \u201cauto\u201d. Databricks selects\"\n            \" the AZ based on available IPs in the workspace subnets and retries in\"\n            \" other availability zones if AWS returns insufficient capacity\"\n            \" errors.\\n\\n**Do not specify a value**. If not specified, a default zone\"\n            \" is used.\\n\\nThe list of available zones as well as the default value can\"\n            \" be found by using the [List\"\n            \" zones](https://docs.databricks.com/dev-tools/api/latest/clusters.html#list-zones)\"\n            \" API.\"\n        ),\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.CanManage","title":"CanManage","text":"

        Bases: str, Enum

        Permission to manage the job.

        Source code in prefect_databricks/models/jobs.py
        class CanManage(str, Enum):\n    \"\"\"\n    Permission to manage the job.\n    \"\"\"\n\n    canmanage = \"CAN_MANAGE\"\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.CanManageRun","title":"CanManageRun","text":"

        Bases: str, Enum

        Permission to run and/or manage runs for the job.

        Source code in prefect_databricks/models/jobs.py
        class CanManageRun(str, Enum):\n    \"\"\"\n    Permission to run and/or manage runs for the job.\n    \"\"\"\n\n    canmanagerun = \"CAN_MANAGE_RUN\"\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.CanView","title":"CanView","text":"

        Bases: str, Enum

        Permission to view the settings of the job.

        Source code in prefect_databricks/models/jobs.py
        class CanView(str, Enum):\n    \"\"\"\n    Permission to view the settings of the job.\n    \"\"\"\n\n    canview = \"CAN_VIEW\"\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterAttributes","title":"ClusterAttributes","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class ClusterAttributes(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    autotermination_minutes: Optional[int] = Field(\n        None,\n        description=(\n            \"Automatically terminates the cluster after it is inactive for this time in\"\n            \" minutes. If not set, this cluster is not be automatically terminated. If\"\n            \" specified, the threshold must be between 10 and 10000 minutes. You can\"\n            \" also set this value to 0 to explicitly disable automatic termination.\"\n        ),\n    )\n    aws_attributes: Optional[AwsAttributes] = Field(\n        None,\n        description=(\n            \"Attributes related to clusters running on Amazon Web Services. If not\"\n            \" specified at cluster creation, a set of default values are used.\"\n        ),\n    )\n    cluster_log_conf: Optional[ClusterLogConf] = Field(\n        None,\n        description=(\n            \"The configuration for delivering Spark logs to a long-term storage\"\n            \" destination. Only one destination can be specified for one cluster. If\"\n            \" the conf is given, the logs is delivered to the destination every `5\"\n            \" mins`. The destination of driver logs is\"\n            \" `<destination>/<cluster-ID>/driver`, while the destination of executor\"\n            \" logs is `<destination>/<cluster-ID>/executor`.\"\n        ),\n    )\n    cluster_name: Optional[str] = Field(\n        None,\n        description=(\n            \"Cluster name requested by the user. This doesn\u2019t have to be unique. If not\"\n            \" specified at creation, the cluster name is an empty string.\"\n        ),\n    )\n    cluster_source: Optional[ClusterSource] = Field(\n        None,\n        description=(\n            \"Determines whether the cluster was created by a user through the UI,\"\n            \" created by the Databricks Jobs scheduler, or through an API request.\"\n        ),\n    )\n    custom_tags: Optional[ClusterTag] = Field(\n        None,\n        description=(\n            \"An object containing a set of tags for cluster resources. Databricks tags\"\n            \" all cluster resources (such as AWS instances and EBS volumes) with these\"\n            \" tags in addition to default_tags.\\n\\n**Note**:\\n\\n* Tags are not\"\n            \" supported on legacy node types such as compute-optimized and\"\n            \" memory-optimized\\n* Databricks allows at most 45 custom tags\"\n        ),\n    )\n    docker_image: Optional[DockerImage] = Field(\n        None,\n        description=(\n            \"Docker image for a [custom\"\n            \" container](https://docs.databricks.com/clusters/custom-containers.html).\"\n        ),\n    )\n    driver_node_type_id: Optional[str] = Field(\n        None,\n        description=(\n            \"The node type of the Spark driver. This field is optional; if unset, the\"\n            \" driver node type is set as the same value as `node_type_id` defined\"\n            \" above.\"\n        ),\n    )\n    enable_elastic_disk: Optional[bool] = Field(\n        None,\n        description=(\n            \"Autoscaling Local Storage: when enabled, this cluster dynamically acquires\"\n            \" additional disk space when its Spark workers are running low on disk\"\n            \" space. 
This feature requires specific AWS permissions to function\"\n            \" correctly. Refer to [Autoscaling local\"\n            \" storage](https://docs.databricks.com/clusters/configure.html#autoscaling-local-storage)\"\n            \" for details.\"\n        ),\n    )\n    enable_local_disk_encryption: Optional[bool] = Field(\n        None,\n        description=(\n            \"Determines whether encryption of the disks attached to the cluster locally\"\n            \" is enabled.\"\n        ),\n    )\n    init_scripts: Optional[List[InitScriptInfo]] = Field(\n        None,\n        description=(\n            \"The configuration for storing init scripts. Any number of destinations can\"\n            \" be specified. The scripts are executed sequentially in the order\"\n            \" provided. If `cluster_log_conf` is specified, init script logs are sent\"\n            \" to `<destination>/<cluster-ID>/init_scripts`.\"\n        ),\n    )\n    instance_pool_id: Optional[str] = Field(\n        None,\n        description=(\n            \"The optional ID of the instance pool to which the cluster belongs. Refer\"\n            \" to [Pools](https://docs.databricks.com/clusters/instance-pools/index.html)\"\n            \" for details.\"\n        ),\n    )\n    node_type_id: Optional[str] = Field(\n        None,\n        description=(\n            \"This field encodes, through a single value, the resources available to\"\n            \" each of the Spark nodes in this cluster. For example, the Spark nodes can\"\n            \" be provisioned and optimized for memory or compute intensive workloads A\"\n            \" list of available node types can be retrieved by using the [List node\"\n            \" types](https://docs.databricks.com/dev-tools/api/latest/clusters.html#list-node-types)\"\n            \" API call.\"\n        ),\n    )\n    policy_id: Optional[str] = Field(\n        None,\n        description=(\n            \"A [cluster\"\n            \" policy](https://docs.databricks.com/dev-tools/api/latest/policies.html) ID.\"\n        ),\n    )\n    spark_conf: Optional[SparkConfPair] = Field(\n        None,\n        description=(\n            \"An object containing a set of optional, user-specified Spark configuration\"\n            \" key-value pairs. You can also pass in a string of extra JVM options to\"\n            \" the driver and the executors via `spark.driver.extraJavaOptions` and\"\n            \" `spark.executor.extraJavaOptions` respectively.\\n\\nExample Spark confs:\"\n            ' `{\"spark.speculation\": true, \"spark.streaming.ui.retainedBatches\": 5}` or'\n            ' `{\"spark.driver.extraJavaOptions\": \"-verbose:gc -XX:+PrintGCDetails\"}`'\n        ),\n    )\n    spark_env_vars: Optional[SparkEnvPair] = Field(\n        None,\n        description=(\n            \"An object containing a set of optional, user-specified environment\"\n            \" variable key-value pairs. Key-value pairs of the form (X,Y) are exported\"\n            \" as is (that is, `export X='Y'`) while launching the driver and\"\n            \" workers.\\n\\nIn order to specify an additional set of\"\n            \" `SPARK_DAEMON_JAVA_OPTS`, we recommend appending them to\"\n            \" `$SPARK_DAEMON_JAVA_OPTS` as shown in the following example. 
This ensures\"\n            \" that all default databricks managed environmental variables are included\"\n            ' as well.\\n\\nExample Spark environment variables: `{\"SPARK_WORKER_MEMORY\":'\n            ' \"28000m\", \"SPARK_LOCAL_DIRS\": \"/local_disk0\"}` or'\n            ' `{\"SPARK_DAEMON_JAVA_OPTS\": \"$SPARK_DAEMON_JAVA_OPTS'\n            ' -Dspark.shuffle.service.enabled=true\"}`'\n        ),\n    )\n    spark_version: Optional[str] = Field(\n        None,\n        description=(\n            \"The runtime version of the cluster, for example \u201c5.0.x-scala2.11\u201d. You can\"\n            \" retrieve a list of available runtime versions by using the [Runtime\"\n            \" versions](https://docs.databricks.com/dev-tools/api/latest/clusters.html#runtime-versions)\"\n            \" API call.\"\n        ),\n    )\n    ssh_public_keys: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"SSH public key contents that is added to each Spark node in this cluster.\"\n            \" The corresponding private keys can be used to login with the user name\"\n            \" `ubuntu` on port `2200`. Up to 10 keys can be specified.\"\n        ),\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterCloudProviderNodeInfo","title":"ClusterCloudProviderNodeInfo","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class ClusterCloudProviderNodeInfo(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    available_core_quota: Optional[int] = Field(\n        None, description=\"Available CPU core quota.\"\n    )\n    status: Optional[ClusterCloudProviderNodeStatus] = Field(\n        None, description=\"Status as reported by the cloud provider.\"\n    )\n    total_core_quota: Optional[int] = Field(None, description=\"Total CPU core quota.\")\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterCloudProviderNodeStatus","title":"ClusterCloudProviderNodeStatus","text":"

        Bases: str, Enum

        • NotEnabledOnSubscription: Node type not available for subscription.
        • NotAvailableInRegion: Node type not available in region.
        Source code in prefect_databricks/models/jobs.py
        class ClusterCloudProviderNodeStatus(str, Enum):\n    \"\"\"\n        * NotEnabledOnSubscription: Node type not available for subscription.\n    * NotAvailableInRegion: Node type not available in region.\n\n    \"\"\"\n\n    not_enabled_on_subscription = \"NotEnabledOnSubscription\"\n    not_available_in_region = \"NotAvailableInRegion\"\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterEvent","title":"ClusterEvent","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class ClusterEvent(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    cluster_id: str = Field(\n        ..., description=\"Canonical identifier for the cluster. This field is required.\"\n    )\n    details: EventDetails = Field(\n        ..., description=\"The event details. This field is required.\"\n    )\n    timestamp: Optional[int] = Field(\n        None,\n        description=(\n            \"The timestamp when the event occurred, stored as the number of\"\n            \" milliseconds since the unix epoch. Assigned by the Timeline service.\"\n        ),\n    )\n    type: ClusterEventType = Field(\n        ..., description=\"The event type. This field is required.\"\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterEventType","title":"ClusterEventType","text":"

        Bases: str, Enum

        • CREATING: Indicates that the cluster is being created.
        • DID_NOT_EXPAND_DISK: Indicates that a disk is low on space, but adding disks would put it over the max capacity.
        • EXPANDED_DISK: Indicates that a disk was low on space and the disks were expanded.
        • FAILED_TO_EXPAND_DISK: Indicates that a disk was low on space and disk space could not be expanded.
        • INIT_SCRIPTS_STARTING: Indicates that the cluster scoped init script has started.
        • INIT_SCRIPTS_FINISHED: Indicates that the cluster scoped init script has finished.
        • STARTING: Indicates that the cluster is being started.
        • RESTARTING: Indicates that the cluster is being started.
        • TERMINATING: Indicates that the cluster is being terminated.
        • EDITED: Indicates that the cluster has been edited.
        • RUNNING: Indicates the cluster has finished being created. Includes the number of nodes in the cluster and a failure reason if some nodes could not be acquired.
        • RESIZING: Indicates a change in the target size of the cluster (upsize or downsize).
        • UPSIZE_COMPLETED: Indicates that nodes finished being added to the cluster. Includes the number of nodes in the cluster and a failure reason if some nodes could not be acquired.
        • NODES_LOST: Indicates that some nodes were lost from the cluster.
        • DRIVER_HEALTHY: Indicates that the driver is healthy and the cluster is ready for use.
        • DRIVER_UNAVAILABLE: Indicates that the driver is unavailable.
        • SPARK_EXCEPTION: Indicates that a Spark exception was thrown from the driver.
        • DRIVER_NOT_RESPONDING: Indicates that the driver is up but is not responsive, likely due to GC.
        • DBFS_DOWN: Indicates that the driver is up but DBFS is down.
        • METASTORE_DOWN: Indicates that the driver is up but the metastore is down.
        • NODE_BLACKLISTED: Indicates that a node is not allowed by Spark.
        • PINNED: Indicates that the cluster was pinned.
        • UNPINNED: Indicates that the cluster was unpinned.
        Source code in prefect_databricks/models/jobs.py
        class ClusterEventType(str, Enum):\n    \"\"\"\n        * `CREATING`: Indicates that the cluster is being created.\n    * `DID_NOT_EXPAND_DISK`: Indicates that a disk is low on space, but adding disks would put it over the max capacity.\n    * `EXPANDED_DISK`: Indicates that a disk was low on space and the disks were expanded.\n    * `FAILED_TO_EXPAND_DISK`: Indicates that a disk was low on space and disk space could not be expanded.\n    * `INIT_SCRIPTS_STARTING`: Indicates that the cluster scoped init script has started.\n    * `INIT_SCRIPTS_FINISHED`: Indicates that the cluster scoped init script has finished.\n    * `STARTING`: Indicates that the cluster is being started.\n    * `RESTARTING`: Indicates that the cluster is being started.\n    * `TERMINATING`: Indicates that the cluster is being terminated.\n    * `EDITED`: Indicates that the cluster has been edited.\n    * `RUNNING`: Indicates the cluster has finished being created. Includes the number of nodes in the cluster and a failure reason if some nodes could not be acquired.\n    * `RESIZING`: Indicates a change in the target size of the cluster (upsize or downsize).\n    * `UPSIZE_COMPLETED`: Indicates that nodes finished being added to the cluster. Includes the number of nodes in the cluster and a failure reason if some nodes could not be acquired.\n    * `NODES_LOST`: Indicates that some nodes were lost from the cluster.\n    * `DRIVER_HEALTHY`: Indicates that the driver is healthy and the cluster is ready for use.\n    * `DRIVER_UNAVAILABLE`: Indicates that the driver is unavailable.\n    * `SPARK_EXCEPTION`: Indicates that a Spark exception was thrown from the driver.\n    * `DRIVER_NOT_RESPONDING`: Indicates that the driver is up but is not responsive, likely due to GC.\n    * `DBFS_DOWN`: Indicates that the driver is up but DBFS is down.\n    * `METASTORE_DOWN`: Indicates that the driver is up but the metastore is down.\n    * `NODE_BLACKLISTED`: Indicates that a node is not allowed by Spark.\n    * `PINNED`: Indicates that the cluster was pinned.\n    * `UNPINNED`: Indicates that the cluster was unpinned.\n    \"\"\"\n\n    creating = \"CREATING\"\n    didnotexpanddisk = \"DID_NOT_EXPAND_DISK\"\n    expandeddisk = \"EXPANDED_DISK\"\n    failedtoexpanddisk = \"FAILED_TO_EXPAND_DISK\"\n    initscriptsstarting = \"INIT_SCRIPTS_STARTING\"\n    initscriptsfinished = \"INIT_SCRIPTS_FINISHED\"\n    starting = \"STARTING\"\n    restarting = \"RESTARTING\"\n    terminating = \"TERMINATING\"\n    edited = \"EDITED\"\n    running = \"RUNNING\"\n    resizing = \"RESIZING\"\n    upsizecompleted = \"UPSIZE_COMPLETED\"\n    nodeslost = \"NODES_LOST\"\n    driverhealthy = \"DRIVER_HEALTHY\"\n    driverunavailable = \"DRIVER_UNAVAILABLE\"\n    sparkexception = \"SPARK_EXCEPTION\"\n    drivernotresponding = \"DRIVER_NOT_RESPONDING\"\n    dbfsdown = \"DBFS_DOWN\"\n    metastoredown = \"METASTORE_DOWN\"\n    nodeblacklisted = \"NODE_BLACKLISTED\"\n    pinned = \"PINNED\"\n    unpinned = \"UNPINNED\"\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterInfo","title":"ClusterInfo","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class ClusterInfo(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    autoscale: Optional[AutoScale] = Field(\n        None,\n        description=(\n            \"If autoscale, parameters needed in order to automatically scale clusters\"\n            \" up and down based on load.\"\n        ),\n    )\n    autotermination_minutes: Optional[int] = Field(\n        None,\n        description=(\n            \"Automatically terminates the cluster after it is inactive for this time in\"\n            \" minutes. If not set, this cluster is not be automatically terminated. If\"\n            \" specified, the threshold must be between 10 and 10000 minutes. You can\"\n            \" also set this value to 0 to explicitly disable automatic termination.\"\n        ),\n    )\n    aws_attributes: Optional[AwsAttributes] = Field(\n        None,\n        description=(\n            \"Attributes related to clusters running on Amazon Web Services. If not\"\n            \" specified at cluster creation, a set of default values is used.\"\n        ),\n    )\n    cluster_cores: Optional[float] = Field(\n        None,\n        description=(\n            \"Number of CPU cores available for this cluster. This can be fractional\"\n            \" since certain node types are configured to share cores between Spark\"\n            \" nodes on the same instance.\"\n        ),\n    )\n    cluster_id: Optional[str] = Field(\n        None,\n        description=(\n            \"Canonical identifier for the cluster. This ID is retained during cluster\"\n            \" restarts and resizes, while each new cluster has a globally unique ID.\"\n        ),\n    )\n    cluster_log_conf: Optional[ClusterLogConf] = Field(\n        None,\n        description=(\n            \"The configuration for delivering Spark logs to a long-term storage\"\n            \" destination. Only one destination can be specified for one cluster. If\"\n            \" the conf is given, the logs are delivered to the destination every `5\"\n            \" mins`. The destination of driver logs is\"\n            \" `<destination>/<cluster-ID>/driver`, while the destination of executor\"\n            \" logs is `<destination>/<cluster-ID>/executor`.\"\n        ),\n    )\n    cluster_log_status: Optional[LogSyncStatus] = Field(\n        None, description=\"Cluster log delivery status.\"\n    )\n    cluster_memory_mb: Optional[int] = Field(\n        None, description=\"Total amount of cluster memory, in megabytes.\"\n    )\n    cluster_name: Optional[str] = Field(\n        None,\n        description=(\n            \"Cluster name requested by the user. This doesn\u2019t have to be unique. If not\"\n            \" specified at creation, the cluster name is an empty string.\"\n        ),\n    )\n    cluster_source: Optional[ClusterSource] = Field(\n        None,\n        description=(\n            \"Determines whether the cluster was created by a user through the UI, by\"\n            \" the Databricks Jobs scheduler, or through an API request.\"\n        ),\n    )\n    creator_user_name: Optional[str] = Field(\n        None,\n        description=(\n            \"Creator user name. 
The field won\u2019t be included in the response if the user\"\n            \" has already been deleted.\"\n        ),\n    )\n    custom_tags: Optional[List[ClusterTag]] = Field(\n        None,\n        description=(\n            \"An object containing a set of tags for cluster resources. Databricks tags\"\n            \" all cluster resources (such as AWS instances and EBS volumes) with these\"\n            \" tags in addition to default_tags.\\n\\n**Note**:\\n\\n* Tags are not\"\n            \" supported on legacy node types such as compute-optimized and\"\n            \" memory-optimized\\n* Databricks allows at most 45 custom tags\"\n        ),\n    )\n    default_tags: Optional[ClusterTag] = Field(\n        None,\n        description=(\n            \"An object containing a set of tags that are added by Databricks regardless\"\n            \" of any custom_tags, including:\\n\\n* Vendor: Databricks\\n* Creator:\"\n            \" <username-of-creator>\\n* ClusterName: <name-of-cluster>\\n* ClusterId:\"\n            \" <id-of-cluster>\\n* Name: <Databricks internal use>  \\nOn job clusters:\\n*\"\n            \" RunName: <name-of-job>\\n* JobId: <id-of-job>  \\nOn resources used by\"\n            \" Databricks SQL:\\n* SqlEndpointId: <id-of-endpoint>\"\n        ),\n    )\n    docker_image: Optional[DockerImage] = Field(\n        None,\n        description=(\n            \"Docker image for a [custom\"\n            \" container](https://docs.databricks.com/clusters/custom-containers.html).\"\n        ),\n    )\n    driver: Optional[SparkNode] = Field(\n        None,\n        description=(\n            \"Node on which the Spark driver resides. The driver node contains the Spark\"\n            \" master and the Databricks application that manages the per-notebook Spark\"\n            \" REPLs.\"\n        ),\n    )\n    driver_node_type_id: Optional[str] = Field(\n        None,\n        description=(\n            \"The node type of the Spark driver. This field is optional; if unset, the\"\n            \" driver node type is set as the same value as `node_type_id` defined\"\n            \" above.\"\n        ),\n    )\n    enable_elastic_disk: Optional[bool] = Field(\n        None,\n        description=(\n            \"Autoscaling Local Storage: when enabled, this cluster dynamically acquires\"\n            \" additional disk space when its Spark workers are running low on disk\"\n            \" space. This feature requires specific AWS permissions to function\"\n            \" correctly - refer to [Autoscaling local\"\n            \" storage](https://docs.databricks.com/clusters/configure.html#autoscaling-local-storage)\"\n            \" for details.\"\n        ),\n    )\n    executors: Optional[List[SparkNode]] = Field(\n        None, description=\"Nodes on which the Spark executors reside.\"\n    )\n    init_scripts: Optional[List[InitScriptInfo]] = Field(\n        None,\n        description=(\n            \"The configuration for storing init scripts. Any number of destinations can\"\n            \" be specified. The scripts are executed sequentially in the order\"\n            \" provided. If `cluster_log_conf` is specified, init script logs are sent\"\n            \" to `<destination>/<cluster-ID>/init_scripts`.\"\n        ),\n    )\n    instance_pool_id: Optional[str] = Field(\n        None,\n        description=(\n            \"The optional ID of the instance pool to which the cluster belongs. 
Refer\"\n            \" to [Pools](https://docs.databricks.com/clusters/instance-pools/index.html)\"\n            \" for details.\"\n        ),\n    )\n    jdbc_port: Optional[int] = Field(\n        None,\n        description=(\n            \"Port on which Spark JDBC server is listening in the driver node. No\"\n            \" service listens on this port in executor nodes.\"\n        ),\n    )\n    last_activity_time: Optional[int] = Field(\n        None,\n        description=(\n            \"Time (in epoch milliseconds) when the cluster was last active. A cluster\"\n            \" is active if there is at least one command that has not finished on the\"\n            \" cluster. This field is available after the cluster has reached a\"\n            \" `RUNNING` state. Updates to this field are made as best-effort attempts.\"\n            \" Certain versions of Spark do not support reporting of cluster activity.\"\n            \" Refer to [Automatic\"\n            \" termination](https://docs.databricks.com/clusters/clusters-manage.html#automatic-termination)\"\n            \" for details.\"\n        ),\n    )\n    last_state_loss_time: Optional[int] = Field(\n        None,\n        description=(\n            \"Time when the cluster driver last lost its state (due to a restart or\"\n            \" driver failure).\"\n        ),\n    )\n    node_type_id: Optional[str] = Field(\n        None,\n        description=(\n            \"This field encodes, through a single value, the resources available to\"\n            \" each of the Spark nodes in this cluster. For example, the Spark nodes can\"\n            \" be provisioned and optimized for memory or compute intensive workloads. A\"\n            \" list of available node types can be retrieved by using the [List node\"\n            \" types](https://docs.databricks.com/dev-tools/api/latest/clusters.html#list-node-types)\"\n            \" API call.\"\n        ),\n    )\n    num_workers: Optional[int] = Field(\n        None,\n        description=(\n            \"If num_workers, number of worker nodes that this cluster must have. A\"\n            \" cluster has one Spark driver and num_workers executors for a total of\"\n            \" num_workers + 1 Spark nodes. **Note:** When reading the properties of a\"\n            \" cluster, this field reflects the desired number of workers rather than\"\n            \" the actual number of workers. For instance, if a cluster is resized from\"\n            \" 5 to 10 workers, this field is immediately updated to reflect the target\"\n            \" size of 10 workers, whereas the workers listed in `executors` gradually\"\n            \" increase from 5 to 10 as the new nodes are provisioned.\"\n        ),\n    )\n    spark_conf: Optional[SparkConfPair] = Field(\n        None,\n        description=(\n            \"An object containing a set of optional, user-specified Spark configuration\"\n            \" key-value pairs. You can also pass in a string of extra JVM options to\"\n            \" the driver and the executors via `spark.driver.extraJavaOptions` and\"\n            \" `spark.executor.extraJavaOptions` respectively.\\n\\nExample Spark confs:\"\n            ' `{\"spark.speculation\": true, \"spark.streaming.ui.retainedBatches\": 5}` or'\n            ' `{\"spark.driver.extraJavaOptions\": \"-verbose:gc -XX:+PrintGCDetails\"}`'\n        ),\n    )\n    spark_context_id: Optional[int] = Field(\n        None,\n        description=(\n            \"A canonical SparkContext identifier. 
This value _does_ change when the\"\n            \" Spark driver restarts. The pair `(cluster_id, spark_context_id)` is a\"\n            \" globally unique identifier over all Spark contexts.\"\n        ),\n    )\n    spark_env_vars: Optional[SparkEnvPair] = Field(\n        None,\n        description=(\n            \"An object containing a set of optional, user-specified environment\"\n            \" variable key-value pairs. Key-value pairs of the form (X,Y) are exported\"\n            \" as is (that is, `export X='Y'`) while launching the driver and\"\n            \" workers.\\n\\nTo specify an additional set of `SPARK_DAEMON_JAVA_OPTS`, we\"\n            \" recommend appending them to `$SPARK_DAEMON_JAVA_OPTS` as shown in the\"\n            \" following example. This ensures that all default databricks managed\"\n            \" environmental variables are included as well.\\n\\nExample Spark\"\n            ' environment variables: `{\"SPARK_WORKER_MEMORY\": \"28000m\",'\n            ' \"SPARK_LOCAL_DIRS\": \"/local_disk0\"}` or `{\"SPARK_DAEMON_JAVA_OPTS\":'\n            ' \"$SPARK_DAEMON_JAVA_OPTS -Dspark.shuffle.service.enabled=true\"}`'\n        ),\n    )\n    spark_version: Optional[str] = Field(\n        None,\n        description=(\n            \"The runtime version of the cluster. You can retrieve a list of available\"\n            \" runtime versions by using the [Runtime\"\n            \" versions](https://docs.databricks.com/dev-tools/api/latest/clusters.html#runtime-versions)\"\n            \" API call.\"\n        ),\n    )\n    ssh_public_keys: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"SSH public key contents that are added to each Spark node in this cluster.\"\n            \" The corresponding private keys can be used to login with the user name\"\n            \" `ubuntu` on port `2200`. Up to 10 keys can be specified.\"\n        ),\n    )\n    start_time: Optional[int] = Field(\n        None,\n        description=(\n            \"Time (in epoch milliseconds) when the cluster creation request was\"\n            \" received (when the cluster entered a `PENDING` state).\"\n        ),\n    )\n    state: Optional[ClusterState] = Field(None, description=\"State of the cluster.\")\n    state_message: Optional[str] = Field(\n        None,\n        description=(\n            \"A message associated with the most recent state transition (for example,\"\n            \" the reason why the cluster entered a `TERMINATED` state). This field is\"\n            \" unstructured, and its exact format is subject to change.\"\n        ),\n    )\n    terminated_time: Optional[int] = Field(\n        None,\n        description=(\n            \"Time (in epoch milliseconds) when the cluster was terminated, if\"\n            \" applicable.\"\n        ),\n    )\n    termination_reason: Optional[TerminationReason] = Field(\n        None,\n        description=(\n            \"Information about why the cluster was terminated. This field only appears\"\n            \" when the cluster is in a `TERMINATING` or `TERMINATED` state.\"\n        ),\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterInstance","title":"ClusterInstance","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class ClusterInstance(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    cluster_id: Optional[str] = Field(\n        None,\n        description=(\n            \"The canonical identifier for the cluster used by a run. This field is\"\n            \" always available for runs on existing clusters. For runs on new clusters,\"\n            \" it becomes available once the cluster is created. This value can be used\"\n            \" to view logs by browsing to `/#setting/sparkui/$cluster_id/driver-logs`.\"\n            \" The logs continue to be available after the run completes.\\n\\nThe\"\n            \" response won\u2019t include this field if the identifier is not available yet.\"\n        ),\n        example=\"0923-164208-meows279\",\n    )\n    spark_context_id: Optional[str] = Field(\n        None,\n        description=(\n            \"The canonical identifier for the Spark context used by a run. This field\"\n            \" is filled in once the run begins execution. This value can be used to\"\n            \" view the Spark UI by browsing to\"\n            \" `/#setting/sparkui/$cluster_id/$spark_context_id`. The Spark UI continues\"\n            \" to be available after the run has completed.\\n\\nThe response won\u2019t\"\n            \" include this field if the identifier is not available yet.\"\n        ),\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterLibraryStatuses","title":"ClusterLibraryStatuses","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class ClusterLibraryStatuses(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    cluster_id: Optional[str] = Field(\n        None, description=\"Unique identifier for the cluster.\"\n    )\n    library_statuses: Optional[List[LibraryFullStatus]] = Field(\n        None, description=\"Status of all libraries on the cluster.\"\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterLogConf","title":"ClusterLogConf","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class ClusterLogConf(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    dbfs: Optional[DbfsStorageInfo] = Field(\n        None,\n        description=(\n            \"DBFS location of cluster log. Destination must be provided. For example,\"\n            ' `{ \"dbfs\" : { \"destination\" : \"dbfs:/home/cluster_log\" } }`'\n        ),\n    )\n    s3: Optional[S3StorageInfo] = Field(\n        None,\n        description=(\n            \"S3 location of cluster log. `destination` and either `region` or\"\n            ' `endpoint` must be provided. For example, `{ \"s3\": { \"destination\" :'\n            ' \"s3://cluster_log_bucket/prefix\", \"region\" : \"us-west-2\" } }`'\n        ),\n    )\n
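        As a quick illustration of how this model can be used, the sketch below builds a ClusterLogConf that delivers logs to DBFS; the destination value simply reuses the example path from the field description above.

        from prefect_databricks.models.jobs import ClusterLogConf, DbfsStorageInfo

        # A minimal sketch: deliver cluster logs to a DBFS path (path taken from the field's documented example).
        log_conf = ClusterLogConf(dbfs=DbfsStorageInfo(destination="dbfs:/home/cluster_log"))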
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterSize","title":"ClusterSize","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class ClusterSize(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    autoscale: Optional[AutoScale] = Field(\n        None,\n        description=(\n            \"If autoscale, parameters needed in order to automatically scale clusters\"\n            \" up and down based on load.\"\n        ),\n    )\n    num_workers: Optional[int] = Field(\n        None,\n        description=(\n            \"If num_workers, number of worker nodes that this cluster must have. A\"\n            \" cluster has one Spark driver and num_workers executors for a total of\"\n            \" num_workers + 1 Spark nodes. When reading the properties of a cluster,\"\n            \" this field reflects the desired number of workers rather than the actual\"\n            \" number of workers. For instance, if a cluster is resized from 5 to 10\"\n            \" workers, this field is updated to reflect the target size of 10 workers,\"\n            \" whereas the workers listed in executors gradually increase from 5 to 10\"\n            \" as the new nodes are provisioned.\"\n        ),\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterSource","title":"ClusterSource","text":"

        Bases: str, Enum

        • UI: Cluster created through the UI.
        • JOB: Cluster created by the Databricks job scheduler.
        • API: Cluster created through an API call.
        Source code in prefect_databricks/models/jobs.py
        class ClusterSource(str, Enum):\n    \"\"\"\n        * UI: Cluster created through the UI.\n    * JOB: Cluster created by the Databricks job scheduler.\n    * API: Cluster created through an API call.\n\n    \"\"\"\n\n    ui = \"UI\"\n    job = \"JOB\"\n    api = \"API\"\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterSpec","title":"ClusterSpec","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class ClusterSpec(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    existing_cluster_id: Optional[str] = Field(\n        None,\n        description=(\n            \"If existing_cluster_id, the ID of an existing cluster that is used for all\"\n            \" runs of this job. When running jobs on an existing cluster, you may need\"\n            \" to manually restart the cluster if it stops responding. We suggest\"\n            \" running jobs on new clusters for greater reliability.\"\n        ),\n        example=\"0923-164208-meows279\",\n    )\n    libraries: Optional[List[Library]] = Field(\n        None,\n        description=(\n            \"An optional list of libraries to be installed on the cluster that executes\"\n            \" the job. The default value is an empty list.\"\n        ),\n    )\n    new_cluster: Optional[NewCluster] = Field(\n        None,\n        description=(\n            \"If new_cluster, a description of a cluster that is created for each run.\"\n        ),\n    )\n
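        For illustration, the sketch below builds a ClusterSpec that reuses an existing cluster; the cluster ID is the example value shown in the field description, not a real cluster.

        from prefect_databricks.models.jobs import ClusterSpec

        # A sketch that points all runs of a job at an existing cluster (ID reuses the documented example value).
        spec = ClusterSpec(existing_cluster_id="0923-164208-meows279")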
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterState","title":"ClusterState","text":"

        Bases: str, Enum

        • PENDING: Indicates that a cluster is in the process of being created.
        • RUNNING: Indicates that a cluster has been started and is ready for use.
        • RESTARTING: Indicates that a cluster is in the process of restarting.
        • RESIZING: Indicates that a cluster is in the process of adding or removing nodes.
        • TERMINATING: Indicates that a cluster is in the process of being destroyed.
        • TERMINATED: Indicates that a cluster has been successfully destroyed.
        • ERROR: This state is no longer used. It was used to indicate a cluster that failed to be created. TERMINATING and TERMINATED are used instead.
        • UNKNOWN: Indicates that a cluster is in an unknown state. A cluster should never be in this state.
        Source code in prefect_databricks/models/jobs.py
        class ClusterState(str, Enum):\n    \"\"\"\n        * PENDING: Indicates that a cluster is in the process of being created.\n    * RUNNING: Indicates that a cluster has been started and is ready for use.\n    * RESTARTING: Indicates that a cluster is in the process of restarting.\n    * RESIZING: Indicates that a cluster is in the process of adding or removing nodes.\n    * TERMINATING: Indicates that a cluster is in the process of being destroyed.\n    * TERMINATED: Indicates that a cluster has been successfully destroyed.\n    * ERROR: This state is no longer used. It was used to indicate a cluster that failed to be created. `TERMINATING` and `TERMINATED` are used instead.\n    * UNKNOWN: Indicates that a cluster is in an unknown state. A cluster should never be in this state.\n\n    \"\"\"\n\n    pending = \"PENDING\"\n    running = \"RUNNING\"\n    restarting = \"RESTARTING\"\n    resizing = \"RESIZING\"\n    terminating = \"TERMINATING\"\n    terminated = \"TERMINATED\"\n    error = \"ERROR\"\n    unknown = \"UNKNOWN\"\n
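        Because the enum subclasses str, its members compare equal to the raw strings returned by the API; a small sketch:

        from prefect_databricks.models.jobs import ClusterState

        # str-backed enum members compare equal to the API's string values.
        assert ClusterState.terminated == "TERMINATED"
        assert ClusterState("RUNNING") is ClusterState.running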
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ClusterTag","title":"ClusterTag","text":"

        Bases: BaseModel

        See source code for the fields' description.

        An object with key value pairs. The key length must be between 1 and 127 UTF-8 characters, inclusive. The value length must be less than or equal to 255 UTF-8 characters. For a list of all restrictions, see AWS Tag Restrictions: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#tag-restrictions

        Source code in prefect_databricks/models/jobs.py
        class ClusterTag(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n\n    An object with key value pairs. The key length must be between 1 and 127 UTF-8 characters, inclusive. The value length must be less than or equal to 255 UTF-8 characters. For a list of all restrictions, see AWS Tag Restrictions: <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#tag-restrictions>\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n\n        allow_mutation = False\n
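        Since the model declares no fixed fields and allows extra attributes, arbitrary key/value tags can be passed directly; a hedged sketch in which the tag names are made up for illustration:

        from prefect_databricks.models.jobs import ClusterTag

        # Extra attributes are allowed, so arbitrary tag keys/values (illustrative names) can be supplied,
        # subject to the key/value length limits described above.
        tags = ClusterTag(team="data-platform", cost_center="1234")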
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.CronSchedule","title":"CronSchedule","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class CronSchedule(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    pause_status: Optional[Literal[\"PAUSED\", \"UNPAUSED\"]] = Field(\n        None,\n        description=\"Indicate whether this schedule is paused or not.\",\n        example=\"PAUSED\",\n    )\n    quartz_cron_expression: str = Field(\n        ...,\n        description=(\n            \"A Cron expression using Quartz syntax that describes the schedule for a\"\n            \" job. See [Cron\"\n            \" Trigger](http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html)\"\n            \" for details. This field is required.\"\n        ),\n        example=\"20 30 * * * ?\",\n    )\n    timezone_id: str = Field(\n        ...,\n        description=(\n            \"A Java timezone ID. The schedule for a job is resolved with respect to\"\n            \" this timezone. See [Java\"\n            \" TimeZone](https://docs.oracle.com/javase/7/docs/api/java/util/TimeZone.html)\"\n            \" for details. This field is required.\"\n        ),\n        example=\"Europe/London\",\n    )\n
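        As an example of the two required fields, the sketch below schedules a job with a Quartz cron expression; the expression and timezone reuse the documented example values.

        from prefect_databricks.models.jobs import CronSchedule

        # A sketch using the documented example values: fire at second 20 of minute 30 of every hour, London time.
        schedule = CronSchedule(
            quartz_cron_expression="20 30 * * * ?",
            timezone_id="Europe/London",
            pause_status="UNPAUSED",
        )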
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.DbfsStorageInfo","title":"DbfsStorageInfo","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class DbfsStorageInfo(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    destination: Optional[str] = Field(\n        None, description=\"DBFS destination. Example: `dbfs:/my/path`\"\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.DbtOutput","title":"DbtOutput","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class DbtOutput(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    artifacts_headers: Optional[Dict[str, Any]] = Field(\n        None,\n        description=(\n            \"An optional map of headers to send when retrieving the artifact from the\"\n            \" `artifacts_link`.\"\n        ),\n    )\n    artifacts_link: Optional[str] = Field(\n        None,\n        description=(\n            \"A pre-signed URL to download the (compressed) dbt artifacts. This link is\"\n            \" valid for a limited time (30 minutes). This information is only available\"\n            \" after the run has finished.\"\n        ),\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.DbtTask","title":"DbtTask","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class DbtTask(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    catalog: Optional[str] = Field(\n        None,\n        description=(\n            \"Optional name of the catalog to use. The value is the top level in the\"\n            \" 3-level namespace of Unity Catalog (catalog / schema / relation). The\"\n            \" catalog value can only be specified if a warehouse_id is specified.\"\n            \" Requires dbt-databricks >= 1.1.1.\"\n        ),\n        example=\"main\",\n    )\n    commands: List = Field(\n        ...,\n        description=(\n            \"A list of dbt commands to execute. All commands must start with `dbt`.\"\n            \" This parameter must not be empty. A maximum of up to 10 commands can be\"\n            \" provided.\"\n        ),\n        example=[\"dbt deps\", \"dbt seed\", \"dbt run --models 123\"],\n    )\n    profiles_directory: Optional[str] = Field(\n        None,\n        description=(\n            \"Optional (relative) path to the profiles directory. Can only be specified\"\n            \" if no warehouse_id is specified. If no warehouse_id is specified and this\"\n            \" folder is unset, the root directory is used.\"\n        ),\n    )\n    project_directory: Optional[str] = Field(\n        None,\n        description=(\n            \"Optional (relative) path to the project directory, if no value is\"\n            \" provided, the root of the git repository is used.\"\n        ),\n    )\n    schema_: Optional[str] = Field(\n        None,\n        alias=\"schema\",\n        description=(\n            \"Optional schema to write to. This parameter is only used when a\"\n            \" warehouse_id is also provided. If not provided, the `default` schema is\"\n            \" used.\"\n        ),\n    )\n    warehouse_id: Optional[str] = Field(\n        None,\n        description=(\n            \"ID of the SQL warehouse to connect to. If provided, we automatically\"\n            \" generate and provide the profile and connection details to dbt. It can be\"\n            \" overridden on a per-command basis by using the `--profiles-dir` command\"\n            \" line argument.\"\n        ),\n        example=\"30dade0507d960d1\",\n    )\n
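        A hedged sketch of a dbt task that runs against a SQL warehouse; the warehouse ID and catalog reuse the documented example values, and the schema field is populated through its "schema" alias.

        from prefect_databricks.models.jobs import DbtTask

        # A sketch: every command must start with `dbt`; warehouse_id and catalog reuse the documented examples.
        task = DbtTask(
            commands=["dbt deps", "dbt seed", "dbt run"],
            warehouse_id="30dade0507d960d1",
            catalog="main",
            schema="default",  # populated via the field's alias, since the attribute is named schema_
        )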
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.DockerBasicAuth","title":"DockerBasicAuth","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class DockerBasicAuth(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    password: Optional[str] = Field(\n        None, description=\"Password for the Docker repository.\"\n    )\n    username: Optional[str] = Field(\n        None, description=\"User name for the Docker repository.\"\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.DockerImage","title":"DockerImage","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class DockerImage(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    basic_auth: Optional[DockerBasicAuth] = Field(\n        None, description=\"Basic authentication information for Docker repository.\"\n    )\n    url: Optional[str] = Field(None, description=\"URL for the Docker image.\")\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.Error","title":"Error","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class Error(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    error_code: Optional[str] = Field(\n        None, description=\"Error code\", example=\"INTERNAL_ERROR\"\n    )\n    message: Optional[str] = Field(\n        None,\n        description=(\n            \"Human-readable error message that describes the cause of the error.\"\n        ),\n        example=\"Unexpected error.\",\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.EventDetails","title":"EventDetails","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class EventDetails(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    attributes: Optional[AwsAttributes] = Field(\n        None,\n        description=(\n            \"* For created clusters, the attributes of the cluster.\\n* For edited\"\n            \" clusters, the new attributes of the cluster.\"\n        ),\n    )\n    cause: Optional[ResizeCause] = Field(\n        None, description=\"The cause of a change in target size.\"\n    )\n    cluster_size: Optional[ClusterSize] = Field(\n        None,\n        description=\"The cluster size that was set in the cluster creation or edit.\",\n    )\n    current_num_workers: Optional[int] = Field(\n        None, description=\"The number of nodes in the cluster.\"\n    )\n    previous_attributes: Optional[AwsAttributes] = Field(\n        None, description=\"The cluster attributes before a cluster was edited.\"\n    )\n    previous_cluster_size: Optional[ClusterSize] = Field(\n        None, description=\"The size of the cluster before an edit or resize.\"\n    )\n    reason: Optional[TerminationReason] = Field(\n        None,\n        description=(\n            \"A termination reason:\\n\\n* On a `TERMINATED` event, the reason for the\"\n            \" termination.\\n* On a `RESIZE_COMPLETE` event, indicates the reason that\"\n            \" we failed to acquire some nodes.\"\n        ),\n    )\n    target_num_workers: Optional[int] = Field(\n        None, description=\"The targeted number of nodes in the cluster.\"\n    )\n    user: Optional[str] = Field(\n        None,\n        description=(\n            \"The user that caused the event to occur. (Empty if it was done by\"\n            \" Databricks.)\"\n        ),\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.FileStorageInfo","title":"FileStorageInfo","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class FileStorageInfo(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    destination: Optional[str] = Field(\n        None, description=\"File destination. Example: `file:/my/file.sh`\"\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.GitSnapshot","title":"GitSnapshot","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Read-only state of the remote repository at the time the job was run. This field is only included on job runs.

        Source code in prefect_databricks/models/jobs.py
        class GitSnapshot(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n\n    Read-only state of the remote repository at the time the job was run. This field is only included on job runs.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    used_commit: Optional[str] = Field(\n        None,\n        description=(\n            \"Commit that was used to execute the run. If git_branch was specified, this\"\n            \" points to the HEAD of the branch at the time of the run; if git_tag was\"\n            \" specified, this points to the commit the tag points to.\"\n        ),\n        example=\"4506fdf41e9fa98090570a34df7a5bce163ff15f\",\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.GitSource","title":"GitSource","text":"

        Bases: BaseModel

        See source code for the fields' description.

        This functionality is in Public Preview.

        An optional specification for a remote repository containing the notebooks used by this job's notebook tasks.

        Source code in prefect_databricks/models/jobs.py
        class GitSource(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n\n        This functionality is in Public Preview.\n\n    An optional specification for a remote repository containing the notebooks used by this job's notebook tasks.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    git_branch: Optional[str] = Field(\n        None,\n        description=(\n            \"Name of the branch to be checked out and used by this job. This field\"\n            \" cannot be specified in conjunction with git_tag or git_commit.\\nThe\"\n            \" maximum length is 255 characters.\"\n        ),\n        example=\"main\",\n    )\n    git_commit: Optional[str] = Field(\n        None,\n        description=(\n            \"Commit to be checked out and used by this job. This field cannot be\"\n            \" specified in conjunction with git_branch or git_tag.\\nThe maximum length\"\n            \" is 64 characters.\"\n        ),\n        example=\"e0056d01\",\n    )\n    git_provider: Optional[\n        Literal[\n            \"gitHub\",\n            \"bitbucketCloud\",\n            \"azureDevOpsServices\",\n            \"gitHubEnterprise\",\n            \"bitbucketServer\",\n            \"gitLab\",\n            \"gitLabEnterpriseEdition\",\n            \"awsCodeCommit\",\n        ]\n    ] = Field(\n        None,\n        description=(\n            \"Unique identifier of the service used to host the Git repository. The\"\n            \" value is case insensitive.\"\n        ),\n        example=\"github\",\n    )\n    git_snapshot: Optional[GitSnapshot] = None\n    git_tag: Optional[str] = Field(\n        None,\n        description=(\n            \"Name of the tag to be checked out and used by this job. This field cannot\"\n            \" be specified in conjunction with git_branch or git_commit.\\nThe maximum\"\n            \" length is 255 characters.\"\n        ),\n        example=\"release-1.0.0\",\n    )\n    git_url: Optional[str] = Field(\n        None,\n        description=(\n            \"URL of the repository to be cloned by this job.\\nThe maximum length is 300\"\n            \" characters.\"\n        ),\n        example=\"https://github.com/databricks/databricks-cli\",\n    )\n
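        For illustration, the sketch below describes a GitHub-hosted repository checked out on its main branch; the URL and provider reuse the documented example values.

        from prefect_databricks.models.jobs import GitSource

        # A sketch of a remote repository spec (values reuse the documented examples).
        source = GitSource(
            git_url="https://github.com/databricks/databricks-cli",
            git_provider="gitHub",
            git_branch="main",
        )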
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.GitSource1","title":"GitSource1","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class GitSource1(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    __root__: Union[GitSource, Any, Any, Any] = Field(\n        ...,\n        description=(\n            \"This functionality is in Public Preview.\\n\\nAn optional specification for\"\n            \" a remote repository containing the notebooks used by this job's notebook\"\n            \" tasks.\"\n        ),\n        example={\n            \"git_branch\": \"main\",\n            \"git_provider\": \"gitHub\",\n            \"git_url\": \"https://github.com/databricks/databricks-cli\",\n        },\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.GroupName","title":"GroupName","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class GroupName(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    __root__: str = Field(\n        ...,\n        description=(\n            \"Group name. There are two built-in groups: `users` for all users, and\"\n            \" `admins` for administrators.\"\n        ),\n        example=\"users\",\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.InitScriptInfo","title":"InitScriptInfo","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class InitScriptInfo(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    s3: Optional[S3StorageInfo] = Field(\n        None,\n        alias=\"S3\",\n        description=(\n            \"S3 location of init script. Destination and either region or endpoint must\"\n            ' be provided. For example, `{ \"s3\": { \"destination\" :'\n            ' \"s3://init_script_bucket/prefix\", \"region\" : \"us-west-2\" } }`'\n        ),\n    )\n    dbfs: Optional[DbfsStorageInfo] = Field(\n        None,\n        description=(\n            \"DBFS location of init script. Destination must be provided. For example,\"\n            ' `{ \"dbfs\" : { \"destination\" : \"dbfs:/home/init_script\" } }`'\n        ),\n    )\n    file: Optional[FileStorageInfo] = Field(\n        None,\n        description=(\n            \"File location of init script. Destination must be provided. For example,\"\n            ' `{ \"file\" : { \"destination\" : \"file:/my/local/file.sh\" } }`'\n        ),\n    )\n
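        Only one storage location needs to be set per entry; the sketch below stores an init script on DBFS, reusing the example destination from the field description.

        from prefect_databricks.models.jobs import DbfsStorageInfo, InitScriptInfo

        # A sketch: one init script stored on DBFS (destination reuses the documented example path).
        init_script = InitScriptInfo(dbfs=DbfsStorageInfo(destination="dbfs:/home/init_script"))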
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.IsOwner","title":"IsOwner","text":"

        Bases: str, Enum

        Permission that represents ownership of the job.

        Source code in prefect_databricks/models/jobs.py
        class IsOwner(str, Enum):\n    \"\"\"\n    Perimssion that represents ownership of the job.\n    \"\"\"\n\n    isowner = \"IS_OWNER\"\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.Job","title":"Job","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class Job(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    created_time: Optional[int] = Field(\n        None,\n        description=(\n            \"The time at which this job was created in epoch milliseconds (milliseconds\"\n            \" since 1/1/1970 UTC).\"\n        ),\n        example=1601370337343,\n    )\n    creator_user_name: Optional[str] = Field(\n        None,\n        description=(\n            \"The creator user name. This field won\u2019t be included in the response if the\"\n            \" user has already been deleted.\"\n        ),\n        example=\"user.name@databricks.com\",\n    )\n    job_id: Optional[int] = Field(\n        None, description=\"The canonical identifier for this job.\", example=11223344\n    )\n    settings: Optional[JobSettings] = Field(\n        None,\n        description=(\n            \"Settings for this job and all of its runs. These settings can be updated\"\n            \" using the `resetJob` method.\"\n        ),\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.JobCluster","title":"JobCluster","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class JobCluster(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    job_cluster_key: str = Field(\n        ...,\n        description=(\n            \"A unique name for the job cluster. This field is required and must be\"\n            \" unique within the job.\\n`JobTaskSettings` may refer to this field to\"\n            \" determine which cluster to launch for the task execution.\"\n        ),\n        example=\"auto_scaling_cluster\",\n        max_length=100,\n        min_length=1,\n        regex=\"^[\\\\w\\\\-]+$\",\n    )\n    new_cluster: Optional[NewCluster] = None\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.JobEmailNotifications","title":"JobEmailNotifications","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class JobEmailNotifications(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    no_alert_for_skipped_runs: Optional[bool] = Field(\n        None,\n        description=(\n            \"If true, do not send email to recipients specified in `on_failure` if the\"\n            \" run is skipped.\"\n        ),\n        example=False,\n    )\n    on_failure: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"A list of email addresses to notify when a run completes unsuccessfully. A\"\n            \" run is considered unsuccessful if it ends with an `INTERNAL_ERROR`\"\n            \" `life_cycle_state` or a `SKIPPED`, `FAILED`, or `TIMED_OUT`\"\n            \" `result_state`. If not specified on job creation, reset, or update, or\"\n            \" the list is empty, then notifications are not sent. Job-level failure\"\n            \" notifications are sent only once after the entire job run (including all\"\n            \" of its retries) has failed. Notifications are not sent when failed job\"\n            \" runs are retried. To receive a failure notification after every failed\"\n            \" task (including every failed retry), use task-level notifications\"\n            \" instead.\"\n        ),\n        example=[\"user.name@databricks.com\"],\n    )\n    on_start: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"A list of email addresses to be notified when a run begins. If not\"\n            \" specified on job creation, reset, or update, the list is empty, and\"\n            \" notifications are not sent.\"\n        ),\n        example=[\"user.name@databricks.com\"],\n    )\n    on_success: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"A list of email addresses to be notified when a run successfully\"\n            \" completes. A run is considered to have completed successfully if it ends\"\n            \" with a `TERMINATED` `life_cycle_state` and a `SUCCESSFUL` result_state.\"\n            \" If not specified on job creation, reset, or update, the list is empty,\"\n            \" and notifications are not sent.\"\n        ),\n        example=[\"user.name@databricks.com\"],\n    )\n
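        As a hedged illustration (not part of the generated reference), the model above could be populated as follows; the address is the placeholder used in the field examples:

        ```python
        from prefect_databricks.models.jobs import JobEmailNotifications

        # Minimal sketch: alert one address when a run fails, but stay quiet for
        # skipped runs. The address is a placeholder from the field examples above.
        notifications = JobEmailNotifications(
            on_failure=["user.name@databricks.com"],
            no_alert_for_skipped_runs=True,
        )
        ```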
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.JobSettings","title":"JobSettings","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class JobSettings(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    email_notifications: Optional[JobEmailNotifications] = Field(\n        None,\n        description=(\n            \"An optional set of email addresses that is notified when runs of this job\"\n            \" begin or complete as well as when this job is deleted. The default\"\n            \" behavior is to not send any emails.\"\n        ),\n    )\n    format: Optional[Literal[\"SINGLE_TASK\", \"MULTI_TASK\"]] = Field(\n        None,\n        description=(\n            \"Used to tell what is the format of the job. This field is ignored in\"\n            \" Create/Update/Reset calls. When using the Jobs API 2.1 this value is\"\n            ' always set to `\"MULTI_TASK\"`.'\n        ),\n        example=\"MULTI_TASK\",\n    )\n    git_source: Optional[GitSource1] = Field(\n        None,\n        description=(\n            \"This functionality is in Public Preview.\\n\\nAn optional specification for\"\n            \" a remote repository containing the notebooks used by this job's notebook\"\n            \" tasks.\"\n        ),\n        example={\n            \"git_branch\": \"main\",\n            \"git_provider\": \"gitHub\",\n            \"git_url\": \"https://github.com/databricks/databricks-cli\",\n        },\n    )\n    job_clusters: Optional[List[JobCluster]] = Field(\n        None,\n        description=(\n            \"A list of job cluster specifications that can be shared and reused by\"\n            \" tasks of this job. Libraries cannot be declared in a shared job cluster.\"\n            \" You must declare dependent libraries in task settings.\"\n        ),\n        example=[\n            {\n                \"job_cluster_key\": \"auto_scaling_cluster\",\n                \"new_cluster\": {\n                    \"autoscale\": {\"max_workers\": 16, \"min_workers\": 2},\n                    \"aws_attributes\": {\"availability\": \"SPOT\", \"zone_id\": \"us-west-2a\"},\n                    \"node_type_id\": \"i3.xlarge\",\n                    \"spark_conf\": {\"spark.speculation\": True},\n                    \"spark_version\": \"7.3.x-scala2.12\",\n                },\n            }\n        ],\n        max_items=100,\n    )\n    max_concurrent_runs: Optional[int] = Field(\n        None,\n        description=(\n            \"An optional maximum allowed number of concurrent runs of the job.\\n\\nSet\"\n            \" this value if you want to be able to execute multiple runs of the same\"\n            \" job concurrently. This is useful for example if you trigger your job on a\"\n            \" frequent schedule and want to allow consecutive runs to overlap with each\"\n            \" other, or if you want to trigger multiple runs which differ by their\"\n            \" input parameters.\\n\\nThis setting affects only new runs. For example,\"\n            \" suppose the job\u2019s concurrency is 4 and there are 4 concurrent active\"\n            \" runs. Then setting the concurrency to 3 won\u2019t kill any of the active\"\n            \" runs. However, from then on, new runs are skipped unless there are fewer\"\n            \" than 3 active runs.\\n\\nThis value cannot exceed 1000\\\\. Setting this\"\n            \" value to 0 causes all new runs to be skipped. 
The default behavior is to\"\n            \" allow only 1 concurrent run.\"\n        ),\n        example=10,\n    )\n    name: Optional[str] = Field(\n        \"Untitled\",\n        description=\"An optional name for the job.\",\n        example=\"A multitask job\",\n    )\n    schedule: Optional[CronSchedule] = Field(\n        None,\n        description=(\n            \"An optional periodic schedule for this job. The default behavior is that\"\n            \" the job only runs when triggered by clicking \u201cRun Now\u201d in the Jobs UI or\"\n            \" sending an API request to `runNow`.\"\n        ),\n    )\n    tags: Optional[Dict[str, Any]] = Field(\n        \"{}\",\n        description=(\n            \"A map of tags associated with the job. These are forwarded to the cluster\"\n            \" as cluster tags for jobs clusters, and are subject to the same\"\n            \" limitations as cluster tags. A maximum of 25 tags can be added to the\"\n            \" job.\"\n        ),\n        example={\"cost-center\": \"engineering\", \"team\": \"jobs\"},\n    )\n    tasks: Optional[List[JobTaskSettings]] = Field(\n        None,\n        description=\"A list of task specifications to be executed by this job.\",\n        example=[\n            {\n                \"depends_on\": [],\n                \"description\": \"Extracts session data from events\",\n                \"existing_cluster_id\": \"0923-164208-meows279\",\n                \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}],\n                \"max_retries\": 3,\n                \"min_retry_interval_millis\": 2000,\n                \"retry_on_timeout\": False,\n                \"spark_jar_task\": {\n                    \"main_class_name\": \"com.databricks.Sessionize\",\n                    \"parameters\": [\"--data\", \"dbfs:/path/to/data.json\"],\n                },\n                \"task_key\": \"Sessionize\",\n                \"timeout_seconds\": 86400,\n            },\n            {\n                \"depends_on\": [],\n                \"description\": \"Ingests order data\",\n                \"job_cluster_key\": \"auto_scaling_cluster\",\n                \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}],\n                \"max_retries\": 3,\n                \"min_retry_interval_millis\": 2000,\n                \"retry_on_timeout\": False,\n                \"spark_jar_task\": {\n                    \"main_class_name\": \"com.databricks.OrdersIngest\",\n                    \"parameters\": [\"--data\", \"dbfs:/path/to/order-data.json\"],\n                },\n                \"task_key\": \"Orders_Ingest\",\n                \"timeout_seconds\": 86400,\n            },\n            {\n                \"depends_on\": [\n                    {\"task_key\": \"Orders_Ingest\"},\n                    {\"task_key\": \"Sessionize\"},\n                ],\n                \"description\": \"Matches orders with user sessions\",\n                \"max_retries\": 3,\n                \"min_retry_interval_millis\": 2000,\n                \"new_cluster\": {\n                    \"autoscale\": {\"max_workers\": 16, \"min_workers\": 2},\n                    \"aws_attributes\": {\"availability\": \"SPOT\", \"zone_id\": \"us-west-2a\"},\n                    \"node_type_id\": \"i3.xlarge\",\n                    \"spark_conf\": {\"spark.speculation\": True},\n                    \"spark_version\": \"7.3.x-scala2.12\",\n                },\n                \"notebook_task\": {\n                    
\"base_parameters\": {\"age\": \"35\", \"name\": \"John Doe\"},\n                    \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n                    \"source\": \"WORKSPACE\",\n                },\n                \"retry_on_timeout\": False,\n                \"task_key\": \"Match\",\n                \"timeout_seconds\": 86400,\n            },\n        ],\n        max_items=100,\n    )\n    timeout_seconds: Optional[int] = Field(\n        None,\n        description=(\n            \"An optional timeout applied to each run of this job. The default behavior\"\n            \" is to have no timeout.\"\n        ),\n        example=86400,\n    )\n    webhook_notifications: Optional[WebhookNotifications] = Field(\n        None,\n        description=(\n            \"A collection of system notification IDs to notify when runs of this job\"\n            \" begin or complete. The default behavior is to not send any system\"\n            \" notifications.\"\n        ),\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.JobTask","title":"JobTask","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class JobTask(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    dbt_task: Optional[DbtTask] = Field(\n        None,\n        description=(\n            \"If dbt_task, indicates that this must execute a dbt task. It requires both\"\n            \" Databricks SQL and the ability to use a serverless or a pro SQL\"\n            \" warehouse.\"\n        ),\n    )\n    notebook_task: Optional[NotebookTask] = Field(\n        None,\n        description=(\n            \"If notebook_task, indicates that this job must run a notebook. This field\"\n            \" may not be specified in conjunction with spark_jar_task.\"\n        ),\n    )\n    pipeline_task: Optional[PipelineTask] = Field(\n        None,\n        description=(\n            \"If pipeline_task, indicates that this job must execute a Pipeline.\"\n        ),\n    )\n    python_wheel_task: Optional[PythonWheelTask] = Field(\n        None,\n        description=(\n            \"If python_wheel_task, indicates that this job must execute a PythonWheel.\"\n        ),\n    )\n    spark_jar_task: Optional[SparkJarTask] = Field(\n        None,\n        description=\"If spark_jar_task, indicates that this job must run a JAR.\",\n        example=\"\",\n    )\n    spark_python_task: Optional[SparkPythonTask] = Field(\n        None,\n        description=(\n            \"If spark_python_task, indicates that this job must run a Python file.\"\n        ),\n    )\n    spark_submit_task: Optional[SparkSubmitTask] = Field(\n        None,\n        description=(\n            \"If spark_submit_task, indicates that this job must be launched by the\"\n            \" spark submit script.\"\n        ),\n    )\n    sql_task: Optional[SqlTask] = Field(\n        None,\n        description=(\n            \"If sql_task, indicates that this job must execute a SQL task. It requires\"\n            \" both Databricks SQL and a serverless or a pro SQL warehouse.\"\n        ),\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.JobTaskSettings","title":"JobTaskSettings","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class JobTaskSettings(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    dbt_task: Optional[DbtTask] = Field(\n        None,\n        description=(\n            \"If dbt_task, indicates that this must execute a dbt task. It requires both\"\n            \" Databricks SQL and the ability to use a serverless or a pro SQL\"\n            \" warehouse.\"\n        ),\n    )\n    depends_on: Optional[TaskDependencies] = None\n    description: Optional[TaskDescription] = None\n    email_notifications: Optional[JobEmailNotifications] = Field(\n        None,\n        description=(\n            \"An optional set of email addresses that is notified when runs of this task\"\n            \" begin or complete as well as when this task is deleted. The default\"\n            \" behavior is to not send any emails.\"\n        ),\n    )\n    existing_cluster_id: Optional[str] = Field(\n        None,\n        description=(\n            \"If existing_cluster_id, the ID of an existing cluster that is used for all\"\n            \" runs of this task. When running tasks on an existing cluster, you may\"\n            \" need to manually restart the cluster if it stops responding. We suggest\"\n            \" running jobs on new clusters for greater reliability.\"\n        ),\n        example=\"0923-164208-meows279\",\n    )\n    job_cluster_key: Optional[str] = Field(\n        None,\n        description=(\n            \"If job_cluster_key, this task is executed reusing the cluster specified in\"\n            \" `job.settings.job_clusters`.\"\n        ),\n        max_length=100,\n        min_length=1,\n        regex=\"^[\\\\w\\\\-]+$\",\n    )\n    libraries: Optional[List[Library]] = Field(\n        None,\n        description=(\n            \"An optional list of libraries to be installed on the cluster that executes\"\n            \" the task. The default value is an empty list.\"\n        ),\n    )\n    max_retries: Optional[int] = Field(\n        None,\n        description=(\n            \"An optional maximum number of times to retry an unsuccessful run. A run is\"\n            \" considered to be unsuccessful if it completes with the `FAILED`\"\n            \" result_state or `INTERNAL_ERROR` `life_cycle_state`. The value -1 means\"\n            \" to retry indefinitely and the value 0 means to never retry. The default\"\n            \" behavior is to never retry.\"\n        ),\n        example=10,\n    )\n    min_retry_interval_millis: Optional[int] = Field(\n        None,\n        description=(\n            \"An optional minimal interval in milliseconds between the start of the\"\n            \" failed run and the subsequent retry run. The default behavior is that\"\n            \" unsuccessful runs are immediately retried.\"\n        ),\n        example=2000,\n    )\n    new_cluster: Optional[NewCluster] = Field(\n        None,\n        description=(\n            \"If new_cluster, a description of a cluster that is created for each run.\"\n        ),\n    )\n    notebook_task: Optional[NotebookTask] = Field(\n        None,\n        description=(\n            \"If notebook_task, indicates that this task must run a notebook. 
This field\"\n            \" may not be specified in conjunction with spark_jar_task.\"\n        ),\n    )\n    pipeline_task: Optional[PipelineTask] = Field(\n        None,\n        description=(\n            \"If pipeline_task, indicates that this task must execute a Pipeline.\"\n        ),\n    )\n    python_wheel_task: Optional[PythonWheelTask] = Field(\n        None,\n        description=(\n            \"If python_wheel_task, indicates that this job must execute a PythonWheel.\"\n        ),\n    )\n    retry_on_timeout: Optional[bool] = Field(\n        None,\n        description=(\n            \"An optional policy to specify whether to retry a task when it times out.\"\n            \" The default behavior is to not retry on timeout.\"\n        ),\n        example=True,\n    )\n    spark_jar_task: Optional[SparkJarTask] = Field(\n        None, description=\"If spark_jar_task, indicates that this task must run a JAR.\"\n    )\n    spark_python_task: Optional[SparkPythonTask] = Field(\n        None,\n        description=(\n            \"If spark_python_task, indicates that this task must run a Python file.\"\n        ),\n    )\n    spark_submit_task: Optional[SparkSubmitTask] = Field(\n        None,\n        description=(\n            \"If spark_submit_task, indicates that this task must be launched by the\"\n            \" spark submit script.\"\n        ),\n    )\n    sql_task: Optional[SqlTask] = Field(\n        None,\n        description=(\n            \"If sql_task, indicates that this job must execute a SQL task. It requires\"\n            \" both Databricks SQL and a serverless or a pro SQL warehouse.\"\n        ),\n    )\n    task_key: TaskKey\n    timeout_seconds: Optional[int] = Field(\n        None,\n        description=(\n            \"An optional timeout applied to each run of this job task. The default\"\n            \" behavior is to have no timeout.\"\n        ),\n        example=86400,\n    )\n    webhook_notifications: Optional[WebhookNotifications] = Field(\n        None,\n        description=(\n            \"A collection of system notification IDs to notify when the run begins or\"\n            \" completes. The default behavior is to not send any system notifications.\"\n        ),\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.Library","title":"Library","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class Library(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    cran: Optional[RCranLibrary] = Field(\n        None, description=\"If cran, specification of a CRAN library to be installed.\"\n    )\n    egg: Optional[str] = Field(\n        None,\n        description=(\n            \"If egg, URI of the egg to be installed. DBFS and S3 URIs are supported.\"\n            ' For example: `{ \"egg\": \"dbfs:/my/egg\" }` or `{ \"egg\":'\n            ' \"s3://my-bucket/egg\" }`. If S3 is used, make sure the cluster has read'\n            \" access on the library. You may need to launch the cluster with an\"\n            \" instance profile to access the S3 URI.\"\n        ),\n        example=\"dbfs:/my/egg\",\n    )\n    jar: Optional[str] = Field(\n        None,\n        description=(\n            \"If jar, URI of the JAR to be installed. DBFS and S3 URIs are supported.\"\n            ' For example: `{ \"jar\": \"dbfs:/mnt/databricks/library.jar\" }` or `{ \"jar\":'\n            ' \"s3://my-bucket/library.jar\" }`. If S3 is used, make sure the cluster has'\n            \" read access on the library. You may need to launch the cluster with an\"\n            \" instance profile to access the S3 URI.\"\n        ),\n        example=\"dbfs:/my-jar.jar\",\n    )\n    maven: Optional[MavenLibrary] = Field(\n        None,\n        description=(\n            \"If maven, specification of a Maven library to be installed. For example:\"\n            ' `{ \"coordinates\": \"org.jsoup:jsoup:1.7.2\" }`'\n        ),\n    )\n    pypi: Optional[PythonPyPiLibrary] = Field(\n        None,\n        description=(\n            \"If pypi, specification of a PyPI library to be installed. Specifying the\"\n            \" `repo` field is optional and if not specified, the default pip index is\"\n            ' used. For example: `{ \"package\": \"simplejson\", \"repo\":'\n            ' \"https://my-repo.com\" }`'\n        ),\n    )\n    whl: Optional[str] = Field(\n        None,\n        description=(\n            \"If whl, URI of the wheel or zipped wheels to be installed. DBFS and S3\"\n            ' URIs are supported. For example: `{ \"whl\": \"dbfs:/my/whl\" }` or `{ \"whl\":'\n            ' \"s3://my-bucket/whl\" }`. If S3 is used, make sure the cluster has read'\n            \" access on the library. You may need to launch the cluster with an\"\n            \" instance profile to access the S3 URI. Also the wheel file name needs to\"\n            \" use the [correct\"\n            \" convention](https://www.python.org/dev/peps/pep-0427/#file-format). If\"\n            \" zipped wheels are to be installed, the file name suffix should be\"\n            \" `.wheelhouse.zip`.\"\n        ),\n        example=\"dbfs:/my/whl\",\n    )\n
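        As a brief sketch (not part of the generated reference), task libraries can be declared by combining `Library` with the more specific library models; the package and JAR URIs are the placeholder values from the field descriptions:

        ```python
        from prefect_databricks.models.jobs import Library, PythonPyPiLibrary

        # Sketch only: one PyPI package and one DBFS-hosted JAR, using the
        # placeholder values shown in the field descriptions above.
        libraries = [
            Library(pypi=PythonPyPiLibrary(package="simplejson==3.8.0")),
            Library(jar="dbfs:/mnt/databricks/library.jar"),
        ]
        ```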
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.LibraryFullStatus","title":"LibraryFullStatus","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class LibraryFullStatus(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    is_library_for_all_clusters: Optional[bool] = Field(\n        None,\n        description=(\n            \"Whether the library was set to be installed on all clusters via the\"\n            \" libraries UI.\"\n        ),\n    )\n    library: Optional[Library] = Field(\n        None, description=\"Unique identifier for the library.\"\n    )\n    messages: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"All the info and warning messages that have occurred so far for this\"\n            \" library.\"\n        ),\n    )\n    status: Optional[LibraryInstallStatus] = Field(\n        None, description=\"Status of installing the library on the cluster.\"\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.LibraryInstallStatus","title":"LibraryInstallStatus","text":"

        Bases: str, Enum

        • PENDING: No action has yet been taken to install the library. This state should be very short lived.
        • RESOLVING: Metadata necessary to install the library is being retrieved from the provided repository. For Jar, Egg, and Whl libraries, this step is a no-op.
        • INSTALLING: The library is actively being installed, either by adding resources to Spark or executing system commands inside the Spark nodes.
        • INSTALLED: The library has been successfully installed.
        • SKIPPED: Installation on a Databricks Runtime 7.0 or above cluster was skipped due to Scala version incompatibility.
        • FAILED: Some step in installation failed. More information can be found in the messages field.
        • UNINSTALL_ON_RESTART: The library has been marked for removal. Libraries can be removed only when clusters are restarted, so libraries that enter this state remain until the cluster is restarted.
        Source code in prefect_databricks/models/jobs.py
        class LibraryInstallStatus(str, Enum):\n    \"\"\"\n        * `PENDING`: No action has yet been taken to install the library. This state should be very short lived.\n    * `RESOLVING`: Metadata necessary to install the library is being retrieved from the provided repository. For Jar, Egg, and Whl libraries, this step is a no-op.\n    * `INSTALLING`: The library is actively being installed, either by adding resources to Spark or executing system commands inside the Spark nodes.\n    * `INSTALLED`: The library has been successfully installed.\n    * `SKIPPED`: Installation on a Databricks Runtime 7.0 or above cluster was skipped due to Scala version incompatibility.\n    * `FAILED`: Some step in installation failed. More information can be found in the messages field.\n    * `UNINSTALL_ON_RESTART`: The library has been marked for removal. Libraries can be removed only when clusters are restarted, so libraries that enter this state remain until the cluster is restarted.\n    \"\"\"\n\n    pending = \"PENDING\"\n    resolving = \"RESOLVING\"\n    installing = \"INSTALLING\"\n    installed = \"INSTALLED\"\n    skipped = \"SKIPPED\"\n    failed = \"FAILED\"\n    uninstallonrestart = \"UNINSTALL_ON_RESTART\"\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ListOrder","title":"ListOrder","text":"

        Bases: str, Enum

        • DESC: Descending order.
        • ASC: Ascending order.
        Source code in prefect_databricks/models/jobs.py
        class ListOrder(str, Enum):\n    \"\"\"\n        * `DESC`: Descending order.\n    * `ASC`: Ascending order.\n    \"\"\"\n\n    desc = \"DESC\"\n    asc = \"ASC\"\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.LogSyncStatus","title":"LogSyncStatus","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class LogSyncStatus(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    last_attempted: Optional[int] = Field(\n        None,\n        description=(\n            \"The timestamp of last attempt. If the last attempt fails, last_exception\"\n            \" contains the exception in the last attempt.\"\n        ),\n    )\n    last_exception: Optional[str] = Field(\n        None,\n        description=(\n            \"The exception thrown in the last attempt, it would be null (omitted in the\"\n            \" response) if there is no exception in last attempted.\"\n        ),\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.MavenLibrary","title":"MavenLibrary","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class MavenLibrary(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    coordinates: str = Field(\n        ...,\n        description=(\n            \"Gradle-style Maven coordinates. For example: `org.jsoup:jsoup:1.7.2`. This\"\n            \" field is required.\"\n        ),\n        example=\"org.jsoup:jsoup:1.7.2\",\n    )\n    exclusions: Optional[List[str]] = Field(\n        None,\n        description=(\n            'List of dependences to exclude. For example: `[\"slf4j:slf4j\",'\n            ' \"*:hadoop-client\"]`.\\n\\nMaven dependency exclusions:'\n            \" <https://maven.apache.org/guides/introduction/introduction-to-optional-and-excludes-dependencies.html>.\"\n        ),\n        example=[\"slf4j:slf4j\", \"*:hadoop-client\"],\n    )\n    repo: Optional[str] = Field(\n        None,\n        description=(\n            \"Maven repo to install the Maven package from. If omitted, both Maven\"\n            \" Central Repository and Spark Packages are searched.\"\n        ),\n        example=\"https://my-repo.com\",\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.NewCluster","title":"NewCluster","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class NewCluster(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    autoscale: Optional[AutoScale] = Field(\n        None,\n        description=(\n            \"If autoscale, the required parameters to automatically scale clusters up\"\n            \" and down based on load.\"\n        ),\n    )\n    aws_attributes: Optional[AwsAttributes] = Field(\n        None,\n        description=(\n            \"Attributes related to clusters running on Amazon Web Services. If not\"\n            \" specified at cluster creation, a set of default values is used.\"\n        ),\n    )\n    cluster_log_conf: Optional[ClusterLogConf] = Field(\n        None,\n        description=(\n            \"The configuration for delivering Spark logs to a long-term storage\"\n            \" destination. Only one destination can be specified for one cluster. If\"\n            \" the conf is given, the logs are delivered to the destination every `5\"\n            \" mins`. The destination of driver logs is\"\n            \" `<destination>/<cluster-id>/driver`, while the destination of executor\"\n            \" logs is `<destination>/<cluster-id>/executor`.\"\n        ),\n    )\n    custom_tags: Optional[ClusterTag] = Field(\n        None,\n        description=(\n            \"An object containing a set of tags for cluster resources. Databricks tags\"\n            \" all cluster resources (such as AWS instances and EBS volumes) with these\"\n            \" tags in addition to default_tags.\\n\\n**Note**:\\n\\n* Tags are not\"\n            \" supported on legacy node types such as compute-optimized and\"\n            \" memory-optimized\\n* Databricks allows at most 45 custom tags\"\n        ),\n    )\n    driver_instance_pool_id: Optional[str] = Field(\n        None,\n        description=(\n            \"The optional ID of the instance pool to use for the driver node. You must\"\n            \" also specify `instance_pool_id`. Refer to [Instance Pools\"\n            \" API](https://docs.databricks.com/dev-tools/api/latest/instance-pools.html)\"\n            \" for details.\"\n        ),\n    )\n    driver_node_type_id: Optional[str] = Field(\n        None,\n        description=(\n            \"The node type of the Spark driver. This field is optional; if unset, the\"\n            \" driver node type is set as the same value as `node_type_id` defined\"\n            \" above.\"\n        ),\n    )\n    enable_elastic_disk: Optional[bool] = Field(\n        None,\n        description=(\n            \"Autoscaling Local Storage: when enabled, this cluster dynamically acquires\"\n            \" additional disk space when its Spark workers are running low on disk\"\n            \" space. This feature requires specific AWS permissions to function\"\n            \" correctly - refer to [Autoscaling local\"\n            \" storage](https://docs.databricks.com/clusters/configure.html#autoscaling-local-storage)\"\n            \" for details.\"\n        ),\n    )\n    enable_local_disk_encryption: Optional[bool] = Field(\n        None,\n        description=(\n            \"Determines whether encryption of disks locally attached to the cluster is\"\n            \" enabled.\"\n        ),\n    )\n    init_scripts: Optional[List[InitScriptInfo]] = Field(\n        None,\n        description=(\n            \"The configuration for storing init scripts. Any number of scripts can be\"\n            \" specified. 
The scripts are executed sequentially in the order provided.\"\n            \" If `cluster_log_conf` is specified, init script logs are sent to\"\n            \" `<destination>/<cluster-id>/init_scripts`.\"\n        ),\n    )\n    instance_pool_id: Optional[str] = Field(\n        None,\n        description=(\n            \"The optional ID of the instance pool to use for cluster nodes. If\"\n            \" `driver_instance_pool_id` is present, `instance_pool_id` is used for\"\n            \" worker nodes only. Otherwise, it is used for both the driver node and\"\n            \" worker nodes. Refer to [Instance Pools\"\n            \" API](https://docs.databricks.com/dev-tools/api/latest/instance-pools.html)\"\n            \" for details.\"\n        ),\n    )\n    node_type_id: Optional[str] = Field(\n        None,\n        description=(\n            \"This field encodes, through a single value, the resources available to\"\n            \" each of the Spark nodes in this cluster. For example, the Spark nodes can\"\n            \" be provisioned and optimized for memory or compute intensive workloads A\"\n            \" list of available node types can be retrieved by using the [List node\"\n            \" types](https://docs.databricks.com/dev-tools/api/latest/clusters.html#list-node-types)\"\n            \" API call.\"\n        ),\n    )\n    num_workers: Optional[int] = Field(\n        None,\n        description=(\n            \"If num_workers, number of worker nodes that this cluster must have. A\"\n            \" cluster has one Spark driver and num_workers executors for a total of\"\n            \" num_workers + 1 Spark nodes. When reading the properties of a cluster,\"\n            \" this field reflects the desired number of workers rather than the actual\"\n            \" current number of workers. For example, if a cluster is resized from 5 to\"\n            \" 10 workers, this field immediately updates to reflect the target size of\"\n            \" 10 workers, whereas the workers listed in `spark_info` gradually increase\"\n            \" from 5 to 10 as the new nodes are provisioned.\"\n        ),\n    )\n    policy_id: Optional[str] = Field(\n        None,\n        description=(\n            \"A [cluster\"\n            \" policy](https://docs.databricks.com/dev-tools/api/latest/policies.html)\"\n            \" ID. Either `node_type_id` or `instance_pool_id` must be specified in the\"\n            \" cluster policy if they are not specified in this job cluster object.\"\n        ),\n    )\n    spark_conf: Optional[SparkConfPair] = Field(\n        None,\n        description=(\n            \"An object containing a set of optional, user-specified Spark configuration\"\n            \" key-value pairs. You can also pass in a string of extra JVM options to\"\n            \" the driver and the executors via `spark.driver.extraJavaOptions` and\"\n            \" `spark.executor.extraJavaOptions` respectively.\\n\\nExample Spark confs:\"\n            ' `{\"spark.speculation\": true, \"spark.streaming.ui.retainedBatches\": 5}` or'\n            ' `{\"spark.driver.extraJavaOptions\": \"-verbose:gc -XX:+PrintGCDetails\"}`'\n        ),\n    )\n    spark_env_vars: Optional[SparkEnvPair] = Field(\n        None,\n        description=(\n            \"An object containing a set of optional, user-specified environment\"\n            \" variable key-value pairs. 
Key-value pair of the form (X,Y) are exported\"\n            \" as is (for example, `export X='Y'`) while launching the driver and\"\n            \" workers.\\n\\nTo specify an additional set of `SPARK_DAEMON_JAVA_OPTS`, we\"\n            \" recommend appending them to `$SPARK_DAEMON_JAVA_OPTS` as shown in the\"\n            \" following example. This ensures that all default databricks managed\"\n            \" environmental variables are included as well.\\n\\nExample Spark\"\n            ' environment variables: `{\"SPARK_WORKER_MEMORY\": \"28000m\",'\n            ' \"SPARK_LOCAL_DIRS\": \"/local_disk0\"}` or `{\"SPARK_DAEMON_JAVA_OPTS\":'\n            ' \"$SPARK_DAEMON_JAVA_OPTS -Dspark.shuffle.service.enabled=true\"}`'\n        ),\n    )\n    spark_version: str = Field(\n        ...,\n        description=(\n            \"The Spark version of the cluster. A list of available Spark versions can\"\n            \" be retrieved by using the [Runtime\"\n            \" versions](https://docs.databricks.com/dev-tools/api/latest/clusters.html#runtime-versions)\"\n            \" API call.\"\n        ),\n    )\n    ssh_public_keys: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"SSH public key contents that are added to each Spark node in this cluster.\"\n            \" The corresponding private keys can be used to login with the user name\"\n            \" `ubuntu` on port `2200`. Up to 10 keys can be specified.\"\n        ),\n    )\n
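        As an illustrative sketch (not generated from the source), a fixed-size `NewCluster` can be described with only the required `spark_version` plus a node type and worker count; the values are the placeholders used in this reference's examples:

        ```python
        from prefect_databricks.models.jobs import NewCluster

        # Sketch only: a small fixed-size cluster. Node type, Spark version, and
        # Spark conf are placeholder values from the examples in this reference.
        cluster = NewCluster(
            spark_version="7.3.x-scala2.12",
            node_type_id="i3.xlarge",
            num_workers=2,
            spark_conf={"spark.speculation": True},
        )
        ```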
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.NodeType","title":"NodeType","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class NodeType(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    description: str = Field(\n        ...,\n        description=(\n            \"A string description associated with this node type. This field is\"\n            \" required.\"\n        ),\n    )\n    instance_type_id: str = Field(\n        ...,\n        description=(\n            \"An identifier for the type of hardware that this node runs on. This field\"\n            \" is required.\"\n        ),\n    )\n    is_deprecated: Optional[bool] = Field(\n        None,\n        description=(\n            \"Whether the node type is deprecated. Non-deprecated node types offer\"\n            \" greater performance.\"\n        ),\n    )\n    memory_mb: int = Field(\n        ...,\n        description=(\n            \"Memory (in MB) available for this node type. This field is required.\"\n        ),\n    )\n    node_info: Optional[ClusterCloudProviderNodeInfo] = Field(\n        None, description=\"Node type info reported by the cloud provider.\"\n    )\n    node_type_id: str = Field(\n        ..., description=\"Unique identifier for this node type. This field is required.\"\n    )\n    num_cores: Optional[float] = Field(\n        None,\n        description=(\n            \"Number of CPU cores available for this node type. This can be fractional\"\n            \" if the number of cores on a machine instance is not divisible by the\"\n            \" number of Spark nodes on that machine. This field is required.\"\n        ),\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.NotebookOutput","title":"NotebookOutput","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class NotebookOutput(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    result: Optional[str] = Field(\n        None,\n        description=(\n            \"The value passed to\"\n            \" [dbutils.notebook.exit()](https://docs.databricks.com/notebooks/notebook-workflows.html#notebook-workflows-exit).\"\n            \" Databricks restricts this API to return the first 5 MB of the value. For\"\n            \" a larger result, your job can store the results in a cloud storage\"\n            \" service. This field is absent if `dbutils.notebook.exit()` was never\"\n            \" called.\"\n        ),\n        example=\"An arbitrary string passed by calling dbutils.notebook.exit(...)\",\n    )\n    truncated: Optional[bool] = Field(\n        None, description=\"Whether or not the result was truncated.\", example=False\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.NotebookTask","title":"NotebookTask","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class NotebookTask(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    base_parameters: Optional[Dict[str, Any]] = Field(\n        None,\n        description=(\n            \"Base parameters to be used for each run of this job. If the run is\"\n            \" initiated by a call to\"\n            \" [`run-now`](https://docs.databricks.com/dev-tools/api/latest/jobs.html#operation/JobsRunNow)\"\n            \" with parameters specified, the two parameters maps are merged. If the\"\n            \" same key is specified in `base_parameters` and in `run-now`, the value\"\n            \" from `run-now` is used.\\n\\nUse [Task parameter\"\n            \" variables](https://docs.databricks.com/jobs.html#parameter-variables) to\"\n            \" set parameters containing information about job runs.\\n\\nIf the notebook\"\n            \" takes a parameter that is not specified in the job\u2019s `base_parameters` or\"\n            \" the `run-now` override parameters, the default value from the notebook is\"\n            \" used.\\n\\nRetrieve these parameters in a notebook using\"\n            \" [dbutils.widgets.get](https://docs.databricks.com/dev-tools/databricks-utils.html#dbutils-widgets).\"\n        ),\n        example={\"age\": 35, \"name\": \"John Doe\"},\n    )\n    notebook_path: str = Field(\n        ...,\n        description=(\n            \"The path of the notebook to be run in the Databricks workspace or remote\"\n            \" repository. For notebooks stored in the Databricks workspace, the path\"\n            \" must be absolute and begin with a slash. For notebooks stored in a remote\"\n            \" repository, the path must be relative. This field is required.\"\n        ),\n        example=\"/Users/user.name@databricks.com/notebook_to_run\",\n    )\n    source: Optional[Literal[\"WORKSPACE\", \"GIT\"]] = Field(\n        None,\n        description=(\n            \"Optional location type of the notebook. When set to `WORKSPACE`, the\"\n            \" notebook will be retrieved from the local Databricks workspace. When set\"\n            \" to `GIT`, the notebook will be retrieved from a Git repository defined in\"\n            \" `git_source`. If the value is empty, the task will use `GIT` if\"\n            \" `git_source` is defined and `WORKSPACE` otherwise.\"\n        ),\n        example=\"WORKSPACE\",\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.OnFailureItem","title":"OnFailureItem","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class OnFailureItem(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    id: Optional[str] = None\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.OnStartItem","title":"OnStartItem","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class OnStartItem(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    id: Optional[str] = None\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.OnSucces","title":"OnSucces","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class OnSucces(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    id: Optional[str] = None\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ParameterPair","title":"ParameterPair","text":"

        Bases: BaseModel

        See source code for the fields' description.

        An object with additional information about why a cluster was terminated. The object keys are one of TerminationParameter and the value is the termination information.

        Source code in prefect_databricks/models/jobs.py
        class ParameterPair(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n\n    An object with additional information about why a cluster was terminated. The object keys are one of `TerminationParameter` and the value is the termination information.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n\n        allow_mutation = False\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.PermissionLevel","title":"PermissionLevel","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class PermissionLevel(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    __root__: Union[CanManage, CanManageRun, CanView, IsOwner] = Field(\n        ..., description=\"Permission level to grant.\"\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.PermissionLevelForGroup","title":"PermissionLevelForGroup","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class PermissionLevelForGroup(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    __root__: Union[CanManage, CanManageRun, CanView] = Field(\n        ..., description=\"Permission level to grant.\"\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.PipelineParams","title":"PipelineParams","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class PipelineParams(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    full_refresh: Optional[bool] = Field(\n        None, description=\"If true, triggers a full refresh on the delta live table.\"\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.PipelineTask","title":"PipelineTask","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class PipelineTask(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    full_refresh: Optional[bool] = Field(\n        False,\n        description=(\n            \"If true, a full refresh will be triggered on the delta live table.\"\n        ),\n    )\n    pipeline_id: Optional[str] = Field(\n        None,\n        description=\"The full name of the pipeline task to execute.\",\n        example=\"a12cd3e4-0ab1-1abc-1a2b-1a2bcd3e4fg5\",\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.PoolClusterTerminationCode","title":"PoolClusterTerminationCode","text":"

        Bases: str, Enum

        • INSTANCE_POOL_MAX_CAPACITY_FAILURE: The pool max capacity has been reached.
        • INSTANCE_POOL_NOT_FOUND_FAILURE: The pool specified by the cluster is no longer active or doesn't exist.
        Source code in prefect_databricks/models/jobs.py
        class PoolClusterTerminationCode(str, Enum):\n    \"\"\"\n        * INSTANCE_POOL_MAX_CAPACITY_FAILURE: The pool max capacity has been reached.\n    * INSTANCE_POOL_NOT_FOUND_FAILURE: The pool specified by the cluster is no longer active or doesn\u2019t exist.\n    \"\"\"\n\n    instancepoolmaxcapacityfailure = \"INSTANCE_POOL_MAX_CAPACITY_FAILURE\"\n    instancepoolnotfoundfailure = \"INSTANCE_POOL_NOT_FOUND_FAILURE\"\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.PythonPyPiLibrary","title":"PythonPyPiLibrary","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class PythonPyPiLibrary(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    package: str = Field(\n        ...,\n        description=(\n            \"The name of the PyPI package to install. An optional exact version\"\n            \" specification is also supported. Examples: `simplejson` and\"\n            \" `simplejson==3.8.0`. This field is required.\"\n        ),\n        example=\"simplejson==3.8.0\",\n    )\n    repo: Optional[str] = Field(\n        None,\n        description=(\n            \"The repository where the package can be found. If not specified, the\"\n            \" default pip index is used.\"\n        ),\n        example=\"https://my-repo.com\",\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.PythonWheelTask","title":"PythonWheelTask","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class PythonWheelTask(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    entry_point: Optional[str] = Field(\n        None,\n        description=(\n            \"Named entry point to use, if it does not exist in the metadata of the\"\n            \" package it executes the function from the package directly using\"\n            \" `$packageName.$entryPoint()`\"\n        ),\n    )\n    named_parameters: Optional[Dict[str, Any]] = Field(\n        None,\n        description=(\n            \"Command-line parameters passed to Python wheel task in the form of\"\n            ' `[\"--name=task\", \"--data=dbfs:/path/to/data.json\"]`. Leave it empty if'\n            \" `parameters` is not null.\"\n        ),\n        example={\"data\": \"dbfs:/path/to/data.json\", \"name\": \"task\"},\n    )\n    package_name: Optional[str] = Field(\n        None, description=\"Name of the package to execute\"\n    )\n    parameters: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"Command-line parameters passed to Python wheel task. Leave it empty if\"\n            \" `named_parameters` is not null.\"\n        ),\n        example=[\"--name=task\", \"one\", \"two\"],\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RCranLibrary","title":"RCranLibrary","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class RCranLibrary(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    package: str = Field(\n        ...,\n        description=\"The name of the CRAN package to install. This field is required.\",\n        example=\"geojson\",\n    )\n    repo: Optional[str] = Field(\n        None,\n        description=(\n            \"The repository where the package can be found. If not specified, the\"\n            \" default CRAN repo is used.\"\n        ),\n        example=\"https://my-repo.com\",\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RepairHistory","title":"RepairHistory","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class RepairHistory(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    repair_history: Optional[List[RepairHistoryItem]] = Field(\n        None, description=\"The repair history of the run.\"\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RepairHistoryItem","title":"RepairHistoryItem","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class RepairHistoryItem(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    end_time: Optional[int] = Field(\n        None, description=\"The end time of the (repaired) run.\", example=1625060863413\n    )\n    id: Optional[int] = Field(\n        None,\n        description=(\n            \"The ID of the repair. Only returned for the items that represent a repair\"\n            \" in `repair_history`.\"\n        ),\n        example=734650698524280,\n    )\n    start_time: Optional[int] = Field(\n        None, description=\"The start time of the (repaired) run.\", example=1625060460483\n    )\n    state: Optional[RunState] = None\n    task_run_ids: Optional[List[int]] = Field(\n        None,\n        description=(\n            \"The run IDs of the task runs that ran as part of this repair history item.\"\n        ),\n        example=[1106460542112844, 988297789683452],\n    )\n    type: Optional[Literal[\"ORIGINAL\", \"REPAIR\"]] = Field(\n        None,\n        description=(\n            \"The repair history item type. Indicates whether a run is the original run\"\n            \" or a repair run.\"\n        ),\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RepairRunInput","title":"RepairRunInput","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class RepairRunInput(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    latest_repair_id: Optional[int] = Field(\n        None,\n        description=(\n            \"The ID of the latest repair. This parameter is not required when repairing\"\n            \" a run for the first time, but must be provided on subsequent requests to\"\n            \" repair the same run.\"\n        ),\n        example=734650698524280,\n    )\n    rerun_all_failed_tasks: Optional[bool] = Field(\n        False,\n        description=(\n            \"If true, repair all failed tasks. Only one of rerun_tasks or\"\n            \" rerun_all_failed_tasks can be used.\"\n        ),\n    )\n    rerun_tasks: Optional[List[str]] = Field(\n        None,\n        description=\"The task keys of the task runs to repair.\",\n        example=[\"task0\", \"task1\"],\n    )\n    run_id: Optional[int] = Field(\n        None,\n        description=(\n            \"The job run ID of the run to repair. The run must not be in progress.\"\n        ),\n        example=455644833,\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ResizeCause","title":"ResizeCause","text":"

        Bases: str, Enum

        • AUTOSCALE: Automatically resized based on load.
        • USER_REQUEST: User requested a new size.
        • AUTORECOVERY: Autorecovery monitor resized the cluster after it lost a node.
        Source code in prefect_databricks/models/jobs.py
        class ResizeCause(str, Enum):\n    \"\"\"\n        * `AUTOSCALE`: Automatically resized based on load.\n    * `USER_REQUEST`: User requested a new size.\n    * `AUTORECOVERY`: Autorecovery monitor resized the cluster after it lost a node.\n    \"\"\"\n\n    autoscale = \"AUTOSCALE\"\n    userrequest = \"USER_REQUEST\"\n    autorecovery = \"AUTORECOVERY\"\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.Run","title":"Run","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class Run(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    attempt_number: Optional[int] = Field(\n        None,\n        description=(\n            \"The sequence number of this run attempt for a triggered job run. The\"\n            \" initial attempt of a run has an attempt_number of 0\\\\. If the initial run\"\n            \" attempt fails, and the job has a retry policy (`max_retries` \\\\> 0),\"\n            \" subsequent runs are created with an `original_attempt_run_id` of the\"\n            \" original attempt\u2019s ID and an incrementing `attempt_number`. Runs are\"\n            \" retried only until they succeed, and the maximum `attempt_number` is the\"\n            \" same as the `max_retries` value for the job.\"\n        ),\n        example=0,\n    )\n    cleanup_duration: Optional[int] = Field(\n        None,\n        description=(\n            \"The time in milliseconds it took to terminate the cluster and clean up any\"\n            \" associated artifacts. The total duration of the run is the sum of the\"\n            \" setup_duration, the execution_duration, and the cleanup_duration.\"\n        ),\n        example=0,\n    )\n    cluster_instance: Optional[ClusterInstance] = Field(\n        None,\n        description=(\n            \"The cluster used for this run. If the run is specified to use a new\"\n            \" cluster, this field is set once the Jobs service has requested a cluster\"\n            \" for the run.\"\n        ),\n    )\n    cluster_spec: Optional[ClusterSpec] = Field(\n        None,\n        description=(\n            \"A snapshot of the job\u2019s cluster specification when this run was created.\"\n        ),\n    )\n    creator_user_name: Optional[str] = Field(\n        None,\n        description=(\n            \"The creator user name. This field won\u2019t be included in the response if the\"\n            \" user has already been deleted.\"\n        ),\n        example=\"user.name@databricks.com\",\n    )\n    end_time: Optional[int] = Field(\n        None,\n        description=(\n            \"The time at which this run ended in epoch milliseconds (milliseconds since\"\n            \" 1/1/1970 UTC). This field is set to 0 if the job is still running.\"\n        ),\n        example=1625060863413,\n    )\n    execution_duration: Optional[int] = Field(\n        None,\n        description=(\n            \"The time in milliseconds it took to execute the commands in the JAR or\"\n            \" notebook until they completed, failed, timed out, were cancelled, or\"\n            \" encountered an unexpected error.\"\n        ),\n        example=0,\n    )\n    git_source: Optional[GitSource1] = Field(\n        None,\n        description=(\n            \"This functionality is in Public Preview.\\n\\nAn optional specification for\"\n            \" a remote repository containing the notebooks used by this job's notebook\"\n            \" tasks.\"\n        ),\n        example={\n            \"git_branch\": \"main\",\n            \"git_provider\": \"gitHub\",\n            \"git_url\": \"https://github.com/databricks/databricks-cli\",\n        },\n    )\n    job_clusters: Optional[List[JobCluster]] = Field(\n        None,\n        description=(\n            \"A list of job cluster specifications that can be shared and reused by\"\n            \" tasks of this job. 
Libraries cannot be declared in a shared job cluster.\"\n            \" You must declare dependent libraries in task settings.\"\n        ),\n        example=[\n            {\n                \"job_cluster_key\": \"auto_scaling_cluster\",\n                \"new_cluster\": {\n                    \"autoscale\": {\"max_workers\": 16, \"min_workers\": 2},\n                    \"aws_attributes\": {\"availability\": \"SPOT\", \"zone_id\": \"us-west-2a\"},\n                    \"node_type_id\": \"i3.xlarge\",\n                    \"spark_conf\": {\"spark.speculation\": True},\n                    \"spark_version\": \"7.3.x-scala2.12\",\n                },\n            }\n        ],\n        max_items=100,\n    )\n    job_id: Optional[int] = Field(\n        None,\n        description=\"The canonical identifier of the job that contains this run.\",\n        example=11223344,\n    )\n    number_in_job: Optional[int] = Field(\n        None,\n        deprecated=True,\n        description=(\n            \"A unique identifier for this job run. This is set to the same value as\"\n            \" `run_id`.\"\n        ),\n        example=455644833,\n    )\n    original_attempt_run_id: Optional[int] = Field(\n        None,\n        description=(\n            \"If this run is a retry of a prior run attempt, this field contains the\"\n            \" run_id of the original attempt; otherwise, it is the same as the run_id.\"\n        ),\n        example=455644833,\n    )\n    overriding_parameters: Optional[RunParameters] = Field(\n        None, description=\"The parameters used for this run.\"\n    )\n    run_id: Optional[int] = Field(\n        None,\n        description=(\n            \"The canonical identifier of the run. This ID is unique across all runs of\"\n            \" all jobs.\"\n        ),\n        example=455644833,\n    )\n    run_name: Optional[str] = Field(\n        \"Untitled\",\n        description=(\n            \"An optional name for the run. The maximum allowed length is 4096 bytes in\"\n            \" UTF-8 encoding.\"\n        ),\n        example=\"A multitask job run\",\n    )\n    run_page_url: Optional[str] = Field(\n        None,\n        description=\"The URL to the detail page of the run.\",\n        example=\"https://my-workspace.cloud.databricks.com/#job/11223344/run/123\",\n    )\n    run_type: Optional[RunType] = None\n    schedule: Optional[CronSchedule] = Field(\n        None,\n        description=(\n            \"The cron schedule that triggered this run if it was triggered by the\"\n            \" periodic scheduler.\"\n        ),\n    )\n    setup_duration: Optional[int] = Field(\n        None,\n        description=(\n            \"The time it took to set up the cluster in milliseconds. For runs that run\"\n            \" on new clusters this is the cluster creation time, for runs that run on\"\n            \" existing clusters this time should be very short.\"\n        ),\n        example=0,\n    )\n    start_time: Optional[int] = Field(\n        None,\n        description=(\n            \"The time at which this run was started in epoch milliseconds (milliseconds\"\n            \" since 1/1/1970 UTC). 
This may not be the time when the job task starts\"\n            \" executing, for example, if the job is scheduled to run on a new cluster,\"\n            \" this is the time the cluster creation call is issued.\"\n        ),\n        example=1625060460483,\n    )\n    state: Optional[RunState] = Field(\n        None, description=\"The result and lifecycle states of the run.\"\n    )\n    tasks: Optional[List[RunTask]] = Field(\n        None,\n        description=(\n            \"The list of tasks performed by the run. Each task has its own `run_id`\"\n            \" which you can use to call `JobsGetOutput` to retrieve the run resutls.\"\n        ),\n        example=[\n            {\n                \"attempt_number\": 0,\n                \"cleanup_duration\": 0,\n                \"cluster_instance\": {\n                    \"cluster_id\": \"0923-164208-meows279\",\n                    \"spark_context_id\": \"4348585301701786933\",\n                },\n                \"description\": \"Ingests order data\",\n                \"end_time\": 1629989930171,\n                \"execution_duration\": 0,\n                \"job_cluster_key\": \"auto_scaling_cluster\",\n                \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}],\n                \"run_id\": 2112892,\n                \"run_page_url\": (\n                    \"https://my-workspace.cloud.databricks.com/#job/39832/run/20\"\n                ),\n                \"setup_duration\": 0,\n                \"spark_jar_task\": {\"main_class_name\": \"com.databricks.OrdersIngest\"},\n                \"start_time\": 1629989929660,\n                \"state\": {\n                    \"life_cycle_state\": \"INTERNAL_ERROR\",\n                    \"result_state\": \"FAILED\",\n                    \"state_message\": (\n                        \"Library installation failed for library due to user error.\"\n                        \" Error messages:\\n'Manage' permissions are required to install\"\n                        \" libraries on a cluster\"\n                    ),\n                    \"user_cancelled_or_timedout\": False,\n                },\n                \"task_key\": \"Orders_Ingest\",\n            },\n            {\n                \"attempt_number\": 0,\n                \"cleanup_duration\": 0,\n                \"cluster_instance\": {\"cluster_id\": \"0923-164208-meows279\"},\n                \"depends_on\": [\n                    {\"task_key\": \"Orders_Ingest\"},\n                    {\"task_key\": \"Sessionize\"},\n                ],\n                \"description\": \"Matches orders with user sessions\",\n                \"end_time\": 1629989930238,\n                \"execution_duration\": 0,\n                \"new_cluster\": {\n                    \"autoscale\": {\"max_workers\": 16, \"min_workers\": 2},\n                    \"aws_attributes\": {\"availability\": \"SPOT\", \"zone_id\": \"us-west-2a\"},\n                    \"node_type_id\": \"i3.xlarge\",\n                    \"spark_conf\": {\"spark.speculation\": True},\n                    \"spark_version\": \"7.3.x-scala2.12\",\n                },\n                \"notebook_task\": {\n                    \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n                    \"source\": \"WORKSPACE\",\n                },\n                \"run_id\": 2112897,\n                \"run_page_url\": (\n                    \"https://my-workspace.cloud.databricks.com/#job/39832/run/21\"\n                ),\n                \"setup_duration\": 0,\n  
              \"start_time\": 0,\n                \"state\": {\n                    \"life_cycle_state\": \"SKIPPED\",\n                    \"state_message\": \"An upstream task failed.\",\n                    \"user_cancelled_or_timedout\": False,\n                },\n                \"task_key\": \"Match\",\n            },\n            {\n                \"attempt_number\": 0,\n                \"cleanup_duration\": 0,\n                \"cluster_instance\": {\n                    \"cluster_id\": \"0923-164208-meows279\",\n                    \"spark_context_id\": \"4348585301701786933\",\n                },\n                \"description\": \"Extracts session data from events\",\n                \"end_time\": 1629989930144,\n                \"execution_duration\": 0,\n                \"existing_cluster_id\": \"0923-164208-meows279\",\n                \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}],\n                \"run_id\": 2112902,\n                \"run_page_url\": (\n                    \"https://my-workspace.cloud.databricks.com/#job/39832/run/22\"\n                ),\n                \"setup_duration\": 0,\n                \"spark_jar_task\": {\"main_class_name\": \"com.databricks.Sessionize\"},\n                \"start_time\": 1629989929668,\n                \"state\": {\n                    \"life_cycle_state\": \"INTERNAL_ERROR\",\n                    \"result_state\": \"FAILED\",\n                    \"state_message\": (\n                        \"Library installation failed for library due to user error.\"\n                        \" Error messages:\\n'Manage' permissions are required to install\"\n                        \" libraries on a cluster\"\n                    ),\n                    \"user_cancelled_or_timedout\": False,\n                },\n                \"task_key\": \"Sessionize\",\n            },\n        ],\n        max_items=100,\n    )\n    trigger: Optional[TriggerType] = Field(\n        None, description=\"The type of trigger that fired this run.\"\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RunLifeCycleState","title":"RunLifeCycleState","text":"

        Bases: str, Enum

        • PENDING: The run has been triggered. If there is not already an active run of the same job, the cluster and execution context are being prepared. If there is already an active run of the same job, the run immediately transitions into the SKIPPED state without preparing any resources.
        • RUNNING: The task of this run is being executed.
        • TERMINATING: The task of this run has completed, and the cluster and execution context are being cleaned up.
        • TERMINATED: The task of this run has completed, and the cluster and execution context have been cleaned up. This state is terminal.
        • SKIPPED: This run was aborted because a previous run of the same job was already active. This state is terminal.
        • INTERNAL_ERROR: An exceptional state that indicates a failure in the Jobs service, such as network failure over a long period. If a run on a new cluster ends in the INTERNAL_ERROR state, the Jobs service terminates the cluster as soon as possible. This state is terminal.
        • BLOCKED: The run is blocked on an upstream dependency.
        • WAITING_FOR_RETRY: The run is waiting for a retry.
        Source code in prefect_databricks/models/jobs.py
        class RunLifeCycleState(str, Enum):\n    \"\"\"\n        * `PENDING`: The run has been triggered. If there is not already an active run of the same job, the cluster and execution context are being prepared. If there is already an active run of the same job, the run immediately transitions into the `SKIPPED` state without preparing any resources.\n    * `RUNNING`: The task of this run is being executed.\n    * `TERMINATING`: The task of this run has completed, and the cluster and execution context are being cleaned up.\n    * `TERMINATED`: The task of this run has completed, and the cluster and execution context have been cleaned up. This state is terminal.\n    * `SKIPPED`: This run was aborted because a previous run of the same job was already active. This state is terminal.\n    * `INTERNAL_ERROR`: An exceptional state that indicates a failure in the Jobs service, such as network failure over a long period. If a run on a new cluster ends in the `INTERNAL_ERROR` state, the Jobs service terminates the cluster as soon as possible. This state is terminal.\n    * `BLOCKED`: The run is blocked on an upstream dependency.\n    * `WAITING_FOR_RETRY`: The run is waiting for a retry.\n    \"\"\"\n\n    terminated = \"TERMINATED\"\n    pending = \"PENDING\"\n    running = \"RUNNING\"\n    terminating = \"TERMINATING\"\n    skipped = \"SKIPPED\"\n    internalerror = \"INTERNAL_ERROR\"\n    blocked = \"BLOCKED\"\n    waitingforretry = \"WAITING_FOR_RETRY\"\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RunNowInput","title":"RunNowInput","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class RunNowInput(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    idempotency_token: Optional[str] = Field(\n        None,\n        description=(\n            \"An optional token to guarantee the idempotency of job run requests. If a\"\n            \" run with the provided token already exists, the request does not create a\"\n            \" new run but returns the ID of the existing run instead. If a run with the\"\n            \" provided token is deleted, an error is returned.\\n\\nIf you specify the\"\n            \" idempotency token, upon failure you can retry until the request succeeds.\"\n            \" Databricks guarantees that exactly one run is launched with that\"\n            \" idempotency token.\\n\\nThis token must have at most 64 characters.\\n\\nFor\"\n            \" more information, see [How to ensure idempotency for\"\n            \" jobs](https://kb.databricks.com/jobs/jobs-idempotency.html).\"\n        ),\n        example=\"8f018174-4792-40d5-bcbc-3e6a527352c8\",\n    )\n    job_id: Optional[int] = Field(\n        None, description=\"The ID of the job to be executed\", example=11223344\n    )\n
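        A similar hedged sketch for triggering an existing job: the idempotency token lets a failed request be retried safely, as described by the field above. The values reuse the documented examples, and Pydantic v1 serialization is assumed.

        from prefect_databricks.models.jobs import RunNowInput

        # Hypothetical run-now payload for job 11223344; retrying with the same
        # token guarantees at most one run is launched for this request.
        run_now = RunNowInput(
            job_id=11223344,
            idempotency_token="8f018174-4792-40d5-bcbc-3e6a527352c8",
        )
        print(run_now.dict(exclude_none=True))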
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RunParameters","title":"RunParameters","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class RunParameters(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    dbt_commands: Optional[List] = Field(\n        None,\n        description=(\n            \"An array of commands to execute for jobs with the dbt task, for example\"\n            ' `\"dbt_commands\": [\"dbt deps\", \"dbt seed\", \"dbt run\"]`'\n        ),\n        example=[\"dbt deps\", \"dbt seed\", \"dbt run\"],\n    )\n    jar_params: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"A list of parameters for jobs with Spark JAR tasks, for example\"\n            ' `\"jar_params\": [\"john doe\", \"35\"]`. The parameters are used to invoke the'\n            \" main function of the main class specified in the Spark JAR task. If not\"\n            \" specified upon `run-now`, it defaults to an empty list. jar_params cannot\"\n            \" be specified in conjunction with notebook_params. The JSON representation\"\n            ' of this field (for example `{\"jar_params\":[\"john doe\",\"35\"]}`) cannot'\n            \" exceed 10,000 bytes.\\n\\nUse [Task parameter\"\n            \" variables](https://docs.databricks.com/jobs.html#parameter-variables) to\"\n            \" set parameters containing information about job runs.\"\n        ),\n        example=[\"john\", \"doe\", \"35\"],\n    )\n    notebook_params: Optional[Dict[str, Any]] = Field(\n        None,\n        description=(\n            \"A map from keys to values for jobs with notebook task, for example\"\n            ' `\"notebook_params\": {\"name\": \"john doe\", \"age\": \"35\"}`. The map is passed'\n            \" to the notebook and is accessible through the\"\n            \" [dbutils.widgets.get](https://docs.databricks.com/dev-tools/databricks-utils.html#dbutils-widgets)\"\n            \" function.\\n\\nIf not specified upon `run-now`, the triggered run uses the\"\n            \" job\u2019s base parameters.\\n\\nnotebook_params cannot be specified in\"\n            \" conjunction with jar_params.\\n\\nUse [Task parameter\"\n            \" variables](https://docs.databricks.com/jobs.html#parameter-variables) to\"\n            \" set parameters containing information about job runs.\\n\\nThe JSON\"\n            \" representation of this field (for example\"\n            ' `{\"notebook_params\":{\"name\":\"john doe\",\"age\":\"35\"}}`) cannot exceed'\n            \" 10,000 bytes.\"\n        ),\n        example={\"age\": \"35\", \"name\": \"john doe\"},\n    )\n    pipeline_params: Optional[PipelineParams] = None\n    python_named_params: Optional[Dict[str, Any]] = Field(\n        None,\n        description=(\n            \"A map from keys to values for jobs with Python wheel task, for example\"\n            ' `\"python_named_params\": {\"name\": \"task\", \"data\":'\n            ' \"dbfs:/path/to/data.json\"}`.'\n        ),\n        example={\"data\": \"dbfs:/path/to/data.json\", \"name\": \"task\"},\n    )\n    python_params: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"A list of parameters for jobs with Python tasks, for example\"\n            ' `\"python_params\": [\"john doe\", \"35\"]`. The parameters are passed to'\n            \" Python file as command-line parameters. If specified upon `run-now`, it\"\n            \" would overwrite the parameters specified in job setting. 
The JSON\"\n            ' representation of this field (for example `{\"python_params\":[\"john'\n            ' doe\",\"35\"]}`) cannot exceed 10,000 bytes.\\n\\nUse [Task parameter'\n            \" variables](https://docs.databricks.com/jobs.html#parameter-variables) to\"\n            \" set parameters containing information about job\"\n            \" runs.\\n\\nImportant\\n\\nThese parameters accept only Latin characters\"\n            \" (ASCII character set). Using non-ASCII characters returns an error.\"\n            \" Examples of invalid, non-ASCII characters are Chinese, Japanese kanjis,\"\n            \" and emojis.\"\n        ),\n        example=[\"john doe\", \"35\"],\n    )\n    spark_submit_params: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"A list of parameters for jobs with spark submit task, for example\"\n            ' `\"spark_submit_params\": [\"--class\",'\n            ' \"org.apache.spark.examples.SparkPi\"]`. The parameters are passed to'\n            \" spark-submit script as command-line parameters. If specified upon\"\n            \" `run-now`, it would overwrite the parameters specified in job setting.\"\n            \" The JSON representation of this field (for example\"\n            ' `{\"python_params\":[\"john doe\",\"35\"]}`) cannot exceed 10,000 bytes.\\n\\nUse'\n            \" [Task parameter\"\n            \" variables](https://docs.databricks.com/jobs.html#parameter-variables) to\"\n            \" set parameters containing information about job\"\n            \" runs.\\n\\nImportant\\n\\nThese parameters accept only Latin characters\"\n            \" (ASCII character set). Using non-ASCII characters returns an error.\"\n            \" Examples of invalid, non-ASCII characters are Chinese, Japanese kanjis,\"\n            \" and emojis.\"\n        ),\n        example=[\"--class\", \"org.apache.spark.examples.SparkPi\"],\n    )\n    sql_params: Optional[Dict[str, Any]] = Field(\n        None,\n        description=(\n            'A map from keys to values for SQL tasks, for example `\"sql_params\":'\n            ' {\"name\": \"john doe\", \"age\": \"35\"}`. The SQL alert task does not support'\n            \" custom parameters.\"\n        ),\n        example={\"age\": \"35\", \"name\": \"john doe\"},\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RunResultState","title":"RunResultState","text":"

        Bases: str, Enum

        • SUCCESS: The task completed successfully.
        • FAILED: The task completed with an error.
        • TIMEDOUT: The run was stopped after reaching the timeout.
        • CANCELED: The run was canceled at user request.
        Source code in prefect_databricks/models/jobs.py
        class RunResultState(str, Enum):\n    \"\"\"\n        * `SUCCESS`: The task completed successfully.\n    * `FAILED`: The task completed with an error.\n    * `TIMEDOUT`: The run was stopped after reaching the timeout.\n    * `CANCELED`: The run was canceled at user request.\n    \"\"\"\n\n    success = \"SUCCESS\"\n    failed = \"FAILED\"\n    timedout = \"TIMEDOUT\"\n    canceled = \"CANCELED\"\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RunState","title":"RunState","text":"

        Bases: BaseModel

        See source code for the fields' description.

        The result and lifecycle state of the run.

        Source code in prefect_databricks/models/jobs.py
        class RunState(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n\n    The result and lifecycle state of the run.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    life_cycle_state: Optional[RunLifeCycleState] = Field(\n        None,\n        description=(\n            \"A description of a run\u2019s current location in the run lifecycle. This field\"\n            \" is always available in the response.\"\n        ),\n    )\n    result_state: Optional[RunResultState] = None\n    state_message: Optional[str] = Field(\n        None,\n        description=(\n            \"A descriptive message for the current state. This field is unstructured,\"\n            \" and its exact format is subject to change.\"\n        ),\n        example=\"\",\n    )\n    user_cancelled_or_timedout: Optional[bool] = Field(\n        None,\n        description=(\n            \"Whether a run was canceled manually by a user or by the scheduler because\"\n            \" the run timed out.\"\n        ),\n        example=False,\n    )\n
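        As a hedged example of how the state enums above fit together, the helper below (an illustrative sketch, not part of prefect-databricks) treats TERMINATED, SKIPPED, and INTERNAL_ERROR as the terminal life cycle states and checks for a SUCCESS result.

        from prefect_databricks.models.jobs import (
            RunLifeCycleState,
            RunResultState,
            RunState,
        )

        # Terminal life cycle states, per the RunLifeCycleState descriptions.
        TERMINAL_STATES = {
            RunLifeCycleState.terminated,
            RunLifeCycleState.skipped,
            RunLifeCycleState.internalerror,
        }

        def run_finished_successfully(state: RunState) -> bool:
            """Return True only when the run has ended and reported SUCCESS."""
            return (
                state.life_cycle_state in TERMINAL_STATES
                and state.result_state == RunResultState.success
            )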
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RunSubmitSettings","title":"RunSubmitSettings","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class RunSubmitSettings(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    git_source: Optional[GitSource1] = Field(\n        None,\n        description=(\n            \"This functionality is in Public Preview.\\n\\nAn optional specification for\"\n            \" a remote repository containing the notebooks used by this job's notebook\"\n            \" tasks.\"\n        ),\n        example={\n            \"git_branch\": \"main\",\n            \"git_provider\": \"gitHub\",\n            \"git_url\": \"https://github.com/databricks/databricks-cli\",\n        },\n    )\n    idempotency_token: Optional[str] = Field(\n        None,\n        description=(\n            \"An optional token that can be used to guarantee the idempotency of job run\"\n            \" requests. If a run with the provided token already exists, the request\"\n            \" does not create a new run but returns the ID of the existing run instead.\"\n            \" If a run with the provided token is deleted, an error is returned.\\n\\nIf\"\n            \" you specify the idempotency token, upon failure you can retry until the\"\n            \" request succeeds. Databricks guarantees that exactly one run is launched\"\n            \" with that idempotency token.\\n\\nThis token must have at most 64\"\n            \" characters.\\n\\nFor more information, see [How to ensure idempotency for\"\n            \" jobs](https://kb.databricks.com/jobs/jobs-idempotency.html).\"\n        ),\n        example=\"8f018174-4792-40d5-bcbc-3e6a527352c8\",\n    )\n    run_name: Optional[str] = Field(\n        None,\n        description=\"An optional name for the run. The default value is `Untitled`.\",\n        example=\"A multitask job run\",\n    )\n    tasks: Optional[List[RunSubmitTaskSettings]] = Field(\n        None,\n        example=[\n            {\n                \"depends_on\": [],\n                \"description\": \"Extracts session data from events\",\n                \"existing_cluster_id\": \"0923-164208-meows279\",\n                \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/Sessionize.jar\"}],\n                \"spark_jar_task\": {\n                    \"main_class_name\": \"com.databricks.Sessionize\",\n                    \"parameters\": [\"--data\", \"dbfs:/path/to/data.json\"],\n                },\n                \"task_key\": \"Sessionize\",\n                \"timeout_seconds\": 86400,\n            },\n            {\n                \"depends_on\": [],\n                \"description\": \"Ingests order data\",\n                \"existing_cluster_id\": \"0923-164208-meows279\",\n                \"libraries\": [{\"jar\": \"dbfs:/mnt/databricks/OrderIngest.jar\"}],\n                \"spark_jar_task\": {\n                    \"main_class_name\": \"com.databricks.OrdersIngest\",\n                    \"parameters\": [\"--data\", \"dbfs:/path/to/order-data.json\"],\n                },\n                \"task_key\": \"Orders_Ingest\",\n                \"timeout_seconds\": 86400,\n            },\n            {\n                \"depends_on\": [\n                    {\"task_key\": \"Orders_Ingest\"},\n                    {\"task_key\": \"Sessionize\"},\n                ],\n                \"description\": \"Matches orders with user sessions\",\n                \"new_cluster\": {\n                    \"autoscale\": {\"max_workers\": 16, \"min_workers\": 2},\n                    
\"aws_attributes\": {\"availability\": \"SPOT\", \"zone_id\": \"us-west-2a\"},\n                    \"node_type_id\": \"i3.xlarge\",\n                    \"spark_conf\": {\"spark.speculation\": True},\n                    \"spark_version\": \"7.3.x-scala2.12\",\n                },\n                \"notebook_task\": {\n                    \"base_parameters\": {\"age\": \"35\", \"name\": \"John Doe\"},\n                    \"notebook_path\": \"/Users/user.name@databricks.com/Match\",\n                    \"source\": \"WORKSPACE\",\n                },\n                \"task_key\": \"Match\",\n                \"timeout_seconds\": 86400,\n            },\n        ],\n        max_items=100,\n    )\n    timeout_seconds: Optional[int] = Field(\n        None,\n        description=(\n            \"An optional timeout applied to each run of this job. The default behavior\"\n            \" is to have no timeout.\"\n        ),\n        example=86400,\n    )\n    webhook_notifications: Optional[WebhookNotifications] = Field(\n        None,\n        description=(\n            \"A collection of system notification IDs to notify when runs of this job\"\n            \" begin or complete. The default behavior is to not send any system\"\n            \" notifications.\"\n        ),\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RunSubmitTaskSettings","title":"RunSubmitTaskSettings","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class RunSubmitTaskSettings(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    dbt_task: Optional[DbtTask] = Field(\n        None,\n        description=(\n            \"If dbt_task, indicates that this must execute a dbt task. It requires both\"\n            \" Databricks SQL and the ability to use a serverless or a pro SQL\"\n            \" warehouse.\"\n        ),\n    )\n    depends_on: Optional[TaskDependencies] = None\n    existing_cluster_id: Optional[str] = Field(\n        None,\n        description=(\n            \"If existing_cluster_id, the ID of an existing cluster that is used for all\"\n            \" runs of this task. When running tasks on an existing cluster, you may\"\n            \" need to manually restart the cluster if it stops responding. We suggest\"\n            \" running jobs on new clusters for greater reliability.\"\n        ),\n        example=\"0923-164208-meows279\",\n    )\n    libraries: Optional[List[Library]] = Field(\n        None,\n        description=(\n            \"An optional list of libraries to be installed on the cluster that executes\"\n            \" the task. The default value is an empty list.\"\n        ),\n    )\n    new_cluster: Optional[NewCluster] = Field(\n        None,\n        description=(\n            \"If new_cluster, a description of a cluster that is created for each run.\"\n        ),\n    )\n    notebook_task: Optional[NotebookTask] = Field(\n        None,\n        description=(\n            \"If notebook_task, indicates that this task must run a notebook. This field\"\n            \" may not be specified in conjunction with spark_jar_task.\"\n        ),\n    )\n    pipeline_task: Optional[PipelineTask] = Field(\n        None,\n        description=(\n            \"If pipeline_task, indicates that this task must execute a Pipeline.\"\n        ),\n    )\n    python_wheel_task: Optional[PythonWheelTask] = Field(\n        None,\n        description=(\n            \"If python_wheel_task, indicates that this job must execute a PythonWheel.\"\n        ),\n    )\n    spark_jar_task: Optional[SparkJarTask] = Field(\n        None, description=\"If spark_jar_task, indicates that this task must run a JAR.\"\n    )\n    spark_python_task: Optional[SparkPythonTask] = Field(\n        None,\n        description=(\n            \"If spark_python_task, indicates that this task must run a Python file.\"\n        ),\n    )\n    spark_submit_task: Optional[SparkSubmitTask] = Field(\n        None,\n        description=(\n            \"If spark_submit_task, indicates that this task must be launched by the\"\n            \" spark submit script.\"\n        ),\n    )\n    sql_task: Optional[SqlTask] = Field(\n        None,\n        description=(\n            \"If sql_task, indicates that this job must execute a SQL task. It requires\"\n            \" both Databricks SQL and a serverless or a pro SQL warehouse.\"\n        ),\n    )\n    task_key: TaskKey\n    timeout_seconds: Optional[int] = Field(\n        None,\n        description=(\n            \"An optional timeout applied to each run of this job task. The default\"\n            \" behavior is to have no timeout.\"\n        ),\n        example=86400,\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RunTask","title":"RunTask","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class RunTask(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    attempt_number: Optional[int] = Field(\n        None,\n        description=(\n            \"The sequence number of this run attempt for a triggered job run. The\"\n            \" initial attempt of a run has an attempt_number of 0\\\\. If the initial run\"\n            \" attempt fails, and the job has a retry policy (`max_retries` \\\\> 0),\"\n            \" subsequent runs are created with an `original_attempt_run_id` of the\"\n            \" original attempt\u2019s ID and an incrementing `attempt_number`. Runs are\"\n            \" retried only until they succeed, and the maximum `attempt_number` is the\"\n            \" same as the `max_retries` value for the job.\"\n        ),\n        example=0,\n    )\n    cleanup_duration: Optional[int] = Field(\n        None,\n        description=(\n            \"The time in milliseconds it took to terminate the cluster and clean up any\"\n            \" associated artifacts. The total duration of the run is the sum of the\"\n            \" setup_duration, the execution_duration, and the cleanup_duration.\"\n        ),\n        example=0,\n    )\n    cluster_instance: Optional[ClusterInstance] = Field(\n        None,\n        description=(\n            \"The cluster used for this run. If the run is specified to use a new\"\n            \" cluster, this field is set once the Jobs service has requested a cluster\"\n            \" for the run.\"\n        ),\n    )\n    dbt_task: Optional[DbtTask] = Field(\n        None,\n        description=(\n            \"If dbt_task, indicates that this must execute a dbt task. It requires both\"\n            \" Databricks SQL and the ability to use a serverless or a pro SQL\"\n            \" warehouse.\"\n        ),\n    )\n    depends_on: Optional[TaskDependencies] = None\n    description: Optional[TaskDescription] = None\n    end_time: Optional[int] = Field(\n        None,\n        description=(\n            \"The time at which this run ended in epoch milliseconds (milliseconds since\"\n            \" 1/1/1970 UTC). This field is set to 0 if the job is still running.\"\n        ),\n        example=1625060863413,\n    )\n    execution_duration: Optional[int] = Field(\n        None,\n        description=(\n            \"The time in milliseconds it took to execute the commands in the JAR or\"\n            \" notebook until they completed, failed, timed out, were cancelled, or\"\n            \" encountered an unexpected error.\"\n        ),\n        example=0,\n    )\n    existing_cluster_id: Optional[str] = Field(\n        None,\n        description=(\n            \"If existing_cluster_id, the ID of an existing cluster that is used for all\"\n            \" runs of this job. When running jobs on an existing cluster, you may need\"\n            \" to manually restart the cluster if it stops responding. 
We suggest\"\n            \" running jobs on new clusters for greater reliability.\"\n        ),\n    )\n    git_source: Optional[GitSource1] = Field(\n        None,\n        description=(\n            \"This functionality is in Public Preview.\\n\\nAn optional specification for\"\n            \" a remote repository containing the notebooks used by this job's notebook\"\n            \" tasks.\"\n        ),\n        example={\n            \"git_branch\": \"main\",\n            \"git_provider\": \"gitHub\",\n            \"git_url\": \"https://github.com/databricks/databricks-cli\",\n        },\n    )\n    libraries: Optional[List[Library]] = Field(\n        None,\n        description=(\n            \"An optional list of libraries to be installed on the cluster that executes\"\n            \" the job. The default value is an empty list.\"\n        ),\n    )\n    new_cluster: Optional[NewCluster] = Field(\n        None,\n        description=(\n            \"If new_cluster, a description of a cluster that is created for each run.\"\n        ),\n    )\n    notebook_task: Optional[NotebookTask] = Field(\n        None,\n        description=(\n            \"If notebook_task, indicates that this job must run a notebook. This field\"\n            \" may not be specified in conjunction with spark_jar_task.\"\n        ),\n    )\n    pipeline_task: Optional[PipelineTask] = Field(\n        None,\n        description=(\n            \"If pipeline_task, indicates that this job must execute a Pipeline.\"\n        ),\n    )\n    python_wheel_task: Optional[PythonWheelTask] = Field(\n        None,\n        description=(\n            \"If python_wheel_task, indicates that this job must execute a PythonWheel.\"\n        ),\n    )\n    run_id: Optional[int] = Field(\n        None, description=\"The ID of the task run.\", example=99887766\n    )\n    setup_duration: Optional[int] = Field(\n        None,\n        description=(\n            \"The time it took to set up the cluster in milliseconds. For runs that run\"\n            \" on new clusters this is the cluster creation time, for runs that run on\"\n            \" existing clusters this time should be very short.\"\n        ),\n        example=0,\n    )\n    spark_jar_task: Optional[SparkJarTask] = Field(\n        None, description=\"If spark_jar_task, indicates that this job must run a JAR.\"\n    )\n    spark_python_task: Optional[SparkPythonTask] = Field(\n        None,\n        description=(\n            \"If spark_python_task, indicates that this job must run a Python file.\"\n        ),\n    )\n    spark_submit_task: Optional[SparkSubmitTask] = Field(\n        None,\n        description=(\n            \"If spark_submit_task, indicates that this job must be launched by the\"\n            \" spark submit script.\"\n        ),\n    )\n    sql_task: Optional[SqlTask] = Field(\n        None,\n        description=(\n            \"If sql_task, indicates that this job must execute a SQL task. It requires\"\n            \" both Databricks SQL and a serverless or a pro SQL warehouse.\"\n        ),\n    )\n    start_time: Optional[int] = Field(\n        None,\n        description=(\n            \"The time at which this run was started in epoch milliseconds (milliseconds\"\n            \" since 1/1/1970 UTC). 
This may not be the time when the job task starts\"\n            \" executing, for example, if the job is scheduled to run on a new cluster,\"\n            \" this is the time the cluster creation call is issued.\"\n        ),\n        example=1625060460483,\n    )\n    state: Optional[RunState] = Field(\n        None, description=\"The result and lifecycle states of the run.\"\n    )\n    task_key: Optional[TaskKey] = None\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.RunType","title":"RunType","text":"

        Bases: str, Enum

        The type of the run.
        • JOB_RUN - Normal job run. A run created with Run now.
        • WORKFLOW_RUN - Workflow run. A run created with dbutils.notebook.run.
        • SUBMIT_RUN - Submit run. A run created with Run Submit.
        Source code in prefect_databricks/models/jobs.py
        class RunType(str, Enum):\n    \"\"\"\n        The type of the run.\n    * `JOB_RUN` \\- Normal job run. A run created with [Run now](https://docs.databricks.com/dev-tools/api/latest/jobs.html#operation/JobsRunNow).\n    * `WORKFLOW_RUN` \\- Workflow run. A run created with [dbutils.notebook.run](https://docs.databricks.com/dev-tools/databricks-utils.html#dbutils-workflow).\n    * `SUBMIT_RUN` \\- Submit run. A run created with [Run Submit](https://docs.databricks.com/dev-tools/api/latest/jobs.html#operation/JobsRunsSubmit).\n    \"\"\"\n\n    jobrun = \"JOB_RUN\"\n    workflowrun = \"WORKFLOW_RUN\"\n    submitrun = \"SUBMIT_RUN\"\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.S3StorageInfo","title":"S3StorageInfo","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class S3StorageInfo(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    canned_acl: Optional[str] = Field(\n        None,\n        description=(\n            \"(Optional) Set canned access control list. For example:\"\n            \" `bucket-owner-full-control`. If canned_acl is set, the cluster instance\"\n            \" profile must have `s3:PutObjectAcl` permission on the destination bucket\"\n            \" and prefix. The full list of possible canned ACLs can be found at\"\n            \" <https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl>.\"\n            \" By default only the object owner gets full control. If you are using\"\n            \" cross account role for writing data, you may want to set\"\n            \" `bucket-owner-full-control` to make bucket owner able to read the logs.\"\n        ),\n    )\n    destination: Optional[str] = Field(\n        None,\n        description=(\n            \"S3 destination. For example: `s3://my-bucket/some-prefix` You must\"\n            \" configure the cluster with an instance profile and the instance profile\"\n            \" must have write access to the destination. You _cannot_ use AWS keys.\"\n        ),\n    )\n    enable_encryption: Optional[bool] = Field(\n        None, description=\"(Optional)Enable server side encryption, `false` by default.\"\n    )\n    encryption_type: Optional[str] = Field(\n        None,\n        description=(\n            \"(Optional) The encryption type, it could be `sse-s3` or `sse-kms`. It is\"\n            \" used only when encryption is enabled and the default type is `sse-s3`.\"\n        ),\n    )\n    endpoint: Optional[str] = Field(\n        None,\n        description=(\n            \"S3 endpoint. For example: `https://s3-us-west-2.amazonaws.com`. Either\"\n            \" region or endpoint must be set. If both are set, endpoint is used.\"\n        ),\n    )\n    kms_key: Optional[str] = Field(\n        None,\n        description=(\n            \"(Optional) KMS key used if encryption is enabled and encryption type is\"\n            \" set to `sse-kms`.\"\n        ),\n    )\n    region: Optional[str] = Field(\n        None,\n        description=(\n            \"S3 region. For example: `us-west-2`. Either region or endpoint must be\"\n            \" set. If both are set, endpoint is used.\"\n        ),\n    )\n
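        For context, a hedged sketch of an S3 log destination built from this model; the bucket, prefix, and region are placeholder values, and the comments restate the field descriptions above rather than adding new requirements.

        from prefect_databricks.models.jobs import S3StorageInfo

        # Either region or endpoint must be set; endpoint wins if both are given.
        # canned_acl requires s3:PutObjectAcl on the destination for the
        # cluster's instance profile.
        log_destination = S3StorageInfo(
            destination="s3://my-bucket/some-prefix",
            region="us-west-2",
            canned_acl="bucket-owner-full-control",
        )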
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ServicePrincipalName","title":"ServicePrincipalName","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class ServicePrincipalName(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    __root__: str = Field(\n        ...,\n        description=\"Name of an Azure service principal.\",\n        example=\"9f0621ee-b52b-11ea-b3de-0242ac130004\",\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SparkConfPair","title":"SparkConfPair","text":"

        Bases: BaseModel

        See source code for the fields' description.

        An arbitrary object where the object key is a configuration property name and the value is a configuration property value.

        Source code in prefect_databricks/models/jobs.py
        class SparkConfPair(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n\n    An arbitrary object where the object key is a configuration property name and the value is a configuration property value.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n\n        allow_mutation = False\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SparkEnvPair","title":"SparkEnvPair","text":"

        Bases: BaseModel

        See source code for the fields' description.

        An arbitrary object where the object key is an environment variable name and the value is an environment variable value.

        Source code in prefect_databricks/models/jobs.py
        class SparkEnvPair(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n\n    An arbitrary object where the object key is an environment variable name and the value is an environment variable value.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n\n        allow_mutation = False\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SparkJarTask","title":"SparkJarTask","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class SparkJarTask(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    jar_uri: Optional[str] = Field(\n        None,\n        deprecated=True,\n        description=(\n            \"Deprecated since 04/2016\\\\. Provide a `jar` through the `libraries` field\"\n            \" instead. For an example, see\"\n            \" [Create](https://docs.databricks.com/dev-tools/api/latest/jobs.html#operation/JobsCreate).\"\n        ),\n    )\n    main_class_name: Optional[str] = Field(\n        None,\n        description=(\n            \"The full name of the class containing the main method to be executed. This\"\n            \" class must be contained in a JAR provided as a library.\\n\\nThe code must\"\n            \" use `SparkContext.getOrCreate` to obtain a Spark context; otherwise, runs\"\n            \" of the job fail.\"\n        ),\n        example=\"com.databricks.ComputeModels\",\n    )\n    parameters: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"Parameters passed to the main method.\\n\\nUse [Task parameter\"\n            \" variables](https://docs.databricks.com/jobs.html#parameter-variables) to\"\n            \" set parameters containing information about job runs.\"\n        ),\n        example=[\"--data\", \"dbfs:/path/to/data.json\"],\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SparkNode","title":"SparkNode","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class SparkNode(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    host_private_ip: Optional[str] = Field(\n        None, description=\"The private IP address of the host instance.\"\n    )\n    instance_id: Optional[str] = Field(\n        None,\n        description=(\n            \"Globally unique identifier for the host instance from the cloud provider.\"\n        ),\n    )\n    node_aws_attributes: Optional[SparkNodeAwsAttributes] = Field(\n        None, description=\"Attributes specific to AWS for a Spark node.\"\n    )\n    node_id: Optional[str] = Field(\n        None, description=\"Globally unique identifier for this node.\"\n    )\n    private_ip: Optional[str] = Field(\n        None,\n        description=(\n            \"Private IP address (typically a 10.x.x.x address) of the Spark node. This\"\n            \" is different from the private IP address of the host instance.\"\n        ),\n    )\n    public_dns: Optional[str] = Field(\n        None,\n        description=(\n            \"Public DNS address of this node. This address can be used to access the\"\n            \" Spark JDBC server on the driver node. To communicate with the JDBC\"\n            \" server, traffic must be manually authorized by adding security group\"\n            \" rules to the \u201cworker-unmanaged\u201d security group via the AWS console.\"\n        ),\n    )\n    start_timestamp: Optional[int] = Field(\n        None,\n        description=\"The timestamp (in millisecond) when the Spark node is launched.\",\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SparkNodeAwsAttributes","title":"SparkNodeAwsAttributes","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class SparkNodeAwsAttributes(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    is_spot: Optional[bool] = Field(\n        None, description=\"Whether this node is on an Amazon spot instance.\"\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SparkPythonTask","title":"SparkPythonTask","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class SparkPythonTask(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    parameters: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"Command line parameters passed to the Python file.\\n\\nUse [Task parameter\"\n            \" variables](https://docs.databricks.com/jobs.html#parameter-variables) to\"\n            \" set parameters containing information about job runs.\"\n        ),\n        example=[\"--data\", \"dbfs:/path/to/data.json\"],\n    )\n    python_file: str = Field(\n        ...,\n        description=(\n            \"The Python file to be executed. Cloud file URIs (such as dbfs:/, s3:/,\"\n            \" adls:/, gcs:/) and workspace paths are supported. For python files stored\"\n            \" in the Databricks workspace, the path must be absolute and begin with\"\n            \" `/Repos`. This field is required.\"\n        ),\n        example=\"dbfs:/path/to/file.py\",\n    )\n
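        A final hedged sketch: defining a Spark Python task from this model. The DBFS path and parameters reuse the documented example values and are placeholders, not a real workspace file.

        from prefect_databricks.models.jobs import SparkPythonTask

        # python_file is the only required field; parameters are passed to the
        # script as command-line arguments.
        task = SparkPythonTask(
            python_file="dbfs:/path/to/file.py",
            parameters=["--data", "dbfs:/path/to/data.json"],
        )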
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SparkSubmitTask","title":"SparkSubmitTask","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class SparkSubmitTask(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    parameters: Optional[List[str]] = Field(\n        None,\n        description=(\n            \"Command-line parameters passed to spark submit.\\n\\nUse [Task parameter\"\n            \" variables](https://docs.databricks.com/jobs.html#parameter-variables) to\"\n            \" set parameters containing information about job runs.\"\n        ),\n        example=[\n            \"--class\",\n            \"org.apache.spark.examples.SparkPi\",\n            \"dbfs:/path/to/examples.jar\",\n            \"10\",\n        ],\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SparkVersion","title":"SparkVersion","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class SparkVersion(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    key: Optional[str] = Field(\n        None,\n        description=(\n            \"[Databricks Runtime\"\n            \" version](https://docs.databricks.com/dev-tools/api/latest/index.html#programmatic-version)\"\n            \" key, for example `7.3.x-scala2.12`. The value that must be provided as\"\n            \" the `spark_version` when creating a new cluster. The exact runtime\"\n            \" version may change over time for a \u201cwildcard\u201d version (that is,\"\n            \" `7.3.x-scala2.12` is a \u201cwildcard\u201d version) with minor bug fixes.\"\n        ),\n    )\n    name: Optional[str] = Field(\n        None,\n        description=(\n            \"A descriptive name for the runtime version, for example \u201cDatabricks\"\n            \" Runtime 7.3 LTS\u201d.\"\n        ),\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SqlAlertOutput","title":"SqlAlertOutput","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class SqlAlertOutput(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    output_link: Optional[str] = Field(\n        None, description=\"The link to find the output results.\"\n    )\n    query_text: Optional[str] = Field(\n        None,\n        description=(\n            \"The text of the SQL query. Can Run permission of the SQL query associated\"\n            \" with the SQL alert is required to view this field.\"\n        ),\n    )\n    sql_statements: Optional[SqlStatementOutput] = Field(\n        None, description=\"Information about SQL statements executed in the run.\"\n    )\n    warehouse_id: Optional[str] = Field(\n        None, description=\"The canonical identifier of the SQL warehouse.\"\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SqlDashboardOutput","title":"SqlDashboardOutput","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class SqlDashboardOutput(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    widgets: Optional[SqlDashboardWidgetOutput] = Field(\n        None,\n        description=(\n            \"Widgets executed in the run. Only SQL query based widgets are listed.\"\n        ),\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SqlDashboardWidgetOutput","title":"SqlDashboardWidgetOutput","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class SqlDashboardWidgetOutput(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    end_time: Optional[int] = Field(\n        None,\n        description=(\n            \"Time (in epoch milliseconds) when execution of the SQL widget ends.\"\n        ),\n    )\n    error: Optional[SqlOutputError] = Field(\n        None, description=\"The information about the error when execution fails.\"\n    )\n    output_link: Optional[str] = Field(\n        None, description=\"The link to find the output results.\"\n    )\n    start_time: Optional[int] = Field(\n        None,\n        description=(\n            \"Time (in epoch milliseconds) when execution of the SQL widget starts.\"\n        ),\n    )\n    status: Optional[\n        Literal[\"PENDING\", \"RUNNING\", \"SUCCESS\", \"FAILED\", \"CANCELLED\"]\n    ] = Field(None, description=\"The execution status of the SQL widget.\")\n    widget_id: Optional[str] = Field(\n        None, description=\"The canonical identifier of the SQL widget.\"\n    )\n    widget_title: Optional[str] = Field(\n        None, description=\"The title of the SQL widget.\"\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SqlOutput","title":"SqlOutput","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class SqlOutput(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    alert_output: Optional[SqlAlertOutput] = Field(\n        None, description=\"The output of a SQL alert task, if available.\"\n    )\n    dashboard_output: Optional[SqlDashboardOutput] = Field(\n        None, description=\"The output of a SQL dashboard task, if available.\"\n    )\n    query_output: Optional[SqlQueryOutput] = Field(\n        None, description=\"The output of a SQL query task, if available.\"\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SqlOutputError","title":"SqlOutputError","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class SqlOutputError(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    message: Optional[str] = Field(\n        None, description=\"The error message when execution fails.\"\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SqlQueryOutput","title":"SqlQueryOutput","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class SqlQueryOutput(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    output_link: Optional[str] = Field(\n        None, description=\"The link to find the output results.\"\n    )\n    query_text: Optional[str] = Field(\n        None,\n        description=(\n            \"The text of the SQL query. Can Run permission of the SQL query is required\"\n            \" to view this field.\"\n        ),\n    )\n    sql_statements: Optional[SqlStatementOutput] = Field(\n        None, description=\"Information about SQL statements executed in the run.\"\n    )\n    warehouse_id: Optional[str] = Field(\n        None, description=\"The canonical identifier of the SQL warehouse.\"\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SqlStatementOutput","title":"SqlStatementOutput","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class SqlStatementOutput(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    lookup_key: Optional[str] = Field(\n        None, description=\"A key that can be used to look up query details.\"\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SqlTask","title":"SqlTask","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class SqlTask(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    alert: Optional[SqlTaskAlert] = Field(\n        None, description=\"If alert, indicates that this job must refresh a SQL alert.\"\n    )\n    dashboard: Optional[SqlTaskDashboard] = Field(\n        None,\n        description=(\n            \"If dashboard, indicates that this job must refresh a SQL dashboard.\"\n        ),\n    )\n    parameters: Optional[Dict[str, Any]] = Field(\n        None,\n        description=(\n            \"Parameters to be used for each run of this job. The SQL alert task does\"\n            \" not support custom parameters.\"\n        ),\n        example={\"age\": 35, \"name\": \"John Doe\"},\n    )\n    query: Optional[SqlTaskQuery] = Field(\n        None, description=\"If query, indicates that this job must execute a SQL query.\"\n    )\n    warehouse_id: str = Field(\n        ...,\n        description=(\n            \"The canonical identifier of the SQL warehouse. Only serverless and pro SQL\"\n            \" warehouses are supported.\"\n        ),\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SqlTaskAlert","title":"SqlTaskAlert","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class SqlTaskAlert(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    alert_id: str = Field(..., description=\"The canonical identifier of the SQL alert.\")\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SqlTaskDashboard","title":"SqlTaskDashboard","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class SqlTaskDashboard(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    dashboard_id: str = Field(\n        ..., description=\"The canonical identifier of the SQL dashboard.\"\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.SqlTaskQuery","title":"SqlTaskQuery","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class SqlTaskQuery(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    query_id: str = Field(..., description=\"The canonical identifier of the SQL query.\")\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.TaskDependencies","title":"TaskDependencies","text":"

        Bases: BaseModel

        See source code for the fields' description.

        An optional array of objects specifying the dependency graph of the task. All tasks specified in this field must complete successfully before executing this task.

        The key is task_key, and the value is the name assigned to the dependent task. This field is required when a job consists of more than one task.

        Source code in prefect_databricks/models/jobs.py
        class TaskDependencies(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n\n        An optional array of objects specifying the dependency graph of the task. All tasks specified in this field must complete successfully before executing this task.\n    The key is `task_key`, and the value is the name assigned to the dependent task.\n    This field is required when a job consists of more than one task.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    __root__: List[TaskDependency] = Field(\n        ...,\n        description=(\n            \"An optional array of objects specifying the dependency graph of the task.\"\n            \" All tasks specified in this field must complete successfully before\"\n            \" executing this task.\\nThe key is `task_key`, and the value is the name\"\n            \" assigned to the dependent task.\\nThis field is required when a job\"\n            \" consists of more than one task.\"\n        ),\n        example=[{\"task_key\": \"Previous_Task_Key\"}, {\"task_key\": \"Other_Task_Key\"}],\n    )\n
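        As a rough illustration of the shape this model expects (a minimal sketch, not taken from the library docs; it assumes prefect-databricks is installed and uses only the TaskDependencies and TaskDependency models shown above, with the task keys from the field example):

        from prefect_databricks.models.jobs import TaskDependencies, TaskDependency

        # Declare that the current task depends on two other tasks in the same job;
        # each entry names the task_key of a task that must complete successfully first.
        depends_on = TaskDependencies(
            __root__=[
                TaskDependency(task_key="Previous_Task_Key"),
                TaskDependency(task_key="Other_Task_Key"),
            ]
        )

        # Pydantic models with a custom root type can also be built from plain dicts:
        same_dependencies = TaskDependencies.parse_obj(
            [{"task_key": "Previous_Task_Key"}, {"task_key": "Other_Task_Key"}]
        )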
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.TaskDependency","title":"TaskDependency","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class TaskDependency(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    task_key: Optional[str] = None\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.TaskDescription","title":"TaskDescription","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class TaskDescription(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    __root__: str = Field(\n        ...,\n        description=(\n            \"An optional description for this task.\\nThe maximum length is 4096 bytes.\"\n        ),\n        example=\"This is the description for this task.\",\n        max_length=4096,\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.TaskKey","title":"TaskKey","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class TaskKey(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    __root__: str = Field(\n        ...,\n        description=(\n            \"A unique name for the task. This field is used to refer to this task from\"\n            \" other tasks.\\nThis field is required and must be unique within its parent\"\n            \" job.\\nOn Update or Reset, this field is used to reference the tasks to be\"\n            \" updated or reset.\\nThe maximum length is 100 characters.\"\n        ),\n        example=\"Task_Key\",\n        max_length=100,\n        min_length=1,\n        regex=\"^[\\\\w\\\\-]+$\",\n    )\n
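        As a small illustration (not from the library docs) of how the constraints described above are enforced by this root model, assuming prefect-databricks is installed:

        from prefect_databricks.models.jobs import TaskKey

        # "Task_Key" satisfies the documented constraints:
        # 1-100 characters matching ^[\w\-]+$ (letters, digits, underscores, hyphens).
        key = TaskKey(__root__="Task_Key")

        # A value with spaces or other characters, such as "not a valid key",
        # would raise a pydantic ValidationError instead.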
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.TerminationCode","title":"TerminationCode","text":"

        Bases: str, Enum

        • USER_REQUEST: A user terminated the cluster directly. Parameters should include a `username` field that indicates the specific user who terminated the cluster.
        • JOB_FINISHED: The cluster was launched by a job, and terminated when the job completed.
        • INACTIVITY: The cluster was terminated since it was idle.
        • CLOUD_PROVIDER_SHUTDOWN: The instance that hosted the Spark driver was terminated by the cloud provider. In AWS, for example, AWS may retire instances and directly shut them down. Parameters should include an aws_instance_state_reason field indicating the AWS-provided reason why the instance was terminated.
        • COMMUNICATION_LOST: Databricks lost connection to services on the driver instance. For example, this can happen when problems arise in cloud networking infrastructure, or when the instance itself becomes unhealthy.
        • CLOUD_PROVIDER_LAUNCH_FAILURE: Databricks experienced a cloud provider failure when requesting instances to launch clusters. For example, AWS limits the number of running instances and EBS volumes. If you ask Databricks to launch a cluster that requires instances or EBS volumes that exceed your AWS limit, the cluster fails with this status code. Parameters should include one of aws_api_error_code, aws_instance_state_reason, or aws_spot_request_status to indicate the AWS-provided reason why Databricks could not request the required instances for the cluster.
        • SPARK_STARTUP_FAILURE: The cluster failed to initialize. Possible reasons may include failure to create the environment for Spark or issues launching the Spark master and worker processes.
        • INVALID_ARGUMENT: Cannot launch the cluster because the user specified an invalid argument. For example, the user might specify an invalid runtime version for the cluster.
        • UNEXPECTED_LAUNCH_FAILURE: While launching this cluster, Databricks failed to complete critical setup steps, terminating the cluster.
        • INTERNAL_ERROR: Databricks encountered an unexpected error that forced the running cluster to be terminated. Contact Databricks support for additional details.
        • SPARK_ERROR: The Spark driver failed to start. Possible reasons may include incompatible libraries and initialization scripts that corrupted the Spark container.
        • METASTORE_COMPONENT_UNHEALTHY: The cluster failed to start because the external metastore could not be reached. Refer to Troubleshooting.
        • DBFS_COMPONENT_UNHEALTHY: The cluster failed to start because Databricks File System (DBFS) could not be reached.
        • DRIVER_UNREACHABLE: Databricks was not able to access the Spark driver, because it was not reachable.
        • DRIVER_UNRESPONSIVE: Databricks was not able to access the Spark driver, because it was unresponsive.
        • INSTANCE_UNREACHABLE: Databricks was not able to access instances in order to start the cluster. This can be a transient networking issue. If the problem persists, this usually indicates a networking environment misconfiguration.
        • CONTAINER_LAUNCH_FAILURE: Databricks was unable to launch containers on worker nodes for the cluster. Have your admin check your network configuration.
        • INSTANCE_POOL_CLUSTER_FAILURE: Pool backed cluster specific failure. Refer to Pools for details.
        • REQUEST_REJECTED: Databricks cannot handle the request at this moment. Try again later and contact Databricks if the problem persists.
        • INIT_SCRIPT_FAILURE: Databricks cannot load and run a cluster-scoped init script on one of the cluster's nodes, or the init script terminates with a non-zero exit code. Refer to Init script logs.
        • TRIAL_EXPIRED: The Databricks trial subscription expired.
        Source code in prefect_databricks/models/jobs.py
        class TerminationCode(str, Enum):\n    \"\"\"\n        * USER_REQUEST: A user terminated the cluster directly. Parameters should include a `username` field that indicates the specific user who terminated the cluster.\n    * JOB_FINISHED: The cluster was launched by a job, and terminated when the job completed.\n    * INACTIVITY: The cluster was terminated since it was idle.\n    * CLOUD_PROVIDER_SHUTDOWN: The instance that hosted the Spark driver was terminated by the cloud provider. In AWS, for example, AWS may retire instances and directly shut them down. Parameters should include an `aws_instance_state_reason` field indicating the AWS-provided reason why the instance was terminated.\n    * COMMUNICATION_LOST: Databricks lost connection to services on the driver instance. For example, this can happen when problems arise in cloud networking infrastructure, or when the instance itself becomes unhealthy.\n    * CLOUD_PROVIDER_LAUNCH_FAILURE: Databricks experienced a cloud provider failure when requesting instances to launch clusters. For example, AWS limits the number of running instances and EBS volumes. If you ask Databricks to launch a cluster that requires instances or EBS volumes that exceed your AWS limit, the cluster fails with this status code. Parameters should include one of `aws_api_error_code`, `aws_instance_state_reason`, or `aws_spot_request_status` to indicate the AWS-provided reason why Databricks could not request the required instances for the cluster.\n    * SPARK_STARTUP_FAILURE: The cluster failed to initialize. Possible reasons may include failure to create the environment for Spark or issues launching the Spark master and worker processes.\n    * INVALID_ARGUMENT: Cannot launch the cluster because the user specified an invalid argument. For example, the user might specify an invalid runtime version for the cluster.\n    * UNEXPECTED_LAUNCH_FAILURE: While launching this cluster, Databricks failed to complete critical setup steps, terminating the cluster.\n    * INTERNAL_ERROR: Databricks encountered an unexpected error that forced the running cluster to be terminated. Contact Databricks support for additional details.\n    * SPARK_ERROR: The Spark driver failed to start. Possible reasons may include incompatible libraries and initialization scripts that corrupted the Spark container.\n    * METASTORE_COMPONENT_UNHEALTHY: The cluster failed to start because the external metastore could not be reached. Refer to [Troubleshooting](https://docs.databricks.com/data/metastores/external-hive-metastore.html#troubleshooting).\n    * DBFS_COMPONENT_UNHEALTHY: The cluster failed to start because Databricks File System (DBFS) could not be reached.\n    * DRIVER_UNREACHABLE: Databricks was not able to access the Spark driver, because it was not reachable.\n    * DRIVER_UNRESPONSIVE: Databricks was not able to access the Spark driver, because it was unresponsive.\n    * INSTANCE_UNREACHABLE: Databricks was not able to access instances in order to start the cluster. This can be a transient networking issue. If the problem persists, this usually indicates a networking environment misconfiguration.\n    * CONTAINER_LAUNCH_FAILURE: Databricks was unable to launch containers on worker nodes for the cluster. Have your admin check your network configuration.\n    * INSTANCE_POOL_CLUSTER_FAILURE: Pool backed cluster specific failure. 
Refer to [Pools](https://docs.databricks.com/clusters/instance-pools/index.html) for details.\n    * REQUEST_REJECTED: Databricks cannot handle the request at this moment. Try again later and contact Databricks if the problem persists.\n    * INIT_SCRIPT_FAILURE: Databricks cannot load and run a cluster-scoped init script on one of the cluster\u2019s nodes, or the init script terminates with a non-zero exit code. Refer to [Init script logs](https://docs.databricks.com/clusters/init-scripts.html#init-script-log).\n    * TRIAL_EXPIRED: The Databricks trial subscription expired.\n    \"\"\"\n\n    userrequest = \"USER_REQUEST\"\n    jobfinished = \"JOB_FINISHED\"\n    inactivity = \"INACTIVITY\"\n    cloudprovidershutdown = \"CLOUD_PROVIDER_SHUTDOWN\"\n    communicationlost = \"COMMUNICATION_LOST\"\n    cloudproviderlaunchfailure = \"CLOUD_PROVIDER_LAUNCH_FAILURE\"\n    sparkstartupfailure = \"SPARK_STARTUP_FAILURE\"\n    invalidargument = \"INVALID_ARGUMENT\"\n    unexpectedlaunchfailure = \"UNEXPECTED_LAUNCH_FAILURE\"\n    internalerror = \"INTERNAL_ERROR\"\n    sparkerror = \"SPARK_ERROR\"\n    metastorecomponentunhealthy = \"METASTORE_COMPONENT_UNHEALTHY\"\n    dbfscomponentunhealthy = \"DBFS_COMPONENT_UNHEALTHY\"\n    driverunreachable = \"DRIVER_UNREACHABLE\"\n    driverunresponsive = \"DRIVER_UNRESPONSIVE\"\n    instanceunreachable = \"INSTANCE_UNREACHABLE\"\n    containerlaunchfailure = \"CONTAINER_LAUNCH_FAILURE\"\n    instancepoolclusterfailure = \"INSTANCE_POOL_CLUSTER_FAILURE\"\n    requestrejected = \"REQUEST_REJECTED\"\n    initscriptfailure = \"INIT_SCRIPT_FAILURE\"\n    trialexpired = \"TRIAL_EXPIRED\"\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.TerminationParameter","title":"TerminationParameter","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class TerminationParameter(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    aws_api_error_code: Optional[str] = Field(\n        None,\n        description=(\n            \"The AWS provided error code describing why cluster nodes could not be\"\n            \" provisioned. For example, `InstanceLimitExceeded` indicates that the\"\n            \" limit of EC2 instances for a specific instance type has been exceeded.\"\n            \" For reference, see:\"\n            \" <https://docs.aws.amazon.com/AWSEC2/latest/APIReference/query-api-troubleshooting.html>.\"\n        ),\n    )\n    aws_error_message: Optional[str] = Field(\n        None,\n        description=(\n            \"Human-readable context of various failures from AWS. This field is\"\n            \" unstructured, and its exact format is subject to change.\"\n        ),\n    )\n    aws_impaired_status_details: Optional[str] = Field(\n        None,\n        description=(\n            \"The AWS provided status check which failed and induced a node loss. This\"\n            \" status may correspond to a failed instance or system check. For\"\n            \" reference, see\"\n            \" <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-system-instance-status-check.html>.\"\n        ),\n    )\n    aws_instance_state_reason: Optional[str] = Field(\n        None,\n        description=(\n            \"The AWS provided state reason describing why the driver node was\"\n            \" terminated. For example, `Client.VolumeLimitExceeded` indicates that the\"\n            \" limit of EBS volumes or total EBS volume storage has been exceeded. For\"\n            \" reference, see\"\n            \" <https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_StateReason.html>.\"\n        ),\n    )\n    aws_instance_status_event: Optional[str] = Field(\n        None,\n        description=(\n            \"The AWS provided scheduled event (for example reboot) which induced a node\"\n            \" loss. For reference, see\"\n            \" <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-instances-status-check_sched.html>.\"\n        ),\n    )\n    aws_spot_request_fault_code: Optional[str] = Field(\n        None,\n        description=(\n            \"Provides additional details when a spot request fails. For example\"\n            \" `InsufficientFreeAddressesInSubnet` indicates the subnet does not have\"\n            \" free IP addresses to accommodate the new instance. For reference, see\"\n            \" <https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-spot-instance-requests.html>.\"\n        ),\n    )\n    aws_spot_request_status: Optional[str] = Field(\n        None,\n        description=(\n            \"Describes why a spot request could not be fulfilled. For example,\"\n            \" `price-too-low` indicates that the max price was lower than the current\"\n            \" spot price. 
For reference, see:\"\n            \" <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-bid-status.html#spot-instance-bid-status-understand>.\"\n        ),\n    )\n    databricks_error_message: Optional[str] = Field(\n        None,\n        description=(\n            \"Additional context that may explain the reason for cluster termination.\"\n            \" This field is unstructured, and its exact format is subject to change.\"\n        ),\n    )\n    inactivity_duration_min: Optional[str] = Field(\n        None,\n        description=(\n            \"An idle cluster was shut down after being inactive for this duration.\"\n        ),\n    )\n    instance_id: Optional[str] = Field(\n        None, description=\"The ID of the instance that was hosting the Spark driver.\"\n    )\n    instance_pool_error_code: Optional[str] = Field(\n        None,\n        description=(\n            \"The [error\"\n            \" code](https://docs.databricks.com/dev-tools/api/latest/clusters.html#clusterterminationreasonpoolclusterterminationcode)\"\n            \" for cluster failures specific to a pool.\"\n        ),\n    )\n    instance_pool_id: Optional[str] = Field(\n        None, description=\"The ID of the instance pool the cluster is using.\"\n    )\n    username: Optional[str] = Field(\n        None, description=\"The username of the user who terminated the cluster.\"\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.TerminationReason","title":"TerminationReason","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class TerminationReason(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    code: Optional[TerminationCode] = Field(\n        None, description=\"Status code indicating why a cluster was terminated.\"\n    )\n    parameters: Optional[ParameterPair] = Field(\n        None,\n        description=(\n            \"Object containing a set of parameters that provide information about why a\"\n            \" cluster was terminated.\"\n        ),\n    )\n    type: Optional[TerminationType] = Field(\n        None, description=\"Reason indicating why a cluster was terminated.\"\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.TerminationType","title":"TerminationType","text":"

        Bases: str, Enum

        • SUCCESS: Termination succeeded.
        • CLIENT_ERROR: Non-retriable. Client must fix parameters before reattempting the cluster creation.
        • SERVICE_FAULT: Databricks service issue. Client can retry.
        • CLOUD_FAILURE: Cloud provider infrastructure issue. Client can retry after the underlying issue is resolved.
        Source code in prefect_databricks/models/jobs.py
        class TerminationType(str, Enum):\n    \"\"\"\n        * SUCCESS: Termination succeeded.\n    * CLIENT_ERROR: Non-retriable. Client must fix parameters before reattempting the cluster creation.\n    * SERVICE_FAULT: Databricks service issue. Client can retry.\n    * CLOUD_FAILURECloud provider infrastructure issue. Client can retry after the underlying issue is resolved.\n\n    \"\"\"\n\n    success = \"SUCCESS\"\n    clienterror = \"CLIENT_ERROR\"\n    servicefault = \"SERVICE_FAULT\"\n    cloudfailure = \"CLOUD_FAILURE\"\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.TriggerType","title":"TriggerType","text":"

        Bases: str, Enum

        • PERIODIC: Schedules that periodically trigger runs, such as a cron scheduler.
        • ONE_TIME: One-time triggers that fire a single run. This occurs when you trigger a single run on demand through the UI or the API.
        • RETRY: Indicates a run that is triggered as a retry of a previously failed run. This occurs when you request to re-run the job in case of failures.
        Source code in prefect_databricks/models/jobs.py
        class TriggerType(str, Enum):\n    \"\"\"\n        * `PERIODIC`: Schedules that periodically trigger runs, such as a cron scheduler.\n    * `ONE_TIME`: One time triggers that fire a single run. This occurs you triggered a single run on demand through the UI or the API.\n    * `RETRY`: Indicates a run that is triggered as a retry of a previously failed run. This occurs when you request to re-run the job in case of failures.\n    \"\"\"\n\n    periodic = \"PERIODIC\"\n    onetime = \"ONE_TIME\"\n    retry = \"RETRY\"\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.UserName","title":"UserName","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class UserName(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    __root__: str = Field(\n        ..., description=\"Email address for the user.\", example=\"jsmith@example.com\"\n    )\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ViewItem","title":"ViewItem","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class ViewItem(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    content: Optional[str] = Field(None, description=\"Content of the view.\")\n    name: Optional[str] = Field(\n        None,\n        description=(\n            \"Name of the view item. In the case of code view, it would be the\"\n            \" notebook\u2019s name. In the case of dashboard view, it would be the\"\n            \" dashboard\u2019s name.\"\n        ),\n    )\n    type: Optional[ViewType] = Field(None, description=\"Type of the view item.\")\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ViewType","title":"ViewType","text":"

        Bases: str, Enum

        • NOTEBOOK: Notebook view item.
        • DASHBOARD: Dashboard view item.
        Source code in prefect_databricks/models/jobs.py
        class ViewType(str, Enum):\n    \"\"\"\n        * `NOTEBOOK`: Notebook view item.\n    * `DASHBOARD`: Dashboard view item.\n    \"\"\"\n\n    notebook = \"NOTEBOOK\"\n    dashboard = \"DASHBOARD\"\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.ViewsToExport","title":"ViewsToExport","text":"

        Bases: str, Enum

        • CODE: Code view of the notebook.
        • DASHBOARDS: All dashboard views of the notebook.
        • ALL: All views of the notebook.
        Source code in prefect_databricks/models/jobs.py
        class ViewsToExport(str, Enum):\n    \"\"\"\n        * `CODE`: Code view of the notebook.\n    * `DASHBOARDS`: All dashboard views of the notebook.\n    * `ALL`: All views of the notebook.\n    \"\"\"\n\n    code = \"CODE\"\n    dashboards = \"DASHBOARDS\"\n    all = \"ALL\"\n
        "},{"location":"integrations/prefect-databricks/models/jobs/#prefect_databricks.models.jobs.WebhookNotifications","title":"WebhookNotifications","text":"

        Bases: BaseModel

        See source code for the fields' description.

        Source code in prefect_databricks/models/jobs.py
        class WebhookNotifications(BaseModel):\n    \"\"\"\n    See source code for the fields' description.\n    \"\"\"\n\n    class Config:\n        extra = Extra.allow\n        allow_mutation = False\n\n    on_failure: Optional[List[OnFailureItem]] = Field(\n        None,\n        description=(\n            \"An optional list of notification IDs to call when the run fails. A maximum\"\n            \" of 3 destinations can be specified for the `on_failure` property.\"\n        ),\n        example=[{\"id\": \"0481e838-0a59-4eff-9541-a4ca6f149574\"}],\n    )\n    on_start: Optional[List[OnStartItem]] = Field(\n        None,\n        description=(\n            \"An optional list of notification IDs to call when the run starts. A\"\n            \" maximum of 3 destinations can be specified for the `on_start` property.\"\n        ),\n        example=[\n            {\"id\": \"03dd86e4-57ef-4818-a950-78e41a1d71ab\"},\n            {\"id\": \"0481e838-0a59-4eff-9541-a4ca6f149574\"},\n        ],\n    )\n    on_success: Optional[List[OnSucces]] = Field(\n        None,\n        description=(\n            \"An optional list of notification IDs to call when the run completes\"\n            \" successfully. A maximum of 3 destinations can be specified for the\"\n            \" `on_success` property.\"\n        ),\n        example=[{\"id\": \"03dd86e4-57ef-4818-a950-78e41a1d71ab\"}],\n    )\n
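        A minimal sketch (illustrative only, assuming prefect-databricks is installed) of building this model from plain dictionaries, reusing the placeholder notification IDs from the field examples above:

        from prefect_databricks.models.jobs import WebhookNotifications

        # Up to 3 destinations can be specified per property; pydantic coerces the
        # nested dicts into the corresponding item models.
        webhooks = WebhookNotifications.parse_obj(
            {
                "on_start": [{"id": "03dd86e4-57ef-4818-a950-78e41a1d71ab"}],
                "on_failure": [{"id": "0481e838-0a59-4eff-9541-a4ca6f149574"}],
            }
        )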
        "},{"location":"integrations/prefect-dbt/","title":"prefect-dbt","text":"

        With prefect-dbt, you can trigger and observe dbt Cloud jobs, execute dbt Core CLI commands, and incorporate other tools, such as Snowflake, into your dbt runs. Prefect provides a global view of the state of your workflows and allows you to take action based on state changes.

        "},{"location":"integrations/prefect-dbt/#getting-started","title":"Getting started","text":"
        1. Install prefect-dbt
        2. Register the newly installed block types

        Explore the examples below to learn how to use Prefect with dbt.

        "},{"location":"integrations/prefect-dbt/#integrate-dbt-cloud-jobs-with-prefect-flows","title":"Integrate dbt Cloud jobs with Prefect flows","text":"

        If you have an existing dbt Cloud job, you can use the pre-built flow run_dbt_cloud_job to trigger a job run and wait until the job run is finished.

        If some nodes fail, run_dbt_cloud_job efficiently retries the unsuccessful nodes.

        Before running this flow, save your dbt Cloud credentials to a DbtCloudCredentials block.

        from prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudJob\nfrom prefect_dbt.cloud.jobs import run_dbt_cloud_job\n\n@flow\ndef run_dbt_job_flow():\n    result = run_dbt_cloud_job(\n        dbt_cloud_job=DbtCloudJob.load(\"my-block-name\"),\n        targeted_retries=5,\n    )\n    return result\n\nrun_dbt_job_flow()\n
        "},{"location":"integrations/prefect-dbt/#integrate-dbt-core-cli-commands-with-prefect-flows","title":"Integrate dbt Core CLI commands with Prefect flows","text":"

        prefect-dbt supports execution of dbt Core CLI commands.

        If you don't have a DbtCoreOperation block saved, create one and set the commands that you want to run.

        Optionally, specify the project_dir. If profiles_dir is not set, the DBT_PROFILES_DIR environment variable will be used. If DBT_PROFILES_DIR is not set, the default directory $HOME/.dbt/ will be used.

        "},{"location":"integrations/prefect-dbt/#using-an-existing-profile","title":"Using an existing profile","text":"

        If you have an existing dbt profile, specify the profiles_dir where profiles.yml is located. You can use it in code like this:

        from prefect import flow\nfrom prefect_dbt.cli.commands import DbtCoreOperation\n\n@flow\ndef trigger_dbt_flow() -> str:\n    result = DbtCoreOperation(\n        commands=[\"pwd\", \"dbt debug\", \"dbt run\"],\n        project_dir=\"PROJECT-DIRECTORY-PLACEHOLDER\",\n        profiles_dir=\"PROFILES-DIRECTORY-PLACEHOLDER\"\n    ).run()\n    return result\n\ntrigger_dbt_flow()\n
        "},{"location":"integrations/prefect-dbt/#writing-a-new-profile","title":"Writing a new profile","text":"

        To set up a new profile, first save and load a DbtCliProfile block and use it in DbtCoreOperation.

        Then, specify profiles_dir, where profiles.yml will be written. Here's example code with placeholders:

        from prefect import flow\nfrom prefect_dbt.cli import DbtCliProfile, DbtCoreOperation\n\n@flow\ndef trigger_dbt_flow():\n    dbt_cli_profile = DbtCliProfile.load(\"DBT-CORE-OPERATION-BLOCK-NAME-PLACEHOLDER\")\n    with DbtCoreOperation(\n        commands=[\"dbt debug\", \"dbt run\"],\n        project_dir=\"PROJECT-DIRECTORY-PLACEHOLDER\",\n        profiles_dir=\"PROFILES-DIRECTORY-PLACEHOLDER\",\n        dbt_cli_profile=dbt_cli_profile,\n    ) as dbt_operation:\n        dbt_process = dbt_operation.trigger()\n        # do other things before waiting for completion\n        dbt_process.wait_for_completion()\n        result = dbt_process.fetch_result()\n    return result\n\ntrigger_dbt_flow()\n
        "},{"location":"integrations/prefect-dbt/#resources","title":"Resources","text":"

        If you need help using dbt, consult the dbt documentation.

        "},{"location":"integrations/prefect-dbt/#installation","title":"Installation","text":"

        To install prefect-dbt for use with dbt Cloud:

        pip install prefect-dbt\n

        To install with additional functionality for dbt Core (CLI):

        pip install \"prefect-dbt[cli]\"\n

        To install with additional functionality for dbt Core and Snowflake profiles:

        pip install \"prefect-dbt[snowflake]\"\n

        To install with additional functionality for dbt Core and BigQuery profiles:

        pip install \"prefect-dbt[bigquery]\"\n

        To install with additional functionality for dbt Core and Postgres profiles:

        pip install \"prefect-dbt[postgres]\"\n

        Some dbt Core profiles require additional installation

        According to dbt's Databricks setup page, users must first install the adapter:

        pip install dbt-databricks\n

        Check out the desired profile setup page on the sidebar for others.

        prefect-dbt requires Python 3.8 or newer.

        We recommend using a Python virtual environment manager such as conda, venv, or pipenv.

        "},{"location":"integrations/prefect-dbt/#registering-block-types","title":"Registering block types","text":"

        Register the block types in the prefect-dbt module to make them available for use.

        prefect block register -m prefect_dbt\n
        "},{"location":"integrations/prefect-dbt/#saving-credentials-to-a-block","title":"Saving credentials to a block","text":"

        Blocks can be created through code or through the UI.

        "},{"location":"integrations/prefect-dbt/#dbt-cloud","title":"dbt Cloud","text":"

        To create a dbt Cloud Credentials block, do the following:

        1. Go to your dbt Cloud profile.
        2. Log in to your dbt Cloud account.
        3. Scroll to API or click API Access on the sidebar.
        4. Copy the API Key.
        5. Click Projects on the sidebar.
        6. Copy the account ID from the URL: https://cloud.getdbt.com/settings/accounts/<ACCOUNT_ID>.
        7. Create and run the following script, replacing the placeholders.
        from prefect_dbt.cloud import DbtCloudCredentials\n\nDbtCloudCredentials(\n    api_key=\"API-KEY-PLACEHOLDER\",\n    account_id=\"ACCOUNT-ID-PLACEHOLDER\"\n).save(\"CREDENTIALS-BLOCK-NAME-PLACEHOLDER\")\n

        Then, to create a dbt Cloud job block, do the following:

        1. Head over to your dbt home page.
        2. On the top nav bar, click on Deploy -> Jobs.
        3. Select a job.
        4. Copy the job ID from the URL: https://cloud.getdbt.com/deploy/<ACCOUNT_ID>/projects/<PROJECT_ID>/jobs/<JOB_ID>
        5. Create and run the following script, replacing the placeholders.
        from prefect_dbt.cloud import DbtCloudCredentials, DbtCloudJob\n\ndbt_cloud_credentials = DbtCloudCredentials.load(\"CREDENTIALS-BLOCK-NAME-PLACEHOLDER\")\ndbt_cloud_job = DbtCloudJob(\n    dbt_cloud_credentials=dbt_cloud_credentials,\n    job_id=\"JOB-ID-PLACEHOLDER\"\n).save(\"JOB-BLOCK-NAME-PLACEHOLDER\")\n

        You can now load the saved block, which can access your credentials:

        from prefect_dbt.cloud import DbtCloudJob\n\nDbtCloudJob.load(\"JOB-BLOCK-NAME-PLACEHOLDER\")\n
        "},{"location":"integrations/prefect-dbt/#dbt-core-cli","title":"dbt Core CLI","text":"

        Available TargetConfigs blocks

        Visit the API Reference to see other built-in TargetConfigs blocks.

        If the desired service profile is not available, check out the Examples Catalog to see how you can build one from the generic TargetConfigs class.

        To create dbt Core target config and profile blocks for BigQuery:

        1. Save and load a GcpCredentials block.
        2. Determine the schema / dataset you want to use in BigQuery.
        3. Create a short script, replacing the placeholders.
        from prefect_gcp.credentials import GcpCredentials\nfrom prefect_dbt.cli import BigQueryTargetConfigs, DbtCliProfile\n\ncredentials = GcpCredentials.load(\"CREDENTIALS-BLOCK-NAME-PLACEHOLDER\")\ntarget_configs = BigQueryTargetConfigs(\n    schema=\"SCHEMA-NAME-PLACEHOLDER\",  # also known as dataset\n    credentials=credentials,\n)\ntarget_configs.save(\"TARGET-CONFIGS-BLOCK-NAME-PLACEHOLDER\")\n\ndbt_cli_profile = DbtCliProfile(\n    name=\"PROFILE-NAME-PLACEHOLDER\",\n    target=\"TARGET-NAME-placeholder\",\n    target_configs=target_configs,\n)\ndbt_cli_profile.save(\"DBT-CLI-PROFILE-BLOCK-NAME-PLACEHOLDER\")\n

        Then, to create a dbt Core operation block:

        1. Determine the dbt commands you want to run.
        2. Create a short script, replacing the placeholders.
        from prefect_dbt.cli import DbtCliProfile, DbtCoreOperation\n\ndbt_cli_profile = DbtCliProfile.load(\"DBT-CLI-PROFILE-BLOCK-NAME-PLACEHOLDER\")\ndbt_core_operation = DbtCoreOperation(\n    commands=[\"DBT-CLI-COMMANDS-PLACEHOLDER\"],\n    dbt_cli_profile=dbt_cli_profile,\n    overwrite_profiles=True,\n)\ndbt_core_operation.save(\"DBT-CORE-OPERATION-BLOCK-NAME-PLACEHOLDER\")\n

        You can now load the saved block, which holds your credentials:

        from prefect_dbt.cli import DbtCoreOperation\n\nDbtCoreOperation.load(\"DBT-CORE-OPERATION-BLOCK-NAME-PLACEHOLDER\")\n
        "},{"location":"integrations/prefect-dbt/cli/commands/","title":"Commands","text":""},{"location":"integrations/prefect-dbt/cli/commands/#prefect_dbt.cli.commands","title":"prefect_dbt.cli.commands","text":"

        Module containing tasks and flows for interacting with dbt CLI

        "},{"location":"integrations/prefect-dbt/cli/commands/#prefect_dbt.cli.commands.DbtCoreOperation","title":"DbtCoreOperation","text":"

        Bases: ShellOperation

        A block representing a dbt operation, containing multiple dbt and shell commands.

        For long-lasting operations, use the trigger method and use the block as a context manager so that processes are closed automatically when the context is exited. Otherwise, manually call the close method to close processes.

        For short-lasting operations, use the run method. Context is automatically managed with this method.

        Attributes:

        • commands: A list of commands to execute sequentially.
        • stream_output: Whether to stream output.
        • env: A dictionary of environment variables to set for the shell operation.
        • working_directory: The working directory context the commands will be executed within.
        • shell: The shell to use to execute the commands.
        • extension: The extension to use for the temporary file. If unset, defaults to .ps1 on Windows and .sh on other platforms.
        • profiles_dir (Optional[Path]): The directory to search for the profiles.yml file. Setting this appends the --profiles-dir option to the dbt commands provided. If this is not set, will try using the DBT_PROFILES_DIR environment variable, but if that's also not set, will use the default directory $HOME/.dbt/.
        • project_dir (Optional[Path]): The directory to search for the dbt_project.yml file. Default is the current working directory and its parents.
        • overwrite_profiles (bool): Whether the existing profiles.yml file under profiles_dir should be overwritten with a new profile.
        • dbt_cli_profile (Optional[DbtCliProfile]): Profiles class containing the profile written to profiles.yml. Note! This is optional and will raise an error if profiles.yml already exists under profile_dir and overwrite_profiles is set to False.

        Examples:

        Load a configured block.

        from prefect_dbt import DbtCoreOperation\n\ndbt_op = DbtCoreOperation.load(\"BLOCK_NAME\")\n

        Execute short-lasting dbt debug and list with a custom DbtCliProfile.

        from prefect_dbt import DbtCoreOperation, DbtCliProfile\nfrom prefect_dbt.cli.configs import SnowflakeTargetConfigs\nfrom prefect_snowflake import SnowflakeConnector\n\nsnowflake_connector = await SnowflakeConnector.load(\"snowflake-connector\")\ntarget_configs = SnowflakeTargetConfigs(connector=snowflake_connector)\ndbt_cli_profile = DbtCliProfile(\n    name=\"jaffle_shop\",\n    target=\"dev\",\n    target_configs=target_configs,\n)\ndbt_init = DbtCoreOperation(\n    commands=[\"dbt debug\", \"dbt list\"],\n    dbt_cli_profile=dbt_cli_profile,\n    overwrite_profiles=True\n)\ndbt_init.run()\n

        Execute a longer-lasting dbt run as a context manager.

        with DbtCoreOperation(commands=[\"dbt run\"]) as dbt_run:\n    dbt_process = dbt_run.trigger()\n    # do other things\n    dbt_process.wait_for_completion()\n    dbt_output = dbt_process.fetch_result()\n

        Source code in prefect_dbt/cli/commands.py
        class DbtCoreOperation(ShellOperation):\n    \"\"\"\n    A block representing a dbt operation, containing multiple dbt and shell commands.\n\n    For long-lasting operations, use the trigger method and utilize the block as a\n    context manager for automatic closure of processes when context is exited.\n    If not, manually call the close method to close processes.\n\n    For short-lasting operations, use the run method. Context is automatically managed\n    with this method.\n\n    Attributes:\n        commands: A list of commands to execute sequentially.\n        stream_output: Whether to stream output.\n        env: A dictionary of environment variables to set for the shell operation.\n        working_directory: The working directory context the commands\n            will be executed within.\n        shell: The shell to use to execute the commands.\n        extension: The extension to use for the temporary file.\n            if unset defaults to `.ps1` on Windows and `.sh` on other platforms.\n        profiles_dir: The directory to search for the profiles.yml file.\n            Setting this appends the `--profiles-dir` option to the dbt commands\n            provided. If this is not set, will try using the DBT_PROFILES_DIR\n            environment variable, but if that's also not\n            set, will use the default directory `$HOME/.dbt/`.\n        project_dir: The directory to search for the dbt_project.yml file.\n            Default is the current working directory and its parents.\n        overwrite_profiles: Whether the existing profiles.yml file under profiles_dir\n            should be overwritten with a new profile.\n        dbt_cli_profile: Profiles class containing the profile written to profiles.yml.\n            Note! This is optional and will raise an error if profiles.yml already\n            exists under profile_dir and overwrite_profiles is set to False.\n\n    Examples:\n        Load a configured block.\n        ```python\n        from prefect_dbt import DbtCoreOperation\n\n        dbt_op = DbtCoreOperation.load(\"BLOCK_NAME\")\n        ```\n\n        Execute short-lasting dbt debug and list with a custom DbtCliProfile.\n        ```python\n        from prefect_dbt import DbtCoreOperation, DbtCliProfile\n        from prefect_dbt.cli.configs import SnowflakeTargetConfigs\n        from prefect_snowflake import SnowflakeConnector\n\n        snowflake_connector = await SnowflakeConnector.load(\"snowflake-connector\")\n        target_configs = SnowflakeTargetConfigs(connector=snowflake_connector)\n        dbt_cli_profile = DbtCliProfile(\n            name=\"jaffle_shop\",\n            target=\"dev\",\n            target_configs=target_configs,\n        )\n        dbt_init = DbtCoreOperation(\n            commands=[\"dbt debug\", \"dbt list\"],\n            dbt_cli_profile=dbt_cli_profile,\n            overwrite_profiles=True\n        )\n        dbt_init.run()\n        ```\n\n        Execute a longer-lasting dbt run as a context manager.\n        ```python\n        with DbtCoreOperation(commands=[\"dbt run\"]) as dbt_run:\n            dbt_process = dbt_run.trigger()\n            # do other things\n            dbt_process.wait_for_completion()\n            dbt_output = dbt_process.fetch_result()\n        ```\n    \"\"\"\n\n    _block_type_name = \"dbt Core Operation\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5zE9lxfzBHjw3tnEup4wWL/9a001902ed43a84c6c96d23b24622e19/dbt-bit_tm.png?h=250\"  # noqa\n    _documentation_url = 
\"https://prefecthq.github.io/prefect-dbt/cli/commands/#prefect_dbt.cli.commands.DbtCoreOperation\"  # noqa\n\n    profiles_dir: Optional[Path] = Field(\n        default=None,\n        description=(\n            \"The directory to search for the profiles.yml file. \"\n            \"Setting this appends the `--profiles-dir` option to the dbt commands \"\n            \"provided. If this is not set, will try using the DBT_PROFILES_DIR \"\n            \"environment variable, but if that's also not \"\n            \"set, will use the default directory `$HOME/.dbt/`.\"\n        ),\n    )\n    project_dir: Optional[Path] = Field(\n        default=None,\n        description=(\n            \"The directory to search for the dbt_project.yml file. \"\n            \"Default is the current working directory and its parents.\"\n        ),\n    )\n    overwrite_profiles: bool = Field(\n        default=False,\n        description=(\n            \"Whether the existing profiles.yml file under profiles_dir \"\n            \"should be overwritten with a new profile.\"\n        ),\n    )\n    dbt_cli_profile: Optional[DbtCliProfile] = Field(\n        default=None,\n        description=(\n            \"Profiles class containing the profile written to profiles.yml. \"\n            \"Note! This is optional and will raise an error if profiles.yml already \"\n            \"exists under profile_dir and overwrite_profiles is set to False.\"\n        ),\n    )\n\n    @validator(\"commands\", always=True)\n    def _has_a_dbt_command(cls, commands):\n        \"\"\"\n        Check that the commands contain a dbt command.\n        \"\"\"\n        if not any(\"dbt \" in command for command in commands):\n            raise ValueError(\n                \"None of the commands are a valid dbt sub-command; see dbt --help, \"\n                \"or use prefect_shell.ShellOperation for non-dbt related \"\n                \"commands instead\"\n            )\n        return commands\n\n    def _find_valid_profiles_dir(self) -> PosixPath:\n        \"\"\"\n        Ensure that there is a profiles.yml available for use.\n        \"\"\"\n        profiles_dir = self.profiles_dir\n        if profiles_dir is None:\n            if self.env.get(\"DBT_PROFILES_DIR\") is not None:\n                # get DBT_PROFILES_DIR from the user input env\n                profiles_dir = self.env[\"DBT_PROFILES_DIR\"]\n            else:\n                # get DBT_PROFILES_DIR from the system env, or default to ~/.dbt\n                profiles_dir = os.getenv(\"DBT_PROFILES_DIR\", Path.home() / \".dbt\")\n        profiles_dir = relative_path_to_current_platform(\n            Path(profiles_dir).expanduser()\n        )\n\n        # https://docs.getdbt.com/dbt-cli/configure-your-profile\n        # Note that the file always needs to be called profiles.yml,\n        # regardless of which directory it is in.\n        profiles_path = profiles_dir / \"profiles.yml\"\n        overwrite_profiles = self.overwrite_profiles\n        dbt_cli_profile = self.dbt_cli_profile\n        if not profiles_path.exists() or overwrite_profiles:\n            if dbt_cli_profile is None:\n                raise ValueError(\n                    \"Since overwrite_profiles is True or profiles_path is empty, \"\n                    \"need `dbt_cli_profile` to write a profile\"\n                )\n            profile = dbt_cli_profile.get_profile()\n            profiles_dir.mkdir(exist_ok=True)\n            with open(profiles_path, \"w+\") as f:\n                yaml.dump(profile, f, 
default_flow_style=False)\n        elif dbt_cli_profile is not None:\n            raise ValueError(\n                f\"Since overwrite_profiles is False and profiles_path {profiles_path} \"\n                f\"already exists, the profile within dbt_cli_profile couldn't be used; \"\n                f\"if the existing profile is satisfactory, do not set dbt_cli_profile\"\n            )\n        return profiles_dir\n\n    def _append_dirs_to_commands(self, profiles_dir) -> List[str]:\n        \"\"\"\n        Append profiles_dir and project_dir options to dbt commands.\n        \"\"\"\n        project_dir = self.project_dir\n\n        commands = []\n        for command in self.commands:\n            command += f\" --profiles-dir {profiles_dir}\"\n            if project_dir is not None:\n                project_dir = Path(project_dir).expanduser()\n                command += f\" --project-dir {project_dir}\"\n            commands.append(command)\n        return commands\n\n    def _compile_kwargs(self, **open_kwargs: Dict[str, Any]) -> Dict[str, Any]:\n        \"\"\"\n        Helper method to compile the kwargs for `open_process` so it's not repeated\n        across the run and trigger methods.\n        \"\"\"\n        profiles_dir = self._find_valid_profiles_dir()\n        commands = self._append_dirs_to_commands(profiles_dir=profiles_dir)\n\n        # _compile_kwargs is called within trigger() and run(), prior to execution.\n        # However _compile_kwargs directly uses self.commands, but here we modified\n        # the commands without saving back to self.commands so we need to create a copy.\n        # was also thinking of using env vars but DBT_PROJECT_DIR is not supported yet.\n        modified_self = self.copy()\n        modified_self.commands = commands\n        return super(type(self), modified_self)._compile_kwargs(**open_kwargs)\n
        "},{"location":"integrations/prefect-dbt/cli/commands/#prefect_dbt.cli.commands.create_summary_markdown","title":"create_summary_markdown","text":"

        Creates a Prefect task artifact summarizing the results of the above predefined prefect-dbt task.

        Source code in prefect_dbt/cli/commands.py
        def create_summary_markdown(results: dbtRunnerResult, command: str) -> UUID:\n    \"\"\"\n    Creates a Prefect task artifact summarizing the results\n    of the above predefined prefrect-dbt task.\n    \"\"\"\n    # Create Summary Markdown Artifact\n    run_statuses: Dict[str, List[str]] = {\n        \"successful\": [],\n        \"failed\": [],\n        \"skipped\": [],\n    }\n\n    for r in results.result.results:\n        if r.status == NodeStatus.Success or r.status == NodeStatus.Pass:\n            run_statuses[\"successful\"].append(r)\n        elif (\n            r.status == NodeStatus.Fail\n            or r.status == NodeStatus.Error\n            or r.status == NodeStatus.RuntimeErr\n        ):\n            run_statuses[\"failed\"].append(r)\n        elif r.status == NodeStatus.Skipped:\n            run_statuses[\"skipped\"].append(r)\n\n    markdown = f\"# dbt {command} Task Summary\"\n\n    if run_statuses[\"failed\"] != []:\n        failed_runs_str = \"\"\n        for r in run_statuses[\"failed\"]:\n            failed_runs_str += f\"**{r.node.name}**\\n \\\n                Node Type: {r.node.resource_type}\\n \\\n                Node Path: {r.node.original_file_path}\"\n            if r.message:\n                message = r.message.replace(\"\\n\", \".\")\n                failed_runs_str += f\"\\nError Message: {message}\\n\"\n        markdown += f\"\"\"\\n## Failed Runs \ud83d\udd34\\n\\n{failed_runs_str}\\n\\n\"\"\"\n\n    if run_statuses[\"successful\"] != []:\n        successful_runs_str = \"\\n\".join(\n            [f\"**{r.node.name}**\" for r in run_statuses[\"successful\"]]\n        )\n        markdown += f\"\"\"\\n## Successful Runs \u2705\\n\\n{successful_runs_str}\\n\\n\"\"\"\n\n    if run_statuses[\"skipped\"] != []:\n        skipped_runs_str = \"\\n\".join(\n            [f\"**{r.node.name}**\" for r in run_statuses[\"skipped\"]]\n        )\n        markdown += f\"\"\" ## Skipped Runs \ud83d\udeab\\n\\n{skipped_runs_str}\\n\\n\"\"\"\n\n    return markdown\n
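        For context, a minimal sketch of how the returned Markdown might be published as a Prefect artifact (illustrative only; it assumes dbt Core 1.5+ for the programmatic dbtRunner and uses Prefect's create_markdown_artifact, neither of which is prescribed by this module):

        from dbt.cli.main import dbtRunner
        from prefect import flow
        from prefect.artifacts import create_markdown_artifact
        from prefect_dbt.cli.commands import create_summary_markdown

        @flow
        def dbt_run_with_summary():
            # Invoke `dbt run` programmatically to obtain a dbtRunnerResult.
            results = dbtRunner().invoke(["run"])
            # Summarize the node-level results and store them as a Markdown artifact.
            summary = create_summary_markdown(results, command="run")
            create_markdown_artifact(markdown=summary, key="dbt-run-summary")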
        "},{"location":"integrations/prefect-dbt/cli/commands/#prefect_dbt.cli.commands.run_dbt_build","title":"run_dbt_build async","text":"

        Executes the 'dbt build' command within a Prefect task, and optionally creates a Prefect artifact summarizing the dbt build results.

        Parameters:

        Name Type Description Default profiles_dir Optional[Union[Path, str]]

        The directory to search for the profiles.yml file. Setting this appends the --profiles-dir option to the command provided. If this is not set, will try using the DBT_PROFILES_DIR env variable, but if that's also not set, will use the default directory $HOME/.dbt/.

        None project_dir Optional[Union[Path, str]]

        The directory to search for the dbt_project.yml file. Default is the current working directory and its parents.

        None overwrite_profiles bool

        Whether the existing profiles.yml file under profiles_dir should be overwritten with a new profile.

        False dbt_cli_profile Optional[DbtCliProfile]

        Profiles class containing the profile written to profiles.yml. Note! This is optional and will raise an error if profiles.yml already exists under profile_dir and overwrite_profiles is set to False.

        None create_artifact bool

        If True, creates a Prefect artifact on the task run with the dbt build results using the specified artifact key. Defaults to True.

        True artifact_key str

        The key under which to store the dbt build results artifact in Prefect. Defaults to 'dbt-build-task-summary'.

        'dbt-build-task-summary'
            from prefect import flow\n    from prefect_dbt.cli.commands import run_dbt_build\n\n    @flow\n    def dbt_test_flow():\n        run_dbt_build(\n            project_dir=\"/Users/test/my_dbt_project_dir\"\n        )\n

        Raises:

        Type Description ValueError

        If required dbt_cli_profile is not provided when needed for profile writing.

        RuntimeError

        If the dbt build fails for any reason, it will be indicated by the exception raised.

        Source code in prefect_dbt/cli/commands.py
        @task\nasync def run_dbt_build(\n    profiles_dir: Optional[Union[Path, str]] = None,\n    project_dir: Optional[Union[Path, str]] = None,\n    overwrite_profiles: bool = False,\n    dbt_cli_profile: Optional[DbtCliProfile] = None,\n    create_artifact: bool = True,\n    artifact_key: str = \"dbt-build-task-summary\",\n    **command_kwargs,\n):\n    \"\"\"\n    Executes the 'dbt build' command within a Prefect task,\n    and optionally creates a Prefect artifact summarizing the dbt build results.\n\n    Args:\n        profiles_dir: The directory to search for the profiles.yml file. Setting this\n            appends the `--profiles-dir` option to the command provided.\n            If this is not set, will try using the DBT_PROFILES_DIR env variable,\n            but if that's also not set, will use the default directory `$HOME/.dbt/`.\n        project_dir: The directory to search for the dbt_project.yml file.\n            Default is the current working directory and its parents.\n        overwrite_profiles: Whether the existing profiles.yml file under profiles_dir\n            should be overwritten with a new profile.\n        dbt_cli_profile: Profiles class containing the profile written to profiles.yml.\n            Note! This is optional and will raise an error\n            if profiles.yml already exists under profile_dir\n            and overwrite_profiles is set to False.\n        create_artifact: If True, creates a Prefect artifact on the task run\n            with the dbt build results using the specified artifact key.\n            Defaults to True.\n        artifact_key: The key under which to store\n            the dbt build results artifact in Prefect.\n            Defaults to 'dbt-build-task-summary'.\n\n    Example:\n    ```python\n        from prefect import flow\n        from prefect_dbt.cli.tasks import dbt_build_task\n\n        @flow\n        def dbt_test_flow():\n            dbt_build_task(\n                project_dir=\"/Users/test/my_dbt_project_dir\"\n            )\n    ```\n\n    Raises:\n        ValueError: If required dbt_cli_profile is not provided\n                    when needed for profile writing.\n        RuntimeError: If the dbt build fails for any reason,\n                    it will be indicated by the exception raised.\n    \"\"\"\n\n    results = await trigger_dbt_cli_command.fn(\n        command=\"build\",\n        profiles_dir=profiles_dir,\n        project_dir=project_dir,\n        overwrite_profiles=overwrite_profiles,\n        dbt_cli_profile=dbt_cli_profile,\n        create_artifact=create_artifact,\n        artifact_key=artifact_key,\n        **command_kwargs,\n    )\n    return results\n
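        The parameters above can also be combined with a saved DbtCliProfile block. The following sketch assumes a block named dbt-cli-profile was created beforehand; it is illustrative rather than part of the reference.

        from prefect import flow\nfrom prefect_dbt.cli import DbtCliProfile\nfrom prefect_dbt.cli.commands import run_dbt_build\n\n@flow\ndef dbt_build_with_profile_flow():\n    # assumes a DbtCliProfile block named \"dbt-cli-profile\" already exists\n    dbt_cli_profile = DbtCliProfile.load(\"dbt-cli-profile\")\n    run_dbt_build(\n        project_dir=\"my_dbt_project_dir\",\n        overwrite_profiles=True,\n        dbt_cli_profile=dbt_cli_profile,\n    )\n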
        "},{"location":"integrations/prefect-dbt/cli/commands/#prefect_dbt.cli.commands.run_dbt_model","title":"run_dbt_model async","text":"

        Executes the 'dbt run' command within a Prefect task, and optionally creates a Prefect artifact summarizing the dbt run results.

        Parameters:

        Name Type Description Default profiles_dir Optional[Union[Path, str]]

        The directory to search for the profiles.yml file. Setting this appends the --profiles-dir option to the command provided. If this is not set, will try using the DBT_PROFILES_DIR env variable, but if that's also not set, will use the default directory $HOME/.dbt/.

        None project_dir Optional[Union[Path, str]]

        The directory to search for the dbt_project.yml file. Default is the current working directory and its parents.

        None overwrite_profiles bool

        Whether the existing profiles.yml file under profiles_dir should be overwritten with a new profile.

        False dbt_cli_profile Optional[DbtCliProfile]

        Profiles class containing the profile written to profiles.yml. Note! This is optional and will raise an error if profiles.yml already exists under profile_dir and overwrite_profiles is set to False.

        None create_artifact bool

        If True, creates a Prefect artifact on the task run with the dbt run results using the specified artifact key. Defaults to True.

        True artifact_key str

        The key under which to store the dbt run results artifact in Prefect. Defaults to 'dbt-run-task-summary'.

        'dbt-run-task-summary'
            from prefect import flow\n    from prefect_dbt.cli.commands import run_dbt_model\n\n    @flow\n    def dbt_test_flow():\n        run_dbt_model(\n            project_dir=\"/Users/test/my_dbt_project_dir\"\n        )\n

        Raises:

        Type Description ValueError

        If required dbt_cli_profile is not provided when needed for profile writing.

        RuntimeError

        If the dbt run fails for any reason, it will be indicated by the exception raised.

        Source code in prefect_dbt/cli/commands.py
        @task\nasync def run_dbt_model(\n    profiles_dir: Optional[Union[Path, str]] = None,\n    project_dir: Optional[Union[Path, str]] = None,\n    overwrite_profiles: bool = False,\n    dbt_cli_profile: Optional[DbtCliProfile] = None,\n    create_artifact: bool = True,\n    artifact_key: str = \"dbt-run-task-summary\",\n    **command_kwargs,\n):\n    \"\"\"\n    Executes the 'dbt run' command within a Prefect task,\n    and optionally creates a Prefect artifact summarizing the dbt build results.\n\n    Args:\n        profiles_dir: The directory to search for the profiles.yml file. Setting this\n            appends the `--profiles-dir` option to the command provided.\n            If this is not set, will try using the DBT_PROFILES_DIR env variable,\n            but if that's also not set, will use the default directory `$HOME/.dbt/`.\n        project_dir: The directory to search for the dbt_project.yml file.\n            Default is the current working directory and its parents.\n        overwrite_profiles: Whether the existing profiles.yml file under profiles_dir\n            should be overwritten with a new profile.\n        dbt_cli_profile: Profiles class containing the profile written to profiles.yml.\n            Note! This is optional and will raise an error\n            if profiles.yml already exists under profile_dir\n            and overwrite_profiles is set to False.\n        create_artifact: If True, creates a Prefect artifact on the task run\n            with the dbt build results using the specified artifact key.\n            Defaults to True.\n        artifact_key: The key under which to store\n            the dbt run results artifact in Prefect.\n            Defaults to 'dbt-run-task-summary'.\n\n    Example:\n    ```python\n        from prefect import flow\n        from prefect_dbt.cli.tasks import dbt_run_task\n\n        @flow\n        def dbt_test_flow():\n            dbt_run_task(\n                project_dir=\"/Users/test/my_dbt_project_dir\"\n            )\n    ```\n\n    Raises:\n        ValueError: If required dbt_cli_profile is not provided\n                    when needed for profile writing.\n        RuntimeError: If the dbt build fails for any reason,\n                    it will be indicated by the exception raised.\n    \"\"\"\n\n    results = await trigger_dbt_cli_command.fn(\n        command=\"run\",\n        profiles_dir=profiles_dir,\n        project_dir=project_dir,\n        overwrite_profiles=overwrite_profiles,\n        dbt_cli_profile=dbt_cli_profile,\n        create_artifact=create_artifact,\n        artifact_key=artifact_key,\n        **command_kwargs,\n    )\n\n    return results\n
        "},{"location":"integrations/prefect-dbt/cli/commands/#prefect_dbt.cli.commands.run_dbt_seed","title":"run_dbt_seed async","text":"

        Executes the 'dbt seed' command within a Prefect task, and optionally creates a Prefect artifact summarizing the dbt seed results.

        Parameters:

        Name Type Description Default profiles_dir Optional[Union[Path, str]]

        The directory to search for the profiles.yml file. Setting this appends the --profiles-dir option to the command provided. If this is not set, will try using the DBT_PROFILES_DIR env variable, but if that's also not set, will use the default directory $HOME/.dbt/.

        None project_dir Optional[Union[Path, str]]

        The directory to search for the dbt_project.yml file. Default is the current working directory and its parents.

        None overwrite_profiles bool

        Whether the existing profiles.yml file under profiles_dir should be overwritten with a new profile.

        False dbt_cli_profile Optional[DbtCliProfile]

        Profiles class containing the profile written to profiles.yml. Note! This is optional and will raise an error if profiles.yml already exists under profile_dir and overwrite_profiles is set to False.

        None create_artifact bool

        If True, creates a Prefect artifact on the task run with the dbt seed results using the specified artifact key. Defaults to True.

        True artifact_key str

        The key under which to store the dbt seed results artifact in Prefect. Defaults to 'dbt-seed-task-summary'.

        'dbt-seed-task-summary'
            from prefect import flow\n    from prefect_dbt.cli.commands import run_dbt_seed\n\n    @flow\n    def dbt_test_flow():\n        run_dbt_seed(\n            project_dir=\"/Users/test/my_dbt_project_dir\"\n        )\n

        Raises:

        Type Description ValueError

        If required dbt_cli_profile is not provided when needed for profile writing.

        RuntimeError

        If the dbt seed fails for any reason, it will be indicated by the exception raised.

        Source code in prefect_dbt/cli/commands.py
        @task\nasync def run_dbt_seed(\n    profiles_dir: Optional[Union[Path, str]] = None,\n    project_dir: Optional[Union[Path, str]] = None,\n    overwrite_profiles: bool = False,\n    dbt_cli_profile: Optional[DbtCliProfile] = None,\n    create_artifact: bool = True,\n    artifact_key: str = \"dbt-seed-task-summary\",\n    **command_kwargs,\n):\n    \"\"\"\n    Executes the 'dbt seed' command within a Prefect task,\n    and optionally creates a Prefect artifact summarizing the dbt build results.\n\n    Args:\n        profiles_dir: The directory to search for the profiles.yml file. Setting this\n            appends the `--profiles-dir` option to the command provided.\n            If this is not set, will try using the DBT_PROFILES_DIR env variable,\n            but if that's also not set, will use the default directory `$HOME/.dbt/`.\n        project_dir: The directory to search for the dbt_project.yml file.\n            Default is the current working directory and its parents.\n        overwrite_profiles: Whether the existing profiles.yml file under profiles_dir\n            should be overwritten with a new profile.\n        dbt_cli_profile: Profiles class containing the profile written to profiles.yml.\n            Note! This is optional and will raise an error\n            if profiles.yml already exists under profile_dir\n            and overwrite_profiles is set to False.\n        create_artifact: If True, creates a Prefect artifact on the task run\n            with the dbt build results using the specified artifact key.\n            Defaults to True.\n        artifact_key: The key under which to store\n            the dbt build results artifact in Prefect.\n            Defaults to 'dbt-seed-task-summary'.\n\n    Example:\n    ```python\n        from prefect import flow\n        from prefect_dbt.cli.tasks import dbt_seed_task\n\n        @flow\n        def dbt_test_flow():\n            dbt_seed_task(\n                project_dir=\"/Users/test/my_dbt_project_dir\"\n            )\n    ```\n\n    Raises:\n        ValueError: If required dbt_cli_profile is not provided\n                    when needed for profile writing.\n        RuntimeError: If the dbt build fails for any reason,\n                    it will be indicated by the exception raised.\n    \"\"\"\n\n    results = await trigger_dbt_cli_command.fn(\n        command=\"seed\",\n        profiles_dir=profiles_dir,\n        project_dir=project_dir,\n        overwrite_profiles=overwrite_profiles,\n        dbt_cli_profile=dbt_cli_profile,\n        create_artifact=create_artifact,\n        artifact_key=artifact_key,\n        **command_kwargs,\n    )\n\n    return results\n
        "},{"location":"integrations/prefect-dbt/cli/commands/#prefect_dbt.cli.commands.run_dbt_snapshot","title":"run_dbt_snapshot async","text":"

        Executes the 'dbt snapshot' command within a Prefect task, and optionally creates a Prefect artifact summarizing the dbt snapshot results.

        Parameters:

        Name Type Description Default profiles_dir Optional[Union[Path, str]]

        The directory to search for the profiles.yml file. Setting this appends the --profiles-dir option to the command provided. If this is not set, will try using the DBT_PROFILES_DIR env variable, but if that's also not set, will use the default directory $HOME/.dbt/.

        None project_dir Optional[Union[Path, str]]

        The directory to search for the dbt_project.yml file. Default is the current working directory and its parents.

        None overwrite_profiles bool

        Whether the existing profiles.yml file under profiles_dir should be overwritten with a new profile.

        False dbt_cli_profile Optional[DbtCliProfile]

        Profiles class containing the profile written to profiles.yml. Note! This is optional and will raise an error if profiles.yml already exists under profile_dir and overwrite_profiles is set to False.

        None create_artifact bool

        If True, creates a Prefect artifact on the task run with the dbt snapshot results using the specified artifact key. Defaults to True.

        True artifact_key str

        The key under which to store the dbt snapshot results artifact in Prefect. Defaults to 'dbt-snapshot-task-summary'.

        'dbt-snapshot-task-summary'
            from prefect import flow\n    from prefect_dbt.cli.commands import run_dbt_snapshot\n\n    @flow\n    def dbt_test_flow():\n        run_dbt_snapshot(\n            project_dir=\"/Users/test/my_dbt_project_dir\"\n        )\n

        Raises:

        Type Description ValueError

        If required dbt_cli_profile is not provided when needed for profile writing.

        RuntimeError

        If the dbt snapshot fails for any reason, it will be indicated by the exception raised.

        Source code in prefect_dbt/cli/commands.py
        @task\nasync def run_dbt_snapshot(\n    profiles_dir: Optional[Union[Path, str]] = None,\n    project_dir: Optional[Union[Path, str]] = None,\n    overwrite_profiles: bool = False,\n    dbt_cli_profile: Optional[DbtCliProfile] = None,\n    create_artifact: bool = True,\n    artifact_key: str = \"dbt-snapshot-task-summary\",\n    **command_kwargs,\n):\n    \"\"\"\n    Executes the 'dbt snapshot' command within a Prefect task,\n    and optionally creates a Prefect artifact summarizing the dbt build results.\n\n    Args:\n        profiles_dir: The directory to search for the profiles.yml file. Setting this\n            appends the `--profiles-dir` option to the command provided.\n            If this is not set, will try using the DBT_PROFILES_DIR env variable,\n            but if that's also not set, will use the default directory `$HOME/.dbt/`.\n        project_dir: The directory to search for the dbt_project.yml file.\n            Default is the current working directory and its parents.\n        overwrite_profiles: Whether the existing profiles.yml file under profiles_dir\n            should be overwritten with a new profile.\n        dbt_cli_profile: Profiles class containing the profile written to profiles.yml.\n            Note! This is optional and will raise an error\n            if profiles.yml already exists under profile_dir\n            and overwrite_profiles is set to False.\n        create_artifact: If True, creates a Prefect artifact on the task run\n            with the dbt build results using the specified artifact key.\n            Defaults to True.\n        artifact_key: The key under which to store\n            the dbt build results artifact in Prefect.\n            Defaults to 'dbt-snapshot-task-summary'.\n\n    Example:\n    ```python\n        from prefect import flow\n        from prefect_dbt.cli.tasks import dbt_snapshot_task\n\n        @flow\n        def dbt_test_flow():\n            dbt_snapshot_task(\n                project_dir=\"/Users/test/my_dbt_project_dir\"\n            )\n    ```\n\n    Raises:\n        ValueError: If required dbt_cli_profile is not provided\n                    when needed for profile writing.\n        RuntimeError: If the dbt build fails for any reason,\n                    it will be indicated by the exception raised.\n    \"\"\"\n\n    results = await trigger_dbt_cli_command.fn(\n        command=\"snapshot\",\n        profiles_dir=profiles_dir,\n        project_dir=project_dir,\n        overwrite_profiles=overwrite_profiles,\n        dbt_cli_profile=dbt_cli_profile,\n        create_artifact=create_artifact,\n        artifact_key=artifact_key,\n        **command_kwargs,\n    )\n\n    return results\n
        "},{"location":"integrations/prefect-dbt/cli/commands/#prefect_dbt.cli.commands.run_dbt_test","title":"run_dbt_test async","text":"

        Executes the 'dbt test' command within a Prefect task, and optionally creates a Prefect artifact summarizing the dbt test results.

        Parameters:

        Name Type Description Default profiles_dir Optional[Union[Path, str]]

        The directory to search for the profiles.yml file. Setting this appends the --profiles-dir option to the command provided. If this is not set, will try using the DBT_PROFILES_DIR env variable, but if that's also not set, will use the default directory $HOME/.dbt/.

        None project_dir Optional[Union[Path, str]]

        The directory to search for the dbt_project.yml file. Default is the current working directory and its parents.

        None overwrite_profiles bool

        Whether the existing profiles.yml file under profiles_dir should be overwritten with a new profile.

        False dbt_cli_profile Optional[DbtCliProfile]

        Profiles class containing the profile written to profiles.yml. Note! This is optional and will raise an error if profiles.yml already exists under profile_dir and overwrite_profiles is set to False.

        None create_artifact bool

        If True, creates a Prefect artifact on the task run with the dbt test results using the specified artifact key. Defaults to True.

        True artifact_key str

        The key under which to store the dbt test results artifact in Prefect. Defaults to 'dbt-test-task-summary'.

        'dbt-test-task-summary'
            from prefect import flow\n    from prefect_dbt.cli.commands import run_dbt_test\n\n    @flow\n    def dbt_test_flow():\n        run_dbt_test(\n            project_dir=\"/Users/test/my_dbt_project_dir\"\n        )\n

        Raises:

        Type Description ValueError

        If required dbt_cli_profile is not provided when needed for profile writing.

        RuntimeError

        If the dbt test fails for any reason, it will be indicated by the exception raised.

        Source code in prefect_dbt/cli/commands.py
        @task\nasync def run_dbt_test(\n    profiles_dir: Optional[Union[Path, str]] = None,\n    project_dir: Optional[Union[Path, str]] = None,\n    overwrite_profiles: bool = False,\n    dbt_cli_profile: Optional[DbtCliProfile] = None,\n    create_artifact: bool = True,\n    artifact_key: str = \"dbt-test-task-summary\",\n    **command_kwargs,\n):\n    \"\"\"\n    Executes the 'dbt test' command within a Prefect task,\n    and optionally creates a Prefect artifact summarizing the dbt build results.\n\n    Args:\n        profiles_dir: The directory to search for the profiles.yml file. Setting this\n            appends the `--profiles-dir` option to the command provided.\n            If this is not set, will try using the DBT_PROFILES_DIR env variable,\n            but if that's also not set, will use the default directory `$HOME/.dbt/`.\n        project_dir: The directory to search for the dbt_project.yml file.\n            Default is the current working directory and its parents.\n        overwrite_profiles: Whether the existing profiles.yml file under profiles_dir\n            should be overwritten with a new profile.\n        dbt_cli_profile: Profiles class containing the profile written to profiles.yml.\n            Note! This is optional and will raise an error\n            if profiles.yml already exists under profile_dir\n            and overwrite_profiles is set to False.\n        create_artifact: If True, creates a Prefect artifact on the task run\n            with the dbt build results using the specified artifact key.\n            Defaults to True.\n        artifact_key: The key under which to store\n            the dbt test results artifact in Prefect.\n            Defaults to 'dbt-test-task-summary'.\n\n    Example:\n    ```python\n        from prefect import flow\n        from prefect_dbt.cli.tasks import dbt_test_task\n\n        @flow\n        def dbt_test_flow():\n            dbt_test_task(\n                project_dir=\"/Users/test/my_dbt_project_dir\"\n            )\n    ```\n\n    Raises:\n        ValueError: If required dbt_cli_profile is not provided\n                    when needed for profile writing.\n        RuntimeError: If the dbt build fails for any reason,\n                    it will be indicated by the exception raised.\n    \"\"\"\n\n    results = await trigger_dbt_cli_command.fn(\n        command=\"test\",\n        profiles_dir=profiles_dir,\n        project_dir=project_dir,\n        overwrite_profiles=overwrite_profiles,\n        dbt_cli_profile=dbt_cli_profile,\n        create_artifact=create_artifact,\n        artifact_key=artifact_key,\n        **command_kwargs,\n    )\n\n    return results\n
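        Since each of the tasks above accepts the same profile and directory options, they can be chained inside a single flow. This is a sketch with an illustrative project path, not an official recipe.

        from prefect import flow\nfrom prefect_dbt.cli.commands import run_dbt_seed, run_dbt_model, run_dbt_test\n\n@flow\ndef dbt_pipeline_flow():\n    # each task appends --profiles-dir and --project-dir to its dbt command\n    project_dir = \"my_dbt_project_dir\"\n    run_dbt_seed(project_dir=project_dir)\n    run_dbt_model(project_dir=project_dir)\n    run_dbt_test(project_dir=project_dir)\n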
        "},{"location":"integrations/prefect-dbt/cli/commands/#prefect_dbt.cli.commands.trigger_dbt_cli_command","title":"trigger_dbt_cli_command async","text":"

        Task for running dbt commands.

        If no profiles.yml file is found or the overwrite_profiles flag is set to True, this task first generates a profiles.yml file in the profiles_dir directory and then runs the dbt CLI command.

        Parameters:

        Name Type Description Default command str

        The dbt command to be executed.

        required profiles_dir Optional[Union[Path, str]]

        The directory to search for the profiles.yml file. Setting this appends the --profiles-dir option to the command provided. If this is not set, will try using the DBT_PROFILES_DIR environment variable, but if that's also not set, will use the default directory $HOME/.dbt/.

        None project_dir Optional[Union[Path, str]]

        The directory to search for the dbt_project.yml file. Default is the current working directory and its parents.

        None overwrite_profiles bool

        Whether the existing profiles.yml file under profiles_dir should be overwritten with a new profile.

        False dbt_cli_profile Optional[DbtCliProfile]

        Profiles class containing the profile written to profiles.yml. Note! This is optional and will raise an error if profiles.yml already exists under profile_dir and overwrite_profiles is set to False.

        None **command_kwargs

        Additional keyword arguments to pass through to the dbt command.

        {}

        Returns:

        Type Description Optional[dbtRunnerResult]

        The dbtRunnerResult from invoking the dbt command; its result attribute holds the run execution results and its exception attribute holds any error raised by dbt.

        Examples:

        Execute dbt debug with a pre-populated profiles.yml.

        from prefect import flow\nfrom prefect_dbt.cli.commands import trigger_dbt_cli_command\n\n@flow\ndef trigger_dbt_cli_command_flow():\n    result = trigger_dbt_cli_command(\"dbt debug\")\n    return result\n\ntrigger_dbt_cli_command_flow()\n

        Execute dbt debug without a pre-populated profiles.yml.

        from prefect import flow\nfrom prefect_dbt.cli.credentials import DbtCliProfile\nfrom prefect_dbt.cli.commands import trigger_dbt_cli_command\nfrom prefect_dbt.cli.configs import SnowflakeTargetConfigs\nfrom prefect_snowflake.credentials import SnowflakeCredentials\n\n@flow\ndef trigger_dbt_cli_command_flow():\n    credentials = SnowflakeCredentials(\n        user=\"user\",\n        password=\"password\",\n        account=\"account.region.aws\",\n        role=\"role\",\n    )\n    connector = SnowflakeConnector(\n        schema=\"public\",\n        database=\"database\",\n        warehouse=\"warehouse\",\n        credentials=credentials,\n    )\n    target_configs = SnowflakeTargetConfigs(\n        connector=connector\n    )\n    dbt_cli_profile = DbtCliProfile(\n        name=\"jaffle_shop\",\n        target=\"dev\",\n        target_configs=target_configs,\n    )\n    result = trigger_dbt_cli_command(\n        \"dbt debug\",\n        overwrite_profiles=True,\n        dbt_cli_profile=dbt_cli_profile\n    )\n    return result\n\ntrigger_dbt_cli_command_flow()\n

        Source code in prefect_dbt/cli/commands.py
        @task\nasync def trigger_dbt_cli_command(\n    command: str,\n    profiles_dir: Optional[Union[Path, str]] = None,\n    project_dir: Optional[Union[Path, str]] = None,\n    overwrite_profiles: bool = False,\n    dbt_cli_profile: Optional[DbtCliProfile] = None,\n    create_artifact: bool = True,\n    artifact_key: str = \"dbt-cli-command-summary\",\n    **command_kwargs: Dict[str, Any],\n) -> Optional[dbtRunnerResult]:\n    \"\"\"\n    Task for running dbt commands.\n\n    If no profiles.yml file is found or if overwrite_profiles flag is set to True, this\n    will first generate a profiles.yml file in the profiles_dir directory. Then run the dbt\n    CLI shell command.\n\n    Args:\n        command: The dbt command to be executed.\n        profiles_dir: The directory to search for the profiles.yml file. Setting this\n            appends the `--profiles-dir` option to the command provided. If this is not set,\n            will try using the DBT_PROFILES_DIR environment variable, but if that's also not\n            set, will use the default directory `$HOME/.dbt/`.\n        project_dir: The directory to search for the dbt_project.yml file.\n            Default is the current working directory and its parents.\n        overwrite_profiles: Whether the existing profiles.yml file under profiles_dir\n            should be overwritten with a new profile.\n        dbt_cli_profile: Profiles class containing the profile written to profiles.yml.\n            Note! This is optional and will raise an error if profiles.yml already exists\n            under profile_dir and overwrite_profiles is set to False.\n        **shell_run_command_kwargs: Additional keyword arguments to pass to\n            [shell_run_command](https://prefecthq.github.io/prefect-shell/commands/#prefect_shell.commands.shell_run_command).\n\n    Returns:\n        last_line_cli_output (str): The last line of the CLI output will be returned\n            if `return_all` in `shell_run_command_kwargs` is False. 
This is the default\n            behavior.\n        full_cli_output (List[str]): Full CLI output will be returned if `return_all`\n            in `shell_run_command_kwargs` is True.\n\n    Examples:\n        Execute `dbt debug` with a pre-populated profiles.yml.\n        ```python\n        from prefect import flow\n        from prefect_dbt.cli.commands import trigger_dbt_cli_command\n\n        @flow\n        def trigger_dbt_cli_command_flow():\n            result = trigger_dbt_cli_command(\"dbt debug\")\n            return result\n\n        trigger_dbt_cli_command_flow()\n        ```\n\n        Execute `dbt debug` without a pre-populated profiles.yml.\n        ```python\n        from prefect import flow\n        from prefect_dbt.cli.credentials import DbtCliProfile\n        from prefect_dbt.cli.commands import trigger_dbt_cli_command\n        from prefect_dbt.cli.configs import SnowflakeTargetConfigs\n        from prefect_snowflake.credentials import SnowflakeCredentials\n\n        @flow\n        def trigger_dbt_cli_command_flow():\n            credentials = SnowflakeCredentials(\n                user=\"user\",\n                password=\"password\",\n                account=\"account.region.aws\",\n                role=\"role\",\n            )\n            connector = SnowflakeConnector(\n                schema=\"public\",\n                database=\"database\",\n                warehouse=\"warehouse\",\n                credentials=credentials,\n            )\n            target_configs = SnowflakeTargetConfigs(\n                connector=connector\n            )\n            dbt_cli_profile = DbtCliProfile(\n                name=\"jaffle_shop\",\n                target=\"dev\",\n                target_configs=target_configs,\n            )\n            result = trigger_dbt_cli_command(\n                \"dbt debug\",\n                overwrite_profiles=True,\n                dbt_cli_profile=dbt_cli_profile\n            )\n            return result\n\n        trigger_dbt_cli_command_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n\n    if profiles_dir is None:\n        profiles_dir = os.getenv(\"DBT_PROFILES_DIR\", str(Path.home()) + \"/.dbt\")\n\n    if command.startswith(\"dbt\"):\n        command = command.split(\" \", 1)[1]\n\n    # https://docs.getdbt.com/dbt-cli/configure-your-profile\n    # Note that the file always needs to be called profiles.yml,\n    # regardless of which directory it is in.\n    profiles_path = profiles_dir + \"/profiles.yml\"\n    logger.debug(f\"Using this profiles path: {profiles_path}\")\n\n    # write the profile if overwrite or no profiles exist\n    if overwrite_profiles or not Path(profiles_path).expanduser().exists():\n        if dbt_cli_profile is None:\n            raise ValueError(\"Provide `dbt_cli_profile` keyword for writing profiles\")\n        profile = dbt_cli_profile.get_profile()\n        Path(profiles_dir).expanduser().mkdir(exist_ok=True)\n        with open(profiles_path, \"w+\") as f:\n            yaml.dump(profile, f, default_flow_style=False)\n        logger.info(f\"Wrote profile to {profiles_path}\")\n    elif dbt_cli_profile is not None:\n        raise ValueError(\n            f\"Since overwrite_profiles is False and profiles_path ({profiles_path}) \"\n            f\"already exists, the profile within dbt_cli_profile could not be used; \"\n            f\"if the existing profile is satisfactory, do not pass dbt_cli_profile\"\n        )\n\n    # append the options\n    cli_args = [command]\n    
cli_args.append(\"--profiles-dir\")\n    cli_args.append(profiles_dir)\n    if project_dir is not None:\n        project_dir = Path(project_dir).expanduser()\n        cli_args.append(\"--project-dir\")\n        cli_args.append(project_dir)\n\n    if command_kwargs:\n        cli_args.append(command_kwargs)\n\n    # fix up empty shell_run_command_kwargs\n    dbt_runner_client = dbtRunner()\n    logger.info(f\"Running dbt command: {cli_args}\")\n    result: dbtRunnerResult = dbt_runner_client.invoke(cli_args)\n\n    if result.exception is not None:\n        logger.error(f\"dbt build task failed with exception: {result.exception}\")\n        raise result.exception\n\n    # Creating the dbt Summary Markdown if enabled\n    if create_artifact and isinstance(result.result, RunExecutionResult):\n        markdown = create_summary_markdown(result, command)\n        artifact_id = await create_markdown_artifact(\n            markdown=markdown,\n            key=artifact_key,\n        )\n        if not artifact_id:\n            logger.error(f\"Artifact was not created for dbt {command} task\")\n        else:\n            logger.info(\n                f\"dbt {command} task completed successfully with artifact {artifact_id}\"\n            )\n    else:\n        logger.debug(\n            f\"Artifact was not created for dbt {command} this task \\\n                     due to create_artifact=False or the dbt command did not \\\n                     return any RunExecutionResults. \\\n                     See https://docs.getdbt.com/reference/programmatic-invocations \\\n                     for more details on dbtRunnerResult.\"\n        )\n    return result\n
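        As a sketch of handling the return value, assuming the invoked command produces run execution results as in the source above, the dbtRunnerResult can be inspected directly when create_artifact is False; the flow name is illustrative.

        from prefect import flow\nfrom prefect_dbt.cli.commands import trigger_dbt_cli_command\n\n@flow\ndef dbt_run_inspect_flow():\n    result = trigger_dbt_cli_command(\"dbt run\", create_artifact=False)\n    # result.result holds the RunExecutionResult consumed by create_summary_markdown\n    for node_result in result.result.results:\n        print(node_result.node.name, node_result.status)\n    return result\n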
        "},{"location":"integrations/prefect-dbt/cli/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-dbt/cli/credentials/#prefect_dbt.cli.credentials","title":"prefect_dbt.cli.credentials","text":"

        Module containing credentials for interacting with dbt CLI

        "},{"location":"integrations/prefect-dbt/cli/credentials/#prefect_dbt.cli.credentials.DbtCliProfile","title":"DbtCliProfile","text":"

        Bases: Block

        Profile for use across dbt CLI tasks and flows.

        Attributes:

        Name Type Description name str

        Profile name used for populating profiles.yml.

        target str

        The default target your dbt project will use.

        target_configs TargetConfigs

        Target configs contain credentials and settings, specific to the warehouse you're connecting to. To find valid keys, head to the Available adapters page and click the desired adapter's \"Profile Setup\" hyperlink.

        global_configs GlobalConfigs

        Global configs control things like the visual output of logs, the manner in which dbt parses your project, and what to do when dbt finds a version mismatch or a failing model. Valid keys can be found here.

        Examples:

        Load stored dbt CLI profile:

        from prefect_dbt.cli import DbtCliProfile\ndbt_cli_profile = DbtCliProfile.load(\"BLOCK_NAME\").get_profile()\n

        Get a dbt Snowflake profile from DbtCliProfile by using SnowflakeTargetConfigs:

        from prefect_dbt.cli import DbtCliProfile\nfrom prefect_dbt.cli.configs import SnowflakeTargetConfigs\nfrom prefect_snowflake.credentials import SnowflakeCredentials\nfrom prefect_snowflake.database import SnowflakeConnector\n\ncredentials = SnowflakeCredentials(\n    user=\"user\",\n    password=\"password\",\n    account=\"account.region.aws\",\n    role=\"role\",\n)\nconnector = SnowflakeConnector(\n    schema=\"public\",\n    database=\"database\",\n    warehouse=\"warehouse\",\n    credentials=credentials,\n)\ntarget_configs = SnowflakeTargetConfigs(\n    connector=connector\n)\ndbt_cli_profile = DbtCliProfile(\n    name=\"jaffle_shop\",\n    target=\"dev\",\n    target_configs=target_configs,\n)\nprofile = dbt_cli_profile.get_profile()\n

        Get a dbt Redshift profile from DbtCliProfile by using generic TargetConfigs:

        from prefect_dbt.cli import DbtCliProfile\nfrom prefect_dbt.cli.configs import GlobalConfigs, TargetConfigs\n\ntarget_configs_extras = dict(\n    host=\"hostname.region.redshift.amazonaws.com\",\n    user=\"username\",\n    password=\"password1\",\n    port=5439,\n    dbname=\"analytics\",\n)\ntarget_configs = TargetConfigs(\n    type=\"redshift\",\n    schema=\"schema\",\n    threads=4,\n    extras=target_configs_extras\n)\ndbt_cli_profile = DbtCliProfile(\n    name=\"jaffle_shop\",\n    target=\"dev\",\n    target_configs=target_configs,\n)\nprofile = dbt_cli_profile.get_profile()\n

        Source code in prefect_dbt/cli/credentials.py
        class DbtCliProfile(Block):\n    \"\"\"\n    Profile for use across dbt CLI tasks and flows.\n\n    Attributes:\n        name (str): Profile name used for populating profiles.yml.\n        target (str): The default target your dbt project will use.\n        target_configs (TargetConfigs): Target configs contain credentials and\n            settings, specific to the warehouse you're connecting to.\n            To find valid keys, head to the [Available adapters](\n            https://docs.getdbt.com/docs/available-adapters) page and\n            click the desired adapter's \"Profile Setup\" hyperlink.\n        global_configs (GlobalConfigs): Global configs control\n            things like the visual output of logs, the manner\n            in which dbt parses your project, and what to do when\n            dbt finds a version mismatch or a failing model.\n            Valid keys can be found [here](\n            https://docs.getdbt.com/reference/global-configs).\n\n    Examples:\n        Load stored dbt CLI profile:\n        ```python\n        from prefect_dbt.cli import DbtCliProfile\n        dbt_cli_profile = DbtCliProfile.load(\"BLOCK_NAME\").get_profile()\n        ```\n\n        Get a dbt Snowflake profile from DbtCliProfile by using SnowflakeTargetConfigs:\n        ```python\n        from prefect_dbt.cli import DbtCliProfile\n        from prefect_dbt.cli.configs import SnowflakeTargetConfigs\n        from prefect_snowflake.credentials import SnowflakeCredentials\n        from prefect_snowflake.database import SnowflakeConnector\n\n        credentials = SnowflakeCredentials(\n            user=\"user\",\n            password=\"password\",\n            account=\"account.region.aws\",\n            role=\"role\",\n        )\n        connector = SnowflakeConnector(\n            schema=\"public\",\n            database=\"database\",\n            warehouse=\"warehouse\",\n            credentials=credentials,\n        )\n        target_configs = SnowflakeTargetConfigs(\n            connector=connector\n        )\n        dbt_cli_profile = DbtCliProfile(\n            name=\"jaffle_shop\",\n            target=\"dev\",\n            target_configs=target_configs,\n        )\n        profile = dbt_cli_profile.get_profile()\n        ```\n\n        Get a dbt Redshift profile from DbtCliProfile by using generic TargetConfigs:\n        ```python\n        from prefect_dbt.cli import DbtCliProfile\n        from prefect_dbt.cli.configs import GlobalConfigs, TargetConfigs\n\n        target_configs_extras = dict(\n            host=\"hostname.region.redshift.amazonaws.com\",\n            user=\"username\",\n            password=\"password1\",\n            port=5439,\n            dbname=\"analytics\",\n        )\n        target_configs = TargetConfigs(\n            type=\"redshift\",\n            schema=\"schema\",\n            threads=4,\n            extras=target_configs_extras\n        )\n        dbt_cli_profile = DbtCliProfile(\n            name=\"jaffle_shop\",\n            target=\"dev\",\n            target_configs=target_configs,\n        )\n        profile = dbt_cli_profile.get_profile()\n        ```\n    \"\"\"\n\n    _block_type_name = \"dbt CLI Profile\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5zE9lxfzBHjw3tnEup4wWL/9a001902ed43a84c6c96d23b24622e19/dbt-bit_tm.png?h=250\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-dbt/cli/credentials/#prefect_dbt.cli.credentials.DbtCliProfile\"  # noqa\n\n    name: str = Field(\n        default=..., 
description=\"Profile name used for populating profiles.yml.\"\n    )\n    target: str = Field(\n        default=..., description=\"The default target your dbt project will use.\"\n    )\n    target_configs: Union[\n        SnowflakeTargetConfigs,\n        BigQueryTargetConfigs,\n        PostgresTargetConfigs,\n        TargetConfigs,\n    ] = Field(\n        default=...,\n        description=(\n            \"Target configs contain credentials and settings, specific to the \"\n            \"warehouse you're connecting to.\"\n        ),\n    )\n    global_configs: Optional[GlobalConfigs] = Field(\n        default=None,\n        description=(\n            \"Global configs control things like the visual output of logs, the manner \"\n            \"in which dbt parses your project, and what to do when dbt finds a version \"\n            \"mismatch or a failing model.\"\n        ),\n    )\n\n    def get_profile(self) -> Dict[str, Any]:\n        \"\"\"\n        Returns the dbt profile, likely used for writing to profiles.yml.\n\n        Returns:\n            A JSON compatible dictionary with the expected format of profiles.yml.\n        \"\"\"\n        profile = {\n            \"config\": self.global_configs.get_configs() if self.global_configs else {},\n            self.name: {\n                \"target\": self.target,\n                \"outputs\": {self.target: self.target_configs.get_configs()},\n            },\n        }\n        return profile\n
        "},{"location":"integrations/prefect-dbt/cli/credentials/#prefect_dbt.cli.credentials.DbtCliProfile.get_profile","title":"get_profile","text":"

        Returns the dbt profile, likely used for writing to profiles.yml.

        Returns:

        Type Description Dict[str, Any]

        A JSON compatible dictionary with the expected format of profiles.yml.

        Source code in prefect_dbt/cli/credentials.py
        def get_profile(self) -> Dict[str, Any]:\n    \"\"\"\n    Returns the dbt profile, likely used for writing to profiles.yml.\n\n    Returns:\n        A JSON compatible dictionary with the expected format of profiles.yml.\n    \"\"\"\n    profile = {\n        \"config\": self.global_configs.get_configs() if self.global_configs else {},\n        self.name: {\n            \"target\": self.target,\n            \"outputs\": {self.target: self.target_configs.get_configs()},\n        },\n    }\n    return profile\n
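        As a sketch of how the returned dictionary might be written out, mirroring what trigger_dbt_cli_command does internally; the block name and file path are illustrative assumptions.

        import yaml\n\nfrom prefect_dbt.cli import DbtCliProfile\n\n# assumes a DbtCliProfile block named \"dbt-cli-profile\" was saved earlier\ndbt_cli_profile = DbtCliProfile.load(\"dbt-cli-profile\")\nprofile = dbt_cli_profile.get_profile()\n\n# the dict mirrors profiles.yml: a config section plus\n# {name: {\"target\": ..., \"outputs\": {target: target_configs}}}\nwith open(\"profiles.yml\", \"w\") as f:\n    yaml.dump(profile, f, default_flow_style=False)\n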
        "},{"location":"integrations/prefect-dbt/cli/configs/base/","title":"Base","text":""},{"location":"integrations/prefect-dbt/cli/configs/base/#prefect_dbt.cli.configs.base","title":"prefect_dbt.cli.configs.base","text":"

        Module containing models for base configs

        "},{"location":"integrations/prefect-dbt/cli/configs/base/#prefect_dbt.cli.configs.base.DbtConfigs","title":"DbtConfigs","text":"

        Bases: Block, ABC

        Abstract class for other dbt Configs.

        Attributes:

        Name Type Description extras Optional[Dict[str, Any]]

        Extra target configs' keywords, not yet exposed in prefect-dbt, but available in dbt; if there are duplicate keys between extras and TargetConfigs, an error will be raised.

        Source code in prefect_dbt/cli/configs/base.py
        class DbtConfigs(Block, abc.ABC):\n    \"\"\"\n    Abstract class for other dbt Configs.\n\n    Attributes:\n        extras: Extra target configs' keywords, not yet exposed\n            in prefect-dbt, but available in dbt; if there are\n            duplicate keys between extras and TargetConfigs,\n            an error will be raised.\n    \"\"\"\n\n    extras: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=(\n            \"Extra target configs' keywords, not yet exposed in prefect-dbt, \"\n            \"but available in dbt.\"\n        ),\n    )\n    allow_field_overrides: bool = Field(\n        default=False,\n        description=(\n            \"If enabled, fields from dbt target configs will override \"\n            \"fields provided in extras and credentials.\"\n        ),\n    )\n    _documentation_url = \"https://prefecthq.github.io/prefect-dbt/cli/configs/base/#prefect_dbt.cli.configs.base.DbtConfigs\"  # noqa\n\n    def _populate_configs_json(\n        self,\n        configs_json: Dict[str, Any],\n        fields: Dict[str, Any],\n        model: BaseModel = None,\n    ) -> Dict[str, Any]:\n        \"\"\"\n        Recursively populate configs_json.\n        \"\"\"\n        # if allow_field_overrides is True keys from TargetConfigs take precedence\n        override_configs_json = {}\n\n        for field_name, field in fields.items():\n            if model is not None:\n                # get actual value from model\n                try:\n                    field_value = getattr(model, field_name)\n                except AttributeError:\n                    field_value = getattr(model, field.alias)\n                # override the name with alias so dbt parser can recognize the keyword;\n                # e.g. 
schema_ -> schema, returns the original name if no alias is set\n                field_name = field.alias\n            else:\n                field_value = field\n\n            if field_value is None or field_name == \"allow_field_overrides\":\n                # do not add to configs json if no value or default is set\n                continue\n\n            if isinstance(field_value, BaseModel):\n                configs_json = self._populate_configs_json(\n                    configs_json, field_value.__fields__, model=field_value\n                )\n            elif field_name == \"extras\":\n                configs_json = self._populate_configs_json(\n                    configs_json,\n                    field_value,\n                )\n                override_configs_json.update(configs_json)\n            else:\n                if field_name in configs_json.keys() and not self.allow_field_overrides:\n                    raise ValueError(\n                        f\"The keyword, {field_name}, has already been provided in \"\n                        f\"TargetConfigs; remove duplicated keywords to continue\"\n                    )\n                if isinstance(field_value, SecretField):\n                    field_value = field_value.get_secret_value()\n                elif isinstance(field_value, Path):\n                    field_value = str(field_value)\n                configs_json[field_name] = field_value\n\n                if self.allow_field_overrides and model is self or model is None:\n                    override_configs_json[field_name] = field_value\n\n        configs_json.update(override_configs_json)\n        return configs_json\n\n    def get_configs(self) -> Dict[str, Any]:\n        \"\"\"\n        Returns the dbt configs, likely used eventually for writing to profiles.yml.\n\n        Returns:\n            A configs JSON.\n        \"\"\"\n        return self._populate_configs_json({}, self.__fields__, model=self)\n
        "},{"location":"integrations/prefect-dbt/cli/configs/base/#prefect_dbt.cli.configs.base.DbtConfigs.get_configs","title":"get_configs","text":"

        Returns the dbt configs, likely used eventually for writing to profiles.yml.

        Returns:

        Type Description Dict[str, Any]

        A configs JSON.

        Source code in prefect_dbt/cli/configs/base.py
        def get_configs(self) -> Dict[str, Any]:\n    \"\"\"\n    Returns the dbt configs, likely used eventually for writing to profiles.yml.\n\n    Returns:\n        A configs JSON.\n    \"\"\"\n    return self._populate_configs_json({}, self.__fields__, model=self)\n
        "},{"location":"integrations/prefect-dbt/cli/configs/base/#prefect_dbt.cli.configs.base.GlobalConfigs","title":"GlobalConfigs","text":"

        Bases: DbtConfigs

        Global configs control things like the visual output of logs, the manner in which dbt parses your project, and what to do when dbt finds a version mismatch or a failing model. Docs can be found here.

        Attributes:

        Name Type Description send_anonymous_usage_stats Optional[bool]

        Whether usage stats are sent to dbt.

        use_colors Optional[bool]

        Colorize the output it prints in your terminal.

        partial_parse Optional[bool]

        When partial parsing is enabled, dbt will use a stored internal manifest to determine which files have been changed (if any) since it last parsed the project.

        printer_width Optional[int]

        Length of characters before starting a new line.

        write_json Optional[bool]

        Determines whether dbt writes JSON artifacts to the target/ directory.

        warn_error Optional[bool]

        Whether to convert dbt warnings into errors.

        log_format Optional[str]

        The LOG_FORMAT config specifies how dbt's logs should be formatted. If the value of this config is json, dbt will output fully structured logs in JSON format.

        debug Optional[bool]

        Whether to redirect dbt's debug logs to standard out.

        version_check Optional[bool]

        Whether to raise an error if a project's version is used with an incompatible dbt version.

        fail_fast Optional[bool]

        Make dbt exit immediately if a single resource fails to build.

        use_experimental_parser Optional[bool]

        Opt into the latest experimental version of the static parser.

        static_parser Optional[bool]

        Whether to use the static parser.

        Examples:

        Load stored GlobalConfigs:

        from prefect_dbt.cli.configs import GlobalConfigs\n\ndbt_cli_global_configs = GlobalConfigs.load(\"BLOCK_NAME\")\n
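        Instantiate and save GlobalConfigs (a sketch; the field values and block name are illustrative, with field names taken from the attributes listed above):

        from prefect_dbt.cli.configs import GlobalConfigs\n\ndbt_cli_global_configs = GlobalConfigs(\n    use_colors=False,\n    fail_fast=True,\n    log_format=\"json\",\n)\ndbt_cli_global_configs.save(\"BLOCK_NAME\", overwrite=True)\n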

        Source code in prefect_dbt/cli/configs/base.py
        class GlobalConfigs(DbtConfigs):\n    \"\"\"\n    Global configs control things like the visual output\n    of logs, the manner in which dbt parses your project,\n    and what to do when dbt finds a version mismatch\n    or a failing model. Docs can be found [here](\n    https://docs.getdbt.com/reference/global-configs).\n\n    Attributes:\n        send_anonymous_usage_stats: Whether usage stats are sent to dbt.\n        use_colors: Colorize the output it prints in your terminal.\n        partial_parse: When partial parsing is enabled, dbt will use an\n            stored internal manifest to determine which files have been changed\n            (if any) since it last parsed the project.\n        printer_width: Length of characters before starting a new line.\n        write_json: Determines whether dbt writes JSON artifacts to\n            the target/ directory.\n        warn_error: Whether to convert dbt warnings into errors.\n        log_format: The LOG_FORMAT config specifies how dbt's logs should\n            be formatted. If the value of this config is json, dbt will\n            output fully structured logs in JSON format.\n        debug: Whether to redirect dbt's debug logs to standard out.\n        version_check: Whether to raise an error if a project's version\n            is used with an incompatible dbt version.\n        fail_fast: Make dbt exit immediately if a single resource fails to build.\n        use_experimental_parser: Opt into the latest experimental version\n            of the static parser.\n        static_parser: Whether to use the [static parser](\n            https://docs.getdbt.com/reference/parsing#static-parser).\n\n    Examples:\n        Load stored GlobalConfigs:\n        ```python\n        from prefect_dbt.cli.configs import GlobalConfigs\n\n        dbt_cli_global_configs = GlobalConfigs.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"dbt CLI Global Configs\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5zE9lxfzBHjw3tnEup4wWL/9a001902ed43a84c6c96d23b24622e19/dbt-bit_tm.png?h=250\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-dbt/cli/configs/base/#prefect_dbt.cli.configs.base.GlobalConfigs\"  # noqa\n\n    send_anonymous_usage_stats: Optional[bool] = Field(\n        default=None,\n        description=\"Whether usage stats are sent to dbt.\",\n    )\n    use_colors: Optional[bool] = Field(\n        default=None,\n        description=\"Colorize the output it prints in your terminal.\",\n    )\n    partial_parse: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"When partial parsing is enabled, dbt will use an \"\n            \"stored internal manifest to determine which files have been changed \"\n            \"(if any) since it last parsed the project.\"\n        ),\n    )\n    printer_width: Optional[int] = Field(\n        default=None,\n        description=\"Length of characters before starting a new line.\",\n    )\n    write_json: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Determines whether dbt writes JSON artifacts to \" \"the target/ directory.\"\n        ),\n    )\n    warn_error: Optional[bool] = Field(\n        default=None,\n        description=\"Whether to convert dbt warnings into errors.\",\n    )\n    log_format: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The LOG_FORMAT config specifies how dbt's logs should \"\n            \"be formatted. 
If the value of this config is json, dbt will \"\n            \"output fully structured logs in JSON format.\"\n        ),\n    )\n    debug: Optional[bool] = Field(\n        default=None,\n        description=\"Whether to redirect dbt's debug logs to standard out.\",\n    )\n    version_check: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Whether to raise an error if a project's version \"\n            \"is used with an incompatible dbt version.\"\n        ),\n    )\n    fail_fast: Optional[bool] = Field(\n        default=None,\n        description=(\"Make dbt exit immediately if a single resource fails to build.\"),\n    )\n    use_experimental_parser: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Opt into the latest experimental version \" \"of the static parser.\"\n        ),\n    )\n    static_parser: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Whether to use the [static parser](https://docs.getdbt.com/reference/parsing#static-parser).\"  # noqa\n        ),\n    )\n
        "},{"location":"integrations/prefect-dbt/cli/configs/base/#prefect_dbt.cli.configs.base.TargetConfigs","title":"TargetConfigs","text":"

        Bases: BaseTargetConfigs

        Target configs contain credentials and settings, specific to the warehouse you're connecting to. To find valid keys, head to the Available adapters page and click the desired adapter's \"Profile Setup\" hyperlink.

        Attributes:

        Name Type Description type

        The name of the database warehouse.

        schema

        The schema that dbt will build objects into; in BigQuery, a schema is actually a dataset.

        threads

        The number of threads representing the max number of paths through the graph dbt may work on at once.

        Examples:

        Load stored TargetConfigs:

        from prefect_dbt.cli.configs import TargetConfigs\n\ndbt_cli_target_configs = TargetConfigs.load(\"BLOCK_NAME\")\n
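        Instantiate TargetConfigs for a generic adapter (a sketch; the Postgres-style keys in extras are illustrative and should come from the adapter's Profile Setup page):

        from prefect_dbt.cli.configs import TargetConfigs\n\ntarget_configs = TargetConfigs(\n    type=\"postgres\",\n    schema=\"analytics\",\n    threads=4,\n    extras={\n        \"host\": \"localhost\",\n        \"user\": \"dbt_user\",\n        \"password\": \"dbt_password\",\n        \"port\": 5432,\n        \"dbname\": \"analytics\",\n    },\n)\n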

        Source code in prefect_dbt/cli/configs/base.py
        class TargetConfigs(BaseTargetConfigs):\n    \"\"\"\n    Target configs contain credentials and\n    settings, specific to the warehouse you're connecting to.\n    To find valid keys, head to the [Available adapters](\n    https://docs.getdbt.com/docs/available-adapters) page and\n    click the desired adapter's \"Profile Setup\" hyperlink.\n\n    Attributes:\n        type: The name of the database warehouse.\n        schema: The schema that dbt will build objects into;\n            in BigQuery, a schema is actually a dataset.\n        threads: The number of threads representing the max number\n            of paths through the graph dbt may work on at once.\n\n    Examples:\n        Load stored TargetConfigs:\n        ```python\n        from prefect_dbt.cli.configs import TargetConfigs\n\n        dbt_cli_target_configs = TargetConfigs.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"dbt CLI Target Configs\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5zE9lxfzBHjw3tnEup4wWL/9a001902ed43a84c6c96d23b24622e19/dbt-bit_tm.png?h=250\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-dbt/cli/configs/base/#prefect_dbt.cli.configs.base.TargetConfigs\"  # noqa\n
        "},{"location":"integrations/prefect-dbt/cli/configs/bigquery/","title":"BigQuery","text":""},{"location":"integrations/prefect-dbt/cli/configs/bigquery/#prefect_dbt.cli.configs.bigquery","title":"prefect_dbt.cli.configs.bigquery","text":"

        Module containing models for BigQuery configs

        "},{"location":"integrations/prefect-dbt/cli/configs/bigquery/#prefect_dbt.cli.configs.bigquery.BigQueryTargetConfigs","title":"BigQueryTargetConfigs","text":"

        Bases: BaseTargetConfigs

        Target configs contain credentials and settings specific to BigQuery. To find valid keys, head to the BigQuery Profile page.

        Attributes:

        credentials (GcpCredentials): The credentials to use to authenticate; if there are duplicate keys between credentials and TargetConfigs, e.g. schema, an error will be raised.

        Examples:

        Load stored BigQueryTargetConfigs.

        from prefect_dbt.cli.configs import BigQueryTargetConfigs\n\nbigquery_target_configs = BigQueryTargetConfigs.load(\"BLOCK_NAME\")\n

        Instantiate BigQueryTargetConfigs.

        from prefect_dbt.cli.configs import BigQueryTargetConfigs\nfrom prefect_gcp.credentials import GcpCredentials\n\ncredentials = GcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\ntarget_configs = BigQueryTargetConfigs(\n    schema=\"schema\",  # also known as dataset\n    credentials=credentials,\n)\n

        Source code in prefect_dbt/cli/configs/bigquery.py
        class BigQueryTargetConfigs(BaseTargetConfigs):\n    \"\"\"\n    Target configs contain credentials and\n    settings, specific to BigQuery.\n    To find valid keys, head to the [BigQuery Profile](\n    https://docs.getdbt.com/reference/warehouse-profiles/bigquery-profile)\n    page.\n\n    Attributes:\n        credentials: The credentials to use to authenticate; if there are\n            duplicate keys between credentials and TargetConfigs,\n            e.g. schema, an error will be raised.\n\n    Examples:\n        Load stored BigQueryTargetConfigs.\n        ```python\n        from prefect_dbt.cli.configs import BigQueryTargetConfigs\n\n        bigquery_target_configs = BigQueryTargetConfigs.load(\"BLOCK_NAME\")\n        ```\n\n        Instantiate BigQueryTargetConfigs.\n        ```python\n        from prefect_dbt.cli.configs import BigQueryTargetConfigs\n        from prefect_gcp.credentials import GcpCredentials\n\n        credentials = GcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n        target_configs = BigQueryTargetConfigs(\n            schema=\"schema\",  # also known as dataset\n            credentials=credentials,\n        )\n        ```\n    \"\"\"\n\n    _block_type_name = \"dbt CLI BigQuery Target Configs\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5zE9lxfzBHjw3tnEup4wWL/9a001902ed43a84c6c96d23b24622e19/dbt-bit_tm.png?h=250\"  # noqa\n    _description = \"dbt CLI target configs containing credentials and settings, specific to BigQuery.\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-dbt/cli/configs/bigquery/#prefect_dbt.cli.configs.bigquery.BigQueryTargetConfigs\"  # noqa\n\n    type: Literal[\"bigquery\"] = Field(\n        default=\"bigquery\", description=\"The type of target.\"\n    )\n    project: Optional[str] = Field(default=None, description=\"The project to use.\")\n    credentials: GcpCredentials = Field(\n        default_factory=GcpCredentials,\n        description=\"The credentials to use to authenticate.\",\n    )\n\n    def get_configs(self) -> Dict[str, Any]:\n        \"\"\"\n        Returns the dbt configs specific to BigQuery profile.\n\n        Returns:\n            A configs JSON.\n        \"\"\"\n        # since GcpCredentials will always define a project\n        self_copy = self.copy()\n        if self_copy.project is not None:\n            self_copy.credentials.project = None\n        all_configs_json = self._populate_configs_json(\n            {}, self_copy.__fields__, model=self_copy\n        )\n\n        # decouple prefect-gcp from prefect-dbt\n        # by mapping all the keys dbt gcp accepts\n        # https://docs.getdbt.com/reference/warehouse-setups/bigquery-setup\n        rename_keys = {\n            # dbt\n            \"type\": \"type\",\n            \"schema\": \"schema\",\n            \"threads\": \"threads\",\n            # general\n            \"dataset\": \"schema\",\n            \"method\": \"method\",\n            \"project\": \"project\",\n            # service-account\n            \"service_account_file\": \"keyfile\",\n            # service-account json\n            \"service_account_info\": \"keyfile_json\",\n            # oauth secrets\n            \"refresh_token\": \"refresh_token\",\n            \"client_id\": \"client_id\",\n            \"client_secret\": \"client_secret\",\n            \"token_uri\": \"token_uri\",\n            # optional\n            \"priority\": \"priority\",\n            \"timeout_seconds\": \"timeout_seconds\",\n            \"location\": 
\"location\",\n            \"maximum_bytes_billed\": \"maximum_bytes_billed\",\n            \"scopes\": \"scopes\",\n            \"impersonate_service_account\": \"impersonate_service_account\",\n            \"execution_project\": \"execution_project\",\n        }\n        configs_json = {}\n        extras = self.extras or {}\n        for key in all_configs_json.keys():\n            if key not in rename_keys and key not in extras:\n                # skip invalid keys\n                continue\n            # rename key to something dbt profile expects\n            dbt_key = rename_keys.get(key) or key\n            configs_json[dbt_key] = all_configs_json[key]\n\n        if \"keyfile_json\" in configs_json:\n            configs_json[\"method\"] = \"service-account-json\"\n        elif \"keyfile\" in configs_json:\n            configs_json[\"method\"] = \"service-account\"\n            configs_json[\"keyfile\"] = str(configs_json[\"keyfile\"])\n        else:\n            configs_json[\"method\"] = \"oauth-secrets\"\n            # through gcloud application-default login\n            google_credentials = (\n                self_copy.credentials.get_credentials_from_service_account()\n            )\n            if hasattr(google_credentials, \"token\"):\n                request = Request()\n                google_credentials.refresh(request)\n                configs_json[\"token\"] = google_credentials.token\n            else:\n                for key in (\"refresh_token\", \"client_id\", \"client_secret\", \"token_uri\"):\n                    configs_json[key] = getattr(google_credentials, key)\n\n        if \"project\" not in configs_json:\n            raise ValueError(\n                \"The keyword, project, must be provided in either \"\n                \"GcpCredentials or BigQueryTargetConfigs\"\n            )\n        return configs_json\n
        "},{"location":"integrations/prefect-dbt/cli/configs/bigquery/#prefect_dbt.cli.configs.bigquery.BigQueryTargetConfigs.get_configs","title":"get_configs","text":"

        Returns the dbt configs specific to BigQuery profile.

        Returns:

        Dict[str, Any]: A configs JSON.

        Source code in prefect_dbt/cli/configs/bigquery.py
        def get_configs(self) -> Dict[str, Any]:\n    \"\"\"\n    Returns the dbt configs specific to BigQuery profile.\n\n    Returns:\n        A configs JSON.\n    \"\"\"\n    # since GcpCredentials will always define a project\n    self_copy = self.copy()\n    if self_copy.project is not None:\n        self_copy.credentials.project = None\n    all_configs_json = self._populate_configs_json(\n        {}, self_copy.__fields__, model=self_copy\n    )\n\n    # decouple prefect-gcp from prefect-dbt\n    # by mapping all the keys dbt gcp accepts\n    # https://docs.getdbt.com/reference/warehouse-setups/bigquery-setup\n    rename_keys = {\n        # dbt\n        \"type\": \"type\",\n        \"schema\": \"schema\",\n        \"threads\": \"threads\",\n        # general\n        \"dataset\": \"schema\",\n        \"method\": \"method\",\n        \"project\": \"project\",\n        # service-account\n        \"service_account_file\": \"keyfile\",\n        # service-account json\n        \"service_account_info\": \"keyfile_json\",\n        # oauth secrets\n        \"refresh_token\": \"refresh_token\",\n        \"client_id\": \"client_id\",\n        \"client_secret\": \"client_secret\",\n        \"token_uri\": \"token_uri\",\n        # optional\n        \"priority\": \"priority\",\n        \"timeout_seconds\": \"timeout_seconds\",\n        \"location\": \"location\",\n        \"maximum_bytes_billed\": \"maximum_bytes_billed\",\n        \"scopes\": \"scopes\",\n        \"impersonate_service_account\": \"impersonate_service_account\",\n        \"execution_project\": \"execution_project\",\n    }\n    configs_json = {}\n    extras = self.extras or {}\n    for key in all_configs_json.keys():\n        if key not in rename_keys and key not in extras:\n            # skip invalid keys\n            continue\n        # rename key to something dbt profile expects\n        dbt_key = rename_keys.get(key) or key\n        configs_json[dbt_key] = all_configs_json[key]\n\n    if \"keyfile_json\" in configs_json:\n        configs_json[\"method\"] = \"service-account-json\"\n    elif \"keyfile\" in configs_json:\n        configs_json[\"method\"] = \"service-account\"\n        configs_json[\"keyfile\"] = str(configs_json[\"keyfile\"])\n    else:\n        configs_json[\"method\"] = \"oauth-secrets\"\n        # through gcloud application-default login\n        google_credentials = (\n            self_copy.credentials.get_credentials_from_service_account()\n        )\n        if hasattr(google_credentials, \"token\"):\n            request = Request()\n            google_credentials.refresh(request)\n            configs_json[\"token\"] = google_credentials.token\n        else:\n            for key in (\"refresh_token\", \"client_id\", \"client_secret\", \"token_uri\"):\n                configs_json[key] = getattr(google_credentials, key)\n\n    if \"project\" not in configs_json:\n        raise ValueError(\n            \"The keyword, project, must be provided in either \"\n            \"GcpCredentials or BigQueryTargetConfigs\"\n        )\n    return configs_json\n
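
        A short sketch of what the mapping above produces at runtime; the block name is a placeholder and the printed keys are simply the ones get_configs always sets:

        from prefect_dbt.cli.configs import BigQueryTargetConfigs\n\nbigquery_target_configs = BigQueryTargetConfigs.load(\"BLOCK_NAME\")\n\n# Keys are renamed to what the dbt BigQuery profile expects and \"method\"\n# is selected from the credentials provided, as in the source above.\nprofile_target = bigquery_target_configs.get_configs()\nprint(profile_target[\"type\"], profile_target[\"method\"])\n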
        "},{"location":"integrations/prefect-dbt/cli/configs/postgres/","title":"Postgres","text":""},{"location":"integrations/prefect-dbt/cli/configs/postgres/#prefect_dbt.cli.configs.postgres","title":"prefect_dbt.cli.configs.postgres","text":"

        Module containing models for Postgres configs

        "},{"location":"integrations/prefect-dbt/cli/configs/postgres/#prefect_dbt.cli.configs.postgres.PostgresTargetConfigs","title":"PostgresTargetConfigs","text":"

        Bases: BaseTargetConfigs

        Target configs contain credentials and settings specific to Postgres. To find valid keys, head to the Postgres Profile page.

        Attributes:

        credentials (Union[SqlAlchemyConnector, DatabaseCredentials]): The credentials to use to authenticate; if there are duplicate keys between credentials and TargetConfigs, e.g. schema, an error will be raised.

        Examples:

        Load stored PostgresTargetConfigs:

        from prefect_dbt.cli.configs import PostgresTargetConfigs\n\npostgres_target_configs = PostgresTargetConfigs.load(\"BLOCK_NAME\")\n

        Instantiate PostgresTargetConfigs with DatabaseCredentials.

        from prefect_dbt.cli.configs import PostgresTargetConfigs\nfrom prefect_sqlalchemy import DatabaseCredentials, SyncDriver\n\ncredentials = DatabaseCredentials(\n    driver=SyncDriver.POSTGRESQL_PSYCOPG2,\n    username=\"prefect\",\n    password=\"prefect_password\",\n    database=\"postgres\",\n    host=\"host\",\n    port=8080\n)\ntarget_configs = PostgresTargetConfigs(credentials=credentials, schema=\"schema\")\n

        Source code in prefect_dbt/cli/configs/postgres.py
        class PostgresTargetConfigs(BaseTargetConfigs):\n    \"\"\"\n    Target configs contain credentials and\n    settings, specific to Postgres.\n    To find valid keys, head to the [Postgres Profile](\n    https://docs.getdbt.com/reference/warehouse-profiles/postgres-profile)\n    page.\n\n    Attributes:\n        credentials: The credentials to use to authenticate; if there are\n            duplicate keys between credentials and TargetConfigs,\n            e.g. schema, an error will be raised.\n\n    Examples:\n        Load stored PostgresTargetConfigs:\n        ```python\n        from prefect_dbt.cli.configs import PostgresTargetConfigs\n\n        postgres_target_configs = PostgresTargetConfigs.load(\"BLOCK_NAME\")\n        ```\n\n        Instantiate PostgresTargetConfigs with DatabaseCredentials.\n        ```python\n        from prefect_dbt.cli.configs import PostgresTargetConfigs\n        from prefect_sqlalchemy import DatabaseCredentials, SyncDriver\n\n        credentials = DatabaseCredentials(\n            driver=SyncDriver.POSTGRESQL_PSYCOPG2,\n            username=\"prefect\",\n            password=\"prefect_password\",\n            database=\"postgres\",\n            host=\"host\",\n            port=8080\n        )\n        target_configs = PostgresTargetConfigs(credentials=credentials, schema=\"schema\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"dbt CLI Postgres Target Configs\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5zE9lxfzBHjw3tnEup4wWL/9a001902ed43a84c6c96d23b24622e19/dbt-bit_tm.png?h=250\"  # noqa\n    _description = \"dbt CLI target configs containing credentials and settings specific to Postgres.\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-dbt/cli/configs/postgres/#prefect_dbt.cli.configs.postgres.PostgresTargetConfigs\"  # noqa\n\n    type: Literal[\"postgres\"] = Field(\n        default=\"postgres\", description=\"The type of the target.\"\n    )\n    credentials: Union[SqlAlchemyConnector, DatabaseCredentials] = Field(\n        default=...,\n        description=(\n            \"The credentials to use to authenticate; if there are duplicate keys \"\n            \"between credentials and TargetConfigs, e.g. 
schema, \"\n            \"an error will be raised.\"\n        ),\n    )  # noqa\n\n    def get_configs(self) -> Dict[str, Any]:\n        \"\"\"\n        Returns the dbt configs specific to Postgres profile.\n\n        Returns:\n            A configs JSON.\n        \"\"\"\n        if isinstance(self.credentials, DatabaseCredentials):\n            warnings.warn(\n                \"Using DatabaseCredentials is deprecated and will be removed \"\n                \"on May 7th, 2023, use SqlAlchemyConnector instead.\",\n                DeprecationWarning,\n            )\n        all_configs_json = super().get_configs()\n\n        rename_keys = {\n            # dbt\n            \"type\": \"type\",\n            \"schema\": \"schema\",\n            \"threads\": \"threads\",\n            # general\n            \"host\": \"host\",\n            \"username\": \"user\",\n            \"password\": \"password\",\n            \"port\": \"port\",\n            \"database\": \"dbname\",\n            # optional\n            \"keepalives_idle\": \"keepalives_idle\",\n            \"connect_timeout\": \"connect_timeout\",\n            \"retries\": \"retries\",\n            \"search_path\": \"search_path\",\n            \"role\": \"role\",\n            \"sslmode\": \"sslmode\",\n        }\n\n        configs_json = {}\n        extras = self.extras or {}\n        for key in all_configs_json.keys():\n            if key not in rename_keys and key not in extras:\n                # skip invalid keys, like fetch_size + poll_frequency_s\n                continue\n            # rename key to something dbt profile expects\n            dbt_key = rename_keys.get(key) or key\n            configs_json[dbt_key] = all_configs_json[key]\n        port = configs_json.get(\"port\")\n        if port is not None:\n            configs_json[\"port\"] = int(port)\n        return configs_json\n
        "},{"location":"integrations/prefect-dbt/cli/configs/postgres/#prefect_dbt.cli.configs.postgres.PostgresTargetConfigs.get_configs","title":"get_configs","text":"

        Returns the dbt configs specific to Postgres profile.

        Returns:

        Dict[str, Any]: A configs JSON.

        Source code in prefect_dbt/cli/configs/postgres.py
        def get_configs(self) -> Dict[str, Any]:\n    \"\"\"\n    Returns the dbt configs specific to Postgres profile.\n\n    Returns:\n        A configs JSON.\n    \"\"\"\n    if isinstance(self.credentials, DatabaseCredentials):\n        warnings.warn(\n            \"Using DatabaseCredentials is deprecated and will be removed \"\n            \"on May 7th, 2023, use SqlAlchemyConnector instead.\",\n            DeprecationWarning,\n        )\n    all_configs_json = super().get_configs()\n\n    rename_keys = {\n        # dbt\n        \"type\": \"type\",\n        \"schema\": \"schema\",\n        \"threads\": \"threads\",\n        # general\n        \"host\": \"host\",\n        \"username\": \"user\",\n        \"password\": \"password\",\n        \"port\": \"port\",\n        \"database\": \"dbname\",\n        # optional\n        \"keepalives_idle\": \"keepalives_idle\",\n        \"connect_timeout\": \"connect_timeout\",\n        \"retries\": \"retries\",\n        \"search_path\": \"search_path\",\n        \"role\": \"role\",\n        \"sslmode\": \"sslmode\",\n    }\n\n    configs_json = {}\n    extras = self.extras or {}\n    for key in all_configs_json.keys():\n        if key not in rename_keys and key not in extras:\n            # skip invalid keys, like fetch_size + poll_frequency_s\n            continue\n        # rename key to something dbt profile expects\n        dbt_key = rename_keys.get(key) or key\n        configs_json[dbt_key] = all_configs_json[key]\n    port = configs_json.get(\"port\")\n    if port is not None:\n        configs_json[\"port\"] = int(port)\n    return configs_json\n
        "},{"location":"integrations/prefect-dbt/cli/configs/snowflake/","title":"Snowflake","text":""},{"location":"integrations/prefect-dbt/cli/configs/snowflake/#prefect_dbt.cli.configs.snowflake","title":"prefect_dbt.cli.configs.snowflake","text":"

        Module containing models for Snowflake configs

        "},{"location":"integrations/prefect-dbt/cli/configs/snowflake/#prefect_dbt.cli.configs.snowflake.SnowflakeTargetConfigs","title":"SnowflakeTargetConfigs","text":"

        Bases: BaseTargetConfigs

        Target configs contain credentials and settings specific to Snowflake. To find valid keys, head to the Snowflake Profile page.

        Attributes:

        connector (SnowflakeConnector): The connector to use.

        Examples:

        Load stored SnowflakeTargetConfigs:

        from prefect_dbt.cli.configs import SnowflakeTargetConfigs\n\nsnowflake_target_configs = SnowflakeTargetConfigs.load(\"BLOCK_NAME\")\n

        Instantiate SnowflakeTargetConfigs.

        from prefect_dbt.cli.configs import SnowflakeTargetConfigs\nfrom prefect_snowflake.credentials import SnowflakeCredentials\nfrom prefect_snowflake.database import SnowflakeConnector\n\ncredentials = SnowflakeCredentials(\n    user=\"user\",\n    password=\"password\",\n    account=\"account.region.aws\",\n    role=\"role\",\n)\nconnector = SnowflakeConnector(\n    schema=\"public\",\n    database=\"database\",\n    warehouse=\"warehouse\",\n    credentials=credentials,\n)\ntarget_configs = SnowflakeTargetConfigs(\n    connector=connector,\n    extras={\"retry_on_database_errors\": True},\n)\n

        Source code in prefect_dbt/cli/configs/snowflake.py
        class SnowflakeTargetConfigs(BaseTargetConfigs):\n    \"\"\"\n    Target configs contain credentials and\n    settings, specific to Snowflake.\n    To find valid keys, head to the [Snowflake Profile](\n    https://docs.getdbt.com/reference/warehouse-profiles/snowflake-profile)\n    page.\n\n    Attributes:\n        connector: The connector to use.\n\n    Examples:\n        Load stored SnowflakeTargetConfigs:\n        ```python\n        from prefect_dbt.cli.configs import SnowflakeTargetConfigs\n\n        snowflake_target_configs = SnowflakeTargetConfigs.load(\"BLOCK_NAME\")\n        ```\n\n        Instantiate SnowflakeTargetConfigs.\n        ```python\n        from prefect_dbt.cli.configs import SnowflakeTargetConfigs\n        from prefect_snowflake.credentials import SnowflakeCredentials\n        from prefect_snowflake.database import SnowflakeConnector\n\n        credentials = SnowflakeCredentials(\n            user=\"user\",\n            password=\"password\",\n            account=\"account.region.aws\",\n            role=\"role\",\n        )\n        connector = SnowflakeConnector(\n            schema=\"public\",\n            database=\"database\",\n            warehouse=\"warehouse\",\n            credentials=credentials,\n        )\n        target_configs = SnowflakeTargetConfigs(\n            connector=connector,\n            extras={\"retry_on_database_errors\": True},\n        )\n        ```\n    \"\"\"\n\n    _block_type_name = \"dbt CLI Snowflake Target Configs\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5zE9lxfzBHjw3tnEup4wWL/9a001902ed43a84c6c96d23b24622e19/dbt-bit_tm.png?h=250\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-dbt/cli/configs/snowflake/#prefect_dbt.cli.configs.snowflake.SnowflakeTargetConfigs\"  # noqa\n\n    type: Literal[\"snowflake\"] = Field(\n        default=\"snowflake\", description=\"The type of the target configs.\"\n    )\n    schema_: Optional[str] = Field(\n        default=None,\n        alias=\"schema\",\n        description=\"The schema to use for the target configs.\",\n    )\n    connector: SnowflakeConnector = Field(\n        default=..., description=\"The connector to use.\"\n    )\n\n    def get_configs(self) -> Dict[str, Any]:\n        \"\"\"\n        Returns the dbt configs specific to Snowflake profile.\n\n        Returns:\n            A configs JSON.\n        \"\"\"\n        all_configs_json = super().get_configs()\n\n        # decouple prefect-snowflake from prefect-dbt\n        # by mapping all the keys dbt snowflake accepts\n        # https://docs.getdbt.com/reference/warehouse-setups/snowflake-setup\n        rename_keys = {\n            # dbt\n            \"type\": \"type\",\n            \"schema\": \"schema\",\n            \"threads\": \"threads\",\n            # general\n            \"account\": \"account\",\n            \"user\": \"user\",\n            \"role\": \"role\",\n            \"database\": \"database\",\n            \"warehouse\": \"warehouse\",\n            # user and password\n            \"password\": \"password\",\n            # duo mfa / sso\n            \"authenticator\": \"authenticator\",\n            # key pair\n            \"private_key_path\": \"private_key_path\",\n            \"private_key_passphrase\": \"private_key_passphrase\",\n            # optional\n            \"client_session_keep_alive\": \"client_session_keep_alive\",\n            \"query_tag\": \"query_tag\",\n            \"connect_retries\": \"connect_retries\",\n            \"connect_timeout\": 
\"connect_timeout\",\n            \"retry_on_database_errors\": \"retry_on_database_errors\",\n            \"retry_all\": \"retry_all\",\n        }\n        configs_json = {}\n        extras = self.extras or {}\n        for key in all_configs_json.keys():\n            if key not in rename_keys and key not in extras:\n                # skip invalid keys, like fetch_size + poll_frequency_s\n                continue\n            # rename key to something dbt profile expects\n            dbt_key = rename_keys.get(key) or key\n            configs_json[dbt_key] = all_configs_json[key]\n        return configs_json\n
        "},{"location":"integrations/prefect-dbt/cli/configs/snowflake/#prefect_dbt.cli.configs.snowflake.SnowflakeTargetConfigs.get_configs","title":"get_configs","text":"

        Returns the dbt configs specific to Snowflake profile.

        Returns:

        Dict[str, Any]: A configs JSON.

        Source code in prefect_dbt/cli/configs/snowflake.py
        def get_configs(self) -> Dict[str, Any]:\n    \"\"\"\n    Returns the dbt configs specific to Snowflake profile.\n\n    Returns:\n        A configs JSON.\n    \"\"\"\n    all_configs_json = super().get_configs()\n\n    # decouple prefect-snowflake from prefect-dbt\n    # by mapping all the keys dbt snowflake accepts\n    # https://docs.getdbt.com/reference/warehouse-setups/snowflake-setup\n    rename_keys = {\n        # dbt\n        \"type\": \"type\",\n        \"schema\": \"schema\",\n        \"threads\": \"threads\",\n        # general\n        \"account\": \"account\",\n        \"user\": \"user\",\n        \"role\": \"role\",\n        \"database\": \"database\",\n        \"warehouse\": \"warehouse\",\n        # user and password\n        \"password\": \"password\",\n        # duo mfa / sso\n        \"authenticator\": \"authenticator\",\n        # key pair\n        \"private_key_path\": \"private_key_path\",\n        \"private_key_passphrase\": \"private_key_passphrase\",\n        # optional\n        \"client_session_keep_alive\": \"client_session_keep_alive\",\n        \"query_tag\": \"query_tag\",\n        \"connect_retries\": \"connect_retries\",\n        \"connect_timeout\": \"connect_timeout\",\n        \"retry_on_database_errors\": \"retry_on_database_errors\",\n        \"retry_all\": \"retry_all\",\n    }\n    configs_json = {}\n    extras = self.extras or {}\n    for key in all_configs_json.keys():\n        if key not in rename_keys and key not in extras:\n            # skip invalid keys, like fetch_size + poll_frequency_s\n            continue\n        # rename key to something dbt profile expects\n        dbt_key = rename_keys.get(key) or key\n        configs_json[dbt_key] = all_configs_json[key]\n    return configs_json\n
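
        A brief sketch of the filter above: recognized keys are renamed for the dbt Snowflake profile, while anything supplied through extras (such as retry_on_database_errors in the earlier example) passes through unchanged. The block name is a placeholder:

        from prefect_dbt.cli.configs import SnowflakeTargetConfigs\n\nsnowflake_target_configs = SnowflakeTargetConfigs.load(\"BLOCK_NAME\")\nprofile_target = snowflake_target_configs.get_configs()\n\n# Extras keys survive the rename filter alongside the standard profile keys.\nprint(profile_target.get(\"retry_on_database_errors\"))\n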
        "},{"location":"integrations/prefect-dbt/cloud/clients/","title":"Clients","text":""},{"location":"integrations/prefect-dbt/cloud/clients/#prefect_dbt.cloud.clients","title":"prefect_dbt.cloud.clients","text":"

        Module containing clients for interacting with the dbt Cloud API

        "},{"location":"integrations/prefect-dbt/cloud/clients/#prefect_dbt.cloud.clients.DbtCloudAdministrativeClient","title":"DbtCloudAdministrativeClient","text":"

        Client for interacting with the dbt cloud Administrative API.

        Parameters:

        api_key (str, required): API key to authenticate with the dbt Cloud administrative API.

        account_id (int, required): ID of dbt Cloud account with which to interact.

        domain (str, default 'cloud.getdbt.com'): Domain at which the dbt Cloud API is hosted.

        Source code in prefect_dbt/cloud/clients.py
        class DbtCloudAdministrativeClient:\n    \"\"\"\n    Client for interacting with the dbt cloud Administrative API.\n\n    Args:\n        api_key: API key to authenticate with the dbt Cloud administrative API.\n        account_id: ID of dbt Cloud account with which to interact.\n        domain: Domain at which the dbt Cloud API is hosted.\n    \"\"\"\n\n    def __init__(self, api_key: str, account_id: int, domain: str = \"cloud.getdbt.com\"):\n        self._closed = False\n        self._started = False\n\n        self._admin_client = AsyncClient(\n            headers={\n                \"Authorization\": f\"Bearer {api_key}\",\n                \"user-agent\": f\"prefect-{prefect.__version__}\",\n                \"x-dbt-partner-source\": \"prefect\",\n            },\n            base_url=f\"https://{domain}/api/v2/accounts/{account_id}\",\n        )\n\n    async def call_endpoint(\n        self,\n        http_method: str,\n        path: str,\n        params: Optional[Dict[str, Any]] = None,\n        json: Optional[Dict[str, Any]] = None,\n    ) -> Response:\n        \"\"\"\n        Call an endpoint in the dbt Cloud API.\n\n        Args:\n            path: The partial path for the request (e.g. /projects/). Will be appended\n                onto the base URL as determined by the client configuration.\n            http_method: HTTP method to call on the endpoint.\n            params: Query parameters to include in the request.\n            json: JSON serializable body to send in the request.\n\n        Returns:\n            The response from the dbt Cloud administrative API.\n        \"\"\"\n        response = await self._admin_client.request(\n            method=http_method, url=path, params=params, json=json\n        )\n\n        response.raise_for_status()\n\n        return response\n\n    async def get_job(\n        self,\n        job_id: int,\n        order_by: Optional[str] = None,\n    ) -> Response:\n        \"\"\"\n        Return job details for a job on an account.\n\n        Args:\n            job_id: Numeric ID of the job.\n            order_by: Field to order the result by. 
Use - to indicate reverse order.\n\n        Returns:\n            The response from the dbt Cloud administrative API.\n        \"\"\"  # noqa\n        params = {\"order_by\": order_by} if order_by else None\n        return await self.call_endpoint(\n            path=f\"/jobs/{job_id}/\", http_method=\"GET\", params=params\n        )\n\n    async def trigger_job_run(\n        self, job_id: int, options: Optional[TriggerJobRunOptions] = None\n    ) -> Response:\n        \"\"\"\n        Sends a request to the [trigger job run endpoint](https://docs.getdbt.com/dbt-cloud/api-v2#tag/Jobs/operation/triggerRun)\n        to initiate a job run.\n\n        Args:\n            job_id: The ID of the job to trigger.\n            options: An optional TriggerJobRunOptions instance to specify overrides for the triggered job run.\n\n        Returns:\n            The response from the dbt Cloud administrative API.\n        \"\"\"  # noqa\n        if options is None:\n            options = TriggerJobRunOptions()\n\n        return await self.call_endpoint(\n            path=f\"/jobs/{job_id}/run/\",\n            http_method=\"POST\",\n            json=options.dict(exclude_none=True),\n        )\n\n    async def get_run(\n        self,\n        run_id: int,\n        include_related: Optional[\n            List[Literal[\"trigger\", \"job\", \"debug_logs\", \"run_steps\"]]\n        ] = None,\n    ) -> Response:\n        \"\"\"\n        Sends a request to the [get run endpoint](https://docs.getdbt.com/dbt-cloud/api-v2#tag/Runs/operation/getRunById)\n        to get details about a job run.\n\n        Args:\n            run_id: The ID of the run to get details for.\n            include_related: List of related fields to pull with the run.\n                Valid values are \"trigger\", \"job\", \"debug_logs\", and \"run_steps\".\n                If \"debug_logs\" is not provided in a request, then the included debug\n                logs will be truncated to the last 1,000 lines of the debug log output file.\n\n        Returns:\n            The response from the dbt Cloud administrative API.\n        \"\"\"  # noqa\n        params = {\"include_related\": include_related} if include_related else None\n        return await self.call_endpoint(\n            path=f\"/runs/{run_id}/\", http_method=\"GET\", params=params\n        )\n\n    async def list_run_artifacts(\n        self, run_id: int, step: Optional[int] = None\n    ) -> Response:\n        \"\"\"\n        Sends a request to the [list run artifacts endpoint](https://docs.getdbt.com/dbt-cloud/api-v2#tag/Runs/operation/listArtifactsByRunId)\n        to fetch a list of paths of artifacts generated for a completed run.\n\n        Args:\n            run_id: The ID of the run to list run artifacts for.\n            step: The index of the step in the run to query for artifacts. The\n                first step in the run has the index 1. 
If the step parameter is\n                omitted, then this method will return the artifacts compiled\n                for the last step in the run.\n\n        Returns:\n            The response from the dbt Cloud administrative API.\n        \"\"\"  # noqa\n        params = {\"step\": step} if step else None\n        return await self.call_endpoint(\n            path=f\"/runs/{run_id}/artifacts/\", http_method=\"GET\", params=params\n        )\n\n    async def get_run_artifact(\n        self, run_id: int, path: str, step: Optional[int] = None\n    ) -> Response:\n        \"\"\"\n        Sends a request to the [get run artifact endpoint](https://docs.getdbt.com/dbt-cloud/api-v2#tag/Runs/operation/getArtifactsByRunId)\n        to fetch an artifact generated for a completed run.\n\n        Args:\n            run_id: The ID of the run to list run artifacts for.\n            path: The relative path to the run artifact (e.g. manifest.json, catalog.json,\n                run_results.json)\n            step: The index of the step in the run to query for artifacts. The\n                first step in the run has the index 1. If the step parameter is\n                omitted, then this method will return the artifacts compiled\n                for the last step in the run.\n\n        Returns:\n            The response from the dbt Cloud administrative API.\n        \"\"\"  # noqa\n        params = {\"step\": step} if step else None\n        return await self.call_endpoint(\n            path=f\"/runs/{run_id}/artifacts/{path}\", http_method=\"GET\", params=params\n        )\n\n    async def __aenter__(self):\n        if self._closed:\n            raise RuntimeError(\n                \"The client cannot be started again after it has been closed.\"\n            )\n        if self._started:\n            raise RuntimeError(\"The client cannot be started more than once.\")\n\n        self._started = True\n\n        return self\n\n    async def __aexit__(self, *exc):\n        self._closed = True\n        await self._admin_client.__aexit__()\n
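
        A minimal sketch of using the client directly as an async context manager, following the lifecycle methods defined above; the API key, account ID, and job ID are placeholders:

        import asyncio\n\nfrom prefect_dbt.cloud.clients import DbtCloudAdministrativeClient\n\n\nasync def get_job_details(job_id: int):\n    client = DbtCloudAdministrativeClient(api_key=\"my_api_key\", account_id=123456789)\n    async with client:\n        response = await client.get_job(job_id)\n    return response.json()\n\n\nasyncio.run(get_job_details(1))\n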
        "},{"location":"integrations/prefect-dbt/cloud/clients/#prefect_dbt.cloud.clients.DbtCloudAdministrativeClient.call_endpoint","title":"call_endpoint async","text":"

        Call an endpoint in the dbt Cloud API.

        Parameters:

        path (str, required): The partial path for the request (e.g. /projects/). Will be appended onto the base URL as determined by the client configuration.

        http_method (str, required): HTTP method to call on the endpoint.

        params (Optional[Dict[str, Any]], default None): Query parameters to include in the request.

        json (Optional[Dict[str, Any]], default None): JSON serializable body to send in the request.

        Returns:

        Response: The response from the dbt Cloud administrative API.

        Source code in prefect_dbt/cloud/clients.py
        async def call_endpoint(\n    self,\n    http_method: str,\n    path: str,\n    params: Optional[Dict[str, Any]] = None,\n    json: Optional[Dict[str, Any]] = None,\n) -> Response:\n    \"\"\"\n    Call an endpoint in the dbt Cloud API.\n\n    Args:\n        path: The partial path for the request (e.g. /projects/). Will be appended\n            onto the base URL as determined by the client configuration.\n        http_method: HTTP method to call on the endpoint.\n        params: Query parameters to include in the request.\n        json: JSON serializable body to send in the request.\n\n    Returns:\n        The response from the dbt Cloud administrative API.\n    \"\"\"\n    response = await self._admin_client.request(\n        method=http_method, url=path, params=params, json=json\n    )\n\n    response.raise_for_status()\n\n    return response\n
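
        For endpoints without a dedicated helper, call_endpoint can be used directly; a sketch using the \"/projects/\" path mentioned in the docstring (the API key and account ID are placeholders):

        import asyncio\n\nfrom prefect_dbt.cloud.clients import DbtCloudAdministrativeClient\n\n\nasync def list_projects():\n    client = DbtCloudAdministrativeClient(api_key=\"my_api_key\", account_id=123456789)\n    async with client:\n        # \"/projects/\" is the example path from the docstring above.\n        response = await client.call_endpoint(http_method=\"GET\", path=\"/projects/\")\n    return response.json()\n\n\nasyncio.run(list_projects())\n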
        "},{"location":"integrations/prefect-dbt/cloud/clients/#prefect_dbt.cloud.clients.DbtCloudAdministrativeClient.get_job","title":"get_job async","text":"

        Return job details for a job on an account.

        Parameters:

        job_id (int, required): Numeric ID of the job.

        order_by (Optional[str], default None): Field to order the result by. Use - to indicate reverse order.

        Returns:

        Response: The response from the dbt Cloud administrative API.

        Source code in prefect_dbt/cloud/clients.py
        async def get_job(\n    self,\n    job_id: int,\n    order_by: Optional[str] = None,\n) -> Response:\n    \"\"\"\n    Return job details for a job on an account.\n\n    Args:\n        job_id: Numeric ID of the job.\n        order_by: Field to order the result by. Use - to indicate reverse order.\n\n    Returns:\n        The response from the dbt Cloud administrative API.\n    \"\"\"  # noqa\n    params = {\"order_by\": order_by} if order_by else None\n    return await self.call_endpoint(\n        path=f\"/jobs/{job_id}/\", http_method=\"GET\", params=params\n    )\n
        "},{"location":"integrations/prefect-dbt/cloud/clients/#prefect_dbt.cloud.clients.DbtCloudAdministrativeClient.get_run","title":"get_run async","text":"

        Sends a request to the get run endpoint to get details about a job run.

        Parameters:

        run_id (int, required): The ID of the run to get details for.

        include_related (Optional[List[Literal['trigger', 'job', 'debug_logs', 'run_steps']]], default None): List of related fields to pull with the run. Valid values are \"trigger\", \"job\", \"debug_logs\", and \"run_steps\". If \"debug_logs\" is not provided in a request, then the included debug logs will be truncated to the last 1,000 lines of the debug log output file.

        Returns:

        Response: The response from the dbt Cloud administrative API.

        Source code in prefect_dbt/cloud/clients.py
        async def get_run(\n    self,\n    run_id: int,\n    include_related: Optional[\n        List[Literal[\"trigger\", \"job\", \"debug_logs\", \"run_steps\"]]\n    ] = None,\n) -> Response:\n    \"\"\"\n    Sends a request to the [get run endpoint](https://docs.getdbt.com/dbt-cloud/api-v2#tag/Runs/operation/getRunById)\n    to get details about a job run.\n\n    Args:\n        run_id: The ID of the run to get details for.\n        include_related: List of related fields to pull with the run.\n            Valid values are \"trigger\", \"job\", \"debug_logs\", and \"run_steps\".\n            If \"debug_logs\" is not provided in a request, then the included debug\n            logs will be truncated to the last 1,000 lines of the debug log output file.\n\n    Returns:\n        The response from the dbt Cloud administrative API.\n    \"\"\"  # noqa\n    params = {\"include_related\": include_related} if include_related else None\n    return await self.call_endpoint(\n        path=f\"/runs/{run_id}/\", http_method=\"GET\", params=params\n    )\n
        "},{"location":"integrations/prefect-dbt/cloud/clients/#prefect_dbt.cloud.clients.DbtCloudAdministrativeClient.get_run_artifact","title":"get_run_artifact async","text":"

        Sends a request to the get run artifact endpoint to fetch an artifact generated for a completed run.

        Parameters:

        run_id (int, required): The ID of the run to list run artifacts for.

        path (str, required): The relative path to the run artifact (e.g. manifest.json, catalog.json, run_results.json).

        step (Optional[int], default None): The index of the step in the run to query for artifacts. The first step in the run has the index 1. If the step parameter is omitted, then this method will return the artifacts compiled for the last step in the run.

        Returns:

        Response: The response from the dbt Cloud administrative API.

        Source code in prefect_dbt/cloud/clients.py
        async def get_run_artifact(\n    self, run_id: int, path: str, step: Optional[int] = None\n) -> Response:\n    \"\"\"\n    Sends a request to the [get run artifact endpoint](https://docs.getdbt.com/dbt-cloud/api-v2#tag/Runs/operation/getArtifactsByRunId)\n    to fetch an artifact generated for a completed run.\n\n    Args:\n        run_id: The ID of the run to list run artifacts for.\n        path: The relative path to the run artifact (e.g. manifest.json, catalog.json,\n            run_results.json)\n        step: The index of the step in the run to query for artifacts. The\n            first step in the run has the index 1. If the step parameter is\n            omitted, then this method will return the artifacts compiled\n            for the last step in the run.\n\n    Returns:\n        The response from the dbt Cloud administrative API.\n    \"\"\"  # noqa\n    params = {\"step\": step} if step else None\n    return await self.call_endpoint(\n        path=f\"/runs/{run_id}/artifacts/{path}\", http_method=\"GET\", params=params\n    )\n
        "},{"location":"integrations/prefect-dbt/cloud/clients/#prefect_dbt.cloud.clients.DbtCloudAdministrativeClient.list_run_artifacts","title":"list_run_artifacts async","text":"

        Sends a request to the list run artifacts endpoint to fetch a list of paths of artifacts generated for a completed run.

        Parameters:

        run_id (int, required): The ID of the run to list run artifacts for.

        step (Optional[int], default None): The index of the step in the run to query for artifacts. The first step in the run has the index 1. If the step parameter is omitted, then this method will return the artifacts compiled for the last step in the run.

        Returns:

        Response: The response from the dbt Cloud administrative API.

        Source code in prefect_dbt/cloud/clients.py
        async def list_run_artifacts(\n    self, run_id: int, step: Optional[int] = None\n) -> Response:\n    \"\"\"\n    Sends a request to the [list run artifacts endpoint](https://docs.getdbt.com/dbt-cloud/api-v2#tag/Runs/operation/listArtifactsByRunId)\n    to fetch a list of paths of artifacts generated for a completed run.\n\n    Args:\n        run_id: The ID of the run to list run artifacts for.\n        step: The index of the step in the run to query for artifacts. The\n            first step in the run has the index 1. If the step parameter is\n            omitted, then this method will return the artifacts compiled\n            for the last step in the run.\n\n    Returns:\n        The response from the dbt Cloud administrative API.\n    \"\"\"  # noqa\n    params = {\"step\": step} if step else None\n    return await self.call_endpoint(\n        path=f\"/runs/{run_id}/artifacts/\", http_method=\"GET\", params=params\n    )\n
        "},{"location":"integrations/prefect-dbt/cloud/clients/#prefect_dbt.cloud.clients.DbtCloudAdministrativeClient.trigger_job_run","title":"trigger_job_run async","text":"

        Sends a request to the trigger job run endpoint to initiate a job run.

        Parameters:

        job_id (int, required): The ID of the job to trigger.

        options (Optional[TriggerJobRunOptions], default None): An optional TriggerJobRunOptions instance to specify overrides for the triggered job run.

        Returns:

        Response: The response from the dbt Cloud administrative API.

        Source code in prefect_dbt/cloud/clients.py
        async def trigger_job_run(\n    self, job_id: int, options: Optional[TriggerJobRunOptions] = None\n) -> Response:\n    \"\"\"\n    Sends a request to the [trigger job run endpoint](https://docs.getdbt.com/dbt-cloud/api-v2#tag/Jobs/operation/triggerRun)\n    to initiate a job run.\n\n    Args:\n        job_id: The ID of the job to trigger.\n        options: An optional TriggerJobRunOptions instance to specify overrides for the triggered job run.\n\n    Returns:\n        The response from the dbt Cloud administrative API.\n    \"\"\"  # noqa\n    if options is None:\n        options = TriggerJobRunOptions()\n\n    return await self.call_endpoint(\n        path=f\"/jobs/{job_id}/run/\",\n        http_method=\"POST\",\n        json=options.dict(exclude_none=True),\n    )\n
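
        A sketch of passing overrides when triggering a run. TriggerJobRunOptions is assumed to be importable from prefect_dbt.cloud.models, and the specific override fields shown (git_branch, schema_override, threads_override) are assumptions mirroring the dbt Cloud v2 trigger body; the credentials and job ID are placeholders:

        import asyncio\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.models import TriggerJobRunOptions\n\n\nasync def trigger_with_overrides(job_id: int):\n    credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n    # Override fields shown here mirror the dbt Cloud v2 trigger body.\n    options = TriggerJobRunOptions(\n        git_branch=\"staging\",\n        schema_override=\"dbt_cloud_pr_123\",\n        threads_override=8,\n    )\n    async with credentials.get_administrative_client() as client:\n        return await client.trigger_job_run(job_id=job_id, options=options)\n\n\nasyncio.run(trigger_with_overrides(1))\n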
        "},{"location":"integrations/prefect-dbt/cloud/clients/#prefect_dbt.cloud.clients.DbtCloudMetadataClient","title":"DbtCloudMetadataClient","text":"

        Client for interacting with the dbt Cloud metadata API.

        Parameters:

        api_key (str, required): API key to authenticate with the dbt Cloud metadata API.

        domain (str, default 'metadata.cloud.getdbt.com'): Domain at which the dbt Cloud API is hosted.

        Source code in prefect_dbt/cloud/clients.py
        class DbtCloudMetadataClient:\n    \"\"\"\n    Client for interacting with the dbt Cloud metadata API.\n\n    Args:\n        api_key: API key to authenticate with the dbt Cloud metadata API.\n        domain: Domain at which the dbt Cloud API is hosted.\n    \"\"\"\n\n    def __init__(self, api_key: str, domain: str = \"metadata.cloud.getdbt.com\"):\n        self._http_endpoint = HTTPEndpoint(\n            base_headers={\n                \"Authorization\": f\"Bearer {api_key}\",\n                \"user-agent\": f\"prefect-{prefect.__version__}\",\n                \"x-dbt-partner-source\": \"prefect\",\n                \"content-type\": \"application/json\",\n            },\n            url=f\"https://{domain}/graphql\",\n        )\n\n    def query(\n        self,\n        query: str,\n        variables: Optional[Dict] = None,\n        operation_name: Optional[str] = None,\n    ) -> Dict[str, Any]:\n        \"\"\"\n        Run a GraphQL query against the dbt Cloud metadata API.\n\n        Args:\n            query: The GraphQL query to run.\n            variables: The values of any variables defined in the GraphQL query.\n            operation_name: The name of the operation to run if multiple operations\n                are defined in the provided query.\n\n        Returns:\n            The result of the GraphQL query.\n        \"\"\"\n        return self._http_endpoint(\n            query=query, variables=variables, operation_name=operation_name\n        )\n
        "},{"location":"integrations/prefect-dbt/cloud/clients/#prefect_dbt.cloud.clients.DbtCloudMetadataClient.query","title":"query","text":"

        Run a GraphQL query against the dbt Cloud metadata API.

        Parameters:

        query (str, required): The GraphQL query to run.

        variables (Optional[Dict], default None): The values of any variables defined in the GraphQL query.

        operation_name (Optional[str], default None): The name of the operation to run if multiple operations are defined in the provided query.

        Returns:

        Dict[str, Any]: The result of the GraphQL query.

        Source code in prefect_dbt/cloud/clients.py
        def query(\n    self,\n    query: str,\n    variables: Optional[Dict] = None,\n    operation_name: Optional[str] = None,\n) -> Dict[str, Any]:\n    \"\"\"\n    Run a GraphQL query against the dbt Cloud metadata API.\n\n    Args:\n        query: The GraphQL query to run.\n        variables: The values of any variables defined in the GraphQL query.\n        operation_name: The name of the operation to run if multiple operations\n            are defined in the provided query.\n\n    Returns:\n        The result of the GraphQL query.\n    \"\"\"\n    return self._http_endpoint(\n        query=query, variables=variables, operation_name=operation_name\n    )\n
        "},{"location":"integrations/prefect-dbt/cloud/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-dbt/cloud/credentials/#prefect_dbt.cloud.credentials","title":"prefect_dbt.cloud.credentials","text":"

        Module containing credentials for interacting with dbt Cloud

        "},{"location":"integrations/prefect-dbt/cloud/credentials/#prefect_dbt.cloud.credentials.DbtCloudCredentials","title":"DbtCloudCredentials","text":"

        Bases: CredentialsBlock

        Credentials block for credential use across dbt Cloud tasks and flows.

        Attributes:

        api_key (SecretStr): API key to authenticate with the dbt Cloud administrative API. Refer to the Authentication docs for retrieving the API key.

        account_id (int): ID of dbt Cloud account with which to interact.

        domain (Optional[str]): Domain at which the dbt Cloud API is hosted.

        Examples:

        Load stored dbt Cloud credentials:

        from prefect_dbt.cloud import DbtCloudCredentials\n\ndbt_cloud_credentials = DbtCloudCredentials.load(\"BLOCK_NAME\")\n

        Use DbtCloudCredentials instance to trigger a job run:

        from prefect_dbt.cloud import DbtCloudCredentials\n\ncredentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\nasync with credentials.get_administrative_client() as client:\n    await client.trigger_job_run(job_id=1)\n

        Load saved dbt Cloud credentials within a flow:

        from prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run\n\n\n@flow\ndef trigger_dbt_cloud_job_run_flow():\n    credentials = DbtCloudCredentials.load(\"my-dbt-credentials\")\n    trigger_dbt_cloud_job_run(dbt_cloud_credentials=credentials, job_id=1)\n\ntrigger_dbt_cloud_job_run_flow()\n

        Source code in prefect_dbt/cloud/credentials.py
        class DbtCloudCredentials(CredentialsBlock):\n    \"\"\"\n    Credentials block for credential use across dbt Cloud tasks and flows.\n\n    Attributes:\n        api_key (SecretStr): API key to authenticate with the dbt Cloud\n            administrative API. Refer to the [Authentication docs](\n            https://docs.getdbt.com/dbt-cloud/api-v2#section/Authentication)\n            for retrieving the API key.\n        account_id (int): ID of dbt Cloud account with which to interact.\n        domain (Optional[str]): Domain at which the dbt Cloud API is hosted.\n\n    Examples:\n        Load stored dbt Cloud credentials:\n        ```python\n        from prefect_dbt.cloud import DbtCloudCredentials\n\n        dbt_cloud_credentials = DbtCloudCredentials.load(\"BLOCK_NAME\")\n        ```\n\n        Use DbtCloudCredentials instance to trigger a job run:\n        ```python\n        from prefect_dbt.cloud import DbtCloudCredentials\n\n        credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n        async with dbt_cloud_credentials.get_administrative_client() as client:\n            client.trigger_job_run(job_id=1)\n        ```\n\n        Load saved dbt Cloud credentials within a flow:\n        ```python\n        from prefect import flow\n\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run\n\n\n        @flow\n        def trigger_dbt_cloud_job_run_flow():\n            credentials = DbtCloudCredentials.load(\"my-dbt-credentials\")\n            trigger_dbt_cloud_job_run(dbt_cloud_credentials=credentials, job_id=1)\n\n        trigger_dbt_cloud_job_run_flow()\n        ```\n    \"\"\"\n\n    _block_type_name = \"dbt Cloud Credentials\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5zE9lxfzBHjw3tnEup4wWL/9a001902ed43a84c6c96d23b24622e19/dbt-bit_tm.png?h=250\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-dbt/cloud/credentials/#prefect_dbt.cloud.credentials.DbtCloudCredentials\"  # noqa\n\n    api_key: SecretStr = Field(\n        default=...,\n        title=\"API Key\",\n        description=\"A dbt Cloud API key to use for authentication.\",\n    )\n    account_id: int = Field(\n        default=..., title=\"Account ID\", description=\"The ID of your dbt Cloud account.\"\n    )\n    domain: str = Field(\n        default=\"cloud.getdbt.com\",\n        description=\"The base domain of your dbt Cloud instance.\",\n    )\n\n    def get_administrative_client(self) -> DbtCloudAdministrativeClient:\n        \"\"\"\n        Returns a newly instantiated client for working with the dbt Cloud\n        administrative API.\n\n        Returns:\n            An authenticated dbt Cloud administrative API client.\n        \"\"\"\n        return DbtCloudAdministrativeClient(\n            api_key=self.api_key.get_secret_value(),\n            account_id=self.account_id,\n            domain=self.domain,\n        )\n\n    def get_metadata_client(self) -> DbtCloudMetadataClient:\n        \"\"\"\n        Returns a newly instantiated client for working with the dbt Cloud\n        metadata API.\n\n        Example:\n            Sending queries via the returned metadata client:\n            ```python\n            from prefect_dbt import DbtCloudCredentials\n\n            credentials_block = DbtCloudCredentials.load(\"test-account\")\n            metadata_client = credentials_block.get_metadata_client()\n            query = \\\"\\\"\\\"\n            {\n                
metrics(jobId: 123) {\n                    uniqueId\n                    name\n                    packageName\n                    tags\n                    label\n                    runId\n                    description\n                    type\n                    sql\n                    timestamp\n                    timeGrains\n                    dimensions\n                    meta\n                    resourceType\n                    filters {\n                        field\n                        operator\n                        value\n                    }\n                    model {\n                        name\n                    }\n                }\n            }\n            \\\"\\\"\\\"\n            metadata_client.query(query)\n            # Result:\n            # {\n            #   \"data\": {\n            #     \"metrics\": [\n            #       {\n            #         \"uniqueId\": \"metric.tpch.total_revenue\",\n            #         \"name\": \"total_revenue\",\n            #         \"packageName\": \"tpch\",\n            #         \"tags\": [],\n            #         \"label\": \"Total Revenue ($)\",\n            #         \"runId\": 108952046,\n            #         \"description\": \"\",\n            #         \"type\": \"sum\",\n            #         \"sql\": \"net_item_sales_amount\",\n            #         \"timestamp\": \"order_date\",\n            #         \"timeGrains\": [\"day\", \"week\", \"month\"],\n            #         \"dimensions\": [\"status_code\", \"priority_code\"],\n            #         \"meta\": {},\n            #         \"resourceType\": \"metric\",\n            #         \"filters\": [],\n            #         \"model\": { \"name\": \"fct_orders\" }\n            #       }\n            #     ]\n            #   }\n            # }\n            ```\n\n        Returns:\n            An authenticated dbt Cloud metadata API client.\n        \"\"\"\n        return DbtCloudMetadataClient(\n            api_key=self.api_key.get_secret_value(),\n            domain=f\"metadata.{self.domain}\",\n        )\n\n    def get_client(\n        self, client_type: Literal[\"administrative\", \"metadata\"]\n    ) -> Union[DbtCloudAdministrativeClient, DbtCloudMetadataClient]:\n        \"\"\"\n        Returns a newly instantiated client for working with the dbt Cloud API.\n\n        Args:\n            client_type: Type of client to return. Accepts either 'administrative'\n                or 'metadata'.\n\n        Returns:\n            The authenticated client of the requested type.\n        \"\"\"\n        get_client_method = getattr(self, f\"get_{client_type}_client\", None)\n        if get_client_method is None:\n            raise ValueError(f\"'{client_type}' is not a supported client type.\")\n        return get_client_method()\n
        "},{"location":"integrations/prefect-dbt/cloud/credentials/#prefect_dbt.cloud.credentials.DbtCloudCredentials.get_administrative_client","title":"get_administrative_client","text":"

        Returns a newly instantiated client for working with the dbt Cloud administrative API.

        Returns:

        DbtCloudAdministrativeClient: An authenticated dbt Cloud administrative API client.

        Source code in prefect_dbt/cloud/credentials.py
        def get_administrative_client(self) -> DbtCloudAdministrativeClient:\n    \"\"\"\n    Returns a newly instantiated client for working with the dbt Cloud\n    administrative API.\n\n    Returns:\n        An authenticated dbt Cloud administrative API client.\n    \"\"\"\n    return DbtCloudAdministrativeClient(\n        api_key=self.api_key.get_secret_value(),\n        account_id=self.account_id,\n        domain=self.domain,\n    )\n
        "},{"location":"integrations/prefect-dbt/cloud/credentials/#prefect_dbt.cloud.credentials.DbtCloudCredentials.get_client","title":"get_client","text":"

        Returns a newly instantiated client for working with the dbt Cloud API.

        Parameters:

        Name Type Description Default client_type Literal['administrative', 'metadata']

        Type of client to return. Accepts either 'administrative' or 'metadata'.

        required

        Returns:

        Type Description Union[DbtCloudAdministrativeClient, DbtCloudMetadataClient]

        The authenticated client of the requested type.
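
        As a rough illustration (the saved block name is a placeholder), the same credentials block can return either client type, and an unsupported type raises a ValueError:

        from prefect_dbt import DbtCloudCredentials\n\ncredentials_block = DbtCloudCredentials.load(\"dbt-token\")  # placeholder saved block name\n\nadmin_client = credentials_block.get_client(\"administrative\")\nmetadata_client = credentials_block.get_client(\"metadata\")\n\ntry:\n    credentials_block.get_client(\"graphql\")\nexcept ValueError as exc:\n    print(exc)  # 'graphql' is not a supported client type.\n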

        Source code in prefect_dbt/cloud/credentials.py
        def get_client(\n    self, client_type: Literal[\"administrative\", \"metadata\"]\n) -> Union[DbtCloudAdministrativeClient, DbtCloudMetadataClient]:\n    \"\"\"\n    Returns a newly instantiated client for working with the dbt Cloud API.\n\n    Args:\n        client_type: Type of client to return. Accepts either 'administrative'\n            or 'metadata'.\n\n    Returns:\n        The authenticated client of the requested type.\n    \"\"\"\n    get_client_method = getattr(self, f\"get_{client_type}_client\", None)\n    if get_client_method is None:\n        raise ValueError(f\"'{client_type}' is not a supported client type.\")\n    return get_client_method()\n
        "},{"location":"integrations/prefect-dbt/cloud/credentials/#prefect_dbt.cloud.credentials.DbtCloudCredentials.get_metadata_client","title":"get_metadata_client","text":"

        Returns a newly instantiated client for working with the dbt Cloud metadata API.

        Example

        Sending queries via the returned metadata client:

        from prefect_dbt import DbtCloudCredentials\n\ncredentials_block = DbtCloudCredentials.load(\"test-account\")\nmetadata_client = credentials_block.get_metadata_client()\nquery = \"\"\"\n{\n    metrics(jobId: 123) {\n        uniqueId\n        name\n        packageName\n        tags\n        label\n        runId\n        description\n        type\n        sql\n        timestamp\n        timeGrains\n        dimensions\n        meta\n        resourceType\n        filters {\n            field\n            operator\n            value\n        }\n        model {\n            name\n        }\n    }\n}\n\"\"\"\nmetadata_client.query(query)\n# Result:\n# {\n#   \"data\": {\n#     \"metrics\": [\n#       {\n#         \"uniqueId\": \"metric.tpch.total_revenue\",\n#         \"name\": \"total_revenue\",\n#         \"packageName\": \"tpch\",\n#         \"tags\": [],\n#         \"label\": \"Total Revenue ($)\",\n#         \"runId\": 108952046,\n#         \"description\": \"\",\n#         \"type\": \"sum\",\n#         \"sql\": \"net_item_sales_amount\",\n#         \"timestamp\": \"order_date\",\n#         \"timeGrains\": [\"day\", \"week\", \"month\"],\n#         \"dimensions\": [\"status_code\", \"priority_code\"],\n#         \"meta\": {},\n#         \"resourceType\": \"metric\",\n#         \"filters\": [],\n#         \"model\": { \"name\": \"fct_orders\" }\n#       }\n#     ]\n#   }\n# }\n

        Returns:

        Type Description DbtCloudMetadataClient

        An authenticated dbt Cloud metadata API client.

        Source code in prefect_dbt/cloud/credentials.py
        def get_metadata_client(self) -> DbtCloudMetadataClient:\n    \"\"\"\n    Returns a newly instantiated client for working with the dbt Cloud\n    metadata API.\n\n    Example:\n        Sending queries via the returned metadata client:\n        ```python\n        from prefect_dbt import DbtCloudCredentials\n\n        credentials_block = DbtCloudCredentials.load(\"test-account\")\n        metadata_client = credentials_block.get_metadata_client()\n        query = \\\"\\\"\\\"\n        {\n            metrics(jobId: 123) {\n                uniqueId\n                name\n                packageName\n                tags\n                label\n                runId\n                description\n                type\n                sql\n                timestamp\n                timeGrains\n                dimensions\n                meta\n                resourceType\n                filters {\n                    field\n                    operator\n                    value\n                }\n                model {\n                    name\n                }\n            }\n        }\n        \\\"\\\"\\\"\n        metadata_client.query(query)\n        # Result:\n        # {\n        #   \"data\": {\n        #     \"metrics\": [\n        #       {\n        #         \"uniqueId\": \"metric.tpch.total_revenue\",\n        #         \"name\": \"total_revenue\",\n        #         \"packageName\": \"tpch\",\n        #         \"tags\": [],\n        #         \"label\": \"Total Revenue ($)\",\n        #         \"runId\": 108952046,\n        #         \"description\": \"\",\n        #         \"type\": \"sum\",\n        #         \"sql\": \"net_item_sales_amount\",\n        #         \"timestamp\": \"order_date\",\n        #         \"timeGrains\": [\"day\", \"week\", \"month\"],\n        #         \"dimensions\": [\"status_code\", \"priority_code\"],\n        #         \"meta\": {},\n        #         \"resourceType\": \"metric\",\n        #         \"filters\": [],\n        #         \"model\": { \"name\": \"fct_orders\" }\n        #       }\n        #     ]\n        #   }\n        # }\n        ```\n\n    Returns:\n        An authenticated dbt Cloud metadata API client.\n    \"\"\"\n    return DbtCloudMetadataClient(\n        api_key=self.api_key.get_secret_value(),\n        domain=f\"metadata.{self.domain}\",\n    )\n
        "},{"location":"integrations/prefect-dbt/cloud/jobs/","title":"Jobs","text":""},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs","title":"prefect_dbt.cloud.jobs","text":"

        Module containing tasks and flows for interacting with dbt Cloud jobs.

        "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.DbtCloudJob","title":"DbtCloudJob","text":"

        Bases: JobBlock

        Block that holds the information and methods to interact with a dbt Cloud job.

        Attributes:

        Name Type Description dbt_cloud_credentials DbtCloudCredentials

        The credentials to use to authenticate with dbt Cloud.

        job_id int

        The id of the dbt Cloud job.

        timeout_seconds int

        The number of seconds to wait for the job to complete.

        interval_seconds int

        The number of seconds to wait between polling for job completion.

        trigger_job_run_options TriggerJobRunOptions

        The options to use when triggering a job run.

        Examples:

        Load a configured dbt Cloud job block.

        from prefect_dbt.cloud import DbtCloudJob\n\ndbt_cloud_job = DbtCloudJob.load(\"BLOCK_NAME\")\n

        Triggers a dbt Cloud job, waits for completion, and fetches the results.

        from prefect import flow\nfrom prefect_dbt.cloud import DbtCloudCredentials, DbtCloudJob\n\n@flow\ndef dbt_cloud_job_flow():\n    dbt_cloud_credentials = DbtCloudCredentials.load(\"dbt-token\")\n    dbt_cloud_job = DbtCloudJob(\n        dbt_cloud_credentials=dbt_cloud_credentials,\n        job_id=154217\n    )\n    dbt_cloud_job_run = dbt_cloud_job.trigger()\n    dbt_cloud_job_run.wait_for_completion()\n    dbt_cloud_job_run.fetch_result()\n    return dbt_cloud_job_run\n\ndbt_cloud_job_flow()\n

        Source code in prefect_dbt/cloud/jobs.py
        class DbtCloudJob(JobBlock):\n    \"\"\"\n    Block that holds the information and methods to interact with a dbt Cloud job.\n\n    Attributes:\n        dbt_cloud_credentials: The credentials to use to authenticate with dbt Cloud.\n        job_id: The id of the dbt Cloud job.\n        timeout_seconds: The number of seconds to wait for the job to complete.\n        interval_seconds:\n            The number of seconds to wait between polling for job completion.\n        trigger_job_run_options: The options to use when triggering a job run.\n\n    Examples:\n        Load a configured dbt Cloud job block.\n        ```python\n        from prefect_dbt.cloud import DbtCloudJob\n\n        dbt_cloud_job = DbtCloudJob.load(\"BLOCK_NAME\")\n        ```\n\n        Triggers a dbt Cloud job, waits for completion, and fetches the results.\n        ```python\n        from prefect import flow\n        from prefect_dbt.cloud import DbtCloudCredentials, DbtCloudJob\n\n        @flow\n        def dbt_cloud_job_flow():\n            dbt_cloud_credentials = DbtCloudCredentials.load(\"dbt-token\")\n            dbt_cloud_job = DbtCloudJob.load(\n                dbt_cloud_credentials=dbt_cloud_credentials,\n                job_id=154217\n            )\n            dbt_cloud_job_run = dbt_cloud_job.trigger()\n            dbt_cloud_job_run.wait_for_completion()\n            dbt_cloud_job_run.fetch_result()\n            return dbt_cloud_job_run\n\n        dbt_cloud_job_flow()\n        ```\n    \"\"\"\n\n    _block_type_name = \"dbt Cloud Job\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5zE9lxfzBHjw3tnEup4wWL/9a001902ed43a84c6c96d23b24622e19/dbt-bit_tm.png?h=250\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.DbtCloudJob\"  # noqa\n\n    dbt_cloud_credentials: DbtCloudCredentials = Field(\n        default=...,\n        description=\"The dbt Cloud credentials to use to authenticate with dbt Cloud.\",\n    )  # noqa: E501\n    job_id: int = Field(\n        default=..., description=\"The id of the dbt Cloud job.\", title=\"Job ID\"\n    )\n    timeout_seconds: int = Field(\n        default=900,\n        description=\"The number of seconds to wait for the job to complete.\",\n    )\n    interval_seconds: int = Field(\n        default=10,\n        description=\"The number of seconds to wait between polling for job completion.\",\n    )\n    trigger_job_run_options: TriggerJobRunOptions = Field(\n        default_factory=TriggerJobRunOptions,\n        description=\"The options to use when triggering a job run.\",\n    )\n\n    @sync_compatible\n    async def get_job(self, order_by: Optional[str] = None) -> Dict[str, Any]:\n        \"\"\"\n        Retrieve information about a dbt Cloud job.\n\n        Args:\n            order_by: The field to order the results by.\n\n        Returns:\n            The job data.\n        \"\"\"\n        try:\n            async with self.dbt_cloud_credentials.get_administrative_client() as client:\n                response = await client.get_job(\n                    job_id=self.job_id,\n                    order_by=order_by,\n                )\n        except HTTPStatusError as ex:\n            raise DbtCloudGetJobFailed(extract_user_message(ex)) from ex\n        return response.json()[\"data\"]\n\n    @sync_compatible\n    async def trigger(\n        self, trigger_job_run_options: Optional[TriggerJobRunOptions] = None\n    ) -> DbtCloudJobRun:\n        \"\"\"\n        Triggers a dbt Cloud job.\n\n        
Returns:\n            A representation of the dbt Cloud job run.\n        \"\"\"\n        try:\n            trigger_job_run_options = (\n                trigger_job_run_options or self.trigger_job_run_options\n            )\n            async with self.dbt_cloud_credentials.get_administrative_client() as client:\n                response = await client.trigger_job_run(\n                    job_id=self.job_id, options=trigger_job_run_options\n                )\n        except HTTPStatusError as ex:\n            raise DbtCloudJobRunTriggerFailed(extract_user_message(ex)) from ex\n\n        run_data = response.json()[\"data\"]\n        run_id = run_data.get(\"id\")\n        run = DbtCloudJobRun(\n            dbt_cloud_job=self,\n            run_id=run_id,\n        )\n        self.logger.info(\n            f\"dbt Cloud job {self.job_id} run {run_id} successfully triggered. \"\n            f\"You can view the status of this run at \"\n            f\"https://{self.dbt_cloud_credentials.domain}/#/accounts/\"\n            f\"{self.dbt_cloud_credentials.account_id}/projects/\"\n            f\"{run_data['project_id']}/runs/{run_id}/\"\n        )\n        return run\n
        "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.DbtCloudJob.get_job","title":"get_job async","text":"

        Retrieve information about a dbt Cloud job.

        Parameters:

        Name Type Description Default order_by Optional[str]

        The field to order the results by.

        None

        Returns:

        Type Description Dict[str, Any]

        The job data.
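
        For instance, a small sketch (the saved block name is a placeholder) that loads a saved job block and inspects the returned job data:

        from prefect_dbt.cloud import DbtCloudJob\n\ndbt_cloud_job = DbtCloudJob.load(\"my-dbt-job\")  # placeholder saved block name\njob_data = dbt_cloud_job.get_job()  # sync-compatible, so it can be called synchronously here\nprint(job_data.get(\"id\"), job_data.get(\"name\"))  # fields depend on the dbt Cloud API response\n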

        Source code in prefect_dbt/cloud/jobs.py
        @sync_compatible\nasync def get_job(self, order_by: Optional[str] = None) -> Dict[str, Any]:\n    \"\"\"\n    Retrieve information about a dbt Cloud job.\n\n    Args:\n        order_by: The field to order the results by.\n\n    Returns:\n        The job data.\n    \"\"\"\n    try:\n        async with self.dbt_cloud_credentials.get_administrative_client() as client:\n            response = await client.get_job(\n                job_id=self.job_id,\n                order_by=order_by,\n            )\n    except HTTPStatusError as ex:\n        raise DbtCloudGetJobFailed(extract_user_message(ex)) from ex\n    return response.json()[\"data\"]\n
        "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.DbtCloudJob.trigger","title":"trigger async","text":"

        Triggers a dbt Cloud job.

        Returns:

        Type Description DbtCloudJobRun

        A representation of the dbt Cloud job run.
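
        For example, a hedged sketch of triggering a run with per-call overrides (the block name, flow name, and branch are placeholders):

        from prefect import flow\nfrom prefect_dbt.cloud import DbtCloudJob\nfrom prefect_dbt.cloud.models import TriggerJobRunOptions\n\n@flow\ndef trigger_with_overrides():\n    dbt_cloud_job = DbtCloudJob.load(\"my-dbt-job\")  # placeholder saved block name\n    run = dbt_cloud_job.trigger(\n        trigger_job_run_options=TriggerJobRunOptions(git_branch=\"staging\")\n    )\n    return run.run_id\n\ntrigger_with_overrides()\n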

        Source code in prefect_dbt/cloud/jobs.py
        @sync_compatible\nasync def trigger(\n    self, trigger_job_run_options: Optional[TriggerJobRunOptions] = None\n) -> DbtCloudJobRun:\n    \"\"\"\n    Triggers a dbt Cloud job.\n\n    Returns:\n        A representation of the dbt Cloud job run.\n    \"\"\"\n    try:\n        trigger_job_run_options = (\n            trigger_job_run_options or self.trigger_job_run_options\n        )\n        async with self.dbt_cloud_credentials.get_administrative_client() as client:\n            response = await client.trigger_job_run(\n                job_id=self.job_id, options=trigger_job_run_options\n            )\n    except HTTPStatusError as ex:\n        raise DbtCloudJobRunTriggerFailed(extract_user_message(ex)) from ex\n\n    run_data = response.json()[\"data\"]\n    run_id = run_data.get(\"id\")\n    run = DbtCloudJobRun(\n        dbt_cloud_job=self,\n        run_id=run_id,\n    )\n    self.logger.info(\n        f\"dbt Cloud job {self.job_id} run {run_id} successfully triggered. \"\n        f\"You can view the status of this run at \"\n        f\"https://{self.dbt_cloud_credentials.domain}/#/accounts/\"\n        f\"{self.dbt_cloud_credentials.account_id}/projects/\"\n        f\"{run_data['project_id']}/runs/{run_id}/\"\n    )\n    return run\n
        "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.DbtCloudJobRun","title":"DbtCloudJobRun","text":"

        Bases: JobRun

        Class that holds the information and methods to interact with the resulting run of a dbt Cloud job.

        Source code in prefect_dbt/cloud/jobs.py
        class DbtCloudJobRun(JobRun):  # NOT A BLOCK\n    \"\"\"\n    Class that holds the information and methods to interact\n    with the resulting run of a dbt Cloud job.\n    \"\"\"\n\n    def __init__(self, run_id: int, dbt_cloud_job: \"DbtCloudJob\"):\n        self.run_id = run_id\n        self._dbt_cloud_job = dbt_cloud_job\n        self._dbt_cloud_credentials = dbt_cloud_job.dbt_cloud_credentials\n\n    @property\n    def _log_prefix(self):\n        return f\"dbt Cloud job {self._dbt_cloud_job.job_id} run {self.run_id}.\"\n\n    async def _wait_until_state(\n        self,\n        in_final_state_fn: Awaitable[Callable],\n        get_state_fn: Awaitable[Callable],\n        log_state_fn: Callable = None,\n        timeout_seconds: int = 60,\n        interval_seconds: int = 1,\n    ):\n        \"\"\"\n        Wait until the job run reaches a specific state.\n\n        Args:\n            in_final_state_fn: An async function that accepts a run state\n                and returns a boolean indicating whether the job run is\n                in a final state.\n            get_state_fn: An async function that returns\n                the current state of the job run.\n            log_state_fn: A callable that accepts a run\n                state and makes it human readable.\n            timeout_seconds: The maximum amount of time, in seconds, to wait\n                for the job run to reach the final state.\n            interval_seconds: The number of seconds to wait between checks of\n                the job run's state.\n        \"\"\"\n        start_time = time.time()\n        last_state = run_state = None\n        while not in_final_state_fn(run_state):\n            run_state = await get_state_fn()\n            if run_state != last_state:\n                if self.logger is not None:\n                    self.logger.info(\n                        \"%s has new state: %s\",\n                        self._log_prefix,\n                        log_state_fn(run_state),\n                    )\n                last_state = run_state\n\n            elapsed_time_seconds = time.time() - start_time\n            if elapsed_time_seconds > timeout_seconds:\n                raise DbtCloudJobRunTimedOut(\n                    f\"Max wait time of {timeout_seconds} \"\n                    \"seconds exceeded while waiting\"\n                )\n            await asyncio.sleep(interval_seconds)\n\n    @sync_compatible\n    async def get_run(self) -> Dict[str, Any]:\n        \"\"\"\n        Makes a request to the dbt Cloud API to get the run data.\n\n        Returns:\n            The run data.\n        \"\"\"\n        try:\n            dbt_cloud_credentials = self._dbt_cloud_credentials\n            async with dbt_cloud_credentials.get_administrative_client() as client:\n                response = await client.get_run(self.run_id)\n        except HTTPStatusError as ex:\n            raise DbtCloudGetRunFailed(extract_user_message(ex)) from ex\n        run_data = response.json()[\"data\"]\n        return run_data\n\n    @sync_compatible\n    async def get_status_code(self) -> int:\n        \"\"\"\n        Makes a request to the dbt Cloud API to get the run status.\n\n        Returns:\n            The run status code.\n        \"\"\"\n        run_data = await self.get_run()\n        run_status_code = run_data.get(\"status\")\n        return run_status_code\n\n    @sync_compatible\n    async def wait_for_completion(self) -> None:\n        \"\"\"\n        Waits for the job run to reach a terminal state.\n        
\"\"\"\n        await self._wait_until_state(\n            in_final_state_fn=DbtCloudJobRunStatus.is_terminal_status_code,\n            get_state_fn=self.get_status_code,\n            log_state_fn=DbtCloudJobRunStatus,\n            timeout_seconds=self._dbt_cloud_job.timeout_seconds,\n            interval_seconds=self._dbt_cloud_job.interval_seconds,\n        )\n\n    @sync_compatible\n    async def fetch_result(self, step: Optional[int] = None) -> Dict[str, Any]:\n        \"\"\"\n        Gets the results from the job run. Since the results\n        may not be ready, use wait_for_completion before calling this method.\n\n        Args:\n            step: The index of the step in the run to query for artifacts. The\n                first step in the run has the index 1. If the step parameter is\n                omitted, then this method will return the artifacts compiled\n                for the last step in the run.\n        \"\"\"\n        run_data = await self.get_run()\n        run_status = DbtCloudJobRunStatus(run_data.get(\"status\"))\n        if run_status == DbtCloudJobRunStatus.SUCCESS:\n            try:\n                async with self._dbt_cloud_credentials.get_administrative_client() as client:  # noqa\n                    response = await client.list_run_artifacts(\n                        run_id=self.run_id, step=step\n                    )\n                run_data[\"artifact_paths\"] = response.json()[\"data\"]\n                self.logger.info(\"%s completed successfully!\", self._log_prefix)\n            except HTTPStatusError as ex:\n                raise DbtCloudListRunArtifactsFailed(extract_user_message(ex)) from ex\n            return run_data\n        elif run_status == DbtCloudJobRunStatus.CANCELLED:\n            raise DbtCloudJobRunCancelled(f\"{self._log_prefix} was cancelled.\")\n        elif run_status == DbtCloudJobRunStatus.FAILED:\n            raise DbtCloudJobRunFailed(f\"{self._log_prefix} has failed.\")\n        else:\n            raise DbtCloudJobRunIncomplete(\n                f\"{self._log_prefix} is still running; \"\n                \"use wait_for_completion() to wait until results are ready.\"\n            )\n\n    @sync_compatible\n    async def get_run_artifacts(\n        self,\n        path: Literal[\"manifest.json\", \"catalog.json\", \"run_results.json\"],\n        step: Optional[int] = None,\n    ) -> Union[Dict[str, Any], str]:\n        \"\"\"\n        Get an artifact generated for a completed run.\n\n        Args:\n            path: The relative path to the run artifact.\n            step: The index of the step in the run to query for artifacts. The\n                first step in the run has the index 1. If the step parameter is\n                omitted, then this method will return the artifacts compiled\n                for the last step in the run.\n\n        Returns:\n            The contents of the requested manifest. 
Returns a `Dict` if the\n                requested artifact is a JSON file and a `str` otherwise.\n        \"\"\"\n        try:\n            dbt_cloud_credentials = self._dbt_cloud_credentials\n            async with dbt_cloud_credentials.get_administrative_client() as client:\n                response = await client.get_run_artifact(\n                    run_id=self.run_id, path=path, step=step\n                )\n        except HTTPStatusError as ex:\n            raise DbtCloudGetRunArtifactFailed(extract_user_message(ex)) from ex\n\n        if path.endswith(\".json\"):\n            artifact_contents = response.json()\n        else:\n            artifact_contents = response.text\n        return artifact_contents\n\n    def _select_unsuccessful_commands(\n        self,\n        run_results: List[Dict[str, Any]],\n        command_components: List[str],\n        command: str,\n        exe_command: str,\n    ) -> List[str]:\n        \"\"\"\n        Select nodes that were not successful and rebuild a command.\n        \"\"\"\n        # note \"fail\" here instead of \"cancelled\" because\n        # nodes do not have a cancelled state\n        run_nodes = \" \".join(\n            run_result[\"unique_id\"].split(\".\")[2]\n            for run_result in run_results\n            if run_result[\"status\"] in (\"error\", \"skipped\", \"fail\")\n        )\n\n        select_arg = None\n        if \"-s\" in command_components:\n            select_arg = \"-s\"\n        elif \"--select\" in command_components:\n            select_arg = \"--select\"\n\n        # prevent duplicate --select/-s statements\n        if select_arg is not None:\n            # dbt --fail-fast run, -s, bad_mod --vars '{\"env\": \"prod\"}' to:\n            # dbt --fail-fast run -s other_mod bad_mod --vars '{\"env\": \"prod\"}'\n            command_start, select_arg, command_end = command.partition(select_arg)\n            modified_command = f\"{command_start} {select_arg} {run_nodes} {command_end}\"  # noqa\n        else:\n            # dbt --fail-fast, build, --vars '{\"env\": \"prod\"}' to:\n            # dbt --fail-fast build --select bad_model --vars '{\"env\": \"prod\"}'\n            dbt_global_args, exe_command, exe_args = command.partition(exe_command)\n            modified_command = (\n                f\"{dbt_global_args} {exe_command} -s {run_nodes} {exe_args}\"\n            )\n        return modified_command\n\n    async def _build_trigger_job_run_options(\n        self,\n        job: Dict[str, Any],\n        run: Dict[str, Any],\n    ) -> TriggerJobRunOptions:\n        \"\"\"\n        Compiles a list of steps (commands) to retry, then either build trigger job\n        run options from scratch if it does not exist, else overrides the existing.\n        \"\"\"\n        generate_docs = job.get(\"generate_docs\", False)\n        generate_sources = job.get(\"generate_sources\", False)\n\n        steps_override = []\n        for run_step in run[\"run_steps\"]:\n            status = run_step[\"status_humanized\"].lower()\n            # Skipping cloning, profile setup, and dbt deps - always the first three\n            # steps in any run, and note, index starts at 1 instead of 0\n            if run_step[\"index\"] <= 3 or status == \"success\":\n                continue\n            # get dbt build from \"Invoke dbt with `dbt build`\"\n            command = run_step[\"name\"].partition(\"`\")[2].partition(\"`\")[0]\n\n            # These steps will be re-run regardless if\n            # generate_docs or generate_sources are 
enabled for a given job\n            # so if we don't skip, it'll run twice\n            freshness_in_command = (\n                \"dbt source snapshot-freshness\" in command\n                or \"dbt source freshness\" in command\n            )\n            if \"dbt docs generate\" in command and generate_docs:\n                continue\n            elif freshness_in_command and generate_sources:\n                continue\n\n            # find an executable command like `build` or `run`\n            # search in a list so that there aren't false positives, like\n            # `\"run\" in \"dbt run-operation\"`, which is True; we actually want\n            # `\"run\" in [\"dbt\", \"run-operation\"]` which is False\n            command_components = shlex.split(command)\n            for exe_command in EXE_COMMANDS:\n                if exe_command in command_components:\n                    break\n            else:\n                exe_command = \"\"\n\n            is_exe_command = exe_command in EXE_COMMANDS\n            is_not_success = status in (\"error\", \"skipped\", \"cancelled\")\n            is_skipped = status == \"skipped\"\n            if (not is_exe_command and is_not_success) or (\n                is_exe_command and is_skipped\n            ):\n                # if no matches like `run-operation`, we will be rerunning entirely\n                # or if it's one of the expected commands and is skipped\n                steps_override.append(command)\n            else:\n                # errors and failures are when we need to inspect to figure\n                # out the point of failure\n                try:\n                    run_artifact = await self.get_run_artifacts(\n                        \"run_results.json\", run_step[\"index\"]\n                    )\n                except JSONDecodeError:\n                    # get the run results scoped to the step which had an error\n                    # an error here indicates that either:\n                    # 1) the fail-fast flag was set, in which case\n                    #    the run_results.json file was never created; or\n                    # 2) there was a problem on dbt Cloud's side saving\n                    #    this artifact\n                    steps_override.append(command)\n                else:\n                    # we only need to find the individual nodes\n                    # for those run commands\n                    run_results = run_artifact[\"results\"]\n                    modified_command = self._select_unsuccessful_commands(\n                        run_results=run_results,\n                        command_components=command_components,\n                        command=command,\n                        exe_command=exe_command,\n                    )\n                    steps_override.append(modified_command)\n\n        if self._dbt_cloud_job.trigger_job_run_options is None:\n            trigger_job_run_options_override = TriggerJobRunOptions(\n                steps_override=steps_override\n            )\n        else:\n            trigger_job_run_options_override = (\n                self._dbt_cloud_job.trigger_job_run_options.copy()\n            )\n            trigger_job_run_options_override.steps_override = steps_override\n        return trigger_job_run_options_override\n\n    @sync_compatible\n    async def retry_failed_steps(self) -> \"DbtCloudJobRun\":  # noqa: F821\n        \"\"\"\n        Retries steps that did not complete successfully in a run.\n\n        Returns:\n            A 
representation of the dbt Cloud job run.\n        \"\"\"\n        job = await self._dbt_cloud_job.get_job()\n        run = await self.get_run()\n\n        trigger_job_run_options_override = await self._build_trigger_job_run_options(\n            job=job, run=run\n        )\n\n        num_steps = len(trigger_job_run_options_override.steps_override)\n        if num_steps == 0:\n            self.logger.info(f\"{self._log_prefix} does not have any steps to retry.\")\n        else:\n            self.logger.info(f\"{self._log_prefix} has {num_steps} steps to retry.\")\n            run = await self._dbt_cloud_job.trigger(\n                trigger_job_run_options=trigger_job_run_options_override,\n            )\n        return run\n
        "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.DbtCloudJobRun.fetch_result","title":"fetch_result async","text":"

        Gets the results from the job run. Since the results may not be ready, use wait_for_completion before calling this method.

        Parameters:

        Name Type Description Default step Optional[int]

        The index of the step in the run to query for artifacts. The first step in the run has the index 1. If the step parameter is omitted, then this method will return the artifacts compiled for the last step in the run.

        None

        Source code in prefect_dbt/cloud/jobs.py
        @sync_compatible\nasync def fetch_result(self, step: Optional[int] = None) -> Dict[str, Any]:\n    \"\"\"\n    Gets the results from the job run. Since the results\n    may not be ready, use wait_for_completion before calling this method.\n\n    Args:\n        step: The index of the step in the run to query for artifacts. The\n            first step in the run has the index 1. If the step parameter is\n            omitted, then this method will return the artifacts compiled\n            for the last step in the run.\n    \"\"\"\n    run_data = await self.get_run()\n    run_status = DbtCloudJobRunStatus(run_data.get(\"status\"))\n    if run_status == DbtCloudJobRunStatus.SUCCESS:\n        try:\n            async with self._dbt_cloud_credentials.get_administrative_client() as client:  # noqa\n                response = await client.list_run_artifacts(\n                    run_id=self.run_id, step=step\n                )\n            run_data[\"artifact_paths\"] = response.json()[\"data\"]\n            self.logger.info(\"%s completed successfully!\", self._log_prefix)\n        except HTTPStatusError as ex:\n            raise DbtCloudListRunArtifactsFailed(extract_user_message(ex)) from ex\n        return run_data\n    elif run_status == DbtCloudJobRunStatus.CANCELLED:\n        raise DbtCloudJobRunCancelled(f\"{self._log_prefix} was cancelled.\")\n    elif run_status == DbtCloudJobRunStatus.FAILED:\n        raise DbtCloudJobRunFailed(f\"{self._log_prefix} has failed.\")\n    else:\n        raise DbtCloudJobRunIncomplete(\n            f\"{self._log_prefix} is still running; \"\n            \"use wait_for_completion() to wait until results are ready.\"\n        )\n
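
        For example, a minimal sketch (block and flow names are placeholders) that waits for the run and then reads the artifact paths attached to the result:

        from prefect import flow\nfrom prefect_dbt.cloud import DbtCloudJob\n\n@flow\ndef list_run_artifact_paths():\n    dbt_cloud_job = DbtCloudJob.load(\"my-dbt-job\")  # placeholder saved block name\n    run = dbt_cloud_job.trigger()\n    run.wait_for_completion()  # results are only available once the run is terminal\n    result = run.fetch_result()  # omit step to use the last step in the run\n    return result[\"artifact_paths\"]  # added to the run data when the run succeeded\n\nlist_run_artifact_paths()\n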
        "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.DbtCloudJobRun.get_run","title":"get_run async","text":"

        Makes a request to the dbt Cloud API to get the run data.

        Returns:

        Type Description Dict[str, Any]

        The run data.
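
        For example, a sketch (block and flow names are placeholders) of pulling the raw run payload after triggering a run:

        from prefect import flow\nfrom prefect_dbt.cloud import DbtCloudJob\n\n@flow\ndef inspect_run_payload():\n    dbt_cloud_job = DbtCloudJob.load(\"my-dbt-job\")  # placeholder saved block name\n    run = dbt_cloud_job.trigger()\n    run_data = run.get_run()  # raw run payload from the administrative API\n    return run_data.get(\"id\"), run_data.get(\"status\")\n\ninspect_run_payload()\n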

        Source code in prefect_dbt/cloud/jobs.py
        @sync_compatible\nasync def get_run(self) -> Dict[str, Any]:\n    \"\"\"\n    Makes a request to the dbt Cloud API to get the run data.\n\n    Returns:\n        The run data.\n    \"\"\"\n    try:\n        dbt_cloud_credentials = self._dbt_cloud_credentials\n        async with dbt_cloud_credentials.get_administrative_client() as client:\n            response = await client.get_run(self.run_id)\n    except HTTPStatusError as ex:\n        raise DbtCloudGetRunFailed(extract_user_message(ex)) from ex\n    run_data = response.json()[\"data\"]\n    return run_data\n
        "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.DbtCloudJobRun.get_run_artifacts","title":"get_run_artifacts async","text":"

        Get an artifact generated for a completed run.

        Parameters:

        Name Type Description Default path Literal['manifest.json', 'catalog.json', 'run_results.json']

        The relative path to the run artifact.

        required step Optional[int]

        The index of the step in the run to query for artifacts. The first step in the run has the index 1. If the step parameter is omitted, then this method will return the artifacts compiled for the last step in the run.

        None

        Returns:

        Type Description Union[Dict[str, Any], str]

        The contents of the requested manifest. Returns a Dict if the requested artifact is a JSON file and a str otherwise.
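
        For example, a sketch (block and flow names are placeholders) of fetching run_results.json for a completed run and reading each node's status:

        from prefect import flow\nfrom prefect_dbt.cloud import DbtCloudJob\n\n@flow\ndef read_run_results():\n    dbt_cloud_job = DbtCloudJob.load(\"my-dbt-job\")  # placeholder saved block name\n    run = dbt_cloud_job.trigger()\n    run.wait_for_completion()\n    run_results = run.get_run_artifacts(\"run_results.json\")  # JSON artifacts come back as a dict\n    return [result[\"status\"] for result in run_results[\"results\"]]\n\nread_run_results()\n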

        Source code in prefect_dbt/cloud/jobs.py
        @sync_compatible\nasync def get_run_artifacts(\n    self,\n    path: Literal[\"manifest.json\", \"catalog.json\", \"run_results.json\"],\n    step: Optional[int] = None,\n) -> Union[Dict[str, Any], str]:\n    \"\"\"\n    Get an artifact generated for a completed run.\n\n    Args:\n        path: The relative path to the run artifact.\n        step: The index of the step in the run to query for artifacts. The\n            first step in the run has the index 1. If the step parameter is\n            omitted, then this method will return the artifacts compiled\n            for the last step in the run.\n\n    Returns:\n        The contents of the requested manifest. Returns a `Dict` if the\n            requested artifact is a JSON file and a `str` otherwise.\n    \"\"\"\n    try:\n        dbt_cloud_credentials = self._dbt_cloud_credentials\n        async with dbt_cloud_credentials.get_administrative_client() as client:\n            response = await client.get_run_artifact(\n                run_id=self.run_id, path=path, step=step\n            )\n    except HTTPStatusError as ex:\n        raise DbtCloudGetRunArtifactFailed(extract_user_message(ex)) from ex\n\n    if path.endswith(\".json\"):\n        artifact_contents = response.json()\n    else:\n        artifact_contents = response.text\n    return artifact_contents\n
        "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.DbtCloudJobRun.get_status_code","title":"get_status_code async","text":"

        Makes a request to the dbt Cloud API to get the run status.

        Returns:

        Type Description int

        The run status code.
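
        For example, a short sketch (block and flow names are placeholders); the returned integer is the same code that wait_for_completion polls internally:

        from prefect import flow\nfrom prefect_dbt.cloud import DbtCloudJob\n\n@flow\ndef check_status_code():\n    dbt_cloud_job = DbtCloudJob.load(\"my-dbt-job\")  # placeholder saved block name\n    run = dbt_cloud_job.trigger()\n    return run.get_status_code()  # integer status code as reported by dbt Cloud\n\ncheck_status_code()\n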

        Source code in prefect_dbt/cloud/jobs.py
        @sync_compatible\nasync def get_status_code(self) -> int:\n    \"\"\"\n    Makes a request to the dbt Cloud API to get the run status.\n\n    Returns:\n        The run status code.\n    \"\"\"\n    run_data = await self.get_run()\n    run_status_code = run_data.get(\"status\")\n    return run_status_code\n
        "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.DbtCloudJobRun.retry_failed_steps","title":"retry_failed_steps async","text":"

        Retries steps that did not complete successfully in a run.

        Returns:

        Type Description DbtCloudJobRun

        A representation of the dbt Cloud job run.
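
        For example, a sketch (block and flow names are placeholders) of retrying only the unsuccessful steps after a run has reached a terminal state:

        from prefect import flow\nfrom prefect_dbt.cloud import DbtCloudJob\n\n@flow\ndef retry_unsuccessful_nodes():\n    dbt_cloud_job = DbtCloudJob.load(\"my-dbt-job\")  # placeholder saved block name\n    run = dbt_cloud_job.trigger()\n    run.wait_for_completion()\n    # if any steps errored or were skipped, a follow-up run scoped to them is triggered;\n    # otherwise the original run is returned unchanged\n    retried_run = run.retry_failed_steps()\n    return retried_run.run_id\n\nretry_unsuccessful_nodes()\n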

        Source code in prefect_dbt/cloud/jobs.py
        @sync_compatible\nasync def retry_failed_steps(self) -> \"DbtCloudJobRun\":  # noqa: F821\n    \"\"\"\n    Retries steps that did not complete successfully in a run.\n\n    Returns:\n        A representation of the dbt Cloud job run.\n    \"\"\"\n    job = await self._dbt_cloud_job.get_job()\n    run = await self.get_run()\n\n    trigger_job_run_options_override = await self._build_trigger_job_run_options(\n        job=job, run=run\n    )\n\n    num_steps = len(trigger_job_run_options_override.steps_override)\n    if num_steps == 0:\n        self.logger.info(f\"{self._log_prefix} does not have any steps to retry.\")\n    else:\n        self.logger.info(f\"{self._log_prefix} has {num_steps} steps to retry.\")\n        run = await self._dbt_cloud_job.trigger(\n            trigger_job_run_options=trigger_job_run_options_override,\n        )\n    return run\n
        "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.DbtCloudJobRun.wait_for_completion","title":"wait_for_completion async","text":"

        Waits for the job run to reach a terminal state.
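
        The polling behavior comes from the parent job block; for example, a sketch (block names, the flow name, and the job ID are placeholders) that tightens the timeout and interval before waiting:

        from prefect import flow\nfrom prefect_dbt.cloud import DbtCloudCredentials, DbtCloudJob\n\n@flow\ndef run_with_custom_polling():\n    credentials = DbtCloudCredentials.load(\"dbt-token\")  # placeholder saved block name\n    dbt_cloud_job = DbtCloudJob(\n        dbt_cloud_credentials=credentials,\n        job_id=154217,  # placeholder job ID\n        timeout_seconds=1800,  # wait_for_completion gives up after this long\n        interval_seconds=30,  # and polls the run status this often\n    )\n    run = dbt_cloud_job.trigger()\n    run.wait_for_completion()\n    return run.fetch_result()\n\nrun_with_custom_polling()\n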

        Source code in prefect_dbt/cloud/jobs.py
        @sync_compatible\nasync def wait_for_completion(self) -> None:\n    \"\"\"\n    Waits for the job run to reach a terminal state.\n    \"\"\"\n    await self._wait_until_state(\n        in_final_state_fn=DbtCloudJobRunStatus.is_terminal_status_code,\n        get_state_fn=self.get_status_code,\n        log_state_fn=DbtCloudJobRunStatus,\n        timeout_seconds=self._dbt_cloud_job.timeout_seconds,\n        interval_seconds=self._dbt_cloud_job.interval_seconds,\n    )\n
        "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.get_dbt_cloud_job_info","title":"get_dbt_cloud_job_info async","text":"

        A task to retrieve information about a dbt Cloud job.

        Parameters:

        Name Type Description Default dbt_cloud_credentials DbtCloudCredentials

        Credentials for authenticating with dbt Cloud.

        required job_id int

        The ID of the job to get.

        required

        Returns:

        Type Description Dict

        The job data returned by the dbt Cloud administrative API.

        Example

        Get status of a dbt Cloud job:

        from prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import get_dbt_cloud_job_info\n\n@flow\ndef get_job_flow():\n    credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n    return get_dbt_cloud_job_info(\n        dbt_cloud_credentials=credentials,\n        job_id=42\n    )\n\nget_job_flow()\n

        Source code in prefect_dbt/cloud/jobs.py
        @task(\n    name=\"Get dbt Cloud job details\",\n    description=\"Retrieves details of a dbt Cloud job \"\n    \"for the job with the given job_id.\",\n    retries=3,\n    retry_delay_seconds=10,\n)\nasync def get_dbt_cloud_job_info(\n    dbt_cloud_credentials: DbtCloudCredentials,\n    job_id: int,\n    order_by: Optional[str] = None,\n) -> Dict:\n    \"\"\"\n    A task to retrieve information about a dbt Cloud job.\n\n    Args:\n        dbt_cloud_credentials: Credentials for authenticating with dbt Cloud.\n        job_id: The ID of the job to get.\n\n    Returns:\n        The job data returned by the dbt Cloud administrative API.\n\n    Example:\n        Get status of a dbt Cloud job:\n        ```python\n        from prefect import flow\n\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import get_job\n\n        @flow\n        def get_job_flow():\n            credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n            return get_job(\n                dbt_cloud_credentials=credentials,\n                job_id=42\n            )\n\n        get_job_flow()\n        ```\n    \"\"\"  # noqa\n    try:\n        async with dbt_cloud_credentials.get_administrative_client() as client:\n            response = await client.get_job(\n                job_id=job_id,\n                order_by=order_by,\n            )\n    except HTTPStatusError as ex:\n        raise DbtCloudGetJobFailed(extract_user_message(ex)) from ex\n    return response.json()[\"data\"]\n
        "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.get_run_id","title":"get_run_id","text":"

        Task that extracts the run ID from a trigger job run API response.

        This task is mainly used to maintain dependency tracking between the trigger_dbt_cloud_job_run task and downstream tasks/flows that use the run ID.

        Parameters:

        Name Type Description Default obj Dict

        The JSON body from the trigger job run response.

        required

        Example
        from prefect import flow\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run, get_run_id\n\n\n@flow\ndef trigger_run_and_get_id():\n    dbt_cloud_credentials=DbtCloudCredentials(\n            api_key=\"my_api_key\",\n            account_id=123456789\n        )\n\n    triggered_run_data = trigger_dbt_cloud_job_run(\n        dbt_cloud_credentials=dbt_cloud_credentials,\n        job_id=job_id,\n        options=trigger_job_run_options,\n    )\n    run_id = get_run_id.submit(triggered_run_data)\n    return run_id\n\ntrigger_run_and_get_id()\n
        Source code in prefect_dbt/cloud/jobs.py
        @task(\n    name=\"Get dbt Cloud job run ID\",\n    description=\"Extracts the run ID from a trigger job run API response\",\n)\ndef get_run_id(obj: Dict):\n    \"\"\"\n    Task that extracts the run ID from a trigger job run API response,\n\n    This task is mainly used to maintain dependency tracking between the\n    `trigger_dbt_cloud_job_run` task and downstream tasks/flows that use the run ID.\n\n    Args:\n        obj: The JSON body from the trigger job run response.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run, get_run_id\n\n\n        @flow\n        def trigger_run_and_get_id():\n            dbt_cloud_credentials=DbtCloudCredentials(\n                    api_key=\"my_api_key\",\n                    account_id=123456789\n                )\n\n            triggered_run_data = trigger_dbt_cloud_job_run(\n                dbt_cloud_credentials=dbt_cloud_credentials,\n                job_id=job_id,\n                options=trigger_job_run_options,\n            )\n            run_id = get_run_id.submit(triggered_run_data)\n            return run_id\n\n        trigger_run_and_get_id()\n        ```\n    \"\"\"\n    id = obj.get(\"id\")\n    if id is None:\n        raise RuntimeError(\"Unable to determine run ID for triggered job.\")\n    return id\n
        "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.retry_dbt_cloud_job_run_subset_and_wait_for_completion","title":"retry_dbt_cloud_job_run_subset_and_wait_for_completion async","text":"

        Flow that retries a subset of a dbt Cloud job run, filtered by select statuses, and waits for the triggered retry to complete.

        Parameters:

        Name Type Description Default dbt_cloud_credentials DbtCloudCredentials

        Credentials for authenticating with dbt Cloud.

        required trigger_job_run_options Optional[TriggerJobRunOptions]

        An optional TriggerJobRunOptions instance to specify overrides for the triggered job run.

        None max_wait_seconds int

        Maximum number of seconds to wait for job to complete

        900 poll_frequency_seconds int

        Number of seconds to wait in between checks for run completion.

        10 run_id int

        The ID of the job run to retry.

        required

        Raises:

        Type Description ValueError

        If trigger_job_run_options.steps_override is set by the user.

        Returns:

        Type Description Dict

        The run data returned by the dbt Cloud administrative API.

        Examples:

        Retry a subset of models in a dbt Cloud job run and wait for completion:

        from prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import retry_dbt_cloud_job_run_subset_and_wait_for_completion\n\n@flow\ndef retry_dbt_cloud_job_run_subset_and_wait_for_completion_flow():\n    credentials = DbtCloudCredentials.load(\"MY_BLOCK_NAME\")\n    retry_dbt_cloud_job_run_subset_and_wait_for_completion(\n        dbt_cloud_credentials=credentials,\n        run_id=88640123,\n    )\n\nretry_dbt_cloud_job_run_subset_and_wait_for_completion_flow()\n

        Source code in prefect_dbt/cloud/jobs.py
        @flow(\n    name=\"Retry subset of dbt Cloud job run and wait for completion\",\n    description=(\n        \"Retries a subset of dbt Cloud job run, filtered by select statuses, \"\n        \"and waits for the triggered retry to complete.\"\n    ),\n)\nasync def retry_dbt_cloud_job_run_subset_and_wait_for_completion(\n    dbt_cloud_credentials: DbtCloudCredentials,\n    run_id: int,\n    trigger_job_run_options: Optional[TriggerJobRunOptions] = None,\n    max_wait_seconds: int = 900,\n    poll_frequency_seconds: int = 10,\n) -> Dict:\n    \"\"\"\n    Flow that retrys a subset of dbt Cloud job run, filtered by select statuses,\n    and waits for the triggered retry to complete.\n\n    Args:\n        dbt_cloud_credentials: Credentials for authenticating with dbt Cloud.\n        trigger_job_run_options: An optional TriggerJobRunOptions instance to\n            specify overrides for the triggered job run.\n        max_wait_seconds: Maximum number of seconds to wait for job to complete\n        poll_frequency_seconds: Number of seconds to wait in between checks for\n            run completion.\n        run_id: The ID of the job run to retry.\n\n    Raises:\n        ValueError: If `trigger_job_run_options.steps_override` is set by the user.\n\n    Returns:\n        The run data returned by the dbt Cloud administrative API.\n\n    Examples:\n        Retry a subset of models in a dbt Cloud job run and wait for completion:\n        ```python\n        from prefect import flow\n\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import retry_dbt_cloud_job_run_subset_and_wait_for_completion\n\n        @flow\n        def retry_dbt_cloud_job_run_subset_and_wait_for_completion_flow():\n            credentials = DbtCloudCredentials.load(\"MY_BLOCK_NAME\")\n            retry_dbt_cloud_job_run_subset_and_wait_for_completion(\n                dbt_cloud_credentials=credentials,\n                run_id=88640123,\n            )\n\n        retry_dbt_cloud_job_run_subset_and_wait_for_completion_flow()\n        ```\n    \"\"\"  # noqa\n    if trigger_job_run_options and trigger_job_run_options.steps_override is not None:\n        raise ValueError(\n            \"Do not set `steps_override` in `trigger_job_run_options` \"\n            \"because this flow will automatically set it\"\n        )\n\n    run_info_future = await get_dbt_cloud_run_info.submit(\n        dbt_cloud_credentials=dbt_cloud_credentials,\n        run_id=run_id,\n        include_related=[\"run_steps\"],\n    )\n    run_info = await run_info_future.result()\n\n    job_id = run_info[\"job_id\"]\n    job_info_future = await get_dbt_cloud_job_info.submit(\n        dbt_cloud_credentials=dbt_cloud_credentials,\n        job_id=job_id,\n    )\n    job_info = await job_info_future.result()\n\n    trigger_job_run_options_override = await _build_trigger_job_run_options(\n        dbt_cloud_credentials=dbt_cloud_credentials,\n        trigger_job_run_options=trigger_job_run_options,\n        run_id=run_id,\n        run_info=run_info,\n        job_info=job_info,\n    )\n\n    # to circumvent `RuntimeError: The task runner is already started!`\n    flow_run_context = FlowRunContext.get()\n    task_runner_type = type(flow_run_context.task_runner)\n\n    run_data = await trigger_dbt_cloud_job_run_and_wait_for_completion.with_options(\n        task_runner=task_runner_type()\n    )(\n        dbt_cloud_credentials=dbt_cloud_credentials,\n        job_id=job_id,\n        retry_filtered_models_attempts=0,\n        
trigger_job_run_options=trigger_job_run_options_override,\n        max_wait_seconds=max_wait_seconds,\n        poll_frequency_seconds=poll_frequency_seconds,\n    )\n    return run_data\n
        "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.run_dbt_cloud_job","title":"run_dbt_cloud_job async","text":"

        Flow that triggers and waits for a dbt Cloud job run, retrying a subset of failed nodes if necessary.

        Parameters:

        Name Type Description Default dbt_cloud_job DbtCloudJob

        Block that holds the information and methods to interact with a dbt Cloud job.

        required targeted_retries int

        The number of times to retry failed steps.

        3

        Examples:

        from prefect import flow\nfrom prefect_dbt.cloud import DbtCloudCredentials, DbtCloudJob\nfrom prefect_dbt.cloud.jobs import run_dbt_cloud_job\n\n@flow\ndef run_dbt_cloud_job_flow():\n    dbt_cloud_credentials = DbtCloudCredentials.load(\"dbt-token\")\n    dbt_cloud_job = DbtCloudJob(\n        dbt_cloud_credentials=dbt_cloud_credentials, job_id=154217\n    )\n    return run_dbt_cloud_job(dbt_cloud_job=dbt_cloud_job)\n\nrun_dbt_cloud_job_flow()\n
        Source code in prefect_dbt/cloud/jobs.py
        @flow\nasync def run_dbt_cloud_job(\n    dbt_cloud_job: DbtCloudJob,\n    targeted_retries: int = 3,\n) -> Dict[str, Any]:\n    \"\"\"\n    Flow that triggers and waits for a dbt Cloud job run, retrying a\n    subset of failed nodes if necessary.\n\n    Args:\n        dbt_cloud_job: Block that holds the information and\n            methods to interact with a dbt Cloud job.\n        targeted_retries: The number of times to retry failed steps.\n\n    Examples:\n        ```python\n        from prefect import flow\n        from prefect_dbt.cloud import DbtCloudCredentials, DbtCloudJob\n        from prefect_dbt.cloud.jobs import run_dbt_cloud_job\n\n        @flow\n        def run_dbt_cloud_job_flow():\n            dbt_cloud_credentials = DbtCloudCredentials.load(\"dbt-token\")\n            dbt_cloud_job = DbtCloudJob(\n                dbt_cloud_credentials=dbt_cloud_credentials, job_id=154217\n            )\n            return run_dbt_cloud_job(dbt_cloud_job=dbt_cloud_job)\n\n        run_dbt_cloud_job_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n\n    run = await task(dbt_cloud_job.trigger.aio)(dbt_cloud_job)\n    while targeted_retries > 0:\n        try:\n            await task(run.wait_for_completion.aio)(run)\n            result = await task(run.fetch_result.aio)(run)\n            return result\n        except DbtCloudJobRunFailed:\n            logger.info(\n                f\"Retrying job run with ID: {run.run_id} \"\n                f\"{targeted_retries} more times\"\n            )\n            run = await task(run.retry_failed_steps.aio)(run)\n            targeted_retries -= 1\n
        "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.trigger_dbt_cloud_job_run","title":"trigger_dbt_cloud_job_run async","text":"

        A task to trigger a dbt Cloud job run.

        Parameters:

        Name Type Description Default dbt_cloud_credentials DbtCloudCredentials

        Credentials for authenticating with dbt Cloud.

        required job_id int

        The ID of the job to trigger.

        required options Optional[TriggerJobRunOptions]

        An optional TriggerJobRunOptions instance to specify overrides for the triggered job run.

        None

        Returns:

        Type Description Dict

        The run data returned from the dbt Cloud administrative API.

        Examples:

        Trigger a dbt Cloud job run:

        from prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run\n\n\n@flow\ndef trigger_dbt_cloud_job_run_flow():\n    credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n    trigger_dbt_cloud_job_run(dbt_cloud_credentials=credentials, job_id=1)\n\n\ntrigger_dbt_cloud_job_run_flow()\n

        Trigger a dbt Cloud job run with overrides:

        from prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run\nfrom prefect_dbt.cloud.models import TriggerJobRunOptions\n\n\n@flow\ndef trigger_dbt_cloud_job_run_flow():\n    credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n    trigger_dbt_cloud_job_run(\n        dbt_cloud_credentials=credentials,\n        job_id=1,\n        options=TriggerJobRunOptions(\n            git_branch=\"staging\",\n            schema_override=\"dbt_cloud_pr_123\",\n            dbt_version_override=\"0.18.0\",\n            target_name_override=\"staging\",\n            timeout_seconds_override=3000,\n            generate_docs_override=True,\n            threads_override=8,\n            steps_override=[\n                \"dbt seed\",\n                \"dbt run --fail-fast\",\n                \"dbt test --fail-fast\",\n            ],\n        ),\n    )\n\n\ntrigger_dbt_cloud_job_run_flow()\n

        Source code in prefect_dbt/cloud/jobs.py
        @task(\n    name=\"Trigger dbt Cloud job run\",\n    description=\"Triggers a dbt Cloud job run for the job \"\n    \"with the given job_id and optional overrides.\",\n    retries=3,\n    retry_delay_seconds=10,\n)\nasync def trigger_dbt_cloud_job_run(\n    dbt_cloud_credentials: DbtCloudCredentials,\n    job_id: int,\n    options: Optional[TriggerJobRunOptions] = None,\n) -> Dict:\n    \"\"\"\n    A task to trigger a dbt Cloud job run.\n\n    Args:\n        dbt_cloud_credentials: Credentials for authenticating with dbt Cloud.\n        job_id: The ID of the job to trigger.\n        options: An optional TriggerJobRunOptions instance to specify overrides\n            for the triggered job run.\n\n    Returns:\n        The run data returned from the dbt Cloud administrative API.\n\n    Examples:\n        Trigger a dbt Cloud job run:\n        ```python\n        from prefect import flow\n\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run\n\n\n        @flow\n        def trigger_dbt_cloud_job_run_flow():\n            credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n            trigger_dbt_cloud_job_run(dbt_cloud_credentials=credentials, job_id=1)\n\n\n        trigger_dbt_cloud_job_run_flow()\n        ```\n\n        Trigger a dbt Cloud job run with overrides:\n        ```python\n        from prefect import flow\n\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run\n        from prefect_dbt.cloud.models import TriggerJobRunOptions\n\n\n        @flow\n        def trigger_dbt_cloud_job_run_flow():\n            credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n            trigger_dbt_cloud_job_run(\n                dbt_cloud_credentials=credentials,\n                job_id=1,\n                options=TriggerJobRunOptions(\n                    git_branch=\"staging\",\n                    schema_override=\"dbt_cloud_pr_123\",\n                    dbt_version_override=\"0.18.0\",\n                    target_name_override=\"staging\",\n                    timeout_seconds_override=3000,\n                    generate_docs_override=True,\n                    threads_override=8,\n                    steps_override=[\n                        \"dbt seed\",\n                        \"dbt run --fail-fast\",\n                        \"dbt test --fail-fast\",\n                    ],\n                ),\n            )\n\n\n        trigger_dbt_cloud_job_run()\n        ```\n    \"\"\"  # noqa\n    logger = get_run_logger()\n\n    logger.info(f\"Triggering run for job with ID {job_id}\")\n\n    try:\n        async with dbt_cloud_credentials.get_administrative_client() as client:\n            response = await client.trigger_job_run(job_id=job_id, options=options)\n    except HTTPStatusError as ex:\n        raise DbtCloudJobRunTriggerFailed(extract_user_message(ex)) from ex\n\n    run_data = response.json()[\"data\"]\n\n    if \"project_id\" in run_data and \"id\" in run_data:\n        logger.info(\n            f\"Run successfully triggered for job with ID {job_id}. \"\n            \"You can view the status of this run at \"\n            f\"https://{dbt_cloud_credentials.domain}/#/accounts/\"\n            f\"{dbt_cloud_credentials.account_id}/projects/{run_data['project_id']}/\"\n            f\"runs/{run_data['id']}/\"\n        )\n\n    return run_data\n
        "},{"location":"integrations/prefect-dbt/cloud/jobs/#prefect_dbt.cloud.jobs.trigger_dbt_cloud_job_run_and_wait_for_completion","title":"trigger_dbt_cloud_job_run_and_wait_for_completion async","text":"

        Flow that triggers a job run and waits for the triggered run to complete.

        Parameters:

        - dbt_cloud_credentials (DbtCloudCredentials, required): Credentials for authenticating with dbt Cloud.
        - job_id (int, required): The ID of the job to trigger.
        - trigger_job_run_options (Optional[TriggerJobRunOptions], default None): An optional TriggerJobRunOptions instance to specify overrides for the triggered job run.
        - max_wait_seconds (int, default 900): Maximum number of seconds to wait for the job to complete.
        - poll_frequency_seconds (int, default 10): Number of seconds to wait in between checks for run completion.
        - retry_filtered_models_attempts (int, default 3): Number of times to retry models selected by retry_status_filters.

        Raises:

        - DbtCloudJobRunCancelled: The triggered dbt Cloud job run was cancelled.
        - DbtCloudJobRunFailed: The triggered dbt Cloud job run failed.
        - RuntimeError: The triggered dbt Cloud job run ended in an unexpected state.

        Returns:

        - Dict: The run data returned by the dbt Cloud administrative API.

        Examples:

        Trigger a dbt Cloud job and wait for completion as a standalone flow:

        import asyncio\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run_and_wait_for_completion\n\nasyncio.run(\n    trigger_dbt_cloud_job_run_and_wait_for_completion(\n        dbt_cloud_credentials=DbtCloudCredentials(\n            api_key=\"my_api_key\",\n            account_id=123456789\n        ),\n        job_id=1\n    )\n)\n

        Trigger a dbt Cloud job and wait for completion as a sub-flow:

        from prefect import flow\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run_and_wait_for_completion\n\n@flow\ndef my_flow():\n    ...\n    run_result = trigger_dbt_cloud_job_run_and_wait_for_completion(\n        dbt_cloud_credentials=DbtCloudCredentials(\n            api_key=\"my_api_key\",\n            account_id=123456789\n        ),\n        job_id=1\n    )\n    ...\n\nmy_flow()\n

        Trigger a dbt Cloud job with overrides:

        import asyncio\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run_and_wait_for_completion\nfrom prefect_dbt.cloud.models import TriggerJobRunOptions\n\nasyncio.run(\n    trigger_dbt_cloud_job_run_and_wait_for_completion(\n        dbt_cloud_credentials=DbtCloudCredentials(\n            api_key=\"my_api_key\",\n            account_id=123456789\n        ),\n        job_id=1,\n        trigger_job_run_options=TriggerJobRunOptions(\n            git_branch=\"staging\",\n            schema_override=\"dbt_cloud_pr_123\",\n            dbt_version_override=\"0.18.0\",\n            target_name_override=\"staging\",\n            timeout_seconds_override=3000,\n            generate_docs_override=True,\n            threads_override=8,\n            steps_override=[\n                \"dbt seed\",\n                \"dbt run --fail-fast\",\n                \"dbt test --fail-fast\",\n            ],\n        ),\n    )\n)\n

        Source code in prefect_dbt/cloud/jobs.py
        @flow(\n    name=\"Trigger dbt Cloud job run and wait for completion\",\n    description=\"Triggers a dbt Cloud job run and waits for the\"\n    \"triggered run to complete.\",\n)\nasync def trigger_dbt_cloud_job_run_and_wait_for_completion(\n    dbt_cloud_credentials: DbtCloudCredentials,\n    job_id: int,\n    trigger_job_run_options: Optional[TriggerJobRunOptions] = None,\n    max_wait_seconds: int = 900,\n    poll_frequency_seconds: int = 10,\n    retry_filtered_models_attempts: int = 3,\n) -> Dict:\n    \"\"\"\n    Flow that triggers a job run and waits for the triggered run to complete.\n\n    Args:\n        dbt_cloud_credentials: Credentials for authenticating with dbt Cloud.\n        job_id: The ID of the job to trigger.\n        trigger_job_run_options: An optional TriggerJobRunOptions instance to\n            specify overrides for the triggered job run.\n        max_wait_seconds: Maximum number of seconds to wait for job to complete\n        poll_frequency_seconds: Number of seconds to wait in between checks for\n            run completion.\n        retry_filtered_models_attempts: Number of times to retry models selected by `retry_status_filters`.\n\n    Raises:\n        DbtCloudJobRunCancelled: The triggered dbt Cloud job run was cancelled.\n        DbtCloudJobRunFailed: The triggered dbt Cloud job run failed.\n        RuntimeError: The triggered dbt Cloud job run ended in an unexpected state.\n\n    Returns:\n        The run data returned by the dbt Cloud administrative API.\n\n    Examples:\n        Trigger a dbt Cloud job and wait for completion as a stand alone flow:\n        ```python\n        import asyncio\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run_and_wait_for_completion\n\n        asyncio.run(\n            trigger_dbt_cloud_job_run_and_wait_for_completion(\n                dbt_cloud_credentials=DbtCloudCredentials(\n                    api_key=\"my_api_key\",\n                    account_id=123456789\n                ),\n                job_id=1\n            )\n        )\n        ```\n\n        Trigger a dbt Cloud job and wait for completion as a sub-flow:\n        ```python\n        from prefect import flow\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run_and_wait_for_completion\n\n        @flow\n        def my_flow():\n            ...\n            run_result = trigger_dbt_cloud_job_run_and_wait_for_completion(\n                dbt_cloud_credentials=DbtCloudCredentials(\n                    api_key=\"my_api_key\",\n                    account_id=123456789\n                ),\n                job_id=1\n            )\n            ...\n\n        my_flow()\n        ```\n\n        Trigger a dbt Cloud job with overrides:\n        ```python\n        import asyncio\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run_and_wait_for_completion\n        from prefect_dbt.cloud.models import TriggerJobRunOptions\n\n        asyncio.run(\n            trigger_dbt_cloud_job_run_and_wait_for_completion(\n                dbt_cloud_credentials=DbtCloudCredentials(\n                    api_key=\"my_api_key\",\n                    account_id=123456789\n                ),\n                job_id=1,\n                trigger_job_run_options=TriggerJobRunOptions(\n                    git_branch=\"staging\",\n                    
schema_override=\"dbt_cloud_pr_123\",\n                    dbt_version_override=\"0.18.0\",\n                    target_name_override=\"staging\",\n                    timeout_seconds_override=3000,\n                    generate_docs_override=True,\n                    threads_override=8,\n                    steps_override=[\n                        \"dbt seed\",\n                        \"dbt run --fail-fast\",\n                        \"dbt test --fail fast\",\n                    ],\n                ),\n            )\n        )\n        ```\n    \"\"\"  # noqa\n    logger = get_run_logger()\n\n    triggered_run_data_future = await trigger_dbt_cloud_job_run.submit(\n        dbt_cloud_credentials=dbt_cloud_credentials,\n        job_id=job_id,\n        options=trigger_job_run_options,\n    )\n    run_id = (await triggered_run_data_future.result()).get(\"id\")\n    if run_id is None:\n        raise RuntimeError(\"Unable to determine run ID for triggered job.\")\n\n    final_run_status, run_data = await wait_for_dbt_cloud_job_run(\n        run_id=run_id,\n        dbt_cloud_credentials=dbt_cloud_credentials,\n        max_wait_seconds=max_wait_seconds,\n        poll_frequency_seconds=poll_frequency_seconds,\n    )\n\n    if final_run_status == DbtCloudJobRunStatus.SUCCESS:\n        try:\n            list_run_artifacts_future = await list_dbt_cloud_run_artifacts.submit(\n                dbt_cloud_credentials=dbt_cloud_credentials,\n                run_id=run_id,\n            )\n            run_data[\"artifact_paths\"] = await list_run_artifacts_future.result()\n        except DbtCloudListRunArtifactsFailed as ex:\n            logger.warning(\n                \"Unable to retrieve artifacts for job run with ID %s. Reason: %s\",\n                run_id,\n                ex,\n            )\n        logger.info(\n            \"dbt Cloud job run with ID %s completed successfully!\",\n            run_id,\n        )\n        return run_data\n    elif final_run_status == DbtCloudJobRunStatus.CANCELLED:\n        raise DbtCloudJobRunCancelled(\n            f\"Triggered job run with ID {run_id} was cancelled.\"\n        )\n    elif final_run_status == DbtCloudJobRunStatus.FAILED:\n        while retry_filtered_models_attempts > 0:\n            logger.info(\n                f\"Retrying job run with ID: {run_id} \"\n                f\"{retry_filtered_models_attempts} more times\"\n            )\n            try:\n                retry_filtered_models_attempts -= 1\n                run_data = await retry_dbt_cloud_job_run_subset_and_wait_for_completion(\n                    dbt_cloud_credentials=dbt_cloud_credentials,\n                    run_id=run_id,\n                    trigger_job_run_options=trigger_job_run_options,\n                    max_wait_seconds=max_wait_seconds,\n                    poll_frequency_seconds=poll_frequency_seconds,\n                )\n                return run_data\n            except Exception:\n                pass\n        else:\n            raise DbtCloudJobRunFailed(f\"Triggered job run with ID: {run_id} failed.\")\n    else:\n        raise RuntimeError(\n            f\"Triggered job run with ID: {run_id} ended with unexpected\"\n            f\"status {final_run_status.value}.\"\n        )\n
        "},{"location":"integrations/prefect-dbt/cloud/models/","title":"Models","text":""},{"location":"integrations/prefect-dbt/cloud/models/#prefect_dbt.cloud.models","title":"prefect_dbt.cloud.models","text":"

        Module containing models used for passing data to dbt Cloud

        "},{"location":"integrations/prefect-dbt/cloud/models/#prefect_dbt.cloud.models.TriggerJobRunOptions","title":"TriggerJobRunOptions","text":"

        Bases: BaseModel

        Defines options that can be set when triggering a dbt Cloud job run.
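
        As a minimal sketch (the field values below are illustrative), an instance can override only the settings you need and leave the rest at the job's configured defaults:

        from prefect_dbt.cloud.models import TriggerJobRunOptions

        # Only the provided fields override the job's configuration; unset fields keep their defaults.
        options = TriggerJobRunOptions(
            git_branch="staging",
            schema_override="dbt_cloud_pr_123",
            steps_override=["dbt seed", "dbt run --fail-fast", "dbt test --fail-fast"],
        )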

        Source code in prefect_dbt/cloud/models.py
        class TriggerJobRunOptions(BaseModel):\n    \"\"\"\n    Defines options that can be defined when triggering a dbt Cloud job run.\n    \"\"\"\n\n    cause: str = Field(\n        default_factory=default_cause_factory,\n        description=\"A text description of the reason for running this job.\",\n    )\n    git_sha: Optional[str] = Field(\n        default=None, description=\"The git sha to check out before running this job.\"\n    )\n    git_branch: Optional[str] = Field(\n        default=None, description=\"The git branch to check out before running this job.\"\n    )\n    schema_override: Optional[str] = Field(\n        default=None,\n        description=\"Override the destination schema in the configured \"\n        \"target for this job.\",\n    )\n    dbt_version_override: Optional[str] = Field(\n        default=None, description=\"Override the version of dbt used to run this job.\"\n    )\n    threads_override: Optional[int] = Field(\n        default=None, description=\"Override the number of threads used to run this job.\"\n    )\n    target_name_override: Optional[str] = Field(\n        default=None,\n        description=\"Override the target.name context variable used when \"\n        \"running this job\",\n    )\n    generate_docs_override: Optional[bool] = Field(\n        default=None,\n        description=\"Override whether or not this job generates docs \"\n        \"(true=yes, false=no).\",\n    )\n    timeout_seconds_override: Optional[int] = Field(\n        default=None, description=\"Override the timeout in seconds for this job.\"\n    )\n    steps_override: Optional[List[str]] = Field(\n        default=None, description=\"Override the list of steps for this job.\"\n    )\n
        "},{"location":"integrations/prefect-dbt/cloud/models/#prefect_dbt.cloud.models.default_cause_factory","title":"default_cause_factory","text":"

        Factory function to populate the default cause for a job run to include information from the Prefect run context.
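
        As a quick illustration, when no Prefect run context is active (for example, in a plain Python session), the factory falls back to the base cause string; this is a small sketch, not part of the rendered docs:

        from prefect_dbt.cloud.models import default_cause_factory

        # Outside of a flow or task run there is no run context, so the base cause is returned.
        print(default_cause_factory())  # "Triggered via Prefect"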

        Source code in prefect_dbt/cloud/models.py
        def default_cause_factory():\n    \"\"\"\n    Factory function to populate the default cause for a job run to include information\n    from the Prefect run context.\n    \"\"\"\n    cause = \"Triggered via Prefect\"\n\n    try:\n        context = get_run_context()\n        if isinstance(context, FlowRunContext):\n            cause = f\"{cause} in flow run {context.flow_run.name}\"\n        elif isinstance(context, TaskRunContext):\n            cause = f\"{cause} in task run {context.task_run.name}\"\n    except RuntimeError:\n        pass\n\n    return cause\n
        "},{"location":"integrations/prefect-dbt/cloud/runs/","title":"Runs","text":""},{"location":"integrations/prefect-dbt/cloud/runs/#prefect_dbt.cloud.runs","title":"prefect_dbt.cloud.runs","text":"

        Module containing tasks and flows for interacting with dbt Cloud job runs

        "},{"location":"integrations/prefect-dbt/cloud/runs/#prefect_dbt.cloud.runs.DbtCloudJobRunStatus","title":"DbtCloudJobRunStatus","text":"

        Bases: Enum

        dbt Cloud Job statuses.

        Source code in prefect_dbt/cloud/runs.py
        class DbtCloudJobRunStatus(Enum):\n    \"\"\"dbt Cloud Job statuses.\"\"\"\n\n    QUEUED = 1\n    STARTING = 2\n    RUNNING = 3\n    SUCCESS = 10\n    FAILED = 20\n    CANCELLED = 30\n\n    @classmethod\n    def is_terminal_status_code(cls, status_code: Any) -> bool:\n        \"\"\"\n        Returns True if a status code is terminal for a job run.\n        Returns False otherwise.\n        \"\"\"\n        return status_code in [cls.SUCCESS.value, cls.FAILED.value, cls.CANCELLED.value]\n
        "},{"location":"integrations/prefect-dbt/cloud/runs/#prefect_dbt.cloud.runs.DbtCloudJobRunStatus.is_terminal_status_code","title":"is_terminal_status_code classmethod","text":"

        Returns True if a status code is terminal for a job run. Returns False otherwise.
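
        For example, based on the status codes defined in the source below, a polling loop could use this helper as follows:

        from prefect_dbt.cloud.runs import DbtCloudJobRunStatus

        # SUCCESS (10), FAILED (20), and CANCELLED (30) are terminal; QUEUED, STARTING, and RUNNING are not.
        assert DbtCloudJobRunStatus.is_terminal_status_code(10) is True
        assert DbtCloudJobRunStatus.is_terminal_status_code(3) is False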

        Source code in prefect_dbt/cloud/runs.py
        @classmethod\ndef is_terminal_status_code(cls, status_code: Any) -> bool:\n    \"\"\"\n    Returns True if a status code is terminal for a job run.\n    Returns False otherwise.\n    \"\"\"\n    return status_code in [cls.SUCCESS.value, cls.FAILED.value, cls.CANCELLED.value]\n
        "},{"location":"integrations/prefect-dbt/cloud/runs/#prefect_dbt.cloud.runs.get_dbt_cloud_run_artifact","title":"get_dbt_cloud_run_artifact async","text":"

        A task to get an artifact generated for a completed run. The requested artifact is saved to a file in the current working directory.

        Parameters:

        - dbt_cloud_credentials (DbtCloudCredentials, required): Credentials for authenticating with dbt Cloud.
        - run_id (int, required): The ID of the run to retrieve the artifact for.
        - path (str, required): The relative path to the run artifact (e.g. manifest.json, catalog.json, run_results.json).
        - step (Optional[int], default None): The index of the step in the run to query for artifacts. The first step in the run has the index 1. If the step parameter is omitted, then this method will return the artifacts compiled for the last step in the run.

        Returns:

        - Union[Dict, str]: The contents of the requested artifact. Returns a Dict if the requested artifact is a JSON file and a str otherwise.

        Examples:

        Get an artifact of a dbt Cloud job run:

        from prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.runs import get_dbt_cloud_run_artifact\n\n@flow\ndef get_artifact_flow():\n    credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n    return get_dbt_cloud_run_artifact(\n        dbt_cloud_credentials=credentials,\n        run_id=42,\n        path=\"manifest.json\"\n    )\n\nget_artifact_flow()\n

        Get an artifact of a dbt Cloud job run and write it to a file:

        import json\n\nfrom prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import get_dbt_cloud_run_artifact\n\n@flow\ndef get_artifact_flow():\n    credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n    get_run_artifact_result = get_dbt_cloud_run_artifact(\n        dbt_cloud_credentials=credentials,\n        run_id=42,\n        path=\"manifest.json\"\n    )\n\n    with open(\"manifest.json\", \"w\") as file:\n        json.dump(get_run_artifact_result, file)\n\nget_artifact_flow()\n

        Source code in prefect_dbt/cloud/runs.py
        @task(\n    name=\"Get dbt Cloud job artifact\",\n    description=\"Fetches an artifact from a completed run.\",\n    retries=3,\n    retry_delay_seconds=10,\n)\nasync def get_dbt_cloud_run_artifact(\n    dbt_cloud_credentials: DbtCloudCredentials,\n    run_id: int,\n    path: str,\n    step: Optional[int] = None,\n) -> Union[Dict, str]:\n    \"\"\"\n    A task to get an artifact generated for a completed run. The requested artifact\n    is saved to a file in the current working directory.\n\n    Args:\n        dbt_cloud_credentials: Credentials for authenticating with dbt Cloud.\n        run_id: The ID of the run to list run artifacts for.\n        path: The relative path to the run artifact (e.g. manifest.json, catalog.json,\n            run_results.json)\n        step: The index of the step in the run to query for artifacts. The\n            first step in the run has the index 1. If the step parameter is\n            omitted, then this method will return the artifacts compiled\n            for the last step in the run.\n\n    Returns:\n        The contents of the requested manifest. Returns a `Dict` if the\n            requested artifact is a JSON file and a `str` otherwise.\n\n    Examples:\n        Get an artifact of a dbt Cloud job run:\n        ```python\n        from prefect import flow\n\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.runs import get_dbt_cloud_run_artifact\n\n        @flow\n        def get_artifact_flow():\n            credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n            return get_dbt_cloud_run_artifact(\n                dbt_cloud_credentials=credentials,\n                run_id=42,\n                path=\"manifest.json\"\n            )\n\n        get_artifact_flow()\n        ```\n\n        Get an artifact of a dbt Cloud job run and write it to a file:\n        ```python\n        import json\n\n        from prefect import flow\n\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import get_dbt_cloud_run_artifact\n\n        @flow\n        def get_artifact_flow():\n            credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n            get_run_artifact_result = get_dbt_cloud_run_artifact(\n                dbt_cloud_credentials=credentials,\n                run_id=42,\n                path=\"manifest.json\"\n            )\n\n            with open(\"manifest.json\", \"w\") as file:\n                json.dump(get_run_artifact_result, file)\n\n        get_artifact_flow()\n        ```\n    \"\"\"  # noqa\n\n    try:\n        async with dbt_cloud_credentials.get_administrative_client() as client:\n            response = await client.get_run_artifact(\n                run_id=run_id, path=path, step=step\n            )\n    except HTTPStatusError as ex:\n        raise DbtCloudGetRunArtifactFailed(extract_user_message(ex)) from ex\n\n    if path.endswith(\".json\"):\n        artifact_contents = response.json()\n    else:\n        artifact_contents = response.text\n\n    return artifact_contents\n
        "},{"location":"integrations/prefect-dbt/cloud/runs/#prefect_dbt.cloud.runs.get_dbt_cloud_run_info","title":"get_dbt_cloud_run_info async","text":"

        A task to retrieve information about a dbt Cloud job run.

        Parameters:

        - dbt_cloud_credentials (DbtCloudCredentials, required): Credentials for authenticating with dbt Cloud.
        - run_id (int, required): The ID of the run to get details for.
        - include_related (Optional[List[Literal['trigger', 'job', 'debug_logs', 'run_steps']]], default None): List of related fields to pull with the run. Valid values are \"trigger\", \"job\", \"debug_logs\", and \"run_steps\". If \"debug_logs\" is not provided in a request, then the included debug logs will be truncated to the last 1,000 lines of the debug log output file.

        Returns:

        - Dict: The run data returned by the dbt Cloud administrative API.

        Example

        Get status of a dbt Cloud job run:

        from prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.runs import get_dbt_cloud_run_info\n\n@flow\ndef get_run_flow():\n    credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n    return get_dbt_cloud_run_info(\n        dbt_cloud_credentials=credentials,\n        run_id=42\n    )\n\nget_run_flow()\n

        Source code in prefect_dbt/cloud/runs.py
        @task(\n    name=\"Get dbt Cloud job run details\",\n    description=\"Retrieves details of a dbt Cloud job run \"\n    \"for the run with the given run_id.\",\n    retries=3,\n    retry_delay_seconds=10,\n)\nasync def get_dbt_cloud_run_info(\n    dbt_cloud_credentials: DbtCloudCredentials,\n    run_id: int,\n    include_related: Optional[\n        List[Literal[\"trigger\", \"job\", \"debug_logs\", \"run_steps\"]]\n    ] = None,\n) -> Dict:\n    \"\"\"\n    A task to retrieve information about a dbt Cloud job run.\n\n    Args:\n        dbt_cloud_credentials: Credentials for authenticating with dbt Cloud.\n        run_id: The ID of the job to trigger.\n        include_related: List of related fields to pull with the run.\n            Valid values are \"trigger\", \"job\", \"debug_logs\", and \"run_steps\".\n            If \"debug_logs\" is not provided in a request, then the included debug\n            logs will be truncated to the last 1,000 lines of the debug log output file.\n\n    Returns:\n        The run data returned by the dbt Cloud administrative API.\n\n    Example:\n        Get status of a dbt Cloud job run:\n        ```python\n        from prefect import flow\n\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import get_run\n\n        @flow\n        def get_run_flow():\n            credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n            return get_run(\n                dbt_cloud_credentials=credentials,\n                run_id=42\n            )\n\n        get_run_flow()\n        ```\n    \"\"\"  # noqa\n    try:\n        async with dbt_cloud_credentials.get_administrative_client() as client:\n            response = await client.get_run(\n                run_id=run_id, include_related=include_related\n            )\n    except HTTPStatusError as ex:\n        raise DbtCloudGetRunFailed(extract_user_message(ex)) from ex\n    return response.json()[\"data\"]\n
        "},{"location":"integrations/prefect-dbt/cloud/runs/#prefect_dbt.cloud.runs.list_dbt_cloud_run_artifacts","title":"list_dbt_cloud_run_artifacts async","text":"

        A task to list the artifact files generated for a completed run.

        Parameters:

        - dbt_cloud_credentials (DbtCloudCredentials, required): Credentials for authenticating with dbt Cloud.
        - run_id (int, required): The ID of the run to list run artifacts for.
        - step (Optional[int], default None): The index of the step in the run to query for artifacts. The first step in the run has the index 1. If the step parameter is omitted, then this method will return the artifacts compiled for the last step in the run.

        Returns:

        - List[str]: A list of paths to artifact files that can be used to retrieve the generated artifacts.

        Example

        List artifacts of a dbt Cloud job run:

        from prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import list_dbt_cloud_run_artifacts\n\n@flow\ndef list_artifacts_flow():\n    credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n    return list_dbt_cloud_run_artifacts(\n        dbt_cloud_credentials=credentials,\n        run_id=42\n    )\n\nlist_artifacts_flow()\n

        Source code in prefect_dbt/cloud/runs.py
        @task(\n    name=\"List dbt Cloud job artifacts\",\n    description=\"Fetches a list of artifact files generated for a completed run.\",\n    retries=3,\n    retry_delay_seconds=10,\n)\nasync def list_dbt_cloud_run_artifacts(\n    dbt_cloud_credentials: DbtCloudCredentials, run_id: int, step: Optional[int] = None\n) -> List[str]:\n    \"\"\"\n    A task to list the artifact files generated for a completed run.\n\n    Args:\n        dbt_cloud_credentials: Credentials for authenticating with dbt Cloud.\n        run_id: The ID of the run to list run artifacts for.\n        step: The index of the step in the run to query for artifacts. The\n            first step in the run has the index 1. If the step parameter is\n            omitted, then this method will return the artifacts compiled\n            for the last step in the run.\n\n    Returns:\n        A list of paths to artifact files that can be used to retrieve the generated artifacts.\n\n    Example:\n        List artifacts of a dbt Cloud job run:\n        ```python\n        from prefect import flow\n\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.jobs import list_dbt_cloud_run_artifacts\n\n        @flow\n        def list_artifacts_flow():\n            credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n            return list_dbt_cloud_run_artifacts(\n                dbt_cloud_credentials=credentials,\n                run_id=42\n            )\n\n        list_artifacts_flow()\n        ```\n    \"\"\"  # noqa\n    try:\n        async with dbt_cloud_credentials.get_administrative_client() as client:\n            response = await client.list_run_artifacts(run_id=run_id, step=step)\n    except HTTPStatusError as ex:\n        raise DbtCloudListRunArtifactsFailed(extract_user_message(ex)) from ex\n    return response.json()[\"data\"]\n
        "},{"location":"integrations/prefect-dbt/cloud/runs/#prefect_dbt.cloud.runs.wait_for_dbt_cloud_job_run","title":"wait_for_dbt_cloud_job_run async","text":"

        Waits for the given dbt Cloud job run to finish running.

        Parameters:

        - run_id (int, required): The ID of the run to wait for.
        - dbt_cloud_credentials (DbtCloudCredentials, required): Credentials for authenticating with dbt Cloud.
        - max_wait_seconds (int, default 900): Maximum number of seconds to wait for the job to complete.
        - poll_frequency_seconds (int, default 10): Number of seconds to wait in between checks for run completion.

        Raises:

        - DbtCloudJobRunTimedOut: When the elapsed wait time exceeds max_wait_seconds.

        Returns:

        - run_status (DbtCloudJobRunStatus): An enum representing the final dbt Cloud job run status.
        - run_data (Dict): A dictionary containing information about the run after completion.
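
        The docstring does not include an example, so here is a minimal sketch of waiting on a previously triggered run from within a parent flow (the credentials and job_id are placeholders, and the calls assume the same synchronous usage shown in the other examples on this page):

        from prefect import flow
        from prefect_dbt.cloud import DbtCloudCredentials
        from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run
        from prefect_dbt.cloud.runs import wait_for_dbt_cloud_job_run

        @flow
        def wait_for_run_flow():
            credentials = DbtCloudCredentials(api_key="my_api_key", account_id=123456789)

            # Trigger the job, then poll until the run reaches a terminal state or times out.
            run_data = trigger_dbt_cloud_job_run(dbt_cloud_credentials=credentials, job_id=1)
            final_status, final_run_data = wait_for_dbt_cloud_job_run(
                run_id=run_data["id"],
                dbt_cloud_credentials=credentials,
                max_wait_seconds=900,
                poll_frequency_seconds=10,
            )
            return final_status, final_run_data

        wait_for_run_flow()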

        Source code in prefect_dbt/cloud/runs.py
        @flow(\n    name=\"Wait for dbt Cloud job run\",\n    description=\"Waits for a dbt Cloud job run to finish running.\",\n)\nasync def wait_for_dbt_cloud_job_run(\n    run_id: int,\n    dbt_cloud_credentials: DbtCloudCredentials,\n    max_wait_seconds: int = 900,\n    poll_frequency_seconds: int = 10,\n) -> Tuple[DbtCloudJobRunStatus, Dict]:\n    \"\"\"\n    Waits for the given dbt Cloud job run to finish running.\n\n    Args:\n        run_id: The ID of the run to wait for.\n        dbt_cloud_credentials: Credentials for authenticating with dbt Cloud.\n        max_wait_seconds: Maximum number of seconds to wait for job to complete\n        poll_frequency_seconds: Number of seconds to wait in between checks for\n            run completion.\n\n    Raises:\n        DbtCloudJobRunTimedOut: When the elapsed wait time exceeds `max_wait_seconds`.\n\n    Returns:\n        run_status: An enum representing the final dbt Cloud job run status\n        run_data: A dictionary containing information about the run after completion.\n\n\n    Example:\n\n\n    \"\"\"\n    logger = get_run_logger()\n    seconds_waited_for_run_completion = 0\n    wait_for = []\n    while seconds_waited_for_run_completion <= max_wait_seconds:\n        run_data_future = await get_dbt_cloud_run_info.submit(\n            dbt_cloud_credentials=dbt_cloud_credentials,\n            run_id=run_id,\n            wait_for=wait_for,\n        )\n        run_data = await run_data_future.result()\n        run_status_code = run_data.get(\"status\")\n\n        if DbtCloudJobRunStatus.is_terminal_status_code(run_status_code):\n            return DbtCloudJobRunStatus(run_status_code), run_data\n\n        wait_for = [run_data_future]\n        logger.debug(\n            \"dbt Cloud job run with ID %i has status %s. Waiting for %i seconds.\",\n            run_id,\n            DbtCloudJobRunStatus(run_status_code).name,\n            poll_frequency_seconds,\n        )\n        await asyncio.sleep(poll_frequency_seconds)\n        seconds_waited_for_run_completion += poll_frequency_seconds\n\n    raise DbtCloudJobRunTimedOut(\n        f\"Max wait time of {max_wait_seconds} seconds exceeded while waiting \"\n        \"for job run with ID {run_id}\"\n    )\n
        "},{"location":"integrations/prefect-dbt/cloud/utils/","title":"Utils","text":""},{"location":"integrations/prefect-dbt/cloud/utils/#prefect_dbt.cloud.utils","title":"prefect_dbt.cloud.utils","text":"

        Utilities for common interactions with the dbt Cloud API

        "},{"location":"integrations/prefect-dbt/cloud/utils/#prefect_dbt.cloud.utils.DbtCloudAdministrativeApiCallFailed","title":"DbtCloudAdministrativeApiCallFailed","text":"

        Bases: Exception

        Raised when a call to dbt Cloud administrative API fails.

        Source code in prefect_dbt/cloud/utils.py
        class DbtCloudAdministrativeApiCallFailed(Exception):\n    \"\"\"Raised when a call to dbt Cloud administrative API fails.\"\"\"\n
        "},{"location":"integrations/prefect-dbt/cloud/utils/#prefect_dbt.cloud.utils.call_dbt_cloud_administrative_api_endpoint","title":"call_dbt_cloud_administrative_api_endpoint async","text":"

        Task that calls a specified endpoint in the dbt Cloud administrative API. Use this task if a prebuilt one is not yet available.

        Parameters:

        - dbt_cloud_credentials (DbtCloudCredentials, required): Credentials for authenticating with dbt Cloud.
        - path (str, required): The partial path for the request (e.g. /projects/). Will be appended onto the base URL as determined by the client configuration.
        - http_method (str, required): HTTP method to call on the endpoint.
        - params (Optional[Dict[str, Any]], default None): Query parameters to include in the request.
        - json (Optional[Dict[str, Any]], default None): JSON serializable body to send in the request.

        Returns:

        - Any: The body of the response. If the body is JSON serializable, then the result of json.loads with the body as the input will be returned. Otherwise, the body will be returned directly.

        Examples:

        List projects for an account:

        from prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.utils import call_dbt_cloud_administrative_api_endpoint\n\n@flow\ndef get_projects_flow():\n    credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n    result = call_dbt_cloud_administrative_api_endpoint(\n        dbt_cloud_credentials=credentials,\n        path=\"/projects/\",\n        http_method=\"GET\",\n    )\n    return result[\"data\"]\n\nget_projects_flow()\n

        Create a new job:

        from prefect import flow\n\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.utils import call_dbt_cloud_administrative_api_endpoint\n\n\n@flow\ndef create_job_flow():\n    credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n    result = call_dbt_cloud_administrative_api_endpoint(\n        dbt_cloud_credentials=credentials,\n        path=\"/jobs/\",\n        http_method=\"POST\",\n        json={\n            \"id\": None,\n            \"account_id\": 123456789,\n            \"project_id\": 100,\n            \"environment_id\": 10,\n            \"name\": \"Nightly run\",\n            \"dbt_version\": None,\n            \"triggers\": {\"github_webhook\": True, \"schedule\": True},\n            \"execute_steps\": [\"dbt run\", \"dbt test\", \"dbt source snapshot-freshness\"],\n            \"settings\": {\"threads\": 4, \"target_name\": \"prod\"},\n            \"state\": 1,\n            \"schedule\": {\n                \"date\": {\"type\": \"every_day\"},\n                \"time\": {\"type\": \"every_hour\", \"interval\": 1},\n            },\n        },\n    )\n    return result[\"data\"]\n\ncreate_job_flow()\n

        Source code in prefect_dbt/cloud/utils.py
        @task(\n    name=\"Call dbt Cloud administrative API endpoint\",\n    description=\"Calls a dbt Cloud administrative API endpoint\",\n    retries=3,\n    retry_delay_seconds=10,\n)\nasync def call_dbt_cloud_administrative_api_endpoint(\n    dbt_cloud_credentials: DbtCloudCredentials,\n    path: str,\n    http_method: str,\n    params: Optional[Dict[str, Any]] = None,\n    json: Optional[Dict[str, Any]] = None,\n) -> Any:\n    \"\"\"\n    Task that calls a specified endpoint in the dbt Cloud administrative API. Use this\n    task if a prebuilt one is not yet available.\n\n    Args:\n        dbt_cloud_credentials: Credentials for authenticating with dbt Cloud.\n        path: The partial path for the request (e.g. /projects/). Will be appended\n            onto the base URL as determined by the client configuration.\n        http_method: HTTP method to call on the endpoint.\n        params: Query parameters to include in the request.\n        json: JSON serializable body to send in the request.\n\n    Returns:\n        The body of the response. If the body is JSON serializable, then the result of\n            `json.loads` with the body as the input will be returned. Otherwise, the\n            body will be returned directly.\n\n    Examples:\n        List projects for an account:\n        ```python\n        from prefect import flow\n\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.utils import call_dbt_cloud_administrative_api_endpoint\n\n        @flow\n        def get_projects_flow():\n            credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n            result = call_dbt_cloud_administrative_api_endpoint(\n                dbt_cloud_credentials=credentials,\n                path=\"/projects/\",\n                http_method=\"GET\",\n            )\n            return result[\"data\"]\n\n        get_projects_flow()\n        ```\n\n        Create a new job:\n        ```python\n        from prefect import flow\n\n        from prefect_dbt.cloud import DbtCloudCredentials\n        from prefect_dbt.cloud.utils import call_dbt_cloud_administrative_api_endpoint\n\n\n        @flow\n        def create_job_flow():\n            credentials = DbtCloudCredentials(api_key=\"my_api_key\", account_id=123456789)\n\n            result = call_dbt_cloud_administrative_api_endpoint(\n                dbt_cloud_credentials=credentials,\n                path=\"/jobs/\",\n                http_method=\"POST\",\n                json={\n                    \"id\": None,\n                    \"account_id\": 123456789,\n                    \"project_id\": 100,\n                    \"environment_id\": 10,\n                    \"name\": \"Nightly run\",\n                    \"dbt_version\": None,\n                    \"triggers\": {\"github_webhook\": True, \"schedule\": True},\n                    \"execute_steps\": [\"dbt run\", \"dbt test\", \"dbt source snapshot-freshness\"],\n                    \"settings\": {\"threads\": 4, \"target_name\": \"prod\"},\n                    \"state\": 1,\n                    \"schedule\": {\n                        \"date\": {\"type\": \"every_day\"},\n                        \"time\": {\"type\": \"every_hour\", \"interval\": 1},\n                    },\n                },\n            )\n            return result[\"data\"]\n\n        create_job_flow()\n        ```\n    \"\"\"  # noqa\n    try:\n        async with dbt_cloud_credentials.get_administrative_client() as client:\n            response = 
await client.call_endpoint(\n                http_method=http_method, path=path, params=params, json=json\n            )\n    except HTTPStatusError as ex:\n        raise DbtCloudAdministrativeApiCallFailed(extract_developer_message(ex)) from ex\n    try:\n        return response.json()\n    except JSONDecodeError:\n        return response.text\n
        "},{"location":"integrations/prefect-dbt/cloud/utils/#prefect_dbt.cloud.utils.extract_developer_message","title":"extract_developer_message","text":"

        Extracts the developer message from an error response from the dbt Cloud administrative API.

        Parameters:

        - ex (HTTPStatusError, required): An HTTPStatusError raised by httpx.

        Returns:

        - Optional[str]: developer_message from the dbt Cloud administrative API response, or None if a developer_message cannot be extracted.

        Source code in prefect_dbt/cloud/utils.py
        def extract_developer_message(ex: HTTPStatusError) -> Optional[str]:\n    \"\"\"\n    Extracts developer message from a error response from the dbt Cloud\n    administrative API.\n\n    Args:\n        ex: An HTTPStatusError raised by httpx\n\n    Returns:\n        developer_message from dbt Cloud administrative API response or None if a\n        developer_message cannot be extracted\n    \"\"\"\n    response_payload = ex.response.json()\n    status = response_payload.get(\"status\", {})\n    return status.get(\"developer_message\")\n
        "},{"location":"integrations/prefect-dbt/cloud/utils/#prefect_dbt.cloud.utils.extract_user_message","title":"extract_user_message","text":"

        Extracts the user message from an error response from the dbt Cloud administrative API.

        Parameters:

        - ex (HTTPStatusError, required): An HTTPStatusError raised by httpx.

        Returns:

        - Optional[str]: user_message from the dbt Cloud administrative API response, or None if a user_message cannot be extracted.
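
        A small sketch of how this helper can be exercised; the HTTPStatusError here is constructed by hand purely to demonstrate extraction (real errors are raised by httpx when a dbt Cloud API call fails, and the URL and message are placeholders):

        import httpx

        from prefect_dbt.cloud.utils import extract_user_message

        request = httpx.Request("GET", "https://cloud.getdbt.com/api/v2/accounts/1/runs/1/")
        response = httpx.Response(
            404,
            request=request,
            json={"status": {"user_message": "Run not found", "developer_message": ""}},
        )
        error = httpx.HTTPStatusError("Not Found", request=request, response=response)

        # Prints "Run not found"
        print(extract_user_message(error))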

        Source code in prefect_dbt/cloud/utils.py
        def extract_user_message(ex: HTTPStatusError) -> Optional[str]:\n    \"\"\"\n    Extracts user message from a error response from the dbt Cloud administrative API.\n\n    Args:\n        ex: An HTTPStatusError raised by httpx\n\n    Returns:\n        user_message from dbt Cloud administrative API response or None if a\n        user_message cannot be extracted\n    \"\"\"\n    response_payload = ex.response.json()\n    status = response_payload.get(\"status\", {})\n    return status.get(\"user_message\")\n
        "},{"location":"integrations/prefect-docker/","title":"prefect-docker","text":""},{"location":"integrations/prefect-docker/#welcome","title":"Welcome!","text":"

        Prefect integrations for working with Docker.

        Note! The DockerRegistryCredentials in prefect-docker is a unique block, separate from the DockerRegistry in prefect core. While DockerRegistry implements some of the functionality of both DockerHost and DockerRegistryCredentials for convenience, it does not allow much configuration for interacting with a Docker host.

        Do not use DockerRegistry with this collection. Instead, use DockerHost and DockerRegistryCredentials.

        "},{"location":"integrations/prefect-docker/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-docker/#python-setup","title":"Python setup","text":"

        Requires an installation of Python 3.8+.

        We recommend using a Python virtual environment manager such as pipenv, conda, or virtualenv.

        These tasks are designed to work with Prefect 2. For more information about how to use Prefect, please refer to the Prefect documentation.

        "},{"location":"integrations/prefect-docker/#installation","title":"Installation","text":"

        Install prefect-docker with pip:

        pip install prefect-docker\n

        Then, register the blocks in this collection to view them in Prefect Cloud:

        prefect block register -m prefect_docker\n

        Note: to use the load method on Blocks, you must already have a block document saved through code or through the UI.
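
        For example, a minimal sketch of saving a block document and loading it later (the block name my-registry-creds and the credential values are placeholders):

        from prefect_docker import DockerRegistryCredentials

        # Save once, through code or the UI...
        DockerRegistryCredentials(
            username="my_username",
            password="my_password",
            registry_url="registry.hub.docker.com",
        ).save("my-registry-creds")

        # ...then load anywhere the block document is available.
        docker_registry_credentials = DockerRegistryCredentials.load("my-registry-creds")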

        "},{"location":"integrations/prefect-docker/#pull-image-and-create-start-log-stop-and-remove-docker-container","title":"Pull image, and create, start, log, stop, and remove Docker container","text":"
        from prefect import flow, get_run_logger\nfrom prefect_docker.images import pull_docker_image\nfrom prefect_docker.containers import (\n    create_docker_container,\n    start_docker_container,\n    get_docker_container_logs,\n    stop_docker_container,\n    remove_docker_container,\n)\n\n\n@flow\ndef docker_flow():\n    logger = get_run_logger()\n    pull_docker_image(\"prefecthq/prefect\", \"latest\")\n    container = create_docker_container(\n        image=\"prefecthq/prefect\", command=\"echo 'hello world!' && sleep 60\"\n    )\n    start_docker_container(container_id=container.id)\n    logs = get_docker_container_logs(container_id=container.id)\n    logger.info(logs)\n    stop_docker_container(container_id=container.id)\n    remove_docker_container(container_id=container.id)\n    return container\n
        "},{"location":"integrations/prefect-docker/#use-a-custom-docker-host-to-create-a-docker-container","title":"Use a custom Docker Host to create a Docker container","text":"
        from prefect import flow\nfrom prefect_docker import DockerHost\nfrom prefect_docker.containers import create_docker_container\n\n@flow\ndef create_docker_container_flow():\n    docker_host = DockerHost(\n        base_url=\"tcp://127.0.0.1:1234\",\n        max_pool_size=4\n    )\n    container = create_docker_container(\n        docker_host=docker_host,\n        image=\"prefecthq/prefect\",\n        command=\"echo 'hello world!'\"\n    )\n\ncreate_docker_container_flow()\n
        "},{"location":"integrations/prefect-docker/#resources","title":"Resources","text":"

        If you encounter any bugs while using prefect-docker, feel free to open an issue in the prefect repository.

        If you have any questions or issues while using prefect-docker, you can find help in the Prefect Slack community.

        "},{"location":"integrations/prefect-docker/#development","title":"Development","text":"

        If you'd like to install a version of prefect-docker for development, clone the repository and perform an editable install with pip:

        git clone https://github.com/PrefectHQ/prefect-docker.git\n\ncd prefect-docker/\n\npip install -e \".[dev]\"\n\n# Install linting pre-commit hooks\npre-commit install\n
        "},{"location":"integrations/prefect-docker/containers/","title":"Containers","text":""},{"location":"integrations/prefect-docker/containers/#prefect_docker.containers","title":"prefect_docker.containers","text":"

        Integrations with Docker Containers.

        "},{"location":"integrations/prefect-docker/containers/#prefect_docker.containers.create_docker_container","title":"create_docker_container async","text":"

        Create a container without starting it. Similar to docker create.

        Parameters:

        - image (str, required): The image to run.
        - command (Optional[Union[str, List[str]]], default None): The command(s) to run in the container.
        - name (Optional[str], default None): The name for this container.
        - detach (Optional[bool], default None): Run container in the background.
        - docker_host (Optional[DockerHost], default None): Settings for interacting with a Docker host.
        - entrypoint (Optional[Union[str, List[str]]], default None): The entrypoint for the container.
        - environment (Optional[Union[Dict[str, str], List[str]]], default None): Environment variables to set inside the container, as a dictionary or a list of strings in the format [\"SOMEVARIABLE=xxx\"].
        - **create_kwargs (Dict[str, Any], default {}): Additional keyword arguments to pass to client.containers.create.

        Returns:

        - Container: A Docker Container object.

        Examples:

        Create a container with the Prefect image.

        from prefect import flow\nfrom prefect_docker.containers import create_docker_container\n\n@flow\ndef create_docker_container_flow():\n    container = create_docker_container(\n        image=\"prefecthq/prefect\",\n        command=\"echo 'hello world!'\"\n    )\n\ncreate_docker_container_flow()\n

        Source code in prefect_docker/containers.py
        @task\nasync def create_docker_container(\n    image: str,\n    command: Optional[Union[str, List[str]]] = None,\n    name: Optional[str] = None,\n    detach: Optional[bool] = None,\n    entrypoint: Optional[Union[str, List[str]]] = None,\n    environment: Optional[Union[Dict[str, str], List[str]]] = None,\n    docker_host: Optional[DockerHost] = None,\n    **create_kwargs: Dict[str, Any],\n) -> Container:\n    \"\"\"\n    Create a container without starting it. Similar to docker create.\n\n    Args:\n        image: The image to run.\n        command: The command(s) to run in the container.\n        name: The name for this container.\n        detach: Run container in the background.\n        docker_host: Settings for interacting with a Docker host.\n        entrypoint: The entrypoint for the container.\n        environment: Environment variables to set inside the container,\n            as a dictionary or a list of strings in the format [\"SOMEVARIABLE=xxx\"].\n        **create_kwargs: Additional keyword arguments to pass to\n            [`client.containers.create`](https://docker-py.readthedocs.io/en/stable/containers.html#docker.models.containers.ContainerCollection.create).\n\n    Returns:\n        A Docker Container object.\n\n    Examples:\n        Create a container with the Prefect image.\n        ```python\n        from prefect import flow\n        from prefect_docker.containers import create_docker_container\n\n        @flow\n        def create_docker_container_flow():\n            container = create_docker_container(\n                image=\"prefecthq/prefect\",\n                command=\"echo 'hello world!'\"\n            )\n\n        create_docker_container_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n\n    with (docker_host or DockerHost()).get_client() as client:\n        logger.info(f\"Creating container with {image!r} image.\")\n        container = await run_sync_in_worker_thread(\n            client.containers.create,\n            image=image,\n            command=command,\n            name=name,\n            detach=detach,\n            entrypoint=entrypoint,\n            environment=environment,\n            **create_kwargs,\n        )\n    return container\n
        "},{"location":"integrations/prefect-docker/containers/#prefect_docker.containers.get_docker_container_logs","title":"get_docker_container_logs async","text":"

        Get logs from this container. Similar to the docker logs command.

        Parameters:

        - container_id (str, required): The container ID to pull logs from.
        - docker_host (Optional[DockerHost], default None): Settings for interacting with a Docker host.
        - **logs_kwargs (Dict[str, Any], default {}): Additional keyword arguments to pass to client.containers.get(container_id).logs.

        Returns:

        - str: The Container's logs.

        Examples:

        Gets logs from a container with an ID that starts with \"c157\".

        from prefect import flow\nfrom prefect_docker.containers import get_docker_container_logs\n\n@flow\ndef get_docker_container_logs_flow():\n    logs = get_docker_container_logs(container_id=\"c157\")\n    return logs\n\nget_docker_container_logs_flow()\n

        Source code in prefect_docker/containers.py
        @task\nasync def get_docker_container_logs(\n    container_id: str,\n    docker_host: Optional[DockerHost] = None,\n    **logs_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"\n    Get logs from this container. Similar to the docker logs command.\n\n    Args:\n        container_id: The container ID to pull logs from.\n        docker_host: Settings for interacting with a Docker host.\n        **logs_kwargs: Additional keyword arguments to pass to\n            [`client.containers.get(container_id).logs`](https://docker-py.readthedocs.io/en/stable/containers.html#docker.models.containers.Container.logs).\n\n    Returns:\n        The Container's logs.\n\n    Examples:\n        Gets logs from a container with an ID that starts with \"c157\".\n        ```python\n        from prefect import flow\n        from prefect_docker.containers import get_docker_container_logs\n\n        @flow\n        def get_docker_container_logs_flow():\n            logs = get_docker_container_logs(container_id=\"c157\")\n            return logs\n\n        get_docker_container_logs_flow()\n        ```\n\n    \"\"\"\n    logger = get_run_logger()\n\n    with (docker_host or DockerHost()).get_client() as client:\n        container = await run_sync_in_worker_thread(client.containers.get, container_id)\n        logger.info(f\"Retrieving logs from {container.id!r} container.\")\n        logs = await run_sync_in_worker_thread(container.logs, **logs_kwargs)\n\n    return logs.decode()\n
        "},{"location":"integrations/prefect-docker/containers/#prefect_docker.containers.remove_docker_container","title":"remove_docker_container async","text":"

        Remove this container. Similar to the docker rm command.

        Parameters:

        - container_id (str, required): The container ID to remove.
        - docker_host (Optional[DockerHost], default None): Settings for interacting with a Docker host.
        - **remove_kwargs (Dict[str, Any], default {}): Additional keyword arguments to pass to client.containers.get(container_id).remove.

        Returns:

        - Container: The Docker Container object.

        Examples:

        Removes a container with an ID that starts with \"c157\".

        from prefect import flow\nfrom prefect_docker.containers import remove_docker_container\n\n@flow\ndef remove_docker_container_flow():\n    container = remove_docker_container(container_id=\"c157\")\n    return container\n\nremove_docker_container_flow()\n

        Source code in prefect_docker/containers.py
        @task\nasync def remove_docker_container(\n    container_id: str,\n    docker_host: Optional[DockerHost] = None,\n    **remove_kwargs: Dict[str, Any],\n) -> Container:\n    \"\"\"\n    Remove this container. Similar to the docker rm command.\n\n    Args:\n        container_id: The container ID to remove.\n        docker_host: Settings for interacting with a Docker host.\n        **remove_kwargs: Additional keyword arguments to pass to\n            [`client.containers.get(container_id).remove`](https://docker-py.readthedocs.io/en/stable/containers.html#docker.models.containers.Container.remove).\n\n    Returns:\n        The Docker Container object.\n\n    Examples:\n        Removes a container with an ID that starts with \"c157\".\n        ```python\n        from prefect import flow\n        from prefect_docker.containers import remove_docker_container\n\n        @flow\n        def remove_docker_container_flow():\n            container = remove_docker_container(container_id=\"c157\")\n            return container\n\n        remove_docker_container()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n\n    with (docker_host or DockerHost()).get_client() as client:\n        container = await run_sync_in_worker_thread(client.containers.get, container_id)\n        logger.info(f\"Removing container {container.id!r}.\")\n        await run_sync_in_worker_thread(container.remove, **remove_kwargs)\n\n    return container\n
        "},{"location":"integrations/prefect-docker/containers/#prefect_docker.containers.start_docker_container","title":"start_docker_container async","text":"

        Start this container. Similar to the docker start command.

        Parameters:

        - container_id (str, required): The container ID to start.
        - docker_host (Optional[DockerHost], default None): Settings for interacting with a Docker host.
        - **start_kwargs (Dict[str, Any], default {}): Additional keyword arguments to pass to client.containers.get(container_id).start.

        Returns:

        - Container: The Docker Container object.

        Examples:

        Start a container with an ID that starts with \"c157\".

        from prefect import flow\nfrom prefect_docker.containers import start_docker_container\n\n@flow\ndef start_docker_container_flow():\n    container = start_docker_container(container_id=\"c157\")\n    return container\n\nstart_docker_container_flow()\n

        Source code in prefect_docker/containers.py
        @task\nasync def start_docker_container(\n    container_id: str,\n    docker_host: Optional[DockerHost] = None,\n    **start_kwargs: Dict[str, Any],\n) -> Container:\n    \"\"\"\n    Start this container. Similar to the docker start command.\n\n    Args:\n        container_id: The container ID to start.\n        docker_host: Settings for interacting with a Docker host.\n        **start_kwargs: Additional keyword arguments to pass to\n            [`client.containers.get(container_id).start`](https://docker-py.readthedocs.io/en/stable/containers.html#docker.models.containers.Container.start).\n\n    Returns:\n        The Docker Container object.\n\n    Examples:\n        Start a container with an ID that starts with \"c157\".\n        ```python\n        from prefect import flow\n        from prefect_docker.containers import start_docker_container\n\n        @flow\n        def start_docker_container_flow():\n            container = start_docker_container(container_id=\"c157\")\n            return container\n\n        start_docker_container_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n\n    with (docker_host or DockerHost()).get_client() as client:\n        container = await run_sync_in_worker_thread(client.containers.get, container_id)\n        logger.info(f\"Starting container {container.id!r}.\")\n        await run_sync_in_worker_thread(container.start, **start_kwargs)\n\n    return container\n
        "},{"location":"integrations/prefect-docker/containers/#prefect_docker.containers.stop_docker_container","title":"stop_docker_container async","text":"

        Stops a container. Similar to the docker stop command.

        Parameters:

        Name Type Description Default container_id str

        The container ID to stop.

        required docker_host Optional[DockerHost]

        Settings for interacting with a Docker host.

        None **stop_kwargs Dict[str, Any]

        Additional keyword arguments to pass to client.containers.get(container_id).stop.

        {}

        Returns:

        Type Description Container

        The Docker Container object.

        Examples:

        Stop a container with an ID that starts with \"c157\".

        from prefect import flow\nfrom prefect_docker.containers import stop_docker_container\n\n@flow\ndef stop_docker_container_flow():\n    container = stop_docker_container(container_id=\"c157\")\n    return container\n\nstop_docker_container_flow()\n

        Source code in prefect_docker/containers.py
        @task\nasync def stop_docker_container(\n    container_id: str,\n    docker_host: Optional[DockerHost] = None,\n    **stop_kwargs: Dict[str, Any],\n) -> Container:\n    \"\"\"\n    Stops a container. Similar to the docker stop command.\n\n    Args:\n        container_id: The container ID to stop.\n        docker_host: Settings for interacting with a Docker host.\n        **stop_kwargs: Additional keyword arguments to pass to\n            [`client.containers.get(container_id).stop`](https://docker-py.readthedocs.io/en/stable/containers.html#docker.models.containers.Container.stop).\n\n    Returns:\n        The Docker Container object.\n\n    Examples:\n        Stop a container with an ID that starts with \"c157\".\n        ```python\n        from prefect import flow\n        from prefect_docker.containers import stop_docker_container\n\n        @flow\n        def stop_docker_container_flow():\n            container = stop_docker_container(container_id=\"c157\")\n            return container\n\n        stop_docker_container_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n\n    with (docker_host or DockerHost()).get_client() as client:\n        container = await run_sync_in_worker_thread(client.containers.get, container_id)\n        logger.info(f\"Stopping container {container.id!r}.\")\n        await run_sync_in_worker_thread(container.stop, **stop_kwargs)\n\n    return container\n
        "},{"location":"integrations/prefect-docker/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-docker/credentials/#prefect_docker.credentials","title":"prefect_docker.credentials","text":"

        Module containing docker credentials.

        "},{"location":"integrations/prefect-docker/credentials/#prefect_docker.credentials.DockerRegistryCredentials","title":"DockerRegistryCredentials","text":"

        Bases: Block

        Block used to manage credentials for interacting with a Docker Registry.

        Examples:

        Log into Docker Registry.

        from prefect_docker import DockerHost, DockerRegistryCredentials\n\ndocker_host = DockerHost()\ndocker_registry_credentials = DockerRegistryCredentials(\n    username=\"my_username\",\n    password=\"my_password\",\n    registry_url=\"registry.hub.docker.com\",\n)\nwith docker_host.get_client() as client:\n    docker_registry_credentials.login(client)\n

        Source code in prefect_docker/credentials.py
        class DockerRegistryCredentials(Block):\n    \"\"\"\n    Block used to manage credentials for interacting with a Docker Registry.\n\n    Examples:\n        Log into Docker Registry.\n        ```python\n        from prefect_docker import DockerHost, DockerRegistryCredentials\n\n        docker_host = DockerHost()\n        docker_registry_credentials = DockerRegistryCredentials(\n            username=\"my_username\",\n            password=\"my_password\",\n            registry_url=\"registry.hub.docker.com\",\n        )\n        with docker_host.get_client() as client:\n            docker_registry_credentials.login(client)\n        ```\n    \"\"\"\n\n    _block_type_name = \"Docker Registry Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/14a315b79990200db7341e42553e23650b34bb96-250x250.png\"  # noqa\n    _description = \"Store credentials for interacting with a Docker Registry.\"\n\n    username: str = Field(\n        default=..., description=\"The username to log into the registry with.\"\n    )\n    password: SecretStr = Field(\n        default=..., description=\"The password to log into the registry with.\"\n    )\n    registry_url: str = Field(\n        default=...,\n        description=(\n            'The URL to the registry. Generally, \"http\" or \"https\" can be omitted.'\n        ),\n        example=\"index.docker.io\",\n    )\n    reauth: bool = Field(\n        default=True,\n        description=\"Whether or not to reauthenticate on each interaction.\",\n    )\n\n    async def login(self, client: docker.DockerClient):\n        \"\"\"\n        Authenticates a given Docker client with the configured Docker registry.\n\n        Args:\n            client: A Docker Client.\n        \"\"\"\n        logger = get_run_logger()\n        logger.debug(f\"Logging into {self.registry_url}.\")\n        await run_sync_in_worker_thread(\n            client.login,\n            username=self.username,\n            password=self.password.get_secret_value(),\n            registry=self.registry_url,\n            # See https://github.com/docker/docker-py/issues/2256 for information on\n            # the default value for reauth.\n            reauth=self.reauth,\n        )\n
        "},{"location":"integrations/prefect-docker/credentials/#prefect_docker.credentials.DockerRegistryCredentials.login","title":"login async","text":"

        Authenticates a given Docker client with the configured Docker registry.

        Parameters:

        Name Type Description Default client DockerClient

        A Docker Client.

        required Source code in prefect_docker/credentials.py
        async def login(self, client: docker.DockerClient):\n    \"\"\"\n    Authenticates a given Docker client with the configured Docker registry.\n\n    Args:\n        client: A Docker Client.\n    \"\"\"\n    logger = get_run_logger()\n    logger.debug(f\"Logging into {self.registry_url}.\")\n    await run_sync_in_worker_thread(\n        client.login,\n        username=self.username,\n        password=self.password.get_secret_value(),\n        registry=self.registry_url,\n        # See https://github.com/docker/docker-py/issues/2256 for information on\n        # the default value for reauth.\n        reauth=self.reauth,\n    )\n
        "},{"location":"integrations/prefect-docker/host/","title":"Host","text":""},{"location":"integrations/prefect-docker/host/#prefect_docker.host","title":"prefect_docker.host","text":"

        Module containing Docker host settings.

        "},{"location":"integrations/prefect-docker/host/#prefect_docker.host.DockerHost","title":"DockerHost","text":"

        Bases: Block

        Block used to manage settings for interacting with a Docker host.

        Attributes:

        Name Type Description base_url Optional[str]

        URL to the Docker server, e.g. unix:///var/run/docker.sock or tcp://127.0.0.1:1234. If this is not set, the client will be configured from environment variables.

        version str

        The version of the API to use. Set to auto to automatically detect the server's version.

        timeout Optional[int]

        Default timeout for API calls, in seconds.

        max_pool_size Optional[int]

        The maximum number of connections to save in the pool.

        client_kwargs Dict[str, Any]

        Additional keyword arguments to pass to docker.from_env() or DockerClient.

        Examples:

        Get a Docker Host client.

        from prefect_docker import DockerHost\n\ndocker_host = DockerHost(\n    base_url=\"tcp://127.0.0.1:1234\",\n    max_pool_size=4\n)\nwith docker_host.get_client() as client:\n    ... # Use the client for Docker operations\n

        Source code in prefect_docker/host.py
        class DockerHost(Block):\n    \"\"\"\n    Block used to manage settings for interacting with a Docker host.\n\n    Attributes:\n        base_url: URL to the Docker server, e.g. `unix:///var/run/docker.sock`\n            or `tcp://127.0.0.1:1234`. If this is not set, the client will\n            be configured from environment variables.\n        version: The version of the API to use. Set to auto to\n            automatically detect the server's version.\n        timeout: Default timeout for API calls, in seconds.\n        max_pool_size: The maximum number of connections to save in the pool.\n        client_kwargs: Additional keyword arguments to pass to\n            `docker.from_env()` or `DockerClient`.\n\n    Examples:\n        Get a Docker Host client.\n        ```python\n        from prefect_docker import DockerHost\n\n        docker_host = DockerHost(\n        base_url=\"tcp://127.0.0.1:1234\",\n            max_pool_size=4\n        )\n        with docker_host.get_client() as client:\n            ... # Use the client for Docker operations\n        ```\n    \"\"\"\n\n    _block_type_name = \"Docker Host\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/14a315b79990200db7341e42553e23650b34bb96-250x250.png\"  # noqa\n    _description = \"Store settings for interacting with a Docker host.\"\n\n    base_url: Optional[str] = Field(\n        default=None,\n        description=\"URL to the Docker host.\",\n        title=\"Base URL\",\n        example=\"unix:///var/run/docker.sock\",\n    )\n    version: str = Field(default=\"auto\", description=\"The version of the API to use\")\n    timeout: Optional[int] = Field(\n        default=None, description=\"Default timeout for API calls, in seconds.\"\n    )\n    max_pool_size: Optional[int] = Field(\n        default=None,\n        description=\"The maximum number of connections to save in the pool.\",\n    )\n    client_kwargs: Dict[str, Any] = Field(\n        default_factory=dict,\n        title=\"Additional Configuration\",\n        description=(\n            \"Additional keyword arguments to pass to \"\n            \"`docker.from_env()` or `DockerClient`.\"\n        ),\n    )\n\n    def get_client(self) -> docker.DockerClient:\n        \"\"\"\n        Gets a Docker Client to communicate with a Docker host.\n\n        Returns:\n            A Docker Client.\n        \"\"\"\n        logger = get_run_logger()\n        client_kwargs = {\n            \"version\": self.version,\n            \"timeout\": self.timeout,\n            \"max_pool_size\": self.max_pool_size,\n            **self.client_kwargs,\n        }\n        client_kwargs = {\n            key: value for key, value in client_kwargs.items() if value is not None\n        }\n        if self.base_url is None:\n            logger.debug(\n                f\"Creating a Docker client from \"\n                f\"environment variables, using {self.version} version.\"\n            )\n            client = _ContextManageableDockerClient.from_env(**client_kwargs)\n        else:\n            logger.debug(\n                f\"Creating a Docker client to {self.base_url} \"\n                f\"using {self.version} version.\"\n            )\n            client = _ContextManageableDockerClient(\n                base_url=self.base_url, **client_kwargs\n            )\n        return client\n
        "},{"location":"integrations/prefect-docker/host/#prefect_docker.host.DockerHost.get_client","title":"get_client","text":"

        Gets a Docker Client to communicate with a Docker host.

        Returns:

        Type Description DockerClient

        A Docker Client.

        Source code in prefect_docker/host.py
        def get_client(self) -> docker.DockerClient:\n    \"\"\"\n    Gets a Docker Client to communicate with a Docker host.\n\n    Returns:\n        A Docker Client.\n    \"\"\"\n    logger = get_run_logger()\n    client_kwargs = {\n        \"version\": self.version,\n        \"timeout\": self.timeout,\n        \"max_pool_size\": self.max_pool_size,\n        **self.client_kwargs,\n    }\n    client_kwargs = {\n        key: value for key, value in client_kwargs.items() if value is not None\n    }\n    if self.base_url is None:\n        logger.debug(\n            f\"Creating a Docker client from \"\n            f\"environment variables, using {self.version} version.\"\n        )\n        client = _ContextManageableDockerClient.from_env(**client_kwargs)\n    else:\n        logger.debug(\n            f\"Creating a Docker client to {self.base_url} \"\n            f\"using {self.version} version.\"\n        )\n        client = _ContextManageableDockerClient(\n            base_url=self.base_url, **client_kwargs\n        )\n    return client\n
        "},{"location":"integrations/prefect-docker/images/","title":"Images","text":""},{"location":"integrations/prefect-docker/images/#prefect_docker.images","title":"prefect_docker.images","text":"

        Integrations with Docker Images.

        "},{"location":"integrations/prefect-docker/images/#prefect_docker.images.pull_docker_image","title":"pull_docker_image async","text":"

        Pull an image of the given name and return it. Similar to the docker pull command.

        If all_tags is set, the tag parameter is ignored and all image tags will be pulled.

        Parameters:

        Name Type Description Default repository str

        The repository to pull.

        required tag Optional[str]

        The tag to pull; if not provided, it is set to latest.

        None platform Optional[str]

        Platform in the format os[/arch[/variant]].

        None all_tags bool

        Pull all image tags which will return a list of Images.

        False docker_host Optional[DockerHost]

        Settings for interacting with a Docker host; if not provided, will automatically instantiate a DockerHost from env.

        None docker_registry_credentials Optional[DockerRegistryCredentials]

        Docker credentials used to log in to a registry before pulling the image.

        None **pull_kwargs Dict[str, Any]

        Additional keyword arguments to pass to client.images.pull.

        {}

        Returns:

        Type Description Union[Image, List[Image]]

        The image that has been pulled, or a list of images if all_tags is True.

        Examples:

        Pull prefecthq/prefect image with the tag latest-python3.10.

        from prefect import flow\nfrom prefect_docker.images import pull_docker_image\n\n@flow\ndef pull_docker_image_flow():\n    image = pull_docker_image(\n        repository=\"prefecthq/prefect\",\n        tag=\"latest-python3.10\"\n    )\n    return image\n\npull_docker_image_flow()\n

        Source code in prefect_docker/images.py
        @task\nasync def pull_docker_image(\n    repository: str,\n    tag: Optional[str] = None,\n    platform: Optional[str] = None,\n    all_tags: bool = False,\n    docker_host: Optional[DockerHost] = None,\n    docker_registry_credentials: Optional[DockerRegistryCredentials] = None,\n    **pull_kwargs: Dict[str, Any],\n) -> Union[Image, List[Image]]:\n    \"\"\"\n    Pull an image of the given name and return it. Similar to the docker pull command.\n\n    If all_tags is set, the tag parameter is ignored and all image tags will be pulled.\n\n    Args:\n        repository: The repository to pull.\n        tag: The tag to pull; if not provided, it is set to latest.\n        platform: Platform in the format os[/arch[/variant]].\n        all_tags: Pull all image tags which will return a list of Images.\n        docker_host: Settings for interacting with a Docker host; if not\n            provided, will automatically instantiate a `DockerHost` from env.\n        docker_registry_credentials: Docker credentials used to log in to\n            a registry before pulling the image.\n        **pull_kwargs: Additional keyword arguments to pass to `client.images.pull`.\n\n    Returns:\n        The image that has been pulled, or a list of images if `all_tags` is `True`.\n\n    Examples:\n        Pull prefecthq/prefect image with the tag latest-python3.10.\n        ```python\n        from prefect import flow\n        from prefect_docker.images import pull_docker_image\n\n        @flow\n        def pull_docker_image_flow():\n            image = pull_docker_image(\n                repository=\"prefecthq/prefect\",\n                tag=\"latest-python3.10\"\n            )\n            return image\n\n        pull_docker_image_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    if tag and all_tags:\n        raise ValueError(\"Cannot pass `tags` and `all_tags` together\")\n\n    pull_kwargs = {\n        \"repository\": repository,\n        \"tag\": tag,\n        \"platform\": platform,\n        \"all_tags\": all_tags,\n        **pull_kwargs,\n    }\n    pull_kwargs = {\n        key: value for key, value in pull_kwargs.items() if value is not None\n    }\n\n    with (docker_host or DockerHost()).get_client() as client:\n        if docker_registry_credentials is not None:\n            await docker_registry_credentials.login(client=client)\n\n        if tag:\n            logger.info(f\"Pulling image: {repository}:{tag}.\")\n        elif all_tags:\n            logger.info(f\"Pulling all images from: {repository}\")\n\n        image = await run_sync_in_worker_thread(client.images.pull, **pull_kwargs)\n\n    return image\n
        "},{"location":"integrations/prefect-docker/worker/","title":"Worker","text":""},{"location":"integrations/prefect-docker/worker/#prefect_docker.worker","title":"prefect_docker.worker","text":"

        Module containing the Docker worker used for executing flow runs as Docker containers.

        To start a Docker worker, run the following command:

        prefect worker start --pool 'my-work-pool' --type docker\n

        Replace my-work-pool with the name of the work pool you want the worker to poll for flow runs.
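
        If the work pool does not exist yet, create it first. A minimal sketch, reusing the my-work-pool placeholder from above:

        prefect work-pool create 'my-work-pool' --type docker\n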

        For more information about work pools and workers, check out the Prefect docs.

        "},{"location":"integrations/prefect-docker/worker/#prefect_docker.worker.DockerWorker","title":"DockerWorker","text":"

        Bases: BaseWorker

        Prefect worker that executes flow runs within Docker containers.

        Source code in prefect_docker/worker.py
        class DockerWorker(BaseWorker):\n    \"\"\"Prefect worker that executes flow runs within Docker containers.\"\"\"\n\n    type = \"docker\"\n    job_configuration = DockerWorkerJobConfiguration\n    _description = (\n        \"Execute flow runs within Docker containers. Works well for managing flow \"\n        \"execution environments via Docker images. Requires access to a running \"\n        \"Docker daemon.\"\n    )\n    _display_name = \"Docker\"\n    _documentation_url = \"https://prefecthq.github.io/prefect-docker/worker/\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/2IfXXfMq66mrzJBDFFCHTp/6d8f320d9e4fc4393f045673d61ab612/Moby-logo.png?h=250\"  # noqa\n\n    def __init__(self, *args: Any, test_mode: bool = None, **kwargs: Any) -> None:\n        if test_mode is None:\n            self.test_mode = bool(os.getenv(\"PREFECT_DOCKER_TEST_MODE\", False))\n        else:\n            self.test_mode = test_mode\n        super().__init__(*args, **kwargs)\n\n    async def setup(self):\n        if not self.test_mode:\n            self._client = get_client()\n            if self._client.server_type == ServerType.EPHEMERAL:\n                raise RuntimeError(\n                    \"Docker worker cannot be used with an ephemeral server. Please set\"\n                    \" PREFECT_API_URL to the URL for your Prefect API instance. You\"\n                    \" can use a local Prefect API instance by running `prefect server\"\n                    \" start`.\"\n                )\n\n        return await super().setup()\n\n    async def run(\n        self,\n        flow_run: \"FlowRun\",\n        configuration: BaseJobConfiguration,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> BaseWorkerResult:\n        \"\"\"\n        Executes a flow run within a Docker container and waits for the flow run\n        to complete.\n        \"\"\"\n        # The `docker` library uses requests instead of an async http library so it must\n        # be run in a thread to avoid blocking the event loop.\n        container, created_event = await run_sync_in_worker_thread(\n            self._create_and_start_container, configuration\n        )\n        container_pid = self._get_infrastructure_pid(container_id=container.id)\n\n        # Mark as started and return the infrastructure id\n        if task_status:\n            task_status.started(container_pid)\n\n        # Monitor the container\n        container = await run_sync_in_worker_thread(\n            self._watch_container_safe, container, configuration, created_event\n        )\n\n        exit_code = container.attrs[\"State\"].get(\"ExitCode\")\n        return DockerWorkerResult(\n            status_code=exit_code if exit_code is not None else -1,\n            identifier=container_pid,\n        )\n\n    async def kill_infrastructure(\n        self,\n        infrastructure_pid: str,\n        configuration: DockerWorkerJobConfiguration,\n        grace_seconds: int = 30,\n    ):\n        \"\"\"\n        Stops a container for a cancelled flow run based on the provided infrastructure\n        PID.\n        \"\"\"\n        docker_client = self._get_client()\n\n        base_url, container_id = self._parse_infrastructure_pid(infrastructure_pid)\n        if docker_client.api.base_url != base_url:\n            raise InfrastructureNotAvailable(\n                \"\".join(\n                    [\n                        (\n                            f\"Unable to stop container {container_id!r}: the current\"\n                            
\" Docker API \"\n                        ),\n                        (\n                            f\"URL {docker_client.api.base_url!r} does not match the\"\n                            \" expected \"\n                        ),\n                        f\"API base URL {base_url}.\",\n                    ]\n                )\n            )\n        await run_sync_in_worker_thread(\n            self._stop_container, container_id, docker_client, grace_seconds\n        )\n\n    def _stop_container(\n        self,\n        container_id: str,\n        client: \"DockerClient\",\n        grace_seconds: int = 30,\n    ):\n        try:\n            container = client.containers.get(container_id=container_id)\n        except docker.errors.NotFound:\n            raise InfrastructureNotFound(\n                f\"Unable to stop container {container_id!r}: The container was not\"\n                \" found.\"\n            )\n\n        container.stop(timeout=grace_seconds)\n\n    def _get_client(self):\n        \"\"\"Returns a docker client.\"\"\"\n        try:\n            with warnings.catch_warnings():\n                # Silence warnings due to use of deprecated methods within dockerpy\n                # See https://github.com/docker/docker-py/pull/2931\n                warnings.filterwarnings(\n                    \"ignore\",\n                    message=\"distutils Version classes are deprecated.*\",\n                    category=DeprecationWarning,\n                )\n\n                docker_client = docker.from_env()\n\n        except docker.errors.DockerException as exc:\n            raise RuntimeError(\"Could not connect to Docker.\") from exc\n\n        return docker_client\n\n    def _get_infrastructure_pid(self, container_id: str) -> str:\n        \"\"\"Generates a Docker infrastructure_pid string in the form of\n        `<docker_host_base_url>:<container_id>`.\n        \"\"\"\n        docker_client = self._get_client()\n        base_url = docker_client.api.base_url\n        docker_client.close()\n        return f\"{base_url}:{container_id}\"\n\n    def _parse_infrastructure_pid(self, infrastructure_pid: str) -> Tuple[str, str]:\n        \"\"\"Splits a Docker infrastructure_pid into its component parts\"\"\"\n\n        # base_url can contain `:` so we only want the last item of the split\n        base_url, container_id = infrastructure_pid.rsplit(\":\", 1)\n        return base_url, str(container_id)\n\n    def _build_container_settings(\n        self,\n        docker_client: \"DockerClient\",\n        configuration: DockerWorkerJobConfiguration,\n    ) -> Dict:\n        \"\"\"Builds a dictionary of container settings to pass to the Docker API.\"\"\"\n        network_mode = configuration.get_network_mode()\n        return dict(\n            image=configuration.image,\n            network=configuration.networks[0] if configuration.networks else None,\n            network_mode=network_mode,\n            command=configuration.command,\n            environment=configuration.env,\n            auto_remove=configuration.auto_remove,\n            labels=configuration.labels,\n            extra_hosts=configuration.get_extra_hosts(docker_client),\n            name=configuration.name,\n            volumes=configuration.volumes,\n            mem_limit=configuration.mem_limit,\n            memswap_limit=configuration.memswap_limit,\n            privileged=configuration.privileged,\n        )\n\n    def _create_and_start_container(\n        self, configuration: DockerWorkerJobConfiguration\n    ) -> 
Tuple[\"Container\", Event]:\n        \"\"\"Creates and starts a Docker container.\"\"\"\n        docker_client = self._get_client()\n        if configuration.registry_credentials:\n            self._logger.info(\"Logging into Docker registry...\")\n            docker_client.login(\n                username=configuration.registry_credentials.username,\n                password=configuration.registry_credentials.password.get_secret_value(),\n                registry=configuration.registry_credentials.registry_url,\n                reauth=configuration.registry_credentials.reauth,\n            )\n        container_settings = self._build_container_settings(\n            docker_client, configuration\n        )\n\n        if self._should_pull_image(docker_client, configuration=configuration):\n            self._logger.info(f\"Pulling image {configuration.image!r}...\")\n            self._pull_image(docker_client, configuration)\n\n        try:\n            container = self._create_container(docker_client, **container_settings)\n        except Exception as exc:\n            self._emit_container_creation_failed_event(configuration)\n            raise exc\n\n        created_event = self._emit_container_status_change_event(\n            container, configuration\n        )\n\n        # Add additional networks after the container is created; only one network can\n        # be attached at creation time\n        if len(configuration.networks) > 1:\n            for network_name in configuration.networks[1:]:\n                network = docker_client.networks.get(network_name)\n                network.connect(container)\n\n        # Start the container\n        container.start()\n\n        docker_client.close()\n\n        return container, created_event\n\n    def _watch_container_safe(\n        self,\n        container: \"Container\",\n        configuration: DockerWorkerJobConfiguration,\n        created_event: Event,\n    ) -> \"Container\":\n        \"\"\"Watches a container for completion, handling any errors that may occur.\"\"\"\n        # Monitor the container capturing the latest snapshot while capturing\n        # not found errors\n        docker_client = self._get_client()\n\n        try:\n            seen_statuses = {container.status}\n            last_event = created_event\n            for latest_container in self._watch_container(\n                docker_client, container.id, configuration\n            ):\n                container = latest_container\n                if container.status not in seen_statuses:\n                    seen_statuses.add(container.status)\n                    last_event = self._emit_container_status_change_event(\n                        container, configuration, last_event=last_event\n                    )\n\n        except docker.errors.NotFound:\n            # The container was removed during watching\n            self._logger.warning(\n                f\"Docker container {container.name} was removed before we could wait \"\n                \"for its completion.\"\n            )\n        finally:\n            docker_client.close()\n\n        return container\n\n    def _watch_container(\n        self,\n        docker_client: \"DockerClient\",\n        container_id: str,\n        configuration: DockerWorkerJobConfiguration,\n    ) -> Generator[None, None, \"Container\"]:\n        \"\"\"\n        Watches a container for completion, yielding the latest container\n        snapshot on each iteration.\n        \"\"\"\n        container: \"Container\" = 
docker_client.containers.get(container_id)\n\n        status = container.status\n        self._logger.info(\n            f\"Docker container {container.name!r} has status {container.status!r}\"\n        )\n        yield container\n\n        if configuration.stream_output:\n            try:\n                for log in container.logs(stream=True):\n                    log: bytes\n                    print(log.decode().rstrip())\n            except docker.errors.APIError as exc:\n                if \"marked for removal\" in str(exc):\n                    self._logger.warning(\n                        f\"Docker container {container.name} was marked for removal\"\n                        \" before logs could be retrieved. Output will not be\"\n                        \" streamed. \"\n                    )\n                else:\n                    self._logger.exception(\n                        \"An unexpected Docker API error occurred while streaming output \"\n                        f\"from container {container.name}.\"\n                    )\n\n            container.reload()\n            if container.status != status:\n                self._logger.info(\n                    f\"Docker container {container.name!r} has status\"\n                    f\" {container.status!r}\"\n                )\n            yield container\n\n        container.wait()\n        self._logger.info(\n            f\"Docker container {container.name!r} has status {container.status!r}\"\n        )\n        yield container\n\n    def _should_pull_image(\n        self, docker_client: \"DockerClient\", configuration: DockerWorkerJobConfiguration\n    ) -> bool:\n        \"\"\"\n        Decide whether we need to pull the Docker image.\n        \"\"\"\n        image_pull_policy = configuration._determine_image_pull_policy()\n\n        if image_pull_policy is ImagePullPolicy.ALWAYS:\n            return True\n        elif image_pull_policy is ImagePullPolicy.NEVER:\n            return False\n        elif image_pull_policy is ImagePullPolicy.IF_NOT_PRESENT:\n            try:\n                # NOTE: images.get() wants the tag included with the image\n                # name, while images.pull() wants them split.\n                docker_client.images.get(configuration.image)\n            except docker.errors.ImageNotFound:\n                self._logger.debug(\n                    f\"Could not find Docker image locally: {configuration.image}\"\n                )\n                return True\n        return False\n\n    def _pull_image(\n        self, docker_client: \"DockerClient\", configuration: DockerWorkerJobConfiguration\n    ):\n        \"\"\"\n        Pull the image we're going to use to create the container.\n        \"\"\"\n        image, tag = parse_image_tag(configuration.image)\n\n        return docker_client.images.pull(image, tag)\n\n    def _create_container(self, docker_client: \"DockerClient\", **kwargs) -> \"Container\":\n        \"\"\"\n        Create a docker container with retries on name conflicts.\n\n        If the container already exists with the given name, an incremented index is\n        added.\n        \"\"\"\n        # Create the container with retries on name conflicts (with an incremented idx)\n        index = 0\n        container = None\n        name = original_name = kwargs.pop(\"name\")\n\n        while not container:\n            try:\n                display_name = repr(name) if name else \"with auto-generated name\"\n                self._logger.info(f\"Creating Docker container 
{display_name}...\")\n                container = docker_client.containers.create(name=name, **kwargs)\n            except docker.errors.APIError as exc:\n                if \"Conflict\" in str(exc) and \"container name\" in str(exc):\n                    self._logger.info(\n                        f\"Docker container name {display_name} already exists; \"\n                        \"retrying...\"\n                    )\n                    index += 1\n                    name = f\"{original_name}-{index}\"\n                else:\n                    raise\n\n        self._logger.info(\n            f\"Docker container {container.name!r} has status {container.status!r}\"\n        )\n        return container\n\n    def _container_as_resource(self, container: \"Container\") -> Dict[str, str]:\n        \"\"\"Convert a container to a resource dictionary\"\"\"\n        return {\n            \"prefect.resource.id\": f\"prefect.docker.container.{container.id}\",\n            \"prefect.resource.name\": container.name,\n        }\n\n    def _emit_container_creation_failed_event(\n        self, configuration: DockerWorkerJobConfiguration\n    ) -> Event:\n        \"\"\"Emit a Prefect event when a docker container fails to be created.\"\"\"\n        return emit_event(\n            event=\"prefect.docker.container.creation-failed\",\n            resource=self._event_resource(),\n            related=self._event_related_resources(configuration=configuration),\n        )\n\n    def _emit_container_status_change_event(\n        self,\n        container: \"Container\",\n        configuration: DockerWorkerJobConfiguration,\n        last_event: Optional[Event] = None,\n    ) -> Event:\n        \"\"\"Emit a Prefect event for a Docker container event.\"\"\"\n        related = self._event_related_resources(configuration=configuration)\n\n        worker_resource = self._event_resource()\n        worker_resource[\"prefect.resource.role\"] = \"worker\"\n        worker_related_resource = RelatedResource(__root__=worker_resource)\n\n        return emit_event(\n            event=f\"prefect.docker.container.{container.status.lower()}\",\n            resource=self._container_as_resource(container),\n            related=related + [worker_related_resource],\n            follows=last_event,\n        )\n
        "},{"location":"integrations/prefect-docker/worker/#prefect_docker.worker.DockerWorker.kill_infrastructure","title":"kill_infrastructure async","text":"

        Stops a container for a cancelled flow run based on the provided infrastructure PID.

        Source code in prefect_docker/worker.py
        async def kill_infrastructure(\n    self,\n    infrastructure_pid: str,\n    configuration: DockerWorkerJobConfiguration,\n    grace_seconds: int = 30,\n):\n    \"\"\"\n    Stops a container for a cancelled flow run based on the provided infrastructure\n    PID.\n    \"\"\"\n    docker_client = self._get_client()\n\n    base_url, container_id = self._parse_infrastructure_pid(infrastructure_pid)\n    if docker_client.api.base_url != base_url:\n        raise InfrastructureNotAvailable(\n            \"\".join(\n                [\n                    (\n                        f\"Unable to stop container {container_id!r}: the current\"\n                        \" Docker API \"\n                    ),\n                    (\n                        f\"URL {docker_client.api.base_url!r} does not match the\"\n                        \" expected \"\n                    ),\n                    f\"API base URL {base_url}.\",\n                ]\n            )\n        )\n    await run_sync_in_worker_thread(\n        self._stop_container, container_id, docker_client, grace_seconds\n    )\n
        "},{"location":"integrations/prefect-docker/worker/#prefect_docker.worker.DockerWorker.run","title":"run async","text":"

        Executes a flow run within a Docker container and waits for the flow run to complete.

        Source code in prefect_docker/worker.py
        async def run(\n    self,\n    flow_run: \"FlowRun\",\n    configuration: BaseJobConfiguration,\n    task_status: Optional[anyio.abc.TaskStatus] = None,\n) -> BaseWorkerResult:\n    \"\"\"\n    Executes a flow run within a Docker container and waits for the flow run\n    to complete.\n    \"\"\"\n    # The `docker` library uses requests instead of an async http library so it must\n    # be run in a thread to avoid blocking the event loop.\n    container, created_event = await run_sync_in_worker_thread(\n        self._create_and_start_container, configuration\n    )\n    container_pid = self._get_infrastructure_pid(container_id=container.id)\n\n    # Mark as started and return the infrastructure id\n    if task_status:\n        task_status.started(container_pid)\n\n    # Monitor the container\n    container = await run_sync_in_worker_thread(\n        self._watch_container_safe, container, configuration, created_event\n    )\n\n    exit_code = container.attrs[\"State\"].get(\"ExitCode\")\n    return DockerWorkerResult(\n        status_code=exit_code if exit_code is not None else -1,\n        identifier=container_pid,\n    )\n
        "},{"location":"integrations/prefect-docker/worker/#prefect_docker.worker.DockerWorkerJobConfiguration","title":"DockerWorkerJobConfiguration","text":"

        Bases: BaseJobConfiguration

        Configuration class used by the Docker worker.

        An instance of this class is passed to the Docker worker's run method for each flow run. It contains all the information necessary to execute the flow run as a Docker container.

        Attributes:

        Name Type Description name

        The name to give to created Docker containers.

        command

        The command executed in created Docker containers to kick off flow run execution.

        env

        The environment variables to set in created Docker containers.

        labels

        The labels to set on created Docker containers.

        image str

        The image reference of a container image to use for created jobs. If not set, the latest Prefect image will be used.

        image_pull_policy Optional[Literal['IfNotPresent', 'Always', 'Never']]

        The image pull policy to use when pulling images.

        networks List[str]

        Docker networks that created containers should be connected to.

        network_mode Optional[str]

        The network mode for the created containers (e.g. host, bridge). If 'networks' is set, this cannot be set.

        auto_remove bool

        If set, containers will be deleted on completion.

        volumes List[str]

        Docker volumes that should be mounted in created containers.

        stream_output bool

        If set, the output from created containers will be streamed to local standard output.

        mem_limit Optional[str]

        Memory limit of created containers. Accepts a value with a unit identifier (e.g. 100000b, 1000k, 128m, 1g.) If a value is given without a unit, bytes are assumed.

        memswap_limit Optional[str]

        Total memory (memory + swap), -1 to disable swap. Should only be set if mem_limit is also set. If mem_limit is set, this defaults to allowing the container to use as much swap as memory. For example, if mem_limit is 300m and memswap_limit is not set, containers can use 600m in total of memory and swap.

        privileged bool

        Give extended privileges to created containers.
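
        Individual deployments can override these options through job variables. The following is a minimal, illustrative sketch rather than an excerpt of the worker source; the deployment name, work pool name, and image are placeholders, and it assumes a flow.deploy call that accepts a job_variables argument, as in recent Prefect releases:

        from prefect import flow\n\n@flow\ndef my_flow():\n    print(\"Hello from a Docker container!\")\n\nif __name__ == \"__main__\":\n    # job_variables override fields of DockerWorkerJobConfiguration for this deployment only\n    my_flow.deploy(\n        name=\"my-docker-deployment\",\n        work_pool_name=\"my-work-pool\",\n        image=\"docker.io/prefecthq/prefect:2-latest\",\n        build=False,\n        push=False,\n        job_variables={\"auto_remove\": True, \"stream_output\": True},\n    )\n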

        Source code in prefect_docker/worker.py
        class DockerWorkerJobConfiguration(BaseJobConfiguration):\n    \"\"\"\n    Configuration class used by the Docker worker.\n\n    An instance of this class is passed to the Docker worker's `run` method\n    for each flow run. It contains all the information necessary to execute the\n    flow run as a Docker container.\n\n    Attributes:\n        name: The name to give to created Docker containers.\n        command: The command executed in created Docker containers to kick off\n            flow run execution.\n        env: The environment variables to set in created Docker containers.\n        labels: The labels to set on created Docker containers.\n        image: The image reference of a container image to use for created jobs.\n            If not set, the latest Prefect image will be used.\n        image_pull_policy: The image pull policy to use when pulling images.\n        networks: Docker networks that created containers should be connected to.\n        network_mode: The network mode for the created containers (e.g. host, bridge).\n            If 'networks' is set, this cannot be set.\n        auto_remove: If set, containers will be deleted on completion.\n        volumes: Docker volumes that should be mounted in created containers.\n        stream_output: If set, the output from created containers will be streamed\n            to local standard output.\n        mem_limit: Memory limit of created containers. Accepts a value\n            with a unit identifier (e.g. 100000b, 1000k, 128m, 1g.) If a value is\n            given without a unit, bytes are assumed.\n        memswap_limit: Total memory (memory + swap), -1 to disable swap. Should only be\n            set if `mem_limit` is also set. If `mem_limit` is set, this defaults to\n            allowing the container to use as much swap as memory. For example, if\n            `mem_limit` is 300m and `memswap_limit` is not set, containers can use\n            600m in total of memory and swap.\n        privileged: Give extended privileges to created containers.\n    \"\"\"\n\n    image: str = Field(\n        default_factory=get_prefect_image_name,\n        description=\"The image reference of a container image to use for created jobs. \"\n        \"If not set, the latest Prefect image will be used.\",\n        example=\"docker.io/prefecthq/prefect:2-latest\",\n    )\n    registry_credentials: Optional[DockerRegistryCredentials] = Field(\n        default=None,\n        description=\"Credentials for logging into a Docker registry to pull\"\n        \" images from.\",\n    )\n    image_pull_policy: Optional[Literal[\"IfNotPresent\", \"Always\", \"Never\"]] = Field(\n        default=None,\n        description=\"The image pull policy to use when pulling images.\",\n    )\n    networks: List[str] = Field(\n        default_factory=list,\n        description=\"Docker networks that created containers should be connected to.\",\n    )\n    network_mode: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The network mode for the created containers (e.g. host, bridge). 
If\"\n            \" 'networks' is set, this cannot be set.\"\n        ),\n    )\n    auto_remove: bool = Field(\n        default=False,\n        description=\"If set, containers will be deleted on completion.\",\n    )\n    volumes: List[str] = Field(\n        default_factory=list,\n        description=\"A list of volume to mount into created containers.\",\n        example=[\"/my/local/path:/path/in/container\"],\n    )\n    stream_output: bool = Field(\n        default=True,\n        description=(\n            \"If set, the output from created containers will be streamed to local \"\n            \"standard output.\"\n        ),\n    )\n    mem_limit: Optional[str] = Field(\n        default=None,\n        title=\"Memory Limit\",\n        description=(\n            \"Memory limit of created containers. Accepts a value \"\n            \"with a unit identifier (e.g. 100000b, 1000k, 128m, 1g.) \"\n            \"If a value is given without a unit, bytes are assumed.\"\n        ),\n    )\n    memswap_limit: Optional[str] = Field(\n        default=None,\n        title=\"Memory Swap Limit\",\n        description=(\n            \"Total memory (memory + swap), -1 to disable swap. Should only be \"\n            \"set if `mem_limit` is also set. If `mem_limit` is set, this defaults to\"\n            \"allowing the container to use as much swap as memory. For example, if \"\n            \"`mem_limit` is 300m and `memswap_limit` is not set, containers can use \"\n            \"600m in total of memory and swap.\"\n        ),\n    )\n\n    privileged: bool = Field(\n        default=False,\n        description=\"Give extended privileges to created container.\",\n    )\n\n    @validator(\"volumes\")\n    def _validate_volume_format(cls, volumes):\n        \"\"\"Validates that provided volume strings are in the correct format.\"\"\"\n        for volume in volumes:\n            if \":\" not in volume:\n                raise ValueError(\n                    \"Invalid volume specification. 
\"\n                    f\"Expected format 'path:container_path', but got {volume!r}\"\n                )\n\n        return volumes\n\n    def _convert_labels_to_docker_format(self, labels: Dict[str, str]):\n        \"\"\"Converts labels to the format expected by Docker.\"\"\"\n        labels = labels or {}\n        new_labels = {}\n        for name, value in labels.items():\n            if \"/\" in name:\n                namespace, key = name.split(\"/\", maxsplit=1)\n                new_namespace = \".\".join(reversed(namespace.split(\".\")))\n                new_labels[f\"{new_namespace}.{key}\"] = value\n            else:\n                new_labels[name] = value\n        return new_labels\n\n    def _slugify_container_name(self) -> Optional[str]:\n        \"\"\"\n        Generates a container name to match the configured name, ensuring it is Docker\n        compatible.\n        \"\"\"\n        # Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+` in the end\n        if not self.name:\n            return None\n\n        return (\n            slugify(\n                self.name,\n                lowercase=False,\n                # Docker does not limit length but URL limits apply eventually so\n                # limit the length for safety\n                max_length=250,\n                # Docker allows these characters for container names\n                regex_pattern=r\"[^a-zA-Z0-9_.-]+\",\n            ).lstrip(\n                # Docker does not allow leading underscore, dash, or period\n                \"_-.\"\n            )\n            # Docker does not allow 0 character names so cast to null if the name is\n            # empty after slufification\n            or None\n        )\n\n    def _base_environment(self):\n        \"\"\"\n        If the API URL has been set update the value to ensure connectivity\n        when using a bridge network by updating local connections to use the\n        docker internal host unless the network mode is \"host\" where localhost\n        is available already.\n        \"\"\"\n\n        base_env = super()._base_environment()\n        network_mode = self.get_network_mode()\n        if (\n            \"PREFECT_API_URL\" in base_env\n            and base_env[\"PREFECT_API_URL\"] is not None\n            and network_mode != \"host\"\n        ):\n            base_env[\"PREFECT_API_URL\"] = (\n                base_env[\"PREFECT_API_URL\"]\n                .replace(\"localhost\", \"host.docker.internal\")\n                .replace(\"127.0.0.1\", \"host.docker.internal\")\n            )\n        return base_env\n\n    def prepare_for_flow_run(\n        self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"DeploymentResponse\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ):\n        \"\"\"\n        Prepares the agent for a flow run by setting the image, labels, and name\n        attributes.\n        \"\"\"\n        super().prepare_for_flow_run(flow_run, deployment, flow)\n\n        self.image = self.image or get_prefect_image_name()\n        self.labels = self._convert_labels_to_docker_format(\n            {**self.labels, **CONTAINER_LABELS}\n        )\n        self.name = self._slugify_container_name()\n\n    def get_network_mode(self) -> Optional[str]:\n        \"\"\"\n        Returns the network mode to use for the container based on the configured\n        options and the platform.\n        \"\"\"\n        # User's value takes precedence; this may collide with the incompatible options\n        # mentioned below.\n        if 
self.network_mode:\n            if sys.platform != \"linux\" and self.network_mode == \"host\":\n                warnings.warn(\n                    f\"{self.network_mode!r} network mode is not supported on platform \"\n                    f\"{sys.platform!r} and may not work as intended.\"\n                )\n            return self.network_mode\n\n        # Network mode is not compatible with networks or ports (we do not support ports\n        # yet though)\n        if self.networks:\n            return None\n\n        # Check for a local API connection\n        api_url = self.env.get(\"PREFECT_API_URL\", PREFECT_API_URL.value())\n\n        if api_url:\n            try:\n                _, netloc, _, _, _, _ = urllib.parse.urlparse(api_url)\n            except Exception as exc:\n                warnings.warn(\n                    f\"Failed to parse host from API URL {api_url!r} with exception: \"\n                    f\"{exc}\\nThe network mode will not be inferred.\"\n                )\n                return None\n\n            host = netloc.split(\":\")[0]\n\n            # If using a locally hosted API, use a host network on linux\n            if sys.platform == \"linux\" and (host == \"127.0.0.1\" or host == \"localhost\"):\n                return \"host\"\n\n        # Default to unset\n        return None\n\n    def get_extra_hosts(self, docker_client) -> Optional[Dict[str, str]]:\n        \"\"\"\n        A host.docker.internal -> host-gateway mapping is necessary for communicating\n        with the API on Linux machines. Docker Desktop on macOS will automatically\n        already have this mapping.\n        \"\"\"\n        if sys.platform == \"linux\" and (\n            # Do not warn if the user has specified a host manually that does not use\n            # a local address\n            \"PREFECT_API_URL\" not in self.env\n            or re.search(\n                \".*(localhost)|(127.0.0.1)|(host.docker.internal).*\",\n                self.env[\"PREFECT_API_URL\"],\n            )\n        ):\n            user_version = packaging.version.parse(\n                format_outlier_version_name(docker_client.version()[\"Version\"])\n            )\n            required_version = packaging.version.parse(\"20.10.0\")\n\n            if user_version < required_version:\n                warnings.warn(\n                    \"`host.docker.internal` could not be automatically resolved to\"\n                    \" your local ip address. This feature is not supported on Docker\"\n                    f\" Engine v{user_version}, upgrade to v{required_version}+ if you\"\n                    \" encounter issues.\"\n                )\n                return {}\n            else:\n                # Compatibility for linux -- https://github.com/docker/cli/issues/2290\n                # Only supported by Docker v20.10.0+ which is our minimum recommend\n                # version\n                return {\"host.docker.internal\": \"host-gateway\"}\n\n    def _determine_image_pull_policy(self) -> ImagePullPolicy:\n        \"\"\"\n        Determine the appropriate image pull policy.\n\n        1. If they specified an image pull policy, use that.\n\n        2. If they did not specify an image pull policy and gave us\n           the \"latest\" tag, use ImagePullPolicy.always.\n\n        3. If they did not specify an image pull policy and did not\n           specify a tag, use ImagePullPolicy.always.\n\n        4. 
If they did not specify an image pull policy and gave us\n           a tag other than \"latest\", use ImagePullPolicy.if_not_present.\n\n        This logic matches the behavior of Kubernetes.\n        See:https://kubernetes.io/docs/concepts/containers/images/#imagepullpolicy-defaulting\n        \"\"\"\n        if not self.image_pull_policy:\n            _, tag = parse_image_tag(self.image)\n            if tag == \"latest\" or not tag:\n                return ImagePullPolicy.ALWAYS\n            return ImagePullPolicy.IF_NOT_PRESENT\n        return ImagePullPolicy(self.image_pull_policy)\n
        "},{"location":"integrations/prefect-docker/worker/#prefect_docker.worker.DockerWorkerJobConfiguration.get_extra_hosts","title":"get_extra_hosts","text":"

        A host.docker.internal -> host-gateway mapping is necessary for communicating with the API on Linux machines. Docker Desktop on macOS already provides this mapping automatically.

        Source code in prefect_docker/worker.py
        def get_extra_hosts(self, docker_client) -> Optional[Dict[str, str]]:\n    \"\"\"\n    A host.docker.internal -> host-gateway mapping is necessary for communicating\n    with the API on Linux machines. Docker Desktop on macOS will automatically\n    already have this mapping.\n    \"\"\"\n    if sys.platform == \"linux\" and (\n        # Do not warn if the user has specified a host manually that does not use\n        # a local address\n        \"PREFECT_API_URL\" not in self.env\n        or re.search(\n            \".*(localhost)|(127.0.0.1)|(host.docker.internal).*\",\n            self.env[\"PREFECT_API_URL\"],\n        )\n    ):\n        user_version = packaging.version.parse(\n            format_outlier_version_name(docker_client.version()[\"Version\"])\n        )\n        required_version = packaging.version.parse(\"20.10.0\")\n\n        if user_version < required_version:\n            warnings.warn(\n                \"`host.docker.internal` could not be automatically resolved to\"\n                \" your local ip address. This feature is not supported on Docker\"\n                f\" Engine v{user_version}, upgrade to v{required_version}+ if you\"\n                \" encounter issues.\"\n            )\n            return {}\n        else:\n            # Compatibility for linux -- https://github.com/docker/cli/issues/2290\n            # Only supported by Docker v20.10.0+ which is our minimum recommend\n            # version\n            return {\"host.docker.internal\": \"host-gateway\"}\n
        "},{"location":"integrations/prefect-docker/worker/#prefect_docker.worker.DockerWorkerJobConfiguration.get_network_mode","title":"get_network_mode","text":"

        Returns the network mode to use for the container based on the configured options and the platform.

        Source code in prefect_docker/worker.py
        def get_network_mode(self) -> Optional[str]:\n    \"\"\"\n    Returns the network mode to use for the container based on the configured\n    options and the platform.\n    \"\"\"\n    # User's value takes precedence; this may collide with the incompatible options\n    # mentioned below.\n    if self.network_mode:\n        if sys.platform != \"linux\" and self.network_mode == \"host\":\n            warnings.warn(\n                f\"{self.network_mode!r} network mode is not supported on platform \"\n                f\"{sys.platform!r} and may not work as intended.\"\n            )\n        return self.network_mode\n\n    # Network mode is not compatible with networks or ports (we do not support ports\n    # yet though)\n    if self.networks:\n        return None\n\n    # Check for a local API connection\n    api_url = self.env.get(\"PREFECT_API_URL\", PREFECT_API_URL.value())\n\n    if api_url:\n        try:\n            _, netloc, _, _, _, _ = urllib.parse.urlparse(api_url)\n        except Exception as exc:\n            warnings.warn(\n                f\"Failed to parse host from API URL {api_url!r} with exception: \"\n                f\"{exc}\\nThe network mode will not be inferred.\"\n            )\n            return None\n\n        host = netloc.split(\":\")[0]\n\n        # If using a locally hosted API, use a host network on linux\n        if sys.platform == \"linux\" and (host == \"127.0.0.1\" or host == \"localhost\"):\n            return \"host\"\n\n    # Default to unset\n    return None\n
        "},{"location":"integrations/prefect-docker/worker/#prefect_docker.worker.DockerWorkerJobConfiguration.prepare_for_flow_run","title":"prepare_for_flow_run","text":"

        Prepares the agent for a flow run by setting the image, labels, and name attributes.

        Source code in prefect_docker/worker.py
        def prepare_for_flow_run(\n    self,\n    flow_run: \"FlowRun\",\n    deployment: Optional[\"DeploymentResponse\"] = None,\n    flow: Optional[\"Flow\"] = None,\n):\n    \"\"\"\n    Prepares the agent for a flow run by setting the image, labels, and name\n    attributes.\n    \"\"\"\n    super().prepare_for_flow_run(flow_run, deployment, flow)\n\n    self.image = self.image or get_prefect_image_name()\n    self.labels = self._convert_labels_to_docker_format(\n        {**self.labels, **CONTAINER_LABELS}\n    )\n    self.name = self._slugify_container_name()\n
        "},{"location":"integrations/prefect-docker/worker/#prefect_docker.worker.DockerWorkerResult","title":"DockerWorkerResult","text":"

        Bases: BaseWorkerResult

        Contains information about a completed Docker container

        Source code in prefect_docker/worker.py
        class DockerWorkerResult(BaseWorkerResult):\n    \"\"\"Contains information about a completed Docker container\"\"\"\n
        "},{"location":"integrations/prefect-docker/worker/#prefect_docker.worker.ImagePullPolicy","title":"ImagePullPolicy","text":"

        Bases: Enum

        Enum representing the image pull policy options for a Docker container.

        Source code in prefect_docker/worker.py
        class ImagePullPolicy(enum.Enum):\n    \"\"\"Enum representing the image pull policy options for a Docker container.\"\"\"\n\n    IF_NOT_PRESENT = \"IfNotPresent\"\n    ALWAYS = \"Always\"\n    NEVER = \"Never\"\n
        "},{"location":"integrations/prefect-email/","title":"prefect-email","text":"

        Visit the full docs here to see additional examples and the API reference.

        prefect-email is a collection of prebuilt Prefect integrations that can be used to interact with email services.

        "},{"location":"integrations/prefect-email/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-email/#integrate-with-prefect-flows","title":"Integrate with Prefect flows","text":"

        prefect-email makes sending emails effortless, giving you peace of mind that your emails are being sent as expected.

        First, install prefect-email and save your email credentials to a block to run the examples below!

        from prefect import flow\nfrom prefect_email import EmailServerCredentials, email_send_message\n\n@flow\ndef example_email_send_message_flow(email_addresses):\n    email_server_credentials = EmailServerCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n    for email_address in email_addresses:\n        subject = email_send_message.with_options(name=f\"email {email_address}\").submit(\n            email_server_credentials=email_server_credentials,\n            subject=\"Example Flow Notification using Gmail\",\n            msg=\"This proves email_send_message works!\",\n            email_to=email_address,\n        )\n\nexample_email_send_message_flow([\"EMAIL-ADDRESS-PLACEHOLDER\"])\n

        Outputs:

        16:58:27.646 | INFO    | prefect.engine - Created flow run 'busy-bat' for flow 'example-email-send-message-flow'\n16:58:29.225 | INFO    | Flow run 'busy-bat' - Created task run 'email someone@gmail.com-0' for task 'email someone@gmail.com'\n16:58:29.229 | INFO    | Flow run 'busy-bat' - Submitted task run 'email someone@gmail.com-0' for execution.\n16:58:31.523 | INFO    | Task run 'email someone@gmail.com-0' - Finished in state Completed()\n16:58:31.713 | INFO    | Flow run 'busy-bat' - Finished in state Completed('All states completed.')\n

        Please note, many email services, like Gmail, require an App Password to successfully send emails. If you encounter an error similar to smtplib.SMTPAuthenticationError: (535, b'5.7.8 Username and Password not accepted..., it's likely you are not using an App Password.

        "},{"location":"integrations/prefect-email/#capture-exceptions-and-notify-by-email","title":"Capture exceptions and notify by email","text":"

        Perhaps you want an email notification with the details of the exception when your flow run fails.

        Wrapping the email_send_message call in an except block does just that!

        from prefect import flow\nfrom prefect.context import get_run_context\nfrom prefect_email import EmailServerCredentials, email_send_message\n\ndef notify_exc_by_email(exc):\n    context = get_run_context()\n    flow_run_name = context.flow_run.name\n    email_server_credentials = EmailServerCredentials.load(\"email-server-credentials\")\n    email_send_message(\n        email_server_credentials=email_server_credentials,\n        subject=f\"Flow run {flow_run_name!r} failed\",\n        msg=f\"Flow run {flow_run_name!r} failed due to {exc}.\",\n        email_to=email_server_credentials.username,\n    )\n\n@flow\ndef example_flow():\n    try:\n        1 / 0\n    except Exception as exc:\n        notify_exc_by_email(exc)\n        raise\n\nexample_flow()\n
        "},{"location":"integrations/prefect-email/#resources","title":"Resources","text":"

        For more tips on how to use tasks and flows in a Collection, check out Using Collections!

        "},{"location":"integrations/prefect-email/#installation","title":"Installation","text":"

        Install prefect-email with pip:

        pip install prefect-email\n

        Then, register to view the block on Prefect Cloud:

        prefect block register -m prefect_email\n

        Requires an installation of Python 3.8+.

        We recommend using a Python virtual environment manager such as pipenv, conda or virtualenv.

        These tasks are designed to work with Prefect 2. For more information about how to use Prefect, please refer to the Prefect documentation.

        "},{"location":"integrations/prefect-email/#saving-credentials-to-block","title":"Saving credentials to block","text":"

        Note, to use the load method on Blocks, you must already have a block document saved through code or saved through the UI.

        Below is a walkthrough on saving block documents through code.

        Create a short script, replacing the placeholders.

        from prefect_email import EmailServerCredentials\n\ncredentials = EmailServerCredentials(\n    username=\"EMAIL-ADDRESS-PLACEHOLDER\",\n    password=\"PASSWORD-PLACEHOLDER\",  # must be an app password\n)\ncredentials.save(\"BLOCK-NAME-PLACEHOLDER\")\n

        Congrats! You can now easily load the saved block, which holds your credentials:

        from prefect_email import EmailServerCredentials\n\nEmailServerCredentials.load(\"BLOCK_NAME_PLACEHOLDER\")\n

        Registering blocks

        Register blocks in this module to view and edit them on Prefect Cloud:

        prefect block register -m prefect_email\n
        "},{"location":"integrations/prefect-email/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-email/credentials/#prefect_email.credentials","title":"prefect_email.credentials","text":"

        Credential classes used to perform authenticated interactions with email services

        "},{"location":"integrations/prefect-email/credentials/#prefect_email.credentials.EmailServerCredentials","title":"EmailServerCredentials","text":"

        Bases: Block

        Block used to manage generic email server authentication. It is recommended you use a Google App Password if you use Gmail.

        Attributes:

        Name Type Description username Optional[str]

        The username to use for authentication to the server. Unnecessary if SMTP login is not required.

        password SecretStr

        The password to use for authentication to the server. Unnecessary if SMTP login is not required.

        smtp_server Union[SMTPServer, str]

        Either the hostname of the SMTP server, or one of the keys from the built-in SMTPServer Enum members, like \"gmail\".

        smtp_type Union[SMTPType, str]

        Either \"SSL\", \"STARTTLS\", or \"INSECURE\".

        smtp_port Optional[int]

        If provided, overrides the smtp_type's default port number.

        Example

        Load stored email server credentials:

        from prefect_email import EmailServerCredentials\nemail_credentials_block = EmailServerCredentials.load(\"BLOCK_NAME\")\n
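
        The defaults target Gmail over SSL, but you can point the block at any provider by setting smtp_server, smtp_type, and smtp_port explicitly. A minimal sketch, where the host smtp.example.com, port 2525, and the credential values are all illustrative placeholders:

        from prefect_email import EmailServerCredentials\n\n# Placeholder host, port, and credentials; replace with your provider's values\ncredentials = EmailServerCredentials(\n    username=\"USERNAME-PLACEHOLDER\",\n    password=\"PASSWORD-PLACEHOLDER\",\n    smtp_server=\"smtp.example.com\",  # a hostname, or a built-in key like \"gmail\"\n    smtp_type=\"STARTTLS\",  # \"SSL\", \"STARTTLS\", or \"INSECURE\"\n    smtp_port=2525,  # overrides the smtp_type's default port\n)\ncredentials.save(\"BLOCK-NAME-PLACEHOLDER\", overwrite=True)\n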

        Source code in prefect_email/credentials.py
        class EmailServerCredentials(Block):\n    \"\"\"\n    Block used to manage generic email server authentication.\n    It is recommended you use a\n    [Google App Password](https://support.google.com/accounts/answer/185833)\n    if you use Gmail.\n\n    Attributes:\n        username: The username to use for authentication to the server.\n            Unnecessary if SMTP login is not required.\n        password: The password to use for authentication to the server.\n            Unnecessary if SMTP login is not required.\n        smtp_server: Either the hostname of the SMTP server, or one of the\n            keys from the built-in SMTPServer Enum members, like \"gmail\".\n        smtp_type: Either \"SSL\", \"STARTTLS\", or \"INSECURE\".\n        smtp_port: If provided, overrides the smtp_type's default port number.\n\n    Example:\n        Load stored email server credentials:\n        ```python\n        from prefect_email import EmailServerCredentials\n        email_credentials_block = EmailServerCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"  # noqa E501\n\n    _block_type_name = \"Email Server Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/82bc6ed16ca42a2252a5512c72233a253b8a58eb-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-email/credentials/#prefect_email.credentials.EmailServerCredentials\"  # noqa\n\n    username: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The username to use for authentication to the server. \"\n            \"Unnecessary if SMTP login is not required.\"\n        ),\n    )\n    password: SecretStr = Field(\n        default_factory=partial(SecretStr, \"\"),\n        description=(\n            \"The password to use for authentication to the server. 
\"\n            \"Unnecessary if SMTP login is not required.\"\n        ),\n    )\n    smtp_server: Union[SMTPServer, str] = Field(\n        default=SMTPServer.GMAIL,\n        description=(\n            \"Either the hostname of the SMTP server, or one of the \"\n            \"keys from the built-in SMTPServer Enum members, like 'gmail'.\"\n        ),\n        title=\"SMTP Server\",\n    )\n    smtp_type: Union[SMTPType, str] = Field(\n        default=SMTPType.SSL,\n        description=(\"Either 'SSL', 'STARTTLS', or 'INSECURE'.\"),\n        title=\"SMTP Type\",\n    )\n    smtp_port: Optional[int] = Field(\n        default=None,\n        description=(\"If provided, overrides the smtp_type's default port number.\"),\n        title=\"SMTP Port\",\n    )\n\n    @validator(\"smtp_server\", pre=True)\n    def _cast_smtp_server(cls, value):\n        \"\"\"\n        Cast the smtp_server to an SMTPServer Enum member, if valid.\n        \"\"\"\n        return _cast_to_enum(value, SMTPServer)\n\n    @validator(\"smtp_type\", pre=True)\n    def _cast_smtp_type(cls, value):\n        \"\"\"\n        Cast the smtp_type to an SMTPType Enum member, if valid.\n        \"\"\"\n        if isinstance(value, int):\n            return SMTPType(value)\n        return _cast_to_enum(value, SMTPType, restrict=True)\n\n    def get_server(self) -> SMTP:\n        \"\"\"\n        Gets an authenticated SMTP server.\n\n        Returns:\n            SMTP: An authenticated SMTP server.\n\n        Example:\n            Gets a GMail SMTP server through defaults.\n            ```python\n            from prefect import flow\n            from prefect_email import EmailServerCredentials\n\n            @flow\n            def example_get_server_flow():\n                email_server_credentials = EmailServerCredentials(\n                    username=\"username@gmail.com\",\n                    password=\"password\",\n                )\n                server = email_server_credentials.get_server()\n                return server\n\n            example_get_server_flow()\n            ```\n        \"\"\"\n        smtp_server = self.smtp_server\n        if isinstance(smtp_server, SMTPServer):\n            smtp_server = smtp_server.value\n\n        smtp_type = self.smtp_type\n        smtp_port = self.smtp_port\n        if smtp_port is None:\n            smtp_port = smtp_type.value\n\n        if smtp_type == SMTPType.INSECURE:\n            server = SMTP(smtp_server, smtp_port)\n        else:\n            context = ssl.create_default_context()\n            if smtp_type == SMTPType.SSL:\n                server = SMTP_SSL(smtp_server, smtp_port, context=context)\n            elif smtp_type == SMTPType.STARTTLS:\n                server = SMTP(smtp_server, smtp_port)\n                server.starttls(context=context)\n            if self.username is not None:\n                server.login(self.username, self.password.get_secret_value())\n\n        return server\n
        "},{"location":"integrations/prefect-email/credentials/#prefect_email.credentials.EmailServerCredentials.get_server","title":"get_server","text":"

        Gets an authenticated SMTP server.

        Returns:

        Name Type Description SMTP SMTP

        An authenticated SMTP server.

        Example

        Gets a GMail SMTP server through defaults.

        from prefect import flow\nfrom prefect_email import EmailServerCredentials\n\n@flow\ndef example_get_server_flow():\n    email_server_credentials = EmailServerCredentials(\n        username=\"username@gmail.com\",\n        password=\"password\",\n    )\n    server = email_server_credentials.get_server()\n    return server\n\nexample_get_server_flow()\n

        Source code in prefect_email/credentials.py
        def get_server(self) -> SMTP:\n    \"\"\"\n    Gets an authenticated SMTP server.\n\n    Returns:\n        SMTP: An authenticated SMTP server.\n\n    Example:\n        Gets a GMail SMTP server through defaults.\n        ```python\n        from prefect import flow\n        from prefect_email import EmailServerCredentials\n\n        @flow\n        def example_get_server_flow():\n            email_server_credentials = EmailServerCredentials(\n                username=\"username@gmail.com\",\n                password=\"password\",\n            )\n            server = email_server_credentials.get_server()\n            return server\n\n        example_get_server_flow()\n        ```\n    \"\"\"\n    smtp_server = self.smtp_server\n    if isinstance(smtp_server, SMTPServer):\n        smtp_server = smtp_server.value\n\n    smtp_type = self.smtp_type\n    smtp_port = self.smtp_port\n    if smtp_port is None:\n        smtp_port = smtp_type.value\n\n    if smtp_type == SMTPType.INSECURE:\n        server = SMTP(smtp_server, smtp_port)\n    else:\n        context = ssl.create_default_context()\n        if smtp_type == SMTPType.SSL:\n            server = SMTP_SSL(smtp_server, smtp_port, context=context)\n        elif smtp_type == SMTPType.STARTTLS:\n            server = SMTP(smtp_server, smtp_port)\n            server.starttls(context=context)\n        if self.username is not None:\n            server.login(self.username, self.password.get_secret_value())\n\n    return server\n
        "},{"location":"integrations/prefect-email/credentials/#prefect_email.credentials.SMTPServer","title":"SMTPServer","text":"

        Bases: Enum

        Server used to send email.

        Source code in prefect_email/credentials.py
        class SMTPServer(Enum):\n    \"\"\"\n    Server used to send email.\n    \"\"\"\n\n    AOL = \"smtp.aol.com\"\n    ATT = \"smtp.mail.att.net\"\n    COMCAST = \"smtp.comcast.net\"\n    ICLOUD = \"smtp.mail.me.com\"\n    GMAIL = \"smtp.gmail.com\"\n    OUTLOOK = \"smtp-mail.outlook.com\"\n    YAHOO = \"smtp.mail.yahoo.com\"\n
        "},{"location":"integrations/prefect-email/credentials/#prefect_email.credentials.SMTPType","title":"SMTPType","text":"

        Bases: Enum

        Protocols used to secure email transmissions.

        Source code in prefect_email/credentials.py
        class SMTPType(Enum):\n    \"\"\"\n    Protocols used to secure email transmissions.\n    \"\"\"\n\n    SSL = 465\n    STARTTLS = 587\n    INSECURE = 25\n
        "},{"location":"integrations/prefect-email/message/","title":"Message","text":""},{"location":"integrations/prefect-email/message/#prefect_email.message","title":"prefect_email.message","text":"

        Tasks for interacting with email message services

        "},{"location":"integrations/prefect-email/message/#prefect_email.message.email_send_message","title":"email_send_message async","text":"

        Sends an email message from an authenticated email service over SMTP. Sending messages containing HTML is supported; the default MIME type is text/html.

        Parameters:

        Name Type Description Default subject str

        The subject line of the email.

        required msg str

        The contents of the email, added as html; can be used in combination with msg_plain.

        required msg_plain Optional[str]

        The contents of the email as plain text, can be used in combination with msg.

        None email_to Optional[Union[str, List[str]]]

        The email addresses to send the message to, separated by commas. If a list is provided, will join the items, separated by commas.

        None email_to_cc Optional[Union[str, List[str]]]

        Additional email addresses to send the message to as cc, separated by commas. If a list is provided, will join the items, separated by commas.

        None email_to_bcc Optional[Union[str, List[str]]]

        Additional email addresses to send the message to as bcc, separated by commas. If a list is provided, will join the items, separated by commas.

        None attachments Optional[List[str]]

        Names of files that should be sent as attachment.

        None

        Returns:

        Name Type Description MimeText

        The MIME Multipart message of the email.

        Example

        Sends a notification email to someone@gmail.com.

        from prefect import flow\nfrom prefect_email import EmailServerCredentials, email_send_message\n\n@flow\ndef example_email_send_message_flow():\n    email_server_credentials = EmailServerCredentials(\n        username=\"username@email.com\",\n        password=\"password\",\n    )\n    subject = email_send_message(\n        email_server_credentials=email_server_credentials,\n        subject=\"Example Flow Notification\",\n        msg=\"This proves email_send_message works!\",\n        email_to=\"someone@email.com\",\n    )\n    return subject\n\nexample_email_send_message_flow()\n
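
        Since msg is sent as HTML, you can pair it with msg_plain as a plain-text fallback and pass lists for the recipient fields. A minimal sketch under those assumptions; the block name, email addresses, and report.csv attachment are placeholders:

        from prefect import flow\nfrom prefect_email import EmailServerCredentials, email_send_message\n\n@flow\ndef example_html_email_flow():\n    email_server_credentials = EmailServerCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n    email_send_message(\n        email_server_credentials=email_server_credentials,\n        subject=\"Weekly report\",\n        msg=\"<h1>Report</h1><p>See the attached file.</p>\",  # sent as HTML\n        msg_plain=\"Report: see the attached file.\",  # plain-text fallback\n        email_to=[\"first@example.com\", \"second@example.com\"],  # joined with commas\n        email_to_cc=\"manager@example.com\",\n        attachments=[\"report.csv\"],  # path must exist locally\n    )\n\nexample_html_email_flow()\n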

        Source code in prefect_email/message.py
        @task\nasync def email_send_message(\n    subject: str,\n    msg: str,\n    email_server_credentials: \"EmailServerCredentials\",\n    msg_plain: Optional[str] = None,\n    email_from: Optional[str] = None,\n    email_to: Optional[Union[str, List[str]]] = None,\n    email_to_cc: Optional[Union[str, List[str]]] = None,\n    email_to_bcc: Optional[Union[str, List[str]]] = None,\n    attachments: Optional[List[str]] = None,\n):\n    \"\"\"\n    Sends an email message from an authenticated email service over SMTP.\n    Sending messages containing HTML code is supported - the default MIME\n    type is set to the text/html.\n\n    Args:\n        subject: The subject line of the email.\n        msg: The contents of the email, added as html; can be used in\n            combination with msg_plain.\n        msg_plain: The contents of the email as plain text,\n            can be used in combination with msg.\n        email_to: The email addresses to send the message to, separated by commas.\n            If a list is provided, will join the items, separated by commas.\n        email_to_cc: Additional email addresses to send the message to as cc,\n            separated by commas. If a list is provided, will join the items,\n            separated by commas.\n        email_to_bcc: Additional email addresses to send the message to as bcc,\n            separated by commas. If a list is provided, will join the items,\n            separated by commas.\n        attachments: Names of files that should be sent as attachment.\n\n    Returns:\n        MimeText: The MIME Multipart message of the email.\n\n    Example:\n        Sends a notification email to someone@gmail.com.\n        ```python\n        from prefect import flow\n        from prefect_email import EmailServerCredentials, email_send_message\n\n        @flow\n        def example_email_send_message_flow():\n            email_server_credentials = EmailServerCredentials(\n                username=\"username@email.com\",\n                password=\"password\",\n            )\n            subject = email_send_message(\n                email_server_credentials=email_server_credentials,\n                subject=\"Example Flow Notification\",\n                msg=\"This proves email_send_message works!\",\n                email_to=\"someone@email.com\",\n            )\n            return subject\n\n        example_email_send_message_flow()\n        ```\n    \"\"\"\n    message = MIMEMultipart()\n    message[\"Subject\"] = subject\n    message[\"From\"] = email_from or email_server_credentials.username\n\n    email_to_dict = {\"To\": email_to, \"Cc\": email_to_cc, \"Bcc\": email_to_bcc}\n    if all(val is None for val in email_to_dict.values()):\n        raise ValueError(\n            \"One of email_to, email_to_cc, or email_to_bcc must be specified\"\n        )\n\n    for key, val in email_to_dict.items():\n        if isinstance(val, list):\n            val = \", \".join(val)\n        message[key] = val\n\n    # First add the message in plain text, then the HTML version;\n    # email clients try to render the last part first\n    if msg_plain:\n        message.attach(MIMEText(msg_plain, \"plain\"))\n    if msg:\n        message.attach(MIMEText(msg, \"html\"))\n\n    for filepath in attachments or []:\n        with open(filepath, \"rb\") as attachment:\n            part = MIMEBase(\"application\", \"octet-stream\")\n            part.set_payload(attachment.read())\n\n        encoders.encode_base64(part)\n        filename = os.path.basename(filepath)\n    
    part.add_header(\n            \"Content-Disposition\",\n            f\"attachment; filename= {filename}\",\n        )\n        message.attach(part)\n\n    with email_server_credentials.get_server() as server:\n        partial_send_message = partial(server.send_message, message)\n        await to_thread.run_sync(partial_send_message)\n\n    return message\n
        "},{"location":"integrations/prefect-gcp/","title":"prefect-gcp","text":"

        prefect-gcp makes it easy to leverage the capabilities of Google Cloud Platform (GCP) in your flows, featuring support for Vertex AI, Cloud Run, BigQuery, Cloud Storage, and Secret Manager.

        "},{"location":"integrations/prefect-gcp/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-gcp/#saving-credentials-to-a-block","title":"Saving credentials to a block","text":"

        To use prefect-gcp, first install it and authenticate with a service account.

        prefect-gcp can safely save and load your service account credentials, so they can be reused across the collection! Simply follow the steps below.

        1. Refer to the GCP service account documentation on how to create and download a service account key file.
        2. Copy the JSON contents.
        3. Create a short script, replacing the placeholders with your information.
        from prefect_gcp import GcpCredentials\n\n# replace this PLACEHOLDER dict with your own service account info\nservice_account_info = {\n  \"type\": \"service_account\",\n  \"project_id\": \"PROJECT_ID\",\n  \"private_key_id\": \"KEY_ID\",\n  \"private_key\": \"-----BEGIN PRIVATE KEY-----\\nPRIVATE_KEY\\n-----END PRIVATE KEY-----\\n\",\n  \"client_email\": \"SERVICE_ACCOUNT_EMAIL\",\n  \"client_id\": \"CLIENT_ID\",\n  \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",\n  \"token_uri\": \"https://accounts.google.com/o/oauth2/token\",\n  \"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\n  \"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/SERVICE_ACCOUNT_EMAIL\"\n}\n\nGcpCredentials(\n    service_account_info=service_account_info\n).save(\"BLOCK-NAME-PLACEHOLDER\")\n

        service_account_info vs service_account_file

        The advantage of using service_account_info, instead of service_account_file, is that it is accessible across containers.

        If service_account_file is used, the provided file path must be available in the container executing the flow.
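
        For comparison, a minimal sketch of the service_account_file variant; the key file path and block name below are placeholders, and the path must be readable wherever the flow runs:

        from prefect_gcp import GcpCredentials\n\n# Placeholder path; must point to a service account key file available at runtime\nGcpCredentials(\n    service_account_file=\"/path/to/service-account-key.json\"\n).save(\"BLOCK-NAME-PLACEHOLDER\", overwrite=True)\n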

        Congrats! You can now easily load the saved block, which holds your credentials:

        from prefect_gcp import GcpCredentials\nGcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n

        Registering blocks

        Register blocks in this module to view and edit them on Prefect Cloud:

        prefect block register -m prefect_gcp\n
        "},{"location":"integrations/prefect-gcp/#using-prefect-with-google-cloud-run","title":"Using Prefect with Google Cloud Run","text":"

        Is your local computer or server running out of memory or taking too long to complete a job?

        prefect_gcp offers a solution by enabling you to execute your Prefect flows remotely, on demand, through Google Cloud Run.

        The following code snippets demonstrate how prefect_gcp can be used to run a job on Cloud Run, either as part of a Prefect deployment's infrastructure or within a flow.

        "},{"location":"integrations/prefect-gcp/#as-infrastructure","title":"As Infrastructure","text":"

        Below is a simple walkthrough for how to use Google Cloud Run as infrastructure for a deployment.

        "},{"location":"integrations/prefect-gcp/#set-variables","title":"Set variables","text":"

        To expedite copy/paste without needing to update placeholders manually, update and execute the following.

        export CREDENTIALS_BLOCK_NAME=\"BLOCK-NAME-PLACEHOLDER\"\nexport CLOUD_RUN_JOB_BLOCK_NAME=\"cloud-run-job-example\"\nexport CLOUD_RUN_JOB_REGION=\"us-central1\"\nexport GCS_BUCKET_BLOCK_NAME=\"cloud-run-job-bucket-example\"\nexport GCP_PROJECT_ID=$(gcloud config get-value project)\n
        "},{"location":"integrations/prefect-gcp/#build-an-image","title":"Build an image","text":"

        First, find an existing image within the Google Artifact Registry. Ensure it has Python and prefect-gcp[cloud_storage] installed, or follow the instructions below to set one up.

        Create a Dockerfile.

        FROM prefecthq/prefect:2-python3.11\nRUN pip install \"prefect-gcp[cloud_storage]\"\n

        Then push to the Google Artifact Registry.

        gcloud artifacts repositories create test-example-repository --repository-format=docker --location=us\ngcloud auth configure-docker us-docker.pkg.dev\ndocker build -t us-docker.pkg.dev/${GCP_PROJECT_ID}/test-example-repository/prefect-gcp:2-python3.11 .\ndocker push us-docker.pkg.dev/${GCP_PROJECT_ID}/test-example-repository/prefect-gcp:2-python3.11\n
        "},{"location":"integrations/prefect-gcp/#save-an-infrastructure-and-storage-block","title":"Save an infrastructure and storage block","text":"

        Save a custom infrastructure and storage block by executing the following snippet.

        import os\nfrom prefect_gcp import GcpCredentials, CloudRunJob, GcsBucket\n\ngcp_credentials = GcpCredentials.load(os.environ[\"CREDENTIALS_BLOCK_NAME\"])\n\n# must be from GCR and have Python + Prefect\nimage = f\"us-docker.pkg.dev/{os.environ['GCP_PROJECT_ID']}/test-example-repository/prefect-gcp:2-python3.11\"  # noqa\n\ncloud_run_job = CloudRunJob(\n    image=image,\n    credentials=gcp_credentials,\n    region=os.environ[\"CLOUD_RUN_JOB_REGION\"],\n)\ncloud_run_job.save(os.environ[\"CLOUD_RUN_JOB_BLOCK_NAME\"], overwrite=True)\n\nbucket_name = \"cloud-run-job-bucket\"\ncloud_storage_client = gcp_credentials.get_cloud_storage_client()\ncloud_storage_client.create_bucket(bucket_name)\ngcs_bucket = GcsBucket(\n    bucket=bucket_name,\n    gcp_credentials=gcp_credentials,\n)\ngcs_bucket.save(os.environ[\"GCS_BUCKET_BLOCK_NAME\"], overwrite=True)\n
        "},{"location":"integrations/prefect-gcp/#write-a-flow","title":"Write a flow","text":"

        Then, create a deployment from an existing flow, or use the flow below if you don't have one handy.

        from prefect import flow\n\n@flow(log_prints=True)\ndef cloud_run_job_flow():\n    print(\"Hello, Prefect!\")\n\nif __name__ == \"__main__\":\n    cloud_run_job_flow()\n
        "},{"location":"integrations/prefect-gcp/#create-a-deployment","title":"Create a deployment","text":"

        If the script was named \"cloud_run_job_script.py\", build a deployment manifest with the following command.

        prefect deployment build cloud_run_job_script.py:cloud_run_job_flow \\\n    -n cloud-run-deployment \\\n    -ib cloud-run-job/${CLOUD_RUN_JOB_BLOCK_NAME} \\\n    -sb gcs-bucket/${GCS_BUCKET_BLOCK_NAME}\n

        Now apply the deployment!

        prefect deployment apply cloud_run_job_flow-deployment.yaml\n
        "},{"location":"integrations/prefect-gcp/#test-the-deployment","title":"Test the deployment","text":"

        Start up an agent in a separate terminal. The agent will poll the Prefect API for scheduled flow runs that are ready to run.

        prefect agent start -q 'default'\n

        Run the deployment once to test.

        prefect deployment run cloud-run-job-flow/cloud-run-deployment\n

        Once the flow run has completed, you will see Hello, Prefect! logged in the Prefect UI.

        No class found for dispatch key

        If you encounter an error message like KeyError: \"No class found for dispatch key 'cloud-run-job' in registry for type 'Block'.\", ensure prefect-gcp is installed in the environment in which your agent is running!

        "},{"location":"integrations/prefect-gcp/#within-flow","title":"Within Flow","text":"

        You can execute commands through Cloud Run Job directly within a Prefect flow.

        from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.cloud_run import CloudRunJob\n\n@flow\ndef cloud_run_job_flow():\n    cloud_run_job = CloudRunJob(\n        image=\"us-docker.pkg.dev/cloudrun/container/job:latest\",\n        credentials=GcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\"),\n        region=\"us-central1\",\n        command=[\"echo\", \"Hello, Prefect!\"],\n    )\n    return cloud_run_job.run()\n
        "},{"location":"integrations/prefect-gcp/#using-prefect-with-google-vertex-ai","title":"Using Prefect with Google Vertex AI","text":"

        prefect_gcp also enables you to execute your Prefect flows remotely, on demand, using Google Vertex AI!

        Be sure to additionally install the AI Platform extra!

        Setting up a Vertex AI job is extremely similar to setting up a Cloud Run Job, but replace CloudRunJob with the following snippet.

        from prefect_gcp import GcpCredentials, VertexAICustomTrainingJob, GcsBucket\n\ngcp_credentials = GcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n\nvertex_ai_job = VertexAICustomTrainingJob(\n    image=\"IMAGE-NAME-PLACEHOLDER\",  # must be from GCR and have Python + Prefect\n    gcp_credentials=gcp_credentials,\n    region=\"us-central1\",\n)\nvertex_ai_job.save(\"test-example\")\n

        Cloud Run Job vs Vertex AI

        With Vertex AI, you can allocate computational resources on-the-fly for your executions, much like Cloud Run.

        However, unlike Cloud Run, you have the flexibility to provision instances with higher CPU, GPU, TPU, and RAM capacities.

        Additionally, jobs can run for up to 7 days, which is significantly longer than the maximum duration allowed on Cloud Run.
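
        As a rough sketch of that flexibility, the VertexAICustomTrainingJob block exposes machine and accelerator settings; the machine type, GPU type, and block names below are illustrative placeholders:

        from prefect_gcp import GcpCredentials, VertexAICustomTrainingJob\n\nvertex_ai_job = VertexAICustomTrainingJob(\n    image=\"IMAGE-NAME-PLACEHOLDER\",  # must be from GCR/Artifact Registry and have Python + Prefect\n    gcp_credentials=GcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\"),\n    region=\"us-central1\",\n    machine_type=\"n1-standard-8\",  # more CPU and RAM than the default n1-standard-4\n    accelerator_type=\"NVIDIA_TESLA_T4\",  # illustrative GPU type\n    accelerator_count=1,\n)\nvertex_ai_job.save(\"vertex-ai-gpu-example\", overwrite=True)\n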

        "},{"location":"integrations/prefect-gcp/#using-prefect-with-google-bigquery","title":"Using Prefect with Google BigQuery","text":"

        Got big data in BigQuery? prefect_gcp allows you to steadily stream data from and write to Google BigQuery within your Prefect flows!

        Be sure to install prefect-gcp with the BigQuery extra!

        The provided code snippet shows how you can use prefect_gcp to create a new dataset in BigQuery, define a table, insert rows, and fetch data from the table.

        from prefect import flow\nfrom prefect_gcp.bigquery import GcpCredentials, BigQueryWarehouse\n\n@flow\ndef bigquery_flow():\n    all_rows = []\n    gcp_credentials = GcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n\n    client = gcp_credentials.get_bigquery_client()\n    client.create_dataset(\"test_example\", exists_ok=True)\n\n    with BigQueryWarehouse(gcp_credentials=gcp_credentials) as warehouse:\n        warehouse.execute(\n            \"CREATE TABLE IF NOT EXISTS test_example.customers (name STRING, address STRING);\"\n        )\n        warehouse.execute_many(\n            \"INSERT INTO test_example.customers (name, address) VALUES (%(name)s, %(address)s);\",\n            seq_of_parameters=[\n                {\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n                {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                {\"name\": \"Unknown\", \"address\": \"Highway 42\"},\n            ],\n        )\n        while True:\n            # Repeated fetch* calls using the same operation will\n            # skip re-executing and instead return the next set of results\n            new_rows = warehouse.fetch_many(\"SELECT * FROM test_example.customers\", size=2)\n            if len(new_rows) == 0:\n                break\n            all_rows.extend(new_rows)\n    return all_rows\n\nbigquery_flow()\n
        "},{"location":"integrations/prefect-gcp/#using-prefect-with-google-cloud-storage","title":"Using Prefect with Google Cloud Storage","text":"

        With prefect_gcp, your Prefect flows can seamlessly upload and download objects to and from Google Cloud Storage, and these actions are logged for peace of mind.

        Be sure to additionally install prefect-gcp with the Cloud Storage extra!

        The provided code snippet shows how you can use prefect_gcp to upload a file to a Google Cloud Storage bucket and download the same file under a different file name.

        from pathlib import Path\nfrom prefect import flow\nfrom prefect_gcp import GcpCredentials, GcsBucket\n\n\n@flow\ndef cloud_storage_flow():\n    # create a dummy file to upload\n    file_path = Path(\"test-example.txt\")\n    file_path.write_text(\"Hello, Prefect!\")\n\n    gcp_credentials = GcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n    gcs_bucket = GcsBucket(\n        bucket=\"BUCKET-NAME-PLACEHOLDER\",\n        gcp_credentials=gcp_credentials\n    )\n\n    gcs_bucket_path = gcs_bucket.upload_from_path(file_path)\n    downloaded_file_path = gcs_bucket.download_object_to_path(\n        gcs_bucket_path, \"downloaded-test-example.txt\"\n    )\n    return downloaded_file_path.read_text()\n\n\ncloud_storage_flow()\n

        Upload and download directories

        GcsBucket supports uploading and downloading entire directories. To view examples, check out the Examples Catalog!
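
        A minimal sketch of working with directories, assuming GcsBucket's upload_from_folder and download_folder_to_path helpers and placeholder bucket, block, and folder names:

        from prefect_gcp import GcpCredentials, GcsBucket\n\ngcs_bucket = GcsBucket(\n    bucket=\"BUCKET-NAME-PLACEHOLDER\",\n    gcp_credentials=GcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\"),\n)\n\n# Folder names are placeholders; both calls walk the directory tree\ngcs_bucket.upload_from_folder(\"local-data\", \"remote-data\")\ngcs_bucket.download_folder_to_path(\"remote-data\", \"downloaded-data\")\n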

        "},{"location":"integrations/prefect-gcp/#using-prefect-with-google-secret-manager","title":"Using Prefect with Google Secret Manager","text":"

        Do you already have secrets available on Google Secret Manager? There's no need to migrate them!

        prefect_gcp allows you to read and write secrets with Google Secret Manager within your Prefect flows.

        Be sure to install prefect-gcp with the Secret Manager extra!

        The provided code snippet shows how you can use prefect_gcp to write a secret to the Secret Manager, read the secret data, delete the secret, and finally return the secret data.

        from prefect import flow\nfrom prefect_gcp import GcpCredentials, GcpSecret\n\n\n@flow\ndef secret_manager_flow():\n    gcp_credentials = GcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n    gcp_secret = GcpSecret(secret_name=\"test-example\", gcp_credentials=gcp_credentials)\n    gcp_secret.write_secret(secret_data=b\"Hello, Prefect!\")\n    secret_data = gcp_secret.read_secret()\n    gcp_secret.delete_secret()\n    return secret_data\n\nsecret_manager_flow()\n
        "},{"location":"integrations/prefect-gcp/#accessing-google-credentials-or-clients-from-gcpcredentials","title":"Accessing Google credentials or clients from GcpCredentials","text":"

        In the case that prefect-gcp is missing a feature, feel free to submit an issue.

        In the meantime, you may want to access the underlying Google Cloud credentials or clients, which prefect-gcp exposes via the GcpCredentials block.

        The provided code snippet shows how you can use prefect_gcp to instantiate a Google Cloud client, like bigquery.Client.

        Note that a GcpCredentials object is NOT a valid input to the underlying BigQuery client; use the get_credentials_from_service_account method to access and pass an actual google.auth.Credentials object.

        from google.cloud import bigquery\nfrom prefect import flow\nfrom prefect_gcp import GcpCredentials\n\n@flow\ndef create_bigquery_client():\n    gcp_credentials = GcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n    google_auth_credentials = gcp_credentials.get_credentials_from_service_account()\n    bigquery_client = bigquery.Client(credentials=google_auth_credentials)\n

        If you simply want to access the underlying client, prefect-gcp exposes a get_client method from GcpCredentials.

        from prefect import flow\nfrom prefect_gcp import GcpCredentials\n\n@flow\ndef create_bigquery_client():\n    gcp_credentials = GcpCredentials.load(\"BLOCK-NAME-PLACEHOLDER\")\n    bigquery_client = gcp_credentials.get_client(\"bigquery\")\n
        "},{"location":"integrations/prefect-gcp/#resources","title":"Resources","text":"

        For more tips on how to use tasks and flows in a Collection, check out Using Collections!

        "},{"location":"integrations/prefect-gcp/#installation","title":"Installation","text":"

        To use prefect-gcp and Cloud Run:

        pip install prefect-gcp\n

        To use Cloud Storage:

        pip install \"prefect-gcp[cloud_storage]\"\n

        To use BigQuery:

        pip install \"prefect-gcp[bigquery]\"\n

        To use Secret Manager:

        pip install \"prefect-gcp[secret_manager]\"\n

        To use Vertex AI:

        pip install \"prefect-gcp[aiplatform]\"\n

        A list of available blocks in prefect-gcp and their setup instructions can be found here.

        Requires an installation of Python 3.7+.

        We recommend using a Python virtual environment manager such as pipenv, conda or virtualenv.

        These tasks are designed to work with Prefect 2. For more information about how to use Prefect, please refer to the Prefect documentation.

        "},{"location":"integrations/prefect-gcp/#feedback","title":"Feedback","text":"

        If you encounter any bugs while using prefect-gcp, feel free to open an issue in the prefect-gcp repository.

        If you have any questions or issues while using prefect-gcp, you can find help in either the Prefect Discourse forum or the Prefect Slack community.

        Feel free to star or watch prefect-gcp for updates too!

        "},{"location":"integrations/prefect-gcp/aiplatform/","title":"AI Platform","text":""},{"location":"integrations/prefect-gcp/aiplatform/#prefect_gcp.aiplatform","title":"prefect_gcp.aiplatform","text":"

        DEPRECATION WARNING:

        This module is deprecated as of March 2024 and will not be available after September 2024. It has been replaced by the Vertex AI worker, which offers enhanced functionality and better performance.

        For upgrade instructions, see https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.

        Integrations with Google AI Platform.

        Examples:

        Run a job using Vertex AI Custom Training:\n```python\nfrom prefect_gcp.credentials import GcpCredentials\nfrom prefect_gcp.aiplatform import VertexAICustomTrainingJob\n\ngcp_credentials = GcpCredentials.load(\"BLOCK_NAME\")\njob = VertexAICustomTrainingJob(\n    region=\"us-east1\",\n    image=\"us-docker.pkg.dev/cloudrun/container/job:latest\",\n    gcp_credentials=gcp_credentials,\n)\njob.run()\n```\n\nRun a job that runs the command `echo hello world` using Google Cloud Run Jobs:\n```python\nfrom prefect_gcp.credentials import GcpCredentials\nfrom prefect_gcp.aiplatform import VertexAICustomTrainingJob\n\ngcp_credentials = GcpCredentials.load(\"BLOCK_NAME\")\njob = VertexAICustomTrainingJob(\n    command=[\"echo\", \"hello world\"],\n    region=\"us-east1\",\n    image=\"us-docker.pkg.dev/cloudrun/container/job:latest\",\n    gcp_credentials=gcp_credentials,\n)\njob.run()\n```\n\nPreview job specs:\n```python\nfrom prefect_gcp.credentials import GcpCredentials\nfrom prefect_gcp.aiplatform import VertexAICustomTrainingJob\n\ngcp_credentials = GcpCredentials.load(\"BLOCK_NAME\")\njob = VertexAICustomTrainingJob(\n    command=[\"echo\", \"hello world\"],\n    region=\"us-east1\",\n    image=\"us-docker.pkg.dev/cloudrun/container/job:latest\",\n    gcp_credentials=gcp_credentials,\n)\njob.preview()\n```\n
        "},{"location":"integrations/prefect-gcp/aiplatform/#prefect_gcp.aiplatform.VertexAICustomTrainingJob","title":"VertexAICustomTrainingJob","text":"

        Bases: Infrastructure

        Infrastructure block used to run Vertex AI custom training jobs.

        Source code in prefect_gcp/aiplatform.py
        @deprecated_class(\n    start_date=\"Mar 2024\",\n    help=(\n        \"Use the Vertex AI worker instead.\"\n        \" Refer to the upgrade guide for more information:\"\n        \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\"\n    ),\n)\nclass VertexAICustomTrainingJob(Infrastructure):\n    \"\"\"\n    Infrastructure block used to run Vertex AI custom training jobs.\n    \"\"\"\n\n    _block_type_name = \"Vertex AI Custom Training Job\"\n    _block_type_slug = \"vertex-ai-custom-training-job\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/10424e311932e31c477ac2b9ef3d53cefbaad708-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-gcp/aiplatform/#prefect_gcp.aiplatform.VertexAICustomTrainingJob\"  # noqa: E501\n\n    type: Literal[\"vertex-ai-custom-training-job\"] = Field(\n        \"vertex-ai-custom-training-job\", description=\"The slug for this task type.\"\n    )\n\n    gcp_credentials: GcpCredentials = Field(\n        default_factory=GcpCredentials,\n        description=(\n            \"GCP credentials to use when running the configured Vertex AI custom \"\n            \"training job. If not provided, credentials will be inferred from the \"\n            \"environment. See `GcpCredentials` for details.\"\n        ),\n    )\n    region: str = Field(\n        default=...,\n        description=\"The region where the Vertex AI custom training job resides.\",\n    )\n    image: str = Field(\n        default=...,\n        title=\"Image Name\",\n        description=(\n            \"The image to use for a new Vertex AI custom training job. This value must \"\n            \"refer to an image within either Google Container Registry \"\n            \"or Google Artifact Registry, like `gcr.io/<project_name>/<repo>/`.\"\n        ),\n    )\n    env: Dict[str, str] = Field(\n        default_factory=dict,\n        title=\"Environment Variables\",\n        description=\"Environment variables to be passed to your Cloud Run Job.\",\n    )\n    machine_type: str = Field(\n        default=\"n1-standard-4\",\n        description=\"The machine type to use for the run, which controls the available \"\n        \"CPU and memory.\",\n    )\n    accelerator_type: Optional[str] = Field(\n        default=None, description=\"The type of accelerator to attach to the machine.\"\n    )\n    accelerator_count: Optional[int] = Field(\n        default=None, description=\"The number of accelerators to attach to the machine.\"\n    )\n    boot_disk_type: str = Field(\n        default=\"pd-ssd\",\n        title=\"Boot Disk Type\",\n        description=\"The type of boot disk to attach to the machine.\",\n    )\n    boot_disk_size_gb: int = Field(\n        default=100,\n        title=\"Boot Disk Size\",\n        description=\"The size of the boot disk to attach to the machine, in gigabytes.\",\n    )\n    maximum_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(days=7), description=\"The maximum job running time.\"\n    )\n    network: Optional[str] = Field(\n        default=None,\n        description=\"The full name of the Compute Engine network\"\n        \"to which the Job should be peered. Private services access must \"\n        \"already be configured for the network. 
If left unspecified, the job \"\n        \"is not peered with any network.\",\n    )\n    reserved_ip_ranges: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of names for the reserved ip ranges under the VPC \"\n        \"network that can be used for this job. If set, we will deploy the job \"\n        \"within the provided ip ranges. Otherwise, the job will be deployed to \"\n        \"any ip ranges under the provided VPC network.\",\n    )\n    service_account: Optional[str] = Field(\n        default=None,\n        description=(\n            \"Specifies the service account to use \"\n            \"as the run-as account in Vertex AI. The agent submitting jobs must have \"\n            \"act-as permission on this run-as account. If unspecified, the AI \"\n            \"Platform Custom Code Service Agent for the CustomJob's project is \"\n            \"used. Takes precedence over the service account found in gcp_credentials, \"\n            \"and required if a service account cannot be detected in gcp_credentials.\"\n        ),\n    )\n    job_watch_poll_interval: float = Field(\n        default=5.0,\n        description=(\n            \"The amount of time to wait between GCP API calls while monitoring the \"\n            \"state of a Vertex AI Job.\"\n        ),\n    )\n\n    @property\n    def job_name(self):\n        \"\"\"\n        The name can be up to 128 characters long and can be consist of any UTF-8 characters. Reference:\n        https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.CustomJob#google_cloud_aiplatform_CustomJob_display_name\n        \"\"\"  # noqa\n        try:\n            base_name = self.name or self.image.split(\"/\")[2]\n            return f\"{base_name}-{uuid4().hex}\"\n        except IndexError:\n            raise ValueError(\n                \"The provided image must be from either Google Container Registry \"\n                \"or Google Artifact Registry\"\n            )\n\n    def _get_compatible_labels(self) -> Dict[str, str]:\n        \"\"\"\n        Ensures labels are compatible with GCP label requirements.\n        https://cloud.google.com/resource-manager/docs/creating-managing-labels\n\n        Ex: the Prefect provided key of prefect.io/flow-name -> prefect-io_flow-name\n        \"\"\"\n        compatible_labels = {}\n        for key, val in self.labels.items():\n            new_key = slugify(\n                key,\n                lowercase=True,\n                replacements=[(\"/\", \"_\"), (\".\", \"-\")],\n                max_length=63,\n                regex_pattern=_DISALLOWED_GCP_LABEL_CHARACTERS,\n            )\n            compatible_labels[new_key] = slugify(\n                val,\n                lowercase=True,\n                replacements=[(\"/\", \"_\"), (\".\", \"-\")],\n                max_length=63,\n                regex_pattern=_DISALLOWED_GCP_LABEL_CHARACTERS,\n            )\n        return compatible_labels\n\n    def preview(self) -> str:\n        \"\"\"Generate a preview of the job definition that will be sent to GCP.\"\"\"\n        job_spec = self._build_job_spec()\n        custom_job = CustomJob(\n            display_name=self.job_name,\n            job_spec=job_spec,\n            labels=self._get_compatible_labels(),\n        )\n        return str(custom_job)  # outputs a json string\n\n    def get_corresponding_worker_type(self) -> str:\n        \"\"\"Return the corresponding worker type for this infrastructure block.\"\"\"\n        return 
\"vertex-ai\"\n\n    async def generate_work_pool_base_job_template(self) -> dict:\n        \"\"\"\n        Generate a base job template for a `Vertex AI` work pool with the same\n        configuration as this block.\n        Returns:\n            - dict: a base job template for a `Vertex AI` work pool\n        \"\"\"\n        base_job_template = await get_default_base_job_template_for_infrastructure_type(\n            self.get_corresponding_worker_type(),\n        )\n        assert (\n            base_job_template is not None\n        ), \"Failed to generate default base job template for Cloud Run worker.\"\n        for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n            if key == \"command\":\n                base_job_template[\"variables\"][\"properties\"][\"command\"][\n                    \"default\"\n                ] = shlex.join(value)\n            elif key in [\n                \"type\",\n                \"block_type_slug\",\n                \"_block_document_id\",\n                \"_block_document_name\",\n                \"_is_anonymous\",\n            ]:\n                continue\n            elif key == \"gcp_credentials\":\n                if not self.gcp_credentials._block_document_id:\n                    raise BlockNotSavedError(\n                        \"It looks like you are trying to use a block that\"\n                        \" has not been saved. Please call `.save` on your block\"\n                        \" before publishing it as a work pool.\"\n                    )\n                base_job_template[\"variables\"][\"properties\"][\"credentials\"][\n                    \"default\"\n                ] = {\n                    \"$ref\": {\n                        \"block_document_id\": str(\n                            self.gcp_credentials._block_document_id\n                        )\n                    }\n                }\n            elif key == \"maximum_run_time\":\n                base_job_template[\"variables\"][\"properties\"][\"maximum_run_time_hours\"][\n                    \"default\"\n                ] = round(value.total_seconds() / 3600)\n            elif key == \"service_account\":\n                base_job_template[\"variables\"][\"properties\"][\"service_account_name\"][\n                    \"default\"\n                ] = value\n            elif key in base_job_template[\"variables\"][\"properties\"]:\n                base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n            else:\n                self.logger.warning(\n                    f\"Variable {key!r} is not supported by `Vertex AI` work pools.\"\n                    \" Skipping.\"\n                )\n\n        return base_job_template\n\n    def _build_job_spec(self) -> \"CustomJobSpec\":\n        \"\"\"\n        Builds a job spec by gathering details.\n        \"\"\"\n        # gather worker pool spec\n        env_list = [\n            {\"name\": name, \"value\": value}\n            for name, value in {\n                **self._base_environment(),\n                **self.env,\n            }.items()\n        ]\n        container_spec = ContainerSpec(\n            image_uri=self.image, command=self.command, args=[], env=env_list\n        )\n        machine_spec = MachineSpec(\n            machine_type=self.machine_type,\n            accelerator_type=self.accelerator_type,\n            accelerator_count=self.accelerator_count,\n        )\n        worker_pool_spec = WorkerPoolSpec(\n            
container_spec=container_spec,\n            machine_spec=machine_spec,\n            replica_count=1,\n            disk_spec=DiskSpec(\n                boot_disk_type=self.boot_disk_type,\n                boot_disk_size_gb=self.boot_disk_size_gb,\n            ),\n        )\n        # look for service account\n        service_account = (\n            self.service_account or self.gcp_credentials._service_account_email\n        )\n        if service_account is None:\n            raise ValueError(\n                \"A service account is required for the Vertex job. \"\n                \"A service account could not be detected in the attached credentials; \"\n                \"please set a service account explicitly, e.g. \"\n                '`VertexAICustomTrainingJob(service_acount=\"...\")`'\n            )\n\n        # build custom job specs\n        timeout = Duration().FromTimedelta(td=self.maximum_run_time)\n        scheduling = Scheduling(timeout=timeout)\n        job_spec = CustomJobSpec(\n            worker_pool_specs=[worker_pool_spec],\n            service_account=service_account,\n            scheduling=scheduling,\n            network=self.network,\n            reserved_ip_ranges=self.reserved_ip_ranges,\n        )\n        return job_spec\n\n    async def _create_and_begin_job(\n        self,\n        job_spec: \"CustomJobSpec\",\n        job_service_async_client: \"JobServiceAsyncClient\",\n    ) -> \"CustomJob\":\n        \"\"\"\n        Builds a custom job and begins running it.\n        \"\"\"\n        # create custom job\n        custom_job = CustomJob(\n            display_name=self.job_name,\n            job_spec=job_spec,\n            labels=self._get_compatible_labels(),\n        )\n\n        # run job\n        self.logger.info(\n            f\"{self._log_prefix}: Creating job {self.job_name!r} \"\n            f\"with command {' '.join(self.command)!r} in region \"\n            f\"{self.region!r} using image {self.image!r}\"\n        )\n\n        project = self.gcp_credentials.project\n        resource_name = f\"projects/{project}/locations/{self.region}\"\n\n        async for attempt in AsyncRetrying(\n            stop=stop_after_attempt(3), wait=wait_fixed(1) + wait_random(0, 3)\n        ):\n            with attempt:\n                custom_job_run = await job_service_async_client.create_custom_job(\n                    parent=resource_name,\n                    custom_job=custom_job,\n                )\n\n        self.logger.info(\n            f\"{self._log_prefix}: Job {self.job_name!r} created. \"\n            f\"The full job name is {custom_job_run.name!r}\"\n        )\n\n        return custom_job_run\n\n    async def _watch_job_run(\n        self,\n        full_job_name: str,  # different from self.job_name\n        job_service_async_client: \"JobServiceAsyncClient\",\n        current_state: \"JobState\",\n        until_states: Tuple[\"JobState\"],\n        timeout: int = None,\n    ) -> \"CustomJob\":\n        \"\"\"\n        Polls job run to see if status changed.\n\n        State changes reported by the Vertex AI API may sometimes be inaccurate\n        immediately upon startup, but should eventually report a correct running\n        and then terminal state. 
The minimum training duration for a custom job is\n        30 seconds, so short-lived jobs may be marked as successful some time\n        after a flow run has completed.\n        \"\"\"\n        state = JobState.JOB_STATE_UNSPECIFIED\n        last_state = current_state\n        t0 = time.time()\n\n        while state not in until_states:\n            job_run = await job_service_async_client.get_custom_job(\n                name=full_job_name,\n            )\n            state = job_run.state\n            if state != last_state:\n                state_label = (\n                    state.name.replace(\"_\", \" \")\n                    .lower()\n                    .replace(\"state\", \"state is now:\")\n                )\n                # results in \"New job state is now: succeeded\"\n                self.logger.debug(\n                    f\"{self._log_prefix}: {self.job_name} has new {state_label}\"\n                )\n                last_state = state\n            else:\n                # Intermittently, the job will not be described. We want to respect the\n                # watch timeout though.\n                self.logger.debug(f\"{self._log_prefix}: Job not found.\")\n\n            elapsed_time = time.time() - t0\n            if timeout is not None and elapsed_time > timeout:\n                raise RuntimeError(\n                    f\"Timed out after {elapsed_time}s while watching job for states \"\n                    \"{until_states!r}\"\n                )\n            await asyncio.sleep(self.job_watch_poll_interval)\n\n        return job_run\n\n    @sync_compatible\n    async def run(\n        self, task_status: Optional[\"TaskStatus\"] = None\n    ) -> VertexAICustomTrainingJobResult:\n        \"\"\"\n        Run the configured task on VertexAI.\n\n        Args:\n            task_status: An optional `TaskStatus` to update when the container starts.\n\n        Returns:\n            The `VertexAICustomTrainingJobResult`.\n        \"\"\"\n        client_options = ClientOptions(\n            api_endpoint=f\"{self.region}-aiplatform.googleapis.com\"\n        )\n\n        job_spec = self._build_job_spec()\n        job_service_async_client = self.gcp_credentials.get_job_service_async_client(\n            client_options=client_options\n        )\n        job_run = await self._create_and_begin_job(\n            job_spec,\n            job_service_async_client,\n        )\n\n        if task_status:\n            task_status.started(self.job_name)\n\n        final_job_run = await self._watch_job_run(\n            full_job_name=job_run.name,\n            job_service_async_client=job_service_async_client,\n            current_state=job_run.state,\n            until_states=(\n                JobState.JOB_STATE_SUCCEEDED,\n                JobState.JOB_STATE_FAILED,\n                JobState.JOB_STATE_CANCELLED,\n                JobState.JOB_STATE_EXPIRED,\n            ),\n            timeout=self.maximum_run_time.total_seconds(),\n        )\n\n        error_msg = final_job_run.error.message\n        if error_msg:\n            raise RuntimeError(f\"{self._log_prefix}: {error_msg}\")\n\n        status_code = 0 if final_job_run.state == JobState.JOB_STATE_SUCCEEDED else 1\n\n        return VertexAICustomTrainingJobResult(\n            identifier=final_job_run.display_name, status_code=status_code\n        )\n\n    @sync_compatible\n    async def kill(self, identifier: str, grace_seconds: int = 30) -> None:\n        \"\"\"\n        Kill a job running Cloud Run.\n\n        Args:\n            
identifier: The Vertex AI full job name, formatted like\n                \"projects/{project}/locations/{location}/customJobs/{custom_job}\".\n\n        Returns:\n            The `VertexAICustomTrainingJobResult`.\n        \"\"\"\n        client_options = ClientOptions(\n            api_endpoint=f\"{self.region}-aiplatform.googleapis.com\"\n        )\n        job_service_async_client = self.gcp_credentials.get_job_service_async_client(\n            client_options=client_options\n        )\n        await self._kill_job(\n            job_service_async_client=job_service_async_client,\n            full_job_name=identifier,\n        )\n        self.logger.info(f\"Requested to cancel {identifier}...\")\n\n    async def _kill_job(\n        self, job_service_async_client: \"JobServiceAsyncClient\", full_job_name: str\n    ) -> None:\n        \"\"\"\n        Thin wrapper around Job.delete, wrapping a try/except since\n        Job is an independent class that doesn't have knowledge of\n        CloudRunJob and its associated logic.\n        \"\"\"\n        cancel_custom_job_request = CancelCustomJobRequest(name=full_job_name)\n        try:\n            await job_service_async_client.cancel_custom_job(\n                request=cancel_custom_job_request,\n            )\n        except Exception as exc:\n            if \"does not exist\" in str(exc):\n                raise InfrastructureNotFound(\n                    f\"Cannot stop Vertex AI job; the job name {full_job_name!r} \"\n                    \"could not be found.\"\n                ) from exc\n            raise\n\n    @property\n    def _log_prefix(self) -> str:\n        \"\"\"\n        Internal property for generating a prefix for logs where `name` may be null\n        \"\"\"\n        if self.name is not None:\n            return f\"VertexAICustomTrainingJob {self.name!r}\"\n        else:\n            return \"VertexAICustomTrainingJob\"\n
        "},{"location":"integrations/prefect-gcp/aiplatform/#prefect_gcp.aiplatform.VertexAICustomTrainingJob.job_name","title":"job_name property","text":"

        The name can be up to 128 characters long and can consist of any UTF-8 characters. Reference: https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.CustomJob#google_cloud_aiplatform_CustomJob_display_name

        "},{"location":"integrations/prefect-gcp/aiplatform/#prefect_gcp.aiplatform.VertexAICustomTrainingJob.generate_work_pool_base_job_template","title":"generate_work_pool_base_job_template async","text":"

        Generate a base job template for a Vertex AI work pool with the same configuration as this block.

        Returns: a base job template (dict) for a Vertex AI work pool.
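
        As a usage sketch (the saved block name \"my-vertex-job\" is a hypothetical placeholder), the template can be generated from a previously saved block inside an async context:

        import asyncio\nfrom prefect_gcp.aiplatform import VertexAICustomTrainingJob\n\nasync def main():\n    # \"my-vertex-job\" is a hypothetical name of a previously saved block\n    job = await VertexAICustomTrainingJob.load(\"my-vertex-job\")\n    # Build a work pool base job template that mirrors this block's configuration\n    template = await job.generate_work_pool_base_job_template()\n    print(template)\n\nasyncio.run(main())\n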

        Source code in prefect_gcp/aiplatform.py
        async def generate_work_pool_base_job_template(self) -> dict:\n    \"\"\"\n    Generate a base job template for a `Vertex AI` work pool with the same\n    configuration as this block.\n    Returns:\n        - dict: a base job template for a `Vertex AI` work pool\n    \"\"\"\n    base_job_template = await get_default_base_job_template_for_infrastructure_type(\n        self.get_corresponding_worker_type(),\n    )\n    assert (\n        base_job_template is not None\n    ), \"Failed to generate default base job template for Cloud Run worker.\"\n    for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n        if key == \"command\":\n            base_job_template[\"variables\"][\"properties\"][\"command\"][\n                \"default\"\n            ] = shlex.join(value)\n        elif key in [\n            \"type\",\n            \"block_type_slug\",\n            \"_block_document_id\",\n            \"_block_document_name\",\n            \"_is_anonymous\",\n        ]:\n            continue\n        elif key == \"gcp_credentials\":\n            if not self.gcp_credentials._block_document_id:\n                raise BlockNotSavedError(\n                    \"It looks like you are trying to use a block that\"\n                    \" has not been saved. Please call `.save` on your block\"\n                    \" before publishing it as a work pool.\"\n                )\n            base_job_template[\"variables\"][\"properties\"][\"credentials\"][\n                \"default\"\n            ] = {\n                \"$ref\": {\n                    \"block_document_id\": str(\n                        self.gcp_credentials._block_document_id\n                    )\n                }\n            }\n        elif key == \"maximum_run_time\":\n            base_job_template[\"variables\"][\"properties\"][\"maximum_run_time_hours\"][\n                \"default\"\n            ] = round(value.total_seconds() / 3600)\n        elif key == \"service_account\":\n            base_job_template[\"variables\"][\"properties\"][\"service_account_name\"][\n                \"default\"\n            ] = value\n        elif key in base_job_template[\"variables\"][\"properties\"]:\n            base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n        else:\n            self.logger.warning(\n                f\"Variable {key!r} is not supported by `Vertex AI` work pools.\"\n                \" Skipping.\"\n            )\n\n    return base_job_template\n
        "},{"location":"integrations/prefect-gcp/aiplatform/#prefect_gcp.aiplatform.VertexAICustomTrainingJob.get_corresponding_worker_type","title":"get_corresponding_worker_type","text":"

        Return the corresponding worker type for this infrastructure block.

        Source code in prefect_gcp/aiplatform.py
        def get_corresponding_worker_type(self) -> str:\n    \"\"\"Return the corresponding worker type for this infrastructure block.\"\"\"\n    return \"vertex-ai\"\n
        "},{"location":"integrations/prefect-gcp/aiplatform/#prefect_gcp.aiplatform.VertexAICustomTrainingJob.kill","title":"kill async","text":"

        Kill a job running on Vertex AI.

        Parameters:

        Name Type Description Default identifier str

        The Vertex AI full job name, formatted like \"projects/{project}/locations/{location}/customJobs/{custom_job}\".

        required

        Returns:

        Type Description None

        None; the method submits a cancellation request for the given Vertex AI custom job.
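
        As a usage sketch (the block name and job resource name below are hypothetical placeholders), kill is sync-compatible and can be called with the full custom job name:

        from prefect_gcp.aiplatform import VertexAICustomTrainingJob\n\n# \"my-vertex-job\" is a hypothetical name of a previously saved block\njob = VertexAICustomTrainingJob.load(\"my-vertex-job\")\n\n# Hypothetical full job resource name returned by a prior run\nfull_job_name = \"projects/my-project/locations/us-central1/customJobs/1234567890\"\n\n# Request cancellation of the running custom job\njob.kill(identifier=full_job_name)\n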

        Source code in prefect_gcp/aiplatform.py
        @sync_compatible\nasync def kill(self, identifier: str, grace_seconds: int = 30) -> None:\n    \"\"\"\n    Kill a job running Cloud Run.\n\n    Args:\n        identifier: The Vertex AI full job name, formatted like\n            \"projects/{project}/locations/{location}/customJobs/{custom_job}\".\n\n    Returns:\n        The `VertexAICustomTrainingJobResult`.\n    \"\"\"\n    client_options = ClientOptions(\n        api_endpoint=f\"{self.region}-aiplatform.googleapis.com\"\n    )\n    job_service_async_client = self.gcp_credentials.get_job_service_async_client(\n        client_options=client_options\n    )\n    await self._kill_job(\n        job_service_async_client=job_service_async_client,\n        full_job_name=identifier,\n    )\n    self.logger.info(f\"Requested to cancel {identifier}...\")\n
        "},{"location":"integrations/prefect-gcp/aiplatform/#prefect_gcp.aiplatform.VertexAICustomTrainingJob.preview","title":"preview","text":"

        Generate a preview of the job definition that will be sent to GCP.
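
        A minimal sketch, assuming a previously saved block named \"my-vertex-job\": the preview is a JSON-like string of the CustomJob that would be submitted, so it can be inspected before running anything:

        from prefect_gcp.aiplatform import VertexAICustomTrainingJob\n\n# \"my-vertex-job\" is a hypothetical name of a previously saved block\njob = VertexAICustomTrainingJob.load(\"my-vertex-job\")\n\n# Inspect the job definition without submitting anything to GCP\nprint(job.preview())\n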

        Source code in prefect_gcp/aiplatform.py
        def preview(self) -> str:\n    \"\"\"Generate a preview of the job definition that will be sent to GCP.\"\"\"\n    job_spec = self._build_job_spec()\n    custom_job = CustomJob(\n        display_name=self.job_name,\n        job_spec=job_spec,\n        labels=self._get_compatible_labels(),\n    )\n    return str(custom_job)  # outputs a json string\n
        "},{"location":"integrations/prefect-gcp/aiplatform/#prefect_gcp.aiplatform.VertexAICustomTrainingJob.run","title":"run async","text":"

        Run the configured task on Vertex AI.

        Parameters:

        Name Type Description Default task_status Optional[TaskStatus]

        An optional TaskStatus to update when the container starts.

        None

        Returns:

        Type Description VertexAICustomTrainingJobResult

        The VertexAICustomTrainingJobResult.
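
        As a usage sketch (the flow and the saved block name \"my-vertex-job\" are hypothetical), run is sync-compatible and blocks until the job reaches a terminal state:

        from prefect import flow\nfrom prefect_gcp.aiplatform import VertexAICustomTrainingJob\n\n@flow\ndef run_vertex_job():\n    # \"my-vertex-job\" is a hypothetical name of a previously saved block\n    job = VertexAICustomTrainingJob.load(\"my-vertex-job\")\n    # Blocks until the job succeeds, fails, is cancelled, or expires\n    result = job.run()\n    return result.status_code\n\nrun_vertex_job()\n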

        Source code in prefect_gcp/aiplatform.py
        @sync_compatible\nasync def run(\n    self, task_status: Optional[\"TaskStatus\"] = None\n) -> VertexAICustomTrainingJobResult:\n    \"\"\"\n    Run the configured task on VertexAI.\n\n    Args:\n        task_status: An optional `TaskStatus` to update when the container starts.\n\n    Returns:\n        The `VertexAICustomTrainingJobResult`.\n    \"\"\"\n    client_options = ClientOptions(\n        api_endpoint=f\"{self.region}-aiplatform.googleapis.com\"\n    )\n\n    job_spec = self._build_job_spec()\n    job_service_async_client = self.gcp_credentials.get_job_service_async_client(\n        client_options=client_options\n    )\n    job_run = await self._create_and_begin_job(\n        job_spec,\n        job_service_async_client,\n    )\n\n    if task_status:\n        task_status.started(self.job_name)\n\n    final_job_run = await self._watch_job_run(\n        full_job_name=job_run.name,\n        job_service_async_client=job_service_async_client,\n        current_state=job_run.state,\n        until_states=(\n            JobState.JOB_STATE_SUCCEEDED,\n            JobState.JOB_STATE_FAILED,\n            JobState.JOB_STATE_CANCELLED,\n            JobState.JOB_STATE_EXPIRED,\n        ),\n        timeout=self.maximum_run_time.total_seconds(),\n    )\n\n    error_msg = final_job_run.error.message\n    if error_msg:\n        raise RuntimeError(f\"{self._log_prefix}: {error_msg}\")\n\n    status_code = 0 if final_job_run.state == JobState.JOB_STATE_SUCCEEDED else 1\n\n    return VertexAICustomTrainingJobResult(\n        identifier=final_job_run.display_name, status_code=status_code\n    )\n
        "},{"location":"integrations/prefect-gcp/aiplatform/#prefect_gcp.aiplatform.VertexAICustomTrainingJobResult","title":"VertexAICustomTrainingJobResult","text":"

        Bases: InfrastructureResult

        Result from a Vertex AI custom training job.

        Source code in prefect_gcp/aiplatform.py
        class VertexAICustomTrainingJobResult(InfrastructureResult):\n    \"\"\"Result from a Vertex AI custom training job.\"\"\"\n
        "},{"location":"integrations/prefect-gcp/bigquery/","title":"BigQuery","text":""},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery","title":"prefect_gcp.bigquery","text":"

        Tasks for interacting with GCP BigQuery

        "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.BigQueryWarehouse","title":"BigQueryWarehouse","text":"

        Bases: DatabaseBlock

        A block for querying a database with BigQuery.

        Upon instantiation, a connection to BigQuery is established and maintained for the life of the object until the close method is called.

        It is recommended to use this block as a context manager, which will automatically close the connection and its cursors when the context is exited.

        It is also recommended that this block is loaded and consumed within a single task or flow because if the block is passed across separate tasks and flows, the state of the block's connection and cursor could be lost.
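
        A minimal sketch of the recommended context manager usage (\"BLOCK_NAME\" is a placeholder for a previously saved block):

        from prefect_gcp.bigquery import BigQueryWarehouse\n\n# \"BLOCK_NAME\" is a placeholder for a previously saved block\nwith BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n    # Any of the fetch/execute methods can be called on the open connection\n    row = warehouse.fetch_one(\"SELECT 1 AS one\")\n    print(row)\n# The connection and its cursors are closed automatically on exit\n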

        Attributes:

        Name Type Description gcp_credentials GcpCredentials

        The credentials to use to authenticate.

        fetch_size int

        The number of rows to fetch at a time when calling fetch_many. Note that this limit is applied on the client side and is not passed to the database. To limit on the server side, add the LIMIT clause, or the dialect's equivalent clause, like TOP, to the query.

        Source code in prefect_gcp/bigquery.py
        class BigQueryWarehouse(DatabaseBlock):\n    \"\"\"\n    A block for querying a database with BigQuery.\n\n    Upon instantiating, a connection to BigQuery is established\n    and maintained for the life of the object until the close method is called.\n\n    It is recommended to use this block as a context manager, which will automatically\n    close the connection and its cursors when the context is exited.\n\n    It is also recommended that this block is loaded and consumed within a single task\n    or flow because if the block is passed across separate tasks and flows,\n    the state of the block's connection and cursor could be lost.\n\n    Attributes:\n        gcp_credentials: The credentials to use to authenticate.\n        fetch_size: The number of rows to fetch at a time when calling fetch_many.\n            Note, this parameter is executed on the client side and is not\n            passed to the database. To limit on the server side, add the `LIMIT`\n            clause, or the dialect's equivalent clause, like `TOP`, to the query.\n    \"\"\"  # noqa\n\n    _block_type_name = \"BigQuery Warehouse\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/10424e311932e31c477ac2b9ef3d53cefbaad708-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-gcp/bigquery/#prefect_gcp.bigquery.BigQueryWarehouse\"  # noqa: E501\n\n    gcp_credentials: GcpCredentials\n    fetch_size: int = Field(\n        default=1, description=\"The number of rows to fetch at a time.\"\n    )\n\n    _connection: Optional[\"Connection\"] = None\n    _unique_cursors: Dict[str, \"Cursor\"] = None\n\n    def _start_connection(self):\n        \"\"\"\n        Starts a connection.\n        \"\"\"\n        with self.gcp_credentials.get_bigquery_client() as client:\n            self._connection = Connection(client=client)\n\n    def block_initialization(self) -> None:\n        super().block_initialization()\n        if self._connection is None:\n            self._start_connection()\n\n        if self._unique_cursors is None:\n            self._unique_cursors = {}\n\n    def get_connection(self) -> \"Connection\":\n        \"\"\"\n        Get the opened connection to BigQuery.\n        \"\"\"\n        return self._connection\n\n    def _get_cursor(self, inputs: Dict[str, Any]) -> Tuple[bool, \"Cursor\"]:\n        \"\"\"\n        Get a BigQuery cursor.\n\n        Args:\n            inputs: The inputs to generate a unique hash, used to decide\n                whether a new cursor should be used.\n\n        Returns:\n            Whether a cursor is new and a BigQuery cursor.\n        \"\"\"\n        input_hash = hash_objects(inputs)\n        assert input_hash is not None, (\n            \"We were not able to hash your inputs, \"\n            \"which resulted in an unexpected data return; \"\n            \"please open an issue with a reproducible example.\"\n        )\n        if input_hash not in self._unique_cursors.keys():\n            new_cursor = self._connection.cursor()\n            self._unique_cursors[input_hash] = new_cursor\n            return True, new_cursor\n        else:\n            existing_cursor = self._unique_cursors[input_hash]\n            return False, existing_cursor\n\n    def reset_cursors(self) -> None:\n        \"\"\"\n        Tries to close all opened cursors.\n        \"\"\"\n        input_hashes = tuple(self._unique_cursors.keys())\n        for input_hash in input_hashes:\n            cursor = self._unique_cursors.pop(input_hash)\n            
try:\n                cursor.close()\n            except Exception as exc:\n                self.logger.warning(\n                    f\"Failed to close cursor for input hash {input_hash!r}: {exc}\"\n                )\n\n    @sync_compatible\n    async def fetch_one(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        **execution_options: Dict[str, Any],\n    ) -> \"Row\":\n        \"\"\"\n        Fetch a single result from the database.\n\n        Repeated calls using the same inputs to *any* of the fetch methods of this\n        block will skip executing the operation again, and instead,\n        return the next set of results from the previous execution,\n        until the reset_cursors method is called.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            **execution_options: Additional options to pass to `connection.execute`.\n\n        Returns:\n            A tuple containing the data returned by the database,\n                where each row is a tuple and each column is a value in the tuple.\n\n        Examples:\n            Execute operation with parameters, fetching one new row at a time:\n            ```python\n            from prefect_gcp.bigquery import BigQueryWarehouse\n\n            with BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n                operation = '''\n                    SELECT word, word_count\n                    FROM `bigquery-public-data.samples.shakespeare`\n                    WHERE corpus = %(corpus)s\n                    AND word_count >= %(min_word_count)s\n                    ORDER BY word_count DESC\n                    LIMIT 3;\n                '''\n                parameters = {\n                    \"corpus\": \"romeoandjuliet\",\n                    \"min_word_count\": 250,\n                }\n                for _ in range(0, 3):\n                    result = warehouse.fetch_one(operation, parameters=parameters)\n                    print(result)\n            ```\n        \"\"\"\n        inputs = dict(\n            operation=operation,\n            parameters=parameters,\n            **execution_options,\n        )\n        new, cursor = self._get_cursor(inputs)\n        if new:\n            await run_sync_in_worker_thread(cursor.execute, **inputs)\n\n        result = await run_sync_in_worker_thread(cursor.fetchone)\n        return result\n\n    @sync_compatible\n    async def fetch_many(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        size: Optional[int] = None,\n        **execution_options: Dict[str, Any],\n    ) -> List[\"Row\"]:\n        \"\"\"\n        Fetch a limited number of results from the database.\n\n        Repeated calls using the same inputs to *any* of the fetch methods of this\n        block will skip executing the operation again, and instead,\n        return the next set of results from the previous execution,\n        until the reset_cursors method is called.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            size: The number of results to return; if None or 0, uses the value of\n                `fetch_size` configured on the block.\n            **execution_options: Additional options to pass to `connection.execute`.\n\n        Returns:\n            A list of tuples containing the data returned by 
the database,\n                where each row is a tuple and each column is a value in the tuple.\n\n        Examples:\n            Execute operation with parameters, fetching two new rows at a time:\n            ```python\n            from prefect_gcp.bigquery import BigQueryWarehouse\n\n            with BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n                operation = '''\n                    SELECT word, word_count\n                    FROM `bigquery-public-data.samples.shakespeare`\n                    WHERE corpus = %(corpus)s\n                    AND word_count >= %(min_word_count)s\n                    ORDER BY word_count DESC\n                    LIMIT 6;\n                '''\n                parameters = {\n                    \"corpus\": \"romeoandjuliet\",\n                    \"min_word_count\": 250,\n                }\n                for _ in range(0, 3):\n                    result = warehouse.fetch_many(\n                        operation,\n                        parameters=parameters,\n                        size=2\n                    )\n                    print(result)\n            ```\n        \"\"\"\n        inputs = dict(\n            operation=operation,\n            parameters=parameters,\n            **execution_options,\n        )\n        new, cursor = self._get_cursor(inputs)\n        if new:\n            await run_sync_in_worker_thread(cursor.execute, **inputs)\n\n        size = size or self.fetch_size\n        result = await run_sync_in_worker_thread(cursor.fetchmany, size=size)\n        return result\n\n    @sync_compatible\n    async def fetch_all(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        **execution_options: Dict[str, Any],\n    ) -> List[\"Row\"]:\n        \"\"\"\n        Fetch all results from the database.\n\n        Repeated calls using the same inputs to *any* of the fetch methods of this\n        block will skip executing the operation again, and instead,\n        return the next set of results from the previous execution,\n        until the reset_cursors method is called.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            **execution_options: Additional options to pass to `connection.execute`.\n\n        Returns:\n            A list of tuples containing the data returned by the database,\n                where each row is a tuple and each column is a value in the tuple.\n\n        Examples:\n            Execute operation with parameters, fetching all rows:\n            ```python\n            from prefect_gcp.bigquery import BigQueryWarehouse\n\n            with BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n                operation = '''\n                    SELECT word, word_count\n                    FROM `bigquery-public-data.samples.shakespeare`\n                    WHERE corpus = %(corpus)s\n                    AND word_count >= %(min_word_count)s\n                    ORDER BY word_count DESC\n                    LIMIT 3;\n                '''\n                parameters = {\n                    \"corpus\": \"romeoandjuliet\",\n                    \"min_word_count\": 250,\n                }\n                result = warehouse.fetch_all(operation, parameters=parameters)\n            ```\n        \"\"\"\n        inputs = dict(\n            operation=operation,\n            parameters=parameters,\n            **execution_options,\n        )\n        
new, cursor = self._get_cursor(inputs)\n        if new:\n            await run_sync_in_worker_thread(cursor.execute, **inputs)\n\n        result = await run_sync_in_worker_thread(cursor.fetchall)\n        return result\n\n    @sync_compatible\n    async def execute(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        **execution_options: Dict[str, Any],\n    ) -> None:\n        \"\"\"\n        Executes an operation on the database. This method is intended to be used\n        for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n\n        Unlike the fetch methods, this method will always execute the operation\n        upon calling.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            **execution_options: Additional options to pass to `connection.execute`.\n\n        Examples:\n            Execute operation with parameters:\n            ```python\n            from prefect_gcp.bigquery import BigQueryWarehouse\n\n            with BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n                operation = '''\n                    CREATE TABLE mydataset.trips AS (\n                    SELECT\n                        bikeid,\n                        start_time,\n                        duration_minutes\n                    FROM\n                        bigquery-public-data.austin_bikeshare.bikeshare_trips\n                    LIMIT %(limit)s\n                    );\n                '''\n                warehouse.execute(operation, parameters={\"limit\": 5})\n            ```\n        \"\"\"\n        inputs = dict(\n            operation=operation,\n            parameters=parameters,\n            **execution_options,\n        )\n        cursor = self._get_cursor(inputs)[1]\n        await run_sync_in_worker_thread(cursor.execute, **inputs)\n\n    @sync_compatible\n    async def execute_many(\n        self,\n        operation: str,\n        seq_of_parameters: List[Dict[str, Any]],\n    ) -> None:\n        \"\"\"\n        Executes many operations on the database. 
This method is intended to be used\n        for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n\n        Unlike the fetch methods, this method will always execute the operations\n        upon calling.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            seq_of_parameters: The sequence of parameters for the operation.\n\n        Examples:\n            Create mytable in mydataset and insert two rows into it:\n            ```python\n            from prefect_gcp.bigquery import BigQueryWarehouse\n\n            with BigQueryWarehouse.load(\"bigquery\") as warehouse:\n                create_operation = '''\n                CREATE TABLE IF NOT EXISTS mydataset.mytable (\n                    col1 STRING,\n                    col2 INTEGER,\n                    col3 BOOLEAN\n                )\n                '''\n                warehouse.execute(create_operation)\n                insert_operation = '''\n                INSERT INTO mydataset.mytable (col1, col2, col3) VALUES (%s, %s, %s)\n                '''\n                seq_of_parameters = [\n                    (\"a\", 1, True),\n                    (\"b\", 2, False),\n                ]\n                warehouse.execute_many(\n                    insert_operation,\n                    seq_of_parameters=seq_of_parameters\n                )\n            ```\n        \"\"\"\n        inputs = dict(\n            operation=operation,\n            seq_of_parameters=seq_of_parameters,\n        )\n        cursor = self._get_cursor(inputs)[1]\n        await run_sync_in_worker_thread(cursor.executemany, **inputs)\n\n    def close(self):\n        \"\"\"\n        Closes connection and its cursors.\n        \"\"\"\n        try:\n            self.reset_cursors()\n        finally:\n            if self._connection is not None:\n                self._connection.close()\n                self._connection = None\n\n    def __enter__(self):\n        \"\"\"\n        Start a connection upon entry.\n        \"\"\"\n        return self\n\n    def __exit__(self, *args):\n        \"\"\"\n        Closes connection and its cursors upon exit.\n        \"\"\"\n        self.close()\n\n    def __getstate__(self):\n        \"\"\" \"\"\"\n        data = self.__dict__.copy()\n        data.update({k: None for k in {\"_connection\", \"_unique_cursors\"}})\n        return data\n\n    def __setstate__(self, data: dict):\n        \"\"\" \"\"\"\n        self.__dict__.update(data)\n        self._unique_cursors = {}\n        self._start_connection()\n
        "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.BigQueryWarehouse.close","title":"close","text":"

        Closes connection and its cursors.

        Source code in prefect_gcp/bigquery.py
        def close(self):\n    \"\"\"\n    Closes connection and its cursors.\n    \"\"\"\n    try:\n        self.reset_cursors()\n    finally:\n        if self._connection is not None:\n            self._connection.close()\n            self._connection = None\n
        "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.BigQueryWarehouse.execute","title":"execute async","text":"

        Executes an operation on the database. This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE.

        Unlike the fetch methods, this method will always execute the operation upon calling.

        Parameters:

        Name Type Description Default operation str

        The SQL query or other operation to be executed.

        required parameters Optional[Dict[str, Any]]

        The parameters for the operation.

        None **execution_options Dict[str, Any]

        Additional options to pass to connection.execute.

        {}

        Examples:

        Execute operation with parameters:

        from prefect_gcp.bigquery import BigQueryWarehouse\n\nwith BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n    operation = '''\n        CREATE TABLE mydataset.trips AS (\n        SELECT\n            bikeid,\n            start_time,\n            duration_minutes\n        FROM\n            bigquery-public-data.austin_bikeshare.bikeshare_trips\n        LIMIT %(limit)s\n        );\n    '''\n    warehouse.execute(operation, parameters={\"limit\": 5})\n

        Source code in prefect_gcp/bigquery.py
        @sync_compatible\nasync def execute(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    **execution_options: Dict[str, Any],\n) -> None:\n    \"\"\"\n    Executes an operation on the database. This method is intended to be used\n    for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n\n    Unlike the fetch methods, this method will always execute the operation\n    upon calling.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        **execution_options: Additional options to pass to `connection.execute`.\n\n    Examples:\n        Execute operation with parameters:\n        ```python\n        from prefect_gcp.bigquery import BigQueryWarehouse\n\n        with BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n            operation = '''\n                CREATE TABLE mydataset.trips AS (\n                SELECT\n                    bikeid,\n                    start_time,\n                    duration_minutes\n                FROM\n                    bigquery-public-data.austin_bikeshare.bikeshare_trips\n                LIMIT %(limit)s\n                );\n            '''\n            warehouse.execute(operation, parameters={\"limit\": 5})\n        ```\n    \"\"\"\n    inputs = dict(\n        operation=operation,\n        parameters=parameters,\n        **execution_options,\n    )\n    cursor = self._get_cursor(inputs)[1]\n    await run_sync_in_worker_thread(cursor.execute, **inputs)\n
        "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.BigQueryWarehouse.execute_many","title":"execute_many async","text":"

        Executes many operations on the database. This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE.

        Unlike the fetch methods, this method will always execute the operations upon calling.

        Parameters:

        Name Type Description Default operation str

        The SQL query or other operation to be executed.

        required seq_of_parameters List[Dict[str, Any]]

        The sequence of parameters for the operation.

        required

        Examples:

        Create mytable in mydataset and insert two rows into it:

        from prefect_gcp.bigquery import BigQueryWarehouse\n\nwith BigQueryWarehouse.load(\"bigquery\") as warehouse:\n    create_operation = '''\n    CREATE TABLE IF NOT EXISTS mydataset.mytable (\n        col1 STRING,\n        col2 INTEGER,\n        col3 BOOLEAN\n    )\n    '''\n    warehouse.execute(create_operation)\n    insert_operation = '''\n    INSERT INTO mydataset.mytable (col1, col2, col3) VALUES (%s, %s, %s)\n    '''\n    seq_of_parameters = [\n        (\"a\", 1, True),\n        (\"b\", 2, False),\n    ]\n    warehouse.execute_many(\n        insert_operation,\n        seq_of_parameters=seq_of_parameters\n    )\n

        Source code in prefect_gcp/bigquery.py
        @sync_compatible\nasync def execute_many(\n    self,\n    operation: str,\n    seq_of_parameters: List[Dict[str, Any]],\n) -> None:\n    \"\"\"\n    Executes many operations on the database. This method is intended to be used\n    for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n\n    Unlike the fetch methods, this method will always execute the operations\n    upon calling.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        seq_of_parameters: The sequence of parameters for the operation.\n\n    Examples:\n        Create mytable in mydataset and insert two rows into it:\n        ```python\n        from prefect_gcp.bigquery import BigQueryWarehouse\n\n        with BigQueryWarehouse.load(\"bigquery\") as warehouse:\n            create_operation = '''\n            CREATE TABLE IF NOT EXISTS mydataset.mytable (\n                col1 STRING,\n                col2 INTEGER,\n                col3 BOOLEAN\n            )\n            '''\n            warehouse.execute(create_operation)\n            insert_operation = '''\n            INSERT INTO mydataset.mytable (col1, col2, col3) VALUES (%s, %s, %s)\n            '''\n            seq_of_parameters = [\n                (\"a\", 1, True),\n                (\"b\", 2, False),\n            ]\n            warehouse.execute_many(\n                insert_operation,\n                seq_of_parameters=seq_of_parameters\n            )\n        ```\n    \"\"\"\n    inputs = dict(\n        operation=operation,\n        seq_of_parameters=seq_of_parameters,\n    )\n    cursor = self._get_cursor(inputs)[1]\n    await run_sync_in_worker_thread(cursor.executemany, **inputs)\n
        "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.BigQueryWarehouse.fetch_all","title":"fetch_all async","text":"

        Fetch all results from the database.

        Repeated calls using the same inputs to any of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset_cursors method is called.

        Parameters:

        Name Type Description Default operation str

        The SQL query or other operation to be executed.

        required parameters Optional[Dict[str, Any]]

        The parameters for the operation.

        None **execution_options Dict[str, Any]

        Additional options to pass to connection.execute.

        {}

        Returns:

        Type Description List[Row]

        A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple.

        Examples:

        Execute operation with parameters, fetching all rows:

        from prefect_gcp.bigquery import BigQueryWarehouse\n\nwith BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n    operation = '''\n        SELECT word, word_count\n        FROM `bigquery-public-data.samples.shakespeare`\n        WHERE corpus = %(corpus)s\n        AND word_count >= %(min_word_count)s\n        ORDER BY word_count DESC\n        LIMIT 3;\n    '''\n    parameters = {\n        \"corpus\": \"romeoandjuliet\",\n        \"min_word_count\": 250,\n    }\n    result = warehouse.fetch_all(operation, parameters=parameters)\n

        Source code in prefect_gcp/bigquery.py
        @sync_compatible\nasync def fetch_all(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    **execution_options: Dict[str, Any],\n) -> List[\"Row\"]:\n    \"\"\"\n    Fetch all results from the database.\n\n    Repeated calls using the same inputs to *any* of the fetch methods of this\n    block will skip executing the operation again, and instead,\n    return the next set of results from the previous execution,\n    until the reset_cursors method is called.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        **execution_options: Additional options to pass to `connection.execute`.\n\n    Returns:\n        A list of tuples containing the data returned by the database,\n            where each row is a tuple and each column is a value in the tuple.\n\n    Examples:\n        Execute operation with parameters, fetching all rows:\n        ```python\n        from prefect_gcp.bigquery import BigQueryWarehouse\n\n        with BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n            operation = '''\n                SELECT word, word_count\n                FROM `bigquery-public-data.samples.shakespeare`\n                WHERE corpus = %(corpus)s\n                AND word_count >= %(min_word_count)s\n                ORDER BY word_count DESC\n                LIMIT 3;\n            '''\n            parameters = {\n                \"corpus\": \"romeoandjuliet\",\n                \"min_word_count\": 250,\n            }\n            result = warehouse.fetch_all(operation, parameters=parameters)\n        ```\n    \"\"\"\n    inputs = dict(\n        operation=operation,\n        parameters=parameters,\n        **execution_options,\n    )\n    new, cursor = self._get_cursor(inputs)\n    if new:\n        await run_sync_in_worker_thread(cursor.execute, **inputs)\n\n    result = await run_sync_in_worker_thread(cursor.fetchall)\n    return result\n
        "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.BigQueryWarehouse.fetch_many","title":"fetch_many async","text":"

        Fetch a limited number of results from the database.

        Repeated calls using the same inputs to any of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset_cursors method is called.

        Parameters:

        Name Type Description Default operation str

        The SQL query or other operation to be executed.

        required parameters Optional[Dict[str, Any]]

        The parameters for the operation.

        None size Optional[int]

        The number of results to return; if None or 0, uses the value of fetch_size configured on the block.

        None **execution_options Dict[str, Any]

        Additional options to pass to connection.execute.

        {}

        Returns:

        Type Description List[Row]

        A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple.

        Examples:

        Execute operation with parameters, fetching two new rows at a time:

        from prefect_gcp.bigquery import BigQueryWarehouse\n\nwith BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n    operation = '''\n        SELECT word, word_count\n        FROM `bigquery-public-data.samples.shakespeare`\n        WHERE corpus = %(corpus)s\n        AND word_count >= %(min_word_count)s\n        ORDER BY word_count DESC\n        LIMIT 6;\n    '''\n    parameters = {\n        \"corpus\": \"romeoandjuliet\",\n        \"min_word_count\": 250,\n    }\n    for _ in range(0, 3):\n        result = warehouse.fetch_many(\n            operation,\n            parameters=parameters,\n            size=2\n        )\n        print(result)\n

        Source code in prefect_gcp/bigquery.py
        @sync_compatible\nasync def fetch_many(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    size: Optional[int] = None,\n    **execution_options: Dict[str, Any],\n) -> List[\"Row\"]:\n    \"\"\"\n    Fetch a limited number of results from the database.\n\n    Repeated calls using the same inputs to *any* of the fetch methods of this\n    block will skip executing the operation again, and instead,\n    return the next set of results from the previous execution,\n    until the reset_cursors method is called.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        size: The number of results to return; if None or 0, uses the value of\n            `fetch_size` configured on the block.\n        **execution_options: Additional options to pass to `connection.execute`.\n\n    Returns:\n        A list of tuples containing the data returned by the database,\n            where each row is a tuple and each column is a value in the tuple.\n\n    Examples:\n        Execute operation with parameters, fetching two new rows at a time:\n        ```python\n        from prefect_gcp.bigquery import BigQueryWarehouse\n\n        with BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n            operation = '''\n                SELECT word, word_count\n                FROM `bigquery-public-data.samples.shakespeare`\n                WHERE corpus = %(corpus)s\n                AND word_count >= %(min_word_count)s\n                ORDER BY word_count DESC\n                LIMIT 6;\n            '''\n            parameters = {\n                \"corpus\": \"romeoandjuliet\",\n                \"min_word_count\": 250,\n            }\n            for _ in range(0, 3):\n                result = warehouse.fetch_many(\n                    operation,\n                    parameters=parameters,\n                    size=2\n                )\n                print(result)\n        ```\n    \"\"\"\n    inputs = dict(\n        operation=operation,\n        parameters=parameters,\n        **execution_options,\n    )\n    new, cursor = self._get_cursor(inputs)\n    if new:\n        await run_sync_in_worker_thread(cursor.execute, **inputs)\n\n    size = size or self.fetch_size\n    result = await run_sync_in_worker_thread(cursor.fetchmany, size=size)\n    return result\n
        "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.BigQueryWarehouse.fetch_one","title":"fetch_one async","text":"

        Fetch a single result from the database.

        Repeated calls using the same inputs to any of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset_cursors method is called.

        Parameters:

        Name Type Description Default operation str

        The SQL query or other operation to be executed.

        required parameters Optional[Dict[str, Any]]

        The parameters for the operation.

        None **execution_options Dict[str, Any]

        Additional options to pass to connection.execute.

        {}

        Returns:

        Type Description Row

        A tuple containing the data returned by the database, where each row is a tuple and each column is a value in the tuple.

        Examples:

        Execute operation with parameters, fetching one new row at a time:

        from prefect_gcp.bigquery import BigQueryWarehouse\n\nwith BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n    operation = '''\n        SELECT word, word_count\n        FROM `bigquery-public-data.samples.shakespeare`\n        WHERE corpus = %(corpus)s\n        AND word_count >= %(min_word_count)s\n        ORDER BY word_count DESC\n        LIMIT 3;\n    '''\n    parameters = {\n        \"corpus\": \"romeoandjuliet\",\n        \"min_word_count\": 250,\n    }\n    for _ in range(0, 3):\n        result = warehouse.fetch_one(operation, parameters=parameters)\n        print(result)\n

        Source code in prefect_gcp/bigquery.py
        @sync_compatible\nasync def fetch_one(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    **execution_options: Dict[str, Any],\n) -> \"Row\":\n    \"\"\"\n    Fetch a single result from the database.\n\n    Repeated calls using the same inputs to *any* of the fetch methods of this\n    block will skip executing the operation again, and instead,\n    return the next set of results from the previous execution,\n    until the reset_cursors method is called.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        **execution_options: Additional options to pass to `connection.execute`.\n\n    Returns:\n        A tuple containing the data returned by the database,\n            where each row is a tuple and each column is a value in the tuple.\n\n    Examples:\n        Execute operation with parameters, fetching one new row at a time:\n        ```python\n        from prefect_gcp.bigquery import BigQueryWarehouse\n\n        with BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n            operation = '''\n                SELECT word, word_count\n                FROM `bigquery-public-data.samples.shakespeare`\n                WHERE corpus = %(corpus)s\n                AND word_count >= %(min_word_count)s\n                ORDER BY word_count DESC\n                LIMIT 3;\n            '''\n            parameters = {\n                \"corpus\": \"romeoandjuliet\",\n                \"min_word_count\": 250,\n            }\n            for _ in range(0, 3):\n                result = warehouse.fetch_one(operation, parameters=parameters)\n                print(result)\n        ```\n    \"\"\"\n    inputs = dict(\n        operation=operation,\n        parameters=parameters,\n        **execution_options,\n    )\n    new, cursor = self._get_cursor(inputs)\n    if new:\n        await run_sync_in_worker_thread(cursor.execute, **inputs)\n\n    result = await run_sync_in_worker_thread(cursor.fetchone)\n    return result\n
        "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.BigQueryWarehouse.get_connection","title":"get_connection","text":"

        Get the opened connection to BigQuery.

        Source code in prefect_gcp/bigquery.py
        def get_connection(self) -> \"Connection\":\n    \"\"\"\n    Get the opened connection to BigQuery.\n    \"\"\"\n    return self._connection\n
        "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.BigQueryWarehouse.reset_cursors","title":"reset_cursors","text":"

        Tries to close all opened cursors.
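
        A minimal sketch (\"BLOCK_NAME\" is a placeholder): because results for identical inputs are served from a cached cursor, reset_cursors lets the same operation execute again from the beginning:

        from prefect_gcp.bigquery import BigQueryWarehouse\n\nwith BigQueryWarehouse.load(\"BLOCK_NAME\") as warehouse:\n    operation = \"SELECT 1 AS one\"\n    first = warehouse.fetch_one(operation)\n    # Drop the cached cursors so the same inputs re-execute from the start\n    warehouse.reset_cursors()\n    again = warehouse.fetch_one(operation)\n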

        Source code in prefect_gcp/bigquery.py
        def reset_cursors(self) -> None:\n    \"\"\"\n    Tries to close all opened cursors.\n    \"\"\"\n    input_hashes = tuple(self._unique_cursors.keys())\n    for input_hash in input_hashes:\n        cursor = self._unique_cursors.pop(input_hash)\n        try:\n            cursor.close()\n        except Exception as exc:\n            self.logger.warning(\n                f\"Failed to close cursor for input hash {input_hash!r}: {exc}\"\n            )\n
        "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.bigquery_create_table","title":"bigquery_create_table async","text":"

        Creates a table in BigQuery.

        Args: dataset: Name of the dataset in which the table will be created. table: Name of the table to create. schema: Schema to use when creating the table. gcp_credentials: Credentials to use for authentication with GCP. clustering_fields: List of fields to cluster the table by. time_partitioning: bigquery.TimePartitioning object specifying a partitioning of the newly created table. project: Project to initialize the BigQuery Client with; if not provided, will default to the one inferred from your credentials. location: The location of the dataset that will be written to. external_config: The external data source.

        Returns: Table name.

        Example:

        from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.bigquery import bigquery_create_table\nfrom google.cloud.bigquery import SchemaField\n@flow\ndef example_bigquery_create_table_flow():\n    gcp_credentials = GcpCredentials(project=\"project\")\n    schema = [\n        SchemaField(\"number\", field_type=\"INTEGER\", mode=\"REQUIRED\"),\n        SchemaField(\"text\", field_type=\"STRING\", mode=\"REQUIRED\"),\n        SchemaField(\"bool\", field_type=\"BOOLEAN\")\n    ]\n    result = bigquery_create_table(\n        dataset=\"dataset\",\n        table=\"test_table\",\n        schema=schema,\n        gcp_credentials=gcp_credentials\n    )\n    return result\nexample_bigquery_create_table_flow()\n

        Source code in prefect_gcp/bigquery.py
        @task\nasync def bigquery_create_table(\n    dataset: str,\n    table: str,\n    gcp_credentials: GcpCredentials,\n    schema: Optional[List[\"SchemaField\"]] = None,\n    clustering_fields: List[str] = None,\n    time_partitioning: \"TimePartitioning\" = None,\n    project: Optional[str] = None,\n    location: str = \"US\",\n    external_config: Optional[\"ExternalConfig\"] = None,\n) -> str:\n    \"\"\"\n    Creates table in BigQuery.\n    Args:\n        dataset: Name of a dataset in that the table will be created.\n        table: Name of a table to create.\n        schema: Schema to use when creating the table.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        clustering_fields: List of fields to cluster the table by.\n        time_partitioning: `bigquery.TimePartitioning` object specifying a partitioning\n            of the newly created table\n        project: Project to initialize the BigQuery Client with; if\n            not provided, will default to the one inferred from your credentials.\n        location: The location of the dataset that will be written to.\n        external_config: The [external data source](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/bigquery_table#nested_external_data_configuration).  # noqa\n    Returns:\n        Table name.\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.bigquery import bigquery_create_table\n        from google.cloud.bigquery import SchemaField\n        @flow\n        def example_bigquery_create_table_flow():\n            gcp_credentials = GcpCredentials(project=\"project\")\n            schema = [\n                SchemaField(\"number\", field_type=\"INTEGER\", mode=\"REQUIRED\"),\n                SchemaField(\"text\", field_type=\"STRING\", mode=\"REQUIRED\"),\n                SchemaField(\"bool\", field_type=\"BOOLEAN\")\n            ]\n            result = bigquery_create_table(\n                dataset=\"dataset\",\n                table=\"test_table\",\n                schema=schema,\n                gcp_credentials=gcp_credentials\n            )\n            return result\n        example_bigquery_create_table_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Creating %s.%s\", dataset, table)\n\n    if not external_config and not schema:\n        raise ValueError(\"Either a schema or an external config must be provided.\")\n\n    client = gcp_credentials.get_bigquery_client(project=project, location=location)\n    try:\n        partial_get_dataset = partial(client.get_dataset, dataset)\n        dataset_ref = await to_thread.run_sync(partial_get_dataset)\n    except NotFound:\n        logger.debug(\"Dataset %s not found, creating\", dataset)\n        partial_create_dataset = partial(client.create_dataset, dataset)\n        dataset_ref = await to_thread.run_sync(partial_create_dataset)\n\n    table_ref = dataset_ref.table(table)\n    try:\n        partial_get_table = partial(client.get_table, table_ref)\n        await to_thread.run_sync(partial_get_table)\n        logger.info(\"%s.%s already exists\", dataset, table)\n    except NotFound:\n        logger.debug(\"Table %s not found, creating\", table)\n        table_obj = Table(table_ref, schema=schema)\n\n        # external data configuration\n        if external_config:\n            table_obj.external_data_configuration = external_config\n\n        # cluster for optimal data 
sorting/access\n        if clustering_fields:\n            table_obj.clustering_fields = clustering_fields\n\n        # partitioning\n        if time_partitioning:\n            table_obj.time_partitioning = time_partitioning\n\n        partial_create_table = partial(client.create_table, table_obj)\n        await to_thread.run_sync(partial_create_table)\n\n    return table\n
        "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.bigquery_insert_stream","title":"bigquery_insert_stream async","text":"

        Insert records into a Google BigQuery table via the streaming API.

        Parameters:

        Name Type Description Default dataset str

        Name of the dataset to which the records will be written.

        required table str

        Name of a table to write to.

        required records List[dict]

        The list of records to insert as rows into the BigQuery table; each item in the list should be a dictionary whose keys correspond to columns in the table.

        required gcp_credentials GcpCredentials

        Credentials to use for authentication with GCP.

        required project Optional[str]

        The project to initialize the BigQuery Client with; if not provided, will default to the one inferred from your credentials.

        None location str

        Location of the dataset that will be written to.

        'US'

        Returns:

        Type Description List

        List of inserted rows.

        Example
        from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.bigquery import bigquery_insert_stream\nfrom google.cloud.bigquery import SchemaField\n\n@flow\ndef example_bigquery_insert_stream_flow():\n    gcp_credentials = GcpCredentials(project=\"project\")\n    records = [\n        {\"number\": 1, \"text\": \"abc\", \"bool\": True},\n        {\"number\": 2, \"text\": \"def\", \"bool\": False},\n    ]\n    result = bigquery_insert_stream(\n        dataset=\"integrations\",\n        table=\"test_table\",\n        records=records,\n        gcp_credentials=gcp_credentials\n    )\n    return result\n\nexample_bigquery_insert_stream_flow()\n
        Source code in prefect_gcp/bigquery.py
        @task\nasync def bigquery_insert_stream(\n    dataset: str,\n    table: str,\n    records: List[dict],\n    gcp_credentials: GcpCredentials,\n    project: Optional[str] = None,\n    location: str = \"US\",\n) -> List:\n    \"\"\"\n    Insert records in a Google BigQuery table via the [streaming\n    API](https://cloud.google.com/bigquery/streaming-data-into-bigquery).\n\n    Args:\n        dataset: Name of a dataset where the records will be written to.\n        table: Name of a table to write to.\n        records: The list of records to insert as rows into the BigQuery table;\n            each item in the list should be a dictionary whose keys correspond to\n            columns in the table.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        project: The project to initialize the BigQuery Client with; if\n            not provided, will default to the one inferred from your credentials.\n        location: Location of the dataset that will be written to.\n\n    Returns:\n        List of inserted rows.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.bigquery import bigquery_insert_stream\n        from google.cloud.bigquery import SchemaField\n\n        @flow\n        def example_bigquery_insert_stream_flow():\n            gcp_credentials = GcpCredentials(project=\"project\")\n            records = [\n                {\"number\": 1, \"text\": \"abc\", \"bool\": True},\n                {\"number\": 2, \"text\": \"def\", \"bool\": False},\n            ]\n            result = bigquery_insert_stream(\n                dataset=\"integrations\",\n                table=\"test_table\",\n                records=records,\n                gcp_credentials=gcp_credentials\n            )\n            return result\n\n        example_bigquery_insert_stream_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Inserting into %s.%s as a stream\", dataset, table)\n\n    client = gcp_credentials.get_bigquery_client(project=project, location=location)\n    table_ref = client.dataset(dataset).table(table)\n    partial_insert = partial(\n        client.insert_rows_json, table=table_ref, json_rows=records\n    )\n    response = await to_thread.run_sync(partial_insert)\n\n    errors = []\n    output = []\n    for row in response:\n        output.append(row)\n        if \"errors\" in row:\n            errors.append(row[\"errors\"])\n\n    if errors:\n        raise ValueError(errors)\n\n    return output\n
        "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.bigquery_load_cloud_storage","title":"bigquery_load_cloud_storage async","text":"

        Loads data from Google Cloud Storage into a BigQuery table.

        Args: uri: GCS path to load data from. dataset: The id of a destination dataset to write the records to. table: The name of a destination table to write the records to. gcp_credentials: Credentials to use for authentication with GCP. schema: The schema to use when creating the table. job_config: Dictionary of job configuration parameters; note that the parameters provided here must be pickleable (e.g., dataset references will be rejected). project: The project to initialize the BigQuery Client with; if not provided, will default to the one inferred from your credentials. location: Location of the dataset that will be written to.

        Returns:

        Type Description LoadJob

        The response from load_table_from_uri.

        Example
        from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.bigquery import bigquery_load_cloud_storage\n\n@flow\ndef example_bigquery_load_cloud_storage_flow():\n    gcp_credentials = GcpCredentials(project=\"project\")\n    result = bigquery_load_cloud_storage(\n        dataset=\"dataset\",\n        table=\"test_table\",\n        uri=\"uri\",\n        gcp_credentials=gcp_credentials\n    )\n    return result\n\nexample_bigquery_load_cloud_storage_flow()\n
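A slightly fuller sketch, assuming a hypothetical bucket, dataset, and table, that loads a CSV file from a gs:// URI with an explicit schema instead of autodetection:
from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.bigquery import bigquery_load_cloud_storage\nfrom google.cloud.bigquery import SchemaField\n\n@flow\ndef example_load_csv_from_gcs_flow():\n    gcp_credentials = GcpCredentials(project=\"project\")\n    # Explicit schema, so autodetection is disabled in job_config below\n    schema = [\n        SchemaField(\"word\", field_type=\"STRING\", mode=\"REQUIRED\"),\n        SchemaField(\"word_count\", field_type=\"INTEGER\", mode=\"NULLABLE\"),\n    ]\n    result = bigquery_load_cloud_storage(\n        dataset=\"integrations\",\n        table=\"test_table\",\n        uri=\"gs://my-bucket/path/to/data.csv\",\n        gcp_credentials=gcp_credentials,\n        schema=schema,\n        job_config={\"source_format\": \"CSV\", \"skip_leading_rows\": 1, \"autodetect\": False},\n    )\n    return result\n\nexample_load_csv_from_gcs_flow()\n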
        Source code in prefect_gcp/bigquery.py
        @task\nasync def bigquery_load_cloud_storage(\n    dataset: str,\n    table: str,\n    uri: str,\n    gcp_credentials: GcpCredentials,\n    schema: Optional[List[\"SchemaField\"]] = None,\n    job_config: Optional[dict] = None,\n    project: Optional[str] = None,\n    location: str = \"US\",\n) -> \"LoadJob\":\n    \"\"\"\n    Run method for this Task.  Invoked by _calling_ this\n    Task within a Flow context, after initialization.\n    Args:\n        uri: GCS path to load data from.\n        dataset: The id of a destination dataset to write the records to.\n        table: The name of a destination table to write the records to.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        schema: The schema to use when creating the table.\n        job_config: Dictionary of job configuration parameters;\n            note that the parameters provided here must be pickleable\n            (e.g., dataset references will be rejected).\n        project: The project to initialize the BigQuery Client with; if\n            not provided, will default to the one inferred from your credentials.\n        location: Location of the dataset that will be written to.\n\n    Returns:\n        The response from `load_table_from_uri`.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.bigquery import bigquery_load_cloud_storage\n\n        @flow\n        def example_bigquery_load_cloud_storage_flow():\n            gcp_credentials = GcpCredentials(project=\"project\")\n            result = bigquery_load_cloud_storage(\n                dataset=\"dataset\",\n                table=\"test_table\",\n                uri=\"uri\",\n                gcp_credentials=gcp_credentials\n            )\n            return result\n\n        example_bigquery_load_cloud_storage_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Loading into %s.%s from cloud storage\", dataset, table)\n\n    client = gcp_credentials.get_bigquery_client(project=project, location=location)\n    table_ref = client.dataset(dataset).table(table)\n\n    job_config = job_config or {}\n    if \"autodetect\" not in job_config:\n        job_config[\"autodetect\"] = True\n    job_config = LoadJobConfig(**job_config)\n    if schema:\n        job_config.schema = schema\n\n    result = None\n    try:\n        partial_load = partial(\n            _result_sync,\n            client.load_table_from_uri,\n            uri,\n            table_ref,\n            job_config=job_config,\n        )\n        result = await to_thread.run_sync(partial_load)\n    except Exception as exception:\n        logger.exception(exception)\n        if result is not None and result.errors is not None:\n            for error in result.errors:\n                logger.exception(error)\n        raise\n\n    if result is not None:\n        # remove unpickleable attributes\n        result._client = None\n        result._completion_lock = None\n\n    return result\n
        "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.bigquery_load_file","title":"bigquery_load_file async","text":"

Loads a file into BigQuery.

        Parameters:

        Name Type Description Default dataset str

ID of a destination dataset to write the records to.

        required table str

Name of a destination table to write the records to.

        required path Union[str, Path]

        A string or path-like object of the file to be loaded.

        required gcp_credentials GcpCredentials

        Credentials to use for authentication with GCP.

        required schema Optional[List[SchemaField]]

        Schema to use when creating the table.

        None job_config Optional[dict]

        An optional dictionary of job configuration parameters; note that the parameters provided here must be pickleable (e.g., dataset references will be rejected).

        None rewind bool

If True, seek to the beginning of the file handle before reading the file.

        False size Optional[int]

        Number of bytes to read from the file handle. If size is None or large, resumable upload will be used. Otherwise, multipart upload will be used.

        None project Optional[str]

        Project to initialize the BigQuery Client with; if not provided, will default to the one inferred from your credentials.

        None location str

Location of the dataset that will be written to.

        'US'

        Returns:

        Type Description LoadJob

        The response from load_table_from_file.

        Example
        from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.bigquery import bigquery_load_file\nfrom google.cloud.bigquery import SchemaField\n\n@flow\ndef example_bigquery_load_file_flow():\n    gcp_credentials = GcpCredentials(project=\"project\")\n    result = bigquery_load_file(\n        dataset=\"dataset\",\n        table=\"test_table\",\n        path=\"path\",\n        gcp_credentials=gcp_credentials\n    )\n    return result\n\nexample_bigquery_load_file_flow()\n
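A further sketch, with placeholder file path, dataset, and table names, that supplies an explicit schema through the SchemaField import shown above:
from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.bigquery import bigquery_load_file\nfrom google.cloud.bigquery import SchemaField\n\n@flow\ndef example_bigquery_load_file_with_schema_flow():\n    gcp_credentials = GcpCredentials(project=\"project\")\n    # Explicit schema, so autodetection is disabled in job_config\n    schema = [\n        SchemaField(\"number\", field_type=\"INTEGER\", mode=\"REQUIRED\"),\n        SchemaField(\"text\", field_type=\"STRING\", mode=\"REQUIRED\"),\n        SchemaField(\"bool\", field_type=\"BOOLEAN\", mode=\"NULLABLE\"),\n    ]\n    result = bigquery_load_file(\n        dataset=\"dataset\",\n        table=\"test_table\",\n        path=\"/path/to/data.json\",\n        gcp_credentials=gcp_credentials,\n        schema=schema,\n        job_config={\"source_format\": \"NEWLINE_DELIMITED_JSON\", \"autodetect\": False},\n    )\n    return result\n\nexample_bigquery_load_file_with_schema_flow()\n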
        Source code in prefect_gcp/bigquery.py
        @task\nasync def bigquery_load_file(\n    dataset: str,\n    table: str,\n    path: Union[str, Path],\n    gcp_credentials: GcpCredentials,\n    schema: Optional[List[\"SchemaField\"]] = None,\n    job_config: Optional[dict] = None,\n    rewind: bool = False,\n    size: Optional[int] = None,\n    project: Optional[str] = None,\n    location: str = \"US\",\n) -> \"LoadJob\":\n    \"\"\"\n    Loads file into BigQuery.\n\n    Args:\n        dataset: ID of a destination dataset to write the records to;\n            if not provided here, will default to the one provided at initialization.\n        table: Name of a destination table to write the records to;\n            if not provided here, will default to the one provided at initialization.\n        path: A string or path-like object of the file to be loaded.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        schema: Schema to use when creating the table.\n        job_config: An optional dictionary of job configuration parameters;\n            note that the parameters provided here must be pickleable\n            (e.g., dataset references will be rejected).\n        rewind: if True, seek to the beginning of the file handle\n            before reading the file.\n        size: Number of bytes to read from the file handle. If size is None or large,\n            resumable upload will be used. Otherwise, multipart upload will be used.\n        project: Project to initialize the BigQuery Client with; if\n            not provided, will default to the one inferred from your credentials.\n        location: location of the dataset that will be written to.\n\n    Returns:\n        The response from `load_table_from_file`.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.bigquery import bigquery_load_file\n        from google.cloud.bigquery import SchemaField\n\n        @flow\n        def example_bigquery_load_file_flow():\n            gcp_credentials = GcpCredentials(project=\"project\")\n            result = bigquery_load_file(\n                dataset=\"dataset\",\n                table=\"test_table\",\n                path=\"path\",\n                gcp_credentials=gcp_credentials\n            )\n            return result\n\n        example_bigquery_load_file_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Loading into %s.%s from file\", dataset, table)\n\n    if not os.path.exists(path):\n        raise ValueError(f\"{path} does not exist\")\n    elif not os.path.isfile(path):\n        raise ValueError(f\"{path} is not a file\")\n\n    client = gcp_credentials.get_bigquery_client(project=project)\n    table_ref = client.dataset(dataset).table(table)\n\n    job_config = job_config or {}\n    if \"autodetect\" not in job_config:\n        job_config[\"autodetect\"] = True\n        # TODO: test if autodetect is needed when schema is passed\n    job_config = LoadJobConfig(**job_config)\n    if schema:\n        # TODO: test if schema can be passed directly in job_config\n        job_config.schema = schema\n\n    try:\n        with open(path, \"rb\") as file_obj:\n            partial_load = partial(\n                _result_sync,\n                client.load_table_from_file,\n                file_obj,\n                table_ref,\n                rewind=rewind,\n                size=size,\n                location=location,\n                job_config=job_config,\n            )\n            
result = await to_thread.run_sync(partial_load)\n    except IOError:\n        logger.exception(f\"Could not open and read from {path}\")\n        raise\n\n    if result is not None:\n        # remove unpickleable attributes\n        result._client = None\n        result._completion_lock = None\n\n    return result\n
        "},{"location":"integrations/prefect-gcp/bigquery/#prefect_gcp.bigquery.bigquery_query","title":"bigquery_query async","text":"

        Runs a BigQuery query.

        Parameters:

        Name Type Description Default query str

        String of the query to execute.

        required gcp_credentials GcpCredentials

        Credentials to use for authentication with GCP.

        required query_params Optional[List[tuple]]

        List of 3-tuples specifying BigQuery query parameters; currently only scalar query parameters are supported. See the Google documentation for more details on how both the query and the query parameters should be formatted.

        None dry_run_max_bytes Optional[int]

If provided, the maximum number of bytes the query is allowed to process; this will be determined by executing a dry run and raising a RuntimeError if the maximum is exceeded.

        None dataset Optional[str]

        Name of a destination dataset to write the query results to, if you don't want them returned; if provided, table must also be provided.

        None table Optional[str]

        Name of a destination table to write the query results to, if you don't want them returned; if provided, dataset must also be provided.

        None to_dataframe bool

If True, returns the results of the query as a pandas DataFrame instead of a list of bigquery.table.Row objects.

        False job_config Optional[dict]

        Dictionary of job configuration parameters; note that the parameters provided here must be pickleable (e.g., dataset references will be rejected).

        None project Optional[str]

        The project to initialize the BigQuery Client with; if not provided, will default to the one inferred from your credentials.

        None result_transformer Optional[Callable[[List[Row]], Any]]

        Function that can be passed to transform the result of a query before returning. The function will be passed the list of rows returned by BigQuery for the given query.

        None location str

        Location of the dataset that will be queried.

        'US'

        Returns:

Type Description Any

A list of rows, or a pandas DataFrame if to_dataframe is True, matching the query criteria.

        Example

Queries the public BigQuery Shakespeare sample dataset using query parameters.

        from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.bigquery import bigquery_query\n\n@flow\ndef example_bigquery_query_flow():\n    gcp_credentials = GcpCredentials(\n        service_account_file=\"/path/to/service/account/keyfile.json\",\n        project=\"project\"\n    )\n    query = '''\n        SELECT word, word_count\n        FROM `bigquery-public-data.samples.shakespeare`\n        WHERE corpus = @corpus\n        AND word_count >= @min_word_count\n        ORDER BY word_count DESC;\n    '''\n    query_params = [\n        (\"corpus\", \"STRING\", \"romeoandjuliet\"),\n        (\"min_word_count\", \"INT64\", 250)\n    ]\n    result = bigquery_query(\n        query, gcp_credentials, query_params=query_params\n    )\n    return result\n\nexample_bigquery_query_flow()\n
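An additional sketch, assuming a placeholder service account path and an arbitrary 10 MB limit, that guards the query with dry_run_max_bytes and returns the results as a pandas DataFrame (pandas must be installed):
from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.bigquery import bigquery_query\n\n@flow\ndef example_bigquery_query_dataframe_flow():\n    gcp_credentials = GcpCredentials(\n        service_account_file=\"/path/to/service/account/keyfile.json\",\n        project=\"project\"\n    )\n    query = '''\n        SELECT word, word_count\n        FROM `bigquery-public-data.samples.shakespeare`\n        WHERE corpus = @corpus\n        ORDER BY word_count DESC\n        LIMIT 10;\n    '''\n    # Abort before running the query if the dry run estimates more than ~10 MB processed\n    df = bigquery_query(\n        query,\n        gcp_credentials,\n        query_params=[(\"corpus\", \"STRING\", \"romeoandjuliet\")],\n        dry_run_max_bytes=10_000_000,\n        to_dataframe=True,\n    )\n    return df\n\nexample_bigquery_query_dataframe_flow()\n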

        Source code in prefect_gcp/bigquery.py
        @task\nasync def bigquery_query(\n    query: str,\n    gcp_credentials: GcpCredentials,\n    query_params: Optional[List[tuple]] = None,  # 3-tuples\n    dry_run_max_bytes: Optional[int] = None,\n    dataset: Optional[str] = None,\n    table: Optional[str] = None,\n    to_dataframe: bool = False,\n    job_config: Optional[dict] = None,\n    project: Optional[str] = None,\n    result_transformer: Optional[Callable[[List[\"Row\"]], Any]] = None,\n    location: str = \"US\",\n) -> Any:\n    \"\"\"\n    Runs a BigQuery query.\n\n    Args:\n        query: String of the query to execute.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        query_params: List of 3-tuples specifying BigQuery query parameters; currently\n            only scalar query parameters are supported.  See the\n            [Google documentation](https://cloud.google.com/bigquery/docs/parameterized-queries#bigquery-query-params-python)\n            for more details on how both the query and the query parameters should be formatted.\n        dry_run_max_bytes: If provided, the maximum number of bytes the query\n            is allowed to process; this will be determined by executing a dry run\n            and raising a `ValueError` if the maximum is exceeded.\n        dataset: Name of a destination dataset to write the query results to,\n            if you don't want them returned; if provided, `table` must also be provided.\n        table: Name of a destination table to write the query results to,\n            if you don't want them returned; if provided, `dataset` must also be provided.\n        to_dataframe: If provided, returns the results of the query as a pandas\n            dataframe instead of a list of `bigquery.table.Row` objects.\n        job_config: Dictionary of job configuration parameters;\n            note that the parameters provided here must be pickleable\n            (e.g., dataset references will be rejected).\n        project: The project to initialize the BigQuery Client with; if not\n            provided, will default to the one inferred from your credentials.\n        result_transformer: Function that can be passed to transform the result of a query before returning. 
The function will be passed the list of rows returned by BigQuery for the given query.\n        location: Location of the dataset that will be queried.\n\n    Returns:\n        A list of rows, or pandas DataFrame if to_dataframe,\n        matching the query criteria.\n\n    Example:\n        Queries the public names database, returning 10 results.\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.bigquery import bigquery_query\n\n        @flow\n        def example_bigquery_query_flow():\n            gcp_credentials = GcpCredentials(\n                service_account_file=\"/path/to/service/account/keyfile.json\",\n                project=\"project\"\n            )\n            query = '''\n                SELECT word, word_count\n                FROM `bigquery-public-data.samples.shakespeare`\n                WHERE corpus = @corpus\n                AND word_count >= @min_word_count\n                ORDER BY word_count DESC;\n            '''\n            query_params = [\n                (\"corpus\", \"STRING\", \"romeoandjuliet\"),\n                (\"min_word_count\", \"INT64\", 250)\n            ]\n            result = bigquery_query(\n                query, gcp_credentials, query_params=query_params\n            )\n            return result\n\n        example_bigquery_query_flow()\n        ```\n    \"\"\"  # noqa\n    logger = get_run_logger()\n    logger.info(\"Running BigQuery query\")\n\n    client = gcp_credentials.get_bigquery_client(project=project, location=location)\n\n    # setup job config\n    job_config = QueryJobConfig(**job_config or {})\n    if query_params is not None:\n        job_config.query_parameters = [ScalarQueryParameter(*qp) for qp in query_params]\n\n    # perform dry_run if requested\n    if dry_run_max_bytes is not None:\n        saved_info = dict(\n            dry_run=job_config.dry_run, use_query_cache=job_config.use_query_cache\n        )\n        job_config.dry_run = True\n        job_config.use_query_cache = False\n        partial_query = partial(client.query, query, job_config=job_config)\n        response = await to_thread.run_sync(partial_query)\n        total_bytes_processed = response.total_bytes_processed\n        if total_bytes_processed > dry_run_max_bytes:\n            raise RuntimeError(\n                f\"Query will process {total_bytes_processed} bytes which is above \"\n                f\"the set maximum of {dry_run_max_bytes} for this task.\"\n            )\n        job_config.dry_run = saved_info[\"dry_run\"]\n        job_config.use_query_cache = saved_info[\"use_query_cache\"]\n\n    # if writing to a destination table\n    if dataset is not None:\n        table_ref = client.dataset(dataset).table(table)\n        job_config.destination = table_ref\n\n    partial_query = partial(\n        _result_sync,\n        client.query,\n        query,\n        job_config=job_config,\n    )\n    result = await to_thread.run_sync(partial_query)\n\n    if to_dataframe:\n        return result.to_dataframe()\n    else:\n        if result_transformer:\n            return result_transformer(result)\n        else:\n            return list(result)\n
        "},{"location":"integrations/prefect-gcp/cloud_run/","title":"Cloud Run","text":""},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run","title":"prefect_gcp.cloud_run","text":"

        DEPRECATION WARNING:

        This module is deprecated as of March 2024 and will not be available after September 2024. It has been replaced by the Cloud Run and Cloud Run V2 workers, which offer enhanced functionality and better performance.

        For upgrade instructions, see https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.

        Integrations with Google Cloud Run Job.

        Examples:

Run a job using Google Cloud Run Jobs:\n```python\nCloudRunJob(\n    image=\"gcr.io/my-project/my-image\",\n    region=\"us-east1\",\n    credentials=my_gcp_credentials\n).run()\n```\n\nRun a job that runs the command `echo hello world` using Google Cloud Run Jobs:\n```python\nCloudRunJob(\n    image=\"gcr.io/my-project/my-image\",\n    region=\"us-east1\",\n    credentials=my_gcp_credentials,\n    command=[\"echo\", \"hello world\"]\n).run()\n```\n
        "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.CloudRunJob","title":"CloudRunJob","text":"

        Bases: Infrastructure

        Infrastructure block used to run GCP Cloud Run Jobs.

The project name is supplied by the Credentials object and is correct as long as that Credentials object is configured for the intended project.

        Note this block is experimental. The interface may change without notice.
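A brief sketch, with placeholder block, project, and image names, of configuring and saving this block so it can be reused by name (keeping the deprecation notice above in mind):
from prefect_gcp import GcpCredentials\nfrom prefect_gcp.cloud_run import CloudRunJob\n\n# Assumes a GcpCredentials block named my-gcp-creds was saved previously\ngcp_credentials = GcpCredentials.load(\"my-gcp-creds\")\n\ncloud_run_job = CloudRunJob(\n    image=\"gcr.io/my-project/my-image\",\n    region=\"us-east1\",\n    credentials=gcp_credentials,\n    cpu=1,\n    memory=512,\n    memory_unit=\"Mi\",\n    timeout=600,\n)\ncloud_run_job.save(\"my-cloud-run-job\", overwrite=True)\n\n# Later, load the saved block by name and run it\nCloudRunJob.load(\"my-cloud-run-job\").run()\n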

        Source code in prefect_gcp/cloud_run.py
        @deprecated_class(\n    start_date=\"Mar 2024\",\n    help=(\n        \"Use the Cloud Run or Cloud Run v2 worker instead.\"\n        \" Refer to the upgrade guide for more information:\"\n        \" https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/.\"\n    ),\n)\nclass CloudRunJob(Infrastructure):\n    \"\"\"\n    <span class=\"badge-api experimental\"/>\n\n    Infrastructure block used to run GCP Cloud Run Jobs.\n\n    Project name information is provided by the Credentials object, and should always\n    be correct as long as the Credentials object is for the correct project.\n\n    Note this block is experimental. The interface may change without notice.\n    \"\"\"\n\n    _block_type_slug = \"cloud-run-job\"\n    _block_type_name = \"GCP Cloud Run Job\"\n    _description = \"Infrastructure block used to run GCP Cloud Run Jobs. Note this block is experimental. The interface may change without notice.\"  # noqa\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/10424e311932e31c477ac2b9ef3d53cefbaad708-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.CloudRunJob\"  # noqa: E501\n\n    type: Literal[\"cloud-run-job\"] = Field(\n        \"cloud-run-job\", description=\"The slug for this task type.\"\n    )\n    image: str = Field(\n        ...,\n        title=\"Image Name\",\n        description=(\n            \"The image to use for a new Cloud Run Job. This value must \"\n            \"refer to an image within either Google Container Registry \"\n            \"or Google Artifact Registry, like `gcr.io/<project_name>/<repo>/`.\"\n        ),\n    )\n    region: str = Field(..., description=\"The region where the Cloud Run Job resides.\")\n    credentials: GcpCredentials  # cannot be Field; else it shows as Json\n\n    # Job settings\n    cpu: Optional[int] = Field(\n        default=None,\n        title=\"CPU\",\n        description=(\n            \"The amount of compute allocated to the Cloud Run Job. \"\n            \"The int must be valid based on the rules specified at \"\n            \"https://cloud.google.com/run/docs/configuring/cpu#setting-jobs .\"\n        ),\n    )\n    memory: Optional[int] = Field(\n        default=None,\n        title=\"Memory\",\n        description=\"The amount of memory allocated to the Cloud Run Job.\",\n    )\n    memory_unit: Optional[Literal[\"G\", \"Gi\", \"M\", \"Mi\"]] = Field(\n        default=None,\n        title=\"Memory Units\",\n        description=(\n            \"The unit of memory. 
See \"\n            \"https://cloud.google.com/run/docs/configuring/memory-limits#setting \"\n            \"for additional details.\"\n        ),\n    )\n    vpc_connector_name: Optional[str] = Field(\n        default=None,\n        title=\"VPC Connector Name\",\n        description=\"The name of the VPC connector to use for the Cloud Run Job.\",\n    )\n    args: Optional[List[str]] = Field(\n        default=None,\n        description=(\n            \"Arguments to be passed to your Cloud Run Job's entrypoint command.\"\n        ),\n    )\n    env: Dict[str, str] = Field(\n        default_factory=dict,\n        description=\"Environment variables to be passed to your Cloud Run Job.\",\n    )\n\n    # Cleanup behavior\n    keep_job: Optional[bool] = Field(\n        default=False,\n        title=\"Keep Job After Completion\",\n        description=\"Keep the completed Cloud Run Job on Google Cloud Platform.\",\n    )\n    timeout: Optional[int] = Field(\n        default=600,\n        gt=0,\n        le=3600,\n        title=\"Job Timeout\",\n        description=(\n            \"The length of time that Prefect will wait for a Cloud Run Job to complete \"\n            \"before raising an exception.\"\n        ),\n    )\n    max_retries: Optional[int] = Field(\n        default=3,\n        ge=0,\n        le=10,\n        title=\"Max Retries\",\n        description=(\n            \"The maximum retries setting specifies the number of times a task is \"\n            \"allowed to restart in case of failure before being failed permanently.\"\n        ),\n    )\n    # For private use\n    _job_name: str = None\n    _execution: Optional[Execution] = None\n\n    @property\n    def job_name(self):\n        \"\"\"Create a unique and valid job name.\"\"\"\n\n        if self._job_name is None:\n            # get `repo` from `gcr.io/<project_name>/repo/other`\n            components = self.image.split(\"/\")\n            image_name = components[2]\n            # only alphanumeric and '-' allowed for a job name\n            modified_image_name = image_name.replace(\":\", \"-\").replace(\".\", \"-\")\n            # make 50 char limit for final job name, which will be '<name>-<uuid>'\n            if len(modified_image_name) > 17:\n                modified_image_name = modified_image_name[:17]\n            name = f\"{modified_image_name}-{uuid4().hex}\"\n            self._job_name = name\n\n        return self._job_name\n\n    @property\n    def memory_string(self):\n        \"\"\"Returns the string expected for memory resources argument.\"\"\"\n        if self.memory and self.memory_unit:\n            return str(self.memory) + self.memory_unit\n        return None\n\n    @validator(\"image\")\n    def _remove_image_spaces(cls, value):\n        \"\"\"Deal with spaces in image names.\"\"\"\n        if value is not None:\n            return value.strip()\n\n    @root_validator\n    def _check_valid_memory(cls, values):\n        \"\"\"Make sure memory conforms to expected values for API.\n        See: https://cloud.google.com/run/docs/configuring/memory-limits#setting\n        \"\"\"  # noqa\n        if (values.get(\"memory\") is not None and values.get(\"memory_unit\") is None) or (\n            values.get(\"memory_unit\") is not None and values.get(\"memory\") is None\n        ):\n            raise ValueError(\n                \"A memory value and unit must both be supplied to specify a memory\"\n                \" value other than the default memory value.\"\n            )\n        return values\n\n    def 
get_corresponding_worker_type(self) -> str:\n        \"\"\"Return the corresponding worker type for this infrastructure block.\"\"\"\n        return \"cloud-run\"\n\n    async def generate_work_pool_base_job_template(self) -> dict:\n        \"\"\"\n        Generate a base job template for a cloud-run work pool with the same\n        configuration as this block.\n\n        Returns:\n            - dict: a base job template for a cloud-run work pool\n        \"\"\"\n        base_job_template = await get_default_base_job_template_for_infrastructure_type(\n            self.get_corresponding_worker_type(),\n        )\n        assert (\n            base_job_template is not None\n        ), \"Failed to generate default base job template for Cloud Run worker.\"\n        for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n            if key == \"command\":\n                base_job_template[\"variables\"][\"properties\"][\"command\"][\n                    \"default\"\n                ] = shlex.join(value)\n            elif key in [\n                \"type\",\n                \"block_type_slug\",\n                \"_block_document_id\",\n                \"_block_document_name\",\n                \"_is_anonymous\",\n                \"memory_unit\",\n            ]:\n                continue\n            elif key == \"credentials\":\n                if not self.credentials._block_document_id:\n                    raise BlockNotSavedError(\n                        \"It looks like you are trying to use a block that\"\n                        \" has not been saved. Please call `.save` on your block\"\n                        \" before publishing it as a work pool.\"\n                    )\n                base_job_template[\"variables\"][\"properties\"][\"credentials\"][\n                    \"default\"\n                ] = {\n                    \"$ref\": {\n                        \"block_document_id\": str(self.credentials._block_document_id)\n                    }\n                }\n            elif key == \"memory\" and self.memory_string:\n                base_job_template[\"variables\"][\"properties\"][\"memory\"][\n                    \"default\"\n                ] = self.memory_string\n            elif key == \"cpu\" and self.cpu is not None:\n                base_job_template[\"variables\"][\"properties\"][\"cpu\"][\n                    \"default\"\n                ] = f\"{self.cpu * 1000}m\"\n            elif key == \"args\":\n                # Not a default variable, but we can add it to the template\n                base_job_template[\"variables\"][\"properties\"][\"args\"] = {\n                    \"title\": \"Arguments\",\n                    \"type\": \"string\",\n                    \"description\": \"Arguments to be passed to your Cloud Run Job's entrypoint command.\",  # noqa\n                    \"default\": value,\n                }\n                base_job_template[\"job_configuration\"][\"job_body\"][\"spec\"][\"template\"][\n                    \"spec\"\n                ][\"template\"][\"spec\"][\"containers\"][0][\"args\"] = \"{{ args }}\"\n            elif key in base_job_template[\"variables\"][\"properties\"]:\n                base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n            else:\n                self.logger.warning(\n                    f\"Variable {key!r} is not supported by Cloud Run work pools.\"\n                    \" Skipping.\"\n                )\n\n        return base_job_template\n\n    def 
_create_job_error(self, exc):\n        \"\"\"Provides a nicer error for 404s when trying to create a Cloud Run Job.\"\"\"\n        # TODO consider lookup table instead of the if/else,\n        # also check for documented errors\n        if exc.status_code == 404:\n            raise RuntimeError(\n                f\"Failed to find resources at {exc.uri}. Confirm that region\"\n                f\" '{self.region}' is the correct region for your Cloud Run Job and\"\n                f\" that {self.credentials.project} is the correct GCP project. If\"\n                f\" your project ID is not correct, you are using a Credentials block\"\n                f\" with permissions for the wrong project.\"\n            ) from exc\n        raise exc\n\n    def _job_run_submission_error(self, exc):\n        \"\"\"Provides a nicer error for 404s when submitting job runs.\"\"\"\n        if exc.status_code == 404:\n            pat1 = r\"The requested URL [^ ]+ was not found on this server\"\n            # pat2 = (\n            #     r\"Resource '[^ ]+' of kind 'JOB' in region '[\\w\\-0-9]+' \"\n            #     r\"in project '[\\w\\-0-9]+' does not exist\"\n            # )\n            if re.findall(pat1, str(exc)):\n                raise RuntimeError(\n                    f\"Failed to find resources at {exc.uri}. \"\n                    f\"Confirm that region '{self.region}' is \"\n                    f\"the correct region for your Cloud Run Job \"\n                    f\"and that '{self.credentials.project}' is the \"\n                    f\"correct GCP project. If your project ID is not \"\n                    f\"correct, you are using a Credentials \"\n                    f\"block with permissions for the wrong project.\"\n                ) from exc\n            else:\n                raise exc\n\n        raise exc\n\n    def _cpu_as_k8s_quantity(self) -> str:\n        \"\"\"Return the CPU integer in the format expected by GCP Cloud Run Jobs API.\n        See: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        See also: https://cloud.google.com/run/docs/configuring/cpu#setting-jobs\n        \"\"\"  # noqa\n        return str(self.cpu * 1000) + \"m\"\n\n    @sync_compatible\n    async def run(self, task_status: Optional[TaskStatus] = None):\n        \"\"\"Run the configured job on a Google Cloud Run Job.\"\"\"\n        with self._get_client() as client:\n            await run_sync_in_worker_thread(\n                self._create_job_and_wait_for_registration, client\n            )\n            job_execution = await run_sync_in_worker_thread(\n                self._begin_job_execution, client\n            )\n\n            if task_status:\n                task_status.started(self.job_name)\n\n            result = await run_sync_in_worker_thread(\n                self._watch_job_execution_and_get_result,\n                client,\n                job_execution,\n                5,\n            )\n            return result\n\n    @sync_compatible\n    async def kill(self, identifier: str, grace_seconds: int = 30) -> None:\n        \"\"\"\n        Kill a task running Cloud Run.\n\n        Args:\n            identifier: The Cloud Run Job name. This should match a\n                value yielded by CloudRunJob.run.\n        \"\"\"\n        if grace_seconds != 30:\n            self.logger.warning(\n                f\"Kill grace period of {grace_seconds}s requested, but GCP does not \"\n                \"support dynamic grace period configuration. 
See here for more info: \"\n                \"https://cloud.google.com/run/docs/reference/rest/v1/namespaces.jobs/delete\"  # noqa\n            )\n\n        with self._get_client() as client:\n            await run_sync_in_worker_thread(\n                self._kill_job,\n                client=client,\n                namespace=self.credentials.project,\n                job_name=identifier,\n            )\n\n    def _kill_job(self, client: Resource, namespace: str, job_name: str) -> None:\n        \"\"\"\n        Thin wrapper around Job.delete, wrapping a try/except since\n        Job is an independent class that doesn't have knowledge of\n        CloudRunJob and its associated logic.\n        \"\"\"\n        try:\n            Job.delete(client=client, namespace=namespace, job_name=job_name)\n        except Exception as exc:\n            if \"does not exist\" in str(exc):\n                raise InfrastructureNotFound(\n                    f\"Cannot stop Cloud Run Job; the job name {job_name!r} \"\n                    \"could not be found.\"\n                ) from exc\n            raise\n\n    def _create_job_and_wait_for_registration(self, client: Resource) -> None:\n        \"\"\"Create a new job wait for it to finish registering.\"\"\"\n        try:\n            self.logger.info(f\"Creating Cloud Run Job {self.job_name}\")\n            Job.create(\n                client=client,\n                namespace=self.credentials.project,\n                body=self._jobs_body(),\n            )\n        except googleapiclient.errors.HttpError as exc:\n            self._create_job_error(exc)\n\n        try:\n            self._wait_for_job_creation(client=client, timeout=self.timeout)\n        except Exception:\n            self.logger.exception(\n                \"Encountered an exception while waiting for job run creation\"\n            )\n            if not self.keep_job:\n                self.logger.info(\n                    f\"Deleting Cloud Run Job {self.job_name} from Google Cloud Run.\"\n                )\n                try:\n                    Job.delete(\n                        client=client,\n                        namespace=self.credentials.project,\n                        job_name=self.job_name,\n                    )\n                except Exception:\n                    self.logger.exception(\n                        \"Received an unexpected exception while attempting to delete\"\n                        f\" Cloud Run Job {self.job_name!r}\"\n                    )\n            raise\n\n    def _begin_job_execution(self, client: Resource) -> Execution:\n        \"\"\"Submit a job run for execution and return the execution object.\"\"\"\n        try:\n            self.logger.info(\n                f\"Submitting Cloud Run Job {self.job_name!r} for execution.\"\n            )\n            submission = Job.run(\n                client=client,\n                namespace=self.credentials.project,\n                job_name=self.job_name,\n            )\n\n            job_execution = Execution.get(\n                client=client,\n                namespace=submission[\"metadata\"][\"namespace\"],\n                execution_name=submission[\"metadata\"][\"name\"],\n            )\n\n            command = (\n                \" \".join(self.command) if self.command else \"default container command\"\n            )\n\n            self.logger.info(\n                f\"Cloud Run Job {self.job_name!r}: Running command {command!r}\"\n            )\n        except Exception as exc:\n           
 self._job_run_submission_error(exc)\n\n        return job_execution\n\n    def _watch_job_execution_and_get_result(\n        self, client: Resource, execution: Execution, poll_interval: int\n    ) -> CloudRunJobResult:\n        \"\"\"Wait for execution to complete and then return result.\"\"\"\n        try:\n            job_execution = self._watch_job_execution(\n                client=client,\n                job_execution=execution,\n                timeout=self.timeout,\n                poll_interval=poll_interval,\n            )\n        except Exception:\n            self.logger.exception(\n                \"Received an unexpected exception while monitoring Cloud Run Job \"\n                f\"{self.job_name!r}\"\n            )\n            raise\n\n        if job_execution.succeeded():\n            status_code = 0\n            self.logger.info(f\"Job Run {self.job_name} completed successfully\")\n        else:\n            status_code = 1\n            error_msg = job_execution.condition_after_completion()[\"message\"]\n            self.logger.error(\n                f\"Job Run {self.job_name} did not complete successfully. {error_msg}\"\n            )\n\n        self.logger.info(\n            f\"Job Run logs can be found on GCP at: {job_execution.log_uri}\"\n        )\n\n        if not self.keep_job:\n            self.logger.info(\n                f\"Deleting completed Cloud Run Job {self.job_name!r} from Google Cloud\"\n                \" Run...\"\n            )\n            try:\n                Job.delete(\n                    client=client,\n                    namespace=self.credentials.project,\n                    job_name=self.job_name,\n                )\n            except Exception:\n                self.logger.exception(\n                    \"Received an unexpected exception while attempting to delete Cloud\"\n                    f\" Run Job {self.job_name}\"\n                )\n\n        return CloudRunJobResult(identifier=self.job_name, status_code=status_code)\n\n    def _jobs_body(self) -> dict:\n        \"\"\"Create properly formatted body used for a Job CREATE request.\n        See: https://cloud.google.com/run/docs/reference/rest/v1/namespaces.jobs\n        \"\"\"\n        jobs_metadata = {\"name\": self.job_name}\n\n        annotations = {\n            # See: https://cloud.google.com/run/docs/troubleshooting#launch-stage-validation  # noqa\n            \"run.googleapis.com/launch-stage\": \"BETA\",\n        }\n        # add vpc connector if specified\n        if self.vpc_connector_name:\n            annotations[\n                \"run.googleapis.com/vpc-access-connector\"\n            ] = self.vpc_connector_name\n\n        # env and command here\n        containers = [self._add_container_settings({\"image\": self.image})]\n\n        # apply this timeout to each task\n        timeout_seconds = str(self.timeout)\n\n        body = {\n            \"apiVersion\": \"run.googleapis.com/v1\",\n            \"kind\": \"Job\",\n            \"metadata\": jobs_metadata,\n            \"spec\": {  # JobSpec\n                \"template\": {  # ExecutionTemplateSpec\n                    \"metadata\": {\"annotations\": annotations},\n                    \"spec\": {  # ExecutionSpec\n                        \"template\": {  # TaskTemplateSpec\n                            \"spec\": {\n                                \"containers\": containers,\n                                \"timeoutSeconds\": timeout_seconds,\n                                \"maxRetries\": self.max_retries,\n   
                         }  # TaskSpec\n                        }\n                    },\n                }\n            },\n        }\n        return body\n\n    def preview(self) -> str:\n        \"\"\"Generate a preview of the job definition that will be sent to GCP.\"\"\"\n        body = self._jobs_body()\n        container_settings = body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\n            \"containers\"\n        ][0][\"env\"]\n        body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\"containers\"][0][\"env\"] = [\n            container_setting\n            for container_setting in container_settings\n            if container_setting[\"name\"] != \"PREFECT_API_KEY\"\n        ]\n        return json.dumps(body, indent=2)\n\n    def _watch_job_execution(\n        self, client, job_execution: Execution, timeout: int, poll_interval: int = 5\n    ):\n        \"\"\"\n        Update job_execution status until it is no longer running or timeout is reached.\n        \"\"\"\n        t0 = time.time()\n        while job_execution.is_running():\n            job_execution = Execution.get(\n                client=client,\n                namespace=job_execution.namespace,\n                execution_name=job_execution.name,\n            )\n\n            elapsed_time = time.time() - t0\n            if timeout is not None and elapsed_time > timeout:\n                raise RuntimeError(\n                    f\"Timed out after {elapsed_time}s while waiting for Cloud Run Job \"\n                    \"execution to complete. Your job may still be running on GCP.\"\n                )\n\n            time.sleep(poll_interval)\n\n        return job_execution\n\n    def _wait_for_job_creation(\n        self, client: Resource, timeout: int, poll_interval: int = 5\n    ):\n        \"\"\"Give created job time to register.\"\"\"\n        job = Job.get(\n            client=client, namespace=self.credentials.project, job_name=self.job_name\n        )\n\n        t0 = time.time()\n        while not job.is_ready():\n            ready_condition = (\n                job.ready_condition\n                if job.ready_condition\n                else \"waiting for condition update\"\n            )\n            self.logger.info(\n                f\"Job is not yet ready... Current condition: {ready_condition}\"\n            )\n            job = Job.get(\n                client=client,\n                namespace=self.credentials.project,\n                job_name=self.job_name,\n            )\n\n            elapsed_time = time.time() - t0\n            if timeout is not None and elapsed_time > timeout:\n                raise RuntimeError(\n                    f\"Timed out after {elapsed_time}s while waiting for Cloud Run Job \"\n                    \"execution to complete. 
Your job may still be running on GCP.\"\n                )\n\n            time.sleep(poll_interval)\n\n    def _get_client(self) -> Resource:\n        \"\"\"Get the base client needed for interacting with GCP APIs.\"\"\"\n        # region needed for 'v1' API\n        api_endpoint = f\"https://{self.region}-run.googleapis.com\"\n        gcp_creds = self.credentials.get_credentials_from_service_account()\n        options = ClientOptions(api_endpoint=api_endpoint)\n\n        return discovery.build(\n            \"run\", \"v1\", client_options=options, credentials=gcp_creds\n        ).namespaces()\n\n    # CONTAINER SETTINGS\n    def _add_container_settings(self, base_settings: Dict[str, Any]) -> Dict[str, Any]:\n        \"\"\"\n        Add settings related to containers for Cloud Run Jobs to a dictionary.\n        Includes environment variables, entrypoint command, entrypoint arguments,\n        and cpu and memory limits.\n        See: https://cloud.google.com/run/docs/reference/rest/v1/Container\n        and https://cloud.google.com/run/docs/reference/rest/v1/Container#ResourceRequirements\n        \"\"\"  # noqa\n        container_settings = base_settings.copy()\n        container_settings.update(self._add_env())\n        container_settings.update(self._add_resources())\n        container_settings.update(self._add_command())\n        container_settings.update(self._add_args())\n        return container_settings\n\n    def _add_args(self) -> dict:\n        \"\"\"Set the arguments that will be passed to the entrypoint for a Cloud Run Job.\n        See: https://cloud.google.com/run/docs/reference/rest/v1/Container\n        \"\"\"  # noqa\n        return {\"args\": self.args} if self.args else {}\n\n    def _add_command(self) -> dict:\n        \"\"\"Set the command that a container will run for a Cloud Run Job.\n        See: https://cloud.google.com/run/docs/reference/rest/v1/Container\n        \"\"\"  # noqa\n        return {\"command\": self.command}\n\n    def _add_resources(self) -> dict:\n        \"\"\"Set specified resources limits for a Cloud Run Job.\n        See: https://cloud.google.com/run/docs/reference/rest/v1/Container#ResourceRequirements\n        See also: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        \"\"\"  # noqa\n        resources = {\"limits\": {}, \"requests\": {}}\n\n        if self.cpu is not None:\n            cpu = self._cpu_as_k8s_quantity()\n            resources[\"limits\"][\"cpu\"] = cpu\n            resources[\"requests\"][\"cpu\"] = cpu\n        if self.memory_string is not None:\n            resources[\"limits\"][\"memory\"] = self.memory_string\n            resources[\"requests\"][\"memory\"] = self.memory_string\n\n        return {\"resources\": resources} if resources[\"requests\"] else {}\n\n    def _add_env(self) -> dict:\n        \"\"\"Add environment variables for a Cloud Run Job.\n\n        Method `self._base_environment()` gets necessary Prefect environment variables\n        from the config.\n\n        See: https://cloud.google.com/run/docs/reference/rest/v1/Container#envvar for\n        how environment variables are specified for Cloud Run Jobs.\n        \"\"\"  # noqa\n        env = {**self._base_environment(), **self.env}\n        cloud_run_env = [{\"name\": k, \"value\": v} for k, v in env.items()]\n        return {\"env\": cloud_run_env}\n
        "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.CloudRunJob.job_name","title":"job_name property","text":"

        Create a unique and valid job name.

        "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.CloudRunJob.memory_string","title":"memory_string property","text":"

        Returns the string expected for memory resources argument.

        "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.CloudRunJob.generate_work_pool_base_job_template","title":"generate_work_pool_base_job_template async","text":"

        Generate a base job template for a cloud-run work pool with the same configuration as this block.

        Returns:

        Type Description dict
        • dict: a base job template for a cloud-run work pool
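A short sketch, assuming a previously saved block named my-cloud-run-job, of exporting the generated template so it can be supplied when creating a cloud-run work pool:
import asyncio\nimport json\n\nfrom prefect_gcp.cloud_run import CloudRunJob\n\nasync def main():\n    # Assumes a CloudRunJob block named my-cloud-run-job was saved earlier\n    job_block = await CloudRunJob.load(\"my-cloud-run-job\")\n    template = await job_block.generate_work_pool_base_job_template()\n    # Write the template out so it can be supplied when creating a cloud-run work pool\n    with open(\"base-job-template.json\", \"w\") as f:\n        json.dump(template, f, indent=2)\n\nasyncio.run(main())\n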
        Source code in prefect_gcp/cloud_run.py
        async def generate_work_pool_base_job_template(self) -> dict:\n    \"\"\"\n    Generate a base job template for a cloud-run work pool with the same\n    configuration as this block.\n\n    Returns:\n        - dict: a base job template for a cloud-run work pool\n    \"\"\"\n    base_job_template = await get_default_base_job_template_for_infrastructure_type(\n        self.get_corresponding_worker_type(),\n    )\n    assert (\n        base_job_template is not None\n    ), \"Failed to generate default base job template for Cloud Run worker.\"\n    for key, value in self.dict(exclude_unset=True, exclude_defaults=True).items():\n        if key == \"command\":\n            base_job_template[\"variables\"][\"properties\"][\"command\"][\n                \"default\"\n            ] = shlex.join(value)\n        elif key in [\n            \"type\",\n            \"block_type_slug\",\n            \"_block_document_id\",\n            \"_block_document_name\",\n            \"_is_anonymous\",\n            \"memory_unit\",\n        ]:\n            continue\n        elif key == \"credentials\":\n            if not self.credentials._block_document_id:\n                raise BlockNotSavedError(\n                    \"It looks like you are trying to use a block that\"\n                    \" has not been saved. Please call `.save` on your block\"\n                    \" before publishing it as a work pool.\"\n                )\n            base_job_template[\"variables\"][\"properties\"][\"credentials\"][\n                \"default\"\n            ] = {\n                \"$ref\": {\n                    \"block_document_id\": str(self.credentials._block_document_id)\n                }\n            }\n        elif key == \"memory\" and self.memory_string:\n            base_job_template[\"variables\"][\"properties\"][\"memory\"][\n                \"default\"\n            ] = self.memory_string\n        elif key == \"cpu\" and self.cpu is not None:\n            base_job_template[\"variables\"][\"properties\"][\"cpu\"][\n                \"default\"\n            ] = f\"{self.cpu * 1000}m\"\n        elif key == \"args\":\n            # Not a default variable, but we can add it to the template\n            base_job_template[\"variables\"][\"properties\"][\"args\"] = {\n                \"title\": \"Arguments\",\n                \"type\": \"string\",\n                \"description\": \"Arguments to be passed to your Cloud Run Job's entrypoint command.\",  # noqa\n                \"default\": value,\n            }\n            base_job_template[\"job_configuration\"][\"job_body\"][\"spec\"][\"template\"][\n                \"spec\"\n            ][\"template\"][\"spec\"][\"containers\"][0][\"args\"] = \"{{ args }}\"\n        elif key in base_job_template[\"variables\"][\"properties\"]:\n            base_job_template[\"variables\"][\"properties\"][key][\"default\"] = value\n        else:\n            self.logger.warning(\n                f\"Variable {key!r} is not supported by Cloud Run work pools.\"\n                \" Skipping.\"\n            )\n\n    return base_job_template\n
        "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.CloudRunJob.get_corresponding_worker_type","title":"get_corresponding_worker_type","text":"

        Return the corresponding worker type for this infrastructure block.

        Source code in prefect_gcp/cloud_run.py
        def get_corresponding_worker_type(self) -> str:\n    \"\"\"Return the corresponding worker type for this infrastructure block.\"\"\"\n    return \"cloud-run\"\n
        "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.CloudRunJob.kill","title":"kill async","text":"

Kill a task running on Cloud Run.

        Parameters:

        Name Type Description Default identifier str

        The Cloud Run Job name. This should match a value yielded by CloudRunJob.run.

        required Source code in prefect_gcp/cloud_run.py
        @sync_compatible\nasync def kill(self, identifier: str, grace_seconds: int = 30) -> None:\n    \"\"\"\n    Kill a task running Cloud Run.\n\n    Args:\n        identifier: The Cloud Run Job name. This should match a\n            value yielded by CloudRunJob.run.\n    \"\"\"\n    if grace_seconds != 30:\n        self.logger.warning(\n            f\"Kill grace period of {grace_seconds}s requested, but GCP does not \"\n            \"support dynamic grace period configuration. See here for more info: \"\n            \"https://cloud.google.com/run/docs/reference/rest/v1/namespaces.jobs/delete\"  # noqa\n        )\n\n    with self._get_client() as client:\n        await run_sync_in_worker_thread(\n            self._kill_job,\n            client=client,\n            namespace=self.credentials.project,\n            job_name=identifier,\n        )\n
        "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.CloudRunJob.preview","title":"preview","text":"

        Generate a preview of the job definition that will be sent to GCP.

        Source code in prefect_gcp/cloud_run.py
        def preview(self) -> str:\n    \"\"\"Generate a preview of the job definition that will be sent to GCP.\"\"\"\n    body = self._jobs_body()\n    container_settings = body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\n        \"containers\"\n    ][0][\"env\"]\n    body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\"containers\"][0][\"env\"] = [\n        container_setting\n        for container_setting in container_settings\n        if container_setting[\"name\"] != \"PREFECT_API_KEY\"\n    ]\n    return json.dumps(body, indent=2)\n
        "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.CloudRunJob.run","title":"run async","text":"

        Run the configured job on a Google Cloud Run Job.

        Source code in prefect_gcp/cloud_run.py
        @sync_compatible\nasync def run(self, task_status: Optional[TaskStatus] = None):\n    \"\"\"Run the configured job on a Google Cloud Run Job.\"\"\"\n    with self._get_client() as client:\n        await run_sync_in_worker_thread(\n            self._create_job_and_wait_for_registration, client\n        )\n        job_execution = await run_sync_in_worker_thread(\n            self._begin_job_execution, client\n        )\n\n        if task_status:\n            task_status.started(self.job_name)\n\n        result = await run_sync_in_worker_thread(\n            self._watch_job_execution_and_get_result,\n            client,\n            job_execution,\n            5,\n        )\n        return result\n
        "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.CloudRunJobResult","title":"CloudRunJobResult","text":"

        Bases: InfrastructureResult

        Result from a Cloud Run Job.

        Source code in prefect_gcp/cloud_run.py
        class CloudRunJobResult(InfrastructureResult):\n    \"\"\"Result from a Cloud Run Job.\"\"\"\n
        "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Execution","title":"Execution","text":"

        Bases: BaseModel

        Utility class to call GCP executions API and interact with the returned objects.

        Source code in prefect_gcp/cloud_run.py
        class Execution(BaseModel):\n    \"\"\"\n    Utility class to call GCP `executions` API and\n    interact with the returned objects.\n    \"\"\"\n\n    name: str\n    namespace: str\n    metadata: dict\n    spec: dict\n    status: dict\n    log_uri: str\n\n    def is_running(self) -> bool:\n        \"\"\"Returns True if Execution is not completed.\"\"\"\n        return self.status.get(\"completionTime\") is None\n\n    def condition_after_completion(self):\n        \"\"\"Returns Execution condition if Execution has completed.\"\"\"\n        for condition in self.status[\"conditions\"]:\n            if condition[\"type\"] == \"Completed\":\n                return condition\n\n    def succeeded(self):\n        \"\"\"Whether or not the Execution completed is a successful state.\"\"\"\n        completed_condition = self.condition_after_completion()\n        if completed_condition and completed_condition[\"status\"] == \"True\":\n            return True\n\n        return False\n\n    @classmethod\n    def get(cls, client: Resource, namespace: str, execution_name: str):\n        \"\"\"\n        Make a get request to the GCP executions API\n        and return an Execution instance.\n        \"\"\"\n        request = client.executions().get(\n            name=f\"namespaces/{namespace}/executions/{execution_name}\"\n        )\n        response = request.execute()\n\n        return cls(\n            name=response[\"metadata\"][\"name\"],\n            namespace=response[\"metadata\"][\"namespace\"],\n            metadata=response[\"metadata\"],\n            spec=response[\"spec\"],\n            status=response[\"status\"],\n            log_uri=response[\"status\"][\"logUri\"],\n        )\n
        "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Execution.condition_after_completion","title":"condition_after_completion","text":"

        Returns Execution condition if Execution has completed.

        Source code in prefect_gcp/cloud_run.py
        def condition_after_completion(self):\n    \"\"\"Returns Execution condition if Execution has completed.\"\"\"\n    for condition in self.status[\"conditions\"]:\n        if condition[\"type\"] == \"Completed\":\n            return condition\n
        "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Execution.get","title":"get classmethod","text":"

        Make a get request to the GCP executions API and return an Execution instance.

        Source code in prefect_gcp/cloud_run.py
        @classmethod\ndef get(cls, client: Resource, namespace: str, execution_name: str):\n    \"\"\"\n    Make a get request to the GCP executions API\n    and return an Execution instance.\n    \"\"\"\n    request = client.executions().get(\n        name=f\"namespaces/{namespace}/executions/{execution_name}\"\n    )\n    response = request.execute()\n\n    return cls(\n        name=response[\"metadata\"][\"name\"],\n        namespace=response[\"metadata\"][\"namespace\"],\n        metadata=response[\"metadata\"],\n        spec=response[\"spec\"],\n        status=response[\"status\"],\n        log_uri=response[\"status\"][\"logUri\"],\n    )\n
        "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Execution.is_running","title":"is_running","text":"

        Returns True if Execution is not completed.

        Source code in prefect_gcp/cloud_run.py
        def is_running(self) -> bool:\n    \"\"\"Returns True if Execution is not completed.\"\"\"\n    return self.status.get(\"completionTime\") is None\n
        "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Execution.succeeded","title":"succeeded","text":"

Whether or not the Execution completed in a successful state.

        Source code in prefect_gcp/cloud_run.py
        def succeeded(self):\n    \"\"\"Whether or not the Execution completed is a successful state.\"\"\"\n    completed_condition = self.condition_after_completion()\n    if completed_condition and completed_condition[\"status\"] == \"True\":\n        return True\n\n    return False\n
        "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Job","title":"Job","text":"

        Bases: BaseModel

Utility class to call the GCP jobs API and interact with the returned objects.

        Source code in prefect_gcp/cloud_run.py
        class Job(BaseModel):\n    \"\"\"\n    Utility class to call GCP `jobs` API and\n    interact with the returned objects.\n    \"\"\"\n\n    metadata: dict\n    spec: dict\n    status: dict\n    name: str\n    ready_condition: dict\n    execution_status: dict\n\n    def _is_missing_container(self):\n        \"\"\"\n        Check if Job status is not ready because\n        the specified container cannot be found.\n        \"\"\"\n        if (\n            self.ready_condition.get(\"status\") == \"False\"\n            and self.ready_condition.get(\"reason\") == \"ContainerMissing\"\n        ):\n            return True\n        return False\n\n    def is_ready(self) -> bool:\n        \"\"\"Whether a job is finished registering and ready to be executed\"\"\"\n        if self._is_missing_container():\n            raise Exception(f\"{self.ready_condition['message']}\")\n        return self.ready_condition.get(\"status\") == \"True\"\n\n    def has_execution_in_progress(self) -> bool:\n        \"\"\"See if job has a run in progress.\"\"\"\n        return (\n            self.execution_status == {}\n            or self.execution_status.get(\"completionTimestamp\") is None\n        )\n\n    @staticmethod\n    def _get_ready_condition(job: dict) -> dict:\n        \"\"\"Utility to access JSON field containing ready condition.\"\"\"\n        if job[\"status\"].get(\"conditions\"):\n            for condition in job[\"status\"][\"conditions\"]:\n                if condition[\"type\"] == \"Ready\":\n                    return condition\n\n        return {}\n\n    @staticmethod\n    def _get_execution_status(job: dict):\n        \"\"\"Utility to access JSON field containing execution status.\"\"\"\n        if job[\"status\"].get(\"latestCreatedExecution\"):\n            return job[\"status\"][\"latestCreatedExecution\"]\n\n        return {}\n\n    @classmethod\n    def get(cls, client: Resource, namespace: str, job_name: str):\n        \"\"\"Make a get request to the GCP jobs API and return a Job instance.\"\"\"\n        request = client.jobs().get(name=f\"namespaces/{namespace}/jobs/{job_name}\")\n        response = request.execute()\n\n        return cls(\n            metadata=response[\"metadata\"],\n            spec=response[\"spec\"],\n            status=response[\"status\"],\n            name=response[\"metadata\"][\"name\"],\n            ready_condition=cls._get_ready_condition(response),\n            execution_status=cls._get_execution_status(response),\n        )\n\n    @staticmethod\n    def create(client: Resource, namespace: str, body: dict):\n        \"\"\"Make a create request to the GCP jobs API.\"\"\"\n        request = client.jobs().create(parent=f\"namespaces/{namespace}\", body=body)\n        response = request.execute()\n        return response\n\n    @staticmethod\n    def delete(client: Resource, namespace: str, job_name: str):\n        \"\"\"Make a delete request to the GCP jobs API.\"\"\"\n        request = client.jobs().delete(name=f\"namespaces/{namespace}/jobs/{job_name}\")\n        response = request.execute()\n        return response\n\n    @staticmethod\n    def run(client: Resource, namespace: str, job_name: str):\n        \"\"\"Make a run request to the GCP jobs API.\"\"\"\n        request = client.jobs().run(name=f\"namespaces/{namespace}/jobs/{job_name}\")\n        response = request.execute()\n        return response\n
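
A hedged sketch of driving this class directly (the CloudRunJob block and Cloud Run worker normally do this for you); the client setup mirrors the Execution example above, and the project and job names are placeholders.

from google.api_core.client_options import ClientOptions\nfrom googleapiclient import discovery\nfrom prefect_gcp.cloud_run import Job\n\nclient = discovery.build(\n    \"run\",\n    \"v1\",\n    client_options=ClientOptions(api_endpoint=\"https://us-central1-run.googleapis.com\"),\n).namespaces()\n\n# Look up an existing job and start it only if it is registered and idle.\njob = Job.get(client=client, namespace=\"my-project\", job_name=\"my-job\")\nif job.is_ready() and not job.has_execution_in_progress():\n    Job.run(client=client, namespace=\"my-project\", job_name=\"my-job\")\n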
        "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Job.create","title":"create staticmethod","text":"

        Make a create request to the GCP jobs API.

        Source code in prefect_gcp/cloud_run.py
        @staticmethod\ndef create(client: Resource, namespace: str, body: dict):\n    \"\"\"Make a create request to the GCP jobs API.\"\"\"\n    request = client.jobs().create(parent=f\"namespaces/{namespace}\", body=body)\n    response = request.execute()\n    return response\n
        "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Job.delete","title":"delete staticmethod","text":"

        Make a delete request to the GCP jobs API.

        Source code in prefect_gcp/cloud_run.py
        @staticmethod\ndef delete(client: Resource, namespace: str, job_name: str):\n    \"\"\"Make a delete request to the GCP jobs API.\"\"\"\n    request = client.jobs().delete(name=f\"namespaces/{namespace}/jobs/{job_name}\")\n    response = request.execute()\n    return response\n
        "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Job.get","title":"get classmethod","text":"

        Make a get request to the GCP jobs API and return a Job instance.

        Source code in prefect_gcp/cloud_run.py
        @classmethod\ndef get(cls, client: Resource, namespace: str, job_name: str):\n    \"\"\"Make a get request to the GCP jobs API and return a Job instance.\"\"\"\n    request = client.jobs().get(name=f\"namespaces/{namespace}/jobs/{job_name}\")\n    response = request.execute()\n\n    return cls(\n        metadata=response[\"metadata\"],\n        spec=response[\"spec\"],\n        status=response[\"status\"],\n        name=response[\"metadata\"][\"name\"],\n        ready_condition=cls._get_ready_condition(response),\n        execution_status=cls._get_execution_status(response),\n    )\n
        "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Job.has_execution_in_progress","title":"has_execution_in_progress","text":"

        See if job has a run in progress.

        Source code in prefect_gcp/cloud_run.py
        def has_execution_in_progress(self) -> bool:\n    \"\"\"See if job has a run in progress.\"\"\"\n    return (\n        self.execution_status == {}\n        or self.execution_status.get(\"completionTimestamp\") is None\n    )\n
        "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Job.is_ready","title":"is_ready","text":"

Whether a job has finished registering and is ready to be executed.

        Source code in prefect_gcp/cloud_run.py
        def is_ready(self) -> bool:\n    \"\"\"Whether a job is finished registering and ready to be executed\"\"\"\n    if self._is_missing_container():\n        raise Exception(f\"{self.ready_condition['message']}\")\n    return self.ready_condition.get(\"status\") == \"True\"\n
        "},{"location":"integrations/prefect-gcp/cloud_run/#prefect_gcp.cloud_run.Job.run","title":"run staticmethod","text":"

        Make a run request to the GCP jobs API.

        Source code in prefect_gcp/cloud_run.py
        @staticmethod\ndef run(client: Resource, namespace: str, job_name: str):\n    \"\"\"Make a run request to the GCP jobs API.\"\"\"\n    request = client.jobs().run(name=f\"namespaces/{namespace}/jobs/{job_name}\")\n    response = request.execute()\n    return response\n
        "},{"location":"integrations/prefect-gcp/cloud_run_worker/","title":"Cloud Run","text":""},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run","title":"prefect_gcp.workers.cloud_run","text":"

        Module containing the Cloud Run worker used for executing flow runs as Cloud Run jobs.

        Get started by creating a Cloud Run work pool:

        prefect work-pool create 'my-cloud-run-pool' --type cloud-run\n

        Then start a Cloud Run worker with the following command:

        prefect worker start --pool 'my-cloud-run-pool'\n
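
With the worker polling the pool, a deployment can target it. The sketch below is illustrative rather than part of this module: it assumes a recent Prefect 2.x flow.deploy API, and the deployment name and image are placeholders (the image must live in a registry that Cloud Run can pull from).

from prefect import flow\n\n@flow(log_prints=True)\ndef my_flow():\n    print(\"Hello from Cloud Run!\")\n\nif __name__ == \"__main__\":\n    # The image below is a placeholder; it must be pushable from your machine\n    # and pullable by Cloud Run.\n    my_flow.deploy(\n        name=\"cloud-run-example\",\n        work_pool_name=\"my-cloud-run-pool\",\n        image=\"us-docker.pkg.dev/my-project/my-repo/my-image:latest\",\n    )\n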
        "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run--configuration","title":"Configuration","text":"

        Read more about configuring work pools here.

        "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run--advanced-configuration","title":"Advanced Configuration","text":"

        Using a custom Cloud Run job template

        Below is the default job body template used by the Cloud Run Worker:

        {\n    \"apiVersion\": \"run.googleapis.com/v1\",\n    \"kind\": \"Job\",\n    \"metadata\":\n        {\n            \"name\": \"{{ name }}\",\n            \"annotations\":\n            {\n                \"run.googleapis.com/launch-stage\": \"BETA\",\n            }\n        },\n        \"spec\":\n        {\n            \"template\":\n            {\n                \"spec\":\n                {\n                    \"template\":\n                    {\n                        \"spec\":\n                        {\n                            \"containers\":\n                            [\n                                {\n                                    \"image\": \"{{ image }}\",\n                                    \"args\": \"{{ args }}\",\n                                    \"resources\":\n                                    {\n                                        \"limits\":\n                                        {\n                                            \"cpu\": \"{{ cpu }}\",\n                                            \"memory\": \"{{ memory }}\"\n                                        },\n                                        \"requests\":\n                                        {\n                                            \"cpu\": \"{{ cpu }}\",\n                                            \"memory\": \"{{ memory }}\"\n                                        }\n                                    }\n                                }\n                            ],\n                            \"timeoutSeconds\": \"{{ timeout }}\",\n                            \"serviceAccountName\": \"{{ service_account_name }}\"\n                        }\n                    }\n                }\n                }\n            },\n            \"metadata\":\n            {\n                \"annotations\":\n                {\n                    \"run.googleapis.com/vpc-access-connector\": \"{{ vpc_connector_name }}\"\n                }\n            }\n        },\n    },\n    \"timeout\": \"{{ timeout }}\",\n    \"keep_job\": \"{{ keep_job }}\"\n}\n
Each value enclosed in {{ }} is a placeholder that will be replaced with a value at runtime on a per-deployment basis. The values that can be used as placeholders are defined by the variables schema of the base job template.

The default job body template and available variables can be customized on a per-work-pool basis. By editing the default job body template you can:

        • Add additional placeholders to the default job template
        • Remove placeholders from the default job template
        • Pass values to Cloud Run that are not defined in the variables schema
        "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run--adding-additional-placeholders","title":"Adding additional placeholders","text":"

For example, to allow deployments to set a custom annotation that is not described in the default job template, you can add the following:

        {\n    \"apiVersion\": \"run.googleapis.com/v1\",\n    \"kind\": \"Job\",\n    \"metadata\":\n    {\n        \"name\": \"{{ name }}\",\n        \"annotations\":\n        {\n            \"run.googleapis.com/my-custom-annotation\": \"{{ my_custom_annotation }}\",\n            \"run.googleapis.com/launch-stage\": \"BETA\",\n        },\n      ...\n    },\n  ...\n}\n
        my_custom_annotation can now be used as a placeholder in the job template and set on a per-deployment basis.

        # deployment.yaml\n...\ninfra_overrides: {\"my_custom_annotation\": \"my-custom-value\"}\n

Additionally, fields can be hard-coded to prevent configuration at the deployment level. For example, to pin the vpc_connector_name field, remove the placeholder and replace it with an actual value. All deployments that point to this work pool will then use the same vpc_connector_name value.

        {\n    \"apiVersion\": \"run.googleapis.com/v1\",\n    \"kind\": \"Job\",\n    \"spec\":\n    {\n        \"template\":\n        {\n            \"metadata\":\n            {\n                \"annotations\":\n                {\n                    \"run.googleapis.com/vpc-access-connector\": \"my-vpc-connector\"\n                }\n            },\n            ...\n        },\n        ...\n    }\n}\n
        "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run.CloudRunWorker","title":"CloudRunWorker","text":"

        Bases: BaseWorker

        Prefect worker that executes flow runs within Cloud Run Jobs.

        Source code in prefect_gcp/workers/cloud_run.py
        class CloudRunWorker(BaseWorker):\n    \"\"\"Prefect worker that executes flow runs within Cloud Run Jobs.\"\"\"\n\n    type = \"cloud-run\"\n    job_configuration = CloudRunWorkerJobConfiguration\n    job_configuration_variables = CloudRunWorkerVariables\n    _description = (\n        \"Execute flow runs within containers on Google Cloud Run. Requires \"\n        \"a Google Cloud Platform account.\"\n    )\n    _display_name = \"Google Cloud Run\"\n    _documentation_url = \"https://prefecthq.github.io/prefect-gcp/cloud_run_worker/\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/10424e311932e31c477ac2b9ef3d53cefbaad708-250x250.png\"  # noqa\n\n    def _create_job_error(self, exc, configuration):\n        \"\"\"Provides a nicer error for 404s when trying to create a Cloud Run Job.\"\"\"\n        # TODO consider lookup table instead of the if/else,\n        # also check for documented errors\n        if exc.status_code == 404:\n            raise RuntimeError(\n                f\"Failed to find resources at {exc.uri}. Confirm that region\"\n                f\" '{self.region}' is the correct region for your Cloud Run Job and\"\n                f\" that {configuration.project} is the correct GCP project. If\"\n                f\" your project ID is not correct, you are using a Credentials block\"\n                f\" with permissions for the wrong project.\"\n            ) from exc\n        raise exc\n\n    def _job_run_submission_error(self, exc, configuration):\n        \"\"\"Provides a nicer error for 404s when submitting job runs.\"\"\"\n        if exc.status_code == 404:\n            pat1 = r\"The requested URL [^ ]+ was not found on this server\"\n            # pat2 = (\n            #     r\"Resource '[^ ]+' of kind 'JOB' in region '[\\w\\-0-9]+' \"\n            #     r\"in project '[\\w\\-0-9]+' does not exist\"\n            # )\n            if re.findall(pat1, str(exc)):\n                raise RuntimeError(\n                    f\"Failed to find resources at {exc.uri}. \"\n                    f\"Confirm that region '{self.region}' is \"\n                    f\"the correct region for your Cloud Run Job \"\n                    f\"and that '{configuration.project}' is the \"\n                    f\"correct GCP project. If your project ID is not \"\n                    f\"correct, you are using a Credentials \"\n                    f\"block with permissions for the wrong project.\"\n                ) from exc\n            else:\n                raise exc\n\n        raise exc\n\n    async def run(\n        self,\n        flow_run: \"FlowRun\",\n        configuration: CloudRunWorkerJobConfiguration,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> CloudRunWorkerResult:\n        \"\"\"\n        Executes a flow run within a Cloud Run Job and waits for the flow run\n        to complete.\n\n        Args:\n            flow_run: The flow run to execute\n            configuration: The configuration to use when executing the flow run.\n            task_status: The task status object for the current flow run. 
If provided,\n                the task will be marked as started.\n\n        Returns:\n            CloudRunWorkerResult: A result object containing information about the\n                final state of the flow run\n        \"\"\"\n\n        logger = self.get_flow_run_logger(flow_run)\n\n        with self._get_client(configuration) as client:\n            await run_sync_in_worker_thread(\n                self._create_job_and_wait_for_registration,\n                configuration,\n                client,\n                logger,\n            )\n            job_execution = await run_sync_in_worker_thread(\n                self._begin_job_execution, configuration, client, logger\n            )\n\n            if task_status:\n                task_status.started(configuration.job_name)\n\n            result = await run_sync_in_worker_thread(\n                self._watch_job_execution_and_get_result,\n                configuration,\n                client,\n                job_execution,\n                logger,\n            )\n            return result\n\n    def _get_client(self, configuration: CloudRunWorkerJobConfiguration) -> Resource:\n        \"\"\"Get the base client needed for interacting with GCP APIs.\"\"\"\n        # region needed for 'v1' API\n        api_endpoint = f\"https://{configuration.region}-run.googleapis.com\"\n        gcp_creds = configuration.credentials.get_credentials_from_service_account()\n        options = ClientOptions(api_endpoint=api_endpoint)\n\n        return discovery.build(\n            \"run\", \"v1\", client_options=options, credentials=gcp_creds\n        ).namespaces()\n\n    def _create_job_and_wait_for_registration(\n        self,\n        configuration: CloudRunWorkerJobConfiguration,\n        client: Resource,\n        logger: PrefectLogAdapter,\n    ) -> None:\n        \"\"\"Create a new job wait for it to finish registering.\"\"\"\n        try:\n            logger.info(f\"Creating Cloud Run Job {configuration.job_name}\")\n\n            Job.create(\n                client=client,\n                namespace=configuration.credentials.project,\n                body=configuration.job_body,\n            )\n        except googleapiclient.errors.HttpError as exc:\n            self._create_job_error(exc, configuration)\n\n        try:\n            self._wait_for_job_creation(\n                client=client, configuration=configuration, logger=logger\n            )\n        except Exception:\n            logger.exception(\n                \"Encountered an exception while waiting for job run creation\"\n            )\n            if not configuration.keep_job:\n                logger.info(\n                    f\"Deleting Cloud Run Job {configuration.job_name} from \"\n                    \"Google Cloud Run.\"\n                )\n                try:\n                    Job.delete(\n                        client=client,\n                        namespace=configuration.credentials.project,\n                        job_name=configuration.job_name,\n                    )\n                except Exception:\n                    logger.exception(\n                        \"Received an unexpected exception while attempting to delete\"\n                        f\" Cloud Run Job {configuration.job_name!r}\"\n                    )\n            raise\n\n    def _begin_job_execution(\n        self,\n        configuration: CloudRunWorkerJobConfiguration,\n        client: Resource,\n        logger: PrefectLogAdapter,\n    ) -> Execution:\n        \"\"\"Submit a job run 
for execution and return the execution object.\"\"\"\n        try:\n            logger.info(\n                f\"Submitting Cloud Run Job {configuration.job_name!r} for execution.\"\n            )\n            submission = Job.run(\n                client=client,\n                namespace=configuration.project,\n                job_name=configuration.job_name,\n            )\n\n            job_execution = Execution.get(\n                client=client,\n                namespace=submission[\"metadata\"][\"namespace\"],\n                execution_name=submission[\"metadata\"][\"name\"],\n            )\n        except Exception as exc:\n            self._job_run_submission_error(exc, configuration)\n\n        return job_execution\n\n    def _watch_job_execution_and_get_result(\n        self,\n        configuration: CloudRunWorkerJobConfiguration,\n        client: Resource,\n        execution: Execution,\n        logger: PrefectLogAdapter,\n        poll_interval: int = 5,\n    ) -> CloudRunWorkerResult:\n        \"\"\"Wait for execution to complete and then return result.\"\"\"\n        try:\n            job_execution = self._watch_job_execution(\n                client=client,\n                job_execution=execution,\n                timeout=configuration.timeout,\n                poll_interval=poll_interval,\n            )\n        except Exception:\n            logger.exception(\n                \"Received an unexpected exception while monitoring Cloud Run Job \"\n                f\"{configuration.job_name!r}\"\n            )\n            raise\n\n        if job_execution.succeeded():\n            status_code = 0\n            logger.info(f\"Job Run {configuration.job_name} completed successfully\")\n        else:\n            status_code = 1\n            error_msg = job_execution.condition_after_completion()[\"message\"]\n            logger.error(\n                \"Job Run {configuration.job_name} did not complete successfully. 
\"\n                f\"{error_msg}\"\n            )\n\n        logger.info(f\"Job Run logs can be found on GCP at: {job_execution.log_uri}\")\n\n        if not configuration.keep_job:\n            logger.info(\n                f\"Deleting completed Cloud Run Job {configuration.job_name!r} \"\n                \"from Google Cloud Run...\"\n            )\n            try:\n                Job.delete(\n                    client=client,\n                    namespace=configuration.project,\n                    job_name=configuration.job_name,\n                )\n            except Exception:\n                logger.exception(\n                    \"Received an unexpected exception while attempting to delete Cloud\"\n                    f\" Run Job {configuration.job_name}\"\n                )\n\n        return CloudRunWorkerResult(\n            identifier=configuration.job_name, status_code=status_code\n        )\n\n    def _watch_job_execution(\n        self, client, job_execution: Execution, timeout: int, poll_interval: int = 5\n    ):\n        \"\"\"\n        Update job_execution status until it is no longer running or timeout is reached.\n        \"\"\"\n        t0 = time.time()\n        while job_execution.is_running():\n            job_execution = Execution.get(\n                client=client,\n                namespace=job_execution.namespace,\n                execution_name=job_execution.name,\n            )\n\n            elapsed_time = time.time() - t0\n            if timeout is not None and elapsed_time > timeout:\n                raise RuntimeError(\n                    f\"Timed out after {elapsed_time}s while waiting for Cloud Run Job \"\n                    \"execution to complete. Your job may still be running on GCP.\"\n                )\n\n            time.sleep(poll_interval)\n\n        return job_execution\n\n    def _wait_for_job_creation(\n        self,\n        client: Resource,\n        configuration: CloudRunWorkerJobConfiguration,\n        logger: PrefectLogAdapter,\n        poll_interval: int = 5,\n    ):\n        \"\"\"Give created job time to register.\"\"\"\n        job = Job.get(\n            client=client,\n            namespace=configuration.project,\n            job_name=configuration.job_name,\n        )\n\n        t0 = time.time()\n        while not job.is_ready():\n            ready_condition = (\n                job.ready_condition\n                if job.ready_condition\n                else \"waiting for condition update\"\n            )\n            logger.info(f\"Job is not yet ready... Current condition: {ready_condition}\")\n            job = Job.get(\n                client=client,\n                namespace=configuration.project,\n                job_name=configuration.job_name,\n            )\n\n            elapsed_time = time.time() - t0\n            if (\n                configuration.timeout is not None\n                and elapsed_time > configuration.timeout\n            ):\n                raise RuntimeError(\n                    f\"Timed out after {elapsed_time}s while waiting for Cloud Run Job \"\n                    \"execution to complete. 
Your job may still be running on GCP.\"\n                )\n\n            time.sleep(poll_interval)\n\n    async def kill_infrastructure(\n        self,\n        infrastructure_pid: str,\n        configuration: CloudRunWorkerJobConfiguration,\n        grace_seconds: int = 30,\n    ):\n        \"\"\"\n        Stops a job for a cancelled flow run based on the provided infrastructure PID\n        and run configuration.\n        \"\"\"\n        if grace_seconds != 30:\n            self._logger.warning(\n                f\"Kill grace period of {grace_seconds}s requested, but GCP does not \"\n                \"support dynamic grace period configuration. See here for more info: \"\n                \"https://cloud.google.com/run/docs/reference/rest/v1/namespaces.jobs/delete\"  # noqa\n            )\n\n        with self._get_client(configuration) as client:\n            await run_sync_in_worker_thread(\n                self._stop_job,\n                client=client,\n                namespace=configuration.project,\n                job_name=infrastructure_pid,\n            )\n\n    def _stop_job(self, client: Resource, namespace: str, job_name: str):\n        try:\n            Job.delete(client=client, namespace=namespace, job_name=job_name)\n        except Exception as exc:\n            if \"does not exist\" in str(exc):\n                raise InfrastructureNotFound(\n                    f\"Cannot stop Cloud Run Job; the job name {job_name!r} \"\n                    \"could not be found.\"\n                ) from exc\n            raise\n
        "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run.CloudRunWorker.kill_infrastructure","title":"kill_infrastructure async","text":"

        Stops a job for a cancelled flow run based on the provided infrastructure PID and run configuration.

        Source code in prefect_gcp/workers/cloud_run.py
        async def kill_infrastructure(\n    self,\n    infrastructure_pid: str,\n    configuration: CloudRunWorkerJobConfiguration,\n    grace_seconds: int = 30,\n):\n    \"\"\"\n    Stops a job for a cancelled flow run based on the provided infrastructure PID\n    and run configuration.\n    \"\"\"\n    if grace_seconds != 30:\n        self._logger.warning(\n            f\"Kill grace period of {grace_seconds}s requested, but GCP does not \"\n            \"support dynamic grace period configuration. See here for more info: \"\n            \"https://cloud.google.com/run/docs/reference/rest/v1/namespaces.jobs/delete\"  # noqa\n        )\n\n    with self._get_client(configuration) as client:\n        await run_sync_in_worker_thread(\n            self._stop_job,\n            client=client,\n            namespace=configuration.project,\n            job_name=infrastructure_pid,\n        )\n
        "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run.CloudRunWorker.run","title":"run async","text":"

        Executes a flow run within a Cloud Run Job and waits for the flow run to complete.

        Parameters:

        Name Type Description Default flow_run FlowRun

        The flow run to execute

        required configuration CloudRunWorkerJobConfiguration

        The configuration to use when executing the flow run.

        required task_status Optional[TaskStatus]

        The task status object for the current flow run. If provided, the task will be marked as started.

        None

        Returns:

        Name Type Description CloudRunWorkerResult CloudRunWorkerResult

        A result object containing information about the final state of the flow run

        Source code in prefect_gcp/workers/cloud_run.py
        async def run(\n    self,\n    flow_run: \"FlowRun\",\n    configuration: CloudRunWorkerJobConfiguration,\n    task_status: Optional[anyio.abc.TaskStatus] = None,\n) -> CloudRunWorkerResult:\n    \"\"\"\n    Executes a flow run within a Cloud Run Job and waits for the flow run\n    to complete.\n\n    Args:\n        flow_run: The flow run to execute\n        configuration: The configuration to use when executing the flow run.\n        task_status: The task status object for the current flow run. If provided,\n            the task will be marked as started.\n\n    Returns:\n        CloudRunWorkerResult: A result object containing information about the\n            final state of the flow run\n    \"\"\"\n\n    logger = self.get_flow_run_logger(flow_run)\n\n    with self._get_client(configuration) as client:\n        await run_sync_in_worker_thread(\n            self._create_job_and_wait_for_registration,\n            configuration,\n            client,\n            logger,\n        )\n        job_execution = await run_sync_in_worker_thread(\n            self._begin_job_execution, configuration, client, logger\n        )\n\n        if task_status:\n            task_status.started(configuration.job_name)\n\n        result = await run_sync_in_worker_thread(\n            self._watch_job_execution_and_get_result,\n            configuration,\n            client,\n            job_execution,\n            logger,\n        )\n        return result\n
        "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run.CloudRunWorkerJobConfiguration","title":"CloudRunWorkerJobConfiguration","text":"

        Bases: BaseJobConfiguration

        Configuration class used by the Cloud Run Worker to create a Cloud Run Job.

        An instance of this class is passed to the Cloud Run worker's run method for each flow run. It contains all information necessary to execute the flow run as a Cloud Run Job.

        Attributes:

        Name Type Description region str

        The region where the Cloud Run Job resides.

        credentials Optional[GcpCredentials]

        The GCP Credentials used to connect to Cloud Run.

        job_body Dict[str, Any]

        The job body used to create the Cloud Run Job.

        timeout Optional[int]

        The length of time that Prefect will wait for a Cloud Run Job.

        keep_job Optional[bool]

        Whether to delete the Cloud Run Job after it completes.

        Source code in prefect_gcp/workers/cloud_run.py
        class CloudRunWorkerJobConfiguration(BaseJobConfiguration):\n    \"\"\"\n    Configuration class used by the Cloud Run Worker to create a Cloud Run Job.\n\n    An instance of this class is passed to the Cloud Run worker's `run` method\n    for each flow run. It contains all information necessary to execute\n    the flow run as a Cloud Run Job.\n\n    Attributes:\n        region: The region where the Cloud Run Job resides.\n        credentials: The GCP Credentials used to connect to Cloud Run.\n        job_body: The job body used to create the Cloud Run Job.\n        timeout: The length of time that Prefect will wait for a Cloud Run Job.\n        keep_job: Whether to delete the Cloud Run Job after it completes.\n    \"\"\"\n\n    region: str = Field(\n        default=\"us-central1\", description=\"The region where the Cloud Run Job resides.\"\n    )\n    credentials: Optional[GcpCredentials] = Field(\n        title=\"GCP Credentials\",\n        default_factory=GcpCredentials,\n        description=\"The GCP Credentials used to connect to Cloud Run. \"\n        \"If not provided credentials will be inferred from \"\n        \"the local environment.\",\n    )\n    job_body: Dict[str, Any] = Field(template=_get_default_job_body_template())\n    timeout: Optional[int] = Field(\n        default=600,\n        gt=0,\n        le=3600,\n        title=\"Job Timeout\",\n        description=(\n            \"The length of time that Prefect will wait for a Cloud Run Job to complete \"\n            \"before raising an exception.\"\n        ),\n    )\n    keep_job: Optional[bool] = Field(\n        default=False,\n        title=\"Keep Job After Completion\",\n        description=\"Keep the completed Cloud Run Job on Google Cloud Platform.\",\n    )\n\n    @property\n    def project(self) -> str:\n        \"\"\"property for accessing the project from the credentials.\"\"\"\n        return self.credentials.project\n\n    @property\n    def job_name(self) -> str:\n        \"\"\"property for accessing the name from the job metadata.\"\"\"\n        return self.job_body[\"metadata\"][\"name\"]\n\n    def prepare_for_flow_run(\n        self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"DeploymentResponse\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ):\n        \"\"\"\n        Prepares the job configuration for a flow run.\n\n        Ensures that necessary values are present in the job body and that the\n        job body is valid.\n\n        Args:\n            flow_run: The flow run to prepare the job configuration for\n            deployment: The deployment associated with the flow run used for\n                preparation.\n            flow: The flow associated with the flow run used for preparation.\n        \"\"\"\n        super().prepare_for_flow_run(flow_run, deployment, flow)\n\n        self._populate_envs()\n        self._populate_or_format_command()\n        self._format_args_if_present()\n        self._populate_image_if_not_present()\n        self._populate_name_if_not_present()\n\n    def _populate_envs(self):\n        \"\"\"Populate environment variables. BaseWorker.prepare_for_flow_run handles\n        putting the environment variables in the `env` attribute. 
This method\n        moves them into the jobs body\"\"\"\n        envs = [{\"name\": k, \"value\": v} for k, v in self.env.items()]\n        self.job_body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\"containers\"][0][\n            \"env\"\n        ] = envs\n\n    def _populate_name_if_not_present(self):\n        \"\"\"Adds the flow run name to the job if one is not already provided.\"\"\"\n        try:\n            if \"name\" not in self.job_body[\"metadata\"]:\n                base_job_name = slugify_name(self.name)\n                job_name = f\"{base_job_name}-{uuid4().hex}\"\n                self.job_body[\"metadata\"][\"name\"] = job_name\n        except KeyError:\n            raise ValueError(\"Unable to verify name due to invalid job body template.\")\n\n    def _populate_image_if_not_present(self):\n        \"\"\"Adds the latest prefect image to the job if one is not already provided.\"\"\"\n        try:\n            if (\n                \"image\"\n                not in self.job_body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\n                    \"containers\"\n                ][0]\n            ):\n                self.job_body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\n                    \"containers\"\n                ][0][\"image\"] = f\"docker.io/{get_prefect_image_name()}\"\n        except KeyError:\n            raise ValueError(\"Unable to verify image due to invalid job body template.\")\n\n    def _populate_or_format_command(self):\n        \"\"\"\n        Ensures that the command is present in the job manifest. Populates the command\n        with the `prefect -m prefect.engine` if a command is not present.\n        \"\"\"\n        try:\n            command = self.job_body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\n                \"containers\"\n            ][0].get(\"command\")\n            if command is None:\n                self.job_body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\n                    \"containers\"\n                ][0][\"command\"] = shlex.split(self._base_flow_run_command())\n            elif isinstance(command, str):\n                self.job_body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\n                    \"containers\"\n                ][0][\"command\"] = shlex.split(command)\n        except KeyError:\n            raise ValueError(\n                \"Unable to verify command due to invalid job body template.\"\n            )\n\n    def _format_args_if_present(self):\n        try:\n            args = self.job_body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\n                \"containers\"\n            ][0].get(\"args\")\n            if args is not None and isinstance(args, str):\n                self.job_body[\"spec\"][\"template\"][\"spec\"][\"template\"][\"spec\"][\n                    \"containers\"\n                ][0][\"args\"] = shlex.split(args)\n        except KeyError:\n            raise ValueError(\"Unable to verify args due to invalid job body template.\")\n\n    @validator(\"job_body\")\n    def _ensure_job_includes_all_required_components(cls, value: Dict[str, Any]):\n        \"\"\"\n        Ensures that the job body includes all required components.\n        \"\"\"\n        patch = JsonPatch.from_diff(value, _get_base_job_body())\n        missing_paths = sorted([op[\"path\"] for op in patch if op[\"op\"] == \"add\"])\n        if missing_paths:\n            raise ValueError(\n                \"Job is missing 
required attributes at the following paths: \"\n                f\"{', '.join(missing_paths)}\"\n            )\n        return value\n\n    @validator(\"job_body\")\n    def _ensure_job_has_compatible_values(cls, value: Dict[str, Any]):\n        \"\"\"Ensure that the job body has compatible values.\"\"\"\n        patch = JsonPatch.from_diff(value, _get_base_job_body())\n        incompatible = sorted(\n            [\n                f\"{op['path']} must have value {op['value']!r}\"\n                for op in patch\n                if op[\"op\"] == \"replace\"\n            ]\n        )\n        if incompatible:\n            raise ValueError(\n                \"Job has incompatible values for the following attributes: \"\n                f\"{', '.join(incompatible)}\"\n            )\n        return value\n
        "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run.CloudRunWorkerJobConfiguration.job_name","title":"job_name: str property","text":"

Property for accessing the name from the job metadata.

        "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run.CloudRunWorkerJobConfiguration.project","title":"project: str property","text":"

Property for accessing the project from the credentials.

        "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run.CloudRunWorkerJobConfiguration.prepare_for_flow_run","title":"prepare_for_flow_run","text":"

        Prepares the job configuration for a flow run.

        Ensures that necessary values are present in the job body and that the job body is valid.

        Parameters:

        Name Type Description Default flow_run FlowRun

        The flow run to prepare the job configuration for

        required deployment Optional[DeploymentResponse]

        The deployment associated with the flow run used for preparation.

        None flow Optional[Flow]

        The flow associated with the flow run used for preparation.

        None Source code in prefect_gcp/workers/cloud_run.py
        def prepare_for_flow_run(\n    self,\n    flow_run: \"FlowRun\",\n    deployment: Optional[\"DeploymentResponse\"] = None,\n    flow: Optional[\"Flow\"] = None,\n):\n    \"\"\"\n    Prepares the job configuration for a flow run.\n\n    Ensures that necessary values are present in the job body and that the\n    job body is valid.\n\n    Args:\n        flow_run: The flow run to prepare the job configuration for\n        deployment: The deployment associated with the flow run used for\n            preparation.\n        flow: The flow associated with the flow run used for preparation.\n    \"\"\"\n    super().prepare_for_flow_run(flow_run, deployment, flow)\n\n    self._populate_envs()\n    self._populate_or_format_command()\n    self._format_args_if_present()\n    self._populate_image_if_not_present()\n    self._populate_name_if_not_present()\n
        "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run.CloudRunWorkerResult","title":"CloudRunWorkerResult","text":"

        Bases: BaseWorkerResult

Contains information about the final state of a completed process.

        Source code in prefect_gcp/workers/cloud_run.py
        class CloudRunWorkerResult(BaseWorkerResult):\n    \"\"\"Contains information about the final state of a completed process\"\"\"\n
        "},{"location":"integrations/prefect-gcp/cloud_run_worker/#prefect_gcp.workers.cloud_run.CloudRunWorkerVariables","title":"CloudRunWorkerVariables","text":"

        Bases: BaseVariables

        Default variables for the Cloud Run worker.

        The schema for this class is used to populate the variables section of the default base job template.

        Source code in prefect_gcp/workers/cloud_run.py
        class CloudRunWorkerVariables(BaseVariables):\n    \"\"\"\n    Default variables for the Cloud Run worker.\n\n    The schema for this class is used to populate the `variables` section of the default\n    base job template.\n    \"\"\"\n\n    region: str = Field(\n        default=\"us-central1\",\n        description=\"The region where the Cloud Run Job resides.\",\n        example=\"us-central1\",\n    )\n    credentials: Optional[GcpCredentials] = Field(\n        title=\"GCP Credentials\",\n        default_factory=GcpCredentials,\n        description=\"The GCP Credentials used to initiate the \"\n        \"Cloud Run Job. If not provided credentials will be \"\n        \"inferred from the local environment.\",\n    )\n    image: Optional[str] = Field(\n        default=None,\n        title=\"Image Name\",\n        description=(\n            \"The image to use for a new Cloud Run Job. \"\n            \"If not set, the latest Prefect image will be used. \"\n            \"See https://cloud.google.com/run/docs/deploying#images.\"\n        ),\n        example=\"docker.io/prefecthq/prefect:2-latest\",\n    )\n    cpu: Optional[str] = Field(\n        default=None,\n        title=\"CPU\",\n        description=(\n            \"The amount of compute allocated to the Cloud Run Job. \"\n            \"(1000m = 1 CPU). See \"\n            \"https://cloud.google.com/run/docs/configuring/cpu#setting-jobs.\"\n        ),\n        example=\"1000m\",\n        regex=r\"^(\\d*000)m$\",\n    )\n    memory: Optional[str] = Field(\n        default=None,\n        title=\"Memory\",\n        description=(\n            \"The amount of memory allocated to the Cloud Run Job. \"\n            \"Must be specified in units of 'G', 'Gi', 'M', or 'Mi'. \"\n            \"See https://cloud.google.com/run/docs/configuring/memory-limits#setting.\"\n        ),\n        example=\"512Mi\",\n        regex=r\"^\\d+(?:G|Gi|M|Mi)$\",\n    )\n    vpc_connector_name: Optional[str] = Field(\n        default=None,\n        title=\"VPC Connector Name\",\n        description=\"The name of the VPC connector to use for the Cloud Run Job.\",\n    )\n    service_account_name: Optional[str] = Field(\n        default=None,\n        title=\"Service Account Name\",\n        description=\"The name of the service account to use for the task execution \"\n        \"of Cloud Run Job. By default Cloud Run jobs run as the default \"\n        \"Compute Engine Service Account. \",\n        example=\"service-account@example.iam.gserviceaccount.com\",\n    )\n    keep_job: Optional[bool] = Field(\n        default=False,\n        title=\"Keep Job After Completion\",\n        description=\"Keep the completed Cloud Run Job after it has run.\",\n    )\n    timeout: Optional[int] = Field(\n        default=600,\n        gt=0,\n        le=3600,\n        title=\"Job Timeout\",\n        description=(\n            \"The length of time that Prefect will wait for Cloud Run Job state changes.\"\n        ),\n    )\n
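
These variables can be overridden per deployment. A hedged sketch, assuming the flow.deploy job_variables parameter is available in your Prefect version; the pool name and image are placeholders:

from prefect import flow\n\n@flow\ndef my_flow():\n    ...\n\nif __name__ == \"__main__\":\n    my_flow.deploy(\n        name=\"cloud-run-sized\",\n        work_pool_name=\"my-cloud-run-pool\",\n        image=\"us-docker.pkg.dev/my-project/my-repo/my-image:latest\",\n        # These keys map to the CloudRunWorkerVariables fields shown above.\n        job_variables={\"cpu\": \"1000m\", \"memory\": \"512Mi\", \"timeout\": 1200},\n    )\n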
        "},{"location":"integrations/prefect-gcp/cloud_run_worker_v2/","title":"Cloud run worker v2","text":""},{"location":"integrations/prefect-gcp/cloud_run_worker_v2/#prefect_gcp.workers.cloud_run_v2","title":"prefect_gcp.workers.cloud_run_v2","text":""},{"location":"integrations/prefect-gcp/cloud_run_worker_v2/#prefect_gcp.workers.cloud_run_v2.CloudRunWorkerJobV2Configuration","title":"CloudRunWorkerJobV2Configuration","text":"

        Bases: BaseJobConfiguration

        The configuration for the Cloud Run worker V2.

        The schema for this class is used to populate the job_body section of the default base job template.

        Source code in prefect_gcp/workers/cloud_run_v2.py
        class CloudRunWorkerJobV2Configuration(BaseJobConfiguration):\n    \"\"\"\n    The configuration for the Cloud Run worker V2.\n\n    The schema for this class is used to populate the `job_body` section of the\n    default base job template.\n    \"\"\"\n\n    credentials: GcpCredentials = Field(\n        title=\"GCP Credentials\",\n        default_factory=GcpCredentials,\n        description=(\n            \"The GCP Credentials used to connect to Cloud Run. \"\n            \"If not provided credentials will be inferred from \"\n            \"the local environment.\"\n        ),\n    )\n    job_body: Dict[str, Any] = Field(\n        template=_get_default_job_body_template(),\n    )\n    keep_job: bool = Field(\n        default=False,\n        title=\"Keep Job After Completion\",\n        description=\"Keep the completed Cloud run job on Google Cloud Platform.\",\n    )\n    region: str = Field(\n        default=\"us-central1\",\n        description=\"The region in which to run the Cloud Run job\",\n    )\n    timeout: int = Field(\n        default=600,\n        gt=0,\n        le=86400,\n        description=(\n            \"The length of time that Prefect will wait for a Cloud Run Job to \"\n            \"complete before raising an exception.\"\n        ),\n    )\n    _job_name: str = PrivateAttr(default=None)\n\n    @property\n    def project(self) -> str:\n        \"\"\"\n        Returns the GCP project associated with the credentials.\n\n        Returns:\n            str: The GCP project associated with the credentials.\n        \"\"\"\n        return self.credentials.project\n\n    @property\n    def job_name(self) -> str:\n        \"\"\"\n        Returns the name of the job.\n\n        Returns:\n            str: The name of the job.\n        \"\"\"\n        if self._job_name is None:\n            base_job_name = slugify_name(self.name)\n            job_name = f\"{base_job_name}-{uuid4().hex}\"\n            self._job_name = job_name\n\n        return self._job_name\n\n    def prepare_for_flow_run(\n        self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"DeploymentResponse\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ):\n        \"\"\"\n        Prepares the job configuration for a flow run.\n\n        Ensures that necessary values are present in the job body and that the\n        job body is valid.\n\n        Args:\n            flow_run: The flow run to prepare the job configuration for\n            deployment: The deployment associated with the flow run used for\n                preparation.\n            flow: The flow associated with the flow run used for preparation.\n        \"\"\"\n        super().prepare_for_flow_run(\n            flow_run=flow_run,\n            deployment=deployment,\n            flow=flow,\n        )\n\n        self._populate_env()\n        self._populate_or_format_command()\n        self._format_args_if_present()\n        self._populate_image_if_not_present()\n        self._populate_timeout()\n        self._remove_vpc_access_if_unset()\n\n    def _populate_timeout(self):\n        \"\"\"\n        Populates the job body with the timeout.\n        \"\"\"\n        self.job_body[\"template\"][\"template\"][\"timeout\"] = f\"{self.timeout}s\"\n\n    def _populate_env(self):\n        \"\"\"\n        Populates the job body with environment variables.\n        \"\"\"\n        envs = [{\"name\": k, \"value\": v} for k, v in self.env.items()]\n\n        self.job_body[\"template\"][\"template\"][\"containers\"][0][\"env\"] = 
envs\n\n    def _populate_image_if_not_present(self):\n        \"\"\"\n        Populates the job body with the image if not present.\n        \"\"\"\n        if \"image\" not in self.job_body[\"template\"][\"template\"][\"containers\"][0]:\n            self.job_body[\"template\"][\"template\"][\"containers\"][0][\n                \"image\"\n            ] = f\"docker.io/{get_prefect_image_name()}\"\n\n    def _populate_or_format_command(self):\n        \"\"\"\n        Populates the job body with the command if not present.\n        \"\"\"\n        command = self.job_body[\"template\"][\"template\"][\"containers\"][0].get(\"command\")\n\n        if command is None:\n            self.job_body[\"template\"][\"template\"][\"containers\"][0][\n                \"command\"\n            ] = shlex.split(self._base_flow_run_command())\n        elif isinstance(command, str):\n            self.job_body[\"template\"][\"template\"][\"containers\"][0][\n                \"command\"\n            ] = shlex.split(command)\n\n    def _format_args_if_present(self):\n        \"\"\"\n        Formats the job body args if present.\n        \"\"\"\n        args = self.job_body[\"template\"][\"template\"][\"containers\"][0].get(\"args\")\n\n        if args is not None and isinstance(args, str):\n            self.job_body[\"template\"][\"template\"][\"containers\"][0][\n                \"args\"\n            ] = shlex.split(args)\n\n    def _remove_vpc_access_if_unset(self):\n        \"\"\"\n        Removes vpcAccess if unset.\n        \"\"\"\n\n        if \"vpcAccess\" not in self.job_body[\"template\"][\"template\"]:\n            return\n\n        vpc_access = self.job_body[\"template\"][\"template\"][\"vpcAccess\"]\n\n        # if vpcAccess is unset or connector is unset, remove the entire vpcAccess block\n        # otherwise leave the user provided value.\n        if not vpc_access or (\n            len(vpc_access) == 1\n            and \"connector\" in vpc_access\n            and vpc_access[\"connector\"] is None\n        ):\n            self.job_body[\"template\"][\"template\"].pop(\"vpcAccess\")\n\n    # noinspection PyMethodParameters\n    @validator(\"job_body\")\n    def _ensure_job_includes_all_required_components(cls, value: Dict[str, Any]):\n        \"\"\"\n        Ensures that the job body includes all required components.\n\n        Args:\n            value: The job body to validate.\n        Returns:\n            The validated job body.\n        \"\"\"\n        patch = JsonPatch.from_diff(value, _get_base_job_body())\n\n        missing_paths = sorted([op[\"path\"] for op in patch if op[\"op\"] == \"add\"])\n\n        if missing_paths:\n            raise ValueError(\n                f\"Job body is missing required components: {', '.join(missing_paths)}\"\n            )\n\n        return value\n\n    # noinspection PyMethodParameters\n    @validator(\"job_body\")\n    def _ensure_job_has_compatible_values(cls, value: Dict[str, Any]):\n        \"\"\"Ensure that the job body has compatible values.\"\"\"\n        patch = JsonPatch.from_diff(value, _get_base_job_body())\n        incompatible = sorted(\n            [\n                f\"{op['path']} must have value {op['value']!r}\"\n                for op in patch\n                if op[\"op\"] == \"replace\"\n            ]\n        )\n        if incompatible:\n            raise ValueError(\n                \"Job has incompatible values for the following attributes: \"\n                f\"{', '.join(incompatible)}\"\n            )\n        return value\n
        "},{"location":"integrations/prefect-gcp/cloud_run_worker_v2/#prefect_gcp.workers.cloud_run_v2.CloudRunWorkerJobV2Configuration.job_name","title":"job_name: str property","text":"

        Returns the name of the job.

        Returns:

        Name Type Description str str

        The name of the job.

        "},{"location":"integrations/prefect-gcp/cloud_run_worker_v2/#prefect_gcp.workers.cloud_run_v2.CloudRunWorkerJobV2Configuration.project","title":"project: str property","text":"

        Returns the GCP project associated with the credentials.

        Returns:

        Name Type Description str str

        The GCP project associated with the credentials.

        "},{"location":"integrations/prefect-gcp/cloud_run_worker_v2/#prefect_gcp.workers.cloud_run_v2.CloudRunWorkerJobV2Configuration.prepare_for_flow_run","title":"prepare_for_flow_run","text":"

        Prepares the job configuration for a flow run.

        Ensures that necessary values are present in the job body and that the job body is valid.

        Parameters:

        Name Type Description Default flow_run FlowRun

        The flow run to prepare the job configuration for

        required deployment Optional[DeploymentResponse]

        The deployment associated with the flow run used for preparation.

        None flow Optional[Flow]

        The flow associated with the flow run used for preparation.

        None Source code in prefect_gcp/workers/cloud_run_v2.py
        def prepare_for_flow_run(\n    self,\n    flow_run: \"FlowRun\",\n    deployment: Optional[\"DeploymentResponse\"] = None,\n    flow: Optional[\"Flow\"] = None,\n):\n    \"\"\"\n    Prepares the job configuration for a flow run.\n\n    Ensures that necessary values are present in the job body and that the\n    job body is valid.\n\n    Args:\n        flow_run: The flow run to prepare the job configuration for\n        deployment: The deployment associated with the flow run used for\n            preparation.\n        flow: The flow associated with the flow run used for preparation.\n    \"\"\"\n    super().prepare_for_flow_run(\n        flow_run=flow_run,\n        deployment=deployment,\n        flow=flow,\n    )\n\n    self._populate_env()\n    self._populate_or_format_command()\n    self._format_args_if_present()\n    self._populate_image_if_not_present()\n    self._populate_timeout()\n    self._remove_vpc_access_if_unset()\n
        "},{"location":"integrations/prefect-gcp/cloud_run_worker_v2/#prefect_gcp.workers.cloud_run_v2.CloudRunWorkerV2","title":"CloudRunWorkerV2","text":"

        Bases: BaseWorker

        The Cloud Run worker V2.

        Source code in prefect_gcp/workers/cloud_run_v2.py
        class CloudRunWorkerV2(BaseWorker):\n    \"\"\"\n    The Cloud Run worker V2.\n    \"\"\"\n\n    type = \"cloud-run-v2\"\n    job_configuration = CloudRunWorkerJobV2Configuration\n    job_configuration_variables = CloudRunWorkerV2Variables\n    _description = \"Execute flow runs within containers on Google Cloud Run (V2 API). Requires a Google Cloud Platform account.\"  # noqa\n    _display_name = \"Google Cloud Run V2\"\n    _documentation_url = \"https://prefecthq.github.io/prefect-gcp/worker_v2/\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/4SpnOBvMYkHp6z939MDKP6/549a91bc1ce9afd4fb12c68db7b68106/social-icon-google-cloud-1200-630.png?h=250\"  # noqa\n\n    async def run(\n        self,\n        flow_run: \"FlowRun\",\n        configuration: CloudRunWorkerJobV2Configuration,\n        task_status: Optional[TaskStatus] = None,\n    ) -> CloudRunJobV2Result:\n        \"\"\"\n        Runs the flow run on Cloud Run and waits for it to complete.\n\n        Args:\n            flow_run: The flow run to run.\n            configuration: The configuration for the job.\n            task_status: The task status to update.\n\n        Returns:\n            The result of the job.\n        \"\"\"\n        logger = self.get_flow_run_logger(flow_run)\n\n        with self._get_client(configuration=configuration) as cr_client:\n            await run_sync_in_worker_thread(\n                self._create_job_and_wait_for_registration,\n                configuration=configuration,\n                cr_client=cr_client,\n                logger=logger,\n            )\n\n            execution = await run_sync_in_worker_thread(\n                self._begin_job_execution,\n                configuration=configuration,\n                cr_client=cr_client,\n                logger=logger,\n            )\n\n            if task_status:\n                task_status.started(configuration.job_name)\n\n            result = await run_sync_in_worker_thread(\n                self._watch_job_execution_and_get_result,\n                configuration=configuration,\n                cr_client=cr_client,\n                execution=execution,\n                logger=logger,\n            )\n\n            return result\n\n    async def kill_infrastructure(\n        self,\n        infrastructure_pid: str,\n        configuration: CloudRunWorkerJobV2Configuration,\n        grace_seconds: int = 30,\n    ):\n        \"\"\"\n        Stops the Cloud Run job.\n\n        Args:\n            infrastructure_pid: The ID of the infrastructure to stop.\n            configuration: The configuration for the job.\n            grace_seconds: The number of seconds to wait before stopping the job.\n        \"\"\"\n        if grace_seconds != 30:\n            self._logger.warning(\n                f\"Kill grace period of {grace_seconds}s requested, but GCP does not \"\n                \"support dynamic grace period configuration. 
See here for more info: \"\n                \"https://cloud.google.com/run/docs/reference/rest/v1/namespaces.jobs/delete\"  # noqa\n            )\n\n        with self._get_client(configuration=configuration) as cr_client:\n            await run_sync_in_worker_thread(\n                self._stop_job,\n                cr_client=cr_client,\n                configuration=configuration,\n                job_name=infrastructure_pid,\n            )\n\n    @staticmethod\n    def _get_client(\n        configuration: CloudRunWorkerJobV2Configuration,\n    ) -> ResourceWarning:\n        \"\"\"\n        Get the base client needed for interacting with GCP Cloud Run V2 API.\n\n        Returns:\n            Resource: The base client needed for interacting with GCP Cloud Run V2 API.\n        \"\"\"\n        api_endpoint = \"https://run.googleapis.com\"\n        gcp_creds = configuration.credentials.get_credentials_from_service_account()\n\n        options = ClientOptions(api_endpoint=api_endpoint)\n\n        return (\n            discovery.build(\n                \"run\",\n                \"v2\",\n                client_options=options,\n                credentials=gcp_creds,\n                num_retries=3,  # Set to 3 in case of intermittent/connection issues\n            )\n            .projects()\n            .locations()\n        )\n\n    def _create_job_and_wait_for_registration(\n        self,\n        configuration: CloudRunWorkerJobV2Configuration,\n        cr_client: Resource,\n        logger: PrefectLogAdapter,\n    ):\n        \"\"\"\n        Creates the Cloud Run job and waits for it to register.\n\n        Args:\n            configuration: The configuration for the job.\n            cr_client: The Cloud Run client.\n            logger: The logger to use.\n        \"\"\"\n        try:\n            logger.info(f\"Creating Cloud Run JobV2 {configuration.job_name}\")\n\n            JobV2.create(\n                cr_client=cr_client,\n                project=configuration.project,\n                location=configuration.region,\n                job_id=configuration.job_name,\n                body=configuration.job_body,\n            )\n        except HttpError as exc:\n            self._create_job_error(\n                exc=exc,\n                configuration=configuration,\n            )\n\n        try:\n            self._wait_for_job_creation(\n                cr_client=cr_client,\n                configuration=configuration,\n                logger=logger,\n            )\n        except Exception as exc:\n            logger.critical(\n                f\"Failed to create Cloud Run JobV2 {configuration.job_name}.\\n{exc}\"\n            )\n\n            if not configuration.keep_job:\n                try:\n                    JobV2.delete(\n                        cr_client=cr_client,\n                        project=configuration.project,\n                        location=configuration.region,\n                        job_name=configuration.job_name,\n                    )\n                except Exception as exc2:\n                    logger.critical(\n                        f\"Failed to delete Cloud Run JobV2 {configuration.job_name}.\"\n                        f\"\\n{exc2}\"\n                    )\n\n            raise\n\n    @staticmethod\n    def _wait_for_job_creation(\n        cr_client: Resource,\n        configuration: CloudRunWorkerJobV2Configuration,\n        logger: PrefectLogAdapter,\n        poll_interval: int = 5,\n    ):\n        \"\"\"\n        Waits for the Cloud Run job to be 
created.\n\n        Args:\n            cr_client: The Cloud Run client.\n            configuration: The configuration for the job.\n            logger: The logger to use.\n            poll_interval: The interval to poll the Cloud Run job, defaults to 5\n                seconds.\n        \"\"\"\n        job = JobV2.get(\n            cr_client=cr_client,\n            project=configuration.project,\n            location=configuration.region,\n            job_name=configuration.job_name,\n        )\n\n        t0 = time.time()\n\n        while not job.is_ready():\n            if not (ready_condition := job.get_ready_condition()):\n                ready_condition = \"waiting for condition update\"\n\n            logger.info(f\"Current Job Condition: {ready_condition}\")\n\n            job = JobV2.get(\n                cr_client=cr_client,\n                project=configuration.project,\n                location=configuration.region,\n                job_name=configuration.job_name,\n            )\n\n            elapsed_time = time.time() - t0\n\n            if elapsed_time > configuration.timeout:\n                raise RuntimeError(\n                    f\"Timeout of {configuration.timeout} seconds reached while \"\n                    f\"waiting for Cloud Run Job V2 {configuration.job_name} to be \"\n                    \"created.\"\n                )\n\n            time.sleep(poll_interval)\n\n    @staticmethod\n    def _create_job_error(\n        exc: HttpError,\n        configuration: CloudRunWorkerJobV2Configuration,\n    ):\n        \"\"\"\n        Creates a formatted error message for the Cloud Run V2 API errors\n        \"\"\"\n        # noinspection PyUnresolvedReferences\n        if exc.status_code == 404:\n            raise RuntimeError(\n                f\"Failed to find resources at {exc.uri}. Confirm that region\"\n                f\" '{configuration.region}' is the correct region for your Cloud\"\n                f\" Run Job and that {configuration.project} is the correct GCP \"\n                f\" project. 
If your project ID is not correct, you are using a \"\n                f\"Credentials block with permissions for the wrong project.\"\n            ) from exc\n\n        raise exc\n\n    def _begin_job_execution(\n        self,\n        cr_client: Resource,\n        configuration: CloudRunWorkerJobV2Configuration,\n        logger: PrefectLogAdapter,\n    ) -> ExecutionV2:\n        \"\"\"\n        Begins the Cloud Run job execution.\n\n        Args:\n            cr_client: The Cloud Run client.\n            configuration: The configuration for the job.\n            logger: The logger to use.\n\n        Returns:\n            The Cloud Run job execution.\n        \"\"\"\n        try:\n            logger.info(\n                f\"Submitting Cloud Run Job V2 {configuration.job_name} for execution...\"\n            )\n\n            submission = JobV2.run(\n                cr_client=cr_client,\n                project=configuration.project,\n                location=configuration.region,\n                job_name=configuration.job_name,\n            )\n\n            job_execution = ExecutionV2.get(\n                cr_client=cr_client,\n                execution_id=submission[\"metadata\"][\"name\"],\n            )\n\n            command = (\n                \" \".join(configuration.command)\n                if configuration.command\n                else \"default container command\"\n            )\n\n            logger.info(\n                f\"Cloud Run Job V2 {configuration.job_name} submitted for execution \"\n                f\"with command: {command}\"\n            )\n\n            return job_execution\n        except Exception as exc:\n            self._job_run_submission_error(\n                exc=exc,\n                configuration=configuration,\n            )\n            raise\n\n    def _watch_job_execution_and_get_result(\n        self,\n        cr_client: Resource,\n        configuration: CloudRunWorkerJobV2Configuration,\n        execution: ExecutionV2,\n        logger: PrefectLogAdapter,\n        poll_interval: int = 5,\n    ) -> CloudRunJobV2Result:\n        \"\"\"\n        Watch the job execution and get the result.\n\n        Args:\n            cr_client (Resource): The base client needed for interacting with GCP\n                Cloud Run V2 API.\n            configuration (CloudRunWorkerJobV2Configuration): The configuration for\n                the job.\n            execution (ExecutionV2): The execution to watch.\n            logger (PrefectLogAdapter): The logger to use.\n            poll_interval (int): The number of seconds to wait between polls.\n                Defaults to 5 seconds.\n\n        Returns:\n            The result of the job.\n        \"\"\"\n        try:\n            execution = self._watch_job_execution(\n                cr_client=cr_client,\n                configuration=configuration,\n                execution=execution,\n                poll_interval=poll_interval,\n            )\n        except Exception as exc:\n            logger.critical(\n                f\"Encountered an exception while waiting for job run completion - \"\n                f\"{exc}\"\n            )\n            raise\n\n        if execution.succeeded():\n            status_code = 0\n            logger.info(f\"Cloud Run Job V2 {configuration.job_name} succeeded\")\n        else:\n            status_code = 1\n            error_mg = execution.condition_after_completion().get(\"message\")\n            logger.error(\n                f\"Cloud Run Job V2 {configuration.job_name} 
failed - {error_mg}\"\n            )\n\n        logger.info(f\"Job run logs can be found on GCP at: {execution.logUri}\")\n\n        if not configuration.keep_job:\n            logger.info(\n                f\"Deleting completed Cloud Run Job {configuration.job_name!r} from \"\n                \"Google Cloud Run...\"\n            )\n\n            try:\n                JobV2.delete(\n                    cr_client=cr_client,\n                    project=configuration.project,\n                    location=configuration.region,\n                    job_name=configuration.job_name,\n                )\n            except Exception as exc:\n                logger.critical(\n                    \"Received an exception while deleting the Cloud Run Job V2 \"\n                    f\"- {configuration.job_name} - {exc}\"\n                )\n\n        return CloudRunJobV2Result(\n            identifier=configuration.job_name,\n            status_code=status_code,\n        )\n\n    # noinspection DuplicatedCode\n    @staticmethod\n    def _watch_job_execution(\n        cr_client: Resource,\n        configuration: CloudRunWorkerJobV2Configuration,\n        execution: ExecutionV2,\n        poll_interval: int,\n    ) -> ExecutionV2:\n        \"\"\"\n        Update execution status until it is no longer running or timeout is reached.\n\n        Args:\n            cr_client (Resource): The base client needed for interacting with GCP\n                Cloud Run V2 API.\n            configuration (CloudRunWorkerJobV2Configuration): The configuration for\n                the job.\n            execution (ExecutionV2): The execution to watch.\n            poll_interval (int): The number of seconds to wait between polls.\n\n        Returns:\n            The execution.\n        \"\"\"\n        t0 = time.time()\n\n        while execution.is_running():\n            execution = ExecutionV2.get(\n                cr_client=cr_client,\n                execution_id=execution.name,\n            )\n\n            elapsed_time = time.time() - t0\n\n            if elapsed_time > configuration.timeout:\n                raise RuntimeError(\n                    f\"Timeout of {configuration.timeout} seconds reached while \"\n                    f\"waiting for Cloud Run Job V2 {configuration.job_name} to \"\n                    \"complete.\"\n                )\n\n            time.sleep(poll_interval)\n\n        return execution\n\n    @staticmethod\n    def _job_run_submission_error(\n        exc: Exception,\n        configuration: CloudRunWorkerJobV2Configuration,\n    ):\n        \"\"\"\n        Creates a formatted error message for the Cloud Run V2 API errors\n\n        Args:\n            exc: The exception to format.\n            configuration: The configuration for the job.\n        \"\"\"\n        # noinspection PyUnresolvedReferences\n        if exc.status_code == 404:\n            pat1 = r\"The requested URL [^ ]+ was not found on this server\"\n\n            if re.findall(pat1, str(exc)):\n                # noinspection PyUnresolvedReferences\n                raise RuntimeError(\n                    f\"Failed to find resources at {exc.uri}. \"\n                    f\"Confirm that region '{configuration.region}' is \"\n                    f\"the correct region for your Cloud Run Job \"\n                    f\"and that '{configuration.project}' is the \"\n                    f\"correct GCP project. 
If your project ID is not \"\n                    f\"correct, you are using a Credentials \"\n                    f\"block with permissions for the wrong project.\"\n                ) from exc\n            else:\n                raise exc\n\n    @staticmethod\n    def _stop_job(\n        cr_client: Resource,\n        configuration: CloudRunWorkerJobV2Configuration,\n        job_name: str,\n    ):\n        \"\"\"\n        Stops/deletes the Cloud Run job.\n\n        Args:\n            cr_client: The Cloud Run client.\n            configuration: The configuration for the job.\n            job_name: The name of the job to stop.\n        \"\"\"\n        try:\n            JobV2.delete(\n                cr_client=cr_client,\n                project=configuration.project,\n                location=configuration.region,\n                job_name=job_name,\n            )\n        except Exception as exc:\n            if \"does not exist\" in str(exc):\n                raise InfrastructureNotFound(\n                    f\"Cannot stop Cloud Run Job; the job name {job_name!r} \"\n                    \"could not be found.\"\n                ) from exc\n            raise\n
        "},{"location":"integrations/prefect-gcp/cloud_run_worker_v2/#prefect_gcp.workers.cloud_run_v2.CloudRunWorkerV2.kill_infrastructure","title":"kill_infrastructure async","text":"

        Stops the Cloud Run job.

        Parameters:

        infrastructure_pid (str): The ID of the infrastructure to stop. Required.

        configuration (CloudRunWorkerJobV2Configuration): The configuration for the job. Required.

        grace_seconds (int): The number of seconds to wait before stopping the job. Defaults to 30.

        Source code in prefect_gcp/workers/cloud_run_v2.py
        async def kill_infrastructure(\n    self,\n    infrastructure_pid: str,\n    configuration: CloudRunWorkerJobV2Configuration,\n    grace_seconds: int = 30,\n):\n    \"\"\"\n    Stops the Cloud Run job.\n\n    Args:\n        infrastructure_pid: The ID of the infrastructure to stop.\n        configuration: The configuration for the job.\n        grace_seconds: The number of seconds to wait before stopping the job.\n    \"\"\"\n    if grace_seconds != 30:\n        self._logger.warning(\n            f\"Kill grace period of {grace_seconds}s requested, but GCP does not \"\n            \"support dynamic grace period configuration. See here for more info: \"\n            \"https://cloud.google.com/run/docs/reference/rest/v1/namespaces.jobs/delete\"  # noqa\n        )\n\n    with self._get_client(configuration=configuration) as cr_client:\n        await run_sync_in_worker_thread(\n            self._stop_job,\n            cr_client=cr_client,\n            configuration=configuration,\n            job_name=infrastructure_pid,\n        )\n
        "},{"location":"integrations/prefect-gcp/cloud_run_worker_v2/#prefect_gcp.workers.cloud_run_v2.CloudRunWorkerV2.run","title":"run async","text":"

        Runs the flow run on Cloud Run and waits for it to complete.

        Parameters:

        flow_run (FlowRun): The flow run to run. Required.

        configuration (CloudRunWorkerJobV2Configuration): The configuration for the job. Required.

        task_status (Optional[TaskStatus]): The task status to update. Defaults to None.

        Returns:

        CloudRunJobV2Result: The result of the job.

        Source code in prefect_gcp/workers/cloud_run_v2.py
        async def run(\n    self,\n    flow_run: \"FlowRun\",\n    configuration: CloudRunWorkerJobV2Configuration,\n    task_status: Optional[TaskStatus] = None,\n) -> CloudRunJobV2Result:\n    \"\"\"\n    Runs the flow run on Cloud Run and waits for it to complete.\n\n    Args:\n        flow_run: The flow run to run.\n        configuration: The configuration for the job.\n        task_status: The task status to update.\n\n    Returns:\n        The result of the job.\n    \"\"\"\n    logger = self.get_flow_run_logger(flow_run)\n\n    with self._get_client(configuration=configuration) as cr_client:\n        await run_sync_in_worker_thread(\n            self._create_job_and_wait_for_registration,\n            configuration=configuration,\n            cr_client=cr_client,\n            logger=logger,\n        )\n\n        execution = await run_sync_in_worker_thread(\n            self._begin_job_execution,\n            configuration=configuration,\n            cr_client=cr_client,\n            logger=logger,\n        )\n\n        if task_status:\n            task_status.started(configuration.job_name)\n\n        result = await run_sync_in_worker_thread(\n            self._watch_job_execution_and_get_result,\n            configuration=configuration,\n            cr_client=cr_client,\n            execution=execution,\n            logger=logger,\n        )\n\n        return result\n
        "},{"location":"integrations/prefect-gcp/cloud_run_worker_v2/#prefect_gcp.workers.cloud_run_v2.CloudRunWorkerV2Result","title":"CloudRunWorkerV2Result","text":"

        Bases: BaseWorkerResult

        The result of a Cloud Run worker V2 job.

        Source code in prefect_gcp/workers/cloud_run_v2.py
        class CloudRunWorkerV2Result(BaseWorkerResult):\n    \"\"\"\n    The result of a Cloud Run worker V2 job.\n    \"\"\"\n
        "},{"location":"integrations/prefect-gcp/cloud_run_worker_v2/#prefect_gcp.workers.cloud_run_v2.CloudRunWorkerV2Variables","title":"CloudRunWorkerV2Variables","text":"

        Bases: BaseVariables

        Default variables for the Cloud Run worker V2.

        The schema for this class is used to populate the variables section of the default base job template.

        Source code in prefect_gcp/workers/cloud_run_v2.py
        class CloudRunWorkerV2Variables(BaseVariables):\n    \"\"\"\n    Default variables for the Cloud Run worker V2.\n\n    The schema for this class is used to populate the `variables` section of the\n    default base job template.\n    \"\"\"\n\n    credentials: GcpCredentials = Field(\n        title=\"GCP Credentials\",\n        default_factory=GcpCredentials,\n        description=(\n            \"The GCP Credentials used to connect to Cloud Run. \"\n            \"If not provided credentials will be inferred from \"\n            \"the local environment.\"\n        ),\n    )\n    region: str = Field(\n        default=\"us-central1\",\n        description=\"The region in which to run the Cloud Run job\",\n    )\n    image: Optional[str] = Field(\n        default=\"prefecthq/prefect:2-latest\",\n        title=\"Image Name\",\n        description=(\n            \"The image to use for the Cloud Run job. \"\n            \"If not provided the default Prefect image will be used.\"\n        ),\n    )\n    args: List[str] = Field(\n        default_factory=list,\n        description=(\n            \"The arguments to pass to the Cloud Run Job V2's entrypoint command.\"\n        ),\n    )\n    keep_job: bool = Field(\n        default=False,\n        title=\"Keep Job After Completion\",\n        description=\"Keep the completed Cloud run job on Google Cloud Platform.\",\n    )\n    launch_stage: Literal[\n        \"ALPHA\",\n        \"BETA\",\n        \"GA\",\n        \"DEPRECATED\",\n        \"EARLY_ACCESS\",\n        \"PRELAUNCH\",\n        \"UNIMPLEMENTED\",\n        \"LAUNCH_TAG_UNSPECIFIED\",\n    ] = Field(\n        \"BETA\",\n        description=(\n            \"The launch stage of the Cloud Run Job V2. \"\n            \"See https://cloud.google.com/run/docs/about-features-categories \"\n            \"for additional details.\"\n        ),\n    )\n    max_retries: int = Field(\n        default=0,\n        title=\"Max Retries\",\n        description=\"The number of times to retry the Cloud Run job.\",\n    )\n    cpu: str = Field(\n        default=\"1000m\",\n        title=\"CPU\",\n        description=\"The CPU to allocate to the Cloud Run job.\",\n    )\n    memory: str = Field(\n        default=\"512Mi\",\n        title=\"Memory\",\n        description=(\n            \"The memory to allocate to the Cloud Run job along with the units, which\"\n            \"could be: G, Gi, M, Mi.\"\n        ),\n        example=\"512Mi\",\n        pattern=r\"^\\d+(?:G|Gi|M|Mi)$\",\n    )\n    timeout: int = Field(\n        default=600,\n        gt=0,\n        le=86400,\n        title=\"Job Timeout\",\n        description=(\n            \"The length of time that Prefect will wait for a Cloud Run Job to \"\n            \"complete before raising an exception (maximum of 86400 seconds, 1 day).\"\n        ),\n    )\n    vpc_connector_name: Optional[str] = Field(\n        default=None,\n        title=\"VPC Connector Name\",\n        description=\"The name of the VPC connector to use for the Cloud Run job.\",\n    )\n    service_account_name: Optional[str] = Field(\n        default=None,\n        title=\"Service Account Name\",\n        description=(\n            \"The name of the service account to use for the task execution \"\n            \"of Cloud Run Job. By default Cloud Run jobs run as the default \"\n            \"Compute Engine Service Account.\"\n        ),\n        example=\"service-account@example.iam.gserviceaccount.com\",\n    )\n
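        As a minimal sketch (not taken from the source), the variables model can be instantiated directly to preview the values that populate a work pool's base job template; the region, image, memory, and timeout values below are illustrative assumptions.

        ```python
        # Hypothetical illustration: construct the variables model to preview the
        # values that would fill the `variables` section of a Cloud Run V2 work
        # pool's base job template. The specific values here are assumptions.
        from prefect_gcp.workers.cloud_run_v2 import CloudRunWorkerV2Variables

        variables = CloudRunWorkerV2Variables(
            region="europe-west1",
            image="prefecthq/prefect:2-latest",
            memory="1Gi",
            timeout=1800,
        )

        print(variables.region, variables.image, variables.memory, variables.timeout)
        ```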
        "},{"location":"integrations/prefect-gcp/cloud_storage/","title":"Cloud Storage","text":""},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage","title":"prefect_gcp.cloud_storage","text":"

        Tasks for interacting with GCP Cloud Storage.

        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.DataFrameSerializationFormat","title":"DataFrameSerializationFormat","text":"

        Bases: Enum

        An enumeration class representing the file format and compression options for upload_from_dataframe.

        Attributes:

        CSV: Representation for 'csv' file format with no compression and its related content type and suffix.

        CSV_GZIP: Representation for 'csv' file format with 'gzip' compression and its related content type and suffix.

        PARQUET: Representation for 'parquet' file format with no compression and its related content type and suffix.

        PARQUET_SNAPPY: Representation for 'parquet' file format with 'snappy' compression and its related content type and suffix.

        PARQUET_GZIP: Representation for 'parquet' file format with 'gzip' compression and its related content type and suffix.

        Source code in prefect_gcp/cloud_storage.py
        class DataFrameSerializationFormat(Enum):\n    \"\"\"\n    An enumeration class to represent different file formats,\n    compression options for upload_from_dataframe\n\n    Attributes:\n        CSV: Representation for 'csv' file format with no compression\n            and its related content type and suffix.\n\n        CSV_GZIP: Representation for 'csv' file format with 'gzip' compression\n            and its related content type and suffix.\n\n        PARQUET: Representation for 'parquet' file format with no compression\n            and its related content type and suffix.\n\n        PARQUET_SNAPPY: Representation for 'parquet' file format\n            with 'snappy' compression and its related content type and suffix.\n\n        PARQUET_GZIP: Representation for 'parquet' file format\n            with 'gzip' compression and its related content type and suffix.\n    \"\"\"\n\n    CSV = (\"csv\", None, \"text/csv\", \".csv\")\n    CSV_GZIP = (\"csv\", \"gzip\", \"application/x-gzip\", \".csv.gz\")\n    PARQUET = (\"parquet\", None, \"application/octet-stream\", \".parquet\")\n    PARQUET_SNAPPY = (\n        \"parquet\",\n        \"snappy\",\n        \"application/octet-stream\",\n        \".snappy.parquet\",\n    )\n    PARQUET_GZIP = (\"parquet\", \"gzip\", \"application/octet-stream\", \".gz.parquet\")\n\n    @property\n    def format(self) -> str:\n        \"\"\"The file format of the current instance.\"\"\"\n        return self.value[0]\n\n    @property\n    def compression(self) -> Union[str, None]:\n        \"\"\"The compression type of the current instance.\"\"\"\n        return self.value[1]\n\n    @property\n    def content_type(self) -> str:\n        \"\"\"The content type of the current instance.\"\"\"\n        return self.value[2]\n\n    @property\n    def suffix(self) -> str:\n        \"\"\"The suffix of the file format of the current instance.\"\"\"\n        return self.value[3]\n\n    def fix_extension_with(self, gcs_blob_path: str) -> str:\n        \"\"\"Fix the extension of a GCS blob.\n\n        Args:\n            gcs_blob_path: The path to the GCS blob to be modified.\n\n        Returns:\n            The modified path to the GCS blob with the new extension.\n        \"\"\"\n        gcs_blob_path = PurePosixPath(gcs_blob_path)\n        folder = gcs_blob_path.parent\n        filename = PurePosixPath(gcs_blob_path.stem).with_suffix(self.suffix)\n        return str(folder.joinpath(filename))\n
        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.DataFrameSerializationFormat.compression","title":"compression: Union[str, None] property","text":"

        The compression type of the current instance.

        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.DataFrameSerializationFormat.content_type","title":"content_type: str property","text":"

        The content type of the current instance.

        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.DataFrameSerializationFormat.format","title":"format: str property","text":"

        The file format of the current instance.

        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.DataFrameSerializationFormat.suffix","title":"suffix: str property","text":"

        The suffix of the file format of the current instance.

        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.DataFrameSerializationFormat.fix_extension_with","title":"fix_extension_with","text":"

        Fix the extension of a GCS blob.

        Parameters:

        gcs_blob_path (str): The path to the GCS blob to be modified. Required.

        Returns:

        str: The modified path to the GCS blob with the new extension.

        Source code in prefect_gcp/cloud_storage.py
        def fix_extension_with(self, gcs_blob_path: str) -> str:\n    \"\"\"Fix the extension of a GCS blob.\n\n    Args:\n        gcs_blob_path: The path to the GCS blob to be modified.\n\n    Returns:\n        The modified path to the GCS blob with the new extension.\n    \"\"\"\n    gcs_blob_path = PurePosixPath(gcs_blob_path)\n    folder = gcs_blob_path.parent\n    filename = PurePosixPath(gcs_blob_path.stem).with_suffix(self.suffix)\n    return str(folder.joinpath(filename))\n
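        For a quick illustration (a sketch; the blob path "data/my_table.txt" is an assumed example), the properties and fix_extension_with behave as follows for the CSV_GZIP member:

        ```python
        from prefect_gcp.cloud_storage import DataFrameSerializationFormat

        fmt = DataFrameSerializationFormat.CSV_GZIP
        print(fmt.format)        # "csv"
        print(fmt.compression)   # "gzip"
        print(fmt.content_type)  # "application/x-gzip"
        print(fmt.suffix)        # ".csv.gz"

        # The blob path's extension is replaced to match the chosen format.
        print(fmt.fix_extension_with("data/my_table.txt"))  # "data/my_table.csv.gz"
        ```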
        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket","title":"GcsBucket","text":"

        Bases: WritableDeploymentStorage, WritableFileSystem, ObjectStorageBlock

        Block used to store data using GCP Cloud Storage Buckets.

        Note! GcsBucket in prefect-gcp is a unique block, separate from GCS in core Prefect. GcsBucket does not use gcsfs under the hood, instead using the google-cloud-storage package, and offers more configuration and functionality.

        Attributes:

        bucket (str): Name of the bucket.

        gcp_credentials (GcpCredentials): The credentials to authenticate with GCP.

        bucket_folder (str): A default path to a folder within the GCS bucket to use for reading and writing objects.

        Example

        Load stored GCP Cloud Storage Bucket:

        from prefect_gcp.cloud_storage import GcsBucket\ngcp_cloud_storage_bucket_block = GcsBucket.load(\"BLOCK_NAME\")\n

        Source code in prefect_gcp/cloud_storage.py
        class GcsBucket(WritableDeploymentStorage, WritableFileSystem, ObjectStorageBlock):\n    \"\"\"\n    Block used to store data using GCP Cloud Storage Buckets.\n\n    Note! `GcsBucket` in `prefect-gcp` is a unique block, separate from `GCS`\n    in core Prefect. `GcsBucket` does not use `gcsfs` under the hood,\n    instead using the `google-cloud-storage` package, and offers more configuration\n    and functionality.\n\n    Attributes:\n        bucket: Name of the bucket.\n        gcp_credentials: The credentials to authenticate with GCP.\n        bucket_folder: A default path to a folder within the GCS bucket to use\n            for reading and writing objects.\n\n    Example:\n        Load stored GCP Cloud Storage Bucket:\n        ```python\n        from prefect_gcp.cloud_storage import GcsBucket\n        gcp_cloud_storage_bucket_block = GcsBucket.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/10424e311932e31c477ac2b9ef3d53cefbaad708-250x250.png\"  # noqa\n    _block_type_name = \"GCS Bucket\"\n    _documentation_url = \"https://prefecthq.github.io/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket\"  # noqa: E501\n\n    bucket: str = Field(..., description=\"Name of the bucket.\")\n    gcp_credentials: GcpCredentials = Field(\n        default_factory=GcpCredentials,\n        description=\"The credentials to authenticate with GCP.\",\n    )\n    bucket_folder: str = Field(\n        default=\"\",\n        description=(\n            \"A default path to a folder within the GCS bucket to use \"\n            \"for reading and writing objects.\"\n        ),\n    )\n\n    @property\n    def basepath(self) -> str:\n        \"\"\"\n        Read-only property that mirrors the bucket folder.\n\n        Used for deployment.\n        \"\"\"\n        return self.bucket_folder\n\n    @validator(\"bucket_folder\", pre=True, always=True)\n    def _bucket_folder_suffix(cls, value):\n        \"\"\"\n        Ensures that the bucket folder is suffixed with a forward slash.\n        \"\"\"\n        if value != \"\" and not value.endswith(\"/\"):\n            value = f\"{value}/\"\n        return value\n\n    def _resolve_path(self, path: str) -> str:\n        \"\"\"\n        A helper function used in write_path to join `self.bucket_folder` and `path`.\n\n        Args:\n            path: Name of the key, e.g. \"file1\". Each object in your\n                bucket has a unique key (or key name).\n\n        Returns:\n            The joined path.\n        \"\"\"\n        # If bucket_folder provided, it means we won't write to the root dir of\n        # the bucket. So we need to add it on the front of the path.\n        path = (\n            str(PurePosixPath(self.bucket_folder, path)) if self.bucket_folder else path\n        )\n        if path in [\"\", \".\", \"/\"]:\n            # client.bucket.list_blobs(prefix=None) is the proper way\n            # of specifying the root folder of the bucket\n            path = None\n        return path\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> List[Union[str, Path]]:\n        \"\"\"\n        Copies a folder from the configured GCS bucket to a local directory.\n        Defaults to copying the entire contents of the block's bucket_folder\n        to the current working directory.\n\n        Args:\n            from_path: Path in GCS bucket to download from. 
Defaults to the block's\n                configured bucket_folder.\n            local_path: Local path to download GCS bucket contents to.\n                Defaults to the current working directory.\n\n        Returns:\n            A list of downloaded file paths.\n        \"\"\"\n        from_path = (\n            self.bucket_folder if from_path is None else self._resolve_path(from_path)\n        )\n\n        if local_path is None:\n            local_path = os.path.abspath(\".\")\n        else:\n            local_path = os.path.abspath(os.path.expanduser(local_path))\n\n        project = self.gcp_credentials.project\n        client = self.gcp_credentials.get_cloud_storage_client(project=project)\n\n        blobs = await run_sync_in_worker_thread(\n            client.list_blobs, self.bucket, prefix=from_path\n        )\n\n        file_paths = []\n        for blob in blobs:\n            blob_path = blob.name\n            if blob_path[-1] == \"/\":\n                # object is a folder and will be created if it contains any objects\n                continue\n            local_file_path = os.path.join(local_path, blob_path)\n            os.makedirs(os.path.dirname(local_file_path), exist_ok=True)\n\n            with disable_run_logger():\n                file_path = await cloud_storage_download_blob_to_file.fn(\n                    bucket=self.bucket,\n                    blob=blob_path,\n                    path=local_file_path,\n                    gcp_credentials=self.gcp_credentials,\n                )\n                file_paths.append(file_path)\n        return file_paths\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n    ) -> int:\n        \"\"\"\n        Uploads a directory from a given local path to the configured GCS bucket in a\n        given folder.\n\n        Defaults to uploading the entire contents the current working directory to the\n        block's bucket_folder.\n\n        Args:\n            local_path: Path to local directory to upload from.\n            to_path: Path in GCS bucket to upload to. 
Defaults to block's configured\n                bucket_folder.\n            ignore_file: Path to file containing gitignore style expressions for\n                filepaths to ignore.\n\n        Returns:\n            The number of files uploaded.\n        \"\"\"\n        if local_path is None:\n            local_path = os.path.abspath(\".\")\n        else:\n            local_path = os.path.expanduser(local_path)\n\n        to_path = self.bucket_folder if to_path is None else self._resolve_path(to_path)\n\n        included_files = None\n        if ignore_file:\n            with open(ignore_file, \"r\") as f:\n                ignore_patterns = f.readlines()\n            included_files = filter_files(local_path, ignore_patterns)\n\n        uploaded_file_count = 0\n        for local_file_path in Path(local_path).rglob(\"*\"):\n            if (\n                included_files is not None\n                and local_file_path.name not in included_files\n            ):\n                continue\n            elif not local_file_path.is_dir():\n                remote_file_path = str(\n                    PurePosixPath(to_path, local_file_path.relative_to(local_path))\n                )\n                local_file_content = local_file_path.read_bytes()\n                await self.write_path(remote_file_path, content=local_file_content)\n                uploaded_file_count += 1\n\n        return uploaded_file_count\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        \"\"\"\n        Read specified path from GCS and return contents. Provide the entire\n        path to the key in GCS.\n\n        Args:\n            path: Entire path to (and including) the key.\n\n        Returns:\n            A bytes or string representation of the blob object.\n        \"\"\"\n        path = self._resolve_path(path)\n        with disable_run_logger():\n            contents = await cloud_storage_download_blob_as_bytes.fn(\n                bucket=self.bucket, blob=path, gcp_credentials=self.gcp_credentials\n            )\n        return contents\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        \"\"\"\n        Writes to an GCS bucket.\n\n        Args:\n            path: The key name. 
Each object in your bucket has a unique\n                key (or key name).\n            content: What you are uploading to GCS Bucket.\n\n        Returns:\n            The path that the contents were written to.\n        \"\"\"\n        path = self._resolve_path(path)\n        with disable_run_logger():\n            await cloud_storage_upload_blob_from_string.fn(\n                data=content,\n                bucket=self.bucket,\n                blob=path,\n                gcp_credentials=self.gcp_credentials,\n            )\n        return path\n\n    # NEW BLOCK INTERFACE METHODS BELOW\n    def _join_bucket_folder(self, bucket_path: str = \"\") -> str:\n        \"\"\"\n        Joins the base bucket folder to the bucket path.\n\n        NOTE: If a method reuses another method in this class, be careful to not\n        call this  twice because it'll join the bucket folder twice.\n        See https://github.com/PrefectHQ/prefect-aws/issues/141 for a past issue.\n        \"\"\"\n        bucket_path = str(bucket_path)\n        if self.bucket_folder != \"\" and bucket_path.startswith(self.bucket_folder):\n            self.logger.info(\n                f\"Bucket path {bucket_path!r} is already prefixed with \"\n                f\"bucket folder {self.bucket_folder!r}; is this intentional?\"\n            )\n\n        bucket_path = str(PurePosixPath(self.bucket_folder) / bucket_path)\n        if bucket_path in [\"\", \".\", \"/\"]:\n            # client.bucket.list_blobs(prefix=None) is the proper way\n            # of specifying the root folder of the bucket\n            bucket_path = None\n        return bucket_path\n\n    @sync_compatible\n    async def create_bucket(\n        self, location: Optional[str] = None, **create_kwargs\n    ) -> \"Bucket\":\n        \"\"\"\n        Creates a bucket.\n\n        Args:\n            location: The location of the bucket.\n            **create_kwargs: Additional keyword arguments to pass to the\n                `create_bucket` method.\n\n        Returns:\n            The bucket object.\n\n        Examples:\n            Create a bucket.\n            ```python\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket(bucket=\"my-bucket\")\n            gcs_bucket.create_bucket()\n            ```\n        \"\"\"\n        self.logger.info(f\"Creating bucket {self.bucket!r}.\")\n        client = self.gcp_credentials.get_cloud_storage_client()\n        bucket = await run_sync_in_worker_thread(\n            client.create_bucket, self.bucket, location=location, **create_kwargs\n        )\n        return bucket\n\n    @sync_compatible\n    async def get_bucket(self) -> \"Bucket\":\n        \"\"\"\n        Returns the bucket object.\n\n        Returns:\n            The bucket object.\n\n        Examples:\n            Get the bucket object.\n            ```python\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            gcs_bucket.get_bucket()\n            ```\n        \"\"\"\n        self.logger.info(f\"Getting bucket {self.bucket!r}.\")\n        client = self.gcp_credentials.get_cloud_storage_client()\n        bucket = await run_sync_in_worker_thread(client.get_bucket, self.bucket)\n        return bucket\n\n    @sync_compatible\n    async def list_blobs(self, folder: str = \"\") -> List[\"Blob\"]:\n        \"\"\"\n        Lists all blobs in the bucket that are in a folder.\n        Folders are not included in the output.\n\n        Args:\n            
folder: The folder to list blobs from.\n\n        Returns:\n            A list of Blob objects.\n\n        Examples:\n            Get all blobs from a folder named \"prefect\".\n            ```python\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            gcs_bucket.list_blobs(\"prefect\")\n            ```\n        \"\"\"\n        client = self.gcp_credentials.get_cloud_storage_client()\n\n        bucket_path = self._join_bucket_folder(folder)\n        if bucket_path is None:\n            self.logger.info(f\"Listing blobs in bucket {self.bucket!r}.\")\n        else:\n            self.logger.info(\n                f\"Listing blobs in folder {bucket_path!r} in bucket {self.bucket!r}.\"\n            )\n        blobs = await run_sync_in_worker_thread(\n            client.list_blobs, self.bucket, prefix=bucket_path\n        )\n\n        # Ignore folders\n        return [blob for blob in blobs if not blob.name.endswith(\"/\")]\n\n    @sync_compatible\n    async def list_folders(self, folder: str = \"\") -> List[str]:\n        \"\"\"\n        Lists all folders and subfolders in the bucket.\n\n        Args:\n            folder: List all folders and subfolders inside given folder.\n\n        Returns:\n            A list of folders.\n\n        Examples:\n            Get all folders from a bucket named \"my-bucket\".\n            ```python\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            gcs_bucket.list_folders()\n            ```\n\n            Get all folders from a folder called years\n            ```python\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            gcs_bucket.list_folders(\"years\")\n            ```\n        \"\"\"\n\n        # Beware of calling _join_bucket_folder twice, see note in method.\n        # However, we just want to use it to check if we are listing the root folder\n        bucket_path = self._join_bucket_folder(folder)\n        if bucket_path is None:\n            self.logger.info(f\"Listing folders in bucket {self.bucket!r}.\")\n        else:\n            self.logger.info(\n                f\"Listing folders in {bucket_path!r} in bucket {self.bucket!r}.\"\n            )\n\n        blobs = await self.list_blobs(folder)\n        # gets all folders with full path\n        folders = {str(PurePosixPath(blob.name).parent) for blob in blobs}\n\n        return [folder for folder in folders if folder != \".\"]\n\n    @sync_compatible\n    async def download_object_to_path(\n        self,\n        from_path: str,\n        to_path: Optional[Union[str, Path]] = None,\n        **download_kwargs: Dict[str, Any],\n    ) -> Path:\n        \"\"\"\n        Downloads an object from the object storage service to a path.\n\n        Args:\n            from_path: The path to the blob to download; this gets prefixed\n                with the bucket_folder.\n            to_path: The path to download the blob to. 
If not provided, the\n                blob's name will be used.\n            **download_kwargs: Additional keyword arguments to pass to\n                `Blob.download_to_filename`.\n\n        Returns:\n            The absolute path that the object was downloaded to.\n\n        Examples:\n            Download my_folder/notes.txt object to notes.txt.\n            ```python\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            gcs_bucket.download_object_to_path(\"my_folder/notes.txt\", \"notes.txt\")\n            ```\n        \"\"\"\n        if to_path is None:\n            to_path = Path(from_path).name\n\n        # making path absolute, but converting back to str here\n        # since !r looks nicer that way and filename arg expects str\n        to_path = str(Path(to_path).absolute())\n\n        bucket = await self.get_bucket()\n        bucket_path = self._join_bucket_folder(from_path)\n        blob = bucket.blob(bucket_path)\n        self.logger.info(\n            f\"Downloading blob from bucket {self.bucket!r} path {bucket_path!r}\"\n            f\"to {to_path!r}.\"\n        )\n\n        await run_sync_in_worker_thread(\n            blob.download_to_filename, filename=to_path, **download_kwargs\n        )\n        return Path(to_path)\n\n    @sync_compatible\n    async def download_object_to_file_object(\n        self,\n        from_path: str,\n        to_file_object: BinaryIO,\n        **download_kwargs: Dict[str, Any],\n    ) -> BinaryIO:\n        \"\"\"\n        Downloads an object from the object storage service to a file-like object,\n        which can be a BytesIO object or a BufferedWriter.\n\n        Args:\n            from_path: The path to the blob to download from; this gets prefixed\n                with the bucket_folder.\n            to_file_object: The file-like object to download the blob to.\n            **download_kwargs: Additional keyword arguments to pass to\n                `Blob.download_to_file`.\n\n        Returns:\n            The file-like object that the object was downloaded to.\n\n        Examples:\n            Download my_folder/notes.txt object to a BytesIO object.\n            ```python\n            from io import BytesIO\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            with BytesIO() as buf:\n                gcs_bucket.download_object_to_file_object(\"my_folder/notes.txt\", buf)\n            ```\n\n            Download my_folder/notes.txt object to a BufferedWriter.\n            ```python\n                from prefect_gcp.cloud_storage import GcsBucket\n\n                gcs_bucket = GcsBucket.load(\"my-bucket\")\n                with open(\"notes.txt\", \"wb\") as f:\n                    gcs_bucket.download_object_to_file_object(\"my_folder/notes.txt\", f)\n            ```\n        \"\"\"\n        bucket = await self.get_bucket()\n\n        bucket_path = self._join_bucket_folder(from_path)\n        blob = bucket.blob(bucket_path)\n        self.logger.info(\n            f\"Downloading blob from bucket {self.bucket!r} path {bucket_path!r}\"\n            f\"to file object.\"\n        )\n\n        await run_sync_in_worker_thread(\n            blob.download_to_file, file_obj=to_file_object, **download_kwargs\n        )\n        return to_file_object\n\n    @sync_compatible\n    async def download_folder_to_path(\n        self,\n        from_folder: str,\n        to_folder: Optional[Union[str, Path]] 
= None,\n        **download_kwargs: Dict[str, Any],\n    ) -> Path:\n        \"\"\"\n        Downloads objects *within* a folder (excluding the folder itself)\n        from the object storage service to a folder.\n\n        Args:\n            from_folder: The path to the folder to download from; this gets prefixed\n                with the bucket_folder.\n            to_folder: The path to download the folder to. If not provided, will default\n                to the current directory.\n            **download_kwargs: Additional keyword arguments to pass to\n                `Blob.download_to_filename`.\n\n        Returns:\n            The absolute path that the folder was downloaded to.\n\n        Examples:\n            Download my_folder to a local folder named my_folder.\n            ```python\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            gcs_bucket.download_folder_to_path(\"my_folder\", \"my_folder\")\n            ```\n        \"\"\"\n        if to_folder is None:\n            to_folder = \"\"\n        to_folder = Path(to_folder).absolute()\n\n        blobs = await self.list_blobs(folder=from_folder)\n        if len(blobs) == 0:\n            self.logger.warning(\n                f\"No blobs were downloaded from \"\n                f\"bucket {self.bucket!r} path {from_folder!r}.\"\n            )\n            return to_folder\n\n        # do not call self._join_bucket_folder for list_blobs\n        # because it's built-in to that method already!\n        # however, we still need to do it because we're using relative_to\n        bucket_folder = self._join_bucket_folder(from_folder)\n\n        async_coros = []\n        for blob in blobs:\n            bucket_path = PurePosixPath(blob.name).relative_to(bucket_folder)\n            if str(bucket_path).endswith(\"/\"):\n                continue\n            to_path = to_folder / bucket_path\n            to_path.parent.mkdir(parents=True, exist_ok=True)\n            self.logger.info(\n                f\"Downloading blob from bucket {self.bucket!r} path \"\n                f\"{str(bucket_path)!r} to {to_path}.\"\n            )\n            async_coros.append(\n                run_sync_in_worker_thread(\n                    blob.download_to_filename, filename=str(to_path), **download_kwargs\n                )\n            )\n        await asyncio.gather(*async_coros)\n\n        return to_folder\n\n    @sync_compatible\n    async def upload_from_path(\n        self,\n        from_path: Union[str, Path],\n        to_path: Optional[str] = None,\n        **upload_kwargs: Dict[str, Any],\n    ) -> str:\n        \"\"\"\n        Uploads an object from a path to the object storage service.\n\n        Args:\n            from_path: The path to the file to upload from.\n            to_path: The path to upload the file to. 
If not provided, will use\n                the file name of from_path; this gets prefixed\n                with the bucket_folder.\n            **upload_kwargs: Additional keyword arguments to pass to\n                `Blob.upload_from_filename`.\n\n        Returns:\n            The path that the object was uploaded to.\n\n        Examples:\n            Upload notes.txt to my_folder/notes.txt.\n            ```python\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            gcs_bucket.upload_from_path(\"notes.txt\", \"my_folder/notes.txt\")\n            ```\n        \"\"\"\n        if to_path is None:\n            to_path = Path(from_path).name\n\n        bucket_path = self._join_bucket_folder(to_path)\n        bucket = await self.get_bucket()\n        blob = bucket.blob(bucket_path)\n        self.logger.info(\n            f\"Uploading from {from_path!r} to the bucket \"\n            f\"{self.bucket!r} path {bucket_path!r}.\"\n        )\n\n        await run_sync_in_worker_thread(\n            blob.upload_from_filename, filename=from_path, **upload_kwargs\n        )\n        return bucket_path\n\n    @sync_compatible\n    async def upload_from_file_object(\n        self, from_file_object: BinaryIO, to_path: str, **upload_kwargs\n    ) -> str:\n        \"\"\"\n        Uploads an object to the object storage service from a file-like object,\n        which can be a BytesIO object or a BufferedReader.\n\n        Args:\n            from_file_object: The file-like object to upload from.\n            to_path: The path to upload the object to; this gets prefixed\n                with the bucket_folder.\n            **upload_kwargs: Additional keyword arguments to pass to\n                `Blob.upload_from_file`.\n\n        Returns:\n            The path that the object was uploaded to.\n\n        Examples:\n            Upload my_folder/notes.txt object to a BytesIO object.\n            ```python\n            from io import BytesIO\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            with open(\"notes.txt\", \"rb\") as f:\n                gcs_bucket.upload_from_file_object(f, \"my_folder/notes.txt\")\n            ```\n\n            Upload BufferedReader object to my_folder/notes.txt.\n            ```python\n            from io import BufferedReader\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            with open(\"notes.txt\", \"rb\") as f:\n                gcs_bucket.upload_from_file_object(\n                    BufferedReader(f), \"my_folder/notes.txt\"\n                )\n            ```\n        \"\"\"\n        bucket = await self.get_bucket()\n\n        bucket_path = self._join_bucket_folder(to_path)\n        blob = bucket.blob(bucket_path)\n        self.logger.info(\n            f\"Uploading from file object to the bucket \"\n            f\"{self.bucket!r} path {bucket_path!r}.\"\n        )\n\n        await run_sync_in_worker_thread(\n            blob.upload_from_file, from_file_object, **upload_kwargs\n        )\n        return bucket_path\n\n    @sync_compatible\n    async def upload_from_folder(\n        self,\n        from_folder: Union[str, Path],\n        to_folder: Optional[str] = None,\n        **upload_kwargs: Dict[str, Any],\n    ) -> str:\n        \"\"\"\n        Uploads files *within* a folder (excluding the folder itself)\n        to the object storage 
service folder.\n\n        Args:\n            from_folder: The path to the folder to upload from.\n            to_folder: The path to upload the folder to. If not provided, will default\n                to bucket_folder or the base directory of the bucket.\n            **upload_kwargs: Additional keyword arguments to pass to\n                `Blob.upload_from_filename`.\n\n        Returns:\n            The path that the folder was uploaded to.\n\n        Examples:\n            Upload local folder my_folder to the bucket's folder my_folder.\n            ```python\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            gcs_bucket.upload_from_folder(\"my_folder\")\n            ```\n        \"\"\"\n        from_folder = Path(from_folder)\n        # join bucket folder expects string for the first input\n        # when it returns None, we need to convert it back to empty string\n        # so relative_to works\n        bucket_folder = self._join_bucket_folder(to_folder or \"\") or \"\"\n\n        num_uploaded = 0\n        bucket = await self.get_bucket()\n\n        async_coros = []\n        for from_path in from_folder.rglob(\"**/*\"):\n            if from_path.is_dir():\n                continue\n            bucket_path = str(Path(bucket_folder) / from_path.relative_to(from_folder))\n            self.logger.info(\n                f\"Uploading from {str(from_path)!r} to the bucket \"\n                f\"{self.bucket!r} path {bucket_path!r}.\"\n            )\n            blob = bucket.blob(bucket_path)\n            async_coros.append(\n                run_sync_in_worker_thread(\n                    blob.upload_from_filename, filename=from_path, **upload_kwargs\n                )\n            )\n            num_uploaded += 1\n        await asyncio.gather(*async_coros)\n        if num_uploaded == 0:\n            self.logger.warning(f\"No files were uploaded from {from_folder}.\")\n        return bucket_folder\n\n    @sync_compatible\n    async def upload_from_dataframe(\n        self,\n        df: \"DataFrame\",\n        to_path: str,\n        serialization_format: Union[\n            str, DataFrameSerializationFormat\n        ] = DataFrameSerializationFormat.CSV_GZIP,\n        **upload_kwargs: Dict[str, Any],\n    ) -> str:\n        \"\"\"Upload a Pandas DataFrame to Google Cloud Storage in various formats.\n\n        This function uploads the data in a Pandas DataFrame to Google Cloud Storage\n        in a specified format, such as .csv, .csv.gz, .parquet,\n        .parquet.snappy, and .parquet.gz.\n\n        Args:\n            df: The Pandas DataFrame to be uploaded.\n            to_path: The destination path for the uploaded DataFrame.\n            serialization_format: The format to serialize the DataFrame into.\n                When passed as a `str`, the valid options are:\n                'csv', 'csv_gzip',  'parquet', 'parquet_snappy', 'parquet_gzip'.\n                Defaults to `DataFrameSerializationFormat.CSV_GZIP`.\n            **upload_kwargs: Additional keyword arguments to pass to the underlying\n            `Blob.upload_from_dataframe` method.\n\n        Returns:\n            The path that the object was uploaded to.\n        \"\"\"\n        if isinstance(serialization_format, str):\n            serialization_format = DataFrameSerializationFormat[\n                serialization_format.upper()\n            ]\n\n        with BytesIO() as bytes_buffer:\n            if serialization_format.format == 
\"parquet\":\n                df.to_parquet(\n                    path=bytes_buffer,\n                    compression=serialization_format.compression,\n                    index=False,\n                )\n            elif serialization_format.format == \"csv\":\n                df.to_csv(\n                    path_or_buf=bytes_buffer,\n                    compression=serialization_format.compression,\n                    index=False,\n                )\n\n            bytes_buffer.seek(0)\n            to_path = serialization_format.fix_extension_with(gcs_blob_path=to_path)\n\n            return await self.upload_from_file_object(\n                from_file_object=bytes_buffer,\n                to_path=to_path,\n                **{\"content_type\": serialization_format.content_type, **upload_kwargs},\n            )\n
        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.basepath","title":"basepath: str property","text":"

        Read-only property that mirrors the bucket folder.

        Used for deployment.
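
        Example

        A minimal usage sketch, assuming a saved GcsBucket block named \"my-bucket\" (the name is illustrative):

        from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\n# mirrors the configured bucket_folder\nprint(gcs_bucket.basepath)\n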

        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.create_bucket","title":"create_bucket async","text":"

        Creates a bucket.

        Parameters:

        Name Type Description Default location Optional[str]

        The location of the bucket.

        None **create_kwargs

        Additional keyword arguments to pass to the create_bucket method.

        {}

        Returns:

        Type Description Bucket

        The bucket object.

        Examples:

        Create a bucket.

        from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket(bucket=\"my-bucket\")\ngcs_bucket.create_bucket()\n

        Source code in prefect_gcp/cloud_storage.py
        @sync_compatible\nasync def create_bucket(\n    self, location: Optional[str] = None, **create_kwargs\n) -> \"Bucket\":\n    \"\"\"\n    Creates a bucket.\n\n    Args:\n        location: The location of the bucket.\n        **create_kwargs: Additional keyword arguments to pass to the\n            `create_bucket` method.\n\n    Returns:\n        The bucket object.\n\n    Examples:\n        Create a bucket.\n        ```python\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket(bucket=\"my-bucket\")\n        gcs_bucket.create_bucket()\n        ```\n    \"\"\"\n    self.logger.info(f\"Creating bucket {self.bucket!r}.\")\n    client = self.gcp_credentials.get_cloud_storage_client()\n    bucket = await run_sync_in_worker_thread(\n        client.create_bucket, self.bucket, location=location, **create_kwargs\n    )\n    return bucket\n
        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.download_folder_to_path","title":"download_folder_to_path async","text":"

        Downloads objects within a folder (excluding the folder itself) from the object storage service to a folder.

        Parameters:

        Name Type Description Default from_folder str

        The path to the folder to download from; this gets prefixed with the bucket_folder.

        required to_folder Optional[Union[str, Path]]

        The path to download the folder to. If not provided, will default to the current directory.

        None **download_kwargs Dict[str, Any]

        Additional keyword arguments to pass to Blob.download_to_filename.

        {}

        Returns:

        Type Description Path

        The absolute path that the folder was downloaded to.

        Examples:

        Download my_folder to a local folder named my_folder.

        from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\ngcs_bucket.download_folder_to_path(\"my_folder\", \"my_folder\")\n

        Source code in prefect_gcp/cloud_storage.py
        @sync_compatible\nasync def download_folder_to_path(\n    self,\n    from_folder: str,\n    to_folder: Optional[Union[str, Path]] = None,\n    **download_kwargs: Dict[str, Any],\n) -> Path:\n    \"\"\"\n    Downloads objects *within* a folder (excluding the folder itself)\n    from the object storage service to a folder.\n\n    Args:\n        from_folder: The path to the folder to download from; this gets prefixed\n            with the bucket_folder.\n        to_folder: The path to download the folder to. If not provided, will default\n            to the current directory.\n        **download_kwargs: Additional keyword arguments to pass to\n            `Blob.download_to_filename`.\n\n    Returns:\n        The absolute path that the folder was downloaded to.\n\n    Examples:\n        Download my_folder to a local folder named my_folder.\n        ```python\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket.load(\"my-bucket\")\n        gcs_bucket.download_folder_to_path(\"my_folder\", \"my_folder\")\n        ```\n    \"\"\"\n    if to_folder is None:\n        to_folder = \"\"\n    to_folder = Path(to_folder).absolute()\n\n    blobs = await self.list_blobs(folder=from_folder)\n    if len(blobs) == 0:\n        self.logger.warning(\n            f\"No blobs were downloaded from \"\n            f\"bucket {self.bucket!r} path {from_folder!r}.\"\n        )\n        return to_folder\n\n    # do not call self._join_bucket_folder for list_blobs\n    # because it's built-in to that method already!\n    # however, we still need to do it because we're using relative_to\n    bucket_folder = self._join_bucket_folder(from_folder)\n\n    async_coros = []\n    for blob in blobs:\n        bucket_path = PurePosixPath(blob.name).relative_to(bucket_folder)\n        if str(bucket_path).endswith(\"/\"):\n            continue\n        to_path = to_folder / bucket_path\n        to_path.parent.mkdir(parents=True, exist_ok=True)\n        self.logger.info(\n            f\"Downloading blob from bucket {self.bucket!r} path \"\n            f\"{str(bucket_path)!r} to {to_path}.\"\n        )\n        async_coros.append(\n            run_sync_in_worker_thread(\n                blob.download_to_filename, filename=str(to_path), **download_kwargs\n            )\n        )\n    await asyncio.gather(*async_coros)\n\n    return to_folder\n
        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.download_object_to_file_object","title":"download_object_to_file_object async","text":"

        Downloads an object from the object storage service to a file-like object, which can be a BytesIO object or a BufferedWriter.

        Parameters:

        Name Type Description Default from_path str

        The path to the blob to download from; this gets prefixed with the bucket_folder.

        required to_file_object BinaryIO

        The file-like object to download the blob to.

        required **download_kwargs Dict[str, Any]

        Additional keyword arguments to pass to Blob.download_to_file.

        {}

        Returns:

        Type Description BinaryIO

        The file-like object that the object was downloaded to.

        Examples:

        Download my_folder/notes.txt object to a BytesIO object.

        from io import BytesIO\nfrom prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\nwith BytesIO() as buf:\n    gcs_bucket.download_object_to_file_object(\"my_folder/notes.txt\", buf)\n

        Download my_folder/notes.txt object to a BufferedWriter.

        from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\nwith open(\"notes.txt\", \"wb\") as f:\n    gcs_bucket.download_object_to_file_object(\"my_folder/notes.txt\", f)\n

        Source code in prefect_gcp/cloud_storage.py
        @sync_compatible\nasync def download_object_to_file_object(\n    self,\n    from_path: str,\n    to_file_object: BinaryIO,\n    **download_kwargs: Dict[str, Any],\n) -> BinaryIO:\n    \"\"\"\n    Downloads an object from the object storage service to a file-like object,\n    which can be a BytesIO object or a BufferedWriter.\n\n    Args:\n        from_path: The path to the blob to download from; this gets prefixed\n            with the bucket_folder.\n        to_file_object: The file-like object to download the blob to.\n        **download_kwargs: Additional keyword arguments to pass to\n            `Blob.download_to_file`.\n\n    Returns:\n        The file-like object that the object was downloaded to.\n\n    Examples:\n        Download my_folder/notes.txt object to a BytesIO object.\n        ```python\n        from io import BytesIO\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket.load(\"my-bucket\")\n        with BytesIO() as buf:\n            gcs_bucket.download_object_to_file_object(\"my_folder/notes.txt\", buf)\n        ```\n\n        Download my_folder/notes.txt object to a BufferedWriter.\n        ```python\n            from prefect_gcp.cloud_storage import GcsBucket\n\n            gcs_bucket = GcsBucket.load(\"my-bucket\")\n            with open(\"notes.txt\", \"wb\") as f:\n                gcs_bucket.download_object_to_file_object(\"my_folder/notes.txt\", f)\n        ```\n    \"\"\"\n    bucket = await self.get_bucket()\n\n    bucket_path = self._join_bucket_folder(from_path)\n    blob = bucket.blob(bucket_path)\n    self.logger.info(\n        f\"Downloading blob from bucket {self.bucket!r} path {bucket_path!r}\"\n        f\"to file object.\"\n    )\n\n    await run_sync_in_worker_thread(\n        blob.download_to_file, file_obj=to_file_object, **download_kwargs\n    )\n    return to_file_object\n
        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.download_object_to_path","title":"download_object_to_path async","text":"

        Downloads an object from the object storage service to a path.

        Parameters:

        Name Type Description Default from_path str

        The path to the blob to download; this gets prefixed with the bucket_folder.

        required to_path Optional[Union[str, Path]]

        The path to download the blob to. If not provided, the blob's name will be used.

        None **download_kwargs Dict[str, Any]

        Additional keyword arguments to pass to Blob.download_to_filename.

        {}

        Returns:

        Type Description Path

        The absolute path that the object was downloaded to.

        Examples:

        Download my_folder/notes.txt object to notes.txt.

        from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\ngcs_bucket.download_object_to_path(\"my_folder/notes.txt\", \"notes.txt\")\n

        Source code in prefect_gcp/cloud_storage.py
        @sync_compatible\nasync def download_object_to_path(\n    self,\n    from_path: str,\n    to_path: Optional[Union[str, Path]] = None,\n    **download_kwargs: Dict[str, Any],\n) -> Path:\n    \"\"\"\n    Downloads an object from the object storage service to a path.\n\n    Args:\n        from_path: The path to the blob to download; this gets prefixed\n            with the bucket_folder.\n        to_path: The path to download the blob to. If not provided, the\n            blob's name will be used.\n        **download_kwargs: Additional keyword arguments to pass to\n            `Blob.download_to_filename`.\n\n    Returns:\n        The absolute path that the object was downloaded to.\n\n    Examples:\n        Download my_folder/notes.txt object to notes.txt.\n        ```python\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket.load(\"my-bucket\")\n        gcs_bucket.download_object_to_path(\"my_folder/notes.txt\", \"notes.txt\")\n        ```\n    \"\"\"\n    if to_path is None:\n        to_path = Path(from_path).name\n\n    # making path absolute, but converting back to str here\n    # since !r looks nicer that way and filename arg expects str\n    to_path = str(Path(to_path).absolute())\n\n    bucket = await self.get_bucket()\n    bucket_path = self._join_bucket_folder(from_path)\n    blob = bucket.blob(bucket_path)\n    self.logger.info(\n        f\"Downloading blob from bucket {self.bucket!r} path {bucket_path!r}\"\n        f\"to {to_path!r}.\"\n    )\n\n    await run_sync_in_worker_thread(\n        blob.download_to_filename, filename=to_path, **download_kwargs\n    )\n    return Path(to_path)\n
        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.get_bucket","title":"get_bucket async","text":"

        Returns the bucket object.

        Returns:

        Type Description Bucket

        The bucket object.

        Examples:

        Get the bucket object.

        from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\ngcs_bucket.get_bucket()\n

        Source code in prefect_gcp/cloud_storage.py
        @sync_compatible\nasync def get_bucket(self) -> \"Bucket\":\n    \"\"\"\n    Returns the bucket object.\n\n    Returns:\n        The bucket object.\n\n    Examples:\n        Get the bucket object.\n        ```python\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket.load(\"my-bucket\")\n        gcs_bucket.get_bucket()\n        ```\n    \"\"\"\n    self.logger.info(f\"Getting bucket {self.bucket!r}.\")\n    client = self.gcp_credentials.get_cloud_storage_client()\n    bucket = await run_sync_in_worker_thread(client.get_bucket, self.bucket)\n    return bucket\n
        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.get_directory","title":"get_directory async","text":"

        Copies a folder from the configured GCS bucket to a local directory. Defaults to copying the entire contents of the block's bucket_folder to the current working directory.

        Parameters:

        Name Type Description Default from_path Optional[str]

        Path in GCS bucket to download from. Defaults to the block's configured bucket_folder.

        None local_path Optional[str]

        Local path to download GCS bucket contents to. Defaults to the current working directory.

        None

        Returns:

        Type Description List[Union[str, Path]]

        A list of downloaded file paths.
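
        Example

        A minimal usage sketch, assuming a saved GcsBucket block named \"my-bucket\" and an illustrative local folder \"downloaded\":

        from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\n# copy the block's bucket_folder contents into ./downloaded\ngcs_bucket.get_directory(local_path=\"downloaded\")\n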

        Source code in prefect_gcp/cloud_storage.py
        @sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> List[Union[str, Path]]:\n    \"\"\"\n    Copies a folder from the configured GCS bucket to a local directory.\n    Defaults to copying the entire contents of the block's bucket_folder\n    to the current working directory.\n\n    Args:\n        from_path: Path in GCS bucket to download from. Defaults to the block's\n            configured bucket_folder.\n        local_path: Local path to download GCS bucket contents to.\n            Defaults to the current working directory.\n\n    Returns:\n        A list of downloaded file paths.\n    \"\"\"\n    from_path = (\n        self.bucket_folder if from_path is None else self._resolve_path(from_path)\n    )\n\n    if local_path is None:\n        local_path = os.path.abspath(\".\")\n    else:\n        local_path = os.path.abspath(os.path.expanduser(local_path))\n\n    project = self.gcp_credentials.project\n    client = self.gcp_credentials.get_cloud_storage_client(project=project)\n\n    blobs = await run_sync_in_worker_thread(\n        client.list_blobs, self.bucket, prefix=from_path\n    )\n\n    file_paths = []\n    for blob in blobs:\n        blob_path = blob.name\n        if blob_path[-1] == \"/\":\n            # object is a folder and will be created if it contains any objects\n            continue\n        local_file_path = os.path.join(local_path, blob_path)\n        os.makedirs(os.path.dirname(local_file_path), exist_ok=True)\n\n        with disable_run_logger():\n            file_path = await cloud_storage_download_blob_to_file.fn(\n                bucket=self.bucket,\n                blob=blob_path,\n                path=local_file_path,\n                gcp_credentials=self.gcp_credentials,\n            )\n            file_paths.append(file_path)\n    return file_paths\n
        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.list_blobs","title":"list_blobs async","text":"

        Lists all blobs in the bucket that are in a folder. Folders are not included in the output.

        Parameters:

        Name Type Description Default folder str

        The folder to list blobs from.

        ''

        Returns:

        Type Description List[Blob]

        A list of Blob objects.

        Examples:

        Get all blobs from a folder named \"prefect\".

        from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\ngcs_bucket.list_blobs(\"prefect\")\n

        Source code in prefect_gcp/cloud_storage.py
        @sync_compatible\nasync def list_blobs(self, folder: str = \"\") -> List[\"Blob\"]:\n    \"\"\"\n    Lists all blobs in the bucket that are in a folder.\n    Folders are not included in the output.\n\n    Args:\n        folder: The folder to list blobs from.\n\n    Returns:\n        A list of Blob objects.\n\n    Examples:\n        Get all blobs from a folder named \"prefect\".\n        ```python\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket.load(\"my-bucket\")\n        gcs_bucket.list_blobs(\"prefect\")\n        ```\n    \"\"\"\n    client = self.gcp_credentials.get_cloud_storage_client()\n\n    bucket_path = self._join_bucket_folder(folder)\n    if bucket_path is None:\n        self.logger.info(f\"Listing blobs in bucket {self.bucket!r}.\")\n    else:\n        self.logger.info(\n            f\"Listing blobs in folder {bucket_path!r} in bucket {self.bucket!r}.\"\n        )\n    blobs = await run_sync_in_worker_thread(\n        client.list_blobs, self.bucket, prefix=bucket_path\n    )\n\n    # Ignore folders\n    return [blob for blob in blobs if not blob.name.endswith(\"/\")]\n
        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.list_folders","title":"list_folders async","text":"

        Lists all folders and subfolders in the bucket.

        Parameters:

        Name Type Description Default folder str

        List all folders and subfolders inside the given folder.

        ''

        Returns:

        Type Description List[str]

        A list of folders.

        Examples:

        Get all folders from a bucket named \"my-bucket\".

        from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\ngcs_bucket.list_folders()\n

        Get all folders from a folder named \"years\".

        from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\ngcs_bucket.list_folders(\"years\")\n

        Source code in prefect_gcp/cloud_storage.py
        @sync_compatible\nasync def list_folders(self, folder: str = \"\") -> List[str]:\n    \"\"\"\n    Lists all folders and subfolders in the bucket.\n\n    Args:\n        folder: List all folders and subfolders inside given folder.\n\n    Returns:\n        A list of folders.\n\n    Examples:\n        Get all folders from a bucket named \"my-bucket\".\n        ```python\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket.load(\"my-bucket\")\n        gcs_bucket.list_folders()\n        ```\n\n        Get all folders from a folder called years\n        ```python\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket.load(\"my-bucket\")\n        gcs_bucket.list_folders(\"years\")\n        ```\n    \"\"\"\n\n    # Beware of calling _join_bucket_folder twice, see note in method.\n    # However, we just want to use it to check if we are listing the root folder\n    bucket_path = self._join_bucket_folder(folder)\n    if bucket_path is None:\n        self.logger.info(f\"Listing folders in bucket {self.bucket!r}.\")\n    else:\n        self.logger.info(\n            f\"Listing folders in {bucket_path!r} in bucket {self.bucket!r}.\"\n        )\n\n    blobs = await self.list_blobs(folder)\n    # gets all folders with full path\n    folders = {str(PurePosixPath(blob.name).parent) for blob in blobs}\n\n    return [folder for folder in folders if folder != \".\"]\n
        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.put_directory","title":"put_directory async","text":"

        Uploads a directory from a given local path to the configured GCS bucket in a given folder.

        Defaults to uploading the entire contents of the current working directory to the block's bucket_folder.

        Parameters:

        Name Type Description Default local_path Optional[str]

        Path to local directory to upload from.

        None to_path Optional[str]

        Path in GCS bucket to upload to. Defaults to block's configured bucket_folder.

        None ignore_file Optional[str]

        Path to file containing gitignore style expressions for filepaths to ignore.

        None

        Returns:

        Type Description int

        The number of files uploaded.
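
        Example

        A minimal usage sketch, assuming a saved GcsBucket block named \"my-bucket\"; the current working directory is uploaded to the block's bucket_folder:

        from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\n# upload the current working directory to the configured bucket_folder\ngcs_bucket.put_directory()\n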

        Source code in prefect_gcp/cloud_storage.py
        @sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n) -> int:\n    \"\"\"\n    Uploads a directory from a given local path to the configured GCS bucket in a\n    given folder.\n\n    Defaults to uploading the entire contents the current working directory to the\n    block's bucket_folder.\n\n    Args:\n        local_path: Path to local directory to upload from.\n        to_path: Path in GCS bucket to upload to. Defaults to block's configured\n            bucket_folder.\n        ignore_file: Path to file containing gitignore style expressions for\n            filepaths to ignore.\n\n    Returns:\n        The number of files uploaded.\n    \"\"\"\n    if local_path is None:\n        local_path = os.path.abspath(\".\")\n    else:\n        local_path = os.path.expanduser(local_path)\n\n    to_path = self.bucket_folder if to_path is None else self._resolve_path(to_path)\n\n    included_files = None\n    if ignore_file:\n        with open(ignore_file, \"r\") as f:\n            ignore_patterns = f.readlines()\n        included_files = filter_files(local_path, ignore_patterns)\n\n    uploaded_file_count = 0\n    for local_file_path in Path(local_path).rglob(\"*\"):\n        if (\n            included_files is not None\n            and local_file_path.name not in included_files\n        ):\n            continue\n        elif not local_file_path.is_dir():\n            remote_file_path = str(\n                PurePosixPath(to_path, local_file_path.relative_to(local_path))\n            )\n            local_file_content = local_file_path.read_bytes()\n            await self.write_path(remote_file_path, content=local_file_content)\n            uploaded_file_count += 1\n\n    return uploaded_file_count\n
        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.read_path","title":"read_path async","text":"

        Read specified path from GCS and return contents. Provide the entire path to the key in GCS.

        Parameters:

        Name Type Description Default path str

        Entire path to (and including) the key.

        required

        Returns:

        Type Description bytes

        A bytes or string representation of the blob object.
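
        Example

        A minimal usage sketch, assuming a saved GcsBucket block named \"my-bucket\" and an illustrative key my_folder/notes.txt:

        from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\ncontents = gcs_bucket.read_path(\"my_folder/notes.txt\")  # returns bytes\n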

        Source code in prefect_gcp/cloud_storage.py
        @sync_compatible\nasync def read_path(self, path: str) -> bytes:\n    \"\"\"\n    Read specified path from GCS and return contents. Provide the entire\n    path to the key in GCS.\n\n    Args:\n        path: Entire path to (and including) the key.\n\n    Returns:\n        A bytes or string representation of the blob object.\n    \"\"\"\n    path = self._resolve_path(path)\n    with disable_run_logger():\n        contents = await cloud_storage_download_blob_as_bytes.fn(\n            bucket=self.bucket, blob=path, gcp_credentials=self.gcp_credentials\n        )\n    return contents\n
        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.upload_from_dataframe","title":"upload_from_dataframe async","text":"

        Upload a Pandas DataFrame to Google Cloud Storage in various formats.

        This function uploads the data in a Pandas DataFrame to Google Cloud Storage in a specified format, such as .csv, .csv.gz, .parquet, .parquet.snappy, and .parquet.gz.

        Parameters:

        Name Type Description Default df DataFrame

        The Pandas DataFrame to be uploaded.

        required to_path str

        The destination path for the uploaded DataFrame.

        required serialization_format Union[str, DataFrameSerializationFormat]

        The format to serialize the DataFrame into. When passed as a str, the valid options are: 'csv', 'csv_gzip', 'parquet', 'parquet_snappy', 'parquet_gzip'. Defaults to DataFrameSerializationFormat.CSV_GZIP.

        CSV_GZIP **upload_kwargs Dict[str, Any]

        Additional keyword arguments to pass to the underlying Blob.upload_from_dataframe method.

        {}

        Returns:

        Type Description str

        The path that the object was uploaded to.
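
        Example

        A minimal usage sketch, assuming a saved GcsBucket block named \"my-bucket\"; the DataFrame and destination path are illustrative:

        import pandas as pd\nfrom prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\ndf = pd.DataFrame({\"a\": [1, 2], \"b\": [3, 4]})\n# serialized as gzip-compressed CSV; the file extension is adjusted automatically\ngcs_bucket.upload_from_dataframe(df, to_path=\"my_folder/data\", serialization_format=\"csv_gzip\")\n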

        Source code in prefect_gcp/cloud_storage.py
        @sync_compatible\nasync def upload_from_dataframe(\n    self,\n    df: \"DataFrame\",\n    to_path: str,\n    serialization_format: Union[\n        str, DataFrameSerializationFormat\n    ] = DataFrameSerializationFormat.CSV_GZIP,\n    **upload_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"Upload a Pandas DataFrame to Google Cloud Storage in various formats.\n\n    This function uploads the data in a Pandas DataFrame to Google Cloud Storage\n    in a specified format, such as .csv, .csv.gz, .parquet,\n    .parquet.snappy, and .parquet.gz.\n\n    Args:\n        df: The Pandas DataFrame to be uploaded.\n        to_path: The destination path for the uploaded DataFrame.\n        serialization_format: The format to serialize the DataFrame into.\n            When passed as a `str`, the valid options are:\n            'csv', 'csv_gzip',  'parquet', 'parquet_snappy', 'parquet_gzip'.\n            Defaults to `DataFrameSerializationFormat.CSV_GZIP`.\n        **upload_kwargs: Additional keyword arguments to pass to the underlying\n        `Blob.upload_from_dataframe` method.\n\n    Returns:\n        The path that the object was uploaded to.\n    \"\"\"\n    if isinstance(serialization_format, str):\n        serialization_format = DataFrameSerializationFormat[\n            serialization_format.upper()\n        ]\n\n    with BytesIO() as bytes_buffer:\n        if serialization_format.format == \"parquet\":\n            df.to_parquet(\n                path=bytes_buffer,\n                compression=serialization_format.compression,\n                index=False,\n            )\n        elif serialization_format.format == \"csv\":\n            df.to_csv(\n                path_or_buf=bytes_buffer,\n                compression=serialization_format.compression,\n                index=False,\n            )\n\n        bytes_buffer.seek(0)\n        to_path = serialization_format.fix_extension_with(gcs_blob_path=to_path)\n\n        return await self.upload_from_file_object(\n            from_file_object=bytes_buffer,\n            to_path=to_path,\n            **{\"content_type\": serialization_format.content_type, **upload_kwargs},\n        )\n
        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.upload_from_file_object","title":"upload_from_file_object async","text":"

        Uploads an object to the object storage service from a file-like object, which can be a BytesIO object or a BufferedReader.

        Parameters:

        Name Type Description Default from_file_object BinaryIO

        The file-like object to upload from.

        required to_path str

        The path to upload the object to; this gets prefixed with the bucket_folder.

        required **upload_kwargs

        Additional keyword arguments to pass to Blob.upload_from_file.

        {}

        Returns:

        Type Description str

        The path that the object was uploaded to.

        Examples:

        Upload an open file object to my_folder/notes.txt.

        from io import BytesIO\nfrom prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\nwith open(\"notes.txt\", \"rb\") as f:\n    gcs_bucket.upload_from_file_object(f, \"my_folder/notes.txt\")\n

        Upload a BufferedReader object to my_folder/notes.txt.

        from io import BufferedReader\nfrom prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\nwith open(\"notes.txt\", \"rb\") as f:\n    gcs_bucket.upload_from_file_object(\n        BufferedReader(f), \"my_folder/notes.txt\"\n    )\n

        Source code in prefect_gcp/cloud_storage.py
        @sync_compatible\nasync def upload_from_file_object(\n    self, from_file_object: BinaryIO, to_path: str, **upload_kwargs\n) -> str:\n    \"\"\"\n    Uploads an object to the object storage service from a file-like object,\n    which can be a BytesIO object or a BufferedReader.\n\n    Args:\n        from_file_object: The file-like object to upload from.\n        to_path: The path to upload the object to; this gets prefixed\n            with the bucket_folder.\n        **upload_kwargs: Additional keyword arguments to pass to\n            `Blob.upload_from_file`.\n\n    Returns:\n        The path that the object was uploaded to.\n\n    Examples:\n        Upload my_folder/notes.txt object to a BytesIO object.\n        ```python\n        from io import BytesIO\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket.load(\"my-bucket\")\n        with open(\"notes.txt\", \"rb\") as f:\n            gcs_bucket.upload_from_file_object(f, \"my_folder/notes.txt\")\n        ```\n\n        Upload BufferedReader object to my_folder/notes.txt.\n        ```python\n        from io import BufferedReader\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket.load(\"my-bucket\")\n        with open(\"notes.txt\", \"rb\") as f:\n            gcs_bucket.upload_from_file_object(\n                BufferedReader(f), \"my_folder/notes.txt\"\n            )\n        ```\n    \"\"\"\n    bucket = await self.get_bucket()\n\n    bucket_path = self._join_bucket_folder(to_path)\n    blob = bucket.blob(bucket_path)\n    self.logger.info(\n        f\"Uploading from file object to the bucket \"\n        f\"{self.bucket!r} path {bucket_path!r}.\"\n    )\n\n    await run_sync_in_worker_thread(\n        blob.upload_from_file, from_file_object, **upload_kwargs\n    )\n    return bucket_path\n
        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.upload_from_folder","title":"upload_from_folder async","text":"

        Uploads files within a folder (excluding the folder itself) to the object storage service folder.

        Parameters:

        Name Type Description Default from_folder Union[str, Path]

        The path to the folder to upload from.

        required to_folder Optional[str]

        The path to upload the folder to. If not provided, will default to bucket_folder or the base directory of the bucket.

        None **upload_kwargs Dict[str, Any]

        Additional keyword arguments to pass to Blob.upload_from_filename.

        {}

        Returns:

        Type Description str

        The path that the folder was uploaded to.

        Examples:

        Upload local folder my_folder to the bucket's folder my_folder.

        from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\ngcs_bucket.upload_from_folder(\"my_folder\")\n

        Source code in prefect_gcp/cloud_storage.py
        @sync_compatible\nasync def upload_from_folder(\n    self,\n    from_folder: Union[str, Path],\n    to_folder: Optional[str] = None,\n    **upload_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"\n    Uploads files *within* a folder (excluding the folder itself)\n    to the object storage service folder.\n\n    Args:\n        from_folder: The path to the folder to upload from.\n        to_folder: The path to upload the folder to. If not provided, will default\n            to bucket_folder or the base directory of the bucket.\n        **upload_kwargs: Additional keyword arguments to pass to\n            `Blob.upload_from_filename`.\n\n    Returns:\n        The path that the folder was uploaded to.\n\n    Examples:\n        Upload local folder my_folder to the bucket's folder my_folder.\n        ```python\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket.load(\"my-bucket\")\n        gcs_bucket.upload_from_folder(\"my_folder\")\n        ```\n    \"\"\"\n    from_folder = Path(from_folder)\n    # join bucket folder expects string for the first input\n    # when it returns None, we need to convert it back to empty string\n    # so relative_to works\n    bucket_folder = self._join_bucket_folder(to_folder or \"\") or \"\"\n\n    num_uploaded = 0\n    bucket = await self.get_bucket()\n\n    async_coros = []\n    for from_path in from_folder.rglob(\"**/*\"):\n        if from_path.is_dir():\n            continue\n        bucket_path = str(Path(bucket_folder) / from_path.relative_to(from_folder))\n        self.logger.info(\n            f\"Uploading from {str(from_path)!r} to the bucket \"\n            f\"{self.bucket!r} path {bucket_path!r}.\"\n        )\n        blob = bucket.blob(bucket_path)\n        async_coros.append(\n            run_sync_in_worker_thread(\n                blob.upload_from_filename, filename=from_path, **upload_kwargs\n            )\n        )\n        num_uploaded += 1\n    await asyncio.gather(*async_coros)\n    if num_uploaded == 0:\n        self.logger.warning(f\"No files were uploaded from {from_folder}.\")\n    return bucket_folder\n
        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.upload_from_path","title":"upload_from_path async","text":"

        Uploads an object from a path to the object storage service.

        Parameters:

        Name Type Description Default from_path Union[str, Path]

        The path to the file to upload from.

        required to_path Optional[str]

        The path to upload the file to. If not provided, will use the file name of from_path; this gets prefixed with the bucket_folder.

        None **upload_kwargs Dict[str, Any]

        Additional keyword arguments to pass to Blob.upload_from_filename.

        {}

        Returns:

        Type Description str

        The path that the object was uploaded to.

        Examples:

        Upload notes.txt to my_folder/notes.txt.

        from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\ngcs_bucket.upload_from_path(\"notes.txt\", \"my_folder/notes.txt\")\n

        Source code in prefect_gcp/cloud_storage.py
        @sync_compatible\nasync def upload_from_path(\n    self,\n    from_path: Union[str, Path],\n    to_path: Optional[str] = None,\n    **upload_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"\n    Uploads an object from a path to the object storage service.\n\n    Args:\n        from_path: The path to the file to upload from.\n        to_path: The path to upload the file to. If not provided, will use\n            the file name of from_path; this gets prefixed\n            with the bucket_folder.\n        **upload_kwargs: Additional keyword arguments to pass to\n            `Blob.upload_from_filename`.\n\n    Returns:\n        The path that the object was uploaded to.\n\n    Examples:\n        Upload notes.txt to my_folder/notes.txt.\n        ```python\n        from prefect_gcp.cloud_storage import GcsBucket\n\n        gcs_bucket = GcsBucket.load(\"my-bucket\")\n        gcs_bucket.upload_from_path(\"notes.txt\", \"my_folder/notes.txt\")\n        ```\n    \"\"\"\n    if to_path is None:\n        to_path = Path(from_path).name\n\n    bucket_path = self._join_bucket_folder(to_path)\n    bucket = await self.get_bucket()\n    blob = bucket.blob(bucket_path)\n    self.logger.info(\n        f\"Uploading from {from_path!r} to the bucket \"\n        f\"{self.bucket!r} path {bucket_path!r}.\"\n    )\n\n    await run_sync_in_worker_thread(\n        blob.upload_from_filename, filename=from_path, **upload_kwargs\n    )\n    return bucket_path\n
        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.GcsBucket.write_path","title":"write_path async","text":"

        Writes to a GCS bucket.

        Parameters:

        Name Type Description Default path str

        The key name. Each object in your bucket has a unique key (or key name).

        required content bytes

        What you are uploading to the GCS bucket.

        required

        Returns:

        Type Description str

        The path that the contents were written to.
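
        Example

        A minimal usage sketch, assuming a saved GcsBucket block named \"my-bucket\"; the key and contents are illustrative:

        from prefect_gcp.cloud_storage import GcsBucket\n\ngcs_bucket = GcsBucket.load(\"my-bucket\")\ngcs_bucket.write_path(\"my_folder/notes.txt\", content=b\"hello, world\")\n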

        Source code in prefect_gcp/cloud_storage.py
        @sync_compatible\nasync def write_path(self, path: str, content: bytes) -> str:\n    \"\"\"\n    Writes to an GCS bucket.\n\n    Args:\n        path: The key name. Each object in your bucket has a unique\n            key (or key name).\n        content: What you are uploading to GCS Bucket.\n\n    Returns:\n        The path that the contents were written to.\n    \"\"\"\n    path = self._resolve_path(path)\n    with disable_run_logger():\n        await cloud_storage_upload_blob_from_string.fn(\n            data=content,\n            bucket=self.bucket,\n            blob=path,\n            gcp_credentials=self.gcp_credentials,\n        )\n    return path\n
        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.cloud_storage_copy_blob","title":"cloud_storage_copy_blob async","text":"

        Copies data from one Google Cloud Storage bucket to another, without downloading it locally.

        Parameters:

        Name Type Description Default source_bucket str

        Source bucket name.

        required dest_bucket str

        Destination bucket name.

        required source_blob str

        Source blob name.

        required gcp_credentials GcpCredentials

        Credentials to use for authentication with GCP.

        required dest_blob Optional[str]

        Destination blob name; if not provided, defaults to source_blob.

        None timeout Union[float, Tuple[float, float]]

        The number of seconds the transport should wait for the server response. Can also be passed as a tuple (connect_timeout, read_timeout).

        60 project Optional[str]

        Name of the project to use; overrides the gcp_credentials project if provided.

        None **copy_kwargs Dict[str, Any]

        Additional keyword arguments to pass to Bucket.copy_blob.

        {}

        Returns:

        Type Description str

        Destination blob name.

        Example

        Copies blob from one bucket to another.

        from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.cloud_storage import cloud_storage_copy_blob\n\n@flow()\ndef example_cloud_storage_copy_blob_flow():\n    gcp_credentials = GcpCredentials(\n        service_account_file=\"/path/to/service/account/keyfile.json\")\n    blob = cloud_storage_copy_blob(\n        \"source_bucket\",\n        \"dest_bucket\",\n        \"source_blob\",\n        gcp_credentials\n    )\n    return blob\n\nexample_cloud_storage_copy_blob_flow()\n

        Source code in prefect_gcp/cloud_storage.py
        @task\nasync def cloud_storage_copy_blob(\n    source_bucket: str,\n    dest_bucket: str,\n    source_blob: str,\n    gcp_credentials: GcpCredentials,\n    dest_blob: Optional[str] = None,\n    timeout: Union[float, Tuple[float, float]] = 60,\n    project: Optional[str] = None,\n    **copy_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"\n    Copies data from one Google Cloud Storage bucket to another,\n    without downloading it locally.\n\n    Args:\n        source_bucket: Source bucket name.\n        dest_bucket: Destination bucket name.\n        source_blob: Source blob name.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        dest_blob: Destination blob name; if not provided, defaults to source_blob.\n        timeout: The number of seconds the transport should wait\n            for the server response. Can also be passed as a tuple\n            (connect_timeout, read_timeout).\n        project: Name of the project to use; overrides the\n            gcp_credentials project if provided.\n        **copy_kwargs: Additional keyword arguments to pass to\n            `Bucket.copy_blob`.\n\n    Returns:\n        Destination blob name.\n\n    Example:\n        Copies blob from one bucket to another.\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.cloud_storage import cloud_storage_copy_blob\n\n        @flow()\n        def example_cloud_storage_copy_blob_flow():\n            gcp_credentials = GcpCredentials(\n                service_account_file=\"/path/to/service/account/keyfile.json\")\n            blob = cloud_storage_copy_blob(\n                \"source_bucket\",\n                \"dest_bucket\",\n                \"source_blob\",\n                gcp_credentials\n            )\n            return blob\n\n        example_cloud_storage_copy_blob_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\n        \"Copying blob named %s from the %s bucket to the %s bucket\",\n        source_blob,\n        source_bucket,\n        dest_bucket,\n    )\n\n    source_bucket_obj = await _get_bucket(\n        source_bucket, gcp_credentials, project=project\n    )\n\n    dest_bucket_obj = await _get_bucket(dest_bucket, gcp_credentials, project=project)\n    if dest_blob is None:\n        dest_blob = source_blob\n\n    source_blob_obj = source_bucket_obj.blob(source_blob)\n    await run_sync_in_worker_thread(\n        source_bucket_obj.copy_blob,\n        blob=source_blob_obj,\n        destination_bucket=dest_bucket_obj,\n        new_name=dest_blob,\n        timeout=timeout,\n        **copy_kwargs,\n    )\n\n    return dest_blob\n
        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.cloud_storage_create_bucket","title":"cloud_storage_create_bucket async","text":"

        Creates a bucket.

        Parameters:

        Name Type Description Default bucket str

        Name of the bucket.

        required gcp_credentials GcpCredentials

        Credentials to use for authentication with GCP.

        required project Optional[str]

        Name of the project to use; overrides the gcp_credentials project if provided.

        None location Optional[str]

        Location of the bucket.

        None **create_kwargs Dict[str, Any]

        Additional keyword arguments to pass to client.create_bucket.

        {}

        Returns:

        Type Description str

        The bucket name.

        Example

        Creates a bucket named \"prefect\".

        from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.cloud_storage import cloud_storage_create_bucket\n\n@flow()\ndef example_cloud_storage_create_bucket_flow():\n    gcp_credentials = GcpCredentials(\n        service_account_file=\"/path/to/service/account/keyfile.json\")\n    bucket = cloud_storage_create_bucket(\"prefect\", gcp_credentials)\n\nexample_cloud_storage_create_bucket_flow()\n

        Source code in prefect_gcp/cloud_storage.py
        @task\nasync def cloud_storage_create_bucket(\n    bucket: str,\n    gcp_credentials: GcpCredentials,\n    project: Optional[str] = None,\n    location: Optional[str] = None,\n    **create_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"\n    Creates a bucket.\n\n    Args:\n        bucket: Name of the bucket.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        project: Name of the project to use; overrides the\n            gcp_credentials project if provided.\n        location: Location of the bucket.\n        **create_kwargs: Additional keyword arguments to pass to `client.create_bucket`.\n\n    Returns:\n        The bucket name.\n\n    Example:\n        Creates a bucket named \"prefect\".\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.cloud_storage import cloud_storage_create_bucket\n\n        @flow()\n        def example_cloud_storage_create_bucket_flow():\n            gcp_credentials = GcpCredentials(\n                service_account_file=\"/path/to/service/account/keyfile.json\")\n            bucket = cloud_storage_create_bucket(\"prefect\", gcp_credentials)\n\n        example_cloud_storage_create_bucket_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Creating %s bucket\", bucket)\n\n    client = gcp_credentials.get_cloud_storage_client(project=project)\n    await run_sync_in_worker_thread(\n        client.create_bucket, bucket, location=location, **create_kwargs\n    )\n    return bucket\n
        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.cloud_storage_download_blob_as_bytes","title":"cloud_storage_download_blob_as_bytes async","text":"

        Downloads a blob as bytes.

        Parameters:

        Name Type Description Default bucket str

        Name of the bucket.

        required blob str

        Name of the Cloud Storage blob.

        required gcp_credentials GcpCredentials

        Credentials to use for authentication with GCP.

        required chunk_size int

        The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification.

        None encryption_key Optional[str]

        An encryption key.

        None timeout Union[float, Tuple[float, float]]

        The number of seconds the transport should wait for the server response. Can also be passed as a tuple (connect_timeout, read_timeout).

        60 project Optional[str]

        Name of the project to use; overrides the gcp_credentials project if provided.

        None **download_kwargs Dict[str, Any]

        Additional keyword arguments to pass to Blob.download_as_bytes.

        {}

        Returns:

        Type Description bytes

        A bytes or string representation of the blob object.

        Example

        Downloads blob from bucket.

        from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.cloud_storage import cloud_storage_download_blob_as_bytes\n\n@flow()\ndef example_cloud_storage_download_blob_flow():\n    gcp_credentials = GcpCredentials(\n        service_account_file=\"/path/to/service/account/keyfile.json\")\n    contents = cloud_storage_download_blob_as_bytes(\n        \"bucket\", \"blob\", gcp_credentials)\n    return contents\n\nexample_cloud_storage_download_blob_flow()\n

        Source code in prefect_gcp/cloud_storage.py
        @task\nasync def cloud_storage_download_blob_as_bytes(\n    bucket: str,\n    blob: str,\n    gcp_credentials: GcpCredentials,\n    chunk_size: Optional[int] = None,\n    encryption_key: Optional[str] = None,\n    timeout: Union[float, Tuple[float, float]] = 60,\n    project: Optional[str] = None,\n    **download_kwargs: Dict[str, Any],\n) -> bytes:\n    \"\"\"\n    Downloads a blob as bytes.\n\n    Args:\n        bucket: Name of the bucket.\n        blob: Name of the Cloud Storage blob.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        chunk_size (int, optional): The size of a chunk of data whenever\n            iterating (in bytes). This must be a multiple of 256 KB\n            per the API specification.\n        encryption_key: An encryption key.\n        timeout: The number of seconds the transport should wait\n            for the server response. Can also be passed as a tuple\n            (connect_timeout, read_timeout).\n        project: Name of the project to use; overrides the\n            gcp_credentials project if provided.\n        **download_kwargs: Additional keyword arguments to pass to\n            `Blob.download_as_bytes`.\n\n    Returns:\n        A bytes or string representation of the blob object.\n\n    Example:\n        Downloads blob from bucket.\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.cloud_storage import cloud_storage_download_blob_as_bytes\n\n        @flow()\n        def example_cloud_storage_download_blob_flow():\n            gcp_credentials = GcpCredentials(\n                service_account_file=\"/path/to/service/account/keyfile.json\")\n            contents = cloud_storage_download_blob_as_bytes(\n                \"bucket\", \"blob\", gcp_credentials)\n            return contents\n\n        example_cloud_storage_download_blob_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Downloading blob named %s from the %s bucket\", blob, bucket)\n\n    bucket_obj = await _get_bucket(bucket, gcp_credentials, project=project)\n    blob_obj = bucket_obj.blob(\n        blob, chunk_size=chunk_size, encryption_key=encryption_key\n    )\n\n    contents = await run_sync_in_worker_thread(\n        blob_obj.download_as_bytes, timeout=timeout, **download_kwargs\n    )\n    return contents\n
        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.cloud_storage_download_blob_to_file","title":"cloud_storage_download_blob_to_file async","text":"

        Downloads a blob to a file path.

        Parameters:

        Name Type Description Default bucket str

        Name of the bucket.

        required blob str

        Name of the Cloud Storage blob.

        required path Union[str, Path]

        Downloads the contents to the provided file path; if the path is a directory, automatically joins the blob name.

        required gcp_credentials GcpCredentials

        Credentials to use for authentication with GCP.

        required chunk_size int

        The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification.

        None encryption_key Optional[str]

        An encryption key.

        None timeout Union[float, Tuple[float, float]]

        The number of seconds the transport should wait for the server response. Can also be passed as a tuple (connect_timeout, read_timeout).

        60 project Optional[str]

        Name of the project to use; overrides the gcp_credentials project if provided.

        None **download_kwargs Dict[str, Any]

        Additional keyword arguments to pass to Blob.download_to_filename.

        {}

        Returns:

        Type Description Union[str, Path]

        The path to the blob object.

        Example

        Downloads blob from bucket.

        from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.cloud_storage import cloud_storage_download_blob_to_file\n\n@flow()\ndef example_cloud_storage_download_blob_flow():\n    gcp_credentials = GcpCredentials(\n        service_account_file=\"/path/to/service/account/keyfile.json\")\n    path = cloud_storage_download_blob_to_file(\n        \"bucket\", \"blob\", \"file_path\", gcp_credentials)\n    return path\n\nexample_cloud_storage_download_blob_flow()\n

        Source code in prefect_gcp/cloud_storage.py
        @task\nasync def cloud_storage_download_blob_to_file(\n    bucket: str,\n    blob: str,\n    path: Union[str, Path],\n    gcp_credentials: GcpCredentials,\n    chunk_size: Optional[int] = None,\n    encryption_key: Optional[str] = None,\n    timeout: Union[float, Tuple[float, float]] = 60,\n    project: Optional[str] = None,\n    **download_kwargs: Dict[str, Any],\n) -> Union[str, Path]:\n    \"\"\"\n    Downloads a blob to a file path.\n\n    Args:\n        bucket: Name of the bucket.\n        blob: Name of the Cloud Storage blob.\n        path: Downloads the contents to the provided file path;\n            if the path is a directory, automatically joins the blob name.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        chunk_size (int, optional): The size of a chunk of data whenever\n            iterating (in bytes). This must be a multiple of 256 KB\n            per the API specification.\n        encryption_key: An encryption key.\n        timeout: The number of seconds the transport should wait\n            for the server response. Can also be passed as a tuple\n            (connect_timeout, read_timeout).\n        project: Name of the project to use; overrides the\n            gcp_credentials project if provided.\n        **download_kwargs: Additional keyword arguments to pass to\n            `Blob.download_to_filename`.\n\n    Returns:\n        The path to the blob object.\n\n    Example:\n        Downloads blob from bucket.\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.cloud_storage import cloud_storage_download_blob_to_file\n\n        @flow()\n        def example_cloud_storage_download_blob_flow():\n            gcp_credentials = GcpCredentials(\n                service_account_file=\"/path/to/service/account/keyfile.json\")\n            path = cloud_storage_download_blob_to_file(\n                \"bucket\", \"blob\", \"file_path\", gcp_credentials)\n            return path\n\n        example_cloud_storage_download_blob_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\n        \"Downloading blob named %s from the %s bucket to %s\", blob, bucket, path\n    )\n\n    bucket_obj = await _get_bucket(bucket, gcp_credentials, project=project)\n    blob_obj = bucket_obj.blob(\n        blob, chunk_size=chunk_size, encryption_key=encryption_key\n    )\n\n    if os.path.isdir(path):\n        if isinstance(path, Path):\n            path = path.joinpath(blob)  # keep as Path if Path is passed\n        else:\n            path = os.path.join(path, blob)  # keep as str if a str is passed\n\n    await run_sync_in_worker_thread(\n        blob_obj.download_to_filename, path, timeout=timeout, **download_kwargs\n    )\n    return path\n
        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.cloud_storage_upload_blob_from_file","title":"cloud_storage_upload_blob_from_file async","text":"

        Uploads a blob from a file path or a file-like object. Passing a file-like object is useful when the data was downloaded from the web, since it bypasses writing to disk and uploads directly to Cloud Storage.

        Parameters:

        Name Type Description Default file Union[str, Path, BytesIO]

        Path to data or file-like object to upload.

        required bucket str

        Name of the bucket.

        required blob str

        Name of the Cloud Storage blob.

        required gcp_credentials GcpCredentials

        Credentials to use for authentication with GCP.

        required content_type Optional[str]

        Type of content being uploaded.

        None chunk_size Optional[int]

        The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification.

        None encryption_key Optional[str]

        An encryption key.

        None timeout Union[float, Tuple[float, float]]

        The number of seconds the transport should wait for the server response. Can also be passed as a tuple (connect_timeout, read_timeout).

        60 project Optional[str]

        Name of the project to use; overrides the gcp_credentials project if provided.

        None **upload_kwargs Dict[str, Any]

        Additional keyword arguments to pass to Blob.upload_from_file or Blob.upload_from_filename.

        {}

        Returns:

        Type Description str

        The blob name.

        Example

        Uploads blob to bucket.

        from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.cloud_storage import cloud_storage_upload_blob_from_file\n\n@flow()\ndef example_cloud_storage_upload_blob_from_file_flow():\n    gcp_credentials = GcpCredentials(\n        service_account_file=\"/path/to/service/account/keyfile.json\")\n    blob = cloud_storage_upload_blob_from_file(\n        \"/path/somewhere\", \"bucket\", \"blob\", gcp_credentials)\n    return blob\n\nexample_cloud_storage_upload_blob_from_file_flow()\n

        Source code in prefect_gcp/cloud_storage.py
        @task\nasync def cloud_storage_upload_blob_from_file(\n    file: Union[str, Path, BytesIO],\n    bucket: str,\n    blob: str,\n    gcp_credentials: GcpCredentials,\n    content_type: Optional[str] = None,\n    chunk_size: Optional[int] = None,\n    encryption_key: Optional[str] = None,\n    timeout: Union[float, Tuple[float, float]] = 60,\n    project: Optional[str] = None,\n    **upload_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"\n    Uploads a blob from file path or file-like object. Usage for passing in\n    file-like object is if the data was downloaded from the web;\n    can bypass writing to disk and directly upload to Cloud Storage.\n\n    Args:\n        file: Path to data or file like object to upload.\n        bucket: Name of the bucket.\n        blob: Name of the Cloud Storage blob.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        content_type: Type of content being uploaded.\n        chunk_size: The size of a chunk of data whenever\n            iterating (in bytes). This must be a multiple of 256 KB\n            per the API specification.\n        encryption_key: An encryption key.\n        timeout: The number of seconds the transport should wait\n            for the server response. Can also be passed as a tuple\n            (connect_timeout, read_timeout).\n        project: Name of the project to use; overrides the\n            gcp_credentials project if provided.\n        **upload_kwargs: Additional keyword arguments to pass to\n            `Blob.upload_from_file` or `Blob.upload_from_filename`.\n\n    Returns:\n        The blob name.\n\n    Example:\n        Uploads blob to bucket.\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.cloud_storage import cloud_storage_upload_blob_from_file\n\n        @flow()\n        def example_cloud_storage_upload_blob_from_file_flow():\n            gcp_credentials = GcpCredentials(\n                service_account_file=\"/path/to/service/account/keyfile.json\")\n            blob = cloud_storage_upload_blob_from_file(\n                \"/path/somewhere\", \"bucket\", \"blob\", gcp_credentials)\n            return blob\n\n        example_cloud_storage_upload_blob_from_file_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Uploading blob named %s to the %s bucket\", blob, bucket)\n\n    bucket_obj = await _get_bucket(bucket, gcp_credentials, project=project)\n    blob_obj = bucket_obj.blob(\n        blob, chunk_size=chunk_size, encryption_key=encryption_key\n    )\n\n    if isinstance(file, BytesIO):\n        await run_sync_in_worker_thread(\n            blob_obj.upload_from_file,\n            file,\n            content_type=content_type,\n            timeout=timeout,\n            **upload_kwargs,\n        )\n    else:\n        await run_sync_in_worker_thread(\n            blob_obj.upload_from_filename,\n            file,\n            content_type=content_type,\n            timeout=timeout,\n            **upload_kwargs,\n        )\n    return blob\n
        "},{"location":"integrations/prefect-gcp/cloud_storage/#prefect_gcp.cloud_storage.cloud_storage_upload_blob_from_string","title":"cloud_storage_upload_blob_from_string async","text":"

        Uploads a blob from a string or bytes representation of data.

        Parameters:

        Name Type Description Default data Union[str, bytes]

        String or bytes representation of data to upload.

        required bucket str

        Name of the bucket.

        required blob str

        Name of the Cloud Storage blob.

        required gcp_credentials GcpCredentials

        Credentials to use for authentication with GCP.

        required content_type Optional[str]

        Type of content being uploaded.

        None chunk_size Optional[int]

        The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification.

        None encryption_key Optional[str]

        An encryption key.

        None timeout Union[float, Tuple[float, float]]

        The number of seconds the transport should wait for the server response. Can also be passed as a tuple (connect_timeout, read_timeout).

        60 project Optional[str]

        Name of the project to use; overrides the gcp_credentials project if provided.

        None **upload_kwargs Dict[str, Any]

        Additional keyword arguments to pass to Blob.upload_from_string.

        {}

        Returns:

        Type Description str

        The blob name.

        Example

        Uploads blob to bucket.

        from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.cloud_storage import cloud_storage_upload_blob_from_string\n\n@flow()\ndef example_cloud_storage_upload_blob_from_string_flow():\n    gcp_credentials = GcpCredentials(\n        service_account_file=\"/path/to/service/account/keyfile.json\")\n    blob = cloud_storage_upload_blob_from_string(\n        \"data\", \"bucket\", \"blob\", gcp_credentials)\n    return blob\n\nexample_cloud_storage_upload_blob_from_string_flow()\n

        Source code in prefect_gcp/cloud_storage.py
        @task\nasync def cloud_storage_upload_blob_from_string(\n    data: Union[str, bytes],\n    bucket: str,\n    blob: str,\n    gcp_credentials: GcpCredentials,\n    content_type: Optional[str] = None,\n    chunk_size: Optional[int] = None,\n    encryption_key: Optional[str] = None,\n    timeout: Union[float, Tuple[float, float]] = 60,\n    project: Optional[str] = None,\n    **upload_kwargs: Dict[str, Any],\n) -> str:\n    \"\"\"\n    Uploads a blob from a string or bytes representation of data.\n\n    Args:\n        data: String or bytes representation of data to upload.\n        bucket: Name of the bucket.\n        blob: Name of the Cloud Storage blob.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        content_type: Type of content being uploaded.\n        chunk_size: The size of a chunk of data whenever\n            iterating (in bytes). This must be a multiple of 256 KB\n            per the API specification.\n        encryption_key: An encryption key.\n        timeout: The number of seconds the transport should wait\n            for the server response. Can also be passed as a tuple\n            (connect_timeout, read_timeout).\n        project: Name of the project to use; overrides the\n            gcp_credentials project if provided.\n        **upload_kwargs: Additional keyword arguments to pass to\n            `Blob.upload_from_string`.\n\n    Returns:\n        The blob name.\n\n    Example:\n        Uploads blob to bucket.\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.cloud_storage import cloud_storage_upload_blob_from_string\n\n        @flow()\n        def example_cloud_storage_upload_blob_from_string_flow():\n            gcp_credentials = GcpCredentials(\n                service_account_file=\"/path/to/service/account/keyfile.json\")\n            blob = cloud_storage_upload_blob_from_string(\n                \"data\", \"bucket\", \"blob\", gcp_credentials)\n            return blob\n\n        example_cloud_storage_upload_blob_from_string_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Uploading blob named %s to the %s bucket\", blob, bucket)\n\n    bucket_obj = await _get_bucket(bucket, gcp_credentials, project=project)\n    blob_obj = bucket_obj.blob(\n        blob, chunk_size=chunk_size, encryption_key=encryption_key\n    )\n\n    await run_sync_in_worker_thread(\n        blob_obj.upload_from_string,\n        data,\n        content_type=content_type,\n        timeout=timeout,\n        **upload_kwargs,\n    )\n    return blob\n
        "},{"location":"integrations/prefect-gcp/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-gcp/credentials/#prefect_gcp.credentials","title":"prefect_gcp.credentials","text":"

        Module handling GCP credentials.

        "},{"location":"integrations/prefect-gcp/credentials/#prefect_gcp.credentials.GcpCredentials","title":"GcpCredentials","text":"

        Bases: CredentialsBlock

        Block used to manage authentication with GCP. Google authentication is handled via the google.oauth2 module or through the CLI. Specify either one of service_account_file or service_account_info; if neither is specified, the client will try to detect the credentials following Google's Application Default Credentials. See Google's Authentication documentation for details on inference and recommended authentication patterns.

        Attributes:

        Name Type Description service_account_file Optional[Path]

        Path to the service account JSON keyfile.

        service_account_info Optional[SecretDict]

        The contents of the keyfile as a dict.

        Example

        Load GCP credentials stored in a GCP Credentials Block:

        from prefect_gcp import GcpCredentials\ngcp_credentials_block = GcpCredentials.load(\"BLOCK_NAME\")\n
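
        A complementary sketch (the block name and keyfile path below are placeholders, not taken from the reference above): because GcpCredentials is a Block, it can be registered ahead of time with save so later flows can load it by name.

        from prefect_gcp import GcpCredentials\n\n# Hypothetical path and block name; save() registers the block for later .load() calls\nGcpCredentials(\n    service_account_file=\"/path/to/service/account/keyfile.json\"\n).save(\"BLOCK_NAME\")\n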

        Source code in prefect_gcp/credentials.py
        class GcpCredentials(CredentialsBlock):\n    \"\"\"\n    Block used to manage authentication with GCP. Google authentication is\n    handled via the `google.oauth2` module or through the CLI.\n    Specify either one of service `account_file` or `service_account_info`; if both\n    are not specified, the client will try to detect the credentials following Google's\n    [Application Default Credentials](https://cloud.google.com/docs/authentication/application-default-credentials).\n    See Google's [Authentication documentation](https://cloud.google.com/docs/authentication#service-accounts)\n    for details on inference and recommended authentication patterns.\n\n    Attributes:\n        service_account_file: Path to the service account JSON keyfile.\n        service_account_info: The contents of the keyfile as a dict.\n\n    Example:\n        Load GCP credentials stored in a `GCP Credentials` Block:\n        ```python\n        from prefect_gcp import GcpCredentials\n        gcp_credentials_block = GcpCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"  # noqa\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/10424e311932e31c477ac2b9ef3d53cefbaad708-250x250.png\"  # noqa\n    _block_type_name = \"GCP Credentials\"\n    _documentation_url = \"https://prefecthq.github.io/prefect-gcp/credentials/#prefect_gcp.credentials.GcpCredentials\"  # noqa: E501\n\n    service_account_file: Optional[Path] = Field(\n        default=None, description=\"Path to the service account JSON keyfile.\"\n    )\n    service_account_info: Optional[SecretDict] = Field(\n        default=None, description=\"The contents of the keyfile as a dict.\"\n    )\n    project: Optional[str] = Field(\n        default=None, description=\"The GCP project to use for the client.\"\n    )\n\n    _service_account_email: Optional[str] = None\n\n    def __hash__(self):\n        return hash(\n            (\n                hash(self.service_account_file),\n                hash(frozenset(self.service_account_info.dict().items()))\n                if self.service_account_info\n                else None,\n                hash(self.project),\n                hash(self._service_account_email),\n            )\n        )\n\n    @root_validator\n    def _provide_one_service_account_source(cls, values):\n        \"\"\"\n        Ensure that only a service account file or service account info ias provided.\n        \"\"\"\n        both_service_account = (\n            values.get(\"service_account_info\") is not None\n            and values.get(\"service_account_file\") is not None\n        )\n        if both_service_account:\n            raise ValueError(\n                \"Only one of service_account_info or service_account_file \"\n                \"can be specified at once\"\n            )\n        return values\n\n    @validator(\"service_account_file\")\n    def _check_service_account_file(cls, file):\n        \"\"\"Get full path of provided file and make sure that it exists.\"\"\"\n        if not file:\n            return file\n\n        service_account_file = Path(file).expanduser()\n        if not service_account_file.exists():\n            raise ValueError(\"The provided path to the service account is invalid\")\n        return service_account_file\n\n    @validator(\"service_account_info\", pre=True)\n    def _convert_json_string_json_service_account_info(cls, value):\n        \"\"\"\n        Converts service account info provided as a json formatted string\n        to a dictionary\n        \"\"\"\n        
if isinstance(value, str):\n            try:\n                service_account_info = json.loads(value)\n                return service_account_info\n            except Exception:\n                raise ValueError(\"Unable to decode service_account_info\")\n        else:\n            return value\n\n    def block_initialization(self):\n        credentials = self.get_credentials_from_service_account()\n        if self.project is None:\n            if self.service_account_info or self.service_account_file:\n                credentials_project = credentials.project_id\n            # google.auth.default using gcloud auth application-default login\n            elif credentials.quota_project_id:\n                credentials_project = credentials.quota_project_id\n            # compute-assigned service account via GCP metadata server\n            else:\n                _, credentials_project = google.auth.default()\n            self.project = credentials_project\n\n        if hasattr(credentials, \"service_account_email\"):\n            self._service_account_email = credentials.service_account_email\n\n    def get_credentials_from_service_account(self) -> Credentials:\n        \"\"\"\n        Helper method to serialize credentials by using either\n        service_account_file or service_account_info.\n        \"\"\"\n        if self.service_account_info:\n            credentials = Credentials.from_service_account_info(\n                self.service_account_info.get_secret_value(),\n                scopes=[\"https://www.googleapis.com/auth/cloud-platform\"],\n            )\n        elif self.service_account_file:\n            credentials = Credentials.from_service_account_file(\n                self.service_account_file,\n                scopes=[\"https://www.googleapis.com/auth/cloud-platform\"],\n            )\n        else:\n            credentials, _ = google.auth.default()\n        return credentials\n\n    @sync_compatible\n    async def get_access_token(self):\n        \"\"\"\n        See: https://stackoverflow.com/a/69107745\n        Also: https://www.jhanley.com/google-cloud-creating-oauth-access-tokens-for-rest-api-calls/\n        \"\"\"  # noqa\n        request = google.auth.transport.requests.Request()\n        credentials = self.get_credentials_from_service_account()\n        await run_sync_in_worker_thread(credentials.refresh, request)\n        return credentials.token\n\n    def get_client(\n        self,\n        client_type: Union[str, ClientType],\n        **get_client_kwargs: Dict[str, Any],\n    ) -> Any:\n        \"\"\"\n        Helper method to dynamically get a client type.\n\n        Args:\n            client_type: The name of the client to get.\n            **get_client_kwargs: Additional keyword arguments to pass to the\n                `get_*_client` method.\n\n        Returns:\n            An authenticated client.\n\n        Raises:\n            ValueError: if the client is not supported.\n        \"\"\"\n        if isinstance(client_type, str):\n            client_type = ClientType(client_type)\n        client_type = client_type.value\n        get_client_method = getattr(self, f\"get_{client_type}_client\")\n        return get_client_method(**get_client_kwargs)\n\n    @_raise_help_msg(\"cloud_storage\")\n    def get_cloud_storage_client(\n        self, project: Optional[str] = None\n    ) -> \"StorageClient\":\n        \"\"\"\n        Gets an authenticated Cloud Storage client.\n\n        Args:\n            project: Name of the project to use; overrides the base\n       
         class's project if provided.\n\n        Returns:\n            An authenticated Cloud Storage client.\n\n        Examples:\n            Gets a GCP Cloud Storage client from a path.\n            ```python\n            from prefect import flow\n            from prefect_gcp.credentials import GcpCredentials\n\n            @flow()\n            def example_get_client_flow():\n                service_account_file = \"~/.secrets/prefect-service-account.json\"\n                client = GcpCredentials(\n                    service_account_file=service_account_file\n                ).get_cloud_storage_client()\n            example_get_client_flow()\n            ```\n\n            Gets a GCP Cloud Storage client from a dictionary.\n            ```python\n            from prefect import flow\n            from prefect_gcp.credentials import GcpCredentials\n\n            @flow()\n            def example_get_client_flow():\n                service_account_info = {\n                    \"type\": \"service_account\",\n                    \"project_id\": \"project_id\",\n                    \"private_key_id\": \"private_key_id\",\n                    \"private_key\": \"private_key\",\n                    \"client_email\": \"client_email\",\n                    \"client_id\": \"client_id\",\n                    \"auth_uri\": \"auth_uri\",\n                    \"token_uri\": \"token_uri\",\n                    \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n                    \"client_x509_cert_url\": \"client_x509_cert_url\"\n                }\n                client = GcpCredentials(\n                    service_account_info=service_account_info\n                ).get_cloud_storage_client()\n            example_get_client_flow()\n            ```\n        \"\"\"\n        credentials = self.get_credentials_from_service_account()\n\n        # override class project if method project is provided\n        project = project or self.project\n        storage_client = StorageClient(credentials=credentials, project=project)\n        return storage_client\n\n    @_raise_help_msg(\"bigquery\")\n    def get_bigquery_client(\n        self, project: str = None, location: str = None\n    ) -> \"BigQueryClient\":\n        \"\"\"\n        Gets an authenticated BigQuery client.\n\n        Args:\n            project: Name of the project to use; overrides the base\n                class's project if provided.\n            location: Location to use.\n\n        Returns:\n            An authenticated BigQuery client.\n\n        Examples:\n            Gets a GCP BigQuery client from a path.\n            ```python\n            from prefect import flow\n            from prefect_gcp.credentials import GcpCredentials\n\n            @flow()\n            def example_get_client_flow():\n                service_account_file = \"~/.secrets/prefect-service-account.json\"\n                client = GcpCredentials(\n                    service_account_file=service_account_file\n                ).get_bigquery_client()\n            example_get_client_flow()\n            ```\n\n            Gets a GCP BigQuery client from a dictionary.\n            ```python\n            from prefect import flow\n            from prefect_gcp.credentials import GcpCredentials\n\n            @flow()\n            def example_get_client_flow():\n                service_account_info = {\n                    \"type\": \"service_account\",\n                    \"project_id\": \"project_id\",\n                    \"private_key_id\": 
\"private_key_id\",\n                    \"private_key\": \"private_key\",\n                    \"client_email\": \"client_email\",\n                    \"client_id\": \"client_id\",\n                    \"auth_uri\": \"auth_uri\",\n                    \"token_uri\": \"token_uri\",\n                    \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n                    \"client_x509_cert_url\": \"client_x509_cert_url\"\n                }\n                client = GcpCredentials(\n                    service_account_info=service_account_info\n                ).get_bigquery_client()\n\n            example_get_client_flow()\n            ```\n        \"\"\"\n        credentials = self.get_credentials_from_service_account()\n\n        # override class project if method project is provided\n        project = project or self.project\n        big_query_client = BigQueryClient(\n            credentials=credentials, project=project, location=location\n        )\n        return big_query_client\n\n    @_raise_help_msg(\"secret_manager\")\n    def get_secret_manager_client(self) -> \"SecretManagerServiceClient\":\n        \"\"\"\n        Gets an authenticated Secret Manager Service client.\n\n        Returns:\n            An authenticated Secret Manager Service client.\n\n        Examples:\n            Gets a GCP Secret Manager client from a path.\n            ```python\n            from prefect import flow\n            from prefect_gcp.credentials import GcpCredentials\n\n            @flow()\n            def example_get_client_flow():\n                service_account_file = \"~/.secrets/prefect-service-account.json\"\n                client = GcpCredentials(\n                    service_account_file=service_account_file\n                ).get_secret_manager_client()\n            example_get_client_flow()\n            ```\n\n            Gets a GCP Cloud Storage client from a dictionary.\n            ```python\n            from prefect import flow\n            from prefect_gcp.credentials import GcpCredentials\n\n            @flow()\n            def example_get_client_flow():\n                service_account_info = {\n                    \"type\": \"service_account\",\n                    \"project_id\": \"project_id\",\n                    \"private_key_id\": \"private_key_id\",\n                    \"private_key\": \"private_key\",\n                    \"client_email\": \"client_email\",\n                    \"client_id\": \"client_id\",\n                    \"auth_uri\": \"auth_uri\",\n                    \"token_uri\": \"token_uri\",\n                    \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n                    \"client_x509_cert_url\": \"client_x509_cert_url\"\n                }\n                client = GcpCredentials(\n                    service_account_info=service_account_info\n                ).get_secret_manager_client()\n            example_get_client_flow()\n            ```\n        \"\"\"\n        credentials = self.get_credentials_from_service_account()\n\n        # doesn't accept project; must pass in project in tasks\n        secret_manager_client = SecretManagerServiceClient(credentials=credentials)\n        return secret_manager_client\n\n    @_raise_help_msg(\"aiplatform\")\n    def get_job_service_client(\n        self, client_options: Union[Dict[str, Any], ClientOptions] = None\n    ) -> \"JobServiceClient\":\n        \"\"\"\n        Gets an authenticated Job Service client for Vertex AI.\n\n        Returns:\n            An 
authenticated Job Service client.\n\n        Examples:\n            Gets a GCP Job Service client from a path.\n            ```python\n            from prefect import flow\n            from prefect_gcp.credentials import GcpCredentials\n\n            @flow()\n            def example_get_client_flow():\n                service_account_file = \"~/.secrets/prefect-service-account.json\"\n                client = GcpCredentials(\n                    service_account_file=service_account_file\n                ).get_job_service_client()\n\n            example_get_client_flow()\n            ```\n\n            Gets a GCP Cloud Storage client from a dictionary.\n            ```python\n            from prefect import flow\n            from prefect_gcp.credentials import GcpCredentials\n\n            @flow()\n            def example_get_client_flow():\n                service_account_info = {\n                    \"type\": \"service_account\",\n                    \"project_id\": \"project_id\",\n                    \"private_key_id\": \"private_key_id\",\n                    \"private_key\": \"private_key\",\n                    \"client_email\": \"client_email\",\n                    \"client_id\": \"client_id\",\n                    \"auth_uri\": \"auth_uri\",\n                    \"token_uri\": \"token_uri\",\n                    \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n                    \"client_x509_cert_url\": \"client_x509_cert_url\"\n                }\n                client = GcpCredentials(\n                    service_account_info=service_account_info\n                ).get_job_service_client()\n\n            example_get_client_flow()\n            ```\n        \"\"\"\n        if isinstance(client_options, dict):\n            client_options = from_dict(client_options)\n\n        credentials = self.get_credentials_from_service_account()\n        return JobServiceClient(credentials=credentials, client_options=client_options)\n\n    @_raise_help_msg(\"aiplatform\")\n    def get_job_service_async_client(\n        self, client_options: Union[Dict[str, Any], ClientOptions] = None\n    ) -> \"JobServiceAsyncClient\":\n        \"\"\"\n        Gets an authenticated Job Service async client for Vertex AI.\n\n        Returns:\n            An authenticated Job Service async client.\n\n        Examples:\n            Gets a GCP Job Service client from a path.\n            ```python\n            from prefect import flow\n            from prefect_gcp.credentials import GcpCredentials\n\n            @flow()\n            def example_get_client_flow():\n                service_account_file = \"~/.secrets/prefect-service-account.json\"\n                client = GcpCredentials(\n                    service_account_file=service_account_file\n                ).get_job_service_async_client()\n\n            example_get_client_flow()\n            ```\n\n            Gets a GCP Cloud Storage client from a dictionary.\n            ```python\n            from prefect import flow\n            from prefect_gcp.credentials import GcpCredentials\n\n            @flow()\n            def example_get_client_flow():\n                service_account_info = {\n                    \"type\": \"service_account\",\n                    \"project_id\": \"project_id\",\n                    \"private_key_id\": \"private_key_id\",\n                    \"private_key\": \"private_key\",\n                    \"client_email\": \"client_email\",\n                    \"client_id\": \"client_id\",\n                    
\"auth_uri\": \"auth_uri\",\n                    \"token_uri\": \"token_uri\",\n                    \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n                    \"client_x509_cert_url\": \"client_x509_cert_url\"\n                }\n                client = GcpCredentials(\n                    service_account_info=service_account_info\n                ).get_job_service_async_client()\n\n            example_get_client_flow()\n            ```\n        \"\"\"\n        if isinstance(client_options, dict):\n            client_options = from_dict(client_options)\n\n        return _get_job_service_async_client_cached(\n            self, tuple(client_options.__dict__.items())\n        )\n
        "},{"location":"integrations/prefect-gcp/credentials/#prefect_gcp.credentials.GcpCredentials.get_access_token","title":"get_access_token async","text":"Source code in prefect_gcp/credentials.py
        @sync_compatible\nasync def get_access_token(self):\n    \"\"\"\n    See: https://stackoverflow.com/a/69107745\n    Also: https://www.jhanley.com/google-cloud-creating-oauth-access-tokens-for-rest-api-calls/\n    \"\"\"  # noqa\n    request = google.auth.transport.requests.Request()\n    credentials = self.get_credentials_from_service_account()\n    await run_sync_in_worker_thread(credentials.refresh, request)\n    return credentials.token\n
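
        A brief usage sketch (assuming a previously saved block named BLOCK_NAME): because the method is wrapped with sync_compatible, it can be called directly from synchronous code and awaited from asynchronous code.

        from prefect_gcp import GcpCredentials\n\ngcp_credentials = GcpCredentials.load(\"BLOCK_NAME\")\n# Returns a short-lived OAuth access token; use `await` instead when inside async code\ntoken = gcp_credentials.get_access_token()\n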
        "},{"location":"integrations/prefect-gcp/credentials/#prefect_gcp.credentials.GcpCredentials.get_bigquery_client","title":"get_bigquery_client","text":"

        Gets an authenticated BigQuery client.

        Parameters:

        Name Type Description Default project str

        Name of the project to use; overrides the base class's project if provided.

        None location str

        Location to use.

        None

        Returns:

        Type Description Client

        An authenticated BigQuery client.

        Examples:

        Gets a GCP BigQuery client from a path.

        from prefect import flow\nfrom prefect_gcp.credentials import GcpCredentials\n\n@flow()\ndef example_get_client_flow():\n    service_account_file = \"~/.secrets/prefect-service-account.json\"\n    client = GcpCredentials(\n        service_account_file=service_account_file\n    ).get_bigquery_client()\nexample_get_client_flow()\n

        Gets a GCP BigQuery client from a dictionary.

        from prefect import flow\nfrom prefect_gcp.credentials import GcpCredentials\n\n@flow()\ndef example_get_client_flow():\n    service_account_info = {\n        \"type\": \"service_account\",\n        \"project_id\": \"project_id\",\n        \"private_key_id\": \"private_key_id\",\n        \"private_key\": \"private_key\",\n        \"client_email\": \"client_email\",\n        \"client_id\": \"client_id\",\n        \"auth_uri\": \"auth_uri\",\n        \"token_uri\": \"token_uri\",\n        \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n        \"client_x509_cert_url\": \"client_x509_cert_url\"\n    }\n    client = GcpCredentials(\n        service_account_info=service_account_info\n    ).get_bigquery_client()\n\nexample_get_client_flow()\n

        Source code in prefect_gcp/credentials.py
        @_raise_help_msg(\"bigquery\")\ndef get_bigquery_client(\n    self, project: str = None, location: str = None\n) -> \"BigQueryClient\":\n    \"\"\"\n    Gets an authenticated BigQuery client.\n\n    Args:\n        project: Name of the project to use; overrides the base\n            class's project if provided.\n        location: Location to use.\n\n    Returns:\n        An authenticated BigQuery client.\n\n    Examples:\n        Gets a GCP BigQuery client from a path.\n        ```python\n        from prefect import flow\n        from prefect_gcp.credentials import GcpCredentials\n\n        @flow()\n        def example_get_client_flow():\n            service_account_file = \"~/.secrets/prefect-service-account.json\"\n            client = GcpCredentials(\n                service_account_file=service_account_file\n            ).get_bigquery_client()\n        example_get_client_flow()\n        ```\n\n        Gets a GCP BigQuery client from a dictionary.\n        ```python\n        from prefect import flow\n        from prefect_gcp.credentials import GcpCredentials\n\n        @flow()\n        def example_get_client_flow():\n            service_account_info = {\n                \"type\": \"service_account\",\n                \"project_id\": \"project_id\",\n                \"private_key_id\": \"private_key_id\",\n                \"private_key\": \"private_key\",\n                \"client_email\": \"client_email\",\n                \"client_id\": \"client_id\",\n                \"auth_uri\": \"auth_uri\",\n                \"token_uri\": \"token_uri\",\n                \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n                \"client_x509_cert_url\": \"client_x509_cert_url\"\n            }\n            client = GcpCredentials(\n                service_account_info=service_account_info\n            ).get_bigquery_client()\n\n        example_get_client_flow()\n        ```\n    \"\"\"\n    credentials = self.get_credentials_from_service_account()\n\n    # override class project if method project is provided\n    project = project or self.project\n    big_query_client = BigQueryClient(\n        credentials=credentials, project=project, location=location\n    )\n    return big_query_client\n
        "},{"location":"integrations/prefect-gcp/credentials/#prefect_gcp.credentials.GcpCredentials.get_client","title":"get_client","text":"

        Helper method to dynamically get a client type.

        Parameters:

        Name Type Description Default client_type Union[str, ClientType]

        The name of the client to get.

        required **get_client_kwargs Dict[str, Any]

        Additional keyword arguments to pass to the get_*_client method.

        {}

        Returns:

        Type Description Any

        An authenticated client.

        Raises:

        Type Description ValueError

        if the client is not supported.
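
        A minimal usage sketch (assuming a previously saved block named BLOCK_NAME and the relevant GCP client library installed): the client_type string is resolved to the matching get_*_client method.

        from prefect_gcp import GcpCredentials\n\ngcp_credentials = GcpCredentials.load(\"BLOCK_NAME\")\n# \"cloud_storage\" resolves to get_cloud_storage_client(); extra kwargs are forwarded to it\nstorage_client = gcp_credentials.get_client(\"cloud_storage\")\n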

        Source code in prefect_gcp/credentials.py
        def get_client(\n    self,\n    client_type: Union[str, ClientType],\n    **get_client_kwargs: Dict[str, Any],\n) -> Any:\n    \"\"\"\n    Helper method to dynamically get a client type.\n\n    Args:\n        client_type: The name of the client to get.\n        **get_client_kwargs: Additional keyword arguments to pass to the\n            `get_*_client` method.\n\n    Returns:\n        An authenticated client.\n\n    Raises:\n        ValueError: if the client is not supported.\n    \"\"\"\n    if isinstance(client_type, str):\n        client_type = ClientType(client_type)\n    client_type = client_type.value\n    get_client_method = getattr(self, f\"get_{client_type}_client\")\n    return get_client_method(**get_client_kwargs)\n
        "},{"location":"integrations/prefect-gcp/credentials/#prefect_gcp.credentials.GcpCredentials.get_cloud_storage_client","title":"get_cloud_storage_client","text":"

        Gets an authenticated Cloud Storage client.

        Parameters:

        Name Type Description Default project Optional[str]

        Name of the project to use; overrides the base class's project if provided.

        None

        Returns:

        Type Description Client

        An authenticated Cloud Storage client.

        Examples:

        Gets a GCP Cloud Storage client from a path.

        from prefect import flow\nfrom prefect_gcp.credentials import GcpCredentials\n\n@flow()\ndef example_get_client_flow():\n    service_account_file = \"~/.secrets/prefect-service-account.json\"\n    client = GcpCredentials(\n        service_account_file=service_account_file\n    ).get_cloud_storage_client()\nexample_get_client_flow()\n

        Gets a GCP Cloud Storage client from a dictionary.

        from prefect import flow\nfrom prefect_gcp.credentials import GcpCredentials\n\n@flow()\ndef example_get_client_flow():\n    service_account_info = {\n        \"type\": \"service_account\",\n        \"project_id\": \"project_id\",\n        \"private_key_id\": \"private_key_id\",\n        \"private_key\": \"private_key\",\n        \"client_email\": \"client_email\",\n        \"client_id\": \"client_id\",\n        \"auth_uri\": \"auth_uri\",\n        \"token_uri\": \"token_uri\",\n        \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n        \"client_x509_cert_url\": \"client_x509_cert_url\"\n    }\n    client = GcpCredentials(\n        service_account_info=service_account_info\n    ).get_cloud_storage_client()\nexample_get_client_flow()\n

        Source code in prefect_gcp/credentials.py
        @_raise_help_msg(\"cloud_storage\")\ndef get_cloud_storage_client(\n    self, project: Optional[str] = None\n) -> \"StorageClient\":\n    \"\"\"\n    Gets an authenticated Cloud Storage client.\n\n    Args:\n        project: Name of the project to use; overrides the base\n            class's project if provided.\n\n    Returns:\n        An authenticated Cloud Storage client.\n\n    Examples:\n        Gets a GCP Cloud Storage client from a path.\n        ```python\n        from prefect import flow\n        from prefect_gcp.credentials import GcpCredentials\n\n        @flow()\n        def example_get_client_flow():\n            service_account_file = \"~/.secrets/prefect-service-account.json\"\n            client = GcpCredentials(\n                service_account_file=service_account_file\n            ).get_cloud_storage_client()\n        example_get_client_flow()\n        ```\n\n        Gets a GCP Cloud Storage client from a dictionary.\n        ```python\n        from prefect import flow\n        from prefect_gcp.credentials import GcpCredentials\n\n        @flow()\n        def example_get_client_flow():\n            service_account_info = {\n                \"type\": \"service_account\",\n                \"project_id\": \"project_id\",\n                \"private_key_id\": \"private_key_id\",\n                \"private_key\": \"private_key\",\n                \"client_email\": \"client_email\",\n                \"client_id\": \"client_id\",\n                \"auth_uri\": \"auth_uri\",\n                \"token_uri\": \"token_uri\",\n                \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n                \"client_x509_cert_url\": \"client_x509_cert_url\"\n            }\n            client = GcpCredentials(\n                service_account_info=service_account_info\n            ).get_cloud_storage_client()\n        example_get_client_flow()\n        ```\n    \"\"\"\n    credentials = self.get_credentials_from_service_account()\n\n    # override class project if method project is provided\n    project = project or self.project\n    storage_client = StorageClient(credentials=credentials, project=project)\n    return storage_client\n
        "},{"location":"integrations/prefect-gcp/credentials/#prefect_gcp.credentials.GcpCredentials.get_credentials_from_service_account","title":"get_credentials_from_service_account","text":"

        Helper method to serialize credentials by using either service_account_file or service_account_info.
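
        For example (a minimal sketch, assuming a previously saved block named BLOCK_NAME), the returned google.oauth2 Credentials object can be passed directly to other Google client libraries:

        from prefect_gcp import GcpCredentials\n\ngcp_credentials = GcpCredentials.load(\"BLOCK_NAME\")\n# Builds Credentials from service_account_file/info, or falls back to Application Default Credentials\ncredentials = gcp_credentials.get_credentials_from_service_account()\n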

        Source code in prefect_gcp/credentials.py
        def get_credentials_from_service_account(self) -> Credentials:\n    \"\"\"\n    Helper method to serialize credentials by using either\n    service_account_file or service_account_info.\n    \"\"\"\n    if self.service_account_info:\n        credentials = Credentials.from_service_account_info(\n            self.service_account_info.get_secret_value(),\n            scopes=[\"https://www.googleapis.com/auth/cloud-platform\"],\n        )\n    elif self.service_account_file:\n        credentials = Credentials.from_service_account_file(\n            self.service_account_file,\n            scopes=[\"https://www.googleapis.com/auth/cloud-platform\"],\n        )\n    else:\n        credentials, _ = google.auth.default()\n    return credentials\n
        "},{"location":"integrations/prefect-gcp/credentials/#prefect_gcp.credentials.GcpCredentials.get_job_service_async_client","title":"get_job_service_async_client","text":"

        Gets an authenticated Job Service async client for Vertex AI.

        Returns:

        Type Description JobServiceAsyncClient

        An authenticated Job Service async client.

        Examples:

        Gets a GCP Job Service async client from a path.

        from prefect import flow\nfrom prefect_gcp.credentials import GcpCredentials\n\n@flow()\ndef example_get_client_flow():\n    service_account_file = \"~/.secrets/prefect-service-account.json\"\n    client = GcpCredentials(\n        service_account_file=service_account_file\n    ).get_job_service_async_client()\n\nexample_get_client_flow()\n

        Gets a GCP Job Service async client from a dictionary.

        from prefect import flow\nfrom prefect_gcp.credentials import GcpCredentials\n\n@flow()\ndef example_get_client_flow():\n    service_account_info = {\n        \"type\": \"service_account\",\n        \"project_id\": \"project_id\",\n        \"private_key_id\": \"private_key_id\",\n        \"private_key\": \"private_key\",\n        \"client_email\": \"client_email\",\n        \"client_id\": \"client_id\",\n        \"auth_uri\": \"auth_uri\",\n        \"token_uri\": \"token_uri\",\n        \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n        \"client_x509_cert_url\": \"client_x509_cert_url\"\n    }\n    client = GcpCredentials(\n        service_account_info=service_account_info\n    ).get_job_service_async_client()\n\nexample_get_client_flow()\n

        Source code in prefect_gcp/credentials.py
        @_raise_help_msg(\"aiplatform\")\ndef get_job_service_async_client(\n    self, client_options: Union[Dict[str, Any], ClientOptions] = None\n) -> \"JobServiceAsyncClient\":\n    \"\"\"\n    Gets an authenticated Job Service async client for Vertex AI.\n\n    Returns:\n        An authenticated Job Service async client.\n\n    Examples:\n        Gets a GCP Job Service client from a path.\n        ```python\n        from prefect import flow\n        from prefect_gcp.credentials import GcpCredentials\n\n        @flow()\n        def example_get_client_flow():\n            service_account_file = \"~/.secrets/prefect-service-account.json\"\n            client = GcpCredentials(\n                service_account_file=service_account_file\n            ).get_job_service_async_client()\n\n        example_get_client_flow()\n        ```\n\n        Gets a GCP Cloud Storage client from a dictionary.\n        ```python\n        from prefect import flow\n        from prefect_gcp.credentials import GcpCredentials\n\n        @flow()\n        def example_get_client_flow():\n            service_account_info = {\n                \"type\": \"service_account\",\n                \"project_id\": \"project_id\",\n                \"private_key_id\": \"private_key_id\",\n                \"private_key\": \"private_key\",\n                \"client_email\": \"client_email\",\n                \"client_id\": \"client_id\",\n                \"auth_uri\": \"auth_uri\",\n                \"token_uri\": \"token_uri\",\n                \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n                \"client_x509_cert_url\": \"client_x509_cert_url\"\n            }\n            client = GcpCredentials(\n                service_account_info=service_account_info\n            ).get_job_service_async_client()\n\n        example_get_client_flow()\n        ```\n    \"\"\"\n    if isinstance(client_options, dict):\n        client_options = from_dict(client_options)\n\n    return _get_job_service_async_client_cached(\n        self, tuple(client_options.__dict__.items())\n    )\n
        "},{"location":"integrations/prefect-gcp/credentials/#prefect_gcp.credentials.GcpCredentials.get_job_service_client","title":"get_job_service_client","text":"

        Gets an authenticated Job Service client for Vertex AI.

        Returns:

        Type Description JobServiceClient

        An authenticated Job Service client.

        Examples:

        Gets a GCP Job Service client from a path.

        from prefect import flow\nfrom prefect_gcp.credentials import GcpCredentials\n\n@flow()\ndef example_get_client_flow():\n    service_account_file = \"~/.secrets/prefect-service-account.json\"\n    client = GcpCredentials(\n        service_account_file=service_account_file\n    ).get_job_service_client()\n\nexample_get_client_flow()\n

        Gets a GCP Job Service client from a dictionary.

        from prefect import flow\nfrom prefect_gcp.credentials import GcpCredentials\n\n@flow()\ndef example_get_client_flow():\n    service_account_info = {\n        \"type\": \"service_account\",\n        \"project_id\": \"project_id\",\n        \"private_key_id\": \"private_key_id\",\n        \"private_key\": \"private_key\",\n        \"client_email\": \"client_email\",\n        \"client_id\": \"client_id\",\n        \"auth_uri\": \"auth_uri\",\n        \"token_uri\": \"token_uri\",\n        \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n        \"client_x509_cert_url\": \"client_x509_cert_url\"\n    }\n    client = GcpCredentials(\n        service_account_info=service_account_info\n    ).get_job_service_client()\n\nexample_get_client_flow()\n

        Source code in prefect_gcp/credentials.py
        @_raise_help_msg(\"aiplatform\")\ndef get_job_service_client(\n    self, client_options: Union[Dict[str, Any], ClientOptions] = None\n) -> \"JobServiceClient\":\n    \"\"\"\n    Gets an authenticated Job Service client for Vertex AI.\n\n    Returns:\n        An authenticated Job Service client.\n\n    Examples:\n        Gets a GCP Job Service client from a path.\n        ```python\n        from prefect import flow\n        from prefect_gcp.credentials import GcpCredentials\n\n        @flow()\n        def example_get_client_flow():\n            service_account_file = \"~/.secrets/prefect-service-account.json\"\n            client = GcpCredentials(\n                service_account_file=service_account_file\n            ).get_job_service_client()\n\n        example_get_client_flow()\n        ```\n\n        Gets a GCP Cloud Storage client from a dictionary.\n        ```python\n        from prefect import flow\n        from prefect_gcp.credentials import GcpCredentials\n\n        @flow()\n        def example_get_client_flow():\n            service_account_info = {\n                \"type\": \"service_account\",\n                \"project_id\": \"project_id\",\n                \"private_key_id\": \"private_key_id\",\n                \"private_key\": \"private_key\",\n                \"client_email\": \"client_email\",\n                \"client_id\": \"client_id\",\n                \"auth_uri\": \"auth_uri\",\n                \"token_uri\": \"token_uri\",\n                \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n                \"client_x509_cert_url\": \"client_x509_cert_url\"\n            }\n            client = GcpCredentials(\n                service_account_info=service_account_info\n            ).get_job_service_client()\n\n        example_get_client_flow()\n        ```\n    \"\"\"\n    if isinstance(client_options, dict):\n        client_options = from_dict(client_options)\n\n    credentials = self.get_credentials_from_service_account()\n    return JobServiceClient(credentials=credentials, client_options=client_options)\n
        "},{"location":"integrations/prefect-gcp/credentials/#prefect_gcp.credentials.GcpCredentials.get_secret_manager_client","title":"get_secret_manager_client","text":"

        Gets an authenticated Secret Manager Service client.

        Returns:

        Type Description SecretManagerServiceClient

        An authenticated Secret Manager Service client.

        Examples:

        Gets a GCP Secret Manager client from a path.

        from prefect import flow\nfrom prefect_gcp.credentials import GcpCredentials\n\n@flow()\ndef example_get_client_flow():\n    service_account_file = \"~/.secrets/prefect-service-account.json\"\n    client = GcpCredentials(\n        service_account_file=service_account_file\n    ).get_secret_manager_client()\nexample_get_client_flow()\n

        Gets a GCP Secret Manager client from a dictionary.

        from prefect import flow\nfrom prefect_gcp.credentials import GcpCredentials\n\n@flow()\ndef example_get_client_flow():\n    service_account_info = {\n        \"type\": \"service_account\",\n        \"project_id\": \"project_id\",\n        \"private_key_id\": \"private_key_id\",\n        \"private_key\": \"private_key\",\n        \"client_email\": \"client_email\",\n        \"client_id\": \"client_id\",\n        \"auth_uri\": \"auth_uri\",\n        \"token_uri\": \"token_uri\",\n        \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n        \"client_x509_cert_url\": \"client_x509_cert_url\"\n    }\n    client = GcpCredentials(\n        service_account_info=service_account_info\n    ).get_secret_manager_client()\nexample_get_client_flow()\n

        Source code in prefect_gcp/credentials.py
        @_raise_help_msg(\"secret_manager\")\ndef get_secret_manager_client(self) -> \"SecretManagerServiceClient\":\n    \"\"\"\n    Gets an authenticated Secret Manager Service client.\n\n    Returns:\n        An authenticated Secret Manager Service client.\n\n    Examples:\n        Gets a GCP Secret Manager client from a path.\n        ```python\n        from prefect import flow\n        from prefect_gcp.credentials import GcpCredentials\n\n        @flow()\n        def example_get_client_flow():\n            service_account_file = \"~/.secrets/prefect-service-account.json\"\n            client = GcpCredentials(\n                service_account_file=service_account_file\n            ).get_secret_manager_client()\n        example_get_client_flow()\n        ```\n\n        Gets a GCP Cloud Storage client from a dictionary.\n        ```python\n        from prefect import flow\n        from prefect_gcp.credentials import GcpCredentials\n\n        @flow()\n        def example_get_client_flow():\n            service_account_info = {\n                \"type\": \"service_account\",\n                \"project_id\": \"project_id\",\n                \"private_key_id\": \"private_key_id\",\n                \"private_key\": \"private_key\",\n                \"client_email\": \"client_email\",\n                \"client_id\": \"client_id\",\n                \"auth_uri\": \"auth_uri\",\n                \"token_uri\": \"token_uri\",\n                \"auth_provider_x509_cert_url\": \"auth_provider_x509_cert_url\",\n                \"client_x509_cert_url\": \"client_x509_cert_url\"\n            }\n            client = GcpCredentials(\n                service_account_info=service_account_info\n            ).get_secret_manager_client()\n        example_get_client_flow()\n        ```\n    \"\"\"\n    credentials = self.get_credentials_from_service_account()\n\n    # doesn't accept project; must pass in project in tasks\n    secret_manager_client = SecretManagerServiceClient(credentials=credentials)\n    return secret_manager_client\n
        "},{"location":"integrations/prefect-gcp/gcp-worker-guide/","title":"Google Cloud Run Worker Guide","text":""},{"location":"integrations/prefect-gcp/gcp-worker-guide/#why-use-google-cloud-run-for-flow-run-execution","title":"Why use Google Cloud Run for flow run execution?","text":"

        Google Cloud Run is a fully managed compute platform that automatically scales your containerized applications.

        1. Serverless architecture: Cloud Run follows a serverless architecture, which means you don't need to manage any underlying infrastructure. Google Cloud Run automatically handles the scaling and availability of your flow run infrastructure, allowing you to focus on developing and deploying your code.

        2. Scalability: Cloud Run can automatically scale your pipeline to handle varying workloads and traffic. It can quickly respond to increased demand and scale back down during low activity periods, ensuring efficient resource utilization.

        3. Integration with Google Cloud services: Google Cloud Run easily integrates with other Google Cloud services, such as Google Cloud Storage, Google Cloud Pub/Sub, and Google Cloud Build. This interoperability enables you to build end-to-end data pipelines that use a variety of services.

        4. Portability: Since Cloud Run uses container images, you can develop your pipelines locally using Docker and then deploy them on Google Cloud Run without significant modifications. This portability allows you to run the same pipeline in different environments.

        "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#google-cloud-run-guide","title":"Google Cloud Run guide","text":"

        After completing this guide, you will have:

        1. Created a Google Cloud Service Account
        2. Created a Prefect Work Pool
        3. Deployed a Prefect Worker as a Cloud Run Service
        4. Deployed a Flow
        5. Executed the Flow as a Google Cloud Run Job

        If you're looking for a general introduction to workers, work pools, and deployments, check out the workers and work pools tutorial.

        "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#prerequisites","title":"Prerequisites","text":"

        Before starting this guide, make sure you have:

        • A Google Cloud Platform (GCP) account.
        • A project on your GCP account where you have the necessary permissions to create Cloud Run Services and Service Accounts.
        • The gcloud CLI installed on your local machine. You can follow Google Cloud's installation guide. If you're using macOS (or Linux), you can also use Homebrew for installation.
        • Docker installed on your local machine.
        • A Prefect server instance. You can sign up for a forever free Prefect Cloud Account or, alternatively, self-host a Prefect server.
        "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#step-1-create-a-google-cloud-service-account","title":"Step 1. Create a Google Cloud service account","text":"

        First, open a terminal or command prompt on your local machine where gcloud is installed. If you haven't already authenticated with gcloud, run the following command and follow the instructions to log in to your GCP account.

        gcloud auth login\n

        Next, set the project where you'd like to create the service account. Use the following command and replace <PROJECT-ID> with your GCP project's ID.

        gcloud config set project <PROJECT-ID>\n

        For example, if your project's ID is prefect-project the command will look like this:

        gcloud config set project prefect-project\n

        Now you're ready to create the service account. To do so, run this command:

        gcloud iam service-accounts create <SERVICE-ACCOUNT-NAME> --display-name=\"<DISPLAY-NAME>\"\n

        Here's an example of the command above that you can use as-is, with the service account name and display name already provided. An additional option describing the service account has also been added:

        gcloud iam service-accounts create prefect-service-account \\\n    --description=\"service account to use for the prefect worker\" \\\n    --display-name=\"prefect-service-account\"\n

        The last step of this process is to make sure the service account has the proper permissions to execute flow runs as Cloud Run jobs. Run the following commands to grant the necessary permissions:

        gcloud projects add-iam-policy-binding <PROJECT-ID> \\\n    --member=\"serviceAccount:<SERVICE-ACCOUNT-NAME>@<PROJECT-ID>.iam.gserviceaccount.com\" \\\n    --role=\"roles/iam.serviceAccountUser\"\n
        gcloud projects add-iam-policy-binding <PROJECT-ID> \\\n    --member=\"serviceAccount:<SERVICE-ACCOUNT-NAME>@<PROJECT-ID>.iam.gserviceaccount.com\" \\\n    --role=\"roles/run.admin\"\n

        "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#step-2-create-a-cloud-run-work-pool","title":"Step 2. Create a Cloud Run work pool","text":"

        Let's walk through the process of creating a Cloud Run work pool.

        "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#fill-out-the-work-pool-base-job-template","title":"Fill out the work pool base job template","text":"

        You can create a new work pool using the Prefect UI or CLI. The following command creates a work pool of type cloud-run through the CLI (replace <WORK-POOL-NAME> with the name of your work pool):

        prefect work-pool create --type cloud-run <WORK-POOL-NAME>\n

        Once the work pool is created, find the work pool in the UI and edit it.

        There are many ways to customize the base job template for the work pool. Modifying the template influences the infrastructure configuration that the worker provisions for flow runs submitted to the work pool. For this guide we are going to modify just a few of the available fields.

        Specify the region for the Cloud Run job.

        Save the name of the service account created in the first step of this guide.

        Your work pool is now ready to receive scheduled flow runs!

        "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#step-3-deploy-a-cloud-run-worker","title":"Step 3. Deploy a Cloud Run worker","text":"

        Now you can launch a Cloud Run service to host the Cloud Run worker. This worker will poll the work pool that you created in the previous step.

        Navigate back to your terminal and run the following commands to set your Prefect API key and URL as environment variables. Be sure to replace <ACCOUNT-ID> and <WORKSPACE-ID> with your Prefect account and workspace IDs (both will be available in the URL of the UI when previewing the workspace dashboard). You'll want to replace <YOUR-API-KEY> with an active API key as well.

        export PREFECT_API_URL='https://api.prefect.cloud/api/accounts/<ACCOUNT-ID>/workspaces/<WORKSPACE-ID>'\nexport PREFECT_API_KEY='<YOUR-API-KEY>'\n

        Once those variables are set, run the following shell command to deploy your worker as a service. Don't forget to replace <YOUR-SERVICE-ACCOUNT-NAME> with the name of the service account you created in the first step of this guide, and replace <WORK-POOL-NAME> with the name of the work pool you created in the second step.

        gcloud run deploy prefect-worker --image=prefecthq/prefect:2-latest \\\n--set-env-vars PREFECT_API_URL=$PREFECT_API_URL,PREFECT_API_KEY=$PREFECT_API_KEY \\\n--service-account <YOUR-SERVICE-ACCOUNT-NAME> \\\n--no-cpu-throttling \\\n--min-instances 1 \\\n--args \"prefect\",\"worker\",\"start\",\"--install-policy\",\"always\",\"--with-healthcheck\",\"-p\",\"<WORK-POOL-NAME>\",\"-t\",\"cloud-run\"\n

        After running this command, you'll be prompted to specify a region. Choose the same region that you selected when creating the Cloud Run work pool in the second step of this guide. The next prompt will ask if you'd like to allow unauthenticated invocations to your worker. For this guide, you can select \"No\".

        After a few seconds, you'll be able to see your new prefect-worker service by navigating to the Cloud Run page of your Google Cloud console. Additionally, you should be able to see a record of this worker in the Prefect UI on the work pool's page by navigating to the Worker tab. Let's not leave our worker hanging; it's time to give it a job.

        "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#step-4-deploy-a-flow","title":"Step 4. Deploy a flow","text":"

        Let's prepare a flow to run as a Cloud Run job. In this section of the guide, we'll \"bake\" our code into a Docker image, and push that image to Google Artifact Registry.

        "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#create-a-registry","title":"Create a registry","text":"

        Let's create a Docker repository in your Google Artifact Registry to host your custom image. If you already have a registry and are authenticated to it, skip ahead to the Write a flow section.

        The following command creates a repository using the gcloud CLI. You'll want to replace <REPOSITORY-NAME> with your own value:

        gcloud artifacts repositories create <REPOSITORY-NAME> \\\n--repository-format=docker --location=us\n

        Now you can authenticate to Artifact Registry:

        gcloud auth configure-docker us-docker.pkg.dev\n

        "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#write-a-flow","title":"Write a flow","text":"

        First, create a new directory. This will serve as the root of your project's repository. Within the directory, create a subdirectory called flows. Navigate to the flows subdirectory and create a new file for your flow. Feel free to write your own flow, but here's a ready-made one for your convenience:

        import httpx\nfrom prefect import flow, task\nfrom prefect.artifacts import create_markdown_artifact\n\n@task\ndef mark_it_down(temp):\n    markdown_report = f\"\"\"# Weather Report\n## Recent weather\n\n| Time        | Temperature |\n|:--------------|-------:|\n| Now | {temp} |\n| In 1 hour       | {temp + 2} |\n\"\"\"\n    create_markdown_artifact(\n        key=\"weather-report\",\n        markdown=markdown_report,\n        description=\"Very scientific weather report\",\n    )\n\n\n@flow\ndef fetch_weather(lat: float, lon: float):\n    base_url = \"https://api.open-meteo.com/v1/forecast/\"\n    weather = httpx.get(\n        base_url,\n        params=dict(latitude=lat, longitude=lon, hourly=\"temperature_2m\"),\n    )\n    most_recent_temp = float(weather.json()[\"hourly\"][\"temperature_2m\"][0])\n    mark_it_down(most_recent_temp)\n\n\nif __name__ == \"__main__\":\n    fetch_weather(38.9, -77.0)\n

        In the remainder of this guide, this script will be referred to as weather_flow.py, but you can name yours whatever you'd like.

        "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#creating-a-prefectyaml-file","title":"Creating a prefect.yaml file","text":"

        Now we're ready to make a prefect.yaml file, which will be responsible for managing the deployments of this repository. Navigate back to the root of your directory, and run the following command to create a prefect.yaml file using Prefect's docker deployment recipe.

        prefect init --recipe docker\n

        You'll receive a prompt to put in values for the image name and tag. Since we will be pushing the image to Google Artifact Registry, the name of your image should be prefixed with the path to the Docker repository you created within the registry. For example: us-docker.pkg.dev/<PROJECT-ID>/<REPOSITORY-NAME>/. You'll want to replace <PROJECT-ID> with the ID of your project in GCP. This should match the ID of the project you used in the first step of this guide. Here is an example of what this could look like:

        image_name: us-docker.pkg.dev/prefect-project/my-artifact-registry/gcp-weather-image\ntag: latest\n

        At this point, there will be a new prefect.yaml file available at the root of your project. The contents will look similar to the example below; however, I've added a combination of YAML templating options and Prefect deployment actions to build out a simple CI/CD process. Feel free to copy the contents and paste them into your prefect.yaml:

        # Welcome to your prefect.yaml file! You can use this file for storing and managing\n# configuration for deploying your flows. We recommend committing this file to source\n# control along with your flow code.\n\n# Generic metadata about this project\nname: <WORKING-DIRECTORY>\nprefect-version: 2.13.4\n\n# build section allows you to manage and build docker image\nbuild:\n- prefect_docker.deployments.steps.build_docker_image:\n    id: build_image\n    requires: prefect-docker>=0.3.1\n    image_name: <PATH-TO-ARTIFACT-REGISTRY>/gcp-weather-image\n    tag: latest\n    dockerfile: auto\n    platform: linux/amd64\n\n# push section allows you to manage if and how this project is uploaded to remote locations\npush:\n- prefect_docker.deployments.steps.push_docker_image:\n    requires: prefect-docker>=0.3.1\n    image_name: '{{ build_image.image_name }}'\n    tag: '{{ build_image.tag }}'\n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect.deployments.steps.set_working_directory:\n    directory: /opt/prefect/<WORKING-DIRECTORY>\n\n# the deployments section allows you to provide configuration for deploying flows\ndeployments:\n- name: gcp-weather-deploy\n  version: null\n  tags: []\n  description: null\n  schedule: {}\n  flow_name: null\n  entrypoint: flows/weather_flow.py:fetch_weather\n  parameters:\n    lat: 14.5994\n    lon: 28.6731\n  work_pool:\n    name: my-cloud-run-pool\n    work_queue_name: default\n    job_variables:\n      image: '{{ build_image.image }}'\n

        Tip

        After copying the example above, don't forget to replace <WORKING-DIRECTORY> with the name of the directory where your flow folder and prefect.yaml live. You'll also need to replace <PATH-TO-ARTIFACT-REGISTRY> with the path to the Docker repository in your Google Artifact Registry.

        To get a better understanding of the different components of the prefect.yaml file above and what they do, feel free to read this next section. Otherwise, you can skip ahead to Flow Deployment.

        In the build section of the prefect.yaml the following step is executed at deployment build time:

        1. prefect_docker.deployments.steps.build_docker_image: automatically builds a Docker image using the name and tag chosen previously.

        Warning

        If you are using an ARM-based chip (such as an M1 or M2 Mac), add platform: linux/amd64 to your build_docker_image step to ensure that your Docker image uses an AMD64 architecture. For example:

        - prefect_docker.deployments.steps.build_docker_image:\nid: build_image\nrequires: prefect-docker>=0.3.1\nimage_name: us-docker.pkg.dev/prefect-project/my-docker-repository/gcp-weather-image\ntag: latest\ndockerfile: auto\nplatform: linux/amd64\n

        The push section sends the Docker image to the Docker repository in your Google Artifact Registry, so that it can be easily accessed by the worker for flow run execution.

        The pull section sets the working directory for the process prior to importing your flow.

        In the deployments section of the prefect.yaml file above, you'll see that there is a deployment declaration named gcp-weather-deploy. Within the declaration, the entrypoint for the flow is specified along with some default parameters that will be passed to the flow at runtime. Last but not least, the name of the work pool that we created in step 2 of this guide is specified.

        "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#flow-deployment","title":"Flow deployment","text":"

        Once you're happy with the specifications in the prefect.yaml file, run the following command in the terminal to deploy your flow:

        prefect deploy --name gcp-weather-deploy\n

        Once the flow is deployed to Prefect Cloud or your local Prefect Server, it's time to queue up a flow run!

        "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#step-5-flow-execution","title":"Step 5. Flow execution","text":"

        Find your deployment in the UI, and hit the Quick Run button. You have now successfully submitted a flow run to your Cloud Run worker! If you used the flow script provided in this guide, check the Artifacts tab for the flow run once it completes. You'll have a nice little weather report waiting for you there. Hope your day is a sunny one!
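
        If you'd rather trigger the run programmatically than through the UI, a minimal sketch using Prefect's run_deployment helper could look like the following. It assumes the flow and deployment names used in this guide (fetch-weather and gcp-weather-deploy):

        from prefect.deployments import run_deployment

        # Submit a run of the deployment created above; the Cloud Run worker will pick it up.
        # "fetch-weather/gcp-weather-deploy" assumes the flow and deployment names from this guide.
        flow_run = run_deployment(name="fetch-weather/gcp-weather-deploy")
        print(flow_run.state)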

        "},{"location":"integrations/prefect-gcp/gcp-worker-guide/#recap-and-next-steps","title":"Recap and next steps","text":"

        Congratulations on completing this guide! Looking back on our journey, you have:

        1. Created a Google Cloud service account
        2. Created a Cloud Run work pool
        3. Deployed a Cloud Run worker
        4. Deployed a flow
        5. Executed a flow

        For next steps, you could:

        • Take a look at some of the other work pools Prefect has to offer
        • Do a deep dive on Prefect concepts
        • Try out another guide to explore new deployment patterns and recipes

        The world is your oyster \ud83e\uddaa\u2728.

        "},{"location":"integrations/prefect-gcp/secret_manager/","title":"Secret Manager","text":""},{"location":"integrations/prefect-gcp/secret_manager/#prefect_gcp.secret_manager","title":"prefect_gcp.secret_manager","text":""},{"location":"integrations/prefect-gcp/secret_manager/#prefect_gcp.secret_manager.GcpSecret","title":"GcpSecret","text":"

        Bases: SecretBlock

        Manages a secret in Google Cloud Platform's Secret Manager.

        Attributes:

        Name Type Description gcp_credentials GcpCredentials

        Credentials to use for authentication with GCP.

        secret_name str

        Name of the secret to manage.

        secret_version str

        Version number of the secret to use, or \"latest\".

        Source code in prefect_gcp/secret_manager.py
        class GcpSecret(SecretBlock):\n    \"\"\"\n    Manages a secret in Google Cloud Platform's Secret Manager.\n\n    Attributes:\n        gcp_credentials: Credentials to use for authentication with GCP.\n        secret_name: Name of the secret to manage.\n        secret_version: Version number of the secret to use, or \"latest\".\n    \"\"\"\n\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/10424e311932e31c477ac2b9ef3d53cefbaad708-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-gcp/secret_manager/#prefect_gcp.secret_manager.GcpSecret\"  # noqa: E501\n\n    gcp_credentials: GcpCredentials\n    secret_name: str = Field(default=..., description=\"Name of the secret to manage.\")\n    secret_version: str = Field(\n        default=\"latest\", description=\"Version number of the secret to use.\"\n    )\n\n    @sync_compatible\n    async def read_secret(self) -> bytes:\n        \"\"\"\n        Reads the secret data from the secret storage service.\n\n        Returns:\n            The secret data as bytes.\n        \"\"\"\n        client = self.gcp_credentials.get_secret_manager_client()\n        project = self.gcp_credentials.project\n        name = f\"projects/{project}/secrets/{self.secret_name}/versions/{self.secret_version}\"  # noqa\n        request = AccessSecretVersionRequest(name=name)\n\n        self.logger.debug(f\"Preparing to read secret data from {name!r}.\")\n        response = await run_sync_in_worker_thread(\n            client.access_secret_version, request=request\n        )\n        secret = response.payload.data\n        self.logger.info(f\"The secret {name!r} data was successfully read.\")\n        return secret\n\n    @sync_compatible\n    async def write_secret(self, secret_data: bytes) -> str:\n        \"\"\"\n        Writes the secret data to the secret storage service; if it doesn't exist\n        it will be created.\n\n        Args:\n            secret_data: The secret to write.\n\n        Returns:\n            The path that the secret was written to.\n        \"\"\"\n        client = self.gcp_credentials.get_secret_manager_client()\n        project = self.gcp_credentials.project\n        parent = f\"projects/{project}/secrets/{self.secret_name}\"\n        payload = SecretPayload(data=secret_data)\n        add_request = AddSecretVersionRequest(parent=parent, payload=payload)\n\n        self.logger.debug(f\"Preparing to write secret data to {parent!r}.\")\n        try:\n            response = await run_sync_in_worker_thread(\n                client.add_secret_version, request=add_request\n            )\n        except NotFound:\n            self.logger.info(\n                f\"The secret {parent!r} does not exist yet, creating it now.\"\n            )\n            create_parent = f\"projects/{project}\"\n            secret_id = self.secret_name\n            secret = Secret(replication=Replication(automatic=Replication.Automatic()))\n            create_request = CreateSecretRequest(\n                parent=create_parent, secret_id=secret_id, secret=secret\n            )\n            await run_sync_in_worker_thread(\n                client.create_secret, request=create_request\n            )\n\n            self.logger.debug(f\"Preparing to write secret data to {parent!r} again.\")\n            response = await run_sync_in_worker_thread(\n                client.add_secret_version, request=add_request\n            )\n\n        self.logger.info(f\"The secret data was written successfully to {parent!r}.\")\n       
 return response.name\n\n    @sync_compatible\n    async def delete_secret(self) -> str:\n        \"\"\"\n        Deletes the secret from the secret storage service.\n\n        Returns:\n            The path that the secret was deleted from.\n        \"\"\"\n        client = self.gcp_credentials.get_secret_manager_client()\n        project = self.gcp_credentials.project\n\n        name = f\"projects/{project}/secrets/{self.secret_name}\"\n        request = DeleteSecretRequest(name=name)\n\n        self.logger.debug(f\"Preparing to delete the secret {name!r}.\")\n        await run_sync_in_worker_thread(client.delete_secret, request=request)\n        self.logger.info(f\"The secret {name!r} was successfully deleted.\")\n        return name\n
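
        For reference, here is a minimal usage sketch of the GcpSecret block; the project ID, secret name, and secret value are placeholders you would replace with your own:

        from prefect_gcp import GcpCredentials
        from prefect_gcp.secret_manager import GcpSecret

        # Placeholder project and secret names; replace with your own values.
        gcp_secret = GcpSecret(
            gcp_credentials=GcpCredentials(project="my-project"),
            secret_name="my-secret",
        )

        # Write the secret (creating it if it does not exist yet), then read it back.
        gcp_secret.write_secret(b"my-secret-value")
        print(gcp_secret.read_secret())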
        "},{"location":"integrations/prefect-gcp/secret_manager/#prefect_gcp.secret_manager.GcpSecret.delete_secret","title":"delete_secret async","text":"

        Deletes the secret from the secret storage service.

        Returns:

        Type Description str

        The path that the secret was deleted from.

        Source code in prefect_gcp/secret_manager.py
        @sync_compatible\nasync def delete_secret(self) -> str:\n    \"\"\"\n    Deletes the secret from the secret storage service.\n\n    Returns:\n        The path that the secret was deleted from.\n    \"\"\"\n    client = self.gcp_credentials.get_secret_manager_client()\n    project = self.gcp_credentials.project\n\n    name = f\"projects/{project}/secrets/{self.secret_name}\"\n    request = DeleteSecretRequest(name=name)\n\n    self.logger.debug(f\"Preparing to delete the secret {name!r}.\")\n    await run_sync_in_worker_thread(client.delete_secret, request=request)\n    self.logger.info(f\"The secret {name!r} was successfully deleted.\")\n    return name\n
        "},{"location":"integrations/prefect-gcp/secret_manager/#prefect_gcp.secret_manager.GcpSecret.read_secret","title":"read_secret async","text":"

        Reads the secret data from the secret storage service.

        Returns:

        Type Description bytes

        The secret data as bytes.

        Source code in prefect_gcp/secret_manager.py
        @sync_compatible\nasync def read_secret(self) -> bytes:\n    \"\"\"\n    Reads the secret data from the secret storage service.\n\n    Returns:\n        The secret data as bytes.\n    \"\"\"\n    client = self.gcp_credentials.get_secret_manager_client()\n    project = self.gcp_credentials.project\n    name = f\"projects/{project}/secrets/{self.secret_name}/versions/{self.secret_version}\"  # noqa\n    request = AccessSecretVersionRequest(name=name)\n\n    self.logger.debug(f\"Preparing to read secret data from {name!r}.\")\n    response = await run_sync_in_worker_thread(\n        client.access_secret_version, request=request\n    )\n    secret = response.payload.data\n    self.logger.info(f\"The secret {name!r} data was successfully read.\")\n    return secret\n
        "},{"location":"integrations/prefect-gcp/secret_manager/#prefect_gcp.secret_manager.GcpSecret.write_secret","title":"write_secret async","text":"

        Writes the secret data to the secret storage service; if it doesn't exist it will be created.

        Parameters:

        Name Type Description Default secret_data bytes

        The secret to write.

        required

        Returns:

        Type Description str

        The path that the secret was written to.

        Source code in prefect_gcp/secret_manager.py
        @sync_compatible\nasync def write_secret(self, secret_data: bytes) -> str:\n    \"\"\"\n    Writes the secret data to the secret storage service; if it doesn't exist\n    it will be created.\n\n    Args:\n        secret_data: The secret to write.\n\n    Returns:\n        The path that the secret was written to.\n    \"\"\"\n    client = self.gcp_credentials.get_secret_manager_client()\n    project = self.gcp_credentials.project\n    parent = f\"projects/{project}/secrets/{self.secret_name}\"\n    payload = SecretPayload(data=secret_data)\n    add_request = AddSecretVersionRequest(parent=parent, payload=payload)\n\n    self.logger.debug(f\"Preparing to write secret data to {parent!r}.\")\n    try:\n        response = await run_sync_in_worker_thread(\n            client.add_secret_version, request=add_request\n        )\n    except NotFound:\n        self.logger.info(\n            f\"The secret {parent!r} does not exist yet, creating it now.\"\n        )\n        create_parent = f\"projects/{project}\"\n        secret_id = self.secret_name\n        secret = Secret(replication=Replication(automatic=Replication.Automatic()))\n        create_request = CreateSecretRequest(\n            parent=create_parent, secret_id=secret_id, secret=secret\n        )\n        await run_sync_in_worker_thread(\n            client.create_secret, request=create_request\n        )\n\n        self.logger.debug(f\"Preparing to write secret data to {parent!r} again.\")\n        response = await run_sync_in_worker_thread(\n            client.add_secret_version, request=add_request\n        )\n\n    self.logger.info(f\"The secret data was written successfully to {parent!r}.\")\n    return response.name\n
        "},{"location":"integrations/prefect-gcp/secret_manager/#prefect_gcp.secret_manager.create_secret","title":"create_secret async","text":"

        Creates a secret in Google Cloud Platform's Secret Manager.

        Parameters:

        Name Type Description Default secret_name str

        Name of the secret to create.

        required gcp_credentials GcpCredentials

        Credentials to use for authentication with GCP.

        required timeout float

        The number of seconds the transport should wait for the server response.

        60 project Optional[str]

        Name of the project to use; overrides the gcp_credentials project if provided.

        None

        Returns:

        Type Description str

        The path of the created secret.

        Example
        from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.secret_manager import create_secret\n\n@flow()\ndef example_cloud_storage_create_secret_flow():\n    gcp_credentials = GcpCredentials(project=\"project\")\n    secret_path = create_secret(\"secret_name\", gcp_credentials)\n    return secret_path\n\nexample_cloud_storage_create_secret_flow()\n
        Source code in prefect_gcp/secret_manager.py
        @task\nasync def create_secret(\n    secret_name: str,\n    gcp_credentials: \"GcpCredentials\",\n    timeout: float = 60,\n    project: Optional[str] = None,\n) -> str:\n    \"\"\"\n    Creates a secret in Google Cloud Platform's Secret Manager.\n\n    Args:\n        secret_name: Name of the secret to retrieve.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        timeout: The number of seconds the transport should wait\n            for the server response.\n        project: Name of the project to use; overrides the\n            gcp_credentials project if provided.\n\n    Returns:\n        The path of the created secret.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.secret_manager import create_secret\n\n        @flow()\n        def example_cloud_storage_create_secret_flow():\n            gcp_credentials = GcpCredentials(project=\"project\")\n            secret_path = create_secret(\"secret_name\", gcp_credentials)\n            return secret_path\n\n        example_cloud_storage_create_secret_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Creating the %s secret\", secret_name)\n\n    client = gcp_credentials.get_secret_manager_client()\n    project = project or gcp_credentials.project\n\n    parent = f\"projects/{project}\"\n    secret_settings = {\"replication\": {\"automatic\": {}}}\n\n    partial_create = partial(\n        client.create_secret,\n        parent=parent,\n        secret_id=secret_name,\n        secret=secret_settings,\n        timeout=timeout,\n    )\n    response = await to_thread.run_sync(partial_create)\n    return response.name\n
        "},{"location":"integrations/prefect-gcp/secret_manager/#prefect_gcp.secret_manager.delete_secret","title":"delete_secret async","text":"

        Deletes the specified secret from Google Cloud Platform's Secret Manager.

        Parameters:

        Name Type Description Default secret_name str

        Name of the secret to delete.

        required gcp_credentials GcpCredentials

        Credentials to use for authentication with GCP.

        required timeout float

        The number of seconds the transport should wait for the server response.

        60 project Optional[str]

        Name of the project to use; overrides the gcp_credentials project if provided.

        None

        Returns:

        Type Description str

        The path of the deleted secret.

        Example
        from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.secret_manager import delete_secret\n\n@flow()\ndef example_cloud_storage_delete_secret_flow():\n    gcp_credentials = GcpCredentials(project=\"project\")\n    secret_path = delete_secret(\"secret_name\", gcp_credentials)\n    return secret_path\n\nexample_cloud_storage_delete_secret_flow()\n
        Source code in prefect_gcp/secret_manager.py
        @task\nasync def delete_secret(\n    secret_name: str,\n    gcp_credentials: \"GcpCredentials\",\n    timeout: float = 60,\n    project: Optional[str] = None,\n) -> str:\n    \"\"\"\n    Deletes the specified secret from Google Cloud Platform's Secret Manager.\n\n    Args:\n        secret_name: Name of the secret to delete.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        timeout: The number of seconds the transport should wait\n            for the server response.\n        project: Name of the project to use; overrides the\n            gcp_credentials project if provided.\n\n    Returns:\n        The path of the deleted secret.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.secret_manager import delete_secret\n\n        @flow()\n        def example_cloud_storage_delete_secret_flow():\n            gcp_credentials = GcpCredentials(project=\"project\")\n            secret_path = delete_secret(\"secret_name\", gcp_credentials)\n            return secret_path\n\n        example_cloud_storage_delete_secret_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Deleting %s secret\", secret_name)\n\n    client = gcp_credentials.get_secret_manager_client()\n    project = project or gcp_credentials.project\n\n    name = f\"projects/{project}/secrets/{secret_name}/\"\n    partial_delete = partial(client.delete_secret, name=name, timeout=timeout)\n    await to_thread.run_sync(partial_delete)\n    return name\n
        "},{"location":"integrations/prefect-gcp/secret_manager/#prefect_gcp.secret_manager.delete_secret_version","title":"delete_secret_version async","text":"

        Deletes a version of a given secret from Google Cloud Platform's Secret Manager.

        Parameters:

        Name Type Description Default secret_name str

        Name of the secret whose version will be deleted.

        required version_id int

        Version number of the secret to use; \"latest\" can NOT be used.

        required gcp_credentials GcpCredentials

        Credentials to use for authentication with GCP.

        required timeout float

        The number of seconds the transport should wait for the server response.

        60 project Optional[str]

        Name of the project to use; overrides the gcp_credentials project if provided.

        None

        Returns:

        Type Description str

        The path of the deleted secret version.

        Example
        from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.secret_manager import delete_secret_version\n\n@flow()\ndef example_cloud_storage_delete_secret_version_flow():\n    gcp_credentials = GcpCredentials(project=\"project\")\n    secret_value = delete_secret_version(\"secret_name\", 1, gcp_credentials)\n    return secret_value\n\nexample_cloud_storage_delete_secret_version_flow()\n
        Source code in prefect_gcp/secret_manager.py
        @task\nasync def delete_secret_version(\n    secret_name: str,\n    version_id: int,\n    gcp_credentials: \"GcpCredentials\",\n    timeout: float = 60,\n    project: Optional[str] = None,\n) -> str:\n    \"\"\"\n    Deletes a version of a given secret from Google Cloud Platform's Secret Manager.\n\n    Args:\n        secret_name: Name of the secret to retrieve.\n        version_id: Version number of the secret to use; \"latest\" can NOT be used.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        timeout: The number of seconds the transport should wait\n            for the server response.\n        project: Name of the project to use; overrides the\n            gcp_credentials project if provided.\n\n    Returns:\n        The path of the deleted secret version.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.secret_manager import delete_secret_version\n\n        @flow()\n        def example_cloud_storage_delete_secret_version_flow():\n            gcp_credentials = GcpCredentials(project=\"project\")\n            secret_value = delete_secret_version(\"secret_name\", 1, gcp_credentials)\n            return secret_value\n\n        example_cloud_storage_delete_secret_version_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Reading %s version of %s secret\", version_id, secret_name)\n\n    client = gcp_credentials.get_secret_manager_client()\n    project = project or gcp_credentials.project\n\n    if version_id == \"latest\":\n        raise ValueError(\"The version_id cannot be 'latest'\")\n\n    name = f\"projects/{project}/secrets/{secret_name}/versions/{version_id}\"\n    partial_destroy = partial(client.destroy_secret_version, name=name, timeout=timeout)\n    await to_thread.run_sync(partial_destroy)\n    return name\n
        "},{"location":"integrations/prefect-gcp/secret_manager/#prefect_gcp.secret_manager.read_secret","title":"read_secret async","text":"

        Reads the value of a given secret from Google Cloud Platform's Secret Manager.

        Parameters:

        Name Type Description Default secret_name str

        Name of the secret to retrieve.

        required gcp_credentials GcpCredentials

        Credentials to use for authentication with GCP.

        required timeout float

        The number of seconds the transport should wait for the server response.

        60 project Optional[str]

        Name of the project to use; overrides the gcp_credentials project if provided.

        None

        Returns:

        Type Description str

        Contents of the specified secret.

        Example
        from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.secret_manager import read_secret\n\n@flow()\ndef example_cloud_storage_read_secret_flow():\n    gcp_credentials = GcpCredentials(project=\"project\")\n    secret_value = read_secret(\"secret_name\", gcp_credentials, version_id=1)\n    return secret_value\n\nexample_cloud_storage_read_secret_flow()\n
        Source code in prefect_gcp/secret_manager.py
        @task\nasync def read_secret(\n    secret_name: str,\n    gcp_credentials: \"GcpCredentials\",\n    version_id: Union[str, int] = \"latest\",\n    timeout: float = 60,\n    project: Optional[str] = None,\n) -> str:\n    \"\"\"\n    Reads the value of a given secret from Google Cloud Platform's Secret Manager.\n\n    Args:\n        secret_name: Name of the secret to retrieve.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        timeout: The number of seconds the transport should wait\n            for the server response.\n        project: Name of the project to use; overrides the\n            gcp_credentials project if provided.\n\n    Returns:\n        Contents of the specified secret.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.secret_manager import read_secret\n\n        @flow()\n        def example_cloud_storage_read_secret_flow():\n            gcp_credentials = GcpCredentials(project=\"project\")\n            secret_value = read_secret(\"secret_name\", gcp_credentials, version_id=1)\n            return secret_value\n\n        example_cloud_storage_read_secret_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Reading %s version of %s secret\", version_id, secret_name)\n\n    client = gcp_credentials.get_secret_manager_client()\n    project = project or gcp_credentials.project\n\n    name = f\"projects/{project}/secrets/{secret_name}/versions/{version_id}\"\n    partial_access = partial(client.access_secret_version, name=name, timeout=timeout)\n    response = await to_thread.run_sync(partial_access)\n    secret = response.payload.data.decode(\"UTF-8\")\n    return secret\n
        "},{"location":"integrations/prefect-gcp/secret_manager/#prefect_gcp.secret_manager.update_secret","title":"update_secret async","text":"

        Updates a secret in Google Cloud Platform's Secret Manager.

        Parameters:

        Name Type Description Default secret_name str

        Name of the secret to update.

        required secret_value Union[str, bytes]

        Desired value of the secret. Can be either str or bytes.

        required gcp_credentials GcpCredentials

        Credentials to use for authentication with GCP.

        required timeout float

        The number of seconds the transport should wait for the server response.

        60 project Optional[str]

        Name of the project to use; overrides the gcp_credentials project if provided.

        None

        Returns:

        Type Description str

        The path of the updated secret.

        Example
        from prefect import flow\nfrom prefect_gcp import GcpCredentials\nfrom prefect_gcp.secret_manager import update_secret\n\n@flow()\ndef example_cloud_storage_update_secret_flow():\n    gcp_credentials = GcpCredentials(project=\"project\")\n    secret_path = update_secret(\"secret_name\", \"secret_value\", gcp_credentials)\n    return secret_path\n\nexample_cloud_storage_update_secret_flow()\n
        Source code in prefect_gcp/secret_manager.py
        @task\nasync def update_secret(\n    secret_name: str,\n    secret_value: Union[str, bytes],\n    gcp_credentials: \"GcpCredentials\",\n    timeout: float = 60,\n    project: Optional[str] = None,\n) -> str:\n    \"\"\"\n    Updates a secret in Google Cloud Platform's Secret Manager.\n\n    Args:\n        secret_name: Name of the secret to retrieve.\n        secret_value: Desired value of the secret. Can be either `str` or `bytes`.\n        gcp_credentials: Credentials to use for authentication with GCP.\n        timeout: The number of seconds the transport should wait\n            for the server response.\n        project: Name of the project to use; overrides the\n            gcp_credentials project if provided.\n\n    Returns:\n        The path of the updated secret.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_gcp import GcpCredentials\n        from prefect_gcp.secret_manager import update_secret\n\n        @flow()\n        def example_cloud_storage_update_secret_flow():\n            gcp_credentials = GcpCredentials(project=\"project\")\n            secret_path = update_secret(\"secret_name\", \"secret_value\", gcp_credentials)\n            return secret_path\n\n        example_cloud_storage_update_secret_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n    logger.info(\"Updating the %s secret\", secret_name)\n\n    client = gcp_credentials.get_secret_manager_client()\n    project = project or gcp_credentials.project\n\n    parent = f\"projects/{project}/secrets/{secret_name}\"\n    if isinstance(secret_value, str):\n        secret_value = secret_value.encode(\"UTF-8\")\n    partial_add = partial(\n        client.add_secret_version,\n        parent=parent,\n        payload={\"data\": secret_value},\n        timeout=timeout,\n    )\n    response = await to_thread.run_sync(partial_add)\n    return response.name\n
        "},{"location":"integrations/prefect-gcp/vertex_worker/","title":"Vertex AI","text":""},{"location":"integrations/prefect-gcp/vertex_worker/#prefect_gcp.workers.vertex","title":"prefect_gcp.workers.vertex","text":"

        Module containing the custom worker used for executing flow runs as Vertex AI Custom Jobs.

        Get started by creating a Vertex AI work pool:

        prefect work-pool create 'my-vertex-pool' --type vertex-ai\n

        Then start a Vertex AI worker with the following command:

        prefect worker start --pool 'my-vertex-pool'\n
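
        With the work pool and worker in place, you can point a deployment at the pool. A minimal sketch, assuming a Prefect version where flow.deploy is available (2.14+) and a container image path in a registry your Vertex AI project can pull from:

        from prefect import flow

        @flow(log_prints=True)
        def hello_vertex():
            print("Hello from a Vertex AI custom job!")

        if __name__ == "__main__":
            # The image path below is a placeholder; use a repository your project can access.
            hello_vertex.deploy(
                name="vertex-demo",
                work_pool_name="my-vertex-pool",
                image="us-docker.pkg.dev/<PROJECT-ID>/<REPOSITORY-NAME>/hello-vertex:latest",
            )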
        "},{"location":"integrations/prefect-gcp/vertex_worker/#prefect_gcp.workers.vertex--configuration","title":"Configuration","text":"

        Read more about configuring work pools here.

        "},{"location":"integrations/prefect-gcp/vertex_worker/#prefect_gcp.workers.vertex.VertexAIWorker","title":"VertexAIWorker","text":"

        Bases: BaseWorker

        Prefect worker that executes flow runs within Vertex AI Jobs.

        Source code in prefect_gcp/workers/vertex.py
        class VertexAIWorker(BaseWorker):\n    \"\"\"Prefect worker that executes flow runs within Vertex AI Jobs.\"\"\"\n\n    type = \"vertex-ai\"\n    job_configuration = VertexAIWorkerJobConfiguration\n    job_configuration_variables = VertexAIWorkerVariables\n    _description = (\n        \"Execute flow runs within containers on Google Vertex AI. Requires \"\n        \"a Google Cloud Platform account.\"\n    )\n    _display_name = \"Google Vertex AI\"\n    _documentation_url = \"https://prefecthq.github.io/prefect-gcp/vertex_worker/\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/10424e311932e31c477ac2b9ef3d53cefbaad708-250x250.png\"  # noqa\n\n    async def run(\n        self,\n        flow_run: \"FlowRun\",\n        configuration: VertexAIWorkerJobConfiguration,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> VertexAIWorkerResult:\n        \"\"\"\n        Executes a flow run within a Vertex AI Job and waits for the flow run\n        to complete.\n\n        Args:\n            flow_run: The flow run to execute\n            configuration: The configuration to use when executing the flow run.\n            task_status: The task status object for the current flow run. If provided,\n                the task will be marked as started.\n\n        Returns:\n            VertexAIWorkerResult: A result object containing information about the\n                final state of the flow run\n        \"\"\"\n        logger = self.get_flow_run_logger(flow_run)\n\n        client_options = ClientOptions(\n            api_endpoint=f\"{configuration.region}-aiplatform.googleapis.com\"\n        )\n\n        job_name = configuration.job_name\n\n        job_spec = self._build_job_spec(configuration)\n        job_service_async_client = (\n            configuration.credentials.get_job_service_async_client(\n                client_options=client_options\n            )\n        )\n\n        job_run = await self._create_and_begin_job(\n            job_name,\n            job_spec,\n            job_service_async_client,\n            configuration,\n            logger,\n        )\n\n        if task_status:\n            task_status.started(job_run.name)\n\n        final_job_run = await self._watch_job_run(\n            job_name=job_name,\n            full_job_name=job_run.name,\n            job_service_async_client=job_service_async_client,\n            current_state=job_run.state,\n            until_states=(\n                JobState.JOB_STATE_SUCCEEDED,\n                JobState.JOB_STATE_FAILED,\n                JobState.JOB_STATE_CANCELLED,\n                JobState.JOB_STATE_EXPIRED,\n            ),\n            configuration=configuration,\n            logger=logger,\n            timeout=int(\n                datetime.timedelta(\n                    hours=configuration.job_spec[\"maximum_run_time_hours\"]\n                ).total_seconds()\n            ),\n        )\n\n        error_msg = final_job_run.error.message\n\n        # Vertex will include an error message upon valid\n        # flow cancellations, so we'll avoid raising an error in that case\n        if error_msg and \"CANCELED\" not in error_msg:\n            raise RuntimeError(error_msg)\n\n        status_code = 0 if final_job_run.state == JobState.JOB_STATE_SUCCEEDED else 1\n\n        return VertexAIWorkerResult(\n            identifier=final_job_run.display_name, status_code=status_code\n        )\n\n    def _build_job_spec(\n        self, configuration: VertexAIWorkerJobConfiguration\n    ) -> 
\"CustomJobSpec\":\n        \"\"\"\n        Builds a job spec by gathering details.\n        \"\"\"\n        # here, we extract the `worker_pool_specs` out of the job_spec\n        worker_pool_specs = [\n            WorkerPoolSpec(\n                container_spec=ContainerSpec(**spec[\"container_spec\"]),\n                machine_spec=MachineSpec(**spec[\"machine_spec\"]),\n                replica_count=spec[\"replica_count\"],\n                disk_spec=DiskSpec(**spec[\"disk_spec\"]),\n            )\n            for spec in configuration.job_spec.pop(\"worker_pool_specs\", [])\n        ]\n\n        timeout = Duration().FromTimedelta(\n            td=datetime.timedelta(\n                hours=configuration.job_spec[\"maximum_run_time_hours\"]\n            )\n        )\n        scheduling = Scheduling(timeout=timeout)\n\n        # construct the final job spec that we will provide to Vertex AI\n        job_spec = CustomJobSpec(\n            worker_pool_specs=worker_pool_specs,\n            scheduling=scheduling,\n            ignore_unknown_fields=True,\n            **configuration.job_spec,\n        )\n        return job_spec\n\n    async def _create_and_begin_job(\n        self,\n        job_name: str,\n        job_spec: \"CustomJobSpec\",\n        job_service_async_client: \"JobServiceAsyncClient\",\n        configuration: VertexAIWorkerJobConfiguration,\n        logger: PrefectLogAdapter,\n    ) -> \"CustomJob\":\n        \"\"\"\n        Builds a custom job and begins running it.\n        \"\"\"\n        # create custom job\n        custom_job = CustomJob(\n            display_name=job_name,\n            job_spec=job_spec,\n            labels=self._get_compatible_labels(configuration=configuration),\n        )\n\n        # run job\n        logger.info(f\"Creating job {job_name!r}\")\n\n        project = configuration.project\n        resource_name = f\"projects/{project}/locations/{configuration.region}\"\n\n        async for attempt in AsyncRetrying(\n            stop=stop_after_attempt(3), wait=wait_fixed(1) + wait_random(0, 3)\n        ):\n            with attempt:\n                custom_job_run = await job_service_async_client.create_custom_job(\n                    parent=resource_name,\n                    custom_job=custom_job,\n                )\n\n        logger.info(\n            f\"Job {job_name!r} created. \"\n            f\"The full job name is {custom_job_run.name!r}\"\n        )\n\n        return custom_job_run\n\n    async def _watch_job_run(\n        self,\n        job_name: str,\n        full_job_name: str,  # different from job_name\n        job_service_async_client: \"JobServiceAsyncClient\",\n        current_state: \"JobState\",\n        until_states: Tuple[\"JobState\"],\n        configuration: VertexAIWorkerJobConfiguration,\n        logger: PrefectLogAdapter,\n        timeout: int = None,\n    ) -> \"CustomJob\":\n        \"\"\"\n        Polls job run to see if status changed.\n\n        State changes reported by the Vertex AI API may sometimes be inaccurate\n        immediately upon startup, but should eventually report a correct running\n        and then terminal state. 
The minimum training duration for a custom job is\n        30 seconds, so short-lived jobs may be marked as successful some time\n        after a flow run has completed.\n        \"\"\"\n        state = JobState.JOB_STATE_UNSPECIFIED\n        last_state = current_state\n        t0 = time.time()\n\n        while state not in until_states:\n            job_run = await job_service_async_client.get_custom_job(\n                name=full_job_name,\n            )\n            state = job_run.state\n            if state != last_state:\n                state_label = (\n                    state.name.replace(\"_\", \" \")\n                    .lower()\n                    .replace(\"state\", \"state is now:\")\n                )\n                # results in \"New job state is now: succeeded\"\n                logger.debug(f\"{job_name} has new {state_label}\")\n                last_state = state\n            else:\n                # Intermittently, the job will not be described. We want to respect the\n                # watch timeout though.\n                logger.debug(f\"Job {job_name} not found.\")\n\n            elapsed_time = time.time() - t0\n            if timeout is not None and elapsed_time > timeout:\n                raise RuntimeError(\n                    f\"Timed out after {elapsed_time}s while watching job for states \"\n                    \"{until_states!r}\"\n                )\n            await asyncio.sleep(configuration.job_watch_poll_interval)\n\n        return job_run\n\n    def _get_compatible_labels(\n        self, configuration: VertexAIWorkerJobConfiguration\n    ) -> Dict[str, str]:\n        \"\"\"\n        Ensures labels are compatible with GCP label requirements.\n        https://cloud.google.com/resource-manager/docs/creating-managing-labels\n\n        Ex: the Prefect provided key of prefect.io/flow-name -> prefect-io_flow-name\n        \"\"\"\n        compatible_labels = {}\n        for key, val in configuration.labels.items():\n            new_key = slugify(\n                key,\n                lowercase=True,\n                replacements=[(\"/\", \"_\"), (\".\", \"-\")],\n                max_length=63,\n                regex_pattern=_DISALLOWED_GCP_LABEL_CHARACTERS,\n            )\n            compatible_labels[new_key] = slugify(\n                val,\n                lowercase=True,\n                replacements=[(\"/\", \"_\"), (\".\", \"-\")],\n                max_length=63,\n                regex_pattern=_DISALLOWED_GCP_LABEL_CHARACTERS,\n            )\n        return compatible_labels\n\n    async def kill_infrastructure(\n        self,\n        infrastructure_pid: str,\n        configuration: VertexAIWorkerJobConfiguration,\n        grace_seconds: int = 30,\n    ):\n        \"\"\"\n        Stops a job running in Vertex AI upon flow cancellation,\n        based on the provided infrastructure PID + run configuration.\n        \"\"\"\n        if grace_seconds != 30:\n            self._logger.warning(\n                f\"Kill grace period of {grace_seconds}s requested, but GCP does not \"\n                \"support dynamic grace period configuration. 
See here for more info: \"\n                \"https://cloud.google.com/vertex-ai/docs/reference/rest/v1/projects.locations.customJobs/cancel\"  # noqa\n            )\n\n        client_options = ClientOptions(\n            api_endpoint=f\"{configuration.region}-aiplatform.googleapis.com\"\n        )\n        job_service_async_client = (\n            configuration.credentials.get_job_service_async_client(\n                client_options=client_options\n            )\n        )\n        await self._stop_job(\n            client=job_service_async_client,\n            vertex_job_name=infrastructure_pid,\n        )\n\n    async def _stop_job(self, client: \"JobServiceAsyncClient\", vertex_job_name: str):\n        \"\"\"\n        Calls the `cancel_custom_job` method on the Vertex AI Job Service Client.\n        \"\"\"\n        cancel_custom_job_request = CancelCustomJobRequest(name=vertex_job_name)\n        try:\n            await client.cancel_custom_job(\n                request=cancel_custom_job_request,\n            )\n        except Exception as exc:\n            if \"does not exist\" in str(exc):\n                raise InfrastructureNotFound(\n                    f\"Cannot stop Vertex AI job; the job name {vertex_job_name!r} \"\n                    \"could not be found.\"\n                ) from exc\n            raise\n
        "},{"location":"integrations/prefect-gcp/vertex_worker/#prefect_gcp.workers.vertex.VertexAIWorker.kill_infrastructure","title":"kill_infrastructure async","text":"

        Stops a job running in Vertex AI upon flow cancellation, based on the provided infrastructure PID + run configuration.

        Source code in prefect_gcp/workers/vertex.py
        async def kill_infrastructure(\n    self,\n    infrastructure_pid: str,\n    configuration: VertexAIWorkerJobConfiguration,\n    grace_seconds: int = 30,\n):\n    \"\"\"\n    Stops a job running in Vertex AI upon flow cancellation,\n    based on the provided infrastructure PID + run configuration.\n    \"\"\"\n    if grace_seconds != 30:\n        self._logger.warning(\n            f\"Kill grace period of {grace_seconds}s requested, but GCP does not \"\n            \"support dynamic grace period configuration. See here for more info: \"\n            \"https://cloud.google.com/vertex-ai/docs/reference/rest/v1/projects.locations.customJobs/cancel\"  # noqa\n        )\n\n    client_options = ClientOptions(\n        api_endpoint=f\"{configuration.region}-aiplatform.googleapis.com\"\n    )\n    job_service_async_client = (\n        configuration.credentials.get_job_service_async_client(\n            client_options=client_options\n        )\n    )\n    await self._stop_job(\n        client=job_service_async_client,\n        vertex_job_name=infrastructure_pid,\n    )\n
        "},{"location":"integrations/prefect-gcp/vertex_worker/#prefect_gcp.workers.vertex.VertexAIWorker.run","title":"run async","text":"

        Executes a flow run within a Vertex AI Job and waits for the flow run to complete.

        Parameters:

        Name Type Description Default flow_run FlowRun

        The flow run to execute

        required configuration VertexAIWorkerJobConfiguration

        The configuration to use when executing the flow run.

        required task_status Optional[TaskStatus]

        The task status object for the current flow run. If provided, the task will be marked as started.

        None

        Returns:

        Name Type Description VertexAIWorkerResult VertexAIWorkerResult

        A result object containing information about the final state of the flow run

        Source code in prefect_gcp/workers/vertex.py
        async def run(\n    self,\n    flow_run: \"FlowRun\",\n    configuration: VertexAIWorkerJobConfiguration,\n    task_status: Optional[anyio.abc.TaskStatus] = None,\n) -> VertexAIWorkerResult:\n    \"\"\"\n    Executes a flow run within a Vertex AI Job and waits for the flow run\n    to complete.\n\n    Args:\n        flow_run: The flow run to execute\n        configuration: The configuration to use when executing the flow run.\n        task_status: The task status object for the current flow run. If provided,\n            the task will be marked as started.\n\n    Returns:\n        VertexAIWorkerResult: A result object containing information about the\n            final state of the flow run\n    \"\"\"\n    logger = self.get_flow_run_logger(flow_run)\n\n    client_options = ClientOptions(\n        api_endpoint=f\"{configuration.region}-aiplatform.googleapis.com\"\n    )\n\n    job_name = configuration.job_name\n\n    job_spec = self._build_job_spec(configuration)\n    job_service_async_client = (\n        configuration.credentials.get_job_service_async_client(\n            client_options=client_options\n        )\n    )\n\n    job_run = await self._create_and_begin_job(\n        job_name,\n        job_spec,\n        job_service_async_client,\n        configuration,\n        logger,\n    )\n\n    if task_status:\n        task_status.started(job_run.name)\n\n    final_job_run = await self._watch_job_run(\n        job_name=job_name,\n        full_job_name=job_run.name,\n        job_service_async_client=job_service_async_client,\n        current_state=job_run.state,\n        until_states=(\n            JobState.JOB_STATE_SUCCEEDED,\n            JobState.JOB_STATE_FAILED,\n            JobState.JOB_STATE_CANCELLED,\n            JobState.JOB_STATE_EXPIRED,\n        ),\n        configuration=configuration,\n        logger=logger,\n        timeout=int(\n            datetime.timedelta(\n                hours=configuration.job_spec[\"maximum_run_time_hours\"]\n            ).total_seconds()\n        ),\n    )\n\n    error_msg = final_job_run.error.message\n\n    # Vertex will include an error message upon valid\n    # flow cancellations, so we'll avoid raising an error in that case\n    if error_msg and \"CANCELED\" not in error_msg:\n        raise RuntimeError(error_msg)\n\n    status_code = 0 if final_job_run.state == JobState.JOB_STATE_SUCCEEDED else 1\n\n    return VertexAIWorkerResult(\n        identifier=final_job_run.display_name, status_code=status_code\n    )\n
        "},{"location":"integrations/prefect-gcp/vertex_worker/#prefect_gcp.workers.vertex.VertexAIWorkerJobConfiguration","title":"VertexAIWorkerJobConfiguration","text":"

        Bases: BaseJobConfiguration

        Configuration class used by the Vertex AI Worker to create a Job.

        An instance of this class is passed to the Vertex AI Worker's run method for each flow run. It contains all information necessary to execute the flow run as a Vertex AI Job.

        Attributes:

        Name Type Description region str

        The region where the Vertex AI Job resides.

        credentials Optional[GcpCredentials]

        The GCP Credentials used to connect to Vertex AI.

        job_spec Dict[str, Any]

        The Vertex AI Job spec used to create the Job.

        job_watch_poll_interval float

        The interval between GCP API calls to check Job state.

        Source code in prefect_gcp/workers/vertex.py
        class VertexAIWorkerJobConfiguration(BaseJobConfiguration):\n    \"\"\"\n    Configuration class used by the Vertex AI Worker to create a Job.\n\n    An instance of this class is passed to the Vertex AI Worker's `run` method\n    for each flow run. It contains all information necessary to execute\n    the flow run as a Vertex AI Job.\n\n    Attributes:\n        region: The region where the Vertex AI Job resides.\n        credentials: The GCP Credentials used to connect to Vertex AI.\n        job_spec: The Vertex AI Job spec used to create the Job.\n        job_watch_poll_interval: The interval between GCP API calls to check Job state.\n    \"\"\"\n\n    region: str = Field(\n        description=\"The region where the Vertex AI Job resides.\",\n        example=\"us-central1\",\n    )\n    credentials: Optional[GcpCredentials] = Field(\n        title=\"GCP Credentials\",\n        default_factory=GcpCredentials,\n        description=\"The GCP Credentials used to initiate the \"\n        \"Vertex AI Job. If not provided credentials will be \"\n        \"inferred from the local environment.\",\n    )\n\n    job_spec: Dict[str, Any] = Field(\n        template={\n            \"service_account_name\": \"{{ service_account_name }}\",\n            \"network\": \"{{ network }}\",\n            \"reserved_ip_ranges\": \"{{ reserved_ip_ranges }}\",\n            \"maximum_run_time_hours\": \"{{ maximum_run_time_hours }}\",\n            \"worker_pool_specs\": [\n                {\n                    \"replica_count\": 1,\n                    \"container_spec\": {\n                        \"image_uri\": \"{{ image }}\",\n                        \"command\": \"{{ command }}\",\n                        \"args\": [],\n                    },\n                    \"machine_spec\": {\n                        \"machine_type\": \"{{ machine_type }}\",\n                        \"accelerator_type\": \"{{ accelerator_type }}\",\n                        \"accelerator_count\": \"{{ accelerator_count }}\",\n                    },\n                    \"disk_spec\": {\n                        \"boot_disk_type\": \"{{ boot_disk_type }}\",\n                        \"boot_disk_size_gb\": \"{{ boot_disk_size_gb }}\",\n                    },\n                }\n            ],\n        }\n    )\n    job_watch_poll_interval: float = Field(\n        default=5.0,\n        title=\"Poll Interval (Seconds)\",\n        description=(\n            \"The amount of time to wait between GCP API calls while monitoring the \"\n            \"state of a Vertex AI Job.\"\n        ),\n    )\n\n    @property\n    def project(self) -> str:\n        \"\"\"property for accessing the project from the credentials.\"\"\"\n        return self.credentials.project\n\n    @property\n    def job_name(self) -> str:\n        \"\"\"\n        The name can be up to 128 characters long and can be consist of any UTF-8 characters. 
Reference:\n        https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.CustomJob#google_cloud_aiplatform_CustomJob_display_name\n        \"\"\"  # noqa\n        unique_suffix = uuid4().hex\n        job_name = f\"{self.name}-{unique_suffix}\"\n        return job_name\n\n    def prepare_for_flow_run(\n        self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"DeploymentResponse\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ):\n        super().prepare_for_flow_run(flow_run, deployment, flow)\n\n        self._inject_formatted_env_vars()\n        self._inject_formatted_command()\n        self._ensure_existence_of_service_account()\n\n    def _inject_formatted_env_vars(self):\n        \"\"\"Inject environment variables in the Vertex job_spec configuration,\n        in the correct format, which is sourced from the BaseJobConfiguration.\n        This method is invoked by `prepare_for_flow_run()`.\"\"\"\n        worker_pool_specs = self.job_spec[\"worker_pool_specs\"]\n        formatted_env_vars = [\n            {\"name\": key, \"value\": value} for key, value in self.env.items()\n        ]\n        worker_pool_specs[0][\"container_spec\"][\"env\"] = formatted_env_vars\n\n    def _inject_formatted_command(self):\n        \"\"\"Inject shell commands in the Vertex job_spec configuration,\n        in the correct format, which is sourced from the BaseJobConfiguration.\n        Here, we'll ensure that the default string format\n        is converted to a list of strings.\"\"\"\n        worker_pool_specs = self.job_spec[\"worker_pool_specs\"]\n\n        existing_command = worker_pool_specs[0][\"container_spec\"].get(\"command\")\n        if existing_command is None:\n            worker_pool_specs[0][\"container_spec\"][\"command\"] = shlex.split(\n                self._base_flow_run_command()\n            )\n        elif isinstance(existing_command, str):\n            worker_pool_specs[0][\"container_spec\"][\"command\"] = shlex.split(\n                existing_command\n            )\n\n    def _ensure_existence_of_service_account(self):\n        \"\"\"Verify that a service account was provided, either in the credentials\n        or as a standalone service account name override.\"\"\"\n\n        provided_service_account_name = self.job_spec.get(\"service_account_name\")\n        credential_service_account = self.credentials._service_account_email\n\n        service_account_to_use = (\n            provided_service_account_name or credential_service_account\n        )\n\n        if service_account_to_use is None:\n            raise ValueError(\n                \"A service account is required for the Vertex job. \"\n                \"A service account could not be detected in the attached credentials \"\n                \"or in the service_account_name input. 
\"\n                \"Please pass in valid GCP credentials or a valid service_account_name\"\n            )\n\n        self.job_spec[\"service_account_name\"] = service_account_to_use\n\n    @validator(\"job_spec\")\n    def _ensure_job_spec_includes_required_attributes(cls, value: Dict[str, Any]):\n        \"\"\"\n        Ensures that the job spec includes all required components.\n        \"\"\"\n        patch = JsonPatch.from_diff(value, _get_base_job_spec())\n        missing_paths = sorted([op[\"path\"] for op in patch if op[\"op\"] == \"add\"])\n        if missing_paths:\n            raise ValueError(\n                \"Job is missing required attributes at the following paths: \"\n                f\"{', '.join(missing_paths)}\"\n            )\n        return value\n
        "},{"location":"integrations/prefect-gcp/vertex_worker/#prefect_gcp.workers.vertex.VertexAIWorkerJobConfiguration.job_name","title":"job_name: str property","text":"

        The name can be up to 128 characters long and can consist of any UTF-8 characters. Reference: https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.CustomJob#google_cloud_aiplatform_CustomJob_display_name

        "},{"location":"integrations/prefect-gcp/vertex_worker/#prefect_gcp.workers.vertex.VertexAIWorkerJobConfiguration.project","title":"project: str property","text":"

        Property for accessing the project from the credentials.

        "},{"location":"integrations/prefect-gcp/vertex_worker/#prefect_gcp.workers.vertex.VertexAIWorkerResult","title":"VertexAIWorkerResult","text":"

        Bases: BaseWorkerResult

        Contains information about the final state of a completed process

        Source code in prefect_gcp/workers/vertex.py
        class VertexAIWorkerResult(BaseWorkerResult):\n    \"\"\"Contains information about the final state of a completed process\"\"\"\n
        "},{"location":"integrations/prefect-gcp/vertex_worker/#prefect_gcp.workers.vertex.VertexAIWorkerVariables","title":"VertexAIWorkerVariables","text":"

        Bases: BaseVariables

        Default variables for the Vertex AI worker.

        The schema for this class is used to populate the variables section of the default base job template.

        Source code in prefect_gcp/workers/vertex.py
        class VertexAIWorkerVariables(BaseVariables):\n    \"\"\"\n    Default variables for the Vertex AI worker.\n\n    The schema for this class is used to populate the `variables` section of the default\n    base job template.\n    \"\"\"\n\n    region: str = Field(\n        description=\"The region where the Vertex AI Job resides.\",\n        example=\"us-central1\",\n    )\n    image: str = Field(\n        title=\"Image Name\",\n        description=(\n            \"The URI of a container image in the Container or Artifact Registry, \"\n            \"used to run your Vertex AI Job. Note that Vertex AI will need access\"\n            \"to the project and region where the container image is stored. See \"\n            \"https://cloud.google.com/vertex-ai/docs/training/create-custom-container\"\n        ),\n        example=\"gcr.io/your-project/your-repo:latest\",\n    )\n    credentials: Optional[GcpCredentials] = Field(\n        title=\"GCP Credentials\",\n        default_factory=GcpCredentials,\n        description=\"The GCP Credentials used to initiate the \"\n        \"Vertex AI Job. If not provided credentials will be \"\n        \"inferred from the local environment.\",\n    )\n    machine_type: str = Field(\n        title=\"Machine Type\",\n        description=(\n            \"The machine type to use for the run, which controls \"\n            \"the available CPU and memory. \"\n            \"See https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec\"\n        ),\n        default=\"n1-standard-4\",\n    )\n    accelerator_type: Optional[str] = Field(\n        title=\"Accelerator Type\",\n        description=(\n            \"The type of accelerator to attach to the machine. \"\n            \"See https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec\"\n        ),\n        example=\"NVIDIA_TESLA_K80\",\n        default=None,\n    )\n    accelerator_count: Optional[int] = Field(\n        title=\"Accelerator Count\",\n        description=(\n            \"The number of accelerators to attach to the machine. \"\n            \"See https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec\"\n        ),\n        example=1,\n        default=None,\n    )\n    boot_disk_type: str = Field(\n        title=\"Boot Disk Type\",\n        description=\"The type of boot disk to attach to the machine.\",\n        default=\"pd-ssd\",\n    )\n    boot_disk_size_gb: int = Field(\n        title=\"Boot Disk Size (GB)\",\n        description=\"The size of the boot disk to attach to the machine, in gigabytes.\",\n        default=100,\n    )\n    maximum_run_time_hours: int = Field(\n        default=1,\n        title=\"Maximum Run Time (Hours)\",\n        description=\"The maximum job running time, in hours\",\n    )\n    network: Optional[str] = Field(\n        default=None,\n        title=\"Network\",\n        description=\"The full name of the Compute Engine network\"\n        \"to which the Job should be peered. Private services access must \"\n        \"already be configured for the network. If left unspecified, the job \"\n        \"is not peered with any network. \"\n        \"For example: projects/12345/global/networks/myVPC\",\n    )\n    reserved_ip_ranges: Optional[List[str]] = Field(\n        default=None,\n        title=\"Reserved IP Ranges\",\n        description=\"A list of names for the reserved ip ranges under the VPC \"\n        \"network that can be used for this job. If set, we will deploy the job \"\n        \"within the provided ip ranges. 
Otherwise, the job will be deployed to \"\n        \"any ip ranges under the provided VPC network.\",\n    )\n    service_account_name: Optional[str] = Field(\n        default=None,\n        title=\"Service Account Name\",\n        description=(\n            \"Specifies the service account to use \"\n            \"as the run-as account in Vertex AI. The worker submitting jobs must have \"\n            \"act-as permission on this run-as account. If unspecified, the AI \"\n            \"Platform Custom Code Service Agent for the CustomJob's project is \"\n            \"used. Takes precedence over the service account found in GCP credentials, \"\n            \"and required if a service account cannot be detected in GCP credentials.\"\n        ),\n    )\n    job_watch_poll_interval: float = Field(\n        default=5.0,\n        title=\"Poll Interval (Seconds)\",\n        description=(\n            \"The amount of time to wait between GCP API calls while monitoring the \"\n            \"state of a Vertex AI Job.\"\n        ),\n    )\n
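
        As a quick illustration of how these variables are populated, the sketch below constructs the variables model directly in Python; the region and image values are placeholders, and the defaults shown are taken from the field definitions above.

from prefect_gcp.workers.vertex import VertexAIWorkerVariables

# Hypothetical values; region and image are the only fields without defaults here.
variables = VertexAIWorkerVariables(
    region="us-central1",
    image="gcr.io/your-project/your-repo:latest",
)
print(variables.machine_type)       # "n1-standard-4" (default)
print(variables.boot_disk_size_gb)  # 100 (default)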
        "},{"location":"integrations/prefect-gcp/deployments/steps/","title":"Deployment Steps","text":""},{"location":"integrations/prefect-gcp/deployments/steps/#prefect_gcp.deployments.steps","title":"prefect_gcp.deployments.steps","text":"

        Prefect deployment steps for code storage in and retrieval from Google Cloud Storage.

        "},{"location":"integrations/prefect-gcp/deployments/steps/#prefect_gcp.deployments.steps.PullFromGcsOutput","title":"PullFromGcsOutput","text":"

        Bases: TypedDict

        The output of the pull_from_gcs step.

        Source code in prefect_gcp/deployments/steps.py
        class PullFromGcsOutput(TypedDict):\n    \"\"\"\n    The output of the `pull_from_gcs` step.\n    \"\"\"\n\n    bucket: str\n    folder: str\n    directory: str\n
        "},{"location":"integrations/prefect-gcp/deployments/steps/#prefect_gcp.deployments.steps.PullProjectFromGcsOutput","title":"PullProjectFromGcsOutput","text":"

        Bases: PullFromGcsOutput

        Deprecated. Use PullFromGcsOutput instead.

        Source code in prefect_gcp/deployments/steps.py
        @deprecated_callable(start_date=\"Jun 2023\", help=\"Use `PullFromGcsOutput` instead.\")\nclass PullProjectFromGcsOutput(PullFromGcsOutput):\n    \"\"\"Deprecated. Use `PullFromGcsOutput` instead.\"\"\"\n
        "},{"location":"integrations/prefect-gcp/deployments/steps/#prefect_gcp.deployments.steps.PushProjectToGcsOutput","title":"PushProjectToGcsOutput","text":"

        Bases: PushToGcsOutput

        Deprecated. Use PushToGcsOutput instead.

        Source code in prefect_gcp/deployments/steps.py
        @deprecated_callable(start_date=\"Jun 2023\", help=\"Use `PushToGcsOutput` instead.\")\nclass PushProjectToGcsOutput(PushToGcsOutput):\n    \"\"\"Deprecated. Use `PushToGcsOutput` instead.\"\"\"\n
        "},{"location":"integrations/prefect-gcp/deployments/steps/#prefect_gcp.deployments.steps.PushToGcsOutput","title":"PushToGcsOutput","text":"

        Bases: TypedDict

        The output of the push_to_gcs step.

        Source code in prefect_gcp/deployments/steps.py
        class PushToGcsOutput(TypedDict):\n    \"\"\"\n    The output of the `push_to_gcs` step.\n    \"\"\"\n\n    bucket: str\n    folder: str\n
        "},{"location":"integrations/prefect-gcp/deployments/steps/#prefect_gcp.deployments.steps.pull_from_gcs","title":"pull_from_gcs","text":"

        Pulls the contents of a project from a GCS bucket to the current working directory.

        Parameters:

        Name Type Description Default bucket str

        The name of the GCS bucket where files are stored.

        required folder str

        The folder in the GCS bucket where files are stored.

        required project Optional[str]

        The GCP project the bucket belongs to. If not provided, the project will be inferred from the credentials or the local environment.

        None credentials Optional[Dict]

        A dictionary containing the service account information and project used for authentication. If not provided, the application default credentials will be used.

        None

        Returns:

        Type Description PullProjectFromGcsOutput

        A dictionary containing the bucket, folder, and local directory where files were downloaded.

        Examples:

        Pull from GCS using the default environment credentials:

        build:\n    - prefect_gcp.deployments.steps.pull_from_gcs:\n        requires: prefect-gcp\n        bucket: my-bucket\n        folder: my-folder\n

        Pull from GCS using credentials stored in a block:

        build:\n    - prefect_gcp.deployments.steps.pull_from_gcs:\n        requires: prefect-gcp\n        bucket: my-bucket\n        folder: my-folder\n        credentials: \"{{ prefect.blocks.gcp-credentials.dev-credentials }}\"\n

        Pull from a GCS bucket using credentials stored in a service account file:

        build:\n    - prefect_gcp.deployments.steps.pull_from_gcs:\n        requires: prefect-gcp\n        bucket: my-bucket\n        folder: my-folder\n        credentials:\n            project: my-project\n            service_account_file: /path/to/service_account.json\n

        Source code in prefect_gcp/deployments/steps.py
        def pull_from_gcs(\n    bucket: str,\n    folder: str,\n    project: Optional[str] = None,\n    credentials: Optional[Dict] = None,\n) -> PullProjectFromGcsOutput:\n    \"\"\"\n    Pulls the contents of a project from an GCS bucket to the current working directory.\n\n    Args:\n        bucket: The name of the GCS bucket where files are stored.\n        folder: The folder in the GCS bucket where files are stored.\n        project: The GCP project the bucket belongs to. If not provided, the project will be\n            inferred from the credentials or the local environment.\n        credentials: A dictionary containing the service account information and project\n            used for authentication. If not provided, the application default\n            credentials will be used.\n\n    Returns:\n        A dictionary containing the bucket, folder, and local directory where files were downloaded.\n\n    Examples:\n        Pull from GCS using the default environment credentials:\n        ```yaml\n        build:\n            - prefect_gcp.deployments.steps.pull_from_gcs:\n                requires: prefect-gcp\n                bucket: my-bucket\n                folder: my-folder\n        ```\n\n        Pull from GCS using credentials stored in a block:\n        ```yaml\n        build:\n            - prefect_gcp.deployments.steps.pull_from_gcs:\n                requires: prefect-gcp\n                bucket: my-bucket\n                folder: my-folder\n                credentials: \"{{ prefect.blocks.gcp-credentials.dev-credentials }}\"\n        ```\n\n        Pull from to an GCS bucket using credentials stored in a service account file:\n        ```yaml\n        build:\n            - prefect_gcp.deployments.steps.pull_from_gcs:\n                requires: prefect-gcp\n                bucket: my-bucket\n                folder: my-folder\n                credentials:\n                    project: my-project\n                    service_account_file: /path/to/service_account.json\n        ```\n\n    \"\"\"  # noqa\n    local_path = Path.cwd()\n    project = credentials.get(\"project\") if credentials else None\n\n    gcp_creds = None\n    if credentials is not None:\n        if credentials.get(\"service_account_info\") is not None:\n            gcp_creds = Credentials.from_service_account_info(\n                credentials.get(\"service_account_info\"),\n                scopes=[\"https://www.googleapis.com/auth/cloud-platform\"],\n            )\n        elif credentials.get(\"service_account_file\") is not None:\n            gcp_creds = Credentials.from_service_account_file(\n                credentials.get(\"service_account_file\"),\n                scopes=[\"https://www.googleapis.com/auth/cloud-platform\"],\n            )\n\n    gcp_creds = gcp_creds or google.auth.default()[0]\n\n    storage_client = StorageClient(credentials=gcp_creds, project=project)\n\n    blobs = storage_client.list_blobs(bucket, prefix=folder)\n\n    for blob in blobs:\n        if blob.name.endswith(\"/\"):\n            # object is a folder and will be created if it contains any objects\n            continue\n        local_blob_download_path = PurePosixPath(\n            local_path\n            / relative_path_to_current_platform(blob.name).relative_to(folder)\n        )\n        Path.mkdir(Path(local_blob_download_path.parent), parents=True, exist_ok=True)\n\n        blob.download_to_filename(local_blob_download_path)\n\n    return {\n        \"bucket\": bucket,\n        \"folder\": folder,\n        \"directory\": 
str(local_path),\n    }\n
        "},{"location":"integrations/prefect-gcp/deployments/steps/#prefect_gcp.deployments.steps.pull_project_from_gcs","title":"pull_project_from_gcs","text":"

        Deprecated. Use pull_from_gcs instead.

        Source code in prefect_gcp/deployments/steps.py
        @deprecated_callable(start_date=\"Jun 2023\", help=\"Use `pull_from_gcs` instead.\")\ndef pull_project_from_gcs(*args, **kwargs) -> PullProjectFromGcsOutput:\n    \"\"\"\n    Deprecated. Use `pull_from_gcs` instead.\n    \"\"\"\n    return pull_from_gcs(*args, **kwargs)\n
        "},{"location":"integrations/prefect-gcp/deployments/steps/#prefect_gcp.deployments.steps.push_project_to_gcs","title":"push_project_to_gcs","text":"

        Deprecated. Use push_to_gcs instead.

        Source code in prefect_gcp/deployments/steps.py
        @deprecated_callable(start_date=\"Jun 2023\", help=\"Use `push_to_gcs` instead.\")\ndef push_project_to_gcs(*args, **kwargs) -> PushToGcsOutput:\n    \"\"\"\n    Deprecated. Use `push_to_gcs` instead.\n    \"\"\"\n    return push_to_gcs(*args, **kwargs)\n
        "},{"location":"integrations/prefect-gcp/deployments/steps/#prefect_gcp.deployments.steps.push_to_gcs","title":"push_to_gcs","text":"

        Pushes the contents of the current working directory to a GCS bucket, excluding files and folders specified in the ignore_file.

        Parameters:

        Name Type Description Default bucket str

        The name of the GCS bucket where files will be uploaded.

        required folder str

        The folder in the GCS bucket where files will be uploaded.

        required project Optional[str]

        The GCP project the bucket belongs to. If not provided, the project will be inferred from the credentials or the local environment.

        None credentials Optional[Dict]

        A dictionary containing the service account information and project used for authentication. If not provided, the application default credentials will be used.

        None ignore_file

        The name of the file containing ignore patterns.

        '.prefectignore'

        Returns:

        Type Description PushToGcsOutput

        A dictionary containing the bucket and folder where files were uploaded.

        Examples:

        Push to a GCS bucket:

        build:\n    - prefect_gcp.deployments.steps.push_to_gcs:\n        requires: prefect-gcp\n        bucket: my-bucket\n        folder: my-project\n

        Push to a GCS bucket using credentials stored in a block:

        build:\n    - prefect_gcp.deployments.steps.push_to_gcs:\n        requires: prefect-gcp\n        bucket: my-bucket\n        folder: my-folder\n        credentials: \"{{ prefect.blocks.gcp-credentials.dev-credentials }}\"\n

        Push to a GCS bucket using credentials stored in a service account file:

        build:\n    - prefect_gcp.deployments.steps.push_to_gcs:\n        requires: prefect-gcp\n        bucket: my-bucket\n        folder: my-folder\n        credentials:\n            project: my-project\n            service_account_file: /path/to/service_account.json\n

        Source code in prefect_gcp/deployments/steps.py
        def push_to_gcs(\n    bucket: str,\n    folder: str,\n    project: Optional[str] = None,\n    credentials: Optional[Dict] = None,\n    ignore_file=\".prefectignore\",\n) -> PushToGcsOutput:\n    \"\"\"\n    Pushes the contents of the current working directory to a GCS bucket,\n    excluding files and folders specified in the ignore_file.\n\n    Args:\n        bucket: The name of the GCS bucket where files will be uploaded.\n        folder: The folder in the GCS bucket where files will be uploaded.\n        project: The GCP project the bucket belongs to. If not provided, the project\n            will be inferred from the credentials or the local environment.\n        credentials: A dictionary containing the service account information and project\n            used for authentication. If not provided, the application default\n            credentials will be used.\n        ignore_file: The name of the file containing ignore patterns.\n\n    Returns:\n        A dictionary containing the bucket and folder where files were uploaded.\n\n    Examples:\n        Push to a GCS bucket:\n        ```yaml\n        build:\n            - prefect_gcp.deployments.steps.push_to_gcs:\n                requires: prefect-gcp\n                bucket: my-bucket\n                folder: my-project\n        ```\n\n        Push  to a GCS bucket using credentials stored in a block:\n        ```yaml\n        build:\n            - prefect_gcp.deployments.steps.push_to_gcs:\n                requires: prefect-gcp\n                bucket: my-bucket\n                folder: my-folder\n                credentials: \"{{ prefect.blocks.gcp-credentials.dev-credentials }}\"\n        ```\n\n        Push to a GCS bucket using credentials stored in a service account\n        file:\n        ```yaml\n        build:\n            - prefect_gcp.deployments.steps.push_to_gcs:\n                requires: prefect-gcp\n                bucket: my-bucket\n                folder: my-folder\n                credentials:\n                    project: my-project\n                    service_account_file: /path/to/service_account.json\n        ```\n\n    \"\"\"\n    project = credentials.get(\"project\") if credentials else None\n\n    gcp_creds = None\n    if credentials is not None:\n        if credentials.get(\"service_account_info\") is not None:\n            gcp_creds = Credentials.from_service_account_info(\n                credentials.get(\"service_account_info\"),\n                scopes=[\"https://www.googleapis.com/auth/cloud-platform\"],\n            )\n        elif credentials.get(\"service_account_file\") is not None:\n            gcp_creds = Credentials.from_service_account_file(\n                credentials.get(\"service_account_file\"),\n                scopes=[\"https://www.googleapis.com/auth/cloud-platform\"],\n            )\n\n    gcp_creds = gcp_creds or google.auth.default()[0]\n\n    storage_client = StorageClient(credentials=gcp_creds, project=project)\n    bucket_resource = storage_client.bucket(bucket)\n\n    local_path = Path.cwd()\n\n    included_files = None\n    if ignore_file and Path(ignore_file).exists():\n        with open(ignore_file, \"r\") as f:\n            ignore_patterns = f.readlines()\n        included_files = filter_files(str(local_path), ignore_patterns)\n\n    for local_file_path in local_path.expanduser().rglob(\"*\"):\n        relative_local_file_path = local_file_path.relative_to(local_path)\n        if (\n            included_files is not None\n            and str(relative_local_file_path) not 
in included_files\n        ):\n            continue\n        elif not local_file_path.is_dir():\n            remote_file_path = (folder / relative_local_file_path).as_posix()\n\n            blob_resource = bucket_resource.blob(remote_file_path)\n            blob_resource.upload_from_filename(local_file_path)\n\n    return {\n        \"bucket\": bucket,\n        \"folder\": folder,\n    }\n
        "},{"location":"integrations/prefect-github/","title":"prefect-github","text":""},{"location":"integrations/prefect-github/#welcome","title":"Welcome!","text":"

        Prefect integrations interacting with GitHub.

        The tasks within this collection were created by a code generator using the GitHub GraphQL schema.

        "},{"location":"integrations/prefect-github/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-github/#python-setup","title":"Python setup","text":"

        Requires an installation of Python 3.8 or newer.

        We recommend using a Python virtual environment manager such as pipenv, conda or virtualenv.

        These tasks are designed to work with Prefect 2. For more information about how to use Prefect, please refer to the Prefect documentation.

        "},{"location":"integrations/prefect-github/#installation","title":"Installation","text":"

        Install prefect-github with pip:

        pip install prefect-github\n

        Then, register the block types to view them on Prefect Cloud:

        prefect block register -m prefect_github\n

        Note that to use the load method on Blocks, you must already have a block document saved, either through code or through the UI.
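
        For example, a block document can be saved through code before it is loaded elsewhere; this minimal sketch assumes a personal access token and uses the block name github-token referenced in the flow example below.

from prefect_github import GitHubCredentials

# Save a block document once so it can later be loaded by name.
GitHubCredentials(token="ghp_...").save("github-token")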

        "},{"location":"integrations/prefect-github/#write-and-run-a-flow","title":"Write and run a flow","text":"
        from prefect import flow\nfrom prefect_github import GitHubCredentials\nfrom prefect_github.repository import query_repository\nfrom prefect_github.mutations import add_star_starrable\n\n\n@flow()\ndef github_add_star_flow():\n    github_credentials = GitHubCredentials.load(\"github-token\")\n    repository_id = query_repository(\n        \"PrefectHQ\",\n        \"Prefect\",\n        github_credentials=github_credentials,\n        return_fields=\"id\"\n    )[\"id\"]\n    starrable = add_star_starrable(\n        repository_id,\n        github_credentials\n    )\n    return starrable\n\n\ngithub_add_star_flow()\n
        "},{"location":"integrations/prefect-github/#resources","title":"Resources","text":"

        If you encounter any bugs while using prefect-github, feel free to open an issue in the prefect-github repository.

        If you have any questions or issues while using prefect-github, you can find help in the Prefect Slack community.

        Feel free to \u2b50\ufe0f or watch prefect-github for updates too!

        "},{"location":"integrations/prefect-github/#development","title":"Development","text":"

        If you'd like to install a version of prefect-github for development, clone the repository and perform an editable install with pip:

        git clone https://github.com/PrefectHQ/prefect-github.git\n\ncd prefect-github/\n\npip install -e \".[dev]\"\n\n# Install linting pre-commit hooks\npre-commit install\n
        "},{"location":"integrations/prefect-github/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-github/credentials/#prefect_github.credentials","title":"prefect_github.credentials","text":"

        Credential classes used to perform authenticated interactions with GitHub

        "},{"location":"integrations/prefect-github/credentials/#prefect_github.credentials.GitHubCredentials","title":"GitHubCredentials","text":"

        Bases: CredentialsBlock

        Block used to manage GitHub authentication.

        Attributes:

        Name Type Description token SecretStr

        the token to authenticate into GitHub.

        Examples:

        Load stored GitHub credentials:

        from prefect_github import GitHubCredentials\ngithub_credentials_block = GitHubCredentials.load(\"BLOCK_NAME\")\n

        Source code in prefect_github/credentials.py
        class GitHubCredentials(CredentialsBlock):\n    \"\"\"\n    Block used to manage GitHub authentication.\n\n    Attributes:\n        token: the token to authenticate into GitHub.\n\n    Examples:\n        Load stored GitHub credentials:\n        ```python\n        from prefect_github import GitHubCredentials\n        github_credentials_block = GitHubCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"GitHub Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/41971cfecfea5f79ff334164f06ecb34d1038dd4-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-github/credentials/#prefect_github.credentials.GitHubCredentials\"  # noqa\n\n    token: SecretStr = Field(\n        default=None, description=\"A GitHub personal access token (PAT).\"\n    )\n\n    def get_client(self) -> HTTPEndpoint:\n        \"\"\"\n        Gets an authenticated GitHub GraphQL HTTPEndpoint client.\n\n        Returns:\n            An authenticated GitHub GraphQL HTTPEndpoint client.\n\n        Example:\n            Gets an authenticated GitHub GraphQL HTTPEndpoint client.\n            ```python\n            from prefect_github import GitHubCredentials\n\n            github_credentials = GitHubCredentials(token=token)\n            client = github_credentials.get_client()\n            ```\n        \"\"\"\n\n        if self.token is not None:\n            base_headers = {\"Authorization\": f\"Bearer {self.token.get_secret_value()}\"}\n        else:\n            base_headers = None\n\n        endpoint = HTTPEndpoint(\n            \"https://api.github.com/graphql\", base_headers=base_headers\n        )\n        return endpoint\n\n    def get_endpoint(self) -> HTTPEndpoint:\n        \"\"\"\n        Gets an authenticated GitHub GraphQL HTTPEndpoint.\n\n        Returns:\n            An authenticated GitHub GraphQL HTTPEndpoint\n\n        Example:\n            Gets an authenticated GitHub GraphQL HTTPEndpoint.\n            ```python\n            from prefect import flow\n            from prefect_github import GitHubCredentials\n\n            @flow\n            def example_get_endpoint_flow():\n                token = \"token_xxxxxxx\"\n                github_credentials = GitHubCredentials(token=token)\n                endpoint = github_credentials.get_endpoint()\n                return endpoint\n\n            example_get_endpoint_flow()\n            ```\n        \"\"\"\n        warnings.warn(\n            \"`get_endpoint` is deprecated and will be removed March 31st, 2023, \"\n            \"use `get_client` instead.\",\n            DeprecationWarning,\n        )\n        return self.get_client()\n
        "},{"location":"integrations/prefect-github/credentials/#prefect_github.credentials.GitHubCredentials.get_client","title":"get_client","text":"

        Gets an authenticated GitHub GraphQL HTTPEndpoint client.

        Returns:

        Type Description HTTPEndpoint

        An authenticated GitHub GraphQL HTTPEndpoint client.

        Example

        Gets an authenticated GitHub GraphQL HTTPEndpoint client.

        from prefect_github import GitHubCredentials\n\ngithub_credentials = GitHubCredentials(token=token)\nclient = github_credentials.get_client()\n

        Source code in prefect_github/credentials.py
        def get_client(self) -> HTTPEndpoint:\n    \"\"\"\n    Gets an authenticated GitHub GraphQL HTTPEndpoint client.\n\n    Returns:\n        An authenticated GitHub GraphQL HTTPEndpoint client.\n\n    Example:\n        Gets an authenticated GitHub GraphQL HTTPEndpoint client.\n        ```python\n        from prefect_github import GitHubCredentials\n\n        github_credentials = GitHubCredentials(token=token)\n        client = github_credentials.get_client()\n        ```\n    \"\"\"\n\n    if self.token is not None:\n        base_headers = {\"Authorization\": f\"Bearer {self.token.get_secret_value()}\"}\n    else:\n        base_headers = None\n\n    endpoint = HTTPEndpoint(\n        \"https://api.github.com/graphql\", base_headers=base_headers\n    )\n    return endpoint\n
        "},{"location":"integrations/prefect-github/credentials/#prefect_github.credentials.GitHubCredentials.get_endpoint","title":"get_endpoint","text":"

        Gets an authenticated GitHub GraphQL HTTPEndpoint.

        Returns:

        Type Description HTTPEndpoint

        An authenticated GitHub GraphQL HTTPEndpoint

        Example

        Gets an authenticated GitHub GraphQL HTTPEndpoint.

        from prefect import flow\nfrom prefect_github import GitHubCredentials\n\n@flow\ndef example_get_endpoint_flow():\n    token = \"token_xxxxxxx\"\n    github_credentials = GitHubCredentials(token=token)\n    endpoint = github_credentials.get_endpoint()\n    return endpoint\n\nexample_get_endpoint_flow()\n

        Source code in prefect_github/credentials.py
        def get_endpoint(self) -> HTTPEndpoint:\n    \"\"\"\n    Gets an authenticated GitHub GraphQL HTTPEndpoint.\n\n    Returns:\n        An authenticated GitHub GraphQL HTTPEndpoint\n\n    Example:\n        Gets an authenticated GitHub GraphQL HTTPEndpoint.\n        ```python\n        from prefect import flow\n        from prefect_github import GitHubCredentials\n\n        @flow\n        def example_get_endpoint_flow():\n            token = \"token_xxxxxxx\"\n            github_credentials = GitHubCredentials(token=token)\n            endpoint = github_credentials.get_endpoint()\n            return endpoint\n\n        example_get_endpoint_flow()\n        ```\n    \"\"\"\n    warnings.warn(\n        \"`get_endpoint` is deprecated and will be removed March 31st, 2023, \"\n        \"use `get_client` instead.\",\n        DeprecationWarning,\n    )\n    return self.get_client()\n
        "},{"location":"integrations/prefect-github/graphql/","title":"Graphql","text":""},{"location":"integrations/prefect-github/graphql/#prefect_github.graphql","title":"prefect_github.graphql","text":"

        This module contains generic GraphQL tasks.

        "},{"location":"integrations/prefect-github/graphql/#prefect_github.graphql.execute_graphql","title":"execute_graphql async","text":"

        Generic function for executing GraphQL operations.

        Parameters:

        Name Type Description Default op Union[Operation, str]

        The operation, either as a valid GraphQL string or sgqlc.Operation.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required error_key str

        The key name to look out for in the response that indicates an error has occurred with the request.

        'errors'

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Examples:

        Queries the first three issues from the Prefect repository using a string query.

        from prefect import flow\nfrom prefect_github import GitHubCredentials\nfrom prefect_github.graphql import execute_graphql\n\n@flow()\ndef example_execute_graphql_flow():\n    op = '''\n        query GitHubRepoIssues($owner: String!, $name: String!) {\n            repository(owner: $owner, name: $name) {\n                issues(last: 3) {\n                    nodes {\n                        number\n                        title\n                    }\n                }\n            }\n        }\n    '''\n    token = \"ghp_...\"\n    github_credentials = GitHubCredentials(token=token)\n    params = dict(owner=\"PrefectHQ\", name=\"Prefect\")\n    result = execute_graphql(op, github_credentials, **params)\n    return result\n\nexample_execute_graphql_flow()\n

        Queries the first three issues from Prefect repository using a sgqlc.Operation.

        from prefect import flow\nfrom sgqlc.operation import Operation\nfrom prefect_github import GitHubCredentials\nfrom prefect_github.schemas import graphql_schema\nfrom prefect_github.graphql import execute_graphql\n\n@flow()\ndef example_execute_graphql_flow():\n    op = Operation(graphql_schema.Query)\n    op_settings = op.repository(\n        owner=\"PrefectHQ\", name=\"Prefect\"\n    ).issues(\n        first=3\n    ).nodes()\n    op_settings.__fields__(\"id\", \"title\")\n    token = \"ghp_...\"\n    github_credentials = GitHubCredentials(token=token)\n    result = execute_graphql(\n        op,\n        github_credentials,\n    )\n    return result\n\nexample_execute_graphql_flow()\n

        Source code in prefect_github/graphql.py
        @task\nasync def execute_graphql(\n    op: Union[Operation, str],\n    github_credentials: GitHubCredentials,\n    error_key: str = \"errors\",\n    **vars,\n) -> Dict[str, Any]:\n    # NOTE: Maintainers can update these examples to match their collection!\n    \"\"\"\n    Generic function for executing GraphQL operations.\n\n    Args:\n        op: The operation, either as a valid GraphQL string or sgqlc.Operation.\n        github_credentials: Credentials to use for authentication with GitHub.\n        error_key: The key name to look out for in the response\n            that indicates an error has occurred with the request.\n\n    Returns:\n        A dict of the returned fields.\n\n    Examples:\n        Queries the first three issues from the Prefect repository\n        using a string query.\n        ```python\n        from prefect import flow\n        from prefect_github import GitHubCredentials\n        from prefect_github.graphql import execute_graphql\n\n        @flow()\n        def example_execute_graphql_flow():\n            op = '''\n                query GitHubRepoIssues($owner: String!, $name: String!) {\n                    repository(owner: $owner, name: $name) {\n                        issues(last: 3) {\n                            nodes {\n                                number\n                                title\n                            }\n                        }\n                    }\n                }\n            '''\n            token = \"ghp_...\"\n            github_credentials = GitHubCredentials(token=token)\n            params = dict(owner=\"PrefectHQ\", name=\"Prefect\")\n            result = execute_graphql(op, github_credentials, **params)\n            return result\n\n        example_execute_graphql_flow()\n        ```\n\n        Queries the first three issues from Prefect repository\n        using a sgqlc.Operation.\n        ```python\n        from prefect import flow\n        from sgqlc.operation import Operation\n        from prefect_github import GitHubCredentials\n        from prefect_github.schemas import graphql_schema\n        from prefect_github.graphql import execute_graphql\n\n        @flow()\n        def example_execute_graphql_flow():\n            op = Operation(graphql_schema.Query)\n            op_settings = op.repository(\n                owner=\"PrefectHQ\", name=\"Prefect\"\n            ).issues(\n                first=3\n            ).nodes()\n            op_settings.__fields__(\"id\", \"title\")\n            token = \"ghp_...\"\n            github_credentials = GitHubCredentials(token=token)\n            result = execute_graphql(\n                op,\n                github_credentials,\n            )\n            return result\n\n        example_execute_graphql_flow()\n        ```\n    \"\"\"\n    result = await _execute_graphql_op(\n        op, github_credentials, error_key=error_key, **vars\n    )\n    return result\n
        "},{"location":"integrations/prefect-github/mutations/","title":"Mutations","text":""},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations","title":"prefect_github.mutations","text":"

        This module contains GitHub mutation tasks.

        "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.add_comment_subject","title":"add_comment_subject async","text":"

        Adds a comment to an Issue or Pull Request.

        Parameters:

        Name Type Description Default subject_id str

        The Node ID of the subject to modify.

        required body str

        The contents of the comment.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/mutations.py
        @task\nasync def add_comment_subject(  # noqa\n    subject_id: str,\n    body: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Adds a comment to an Issue or Pull Request.\n\n    Args:\n        subject_id: The Node ID of the subject to modify.\n        body: The contents of the comment.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.add_comment(\n        **strip_kwargs(\n            input=dict(\n                subject_id=subject_id,\n                body=body,\n            )\n        )\n    ).subject(**strip_kwargs())\n\n    op_stack = (\n        \"addComment\",\n        \"subject\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"addComment\"][\"subject\"]\n
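
        A minimal usage sketch, following the flow pattern used elsewhere in these docs; the Node ID, comment body, and block name github-token are hypothetical placeholders.

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.mutations import add_comment_subject


@flow
def github_add_comment_flow():
    github_credentials = GitHubCredentials.load("github-token")
    return add_comment_subject(
        "ISSUE_OR_PR_NODE_ID",  # hypothetical Node ID
        "Thanks for opening this!",
        github_credentials=github_credentials,
    )


github_add_comment_flow()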
        "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.add_pull_request_review","title":"add_pull_request_review async","text":"

        Adds a review to a Pull Request.

        Parameters:

        Name Type Description Default pull_request_id str

        The Node ID of the pull request to modify.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required commit_oid datetime

        The commit OID the review pertains to.

        None body str

        The contents of the review body comment.

        None event PullRequestReviewEvent

        The event to perform on the pull request review.

        None comments Iterable[DraftPullRequestReviewComment]

        The review line comments.

        None threads Iterable[DraftPullRequestReviewThread]

        The review line comment threads.

        None return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/mutations.py
        @task\nasync def add_pull_request_review(  # noqa\n    pull_request_id: str,\n    github_credentials: GitHubCredentials,\n    commit_oid: datetime = None,\n    body: str = None,\n    event: graphql_schema.PullRequestReviewEvent = None,\n    comments: Iterable[graphql_schema.DraftPullRequestReviewComment] = None,\n    threads: Iterable[graphql_schema.DraftPullRequestReviewThread] = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Adds a review to a Pull Request.\n\n    Args:\n        pull_request_id: The Node ID of the pull request to modify.\n        github_credentials: Credentials to use for authentication with GitHub.\n        commit_oid: The commit OID the review pertains to.\n        body: The contents of the review body comment.\n        event: The event to perform on the pull request review.\n        comments: The review line comments.\n        threads: The review line comment threads.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.add_pull_request_review(\n        **strip_kwargs(\n            input=dict(\n                pull_request_id=pull_request_id,\n                commit_oid=commit_oid,\n                body=body,\n                event=event,\n                comments=comments,\n                threads=threads,\n            )\n        )\n    ).pull_request_review(**strip_kwargs())\n\n    op_stack = (\n        \"addPullRequestReview\",\n        \"pullRequestReview\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"addPullRequestReview\"][\"pullRequestReview\"]\n
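
        A minimal sketch of calling this task from a flow; the pull request Node ID is a placeholder, and APPROVE is assumed to be an acceptable PullRequestReviewEvent value passed as a plain string.

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.mutations import add_pull_request_review


@flow
def github_review_flow():
    github_credentials = GitHubCredentials.load("github-token")
    return add_pull_request_review(
        "PULL_REQUEST_NODE_ID",  # hypothetical Node ID
        github_credentials,
        body="Looks good to me!",
        event="APPROVE",  # assumed PullRequestReviewEvent value as a string
    )


github_review_flow()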
        "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.add_reaction","title":"add_reaction async","text":"

        Adds a reaction to a subject.

        Parameters:

        Name Type Description Default subject_id str

        The Node ID of the subject to modify.

        required content ReactionContent

        The name of the emoji to react with.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/mutations.py
        @task\nasync def add_reaction(  # noqa\n    subject_id: str,\n    content: graphql_schema.ReactionContent,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Adds a reaction to a subject.\n\n    Args:\n        subject_id: The Node ID of the subject to modify.\n        content: The name of the emoji to react with.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.add_reaction(\n        **strip_kwargs(\n            input=dict(\n                subject_id=subject_id,\n                content=content,\n            )\n        )\n    ).reaction(**strip_kwargs())\n\n    op_stack = (\n        \"addReaction\",\n        \"reaction\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"addReaction\"][\"reaction\"]\n
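
        A minimal sketch, assuming THUMBS_UP is an acceptable ReactionContent value passed as a plain string; the Node ID and block name are placeholders.

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.mutations import add_reaction


@flow
def github_add_reaction_flow():
    github_credentials = GitHubCredentials.load("github-token")
    return add_reaction(
        "ISSUE_OR_COMMENT_NODE_ID",  # hypothetical Node ID
        "THUMBS_UP",  # assumed ReactionContent value as a string
        github_credentials,
    )


github_add_reaction_flow()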
        "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.add_reaction_subject","title":"add_reaction_subject async","text":"

        Adds a reaction to a subject.

        Parameters:

        Name Type Description Default subject_id str

        The Node ID of the subject to modify.

        required content ReactionContent

        The name of the emoji to react with.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/mutations.py
        @task\nasync def add_reaction_subject(  # noqa\n    subject_id: str,\n    content: graphql_schema.ReactionContent,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Adds a reaction to a subject.\n\n    Args:\n        subject_id: The Node ID of the subject to modify.\n        content: The name of the emoji to react with.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.add_reaction(\n        **strip_kwargs(\n            input=dict(\n                subject_id=subject_id,\n                content=content,\n            )\n        )\n    ).subject(**strip_kwargs())\n\n    op_stack = (\n        \"addReaction\",\n        \"subject\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"addReaction\"][\"subject\"]\n
        "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.add_star_starrable","title":"add_star_starrable async","text":"

        Adds a star to a Starrable.

        Parameters:

        Name Type Description Default starrable_id str

        The Starrable ID to star.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/mutations.py
        @task\nasync def add_star_starrable(  # noqa\n    starrable_id: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Adds a star to a Starrable.\n\n    Args:\n        starrable_id: The Starrable ID to star.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.add_star(\n        **strip_kwargs(\n            input=dict(\n                starrable_id=starrable_id,\n            )\n        )\n    ).starrable(**strip_kwargs())\n\n    op_stack = (\n        \"addStar\",\n        \"starrable\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"addStar\"][\"starrable\"]\n
        "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.close_issue","title":"close_issue async","text":"

        Close an issue.

        Parameters:

        Name Type Description Default issue_id str

        ID of the issue to be closed.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required state_reason IssueClosedStateReason

        The reason the issue is to be closed.

        None return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/mutations.py
        @task\nasync def close_issue(  # noqa\n    issue_id: str,\n    github_credentials: GitHubCredentials,\n    state_reason: graphql_schema.IssueClosedStateReason = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Close an issue.\n\n    Args:\n        issue_id: ID of the issue to be closed.\n        github_credentials: Credentials to use for authentication with GitHub.\n        state_reason: The reason the issue is to be closed.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.close_issue(\n        **strip_kwargs(\n            input=dict(\n                issue_id=issue_id,\n                state_reason=state_reason,\n            )\n        )\n    ).issue(**strip_kwargs())\n\n    op_stack = (\n        \"closeIssue\",\n        \"issue\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"closeIssue\"][\"issue\"]\n
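
        A minimal sketch of closing an issue from a flow; the issue Node ID is a placeholder, and COMPLETED is assumed to be a valid IssueClosedStateReason value passed as a plain string.

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.mutations import close_issue


@flow
def github_close_issue_flow():
    github_credentials = GitHubCredentials.load("github-token")
    return close_issue(
        "ISSUE_NODE_ID",  # hypothetical Node ID
        github_credentials,
        state_reason="COMPLETED",  # assumed IssueClosedStateReason value as a string
    )


github_close_issue_flow()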
        "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.close_pull_request","title":"close_pull_request async","text":"

        Close a pull request.

        Parameters:

        Name Type Description Default pull_request_id str

        ID of the pull request to be closed.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/mutations.py
        @task\nasync def close_pull_request(  # noqa\n    pull_request_id: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Close a pull request.\n\n    Args:\n        pull_request_id: ID of the pull request to be closed.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.close_pull_request(\n        **strip_kwargs(\n            input=dict(\n                pull_request_id=pull_request_id,\n            )\n        )\n    ).pull_request(**strip_kwargs())\n\n    op_stack = (\n        \"closePullRequest\",\n        \"pullRequest\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"closePullRequest\"][\"pullRequest\"]\n
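
        A minimal sketch of closing a pull request from a flow; the Node ID and block name are placeholders.

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.mutations import close_pull_request


@flow
def github_close_pull_request_flow():
    github_credentials = GitHubCredentials.load("github-token")
    return close_pull_request(
        "PULL_REQUEST_NODE_ID",  # hypothetical Node ID
        github_credentials,
    )


github_close_pull_request_flow()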
        "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.create_issue","title":"create_issue async","text":"

        Creates a new issue.

        Parameters:

        Name Type Description Default repository_id str

        The Node ID of the repository.

        required title str

        The title for the issue.

        required assignee_ids Iterable[str]

        The Node ID for the user assignee for this issue.

        required label_ids Iterable[str]

        An array of Node IDs of labels for this issue.

        required project_ids Iterable[str]

        An array of Node IDs for projects associated with this issue.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required body str

        The body for the issue description.

        None milestone_id str

        The Node ID of the milestone for this issue.

        None issue_template str

        The name of an issue template in the repository, assigns labels and assignees from the template to the issue.

        None return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/mutations.py
        @task\nasync def create_issue(  # noqa\n    repository_id: str,\n    title: str,\n    assignee_ids: Iterable[str],\n    label_ids: Iterable[str],\n    project_ids: Iterable[str],\n    github_credentials: GitHubCredentials,\n    body: str = None,\n    milestone_id: str = None,\n    issue_template: str = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Creates a new issue.\n\n    Args:\n        repository_id: The Node ID of the repository.\n        title: The title for the issue.\n        assignee_ids: The Node ID for the user assignee for this issue.\n        label_ids: An array of Node IDs of labels for this issue.\n        project_ids: An array of Node IDs for projects associated with this\n            issue.\n        github_credentials: Credentials to use for authentication with GitHub.\n        body: The body for the issue description.\n        milestone_id: The Node ID of the milestone for this issue.\n        issue_template: The name of an issue template in the repository, assigns\n            labels and assignees from the template to the issue.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.create_issue(\n        **strip_kwargs(\n            input=dict(\n                repository_id=repository_id,\n                title=title,\n                assignee_ids=assignee_ids,\n                label_ids=label_ids,\n                project_ids=project_ids,\n                body=body,\n                milestone_id=milestone_id,\n                issue_template=issue_template,\n            )\n        )\n    ).issue(**strip_kwargs())\n\n    op_stack = (\n        \"createIssue\",\n        \"issue\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"createIssue\"][\"issue\"]\n
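
        A minimal sketch that first resolves the repository Node ID with query_repository (as in the Welcome example) and then opens an issue; the owner, repository, and issue fields are placeholders, and passing empty lists for assignees, labels, and projects is an assumption.

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository
from prefect_github.mutations import create_issue


@flow
def github_create_issue_flow():
    github_credentials = GitHubCredentials.load("github-token")
    repository_id = query_repository(
        "my-org",   # hypothetical owner
        "my-repo",  # hypothetical repository
        github_credentials=github_credentials,
        return_fields="id",
    )["id"]
    return create_issue(
        repository_id,
        "Bug: something is broken",  # title
        [],  # assignee_ids; empty list assumed acceptable
        [],  # label_ids
        [],  # project_ids
        github_credentials,
        body="Steps to reproduce...",
    )


github_create_issue_flow()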
        "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.create_pull_request","title":"create_pull_request async","text":"

        Create a new pull request.

        Parameters:

        Name Type Description Default repository_id str

        The Node ID of the repository.

        required base_ref_name str

        The name of the branch you want your changes pulled into. This should be an existing branch on the current repository. You cannot update the base branch on a pull request to point to another repository.

        required head_ref_name str

        The name of the branch where your changes are implemented. For cross-repository pull requests in the same network, namespace head_ref_name with a user like this: username:branch.

        required title str

        The title of the pull request.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required body str

        The contents of the pull request.

        None maintainer_can_modify bool

        Indicates whether maintainers can modify the pull request.

        None draft bool

        Indicates whether this pull request should be a draft.

        None return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/mutations.py
        @task\nasync def create_pull_request(  # noqa\n    repository_id: str,\n    base_ref_name: str,\n    head_ref_name: str,\n    title: str,\n    github_credentials: GitHubCredentials,\n    body: str = None,\n    maintainer_can_modify: bool = None,\n    draft: bool = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Create a new pull request.\n\n    Args:\n        repository_id: The Node ID of the repository.\n        base_ref_name: The name of the branch you want your changes pulled into.\n            This should be an existing branch on the current repository.\n            You cannot update the base branch on a pull request to point\n            to another repository.\n        head_ref_name: The name of the branch where your changes are\n            implemented. For cross-repository pull requests in the same\n            network, namespace `head_ref_name` with a user like this:\n            `username:branch`.\n        title: The title of the pull request.\n        github_credentials: Credentials to use for authentication with GitHub.\n        body: The contents of the pull request.\n        maintainer_can_modify: Indicates whether maintainers can modify the pull\n            request.\n        draft: Indicates whether this pull request should be a draft.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.create_pull_request(\n        **strip_kwargs(\n            input=dict(\n                repository_id=repository_id,\n                base_ref_name=base_ref_name,\n                head_ref_name=head_ref_name,\n                title=title,\n                body=body,\n                maintainer_can_modify=maintainer_can_modify,\n                draft=draft,\n            )\n        )\n    ).pull_request(**strip_kwargs())\n\n    op_stack = (\n        \"createPullRequest\",\n        \"pullRequest\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"createPullRequest\"][\"pullRequest\"]\n
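A hedged example of how this task could be invoked to open a draft pull request; the repository node ID, branch names, and token are illustrative placeholders rather than values from the source.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.mutations import create_pull_request


@flow
async def open_release_pr():
    credentials = GitHubCredentials(token="ghp_example_token")  # hypothetical token
    pull_request = await create_pull_request(
        repository_id="R_exampleRepoNodeId",   # hypothetical repository node ID
        base_ref_name="main",
        head_ref_name="release/1.2.0",         # hypothetical head branch
        title="Release 1.2.0",
        github_credentials=credentials,
        body="Automated release pull request.",
        draft=True,                            # open as a draft
    )
    return pull_request
```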
        "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.remove_reaction","title":"remove_reaction async","text":"

        Removes a reaction from a subject.

        Parameters:

        Name Type Description Default subject_id str

        The Node ID of the subject to modify.

        required content ReactionContent

        The name of the emoji reaction to remove.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/mutations.py
        @task\nasync def remove_reaction(  # noqa\n    subject_id: str,\n    content: graphql_schema.ReactionContent,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Removes a reaction from a subject.\n\n    Args:\n        subject_id: The Node ID of the subject to modify.\n        content: The name of the emoji reaction to remove.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.remove_reaction(\n        **strip_kwargs(\n            input=dict(\n                subject_id=subject_id,\n                content=content,\n            )\n        )\n    ).reaction(**strip_kwargs())\n\n    op_stack = (\n        \"removeReaction\",\n        \"reaction\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"removeReaction\"][\"reaction\"]\n
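An illustrative sketch, assuming the emoji name can be supplied as the plain string used by GitHub's ReactionContent enum (for example THUMBS_UP); the subject node ID and token are hypothetical.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.mutations import remove_reaction


@flow
async def clear_thumbs_up():
    credentials = GitHubCredentials(token="ghp_example_token")  # hypothetical token
    return await remove_reaction(
        subject_id="I_exampleIssueNodeId",  # hypothetical issue or comment node ID
        content="THUMBS_UP",                # assumed to be accepted as a plain enum string
        github_credentials=credentials,
    )
```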
        "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.remove_reaction_subject","title":"remove_reaction_subject async","text":"

        Removes a reaction from a subject.

        Parameters:

        Name Type Description Default subject_id str

        The Node ID of the subject to modify.

        required content ReactionContent

        The name of the emoji reaction to remove.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/mutations.py
        @task\nasync def remove_reaction_subject(  # noqa\n    subject_id: str,\n    content: graphql_schema.ReactionContent,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Removes a reaction from a subject.\n\n    Args:\n        subject_id: The Node ID of the subject to modify.\n        content: The name of the emoji reaction to remove.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.remove_reaction(\n        **strip_kwargs(\n            input=dict(\n                subject_id=subject_id,\n                content=content,\n            )\n        )\n    ).subject(**strip_kwargs())\n\n    op_stack = (\n        \"removeReaction\",\n        \"subject\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"removeReaction\"][\"subject\"]\n
        "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.remove_star_starrable","title":"remove_star_starrable async","text":"

        Removes a star from a Starrable.

        Parameters:

        Name Type Description Default starrable_id str

        The Starrable ID to unstar.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/mutations.py
        @task\nasync def remove_star_starrable(  # noqa\n    starrable_id: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Removes a star from a Starrable.\n\n    Args:\n        starrable_id: The Starrable ID to unstar.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.remove_star(\n        **strip_kwargs(\n            input=dict(\n                starrable_id=starrable_id,\n            )\n        )\n    ).starrable(**strip_kwargs())\n\n    op_stack = (\n        \"removeStar\",\n        \"starrable\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"removeStar\"][\"starrable\"]\n

        "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.request_reviews","title":"request_reviews async","text":"

        Set review requests on a pull request.

        Parameters:

        Name Type Description Default pull_request_id str

        The Node ID of the pull request to modify.

        required user_ids Iterable[str]

        The Node IDs of the users to request reviews from.

        required team_ids Iterable[str]

        The Node IDs of the teams to request reviews from.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required union bool

        Add users to the existing set of reviewers rather than replacing it.

        None return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/mutations.py
        @task\nasync def request_reviews(  # noqa\n    pull_request_id: str,\n    user_ids: Iterable[str],\n    team_ids: Iterable[str],\n    github_credentials: GitHubCredentials,\n    union: bool = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Set review requests on a pull request.\n\n    Args:\n        pull_request_id: The Node ID of the pull request to modify.\n        user_ids: The Node IDs of the user to request.\n        team_ids: The Node IDs of the team to request.\n        github_credentials: Credentials to use for authentication with GitHub.\n        union: Add users to the set rather than replace.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.request_reviews(\n        **strip_kwargs(\n            input=dict(\n                pull_request_id=pull_request_id,\n                user_ids=user_ids,\n                team_ids=team_ids,\n                union=union,\n            )\n        )\n    )\n\n    op_stack = (\"requestReviews\",)\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"requestReviews\"]\n
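A brief sketch showing one way to request reviews from inside a flow; the pull request and reviewer node IDs are made up for illustration, and union=True keeps any review requests that already exist.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.mutations import request_reviews


@flow
async def assign_reviewers():
    credentials = GitHubCredentials(token="ghp_example_token")  # hypothetical token
    return await request_reviews(
        pull_request_id="PR_examplePullRequestNodeId",  # hypothetical pull request node ID
        user_ids=["U_exampleReviewerNodeId"],           # hypothetical reviewer node IDs
        team_ids=[],
        github_credentials=credentials,
        union=True,  # add to the existing reviewer set instead of replacing it
    )
```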
        "},{"location":"integrations/prefect-github/mutations/#prefect_github.mutations.request_reviews_pull_request","title":"request_reviews_pull_request async","text":"

        Set review requests on a pull request.

        Parameters:

        Name Type Description Default pull_request_id str

        The Node ID of the pull request to modify.

        required user_ids Iterable[str]

        The Node IDs of the users to request reviews from.

        required team_ids Iterable[str]

        The Node IDs of the teams to request reviews from.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required union bool

        Add users to the existing set of reviewers rather than replacing it.

        None return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/mutation/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/mutations.py
        @task\nasync def request_reviews_pull_request(  # noqa\n    pull_request_id: str,\n    user_ids: Iterable[str],\n    team_ids: Iterable[str],\n    github_credentials: GitHubCredentials,\n    union: bool = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Set review requests on a pull request.\n\n    Args:\n        pull_request_id: The Node ID of the pull request to modify.\n        user_ids: The Node IDs of the user to request.\n        team_ids: The Node IDs of the team to request.\n        github_credentials: Credentials to use for authentication with GitHub.\n        union: Add users to the set rather than replace.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/mutation/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Mutation)\n    op_selection = op.request_reviews(\n        **strip_kwargs(\n            input=dict(\n                pull_request_id=pull_request_id,\n                user_ids=user_ids,\n                team_ids=team_ids,\n                union=union,\n            )\n        )\n    ).pull_request(**strip_kwargs())\n\n    op_stack = (\n        \"requestReviews\",\n        \"pullRequest\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"requestReviews\"][\"pullRequest\"]\n
        "},{"location":"integrations/prefect-github/organization/","title":"Organization","text":""},{"location":"integrations/prefect-github/organization/#prefect_github.organization","title":"prefect_github.organization","text":"

        A module containing GitHub query_organization* tasks.

        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization","title":"query_organization async","text":"

        The query root of GitHub's GraphQL interface.

        Parameters:

        Name Type Description Default login str

        The organization's login.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/organization.py
        @task\nasync def query_organization(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The query root of GitHub's GraphQL interface.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    )\n\n    op_stack = (\"organization\",)\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"]\n
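A short example of subsetting the response with return_fields; the snake_case field names are common Organization fields, while the login and token are placeholders.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization


@flow
async def inspect_org():
    credentials = GitHubCredentials(token="ghp_example_token")  # hypothetical token
    org = await query_organization(
        login="PrefectHQ",
        github_credentials=credentials,
        return_fields=["id", "name", "description"],  # snake_case subset of fields
    )
    return org
```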
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_audit_log","title":"query_organization_audit_log async","text":"

        Audit log entries of the organization.

        Parameters:

        Name Type Description Default login str

        The organization's login.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required after str

        Returns the elements in the list that come after the specified cursor.

        None before str

        Returns the elements in the list that come before the specified cursor.

        None first int

        Returns the first n elements from the list.

        None last int

        Returns the last n elements from the list.

        None query str

        The query string to filter audit entries.

        None order_by AuditLogOrder

        Ordering options for the returned audit log entries.

        {'field': 'CREATED_AT', 'direction': 'DESC'} return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_audit_log(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    query: str = None,\n    order_by: graphql_schema.AuditLogOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Audit log entries of the organization.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        query: The query string to filter audit entries.\n        order_by: Ordering options for the returned audit log entries.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).audit_log(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            query=query,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"auditLog\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"auditLog\"]\n
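A hedged sketch that fetches only the most recent audit entries; the organization login, token, and the query phrase are assumptions for illustration.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization_audit_log


@flow
async def recent_audit_entries():
    credentials = GitHubCredentials(token="ghp_example_token")  # hypothetical token
    return await query_organization_audit_log(
        login="my-org",                    # hypothetical organization login
        github_credentials=credentials,
        first=25,                          # only the 25 most recent entries
        query="action:org.update_member",  # hypothetical audit log search phrase
    )
```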
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_domains","title":"query_organization_domains async","text":"

        A list of domains owned by the organization.

        Parameters:

        Name Type Description Default login str

        The organization's login.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required after str

        Returns the elements in the list that come after the specified cursor.

        None before str

        Returns the elements in the list that come before the specified cursor.

        None first int

        Returns the first n elements from the list.

        None last int

        Returns the last n elements from the list.

        None is_verified bool

        Filter by whether the domain is verified.

        None is_approved bool

        Filter by whether the domain is approved.

        None order_by VerifiableDomainOrder

        Ordering options for verifiable domains returned.

        {'field': 'DOMAIN', 'direction': 'ASC'} return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_domains(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    is_verified: bool = None,\n    is_approved: bool = None,\n    order_by: graphql_schema.VerifiableDomainOrder = {\n        \"field\": \"DOMAIN\",\n        \"direction\": \"ASC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of domains owned by the organization.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        is_verified: Filter by if the domain is verified.\n        is_approved: Filter by if the domain is approved.\n        order_by: Ordering options for verifiable domains returned.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).domains(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            is_verified=is_verified,\n            is_approved=is_approved,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"domains\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"domains\"]\n
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_enterprise_owners","title":"query_organization_enterprise_owners async","text":"

        A list of owners of the organization's enterprise account.

        Parameters:

        Name Type Description Default login str

        The organization's login.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required query str

        The search string to look for.

        None organization_role RoleInOrganization

        The organization role to filter by.

        None order_by OrgEnterpriseOwnerOrder

        Ordering options for enterprise owners returned from the connection.

        {'field': 'LOGIN', 'direction': 'ASC'} after str

        Returns the elements in the list that come after the specified cursor.

        None before str

        Returns the elements in the list that come before the specified cursor.

        None first int

        Returns the first n elements from the list.

        None last int

        Returns the last n elements from the list.

        None return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_enterprise_owners(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    query: str = None,\n    organization_role: graphql_schema.RoleInOrganization = None,\n    order_by: graphql_schema.OrgEnterpriseOwnerOrder = {\n        \"field\": \"LOGIN\",\n        \"direction\": \"ASC\",\n    },\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of owners of the organization's enterprise account.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        query: The search string to look for.\n        organization_role: The organization role to filter by.\n        order_by: Ordering options for enterprise owners\n            returned from the connection.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).enterprise_owners(\n        **strip_kwargs(\n            query=query,\n            organization_role=organization_role,\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"enterpriseOwners\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"enterpriseOwners\"]\n
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_interaction_ability","title":"query_organization_interaction_ability async","text":"

        The interaction ability settings for this organization.

        Parameters:

        Name Type Description Default login str

        The organization's login.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_interaction_ability(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The interaction ability settings for this organization.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).interaction_ability(**strip_kwargs())\n\n    op_stack = (\n        \"organization\",\n        \"interactionAbility\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"interactionAbility\"]\n
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_ip_allow_list_entries","title":"query_organization_ip_allow_list_entries async","text":"

        The IP addresses that are allowed to access resources owned by the organization.

        Parameters:

        Name Type Description Default login str

        The organization's login.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required after str

        Returns the elements in the list that come after the specified cursor.

        None before str

        Returns the elements in the list that come before the specified cursor.

        None first int

        Returns the first n elements from the list.

        None last int

        Returns the last n elements from the list.

        None order_by IpAllowListEntryOrder

        Ordering options for IP allow list entries returned.

        {'field': 'ALLOW_LIST_VALUE', 'direction': 'ASC'} return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_ip_allow_list_entries(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.IpAllowListEntryOrder = {\n        \"field\": \"ALLOW_LIST_VALUE\",\n        \"direction\": \"ASC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The IP addresses that are allowed to access resources owned by the organization.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the list.\n        order_by: Ordering options for IP allow list\n            entries returned.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).ip_allow_list_entries(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"ipAllowListEntries\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"ipAllowListEntries\"]\n
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_item_showcase","title":"query_organization_item_showcase async","text":"

        Showcases a selection of repositories and gists that the profile owner has either curated or that have been selected automatically based on popularity.

        Parameters:

        Name Type Description Default login str

        The organization's login.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_item_showcase(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Showcases a selection of repositories and gists that the profile owner has\n    either curated or that have been selected automatically based on popularity.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).item_showcase(**strip_kwargs())\n\n    op_stack = (\n        \"organization\",\n        \"itemShowcase\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"itemShowcase\"]\n
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_member_statuses","title":"query_organization_member_statuses async","text":"

        Get the status messages members of this entity have set that are either public or visible only to the organization.

        Parameters:

        Name Type Description Default login str

        The organization's login.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required after str

        Returns the elements in the list that come after the specified cursor.

        None before str

        Returns the elements in the list that come before the specified cursor.

        None first int

        Returns the first n elements from the list.

        None last int

        Returns the last n elements from the list.

        None order_by UserStatusOrder

        Ordering options for user statuses returned from the connection.

        {'field': 'UPDATED_AT', 'direction': 'DESC'} return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_member_statuses(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.UserStatusOrder = {\n        \"field\": \"UPDATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Get the status messages members of this entity have set that are either public\n    or visible only to the organization.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        order_by: Ordering options for user statuses returned\n            from the connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).member_statuses(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"memberStatuses\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"memberStatuses\"]\n
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_members_with_role","title":"query_organization_members_with_role async","text":"

        A list of users who are members of this organization.

        Parameters:

        Name Type Description Default login str

        The organization's login.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required after str

        Returns the elements in the list that come after the specified cursor.

        None before str

        Returns the elements in the list that come before the specified cursor.

        None first int

        Returns the first n elements from the list.

        None last int

        Returns the last n elements from the list.

        None return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_members_with_role(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of users who are members of this organization.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).members_with_role(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"membersWithRole\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"membersWithRole\"]\n
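A minimal example fetching the first page of organization members; the login and token are placeholders.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization_members_with_role


@flow
async def list_first_members():
    credentials = GitHubCredentials(token="ghp_example_token")  # hypothetical token
    members = await query_organization_members_with_role(
        login="my-org",                # hypothetical organization login
        github_credentials=credentials,
        first=50,                      # first page of 50 members
    )
    return members
```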
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_packages","title":"query_organization_packages async","text":"

        A list of packages under the owner.

        Parameters:

        Name Type Description Default login str

        The organization's login.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required after str

        Returns the elements in the list that come after the specified cursor.

        None before str

        Returns the elements in the list that come before the specified cursor.

        None first int

        Returns the first n elements from the list.

        None last int

        Returns the last n elements from the list.

        None names Iterable[str]

        Find packages by their names.

        None repository_id str

        Find packages in a repository by ID.

        None package_type PackageType

        Filter registry packages by type.

        None order_by PackageOrder

        Ordering of the returned packages.

        {'field': 'CREATED_AT', 'direction': 'DESC'} return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_packages(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    names: Iterable[str] = None,\n    repository_id: str = None,\n    package_type: graphql_schema.PackageType = None,\n    order_by: graphql_schema.PackageOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of packages under the owner.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        names: Find packages by their names.\n        repository_id: Find packages in a repository by ID.\n        package_type: Filter registry package by type.\n        order_by: Ordering of the returned packages.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).packages(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            names=names,\n            repository_id=repository_id,\n            package_type=package_type,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"packages\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"packages\"]\n
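An illustrative call filtering packages by type; passing the enum value as the plain string "DOCKER" is an assumption about the generated schema, and the login and token are placeholders.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization_packages


@flow
async def list_container_packages():
    credentials = GitHubCredentials(token="ghp_example_token")  # hypothetical token
    return await query_organization_packages(
        login="my-org",                # hypothetical organization login
        github_credentials=credentials,
        first=10,
        package_type="DOCKER",         # assumed to be accepted as a plain enum string
    )
```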
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_pending_members","title":"query_organization_pending_members async","text":"

        A list of users who have been invited to join this organization.

        Parameters:

        Name Type Description Default login str

        The organization's login.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required after str

        Returns the elements in the list that come after the specified cursor.

        None before str

        Returns the elements in the list that come before the specified cursor.

        None first int

        Returns the first n elements from the list.

        None last int

        Returns the last n elements from the list.

        None return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_pending_members(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of users who have been invited to join this organization.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).pending_members(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"pendingMembers\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"pendingMembers\"]\n
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_pinnable_items","title":"query_organization_pinnable_items async","text":"

        A list of repositories and gists this profile owner can pin to their profile.

        Parameters:

        Name Type Description Default login str

        The organization's login.

        required types Iterable[PinnableItemType]

        Filter the types of pinnable items that are returned.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required after str

        Returns the elements in the list that come after the specified cursor.

        None before str

        Returns the elements in the list that come before the specified cursor.

        None first int

        Returns the first n elements from the list.

        None last int

        Returns the last n elements from the list.

        None return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_pinnable_items(  # noqa\n    login: str,\n    types: Iterable[graphql_schema.PinnableItemType],\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories and gists this profile owner can pin to their profile.\n\n    Args:\n        login: The organization's login.\n        types: Filter the types of pinnable items that are\n            returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).pinnable_items(\n        **strip_kwargs(\n            types=types,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"pinnableItems\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"pinnableItems\"]\n
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_pinned_items","title":"query_organization_pinned_items async","text":"

        A list of repositories and gists this profile owner has pinned to their profile.

        Parameters:

        Name Type Description Default login str

        The organization's login.

        required types Iterable[PinnableItemType]

        Filter the types of pinned items that are returned.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required after str

        Returns the elements in the list that come after the specified cursor.

        None before str

        Returns the elements in the list that come before the specified cursor.

        None first int

        Returns the first n elements from the list.

        None last int

        Returns the last n elements from the list.

        None return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_pinned_items(  # noqa\n    login: str,\n    types: Iterable[graphql_schema.PinnableItemType],\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories and gists this profile owner has pinned to their profile.\n\n    Args:\n        login: The organization's login.\n        types: Filter the types of pinned items that are returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).pinned_items(\n        **strip_kwargs(\n            types=types,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"pinnedItems\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"pinnedItems\"]\n
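A sketch requesting pinned repositories; supplying the pinnable item type as the string "REPOSITORY" is an assumption about how the generated schema accepts enum values, and the login and token are placeholders.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization_pinned_items


@flow
async def pinned_repositories():
    credentials = GitHubCredentials(token="ghp_example_token")  # hypothetical token
    return await query_organization_pinned_items(
        login="my-org",          # hypothetical organization login
        types=["REPOSITORY"],    # assumed plain-string enum value
        github_credentials=credentials,
        first=6,
    )
```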
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_project","title":"query_organization_project async","text":"

        Find a project by number.

        Parameters:

        Name Type Description Default login str

        The organization's login.

        required number int

        The project number to find.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_project(  # noqa\n    login: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find project by number.\n\n    Args:\n        login: The organization's login.\n        number: The project number to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).project(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"project\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"project\"]\n
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_project_next","title":"query_organization_project_next async","text":"

        Find a project by project (beta) number.

        Parameters:

        Name Type Description Default login str

        The organization's login.

        required number int

        The project (beta) number.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_project_next(  # noqa\n    login: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find a project by project (beta) number.\n\n    Args:\n        login: The organization's login.\n        number: The project (beta) number.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).project_next(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"projectNext\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"projectNext\"]\n
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_project_v2","title":"query_organization_project_v2 async","text":"

        Find a project by number.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| number | int | The project number. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_project_v2(  # noqa\n    login: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find a project by number.\n\n    Args:\n        login: The organization's login.\n        number: The project number.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).project_v2(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"projectV2\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"projectV2\"]\n
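A minimal usage sketch for query_organization_project_v2 (not part of the upstream reference), assuming a GitHubCredentials block previously saved under the name "github-token" and placeholder values for the organization login and project number:

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization_project_v2


@flow
async def read_org_project_v2():
    # "github-token" is an assumed name for a previously saved GitHubCredentials block
    github_credentials = await GitHubCredentials.load("github-token")
    # "my-org" and the project number are placeholders
    project = await query_organization_project_v2(
        login="my-org",
        number=1,
        github_credentials=github_credentials,
    )
    return project
```

Because the task is async, it is awaited inside an async flow; pass return_fields to trim the response to specific snake_case fields.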
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_projects","title":"query_organization_projects async","text":"

        A list of projects under the owner.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| states | Iterable[ProjectState] | A list of states to filter the projects by. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| order_by | ProjectOrder | Ordering options for projects returned from the connection. | None |
| search | str | Query to search projects by, currently only searching by name. | None |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_projects(  # noqa\n    login: str,\n    states: Iterable[graphql_schema.ProjectState],\n    github_credentials: GitHubCredentials,\n    order_by: graphql_schema.ProjectOrder = None,\n    search: str = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of projects under the owner.\n\n    Args:\n        login: The organization's login.\n        states: A list of states to filter the projects by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        order_by: Ordering options for projects returned from the\n            connection.\n        search: Query to search projects by, currently only searching\n            by name.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).projects(\n        **strip_kwargs(\n            states=states,\n            order_by=order_by,\n            search=search,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"projects\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"projects\"]\n
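A usage sketch for query_organization_projects (an illustration, not from the upstream reference); the required states argument takes ProjectState enum values, and the login, block name, and page size below are placeholders:

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization_projects


@flow
async def list_open_projects():
    github_credentials = await GitHubCredentials.load("github-token")  # assumed block name
    projects = await query_organization_projects(
        login="my-org",          # placeholder organization login
        states=["OPEN"],         # ProjectState values, e.g. "OPEN" or "CLOSED"
        github_credentials=github_credentials,
        first=10,                # page size; combine with `after` to paginate
    )
    return projects
```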
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_projects_next","title":"query_organization_projects_next async","text":"

        A list of projects (beta) under the owner.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| query | str | A project (beta) to search for under the owner. | None |
| sort_by | ProjectNextOrderField | How to order the returned projects (beta). | 'TITLE' |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_projects_next(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    query: str = None,\n    sort_by: graphql_schema.ProjectNextOrderField = \"TITLE\",\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of projects (beta) under the owner.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        query: A project (beta) to search for under the the owner.\n        sort_by: How to order the returned projects (beta).\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).projects_next(\n        **strip_kwargs(\n            query=query,\n            sort_by=sort_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"projectsNext\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"projectsNext\"]\n
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_projects_v2","title":"query_organization_projects_v2 async","text":"

        A list of projects under the owner.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| query | str | A project to search for under the owner. | None |
| order_by | ProjectV2Order | How to order the returned projects. | {'field': 'NUMBER', 'direction': 'DESC'} |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_projects_v2(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    query: str = None,\n    order_by: graphql_schema.ProjectV2Order = {\"field\": \"NUMBER\", \"direction\": \"DESC\"},\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of projects under the owner.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        query: A project to search for under the the owner.\n        order_by: How to order the returned projects.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).projects_v2(\n        **strip_kwargs(\n            query=query,\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"projectsV2\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"projectsV2\"]\n
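A sketch of query_organization_projects_v2 that overrides the default NUMBER/DESC ordering; the block name, login, and search term are assumptions made for illustration:

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization_projects_v2


@flow
async def list_projects_v2():
    github_credentials = await GitHubCredentials.load("github-token")  # assumed block name
    projects = await query_organization_projects_v2(
        login="my-org",                                      # placeholder login
        github_credentials=github_credentials,
        query="roadmap",                                     # assumed search term
        order_by={"field": "NUMBER", "direction": "ASC"},    # overrides the DESC default
        first=5,
    )
    return projects
```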
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_recent_projects","title":"query_organization_recent_projects async","text":"

        Recent projects that this user has modified in the context of the owner.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_recent_projects(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Recent projects that this user has modified in the context of the owner.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).recent_projects(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"recentProjects\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"recentProjects\"]\n
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_repositories","title":"query_organization_repositories async","text":"

        A list of repositories that the user owns.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| privacy | RepositoryPrivacy | If non-null, filters repositories according to privacy. | None |
| order_by | RepositoryOrder | Ordering options for repositories returned from the connection. | None |
| affiliations | Iterable[RepositoryAffiliation] | Array of viewer's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the current viewer owns. | None |
| owner_affiliations | Iterable[RepositoryAffiliation] | Array of owner's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the organization or user being viewed owns. | ('OWNER', 'COLLABORATOR') |
| is_locked | bool | If non-null, filters repositories according to whether they have been locked. | None |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| is_fork | bool | If non-null, filters repositories according to whether they are forks of another repository. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_repositories(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    privacy: graphql_schema.RepositoryPrivacy = None,\n    order_by: graphql_schema.RepositoryOrder = None,\n    affiliations: Iterable[graphql_schema.RepositoryAffiliation] = None,\n    owner_affiliations: Iterable[graphql_schema.RepositoryAffiliation] = (\n        \"OWNER\",\n        \"COLLABORATOR\",\n    ),\n    is_locked: bool = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    is_fork: bool = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories that the user owns.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        privacy: If non-null, filters repositories according to\n            privacy.\n        order_by: Ordering options for repositories returned from\n            the connection.\n        affiliations: Array of viewer's affiliation options for\n            repositories returned from the connection. For example,\n            OWNER will include only repositories that the current viewer\n            owns.\n        owner_affiliations: Array of owner's affiliation options\n            for repositories returned from the connection. For example,\n            OWNER will include only repositories that the organization\n            or user being viewed owns.\n        is_locked: If non-null, filters repositories according to\n            whether they have been locked.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        is_fork: If non-null, filters repositories according to\n            whether they are forks of another repository.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repositories(\n        **strip_kwargs(\n            privacy=privacy,\n            order_by=order_by,\n            affiliations=affiliations,\n            owner_affiliations=owner_affiliations,\n            is_locked=is_locked,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            is_fork=is_fork,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"repositories\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"repositories\"]\n
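A usage sketch for query_organization_repositories (illustrative only); the privacy and order_by values shown are standard GitHub GraphQL enum values, and the block name and login are placeholders:

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization_repositories


@flow
async def list_public_repositories():
    github_credentials = await GitHubCredentials.load("github-token")  # assumed block name
    repositories = await query_organization_repositories(
        login="my-org",                                        # placeholder login
        github_credentials=github_credentials,
        privacy="PUBLIC",                                      # RepositoryPrivacy value
        order_by={"field": "PUSHED_AT", "direction": "DESC"},  # RepositoryOrder input
        first=20,
    )
    return repositories
```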
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_repository","title":"query_organization_repository async","text":"

        Find Repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| name | str | Name of Repository to find. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_repository(  # noqa\n    login: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find Repository.\n\n    Args:\n        login: The organization's login.\n        name: Name of Repository to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repository(\n        **strip_kwargs(\n            name=name,\n            follow_renames=follow_renames,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"repository\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"repository\"]\n
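A sketch of query_organization_repository that also subsets the response with return_fields; the block name and the specific field names are assumptions for illustration:

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization_repository


@flow
async def read_repository():
    github_credentials = await GitHubCredentials.load("github-token")  # assumed block name
    repository = await query_organization_repository(
        login="PrefectHQ",   # example organization login
        name="prefect",      # example repository name
        github_credentials=github_credentials,
        # illustrative snake_case field names; defaults come from configs/query/*.json
        return_fields=["name", "url", "description"],
    )
    return repository
```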
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_repository_discussion_comments","title":"query_organization_repository_discussion_comments async","text":"

        Discussion comments this user has authored.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| repository_id | str | Filter discussion comments to only those in a specific repository. | None |
| only_answers | bool | Filter discussion comments to only those that were marked as the answer. | False |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_repository_discussion_comments(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    repository_id: str = None,\n    only_answers: bool = False,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Discussion comments this user has authored.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list\n            that come after the specified cursor.\n        before: Returns the elements in the list\n            that come before the specified cursor.\n        first: Returns the first _n_ elements\n            from the list.\n        last: Returns the last _n_ elements from\n            the list.\n        repository_id: Filter discussion comments\n            to only those in a specific repository.\n        only_answers: Filter discussion comments\n            to only those that were marked as the answer.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repository_discussion_comments(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            repository_id=repository_id,\n            only_answers=only_answers,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"repositoryDiscussionComments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"repositoryDiscussionComments\"]\n
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_repository_discussions","title":"query_organization_repository_discussions async","text":"

        Discussions this user has started.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| order_by | DiscussionOrder | Ordering options for discussions returned from the connection. | {'field': 'CREATED_AT', 'direction': 'DESC'} |
| repository_id | str | Filter discussions to only those in a specific repository. | None |
| answered | bool | Filter discussions to only those that have been answered or not. Defaults to including both answered and unanswered discussions. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_repository_discussions(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.DiscussionOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    repository_id: str = None,\n    answered: bool = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Discussions this user has started.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the\n            list.\n        order_by: Ordering options for discussions\n            returned from the connection.\n        repository_id: Filter discussions to only those\n            in a specific repository.\n        answered: Filter discussions to only those that\n            have been answered or not. Defaults to including both\n            answered and unanswered discussions.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repository_discussions(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n            repository_id=repository_id,\n            answered=answered,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"repositoryDiscussions\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"repositoryDiscussions\"]\n
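A sketch of query_organization_repository_discussions that keeps the default CREATED_AT/DESC ordering and filters to unanswered discussions; the block name and login are placeholders:

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization_repository_discussions


@flow
async def list_unanswered_discussions():
    github_credentials = await GitHubCredentials.load("github-token")  # assumed block name
    discussions = await query_organization_repository_discussions(
        login="my-org",            # placeholder login
        github_credentials=github_credentials,
        answered=False,            # only discussions without an accepted answer
        first=10,
    )
    return discussions
```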
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_repository_migrations","title":"query_organization_repository_migrations async","text":"

        A list of all repository migrations for this organization.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| state | MigrationState | Filter repository migrations by state. | None |
| repository_name | str | Filter repository migrations by repository name. | None |
| order_by | RepositoryMigrationOrder | Ordering options for repository migrations returned. | {'field': 'CREATED_AT', 'direction': 'ASC'} |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_repository_migrations(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    state: graphql_schema.MigrationState = None,\n    repository_name: str = None,\n    order_by: graphql_schema.RepositoryMigrationOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"ASC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of all repository migrations for this organization.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the list.\n        state: Filter repository migrations by state.\n        repository_name: Filter repository migrations by\n            repository name.\n        order_by: Ordering options for repository\n            migrations returned.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repository_migrations(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            state=state,\n            repository_name=repository_name,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"repositoryMigrations\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"repositoryMigrations\"]\n
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_saml_identity_provider","title":"query_organization_saml_identity_provider async","text":"

        The Organization's SAML identity providers.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_saml_identity_provider(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The Organization's SAML identity providers.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).saml_identity_provider(**strip_kwargs())\n\n    op_stack = (\n        \"organization\",\n        \"samlIdentityProvider\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"samlIdentityProvider\"]\n
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_sponsoring","title":"query_organization_sponsoring async","text":"

        List of users and organizations this entity is sponsoring.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| order_by | SponsorOrder | Ordering options for the users and organizations returned from the connection. | {'field': 'RELEVANCE', 'direction': 'DESC'} |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_sponsoring(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.SponsorOrder = {\"field\": \"RELEVANCE\", \"direction\": \"DESC\"},\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of users and organizations this entity is sponsoring.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        order_by: Ordering options for the users and organizations\n            returned from the connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsoring(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"sponsoring\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"sponsoring\"]\n
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_sponsors","title":"query_organization_sponsors async","text":"

        List of sponsors for this user or organization.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| tier_id | str | If given, will filter for sponsors at the given tier. Will only return sponsors whose tier the viewer is permitted to see. | None |
| order_by | SponsorOrder | Ordering options for sponsors returned from the connection. | {'field': 'RELEVANCE', 'direction': 'DESC'} |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_sponsors(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    tier_id: str = None,\n    order_by: graphql_schema.SponsorOrder = {\"field\": \"RELEVANCE\", \"direction\": \"DESC\"},\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of sponsors for this user or organization.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        tier_id: If given, will filter for sponsors at the given tier.\n            Will only return sponsors whose tier the viewer is permitted\n            to see.\n        order_by: Ordering options for sponsors returned from the\n            connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsors(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            tier_id=tier_id,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"sponsors\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"sponsors\"]\n
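A minimal sketch of query_organization_sponsors that keeps the default RELEVANCE/DESC ordering; the block name and login are placeholders:

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization_sponsors


@flow
async def list_sponsors():
    github_credentials = await GitHubCredentials.load("github-token")  # assumed block name
    sponsors = await query_organization_sponsors(
        login="my-org",    # placeholder login
        github_credentials=github_credentials,
        first=10,
    )
    return sponsors
```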
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_sponsors_activities","title":"query_organization_sponsors_activities async","text":"

        Events involving this sponsorable, such as new sponsorships.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| actions | Iterable[SponsorsActivityAction] | Filter activities to only the specified actions. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| period | SponsorsActivityPeriod | Filter activities returned to only those that occurred in the most recent specified time period. Set to ALL to avoid filtering by when the activity occurred. | 'MONTH' |
| order_by | SponsorsActivityOrder | Ordering options for activity returned from the connection. | {'field': 'TIMESTAMP', 'direction': 'DESC'} |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_sponsors_activities(  # noqa\n    login: str,\n    actions: Iterable[graphql_schema.SponsorsActivityAction],\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    period: graphql_schema.SponsorsActivityPeriod = \"MONTH\",\n    order_by: graphql_schema.SponsorsActivityOrder = {\n        \"field\": \"TIMESTAMP\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Events involving this sponsorable, such as new sponsorships.\n\n    Args:\n        login: The organization's login.\n        actions: Filter activities to only the specified\n            actions.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        period: Filter activities returned to only those\n            that occurred in the most recent specified time period. Set\n            to ALL to avoid filtering by when the activity occurred.\n        order_by: Ordering options for activity returned\n            from the connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsors_activities(\n        **strip_kwargs(\n            actions=actions,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            period=period,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"sponsorsActivities\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"sponsorsActivities\"]\n
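A sketch of query_organization_sponsors_activities; the required actions argument takes SponsorsActivityAction enum values, and the specific value, block name, and login below are assumptions for illustration:

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.organization import query_organization_sponsors_activities


@flow
async def recent_sponsorship_activity():
    github_credentials = await GitHubCredentials.load("github-token")  # assumed block name
    activities = await query_organization_sponsors_activities(
        login="my-org",                # placeholder login
        actions=["NEW_SPONSORSHIP"],   # SponsorsActivityAction values
        github_credentials=github_credentials,
        period="MONTH",                # or "ALL" to disable the time filter
    )
    return activities
```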
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_sponsors_listing","title":"query_organization_sponsors_listing async","text":"

        The GitHub Sponsors listing for this user or organization.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_sponsors_listing(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The GitHub Sponsors listing for this user or organization.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsors_listing(**strip_kwargs())\n\n    op_stack = (\n        \"organization\",\n        \"sponsorsListing\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"sponsorsListing\"]\n
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_sponsorship_for_viewer_as_sponsor","title":"query_organization_sponsorship_for_viewer_as_sponsor async","text":"

        The sponsorship from the viewer to this user/organization; that is, the sponsorship where you're the sponsor. Only returns a sponsorship if it is active.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_sponsorship_for_viewer_as_sponsor(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The sponsorship from the viewer to this user/organization; that is, the\n    sponsorship where you're the sponsor. Only returns a sponsorship if it is\n    active.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsorship_for_viewer_as_sponsor(**strip_kwargs())\n\n    op_stack = (\n        \"organization\",\n        \"sponsorshipForViewerAsSponsor\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"sponsorshipForViewerAsSponsor\"]\n
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_sponsorship_for_viewer_as_sponsorable","title":"query_organization_sponsorship_for_viewer_as_sponsorable async","text":"

        The sponsorship from this user/organization to the viewer; that is, the sponsorship you're receiving. Only returns a sponsorship if it is active.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_sponsorship_for_viewer_as_sponsorable(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The sponsorship from this user/organization to the viewer; that is, the\n    sponsorship you're receiving. Only returns a sponsorship if it is active.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsorship_for_viewer_as_sponsorable(**strip_kwargs())\n\n    op_stack = (\n        \"organization\",\n        \"sponsorshipForViewerAsSponsorable\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"sponsorshipForViewerAsSponsorable\"]\n
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_sponsorship_newsletters","title":"query_organization_sponsorship_newsletters async","text":"

        List of sponsorship updates sent from this sponsorable to sponsors.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| order_by | SponsorshipNewsletterOrder | Ordering options for sponsorship updates returned from the connection. | {'field': 'CREATED_AT', 'direction': 'DESC'} |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_sponsorship_newsletters(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.SponsorshipNewsletterOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of sponsorship updates sent from this sponsorable to sponsors.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the\n            list.\n        order_by: Ordering options for sponsorship\n            updates returned from the connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsorship_newsletters(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"sponsorshipNewsletters\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"sponsorshipNewsletters\"]\n
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_sponsorships_as_maintainer","title":"query_organization_sponsorships_as_maintainer async","text":"

        This object's sponsorships as the maintainer.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The organization's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| include_private | bool | Whether or not to include private sponsorships in the result set. | False |
| order_by | SponsorshipOrder | Ordering options for sponsorships returned from this connection. If left blank, the sponsorships will be ordered based on relevancy to the viewer. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_sponsorships_as_maintainer(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    include_private: bool = False,\n    order_by: graphql_schema.SponsorshipOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    This object's sponsorships as the maintainer.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from\n            the list.\n        last: Returns the last _n_ elements from the\n            list.\n        include_private: Whether or not to include\n            private sponsorships in the result set.\n        order_by: Ordering options for sponsorships\n            returned from this connection. If left blank, the\n            sponsorships will be ordered based on relevancy to the\n            viewer.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsorships_as_maintainer(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            include_private=include_private,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"sponsorshipsAsMaintainer\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"sponsorshipsAsMaintainer\"]\n
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_sponsorships_as_sponsor","title":"query_organization_sponsorships_as_sponsor async","text":"

        This object's sponsorships as the sponsor.

        Parameters:

        login (str): The organization's login. Required.
        github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
        after (str): Returns the elements in the list that come after the specified cursor. Default: None.
        before (str): Returns the elements in the list that come before the specified cursor. Default: None.
        first (int): Returns the first n elements from the list. Default: None.
        last (int): Returns the last n elements from the list. Default: None.
        order_by (SponsorshipOrder): Ordering options for sponsorships returned from this connection. If left blank, the sponsorships will be ordered based on relevancy to the viewer. Default: None.
        return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

        Returns:

        Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_sponsorships_as_sponsor(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.SponsorshipOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    This object's sponsorships as the sponsor.\n\n    Args:\n        login: The organization's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the\n            list.\n        order_by: Ordering options for sponsorships\n            returned from this connection. If left blank, the\n            sponsorships will be ordered based on relevancy to the\n            viewer.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsorships_as_sponsor(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"sponsorshipsAsSponsor\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"sponsorshipsAsSponsor\"]\n
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_team","title":"query_organization_team async","text":"

        Find an organization's team by its slug.

        Parameters:

        login (str): The organization's login. Required.
        slug (str): The name or slug of the team to find. Required.
        github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
        return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

        Returns:

        Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_team(  # noqa\n    login: str,\n    slug: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find an organization's team by its slug.\n\n    Args:\n        login: The organization's login.\n        slug: The name or slug of the team to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).team(\n        **strip_kwargs(\n            slug=slug,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"team\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"team\"]\n
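        A minimal sketch of looking up a team and trimming the response with return_fields. The login, slug, token, and field names below are placeholders; valid snake_case field names depend on the GraphQL schema and configs/query/*.json.

        from prefect import flow
        from prefect_github import GitHubCredentials
        from prefect_github.organization import query_organization_team

        @flow
        async def lookup_team():
            creds = GitHubCredentials(token="ghp_example")  # placeholder token
            return await query_organization_team(
                login="example-org",   # placeholder organization login
                slug="platform-team",  # placeholder team slug
                github_credentials=creds,
                return_fields=["name", "description"],  # assumed snake_case field names
            )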
        "},{"location":"integrations/prefect-github/organization/#prefect_github.organization.query_organization_teams","title":"query_organization_teams async","text":"

        A list of teams in this organization.

        Parameters:

        login (str): The organization's login. Required.
        user_logins (Iterable[str]): User logins to filter by. Required.
        github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
        privacy (TeamPrivacy): If non-null, filters teams according to privacy. Default: None.
        role (TeamRole): If non-null, filters teams according to whether the viewer is an admin or member on team. Default: None.
        query (str): If non-null, filters teams with query on team name and team slug. Default: None.
        order_by (TeamOrder): Ordering options for teams returned from the connection. Default: None.
        ldap_mapped (bool): If true, filters teams that are mapped to an LDAP Group (Enterprise only). Default: None.
        root_teams_only (bool): If true, restrict to only root teams. Default: False.
        after (str): Returns the elements in the list that come after the specified cursor. Default: None.
        before (str): Returns the elements in the list that come before the specified cursor. Default: None.
        first (int): Returns the first n elements from the list. Default: None.
        last (int): Returns the last n elements from the list. Default: None.
        return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

        Returns:

        Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/organization.py
        @task\nasync def query_organization_teams(  # noqa\n    login: str,\n    user_logins: Iterable[str],\n    github_credentials: GitHubCredentials,\n    privacy: graphql_schema.TeamPrivacy = None,\n    role: graphql_schema.TeamRole = None,\n    query: str = None,\n    order_by: graphql_schema.TeamOrder = None,\n    ldap_mapped: bool = None,\n    root_teams_only: bool = False,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of teams in this organization.\n\n    Args:\n        login: The organization's login.\n        user_logins: User logins to filter by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        privacy: If non-null, filters teams according to privacy.\n        role: If non-null, filters teams according to whether the viewer\n            is an admin or member on team.\n        query: If non-null, filters teams with query on team name and team\n            slug.\n        order_by: Ordering options for teams returned from the connection.\n        ldap_mapped: If true, filters teams that are mapped to an LDAP\n            Group (Enterprise only).\n        root_teams_only: If true, restrict to only root teams.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.organization(\n        **strip_kwargs(\n            login=login,\n        )\n    ).teams(\n        **strip_kwargs(\n            user_logins=user_logins,\n            privacy=privacy,\n            role=role,\n            query=query,\n            order_by=order_by,\n            ldap_mapped=ldap_mapped,\n            root_teams_only=root_teams_only,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"organization\",\n        \"teams\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"organization\"][\"teams\"]\n
        "},{"location":"integrations/prefect-github/repository/","title":"Repository","text":""},{"location":"integrations/prefect-github/repository/#prefect_github.repository","title":"prefect_github.repository","text":"

        This module contains the GitHub query_repository* tasks and the GitHub storage block.

        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.GitHubRepository","title":"GitHubRepository","text":"

        Bases: ReadableDeploymentStorage

        Interact with files stored on GitHub repositories.

        Source code in prefect_github/repository.py
        class GitHubRepository(ReadableDeploymentStorage):\n    \"\"\"\n    Interact with files stored on GitHub repositories.\n    \"\"\"\n\n    _block_type_name = \"GitHub Repository\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/41971cfecfea5f79ff334164f06ecb34d1038dd4-250x250.png\"  # noqa: E501\n    _documentation_url = \"https://prefecthq.github.io/prefect-github/repository/#prefect_github.repository.GitHubRepository\"  # noqa\n\n    repository_url: str = Field(\n        default=...,\n        title=\"Repository URL\",\n        description=(\n            \"The URL of a GitHub repository to read from, in either HTTPS or SSH \"\n            \"format. If you are using a private repo, it must be in the HTTPS format.\"\n        ),\n    )\n    reference: Optional[str] = Field(\n        default=None,\n        description=\"An optional reference to pin to; can be a branch name or tag.\",\n    )\n    credentials: Optional[GitHubCredentials] = Field(\n        default=None,\n        description=\"An optional GitHubCredentials block for using private GitHub repos.\",  # noqa: E501\n    )\n\n    @validator(\"credentials\")\n    def _ensure_credentials_go_with_https(cls, v: str, values: dict):\n        \"\"\"Ensure that credentials are not provided with 'SSH' formatted GitHub URLs.\"\"\"\n        if v is not None:\n            if urlparse(values[\"repository_url\"]).scheme != \"https\":\n                raise InvalidRepositoryURLError(\n                    (\n                        \"Crendentials can only be used with GitHub repositories \"\n                        \"using the 'HTTPS' format. You must either remove the \"\n                        \"credential if you wish to use the 'SSH' format and are not \"\n                        \"using a private repository, or you must change the repository \"\n                        \"url to the 'HTTPS' format. 
\"\n                    )\n                )\n\n        return v\n\n    def _create_repo_url(self) -> str:\n        \"\"\"Format the URL provided to the `git clone` command.\n\n        For private repos: https://<oauth-key>@github.com/<username>/<repo>.git\n        All other repos should be the same as `self.repository`.\n        \"\"\"\n        url_components = urlparse(self.repository_url)\n        if url_components.scheme == \"https\" and self.credentials is not None:\n            token_value = self.credentials.token.get_secret_value()\n            updated_components = url_components._replace(\n                netloc=f\"{token_value}@{url_components.netloc}\"\n            )\n            full_url = urlunparse(updated_components)\n        else:\n            full_url = self.repository_url\n\n        return full_url\n\n    @staticmethod\n    def _get_paths(\n        dst_dir: Union[str, None], src_dir: str, sub_directory: str\n    ) -> Tuple[str, str]:\n        \"\"\"Returns the fully formed paths for GitHubRepository contents in the form\n        (content_source, content_destination).\n        \"\"\"\n        if dst_dir is None:\n            content_destination = Path(\".\").absolute()\n        else:\n            content_destination = Path(dst_dir)\n\n        content_source = Path(src_dir)\n\n        if sub_directory:\n            content_destination = content_destination.joinpath(sub_directory)\n            content_source = content_source.joinpath(sub_directory)\n\n        return str(content_source), str(content_destination)\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> None:\n        \"\"\"\n        Clones a GitHub project specified in `from_path` to the provided `local_path`;\n        defaults to cloning the repository reference configured on the Block to the\n        present working directory.\n\n        Args:\n            from_path: If provided, interpreted as a subdirectory of the underlying\n                repository that will be copied to the provided local path.\n            local_path: A local path to clone to; defaults to present working directory.\n        \"\"\"\n        # CONSTRUCT COMMAND\n        cmd = f\"git clone {self._create_repo_url()}\"\n        if self.reference:\n            cmd += f\" -b {self.reference}\"\n\n        # Limit git history\n        cmd += \" --depth 1\"\n\n        # Clone to a temporary directory and move the subdirectory over\n        with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n            tmp_path_str = tmp_dir\n            cmd += f\" {tmp_path_str}\"\n            cmd = shlex.split(cmd)\n\n            err_stream = io.StringIO()\n            out_stream = io.StringIO()\n            process = await run_process(cmd, stream_output=(out_stream, err_stream))\n            if process.returncode != 0:\n                err_stream.seek(0)\n                raise RuntimeError(f\"Failed to pull from remote:\\n {err_stream.read()}\")\n\n            content_source, content_destination = self._get_paths(\n                dst_dir=local_path, src_dir=tmp_path_str, sub_directory=from_path\n            )\n\n            copy_tree(src=content_source, dst=content_destination)\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.GitHubRepository.get_directory","title":"get_directory async","text":"

        Clones a GitHub project specified in from_path to the provided local_path; defaults to cloning the repository reference configured on the Block to the present working directory.

        Parameters:

        from_path (Optional[str]): If provided, interpreted as a subdirectory of the underlying repository that will be copied to the provided local path. Default: None.
        local_path (Optional[str]): A local path to clone to; defaults to present working directory. Default: None.

        Source code in prefect_github/repository.py
        @sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> None:\n    \"\"\"\n    Clones a GitHub project specified in `from_path` to the provided `local_path`;\n    defaults to cloning the repository reference configured on the Block to the\n    present working directory.\n\n    Args:\n        from_path: If provided, interpreted as a subdirectory of the underlying\n            repository that will be copied to the provided local path.\n        local_path: A local path to clone to; defaults to present working directory.\n    \"\"\"\n    # CONSTRUCT COMMAND\n    cmd = f\"git clone {self._create_repo_url()}\"\n    if self.reference:\n        cmd += f\" -b {self.reference}\"\n\n    # Limit git history\n    cmd += \" --depth 1\"\n\n    # Clone to a temporary directory and move the subdirectory over\n    with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n        tmp_path_str = tmp_dir\n        cmd += f\" {tmp_path_str}\"\n        cmd = shlex.split(cmd)\n\n        err_stream = io.StringIO()\n        out_stream = io.StringIO()\n        process = await run_process(cmd, stream_output=(out_stream, err_stream))\n        if process.returncode != 0:\n            err_stream.seek(0)\n            raise RuntimeError(f\"Failed to pull from remote:\\n {err_stream.read()}\")\n\n        content_source, content_destination = self._get_paths(\n            dst_dir=local_path, src_dir=tmp_path_str, sub_directory=from_path\n        )\n\n        copy_tree(src=content_source, dst=content_destination)\n
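        A short usage sketch, assuming a block named "example-repo" was saved earlier; because get_directory is @sync_compatible it can be called without await outside an async context.

        from prefect_github.repository import GitHubRepository

        repo_block = GitHubRepository.load("example-repo")  # hypothetical block name
        repo_block.get_directory(from_path="docs", local_path="./docs-copy")  # placeholder paths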
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository","title":"query_repository async","text":"

        The query root of GitHub's GraphQL interface.

        Parameters:

        owner (str): The login field of a user or organization. Required.
        name (str): The name of the repository. Required.
        github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
        follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
        return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

        Returns:

        Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The query root of GitHub's GraphQL interface.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a repository\n            referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    )\n\n    op_stack = (\"repository\",)\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"]\n
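        A minimal sketch of running this task inside a flow; the token is a placeholder and GitHubCredentials is assumed to be importable from prefect_github.

        from prefect import flow
        from prefect_github import GitHubCredentials
        from prefect_github.repository import query_repository

        @flow
        async def repo_overview():
            creds = GitHubCredentials(token="ghp_example")  # placeholder token
            return await query_repository(
                owner="PrefectHQ",
                name="prefect",
                github_credentials=creds,
            )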
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_assignable_users","title":"query_repository_assignable_users async","text":"

        A list of users that can be assigned to issues in this repository.

        Parameters:

        owner (str): The login field of a user or organization. Required.
        name (str): The name of the repository. Required.
        github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
        follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
        query (str): Filters users with query on user name and login. Default: None.
        after (str): Returns the elements in the list that come after the specified cursor. Default: None.
        before (str): Returns the elements in the list that come before the specified cursor. Default: None.
        first (int): Returns the first n elements from the list. Default: None.
        last (int): Returns the last n elements from the list. Default: None.
        return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

        Returns:

        Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_assignable_users(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    query: str = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of users that can be assigned to issues in this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        query: Filters users with query on user name and login.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).assignable_users(\n        **strip_kwargs(\n            query=query,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"assignableUsers\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"assignableUsers\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_branch_protection_rules","title":"query_repository_branch_protection_rules async","text":"

        A list of branch protection rules for this repository.

        Parameters:

        owner (str): The login field of a user or organization. Required.
        name (str): The name of the repository. Required.
        github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
        follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
        after (str): Returns the elements in the list that come after the specified cursor. Default: None.
        before (str): Returns the elements in the list that come before the specified cursor. Default: None.
        first (int): Returns the first n elements from the list. Default: None.
        last (int): Returns the last n elements from the list. Default: None.
        return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

        Returns:

        Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_branch_protection_rules(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of branch protection rules for this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the\n            list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).branch_protection_rules(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"branchProtectionRules\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"branchProtectionRules\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_code_of_conduct","title":"query_repository_code_of_conduct async","text":"

        Returns the code of conduct for this repository.

        Parameters:

        owner (str): The login field of a user or organization. Required.
        name (str): The name of the repository. Required.
        github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
        follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
        return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

        Returns:

        Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_code_of_conduct(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns the code of conduct for this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).code_of_conduct(**strip_kwargs())\n\n    op_stack = (\n        \"repository\",\n        \"codeOfConduct\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"codeOfConduct\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_collaborators","title":"query_repository_collaborators async","text":"

        A list of collaborators associated with the repository.

        Parameters:

        owner (str): The login field of a user or organization. Required.
        name (str): The name of the repository. Required.
        github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
        follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
        affiliation (CollaboratorAffiliation): Collaborators affiliation level with a repository. Default: None.
        query (str): Filters users with query on user name and login. Default: None.
        after (str): Returns the elements in the list that come after the specified cursor. Default: None.
        before (str): Returns the elements in the list that come before the specified cursor. Default: None.
        first (int): Returns the first n elements from the list. Default: None.
        last (int): Returns the last n elements from the list. Default: None.
        return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

        Returns:

        Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_collaborators(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    affiliation: graphql_schema.CollaboratorAffiliation = None,\n    query: str = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of collaborators associated with the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        affiliation: Collaborators affiliation level with a\n            repository.\n        query: Filters users with query on user name and login.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).collaborators(\n        **strip_kwargs(\n            affiliation=affiliation,\n            query=query,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"collaborators\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"collaborators\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_commit_comments","title":"query_repository_commit_comments async","text":"

        A list of commit comments associated with the repository.

        Parameters:

        owner (str): The login field of a user or organization. Required.
        name (str): The name of the repository. Required.
        github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
        follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
        after (str): Returns the elements in the list that come after the specified cursor. Default: None.
        before (str): Returns the elements in the list that come before the specified cursor. Default: None.
        first (int): Returns the first n elements from the list. Default: None.
        last (int): Returns the last n elements from the list. Default: None.
        return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

        Returns:

        Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_commit_comments(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of commit comments associated with the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).commit_comments(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"commitComments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"commitComments\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_contact_links","title":"query_repository_contact_links async","text":"

        Returns a list of contact links associated with the repository.

        Parameters:

        owner (str): The login field of a user or organization. Required.
        name (str): The name of the repository. Required.
        github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
        follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
        return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

        Returns:

        Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_contact_links(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns a list of contact links associated to the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).contact_links(**strip_kwargs())\n\n    op_stack = (\n        \"repository\",\n        \"contactLinks\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"contactLinks\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_default_branch_ref","title":"query_repository_default_branch_ref async","text":"

        The Ref associated with the repository's default branch.

        Parameters:

        owner (str): The login field of a user or organization. Required.
        name (str): The name of the repository. Required.
        github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
        follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
        return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

        Returns:

        Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_default_branch_ref(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The Ref associated with the repository's default branch.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).default_branch_ref(**strip_kwargs())\n\n    op_stack = (\n        \"repository\",\n        \"defaultBranchRef\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"defaultBranchRef\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_deploy_keys","title":"query_repository_deploy_keys async","text":"

        A list of deploy keys that are on this repository.

        Parameters:

        owner (str): The login field of a user or organization. Required.
        name (str): The name of the repository. Required.
        github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
        follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
        after (str): Returns the elements in the list that come after the specified cursor. Default: None.
        before (str): Returns the elements in the list that come before the specified cursor. Default: None.
        first (int): Returns the first n elements from the list. Default: None.
        last (int): Returns the last n elements from the list. Default: None.
        return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

        Returns:

        Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_deploy_keys(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of deploy keys that are on this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).deploy_keys(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"deployKeys\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"deployKeys\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_deployments","title":"query_repository_deployments async","text":"

        Deployments associated with the repository.

        Parameters:

        owner (str): The login field of a user or organization. Required.
        name (str): The name of the repository. Required.
        environments (Iterable[str]): Environments to list deployments for. Required.
        github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
        follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
        order_by (DeploymentOrder): Ordering options for deployments returned from the connection. Default: {'field': 'CREATED_AT', 'direction': 'ASC'}.
        after (str): Returns the elements in the list that come after the specified cursor. Default: None.
        before (str): Returns the elements in the list that come before the specified cursor. Default: None.
        first (int): Returns the first n elements from the list. Default: None.
        last (int): Returns the last n elements from the list. Default: None.
        return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

        Returns:

        Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_deployments(  # noqa\n    owner: str,\n    name: str,\n    environments: Iterable[str],\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    order_by: graphql_schema.DeploymentOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"ASC\",\n    },\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Deployments associated with the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        environments: Environments to list deployments for.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        order_by: Ordering options for deployments returned from the\n            connection.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).deployments(\n        **strip_kwargs(\n            environments=environments,\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"deployments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"deployments\"]\n
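        A minimal sketch of fetching the most recent deployments for one environment; the owner, repository, environment name, and token are placeholders, and order_by flips the default ASC ordering.

        from prefect import flow
        from prefect_github import GitHubCredentials
        from prefect_github.repository import query_repository_deployments

        @flow
        async def latest_production_deployments():
            creds = GitHubCredentials(token="ghp_example")  # placeholder token
            return await query_repository_deployments(
                owner="example-org",          # placeholder owner
                name="example-repo",          # placeholder repository
                environments=["production"],  # placeholder environment name
                github_credentials=creds,
                order_by={"field": "CREATED_AT", "direction": "DESC"},  # newest first
                first=10,
            )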
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_discussion","title":"query_repository_discussion async","text":"

        Returns a single discussion from the current repository by number.

        Parameters:

        owner (str): The login field of a user or organization. Required.
        name (str): The name of the repository. Required.
        number (int): The number for the discussion to be returned. Required.
        github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
        follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
        return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

        Returns:

        Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_discussion(  # noqa\n    owner: str,\n    name: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns a single discussion from the current repository by number.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        number: The number for the discussion to be returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).discussion(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"discussion\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"discussion\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_discussion_categories","title":"query_repository_discussion_categories async","text":"

        A list of discussion categories that are available in the repository.

        Parameters:

        owner (str): The login field of a user or organization. Required.
        name (str): The name of the repository. Required.
        github_credentials (GitHubCredentials): Credentials to use for authentication with GitHub. Required.
        follow_renames (bool): Follow repository renames. If disabled, a repository referenced by its old name will return an error. Default: True.
        after (str): Returns the elements in the list that come after the specified cursor. Default: None.
        before (str): Returns the elements in the list that come before the specified cursor. Default: None.
        first (int): Returns the first n elements from the list. Default: None.
        last (int): Returns the last n elements from the list. Default: None.
        filter_by_assignable (bool): Filter by categories that are assignable by the viewer. Default: False.
        return_fields (Iterable[str]): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. Default: None.

        Returns:

        Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_discussion_categories(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    filter_by_assignable: bool = False,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of discussion categories that are available in the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the list.\n        filter_by_assignable: Filter by categories that\n            are assignable by the viewer.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).discussion_categories(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            filter_by_assignable=filter_by_assignable,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"discussionCategories\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"discussionCategories\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_discussion_category","title":"query_repository_discussion_category async","text":"

        A discussion category by slug.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| slug | str | The slug of the discussion category to be returned. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_discussion_category(  # noqa\n    owner: str,\n    name: str,\n    slug: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A discussion category by slug.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        slug: The slug of the discussion category to be\n            returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).discussion_category(\n        **strip_kwargs(\n            slug=slug,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"discussionCategory\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"discussionCategory\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_discussions","title":"query_repository_discussions async","text":"

        A list of discussions that have been opened in the repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| category_id | str | Only include discussions that belong to the category with this ID. | None |
| order_by | DiscussionOrder | Ordering options for discussions returned from the connection. | {'field': 'UPDATED_AT', 'direction': 'DESC'} |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_discussions(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    category_id: str = None,\n    order_by: graphql_schema.DiscussionOrder = {\n        \"field\": \"UPDATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of discussions that have been opened in the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        category_id: Only include discussions that belong to the\n            category with this ID.\n        order_by: Ordering options for discussions returned from the\n            connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).discussions(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            category_id=category_id,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"discussions\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"discussions\"]\n
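As a hedged sketch under the same assumptions as the earlier example, pagination works by passing the first/after cursor arguments and, optionally, overriding the default order_by value shown above; the cursor, token, and repository names below are placeholders.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_discussions


@flow
async def newest_discussions(after_cursor: str = None):
    creds = GitHubCredentials(token="ghp_...")  # placeholder token
    # Page through discussions 25 at a time, most recently updated first.
    page = await query_repository_discussions(
        owner="PrefectHQ",
        name="prefect",
        github_credentials=creds,
        first=25,
        after=after_cursor,
        order_by={"field": "UPDATED_AT", "direction": "DESC"},
    )
    return page
```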
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_environment","title":"query_repository_environment async","text":"

        Returns a single active environment from the current repository by name.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| environment_name | str | The name of the environment to be returned. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_environment(  # noqa\n    owner: str,\n    name: str,\n    environment_name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns a single active environment from the current repository by name.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        environment_name: The name of the environment to be returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).environment(\n        **strip_kwargs(\n            name=environment_name,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"environment\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"environment\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_environments","title":"query_repository_environments async","text":"

        A list of environments that are in this repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_environments(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of environments that are in this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).environments(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"environments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"environments\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_forks","title":"query_repository_forks async","text":"

        A list of direct forked repositories.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| privacy | RepositoryPrivacy | If non-null, filters repositories according to privacy. | None |
| order_by | RepositoryOrder | Ordering options for repositories returned from the connection. | None |
| affiliations | Iterable[RepositoryAffiliation] | Array of viewer's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the current viewer owns. | None |
| owner_affiliations | Iterable[RepositoryAffiliation] | Array of owner's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the organization or user being viewed owns. | ('OWNER', 'COLLABORATOR') |
| is_locked | bool | If non-null, filters repositories according to whether they have been locked. | None |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_forks(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    privacy: graphql_schema.RepositoryPrivacy = None,\n    order_by: graphql_schema.RepositoryOrder = None,\n    affiliations: Iterable[graphql_schema.RepositoryAffiliation] = None,\n    owner_affiliations: Iterable[graphql_schema.RepositoryAffiliation] = (\n        \"OWNER\",\n        \"COLLABORATOR\",\n    ),\n    is_locked: bool = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of direct forked repositories.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        privacy: If non-null, filters repositories according to privacy.\n        order_by: Ordering options for repositories returned from the\n            connection.\n        affiliations: Array of viewer's affiliation options for\n            repositories returned from the connection. For example,\n            OWNER will include only repositories that the current viewer\n            owns.\n        owner_affiliations: Array of owner's affiliation options for\n            repositories returned from the connection. For example,\n            OWNER will include only repositories that the organization\n            or user being viewed owns.\n        is_locked: If non-null, filters repositories according to whether\n            they have been locked.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).forks(\n        **strip_kwargs(\n            privacy=privacy,\n            order_by=order_by,\n            affiliations=affiliations,\n            owner_affiliations=owner_affiliations,\n            is_locked=is_locked,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"forks\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"forks\"]\n
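A hypothetical sketch of narrowing the forks connection with the privacy and owner_affiliations filters. Passing the PUBLIC enum value as a plain string is an assumption about how the generated schema accepts enums; the token and repository names are placeholders.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_forks


@flow
async def public_forks():
    creds = GitHubCredentials(token="ghp_...")  # placeholder token
    # Only public forks owned by the repository's owner or its collaborators.
    forks = await query_repository_forks(
        owner="PrefectHQ",
        name="prefect",
        github_credentials=creds,
        privacy="PUBLIC",
        owner_affiliations=("OWNER", "COLLABORATOR"),
        first=50,
    )
    return forks
```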
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_funding_links","title":"query_repository_funding_links async","text":"

        The funding links for this repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_funding_links(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The funding links for this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).funding_links(**strip_kwargs())\n\n    op_stack = (\n        \"repository\",\n        \"fundingLinks\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"fundingLinks\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_interaction_ability","title":"query_repository_interaction_ability async","text":"

        The interaction ability settings for this repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_interaction_ability(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The interaction ability settings for this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).interaction_ability(**strip_kwargs())\n\n    op_stack = (\n        \"repository\",\n        \"interactionAbility\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"interactionAbility\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_issue","title":"query_repository_issue async","text":"

        Returns a single issue from the current repository by number.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| number | int | The number for the issue to be returned. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_issue(  # noqa\n    owner: str,\n    name: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns a single issue from the current repository by number.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        number: The number for the issue to be returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).issue(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"issue\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"issue\"]\n
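A small, assumed example of fetching one issue and trimming the response with return_fields; the snake_case field names ("title", "state") are illustrative choices rather than an exhaustive list, and the token and repository names are placeholders.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_issue


@flow
async def get_issue_title(number: int):
    creds = GitHubCredentials(token="ghp_...")  # placeholder token
    # Request only the fields we need instead of the default field set.
    issue = await query_repository_issue(
        owner="PrefectHQ",
        name="prefect",
        number=number,
        github_credentials=creds,
        return_fields=["title", "state"],
    )
    # "title" is present because it was requested above.
    return issue["title"]
```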
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_issue_or_pull_request","title":"query_repository_issue_or_pull_request async","text":"

        Returns a single issue-like object from the current repository by number.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| number | int | The number for the issue to be returned. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_issue_or_pull_request(  # noqa\n    owner: str,\n    name: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns a single issue-like object from the current repository by number.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        number: The number for the issue to be returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).issue_or_pull_request(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"issueOrPullRequest\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"issueOrPullRequest\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_issue_templates","title":"query_repository_issue_templates async","text":"

Returns a list of issue templates associated with the repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_issue_templates(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns a list of issue templates associated to the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).issue_templates(**strip_kwargs())\n\n    op_stack = (\n        \"repository\",\n        \"issueTemplates\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"issueTemplates\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_issues","title":"query_repository_issues async","text":"

        A list of issues that have been opened in the repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| labels | Iterable[str] | A list of label names to filter the issues by. | required |
| states | Iterable[IssueState] | A list of states to filter the issues by. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| order_by | IssueOrder | Ordering options for issues returned from the connection. | None |
| filter_by | IssueFilters | Filtering options for issues returned from the connection. | None |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_issues(  # noqa\n    owner: str,\n    name: str,\n    labels: Iterable[str],\n    states: Iterable[graphql_schema.IssueState],\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    order_by: graphql_schema.IssueOrder = None,\n    filter_by: graphql_schema.IssueFilters = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of issues that have been opened in the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        labels: A list of label names to filter the pull requests by.\n        states: A list of states to filter the issues by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        order_by: Ordering options for issues returned from the\n            connection.\n        filter_by: Filtering options for issues returned from the\n            connection.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).issues(\n        **strip_kwargs(\n            labels=labels,\n            states=states,\n            order_by=order_by,\n            filter_by=filter_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"issues\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"issues\"]\n
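A sketch, under the same assumptions as the earlier examples, of filtering the issues connection by label and state; "bug" and "OPEN" are placeholder filter values, as are the token and repository names.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_issues


@flow
async def open_bugs():
    creds = GitHubCredentials(token="ghp_...")  # placeholder token
    # Open issues carrying the "bug" label, 20 at a time.
    issues = await query_repository_issues(
        owner="PrefectHQ",
        name="prefect",
        labels=["bug"],
        states=["OPEN"],
        github_credentials=creds,
        first=20,
    )
    return issues
```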
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_label","title":"query_repository_label async","text":"

        Returns a single label by name.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| label_name | str | Label name. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_label(  # noqa\n    owner: str,\n    name: str,\n    label_name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns a single label by name.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        label_name: Label name.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).label(\n        **strip_kwargs(\n            name=label_name,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"label\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"label\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_labels","title":"query_repository_labels async","text":"

        A list of labels associated with the repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| order_by | LabelOrder | Ordering options for labels returned from the connection. | {'field': 'CREATED_AT', 'direction': 'ASC'} |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| query | str | If provided, searches labels by name and description. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_labels(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    order_by: graphql_schema.LabelOrder = {\"field\": \"CREATED_AT\", \"direction\": \"ASC\"},\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    query: str = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of labels associated with the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        order_by: Ordering options for labels returned from the\n            connection.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        query: If provided, searches labels by name and description.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).labels(\n        **strip_kwargs(\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            query=query,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"labels\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"labels\"]\n
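A hypothetical example combining the query search argument with an override of the default CREATED_AT ascending ordering; the "docs" search term, token, and repository names are placeholders.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_labels


@flow
async def find_docs_labels():
    creds = GitHubCredentials(token="ghp_...")  # placeholder token
    # Search labels whose name or description mentions "docs",
    # newest labels first instead of the default ascending order.
    labels = await query_repository_labels(
        owner="PrefectHQ",
        name="prefect",
        github_credentials=creds,
        query="docs",
        order_by={"field": "CREATED_AT", "direction": "DESC"},
        first=10,
    )
    return labels
```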
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_languages","title":"query_repository_languages async","text":"

        A list containing a breakdown of the language composition of the repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| order_by | LanguageOrder | Order for connection. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_languages(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.LanguageOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list containing a breakdown of the language composition of the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        order_by: Order for connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).languages(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"languages\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"languages\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_latest_release","title":"query_repository_latest_release async","text":"

        Get the latest release for the repository if one exists.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_latest_release(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Get the latest release for the repository if one exists.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).latest_release(**strip_kwargs())\n\n    op_stack = (\n        \"repository\",\n        \"latestRelease\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"latestRelease\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_license_info","title":"query_repository_license_info async","text":"

        The license associated with the repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_license_info(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The license associated with the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).license_info(**strip_kwargs())\n\n    op_stack = (\n        \"repository\",\n        \"licenseInfo\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"licenseInfo\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_mentionable_users","title":"query_repository_mentionable_users async","text":"

        A list of Users that can be mentioned in the context of the repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| query | str | Filters users with query on user name and login. | None |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_mentionable_users(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    query: str = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of Users that can be mentioned in the context of the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        query: Filters users with query on user name and login.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).mentionable_users(\n        **strip_kwargs(\n            query=query,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"mentionableUsers\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"mentionableUsers\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_milestone","title":"query_repository_milestone async","text":"

        Returns a single milestone from the current repository by number.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| number | int | The number for the milestone to be returned. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_milestone(  # noqa\n    owner: str,\n    name: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns a single milestone from the current repository by number.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        number: The number for the milestone to be returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).milestone(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"milestone\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"milestone\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_milestones","title":"query_repository_milestones async","text":"

        A list of milestones associated with the repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| states | Iterable[MilestoneState] | Filter by the state of the milestones. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| order_by | MilestoneOrder | Ordering options for milestones. | None |
| query | str | Filters milestones with a query on the title. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_milestones(  # noqa\n    owner: str,\n    name: str,\n    states: Iterable[graphql_schema.MilestoneState],\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.MilestoneOrder = None,\n    query: str = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of milestones associated with the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        states: Filter by the state of the milestones.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        order_by: Ordering options for milestones.\n        query: Filters milestones with a query on the title.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).milestones(\n        **strip_kwargs(\n            states=states,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n            query=query,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"milestones\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"milestones\"]\n
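        A rough sketch (placeholders throughout) of listing a page of milestones; the OPEN state filter and page size are illustrative assumptions.

        from prefect import flow
        from prefect_github import GitHubCredentials
        from prefect_github.repository import query_repository_milestones

        @flow
        async def list_open_milestones_flow():
            credentials = GitHubCredentials(token="<GITHUB_TOKEN>")  # placeholder token
            return await query_repository_milestones(
                owner="PrefectHQ",
                name="prefect",
                states=["OPEN"],   # MilestoneState values
                first=10,          # page size; pass `after` with a cursor to page further
                github_credentials=credentials,
            )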
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_object","title":"query_repository_object async","text":"

        A Git object in the repository.

        Parameters:

            owner (str, required): The login field of a user or organization.
            name (str, required): The name of the repository.
            github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
            follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
            oid (datetime, default None): The Git object ID.
            expression (str, default None): A Git revision expression suitable for rev-parse.
            return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        Returns:

            Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_object(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    oid: datetime = None,\n    expression: str = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A Git object in the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        oid: The Git object ID.\n        expression: A Git revision expression suitable for rev-parse.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).object(\n        **strip_kwargs(\n            oid=oid,\n            expression=expression,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"object\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"object\"]\n
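        A rough sketch (placeholders throughout) of fetching a Git object by a rev-parse expression rather than an object ID; the "main" expression is an illustrative assumption.

        from prefect import flow
        from prefect_github import GitHubCredentials
        from prefect_github.repository import query_repository_object

        @flow
        async def get_commit_object_flow():
            credentials = GitHubCredentials(token="<GITHUB_TOKEN>")  # placeholder token
            # Either `oid` or an `expression` identifies the object; an expression
            # such as "main" or "main:README.md" is often the more convenient option.
            return await query_repository_object(
                owner="PrefectHQ",
                name="prefect",
                expression="main",
                github_credentials=credentials,
            )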
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_owner","title":"query_repository_owner async","text":"

        The User owner of the repository.

        Parameters:

            owner (str, required): The login field of a user or organization.
            name (str, required): The name of the repository.
            github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
            follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
            return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        Returns:

            Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_owner(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The User owner of the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).owner(**strip_kwargs())\n\n    op_stack = (\n        \"repository\",\n        \"owner\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"owner\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_packages","title":"query_repository_packages async","text":"

        A list of packages under the owner.

        Parameters:

            owner (str, required): The login field of a user or organization.
            name (str, required): The name of the repository.
            github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
            follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
            after (str, default None): Returns the elements in the list that come after the specified cursor.
            before (str, default None): Returns the elements in the list that come before the specified cursor.
            first (int, default None): Returns the first n elements from the list.
            last (int, default None): Returns the last n elements from the list.
            names (Iterable[str], default None): Find packages by their names.
            repository_id (str, default None): Find packages in a repository by ID.
            package_type (PackageType, default None): Filter registry package by type.
            order_by (PackageOrder, default {'field': 'CREATED_AT', 'direction': 'DESC'}): Ordering of the returned packages.
            return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        Returns:

            Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_packages(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    names: Iterable[str] = None,\n    repository_id: str = None,\n    package_type: graphql_schema.PackageType = None,\n    order_by: graphql_schema.PackageOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of packages under the owner.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        names: Find packages by their names.\n        repository_id: Find packages in a repository by ID.\n        package_type: Filter registry package by type.\n        order_by: Ordering of the returned packages.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).packages(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            names=names,\n            repository_id=repository_id,\n            package_type=package_type,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"packages\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"packages\"]\n
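        A rough sketch (placeholders throughout) of listing packages while overriding the default ordering; the ascending order and page size are illustrative assumptions.

        from prefect import flow
        from prefect_github import GitHubCredentials
        from prefect_github.repository import query_repository_packages

        @flow
        async def list_packages_flow():
            credentials = GitHubCredentials(token="<GITHUB_TOKEN>")  # placeholder token
            return await query_repository_packages(
                owner="PrefectHQ",
                name="prefect",
                first=5,
                # Overrides the default {"field": "CREATED_AT", "direction": "DESC"} ordering.
                order_by={"field": "CREATED_AT", "direction": "ASC"},
                github_credentials=credentials,
            )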
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_pinned_discussions","title":"query_repository_pinned_discussions async","text":"

        A list of discussions that have been pinned in this repository.

        Parameters:

            owner (str, required): The login field of a user or organization.
            name (str, required): The name of the repository.
            github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
            follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
            after (str, default None): Returns the elements in the list that come after the specified cursor.
            before (str, default None): Returns the elements in the list that come before the specified cursor.
            first (int, default None): Returns the first n elements from the list.
            last (int, default None): Returns the last n elements from the list.
            return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        Returns:

            Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_pinned_discussions(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of discussions that have been pinned in this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).pinned_discussions(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"pinnedDiscussions\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"pinnedDiscussions\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_pinned_issues","title":"query_repository_pinned_issues async","text":"

        A list of pinned issues for this repository.

        Parameters:

            owner (str, required): The login field of a user or organization.
            name (str, required): The name of the repository.
            github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
            follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
            after (str, default None): Returns the elements in the list that come after the specified cursor.
            before (str, default None): Returns the elements in the list that come before the specified cursor.
            first (int, default None): Returns the first n elements from the list.
            last (int, default None): Returns the last n elements from the list.
            return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        Returns:

            Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_pinned_issues(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of pinned issues for this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).pinned_issues(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"pinnedIssues\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"pinnedIssues\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_primary_language","title":"query_repository_primary_language async","text":"

        The primary language of the repository's code.

        Parameters:

            owner (str, required): The login field of a user or organization.
            name (str, required): The name of the repository.
            github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
            follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
            return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        Returns:

            Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_primary_language(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The primary language of the repository's code.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).primary_language(**strip_kwargs())\n\n    op_stack = (\n        \"repository\",\n        \"primaryLanguage\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"primaryLanguage\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_project","title":"query_repository_project async","text":"

        Find project by number.

        Parameters:

            owner (str, required): The login field of a user or organization.
            name (str, required): The name of the repository.
            number (int, required): The project number to find.
            github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
            follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
            return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        Returns:

            Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_project(  # noqa\n    owner: str,\n    name: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find project by number.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        number: The project number to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).project(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"project\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"project\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_project_next","title":"query_repository_project_next async","text":"

        Finds and returns the Project (beta) according to the provided Project (beta) number.

        Parameters:

            owner (str, required): The login field of a user or organization.
            name (str, required): The name of the repository.
            number (int, required): The ProjectNext number.
            github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
            follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
            return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        Returns:

            Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_project_next(  # noqa\n    owner: str,\n    name: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Finds and returns the Project (beta) according to the provided Project (beta)\n    number.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        number: The ProjectNext number.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).project_next(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"projectNext\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"projectNext\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_project_v2","title":"query_repository_project_v2 async","text":"

        Finds and returns the Project according to the provided Project number.

        Parameters:

            owner (str, required): The login field of a user or organization.
            name (str, required): The name of the repository.
            number (int, required): The Project number.
            github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
            follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
            return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        Returns:

            Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_project_v2(  # noqa\n    owner: str,\n    name: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Finds and returns the Project according to the provided Project number.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        number: The Project number.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).project_v2(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"projectV2\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"projectV2\"]\n
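        A rough sketch (placeholders throughout) of fetching a single ProjectV2 by number inside a flow.

        from prefect import flow
        from prefect_github import GitHubCredentials
        from prefect_github.repository import query_repository_project_v2

        @flow
        async def get_project_v2_flow():
            credentials = GitHubCredentials(token="<GITHUB_TOKEN>")  # placeholder token
            return await query_repository_project_v2(
                owner="PrefectHQ",
                name="prefect",
                number=1,  # placeholder ProjectV2 number
                github_credentials=credentials,
            )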
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_projects","title":"query_repository_projects async","text":"

        A list of projects under the owner.

        Parameters:

            owner (str, required): The login field of a user or organization.
            name (str, required): The name of the repository.
            states (Iterable[ProjectState], required): A list of states to filter the projects by.
            github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
            follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
            order_by (ProjectOrder, default None): Ordering options for projects returned from the connection.
            search (str, default None): Query to search projects by, currently only searching by name.
            after (str, default None): Returns the elements in the list that come after the specified cursor.
            before (str, default None): Returns the elements in the list that come before the specified cursor.
            first (int, default None): Returns the first n elements from the list.
            last (int, default None): Returns the last n elements from the list.
            return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        Returns:

            Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_projects(  # noqa\n    owner: str,\n    name: str,\n    states: Iterable[graphql_schema.ProjectState],\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    order_by: graphql_schema.ProjectOrder = None,\n    search: str = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of projects under the owner.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        states: A list of states to filter the projects by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        order_by: Ordering options for projects returned from the\n            connection.\n        search: Query to search projects by, currently only searching\n            by name.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).projects(\n        **strip_kwargs(\n            states=states,\n            order_by=order_by,\n            search=search,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"projects\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"projects\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_projects_next","title":"query_repository_projects_next async","text":"

        List of projects (beta) linked to this repository.

        Parameters:

            owner (str, required): The login field of a user or organization.
            name (str, required): The name of the repository.
            github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
            follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
            after (str, default None): Returns the elements in the list that come after the specified cursor.
            before (str, default None): Returns the elements in the list that come before the specified cursor.
            first (int, default None): Returns the first n elements from the list.
            last (int, default None): Returns the last n elements from the list.
            query (str, default None): A project (beta) to search for linked to the repo.
            sort_by (ProjectNextOrderField, default 'TITLE'): How to order the returned project (beta) objects.
            return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        Returns:

            Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_projects_next(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    query: str = None,\n    sort_by: graphql_schema.ProjectNextOrderField = \"TITLE\",\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of projects (beta) linked to this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        query: A project (beta) to search for linked to the repo.\n        sort_by: How to order the returned project (beta) objects.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).projects_next(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            query=query,\n            sort_by=sort_by,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"projectsNext\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"projectsNext\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_projects_v2","title":"query_repository_projects_v2 async","text":"

        List of projects linked to this repository.

        Parameters:

            owner (str, required): The login field of a user or organization.
            name (str, required): The name of the repository.
            github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
            follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
            after (str, default None): Returns the elements in the list that come after the specified cursor.
            before (str, default None): Returns the elements in the list that come before the specified cursor.
            first (int, default None): Returns the first n elements from the list.
            last (int, default None): Returns the last n elements from the list.
            query (str, default None): A project to search for linked to the repo.
            order_by (ProjectV2Order, default {'field': 'NUMBER', 'direction': 'DESC'}): How to order the returned projects.
            return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        Returns:

            Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_projects_v2(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    query: str = None,\n    order_by: graphql_schema.ProjectV2Order = {\"field\": \"NUMBER\", \"direction\": \"DESC\"},\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of projects linked to this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        query: A project to search for linked to the repo.\n        order_by: How to order the returned projects.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).projects_v2(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            query=query,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"projectsV2\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"projectsV2\"]\n
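        A rough sketch (placeholders throughout) of listing ProjectsV2 linked to a repository; the search string and page size are illustrative assumptions.

        from prefect import flow
        from prefect_github import GitHubCredentials
        from prefect_github.repository import query_repository_projects_v2

        @flow
        async def list_projects_v2_flow():
            credentials = GitHubCredentials(token="<GITHUB_TOKEN>")  # placeholder token
            return await query_repository_projects_v2(
                owner="PrefectHQ",
                name="prefect",
                first=10,
                query="roadmap",  # placeholder search string
                github_credentials=credentials,
            )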
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_pull_request","title":"query_repository_pull_request async","text":"

        Returns a single pull request from the current repository by number.

        Parameters:

            owner (str, required): The login field of a user or organization.
            name (str, required): The name of the repository.
            number (int, required): The number for the pull request to be returned.
            github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
            follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
            return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        Returns:

            Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_pull_request(  # noqa\n    owner: str,\n    name: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns a single pull request from the current repository by number.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        number: The number for the pull request to be returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).pull_request(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"pullRequest\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"pullRequest\"]\n
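        A rough sketch (placeholders throughout) of fetching one pull request by number inside a flow.

        from prefect import flow
        from prefect_github import GitHubCredentials
        from prefect_github.repository import query_repository_pull_request

        @flow
        async def get_pull_request_flow():
            credentials = GitHubCredentials(token="<GITHUB_TOKEN>")  # placeholder token
            return await query_repository_pull_request(
                owner="PrefectHQ",
                name="prefect",
                number=1,  # placeholder pull request number
                github_credentials=credentials,
            )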
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_pull_request_templates","title":"query_repository_pull_request_templates async","text":"

        Returns a list of pull request templates associated with the repository.

        Parameters:

            owner (str, required): The login field of a user or organization.
            name (str, required): The name of the repository.
            github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
            follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
            return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        Returns:

            Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_pull_request_templates(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns a list of pull request templates associated to the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).pull_request_templates(**strip_kwargs())\n\n    op_stack = (\n        \"repository\",\n        \"pullRequestTemplates\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"pullRequestTemplates\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_pull_requests","title":"query_repository_pull_requests async","text":"

        A list of pull requests that have been opened in the repository.

        Parameters:

            owner (str, required): The login field of a user or organization.
            name (str, required): The name of the repository.
            states (Iterable[PullRequestState], required): A list of states to filter the pull requests by.
            labels (Iterable[str], required): A list of label names to filter the pull requests by.
            github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
            follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
            head_ref_name (str, default None): The head ref name to filter the pull requests by.
            base_ref_name (str, default None): The base ref name to filter the pull requests by.
            order_by (IssueOrder, default None): Ordering options for pull requests returned from the connection.
            after (str, default None): Returns the elements in the list that come after the specified cursor.
            before (str, default None): Returns the elements in the list that come before the specified cursor.
            first (int, default None): Returns the first n elements from the list.
            last (int, default None): Returns the last n elements from the list.
            return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        Returns:

            Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_pull_requests(  # noqa\n    owner: str,\n    name: str,\n    states: Iterable[graphql_schema.PullRequestState],\n    labels: Iterable[str],\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    head_ref_name: str = None,\n    base_ref_name: str = None,\n    order_by: graphql_schema.IssueOrder = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of pull requests that have been opened in the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        states: A list of states to filter the pull requests by.\n        labels: A list of label names to filter the pull requests\n            by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        head_ref_name: The head ref name to filter the pull\n            requests by.\n        base_ref_name: The base ref name to filter the pull\n            requests by.\n        order_by: Ordering options for pull requests returned from\n            the connection.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).pull_requests(\n        **strip_kwargs(\n            states=states,\n            labels=labels,\n            head_ref_name=head_ref_name,\n            base_ref_name=base_ref_name,\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"pullRequests\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"pullRequests\"]\n
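        A rough sketch (placeholders throughout) of listing open pull requests filtered by label; the state, label, and page size values are illustrative assumptions.

        from prefect import flow
        from prefect_github import GitHubCredentials
        from prefect_github.repository import query_repository_pull_requests

        @flow
        async def list_open_bug_prs_flow():
            credentials = GitHubCredentials(token="<GITHUB_TOKEN>")  # placeholder token
            return await query_repository_pull_requests(
                owner="PrefectHQ",
                name="prefect",
                states=["OPEN"],  # PullRequestState values
                labels=["bug"],   # placeholder label filter
                first=20,
                github_credentials=credentials,
            )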
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_recent_projects","title":"query_repository_recent_projects async","text":"

        Recent projects that this user has modified in the context of the owner.

        Parameters:

            owner (str, required): The login field of a user or organization.
            name (str, required): The name of the repository.
            github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
            follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
            after (str, default None): Returns the elements in the list that come after the specified cursor.
            before (str, default None): Returns the elements in the list that come before the specified cursor.
            first (int, default None): Returns the first n elements from the list.
            last (int, default None): Returns the last n elements from the list.
            return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        Returns:

            Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_recent_projects(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Recent projects that this user has modified in the context of the owner.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).recent_projects(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"recentProjects\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"recentProjects\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_ref","title":"query_repository_ref async","text":"

        Fetch a given ref from the repository.

        Parameters:

            owner (str, required): The login field of a user or organization.
            name (str, required): The name of the repository.
            qualified_name (str, required): The ref to retrieve. Fully qualified matches are checked in order (refs/heads/master) before falling back onto checks for short name matches (master).
            github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
            follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
            return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        Returns:

            Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_ref(  # noqa\n    owner: str,\n    name: str,\n    qualified_name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Fetch a given ref from the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        qualified_name: The ref to retrieve. Fully qualified matches are\n            checked in order (`refs/heads/master`) before falling back\n            onto checks for short name matches (`master`).\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).ref(\n        **strip_kwargs(\n            qualified_name=qualified_name,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"ref\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"ref\"]\n
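        A rough sketch (placeholders throughout) of fetching a single branch ref by its qualified name.

        from prefect import flow
        from prefect_github import GitHubCredentials
        from prefect_github.repository import query_repository_ref

        @flow
        async def get_main_branch_ref_flow():
            credentials = GitHubCredentials(token="<GITHUB_TOKEN>")  # placeholder token
            return await query_repository_ref(
                owner="PrefectHQ",
                name="prefect",
                qualified_name="refs/heads/main",  # a short name like "main" is also accepted
                github_credentials=credentials,
            )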
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_refs","title":"query_repository_refs async","text":"

        Fetch a list of refs from the repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| ref_prefix | str | A ref name prefix like refs/heads/, refs/tags/, etc. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| query | str | Filters refs with query on name. | None |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| direction | OrderDirection | DEPRECATED: use orderBy. The ordering direction. | None |
| order_by | RefOrder | Ordering options for refs returned from the connection. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_refs(  # noqa\n    owner: str,\n    name: str,\n    ref_prefix: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    query: str = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    direction: graphql_schema.OrderDirection = None,\n    order_by: graphql_schema.RefOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Fetch a list of refs from the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        ref_prefix: A ref name prefix like `refs/heads/`, `refs/tags/`,\n            etc.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        query: Filters refs with query on name.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        direction: DEPRECATED: use orderBy. The ordering direction.\n        order_by: Ordering options for refs returned from the connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).refs(\n        **strip_kwargs(\n            ref_prefix=ref_prefix,\n            query=query,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            direction=direction,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"refs\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"refs\"]\n
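A short usage sketch follows, showing the ref_prefix filter together with cursor pagination; the token and repository values are placeholders.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_refs


@flow
async def list_tag_refs():
    github_credentials = GitHubCredentials(token="ghp_example")  # placeholder token
    # Fetch only the first 10 refs under refs/tags/; pass the `after` cursor
    # from a previous page to continue through the connection.
    return await query_repository_refs(
        owner="PrefectHQ",
        name="prefect",
        ref_prefix="refs/tags/",
        github_credentials=github_credentials,
        first=10,
    )
```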
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_release","title":"query_repository_release async","text":"

        Lookup a single release given various criteria.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| tag_name | str | The name of the Tag the Release was created from. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_release(  # noqa\n    owner: str,\n    name: str,\n    tag_name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Lookup a single release given various criteria.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        tag_name: The name of the Tag the Release was created from.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).release(\n        **strip_kwargs(\n            tag_name=tag_name,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"release\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"release\"]\n
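The sketch below narrows the response with return_fields. The tag and token are placeholders, and the field names (name, tag_name, url) are assumed to exist on the GraphQL Release type; they are passed in snake_case as described above.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_release


@flow
async def release_summary():
    github_credentials = GitHubCredentials(token="ghp_example")  # placeholder token
    # Request only a subset of fields instead of the defaults listed in
    # configs/query/*.json; names are snake_case GraphQL field names.
    return await query_repository_release(
        owner="PrefectHQ",
        name="prefect",
        tag_name="2.16.0",  # placeholder tag
        github_credentials=github_credentials,
        return_fields=["name", "tag_name", "url"],
    )
```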
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_releases","title":"query_repository_releases async","text":"

        List of releases which are dependent on this repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| order_by | ReleaseOrder | Order for connection. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_releases(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.ReleaseOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of releases which are dependent on this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        order_by: Order for connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).releases(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"releases\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"releases\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_repository_topics","title":"query_repository_repository_topics async","text":"

        A list of applied repository-topic associations for this repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_repository_topics(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of applied repository-topic associations for this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).repository_topics(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"repositoryTopics\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"repositoryTopics\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_stargazers","title":"query_repository_stargazers async","text":"

        A list of users who have starred this starrable.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| order_by | StarOrder | Order for connection. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_stargazers(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.StarOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of users who have starred this starrable.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        order_by: Order for connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).stargazers(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"stargazers\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"stargazers\"]\n
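A brief pagination sketch follows; the token and repository values are placeholders.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_stargazers


@flow
async def first_stargazers():
    github_credentials = GitHubCredentials(token="ghp_example")  # placeholder token
    # `first`/`last` bound the page size; `after`/`before` accept cursors from
    # a previous page to continue the connection.
    return await query_repository_stargazers(
        owner="PrefectHQ",
        name="prefect",
        github_credentials=github_credentials,
        first=25,
    )
```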
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_submodules","title":"query_repository_submodules async","text":"

        Returns a list of all submodules in this repository parsed from the .gitmodules file as of the default branch's HEAD commit.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_submodules(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Returns a list of all submodules in this repository parsed from the .gitmodules\n    file as of the default branch's HEAD commit.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).submodules(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"submodules\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"submodules\"]\n
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_vulnerability_alerts","title":"query_repository_vulnerability_alerts async","text":"

        A list of vulnerability alerts that are on this repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| states | Iterable[RepositoryVulnerabilityAlertState] | Filter by the state of the alert. | required |
| dependency_scopes | Iterable[RepositoryVulnerabilityAlertDependencyScope] | Filter by the scope of the alert's dependency. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_vulnerability_alerts(  # noqa\n    owner: str,\n    name: str,\n    states: Iterable[graphql_schema.RepositoryVulnerabilityAlertState],\n    dependency_scopes: Iterable[\n        graphql_schema.RepositoryVulnerabilityAlertDependencyScope\n    ],\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of vulnerability alerts that are on this repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        states: Filter by the state of the alert.\n        dependency_scopes: Filter by the scope of the\n            alert's dependency.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).vulnerability_alerts(\n        **strip_kwargs(\n            states=states,\n            dependency_scopes=dependency_scopes,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"vulnerabilityAlerts\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"vulnerabilityAlerts\"]\n
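Since states and dependency_scopes are required, a hedged sketch is shown below. The enum values (OPEN, RUNTIME) come from GitHub's GraphQL schema and are passed here as plain strings, which the generated sgqlc schema generally accepts; the token is a placeholder.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository import query_repository_vulnerability_alerts


@flow
async def open_runtime_alerts(owner: str, name: str):
    github_credentials = GitHubCredentials(token="ghp_example")  # placeholder token
    return await query_repository_vulnerability_alerts(
        owner=owner,
        name=name,
        states=["OPEN"],                # alert state enum values
        dependency_scopes=["RUNTIME"],  # dependency scope enum values
        github_credentials=github_credentials,
        first=50,
    )
```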
        "},{"location":"integrations/prefect-github/repository/#prefect_github.repository.query_repository_watchers","title":"query_repository_watchers async","text":"

        A list of users watching the repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| owner | str | The login field of a user or organization. | required |
| name | str | The name of the repository. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository.py
        @task\nasync def query_repository_watchers(  # noqa\n    owner: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of users watching the repository.\n\n    Args:\n        owner: The login field of a user or organization.\n        name: The name of the repository.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository(\n        **strip_kwargs(\n            owner=owner,\n            name=name,\n            follow_renames=follow_renames,\n        )\n    ).watchers(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"repository\",\n        \"watchers\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repository\"][\"watchers\"]\n
        "},{"location":"integrations/prefect-github/repository_owner/","title":"Repository Owner","text":""},{"location":"integrations/prefect-github/repository_owner/#prefect_github.repository_owner","title":"prefect_github.repository_owner","text":"

This module contains GitHub query_repository_owner* tasks.

        "},{"location":"integrations/prefect-github/repository_owner/#prefect_github.repository_owner.query_repository_owner","title":"query_repository_owner async","text":"

        The query root of GitHub's GraphQL interface.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The username to lookup the owner by. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository_owner.py
        @task\nasync def query_repository_owner(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The query root of GitHub's GraphQL interface.\n\n    Args:\n        login: The username to lookup the owner by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository_owner(\n        **strip_kwargs(\n            login=login,\n        )\n    )\n\n    op_stack = (\"repositoryOwner\",)\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repositoryOwner\"]\n
        "},{"location":"integrations/prefect-github/repository_owner/#prefect_github.repository_owner.query_repository_owner_repositories","title":"query_repository_owner_repositories async","text":"

        A list of repositories that the user owns.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The username to lookup the owner by. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| privacy | RepositoryPrivacy | If non-null, filters repositories according to privacy. | None |
| order_by | RepositoryOrder | Ordering options for repositories returned from the connection. | None |
| affiliations | Iterable[RepositoryAffiliation] | Array of viewer's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the current viewer owns. | None |
| owner_affiliations | Iterable[RepositoryAffiliation] | Array of owner's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the organization or user being viewed owns. | ('OWNER', 'COLLABORATOR') |
| is_locked | bool | If non-null, filters repositories according to whether they have been locked. | None |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| is_fork | bool | If non-null, filters repositories according to whether they are forks of another repository. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository_owner.py
        @task\nasync def query_repository_owner_repositories(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    privacy: graphql_schema.RepositoryPrivacy = None,\n    order_by: graphql_schema.RepositoryOrder = None,\n    affiliations: Iterable[graphql_schema.RepositoryAffiliation] = None,\n    owner_affiliations: Iterable[graphql_schema.RepositoryAffiliation] = (\n        \"OWNER\",\n        \"COLLABORATOR\",\n    ),\n    is_locked: bool = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    is_fork: bool = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories that the user owns.\n\n    Args:\n        login: The username to lookup the owner by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        privacy: If non-null, filters repositories according to\n            privacy.\n        order_by: Ordering options for repositories returned from\n            the connection.\n        affiliations: Array of viewer's affiliation options for\n            repositories returned from the connection. For example,\n            OWNER will include only repositories that the current viewer\n            owns.\n        owner_affiliations: Array of owner's affiliation options\n            for repositories returned from the connection. For example,\n            OWNER will include only repositories that the organization\n            or user being viewed owns.\n        is_locked: If non-null, filters repositories according to\n            whether they have been locked.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        is_fork: If non-null, filters repositories according to\n            whether they are forks of another repository.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository_owner(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repositories(\n        **strip_kwargs(\n            privacy=privacy,\n            order_by=order_by,\n            affiliations=affiliations,\n            owner_affiliations=owner_affiliations,\n            is_locked=is_locked,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            is_fork=is_fork,\n        )\n    )\n\n    op_stack = (\n        \"repositoryOwner\",\n        \"repositories\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repositoryOwner\"][\"repositories\"]\n
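A hedged sketch of combining the privacy and affiliation filters follows. The enum values (PUBLIC, OWNER) are passed as plain strings from the GraphQL schema, and the token is a placeholder.

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.repository_owner import query_repository_owner_repositories


@flow
async def public_owned_repositories(login: str):
    github_credentials = GitHubCredentials(token="ghp_example")  # placeholder token
    # Restrict the connection to public repositories the owner actually owns,
    # and fetch only the first page of 20 results.
    return await query_repository_owner_repositories(
        login=login,
        github_credentials=github_credentials,
        privacy="PUBLIC",
        owner_affiliations=["OWNER"],
        first=20,
    )
```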
        "},{"location":"integrations/prefect-github/repository_owner/#prefect_github.repository_owner.query_repository_owner_repository","title":"query_repository_owner_repository async","text":"

        Find Repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The username to lookup the owner by. | required |
| name | str | Name of Repository to find. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/repository_owner.py
        @task\nasync def query_repository_owner_repository(  # noqa\n    login: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find Repository.\n\n    Args:\n        login: The username to lookup the owner by.\n        name: Name of Repository to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.repository_owner(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repository(\n        **strip_kwargs(\n            name=name,\n            follow_renames=follow_renames,\n        )\n    )\n\n    op_stack = (\n        \"repositoryOwner\",\n        \"repository\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"repositoryOwner\"][\"repository\"]\n
        "},{"location":"integrations/prefect-github/user/","title":"User","text":""},{"location":"integrations/prefect-github/user/#prefect_github.user","title":"prefect_github.user","text":"

This module contains GitHub query_user* tasks.

        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user","title":"query_user async","text":"

        The query root of GitHub's GraphQL interface.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The user's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The query root of GitHub's GraphQL interface.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    )\n\n    op_stack = (\"user\",)\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_commit_comments","title":"query_user_commit_comments async","text":"

        A list of commit comments made by this user.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The user's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_commit_comments(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of commit comments made by this user.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).commit_comments(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"commitComments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"commitComments\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_contributions_collection","title":"query_user_contributions_collection async","text":"

        The collection of contributions this user has made to different repositories.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The user's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| organization_id | str | The ID of the organization used to filter contributions. | None |
| from_ | datetime | Only contributions made at this time or later will be counted. If omitted, defaults to a year ago. | None |
| to | datetime | Only contributions made before and up to (including) this time will be counted. If omitted, defaults to the current time or one year from the provided from argument. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_contributions_collection(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    organization_id: str = None,\n    from_: datetime = None,\n    to: datetime = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The collection of contributions this user has made to different repositories.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        organization_id: The ID of the organization\n            used to filter contributions.\n        from_: Only contributions made at this time or\n            later will be counted. If omitted, defaults to a year ago.\n        to: Only contributions made before and up to\n            (including) this time will be counted. If omitted, defaults\n            to the current time or one year from the provided from\n            argument.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).contributions_collection(\n        **strip_kwargs(\n            organization_id=organization_id,\n            from_=from_,\n            to=to,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"contributionsCollection\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"contributionsCollection\"]\n
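A short sketch of limiting the contributions window with from_ and to follows; the token is a placeholder, and omitting both arguments falls back to the defaults described above.

```python
from datetime import datetime, timedelta, timezone

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.user import query_user_contributions_collection


@flow
async def recent_contributions(login: str):
    github_credentials = GitHubCredentials(token="ghp_example")  # placeholder token
    now = datetime.now(timezone.utc)
    # Count only contributions made in the last 30 days.
    return await query_user_contributions_collection(
        login=login,
        github_credentials=github_credentials,
        from_=now - timedelta(days=30),
        to=now,
    )
```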
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_followers","title":"query_user_followers async","text":"

        A list of users the given user is followed by.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The user's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_followers(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of users the given user is followed by.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).followers(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"followers\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"followers\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_following","title":"query_user_following async","text":"

        A list of users the given user is following.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The user's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_following(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of users the given user is following.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).following(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"following\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"following\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_gist","title":"query_user_gist async","text":"

        Find gist by repo name.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The user's login. | required |
| name | str | The gist name to find. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_gist(  # noqa\n    login: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find gist by repo name.\n\n    Args:\n        login: The user's login.\n        name: The gist name to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).gist(\n        **strip_kwargs(\n            name=name,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"gist\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"gist\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_gist_comments","title":"query_user_gist_comments async","text":"

        A list of gist comments made by this user.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The user's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_gist_comments(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of gist comments made by this user.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).gist_comments(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"gistComments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"gistComments\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_gists","title":"query_user_gists async","text":"

        A list of the Gists the user has created.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| login | str | The user's login. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| privacy | GistPrivacy | Filters Gists according to privacy. | None |
| order_by | GistOrder | Ordering options for gists returned from the connection. | None |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_gists(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    privacy: graphql_schema.GistPrivacy = None,\n    order_by: graphql_schema.GistOrder = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of the Gists the user has created.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        privacy: Filters Gists according to privacy.\n        order_by: Ordering options for gists returned from the connection.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).gists(\n        **strip_kwargs(\n            privacy=privacy,\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"gists\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"gists\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_interaction_ability","title":"query_user_interaction_ability async","text":"

        The interaction ability settings for this user.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | login | str | The user's login. | required |
        | github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
        | return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_interaction_ability(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The interaction ability settings for this user.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).interaction_ability(**strip_kwargs())\n\n    op_stack = (\n        \"user\",\n        \"interactionAbility\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"interactionAbility\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_issue_comments","title":"query_user_issue_comments async","text":"

        A list of issue comments made by this user.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | login | str | The user's login. | required |
        | github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
        | order_by | IssueCommentOrder | Ordering options for issue comments returned from the connection. | None |
        | after | str | Returns the elements in the list that come after the specified cursor. | None |
        | before | str | Returns the elements in the list that come before the specified cursor. | None |
        | first | int | Returns the first n elements from the list. | None |
        | last | int | Returns the last n elements from the list. | None |
        | return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_issue_comments(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    order_by: graphql_schema.IssueCommentOrder = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of issue comments made by this user.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        order_by: Ordering options for issue comments returned\n            from the connection.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).issue_comments(\n        **strip_kwargs(\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"issueComments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"issueComments\"]\n
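        As a sketch of the ordering argument, the snippet below passes order_by as a plain mapping mirroring the GraphQL input type; UPDATED_AT is assumed to be a valid IssueCommentOrder field, and the token is read from an environment variable.

        ```python
        import asyncio
        import os

        from prefect import flow
        from prefect_github import GitHubCredentials
        from prefect_github.user import query_user_issue_comments


        @flow
        async def latest_issue_comments():
            creds = GitHubCredentials(token=os.environ["GITHUB_TOKEN"])
            # Most recently updated comments first (assumed enum value).
            return await query_user_issue_comments(
                login="octocat",
                github_credentials=creds,
                order_by={"field": "UPDATED_AT", "direction": "DESC"},
                first=10,
            )


        asyncio.run(latest_issue_comments())
        ```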
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_issues","title":"query_user_issues async","text":"

        A list of issues associated with this user.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | login | str | The user's login. | required |
        | labels | Iterable[str] | A list of label names to filter the pull requests by. | required |
        | states | Iterable[IssueState] | A list of states to filter the issues by. | required |
        | github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
        | order_by | IssueOrder | Ordering options for issues returned from the connection. | None |
        | filter_by | IssueFilters | Filtering options for issues returned from the connection. | None |
        | after | str | Returns the elements in the list that come after the specified cursor. | None |
        | before | str | Returns the elements in the list that come before the specified cursor. | None |
        | first | int | Returns the first n elements from the list. | None |
        | last | int | Returns the last n elements from the list. | None |
        | return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_issues(  # noqa\n    login: str,\n    labels: Iterable[str],\n    states: Iterable[graphql_schema.IssueState],\n    github_credentials: GitHubCredentials,\n    order_by: graphql_schema.IssueOrder = None,\n    filter_by: graphql_schema.IssueFilters = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of issues associated with this user.\n\n    Args:\n        login: The user's login.\n        labels: A list of label names to filter the pull requests by.\n        states: A list of states to filter the issues by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        order_by: Ordering options for issues returned from the\n            connection.\n        filter_by: Filtering options for issues returned from the\n            connection.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).issues(\n        **strip_kwargs(\n            labels=labels,\n            states=states,\n            order_by=order_by,\n            filter_by=filter_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"issues\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"issues\"]\n
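        Because labels and states are required arguments here, a call has to supply both filters up front. The sketch below assumes a label named "bug" exists and that the string enum value "OPEN" is accepted for IssueState.

        ```python
        import asyncio
        import os

        from prefect import flow
        from prefect_github import GitHubCredentials
        from prefect_github.user import query_user_issues


        @flow
        async def open_bug_issues():
            creds = GitHubCredentials(token=os.environ["GITHUB_TOKEN"])
            # Only open issues carrying the (hypothetical) "bug" label.
            return await query_user_issues(
                login="octocat",
                labels=["bug"],
                states=["OPEN"],
                github_credentials=creds,
                first=20,
            )


        asyncio.run(open_bug_issues())
        ```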
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_item_showcase","title":"query_user_item_showcase async","text":"

        Showcases a selection of repositories and gists that the profile owner has either curated or that have been selected automatically based on popularity.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | login | str | The user's login. | required |
        | github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
        | return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_item_showcase(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Showcases a selection of repositories and gists that the profile owner has\n    either curated or that have been selected automatically based on popularity.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).item_showcase(**strip_kwargs())\n\n    op_stack = (\n        \"user\",\n        \"itemShowcase\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"itemShowcase\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_organization","title":"query_user_organization async","text":"

        Find an organization by its login that the user belongs to.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | login | str | The user's login. | required |
        | organization_login | str | The login of the organization to find. | required |
        | github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
        | return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_organization(  # noqa\n    login: str,\n    organization_login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find an organization by its login that the user belongs to.\n\n    Args:\n        login: The user's login.\n        organization_login: The login of the organization to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).organization(\n        **strip_kwargs(\n            login=organization_login,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"organization\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"organization\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_organizations","title":"query_user_organizations async","text":"

        A list of organizations the user belongs to.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | login | str | The user's login. | required |
        | github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
        | after | str | Returns the elements in the list that come after the specified cursor. | None |
        | before | str | Returns the elements in the list that come before the specified cursor. | None |
        | first | int | Returns the first n elements from the list. | None |
        | last | int | Returns the last n elements from the list. | None |
        | return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_organizations(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of organizations the user belongs to.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).organizations(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"organizations\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"organizations\"]\n
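        The after argument resumes a listing from a cursor returned by a previous page; in this sketch the cursor is simply passed through as an optional flow parameter, and the login is a placeholder.

        ```python
        import asyncio
        import os

        from prefect import flow
        from prefect_github import GitHubCredentials
        from prefect_github.user import query_user_organizations


        @flow
        async def next_page_of_orgs(cursor: str = None):
            creds = GitHubCredentials(token=os.environ["GITHUB_TOKEN"])
            # Request ten organizations, resuming after the given cursor if provided.
            return await query_user_organizations(
                login="octocat",
                github_credentials=creds,
                first=10,
                after=cursor,
            )


        asyncio.run(next_page_of_orgs())
        ```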
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_packages","title":"query_user_packages async","text":"

        A list of packages under the owner.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | login | str | The user's login. | required |
        | github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
        | after | str | Returns the elements in the list that come after the specified cursor. | None |
        | before | str | Returns the elements in the list that come before the specified cursor. | None |
        | first | int | Returns the first n elements from the list. | None |
        | last | int | Returns the last n elements from the list. | None |
        | names | Iterable[str] | Find packages by their names. | None |
        | repository_id | str | Find packages in a repository by ID. | None |
        | package_type | PackageType | Filter registry package by type. | None |
        | order_by | PackageOrder | Ordering of the returned packages. | {'field': 'CREATED_AT', 'direction': 'DESC'} |
        | return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_packages(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    names: Iterable[str] = None,\n    repository_id: str = None,\n    package_type: graphql_schema.PackageType = None,\n    order_by: graphql_schema.PackageOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of packages under the owner.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        names: Find packages by their names.\n        repository_id: Find packages in a repository by ID.\n        package_type: Filter registry package by type.\n        order_by: Ordering of the returned packages.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).packages(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            names=names,\n            repository_id=repository_id,\n            package_type=package_type,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"packages\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"packages\"]\n
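        The order_by default above is a plain mapping, so overriding it is a matter of passing a mapping of the same shape. This sketch keeps the documented CREATED_AT field but flips the direction to list the oldest packages first; the login and token source are placeholders.

        ```python
        import asyncio
        import os

        from prefect import flow
        from prefect_github import GitHubCredentials
        from prefect_github.user import query_user_packages


        @flow
        async def oldest_packages_first():
            creds = GitHubCredentials(token=os.environ["GITHUB_TOKEN"])
            # Override the default ordering (CREATED_AT / DESC) to ascending.
            return await query_user_packages(
                login="octocat",
                github_credentials=creds,
                order_by={"field": "CREATED_AT", "direction": "ASC"},
                first=10,
            )


        asyncio.run(oldest_packages_first())
        ```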
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_pinnable_items","title":"query_user_pinnable_items async","text":"

        A list of repositories and gists this profile owner can pin to their profile.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | login | str | The user's login. | required |
        | types | Iterable[PinnableItemType] | Filter the types of pinnable items that are returned. | required |
        | github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
        | after | str | Returns the elements in the list that come after the specified cursor. | None |
        | before | str | Returns the elements in the list that come before the specified cursor. | None |
        | first | int | Returns the first n elements from the list. | None |
        | last | int | Returns the last n elements from the list. | None |
        | return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_pinnable_items(  # noqa\n    login: str,\n    types: Iterable[graphql_schema.PinnableItemType],\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories and gists this profile owner can pin to their profile.\n\n    Args:\n        login: The user's login.\n        types: Filter the types of pinnable items that are\n            returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).pinnable_items(\n        **strip_kwargs(\n            types=types,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"pinnableItems\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"pinnableItems\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_pinned_items","title":"query_user_pinned_items async","text":"

        A list of repositories and gists this profile owner has pinned to their profile.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | login | str | The user's login. | required |
        | types | Iterable[PinnableItemType] | Filter the types of pinned items that are returned. | required |
        | github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
        | after | str | Returns the elements in the list that come after the specified cursor. | None |
        | before | str | Returns the elements in the list that come before the specified cursor. | None |
        | first | int | Returns the first n elements from the list. | None |
        | last | int | Returns the last n elements from the list. | None |
        | return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_pinned_items(  # noqa\n    login: str,\n    types: Iterable[graphql_schema.PinnableItemType],\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories and gists this profile owner has pinned to their profile.\n\n    Args:\n        login: The user's login.\n        types: Filter the types of pinned items that are returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).pinned_items(\n        **strip_kwargs(\n            types=types,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"pinnedItems\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"pinnedItems\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_project","title":"query_user_project async","text":"

        Find project by number.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | login | str | The user's login. | required |
        | number | int | The project number to find. | required |
        | github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
        | return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_project(  # noqa\n    login: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find project by number.\n\n    Args:\n        login: The user's login.\n        number: The project number to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).project(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"project\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"project\"]\n
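        A sketch of fetching a single classic project by number and trimming the response with return_fields. The project number, login, and the field names "name" and "url" are assumptions; return_fields must name snake_case fields available in the default query config.

        ```python
        import asyncio
        import os

        from prefect import flow
        from prefect_github import GitHubCredentials
        from prefect_github.user import query_user_project


        @flow
        async def project_by_number():
            creds = GitHubCredentials(token=os.environ["GITHUB_TOKEN"])
            # Request only a couple of fields (assumed to be part of the default config).
            return await query_user_project(
                login="octocat",
                number=1,
                github_credentials=creds,
                return_fields=["name", "url"],
            )


        asyncio.run(project_by_number())
        ```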
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_project_next","title":"query_user_project_next async","text":"

        Find a project by project (beta) number.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | login | str | The user's login. | required |
        | number | int | The project (beta) number. | required |
        | github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
        | return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_project_next(  # noqa\n    login: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find a project by project (beta) number.\n\n    Args:\n        login: The user's login.\n        number: The project (beta) number.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).project_next(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"projectNext\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"projectNext\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_project_v2","title":"query_user_project_v2 async","text":"

        Find a project by number.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | login | str | The user's login. | required |
        | number | int | The project number. | required |
        | github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
        | return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_project_v2(  # noqa\n    login: str,\n    number: int,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find a project by number.\n\n    Args:\n        login: The user's login.\n        number: The project number.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).project_v2(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"projectV2\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"projectV2\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_projects","title":"query_user_projects async","text":"

        A list of projects under the owner.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | login | str | The user's login. | required |
        | states | Iterable[ProjectState] | A list of states to filter the projects by. | required |
        | github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
        | order_by | ProjectOrder | Ordering options for projects returned from the connection. | None |
        | search | str | Query to search projects by, currently only searching by name. | None |
        | after | str | Returns the elements in the list that come after the specified cursor. | None |
        | before | str | Returns the elements in the list that come before the specified cursor. | None |
        | first | int | Returns the first n elements from the list. | None |
        | last | int | Returns the last n elements from the list. | None |
        | return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_projects(  # noqa\n    login: str,\n    states: Iterable[graphql_schema.ProjectState],\n    github_credentials: GitHubCredentials,\n    order_by: graphql_schema.ProjectOrder = None,\n    search: str = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of projects under the owner.\n\n    Args:\n        login: The user's login.\n        states: A list of states to filter the projects by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        order_by: Ordering options for projects returned from the\n            connection.\n        search: Query to search projects by, currently only searching\n            by name.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).projects(\n        **strip_kwargs(\n            states=states,\n            order_by=order_by,\n            search=search,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"projects\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"projects\"]\n
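        states is required, so a call must always narrow the listing to at least one state. The sketch below assumes the string enum value "OPEN" is accepted for ProjectState and uses a hypothetical "roadmap" search term.

        ```python
        import asyncio
        import os

        from prefect import flow
        from prefect_github import GitHubCredentials
        from prefect_github.user import query_user_projects


        @flow
        async def open_projects():
            creds = GitHubCredentials(token=os.environ["GITHUB_TOKEN"])
            # Restrict to open projects whose name matches the search term.
            return await query_user_projects(
                login="octocat",
                states=["OPEN"],
                github_credentials=creds,
                search="roadmap",
                first=10,
            )


        asyncio.run(open_projects())
        ```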
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_projects_next","title":"query_user_projects_next async","text":"

        A list of projects (beta) under the owner.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | login | str | The user's login. | required |
        | github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
        | query | str | A project (beta) to search for under the owner. | None |
        | sort_by | ProjectNextOrderField | How to order the returned projects (beta). | 'TITLE' |
        | after | str | Returns the elements in the list that come after the specified cursor. | None |
        | before | str | Returns the elements in the list that come before the specified cursor. | None |
        | first | int | Returns the first n elements from the list. | None |
        | last | int | Returns the last n elements from the list. | None |
        | return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_projects_next(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    query: str = None,\n    sort_by: graphql_schema.ProjectNextOrderField = \"TITLE\",\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of projects (beta) under the owner.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        query: A project (beta) to search for under the the owner.\n        sort_by: How to order the returned projects (beta).\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).projects_next(\n        **strip_kwargs(\n            query=query,\n            sort_by=sort_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"projectsNext\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"projectsNext\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_projects_v2","title":"query_user_projects_v2 async","text":"

        A list of projects under the owner.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | login | str | The user's login. | required |
        | github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
        | query | str | A project to search for under the owner. | None |
        | order_by | ProjectV2Order | How to order the returned projects. | {'field': 'NUMBER', 'direction': 'DESC'} |
        | after | str | Returns the elements in the list that come after the specified cursor. | None |
        | before | str | Returns the elements in the list that come before the specified cursor. | None |
        | first | int | Returns the first n elements from the list. | None |
        | last | int | Returns the last n elements from the list. | None |
        | return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_projects_v2(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    query: str = None,\n    order_by: graphql_schema.ProjectV2Order = {\"field\": \"NUMBER\", \"direction\": \"DESC\"},\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of projects under the owner.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        query: A project to search for under the the owner.\n        order_by: How to order the returned projects.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).projects_v2(\n        **strip_kwargs(\n            query=query,\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"projectsV2\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"projectsV2\"]\n
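        A sketch that keeps the documented default ordering shape for order_by but flips the direction, listing Projects V2 by ascending number. The login and token source are placeholders.

        ```python
        import asyncio
        import os

        from prefect import flow
        from prefect_github import GitHubCredentials
        from prefect_github.user import query_user_projects_v2


        @flow
        async def projects_v2_by_number_asc():
            creds = GitHubCredentials(token=os.environ["GITHUB_TOKEN"])
            # Same field as the default ordering, but ascending instead of descending.
            return await query_user_projects_v2(
                login="octocat",
                github_credentials=creds,
                order_by={"field": "NUMBER", "direction": "ASC"},
                first=10,
            )


        asyncio.run(projects_v2_by_number_asc())
        ```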
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_public_keys","title":"query_user_public_keys async","text":"

        A list of public keys associated with this user.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | login | str | The user's login. | required |
        | github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
        | after | str | Returns the elements in the list that come after the specified cursor. | None |
        | before | str | Returns the elements in the list that come before the specified cursor. | None |
        | first | int | Returns the first n elements from the list. | None |
        | last | int | Returns the last n elements from the list. | None |
        | return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_public_keys(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of public keys associated with this user.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).public_keys(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"publicKeys\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"publicKeys\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_pull_requests","title":"query_user_pull_requests async","text":"

        A list of pull requests associated with this user.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | login | str | The user's login. | required |
        | states | Iterable[PullRequestState] | A list of states to filter the pull requests by. | required |
        | labels | Iterable[str] | A list of label names to filter the pull requests by. | required |
        | github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
        | head_ref_name | str | The head ref name to filter the pull requests by. | None |
        | base_ref_name | str | The base ref name to filter the pull requests by. | None |
        | order_by | IssueOrder | Ordering options for pull requests returned from the connection. | None |
        | after | str | Returns the elements in the list that come after the specified cursor. | None |
        | before | str | Returns the elements in the list that come before the specified cursor. | None |
        | first | int | Returns the first n elements from the list. | None |
        | last | int | Returns the last n elements from the list. | None |
        | return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_pull_requests(  # noqa\n    login: str,\n    states: Iterable[graphql_schema.PullRequestState],\n    labels: Iterable[str],\n    github_credentials: GitHubCredentials,\n    head_ref_name: str = None,\n    base_ref_name: str = None,\n    order_by: graphql_schema.IssueOrder = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of pull requests associated with this user.\n\n    Args:\n        login: The user's login.\n        states: A list of states to filter the pull requests by.\n        labels: A list of label names to filter the pull requests\n            by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        head_ref_name: The head ref name to filter the pull\n            requests by.\n        base_ref_name: The base ref name to filter the pull\n            requests by.\n        order_by: Ordering options for pull requests returned from\n            the connection.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).pull_requests(\n        **strip_kwargs(\n            states=states,\n            labels=labels,\n            head_ref_name=head_ref_name,\n            base_ref_name=base_ref_name,\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"pullRequests\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"pullRequests\"]\n
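        Since states and labels are both required, a typical call combines them with the optional ref-name filters. The branch name "main", the label "enhancement", and the login are placeholders; "MERGED" is assumed to be accepted as a PullRequestState string value.

        ```python
        import asyncio
        import os

        from prefect import flow
        from prefect_github import GitHubCredentials
        from prefect_github.user import query_user_pull_requests


        @flow
        async def merged_prs_to_main():
            creds = GitHubCredentials(token=os.environ["GITHUB_TOKEN"])
            # Merged pull requests labeled "enhancement" that targeted the main branch.
            return await query_user_pull_requests(
                login="octocat",
                states=["MERGED"],
                labels=["enhancement"],
                github_credentials=creds,
                base_ref_name="main",
                first=25,
            )


        asyncio.run(merged_prs_to_main())
        ```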
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_recent_projects","title":"query_user_recent_projects async","text":"

        Recent projects that this user has modified in the context of the owner.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | login | str | The user's login. | required |
        | github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
        | after | str | Returns the elements in the list that come after the specified cursor. | None |
        | before | str | Returns the elements in the list that come before the specified cursor. | None |
        | first | int | Returns the first n elements from the list. | None |
        | last | int | Returns the last n elements from the list. | None |
        | return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_recent_projects(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Recent projects that this user has modified in the context of the owner.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).recent_projects(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"recentProjects\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"recentProjects\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_repositories","title":"query_user_repositories async","text":"

        A list of repositories that the user owns.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | login | str | The user's login. | required |
        | github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
        | privacy | RepositoryPrivacy | If non-null, filters repositories according to privacy. | None |
        | order_by | RepositoryOrder | Ordering options for repositories returned from the connection. | None |
        | affiliations | Iterable[RepositoryAffiliation] | Array of viewer's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the current viewer owns. | None |
        | owner_affiliations | Iterable[RepositoryAffiliation] | Array of owner's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the organization or user being viewed owns. | ('OWNER', 'COLLABORATOR') |
        | is_locked | bool | If non-null, filters repositories according to whether they have been locked. | None |
        | after | str | Returns the elements in the list that come after the specified cursor. | None |
        | before | str | Returns the elements in the list that come before the specified cursor. | None |
        | first | int | Returns the first n elements from the list. | None |
        | last | int | Returns the last n elements from the list. | None |
        | is_fork | bool | If non-null, filters repositories according to whether they are forks of another repository. | None |
        | return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_repositories(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    privacy: graphql_schema.RepositoryPrivacy = None,\n    order_by: graphql_schema.RepositoryOrder = None,\n    affiliations: Iterable[graphql_schema.RepositoryAffiliation] = None,\n    owner_affiliations: Iterable[graphql_schema.RepositoryAffiliation] = (\n        \"OWNER\",\n        \"COLLABORATOR\",\n    ),\n    is_locked: bool = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    is_fork: bool = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories that the user owns.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        privacy: If non-null, filters repositories according to\n            privacy.\n        order_by: Ordering options for repositories returned from\n            the connection.\n        affiliations: Array of viewer's affiliation options for\n            repositories returned from the connection. For example,\n            OWNER will include only repositories that the current viewer\n            owns.\n        owner_affiliations: Array of owner's affiliation options\n            for repositories returned from the connection. For example,\n            OWNER will include only repositories that the organization\n            or user being viewed owns.\n        is_locked: If non-null, filters repositories according to\n            whether they have been locked.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        is_fork: If non-null, filters repositories according to\n            whether they are forks of another repository.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repositories(\n        **strip_kwargs(\n            privacy=privacy,\n            order_by=order_by,\n            affiliations=affiliations,\n            owner_affiliations=owner_affiliations,\n            is_locked=is_locked,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            is_fork=is_fork,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"repositories\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"repositories\"]\n
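        A sketch combining the privacy and fork filters to list a user's own public, non-fork repositories. The login and token source are placeholders; "PUBLIC" is assumed to be accepted as a RepositoryPrivacy string value.

        ```python
        import asyncio
        import os

        from prefect import flow
        from prefect_github import GitHubCredentials
        from prefect_github.user import query_user_repositories


        @flow
        async def public_source_repos():
            creds = GitHubCredentials(token=os.environ["GITHUB_TOKEN"])
            # Public repositories owned by the user, excluding forks.
            return await query_user_repositories(
                login="octocat",
                github_credentials=creds,
                privacy="PUBLIC",
                is_fork=False,
                first=30,
            )


        asyncio.run(public_source_repos())
        ```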
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_repositories_contributed_to","title":"query_user_repositories_contributed_to async","text":"

        A list of repositories that the user recently contributed to.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | login | str | The user's login. | required |
        | github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
        | privacy | RepositoryPrivacy | If non-null, filters repositories according to privacy. | None |
        | order_by | RepositoryOrder | Ordering options for repositories returned from the connection. | None |
        | is_locked | bool | If non-null, filters repositories according to whether they have been locked. | None |
        | include_user_repositories | bool | If true, include user repositories. | None |
        | contribution_types | Iterable[RepositoryContributionType] | If non-null, include only the specified types of contributions. The GitHub.com UI uses [COMMIT, ISSUE, PULL_REQUEST, REPOSITORY]. | None |
        | after | str | Returns the elements in the list that come after the specified cursor. | None |
        | before | str | Returns the elements in the list that come before the specified cursor. | None |
        | first | int | Returns the first n elements from the list. | None |
        | last | int | Returns the last n elements from the list. | None |
        | return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

        Returns:

        | Type | Description |
        | --- | --- |
        | Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/user.py
        @task\nasync def query_user_repositories_contributed_to(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    privacy: graphql_schema.RepositoryPrivacy = None,\n    order_by: graphql_schema.RepositoryOrder = None,\n    is_locked: bool = None,\n    include_user_repositories: bool = None,\n    contribution_types: Iterable[graphql_schema.RepositoryContributionType] = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories that the user recently contributed to.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        privacy: If non-null, filters repositories\n            according to privacy.\n        order_by: Ordering options for repositories\n            returned from the connection.\n        is_locked: If non-null, filters repositories\n            according to whether they have been locked.\n        include_user_repositories: If true, include\n            user repositories.\n        contribution_types: If non-null, include\n            only the specified types of contributions. The GitHub.com UI\n            uses [COMMIT, ISSUE, PULL_REQUEST, REPOSITORY].\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list\n            that come before the specified cursor.\n        first: Returns the first _n_ elements from\n            the list.\n        last: Returns the last _n_ elements from the\n            list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repositories_contributed_to(\n        **strip_kwargs(\n            privacy=privacy,\n            order_by=order_by,\n            is_locked=is_locked,\n            include_user_repositories=include_user_repositories,\n            contribution_types=contribution_types,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"repositoriesContributedTo\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"repositoriesContributedTo\"]\n
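A minimal usage sketch, assuming a GitHubCredentials block named "github-token" has already been saved; the login, page size, and contribution type are placeholders:

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.user import query_user_repositories_contributed_to


@flow
async def recent_contributions():
    github_credentials = await GitHubCredentials.load("github-token")
    # Fetch the first 10 repositories the user recently contributed commits to.
    return await query_user_repositories_contributed_to(
        login="octocat",
        github_credentials=github_credentials,
        first=10,
        contribution_types=["COMMIT"],
    )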
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_repository","title":"query_user_repository async","text":"

        Find Repository.

Parameters:

- login (str, required): The user's login.
- name (str, required): Name of Repository to find.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- follow_renames (bool, default True): Follow repository renames. If disabled, a repository referenced by its old name will return an error.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/user.py
        @task\nasync def query_user_repository(  # noqa\n    login: str,\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find Repository.\n\n    Args:\n        login: The user's login.\n        name: Name of Repository to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repository(\n        **strip_kwargs(\n            name=name,\n            follow_renames=follow_renames,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"repository\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"repository\"]\n
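For illustration only, a sketch of looking up a single repository from a flow; the block name, login, and repository name are placeholders:

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.user import query_user_repository


@flow
async def fetch_repository():
    github_credentials = await GitHubCredentials.load("github-token")
    # Find one repository owned by the given user.
    return await query_user_repository(
        login="octocat",
        name="hello-world",
        github_credentials=github_credentials,
    )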
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_repository_discussion_comments","title":"query_user_repository_discussion_comments async","text":"

        Discussion comments this user has authored.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- repository_id (str, default None): Filter discussion comments to only those in a specific repository.
- only_answers (bool, default False): Filter discussion comments to only those that were marked as the answer.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/user.py
        @task\nasync def query_user_repository_discussion_comments(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    repository_id: str = None,\n    only_answers: bool = False,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Discussion comments this user has authored.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list\n            that come after the specified cursor.\n        before: Returns the elements in the list\n            that come before the specified cursor.\n        first: Returns the first _n_ elements\n            from the list.\n        last: Returns the last _n_ elements from\n            the list.\n        repository_id: Filter discussion comments\n            to only those in a specific repository.\n        only_answers: Filter discussion comments\n            to only those that were marked as the answer.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repository_discussion_comments(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            repository_id=repository_id,\n            only_answers=only_answers,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"repositoryDiscussionComments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"repositoryDiscussionComments\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_repository_discussions","title":"query_user_repository_discussions async","text":"

        Discussions this user has started.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- order_by (DiscussionOrder, default {'field': 'CREATED_AT', 'direction': 'DESC'}): Ordering options for discussions returned from the connection.
- repository_id (str, default None): Filter discussions to only those in a specific repository.
- answered (bool, default None): Filter discussions to only those that have been answered or not. Defaults to including both answered and unanswered discussions.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/user.py
        @task\nasync def query_user_repository_discussions(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.DiscussionOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    repository_id: str = None,\n    answered: bool = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Discussions this user has started.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the\n            list.\n        order_by: Ordering options for discussions\n            returned from the connection.\n        repository_id: Filter discussions to only those\n            in a specific repository.\n        answered: Filter discussions to only those that\n            have been answered or not. Defaults to including both\n            answered and unanswered discussions.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).repository_discussions(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n            repository_id=repository_id,\n            answered=answered,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"repositoryDiscussions\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"repositoryDiscussions\"]\n
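The order_by argument is passed as a plain dict in the same shape as the default above. A hedged sketch that lists a user's oldest discussions first; the block name and login are placeholders:

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.user import query_user_repository_discussions


@flow
async def oldest_discussions():
    github_credentials = await GitHubCredentials.load("github-token")
    # Override the default ordering to ascending creation time.
    return await query_user_repository_discussions(
        login="octocat",
        github_credentials=github_credentials,
        first=5,
        order_by={"field": "CREATED_AT", "direction": "ASC"},
    )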
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_saved_replies","title":"query_user_saved_replies async","text":"

        Replies this user has saved.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- order_by (SavedReplyOrder, default {'field': 'UPDATED_AT', 'direction': 'DESC'}): The field to order saved replies by.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/user.py
        @task\nasync def query_user_saved_replies(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.SavedReplyOrder = {\n        \"field\": \"UPDATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Replies this user has saved.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        order_by: The field to order saved replies by.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).saved_replies(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"savedReplies\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"savedReplies\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_sponsoring","title":"query_user_sponsoring async","text":"

        List of users and organizations this entity is sponsoring.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- order_by (SponsorOrder, default {'field': 'RELEVANCE', 'direction': 'DESC'}): Ordering options for the users and organizations returned from the connection.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/user.py
        @task\nasync def query_user_sponsoring(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.SponsorOrder = {\"field\": \"RELEVANCE\", \"direction\": \"DESC\"},\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of users and organizations this entity is sponsoring.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        order_by: Ordering options for the users and organizations\n            returned from the connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsoring(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"sponsoring\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"sponsoring\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_sponsors","title":"query_user_sponsors async","text":"

        List of sponsors for this user or organization.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- tier_id (str, default None): If given, will filter for sponsors at the given tier. Will only return sponsors whose tier the viewer is permitted to see.
- order_by (SponsorOrder, default {'field': 'RELEVANCE', 'direction': 'DESC'}): Ordering options for sponsors returned from the connection.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/user.py
        @task\nasync def query_user_sponsors(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    tier_id: str = None,\n    order_by: graphql_schema.SponsorOrder = {\"field\": \"RELEVANCE\", \"direction\": \"DESC\"},\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of sponsors for this user or organization.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        tier_id: If given, will filter for sponsors at the given tier.\n            Will only return sponsors whose tier the viewer is permitted\n            to see.\n        order_by: Ordering options for sponsors returned from the\n            connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsors(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            tier_id=tier_id,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"sponsors\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"sponsors\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_sponsors_activities","title":"query_user_sponsors_activities async","text":"

        Events involving this sponsorable, such as new sponsorships.

Parameters:

- login (str, required): The user's login.
- actions (Iterable[SponsorsActivityAction], required): Filter activities to only the specified actions.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- period (SponsorsActivityPeriod, default 'MONTH'): Filter activities returned to only those that occurred in the most recent specified time period. Set to ALL to avoid filtering by when the activity occurred.
- order_by (SponsorsActivityOrder, default {'field': 'TIMESTAMP', 'direction': 'DESC'}): Ordering options for activity returned from the connection.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/user.py
        @task\nasync def query_user_sponsors_activities(  # noqa\n    login: str,\n    actions: Iterable[graphql_schema.SponsorsActivityAction],\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    period: graphql_schema.SponsorsActivityPeriod = \"MONTH\",\n    order_by: graphql_schema.SponsorsActivityOrder = {\n        \"field\": \"TIMESTAMP\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Events involving this sponsorable, such as new sponsorships.\n\n    Args:\n        login: The user's login.\n        actions: Filter activities to only the specified\n            actions.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        period: Filter activities returned to only those\n            that occurred in the most recent specified time period. Set\n            to ALL to avoid filtering by when the activity occurred.\n        order_by: Ordering options for activity returned\n            from the connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsors_activities(\n        **strip_kwargs(\n            actions=actions,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            period=period,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"sponsorsActivities\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"sponsorsActivities\"]\n
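Note that actions is required and comes before github_credentials in the signature. A hedged sketch; the block name, login, and the NEW_SPONSORSHIP action value are assumptions, and period="ALL" disables the time filter as described above:

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.user import query_user_sponsors_activities


@flow
async def new_sponsorships():
    github_credentials = await GitHubCredentials.load("github-token")
    # Include only new-sponsorship events, regardless of when they occurred.
    return await query_user_sponsors_activities(
        login="octocat",
        actions=["NEW_SPONSORSHIP"],
        github_credentials=github_credentials,
        period="ALL",
    )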
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_sponsors_listing","title":"query_user_sponsors_listing async","text":"

        The GitHub Sponsors listing for this user or organization.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/user.py
        @task\nasync def query_user_sponsors_listing(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The GitHub Sponsors listing for this user or organization.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsors_listing(**strip_kwargs())\n\n    op_stack = (\n        \"user\",\n        \"sponsorsListing\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"sponsorsListing\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_sponsorship_for_viewer_as_sponsor","title":"query_user_sponsorship_for_viewer_as_sponsor async","text":"

        The sponsorship from the viewer to this user/organization; that is, the sponsorship where you're the sponsor. Only returns a sponsorship if it is active.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/user.py
        @task\nasync def query_user_sponsorship_for_viewer_as_sponsor(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The sponsorship from the viewer to this user/organization; that is, the\n    sponsorship where you're the sponsor. Only returns a sponsorship if it is\n    active.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsorship_for_viewer_as_sponsor(**strip_kwargs())\n\n    op_stack = (\n        \"user\",\n        \"sponsorshipForViewerAsSponsor\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"sponsorshipForViewerAsSponsor\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_sponsorship_for_viewer_as_sponsorable","title":"query_user_sponsorship_for_viewer_as_sponsorable async","text":"

        The sponsorship from this user/organization to the viewer; that is, the sponsorship you're receiving. Only returns a sponsorship if it is active.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/user.py
        @task\nasync def query_user_sponsorship_for_viewer_as_sponsorable(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The sponsorship from this user/organization to the viewer; that is, the\n    sponsorship you're receiving. Only returns a sponsorship if it is active.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsorship_for_viewer_as_sponsorable(**strip_kwargs())\n\n    op_stack = (\n        \"user\",\n        \"sponsorshipForViewerAsSponsorable\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"sponsorshipForViewerAsSponsorable\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_sponsorship_newsletters","title":"query_user_sponsorship_newsletters async","text":"

        List of sponsorship updates sent from this sponsorable to sponsors.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- order_by (SponsorshipNewsletterOrder, default {'field': 'CREATED_AT', 'direction': 'DESC'}): Ordering options for sponsorship updates returned from the connection.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/user.py
        @task\nasync def query_user_sponsorship_newsletters(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.SponsorshipNewsletterOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of sponsorship updates sent from this sponsorable to sponsors.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the\n            list.\n        order_by: Ordering options for sponsorship\n            updates returned from the connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsorship_newsletters(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"sponsorshipNewsletters\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"sponsorshipNewsletters\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_sponsorships_as_maintainer","title":"query_user_sponsorships_as_maintainer async","text":"

        This object's sponsorships as the maintainer.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- include_private (bool, default False): Whether or not to include private sponsorships in the result set.
- order_by (SponsorshipOrder, default None): Ordering options for sponsorships returned from this connection. If left blank, the sponsorships will be ordered based on relevancy to the viewer.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/user.py
        @task\nasync def query_user_sponsorships_as_maintainer(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    include_private: bool = False,\n    order_by: graphql_schema.SponsorshipOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    This object's sponsorships as the maintainer.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from\n            the list.\n        last: Returns the last _n_ elements from the\n            list.\n        include_private: Whether or not to include\n            private sponsorships in the result set.\n        order_by: Ordering options for sponsorships\n            returned from this connection. If left blank, the\n            sponsorships will be ordered based on relevancy to the\n            viewer.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsorships_as_maintainer(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            include_private=include_private,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"sponsorshipsAsMaintainer\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"sponsorshipsAsMaintainer\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_sponsorships_as_sponsor","title":"query_user_sponsorships_as_sponsor async","text":"

        This object's sponsorships as the sponsor.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- order_by (SponsorshipOrder, default None): Ordering options for sponsorships returned from this connection. If left blank, the sponsorships will be ordered based on relevancy to the viewer.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/user.py
        @task\nasync def query_user_sponsorships_as_sponsor(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.SponsorshipOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    This object's sponsorships as the sponsor.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the\n            list.\n        order_by: Ordering options for sponsorships\n            returned from this connection. If left blank, the\n            sponsorships will be ordered based on relevancy to the\n            viewer.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).sponsorships_as_sponsor(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"sponsorshipsAsSponsor\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"sponsorshipsAsSponsor\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_starred_repositories","title":"query_user_starred_repositories async","text":"

        Repositories the user has starred.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- owned_by_viewer (bool, default None): Filters starred repositories to only return repositories owned by the viewer.
- order_by (StarOrder, default None): Order for connection.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/user.py
        @task\nasync def query_user_starred_repositories(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    owned_by_viewer: bool = None,\n    order_by: graphql_schema.StarOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Repositories the user has starred.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the list.\n        owned_by_viewer: Filters starred repositories to\n            only return repositories owned by the viewer.\n        order_by: Order for connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).starred_repositories(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            owned_by_viewer=owned_by_viewer,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"starredRepositories\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"starredRepositories\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_status","title":"query_user_status async","text":"

        The user's description of what they're currently doing.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/user.py
        @task\nasync def query_user_status(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The user's description of what they're currently doing.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).status(**strip_kwargs())\n\n    op_stack = (\n        \"user\",\n        \"status\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"status\"]\n
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_top_repositories","title":"query_user_top_repositories async","text":"

        Repositories the user has contributed to, ordered by contribution rank, plus repositories the user has created.

Parameters:

- login (str, required): The user's login.
- order_by (RepositoryOrder, required): Ordering options for repositories returned from the connection.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- since (datetime, default None): How far back in time to fetch contributed repositories.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/user.py
        @task\nasync def query_user_top_repositories(  # noqa\n    login: str,\n    order_by: graphql_schema.RepositoryOrder,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    since: datetime = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Repositories the user has contributed to, ordered by contribution rank, plus\n    repositories the user has created.\n\n    Args:\n        login: The user's login.\n        order_by: Ordering options for repositories returned\n            from the connection.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        since: How far back in time to fetch contributed\n            repositories.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).top_repositories(\n        **strip_kwargs(\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            since=since,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"topRepositories\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"topRepositories\"]\n
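Here order_by is required and since accepts a datetime. A hedged sketch; the block name, login, ordering field, and 90-day window are placeholders:

from datetime import datetime, timedelta, timezone

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.user import query_user_top_repositories


@flow
async def top_repositories_last_quarter():
    github_credentials = await GitHubCredentials.load("github-token")
    # Only consider contributions from roughly the last 90 days.
    since = datetime.now(timezone.utc) - timedelta(days=90)
    return await query_user_top_repositories(
        login="octocat",
        order_by={"field": "UPDATED_AT", "direction": "DESC"},
        github_credentials=github_credentials,
        first=10,
        since=since,
    )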
        "},{"location":"integrations/prefect-github/user/#prefect_github.user.query_user_watching","title":"query_user_watching async","text":"

        A list of repositories the given user is watching.

Parameters:

- login (str, required): The user's login.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- privacy (RepositoryPrivacy, default None): If non-null, filters repositories according to privacy.
- order_by (RepositoryOrder, default None): Ordering options for repositories returned from the connection.
- affiliations (Iterable[RepositoryAffiliation], default None): Affiliation options for repositories returned from the connection. If none specified, the results will include repositories for which the current viewer is an owner or collaborator, or member.
- owner_affiliations (Iterable[RepositoryAffiliation], default ('OWNER', 'COLLABORATOR')): Array of owner's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the organization or user being viewed owns.
- is_locked (bool, default None): If non-null, filters repositories according to whether they have been locked.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/user.py
        @task\nasync def query_user_watching(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    privacy: graphql_schema.RepositoryPrivacy = None,\n    order_by: graphql_schema.RepositoryOrder = None,\n    affiliations: Iterable[graphql_schema.RepositoryAffiliation] = None,\n    owner_affiliations: Iterable[graphql_schema.RepositoryAffiliation] = (\n        \"OWNER\",\n        \"COLLABORATOR\",\n    ),\n    is_locked: bool = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories the given user is watching.\n\n    Args:\n        login: The user's login.\n        github_credentials: Credentials to use for authentication with GitHub.\n        privacy: If non-null, filters repositories according to privacy.\n        order_by: Ordering options for repositories returned from the\n            connection.\n        affiliations: Affiliation options for repositories returned\n            from the connection. If none specified, the results will\n            include repositories for which the current viewer is an\n            owner or collaborator, or member.\n        owner_affiliations: Array of owner's affiliation options for\n            repositories returned from the connection. For example,\n            OWNER will include only repositories that the organization\n            or user being viewed owns.\n        is_locked: If non-null, filters repositories according to\n            whether they have been locked.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.user(\n        **strip_kwargs(\n            login=login,\n        )\n    ).watching(\n        **strip_kwargs(\n            privacy=privacy,\n            order_by=order_by,\n            affiliations=affiliations,\n            owner_affiliations=owner_affiliations,\n            is_locked=is_locked,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"user\",\n        \"watching\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"user\"][\"watching\"]\n
        "},{"location":"integrations/prefect-github/utils/","title":"Utils","text":""},{"location":"integrations/prefect-github/utils/#prefect_github.utils","title":"prefect_github.utils","text":"

        Utilities to assist with using generated collections.

        "},{"location":"integrations/prefect-github/utils/#prefect_github.utils.camel_to_snake_case","title":"camel_to_snake_case","text":"

Converts CamelCase and lowerCamelCase to snake_case.

Parameters:

- string (str): The string in CamelCase or lowerCamelCase to convert.

Returns:

- A snake_case version of the string.

        Source code in prefect_github/utils.py
        def camel_to_snake_case(string: str) -> str:\n    \"\"\"\n    Converts CamelCase and lowerCamelCase to snake_case.\n    Args:\n        string: The string in CamelCase or lowerCamelCase to convert.\n    Returns:\n        A snake_case version of the string.\n    \"\"\"\n    string = SNAKE_CASE_REGEX1.sub(r\"\\1_\\2\", string)\n    return SNAKE_CASE_REGEX2.sub(r\"\\1_\\2\", string).lower()\n
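For example, applied to field names used elsewhere in this collection:

from prefect_github.utils import camel_to_snake_case

camel_to_snake_case("ownerAffiliations")  # "owner_affiliations"
camel_to_snake_case("RepositoryOrder")    # "repository_order"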
        "},{"location":"integrations/prefect-github/utils/#prefect_github.utils.initialize_return_fields_defaults","title":"initialize_return_fields_defaults","text":"

Reads config_path to parse out the desired default fields to return.

Parameters:

- config_path (Union[Path, str]): The path to the config file.

        Source code in prefect_github/utils.py
        def initialize_return_fields_defaults(config_path: Union[Path, str]) -> List:\n    \"\"\"\n    Reads config_path to parse out the desired default fields to return.\n    Args:\n        config_path: The path to the config file.\n    \"\"\"\n    with open(config_path, \"r\") as f:\n        config = json.load(f)\n\n    return_fields_defaults = defaultdict(lambda: [])\n    for op_type, sub_op_types in config.items():\n        for sub_op_type in sub_op_types:\n            if isinstance(sub_op_type, str):\n                return_fields_defaults[(op_type,)].append(\n                    camel_to_snake_case(sub_op_type)\n                )\n            elif isinstance(sub_op_type, dict):\n                sub_op_type_key = list(sub_op_type.keys())[0]\n                return_fields_defaults[(op_type, sub_op_type_key)] = [\n                    camel_to_snake_case(field) for field in sub_op_type[sub_op_type_key]\n                ]\n    return return_fields_defaults\n
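A minimal sketch of how the parsed defaults are keyed, using a hypothetical config file that mirrors the shape of configs/query/*.json (the file name and field names below are made up for illustration):

```python
import json
from pathlib import Path

from prefect_github.utils import initialize_return_fields_defaults

# Hypothetical config; the real defaults ship in configs/query/*.json.
config = {"viewer": ["login", {"commitComments": ["totalCount"]}]}
config_path = Path("example_query_config.json")
config_path.write_text(json.dumps(config))

defaults = initialize_return_fields_defaults(config_path)
# Top-level fields are keyed by a one-element tuple of the operation type,
# while nested selections are keyed by (op_type, sub_op_type); field names
# are converted to snake_case on the way in.
assert defaults[("viewer",)] == ["login"]
assert defaults[("viewer", "commitComments")] == ["total_count"]
```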
        "},{"location":"integrations/prefect-github/utils/#prefect_github.utils.strip_kwargs","title":"strip_kwargs","text":"

Drops keyword arguments whose value is None, because sgqlc.Operation errors out if a keyword argument is provided but set to None.

Parameters:

- **kwargs (Dict, default {}): Input keyword arguments.

Returns:

- Dict: Stripped version of kwargs.

        Source code in prefect_github/utils.py
        def strip_kwargs(**kwargs: Dict) -> Dict:\n    \"\"\"\n    Drops keyword arguments if value is None because sgqlc.Operation\n    errors out if a keyword argument is provided, but set to None.\n\n    Args:\n        **kwargs: Input keyword arguments.\n\n    Returns:\n        Stripped version of kwargs.\n    \"\"\"\n    stripped_dict = {}\n    for k, v in kwargs.items():\n        if isinstance(v, dict):\n            v = strip_kwargs(**v)\n        if v is not None:\n            stripped_dict[k] = v\n    return stripped_dict or {}\n
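Because strip_kwargs is a plain helper, its behaviour is easy to see in isolation; a quick sketch:

```python
from prefect_github.utils import strip_kwargs

# None-valued keyword arguments are dropped so they are never handed to
# sgqlc.Operation.
print(strip_kwargs(login="octocat", first=None, order_by=None))
# {'login': 'octocat'}

# Nested dictionaries are stripped recursively.
print(strip_kwargs(order_by={"field": "CREATED_AT", "direction": None}))
# {'order_by': {'field': 'CREATED_AT'}}
```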
        "},{"location":"integrations/prefect-github/viewer/","title":"Viewer","text":""},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer","title":"prefect_github.viewer","text":"

A module containing GitHub query_viewer* tasks.

        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer","title":"query_viewer async","text":"

        The query root of GitHub's GraphQL interface.

Parameters:

- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer(  # noqa\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The query root of GitHub's GraphQL interface.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs())\n\n    op_stack = (\"viewer\",)\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"]\n
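A minimal sketch of calling the task from a flow. It assumes a GitHubCredentials block has already been saved under the hypothetical name "github-credentials" and that login is one of the selectable snake_case return fields:

```python
import asyncio

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer


@flow
async def who_am_i():
    # "github-credentials" is a placeholder block name created beforehand,
    # e.g. GitHubCredentials(token=...).save("github-credentials")
    credentials = await GitHubCredentials.load("github-credentials")
    viewer = await query_viewer(credentials, return_fields=["login"])
    return viewer


if __name__ == "__main__":
    print(asyncio.run(who_am_i()))
```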
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_commit_comments","title":"query_viewer_commit_comments async","text":"

        A list of commit comments made by this user.

Parameters:

- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_commit_comments(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of commit comments made by this user.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).commit_comments(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"commitComments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"commitComments\"]\n
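The after/before/first/last arguments map directly onto GitHub's cursor pagination. A minimal sketch requesting a single page of ten comments (the block name is a hypothetical placeholder):

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_commit_comments


@flow
async def recent_commit_comments():
    credentials = await GitHubCredentials.load("github-credentials")  # placeholder block name
    # Request only the first page of ten comments; `after` could then be set
    # to a cursor from the previous page to fetch the next one.
    return await query_viewer_commit_comments(credentials, first=10)
```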
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_contributions_collection","title":"query_viewer_contributions_collection async","text":"

        The collection of contributions this user has made to different repositories.

Parameters:

- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- organization_id (str, default None): The ID of the organization used to filter contributions.
- from_ (datetime, default None): Only contributions made at this time or later will be counted. If omitted, defaults to a year ago.
- to (datetime, default None): Only contributions made before and up to (including) this time will be counted. If omitted, defaults to the current time or one year from the provided from argument.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_contributions_collection(  # noqa\n    github_credentials: GitHubCredentials,\n    organization_id: str = None,\n    from_: datetime = None,\n    to: datetime = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The collection of contributions this user has made to different repositories.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        organization_id: The ID of the organization\n            used to filter contributions.\n        from_: Only contributions made at this time or\n            later will be counted. If omitted, defaults to a year ago.\n        to: Only contributions made before and up to\n            (including) this time will be counted. If omitted, defaults\n            to the current time or one year from the provided from\n            argument.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).contributions_collection(\n        **strip_kwargs(\n            organization_id=organization_id,\n            from_=from_,\n            to=to,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"contributionsCollection\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"contributionsCollection\"]\n
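A sketch of bounding the counting window with from_ and to. It assumes plain datetime objects are accepted for the DateTime arguments, as the type hints suggest, and uses a hypothetical block name:

```python
from datetime import datetime, timedelta, timezone

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_contributions_collection


@flow
async def contributions_last_quarter():
    credentials = await GitHubCredentials.load("github-credentials")  # placeholder block name
    now = datetime.now(timezone.utc)
    # Omitting from_/to would fall back to roughly the last year, per the
    # docstring; here the window is narrowed to the last 90 days.
    return await query_viewer_contributions_collection(
        credentials,
        from_=now - timedelta(days=90),
        to=now,
    )
```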
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_followers","title":"query_viewer_followers async","text":"

        A list of users the given user is followed by.

Parameters:

- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_followers(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of users the given user is followed by.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).followers(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"followers\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"followers\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_following","title":"query_viewer_following async","text":"

        A list of users the given user is following.

Parameters:

- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_following(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of users the given user is following.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).following(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"following\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"following\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_gist","title":"query_viewer_gist async","text":"

        Find gist by repo name.

Parameters:

- name (str, required): The gist name to find.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_gist(  # noqa\n    name: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find gist by repo name.\n\n    Args:\n        name: The gist name to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).gist(\n        **strip_kwargs(\n            name=name,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"gist\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"gist\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_gist_comments","title":"query_viewer_gist_comments async","text":"

        A list of gist comments made by this user.

Parameters:

- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_gist_comments(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of gist comments made by this user.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).gist_comments(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"gistComments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"gistComments\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_gists","title":"query_viewer_gists async","text":"

        A list of the Gists the user has created.

Parameters:

- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- privacy (GistPrivacy, default None): Filters Gists according to privacy.
- order_by (GistOrder, default None): Ordering options for gists returned from the connection.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_gists(  # noqa\n    github_credentials: GitHubCredentials,\n    privacy: graphql_schema.GistPrivacy = None,\n    order_by: graphql_schema.GistOrder = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of the Gists the user has created.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        privacy: Filters Gists according to privacy.\n        order_by: Ordering options for gists returned from the connection.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).gists(\n        **strip_kwargs(\n            privacy=privacy,\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"gists\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"gists\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_interaction_ability","title":"query_viewer_interaction_ability async","text":"

        The interaction ability settings for this user.

Parameters:

- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_interaction_ability(  # noqa\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The interaction ability settings for this user.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).interaction_ability(**strip_kwargs())\n\n    op_stack = (\n        \"viewer\",\n        \"interactionAbility\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"interactionAbility\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_issue_comments","title":"query_viewer_issue_comments async","text":"

        A list of issue comments made by this user.

Parameters:

- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- order_by (IssueCommentOrder, default None): Ordering options for issue comments returned from the connection.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_issue_comments(  # noqa\n    github_credentials: GitHubCredentials,\n    order_by: graphql_schema.IssueCommentOrder = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of issue comments made by this user.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        order_by: Ordering options for issue comments returned\n            from the connection.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).issue_comments(\n        **strip_kwargs(\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"issueComments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"issueComments\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_issues","title":"query_viewer_issues async","text":"

        A list of issues associated with this user.

Parameters:

- labels (Iterable[str], required): A list of label names to filter the pull requests by.
- states (Iterable[IssueState], required): A list of states to filter the issues by.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- order_by (IssueOrder, default None): Ordering options for issues returned from the connection.
- filter_by (IssueFilters, default None): Filtering options for issues returned from the connection.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_issues(  # noqa\n    labels: Iterable[str],\n    states: Iterable[graphql_schema.IssueState],\n    github_credentials: GitHubCredentials,\n    order_by: graphql_schema.IssueOrder = None,\n    filter_by: graphql_schema.IssueFilters = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of issues associated with this user.\n\n    Args:\n        labels: A list of label names to filter the pull requests by.\n        states: A list of states to filter the issues by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        order_by: Ordering options for issues returned from the\n            connection.\n        filter_by: Filtering options for issues returned from the\n            connection.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).issues(\n        **strip_kwargs(\n            labels=labels,\n            states=states,\n            order_by=order_by,\n            filter_by=filter_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"issues\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"issues\"]\n
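Unlike most of the viewer tasks, labels and states are required here. A sketch that filters to open issues labelled "bug" (the label, state value, and block name are illustrative; state strings come from GitHub's IssueState enum):

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_issues


@flow
async def my_open_bugs():
    credentials = await GitHubCredentials.load("github-credentials")  # placeholder block name
    # Enum-typed arguments accept plain strings, mirroring the "TITLE" and
    # "CREATED_AT" defaults used elsewhere in this module.
    return await query_viewer_issues(
        labels=["bug"],
        states=["OPEN"],
        github_credentials=credentials,
        first=20,
    )
```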
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_item_showcase","title":"query_viewer_item_showcase async","text":"

        Showcases a selection of repositories and gists that the profile owner has either curated or that have been selected automatically based on popularity.

Parameters:

- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_item_showcase(  # noqa\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Showcases a selection of repositories and gists that the profile owner has\n    either curated or that have been selected automatically based on popularity.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).item_showcase(**strip_kwargs())\n\n    op_stack = (\n        \"viewer\",\n        \"itemShowcase\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"itemShowcase\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_organization","title":"query_viewer_organization async","text":"

        Find an organization by its login that the user belongs to.

Parameters:

- login (str, required): The login of the organization to find.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_organization(  # noqa\n    login: str,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find an organization by its login that the user belongs to.\n\n    Args:\n        login: The login of the organization to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).organization(\n        **strip_kwargs(\n            login=login,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"organization\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"organization\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_organizations","title":"query_viewer_organizations async","text":"

        A list of organizations the user belongs to.

Parameters:

- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_organizations(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of organizations the user belongs to.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).organizations(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"organizations\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"organizations\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_packages","title":"query_viewer_packages async","text":"

        A list of packages under the owner.

Parameters:

- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- names (Iterable[str], default None): Find packages by their names.
- repository_id (str, default None): Find packages in a repository by ID.
- package_type (PackageType, default None): Filter registry package by type.
- order_by (PackageOrder, default {'field': 'CREATED_AT', 'direction': 'DESC'}): Ordering of the returned packages.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_packages(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    names: Iterable[str] = None,\n    repository_id: str = None,\n    package_type: graphql_schema.PackageType = None,\n    order_by: graphql_schema.PackageOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of packages under the owner.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        names: Find packages by their names.\n        repository_id: Find packages in a repository by ID.\n        package_type: Filter registry package by type.\n        order_by: Ordering of the returned packages.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).packages(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            names=names,\n            repository_id=repository_id,\n            package_type=package_type,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"packages\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"packages\"]\n
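A sketch that overrides the newest-first default ordering and narrows the registry type (the package type and block name are illustrative):

```python
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_packages


@flow
async def oldest_npm_packages():
    credentials = await GitHubCredentials.load("github-credentials")  # placeholder block name
    # order_by defaults to {"field": "CREATED_AT", "direction": "DESC"};
    # flipping the direction returns the oldest packages first.
    return await query_viewer_packages(
        credentials,
        package_type="NPM",
        order_by={"field": "CREATED_AT", "direction": "ASC"},
        first=5,
    )
```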
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_pinnable_items","title":"query_viewer_pinnable_items async","text":"

        A list of repositories and gists this profile owner can pin to their profile.

Parameters:

- types (Iterable[PinnableItemType], required): Filter the types of pinnable items that are returned.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_pinnable_items(  # noqa\n    types: Iterable[graphql_schema.PinnableItemType],\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories and gists this profile owner can pin to their profile.\n\n    Args:\n        types: Filter the types of pinnable items that are\n            returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).pinnable_items(\n        **strip_kwargs(\n            types=types,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"pinnableItems\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"pinnableItems\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_pinned_items","title":"query_viewer_pinned_items async","text":"

        A list of repositories and gists this profile owner has pinned to their profile.

Parameters:

- types (Iterable[PinnableItemType], required): Filter the types of pinned items that are returned.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_pinned_items(  # noqa\n    types: Iterable[graphql_schema.PinnableItemType],\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories and gists this profile owner has pinned to their profile.\n\n    Args:\n        types: Filter the types of pinned items that are returned.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).pinned_items(\n        **strip_kwargs(\n            types=types,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"pinnedItems\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"pinnedItems\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_project","title":"query_viewer_project async","text":"

        Find project by number.

Parameters:

- number (int, required): The project number to find.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_project(  # noqa\n    number: int,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find project by number.\n\n    Args:\n        number: The project number to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).project(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"project\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"project\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_project_next","title":"query_viewer_project_next async","text":"

        Find a project by project (beta) number.

Parameters:

- number (int, required): The project (beta) number.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_project_next(  # noqa\n    number: int,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find a project by project (beta) number.\n\n    Args:\n        number: The project (beta) number.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).project_next(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"projectNext\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"projectNext\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_project_v2","title":"query_viewer_project_v2 async","text":"

        Find a project by number.

Parameters:

- number (int, required): The project number.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_project_v2(  # noqa\n    number: int,\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find a project by number.\n\n    Args:\n        number: The project number.\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).project_v2(\n        **strip_kwargs(\n            number=number,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"projectV2\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"projectV2\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_projects","title":"query_viewer_projects async","text":"

        A list of projects under the owner.

Parameters:

- states (Iterable[ProjectState], required): A list of states to filter the projects by.
- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- order_by (ProjectOrder, default None): Ordering options for projects returned from the connection.
- search (str, default None): Query to search projects by, currently only searching by name.
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_projects(  # noqa\n    states: Iterable[graphql_schema.ProjectState],\n    github_credentials: GitHubCredentials,\n    order_by: graphql_schema.ProjectOrder = None,\n    search: str = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of projects under the owner.\n\n    Args:\n        states: A list of states to filter the projects by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        order_by: Ordering options for projects returned from the\n            connection.\n        search: Query to search projects by, currently only searching\n            by name.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).projects(\n        **strip_kwargs(\n            states=states,\n            order_by=order_by,\n            search=search,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"projects\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"projects\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_projects_next","title":"query_viewer_projects_next async","text":"

        A list of projects (beta) under the owner.

Parameters:

- github_credentials (GitHubCredentials, required): Credentials to use for authentication with GitHub.
- query (str, default None): A project (beta) to search for under the owner.
- sort_by (ProjectNextOrderField, default 'TITLE'): How to order the returned projects (beta).
- after (str, default None): Returns the elements in the list that come after the specified cursor.
- before (str, default None): Returns the elements in the list that come before the specified cursor.
- first (int, default None): Returns the first n elements from the list.
- last (int, default None): Returns the last n elements from the list.
- return_fields (Iterable[str], default None): Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

Returns:

- Dict[str, Any]: A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_projects_next(  # noqa\n    github_credentials: GitHubCredentials,\n    query: str = None,\n    sort_by: graphql_schema.ProjectNextOrderField = \"TITLE\",\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of projects (beta) under the owner.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        query: A project (beta) to search for under the the owner.\n        sort_by: How to order the returned projects (beta).\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).projects_next(\n        **strip_kwargs(\n            query=query,\n            sort_by=sort_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"projectsNext\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"projectsNext\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_projects_v2","title":"query_viewer_projects_v2 async","text":"

        A list of projects under the owner.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| query | str | A project to search for under the owner. | None |
| order_by | ProjectV2Order | How to order the returned projects. | {'field': 'NUMBER', 'direction': 'DESC'} |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_projects_v2(  # noqa\n    github_credentials: GitHubCredentials,\n    query: str = None,\n    order_by: graphql_schema.ProjectV2Order = {\"field\": \"NUMBER\", \"direction\": \"DESC\"},\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of projects under the owner.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        query: A project to search for under the the owner.\n        order_by: How to order the returned projects.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).projects_v2(\n        **strip_kwargs(\n            query=query,\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"projectsV2\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"projectsV2\"]\n
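A usage sketch for illustration, overriding the default ordering shown in the signature. The block name and the ascending direction are assumptions.

```python
import asyncio

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_projects_v2


@flow
async def list_projects_v2():
    creds = await GitHubCredentials.load("github-creds")  # illustrative block name
    # Flip the default {'field': 'NUMBER', 'direction': 'DESC'} ordering to ascending.
    return await query_viewer_projects_v2(
        github_credentials=creds,
        order_by={"field": "NUMBER", "direction": "ASC"},
        first=10,
    )


if __name__ == "__main__":
    asyncio.run(list_projects_v2())
```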
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_public_keys","title":"query_viewer_public_keys async","text":"

        A list of public keys associated with this user.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_public_keys(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of public keys associated with this user.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).public_keys(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"publicKeys\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"publicKeys\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_pull_requests","title":"query_viewer_pull_requests async","text":"

        A list of pull requests associated with this user.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| states | Iterable[PullRequestState] | A list of states to filter the pull requests by. | required |
| labels | Iterable[str] | A list of label names to filter the pull requests by. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| head_ref_name | str | The head ref name to filter the pull requests by. | None |
| base_ref_name | str | The base ref name to filter the pull requests by. | None |
| order_by | IssueOrder | Ordering options for pull requests returned from the connection. | None |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_pull_requests(  # noqa\n    states: Iterable[graphql_schema.PullRequestState],\n    labels: Iterable[str],\n    github_credentials: GitHubCredentials,\n    head_ref_name: str = None,\n    base_ref_name: str = None,\n    order_by: graphql_schema.IssueOrder = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of pull requests associated with this user.\n\n    Args:\n        states: A list of states to filter the pull requests by.\n        labels: A list of label names to filter the pull requests\n            by.\n        github_credentials: Credentials to use for authentication with GitHub.\n        head_ref_name: The head ref name to filter the pull\n            requests by.\n        base_ref_name: The base ref name to filter the pull\n            requests by.\n        order_by: Ordering options for pull requests returned from\n            the connection.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).pull_requests(\n        **strip_kwargs(\n            states=states,\n            labels=labels,\n            head_ref_name=head_ref_name,\n            base_ref_name=base_ref_name,\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"pullRequests\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"pullRequests\"]\n
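A usage sketch for illustration: `states` and `labels` are required arguments. The enum and label values below are examples, not values taken from this reference.

```python
import asyncio

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_pull_requests


@flow
async def my_open_bug_prs():
    creds = await GitHubCredentials.load("github-creds")  # illustrative block name
    # State and label values are examples only.
    return await query_viewer_pull_requests(
        states=["OPEN"],
        labels=["bug"],
        github_credentials=creds,
        first=20,
    )


if __name__ == "__main__":
    asyncio.run(my_open_bug_prs())
```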
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_recent_projects","title":"query_viewer_recent_projects async","text":"

        Recent projects that this user has modified in the context of the owner.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_recent_projects(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Recent projects that this user has modified in the context of the owner.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).recent_projects(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"recentProjects\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"recentProjects\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_repositories","title":"query_viewer_repositories async","text":"

        A list of repositories that the user owns.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| privacy | RepositoryPrivacy | If non-null, filters repositories according to privacy. | None |
| order_by | RepositoryOrder | Ordering options for repositories returned from the connection. | None |
| affiliations | Iterable[RepositoryAffiliation] | Array of viewer's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the current viewer owns. | None |
| owner_affiliations | Iterable[RepositoryAffiliation] | Array of owner's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the organization or user being viewed owns. | ('OWNER', 'COLLABORATOR') |
| is_locked | bool | If non-null, filters repositories according to whether they have been locked. | None |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| is_fork | bool | If non-null, filters repositories according to whether they are forks of another repository. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_repositories(  # noqa\n    github_credentials: GitHubCredentials,\n    privacy: graphql_schema.RepositoryPrivacy = None,\n    order_by: graphql_schema.RepositoryOrder = None,\n    affiliations: Iterable[graphql_schema.RepositoryAffiliation] = None,\n    owner_affiliations: Iterable[graphql_schema.RepositoryAffiliation] = (\n        \"OWNER\",\n        \"COLLABORATOR\",\n    ),\n    is_locked: bool = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    is_fork: bool = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories that the user owns.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        privacy: If non-null, filters repositories according to\n            privacy.\n        order_by: Ordering options for repositories returned from\n            the connection.\n        affiliations: Array of viewer's affiliation options for\n            repositories returned from the connection. For example,\n            OWNER will include only repositories that the current viewer\n            owns.\n        owner_affiliations: Array of owner's affiliation options\n            for repositories returned from the connection. For example,\n            OWNER will include only repositories that the organization\n            or user being viewed owns.\n        is_locked: If non-null, filters repositories according to\n            whether they have been locked.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        is_fork: If non-null, filters repositories according to\n            whether they are forks of another repository.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).repositories(\n        **strip_kwargs(\n            privacy=privacy,\n            order_by=order_by,\n            affiliations=affiliations,\n            owner_affiliations=owner_affiliations,\n            is_locked=is_locked,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            is_fork=is_fork,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"repositories\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"repositories\"]\n
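A usage sketch for illustration, constructing credentials inline instead of loading a block. The token placeholder and the enum value are assumptions.

```python
import asyncio

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_repositories


@flow
async def my_public_repositories():
    # Replace the placeholder with a real personal access token,
    # or load a saved GitHubCredentials block instead.
    creds = GitHubCredentials(token="<PERSONAL_ACCESS_TOKEN>")
    return await query_viewer_repositories(
        github_credentials=creds,
        privacy="PUBLIC",  # RepositoryPrivacy enum value (example)
        is_fork=False,     # exclude forks
        first=10,
    )


if __name__ == "__main__":
    asyncio.run(my_public_repositories())
```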
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_repositories_contributed_to","title":"query_viewer_repositories_contributed_to async","text":"

        A list of repositories that the user recently contributed to.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| privacy | RepositoryPrivacy | If non-null, filters repositories according to privacy. | None |
| order_by | RepositoryOrder | Ordering options for repositories returned from the connection. | None |
| is_locked | bool | If non-null, filters repositories according to whether they have been locked. | None |
| include_user_repositories | bool | If true, include user repositories. | None |
| contribution_types | Iterable[RepositoryContributionType] | If non-null, include only the specified types of contributions. The GitHub.com UI uses [COMMIT, ISSUE, PULL_REQUEST, REPOSITORY]. | None |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_repositories_contributed_to(  # noqa\n    github_credentials: GitHubCredentials,\n    privacy: graphql_schema.RepositoryPrivacy = None,\n    order_by: graphql_schema.RepositoryOrder = None,\n    is_locked: bool = None,\n    include_user_repositories: bool = None,\n    contribution_types: Iterable[graphql_schema.RepositoryContributionType] = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories that the user recently contributed to.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        privacy: If non-null, filters repositories\n            according to privacy.\n        order_by: Ordering options for repositories\n            returned from the connection.\n        is_locked: If non-null, filters repositories\n            according to whether they have been locked.\n        include_user_repositories: If true, include\n            user repositories.\n        contribution_types: If non-null, include\n            only the specified types of contributions. The GitHub.com UI\n            uses [COMMIT, ISSUE, PULL_REQUEST, REPOSITORY].\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list\n            that come before the specified cursor.\n        first: Returns the first _n_ elements from\n            the list.\n        last: Returns the last _n_ elements from the\n            list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).repositories_contributed_to(\n        **strip_kwargs(\n            privacy=privacy,\n            order_by=order_by,\n            is_locked=is_locked,\n            include_user_repositories=include_user_repositories,\n            contribution_types=contribution_types,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"repositoriesContributedTo\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"repositoriesContributedTo\"]\n
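A usage sketch for illustration, restricting results to the contribution types the docstring mentions. The block name is an assumption.

```python
import asyncio

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_repositories_contributed_to


@flow
async def repos_i_contributed_to():
    creds = await GitHubCredentials.load("github-creds")  # illustrative block name
    # Restrict to commit and pull-request contributions, as in the GitHub.com UI.
    return await query_viewer_repositories_contributed_to(
        github_credentials=creds,
        contribution_types=["COMMIT", "PULL_REQUEST"],
        first=10,
    )


if __name__ == "__main__":
    asyncio.run(repos_i_contributed_to())
```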
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_repository","title":"query_viewer_repository async","text":"

        Find Repository.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | Name of Repository to find. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| follow_renames | bool | Follow repository renames. If disabled, a repository referenced by its old name will return an error. | True |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_repository(  # noqa\n    name: str,\n    github_credentials: GitHubCredentials,\n    follow_renames: bool = True,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Find Repository.\n\n    Args:\n        name: Name of Repository to find.\n        github_credentials: Credentials to use for authentication with GitHub.\n        follow_renames: Follow repository renames. If disabled, a\n            repository referenced by its old name will return an error.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).repository(\n        **strip_kwargs(\n            name=name,\n            follow_renames=follow_renames,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"repository\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"repository\"]\n
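A usage sketch for illustration, looking up one of the viewer's repositories by name. The repository name and block name are assumptions.

```python
import asyncio

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_repository


@flow
async def get_repository_info(repo_name: str = "prefect-github"):
    creds = await GitHubCredentials.load("github-creds")  # illustrative block name
    # follow_renames=True (the default) resolves repositories referenced by an old name.
    return await query_viewer_repository(
        name=repo_name,
        github_credentials=creds,
        follow_renames=True,
    )


if __name__ == "__main__":
    asyncio.run(get_repository_info())
```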
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_repository_discussion_comments","title":"query_viewer_repository_discussion_comments async","text":"

        Discussion comments this user has authored.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| repository_id | str | Filter discussion comments to only those in a specific repository. | None |
| only_answers | bool | Filter discussion comments to only those that were marked as the answer. | False |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_repository_discussion_comments(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    repository_id: str = None,\n    only_answers: bool = False,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Discussion comments this user has authored.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list\n            that come after the specified cursor.\n        before: Returns the elements in the list\n            that come before the specified cursor.\n        first: Returns the first _n_ elements\n            from the list.\n        last: Returns the last _n_ elements from\n            the list.\n        repository_id: Filter discussion comments\n            to only those in a specific repository.\n        only_answers: Filter discussion comments\n            to only those that were marked as the answer.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).repository_discussion_comments(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            repository_id=repository_id,\n            only_answers=only_answers,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"repositoryDiscussionComments\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"repositoryDiscussionComments\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_repository_discussions","title":"query_viewer_repository_discussions async","text":"

        Discussions this user has started.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| order_by | DiscussionOrder | Ordering options for discussions returned from the connection. | {'field': 'CREATED_AT', 'direction': 'DESC'} |
| repository_id | str | Filter discussions to only those in a specific repository. | None |
| answered | bool | Filter discussions to only those that have been answered or not. Defaults to including both answered and unanswered discussions. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_repository_discussions(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.DiscussionOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    repository_id: str = None,\n    answered: bool = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Discussions this user has started.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the\n            list.\n        order_by: Ordering options for discussions\n            returned from the connection.\n        repository_id: Filter discussions to only those\n            in a specific repository.\n        answered: Filter discussions to only those that\n            have been answered or not. Defaults to including both\n            answered and unanswered discussions.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).repository_discussions(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n            repository_id=repository_id,\n            answered=answered,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"repositoryDiscussions\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"repositoryDiscussions\"]\n
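A usage sketch for illustration, narrowing the default (both answered and unanswered) to open questions. The block name is an assumption.

```python
import asyncio

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_repository_discussions


@flow
async def my_unanswered_discussions():
    creds = await GitHubCredentials.load("github-creds")  # illustrative block name
    # answered=False returns only discussions that have not been answered.
    return await query_viewer_repository_discussions(
        github_credentials=creds,
        answered=False,
        first=25,
    )


if __name__ == "__main__":
    asyncio.run(my_unanswered_discussions())
```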
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_saved_replies","title":"query_viewer_saved_replies async","text":"

        Replies this user has saved.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| order_by | SavedReplyOrder | The field to order saved replies by. | {'field': 'UPDATED_AT', 'direction': 'DESC'} |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_saved_replies(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.SavedReplyOrder = {\n        \"field\": \"UPDATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Replies this user has saved.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come before\n            the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        order_by: The field to order saved replies by.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).saved_replies(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"savedReplies\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"savedReplies\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_sponsoring","title":"query_viewer_sponsoring async","text":"

        List of users and organizations this entity is sponsoring.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| order_by | SponsorOrder | Ordering options for the users and organizations returned from the connection. | {'field': 'RELEVANCE', 'direction': 'DESC'} |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_sponsoring(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.SponsorOrder = {\"field\": \"RELEVANCE\", \"direction\": \"DESC\"},\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of users and organizations this entity is sponsoring.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        order_by: Ordering options for the users and organizations\n            returned from the connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).sponsoring(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"sponsoring\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"sponsoring\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_sponsors","title":"query_viewer_sponsors async","text":"

        List of sponsors for this user or organization.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| tier_id | str | If given, will filter for sponsors at the given tier. Will only return sponsors whose tier the viewer is permitted to see. | None |
| order_by | SponsorOrder | Ordering options for sponsors returned from the connection. | {'field': 'RELEVANCE', 'direction': 'DESC'} |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_sponsors(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    tier_id: str = None,\n    order_by: graphql_schema.SponsorOrder = {\"field\": \"RELEVANCE\", \"direction\": \"DESC\"},\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of sponsors for this user or organization.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        tier_id: If given, will filter for sponsors at the given tier.\n            Will only return sponsors whose tier the viewer is permitted\n            to see.\n        order_by: Ordering options for sponsors returned from the\n            connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).sponsors(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            tier_id=tier_id,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"sponsors\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"sponsors\"]\n
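A usage sketch for illustration, keeping the default relevance ordering and requesting a single page of sponsors. The block name is an assumption.

```python
import asyncio

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_sponsors


@flow
async def list_sponsors():
    creds = await GitHubCredentials.load("github-creds")  # illustrative block name
    # Uses the default {'field': 'RELEVANCE', 'direction': 'DESC'} ordering.
    return await query_viewer_sponsors(
        github_credentials=creds,
        first=50,
    )


if __name__ == "__main__":
    asyncio.run(list_sponsors())
```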
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_sponsors_activities","title":"query_viewer_sponsors_activities async","text":"

        Events involving this sponsorable, such as new sponsorships.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| actions | Iterable[SponsorsActivityAction] | Filter activities to only the specified actions. | required |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| period | SponsorsActivityPeriod | Filter activities returned to only those that occurred in the most recent specified time period. Set to ALL to avoid filtering by when the activity occurred. | 'MONTH' |
| order_by | SponsorsActivityOrder | Ordering options for activity returned from the connection. | {'field': 'TIMESTAMP', 'direction': 'DESC'} |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_sponsors_activities(  # noqa\n    actions: Iterable[graphql_schema.SponsorsActivityAction],\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    period: graphql_schema.SponsorsActivityPeriod = \"MONTH\",\n    order_by: graphql_schema.SponsorsActivityOrder = {\n        \"field\": \"TIMESTAMP\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Events involving this sponsorable, such as new sponsorships.\n\n    Args:\n        actions: Filter activities to only the specified\n            actions.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        period: Filter activities returned to only those\n            that occurred in the most recent specified time period. Set\n            to ALL to avoid filtering by when the activity occurred.\n        order_by: Ordering options for activity returned\n            from the connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).sponsors_activities(\n        **strip_kwargs(\n            actions=actions,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            period=period,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"sponsorsActivities\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"sponsorsActivities\"]\n
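A usage sketch for illustration: `actions` is required. The action enum values and block name below are assumptions and may need adjusting to GitHub's SponsorsActivityAction values.

```python
import asyncio

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_sponsors_activities


@flow
async def sponsorship_events_this_month():
    creds = await GitHubCredentials.load("github-creds")  # illustrative block name
    # Action values are examples of SponsorsActivityAction members.
    return await query_viewer_sponsors_activities(
        actions=["NEW_SPONSORSHIP", "CANCELLED_SPONSORSHIP"],
        github_credentials=creds,
        period="MONTH",  # the default period
        first=20,
    )


if __name__ == "__main__":
    asyncio.run(sponsorship_events_this_month())
```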
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_sponsors_listing","title":"query_viewer_sponsors_listing async","text":"

        The GitHub Sponsors listing for this user or organization.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_sponsors_listing(  # noqa\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The GitHub Sponsors listing for this user or organization.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).sponsors_listing(**strip_kwargs())\n\n    op_stack = (\n        \"viewer\",\n        \"sponsorsListing\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"sponsorsListing\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_sponsorship_for_viewer_as_sponsor","title":"query_viewer_sponsorship_for_viewer_as_sponsor async","text":"

        The sponsorship from the viewer to this user/organization; that is, the sponsorship where you're the sponsor. Only returns a sponsorship if it is active.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_sponsorship_for_viewer_as_sponsor(  # noqa\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The sponsorship from the viewer to this user/organization; that is, the\n    sponsorship where you're the sponsor. Only returns a sponsorship if it is\n    active.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).sponsorship_for_viewer_as_sponsor(\n        **strip_kwargs()\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"sponsorshipForViewerAsSponsor\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"sponsorshipForViewerAsSponsor\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_sponsorship_for_viewer_as_sponsorable","title":"query_viewer_sponsorship_for_viewer_as_sponsorable async","text":"

        The sponsorship from this user/organization to the viewer; that is, the sponsorship you're receiving. Only returns a sponsorship if it is active.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_sponsorship_for_viewer_as_sponsorable(  # noqa\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The sponsorship from this user/organization to the viewer; that is, the\n    sponsorship you're receiving. Only returns a sponsorship if it is active.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).sponsorship_for_viewer_as_sponsorable(\n        **strip_kwargs()\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"sponsorshipForViewerAsSponsorable\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"sponsorshipForViewerAsSponsorable\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_sponsorship_newsletters","title":"query_viewer_sponsorship_newsletters async","text":"

        List of sponsorship updates sent from this sponsorable to sponsors.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| order_by | SponsorshipNewsletterOrder | Ordering options for sponsorship updates returned from the connection. | {'field': 'CREATED_AT', 'direction': 'DESC'} |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_sponsorship_newsletters(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.SponsorshipNewsletterOrder = {\n        \"field\": \"CREATED_AT\",\n        \"direction\": \"DESC\",\n    },\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    List of sponsorship updates sent from this sponsorable to sponsors.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the\n            list.\n        order_by: Ordering options for sponsorship\n            updates returned from the connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).sponsorship_newsletters(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"sponsorshipNewsletters\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"sponsorshipNewsletters\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_sponsorships_as_maintainer","title":"query_viewer_sponsorships_as_maintainer async","text":"

        This object's sponsorships as the maintainer.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| include_private | bool | Whether or not to include private sponsorships in the result set. | False |
| order_by | SponsorshipOrder | Ordering options for sponsorships returned from this connection. If left blank, the sponsorships will be ordered based on relevancy to the viewer. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_sponsorships_as_maintainer(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    include_private: bool = False,\n    order_by: graphql_schema.SponsorshipOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    This object's sponsorships as the maintainer.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from\n            the list.\n        last: Returns the last _n_ elements from the\n            list.\n        include_private: Whether or not to include\n            private sponsorships in the result set.\n        order_by: Ordering options for sponsorships\n            returned from this connection. If left blank, the\n            sponsorships will be ordered based on relevancy to the\n            viewer.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).sponsorships_as_maintainer(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            include_private=include_private,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"sponsorshipsAsMaintainer\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"sponsorshipsAsMaintainer\"]\n
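A usage sketch for illustration, including private sponsorships in the result set. The block name is an assumption.

```python
import asyncio

from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.viewer import query_viewer_sponsorships_as_maintainer


@flow
async def all_my_sponsorships():
    creds = await GitHubCredentials.load("github-creds")  # illustrative block name
    # include_private=True also returns sponsorships marked as private.
    return await query_viewer_sponsorships_as_maintainer(
        github_credentials=creds,
        include_private=True,
        first=100,
    )


if __name__ == "__main__":
    asyncio.run(all_my_sponsorships())
```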
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_sponsorships_as_sponsor","title":"query_viewer_sponsorships_as_sponsor async","text":"

        This object's sponsorships as the sponsor.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| github_credentials | GitHubCredentials | Credentials to use for authentication with GitHub. | required |
| after | str | Returns the elements in the list that come after the specified cursor. | None |
| before | str | Returns the elements in the list that come before the specified cursor. | None |
| first | int | Returns the first n elements from the list. | None |
| last | int | Returns the last n elements from the list. | None |
| order_by | SponsorshipOrder | Ordering options for sponsorships returned from this connection. If left blank, the sponsorships will be ordered based on relevancy to the viewer. | None |
| return_fields | Iterable[str] | Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json. | None |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | A dict of the returned fields. |

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_sponsorships_as_sponsor(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    order_by: graphql_schema.SponsorshipOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    This object's sponsorships as the sponsor.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that\n            come after the specified cursor.\n        before: Returns the elements in the list that\n            come before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the\n            list.\n        order_by: Ordering options for sponsorships\n            returned from this connection. If left blank, the\n            sponsorships will be ordered based on relevancy to the\n            viewer.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).sponsorships_as_sponsor(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"sponsorshipsAsSponsor\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"sponsorshipsAsSponsor\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_starred_repositories","title":"query_viewer_starred_repositories async","text":"

        Repositories the user has starred.

        Parameters:

        Name Type Description Default github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required after str

        Returns the elements in the list that come after the specified cursor.

        None before str

        Returns the elements in the list that come before the specified cursor.

        None first int

        Returns the first n elements from the list.

        None last int

        Returns the last n elements from the list.

        None owned_by_viewer bool

        Filters starred repositories to only return repositories owned by the viewer.

        None order_by StarOrder

        Order for connection.

        None return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_starred_repositories(  # noqa\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    owned_by_viewer: bool = None,\n    order_by: graphql_schema.StarOrder = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Repositories the user has starred.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come\n            after the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the\n            list.\n        last: Returns the last _n_ elements from the list.\n        owned_by_viewer: Filters starred repositories to\n            only return repositories owned by the viewer.\n        order_by: Order for connection.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).starred_repositories(\n        **strip_kwargs(\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            owned_by_viewer=owned_by_viewer,\n            order_by=order_by,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"starredRepositories\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"starredRepositories\"]\n
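
        A hedged usage sketch along the same lines, assuming a saved GitHubCredentials block named \"github-creds\"; the page size and filter values are illustrative.

        from prefect import flow\nfrom prefect_github import GitHubCredentials\nfrom prefect_github.viewer import query_viewer_starred_repositories\n\n@flow\ndef list_starred_repositories():\n    starred = query_viewer_starred_repositories(\n        github_credentials=GitHubCredentials.load(\"github-creds\"),  # hypothetical block name\n        first=25,\n        owned_by_viewer=False,\n    )\n    return starred\n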
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_status","title":"query_viewer_status async","text":"

        The user's description of what they're currently doing.

        Parameters:

        Name Type Description Default github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_status(  # noqa\n    github_credentials: GitHubCredentials,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    The user's description of what they're currently doing.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).status(**strip_kwargs())\n\n    op_stack = (\n        \"viewer\",\n        \"status\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"status\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_top_repositories","title":"query_viewer_top_repositories async","text":"

        Repositories the user has contributed to, ordered by contribution rank, plus repositories the user has created.

        Parameters:

        Name Type Description Default order_by RepositoryOrder

        Ordering options for repositories returned from the connection.

        required github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required after str

        Returns the elements in the list that come after the specified cursor.

        None before str

        Returns the elements in the list that come before the specified cursor.

        None first int

        Returns the first n elements from the list.

        None last int

        Returns the last n elements from the list.

        None since datetime

        How far back in time to fetch contributed repositories.

        None return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_top_repositories(  # noqa\n    order_by: graphql_schema.RepositoryOrder,\n    github_credentials: GitHubCredentials,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    since: datetime = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    Repositories the user has contributed to, ordered by contribution rank, plus\n    repositories the user has created.\n\n    Args:\n        order_by: Ordering options for repositories returned\n            from the connection.\n        github_credentials: Credentials to use for authentication with GitHub.\n        after: Returns the elements in the list that come after\n            the specified cursor.\n        before: Returns the elements in the list that come\n            before the specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        since: How far back in time to fetch contributed\n            repositories.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).top_repositories(\n        **strip_kwargs(\n            order_by=order_by,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n            since=since,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"topRepositories\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"topRepositories\"]\n
        "},{"location":"integrations/prefect-github/viewer/#prefect_github.viewer.query_viewer_watching","title":"query_viewer_watching async","text":"

        A list of repositories the given user is watching.

        Parameters:

        Name Type Description Default github_credentials GitHubCredentials

        Credentials to use for authentication with GitHub.

        required privacy RepositoryPrivacy

        If non-null, filters repositories according to privacy.

        None order_by RepositoryOrder

        Ordering options for repositories returned from the connection.

        None affiliations Iterable[RepositoryAffiliation]

        Affiliation options for repositories returned from the connection. If none specified, the results will include repositories for which the current viewer is an owner or collaborator, or member.

        None owner_affiliations Iterable[RepositoryAffiliation]

        Array of owner's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the organization or user being viewed owns.

        ('OWNER', 'COLLABORATOR') is_locked bool

        If non-null, filters repositories according to whether they have been locked.

        None after str

        Returns the elements in the list that come after the specified cursor.

        None before str

        Returns the elements in the list that come before the specified cursor.

        None first int

        Returns the first n elements from the list.

        None last int

        Returns the last n elements from the list.

        None return_fields Iterable[str]

        Subset the return fields (as snake_case); defaults to fields listed in configs/query/*.json.

        None

        Returns:

        Type Description Dict[str, Any]

        A dict of the returned fields.

        Source code in prefect_github/viewer.py
        @task\nasync def query_viewer_watching(  # noqa\n    github_credentials: GitHubCredentials,\n    privacy: graphql_schema.RepositoryPrivacy = None,\n    order_by: graphql_schema.RepositoryOrder = None,\n    affiliations: Iterable[graphql_schema.RepositoryAffiliation] = None,\n    owner_affiliations: Iterable[graphql_schema.RepositoryAffiliation] = (\n        \"OWNER\",\n        \"COLLABORATOR\",\n    ),\n    is_locked: bool = None,\n    after: str = None,\n    before: str = None,\n    first: int = None,\n    last: int = None,\n    return_fields: Iterable[str] = None,\n) -> Dict[str, Any]:  # pragma: no cover\n    \"\"\"\n    A list of repositories the given user is watching.\n\n    Args:\n        github_credentials: Credentials to use for authentication with GitHub.\n        privacy: If non-null, filters repositories according to privacy.\n        order_by: Ordering options for repositories returned from the\n            connection.\n        affiliations: Affiliation options for repositories returned\n            from the connection. If none specified, the results will\n            include repositories for which the current viewer is an\n            owner or collaborator, or member.\n        owner_affiliations: Array of owner's affiliation options for\n            repositories returned from the connection. For example,\n            OWNER will include only repositories that the organization\n            or user being viewed owns.\n        is_locked: If non-null, filters repositories according to\n            whether they have been locked.\n        after: Returns the elements in the list that come after the\n            specified cursor.\n        before: Returns the elements in the list that come before the\n            specified cursor.\n        first: Returns the first _n_ elements from the list.\n        last: Returns the last _n_ elements from the list.\n        return_fields: Subset the return fields (as snake_case); defaults to\n            fields listed in configs/query/*.json.\n\n    Returns:\n        A dict of the returned fields.\n    \"\"\"\n    op = Operation(graphql_schema.Query)\n    op_selection = op.viewer(**strip_kwargs()).watching(\n        **strip_kwargs(\n            privacy=privacy,\n            order_by=order_by,\n            affiliations=affiliations,\n            owner_affiliations=owner_affiliations,\n            is_locked=is_locked,\n            after=after,\n            before=before,\n            first=first,\n            last=last,\n        )\n    )\n\n    op_stack = (\n        \"viewer\",\n        \"watching\",\n    )\n    op_selection = _subset_return_fields(\n        op_selection, op_stack, return_fields, return_fields_defaults\n    )\n\n    result = await _execute_graphql_op(op, github_credentials)\n    return result[\"viewer\"][\"watching\"]\n
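
        A hedged usage sketch, again assuming a saved GitHubCredentials block named \"github-creds\"; narrowing owner_affiliations below is illustrative and overrides the ('OWNER', 'COLLABORATOR') default.

        from prefect import flow\nfrom prefect_github import GitHubCredentials\nfrom prefect_github.viewer import query_viewer_watching\n\n@flow\ndef list_watched_repositories():\n    watching = query_viewer_watching(\n        github_credentials=GitHubCredentials.load(\"github-creds\"),  # hypothetical block name\n        first=50,\n        owner_affiliations=[\"OWNER\"],\n    )\n    return watching\n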
        "},{"location":"integrations/prefect-gitlab/","title":"prefect-gitlab","text":""},{"location":"integrations/prefect-gitlab/#welcome","title":"Welcome!","text":"

        prefect-gitlab is a Prefect collection for working with GitLab repositories.

        "},{"location":"integrations/prefect-gitlab/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-gitlab/#python-setup","title":"Python setup","text":"

        Requires an installation of Python 3.8 or higher.

        We recommend using a Python virtual environment manager such as pipenv, conda, or virtualenv.

        This integration is designed to work with Prefect 2.3.0 or higher. For more information about how to use Prefect, please refer to the Prefect documentation.

        "},{"location":"integrations/prefect-gitlab/#installation","title":"Installation","text":"

        Install prefect-gitlab with pip:

        pip install prefect-gitlab\n

        Then, register the block types in this integration to view the storage block type on Prefect Cloud:

        prefect block register -m prefect_gitlab\n

        Note: to use the load method on a block, you must already have a block document saved.

        "},{"location":"integrations/prefect-gitlab/#creating-a-gitlab-storage-block","title":"Creating a GitLab storage block","text":""},{"location":"integrations/prefect-gitlab/#in-python","title":"In Python","text":"
        from prefect_gitlab import GitLabRepository\n\n# public GitLab repository\npublic_gitlab_block = GitLabRepository(\n    name=\"my-gitlab-block\",\n    repository=\"https://gitlab.com/testing/my-repository.git\"\n)\n\npublic_gitlab_block.save()\n\n\n# specific branch or tag of a GitLab repository\nbranch_gitlab_block = GitLabRepository(\n    name=\"my-gitlab-block\",\n    reference=\"branch-or-tag-name\",\n    repository=\"https://gitlab.com/testing/my-repository.git\"\n)\n\nbranch_gitlab_block.save()\n\n\n# Get all history of a specific branch or tag of a GitLab repository\nbranch_gitlab_block = GitLabRepository(\n    name=\"my-gitlab-block\",\n    reference=\"branch-or-tag-name\",\n    git_depth=None,\n    repository=\"https://gitlab.com/testing/my-repository.git\"\n)\n\nbranch_gitlab_block.save()\n\n# private GitLab repository\nprivate_gitlab_block = GitLabRepository(\n    name=\"my-private-gitlab-block\",\n    repository=\"https://gitlab.com/testing/my-repository.git\",\n    access_token=\"YOUR_GITLAB_PERSONAL_ACCESS_TOKEN\"\n)\n\nprivate_gitlab_block.save()\n
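
        Once one of the blocks above has been saved, it can be loaded back by name; a minimal sketch, assuming the block was saved under the name \"my-gitlab-block\" as in the examples above:

        from prefect_gitlab import GitLabRepository\n\n# Load the previously saved block document by name\ngitlab_block = GitLabRepository.load(\"my-gitlab-block\")\nprint(gitlab_block.repository, gitlab_block.reference)\n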
        "},{"location":"integrations/prefect-gitlab/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-gitlab/credentials/#prefect_gitlab.credentials","title":"prefect_gitlab.credentials","text":"

        Module used to enable authenticated interactions with GitLab

        "},{"location":"integrations/prefect-gitlab/credentials/#prefect_gitlab.credentials.GitLabCredentials","title":"GitLabCredentials","text":"

        Bases: Block

        Store a GitLab personal access token to interact with private GitLab repositories.

        Attributes:

        Name Type Description token SecretStr

        The personal access token to authenticate with GitLab.

        url str

        URL to self-hosted GitLab instances.

        Examples:

        Load stored GitLab credentials:

        from prefect_gitlab import GitLabCredentials\ngitlab_credentials_block = GitLabCredentials.load(\"BLOCK_NAME\")\n

        Source code in prefect_gitlab/credentials.py
        class GitLabCredentials(Block):\n    \"\"\"\n    Store a GitLab personal access token to interact with private GitLab\n    repositories.\n\n    Attributes:\n        token: The personal access token to authenticate with GitLab.\n        url: URL to self-hosted GitLab instances.\n\n    Examples:\n        Load stored GitLab credentials:\n        ```python\n        from prefect_gitlab import GitLabCredentials\n        gitlab_credentials_block = GitLabCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"GitLab Credentials\"\n    _logo_url = HttpUrl(\n        url=\"https://images.ctfassets.net/gm98wzqotmnx/55edIimT4g9gbjhkh5a3Sp/dfdb9391d8f45c2e93e72e3a4d350771/gitlab-logo-500.png?h=250\",  # noqa\n        scheme=\"https\",\n    )\n\n    token: SecretStr = Field(\n        name=\"Personal Access Token\",\n        default=None,\n        description=\"A GitLab Personal Access Token with read_repository scope.\",\n    )\n    url: str = Field(\n        default=None, title=\"URL\", description=\"URL to self-hosted GitLab instances.\"\n    )\n\n    def get_client(self) -> Gitlab:\n        \"\"\"\n        Gets an authenticated GitLab client.\n\n        Returns:\n            An authenticated GitLab client.\n        \"\"\"\n        # ref: https://python-gitlab.readthedocs.io/en/stable/\n        gitlab = Gitlab(url=self.url, oauth_token=self.token.get_secret_value())\n        gitlab.auth()\n        return gitlab\n
        "},{"location":"integrations/prefect-gitlab/credentials/#prefect_gitlab.credentials.GitLabCredentials.get_client","title":"get_client","text":"

        Gets an authenticated GitLab client.

        Returns:

        Type Description Gitlab

        An authenticated GitLab client.
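
        A minimal sketch of using the returned client, assuming a GitLabCredentials block saved under the hypothetical name \"gitlab-creds\" and the python-gitlab API for listing projects:

        from prefect_gitlab.credentials import GitLabCredentials\n\ngitlab_credentials = GitLabCredentials.load(\"gitlab-creds\")  # hypothetical block name\nclient = gitlab_credentials.get_client()\nfor project in client.projects.list(membership=True):\n    print(project.name)\n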

        Source code in prefect_gitlab/credentials.py
        def get_client(self) -> Gitlab:\n    \"\"\"\n    Gets an authenticated GitLab client.\n\n    Returns:\n        An authenticated GitLab client.\n    \"\"\"\n    # ref: https://python-gitlab.readthedocs.io/en/stable/\n    gitlab = Gitlab(url=self.url, oauth_token=self.token.get_secret_value())\n    gitlab.auth()\n    return gitlab\n
        "},{"location":"integrations/prefect-gitlab/repositories/","title":"Repositories","text":""},{"location":"integrations/prefect-gitlab/repositories/#prefect_gitlab.repositories","title":"prefect_gitlab.repositories","text":"

        Integrations with GitLab.

        The GitLab class in this collection is a storage block that lets Prefect agents pull Prefect flow code from GitLab repositories.

        The GitLab block is ideally configured via the Prefect UI, but can also be used in Python as the following examples demonstrate.

        Examples:

            from prefect_gitlab.repositories import GitLabRepository\n\n    # public GitLab repository\n    public_gitlab_block = GitLabRepository(\n        name=\"my-gitlab-block\",\n        repository=\"https://gitlab.com/testing/my-repository.git\"\n    )\n\n    public_gitlab_block.save()\n\n\n    # specific branch or tag of a GitLab repository\n    branch_gitlab_block = GitLabRepository(\n        name=\"my-gitlab-block\",\n        reference=\"branch-or-tag-name\",\n        repository=\"https://gitlab.com/testing/my-repository.git\"\n    )\n\n    branch_gitlab_block.save()\n\n\n    # private GitLab repository\n    private_gitlab_block = GitLabRepository(\n        name=\"my-private-gitlab-block\",\n        repository=\"https://gitlab.com/testing/my-repository.git\",\n        access_token=\"YOUR_GITLAB_PERSONAL_ACCESS_TOKEN\"\n    )\n\n    private_gitlab_block.save()\n
        "},{"location":"integrations/prefect-gitlab/repositories/#prefect_gitlab.repositories.GitLabRepository","title":"GitLabRepository","text":"

        Bases: ReadableDeploymentStorage

        Interact with files stored in GitLab repositories.

        An accessible installation of git is required for this block to function properly.

        Source code in prefect_gitlab/repositories.py
        class GitLabRepository(ReadableDeploymentStorage):\n    \"\"\"\n    Interact with files stored in GitLab repositories.\n\n    An accessible installation of git is required for this block to function\n    properly.\n    \"\"\"\n\n    _block_type_name = \"GitLab Repository\"\n    _logo_url = HttpUrl(\n        url=\"https://images.ctfassets.net/gm98wzqotmnx/55edIimT4g9gbjhkh5a3Sp/dfdb9391d8f45c2e93e72e3a4d350771/gitlab-logo-500.png?h=250\",  # noqa\n        scheme=\"https\",\n    )\n    _description = \"Interact with files stored in GitLab repositories.\"\n\n    repository: str = Field(\n        default=...,\n        description=(\n            \"The URL of a GitLab repository to read from, in either HTTP/HTTPS or SSH format.\"  # noqa\n        ),\n    )\n    reference: Optional[str] = Field(\n        default=None,\n        description=\"An optional reference to pin to; can be a branch name or tag.\",\n    )\n    git_depth: Optional[int] = Field(\n        default=1,\n        gte=1,\n        description=\"The number of commits that Git history is truncated to \"\n        \"during cloning. Set to None to fetch the entire history.\",\n    )\n    credentials: Optional[GitLabCredentials] = Field(\n        default=None,\n        description=\"An optional GitLab Credentials block for authenticating with \"\n        \"private GitLab repos.\",\n    )\n\n    @validator(\"credentials\")\n    def _ensure_credentials_go_with_http(cls, v: str, values: dict) -> str:\n        \"\"\"Ensure that credentials are not provided with 'SSH' formatted GitLub URLs.\n        Note: validates `access_token` specifically so that it only fires when\n        private repositories are used.\n        \"\"\"\n        if v is not None:\n            if urllib.parse.urlparse(values[\"repository\"]).scheme not in [\n                \"https\",\n                \"http\",\n            ]:\n                raise InvalidRepositoryURLError(\n                    (\n                        \"Credentials can only be used with GitLab repositories \"\n                        \"using the 'HTTPS'/'HTTP' format. 
You must either remove the \"\n                        \"credential if you wish to use the 'SSH' format and are not \"\n                        \"using a private repository, or you must change the repository \"\n                        \"URL to the 'HTTPS'/'HTTP' format.\"\n                    )\n                )\n\n        return v\n\n    def _create_repo_url(self) -> str:\n        \"\"\"Format the URL provided to the `git clone` command.\n        For private repos: https://<oauth-key>@gitlab.com/<username>/<repo>.git\n        All other repos should be the same as `self.repository`.\n        \"\"\"\n        url_components = urllib.parse.urlparse(self.repository)\n        if url_components.scheme in [\"https\", \"http\"] and self.credentials is not None:\n            token = self.credentials.token.get_secret_value()\n            updated_components = url_components._replace(\n                netloc=f\"oauth2:{token}@{url_components.netloc}\"\n            )\n            full_url = urllib.parse.urlunparse(updated_components)\n        else:\n            full_url = self.repository\n\n        return full_url\n\n    @staticmethod\n    def _get_paths(\n        dst_dir: Union[str, None], src_dir: str, sub_directory: Optional[str]\n    ) -> Tuple[str, str]:\n        \"\"\"Returns the fully formed paths for GitLabRepository contents in the form\n        (content_source, content_destination).\n        \"\"\"\n        if dst_dir is None:\n            content_destination = Path(\".\").absolute()\n        else:\n            content_destination = Path(dst_dir)\n\n        content_source = Path(src_dir)\n\n        if sub_directory:\n            content_destination = content_destination.joinpath(sub_directory)\n            content_source = content_source.joinpath(sub_directory)\n\n        return str(content_source), str(content_destination)\n\n    @sync_compatible\n    @retry(\n        stop=stop_after_attempt(MAX_CLONE_ATTEMPTS),\n        wait=wait_fixed(CLONE_RETRY_MIN_DELAY_SECONDS)\n        + wait_random(\n            CLONE_RETRY_MIN_DELAY_JITTER_SECONDS,\n            CLONE_RETRY_MAX_DELAY_JITTER_SECONDS,\n        ),\n        reraise=True,\n    )\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> None:\n        \"\"\"\n        Clones a GitLab project specified in `from_path` to the provided `local_path`;\n        defaults to cloning the repository reference configured on the Block to the\n        present working directory.\n        Args:\n            from_path: If provided, interpreted as a subdirectory of the underlying\n                repository that will be copied to the provided local path.\n            local_path: A local path to clone to; defaults to present working directory.\n        \"\"\"\n        # CONSTRUCT COMMAND\n        cmd = [\"git\", \"clone\", self._create_repo_url()]\n        if self.reference:\n            cmd += [\"-b\", self.reference]\n\n        # Limit git history\n        if self.git_depth is not None:\n            cmd += [\"--depth\", f\"{self.git_depth}\"]\n\n        # Clone to a temporary directory and move the subdirectory over\n        with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n            cmd.append(tmp_dir)\n\n            err_stream = io.StringIO()\n            out_stream = io.StringIO()\n            process = await run_process(cmd, stream_output=(out_stream, err_stream))\n            if process.returncode != 0:\n                err_stream.seek(0)\n                raise OSError(f\"Failed 
to pull from remote:\\n {err_stream.read()}\")\n\n            content_source, content_destination = self._get_paths(\n                dst_dir=local_path, src_dir=tmp_dir, sub_directory=from_path\n            )\n\n            copy_tree(src=content_source, dst=content_destination)\n
        "},{"location":"integrations/prefect-gitlab/repositories/#prefect_gitlab.repositories.GitLabRepository.get_directory","title":"get_directory async","text":"

        Clones a GitLab project specified in from_path to the provided local_path; defaults to cloning the repository reference configured on the block to the present working directory. If from_path is provided, it is interpreted as a subdirectory of the underlying repository that will be copied to the provided local path; local_path is a local path to clone to and defaults to the present working directory.
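
        A minimal sketch, assuming a GitLabRepository block saved under the name \"my-gitlab-block\" and a \"flows\" subdirectory in the repository (both illustrative):

        from prefect_gitlab import GitLabRepository\n\nrepo_block = GitLabRepository.load(\"my-gitlab-block\")\n# Copy only the repository's \"flows\" subdirectory into ./local-flows\nrepo_block.get_directory(from_path=\"flows\", local_path=\"./local-flows\")\n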

        Source code in prefect_gitlab/repositories.py
        @sync_compatible\n@retry(\n    stop=stop_after_attempt(MAX_CLONE_ATTEMPTS),\n    wait=wait_fixed(CLONE_RETRY_MIN_DELAY_SECONDS)\n    + wait_random(\n        CLONE_RETRY_MIN_DELAY_JITTER_SECONDS,\n        CLONE_RETRY_MAX_DELAY_JITTER_SECONDS,\n    ),\n    reraise=True,\n)\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> None:\n    \"\"\"\n    Clones a GitLab project specified in `from_path` to the provided `local_path`;\n    defaults to cloning the repository reference configured on the Block to the\n    present working directory.\n    Args:\n        from_path: If provided, interpreted as a subdirectory of the underlying\n            repository that will be copied to the provided local path.\n        local_path: A local path to clone to; defaults to present working directory.\n    \"\"\"\n    # CONSTRUCT COMMAND\n    cmd = [\"git\", \"clone\", self._create_repo_url()]\n    if self.reference:\n        cmd += [\"-b\", self.reference]\n\n    # Limit git history\n    if self.git_depth is not None:\n        cmd += [\"--depth\", f\"{self.git_depth}\"]\n\n    # Clone to a temporary directory and move the subdirectory over\n    with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n        cmd.append(tmp_dir)\n\n        err_stream = io.StringIO()\n        out_stream = io.StringIO()\n        process = await run_process(cmd, stream_output=(out_stream, err_stream))\n        if process.returncode != 0:\n            err_stream.seek(0)\n            raise OSError(f\"Failed to pull from remote:\\n {err_stream.read()}\")\n\n        content_source, content_destination = self._get_paths(\n            dst_dir=local_path, src_dir=tmp_dir, sub_directory=from_path\n        )\n\n        copy_tree(src=content_source, dst=content_destination)\n
        "},{"location":"integrations/prefect-kubernetes/","title":"prefect-kubernetes","text":""},{"location":"integrations/prefect-kubernetes/#welcome","title":"Welcome!","text":"

        prefect-kubernetes is a collection of Prefect tasks, flows, and blocks enabling orchestration, observation and management of Kubernetes resources.

        Jump to examples.

        "},{"location":"integrations/prefect-kubernetes/#resources","title":"Resources","text":"

        For more tips on how to use tasks and flows in a Collection, check out Using Collections!

        "},{"location":"integrations/prefect-kubernetes/#installation","title":"Installation","text":"

        Install prefect-kubernetes with pip:

        pip install prefect-kubernetes\n

        Requires an installation of Python 3.8+.

        We recommend using a Python virtual environment manager such as pipenv, conda or virtualenv.

        These tasks are designed to work with Prefect 2. For more information about how to use Prefect, please refer to the Prefect documentation (https://docs.prefect.io/).

        Then, to register blocks (https://docs.prefect.io/ui/blocks/) on Prefect Cloud:

        prefect block register -m prefect_kubernetes\n

        Note: to use the load method on blocks, you must already have a block document saved through code or through the UI.

        "},{"location":"integrations/prefect-kubernetes/#example-usage","title":"Example Usage","text":""},{"location":"integrations/prefect-kubernetes/#use-with_options-to-customize-options-on-any-existing-task-or-flow","title":"Use with_options to customize options on any existing task or flow","text":"
        from prefect_kubernetes.flows import run_namespaced_job\n\ncustomized_run_namespaced_job = run_namespaced_job.with_options(\n    name=\"My flow running a Kubernetes Job\",\n    retries=2,\n    retry_delay_seconds=10,\n) # this is now a new flow object that can be called\n

        For more tips on how to use tasks and flows in a Collection, check out Using Collections!

        "},{"location":"integrations/prefect-kubernetes/#specify-and-run-a-kubernetes-job-from-a-yaml-file","title":"Specify and run a Kubernetes Job from a yaml file","text":"
        from prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.flows import run_namespaced_job # this is a flow\nfrom prefect_kubernetes.jobs import KubernetesJob\n\nk8s_creds = KubernetesCredentials.load(\"k8s-creds\")\n\njob = KubernetesJob.from_yaml_file( # or create in the UI with a dict manifest\n    credentials=k8s_creds,\n    manifest_path=\"path/to/job.yaml\",\n)\n\njob.save(\"my-k8s-job\", overwrite=True)\n\nif __name__ == \"__main__\":\n    # run the flow\n    run_namespaced_job(job)\n
        "},{"location":"integrations/prefect-kubernetes/#generate-a-resource-specific-client-from-kubernetesclusterconfig","title":"Generate a resource-specific client from KubernetesClusterConfig","text":"
        # with minikube / docker desktop & a valid ~/.kube/config this should ~just work~\nfrom prefect.blocks.kubernetes import KubernetesClusterConfig\nfrom prefect_kubernetes.credentials import KubernetesCredentials\n\nk8s_config = KubernetesClusterConfig.from_file('~/.kube/config')\n\nk8s_credentials = KubernetesCredentials(cluster_config=k8s_config)\n\nwith k8s_credentials.get_client(\"core\") as v1_core_client:\n    for namespace in v1_core_client.list_namespace().items:\n        print(namespace.metadata.name)\n
        "},{"location":"integrations/prefect-kubernetes/#list-jobs-in-a-specific-namespace","title":"List jobs in a specific namespace","text":"
        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.jobs import list_namespaced_job\n\n@flow\ndef kubernetes_orchestrator():\n    v1_job_list = list_namespaced_job(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        namespace=\"my-namespace\",\n    )\n
        "},{"location":"integrations/prefect-kubernetes/#patch-an-existing-deployment","title":"Patch an existing deployment","text":"
        from kubernetes.client.models import V1Deployment\n\nfrom prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.deployments import patch_namespaced_deployment\nfrom prefect_kubernetes.utilities import convert_manifest_to_model\n\n@flow\ndef kubernetes_orchestrator():\n\n    v1_deployment_updates = convert_manifest_to_model(\n        manifest=\"path/to/manifest.yaml\",\n        v1_model_name=\"V1Deployment\",\n    )\n\n    v1_deployment = patch_namespaced_deployment(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        deployment_name=\"my-deployment\",\n        deployment_updates=v1_deployment_updates,\n        namespace=\"my-namespace\"\n    )\n
        "},{"location":"integrations/prefect-kubernetes/#feedback","title":"Feedback","text":"

        If you encounter any bugs while using prefect-kubernetes, feel free to open an issue in the prefect repository.

        If you have any questions or issues while using prefect-kubernetes, you can find help in either the Prefect Discourse forum or the Prefect Slack community.

        "},{"location":"integrations/prefect-kubernetes/#contributing","title":"Contributing","text":"

        If you'd like to help contribute to fix an issue or add a feature to prefect-kubernetes, please propose changes through a pull request from a fork of the repository.

        Here are the steps:

        1. Fork the repository
        2. Clone the forked repository
        3. Install the repository and its dependencies:
           pip install -e \".[dev]\"\n
        4. Make desired changes
        5. Add tests
        6. Install pre-commit to perform quality checks prior to commit: pre-commit install
        7. git commit, git push, and create a pull request
        "},{"location":"integrations/prefect-kubernetes/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-kubernetes/credentials/#prefect_kubernetes.credentials","title":"prefect_kubernetes.credentials","text":"

        Module for defining Kubernetes credential handling and client generation.

        "},{"location":"integrations/prefect-kubernetes/credentials/#prefect_kubernetes.credentials.KubernetesClusterConfig","title":"KubernetesClusterConfig","text":"

        Bases: Block

        Stores configuration for interaction with Kubernetes clusters.

        See from_file for creation.

        Attributes:

        Name Type Description config Dict

        The entire loaded YAML contents of a kubectl config file

        context_name str

        The name of the kubectl context to use

        Example

        Load a saved Kubernetes cluster config:

        from prefect_kubernetes.credentials import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n

        Source code in prefect_kubernetes/credentials.py
        class KubernetesClusterConfig(Block):\n    \"\"\"\n    Stores configuration for interaction with Kubernetes clusters.\n\n    See `from_file` for creation.\n\n    Attributes:\n        config: The entire loaded YAML contents of a kubectl config file\n        context_name: The name of the kubectl context to use\n\n    Example:\n        Load a saved Kubernetes cluster config:\n        ```python\n        from prefect_kubernetes.credentials import import KubernetesClusterConfig\n\n        cluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Kubernetes Cluster Config\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/2d0b896006ad463b49c28aaac14f31e00e32cfab-250x250.png\"\n    _documentation_url = \"https://prefecthq.github.io/prefect-kubernetes/credentials/#prefect_kubernetes.credentials.KubernetesClusterConfig\"  # noqa\n    config: Dict = Field(\n        default=..., description=\"The entire contents of a kubectl config file.\"\n    )\n    context_name: str = Field(\n        default=..., description=\"The name of the kubectl context to use.\"\n    )\n\n    @validator(\"config\", pre=True)\n    def parse_yaml_config(cls, value):\n        if isinstance(value, str):\n            return yaml.safe_load(value)\n        return value\n\n    @classmethod\n    def from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n        \"\"\"\n        Create a cluster config from the a Kubernetes config file.\n\n        By default, the current context in the default Kubernetes config file will be\n        used.\n\n        An alternative file or context may be specified.\n\n        The entire config file will be loaded and stored.\n        \"\"\"\n\n        path = Path(path or config.kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n        path = path.expanduser().resolve()\n\n        # Determine the context\n        (\n            existing_contexts,\n            current_context,\n        ) = config.kube_config.list_kube_config_contexts(config_file=str(path))\n        context_names = {ctx[\"name\"] for ctx in existing_contexts}\n        if context_name:\n            if context_name not in context_names:\n                raise ValueError(\n                    f\"Context {context_name!r} not found. \"\n                    f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n                )\n        else:\n            context_name = current_context[\"name\"]\n\n        # Load the entire config file\n        config_file_contents = path.read_text()\n        config_dict = yaml.safe_load(config_file_contents)\n\n        return cls(config=config_dict, context_name=context_name)\n\n    def get_api_client(self) -> \"ApiClient\":\n        \"\"\"\n        Returns a Kubernetes API client for this cluster config.\n        \"\"\"\n        return config.kube_config.new_client_from_config_dict(\n            config_dict=self.config, context=self.context_name\n        )\n\n    def configure_client(self) -> None:\n        \"\"\"\n        Activates this cluster configuration by loading the configuration into the\n        Kubernetes Python client. After calling this, Kubernetes API clients can use\n        this config's context.\n        \"\"\"\n        config.kube_config.load_kube_config_from_dict(\n            config_dict=self.config, context=self.context_name\n        )\n
        "},{"location":"integrations/prefect-kubernetes/credentials/#prefect_kubernetes.credentials.KubernetesClusterConfig.configure_client","title":"configure_client","text":"

        Activates this cluster configuration by loading the configuration into the Kubernetes Python client. After calling this, Kubernetes API clients can use this config's context.

        Source code in prefect_kubernetes/credentials.py
        def configure_client(self) -> None:\n    \"\"\"\n    Activates this cluster configuration by loading the configuration into the\n    Kubernetes Python client. After calling this, Kubernetes API clients can use\n    this config's context.\n    \"\"\"\n    config.kube_config.load_kube_config_from_dict(\n        config_dict=self.config, context=self.context_name\n    )\n
        "},{"location":"integrations/prefect-kubernetes/credentials/#prefect_kubernetes.credentials.KubernetesClusterConfig.from_file","title":"from_file classmethod","text":"

        Create a cluster config from a Kubernetes config file.

        By default, the current context in the default Kubernetes config file will be used.

        An alternative file or context may be specified.

        The entire config file will be loaded and stored.
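
        A minimal sketch, assuming a kube config at the default location and a context named \"docker-desktop\" (both illustrative):

        from prefect_kubernetes.credentials import KubernetesClusterConfig\n\n# Pin a specific context from ~/.kube/config and persist it as a block\ncluster_config = KubernetesClusterConfig.from_file(context_name=\"docker-desktop\")\ncluster_config.save(\"my-cluster-config\", overwrite=True)\n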

        Source code in prefect_kubernetes/credentials.py
        @classmethod\ndef from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n    \"\"\"\n    Create a cluster config from the a Kubernetes config file.\n\n    By default, the current context in the default Kubernetes config file will be\n    used.\n\n    An alternative file or context may be specified.\n\n    The entire config file will be loaded and stored.\n    \"\"\"\n\n    path = Path(path or config.kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n    path = path.expanduser().resolve()\n\n    # Determine the context\n    (\n        existing_contexts,\n        current_context,\n    ) = config.kube_config.list_kube_config_contexts(config_file=str(path))\n    context_names = {ctx[\"name\"] for ctx in existing_contexts}\n    if context_name:\n        if context_name not in context_names:\n            raise ValueError(\n                f\"Context {context_name!r} not found. \"\n                f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n            )\n    else:\n        context_name = current_context[\"name\"]\n\n    # Load the entire config file\n    config_file_contents = path.read_text()\n    config_dict = yaml.safe_load(config_file_contents)\n\n    return cls(config=config_dict, context_name=context_name)\n
        "},{"location":"integrations/prefect-kubernetes/credentials/#prefect_kubernetes.credentials.KubernetesClusterConfig.get_api_client","title":"get_api_client","text":"

        Returns a Kubernetes API client for this cluster config.

        Source code in prefect_kubernetes/credentials.py
        def get_api_client(self) -> \"ApiClient\":\n    \"\"\"\n    Returns a Kubernetes API client for this cluster config.\n    \"\"\"\n    return config.kube_config.new_client_from_config_dict(\n        config_dict=self.config, context=self.context_name\n    )\n
        "},{"location":"integrations/prefect-kubernetes/credentials/#prefect_kubernetes.credentials.KubernetesCredentials","title":"KubernetesCredentials","text":"

        Bases: Block

        Credentials block for generating configured Kubernetes API clients.

        Attributes:

        Name Type Description cluster_config Optional[KubernetesClusterConfig]

        A KubernetesClusterConfig block holding a JSON kube config for a specific kubernetes context.

        Example

        Load stored Kubernetes credentials:

        from prefect_kubernetes.credentials import KubernetesCredentials\n\nkubernetes_credentials = KubernetesCredentials.load(\"BLOCK_NAME\")\n

        Source code in prefect_kubernetes/credentials.py
        class KubernetesCredentials(Block):\n    \"\"\"Credentials block for generating configured Kubernetes API clients.\n\n    Attributes:\n        cluster_config: A `KubernetesClusterConfig` block holding a JSON kube\n            config for a specific kubernetes context.\n\n    Example:\n        Load stored Kubernetes credentials:\n        ```python\n        from prefect_kubernetes.credentials import KubernetesCredentials\n\n        kubernetes_credentials = KubernetesCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Kubernetes Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/2d0b896006ad463b49c28aaac14f31e00e32cfab-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-kubernetes/credentials/#prefect_kubernetes.credentials.KubernetesCredentials\"  # noqa\n\n    cluster_config: Optional[KubernetesClusterConfig] = None\n\n    @contextmanager\n    def get_client(\n        self,\n        client_type: Literal[\"apps\", \"batch\", \"core\", \"custom_objects\"],\n        configuration: Optional[Configuration] = None,\n    ) -> Generator[KubernetesClient, None, None]:\n        \"\"\"Convenience method for retrieving a Kubernetes API client for deployment resources.\n\n        Args:\n            client_type: The resource-specific type of Kubernetes client to retrieve.\n\n        Yields:\n            An authenticated, resource-specific Kubernetes API client.\n\n        Example:\n            ```python\n            from prefect_kubernetes.credentials import KubernetesCredentials\n\n            with KubernetesCredentials.get_client(\"core\") as core_v1_client:\n                for pod in core_v1_client.list_namespaced_pod():\n                    print(pod.metadata.name)\n            ```\n        \"\"\"\n        client_config = configuration or Configuration()\n\n        with ApiClient(configuration=client_config) as generic_client:\n            try:\n                yield self.get_resource_specific_client(client_type)\n            finally:\n                generic_client.rest_client.pool_manager.clear()\n\n    def get_resource_specific_client(\n        self,\n        client_type: str,\n    ) -> Union[AppsV1Api, BatchV1Api, CoreV1Api]:\n        \"\"\"\n        Utility function for configuring a generic Kubernetes client.\n        It will attempt to connect to a Kubernetes cluster in three steps with\n        the first successful connection attempt becoming the mode of communication with\n        a cluster:\n\n        1. It will first attempt to use a `KubernetesCredentials` block's\n        `cluster_config` to configure a client using\n        `KubernetesClusterConfig.configure_client`.\n\n        2. Attempt in-cluster connection (will only work when running on a pod).\n\n        3. 
Attempt out-of-cluster connection using the default location for a\n        kube config file.\n\n        Args:\n            client_type: The Kubernetes API client type for interacting with specific\n                Kubernetes resources.\n\n        Returns:\n            KubernetesClient: An authenticated, resource-specific Kubernetes Client.\n\n        Raises:\n            ValueError: If `client_type` is not a valid Kubernetes API client type.\n        \"\"\"\n\n        if self.cluster_config:\n            self.cluster_config.configure_client()\n        else:\n            try:\n                config.load_incluster_config()\n            except ConfigException:\n                config.load_kube_config()\n\n        try:\n            return K8S_CLIENT_TYPES[client_type]()\n        except KeyError:\n            raise ValueError(\n                f\"Invalid client type provided '{client_type}'.\"\n                f\" Must be one of {listrepr(K8S_CLIENT_TYPES.keys())}.\"\n            )\n
        "},{"location":"integrations/prefect-kubernetes/credentials/#prefect_kubernetes.credentials.KubernetesCredentials.get_client","title":"get_client","text":"

        Convenience method for retrieving a Kubernetes API client for deployment resources.

        Parameters:

        Name Type Description Default client_type Literal['apps', 'batch', 'core', 'custom_objects']

        The resource-specific type of Kubernetes client to retrieve.

        required

        Yields:

        Type Description KubernetesClient

        An authenticated, resource-specific Kubernetes API client.

        Example
        from prefect_kubernetes.credentials import KubernetesCredentials\n\nkubernetes_credentials = KubernetesCredentials.load(\"BLOCK_NAME\")\n\nwith kubernetes_credentials.get_client(\"core\") as core_v1_client:\n    # list_namespaced_pod requires a namespace and returns a V1PodList\n    for pod in core_v1_client.list_namespaced_pod(namespace=\"default\").items:\n        print(pod.metadata.name)\n
        Source code in prefect_kubernetes/credentials.py
        @contextmanager\ndef get_client(\n    self,\n    client_type: Literal[\"apps\", \"batch\", \"core\", \"custom_objects\"],\n    configuration: Optional[Configuration] = None,\n) -> Generator[KubernetesClient, None, None]:\n    \"\"\"Convenience method for retrieving a Kubernetes API client for deployment resources.\n\n    Args:\n        client_type: The resource-specific type of Kubernetes client to retrieve.\n\n    Yields:\n        An authenticated, resource-specific Kubernetes API client.\n\n    Example:\n        ```python\n        from prefect_kubernetes.credentials import KubernetesCredentials\n\n        with KubernetesCredentials.get_client(\"core\") as core_v1_client:\n            for pod in core_v1_client.list_namespaced_pod():\n                print(pod.metadata.name)\n        ```\n    \"\"\"\n    client_config = configuration or Configuration()\n\n    with ApiClient(configuration=client_config) as generic_client:\n        try:\n            yield self.get_resource_specific_client(client_type)\n        finally:\n            generic_client.rest_client.pool_manager.clear()\n
        "},{"location":"integrations/prefect-kubernetes/credentials/#prefect_kubernetes.credentials.KubernetesCredentials.get_resource_specific_client","title":"get_resource_specific_client","text":"

        Utility function for configuring a generic Kubernetes client. It will attempt to connect to a Kubernetes cluster in three steps with the first successful connection attempt becoming the mode of communication with a cluster:

        1. It will first attempt to use a KubernetesCredentials block's cluster_config to configure a client using KubernetesClusterConfig.configure_client.

        2. Attempt in-cluster connection (will only work when running on a pod).

        3. Attempt out-of-cluster connection using the default location for a kube config file.

        Parameters:

        Name Type Description Default client_type str

        The Kubernetes API client type for interacting with specific Kubernetes resources.

        required

        Returns:

        Name Type Description KubernetesClient Union[AppsV1Api, BatchV1Api, CoreV1Api]

        An authenticated, resource-specific Kubernetes Client.

        Raises:

        Type Description ValueError

        If client_type is not a valid Kubernetes API client type.
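
        A minimal sketch, assuming a KubernetesCredentials block saved under the hypothetical name \"k8s-creds\":

        from prefect_kubernetes.credentials import KubernetesCredentials\n\nk8s_credentials = KubernetesCredentials.load(\"k8s-creds\")  # hypothetical block name\nbatch_client = k8s_credentials.get_resource_specific_client(\"batch\")\nfor job in batch_client.list_namespaced_job(namespace=\"default\").items:\n    print(job.metadata.name)\n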

        Source code in prefect_kubernetes/credentials.py
        def get_resource_specific_client(\n    self,\n    client_type: str,\n) -> Union[AppsV1Api, BatchV1Api, CoreV1Api]:\n    \"\"\"\n    Utility function for configuring a generic Kubernetes client.\n    It will attempt to connect to a Kubernetes cluster in three steps with\n    the first successful connection attempt becoming the mode of communication with\n    a cluster:\n\n    1. It will first attempt to use a `KubernetesCredentials` block's\n    `cluster_config` to configure a client using\n    `KubernetesClusterConfig.configure_client`.\n\n    2. Attempt in-cluster connection (will only work when running on a pod).\n\n    3. Attempt out-of-cluster connection using the default location for a\n    kube config file.\n\n    Args:\n        client_type: The Kubernetes API client type for interacting with specific\n            Kubernetes resources.\n\n    Returns:\n        KubernetesClient: An authenticated, resource-specific Kubernetes Client.\n\n    Raises:\n        ValueError: If `client_type` is not a valid Kubernetes API client type.\n    \"\"\"\n\n    if self.cluster_config:\n        self.cluster_config.configure_client()\n    else:\n        try:\n            config.load_incluster_config()\n        except ConfigException:\n            config.load_kube_config()\n\n    try:\n        return K8S_CLIENT_TYPES[client_type]()\n    except KeyError:\n        raise ValueError(\n            f\"Invalid client type provided '{client_type}'.\"\n            f\" Must be one of {listrepr(K8S_CLIENT_TYPES.keys())}.\"\n        )\n
        "},{"location":"integrations/prefect-kubernetes/custom_objects/","title":"Custom Objects","text":""},{"location":"integrations/prefect-kubernetes/custom_objects/#prefect_kubernetes.custom_objects","title":"prefect_kubernetes.custom_objects","text":""},{"location":"integrations/prefect-kubernetes/custom_objects/#prefect_kubernetes.custom_objects.create_namespaced_custom_object","title":"create_namespaced_custom_object async","text":"

        Task for creating a namespaced custom object.

        Parameters:

        Name Type Description Default kubernetes_credentials KubernetesCredentials

        KubernetesCredentials block holding authentication needed to generate the required API client.

        required group str

        The custom resource object's group

        required version str

        The custom resource object's version

        required plural str

        The custom resource object's plural

        required body Dict[str, Any]

        A Dict containing the custom resource object's specification.

        required namespace Optional[str]

        The Kubernetes namespace to create the custom object in.

        'default' **kube_kwargs Dict[str, Any]

        Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

        {}

        Returns:

        Type Description object

        object containing the custom resource created by this task.

        Example

        Create a custom object in the default namespace:

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.custom_objects import create_namespaced_custom_object\n\n@flow\ndef kubernetes_orchestrator():\n    custom_object_metadata = create_namespaced_custom_object(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        group=\"crd-group\",\n        version=\"crd-version\",\n        plural=\"crd-plural\",\n        body={\n            'api': 'crd-version',\n            'kind': 'crd-kind',\n            'metadata': {\n                'name': 'crd-name',\n            },\n        },\n    )\n

        Source code in prefect_kubernetes/custom_objects.py
        @task\nasync def create_namespaced_custom_object(\n    kubernetes_credentials: KubernetesCredentials,\n    group: str,\n    version: str,\n    plural: str,\n    body: Dict[str, Any],\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> object:\n    \"\"\"Task for creating a namespaced custom object.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block\n            holding authentication needed to generate the required API client.\n        group: The custom resource object's group\n        version: The custom resource object's version\n        plural: The custom resource object's plural\n        body: A Dict containing the custom resource object's specification.\n        namespace: The Kubernetes namespace to create the custom object in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Returns:\n        object containing the custom resource created by this task.\n\n    Example:\n        Create a custom object in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.custom_objects import create_namespaced_custom_object\n\n        @flow\n        def kubernetes_orchestrator():\n            custom_object_metadata = create_namespaced_custom_object(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                group=\"crd-group\",\n                version=\"crd-version\",\n                plural=\"crd-plural\",\n                body={\n                    'api': 'crd-version',\n                    'kind': 'crd-kind',\n                    'metadata': {\n                        'name': 'crd-name',\n                    },\n                },\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"custom_objects\") as custom_objects_client:\n        return await run_sync_in_worker_thread(\n            custom_objects_client.create_namespaced_custom_object,\n            group=group,\n            version=version,\n            plural=plural,\n            body=body,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/custom_objects/#prefect_kubernetes.custom_objects.delete_namespaced_custom_object","title":"delete_namespaced_custom_object async","text":"

        Task for deleting a namespaced custom object.

        Parameters:

kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block holding authentication needed to generate the required API client.

group (str, required): The custom resource object's group.

version (str, required): The custom resource object's version.

plural (str, required): The custom resource object's plural.

name (str, required): The name of a custom object to delete.

namespace (Optional[str], default 'default'): The Kubernetes namespace to delete this custom object from.

**kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

Returns:

object: object containing the custom resource deleted by this task.

        Example

        Delete \"my-custom-object\" in the default namespace:

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.custom_objects import delete_namespaced_custom_object\n\n@flow\ndef kubernetes_orchestrator():\n    custom_object_metadata = delete_namespaced_custom_object(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        group=\"crd-group\",\n        version=\"crd-version\",\n        plural=\"crd-plural\",\n        name=\"my-custom-object\",\n    )\n

        Source code in prefect_kubernetes/custom_objects.py
        @task\nasync def delete_namespaced_custom_object(\n    kubernetes_credentials: KubernetesCredentials,\n    group: str,\n    version: str,\n    plural: str,\n    name: str,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> object:\n    \"\"\"Task for deleting a namespaced custom object.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block\n            holding authentication needed to generate the required API client.\n        group: The custom resource object's group\n        version: The custom resource object's version\n        plural: The custom resource object's plural\n        name: The name of a custom object to delete.\n        namespace: The Kubernetes namespace to create this custom object in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n\n    Returns:\n        object containing the custom resource deleted by this task.\n\n    Example:\n        Delete \"my-custom-object\" in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.custom_objects import delete_namespaced_custom_object\n\n        @flow\n        def kubernetes_orchestrator():\n            custom_object_metadata = delete_namespaced_custom_object(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                group=\"crd-group\",\n                version=\"crd-version\",\n                plural=\"crd-plural\",\n                name=\"my-custom-object\",\n            )\n        ```\n    \"\"\"\n\n    with kubernetes_credentials.get_client(\"custom_objects\") as custom_objects_client:\n        return await run_sync_in_worker_thread(\n            custom_objects_client.delete_namespaced_custom_object,\n            group=group,\n            version=version,\n            plural=plural,\n            name=name,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/custom_objects/#prefect_kubernetes.custom_objects.get_namespaced_custom_object","title":"get_namespaced_custom_object async","text":"

        Task for reading a namespaced Kubernetes custom object.

        Parameters:

kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block holding authentication needed to generate the required API client.

group (str, required): The custom resource object's group.

version (str, required): The custom resource object's version.

plural (str, required): The custom resource object's plural.

name (str, required): The name of a custom object to read.

namespace (Optional[str], default 'default'): The Kubernetes namespace the custom resource is in.

**kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

Raises:

ValueError: if name is None.

Returns:

object: object containing the custom resource specification.

        Example

        Read \"my-custom-object\" in the default namespace:

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.custom_objects import get_namespaced_custom_object\n\n@flow\ndef kubernetes_orchestrator():\n    custom_object_metadata = get_namespaced_custom_object(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        group=\"crd-group\",\n        version=\"crd-version\",\n        plural=\"crd-plural\",\n        name=\"my-custom-object\",\n    )\n

        Source code in prefect_kubernetes/custom_objects.py
        @task\nasync def get_namespaced_custom_object(\n    kubernetes_credentials: KubernetesCredentials,\n    group: str,\n    version: str,\n    plural: str,\n    name: str,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> object:\n    \"\"\"Task for reading a namespaced Kubernetes custom object.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block\n            holding authentication needed to generate the required API client.\n        group: The custom resource object's group\n        version: The custom resource object's version\n        plural: The custom resource object's plural\n        name: The name of a custom object to read.\n        namespace: The Kubernetes namespace the custom resource is in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Raises:\n        ValueError: if `name` is `None`.\n\n    Returns:\n        object containing the custom resource specification.\n\n    Example:\n        Read \"my-custom-object\" in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.custom_objects import get_namespaced_custom_object\n\n        @flow\n        def kubernetes_orchestrator():\n            custom_object_metadata = get_namespaced_custom_object(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                group=\"crd-group\",\n                version=\"crd-version\",\n                plural=\"crd-plural\",\n                name=\"my-custom-object\",\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"custom_objects\") as custom_objects_client:\n        return await run_sync_in_worker_thread(\n            custom_objects_client.get_namespaced_custom_object,\n            group=group,\n            version=version,\n            plural=plural,\n            name=name,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/custom_objects/#prefect_kubernetes.custom_objects.get_namespaced_custom_object_status","title":"get_namespaced_custom_object_status async","text":"

        Task for fetching status of a namespaced custom object.

        Parameters:

kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block holding authentication needed to generate the required API client.

group (str, required): The custom resource object's group.

version (str, required): The custom resource object's version.

plural (str, required): The custom resource object's plural.

name (str, required): The name of a custom object to read.

namespace (str, default 'default'): The Kubernetes namespace the custom resource is in.

**kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

Returns:

object: object containing the custom-object specification with status.

        Example

        Fetch status of \"my-custom-object\" in the default namespace:

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.custom_objects import (\n    get_namespaced_custom_object_status,\n)\n\n@flow\ndef kubernetes_orchestrator():\n    custom_object_metadata = get_namespaced_custom_object_status(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        group=\"crd-group\",\n        version=\"crd-version\",\n        plural=\"crd-plural\",\n        name=\"my-custom-object\",\n    )\n

        Source code in prefect_kubernetes/custom_objects.py
        @task\nasync def get_namespaced_custom_object_status(\n    kubernetes_credentials: KubernetesCredentials,\n    group: str,\n    version: str,\n    plural: str,\n    name: str,\n    namespace: str = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> object:\n    \"\"\"Task for fetching status of a namespaced custom object.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block\n            holding authentication needed to generate the required API client.\n        group: The custom resource object's group\n        version: The custom resource object's version\n        plural: The custom resource object's plural\n        name: The name of a custom object to read.\n        namespace: The Kubernetes namespace the custom resource is in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Returns:\n        object containing the custom-object specification with status.\n\n    Example:\n        Fetch status of \"my-custom-object\" in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.custom_objects import (\n            get_namespaced_custom_object_status,\n        )\n\n        @flow\n        def kubernetes_orchestrator():\n            custom_object_metadata = get_namespaced_custom_object_status(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                group=\"crd-group\",\n                version=\"crd-version\",\n                plural=\"crd-plural\",\n                name=\"my-custom-object\",\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"custom_objects\") as custom_objects_client:\n        return await run_sync_in_worker_thread(\n            custom_objects_client.get_namespaced_custom_object_status,\n            group=group,\n            version=version,\n            plural=plural,\n            name=name,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
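Custom objects are typically returned by the underlying client as plain dictionaries rather than typed models, so the status can be read with ordinary dict access. A minimal sketch under that assumption; the "phase" field is purely illustrative and depends entirely on the CRD's schema.

```python
from prefect import flow
from prefect_kubernetes.credentials import KubernetesCredentials
from prefect_kubernetes.custom_objects import get_namespaced_custom_object_status

@flow
def check_custom_object_phase():
    custom_object = get_namespaced_custom_object_status(
        kubernetes_credentials=KubernetesCredentials.load("k8s-creds"),
        group="crd-group",
        version="crd-version",
        plural="crd-plural",
        name="my-custom-object",
    )
    # The exact keys under "status" depend on the CRD; "phase" is a
    # hypothetical example of a commonly published field.
    return custom_object.get("status", {}).get("phase")
```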
        "},{"location":"integrations/prefect-kubernetes/custom_objects/#prefect_kubernetes.custom_objects.list_namespaced_custom_object","title":"list_namespaced_custom_object async","text":"

        Task for listing namespaced custom objects.

        Parameters:

kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block holding authentication needed to generate the required API client.

group (str, required): The custom resource object's group.

version (str, required): The custom resource object's version.

plural (str, required): The custom resource object's plural.

namespace (str, default 'default'): The Kubernetes namespace to list custom resources for.

**kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

Returns:

object: object containing a list of custom resources.

        Example

        List custom resources in \"my-namespace\":

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.custom_objects import list_namespaced_custom_object\n\n@flow\ndef kubernetes_orchestrator():\n    namespaced_custom_objects_list = list_namespaced_custom_object(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        group=\"crd-group\",\n        version=\"crd-version\",\n        plural=\"crd-plural\",\n        namespace=\"my-namespace\",\n    )\n

        Source code in prefect_kubernetes/custom_objects.py
        @task\nasync def list_namespaced_custom_object(\n    kubernetes_credentials: KubernetesCredentials,\n    group: str,\n    version: str,\n    plural: str,\n    namespace: str = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> object:\n    \"\"\"Task for listing namespaced custom objects.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block\n            holding authentication needed to generate the required API client.\n        group: The custom resource object's group\n        version: The custom resource object's version\n        plural: The custom resource object's plural\n        namespace: The Kubernetes namespace to list custom resources for.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Returns:\n        object containing a list of custom resources.\n\n    Example:\n        List custom resources in \"my-namespace\":\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.custom_objects import list_namespaced_custom_object\n\n        @flow\n        def kubernetes_orchestrator():\n            namespaced_custom_objects_list = list_namespaced_custom_object(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                group=\"crd-group\",\n                version=\"crd-version\",\n                plural=\"crd-plural\",\n                namespace=\"my-namespace\",\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"custom_objects\") as custom_objects_client:\n        return await run_sync_in_worker_thread(\n            custom_objects_client.list_namespaced_custom_object,\n            group=group,\n            version=version,\n            plural=plural,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/custom_objects/#prefect_kubernetes.custom_objects.patch_namespaced_custom_object","title":"patch_namespaced_custom_object async","text":"

        Task for patching a namespaced custom resource.

        Parameters:

kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block holding authentication needed to generate the required API client.

group (str, required): The custom resource object's group.

version (str, required): The custom resource object's version.

plural (str, required): The custom resource object's plural.

name (str, required): The name of a custom object to patch.

body (Dict[str, Any], required): A Dict containing the custom resource object's patch.

namespace (str, default 'default'): The custom resource's Kubernetes namespace.

**kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

Raises:

ValueError: if body is None.

Returns:

object: object containing the custom resource specification after the patch gets applied.

        Example

        Patch \"my-custom-object\" in the default namespace:

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.custom_objects import (\n    patch_namespaced_custom_object,\n)\n\n@flow\ndef kubernetes_orchestrator():\n    custom_object_metadata = patch_namespaced_custom_object(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        group=\"crd-group\",\n        version=\"crd-version\",\n        plural=\"crd-plural\",\n        name=\"my-custom-object\",\n        body={\n            'api': 'crd-version',\n            'kind': 'crd-kind',\n            'metadata': {\n                'name': 'my-custom-object',\n            },\n        },\n    )\n

        Source code in prefect_kubernetes/custom_objects.py
        @task\nasync def patch_namespaced_custom_object(\n    kubernetes_credentials: KubernetesCredentials,\n    group: str,\n    version: str,\n    plural: str,\n    name: str,\n    body: Dict[str, Any],\n    namespace: str = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> object:\n    \"\"\"Task for patching a namespaced custom resource.\n\n    Args:\n        kubernetes_credentials: KubernetesCredentials block\n            holding authentication needed to generate the required API client.\n        group: The custom resource object's group\n        version: The custom resource object's version\n        plural: The custom resource object's plural\n        name: The name of a custom object to patch.\n        body: A Dict containing the custom resource object's patch.\n        namespace: The custom resource's Kubernetes namespace.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Raises:\n        ValueError: if `body` is `None`.\n\n    Returns:\n        object containing the custom resource specification\n        after the patch gets applied.\n\n    Example:\n        Patch \"my-custom-object\" in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.custom_objects import (\n            patch_namespaced_custom_object,\n        )\n\n        @flow\n        def kubernetes_orchestrator():\n            custom_object_metadata = patch_namespaced_custom_object(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                group=\"crd-group\",\n                version=\"crd-version\",\n                plural=\"crd-plural\",\n                name=\"my-custom-object\",\n                body={\n                    'api': 'crd-version',\n                    'kind': 'crd-kind',\n                    'metadata': {\n                        'name': 'my-custom-object',\n                    },\n                },\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"custom_objects\") as custom_objects_client:\n        return await run_sync_in_worker_thread(\n            custom_objects_client.patch_namespaced_custom_object,\n            group=group,\n            version=version,\n            plural=plural,\n            name=name,\n            body=body,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/custom_objects/#prefect_kubernetes.custom_objects.replace_namespaced_custom_object","title":"replace_namespaced_custom_object async","text":"

        Task for replacing a namespaced custom resource.

        Parameters:

kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block holding authentication needed to generate the required API client.

group (str, required): The custom resource object's group.

version (str, required): The custom resource object's version.

plural (str, required): The custom resource object's plural.

name (str, required): The name of a custom object to replace.

body (Dict[str, Any], required): A Dict containing the custom resource object's specification.

namespace (str, default 'default'): The custom resource's Kubernetes namespace.

**kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

Raises:

ValueError: if body is None.

Returns:

object: object containing the custom resource specification after the replacement.

        Example

        Replace \"my-custom-object\" in the default namespace:

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.custom_objects import replace_namespaced_custom_object\n\n@flow\ndef kubernetes_orchestrator():\n    custom_object_metadata = replace_namespaced_custom_object(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        group=\"crd-group\",\n        version=\"crd-version\",\n        plural=\"crd-plural\",\n        name=\"my-custom-object\",\n        body={\n            'api': 'crd-version',\n            'kind': 'crd-kind',\n            'metadata': {\n                'name': 'my-custom-object',\n            },\n        },\n    )\n

        Source code in prefect_kubernetes/custom_objects.py
        @task\nasync def replace_namespaced_custom_object(\n    kubernetes_credentials: KubernetesCredentials,\n    group: str,\n    version: str,\n    plural: str,\n    name: str,\n    body: Dict[str, Any],\n    namespace: str = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> object:\n    \"\"\"Task for replacing a namespaced custom resource.\n\n    Args:\n        kubernetes_credentials: KubernetesCredentials block\n            holding authentication needed to generate the required API client.\n        group: The custom resource object's group\n        version: The custom resource object's version\n        plural: The custom resource object's plural\n        name: The name of a custom object to replace.\n        body: A Dict containing the custom resource object's specification.\n        namespace: The custom resource's Kubernetes namespace.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Raises:\n        ValueError: if `body` is `None`.\n\n    Returns:\n        object containing the custom resource specification after the replacement.\n\n    Example:\n        Replace \"my-custom-object\" in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.custom_objects import replace_namespaced_custom_object\n\n        @flow\n        def kubernetes_orchestrator():\n            custom_object_metadata = replace_namespaced_custom_object(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                group=\"crd-group\",\n                version=\"crd-version\",\n                plural=\"crd-plural\",\n                name=\"my-custom-object\",\n                body={\n                    'api': 'crd-version',\n                    'kind': 'crd-kind',\n                    'metadata': {\n                        'name': 'my-custom-object',\n                    },\n                },\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"custom_objects\") as custom_objects_client:\n        return await run_sync_in_worker_thread(\n            custom_objects_client.replace_namespaced_custom_object,\n            group=group,\n            version=version,\n            plural=plural,\n            name=name,\n            body=body,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/deployments/","title":"Deployments","text":""},{"location":"integrations/prefect-kubernetes/deployments/#prefect_kubernetes.deployments","title":"prefect_kubernetes.deployments","text":"

        Module for interacting with Kubernetes deployments from Prefect flows.

        "},{"location":"integrations/prefect-kubernetes/deployments/#prefect_kubernetes.deployments.create_namespaced_deployment","title":"create_namespaced_deployment async","text":"

        Create a Kubernetes deployment in a given namespace.

        Parameters:

kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.

new_deployment (V1Deployment, required): A Kubernetes V1Deployment specification.

namespace (Optional[str], default 'default'): The Kubernetes namespace to create this deployment in.

**kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

Returns:

V1Deployment: A Kubernetes V1Deployment object.

        Example

        Create a deployment in the default namespace:

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.deployments import create_namespaced_deployment\nfrom kubernetes.client.models import V1Deployment\n\n@flow\ndef kubernetes_orchestrator():\n    v1_deployment_metadata = create_namespaced_deployment(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        new_deployment=V1Deployment(metadata={\"name\": \"test-deployment\"}),\n    )\n

        Source code in prefect_kubernetes/deployments.py
        @task\nasync def create_namespaced_deployment(\n    kubernetes_credentials: KubernetesCredentials,\n    new_deployment: V1Deployment,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Deployment:\n    \"\"\"Create a Kubernetes deployment in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        new_deployment: A Kubernetes `V1Deployment` specification.\n        namespace: The Kubernetes namespace to create this deployment in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1Deployment` object.\n\n    Example:\n        Create a deployment in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.deployments import create_namespaced_deployment\n        from kubernetes.client.models import V1Deployment\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_deployment_metadata = create_namespaced_deployment(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                new_deployment=V1Deployment(metadata={\"name\": \"test-deployment\"}),\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"apps\") as apps_v1_client:\n        return await run_sync_in_worker_thread(\n            apps_v1_client.create_namespaced_deployment,\n            namespace=namespace,\n            body=new_deployment,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/deployments/#prefect_kubernetes.deployments.delete_namespaced_deployment","title":"delete_namespaced_deployment async","text":"

        Delete a Kubernetes deployment in a given namespace.

        Parameters:

kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.

deployment_name (str, required): The name of the deployment to delete.

delete_options (Optional[V1DeleteOptions], default None): A Kubernetes V1DeleteOptions object.

namespace (Optional[str], default 'default'): The Kubernetes namespace to delete this deployment from.

**kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

Returns:

V1Deployment: A Kubernetes V1Deployment object.

        Example

        Delete a deployment in the default namespace:

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.deployments import delete_namespaced_deployment\nfrom kubernetes.client.models import V1DeleteOptions\n\n@flow\ndef kubernetes_orchestrator():\n    v1_deployment_metadata = delete_namespaced_deployment(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        deployment_name=\"test-deployment\",\n        delete_options=V1DeleteOptions(grace_period_seconds=0),\n    )\n

        Source code in prefect_kubernetes/deployments.py
        @task\nasync def delete_namespaced_deployment(\n    kubernetes_credentials: KubernetesCredentials,\n    deployment_name: str,\n    delete_options: Optional[V1DeleteOptions] = None,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Deployment:\n    \"\"\"Delete a Kubernetes deployment in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        deployment_name: The name of the deployment to delete.\n        delete_options: A Kubernetes `V1DeleteOptions` object.\n        namespace: The Kubernetes namespace to delete this deployment from.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1Deployment` object.\n\n    Example:\n        Delete a deployment in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.deployments import delete_namespaced_deployment\n        from kubernetes.client.models import V1DeleteOptions\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_deployment_metadata = delete_namespaced_deployment(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                deployment_name=\"test-deployment\",\n                delete_options=V1DeleteOptions(grace_period_seconds=0),\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"apps\") as apps_v1_client:\n        return await run_sync_in_worker_thread(\n            apps_v1_client.delete_namespaced_deployment,\n            deployment_name,\n            body=delete_options,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/deployments/#prefect_kubernetes.deployments.list_namespaced_deployment","title":"list_namespaced_deployment async","text":"

        List all deployments in a given namespace.

        Parameters:

kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.

namespace (Optional[str], default 'default'): The Kubernetes namespace to list deployments from.

**kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

Returns:

V1DeploymentList: A Kubernetes V1DeploymentList object.

        Example

        List all deployments in the default namespace:

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.deployments import list_namespaced_deployment\n\n@flow\ndef kubernetes_orchestrator():\n    v1_deployment_list = list_namespaced_deployment(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\")\n    )\n

        Source code in prefect_kubernetes/deployments.py
        @task\nasync def list_namespaced_deployment(\n    kubernetes_credentials: KubernetesCredentials,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1DeploymentList:\n    \"\"\"List all deployments in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        namespace: The Kubernetes namespace to list deployments from.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1DeploymentList` object.\n\n    Example:\n        List all deployments in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.deployments import list_namespaced_deployment\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_deployment_list = list_namespaced_deployment(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\")\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"apps\") as apps_v1_client:\n        return await run_sync_in_worker_thread(\n            apps_v1_client.list_namespaced_deployment,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/deployments/#prefect_kubernetes.deployments.patch_namespaced_deployment","title":"patch_namespaced_deployment async","text":"

        Patch a Kubernetes deployment in a given namespace.

        Parameters:

kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.

deployment_name (str, required): The name of the deployment to patch.

deployment_updates (V1Deployment, required): A Kubernetes V1Deployment object.

namespace (Optional[str], default 'default'): The Kubernetes namespace to patch this deployment in.

**kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

Returns:

V1Deployment: A Kubernetes V1Deployment object.

        Example

        Patch a deployment in the default namespace:

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.deployments import patch_namespaced_deployment\nfrom kubernetes.client.models import V1Deployment\n\n@flow\ndef kubernetes_orchestrator():\n    v1_deployment_metadata = patch_namespaced_deployment(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        deployment_name=\"test-deployment\",\n        deployment_updates=V1Deployment(metadata={\"labels\": {\"foo\": \"bar\"}}),\n    )\n

        Source code in prefect_kubernetes/deployments.py
        @task\nasync def patch_namespaced_deployment(\n    kubernetes_credentials: KubernetesCredentials,\n    deployment_name: str,\n    deployment_updates: V1Deployment,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Deployment:\n    \"\"\"Patch a Kubernetes deployment in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        deployment_name: The name of the deployment to patch.\n        deployment_updates: A Kubernetes `V1Deployment` object.\n        namespace: The Kubernetes namespace to patch this deployment in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1Deployment` object.\n\n    Example:\n        Patch a deployment in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.deployments import patch_namespaced_deployment\n        from kubernetes.client.models import V1Deployment\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_deployment_metadata = patch_namespaced_deployment(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                deployment_name=\"test-deployment\",\n                deployment_updates=V1Deployment(metadata={\"labels\": {\"foo\": \"bar\"}}),\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"apps\") as apps_v1_client:\n        return await run_sync_in_worker_thread(\n            apps_v1_client.patch_namespaced_deployment,\n            name=deployment_name,\n            namespace=namespace,\n            body=deployment_updates,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/deployments/#prefect_kubernetes.deployments.read_namespaced_deployment","title":"read_namespaced_deployment async","text":"

        Read information on a Kubernetes deployment in a given namespace.

        Parameters:

kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.

deployment_name (str, required): The name of the deployment to read.

namespace (Optional[str], default 'default'): The Kubernetes namespace to read this deployment from.

**kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

Returns:

V1Deployment: A Kubernetes V1Deployment object.

        Example

        Read a deployment in the default namespace:

from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.deployments import read_namespaced_deployment\n\n@flow\ndef kubernetes_orchestrator():\n    v1_deployment_metadata = read_namespaced_deployment(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        deployment_name=\"test-deployment\"\n    )\n

        Source code in prefect_kubernetes/deployments.py
        @task\nasync def read_namespaced_deployment(\n    kubernetes_credentials: KubernetesCredentials,\n    deployment_name: str,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Deployment:\n    \"\"\"Read information on a Kubernetes deployment in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        deployment_name: The name of the deployment to read.\n        namespace: The Kubernetes namespace to read this deployment from.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1Deployment` object.\n\n    Example:\n        Read a deployment in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_deployment_metadata = read_namespaced_deployment(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                deployment_name=\"test-deployment\"\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"apps\") as apps_v1_client:\n        return await run_sync_in_worker_thread(\n            apps_v1_client.read_namespaced_deployment,\n            name=deployment_name,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/deployments/#prefect_kubernetes.deployments.replace_namespaced_deployment","title":"replace_namespaced_deployment async","text":"

        Replace a Kubernetes deployment in a given namespace.

        Parameters:

kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.

deployment_name (str, required): The name of the deployment to replace.

new_deployment (V1Deployment, required): A Kubernetes V1Deployment object.

namespace (Optional[str], default 'default'): The Kubernetes namespace to replace this deployment in.

**kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

Returns:

V1Deployment: A Kubernetes V1Deployment object.

        Example

        Replace a deployment in the default namespace:

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.deployments import replace_namespaced_deployment\nfrom kubernetes.client.models import V1Deployment\n\n@flow\ndef kubernetes_orchestrator():\n    v1_deployment_metadata = replace_namespaced_deployment(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        deployment_name=\"test-deployment\",\n        new_deployment=V1Deployment(metadata={\"labels\": {\"foo\": \"bar\"}})\n    )\n

        Source code in prefect_kubernetes/deployments.py
        @task\nasync def replace_namespaced_deployment(\n    kubernetes_credentials: KubernetesCredentials,\n    deployment_name: str,\n    new_deployment: V1Deployment,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Deployment:\n    \"\"\"Replace a Kubernetes deployment in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        deployment_name: The name of the deployment to replace.\n        new_deployment: A Kubernetes `V1Deployment` object.\n        namespace: The Kubernetes namespace to replace this deployment in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1Deployment` object.\n\n    Example:\n        Replace a deployment in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.deployments import replace_namespaced_deployment\n        from kubernetes.client.models import V1Deployment\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_deployment_metadata = replace_namespaced_deployment(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                deployment_name=\"test-deployment\",\n                new_deployment=V1Deployment(metadata={\"labels\": {\"foo\": \"bar\"}})\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"apps\") as apps_v1_client:\n        return await run_sync_in_worker_thread(\n            apps_v1_client.replace_namespaced_deployment,\n            body=new_deployment,\n            name=deployment_name,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/exceptions/","title":"Exceptions","text":""},{"location":"integrations/prefect-kubernetes/exceptions/#prefect_kubernetes.exceptions","title":"prefect_kubernetes.exceptions","text":"

        Module to define common exceptions within prefect_kubernetes.

        "},{"location":"integrations/prefect-kubernetes/exceptions/#prefect_kubernetes.exceptions.KubernetesJobDefinitionError","title":"KubernetesJobDefinitionError","text":"

        Bases: OpenApiException

        An exception for when a Kubernetes job definition is invalid.

        Source code in prefect_kubernetes/exceptions.py
        class KubernetesJobDefinitionError(OpenApiException):\n    \"\"\"An exception for when a Kubernetes job definition is invalid.\"\"\"\n
        "},{"location":"integrations/prefect-kubernetes/exceptions/#prefect_kubernetes.exceptions.KubernetesJobFailedError","title":"KubernetesJobFailedError","text":"

        Bases: OpenApiException

        An exception for when a Kubernetes job fails.

        Source code in prefect_kubernetes/exceptions.py
        class KubernetesJobFailedError(OpenApiException):\n    \"\"\"An exception for when a Kubernetes job fails.\"\"\"\n
        "},{"location":"integrations/prefect-kubernetes/exceptions/#prefect_kubernetes.exceptions.KubernetesJobTimeoutError","title":"KubernetesJobTimeoutError","text":"

        Bases: OpenApiException

        An exception for when a Kubernetes job times out.

        Source code in prefect_kubernetes/exceptions.py
        class KubernetesJobTimeoutError(OpenApiException):\n    \"\"\"An exception for when a Kubernetes job times out.\"\"\"\n
        "},{"location":"integrations/prefect-kubernetes/exceptions/#prefect_kubernetes.exceptions.KubernetesResourceNotFoundError","title":"KubernetesResourceNotFoundError","text":"

        Bases: ApiException

        An exception for when a Kubernetes resource cannot be found by a client.

        Source code in prefect_kubernetes/exceptions.py
        class KubernetesResourceNotFoundError(ApiException):\n    \"\"\"An exception for when a Kubernetes resource cannot be found by a client.\"\"\"\n
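These exceptions can be handled like any other. As a hedged sketch (the "my-k8s-job" block name is assumed to be a previously saved KubernetesJob block), a flow might tolerate a job that exceeds its timeout_seconds by catching the KubernetesJobTimeoutError raised from wait_for_completion:

```python
from prefect import flow
from prefect_kubernetes.exceptions import KubernetesJobTimeoutError
from prefect_kubernetes.jobs import KubernetesJob

@flow
def run_job_tolerating_timeout():
    kubernetes_job = KubernetesJob.load("my-k8s-job")  # assumed saved block
    job_run = kubernetes_job.trigger()
    try:
        # Raises KubernetesJobTimeoutError if the run exceeds the block's
        # timeout_seconds setting.
        job_run.wait_for_completion()
    except KubernetesJobTimeoutError:
        # Decide here whether to retry, alert, or give up.
        return None
    return job_run.fetch_result()
```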
        "},{"location":"integrations/prefect-kubernetes/flows/","title":"Flows","text":""},{"location":"integrations/prefect-kubernetes/flows/#prefect_kubernetes.flows","title":"prefect_kubernetes.flows","text":"

        A module to define flows interacting with Kubernetes resources.

        "},{"location":"integrations/prefect-kubernetes/flows/#prefect_kubernetes.flows.run_namespaced_job","title":"run_namespaced_job async","text":"

        Flow for running a namespaced Kubernetes job.

        Parameters:

kubernetes_job (KubernetesJob, required): The KubernetesJob block that specifies the job to run.

Returns:

Dict[str, Any]: A dict of logs from each pod in the job, e.g. {'pod_name': 'pod_log_str'}.

Raises:

RuntimeError: If the created Kubernetes job attains a failed status.

        ```python\nfrom prefect_kubernetes import KubernetesJob, run_namespaced_job\nfrom prefect_kubernetes.credentials import KubernetesCredentials\n\nrun_namespaced_job(\n    kubernetes_job=KubernetesJob.from_yaml_file(\n        credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        manifest_path=\"path/to/job.yaml\",\n    )\n)\n```\n
        Source code in prefect_kubernetes/flows.py
        @flow\nasync def run_namespaced_job(\n    kubernetes_job: KubernetesJob,\n) -> Dict[str, Any]:\n    \"\"\"Flow for running a namespaced Kubernetes job.\n\n    Args:\n        kubernetes_job: The `KubernetesJob` block that specifies the job to run.\n\n    Returns:\n        The a dict of logs from each pod in the job, e.g. {'pod_name': 'pod_log_str'}.\n\n    Raises:\n        RuntimeError: If the created Kubernetes job attains a failed status.\n\n    Example:\n\n        ```python\n        from prefect_kubernetes import KubernetesJob, run_namespaced_job\n        from prefect_kubernetes.credentials import KubernetesCredentials\n\n        run_namespaced_job(\n            kubernetes_job=KubernetesJob.from_yaml_file(\n                credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                manifest_path=\"path/to/job.yaml\",\n            )\n        )\n        ```\n    \"\"\"\n    kubernetes_job_run = await task(kubernetes_job.trigger.aio)(kubernetes_job)\n\n    await task(kubernetes_job_run.wait_for_completion.aio)(kubernetes_job_run)\n\n    return await task(kubernetes_job_run.fetch_result.aio)(kubernetes_job_run)\n
        "},{"location":"integrations/prefect-kubernetes/jobs/","title":"Jobs","text":""},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs","title":"prefect_kubernetes.jobs","text":"

        Module to define tasks for interacting with Kubernetes jobs.

        "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.KubernetesJob","title":"KubernetesJob","text":"

        Bases: JobBlock

        A block representing a Kubernetes job configuration.

        Source code in prefect_kubernetes/jobs.py
        class KubernetesJob(JobBlock):\n    \"\"\"A block representing a Kubernetes job configuration.\"\"\"\n\n    v1_job: Dict[str, Any] = Field(\n        default=...,\n        title=\"Job Manifest\",\n        description=(\n            \"The Kubernetes job manifest to run. This dictionary can be produced \"\n            \"using `yaml.safe_load`.\"\n        ),\n    )\n    api_kwargs: Dict[str, Any] = Field(\n        default_factory=dict,\n        title=\"Additional API Arguments\",\n        description=\"Additional arguments to include in Kubernetes API calls.\",\n        example={\"pretty\": \"true\"},\n    )\n    credentials: KubernetesCredentials = Field(\n        default=..., description=\"The credentials to configure a client from.\"\n    )\n    delete_after_completion: bool = Field(\n        default=True,\n        description=\"Whether to delete the job after it has completed.\",\n    )\n    interval_seconds: int = Field(\n        default=5,\n        description=\"The number of seconds to wait between job status checks.\",\n    )\n    namespace: str = Field(\n        default=\"default\",\n        description=\"The namespace to create and run the job in.\",\n    )\n    timeout_seconds: Optional[int] = Field(\n        default=None,\n        description=\"The number of seconds to wait for the job run before timing out.\",\n    )\n\n    _block_type_name = \"Kubernetes Job\"\n    _block_type_slug = \"k8s-job\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/2d0b896006ad463b49c28aaac14f31e00e32cfab-250x250.png\"  # noqa: E501\n    _documentation_url = \"https://prefecthq.github.io/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.KubernetesJob\"  # noqa\n\n    @sync_compatible\n    async def trigger(self):\n        \"\"\"Create a Kubernetes job and return a `KubernetesJobRun` object.\"\"\"\n\n        v1_job_model = convert_manifest_to_model(self.v1_job, \"V1Job\")\n\n        await create_namespaced_job.fn(\n            kubernetes_credentials=self.credentials,\n            new_job=v1_job_model,\n            namespace=self.namespace,\n            **self.api_kwargs,\n        )\n\n        return KubernetesJobRun(kubernetes_job=self, v1_job_model=v1_job_model)\n\n    @classmethod\n    def from_yaml_file(\n        cls: Type[Self], manifest_path: Union[Path, str], **kwargs\n    ) -> Self:\n        \"\"\"Create a `KubernetesJob` from a YAML file.\n\n        Args:\n            manifest_path: The YAML file to create the `KubernetesJob` from.\n\n        Returns:\n            A KubernetesJob object.\n        \"\"\"\n        with open(manifest_path, \"r\") as yaml_stream:\n            yaml_dict = yaml.safe_load(yaml_stream)\n\n        return cls(v1_job=yaml_dict, **kwargs)\n
        "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.KubernetesJob.from_yaml_file","title":"from_yaml_file classmethod","text":"

        Create a KubernetesJob from a YAML file.

        Parameters:

manifest_path (Union[Path, str], required): The YAML file to create the KubernetesJob from.

Returns:

Self: A KubernetesJob object.

        Source code in prefect_kubernetes/jobs.py
        @classmethod\ndef from_yaml_file(\n    cls: Type[Self], manifest_path: Union[Path, str], **kwargs\n) -> Self:\n    \"\"\"Create a `KubernetesJob` from a YAML file.\n\n    Args:\n        manifest_path: The YAML file to create the `KubernetesJob` from.\n\n    Returns:\n        A KubernetesJob object.\n    \"\"\"\n    with open(manifest_path, \"r\") as yaml_stream:\n        yaml_dict = yaml.safe_load(yaml_stream)\n\n    return cls(v1_job=yaml_dict, **kwargs)\n
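A short usage sketch for from_yaml_file; the manifest path and the "k8s-creds" credentials block name are placeholders, and the extra keyword arguments are simply forwarded to the KubernetesJob constructor.

```python
from prefect_kubernetes.credentials import KubernetesCredentials
from prefect_kubernetes.jobs import KubernetesJob

# Build the block from a Kubernetes job manifest on disk.
kubernetes_job = KubernetesJob.from_yaml_file(
    manifest_path="path/to/job.yaml",
    credentials=KubernetesCredentials.load("k8s-creds"),
    namespace="default",
    delete_after_completion=True,
)
```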
        "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.KubernetesJob.trigger","title":"trigger async","text":"

        Create a Kubernetes job and return a KubernetesJobRun object.

        Source code in prefect_kubernetes/jobs.py
        @sync_compatible\nasync def trigger(self):\n    \"\"\"Create a Kubernetes job and return a `KubernetesJobRun` object.\"\"\"\n\n    v1_job_model = convert_manifest_to_model(self.v1_job, \"V1Job\")\n\n    await create_namespaced_job.fn(\n        kubernetes_credentials=self.credentials,\n        new_job=v1_job_model,\n        namespace=self.namespace,\n        **self.api_kwargs,\n    )\n\n    return KubernetesJobRun(kubernetes_job=self, v1_job_model=v1_job_model)\n
        "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.KubernetesJobRun","title":"KubernetesJobRun","text":"

        Bases: JobRun[Dict[str, Any]]

        A container representing a run of a Kubernetes job.

        Source code in prefect_kubernetes/jobs.py
        class KubernetesJobRun(JobRun[Dict[str, Any]]):\n    \"\"\"A container representing a run of a Kubernetes job.\"\"\"\n\n    def __init__(\n        self,\n        kubernetes_job: \"KubernetesJob\",\n        v1_job_model: V1Job,\n    ):\n        self.pod_logs = None\n\n        self._completed = False\n        self._kubernetes_job = kubernetes_job\n        self._v1_job_model = v1_job_model\n\n    async def _cleanup(self):\n        \"\"\"Deletes the Kubernetes job resource.\"\"\"\n\n        delete_options = V1DeleteOptions(propagation_policy=\"Foreground\")\n\n        deleted_v1_job = await delete_namespaced_job.fn(\n            kubernetes_credentials=self._kubernetes_job.credentials,\n            job_name=self._v1_job_model.metadata.name,\n            delete_options=delete_options,\n            namespace=self._kubernetes_job.namespace,\n            **self._kubernetes_job.api_kwargs,\n        )\n        self.logger.info(\n            f\"Job {self._v1_job_model.metadata.name} deleted \"\n            f\"with {deleted_v1_job.status!r}.\"\n        )\n\n    @sync_compatible\n    async def wait_for_completion(self):\n        \"\"\"Waits for the job to complete.\n\n        If the job has `delete_after_completion` set to `True`,\n        the job will be deleted if it is observed by this method\n        to enter a completed state.\n\n        Raises:\n            RuntimeError: If the Kubernetes job fails.\n            KubernetesJobTimeoutError: If the Kubernetes job times out.\n            ValueError: If `wait_for_completion` is never called.\n        \"\"\"\n        self.pod_logs = {}\n\n        elapsed_time = 0\n\n        while not self._completed:\n            job_expired = (\n                elapsed_time > self._kubernetes_job.timeout_seconds\n                if self._kubernetes_job.timeout_seconds\n                else False\n            )\n            if job_expired:\n                raise KubernetesJobTimeoutError(\n                    f\"Job timed out after {elapsed_time} seconds.\"\n                )\n\n            v1_job_status = await read_namespaced_job_status.fn(\n                kubernetes_credentials=self._kubernetes_job.credentials,\n                job_name=self._v1_job_model.metadata.name,\n                namespace=self._kubernetes_job.namespace,\n                **self._kubernetes_job.api_kwargs,\n            )\n            pod_selector = (\n                \"controller-uid=\" f\"{v1_job_status.metadata.labels['controller-uid']}\"\n            )\n            v1_pod_list = await list_namespaced_pod.fn(\n                kubernetes_credentials=self._kubernetes_job.credentials,\n                namespace=self._kubernetes_job.namespace,\n                label_selector=pod_selector,\n                **self._kubernetes_job.api_kwargs,\n            )\n\n            for pod in v1_pod_list.items:\n                pod_name = pod.metadata.name\n\n                if pod.status.phase == \"Pending\" or pod_name in self.pod_logs.keys():\n                    continue\n\n                self.logger.info(f\"Capturing logs for pod {pod_name!r}.\")\n\n                self.pod_logs[pod_name] = await read_namespaced_pod_log.fn(\n                    kubernetes_credentials=self._kubernetes_job.credentials,\n                    pod_name=pod_name,\n                    container=v1_job_status.spec.template.spec.containers[0].name,\n                    namespace=self._kubernetes_job.namespace,\n                    **self._kubernetes_job.api_kwargs,\n                )\n\n            if 
v1_job_status.status.active:\n                await sleep(self._kubernetes_job.interval_seconds)\n                if self._kubernetes_job.timeout_seconds:\n                    elapsed_time += self._kubernetes_job.interval_seconds\n            elif v1_job_status.status.conditions:\n                final_completed_conditions = [\n                    condition.type == \"Complete\"\n                    for condition in v1_job_status.status.conditions\n                    if condition.status == \"True\"\n                ]\n                if final_completed_conditions and any(final_completed_conditions):\n                    self._completed = True\n                    self.logger.info(\n                        f\"Job {v1_job_status.metadata.name!r} has \"\n                        f\"completed with {v1_job_status.status.succeeded} pods.\"\n                    )\n                elif final_completed_conditions:\n                    failed_conditions = [\n                        condition.reason\n                        for condition in v1_job_status.status.conditions\n                        if condition.type == \"Failed\"\n                    ]\n                    raise RuntimeError(\n                        f\"Job {v1_job_status.metadata.name!r} failed due to \"\n                        f\"{failed_conditions}, check the Kubernetes pod logs \"\n                        f\"for more information.\"\n                    )\n\n        if self._kubernetes_job.delete_after_completion:\n            await self._cleanup()\n\n    @sync_compatible\n    async def fetch_result(self) -> Dict[str, Any]:\n        \"\"\"Fetch the results of the job.\n\n        Returns:\n            The logs from each of the pods in the job.\n\n        Raises:\n            ValueError: If this method is called when the job has\n                a non-terminal state.\n        \"\"\"\n\n        if not self._completed:\n            raise ValueError(\n                \"The Kubernetes Job run is not in a completed state - \"\n                \"be sure to call `wait_for_completion` before attempting \"\n                \"to fetch the result.\"\n            )\n        return self.pod_logs\n
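        Note that the class above invokes the module's tasks through their .fn attribute (for example delete_namespaced_job.fn and read_namespaced_job_status.fn), which calls the underlying function directly instead of creating separate task runs. A minimal sketch of the same pattern, assuming an async flow and a saved credentials block named "k8s-creds":

        from prefect import flow
        from prefect_kubernetes.credentials import KubernetesCredentials
        from prefect_kubernetes.jobs import list_namespaced_job

        @flow
        async def job_report():
            credentials = await KubernetesCredentials.load("k8s-creds")

            # Calling .fn bypasses task-run orchestration and awaits the underlying
            # async function, mirroring how KubernetesJobRun uses these tasks.
            job_list = await list_namespaced_job.fn(
                kubernetes_credentials=credentials,
                namespace="default",
            )
            return [job.metadata.name for job in job_list.items]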
        "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.KubernetesJobRun.fetch_result","title":"fetch_result async","text":"

        Fetch the results of the job.

        Returns:

            Dict[str, Any]: The logs from each of the pods in the job.

        Raises:

            ValueError: If this method is called when the job has a non-terminal state.

        Source code in prefect_kubernetes/jobs.py
        @sync_compatible\nasync def fetch_result(self) -> Dict[str, Any]:\n    \"\"\"Fetch the results of the job.\n\n    Returns:\n        The logs from each of the pods in the job.\n\n    Raises:\n        ValueError: If this method is called when the job has\n            a non-terminal state.\n    \"\"\"\n\n    if not self._completed:\n        raise ValueError(\n            \"The Kubernetes Job run is not in a completed state - \"\n            \"be sure to call `wait_for_completion` before attempting \"\n            \"to fetch the result.\"\n        )\n    return self.pod_logs\n
        "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.KubernetesJobRun.wait_for_completion","title":"wait_for_completion async","text":"

        Waits for the job to complete.

        If the job has delete_after_completion set to True, the job will be deleted if it is observed by this method to enter a completed state.

        Raises:

            RuntimeError: If the Kubernetes job fails.
            KubernetesJobTimeoutError: If the Kubernetes job times out.
            ValueError: If wait_for_completion is never called.

        Source code in prefect_kubernetes/jobs.py
        @sync_compatible\nasync def wait_for_completion(self):\n    \"\"\"Waits for the job to complete.\n\n    If the job has `delete_after_completion` set to `True`,\n    the job will be deleted if it is observed by this method\n    to enter a completed state.\n\n    Raises:\n        RuntimeError: If the Kubernetes job fails.\n        KubernetesJobTimeoutError: If the Kubernetes job times out.\n        ValueError: If `wait_for_completion` is never called.\n    \"\"\"\n    self.pod_logs = {}\n\n    elapsed_time = 0\n\n    while not self._completed:\n        job_expired = (\n            elapsed_time > self._kubernetes_job.timeout_seconds\n            if self._kubernetes_job.timeout_seconds\n            else False\n        )\n        if job_expired:\n            raise KubernetesJobTimeoutError(\n                f\"Job timed out after {elapsed_time} seconds.\"\n            )\n\n        v1_job_status = await read_namespaced_job_status.fn(\n            kubernetes_credentials=self._kubernetes_job.credentials,\n            job_name=self._v1_job_model.metadata.name,\n            namespace=self._kubernetes_job.namespace,\n            **self._kubernetes_job.api_kwargs,\n        )\n        pod_selector = (\n            \"controller-uid=\" f\"{v1_job_status.metadata.labels['controller-uid']}\"\n        )\n        v1_pod_list = await list_namespaced_pod.fn(\n            kubernetes_credentials=self._kubernetes_job.credentials,\n            namespace=self._kubernetes_job.namespace,\n            label_selector=pod_selector,\n            **self._kubernetes_job.api_kwargs,\n        )\n\n        for pod in v1_pod_list.items:\n            pod_name = pod.metadata.name\n\n            if pod.status.phase == \"Pending\" or pod_name in self.pod_logs.keys():\n                continue\n\n            self.logger.info(f\"Capturing logs for pod {pod_name!r}.\")\n\n            self.pod_logs[pod_name] = await read_namespaced_pod_log.fn(\n                kubernetes_credentials=self._kubernetes_job.credentials,\n                pod_name=pod_name,\n                container=v1_job_status.spec.template.spec.containers[0].name,\n                namespace=self._kubernetes_job.namespace,\n                **self._kubernetes_job.api_kwargs,\n            )\n\n        if v1_job_status.status.active:\n            await sleep(self._kubernetes_job.interval_seconds)\n            if self._kubernetes_job.timeout_seconds:\n                elapsed_time += self._kubernetes_job.interval_seconds\n        elif v1_job_status.status.conditions:\n            final_completed_conditions = [\n                condition.type == \"Complete\"\n                for condition in v1_job_status.status.conditions\n                if condition.status == \"True\"\n            ]\n            if final_completed_conditions and any(final_completed_conditions):\n                self._completed = True\n                self.logger.info(\n                    f\"Job {v1_job_status.metadata.name!r} has \"\n                    f\"completed with {v1_job_status.status.succeeded} pods.\"\n                )\n            elif final_completed_conditions:\n                failed_conditions = [\n                    condition.reason\n                    for condition in v1_job_status.status.conditions\n                    if condition.type == \"Failed\"\n                ]\n                raise RuntimeError(\n                    f\"Job {v1_job_status.metadata.name!r} failed due to \"\n                    f\"{failed_conditions}, check the Kubernetes pod logs \"\n                   
 f\"for more information.\"\n                )\n\n    if self._kubernetes_job.delete_after_completion:\n        await self._cleanup()\n
        "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.create_namespaced_job","title":"create_namespaced_job async","text":"

        Task for creating a namespaced Kubernetes job.

        Parameters:

            kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block holding authentication needed to generate the required API client.
            new_job (V1Job, required): A Kubernetes V1Job specification.
            namespace (Optional[str], default 'default'): The Kubernetes namespace to create this job in.
            **kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

        Returns:

            V1Job: A Kubernetes V1Job object.

        Example

        Create a job in the default namespace:

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.jobs import create_namespaced_job\nfrom kubernetes.client.models import V1Job\n\n@flow\ndef kubernetes_orchestrator():\n    v1_job_metadata = create_namespaced_job(\n        new_job=V1Job(metadata={\"labels\": {\"foo\": \"bar\"}}),\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n    )\n

        Source code in prefect_kubernetes/jobs.py
        @task\nasync def create_namespaced_job(\n    kubernetes_credentials: KubernetesCredentials,\n    new_job: V1Job,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Job:\n    \"\"\"Task for creating a namespaced Kubernetes job.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block\n            holding authentication needed to generate the required API client.\n        new_job: A Kubernetes `V1Job` specification.\n        namespace: The Kubernetes namespace to create this job in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Returns:\n        A Kubernetes `V1Job` object.\n\n    Example:\n        Create a job in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.jobs import create_namespaced_job\n        from kubernetes.client.models import V1Job\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_job_metadata = create_namespaced_job(\n                new_job=V1Job(metadata={\"labels\": {\"foo\": \"bar\"}}),\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"batch\") as batch_v1_client:\n        return await run_sync_in_worker_thread(\n            batch_v1_client.create_namespaced_job,\n            namespace=namespace,\n            body=new_job,\n            **kube_kwargs,\n        )\n
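        Because **kube_kwargs is forwarded unchanged to the BatchV1Api call, extra Kubernetes API options such as a server-side dry run can be threaded through the task. A minimal sketch, assuming the API server accepts the standard dry_run="All" option on job creation:

        from kubernetes.client.models import V1Job
        from prefect import flow
        from prefect_kubernetes.credentials import KubernetesCredentials
        from prefect_kubernetes.jobs import create_namespaced_job

        @flow
        def kubernetes_orchestrator():
            # dry_run="All" asks the API server to validate the job without persisting
            # it; the keyword is passed through **kube_kwargs to create_namespaced_job.
            validated_job = create_namespaced_job(
                new_job=V1Job(metadata={"labels": {"foo": "bar"}}),
                kubernetes_credentials=KubernetesCredentials.load("k8s-creds"),
                dry_run="All",
            )
            return validated_job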
        "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.delete_namespaced_job","title":"delete_namespaced_job async","text":"

        Task for deleting a namespaced Kubernetes job.

        Parameters:

            kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block holding authentication needed to generate the required API client.
            job_name (str, required): The name of a job to delete.
            delete_options (Optional[V1DeleteOptions], default None): A Kubernetes V1DeleteOptions object.
            namespace (Optional[str], default 'default'): The Kubernetes namespace to delete this job in.
            **kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

        Returns:

            V1Status: A Kubernetes V1Status object.

        Example

        Delete \"my-job\" in the default namespace:

        from kubernetes.client.models import V1DeleteOptions\nfrom prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.jobs import delete_namespaced_job\n\n@flow\ndef kubernetes_orchestrator():\n    v1_job_status = delete_namespaced_job(\n        job_name=\"my-job\",\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        delete_options=V1DeleteOptions(propagation_policy=\"Foreground\"),\n    )\n

        Source code in prefect_kubernetes/jobs.py
        @task\nasync def delete_namespaced_job(\n    kubernetes_credentials: KubernetesCredentials,\n    job_name: str,\n    delete_options: Optional[V1DeleteOptions] = None,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Status:\n    \"\"\"Task for deleting a namespaced Kubernetes job.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block\n            holding authentication needed to generate the required API client.\n        job_name: The name of a job to delete.\n        delete_options: A Kubernetes `V1DeleteOptions` object.\n        namespace: The Kubernetes namespace to delete this job in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n\n    Returns:\n        A Kubernetes `V1Status` object.\n\n    Example:\n        Delete \"my-job\" in the default namespace:\n        ```python\n        from kubernetes.client.models import V1DeleteOptions\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.jobs import delete_namespaced_job\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_job_status = delete_namespaced_job(\n                job_name=\"my-job\",\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                delete_options=V1DeleteOptions(propagation_policy=\"Foreground\"),\n            )\n        ```\n    \"\"\"\n\n    with kubernetes_credentials.get_client(\"batch\") as batch_v1_client:\n        return await run_sync_in_worker_thread(\n            batch_v1_client.delete_namespaced_job,\n            name=job_name,\n            body=delete_options,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.list_namespaced_job","title":"list_namespaced_job async","text":"

        Task for listing namespaced Kubernetes jobs.

        Parameters:

            kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block holding authentication needed to generate the required API client.
            namespace (Optional[str], default 'default'): The Kubernetes namespace to list jobs from.
            **kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

        Returns:

            V1JobList: A Kubernetes V1JobList object.

        Example

        List jobs in \"my-namespace\":

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.jobs import list_namespaced_job\n\n@flow\ndef kubernetes_orchestrator():\n    namespaced_job_list = list_namespaced_job(\n        namespace=\"my-namespace\",\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n    )\n

        Source code in prefect_kubernetes/jobs.py
        @task\nasync def list_namespaced_job(\n    kubernetes_credentials: KubernetesCredentials,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1JobList:\n    \"\"\"Task for listing namespaced Kubernetes jobs.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block\n            holding authentication needed to generate the required API client.\n        namespace: The Kubernetes namespace to list jobs from.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Returns:\n        A Kubernetes `V1JobList` object.\n\n    Example:\n        List jobs in \"my-namespace\":\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.jobs import list_namespaced_job\n\n        @flow\n        def kubernetes_orchestrator():\n            namespaced_job_list = list_namespaced_job(\n                namespace=\"my-namespace\",\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"batch\") as batch_v1_client:\n        return await run_sync_in_worker_thread(\n            batch_v1_client.list_namespaced_job,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.patch_namespaced_job","title":"patch_namespaced_job async","text":"

        Task for patching a namespaced Kubernetes job.

        Parameters:

            kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block holding authentication needed to generate the required API client.
            job_name (str, required): The name of a job to patch.
            job_updates (V1Job, required): A Kubernetes V1Job specification.
            namespace (Optional[str], default 'default'): The Kubernetes namespace to patch this job in.
            **kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

        Raises:

            ValueError: If job_name is None.

        Returns:

            V1Job: A Kubernetes V1Job object.

        Example

        Patch \"my-job\" in the default namespace:

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.jobs import patch_namespaced_job\n\nfrom kubernetes.client.models import V1Job\n\n@flow\ndef kubernetes_orchestrator():\n    v1_job_metadata = patch_namespaced_job(\n        job_name=\"my-job\",\n        job_updates=V1Job(metadata={\"labels\": {\"foo\": \"bar\"}}),\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n    )\n

        Source code in prefect_kubernetes/jobs.py
        @task\nasync def patch_namespaced_job(\n    kubernetes_credentials: KubernetesCredentials,\n    job_name: str,\n    job_updates: V1Job,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Job:\n    \"\"\"Task for patching a namespaced Kubernetes job.\n\n    Args:\n        kubernetes_credentials: KubernetesCredentials block\n            holding authentication needed to generate the required API client.\n        job_name: The name of a job to patch.\n        job_updates: A Kubernetes `V1Job` specification.\n        namespace: The Kubernetes namespace to patch this job in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Raises:\n        ValueError: if `job_name` is `None`.\n\n    Returns:\n        A Kubernetes `V1Job` object.\n\n    Example:\n        Patch \"my-job\" in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.jobs import patch_namespaced_job\n\n        from kubernetes.client.models import V1Job\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_job_metadata = patch_namespaced_job(\n                job_name=\"my-job\",\n                job_updates=V1Job(metadata={\"labels\": {\"foo\": \"bar\"}}}),\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n            )\n        ```\n    \"\"\"\n\n    with kubernetes_credentials.get_client(\"batch\") as batch_v1_client:\n        return await run_sync_in_worker_thread(\n            batch_v1_client.patch_namespaced_job,\n            name=job_name,\n            namespace=namespace,\n            body=job_updates,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.read_namespaced_job","title":"read_namespaced_job async","text":"

        Task for reading a namespaced Kubernetes job.

        Parameters:

            kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block holding authentication needed to generate the required API client.
            job_name (str, required): The name of a job to read.
            namespace (Optional[str], default 'default'): The Kubernetes namespace to read this job in.
            **kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

        Raises:

            ValueError: If job_name is None.

        Returns:

            V1Job: A Kubernetes V1Job object.

        Example

        Read \"my-job\" in the default namespace:

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.jobs import read_namespaced_job\n\n@flow\ndef kubernetes_orchestrator():\n    v1_job_metadata = read_namespaced_job(\n        job_name=\"my-job\",\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n    )\n

        Source code in prefect_kubernetes/jobs.py
        @task\nasync def read_namespaced_job(\n    kubernetes_credentials: KubernetesCredentials,\n    job_name: str,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Job:\n    \"\"\"Task for reading a namespaced Kubernetes job.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block\n            holding authentication needed to generate the required API client.\n        job_name: The name of a job to read.\n        namespace: The Kubernetes namespace to read this job in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Raises:\n        ValueError: if `job_name` is `None`.\n\n    Returns:\n        A Kubernetes `V1Job` object.\n\n    Example:\n        Read \"my-job\" in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.jobs import read_namespaced_job\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_job_metadata = read_namespaced_job(\n                job_name=\"my-job\",\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"batch\") as batch_v1_client:\n        return await run_sync_in_worker_thread(\n            batch_v1_client.read_namespaced_job,\n            name=job_name,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.read_namespaced_job_status","title":"read_namespaced_job_status async","text":"

        Task for fetching status of a namespaced Kubernetes job.

        Parameters:

            kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block holding authentication needed to generate the required API client.
            job_name (str, required): The name of a job to fetch status for.
            namespace (Optional[str], default 'default'): The Kubernetes namespace to fetch status of the job in.
            **kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

        Returns:

            V1Job: A Kubernetes V1JobStatus object.

        Example

        Fetch status of a job in the default namespace:

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.jobs import read_namespaced_job_status\n\n@flow\ndef kubernetes_orchestrator():\n    v1_job_status = read_namespaced_job_status(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        job_name=\"my-job\",\n    )\n

        Source code in prefect_kubernetes/jobs.py
        @task\nasync def read_namespaced_job_status(\n    kubernetes_credentials: KubernetesCredentials,\n    job_name: str,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Job:\n    \"\"\"Task for fetching status of a namespaced Kubernetes job.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block\n            holding authentication needed to generate the required API client.\n        job_name: The name of a job to fetch status for.\n        namespace: The Kubernetes namespace to fetch status of job in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Returns:\n        A Kubernetes `V1JobStatus` object.\n\n    Example:\n        Fetch status of a job in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.jobs import read_namespaced_job_status\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_job_status = read_namespaced_job_status(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                job_name=\"my-job\",\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"batch\") as batch_v1_client:\n        return await run_sync_in_worker_thread(\n            batch_v1_client.read_namespaced_job_status,\n            name=job_name,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/jobs/#prefect_kubernetes.jobs.replace_namespaced_job","title":"replace_namespaced_job async","text":"

        Task for replacing a namespaced Kubernetes job.

        Parameters:

            kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block holding authentication needed to generate the required API client.
            job_name (str, required): The name of a job to replace.
            new_job (V1Job, required): A Kubernetes V1Job specification.
            namespace (Optional[str], default 'default'): The Kubernetes namespace to replace this job in.
            **kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API (e.g. {\"pretty\": \"...\", \"dry_run\": \"...\"}).

        Returns:

            V1Job: A Kubernetes V1Job object.

        Example

        Replace \"my-job\" in the default namespace:

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.jobs import replace_namespaced_job\nfrom kubernetes.client.models import V1Job\n\n@flow\ndef kubernetes_orchestrator():\n    v1_job_metadata = replace_namespaced_job(\n        new_job=V1Job(metadata={\"labels\": {\"foo\": \"bar\"}}),\n        job_name=\"my-job\",\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n    )\n

        Source code in prefect_kubernetes/jobs.py
        @task\nasync def replace_namespaced_job(\n    kubernetes_credentials: KubernetesCredentials,\n    job_name: str,\n    new_job: V1Job,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Job:\n    \"\"\"Task for replacing a namespaced Kubernetes job.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block\n            holding authentication needed to generate the required API client.\n        job_name: The name of a job to replace.\n        new_job: A Kubernetes `V1Job` specification.\n        namespace: The Kubernetes namespace to replace this job in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the\n            Kubernetes API (e.g. `{\"pretty\": \"...\", \"dry_run\": \"...\"}`).\n\n    Returns:\n        A Kubernetes `V1Job` object.\n\n    Example:\n        Replace \"my-job\" in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.jobs import replace_namespaced_job\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_job_metadata = replace_namespaced_job(\n                new_job=V1Job(metadata={\"labels\": {\"foo\": \"bar\"}}),\n                job_name=\"my-job\",\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"batch\") as batch_v1_client:\n        return await run_sync_in_worker_thread(\n            batch_v1_client.replace_namespaced_job,\n            name=job_name,\n            body=new_job,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/pods/","title":"Pods","text":""},{"location":"integrations/prefect-kubernetes/pods/#prefect_kubernetes.pods","title":"prefect_kubernetes.pods","text":"

        Module for interacting with Kubernetes pods from Prefect flows.

        "},{"location":"integrations/prefect-kubernetes/pods/#prefect_kubernetes.pods.create_namespaced_pod","title":"create_namespaced_pod async","text":"

        Create a Kubernetes pod in a given namespace.

        Parameters:

            kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
            new_pod (V1Pod, required): A Kubernetes V1Pod specification.
            namespace (Optional[str], default 'default'): The Kubernetes namespace to create this pod in.
            **kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

        Returns:

            V1Pod: A Kubernetes V1Pod object.

        Example

        Create a pod in the default namespace:

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.pods import create_namespaced_pod\nfrom kubernetes.client.models import V1Pod\n\n@flow\ndef kubernetes_orchestrator():\n    v1_pod_metadata = create_namespaced_pod(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        new_pod=V1Pod(metadata={\"name\": \"test-pod\"}),\n    )\n

        Source code in prefect_kubernetes/pods.py
        @task\nasync def create_namespaced_pod(\n    kubernetes_credentials: KubernetesCredentials,\n    new_pod: V1Pod,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Pod:\n    \"\"\"Create a Kubernetes pod in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        new_pod: A Kubernetes `V1Pod` specification.\n        namespace: The Kubernetes namespace to create this pod in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1Pod` object.\n\n    Example:\n        Create a pod in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.pods import create_namespaced_pod\n        from kubernetes.client.models import V1Pod\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_pod_metadata = create_namespaced_pod(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                new_pod=V1Pod(metadata={\"name\": \"test-pod\"}),\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.create_namespaced_pod,\n            namespace=namespace,\n            body=new_pod,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/pods/#prefect_kubernetes.pods.delete_namespaced_pod","title":"delete_namespaced_pod async","text":"

        Delete a Kubernetes pod in a given namespace.

        Parameters:

            kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
            pod_name (str, required): The name of the pod to delete.
            delete_options (Optional[V1DeleteOptions], default None): A Kubernetes V1DeleteOptions object.
            namespace (Optional[str], default 'default'): The Kubernetes namespace to delete this pod from.
            **kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

        Returns:

            V1Pod: A Kubernetes V1Pod object.

        Example

        Delete a pod in the default namespace:

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.pods import delete_namespaced_pod\nfrom kubernetes.client.models import V1DeleteOptions\n\n@flow\ndef kubernetes_orchestrator():\n    v1_pod_metadata = delete_namespaced_pod(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        pod_name=\"test-pod\",\n        delete_options=V1DeleteOptions(grace_period_seconds=0),\n    )\n

        Source code in prefect_kubernetes/pods.py
        @task\nasync def delete_namespaced_pod(\n    kubernetes_credentials: KubernetesCredentials,\n    pod_name: str,\n    delete_options: Optional[V1DeleteOptions] = None,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Pod:\n    \"\"\"Delete a Kubernetes pod in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        pod_name: The name of the pod to delete.\n        delete_options: A Kubernetes `V1DeleteOptions` object.\n        namespace: The Kubernetes namespace to delete this pod from.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1Pod` object.\n\n    Example:\n        Delete a pod in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.pods import delete_namespaced_pod\n        from kubernetes.client.models import V1DeleteOptions\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_pod_metadata = delete_namespaced_pod(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                pod_name=\"test-pod\",\n                delete_options=V1DeleteOptions(grace_period_seconds=0),\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.delete_namespaced_pod,\n            pod_name,\n            body=delete_options,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/pods/#prefect_kubernetes.pods.list_namespaced_pod","title":"list_namespaced_pod async","text":"

        List all pods in a given namespace.

        Parameters:

            kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
            namespace (Optional[str], default 'default'): The Kubernetes namespace to list pods from.
            **kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

        Returns:

            V1PodList: A Kubernetes V1PodList object.

        Example

        List all pods in the default namespace:

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.pods import list_namespaced_pod\n\n@flow\ndef kubernetes_orchestrator():\n    v1_pod_list = list_namespaced_pod(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\")\n    )\n

        Source code in prefect_kubernetes/pods.py
        @task\nasync def list_namespaced_pod(\n    kubernetes_credentials: KubernetesCredentials,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1PodList:\n    \"\"\"List all pods in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        namespace: The Kubernetes namespace to list pods from.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1PodList` object.\n\n    Example:\n        List all pods in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.pods import list_namespaced_pod\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_pod_list = list_namespaced_pod(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\")\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.list_namespaced_pod, namespace=namespace, **kube_kwargs\n        )\n
        "},{"location":"integrations/prefect-kubernetes/pods/#prefect_kubernetes.pods.patch_namespaced_pod","title":"patch_namespaced_pod async","text":"

        Patch a Kubernetes pod in a given namespace.

        Parameters:

            kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
            pod_name (str, required): The name of the pod to patch.
            pod_updates (V1Pod, required): A Kubernetes V1Pod object.
            namespace (Optional[str], default 'default'): The Kubernetes namespace to patch this pod in.
            **kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

        Returns:

            V1Pod: A Kubernetes V1Pod object.

        Example

        Patch a pod in the default namespace:

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.pods import patch_namespaced_pod\nfrom kubernetes.client.models import V1Pod\n\n@flow\ndef kubernetes_orchestrator():\n    v1_pod_metadata = patch_namespaced_pod(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        pod_name=\"test-pod\",\n        pod_updates=V1Pod(metadata={\"labels\": {\"foo\": \"bar\"}}),\n    )\n

        Source code in prefect_kubernetes/pods.py
        @task\nasync def patch_namespaced_pod(\n    kubernetes_credentials: KubernetesCredentials,\n    pod_name: str,\n    pod_updates: V1Pod,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Pod:\n    \"\"\"Patch a Kubernetes pod in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        pod_name: The name of the pod to patch.\n        pod_updates: A Kubernetes `V1Pod` object.\n        namespace: The Kubernetes namespace to patch this pod in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1Pod` object.\n\n    Example:\n        Patch a pod in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.pods import patch_namespaced_pod\n        from kubernetes.client.models import V1Pod\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_pod_metadata = patch_namespaced_pod(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                pod_name=\"test-pod\",\n                pod_updates=V1Pod(metadata={\"labels\": {\"foo\": \"bar\"}}),\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.patch_namespaced_pod,\n            name=pod_name,\n            namespace=namespace,\n            body=pod_updates,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/pods/#prefect_kubernetes.pods.read_namespaced_pod","title":"read_namespaced_pod async","text":"

        Read information on a Kubernetes pod in a given namespace.

        Parameters:

            kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
            pod_name (str, required): The name of the pod to read.
            namespace (Optional[str], default 'default'): The Kubernetes namespace to read this pod from.
            **kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

        Returns:

            V1Pod: A Kubernetes V1Pod object.

        Example

        Read a pod in the default namespace:

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.pods import read_namespaced_pod\n\n@flow\ndef kubernetes_orchestrator():\n    v1_pod_metadata = read_namespaced_pod(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        pod_name=\"test-pod\"\n    )\n

        Source code in prefect_kubernetes/pods.py
        @task\nasync def read_namespaced_pod(\n    kubernetes_credentials: KubernetesCredentials,\n    pod_name: str,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Pod:\n    \"\"\"Read information on a Kubernetes pod in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        pod_name: The name of the pod to read.\n        namespace: The Kubernetes namespace to read this pod from.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1Pod` object.\n\n    Example:\n        Read a pod in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_pod_metadata = read_namespaced_pod(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                pod_name=\"test-pod\"\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.read_namespaced_pod,\n            name=pod_name,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/pods/#prefect_kubernetes.pods.read_namespaced_pod_log","title":"read_namespaced_pod_log async","text":"

        Read logs from a Kubernetes pod in a given namespace.

        If print_func is provided, the logs will be streamed using that function. If the pod is no longer running, logs generated up to that point will be returned.

        Parameters:

            kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
            pod_name (str, required): The name of the pod to read logs from.
            container (str, required): The name of the container to read logs from.
            namespace (Optional[str], default 'default'): The Kubernetes namespace to read this pod from.
            print_func (Optional[Callable], default None): If provided, it will stream the pod logs by calling print_func for every line and returning None. If not provided, the current pod logs will be returned immediately.
            **kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

        Returns:

            Union[str, None]: A string containing the logs from the pod's container.

        Example

        Read logs from a pod in the default namespace:

        from prefect import flow, get_run_logger\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.pods import read_namespaced_pod_log\n\n@flow\ndef kubernetes_orchestrator():\n    logger = get_run_logger()\n\n    pod_logs = read_namespaced_pod_log(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        pod_name=\"test-pod\",\n        container=\"test-container\",\n        print_func=logger.info\n    )\n

        Source code in prefect_kubernetes/pods.py
        @task\nasync def read_namespaced_pod_log(\n    kubernetes_credentials: KubernetesCredentials,\n    pod_name: str,\n    container: str,\n    namespace: Optional[str] = \"default\",\n    print_func: Optional[Callable] = None,\n    **kube_kwargs: Dict[str, Any],\n) -> Union[str, None]:\n    \"\"\"Read logs from a Kubernetes pod in a given namespace.\n\n    If `print_func` is provided, the logs will be streamed using that function.\n    If the pod is no longer running, logs generated up to that point will be returned.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        pod_name: The name of the pod to read logs from.\n        container: The name of the container to read logs from.\n        namespace: The Kubernetes namespace to read this pod from.\n        print_func: If provided, it will stream the pod logs by calling `print_func`\n            for every line and returning `None`. If not provided, the current pod\n            logs will be returned immediately.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A string containing the logs from the pod's container.\n\n    Example:\n        Read logs from a pod in the default namespace:\n        ```python\n        from prefect import flow, get_run_logger\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.pods import read_namespaced_pod_logs\n\n        @flow\n        def kubernetes_orchestrator():\n            logger = get_run_logger()\n\n            pod_logs = read_namespaced_pod_logs(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                pod_name=\"test-pod\",\n                container=\"test-container\",\n                print_func=logger.info\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        if print_func is not None:\n            # should no longer need to manually refresh on ApiException.status == 410\n            # as of https://github.com/kubernetes-client/python-base/pull/133\n            for log_line in Watch().stream(\n                core_v1_client.read_namespaced_pod_log,\n                name=pod_name,\n                namespace=namespace,\n                container=container,\n            ):\n                print_func(log_line)\n\n        return await run_sync_in_worker_thread(\n            core_v1_client.read_namespaced_pod_log,\n            name=pod_name,\n            namespace=namespace,\n            container=container,\n            **kube_kwargs,\n        )\n
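        When print_func is omitted, the task returns the container's current logs as a single string rather than streaming them. A minimal sketch of that mode, reusing the credentials block from the examples above:

        from prefect import flow
        from prefect_kubernetes.credentials import KubernetesCredentials
        from prefect_kubernetes.pods import read_namespaced_pod_log

        @flow
        def kubernetes_orchestrator():
            # Without print_func, the logs generated so far are returned immediately.
            pod_log_text = read_namespaced_pod_log(
                kubernetes_credentials=KubernetesCredentials.load("k8s-creds"),
                pod_name="test-pod",
                container="test-container",
            )
            return pod_log_text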
        "},{"location":"integrations/prefect-kubernetes/pods/#prefect_kubernetes.pods.replace_namespaced_pod","title":"replace_namespaced_pod async","text":"

        Replace a Kubernetes pod in a given namespace.

        Parameters:

            kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
            pod_name (str, required): The name of the pod to replace.
            new_pod (V1Pod, required): A Kubernetes V1Pod object.
            namespace (Optional[str], default 'default'): The Kubernetes namespace to replace this pod in.
            **kube_kwargs (Dict[str, Any], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

        Returns:

            V1Pod: A Kubernetes V1Pod object.

        Example

        Replace a pod in the default namespace:

        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.pods import replace_namespaced_pod\nfrom kubernetes.client.models import V1Pod\n\n@flow\ndef kubernetes_orchestrator():\n    v1_pod_metadata = replace_namespaced_pod(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        pod_name=\"test-pod\",\n        new_pod=V1Pod(metadata={\"labels\": {\"foo\": \"bar\"}})\n    )\n

        Source code in prefect_kubernetes/pods.py
        @task\nasync def replace_namespaced_pod(\n    kubernetes_credentials: KubernetesCredentials,\n    pod_name: str,\n    new_pod: V1Pod,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Dict[str, Any],\n) -> V1Pod:\n    \"\"\"Replace a Kubernetes pod in a given namespace.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        pod_name: The name of the pod to replace.\n        new_pod: A Kubernetes `V1Pod` object.\n        namespace: The Kubernetes namespace to replace this pod in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A Kubernetes `V1Pod` object.\n\n    Example:\n        Replace a pod in the default namespace:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.pods import replace_namespaced_pod\n        from kubernetes.client.models import V1Pod\n\n        @flow\n        def kubernetes_orchestrator():\n            v1_pod_metadata = replace_namespaced_pod(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                pod_name=\"test-pod\",\n                new_pod=V1Pod(metadata={\"labels\": {\"foo\": \"bar\"}})\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.replace_namespaced_pod,\n            body=new_pod,\n            name=pod_name,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/services/","title":"Services","text":""},{"location":"integrations/prefect-kubernetes/services/#prefect_kubernetes.services","title":"prefect_kubernetes.services","text":"

        Tasks for working with Kubernetes services.

        "},{"location":"integrations/prefect-kubernetes/services/#prefect_kubernetes.services.create_namespaced_service","title":"create_namespaced_service async","text":"

        Create a namespaced Kubernetes service.

        Parameters:

            kubernetes_credentials (KubernetesCredentials, required): A KubernetesCredentials block used to generate a CoreV1Api client.
            new_service (V1Service, required): A V1Service object representing the service to create.
            namespace (Optional[str], default 'default'): The namespace to create the service in.
            **kube_kwargs (Optional[Dict[str, Any]], default {}): Additional keyword arguments to pass to the CoreV1Api method call.

        Returns:

            V1Service: A V1Service representing the created service.

        Example
        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.services import create_namespaced_service\nfrom kubernetes.client.models import V1Service\n\n@flow\ndef create_service_flow():\n    v1_service = create_namespaced_service(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        new_service=V1Service(metadata={...}, spec={...}),\n    )\n
        Source code in prefect_kubernetes/services.py
        @task\nasync def create_namespaced_service(\n    kubernetes_credentials: KubernetesCredentials,\n    new_service: V1Service,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Optional[Dict[str, Any]],\n) -> V1Service:\n    \"\"\"Create a namespaced Kubernetes service.\n\n    Args:\n        kubernetes_credentials: A `KubernetesCredentials` block used to generate a\n            `CoreV1Api` client.\n        new_service: A `V1Service` object representing the service to create.\n        namespace: The namespace to create the service in.\n        **kube_kwargs: Additional keyword arguments to pass to the `CoreV1Api`\n            method call.\n\n    Returns:\n        A `V1Service` representing the created service.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.services import create_namespaced_service\n        from kubernetes.client.models import V1Service\n\n        @flow\n        def create_service_flow():\n            v1_service = create_namespaced_service(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                new_service=V1Service(metadata={...}, spec={...}),\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.create_namespaced_service,\n            body=new_service,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/services/#prefect_kubernetes.services.delete_namespaced_service","title":"delete_namespaced_service async","text":"

        Delete a namespaced Kubernetes service.

        Parameters:

            kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
            service_name (str, required): The name of the service to delete.
            delete_options (Optional[V1DeleteOptions], default None): A V1DeleteOptions object representing the options to delete the service with.
            namespace (Optional[str], default 'default'): The namespace to delete the service from.
            **kube_kwargs (Optional[Dict[str, Any]], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

        Returns:

            V1Service: A V1Service representing the deleted service.

        Example
        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.services import delete_namespaced_service\n\n@flow\ndef kubernetes_orchestrator():\n    delete_namespaced_service(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        service_name=\"my-service\",\n        namespace=\"my-namespace\",\n    )\n
        Source code in prefect_kubernetes/services.py
        @task\nasync def delete_namespaced_service(\n    kubernetes_credentials: KubernetesCredentials,\n    service_name: str,\n    delete_options: Optional[V1DeleteOptions] = None,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Optional[Dict[str, Any]],\n) -> V1Service:\n    \"\"\"Delete a namespaced Kubernetes service.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        service_name: The name of the service to delete.\n        delete_options: A `V1DeleteOptions` object representing the options to\n            delete the service with.\n        namespace: The namespace to delete the service from.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A `V1Service` representing the deleted service.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.services import delete_namespaced_service\n\n        @flow\n        def kubernetes_orchestrator():\n            delete_namespaced_service(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                service_name=\"my-service\",\n                namespace=\"my-namespace\",\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.delete_namespaced_service,\n            name=service_name,\n            namespace=namespace,\n            body=delete_options,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/services/#prefect_kubernetes.services.list_namespaced_service","title":"list_namespaced_service async","text":"

        List namespaced Kubernetes services.

Parameters:

- kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
- namespace (Optional[str], default 'default'): The namespace to list services from.
- **kube_kwargs (Optional[Dict[str, Any]], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

Returns:

- V1ServiceList: A V1ServiceList representing the list of services in the given namespace.

        Example
        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.services import list_namespaced_service\n\n@flow\ndef kubernetes_orchestrator():\n    list_namespaced_service(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        namespace=\"my-namespace\",\n    )\n
        Source code in prefect_kubernetes/services.py
        @task\nasync def list_namespaced_service(\n    kubernetes_credentials: KubernetesCredentials,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Optional[Dict[str, Any]],\n) -> V1ServiceList:\n    \"\"\"List namespaced Kubernetes services.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        namespace: The namespace to list services from.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A `V1ServiceList` representing the list of services in the given namespace.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.services import list_namespaced_service\n\n        @flow\n        def kubernetes_orchestrator():\n            list_namespaced_service(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                namespace=\"my-namespace\",\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.list_namespaced_service,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/services/#prefect_kubernetes.services.patch_namespaced_service","title":"patch_namespaced_service async","text":"

        Patch a namespaced Kubernetes service.

Parameters:

- kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
- service_name (str, required): The name of the service to patch.
- service_updates (V1Service, required): A V1Service object representing patches to service_name.
- namespace (Optional[str], default 'default'): The namespace to patch the service in.
- **kube_kwargs (Optional[Dict[str, Any]], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

Returns:

- V1Service: A V1Service representing the patched service.

        Example
from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.services import patch_namespaced_service\nfrom kubernetes.client.models import V1Service\n\n@flow\ndef kubernetes_orchestrator():\n    patch_namespaced_service(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        service_name=\"my-service\",\n        service_updates=V1Service(metadata={...}, spec={...}),\n        namespace=\"my-namespace\",\n    )\n
        Source code in prefect_kubernetes/services.py
        @task\nasync def patch_namespaced_service(\n    kubernetes_credentials: KubernetesCredentials,\n    service_name: str,\n    service_updates: V1Service,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Optional[Dict[str, Any]],\n) -> V1Service:\n    \"\"\"Patch a namespaced Kubernetes service.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        service_name: The name of the service to patch.\n        service_updates: A `V1Service` object representing patches to `service_name`.\n        namespace: The namespace to patch the service in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A `V1Service` representing the patched service.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.services import patch_namespaced_service\n        from kubernetes.client.models import V1Service\n\n        @flow\n        def kubernetes_orchestrator():\n            patch_namespaced_service(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                service_name=\"my-service\",\n                new_service=V1Service(metadata={...}, spec={...}),\n                namespace=\"my-namespace\",\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.patch_namespaced_service,\n            name=service_name,\n            body=service_updates,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/services/#prefect_kubernetes.services.read_namespaced_service","title":"read_namespaced_service async","text":"

        Read a namespaced Kubernetes service.

Parameters:

- kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
- service_name (str, required): The name of the service to read.
- namespace (Optional[str], default 'default'): The namespace to read the service from.
- **kube_kwargs (Optional[Dict[str, Any]], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

Returns:

- V1Service: A V1Service object representing the service.

        Example
        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.services import read_namespaced_service\n\n@flow\ndef kubernetes_orchestrator():\n    read_namespaced_service(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        service_name=\"my-service\",\n        namespace=\"my-namespace\",\n    )\n
        Source code in prefect_kubernetes/services.py
        @task\nasync def read_namespaced_service(\n    kubernetes_credentials: KubernetesCredentials,\n    service_name: str,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Optional[Dict[str, Any]],\n) -> V1Service:\n    \"\"\"Read a namespaced Kubernetes service.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        service_name: The name of the service to read.\n        namespace: The namespace to read the service from.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A `V1Service` object representing the service.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.services import read_namespaced_service\n\n        @flow\n        def kubernetes_orchestrator():\n            read_namespaced_service(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                service_name=\"my-service\",\n                namespace=\"my-namespace\",\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.read_namespaced_service,\n            name=service_name,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/services/#prefect_kubernetes.services.replace_namespaced_service","title":"replace_namespaced_service async","text":"

        Replace a namespaced Kubernetes service.

Parameters:

- kubernetes_credentials (KubernetesCredentials, required): KubernetesCredentials block for creating authenticated Kubernetes API clients.
- service_name (str, required): The name of the service to replace.
- new_service (V1Service, required): A V1Service object representing the new service.
- namespace (Optional[str], default 'default'): The namespace to replace the service in.
- **kube_kwargs (Optional[Dict[str, Any]], default {}): Optional extra keyword arguments to pass to the Kubernetes API.

Returns:

- V1Service: A V1Service representing the new service.

        Example
        from prefect import flow\nfrom prefect_kubernetes.credentials import KubernetesCredentials\nfrom prefect_kubernetes.services import replace_namespaced_service\nfrom kubernetes.client.models import V1Service\n\n@flow\ndef kubernetes_orchestrator():\n    replace_namespaced_service(\n        kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n        service_name=\"my-service\",\n        new_service=V1Service(metadata={...}, spec={...}),\n        namespace=\"my-namespace\",\n    )\n
        Source code in prefect_kubernetes/services.py
        @task\nasync def replace_namespaced_service(\n    kubernetes_credentials: KubernetesCredentials,\n    service_name: str,\n    new_service: V1Service,\n    namespace: Optional[str] = \"default\",\n    **kube_kwargs: Optional[Dict[str, Any]],\n) -> V1Service:\n    \"\"\"Replace a namespaced Kubernetes service.\n\n    Args:\n        kubernetes_credentials: `KubernetesCredentials` block for creating\n            authenticated Kubernetes API clients.\n        service_name: The name of the service to replace.\n        new_service: A `V1Service` object representing the new service.\n        namespace: The namespace to replace the service in.\n        **kube_kwargs: Optional extra keyword arguments to pass to the Kubernetes API.\n\n    Returns:\n        A `V1Service` representing the new service.\n\n    Example:\n        ```python\n        from prefect import flow\n        from prefect_kubernetes.credentials import KubernetesCredentials\n        from prefect_kubernetes.services import replace_namespaced_service\n        from kubernetes.client.models import V1Service\n\n        @flow\n        def kubernetes_orchestrator():\n            replace_namespaced_service(\n                kubernetes_credentials=KubernetesCredentials.load(\"k8s-creds\"),\n                service_name=\"my-service\",\n                new_service=V1Service(metadata={...}, spec={...}),\n                namespace=\"my-namespace\",\n            )\n        ```\n    \"\"\"\n    with kubernetes_credentials.get_client(\"core\") as core_v1_client:\n        return await run_sync_in_worker_thread(\n            core_v1_client.replace_namespaced_service,\n            name=service_name,\n            body=new_service,\n            namespace=namespace,\n            **kube_kwargs,\n        )\n
        "},{"location":"integrations/prefect-kubernetes/utilities/","title":"Utilities","text":""},{"location":"integrations/prefect-kubernetes/utilities/#prefect_kubernetes.utilities","title":"prefect_kubernetes.utilities","text":"

        Utilities for working with the Python Kubernetes API.

        "},{"location":"integrations/prefect-kubernetes/utilities/#prefect_kubernetes.utilities.convert_manifest_to_model","title":"convert_manifest_to_model","text":"

Recursively converts a dict representation of a Kubernetes resource into the corresponding Python client model, including the nested models that compose it, according to the openapi_types defined on the class retrieved with v1_model_name.

        If manifest is a path-like object with a .yaml or .yml extension, it will be treated as a path to a Kubernetes resource manifest and loaded into a dict.

Parameters:

- manifest (Union[Path, str, KubernetesManifest], required): A path to a Kubernetes resource manifest or its dict representation.
- v1_model_name (str, required): The name of a Kubernetes client model to convert the manifest to.

Returns:

- V1KubernetesModel: A populated instance of a Kubernetes client model with type v1_model_name.

Raises:

- ValueError: If v1_model_name is not a valid Kubernetes client model name.
- ValueError: If manifest is path-like and is not a valid yaml filename.
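Example (a hedged sketch; the manifest values below are illustrative rather than taken from the project's documentation):

from prefect_kubernetes.utilities import convert_manifest_to_model

# A minimal manifest expressed as a dict. Keys use the Kubernetes API
# (camelCase) field names, exactly as they appear in a YAML manifest.
manifest = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "my-service", "namespace": "default"},
}

# Returns a populated kubernetes.client.models.V1Service instance.
v1_service = convert_manifest_to_model(manifest, "V1Service")

# A path to a .yaml or .yml file can be passed instead of a dict:
# v1_service = convert_manifest_to_model("service.yaml", "V1Service")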

        Source code in prefect_kubernetes/utilities.py
        def convert_manifest_to_model(\n    manifest: Union[Path, str, KubernetesManifest], v1_model_name: str\n) -> V1KubernetesModel:\n    \"\"\"Recursively converts a `dict` representation of a Kubernetes resource to the\n    corresponding Python model containing the Python models that compose it,\n    according to the `openapi_types` on the class retrieved with `v1_model_name`.\n\n    If `manifest` is a path-like object with a `.yaml` or `.yml` extension, it will be\n    treated as a path to a Kubernetes resource manifest and loaded into a `dict`.\n\n    Args:\n        manifest: A path to a Kubernetes resource manifest or its `dict` representation.\n        v1_model_name: The name of a Kubernetes client model to convert the manifest to.\n\n    Returns:\n        A populated instance of a Kubernetes client model with type `v1_model_name`.\n\n    Raises:\n        ValueError: If `v1_model_name` is not a valid Kubernetes client model name.\n        ValueError: If `manifest` is path-like and is not a valid yaml filename.\n    \"\"\"\n    if not manifest:\n        return None\n\n    if not (isinstance(v1_model_name, str) and v1_model_name in set(dir(k8s_models))):\n        raise ValueError(\n            \"`v1_model` must be the name of a valid Kubernetes client model, received \"\n            f\": {v1_model_name!r}\"\n        )\n\n    if isinstance(manifest, (Path, str)):\n        str_path = str(manifest)\n        if not str_path.endswith((\".yaml\", \".yml\")):\n            raise ValueError(\"Manifest must be a valid dict or path to a .yaml file.\")\n        manifest = KubernetesJob.job_from_file(manifest)\n\n    converted_manifest = {}\n    v1_model = getattr(k8s_models, v1_model_name)\n    valid_supplied_fields = (  # valid and specified fields for current `v1_model_name`\n        (k, v)\n        for k, v in v1_model.openapi_types.items()\n        if v1_model.attribute_map[k] in manifest  # map goes \ud83d\udc0d -> \ud83d\udc2b, user supplies \ud83d\udc2b\n    )\n\n    for field, value_type in valid_supplied_fields:\n        if value_type.startswith(\"V1\"):  # field value is another model\n            converted_manifest[field] = convert_manifest_to_model(\n                manifest[v1_model.attribute_map[field]], value_type\n            )\n        elif value_type.startswith(\"list[V1\"):  # field value is a list of models\n            field_item_type = value_type.replace(\"list[\", \"\").replace(\"]\", \"\")\n            try:\n                converted_manifest[field] = [\n                    convert_manifest_to_model(item, field_item_type)\n                    for item in manifest[v1_model.attribute_map[field]]\n                ]\n            except TypeError:\n                converted_manifest[field] = manifest[v1_model.attribute_map[field]]\n        elif value_type in base_types:  # field value is a primitive Python type\n            converted_manifest[field] = manifest[v1_model.attribute_map[field]]\n\n    return v1_model(**converted_manifest)\n
        "},{"location":"integrations/prefect-kubernetes/utilities/#prefect_kubernetes.utilities.enable_socket_keep_alive","title":"enable_socket_keep_alive","text":"

Sets the keep-alive flags on the Kubernetes client object. Unfortunately, neither the kubernetes library nor the urllib3 library it uses internally offers a way to enable keep-alive messages, so the flags are added to the underlying sockets directly.
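Example (a hedged sketch, assuming a local kubeconfig is available; any configured ApiClient works):

import kubernetes
from prefect_kubernetes.utilities import enable_socket_keep_alive

# Build an API client from a local kubeconfig (assumption: one exists).
kubernetes.config.load_kube_config()
client = kubernetes.client.ApiClient()

# Adds SO_KEEPALIVE (plus platform-specific TCP keep-alive options) to the
# socket options used by the client's underlying urllib3 pool manager.
enable_socket_keep_alive(client)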

        Source code in prefect_kubernetes/utilities.py
        def enable_socket_keep_alive(client: ApiClient) -> None:\n    \"\"\"\n    Setting the keep-alive flags on the kubernetes client object.\n    Unfortunately neither the kubernetes library nor the urllib3 library which\n    kubernetes is using internally offer the functionality to enable keep-alive\n    messages. Thus the flags are added to be used on the underlying sockets.\n    \"\"\"\n\n    socket_options = [(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)]\n\n    if hasattr(socket, \"TCP_KEEPINTVL\"):\n        socket_options.append((socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30))\n\n    if hasattr(socket, \"TCP_KEEPCNT\"):\n        socket_options.append((socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 6))\n\n    if hasattr(socket, \"TCP_KEEPIDLE\"):\n        socket_options.append((socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 6))\n\n    if sys.platform == \"darwin\":\n        # TCP_KEEP_ALIVE not available on socket module in macOS, but defined in\n        # https://github.com/apple/darwin-xnu/blob/2ff845c2e033bd0ff64b5b6aa6063a1f8f65aa32/bsd/netinet/tcp.h#L215\n        TCP_KEEP_ALIVE = 0x10\n        socket_options.append((socket.IPPROTO_TCP, TCP_KEEP_ALIVE, 30))\n\n    client.rest_client.pool_manager.connection_pool_kw[\n        \"socket_options\"\n    ] = socket_options\n
        "},{"location":"integrations/prefect-kubernetes/worker/","title":"Worker","text":""},{"location":"integrations/prefect-kubernetes/worker/#prefect_kubernetes.worker","title":"prefect_kubernetes.worker","text":"

        Module containing the Kubernetes worker used for executing flow runs as Kubernetes jobs.

        To start a Kubernetes worker, run the following command:

        prefect worker start --pool 'my-work-pool' --type kubernetes\n

        Replace my-work-pool with the name of the work pool you want the worker to poll for flow runs.

        "},{"location":"integrations/prefect-kubernetes/worker/#prefect_kubernetes.worker--securing-your-prefect-cloud-api-key","title":"Securing your Prefect Cloud API key","text":"

        If you are using Prefect Cloud and would like to pass your Prefect Cloud API key to created jobs via a Kubernetes secret, set the PREFECT_KUBERNETES_WORKER_STORE_PREFECT_API_IN_SECRET environment variable before starting your worker:

        export PREFECT_KUBERNETES_WORKER_STORE_PREFECT_API_IN_SECRET=\"true\"\nprefect worker start --pool 'my-work-pool' --type kubernetes\n

Note that your worker will need permission to create secrets in the same namespace(s) that Kubernetes jobs are created in to execute flow runs.
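For reference, when this setting is enabled the worker swaps the plain-text PREFECT_API_KEY entry in the job manifest's env for a secret reference. A hedged sketch of the substituted entry (the secret name below is illustrative; the worker derives it from the worker's name):

# Shape of the env entry the worker substitutes into the job manifest;
# the secret name shown here is illustrative.
api_key_env_entry = {
    "name": "PREFECT_API_KEY",
    "valueFrom": {"secretKeyRef": {"name": "prefect-my-worker-api-key", "key": "value"}},
}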

        "},{"location":"integrations/prefect-kubernetes/worker/#prefect_kubernetes.worker--using-a-custom-kubernetes-job-manifest-template","title":"Using a custom Kubernetes job manifest template","text":"

        The default template used for Kubernetes job manifests looks like this:

        ---\napiVersion: batch/v1\nkind: Job\nmetadata:\nlabels: \"{{ labels }}\"\nnamespace: \"{{ namespace }}\"\ngenerateName: \"{{ name }}-\"\nspec:\nttlSecondsAfterFinished: \"{{ finished_job_ttl }}\"\ntemplate:\n    spec:\n    parallelism: 1\n    completions: 1\n    restartPolicy: Never\n    serviceAccountName: \"{{ service_account_name }}\"\n    containers:\n    - name: prefect-job\n        env: \"{{ env }}\"\n        image: \"{{ image }}\"\n        imagePullPolicy: \"{{ image_pull_policy }}\"\n        args: \"{{ command }}\"\n

Each value enclosed in {{ }} is a placeholder that will be replaced with a value at runtime. The values that can be used as placeholders are defined by the variables schema defined in the base job template.

        The default job manifest and available variables can be customized on a work pool by work pool basis. These customizations can be made via the Prefect UI when creating or editing a work pool.

        For example, if you wanted to allow custom memory requests for a Kubernetes work pool you could update the job manifest template to look like this:

        ---\napiVersion: batch/v1\nkind: Job\nmetadata:\nlabels: \"{{ labels }}\"\nnamespace: \"{{ namespace }}\"\ngenerateName: \"{{ name }}-\"\nspec:\nttlSecondsAfterFinished: \"{{ finished_job_ttl }}\"\ntemplate:\n    spec:\n    parallelism: 1\n    completions: 1\n    restartPolicy: Never\n    serviceAccountName: \"{{ service_account_name }}\"\n    containers:\n    - name: prefect-job\n        env: \"{{ env }}\"\n        image: \"{{ image }}\"\n        imagePullPolicy: \"{{ image_pull_policy }}\"\n        args: \"{{ command }}\"\n        resources:\n            requests:\n                memory: \"{{ memory }}Mi\"\n            limits:\n                memory: 128Mi\n

        In this new template, the memory placeholder allows customization of the memory allocated to Kubernetes jobs created by workers in this work pool, but the limit is hard-coded and cannot be changed by deployments.
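To let deployments actually set the new memory placeholder, the work pool's base job template also needs a matching entry in its variables schema. A hypothetical sketch of that entry, written as a Python dict (the title, description, and default shown are illustrative):

# Hypothetical entry for the "properties" section of the work pool's
# variables schema (a JSON Schema document); the values are illustrative.
memory_variable = {
    "memory": {
        "type": "integer",
        "title": "Memory (Mi)",
        "description": "Memory request, in mebibytes, for created Kubernetes jobs.",
        "default": 64,
    }
}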

For more information about work pools and workers, check out the Prefect docs.

        "},{"location":"integrations/prefect-kubernetes/worker/#prefect_kubernetes.worker.KubernetesImagePullPolicy","title":"KubernetesImagePullPolicy","text":"

        Bases: Enum

        Enum representing the image pull policy options for a Kubernetes job.

        Source code in prefect_kubernetes/worker.py
        class KubernetesImagePullPolicy(enum.Enum):\n    \"\"\"Enum representing the image pull policy options for a Kubernetes job.\"\"\"\n\n    IF_NOT_PRESENT = \"IfNotPresent\"\n    ALWAYS = \"Always\"\n    NEVER = \"Never\"\n
        "},{"location":"integrations/prefect-kubernetes/worker/#prefect_kubernetes.worker.KubernetesWorker","title":"KubernetesWorker","text":"

        Bases: BaseWorker

        Prefect worker that executes flow runs within Kubernetes Jobs.

        Source code in prefect_kubernetes/worker.py
        class KubernetesWorker(BaseWorker):\n    \"\"\"Prefect worker that executes flow runs within Kubernetes Jobs.\"\"\"\n\n    type = \"kubernetes\"\n    job_configuration = KubernetesWorkerJobConfiguration\n    job_configuration_variables = KubernetesWorkerVariables\n    _description = (\n        \"Execute flow runs within jobs scheduled on a Kubernetes cluster. Requires a \"\n        \"Kubernetes cluster.\"\n    )\n    _display_name = \"Kubernetes\"\n    _documentation_url = \"https://prefecthq.github.io/prefect-kubernetes/worker/\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/2d0b896006ad463b49c28aaac14f31e00e32cfab-250x250.png\"  # noqa\n\n    def __init__(self, *args, **kwargs):\n        super().__init__(*args, **kwargs)\n        self._created_secrets = {}\n\n    async def run(\n        self,\n        flow_run: \"FlowRun\",\n        configuration: KubernetesWorkerJobConfiguration,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> KubernetesWorkerResult:\n        \"\"\"\n        Executes a flow run within a Kubernetes Job and waits for the flow run\n        to complete.\n\n        Args:\n            flow_run: The flow run to execute\n            configuration: The configuration to use when executing the flow run.\n            task_status: The task status object for the current flow run. If provided,\n                the task will be marked as started.\n\n        Returns:\n            KubernetesWorkerResult: A result object containing information about the\n                final state of the flow run\n        \"\"\"\n        logger = self.get_flow_run_logger(flow_run)\n\n        with self._get_configured_kubernetes_client(configuration) as client:\n            logger.info(\"Creating Kubernetes job...\")\n            job = await run_sync_in_worker_thread(\n                self._create_job, configuration, client\n            )\n            pid = await run_sync_in_worker_thread(\n                self._get_infrastructure_pid, job, client\n            )\n            # Indicate that the job has started\n            if task_status is not None:\n                task_status.started(pid)\n\n            # Monitor the job until completion\n\n            events_replicator = KubernetesEventsReplicator(\n                client=client,\n                job_name=job.metadata.name,\n                namespace=configuration.namespace,\n                worker_resource=self._event_resource(),\n                related_resources=self._event_related_resources(\n                    configuration=configuration\n                ),\n                timeout_seconds=configuration.pod_watch_timeout_seconds,\n            )\n\n            with events_replicator:\n                status_code = await run_sync_in_worker_thread(\n                    self._watch_job, logger, job.metadata.name, configuration, client\n                )\n            return KubernetesWorkerResult(identifier=pid, status_code=status_code)\n\n    async def kill_infrastructure(\n        self,\n        infrastructure_pid: str,\n        configuration: KubernetesWorkerJobConfiguration,\n        grace_seconds: int = 30,\n    ):\n        \"\"\"\n        Stops a job for a cancelled flow run based on the provided infrastructure PID\n        and run configuration.\n        \"\"\"\n        await run_sync_in_worker_thread(\n            self._stop_job, infrastructure_pid, configuration, grace_seconds\n        )\n\n    async def teardown(self, *exc_info):\n        await super().teardown(*exc_info)\n\n        await 
self._clean_up_created_secrets()\n\n    async def _clean_up_created_secrets(self):\n        \"\"\"Deletes any secrets created during the worker's operation.\"\"\"\n        coros = []\n        for key, configuration in self._created_secrets.items():\n            with self._get_configured_kubernetes_client(configuration) as client:\n                with self._get_core_client(client) as core_client:\n                    coros.append(\n                        run_sync_in_worker_thread(\n                            core_client.delete_namespaced_secret,\n                            name=key[0],\n                            namespace=key[1],\n                        )\n                    )\n\n        results = await asyncio.gather(*coros, return_exceptions=True)\n        for result in results:\n            if isinstance(result, Exception):\n                self._logger.warning(\n                    \"Failed to delete created secret with exception: %s\", result\n                )\n\n    def _stop_job(\n        self,\n        infrastructure_pid: str,\n        configuration: KubernetesWorkerJobConfiguration,\n        grace_seconds: int = 30,\n    ):\n        \"\"\"Removes the given Job from the Kubernetes cluster\"\"\"\n        with self._get_configured_kubernetes_client(configuration) as client:\n            job_cluster_uid, job_namespace, job_name = self._parse_infrastructure_pid(\n                infrastructure_pid\n            )\n\n            if job_namespace != configuration.namespace:\n                raise InfrastructureNotAvailable(\n                    f\"Unable to kill job {job_name!r}: The job is running in namespace \"\n                    f\"{job_namespace!r} but this worker expected jobs to be running in \"\n                    f\"namespace {configuration.namespace!r} based on the work pool and \"\n                    \"deployment configuration.\"\n                )\n\n            current_cluster_uid = self._get_cluster_uid(client)\n            if job_cluster_uid != current_cluster_uid:\n                raise InfrastructureNotAvailable(\n                    f\"Unable to kill job {job_name!r}: The job is running on another \"\n                    \"cluster than the one specified by the infrastructure PID.\"\n                )\n\n            with self._get_batch_client(client) as batch_client:\n                try:\n                    batch_client.delete_namespaced_job(\n                        name=job_name,\n                        namespace=job_namespace,\n                        grace_period_seconds=grace_seconds,\n                        # Foreground propagation deletes dependent objects before deleting # noqa\n                        # owner objects. 
This ensures that the pods are cleaned up before # noqa\n                        # the job is marked as deleted.\n                        # See: https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion # noqa\n                        propagation_policy=\"Foreground\",\n                    )\n                except kubernetes.client.exceptions.ApiException as exc:\n                    if exc.status == 404:\n                        raise InfrastructureNotFound(\n                            f\"Unable to kill job {job_name!r}: The job was not found.\"\n                        ) from exc\n                    else:\n                        raise\n\n    @contextmanager\n    def _get_configured_kubernetes_client(\n        self, configuration: KubernetesWorkerJobConfiguration\n    ) -> Generator[\"ApiClient\", None, None]:\n        \"\"\"\n        Returns a configured Kubernetes client.\n        \"\"\"\n\n        try:\n            if configuration.cluster_config:\n                client = kubernetes.config.new_client_from_config_dict(\n                    config_dict=configuration.cluster_config.config,\n                    context=configuration.cluster_config.context_name,\n                )\n            else:\n                # If no hardcoded config specified, try to load Kubernetes configuration\n                # within a cluster. If that doesn't work, try to load the configuration\n                # from the local environment, allowing any further ConfigExceptions to\n                # bubble up.\n                try:\n                    kubernetes.config.load_incluster_config()\n                    config = kubernetes.client.Configuration.get_default_copy()\n                    client = kubernetes.client.ApiClient(configuration=config)\n                except kubernetes.config.ConfigException:\n                    client = kubernetes.config.new_client_from_config()\n\n            if os.environ.get(\n                \"PREFECT_KUBERNETES_WORKER_ADD_TCP_KEEPALIVE\", \"TRUE\"\n            ).strip().lower() in (\"true\", \"1\"):\n                enable_socket_keep_alive(client)\n\n            yield client\n        finally:\n            client.rest_client.pool_manager.clear()\n\n    def _replace_api_key_with_secret(\n        self, configuration: KubernetesWorkerJobConfiguration, client: \"ApiClient\"\n    ):\n        \"\"\"Replaces the PREFECT_API_KEY environment variable with a Kubernetes secret\"\"\"\n        manifest_env = configuration.job_manifest[\"spec\"][\"template\"][\"spec\"][\n            \"containers\"\n        ][0].get(\"env\")\n        manifest_api_key_env = next(\n            (\n                env_entry\n                for env_entry in manifest_env\n                if env_entry.get(\"name\") == \"PREFECT_API_KEY\"\n            ),\n            {},\n        )\n        api_key = manifest_api_key_env.get(\"value\")\n        if api_key:\n            secret_name = f\"prefect-{_slugify_name(self.name)}-api-key\"\n            secret = self._upsert_secret(\n                name=secret_name,\n                value=api_key,\n                namespace=configuration.namespace,\n                client=client,\n            )\n            # Store configuration so that we can delete the secret when the worker shuts\n            # down\n            self._created_secrets[\n                (secret.metadata.name, secret.metadata.namespace)\n            ] = configuration\n            new_api_env_entry = {\n                \"name\": \"PREFECT_API_KEY\",\n                
\"valueFrom\": {\"secretKeyRef\": {\"name\": secret_name, \"key\": \"value\"}},\n            }\n            manifest_env = [\n                entry if entry.get(\"name\") != \"PREFECT_API_KEY\" else new_api_env_entry\n                for entry in manifest_env\n            ]\n            configuration.job_manifest[\"spec\"][\"template\"][\"spec\"][\"containers\"][0][\n                \"env\"\n            ] = manifest_env\n\n    @retry(\n        stop=stop_after_attempt(MAX_ATTEMPTS),\n        wait=wait_fixed(RETRY_MIN_DELAY_SECONDS)\n        + wait_random(\n            RETRY_MIN_DELAY_JITTER_SECONDS,\n            RETRY_MAX_DELAY_JITTER_SECONDS,\n        ),\n        reraise=True,\n    )\n    def _create_job(\n        self, configuration: KubernetesWorkerJobConfiguration, client: \"ApiClient\"\n    ) -> \"V1Job\":\n        \"\"\"\n        Creates a Kubernetes job from a job manifest.\n        \"\"\"\n        if os.environ.get(\n            \"PREFECT_KUBERNETES_WORKER_STORE_PREFECT_API_IN_SECRET\", \"\"\n        ).strip().lower() in (\"true\", \"1\"):\n            self._replace_api_key_with_secret(\n                configuration=configuration, client=client\n            )\n        try:\n            with self._get_batch_client(client) as batch_client:\n                job = batch_client.create_namespaced_job(\n                    configuration.namespace, configuration.job_manifest\n                )\n        except kubernetes.client.exceptions.ApiException as exc:\n            # Parse the reason and message from the response if feasible\n            message = \"\"\n            if exc.reason:\n                message += \": \" + exc.reason\n            if exc.body and \"message\" in (body := json.loads(exc.body)):\n                message += \": \" + body[\"message\"]\n\n            raise InfrastructureError(\n                f\"Unable to create Kubernetes job{message}\"\n            ) from exc\n\n        return job\n\n    def _upsert_secret(\n        self, name: str, value: str, namespace: str, client: \"ApiClient\"\n    ):\n        encoded_value = base64.b64encode(value.encode(\"utf-8\")).decode(\"utf-8\")\n        with self._get_core_client(client) as core_client:\n            try:\n                # Get the current version of the Secret and update it with the\n                # new value\n                current_secret = core_client.read_namespaced_secret(\n                    name=name, namespace=namespace\n                )\n                current_secret.data = {\"value\": encoded_value}\n                secret = core_client.replace_namespaced_secret(\n                    name=name, namespace=namespace, body=current_secret\n                )\n            except ApiException as exc:\n                if exc.status != 404:\n                    raise\n                # Create the secret if it doesn't already exist\n                metadata = V1ObjectMeta(name=name, namespace=namespace)\n                secret = V1Secret(\n                    api_version=\"v1\",\n                    kind=\"Secret\",\n                    metadata=metadata,\n                    data={\"value\": encoded_value},\n                )\n                secret = core_client.create_namespaced_secret(\n                    namespace=namespace, body=secret\n                )\n            return secret\n\n    @contextmanager\n    def _get_batch_client(\n        self, client: \"ApiClient\"\n    ) -> Generator[\"BatchV1Api\", None, None]:\n        \"\"\"\n        Context manager for retrieving a Kubernetes batch client.\n       
 \"\"\"\n        try:\n            yield kubernetes.client.BatchV1Api(api_client=client)\n        finally:\n            client.rest_client.pool_manager.clear()\n\n    def _get_infrastructure_pid(self, job: \"V1Job\", client: \"ApiClient\") -> str:\n        \"\"\"\n        Generates a Kubernetes infrastructure PID.\n\n        The PID is in the format: \"<cluster uid>:<namespace>:<job name>\".\n        \"\"\"\n        cluster_uid = self._get_cluster_uid(client)\n        pid = f\"{cluster_uid}:{job.metadata.namespace}:{job.metadata.name}\"\n        return pid\n\n    def _parse_infrastructure_pid(\n        self, infrastructure_pid: str\n    ) -> Tuple[str, str, str]:\n        \"\"\"\n        Parse a Kubernetes infrastructure PID into its component parts.\n\n        Returns a cluster UID, namespace, and job name.\n        \"\"\"\n        cluster_uid, namespace, job_name = infrastructure_pid.split(\":\", 2)\n        return cluster_uid, namespace, job_name\n\n    @contextmanager\n    def _get_core_client(\n        self, client: \"ApiClient\"\n    ) -> Generator[\"CoreV1Api\", None, None]:\n        \"\"\"\n        Context manager for retrieving a Kubernetes core client.\n        \"\"\"\n        try:\n            yield kubernetes.client.CoreV1Api(api_client=client)\n        finally:\n            client.rest_client.pool_manager.clear()\n\n    def _get_cluster_uid(self, client: \"ApiClient\") -> str:\n        \"\"\"\n        Gets a unique id for the current cluster being used.\n\n        There is no real unique identifier for a cluster. However, the `kube-system`\n        namespace is immutable and has a persistence UID that we use instead.\n\n        PREFECT_KUBERNETES_CLUSTER_UID can be set in cases where the `kube-system`\n        namespace cannot be read e.g. when a cluster role cannot be created. 
If set,\n        this variable will be used and we will not attempt to read the `kube-system`\n        namespace.\n\n        See https://github.com/kubernetes/kubernetes/issues/44954\n        \"\"\"\n        # Default to an environment variable\n        env_cluster_uid = os.environ.get(\"PREFECT_KUBERNETES_CLUSTER_UID\")\n        if env_cluster_uid:\n            return env_cluster_uid\n\n        # Read the UID from the cluster namespace\n        with self._get_core_client(client) as core_client:\n            namespace = core_client.read_namespace(\"kube-system\")\n        cluster_uid = namespace.metadata.uid\n\n        return cluster_uid\n\n    def _job_events(\n        self,\n        watch: kubernetes.watch.Watch,\n        batch_client: kubernetes.client.BatchV1Api,\n        job_name: str,\n        namespace: str,\n        watch_kwargs: dict,\n    ) -> Generator[Union[Any, dict, str], Any, None]:\n        \"\"\"\n        Stream job events.\n\n        Pick up from the current resource version returned by the API\n        in the case of a 410.\n\n        See https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes  # noqa\n        \"\"\"\n        while True:\n            try:\n                return watch.stream(\n                    func=batch_client.list_namespaced_job,\n                    namespace=namespace,\n                    field_selector=f\"metadata.name={job_name}\",\n                    **watch_kwargs,\n                )\n            except ApiException as e:\n                if e.status == 410:\n                    job_list = batch_client.list_namespaced_job(\n                        namespace=namespace, field_selector=f\"metadata.name={job_name}\"\n                    )\n                    resource_version = job_list.metadata.resource_version\n                    watch_kwargs[\"resource_version\"] = resource_version\n                else:\n                    raise\n\n    def _watch_job(\n        self,\n        logger: logging.Logger,\n        job_name: str,\n        configuration: KubernetesWorkerJobConfiguration,\n        client: \"ApiClient\",\n    ) -> int:\n        \"\"\"\n        Watch a job.\n\n        Return the final status code of the first container.\n        \"\"\"\n        logger.debug(f\"Job {job_name!r}: Monitoring job...\")\n\n        job = self._get_job(logger, job_name, configuration, client)\n        if not job:\n            return -1\n\n        pod = self._get_job_pod(logger, job_name, configuration, client)\n        if not pod:\n            return -1\n\n        # Calculate the deadline before streaming output\n        deadline = (\n            (time.monotonic() + configuration.job_watch_timeout_seconds)\n            if configuration.job_watch_timeout_seconds is not None\n            else None\n        )\n\n        if configuration.stream_output:\n            with self._get_core_client(client) as core_client:\n                logs = core_client.read_namespaced_pod_log(\n                    pod.metadata.name,\n                    configuration.namespace,\n                    follow=True,\n                    _preload_content=False,\n                    container=\"prefect-job\",\n                )\n                try:\n                    for log in logs.stream():\n                        print(log.decode().rstrip())\n\n                        # Check if we have passed the deadline and should stop streaming\n                        # logs\n                        remaining_time = (\n                            deadline - 
time.monotonic() if deadline else None\n                        )\n                        if deadline and remaining_time <= 0:\n                            break\n\n                except Exception:\n                    logger.warning(\n                        (\n                            \"Error occurred while streaming logs - \"\n                            \"Job will continue to run but logs will \"\n                            \"no longer be streamed to stdout.\"\n                        ),\n                        exc_info=True,\n                    )\n\n        with self._get_batch_client(client) as batch_client:\n            # Check if the job is completed before beginning a watch\n            job = batch_client.read_namespaced_job(\n                name=job_name, namespace=configuration.namespace\n            )\n            completed = job.status.completion_time is not None\n\n            while not completed:\n                remaining_time = (\n                    math.ceil(deadline - time.monotonic()) if deadline else None\n                )\n                if deadline and remaining_time <= 0:\n                    logger.error(\n                        f\"Job {job_name!r}: Job did not complete within \"\n                        f\"timeout of {configuration.job_watch_timeout_seconds}s.\"\n                    )\n                    return -1\n\n                watch = kubernetes.watch.Watch()\n\n                # The kubernetes library will disable retries if the timeout kwarg is\n                # present regardless of the value so we do not pass it unless given\n                # https://github.com/kubernetes-client/python/blob/84f5fea2a3e4b161917aa597bf5e5a1d95e24f5a/kubernetes/base/watch/watch.py#LL160\n                watch_kwargs = {\"timeout_seconds\": remaining_time} if deadline else {}\n\n                for event in self._job_events(\n                    watch,\n                    batch_client,\n                    job_name,\n                    configuration.namespace,\n                    watch_kwargs,\n                ):\n                    if event[\"type\"] == \"DELETED\":\n                        logger.error(f\"Job {job_name!r}: Job has been deleted.\")\n                        completed = True\n                    elif event[\"object\"].status.completion_time:\n                        if not event[\"object\"].status.succeeded:\n                            # Job failed, exit while loop and return pod exit code\n                            logger.error(f\"Job {job_name!r}: Job failed.\")\n                        completed = True\n                    # Check if the job has reached its backoff limit\n                    # and stop watching if it has\n                    elif (\n                        event[\"object\"].spec.backoff_limit is not None\n                        and event[\"object\"].status.failed is not None\n                        and event[\"object\"].status.failed\n                        > event[\"object\"].spec.backoff_limit\n                    ):\n                        logger.error(f\"Job {job_name!r}: Job reached backoff limit.\")\n                        completed = True\n                    # If the job has no backoff limit, check if it has failed\n                    # and stop watching if it has\n                    elif (\n                        not event[\"object\"].spec.backoff_limit\n                        and event[\"object\"].status.failed\n                    ):\n                        completed = True\n\n                    
if completed:\n                        watch.stop()\n                        break\n\n        with self._get_core_client(client) as core_client:\n            # Get all pods for the job\n            pods = core_client.list_namespaced_pod(\n                namespace=configuration.namespace, label_selector=f\"job-name={job_name}\"\n            )\n            # Get the status for only the most recently used pod\n            pods.items.sort(\n                key=lambda pod: pod.metadata.creation_timestamp, reverse=True\n            )\n            most_recent_pod = pods.items[0] if pods.items else None\n            first_container_status = (\n                most_recent_pod.status.container_statuses[0]\n                if most_recent_pod\n                else None\n            )\n            if not first_container_status:\n                logger.error(f\"Job {job_name!r}: No pods found for job.\")\n                return -1\n\n            # In some cases, such as spot instance evictions, the pod will be forcibly\n            # terminated and not report a status correctly.\n            elif (\n                first_container_status.state is None\n                or first_container_status.state.terminated is None\n                or first_container_status.state.terminated.exit_code is None\n            ):\n                logger.error(\n                    f\"Could not determine exit code for {job_name!r}.\"\n                    \"Exit code will be reported as -1.\"\n                    f\"First container status info did not report an exit code.\"\n                    f\"First container info: {first_container_status}.\"\n                )\n                return -1\n\n        return first_container_status.state.terminated.exit_code\n\n    def _get_job(\n        self,\n        logger: logging.Logger,\n        job_id: str,\n        configuration: KubernetesWorkerJobConfiguration,\n        client: \"ApiClient\",\n    ) -> Optional[\"V1Job\"]:\n        \"\"\"Get a Kubernetes job by id.\"\"\"\n        with self._get_batch_client(client) as batch_client:\n            try:\n                job = batch_client.read_namespaced_job(\n                    name=job_id, namespace=configuration.namespace\n                )\n            except kubernetes.client.exceptions.ApiException:\n                logger.error(f\"Job {job_id!r} was removed.\", exc_info=True)\n                return None\n            return job\n\n    def _get_job_pod(\n        self,\n        logger: logging.Logger,\n        job_name: str,\n        configuration: KubernetesWorkerJobConfiguration,\n        client: \"ApiClient\",\n    ) -> Optional[\"V1Pod\"]:\n        \"\"\"Get the first running pod for a job.\"\"\"\n        from kubernetes.client.models import V1Pod\n\n        watch = kubernetes.watch.Watch()\n        logger.debug(f\"Job {job_name!r}: Starting watch for pod start...\")\n        last_phase = None\n        last_pod_name: Optional[str] = None\n        with self._get_core_client(client) as core_client:\n            for event in watch.stream(\n                func=core_client.list_namespaced_pod,\n                namespace=configuration.namespace,\n                label_selector=f\"job-name={job_name}\",\n                timeout_seconds=configuration.pod_watch_timeout_seconds,\n            ):\n                pod: V1Pod = event[\"object\"]\n                last_pod_name = pod.metadata.name\n\n                phase = pod.status.phase\n                if phase != last_phase:\n                    logger.info(f\"Job {job_name!r}: Pod 
has status {phase!r}.\")\n\n                if phase != \"Pending\":\n                    watch.stop()\n                    return pod\n\n                last_phase = phase\n\n        # If we've gotten here, we never found the Pod that was created for the flow run\n        # Job, so let's inspect the situation and log what we can find.  It's possible\n        # that the Job ran into scheduling constraints it couldn't satisfy, like\n        # memory/CPU requests, or a volume that wasn't available, or a node with an\n        # available GPU.\n        logger.error(f\"Job {job_name!r}: Pod never started.\")\n        self._log_recent_events(logger, job_name, last_pod_name, configuration, client)\n\n    def _log_recent_events(\n        self,\n        logger: logging.Logger,\n        job_name: str,\n        pod_name: Optional[str],\n        configuration: KubernetesWorkerJobConfiguration,\n        client: \"ApiClient\",\n    ) -> None:\n        \"\"\"Look for reasons why a Job may not have been able to schedule a Pod, or why\n        a Pod may not have been able to start and log them to the provided logger.\"\"\"\n        from kubernetes.client.models import CoreV1Event, CoreV1EventList\n\n        def best_event_time(event: CoreV1Event) -> datetime:\n            \"\"\"Choose the best timestamp from a Kubernetes event\"\"\"\n            return event.event_time or event.last_timestamp\n\n        def log_event(event: CoreV1Event):\n            \"\"\"Log an event in one of a few formats to the provided logger\"\"\"\n            if event.count and event.count > 1:\n                logger.info(\n                    \"%s event %r (%s times) at %s: %s\",\n                    event.involved_object.kind,\n                    event.reason,\n                    event.count,\n                    best_event_time(event),\n                    event.message,\n                )\n            else:\n                logger.info(\n                    \"%s event %r at %s: %s\",\n                    event.involved_object.kind,\n                    event.reason,\n                    best_event_time(event),\n                    event.message,\n                )\n\n        with self._get_core_client(client) as core_client:\n            events: CoreV1EventList = core_client.list_namespaced_event(\n                configuration.namespace\n            )\n            event: CoreV1Event\n            for event in sorted(events.items, key=best_event_time):\n                if (\n                    event.involved_object.api_version == \"batch/v1\"\n                    and event.involved_object.kind == \"Job\"\n                    and event.involved_object.namespace == configuration.namespace\n                    and event.involved_object.name == job_name\n                ):\n                    log_event(event)\n\n                if (\n                    pod_name\n                    and event.involved_object.api_version == \"v1\"\n                    and event.involved_object.kind == \"Pod\"\n                    and event.involved_object.namespace == configuration.namespace\n                    and event.involved_object.name == pod_name\n                ):\n                    log_event(event)\n
        "},{"location":"integrations/prefect-kubernetes/worker/#prefect_kubernetes.worker.KubernetesWorker.kill_infrastructure","title":"kill_infrastructure async","text":"

        Stops a job for a cancelled flow run based on the provided infrastructure PID and run configuration.

        Source code in prefect_kubernetes/worker.py
        async def kill_infrastructure(\n    self,\n    infrastructure_pid: str,\n    configuration: KubernetesWorkerJobConfiguration,\n    grace_seconds: int = 30,\n):\n    \"\"\"\n    Stops a job for a cancelled flow run based on the provided infrastructure PID\n    and run configuration.\n    \"\"\"\n    await run_sync_in_worker_thread(\n        self._stop_job, infrastructure_pid, configuration, grace_seconds\n    )\n
        "},{"location":"integrations/prefect-kubernetes/worker/#prefect_kubernetes.worker.KubernetesWorker.run","title":"run async","text":"

        Executes a flow run within a Kubernetes Job and waits for the flow run to complete.

Parameters:

- flow_run (FlowRun, required): The flow run to execute.
- configuration (KubernetesWorkerJobConfiguration, required): The configuration to use when executing the flow run.
- task_status (Optional[TaskStatus], default None): The task status object for the current flow run. If provided, the task will be marked as started.

Returns:

- KubernetesWorkerResult: A result object containing information about the final state of the flow run.

        Source code in prefect_kubernetes/worker.py
        async def run(\n    self,\n    flow_run: \"FlowRun\",\n    configuration: KubernetesWorkerJobConfiguration,\n    task_status: Optional[anyio.abc.TaskStatus] = None,\n) -> KubernetesWorkerResult:\n    \"\"\"\n    Executes a flow run within a Kubernetes Job and waits for the flow run\n    to complete.\n\n    Args:\n        flow_run: The flow run to execute\n        configuration: The configuration to use when executing the flow run.\n        task_status: The task status object for the current flow run. If provided,\n            the task will be marked as started.\n\n    Returns:\n        KubernetesWorkerResult: A result object containing information about the\n            final state of the flow run\n    \"\"\"\n    logger = self.get_flow_run_logger(flow_run)\n\n    with self._get_configured_kubernetes_client(configuration) as client:\n        logger.info(\"Creating Kubernetes job...\")\n        job = await run_sync_in_worker_thread(\n            self._create_job, configuration, client\n        )\n        pid = await run_sync_in_worker_thread(\n            self._get_infrastructure_pid, job, client\n        )\n        # Indicate that the job has started\n        if task_status is not None:\n            task_status.started(pid)\n\n        # Monitor the job until completion\n\n        events_replicator = KubernetesEventsReplicator(\n            client=client,\n            job_name=job.metadata.name,\n            namespace=configuration.namespace,\n            worker_resource=self._event_resource(),\n            related_resources=self._event_related_resources(\n                configuration=configuration\n            ),\n            timeout_seconds=configuration.pod_watch_timeout_seconds,\n        )\n\n        with events_replicator:\n            status_code = await run_sync_in_worker_thread(\n                self._watch_job, logger, job.metadata.name, configuration, client\n            )\n        return KubernetesWorkerResult(identifier=pid, status_code=status_code)\n
        "},{"location":"integrations/prefect-kubernetes/worker/#prefect_kubernetes.worker.KubernetesWorkerJobConfiguration","title":"KubernetesWorkerJobConfiguration","text":"

        Bases: BaseJobConfiguration

        Configuration class used by the Kubernetes worker.

        An instance of this class is passed to the Kubernetes worker's run method for each flow run. It contains all of the information necessary to execute the flow run as a Kubernetes job.

Attributes:

- name: The name to give to created Kubernetes job.
- command: The command executed in created Kubernetes jobs to kick off flow run execution.
- env: The environment variables to set in created Kubernetes jobs.
- labels: The labels to set on created Kubernetes jobs.
- namespace (str): The Kubernetes namespace to create Kubernetes jobs in.
- job_manifest (Dict[str, Any]): The Kubernetes job manifest to use to create Kubernetes jobs.
- cluster_config (Optional[KubernetesClusterConfig]): The Kubernetes cluster configuration to use for authentication to a Kubernetes cluster.
- job_watch_timeout_seconds (Optional[int]): The number of seconds to wait for the job to complete before timing out. If None, the worker will wait indefinitely.
- pod_watch_timeout_seconds (int): The number of seconds to wait for the pod to complete before timing out.
- stream_output (bool): Whether or not to stream the job's output.

        Source code in prefect_kubernetes/worker.py
        class KubernetesWorkerJobConfiguration(BaseJobConfiguration):\n    \"\"\"\n    Configuration class used by the Kubernetes worker.\n\n    An instance of this class is passed to the Kubernetes worker's `run` method\n    for each flow run. It contains all of the information necessary to execute\n    the flow run as a Kubernetes job.\n\n    Attributes:\n        name: The name to give to created Kubernetes job.\n        command: The command executed in created Kubernetes jobs to kick off\n            flow run execution.\n        env: The environment variables to set in created Kubernetes jobs.\n        labels: The labels to set on created Kubernetes jobs.\n        namespace: The Kubernetes namespace to create Kubernetes jobs in.\n        job_manifest: The Kubernetes job manifest to use to create Kubernetes jobs.\n        cluster_config: The Kubernetes cluster configuration to use for authentication\n            to a Kubernetes cluster.\n        job_watch_timeout_seconds: The number of seconds to wait for the job to\n            complete before timing out. If `None`, the worker will wait indefinitely.\n        pod_watch_timeout_seconds: The number of seconds to wait for the pod to\n            complete before timing out.\n        stream_output: Whether or not to stream the job's output.\n    \"\"\"\n\n    namespace: str = Field(default=\"default\")\n    job_manifest: Dict[str, Any] = Field(template=_get_default_job_manifest_template())\n    cluster_config: Optional[KubernetesClusterConfig] = Field(default=None)\n    job_watch_timeout_seconds: Optional[int] = Field(default=None)\n    pod_watch_timeout_seconds: int = Field(default=60)\n    stream_output: bool = Field(default=True)\n\n    # internal-use only\n    _api_dns_name: Optional[str] = None  # Replaces 'localhost' in API URL\n\n    @validator(\"job_manifest\")\n    def _ensure_metadata_is_present(cls, value: Dict[str, Any]):\n        \"\"\"Ensures that the metadata is present in the job manifest.\"\"\"\n        if \"metadata\" not in value:\n            value[\"metadata\"] = {}\n        return value\n\n    @validator(\"job_manifest\")\n    def _ensure_labels_is_present(cls, value: Dict[str, Any]):\n        \"\"\"Ensures that the metadata is present in the job manifest.\"\"\"\n        if \"labels\" not in value[\"metadata\"]:\n            value[\"metadata\"][\"labels\"] = {}\n        return value\n\n    @validator(\"job_manifest\")\n    def _ensure_namespace_is_present(cls, value: Dict[str, Any], values):\n        \"\"\"Ensures that the namespace is present in the job manifest.\"\"\"\n        if \"namespace\" not in value[\"metadata\"]:\n            value[\"metadata\"][\"namespace\"] = values[\"namespace\"]\n        return value\n\n    @validator(\"job_manifest\")\n    def _ensure_job_includes_all_required_components(cls, value: Dict[str, Any]):\n        \"\"\"\n        Ensures that the job manifest includes all required components.\n        \"\"\"\n        patch = JsonPatch.from_diff(value, _get_base_job_manifest())\n        missing_paths = sorted([op[\"path\"] for op in patch if op[\"op\"] == \"add\"])\n        if missing_paths:\n            raise ValueError(\n                \"Job is missing required attributes at the following paths: \"\n                f\"{', '.join(missing_paths)}\"\n            )\n        return value\n\n    @validator(\"job_manifest\")\n    def _ensure_job_has_compatible_values(cls, value: Dict[str, Any]):\n        patch = JsonPatch.from_diff(value, _get_base_job_manifest())\n        incompatible = sorted(\n  
          [\n                f\"{op['path']} must have value {op['value']!r}\"\n                for op in patch\n                if op[\"op\"] == \"replace\"\n            ]\n        )\n        if incompatible:\n            raise ValueError(\n                \"Job has incompatible values for the following attributes: \"\n                f\"{', '.join(incompatible)}\"\n            )\n        return value\n\n    def prepare_for_flow_run(\n        self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"DeploymentResponse\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ):\n        \"\"\"\n        Prepares the job configuration for a flow run.\n\n        Ensures that necessary values are present in the job manifest and that the\n        job manifest is valid.\n\n        Args:\n            flow_run: The flow run to prepare the job configuration for\n            deployment: The deployment associated with the flow run used for\n                preparation.\n            flow: The flow associated with the flow run used for preparation.\n        \"\"\"\n        super().prepare_for_flow_run(flow_run, deployment, flow)\n        # Update configuration env and job manifest env\n        self._update_prefect_api_url_if_local_server()\n        self._populate_env_in_manifest()\n        # Update labels in job manifest\n        self._slugify_labels()\n        # Add defaults to job manifest if necessary\n        self._populate_image_if_not_present()\n        self._populate_command_if_not_present()\n        self._populate_generate_name_if_not_present()\n\n    def _populate_env_in_manifest(self):\n        \"\"\"\n        Populates environment variables in the job manifest.\n\n        When `env` is templated as a variable in the job manifest it comes in as a\n        dictionary. We need to convert it to a list of dictionaries to conform to the\n        Kubernetes job manifest schema.\n\n        This function also handles the case where the user has removed the `{{ env }}`\n        placeholder and hard coded a value for `env`. In this case, we need to prepend\n        our environment variables to the list to ensure Prefect setting propagation.\n        An example reason the a user would remove the `{{ env }}` placeholder to\n        hardcode Kubernetes secrets in the base job template.\n        \"\"\"\n        transformed_env = [{\"name\": k, \"value\": v} for k, v in self.env.items()]\n\n        template_env = self.job_manifest[\"spec\"][\"template\"][\"spec\"][\"containers\"][\n            0\n        ].get(\"env\")\n\n        # If user has removed `{{ env }}` placeholder and hard coded a value for `env`,\n        # we need to prepend our environment variables to the list to ensure Prefect\n        # setting propagation.\n        if isinstance(template_env, list):\n            self.job_manifest[\"spec\"][\"template\"][\"spec\"][\"containers\"][0][\"env\"] = [\n                *transformed_env,\n                *template_env,\n            ]\n        # Current templating adds `env` as a dict when the kubernetes manifest requires\n        # a list of dicts. 
Might be able to improve this in the future with a better\n        # default `env` value and better typing.\n        else:\n            self.job_manifest[\"spec\"][\"template\"][\"spec\"][\"containers\"][0][\n                \"env\"\n            ] = transformed_env\n\n    def _update_prefect_api_url_if_local_server(self):\n        \"\"\"If the API URL has been set by the base environment rather than the by the\n        user, update the value to ensure connectivity when using a bridge network by\n        updating local connections to use the internal host\n        \"\"\"\n        if self.env.get(\"PREFECT_API_URL\") and self._api_dns_name:\n            self.env[\"PREFECT_API_URL\"] = (\n                self.env[\"PREFECT_API_URL\"]\n                .replace(\"localhost\", self._api_dns_name)\n                .replace(\"127.0.0.1\", self._api_dns_name)\n            )\n\n    def _slugify_labels(self):\n        \"\"\"Slugifies the labels in the job manifest.\"\"\"\n        all_labels = {**self.job_manifest[\"metadata\"].get(\"labels\", {}), **self.labels}\n        self.job_manifest[\"metadata\"][\"labels\"] = {\n            _slugify_label_key(k): _slugify_label_value(v)\n            for k, v in all_labels.items()\n        }\n\n    def _populate_image_if_not_present(self):\n        \"\"\"Ensures that the image is present in the job manifest. Populates the image\n        with the default Prefect image if it is not present.\"\"\"\n        try:\n            if (\n                \"image\"\n                not in self.job_manifest[\"spec\"][\"template\"][\"spec\"][\"containers\"][0]\n            ):\n                self.job_manifest[\"spec\"][\"template\"][\"spec\"][\"containers\"][0][\n                    \"image\"\n                ] = get_prefect_image_name()\n        except KeyError:\n            raise ValueError(\n                \"Unable to verify image due to invalid job manifest template.\"\n            )\n\n    def _populate_command_if_not_present(self):\n        \"\"\"\n        Ensures that the command is present in the job manifest. 
Populates the command\n        with the `prefect -m prefect.engine` if a command is not present.\n        \"\"\"\n        try:\n            command = self.job_manifest[\"spec\"][\"template\"][\"spec\"][\"containers\"][\n                0\n            ].get(\"args\")\n            if command is None:\n                self.job_manifest[\"spec\"][\"template\"][\"spec\"][\"containers\"][0][\n                    \"args\"\n                ] = shlex.split(self._base_flow_run_command())\n            elif isinstance(command, str):\n                self.job_manifest[\"spec\"][\"template\"][\"spec\"][\"containers\"][0][\n                    \"args\"\n                ] = shlex.split(command)\n            elif not isinstance(command, list):\n                raise ValueError(\n                    \"Invalid job manifest template: 'command' must be a string or list.\"\n                )\n        except KeyError:\n            raise ValueError(\n                \"Unable to verify command due to invalid job manifest template.\"\n            )\n\n    def _populate_generate_name_if_not_present(self):\n        \"\"\"Ensures that the generateName is present in the job manifest.\"\"\"\n        manifest_generate_name = self.job_manifest[\"metadata\"].get(\"generateName\", \"\")\n        has_placeholder = len(find_placeholders(manifest_generate_name)) > 0\n        # if name wasn't present during template rendering, generateName will be\n        # just a hyphen\n        manifest_generate_name_templated_with_empty_string = (\n            manifest_generate_name == \"-\"\n        )\n        if (\n            not manifest_generate_name\n            or has_placeholder\n            or manifest_generate_name_templated_with_empty_string\n        ):\n            generate_name = None\n            if self.name:\n                generate_name = _slugify_name(self.name)\n            # _slugify_name will return None if the slugified name in an exception\n            if not generate_name:\n                generate_name = \"prefect-job\"\n            self.job_manifest[\"metadata\"][\"generateName\"] = f\"{generate_name}-\"\n
        "},{"location":"integrations/prefect-kubernetes/worker/#prefect_kubernetes.worker.KubernetesWorkerJobConfiguration.prepare_for_flow_run","title":"prepare_for_flow_run","text":"

        Prepares the job configuration for a flow run.

        Ensures that necessary values are present in the job manifest and that the job manifest is valid.

        Parameters:

        Name Type Description Default flow_run FlowRun

        The flow run to prepare the job configuration for

        required deployment Optional[DeploymentResponse]

        The deployment associated with the flow run used for preparation.

        None flow Optional[Flow]

        The flow associated with the flow run used for preparation.

        None Source code in prefect_kubernetes/worker.py
        def prepare_for_flow_run(\n    self,\n    flow_run: \"FlowRun\",\n    deployment: Optional[\"DeploymentResponse\"] = None,\n    flow: Optional[\"Flow\"] = None,\n):\n    \"\"\"\n    Prepares the job configuration for a flow run.\n\n    Ensures that necessary values are present in the job manifest and that the\n    job manifest is valid.\n\n    Args:\n        flow_run: The flow run to prepare the job configuration for\n        deployment: The deployment associated with the flow run used for\n            preparation.\n        flow: The flow associated with the flow run used for preparation.\n    \"\"\"\n    super().prepare_for_flow_run(flow_run, deployment, flow)\n    # Update configuration env and job manifest env\n    self._update_prefect_api_url_if_local_server()\n    self._populate_env_in_manifest()\n    # Update labels in job manifest\n    self._slugify_labels()\n    # Add defaults to job manifest if necessary\n    self._populate_image_if_not_present()\n    self._populate_command_if_not_present()\n    self._populate_generate_name_if_not_present()\n
        "},{"location":"integrations/prefect-kubernetes/worker/#prefect_kubernetes.worker.KubernetesWorkerResult","title":"KubernetesWorkerResult","text":"

        Bases: BaseWorkerResult

        Contains information about the final state of a completed process

        Source code in prefect_kubernetes/worker.py
        class KubernetesWorkerResult(BaseWorkerResult):\n    \"\"\"Contains information about the final state of a completed process\"\"\"\n
        "},{"location":"integrations/prefect-kubernetes/worker/#prefect_kubernetes.worker.KubernetesWorkerVariables","title":"KubernetesWorkerVariables","text":"

        Bases: BaseVariables

        Default variables for the Kubernetes worker.

        The schema for this class is used to populate the variables section of the default base job template.

        Source code in prefect_kubernetes/worker.py
        class KubernetesWorkerVariables(BaseVariables):\n    \"\"\"\n    Default variables for the Kubernetes worker.\n\n    The schema for this class is used to populate the `variables` section of the default\n    base job template.\n    \"\"\"\n\n    namespace: str = Field(\n        default=\"default\", description=\"The Kubernetes namespace to create jobs within.\"\n    )\n    image: Optional[str] = Field(\n        default=None,\n        description=\"The image reference of a container image to use for created jobs. \"\n        \"If not set, the latest Prefect image will be used.\",\n        example=\"docker.io/prefecthq/prefect:2-latest\",\n    )\n    service_account_name: Optional[str] = Field(\n        default=None,\n        description=\"The Kubernetes service account to use for job creation.\",\n    )\n    image_pull_policy: Literal[\"IfNotPresent\", \"Always\", \"Never\"] = Field(\n        default=KubernetesImagePullPolicy.IF_NOT_PRESENT,\n        description=\"The Kubernetes image pull policy to use for job containers.\",\n    )\n    finished_job_ttl: Optional[int] = Field(\n        default=None,\n        title=\"Finished Job TTL\",\n        description=\"The number of seconds to retain jobs after completion. If set, \"\n        \"finished jobs will be cleaned up by Kubernetes after the given delay. If not \"\n        \"set, jobs will be retained indefinitely.\",\n    )\n    job_watch_timeout_seconds: Optional[int] = Field(\n        default=None,\n        description=(\n            \"Number of seconds to wait for each event emitted by a job before \"\n            \"timing out. If not set, the worker will wait for each event indefinitely.\"\n        ),\n    )\n    pod_watch_timeout_seconds: int = Field(\n        default=60,\n        description=\"Number of seconds to watch for pod creation before timing out.\",\n    )\n    stream_output: bool = Field(\n        default=True,\n        description=(\n            \"If set, output will be streamed from the job to local standard output.\"\n        ),\n    )\n    cluster_config: Optional[KubernetesClusterConfig] = Field(\n        default=None,\n        description=\"The Kubernetes cluster config to use for job creation.\",\n    )\n
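
        As an illustrative sketch only (the work pool name, deployment name, and job variable values below are assumptions, and flow.deploy with job_variables is core Prefect functionality rather than part of this reference), these defaults might be overridden per deployment like so:

        from prefect import flow\n\n@flow\ndef my_flow():\n    ...\n\nif __name__ == \"__main__\":\n    # Hypothetical work pool and deployment names; any field from the variables\n    # schema above (e.g. namespace, image, finished_job_ttl) can be overridden here.\n    my_flow.deploy(\n        name=\"my-k8s-deployment\",\n        work_pool_name=\"my-k8s-pool\",\n        image=\"docker.io/prefecthq/prefect:2-latest\",\n        build=False,\n        push=False,\n        job_variables={\"namespace\": \"prefect-jobs\", \"finished_job_ttl\": 300},\n    )\n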
        "},{"location":"integrations/prefect-ray/","title":"prefect-ray","text":""},{"location":"integrations/prefect-ray/#welcome","title":"Welcome!","text":"

        Visit the full docs here to see additional examples and the API reference.

        prefect-ray contains Prefect integrations with the Ray execution framework, a flexible distributed computing framework for Python.

        Provides a RayTaskRunner that enables Prefect flows to execute tasks in parallel using Ray.

        "},{"location":"integrations/prefect-ray/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-ray/#python-setup","title":"Python setup","text":"

        Requires an installation of Python 3.8 or newer.

        We recommend using a Python virtual environment manager such as pipenv, conda, or virtualenv.

        These tasks are designed to work with Prefect 2.0+. For more information about how to use Prefect, please refer to the Prefect documentation.

        "},{"location":"integrations/prefect-ray/#installation","title":"Installation","text":"

        Install prefect-ray with pip:

        pip install prefect-ray\n

        Users running Apple Silicon (such as M1 Macs) should check out the Ray docs here for more details.

        "},{"location":"integrations/prefect-ray/#running-tasks-on-ray","title":"Running tasks on Ray","text":"

        The RayTaskRunner is a Prefect task runner that submits tasks to Ray for parallel execution.

        By default, a temporary Ray instance is created for the duration of the flow run.

        For example, this flow counts to 10 in parallel.

        import time\n\nfrom prefect import flow, task\nfrom prefect_ray import RayTaskRunner\n\n@task\ndef shout(number):\n    time.sleep(0.5)\n    print(f\"#{number}\")\n\n@flow(task_runner=RayTaskRunner)\ndef count_to(highest_number):\n    for number in range(highest_number):\n        shout.submit(number)\n\nif __name__ == \"__main__\":\n    count_to(10)\n\n# outputs\n#3\n#7\n#2\n#6\n#4\n#0\n#1\n#5\n#8\n#9\n

        If you already have a Ray instance running, you can provide the connection URL via an address argument.

        To configure your flow to use the RayTaskRunner:

        1. Make sure the prefect-ray collection is installed as described earlier: pip install prefect-ray.
        2. In your flow code, import RayTaskRunner from prefect_ray.task_runners.
        3. Assign it as the task runner when the flow is defined using the task_runner=RayTaskRunner argument.

        For example, this flow uses the RayTaskRunner with a local, temporary Ray instance created by Prefect at flow run time.

        from prefect import flow\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@flow(task_runner=RayTaskRunner())\ndef my_flow():\n    ... \n

        This flow uses the RayTaskRunner configured to access an existing Ray instance at ray://192.0.2.255:8786.

        from prefect import flow\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@flow(task_runner=RayTaskRunner(address=\"ray://192.0.2.255:8786\"))\ndef my_flow():\n    ... \n

        RayTaskRunner accepts the following optional parameters:

        Parameter Description address Address of a currently running Ray instance, starting with the ray:// URI. init_kwargs Additional kwargs to use when calling ray.init.

        Note that Ray Client uses the ray:// URI to indicate the address of a Ray instance. If you don't provide the address of a Ray instance, Prefect creates a temporary instance automatically.

        Ray environment limitations

        Ray does not support installation from pip alone on non-x86/64 architectures such as ARM/M1 processors, so it will be skipped during installation of Prefect. It is possible to manually install the blocking component with conda. See the Ray documentation for instructions.

        See the Ray installation documentation for further compatibility information.

        "},{"location":"integrations/prefect-ray/#running-tasks-on-a-ray-remote-cluster","title":"Running tasks on a Ray remote cluster","text":"

        When using the RayTaskRunner with a remote Ray cluster, you may run into issues that are not seen when using a local Ray instance. To resolve these issues, we recommend taking the following steps when working with a remote Ray cluster:

        1. By default, Prefect will not persist any data to the filesystem of the remote Ray worker. However, if you want to take advantage of Prefect's caching ability, you will need to configure remote result storage to persist results across task runs.

        We recommend using the Prefect UI to configure a storage block to use for remote results storage.

        Here's an example of a flow that uses caching and remote result storage:

        from typing import List\n\nfrom prefect import flow, get_run_logger, task\nfrom prefect.filesystems import S3\nfrom prefect.tasks import task_input_hash\nfrom prefect_ray.task_runners import RayTaskRunner\n\n\n# The result of this task will be cached in the configured result storage\n@task(cache_key_fn=task_input_hash)\ndef say_hello(name: str) -> None:\n    logger = get_run_logger()\n    # This log statement will print only on the first run. Subsequent runs will be cached.\n    logger.info(f\"hello {name}!\")\n    return name\n\n\n@flow(\n    task_runner=RayTaskRunner(\n        address=\"ray://<instance_public_ip_address>:10001\",\n    ),\n    # Using an S3 block that has already been created via the Prefect UI\n    result_storage=\"s3/my-result-storage\",\n)\ndef greetings(names: List[str]) -> None:\n    for name in names:\n        say_hello.submit(name)\n\n\nif __name__ == \"__main__\":\n    greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n

        2. If you get an error stating that the module 'prefect' cannot be found, ensure prefect is installed on the remote cluster, with:

          pip install prefect\n

        3. If you get an error with a message similar to \"File system created with scheme 's3' could not be created\", ensure the required Python modules are installed on both local and remote machines. The required prerequisite modules can be found in the Prefect documentation. For example, if using S3 for the remote storage:

          pip install s3fs\n

        4. If you are seeing timeout or other connection errors, double-check the address provided to the RayTaskRunner. The address should look similar to: address='ray://<head_node_ip_address>:10001':

          RayTaskRunner(address=\"ray://1.23.199.255:10001\")\n

        "},{"location":"integrations/prefect-ray/#specifying-remote-options","title":"Specifying remote options","text":"

        The remote_options context can be used to control the task's remote options.

        For example, we can set the number of CPUs and GPUs to use for the process task:

        from prefect import flow, task\nfrom prefect_ray.task_runners import RayTaskRunner\nfrom prefect_ray.context import remote_options\n\n@task\ndef process(x):\n    return x + 1\n\n\n@flow(task_runner=RayTaskRunner())\ndef my_flow():\n    # equivalent to setting @ray.remote(num_cpus=4, num_gpus=2)\n    with remote_options(num_cpus=4, num_gpus=2):\n        process.submit(42)\n
        "},{"location":"integrations/prefect-ray/task-runners/","title":"Task Runners","text":""},{"location":"integrations/prefect-ray/task-runners/#prefect_ray.task_runners","title":"prefect_ray.task_runners","text":"

        Interface and implementations of the Ray Task Runner. Task Runners in Prefect are responsible for managing the execution of Prefect task runs. Generally speaking, users are not expected to interact with task runners outside of configuring and initializing them for a flow.

        Example
        import time\n\nfrom prefect import flow, task\n\n@task\ndef shout(number):\n    time.sleep(0.5)\n    print(f\"#{number}\")\n\n@flow\ndef count_to(highest_number):\n    for number in range(highest_number):\n        shout.submit(number)\n\nif __name__ == \"__main__\":\n    count_to(10)\n\n# outputs\n#0\n#1\n#2\n#3\n#4\n#5\n#6\n#7\n#8\n#9\n

        Switching to a RayTaskRunner:

        import time\n\nfrom prefect import flow, task\nfrom prefect_ray import RayTaskRunner\n\n@task\ndef shout(number):\n    time.sleep(0.5)\n    print(f\"#{number}\")\n\n@flow(task_runner=RayTaskRunner)\ndef count_to(highest_number):\n    for number in range(highest_number):\n        shout.submit(number)\n\nif __name__ == \"__main__\":\n    count_to(10)\n\n# outputs\n#3\n#7\n#2\n#6\n#4\n#0\n#1\n#5\n#8\n#9\n

        "},{"location":"integrations/prefect-ray/task-runners/#prefect_ray.task_runners.RayTaskRunner","title":"RayTaskRunner","text":"

        Bases: BaseTaskRunner

        A parallel task_runner that submits tasks to ray. By default, a temporary Ray cluster is created for the duration of the flow run. Alternatively, if you already have a ray instance running, you can provide the connection URL via the address kwarg. Args: address (string, optional): Address of a currently running ray instance; if one is not provided, a temporary instance will be created. init_kwargs (dict, optional): Additional kwargs to use when calling ray.init. Examples: Using a temporary local ray cluster:

        from prefect import flow\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@flow(task_runner=RayTaskRunner())\ndef my_flow():\n    ...\n
        Connecting to an existing ray instance:
        RayTaskRunner(address=\"ray://192.0.2.255:8786\")\n

        Source code in prefect_ray/task_runners.py
        class RayTaskRunner(BaseTaskRunner):\n    \"\"\"\n    A parallel task_runner that submits tasks to `ray`.\n    By default, a temporary Ray cluster is created for the duration of the flow run.\n    Alternatively, if you already have a `ray` instance running, you can provide\n    the connection URL via the `address` kwarg.\n    Args:\n        address (string, optional): Address of a currently running `ray` instance; if\n            one is not provided, a temporary instance will be created.\n        init_kwargs (dict, optional): Additional kwargs to use when calling `ray.init`.\n    Examples:\n        Using a temporary local ray cluster:\n        ```python\n        from prefect import flow\n        from prefect_ray.task_runners import RayTaskRunner\n\n        @flow(task_runner=RayTaskRunner())\n        def my_flow():\n            ...\n        ```\n        Connecting to an existing ray instance:\n        ```python\n        RayTaskRunner(address=\"ray://192.0.2.255:8786\")\n        ```\n    \"\"\"\n\n    def __init__(\n        self,\n        address: str = None,\n        init_kwargs: dict = None,\n    ):\n        # Store settings\n        self.address = address\n        self.init_kwargs = init_kwargs.copy() if init_kwargs else {}\n\n        self.init_kwargs.setdefault(\"namespace\", \"prefect\")\n\n        # Runtime attributes\n        self._ray_refs: Dict[str, \"ray.ObjectRef\"] = {}\n\n        super().__init__()\n\n    def duplicate(self):\n        \"\"\"\n        Return a new instance of with the same settings as this one.\n        \"\"\"\n        return type(self)(address=self.address, init_kwargs=self.init_kwargs)\n\n    def __eq__(self, other: object) -> bool:\n        \"\"\"\n        Check if an instance has the same settings as this task runner.\n        \"\"\"\n        if type(self) == type(other):\n            return (\n                self.address == other.address and self.init_kwargs == other.init_kwargs\n            )\n        else:\n            return NotImplemented\n\n    @property\n    def concurrency_type(self) -> TaskConcurrencyType:\n        return TaskConcurrencyType.PARALLEL\n\n    async def submit(\n        self,\n        key: UUID,\n        call: Callable[..., Awaitable[State[R]]],\n    ) -> None:\n        if not self._started:\n            raise RuntimeError(\n                \"The task runner must be started before submitting work.\"\n            )\n\n        call_kwargs, upstream_ray_obj_refs = self._exchange_prefect_for_ray_futures(\n            call.keywords\n        )\n\n        remote_options = RemoteOptionsContext.get().current_remote_options\n        # Ray does not support the submission of async functions and we must create a\n        # sync entrypoint\n        if remote_options:\n            ray_decorator = ray.remote(**remote_options)\n        else:\n            ray_decorator = ray.remote\n\n        self._ray_refs[key] = (\n            ray_decorator(self._run_prefect_task)\n            .options(name=call.keywords[\"task_run\"].name)\n            .remote(sync_compatible(call.func), *upstream_ray_obj_refs, **call_kwargs)\n        )\n\n    def _exchange_prefect_for_ray_futures(self, kwargs_prefect_futures):\n        \"\"\"Exchanges Prefect futures for Ray futures.\"\"\"\n\n        upstream_ray_obj_refs = []\n\n        def exchange_prefect_for_ray_future(expr):\n            \"\"\"Exchanges Prefect future for Ray future.\"\"\"\n            if isinstance(expr, PrefectFuture):\n                ray_future = self._ray_refs.get(expr.key)\n                if 
ray_future is not None:\n                    upstream_ray_obj_refs.append(ray_future)\n                    return ray_future\n            return expr\n\n        kwargs_ray_futures = visit_collection(\n            kwargs_prefect_futures,\n            visit_fn=exchange_prefect_for_ray_future,\n            return_data=True,\n        )\n\n        return kwargs_ray_futures, upstream_ray_obj_refs\n\n    @staticmethod\n    def _run_prefect_task(func, *upstream_ray_obj_refs, **kwargs):\n        \"\"\"Resolves Ray futures before calling the actual Prefect task function.\n\n        Passing upstream_ray_obj_refs directly as args enables Ray to wait for\n        upstream tasks before running this remote function.\n        This variable is otherwise unused as the ray object refs are also\n        contained in kwargs.\n        \"\"\"\n\n        def resolve_ray_future(expr):\n            \"\"\"Resolves Ray future.\"\"\"\n            if isinstance(expr, ray.ObjectRef):\n                return ray.get(expr)\n            return expr\n\n        kwargs = visit_collection(kwargs, visit_fn=resolve_ray_future, return_data=True)\n\n        return func(**kwargs)\n\n    async def wait(self, key: UUID, timeout: float = None) -> Optional[State]:\n        ref = self._get_ray_ref(key)\n\n        result = None\n\n        with anyio.move_on_after(timeout):\n            # We await the reference directly instead of using `ray.get` so we can\n            # avoid blocking the event loop\n            try:\n                result = await ref\n            except RayTaskError as exc:\n                # unwrap the original exception that caused task failure, except for\n                # KeyboardInterrupt, which unwraps as TaskCancelledError\n                result = await exception_to_crashed_state(exc.cause)\n            except BaseException as exc:\n                result = await exception_to_crashed_state(exc)\n\n        return result\n\n    async def _start(self, exit_stack: AsyncExitStack):\n        \"\"\"\n        Start the task runner and prep for context exit.\n\n        - Creates a cluster if an external address is not set.\n        - Creates a client to connect to the cluster.\n        - Pushes a call to wait for all running futures to complete on exit.\n        \"\"\"\n        if self.address and self.address != \"auto\":\n            self.logger.info(\n                f\"Connecting to an existing Ray instance at {self.address}\"\n            )\n            init_args = (self.address,)\n        elif ray.is_initialized():\n            self.logger.info(\n                \"Local Ray instance is already initialized. 
\"\n                \"Using existing local instance.\"\n            )\n            return\n        else:\n            self.logger.info(\"Creating a local Ray instance\")\n            init_args = ()\n\n        context = ray.init(*init_args, **self.init_kwargs)\n        dashboard_url = getattr(context, \"dashboard_url\", None)\n        exit_stack.push(context)\n\n        # Display some information about the cluster\n        nodes = ray.nodes()\n        living_nodes = [node for node in nodes if node.get(\"alive\")]\n        self.logger.info(f\"Using Ray cluster with {len(living_nodes)} nodes.\")\n\n        if dashboard_url:\n            self.logger.info(\n                f\"The Ray UI is available at {dashboard_url}\",\n            )\n\n    async def _shutdown_ray(self):\n        \"\"\"\n        Shuts down the cluster.\n        \"\"\"\n        self.logger.debug(\"Shutting down Ray cluster...\")\n        ray.shutdown()\n\n    def _get_ray_ref(self, key: UUID) -> \"ray.ObjectRef\":\n        \"\"\"\n        Retrieve the ray object reference corresponding to a prefect future.\n        \"\"\"\n        return self._ray_refs[key]\n
        "},{"location":"integrations/prefect-ray/task-runners/#prefect_ray.task_runners.RayTaskRunner.duplicate","title":"duplicate","text":"

        Return a new instance with the same settings as this one.

        Source code in prefect_ray/task_runners.py
        def duplicate(self):\n    \"\"\"\n    Return a new instance of with the same settings as this one.\n    \"\"\"\n    return type(self)(address=self.address, init_kwargs=self.init_kwargs)\n
        "},{"location":"integrations/prefect-shell/","title":"Integrating shell commands into your dataflow with prefect-shell","text":"

        Visit the full docs here to see additional examples and the API reference.

        The prefect-shell collection makes it easy to execute shell commands in your Prefect flows. Check out the examples below to get started!

        "},{"location":"integrations/prefect-shell/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-shell/#integrate-with-prefect-flows","title":"Integrate with Prefect flows","text":"

        With prefect-shell, you can bring your trusty shell commands (and/or scripts) straight into the Prefect flow party, complete with awesome Prefect logging.

        No more separate logs, just seamless integration. Let's get the shell-abration started!

        from prefect import flow\nfrom datetime import datetime\nfrom prefect_shell import ShellOperation\n\n@flow\ndef download_data():\n    today = datetime.today().strftime(\"%Y%m%d\")\n\n    # for short running operations, you can use the `run` method\n    # which automatically manages the context\n    ShellOperation(\n        commands=[\n            \"mkdir -p data\",\n            \"mkdir -p data/${today}\"\n        ],\n        env={\"today\": today}\n    ).run()\n\n    # for long running operations, you can use a context manager\n    with ShellOperation(\n        commands=[\n            \"curl -O https://masie_web.apps.nsidc.org/pub/DATASETS/NOAA/G02135/north/daily/data/N_seaice_extent_daily_v3.0.csv\",\n        ],\n        working_dir=f\"data/{today}\",\n    ) as download_csv_operation:\n\n        # trigger runs the process in the background\n        download_csv_process = download_csv_operation.trigger()\n\n        # then do other things here in the meantime, like download another file\n        ...\n\n        # when you're ready, wait for the process to finish\n        download_csv_process.wait_for_completion()\n\n        # if you'd like to get the output lines, you can use the `fetch_result` method\n        output_lines = download_csv_process.fetch_result()\n\ndownload_data()\n

        Outputs:

        14:48:16.550 | INFO    | prefect.engine - Created flow run 'tentacled-chachalaca' for flow 'download-data'\n14:48:17.977 | INFO    | Flow run 'tentacled-chachalaca' - PID 19360 triggered with 2 commands running inside the '.' directory.\n14:48:17.987 | INFO    | Flow run 'tentacled-chachalaca' - PID 19360 completed with return code 0.\n14:48:17.994 | INFO    | Flow run 'tentacled-chachalaca' - PID 19363 triggered with 1 commands running inside the PosixPath('data/20230201') directory.\n14:48:18.009 | INFO    | Flow run 'tentacled-chachalaca' - PID 19363 stream output:\n  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                 Dl\n14:48:18.010 | INFO    | Flow run 'tentacled-chachalaca' - PID 19363 stream output:\noad  Upload   Total   Spent    Left  Speed\n  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0\n14:48:18.840 | INFO    | Flow run 'tentacled-chachalaca' - PID 19363 stream output:\n 11 1630k   11  192k    0     0   229k      0  0:00:07 --:--:--  0:00:07  231k\n14:48:19.839 | INFO    | Flow run 'tentacled-chachalaca' - PID 19363 stream output:\n 83 1630k   83 1368k    0     0   745k      0  0:00:02  0:00:01  0:00:01  747k\n14:48:19.993 | INFO    | Flow run 'tentacled-chachalaca' - PID 19363 stream output:\n100 1630k  100 1630k    0     0   819k      0  0\n14:48:19.994 | INFO    | Flow run 'tentacled-chachalaca' - PID 19363 stream output:\n:00:01  0:00:01 --:--:--  821k\n14:48:19.996 | INFO    | Flow run 'tentacled-chachalaca' - PID 19363 completed with return code 0.\n14:48:19.998 | INFO    | Flow run 'tentacled-chachalaca' - Successfully closed all open processes.\n14:48:20.203 | INFO    | Flow run 'tentacled-chachalaca' - Finished in state Completed()\n

        Utilize Previously Saved Blocks

        You can save commands within a ShellOperation block, then reuse them across multiple flows, or even in plain Python scripts.

        Save the block with desired commands:

        from prefect_shell import ShellOperation\n\nping_op = ShellOperation(commands=[\"ping -t 1 prefect.io\"])\nping_op.save(\"block-name\")\n

        Load the saved block:

        from prefect_shell import ShellOperation\n\nping_op = ShellOperation.load(\"block-name\")\n

        To view and edit the blocks on Prefect UI:

        prefect block register -m prefect_shell\n
        "},{"location":"integrations/prefect-shell/#resources","title":"Resources","text":"

        For more tips on how to use tasks and flows in a Collection, check out Using Collections!

        "},{"location":"integrations/prefect-shell/#installation","title":"Installation","text":"

        Install prefect-shell with pip:

        pip install -U prefect-shell\n

        Requires an installation of Python 3.8+.

        We recommend using a Python virtual environment manager such as pipenv, conda or virtualenv.

        These tasks are designed to work with Prefect 2. For more information about how to use Prefect, please refer to the Prefect documentation.

        "},{"location":"integrations/prefect-shell/commands/","title":"Commands","text":""},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands","title":"prefect_shell.commands","text":"

        Tasks for interacting with shell commands

        "},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands.ShellOperation","title":"ShellOperation","text":"

        Bases: JobBlock

        A block representing a shell operation, containing multiple commands.

        For long-lasting operations, use the trigger method and utilize the block as a context manager so processes are closed automatically when the context is exited. Otherwise, manually call the close method to close processes.

        For short-lasting operations, use the run method. Context is automatically managed with this method.
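
        For example, a minimal sketch combining both patterns (reusing the same illustrative commands as the method examples further below) might look like:

        from prefect_shell import ShellOperation\n\n# Short-lasting operation: `run` manages the context automatically\noutput_lines = ShellOperation(commands=[\"echo 'Hello, world!'\"]).run()\n\n# Long-lasting operation: `trigger` inside a context manager, so processes are\n# closed automatically when the context exits\nwith ShellOperation(\n    commands=[\"sleep 5\", \"echo 'Hello, world!'\"],\n) as shell_operation:\n    shell_process = shell_operation.trigger()\n    shell_process.wait_for_completion()\n    shell_output = shell_process.fetch_result()\n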

        Attributes:

        Name Type Description commands List[str]

        A list of commands to execute sequentially.

        stream_output bool

        Whether to stream output.

        env Dict[str, str]

        A dictionary of environment variables to set for the shell operation.

        working_dir DirectoryPath

        The working directory context the commands will be executed within.

        shell str

        The shell to use to execute the commands.

        extension Optional[str]

        The extension to use for the temporary file; if unset, defaults to .ps1 on Windows and .sh on other platforms.

        Examples:

        Load a configured block:

        from prefect_shell import ShellOperation\n\nshell_operation = ShellOperation.load(\"BLOCK_NAME\")\n

        Source code in prefect_shell/commands.py
        class ShellOperation(JobBlock):\n    \"\"\"\n    A block representing a shell operation, containing multiple commands.\n\n    For long-lasting operations, use the trigger method and utilize the block as a\n    context manager for automatic closure of processes when context is exited.\n    If not, manually call the close method to close processes.\n\n    For short-lasting operations, use the run method. Context is automatically managed\n    with this method.\n\n    Attributes:\n        commands: A list of commands to execute sequentially.\n        stream_output: Whether to stream output.\n        env: A dictionary of environment variables to set for the shell operation.\n        working_dir: The working directory context the commands\n            will be executed within.\n        shell: The shell to use to execute the commands.\n        extension: The extension to use for the temporary file.\n            if unset defaults to `.ps1` on Windows and `.sh` on other platforms.\n\n    Examples:\n        Load a configured block:\n        ```python\n        from prefect_shell import ShellOperation\n\n        shell_operation = ShellOperation.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Shell Operation\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/0b47a017e1b40381de770c17647c49cdf6388d1c-250x250.png\"  # noqa: E501\n    _documentation_url = \"https://prefecthq.github.io/prefect-shell/commands/#prefect_shell.commands.ShellOperation\"  # noqa: E501\n\n    commands: List[str] = Field(\n        default=..., description=\"A list of commands to execute sequentially.\"\n    )\n    stream_output: bool = Field(default=True, description=\"Whether to stream output.\")\n    env: Dict[str, str] = Field(\n        default_factory=dict,\n        title=\"Environment Variables\",\n        description=\"Environment variables to use for the subprocess.\",\n    )\n    working_dir: DirectoryPath = Field(\n        default=None,\n        title=\"Working Directory\",\n        description=(\n            \"The absolute path to the working directory \"\n            \"the command will be executed within.\"\n        ),\n    )\n    shell: str = Field(\n        default=None,\n        description=(\n            \"The shell to run the command with; if unset, \"\n            \"defaults to `powershell` on Windows and `bash` on other platforms.\"\n        ),\n    )\n    extension: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The extension to use for the temporary file; if unset, \"\n            \"defaults to `.ps1` on Windows and `.sh` on other platforms.\"\n        ),\n    )\n\n    _exit_stack: AsyncExitStack = PrivateAttr(\n        default_factory=AsyncExitStack,\n    )\n\n    @contextmanager\n    def _prep_trigger_command(self) -> Generator[str, None, None]:\n        \"\"\"\n        Write the commands to a temporary file, handling all the details of\n        creating the file and cleaning it up afterwards. 
Then, return the command\n        to run the temporary file.\n        \"\"\"\n        try:\n            extension = self.extension or (\".ps1\" if sys.platform == \"win32\" else \".sh\")\n            temp_file = tempfile.NamedTemporaryFile(\n                prefix=\"prefect-\",\n                suffix=extension,\n                delete=False,\n            )\n\n            joined_commands = os.linesep.join(self.commands)\n            self.logger.debug(\n                f\"Writing the following commands to \"\n                f\"{temp_file.name!r}:{os.linesep}{joined_commands}\"\n            )\n            temp_file.write(joined_commands.encode())\n\n            if self.shell is None and sys.platform == \"win32\" or extension == \".ps1\":\n                shell = \"powershell\"\n            elif self.shell is None:\n                shell = \"bash\"\n            else:\n                shell = self.shell.lower()\n\n            if shell == \"powershell\":\n                # if powershell, set exit code to that of command\n                temp_file.write(\"\\r\\nExit $LastExitCode\".encode())\n            temp_file.close()\n\n            trigger_command = [shell, temp_file.name]\n            yield trigger_command\n        finally:\n            if os.path.exists(temp_file.name):\n                os.remove(temp_file.name)\n\n    def _compile_kwargs(self, **open_kwargs: Dict[str, Any]) -> Dict[str, Any]:\n        \"\"\"\n        Helper method to compile the kwargs for `open_process` so it's not repeated\n        across the run and trigger methods.\n        \"\"\"\n        trigger_command = self._exit_stack.enter_context(self._prep_trigger_command())\n        input_env = os.environ.copy()\n        input_env.update(self.env)\n        input_open_kwargs = dict(\n            command=trigger_command,\n            stdout=subprocess.PIPE,\n            stderr=subprocess.PIPE,\n            env=input_env,\n            cwd=self.working_dir,\n            **open_kwargs,\n        )\n        return input_open_kwargs\n\n    @sync_compatible\n    async def trigger(self, **open_kwargs: Dict[str, Any]) -> ShellProcess:\n        \"\"\"\n        Triggers a shell command and returns the shell command run object\n        to track the execution of the run. 
This method is ideal for long-lasting\n        shell commands; for short-lasting shell commands, it is recommended\n        to use the `run` method instead.\n\n        Args:\n            **open_kwargs: Additional keyword arguments to pass to `open_process`.\n\n        Returns:\n            A `ShellProcess` object.\n\n        Examples:\n            Sleep for 5 seconds and then print \"Hello, world!\":\n            ```python\n            from prefect_shell import ShellOperation\n\n            with ShellOperation(\n                commands=[\"sleep 5\", \"echo 'Hello, world!'\"],\n            ) as shell_operation:\n                shell_process = shell_operation.trigger()\n                shell_process.wait_for_completion()\n                shell_output = shell_process.fetch_result()\n            ```\n        \"\"\"\n        input_open_kwargs = self._compile_kwargs(**open_kwargs)\n        process = await self._exit_stack.enter_async_context(\n            open_process(**input_open_kwargs)\n        )\n        num_commands = len(self.commands)\n        self.logger.info(\n            f\"PID {process.pid} triggered with {num_commands} commands running \"\n            f\"inside the {(self.working_dir or '.')!r} directory.\"\n        )\n        return ShellProcess(shell_operation=self, process=process)\n\n    @sync_compatible\n    async def run(self, **open_kwargs: Dict[str, Any]) -> List[str]:\n        \"\"\"\n        Runs a shell command, but unlike the trigger method,\n        additionally waits and fetches the result directly, automatically managing\n        the context. This method is ideal for short-lasting shell commands;\n        for long-lasting shell commands, it is\n        recommended to use the `trigger` method instead.\n\n        Args:\n            **open_kwargs: Additional keyword arguments to pass to `open_process`.\n\n        Returns:\n            The lines output from the shell command as a list.\n\n        Examples:\n            Sleep for 5 seconds and then print \"Hello, world!\":\n            ```python\n            from prefect_shell import ShellOperation\n\n            shell_output = ShellOperation(\n                commands=[\"sleep 5\", \"echo 'Hello, world!'\"]\n            ).run()\n            ```\n        \"\"\"\n        input_open_kwargs = self._compile_kwargs(**open_kwargs)\n        async with open_process(**input_open_kwargs) as process:\n            shell_process = ShellProcess(shell_operation=self, process=process)\n            num_commands = len(self.commands)\n            self.logger.info(\n                f\"PID {process.pid} triggered with {num_commands} commands running \"\n                f\"inside the {(self.working_dir or '.')!r} directory.\"\n            )\n            await shell_process.wait_for_completion()\n            result = await shell_process.fetch_result()\n\n        return result\n\n    @sync_compatible\n    async def close(self):\n        \"\"\"\n        Close the job block.\n        \"\"\"\n        await self._exit_stack.aclose()\n        self.logger.info(\"Successfully closed all open processes.\")\n\n    async def aclose(self):\n        \"\"\"\n        Asynchronous version of the close method.\n        \"\"\"\n        await self.close()\n\n    async def __aenter__(self) -> \"ShellOperation\":\n        \"\"\"\n        Asynchronous version of the enter method.\n        \"\"\"\n        return self\n\n    async def __aexit__(self, *exc_info):\n        \"\"\"\n        Asynchronous version of the exit method.\n        \"\"\"\n        await 
self.close()\n\n    def __enter__(self) -> \"ShellOperation\":\n        \"\"\"\n        Enter the context of the job block.\n        \"\"\"\n        return self\n\n    def __exit__(self, *exc_info):\n        \"\"\"\n        Exit the context of the job block.\n        \"\"\"\n        self.close()\n
        "},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands.ShellOperation.aclose","title":"aclose async","text":"

        Asynchronous version of the close method.

        Source code in prefect_shell/commands.py
        async def aclose(self):\n    \"\"\"\n    Asynchronous version of the close method.\n    \"\"\"\n    await self.close()\n
        "},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands.ShellOperation.close","title":"close async","text":"

        Close the job block.

        Source code in prefect_shell/commands.py
        @sync_compatible\nasync def close(self):\n    \"\"\"\n    Close the job block.\n    \"\"\"\n    await self._exit_stack.aclose()\n    self.logger.info(\"Successfully closed all open processes.\")\n
        "},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands.ShellOperation.run","title":"run async","text":"

        Runs a shell command, but unlike the trigger method, additionally waits and fetches the result directly, automatically managing the context. This method is ideal for short-lasting shell commands; for long-lasting shell commands, it is recommended to use the trigger method instead.

        Parameters:

        Name Type Description Default **open_kwargs Dict[str, Any]

        Additional keyword arguments to pass to open_process.

        {}

        Returns:

        Type Description List[str]

        The lines output from the shell command as a list.

        Examples:

        Sleep for 5 seconds and then print \"Hello, world!\":

        from prefect_shell import ShellOperation\n\nshell_output = ShellOperation(\n    commands=[\"sleep 5\", \"echo 'Hello, world!'\"]\n).run()\n

        Source code in prefect_shell/commands.py
        @sync_compatible\nasync def run(self, **open_kwargs: Dict[str, Any]) -> List[str]:\n    \"\"\"\n    Runs a shell command, but unlike the trigger method,\n    additionally waits and fetches the result directly, automatically managing\n    the context. This method is ideal for short-lasting shell commands;\n    for long-lasting shell commands, it is\n    recommended to use the `trigger` method instead.\n\n    Args:\n        **open_kwargs: Additional keyword arguments to pass to `open_process`.\n\n    Returns:\n        The lines output from the shell command as a list.\n\n    Examples:\n        Sleep for 5 seconds and then print \"Hello, world!\":\n        ```python\n        from prefect_shell import ShellOperation\n\n        shell_output = ShellOperation(\n            commands=[\"sleep 5\", \"echo 'Hello, world!'\"]\n        ).run()\n        ```\n    \"\"\"\n    input_open_kwargs = self._compile_kwargs(**open_kwargs)\n    async with open_process(**input_open_kwargs) as process:\n        shell_process = ShellProcess(shell_operation=self, process=process)\n        num_commands = len(self.commands)\n        self.logger.info(\n            f\"PID {process.pid} triggered with {num_commands} commands running \"\n            f\"inside the {(self.working_dir or '.')!r} directory.\"\n        )\n        await shell_process.wait_for_completion()\n        result = await shell_process.fetch_result()\n\n    return result\n
        "},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands.ShellOperation.trigger","title":"trigger async","text":"

        Triggers a shell command and returns the shell command run object to track the execution of the run. This method is ideal for long-lasting shell commands; for short-lasting shell commands, it is recommended to use the run method instead.

        Parameters:

        Name Type Description Default **open_kwargs Dict[str, Any]

        Additional keyword arguments to pass to open_process.

        {}

        Returns:

        Type Description ShellProcess

        A ShellProcess object.

        Examples:

        Sleep for 5 seconds and then print \"Hello, world!\":

        from prefect_shell import ShellOperation\n\nwith ShellOperation(\n    commands=[\"sleep 5\", \"echo 'Hello, world!'\"],\n) as shell_operation:\n    shell_process = shell_operation.trigger()\n    shell_process.wait_for_completion()\n    shell_output = shell_process.fetch_result()\n

        Source code in prefect_shell/commands.py
        @sync_compatible\nasync def trigger(self, **open_kwargs: Dict[str, Any]) -> ShellProcess:\n    \"\"\"\n    Triggers a shell command and returns the shell command run object\n    to track the execution of the run. This method is ideal for long-lasting\n    shell commands; for short-lasting shell commands, it is recommended\n    to use the `run` method instead.\n\n    Args:\n        **open_kwargs: Additional keyword arguments to pass to `open_process`.\n\n    Returns:\n        A `ShellProcess` object.\n\n    Examples:\n        Sleep for 5 seconds and then print \"Hello, world!\":\n        ```python\n        from prefect_shell import ShellOperation\n\n        with ShellOperation(\n            commands=[\"sleep 5\", \"echo 'Hello, world!'\"],\n        ) as shell_operation:\n            shell_process = shell_operation.trigger()\n            shell_process.wait_for_completion()\n            shell_output = shell_process.fetch_result()\n        ```\n    \"\"\"\n    input_open_kwargs = self._compile_kwargs(**open_kwargs)\n    process = await self._exit_stack.enter_async_context(\n        open_process(**input_open_kwargs)\n    )\n    num_commands = len(self.commands)\n    self.logger.info(\n        f\"PID {process.pid} triggered with {num_commands} commands running \"\n        f\"inside the {(self.working_dir or '.')!r} directory.\"\n    )\n    return ShellProcess(shell_operation=self, process=process)\n
        "},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands.ShellProcess","title":"ShellProcess","text":"

        Bases: JobRun

        A class representing a shell process.

        Source code in prefect_shell/commands.py
        class ShellProcess(JobRun):\n    \"\"\"\n    A class representing a shell process.\n    \"\"\"\n\n    def __init__(self, shell_operation: \"ShellOperation\", process: Process):\n        self._shell_operation = shell_operation\n        self._process = process\n        self._output = []\n\n    @property\n    def pid(self) -> int:\n        \"\"\"\n        The PID of the process.\n\n        Returns:\n            The PID of the process.\n        \"\"\"\n        return self._process.pid\n\n    @property\n    def return_code(self) -> Optional[int]:\n        \"\"\"\n        The return code of the process.\n\n        Returns:\n            The return code of the process, or `None` if the process is still running.\n        \"\"\"\n        return self._process.returncode\n\n    async def _capture_output(self, source):\n        \"\"\"\n        Capture output from source.\n        \"\"\"\n        async for output in TextReceiveStream(source):\n            text = output.rstrip()\n            if self._shell_operation.stream_output:\n                self.logger.info(f\"PID {self.pid} stream output:{os.linesep}{text}\")\n            self._output.extend(text.split(os.linesep))\n\n    @sync_compatible\n    async def wait_for_completion(self) -> None:\n        \"\"\"\n        Wait for the shell command to complete after a process is triggered.\n        \"\"\"\n        self.logger.debug(f\"Waiting for PID {self.pid} to complete.\")\n\n        await asyncio.gather(\n            self._capture_output(self._process.stdout),\n            self._capture_output(self._process.stderr),\n        )\n        await self._process.wait()\n\n        if self.return_code != 0:\n            raise RuntimeError(\n                f\"PID {self.pid} failed with return code {self.return_code}.\"\n            )\n        self.logger.info(\n            f\"PID {self.pid} completed with return code {self.return_code}.\"\n        )\n\n    @sync_compatible\n    async def fetch_result(self) -> List[str]:\n        \"\"\"\n        Retrieve the output of the shell operation.\n\n        Returns:\n            The lines output from the shell operation as a list.\n        \"\"\"\n        if self._process.returncode is None:\n            self.logger.info(\"Process is still running, result may be incomplete.\")\n        return self._output\n
        "},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands.ShellProcess.pid","title":"pid: int property","text":"

        The PID of the process.

        Returns:

        Type Description int

        The PID of the process.

        "},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands.ShellProcess.return_code","title":"return_code: Optional[int] property","text":"

        The return code of the process.

        Returns:

        Type Description Optional[int]

        The return code of the process, or None if the process is still running.

        "},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands.ShellProcess.fetch_result","title":"fetch_result async","text":"

        Retrieve the output of the shell operation.

        Returns:

        Type Description List[str]

        The lines output from the shell operation as a list.

        Source code in prefect_shell/commands.py
        @sync_compatible\nasync def fetch_result(self) -> List[str]:\n    \"\"\"\n    Retrieve the output of the shell operation.\n\n    Returns:\n        The lines output from the shell operation as a list.\n    \"\"\"\n    if self._process.returncode is None:\n        self.logger.info(\"Process is still running, result may be incomplete.\")\n    return self._output\n
        "},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands.ShellProcess.wait_for_completion","title":"wait_for_completion async","text":"

        Wait for the shell command to complete after a process is triggered.

        Source code in prefect_shell/commands.py
        @sync_compatible\nasync def wait_for_completion(self) -> None:\n    \"\"\"\n    Wait for the shell command to complete after a process is triggered.\n    \"\"\"\n    self.logger.debug(f\"Waiting for PID {self.pid} to complete.\")\n\n    await asyncio.gather(\n        self._capture_output(self._process.stdout),\n        self._capture_output(self._process.stderr),\n    )\n    await self._process.wait()\n\n    if self.return_code != 0:\n        raise RuntimeError(\n            f\"PID {self.pid} failed with return code {self.return_code}.\"\n        )\n    self.logger.info(\n        f\"PID {self.pid} completed with return code {self.return_code}.\"\n    )\n
        "},{"location":"integrations/prefect-shell/commands/#prefect_shell.commands.shell_run_command","title":"shell_run_command async","text":"

        Runs arbitrary shell commands.

        Parameters:

        Name Type Description Default command str

        Shell command to be executed; can also be provided post-initialization by calling this task instance.

        required env Optional[dict]

        Dictionary of environment variables to use for the subprocess; can also be provided at runtime.

        None helper_command Optional[str]

        String representing a shell command, which will be executed prior to the command in the same process. Can be used to change directories, define helper functions, etc. for different commands in a flow.

        None shell Optional[str]

        Shell to run the command with.

        None extension Optional[str]

        File extension to be appended to the command to be executed.

        None return_all bool

        Whether this task should return all lines of stdout as a list, or just the last line as a string.

        False stream_level int

        The logging level of the stream; defaults to 20 equivalent to logging.INFO.

        INFO cwd Union[str, bytes, PathLike, None]

        The working directory context the command will be executed within

        None

        Returns:

        Type Description Union[List, str]

        If return all, returns all lines as a list; else the last line as a string.

        Example

        List contents in the current directory.

        from prefect import flow\nfrom prefect_shell import shell_run_command\n\n@flow\ndef example_shell_run_command_flow():\n    return shell_run_command(command=\"ls .\", return_all=True)\n\nexample_shell_run_command_flow()\n
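Building on the parameters above, here is a hedged sketch combining `helper_command`, `env`, and `return_all`; the directory and environment variable are hypothetical and exist only to illustrate how the arguments interact:

```python
from prefect import flow
from prefect_shell import shell_run_command

@flow
def example_helper_env_flow():
    # helper_command runs before the command in the same process
    # (here to change directories); env adds variables for the subprocess.
    return shell_run_command(
        command="echo $GREETING && ls",
        helper_command="cd /tmp",   # hypothetical directory
        env={"GREETING": "hello"},  # hypothetical variable
        return_all=True,
    )

example_helper_env_flow()
```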

        Source code in prefect_shell/commands.py
        @task\nasync def shell_run_command(\n    command: str,\n    env: Optional[dict] = None,\n    helper_command: Optional[str] = None,\n    shell: Optional[str] = None,\n    extension: Optional[str] = None,\n    return_all: bool = False,\n    stream_level: int = logging.INFO,\n    cwd: Union[str, bytes, os.PathLike, None] = None,\n) -> Union[List, str]:\n    \"\"\"\n    Runs arbitrary shell commands.\n\n    Args:\n        command: Shell command to be executed; can also be\n            provided post-initialization by calling this task instance.\n        env: Dictionary of environment variables to use for\n            the subprocess; can also be provided at runtime.\n        helper_command: String representing a shell command, which\n            will be executed prior to the `command` in the same process.\n            Can be used to change directories, define helper functions, etc.\n            for different commands in a flow.\n        shell: Shell to run the command with.\n        extension: File extension to be appended to the command to be executed.\n        return_all: Whether this task should return all lines of stdout as a list,\n            or just the last line as a string.\n        stream_level: The logging level of the stream;\n            defaults to 20 equivalent to `logging.INFO`.\n        cwd: The working directory context the command will be executed within\n\n    Returns:\n        If return all, returns all lines as a list; else the last line as a string.\n\n    Example:\n        List contents in the current directory.\n        ```python\n        from prefect import flow\n        from prefect_shell import shell_run_command\n\n        @flow\n        def example_shell_run_command_flow():\n            return shell_run_command(command=\"ls .\", return_all=True)\n\n        example_shell_run_command_flow()\n        ```\n    \"\"\"\n    logger = get_run_logger()\n\n    current_env = os.environ.copy()\n    current_env.update(env or {})\n\n    if shell is None:\n        # if shell is not specified:\n        # use powershell for windows\n        # use bash for other platforms\n        shell = \"powershell\" if sys.platform == \"win32\" else \"bash\"\n\n    extension = \".ps1\" if shell.lower() == \"powershell\" else extension\n\n    tmp = tempfile.NamedTemporaryFile(prefix=\"prefect-\", suffix=extension, delete=False)\n    try:\n        if helper_command:\n            tmp.write(helper_command.encode())\n            tmp.write(os.linesep.encode())\n        tmp.write(command.encode())\n        if shell.lower() == \"powershell\":\n            # if powershell, set exit code to that of command\n            tmp.write(\"\\r\\nExit $LastExitCode\".encode())\n        tmp.close()\n\n        shell_command = [shell, tmp.name]\n\n        lines = []\n        async with await anyio.open_process(\n            shell_command, env=current_env, cwd=cwd\n        ) as process:\n            async for text in TextReceiveStream(process.stdout):\n                logger.log(level=stream_level, msg=text)\n                lines.extend(text.rstrip().split(\"\\n\"))\n\n            await process.wait()\n            if process.returncode:\n                stderr = \"\\n\".join(\n                    [text async for text in TextReceiveStream(process.stderr)]\n                )\n                if not stderr and lines:\n                    stderr = f\"{lines[-1]}\\n\"\n                msg = (\n                    f\"Command failed with exit code {process.returncode}:\\n\" f\"{stderr}\"\n                )\n            
    raise RuntimeError(msg)\n    finally:\n        if os.path.exists(tmp.name):\n            os.remove(tmp.name)\n\n    line = lines[-1] if lines else \"\"\n    return lines if return_all else line\n
        "},{"location":"integrations/prefect-slack/","title":"prefect-slack","text":""},{"location":"integrations/prefect-slack/#welcome","title":"Welcome!","text":"

        prefect-slack is a collection of prebuilt Prefect tasks that can be used to quickly construct Prefect flows.

        "},{"location":"integrations/prefect-slack/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-slack/#python-setup","title":"Python setup","text":"

        Requires an installation of Python 3.8+

        We recommend using a Python virtual environment manager such as pipenv, conda or virtualenv.

        These tasks are designed to work with Prefect 2.0. For more information about how to use Prefect, please refer to the Prefect documentation.

        "},{"location":"integrations/prefect-slack/#installation","title":"Installation","text":"

        Install prefect-slack

        pip install prefect-slack\n
        "},{"location":"integrations/prefect-slack/#slack-setup","title":"Slack setup","text":"

In order to use tasks in the collection, you'll first need to create a Slack app and install it in your Slack workspace. You can create a Slack app by navigating to the apps page for your Slack account and selecting 'Create New App'.

For tasks that require a Bot user OAuth token, you can get a token for your app by navigating to your app's OAuth & Permissions page.

For tasks that require a Webhook URL, you can generate new Webhook URLs by navigating to your app's Incoming Webhooks page.

        Slack's Basic app setup guide provides additional details on setting up a Slack app.
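If you want to reuse the token or webhook URL across flows, one option is to store each in a block; a minimal sketch with placeholder values and hypothetical block names:

```python
from prefect_slack import SlackCredentials, SlackWebhook

# Placeholder token and URL; the block names are hypothetical.
SlackCredentials(token="xoxb-your-bot-token-here").save("slack-bot")
SlackWebhook(url="https://hooks.slack.com/XXX").save("slack-webhook")
```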

        "},{"location":"integrations/prefect-slack/#write-and-run-a-flow","title":"Write and run a flow","text":"
        from prefect import flow\nfrom prefect.context import get_run_context\nfrom prefect_slack import SlackCredentials\nfrom prefect_slack.messages import send_chat_message\n\n\n@flow\ndef example_send_message_flow():\n   context = get_run_context()\n\n   # Run other tasks and subflows here\n\n   token = \"xoxb-your-bot-token-here\"\n   send_chat_message(\n         slack_credentials=SlackCredentials(token),\n         channel=\"#prefect\",\n         text=f\"Flow run {context.flow_run.name} completed :tada:\"\n   )\n\nexample_send_message_flow()\n
        "},{"location":"integrations/prefect-slack/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-slack/credentials/#prefect_slack.credentials","title":"prefect_slack.credentials","text":"

Credential classes used to store Slack credentials.

        "},{"location":"integrations/prefect-slack/credentials/#prefect_slack.credentials.SlackCredentials","title":"SlackCredentials","text":"

        Bases: Block

        Block holding Slack credentials for use in tasks and flows.

        Parameters:

        Name Type Description Default token

        Bot user OAuth token for the Slack app used to perform actions.

        required

        Examples:

        Load stored Slack credentials:

        from prefect_slack import SlackCredentials\nslack_credentials_block = SlackCredentials.load(\"BLOCK_NAME\")\n

        Get a Slack client:

        from prefect_slack import SlackCredentials\nslack_credentials_block = SlackCredentials.load(\"BLOCK_NAME\")\nclient = slack_credentials_block.get_client()\n

        Source code in prefect_slack/credentials.py
        class SlackCredentials(Block):\n    \"\"\"\n    Block holding Slack credentials for use in tasks and flows.\n\n    Args:\n        token: Bot user OAuth token for the Slack app used to perform actions.\n\n    Examples:\n        Load stored Slack credentials:\n        ```python\n        from prefect_slack import SlackCredentials\n        slack_credentials_block = SlackCredentials.load(\"BLOCK_NAME\")\n        ```\n\n        Get a Slack client:\n        ```python\n        from prefect_slack import SlackCredentials\n        slack_credentials_block = SlackCredentials.load(\"BLOCK_NAME\")\n        client = slack_credentials_block.get_client()\n        ```\n    \"\"\"  # noqa E501\n\n    _block_type_name = \"Slack Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/c1965ecbf8704ee1ea20d77786de9a41ce1087d1-500x500.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-slack/credentials/#prefect_slack.credentials.SlackCredentials\"  # noqa\n\n    token: SecretStr = Field(\n        default=...,\n        description=\"Bot user OAuth token for the Slack app used to perform actions.\",\n    )\n\n    def get_client(self) -> AsyncWebClient:\n        \"\"\"\n        Returns an authenticated `AsyncWebClient` to interact with the Slack API.\n        \"\"\"\n        return AsyncWebClient(token=self.token.get_secret_value())\n
        "},{"location":"integrations/prefect-slack/credentials/#prefect_slack.credentials.SlackCredentials.get_client","title":"get_client","text":"

        Returns an authenticated AsyncWebClient to interact with the Slack API.

        Source code in prefect_slack/credentials.py
        def get_client(self) -> AsyncWebClient:\n    \"\"\"\n    Returns an authenticated `AsyncWebClient` to interact with the Slack API.\n    \"\"\"\n    return AsyncWebClient(token=self.token.get_secret_value())\n
        "},{"location":"integrations/prefect-slack/credentials/#prefect_slack.credentials.SlackWebhook","title":"SlackWebhook","text":"

        Bases: NotificationBlock

        Block holding a Slack webhook for use in tasks and flows.

        Parameters:

        Name Type Description Default url

        Slack webhook URL which can be used to send messages (e.g. https://hooks.slack.com/XXX).

        required

        Examples:

        Load stored Slack webhook:

        from prefect_slack import SlackWebhook\nslack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\n

        Get a Slack webhook client:

        from prefect_slack import SlackWebhook\nslack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\nclient = slack_webhook_block.get_client()\n

        Send a notification in Slack:

        from prefect_slack import SlackWebhook\nslack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\nslack_webhook_block.notify(\"Hello, world!\")\n

        Source code in prefect_slack/credentials.py
        class SlackWebhook(NotificationBlock):\n    \"\"\"\n    Block holding a Slack webhook for use in tasks and flows.\n\n    Args:\n        url: Slack webhook URL which can be used to send messages\n            (e.g. `https://hooks.slack.com/XXX`).\n\n    Examples:\n        Load stored Slack webhook:\n        ```python\n        from prefect_slack import SlackWebhook\n        slack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\n        ```\n\n        Get a Slack webhook client:\n        ```python\n        from prefect_slack import SlackWebhook\n        slack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\n        client = slack_webhook_block.get_client()\n        ```\n\n        Send a notification in Slack:\n        ```python\n        from prefect_slack import SlackWebhook\n        slack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\n        slack_webhook_block.notify(\"Hello, world!\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Slack Incoming Webhook\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/7dkzINU9r6j44giEFuHuUC/85d4cd321ad60c1b1e898bc3fbd28580/5cb480cd5f1b6d3fbadece79.png?h=250\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-slack/credentials/#prefect_slack.credentials.SlackWebhook\"  # noqa\n\n    url: SecretStr = Field(\n        default=...,\n        title=\"Webhook URL\",\n        description=\"Slack webhook URL which can be used to send messages.\",\n        example=\"https://hooks.slack.com/XXX\",\n    )\n\n    def get_client(self) -> AsyncWebhookClient:\n        \"\"\"\n        Returns an authenticated `AsyncWebhookClient` to interact with the configured\n        Slack webhook.\n        \"\"\"\n        return AsyncWebhookClient(url=self.url.get_secret_value())\n\n    @sync_compatible\n    async def notify(self, body: str, subject: Optional[str] = None):\n        \"\"\"\n        Sends a message to the Slack channel.\n        \"\"\"\n        client = self.get_client()\n\n        response = await client.send(text=body)\n\n        # prefect>=2.17.2 added a means for notification blocks to raise errors on\n        # failures. This is not available in older versions, so we need to check if the\n        # private base class attribute exists before using it.\n        if getattr(self, \"_raise_on_failure\", False):  # pragma: no cover\n            try:\n                from prefect.blocks.abstract import NotificationError\n            except ImportError:\n                NotificationError = Exception\n\n            if response.status_code >= 400:\n                raise NotificationError(f\"Failed to send message: {response.body}\")\n
        "},{"location":"integrations/prefect-slack/credentials/#prefect_slack.credentials.SlackWebhook.get_client","title":"get_client","text":"

        Returns an authenticated AsyncWebhookClient to interact with the configured Slack webhook.

        Source code in prefect_slack/credentials.py
        def get_client(self) -> AsyncWebhookClient:\n    \"\"\"\n    Returns an authenticated `AsyncWebhookClient` to interact with the configured\n    Slack webhook.\n    \"\"\"\n    return AsyncWebhookClient(url=self.url.get_secret_value())\n
        "},{"location":"integrations/prefect-slack/credentials/#prefect_slack.credentials.SlackWebhook.notify","title":"notify async","text":"

        Sends a message to the Slack channel.

        Source code in prefect_slack/credentials.py
        @sync_compatible\nasync def notify(self, body: str, subject: Optional[str] = None):\n    \"\"\"\n    Sends a message to the Slack channel.\n    \"\"\"\n    client = self.get_client()\n\n    response = await client.send(text=body)\n\n    # prefect>=2.17.2 added a means for notification blocks to raise errors on\n    # failures. This is not available in older versions, so we need to check if the\n    # private base class attribute exists before using it.\n    if getattr(self, \"_raise_on_failure\", False):  # pragma: no cover\n        try:\n            from prefect.blocks.abstract import NotificationError\n        except ImportError:\n            NotificationError = Exception\n\n        if response.status_code >= 400:\n            raise NotificationError(f\"Failed to send message: {response.body}\")\n
        "},{"location":"integrations/prefect-slack/messages/","title":"Messages","text":""},{"location":"integrations/prefect-slack/messages/#prefect_slack.messages","title":"prefect_slack.messages","text":"

        Tasks for sending Slack messages.

        "},{"location":"integrations/prefect-slack/messages/#prefect_slack.messages.send_chat_message","title":"send_chat_message async","text":"

        Sends a message to a Slack channel.

        Parameters:

        Name Type Description Default channel str

        The name of the channel in which to post the chat message (e.g. #general).

        required slack_credentials SlackCredentials

        Instance of SlackCredentials initialized with a Slack bot token.

        required text Optional[str]

        Contents of the message. It's a best practice to always provide a text argument when posting a message. The text argument is used in places where content cannot be rendered such as: system push notifications, assistive technology such as screen readers, etc.

        None attachments Optional[Sequence[Union[Dict, Attachment]]]

        List of objects defining secondary context in the posted Slack message. The Slack API docs provide guidance on building attachments.

        None slack_blocks Optional[Sequence[Union[Dict, Block]]]

        List of objects defining the layout and formatting of the posted message. The Slack API docs provide guidance on building messages with blocks.

        None

        Returns:

        Name Type Description Dict Dict

        Response from the Slack API. Example response structures can be found in the Slack API docs.

        Example

        Post a message at the end of a flow run.

        from prefect import flow\nfrom prefect.context import get_run_context\nfrom prefect_slack import SlackCredentials\nfrom prefect_slack.messages import send_chat_message\n\n\n@flow\ndef example_send_message_flow():\n    context = get_run_context()\n\n    # Run other tasks and subflows here\n\n    token = \"xoxb-your-bot-token-here\"\n    send_chat_message(\n        slack_credentials=SlackCredentials(token),\n        channel=\"#prefect\",\n        text=f\"Flow run {context.flow_run.name} completed :tada:\"\n    )\n\nexample_send_message_flow()\n
        Source code in prefect_slack/messages.py
        @task\nasync def send_chat_message(\n    channel: str,\n    slack_credentials: SlackCredentials,\n    text: Optional[str] = None,\n    attachments: Optional[\n        Sequence[Union[Dict, \"slack_sdk.models.attachments.Attachment\"]]\n    ] = None,\n    slack_blocks: Optional[\n        Sequence[Union[Dict, \"slack_sdk.models.blocks.Block\"]]\n    ] = None,\n) -> Dict:\n    \"\"\"\n    Sends a message to a Slack channel.\n\n    Args:\n        channel: The name of the channel in which to post the chat message\n            (e.g. #general).\n        slack_credentials: Instance of `SlackCredentials` initialized with a Slack\n            bot token.\n        text: Contents of the message. It's a best practice to always provide a `text`\n            argument when posting a message. The `text` argument is used in places where\n            content cannot be rendered such as: system push notifications, assistive\n            technology such as screen readers, etc.\n        attachments: List of objects defining secondary context in the posted Slack\n            message. The [Slack API docs](https://api.slack.com/messaging/composing/layouts#building-attachments)\n            provide guidance on building attachments.\n        slack_blocks: List of objects defining the layout and formatting of the posted\n            message. The [Slack API docs](https://api.slack.com/block-kit/building)\n            provide guidance on building messages with blocks.\n\n    Returns:\n        Dict: Response from the Slack API. Example response structures can be found in\n            the [Slack API docs](https://api.slack.com/methods/chat.postMessage#examples).\n\n    Example:\n        Post a message at the end of a flow run.\n\n        ```python\n        from prefect import flow\n        from prefect.context import get_run_context\n        from prefect_slack import SlackCredentials\n        from prefect_slack.messages import send_chat_message\n\n\n        @flow\n        def example_send_message_flow():\n            context = get_run_context()\n\n            # Run other tasks and subflows here\n\n            token = \"xoxb-your-bot-token-here\"\n            send_chat_message(\n                slack_credentials=SlackCredentials(token),\n                channel=\"#prefect\",\n                text=f\"Flow run {context.flow_run.name} completed :tada:\"\n            )\n\n        example_send_message_flow()\n        ```\n    \"\"\"  # noqa\n    logger = get_run_logger()\n    logger.info(\"Posting chat message to %s\", channel)\n\n    client = slack_credentials.get_client()\n    result = await client.chat_postMessage(\n        channel=channel, text=text, blocks=slack_blocks, attachments=attachments\n    )\n    return result.data\n
        "},{"location":"integrations/prefect-slack/messages/#prefect_slack.messages.send_incoming_webhook_message","title":"send_incoming_webhook_message async","text":"

        Sends a message via an incoming webhook.

        Parameters:

        Name Type Description Default slack_webhook SlackWebhook

        Instance of SlackWebhook initialized with a Slack webhook URL.

        required text Optional[str]

        Contents of the message. It's a best practice to always provide a text argument when posting a message. The text argument is used in places where content cannot be rendered such as: system push notifications, assistive technology such as screen readers, etc.

        None attachments Optional[Sequence[Union[Dict, Attachment]]]

        List of objects defining secondary context in the posted Slack message. The Slack API docs provide guidance on building attachments.

        None slack_blocks Optional[Sequence[Union[Dict, Block]]]

        List of objects defining the layout and formatting of the posted message. The Slack API docs provide guidance on building messages with blocks.

        None Example

        Post a message at the end of a flow run.

        from prefect import flow\nfrom prefect_slack import SlackWebhook\nfrom prefect_slack.messages import send_incoming_webhook_message\n\n\n@flow\ndef example_send_message_flow():\n    # Run other tasks and subflows here\n\n    webhook_url = \"https://hooks.slack.com/XXX\"\n    send_incoming_webhook_message(\n        slack_webhook=SlackWebhook(\n            url=webhook_url\n        ),\n        text=\"Warehouse loading flow completed :sparkles:\"\n    )\n\nexample_send_message_flow()\n
        Source code in prefect_slack/messages.py
        @task\nasync def send_incoming_webhook_message(\n    slack_webhook: SlackWebhook,\n    text: Optional[str] = None,\n    attachments: Optional[\n        Sequence[Union[Dict, \"slack_sdk.models.attachments.Attachment\"]]\n    ] = None,\n    slack_blocks: Optional[\n        Sequence[Union[Dict, \"slack_sdk.models.blocks.Block\"]]\n    ] = None,\n) -> None:\n    \"\"\"\n    Sends a message via an incoming webhook.\n\n    Args:\n        slack_webhook: Instance of `SlackWebhook` initialized with a Slack\n            webhook URL.\n        text: Contents of the message. It's a best practice to always provide a `text`\n            argument when posting a message. The `text` argument is used in places where\n            content cannot be rendered such as: system push notifications, assistive\n            technology such as screen readers, etc.\n        attachments: List of objects defining secondary context in the posted Slack\n            message. The [Slack API docs](https://api.slack.com/messaging/composing/layouts#building-attachments)\n            provide guidance on building attachments.\n        slack_blocks: List of objects defining the layout and formatting of the posted\n            message. The [Slack API docs](https://api.slack.com/block-kit/building)\n            provide guidance on building messages with blocks.\n\n    Example:\n        Post a message at the end of a flow run.\n\n        ```python\n        from prefect import flow\n        from prefect_slack import SlackWebhook\n        from prefect_slack.messages import send_incoming_webhook_message\n\n\n        @flow\n        def example_send_message_flow():\n            # Run other tasks and subflows here\n\n            webhook_url = \"https://hooks.slack.com/XXX\"\n            send_incoming_webhook_message(\n                slack_webhook=SlackWebhook(\n                    url=webhook_url\n                ),\n                text=\"Warehouse loading flow completed :sparkles:\"\n            )\n\n        example_send_message_flow()\n        ```\n    \"\"\"  # noqa\n    logger = get_run_logger()\n    logger.info(\"Posting message to provided webhook\")\n\n    client = slack_webhook.get_client()\n    await client.send(text=text, attachments=attachments, blocks=slack_blocks)\n
        "},{"location":"integrations/prefect-snowflake/","title":"prefect-snowflake","text":""},{"location":"integrations/prefect-snowflake/#welcome","title":"Welcome!","text":"

        The prefect-snowflake collection makes it easy to connect to a Snowflake database in your Prefect flows. Check out the examples below to get started!

        "},{"location":"integrations/prefect-snowflake/#getting-started","title":"Getting Started","text":""},{"location":"integrations/prefect-snowflake/#integrate-with-prefect-flows","title":"Integrate with Prefect flows","text":"

        Prefect works with Snowflake by providing dataflow automation for faster, more efficient data pipeline creation, execution, and monitoring.

        This results in reduced errors, increased confidence in your data, and ultimately, faster insights.

        To set up a table, use the execute and execute_many methods. Then, use the fetch_many method to retrieve data in a stream until there's no more data.

        By using the SnowflakeConnector as a context manager, you can make sure that the Snowflake connection and cursors are closed properly after you're done with them.

        Be sure to install prefect-snowflake, register the blocks, and create a credentials block to run the examples below!

Sync / Async
from prefect import flow, task\nfrom prefect_snowflake import SnowflakeConnector\n\n\n@task\ndef setup_table(block_name: str) -> None:\n    with SnowflakeConnector.load(block_name) as connector:\n        connector.execute(\n            \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n        )\n        connector.execute_many(\n            \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n            seq_of_parameters=[\n                {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                {\"name\": \"Unknown\", \"address\": \"Space\"},\n                {\"name\": \"Me\", \"address\": \"Myway 88\"},\n            ],\n        )\n\n@task\ndef fetch_data(block_name: str) -> list:\n    all_rows = []\n    with SnowflakeConnector.load(block_name) as connector:\n        while True:\n            # Repeated fetch* calls using the same operation will\n            # skip re-executing and instead return the next set of results\n            new_rows = connector.fetch_many(\"SELECT * FROM customers\", size=2)\n            if len(new_rows) == 0:\n                break\n            all_rows.append(new_rows)\n    return all_rows\n\n@flow\ndef snowflake_flow(block_name: str) -> list:\n    setup_table(block_name)\n    all_rows = fetch_data(block_name)\n    return all_rows\n\n\nif __name__==\"__main__\":\n    snowflake_flow(\"example\")\n
        from prefect import flow, task\nfrom prefect_snowflake import SnowflakeConnector\nimport asyncio\n\n@task\nasync def setup_table(block_name: str) -> None:\n    with await SnowflakeConnector.load(block_name) as connector:\n        await connector.execute(\n            \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n        )\n        await connector.execute_many(\n            \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n            seq_of_parameters=[\n                {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                {\"name\": \"Unknown\", \"address\": \"Space\"},\n                {\"name\": \"Me\", \"address\": \"Myway 88\"},\n            ],\n        )\n\n@task\nasync def fetch_data(block_name: str) -> list:\n    all_rows = []\n    with await SnowflakeConnector.load(block_name) as connector:\n        while True:\n            # Repeated fetch* calls using the same operation will\n            # skip re-executing and instead return the next set of results\n            new_rows = await connector.fetch_many(\"SELECT * FROM customers\", size=2)\n            if len(new_rows) == 0:\n                break\n            all_rows.append(new_rows)\n    return all_rows\n\n@flow\nasync def snowflake_flow(block_name: str) -> list:\n    await setup_table(block_name)\n    all_rows = await fetch_data(block_name)\n    return all_rows\n\n\nif __name__==\"__main__\":\n    asyncio.run(snowflake_flow(\"example\"))\n
        "},{"location":"integrations/prefect-snowflake/#access-underlying-snowflake-connection","title":"Access underlying Snowflake connection","text":"

        If the native methods of the block don't meet your requirements, don't worry.

        You have the option to access the underlying Snowflake connection and utilize its built-in methods as well.

import pandas as pd\nfrom prefect import flow\nfrom prefect_snowflake.database import SnowflakeConnector\nfrom snowflake.connector.pandas_tools import write_pandas\n\n@flow\ndef snowflake_write_pandas_flow():\n    connector = SnowflakeConnector.load(\"my-block\")\n    with connector.get_connection() as connection:\n        table_name = \"TABLE_NAME\"\n        ddl = \"NAME STRING, NUMBER INT\"\n        statement = f'CREATE TABLE IF NOT EXISTS {table_name} ({ddl})'\n        with connection.cursor() as cursor:\n            cursor.execute(statement)\n\n        # case sensitivity matters here!\n        df = pd.DataFrame([('Marvin', 42), ('Ford', 88)], columns=['NAME', 'NUMBER'])\n        success, num_chunks, num_rows, _ = write_pandas(\n            conn=connection,\n            df=df,\n            table_name=table_name,\n            database=connector.database,\n            schema=connector.schema_  # note the \"_\" suffix\n        )\n
        "},{"location":"integrations/prefect-snowflake/#resources","title":"Resources","text":"

        For more tips on how to use tasks and flows in an integration, check out Using Collections!

        "},{"location":"integrations/prefect-snowflake/#installation","title":"Installation","text":"

        Install prefect-snowflake with pip:

        pip install prefect-snowflake\n
        "},{"location":"integrations/prefect-snowflake/#registering-blocks","title":"Registering blocks","text":"

        Register blocks in this module to make them available for use.

prefect block register -m prefect_snowflake\n
        "},{"location":"integrations/prefect-snowflake/#saving-credentials-to-block","title":"Saving credentials to block","text":"

Note that to use the load method on a block, you must already have a block document saved, either through code or through the UI.

        Below is a walkthrough on saving a SnowflakeCredentials block through code.

        1. Head over to https://app.snowflake.com/.
2. Log in to your Snowflake account, e.g. nh12345.us-east-2.aws, with your username and password.
3. Use those credentials to replace the placeholders below.
        from prefect_snowflake import SnowflakeCredentials\n\ncredentials = SnowflakeCredentials(\n    account=\"ACCOUNT-PLACEHOLDER\",  # resembles nh12345.us-east-2.aws\n    user=\"USER-PLACEHOLDER\",\n    password=\"PASSWORD-PLACEHOLDER\"\n)\ncredentials.save(\"CREDENTIALS-BLOCK-NAME-PLACEHOLDER\")\n

        Then, to create a SnowflakeConnector block:

        1. After logging in, click on any worksheet.
        2. On the left side, select a database and schema.
        3. On the top right, select a warehouse.
        4. Create a short script, replacing the placeholders below.
        from prefect_snowflake import SnowflakeCredentials, SnowflakeConnector\n\ncredentials = SnowflakeCredentials.load(\"CREDENTIALS-BLOCK-NAME-PLACEHOLDER\")\n\nconnector = SnowflakeConnector(\n    credentials=credentials,\n    database=\"DATABASE-PLACEHOLDER\",\n    schema=\"SCHEMA-PLACEHOLDER\",\n    warehouse=\"COMPUTE_WH\",\n)\nconnector.save(\"CONNECTOR-BLOCK-NAME-PLACEHOLDER\")\n

        Congrats! You can now easily load the saved block, which holds your credentials and connection info:

        from prefect_snowflake import SnowflakeCredentials, SnowflakeConnector\n\nSnowflakeCredentials.load(\"CREDENTIALS-BLOCK-NAME-PLACEHOLDER\")\nSnowflakeConnector.load(\"CONNECTOR-BLOCK-NAME-PLACEHOLDER\")\n

        Registering blocks

        Register blocks in this module to view and edit them on Prefect Cloud:

        prefect block register -m prefect_snowflake\n

        A list of available blocks in prefect-snowflake and their setup instructions can be found here.

        "},{"location":"integrations/prefect-snowflake/#feedback","title":"Feedback","text":"

        If you encounter any bugs while using prefect-snowflake, feel free to open an issue in the prefect repository.

        If you have any questions or issues while using prefect-snowflake, you can find help in the Prefect Slack community.

        "},{"location":"integrations/prefect-snowflake/#contributing","title":"Contributing","text":"

If you'd like to contribute a fix for an issue or add a feature to prefect-snowflake, please propose changes through a pull request from a fork of the Prefect repository.

        "},{"location":"integrations/prefect-snowflake/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-snowflake/credentials/#prefect_snowflake.credentials","title":"prefect_snowflake.credentials","text":"

        Credentials block for authenticating with Snowflake.

        "},{"location":"integrations/prefect-snowflake/credentials/#prefect_snowflake.credentials.InvalidPemFormat","title":"InvalidPemFormat","text":"

        Bases: Exception

        Invalid PEM Format Certificate

        Source code in prefect_snowflake/credentials.py
        class InvalidPemFormat(Exception):\n    \"\"\"Invalid PEM Format Certificate\"\"\"\n
        "},{"location":"integrations/prefect-snowflake/credentials/#prefect_snowflake.credentials.SnowflakeCredentials","title":"SnowflakeCredentials","text":"

        Bases: CredentialsBlock

        Block used to manage authentication with Snowflake.

        Parameters:

        Name Type Description Default account str

        The snowflake account name.

        required user str

        The user name used to authenticate.

        required password SecretStr

        The password used to authenticate.

        required private_key SecretStr

        The PEM used to authenticate.

        required authenticator str

        The type of authenticator to use for initializing connection (oauth, externalbrowser, etc); refer to Snowflake documentation for details, and note that externalbrowser will only work in an environment where a browser is available.

        required token SecretStr

        The OAuth or JWT Token to provide when authenticator is set to OAuth.

        required endpoint str

        The Okta endpoint to use when authenticator is set to okta_endpoint, e.g. https://<okta_account_name>.okta.com.

        required role str

        The name of the default role to use.

        required autocommit bool

        Whether to automatically commit.

        required Example

        Load stored Snowflake credentials:

        from prefect_snowflake import SnowflakeCredentials\n\nsnowflake_credentials_block = SnowflakeCredentials.load(\"BLOCK_NAME\")\n
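Beyond loading a stored block, the fields above can also be combined for key-pair authentication; a minimal sketch with placeholder values and a hypothetical key path:

```python
from pathlib import Path
from prefect_snowflake import SnowflakeCredentials

# Key-pair authentication: placeholder account/user, hypothetical key path
# and passphrase, and a hypothetical block name.
credentials = SnowflakeCredentials(
    account="nh12345.us-east-2.aws",
    user="USER-PLACEHOLDER",
    private_key_path=Path("/path/to/rsa_key.p8"),
    private_key_passphrase="PASSPHRASE-PLACEHOLDER",
)
credentials.save("KEYPAIR-CREDENTIALS-BLOCK")
```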

        Source code in prefect_snowflake/credentials.py
        class SnowflakeCredentials(CredentialsBlock):\n    \"\"\"\n    Block used to manage authentication with Snowflake.\n\n    Args:\n        account (str): The snowflake account name.\n        user (str): The user name used to authenticate.\n        password (SecretStr): The password used to authenticate.\n        private_key (SecretStr): The PEM used to authenticate.\n        authenticator (str): The type of authenticator to use for initializing\n            connection (oauth, externalbrowser, etc); refer to\n            [Snowflake documentation](https://docs.snowflake.com/en/user-guide/python-connector-api.html#connect)\n            for details, and note that `externalbrowser` will only\n            work in an environment where a browser is available.\n        token (SecretStr): The OAuth or JWT Token to provide when\n            authenticator is set to OAuth.\n        endpoint (str): The Okta endpoint to use when authenticator is\n            set to `okta_endpoint`, e.g. `https://<okta_account_name>.okta.com`.\n        role (str): The name of the default role to use.\n        autocommit (bool): Whether to automatically commit.\n\n    Example:\n        Load stored Snowflake credentials:\n        ```python\n        from prefect_snowflake import SnowflakeCredentials\n\n        snowflake_credentials_block = SnowflakeCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"  # noqa E501\n\n    _block_type_name = \"Snowflake Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/bd359de0b4be76c2254bd329fe3a267a1a3879c2-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-snowflake/credentials/#prefect_snowflake.credentials.SnowflakeCredentials\"  # noqa\n\n    account: str = Field(\n        ..., description=\"The snowflake account name.\", example=\"nh12345.us-east-2.aws\"\n    )\n    user: str = Field(..., description=\"The user name used to authenticate.\")\n    password: Optional[SecretStr] = Field(\n        default=None, description=\"The password used to authenticate.\"\n    )\n    private_key: Optional[SecretBytes] = Field(\n        default=None, description=\"The PEM used to authenticate.\"\n    )\n    private_key_path: Optional[Path] = Field(\n        default=None, description=\"The path to the private key.\"\n    )\n    private_key_passphrase: Optional[SecretStr] = Field(\n        default=None, description=\"The password to use for the private key.\"\n    )\n    authenticator: Literal[\n        \"snowflake\",\n        \"snowflake_jwt\",\n        \"externalbrowser\",\n        \"okta_endpoint\",\n        \"oauth\",\n        \"username_password_mfa\",\n    ] = Field(  # noqa\n        default=\"snowflake\",\n        description=(\"The type of authenticator to use for initializing connection.\"),\n    )\n    token: Optional[SecretStr] = Field(\n        default=None,\n        description=(\n            \"The OAuth or JWT Token to provide when authenticator is set to `oauth`.\"\n        ),\n    )\n    endpoint: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The Okta endpoint to use when authenticator is set to `okta_endpoint`.\"\n        ),\n    )\n    role: Optional[str] = Field(\n        default=None, description=\"The name of the default role to use.\"\n    )\n    autocommit: Optional[bool] = Field(\n        default=None, description=\"Whether to automatically commit.\"\n    )\n\n    @root_validator(pre=True)\n    def _validate_auth_kwargs(cls, values):\n        \"\"\"\n        Ensure an 
authorization value has been provided by the user.\n        \"\"\"\n        auth_params = (\n            \"password\",\n            \"private_key\",\n            \"private_key_path\",\n            \"authenticator\",\n            \"token\",\n        )\n        if not any(values.get(param) for param in auth_params):\n            auth_str = \", \".join(auth_params)\n            raise ValueError(\n                f\"One of the authentication keys must be provided: {auth_str}\\n\"\n            )\n        elif values.get(\"private_key\") and values.get(\"private_key_path\"):\n            raise ValueError(\n                \"Do not provide both private_key and private_key_path; select one.\"\n            )\n        elif values.get(\"password\") and values.get(\"private_key_passphrase\"):\n            raise ValueError(\n                \"Do not provide both password and private_key_passphrase; \"\n                \"specify private_key_passphrase only instead.\"\n            )\n        return values\n\n    @root_validator(pre=True)\n    def _validate_token_kwargs(cls, values):\n        \"\"\"\n        Ensure an authorization value has been provided by the user.\n        \"\"\"\n        authenticator = values.get(\"authenticator\")\n        token = values.get(\"token\")\n        if authenticator == \"oauth\" and not token:\n            raise ValueError(\n                \"If authenticator is set to `oauth`, `token` must be provided\"\n            )\n        return values\n\n    @root_validator(pre=True)\n    def _validate_okta_kwargs(cls, values):\n        \"\"\"\n        Ensure an authorization value has been provided by the user.\n        \"\"\"\n        authenticator = values.get(\"authenticator\")\n\n        # did not want to make a breaking change so we will allow both\n        # see https://github.com/PrefectHQ/prefect-snowflake/issues/44\n        if \"okta_endpoint\" in values.keys():\n            warnings.warn(\n                \"Please specify `endpoint` instead of `okta_endpoint`; \"\n                \"`okta_endpoint` will be removed March 31, 2023.\",\n                DeprecationWarning,\n                stacklevel=2,\n            )\n            # remove okta endpoint from fields\n            okta_endpoint = values.pop(\"okta_endpoint\")\n            if \"endpoint\" not in values.keys():\n                values[\"endpoint\"] = okta_endpoint\n\n        endpoint = values.get(\"endpoint\")\n        if authenticator == \"okta_endpoint\" and not endpoint:\n            raise ValueError(\n                \"If authenticator is set to `okta_endpoint`, \"\n                \"`endpoint` must be provided\"\n            )\n        return values\n\n    def resolve_private_key(self) -> Optional[bytes]:\n        \"\"\"\n        Converts a PEM encoded private key into a DER binary key.\n\n        Returns:\n            DER encoded key if private_key has been provided otherwise returns None.\n\n        Raises:\n            InvalidPemFormat: If private key is not in PEM format.\n        \"\"\"\n        if self.private_key_path is None and self.private_key is None:\n            return None\n        elif self.private_key_path:\n            private_key = self.private_key_path.read_bytes()\n        else:\n            private_key = self._decode_secret(self.private_key)\n\n        if self.private_key_passphrase is not None:\n            password = self._decode_secret(self.private_key_passphrase)\n        elif self.password is not None:\n            warnings.warn(\n                \"Using the password field for 
private_key is deprecated \"\n                \"and will not work after March 31, 2023; please use \"\n                \"private_key_passphrase instead\",\n                DeprecationWarning,\n                stacklevel=2,\n            )\n            password = self._decode_secret(self.password)\n        else:\n            password = None\n\n        composed_private_key = self._compose_pem(private_key)\n        return load_pem_private_key(\n            data=composed_private_key,\n            password=password,\n            backend=default_backend(),\n        ).private_bytes(\n            encoding=Encoding.DER,\n            format=PrivateFormat.PKCS8,\n            encryption_algorithm=NoEncryption(),\n        )\n\n    @staticmethod\n    def _decode_secret(secret: Union[SecretStr, SecretBytes]) -> Optional[bytes]:\n        \"\"\"\n        Decode the provided secret into bytes. If the secret is not a\n        string or bytes, or it is whitespace, then return None.\n\n        Args:\n            secret: The value to decode.\n\n        Returns:\n            The decoded secret as bytes.\n\n        \"\"\"\n        if isinstance(secret, (SecretBytes, SecretStr)):\n            secret = secret.get_secret_value()\n\n        if not isinstance(secret, (bytes, str)) or len(secret) == 0 or secret.isspace():\n            return None\n\n        return secret if isinstance(secret, bytes) else secret.encode()\n\n    @staticmethod\n    def _compose_pem(private_key: bytes) -> bytes:\n        \"\"\"Validate structure of PEM certificate.\n\n        The original key passed from Prefect is sometimes malformed.\n        This function recomposes the key into a valid key that will\n        pass the serialization step when resolving the key to a DER.\n\n        Args:\n            private_key: A valid PEM format byte encoded string.\n\n        Returns:\n            byte encoded certificate.\n\n        Raises:\n            InvalidPemFormat: if private key is an invalid format.\n        \"\"\"\n        pem_parts = re.match(_SIMPLE_PEM_CERTIFICATE_REGEX, private_key.decode())\n        if pem_parts is None:\n            raise InvalidPemFormat()\n\n        body = \"\\n\".join(re.split(r\"\\s+\", pem_parts[2].strip()))\n        # reassemble header+body+footer\n        return f\"{pem_parts[1]}\\n{body}\\n{pem_parts[3]}\".encode()\n\n    def get_client(\n        self, **connect_kwargs: Any\n    ) -> snowflake.connector.SnowflakeConnection:\n        \"\"\"\n        Returns an authenticated connection that can be used to query\n        Snowflake databases.\n\n        Any additional arguments passed to this method will be used to configure\n        the SnowflakeConnection. 
For available parameters, please refer to the\n        [Snowflake Python connector documentation](https://docs.snowflake.com/en/user-guide/python-connector-api.html#connect).\n\n        Args:\n            **connect_kwargs: Additional arguments to pass to\n                `snowflake.connector.connect`.\n\n        Returns:\n            An authenticated Snowflake connection.\n\n        Example:\n            Get Snowflake connection with only block configuration:\n            ```python\n            from prefect_snowflake import SnowflakeCredentials\n\n            snowflake_credentials_block = SnowflakeCredentials.load(\"BLOCK_NAME\")\n\n            connection = snowflake_credentials_block.get_client()\n            ```\n\n            Get Snowflake connector scoped to a specified database:\n            ```python\n            from prefect_snowflake import SnowflakeCredentials\n\n            snowflake_credentials_block = SnowflakeCredentials.load(\"BLOCK_NAME\")\n\n            connection = snowflake_credentials_block.get_client(database=\"my_database\")\n            ```\n        \"\"\"  # noqa\n        connect_params = {\n            # required to track task's usage in the Snowflake Partner Network Portal\n            \"application\": \"Prefect_Snowflake_Collection\",\n            **self.dict(exclude_unset=True, exclude={\"block_type_slug\"}),\n            **connect_kwargs,\n        }\n\n        for key, value in connect_params.items():\n            if isinstance(value, SecretField):\n                connect_params[key] = connect_params[key].get_secret_value()\n\n        # set authenticator to the actual okta_endpoint\n        if connect_params.get(\"authenticator\") == \"okta_endpoint\":\n            endpoint = connect_params.pop(\"endpoint\", None) or connect_params.pop(\n                \"okta_endpoint\", None\n            )  # okta_endpoint is deprecated\n            connect_params[\"authenticator\"] = endpoint\n\n        private_der_key = self.resolve_private_key()\n        if private_der_key is not None:\n            connect_params[\"private_key\"] = private_der_key\n            connect_params.pop(\"password\", None)\n            connect_params.pop(\"private_key_passphrase\", None)\n\n        return snowflake.connector.connect(**connect_params)\n
        "},{"location":"integrations/prefect-snowflake/credentials/#prefect_snowflake.credentials.SnowflakeCredentials.get_client","title":"get_client","text":"

        Returns an authenticated connection that can be used to query Snowflake databases.

        Any additional arguments passed to this method will be used to configure the SnowflakeConnection. For available parameters, please refer to the Snowflake Python connector documentation.

        Parameters:

        Name Type Description Default **connect_kwargs Any

        Additional arguments to pass to snowflake.connector.connect.

        {}

        Returns:

        Type Description SnowflakeConnection

        An authenticated Snowflake connection.

        Example

        Get Snowflake connection with only block configuration:

        from prefect_snowflake import SnowflakeCredentials\n\nsnowflake_credentials_block = SnowflakeCredentials.load(\"BLOCK_NAME\")\n\nconnection = snowflake_credentials_block.get_client()\n

        Get Snowflake connector scoped to a specified database:

        from prefect_snowflake import SnowflakeCredentials\n\nsnowflake_credentials_block = SnowflakeCredentials.load(\"BLOCK_NAME\")\n\nconnection = snowflake_credentials_block.get_client(database=\"my_database\")\n

        Source code in prefect_snowflake/credentials.py
        def get_client(\n    self, **connect_kwargs: Any\n) -> snowflake.connector.SnowflakeConnection:\n    \"\"\"\n    Returns an authenticated connection that can be used to query\n    Snowflake databases.\n\n    Any additional arguments passed to this method will be used to configure\n    the SnowflakeConnection. For available parameters, please refer to the\n    [Snowflake Python connector documentation](https://docs.snowflake.com/en/user-guide/python-connector-api.html#connect).\n\n    Args:\n        **connect_kwargs: Additional arguments to pass to\n            `snowflake.connector.connect`.\n\n    Returns:\n        An authenticated Snowflake connection.\n\n    Example:\n        Get Snowflake connection with only block configuration:\n        ```python\n        from prefect_snowflake import SnowflakeCredentials\n\n        snowflake_credentials_block = SnowflakeCredentials.load(\"BLOCK_NAME\")\n\n        connection = snowflake_credentials_block.get_client()\n        ```\n\n        Get Snowflake connector scoped to a specified database:\n        ```python\n        from prefect_snowflake import SnowflakeCredentials\n\n        snowflake_credentials_block = SnowflakeCredentials.load(\"BLOCK_NAME\")\n\n        connection = snowflake_credentials_block.get_client(database=\"my_database\")\n        ```\n    \"\"\"  # noqa\n    connect_params = {\n        # required to track task's usage in the Snowflake Partner Network Portal\n        \"application\": \"Prefect_Snowflake_Collection\",\n        **self.dict(exclude_unset=True, exclude={\"block_type_slug\"}),\n        **connect_kwargs,\n    }\n\n    for key, value in connect_params.items():\n        if isinstance(value, SecretField):\n            connect_params[key] = connect_params[key].get_secret_value()\n\n    # set authenticator to the actual okta_endpoint\n    if connect_params.get(\"authenticator\") == \"okta_endpoint\":\n        endpoint = connect_params.pop(\"endpoint\", None) or connect_params.pop(\n            \"okta_endpoint\", None\n        )  # okta_endpoint is deprecated\n        connect_params[\"authenticator\"] = endpoint\n\n    private_der_key = self.resolve_private_key()\n    if private_der_key is not None:\n        connect_params[\"private_key\"] = private_der_key\n        connect_params.pop(\"password\", None)\n        connect_params.pop(\"private_key_passphrase\", None)\n\n    return snowflake.connector.connect(**connect_params)\n
        "},{"location":"integrations/prefect-snowflake/credentials/#prefect_snowflake.credentials.SnowflakeCredentials.resolve_private_key","title":"resolve_private_key","text":"

        Converts a PEM encoded private key into a DER binary key.

        Returns:

        Type Description Optional[bytes]

        DER encoded key if private_key has been provided otherwise returns None.

        Raises:

        Type Description InvalidPemFormat

        If private key is not in PEM format.

        Source code in prefect_snowflake/credentials.py
        def resolve_private_key(self) -> Optional[bytes]:\n    \"\"\"\n    Converts a PEM encoded private key into a DER binary key.\n\n    Returns:\n        DER encoded key if private_key has been provided otherwise returns None.\n\n    Raises:\n        InvalidPemFormat: If private key is not in PEM format.\n    \"\"\"\n    if self.private_key_path is None and self.private_key is None:\n        return None\n    elif self.private_key_path:\n        private_key = self.private_key_path.read_bytes()\n    else:\n        private_key = self._decode_secret(self.private_key)\n\n    if self.private_key_passphrase is not None:\n        password = self._decode_secret(self.private_key_passphrase)\n    elif self.password is not None:\n        warnings.warn(\n            \"Using the password field for private_key is deprecated \"\n            \"and will not work after March 31, 2023; please use \"\n            \"private_key_passphrase instead\",\n            DeprecationWarning,\n            stacklevel=2,\n        )\n        password = self._decode_secret(self.password)\n    else:\n        password = None\n\n    composed_private_key = self._compose_pem(private_key)\n    return load_pem_private_key(\n        data=composed_private_key,\n        password=password,\n        backend=default_backend(),\n    ).private_bytes(\n        encoding=Encoding.DER,\n        format=PrivateFormat.PKCS8,\n        encryption_algorithm=NoEncryption(),\n    )\n
        "},{"location":"integrations/prefect-snowflake/database/","title":"Database","text":""},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database","title":"prefect_snowflake.database","text":"

        Module for querying against Snowflake databases.

        "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.SnowflakeConnector","title":"SnowflakeConnector","text":"

        Bases: DatabaseBlock

        Block used to manage connections with Snowflake.

Upon instantiation, a connection is created and maintained for the life of the object until the close method is called.

It is recommended to use this block as a context manager, which will automatically close the connection and its cursors when the context is exited.

It is also recommended that this block is loaded and consumed within a single task or flow, because if the block is passed across separate tasks and flows, the state of the block's connection and cursor will be lost.

Parameters:

credentials: The credentials to authenticate with Snowflake. (required)
database: The name of the default database to use. (required)
warehouse: The name of the default warehouse to use. (required)
schema: The name of the default schema to use; this attribute is accessible through SnowflakeConnector(...).schema_. (required)
fetch_size: The number of rows to fetch at a time. (required)
poll_frequency_s: The number of seconds to wait before checking the query status. (required)

        Examples:

        Load stored Snowflake connector as a context manager:

        from prefect_snowflake.database import SnowflakeConnector\n\nsnowflake_connector = SnowflakeConnector.load(\"BLOCK_NAME\")\n

Insert data into the database and fetch results.

        from prefect_snowflake.database import SnowflakeConnector\n\nwith SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n    conn.execute(\n        \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n    )\n    conn.execute_many(\n        \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n        seq_of_parameters=[\n            {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n            {\"name\": \"Unknown\", \"address\": \"Space\"},\n            {\"name\": \"Me\", \"address\": \"Myway 88\"},\n        ],\n    )\n    results = conn.fetch_all(\n        \"SELECT * FROM customers WHERE address = %(address)s\",\n        parameters={\"address\": \"Space\"}\n    )\n    print(results)\n
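As recommended above, the connector is best loaded and consumed within a single flow or task so its connection state stays in scope; a minimal sketch of that pattern (the block name and query are placeholders):

```python
from prefect import flow
from prefect_snowflake.database import SnowflakeConnector


@flow
def query_customers():
    # Load and use the block inside one flow so the connection and cursors
    # are created and closed within the same run.
    with SnowflakeConnector.load("BLOCK_NAME") as conn:  # hypothetical block name
        return conn.fetch_all("SELECT name, address FROM customers;")


if __name__ == "__main__":
    print(query_customers())
```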

        Source code in prefect_snowflake/database.py
        class SnowflakeConnector(DatabaseBlock):\n    \"\"\"\n    Block used to manage connections with Snowflake.\n\n    Upon instantiating, a connection is created and maintained for the life of\n    the object until the close method is called.\n\n    It is recommended to use this block as a context manager, which will automatically\n    close the engine and its connections when the context is exited.\n\n    It is also recommended that this block is loaded and consumed within a single task\n    or flow because if the block is passed across separate tasks and flows,\n    the state of the block's connection and cursor will be lost.\n\n    Args:\n        credentials: The credentials to authenticate with Snowflake.\n        database: The name of the default database to use.\n        warehouse: The name of the default warehouse to use.\n        schema: The name of the default schema to use;\n            this attribute is accessible through `SnowflakeConnector(...).schema_`.\n        fetch_size: The number of rows to fetch at a time.\n        poll_frequency_s: The number of seconds before checking query.\n\n    Examples:\n        Load stored Snowflake connector as a context manager:\n        ```python\n        from prefect_snowflake.database import SnowflakeConnector\n\n        snowflake_connector = SnowflakeConnector.load(\"BLOCK_NAME\")\n        ```\n\n        Insert data into database and fetch results.\n        ```python\n        from prefect_snowflake.database import SnowflakeConnector\n\n        with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n            conn.execute(\n                \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n            )\n            conn.execute_many(\n                \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n                seq_of_parameters=[\n                    {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Unknown\", \"address\": \"Space\"},\n                    {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                ],\n            )\n            results = conn.fetch_all(\n                \"SELECT * FROM customers WHERE address = %(address)s\",\n                parameters={\"address\": \"Space\"}\n            )\n            print(results)\n        ```\n    \"\"\"  # noqa\n\n    _block_type_name = \"Snowflake Connector\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/bd359de0b4be76c2254bd329fe3a267a1a3879c2-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-snowflake/database/#prefect_snowflake.database.SnowflakeConnector\"  # noqa\n    _description = \"Perform data operations against a Snowflake database.\"\n\n    credentials: SnowflakeCredentials = Field(\n        default=..., description=\"The credentials to authenticate with Snowflake.\"\n    )\n    database: str = Field(\n        default=..., description=\"The name of the default database to use.\"\n    )\n    warehouse: str = Field(\n        default=..., description=\"The name of the default warehouse to use.\"\n    )\n    schema_: str = Field(\n        default=...,\n        alias=\"schema\",\n        description=\"The name of the default schema to use.\",\n    )\n    fetch_size: int = Field(\n        default=1, description=\"The default number of rows to fetch at a time.\"\n    )\n    poll_frequency_s: int = Field(\n        default=1,\n        title=\"Poll Frequency [seconds]\",\n        description=(\n            \"The number of 
seconds between checking query \"\n            \"status for long running queries.\"\n        ),\n    )\n\n    _connection: Optional[SnowflakeConnection] = None\n    _unique_cursors: Dict[str, SnowflakeCursor] = None\n\n    def get_connection(self, **connect_kwargs: Any) -> SnowflakeConnection:\n        \"\"\"\n        Returns an authenticated connection that can be\n        used to query from Snowflake databases.\n\n        Args:\n            **connect_kwargs: Additional arguments to pass to\n                `snowflake.connector.connect`.\n\n        Returns:\n            The authenticated SnowflakeConnection.\n\n        Examples:\n            ```python\n            from prefect_snowflake.credentials import SnowflakeCredentials\n            from prefect_snowflake.database import SnowflakeConnector\n\n            snowflake_credentials = SnowflakeCredentials(\n                account=\"account\",\n                user=\"user\",\n                password=\"password\",\n            )\n            snowflake_connector = SnowflakeConnector(\n                database=\"database\",\n                warehouse=\"warehouse\",\n                schema=\"schema\",\n                credentials=snowflake_credentials\n            )\n            with snowflake_connector.get_connection() as connection:\n                ...\n            ```\n        \"\"\"\n        if self._connection is not None:\n            return self._connection\n\n        connect_params = {\n            \"database\": self.database,\n            \"warehouse\": self.warehouse,\n            \"schema\": self.schema_,\n        }\n        connection = self.credentials.get_client(**connect_kwargs, **connect_params)\n        self._connection = connection\n        self.logger.info(\"Started a new connection to Snowflake.\")\n        return connection\n\n    def _start_connection(self):\n        \"\"\"\n        Starts Snowflake database connection.\n        \"\"\"\n        self.get_connection()\n        if self._unique_cursors is None:\n            self._unique_cursors = {}\n\n    def _get_cursor(\n        self,\n        inputs: Dict[str, Any],\n        cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n    ) -> Tuple[bool, SnowflakeCursor]:\n        \"\"\"\n        Get a Snowflake cursor.\n\n        Args:\n            inputs: The inputs to generate a unique hash, used to decide\n                whether a new cursor should be used.\n            cursor_type: The class of the cursor to use when creating a\n                Snowflake cursor.\n\n        Returns:\n            Whether a cursor is new and a Snowflake cursor.\n        \"\"\"\n        self._start_connection()\n\n        input_hash = hash_objects(inputs)\n        if input_hash is None:\n            raise RuntimeError(\n                \"We were not able to hash your inputs, \"\n                \"which resulted in an unexpected data return; \"\n                \"please open an issue with a reproducible example.\"\n            )\n        if input_hash not in self._unique_cursors.keys():\n            new_cursor = self._connection.cursor(cursor_type)\n            self._unique_cursors[input_hash] = new_cursor\n            return True, new_cursor\n        else:\n            existing_cursor = self._unique_cursors[input_hash]\n            return False, existing_cursor\n\n    async def _execute_async(self, cursor: SnowflakeCursor, inputs: Dict[str, Any]):\n        \"\"\"Helper method to execute operations asynchronously.\"\"\"\n        response = await 
run_sync_in_worker_thread(cursor.execute_async, **inputs)\n        self.logger.info(\n            f\"Executing the operation, {inputs['command']!r}, asynchronously; \"\n            f\"polling for the result every {self.poll_frequency_s} seconds.\"\n        )\n\n        query_id = response[\"queryId\"]\n        while self._connection.is_still_running(\n            await run_sync_in_worker_thread(\n                self._connection.get_query_status_throw_if_error, query_id\n            )\n        ):\n            await asyncio.sleep(self.poll_frequency_s)\n        await run_sync_in_worker_thread(cursor.get_results_from_sfqid, query_id)\n\n    def reset_cursors(self) -> None:\n        \"\"\"\n        Tries to close all opened cursors.\n\n        Examples:\n            Reset the cursors to refresh cursor position.\n            ```python\n            from prefect_snowflake.database import SnowflakeConnector\n\n            with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n                conn.execute(\n                    \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n                )\n                conn.execute_many(\n                    \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n                    seq_of_parameters=[\n                        {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Unknown\", \"address\": \"Space\"},\n                        {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                    ],\n                )\n                print(conn.fetch_one(\"SELECT * FROM customers\"))  # Ford\n                conn.reset_cursors()\n                print(conn.fetch_one(\"SELECT * FROM customers\"))  # should be Ford again\n            ```\n        \"\"\"  # noqa\n        if not self._unique_cursors:\n            self.logger.info(\"There were no cursors to reset.\")\n            return\n\n        input_hashes = tuple(self._unique_cursors.keys())\n        for input_hash in input_hashes:\n            cursor = self._unique_cursors.pop(input_hash)\n            try:\n                cursor.close()\n            except Exception as exc:\n                self.logger.warning(\n                    f\"Failed to close cursor for input hash {input_hash!r}: {exc}\"\n                )\n        self.logger.info(\"Successfully reset the cursors.\")\n\n    @sync_compatible\n    async def fetch_one(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n        **execute_kwargs: Any,\n    ) -> Tuple[Any]:\n        \"\"\"\n        Fetch a single result from the database.\n        Repeated calls using the same inputs to *any* of the fetch methods of this\n        block will skip executing the operation again, and instead,\n        return the next set of results from the previous execution,\n        until the reset_cursors method is called.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            cursor_type: The class of the cursor to use when creating a Snowflake cursor.\n            **execute_kwargs: Additional options to pass to `cursor.execute_async`.\n\n        Returns:\n            A tuple containing the data returned by the database,\n                where each row is a tuple and each column is a value in the tuple.\n\n        Examples:\n            Fetch one row from the database 
where address is Space.\n            ```python\n            from prefect_snowflake.database import SnowflakeConnector\n\n            with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n                conn.execute(\n                    \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n                )\n                conn.execute_many(\n                    \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n                    seq_of_parameters=[\n                        {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Unknown\", \"address\": \"Space\"},\n                        {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                    ],\n                )\n                result = conn.fetch_one(\n                    \"SELECT * FROM customers WHERE address = %(address)s\",\n                    parameters={\"address\": \"Space\"}\n                )\n                print(result)\n            ```\n        \"\"\"  # noqa\n        inputs = dict(\n            command=operation,\n            params=parameters,\n            **execute_kwargs,\n        )\n        new, cursor = self._get_cursor(inputs, cursor_type=cursor_type)\n        if new:\n            await self._execute_async(cursor, inputs)\n        self.logger.debug(\"Preparing to fetch a row.\")\n        result = await run_sync_in_worker_thread(cursor.fetchone)\n        return result\n\n    @sync_compatible\n    async def fetch_many(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        size: Optional[int] = None,\n        cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n        **execute_kwargs: Any,\n    ) -> List[Tuple[Any]]:\n        \"\"\"\n        Fetch a limited number of results from the database.\n        Repeated calls using the same inputs to *any* of the fetch methods of this\n        block will skip executing the operation again, and instead,\n        return the next set of results from the previous execution,\n        until the reset_cursors method is called.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            size: The number of results to return; if None or 0, uses the value of\n                `fetch_size` configured on the block.\n            cursor_type: The class of the cursor to use when creating a Snowflake cursor.\n            **execute_kwargs: Additional options to pass to `cursor.execute_async`.\n\n        Returns:\n            A list of tuples containing the data returned by the database,\n                where each row is a tuple and each column is a value in the tuple.\n\n        Examples:\n            Repeatedly fetch two rows from the database where address is Highway 42.\n            ```python\n            from prefect_snowflake.database import SnowflakeConnector\n\n            with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n                conn.execute(\n                    \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n                )\n                conn.execute_many(\n                    \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n                    seq_of_parameters=[\n                        {\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Unknown\", 
\"address\": \"Highway 42\"},\n                        {\"name\": \"Me\", \"address\": \"Highway 42\"},\n                    ],\n                )\n                result = conn.fetch_many(\n                    \"SELECT * FROM customers WHERE address = %(address)s\",\n                    parameters={\"address\": \"Highway 42\"},\n                    size=2\n                )\n                print(result)  # Marvin, Ford\n                result = conn.fetch_many(\n                    \"SELECT * FROM customers WHERE address = %(address)s\",\n                    parameters={\"address\": \"Highway 42\"},\n                    size=2\n                )\n                print(result)  # Unknown, Me\n            ```\n        \"\"\"  # noqa\n        inputs = dict(\n            command=operation,\n            params=parameters,\n            **execute_kwargs,\n        )\n        new, cursor = self._get_cursor(inputs, cursor_type)\n        if new:\n            await self._execute_async(cursor, inputs)\n        size = size or self.fetch_size\n        self.logger.debug(f\"Preparing to fetch {size} rows.\")\n        result = await run_sync_in_worker_thread(cursor.fetchmany, size=size)\n        return result\n\n    @sync_compatible\n    async def fetch_all(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n        **execute_kwargs: Any,\n    ) -> List[Tuple[Any]]:\n        \"\"\"\n        Fetch all results from the database.\n        Repeated calls using the same inputs to *any* of the fetch methods of this\n        block will skip executing the operation again, and instead,\n        return the next set of results from the previous execution,\n        until the reset_cursors method is called.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            cursor_type: The class of the cursor to use when creating a Snowflake cursor.\n            **execute_kwargs: Additional options to pass to `cursor.execute_async`.\n\n        Returns:\n            A list of tuples containing the data returned by the database,\n                where each row is a tuple and each column is a value in the tuple.\n\n        Examples:\n            Fetch all rows from the database where address is Highway 42.\n            ```python\n            from prefect_snowflake.database import SnowflakeConnector\n\n            with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n                conn.execute(\n                    \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n                )\n                conn.execute_many(\n                    \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n                    seq_of_parameters=[\n                        {\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Unknown\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                    ],\n                )\n                result = conn.fetch_all(\n                    \"SELECT * FROM customers WHERE address = %(address)s\",\n                    parameters={\"address\": \"Highway 42\"},\n                )\n                print(result)  # Marvin, Ford, Unknown\n            ```\n        \"\"\"  # noqa\n        
inputs = dict(\n            command=operation,\n            params=parameters,\n            **execute_kwargs,\n        )\n        new, cursor = self._get_cursor(inputs, cursor_type)\n        if new:\n            await self._execute_async(cursor, inputs)\n        self.logger.debug(\"Preparing to fetch all rows.\")\n        result = await run_sync_in_worker_thread(cursor.fetchall)\n        return result\n\n    @sync_compatible\n    async def execute(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n        **execute_kwargs: Any,\n    ) -> None:\n        \"\"\"\n        Executes an operation on the database. This method is intended to be used\n        for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n        Unlike the fetch methods, this method will always execute the operation\n        upon calling.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            cursor_type: The class of the cursor to use when creating a Snowflake cursor.\n            **execute_kwargs: Additional options to pass to `cursor.execute_async`.\n\n        Examples:\n            Create table named customers with two columns, name and address.\n            ```python\n            from prefect_snowflake.database import SnowflakeConnector\n\n            with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n                conn.execute(\n                    \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n                )\n            ```\n        \"\"\"  # noqa\n        self._start_connection()\n\n        inputs = dict(\n            command=operation,\n            params=parameters,\n            **execute_kwargs,\n        )\n        with self._connection.cursor(cursor_type) as cursor:\n            await run_sync_in_worker_thread(cursor.execute, **inputs)\n        self.logger.info(f\"Executed the operation, {operation!r}.\")\n\n    @sync_compatible\n    async def execute_many(\n        self,\n        operation: str,\n        seq_of_parameters: List[Dict[str, Any]],\n    ) -> None:\n        \"\"\"\n        Executes many operations on the database. 
This method is intended to be used\n        for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n        Unlike the fetch methods, this method will always execute the operations\n        upon calling.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            seq_of_parameters: The sequence of parameters for the operation.\n\n        Examples:\n            Create table and insert three rows into it.\n            ```python\n            from prefect_snowflake.database import SnowflakeConnector\n\n            with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n                conn.execute(\n                    \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n                )\n                conn.execute_many(\n                    \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n                    seq_of_parameters=[\n                        {\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Unknown\", \"address\": \"Space\"},\n                    ],\n                )\n            ```\n        \"\"\"  # noqa\n        self._start_connection()\n\n        inputs = dict(\n            command=operation,\n            seqparams=seq_of_parameters,\n        )\n        with self._connection.cursor() as cursor:\n            await run_sync_in_worker_thread(cursor.executemany, **inputs)\n        self.logger.info(\n            f\"Executed {len(seq_of_parameters)} operations off {operation!r}.\"\n        )\n\n    def close(self):\n        \"\"\"\n        Closes connection and its cursors.\n        \"\"\"\n        try:\n            self.reset_cursors()\n        finally:\n            if self._connection is None:\n                self.logger.info(\"There was no connection open to be closed.\")\n                return\n            self._connection.close()\n            self._connection = None\n            self.logger.info(\"Successfully closed the Snowflake connection.\")\n\n    def __enter__(self):\n        \"\"\"\n        Start a connection upon entry.\n        \"\"\"\n        return self\n\n    def __exit__(self, *args):\n        \"\"\"\n        Closes connection and its cursors upon exit.\n        \"\"\"\n        self.close()\n\n    def __getstate__(self):\n        \"\"\"Allows block to be pickled and dumped.\"\"\"\n        data = self.__dict__.copy()\n        data.update({k: None for k in {\"_connection\", \"_unique_cursors\"}})\n        return data\n\n    def __setstate__(self, data: dict):\n        \"\"\"Reset connection and cursors upon loading.\"\"\"\n        self.__dict__.update(data)\n        self._start_connection()\n
        "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.SnowflakeConnector.close","title":"close","text":"

        Closes connection and its cursors.

        Source code in prefect_snowflake/database.py
        def close(self):\n    \"\"\"\n    Closes connection and its cursors.\n    \"\"\"\n    try:\n        self.reset_cursors()\n    finally:\n        if self._connection is None:\n            self.logger.info(\"There was no connection open to be closed.\")\n            return\n        self._connection.close()\n        self._connection = None\n        self.logger.info(\"Successfully closed the Snowflake connection.\")\n
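When the block is not used as a context manager, the connection can be released explicitly; a minimal sketch, assuming a block named BLOCK_NAME exists:

```python
from prefect_snowflake.database import SnowflakeConnector

conn = SnowflakeConnector.load("BLOCK_NAME")  # hypothetical block name
try:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);"
    )
finally:
    # Explicitly close all cursors and the underlying Snowflake connection.
    conn.close()
```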
        "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.SnowflakeConnector.execute","title":"execute async","text":"

        Executes an operation on the database. This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE. Unlike the fetch methods, this method will always execute the operation upon calling.

Parameters:

operation (str): The SQL query or other operation to be executed. (required)
parameters (Optional[Dict[str, Any]]): The parameters for the operation. (default: None)
cursor_type (Type[SnowflakeCursor]): The class of the cursor to use when creating a Snowflake cursor. (default: SnowflakeCursor)
**execute_kwargs (Any): Additional options to pass to cursor.execute_async. (default: {})

        Examples:

        Create table named customers with two columns, name and address.

        from prefect_snowflake.database import SnowflakeConnector\n\nwith SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n    conn.execute(\n        \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n    )\n

        Source code in prefect_snowflake/database.py
        @sync_compatible\nasync def execute(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n    **execute_kwargs: Any,\n) -> None:\n    \"\"\"\n    Executes an operation on the database. This method is intended to be used\n    for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n    Unlike the fetch methods, this method will always execute the operation\n    upon calling.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        cursor_type: The class of the cursor to use when creating a Snowflake cursor.\n        **execute_kwargs: Additional options to pass to `cursor.execute_async`.\n\n    Examples:\n        Create table named customers with two columns, name and address.\n        ```python\n        from prefect_snowflake.database import SnowflakeConnector\n\n        with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n            conn.execute(\n                \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n            )\n        ```\n    \"\"\"  # noqa\n    self._start_connection()\n\n    inputs = dict(\n        command=operation,\n        params=parameters,\n        **execute_kwargs,\n    )\n    with self._connection.cursor(cursor_type) as cursor:\n        await run_sync_in_worker_thread(cursor.execute, **inputs)\n    self.logger.info(f\"Executed the operation, {operation!r}.\")\n
        "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.SnowflakeConnector.execute_many","title":"execute_many async","text":"

        Executes many operations on the database. This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE. Unlike the fetch methods, this method will always execute the operations upon calling.

Parameters:

operation (str): The SQL query or other operation to be executed. (required)
seq_of_parameters (List[Dict[str, Any]]): The sequence of parameters for the operation. (required)

        Examples:

        Create table and insert three rows into it.

        from prefect_snowflake.database import SnowflakeConnector\n\nwith SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n    conn.execute(\n        \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n    )\n    conn.execute_many(\n        \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n        seq_of_parameters=[\n            {\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n            {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n            {\"name\": \"Unknown\", \"address\": \"Space\"},\n        ],\n    )\n

        Source code in prefect_snowflake/database.py
        @sync_compatible\nasync def execute_many(\n    self,\n    operation: str,\n    seq_of_parameters: List[Dict[str, Any]],\n) -> None:\n    \"\"\"\n    Executes many operations on the database. This method is intended to be used\n    for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n    Unlike the fetch methods, this method will always execute the operations\n    upon calling.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        seq_of_parameters: The sequence of parameters for the operation.\n\n    Examples:\n        Create table and insert three rows into it.\n        ```python\n        from prefect_snowflake.database import SnowflakeConnector\n\n        with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n            conn.execute(\n                \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n            )\n            conn.execute_many(\n                \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n                seq_of_parameters=[\n                    {\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Unknown\", \"address\": \"Space\"},\n                ],\n            )\n        ```\n    \"\"\"  # noqa\n    self._start_connection()\n\n    inputs = dict(\n        command=operation,\n        seqparams=seq_of_parameters,\n    )\n    with self._connection.cursor() as cursor:\n        await run_sync_in_worker_thread(cursor.executemany, **inputs)\n    self.logger.info(\n        f\"Executed {len(seq_of_parameters)} operations off {operation!r}.\"\n    )\n
        "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.SnowflakeConnector.fetch_all","title":"fetch_all async","text":"

        Fetch all results from the database. Repeated calls using the same inputs to any of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset_cursors method is called.

Parameters:

operation (str): The SQL query or other operation to be executed. (required)
parameters (Optional[Dict[str, Any]]): The parameters for the operation. (default: None)
cursor_type (Type[SnowflakeCursor]): The class of the cursor to use when creating a Snowflake cursor. (default: SnowflakeCursor)
**execute_kwargs (Any): Additional options to pass to cursor.execute_async. (default: {})

Returns:

List[Tuple[Any]]: A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple.

        Examples:

        Fetch all rows from the database where address is Highway 42.

        from prefect_snowflake.database import SnowflakeConnector\n\nwith SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n    conn.execute(\n        \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n    )\n    conn.execute_many(\n        \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n        seq_of_parameters=[\n            {\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n            {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n            {\"name\": \"Unknown\", \"address\": \"Highway 42\"},\n            {\"name\": \"Me\", \"address\": \"Myway 88\"},\n        ],\n    )\n    result = conn.fetch_all(\n        \"SELECT * FROM customers WHERE address = %(address)s\",\n        parameters={\"address\": \"Highway 42\"},\n    )\n    print(result)  # Marvin, Ford, Unknown\n

        Source code in prefect_snowflake/database.py
        @sync_compatible\nasync def fetch_all(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n    **execute_kwargs: Any,\n) -> List[Tuple[Any]]:\n    \"\"\"\n    Fetch all results from the database.\n    Repeated calls using the same inputs to *any* of the fetch methods of this\n    block will skip executing the operation again, and instead,\n    return the next set of results from the previous execution,\n    until the reset_cursors method is called.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        cursor_type: The class of the cursor to use when creating a Snowflake cursor.\n        **execute_kwargs: Additional options to pass to `cursor.execute_async`.\n\n    Returns:\n        A list of tuples containing the data returned by the database,\n            where each row is a tuple and each column is a value in the tuple.\n\n    Examples:\n        Fetch all rows from the database where address is Highway 42.\n        ```python\n        from prefect_snowflake.database import SnowflakeConnector\n\n        with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n            conn.execute(\n                \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n            )\n            conn.execute_many(\n                \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n                seq_of_parameters=[\n                    {\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Unknown\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                ],\n            )\n            result = conn.fetch_all(\n                \"SELECT * FROM customers WHERE address = %(address)s\",\n                parameters={\"address\": \"Highway 42\"},\n            )\n            print(result)  # Marvin, Ford, Unknown\n        ```\n    \"\"\"  # noqa\n    inputs = dict(\n        command=operation,\n        params=parameters,\n        **execute_kwargs,\n    )\n    new, cursor = self._get_cursor(inputs, cursor_type)\n    if new:\n        await self._execute_async(cursor, inputs)\n    self.logger.debug(\"Preparing to fetch all rows.\")\n    result = await run_sync_in_worker_thread(cursor.fetchall)\n    return result\n
        "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.SnowflakeConnector.fetch_many","title":"fetch_many async","text":"

        Fetch a limited number of results from the database. Repeated calls using the same inputs to any of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset_cursors method is called.

Parameters:

operation (str): The SQL query or other operation to be executed. (required)
parameters (Optional[Dict[str, Any]]): The parameters for the operation. (default: None)
size (Optional[int]): The number of results to return; if None or 0, uses the value of fetch_size configured on the block. (default: None)
cursor_type (Type[SnowflakeCursor]): The class of the cursor to use when creating a Snowflake cursor. (default: SnowflakeCursor)
**execute_kwargs (Any): Additional options to pass to cursor.execute_async. (default: {})

Returns:

List[Tuple[Any]]: A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple.

        Examples:

        Repeatedly fetch two rows from the database where address is Highway 42.

        from prefect_snowflake.database import SnowflakeConnector\n\nwith SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n    conn.execute(\n        \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n    )\n    conn.execute_many(\n        \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n        seq_of_parameters=[\n            {\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n            {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n            {\"name\": \"Unknown\", \"address\": \"Highway 42\"},\n            {\"name\": \"Me\", \"address\": \"Highway 42\"},\n        ],\n    )\n    result = conn.fetch_many(\n        \"SELECT * FROM customers WHERE address = %(address)s\",\n        parameters={\"address\": \"Highway 42\"},\n        size=2\n    )\n    print(result)  # Marvin, Ford\n    result = conn.fetch_many(\n        \"SELECT * FROM customers WHERE address = %(address)s\",\n        parameters={\"address\": \"Highway 42\"},\n        size=2\n    )\n    print(result)  # Unknown, Me\n

        Source code in prefect_snowflake/database.py
        @sync_compatible\nasync def fetch_many(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    size: Optional[int] = None,\n    cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n    **execute_kwargs: Any,\n) -> List[Tuple[Any]]:\n    \"\"\"\n    Fetch a limited number of results from the database.\n    Repeated calls using the same inputs to *any* of the fetch methods of this\n    block will skip executing the operation again, and instead,\n    return the next set of results from the previous execution,\n    until the reset_cursors method is called.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        size: The number of results to return; if None or 0, uses the value of\n            `fetch_size` configured on the block.\n        cursor_type: The class of the cursor to use when creating a Snowflake cursor.\n        **execute_kwargs: Additional options to pass to `cursor.execute_async`.\n\n    Returns:\n        A list of tuples containing the data returned by the database,\n            where each row is a tuple and each column is a value in the tuple.\n\n    Examples:\n        Repeatedly fetch two rows from the database where address is Highway 42.\n        ```python\n        from prefect_snowflake.database import SnowflakeConnector\n\n        with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n            conn.execute(\n                \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n            )\n            conn.execute_many(\n                \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n                seq_of_parameters=[\n                    {\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Unknown\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Me\", \"address\": \"Highway 42\"},\n                ],\n            )\n            result = conn.fetch_many(\n                \"SELECT * FROM customers WHERE address = %(address)s\",\n                parameters={\"address\": \"Highway 42\"},\n                size=2\n            )\n            print(result)  # Marvin, Ford\n            result = conn.fetch_many(\n                \"SELECT * FROM customers WHERE address = %(address)s\",\n                parameters={\"address\": \"Highway 42\"},\n                size=2\n            )\n            print(result)  # Unknown, Me\n        ```\n    \"\"\"  # noqa\n    inputs = dict(\n        command=operation,\n        params=parameters,\n        **execute_kwargs,\n    )\n    new, cursor = self._get_cursor(inputs, cursor_type)\n    if new:\n        await self._execute_async(cursor, inputs)\n    size = size or self.fetch_size\n    self.logger.debug(f\"Preparing to fetch {size} rows.\")\n    result = await run_sync_in_worker_thread(cursor.fetchmany, size=size)\n    return result\n
        "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.SnowflakeConnector.fetch_one","title":"fetch_one async","text":"

        Fetch a single result from the database. Repeated calls using the same inputs to any of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset_cursors method is called.

Parameters:

operation (str): The SQL query or other operation to be executed. (required)
parameters (Optional[Dict[str, Any]]): The parameters for the operation. (default: None)
cursor_type (Type[SnowflakeCursor]): The class of the cursor to use when creating a Snowflake cursor. (default: SnowflakeCursor)
**execute_kwargs (Any): Additional options to pass to cursor.execute_async. (default: {})

Returns:

Tuple[Any]: A tuple containing the data returned by the database, where each row is a tuple and each column is a value in the tuple.

        Examples:

        Fetch one row from the database where address is Space.

        from prefect_snowflake.database import SnowflakeConnector\n\nwith SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n    conn.execute(\n        \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n    )\n    conn.execute_many(\n        \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n        seq_of_parameters=[\n            {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n            {\"name\": \"Unknown\", \"address\": \"Space\"},\n            {\"name\": \"Me\", \"address\": \"Myway 88\"},\n        ],\n    )\n    result = conn.fetch_one(\n        \"SELECT * FROM customers WHERE address = %(address)s\",\n        parameters={\"address\": \"Space\"}\n    )\n    print(result)\n

        Source code in prefect_snowflake/database.py
        @sync_compatible\nasync def fetch_one(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n    **execute_kwargs: Any,\n) -> Tuple[Any]:\n    \"\"\"\n    Fetch a single result from the database.\n    Repeated calls using the same inputs to *any* of the fetch methods of this\n    block will skip executing the operation again, and instead,\n    return the next set of results from the previous execution,\n    until the reset_cursors method is called.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        cursor_type: The class of the cursor to use when creating a Snowflake cursor.\n        **execute_kwargs: Additional options to pass to `cursor.execute_async`.\n\n    Returns:\n        A tuple containing the data returned by the database,\n            where each row is a tuple and each column is a value in the tuple.\n\n    Examples:\n        Fetch one row from the database where address is Space.\n        ```python\n        from prefect_snowflake.database import SnowflakeConnector\n\n        with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n            conn.execute(\n                \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n            )\n            conn.execute_many(\n                \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n                seq_of_parameters=[\n                    {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Unknown\", \"address\": \"Space\"},\n                    {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                ],\n            )\n            result = conn.fetch_one(\n                \"SELECT * FROM customers WHERE address = %(address)s\",\n                parameters={\"address\": \"Space\"}\n            )\n            print(result)\n        ```\n    \"\"\"  # noqa\n    inputs = dict(\n        command=operation,\n        params=parameters,\n        **execute_kwargs,\n    )\n    new, cursor = self._get_cursor(inputs, cursor_type=cursor_type)\n    if new:\n        await self._execute_async(cursor, inputs)\n    self.logger.debug(\"Preparing to fetch a row.\")\n    result = await run_sync_in_worker_thread(cursor.fetchone)\n    return result\n
        "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.SnowflakeConnector.get_connection","title":"get_connection","text":"

Returns an authenticated connection that can be used to query Snowflake databases.

Parameters:

**connect_kwargs (Any): Additional arguments to pass to snowflake.connector.connect. (default: {})

Returns:

SnowflakeConnection: The authenticated SnowflakeConnection.

        Examples:

        from prefect_snowflake.credentials import SnowflakeCredentials\nfrom prefect_snowflake.database import SnowflakeConnector\n\nsnowflake_credentials = SnowflakeCredentials(\n    account=\"account\",\n    user=\"user\",\n    password=\"password\",\n)\nsnowflake_connector = SnowflakeConnector(\n    database=\"database\",\n    warehouse=\"warehouse\",\n    schema=\"schema\",\n    credentials=snowflake_credentials\n)\nwith snowflake_connector.get_connection() as connection:\n    ...\n
        Source code in prefect_snowflake/database.py
        def get_connection(self, **connect_kwargs: Any) -> SnowflakeConnection:\n    \"\"\"\n    Returns an authenticated connection that can be\n    used to query from Snowflake databases.\n\n    Args:\n        **connect_kwargs: Additional arguments to pass to\n            `snowflake.connector.connect`.\n\n    Returns:\n        The authenticated SnowflakeConnection.\n\n    Examples:\n        ```python\n        from prefect_snowflake.credentials import SnowflakeCredentials\n        from prefect_snowflake.database import SnowflakeConnector\n\n        snowflake_credentials = SnowflakeCredentials(\n            account=\"account\",\n            user=\"user\",\n            password=\"password\",\n        )\n        snowflake_connector = SnowflakeConnector(\n            database=\"database\",\n            warehouse=\"warehouse\",\n            schema=\"schema\",\n            credentials=snowflake_credentials\n        )\n        with snowflake_connector.get_connection() as connection:\n            ...\n        ```\n    \"\"\"\n    if self._connection is not None:\n        return self._connection\n\n    connect_params = {\n        \"database\": self.database,\n        \"warehouse\": self.warehouse,\n        \"schema\": self.schema_,\n    }\n    connection = self.credentials.get_client(**connect_kwargs, **connect_params)\n    self._connection = connection\n    self.logger.info(\"Started a new connection to Snowflake.\")\n    return connection\n
        "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.SnowflakeConnector.reset_cursors","title":"reset_cursors","text":"

        Tries to close all opened cursors.

        Examples:

        Reset the cursors to refresh cursor position.

        from prefect_snowflake.database import SnowflakeConnector\n\nwith SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n    conn.execute(\n        \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n    )\n    conn.execute_many(\n        \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n        seq_of_parameters=[\n            {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n            {\"name\": \"Unknown\", \"address\": \"Space\"},\n            {\"name\": \"Me\", \"address\": \"Myway 88\"},\n        ],\n    )\n    print(conn.fetch_one(\"SELECT * FROM customers\"))  # Ford\n    conn.reset_cursors()\n    print(conn.fetch_one(\"SELECT * FROM customers\"))  # should be Ford again\n

        Source code in prefect_snowflake/database.py
        def reset_cursors(self) -> None:\n    \"\"\"\n    Tries to close all opened cursors.\n\n    Examples:\n        Reset the cursors to refresh cursor position.\n        ```python\n        from prefect_snowflake.database import SnowflakeConnector\n\n        with SnowflakeConnector.load(\"BLOCK_NAME\") as conn:\n            conn.execute(\n                \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n            )\n            conn.execute_many(\n                \"INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);\",\n                seq_of_parameters=[\n                    {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Unknown\", \"address\": \"Space\"},\n                    {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                ],\n            )\n            print(conn.fetch_one(\"SELECT * FROM customers\"))  # Ford\n            conn.reset_cursors()\n            print(conn.fetch_one(\"SELECT * FROM customers\"))  # should be Ford again\n        ```\n    \"\"\"  # noqa\n    if not self._unique_cursors:\n        self.logger.info(\"There were no cursors to reset.\")\n        return\n\n    input_hashes = tuple(self._unique_cursors.keys())\n    for input_hash in input_hashes:\n        cursor = self._unique_cursors.pop(input_hash)\n        try:\n            cursor.close()\n        except Exception as exc:\n            self.logger.warning(\n                f\"Failed to close cursor for input hash {input_hash!r}: {exc}\"\n            )\n    self.logger.info(\"Successfully reset the cursors.\")\n
        "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.snowflake_multiquery","title":"snowflake_multiquery async","text":"

        Executes multiple queries against a Snowflake database in a shared session. Allows execution in a transaction.

Parameters:

queries (List[str]): The list of queries to execute against the database. (required)
params (Union[Tuple[Any], Dict[str, Any]]): The params to replace the placeholders in the queries. (default: None)
snowflake_connector (SnowflakeConnector): The Snowflake connector block to use to authenticate and connect. (required)
cursor_type (Type[SnowflakeCursor]): The type of database cursor to use for the queries. (default: SnowflakeCursor)
as_transaction (bool): If True, queries are executed in a transaction. (default: False)
return_transaction_control_results (bool): Determines whether the results of the queries controlling the transaction (BEGIN/COMMIT) should be returned. (default: False)
poll_frequency_seconds (int): Number of seconds to wait between checks for run completion. (default: 1)

Returns:

List[List[Tuple[Any]]]: A list of the outputs of response.fetchall() for each query.

        Examples:

        Query Snowflake table with the ID value parameterized.

        from prefect import flow\nfrom prefect_snowflake.credentials import SnowflakeCredentials\nfrom prefect_snowflake.database import SnowflakeConnector, snowflake_multiquery\n\n\n@flow\ndef snowflake_multiquery_flow():\n    snowflake_credentials = SnowflakeCredentials(\n        account=\"account\",\n        user=\"user\",\n        password=\"password\",\n    )\n    snowflake_connector = SnowflakeConnector(\n        database=\"database\",\n        warehouse=\"warehouse\",\n        schema=\"schema\",\n        credentials=snowflake_credentials\n    )\n    result = snowflake_multiquery(\n        [\"SELECT * FROM table WHERE id=%{id_param}s LIMIT 8;\", \"SELECT 1,2\"],\n        snowflake_connector,\n        params={\"id_param\": 1},\n        as_transaction=True\n    )\n    return result\n\nsnowflake_multiquery_flow()\n

        Source code in prefect_snowflake/database.py
        @task\nasync def snowflake_multiquery(\n    queries: List[str],\n    snowflake_connector: SnowflakeConnector,\n    params: Union[Tuple[Any], Dict[str, Any]] = None,\n    cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n    as_transaction: bool = False,\n    return_transaction_control_results: bool = False,\n    poll_frequency_seconds: int = 1,\n) -> List[List[Tuple[Any]]]:\n    \"\"\"\n    Executes multiple queries against a Snowflake database in a shared session.\n    Allows execution in a transaction.\n\n    Args:\n        queries: The list of queries to execute against the database.\n        params: The params to replace the placeholders in the query.\n        snowflake_connector: The credentials to use to authenticate.\n        cursor_type: The type of database cursor to use for the query.\n        as_transaction: If True, queries are executed in a transaction.\n        return_transaction_control_results: Determines if the results of queries\n            controlling the transaction (BEGIN/COMMIT) should be returned.\n        poll_frequency_seconds: Number of seconds to wait in between checks for\n            run completion.\n\n    Returns:\n        List of the outputs of `response.fetchall()` for each query.\n\n    Examples:\n        Query Snowflake table with the ID value parameterized.\n        ```python\n        from prefect import flow\n        from prefect_snowflake.credentials import SnowflakeCredentials\n        from prefect_snowflake.database import SnowflakeConnector, snowflake_multiquery\n\n\n        @flow\n        def snowflake_multiquery_flow():\n            snowflake_credentials = SnowflakeCredentials(\n                account=\"account\",\n                user=\"user\",\n                password=\"password\",\n            )\n            snowflake_connector = SnowflakeConnector(\n                database=\"database\",\n                warehouse=\"warehouse\",\n                schema=\"schema\",\n                credentials=snowflake_credentials\n            )\n            result = snowflake_multiquery(\n                [\"SELECT * FROM table WHERE id=%{id_param}s LIMIT 8;\", \"SELECT 1,2\"],\n                snowflake_connector,\n                params={\"id_param\": 1},\n                as_transaction=True\n            )\n            return result\n\n        snowflake_multiquery_flow()\n        ```\n    \"\"\"\n    with snowflake_connector.get_connection() as connection:\n        if as_transaction:\n            queries.insert(0, BEGIN_TRANSACTION_STATEMENT)\n            queries.append(END_TRANSACTION_STATEMENT)\n\n        with connection.cursor(cursor_type) as cursor:\n            results = []\n            for query in queries:\n                response = cursor.execute_async(query, params=params)\n                query_id = response[\"queryId\"]\n                while connection.is_still_running(\n                    connection.get_query_status_throw_if_error(query_id)\n                ):\n                    await asyncio.sleep(poll_frequency_seconds)\n                cursor.get_results_from_sfqid(query_id)\n                result = cursor.fetchall()\n                results.append(result)\n\n    # cut off results from BEGIN/COMMIT queries\n    if as_transaction and not return_transaction_control_results:\n        return results[1:-1]\n    else:\n        return results\n
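The example above discards the results of the implicit BEGIN/COMMIT statements; a hedged sketch of keeping them with return_transaction_control_results (the block name and queries are placeholders):

```python
from prefect import flow
from prefect_snowflake.database import SnowflakeConnector, snowflake_multiquery


@flow
def multiquery_keep_transaction_results():
    snowflake_connector = SnowflakeConnector.load("BLOCK_NAME")  # hypothetical block name
    return snowflake_multiquery(
        ["SELECT 1;", "SELECT 2;"],
        snowflake_connector,
        as_transaction=True,
        # Also return the outputs of the implicit BEGIN/COMMIT statements.
        return_transaction_control_results=True,
    )


multiquery_keep_transaction_results()
```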
        "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.snowflake_query","title":"snowflake_query async","text":"

        Executes a query against a Snowflake database.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | query | str | The query to execute against the database. | required |
        | params | Union[Tuple[Any], Dict[str, Any]] | The params to replace the placeholders in the query. | None |
        | snowflake_connector | SnowflakeConnector | The credentials to use to authenticate. | required |
        | cursor_type | Type[SnowflakeCursor] | The type of database cursor to use for the query. | SnowflakeCursor |
        | poll_frequency_seconds | int | Number of seconds to wait in between checks for run completion. | 1 |

        Returns:

        | Type | Description |
        | --- | --- |
        | List[Tuple[Any]] | The output of response.fetchall(). |

        Examples:

        Query Snowflake table with the ID value parameterized.

        from prefect import flow\nfrom prefect_snowflake.credentials import SnowflakeCredentials\nfrom prefect_snowflake.database import SnowflakeConnector, snowflake_query\n\n\n@flow\ndef snowflake_query_flow():\n    snowflake_credentials = SnowflakeCredentials(\n        account=\"account\",\n        user=\"user\",\n        password=\"password\",\n    )\n    snowflake_connector = SnowflakeConnector(\n        database=\"database\",\n        warehouse=\"warehouse\",\n        schema=\"schema\",\n        credentials=snowflake_credentials\n    )\n    result = snowflake_query(\n        \"SELECT * FROM table WHERE id=%{id_param}s LIMIT 8;\",\n        snowflake_connector,\n        params={\"id_param\": 1}\n    )\n    return result\n\nsnowflake_query_flow()\n

        Source code in prefect_snowflake/database.py
        @task\nasync def snowflake_query(\n    query: str,\n    snowflake_connector: SnowflakeConnector,\n    params: Union[Tuple[Any], Dict[str, Any]] = None,\n    cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n    poll_frequency_seconds: int = 1,\n) -> List[Tuple[Any]]:\n    \"\"\"\n    Executes a query against a Snowflake database.\n\n    Args:\n        query: The query to execute against the database.\n        params: The params to replace the placeholders in the query.\n        snowflake_connector: The credentials to use to authenticate.\n        cursor_type: The type of database cursor to use for the query.\n        poll_frequency_seconds: Number of seconds to wait in between checks for\n            run completion.\n\n    Returns:\n        The output of `response.fetchall()`.\n\n    Examples:\n        Query Snowflake table with the ID value parameterized.\n        ```python\n        from prefect import flow\n        from prefect_snowflake.credentials import SnowflakeCredentials\n        from prefect_snowflake.database import SnowflakeConnector, snowflake_query\n\n\n        @flow\n        def snowflake_query_flow():\n            snowflake_credentials = SnowflakeCredentials(\n                account=\"account\",\n                user=\"user\",\n                password=\"password\",\n            )\n            snowflake_connector = SnowflakeConnector(\n                database=\"database\",\n                warehouse=\"warehouse\",\n                schema=\"schema\",\n                credentials=snowflake_credentials\n            )\n            result = snowflake_query(\n                \"SELECT * FROM table WHERE id=%{id_param}s LIMIT 8;\",\n                snowflake_connector,\n                params={\"id_param\": 1}\n            )\n            return result\n\n        snowflake_query_flow()\n        ```\n    \"\"\"\n    # context manager automatically rolls back failed transactions and closes\n    with snowflake_connector.get_connection() as connection:\n        with connection.cursor(cursor_type) as cursor:\n            response = cursor.execute_async(query, params=params)\n            query_id = response[\"queryId\"]\n            while connection.is_still_running(\n                connection.get_query_status_throw_if_error(query_id)\n            ):\n                await asyncio.sleep(poll_frequency_seconds)\n            cursor.get_results_from_sfqid(query_id)\n            result = cursor.fetchall()\n    return result\n
        "},{"location":"integrations/prefect-snowflake/database/#prefect_snowflake.database.snowflake_query_sync","title":"snowflake_query_sync async","text":"

        Executes a query in sync mode against a Snowflake database.

        Parameters:

        | Name | Type | Description | Default |
        | --- | --- | --- | --- |
        | query | str | The query to execute against the database. | required |
        | params | Union[Tuple[Any], Dict[str, Any]] | The params to replace the placeholders in the query. | None |
        | snowflake_connector | SnowflakeConnector | The credentials to use to authenticate. | required |
        | cursor_type | Type[SnowflakeCursor] | The type of database cursor to use for the query. | SnowflakeCursor |

        Returns:

        | Type | Description |
        | --- | --- |
        | List[Tuple[Any]] | The output of response.fetchall(). |

        Examples:

        Execute a put statement.

        from prefect import flow\nfrom prefect_snowflake.credentials import SnowflakeCredentials\nfrom prefect_snowflake.database import SnowflakeConnector, snowflake_query_sync\n\n\n@flow\ndef snowflake_query_sync_flow():\n    snowflake_credentials = SnowflakeCredentials(\n        account=\"account\",\n        user=\"user\",\n        password=\"password\",\n    )\n    snowflake_connector = SnowflakeConnector(\n        database=\"database\",\n        warehouse=\"warehouse\",\n        schema=\"schema\",\n        credentials=snowflake_credentials\n    )\n    result = snowflake_query_sync(\n        \"put file://a_file.csv @mystage;\",\n        snowflake_connector,\n    )\n    return result\n\nsnowflake_query_sync_flow()\n

        Source code in prefect_snowflake/database.py
        @task\nasync def snowflake_query_sync(\n    query: str,\n    snowflake_connector: SnowflakeConnector,\n    params: Union[Tuple[Any], Dict[str, Any]] = None,\n    cursor_type: Type[SnowflakeCursor] = SnowflakeCursor,\n) -> List[Tuple[Any]]:\n    \"\"\"\n    Executes a query in sync mode against a Snowflake database.\n\n    Args:\n        query: The query to execute against the database.\n        params: The params to replace the placeholders in the query.\n        snowflake_connector: The credentials to use to authenticate.\n        cursor_type: The type of database cursor to use for the query.\n\n    Returns:\n        The output of `response.fetchall()`.\n\n    Examples:\n        Execute a put statement.\n        ```python\n        from prefect import flow\n        from prefect_snowflake.credentials import SnowflakeCredentials\n        from prefect_snowflake.database import SnowflakeConnector, snowflake_query\n\n\n        @flow\n        def snowflake_query_sync_flow():\n            snowflake_credentials = SnowflakeCredentials(\n                account=\"account\",\n                user=\"user\",\n                password=\"password\",\n            )\n            snowflake_connector = SnowflakeConnector(\n                database=\"database\",\n                warehouse=\"warehouse\",\n                schema=\"schema\",\n                credentials=snowflake_credentials\n            )\n            result = snowflake_query_sync(\n                \"put file://a_file.csv @mystage;\",\n                snowflake_connector,\n            )\n            return result\n\n        snowflake_query_sync_flow()\n        ```\n    \"\"\"\n    # context manager automatically rolls back failed transactions and closes\n    with snowflake_connector.get_connection() as connection:\n        with connection.cursor(cursor_type) as cursor:\n            cursor.execute(query, params=params)\n            result = cursor.fetchall()\n    return result\n
        "},{"location":"integrations/prefect-sqlalchemy/","title":"prefect-sqlalchemy","text":"

        Visit the full docs here to see additional examples and the API reference.

        The prefect-sqlalchemy collection makes it easy to connect to a database in your Prefect flows. Check out the examples below to get started!

        "},{"location":"integrations/prefect-sqlalchemy/#getting-started","title":"Getting started","text":""},{"location":"integrations/prefect-sqlalchemy/#integrate-with-prefect-flows","title":"Integrate with Prefect flows","text":"

        Prefect and SQLAlchemy are a data powerhouse duo. With Prefect, your workflows are orchestratable and observable, and with SQLAlchemy, your databases are a snap to handle! Get ready to experience the ultimate data \"flow-chemistry\"!

        To set up a table, use the execute and execute_many methods. Then, use the fetch_many method to retrieve data in a stream until there's no more data.

        By using the SqlAlchemyConnector as a context manager, you can make sure that the SQLAlchemy engine and any connected resources are closed properly after you're done with them.

        Be sure to install prefect-sqlalchemy and save your credentials to a Prefect block to run the examples below!

        Async support

        SqlAlchemyConnector also supports async workflows! Just be sure to save, load, and use an async driver as in the example below.

        from prefect_sqlalchemy import SqlAlchemyConnector, ConnectionComponents, AsyncDriver\n\nconnector = SqlAlchemyConnector(\n    connection_info=ConnectionComponents(\n        driver=AsyncDriver.SQLITE_AIOSQLITE,\n        database=\"DATABASE-PLACEHOLDER.db\"\n    )\n)\n\nconnector.save(\"BLOCK_NAME-PLACEHOLDER\")\n
        SyncAsync
        from prefect import flow, task\nfrom prefect_sqlalchemy import SqlAlchemyConnector\n\n\n@task\ndef setup_table(block_name: str) -> None:\n    with SqlAlchemyConnector.load(block_name) as connector:\n        connector.execute(\n            \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n        )\n        connector.execute(\n            \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n            parameters={\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n        )\n        connector.execute_many(\n            \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n            seq_of_parameters=[\n                {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                {\"name\": \"Unknown\", \"address\": \"Highway 42\"},\n            ],\n        )\n\n@task\ndef fetch_data(block_name: str) -> list:\n    all_rows = []\n    with SqlAlchemyConnector.load(block_name) as connector:\n        while True:\n            # Repeated fetch* calls using the same operation will\n            # skip re-executing and instead return the next set of results\n            new_rows = connector.fetch_many(\"SELECT * FROM customers\", size=2)\n            if len(new_rows) == 0:\n                break\n            all_rows.append(new_rows)\n    return all_rows\n\n@flow\ndef sqlalchemy_flow(block_name: str) -> list:\n    setup_table(block_name)\n    all_rows = fetch_data(block_name)\n    return all_rows\n\n\nsqlalchemy_flow(\"BLOCK-NAME-PLACEHOLDER\")\n
        from prefect import flow, task\nfrom prefect_sqlalchemy import SqlAlchemyConnector\nimport asyncio\n\n@task\nasync def setup_table(block_name: str) -> None:\n    async with await SqlAlchemyConnector.load(block_name) as connector:\n        await connector.execute(\n            \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\"\n        )\n        await connector.execute(\n            \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n            parameters={\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n        )\n        await connector.execute_many(\n            \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n            seq_of_parameters=[\n                {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                {\"name\": \"Unknown\", \"address\": \"Highway 42\"},\n            ],\n        )\n\n@task\nasync def fetch_data(block_name: str) -> list:\n    all_rows = []\n    async with await SqlAlchemyConnector.load(block_name) as connector:\n        while True:\n            # Repeated fetch* calls using the same operation will\n            # skip re-executing and instead return the next set of results\n            new_rows = await connector.fetch_many(\"SELECT * FROM customers\", size=2)\n            if len(new_rows) == 0:\n                break\n            all_rows.append(new_rows)\n    return all_rows\n\n@flow\nasync def sqlalchemy_flow(block_name: str) -> list:\n    await setup_table(block_name)\n    all_rows = await fetch_data(block_name)\n    return all_rows\n\n\nasyncio.run(sqlalchemy_flow(\"BLOCK-NAME-PLACEHOLDER\"))\n
        "},{"location":"integrations/prefect-sqlalchemy/#resources","title":"Resources","text":"

        For more tips on how to use tasks and flows provided in a Prefect integration library, check out the Prefect docs on using integrations.

        "},{"location":"integrations/prefect-sqlalchemy/#installation","title":"Installation","text":"

        Install prefect-sqlalchemy with pip:

        pip install prefect-sqlalchemy\n

        Requires an installation of Python 3.8 or higher.

        We recommend using a Python virtual environment manager such as pipenv, conda, or virtualenv.

        The tasks in this library are designed to work with Prefect 2. For more information about how to use Prefect, please refer to the Prefect documentation.

        "},{"location":"integrations/prefect-sqlalchemy/#saving-credentials-to-a-block","title":"Saving credentials to a block","text":"

        To use the load method on Blocks, you must have a block document saved through code or saved through the UI.

        Below is a walkthrough on saving block documents through code; simply create a short script, replacing the placeholders.

        from prefect_sqlalchemy import SqlAlchemyConnector, ConnectionComponents, SyncDriver\n\nconnector = SqlAlchemyConnector(\n    connection_info=ConnectionComponents(\n        driver=SyncDriver.POSTGRESQL_PSYCOPG2,\n        username=\"USERNAME-PLACEHOLDER\",\n        password=\"PASSWORD-PLACEHOLDER\",\n        host=\"localhost\",\n        port=5432,\n        database=\"DATABASE-PLACEHOLDER\",\n    )\n)\n\nconnector.save(\"BLOCK_NAME-PLACEHOLDER\")\n

        Congrats! You can now easily load the saved block, which holds your credentials:

        from prefect_sqlalchemy import SqlAlchemyConnector\n\nSqlAlchemyConnector.load(\"BLOCK_NAME-PLACEHOLDER\")\n

        The required keywords depend upon the desired driver. For example, SQLite requires only the driver and database arguments:

        from prefect_sqlalchemy import SqlAlchemyConnector, ConnectionComponents, SyncDriver\n\nconnector = SqlAlchemyConnector(\n    connection_info=ConnectionComponents(\n        driver=SyncDriver.SQLITE_PYSQLITE,\n        database=\"DATABASE-PLACEHOLDER.db\"\n    )\n)\n\nconnector.save(\"BLOCK_NAME-PLACEHOLDER\")\n

        Registering blocks

        Register blocks in this module to view and edit them on Prefect Cloud:

        prefect block register -m prefect_sqlalchemy\n
        "},{"location":"integrations/prefect-sqlalchemy/#feedback","title":"Feedback","text":"

        If you encounter any bugs while using prefect-sqlalchemy, please open an issue in the prefect repository.

        If you have any questions or issues while using prefect-sqlalchemy, you can find help in the Prefect Community Slack.

        "},{"location":"integrations/prefect-sqlalchemy/#contributing","title":"Contributing","text":"

        If you'd like to contribute a fix or a feature to prefect-sqlalchemy, please propose changes through a pull request from a fork of the repository.

        Here are the steps:

        1. Fork the repository
        2. Clone the forked repository
        3. Install the repository and its dependencies:
          pip install -e \".[dev]\"\n
        4. Make desired changes
        5. Add tests
        6. Install pre-commit to perform quality checks prior to commit:
          pre-commit install\n
        7. git commit, git push, and create a pull request
        "},{"location":"integrations/prefect-sqlalchemy/credentials/","title":"Credentials","text":""},{"location":"integrations/prefect-sqlalchemy/credentials/#prefect_sqlalchemy.credentials","title":"prefect_sqlalchemy.credentials","text":"

        Credential classes used to perform authenticated interactions with SQLAlchemy

        "},{"location":"integrations/prefect-sqlalchemy/credentials/#prefect_sqlalchemy.credentials.AsyncDriver","title":"AsyncDriver","text":"

        Bases: Enum

        Known dialects with their corresponding async drivers.

        Attributes:

        | Name | Type | Description |
        | --- | --- | --- |
        | POSTGRESQL_ASYNCPG | Enum | postgresql+asyncpg |
        | SQLITE_AIOSQLITE | Enum | sqlite+aiosqlite |
        | MYSQL_ASYNCMY | Enum | mysql+asyncmy |
        | MYSQL_AIOMYSQL | Enum | mysql+aiomysql |

        Source code in prefect_sqlalchemy/credentials.py
        class AsyncDriver(Enum):\n    \"\"\"\n    Known dialects with their corresponding async drivers.\n\n    Attributes:\n        POSTGRESQL_ASYNCPG (Enum): [postgresql+asyncpg](https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.asyncpg)\n\n        SQLITE_AIOSQLITE (Enum): [sqlite+aiosqlite](https://docs.sqlalchemy.org/en/14/dialects/sqlite.html#module-sqlalchemy.dialects.sqlite.aiosqlite)\n\n        MYSQL_ASYNCMY (Enum): [mysql+asyncmy](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.asyncmy)\n        MYSQL_AIOMYSQL (Enum): [mysql+aiomysql](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.aiomysql)\n    \"\"\"  # noqa\n\n    POSTGRESQL_ASYNCPG = \"postgresql+asyncpg\"\n\n    SQLITE_AIOSQLITE = \"sqlite+aiosqlite\"\n\n    MYSQL_ASYNCMY = \"mysql+asyncmy\"\n    MYSQL_AIOMYSQL = \"mysql+aiomysql\"\n
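        A minimal usage sketch (not taken from the collection's docs): each enum member's value is the \"dialect+driver\" string that SQLAlchemy expects in an engine URL, so a driver may be supplied either as the enum member or as that raw string.

        from prefect_sqlalchemy import AsyncDriver, SyncDriver\n\n# Each member's value is the \"dialect+driver\" string used in the engine URL.\nprint(AsyncDriver.SQLITE_AIOSQLITE.value)    # sqlite+aiosqlite\nprint(SyncDriver.POSTGRESQL_PSYCOPG2.value)  # postgresql+psycopg2\n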
        "},{"location":"integrations/prefect-sqlalchemy/credentials/#prefect_sqlalchemy.credentials.ConnectionComponents","title":"ConnectionComponents","text":"

        Bases: BaseModel

        Parameters to use to create a SQLAlchemy engine URL.

        Attributes:

        | Name | Type | Description |
        | --- | --- | --- |
        | driver | Union[AsyncDriver, SyncDriver, str] | The driver name to use. |
        | database | str | The name of the database to use. |
        | username | Optional[str] | The user name used to authenticate. |
        | password | Optional[SecretStr] | The password used to authenticate. |
        | host | Optional[str] | The host address of the database. |
        | port | Optional[str] | The port to connect to the database. |
        | query | Optional[Dict[str, str]] | A dictionary of string keys to string values to be passed to the dialect and/or the DBAPI upon connect. |

        Source code in prefect_sqlalchemy/credentials.py
        class ConnectionComponents(BaseModel):\n    \"\"\"\n    Parameters to use to create a SQLAlchemy engine URL.\n\n    Attributes:\n        driver: The driver name to use.\n        database: The name of the database to use.\n        username: The user name used to authenticate.\n        password: The password used to authenticate.\n        host: The host address of the database.\n        port: The port to connect to the database.\n        query: A dictionary of string keys to string values to be passed to the dialect\n            and/or the DBAPI upon connect.\n    \"\"\"\n\n    driver: Union[AsyncDriver, SyncDriver, str] = Field(\n        default=..., description=\"The driver name to use.\"\n    )\n    database: str = Field(default=..., description=\"The name of the database to use.\")\n    username: Optional[str] = Field(\n        default=None, description=\"The user name used to authenticate.\"\n    )\n    password: Optional[SecretStr] = Field(\n        default=None, description=\"The password used to authenticate.\"\n    )\n    host: Optional[str] = Field(\n        default=None, description=\"The host address of the database.\"\n    )\n    port: Optional[str] = Field(\n        default=None, description=\"The port to connect to the database.\"\n    )\n    query: Optional[Dict[str, str]] = Field(\n        default=None,\n        description=(\n            \"A dictionary of string keys to string values to be passed to the dialect \"\n            \"and/or the DBAPI upon connect. To specify non-string parameters to a \"\n            \"Python DBAPI directly, use connect_args.\"\n        ),\n    )\n\n    def create_url(self) -> URL:\n        \"\"\"\n        Create a fully formed connection URL.\n\n        Returns:\n            The SQLAlchemy engine URL.\n        \"\"\"\n        driver = self.driver\n        drivername = driver.value if isinstance(driver, Enum) else driver\n        password = self.password.get_secret_value() if self.password else None\n        url_params = dict(\n            drivername=drivername,\n            username=self.username,\n            password=password,\n            database=self.database,\n            host=self.host,\n            port=self.port,\n            query=self.query,\n        )\n        return URL.create(\n            **{\n                url_key: url_param\n                for url_key, url_param in url_params.items()\n                if url_param is not None\n            }\n        )\n
        "},{"location":"integrations/prefect-sqlalchemy/credentials/#prefect_sqlalchemy.credentials.ConnectionComponents.create_url","title":"create_url","text":"

        Create a fully formed connection URL.

        Returns:

        | Type | Description |
        | --- | --- |
        | URL | The SQLAlchemy engine URL. |
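        A short, illustrative sketch (the placeholder values below are not from the upstream docs): build a URL from components, including a query dictionary passed through to the dialect/DBAPI, then render it with create_url.

        from prefect_sqlalchemy import ConnectionComponents, SyncDriver\n\n# Placeholder connection details; create_url() skips any components left as None.\ncomponents = ConnectionComponents(\n    driver=SyncDriver.POSTGRESQL_PSYCOPG2,\n    username=\"USERNAME-PLACEHOLDER\",\n    password=\"PASSWORD-PLACEHOLDER\",\n    host=\"localhost\",\n    port=5432,\n    database=\"DATABASE-PLACEHOLDER\",\n    query={\"sslmode\": \"require\"},  # example dialect/DBAPI option\n)\n\nurl = components.create_url()  # a sqlalchemy.engine.URL\nprint(url)\n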

        Source code in prefect_sqlalchemy/credentials.py
        def create_url(self) -> URL:\n    \"\"\"\n    Create a fully formed connection URL.\n\n    Returns:\n        The SQLAlchemy engine URL.\n    \"\"\"\n    driver = self.driver\n    drivername = driver.value if isinstance(driver, Enum) else driver\n    password = self.password.get_secret_value() if self.password else None\n    url_params = dict(\n        drivername=drivername,\n        username=self.username,\n        password=password,\n        database=self.database,\n        host=self.host,\n        port=self.port,\n        query=self.query,\n    )\n    return URL.create(\n        **{\n            url_key: url_param\n            for url_key, url_param in url_params.items()\n            if url_param is not None\n        }\n    )\n
        "},{"location":"integrations/prefect-sqlalchemy/credentials/#prefect_sqlalchemy.credentials.DatabaseCredentials","title":"DatabaseCredentials","text":"

        Bases: Block

        Block used to manage authentication with a database.

        Attributes:

        | Name | Type | Description |
        | --- | --- | --- |
        | driver | Optional[Union[AsyncDriver, SyncDriver, str]] | The driver name, e.g. \"postgresql+asyncpg\". |
        | database | Optional[str] | The name of the database to use. |
        | username | Optional[str] | The user name used to authenticate. |
        | password | Optional[SecretStr] | The password used to authenticate. |
        | host | Optional[str] | The host address of the database. |
        | port | Optional[str] | The port to connect to the database. |
        | query | Optional[Dict[str, str]] | A dictionary of string keys to string values to be passed to the dialect and/or the DBAPI upon connect. To specify non-string parameters to a Python DBAPI directly, use connect_args. |
        | url | Optional[AnyUrl] | Manually create and provide a URL to create the engine. This is useful for external dialects, e.g. Snowflake, because some params, such as \"warehouse\", are not directly supported in the vanilla sqlalchemy.engine.URL.create method; do not provide this alongside other URL params, as doing so will raise a ValueError. |
        | connect_args | Optional[Dict[str, Any]] | The options which will be passed directly to the DBAPI's connect() method as additional keyword arguments. |

        Example

        Load stored database credentials:

        from prefect_sqlalchemy import DatabaseCredentials\ndatabase_block = DatabaseCredentials.load(\"BLOCK_NAME\")\n

        Source code in prefect_sqlalchemy/credentials.py
        class DatabaseCredentials(Block):\n    \"\"\"\n    Block used to manage authentication with a database.\n\n    Attributes:\n        driver: The driver name, e.g. \"postgresql+asyncpg\"\n        database: The name of the database to use.\n        username: The user name used to authenticate.\n        password: The password used to authenticate.\n        host: The host address of the database.\n        port: The port to connect to the database.\n        query: A dictionary of string keys to string values to be passed to\n            the dialect and/or the DBAPI upon connect. To specify non-string\n            parameters to a Python DBAPI directly, use connect_args.\n        url: Manually create and provide a URL to create the engine,\n            this is useful for external dialects, e.g. Snowflake, because some\n            of the params, such as \"warehouse\", is not directly supported in\n            the vanilla `sqlalchemy.engine.URL.create` method; do not provide\n            this alongside with other URL params as it will raise a `ValueError`.\n        connect_args: The options which will be passed directly to the\n            DBAPI's connect() method as additional keyword arguments.\n\n    Example:\n        Load stored database credentials:\n        ```python\n        from prefect_sqlalchemy import DatabaseCredentials\n        database_block = DatabaseCredentials.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Database Credentials\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/fb3f4debabcda1c5a3aeea4f5b3f94c28845e23e-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-sqlalchemy/credentials/#prefect_sqlalchemy.credentials.DatabaseCredentials\"  # noqa\n\n    driver: Optional[Union[AsyncDriver, SyncDriver, str]] = Field(\n        default=None, description=\"The driver name to use.\"\n    )\n    username: Optional[str] = Field(\n        default=None, description=\"The user name used to authenticate.\"\n    )\n    password: Optional[SecretStr] = Field(\n        default=None, description=\"The password used to authenticate.\"\n    )\n    database: Optional[str] = Field(\n        default=None, description=\"The name of the database to use.\"\n    )\n    host: Optional[str] = Field(\n        default=None, description=\"The host address of the database.\"\n    )\n    port: Optional[str] = Field(\n        default=None, description=\"The port to connect to the database.\"\n    )\n    query: Optional[Dict[str, str]] = Field(\n        default=None,\n        description=(\n            \"A dictionary of string keys to string values to be passed to the dialect \"\n            \"and/or the DBAPI upon connect. To specify non-string parameters to a \"\n            \"Python DBAPI directly, use connect_args.\"\n        ),\n    )\n    url: Optional[AnyUrl] = Field(\n        default=None,\n        description=(\n            \"Manually create and provide a URL to create the engine, this is useful \"\n            \"for external dialects, e.g. 
Snowflake, because some of the params, \"\n            \"such as 'warehouse', is not directly supported in the vanilla \"\n            \"`sqlalchemy.engine.URL.create` method; do not provide this \"\n            \"alongside with other URL params as it will raise a `ValueError`.\"\n        ),\n    )\n    connect_args: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=(\n            \"The options which will be passed directly to the DBAPI's connect() \"\n            \"method as additional keyword arguments.\"\n        ),\n    )\n\n    def block_initialization(self):\n        \"\"\"\n        Initializes the engine.\n        \"\"\"\n        warnings.warn(\n            \"DatabaseCredentials is now deprecated and will be removed March 2023; \"\n            \"please use SqlAlchemyConnector instead.\",\n            DeprecationWarning,\n        )\n        if isinstance(self.driver, AsyncDriver):\n            drivername = self.driver.value\n            self._driver_is_async = True\n        elif isinstance(self.driver, SyncDriver):\n            drivername = self.driver.value\n            self._driver_is_async = False\n        else:\n            drivername = self.driver\n            self._driver_is_async = drivername in AsyncDriver._value2member_map_\n\n        url_params = dict(\n            drivername=drivername,\n            username=self.username,\n            password=self.password.get_secret_value() if self.password else None,\n            database=self.database,\n            host=self.host,\n            port=self.port,\n            query=self.query,\n        )\n        if not self.url:\n            required_url_keys = (\"drivername\", \"database\")\n            if not all(url_params[key] for key in required_url_keys):\n                required_url_keys = (\"driver\", \"database\")\n                raise ValueError(\n                    f\"If the `url` is not provided, \"\n                    f\"all of these URL params are required: \"\n                    f\"{required_url_keys}\"\n                )\n            self.rendered_url = URL.create(\n                **{\n                    url_key: url_param\n                    for url_key, url_param in url_params.items()\n                    if url_param is not None\n                }\n            )  # from params\n        else:\n            if any(val for val in url_params.values()):\n                raise ValueError(\n                    f\"The `url` should not be provided \"\n                    f\"alongside any of these URL params: \"\n                    f\"{url_params.keys()}\"\n                )\n            self.rendered_url = make_url(str(self.url))\n\n    def get_engine(self) -> Union[\"Connection\", \"AsyncConnection\"]:\n        \"\"\"\n        Returns an authenticated engine that can be\n        used to query from databases.\n\n        Returns:\n            The authenticated SQLAlchemy Connection / AsyncConnection.\n\n        Examples:\n            Create an asynchronous engine to PostgreSQL using URL params.\n            ```python\n            from prefect import flow\n            from prefect_sqlalchemy import DatabaseCredentials, AsyncDriver\n\n            @flow\n            def sqlalchemy_credentials_flow():\n                sqlalchemy_credentials = DatabaseCredentials(\n                    driver=AsyncDriver.POSTGRESQL_ASYNCPG,\n                    username=\"prefect\",\n                    password=\"prefect_password\",\n                    database=\"postgres\"\n                )\n                
print(sqlalchemy_credentials.get_engine())\n\n            sqlalchemy_credentials_flow()\n            ```\n\n            Create a synchronous engine to Snowflake using the `url` kwarg.\n            ```python\n            from prefect import flow\n            from prefect_sqlalchemy import DatabaseCredentials, AsyncDriver\n\n            @flow\n            def sqlalchemy_credentials_flow():\n                url = (\n                    \"snowflake://<user_login_name>:<password>\"\n                    \"@<account_identifier>/<database_name>\"\n                    \"?warehouse=<warehouse_name>\"\n                )\n                sqlalchemy_credentials = DatabaseCredentials(url=url)\n                print(sqlalchemy_credentials.get_engine())\n\n            sqlalchemy_credentials_flow()\n            ```\n        \"\"\"\n        engine_kwargs = dict(\n            url=self.rendered_url,\n            connect_args=self.connect_args or {},\n            poolclass=NullPool,\n        )\n        if self._driver_is_async:\n            engine = create_async_engine(**engine_kwargs)\n        else:\n            engine = create_engine(**engine_kwargs)\n        return engine\n\n    class Config:\n        \"\"\"Configuration of pydantic.\"\"\"\n\n        # Support serialization of the 'URL' type\n        arbitrary_types_allowed = True\n        json_encoders = {URL: lambda u: u.render_as_string()}\n\n    def dict(self, *args, **kwargs) -> Dict:\n        \"\"\"\n        Convert to a dictionary.\n        \"\"\"\n        # Support serialization of the 'URL' type\n        d = super().dict(*args, **kwargs)\n        d[\"rendered_url\"] = SecretStr(\n            self.rendered_url.render_as_string(hide_password=False)\n        )\n        return d\n
        "},{"location":"integrations/prefect-sqlalchemy/credentials/#prefect_sqlalchemy.credentials.DatabaseCredentials.Config","title":"Config","text":"

        Configuration of pydantic.

        Source code in prefect_sqlalchemy/credentials.py
        class Config:\n    \"\"\"Configuration of pydantic.\"\"\"\n\n    # Support serialization of the 'URL' type\n    arbitrary_types_allowed = True\n    json_encoders = {URL: lambda u: u.render_as_string()}\n
        "},{"location":"integrations/prefect-sqlalchemy/credentials/#prefect_sqlalchemy.credentials.DatabaseCredentials.block_initialization","title":"block_initialization","text":"

        Initializes the engine.

        Source code in prefect_sqlalchemy/credentials.py
        def block_initialization(self):\n    \"\"\"\n    Initializes the engine.\n    \"\"\"\n    warnings.warn(\n        \"DatabaseCredentials is now deprecated and will be removed March 2023; \"\n        \"please use SqlAlchemyConnector instead.\",\n        DeprecationWarning,\n    )\n    if isinstance(self.driver, AsyncDriver):\n        drivername = self.driver.value\n        self._driver_is_async = True\n    elif isinstance(self.driver, SyncDriver):\n        drivername = self.driver.value\n        self._driver_is_async = False\n    else:\n        drivername = self.driver\n        self._driver_is_async = drivername in AsyncDriver._value2member_map_\n\n    url_params = dict(\n        drivername=drivername,\n        username=self.username,\n        password=self.password.get_secret_value() if self.password else None,\n        database=self.database,\n        host=self.host,\n        port=self.port,\n        query=self.query,\n    )\n    if not self.url:\n        required_url_keys = (\"drivername\", \"database\")\n        if not all(url_params[key] for key in required_url_keys):\n            required_url_keys = (\"driver\", \"database\")\n            raise ValueError(\n                f\"If the `url` is not provided, \"\n                f\"all of these URL params are required: \"\n                f\"{required_url_keys}\"\n            )\n        self.rendered_url = URL.create(\n            **{\n                url_key: url_param\n                for url_key, url_param in url_params.items()\n                if url_param is not None\n            }\n        )  # from params\n    else:\n        if any(val for val in url_params.values()):\n            raise ValueError(\n                f\"The `url` should not be provided \"\n                f\"alongside any of these URL params: \"\n                f\"{url_params.keys()}\"\n            )\n        self.rendered_url = make_url(str(self.url))\n
        "},{"location":"integrations/prefect-sqlalchemy/credentials/#prefect_sqlalchemy.credentials.DatabaseCredentials.dict","title":"dict","text":"

        Convert to a dictionary.

        Source code in prefect_sqlalchemy/credentials.py
        def dict(self, *args, **kwargs) -> Dict:\n    \"\"\"\n    Convert to a dictionary.\n    \"\"\"\n    # Support serialization of the 'URL' type\n    d = super().dict(*args, **kwargs)\n    d[\"rendered_url\"] = SecretStr(\n        self.rendered_url.render_as_string(hide_password=False)\n    )\n    return d\n
        "},{"location":"integrations/prefect-sqlalchemy/credentials/#prefect_sqlalchemy.credentials.DatabaseCredentials.get_engine","title":"get_engine","text":"

        Returns an authenticated engine that can be used to query from databases.

        Returns:

        | Type | Description |
        | --- | --- |
        | Union[Connection, AsyncConnection] | The authenticated SQLAlchemy Connection / AsyncConnection. |

        Examples:

        Create an asynchronous engine to PostgreSQL using URL params.

        from prefect import flow\nfrom prefect_sqlalchemy import DatabaseCredentials, AsyncDriver\n\n@flow\ndef sqlalchemy_credentials_flow():\n    sqlalchemy_credentials = DatabaseCredentials(\n        driver=AsyncDriver.POSTGRESQL_ASYNCPG,\n        username=\"prefect\",\n        password=\"prefect_password\",\n        database=\"postgres\"\n    )\n    print(sqlalchemy_credentials.get_engine())\n\nsqlalchemy_credentials_flow()\n

        Create a synchronous engine to Snowflake using the url kwarg.

        from prefect import flow\nfrom prefect_sqlalchemy import DatabaseCredentials, AsyncDriver\n\n@flow\ndef sqlalchemy_credentials_flow():\n    url = (\n        \"snowflake://<user_login_name>:<password>\"\n        \"@<account_identifier>/<database_name>\"\n        \"?warehouse=<warehouse_name>\"\n    )\n    sqlalchemy_credentials = DatabaseCredentials(url=url)\n    print(sqlalchemy_credentials.get_engine())\n\nsqlalchemy_credentials_flow()\n

        Source code in prefect_sqlalchemy/credentials.py
        def get_engine(self) -> Union[\"Connection\", \"AsyncConnection\"]:\n    \"\"\"\n    Returns an authenticated engine that can be\n    used to query from databases.\n\n    Returns:\n        The authenticated SQLAlchemy Connection / AsyncConnection.\n\n    Examples:\n        Create an asynchronous engine to PostgreSQL using URL params.\n        ```python\n        from prefect import flow\n        from prefect_sqlalchemy import DatabaseCredentials, AsyncDriver\n\n        @flow\n        def sqlalchemy_credentials_flow():\n            sqlalchemy_credentials = DatabaseCredentials(\n                driver=AsyncDriver.POSTGRESQL_ASYNCPG,\n                username=\"prefect\",\n                password=\"prefect_password\",\n                database=\"postgres\"\n            )\n            print(sqlalchemy_credentials.get_engine())\n\n        sqlalchemy_credentials_flow()\n        ```\n\n        Create a synchronous engine to Snowflake using the `url` kwarg.\n        ```python\n        from prefect import flow\n        from prefect_sqlalchemy import DatabaseCredentials, AsyncDriver\n\n        @flow\n        def sqlalchemy_credentials_flow():\n            url = (\n                \"snowflake://<user_login_name>:<password>\"\n                \"@<account_identifier>/<database_name>\"\n                \"?warehouse=<warehouse_name>\"\n            )\n            sqlalchemy_credentials = DatabaseCredentials(url=url)\n            print(sqlalchemy_credentials.get_engine())\n\n        sqlalchemy_credentials_flow()\n        ```\n    \"\"\"\n    engine_kwargs = dict(\n        url=self.rendered_url,\n        connect_args=self.connect_args or {},\n        poolclass=NullPool,\n    )\n    if self._driver_is_async:\n        engine = create_async_engine(**engine_kwargs)\n    else:\n        engine = create_engine(**engine_kwargs)\n    return engine\n
        "},{"location":"integrations/prefect-sqlalchemy/credentials/#prefect_sqlalchemy.credentials.SyncDriver","title":"SyncDriver","text":"

        Bases: Enum

        Known dialects with their corresponding sync drivers.

        Attributes:

        | Name | Type | Description |
        | --- | --- | --- |
        | POSTGRESQL_PSYCOPG2 | Enum | postgresql+psycopg2 |
        | POSTGRESQL_PG8000 | Enum | postgresql+pg8000 |
        | POSTGRESQL_PSYCOPG2CFFI | Enum | postgresql+psycopg2cffi |
        | POSTGRESQL_PYPOSTGRESQL | Enum | postgresql+pypostgresql |
        | POSTGRESQL_PYGRESQL | Enum | postgresql+pygresql |
        | MYSQL_MYSQLDB | Enum | mysql+mysqldb |
        | MYSQL_PYMYSQL | Enum | mysql+pymysql |
        | MYSQL_MYSQLCONNECTOR | Enum | mysql+mysqlconnector |
        | MYSQL_CYMYSQL | Enum | mysql+cymysql |
        | MYSQL_OURSQL | Enum | mysql+oursql |
        | MYSQL_PYODBC | Enum | mysql+pyodbc |
        | SQLITE_PYSQLITE | Enum | sqlite+pysqlite |
        | SQLITE_PYSQLCIPHER | Enum | sqlite+pysqlcipher |
        | ORACLE_CX_ORACLE | Enum | oracle+cx_oracle |
        | MSSQL_PYODBC | Enum | mssql+pyodbc |
        | MSSQL_MXODBC | Enum | mssql+mxodbc |
        | MSSQL_PYMSSQL | Enum | mssql+pymssql |

        Source code in prefect_sqlalchemy/credentials.py
        class SyncDriver(Enum):\n    \"\"\"\n    Known dialects with their corresponding sync drivers.\n\n    Attributes:\n        POSTGRESQL_PSYCOPG2 (Enum): [postgresql+psycopg2](https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.psycopg2)\n        POSTGRESQL_PG8000 (Enum): [postgresql+pg8000](https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.pg8000)\n        POSTGRESQL_PSYCOPG2CFFI (Enum): [postgresql+psycopg2cffi](https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.psycopg2cffi)\n        POSTGRESQL_PYPOSTGRESQL (Enum): [postgresql+pypostgresql](https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.pypostgresql)\n        POSTGRESQL_PYGRESQL (Enum): [postgresql+pygresql](https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.pygresql)\n\n        MYSQL_MYSQLDB (Enum): [mysql+mysqldb](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.mysqldb)\n        MYSQL_PYMYSQL (Enum): [mysql+pymysql](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.pymysql)\n        MYSQL_MYSQLCONNECTOR (Enum): [mysql+mysqlconnector](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.mysqlconnector)\n        MYSQL_CYMYSQL (Enum): [mysql+cymysql](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.cymysql)\n        MYSQL_OURSQL (Enum): [mysql+oursql](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.oursql)\n        MYSQL_PYODBC (Enum): [mysql+pyodbc](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.pyodbc)\n\n        SQLITE_PYSQLITE (Enum): [sqlite+pysqlite](https://docs.sqlalchemy.org/en/14/dialects/sqlite.html#module-sqlalchemy.dialects.sqlite.pysqlite)\n        SQLITE_PYSQLCIPHER (Enum): [sqlite+pysqlcipher](https://docs.sqlalchemy.org/en/14/dialects/sqlite.html#module-sqlalchemy.dialects.sqlite.pysqlcipher)\n\n        ORACLE_CX_ORACLE (Enum): [oracle+cx_oracle](https://docs.sqlalchemy.org/en/14/dialects/oracle.html#module-sqlalchemy.dialects.oracle.cx_oracle)\n\n        MSSQL_PYODBC (Enum): [mssql+pyodbc](https://docs.sqlalchemy.org/en/14/dialects/mssql.html#module-sqlalchemy.dialects.mssql.pyodbc)\n        MSSQL_MXODBC (Enum): [mssql+mxodbc](https://docs.sqlalchemy.org/en/14/dialects/mssql.html#module-sqlalchemy.dialects.mssql.mxodbc)\n        MSSQL_PYMSSQL (Enum): [mssql+pymssql](https://docs.sqlalchemy.org/en/14/dialects/mssql.html#module-sqlalchemy.dialects.mssql.pymssql)\n    \"\"\"  # noqa\n\n    POSTGRESQL_PSYCOPG2 = \"postgresql+psycopg2\"\n    POSTGRESQL_PG8000 = \"postgresql+pg8000\"\n    POSTGRESQL_PSYCOPG2CFFI = \"postgresql+psycopg2cffi\"\n    POSTGRESQL_PYPOSTGRESQL = \"postgresql+pypostgresql\"\n    POSTGRESQL_PYGRESQL = \"postgresql+pygresql\"\n\n    MYSQL_MYSQLDB = \"mysql+mysqldb\"\n    MYSQL_PYMYSQL = \"mysql+pymysql\"\n    MYSQL_MYSQLCONNECTOR = \"mysql+mysqlconnector\"\n    MYSQL_CYMYSQL = \"mysql+cymysql\"\n    MYSQL_OURSQL = \"mysql+oursql\"\n    MYSQL_PYODBC = \"mysql+pyodbc\"\n\n    SQLITE_PYSQLITE = \"sqlite+pysqlite\"\n    SQLITE_PYSQLCIPHER = \"sqlite+pysqlcipher\"\n\n    ORACLE_CX_ORACLE = \"oracle+cx_oracle\"\n\n    MSSQL_PYODBC = \"mssql+pyodbc\"\n    MSSQL_MXODBC = \"mssql+mxodbc\"\n    MSSQL_PYMSSQL = \"mssql+pymssql\"\n
        "},{"location":"integrations/prefect-sqlalchemy/database/","title":"Database","text":""},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database","title":"prefect_sqlalchemy.database","text":"

        Tasks for querying a database with SQLAlchemy

        "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector","title":"SqlAlchemyConnector","text":"

        Bases: CredentialsBlock, DatabaseBlock

        Block used to manage authentication with a database.

        Upon instantiation, an engine is created and maintained for the life of the object until the close method is called.

        It is recommended to use this block as a context manager, which will automatically close the engine and its connections when the context is exited.

        It is also recommended to load and consume this block within a single task or flow, because passing the block across separate tasks and flows can lose the state of its connection and cursor.

        Attributes:

        | Name | Type | Description |
        | --- | --- | --- |
        | connection_info | Union[ConnectionComponents, AnyUrl] | SQLAlchemy URL to create the engine; either create from components or create from a string. |
        | connect_args | Optional[Dict[str, Any]] | The options which will be passed directly to the DBAPI's connect() method as additional keyword arguments. |
        | fetch_size | int | The number of rows to fetch at a time. |

        Example

        Load stored database credentials and use in context manager:

        from prefect_sqlalchemy import SqlAlchemyConnector\n\ndatabase_block = SqlAlchemyConnector.load(\"BLOCK_NAME\")\nwith database_block:\n    ...\n

        Create table named customers and insert values; then fetch the first 10 rows.

        from prefect_sqlalchemy import (\n    SqlAlchemyConnector, SyncDriver, ConnectionComponents\n)\n\nwith SqlAlchemyConnector(\n    connection_info=ConnectionComponents(\n        driver=SyncDriver.SQLITE_PYSQLITE,\n        database=\"prefect.db\"\n    )\n) as database:\n    database.execute(\n        \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\",\n    )\n    for i in range(1, 42):\n        database.execute(\n            \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n            parameters={\"name\": \"Marvin\", \"address\": f\"Highway {i}\"},\n        )\n    results = database.fetch_many(\n        \"SELECT * FROM customers WHERE name = :name;\",\n        parameters={\"name\": \"Marvin\"},\n        size=10\n    )\nprint(results)\n

        Source code in prefect_sqlalchemy/database.py
        class SqlAlchemyConnector(CredentialsBlock, DatabaseBlock):\n    \"\"\"\n    Block used to manage authentication with a database.\n\n    Upon instantiating, an engine is created and maintained for the life of\n    the object until the close method is called.\n\n    It is recommended to use this block as a context manager, which will automatically\n    close the engine and its connections when the context is exited.\n\n    It is also recommended that this block is loaded and consumed within a single task\n    or flow because if the block is passed across separate tasks and flows,\n    the state of the block's connection and cursor could be lost.\n\n    Attributes:\n        connection_info: SQLAlchemy URL to create the engine;\n            either create from components or create from a string.\n        connect_args: The options which will be passed directly to the\n            DBAPI's connect() method as additional keyword arguments.\n        fetch_size: The number of rows to fetch at a time.\n\n    Example:\n        Load stored database credentials and use in context manager:\n        ```python\n        from prefect_sqlalchemy import SqlAlchemyConnector\n\n        database_block = SqlAlchemyConnector.load(\"BLOCK_NAME\")\n        with database_block:\n            ...\n        ```\n\n        Create table named customers and insert values; then fetch the first 10 rows.\n        ```python\n        from prefect_sqlalchemy import (\n            SqlAlchemyConnector, SyncDriver, ConnectionComponents\n        )\n\n        with SqlAlchemyConnector(\n            connection_info=ConnectionComponents(\n                driver=SyncDriver.SQLITE_PYSQLITE,\n                database=\"prefect.db\"\n            )\n        ) as database:\n            database.execute(\n                \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\",\n            )\n            for i in range(1, 42):\n                database.execute(\n                    \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                    parameters={\"name\": \"Marvin\", \"address\": f\"Highway {i}\"},\n                )\n            results = database.fetch_many(\n                \"SELECT * FROM customers WHERE name = :name;\",\n                parameters={\"name\": \"Marvin\"},\n                size=10\n            )\n        print(results)\n        ```\n    \"\"\"\n\n    _block_type_name = \"SQLAlchemy Connector\"\n    _logo_url = \"https://cdn.sanity.io/images/3ugk85nk/production/3c7dff04f70aaf4528e184a3b028f9e40b98d68c-250x250.png\"  # noqa\n    _documentation_url = \"https://prefecthq.github.io/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector\"  # noqa\n\n    connection_info: Union[ConnectionComponents, AnyUrl] = Field(\n        default=...,\n        description=(\n            \"SQLAlchemy URL to create the engine; either create from components \"\n            \"or create from a string.\"\n        ),\n    )\n    connect_args: Optional[Dict[str, Any]] = Field(\n        default=None,\n        title=\"Additional Connection Arguments\",\n        description=(\n            \"The options which will be passed directly to the DBAPI's connect() \"\n            \"method as additional keyword arguments.\"\n        ),\n    )\n    fetch_size: int = Field(\n        default=1, description=\"The number of rows to fetch at a time.\"\n    )\n\n    _engine: Optional[Union[AsyncEngine, Engine]] = None\n    _exit_stack: Union[ExitStack, AsyncExitStack] = None\n    
_unique_results: Dict[str, CursorResult] = None\n\n    class Config:\n        \"\"\"Configuration of pydantic.\"\"\"\n\n        # Support serialization of the 'URL' type\n        arbitrary_types_allowed = True\n        json_encoders = {URL: lambda u: u.render_as_string()}\n\n    def dict(self, *args, **kwargs) -> Dict:\n        \"\"\"\n        Convert to a dictionary.\n        \"\"\"\n        # Support serialization of the 'URL' type\n        d = super().dict(*args, **kwargs)\n        d[\"_rendered_url\"] = SecretStr(\n            self._rendered_url.render_as_string(hide_password=False)\n        )\n        return d\n\n    def block_initialization(self):\n        \"\"\"\n        Initializes the engine.\n        \"\"\"\n        super().block_initialization()\n\n        if isinstance(self.connection_info, ConnectionComponents):\n            self._rendered_url = self.connection_info.create_url()\n        else:\n            # make rendered url from string\n            self._rendered_url = make_url(str(self.connection_info))\n        drivername = self._rendered_url.drivername\n\n        try:\n            AsyncDriver(drivername)\n            self._driver_is_async = True\n        except ValueError:\n            self._driver_is_async = False\n\n        if self._unique_results is None:\n            self._unique_results = {}\n\n        if self._exit_stack is None:\n            self._start_exit_stack()\n\n    def _start_exit_stack(self):\n        \"\"\"\n        Starts an AsyncExitStack or ExitStack depending on whether driver is async.\n        \"\"\"\n        self._exit_stack = AsyncExitStack() if self._driver_is_async else ExitStack()\n\n    def get_engine(\n        self, **create_engine_kwargs: Dict[str, Any]\n    ) -> Union[Engine, AsyncEngine]:\n        \"\"\"\n        Returns an authenticated engine that can be\n        used to query from databases.\n\n        If an existing engine exists, return that one.\n\n        Returns:\n            The authenticated SQLAlchemy Engine / AsyncEngine.\n\n        Examples:\n            Create an asynchronous engine to PostgreSQL using URL params.\n            ```python\n            from prefect import flow\n            from prefect_sqlalchemy import (\n                SqlAlchemyConnector, ConnectionComponents, AsyncDriver\n            )\n\n            @flow\n            def sqlalchemy_credentials_flow():\n                sqlalchemy_credentials = SqlAlchemyConnector(\n                connection_info=ConnectionComponents(\n                        driver=AsyncDriver.POSTGRESQL_ASYNCPG,\n                        username=\"prefect\",\n                        password=\"prefect_password\",\n                        database=\"postgres\"\n                    )\n                )\n                print(sqlalchemy_credentials.get_engine())\n\n            sqlalchemy_credentials_flow()\n            ```\n\n            Create a synchronous engine to Snowflake using the `url` kwarg.\n            ```python\n            from prefect import flow\n            from prefect_sqlalchemy import SqlAlchemyConnector, AsyncDriver\n\n            @flow\n            def sqlalchemy_credentials_flow():\n                url = (\n                    \"snowflake://<user_login_name>:<password>\"\n                    \"@<account_identifier>/<database_name>\"\n                    \"?warehouse=<warehouse_name>\"\n                )\n                sqlalchemy_credentials = SqlAlchemyConnector(url=url)\n                print(sqlalchemy_credentials.get_engine())\n\n            
sqlalchemy_credentials_flow()\n            ```\n        \"\"\"\n        if self._engine is not None:\n            self.logger.debug(\"Reusing existing engine.\")\n            return self._engine\n\n        engine_kwargs = dict(\n            url=self._rendered_url,\n            connect_args=self.connect_args or {},\n            **create_engine_kwargs,\n        )\n        if self._driver_is_async:\n            # no need to await here\n            engine = create_async_engine(**engine_kwargs)\n        else:\n            engine = create_engine(**engine_kwargs)\n        self.logger.info(\"Created a new engine.\")\n\n        if self._engine is None:\n            self._engine = engine\n\n        return engine\n\n    def get_connection(\n        self, begin: bool = True, **connect_kwargs: Dict[str, Any]\n    ) -> Union[Connection, AsyncConnection]:\n        \"\"\"\n        Returns a connection that can be used to query from databases.\n\n        Args:\n            begin: Whether to begin a transaction on the connection; if True, if\n                any operations fail, the entire transaction will be rolled back.\n            **connect_kwargs: Additional keyword arguments to pass to either\n                `engine.begin` or engine.connect`.\n\n        Returns:\n            The SQLAlchemy Connection / AsyncConnection.\n\n        Examples:\n            Create an synchronous connection as a context-managed transaction.\n            ```python\n            from prefect_sqlalchemy import SqlAlchemyConnector\n\n            sqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\n            with sqlalchemy_connector.get_connection(begin=False) as connection:\n                connection.execute(\"SELECT * FROM table LIMIT 1;\")\n            ```\n\n            Create an asynchronous connection as a context-managed transacation.\n            ```python\n            import asyncio\n            from prefect_sqlalchemy import SqlAlchemyConnector\n\n            sqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\n            async with sqlalchemy_connector.get_connection(begin=False) as connection:\n                asyncio.run(connection.execute(\"SELECT * FROM table LIMIT 1;\"))\n            ```\n        \"\"\"  # noqa: E501\n        engine = self.get_engine()\n        if begin:\n            connection = engine.begin(**connect_kwargs)\n        else:\n            connection = engine.connect(**connect_kwargs)\n        self.logger.info(\"Created a new connection.\")\n        return connection\n\n    def get_client(\n        self,\n        client_type: Literal[\"engine\", \"connection\"],\n        **get_client_kwargs: Dict[str, Any],\n    ) -> Union[Engine, AsyncEngine, Connection, AsyncConnection]:\n        \"\"\"\n        Returns either an engine or connection that can be used to query from databases.\n\n        Args:\n            client_type: Select from either 'engine' or 'connection'.\n            **get_client_kwargs: Additional keyword arguments to pass to\n                either `get_engine` or `get_connection`.\n\n        Returns:\n            The authenticated SQLAlchemy engine or connection.\n\n        Examples:\n            Create an engine.\n            ```python\n            from prefect_sqlalchemy import SqlalchemyConnector\n\n            sqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\n            engine = sqlalchemy_connector.get_client(client_type=\"engine\")\n            ```\n\n            Create a context managed connection.\n            ```python\n            
from prefect_sqlalchemy import SqlalchemyConnector\n\n            sqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\n            with sqlalchemy_connector.get_client(client_type=\"connection\") as conn:\n                ...\n            ```\n        \"\"\"  # noqa: E501\n        if client_type == \"engine\":\n            client = self.get_engine(**get_client_kwargs)\n        elif client_type == \"connection\":\n            client = self.get_connection(**get_client_kwargs)\n        else:\n            raise ValueError(\n                f\"{client_type!r} is not supported; choose from engine or connection.\"\n            )\n        return client\n\n    async def _async_sync_execute(\n        self,\n        connection: Union[Connection, AsyncConnection],\n        *execute_args: Tuple[Any],\n        **execute_kwargs: Dict[str, Any],\n    ) -> CursorResult:\n        \"\"\"\n        Execute the statement asynchronously or synchronously.\n        \"\"\"\n        # can't use run_sync_in_worker_thread:\n        # ProgrammingError: (sqlite3.ProgrammingError) SQLite objects created in a\n        # thread can only be used in that same thread.\n        result_set = connection.execute(*execute_args, **execute_kwargs)\n\n        if self._driver_is_async:\n            result_set = await result_set\n            await connection.commit()  # very important\n        elif SQLALCHEMY_VERSION.startswith(\"2.\"):\n            connection.commit()\n        return result_set\n\n    @asynccontextmanager\n    async def _manage_connection(self, **get_connection_kwargs: Dict[str, Any]):\n        if self._driver_is_async:\n            async with self.get_connection(**get_connection_kwargs) as connection:\n                yield connection\n        else:\n            with self.get_connection(**get_connection_kwargs) as connection:\n                yield connection\n\n    async def _get_result_set(\n        self, *execute_args: Tuple[Any], **execute_kwargs: Dict[str, Any]\n    ) -> CursorResult:\n        \"\"\"\n        Returns a new or existing result set based on whether the inputs\n        are unique.\n\n        Args:\n            *execute_args: Args to pass to execute.\n            **execute_kwargs: Keyword args to pass to execute.\n\n        Returns:\n            The result set from the operation.\n        \"\"\"  # noqa: E501\n        input_hash = hash_objects(*execute_args, **execute_kwargs)\n        assert input_hash is not None, (\n            \"We were not able to hash your inputs, \"\n            \"which resulted in an unexpected data return; \"\n            \"please open an issue with a reproducible example.\"\n        )\n\n        if input_hash not in self._unique_results.keys():\n            if self._driver_is_async:\n                connection = await self._exit_stack.enter_async_context(\n                    self.get_connection()\n                )\n            else:\n                connection = self._exit_stack.enter_context(self.get_connection())\n            result_set = await self._async_sync_execute(\n                connection, *execute_args, **execute_kwargs\n            )\n            # implicitly store the connection by storing the result set\n            # which points to its parent connection\n            self._unique_results[input_hash] = result_set\n        else:\n            result_set = self._unique_results[input_hash]\n        return result_set\n\n    def _reset_cursor_results(self) -> None:\n        \"\"\"\n        Closes all the existing cursor results.\n        \"\"\"\n       
 input_hashes = tuple(self._unique_results.keys())\n        for input_hash in input_hashes:\n            try:\n                cursor_result = self._unique_results.pop(input_hash)\n                cursor_result.close()\n            except Exception as exc:\n                self.logger.warning(\n                    f\"Failed to close connection for input hash {input_hash!r}: {exc}\"\n                )\n\n    @sync_compatible\n    async def reset_connections(self) -> None:\n        \"\"\"\n        Tries to close all opened connections and their results.\n\n        Examples:\n            Resets connections so `fetch_*` methods return new results.\n            ```python\n            from prefect_sqlalchemy import SqlAlchemyConnector\n\n            with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n                results = database.fetch_one(\"SELECT * FROM customers\")\n                database.reset_connections()\n                results = database.fetch_one(\"SELECT * FROM customers\")\n            ```\n        \"\"\"\n        if self._driver_is_async:\n            raise RuntimeError(\n                f\"{self._rendered_url.drivername} has no synchronous connections. \"\n                f\"Please use the `reset_async_connections` method instead.\"\n            )\n\n        if self._exit_stack is None:\n            self.logger.info(\"There were no connections to reset.\")\n            return\n\n        self._reset_cursor_results()\n        self._exit_stack.close()\n        self.logger.info(\"Reset opened connections and their results.\")\n\n    async def reset_async_connections(self) -> None:\n        \"\"\"\n        Tries to close all opened connections and their results.\n\n        Examples:\n            Resets connections so `fetch_*` methods return new results.\n            ```python\n            import asyncio\n            from prefect_sqlalchemy import SqlAlchemyConnector\n\n            async def example_run():\n                async with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n                    results = await database.fetch_one(\"SELECT * FROM customers\")\n                    await database.reset_async_connections()\n                    results = await database.fetch_one(\"SELECT * FROM customers\")\n\n            asyncio.run(example_run())\n            ```\n        \"\"\"\n        if not self._driver_is_async:\n            raise RuntimeError(\n                f\"{self._rendered_url.drivername} has no asynchronous connections. 
\"\n                f\"Please use the `reset_connections` method instead.\"\n            )\n\n        if self._exit_stack is None:\n            self.logger.info(\"There were no connections to reset.\")\n            return\n\n        self._reset_cursor_results()\n        await self._exit_stack.aclose()\n        self.logger.info(\"Reset opened connections and their results.\")\n\n    @sync_compatible\n    async def fetch_one(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        **execution_options: Dict[str, Any],\n    ) -> Tuple[Any]:\n        \"\"\"\n        Fetch a single result from the database.\n\n        Repeated calls using the same inputs to *any* of the fetch methods of this\n        block will skip executing the operation again, and instead,\n        return the next set of results from the previous execution,\n        until the reset_cursors method is called.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            **execution_options: Options to pass to `Connection.execution_options`.\n\n        Returns:\n            A list of tuples containing the data returned by the database,\n                where each row is a tuple and each column is a value in the tuple.\n\n        Examples:\n            Create a table, insert three rows into it, and fetch a row repeatedly.\n            ```python\n            from prefect_sqlalchemy import SqlAlchemyConnector\n\n            with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n                database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n                database.execute_many(\n                    \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                    seq_of_parameters=[\n                        {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Unknown\", \"address\": \"Space\"},\n                        {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                    ],\n                )\n                results = True\n                while results:\n                    results = database.fetch_one(\"SELECT * FROM customers\")\n                    print(results)\n            ```\n        \"\"\"  # noqa\n        result_set = await self._get_result_set(\n            text(operation), parameters, execution_options=execution_options\n        )\n        self.logger.debug(\"Preparing to fetch one row.\")\n        row = result_set.fetchone()\n        return row\n\n    @sync_compatible\n    async def fetch_many(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        size: Optional[int] = None,\n        **execution_options: Dict[str, Any],\n    ) -> List[Tuple[Any]]:\n        \"\"\"\n        Fetch a limited number of results from the database.\n\n        Repeated calls using the same inputs to *any* of the fetch methods of this\n        block will skip executing the operation again, and instead,\n        return the next set of results from the previous execution,\n        until the reset_cursors method is called.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            size: The number of results to return; if None or 0, uses the value of\n                `fetch_size` configured on the block.\n            
**execution_options: Options to pass to `Connection.execution_options`.\n\n        Returns:\n            A list of tuples containing the data returned by the database,\n                where each row is a tuple and each column is a value in the tuple.\n\n        Examples:\n            Create a table, insert three rows into it, and fetch two rows repeatedly.\n            ```python\n            from prefect_sqlalchemy import SqlAlchemyConnector\n\n            with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n                database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n                database.execute_many(\n                    \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                    seq_of_parameters=[\n                        {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Unknown\", \"address\": \"Space\"},\n                        {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                    ],\n                )\n                results = database.fetch_many(\"SELECT * FROM customers\", size=2)\n                print(results)\n                results = database.fetch_many(\"SELECT * FROM customers\", size=2)\n                print(results)\n            ```\n        \"\"\"  # noqa\n        result_set = await self._get_result_set(\n            text(operation), parameters, execution_options=execution_options\n        )\n        size = size or self.fetch_size\n        self.logger.debug(f\"Preparing to fetch {size} rows.\")\n        rows = result_set.fetchmany(size=size)\n        return rows\n\n    @sync_compatible\n    async def fetch_all(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        **execution_options: Dict[str, Any],\n    ) -> List[Tuple[Any]]:\n        \"\"\"\n        Fetch all results from the database.\n\n        Repeated calls using the same inputs to *any* of the fetch methods of this\n        block will skip executing the operation again, and instead,\n        return the next set of results from the previous execution,\n        until the reset_cursors method is called.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            **execution_options: Options to pass to `Connection.execution_options`.\n\n        Returns:\n            A list of tuples containing the data returned by the database,\n                where each row is a tuple and each column is a value in the tuple.\n\n        Examples:\n            Create a table, insert three rows into it, and fetch all where name is 'Me'.\n            ```python\n            from prefect_sqlalchemy import SqlAlchemyConnector\n\n            with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n                database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n                database.execute_many(\n                    \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                    seq_of_parameters=[\n                        {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Unknown\", \"address\": \"Space\"},\n                        {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                    ],\n                )\n                results = database.fetch_all(\"SELECT * FROM customers WHERE name = :name\", parameters={\"name\": 
\"Me\"})\n            ```\n        \"\"\"  # noqa\n        result_set = await self._get_result_set(\n            text(operation), parameters, execution_options=execution_options\n        )\n        self.logger.debug(\"Preparing to fetch all rows.\")\n        rows = result_set.fetchall()\n        return rows\n\n    @sync_compatible\n    async def execute(\n        self,\n        operation: str,\n        parameters: Optional[Dict[str, Any]] = None,\n        **execution_options: Dict[str, Any],\n    ) -> None:\n        \"\"\"\n        Executes an operation on the database. This method is intended to be used\n        for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n\n        Unlike the fetch methods, this method will always execute the operation\n        upon calling.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            parameters: The parameters for the operation.\n            **execution_options: Options to pass to `Connection.execution_options`.\n\n        Examples:\n            Create a table and insert one row into it.\n            ```python\n            from prefect_sqlalchemy import SqlAlchemyConnector\n\n            with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n                database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n                database.execute(\n                    \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                    parameters={\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n                )\n            ```\n        \"\"\"  # noqa\n        async with self._manage_connection(begin=False) as connection:\n            await self._async_sync_execute(\n                connection,\n                text(operation),\n                parameters,\n                execution_options=execution_options,\n            )\n        self.logger.info(f\"Executed the operation, {operation!r}\")\n\n    @sync_compatible\n    async def execute_many(\n        self,\n        operation: str,\n        seq_of_parameters: List[Dict[str, Any]],\n        **execution_options: Dict[str, Any],\n    ) -> None:\n        \"\"\"\n        Executes many operations on the database. 
This method is intended to be used\n        for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n\n        Unlike the fetch methods, this method will always execute the operation\n        upon calling.\n\n        Args:\n            operation: The SQL query or other operation to be executed.\n            seq_of_parameters: The sequence of parameters for the operation.\n            **execution_options: Options to pass to `Connection.execution_options`.\n\n        Examples:\n            Create a table and insert two rows into it.\n            ```python\n            from prefect_sqlalchemy import SqlAlchemyConnector\n\n            with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n                database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n                database.execute_many(\n                    \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                    seq_of_parameters=[\n                        {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                        {\"name\": \"Unknown\", \"address\": \"Space\"},\n                        {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                    ],\n                )\n            ```\n        \"\"\"  # noqa\n        async with self._manage_connection(begin=False) as connection:\n            await self._async_sync_execute(\n                connection,\n                text(operation),\n                seq_of_parameters,\n                execution_options=execution_options,\n            )\n        self.logger.info(\n            f\"Executed {len(seq_of_parameters)} operations based off {operation!r}.\"\n        )\n\n    async def __aenter__(self):\n        \"\"\"\n        Start an asynchronous database engine upon entry.\n        \"\"\"\n        if not self._driver_is_async:\n            raise RuntimeError(\n                f\"{self._rendered_url.drivername} cannot be run asynchronously. \"\n                f\"Please use the `with` syntax.\"\n            )\n        return self\n\n    async def __aexit__(self, *args):\n        \"\"\"\n        Dispose the asynchronous database engine upon exit.\n        \"\"\"\n        await self.aclose()\n\n    async def aclose(self):\n        \"\"\"\n        Closes async connections and its cursors.\n        \"\"\"\n        if not self._driver_is_async:\n            raise RuntimeError(\n                f\"{self._rendered_url.drivername} is not asynchronous. \"\n                f\"Please use the `close` method instead.\"\n            )\n        try:\n            await self.reset_async_connections()\n        finally:\n            if self._engine is not None:\n                await self._engine.dispose()\n                self._engine = None\n                self.logger.info(\"Disposed the engine.\")\n\n    def __enter__(self):\n        \"\"\"\n        Start an synchronous database engine upon entry.\n        \"\"\"\n        if self._driver_is_async:\n            raise RuntimeError(\n                f\"{self._rendered_url.drivername} cannot be run synchronously. 
\"\n                f\"Please use the `async with` syntax.\"\n            )\n        return self\n\n    def __exit__(self, *args):\n        \"\"\"\n        Dispose the synchronous database engine upon exit.\n        \"\"\"\n        self.close()\n\n    def close(self):\n        \"\"\"\n        Closes sync connections and its cursors.\n        \"\"\"\n        if self._driver_is_async:\n            raise RuntimeError(\n                f\"{self._rendered_url.drivername} is not synchronous. \"\n                f\"Please use the `aclose` method instead.\"\n            )\n\n        try:\n            self.reset_connections()\n        finally:\n            if self._engine is not None:\n                self._engine.dispose()\n                self._engine = None\n                self.logger.info(\"Disposed the engine.\")\n\n    def __getstate__(self):\n        \"\"\"Allows the block to be pickleable.\"\"\"\n        data = self.__dict__.copy()\n        data.update({k: None for k in {\"_engine\", \"_exit_stack\", \"_unique_results\"}})\n        return data\n\n    def __setstate__(self, data: dict):\n        \"\"\"Upon loading back, restart the engine and results.\"\"\"\n        self.__dict__.update(data)\n\n        if self._unique_results is None:\n            self._unique_results = {}\n\n        if self._exit_stack is None:\n            self._start_exit_stack()\n
        "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.Config","title":"Config","text":"

        Configuration of pydantic.

        Source code in prefect_sqlalchemy/database.py
        class Config:\n    \"\"\"Configuration of pydantic.\"\"\"\n\n    # Support serialization of the 'URL' type\n    arbitrary_types_allowed = True\n    json_encoders = {URL: lambda u: u.render_as_string()}\n
        "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.aclose","title":"aclose async","text":"

        Closes async connections and their cursors.

        Source code in prefect_sqlalchemy/database.py
        async def aclose(self):\n    \"\"\"\n    Closes async connections and its cursors.\n    \"\"\"\n    if not self._driver_is_async:\n        raise RuntimeError(\n            f\"{self._rendered_url.drivername} is not asynchronous. \"\n            f\"Please use the `close` method instead.\"\n        )\n    try:\n        await self.reset_async_connections()\n    finally:\n        if self._engine is not None:\n            await self._engine.dispose()\n            self._engine = None\n            self.logger.info(\"Disposed the engine.\")\n
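        For illustration, a minimal sketch of calling `aclose` directly instead of relying on the `async with` block. It assumes a block named "MY_BLOCK" already exists and is configured with an async driver:

        ```python
        import asyncio

        from prefect_sqlalchemy import SqlAlchemyConnector


        async def example_aclose():
            # Load the block and use it without the `async with` context manager,
            # so the engine must be disposed explicitly with `aclose`.
            database = await SqlAlchemyConnector.load("MY_BLOCK")
            try:
                await database.fetch_one("SELECT 1")
            finally:
                # Closes any open connections and disposes the async engine.
                await database.aclose()


        asyncio.run(example_aclose())
        ```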
        "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.block_initialization","title":"block_initialization","text":"

        Initializes the engine.

        Source code in prefect_sqlalchemy/database.py
        def block_initialization(self):\n    \"\"\"\n    Initializes the engine.\n    \"\"\"\n    super().block_initialization()\n\n    if isinstance(self.connection_info, ConnectionComponents):\n        self._rendered_url = self.connection_info.create_url()\n    else:\n        # make rendered url from string\n        self._rendered_url = make_url(str(self.connection_info))\n    drivername = self._rendered_url.drivername\n\n    try:\n        AsyncDriver(drivername)\n        self._driver_is_async = True\n    except ValueError:\n        self._driver_is_async = False\n\n    if self._unique_results is None:\n        self._unique_results = {}\n\n    if self._exit_stack is None:\n        self._start_exit_stack()\n
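        As a minimal sketch of the driver check performed above (assuming only the `AsyncDriver` enum exported by `prefect_sqlalchemy`): constructing `AsyncDriver` with a value that is not an async driver name raises `ValueError`, which is how synchronous drivers are detected.

        ```python
        from prefect_sqlalchemy import AsyncDriver


        def driver_is_async(drivername: str) -> bool:
            # Mirrors block_initialization: an unknown value raises ValueError,
            # so synchronous drivers fall through to False.
            try:
                AsyncDriver(drivername)
                return True
            except ValueError:
                return False


        print(driver_is_async("postgresql+asyncpg"))  # True  (AsyncDriver.POSTGRESQL_ASYNCPG)
        print(driver_is_async("sqlite"))              # False (synchronous driver)
        ```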
        "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.close","title":"close","text":"

        Closes sync connections and their cursors.

        Source code in prefect_sqlalchemy/database.py
        def close(self):\n    \"\"\"\n    Closes sync connections and its cursors.\n    \"\"\"\n    if self._driver_is_async:\n        raise RuntimeError(\n            f\"{self._rendered_url.drivername} is not synchronous. \"\n            f\"Please use the `aclose` method instead.\"\n        )\n\n    try:\n        self.reset_connections()\n    finally:\n        if self._engine is not None:\n            self._engine.dispose()\n            self._engine = None\n            self.logger.info(\"Disposed the engine.\")\n
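        For illustration, a minimal sketch of calling `close` directly instead of using the `with` block. It assumes a block named "MY_BLOCK" already exists and is configured with a synchronous driver:

        ```python
        from prefect_sqlalchemy import SqlAlchemyConnector

        # Load the block and use it without the `with` context manager,
        # so the engine must be disposed explicitly with `close`.
        database = SqlAlchemyConnector.load("MY_BLOCK")
        try:
            database.fetch_one("SELECT 1")
        finally:
            # Closes any open connections and disposes the sync engine.
            database.close()
        ```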
        "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.dict","title":"dict","text":"

        Convert to a dictionary.

        Source code in prefect_sqlalchemy/database.py
        def dict(self, *args, **kwargs) -> Dict:\n    \"\"\"\n    Convert to a dictionary.\n    \"\"\"\n    # Support serialization of the 'URL' type\n    d = super().dict(*args, **kwargs)\n    d[\"_rendered_url\"] = SecretStr(\n        self._rendered_url.render_as_string(hide_password=False)\n    )\n    return d\n
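        A minimal sketch of what the override above adds, assuming a block built from a plain connection URL (hypothetical credentials): the rendered URL is included as a `SecretStr`, so the password is not exposed when the dictionary is printed.

        ```python
        from prefect_sqlalchemy import SqlAlchemyConnector

        connector = SqlAlchemyConnector(
            connection_info="postgresql+psycopg2://prefect:prefect_password@localhost/postgres"
        )
        data = connector.dict()
        # The '_rendered_url' entry is a SecretStr wrapping the full URL.
        print(data["_rendered_url"])  # prints '**********' (the value stays hidden)
        ```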
        "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.execute","title":"execute async","text":"

        Executes an operation on the database. This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE.

        Unlike the fetch methods, this method will always execute the operation upon calling.

        Parameters:

        - operation (str, required): The SQL query or other operation to be executed.
        - parameters (Optional[Dict[str, Any]], default None): The parameters for the operation.
        - **execution_options (Dict[str, Any], default {}): Options to pass to Connection.execution_options.

        Examples:

        Create a table and insert one row into it.

        from prefect_sqlalchemy import SqlAlchemyConnector\n\nwith SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n    database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n    database.execute(\n        \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n        parameters={\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n    )\n

        Source code in prefect_sqlalchemy/database.py
        @sync_compatible\nasync def execute(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    **execution_options: Dict[str, Any],\n) -> None:\n    \"\"\"\n    Executes an operation on the database. This method is intended to be used\n    for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n\n    Unlike the fetch methods, this method will always execute the operation\n    upon calling.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        **execution_options: Options to pass to `Connection.execution_options`.\n\n    Examples:\n        Create a table and insert one row into it.\n        ```python\n        from prefect_sqlalchemy import SqlAlchemyConnector\n\n        with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n            database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n            database.execute(\n                \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                parameters={\"name\": \"Marvin\", \"address\": \"Highway 42\"},\n            )\n        ```\n    \"\"\"  # noqa\n    async with self._manage_connection(begin=False) as connection:\n        await self._async_sync_execute(\n            connection,\n            text(operation),\n            parameters,\n            execution_options=execution_options,\n        )\n    self.logger.info(f\"Executed the operation, {operation!r}\")\n
        "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.execute_many","title":"execute_many async","text":"

        Executes many operations on the database. This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE.

        Unlike the fetch methods, this method will always execute the operation upon calling.

        Parameters:

        - operation (str, required): The SQL query or other operation to be executed.
        - seq_of_parameters (List[Dict[str, Any]], required): The sequence of parameters for the operation.
        - **execution_options (Dict[str, Any], default {}): Options to pass to Connection.execution_options.

        Examples:

        Create a table and insert two rows into it.

        from prefect_sqlalchemy import SqlAlchemyConnector\n\nwith SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n    database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n    database.execute_many(\n        \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n        seq_of_parameters=[\n            {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n            {\"name\": \"Unknown\", \"address\": \"Space\"},\n            {\"name\": \"Me\", \"address\": \"Myway 88\"},\n        ],\n    )\n

        Source code in prefect_sqlalchemy/database.py
        @sync_compatible\nasync def execute_many(\n    self,\n    operation: str,\n    seq_of_parameters: List[Dict[str, Any]],\n    **execution_options: Dict[str, Any],\n) -> None:\n    \"\"\"\n    Executes many operations on the database. This method is intended to be used\n    for operations that do not return data, such as INSERT, UPDATE, or DELETE.\n\n    Unlike the fetch methods, this method will always execute the operation\n    upon calling.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        seq_of_parameters: The sequence of parameters for the operation.\n        **execution_options: Options to pass to `Connection.execution_options`.\n\n    Examples:\n        Create a table and insert two rows into it.\n        ```python\n        from prefect_sqlalchemy import SqlAlchemyConnector\n\n        with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n            database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n            database.execute_many(\n                \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                seq_of_parameters=[\n                    {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Unknown\", \"address\": \"Space\"},\n                    {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                ],\n            )\n        ```\n    \"\"\"  # noqa\n    async with self._manage_connection(begin=False) as connection:\n        await self._async_sync_execute(\n            connection,\n            text(operation),\n            seq_of_parameters,\n            execution_options=execution_options,\n        )\n    self.logger.info(\n        f\"Executed {len(seq_of_parameters)} operations based off {operation!r}.\"\n    )\n
        "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.fetch_all","title":"fetch_all async","text":"

        Fetch all results from the database.

        Repeated calls using the same inputs to any of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset_cursors method is called.

        Parameters:

        - operation (str, required): The SQL query or other operation to be executed.
        - parameters (Optional[Dict[str, Any]], default None): The parameters for the operation.
        - **execution_options (Dict[str, Any], default {}): Options to pass to Connection.execution_options.

        Returns:

        - List[Tuple[Any]]: A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple.

        Examples:

        Create a table, insert three rows into it, and fetch all where name is 'Me'.

        from prefect_sqlalchemy import SqlAlchemyConnector\n\nwith SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n    database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n    database.execute_many(\n        \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n        seq_of_parameters=[\n            {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n            {\"name\": \"Unknown\", \"address\": \"Space\"},\n            {\"name\": \"Me\", \"address\": \"Myway 88\"},\n        ],\n    )\n    results = database.fetch_all(\"SELECT * FROM customers WHERE name = :name\", parameters={\"name\": \"Me\"})\n

        Source code in prefect_sqlalchemy/database.py
        @sync_compatible\nasync def fetch_all(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    **execution_options: Dict[str, Any],\n) -> List[Tuple[Any]]:\n    \"\"\"\n    Fetch all results from the database.\n\n    Repeated calls using the same inputs to *any* of the fetch methods of this\n    block will skip executing the operation again, and instead,\n    return the next set of results from the previous execution,\n    until the reset_cursors method is called.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        **execution_options: Options to pass to `Connection.execution_options`.\n\n    Returns:\n        A list of tuples containing the data returned by the database,\n            where each row is a tuple and each column is a value in the tuple.\n\n    Examples:\n        Create a table, insert three rows into it, and fetch all where name is 'Me'.\n        ```python\n        from prefect_sqlalchemy import SqlAlchemyConnector\n\n        with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n            database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n            database.execute_many(\n                \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                seq_of_parameters=[\n                    {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Unknown\", \"address\": \"Space\"},\n                    {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                ],\n            )\n            results = database.fetch_all(\"SELECT * FROM customers WHERE name = :name\", parameters={\"name\": \"Me\"})\n        ```\n    \"\"\"  # noqa\n    result_set = await self._get_result_set(\n        text(operation), parameters, execution_options=execution_options\n    )\n    self.logger.debug(\"Preparing to fetch all rows.\")\n    rows = result_set.fetchall()\n    return rows\n
        "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.fetch_many","title":"fetch_many async","text":"

        Fetch a limited number of results from the database.

        Repeated calls using the same inputs to any of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset_cursors method is called.

        Parameters:

        - operation (str, required): The SQL query or other operation to be executed.
        - parameters (Optional[Dict[str, Any]], default None): The parameters for the operation.
        - size (Optional[int], default None): The number of results to return; if None or 0, uses the value of fetch_size configured on the block.
        - **execution_options (Dict[str, Any], default {}): Options to pass to Connection.execution_options.

        Returns:

        - List[Tuple[Any]]: A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple.

        Examples:

        Create a table, insert three rows into it, and fetch two rows repeatedly.

        from prefect_sqlalchemy import SqlAlchemyConnector\n\nwith SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n    database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n    database.execute_many(\n        \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n        seq_of_parameters=[\n            {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n            {\"name\": \"Unknown\", \"address\": \"Space\"},\n            {\"name\": \"Me\", \"address\": \"Myway 88\"},\n        ],\n    )\n    results = database.fetch_many(\"SELECT * FROM customers\", size=2)\n    print(results)\n    results = database.fetch_many(\"SELECT * FROM customers\", size=2)\n    print(results)\n

        Source code in prefect_sqlalchemy/database.py
        @sync_compatible\nasync def fetch_many(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    size: Optional[int] = None,\n    **execution_options: Dict[str, Any],\n) -> List[Tuple[Any]]:\n    \"\"\"\n    Fetch a limited number of results from the database.\n\n    Repeated calls using the same inputs to *any* of the fetch methods of this\n    block will skip executing the operation again, and instead,\n    return the next set of results from the previous execution,\n    until the reset_cursors method is called.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        size: The number of results to return; if None or 0, uses the value of\n            `fetch_size` configured on the block.\n        **execution_options: Options to pass to `Connection.execution_options`.\n\n    Returns:\n        A list of tuples containing the data returned by the database,\n            where each row is a tuple and each column is a value in the tuple.\n\n    Examples:\n        Create a table, insert three rows into it, and fetch two rows repeatedly.\n        ```python\n        from prefect_sqlalchemy import SqlAlchemyConnector\n\n        with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n            database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n            database.execute_many(\n                \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                seq_of_parameters=[\n                    {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Unknown\", \"address\": \"Space\"},\n                    {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                ],\n            )\n            results = database.fetch_many(\"SELECT * FROM customers\", size=2)\n            print(results)\n            results = database.fetch_many(\"SELECT * FROM customers\", size=2)\n            print(results)\n        ```\n    \"\"\"  # noqa\n    result_set = await self._get_result_set(\n        text(operation), parameters, execution_options=execution_options\n    )\n    size = size or self.fetch_size\n    self.logger.debug(f\"Preparing to fetch {size} rows.\")\n    rows = result_set.fetchmany(size=size)\n    return rows\n
        "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.fetch_one","title":"fetch_one async","text":"

        Fetch a single result from the database.

        Repeated calls using the same inputs to any of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset_cursors method is called.

        Parameters:

        - operation (str, required): The SQL query or other operation to be executed.
        - parameters (Optional[Dict[str, Any]], default None): The parameters for the operation.
        - **execution_options (Dict[str, Any], default {}): Options to pass to Connection.execution_options.

        Returns:

        - Tuple[Any]: A tuple containing the data for a single row returned by the database, where each column is a value in the tuple.

        Examples:

        Create a table, insert three rows into it, and fetch a row repeatedly.

        from prefect_sqlalchemy import SqlAlchemyConnector\n\nwith SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n    database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n    database.execute_many(\n        \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n        seq_of_parameters=[\n            {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n            {\"name\": \"Unknown\", \"address\": \"Space\"},\n            {\"name\": \"Me\", \"address\": \"Myway 88\"},\n        ],\n    )\n    results = True\n    while results:\n        results = database.fetch_one(\"SELECT * FROM customers\")\n        print(results)\n

        Source code in prefect_sqlalchemy/database.py
        @sync_compatible\nasync def fetch_one(\n    self,\n    operation: str,\n    parameters: Optional[Dict[str, Any]] = None,\n    **execution_options: Dict[str, Any],\n) -> Tuple[Any]:\n    \"\"\"\n    Fetch a single result from the database.\n\n    Repeated calls using the same inputs to *any* of the fetch methods of this\n    block will skip executing the operation again, and instead,\n    return the next set of results from the previous execution,\n    until the reset_cursors method is called.\n\n    Args:\n        operation: The SQL query or other operation to be executed.\n        parameters: The parameters for the operation.\n        **execution_options: Options to pass to `Connection.execution_options`.\n\n    Returns:\n        A list of tuples containing the data returned by the database,\n            where each row is a tuple and each column is a value in the tuple.\n\n    Examples:\n        Create a table, insert three rows into it, and fetch a row repeatedly.\n        ```python\n        from prefect_sqlalchemy import SqlAlchemyConnector\n\n        with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n            database.execute(\"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\")\n            database.execute_many(\n                \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                seq_of_parameters=[\n                    {\"name\": \"Ford\", \"address\": \"Highway 42\"},\n                    {\"name\": \"Unknown\", \"address\": \"Space\"},\n                    {\"name\": \"Me\", \"address\": \"Myway 88\"},\n                ],\n            )\n            results = True\n            while results:\n                results = database.fetch_one(\"SELECT * FROM customers\")\n                print(results)\n        ```\n    \"\"\"  # noqa\n    result_set = await self._get_result_set(\n        text(operation), parameters, execution_options=execution_options\n    )\n    self.logger.debug(\"Preparing to fetch one row.\")\n    row = result_set.fetchone()\n    return row\n
        "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.get_client","title":"get_client","text":"

        Returns either an engine or connection that can be used to query from databases.

        Parameters:

        - client_type (Literal['engine', 'connection'], required): Select from either 'engine' or 'connection'.
        - **get_client_kwargs (Dict[str, Any], default {}): Additional keyword arguments to pass to either get_engine or get_connection.

        Returns:

        - Union[Engine, AsyncEngine, Connection, AsyncConnection]: The authenticated SQLAlchemy engine or connection.

        Examples:

        Create an engine.

        from prefect_sqlalchemy import SqlAlchemyConnector\n\nsqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\nengine = sqlalchemy_connector.get_client(client_type=\"engine\")\n

        Create a context managed connection.

        from prefect_sqlalchemy import SqlAlchemyConnector\n\nsqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\nwith sqlalchemy_connector.get_client(client_type=\"connection\") as conn:\n    ...\n

        Source code in prefect_sqlalchemy/database.py
        def get_client(\n    self,\n    client_type: Literal[\"engine\", \"connection\"],\n    **get_client_kwargs: Dict[str, Any],\n) -> Union[Engine, AsyncEngine, Connection, AsyncConnection]:\n    \"\"\"\n    Returns either an engine or connection that can be used to query from databases.\n\n    Args:\n        client_type: Select from either 'engine' or 'connection'.\n        **get_client_kwargs: Additional keyword arguments to pass to\n            either `get_engine` or `get_connection`.\n\n    Returns:\n        The authenticated SQLAlchemy engine or connection.\n\n    Examples:\n        Create an engine.\n        ```python\n        from prefect_sqlalchemy import SqlalchemyConnector\n\n        sqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\n        engine = sqlalchemy_connector.get_client(client_type=\"engine\")\n        ```\n\n        Create a context managed connection.\n        ```python\n        from prefect_sqlalchemy import SqlalchemyConnector\n\n        sqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\n        with sqlalchemy_connector.get_client(client_type=\"connection\") as conn:\n            ...\n        ```\n    \"\"\"  # noqa: E501\n    if client_type == \"engine\":\n        client = self.get_engine(**get_client_kwargs)\n    elif client_type == \"connection\":\n        client = self.get_connection(**get_client_kwargs)\n    else:\n        raise ValueError(\n            f\"{client_type!r} is not supported; choose from engine or connection.\"\n        )\n    return client\n
        "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.get_connection","title":"get_connection","text":"

        Returns a connection that can be used to query from databases.

        Parameters:

        - begin (bool, default True): Whether to begin a transaction on the connection; if True and any operations fail, the entire transaction will be rolled back.
        - **connect_kwargs (Dict[str, Any], default {}): Additional keyword arguments to pass to either engine.begin or engine.connect.

        Returns:

        - Union[Connection, AsyncConnection]: The SQLAlchemy Connection / AsyncConnection.

        Examples:

        Create a synchronous connection as a context-managed transaction.

        from prefect_sqlalchemy import SqlAlchemyConnector\n\nsqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\nwith sqlalchemy_connector.get_connection(begin=False) as connection:\n    connection.execute(\"SELECT * FROM table LIMIT 1;\")\n

        Create an asynchronous connection as a context-managed transaction.

        import asyncio\nfrom prefect_sqlalchemy import SqlAlchemyConnector\n\nasync def example_connection():\n    sqlalchemy_connector = await SqlAlchemyConnector.load(\"BLOCK_NAME\")\n    async with sqlalchemy_connector.get_connection(begin=False) as connection:\n        await connection.execute(\"SELECT * FROM table LIMIT 1;\")\n\nasyncio.run(example_connection())\n

        Source code in prefect_sqlalchemy/database.py
        def get_connection(\n    self, begin: bool = True, **connect_kwargs: Dict[str, Any]\n) -> Union[Connection, AsyncConnection]:\n    \"\"\"\n    Returns a connection that can be used to query from databases.\n\n    Args:\n        begin: Whether to begin a transaction on the connection; if True, if\n            any operations fail, the entire transaction will be rolled back.\n        **connect_kwargs: Additional keyword arguments to pass to either\n            `engine.begin` or engine.connect`.\n\n    Returns:\n        The SQLAlchemy Connection / AsyncConnection.\n\n    Examples:\n        Create an synchronous connection as a context-managed transaction.\n        ```python\n        from prefect_sqlalchemy import SqlAlchemyConnector\n\n        sqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\n        with sqlalchemy_connector.get_connection(begin=False) as connection:\n            connection.execute(\"SELECT * FROM table LIMIT 1;\")\n        ```\n\n        Create an asynchronous connection as a context-managed transacation.\n        ```python\n        import asyncio\n        from prefect_sqlalchemy import SqlAlchemyConnector\n\n        sqlalchemy_connector = SqlAlchemyConnector.load(\"BLOCK_NAME\")\n        async with sqlalchemy_connector.get_connection(begin=False) as connection:\n            asyncio.run(connection.execute(\"SELECT * FROM table LIMIT 1;\"))\n        ```\n    \"\"\"  # noqa: E501\n    engine = self.get_engine()\n    if begin:\n        connection = engine.begin(**connect_kwargs)\n    else:\n        connection = engine.connect(**connect_kwargs)\n    self.logger.info(\"Created a new connection.\")\n    return connection\n
        "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.get_engine","title":"get_engine","text":"

        Returns an authenticated engine that can be used to query from databases.

        If an engine already exists, that one is returned.

        Returns:

        - Union[Engine, AsyncEngine]: The authenticated SQLAlchemy Engine / AsyncEngine.

        Examples:

        Create an asynchronous engine to PostgreSQL using URL params.

        from prefect import flow\nfrom prefect_sqlalchemy import (\n    SqlAlchemyConnector, ConnectionComponents, AsyncDriver\n)\n\n@flow\ndef sqlalchemy_credentials_flow():\n    sqlalchemy_credentials = SqlAlchemyConnector(\n    connection_info=ConnectionComponents(\n            driver=AsyncDriver.POSTGRESQL_ASYNCPG,\n            username=\"prefect\",\n            password=\"prefect_password\",\n            database=\"postgres\"\n        )\n    )\n    print(sqlalchemy_credentials.get_engine())\n\nsqlalchemy_credentials_flow()\n

        Create a synchronous engine to Snowflake using the url kwarg.

        from prefect import flow\nfrom prefect_sqlalchemy import SqlAlchemyConnector, AsyncDriver\n\n@flow\ndef sqlalchemy_credentials_flow():\n    url = (\n        \"snowflake://<user_login_name>:<password>\"\n        \"@<account_identifier>/<database_name>\"\n        \"?warehouse=<warehouse_name>\"\n    )\n    sqlalchemy_credentials = SqlAlchemyConnector(url=url)\n    print(sqlalchemy_credentials.get_engine())\n\nsqlalchemy_credentials_flow()\n

        Source code in prefect_sqlalchemy/database.py
        def get_engine(\n    self, **create_engine_kwargs: Dict[str, Any]\n) -> Union[Engine, AsyncEngine]:\n    \"\"\"\n    Returns an authenticated engine that can be\n    used to query from databases.\n\n    If an existing engine exists, return that one.\n\n    Returns:\n        The authenticated SQLAlchemy Engine / AsyncEngine.\n\n    Examples:\n        Create an asynchronous engine to PostgreSQL using URL params.\n        ```python\n        from prefect import flow\n        from prefect_sqlalchemy import (\n            SqlAlchemyConnector, ConnectionComponents, AsyncDriver\n        )\n\n        @flow\n        def sqlalchemy_credentials_flow():\n            sqlalchemy_credentials = SqlAlchemyConnector(\n            connection_info=ConnectionComponents(\n                    driver=AsyncDriver.POSTGRESQL_ASYNCPG,\n                    username=\"prefect\",\n                    password=\"prefect_password\",\n                    database=\"postgres\"\n                )\n            )\n            print(sqlalchemy_credentials.get_engine())\n\n        sqlalchemy_credentials_flow()\n        ```\n\n        Create a synchronous engine to Snowflake using the `url` kwarg.\n        ```python\n        from prefect import flow\n        from prefect_sqlalchemy import SqlAlchemyConnector, AsyncDriver\n\n        @flow\n        def sqlalchemy_credentials_flow():\n            url = (\n                \"snowflake://<user_login_name>:<password>\"\n                \"@<account_identifier>/<database_name>\"\n                \"?warehouse=<warehouse_name>\"\n            )\n            sqlalchemy_credentials = SqlAlchemyConnector(url=url)\n            print(sqlalchemy_credentials.get_engine())\n\n        sqlalchemy_credentials_flow()\n        ```\n    \"\"\"\n    if self._engine is not None:\n        self.logger.debug(\"Reusing existing engine.\")\n        return self._engine\n\n    engine_kwargs = dict(\n        url=self._rendered_url,\n        connect_args=self.connect_args or {},\n        **create_engine_kwargs,\n    )\n    if self._driver_is_async:\n        # no need to await here\n        engine = create_async_engine(**engine_kwargs)\n    else:\n        engine = create_engine(**engine_kwargs)\n    self.logger.info(\"Created a new engine.\")\n\n    if self._engine is None:\n        self._engine = engine\n\n    return engine\n
        "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.reset_async_connections","title":"reset_async_connections async","text":"

        Tries to close all opened connections and their results.

        Examples:

        Resets connections so fetch_* methods return new results.

        import asyncio\nfrom prefect_sqlalchemy import SqlAlchemyConnector\n\nasync def example_run():\n    async with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n        results = await database.fetch_one(\"SELECT * FROM customers\")\n        await database.reset_async_connections()\n        results = await database.fetch_one(\"SELECT * FROM customers\")\n\nasyncio.run(example_run())\n

        Source code in prefect_sqlalchemy/database.py
        async def reset_async_connections(self) -> None:\n    \"\"\"\n    Tries to close all opened connections and their results.\n\n    Examples:\n        Resets connections so `fetch_*` methods return new results.\n        ```python\n        import asyncio\n        from prefect_sqlalchemy import SqlAlchemyConnector\n\n        async def example_run():\n            async with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n                results = await database.fetch_one(\"SELECT * FROM customers\")\n                await database.reset_async_connections()\n                results = await database.fetch_one(\"SELECT * FROM customers\")\n\n        asyncio.run(example_run())\n        ```\n    \"\"\"\n    if not self._driver_is_async:\n        raise RuntimeError(\n            f\"{self._rendered_url.drivername} has no asynchronous connections. \"\n            f\"Please use the `reset_connections` method instead.\"\n        )\n\n    if self._exit_stack is None:\n        self.logger.info(\"There were no connections to reset.\")\n        return\n\n    self._reset_cursor_results()\n    await self._exit_stack.aclose()\n    self.logger.info(\"Reset opened connections and their results.\")\n
        "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.SqlAlchemyConnector.reset_connections","title":"reset_connections async","text":"

        Tries to close all opened connections and their results.

        Examples:

        Resets connections so fetch_* methods return new results.

        from prefect_sqlalchemy import SqlAlchemyConnector\n\nwith SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n    results = database.fetch_one(\"SELECT * FROM customers\")\n    database.reset_connections()\n    results = database.fetch_one(\"SELECT * FROM customers\")\n

        Source code in prefect_sqlalchemy/database.py
        @sync_compatible\nasync def reset_connections(self) -> None:\n    \"\"\"\n    Tries to close all opened connections and their results.\n\n    Examples:\n        Resets connections so `fetch_*` methods return new results.\n        ```python\n        from prefect_sqlalchemy import SqlAlchemyConnector\n\n        with SqlAlchemyConnector.load(\"MY_BLOCK\") as database:\n            results = database.fetch_one(\"SELECT * FROM customers\")\n            database.reset_connections()\n            results = database.fetch_one(\"SELECT * FROM customers\")\n        ```\n    \"\"\"\n    if self._driver_is_async:\n        raise RuntimeError(\n            f\"{self._rendered_url.drivername} has no synchronous connections. \"\n            f\"Please use the `reset_async_connections` method instead.\"\n        )\n\n    if self._exit_stack is None:\n        self.logger.info(\"There were no connections to reset.\")\n        return\n\n    self._reset_cursor_results()\n    self._exit_stack.close()\n    self.logger.info(\"Reset opened connections and their results.\")\n
        "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.sqlalchemy_execute","title":"sqlalchemy_execute async","text":"

        Executes a SQL DDL or DML statement; useful for creating tables and inserting rows since this task does not return any objects.

        Parameters:

        - statement (str, required): The statement to execute against the database.
        - sqlalchemy_credentials (DatabaseCredentials, required): The credentials to use to authenticate.
        - params (Optional[Union[Tuple[Any], Dict[str, Any]]], default None): The params to replace the placeholders in the query.

        Examples:

        Create table named customers and insert values.

        from prefect_sqlalchemy import DatabaseCredentials, AsyncDriver\nfrom prefect_sqlalchemy.database import sqlalchemy_execute\nfrom prefect import flow\n\n@flow\ndef sqlalchemy_execute_flow():\n    sqlalchemy_credentials = DatabaseCredentials(\n        driver=AsyncDriver.SQLITE_AIOSQLITE,\n        database=\"prefect.db\",\n    )\n    sqlalchemy_execute(\n        \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\",\n        sqlalchemy_credentials,\n    )\n    sqlalchemy_execute(\n        \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n        sqlalchemy_credentials,\n        params={\"name\": \"Marvin\", \"address\": \"Highway 42\"}\n    )\n\nsqlalchemy_execute_flow()\n

        Source code in prefect_sqlalchemy/database.py
        @task\nasync def sqlalchemy_execute(\n    statement: str,\n    sqlalchemy_credentials: \"DatabaseCredentials\",\n    params: Optional[Union[Tuple[Any], Dict[str, Any]]] = None,\n):\n    \"\"\"\n    Executes a SQL DDL or DML statement; useful for creating tables and inserting rows\n    since this task does not return any objects.\n\n    Args:\n        statement: The statement to execute against the database.\n        sqlalchemy_credentials: The credentials to use to authenticate.\n        params: The params to replace the placeholders in the query.\n\n    Examples:\n        Create table named customers and insert values.\n        ```python\n        from prefect_sqlalchemy import DatabaseCredentials, AsyncDriver\n        from prefect_sqlalchemy.database import sqlalchemy_execute\n        from prefect import flow\n\n        @flow\n        def sqlalchemy_execute_flow():\n            sqlalchemy_credentials = DatabaseCredentials(\n                driver=AsyncDriver.SQLITE_AIOSQLITE,\n                database=\"prefect.db\",\n            )\n            sqlalchemy_execute(\n                \"CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);\",\n                sqlalchemy_credentials,\n            )\n            sqlalchemy_execute(\n                \"INSERT INTO customers (name, address) VALUES (:name, :address);\",\n                sqlalchemy_credentials,\n                params={\"name\": \"Marvin\", \"address\": \"Highway 42\"}\n            )\n\n        sqlalchemy_execute_flow()\n        ```\n    \"\"\"\n    warnings.warn(\n        \"sqlalchemy_query is now deprecated and will be removed March 2023; \"\n        \"please use SqlAlchemyConnector execute_* methods instead.\",\n        DeprecationWarning,\n    )\n    # do not return anything or else results in the error:\n    # This result object does not return rows. It has been closed automatically\n    engine = sqlalchemy_credentials.get_engine()\n    async_supported = sqlalchemy_credentials._driver_is_async\n    async with _connect(engine, async_supported) as connection:\n        await _execute(connection, statement, params, async_supported)\n
        "},{"location":"integrations/prefect-sqlalchemy/database/#prefect_sqlalchemy.database.sqlalchemy_query","title":"sqlalchemy_query async","text":"

        Executes a SQL query; useful for querying data from existing tables.

        Parameters:

        Name Type Description Default query str

        The query to execute against the database.

        required sqlalchemy_credentials DatabaseCredentials

        The credentials to use to authenticate.

        required params Optional[Union[Tuple[Any], Dict[str, Any]]]

        The params to replace the placeholders in the query.

        None limit Optional[int]

The number of rows to fetch. Note that this limit is applied on the client side, i.e. passed to fetchmany. To limit on the server side, add the LIMIT clause, or the dialect's equivalent clause, like TOP, to the query.

        None

        Returns:

        Type Description List[Tuple[Any]]

        The fetched results.

        Examples:

Query a table with the name value parameterized.

        from prefect_sqlalchemy import DatabaseCredentials, AsyncDriver\nfrom prefect_sqlalchemy.database import sqlalchemy_query\nfrom prefect import flow\n\n@flow\ndef sqlalchemy_query_flow():\n    sqlalchemy_credentials = DatabaseCredentials(\n        driver=AsyncDriver.SQLITE_AIOSQLITE,\n        database=\"prefect.db\",\n    )\n    result = sqlalchemy_query(\n        \"SELECT * FROM customers WHERE name = :name;\",\n        sqlalchemy_credentials,\n        params={\"name\": \"Marvin\"},\n    )\n    return result\n\nsqlalchemy_query_flow()\n
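
The limit argument described above is applied on the client side. As a contrast, here is a minimal sketch (reusing the hypothetical credentials object from the example, and called from within a flow in the same way) of limiting on the server side with a LIMIT clause:

result = sqlalchemy_query(\n    # server-side limit expressed in SQL\n    \"SELECT * FROM customers LIMIT 10;\",\n    sqlalchemy_credentials,\n)\n# versus fetching everything and trimming on the client via fetchmany\nresult = sqlalchemy_query(\n    \"SELECT * FROM customers;\",\n    sqlalchemy_credentials,\n    limit=10,\n)\n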

        Source code in prefect_sqlalchemy/database.py
        @task\nasync def sqlalchemy_query(\n    query: str,\n    sqlalchemy_credentials: \"DatabaseCredentials\",\n    params: Optional[Union[Tuple[Any], Dict[str, Any]]] = None,\n    limit: Optional[int] = None,\n) -> List[Tuple[Any]]:\n    \"\"\"\n    Executes a SQL query; useful for querying data from existing tables.\n\n    Args:\n        query: The query to execute against the database.\n        sqlalchemy_credentials: The credentials to use to authenticate.\n        params: The params to replace the placeholders in the query.\n        limit: The number of rows to fetch. Note, this parameter is\n            executed on the client side, i.e. passed to `fetchmany`.\n            To limit on the server side, add the `LIMIT` clause, or\n            the dialect's equivalent clause, like `TOP`, to the query.\n\n    Returns:\n        The fetched results.\n\n    Examples:\n        Query postgres table with the ID value parameterized.\n        ```python\n        from prefect_sqlalchemy import DatabaseCredentials, AsyncDriver\n        from prefect_sqlalchemy.database import sqlalchemy_query\n        from prefect import flow\n\n        @flow\n        def sqlalchemy_query_flow():\n            sqlalchemy_credentials = DatabaseCredentials(\n                driver=AsyncDriver.SQLITE_AIOSQLITE,\n                database=\"prefect.db\",\n            )\n            result = sqlalchemy_query(\n                \"SELECT * FROM customers WHERE name = :name;\",\n                sqlalchemy_credentials,\n                params={\"name\": \"Marvin\"},\n            )\n            return result\n\n        sqlalchemy_query_flow()\n        ```\n    \"\"\"\n    warnings.warn(\n        \"sqlalchemy_query is now deprecated and will be removed March 2023; \"\n        \"please use SqlAlchemyConnector fetch_* methods instead.\",\n        DeprecationWarning,\n    )\n    engine = sqlalchemy_credentials.get_engine()\n    async_supported = sqlalchemy_credentials._driver_is_async\n    async with _connect(engine, async_supported) as connection:\n        result = await _execute(connection, query, params, async_supported)\n        # some databases, like sqlite, require a connection still open to fetch!\n        rows = result.fetchall() if limit is None else result.fetchmany(limit)\n    return rows\n
        "},{"location":"recipes/recipes/","title":"Prefect Recipes","text":"

        Prefect recipes are common, extensible examples for setting up Prefect in your execution environment with ready-made ingredients such as Dockerfiles, Terraform files, and GitHub Actions.

        Recipes are useful when you are looking for tutorials on how to deploy a worker, use event-driven flows, set up unit testing, and more.

        The following are Prefect recipes specific to Prefect 2. You can find a full repository of recipes at https://github.com/PrefectHQ/prefect-recipes and additional recipes at Prefect Discourse.

        ","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#recipe-catalog","title":"Recipe catalog","text":"Agent on Azure with Kubernetes

        Configure Prefect on Azure with Kubernetes, running a Prefect agent to execute deployment flow runs.

        Maintained by Prefect

        This recipe uses:

        Agent on ECS Fargate with AWS CLI

        Run a Prefect 2 agent on ECS Fargate using the AWS CLI.

        Maintained by Prefect

        This recipe uses:

        Agent on ECS Fargate with Terraform

        Run a Prefect 2 agent on ECS Fargate using Terraform.

        Maintained by Prefect

        This recipe uses:

        Agent on an Azure VM

        Set up an Azure VM and run a Prefect agent.

        Maintained by Prefect

        This recipe uses:

        Deploy a dlt pipeline on Prefect

        dlt is an open-source Python library that enables the declarative loading of data sources into well-structured tables or datasets by automatically inferring and evolving schemas.

        Maintained by Prefect

        This recipe uses:

        Flow Deployment with GitHub Actions

        Deploy a Prefect flow with storage and infrastructure blocks, update and push Docker image to container registry.

        Maintained by Prefect

        This recipe uses:

        Flow Deployment with GitHub Storage and Docker Infrastructure

Create a deployment with GitHub as storage and a Docker container as infrastructure.

        Maintained by Prefect

        This recipe uses:

        Prefect server on an AKS Cluster

        Deploy a Prefect server to an Azure Kubernetes Service (AKS) Cluster with Azure Blob Storage.

        Maintained by Prefect

        This recipe uses:

        Serverless Prefect with AWS Chalice

        Execute Prefect flows in an AWS Lambda function managed by Chalice.

        Maintained by Prefect

        This recipe uses:

        Serverless Workflows with ECSTask Blocks

        Deploy a Prefect agent to AWS ECS Fargate using GitHub Actions and ECSTask infrastructure blocks.

        Maintained by Prefect

        This recipe uses:

        ","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#contributing-recipes","title":"Contributing recipes","text":"

        We're always looking for new recipe contributions! See the Prefect Recipes repository for details on how you can add your Prefect recipe, share best practices with fellow Prefect users, and earn some swag.

        Prefect recipes provide a vital cookbook where users can find helpful code examples and, when appropriate, common steps for specific Prefect use cases.

        We love recipes from anyone who has example code that another Prefect user can benefit from (e.g. a Prefect flow that loads data into Snowflake).

        Have a blog post, Discourse article, or tutorial you\u2019d like to share as a recipe? All submissions are welcome. Clone the prefect-recipes repo, create a branch, add a link to your recipe to the README, and submit a PR. Have more questions? Read on.

        ","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#what-is-a-recipe","title":"What is a recipe?","text":"

        A Prefect recipe is like a cookbook recipe: it tells you what you need \u2014 the ingredients \u2014 and some basic steps, but assumes you can put the pieces together. Think of the Hello Fresh meal experience, but for dataflows.

        A tutorial, on the other hand, is Julia Child holding your hand through the entire cooking process: explaining each ingredient and procedure, demonstrating best practices, pointing out potential problems, and generally making sure you can\u2019t stray from the happy path to a delicious meal.

        We love Julia, and we love tutorials. But we don\u2019t expect that a Prefect recipe should handhold users through every step and possible contingency of a solution. A recipe can start from an expectation of more expertise and problem-solving ability on the part of the reader.

        To see an example of a high quality recipe, check out Serverless with AWS Chalice. This recipe includes all of the elements we like to see.

        ","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#steps-to-add-your-recipe","title":"Steps to add your recipe","text":"

        Here\u2019s our guide to creating a recipe:

        # Clone the repository\ngit clone git@github.com:PrefectHQ/prefect-recipes.git\ncd prefect-recipes\n\n# Create and checkout a new branch\n\ngit checkout -b new_recipe_branch_name\n
1. Add your recipe. Your code may simply be a copy/paste of a single Python file or an entire folder. Unsure of where to add your file or folder? Just add it under the flows-advanced/ folder. A Prefect Recipes maintainer will help you find the best place for your recipe. Just want to direct others to a project you made, whether it is a repo or a blog post? Simply link to it in the Prefect Recipes README!
        2. (Optional) Write a README.
        3. Include a dependencies file, if applicable.
        4. Push your code and make a PR to the repository.

        That\u2019s it!
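
If it helps, a minimal sketch of the final steps (the recipe folder name is a placeholder; the branch name follows the earlier snippet):

# Stage and commit your recipe (placeholder path)\ngit add flows-advanced/my-recipe/\ngit commit -m \"Add my recipe\"\n\n# Push the branch and open a PR on GitHub\ngit push origin new_recipe_branch_name\n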

        ","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#what-makes-a-good-recipe","title":"What makes a good recipe?","text":"

        Every recipe is useful, as other Prefect users can adapt the recipe to their needs. Particularly good ones help a Prefect user bake a great dataflow solution! Take a look at the prefect-recipes repo to see some examples.

        ","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#what-are-the-common-ingredients-of-a-good-recipe","title":"What are the common ingredients of a good recipe?","text":"
        • Easy to understand: Can a user easily follow your recipe? Would a README or code comments help? A simple explanation providing context on how to use the example code is useful, but not required. A good README can set a recipe apart, so we have some additional suggestions for README files below.
        • Code and more: Sometimes a use case is best represented in Python code or shell scripts. Sometimes a configuration file is the most important artifact \u2014 think of a Dockerfile or Terraform file for configuring infrastructure.
        • All-inclusive: Share as much code as you can. Even boilerplate code like Dockerfiles or Terraform or Helm files are useful. Just don\u2019t share company secrets or IP.
        • Specific: Don't worry about generalizing your code, aside from removing anything internal/secret! Other users will extrapolate their own unique solutions from your example.
        ","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#what-are-some-tips-for-a-good-recipe-readme","title":"What are some tips for a good recipe README?","text":"

        A thoughtful README can take a recipe from good to great. Here are some best practices that we\u2019ve found make for a great recipe README:

        • Provide a brief explanation of what your recipe demonstrates. This helps users determine quickly whether the recipe is relevant to their needs or answers their questions.
• List which files are included and what each is meant to do. Each explanation only needs to be a few words.
• Describe any dependencies and prerequisites (in addition to any dependencies you include in a requirements file). This includes both libraries or modules and any services your recipe depends on.
        • If steps are involved or there\u2019s an order to do things, a simple list of steps is helpful.
• Bonus: troubleshooting steps you encountered along the way, or tips on where other users might get tripped up.
        ","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#next-steps","title":"Next steps","text":"

        We hope you\u2019ll feel comfortable sharing your Prefect solutions as recipes in the prefect-recipes repo. Collaboration and knowledge sharing are defining attributes of our Prefect Community!

        Have questions about sharing or using recipes? Reach out on our active Prefect Slack Community!

        Happy engineering!

        ","tags":["recipes","best practices","examples"],"boost":2},{"location":"tutorial/","title":"Tutorial Overview","text":"

        Prefect orchestrates workflows \u2014 it simplifies the creation, scheduling, and monitoring of complex data pipelines. You define workflows as Python code and Prefect handles the rest.

        Prefect also provides error handling, retry mechanisms, and a user-friendly dashboard for monitoring. It's the easiest way to transform any Python function into a unit of work that can be observed and orchestrated.

        This tutorial provides a guided walk-through of Prefect's core concepts and instructions on how to use them.

        You will:

        1. Create a flow
        2. Add tasks to it
        3. Deploy and run the flow locally
        4. Create a work pool and run the flow on remote infrastructure

        These four topics will get most users to their first production deployment.

Advanced users who need more governance and control of their workflow infrastructure can go one step further by:

5. Using a worker-based deployment

        If you're looking for examples of more advanced operations (such as deploying on Kubernetes), check out Prefect's guides.

        Compared to the Quickstart, this tutorial is a more in-depth guide to Prefect's functionality. You will also see how to customize the Docker image where your flow runs and learn how to run flows on your own infrastructure.

        ","tags":["tutorial","getting started","basics","tasks","flows","subflows","deployments","workers","work pools"],"boost":2},{"location":"tutorial/#prerequisites","title":"Prerequisites","text":"

        Before you start, make sure you have Python installed in a virtual environment. Then install Prefect:

        pip install -U prefect\n

        See the install guide for more detailed instructions.

        To get the most out of Prefect, you need to connect to a forever-free Prefect Cloud account.

        1. Create a new account or sign in at https://app.prefect.cloud/.
        2. Use the prefect cloud login CLI command to authenticate to Prefect Cloud from your environment.
        prefect cloud login\n

        Choose Log in with a web browser and click the Authorize button in the browser window that opens.

        If you have any issues with browser-based authentication, see the Prefect Cloud docs to learn how to authenticate with a manually created API key.

        As an alternative to using Prefect Cloud, you can self-host a Prefect server instance. If you choose this option, run prefect server start to start a local Prefect server instance.
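
For example, a minimal sketch of that self-hosted setup (the API URL assumes the server's default local address):

# Start a local Prefect server (API and UI)\nprefect server start\n\n# In another terminal, point your Prefect client at the local server\nprefect config set PREFECT_API_URL=\"http://127.0.0.1:4200/api\"\n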

        ","tags":["tutorial","getting started","basics","tasks","flows","subflows","deployments","workers","work pools"],"boost":2},{"location":"tutorial/#first-steps-flows","title":"First steps: Flows","text":"

        Let's begin by learning how to create your first Prefect flow - click here to get started.

        ","tags":["tutorial","getting started","basics","tasks","flows","subflows","deployments","workers","work pools"],"boost":2},{"location":"tutorial/deployments/","title":"Deploying Flows","text":"

        Reminder to connect to Prefect Cloud or a self-hosted Prefect server instance

        Some features in this tutorial, such as scheduling, require you to be connected to a Prefect server. If using a self-hosted setup, run prefect server start to run both the webserver and UI. If using Prefect Cloud, make sure you have successfully authenticated your local environment.

        ","tags":["orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#why-deployments","title":"Why deployments?","text":"

Some of the most common reasons to use an orchestration tool such as Prefect are scheduling and event-based triggering. Up to this point, we\u2019ve demonstrated running Prefect flows as scripts, which means you have been the one triggering and managing flow runs. You can certainly continue to trigger your workflows in this way and use Prefect as a monitoring layer for other schedulers or systems, but you will miss out on many of the other benefits and features that Prefect offers.

        Deploying a flow exposes an API and UI so that you can:

        • trigger new runs, cancel active runs, pause scheduled runs, customize parameters, and more
        • remotely configure schedules and automation rules for your deployments
        • dynamically provision infrastructure using workers
        ","tags":["orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#what-is-a-deployment","title":"What is a deployment?","text":"

        Deploying a flow is the act of specifying where and how it will run. This information is encapsulated and sent to Prefect as a deployment that contains the crucial metadata needed for remote orchestration. Deployments elevate workflows from functions that you call manually to API-managed entities.

        Attributes of a deployment include (but are not limited to):

        • Flow entrypoint: path to your flow function
        • Schedule or Trigger: optional schedule or triggering rules for this deployment
        • Tags: optional text labels for organizing your deployments
        ","tags":["orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#create-a-deployment","title":"Create a deployment","text":"

        Using our get_repo_info flow from the previous sections, we can easily create a deployment for it by calling a single method on the flow object: flow.serve.

        repo_info.py
        import httpx\nfrom prefect import flow\n\n\n@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info.serve(name=\"my-first-deployment\")\n

        Running this script will do two things:

        • create a deployment called \"my-first-deployment\" for your flow in the Prefect API
        • stay running to listen for flow runs for this deployment; when a run is found, it will be asynchronously executed within a subprocess

        Deployments must be defined in static files

        Flows can be defined and run interactively, that is, within REPLs or Notebooks. Deployments, on the other hand, require that your flow definition be in a known file (which can be located on a remote filesystem in certain setups, as we'll see in the next section of the tutorial).

        Because this deployment has no schedule or triggering automation, you will need to use the UI or API to create runs for it. Let's use the CLI (in a separate terminal window) to create a run for this deployment:

        prefect deployment run 'get-repo-info/my-first-deployment'\n

        If you are watching either your terminal or your UI, you should see the newly created run execute successfully! Let's take this example further by adding a schedule and additional metadata.

        ","tags":["orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#additional-options","title":"Additional options","text":"

        The serve method on flows exposes many options for the deployment. Let's use a few of these options now:

        • cron: a keyword that allows us to set a cron string schedule for the deployment; see schedules for more advanced scheduling options
        • tags: a keyword that allows us to tag this deployment and its runs for bookkeeping and filtering purposes
        • description: a keyword that allows us to document what this deployment does; by default the description is set from the docstring of the flow function, but we did not document our flow function
        • version: a keyword that allows us to track changes to our deployment; by default a hash of the file containing the flow is used; popular options include semver tags or git commit hashes

        Let's add these options to our deployment:

        if __name__ == \"__main__\":\n    get_repo_info.serve(\n        name=\"my-first-deployment\",\n        cron=\"* * * * *\",\n        tags=[\"testing\", \"tutorial\"],\n        description=\"Given a GitHub repository, logs repository statistics for that repo.\",\n        version=\"tutorial/deployments\",\n    )\n

        When you rerun this script, you will find an updated deployment in the UI that is actively scheduling work! Stop the script in the CLI using CTRL+C and your schedule will be automatically paused.

        .serve is a long-running process

        For remotely triggered or scheduled runs to be executed, your script with flow.serve must be actively running.

        ","tags":["orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#running-multiple-deployments-at-once","title":"Running multiple deployments at once","text":"

        This method is useful for creating deployments for single flows, but what if we have two or more flows? This situation only requires a few additional method calls and imports to get up and running:

        multi_flow_deployment.py
        import time\nfrom prefect import flow, serve\n\n\n@flow\ndef slow_flow(sleep: int = 60):\n    \"Sleepy flow - sleeps the provided amount of time (in seconds).\"\n    time.sleep(sleep)\n\n\n@flow\ndef fast_flow():\n    \"Fastest flow this side of the Mississippi.\"\n    return\n\n\nif __name__ == \"__main__\":\n    slow_deploy = slow_flow.to_deployment(name=\"sleeper\", interval=45)\n    fast_deploy = fast_flow.to_deployment(name=\"fast\")\n    serve(slow_deploy, fast_deploy)\n

        A few observations:

        • the flow.to_deployment interface exposes the exact same options as flow.serve; this method produces a deployment object
        • the deployments are only registered with the API once serve(...) is called
        • when serving multiple deployments, the only requirement is that they share a Python environment; they can be executed and scheduled independently of each other

        Spend some time experimenting with this setup. A few potential next steps for exploration include:

        • pausing and unpausing the schedule for the \"sleeper\" deployment
• using the UI to submit ad-hoc runs for the \"sleeper\" deployment with different values for sleep (see the CLI sketch after this list)
        • cancelling an active run for the \"sleeper\" deployment from the UI (good luck cancelling the \"fast\" one \ud83d\ude09)
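
As one CLI route to the ad-hoc runs suggested above, a minimal sketch (assuming the deployment registered as slow-flow/sleeper and that --param accepts key=value overrides):

prefect deployment run 'slow-flow/sleeper' --param sleep=10\n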

        Hybrid execution option

Another implication of Prefect's deployment interface is that you can choose to use our hybrid execution model. Whether you use Prefect Cloud or host a Prefect server instance yourself, you can run workflows in the environments best suited to their execution. This model allows efficient use of your infrastructure resources while maintaining the privacy of your code and data. No ingress is required. For more information, read about our hybrid model.

        ","tags":["orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#next-steps","title":"Next steps","text":"

        Congratulations! You now have your first working deployment.

        Deploying flows through the serve method is a fast way to start scheduling flows with Prefect. However, if your team has more complex infrastructure requirements or you'd like to have Prefect manage flow execution, you can deploy flows to a work pool.

        Learn about work pools and how Prefect Cloud can handle infrastructure configuration for you in the next step of the tutorial.

        ","tags":["orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/flows/","title":"Flows","text":"

        Prerequisites

        This tutorial assumes you have already installed Prefect and connected to Prefect Cloud or a self-hosted server instance. See the prerequisites section of the tutorial for more details.

        ","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#what-is-a-flow","title":"What is a flow?","text":"

        Flows are like functions. They can take inputs, perform work, and return an output. In fact, you can turn any function into a Prefect flow by adding the @flow decorator. When a function becomes a flow, its behavior changes, giving it the following advantages:

        • All runs of the flow have persistent state. Transitions between states are recorded, allowing for flow execution to be observed and acted upon.
        • Input arguments can be type validated as workflow parameters.
        • Retries can be performed on failure.
        • Timeouts can be enforced to prevent unintentional, long-running workflows.
        • Metadata about flow runs, such as run time and final state, is automatically tracked.
• They can easily be elevated to a deployment, which exposes a remote API for interacting with them.
        ","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#run-your-first-flow","title":"Run your first flow","text":"

        The simplest way to get started with Prefect is to annotate a Python function with the\u00a0@flow\u00a0decorator. The script below fetches statistics about the main Prefect repository. Note that httpx is an HTTP client library and a dependency of Prefect. Let's turn this function into a Prefect flow and run the script:

        repo_info.py
        import httpx\nfrom prefect import flow\n\n\n@flow\ndef get_repo_info():\n    url = \"https://api.github.com/repos/PrefectHQ/prefect\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    print(\"PrefectHQ/prefect repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\nif __name__ == \"__main__\":\n    get_repo_info()\n

        Running this file will result in some interesting output:

        12:47:42.792 | INFO | prefect.engine - Created flow run 'ludicrous-warthog' for flow 'get-repo-info'\nPrefectHQ/prefect repository statistics \ud83e\udd13:\nStars \ud83c\udf20 : 12146\nForks \ud83c\udf74 : 1245\n12:47:45.008 | INFO | Flow run 'ludicrous-warthog' - Finished in state Completed()\n

        Flows can contain arbitrary Python

        As we can see above, flow definitions can contain arbitrary Python logic.

        ","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#parameters","title":"Parameters","text":"

        As with any Python function, you can pass arguments to a flow. The positional and keyword arguments defined on your flow function are called parameters. Prefect will automatically perform type conversion using any provided type hints. Let's make the repository a string parameter with a default value:

        repo_info.py
        import httpx\nfrom prefect import flow\n\n\n@flow\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info(repo_name=\"PrefectHQ/marvin\")\n

        We can call our flow with varying values for the repo_name parameter (including \"bad\" values):

        python repo_info.py\n

        Try passing repo_name=\"missing-org/missing-repo\".

        You should see

        HTTPStatusError: Client error '404 Not Found' for url '<https://api.github.com/repos/missing-org/missing-repo>'\n
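
For reference, one way to trigger that error is to pass the failing value in the script's entry point (a minimal sketch; only the final call changes):

if __name__ == \"__main__\":\n    # a repository that does not exist, so the GET request raises a 404\n    get_repo_info(repo_name=\"missing-org/missing-repo\")\n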

        Now navigate to your Prefect dashboard and compare the displays for these two runs.

        ","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#logging","title":"Logging","text":"

        Prefect enables you to log a variety of useful information about your flow and task runs, capturing information about your workflows for purposes such as monitoring, troubleshooting, and auditing. If we navigate to our dashboard and explore the runs we created above, we will notice that the repository statistics are not captured in the flow run logs. Let's fix that by adding some logging to our flow:

        repo_info.py
        import httpx\nfrom prefect import flow, get_run_logger\n\n\n@flow\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    logger = get_run_logger()\n    logger.info(\"%s repository statistics \ud83e\udd13:\", repo_name)\n    logger.info(f\"Stars \ud83c\udf20 : %d\", repo[\"stargazers_count\"])\n    logger.info(f\"Forks \ud83c\udf74 : %d\", repo[\"forks_count\"])\n

        Now the output looks more consistent and, more importantly, our statistics are stored in the Prefect backend and displayed in the UI for this flow run:

        12:47:42.792 | INFO    | prefect.engine - Created flow run 'ludicrous-warthog' for flow 'get-repo-info'\n12:47:43.016 | INFO    | Flow run 'ludicrous-warthog' - PrefectHQ/prefect repository statistics \ud83e\udd13:\n12:47:43.016 | INFO    | Flow run 'ludicrous-warthog' - Stars \ud83c\udf20 : 12146\n12:47:43.042 | INFO    | Flow run 'ludicrous-warthog' - Forks \ud83c\udf74 : 1245\n12:47:45.008 | INFO    | Flow run 'ludicrous-warthog' - Finished in state Completed()\n

        log_prints=True

        We could have achieved the exact same outcome by using Prefect's convenient log_prints keyword argument in the flow decorator:

        @flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    ...\n

        Logging vs Artifacts

        The example above is for educational purposes. In general, it is better to use Prefect artifacts for storing metrics and output. Logs are best for tracking progress and debugging errors.

        ","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#retries","title":"Retries","text":"

So far our script works, but unexpected errors may occur in the future. For example, the GitHub API may be temporarily unavailable or rate limited. Retries help make our flow more resilient. Let's add retry functionality to our example above:

        repo_info.py
        import httpx\nfrom prefect import flow\n\n\n@flow(retries=3, retry_delay_seconds=5, log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\nif __name__ == \"__main__\":\n    get_repo_info()\n
        ","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#next-tasks","title":"Next: Tasks","text":"

        As you have seen, adding a flow decorator converts our Python function to a resilient and observable workflow. In the next section, you'll supercharge this flow by using tasks to break down the workflow's complexity and make it more performant and observable - click here to continue.

        ","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/tasks/","title":"Tasks","text":"","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#what-is-a-task","title":"What is a task?","text":"

        A task is any Python function decorated with a @task decorator. You can think of a flow as a recipe for connecting a known sequence of tasks together. Tasks, and the dependencies between them, are displayed in the flow run graph, enabling you to break down a complex flow into something you can observe, understand and control at a more granular level. When a function becomes a task, it can be executed concurrently and its return value can be cached.

        Flows and tasks share some common features:

        • Both are defined easily using their respective decorator, which accepts settings for that flow / task (see all task settings / flow settings).
        • Each can be given a name, description and tags for organization and bookkeeping.
        • Both provide functionality for retries, timeouts, and other hooks to handle failure and completion events.

        Network calls (such as our GET requests to the GitHub API) are particularly useful as tasks because they take advantage of task features such as retries, caching, and concurrency.

        Tasks may be called from other tasks

        As of prefect 2.18.x, tasks can be called from within other tasks. This removes the need to use subflows for simple task composition.

        When to use tasks

Not every function in a flow needs to be a task. Use tasks only when their features are useful.

        Let's take our flow from before and move the request into a task:

        repo_info.py
        import httpx\nfrom prefect import flow, task\nfrom typing import Optional\n\n\n@task\ndef get_url(url: str, params: Optional[dict[str, any]] = None):\n    response = httpx.get(url, params=params)\n    response.raise_for_status()\n    return response.json()\n\n\n@flow(retries=3, retry_delay_seconds=5, log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    repo_stats = get_url(url)\n    print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo_stats['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo_stats['forks_count']}\")\n\nif __name__ == \"__main__\":\n    get_repo_info()\n

        Running the flow in your terminal will result in something like this:

        09:55:55.412 | INFO    | prefect.engine - Created flow run 'great-ammonite' for flow 'get-repo-info'\n09:55:55.499 | INFO    | Flow run 'great-ammonite' - Created task run 'get_url-0' for task 'get_url'\n09:55:55.500 | INFO    | Flow run 'great-ammonite' - Executing 'get_url-0' immediately...\n09:55:55.825 | INFO    | Task run 'get_url-0' - Finished in state Completed()\n09:55:55.827 | INFO    | Flow run 'great-ammonite' - PrefectHQ/prefect repository statistics \ud83e\udd13:\n09:55:55.827 | INFO    | Flow run 'great-ammonite' - Stars \ud83c\udf20 : 12157\n09:55:55.827 | INFO    | Flow run 'great-ammonite' - Forks \ud83c\udf74 : 1251\n09:55:55.849 | INFO    | Flow run 'great-ammonite' - Finished in state Completed('All states completed.')\n

        And you should now see this task run tracked in the UI as well.

        ","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#caching","title":"Caching","text":"

        Tasks support the ability to cache their return value. Caching allows you to efficiently reuse results of tasks that may be expensive to reproduce with every flow run, or reuse cached results if the inputs to a task have not changed.

        To enable caching, specify a cache_key_fn \u2014 a function that returns a cache key \u2014 on your task. You may optionally provide a cache_expiration timedelta indicating when the cache expires. You can define a task that is cached based on its inputs by using the Prefect task_input_hash. Let's add caching to our get_url task:

        import httpx\nfrom datetime import timedelta\nfrom prefect import flow, task, get_run_logger\nfrom prefect.tasks import task_input_hash\nfrom typing import Optional\n\n\n@task(cache_key_fn=task_input_hash, \n      cache_expiration=timedelta(hours=1),\n      )\ndef get_url(url: str, params: Optional[dict[str, any]] = None):\n    response = httpx.get(url, params=params)\n    response.raise_for_status()\n    return response.json()\n

        You can test this caching behavior by using a personal repository as your workflow parameter - give it a star, or remove a star and see how the output of this task changes (or doesn't) by running your flow multiple times.
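
For instance, a minimal sketch of exercising the cache (the repository name is a placeholder for your own repo):

if __name__ == \"__main__\":\n    # the second call reuses the cached get_url result until the one-hour\n    # cache_expiration elapses, even across separate flow runs\n    get_repo_info(repo_name=\"your-username/your-repo\")\n    get_repo_info(repo_name=\"your-username/your-repo\")\n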

        Task results and caching

        Task results are cached in memory during a flow run and persisted to your home directory by default. Prefect Cloud only stores the cache key, not the data itself.

        ","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#concurrency","title":"Concurrency","text":"

        Tasks enable concurrency, allowing you to execute multiple tasks asynchronously. This concurrency can greatly enhance the efficiency and performance of your workflows. Let's expand our script to calculate the average open issues per user. This will require making more requests:

        repo_info.py
        import httpx\nfrom datetime import timedelta\nfrom prefect import flow, task\nfrom prefect.tasks import task_input_hash\nfrom typing import Optional\n\n\n@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))\ndef get_url(url: str, params: Optional[dict[str, any]] = None):\n    response = httpx.get(url, params=params)\n    response.raise_for_status()\n    return response.json()\n\n\ndef get_open_issues(repo_name: str, open_issues_count: int, per_page: int = 100):\n    issues = []\n    pages = range(1, -(open_issues_count // -per_page) + 1)\n    for page in pages:\n        issues.append(\n            get_url(\n                f\"https://api.github.com/repos/{repo_name}/issues\",\n                params={\"page\": page, \"per_page\": per_page, \"state\": \"open\"},\n            )\n        )\n    return [i for p in issues for i in p]\n\n\n@flow(retries=3, retry_delay_seconds=5, log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    repo_stats = get_url(f\"https://api.github.com/repos/{repo_name}\")\n    issues = get_open_issues(repo_name, repo_stats[\"open_issues_count\"])\n    issues_per_user = len(issues) / len(set([i[\"user\"][\"id\"] for i in issues]))\n    print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo_stats['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo_stats['forks_count']}\")\n    print(f\"Average open issues per user \ud83d\udc8c : {issues_per_user:.2f}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info()\n

        Now we're fetching the data we need, but the requests are happening sequentially. Tasks expose a submit method that changes the execution from sequential to concurrent. In our specific example, we also need to use the result method because we are unpacking a list of return values:

        def get_open_issues(repo_name: str, open_issues_count: int, per_page: int = 100):\n    issues = []\n    pages = range(1, -(open_issues_count // -per_page) + 1)\n    for page in pages:\n        issues.append(\n            get_url.submit(\n                f\"https://api.github.com/repos/{repo_name}/issues\",\n                params={\"page\": page, \"per_page\": per_page, \"state\": \"open\"},\n            )\n        )\n    return [i for p in issues for i in p.result()]\n

        The logs show that each task is running concurrently:

        12:45:28.241 | INFO    | prefect.engine - Created flow run 'intrepid-coua' for flow 'get-repo-info'\n12:45:28.311 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-0' for task 'get_url'\n12:45:28.312 | INFO    | Flow run 'intrepid-coua' - Executing 'get_url-0' immediately...\n12:45:28.543 | INFO    | Task run 'get_url-0' - Finished in state Completed()\n12:45:28.583 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-1' for task 'get_url'\n12:45:28.584 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-1' for execution.\n12:45:28.594 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-2' for task 'get_url'\n12:45:28.594 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-2' for execution.\n12:45:28.609 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-4' for task 'get_url'\n12:45:28.610 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-4' for execution.\n12:45:28.624 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-5' for task 'get_url'\n12:45:28.625 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-5' for execution.\n12:45:28.640 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-6' for task 'get_url'\n12:45:28.641 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-6' for execution.\n12:45:28.708 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-3' for task 'get_url'\n12:45:28.708 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-3' for execution.\n12:45:29.096 | INFO    | Task run 'get_url-6' - Finished in state Completed()\n12:45:29.565 | INFO    | Task run 'get_url-2' - Finished in state Completed()\n12:45:29.721 | INFO    | Task run 'get_url-5' - Finished in state Completed()\n12:45:29.749 | INFO    | Task run 'get_url-4' - Finished in state Completed()\n12:45:29.801 | INFO    | Task run 'get_url-3' - Finished in state Completed()\n12:45:29.817 | INFO    | Task run 'get_url-1' - Finished in state Completed()\n12:45:29.820 | INFO    | Flow run 'intrepid-coua' - PrefectHQ/prefect repository statistics \ud83e\udd13:\n12:45:29.820 | INFO    | Flow run 'intrepid-coua' - Stars \ud83c\udf20 : 12159\n12:45:29.821 | INFO    | Flow run 'intrepid-coua' - Forks \ud83c\udf74 : 1251\nAverage open issues per user \ud83d\udc8c : 2.27\n12:45:29.838 | INFO    | Flow run 'intrepid-coua' - Finished in state Completed('All states completed.')\n
        ","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#subflows","title":"Subflows","text":"

        Not only can you call tasks within a flow, but you can also call other flows! Child flows are called\u00a0subflows\u00a0and allow you to efficiently manage, track, and version common multi-task logic.

        Subflows are a great way to organize your workflows and offer more visibility within the UI.

        Let's add a flow decorator to our get_open_issues function:

        @flow\ndef get_open_issues(repo_name: str, open_issues_count: int, per_page: int = 100):\n    issues = []\n    pages = range(1, -(open_issues_count // -per_page) + 1)\n    for page in pages:\n        issues.append(\n            get_url.submit(\n                f\"https://api.github.com/repos/{repo_name}/issues\",\n                params={\"page\": page, \"per_page\": per_page, \"state\": \"open\"},\n            )\n        )\n    return [i for p in issues for i in p.result()]\n

Whenever we run the parent flow, a new run is generated for the get_open_issues subflow as well. Not only is this run tracked as a subflow run of the main flow, but you can also inspect it independently in the UI!

        ","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#next-deployments","title":"Next: Deployments","text":"

        We now have a flow with tasks, subflows, retries, logging, caching, and concurrent execution. In the next section, we'll see how we can deploy this flow in order to run it on a schedule and/or external infrastructure - click here to learn how to create your first deployment.

        ","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/work-pools/","title":"Work Pools","text":"","tags":["work pools","orchestration","flow runs","deployments","tutorial"],"boost":2},{"location":"tutorial/work-pools/#why-work-pools","title":"Why work pools?","text":"

        Work pools are a bridge between the Prefect orchestration layer and infrastructure for flow runs that can be dynamically provisioned. To transition from persistent infrastructure to dynamic infrastructure, use flow.deploy instead of flow.serve.

        Choosing Between flow.deploy() and flow.serve()

        Earlier in the tutorial you used serve to deploy your flows. For many use cases, serve is sufficient to meet scheduling and orchestration needs. Work pools are optional. If infrastructure needs escalate, work pools can become a handy tool. The best part? You're not locked into one method. You can seamlessly combine approaches as needed.

        Deployment definition methods differ slightly for work pools

        When you use work-pool-based execution, you define deployments differently. Deployments for workers are configured with deploy, which requires additional configuration. A deployment created with serve cannot be used with a work pool.

        The primary reason to use work pools is for dynamic infrastructure provisioning and configuration. For example, you might have a workflow that has expensive infrastructure requirements and is run infrequently. In this case, you don't want an idle process running within that infrastructure.

        Other advantages to using work pools include:

• You can set default infrastructure configurations on your work pools that all jobs inherit and can override.
        • Platform teams can use work pools to expose opinionated (and enforced!) interfaces to the infrastructure that they oversee.
        • Work pools can be used to prioritize (or limit) flow runs through the use of work queues.

        Prefect provides several types of work pools. Prefect Cloud provides a Prefect Managed work pool option that is the simplest way to run workflows remotely. A cloud-provider account, such as AWS, is not required with a Prefect Managed work pool.

        ","tags":["work pools","orchestration","flow runs","deployments","tutorial"],"boost":2},{"location":"tutorial/work-pools/#set-up-a-work-pool","title":"Set up a work pool","text":"

        Prefect Cloud

        This tutorial uses Prefect Cloud to deploy flows to work pools. Managed execution and push work pools are available in Prefect Cloud only. If you are not using Prefect Cloud, please learn about work pools below and then proceed to the next tutorial that uses worker-based work pools.

        ","tags":["work pools","orchestration","flow runs","deployments","tutorial"],"boost":2},{"location":"tutorial/work-pools/#create-a-prefect-managed-work-pool","title":"Create a Prefect Managed work pool","text":"

        In your terminal, run the following command to set up a work pool named my-managed-pool of type prefect:managed.

        prefect work-pool create my-managed-pool --type prefect:managed \n

        Let\u2019s confirm that the work pool was successfully created by running the following command.

        prefect work-pool ls\n

        You should see your new my-managed-pool in the output list.

        Finally, let\u2019s double check that you can see this work pool in the UI.

        Navigate to the Work Pools tab and verify that you see my-managed-pool listed.

Feel free to select Edit from the three-dot menu on the right of the work pool card to view the details of your work pool.

        Work pools contain configuration that is used to provision infrastructure for flow runs. For example, you can specify additional Python packages or environment variables that should be set for all deployments that use this work pool. Note that individual deployments can override the work pool configuration.
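
As an illustration of that per-deployment override, a minimal sketch (this anticipates the deploy call shown below, and assumes the managed work pool accepts pip_packages and env job variables):

get_repo_info.from_source(\n    source=\"https://github.com/discdiver/demos.git\",\n    entrypoint=\"repo_info.py:get_repo_info\",\n).deploy(\n    name=\"my-first-deployment\",\n    work_pool_name=\"my-managed-pool\",\n    # hypothetical overrides of the work pool defaults for this deployment only\n    job_variables={\"pip_packages\": [\"pandas\"], \"env\": {\"MY_SETTING\": \"value\"}},\n)\n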

        Now that you\u2019ve set up your work pool, we can deploy a flow to this work pool. Let's deploy your tutorial flow to my-managed-pool.

        ","tags":["work pools","orchestration","flow runs","deployments","tutorial"],"boost":2},{"location":"tutorial/work-pools/#create-the-deployment","title":"Create the deployment","text":"

        From our previous steps, we now have:

        1. A flow
        2. A work pool

        Let's update our repo_info.py file to create a deployment in Prefect Cloud.

        The updates that we need to make to repo_info.py are:

        1. Change flow.serve to flow.deploy.
        2. Tell flow.deploy which work pool to deploy to.

        Here's what the updated repo_info.py looks like:

        repo_info.py
        import httpx\nfrom prefect import flow\n\n\n@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info.from_source(\n        source=\"https://github.com/discdiver/demos.git\", \n        entrypoint=\"repo_info.py:get_repo_info\"\n    ).deploy(\n        name=\"my-first-deployment\", \n        work_pool_name=\"my-managed-pool\", \n    )\n

        In the from_source method, we specify the source of our flow code.

        In the deploy method, we specify the name of our deployment and the name of the work pool that we created earlier.

        You can store your flow code in any of several types of remote storage. In this example, we use a GitHub repository, but you could use a Docker image, as you'll see in an upcoming section of the tutorial. Alternatively, you could store your flow code in cloud provider storage such as AWS S3, or within a different git-based cloud provider such as GitLab or Bitbucket.

        Note

        In the example above, we store our code in a GitHub repository. If you make changes to the flow code, you will need to push those changes to your own GitHub account and update the source argument of from_source to point to your repository.

        Now that you've updated your script, you can run it to register your deployment on Prefect Cloud:

        python repo_info.py\n

        You should see a message in the CLI that your deployment was created with instructions for how to run it.

        Successfully created/updated all deployments!\n\n                       Deployments                       \n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name                              \u2503 Status  \u2503 Details \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 get-repo-info/my-first-deployment | applied \u2502         \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\nTo schedule a run for this deployment, use the following command:\n\n        $ prefect deployment run 'get-repo-info/my-first-deployment'\n\n\nYou can also run your flow via the Prefect UI: https://app.prefect.cloud/account/\nabc/workspace/123/deployments/deployment/xyz\n

        Navigate to your Prefect Cloud UI and view your new deployment. Click the Run button to trigger a run of your deployment.

        Because this deployment was configured with a Prefect Managed work pool, Prefect Cloud will run your flow on your behalf.

        View the logs in the UI.

        ","tags":["work pools","orchestration","flow runs","deployments","tutorial"],"boost":2},{"location":"tutorial/work-pools/#schedule-a-deployment-run","title":"Schedule a deployment run","text":"

Now everything is set up for us to submit a flow run to the work pool. Go ahead and run the deployment from the CLI or the UI.

prefect deployment run 'get-repo-info/my-first-deployment'\n

        Prefect Managed work pools are a great way to get started with Prefect. See the Managed Execution guide for more details.

        Many users will find that they need more control over the infrastructure that their flows run on. Prefect Cloud's push work pools are a popular option in those cases.

        ","tags":["work pools","orchestration","flow runs","deployments","tutorial"],"boost":2},{"location":"tutorial/work-pools/#push-work-pools-with-automatic-infrastructure-provisioning","title":"Push work pools with automatic infrastructure provisioning","text":"

        Serverless push work pools scale infinitely and provide more configuration options than Prefect Managed work pools.

        Prefect provides push work pools for AWS ECS on Fargate, Azure Container Instances, Google Cloud Run, and Modal. To use a push work pool, you will need an account with sufficient permissions on the cloud provider that you want to use. We'll use GCP for this example.

        Setting up the infrastructure pieces on your cloud provider can be tricky and time-consuming. Fortunately, Prefect can automatically provision infrastructure for you and wire it all together to work with your push work pool.

        ","tags":["work pools","orchestration","flow runs","deployments","tutorial"],"boost":2},{"location":"tutorial/work-pools/#create-a-push-work-pool-with-automatic-infrastructure-provisioning","title":"Create a push work pool with automatic infrastructure provisioning","text":"

        Follow these steps in your terminal to set up a push work pool.

        Install the gcloud CLI and authenticate with your GCP project.

        If you already have the gcloud CLI installed, be sure to update to the latest version with gcloud components update.

        You will need the following permissions in your GCP project:

        • resourcemanager.projects.list
        • serviceusage.services.enable
        • iam.serviceAccounts.create
        • iam.serviceAccountKeys.create
        • resourcemanager.projects.setIamPolicy
        • artifactregistry.repositories.create

        Docker is also required to build and push images to your registry. You can install Docker here.

        Run the following command to set up a work pool named my-cloud-run-pool of type cloud-run:push.

        prefect work-pool create --type cloud-run:push --provision-infra my-cloud-run-pool \n

        Using the --provision-infra flag allows you to select a GCP project to use for your work pool and automatically configure it to be ready to execute flows via Cloud Run. In your GCP project, this command will activate the Cloud Run API, create a service account, and create a key for the service account, if they don't already exist. In your Prefect workspace, this command will create a GCPCredentials block for storing the service account key.

        Here's an abbreviated example output from running the command:

        \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Provisioning infrastructure for your work pool my-cloud-run-pool will require:                           \u2502\n\u2502                                                                                                          \u2502\n\u2502     Updates in GCP project central-kit-405415 in region us-central1                                      \u2502\n\u2502                                                                                                          \u2502\n\u2502         - Activate the Cloud Run API for your project                                                    \u2502\n\u2502         - Activate the Artifact Registry API for your project                                            \u2502\n\u2502         - Create an Artifact Registry repository named prefect-images                                    \u2502\n\u2502         - Create a service account for managing Cloud Run jobs: prefect-cloud-run                        \u2502\n\u2502             - Service account will be granted the following roles:                                       \u2502\n\u2502                 - Service Account User                                                                   \u2502\n\u2502                 - Cloud Run Developer                                                                    \u2502\n\u2502         - Create a key for service account prefect-cloud-run                                             \u2502\n\u2502                                                                                                          \u2502\n\u2502     Updates in Prefect workspace                                                                         \u2502\n\u2502                                                                                                          \u2502\n\u2502         - Create GCP credentials block my--pool-push-pool-credentials to store the service account key   \u2502\n\u2502                                                                                                          \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\nProceed with infrastructure provisioning? 
[y/n]: y\nActivating Cloud Run API\nActivating Artifact Registry API\nCreating Artifact Registry repository\nConfiguring authentication to Artifact Registry\nSetting default Docker build namespace\nCreating service account\nAssigning roles to service account\nCreating service account key\nCreating GCP credentials block\nProvisioning Infrastructure \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 100% 0:00:00\nInfrastructure successfully provisioned!\nCreated work pool 'my-cloud-run-pool'!\n

        After infrastructure provisioning completes, you will be logged into your new Artifact Registry repository and the default Docker build namespace will be set to the URL of the repository.

        While the default namespace is set, any images you build without specifying a registry or username/organization will be pushed to the repository.

        To take advantage of this functionality, you can write your deploy script like this:

        example_deploy_script.py
        from prefect import flow\nfrom prefect.deployments import DeploymentImage\n\n\n@flow(log_prints=True)\ndef my_flow(name: str = \"world\"):\n    print(f\"Hello {name}! I'm a flow running on Cloud Run!\")\n\n\nif __name__ == \"__main__\":\n    my_flow.deploy(\n        name=\"my-deployment\",\n        work_pool_name=\"my-cloud-run-pool\",\n        cron=\"0 1 * * *\",\n        image=DeploymentImage(\n            name=\"my-image:latest\",\n            platform=\"linux/amd64\",\n        )\n    )\n

        Run the script to create the deployment on the Prefect Cloud server.

        Running this script will build a Docker image with the tag <region>-docker.pkg.dev/<project>/<repository-name>/my-image:latest and push it to your repository.

        Tip

        Make sure you have Docker running locally before running this script.

        Note that you only need to include an object of the DeploymentImage class with the argument platform=\"linux/amd64\" if you're building your image on a machine with an ARM-based processor. Otherwise, you can just pass image=\"my-image:latest\" to deploy.

        Also note that the cron argument will schedule the deployment to run at 1am every day. See the schedules docs for more information on scheduling options.
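        For example, if you're building on an x86-64 machine and prefer an interval schedule over cron, a sketch of the same deploy script without DeploymentImage might look like the following; the hourly interval is purely illustrative.

        example_deploy_script.py
        from datetime import timedelta

        from prefect import flow


        @flow(log_prints=True)
        def my_flow(name: str = "world"):
            print(f"Hello {name}! I'm a flow running on Cloud Run!")


        if __name__ == "__main__":
            my_flow.deploy(
                name="my-deployment",
                work_pool_name="my-cloud-run-pool",
                # Run every hour instead of the cron schedule shown above
                interval=timedelta(hours=1),
                # A plain image tag is enough when you don't need to set a platform
                image="my-image:latest",
            )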

        See the Push Work Pool guide for more details and example commands for each cloud provider.

        ","tags":["work pools","orchestration","flow runs","deployments","tutorial"],"boost":2},{"location":"tutorial/work-pools/#next-step","title":"Next step","text":"

        Congratulations! You've learned how to deploy flows to work pools. If these work pool options meet all of your needs, we encourage you to go deeper with the concepts docs or explore our how-to guides to see examples of particular Prefect use cases.

        However, if you need more control over your infrastructure, want to run your workflows in Kubernetes, or are running a self-hosted Prefect server instance, we encourage you to see the next section of the tutorial. There you'll learn how to use work pools that rely on a worker and see how to customize Docker images for container-based infrastructure.

        ","tags":["work pools","orchestration","flow runs","deployments","tutorial"],"boost":2},{"location":"tutorial/workers/","title":"Workers","text":"","tags":["workers","orchestration","flow runs","deployments","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#prerequisites","title":"Prerequisites","text":"

        Docker installed and running on your machine.

        ","tags":["workers","orchestration","flow runs","deployments","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#why-workers","title":"Why workers","text":"

        In the previous section of the tutorial, you learned how work pools are a bridge between the Prefect orchestration layer and infrastructure for flow runs that can be dynamically provisioned. You saw how you can transition from persistent infrastructure to dynamic infrastructure by using flow.deploy instead of flow.serve.

        Work pools that rely on client-side workers take this a step further by enabling you to run workflows in your own Docker containers, Kubernetes clusters, and serverless environments such as AWS ECS, Azure Container Instances, and GCP Cloud Run.

        The architecture of a worker-based work pool deployment can be summarized with the following diagram:

        graph TD\n    subgraph your_infra[\"Your Execution Environment\"]\n        worker[\"Worker\"]\n    subgraph flow_run_infra[Flow Run Infra]\n     flow_run_a((\"Flow Run A\"))\n    end\n    subgraph flow_run_infra_2[Flow Run Infra]\n     flow_run_b((\"Flow Run B\"))\n    end      \n    end\n\n    subgraph api[\"Prefect API\"]\n    Deployment --> |assigned to| work_pool\n        work_pool([\"Work Pool\"])\n    end\n\n    worker --> |polls| work_pool\n    worker --> |creates| flow_run_infra\n    worker --> |creates| flow_run_infra_2

        Notice above that the worker is in charge of provisioning the flow run infrastructure. In the context of this tutorial, that flow run infrastructure is an ephemeral Docker container to host each flow run. Different worker types create different types of flow run infrastructure.

        Now that we\u2019ve reviewed the concepts of a work pool and worker, let\u2019s create them so that you can deploy your tutorial flow, and execute it later using the Prefect API.

        ","tags":["workers","orchestration","flow runs","deployments","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#set-up-a-work-pool-and-worker","title":"Set up a work pool and worker","text":"

        For this tutorial you will create a Docker type work pool via the CLI.

        Using the Docker work pool type means that all work sent to this work pool will run within a dedicated Docker container using a Docker client available to the worker.

        Other work pool types

        There are work pool types for serverless computing environments such as AWS ECS, Azure Container Instances, Google Cloud Run, and Vertex AI. Kubernetes is also a popular work pool type.

        These options are expanded upon in various How-to Guides.

        ","tags":["workers","orchestration","flow runs","deployments","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#create-a-work-pool","title":"Create a work pool","text":"

        In your terminal, run the following command to set up a Docker type work pool.

        prefect work-pool create --type docker my-docker-pool\n

        Let\u2019s confirm that the work pool was successfully created by running the following command in the same terminal.

        prefect work-pool ls\n

        You should see your new my-docker-pool listed in the output.

        Finally, let\u2019s double check that you can see this work pool in your Prefect UI.

        Navigate to the Work Pools tab and verify that you see my-docker-pool listed.

        When you click into my-docker-pool you should see a red status icon signifying that this work pool is not ready.

        To make the work pool ready, you need to start a worker.

        ","tags":["workers","orchestration","flow runs","deployments","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#start-a-worker","title":"Start a worker","text":"

        A worker is a lightweight polling process that kicks off scheduled flow runs on a specific type of infrastructure (such as Docker). To start a worker on your local machine, open a new terminal and confirm that your virtual environment has prefect installed.

        Run the following command in this new terminal to start the worker:

        prefect worker start --pool my-docker-pool\n

        You should see the worker start. It's now polling the Prefect API to check for any scheduled flow runs it should pick up and then submit for execution. You\u2019ll see your new worker listed in the UI under the Workers tab of the Work Pools page with a recent last polled date.

        You should also be able to see a Ready status indicator on your work pool - progress!

        You will need to keep this terminal session active for the worker to continue to pick up jobs. Since you are running this worker locally, the worker will terminate if you close the terminal. Therefore, in a production setting this worker should run as a daemonized or managed process.

        Now that you\u2019ve set up your work pool and worker, we have what we need to kick off and execute flow runs of flows deployed to this work pool. Let's deploy your tutorial flow to my-docker-pool.

        ","tags":["workers","orchestration","flow runs","deployments","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#create-the-deployment","title":"Create the deployment","text":"

        From our previous steps, we now have:

        1. A flow
        2. A work pool
        3. A worker

        Now it\u2019s time to put it all together. We're going to update our repo_info.py file to build a Docker image and update our deployment so our worker can execute it.

        The updates that you need to make to repo_info.py are:

        1. Change flow.serve to flow.deploy.
        2. Tell flow.deploy which work pool to deploy to.
        3. Tell flow.deploy the name to use for the Docker image that will be built.

        Here's what the updated repo_info.py looks like:

        repo_info.py
        import httpx\nfrom prefect import flow\n\n\n@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info.deploy(\n        name=\"my-first-deployment\", \n        work_pool_name=\"my-docker-pool\", \n        image=\"my-first-deployment-image:tutorial\",\n        push=False\n    )\n

        Why the push=False?

        For this tutorial, your Docker worker is running on your machine, so we don't need to push the image built by flow.deploy to a registry. When your worker is running on a remote machine, you will need to push the image to a registry that the worker can access.

        Remove the push=False argument, include your registry name, and ensure you've authenticated with the Docker CLI to push the image to a registry.
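        For instance, a deploy script for a remote worker might look like the sketch below, where docker.io/your-username is a placeholder for your own registry and namespace and the flow body is abbreviated from the example above. For this tutorial, stick with the push=False version shown earlier.

        import httpx
        from prefect import flow


        @flow(log_prints=True)
        def get_repo_info(repo_name: str = "PrefectHQ/prefect"):
            repo = httpx.get(f"https://api.github.com/repos/{repo_name}").json()
            print(f"{repo_name} has {repo['stargazers_count']} stars")


        if __name__ == "__main__":
            get_repo_info.deploy(
                name="my-first-deployment",
                work_pool_name="my-docker-pool",
                # Prefix the image with your registry and namespace; with push left at its
                # default of True, flow.deploy pushes the built image to that registry.
                image="docker.io/your-username/my-first-deployment-image:tutorial",
            )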

        Now that you've updated your script, you can run it to deploy your flow to the work pool:

        python repo_info.py\n

        Prefect will build a custom Docker image containing your workflow code that the worker can use to dynamically spawn Docker containers whenever this workflow needs to run.

        What Dockerfile?

        In this example, Prefect generates a Dockerfile for you that will build an image based on one of Prefect's published images. The generated Dockerfile will copy the current directory into the Docker image and install any dependencies listed in a requirements.txt file.

        If you want to use a custom Dockerfile, you can specify the path to the Dockerfile using the DeploymentImage class:

        repo_info.py
        import httpx\nfrom prefect import flow\nfrom prefect.deployments import DeploymentImage\n\n\n@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info.deploy(\n        name=\"my-first-deployment\", \n        work_pool_name=\"my-docker-pool\", \n        image=DeploymentImage(\n            name=\"my-first-deployment-image\",\n            tag=\"tutorial\",\n            dockerfile=\"Dockerfile\"\n        ),\n        push=False\n    )\n
        ","tags":["workers","orchestration","flow runs","deployments","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#modify-the-deployment","title":"Modify the deployment","text":"

        If you need to make updates to your deployment, you can do so by modifying your script and rerunning it. You'll need to make one update to specify a value for job_variables to ensure your Docker worker can successfully execute scheduled runs for this flow. See the example below.

        The job_variables section allows you to fine-tune the infrastructure settings for a specific deployment. These values override default values in the specified work pool's base job template.

        When testing images locally without pushing them to a registry (to avoid potential errors like docker.errors.NotFound), it's recommended to include an image_pull_policy job_variable set to Never. However, for production workflows, push your images to a remote registry for better reliability and accessibility.

        Here's how you can quickly set the image_pull_policy to be Never for this tutorial deployment without affecting the default value set on your work pool:

        repo_info.py
        import httpx\nfrom prefect import flow\n\n\n@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    print(f\"{repo_name} repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info.deploy(\n        name=\"my-first-deployment\", \n        work_pool_name=\"my-docker-pool\", \n        job_variables={\"image_pull_policy\": \"Never\"},\n        image=\"my-first-deployment-image:tutorial\",\n        push=False\n    )\n

        To register this update to your deployment's parameters with Prefect's API, run:

        python repo_info.py\n

        Now everything is set for you to submit a flow run to the work pool:

        prefect deployment run 'get-repo-info/my-first-deployment'\n

        Common Pitfall

        • Store and run your deploy scripts at the root of your repo; otherwise, the built Docker image may be missing files that it needs to execute!

        Did you know?

        A Prefect flow can have more than one deployment. This pattern can be useful if you want your flow to run in different execution environments.
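        As a rough sketch of that pattern, you could register two deployments of the same flow against different work pools; the Kubernetes work pool name below is hypothetical, and in a real setup you would push the image to a registry that the remote infrastructure can pull from.

        import httpx
        from prefect import flow


        @flow(log_prints=True)
        def get_repo_info(repo_name: str = "PrefectHQ/prefect"):
            repo = httpx.get(f"https://api.github.com/repos/{repo_name}").json()
            print(f"{repo_name} has {repo['stargazers_count']} stars")


        if __name__ == "__main__":
            # One deployment executes in local Docker containers through the pool created in this tutorial...
            get_repo_info.deploy(
                name="docker-deployment",
                work_pool_name="my-docker-pool",
                image="my-first-deployment-image:tutorial",
                push=False,
            )
            # ...while a second deployment of the same flow targets a different environment
            # (hypothetical Kubernetes work pool; it would need an image it can pull remotely).
            get_repo_info.deploy(
                name="kubernetes-deployment",
                work_pool_name="my-kubernetes-pool",
                image="my-first-deployment-image:tutorial",
                push=False,
            )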

        ","tags":["workers","orchestration","flow runs","deployments","triggers","tutorial"],"boost":2},{"location":"tutorial/workers/#next-steps","title":"Next steps","text":"
        • Go deeper with deployments and learn about configuring deployments in YAML with prefect.yaml.
        • Concepts contain deep dives into Prefect components.
        • Guides provide step-by-step recipes for common Prefect operations including:
        • Deploying flows on Kubernetes
        • Deploying flows in Docker
        • Deploying flows on serverless infrastructure
        • Daemonizing workers

        Happy building!

        ","tags":["workers","orchestration","flow runs","deployments","triggers","tutorial"],"boost":2}]} \ No newline at end of file diff --git a/versions/unreleased/sitemap.xml.gz b/versions/unreleased/sitemap.xml.gz index 9758e0976052999eea7a785a6778dc41aeca17f0..d709bc13540ee3733e4e065b40143c50a04eb542 100644 GIT binary patch delta 15 WcmZn`ZWd;f@8;m(QQgQ^%LxD*{sUV8 delta 15 WcmZn`ZWd;f@8;k*DY22QmJ